
Red Team Exercises Using Real Security Alerts

Many traditional red team exercises are ineffective for a single reason: they focus on testing attack techniques instead of detection effectiveness.
 
Building the exercise around actual alerts reverses the approach:
1. The Red Team emulates realistic attack behaviors
2. The Blue Team responds to alerts it already trusts
3. Detection gaps become immediately visible, turning the exercise into detection engineering rather than theatre.
 
What “Using Actual Alerts” Means
Using actual alerts does NOT mean:
1. Performing destructive attacks
2. Intentionally bypassing controls
 
It does mean:
1. Deliberately triggering existing alerts
2. Measuring the quality and timing of those alerts
3. Evaluating how analysts respond to them
The goal is a learning experience, not a shock.
 
Types of Alerts Typically Used in Red Team Exercises
Real-world red team exercises typically focus on alerts for:
1. Unusual authentication attempts
2. Suspicious process execution
3. Privilege escalation indicators
4. Lateral movement signals
5. Unusual data access activity
Most of these alerts already exist; the exercise evaluates how effective they are.
 
Practical Exercise 1: Alert for Credential Abuse
Scenario
Determine whether the alerts for credential misuse are being triggered and escalated correctly.
Controlled Activity
Perform the following:
1. Use a dedicated test user account.
2. Authenticate from:
a) A new device
b) A new IP address range
c) An unusual time of day
Record each test sign-in as you go (a minimal logging sketch follows this list).
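 
A minimal sketch of how the red team might record each controlled sign-in so it can later be correlated with alert timestamps. The file name, field names, and sample values are illustrative assumptions, not part of any specific tool:
 
import json
from datetime import datetime, timezone

# Illustrative log of controlled test sign-ins (file name and fields are assumptions)
TEST_LOG = "credential_abuse_test_log.json"

def record_test_signin(account, source_ip, device, note=""):
    # Append one controlled sign-in attempt with a UTC timestamp
    entry = {
        "account": account,
        "source_ip": source_ip,
        "device": device,
        "note": note,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }
    try:
        with open(TEST_LOG) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    log.append(entry)
    with open(TEST_LOG, "w") as f:
        json.dump(log, f, indent=2)
    return entry

# Example: record a sign-in from a new device and IP range at an unusual hour
record_test_signin("testuser01", "203.0.113.45", "unmanaged-laptop", "out-of-hours sign-in")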
 
Expected Alert Messages
1. “Unusual Sign-in Activity”
2. “Impossible Travel”
3. MFA Challenge Initiated (Multifactor Authentication)
 
What the Team Examines
1. Elapsed time until the alert fires (a time-to-alert sketch follows this list)
2. Alert clarity
3. Actions the analyst takes in response
4. How false positives are handled
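 
One way to measure time-to-alert, as a sketch: compare the recorded sign-in timestamp with the alert creation time copied from the SIEM console. The timestamps below are placeholders:
 
from datetime import datetime

def minutes_to_alert(signin_time_iso, alert_time_iso):
    # Minutes between the controlled sign-in and the resulting alert
    signin = datetime.fromisoformat(signin_time_iso)
    alert = datetime.fromisoformat(alert_time_iso)
    return (alert - signin).total_seconds() / 60

# Placeholder values: sign-in recorded by the red team, alert time from the SIEM
delay = minutes_to_alert("2025-01-10T02:14:00+00:00", "2025-01-10T02:31:00+00:00")
print(f"Time to alert: {delay:.0f} minutes")  # 17 minutes in this example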
 
Practical Exercise 2: Detecting Process Execution
Scenario
Confirm the endpoint alert mechanism is functioning as expected when suspicious process activity occurs.
Approved/Safe Activity
Perform the following activities:
1. Run a known test program or a benign script.
2. Launch a signed, legitimate application in the way attackers commonly abuse such binaries (educational example: Start-Process notepad.exe -ArgumentList "test.txt") and monitor for the signals below (a minimal Python sketch follows this list):
a) Parent/child process alerts
b) Command-line logging
c) Behavioral alerting
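 
A minimal Python sketch, assuming a Windows test endpoint with EDR coverage: it launches Notepad with a harmless argument so the sensor records a parent/child process event (python.exe spawning notepad.exe) and the full command line. The argument is illustrative:
 
import subprocess
import sys

# Launch a benign signed application with a harmless argument so the EDR
# records a process-creation event, the parent/child relationship, and the command line
if sys.platform == "win32":
    proc = subprocess.Popen(["notepad.exe", "test.txt"])
    print(f"Spawned notepad.exe (PID {proc.pid}); check the EDR console for the event")
else:
    print("Run this on the monitored Windows test endpoint")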
 
Key Questions
1. Did the alert trigger?
2. Was the severity level correct?
3. Could the analyst understand the context?
 
Practical Exercise 3: Alert Saturation vs. Signal
Scenario
Test how well alert prioritization works when alert volume is high.
Actions
1. Trigger several low-risk alerts
2. Trigger a single high-risk alert
 
Example concept logic:
alerts = generate_alerts(low=20, high=1)  # hypothetical helper: 20 low-risk alerts, 1 high-risk
for alert in alerts:
    process_alert(alert)                  # hypothetical triage step
 
Lessons Learned from This Activity
1. Was the critical alert overlooked because it was buried under lower-risk alerts?
2. Did automation help reduce the noise?
3. Did escalation processes work as the business requires?
This exercise exposes an organization's susceptibility to alert fatigue; the sketch below shows one way to check whether severity-first triage would have surfaced the critical alert.
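 
A small, self-contained sketch of the saturation scenario, assuming a simple in-memory alert model (the Alert class, severity ranks, and generate_alerts helper are all illustrative): it mixes 20 low-risk alerts with 1 high-risk alert and compares where the critical alert sits under first-in-first-out handling versus severity-first triage:
 
import random
from dataclasses import dataclass

SEVERITY_RANK = {"Low": 1, "Medium": 2, "High": 3}

@dataclass
class Alert:
    name: str
    severity: str

def generate_alerts(low=20, high=1):
    # Illustrative helper: a shuffled mix of low- and high-severity alerts
    alerts = [Alert(f"low-{i}", "Low") for i in range(low)]
    alerts += [Alert(f"high-{i}", "High") for i in range(high)]
    random.shuffle(alerts)
    return alerts

alerts = generate_alerts(low=20, high=1)

# FIFO handling: the critical alert's position depends purely on arrival order
fifo_position = next(i for i, a in enumerate(alerts, 1) if a.severity == "High")

# Severity-first triage: sort the queue so the critical alert is handled first
triaged = sorted(alerts, key=lambda a: SEVERITY_RANK[a.severity], reverse=True)
triage_position = next(i for i, a in enumerate(triaged, 1) if a.severity == "High")

print(f"High-severity alert position: FIFO={fifo_position}, severity-first={triage_position}")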
 
Tools Commonly Used in These Exercises
Detection & Visibility
1. Microsoft Defender for Endpoint
2. CrowdStrike Falcon
3. Elastic Security
4. Splunk Enterprise Security
 
Simulation & Validation
1. Atomic Red Team (safe, modular tests)
2. Caldera (controlled adversary emulation)
3. PurpleSharp (Windows-focused simulations)
These tools map actions to real alerts, not exploits.
 
Pipeline to Review & Respond to Alerts
1. SOAR platforms (XSOAR, Phantom)
2. Case Management Systems
3. Ticketing Systems
 
These exercises test people and procedures, not just the technology.
 
Sample Code Used to Validate Alerts
Conceptual validation of alerts during testing:
# Assumes each alert exposes a severity label and a response time in minutes
SEVERITY_RANK = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def validate_alert(alert):
    # High or Critical alerts handled within 30 minutes count as effective
    if SEVERITY_RANK[alert.severity] >= SEVERITY_RANK["High"] and alert.response_time < 30:
        return "Effective"
    elif alert.severity == "Low":
        return "Needs tuning"
    else:
        return "Review required"
 
This kind of check helps teams:
1. Score alert quality consistently
2. Track progress over time (a short usage example follows)
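 
A short usage example, assuming the validate_alert function above and a simple illustrative Alert class; the sample alerts and response times are made up:
 
from collections import Counter
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: str        # "Low", "Medium", "High" or "Critical"
    response_time: int   # minutes from alert creation to analyst action

samples = [
    Alert("Impossible Travel", "High", 22),
    Alert("Unusual Sign-in Activity", "Medium", 95),
    Alert("Test Process Launch", "Low", 240),
]

results = Counter(validate_alert(a) for a in samples)
print(dict(results))  # e.g. {'Effective': 1, 'Review required': 1, 'Needs tuning': 1}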
 
Common Mistakes Teams Make
1. Using unrealistic attack paths
2. Testing only red team success
3. Ignoring analyst experience
4. Not documenting lessons learned
An exercise without feedback is just noise.
 
Effective teams run these exercises by:
1. Using pre-approved exercise scenarios
2. Notifying leadership before execution, but not the analysts
3. Measuring detection quality
4. Tuning alerts immediately afterwards
5. Repeating on a quarterly basis
This approach drives continuous improvement rather than panic.
 
Key Takeaways
1. Real alerts make red teaming meaningful
2. Detection quality matters more than attack success
3. Exercises should improve workflows, not surprise people
4. Metrics beat opinions
5. Red and blue teams win together
If an alert can’t guide action,
it’s not helping anyone.

 
