Red Teaming and MITRE ATT&CK
Red teams simulate real-world adversaries to test organizational defenses. MITRE ATT&CK provides a framework for ensuring those exercises comprehensively test defenses against realistic adversary behavior.
Rather than launching arbitrary attacks, red teams that use ATT&CK structure exercises around how real adversaries operate. This alignment ensures defenses are tested against techniques actually used in the wild.
Red teaming with ATT&CK answers critical questions: Which adversary tactics can our defenders detect? Which techniques would we miss? How well do our detections and prevention measures actually work?
Planning Red Team Exercises with ATT&CK
Structure red team exercises using ATT&CK tactics and techniques:
Step 1: Define Exercise Scope
Determine:
- Which tactics will you exercise? (All 14 enterprise tactics or a subset?)
- Which threat groups will you simulate? (Specific APT groups, general criminal behavior?)
- What are your primary objectives? (Detect evasion, validate persistence controls, test incident response?)
- What systems/applications are in scope?
This scoping prevents exercises from becoming overwhelming while ensuring they remain realistic.
Step 2: Create Attack Scenarios
Map your exercise scenarios to specific ATT&CK techniques:
Scenario: "Initial breach via email phishing, credential theft, and persistence"
Maps to:
- T1566.002 (Phishing: Spearphishing Link)
- T1566.001 (Phishing: Spearphishing Attachment)
- T1110.001 (Brute Force: Password Guessing)
- T1547.001 (Boot or Logon Autostart Execution: Registry Run Keys)
This explicit mapping ensures the exercise covers intended techniques.
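If you want this mapping in machine-readable form from the start, it can be captured as simple structured data; the PowerShell sketch below does that with illustrative field names (they are a working convention, not part of ATT&CK itself):

```powershell
# Illustrative sketch: capture the scenario-to-technique mapping as structured
# data so it can drive execution tracking and reporting later in the exercise.
$scenario = @(
    [pscustomobject]@{ Step = 'Initial breach';   TechniqueId = 'T1566.002'; Name = 'Phishing: Spearphishing Link' }
    [pscustomobject]@{ Step = 'Initial breach';   TechniqueId = 'T1566.001'; Name = 'Phishing: Spearphishing Attachment' }
    [pscustomobject]@{ Step = 'Credential theft'; TechniqueId = 'T1110.001'; Name = 'Brute Force: Password Guessing' }
    [pscustomobject]@{ Step = 'Persistence';      TechniqueId = 'T1547.001'; Name = 'Registry Run Keys' }
)

# Review the planned coverage before the exercise begins.
$scenario | Format-Table Step, TechniqueId, Name
```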
Step 3: Plan Technique Implementation
For each technique in your exercise, plan specifically how you'll implement it:
- Technique: T1057 (Process Discovery)
- Implementation: "Use the 'tasklist' command to enumerate running processes, looking for security tools"
- Expected Detection: "Process monitoring should detect tasklist execution"
- Evasion: "Use alternate tools or scripts less likely to be monitored"
This detailed planning ensures consistency and appropriate challenge level.
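As a rough illustration of one implementation and its evasion variant, the operator might run something like the following on a Windows target (the security tool names in the pattern are examples to adjust for your environment, and the CIM query is just one "less-monitored" option, not a guaranteed evasion):

```powershell
# Planned implementation: enumerate processes with the built-in tasklist command
# and look for security tooling. Process monitoring should flag this activity.
tasklist /v | Select-String -Pattern 'MsMpEng|Sysmon|SentinelAgent'

# Evasion variant: query CIM/WMI instead of spawning tasklist.exe, which some
# detections based only on process creation will miss.
Get-CimInstance -ClassName Win32_Process |
    Where-Object { $_.Name -match 'MsMpEng|Sysmon|SentinelAgent' } |
    Select-Object Name, ProcessId
```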
Step 4: Document Expected Detections
Before executing, document what you expect defenders to detect:
- Technique: T1056.004 (Input Capture: Credential API Hooking)
- Expected Detection: "API logging detects SetWindowsHookEx or similar hooking calls"
- Actual Result: "Attackers bypassed this detection using direct syscalls"
- Lesson: "Kernel-level monitoring is needed to detect credential API hooking"
Recording expected vs. actual results provides valuable learning.
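A lightweight way to keep those records consistent is to capture each one as an object; this sketch mirrors the fields above, which are a working convention rather than a standard schema:

```powershell
# Illustrative record of expected vs. actual detection for one technique.
$result = [pscustomobject]@{
    TechniqueId       = 'T1056.004'
    TechniqueName     = 'Input Capture: Credential API Hooking'
    ExpectedDetection = 'API logging detects SetWindowsHookEx or similar hooking calls'
    ActualResult      = 'Bypassed via direct syscalls'
    Detected          = $false
    Lesson            = 'Kernel-level monitoring needed for credential API hooking'
}

# Collect one object per technique, then review the gaps at the end of the exercise.
$result | Format-List
```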
Using ATT&CK for Realistic Threat Simulation
Red teams should simulate real adversary behavior, not just test random attacks.
Select Real Threat Groups to Simulate
MITRE ATT&CK documents real threat groups and their techniques. Rather than inventing attack methods, simulate documented threat group behavior:
"This exercise simulates APT28 tactics based on their documented technique usage."
This grounds your exercise in reality and tests defenses against known threats.
Follow Realistic Attack Chains
Rather than executing techniques in random order, follow realistic attack chains:
Realistic chain: Phishing → Credential Theft → Lateral Movement → Persistence → Data Collection → Exfiltration
Unrealistic chain: Jump randomly between techniques regardless of logical flow
Real attacks follow a logical progression. Realistic exercises test defensive depth across multiple stages.
Maintain Operational Security
Red teams themselves should practice operational security. Document that you:
- Avoid detection while performing reconnaissance
- Use legitimate tools when possible
- Cover your tracks
- Communicate via secure channels
This realism ensures defenders learn to detect genuine adversary behavior.
Comprehensive Coverage Assessment
Use ATT&CK to systematically assess coverage:
Create an ATT&CK Matrix
List all tactics and techniques relevant to your organization:
| Tactic | Technique | Implementation | Detection Method | Expected Result |
|---|---|---|---|---|
| Initial Access | T1566.002 | Phishing link | Email gateway detection | Stopped |
| Persistence | T1547.001 | Registry key modification | Endpoint monitoring | Detected |
| Privilege Escalation | T1134.001 | Token impersonation/theft | API logging | Undetected |
Complete the matrix for all techniques in your exercise.
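One way to maintain and report on that matrix is sketched below: the rows mirror the table above, the output path is arbitrary, and the final filter pulls out the blind spots discussed next.

```powershell
# Build the coverage matrix from per-technique results collected during the exercise.
$matrix = @(
    [pscustomobject]@{ Tactic = 'Initial Access';       Technique = 'T1566.002'; Implementation = 'Phishing link';        Detection = 'Email gateway';       Result = 'Stopped' }
    [pscustomobject]@{ Tactic = 'Persistence';          Technique = 'T1547.001'; Implementation = 'Registry run key';     Detection = 'Endpoint monitoring'; Result = 'Detected' }
    [pscustomobject]@{ Tactic = 'Privilege Escalation'; Technique = 'T1134.001'; Implementation = 'Token impersonation';  Detection = 'API logging';         Result = 'Undetected' }
)

# Export for the report and pull out the blind spots that need remediation.
$matrix | Export-Csv -Path .\attack-coverage.csv -NoTypeInformation
$matrix | Where-Object Result -eq 'Undetected'
```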
Identify Blind Spots
Techniques marked "Undetected" represent gaps in your defensive program. Red team exercises reveal:
- Techniques you can't detect
- Detections that aren't working properly
- Defenders who don't recognize real attacks
- Tools with inadequate logging
These findings drive remediation efforts.
Test Detection Quality
Not all detections are equal. Some are fragile and easily evaded; others are robust:
Fragile Detection: "Alert on the mimikatz.exe process name"
- Evasion: Rename the executable
- Result: Evasion successful
Robust Detection: "Alert on any process accessing lsass with PROCESS_VM_READ"
- Evasion: Much harder; requires a different credential theft method
- Result: Evasion partially successful, but the alternative method was detected
Use red team exercises to test detection robustness and identify weak detections needing improvement.
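As a concrete illustration of the more robust pattern, the sketch below assumes Sysmon is deployed with ProcessAccess logging (Event ID 10) and searches for processes that opened a handle to lsass.exe; exact field names and noise levels depend on your Sysmon configuration.

```powershell
# Illustrative detection check: Sysmon Event ID 10 (ProcessAccess) records which
# processes opened handles to other processes. Filtering on lsass.exe catches
# credential dumping regardless of what the attacker's binary is named.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Sysmon/Operational'
    Id      = 10
} -MaxEvents 5000 |
    Where-Object { $_.Message -match 'TargetImage:.*\\lsass\.exe' } |
    Select-Object TimeCreated, Message -First 10
```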
Incident Response Simulation
Red team exercises test not just detection but also incident response:
Exercise Flow:
- Red team conducts initial attack (T1566 - Phishing)
- Blue team detects the attack
- Blue team investigates and determines scope
- Blue team eradicates the attack
- Blue team verifies eradication
This end-to-end testing reveals weaknesses throughout the incident response process:
- Detection delays
- Investigation bottlenecks
- Incomplete eradication
- Difficulty getting approvals
- Confusion about procedures
Red team reports should document these process gaps alongside technical findings.
Documentation and Reporting
Structure red team reports using ATT&CK techniques:
Executive Summary: Overview of exercise scope and key findings
Technique Execution Details: For each technique:
- Technique ID and name
- Attack description
- Was it successful?
- How was it detected/prevented?
- Detection latency
- Evidence preservation
Coverage Assessment: Summary of techniques successfully executed vs. detected vs. prevented
Recommendations: Prioritized improvements:
- High-impact techniques not detected (immediate action)
- Detections with false negatives (near-term improvement)
- Detection optimization (ongoing tuning)
This structured reporting enables clear communication with security leadership.
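The coverage assessment section is easy to generate from the per-technique result objects collected during the exercise; a sketch, assuming each object records its Tactic and whether it was Detected:

```powershell
# Summarize exercise results per tactic for the coverage assessment section.
# Assumes $results holds one object per executed technique, as sketched earlier.
$results | Group-Object Tactic | ForEach-Object {
    $hits = ($_.Group | Where-Object Detected).Count
    [pscustomobject]@{
        Tactic        = $_.Name
        Techniques    = $_.Count
        Detected      = $hits
        DetectionRate = '{0:P0}' -f ($hits / $_.Count)
    }
} | Format-Table
```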
Adversary Emulation Exercises
Advanced red teams conduct "adversary emulation" exercises using documented threat group behavior:
APT28 Emulation Exercise:
Based on MITRE ATT&CK documentation of APT28's tactics:
- Initial Access via spear-phishing (T1566.002)
- Credential theft via OS credential dumping from LSASS memory (T1003.001)
- Lateral movement via pass-the-hash (T1550.002)
- Persistence via registry run keys (T1547.001)
- Exfiltration of targeted data over the C2 channel (T1041)
This exercise tests your defenses against a known threat group's documented methods.
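Where atomic tests exist for these techniques, the chain can be scripted with Atomic Red Team so it executes in realistic order. The sketch below assumes the Invoke-AtomicRedTeam module and the relevant atomics are installed, and should only be run on authorized, in-scope systems:

```powershell
# Execute the emulated APT28 chain in its realistic order, cleaning up artifacts
# after each technique.
$chain = 'T1566.002', 'T1003.001', 'T1550.002', 'T1547.001', 'T1041'

foreach ($technique in $chain) {
    Invoke-AtomicTest $technique -CheckPrereqs   # verify the test can run here
    Invoke-AtomicTest $technique                 # execute the atomic test(s)
    Invoke-AtomicTest $technique -Cleanup        # remove artifacts afterward
}
```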
Continuous Red Teaming
Rather than annual assessments, continuous red teaming using ATT&CK enables:
Monthly Exercises: Small red team exercises targeting specific tactics or techniques
- Month 1: Test Credential Access defenses
- Month 2: Test Persistence detection
- Month 3: Test Lateral Movement prevention
Automated Emulation: Tools like Atomic Red Team enable scripted execution of ATT&CK techniques for repeated testing
Threat-Based Prioritization: When new threats emerge, immediately test your defenses against the techniques they use
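That prioritization step can be as simple as diffing the new group's technique list against what your exercises have already covered; the technique IDs below are placeholders:

```powershell
# Placeholder lists: techniques attributed to a newly reported threat group
# versus techniques your exercises have already covered.
$newThreatTechniques = 'T1566.001', 'T1059.001', 'T1021.001', 'T1041'
$alreadyTested       = 'T1566.001', 'T1547.001', 'T1041'

# Techniques used by the new threat that have never been exercised locally.
Compare-Object $newThreatTechniques $alreadyTested |
    Where-Object SideIndicator -eq '<=' |
    Select-Object -ExpandProperty InputObject
```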
Tools for ATT&CK-Based Red Teaming
Atomic Red Team: Open-source framework for executing atomic tests mapped to ATT&CK techniques
```powershell
Invoke-AtomicTest T1566.002   # Simulate phishing
Invoke-AtomicTest T1059.001   # Simulate PowerShell execution
```
Caldera: Automated adversary emulation platform with ATT&CK technique mapping
ATT&CK Navigator: Visualize which techniques you've tested and which need coverage
Metasploit: Exploitation framework with modules mapped to ATT&CK techniques
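Exercise results often end up in ATT&CK Navigator for visualization. The sketch below emits a minimal layer file; treat the schema details as assumptions to verify against your Navigator version:

```powershell
# Build a minimal ATT&CK Navigator layer scoring tested techniques:
# 2 = detected, 1 = executed but undetected. Verify the layer schema version
# against your Navigator deployment before importing.
$layer = @{
    name       = 'Red Team Exercise Coverage'
    domain     = 'enterprise-attack'
    techniques = @(
        @{ techniqueID = 'T1566.002'; score = 2; comment = 'Detected by email gateway' }
        @{ techniqueID = 'T1547.001'; score = 2; comment = 'Detected by endpoint monitoring' }
        @{ techniqueID = 'T1134.001'; score = 1; comment = 'Executed, not detected' }
    )
}

$layer | ConvertTo-Json -Depth 4 | Set-Content -Path .\redteam-layer.json
```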
Ethical and Legal Considerations
Red teaming must operate within ethical and legal boundaries:
- Authorization: Ensure written approval for all testing
- Scope: Stay strictly within defined scope
- Timing: Coordinate with operations to avoid interfering with critical business
- Escalation: Have procedures for stopping if critical systems are threatened
- Cleanup: Fully remove all attack artifacts after the exercise
Red team exercises should improve security, not cause operational harm.
Learning from Red Team Results
Red team findings should drive immediate and sustained improvements:
Immediate Actions:
- Fix critical detections that failed
- Patch exploitable vulnerabilities
- Update security controls
Short-term Actions (30-90 days):
- Improve detection quality for missed techniques
- Add monitoring for blind spots
- Train staff on missed detections
Long-term Actions (6-12 months):
- Architectural improvements (network segmentation, endpoint hardening)
- Tool evaluation and selection
- Detection program maturation
Track remediation of red team findings to demonstrate security improvements.
Benchmark Against Industry
Compare your red team results against industry benchmarks:
- What percentage of techniques did defenders detect?
- What was average detection latency?
- How many false positives occurred?
- How quickly did incident response engage?
These metrics help assess your program maturity relative to peers.
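These figures can be computed directly from the per-technique result objects sketched earlier, assuming each records whether the technique was detected and, if so, the detection latency in minutes:

```powershell
# Compute headline metrics from exercise results. Assumes each object in
# $results has a Detected flag and a DetectionLatencyMinutes value when detected.
$detected = $results | Where-Object Detected

[pscustomobject]@{
    TechniquesExecuted  = $results.Count
    DetectionRate       = '{0:P0}' -f ($detected.Count / $results.Count)
    AvgDetectionLatency = ($detected | Measure-Object DetectionLatencyMinutes -Average).Average
}
```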
Conclusion
MITRE ATT&CK-based red teaming transforms security exercises from ad-hoc testing into systematic assessment of defenses against realistic adversary behavior. Select specific threat groups or tactics to simulate. Document which techniques you'll test and how. Execute exercises and record results. Identify coverage gaps and detection weaknesses. Systematically remediate findings. Repeat regularly. This structured approach ensures red team exercises provide maximum value for security improvement and accurately reflect your defensive capabilities against real-world threats.
