Threat modeling is a structured approach to identifying, categorizing, and prioritizing security threats in an application or system before they are exploited. Among the various methodologies available, STRIDE remains one of the most widely adopted frameworks for application-level threat analysis. Originally developed at Microsoft in the late 1990s by Loren Kohnfelder and Praerit Garg, STRIDE provides a systematic way to think about what can go wrong in a system by mapping threats to six well-defined categories.
This guide walks you through every step of building a STRIDE threat model, from decomposing your application into analyzable components to scoring threats with DREAD and producing a prioritized remediation plan. Whether you are performing your first threat model or refining an existing practice, these steps will help you produce actionable results. You can also accelerate the process by using the Threat Modeling Wizard to generate a structured threat model interactively.
What Is Threat Modeling and Why It Matters
Threat modeling is the practice of systematically analyzing a system to discover potential security weaknesses before attackers do. Rather than waiting for penetration tests or bug bounty reports to reveal vulnerabilities, threat modeling shifts security left into the design phase where fixes are orders of magnitude cheaper to implement.
The core premise is straightforward: if you understand how your application is built, what data it handles, and who interacts with it, you can predict the types of threats it faces and proactively design defenses. Without threat modeling, security decisions tend to be reactive, driven by the latest vulnerability disclosure or audit finding rather than by a coherent understanding of the system's attack surface.
STRIDE is particularly effective for application security because it maps directly to the properties that secure systems must exhibit. Each STRIDE category represents the violation of a specific security property: authentication, integrity, non-repudiation, confidentiality, availability, and authorization. This direct mapping makes it easy for developers and architects to understand why each threat matters and what controls address it.
Organizations that adopt threat modeling consistently report fewer critical vulnerabilities in production, faster incident response times (because the threat model serves as a reference during triage), and more efficient security spending because resources are directed at the highest-risk components rather than spread uniformly.
The cost-benefit argument for threat modeling is compelling. Research from Microsoft's Security Development Lifecycle (SDL) program found that finding and fixing a vulnerability during design costs roughly 30 times less than fixing the same vulnerability in production. A two-hour threat modeling session with five engineers costs less than a single day of incident response, yet it can prevent dozens of vulnerabilities from ever reaching production.
Threat modeling also strengthens the relationship between security and development teams. When developers participate in identifying threats, they develop a security mindset that persists beyond the modeling session. They begin to recognize threat patterns in their daily work and raise security concerns during code reviews without being prompted.
Prerequisites for Threat Modeling
Before you begin a STRIDE threat model, gather the following materials and ensure the right people are available. Insufficient preparation is one of the most common reasons threat modeling sessions fail to produce useful results.
Architecture documentation. You need an up-to-date architecture diagram that shows the major components of the application, the data flows between them, and the trust boundaries that separate different security zones. If your architecture documentation is outdated, spend time updating it first. Threat modeling against an inaccurate diagram produces inaccurate results. The architecture diagram should show all system components, including web servers, application servers, databases, caches, message queues, external APIs, load balancers, CDNs, and identity providers.
Data flow diagrams (DFDs). DFDs are the primary artifact used in STRIDE analysis. A Level 0 DFD shows the system as a single process with its external entities and data stores. Level 1 expands the main process into sub-processes. Most threat models operate at Level 1 or Level 2 depending on the complexity of the application. If you do not have DFDs, building them is the first task in your threat modeling session. Use standard DFD notation: circles for processes, rectangles for external entities, parallel lines for data stores, and arrows for data flows.
Stakeholder availability. Schedule a block of two to four hours with your threat modeling team. The session should include at least one developer familiar with the codebase, one architect who owns the system design, and one security-focused participant. Having a product owner present helps contextualize business impact when scoring threats. Ensure all participants can commit to the full session without interruptions. Partial attendance leads to gaps in the analysis.
Scope definition. Clearly define what is in scope and what is out of scope for this threat model. Are you analyzing the entire application or a specific feature? Are third-party services in scope? Are infrastructure-level threats (e.g., OS vulnerabilities, physical access) in scope, or only application-level threats? Defining boundaries upfront prevents the session from expanding indefinitely and ensures that the analysis is deep rather than broad.
Prior security artifacts. Gather any existing security documentation, including previous threat models, penetration test reports, vulnerability scan results, incident reports, and architecture security reviews. These provide context and help the team identify threats that have already been observed or documented.
Tooling. While threat modeling can be done with a whiteboard and sticky notes, using a structured tool significantly improves consistency and traceability. The Threat Modeling Wizard provides a guided workflow that walks you through each STRIDE category for every component in your system, ensuring nothing is overlooked. Other popular tools include Microsoft Threat Modeling Tool, OWASP Threat Dragon, and IriusRisk. Choose a tool that your team will actually use; a sophisticated tool that sits unused is less valuable than a whiteboard session everyone participates in.
Step 1: Decompose the Application
The first step in any STRIDE threat model is to break the application down into its constituent parts. This decomposition serves as the foundation for the entire analysis because threats are identified per component, per data flow, and per trust boundary.
Identify External Entities
External entities are anything outside your system's control that interacts with it. Common external entities include end users (authenticated and unauthenticated), third-party APIs and services, partner integrations, administrative users, and automated systems like CI/CD pipelines that deploy to your environment. Each external entity represents a potential source of malicious input or unauthorized access.
List every external entity and document what data it sends to or receives from your system. Pay special attention to entities that cross trust boundaries, such as a public-facing API that accepts input from unauthenticated users. For each external entity, document:
- The entity's identity and role
- What data the entity sends to the system
- What data the entity receives from the system
- What level of trust the system places in the entity
- What authentication and authorization mechanisms govern the entity's access
Do not overlook less obvious external entities. Monitoring systems that collect metrics, log aggregation services that receive application logs, DNS servers that resolve domain names, and certificate authorities that issue TLS certificates are all external entities that interact with your system and could potentially be compromised.
Map Data Flows
For each pair of communicating components, document the data flow between them. A data flow includes the source, destination, the type of data being transferred, the protocol used, and whether the channel is encrypted. Data flows that cross trust boundaries are the most important to analyze because they represent points where untrusted data enters a trusted zone or sensitive data leaves a protected zone.
Common data flows to document include:
- User input from browsers to web servers (HTTP/HTTPS)
- API calls between microservices (gRPC, REST, GraphQL)
- Database queries and responses (SQL, NoSQL protocols)
- File uploads and downloads (multipart form data, S3 pre-signed URLs)
- Logging and monitoring data sent to external services (syslog, OTLP)
- Authentication tokens passed between identity providers and your application (OIDC, SAML)
- Webhook notifications from third-party services
- Message queue interactions (publish/subscribe, request/reply)
- Cache read/write operations
- Configuration data loaded from external sources (environment variables, config servers)
For each data flow, note whether the data contains sensitive information (PII, credentials, financial data, health data) and whether the transmission channel provides confidentiality (encryption) and integrity (checksums, signatures) protection.
Define Trust Boundaries
Trust boundaries separate zones of different trust levels. The most common trust boundary is between the internet and your application's public-facing components, but internal trust boundaries are equally important. Examples include the boundary between a web server and an application server, between an application server and a database, between a container and its host, between different microservices that run with different privilege levels, between a user's browser and a content delivery network, and between a production environment and a staging environment.
Draw trust boundaries on your DFD as dashed lines. Every data flow that crosses a trust boundary is a candidate for detailed STRIDE analysis. Components within the same trust zone can be analyzed together if they share the same privilege level and access controls.
Consider these common trust boundaries that are often overlooked:
- Between different tenants in a multi-tenant system
- Between different user roles (standard user vs. administrator)
- Between the application and its build/deploy pipeline
- Between the application and its dependency chain (npm packages, Docker base images)
- Between managed cloud services and your application code running on them
- Between encrypted and unencrypted network segments
Catalog Data Stores
Identify every location where data is stored, including relational databases, NoSQL stores, file systems, caches, message queues, log aggregation services, search indexes, data warehouses, and backup systems. For each data store, document the sensitivity of the data it contains, the access controls protecting it, the encryption status at rest, and the backup and recovery mechanisms.
Data stores are high-value targets because they concentrate sensitive information. A single compromised database can expose millions of records, making data stores a priority for STRIDE analysis. Pay particular attention to:
- Data stores that contain PII, financial data, health data, or credentials
- Data stores that are shared between multiple services or tenants
- Data stores that are accessible from multiple trust zones
- Temporary data stores (caches, session stores) that may contain sensitive data without the same protections as primary data stores
- Backup data stores that may contain snapshots of sensitive data with weaker access controls than the primary store
Create the Application Decomposition Document
Compile all of the above into a structured decomposition document. This document serves as the input to the STRIDE analysis and should be reviewed by all participants before the threat modeling session begins. A well-prepared decomposition document typically includes:
- A system overview describing the application's purpose and architecture
- A Level 1 (or Level 2) DFD with all components, data flows, and trust boundaries labeled
- A table listing all external entities with their data exchange descriptions
- A table listing all data stores with their sensitivity classifications
- A narrative describing each trust boundary and the security controls at each boundary
Step 2: Apply STRIDE to Each Component
With your application decomposed, systematically walk through each component, data flow, and trust boundary crossing, asking whether each STRIDE threat category applies.
The STRIDE Categories
The following table provides a reference for each STRIDE category, the security property it violates, and concrete examples to guide your analysis.
| Category | Security Property Violated | Description | Example Threats |
|---|---|---|---|
| Spoofing | Authentication | An attacker pretends to be someone or something they are not. | Credential stuffing, session hijacking, forged API tokens, IP spoofing, DNS spoofing, certificate impersonation |
| Tampering | Integrity | Data or code is modified without authorization. | SQL injection, man-in-the-middle modification, unsigned software updates, log alteration, parameter manipulation, DOM manipulation |
| Repudiation | Non-repudiation | An actor denies performing an action and there is no evidence to prove otherwise. | Missing audit logs, unsigned transactions, no tamper-evident logging, lack of digital signatures, no timestamps on events |
| Information Disclosure | Confidentiality | Sensitive data is exposed to unauthorized parties. | Verbose error messages, directory traversal, unencrypted storage, side-channel leaks, exposed API keys, debug endpoints in production |
| Denial of Service | Availability | The system is rendered unavailable to legitimate users. | Resource exhaustion, algorithmic complexity attacks, unthrottled API endpoints, account lock-out abuse, disk-filling attacks, connection pool exhaustion |
| Elevation of Privilege | Authorization | An attacker gains capabilities beyond what they are authorized for. | Broken access controls, privilege escalation via kernel exploits, IDOR vulnerabilities, JWT manipulation, mass assignment, forced browsing to admin functions |
Applying STRIDE Systematically
For each element in your DFD, work through the six categories and ask a targeted question:
- Spoofing: Can an attacker impersonate this entity or component? What authentication mechanisms are in place? Are credentials stored securely? Can authentication be bypassed through alternative paths?
- Tampering: Can data in transit or at rest be modified? Are integrity checks applied? Is input validated and sanitized? Are software updates signed and verified?
- Repudiation: If a malicious action occurs here, can we prove who did it and when? Are logs sufficient? Are logs tamper-proof? Do logs include enough detail to reconstruct the event?
- Information Disclosure: Can an attacker extract sensitive data from this component? Are there leakage paths through error messages, timing differences, or metadata? Is data encrypted at rest and in transit?
- Denial of Service: Can this component be overwhelmed or crashed? Are rate limits and circuit breakers in place? Are there resource limits on queries, file uploads, or connection counts?
- Elevation of Privilege: Can an attacker escalate from this component's privilege level to a higher one? Are least-privilege principles enforced? Are authorization checks applied on every request, not just at the UI layer?
Document each identified threat with a unique identifier (e.g., T-001), the affected component, the STRIDE category, a description of the threat scenario, any existing mitigations already in place, and a preliminary assessment of severity.
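A minimal record structure for such a threat catalog can be sketched in Python; the field names here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in the threat catalog (fields are illustrative)."""
    threat_id: str    # unique identifier, e.g. "T-001"
    component: str    # affected DFD element
    category: str     # one of the six STRIDE categories
    description: str  # the threat scenario
    mitigations: list[str] = field(default_factory=list)  # existing controls
    severity: str = "unscored"  # preliminary assessment before DREAD scoring

t = Threat(
    threat_id="T-001",
    component="Order API",
    category="Tampering",
    description="Client-supplied order total trusted by the server",
    mitigations=["Server-side price recalculation"],
    severity="high",
)
```

Keeping each threat as a structured record makes the later DREAD-scoring and prioritization steps straightforward to automate.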
Focus on Trust Boundary Crossings
Not all components warrant equal attention. Prioritize your analysis on data flows that cross trust boundaries because these are the points where attacks are most likely to succeed. A data flow from the internet to your web application crosses the most critical trust boundary, while a data flow between two containers in the same pod with the same service account crosses a less critical boundary.
For each trust boundary crossing, consider all six STRIDE categories. For components entirely within a trust zone, you may choose to focus on the categories most relevant to that component type:
- External entities: Focus on Spoofing (can the entity be impersonated?) and Repudiation (can the entity deny its actions?)
- Processes: Consider all six STRIDE categories, with emphasis on Tampering, Information Disclosure, Denial of Service, and Elevation of Privilege
- Data stores: Focus on Tampering (can stored data be modified?), Information Disclosure (can data be exfiltrated?), and Denial of Service (can the store be overwhelmed?)
- Data flows: Focus on Tampering (can data be modified in transit?), Information Disclosure (can data be intercepted?), and Denial of Service (can the flow be disrupted?)
Worked Example: E-Commerce API
To illustrate the process, consider an e-commerce API that accepts orders from a web frontend. The API receives order data from authenticated users, writes to a PostgreSQL database, calls a payment gateway API, and publishes events to a message queue for fulfillment processing.
Applying STRIDE to the data flow between the frontend and the API:
- Spoofing: An attacker could steal a session token and place orders as another user. Mitigation: bind session tokens to client fingerprints, implement token rotation.
- Tampering: An attacker could modify the order total in the request body. Mitigation: recalculate the total server-side from item prices; never trust client-supplied totals.
- Repudiation: A user could deny placing an order. Mitigation: log all order creation events with user ID, timestamp, IP address, and request fingerprint.
- Information Disclosure: Error responses could reveal database schema or internal IP addresses. Mitigation: return generic error messages; log details server-side only.
- Denial of Service: An attacker could submit thousands of orders per second, overwhelming the database. Mitigation: implement per-user rate limiting on order creation; add request queuing.
- Elevation of Privilege: A standard user could modify the request to access admin-only order management endpoints. Mitigation: enforce authorization checks on every endpoint; use role-based access control.
This example demonstrates how each STRIDE category produces a specific, actionable threat and mitigation rather than a vague security concern.
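The Tampering mitigation above (never trusting a client-supplied total) can be sketched as follows; the price catalog, field names, and request shape are hypothetical stand-ins for a real product database and API payload:

```python
# Hypothetical server-side price catalog; in practice this is a database lookup.
PRICES = {"sku-100": 1999, "sku-200": 4950}  # prices in cents

def order_total(items: list[dict]) -> int:
    """Recompute the order total from server-side prices, ignoring any
    client-supplied total. Rejects unknown SKUs and invalid quantities."""
    total = 0
    for item in items:
        sku, qty = item["sku"], item["qty"]
        if sku not in PRICES:
            raise ValueError(f"unknown SKU: {sku}")
        if not isinstance(qty, int) or qty <= 0:
            raise ValueError(f"invalid quantity for {sku}: {qty}")
        total += PRICES[sku] * qty
    return total

# A tampered request claiming total=1 is simply ignored; the server recomputes.
request = {"items": [{"sku": "sku-100", "qty": 2}], "total": 1}
print(order_total(request["items"]))  # 3998
```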
Step 3: Score Threats with DREAD
Once you have a list of identified threats, you need to prioritize them. Not all threats carry equal risk, and attempting to mitigate every threat simultaneously is impractical. DREAD provides a structured scoring rubric that evaluates each threat across five dimensions.
The DREAD Scoring Rubric
| Dimension | Score 1-3 (Low) | Score 4-6 (Medium) | Score 7-10 (High) |
|---|---|---|---|
| Damage | Minor inconvenience, no data loss, no financial impact | Partial data exposure, limited financial impact, reversible damage | Full data breach, regulatory penalties, complete system compromise, permanent damage |
| Reproducibility | Requires rare conditions, timing-dependent, non-deterministic | Reproducible with specialized tools or knowledge, some setup required | Easily reproducible by anyone, automated exploitation possible, deterministic |
| Exploitability | Requires physical access or insider knowledge, custom exploit development | Requires moderate skill and custom tooling, some security knowledge needed | Script-kiddie level, public exploits available, no authentication needed, automated tools exist |
| Affected Users | Single user or admin account, isolated impact | A subset of users or a specific tenant, department-level impact | All users, all tenants, or all data, organization-wide impact |
| Discoverability | Requires source code access or extensive reconnaissance, hidden behind authentication | Discoverable through targeted scanning or fuzzing, requires some knowledge of the system | Obvious from public documentation, error messages, or network observation, trivially discoverable |
Scoring Process
For each threat in your catalog, assign a score of 1 to 10 for each DREAD dimension. Then calculate the average to produce the overall DREAD score. The Risk Matrix Calculator can help structure this prioritization process by mapping threats to likelihood and impact dimensions. For example, a threat with Damage=9, Reproducibility=8, Exploitability=7, Affected Users=10, Discoverability=6 would receive an overall score of (9+8+7+10+6)/5 = 8.0, placing it in the critical category.
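The averaging step is simple enough to automate with a helper like the following sketch, which also guards against out-of-range scores:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD dimensions, each scored 1-10."""
    dims = (damage, reproducibility, exploitability,
            affected_users, discoverability)
    if not all(1 <= d <= 10 for d in dims):
        raise ValueError("each DREAD dimension must be between 1 and 10")
    return sum(dims) / len(dims)

# The worked example from the text: (9+8+7+10+6)/5 = 8.0
print(dread_score(9, 8, 7, 10, 6))  # 8.0
```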
It is important that scoring is done collaboratively rather than by a single individual. Different perspectives lead to more accurate scores. The developer may understand Reproducibility best, the security engineer may have the best insight into Exploitability, and the product owner can best assess Damage and Affected Users.
Work through the scoring as a group discussion. For each dimension, have the most knowledgeable participant propose a score and explain their reasoning. Allow other participants to challenge the score. If there is disagreement, discuss until consensus is reached or use the higher of the two proposed scores to be conservative.
Addressing Scoring Bias
DREAD scoring is inherently subjective, and teams commonly encounter several biases.
First, the Discoverability dimension can create a false sense of security: a threat scored low on Discoverability might still be found by a determined attacker, so some teams choose to always score Discoverability as the maximum (10) to adopt a worst-case perspective. This approach, sometimes called "assume breach," prevents security-through-obscurity reasoning from influencing prioritization.
Second, teams new to threat modeling tend to cluster scores in the middle range (4-6), avoiding strong positions. Encourage participants to use the full scale and provide concrete reasoning for their scores. A threat that would expose every customer record deserves a Damage score of 9 or 10, not a diplomatically moderate 6.
Third, anchoring bias occurs when the first score proposed influences subsequent scores. To mitigate this, have participants write down their scores independently before sharing with the group. Then compare scores and discuss divergences.
Fourth, recency bias can cause teams to overweight threats similar to recent incidents or news stories while underweighting threats that have not materialized recently. Ground scoring in objective criteria (the rubric) rather than recent events.
Scoring Multiple Threats Efficiently
When your threat catalog contains dozens of threats, scoring each one in a group session becomes time-consuming. Use a two-pass approach:
- Rapid triage: Quickly categorize each threat as critical, high, medium, or low based on gut feeling. This takes about one minute per threat with a group.
- Detailed scoring: Apply DREAD scoring only to threats rated critical or high in the triage. Threats rated medium or low can be scored asynchronously by the security lead and reviewed at the next session.
This approach focuses the group's time and energy on the threats that matter most while still ensuring that all threats are eventually scored.
Step 4: Prioritize and Mitigate
With scored threats in hand, create a prioritized remediation plan that maps to your development workflow.
Categorize by Risk Level
Divide threats into three tiers based on their DREAD score:
- Critical (8.0-10.0): Must be mitigated before the next release. These threats represent imminent, high-impact risks that could lead to data breaches, regulatory violations, or complete system compromise. Assign these to the current sprint and treat them as release blockers. Every critical threat should have a named owner and a target completion date within the current development cycle.
- High (5.0-7.9): Should be scheduled for mitigation within the next one to two sprints. These threats are significant but may require more complex remediation or architectural changes. Create backlog items with clear acceptance criteria and target dates. Track these in your project management tool and review progress at each sprint retrospective.
- Medium/Low (1.0-4.9): Track in a risk register and address during regular security maintenance windows. While these threats are lower priority, they should not be ignored indefinitely. Review them quarterly and reprioritize if the threat landscape changes, if the application architecture evolves to increase the threat's impact, or if a similar vulnerability is exploited in the wild.
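The tier boundaries above translate directly into a small mapping function, useful when generating a prioritized report from a scored threat catalog:

```python
def risk_tier(score: float) -> str:
    """Map an overall DREAD score (1.0-10.0) to a remediation tier."""
    if not 1.0 <= score <= 10.0:
        raise ValueError("DREAD score must be between 1.0 and 10.0")
    if score >= 8.0:
        return "critical"   # release blocker, current sprint
    if score >= 5.0:
        return "high"       # next one to two sprints
    return "medium/low"     # risk register, quarterly review

print(risk_tier(8.0))  # critical
print(risk_tier(6.4))  # high
print(risk_tier(3.2))  # medium/low
```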
Select Appropriate Mitigations
For each threat, identify one or more mitigations. Mitigations should be specific and actionable, not generic statements like "improve security." Effective mitigations describe a concrete technical or procedural change that reduces the threat's DREAD score.
Examples of effective mitigations organized by STRIDE category:
- Spoofing threat against API endpoint: Implement mutual TLS authentication for service-to-service calls. Add HMAC signature verification for webhook payloads. Enforce multi-factor authentication for administrative API access.
- Tampering threat against database records: Implement row-level checksums using cryptographic hashes. Deploy database activity monitoring with alerting on unauthorized modifications. Use parameterized queries exclusively to prevent SQL injection.
- Repudiation threat against financial transactions: Implement write-once audit logging with cryptographic chaining (hash-linked log entries). Deploy a separate log aggregation system with independent access controls. Include user identity, timestamp, source IP, and request details in every audit entry.
- Information Disclosure via error messages: Replace detailed error responses with generic messages for unauthenticated users. Route detailed errors to structured logging only. Disable stack traces and debug output in production. Implement consistent response times to prevent timing-based information leakage.
- Denial of Service against search endpoint: Implement per-user rate limiting at 100 requests per minute. Add query complexity analysis to reject expensive queries. Deploy an application-level circuit breaker that degrades gracefully under load. Set maximum response sizes and pagination limits.
- Elevation of Privilege via IDOR: Implement authorization checks on every data access, not just at the controller layer. Use UUIDs instead of sequential IDs to prevent enumeration. Apply the principle of least privilege to service accounts and database connections.
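The repudiation mitigation above, hash-linked audit entries, can be sketched with the standard library alone. This is a minimal illustration of the chaining idea, not a production logging system; the entry fields are illustrative:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an audit entry whose hash covers both the event and the
    previous entry's hash, so any later modification breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash in order; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"user": "u42", "action": "transfer", "amount": 100})
append_entry(log, {"user": "u42", "action": "transfer", "amount": 250})
print(verify_chain(log))       # True
log[0]["event"]["amount"] = 1  # tamper with the first entry
print(verify_chain(log))       # False: the chain no longer verifies
```

In a real deployment the chain head would be anchored externally (for example, periodically written to a separate system), so an attacker cannot simply rebuild the entire chain after tampering.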
Calculate Risk Reduction
For each mitigation, estimate the residual risk score by re-evaluating the DREAD dimensions assuming the mitigation is in place. This allows you to calculate the risk reduction achieved by each mitigation and compare the cost of implementation against the risk reduction.
For example, if a Spoofing threat has a DREAD score of 8.0 before mitigation (Damage=9, Reproducibility=8, Exploitability=7, Affected Users=10, Discoverability=6), and implementing mutual TLS is expected to reduce Reproducibility from 8 to 2 and Exploitability from 7 to 3, the new score would be (9+2+3+10+6)/5 = 6.0, representing a risk reduction of 2.0 points. If implementing mutual TLS takes two developer-days, you can compare this cost against the risk reduction to prioritize across multiple mitigations.
When budget is limited, choose mitigations that provide the greatest risk reduction per unit of effort. A mitigation that reduces a critical threat to high risk in two hours is more valuable than a mitigation that reduces a medium threat to low risk in two weeks.
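Using the figures from the mutual-TLS example (baseline Damage=9, Reproducibility=8, Exploitability=7, Affected Users=10, Discoverability=6, with Reproducibility dropping to 2 and Exploitability to 3 after mitigation, and an assumed two-day effort), the risk-reduction-per-effort comparison looks like this:

```python
def dread(d, r, e, a, disc):
    """Overall DREAD score: the average of the five dimensions."""
    return (d + r + e + a + disc) / 5

before = dread(9, 8, 7, 10, 6)  # baseline score: 8.0
after = dread(9, 2, 3, 10, 6)   # with mutual TLS in place: 6.0
reduction = before - after      # 2.0 points of risk reduction
effort_days = 2                 # assumed implementation cost

# Risk reduction per developer-day, a simple ranking metric for mitigations.
print(reduction / effort_days)  # 1.0
```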
Map Mitigations to Development Work
Each mitigation should be translated into one or more development tasks that can be tracked in your project management system. For each task, specify:
- A clear description of what needs to be changed
- Acceptance criteria that define when the mitigation is complete
- A verification method (how to confirm the mitigation works)
- An owner (the developer responsible for implementation)
- A target completion date
Step 5: Document and Review
A threat model is only valuable if it is documented, accessible, and maintained over time. An undocumented threat model lives only in the memories of the participants and degrades rapidly as people change roles, leave the organization, or simply forget the details.
Create the Threat Model Document
Your threat model document should include the following sections:
- Scope and Objectives: What system or feature was analyzed and what the goals of the analysis were. Include the date, participants, and duration of the modeling session.
- Architecture Overview: The DFDs, trust boundaries, and component descriptions used as input. Include the decomposition document created in Step 1.
- Threat Catalog: The complete list of identified threats with their STRIDE category, description, DREAD score, affected component, and current mitigation status. Organize the catalog by DREAD score (highest first) for easy prioritization reference.
- Mitigation Plan: The prioritized list of mitigations with owners, target dates, expected risk reduction, and implementation status. Update this section as mitigations are completed.
- Assumptions and Limitations: What was assumed to be true during the analysis and what was explicitly excluded from scope. For example, "We assumed the cloud provider's physical security is adequate" or "Insider threats from privileged administrators were out of scope."
- Accepted Risks: Threats that were identified but accepted without mitigation, along with the rationale for acceptance and any conditions that would trigger re-evaluation.
- Review History: A log of when the threat model was created, reviewed, and updated, including who participated in each review and what changes were made.
Store Alongside the Code
Threat models should live close to the code they describe. Store the document in the same repository as the application code, ideally in a docs/security/ or threat-models/ directory. This ensures that the threat model is versioned alongside the code and that developers encounter it naturally when working on the system.
Use a format that supports version control and diffing, such as Markdown or a structured format like YAML or JSON. Avoid storing threat models only in wiki systems or shared drives where they become disconnected from the code they describe.
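A minimal machine-readable layout can be sketched in Python with the standard library; the schema and values below are illustrative, not a standard format:

```python
import json

# Illustrative threat model document; the field names are an assumption,
# not a standardized schema.
threat_model = {
    "scope": "Order service, v2 API",
    "threats": [
        {"id": "T-001", "component": "Order API", "stride": "Tampering",
         "dread": 8.0, "status": "mitigation-in-progress"},
        {"id": "T-002", "component": "Audit log", "stride": "Repudiation",
         "dread": 6.2, "status": "open"},
    ],
}

# Serialize deterministically (sorted keys, fixed indentation) so diffs
# in version control stay small and readable.
serialized = json.dumps(threat_model, indent=2, sort_keys=True)
restored = json.loads(serialized)
print(restored["threats"][0]["id"])  # T-001
```

Deterministic serialization is the key property here: when two reviewers update the model, the version-control diff shows only the threats that actually changed.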
Schedule Regular Reviews
Establish a cadence for reviewing and updating the threat model. At a minimum, review the threat model:
- Before each major release to verify that new features do not introduce unmitigated threats
- When significant architectural changes are made, such as adding a new service, changing the database, or integrating a new third-party API
- After a security incident involving the modeled system, to determine whether the incident was covered by the threat model and whether new threats need to be added
- Quarterly, as part of a security review cycle, to reassess DREAD scores based on changes in the threat landscape
- After penetration testing to validate findings against the threat model. Use the Penetration Test Scoping Calculator to plan targeted testing that validates your highest-priority threats.
During each review, verify that identified mitigations have been implemented, reassess DREAD scores based on changes in the threat landscape, add any new threats identified since the last review, and retire threats that are no longer relevant due to architectural changes.
Integrate with the Development Process
The most effective way to keep a threat model current is to integrate it into the development process. Require a threat model review as part of the definition of done for new features or architectural changes. Add a threat modeling checklist to your pull request template for changes that modify authentication, authorization, data handling, or external integrations. Include threat model status in sprint retrospectives.
Common Mistakes to Avoid
Even experienced teams make mistakes during threat modeling. Being aware of these common pitfalls helps you produce more accurate and useful results.
Boiling the Ocean
Attempting to threat model an entire enterprise application in a single session leads to shallow analysis and participant fatigue. Instead, scope each session to a specific feature, service, or subsystem. A focused two-hour session on a single microservice produces better results than an eight-hour marathon covering the entire platform. If your application is large, plan a series of focused sessions that each cover one subsystem, and then conduct a cross-cutting session to identify threats that span multiple subsystems.
Ignoring the Human Element
STRIDE focuses on technical threats, but many real-world attacks exploit human factors. Social engineering, insider threats, and operational mistakes should be considered alongside technical threats. While these may not map cleanly to STRIDE categories, document them as supplementary threats in your threat catalog. For example, an attacker who phishes an administrator for credentials could be documented under Spoofing, and an insider who exfiltrates data could be documented under Information Disclosure with a note that the threat vector is an authorized user rather than an external attacker.
Treating the Threat Model as a One-Time Activity
A threat model that is created during initial design and never updated quickly becomes stale. Applications evolve, new features are added, dependencies change, and the threat landscape shifts. Build threat model reviews into your development process so that the model remains current. A stale threat model is arguably worse than no threat model because it creates a false sense of security.
Overcomplicating the DFD
Data flow diagrams should be detailed enough to support meaningful analysis but not so complex that they become unreadable. A Level 1 or Level 2 DFD is sufficient for most applications. If your DFD has more than 20 to 30 elements, consider breaking it into multiple diagrams, each covering a specific subsystem. The goal is a diagram that all session participants can understand and discuss, not an exhaustive technical specification.
Skipping DREAD Scoring
Some teams identify threats but skip the scoring step, treating all threats as equally important. Without scoring, there is no rational basis for prioritization, and remediation efforts tend to focus on whichever threat is easiest to fix rather than which threat poses the greatest risk. Always score and prioritize. The discipline of assigning numbers forces the team to think critically about each threat's actual impact and likelihood.
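The scoring itself is simple arithmetic: average the five DREAD dimensions, each rated 1 to 10. A minimal sketch; the risk-band thresholds shown are illustrative, since teams typically calibrate their own:

```python
def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """Average the five DREAD dimensions (each rated 1-10) into one score."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 1 and 10")
    return sum(ratings) / 5

def risk_level(score: float) -> str:
    """Map a DREAD score to a risk band; thresholds are illustrative."""
    if score >= 8:
        return "Critical"
    if score >= 6:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"
```

The value is less in the arithmetic than in the conversation it forces: to agree on a Damage rating of 8, the team must articulate what the attacker actually gains.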
Not Involving Developers
Threat models produced solely by security teams without developer input tend to be disconnected from the actual implementation. Developers bring essential knowledge about how the system is actually built (as opposed to how it was designed), which often reveals threats that architecture diagrams alone would miss. A security engineer might not know that the application stores session tokens in local storage instead of HTTP-only cookies, but the developer who implemented the session management certainly does.
Confusing Threats with Vulnerabilities
A threat is something that can go wrong (e.g., an attacker spoofs an API token). A vulnerability is a specific weakness that enables the threat (e.g., API tokens are not signed and can be forged). During STRIDE analysis, focus on identifying threats rather than jumping to specific vulnerabilities. The vulnerability analysis comes later, during mitigation planning. This distinction keeps the session focused on what can happen rather than getting lost in how it would happen technically.
Failing to Act on Results
The worst outcome of a threat modeling session is a well-documented threat model that sits in a repository and is never acted upon. Ensure that every critical and high threat has an assigned owner, a target date, and a development task. Track remediation progress in sprint reviews. A threat model that does not lead to action is merely an academic exercise.
Putting It All Together
Building a STRIDE threat model is an investment that pays dividends throughout the application lifecycle. By decomposing your application into analyzable components, systematically applying the six STRIDE categories, scoring threats with DREAD, and maintaining the model over time, you create a living security artifact that guides development decisions and incident response.
The process becomes faster and more natural with practice. A team that has conducted several STRIDE sessions develops an intuition for where threats are most likely to exist and can complete an analysis in half the time it took initially. This efficiency gain makes it practical to integrate threat modeling into the regular development cadence rather than treating it as a special event.
If you are starting your first threat model, the Threat Modeling Wizard provides a structured, interactive workflow that guides you through each step described in this article. It generates a formatted threat catalog with DREAD scores and recommended mitigations, giving you a head start on producing professional-quality threat model documentation.
Remember that the goal is not to eliminate all threats, as that is impossible, but to understand them well enough to make informed decisions about where to invest in defenses. A good threat model surfaces the risks that matter most and provides a clear path to reducing them to an acceptable level. It transforms security from a vague aspiration into a concrete, measurable, and manageable discipline.
The investment in threat modeling pays for itself many times over. Each vulnerability caught during design is a vulnerability that does not require emergency patching in production, does not trigger an incident response, and does not erode customer trust. Start with your highest-risk application, refine your process, and expand from there.
Appendix: STRIDE Threat Model Checklist
Use this checklist during each threat modeling session to ensure comprehensive coverage:
Preparation Phase
- Verify that architecture documentation is current and accurate
- Confirm that DFDs include all components, data flows, and trust boundaries
- Define the scope and objectives of this session in writing
- Ensure all required participants are available and prepared
- Gather prior security artifacts (previous threat models, pen test reports, incident reports)
- Prepare the threat modeling tool or workspace
Decomposition Phase
- List all external entities and their data interactions
- Document all data flows with protocol, encryption, and sensitivity details
- Draw all trust boundaries, including internal boundaries, which are often overlooked
- Catalog all data stores with their classification levels and access controls
- Identify all authentication and authorization mechanisms
- Note any shared infrastructure or multi-tenant boundaries
Analysis Phase
- Apply all six STRIDE categories to each trust boundary crossing
- Apply relevant STRIDE categories to each component within trust zones
- Document each identified threat with a unique identifier
- Record existing mitigations for each threat
- Assign DREAD ratings to each threat collaboratively across all five dimensions
- Calculate the overall DREAD score and assign a risk level
Prioritization Phase
- Categorize all threats by risk level (Critical, High, Medium/Low)
- Assign a mitigation owner and target date to each Critical and High threat
- Estimate the risk reduction for each proposed mitigation
- Create development tasks for each mitigation
- Document accepted risks with rationale
Documentation Phase
- Compile the threat model document with all required sections
- Store the document in the code repository alongside the application
- Schedule the next review date
- Communicate results to the development team and stakeholders
- Add mitigation tasks to the project backlog
Review Phase (for existing models)
- Verify that previously identified mitigations have been implemented
- Reassess DREAD scores based on changes in the threat landscape
- Add new threats from architectural changes, new features, or new threat intelligence
- Retire threats that are no longer relevant
- Update the review history log
This checklist ensures that no step is skipped during the threat modeling process and provides a consistent structure that improves with each iteration. Over time, your team will internalize these steps and move through them efficiently, but the checklist remains valuable as a quality assurance mechanism even for experienced practitioners.
STRIDE in Different Contexts
While STRIDE was originally designed for application-level threat modeling, the methodology can be adapted for different contexts with some modifications.
STRIDE for Microservices Architectures
Microservices architectures present unique challenges for STRIDE threat modeling because the number of components, data flows, and trust boundaries is significantly larger than in monolithic applications. Each microservice communicates with multiple other services, and each communication channel represents a potential attack vector.
When applying STRIDE to microservices, focus your initial analysis on:
- The API gateway, which is the primary trust boundary crossing between external and internal traffic
- The service mesh or inter-service communication layer
- Shared data stores and message queues
- The service discovery and configuration management systems
- The CI/CD pipeline that deploys services
Group microservices by domain or bounded context and analyze each group as a unit rather than modeling every individual service separately. This keeps the analysis manageable while still capturing the most important threats. For services within the same group that share the same trust level and access controls, a single STRIDE analysis is sufficient. For services that cross trust boundaries or handle significantly different data sensitivity levels, analyze them individually.
Pay special attention to Elevation of Privilege threats in microservices architectures. A compromised service with limited permissions may be able to impersonate other services, access service mesh features that provide lateral movement capabilities, or exploit misconfigured RBAC policies to gain access to higher-privilege services.
STRIDE for Cloud-Native Applications
Cloud-native applications introduce additional elements that must be included in the threat model:
- Cloud provider APIs: Management APIs and their front ends (AWS Management Console, Azure Portal, Google Cloud Console) are high-value targets because compromising them can give an attacker full control of the infrastructure.
- IAM roles and policies: Misconfigured IAM policies are a leading cause of cloud breaches. Include IAM configuration in your Elevation of Privilege analysis.
- Serverless functions: Lambda, Azure Functions, and Cloud Functions execute in ephemeral environments with shared infrastructure, introducing unique Spoofing and Information Disclosure threats.
- Container orchestration: Kubernetes clusters, ECS tasks, and similar orchestration systems introduce threats related to container escape, pod security policies, and namespace isolation.
- Storage services: Object storage (S3, Azure Blob) with misconfigured access policies is a recurring source of Information Disclosure incidents.
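As an illustration of folding IAM into Elevation of Privilege analysis, a heuristic like the one below can flag overly broad policy statements during review. It is a sketch over AWS-style policy documents, not a complete policy analyzer:

```python
def risky_statements(policy: dict) -> list[dict]:
    """Flag Allow statements with wildcard actions or resources (a heuristic)."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        # Wildcards widen the blast radius if the principal is compromised.
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged
```

Even a crude check like this surfaces the `s3:*`-on-everything policies that dominate real-world cloud breach reports.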
STRIDE for IoT and Embedded Systems
IoT devices and embedded systems present unique STRIDE challenges because they often:
- Have limited computational resources, making encryption expensive
- Operate in physically accessible environments, enabling Tampering and Spoofing through physical access
- Use long-lived credentials that are difficult to rotate
- Receive infrequent firmware updates, leaving known vulnerabilities unpatched
- Communicate over wireless protocols that are susceptible to interception and spoofing
When threat modeling IoT systems, expand your analysis to include:
- The device firmware and its update mechanism
- The communication protocol between the device and its cloud backend
- The physical device and its resistance to tampering and reverse engineering
- The provisioning and credential management process
- The device management platform that controls fleet-wide updates and configuration
STRIDE for Machine Learning Systems
Machine learning systems face additional threat categories that traditional applications do not encounter:
- Data poisoning (Tampering): Attackers inject malicious data into training datasets to bias model behavior, potentially causing the model to make incorrect security decisions.
- Model extraction (Information Disclosure): Attackers query the model systematically to reconstruct a copy of the proprietary model, stealing intellectual property.
- Adversarial inputs (Spoofing/Tampering): Attackers craft inputs designed to cause the model to produce incorrect outputs, potentially bypassing security controls that rely on the model's predictions.
- Training data leakage (Information Disclosure): The model memorizes and reveals sensitive data from the training set through its outputs, exposing private information.
Including these ML-specific threats in your STRIDE analysis ensures that AI and ML systems receive the same rigorous security scrutiny as traditional applications.
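As an example of mitigating one of these threats, model extraction is commonly slowed with per-client query budgets, since cloning a model requires a large volume of queries. A minimal in-memory sketch (a production version would need persistence, time windows, and anomaly detection):

```python
from collections import defaultdict

class QueryBudget:
    """Track per-client prediction queries as a simple model-extraction
    mitigation: deny requests once a client exceeds its daily limit."""

    def __init__(self, daily_limit: int = 1000):
        self.daily_limit = daily_limit
        self.counts: dict[str, int] = defaultdict(int)

    def allow(self, client_id: str) -> bool:
        """Record one query and report whether it is within budget."""
        self.counts[client_id] += 1
        return self.counts[client_id] <= self.daily_limit
```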
Resources for Continued Learning
To deepen your threat modeling practice beyond this guide, consider these resources:
- OWASP Threat Modeling Project: Provides open-source tools, cheat sheets, and community resources for threat modeling practitioners at all experience levels.
- Adam Shostack's "Threat Modeling: Designing for Security": The definitive reference book on threat modeling, covering STRIDE and other methodologies in comprehensive depth with extensive examples.
- Microsoft SDL Threat Modeling Tool: A free tool that provides guided STRIDE analysis with built-in threat templates and report generation.
- OWASP Threat Dragon: An open-source, web-based threat modeling tool that supports DFD creation and STRIDE analysis with collaborative features.
- SAFECode "Tactical Threat Modeling" paper: A practical guide to integrating threat modeling into agile development workflows, particularly useful for teams adopting threat modeling for the first time.
The key to building a successful threat modeling practice is consistency and iteration. Your first threat model will not be perfect, but it will be infinitely more valuable than no threat model at all. Each subsequent model will be faster, more accurate, and more actionable as your team builds experience and institutional knowledge. Begin with a single critical application, apply the STRIDE methodology following this guide, and expand your practice from there.