How to Select the Right Security Model for Your Organization

Compare Bell-LaPadula, Biba, Clark-Wilson, and Brewer-Nash formal security models to find the right fit for your organizational requirements.



Formal security models provide the mathematical foundation for access control policies that protect organizational data. Choosing the right model is not merely an academic exercise. It determines how your systems enforce confidentiality, integrity, and separation of duties at a fundamental level.

Organizations that skip this step often end up with ad hoc access controls that contain hidden gaps, inconsistencies, and vulnerabilities. These weaknesses only surface during audits or, worse, during a breach.

This guide walks through the four most important formal security models, explains how each one works, and provides a decision framework for selecting the right model for your specific requirements. If you want to explore how different models map to your organization's constraints interactively, the Security Model Decision Matrix tool can help you evaluate trade-offs between models based on your regulatory environment and data classification needs.

What Are Formal Security Models?

A formal security model is a precise, mathematical description of a security policy. Unlike informal guidelines or best-practice documents, formal models use state machines, lattices, and logical rules to define exactly what access is permitted and what is denied. This precision makes it possible to prove that a system implementation correctly enforces the desired policy, assuming the model's assumptions hold.

The history of formal security models is closely tied to the history of government and military computing. In the early days of multi-user computing systems, the U.S. Department of Defense recognized that sharing a single computer among users with different security clearances required a rigorous approach to access control. The result was a series of research projects, beginning in the early 1970s, that produced the formal models discussed in this guide.

These models were not designed in isolation. They were responses to real security failures in early multi-user systems where informal access control policies proved insufficient.

Why Formal Models Matter

Formal models serve several critical purposes in security architecture.

Eliminating Ambiguity. When a policy says "only authorized users may access sensitive data," a formal model specifies exactly what "authorized" means in terms of clearance levels, categories, and access rules. Without this precision, different implementers may interpret the same policy differently. This leads to inconsistent and potentially insecure implementations across the organization.

Enabling Formal Verification. Without a formal model, there is no rigorous way to determine whether an access control implementation is correct. You can test individual scenarios, but you cannot prove that all possible access requests are handled correctly. Formal verification uses mathematical proof techniques to demonstrate that every reachable state of the system satisfies the security properties defined by the model. This is qualitatively different from testing, which can only show that specific scenarios work correctly but cannot guarantee the absence of flaws.

Identifying Conflicts and Gaps. When two access control rules interact in unexpected ways, a formal model can reveal the conflict during the design phase rather than after deployment. This is particularly important in large organizations where multiple policy authors may create rules that are individually correct but collectively inconsistent. The formal model provides a framework for reasoning about the combined effect of all rules simultaneously.

Providing a Common Language. Rather than describing access controls in vague natural language, stakeholders can reference specific model properties. Terms like "the simple security property" or "separation of duties" have precise, universally understood definitions. This shared vocabulary reduces miscommunication between technical and non-technical stakeholders. It facilitates more productive discussions about security requirements.

Supporting Regulatory Compliance. When an auditor asks "why is this access control configured this way?", the answer can reference a formal model property and its mathematical foundation. This is particularly valuable for organizations subject to strict regulatory oversight where access control decisions must be justified with rigor.

The Role of Models in Modern Security

While formal security models were developed primarily in the 1970s and 1980s, they remain the intellectual foundation of modern access control systems. Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Zero Trust architectures all trace their theoretical roots back to these formal models. Understanding the models helps security architects make better design decisions, even when implementing more modern frameworks.

RBAC can be viewed as a practical implementation mechanism for Clark-Wilson's separation of duties requirements. Instead of specifying individual (User, TP, CDI) triples, RBAC groups users into roles and assigns permissions to roles. This achieves much the same separation-of-duties effect while being far simpler to administer.

ABAC extends the lattice-based access decisions of Bell-LaPadula and Biba to support arbitrary attributes rather than simple hierarchical levels. An ABAC policy that grants access based on a combination of department, project, clearance, and location is a generalization of the lattice-based approach that the formal models pioneered.
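To make the generalization concrete, here is a minimal Python sketch of such a policy; the attribute names and the rule itself are invented for illustration and do not come from any particular ABAC product.

```python
def abac_allows(subject: dict, resource: dict, context: dict) -> bool:
    # A clearance comparison (the lattice element) is just one clause among
    # several attribute checks; the other attributes have no lattice ordering.
    return (subject["department"] == resource["owning_department"]
            and resource["project"] in subject["projects"]
            and subject["clearance"] >= resource["sensitivity"]
            and context["location"] in {"HQ", "approved-remote"})

print(abac_allows(
    {"department": "R&D", "projects": {"atlas"}, "clearance": 3},
    {"owning_department": "R&D", "project": "atlas", "sensitivity": 2},
    {"location": "HQ"},
))  # True
```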

Zero Trust's principle of "never trust, always verify" echoes the formal models' emphasis on explicit access decisions based on subject and object properties rather than network location or implicit trust.

Modern cloud environments add another layer of relevance. Cloud service providers must implement isolation between tenants, which maps to Brewer-Nash conflict of interest principles. Cloud IAM policies that restrict which principals can access which resources implement concepts from Bell-LaPadula and Clark-Wilson, even if the policy language does not reference these models by name.

  • AWS IAM policies enforce resource-level permissions with principal conditions
  • Azure RBAC implements role-based restrictions with scope hierarchies
  • Google Cloud IAM applies attribute-based conditions to resource access

All three enforce access control rules that can be analyzed through the lens of formal security models.

The emergence of data-centric security architectures has further renewed interest in formal models. As organizations move from perimeter-based security to protecting data wherever it resides, the lattice-based classification schemes of BLP and Biba become directly relevant. Data Loss Prevention (DLP) systems that classify and restrict data movement based on sensitivity labels are implementing BLP's information flow rules in a modern context.

Categories of Security Focus

Formal security models generally address one of three concerns:

  • Confidentiality models (like Bell-LaPadula) prevent unauthorized disclosure of information. They answer the question: "Who can read what?" The Data Classification Policy Architect can help you define the classification levels and categories that Bell-LaPadula implementations rely on.
  • Integrity models (like Biba and Clark-Wilson) prevent unauthorized modification of information. They answer the question: "Who can change what, and through what process?"
  • Conflict-of-interest models (like Brewer-Nash) prevent information flows that could create conflicts of interest. They answer the question: "What information should be kept separate?"

Some organizations need all three, while others may prioritize one over the others based on their regulatory environment and threat landscape.

  • A military intelligence agency prioritizes confidentiality above all else.
  • A financial institution prioritizes integrity to ensure transaction accuracy.
  • A consulting firm prioritizes conflict prevention to maintain client trust.
  • Most organizations need a combination of models.

The decision framework later in this guide helps determine the right blend. Understanding which category your primary concern falls into is the first step toward selecting the right model. The Security Model Decision Matrix can help you systematically evaluate your security priorities and map them to the appropriate formal models.

Bell-LaPadula Model (Confidentiality)

The Bell-LaPadula (BLP) model was developed in 1973 by David Elliott Bell and Leonard LaPadula for the U.S. Department of Defense. It is the foundational formal model for enforcing confidentiality in classified environments. The model uses a lattice-based access control structure where subjects (users or processes) and objects (files or resources) are assigned security labels from an ordered set of classification levels.

The original research was conducted under contract to the Electronic Systems Division of the U.S. Air Force. The work was motivated by the need to formalize the security requirements for the Multics operating system, which was one of the first systems designed to handle multiple classification levels simultaneously. The resulting model became the theoretical basis for the Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the "Orange Book," published by the DoD in 1983.

How It Works

Bell-LaPadula operates as a state machine where each state represents the current set of access permissions in the system. The model defines a set of secure states and proves that transitions between states preserve security properties.

Every subject is assigned a clearance level, such as:

  • Unclassified
  • Confidential
  • Secret
  • Top Secret

Every object is assigned a classification level from the same hierarchy. Access decisions are governed by two mandatory rules plus one discretionary rule.

The system enforces access decisions automatically based on the relationship between the subject's clearance and the object's classification. Users cannot override these decisions regardless of ownership or other factors. This mandatory enforcement is what makes BLP suitable for environments where information disclosure must be prevented at all costs.

In addition to the hierarchical levels, BLP supports compartments (also called categories or need-to-know labels). A subject's clearance includes both a level (e.g., Secret) and a set of compartments (e.g., {HUMINT, COMINT}). To access an object, two conditions must be met:

  • The subject's clearance level must dominate the object's classification level
  • The subject's compartment set must include all of the object's compartments

This creates a lattice rather than a simple linear hierarchy, enabling fine-grained access control based on need-to-know principles.

The lattice structure is mathematically significant because it defines a partial ordering over security labels. Not all labels are comparable. A subject with Secret/HUMINT clearance and a subject with Secret/COMINT clearance are neither above nor below each other in the lattice. Neither can access the other's compartmented information despite having the same hierarchical level. This non-comparability is what makes the lattice more expressive than a simple hierarchy.
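A minimal Python sketch of the dominance relation may help. The level names follow the hierarchy above; the Label structure and function names are assumptions made for illustration, not part of any particular implementation.

```python
from dataclasses import dataclass

# Hypothetical hierarchical levels, ordered from lowest to highest.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

@dataclass(frozen=True)
class Label:
    level: str                 # hierarchical classification or clearance level
    compartments: frozenset    # need-to-know categories, e.g. {"HUMINT"}

def dominates(a: Label, b: Label) -> bool:
    """a dominates b iff a's level is at least as high as b's AND
    a's compartments include all of b's compartments."""
    return (LEVELS[a.level] >= LEVELS[b.level]
            and a.compartments >= b.compartments)

# Secret/HUMINT and Secret/COMINT are incomparable in the lattice:
hum = Label("Secret", frozenset({"HUMINT"}))
com = Label("Secret", frozenset({"COMINT"}))
assert not dominates(hum, com) and not dominates(com, hum)
```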

The state machine formalism means that BLP can be analyzed for security properties at every state transition. When a subject requests access to an object, or when a subject's clearance changes, or when an object's classification changes, the model verifies that the resulting state is still secure. If the transition would violate a security property, it is rejected.

Core Rules and Axioms

The Bell-LaPadula model is defined by three fundamental properties:

Simple Security Property (No Read Up). A subject at a given clearance level cannot read an object at a higher classification level. A user with Confidential clearance cannot read a Secret document. This rule prevents unauthorized access to information above a subject's clearance. The rule is sometimes expressed as: "a subject can read an object only if the subject's clearance dominates the object's classification."

The Simple Security Property is intuitive and directly maps to the common-sense rule that classified information should only be accessible to people with the appropriate clearance. What makes it formal is the precise definition of "dominates" in terms of the lattice structure: the subject's level must be at least as high as the object's, and the subject's compartment set must be a superset of the object's.

Star Property (No Write Down). A subject at a given clearance level cannot write to an object at a lower classification level. A user with Secret clearance cannot write to a Confidential file. This rule prevents information leakage from higher classification levels to lower ones.

Without this rule, a cleared user could simply copy classified information into an unclassified file, effectively declassifying it without authorization. The Star Property is less intuitive than the Simple Security Property. It prevents a high-clearance subject from accidentally or intentionally leaking information by writing it to a lower-classified container.

However, it also means that a Secret-cleared user cannot send an email to an Unclassified email system, even if the email content is itself unclassified. This rigidity is one of BLP's limitations in practice. Real implementations often include "trusted subjects" who are permitted to bypass the Star Property under specific, audited circumstances, such as authorized downgrade officers who declassify information through a formal review process.

Discretionary Security Property. Access is also governed by an access control matrix that specifies which subjects have which access rights to which objects. This provides an additional layer of control beyond the mandatory rules. Even if a subject's clearance dominates an object's classification (satisfying the mandatory rules), the discretionary property may still deny access if the access matrix does not include the appropriate entry.

The combination of these three properties creates the "read down, write up" paradigm. Information flows only upward in the classification hierarchy, never downward. This guarantees that classified information cannot leak to lower classification levels through direct read or write operations.
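Continuing the sketch above (reusing Label and dominates), the three properties combine into a single access decision. The acl_allows parameter stands in for a lookup in the discretionary access matrix and is an assumption of this example.

```python
def can_read(subject: Label, obj: Label, acl_allows: bool) -> bool:
    # Simple Security Property (no read up): the subject's label must
    # dominate the object's, and the discretionary matrix must also permit it.
    return dominates(subject, obj) and acl_allows

def can_write(subject: Label, obj: Label, acl_allows: bool) -> bool:
    # Star Property (no write down): the object's label must dominate
    # the subject's, again subject to the discretionary matrix.
    return dominates(obj, subject) and acl_allows

secret = Label("Secret", frozenset())
confidential = Label("Confidential", frozenset())
assert can_read(secret, confidential, acl_allows=True)       # read down: allowed
assert not can_write(secret, confidential, acl_allows=True)  # write down: denied
```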

Best Use Cases

Bell-LaPadula is the model of choice for any environment where confidentiality is the primary concern and data is organized into hierarchical classification levels:

  • Military and intelligence agencies handling classified information at multiple levels represent the original and most direct use case. These organizations have established classification hierarchies with compartments for different intelligence disciplines. Every major military computing system that processes classified information implements BLP-derived access controls.

  • Government contractors subject to NIST SP 800-53 or ICD 503 often implement BLP-inspired controls as part of their system authorization packages. CMMC (Cybersecurity Maturity Model Certification) requirements for protecting Controlled Unclassified Information (CUI) also trace their theoretical foundations to BLP.

  • Healthcare organizations handling patient data can apply BLP concepts to create hierarchical sensitivity levels. Mental health records, substance abuse treatment records, and HIV status may be classified at a higher sensitivity level than general health records. Access is restricted to providers who have both the appropriate role and the appropriate sensitivity clearance.

  • Technology companies protecting trade secrets can use BLP to prevent information leakage between projects. Source code for a next-generation product might be classified at a higher level than documentation for released products. Access to pre-release financial data during quiet periods follows BLP-style confidentiality levels.

  • Law firms managing privileged client communications can apply BLP to ensure information from one client's matter cannot be accessed by attorneys working on a different client's matter at a lower privilege level.

Limitations

The most significant limitation of Bell-LaPadula is that it addresses only confidentiality, not integrity. Because the Star Property restricts only downward writes, it still permits "blind" writes to objects at higher classification levels, so a user with a low clearance could corrupt high-classification data by writing garbage to it. This is precisely the gap that integrity models like Biba were designed to fill.

BLP also assumes a static, hierarchical classification system. In modern dynamic environments where data sensitivity changes frequently or where classification does not fit a strict hierarchy, the model can be rigid and difficult to maintain. Reclassifying data requires administrative action and may trigger cascading changes throughout the system.

Additionally, BLP does not address covert channels. These are indirect methods of transmitting information that bypass the model's rules entirely:

  • A covert timing channel might encode information in the timing of system events visible across classification levels
  • A covert storage channel might encode information in the allocation of shared resources like disk space or CPU time

A complete BLP implementation should include covert channel analysis as part of the system accreditation process.

The tranquility principle presents another challenge. In the "strong tranquility" variant, objects cannot change classification once assigned. In the "weak tranquility" variant, classification can change but only in ways that do not violate the security properties. Neither variant handles dynamic data classification well.

Data that is unclassified individually may become classified when aggregated. This phenomenon, known as the aggregation problem, is not addressed by BLP. For environments where aggregation and inference attacks are a concern alongside confidentiality, the Database Inference & Aggregation Simulator can help you identify and mitigate these risks in multilevel security implementations.

Biba Model (Integrity)

The Biba model, published by Kenneth Biba in 1977, is essentially the mathematical dual of Bell-LaPadula. Where BLP focuses on preventing unauthorized disclosure (confidentiality), Biba focuses on preventing unauthorized modification (integrity). The model uses integrity levels instead of classification levels, and its rules are the mirror image of BLP's rules.

Biba's work was motivated by the recognition that confidentiality alone was insufficient for many computing environments. A military logistics system, for example, must not only prevent unauthorized users from reading supply chain data, but also prevent unauthorized or unreliable sources from corrupting that data. A corrupted inventory database is potentially more dangerous than a disclosed one, because incorrect supply data could lead to operational failures in the field.

How It Works

Biba assigns integrity levels to both subjects and objects. These integrity levels represent the degree of trustworthiness of the data or the user.

  • A subject with a high integrity level is trusted to produce reliable, accurate data
  • An object with a high integrity level is considered to be accurate and uncorrupted

The model then defines rules that prevent low-integrity entities from corrupting high-integrity data.

The integrity lattice in Biba works in the opposite direction from BLP's confidentiality lattice. While BLP restricts the upward flow of read access and the downward flow of write access, Biba restricts:

  • The downward flow of read access (to prevent contamination from unreliable sources)
  • The upward flow of write access (to prevent corruption of trustworthy data)

Understanding the distinction between confidentiality and integrity lattices is crucial. In a confidentiality lattice, "high" means "more secret," and information must not flow downward. In an integrity lattice, "high" means "more trustworthy," and corruption must not flow upward.

The concept of "contamination" is central to Biba. If a high-integrity process reads data from a low-integrity source, the process itself becomes contaminated. Its subsequent outputs may be influenced by the unreliable input.

This is analogous to a financial audit: if an auditor relies on unverified data to produce an audit report, the report's integrity is compromised regardless of the auditor's competence. The contamination propagates through all subsequent computations that depend on the contaminated data.

This contamination model reflects a fundamental truth about data processing. The trustworthiness of output data is limited by the trustworthiness of input data. No amount of processing rigor can restore integrity lost by incorporating untrustworthy inputs. Biba formalizes this principle and provides rules to prevent it.

Core Rules and Axioms

Biba defines three primary integrity axioms:

Simple Integrity Axiom (No Read Down). A subject at a given integrity level cannot read an object at a lower integrity level. This prevents a trusted process from being contaminated by unreliable data. If a financial calculation engine with high integrity reads data from an unverified source, the integrity of its output can no longer be guaranteed.

This axiom can be counterintuitive. Why would reading low-integrity data be harmful? The answer is that the integrity model is concerned with the trustworthiness of outputs, not just the modification of inputs. If a trusted process incorporates untrusted data into its decisions, its decisions are no longer fully trustworthy.

Consider a navigation system: if a high-integrity GPS module reads position data from an untrusted sensor, the navigation output is contaminated even though the GPS module itself was not modified.

Star Integrity Axiom (No Write Up). A subject at a given integrity level cannot write to an object at a higher integrity level. This prevents an untrusted process from corrupting trusted data. A user who has not been verified cannot modify records in the authoritative database. An unvalidated data feed cannot update the production data warehouse.

This is the most intuitive of Biba's axioms. It directly implements the common-sense principle that untrustworthy sources should not be able to modify trusted data. Without this rule, a compromised or unreliable process could silently corrupt the organization's most trusted data stores. The damage from such corruption can be enormous because decisions based on corrupted data may not be recognized as flawed until significant harm has occurred.

Invocation Property. A subject at a given integrity level cannot invoke (call or execute) a subject at a higher integrity level. This prevents untrusted code from leveraging trusted code to perform unauthorized modifications indirectly. Without this rule, a low-integrity process could call a high-integrity process and manipulate its inputs to cause the high-integrity process to corrupt high-integrity data. This is sometimes called a "confused deputy" attack.

The combination of these rules creates a "read up, write down" paradigm. This is the exact opposite of Bell-LaPadula. Information flows downward in the integrity hierarchy. Contamination from low-integrity sources is blocked from reaching high-integrity objects.
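For illustration, here is a small Python sketch of the three axioms; the integrity level names are hypothetical and chosen only to mirror the examples above.

```python
# Hypothetical integrity ordering, from least to most trustworthy.
INTEGRITY = {"Untrusted": 0, "Validated": 1, "Production": 2}

def biba_can_read(subject: str, obj: str) -> bool:
    # Simple Integrity Axiom (no read down): the object must be at least
    # as trustworthy as the subject reading it.
    return INTEGRITY[obj] >= INTEGRITY[subject]

def biba_can_write(subject: str, obj: str) -> bool:
    # Star Integrity Axiom (no write up): the subject must be at least
    # as trustworthy as the object it modifies.
    return INTEGRITY[subject] >= INTEGRITY[obj]

def biba_can_invoke(caller: str, callee: str) -> bool:
    # Invocation Property: a subject may not invoke a more trusted subject.
    return INTEGRITY[caller] >= INTEGRITY[callee]

assert not biba_can_write("Untrusted", "Production")   # corruption blocked
assert not biba_can_read("Production", "Untrusted")    # contamination blocked
```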

Best Use Cases

Biba is most appropriate for environments where data integrity is paramount:

  • Software development environments benefit from Biba concepts by ensuring untested code cannot be promoted to production without proper verification. Code in a developer's workspace has low integrity. Code that passes automated tests achieves a higher integrity level. Code that passes code review and QA testing achieves the highest pre-production integrity level. Only code at the highest integrity level can be deployed to the production environment. Modern CI/CD pipelines implement this graduated integrity model.

  • Manufacturing systems use Biba-like controls to ensure sensor data and control commands maintain integrity. An unverified sensor reading should not influence a safety-critical control decision. In SCADA and ICS environments, the integrity of control signals is literally a life-safety concern.

  • Healthcare systems that must ensure the accuracy of patient records implement Biba-inspired controls. A nursing student's preliminary assessment should not overwrite an attending physician's diagnosis without proper review channels.

  • Package management systems implement Biba's principles through code signing and verification. An unsigned package (low integrity) cannot replace a signed package (high integrity) in a trusted repository. Package managers like apt, yum, and npm verify digital signatures before installing.

  • Financial trading systems use Biba-inspired controls to ensure trade execution data maintains integrity throughout clearing and settlement. A trade record validated by the exchange has high integrity and should not be modifiable by an unvalidated source.

Limitations

Like Bell-LaPadula, Biba addresses only one dimension of security. It provides no confidentiality protections. An organization that implements only Biba protects data integrity but does nothing to prevent unauthorized users from reading sensitive information.

Biba's strict rules can also be impractical in real-world systems where subjects regularly need to interact with data at multiple integrity levels. A developer may need to read external documentation (low integrity) while working on production code (high integrity). Biba's "no read down" rule would prevent this.

In practice, many Biba implementations use a "water mark" variant that dynamically lowers a subject's integrity level when it reads low-integrity data. This is more practical but provides weaker guarantees than the strict model.
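A short sketch of this low-watermark behaviour, using the same hypothetical integrity ordering as above, shows how contamination lowers a subject's effective level instead of blocking the read.

```python
INTEGRITY = {"Untrusted": 0, "Validated": 1, "Production": 2}   # as above

class LowWatermarkSubject:
    """Low-watermark variant: a subject's effective integrity level drops
    to the lowest level it has read, rather than the read being denied."""

    def __init__(self, level: str):
        self.level = level

    def read(self, object_level: str) -> None:
        # Reading lower-integrity data contaminates the subject.
        if INTEGRITY[object_level] < INTEGRITY[self.level]:
            self.level = object_level

    def can_write(self, object_level: str) -> bool:
        # Writes are judged against the (possibly lowered) watermark.
        return INTEGRITY[self.level] >= INTEGRITY[object_level]

proc = LowWatermarkSubject("Production")
assert proc.can_write("Production")
proc.read("Untrusted")                    # watermark drops to "Untrusted"
assert not proc.can_write("Production")   # weaker guarantee, more practical
```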

The model lacks the concept of well-formed transactions that Clark-Wilson introduces. Biba can tell you that an untrusted subject cannot modify a trusted object, but it cannot specify how modifications should occur or what constitutes a valid modification.

Finally, Biba's integrity levels are static and assigned administratively. This does not capture the dynamic nature of trust in real systems. A process that was trustworthy yesterday may be compromised today, but Biba has no mechanism for dynamically adjusting integrity levels based on observed behavior.

Clark-Wilson Model (Integrity via Transactions)

The Clark-Wilson model, published by David D. Clark and David R. Wilson in 1987, takes a fundamentally different approach to integrity compared to Biba. Rather than using lattice-based access levels, Clark-Wilson enforces integrity through well-formed transactions and separation of duties. This makes it much closer to how real-world business processes actually work, particularly in financial and accounting systems.

The model was explicitly designed to address the gap between academic access control models and the practical integrity requirements of commercial computing. Clark and Wilson observed that business integrity requirements (such as double-entry bookkeeping, separation of duties, and audit trails) were not well-served by lattice-based models.

How It Works

Clark-Wilson distinguishes between two types of data items:

  • Constrained Data Items (CDIs) are the critical data objects that must maintain integrity, such as account balances, inventory records, or patient medication lists.
  • Unconstrained Data Items (UDIs) are data that enters the system from external sources and has not yet been validated.

The distinction is important because UDIs represent a potential source of corruption that must be handled carefully before they can influence CDIs.

The model defines two types of procedures:

  • Transformation Procedures (TPs) are the only authorized mechanisms for modifying CDIs. Each TP is a well-formed transaction that takes a CDI from one valid state to another valid state. For example, a bank transfer TP would debit one account and credit another, maintaining the invariant that total assets remain balanced.
  • Integrity Verification Procedures (IVPs) confirm that CDIs are in a valid state. For example, an IVP might verify that all account balances sum to the expected total, or that every inventory item has a corresponding purchase order.

Users cannot access CDIs directly. Instead, users are authorized to execute specific TPs on specific CDIs, creating a three-part relationship: (User, TP, CDI). This is fundamentally different from the simple subject-object relationship in BLP and Biba. The triple ensures that even authorized users can only modify data through approved procedures, never through direct manipulation.

A bank teller cannot edit an account balance directly. They can only execute deposit, withdrawal, and transfer transactions that have been certified to maintain account integrity. The teller cannot execute an interest calculation TP (which is reserved for the interest processing system) or modify the audit log CDI (which is reserved for system administrators).

Core Rules and Axioms

Clark-Wilson defines five certification rules and four enforcement rules, nine rules in total:

Certification Rules (verified by the security officer or auditor):

  • C1: All IVPs must ensure that all CDIs are in a valid state when the IVP is run. IVPs must check all relevant integrity constraints, not just a subset.
  • C2: All TPs must be certified to transform CDIs from one valid state to another valid state. Edge cases, boundary conditions, and error handling must all be verified.
  • C3: The allowed relationships (User, TP, CDI) must enforce separation of duties. No single user should execute all TPs necessary to complete a critical business process. The person who creates a purchase order should not also be the person who approves it.
  • C4: All TPs must append sufficient information to a log to reconstruct the operation (audit trail). This log must itself be a CDI, protected from unauthorized modification.
  • C5: Any TP that takes a UDI as input must transform it into a CDI or reject it (input validation). This is the boundary between the untrusted outside world and the trusted internal data.

Enforcement Rules (enforced by the system):

  • E1: The system must maintain the list of certified (TP, CDI) pairs and ensure TPs only manipulate authorized CDIs.
  • E2: The system must maintain the list of (User, TP, CDI) triples and enforce them.
  • E3: The system must authenticate the identity of each user attempting to execute a TP.
  • E4: Only the security officer (or equivalent authority) may change the authorization lists.
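The enforcement rules lend themselves to a compact sketch. The following Python example is illustrative only: the user, TP, and CDI names are invented, and it assumes the caller has already been authenticated (E3).

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical authorization tables, changeable only by the security officer (E4).
CERTIFIED_TP_CDI = {("post_deposit", "account_balances")}                 # E1
ALLOWED_TRIPLES = {("teller_jane", "post_deposit", "account_balances")}   # E2
AUDIT_LOG = []   # C4: the log is itself a CDI and must be append-only

def execute_tp(user: str, tp: str, cdi: str, run_tp) -> None:
    # Assumes `user` has already been authenticated (E3).
    if (tp, cdi) not in CERTIFIED_TP_CDI:
        raise PermissionError("TP is not certified for this CDI (E1)")
    if (user, tp, cdi) not in ALLOWED_TRIPLES:
        raise PermissionError("user/TP/CDI triple is not authorized (E2)")
    result = run_tp()    # the certified, well-formed transaction itself (C2)
    AUDIT_LOG.append({   # record enough to reconstruct the operation (C4)
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user, "tp": tp, "cdi": cdi,
        "result_digest": hashlib.sha256(repr(result).encode()).hexdigest(),
    })

execute_tp("teller_jane", "post_deposit", "account_balances",
           run_tp=lambda: {"account": "12345", "amount": 100})
```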

Best Use Cases

Clark-Wilson is the natural choice for:

  • Banking and financial services. Account balances are CDIs. Deposits, withdrawals, transfers, and interest calculations are TPs. Daily reconciliation processes are IVPs. SOX Section 404 internal control requirements align closely with Clark-Wilson certification rules.

  • Healthcare electronic health records. Patient records are CDIs, clinical workflows are TPs, and audit processes are IVPs. HIPAA's requirement for audit trails maps directly to Clark-Wilson's C4 rule.

  • Inventory management systems. Inventory counts and locations are CDIs, receiving and shipping processes are TPs, and physical inventory counts are IVPs. This approach aligns with the inventory and process control expectations of quality frameworks like ISO 9001.

  • Insurance claims processing. Claims are processed through certified workflows with separation of duties between adjusters, reviewers, and approvers.

  • Government procurement systems. The Federal Acquisition Regulation (FAR) requires separation of duties between contracting officers, contracting officer representatives, and receiving officials.

Limitations

Clark-Wilson's complexity is its primary limitation. Defining all valid TPs, certifying them, and maintaining the (User, TP, CDI) relationship tables requires significant administrative effort. For a large ERP system with thousands of transaction types, hundreds of data tables, and thousands of users, the number of triples can be enormous.

The model does not address confidentiality at all. If confidentiality is also a concern, Clark-Wilson must be combined with a confidentiality model like Bell-LaPadula.

Clark-Wilson does not scale easily to systems with thousands of data items and procedures without automation. Modern implementations use role-based grouping to manage the complexity.

The model assumes a trusted security officer who manages the authorization lists and certifies TPs. In practice, this role may not exist as a single person. The certification process may be informal or incomplete.

Brewer-Nash Model (Chinese Wall)

The Brewer-Nash model, published by David Brewer and Michael Nash in 1989, addresses a unique security concern that the other models do not: conflicts of interest. Originally designed for the financial consulting industry, the model prevents information flows between organizations that are competitors or have conflicting interests.

The model was motivated by real-world regulatory requirements in the financial services industry. Conflicts of interest can lead to:

  • Insider trading
  • Unfair competitive advantages
  • Violations of fiduciary duty

The SEC, FINRA, and other regulators require financial firms to maintain information barriers between different departments and between client engagements that involve competing interests.

How It Works

Brewer-Nash organizes objects into three hierarchical levels:

  1. Individual objects at the lowest level, such as specific files or datasets belonging to a company
  2. Company datasets in the middle, where each dataset contains all information relating to a single company
  3. Conflict of interest classes at the top, grouping company datasets that represent competitors

For example, a conflict of interest class might contain datasets for Bank A and Bank B, since a consultant with inside information about both competing banks would face a conflict of interest.

The model's access control rules are dynamic, meaning they change based on the history of a subject's access patterns. This is fundamentally different from BLP, Biba, and Clark-Wilson, where access rules are static. In Brewer-Nash, a subject's permissions evolve automatically as they access data, progressively narrowing to prevent conflicts.

When a subject first accesses an object in a particular company dataset, the model locks out access to all other company datasets within the same conflict of interest class. This ensures that a consultant who has read confidential information about Bank A cannot subsequently access confidential information about Bank B if both banks are in the same conflict class.

The model allows maximum initial freedom but progressively restricts access as the subject's exposure to confidential information grows. The dynamic nature means that access decisions depend on the full history of a subject's prior accesses.

Core Rules and Axioms

Simple Security Rule. A subject can access an object only if:

  • The object is in the same company dataset as objects already accessed by the subject, OR
  • The object belongs to a conflict of interest class in which the subject has not yet accessed any data

Once a subject reads data from Bank A, they can continue to access Bank A data but cannot access Bank B data if both are in the same conflict class. Each access decision further constrains future access decisions.

Star Property. Write access is only permitted if:

  • The subject cannot read any object in a different company dataset from the object to be written
  • The object to be written is in the same company dataset as objects the subject can read

This prevents indirect information leakage through write operations. Without this property, a consultant who has read Bank A's data could write a report incorporating Bank A's information that is accessible to Bank B's team.

Sanitization Exception. The model allows access to sanitized (anonymized) data that has been stripped of company-identifying information. This permits analysts to work with aggregated industry data without creating conflicts. This exception is essential for practical operation because without it, industry benchmarking and aggregate analysis would be impossible.
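A brief Python sketch shows how these history-based decisions differ from the static models; the company names, conflict classes, and sanitized object are hypothetical.

```python
from collections import defaultdict

# Hypothetical conflict-of-interest classes grouping company datasets.
CONFLICT_CLASS = {"Bank A": "retail banking", "Bank B": "retail banking",
                  "Oil Co": "energy"}
SANITIZED = {"industry-benchmark-report"}   # stripped of identifying detail

access_history = defaultdict(set)   # company datasets each subject has read

def can_read(subject: str, company: str, obj: str) -> bool:
    if obj in SANITIZED:
        return True                          # sanitization exception
    for seen in access_history[subject]:
        if seen != company and CONFLICT_CLASS[seen] == CONFLICT_CLASS[company]:
            return False                     # would cross the wall
    return True

def record_read(subject: str, company: str) -> None:
    access_history[subject].add(company)     # decisions depend on history

assert can_read("alice", "Bank A", "bankA-strategy.docx")
record_read("alice", "Bank A")
assert not can_read("alice", "Bank B", "bankB-strategy.docx")  # same class
assert can_read("alice", "Oil Co", "drilling-forecast.xlsx")   # different class
```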

Best Use Cases

Brewer-Nash is essential for:

  • Investment banks maintaining information barriers between advisory, trading, and research departments per SEC regulations
  • Management consulting firms serving clients across multiple industries, including direct competitors
  • Law firms representing clients who may have adverse interests, governed by Rules of Professional Conduct (particularly Rules 1.7 and 1.9)
  • Accounting firms auditing multiple companies in the same industry, subject to SOX and PCAOB standards
  • Cloud service providers ensuring administrative access to one tenant's infrastructure does not enable access to a competing tenant's data

Limitations

The dynamic nature of Brewer-Nash access controls makes them more complex to implement than static access control models. The system must maintain a complete access history for each subject. As subjects access more data over time, their accessible scope narrows, which can create operational challenges.

Staff turnover and role changes present challenges. When a consultant finishes work for one client and is later reassigned to a competitor, the strict model would block that access permanently. Practical implementations typically add a "cooling off" period or history reset mechanism that is not part of the original formal model.

The model does not address confidentiality hierarchies (BLP) or data integrity (Biba/Clark-Wilson). It must be combined with other models for comprehensive security.

Decision Framework

Selecting the right security model requires analyzing your organization's primary security concerns, regulatory requirements, data characteristics, and operational constraints. The Security Model Decision Matrix provides an interactive way to evaluate these factors.

Model Comparison Table

| Aspect | Bell-LaPadula | Biba | Clark-Wilson | Brewer-Nash |
| --- | --- | --- | --- | --- |
| Primary Focus | Confidentiality | Integrity | Integrity via transactions | Conflict of interest |
| Core Mechanism | Lattice-based clearance levels | Lattice-based integrity levels | Well-formed transactions, separation of duties | Dynamic access based on history |
| Key Rules | No read up, no write down | No read down, no write up | CDIs, TPs, IVPs, user-TP-CDI triples | Dynamic conflict class lockout |
| Access Type | Mandatory (MAC) | Mandatory (MAC) | Transaction-based | Dynamic mandatory |
| Best For | Military, classified data, trade secrets | Software integrity, manufacturing, data accuracy | Financial systems, ERP, healthcare records | Consulting, legal, investment banking |
| Regulatory Alignment | NIST SP 800-53, ICD 503, ITAR | FISMA integrity controls, FDA 21 CFR Part 11 | SOX, PCI DSS, HIPAA transaction controls | SEC regulations, FINRA, legal ethics rules |
| Limitations | No integrity protection, rigid hierarchy | No confidentiality, impractical strict rules | Complex administration, no confidentiality | Narrowing access over time, no hierarchy |
| Implementation Examples | SELinux, Trusted Solaris | Windows MIC, package managers, CI/CD pipelines | Banking systems, ERP modules | CRM systems, consulting firm databases |

Requirements-to-Model Decision Matrix

| Your Primary Requirement | Recommended Model(s) | Rationale |
| --- | --- | --- |
| Prevent unauthorized disclosure of classified data | Bell-LaPadula | Purpose-built for hierarchical confidentiality |
| Ensure data cannot be tampered with by untrusted sources | Biba | Prevents low-integrity contamination of trusted data |
| Enforce business transaction integrity and audit trails | Clark-Wilson | Well-formed transactions with separation of duties |
| Prevent conflicts of interest across competing clients | Brewer-Nash | Dynamic access control based on access history |
| Protect classified data AND ensure integrity | Bell-LaPadula + Biba | Combined lattice for both confidentiality and integrity |
| Financial regulatory compliance (SOX, PCI DSS) | Clark-Wilson (primary) + BLP | Transaction integrity with confidentiality overlay |
| Legal or consulting firm with competing clients | Brewer-Nash + Clark-Wilson | Conflict prevention with transaction integrity |
| Healthcare with HIPAA requirements | Clark-Wilson + BLP | Transaction controls for records, confidentiality for PHI |
| Government contractor with CMMC requirements | Bell-LaPadula + Biba | Confidentiality and integrity per CUI requirements |
| Cloud multi-tenant isolation | Brewer-Nash + BLP | Tenant isolation with confidentiality guarantees |

Step-by-Step Selection Process

Step 1: Identify Your Primary Security Concern. Determine whether your most critical requirement is confidentiality, integrity, or conflict prevention. Most organizations will identify one primary concern, though many will need to address multiple concerns. Document your primary and secondary concerns with specific examples of the data and scenarios you need to protect.

Step 2: Map Regulatory Requirements. Review your regulatory obligations and map them to model capabilities:

  • SOX and PCI DSS requirements for transaction controls point toward Clark-Wilson
  • NIST 800-171 requirements for CUI protection point toward Bell-LaPadula
  • SEC and FINRA requirements for information barriers point toward Brewer-Nash

Document the specific regulatory clauses and their corresponding model properties.

Step 3: Assess Data Characteristics. Consider whether your data fits:

  • A hierarchical classification scheme (BLP/Biba)
  • A transaction-processing model (Clark-Wilson)
  • A conflict-class grouping (Brewer-Nash)

Data that changes classification frequently may not be a good fit for static lattice models. Data that requires structured business processes for modification is a natural fit for Clark-Wilson.

Step 4: Evaluate Operational Constraints. Consider the administrative burden of each model:

  • Clark-Wilson requires defining all valid transactions and maintaining user-TP-CDI triples
  • BLP requires maintaining a classification hierarchy and clearance assignments
  • Brewer-Nash requires defining conflict classes and tracking access history

Assess your organization's capacity to manage these requirements.

Step 5: Design the Combined Policy. Most organizations will select a primary model and supplement it with elements from other models. Use the Security Model Decision Matrix to explore how different combinations address your specific requirements. Document the combined policy in a formal security architecture document. Include exceptions and deviations with their justifications.

Implementation Considerations

Translating a formal security model into a working implementation requires careful mapping between the model's abstract concepts and the concrete mechanisms available in your technology stack.

Mapping Models to Technology

Bell-LaPadula maps most directly to operating system-level mandatory access control mechanisms. SELinux implements Multi-Level Security (MLS) policies that enforce BLP rules through kernel-level access decisions, and Trusted Solaris provides labeled security with similar semantics. Cloud implementations use IAM policies with resource tags that represent classification levels.

Biba maps to integrity controls in software supply chains, package management systems, and code signing infrastructure. Windows Mandatory Integrity Control (MIC) is an operating system example: its default no-write-up rule is a simplified Biba mechanism rather than a confidentiality control. The concept of "no write up" also appears in CI/CD pipelines where untested code cannot be promoted to production without going through approved gates. Container image signing and verification implements Biba's no-write-up principle, and secure boot chains enforce its invocation property.

Clark-Wilson maps most naturally to application-level transaction controls. Database stored procedures serve as Transformation Procedures. Database constraints and triggers serve as Integrity Verification Procedures. Modern ERP systems like SAP, Oracle, and Microsoft Dynamics implement Clark-Wilson principles through their authorization and audit frameworks.

Brewer-Nash maps to CRM systems with conflict checking, document management systems with dynamic access controls, and multi-tenant cloud platforms with tenant isolation.

Common Implementation Pitfalls

Common mistakes to avoid:

  • Implementing only the "happy path" while ignoring edge cases. A BLP implementation that enforces "no read up" but fails to enforce "no write down" provides incomplete confidentiality protection.
  • Conflating authentication with authorization. Formal models assume subjects are correctly identified, but the models themselves define authorization rules. Strong authentication without proper authorization enforcement provides a false sense of security.
  • Overlooking covert channel analysis in BLP implementations. Even with correct explicit access rules, information can leak through timing channels, storage channels, or other indirect means.
  • Over-reliance on a single enforcement point. If all access control is enforced at the application layer, a direct database connection bypasses all controls. Defense in depth requires enforcement at multiple layers.
  • Failure to test the negative case. Organizations test that authorized access works but do not test that unauthorized access is properly denied.

Performance and Scalability

Lattice-based models (BLP, Biba) scale well: each access decision requires only a comparison of two labels, and the label store grows linearly with the number of subjects and objects. Clark-Wilson can become complex as the number of (User, TP, CDI) triples grows, potentially requiring optimization through role-based grouping. Brewer-Nash requires maintaining per-subject access history, which grows over time and must be efficiently queried for each access decision.

Organizations with large-scale deployments should consider:

  • Caching access decisions
  • Pre-computing allowable access based on current state
  • Using policy decision points (PDPs) that evaluate requests in milliseconds

Modern policy engines like Open Policy Agent (OPA) can implement formal model rules as declarative policies. XACML provides another standardized approach to expressing complex access control policies.
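As a rough sketch of the caching option listed above (not tied to any particular policy engine, with invented label names), access decisions keyed on labels and actions can be memoized:

```python
from functools import lru_cache

LABEL_ORDER = {"public": 0, "internal": 1, "restricted": 2}

def evaluate_policy(subject_level: str, object_level: str, action: str) -> bool:
    # Stand-in for a full policy decision point (PDP) evaluation.
    if action == "read":
        return LABEL_ORDER[subject_level] >= LABEL_ORDER[object_level]  # no read up
    if action == "write":
        return LABEL_ORDER[subject_level] <= LABEL_ORDER[object_level]  # no write down
    return False

@lru_cache(maxsize=65536)
def cached_decision(subject_level: str, object_level: str, action: str) -> bool:
    # Label comparisons are pure functions of their inputs, so results can be
    # memoized; the cache must be cleared whenever labels are reassigned.
    return evaluate_policy(subject_level, object_level, action)

assert cached_decision("restricted", "internal", "read")
cached_decision.cache_clear()   # e.g. after a reclassification event
```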

Real-World Application Examples

Understanding how formal security models apply in practice helps bridge the gap between theory and implementation.

Defense and Intelligence: Bell-LaPadula in Multi-Level Secure Systems

A defense contractor processing information at multiple classification levels deploys a Multi-Level Secure (MLS) system built on SELinux. Every file is labeled with a classification (Unclassified, Confidential, Secret, Top Secret) and a set of compartments (HUMINT, SIGINT, COMINT). Users are assigned clearances and need-to-know compartments. The SELinux MLS policy enforces BLP rules at the kernel level, preventing unauthorized reads and writes across classification boundaries.

Financial Services: Clark-Wilson in Banking

A commercial bank implements Clark-Wilson principles in its core banking platform. Account balances are Constrained Data Items (CDIs). All modifications must occur through certified Transformation Procedures: deposits, withdrawals, transfers, and interest calculations. Each TP is validated to maintain the invariant that the sum of all account balances equals the total asset figure. Separation of duties requires different users to initiate and approve transactions above a threshold. Nightly batch processes serve as Integrity Verification Procedures.

Consulting: Brewer-Nash in Professional Services

A management consulting firm serves clients across multiple industries, including competing companies. The firm implements Brewer-Nash controls in its knowledge management system. When a consultant is staffed on a project for Telecom Company A, the system automatically restricts their access to confidential materials for Telecom Company B. Sanitized industry research reports are exempt from the restrictions.

Healthcare: Combined BLP and Clark-Wilson for EHR Systems

A hospital network implements a hybrid approach. Bell-LaPadula handles patient record confidentiality with sensitivity levels (general health, mental health, substance abuse, HIV status). Clark-Wilson handles clinical data integrity through certified clinical workflows. All clinical data modifications occur through TPs that maintain medical record integrity, enforce co-signature requirements, and generate audit trails.

Software Development: Biba in CI/CD Pipelines

A software company implements Biba integrity controls in its deployment pipeline:

  • Source code in feature branches has the lowest integrity level
  • Code that has passed unit tests receives a higher integrity level
  • Code that has passed integration tests and code review receives the highest pre-production integrity level
  • Only code at the highest integrity level can be deployed to production

Code signing at each stage provides cryptographic verification of integrity level transitions.
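A minimal sketch of such a promotion gate, with invented artifact and stage names, might look like this:

```python
# Hypothetical promotion gate: an artifact may move to a stage only if its
# verified integrity level meets that stage's minimum.
STAGE_MINIMUM = {"dev": 0, "staging": 1, "production": 2}
ARTIFACT_LEVEL = {"feature-build": 0, "unit-tested-build": 1, "reviewed-build": 2}

def can_promote(artifact: str, stage: str, signature_valid: bool) -> bool:
    # The signature flag stands in for cryptographic verification of the
    # recorded level; an unverifiable artifact is treated as lowest integrity.
    if not signature_valid:
        return False
    return ARTIFACT_LEVEL[artifact] >= STAGE_MINIMUM[stage]

assert can_promote("reviewed-build", "production", signature_valid=True)
assert not can_promote("unit-tested-build", "production", signature_valid=True)
```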

These examples demonstrate that formal security models are not merely academic constructs but practical tools for designing security architectures. By understanding the models' properties and limitations, security architects can select and combine models to create access control systems that are both rigorous and practical.

Frequently Asked Questions

Can multiple security models be combined within a single organization?

Yes, most real-world implementations combine multiple models to address different security concerns simultaneously. For example, an organization might apply Bell-LaPadula for classified document handling while using Clark-Wilson for financial transaction processing. The key challenge is ensuring the combined policies do not create contradictions or gaps that an attacker could exploit.
