D3: Principles of Secure Design Flashcards
Principles of Secure Design
The security requirements of an information system are driven by the security policy of the organization that will use the system
To incorporate the abstract goals of a security policy into an information system’s architecture, you will need to use security models
We serve the business, so ultimately our security controls should reflect business objectives
Security Model
A security model lays out the framework and mathematical models that act as security-related specifications for a system architecture
System Architecture
The overall design of the components - hardware, OS, applications, and networks - of an information system
The design should meet the specifications provided by the security model.
System
A group of individual components working together towards a common goal or outcome e.g. computers, network segments, departments
Security Model Examples and Significance for Exam
- State Machine Model
- Bell-LaPadula Model**
- Biba Model**
- Clark-Wilson Model*
- Brewer & Nash Model*
- Information Flow Model
- Non-Interference Model
- Lattice Model
State Machine Model
The state of a system is its snapshot at any one particular moment. The state machine model describes subjects, objects, and sequences in a system. The focus of this model is to capture the system’s state and ensure its security
When an object accepts input, the value of the state variable is modified. For a subject to access this object or modify the object value, the subject should have appropriate access rights
State transitions refer to activities that alter a system state
**Other models are built off this model
If a system starts, functions, and even fails/shuts down securely, it's a secure system. This model teaches that unless you can secure a system in all three states, you don't have a secure system
A system is most difficult to secure at start-up because the security mechanisms haven't all loaded yet. During the initial load, the system is very vulnerable.
Shutdown - trusted recovery: in the event of a violation of the system's security, it should shut down in a way that protects itself, e.g. the blue screen of death
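A tiny Python sketch, purely illustrative (state names and allowed transitions are assumptions, not from the source), of the state machine idea: every state and every transition has to be checked so the system stays secure from start-up through shutdown:

```python
# Hypothetical sketch: a system modeled as a finite state machine where every
# state and every transition must be verified as secure. Names are illustrative.

SECURE_STATES = {"startup", "running", "shutdown"}   # all three states must be securable
ALLOWED_TRANSITIONS = {
    ("startup", "running"),
    ("running", "running"),    # normal operation
    ("running", "shutdown"),   # trusted recovery / secure shutdown
}

class StateMachine:
    def __init__(self):
        self.state = "startup"

    def transition(self, new_state):
        # A state transition is only permitted if the target state is a defined
        # secure state and the transition itself is on the allowed list.
        if new_state not in SECURE_STATES:
            raise ValueError(f"{new_state} is not a defined secure state")
        if (self.state, new_state) not in ALLOWED_TRANSITIONS:
            raise PermissionError(f"insecure transition {self.state} -> {new_state}")
        self.state = new_state

sm = StateMachine()
sm.transition("running")
sm.transition("shutdown")   # the system shuts down in a way that protects itself
```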
Bell-LaPadula Model - Overview
- confidentiality
The ONLY model designed for confidentiality aka Confidentiality Model
- a formal mathematical model built on the state machine model, so it doesn't mean anything unless the three states are secure.
Developed by David Elliot Bell and Len LaPadula
- this model focuses on data confidentiality and access to classified information
- a formal model developed for the DoD multilevel security policy
- this formal model divides entities in an information system into subjects and objects
- model is built on the concept of a state machine with different allowable states (i.e. secure state)
Bell-LaPadula Model - Three Security Rules to Enforce Confidentiality
- Simple Security Property - "no read up"
- a subject cannot read data from a security level higher than the subject's security level
- * (Star) Security Property - "no write down"
- a subject cannot write data to a security level lower than the subject's security level. Prevents data leakage from upper levels to lower levels
- Strong * Property - "no read/write up or down"
- a subject with read/write privileges can perform read/write functions only at the subject's own security level
Each rule is independent of the others; unless access is expressly denied, it is allowed
Having upper and lower bounds helps enforce security permissions (see the sketch below)
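A minimal Python sketch (the level names and numbers are assumptions, not from the source) showing how "no read up" and "no write down" translate into simple checks:

```python
# Illustrative sketch: Bell-LaPadula's simple security property ("no read up")
# and * property ("no write down") with numeric clearance/classification levels.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # Simple Security Property: a subject may not read above its level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # * (Star) Property: a subject may not write below its level.
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "confidential")        # read down: allowed
assert not can_read("confidential", "secret")    # read up: denied
assert can_write("confidential", "secret")       # write up: allowed
assert not can_write("secret", "confidential")   # write down: denied (no leakage)
```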
Biba Integrity Model - Overview
- integrity
The exact opposite of Bell-LaPadula because it focuses on integrity
Developed by Kenneth J. Biba in 1977 based on a set of access control rules designed to ensure data integrity
No subject can depend on an object of lesser integrity
Based on a hierarchical lattice of integrity levels
authorized users must perform correct and safe procedures to protect data integrity
Biba Integrity Model - Three Security Rules to Enforce Integrity
Focus on protecting the integrity/sanctity of our knowledge base
Simple Integrity Axiom - "no read down": a subject cannot read data from an object of a lower integrity level
- implies you can read from objects above your integrity level
- * (Star) Integrity Axiom - "no write up": a subject cannot write data to an object at a higher integrity level (see the sketch below)
Invocation property - a subject cannot invoke (call upon) subjects at a higher integrity level
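A companion sketch (the integrity levels are made-up assumptions) showing how Biba flips the Bell-LaPadula comparisons because it protects integrity rather than confidentiality:

```python
# Illustrative sketch: Biba's "no read down" and "no write up" rules.

INTEGRITY = {"untrusted": 0, "normal": 1, "validated": 2}

def can_read(subject_level, object_level):
    # Simple Integrity Axiom: no read down - don't consume lower-integrity data.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level, object_level):
    # * (Star) Integrity Axiom: no write up - don't contaminate higher-integrity data.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

assert can_read("normal", "validated")        # read up: allowed
assert not can_read("validated", "normal")    # read down: denied
assert can_write("validated", "normal")       # write down: allowed
assert not can_write("normal", "validated")   # write up: denied
```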
Clark Wilson Security Model
- integrity
Integrity Model
Model Characteristics:
- Clark-Wilson enforces well-formed transactions through the use of the access triple:
- user > transformation procedure (TP) > CDI (constrained data item); this addresses all three integrity goals
Three integrity goals (enforced through well-formed transactions and separation of duties):
- prevent unauthorized users from making modifications
- prevents authorized users from making improper modifications
- maintain internal and external consistency
SIMPLY PUT - keep users out of your stuff or they'll break it; for example, Amazon does not give you direct access to its data and instead uses a trusted interface AKA the frontend
- if Amazon allowed free text, people would input information in all different formats and the database wouldn't know what that means, so they give you a drop-down, or limit a username field to just 12 characters
EXAM - an untrusted entity does not get direct access to a trusted entity
User > Interface > Stuff
API: Application Programming Interface - a controlled interface that lets an untrusted application access trusted resources (see the sketch below)
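A hypothetical Python sketch of the access triple: the user never touches the CDI directly, only a certified transformation procedure does, and that TP also validates the input the way a trusted frontend would (all names below are illustrative):

```python
# Hypothetical sketch of the Clark-Wilson access triple:
# user > transformation procedure (TP) > CDI (constrained data item).

class CDI:
    """Constrained data item: only modifiable through a TP."""
    def __init__(self, value):
        self._value = value

# The access triple: (user, TP, CDI) combinations that are allowed.
ALLOWED_TRIPLES = {("alice", "update_balance", "account_42")}

def update_balance(user, cdi_name, cdi, new_value):
    # The TP enforces both the triple and well-formed input (like a drop-down
    # or a length-limited field on a trusted frontend).
    if (user, "update_balance", cdi_name) not in ALLOWED_TRIPLES:
        raise PermissionError("user not authorized for this TP/CDI pair")
    if not isinstance(new_value, int) or new_value < 0:
        raise ValueError("not a well-formed transaction")
    cdi._value = new_value

account = CDI(100)
update_balance("alice", "account_42", account, 250)   # allowed
# update_balance("bob", "account_42", account, 0)     # would raise PermissionError
```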
Brewer Nash
- SoDs
Commercial Models: Brewer-Nash
Brewer-Nash Model - aka Chinese Wall
- developed to combat conflict of interest in databases housing competitor information
Published in 1989 to ensure fair competition
Defines a wall and a set of rules to ensure that no subject accesses objects on the other side of the wall
A way of separating competitors' data within the same integrated database
A consultant shouldn't go from recording Visa's information to recording Citi's information - that's a conflict of interest (see the sketch below)
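A hypothetical sketch (the companies and conflict classes are invented for illustration) of how the wall can be enforced: once a subject touches one company in a conflict-of-interest class, competitors in that class become off-limits:

```python
# Hypothetical sketch of the Brewer-Nash "Chinese Wall".

CONFLICT_CLASSES = {
    "credit_cards": {"Visa", "Citi", "Amex"},
    "airlines": {"Delta", "United"},
}

history = {}   # subject -> set of companies already accessed

def request_access(subject, company):
    accessed = history.setdefault(subject, set())
    for members in CONFLICT_CLASSES.values():
        if company in members:
            # Deny if the subject already touched a *different* company
            # in the same conflict-of-interest class.
            if accessed & members and company not in accessed:
                return False
    accessed.add(company)
    return True

assert request_access("consultant", "Visa")        # first access: allowed
assert not request_access("consultant", "Citi")    # competitor: the wall blocks it
assert request_access("consultant", "Delta")       # different class: allowed
```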
Information Flow Model
- most likely not testable
- data is compartmentalized based on classification and the need to know
model seeks to eliminate covert channels
model ensures that information flows only in authorized directions, e.g. from a lower security level to a higher security level
Whatever component directly affects the flow of information must dominate all components involved with the flow of information
Non-interference Model
*most likely not testable
Model Characteristics
- model ensures that actions at a higher security level do not interfere with actions at a lower security level
- the goal of this model is to protect the state of an entity at the lower security level from actions at the higher security level so that data does not pass through covert or timing channels
Lattice Model
*most likely not testable
Model consists of a set of objects constrained between the least upper bound and the greatest lower bound values
the least upper bound defines the highest level of object access rights granted to a subject; the greatest lower bound defines the lowest
the goal of this model is to protect the confidentiality of an object and only allow access by an authorized subject
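An illustrative sketch (the labels are assumptions) of lattice labels as (level, compartment set), with the least upper bound and greatest lower bound computed from two labels:

```python
# Illustrative sketch: lattice labels as (level, compartments).

def lub(a, b):
    # Least upper bound: the higher level, union of compartments.
    return (max(a[0], b[0]), a[1] | b[1])

def glb(a, b):
    # Greatest lower bound: the lower level, intersection of compartments.
    return (min(a[0], b[0]), a[1] & b[1])

secret_crypto = (2, {"crypto"})
secret_nuclear = (2, {"nuclear"})

print(lub(secret_crypto, secret_nuclear))   # level 2, both compartments
print(glb(secret_crypto, secret_nuclear))   # level 2, no compartments
```

A subject's access rights sit between these two bounds, which is what the model uses to decide whether an authorized subject may touch an object.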
Security Architecture
Security Architecture directs how the components included in the system architecture should be organized to ensure that security requirements are met. The security architecture of an information system should include:
- a description of the locations in the overall architecture where security measures should be placed
- a description of how various components of the architecture should interact to ensure security
- the security specifications to be followed when designing and developing a system
Computer Architecture Terminology
Program (app) > Process (loaded program) > Thread (process instructions)
- Program: An application, e.g. Word
- Process: a program loaded into memory
- Thread: Each individual instruction within a process
Multiprogramming: the appearance of multiple programs running, but they aren't > no true isolation. Windows appeared open but weren't running because the OS didn't have the means to give each process its own set of resources.
- Multi-tasking: you really are running multiple programs, and each is allocated its own set of resources.
Multiprocessing: more than one CPU
Multithreading: If you want to run threads simultaneously. In the past, multiple CPUs were needed - today multi-core processors provide this.
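A minimal Python sketch, just to make the terminology concrete: one process (this script) running several threads that each execute their own stream of instructions:

```python
# Minimal multithreading sketch: one process, several concurrent threads.
import threading

def worker(name):
    # Each thread is an individual stream of instructions within the process.
    print(f"thread {name} running")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for all threads to finish
```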
Operating System Architecture:
- Process Activity
- Memory Management
- Memory Types - RAM, ROM, etc.
- Virtual Memory
- CPU Modes & Protection Rings
CPU Modes & Protection Rings
Protection Rings: provide a security mechanism for an OS by creating boundaries between the various processes operating on a system and ensuring that processes do not affect each other or harm critical system components
Ring 0: Most Privileged - Operating system kernel (supervisor/privilege mode).
- Trusted Computing Base (TCB) - your system is only as trustworthy as your processor, BIOS, and OS kernel
Ring 1: remaining parts of the OS, e.g. executive functions - memory management
Ring 2: OS and I/O drivers and OS utilities
Ring 3: Least Trusted - Applications (programs) and user activity
Clark-Wilson parallel - can the inner rings access the outer rings? Yes, but not the other way around (see the sketch below)
Today, only two rings are effectively used - trusted (kernel mode) and untrusted (user mode)
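A hypothetical sketch (the ring assignments are illustrative) of the direction-of-access rule: inner, more privileged rings can reach outward, but outer rings can't reach inward except through a controlled gate such as a system call:

```python
# Hypothetical ring-based privilege check: lower ring number = more privilege.

RING = {"kernel": 0, "drivers": 2, "app": 3}

def may_invoke(caller, target):
    # Inner (more privileged) code can reach outward; outer code cannot
    # reach inward directly.
    return RING[caller] <= RING[target]

assert may_invoke("kernel", "app")       # ring 0 -> ring 3: allowed
assert not may_invoke("app", "drivers")  # ring 3 -> ring 2: denied
```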
System Architecture - Important Concepts
Trusted Computing Base (TCB) - firmware, system BIOS, hardware (CPU and memory) - the specific elements that drive and enforce the security policy
Security Perimeter - the fence that isolates the TCB
Defined Subset of Subjects and Objects
- anytime a subject calls on an object, two components of the OS kernel are called into play: reference monitor and the security kernel
Reference Monitor: the LAW - all the rules that control access to resources. This is checked every time a subject accesses a protected object
Security Kernel: the POLICE - the security kernel enforces and invokes the reference monitor concept. It has 3 main requirements (sketched after this list):
- must facilitate isolation of processes; must be able to allow and deny access as required
- must be invoked at every access attempt
- must be small enough to be tested and verified in a comprehensive manner
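A hypothetical sketch (rules and names invented for illustration) of the reference monitor as the rule set and the security kernel as the small routine that checks those rules on every access attempt:

```python
# Reference monitor: the access rules ("the LAW").
RULES = {("alice", "payroll.db"): {"read"},
         ("bob", "payroll.db"): {"read", "write"}}

def security_kernel(subject, obj, action):
    # Security kernel ("the POLICE"): invoked at every access attempt,
    # small enough to be tested and verified comprehensively.
    allowed = RULES.get((subject, obj), set())
    if action not in allowed:
        raise PermissionError(f"{subject} may not {action} {obj}")
    return f"{subject} {action}s {obj}"

print(security_kernel("alice", "payroll.db", "read"))   # allowed
# security_kernel("alice", "payroll.db", "write")       # would raise PermissionError
```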
Security Policy: a set of rules on how resources are managed within a computer system.
Least Privilege: a process has no more privileges than it needs
Secure Modes of Operation
State = Classification
Single State
- one classification of data
- a filing cabinet with one drawer and everything is top secret - need to have clearance for everything in that system
Multi-State
- a filing cabinet with multiple drawers at different classification levels; you need clearance matching the level of each drawer you access
Compartmented
- need to know; not enough to have top secret clearance unless there is a need to know
- systems that can understand different levels of ‘need to know’ along with clearances and classifications
- have to have an additional way to determine need-to-know outside of clearance
Dedicated
- a system that doesn’t need to differentiate ‘need to know’ buckets
Combos
- single state compartmented - need clearance for everything, but need-to-know only for what you'll access (see the sketch below)
- single state dedicated - need clearance and need-to-know for everything
- multistate compartmented - clearance for drawer you’ll access, and need to know for what you’ll access
- multi-state dedicated - clearance for what you will access and need-to-know for everything at that level
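An illustrative sketch (levels, compartments, and users are assumptions) of why compartmented systems need more than clearance - the check combines clearance with need-to-know:

```python
# Illustrative compartmented-access check: clearance alone is not enough.

LEVELS = {"secret": 1, "top_secret": 2}

subjects = {
    "alice": {"clearance": "top_secret", "need_to_know": {"crypto"}},
    "bob":   {"clearance": "top_secret", "need_to_know": set()},
}

def can_access(user, object_level, object_compartment):
    s = subjects[user]
    cleared = LEVELS[s["clearance"]] >= LEVELS[object_level]
    needs = object_compartment in s["need_to_know"]
    return cleared and needs

assert can_access("alice", "top_secret", "crypto")      # clearance + need-to-know
assert not can_access("bob", "top_secret", "crypto")    # cleared, but no need-to-know
```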
Evaluation Criteria
Why Evaluate? To carefully examine the security-related components of a system
- trust vs. assurance: the two elements that are evaluated
- trust: the function of the product; what does the product do?
- assurance: the reliability of the process; was the product designed well with reliable processes
- Example: a product that is SUPER cool, but broke after the first week = high trust, low assurance
CMMI: the maturity of processes determines the maturity of the product and the assurance in the product
The Orange Book (TCSEC)
The Orange Book & Rainbow Series
ITSEC (InfoTech Security Evaluation Criteria)
Common Criteria
Trusted Computer Security Evaluation Criteria (TCSEC)
Developed by the National Computer Security Center
AKA the Orange Book
Based on the Bell-LaPadula model (deals only with confidentiality)
Uses a hierarchically ordered series of evaluation classes
Defines Trust and Assurance, but does not allow for them to be evaluated independently
Not ideal, because it didn’t look at trust and assurance separately and it was like a report card with grades - no context
Ratings:
- A1: verified protection
- B1, B2, B3: mandatory protection (B1 is lowest, B2 adds on, etc.)
- C1, C2: discretionary protection
- D: minimal security
IT Security Evaluation Criteria (ITSEC)
Created by a collection of European nations in 1991 as a standard to evaluate security attributes of computer systems
The first Criteria to evaluate functionality and assurance separately
F1 to F10 ratings for functionality
E0 to E6 for assurance
Common Criteria ISO 15408
International Standard
Protection Profile: system requirements from Agency or Customer
Target of Evaluation (ToE): System Designed by Vendor based on Protection Profile
Security Target Documentation: Documents how ToE meets Protection Profile; how vendor is filling the need
Evaluation Assurance Level (EAL 1-7): an objective third party describes the level to which the ToE meets the Protection Profile
Designed to work based on the need of the agency
WHATEVER THE EXAM QUESTION; ANSWER IS EAL 4
EXAMPLE:
Protection Profile: Agency wants a phone case that won’t break
ToE: a case created that can’t break
EAL 1-3 - simple tests getting more aggressive; 1 ft drop, 5 ft drop, etc.
EAL 4: certain degree of testing; dropped from rooftop
EAL 5: 10 story drops and shatters
why not build a level 7 case? not worth the price