Chapter 8 Principles of Security Models, Design, and Capabilities Flashcards

1
Q

Objects and Subjects

A

The subject is the user or process that makes a request to access a resource. Access can mean reading from or writing to a resource. The object is the resource a user or process wants to access. Keep in mind that the subject and object refer to some specific access request, so the same resource can serve as a subject and an object in different access requests.

2
Q

Transitive trust

A

is the concept that if A trusts B and B trusts C, then A inherits trust of C through the transitive property—which works like it would in a mathematical equation: if a = b, and b = c, then a = c. In the previous example, when A requests data from B and then B requests data from C, the data that A receives is essentially from C. Transitive trust is a serious security concern because it may enable bypassing of restrictions or limitations between A and C, especially if A and C both support interaction with B. An example of this would be when an organization blocks access to Facebook or YouTube to increase worker productivity. Thus, workers (A) do not have access to certain internet sites (C). However, if workers are able to access a web proxy, virtual private network (VPN), or anonymization service, then this can serve as a means to bypass the local network restriction. In other words, if workers (A) are accessing VPN service (B), and the VPN service (B) can access the blocked internet service (C), then A is able to access C through B via a transitive trust exploitation.

3
Q

Closed systems

A

are harder to integrate with unlike systems, but they can be more secure. A closed system often comprises proprietary hardware and software that does not incorporate industry standards. This lack of integration ease means that attacks on many generic system components either will not work or must be customized to be successful. In many cases, attacking a closed system is harder than launching an attack on an open system. Many software and hardware components with known vulnerabilities may not exist on a closed system. In addition to the lack of known vulnerable components on a closed system, it is often necessary to possess more in-depth knowledge of the specific target system to launch a successful attack.

4
Q

Open systems

A

are generally far easier to integrate with other open systems. It is easy, for example, to create a local area network (LAN) with a Microsoft Windows Server machine, a Linux machine, and a Macintosh machine. Although all three computers use different operating systems and could represent up to three different hardware architectures, each supports industry standards and makes it easy for networked (or other) communications to occur. This ease comes at a price, however. Because standard communications components are incorporated into each of these three open systems, there are far more predictable entry points and methods for launching attacks. In general, their openness makes them more vulnerable to attack, and their widespread availability makes it possible for attackers to find (and even to practice on) plenty of potential targets. Also, open systems are more popular than closed systems and attract more attention. An attacker who develops basic attacking skills will find more targets on open systems than on closed ones. This larger “market” of potential targets usually means that there is more emphasis on targeting open systems. Inarguably, there’s a greater body of shared experience and knowledge on how to attack open systems than there is for closed systems.

5
Q

OPEN SOURCE VS. CLOSED SOURCE

A

It’s also helpful to keep in mind the distinction between open-source and closed-source systems. An open-source solution is one where the source code and other internal logic are exposed to the public. A closed-source solution is one where the source code and other internal logic are hidden from the public. Open-source solutions often depend on public inspection and review to improve the product over time. Closed-source solutions are more dependent on the vendor/programmer to revise the product over time. Both open-source and closed-source solutions can be available for sale or at no charge, but the term commercial typically implies closed-source. However, closed-source code is often revealed through either vendor compromise or decompiling. The former is always a breach of ethics and often the law, whereas the latter is a standard element in ethical reverse engineering or systems analysis.

It is also the case that a closed-source program can be either an open system or a closed system, and an open-source program can be either an open system or a closed system.

6
Q

Confinement

A

Software designers use process confinement to restrict the actions of a program. Simply put, process confinement allows a process to read from and write to only certain memory locations and resources. This is also known as sandboxing. The operating system, or some other security component, disallows illegal read/write requests. If a process attempts to initiate an action beyond its granted authority, that action will be denied. In addition, further actions, such as logging the violation attempt, may be taken. Systems that must comply with higher security ratings usually record all violations and respond in some tangible way. Generally, the offending process is terminated. Confinement can be implemented in the operating system itself (such as through process isolation and memory protection), through the use of a confinement application or service (for example, Sandboxie at www.sandboxie.com), or through a virtualization or hypervisor solution (such as VMware or Oracle’s VirtualBox).
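
A minimal sketch of the confinement idea (the process names, address ranges, and monitoring logic are invented for illustration; this is not any particular OS API): a monitor knows which memory ranges each process was granted, denies anything outside those bounds, logs the violation attempt, and could then terminate the offender.

# Hypothetical sketch of process confinement: each process may touch only
# the memory ranges it was explicitly granted.
ALLOWED = {
    "proc_a": {"memory": [(0x1000, 0x1FFF)]},
    "proc_b": {"memory": [(0x2000, 0x2FFF)]},
}

violations = []  # a higher-assurance system would record and react to these

def check_memory_access(process, address):
    """Return True only if the address falls inside the process's bounds."""
    for low, high in ALLOWED[process]["memory"]:
        if low <= address <= high:
            return True
    violations.append((process, hex(address)))  # log the violation attempt
    return False                                # deny; the offender may be terminated

print(check_memory_access("proc_a", 0x1800))  # True  - inside its bounds
print(check_memory_access("proc_a", 0x2800))  # False - denied and logged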

7
Q

Bounds

A

Each process that runs on a system is assigned an authority level. The authority level tells the operating system what the process can do. In simple systems, there may be only two authority levels: user and kernel. The authority level tells the operating system how to set the bounds for a process. The bounds of a process consist of limits set on the memory addresses and resources it can access. The bounds state the area within which a process is confined or contained. In most systems, these bounds segment logical areas of memory for each process to use. It is the responsibility of the operating system to enforce these logical bounds and to disallow access to other processes. More secure systems may require physically bounded processes. Physical bounds require each bounded process to run in an area of memory that is physically separated from other bounded processes, not just logically bounded in the same memory space. Physically bounded memory can be very expensive, but it’s also more secure than logical bounds.

8
Q

Isolation

A

When a process is confined through enforcing access bounds, that process runs in isolation. Process isolation ensures that any behavior will affect only the memory and resources associated with the isolated process. Isolation is used to protect the operating environment, the kernel of the operating system (OS), and other independent applications. Isolation is an essential component of a stable operating system. Isolation is what prevents an application from accessing the memory or resources of another application, whether for good or ill. The operating system may provide intermediary services, such as cut-and-paste and resource sharing (such as the keyboard, network interface, and storage device access).

These three concepts (confinement, bounds, and isolation) make designing secure programs and operating systems more difficult, but they also make it possible to implement more secure systems.

9
Q

control

A

uses access rules to limit the access of a subject to an object. Access rules state which objects are valid for each subject. Further, an object might be valid for one type of access and be invalid for another type of access. One common control is for file access. A file can be protected from modification by making it read-only for most users but read-write for a small set of users who have the authority to modify it.

There are both mandatory and discretionary access controls, often called mandatory access control (MAC) and discretionary access control (DAC).

10
Q

mandatory controls

A

static attributes of the subject and the object are considered to determine the permissibility of an access. Each subject possesses attributes that define its clearance, or authority, to access resources. Each object possesses attributes that define its classification. Different types of security methods classify resources in different ways. For example, subject A is granted access to object B if the security system can find a rule that allows a subject with subject A’s clearance to access an object with object B’s classification.

11
Q

Discretionary controls

A

differ from mandatory controls in that the subject has some ability to define the objects to access. Within limits, discretionary access controls allow the subject to define a list of objects to access as needed. This access control list serves as a dynamic access rule set that the subject can modify. The constraints imposed on the modifications often relate to the subject’s identity. Based on the identity, the subject may be allowed to add or modify the rules that define access to objects.

12
Q

trusted system

A

one in which all protection mechanisms work together to process sensitive data for many types of users while maintaining a stable and secure computing environment. Assurance is simply defined as the degree of confidence in satisfaction of security needs. Assurance must be continually maintained, updated, and reverified. This is true if the trusted system experiences a known change or if a significant amount of time has passed. In either case, change has occurred at some level. Change is often the antithesis of security; it often diminishes security. So, whenever change occurs, the system needs to be reevaluated to verify that the level of security it provided previously is still intact. Assurance varies from one system to another and must be established on individual systems. However, there are grades or levels of assurance that can be placed across numerous systems of the same type, systems that support the same services, or systems that are deployed in the same geographic location. Thus, trust can be built into a system by implementing specific security features, whereas assurance is an assessment of the reliability and usability of those security features in a real-world situation.

13
Q

security model

A

provides a way for designers to map abstract statements into a security policy that prescribes the algorithms and data structures necessary to build hardware and software. Thus, a security model gives software designers something against which to measure their design and implementation. That model, of course, must support each part of the security policy.

14
Q

security attributes for an object

A

A security token is a separate object that is associated with a resource and describes its security attributes. This token can communicate security information about an object prior to requesting access to the actual object. In other implementations, various lists are used to store security information about multiple objects. A capabilities list maintains a row of security attributes for each controlled object. Although not as flexible as the token approach, capabilities lists generally offer quicker lookups when a subject requests access to an object. A third common type of attribute storage is called a security label, which is generally a permanent part of the object to which it’s attached.

15
Q

TRUSTED COMPUTING BASE

A

An old U.S. Department of Defense standard known colloquially as the Orange Book/Trusted Computer System Evaluation Criteria (TCSEC) (DoD Standard 5200.28, covered in more detail later in this chapter in the section “Rainbow Series”) describes a trusted computing base (TCB) as a combination of hardware, software, and controls that work together to form a trusted base to enforce your security policy. The TCB is a subset of a complete information system. It should be as small as possible so that a detailed analysis can reasonably ensure that the system meets design specifications and requirements. The TCB is the only portion of that system that can be trusted to adhere to and enforce the security policy. It is not necessary that every component of a system be trusted. But any time you consider a system from a security standpoint, your evaluation should include all trusted components that define that system’s TCB.

16
Q

Security Perimeter

A

The security perimeter of your system is an imaginary boundary that separates the TCB from the rest of the system (Figure 8.1). This boundary ensures that no insecure communications or interactions occur between the TCB and the remaining elements of the computer system. For the TCB to communicate with the rest of the system, it must create secure channels, also called trusted paths. A trusted path is a channel established with strict standards to allow necessary communication to occur without exposing the TCB to security vulnerabilities. A trusted path also protects system users (sometimes known as subjects) from compromise as a result of a TCB interchange.

17
Q

Reference Monitors and Kernels

A

When the time comes to implement a secure system, it’s essential to develop some part of the TCB to enforce access controls on system assets and resources (sometimes known as objects). The part of the TCB that validates access to every resource prior to granting access requests is called the reference monitor (Figure 8.1). The reference monitor stands between every subject and object, verifying that a requesting subject’s credentials meet the object’s access requirements before any requests are allowed to proceed. If such access requirements aren’t met, access requests are turned down. Effectively, the reference monitor is the access control enforcer for the TCB. Thus, authorized and secured actions and activities are allowed to occur, whereas unauthorized and insecure activities and actions are denied and blocked from occurring. The reference monitor enforces access control or authorization based on the desired security model, whether Discretionary, Mandatory, Role Based, or some other form of access control. The reference monitor may be a conceptual part of the TCB; it doesn’t need to be an actual, stand-alone, or independent working system component.

The collection of components in the TCB that work together to implement reference monitor functions is called the security kernel. The reference monitor is a concept or theory that is put into practice via the implementation of a security kernel in software and hardware. The purpose of the security kernel is to launch appropriate components to enforce reference monitor functionality and resist all known attacks. The security kernel uses a trusted path to communicate with subjects. It also mediates all resource access requests, granting only those requests that match the appropriate access rules in use for a system.

18
Q

STATE MACHINE MODEL

A

The state machine model describes a system that is always secure no matter what state it is in. It’s based on the computer science definition of a finite state machine (FSM). An FSM combines an external input with an internal machine state to model all kinds of complex systems, including parsers, decoders, and interpreters. Given an input and a state, an FSM transitions to another state and may create an output. Mathematically, the next state is a function of the current state and the input; that is, next state = F(input, current state). Likewise, the output is a function of the input and the current state; that is, output = F(input, current state).
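
A tiny, generic illustration of the FSM definition above (the states and transition table are made up): both the next state and the output are computed from the (input, current state) pair.

# Toy finite state machine: next_state = F(input, current_state)
#                           output     = F(input, current_state)
TRANSITIONS = {            # (current_state, input) -> (next_state, output)
    ("locked", "coin"):   ("unlocked", "unlock"),
    ("locked", "push"):   ("locked",   "alarm"),
    ("unlocked", "push"): ("locked",   "lock"),
}

def step(state, symbol):
    return TRANSITIONS.get((state, symbol), (state, None))

state = "locked"
for symbol in ["push", "coin", "push"]:
    state, output = step(state, symbol)
    print(symbol, "->", state, output)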

19
Q

state transitions

A

Many security models are based on the secure state concept. According to the state machine model, a state is a snapshot of a system at a specific moment in time. If all aspects of a state meet the requirements of the security policy, that state is considered secure. A transition occurs when accepting input or producing output. A transition always results in a new state (also called a state transition). All state transitions must be evaluated. If each possible state transition results in another secure state, the system can be called a secure state machine. A secure state machine model system always boots into a secure state, maintains a secure state across all transitions, and allows subjects to access resources only in a secure manner compliant with the security policy. The secure state machine model is the basis for many other security models.
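
Continuing the toy FSM above, a secure state machine can be approximated (in this simplified form) by confirming that the system boots into a secure state and that every defined transition lands in a state the security policy accepts.

# Toy check that the machine starts secure and every transition stays secure.
SECURE_STATES = {"locked", "unlocked"}      # states the policy considers secure
TRANSITIONS = {
    ("locked", "coin"):   "unlocked",
    ("unlocked", "push"): "locked",
}

def is_secure_state_machine(initial_state):
    if initial_state not in SECURE_STATES:  # must boot into a secure state
        return False
    return all(next_state in SECURE_STATES  # every transition yields a secure state
               for next_state in TRANSITIONS.values())

print(is_secure_state_machine("locked"))    # True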

20
Q

INFORMATION FLOW MODEL

A

Information flow models are designed to prevent unauthorized, insecure, or restricted information flow, often between different levels of security (these are often referred to as multilevel models). Information flow can be between subjects and objects at the same classification level as well as between subjects and objects at different classification levels. An information flow model allows all authorized information flows, whether within the same classification level or between classification levels. It prevents all unauthorized information flows, whether within the same classification level or between classification levels.

21
Q

NONINTERFERENCE MODEL

A

The noninterference model is loosely based on the information flow model. However, instead of being concerned about the flow of information, the noninterference model is concerned with how the actions of a subject at a higher security level affect the system state or the actions of a subject at a lower security level. Basically, the actions of subject A (high) should not affect the actions of subject B (low) or even be noticed by subject B. The real concern is to prevent the actions of subject A at a high level of security classification from affecting the system state at a lower level. If this occurs, subject B may be placed into an insecure state or be able to deduce or infer information about a higher level of classification. This is a type of information leakage and implicitly creates a covert channel. Thus, the noninterference model can be imposed to provide a form of protection against damage caused by malicious programs such as Trojan horses.

22
Q

COMPOSITION THEORIES

A

Some other models that fall into the information flow category build on the notion of how inputs and outputs between multiple systems relate to one another—which follows how information flows between systems rather than within an individual system. These are called composition theories because they explain how outputs from one system relate to inputs to another system. There are three recognized types of composition theories:

Cascading: Input for one system comes from the output of another system.
Feedback: One system provides input to another system, which reciprocates by reversing those roles (so that system A first provides input for system B and then system B provides input to system A).
Hookup: One system sends input to another system but also sends input to external entities.

23
Q

TAKE-GRANT MODEL

A

The Take-Grant model employs a directed graph (Figure 8.2) to dictate how rights can be passed from one subject to another or from a subject to an object. Simply put, a subject with the grant right can grant another subject or another object any other right they possess. Likewise, a subject with the take right can take a right from another subject. In addition to these two primary rules, the Take-Grant model may adopt a create rule and a remove rule to generate or delete rights. The key to this model is that using these rules allows you to figure out when rights in the system can change and where leakage (that is, unintentional distribution of permissions) can occur.

Take rule: Allows a subject to take rights over an object
Grant rule: Allows a subject to grant rights to an object
Create rule: Allows a subject to create new rights
Remove rule: Allows a subject to remove rights it has
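
A rough Python sketch of the take and grant rules over a directed rights graph; the subjects, objects, and rights shown are invented for illustration.

# rights[x][y] = set of rights x holds over y (a directed graph of rights)
rights = {
    "alice": {"bob": {"take"}},
    "bob":   {"file": {"read"}, "carol": {"grant"}},
    "carol": {},
}

def take(subject, via, target, right):
    """Take rule: a subject with 'take' over via may take one of via's rights on target."""
    if "take" in rights[subject].get(via, set()) and right in rights[via].get(target, set()):
        rights[subject].setdefault(target, set()).add(right)

def grant(granter, recipient, target, right):
    """Grant rule: a subject with 'grant' over recipient may pass on a right it holds on target."""
    if "grant" in rights[granter].get(recipient, set()) and right in rights[granter].get(target, set()):
        rights.setdefault(recipient, {}).setdefault(target, set()).add(right)

take("alice", "bob", "file", "read")    # alice takes bob's read right on file
grant("bob", "carol", "file", "read")   # bob grants carol his read right on file
print(rights["alice"].get("file"), rights["carol"].get("file"))  # rights have leaked outward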

24
Q

ACCESS CONTROL MATRIX

A

An access control matrix is a table of subjects and objects that indicates the actions or functions that each subject can perform on each object. Each column of the matrix is an access control list (ACL). Each row of the matrix is a capabilities list. An ACL is tied to the object; it lists valid actions each subject can perform. A capability list is tied to the subject; it lists valid actions that can be taken on each object. From an administration perspective, using only capability lists for access control is a management nightmare. A capability list method of access control can be accomplished by storing on each subject a list of rights the subject has for every object. This effectively gives each user a key ring of accesses and rights to objects within the security domain. To remove access to a particular object, every user (subject) that has access to it must be individually manipulated. Thus, managing access on each user account is much more difficult than managing access on each object (in other words, via ACLs).
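
A quick illustration with made-up subjects and objects: in the matrix below, reading across a row gives a subject's capability list, while reading down a column gives an object's ACL.

# Access control matrix: rows = subjects (capability lists), columns = objects (ACLs)
matrix = {
    "alice": {"payroll.db": {"read", "write"}, "report.doc": {"read"}},
    "bob":   {"payroll.db": set(),             "report.doc": {"read", "write"}},
}

def capability_list(subject):   # one row of the matrix, tied to the subject
    return matrix[subject]

def acl(obj):                   # one column of the matrix, tied to the object
    return {subj: perms[obj] for subj, perms in matrix.items()}

print(capability_list("alice"))  # what alice can do to every object
print(acl("payroll.db"))         # what every subject can do to payroll.db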

25
Q

BELL-LAPADULA MODEL

A

By design, the Bell-LaPadula model prevents the leaking or transfer of classified information to less secure clearance levels. This is accomplished by blocking lower-classified subjects from accessing higher-classified objects. With these restrictions, the Bell-LaPadula model is focused on maintaining the confidentiality of objects. Thus, the complexities involved in ensuring the confidentiality of documents are addressed in the Bell-LaPadula model. However, Bell-LaPadula does not address the aspects of integrity or availability for objects. Bell-LaPadula is also the first mathematical model of a multilevel security policy.

26
Q

LATTICE-BASED ACCESS CONTROL

A

This general category for nondiscretionary access controls is covered in Chapter 13, “Managing Identity and Authentication.” Here’s a quick preview on that more detailed coverage of this subject (which drives the underpinnings for most access control security models): Subjects under lattice-based access controls are assigned positions in a lattice. These positions fall between defined security labels or classifications. Subjects can access only those objects that fall into the range between the least upper bound (the nearest security label or classification higher than their lattice position) and the highest lower bound (the nearest security label or classification lower than their lattice position) of the labels or classifications for their lattice position. Thus, a subject that falls between the private and sensitive labels in a commercial scheme that reads bottom up as public, sensitive, private, proprietary, and confidential can access only public and sensitive data but not private, proprietary, or confidential data. Lattice-based access controls also fit into the general category of information flow models and deal primarily with confidentiality (that’s the reason for the connection to Bell-LaPadula).

27
Q

Bell-LaPadula model

A

The Simple Security Property states that a subject may not read information at a higher sensitivity level (no read up).
The * (star) Security Property states that a subject may not write information to an object at a lower sensitivity level (no write down). This is also known as the Confinement Property.
The Discretionary Security Property states that the system uses an access matrix to enforce discretionary access control.
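
A compact sketch of the first two Bell-LaPadula properties, using an invented linear ordering of sensitivity levels rather than a full lattice:

LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance, object_level):
    # Simple Security Property: no read up
    return LEVELS[subject_clearance] >= LEVELS[object_level]

def can_write(subject_clearance, object_level):
    # * (star) Security Property: no write down
    return LEVELS[subject_clearance] <= LEVELS[object_level]

print(can_read("secret", "top secret"))     # False - read up is blocked
print(can_write("secret", "confidential"))  # False - write down is blocked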

28
Q

BIBA MODEL

A

The Biba model was designed after the Bell-LaPadula model. Where the Bell-LaPadula model addresses confidentiality, the Biba model addresses integrity. The Biba model is also built on a state machine concept, is based on information flow, and is a multilevel model. In fact, Biba appears to be pretty similar to the Bell-LaPadula model, except inverted. Both use states and transitions. Both have basic properties. The biggest difference is their primary focus: Biba primarily protects data integrity. Here are the basic properties or axioms of the Biba model state machine:

The Simple Integrity Property states that a subject cannot read an object at a lower integrity level (no read-down).
The * (star) Integrity Property states that a subject cannot modify an object at a higher integrity level (no write-up).
Prevent modification of objects by unauthorized subjects.
Prevent unauthorized modification of objects by authorized subjects.
Protect internal and external object consistency.
Critiques of the Biba model reveal a few drawbacks:

It addresses only integrity, not confidentiality or availability.
It focuses on protecting objects from external threats; it assumes that internal threats are handled programmatically.
It does not address access control management, and it doesn’t provide a way to assign or change an object’s or subject’s classification level.
It does not prevent covert channels.
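
The same style of sketch for Biba's two axioms; the comparisons are simply the inverse of the Bell-LaPadula checks because the concern is integrity rather than confidentiality (the integrity levels are invented):

INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def can_read(subject_level, object_level):
    # Simple Integrity Property: no read down
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level, object_level):
    # * (star) Integrity Property: no write up
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(can_read("high", "low"))   # False - reading down could taint high-integrity work
print(can_write("low", "high"))  # False - a low-integrity subject cannot corrupt high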

29
Q

CLARK-WILSON MODEL

A

Although the Biba model works in commercial applications, another model was designed in 1987 specifically for the commercial environment. The Clark-Wilson model uses a multifaceted approach to enforcing data integrity. Instead of defining a formal state machine, the Clark-Wilson model defines each data item and allows modifications through only a small set of programs.

The Clark-Wilson model does not require the use of a lattice structure; rather, it uses a three-part relationship of subject/program/object (or subject/transaction/object) known as a triple or an access control triple. Subjects do not have direct access to objects. Objects can be accessed only through programs. Through the use of two principles—well-formed transactions and separation of duties—the Clark-Wilson model provides an effective means to protect integrity.

30
Q

Clark-Wilson defines

A
A constrained data item (CDI) is any data item whose integrity is protected by the security model.
An unconstrained data item (UDI) is any data item that is not controlled by the security model. Any data that is to be input and hasn’t been validated, or any output, would be considered an unconstrained data item.
An integrity verification procedure (IVP) is a procedure that scans data items and confirms their integrity.
Transformation procedures (TPs) are the only procedures that are allowed to modify a CDI. The limited access to CDIs through TPs forms the backbone of the Clark-Wilson integrity model.
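
A rough sketch of the access control triple: subjects never modify the CDI directly, they invoke a TP, and an IVP can later confirm the CDI's integrity. All names and rules here are invented for illustration.

account_balance = {"value": 100}                 # the CDI protected by the model

def ivp(cdi):
    """Integrity verification procedure: confirm the CDI is in a valid state."""
    return isinstance(cdi["value"], int) and cdi["value"] >= 0

def tp_deposit(subject, cdi, amount):
    """Transformation procedure: the only path allowed to modify the CDI."""
    if subject not in {"teller", "manager"}:     # authorization / separation of duties
        raise PermissionError("subject may not invoke this TP")
    if amount <= 0:                              # well-formed transaction; bad input stays a UDI
        raise ValueError("deposit rejected: input failed validation")
    cdi["value"] += amount

tp_deposit("teller", account_balance, 50)
print(account_balance["value"], ivp(account_balance))  # 150 True
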
31
Q

BREWER AND NASH MODEL (AKA CHINESE WALL)

A

The Brewer and Nash model was created to permit access controls to change dynamically based on a user’s previous activity (making it a kind of state machine model as well). This model applies to a single integrated database; it seeks to create security domains that are sensitive to the notion of conflict of interest (for example, someone who works at Company C who has access to proprietary data for Company A should not also be allowed access to similar data for Company B if those two companies compete with each other). This model is known as the Chinese Wall model because it creates a class of data that defines which security domains are potentially in conflict and prevents any subject with access to one domain that belongs to a specific conflict class from accessing any other domain that belongs to the same conflict class.
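
A small sketch of the conflict-of-interest check (company names invented): once a subject has accessed data belonging to one company in a conflict class, the other companies in that class become off limits.

CONFLICT_CLASSES = [{"Company A", "Company B"}]  # A and B compete with each other
history = {}                                     # subject -> companies already accessed

def can_access(subject, company):
    for conflict_class in CONFLICT_CLASSES:
        if company in conflict_class:
            for visited in history.get(subject, set()):
                if visited in conflict_class and visited != company:
                    return False                 # would cross the "wall"
    history.setdefault(subject, set()).add(company)
    return True

print(can_access("analyst", "Company A"))  # True  - first access
print(can_access("analyst", "Company B"))  # False - conflicts with prior access to A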

32
Q

GOGUEN-MESEGUER MODEL

A

The Goguen-Meseguer model is an integrity model, although not as well known as Biba and the others. In fact, this model is said to be the foundation of noninterference conceptual theories. Often when someone refers to a noninterference model, they are actually referring to the Goguen-Meseguer model.

The Goguen-Meseguer model is based on predetermining the set or domain—a list of objects that a subject can access. This model is based on automation theory and domain separation. This means subjects are allowed only to perform predetermined actions against predetermined objects. When similar users are grouped into their own domain (that is, collective), the members of one subject domain cannot interfere with the members of another subject domain. Thus, subjects are unable to interfere with each other’s activities.

33
Q

SUTHERLAND MODEL

A

The Sutherland model is an integrity model. It focuses on preventing interference in support of integrity. It is formally based on the state machine model and the information flow model. However, it does not directly indicate specific mechanisms for protection of integrity. Instead, the model is based on the idea of defining a set of system states, initial states, and state transitions. Through the use of only these predetermined secure states, integrity is maintained and interference is prohibited.

A common example of the Sutherland model is its use to prevent a covert channel from being used to influence the outcome of a process or activity.

34
Q

GRAHAM-DENNING MODEL

A

The Graham-Denning model is focused on the secure creation and deletion of both subjects and objects. Graham-Denning is a collection of eight primary protection rules or actions that define the boundaries of certain secure actions:

Securely create an object.
Securely create a subject.
Securely delete an object.
Securely delete a subject.
Securely provide the read access right.
Securely provide the grant access right.
Securely provide the delete access right.
Securely provide the transfer access right.
35
Q

When formal evaluations are undertaken, systems are usually subjected to a two-step process

A

The system is tested and a technical evaluation is performed to make sure that the system’s security capabilities meet criteria laid out for its intended use.
The system is subjected to a formal comparison of its design and security criteria and its actual capabilities and performance, and individuals responsible for the security and veracity of such systems must decide whether to adopt them, reject them, or make some changes to their criteria and try again.

36
Q

TCSEC Classes and Required Functionality

A
D Minimal Protection
C1 Discretionary Protection 
C2 Controlled Access Protection
B1 Labeled Security 
B2 Structured Protection
B3 Security Domains
A1 Verified Protection
37
Q

Category D

A

Minimal protection. Reserved for systems that have been evaluated but do not meet requirements to belong to any other category.

38
Q

Discretionary Protection (Categories C1, C2)

A

Discretionary protection systems provide basic access control. Systems in this category do provide some security controls but are lacking in more sophisticated and stringent controls that address specific needs for secure systems. C1 and C2 systems provide basic controls and complete documentation for system installation and configuration.

39
Q

Discretionary Security Protection (C1)

A

A discretionary security protection system controls access by user IDs and/or groups. Although there are some controls in place that limit object access, systems in this category provide only weak protection.

40
Q

Controlled Access Protection (C2)

A

Controlled access protection systems are stronger than C1 systems. Users must be identified individually to gain access to objects, object reuse protection (such as media cleansing) is enforced, security-relevant events are audited, and stricter logon procedures are required. C2 is the strongest of the discretionary (C) categories; the next categories, B1 through B3, fall under Mandatory Protection.

41
Q

Labeled Security (B1)

A

In a labeled security system, each subject and each object has a security label. A B1 system grants access by matching up the subject and object labels and comparing their permission compatibility. B1 systems support sufficient security to house classified data.

42
Q

Structured Protection (B2)

A

In addition to the requirement for security labels (as in B1 systems), B2 systems must ensure that no covert channels exist. Operator and administrator functions are separated, and process isolation is maintained. B2 systems are sufficient for classified data that requires more security functionality than a B1 system can deliver.

43
Q

Security Domains (B3)

A

Security domain systems provide more secure functionality by further increasing the separation and isolation of unrelated processes. Administration functions are clearly defined and separate from functions available to other users. The focus of B3 systems shifts to simplicity to reduce any exposure to vulnerabilities in unused or extra code. The secure state of B3 systems must also be addressed during the initial boot process. B3 systems are difficult to attack successfully and provide sufficient secure controls for very sensitive or secret data.

44
Q

Verified Protection (Category A1)

A

Verified protection systems are similar to B3 systems in the structure and controls they employ. The difference is in the development cycle. Each phase of the development cycle is controlled using formal methods. Each phase of the design is documented, evaluated, and verified before the next step is taken. This forces extreme security consciousness during all steps of development and deployment and is the only way to formally guarantee strong system security.

A verified design system starts with a design document that states how the resulting system will satisfy the security policy. From there, each development step is evaluated in the context of the security policy. Functionality is crucial, but assurance becomes more important than in lower security categories. A1 systems represent the top level of security and are designed to handle top-secret data. Every step is documented and verified, from the design all the way through to delivery and installation

45
Q

RAINBOW SERIES

A

The first such set of standards resulted in the creation of the Trusted Computer System Evaluation Criteria (TCSEC) in the 1980s, as the U.S. Department of Defense (DoD) worked to develop and impose security standards for the systems it purchased and used. In turn, this led to a whole series of such publications through the mid-1990s. Since these publications were routinely identified by the color of their covers, they are known collectively as the rainbow series.

46
Q

Orange Book

A

In 1985, the National Computer Security Center (NCSC) developed the TCSEC, usually called the Orange Book because of the color of this publication’s covers. The TCSEC established guidelines to be used when evaluating a stand-alone computer from the security perspective. These guidelines address basic security functionality and allow evaluators to measure and rate a system’s functionality and trustworthiness. In the TCSEC, in fact, functionality and security assurance are combined and not separated as they are in security criteria developed later. TCSEC guidelines were designed to be used when evaluating vendor products or by vendors to ensure that they build all necessary functionality and security assurance into new products. Keep in mind while you continue to read through the rest of this section that the TCSEC was replaced by the Common Criteria in 2005.

47
Q

Red Book

A

Because the Orange Book applies only to stand-alone computers not attached to a network, and so many systems were used on networks (even in the 1980s), the Red Book was developed to interpret the TCSEC in a networking context. In fact, the official title of the Red Book is Trusted Network Interpretation of the TCSEC so it could be considered an interpretation of the Orange Book with a bent on networking. Quickly the Red Book became more relevant and important to system buyers and builders than the Orange Book. The following list includes a few other functions of the Red Book:

Rates confidentiality and integrity
Addresses communications integrity
Addresses denial of service protection
Addresses compromise (in other words, intrusion) protection and prevention
Is restricted to a limited class of networks that are labeled as “centralized networks with a single accreditation authority”
Uses only four rating levels: None, C1 (Minimum), C2 (Fair), and B2 (Good)

48
Q

Green Book

A

the Department of Defense Password Management Guidelines, provides password creation and management guidelines; it’s important for those who configure and manage trusted systems.

49
Q

Information Technology Security Evaluation Criteria (ITSEC) guidelines

A

The ITSEC guidelines evaluate the functionality and assurance of a system using separate ratings for each category. In this context, a system’s functionality is a measurement of the system’s utility value for users. The functionality rating of a system states how well the system performs all necessary functions based on its design and intended purpose. The assurance rating represents the degree of confidence that the system will work properly in a consistent manner.

ITSEC refers to any system being evaluated as a target of evaluation (TOE). All ratings are expressed as TOE ratings in two categories. ITSEC uses two scales to rate functionality and assurance.

The functionality of a system is rated from F-D through F-B3 (there is no F-A1). The assurance of a system is rated from E0 through E6.

50
Q

COMMON CRITERIA

A

The Common Criteria (CC) represents a more or less global effort that involves everybody who worked on TCSEC and ITSEC as well as other global players. Ultimately, it results in the ability to purchase CC-evaluated products (where CC, of course, stands for Common Criteria). The Common Criteria defines various levels of testing and confirmation of systems’ security capabilities, and the number of the level indicates what kind of testing and confirmation has been performed. Nevertheless, it’s wise to observe that even the highest CC ratings do not equate to a guarantee that such systems are completely secure or that they are entirely devoid of vulnerabilities or susceptibilities to exploit. The Common Criteria was designed as a product evaluation model.

51
Q

ISO 15408, Evaluation Criteria for Information Technology Security. The objectives of the CC guidelines

A

To add to buyers’ confidence in the security of evaluated, rated information technology (IT) products
To eliminate duplicate evaluations (among other things, this means that if one country, agency, or validation organization follows the CC in rating specific systems and configurations, others elsewhere need not repeat this work)
To keep making security evaluations and the certification process more cost effective and efficient
To make sure evaluations of IT products adhere to high and consistent standards
To promote evaluation and increase availability of evaluated, rated IT products
To evaluate the functionality (in other words, what the system does) and assurance (in other words, how much can you trust the system) of the TOE

52
Q

The Common Criteria process is based on two key elements: protection profiles and security targets.

A

Protection profiles (PPs) specify for a product that is to be evaluated (the TOE) the security requirements and protections, which are considered the security desires or the “I want” from a customer. Security targets (STs) specify the claims of security from the vendor that are built into a TOE. STs are considered the implemented security measures or the “I will provide” from the vendor. In addition to offering security targets, vendors may offer packages of additional security features. A package is an intermediate grouping of security requirement components that can be added to or removed from a TOE (like the option packages when purchasing a new vehicle).

53
Q

The CC guidelines are divided into three areas, as follows

A

Part 1 Introduction and General Model describes the general concepts and underlying model used to evaluate IT security and what’s involved in specifying targets of evaluation. It contains useful introductory and explanatory material for those unfamiliar with the workings of the security evaluation process or who need help reading and interpreting evaluation results.

Part 2 Security Functional Requirements describes various functional requirements in terms of security audits, communications security, cryptographic support for security, user data protection, identification and authentication, security management, TOE security functions (TSFs), resource utilization, system access, and trusted paths. Covers the complete range of security functions as envisioned in the CC evaluation process, with additional appendices (called annexes) to explain each functional area.

Part 3 Security Assurance covers assurance requirements for TOEs in the areas of configuration management, delivery and operation, development, guidance documents, and lifecycle support plus assurance tests and vulnerability assessments. Covers the complete range of security assurance checks and protection profiles as envisioned in the CC evaluation process, with information on evaluation assurance levels that describe how systems are designed, checked, and tested.

54
Q

CC evaluation assurance levels

A

EAL1 Functionally tested
EAL2 Structurally tested
EAL3 Methodically tested and checked
EAL4 Methodically designed, tested, and reviewed
EAL5 Semi-formally designed and tested
EAL6 Semi-formally verified, designed, and tested
EAL7 Formally verified, designed and tested

55
Q

EAL1 Functionally tested

A

EAL1 Functionally tested Applies when some confidence in correct operation is required but where threats to security are not serious. This is of value when independent assurance that due care has been exercised in protecting personal information is necessary.

56
Q

EAL2 Structurally tested

A

Applies when delivery of design information and test results are in keeping with good commercial practices. This is of value when developers or users require low to moderate levels of independently assured security. It is especially relevant when evaluating legacy systems.

57
Q

EAL3 Methodically tested and checked

A

Applies when security engineering begins at the design stage and is carried through without substantial subsequent alteration. This is of value when developers or users require a moderate level of independently assured security, including thorough investigation of TOE and its development.

58
Q

EAL4 Methodically designed, tested, and reviewed

A

Applies when rigorous, positive security engineering and good commercial development practices are used. This does not require substantial specialist knowledge, skills, or resources. It involves independent testing of all TOE security functions.

59
Q

EAL5 Semi-formally designed and tested

A

Uses rigorous security engineering and commercial development practices, including specialist security engineering techniques, for semi-formal testing. This applies when developers or users require a high level of independently assured security in a planned development approach, followed by rigorous development.

60
Q

EAL6 Semi-formally verified, designed, and tested

A

Uses direct, rigorous security engineering techniques at all phases of design, development, and testing to produce a premium TOE. This applies when TOEs for high-risk situations are needed, where the value of protected assets justifies additional cost. Extensive testing reduces risks of penetration, probability of covert channels, and vulnerability to attack.

61
Q

EAL7 Formally verified, designed and tested

A

Used only for highest-risk situations or where high-value assets are involved. This is limited to TOEs where tightly focused security functionality is subject to extensive formal analysis and testing.

62
Q

Payment Card Industry Data Security Standard (PCI DSS)

A

is a collection of requirements for improving the security of electronic payment transactions. These standards were defined by the PCI Security Standards Council members, who are primarily credit card banks and financial institutions. The PCI DSS defines requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures.

63
Q

International Organization for Standardization (ISO)

A

is a worldwide standards-setting group of representatives from various national standards organizations. ISO defines standards for industrial and commercial equipment, software, protocols, and management, among others. It issues six main products: International Standards, Technical Reports, Technical Specifications, Publicly Available Specifications, Technical Corrigenda, and Guides. ISO standards are widely accepted across many industries and have even been adopted as requirements or laws by various governments.

64
Q

Certification

A

The first phase in a total evaluation process is certification. Certification is the comprehensive evaluation of the technical and nontechnical security features of an IT system and other safeguards made in support of the accreditation process to establish the extent to which a particular design and implementation meets a set of specified security requirements.

System certification is the technical evaluation of each part of a computer system to assess its concordance with security standards. First, you must choose evaluation criteria (we will present criteria alternatives in later sections). Once you select criteria to use, you analyze each system component to determine whether it satisfies the desired security goals. The certification analysis includes testing the system’s hardware, software, and configuration. All controls are evaluated during this phase, including administrative, technical, and physical controls.

65
Q

Accreditation

A

In the certification phase, you test and document the security capabilities of a system in a specific configuration. With this information in hand, the management of an organization compares the capabilities of a system to the needs of the organization. It is imperative that the security policy clearly states the requirements of a security system. Management reviews the certification information and decides whether the system satisfies the security needs of the organization. If management decides the certification of the system satisfies their needs, the system is accredited. Accreditation is the formal declaration by the designated approving authority (DAA) that an IT system is approved to operate in a particular security mode using a prescribed set of safeguards at an acceptable level of risk. Once accreditation is performed, management can formally accept the adequacy of the overall security performance of an evaluated system.

66
Q

Risk Management Framework (RMF) and Committee on National Security Systems (CNSS) Policy (CNSSP) processes

A

Phase 1: Definition Involves the assignment of appropriate project personnel; documentation of the mission need; and registration, negotiation, and creation of a System Security Authorization Agreement (SSAA) that guides the entire certification and accreditation process

Phase 2: Verification Includes refinement of the SSAA, systems development activities, and a certification analysis

Phase 3: Validation Includes further refinement of the SSAA, certification evaluation of the integrated system, development of a recommendation to the DAA, and the DAA’s accreditation decision

Phase 4: Post Accreditation Includes maintenance of the SSAA, system operation, change management, and compliance validation

67
Q

MEMORY PROTECTION

A

Memory protection is a core security component that must be designed and implemented into an operating system. It must be enforced regardless of the programs executing in the system. Otherwise, instability, violation of integrity, denial of service, and disclosure are likely results. Memory protection is used to prevent an active process from interacting with an area of memory that was not specifically assigned or allocated to it.

68
Q

MELTDOWN AND SPECTRE

A

Meltdown is an exploitation that can allow for the reading of private kernel memory contents by a nonprivileged process. Spectre can enable the wholesale theft of memory contents from other running applications. An astoundingly wide range of processors are vulnerable to one or both of these exploits. While two different issues, they were discovered nearly concurrently and made public at the same time. By the time of the publication of this book, patches are likely to be available to address these issues in existing hardware, and future processors should have native mechanisms to prevent such exploitations.

69
Q

VIRTUALIZATION

A

Virtualization technology is used to host one or more operating systems within the memory of a single host computer. This mechanism allows virtually any OS to operate on any hardware. It also allows multiple OSs to work simultaneously on the same hardware. Common examples include VMware Workstation Pro, VMware vSphere and vSphere Hypervisor, VMware Fusion for Mac, Microsoft Hyper-V, Oracle VirtualBox, XenServer, and Parallels Desktop for Mac.

70
Q

TRUSTED PLATFORM MODULE

A

The Trusted Platform Module (TPM) is both a specification for a cryptoprocessor chip on a mainboard and the general name for implementation of the specification. A TPM chip is used to store and process cryptographic keys for the purposes of a hardware supported/implemented hard drive encryption system. Generally, a hardware implementation, rather than a software-only implementation of hard drive encryption, is considered to be more secure.

71
Q

hardware security module (HSM)

A

A hardware security module (HSM) is a cryptoprocessor used to manage/store digital encryption keys, accelerate crypto operations, support faster digital signatures, and improve authentication. An HSM is often an add-on adapter or peripheral or can be a Transmission Control Protocol/Internet Protocol (TCP/IP) network device. HSMs include tamper protection to prevent their misuse even if physical access is gained by an attacker. A TPM is just one example of an HSM.

HSMs provide an accelerated solution for large (2,048+ bit) asymmetric encryption calculations and a secure vault for key storage. Many certificate authority systems use HSMs to store certificates; ATM and POS bank terminals often employ proprietary HSMs; hardware SSL accelerators can include HSM support; and Domain Name System Security Extensions (DNSSEC)–compliant Domain Name System (DNS) servers use HSM for key and zone file storage.

72
Q

INTERFACES

A

A constrained or restricted interface is implemented within an application to restrict what users can do or see based on their privileges. Users with full privileges have access to all the capabilities of the application. Users with restricted privileges have limited access.

Applications constrain the interface using different methods. A common method is to hide the capability if the user doesn’t have permissions to use it. Commands might be available to administrators via a menu or by right-clicking an item, but if a regular user doesn’t have permissions, the command does not appear. Other times, the command is shown but is dimmed or disabled. The regular user can see it but will not be able to use it.

The purpose of a constrained interface is to limit or restrict the actions of both authorized and unauthorized users. The use of such an interface is a practical implementation of the Clark-Wilson model of security.
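
A toy example of a constrained interface (command names and roles invented): the menu the application builds is filtered by the user's privileges, so restricted users never see administrative commands.

# Each command lists the roles permitted to use it.
COMMANDS = {
    "view report":   {"user", "admin"},
    "delete record": {"admin"},          # hidden from regular users
    "audit log":     {"admin"},
}

def build_menu(role):
    """Show only the commands this role is permitted to use."""
    return [cmd for cmd, allowed_roles in COMMANDS.items() if role in allowed_roles]

print(build_menu("user"))    # ['view report']
print(build_menu("admin"))   # ['view report', 'delete record', 'audit log']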

73
Q

FAULT TOLERANCE

A

Fault tolerance is the ability of a system to suffer a fault but continue to operate. Fault tolerance is achieved by adding redundant components such as additional disks within a redundant array of inexpensive disks (RAID) array, or additional servers within a failover clustered configuration. Fault tolerance is an essential element of security design. It is also considered part of avoiding single points of failure and the implementation of redundancy. For more details on fault tolerance, redundant servers, RAID, and failover solutions, see Chapter 18, “Disaster Recovery Planning.”

74
Q

What is system certification?

Formal acceptance of a stated system configuration
A technical evaluation of each part of a computer system to assess its compliance with security standards
A functional evaluation of the manufacturer’s goals for each hardware and software component to meet integration standards
A manufacturer’s certificate stating that all components were installed and configured correctly

A

B. A system certification is a technical evaluation. Option A describes system accreditation. Options C and D refer to manufacturer standards, not implementation standards.

75
Q

What is system accreditation?

Formal acceptance of a stated system configuration
A functional evaluation of the manufacturer’s goals for each hardware and software component to meet integration standards
Acceptance of test results that prove the computer system enforces the security policy
The process to specify secure communication between machines

A

A. Accreditation is the formal acceptance process. Option B is not an appropriate answer because it addresses manufacturer standards. Options C and D are incorrect because there is no way to prove that a configuration enforces a security policy, and accreditation does not entail secure communication specification.

76
Q

What is a closed system?

A system designed around final, or closed, standards
A system that includes industry standards
A proprietary system that uses unpublished protocols
Any machine that does not run Windows

A

C. A closed system is one that uses largely proprietary or unpublished protocols and standards. Options A and D do not describe any particular systems, and Option B describes an open system.

77
Q

Which best describes a confined or constrained process?

A process that can run only for a limited time
A process that can run only during certain times of the day
A process that can access only certain memory locations
A process that controls access to an object

A

C. A constrained process is one that can access only certain memory locations. Options A, B, and D do not describe a constrained process.

78
Q

What is an access object?

A resource a user or process wants to access
A user or process that wants to access a resource
A list of valid access rules
The sequence of valid access types

A

A. An object is a resource a user or process wants to access. Option B describes a subject, not an object.

79
Q

What is a security control?

A security component that stores attributes that describe an object
A document that lists all data classification types
A list of valid access rules
A mechanism that limits access to an object

A

D. A control limits access to an object to protect it from misuse by unauthorized users.

80
Q

For what type of information system security accreditation are the applications and systems at a specific, self-contained location evaluated?

System accreditation
Site accreditation
Application accreditation
Type accreditation

A

B. The applications and systems at a specific, self-contained location are evaluated for DITSCAP and NIACAP site accreditation.

81
Q

How many major categories do the TCSEC criteria define?

Two
Three
Four
Five

A

C. TCSEC defines four major categories: Category A is verified protection, Category B is mandatory protection, Category C is discretionary protection, and Category D is minimal protection.

82
Q

What is the trusted computing base (TCB)?

Hosts on your network that support secure transmissions
The operating system kernel and device drivers
The combination of hardware, software, and controls that work together to enforce a security policy
The software and controls that certify a security policy

A

C. The TCB is the combination of hardware, software, and controls that work together to enforce a security policy.

83
Q

What is a security perimeter? (Choose all that apply.)

The boundary of the physically secure area surrounding your system
The imaginary boundary that separates the TCB from the rest of the system
The network where your firewall resides
Any connections to your computer system

A

A, B. Although the most correct answer in the context of this chapter is Option B, Option A is also a correct answer in the context of physical security.

84
Q

What part of the TCB concept validates access to every resource prior to granting the requested access?

TCB partition
Trusted library
Reference monitor
Security kernel

A

C. The reference monitor validates access to every resource prior to granting the requested access. Option D, the security kernel, is the collection of TCB components that work together to implement the reference monitor functions. In other words, the security kernel is the implementation of the reference monitor concept. Options A and B are not valid TCB concept components.

85
Q

What is the best definition of a security model?

A security model states policies an organization must follow.
A security model provides a framework to implement a security policy.
A security model is a technical evaluation of each part of a computer system to assess its concordance with security standards.
A security model is the process of formal acceptance of a certified configuration.

A

B. Option B is the only option that correctly defines a security model. Options A, C, and D define part of a security policy and the certification and accreditation process.

86
Q

Which security models are built on a state machine model?

Bell-LaPadula and Take-Grant
Biba and Clark-Wilson
Clark-Wilson and Bell-LaPadula
Bell-LaPadula and Biba

A

D. The Bell-LaPadula and Biba models are built on the state machine model.

87
Q

Which security model addresses data confidentiality?

Bell-LaPadula
Biba
Clark-Wilson
Brewer and Nash

A

A. Only the Bell-LaPadula model addresses data confidentiality. The Biba and Clark-Wilson models address data integrity. The Brewer and Nash model prevents conflicts of interest.

88
Q

Which Bell-LaPadula property keeps lower-level subjects from accessing objects with a higher security level?

(star) Security Property
No write up property
No read up property
No read down property

A

C. The no read up property, also called the Simple Security Property, prohibits subjects from reading a higher-security-level object.

89
Q

What is the implied meaning of the simple property of Biba?

Write down
Read up
No write up
No read down

A

B. The simple property of Biba is no read down, but it implies that it is acceptable to read up.

90
Q

When a trusted subject violates the star property of Bell-LaPadula in order to write an object into a lower level, what valid operation could be taking place?

Perturbation
Polyinstantiation
Aggregation
Declassification

A

D. Declassification is the process of moving an object into a lower level of classification once it is determined that it no longer justifies being placed at a higher level. Only a trusted subject can perform declassification because this action is a violation of the verbiage of the star property of Bell-LaPadula, but not the spirit or intent, which is to prevent unauthorized disclosure.

91
Q

What security method, mechanism, or model reveals a capabilities list of a subject across multiple objects?

Separation of duties
Access control matrix
Biba
Clark-Wilson

A

B. An access control matrix assembles ACLs from multiple objects into a single table. The rows of that table are the ACEs of a subject across those objects, thus a capabilities list.

92
Q

What security model has a feature that in theory has one name or label, but when implemented into a solution, takes on the name or label of the security kernel?

Graham-Denning model
Deployment modes
Trusted computing base
Chinese Wall

A

C. The trusted computing base (TCB) has a component known as the reference monitor in theory, which becomes the security kernel in implementation.

93
Q

Which of the following is not part of the access control relationship of the Clark-Wilson model?

Object
Interface
Programming language
Subject

A

C. The three parts of the Clark-Wilson model’s access control relationship (aka access triple) are subject, object, and program (or interface).