TEE Flashcards

1
Q

Trusted vs Trustworthy

A

The term trustworthy refers to something that will not compromise security, offering guarantees of safe operation.

In contrast, trusted describes someone or something that you rely upon to not compromise your security, even though no absolute guarantees exist.

In essence, trusted is about how you use something, while trustworthy is about whether something is safe to use. If something is trustworthy, it can inherently be trusted. Conversely, if something is trusted, it is not necessarily trustworthy; being trusted simply reflects a choice to rely on it.

In scientific discourse or formal writing, the distinction is critical.
Trustworthiness requires formal verification or proof that an entity will not cause harm. This rigorous definition is what differentiates it from trust, which is a subjective choice based on context.

2
Q

TEE (generic)

A

A trusted execution environment represents a domain within a system that is deliberately chosen to be trusted.

This environment is relied upon to execute sensitive tasks, often referred to as trusted applications (TAs). A trusted application is one that requires execution within a protected and secure environment, safeguarding it from interference or compromise.

The environment is called trusted because it is chosen as the domain in which sensitive tasks execute. While it would be desirable for the TEE to also be trustworthy, achieving that status requires extensive validation and formal guarantees, which are not always available in practice.

3
Q

TEE and REE

A

The Trusted Execution Environment (TEE) operates on a hardware platform and is designed to coexist with another environment called the Rich Execution Environment (REE).
These two environments serve distinct purposes within the same system, with the REE offering flexibility and the TEE focusing on high security.

In the Rich Execution Environment (REE), users can run a rich operating system like Android on smartphones or Linux and Windows on conventional computers. This environment is characterized by minimal restrictions, enabling the execution of a wide range of applications without significant limitations. It is designed for general-purpose usage, providing the flexibility required for normal operations.

In contrast, the Trusted Execution Environment (TEE) is tailored for executing sensitive tasks that demand a higher level of security. Unlike the REE, the TEE relies on hardware-based trust anchors, including:
* Hardware keys, for secure cryptographic operations.
* Secure storage, for protecting sensitive data.
* Trusted peripherals, via a Trusted User Interface (TUI) for secure input and output; for instance, the TUI ensures that only the internet banking app has access to the user interface while all other apps are disabled.
* Secure elements, dedicated hardware resources designed to enforce security policies.

A TEE requires specific hardware platforms that meet defined security requirements (e.g., TR0 specifications) to be implemented. Therefore, TEEs cannot be implemented on just any CPU.

Within the TEE, a core framework handles execution and provides a secure internal API. This API is accessible only to trusted applications (TAs).

The TEE also includes a communication agent that interacts with a client API in the REE. This interface facilitates communication between the two environments, but it also introduces potential security risks: since these APIs connect the untrusted world to the inside of the trusted boundary, strict access control on them is crucial to limit vulnerabilities.
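
The boundary check described above can be sketched in a few lines of Python. This is purely illustrative: the function names, the command set, and the "device secret" are hypothetical stand-ins, not the real GlobalPlatform client/internal APIs.

```python
import hashlib

# Hypothetical sketch: a TEE communication agent that enforces access
# control on requests arriving from the REE client API. Only commands
# on an allow-list reach the trusted application; everything else is
# rejected at the trusted boundary.

ALLOWED_COMMANDS = {"sign", "get_public_key"}  # assumed command set

def trusted_app(command, payload):
    # Stand-in for a trusted application (TA) running inside the TEE.
    if command == "sign":
        return hashlib.sha256(b"device-secret" + payload).hexdigest()
    return "public-key-bytes"

def tee_communication_agent(command, payload):
    # Boundary check: the agent mediates every request from the REE.
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"command {command!r} rejected at TEE boundary")
    return trusted_app(command, payload)

# REE client API usage:
sig = tee_communication_agent("sign", b"message")    # allowed
try:
    tee_communication_agent("dump_secrets", b"")     # rejected at the boundary
except PermissionError as e:
    print(e)
```

The point of the sketch is that the untrusted world never calls the TA directly; every request crosses a single, small mediation point.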

4
Q

TEE wrt Confidential Computing, RoT and TCB

A

The Trusted Execution Environment (TEE) has an important role in the confidential computing field, which aims to address a critical gap in data protection: the protection of data in use (when the data resides in RAM and is actively accessed by the CPU).

Confidential computing tries to ensure that data in use is accessible only to trusted applications, without being read, processed, or written by any unauthorized entity.

To implement such protection, the architecture relies on a Root of Trust (RoT). A Root of Trust is not a single component but rather an abstract concept referring to an element whose behavior is implicitly trusted. If the Root of Trust is compromised or behaves incorrectly, the entire system’s security can be undermined because the misbehaviour cannot be detected at runtime. For this reason, it is essential for the Root of Trust to be both trusted and ideally trustworthy.

The Root of Trust is a fundamental part of the Trusted Computing Base (TCB) that should be Trusted and Trustworthy.
The TCB encompasses all hardware, firmware, and, potentially, software components that are critical to a system’s security. Any vulnerability within the TCB can compromise the entire system, and such misbehaviour cannot be detected at runtime. Minimizing the TCB is therefore a priority: every component within it represents a potential vulnerability, so the smaller and simpler the TCB, the lower the risk.

Therefore:
* Hardware: Forms the core of the TCB, as it is inherently less complex than software and offers a smaller attack surface
* Firmware: Provides the essential layer for accessing hardware and must be included, though its simplicity can help maintain security
* Software: Typically more complex, it is ideally excluded from the TCB unless absolutely necessary, as its inclusion significantly increases the attack surface

5
Q

TEE security principles

A

To design a Trusted Execution Environment (TEE) effectively, certain security principles must be followed:

  1. Integration with Secure Boot Chain
    The TEE should be an integral part of the device’s secure boot chain, which is based on a Root of Trust (RoT). During every boot process, the system must verify the code integrity. If any modification or tampering is detected, the system will refuse to boot.
  2. Hardware-Based Isolation
    The separation between the Rich Execution Environment (REE) and the TEE should be implemented at the hardware level. This is crucial because hardware-based isolation minimizes reliance on software, which is more prone to vulnerabilities. By enforcing this separation in hardware, an attacker would need to breach the hardware itself, a much more challenging task than exploiting software vulnerabilities.
  3. Execution Isolation for Trusted Applications
    TEEs often execute multiple trusted applications (TAs). While some TEEs allow all trusted applications to run in the same environment, it is preferable to have separate containers in which each trusted application is executed alone. This isolation ensures that the compromise of one trusted application does not impact others. Though not compulsory, this is a desirable feature in TEE design and is supported by some implementations.
  4. Secure Data Storage
    Trusted applications may require secure and permanent data storage. To achieve this, data must be protected such that:
    * No other trusted application or any REE application can access the data.
    * Data is tied to the hardware, using a hardware-bound key accessible only by the TEE OS.
    * Unauthorized access or modification is prevented.
    * Migration of data to another device, such as by swapping an SD card, is impossible, as the hardware key is bound to the specific hardware component.
    This ensures that sensitive information remains inaccessible outside the TEE and is usable only within its original device.
  5. Trusted Path and secure access to peripherals
    Secure access to peripherals such as fingerprint sensors, displays, touchpads, and keyboards is essential. This trusted path can be hardware-isolated and controlled solely by the TEE during specific actions. Applications in the REE, including those compromised by malware, should have no visibility or access to these peripherals.
    For example, during a secure transaction involving fingerprint authentication or input through a touchpad, the TEE should have exclusive control of the peripherals, ensuring that malware or unauthorized applications in the REE cannot intercept or interfere with the operation.
  6. Protection Against Malware
    One of the key advantages of a TEE is its resilience to malware infections in the REE. Any malware affecting the rich environment is confined to that domain and cannot influence or possess visibility of the data or operations within the TEE. When a trusted application is activated, it operates independently and securely, maintaining the integrity of sensitive processes.
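
The secure-storage requirement in point 4 can be illustrated with a toy key-derivation sketch. The hardware unique key (HUK) name, its value, and the derivation label are assumptions for illustration; a real TEE OS would use a fused device key and a proper KDF.

```python
import hashlib
import hmac

# Hypothetical sketch: deriving a per-TA storage key from a hardware
# unique key (HUK). The HUK never leaves the TEE OS; each trusted
# application gets a distinct key, so no TA can read another TA's data,
# and data cannot be migrated to a device holding a different HUK.

HUK = b"\x13" * 32  # stand-in for the fused, device-unique hardware key

def storage_key(ta_id: str) -> bytes:
    # HMAC used as a simple KDF: the derived key is bound to both the
    # device (via the HUK) and the trusted application identity.
    return hmac.new(HUK, b"secure-storage|" + ta_id.encode(), hashlib.sha256).digest()

k_banking = storage_key("banking-ta")
k_drm = storage_key("drm-ta")
assert k_banking != k_drm  # per-TA isolation
# On another device (different HUK) the same TA id yields a different
# key, so data copied via a swapped SD card cannot be decrypted.
```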
6
Q

Intel IPT

A

The Intel Identity Protection Technology (Intel IPT) is an example of a TEE.
Intel IPT relies on two separate CPUs:
1. The primary CPU, which executes user programs and applications.
2. A secondary, specialized CPU called the Management Engine (ME), which runs Java applets.

The Management Engine is mostly used to execute management tasks, even when the device is powered off. This is enabled through a feature called wake-on-LAN.

In the context of Intel IPT, the Management Engine serves as a physically separated trusted execution environment. It is capable of running Java applets independently from the main CPU. These applets are bound to the physical hardware of the Management Engine and can perform various secure operations.
Some key examples include:
* Cryptographic Key Management: The Management Engine can generate and store cryptographic keys in a protected memory space that is inaccessible to any user-level programs or the primary CPU. This functionality is integrated with interfaces such as the Windows Cryptography API, allowing Windows applications to request key generation or access securely stored keys.
* One-Time Password (OTP) Generation: Intel IPT has been used in products like Vasco MyDigipass, which relies on the Management Engine to store secrets and generate OTPs securely. These OTPs are critical for secure authentication processes.
* Secure PIN Entry: By leveraging the Management Engine’s control of video output and peripherals (the chipset also manages video), Intel IPT ensures that PIN entry is isolated from the primary CPU, preventing unauthorized programs from intercepting sensitive information such as the entered PIN.

The architectural separation provided by the Management Engine makes Intel IPT an example of a physically isolated Trusted Execution Environment.
Unlike many TEE implementations that share resources with the Rich Execution Environment (REE), Intel IPT achieves separation at the hardware level, with two independent CPUs executing tasks in parallel. This design inherently limits the attack surface, as the Management Engine operates outside the control of the primary system.

7
Q

ARM TrustZone

A

Another widely recognized Trusted Execution Environment (TEE) is the ARM TrustZone, a feature available in some ARM CPUs, which enables a single CPU to operate in two separate modes: secure mode and normal mode.

The core technical innovation of the TrustZone lies in its extension of the normal CPU bus to a virtual 33-bit bus. This additional signal bit indicates whether the CPU is operating in secure mode or normal mode. This signal is not confined within the CPU but is exposed externally to facilitate the creation of secure peripherals and secure RAM. These external components can use the signal to enforce access control, ensuring that only processes in secure mode can interact with protected hardware resources.
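
The role of the extra bus bit can be sketched as follows. The class and field names are illustrative, not real TrustZone hardware interfaces; the point is that the peripheral itself checks the NS (non-secure) bit of each transaction.

```python
# Hypothetical sketch of the TrustZone idea: every bus transaction
# carries an extra NS (non-secure) bit, and secure RAM or secure
# peripherals reject transactions whose NS bit is set.

class SecureRAM:
    def __init__(self):
        self._data = {}

    def access(self, addr, ns_bit, value=None):
        if ns_bit:  # request originated in the normal (non-secure) world
            raise PermissionError("secure RAM: non-secure access denied")
        if value is not None:
            self._data[addr] = value
        return self._data.get(addr)

ram = SecureRAM()
ram.access(0x1000, ns_bit=0, value=42)   # secure-world write: allowed
assert ram.access(0x1000, ns_bit=0) == 42
try:
    ram.access(0x1000, ns_bit=1)         # normal-world read: blocked
except PermissionError:
    pass
```

Because the check happens in the memory/peripheral hardware, even a fully compromised normal-world OS cannot read the secure region.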

The ARM TrustZone is an open and documented system, facilitating widespread adoption and integration across various platforms.

Despite its advantages, the TrustZone has notable limitations, such as the presence of only a Single Secure Enclave. All trusted applications operate within this single enclave, without hardware-based isolation separating them. This limitation arises because the trusted zone’s separation is not enforced by hardware but is instead managed by the software running within the TEE. As a result, this approach is inherently less secure compared to architectures that provide real hardware separation between different applications, making it more vulnerable to potential breaches within the enclave.

To address the limitations of the current architecture, ARM is actively developing a third mode of operation alongside the TEE and REE (an attestation zone). This new mode is designed to support specific features, such as attestation, which demand a more specialized and secure environment.

8
Q

Trustonic

A

The ARM TrustZone architecture, while providing a hardware-based mechanism for creating a secure enclave, has limited utility by itself. This limitation comes from the fact that TrustZone allows for only a single secure enclave.

Gemalto developed the Trusted Foundation System, and G+D (Giesecke+Devrient) developed MobiCore. These are TEE operating systems designed to run within the TrustZone. Their primary innovation was to split the single secure enclave into multiple virtual enclaves, allowing several trusted applications to run in parallel, each isolated from the others. This was achieved through smart card operating systems adapted for use in the TEE. However, this separation is software-based, as the ARM hardware does not inherently support hardware-based isolation between trusted applications.
Later, Gemalto and G+D merged efforts, leading to the development of Trustonic, based on G+D’s MobiCore. The resulting operating system, named Kinibi, became a highly evolved and sophisticated TEE solution. For instance, Kinibi 500 introduced features such as 64-bit symmetrical multiprocessing, enabling its use even in high-performance embedded systems, not just low-capacity CPUs. However, deploying Kinibi requires licensing fees, as it is a proprietary solution developed by Trustonic.
In parallel, other companies developed competing solutions. For example, Samsung integrated a TEE solution called KNOX into its devices. KNOX provides similar functionality to Trustonic, with the added feature of secure boot.

9
Q

Intel SGX

A

Another prominent example of a Trusted Execution Environment (TEE) is Intel SGX (Software Guard Extensions), which is tightly integrated with the CPU, making it a hardware-based solution for secure computing.

When purchasing an Intel CPU, users can check whether it supports SGX functionality.

Intel SGX modifies the standard memory management of the CPU. When an enclave is declared within the Intel environment, SGX ensures that the memory allocated to that enclave is fully isolated and protected from other code, and vice versa.

This means:
* No other process, not even those running with the highest level of privilege on the CPU, can access the memory area allocated to the enclave.
* Hardware-protected memory areas are created for each enclave, preventing access both from general-purpose processes and other enclaves.
* Enclaves are also restricted from accessing areas outside their own memory boundaries.

A critical feature of SGX is its use of measurement, which is fundamental to its security architecture. When an enclave is created, the system computes a hash of all relevant components, typically including the executable loaded into the enclave.
This hash serves as a verification metric, enabling the system to confirm that the enclave has not been tampered with and is running as intended.
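
The measurement idea can be sketched as hashing every page loaded into the enclave, in order, into one digest (conceptually similar to SGX's MRENCLAVE, though the real computation also covers page metadata). Page contents here are made-up placeholders.

```python
import hashlib

# Hypothetical sketch of enclave measurement: fold every loaded page,
# in load order, into a single SHA-256 digest. Any change to the code
# loaded into the enclave changes the resulting measurement.

def measure_enclave(pages):
    m = hashlib.sha256()
    for page in pages:
        m.update(page)
    return m.hexdigest()

original = [b"code-page-1", b"code-page-2"]
tampered = [b"code-page-1", b"code-page-2-EVIL"]

assert measure_enclave(original) == measure_enclave(list(original))  # reproducible
assert measure_enclave(original) != measure_enclave(tampered)        # detects tampering
```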

However, Intel SGX has limitations. Its scope is restricted to the protection of the execution environment, specifically CPU operations and memory. It does not inherently provide trusted input/output channels. To achieve secure input and output, SGX can be combined with other Intel technologies, such as Intel IPT, which offers capabilities like trusted display and input isolation.

Intel SGX has undergone significant evolution over time. The original version, SGX1, was available on both low-end and high-end CPUs, while SGX2 is exclusively available on high-end, server-oriented CPUs such as Intel Xeon processors.

Using SGX requires additional steps and considerations:
* To create an enclave, developers must obtain special permissions and utilize a specific library provided by Intel.
* The code intended for the enclave must be signed by Intel (!). This requirement means developers must submit their executable to Intel to receive the necessary signature, a process that has raised concerns about privacy and control.

10
Q

Keystone

A

Keystone is an open-source framework that allows users to customize and build their own TEEs based on their unique requirements.

Keystone’s design enables selective inclusion of TEE features, making it adaptable to different scenarios. For example, in embedded systems or IoT devices with limited memory and low computational power, certain TEE features might be excluded to optimize performance and resource usage.
Thanks to this flexibility, unused components can be excluded, reducing the attack surface and minimizing the Trusted Computing Base (TCB). A smaller TCB inherently reduces the amount of code and hardware that must be trusted, improving overall security.

Keystone provides a basic architecture that includes:
* An untrusted environment (similar to a general purpose OS).
* Multiple trusted segregated enclaves.

Keystone works on top of the RISC-V open-source hardware platform. RISC-V offers the first open-source CPU architecture, where all designs are public, allowing users to build, customize, and test CPUs on platforms like FPGAs or SoCs.
For projects that need high performance, RISC-V can be integrated into System-on-Chip (SoC) designs, alongside other necessary components.

The core RISC-V architecture provides basic computational functionality. However, the open-source nature of RISC-V enables the addition of Intellectual Property (IP) modules that extend the CPU’s functionality, such as vector computation, artificial intelligence, or cryptographic extensions.

RISC-V incorporates a hardware-based Physical Memory Protection (PMP) system, which enforces access control on memory pages, ensuring that processes cannot access each other’s memory, either permanently or temporarily. Input/output devices, often memory-mapped, are likewise protected during the execution of trusted applications.

RISC-V provides three execution modes in decreasing order of privilege:
* Machine Mode: The most privileged mode, primarily used for low-level hardware control.
* Supervisor Mode: A less privileged mode, typically used for operating systems.
* User Mode: The least privileged mode, used for general application execution.

These modes, combined with PMP, allow fine-grained control over memory and device access, ensuring that only authorized processes can interact with sensitive resources.
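
The interaction of PMP and privilege modes can be sketched as a small lookup table checked on every access. The entry layout, addresses, and required-privilege field are illustrative; the real PMP uses CSR-encoded address ranges and per-entry R/W/X permission bits.

```python
# Hypothetical sketch of RISC-V Physical Memory Protection: a table of
# (start, end, minimum privilege) entries, checked on each memory access.

M, S, U = 3, 1, 0  # machine, supervisor, user privilege levels

pmp_entries = [
    # (start, end, minimum privilege required to access the region)
    (0x8000_0000, 0x8000_FFFF, M),  # security monitor memory: m-mode only
    (0x9000_0000, 0x9000_FFFF, U),  # an enclave region open to u-mode code
]

def pmp_check(addr, mode):
    for start, end, required in pmp_entries:
        if start <= addr <= end:
            return mode >= required
    return False  # no matching entry: deny by default

assert pmp_check(0x8000_0010, M) is True    # the monitor can touch its memory
assert pmp_check(0x8000_0010, S) is False   # the OS (s-mode) cannot
assert pmp_check(0x9000_0010, U) is True    # user-mode enclave access allowed
```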

11
Q

Motivation for Keystone

A

The motivation behind the creation of Keystone lies in addressing the limitations of traditional Trusted Execution Environments (TEEs), which are often rigid and lack customization options.
When compared to other existing solutions, these limitations become evident:
* Intel SGX offers a hardware-based approach to creating enclaves, providing strong isolation for secure computations. However, it requires the involvement of a large software stack to build and manage enclaves. This includes numerous libraries and dependencies, which significantly increase the Trusted Computing Base (TCB). A larger TCB introduces a broader attack surface and makes the system more challenging to secure.

* AMD SEV (Secure Encrypted Virtualization) provides a hardware-based method to create secure enclaves, treating them as virtual machines with built-in encryption. While this approach differs from Intel SGX, it also suffers from the drawback of requiring extensive development and a large TCB. The reliance on a comprehensive software stack for managing these virtualized environments limits the system’s flexibility and security.
* ARM TrustZone implements a simpler model with only two domains: the untrusted domain (Rich Execution Environment) and the trusted domain (Trusted Execution Environment). However, its design lacks the capability to create additional domains or segregated secure enclaves. This rigidity restricts the level of customization and limits its utility for more complex use cases requiring finer-grained isolation.
12
Q

Keystone architecture

A

At the core of Keystone’s architecture is trusted hardware, which includes essential components such as the RISC-V cores used for normal operations. These cores can be enhanced with optional hardware features, such as cryptographic extensions or specialized instructions for artificial intelligence workloads, depending on the needs of the application.
Additionally, a Root of Trust (RoT) is included to ensure secure boot and establish the foundational trust required for the system.

One of Keystone’s key principles is minimizing the Trusted Computing Base (TCB). To achieve this, it avoids running a traditional operating system or hypervisor directly on the hardware. Instead, it employs a lightweight Security Monitor (SM) that operates in machine mode (m-mode), the highest privilege level in the RISC-V architecture.
The Security Monitor’s sole responsibility is access control; it mediates all requests from upper layers to the hardware, ensuring only authorized operations are permitted. By limiting its functionality to this single task, the Security Monitor remains simple and secure, avoiding unnecessary complexity.

Keystone’s system architecture is organized into two main domains. The first is the untrusted domain, which can run general-purpose operating systems like Linux or Android. This domain operates in supervisor mode and is used for non-sensitive tasks, offering flexibility to execute standard applications without security concerns.

The second and more critical domain consists of the trusted enclaves. These enclaves are designed to operate in user mode, the least privileged level, and are intended for running sensitive applications securely.
Each enclave is independent and runs a single application, avoiding the need for a general-purpose operating system.
Instead, Keystone provides a minimal layer known as the Keystone Runtime, which acts as a lightweight operating system, offering only the features necessary for the enclave’s specific application.
This approach reduces the complexity of the system and ensures that each enclave’s runtime is customized for its application’s needs.

At the top of each enclave is the trusted application, referred to as an Enclave Application (Eapp) in Keystone’s terminology. These applications are isolated from each other and the untrusted domain, ensuring robust security.

Keystone allows for the creation of multiple enclaves, each running its own Eapp, isolated from the other enclaves and from the untrusted domain.

13
Q

BIOS and UEFI

A

Attackers often target the lowest levels of a system, such as the boot process or firmware, to inject malware. This strategy provides two key advantages: it reduces the likelihood of detection and increases the scope of control over the system.

By compromising these low-level components, attackers can:
* modify the OS.
* try to boot an alternative OS.
* modify the boot sequence or the boot loader, enabling untrusted components to coexist with trusted ones undetected.

To counter these threats, it is needed to protect the boot process and the operating system.
Historically, systems relied on the BIOS (Basic Input/Output System) as the initial layer of firmware for starting the system. However, BIOS implementations were often vendor-specific and difficult to protect.

This limitation led to the development of the Unified Extensible Firmware Interface (UEFI), now standard on most modern devices.
UEFI provides a standardized firmware environment and incorporates native support for firmware signature and verification.
During the boot process, the UEFI firmware is signed by the platform manufacturer and verified by the hardware. If the verification process detects any tampering or modifications, the boot process is stopped, preventing the system from starting.

Once the UEFI firmware is verified, the bootloader becomes the next critical component in the trusted chain.

The verified bootloader is responsible for checking the integrity of the operating system before loading it into memory. This ensures that only authorized and unaltered operating systems are executed, maintaining the trust established during the earlier stages of the boot process.
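
The verified-boot chain described above can be sketched as each stage checking the next before handing over control. For a stdlib-only sketch, raw reference hashes stand in for the vendor-signature verification that real UEFI performs; all image contents and names are made up.

```python
import hashlib

# Hypothetical sketch of the secure-boot chain: each stage holds a
# reference value for the next stage and refuses to continue if the
# measured hash differs. Real firmware verifies manufacturer
# signatures rather than comparing raw hashes.

def sha256(data):
    return hashlib.sha256(data).hexdigest()

bootloader_image = b"bootloader v1.2"
os_image = b"kernel v6.1"

# Reference values fixed at signing/manufacturing time:
FIRMWARE_EXPECTS = sha256(bootloader_image)
BOOTLOADER_EXPECTS = sha256(os_image)

def boot(bootloader, kernel):
    if sha256(bootloader) != FIRMWARE_EXPECTS:
        raise SystemError("UEFI: bootloader verification failed, halting")
    if sha256(kernel) != BOOTLOADER_EXPECTS:
        raise SystemError("bootloader: OS verification failed, halting")
    return "OS running"

assert boot(bootloader_image, os_image) == "OS running"
try:
    boot(bootloader_image, b"kernel v6.1 + rootkit")  # tampered OS halts the boot
except SystemError:
    pass
```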

14
Q

Rootkits

A

Rootkits are malware designed to provide root access, the highest level of privilege on a machine.

The goal of rootkits is to operate undetected by compromising critical system components, often before the operating system fully loads.

  • Firmware Rootkit: targets the firmware of the BIOS/UEFI, or the firmware of other hardware components, so that the rootkit starts before the OS.
  • Bootkit: replaces the operating system’s bootloader so that the machine loads the bootkit before the OS. A bootkit ensures that it is loaded before the operating system, allowing it to create an invisible layer that operates below the OS but above the hardware.
    The OS performs as expected, but the bootkit runs concurrently in memory, granting attackers control without raising suspicion.
  • Kernel rootkits: replace portions of the kernel, ensuring they load automatically with the operating system in order to gain persistent kernel-level access.
  • Driver rootkits: compromise the drivers that are loaded at boot. By masquerading as a trusted driver, these rootkits can intercept and alter communication between the operating system and specific hardware devices.
15
Q

Boot types

A

When the BIOS has successfully started, the next step is to boot the operating system (OS).

There are different boot types, each offering different levels of security:
* Plain Boot: default boot process with no security measures in place. It performs a normal boot without any verification of integrity or authenticity.

  • Secure Boot: the firmware verifies the signature of the components it loads and halts the platform if the verification fails. Secure boot is mostly hardware-based (relying on a crypto chip or the contents of BIOS memory) and verifies up to the OS loader. Secure boot is a responsibility of the Hardware manufacturer (for chip verification). The OS loader, being part of the firmware, is checked, while the actual operating system resides on the disk. If the signature verification fails, the platform does not proceed to boot the OS.
  • Trusted Boot: Trusted boot assumes that the initial part of the boot process, up to the firmware, was executed securely. It focuses on verifying the integrity of the OS components such as drivers and anti-malware software.
    If the signature verification of these components fails, the operating system itself will not start.
    Unlike secure boot, this process operates only at the software level and verifies the OS’s operational state. Trusted boot is a responsibility of the OS manufacturer (for OS verification).
  • Measured Boot: This mechanism operates in parallel, introducing a detection-based approach and does not stop the system. It measures all components executed from boot up to a defined level (e.g., OS operational state) by calculating their hashes. These measurements do not halt operations; instead, they are securely reported to an external verifier.
    The external verifier periodically queries the system, asking for its status and reviewing the reported measurements. If the reports indicate tampering or inconsistency, the verifier can classify the system as untrusted and take appropriate actions, such as isolating the node from the infrastructure.
    Measured boot ensures that even if the platform is attacked, the integrity of the reported measurements cannot be faked or manipulated.
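
The measurement mechanism behind measured boot can be sketched with a TPM-style "extend" operation: each component's hash is folded into a running register, so the final value depends on every component and on their order. The component names are placeholders.

```python
import hashlib

# Hypothetical sketch of measured boot. Each component is hashed and
# folded ("extended") into a running register, as a TPM PCR does:
#     PCR_new = SHA-256(PCR_old || SHA-256(component))
# Nothing is halted: the final value is only reported to an external
# verifier, which compares it against an expected value.

def extend(pcr: bytes, component: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measured_boot(components):
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good = measured_boot([b"firmware", b"bootloader", b"kernel"])
bad = measured_boot([b"firmware", b"bootkit", b"kernel"])
assert good != bad                                             # tampering changes the report
assert good == measured_boot([b"firmware", b"bootloader", b"kernel"])  # reproducible
```

Because `extend` is one-way, malware cannot compute an input that would roll the register back to a "clean" value, which is why the reported measurements cannot be faked.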
16
Q

Trusted Computing and attestation process

A

To establish trust in a platform, an attestation process is performed; Trusted Computing defines schemes for establishing such trust.

Attestation provides evidence of the platform’s current state, which can be independently verified by an external party. By state, we refer to the software state, considering all running applications and configurations.

The foundation for such trust lies in the Root of Trust (RoT). The Root of Trust is a component or mechanism that must inherently be trusted. For instance, it could be established through Secure Boot, where the starting point is the microcontroller or the firmware itself performing self-verification. Trust is then built incrementally.

One key element in this process is the Trusted Platform Module (TPM).
The TPM provides methods for collecting and reporting the identities of the platform’s hardware and software components.
* The TPM does not stop the system from operating but acts as a Trusted Reporter. It offers undeniable evidence of the platform’s current state, which cannot be faked or tampered with.
* A TPM used in a computer system reports on the hardware and software state in a way that allows determination of expected behaviour and, from that expectation, establishment of trust.

17
Q

TCB vs TPM

A

The Trusted Computing Base (TCB) is a collection of system resources, including both hardware and software, responsible for maintaining the security policy of the system.

A key attribute of the TCB is its ability to prevent itself from being compromised by any hardware or software that is not part of the TCB. The TCB is self-protecting, meaning it can resist modification or interference from other components.

For example, both self-verification mechanisms and external hardware Roots of Trust (RoT) are part of the TCB, as they cannot be altered by any hardware or software (excluding physical manipulation, which remains a feasible attack vector).

The Trusted Platform Module (TPM) is not part of the TCB. Instead, the TPM acts as a component that enables an independent entity (typically external to the system, known as an external verifier) to determine whether the TCB has been compromised.
In rare cases, the TPM can be configured to prevent the system from starting if the TCB cannot be correctly instantiated.

Unlike the secure boot process discussed earlier, a TPM allows the entire BIOS and boot sequence to be verified. The TPM can then provide a flag (good/bad) indicating the result of the verification. In some configurations, if the verification result is bad, physical intervention is required to halt the system.

18
Q

RoT

A

A Root of Trust (RoT) is a component that must always behave in the expected manner. If it misbehaves, its misbehavior cannot be detected, making it a fundamental building block for establishing trust in a platform. Typically, the RoT includes a hardware component and may also include certain software parts.

In a trusted computing environment, there are different types of Roots of Trust:
* Root of Trust for Measurement (RTM): responsible for measuring and sending integrity measurements to the RTS. Typically, at boot the CPU executes the Core Root of Trust for Measurement (CRTM), the first piece of BIOS/UEFI code, to start the chain of trust.
* Root of Trust for Storage (RTS): A secured/shielded portion of memory designed for storage. Shielding ensures that only the CRTM can modify the values stored within the RTS.
* Root of Trust for Reporting (RTR): An entity responsible for securely reporting the contents of the RTS to external verifiers.

The process works as follows:
1. The RTM computes the measurement.
2. The measurement is securely stored in the RTS.
3. When required, the RTR retrieves the measurement from the RTS and provides it to an external verifier.

The Trusted Platform Module (TPM) typically combines the functionalities of the RTS and RTR. It acts as a secure storage component and as a trusted entity for reporting. This means the TPM will only report what is stored within its secure storage.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

Chain of trust

A

In general, the process of measurement and verification involves the interaction of multiple components:
* Component A measures the integrity of Component B and, once the measurements are completed, stores the results in the Root of Trust for Storage (RTS).
* Component B, in turn, performs similar tasks by measuring Component C, storing those integrity measurements in the RTS as well.
* And so on…

With these measurements stored, the Root of Trust for Reporting (RTR) can be queried to retrieve the measurements of Component B and Component C from the RTS. If Component A is trustworthy, the verifier can trust these measurements to determine the integrity of Component B and Component C; for this reason, Component A is typically the CRTM, which is part of the TCB. Keep in mind that B and C can only be trusted if A is trustworthy.
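The chain described above can be sketched in a few lines of Python. This is an illustrative model only, not a real TPM interface: the component names and the list standing in for the RTS are hypothetical.

```python
import hashlib

def measure(component: bytes) -> str:
    """Compute an integrity measurement (digest) of a component."""
    return hashlib.sha256(component).hexdigest()

# The RTS modeled as an append-only list of measurements.
rts = []

# Component A (the CRTM, trusted by assumption) measures B; B measures C.
component_b = b"bootloader code"
component_c = b"kernel code"
rts.append(("B", measure(component_b)))   # A stores B's measurement
rts.append(("C", measure(component_c)))   # B stores C's measurement

def rtr_report():
    """The RTR reports the stored measurements to a verifier."""
    return list(rts)

# A verifier compares the report against known-good (golden) values.
golden = {"B": measure(b"bootloader code"), "C": measure(b"kernel code")}
assert all(golden[name] == digest for name, digest in rtr_report())
```

The point of the model: the verifier only ever sees what the RTR reports, so its trust in B and C reduces entirely to its trust in A.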

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

TPM features

A
  • inexpensive
  • tamper-resistant (but not tamper-proof).
  • It is a passive component that needs to be driven by the CPU
  • RTS (Root of Trust for Storage): this special memory supports a unique operation called extend
  • RTR (Root of Trust for Reporting): the TPM contains a private-public key pair, and every time the content of the RTS is extracted, it is accompanied by a digital signature. This guarantees the integrity of the data, ensuring that: data cannot be altered, fake data cannot be generated, the reported values reflect the correct content of the RTS at that moment.
  • Hardware Random Number Generator: The TPM includes a true hardware-based random number generator, not a pseudo-RNG.
  • It performs crypto algorithms but it is not a crypto accelerator; it is slow.
  • Secure Key Generation: The TPM securely generates cryptographic keys for specific, limited uses.
  • Remote Attestation: The TPM can be used for remote attestation, storing a hash summary of the hardware and software configuration. This allows a third party to verify that the software has not been altered.
  • Binding: data encrypted using the TPM’s bind key (a specific, unique RSA key derived from a storage key) cannot be decrypted outside that particular TPM. This ensures that even if data is stolen, it cannot be decrypted without the specific TPM (physical dependence).
  • Sealing: this provides an additional layer of security. Data is encrypted using an internal TPM key, and decryption requires the TPM’s state to match the state at the time of encryption (the same software state and configuration must be running). State refers to the collection of all running applications and configuration files.
  • Authentication of Hardware Devices: each TPM chip has a unique and secret **Endorsement Key** (EK), burned into the chip during production. This allows the clear identification of a specific machine.
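As a rough illustration of the sealing idea, here is a toy model in Python. It is not a real TPM API: a real TPM encrypts the blob under an internal key, while this sketch only models the state-match check that gates release of the data.

```python
import hashlib

def platform_state_digest(pcrs: list) -> bytes:
    """Summarize the software state (the PCR values) into one digest."""
    return hashlib.sha256(b"".join(pcrs)).digest()

def seal(data: bytes, pcrs: list) -> dict:
    """Bind data to the current platform state (conceptual model only)."""
    return {"blob": data, "state": platform_state_digest(pcrs)}

def unseal(sealed: dict, pcrs: list) -> bytes:
    """Release the data only if the current state matches the sealed state."""
    if platform_state_digest(pcrs) != sealed["state"]:
        raise PermissionError("platform state changed: refusing to unseal")
    return sealed["blob"]

boot_pcrs = [b"\x01" * 32, b"\x02" * 32]
sealed = seal(b"disk encryption key", boot_pcrs)
assert unseal(sealed, boot_pcrs) == b"disk encryption key"

# A changed PCR (different software state) blocks unsealing.
tampered_pcrs = [b"\xff" * 32, b"\x02" * 32]
try:
    unseal(sealed, tampered_pcrs)
except PermissionError:
    pass  # expected: state mismatch
```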
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

TPM-1.2

A

This version was a minimal implementation, designed with limited flexibility but sufficient for its time.
* Fixed Set of Algorithms: TPM 1.2 relied on SHA-1 for computing hashes and RSA for signatures, verification, and encryption of keys. Optionally, AES could be used.
* Single Storage Hierarchy: There was only one storage hierarchy for the platform user.
* One Root Key: The Storage Root Key (SRK), an RSA-2048 key, served as the root key for storage.
* Built-in Hardware Identity: The Endorsement Key (EK), also an RSA-2048 key, provided a unique hardware identity.
* Sealing Tied to PCR Values: Sealing (where data can only be decrypted in a specific state) was tied to Platform Configuration Register (PCR) values. PCRs are special registers inside the TPM that store collected measurements of the system’s configuration.

The Persistent Memory of TPM 1.2 included:
* The Endorsement Key (EK).
* The Storage Root Key (SRK).

The Flexible Memory (or Versatile Memory) was used for other purposes and included:
* Platform Configuration Registers (PCRs): These registers recorded the current configuration of the system by collecting measurements.
* Attestation Identity Keys (AIKs): These keys were used to sign external reports for performing remote attestation.
* Storage for Additional Keys: This storage could hold various keys required during operations.

22
Q

TPM2.0

A
  • cryptographic agility, meaning that it is able to support different (and also future) cryptographic primitives with rapid adaptation. For backward compatibility, it continues to support SHA-1, but it also offers SHA-256. It retains RSA for compatibility but can also perform signature, verification, and encryption using ECC-256. Native features include HMAC based on SHA-1 or SHA-256 and AES-128 as a minimum, with other features being optional.
  • three key hierarchies: each hierarchy supports multiple keys and algorithms.
  • When changes are needed in the TPM, policy-based authorization is used. This approach allows for multiple forms of authorization, such as two-factor authentication or fingerprint verification. This is a significant improvement over TPM 1.2, which relied on a single password for all changes.
  • Additionally, TPM 2.0 includes platform-specific specifications, extending its application beyond PC clients to mobile and automotive environments, where trust is crucial.

TPM 2.0 can be implemented in several forms, each with specific characteristics:
* Discrete TPM: A dedicated chip implementing TPM functionality within a tamper-resistant semiconductor package.
* Integrated TPM: The TPM functionality is embedded as part of another chip. In this case, tamper resistance derives from the host chip.
* Firmware TPM: A software-only solution where the TPM functionality runs in the CPU’s Trusted Execution Environment (TEE).
* Hypervisor TPM: A virtual TPM provided by a hypervisor. It runs in an isolated execution environment with a security level comparable to a firmware TPM.
* Software TPM: A software emulator of the TPM running in user space. It is primarily used for development purposes and does not provide real security.

The type of TPM implementation determines the Trusted Computing Base:
* Discrete TPM: The hardware Root of Trust (RoT) is a separate component, ensuring high security.
* Integrated TPM: The RoT is embedded within the chip, relying on the assumption that the manufacturer has implemented and maintained the TPM correctly.
* Firmware TPM: The software becomes part of the TCB, requiring trust in the firmware’s integrity and mechanisms such as Secure Boot.
* Hypervisor TPM: Trust is abstracted, as the user has no insight into the hypervisor’s operations.
* Software TPM: It does not contribute to security and is only suitable for development purposes.

23
Q

TPM-2.0 three hierarchies

A

One of the main features of a TPM is generating keys and using those keys to attest facts about the TPM.

Instead of storing keys directly, TPMs have secret values called “seeds” that never leave the TPM and persist through reboots. Seeds are used to deterministically generate keys, which can in turn identify the TPM even if the external storage is wiped (e.g. during OS installs).

There are three seeds and associated hierarchies:
* Platform Hierarchy: for the platform’s firmware. It contains non-volatile storage, keys, and data related to the platform.
* Endorsement Hierarchy: used by the privacy administrator for storing keys and data related to privacy; typically it holds endorsement keys used to identify the TPM.
* Storage Hierarchy: used by the platform’s owner (who is usually also the privacy administrator); it contains non-volatile storage, keys, and data. Typically, storage keys are used by local applications.

Each hierarchy has a dedicated authorization (a password as a minimum) and a policy (which can be very simple or complex). Each hierarchy also has a different seed for generating its primary keys, so keys of the various hierarchies are unrelated.
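The seed-based key derivation can be illustrated with a simplified stand-in for the TPM’s internal KDF. The function name, seeds, and templates below are hypothetical; a real TPM uses a standardized KDF and the seeds never leave the chip.

```python
import hashlib
import hmac

def derive_primary_key(seed: bytes, template: bytes) -> bytes:
    """Deterministically derive a primary key from a hierarchy seed and
    a key template (simplified stand-in for the TPM's internal KDF)."""
    return hmac.new(seed, template, hashlib.sha256).digest()

endorsement_seed = b"secret seed that never leaves the TPM"
template = b"ECC-256 signing key, default policy"

k1 = derive_primary_key(endorsement_seed, template)
k2 = derive_primary_key(endorsement_seed, template)
assert k1 == k2  # same seed + same template -> same key, even after a reboot

storage_seed = b"a different hierarchy seed"
assert derive_primary_key(storage_seed, template) != k1  # hierarchies unrelated
```

This is why a destroyed primary key can be recreated (same seed, same parameters) and why keys from different hierarchies are cryptographically unrelated.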

24
Q

Secure storage methods in TPM

A

Physical Isolation
Data is stored directly within the Non-Volatile RAM (NV-RAM) of the TPM. This memory is very limited in size, so only critical items such as primary keys and permanent keys are typically stored.
Access to these keys is controlled using Mandatory Access Control (MAC), enforcing policies that strictly govern who or what can access them.

Cryptographic Isolation
Large quantities of data or additional keys are stored outside the TPM, for example, on magnetic or solid-state disks.
These external items, referred to as blobs, consist of encrypted sets of bits that need to be protected, such as keys or data.
Even though the data or keys are stored outside the TPM, they can only be decrypted using the specific TPM that controls the encryption key.
Mandatory Access Control also applies here, ensuring that decryption and access to the TPM-controlled key follow strict policies.
A key advantage of cryptographic isolation is its ability to store large amounts of data securely. The external storage provides effectively unlimited capacity. However, decrypting this data requires the specific TPM, making migration to another platform challenging without the original TPM.

Both methods leverage the TPM’s secure infrastructure and enforce Mandatory Access Control to ensure the integrity and confidentiality of the stored data or keys.

25
Q

TPM objects

A

These are the objects that are managed by the TPM:
* Primary keys: the TPM never returns the private value, so the private keys remain inside the TPM and cannot be extracted. These keys can be recreated using the same parameters, assuming the primary seed has not been changed: even if a primary key is destroyed or lost, it can be recreated from the same seed and the same parameters.

* Keys and sealed data objects (SDOs): they are protected by a Storage Parent Key (SPK). The SPK must be loaded in the TPM to create or load a key or SDO. Randomness for these keys comes from the TPM’s internal RNG. The TPM returns the private part of these keys encrypted under the SPK, so the private part is stored externally, but the same TPM (holding that SPK) is then needed to decrypt it.
26
Q

TPM object’s area

A
  • Public Area: used to uniquely identify an object; it is accessible even without special permissions, allowing the listing of objects stored inside the TPM. Mandatory, and resides within the TPM.
  • Private Area: contains the object’s secrets. It exists only inside the TPM, ensuring high security. Mandatory, and resides within the TPM.
  • Sensitive Area: an encrypted copy of the private area used for storage outside the TPM. Optional; it is not required for all objects.
27
Q

PCRs

A

The Trusted Platform Module (TPM) can report the current state of the system using a special set of registers called Platform Configuration Registers (PCRs). PCRs are fundamental to the TPM’s implementation of the Root of Trust for Storage (RTS) and serve as the primary mechanism for recording platform integrity.

Security features:
* Reset Mechanism: PCRs can only be reset during a platform reset (e.g., after a system reboot) or via a hardware signal that requires physical access. When reset, all PCR values are initialized to 0. This prevents malicious code from modifying measurements or rolling back the PCR values once they have been updated.

  • Extend Operation: PCRs have an operation called EXTEND. This operation updates the PCR value by concatenating the current PCR value with the digest of new data, then hashing the result. Formally, this operation is defined as:
    PCR_new = hash(PCR_old || digest_of_new_data)
    The EXTEND operation ensures that the PCR value accumulates a sequential history of all measurements, making it impossible to overwrite or alter individual entries. Importantly, the operation is non-commutative, meaning that the order of measurements affects the final value.
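A minimal sketch of the EXTEND operation, assuming SHA-256 as the hash (the formula above, directly transcribed):

```python
import hashlib

def extend(pcr: bytes, data: bytes) -> bytes:
    """PCR_new = hash(PCR_old || digest_of_new_data)"""
    digest = hashlib.sha256(data).digest()
    return hashlib.sha256(pcr + digest).digest()

pcr = b"\x00" * 32                       # PCRs start at zero after a reset
pcr = extend(pcr, b"first measurement")
pcr = extend(pcr, b"second measurement")

# Non-commutative: a different order yields a different final value.
other = extend(extend(b"\x00" * 32, b"second measurement"),
               b"first measurement")
assert pcr != other
```

Because each new value folds in the previous one, the PCR accumulates the whole history: no individual entry can be overwritten without changing the final value.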
28
Q

Measured Boot Process

A

The process involves several stages, where each step measures the integrity of the next and records the results in the Platform Configuration Registers (PCRs).

Here is a description of how Measured Boot operates:
1. The first stage of the boot loader, which is assumed to be trusted, constitutes the Core Root of Trust for Measurement (CRTM). Its hash value is computed and stored in the PCR. This forms the initial measurement of the system’s integrity.
2. The first stage boot loader then measures the second stage boot loader (e.g., UEFI/BIOS) by computing its hash. After taking this measurement, it loads and executes the second stage boot loader. The hash of this measurement is extended into the PCR using the EXTEND operation.
3. The second stage boot loader measures the operating system before loading and executing it. The new measurement is extended into the PCR.
4. Optionally, the operating system itself can be modified to continue this process. When the OS starts an application, it measures the application, computes its hash, and extends the value into the PCR. This can create a complete record of everything executed on the system, forming a history of all measurements in the PCR.
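The staged measure-then-execute chain above might be sketched as follows; the stage names are illustrative, and execution of each stage is omitted.

```python
import hashlib

def extend(pcr: bytes, data: bytes) -> bytes:
    """PCR_new = hash(PCR_old || digest_of_new_data)"""
    digest = hashlib.sha256(data).digest()
    return hashlib.sha256(pcr + digest).digest()

# Hypothetical boot stages; each is measured BEFORE it is executed.
stages = [b"first-stage loader", b"UEFI/BIOS", b"operating system", b"app"]

pcr = b"\x00" * 32
for stage in stages:
    pcr = extend(pcr, stage)  # 1) measure and record in the PCR
    # 2) ...then load and execute the stage (omitted in this sketch)

# Changing any single stage changes the final PCR value.
tampered = list(stages)
tampered[1] = b"patched UEFI/BIOS"
pcr2 = b"\x00" * 32
for stage in tampered:
    pcr2 = extend(pcr2, stage)
assert pcr != pcr2
```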

29
Q

Remote Attestation Procedure

A

In remote attestation, there is a platform with the TPM and an external verifier, typically remote (via a network).

The process proceeds as follows:
1. When the verifier wants to determine if the platform is in a trustworthy state, it sends a challenge (a nonce to avoid replay attacks) to the platform.
2. In response to the challenge, the TPM reads all the values present in the PCRs. This list of values is digitally signed using the TPM’s Device Identifier (DevID). This ensures that the values have not been tampered with, that the response is unique to this specific TPM and this specific challenge, and that replay attacks are prevented because the response depends on the challenge, which is a nonce.
3. The external verifier performs validation in two steps: (1) Signature Validation: The verifier validates the signature cryptographically using the device’s identifier (DevID). (2) Measurement Comparison: The verifier compares the reported PCR values against Reference Measurements, also known as golden values. These reference values represent the expected state of the platform, including the software executed and the sequence in which it was executed.
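The two verification steps can be modeled roughly as follows. The HMAC here is only a stand-in for the TPM’s asymmetric DevID signature, and all names are hypothetical.

```python
import hashlib
import hmac
import os

devid_key = b"DevID stand-in (a real TPM uses an asymmetric key pair)"

def tpm_quote(nonce: bytes, pcr_values: list) -> dict:
    """The TPM signs the PCR values together with the verifier's nonce."""
    payload = nonce + b"".join(pcr_values)
    return {"pcrs": pcr_values,
            "sig": hmac.new(devid_key, payload, hashlib.sha256).digest()}

def verify_quote(nonce: bytes, quote: dict, golden: list) -> bool:
    payload = nonce + b"".join(quote["pcrs"])
    expected = hmac.new(devid_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, quote["sig"]):
        return False                   # step 1: signature validation
    return quote["pcrs"] == golden     # step 2: measurement comparison

nonce = os.urandom(16)                 # fresh nonce prevents replay
pcrs = [hashlib.sha256(b"stage %d" % i).digest() for i in range(3)]
quote = tpm_quote(nonce, pcrs)
assert verify_quote(nonce, quote, golden=pcrs)
assert not verify_quote(os.urandom(16), quote, golden=pcrs)  # replay fails
```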

30
Q

Management of Remote Attestation

A

The first decision in remote attestation is whether to perform static attestation, focusing only on boot processes, or dynamic attestation, which involves periodic checks during runtime. The choice depends on the attack model:
* Static Attestation: Suitable for systems targeting static threats and the system is verified only once, during startup.
* Dynamic Attestation: Necessary for environments where runtime vulnerabilities exist, in such cases, periodic checks are essential.

For dynamic attestation, the frequency of checks becomes critical. Periodic checks introduce the following considerations:
* Speed of Attack: The periodicity must match the potential attack window. For example, checking once an hour may leave the system vulnerable during that interval.
* Physical and Performance Constraints: The attestation cycle involves generating a nonce, extracting PCR values, signing them, transmitting results, and performing a database lookup. Among these, the TPM’s cryptographic signature operation is typically the slowest step due to hardware limitations. Current implementations support checks as frequently as every five seconds, which balances security and system constraints for many use cases.

Managing golden values (reference measurements) is a significant challenge in remote attestation, particularly in dynamic and complex environments. Some key points include:
* Whitelist Generation: For general-purpose systems like clouds, generating a comprehensive whitelist is challenging because it requires enumerating all possible software components and configurations. Conversely, this is more manageable in limited environments such as IoT devices, edge devices, SDN (Software Defined Networks), or NFV (Network Function Virtualization).
* Labeling: Maintaining a labeled database of golden values can improve management. Labels such as good, old, buggy, and vulnerable help administrators make informed decisions.

In addition to measuring software components, remote attestation should consider system configurations. Configurations can originate from various sources:
* File-Based Configurations: These are easier to measure because the TPM can compute a hash of the configuration file contents. For example, an iptables configuration file can be measured when iptables is loaded.
* Memory-Based Configurations: These are more challenging as they depend on platform-specific implementations. For example, a router configured via NETCONF stores its configuration in memory, making file-based measurement infeasible. Addressing this requires memory inspection tools that can extract and hash in-memory configuration states for verification.

31
Q

PCRs values

A
  • PCR0 measures the platform firmware from the system-board ROM; it is a fixed value for a given firmware version. Typically, a database holds different PCR0 values per firmware version, so the version running on a device can be deduced from this value. It also covers the UEFI boot services and UEFI runtime services.
  • PCR1 contains the hash of various firmware extensions (ACPI, SMBIOS, and others).
  • PCR2 measures drivers loaded from disk.
  • PCR4 measures the UEFI OS loader and the legacy OS loader.
  • PCR5 covers platform hardware, for example by reading and hashing the partition table; this matters if someone has manipulated the hardware.
  • PCR7 contains the policy for Secure Boot.
  • Registers from PCR8 upward are handed to the OS, which decides what they will be used for.
32
Q

Linux’s IMA

A

IMA (Integrity Measurement Architecture) is an additional component inside Linux that extends the Root of Trust to applications (extending attestation to dynamic execution). It performs several operations:

  • Collect: every time the kernel is invoked to access a file (via the exec, read, write, … system calls), IMA measures the binary before running it.
  • Store: After measuring, IMA stores the value in one of the PCRs (typically PCRs 8 to 15). The measurement is added to a kernel-resident list, and the corresponding IMA PCR is extended with the new value.
  • Appraise (Optional): IMA can optionally enforce local validation of a measurement against a good value stored in an extended attribute of the file. Certain Linux file systems allow files to have extended attributes in addition to standard ones (e.g., owner, group, permissions). If this feature is enabled:
    – The good value (expected hash) is stored in the file’s extended attributes.
    – When the binary is executed, its hash is compared against the stored value.
    – If the hashes do not match, the binary will not be executed. This ensures that binaries and their expected values are not altered.
    – However, this approach relies on trust that the extended attributes themselves have not been modified.
  • Protect: IMA protects a file’s security extended attributes (including the appraisal hash) against offline attacks. For instance:
    – An attacker could unplug the disk, attach it to another machine, and modify the hash values stored in the extended attributes.
    – To counter such attacks, IMA computes an HMAC over the attributes using a secret key. Without this key, an attacker cannot recompute the HMAC, and any tampering will result in a mismatch.
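The protect step can be illustrated with a toy model; the key and attribute values below are hypothetical, and in real IMA the key is held by the kernel (often itself TPM-protected) rather than stored in code.

```python
import hashlib
import hmac

ima_key = b"kernel-held secret key"  # hypothetical; never stored on the disk

def protect_xattrs(attrs: dict) -> bytes:
    """HMAC over the security extended attributes, as in IMA's protect step."""
    blob = b"".join(k.encode() + b"=" + v for k, v in sorted(attrs.items()))
    return hmac.new(ima_key, blob, hashlib.sha256).digest()

attrs = {"security.ima": hashlib.sha256(b"binary contents").digest()}
mac = protect_xattrs(attrs)

# Offline attack: the disk is moved and the stored hash is rewritten.
attrs["security.ima"] = hashlib.sha256(b"malicious binary").digest()
assert not hmac.compare_digest(mac, protect_xattrs(attrs))  # tampering detected
```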

The measurement configuration of what IMA measures is determined by the IMA template, which defines the data structure for measurements. Most systems use the ima-ng template, but this can be customized as needed.
All measurements performed by IMA are recorded in a file located at /sys/kernel/security/ima/ascii_runtime_measurements. This file resides in the kernel’s securityfs and contains a table summarizing the measurements in a human-readable format, listing each measured object and its corresponding hash values.
The IMA measurement table provides detailed information about the components being executed and extended into PCR10. The table includes entries showing the PCR number, the template-hash value, and subsequent updates made using the specified template. Each entry includes the filedata-hash value and the corresponding filename-hint, which identifies the component.
This table is crucial because, by examining only the value stored in PCR10, it is impossible to determine whether the value is correct or not. The table provides the necessary context, detailing which commands were executed and in what order. This information allows for precise verification of the measurements and ensures the integrity of the recorded data.
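A small parser for entries of this shape could look like this. The sample line is fabricated and shortened, and assumes the common ima-ng field layout (PCR, template-hash, template name, filedata-hash as algo:hex, filename hint):

```python
def parse_ima_ng_line(line: str) -> dict:
    """Parse one ima-ng entry: PCR, template-hash, template name,
    filedata hash (algo:hex), and the filename hint."""
    pcr, template_hash, template, filedata, path = line.split(maxsplit=4)
    algo, file_hash = filedata.split(":", 1)
    return {"pcr": int(pcr), "template_hash": template_hash,
            "template": template, "algo": algo,
            "file_hash": file_hash, "path": path}

# Illustrative entry (hashes shortened/fabricated for the sketch):
entry = parse_ima_ng_line("10 1d8d5... ima-ng sha256:9f86d... /usr/bin/bash")
assert entry["pcr"] == 10 and entry["path"] == "/usr/bin/bash"
```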

33
Q

Verification of the IMA ML

A

To verify the integrity of the system when IMA is enabled, the attestation process includes not only the nonce and PCR values but also the **Measurement List** (ML).

Verifying PCR10 is particularly challenging because its value depends on two factors:
(A) the applications executed and
(B) the order of their execution.

To determine whether the returned PCR10 value is correct, the following steps are performed:
1. Start with an initial PCR10 value of 0 in the verifier (representing the reset state).
2. Extend the boot_aggregate value, which is derived from the concatenation of PCRs 0 through 7. This value is added to the verifier’s PCR10 using the same EXTEND operation performed by the TPM.
3. Process each measurement m of a component c from the Measurement List (ML) in the same order as they were performed on the system:
* If the measure of component c differs from its golden value, raise an alarm. This indicates that either the component is not permitted or has been manipulated, compromising the system integrity.
* If the measure matches the golden value, extend PCR10 in the verifier with the measure of c. This operation simulates the actual TPM EXTEND operation but is performed in the verifier’s memory.
4. Continue processing all measurements in the list. Once the entire list has been scanned and processed, compare the verifier’s PCR10 value with the PCR10 value returned in the attestation response from the system.
* If the values match, the system is verified as intact.
* If the values differ, raise an alarm, as this indicates tampering at some point in the boot, OS, or application execution process.

This approach allows comprehensive attestation, covering all system components from the boot process to the operating system, including applications and file-based configurations. By re-performing the EXTEND operations in memory, the verifier can accurately validate the integrity of the system.
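The verifier’s replay of the Measurement List can be sketched as follows; the component names and digests are illustrative.

```python
import hashlib

def extend(pcr: bytes, digest: bytes) -> bytes:
    return hashlib.sha256(pcr + digest).digest()

def verify_ml(ml, golden, reported_pcr10, boot_aggregate):
    """Replay the Measurement List in the verifier's memory and compare
    the recomputed PCR10 with the value reported by the TPM."""
    pcr10 = extend(b"\x00" * 32, boot_aggregate)  # steps 1-2
    for name, digest in ml:                       # step 3, in order
        if golden.get(name) != digest:
            raise ValueError(f"unexpected measurement for {name}")
        pcr10 = extend(pcr10, digest)
    return pcr10 == reported_pcr10                # step 4

boot_aggregate = hashlib.sha256(b"PCR0..PCR7").digest()
ml = [("sshd", hashlib.sha256(b"sshd binary").digest()),
      ("bash", hashlib.sha256(b"bash binary").digest())]
golden = dict(ml)

# The TPM-side PCR10 produced by the same sequence of extends:
tpm_pcr10 = extend(b"\x00" * 32, boot_aggregate)
for _, d in ml:
    tpm_pcr10 = extend(tpm_pcr10, d)

assert verify_ml(ml, golden, tpm_pcr10, boot_aggregate)
# The same entries in a different order no longer match the reported value.
assert not verify_ml(list(reversed(ml)), golden, tpm_pcr10, boot_aggregate)
```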

34
Q

Size and variability of the TCB

A

A large, variable TCB is harder to trust. To mitigate these risks, efforts can be made to improve confidence in the TCB through techniques such as:
* Static Verification: Analyzing the source code to identify potential issues.
* Code Inspection: Manually reviewing the code for correctness.
* Testing: Running tests to validate the behavior of the TCB.
* Formal Methods: Using mathematical approaches to prove the correctness of the TCB.
While these methods can be effective, they are also expensive and time-consuming.

Therefore, reducing the complexity of the TCB remains a critical objective in modern systems.
The TPM anchors the TCB in the Core Root of Trust for Measurement (CRTM), which serves as the foundation for trusted measurements. However, even with this approach, the TCB has become too large and too dynamic, leading to additional complications:
* The increasing number of components results in a growing list of expected measurements, making it difficult to maintain an accurate set of golden values.
* Variability between systems means that two computational nodes, even if perfectly fine, can produce very different measurements due to differences in their configurations, environments, or execution orders.

35
Q

Dynamic Root of Trust for Measurement

A

A Dynamic Root of Trust for Measurement (DRTM) was introduced to address limitations of relying solely on secure boot.

Instead of trusting the entire process starting from the BIOS, DRTM creates a new starting point for trusted measurements. Once Secure Boot completes, ensuring that the initial components cannot be modified, the system performs a reset and begins measuring components from that point. The assumption is that the first stage is secure, and now measurements start from a new baseline.

Since TPM version 1.2, a set of dynamic PCRs (PCRs 17 to 23) was added to support this process. These dynamic PCRs differ from the standard PCRs (0 to 15) in several ways:
* Dynamic PCRs (17-23) are initialized to -1 at boot, whereas other PCRs are initialized to 0. This distinction highlights their different purpose.
* Dynamic PCRs (17-23) can be reset to 0 by the operating system, unlike standard PCRs, which can only be reset by a hardware reset.
* PCR 17, the first dynamic PCR, is special because its value can only be set using a specific hardware instruction:
– SKINIT for AMD systems with virtualization.
– SENTER for Intel systems using Intel TXT (Trusted Execution Technology).

These hardware instructions have the same effect regardless of the architecture. They disable Direct Memory Access (DMA), interrupts, and debugging modes, ensuring that the program executing the instruction is the only active process on the system. This isolation guarantees that no other process can access memory, interrupt execution, or trace/debug the operation.
The program executing these instructions measures the secure loader block and transfers control to it. This creates a new Dynamic Root of Trust for Measurement (DRTM).

The DRTM ensures that the secure loader block and the memory region it operates on are measured and verified before execution. After stopping all processing on the platform via the special processor commands (SKINIT, SENTER), the DRTM hashes the content of the specified memory regions and stores the measurements in the dynamic PCRs. The secure loader block then continues the process, transferring control to a specific memory location.

This process is referred to as a late launch, as it occurs after the normal system launch.

The late launch addresses challenges such as:
* Firmware Updates: When the firmware is updated, PCR values change, causing issues with sealing operations. Sealing requires the exact system state to unseal data, meaning any firmware update would prevent access to sealed data, even if the update is legitimate.
* Sealing After DRTM: By sealing data against the operational state established after the DRTM, rather than the initial firmware measurements, this issue is mitigated. Sealing is tied to a dynamic and operational state, ensuring compatibility after updates.

36
Q

Hypervisor TEE

A

Originally, the Dynamic Root of Trust for Measurement (DRTM) was intended to allow the loading of hypervisors. A hypervisor is the software layer responsible for creating and managing virtual machines (VMs).

Once the hypervisor is loaded, it isolates and manages the virtual machines. The TPM attests to the integrity of the hypervisor, as it is the first software layer loaded.
TPM sealed storage is designed to release its protected memory only to a properly loaded and unaltered hypervisor. This ensures that:
* Data is sealed against the hypervisor, meaning it is accessible only if the correct hypervisor is executed without modifications.
* Protected memory, often needed for specific virtual machines, is released to the hypervisor when these conditions are met.
* When saving the state of a virtual machine, this state is sealed against the TPM to ensure its integrity and confidentiality.

This approach makes DRTM particularly useful for cloud computing, which heavily relies on virtualization technologies.

However, there are significant challenges with this model. Hypervisors such as Xen and VMware ESX consist of a massive amount of code, often comparable to a full Linux distribution. Xen, for instance, includes a complete copy of Linux, while VMware ESX has a similarly large codebase. This is problematic because the Trusted Computing Base (TCB) is intended to be as small as possible to reduce the risk of vulnerabilities.

37
Q

RA in virtualized environments

A

Addressing the challenges of remote attestation and measurements in virtualized environments introduces new complexities. While a hardware root of trust is fundamental for ensuring security, virtualization creates complications.

In full virtualization, such as virtual machines (VMs), the root of trust is often implemented as a **Virtual TPM (vTPM)**, which is only a software emulation of a TPM and lacks the hardware protection of a **Physical TPM (pTPM)**.

This distinction is critical:
* A vTPM, being software-based, is susceptible to attacks.
* A single pTPM can typically support only one machine, making it impractical for environments with multiple virtual machines.

To bridge this gap, a strong link between the vTPM and the pTPM is required. This concept is referred to as deep attestation, which ensures that:
* The vTPM is validated by the pTPM, creating a trusted connection to real hardware.
* Sealed objects can be stored in the pTPM to protect the vTPM.

Additionally, major cloud providers, such as Amazon, Google, and Microsoft, often rely solely on vTPMs and claim equivalence to pTPMs. However, this equivalence does not provide the same level of security, as vTPMs lack hardware protection.

Light Virtualization as an Alternative
An alternative to full virtualization is the use of containers, such as those provided by Docker. Containers operate differently:
* The host operating system (OS) runs directly on the physical hardware and interacts with the pTPM.
* Containers do not require a separate virtualization layer. Instead, they rely on the host OS to manage resources and operations.
* Separation between containers is achieved through namespaces, such as:
– Storage namespaces for isolating storage.
– Networking namespaces for isolating network operations.
* Despite appearing as virtual resources, all operations in containers are mapped to the host OS, which interacts directly with the pTPM.

This approach offers significant advantages for attestation:
* The host OS, regardless of the number of containers, is the single entity interacting with the pTPM.
* This simplifies attestation, as the operations are not affected by a separate virtualization layer.

38
Q

RA for OCI containers

A

This approach is generic and not tied to a specific containerization technology.

The key point is that containers are executed on a host in contact with the physical hardware.

This solution is transparent to the container runtime and containerized workloads, enabling the remote attestation of two essential components:
* The Host: The host must always be attested first. If the base layer is compromised, the security of everything running on it is invalidated.
* Containers: Once the host is verified, any or all containers can be attested as needed.

The remote attestation of host + containers is based on hardware RoT. The architecture consists of the
following key elements:
* A hardware root of trust.
* An attestation agent.
* A virtualization layer for container-based workloads.

Within the virtualization layer, every operation is labeled to specify its execution environment.
For example:
* If an operation is executed on the host, it is labeled as such.
* If an operation is executed inside a container, it is labeled with the specific container’s identifier (e.g., container 1, container 2, container 3).

This labeling is possible because all operations, even those appearing to be executed in containers, are ultimately handled by the host operating system.

The verification process involves several steps:
1. The verifier sends an attestation request containing a nonce.
2. The attestation response includes the measurement list, the nonce, and the PCR values, all signed by the physical TPM.
3. The verifier performs the following checks sequentially:
(a) Verify TPM authenticity: Check the digital signature and certificate to ensure it matches the expected node.
(b) Validate TPM quote: Ensure the signature corresponds to the values inside the response.
(c) Check trusted boot PCRs: Confirm that the PCRs related to the trusted boot have the correct values.
(d) Validate the IMA measurement list: Compare all values in the measurement list against golden values.
(e) Verify the host OS: Ensure the measurements belonging to the host operating system are correct.
(f) Verify the containers: Check the measurements labeled with each container's identifier.
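The sequential checks above can be sketched as a small verifier. This is a minimal simulation, not real TPM code: the quote "signature" is modeled as an HMAC over a shared key so the flow runs standalone, and all names, labels, and data formats are illustrative assumptions.

```python
import hashlib
import hmac

# Simulated signing key standing in for the TPM's attestation key.
TPM_KEY = b"per-TPM signing secret"
EXPECTED_PCRS = {0: "boot-ok", 1: "grub-ok"}       # trusted-boot golden PCRs
GOLDEN = {                                          # IMA accept list,
    "host:/sbin/init": "aa11",                      # entries labeled by
    "cnt1:/usr/bin/app": "bb22",                    # execution environment
}

def quote(pcrs, ml, nonce):
    """TPM side (simulated): 'sign' PCRs + measurement list + nonce."""
    blob = repr((sorted(pcrs.items()), ml, nonce)).encode()
    return hmac.new(TPM_KEY, blob, hashlib.sha256).digest()

def verify(pcrs, ml, nonce, sig):
    # (a)+(b): TPM authenticity and quote validity (simulated signature)
    if not hmac.compare_digest(sig, quote(pcrs, ml, nonce)):
        return "invalid quote"
    # (c): trusted-boot PCR values
    if pcrs != EXPECTED_PCRS:
        return "untrusted boot"
    # (d)-(f): IMA list vs golden values, host checked via its label,
    # containers via theirs
    for label_path, digest in ml:
        if GOLDEN.get(label_path) != digest:
            scope = "host" if label_path.startswith("host:") else "container"
            return f"untrusted {scope} entry: {label_path}"
    return "trusted"

nonce = b"verifier-nonce"
pcrs = {0: "boot-ok", 1: "grub-ok"}
ml = [("host:/sbin/init", "aa11"), ("cnt1:/usr/bin/app", "bb22")]
print(verify(pcrs, ml, nonce, quote(pcrs, ml, nonce)))  # trusted
```

Note how the host is implicitly checked before any container: a bad boot PCR or a bad host-labeled measurement fails the whole verification, matching the rule that the host must always be attested first.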

39
Q

Credential chain of trust

A

Each TPM contains a pre-generated key called the Endorsement Key (EK). This key is created by the TPM vendor and accompanied by a certificate signed by the vendor, verifying that the TPM is genuine and was manufactured by a trusted source.
The EK and its certificate serve as the root of trust, proving the authenticity of the hardware.

When a TPM is incorporated into a device, such as a computer or server, the OEM (Original Equipment Manufacturer) generates an Initial Device Identifier (IDevID). This new key is based on the Endorsement Key (EK) and the endorsement certificate provided by the TPM vendor.
The IDevID is signed by the OEM and provides a new identity for the device. It proves that the device
itself (not just the TPM) was created and configured by a trusted manufacturer.

When the device is deployed into a specific environment (e.g., a corporate infrastructure), a Local Device Identifier (LDevID) is created. The LDevID is based on the IDevID and the OEM certificate and is signed by the customer (e.g., the IT operator or system owner).
* Provides a unique identifier for the device within the network infrastructure.

The LDevID ensures privacy and prevents tracking, as there is no way to trace:
* The LDevID back to the IDevID.
* The IDevID back to the Endorsement Key (EK).
This separation preserves privacy while maintaining a secure chain of trust.

This is the chain of trust: the first certificate attests that the TPM is genuine and was really created by that manufacturer; the second certificate attests that the device itself is genuine; and the third certificate states that the device is a registered component of a certain infrastructure. To avoid being traced, the LDevID is used in all network operations, and there is no way to go from the LDevID to the IDevID, or from the IDevID to the EK.
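The three-link chain can be sketched conceptually. This is purely illustrative: keys and certificates are simulated with digests, whereas real DevIDs (IEEE 802.1AR) use asymmetric key pairs and X.509 certificates; all key names below are invented.

```python
import hashlib

def cert(issuer_key, subject_pubkey):
    """Simulated certificate: a digest binding a subject key to an issuer."""
    return hashlib.sha256(f"{issuer_key}|{subject_pubkey}".encode()).hexdigest()[:16]

ek = "EK-public-part"                        # pre-generated by the TPM vendor
ek_cert = cert("tpm-vendor-key", ek)         # "this is a genuine TPM"

idevid = "IDevID-public-part"                # new key created by the OEM
idevid_cert = cert("oem-key", idevid)        # "this is a genuine device"

ldevid = "LDevID-public-part"                # new key created at deployment
ldevid_cert = cert("customer-key", ldevid)   # "registered in this infrastructure"

# Three distinct identities: the LDevID certificate names neither the
# IDevID nor the EK, so peers seeing only LDevID operations on the
# network cannot trace them back to the TPM.
print(len({ek_cert, idevid_cert, ldevid_cert}))  # 3
```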

39
Q

TPM-2.0 Make/Activate Credentials

A

The process involves interaction between the TPM, the host platform, and a Certificate Authority (CA):
1. The TPM’s host platform instructs the TPM to create a key.
2. The TPM generates the key and provides the public part of the requested key to the host platform.
3. The host platform sends the following to the Certificate Authority (CA): the newly created public key, the TPM Endorsement Credential, which proves the authenticity of the TPM.
4. The CA verifies the endorsement credential and creates a certificate for the new key. The certificate is encrypted using the TPM’s Endorsement Key (EK). The encryption serves as a mechanism to verify that the TPM indeed possesses the EK: only a TPM holding the EK can decrypt the certificate.
5. The encrypted certificate is sent back to the TPM, which checks if it can decrypt and certify the new key using its EK. If successful, this proves the TPM’s ownership of the Endorsement Key.
6. Finally, the TPM returns the verified certificate to the TPM’s host platform. This step constitutes Proof of Possession (POP), as the certificate is provided to an entity that can demonstrate ownership of the corresponding key.
In this procedure, there is no signature performed during the certificate request phase. This differs from typical processes like PKCS#10. Here, the operation with the private key occurs during the distribution of the certificate.
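The core idea of the flow above can be sketched as follows. This is a simplification: the EK is modeled as a shared secret and "encryption" as XOR with a hash-derived keystream, whereas TPM2_MakeCredential/TPM2_ActivateCredential use the EK's asymmetric key; all names are illustrative.

```python
import hashlib

def keystream(secret, n):
    """Derive an n-byte keystream from a secret (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

EK = b"endorsement-key-secret"

# CA side: wrap the new key's certificate so only the EK holder can read it.
cert = b"certificate-for-new-attestation-key"
wrapped = xor(cert, keystream(EK, len(cert)))

# TPM side: a TPM holding the EK unwraps the certificate. Succeeding here
# is the proof of possession; no signature was needed in the request phase.
recovered = xor(wrapped, keystream(EK, len(wrapped)))
print(recovered == cert)  # True
```

This mirrors why the scheme differs from PKCS#10: the private-key operation happens at certificate *distribution* time (unwrapping), not at request time (signing).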

40
Q

TPM basic authorization mechanism

A

When using the TPM, authorization mechanisms are required to execute commands securely.
* Direct Password-Based Authorization: The simplest form of authorization involves providing a password when issuing a command to the TPM. However, this method presents a significant risk if an attacker is able to sniff commands, for example, on the bus of a personal computer.
* Password-Based HMAC Authorization: A more secure alternative is to use HMAC authentication for commands and responses. HMAC-based authorization ensures that:
– Even if the internal bus is compromised, no useful information can be extracted by sniffing commands.
– Nonces are used to protect against replay attacks:
∗ A caller_nonce is included in the request.
∗ A TPM_nonce is returned in the response.
This approach significantly improves security, especially when the platform’s internals are at risk of compromise.
While these basic mechanisms address authentication, additional authorization policies can be configured to enforce specific conditions before an object can be used, since the TPM knows a lot about the platform's state. These advanced features include:
* PCR-Based Authorization: even if the password is known or a valid HMAC command is received, the operation will not be performed unless the PCRs have the specified values.
* Time-Limited Authorization: Object usage can be restricted to a specific timeframe, e.g., valid for only two days.
* Multi-Entity Authorization: For highly secure operations, authorization may require approval from multiple entities. Examples include:
– Requiring two individuals to input their passwords.
– Using a command that requires a double HMAC, where the HMAC is computed using two separate passwords.
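The HMAC-based scheme can be sketched as follows. It is a simplification of real TPM 2.0 authorization sessions (the message layout and key derivation are reduced to their essentials), but it shows why bus sniffing yields nothing useful: the password never travels, and the MAC is bound to fresh nonces.

```python
import hashlib
import hmac
import os

auth_value = b"object-password"   # shared secret known to caller and TPM

def command_hmac(command, caller_nonce, tpm_nonce):
    """MAC over the command bound to both nonces (simplified layout)."""
    msg = command + caller_nonce + tpm_nonce
    return hmac.new(auth_value, msg, hashlib.sha256).digest()

command = b"TPM2_Unseal handle=0x81000001"
caller_nonce = os.urandom(16)     # fresh, chosen by the caller
tpm_nonce = os.urandom(16)        # returned by the TPM in the prior response

mac = command_hmac(command, caller_nonce, tpm_nonce)

# The TPM recomputes the HMAC from its stored auth value and its nonce:
print(hmac.compare_digest(mac, command_hmac(command, caller_nonce, tpm_nonce)))

# Replaying the captured MAC fails once the TPM issues a new nonce:
new_tpm_nonce = os.urandom(16)
print(hmac.compare_digest(mac, command_hmac(command, caller_nonce, new_tpm_nonce)))
```

The first check succeeds (authentic command), the second fails (replay), which is exactly the property the nonces provide.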

41
Q

Audit and forensic analysis

A

Attestation is increasingly recognized as a vital mechanism for analyzing and validating the behavior of modern systems. This need is particularly pronounced as computational intelligence shifts away from centralized infrastructures to edge nodes and distributed networks. In this evolving landscape, it becomes imperative to ensure that systems function as intended and that any anomalies can be thoroughly investigated.
In today’s interconnected systems, behaviors of nodes, such as IoT devices or Electronic Control Units (ECUs) in autonomous vehicles, can no longer be assumed to be reliable. The complexity of software, combined with potential vulnerabilities, makes it critical to validate both the state and actions of these nodes. As more intelligence and computational tasks move to edge environments, the challenges of ensuring correct behavior become even more significant.
When something goes wrong, the TPM is also valuable for audit and forensic analysis, helping to understand what occurred. For instance, if an autonomous vehicle crashes, it is vital to determine the system state at the time of the incident. Was the software functioning as intended, or was the system compromised by malware? Attestation can provide the evidence needed.

42
Q

SHIELD project - trust monitor

A

To illustrate how attestation can be applied in practical scenarios, we refer to a European project, known as SHIELD.

This project focused on security as a service (SecaaS), which leverages Network Function Virtualization (NFV) and Software-Defined Networking (SDN) to create and manage specific virtualized security functions, such as firewalls, intrusion detection systems, and VPN channels.

The project’s architecture consisted of the following key components:
* Software-Defined Network (SDN)/NFV Infrastructure: This was the target platform for deploying the virtualized security functions.
* Orchestrator: The vNSF orchestrator was responsible for deciding which security functions must be placed and where within the infrastructure.
* vNSF Store: A repository of virtualized network security functions (vNSFs), which were software objects to be downloaded and deployed as needed.
* Security Dashboard: Provided an interface for monitoring and managing the security state of the infrastructure, which could include intrusion detection systems or network monitoring tools.
* Trust Monitor: This component used remote attestation to verify the integrity of the entire infrastructure. Since the infrastructure was software-based, the trust monitor ensured that nodes and their virtual functions were not compromised.
* Analysis and Remediation Module: Reacted to any detected misconfigurations or integrity violations.

The trust monitor was the main component for attestation, receiving inputs from two main sources:
1. The SDN/NFV Infrastructure: This provided the attestation results or measurements. Periodically, the trust monitor queried each node in the infrastructure, asking for its current measurements to validate its state and the state of its deployed virtual functions.
2. The vNSF Store: This repository supplied the golden measurements. These values served as the benchmark for verifying the integrity of the infrastructure.

The trust monitor compared the measurements from the infrastructure with the golden measurements from the vNSF store. If a mismatch was detected, indicating a possible integrity violation, the trust monitor triggered an alarm. This alarm was sent to the Analysis and Remediation Module, which analyzed the issue, identified the root cause, and initiated corrective actions to restore the system’s integrity.
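The trust monitor's core comparison can be sketched as follows. The data shapes and component names are illustrative assumptions, not SHIELD's actual interfaces.

```python
# Golden measurements supplied by the vNSF store (illustrative digests).
golden = {
    "firewall-vnsf": "d41d8cd9",
    "ids-vnsf": "9e107d9d",
    "host-os": "77aabbcc",
}

def check_node(node_id, reported):
    """Compare a node's reported measurements against the golden values;
    return one alarm per unknown or mismatching component."""
    alarms = []
    for component, digest in reported.items():
        if golden.get(component) != digest:
            alarms.append(f"ALARM node={node_id} component={component}")
    return alarms

ok = {"firewall-vnsf": "d41d8cd9", "host-os": "77aabbcc"}
bad = {"ids-vnsf": "TAMPERED"}

print(check_node("node-1", ok))    # [] -> node is trusted
print(check_node("node-2", bad))   # one alarm, forwarded to remediation
```

In SHIELD the non-empty result would be forwarded to the Analysis and Remediation Module rather than merely printed.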

43
Q

SHIELD - Golden value creation

A

One important aspect (in SHIELD but also in general) is golden value creation. The trust monitor initially contacts the vNSF store, which contains all the virtual network security functions, requests each function's manifest (the list of the elements inside that virtual function), and then extracts the measurements from the manifest.
If a measurement corresponds to a component already present in the database, it is skipped, as it has already been verified as authorized. However, if the measurement is new and not yet encountered in other vNSFs, the database is updated to include it. This database is often referred to as a whitelist or accept list, representing the set of components authorized to run on the system. In this way, the trust monitor dynamically maintains a record of trusted components, ensuring that only validated elements are executed within the infrastructure.
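The accept-list construction can be sketched as follows; the manifest format is an illustrative assumption.

```python
def update_accept_list(accept_list, manifests):
    """Walk each manifest; skip measurements already present (already
    authorized), add new ones. Returns how many entries were added."""
    added = 0
    for manifest in manifests:
        for path, digest in manifest["measurements"]:
            if path not in accept_list:
                accept_list[path] = digest
                added += 1
    return added

store = [
    {"name": "firewall-vnsf",
     "measurements": [("/bin/fw", "aa"), ("/lib/ssl", "bb")]},
    {"name": "ids-vnsf",
     "measurements": [("/bin/ids", "cc"), ("/lib/ssl", "bb")]},
]
accept_list = {}
print(update_accept_list(accept_list, store))  # 3: /lib/ssl counted once
print(sorted(accept_list))
```

Components shared across vNSFs (here the hypothetical `/lib/ssl`) enter the whitelist only once, which is exactly the skip-if-known behavior described above.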

44
Q

SHIELD - Initial deployment of a security function

A

When a security function is deployed, the orchestrator initiates the process by requesting the attestation result for the host, referred to as the middle box. Before deploying a virtual network security function (vNSF), the trust monitor verifies the integrity of the host by checking its current state. This includes validating the operating system, container runtime, and other essential components to ensure that the box is in a secure state.
Upon receiving this request, the trust monitor performs a remote attestation by sending a nonce to the designated host. The host then provides a TPM quote along with the event log as proof of its current state. If the attestation proof matches the expected values, the trust monitor sends an attestation success response to the orchestrator, allowing the inclusion of the middle box in the network.
If the attestation fails, the trust monitor reports the failure, isolates the compromised node, and raises an alarm for the security manager to investigate the issue. The compromised node is excluded from further orchestration processes to ensure the integrity of the infrastructure.

45
Q

SHIELD - Periodic attestation of security functions

A

Periodic attestation is performed automatically by the trust monitor at intervals determined by the manager, based on the expected speed of an attack. The periodicity can range from 10 or 30 seconds up to one minute, with typical intervals on the order of minutes rather than hours.
During each cycle, the trust monitor requests the network state from the vNSF orchestrator. The network state provides information on the nodes currently involved in executing the security functions. This approach avoids the need to attest the entire network and focuses on the specific nodes relevant to the protection being provided. This information is retrieved from the orchestrator.
For each middle box in the subnetwork, the trust monitor requests a remote attestation proof from the deployed security function. Each middle box responds with its TPM quote and event log, which are then verified against expected values.
If an attestation fails, the security dashboard issues a notification, highlighting the issue within the network. In such cases, decisions regarding the compromised node can include terminating, excluding, or reconfiguring it. Although artificial intelligence could be employed to automate these decisions, the preferred approach involves maintaining a human in the loop for final decision-making. The system generates an alarm, allowing the operator to decide the most appropriate course of action.
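The periodic cycle can be sketched as a simple loop. All callables are illustrative stubs standing in for the orchestrator query and the per-node attestation exchange; note that failures produce notifications for the operator rather than automatic remediation, keeping a human in the loop.

```python
import time

def get_network_state():
    """Stub for the orchestrator query: nodes currently running vNSFs."""
    return ["mbox-1", "mbox-2"]

def attest(node):
    """Stub for the quote/event-log exchange; pretend mbox-2 fails."""
    return node != "mbox-2"

def run_cycles(cycles, interval_s=0.0):
    """Run the periodic attestation loop; return dashboard notifications."""
    notifications = []
    for _ in range(cycles):
        # Attest only the nodes relevant to the protection being provided,
        # not the whole network.
        for node in get_network_state():
            if not attest(node):
                # Alarm only: the operator decides whether to terminate,
                # exclude, or reconfigure the node.
                notifications.append(f"dashboard: attestation failed on {node}")
        time.sleep(interval_s)  # 10 s to minutes in practice
    return notifications

print(run_cycles(2))
```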

46
Q

Keylime

48
Q

Veraison

50
Q

Amazon Web Services (AWS)

51
Q

Azure Confidential Containers