TEE Flashcards
(52 cards)
Trusted vs Trustworthy
The term trustworthy refers to something that will not compromise security, offering guarantees of safe operation.
In contrast, trusted describes someone or something that you rely upon to not compromise your security, even though no absolute guarantees exist.
In essence, trusted is about how you use something, while trustworthy is about whether it is safe to use. If something is trustworthy, it can inherently be trusted. If something is trusted, however, it is not necessarily trustworthy; trust simply reflects a choice to rely on it.
In scientific discourse or formal writing, the distinction is critical.
Trustworthiness requires formal verification or proof that an entity will not cause harm. This rigorous definition is what differentiates it from trust, which is a subjective choice based on context.
TEE (generic)
A trusted execution environment represents a domain within a system that is deliberately chosen to be trusted.
This environment is relied upon to execute sensitive tasks, often referred to as trusted applications (TAs). A trusted application is one that requires execution within a protected and secure environment, safeguarding it from potential interference or compromise. Ideally, the TEE is trustworthy too.
The trusted execution environment is so named because it is chosen as the domain to execute sensitive tasks. While it would be desirable for the TEE to also be trustworthy, achieving such a status requires extensive validation and formal guarantees, which are not always available in practice.
TEE and REE
The Trusted Execution Environment (TEE) operates on a hardware platform and is designed to coexist with another environment called the Rich Execution Environment (REE).
These two environments serve distinct purposes within the same system, with the REE offering flexibility and the TEE focusing on high security.
In the Rich Execution Environment (REE), users can run a rich operating system like Android on smartphones or Linux and Windows on conventional computers. This environment is characterized by minimal restrictions, enabling the execution of a wide range of applications without significant limitations. It is designed for general-purpose usage, providing the flexibility required for normal operations.
In contrast, the Trusted Execution Environment (TEE) is tailored for executing sensitive tasks that demand a higher level of security. Unlike the REE, the TEE relies on hardware-based trust anchors, which include:
* Hardware keys, for secure cryptographic operations.
* Secure storage, for protecting sensitive data.
* Trusted peripherals (Trusted User Interface, TUI), for secure input and output; for instance, the TUI ensures that only the internet banking app has access to the user interface while all other apps are disabled.
* Secure elements: dedicated hardware resources designed to enforce security policies.
A TEE requires specific hardware platforms that meet defined security requirements (e.g., TR0 specifications) to be implemented. Therefore, TEEs cannot be implemented on just any CPU.
Within the TEE, a core framework handles execution and provides a secure internal API. This API is accessible only to trusted applications (TAs).
The TEE also includes a communication agent that interacts with a client API in the REE. This interface facilitates communication between the two environments, but it introduces potential security risks: since these APIs connect the untrusted world to the inside of the trusted boundary, access control on them is crucial to limit vulnerabilities.
TEE wrt Confidential Computing, RoT and TCB
The Trusted Execution Environment (TEE) has an important role in the confidential computing field, which aims to address a critical gap in data protection: the protection of data in use (when the data resides in RAM and is actively accessed by the CPU).
Confidential computing tries to ensure that data in use is accessible only to trusted applications, without being read, processed, or written by any unauthorized entity.
To implement such protection, the architecture relies on a Root of Trust (RoT). A Root of Trust is not a single component but rather an abstract concept referring to an element whose behavior is implicitly trusted. If the Root of Trust is compromised or behaves incorrectly, the entire system’s security can be undermined because the misbehaviour cannot be detected at runtime. For this reason, it is essential for the Root of Trust to be both trusted and ideally trustworthy.
The Root of Trust is a fundamental part of the Trusted Computing Base (TCB), which should be both trusted and trustworthy.
The TCB encompasses all hardware, firmware, and, potentially, software components that are critical to a system’s security. Any vulnerability within the TCB can compromise the entire system, and this misbehaviour cannot be detected at runtime, making its minimization a priority: the smaller and simpler the TCB, the lower the risk of vulnerabilities.
The minimization of the TCB is critical because every component within it represents a potential vulnerability.
Therefore:
* Hardware: Forms the core of the TCB, as it is inherently less complex than software and offers a smaller attack surface
* Firmware: Provides the essential layer for accessing hardware and must be included, though its simplicity can help maintain security
* Software: Typically more complex, it is ideally excluded from the TCB unless absolutely necessary, as its inclusion significantly increases the attack surface
TEE security principles
To design a Trusted Execution Environment (TEE) effectively, certain security principles must be followed:
- Integration with Secure Boot Chain
The TEE should be an integral part of the device’s secure boot chain, which is based on a Root of Trust (RoT). During every boot process, the system must verify the code integrity. If any modification or tampering is detected, the system will refuse to boot.
- Hardware-Based Isolation
The separation between the Rich Execution Environment (REE) and the TEE should be implemented at the hardware level, permitting the isolated execution of sensitive code. This is crucial because hardware-based isolation minimizes reliance on software, which is more prone to vulnerabilities. By enforcing this separation in hardware, an attacker would need to breach the hardware itself, a much more challenging task compared to exploiting software vulnerabilities.
- Execution Isolation for Trusted Applications
TEEs often execute multiple trusted applications (TAs). While some TEEs allow all trusted applications to run in the same environment, it is preferable to have separate containers in which each trusted application is executed alone. This isolation ensures that the compromise of one trusted application does not impact others. Though not compulsory, this is a desirable feature in TEE design and is supported by some implementations.
- Secure Data Storage
Trusted applications may require secure and permanent data storage. To achieve this, data must be protected such that:
* No other trusted application or any REE application can access the data.
* Data is tied to the hardware, using a hardware-bound key accessible only by the TEE OS.
* Unauthorized access or modification is prevented.
* Migration of data to another device, such as through swapping an SD card, is impossible (as the (hardware)-key is bound to the specific hardware component).
This ensures that sensitive information remains inaccessible outside the TEE and is usable only within its original device.
- Trusted Path and Secure Access to Peripherals
Secure access to peripherals such as fingerprint sensors, displays, touchpads, and keyboards is essential. This trusted path can be hardware-isolated and controlled solely by the TEE during specific actions. Applications in the REE, including those compromised by malware, should have no visibility or access to these peripherals.
For example, during a secure transaction involving fingerprint authentication or input through a touchpad, the TEE should have exclusive control of the peripherals, ensuring that malware or unauthorized applications in the REE cannot intercept or interfere with the operation.
- Protection Against Malware
One of the key advantages of a TEE is its resilience to malware infections in the REE. Any malware affecting the rich environment is confined to that domain and cannot influence or possess visibility of the data or operations within the TEE. When a trusted application is activated, it operates independently and securely, maintaining the integrity of sensitive processes.
Intel IPT
The Intel Identity Protection Technology (Intel IPT) is an example of a TEE.
IPT uses two separate CPUs:
1. The primary CPU, which executes user programs and applications.
2. A secondary, specialized CPU called the Management Engine (ME), which runs Java applets in isolation from the primary CPU.
The Management Engine is mostly used to execute management tasks, even when the device is powered off. This is enabled through a feature called wake-on-LAN.
In the context of Intel IPT, the Management Engine serves as a physically separated trusted execution environment. It is capable of running Java applets independently from the main CPU. These applets are bound to the physical hardware of the Management Engine and can perform various secure operations.
Some key examples include:
* Cryptographic Key Management: The Management Engine can generate and store cryptographic keys in a protected memory space that is inaccessible to any user-level programs or the primary CPU. This functionality is integrated with interfaces such as the Windows Cryptography API, allowing Windows applications to request key generation or access securely stored keys.
* One-Time Password (OTP) Generation: Intel IPT has been used in products like Vasco MyDigipass, which relies on the Management Engine to store secrets and generate OTPs securely. These OTPs are critical for secure authentication processes.
* Secure PIN Entry: By leveraging the Management Engine’s ability to control video output and peripherals (the chipset also manages video), Intel IPT ensures that PIN entry is isolated from the primary CPU. This prevents unauthorized programs from intercepting sensitive information, such as the entered PIN.
The architectural separation provided by the Management Engine makes Intel IPT an example of a physically isolated Trusted Environment.
Unlike many TEE implementations that share resources with the Rich Execution Environment (REE), Intel IPT achieves separation at the hardware level, with two independent CPUs executing tasks in parallel. This design inherently limits the attack surface, as the Management Engine operates outside the control of the primary system.
ARM TrustZone
Another widely recognized Trusted Execution Environment (TEE) is the ARM TrustZone, a feature available in some ARM CPUs, which enables a single CPU to operate in two separate modes: secure mode and normal mode.
The core technical innovation of the TrustZone lies in its extension of the normal CPU bus to a virtual 33-bit bus. This additional signal bit indicates whether the CPU is operating in secure mode or normal mode. This signal is not confined within the CPU but is exposed externally to facilitate the creation of secure peripherals and secure RAM. These external components can use the signal to enforce access control, ensuring that only processes in secure mode can interact with protected hardware resources.
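The gating role of the extra signal bit can be illustrated with a toy Python sketch. This is not real ARM hardware or any actual API; the class and the `ns_bit` parameter are purely illustrative models of a peripheral that refuses accesses coming from normal mode:

```python
# Toy model: a secure peripheral that checks the TrustZone NS signal on each
# access. ns_bit = 0 models secure mode, ns_bit = 1 models normal mode.
class SecurePeripheral:
    def __init__(self, name: str):
        self.name = name
        self.value = None

    def write(self, data: bytes, ns_bit: int):
        # A real peripheral would sample the NS bit on the bus; here it is an argument.
        if ns_bit == 1:
            raise PermissionError(f"{self.name}: access denied from normal world")
        self.value = data

fp_sensor = SecurePeripheral("fingerprint")
fp_sensor.write(b"template", ns_bit=0)    # secure-world access succeeds
try:
    fp_sensor.write(b"evil", ns_bit=1)    # normal-world access is rejected
except PermissionError as e:
    print(e)
```

The key point the sketch captures is that enforcement happens at the peripheral, outside the CPU, based on the exported mode signal.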
The ARM TrustZone is an open and documented system, facilitating widespread adoption and integration across various platforms.
Despite its advantages, the TrustZone has notable limitations, such as the presence of only a single secure enclave. All trusted applications operate within this single enclave, without hardware-based isolation separating them. The separation among trusted applications is not enforced by hardware but is instead managed by the software running within the TEE. As a result, this approach is inherently less secure compared to architectures that provide real hardware separation between different applications, making it more vulnerable to potential breaches within the enclave.
To address the limitations of the current architecture, ARM is actively developing a third mode of operation (TEE, REE and Attestation zone) within its platform. This new mode is designed to support specific features, such as attestation, which demand a more specialized and secure environment.
Trustonic
The ARM TrustZone architecture, while providing a hardware-based mechanism for creating a secure enclave, has limited utility by itself. This limitation comes from the fact that TrustZone allows for only a single secure enclave.
Gemalto developed the Trusted Foundation System, and G+D (Giesecke+Devrient) developed MobiCore. These are TEE operating systems designed to run within the TrustZone. Their primary innovation was to split the single secure enclave into multiple virtual enclaves, allowing several trusted applications to run in parallel, each isolated from the others. This was achieved through operating systems derived from smart card operating systems, adapted for use in the TEE. However, this separation is software-based, as the ARM hardware does not inherently support hardware-based isolation between trusted applications.
Later, Gemalto and G+D merged efforts, leading to the development of Trustonic, based on G+D’s MobiCore. The resulting operating system, named Kinibi, became a highly evolved and sophisticated TEE solution. For instance, Kinibi 500 introduced features such as 64-bit symmetrical multiprocessing, enabling its use even in high-performance embedded systems, not just low-capacity CPUs. However, deploying Kinibi requires paying licensing fees, as it is a proprietary solution developed by Trustonic.
In parallel, other companies developed competing solutions. For example, Samsung integrated a TEE solution called KNOX into its devices. KNOX provides similar functionality to Trustonic, with the added feature of secure boot.
Intel SGX
Another prominent example of a Trusted Execution Environment (TEE) is Intel SGX (Software Guard Extensions) that is tightly integrated with the CPU, making it a hardware-based solution for secure computing.
When purchasing an Intel CPU, users can check whether it supports SGX functionality.
Intel SGX modifies the standard memory management of the CPU. When an enclave is declared within the Intel environment, SGX ensures that the memory allocated to that enclave is fully isolated and protected from other code, and vice versa.
This means:
* No other process, not even those running with the highest level of privilege on the CPU, can access the memory area allocated to the enclave.
* Hardware-protected memory areas are created for each enclave, preventing access both from general-purpose processes and other enclaves.
* Enclaves are also restricted from accessing areas outside their own memory boundaries.
A critical feature of SGX is its use of measurement, which is fundamental to its security architecture. When an enclave is created, the system computes a hash of all relevant components, typically including the executable loaded into the enclave.
This hash serves as a verification metric, enabling the system to confirm that the enclave has not been tampered with and is running as intended.
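The idea of a measurement can be sketched in a few lines of Python. This mirrors the concept behind SGX's enclave measurement, not Intel's actual algorithm (the real measurement covers page layout and attributes, not just the raw executable bytes):

```python
# Simplified measurement: hash the code loaded into an enclave and compare
# it against an expected reference value computed from a known-good build.
import hashlib

def measure(enclave_code: bytes) -> str:
    return hashlib.sha256(enclave_code).hexdigest()

expected = measure(b"enclave executable v1")          # reference value

assert measure(b"enclave executable v1") == expected            # untampered
assert measure(b"enclave executable v1 PATCHED") != expected    # tampering detected
```

Any modification to the loaded code changes the hash, so the verifier can detect that the enclave is not the one it expects.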
However, Intel SGX has limitations. Its scope is restricted to the protection of the execution environment, specifically CPU operations and memory. It does not inherently provide trusted input/output channels. To achieve secure input and output, SGX can be combined with other Intel technologies, such as Intel IPT, which offers capabilities like trusted display and input isolation.
Intel SGX has undergone significant evolution over time. The original version, SGX1, was available on both low-end and high-end CPUs, while SGX2 is exclusively available on high-end server-oriented CPUs, such as Intel Xeon processors.
Using SGX requires additional steps and considerations:
* To create an enclave, developers must obtain special permissions and utilize a specific library provided by Intel.
* The code intended for the enclave must be signed by Intel (!). This requirement means developers must submit their executable to Intel to receive the necessary signature, a process that has raised concerns about privacy and control.
Keystone
Keystone is an open-source framework that allows users to customize and build their own TEEs based on their unique requirements.
Keystone’s design enables selective inclusion of TEE features, making it adaptable to different scenarios. For example, in embedded systems or IoT devices with limited memory and low computational power, certain TEE features might be excluded to optimize performance and resource usage.
This flexibility means unused components can be excluded, reducing the attack surface and minimizing the Trusted Computing Base (TCB). A smaller TCB inherently reduces the amount of code and hardware that must be trusted, improving overall security.
Keystone provides a basic architecture that includes:
* An untrusted environment (similar to a general purpose OS).
* Multiple trusted segregated enclaves.
Keystone works on top of the RISC-V open-source hardware platform. RISC-V offers the first open-source CPU architecture, where all designs are public, allowing users to build, customize, and test CPUs on platforms like FPGAs or SoCs.
For projects that need high performance, RISC-V can be integrated into System-on-Chip (SoC) designs, alongside other necessary components.
The core RISC-V architecture provides basic computational functionality. However, the open-source nature of RISC-V enables the addition of Intellectual Property (IP) modules that extend the CPU’s functionality, such as vector computation, artificial intelligence, or cryptographic extensions.
RISC-V incorporates a hardware-based Physical Memory Protection (PMP) system, which enforces access control on memory pages, ensuring that processes cannot access each other’s memory, either permanently or temporarily. Input/output devices, often memory-mapped, are likewise protected during the execution of trusted applications.
RISC-V provides three execution modes in decreasing order of privilege:
* Machine Mode: The most privileged mode, primarily used for low-level hardware control.
* Supervisor Mode: A less privileged mode, typically used for operating systems.
* User Mode: The least privileged mode, used for general application execution.
These modes, combined with PMP, allow fine-grained control over memory and device access, ensuring that only authorized processes can interact with sensitive resources.
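The interplay of PMP entries and privilege modes can be sketched as a toy access-control table. The addresses, region sizes, and mode names below are illustrative, not actual RISC-V PMP CSR encodings:

```python
# Toy PMP model: each entry maps an address range to the privilege modes
# allowed to access it. Anything not matched by an entry is denied.
pmp_entries = [
    {"base": 0x80000000, "size": 0x1000, "modes": {"machine"}},                # SM only
    {"base": 0x80001000, "size": 0x1000, "modes": {"machine", "supervisor"}},  # OS region
    {"base": 0x80002000, "size": 0x1000, "modes": {"machine", "supervisor", "user"}},  # enclave page
]

def check_access(addr: int, mode: str) -> bool:
    for e in pmp_entries:
        if e["base"] <= addr < e["base"] + e["size"]:
            return mode in e["modes"]
    return False  # no matching entry: deny by default

assert check_access(0x80002004, "user")        # enclave touches its own page
assert not check_access(0x80000004, "user")    # machine-only page is denied
```

In Keystone, the Security Monitor reprograms entries like these on enclave entry and exit, so each enclave only ever sees its own memory.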
Motivation for Keystone
The motivation behind the creation of Keystone lies in addressing the limitations of traditional Trusted Execution Environments (TEEs), which are often rigid and lack customization options.
When compared to other existing solutions, these limitations become evident:
* Intel SGX offers a hardware-based approach to creating enclaves, providing strong isolation for secure computations. However, it requires the involvement of a large software stack to build and manage enclaves. This includes numerous libraries and dependencies, which significantly increase the Trusted Computing Base (TCB). A larger TCB introduces a broader attack surface and makes the system more challenging to secure.
* AMD SEV (Secure Encrypted Virtualization) provides a hardware-based method to create secure enclaves, treating them as virtual machines with built-in encryption. While this approach differs from Intel SGX, it also suffers from the drawback of requiring extensive development and a large TCB. The reliance on a comprehensive software stack for managing these virtualized environments limits the system’s flexibility and security.
* ARM TrustZone implements a simpler model with only two domains: the untrusted domain (Rich Execution Environment) and the trusted domain (Trusted Execution Environment). However, its design lacks the capability to create additional domains or segregated secure enclaves. This rigidity restricts the level of customization and limits its utility for more complex use cases requiring finer-grained isolation.
Keystone architecture
At the core of Keystone’s architecture is trusted hardware, which includes essential components like the RISC-V cores for normal operations. These cores can be enhanced with optional hardware features such as cryptographic extensions or specialized instructions for artificial intelligence workloads, depending on the needs of the application.
Additionally, a Root of Trust (RoT) is included to ensure secure boot and establish the foundational trust required for the system.
One of Keystone’s key principles is minimizing the Trusted Computing Base (TCB). To achieve this, it avoids running a traditional operating system or hypervisor directly on the hardware. Instead, it employs a lightweight Security Monitor (SM) that operates in machine mode (M-mode), the highest privilege level in the RISC-V architecture.
The Security Monitor’s sole responsibility is access control; it mediates all requests from upper layers to the hardware, ensuring only authorized operations are permitted. By limiting its functionality to this single task, the Security Monitor remains simple and secure, avoiding unnecessary complexity.
Keystone’s system architecture is organized into two main domains. The first is the untrusted domain, which can run general-purpose operating systems like Linux or Android. This domain operates in supervisor mode and is used for non-sensitive tasks, offering flexibility to execute standard applications without security concerns.
The second and more critical domain consists of the trusted enclaves. These enclaves are designed to operate in user mode, the least privileged level, and are intended for running sensitive applications securely.
Each enclave is independent and runs a single application, avoiding the need for a general-purpose operating system.
Instead, Keystone provides a thin layer known as the Keystone Runtime, which acts as a lightweight operating system, offering only the minimal features necessary for the enclave’s specific application.
This approach reduces the complexity of the system and ensures that each enclave’s runtime is customized for its application’s needs.
At the top of each enclave is the trusted application, referred to as an Enclave Application (Eapp) in Keystone’s terminology. These applications are isolated from each other and the untrusted domain, ensuring robust security.
Keystone allows for the creation of multiple enclaves, each capable of running its own Eapp, each isolated from the other enclaves and the untrusted domain.
BIOS and UEFI
Attackers often target the lowest levels of a system, such as the boot process or firmware, to inject malware. This strategy provides two key advantages: it reduces the likelihood of detection and increases the scope of control over the system.
By compromising these low-level components, attackers can:
* modify the OS.
* try to boot an alternative OS.
* modify the boot sequence or the boot loader, enabling untrusted components to coexist with trusted ones undetected.
To counter these threats, both the boot process and the operating system must be protected.
Historically, systems relied on the BIOS (Basic Input/Output System) as the initial layer of firmware for starting the system. However, BIOS implementations were often vendor-specific and difficult to protect.
This limitation led to the development of the Unified Extensible Firmware Interface (UEFI), now standard on most modern devices.
UEFI provides a standardized firmware environment and incorporates native support for firmware signature and verification.
During the boot process, the UEFI firmware is signed by the platform manufacturer and verified by the hardware. If the verification process detects any tampering or modifications, the boot process is stopped, preventing the system from starting.
Once the UEFI firmware is verified, the bootloader becomes the next critical component in the trusted chain.
The verified bootloader is responsible for checking the integrity of the operating system before loading it into memory. This ensures that only authorized and unaltered operating systems are executed, maintaining the trust established during the earlier stages of the boot process.
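The stage-by-stage verification above can be sketched as follows. This is a simplified model: real secure boot verifies digital signatures with vendor public keys, whereas here a trusted manifest of SHA-256 digests stands in for that step, and the stage names are made up for illustration:

```python
# Simplified verified boot chain: each stage's image is checked against a
# trusted digest before control is handed over; any mismatch halts the boot.
import hashlib

manifest = {}  # stage name -> trusted SHA-256 digest (provisioned by the vendor)

def provision(stage: str, image: bytes):
    manifest[stage] = hashlib.sha256(image).hexdigest()

def boot_next(stage: str, image: bytes):
    if hashlib.sha256(image).hexdigest() != manifest[stage]:
        raise RuntimeError(f"{stage}: integrity check failed, halting boot")
    print(f"{stage}: verified, executing")

provision("bootloader", b"loader v1")
provision("kernel", b"kernel v1")

boot_next("bootloader", b"loader v1")
boot_next("kernel", b"kernel v1")            # chain completes
# boot_next("kernel", b"kernel v1 EVIL")     # would raise and stop the boot
```

The chain is only as strong as its first link: the code that checks the bootloader must itself be trusted, which is the role of the Root of Trust.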
Rootkits
Rootkits are malware designed to provide root access, the highest level of privilege on a machine.
The goal of rootkits is to operate undetected by compromising critical system components, often before the operating system fully loads.
- Firmware rootkit: targets the firmware of the BIOS/UEFI, or the firmware of other hardware components, so that the rootkit can start before the OS.
- Bootkit: replaces the operating system’s bootloader so that the machine loads the bootkit before the OS, allowing it to create an invisible layer that operates below the OS but above the hardware.
The OS performs as expected, but the bootkit runs concurrently in memory, granting attackers control without detection.
- Kernel rootkits: replace portions of the kernel, ensuring they load automatically when the operating system loads, in order to gain persistent kernel-level access.
- Driver rootkits: compromise the drivers that are loaded at boot. By masquerading as a trusted driver, these rootkits can intercept and alter communication between the operating system and specific hardware devices.
Boot types
When the BIOS has successfully started, the next step is to boot the operating system (OS).
There are different boot types, each offering different levels of security:
- Plain Boot: the default boot process with no security measures in place. It performs a normal boot without any verification of integrity or authenticity.
- Secure Boot: the firmware verifies the signature of the components it loads and halts the platform if the verification fails. Secure boot is mostly hardware-based (relying on a crypto chip or the contents of BIOS memory) and verifies up to the OS loader. Secure boot is a responsibility of the Hardware manufacturer (for chip verification). The OS loader, being part of the firmware, is checked, while the actual operating system resides on the disk. If the signature verification fails, the platform does not proceed to boot the OS.
- Trusted Boot: Trusted boot assumes that the initial part of the boot process, up to the firmware, was executed securely. It focuses on verifying the integrity of the OS components such as drivers and anti-malware software.
If the signature verification of these components fails, the operating system itself will not start.
Unlike secure boot, this process operates only at the software level and verifies the OS’s operational state. Trusted boot is a responsibility of the OS manufacturer (for OS verification).
- Measured Boot: this mechanism operates in parallel, introducing a detection-based approach, and does not stop the system. It measures all components executed from boot up to a defined level (e.g., the OS operational state) by calculating their hashes. These measurements do not halt operations; instead, they are securely reported to an external verifier.
The external verifier periodically queries the system, asking for its status and reviewing the reported measurements. If the reports indicate tampering or inconsistency, the verifier can classify the system as untrusted and take appropriate actions, such as isolating the node from the infrastructure.
Measured boot ensures that even if the platform is attacked, the integrity of the reported measurements cannot be faked or manipulated.
Trusted Computing and attestation process
To establish trust in a platform, an attestation process is performed; Trusted Computing defines schemes for establishing such trust.
Attestation provides evidence of the platform’s current state, which can be independently verified by an external party. By state, we refer to the software state, considering all running applications and configurations.
The foundation for such trust lies in the Root of Trust (RoT). The Root of Trust is a component or mechanism that must inherently be trusted. For instance, it could be established through Secure Boot, where the starting point is the microcontroller or the firmware itself performing self-verification. Trust is then built incrementally.
One key element in this process is the Trusted Platform Module (TPM).
The TPM provides methods for collecting and reporting the identities of the platform’s hardware and software components.
* The TPM does not stop the system from operating but acts as a Trusted Reporter. It offers undeniable evidence of the platform’s current state, which cannot be faked or tampered with.
* A TPM used in a computer system reports on the hardware and software state in a way that allows determination of expected behaviour and, from that expectation, establishment of trust.
TCB vs TPM
The Trusted Computing Base (TCB) is a collection of system resources, including both hardware and software, responsible for maintaining the security policy of the system.
A key attribute of the TCB is its ability to prevent itself from being compromised by any hardware or software that is not part of the TCB. The TCB is self-protecting, meaning it can resist modification or interference from other components.
For example, both self-verification mechanisms and external hardware Roots of Trust (RoT) are part of the TCB, as they cannot be altered by any hardware or software (excluding physical manipulation, which remains a feasible attack vector).
The Trusted Platform Module (TPM) is not part of the TCB. Instead, the TPM acts as a component that enables an independent entity (typically external to the system, known as an external verifier) to determine whether the TCB has been compromised.
In rare cases, the TPM can be configured to prevent the system from starting if the TCB cannot be correctly instantiated.
Unlike the secure boot process discussed earlier, a TPM allows the entire BIOS and boot sequence to be verified. The TPM can then provide a flag (good/bad) indicating the result of the verification. In some configurations, if the verification result is bad, physical intervention is required to halt the system.
RoT
A Root of Trust (RoT) is a component that must always behave in the expected manner. If it misbehaves, its misbehavior cannot be detected, making it a fundamental building block for establishing trust in a platform. Typically, the RoT includes a hardware component and may later consider certain software parts.
In a trusted computing environment, there are different types of Roots of Trust:
* Root of Trust for Measurement (RTM): Responsible for measuring and sending integrity measurements to the RTS. Typically, the CPU executes the Core Root of Trust for Measurement (CRTM), at boot as the first piece of BIOS/UEFI code, to start the chain of trust.
* Root of Trust for Storage (RTS): A secured/shielded portion of memory designed for storage. Shielding ensures that only the CRTM can modify the values stored within the RTS.
* Root of Trust for Reporting (RTR): An entity responsible for securely reporting the contents of the RTS to external verifiers.
The process works as follows:
1. The RTM computes the measurement.
2. The measurement is securely stored in the RTS.
3. When required, the RTR retrieves the measurement from the RTS and provides it to an external verifier.
The Trusted Platform Module (TPM) typically combines the functionalities of the RTS and RTR. It acts as a secure storage component and as a trusted entity for reporting. This means the TPM will only report what is stored within its secure storage.
Chain of trust
In general, the process of measurement and verification involves the interaction of multiple components:
* Component A measures the integrity of Component B and, once the measurements are completed, stores the results in the Root of Trust for Storage (RTS).
* Component B, in turn, performs similar tasks by measuring Component C, storing those integrity measurements in the RTS as well.
* And so on…
With these measurements stored, the Root of Trust for Reporting (RTR) can be queried to retrieve the measurements of Component B and Component C from the RTS. If Component A is trustworthy, the verifier can trust these measurements to determine the integrity of Component B and Component C; for this reason, Component A is typically the CRTM, which is part of the TCB. Keep in mind that B and C can only be trusted if A is trustworthy.
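The chain of trust described above can be sketched as follows. This is a toy illustration, not a real measured-boot implementation: the component names and contents are hypothetical, and SHA-256 is assumed as the measurement hash.

```python
import hashlib

# Toy sketch of a chain of trust: each component hashes ("measures") the
# next one before handing over control. A real CRTM would measure actual
# firmware images; here plain byte strings stand in for them.
def measure(component_bytes: bytes) -> str:
    return hashlib.sha256(component_bytes).hexdigest()

# A dictionary stands in for the TPM's shielded storage (RTS).
rts = {}

bootloader = b"bootloader image (component B)"
os_kernel = b"kernel image (component C)"

rts["B"] = measure(bootloader)   # A (the CRTM) measures B, stores result in the RTS
rts["C"] = measure(os_kernel)    # B measures C, stores result in the RTS

# A verifier holding known-good reference values can now check B and C,
# provided it trusts A (the CRTM, part of the TCB).
reference = {"B": measure(bootloader), "C": measure(os_kernel)}
assert rts == reference
```

Note that the verifier never re-measures B or C itself; it only compares reported values against references, which is why everything hinges on A being trustworthy.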
TPM features
- Inexpensive.
- Tamper-resistant (but not tamper-proof).
- A passive component that needs to be driven by the CPU.
- RTS (Root of Trust for Storage): this special memory supports a unique operation called extend
- RTR (Root of Trust for Reporting): the TPM contains a private-public key pair, and every time the content of the RTS is extracted, it is accompanied by a digital signature. This guarantees the integrity of the data, ensuring that data cannot be altered, fake data cannot be generated, and the reported values reflect the correct content of the RTS at that moment.
- Hardware Random Number Generator: The TPM includes a true hardware-based random number generator, not a pseudo-RNG.
- It performs cryptographic algorithms, but it is not a crypto accelerator; it is slow.
- Secure Key Generation: The TPM securely generates cryptographic keys for specific, limited uses.
- Remote Attestation: The TPM can be used for remote attestation, storing a hash summary of the hardware and software configuration. This allows a third party to verify that the software has not been altered.
- Binding: data encrypted using the TPM’s bind key (a specific, unique RSA key derived from a storage key) cannot be decrypted outside that particular TPM. This ensures that even if data is stolen, it cannot be decrypted without that specific TPM, creating a physical dependence.
- Sealing: This provides an additional layer of security. In sealing, data is encrypted using an internal TPM key, and decryption additionally requires the TPM’s state to match the state at the time of encryption (so the same software state and configuration must be running). State refers to the collection of all running applications and configuration files.
- Authentication of Hardware Devices: Each TPM chip has a unique and **secret Endorsement Key** (EK), burned into the chip during production. This allows the clear identification of a specific machine.
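The `extend` operation mentioned for the RTS above can be sketched in a few lines. This is a minimal model, assuming a SHA-256 PCR bank; real TPMs support multiple banks with different hash algorithms.

```python
import hashlib

# Minimal sketch of the TPM "extend" operation on a PCR. A PCR can never
# be written directly: the only allowed update is
#     PCR_new = H(PCR_old || measurement)
# so the final value depends on every measurement AND on their order.
def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + measurement).digest()

pcr = b"\x00" * 32                                     # PCRs start zeroed at reset
pcr = extend(pcr, hashlib.sha256(b"BIOS").digest())    # first measurement
pcr = extend(pcr, hashlib.sha256(b"bootloader").digest())  # second measurement

# Reordering or changing any single measurement produces a completely
# different PCR value, which is what makes sealing-to-PCR meaningful.
```

Because the update is a one-way hash chain, software running later in the boot sequence cannot "undo" an earlier measurement to forge a clean-looking PCR.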
TPM-1.2
This version was a minimal implementation, designed with limited flexibility but sufficient for its time.
* Fixed Set of Algorithms: TPM 1.2 relied on SHA-1 for computing hashes and RSA for signatures, verification, and encryption of keys. Optionally, AES could be used.
* Single Storage Hierarchy: There was only one storage hierarchy for the platform user.
* One Root Key: The Storage Root Key (SRK), an RSA-2048 key, served as the root key for storage.
* Built-in Hardware Identity: The Endorsement Key (EK), also an RSA-2048 key, provided a unique hardware identity.
* Sealing Tied to PCR Values: Sealing (where data can only be decrypted in a specific state) was tied to Platform Configuration Register (PCR) values. PCRs are special registers inside the TPM that store collected measurements of the system’s configuration.
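Sealing tied to PCR values, as listed above, can be modelled with a simple policy check. This is only a sketch: a real TPM encrypts the sealed data internally, whereas here an expected-PCR comparison merely models the release policy.

```python
import hashlib

# Toy model of sealing to a PCR value: the secret is released only if the
# current PCR matches the value recorded at seal time. (A real TPM would
# keep the secret encrypted; storing it in a dict is purely illustrative.)
def seal(pcr_at_seal: bytes, secret: bytes) -> dict:
    return {"expected_pcr": hashlib.sha256(pcr_at_seal).digest(), "secret": secret}

def unseal(sealed: dict, current_pcr: bytes) -> bytes:
    if hashlib.sha256(current_pcr).digest() != sealed["expected_pcr"]:
        raise PermissionError("platform state changed: refusing to unseal")
    return sealed["secret"]

good_pcr = b"measured boot state"
sealed = seal(good_pcr, b"disk encryption key")
assert unseal(sealed, good_pcr) == b"disk encryption key"
```

If any measured component changes, the PCR value diverges and `unseal` refuses, which is exactly how, for example, disk-encryption keys can be bound to an unmodified boot chain.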
The Persistent Memory of TPM 1.2 included:
* The Endorsement Key (EK).
* The Storage Root Key (SRK).
The Flexible Memory (or Versatile Memory) was used for other purposes and included:
* Platform Configuration Registers (PCRs): These registers recorded the current configuration of the system by collecting measurements.
* Attestation Identity Keys (AIKs): These keys were used to sign external reports for performing remote attestation.
* Storage for Additional Keys: This storage could hold various keys required during operations.
TPM2.0
- Cryptographic agility, meaning that it is able to support different (and also future) cryptographic primitives with rapid adaptation. For backward compatibility, it continues to support SHA-1, but it also offers SHA-256. It retains RSA for compatibility but can also perform signature, verification, and encryption using ECC-256. Native features include HMAC based on SHA-1 or SHA-256 and AES-128 as a minimum, with other features being optional.
- three key hierarchies: each hierarchy supports multiple keys and algorithms.
- When changes are needed in the TPM, policy-based authorization is used. This approach allows for multiple forms of authorization, such as two-factor authentication or fingerprint verification. This is a significant improvement over TPM 1.2, which relied on a single password for all changes.
- Additionally, TPM 2.0 includes platform-specific specifications, extending its application beyond PC clients to mobile and automotive environments, where trust is crucial.
TPM 2.0 can be implemented in several forms, each with specific characteristics:
* Discrete TPM: A dedicated chip implementing TPM functionality within a tamper-resistant semiconductor package.
* Integrated TPM: The TPM functionality is embedded as part of another chip. In this case, tamper resistance derives from the host chip.
* Firmware TPM: A software-only solution where the TPM functionality runs in the CPU’s Trusted Execution Environment (TEE).
* Hypervisor TPM: A virtual TPM provided by a hypervisor. It runs in an isolated execution environment with a security level comparable to a firmware TPM.
* Software TPM: A software emulator of the TPM running in user space. It is primarily used for development purposes and does not provide real security.
The type of TPM implementation determines the Trusted Computing Base:
* Discrete TPM: The hardware Root of Trust (RoT) is a separate component, ensuring high security.
* Integrated TPM: The RoT is embedded within the chip, relying on the assumption that the manufacturer has implemented and maintained the TPM correctly.
* Firmware TPM: The software becomes part of the TCB, requiring trust in the firmware’s integrity and mechanisms such as Secure Boot.
* Hypervisor TPM: Trust is abstracted, as the user has no insight into the hypervisor’s operations.
* Software TPM: It does not contribute to security and is only suitable for development purposes.
TPM-2.0 three hierarchies
One of the main features of a TPM is generating keys and using those keys to attest facts about the TPM.
Instead of storing keys directly, TPMs have secret values called “seeds” that never leave the TPM and persist through reboots. Seeds are used to deterministically generate keys, which can in turn identify the TPM even if the external storage is wiped (e.g. during OS installs).
There are three seeds and associated hierarchies:
* Platform Hierarchy: used by the platform’s firmware. It contains non-volatile storage, keys, and data related to the platform.
* Endorsement Hierarchy: used by the privacy administrator for storing keys and data related to privacy. It typically holds endorsement keys, used to identify the TPM.
* Storage Hierarchy: used by the platform’s owner (usually also the privacy administrator); it contains non-volatile storage, keys, and data. Storage keys are typically used by local applications.
Each hierarchy has a dedicated authorization (a password as a minimum) and a policy (which can be very simple or complex). Each hierarchy also has a different seed for generating its primary keys, so the keys of the various hierarchies are unrelated.
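The seed-to-key relationship can be sketched with a simple HMAC-based derivation. This is only illustrative: the real TPM 2.0 KDF (based on SP800-108) and the key-template encoding are more involved, and the seed values here are made up.

```python
import hashlib
import hmac

# Sketch of deterministic primary-key derivation from a hierarchy seed.
# Same seed + same template -> same key, which is why primary keys can be
# recreated even after all external storage is wiped (e.g. an OS install).
def derive_primary(seed: bytes, template: bytes) -> bytes:
    return hmac.new(seed, b"primary-key" + template, hashlib.sha256).digest()

endorsement_seed = b"EPS: secret, never leaves the TPM"
storage_seed = b"SPS: secret, never leaves the TPM"

# Deterministic: re-deriving with the same inputs gives the same key.
assert derive_primary(storage_seed, b"RSA2048") == derive_primary(storage_seed, b"RSA2048")
# Different hierarchies use different seeds, so their keys are unrelated.
assert derive_primary(storage_seed, b"RSA2048") != derive_primary(endorsement_seed, b"RSA2048")
```

This also makes the point at the start of the card concrete: the TPM does not need to persist every key, only the seeds.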
Secure storage methods in TPM
Physical Isolation
Data is stored directly within the Non-Volatile RAM (NV-RAM) of the TPM. This memory is very limited in size, so only critical items such as primary keys and permanent keys are typically stored.
Access to these keys is controlled using Mandatory Access Control (MAC), enforcing policies that strictly govern who or what can access them.
Cryptographic Isolation
Large quantities of data or additional keys are stored outside the TPM, for example, on magnetic or solid-state disks.
These external items, referred to as blobs, consist of encrypted sets of bits, such as keys or data, that need to be protected.
Even though the data or keys are stored outside the TPM, they can only be decrypted using the specific TPM that controls the encryption key.
Mandatory Access Control also applies here, ensuring that decryption and access to the TPM-controlled key follow strict policies.
A key advantage of cryptographic isolation is its ability to store large amounts of data securely. The external storage provides effectively unlimited capacity. However, decrypting this data requires the specific TPM, making migration to another platform challenging without the original TPM.
Both methods leverage the TPM’s secure infrastructure and enforce Mandatory Access Control to ensure the integrity and confidentiality of the stored data or keys.