Memory Architecture Flashcards
Computer buses
Communication channels on the motherboard that carry data between the processor, RAM, and other components.
Memory Management Unit (MMU)
Translates the addresses that the processor requests into their corresponding addresses in main memory.
Translation Lookaside Buffer (TLB)
Because a given translation can require multiple memory read operations, the processor uses this special cache to store recently used translations. Prior to each memory access, the TLB is consulted before asking the MMU to perform the costly lookup.
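The lookup order described above can be sketched in a toy model (the page-table contents and sizes here are made up for illustration; real hardware does this in silicon):

```python
# Toy model of a TLB: a small dictionary cache consulted before the
# costly page-table walk. Mappings below are hypothetical.
PAGE_SIZE = 4096

page_table = {0x1: 0x9, 0x2: 0x4}   # page number -> frame number
tlb = {}                             # translation cache

def translate(linear_addr):
    page, offset = divmod(linear_addr, PAGE_SIZE)
    if page in tlb:                  # TLB hit: skip the page-table walk
        frame = tlb[page]
    else:                            # TLB miss: walk the page table
        frame = page_table[page]
        tlb[page] = frame            # cache the translation for next time
    return frame * PAGE_SIZE + offset

translate(0x1234)   # first access walks the table and fills the TLB
translate(0x1234)   # second access for the same page hits the TLB
```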
Memory Controller
The CPU relies on the memory controller to manage communication with main memory.
Direct Memory Access (DMA)
Provides a mechanism to access the contents of physical memory directly from a peripheral device without involving the potentially untrusted software running on the machine.
Linear Address Space
The single continuous address space that is exposed to a running program.
Physical Address Space
Refers to the addresses that the processor requests for accessing physical memory. These addresses are obtained by translating the linear addresses to physical ones, using one or more page tables.
Registers
In the IA-32 architecture, registers are a small amount of extremely fast memory that the CPU uses for temporary storage during processing.
Paging
Provides the ability to virtualize the linear address space. It creates an execution environment in which a large linear address space is simulated with a modest amount of physical memory and disk storage. Each 32-bit linear address space is broken up into fixed-length sections called pages, which can be mapped into physical memory in arbitrary order. The IA-32 architecture supports pages of 4 KB (or 4 MB when large pages are enabled).
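With standard IA-32 4 KB paging, a 32-bit linear address splits into a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit page offset. A minimal sketch of that decomposition (the example address is arbitrary):

```python
# Decompose a 32-bit linear address under IA-32 4 KB paging:
# bits 31-22: page-directory index, bits 21-12: page-table index,
# bits 11-0: byte offset within the page.
def split_linear(addr):
    pde_index = (addr >> 22) & 0x3FF   # top 10 bits
    pte_index = (addr >> 12) & 0x3FF   # middle 10 bits
    offset    = addr & 0xFFF           # low 12 bits
    return pde_index, pte_index, offset

pde, pte, off = split_linear(0xDEADBEEF)
```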
Physical Address Extension (PAE)
Allows the processor to support physical address spaces greater than 4 GB.
Interrupt Descriptor Table (IDT)
PC architectures provide a mechanism for interrupting process execution and passing control to a privileged-mode software routine. The addresses of these routines are stored in the IDT.
Interrupt Service Routine (ISR)
The IDT contains the address of the ISR that can handle a particular interrupt or exception. In the event of an interrupt or exception, the specified ISR is invoked to handle it.
Thread
The basic unit of CPU utilization and execution. A thread is often characterized by a thread ID, a CPU register set, and execution stack(s).
Process
An instance of a program executing in memory. A process's threads share the same code, data, address space, and operating system resources. The process acts as a container for system resources that are accessible to its threads.
CPU Scheduling
Operating system’s capability to distribute CPU execution time among multiple threads.
Context Switching
Switching execution from one thread to another. During a context switch, the OS suspends the execution of a thread and stores its execution context in main memory. The OS then retrieves the execution context of another thread from memory, updates the state of the CPU registers, and resumes execution where it was previously suspended.
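The save/restore cycle can be sketched as a toy model (the register names and values are illustrative, not a real ABI):

```python
# Toy context switch: the OS saves the running thread's register state
# and loads another thread's saved state into the CPU registers.
class Thread:
    def __init__(self, tid):
        self.tid = tid
        self.context = {"eip": 0, "esp": 0}   # saved register set

cpu = {"eip": 0x1000, "esp": 0x7FF0}          # registers of the running thread

def context_switch(old, new, cpu_regs):
    old.context = dict(cpu_regs)   # suspend: store old thread's context
    cpu_regs.update(new.context)   # resume: load the new thread's context

t1, t2 = Thread(1), Thread(2)
t2.context = {"eip": 0x2000, "esp": 0x8FF0}
context_switch(t1, t2, cpu)        # CPU now resumes t2 where it left off
```

The saved copy in `t1.context` is exactly the kind of suspended-thread state that the forensics card below refers to.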
Use of Context Switching for Forensics
The saved execution context associated with suspended threads can provide valuable insight during memory analysis, such as which sections of code were being executed or which parameters were passed to system calls.
Tracked Operating System Resources
Processes, threads, files, network sockets, synchronization objects, and regions of shared memory.
Virtual Memory
OSs provide each process with its own private virtual address space. This abstraction creates a separation between the logical memory that a process sees and the actual physical memory installed on the machine. During execution, the memory manager and the MMU work together to translate virtual addresses into physical addresses.
OS Memory vs. Private Memory
The range of addresses reserved for the OS is generally consistent across all processes, whereas the private ranges depend on the process that is executing. With the support of the hardware, the memory manager can partition the address space to prevent a malicious or misbehaving process from reading or writing memory that belongs to the kernel or to other processes.
Demand Paging
Mechanism commonly used to implement virtual memory: a memory-management policy determines which regions are resident in main memory and which are moved to slower secondary storage when the need arises.
Page File or Swap
Refers to a file or partition on an internal disk, the most common form of secondary storage. A demand paging implementation attempts to load only the pages that are actually needed into memory, as opposed to entire processes.
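The load-on-first-touch behavior can be sketched as follows (the swap contents and page numbers are made up for illustration):

```python
# Sketch of demand paging: pages start out in the backing store ("swap")
# and are brought into resident memory only when first touched,
# which models a page fault.
swap = {0: b"code", 1: b"data", 2: b"heap"}   # hypothetical backing store
resident = {}                                  # pages currently in RAM
faults = 0

def read_page(page):
    global faults
    if page not in resident:       # page fault: load on demand
        faults += 1
        resident[page] = swap[page]
    return resident[page]

read_page(1)   # faults in page 1
read_page(1)   # already resident; no second fault
```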
Locality of Reference
Based on the observation that recently accessed memory locations, and their neighbors, are likely to be accessed again within a short period of time. To improve performance and stability, an OS's memory manager has a mechanism for designating which regions of memory can be paged out versus those that must remain resident.
Shared Memory
Commonly used to conserve physical memory. Instead of allocating multiple physical pages that contain the same data, you can create a single instance of the data in physical memory and map various regions of virtual memory to it.
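The single-copy idea can be sketched with two page tables pointing at one physical frame (addresses, frame numbers, and contents here are hypothetical):

```python
# Sketch of shared memory: two processes map different virtual pages
# to the same physical frame, so one copy of the data backs both.
physical = {7: bytearray(b"shared page")}   # one physical frame

proc_a = {0x1000: 7}   # process A: virtual page -> physical frame
proc_b = {0x4000: 7}   # process B: different virtual address, same frame

def write(proc, vaddr, data):
    physical[proc[vaddr]][:len(data)] = data

def read(proc, vaddr):
    return bytes(physical[proc[vaddr]])

write(proc_a, 0x1000, b"SHARED")
read(proc_b, 0x4000)   # B observes A's write through the shared frame
```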