Memory Flashcards
What do computers use to hold data?
Computers use memory, which is a large array of equal-sized storage spaces, to hold data.
How are locations in computer memory identified?
Each location in computer memory has its own identifier, usually expressed as a hexadecimal number, which is called the address of the location.
What is stored inside each location in computer memory?
Inside each location in computer memory, there is a value which represents the data. The data can represent various things such as numbers, letters, colours, etc.
How can you retrieve the data stored at a specific memory address?
Given an address, you can look up the data stored at that address in computer memory.
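As a toy sketch (Python standing in for hardware; a bytearray models memory and integer indices model addresses — not how real hardware is built):

```python
# A bytearray models memory: one 8-bit value per location.
memory = bytearray(256)

# Store the value 42 at address 0x10 (a write)...
memory[0x10] = 42

# ...then, given only the address, look the value back up (a read).
assert memory[0x10] == 42
```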
What can you do with most computer memory?
Most computer memory can be both read and written, allowing you to store variables and modify their values.
What is the meaning of “RAM” in computer memory?
“RAM” stands for Random Access Memory, which is a type of computer memory that is both readable and writeable. The term “random” indicates that any address in the memory is equally accessible to the computer.
Can memory other than RAM be randomly accessed?
Yes, memory other than RAM, such as Read Only Memory (ROM), can also be randomly accessed, although it may have different characteristics or limitations.
What happens to memory availability when one process uses up some of it in a simple system without virtual memory?
In a simple system without virtual memory, when one process uses up some proportion of the memory, it becomes unavailable for other processes.
What are logical memory segments in a process?
Logical memory segments are the contiguous regions of the address space that a process requires. Their number and function vary from process to process.
What are some typical examples of logical memory segments and their properties?
Code: Executable binary, fixed size, read-only.
Read-only data: Strings, tables, fixed size, read-only.
Static data: Global variables, fixed size, read/write.
Stack: Local variables, dynamic, read/write.
Heap: Run-time structures, dynamic, read/write.
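The table above can be sketched as data: a hypothetical per-process segment table (all start addresses, sizes and names are illustrative, not taken from any real system) plus a check that an access falls in a segment with the right permissions:

```python
# Hypothetical per-process segment table: start, size and writability per segment.
segments = {
    "code":   {"start": 0x0000_1000, "size": 0x4000,  "writable": False},
    "rodata": {"start": 0x0000_5000, "size": 0x1000,  "writable": False},
    "static": {"start": 0x0000_6000, "size": 0x1000,  "writable": True},
    "heap":   {"start": 0x0001_0000, "size": 0x8000,  "writable": True},   # grows at run time
    "stack":  {"start": 0x7FFF_0000, "size": 0x10000, "writable": True},   # grows at run time
}

def check_access(addr, write):
    """Return the segment containing addr, or None if the access is illegal."""
    for name, seg in segments.items():
        if seg["start"] <= addr < seg["start"] + seg["size"]:
            if write and not seg["writable"]:
                return None        # e.g. an attempt to overwrite code
            return name
    return None                    # address falls in no segment at all

assert check_access(0x1000, write=False) == "code"
assert check_access(0x1000, write=True) is None    # code is read-only
assert check_access(0x1_0000, write=True) == "heap"
```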
What is responsible for memory as a resource in a computer?
The operating system (OS) is responsible for memory as a resource in a computer.
What are some tasks performed by the operating system for memory management?
Allocating spaces when a process is loaded.
Setting up pages in a virtual memory system.
Managing access permissions.
Allocating additional space if the stack or heap overflows.
Keeping track of a process's use and recovering the resource when the process terminates.
Can memory management be done by both the application and the operating system?
Yes, memory management can be done by both the application and the operating system. The application may request large blocks of memory and allocate from them, while the OS has the ultimate responsibility for memory management.
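A minimal sketch of that application-level scheme is a "bump" allocator: the process takes one large block from the OS up front and hands out pieces of it itself (all names here are illustrative):

```python
class BumpAllocator:
    """Sub-allocate from one large block obtained from the OS up front."""
    def __init__(self, size):
        self.block = bytearray(size)   # stands in for one big OS allocation
        self.next = 0                  # offset of the next free byte

    def alloc(self, n):
        """Return the offset of n freshly allocated bytes, or raise if full."""
        if self.next + n > len(self.block):
            raise MemoryError("block exhausted; would need another OS request")
        offset = self.next
        self.next += n
        return offset

arena = BumpAllocator(1024)
a = arena.alloc(100)
b = arena.alloc(200)
assert (a, b) == (0, 100)
```

Real allocators (such as the one behind C's `malloc`) add free lists and reuse, but the division of labour is the same: the OS grants large regions, the application carves them up.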
What information does the operating system keep track of in a virtual memory environment?
In a virtual memory environment, the operating system records which physical pages are in use and for what purpose each is being used.
What determines the size of the virtual address space in a computer processor?
The size of the virtual address space in a computer processor is fixed by the architecture of that processor.
What is the typical pattern for the size of virtual address spaces in binary systems?
The size of virtual address spaces in binary systems tends to be a power of two, and in practice the exponent is itself a power of two (16, 32, 64 bits).
What are the sizes of 16-bit, 32-bit, and 64-bit virtual address spaces?
16-bit spaces: 2^16 = 65,536 locations (or 64Ki)
32-bit spaces: 2^32 = 4,294,967,296 locations (or 4Gi)
64-bit spaces: 2^64 = 18,446,744,073,709,551,616 locations (or 16Ei)
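The figures above can be checked directly; Ki, Gi and Ei are the binary prefixes 2^10, 2^30 and 2^60:

```python
# Each address-space size, and its expression using a binary prefix.
assert 2**16 == 65_536                      and 2**16 == 64 * 2**10   # 64Ki
assert 2**32 == 4_294_967_296               and 2**32 == 4  * 2**30   # 4Gi
assert 2**64 == 18_446_744_073_709_551_616  and 2**64 == 16 * 2**60   # 16Ei
```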
What is the typical assumption for the size of “locations” in a virtual address space?
In most cases, “locations” in a virtual address space are assumed to be 8-bit bytes.
How does the physical memory size tend to change over time?
The physical memory size tends to expand as technology progresses, with falling prices per bit of memory, following a trend loosely aligned with Moore’s Law.
Why does the address space in a particular machine architecture often appear larger than the physical memory available to most users?
Architectures are designed well ahead of demand, so their address spaces are much larger than the physical memory most users can afford at the time; as a result, most of the address space sits empty while physical RAM remains the limited resource.
What happens as memory becomes more affordable over time?
As memory becomes more affordable, mechanisms may be developed to allow mapping of a physical address space larger than the virtual address space visible to any single process, enabling storage of data from multiple processes concurrently.
What occurs when new machine architectures with bigger address spaces are introduced?
When new machine architectures with bigger address spaces are introduced, the cycle of expanding address spaces and increasing memory affordability starts again.
What is the purpose of computer memory in relation to running processes?
Computer memory is a physical storage medium that holds one data item per location (address), and a running process uses an address to specify a location to load or store data in memory.
How are addresses used by different processes in a multiprocessing system?
In a multiprocessing system, each process has its own independent set of addresses it wants to use, so that processes do not interfere with each other.
What is virtual memory and how does it address the issue of process interference?
Virtual memory allows each process to attempt to use any virtual address without interfering with other processes. It provides a mapping between virtual addresses and physical addresses.
How are virtual addresses translated to physical addresses in a virtual memory system?
In a virtual memory system, a process's virtual address is converted to a physical address by address translation, under which each physical location corresponds to at most one process-plus-virtual-address combination.
What are some properties of memory access that are exploited during address translation?
Very few processes use the entire virtual address space.
Locality of access is typically observed, meaning that if one address is used, addresses around it are also likely to be used.
How are virtual addresses organized and translated in address translation?
Addresses are organized into pages, and translations are done on a per-page basis.
Page tables are used to perform the translations, with the most significant bits of the address defining the page number.
Aliasing multiple virtual pages to the same physical page is possible but not commonly used.
Not all virtual pages need an allocation of physical RAM, and pages can be marked as ‘invalid’ if not needed.
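The points above can be sketched as a toy single-level translation. This assumes 4KiB pages (so the low 12 bits are the within-page offset) and uses a tiny illustrative table; `None` marks an invalid page:

```python
PAGE_BITS = 12                 # 4KiB pages: low 12 bits are the offset
PAGE_SIZE = 1 << PAGE_BITS

# Toy page table: virtual page number -> physical page number,
# with None marking an invalid (unallocated) virtual page.
page_table = {0: 7, 1: 3, 2: None}

def translate(vaddr):
    vpn = vaddr >> PAGE_BITS           # most significant bits: page number
    offset = vaddr & (PAGE_SIZE - 1)   # least significant bits: offset
    ppn = page_table.get(vpn)
    if ppn is None:
        raise RuntimeError("page fault: no physical page mapped")
    return (ppn << PAGE_BITS) | offset

assert translate(0x1ABC) == 0x3ABC     # virtual page 1 -> physical page 3
```

In hardware the table lives in RAM and is walked by the MMU, but the bit-slicing is the same.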
What is the role of the Memory Management Unit (MMU) in memory mapping?
The mapping between virtual addresses and physical addresses is done by the MMU, which is a hardware component responsible for performing address translations using page tables stored in RAM.
How is the issue of table size addressed in larger address spaces?
In larger address spaces, hierarchical (multi-level) page tables keep table size manageable: an address is translated through two or more levels of table, and levels covering unused regions of the address space need not exist at all, so a sparsely used address space requires only a few small tables.
What is the problem with using large page tables for every process in a real memory mapping scenario?
Using large page tables for every process can result in a significant memory overhead. For example, in a 32-bit system, a single-level page table may require several megabytes of memory for supporting a relatively small utility, making it inefficient. Similarly, in a 64-bit system, the memory requirement for page tables becomes infeasibly large.
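The overheads quoted above follow from simple arithmetic, assuming 4KiB pages with 4-byte entries at 32 bits and 8-byte entries at 64 bits:

```python
# 32-bit flat (single-level) page table: one entry per virtual page.
entries_32 = 2 ** (32 - 12)           # 4KiB pages -> 2^20 virtual pages
table_bytes_32 = entries_32 * 4       # 4-byte entries
assert table_bytes_32 == 4 * 2**20    # 4MiB per process, however small it is

# The same flat scheme at 64 bits is infeasible:
table_bytes_64 = 2 ** (64 - 12) * 8   # 8-byte entries
assert table_bytes_64 == 2**55        # 32PiB of page table alone
```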
What is the von Neumann computing architecture known for?
The von Neumann computing architecture, the most common architecture, has a single memory address space shared by code and data in all their forms.
How is memory divided in the von Neumann architecture?
An application's memory is logically divided into regions, the most common division being between 'code' and 'data'. 'Code' refers to the processor's instructions, while 'data' encompasses variables, which can be written to.
What are segments in the context of memory management?
Segments are logical divisions of memory that group related parts together. Each process has its own set of segments, which can have different attributes and sizes.
How have segments historically been implemented in memory management?
Historically, segments were supported as dedicated parts of memory with their own addresses and hardware mappings. However, this approach is uncommon today as segments are typically mapped using hardware pages and organized by software.
Do operating systems still use segment tables for memory allocation?
Yes, operating system software may still maintain segment tables to allocate pages appropriately, even though logical segments are mapped using hardware pages.
What are some examples of logical divisions or segments in computing?
Logical divisions or segments appear throughout software tooling. For example, Unix binary files are organized into segments, giving a standard layout for executables.
What is the significance of the term “segmentation faults”?
The term “segmentation faults” refers to errors that occur when accessing memory segments improperly or in violation of their defined attributes. These errors can cause program crashes or unexpected behaviour.