Midterm Flashcards

1
Q

Why can threads share the same program code and heap areas in memory, but not the stack area?

A

Each thread has its own flow of execution, with its own sequence of function calls. It would be impossible to maintain a single stack that would keep track of multiple threads at the same time. Besides, stacks typically allow accessing the top element only, meaning that only one thread could access the stack at a time.

2
Q

Define Atomicity.

A

The property of being indivisible: an atomic operation either executes in its entirety or not at all, with no intermediate state visible to other threads.

3
Q

Define Critical section.

A

Operations that should occur in an atomic way with mutual exclusion to avoid a race condition. A critical section is a piece of code that accesses a shared variable (or more generally, a shared resource) and must not be concurrently executed by more than one thread.

4
Q

Define Race condition.

A

Two or more operations that should run sequentially are instead run at the same time. A race condition arises when multiple threads of execution enter the critical section at roughly the same time and both attempt to update the shared data structure, leading to a surprising (and perhaps undesirable) outcome. More generally, a race condition occurs when the outcome of a program depends on the timing or ordering of events.

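A minimal sketch of this effect (not from the course materials): two threads increment a shared counter without synchronization, so the final value depends on how the executions interleave.

#include <pthread.h>
#include <stdio.h>

volatile int counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expected 2000000)\n", counter);
    return 0;
}

Most runs print less than 2000000 because concurrent load-add-store sequences overwrite each other's updates.
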
5
Q

Define Mutual exclusion.

A

Only one thread can access a certain resource at a time. Mutual exclusion (mutex) guarantees that if one thread is executing within the critical section, the others will be prevented from doing so.

6
Q

Define Deadlock.

A

Indefinite waiting for resources due to cyclic dependencies across threads: each thread in the cycle holds a resource that the next one needs, so none of them can proceed. The four conditions (mutual exclusion, hold-and-wait, no preemption, and circular wait) are illustrated with the gridlock analogy in a later card.

7
Q

How can we implement locks in a way that is guaranteed to be correct, efficient, and fair? Do we need assistance from the hardware to do so? What about the operating system?

A

For a lock to be correct, we need to obtain/release the lock in an atomic way. To do that, we need assistance from the hardware in the form of special instructions (e.g., test-and-set). For it to be efficient, we need to avoid spinning (busy waiting). To do that, we need assistance from the operating system to keep threads that cannot obtain a lock ‘sleeping’ until the lock is available. The operating system can also keep track of the threads waiting for the same lock and enforce some policy (e.g., FIFO) to ensure fairness.

8
Q

Why would the given implementation of a condition not work?

A

The process of going to sleep does not occur atomically. A thread may check the value of the variable done and decide to sleep, but then be scheduled out before actually sleeping. Another thread may then change done and send the wakeup signal while the first thread has not yet slept; when the first thread runs again, it goes to sleep and never wakes up. To solve this problem, the check and the sleep must be protected by a mutual exclusion lock.

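A sketch of the broken pattern; thread_sleep() and thread_wakeup() are hypothetical primitives (not a real API) standing in for the given implementation.

volatile int done = 0;

void thread_sleep(void);    /* hypothetical: blocks the calling thread */
void thread_wakeup(void);   /* hypothetical: wakes the sleeping thread */

void wait_for_done(void) {              /* waiter */
    if (done == 0)
        /* RACE WINDOW: the waiter can be scheduled out right here,
           after the check but before sleeping; a wakeup sent now is lost */
        thread_sleep();
}

void signal_done(void) {                /* signaler */
    done = 1;
    thread_wakeup();                    /* no effect if the waiter has not
                                           gone to sleep yet */
}
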
9
Q

What is the race condition in the given put function for a hash table?

A

Lines 23 and 24 (e->next = n; and *p = e;). Two insertions can occur at the same position in the list at the same time, so nodes may go missing or the sorted order of the linked list may be broken.

10
Q

How can you change the put function using a single lock to ensure correct operation of concurrent calls?

A

We must obtain the lock before Line 17 and release it after Line 24. If we lock Lines 23-24 only, we avoid having missing nodes, but we can still have nodes out of order.

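A sketch of the locked version, assuming declarations shaped like the ones in the question (the exact original code may differ); the line numbers in the comments refer to the code given in the question.

#include <pthread.h>
#include <stdlib.h>

#define NBUCKET 5
struct entry { int key; int value; struct entry *next; };
struct entry *table[NBUCKET];
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void put(int key, int value) {
    struct entry *e = malloc(sizeof(*e));
    e->key = key;
    e->value = value;
    pthread_mutex_lock(&lock);                 /* acquire before Line 17 */
    struct entry **p = &table[key % NBUCKET];
    struct entry *n = table[key % NBUCKET];
    for (; n != NULL; p = &n->next, n = n->next)
        if (n->key > key)
            break;                             /* keep the bucket sorted */
    e->next = n;                               /* Line 23 */
    *p = e;                                    /* Line 24 */
    pthread_mutex_unlock(&lock);               /* release after Line 24 */
}
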
11
Q

How can you change the put function using the atomic __compare_and_swap function to avoid a race condition without locks or semaphores?

A

for (;;) {
    for (p = &table[key % NBUCKET], n = table[key % NBUCKET];
         n != NULL;
         p = &n->next, n = n->next) {
        if (n->key > key)
            break;
    }
    e->next = n;
    if (__compare_and_swap(p, n, e))
        break;
}

12
Q

What is the difference in performance between the single-lock solution and the compare-and-swap solution for the put function? In which scenario does one outperform the other?

A

The single-lock solution does not allow any concurrent insertions into the linked lists. The compare-and-swap solution allows concurrent access and only slows down when two threads try to insert a node at the same location. Whenever hash collisions are rare, so concurrent insertions usually target different buckets, the compare-and-swap solution will outperform the single-lock solution.

13
Q

In the producer/consumer code with semaphores, what stops 2*MAX producer threads from producing more than MAX elements at a time and overwriting elements in the buffer? How?

A

The initialization of the ‘empty’ semaphore. It starts at the value MAX, so the first MAX producers decrement it to values that are still non-negative and can all execute concurrently. But as soon as producer MAX+1 tries to pass the semaphore, it decrements it to a negative value and sleeps until it receives a signal on ‘empty’, which only a consumer can post.

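A sketch of the corresponding skeleton with POSIX semaphores (OSTEP-style semaphores go negative while POSIX semaphores block at zero, but the blocking behavior is the same); buffer details are elided.

#include <semaphore.h>

#define MAX 10      /* buffer capacity */
sem_t empty;        /* counts free slots;   initialized: sem_init(&empty, 0, MAX) */
sem_t full;         /* counts filled slots; initialized: sem_init(&full, 0, 0) */

void producer_put(int item) {
    sem_wait(&empty);   /* producer MAX+1 blocks here until a consumer posts */
    /* ... place item into the buffer (protect with a mutex if there are
       multiple producers/consumers) ... */
    sem_post(&full);
}

int consumer_get(void) {
    int item = 0;
    sem_wait(&full);
    /* ... remove item from the buffer ... */
    sem_post(&empty);   /* frees a slot, waking a blocked producer */
    return item;
}
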
14
Q

Explain how gridlock is the same as deadlock in an operating system by showing how each of the four conditions for deadlock hold.

A

In the gridlock analogy, each line of cars traveling in the same direction is considered a thread.
• Mutual exclusion: Only one line of cars can occupy an intersection at a time.
• Hold-and-wait: One line of cars can occupy an intersection while waiting for another intersection to become available.
• No preemption: When one line of cars is occupying an intersection, others cannot forcefully remove them to free the intersection.
• Circular wait: We can have lines of cars passing through intersections in different orders.

15
Q

What are the disadvantages of paging?

A

• Internal fragmentation: Page size may not match size needed by process, leading to wasted memory.
• Additional memory reference to page table: Can be very inefficient without a TLB.
• Storage for page tables may be substantial: Especially with linear page tables.
• Page tables must be allocated contiguously in memory: a large linear page table requires a single large contiguous region of physical memory, which may be hard to find.

16
Q

What is a TLB?

A

TLB stands for Translation Lookaside Buffer. It is a hardware cache of popular virtual-to-physical address translations within the CPU’s Memory Management Unit (MMU). It caches some popular page table entries.

17
Q

Outline the paging translation steps with a TLB.

A
  1. Extract VPN (virtual page num) from VA (virtual addr).
  2. Check TLB for VPN.
  3. If miss: a. Calculate addr of PTE (page table entry). b. Read PTE from memory. c. Replace some entry in TLB.
  4. Extract PFN (page frame number).
  5. Build PA (phys addr).
  6. Read contents of PA from memory into register.

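The same steps as C-like pseudocode (the helper functions and constants are illustrative, not a real API):

VPN    = VirtualAddress >> PAGE_SHIFT;                /* 1. extract VPN */
offset = VirtualAddress & ((1 << PAGE_SHIFT) - 1);
if (TLB_Lookup(VPN, &TlbEntry) == HIT) {              /* 2. check TLB */
    PFN = TlbEntry.PFN;
} else {                                              /* 3. miss */
    PTEAddr = PageTableBase + VPN * sizeof(PTE);      /*    a. PTE address */
    PTE     = ReadMemory(PTEAddr);                    /*    b. read PTE */
    TLB_Insert(VPN, PTE);                             /*    c. replace an entry */
    PFN = PTE.PFN;
}
PhysAddr = (PFN << PAGE_SHIFT) | offset;              /* 4-5. build PA */
Register = ReadMemory(PhysAddr);                      /* 6. access memory */
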
18
Q

How can the system improve TLB performance (hit rate) given a fixed number of TLB entries?

A

Increase page size. Fewer unique page translations are needed to access the same amount of memory.

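For example (illustrative numbers): 64 TLB entries with 4 KiB pages cover 64 × 4 KiB = 256 KiB of memory without a miss; with 2 MiB pages, the same 64 entries cover 64 × 2 MiB = 128 MiB.
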
19
Q

Define TLB Reach.

A

TLB reach = (number of TLB entries) × (page size).

20
Q

What access pattern will result in slow TLB performance?

A

Highly random access with no repeat accesses. Sequential array accesses almost always hit in the TLB and are very fast.

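An illustrative sketch, assuming 4 KiB pages and 4-byte ints (so 1024 ints per page):

int a[1 << 24];                      /* large array spanning many pages */

long sequential(void) {              /* ~1 TLB miss per 1024 accesses */
    long sum = 0;
    for (int i = 0; i < (1 << 24); i++)
        sum += a[i];
    return sum;
}

long one_per_page(void) {            /* one access per page: ~1 miss per
                                        access once the touched pages
                                        outnumber the TLB entries */
    long sum = 0;
    for (int i = 0; i < (1 << 24); i += 1024)
        sum += a[i];
    return sum;
}
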
21
Q

Explain temporal locality.

A

An instruction or data item that has been recently accessed will likely be re-accessed soon in the future.

22
Q

Explain spatial locality.

A

If a program accesses memory at address x, it will likely soon access memory near x. TLBs improve performance due to spatial locality.

23
Q

What TLB characteristics are best for spatial locality?

A

Repeated accesses to the same page need the same vpn→ppn translation, so the same TLB entry is re-used.

24
Q

What TLB characteristics are best for temporal locality?

A

The same address is accessed again in the near future, so the same TLB entry is re-used. How near is near enough depends on how many TLB entries there are and on the replacement policy.

25
Q

Name some TLB replacement policies.

A

LRU (Least-Recently Used), FIFO (First-In, First-Out), and Random.

26
Q

Why might a random TLB replacement policy sometimes be better than LRU?

A

For certain workloads, especially with strided access patterns that exceed the TLB size, LRU can lead to poor performance, and sometimes random is better than a ‘smart’ policy.

27
Q

What happens during a context switch regarding the TLB? What are the solutions?

A

If a process uses cached TLB entries from another process, it can lead to incorrect memory access. Solutions include:
1. Flush TLB on each switch: Costly as all recently cached translations are lost.
2. Track which entries are for which process: Use Address Space Identifiers (ASID) to tag each TLB entry.

28
Q

Who handles TLB misses? Hardware or OS?

A

Either one, depending on the system.
• Hardware-managed TLB: CPU knows where page tables are (e.g., CR3 register on x86). Page table structure is fixed. Hardware ‘walks’ the page table and fills the TLB.
• Software-managed TLB: CPU traps into OS upon TLB miss. OS interprets page tables as it chooses. Modifying TLB entries is privileged.

29
Q

What is the purpose of the ‘valid bit’ in a TLB entry?

A

It indicates whether the entry has a valid translation or not.

30
Q

What is the purpose of the ‘protection bits’ in a TLB entry?

A

They determine how a page can be accessed (e.g., read, write, execute).

31
Q

What is the purpose of the ‘address-space identifier (ASID)’ in a TLB entry?

A

It tracks which process the TLB entry belongs to, allowing the TLB to hold translations from multiple processes.

32
Q

What is the ‘dirty bit’ in a TLB entry?

A

It is marked when the page has been written to.

33
Q

What are the advantages of paging?

A

• No external fragmentation.
• Fast to allocate and free memory.
• Simple to swap out portions of memory to disk.

34
Q

What is a linear page table?

A

A simple array-based page table where the virtual page number (VPN) is used as an index to find the corresponding page table entry (PTE).

35
Q

What is internal fragmentation in the context of paging?

A

Wasted memory within a page because the page size may be larger than the size needed by a process. It grows with larger pages.

36
Q

How can increasing the page size be a simple solution to reduce page table size? What is the drawback?

A

With larger pages, fewer pages are needed to cover the same virtual address space, thus reducing the number of page table entries. The major problem is increased internal fragmentation.

37
Q

What is a segmented page table?

A

An approach to reduce page table overhead by dividing the address space into segments (code, heap, stack), with each segment having its own page table. The base register points to the page table of the segment.

38
Q

What are the advantages of combining paging and segmentation?

A

• Supports sparse address spaces, decreasing the size of page tables.
• No external fragmentation.
• Segments can grow without reshuffling.
• Can run process when some pages are swapped to disk.
• Increases flexibility of sharing at the page or segment level.

39
Q

What is a multi-level page table? What is the goal?

A

A hierarchical page table structure that pages the page tables themselves. The goal is to allow each page table to be allocated non-contiguously and to only allocate page table space for pages in use, supporting sparse address spaces and reducing overall memory consumption for page tables. It uses an outer-level page directory.
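
A sketch of a two-level lookup, assuming a 32-bit virtual address va with 4 KiB pages split into a 10-bit directory index, a 10-bit table index, and a 12-bit offset (the PDE/PTE types and raise_fault() are illustrative):

unsigned dirIndex = (va >> 22) & 0x3FF;      /* top 10 bits */
unsigned ptIndex  = (va >> 12) & 0x3FF;      /* next 10 bits */
unsigned offset   = va & 0xFFF;              /* low 12 bits */

PDE pde = pageDirectory[dirIndex];           /* outer level: fits in one page */
if (!pde.valid)
    raise_fault();                           /* no page table allocated for
                                                this whole 4 MiB region */
PTE *pt = (PTE *)(pde.pfn << 12);            /* inner page table */
PTE pte = pt[ptIndex];
if (!pte.valid)
    raise_fault();
pa = (pte.pfn << 12) | offset;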

40
Q

What are the advantages of multi-level paging?

A

• Page directory can fit into a single page.
• Page tables do not need to be allocated linearly.
• Overall size of page tables is smaller.
• Supports sparse address spaces.

41
Q

What are the disadvantages of multi-level paging?

A

• Requires more memory accesses to traverse multiple levels of page tables if there is a TLB miss.
• Increases address translation time, potentially leading to slower program execution on a TLB miss.
• Increased complexity compared to single-level paging.

42
Q

What is an inverted page table?

A

A page table structure where there is one entry per physical page in the system, storing information about which process and virtual page is mapped to that physical page. Requires searching to find the correct entry, often using a hash table.

43
Q

What is the motivation for concurrency?

A

• To fully utilize multiple CPU cores available in modern systems for parallelism.
• To avoid blocking program progress due to slow I/O by allowing other tasks to run while one thread is waiting.

44
Q

How does dividing a process into threads help achieve concurrency?

A

Threads are like processes but share the same address space, allowing them to communicate easily through shared memory. This enables dividing a large task into smaller, cooperative subtasks that can run concurrently.

45
Q

What state do threads within the same process share?

A

• Process ID (PID).
• Address space (code, heap, most data). They share page directories and virtual memory.
• Open file descriptors.
• Current working directory.
• User and group ID.

46
Q

What state does each thread have its own private copy of?

A

• Thread ID (TID).
• Set of registers.

47
Q

What state does each thread have its own private copy of?

A

• Thread ID (TID).
• Set of registers, including Program Counter (IP/PC) and Stack Pointer (SP).
• Stack for local variables and return addresses.

48
Q

What are user-level threads? What are their advantages and disadvantages?

A

User-level threads are implemented by user-level runtime libraries, and the OS is not aware of them (many-to-one mapping: many user threads run on one kernel thread).
• Advantages: Does not require OS support; portable. Can tune scheduling policy. Lower overhead thread operations (no system calls).
• Disadvantages: Cannot leverage multiprocessors. Entire process blocks if one thread blocks.

49
Q

What are kernel-level threads? What are their advantages and disadvantages?

A

Kernel-level threads are managed by the OS (one-to-one mapping). The OS provides each user-level thread with a kernel thread.
• Advantages: Each thread can run in parallel on a multiprocessor. When one thread blocks, others can run.
• Disadvantages: Higher overhead for thread operations. OS must scale well with many threads.

50
Q

What is pthread_create() used for?

A

To create a new thread. It takes a pointer to a pthread_t, thread attributes, the function pointer to be executed, and the argument for the function.

51
Q

What is pthread_join() used for?

A

To wait for a specified thread to complete its execution.
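
A minimal usage example combining the two calls:

#include <pthread.h>
#include <stdio.h>

void *run(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, run, (void *)1L);   /* handle, attrs, function, arg */
    pthread_join(t, NULL);                       /* wait for the thread to finish */
    return 0;
}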

52
Q

Explain non-determinism in concurrent programming. What causes it?

A

Concurrency can lead to non-deterministic results where the output varies even with the same inputs. This is due to race conditions and depends on the CPU schedule, which can interleave thread execution in different ways.

53
Q

What is a critical section in concurrent programming? What do we want for critical sections?

A

A critical section is a piece of code that accesses shared resources and must not be executed concurrently by multiple threads to avoid race conditions. We want mutual exclusion for critical sections, ensuring that only one thread is in the critical section at any time. We want critical sections to be atomic (execute as an uninterruptible group).

54
Q

What is a lock? What are its basic operations?

A

A synchronization primitive that ensures that any critical section executes as if it were a single atomic instruction. Basic operations include:
• Allocate and Initialize: Create and set up the lock.
• Acquire (lock): Obtain exclusive access to the lock, waiting if it’s not available (mutual exclusion).
• Release (unlock): Release exclusive access, allowing another thread to enter the critical section.
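
The corresponding Pthreads calls, sketched around a shared counter:

#include <pthread.h>

int counter = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* allocate and initialize */

void *worker(void *arg) {
    pthread_mutex_lock(&m);      /* acquire: blocks if another thread holds it */
    counter++;                   /* critical section runs under mutual exclusion */
    pthread_mutex_unlock(&m);    /* release: a waiting thread may now enter */
    return NULL;
}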

55
Q

What is a mutex in the context of Pthreads?

A

The POSIX name for a lock, used to provide mutual exclusion between threads.

56
Q

What are the goals of a good lock implementation?

A

• Correctness: Mutual exclusion (only one thread in critical section). Progress (deadlock-free). Bounded waiting (starvation-free).
• Fairness: Each thread waits for roughly the same amount of time.
• Performance: CPU is not used unnecessarily (e.g., minimal spinning). Low overhead when no contention.

57
Q

Why is disabling interrupts not a good general-purpose solution for implementing locks?

A

• Only works on uniprocessors.
• Requires privileged operations, trusting applications not to abuse it (e.g., monopolize CPU).
• Turning off interrupts for too long can lead to lost interrupts and system problems.

58
Q

Why does a simple lock implementation using a shared boolean flag with load and store operations fail to provide mutual exclusion?

A

The test of the flag and the setting of the flag are not atomic. An interrupt can occur between these two operations, allowing multiple threads to acquire the ‘lock’ simultaneously.
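
A sketch of the broken flag-based lock, with the non-atomic window marked:

typedef struct { int flag; } lock_t;

void init(lock_t *m)   { m->flag = 0; }

void lock(lock_t *m) {
    while (m->flag == 1)    /* TEST the flag */
        ;                   /* spin-wait */
    /* an interrupt here lets a second thread see flag == 0 as well */
    m->flag = 1;            /* SET it: too late for mutual exclusion */
}

void unlock(lock_t *m) { m->flag = 0; }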

59
Q

What is the TestAndSet hardware instruction (or atomic exchange)? How can it be used to build a spin lock?

A

TestAndSet atomically returns the old value at a memory location and sets it to a new value. A spin lock can be built by repeatedly calling TestAndSet to set the lock flag to 1 until it returns 0 (meaning the lock was free). unlock simply resets the flag to 0.
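
A spin-lock sketch; TestAndSet is written as plain C here, but in reality the whole function executes atomically in hardware (e.g., via an atomic-exchange instruction):

int TestAndSet(int *old_ptr, int new_val) {   /* atomic in hardware */
    int old = *old_ptr;
    *old_ptr = new_val;
    return old;
}

typedef struct { int flag; } lock_t;

void lock(lock_t *l) {
    while (TestAndSet(&l->flag, 1) == 1)
        ;    /* returns 1 while another thread holds the lock: keep spinning */
}

void unlock(lock_t *l) { l->flag = 0; }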

60
Q

What is a spin lock? What are its advantages and disadvantages?

A

A type of lock where a thread repeatedly checks (spins) if the lock is available until it can acquire it.
• Advantages: Can be fast if locks are held for short periods and there are many CPUs (avoids context switch).
• Disadvantages: Wastes CPU cycles while spinning, especially on a uniprocessor or when locks are held for a long time. Not fair, can lead to starvation.

61
Q

What is the CompareAndSwap hardware instruction? How can it be used to build a spin lock?

A

CompareAndSwap atomically checks if the value at a memory location is equal to an expected value, and if so, replaces it with a new value. It returns the original value. A spin lock can be built by repeatedly calling CompareAndSwap to set the lock flag to 1 only if it is currently 0.

62
Q

What are Load-Linked (LL) and Store-Conditional (SC) instructions? How can they be used to build a lock?

A

LL loads a value from memory. SC only stores a new value to that memory location if no other store has occurred since the last LL. SC returns 1 on success and 0 on failure. A lock can be built in a loop: LL reads the lock status; SC tries to set it to locked; if SC fails, the loop repeats.

63
Q

What is the FetchAndAdd hardware instruction? How can it be used to build a ticket lock?

A

FetchAndAdd atomically increments a value at a memory location and returns the old value. In a ticket lock, each thread gets a unique ticket number using FetchAndAdd. The lock grants access to threads in the order of their ticket numbers. Threads spin until their ticket number matches the current turn number. unlock increments the turn number. Ticket locks ensure progress and can be fairer than basic spinlocks.
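
A ticket-lock sketch; as above, FetchAndAdd is written as plain C but is assumed to execute atomically in hardware:

int FetchAndAdd(int *ptr) {      /* atomic in hardware */
    int old = *ptr;
    *ptr = old + 1;
    return old;
}

typedef struct { int ticket; int turn; } lock_t;

void lock_init(lock_t *l) { l->ticket = 0; l->turn = 0; }

void lock(lock_t *l) {
    int myturn = FetchAndAdd(&l->ticket);   /* grab the next ticket number */
    while (l->turn != myturn)
        ;                                   /* spin until it is my turn */
}

void unlock(lock_t *l) { l->turn = l->turn + 1; }   /* admit the next ticket */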

64
Q

What is priority inversion? Why can spin locks exacerbate this problem?

A

Priority inversion occurs when a high-priority thread is blocked waiting for a lower-priority thread to release a resource (e.g., a lock). Spin locks can worsen this because the high-priority thread will spin (consume CPU) while waiting, preventing the lower-priority thread from running and releasing the lock.

65
Q

How can blocking (sleeping) be used to implement locks more efficiently than spin-waiting, especially on a uniprocessor? What OS support is needed?

A

Instead of spinning, a thread that cannot acquire a lock can put itself to sleep (block), allowing other threads (including the one holding the lock) to run. When the lock is released, the sleeping thread is woken up. OS support is needed for park() (to put a thread to sleep) and unpark() (to wake up a specific thread).
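
A simplified sketch in the style of the park()/unpark() lock from OSTEP: guard is a short-lived spin lock protecting the lock's own state, the queue helpers are illustrative, and the setpark() refinement that closes the sleep/wakeup race is omitted.

typedef struct {
    int flag;        /* is the lock held? */
    int guard;       /* spin lock protecting flag and the queue */
    queue_t *q;      /* threads waiting for the lock */
} lock_t;

void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;                             /* spin only briefly, on the guard */
    if (m->flag == 0) {
        m->flag = 1;                  /* lock acquired */
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());    /* record that we are waiting */
        m->guard = 0;
        park();                       /* sleep until unparked */
    }
}

void unlock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;
    if (queue_empty(m->q))
        m->flag = 0;                  /* nobody waiting: free the lock */
    else
        unpark(queue_remove(m->q));   /* pass the lock directly to the next
                                         waiter; flag stays 1 */
    m->guard = 0;
}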

66
Q

What is a two-phase lock?

A

A lock implementation that combines spinning and blocking. In the first phase, the lock spins for a while, hoping to acquire the lock quickly. If the lock is not acquired, it enters a second phase where the caller blocks (goes to sleep) until the lock is free. The Linux futex lock has elements of this approach.

67
Q

What is a condition variable? What operations are associated with it?

A

A condition variable is an explicit queue that threads can put themselves on when some condition is not as desired. Operations include:
• wait(cond_t *cv, mutex_t *lock): Atomically releases the lock and puts the caller to sleep on the condition variable. When woken up, it re-acquires the lock before returning. Assumes the lock is held when called.
• signal(cond_t *cv): Wakes up a single thread waiting on the condition variable (if any).
• broadcast(cond_t *cv): Wakes up all threads waiting on the condition variable (if any).

68
Q

What is the relationship between condition variables and mutexes? Why is a mutex typically associated with a condition variable?

A

Condition variables are always used in conjunction with a mutex lock. The mutex protects the shared state (the condition) that threads are waiting for. The wait() operation atomically releases the mutex while the thread goes to sleep and re-acquires it upon waking. This prevents race conditions when checking the condition and going to sleep.

69
Q

What is the ‘wait/signal’ race condition that can occur without proper use of mutexes with condition variables?

A

A thread might check a condition and decide to wait, but before it actually calls wait() to go to sleep, another thread changes the condition and calls signal(). The first thread then goes to sleep and might wait indefinitely because it missed the signal. Using a mutex to protect the condition check and the wait() call prevents this.

70
Q

What is the first rule of thumb for using condition variables? Why is it important?

A

Keep state in addition to CVs! CVs are used to signal threads when state changes. If the state is already as needed when a thread checks, it does not need to wait for a signal.

71
Q

What is the second rule of thumb for using condition variables? Why is it important?

A

Modify state with mutex held (in threads calling wait and signal). The mutex is required to ensure the state does not change between the testing of the state and waiting on the CV.

72
Q

Why should you always use a while loop to check the condition when waiting on a condition variable instead of an if statement?

A

This is because of Mesa semantics: a signaled thread is only guaranteed to be woken up, and the condition might have changed by the time it actually runs again (because other threads ran in between). A while loop also handles spurious wakeups, where a thread wakes up without any signal. Re-checking the condition in a while loop ensures that the thread only proceeds if the condition is still true after waking up.
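
The canonical Pthreads pattern, combining both rules of thumb with the while loop:

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
int ready = 0;                         /* state kept in addition to the CV */

void *waiter(void *arg) {
    pthread_mutex_lock(&m);
    while (ready == 0)                 /* while, not if: re-check on wakeup */
        pthread_cond_wait(&c, &m);     /* atomically releases m and sleeps */
    /* ready is guaranteed to be 1 here, with m held */
    pthread_mutex_unlock(&m);
    return NULL;
}

void *signaler(void *arg) {
    pthread_mutex_lock(&m);
    ready = 1;                         /* modify state with the mutex held */
    pthread_cond_signal(&c);
    pthread_mutex_unlock(&m);
    return NULL;
}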

73
Q

In a producer/consumer problem with a bounded buffer, why might you need two condition variables (e.g., empty and fill) instead of one?

A

Using two condition variables allows for more directed signaling. Producers wait on the empty condition when the buffer is full and signal the fill condition when they add data. Consumers wait on the fill condition when the buffer is empty and signal the empty condition when they remove data. This prevents a consumer from accidentally waking up another consumer (when a producer should be woken) or vice versa, avoiding potential deadlocks or incorrect states.

74
Q

What is a covering condition? When might using pthread_cond_broadcast() be appropriate? What is the potential downside?

A

A covering condition is a condition that covers all cases where a thread needs to wake up, even if it means waking up more threads than necessary. Using pthread_cond_broadcast() (wake all waiting threads) might be appropriate when the signaling thread doesn’t know which specific waiting thread(s) should be woken up to make progress (e.g., in a memory allocator where different waiting threads might need different amounts of memory). The downside is potential negative performance impact as many threads might wake up needlessly, re-check the condition, and immediately go back to sleep.

75
Q

What is swap space? What is its purpose?

A

Swap space is a reserved area on the disk used by the operating system to move (swap out) inactive pages from physical memory to free up RAM and create the illusion of a larger virtual memory than physically available. Pages can also be brought back (swapped in) from the swap space into physical memory when needed. The OS needs to remember the disk address of a given page in swap.

76
Q

What is the ‘present bit’ in a page table entry? What does it indicate?

A

The present bit is a flag in each page table entry (PTE) that indicates whether the corresponding page is currently residing in physical memory. If the bit is set to 1, the page is in memory; if it is 0, the page is not in memory and likely resides in swap space on disk.

77
Q

What is a page fault? When does it occur?

A

A page fault is an exception raised by the hardware when a process tries to access a virtual memory page that is valid (mapped in the page table) but not currently present in physical memory, i.e., the present bit in the PTE is 0.

78
Q

Briefly describe the steps involved in handling a page fault.

A
  1. The hardware detects the page fault and transfers control to the OS (page-fault handler).
  2. The OS determines the location of the missing page on disk (e.g., from the PTE).
  3. The OS initiates a disk read to bring the page into a free physical memory frame. If no free frame is available, a page replacement policy is used to evict a page.
  4. Once the disk I/O completes, the OS updates the page table entry (sets the present bit to 1, records the PFN).
  5. The OS may also update the TLB.
  6. The OS retries the instruction that caused the page fault.
79
Q

What is a page replacement policy? Why is it needed?

A

A page replacement policy is the algorithm used by the OS to decide which page in physical memory to evict (replace) when a new page needs to be brought in and memory is full or below a certain threshold. It is needed to make space for incoming pages from disk and aims to minimize the number of future page faults.

80
Q

What is a swap daemon (or page daemon)? What is its role?

A

A background OS thread responsible for freeing up memory. It typically runs when the amount of free physical memory falls below a low watermark (LW) and evicts pages until the free memory reaches a high watermark (HW). This proactive eviction helps maintain a pool of free memory.

81
Q

Why might the OS cluster or group pages together when writing them out to swap?

A

To increase the efficiency of disk I/O. Writing multiple contiguous pages at once reduces disk seek and rotational overheads compared to writing them individually.

82
Q

What is a virtually-indexed cache? What problem does it try to solve, and what new issues does it introduce?

A

A cache in the CPU that is indexed (addressed) using virtual addresses instead of physical addresses. It tries to solve the performance bottleneck where address translation (TLB lookup) has to happen before a physically-indexed cache can be accessed. However, it introduces new issues related to cache coherence and aliasing (different virtual addresses mapping to the same physical address) that need to be handled by the hardware and/or OS.

83
Q

Define TLB coverage. What happens if a program exceeds it?

A

TLB coverage is the total amount of virtual memory that can be simultaneously translated by the entries in the TLB (number of TLB entries × page size). If a program accesses a number of pages exceeding the TLB coverage within a short period, it will experience a high number of TLB misses, leading to significant performance degradation as the page table needs to be consulted for each new translation.

84
Q

What is aliasing in the context of virtual and physical addresses?

A

Aliasing refers to different virtual addresses mapping to the same physical address.
