Week 6 Flashcards
Open file descriptor table
Contains all the files that have been opened by a process; stored in the PCB.
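A minimal sketch (hypothetical file path) showing that open() returns a small integer which indexes the process's open file descriptor table:
    /* Sketch: file descriptors are small integers indexing the per-process
       open file descriptor table stored in the PCB. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/tmp/example.txt", O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }
        printf("fd = %d\n", fd);   /* typically 3: 0, 1, 2 are stdin/stdout/stderr */
        close(fd);                 /* frees the slot in the descriptor table */
        return 0;
    }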
Page table
Contains all the mappings from virtual to physical address space; a pointer to the page table is stored in the process PCB.
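A small illustrative sketch, assuming 4 KiB pages, of how a virtual address splits into a page number (the index into the page table) and an offset:
    /* Sketch: the virtual page number selects a page-table entry; the offset
       is carried over unchanged into the physical address. Numbers are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* 4 KiB pages => 12 offset bits */

    int main(void) {
        uint64_t vaddr  = 0x7ffd1234abcdULL;
        uint64_t vpn    = vaddr / PAGE_SIZE;   /* virtual page number: page-table index */
        uint64_t offset = vaddr % PAGE_SIZE;   /* unchanged by the translation */
        printf("vpn = %llu, offset = %llu\n",
               (unsigned long long)vpn, (unsigned long long)offset);
        return 0;
    }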
Thread
Created within a process; lets a running program be split into multiple tasks that execute simultaneously (or pseudo-simultaneously) to keep the CPU cores busy by running in parallel. Each thread has its own execution context, while the heap, the static data segment, and the code segment of the virtual memory, as well as the open files, are shared with the process.
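A minimal POSIX threads sketch: the two workers share the global variable, while each has its own stack for its local variables:
    /* Sketch: threads share globals and the heap; each has its own stack and registers. */
    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                          /* static data: visible to all threads */

    void *worker(void *arg) {
        int local = *(int *)arg;             /* lives on this thread's own stack */
        shared += local;                     /* unsynchronized here; see race conditions */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int a = 1, b = 2;
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);
        return 0;
    }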
Thread Table
Stores the execution context of each thread, which includes its processor registers (program counter, stack pointer, and general-purpose registers) and its stack memory segment.
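A purely illustrative struct (not an actual kernel structure) of what one thread-table entry might hold:
    /* Hypothetical layout of a thread-table entry holding an execution context. */
    #include <stdint.h>

    struct thread_entry {
        int      tid;             /* thread identifier */
        uint64_t program_counter;
        uint64_t stack_pointer;
        uint64_t registers[16];   /* general-purpose registers */
        void    *stack_base;      /* this thread's stack memory segment */
        int      state;           /* e.g. READY, RUNNING, BLOCKED */
    };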
Thread Pool
A number of threads is pre-created in the system; whenever a client request arrives, one of the idle threads is taken from the pool to service it, and after the request has been serviced the thread is returned to the pool, available for servicing future clients.
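A minimal thread-pool sketch with pthreads; the pool size, queue size, and the "service" step are illustrative:
    /* Sketch: pre-created workers wait for client requests on a shared queue,
       service one, then loop back to waiting (i.e. return to the pool). */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_WORKERS  4
    #define NUM_REQUESTS 8

    static int queue[NUM_REQUESTS];
    static int head = 0, tail = 0;
    static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    void *worker(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail)                  /* idle until a client arrives */
                pthread_cond_wait(&nonempty, &lock);
            int client = queue[head++];
            pthread_mutex_unlock(&lock);
            printf("servicing client %d\n", client);   /* the "work" */
        }
        return NULL;
    }

    int main(void) {
        pthread_t pool[NUM_WORKERS];
        for (int i = 0; i < NUM_WORKERS; i++)     /* pre-create the threads */
            pthread_create(&pool[i], NULL, worker, NULL);

        for (int c = 0; c < NUM_REQUESTS; c++) {  /* clients arriving */
            pthread_mutex_lock(&lock);
            queue[tail++] = c;
            pthread_cond_signal(&nonempty);
            pthread_mutex_unlock(&lock);
        }
        sleep(1);                                 /* crude: give workers time, then exit */
        return 0;
    }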
Thread Affinity
CPU cores have caches, so in multicore systems a thread's data is more likely to still be cached if the thread keeps running on the same core; thread affinity therefore binds a thread to a specific core (or gives it a preference for one).
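A Linux-specific sketch using pthread_setaffinity_np to pin the calling thread to core 0:
    /* Sketch: restrict the calling thread's CPU set to core 0 so its cached
       data stays warm on that core. Linux-specific (_GNU_SOURCE). */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                          /* allow only CPU core 0 */
        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        /* from here on, the scheduler keeps this thread on core 0 */
        return 0;
    }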
User level threads
The thread library is located in user space; the OS is not aware of the threads.
Kernel level threads
The thread library is located in kernel space; each thread is known to the kernel and is handled and scheduled individually.
Virtual dynamic shared objects
A small shared library that the kernel maps into each process's address space, allowing certain kernel routines (such as reading the current time) to be executed directly in user space, which reduces the need for context switches into the kernel.
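A sketch assuming Linux, where clock_gettime() is commonly serviced through the vDSO and so usually completes without trapping into the kernel:
    /* Sketch: on Linux this call typically takes the vDSO fast path, so no
       mode switch into the kernel is needed. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);   /* usually answered by the vDSO */
        printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }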
Race Conditions
Two or more threads or processes attempt to access and update the same data at the same time; the result of a computation depends on the exact timing of the multiple processes or threads being executed.
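The classic race-condition sketch: two threads increment a shared counter without synchronization, so the result depends on the interleaving and is usually less than the expected 2,000,000:
    /* Sketch: counter++ is a load, add, store sequence, so concurrent updates
       can be lost. */
    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                     /* not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }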
Concurrency Control
Managing the interleaved execution of multiple processes that access the same shared states, to produce the correct results.
Synchronization
Implements coordination between processes to prevent issues like race conditions.
Critical Section
A portion of code that involves an access or modification of shared state.
Mutual Exclusion
Enforcing that only one process is in any given critical section at a time.
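A sketch that fixes the race-condition example above: a pthread mutex enforces mutual exclusion around the critical section that updates the counter:
    /* Sketch: only one thread at a time can hold the lock, so the critical
       section is executed by one thread at a time. */
    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);     /* enter: only one thread admitted */
            counter++;                     /* critical section */
            pthread_mutex_unlock(&lock);   /* leave: let a waiting thread in */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 */
        return 0;
    }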
Progress
If no process is currently in the critical section, and at least one process wants to enter it, some process will eventually be able to enter.
Bounded Waiting
A process that is waiting in line to enter the critical section will not be waiting indefinitely, and is guaranteed to have an opportunity to enter.
Busy-wait
A loop checking the same condition repeatedly; the process busy-waiting cannot proceed until the condition becomes true; consumes CPU resources inefficiently.
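A busy-wait sketch using C11 atomics: the main thread spins, consuming CPU, until the other thread makes the condition true:
    /* Sketch: the while loop repeatedly re-checks the flag instead of blocking. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_int ready = 0;

    void *setter(void *arg) {
        (void)arg;
        atomic_store(&ready, 1);             /* make the condition true */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, setter, NULL);
        while (atomic_load(&ready) == 0)     /* busy-wait: burns CPU cycles */
            ;                                /* spin */
        printf("condition became true\n");
        pthread_join(t, NULL);
        return 0;
    }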
Disabling interrupts
An overly powerful approach to implementing synchronization: it prevents the OS from performing any context switch, since context switches are triggered by interrupts.
Atomic Statements
Statements that the compiler can translate into a single machine instruction, so that each executes as one indivisible, uninterruptible unit.
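A sketch using a C11 atomic increment, which compiles to a single indivisible read-modify-write instruction (e.g. a locked add on x86), so no mutex is needed:
    /* Sketch: atomic_fetch_add performs the update as one uninterruptible operation. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_long counter = 0;

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            atomic_fetch_add(&counter, 1);   /* atomic update */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", (long)counter);   /* always 2000000 */
        return 0;
    }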
Deadlock
Processes are waiting for each other, such that none of them can proceed.
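A classic deadlock sketch: each thread holds one lock and waits for the other's, so neither can proceed (running this will hang):
    /* Sketch: locks acquired in opposite orders produce a circular wait. */
    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    void *thread1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&A);
        sleep(1);                   /* make the bad interleaving likely */
        pthread_mutex_lock(&B);     /* waits forever: thread 2 holds B */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    void *thread2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&B);
        sleep(1);
        pthread_mutex_lock(&A);     /* waits forever: thread 1 holds A */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);     /* never returns: deadlock */
        pthread_join(t2, NULL);
        return 0;
    }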