Module 6: Threads and Concurrency Control Flashcards
open file descriptor table
contains all the files that have been opened by a process; stored in the PCB
page table
contains all the mappings from virtual to physical address space; a pointer to the page table is stored in the process PCB
thread
created within a process; splits an executing program into multiple simultaneously (or pseudo-simultaneously) running tasks, keeping the CPU cores busy by running them in parallel; each thread has its own execution context, though heap memory and open files are shared with the process
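A minimal sketch of this sharing, using Python's threading module: two threads created within one process both write into the same heap-allocated list, while each thread's local variables live in its own execution context.

```python
import threading

shared = []  # heap-allocated object, visible to all threads in the process

def worker(name):
    local = f"hello from {name}"  # per-thread local state
    shared.append(local)          # writes into memory shared with the process

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # both threads wrote into the same shared list
```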
thread table
stores the execution context of each thread, which includes the thread's processor registers (stack pointer, program counter, and general registers), MMU state, and stack memory segment
thread pool
where you pre-create a number of threads in the system, and whenever a client comes in, you take one of the idle threads to service the client; after the client request has been serviced, then the thread is returned to the pool and it’s available for servicing future clients
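A minimal sketch of this pattern, using Python's concurrent.futures.ThreadPoolExecutor: a fixed number of pre-created threads service incoming requests, and each thread returns to the pool once its request is done. The handle_request function is a hypothetical stand-in for real client-servicing logic.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(client_id):
    # stand-in for servicing one client request
    return f"served client {client_id}"

# pre-create 4 threads; each idle thread picks up the next waiting request,
# services it, and becomes available again for future clients
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(10)))

print(results[0])  # -> "served client 0"
```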
thread affinity
CPU cores have caches, and in multicore systems, if threads keep getting run on the same core, their data is more likely to be cached; thus, the scheduler tries to keep each thread on the core it last ran on (or a thread can be pinned to a specific core)
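A minimal sketch of pinning to a core: on Linux, Python exposes the kernel's affinity API as os.sched_setaffinity. The call is guarded because it is not available on every platform (e.g. macOS, Windows).

```python
import os

if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})    # pin the calling process to core 0
    print(os.sched_getaffinity(0))  # -> {0}
else:
    print("CPU affinity API not available on this platform")
```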
user-level threads
the thread library is located in user space; the OS is not aware of the threads
kernel-level threads
the thread library is located in kernel space; each thread is handled and scheduled individually
virtual dynamic shared objects
allows user space to handle certain kernel-space routines, reducing the need for context switches; the memory is allocated in user space
concurrency control
managing the interleaved execution of multiple processes that access the same shared states, to produce the correct results
race conditions
two or more threads or processes attempt to access and update the same data at the same time; the result of a computation depends on the exact timing of the multiple processes or threads being executed
(very hard to debug race conditions because it is hard to reproduce the same bug)
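A minimal sketch of a race, using Python threads: several threads increment a shared counter without synchronization. Each `counter += 1` is a read-modify-write, so interleavings can lose updates, and the final value varies from run to run, which is exactly what makes races hard to reproduce and debug.

```python
import threading

counter = 0
ITERATIONS = 100_000

def increment():
    global counter
    for _ in range(ITERATIONS):
        counter += 1  # not atomic: load, add, store can interleave

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # often less than 400000, because updates get lost
```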
synchronization
implementing coordination among multiple processes or threads that access or modify a shared state
critical section (synchronization)
a portion of code that involves an access or modification of shared state
mutual exclusion (synchronization)
enforcing that only one process is in any given critical section at a time
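A minimal sketch of mutual exclusion, using Python's threading.Lock: the critical section (the counter update) is guarded so that only one thread executes it at a time, which makes the final count deterministic.

```python
import threading

counter = 0
lock = threading.Lock()
ITERATIONS = 100_000

def increment():
    global counter
    for _ in range(ITERATIONS):
        with lock:        # enter the critical section; others must wait
            counter += 1  # only one thread executes this at a time

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 400000, on every run
```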
progress (synchronization)
if no process is currently in line to enter the critical section, and at least one process wants to enter it, some process will eventually be able to enter