M6: Threads and Concurrency Control - Managing Multiple Tasks at the Same Time Flashcards
Open file descriptor table:
contains all the files that have been opened by a process; stored in the PCB
Page table:
contains all the mappings from virtual to physical address space;
a pointer to the page table is stored in the process PCB
Thread:
created within a process;
lets an executing program be split into multiple tasks that run simultaneously (in parallel on multiple cores) or pseudo-simultaneously (interleaved on one core), keeping the CPU cores busy;
each thread has its own execution context, though the heap memory, static data segment, and code segment of the virtual memory, as well as the open files, are shared with the process
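As a minimal sketch of this sharing (Python's `threading` module; the names `shared` and `worker` are illustrative), each thread runs its own execution flow but writes into the same process-wide data:

```python
import threading

shared = []  # lives in the process's shared memory: visible to every thread

def worker(tag: str) -> None:
    # each thread has its own stack, program counter, and loop variable,
    # but all of them append into the same shared list
    for i in range(3):
        shared.append((tag, i))

t1 = threading.Thread(target=worker, args=("A",))
t2 = threading.Thread(target=worker, args=("B",))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(shared))  # 6: both threads' items landed in one shared list
```

Because the list is shared, the six appended items end up in a single structure, while each thread's local state (its `i`) stayed private to that thread's stack.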
Thread table:
stores the execution context of each thread: its program counter, stack pointer, general-purpose registers, and stack memory segment (the address space and MMU mappings belong to the process and are shared by all its threads)
Thread pool:
a number of threads is pre-created in the system; whenever a client request arrives, an idle thread is taken from the pool to service it, and once the request has been serviced the thread is returned to the pool, available for future clients
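This pattern maps directly onto Python's `concurrent.futures.ThreadPoolExecutor`; a sketch in which `handle_request` is a hypothetical stand-in for real per-client work:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(client_id: int) -> str:
    # placeholder for real request-servicing work
    return f"served client {client_id}"

# pre-create a fixed number of worker threads
with ThreadPoolExecutor(max_workers=4) as pool:
    # each submitted request is picked up by an idle worker;
    # when the worker finishes, it goes back to serving the queue
    results = list(pool.map(handle_request, range(8)))

print(results[0])  # served client 0
```

The pool amortizes thread-creation cost: eight requests are serviced by only four threads, each reused as soon as it becomes idle.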
Thread affinity:
CPU cores have caches, and in multicore systems a thread's data is more likely to still be cached if the thread keeps getting run on the same core; thus, the scheduler tries to keep (or can be told to pin) each thread on a specific core
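On Linux, a process (and hence its threads) can be pinned to a core via `os.sched_setaffinity`; a guarded sketch, since the call is not available on every platform and `pin_to_core` is an illustrative helper:

```python
import os

def pin_to_core(core: int):
    """Pin the calling process to a single CPU core (Linux-only)."""
    if not hasattr(os, "sched_setaffinity"):
        return None  # affinity API not available on this platform
    os.sched_setaffinity(0, {core})   # pid 0 = the calling process
    return os.sched_getaffinity(0)    # the CPU mask now in effect

mask = pin_to_core(0)
```

After pinning, the scheduler will only run the process on core 0, so its cached data is never invalidated by a migration to another core.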
User level threads:
the thread library is located in user space; the OS is not aware of the threads
Kernel level threads:
the thread library is located in kernel space; each thread is handled and scheduled individually
Virtual dynamic shared objects (vDSO):
allow certain kernel routines to be executed in user space, reducing the need for context switching; the memory is mapped into user space
Concurrency control:
managing the interleaved execution of multiple processes that access the same shared state, to produce correct results
Race conditions:
two or more threads or processes attempt to access and update the same data at the same time; the result of a computation depends on the exact timing of the multiple processes or threads being executed
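A minimal sketch of a lost-update race (Python threads; the explicit read-yield-write split and the `time.sleep(0)` are there to widen the race window that exists anyway in an unsynchronized read-modify-write):

```python
import threading
import time

counter = 0

def unsafe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        tmp = counter      # read shared state
        time.sleep(0)      # yield the CPU, inviting an interleaving here
        counter = tmp + 1  # write back: another thread's update may be lost

threads = [threading.Thread(target=unsafe_increment, args=(1000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# with one-at-a-time execution counter would be 2000;
# lost updates typically leave it smaller, and the exact
# value varies from run to run
print(counter)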
Synchronization:
implementing coordination between processes
Critical section:
a portion of code that involves an access or modification of shared state
Mutual exclusion:
enforcing that only one process is in any given critical section at a time
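In Python, mutual exclusion over a critical section is typically enforced with a `threading.Lock`; a sketch that repairs the lost-update problem by serializing the shared-state update:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # at most one thread holds the lock at a time
            counter += 1  # critical section: shared-state update

threads = [threading.Thread(target=safe_increment, args=(1000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: no updates are lost
```

The `with lock:` block acquires the lock on entry and releases it on exit (even on an exception), so every read-modify-write of `counter` runs to completion before another thread can start one.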
Progress:
if no process is currently in the critical section, and at least one process wants to enter it, some process will eventually be able to enter
If no process is executing in its critical section and some processes want to enter their corresponding critical sections, then:
1. Only those processes that are waiting to enter can participate in the competition (to enter their critical sections), and no other processes can influence this decision.
2. This decision cannot be postponed indefinitely (i.e., finite decision time). Thus, one of the waiting processes can enter its critical section.