Exam practice Flashcards
User
People, machines, or other PCs.
Kernel
Central component of the OS that handles fundamental hardware operations.
Operating System
Interface to the kernel and to the hardware; a bridge between HW and SW. Provides a set of services to system users, controls execution of programs, and acts as a resource manager for CPU, memory and I/O devices.
I/O device
any operation, program or device that transfers data to and from the computer.
User mode
CPU mode in which executing code has restricted access to the hardware; user applications run in this mode.
Kernel mode
CPU mode in which executing code has unrestricted access to the hardware; the kernel runs in this mode.
Bootstrap loader
a small piece of code that initializes and loads the kernel at startup.
Program
a passive entity stored on disk as an executable file
Process
a program in execution loaded into memory, which progresses in a sequential manner. Multiple processes can exist for one program.
Process states
new, ready, running, waiting, terminated
Context switch
Switching the CPU from one process to another, saving the state of the old process so that it can be restored later.
PCB
Process Control Block; contains all information associated with a process, e.g. process state and memory-management info.
Process scheduling
allocates the CPU among processes to maximize and optimize CPU usage. Uses a long-term scheduler (job scheduler) to decide which processes are admitted, and a short-term scheduler (CPU scheduler) to decide which ready process runs next.
PID
Process identifier; used to identify processes. Useful when creating child processes and handling resources.
Process creation
a child process is created (e.g. via fork()) as a copy of the parent's program and address space; it may then load a new program (e.g. via exec()).
Process termination
the child terminates (e.g. via exit()); its exit status is returned to the parent via wait() and its resources are deallocated by the OS.
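A minimal POSIX sketch of this create/terminate pattern, assuming a UNIX-like system (the "ls -l" command run by the child is only an illustration):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                  /* create the child as a copy of the parent */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {               /* child: optionally load a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                /* only reached if exec fails */
        exit(1);
    } else {                             /* parent: collect the child's exit status */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}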
Zombie process
a child process that has terminated but whose parent has not yet called wait(). The process stays in the process table but uses no other resources.
IPC
Interprocess communication, either via direct communication (the sender and receiver address each other explicitly) or indirect communication (messages are sent to and read from a shared mailbox/port).
Blocking
synchronous message sending. The sender process is blocked until the message has been received, and the receiver process is blocked until the message is available.
Non-blocking
asynchronous message passing. The sender sends the message and continues without waiting, and the receiver retrieves either a valid message or NULL if none is available.
Pipes
an IPC system which allows senders and receivers to communicate via indirect communication.
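A small sketch of an ordinary (anonymous) POSIX pipe between a parent and a child; the message text is only illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) < 0) { perror("pipe"); exit(1); }

    if (fork() == 0) {                /* child: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        exit(0);
    }

    close(fd[1]);                     /* parent: reads from the pipe */
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}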
CPU cycle
Start, fetch new instruction, execute instruction, then either (1) HALT or (2) repeat the fetch-execute cycle.
Thread
a lightweight process which carries out instructions for a process.
Multi-threading
Execution model allowing multiple threads to exist within one process. These threads share resources but are able to execute independently.
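A minimal pthreads sketch of this model, assuming POSIX threads (compile with -pthread); the thread count and ids are illustrative:

#include <stdio.h>
#include <pthread.h>

/* Each thread runs work() independently but shares the process's memory. */
static void *work(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running in the same address space\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, work, &ids[i]);   /* spawn the threads */
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);                     /* wait for them to finish */
    return 0;
}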
Multi-threaded server architecture
(1) Client makes a request to the server. (2) Server creates a thread to service the client request. (3) Server resumes listening for additional client requests.
Parallelism
allowing multiple threads to execute in parallel. Two types of parallelism: data parallelism (distributes subsets of the same data across multiple cores) and task parallelism (distributes threads across cores, each thread performing a unique operation).
Concurrency
multiple threads are not necessarily executed in parallel, but all make progress during the same time period (e.g. by interleaving on a single core).
Amdahl’s Law
Identifies performance gains from adding additional cores to an application. Speedup <= 1/(s + (1-s)/N), where s is the serial portion and N the number of cores.
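As an illustrative calculation: with a serial portion s = 0.25 and N = 4 cores, speedup <= 1/(0.25 + 0.75/4), which is about 2.29, so quadrupling the cores yields well under a 4x speedup.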
Many-to-one multithreading model:
many user level threads mapped to a single kernel thread.
One-to-one multi-threading model:
one user level thread mapped to a single kernel thread.
Many-to-many multi-threading model
many user level threads mapped to multiple kernel threads.
Preemptive
the CPU is allocated to a process for a limited time.
Non-preemptive
the CPU is allocated to a process until it terminates or voluntarily releases the CPU (e.g. by switching to the waiting state).
Turnaround
the time between the submission of a process until its termination.
FIFO
a non-preemptive scheduling algorithm which prioritizes processes based on arrival time.
SJF
a non-preemptive scheduling algorithm which prioritizes processes based on shortest burst time.
SRT
a preemptive scheduling algorithm which prioritizes processes based on shortest time remaining until termination.
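As an illustrative comparison (made-up burst times): P1, P2 and P3 arrive at time 0 with burst times 6, 3 and 1. Under FIFO the turnaround times are 6, 9 and 10 (average about 8.3); under SJF the run order becomes P3, P2, P1 with turnaround times 1, 4 and 10 (average 5).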
CPU core
logical execution unit of CPU. A CPU may have multiple cores.
Gang scheduling
groups of threads are scheduled as a gang with the same quantum.
Critical section
a segment of code in which a process accesses shared resources (e.g. shared data); no two processes should execute their critical sections at the same time.
Mutual Exclusion
only one process/thread is allowed to execute its critical section at a time.
Progress
if no process is executing in its critical section, the decision of which process may enter next cannot be postponed indefinitely.
Fairness
all processes requesting access to the resource eventually get it (no starvation); waiting time is bounded.
Safety
No deadlock, and the critical section is protected.
Hold & wait
One of the necessary conditions for deadlock: a process holds at least one resource while waiting to acquire additional resources held by other processes.
Circular wait
One of the necessary conditions for deadlock: a circular chain of processes, each waiting for a resource held by the next process in the chain.
Peterson’s Algorithm
algorithm for mutual exclusion between two threads using shared memory for communication. Uses two shared variables: a turn variable and a per-thread flag marking that the thread wants to enter its critical section; a thread waits while the other thread's flag is set and it is the other thread's turn.
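A sketch of Peterson's algorithm for two threads with ids 0 and 1; memory-ordering details (barriers/atomics needed on modern hardware) are deliberately omitted:

/* Shared state for two threads (ids 0 and 1). */
int flag[2] = {0, 0};   /* flag[i] = 1 means thread i wants to enter its CS */
int turn = 0;           /* whose turn it is to yield */

void enter_cs(int i) {
    int other = 1 - i;
    flag[i] = 1;                          /* announce interest */
    turn = other;                         /* give priority to the other thread */
    while (flag[other] && turn == other)  /* busy-wait while the other thread is
                                             interested and it is its turn */
        ;
}

void exit_cs(int i) {
    flag[i] = 0;                          /* no longer interested */
}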
Bounded producer-consumer buffer
The producer of items must wait if the buffer is full. The consumer of items must wait if the buffer is empty. Solved with Peterson’s algorithm.
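One common way to sketch the bounded buffer uses counting semaphores plus a mutex (an alternative to the Peterson's-based solution mentioned above; BUF_SIZE and the int item type are illustrative, and sem_init must be called before use):

#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8

int buffer[BUF_SIZE];
int in = 0, out = 0;

sem_t empty_slots;          /* counts free slots;   sem_init(&empty_slots, 0, BUF_SIZE) */
sem_t full_slots;           /* counts filled slots; sem_init(&full_slots, 0, 0) */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void produce(int item) {
    sem_wait(&empty_slots);            /* producer waits if the buffer is full */
    pthread_mutex_lock(&lock);
    buffer[in] = item;
    in = (in + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&full_slots);
}

int consume(void) {
    sem_wait(&full_slots);             /* consumer waits if the buffer is empty */
    pthread_mutex_lock(&lock);
    int item = buffer[out];
    out = (out + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&empty_slots);
    return item;
}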
Deadlock
a set of processes blocking each other so none of them can proceed. Requires (1) mutual exclusion, (2) hold & wait, (3) no preemption and (4) circular wait.
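A sketch of how hold & wait plus circular wait lead to deadlock with two pthread mutexes (the lock names are illustrative):

#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *thread_1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);   /* holds A ... */
    pthread_mutex_lock(&B);   /* ... and waits for B */
    /* critical section */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *thread_2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&B);   /* holds B ... */
    pthread_mutex_lock(&A);   /* ... and waits for A: circular wait, so deadlock */
    /* critical section */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}
/* Acquiring both locks in the same order in every thread (eliminating
   circular wait) prevents this deadlock. */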
Deadlock prevention
eliminate deadlock by removing 1 of the 4 requirements of deadlock.
Eliminating circular wait
means imposing an ordering on resources so that all processes request them in the same order and no cycle of waiting processes can form.
Eliminating non-preemption
means allowing preemption: a process that requests a resource which cannot be immediately allocated must release the resources it currently holds.
Eliminating hold & wait
means a process must request all the resources it needs at once, before it starts executing.
Lamport’s algorithm
Lamport's bakery algorithm for mutual exclusion: each process takes a numbered ticket before entering its critical section, and the process holding the lowest ticket number enters first.
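A sketch of Lamport's bakery ("ticket") algorithm for N threads; as with Peterson's algorithm, memory-ordering concerns are ignored and N is illustrative:

#define N 4                      /* number of threads (illustrative) */

int choosing[N] = {0};           /* choosing[i] = 1 while thread i picks a ticket */
int ticket[N]   = {0};           /* 0 means "not interested" */

void enter_cs(int i) {
    choosing[i] = 1;
    int max = 0;                              /* take a ticket larger than any seen */
    for (int j = 0; j < N; j++)
        if (ticket[j] > max) max = ticket[j];
    ticket[i] = max + 1;
    choosing[i] = 0;

    for (int j = 0; j < N; j++) {
        while (choosing[j])                   /* wait until thread j has its ticket */
            ;
        while (ticket[j] != 0 &&              /* lower ticket (or equal ticket with
                                                 lower id) enters first */
               (ticket[j] < ticket[i] ||
                (ticket[j] == ticket[i] && j < i)))
            ;
    }
}

void exit_cs(int i) {
    ticket[i] = 0;                            /* give up the ticket */
}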