Chapters 1 - 5 Flashcards
Remember that [blank(1)] is just a file that organizes file system mount information in a table.
The [blank(2)] document is maintained by the Linux community as a means of ensuring compatibility across the various system components. This standard specifies the overall layout of a standard Linux file system as well as how to organize directories in [blank(2)]-compliant ways. It determines under which directory names configuration files, libraries, system binaries, and run-time data files should be stored. Additionally, the [blank(2)] document gives insight into the standards behind different [blank(1)] layouts.
1. filesystem table (fstab)
2. Filesystem Hierarchy Standard (FHS)
What operating system service is related to ensuring the efficient operation of the system and is unrelated to providing services to user programs?
Resource Allocation
__________ provide an interface to the services made available by an operating system.
System calls
Each system call has a number, and that number is used as an index in the _______________, to invoke the appropriate routine.
System call table
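A minimal sketch of this idea on Linux, using glibc's syscall() wrapper: SYS_write is simply the index used to look up the write routine in the kernel's system call table (the file descriptor and message are illustrative).

```c
/* Minimal sketch (Linux/glibc): invoking a system call by its number. */
#define _GNU_SOURCE
#include <sys/syscall.h>   /* SYS_write: the index into the system call table */
#include <unistd.h>        /* syscall() */

int main(void) {
    const char msg[] = "hello via syscall\n";
    /* Equivalent to write(1, msg, sizeof msg - 1), but dispatched by number. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```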
System service:
A collection of applications included with or added to an operating system to provide services beyond those provided by the kernel.
Registry:
A file, set of files, or service used to store and retrieve configuration information. In Windows, the manager of hives of data.
System utility:
A collection of applications included with or added to an operating system to provide services beyond those provided by the kernel.
Application program:
A program designed for end-user execution, such as a word processor, spreadsheet, compiler, or Web browser.
SRT
Shortest Remaining Time (preemptive: whenever a process arrives or a process finishes, switch to the process with the shortest remaining CPU time)
RR Scheduling
Round Robin: each process receives one quantum (Q) per turn, plus a context-switch overhead (S) per move to the next process.
What are the top three most valuable chapters?
Top three most important concepts
Top three hardest concepts
What do you usually tell students?
Remember these resources:
NTFS Linux required additional material
course announcement study material
lab
quizzes
study guide
quizzet.com
(in message passing) Messages are sent and received using
system calls; waiting for kernel intervention can slow down the performance of message passing.
Shared memory only requires an initial
system call for establishing the shared memory segment. Once the segment has been created, accessing the shared memory is performed in user mode and requires no kernel intervention.
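A minimal sketch using POSIX shared memory (shm_open/mmap); the segment name and size are illustrative, and older glibc versions need -lrt at link time. Only the setup and teardown are system calls; the write into the mapped region is an ordinary user-mode memory access.

```c
/* Minimal sketch: set up a POSIX shared memory segment, then access it
 * as plain memory with no further kernel intervention. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";              /* hypothetical segment name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) return 1;
    if (ftruncate(fd, 4096) == -1) return 1;     /* size the segment */

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) return 1;

    strcpy(region, "written without further system calls");  /* user-mode access */

    munmap(region, 4096);
    close(fd);
    shm_unlink(name);                            /* remove the segment */
    return 0;
}
```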
To determine if a bounded buffer is empty, you can perform the following test:
Check if the buffer’s count or size is zero: If the buffer has a count or size variable that keeps track of the number of elements currently in the buffer, you can test if this count is zero. If the count is zero, it means the buffer is empty.
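A minimal sketch of that test, assuming a bounded buffer that tracks its occupancy in a count field (the capacity and item type are illustrative).

```c
/* Minimal sketch: a bounded buffer is empty exactly when count == 0. */
#define BUFFER_SIZE 8

struct bounded_buffer {
    int items[BUFFER_SIZE];
    int in;      /* next free slot */
    int out;     /* next filled slot */
    int count;   /* number of items currently in the buffer */
};

int buffer_is_empty(const struct bounded_buffer *b) {
    return b->count == 0;
}

int buffer_is_full(const struct bounded_buffer *b) {
    return b->count == BUFFER_SIZE;
}
```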
Typically, a rendezvous involves two processes
(often referred to as the sender and receiver) that need to synchronize their execution. The sender process waits until the receiver process is ready, and vice versa.
Message passing on Windows systems is known as an _________, which allows two processes on the same machine to communicate.
Message passing on Windows systems is known as an advanced local procedure call (ALPC), which allows two processes on the same machine to communicate.
__________ pipes are more powerful than __________or _________ pipes, including allowing several processes to use the pipe for communication.
Named pipes are more powerful than ordinary or anonymous pipes, including allowing several processes to use the pipe for communication.
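A minimal sketch of the writer side of a named pipe on a POSIX system (the FIFO path is illustrative); any process that knows the path can open the same FIFO, which is what lets several processes use it for communication.

```c
/* Minimal sketch: create a named pipe (FIFO) and write to it. */
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";     /* hypothetical FIFO name */
    mkfifo(path, 0600);                      /* create the named pipe */

    int fd = open(path, O_WRONLY);           /* blocks until some reader opens it */
    if (fd == -1) return 1;
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}
```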
Concurrent execution on a single-core system.
Concurrency means all threads make progress on a single-core system as each thread gets to run for a short period of time on the single processing core.
Parallel execution on a dual-core system.
Parallelism allows two threads to run at the same time as each thread runs on a separate processing core.
Task parallelism involves
distributing not data but tasks (threads) across multiple computing cores.
A parallel system allows
multiple tasks to run at the same time. Parallel systems require more than one CPU core.
A concurrent system allows
multiple tasks to make progress, but there are no guarantees more than one task can run at a time.
The term “multithreaded” refers to
a program in which several activities are to be performed concurrently.
data parallelism
A computing method that distributes subsets of the same data across multiple cores and performs the same operation on each core.
In a system with deferred cancellation, a running thread periodically
checks for a cancellation request or a termination condition.
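A minimal sketch with POSIX threads, where pthread_testcancel() is the periodic check described above (the work loop and sleep interval are illustrative).

```c
/* Minimal sketch: deferred cancellation with POSIX threads. */
#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg) {
    int old;
    (void)arg;
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old);  /* the default type */
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   /* cancellation point: honor any pending request */
        usleep(1000);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);          /* request cancellation */
    pthread_join(t, NULL);      /* worker exits at its next cancellation point */
    return 0;
}
```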
Thread-local storage allows a thread to
have data that is not accessible to other threads belonging to the same process
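A minimal sketch using the C11 _Thread_local storage class (GCC's __thread behaves the same way); the counter variable is illustrative.

```c
/* Minimal sketch: each thread gets its own copy of `counter`. */
#include <pthread.h>
#include <stdio.h>

static _Thread_local int counter = 0;        /* one instance per thread */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++)
        counter++;                           /* touches only this thread's copy */
    printf("worker: counter = %d\n", counter);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("main: counter = %d\n", counter); /* still 0 in the main thread */
    return 0;
}
```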
signal: In __________ and other operating systems, a means used to __________
signal: In UNIX and other operating systems, a means used to notify a process that an event has occurred.
Multiprocessor systems, also known as parallel computing systems, consist of
multiple processors working together to perform computational tasks simultaneously.
A Lightweight Process (LWP) is a virtual processor-like data structure that
provides a mapping between a user thread and a kernel thread in an operating system
Context switching refers to the process of
saving the current state of a running process or thread (known as the “context”) and restoring the saved state of another process or thread to continue its execution.
long-term scheduling is to short-term scheduling what suspended list is to
waiting list
A non-preemptive scheduling algorithm allows
a running process to continue until the process terminates or blocks on a resource.
A preemptive scheduling algorithm may
stop the currently running process and choose another process to run. The decision is made whenever:
A new process enters the ready list.
A previously blocked or suspended process re-enters the RL.
The OS periodically interrupts the currently running process to give other processes a chance to run.
A non-preemptive decision is made
only when the currently running process terminates or blocks, not when a process enters the RL.
Total CPU time
The amount of CPU time the process will consume between arrival and departure. For short-term scheduling, total CPU time is sometimes called the CPU burst.
a periodic process refers to a task or activity that
repeats at regular intervals or follows a predictable pattern.
The total CPU time (attained) under long-term scheduling accumulates from
process creation until process destruction. For short-term scheduling, it restarts every time the process stops running.
SJF is non-preemptive.
This means it does not interrupt a running process. SJF, also called SJN (Shortest Job Next), schedules processes according to their total CPU time requirements.
SRT is the __________ version of SJF.
preemptive
A time quantum, Q, is a
small amount of time (typically 10 to 100 milliseconds) during which a process is allowed to use the CPU.
The round-robin (RR) algorithm uses a
single queue of processes. The priority is determined solely by a process’s position within the queue. The process at the head of the queue has the highest priority and is allowed to run for Q time units. When Q ends, the process is moved to the tail of the queue and the next process now at the head of the queue is allowed to run for Q time units.
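A minimal sketch of this single-queue behavior (the process CPU demands, queue capacity, and quantum value are made up for illustration): each process runs for at most Q units at the head of the queue and, if unfinished, rejoins at the tail.

```c
/* Minimal sketch: round-robin scheduling over a single FIFO queue. */
#include <stdio.h>

#define Q 3   /* quantum, in arbitrary time units */

int main(void) {
    int remaining[] = {7, 4, 2};          /* remaining CPU time per process */
    const int n = 3;
    int queue[16], head = 0, tail = 0;

    for (int i = 0; i < n; i++) queue[tail++ % 16] = i;   /* initial ready queue */

    int t = 0;
    while (head != tail) {
        int p = queue[head++ % 16];                 /* process at the head runs */
        int slice = remaining[p] < Q ? remaining[p] : Q;
        t += slice;
        remaining[p] -= slice;
        printf("t=%2d  P%d ran %d unit(s), %d left\n", t, p, slice, remaining[p]);
        if (remaining[p] > 0) queue[tail++ % 16] = p;   /* back to the tail */
    }
    return 0;
}
```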
Multilevel (ML) scheduling maintains a
separate queue of processes at each priority level. Within each level, processes are scheduled using RR. Processes at a given level can run only if all queues at higher levels are empty.
Under the multilevel feedback (MLF) algorithm a newly arriving process enters the highest-priority queue, N, and is allowed to run for Q time units. When Q is exceeded, the process is moved to
the next lower priority queue, N-1, and is allowed to run for 2Q time units. At the lowest priority level, the running time is unlimited.
The response time of a process is the elapsed time from
the submission of a request (pressing the Enter key or clicking a mouse button) until the response begins to arrive.
In MLF, the lower the level, the higher the ________
Q allotment
A period is a
time interval (typically in milliseconds or even microseconds) within which each input item must be processed. The end of each period is the implicit deadline for processing the current item.
The rate monotonic (RM) algorithm schedules processes according to
the period. The shorter the period, the higher the priority.
RM is preemptive.
The earliest deadline first (EDF) algorithm schedules processes according to
the shortest remaining time until the deadline. The shorter the remaining time, the higher the priority.
A schedule is feasible if
the deadlines of all processes can be met.
The CPU utilization (U) is the sum of
the individual fractions of CPU times used by each process.
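A compact way to write this, assuming the common textbook notation where C_i is the CPU time process i needs in each period and T_i is the length of that period:

```latex
U \;=\; \sum_{i=1}^{n} \frac{C_i}{T_i}
```

Standard results connect U to feasibility: EDF can meet every deadline whenever U <= 1, while the rate-monotonic guarantee uses the more conservative bound U <= n(2^{1/n} - 1).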
Real-time processes are very short but all must execute before
any interactive or batch process is scheduled.
Processor affinity means a thread may run on
only one processor. By running on a specific processor, a thread can take advantage of that processor’s cache memory.
With coarse-grained multithreading, a thread executes on a core until
a long-latency event such as a memory stall occurs
Fine-grained (or interleaved) multithreading switches between threads at a much finer level of granularity—typically at
the boundary of an instruction cycle. However, the architectural design of fine-grained systems includes logic for thread switching. As a result, the cost of switching between threads is small.
Asymmetric multiprocessing is simple because
only one core accesses the system data structures, reducing the need for data sharing.
Load balancing forces threads to __________ in order to distribute the workload. With push migration, a task checks the __________ and __________ to ensure balance. With pull migration, an idle processor takes work from a busy processor.
Load balancing forces threads to migrate in order to distribute the workload. With push migration, a task checks the state of processors and moves the load to ensure balance. With pull migration, an idle processor takes work from a busy processor.
Timing Requirements: Real-time systems have specific timing constraints that must be met. They can be categorized into two main types (explain each)
Hard Real-Time: Tasks or processes have strict and immovable deadlines. Failure to meet these deadlines can result in catastrophic consequences, such as system failures, safety hazards, or financial losses.
Soft Real-Time: Tasks or processes have timing constraints, but missing occasional deadlines does not necessarily lead to catastrophic failures. However, timely response is still desirable to maintain system performance and effectiveness.
rate-monotonic: The rate-monotonic scheduling algorithm schedules periodic tasks using a
static priority policy with preemption.
Ports that a client can use
Ports above 1023, i.e., outside the well-known port range (0–1023)
User space processes run in a restricted environment and interact with the operating system through
system calls, which are functions provided by the kernel.
Rotational Latency
It refers to the time it takes for the desired data sector on a rotating disk to rotate under the read/write head.
CLV (Constant Linear Velocity): refers to a method of data storage where the disk rotates at
a varying speed, faster for inner tracks and slower for outer tracks, so that the linear velocity of the disk surface under the read/write head stays constant regardless of the radial position. This allows for a consistent data transfer rate across the entire surface of the disk.
CAV (Constant Angular Velocity): the disk rotates at a constant speed, and the read/write head
maintains a constant angular velocity as it moves across the surface of the disk. This means that the linear velocity of the head varies depending on the radial position on the disk. Unlike CLV, the data transfer rate is not constant in CAV, as the outer tracks have a higher linear velocity and therefore a higher data transfer rate compared to the inner tracks.
Head Crash: A head crash refers to a
mechanical failure that occurs in a hard disk drive when the read/write head comes into physical contact with the rotating platters, usually due to a mechanical or operational problem. This collision can cause damage to the magnetic surface of the disk, leading to data loss or corruption. Head crashes can be caused by factors such as sudden impacts, shocks, manufacturing defects, wear and tear, or improper handling of the drive. They are typically serious issues and can result in permanent data loss if not addressed properly.
CLV is commonly used in optical storage systems like __________, while CAV is used in certain types of __________ and some __________. In modern HDDs, more advanced techniques such as Zone Bit Recording (ZBR) or Perpendicular Magnetic Recording (PMR) are employed to optimize data storage and access performance.
CLV is commonly used in optical storage systems like CDs and DVDs, while CAV is used in certain types of hard disk drives (HDDs) and some older magnetic tape systems. In modern HDDs, more advanced techniques such as Zone Bit Recording (ZBR) or Perpendicular Magnetic Recording (PMR) are employed to optimize data storage and access performance.
Constant Linear Velocity (CLV) has a consistent __________
while CAV has a consistent __________
Consistent Data Transfer Rate: CLV ensures a consistent data transfer rate across the entire surface of the disk. This can be advantageous for applications that require a predictable and uniform data access speed.
In CAV, the disk rotates at a constant rate, so the angular velocity under the read/write head is the same regardless of the radial position. This keeps the time per revolution, and therefore the rotational latency, constant across the disk.
Simplicity: CAV is a simpler method compared to CLV as it does not require complex mechanisms for dynamically adjusting the linear velocity.
Mutual exclusion
Only one process may be executing within the CS (critical section)
Lockout
A process not attempting to enter the CS must not prevent other processes from entering the CS (critical section)
Starvation
A process (or a group of processes) must not be able to repeatedly enter the CS while other processes are waiting to enter.
Deadlock
Multiple processes trying to enter the CS at the same time must not block each other indefinitely.
(T/F) A buffer is a CS
False. A critical section is a segment of code; the buffer is a data structure.
Interleaving is a tool that is used to enhance existing
error correcting codes so that they can be used to perform burst error correction as well.
Most error correcting codes (ECCs) are designed to correct random errors, i.e., errors caused by additive noise that are independent of each other. Burst errors are errors that occur in a sequence or as groups. They are caused by defects in storage media or by disruption of communication signals due to external factors such as lightning. Interleaving modifies the ECC or does some processing on the data after it is encoded by the ECC.
A race condition may occur if commands to read and write a large amount of data are received at
almost the same instant, and the machine attempts to overwrite some or all of the old data while that old data is still being read.
Operations related to mutex locks (mutually exclusive locks)
acquire() release()
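A minimal sketch of acquire()/release() using a POSIX mutex, where pthread_mutex_lock plays the role of acquire() and pthread_mutex_unlock the role of release() (the shared counter and iteration count are illustrative).

```c
/* Minimal sketch: a mutex protecting a shared counter. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* acquire(): enter the critical section */
        shared_counter++;
        pthread_mutex_unlock(&lock);   /* release(): leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", shared_counter);   /* always 200000 */
    return 0;
}
```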
A semaphore s is a
non-negative integer variable that can be accessed using only two special operations, P and V.
V(s): increment s by 1
P(s): if s > 0, decrement s by 1, otherwise wait until s > 0
Overall, semaphores provide a mechanism for coordinating and synchronizing concurrent activities, enabling efficient resource sharing and avoiding conflicts in multi-threaded or multi-process systems
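A minimal sketch of P and V using POSIX semaphores, where sem_wait corresponds to P and sem_post to V; initializing the value to 1 makes the semaphore behave like a mutex (the shared variable is illustrative).

```c
/* Minimal sketch: P/V with a POSIX semaphore guarding a critical section. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t s;
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    sem_wait(&s);    /* P(s): decrement, or block while s is 0 */
    shared++;        /* critical section */
    sem_post(&s);    /* V(s): increment s by 1 */
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);          /* not shared between processes, initial value 1 */
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&s);
    return 0;
}
```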
The semaphore mutex can only have the values 0 or 1.
T/F
True
When each process runs on a separate CPU, mutual exclusion cannot be guaranteed. T/F
False
The implementation of P and V operations must guarantee that executing multiple P and/or V operations simultaneously produces the same result as executing the operations in sequence.
Each CPU has its own cache and memory subsystem, which allows for independent execution and control of the processes.
In a multi-CPU system, the mutex implementation typically includes atomic operations and memory barriers to ensure correct synchronization and mutual exclusion. When a process tries to acquire the mutex, it performs atomic operations that ensure that only one process can successfully acquire the mutex and enter the critical section while other processes are blocked.
When the producer and the consumer run at highly varying speeds, the buffer should consist of a _____ number of slots.
Large
A large number of slots is beneficial to catch a burst of data items produced when the producer runs far ahead of the consumer, and vice versa.
The test-and-set instruction (TS) copies a variable into a
register and sets the variable to zero in one indivisible machine cycle. Test-and-set has the form TS(R, x) where R is a register and x is a memory location and performs the following operations:
Copy x into R
Set x to 0
If R = 0 and x = 1, then after executing TS(R, x) the values become
TS copies x = 1 into R and sets x to 0. R = 1, x = 0
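A minimal spinlock sketch built on exactly these TS semantics (x = 1 means the lock is free); C11's atomic_exchange stands in for the indivisible TS instruction.

```c
/* Minimal sketch: a spinlock using test-and-set semantics. */
#include <stdatomic.h>

static atomic_int x = 1;                  /* 1 = lock free, 0 = lock held */

void acquire(void) {
    int R;
    do {
        R = atomic_exchange(&x, 0);       /* TS(R, x): copy x into R, set x to 0 */
    } while (R == 0);                     /* lock was already held; retry */
}

void release(void) {
    atomic_store(&x, 1);                  /* mark the lock free again */
}
```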
Placing a data item into the buffer takes 10 ms when executed in isolation. Similarly, the removal of a data item takes 10 ms.
When the producer and the consumer run concurrently on one CPU and the producer begins filling the buffer at time t, the consumer will finish emptying the buffer at time
While the producer is placing data into the buffer, the consumer is busy-waiting, thus doubling the time to 20 ms. The buffer will be filled at time t + 20. Then, analogously, the time for emptying the buffer is doubled to 20 ms. So the consumer will finish emptying the buffer at time t + 20 + 20 = t + 40 ms.
What is a high-level synchronization primitive?
High-level synchronization primitives abstract away the low-level details of synchronization mechanisms and provide a more structured and easier-to-use approach to managing concurrency.
A monitor is a
high-level synchronization primitive
The monitor implementation guarantees mutual exclusion. Only one process may be executing inside the monitor.
(monitor) c.signal blocks the calling process only when there is
another process waiting in the queue associated with c. When the queue is empty, c.signal has no effect since no process needs to be reactivated.
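For comparison, POSIX condition variables share this property: pthread_cond_signal on an empty wait queue has no effect. Unlike the Hoare-style c.signal described above, though, the POSIX call never blocks the signaler. A minimal sketch (the ready flag is illustrative):

```c
/* Minimal sketch: signaling a condition with no waiters is a no-op. */
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int ready = 0;

void produce(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&c);   /* no effect if the wait queue of c is empty */
    pthread_mutex_unlock(&m);
}

void consume(void) {
    pthread_mutex_lock(&m);
    while (!ready)             /* re-check the condition after every wakeup */
        pthread_cond_wait(&c, &m);
    ready = 0;
    pthread_mutex_unlock(&m);
}
```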
An exiting writer signals all readers currently on the ok_to_read queue (T/F)
False. An exiting writer signals only the first reader, which then signals the next reader, etc.
Only the last reader leaving the CS signals the next writer.
In Peterson’s solution, flag[i] is an
array element that represents the intent of process i to enter a critical section in a concurrent programming scenario with two processes.
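A minimal sketch of Peterson's solution for two processes, showing how flag[i] and turn are used together; volatile keeps the sketch simple, though real code on modern hardware would need atomics or memory fences.

```c
/* Minimal sketch: Peterson's solution for two processes (i = 0 or 1). */
#include <stdbool.h>

static volatile bool flag[2] = { false, false };  /* flag[i]: i wants to enter */
static volatile int  turn = 0;                    /* whose turn it is to wait */

void enter_region(int i) {           /* i is 0 or 1; j is the other process */
    int j = 1 - i;
    flag[i] = true;                  /* announce intent to enter the CS */
    turn = j;                        /* give the other process priority */
    while (flag[j] && turn == j)
        ;                            /* busy-wait while the other has priority */
}

void leave_region(int i) {
    flag[i] = false;                 /* withdraw intent; the other may proceed */
}
```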