Processor Management Flashcards
Central to the design of operating systems is the concept of _____
process
This term was
first used by the designers of Multics in the 1960s [DALE68]. It is a somewhat
more general term than job. Many definitions have been given for the term ____ ,
including
* A program in execution
* An instance of a program running on a computer
* The entity that can be assigned to and executed on a processor
* A unit of activity characterized by a single sequential thread of execution, a
current state, and an associated set of system resources
process
Enumerate the process states
- New
- Running
- Waiting
- Ready
- Terminated
Process state
The process is being created
New
Process state
Instructions are being executed
Running
Process state
The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Waiting
Process state
The process is waiting to be assigned to a processor.
Ready
Process state
The process has finished execution.
Terminated
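The five states above can be sketched in code; the following is an illustrative Python model (the state names follow the cards, while the transition table and helper function are assumptions added only for illustration):

```python
from enum import Enum, auto

class ProcessState(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Illustrative transition table: admitted, dispatched, interrupted,
# blocked on I/O or an event, event completion, and exit.
TRANSITIONS = {
    ProcessState.NEW: {ProcessState.READY},
    ProcessState.READY: {ProcessState.RUNNING},
    ProcessState.RUNNING: {ProcessState.READY, ProcessState.WAITING, ProcessState.TERMINATED},
    ProcessState.WAITING: {ProcessState.READY},
    ProcessState.TERMINATED: set(),
}

def change_state(current, new):
    """Refuse any transition that is not in the table above."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {new.name}")
    return new

state = ProcessState.NEW
state = change_state(state, ProcessState.READY)    # admitted
state = change_state(state, ProcessState.RUNNING)  # dispatched
state = change_state(state, ProcessState.WAITING)  # blocked on I/O
```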
Three major lines of computer system development created problems in timing and synchronization that contributed to the development of the process concept:
multiprogramming batch operation, time sharing, and real-time transaction processing
Development of the Process
designed to keep the processor
and I/O devices, including storage devices, simultaneously busy to achieve maximum
efficiency. The key mechanism is this: In response to signals indicating the
completion of I/O transactions, the processor is switched among the various programs
residing in main memory.
multiprogramming batch operation
Development of the Process
key design objective is to be responsive to the needs of the individual user and yet,
for cost reasons, be able to support many users simultaneously. These goals are
compatible because of the relatively slow reaction time of the user. For example,
if a typical user needs an average of 2 seconds of processing time per minute, then
close to 30 such users should be able to share the same system without noticeable
interference. Of course, OS overhead must be factored into such calculations.
general-purpose time sharing
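The 2-seconds-per-minute example above is simple arithmetic: 60 seconds of processor time per minute divided by 2 seconds per user gives about 30 users. A small sketch, with an assumed 10 percent OS overhead figure added only to show how overhead shrinks that bound:

```python
def max_concurrent_users(demand_per_minute_s=2.0, overhead_fraction=0.10):
    """Rough upper bound on time-sharing users for a single processor."""
    usable_seconds = 60.0 * (1.0 - overhead_fraction)  # processor time left after OS overhead
    return int(usable_seconds // demand_per_minute_s)

print(max_concurrent_users())                        # 27 users once 10% overhead is assumed
print(max_concurrent_users(overhead_fraction=0.0))   # 30 users in the idealized case
```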
Development of the Process
In this case, a number of users are entering queries or updates against a
database. An example is an airline reservation system. The key difference between
the transaction processing system and the time-sharing system is that the former
is limited to one or a few applications, whereas users of a time-sharing system can
engage in program development, job execution, and the use of various applications.
In both cases, system response time is paramount.
real-time transaction processing
Enumerate the causes of errors
improper synchronization, nondeterminate program operation, failed mutual exclusion, deadlocks
cause of errors
- a program must wait until the data are available in a buffer
- improper design of the signaling mechanism can result in loss or duplication
improper synchronization
cause of errors
- more than one user or program attempts to make use of a shared resource at the same time
- only one routine at a time allowed to perform an update against the file
failed mutual exclusion
cause of errors
- program execution is interleaved by the processor when memory is shared
- the order in which programs are scheduled may affect their outcome
nondeterminate program operation
cause of errors
- it is possible for two or more programs to be hung up waiting for each other
- may depend on the chance timing of resource allocation and release
deadlocks
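The failed-mutual-exclusion card can be illustrated with a short Python threading sketch (the counter, thread count, and iteration count are made-up values; depending on the interpreter the unprotected run may lose only a few updates or none on a given try, but the hazard is structural):

```python
import threading

counter = 0
lock = threading.Lock()

def update(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:          # mutual exclusion: only one updater at a time
                counter += 1
        else:
            counter += 1        # unprotected read-modify-write; updates can be lost

def run(use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=update, args=(100_000, use_lock)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter

print("with lock:   ", run(True))    # always 400000
print("without lock:", run(False))   # may fall short of 400000 when updates interleave
```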
components of a process
A process contains three components:
enumerate
- an executable program
- the associated data needed by the program (variables, work space, buffers, etc.)
- the execution context (or “process state”) of the program
components of a process
The execution context is essential:
enumerate
- it is the internal data by which the OS is able to supervise and control the process
- includes the contents of the various process registers
- includes information such as the priority of the process and whether the process is waiting for the completion of a particular I/O event
process control block
process control blocks
enumerate
- process state
- program counter
- CPU registers
- CPU-scheduling information
- memory-management information
- accounting information
- I/O status information
process control block
The state may be new, ready, running, waiting, halted, and so on.
process state
process control block
The counter indicates the address of the next instruction to be executed for this process.
program counter
process control block
The registers vary in number and type, depending on the computer architecture.
CPU registers
process control block
Includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
CPU-scheduling information
process control block
May include the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
memory-management information
process control block
Includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
accounting information
process control block
Includes the list of I/O devices allocated to this process, the list of open files, and so on.
I/O status information
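Taken together, the fields above form one record per process. A minimal sketch, assuming a Python dataclass whose field names simply mirror the cards (a real OS keeps this structure in protected kernel memory, not in application code):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProcessControlBlock:
    pid: int
    state: str = "new"                  # new, ready, running, waiting, terminated
    program_counter: int = 0            # address of the next instruction to execute
    cpu_registers: Dict[str, int] = field(default_factory=dict)
    priority: int = 0                   # CPU-scheduling information
    base_register: int = 0              # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0          # accounting information
    account_number: str = ""
    open_files: List[str] = field(default_factory=list)         # I/O status information
    allocated_devices: List[str] = field(default_factory=list)

pcb = ProcessControlBlock(pid=42, priority=3)
pcb.state = "ready"
```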
is the low-level scheduler that assigns the CPU to execute the processes of those jobs placed on the READY queue by the Job Scheduler.
process scheduler
process scheduling policies
enumerate
- Maximize throughput
- Minimize response time
- Minimize turnaround time
- Minimize waiting time
- Maximize CPU efficiency
- Ensure fairness for all jobs
process scheduling policies
Run as many jobs as possible in a given amount of time
Maximize throughput
process scheduling policies
Quickly turn around interactive requests
Minimize response time
process scheduling policies
Move entire jobs in and out of the system quickly
Minimize turnaround time
process scheduling policies
Move jobs out of the READY queue as quickly as possible
Minimize waiting time
process scheduling policies
Keep the CPU busy 100 percent of the time
Maximize CPU efficiency
process scheduling policies
Give everyone an equal amount of CPU and I/O time.
Ensure fairness for all jobs
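These policies conflict, so schedulers are compared with measurable quantities. A sketch of how the usual measures could be computed from a finished schedule (the arrival, start, finish, and burst values are made-up illustration data):

```python
def schedule_metrics(jobs):
    """jobs: list of dicts with arrival, start, finish times and CPU burst length."""
    span = max(j["finish"] for j in jobs) - min(j["arrival"] for j in jobs)
    throughput = len(jobs) / span                           # jobs completed per time unit
    turnaround = [j["finish"] - j["arrival"] for j in jobs]
    waiting    = [t - j["burst"] for t, j in zip(turnaround, jobs)]
    response   = [j["start"] - j["arrival"] for j in jobs]
    return {
        "throughput": throughput,
        "avg_turnaround": sum(turnaround) / len(jobs),
        "avg_waiting": sum(waiting) / len(jobs),
        "avg_response": sum(response) / len(jobs),
    }

jobs = [
    {"arrival": 0, "start": 0, "finish": 6,  "burst": 6},
    {"arrival": 1, "start": 6, "finish": 9,  "burst": 3},
    {"arrival": 2, "start": 9, "finish": 16, "burst": 7},
]
print(schedule_metrics(jobs))
```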
is a non-preemptive scheduling algorithm that handles jobs according to their arrival time: the earlier they arrive, the sooner they’re served. It’s a very simple algorithm to implement because it uses a FIFO queue.
This algorithm is fine for most batch systems, but it is unacceptable for interactive systems because interactive users expect quick response times.
In a strictly FCFS system there are no WAIT queues (each job is run to completion).
First-Come, First-Served
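A minimal FCFS sketch, assuming the job names, arrival times, and CPU times below are illustrative: the READY queue is a plain FIFO, and each job runs to completion before the next is dispatched:

```python
from collections import deque

def fcfs(jobs):
    """jobs: list of (name, arrival_time, burst_time)."""
    queue = deque(sorted(jobs, key=lambda j: j[1]))   # FIFO ordered by arrival time
    clock, schedule = 0, []
    while queue:
        name, arrival, burst = queue.popleft()
        start = max(clock, arrival)        # CPU may sit idle until the job arrives
        finish = start + burst             # non-preemptive: run to completion
        schedule.append((name, start, finish, start - arrival))  # last field = waiting time
        clock = finish
    return schedule

for name, start, finish, wait in fcfs([("A", 0, 15), ("B", 2, 2), ("C", 3, 1)]):
    print(f"{name}: start={start} finish={finish} waited={wait}")
```

With these illustrative numbers, the long first job makes the two short jobs wait 13 and 14 time units, which is why FCFS is a poor fit for interactive response times.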
is a non-preemptive scheduling algorithm, also known as shortest job first (SJF), that handles jobs based on the length of their CPU cycle time.
It’s easiest to implement in batch environments where the estimated CPU time required to run the job is given in advance by each user at the start of each job.
However, it doesn’t work in interactive systems because users don’t estimate in advance the CPU time required to run their jobs.
Shortest Job Next
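A non-preemptive SJN sketch under the same illustrative job format: among the jobs that have already arrived, the one with the shortest estimated CPU cycle is dispatched next and runs to completion:

```python
def sjn(jobs):
    """jobs: list of (name, arrival_time, estimated_burst)."""
    pending, clock, schedule = list(jobs), 0, []
    while pending:
        ready = [j for j in pending if j[1] <= clock]
        if not ready:                          # nothing has arrived yet: jump ahead
            clock = min(j[1] for j in pending)
            continue
        job = min(ready, key=lambda j: j[2])   # shortest estimated CPU cycle first
        pending.remove(job)
        name, arrival, burst = job
        schedule.append((name, clock, clock + burst))
        clock += burst                         # non-preemptive: run to completion
    return schedule

print(sjn([("A", 0, 6), ("B", 1, 2), ("C", 2, 8), ("D", 3, 3)]))
```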
is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems, even though it may give slower turnaround to some users.
Priorities can be assigned by a system administrator using characteristics extrinsic to the jobs.
Priority Scheduling
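A non-preemptive priority sketch, assuming the convention that a lower number means a higher priority (both the convention and the jobs are illustrative assumptions, not taken from any particular system):

```python
def priority_schedule(jobs):
    """jobs: list of (name, arrival_time, burst_time, priority); lower number = higher priority."""
    pending, clock, order = list(jobs), 0, []
    while pending:
        ready = [j for j in pending if j[1] <= clock]
        if not ready:                          # CPU idle until the next arrival
            clock = min(j[1] for j in pending)
            continue
        job = min(ready, key=lambda j: j[3])   # pick the highest-priority ready job
        pending.remove(job)
        order.append((job[0], clock, clock + job[2]))
        clock += job[2]                        # non-preemptive: run to completion
    return order

print(priority_schedule([("A", 0, 4, 2), ("B", 1, 3, 1), ("C", 2, 1, 3)]))
```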
Priorities can also be determined by the Processor Manager based on characteristics intrinsic to the jobs such as:
enumerate
- Memory requirements
- Number and type of peripheral devices
- Amount of time already spent in the system
- Total CPU time
determining priorities
Jobs requiring large amounts of memory could be allocated lower priorities than those requesting small amounts of memory, or vice versa.
Memory requirements.
determining priorities
This is the total amount of elapsed time since the job was accepted for processing.
Some systems increase the priority of jobs that have been in the system for an unusually long time to expedite their exit.
This is known as aging.
Amount of time already spent in the system
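Aging can be sketched as a periodic priority boost for jobs that have waited too long; the boost interval and increment below are arbitrary illustration values, again assuming that a lower number means a higher priority:

```python
def age_priorities(waiting_jobs, now, boost_after=10, boost=1):
    """Raise the priority of any job that has waited at least boost_after time units.
    waiting_jobs: list of dicts with 'name', 'entered', 'priority'; lower number = higher priority."""
    for job in waiting_jobs:
        if now - job["entered"] >= boost_after and job["priority"] > 0:
            job["priority"] -= boost           # aging: long-waiting jobs drift upward
    return waiting_jobs

queue = [{"name": "A", "entered": 0, "priority": 5},
         {"name": "B", "entered": 8, "priority": 2}]
print(age_priorities(queue, now=12))   # A has waited 12 units and is boosted; B has not
```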
determining priorities
Jobs having a long CPU cycle, or estimated run time, would be given lower priorities than those having a brief estimated run time.
Total CPU time
is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion—but even this job can be preempted if a newer job in the READY queue has a time to completion that’s shorter.
Shortest Remaining Time
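A preemptive shortest-remaining-time sketch over the same illustrative job format, advancing one time unit at a time so that a newly arrived job with a shorter remaining time can preempt the one that is running:

```python
def srt(jobs):
    """jobs: list of (name, arrival_time, burst_time). Returns (name, time) pairs, one per time unit."""
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    clock, timeline = 0, []
    while any(r > 0 for r in remaining.values()):
        ready = [n for n, r in remaining.items() if r > 0 and arrival[n] <= clock]
        if not ready:                                  # CPU idle until the next arrival
            clock += 1
            continue
        job = min(ready, key=lambda n: remaining[n])   # job closest to completion wins
        remaining[job] -= 1                            # run one time unit; may be preempted next tick
        timeline.append((job, clock))
        clock += 1
    return timeline

print(srt([("A", 0, 6), ("B", 2, 2), ("C", 3, 7)]))
```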
is a preemptive process scheduling algorithm that is used extensively in interactive systems.
It’s easy to implement and isn’t based on job characteristics but on a predetermined slice of time that’s given to each job to ensure that the CPU is equally shared among all active processes and isn’t monopolized by any one job.
Round Robin
round robin
This time slice is called a ___, and its size is crucial to the performance of the system.
It usually varies from 100 milliseconds to 1 or 2 seconds.
time quantum
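A round-robin sketch using a FIFO READY queue and an illustrative quantum of 4 time units; a job that does not finish within its quantum goes to the back of the queue (blocking for I/O is left out to keep the sketch short):

```python
from collections import deque

def round_robin(jobs, quantum=4):
    """jobs: list of (name, burst_time), all assumed to arrive at time 0."""
    queue = deque(jobs)
    clock, timeline = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)          # job may finish before the quantum expires
        timeline.append((name, clock, clock + run))
        clock += run
        if remaining > run:                    # quantum expired: back of the READY queue
            queue.append((name, remaining - run))
    return timeline

print(round_robin([("A", 8), ("B", 4), ("C", 9)], quantum=4))
```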
In the event that the job’s CPU cycle is shorter than the time quantum, one of two actions will take place:
enumerate
(1) If this is the job’s last CPU cycle and the job is finished, then all resources allocated to it are released and the completed job is returned to the user;
(2) if the CPU cycle has been interrupted by an I/O request, then information about
the job is saved in its PCB and it is linked at the end of the appropriate I/O queue.