Test p2 Flashcards
The Process
The status of the current activity of a process is represented by the value of the program counter and the contents of the processor's registers.
The memory layout of a process is typically divided into multiple sections:
- Text section—the executable code
- Data section—global variables
- Heap section—memory that is dynamically allocated during program run time
- Stack section—temporary data storage when invoking functions (such as function parameters, return addresses, and local variables)
The Stack and Heap
Notice that the sizes of the text and data sections are fixed, as their sizes do
not change during program run time. However, the stack and heap sections can
shrink and grow dynamically during program execution. Each time a function
is called, an activation record containing function parameters, local variables,
and the return address is pushed onto the stack; when control is returned from
the function, the activation record is popped from the stack. Similarly, the heap
will grow as memory is dynamically allocated, and will shrink when memory
is returned to the system. Although the stack and heap sections grow toward
one another, the operating system must ensure they do not overlap one another.
Process State
As a process executes, it changes state. The state of a process is defined in part
by the current activity of that process. A process may be in one of the following
states:
* New. The process is being created.
* Running. Instructions are being executed.
* Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
* Ready. The process is waiting to be assigned to a processor.
* Terminated. The process has finished execution.
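The states above can be sketched as a C enumeration; the dispatch helper below is hypothetical, showing just one transition (ready to running):

```c
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Hypothetical transition: the scheduler dispatches a ready process. */
enum proc_state dispatch(enum proc_state s)
{
    return (s == READY) ? RUNNING : s;  /* only a ready process can run */
}
```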
MEMORY LAYOUT OF A C PROGRAM
The figure shown below illustrates the layout of a C program in memory,
highlighting how the different sections of a process relate to an actual C
program. This figure is similar to the general concept of a process in memory
as shown in Figure 3.1, with a few differences:
* The global data section is divided into different sections for (a) initialized
data and (b) uninitialized data.
* A separate section is provided for the argc and argv parameters passed
to the main() function.
Process Control Block
Each process is represented in the operating system by a process control
block (PCB)—also called a task control block. A PCB is shown in Figure 3.3.
It contains many pieces of information associated with a specific process,
including these:
* Process state. The state may be new, ready, running, waiting, halted, and
so on.
* Program counter. The counter indicates the address of the next instruction
to be executed for this process.
* CPU registers. The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code information.
Along with the program counter, this state information must be saved
when an interrupt occurs, to allow the process to be continued correctly
afterward when it is rescheduled to run.
* CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
(Chapter 5 describes process scheduling.)
* Memory-management information. This information may include such
items as the value of the base and limit registers and the page tables, or the
segment tables, depending on the memory system used by the operating
system (Chapter 9).
* Accounting information. This information includes the amount of CPU
and real time used, time limits, account numbers, job or process numbers,
and so on.
* I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.
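The fields above can be sketched as a C struct; the names and array sizes are illustrative, not taken from any real kernel:

```c
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                       /* process identifier */
    enum proc_state state;         /* process state */
    unsigned long program_counter; /* address of the next instruction */
    unsigned long registers[16];   /* saved CPU registers */
    int priority;                  /* CPU-scheduling information */
    void *page_table;              /* memory-management information */
    long cpu_time_used;            /* accounting information */
    int open_files[16];            /* I/O status information */
    struct pcb *next;              /* link for scheduling queues */
};

/* Hypothetical helper: a freshly created process starts in the NEW state. */
struct pcb pcb_create(int pid)
{
    struct pcb p = {0};
    p.pid = pid;
    p.state = NEW;
    return p;
}
```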
Threads
The process model discussed so far has implied that a process is a program that
performs a single thread of execution. For example, when a process is running
a word-processor program, a single thread of instructions is being executed.
This single thread of control allows the process to perform only one task at a
time. Thus, the user cannot simultaneously type in characters and run the spell
checker. Most modern operating systems have extended the process concept
to allow a process to have multiple threads of execution and thus to perform
more than one task at a time. This feature is especially beneficial on multicore
systems, where multiple threads can run in parallel. A multithreaded word
processor could, for example, assign one thread to manage user input while
another thread runs the spell checker. On systems that support threads, the PCB
is expanded to include information for each thread. Other changes throughout
the system are also needed to support threads. Chapter 4 explores threads in
detail.
Process Scheduling
The objective of multiprogramming is to have some process running at all times
so as to maximize CPU utilization. The objective of time sharing is to switch
a CPU core among processes so frequently that users can interact with each
program while it is running. To meet these objectives, the process scheduler
selects an available process (possibly from a set of several available processes)
for program execution on a core. Each CPU core can run one process at a time. For a system with a single CPU core, there will never be more than one process
running at a time, whereas a multicore system can run multiple processes at
one time. If there are more processes than cores, excess processes will have to wait until a core is free and can be rescheduled. The number of processes
currently in memory is known as the degree of multiprogramming.
Balancing the objectives of multiprogramming and time sharing also
requires taking the general behavior of a process into account. In general, most
processes can be described as either I/O bound or CPU bound. An I/O-bound
process is one that spends more of its time doing I/O than it spends doing
computations. A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
Scheduling Queues
As processes enter the system, they are put into a ready queue, where they are
ready and waiting to execute on a CPU’s core. This queue is generally stored as
a linked list; a ready-queue header contains pointers to the first PCB in the list,
and each PCB includes a pointer field that points to the next PCB in the ready
queue.
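A sketch of that linked-list representation in C; the PCB here is trimmed to just the fields the queue needs:

```c
#include <stddef.h>

struct pcb {
    int pid;
    struct pcb *next;  /* pointer to the next PCB in the ready queue */
};

/* Ready-queue header: pointers to the first and last PCB in the list. */
static struct pcb *ready_head = NULL;
static struct pcb *ready_tail = NULL;

void ready_enqueue(struct pcb *p)
{
    p->next = NULL;
    if (ready_tail == NULL)
        ready_head = p;        /* queue was empty */
    else
        ready_tail->next = p;  /* append after the current tail */
    ready_tail = p;
}

struct pcb *ready_dequeue(void)
{
    struct pcb *p = ready_head;
    if (p != NULL) {
        ready_head = p->next;
        if (ready_head == NULL)
            ready_tail = NULL; /* queue is now empty */
    }
    return p;
}
```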
The system also includes other queues. When a process is allocated a CPU
core, it executes for a while and eventually terminates, is interrupted, or waits
for the occurrence of a particular event, such as the completion of an I/O
request. Suppose the process makes an I/O request to a device such as a disk.
Since devices run significantly slower than processors, the process will have
to wait for the I/O to become available. Processes that are waiting for a certain
event to occur — such as completion of I/O — are placed in a wait queue
(Figure 3.4).
A common representation of process scheduling is a queueing diagram,
such as that in Figure 3.5. Two types of queues are present: the ready queue and
a set of wait queues. The circles represent the resources that serve the queues,
and the arrows indicate the flow of processes in the system.
A new process is initially put in the ready queue. It waits there until it is
selected for execution, or dispatched. Once the process is allocated a CPU core
and is executing, one of several events could occur:
* The process could issue an I/O request and then be placed in an I/O wait
queue.
* The process could create a new child process and then be placed in a wait
queue while it awaits the child’s termination.
* The process could be removed forcibly from the core, as a result of an
interrupt or having its time slice expire, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state
to the ready state and is then put back in the ready queue. A process continues
this cycle until it terminates, at which time it is removed from all queues and
has its PCB and resources deallocated.
CPU Scheduling
A process migrates among the ready queue and various wait queues throughout
its lifetime. The role of the CPU scheduler is to select from among the
processes that are in the ready queue and allocate a CPU core to one of them. The
CPU scheduler must select a new process for the CPU frequently. An I/O-bound
process may execute for only a few milliseconds before waiting for an I/O
request. Although a CPU-bound process will require a CPU core for longer durations,
the scheduler is unlikely to grant the core to a process for an extended
period. Instead, it is likely designed to forcibly remove the CPU from a process
and schedule another process to run. Therefore, the CPU scheduler executes at
least once every 100 milliseconds, although typically much more frequently.
Some operating systems have an intermediate form of scheduling, known
as swapping, whose key idea is that sometimes it can be advantageous to
remove a process from memory (and from active contention for the CPU)
and thus reduce the degree of multiprogramming. Later, the process can be
reintroduced into memory, and its execution can be continued where it left off.
This scheme is known as swapping because a process can be “swapped out” from memory to disk, where its current status is saved, and later “swapped in”
from disk back to memory, where its status is restored. Swapping is typically
only necessary when memory has been overcommitted and must be freed up.
Context Switch
As mentioned in Section 1.2.1, interrupts cause the operating system to change
a CPU core from its current task and to run a kernel routine. Such operations
happen frequently on general-purpose systems. When an interrupt occurs, the
system needs to save the current context of the process running on the CPU
core so that it can restore that context when its processing is done, essentially
suspending the process and then resuming it. The context is represented in
the PCB of the process. It includes the value of the CPU registers, the process
state (see Figure 3.2), and memory-management information. Generically, we
perform a state save of the current state of the CPU core, be it in kernel or user
mode, and then a state restore to resume operations.
Switching the CPU core to another process requires performing a state
save of the current process and a state restore of a different process. This
task is known as a context switch and is illustrated in Figure 3.6. When a
context switch occurs, the kernel saves the context of the old process in its
PCB and loads the saved context of the new process scheduled to run. Context-switch
time is pure overhead, because the system does no useful work while
switching. Switching speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence
of special instructions (such as a single instruction to load or store all registers).
A typical speed is a few microseconds.
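In outline, the data movement looks like this; real context switches are performed in kernel assembly code, so this C sketch is only illustrative:

```c
struct context {
    unsigned long registers[16];   /* saved CPU register values */
    unsigned long program_counter;
};

struct pcb {
    int pid;
    struct context ctx;  /* the saved context lives in the PCB */
};

/* State save of the old process, then state restore of the next one. */
void context_switch(struct context *cpu, struct pcb *old, struct pcb *next)
{
    old->ctx = *cpu;   /* save the running process's context into its PCB */
    *cpu = next->ctx;  /* load the saved context of the process to run */
}
```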
MULTITASKING IN MOBILE SYSTEMS
Because of the constraints imposed on mobile devices, early versions of iOS
did not provide user-application multitasking; only one application ran in
the foreground while all other user applications were suspended. Operating-system
tasks were multitasked because they were written by Apple and well
behaved. However, beginning with iOS 4, Apple provided a limited form of
multitasking for user applications, thus allowing a single foreground application
to run concurrently with multiple background applications. (On a
mobile device, the foreground application is the application currently open
and appearing on the display. The background application remains in memory,
but does not occupy the display screen.) The iOS 4 programming API
provided support for multitasking, thus allowing a process to run in the background
without being suspended. However, it was limited and only available
for a few application types. As hardware for mobile devices began to offer
larger memory capacities, multiple processing cores, and greater battery life,
subsequent versions of iOS began to support richer functionality for multitasking
with fewer restrictions. For example, the larger screen on iPad tablets
allowed running two foreground apps at the same time, a technique known
as split-screen.
Since its origins, Android has supported multitasking and does not place
constraints on the types of applications that can run in the background. If
an application requires processing while in the background, the application
must use a service, a separate application component that runs on behalf
of the background process. Consider a streaming audio application: if the
application moves to the background, the service continues to send audio
data to the audio device driver on behalf of the background application. In
fact, the service will continue to run even if the background application is
suspended. Services do not have a user interface and have a small memory
footprint, thus providing an efficient technique for multitasking in a mobile
environment.
Context-switch times
are highly dependent on hardware support. For
instance, some processors provide multiple sets of registers. A context switch
here simply requires changing the pointer to the current register set. Of course,
if there are more active processes than there are register sets, the system resorts
to copying register data to and from memory, as before. Also, the more complex
the operating system, the greater the amount of work that must be done during
a context switch. As we will see in Chapter 9, advanced memory-management
techniques may require that extra data be switched with each context. For
instance, the address space of the current process must be preserved as the
space of the next task is prepared for use. How the address space is preserved,
and what amount of work is needed to preserve it, depend on the memory-management
method of the operating system.
Most operating systems (including UNIX, Linux, and Windows) identify
processes according to
a unique process identifier (or pid), which is typically
an integer number. The pid provides a unique value for each process in the
system, and it can be used as an index to access various attributes of a process
within the kernel.
Linux PID
Figure 3.7 illustrates a typical process tree for the Linux operating system,
showing the name of each process and its pid. (We use the term process rather
loosely in this situation, as Linux prefers the term task instead.) The systemd
process (which always has a pid of 1) serves as the root parent process for all
user processes, and is the first user process created when the system boots.
Once the system has booted, the systemd process creates processes which
provide additional services such as a web or print server, an ssh server, and
the like. In Figure 3.7, we see two children of systemd—logind and sshd.
The logind process is responsible for managing clients that directly log onto
the system. In this example, a client has logged on and is using the bash shell,
which has been assigned pid 8416. Using the bash command-line interface,
this user has created the process ps as well as the vim editor. The sshd process
is responsible for managing clients that connect to the system by using ssh
(which is short for secure shell).