C191-Terms-Chapter-3 Flashcards
process
An instance of a program being executed by an OS. Ex: When a user opens a new application like a web browser or text editor, the OS creates a new process.
process control block (PCB)
The OS keeps track of each process using a process control block (PCB): A data structure that holds information for a process, including the current instruction address, the execution stack, the set of resources used by the process, and the program being executed.
The PCB is the concrete representation of a process.
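A minimal sketch in C of what a PCB might hold; the struct layout and field names below are illustrative assumptions mirroring the information listed above, not taken from any particular OS.

```c
/* Hypothetical PCB layout; fields mirror the information listed above. */
typedef enum { NEW, READY, RUNNING, BLOCKED, SUSPENDED, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;             /* unique process identifier               */
    proc_state_t  state;           /* current state in the process model      */
    void         *pc;              /* current instruction address             */
    void         *stack_ptr;       /* execution stack                         */
    long          registers[16];   /* saved general-purpose register values   */
    long          flags;           /* saved hardware flags                    */
    int           priority;        /* importance used by the scheduler        */
    struct pcb   *parent;          /* creator of this process                 */
    struct pcb   *children;        /* head of the list of child processes     */
    struct pcb   *sibling;         /* next process in the parent's child list */
    struct pcb   *next;            /* link used by the RL or a waiting list   */
    struct rcb   *other_resources; /* head of the list of resources held      */
} pcb_t;
```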
new state
A newly created process is placed into the new state before the process is allowed to compete for the CPU.
Ex: The OS may want to regulate the number of processes competing for the CPU.
terminated state
A process is placed into the terminated state when execution can no longer continue but before the PCB is deleted.
Ex: The OS may want to examine the final state of a process that committed a fatal error.
suspended state
A process may be placed into the suspended state even though the CPU and all resources are available.
Ex: The OS may want to stop a process to allow debugging or to regulate performance.
context switch
The CPU is always running one process at a time. Multiple processes can share a single CPU by taking turns.
A context switch is the transfer of control from one process to another. Each time a process stops running to allow another one to resume, the OS must save all information about the stopped process. This information is restored when the process gets to run again.
The information that needs to be saved is the CPU state, which consists of all values held in the CPU registers and hardware flags.
One of these registers is the program counter, which determines the next instruction to execute after the process is restored.
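A hedged sketch of the idea in C, building on the pcb_t sketch above; the actual register save/restore is architecture-specific assembly, so those routines are only declared.

```c
/* Assumed to be provided by architecture-specific code. */
void save_cpu_state(pcb_t *p);      /* copy registers, flags, and pc into p's PCB */
void restore_cpu_state(pcb_t *p);   /* load p's saved CPU state back onto the CPU */

/* Transfer control from process p to process q. */
void context_switch(pcb_t *p, pcb_t *q)
{
    save_cpu_state(p);     /* preserve all information about the stopped process  */
    restore_cpu_state(q);  /* q resumes at the instruction its saved pc points to */
}
```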
physical CPU
A real hardware instance of a CPU
virtual CPU
A CPU that the process assumes is available only to itself.
Structuring an application as processes allows independence from the:
Number of CPUs: A physical CPU is a real hardware instance of a CPU. Multiple processes may run on one physical CPU using a technique known as time sharing. Each process is given a virtual CPU: A CPU that the process assumes is available only to itself.
Type of CPU: A virtual CPU can be just an abstraction of the physical CPU or it can be software that emulates the behavior of a different CPU.
Benefits of virtual CPUs
Independence from the number and type of CPUs provides several crucial benefits:
Multi-user support: Multiple users, each represented by one or more separate processes, can share the same machine without being aware of each other.
Multi-CPU transparency: An application written to use multiple CPUs will run correctly, although perhaps more slowly, if only one CPU is available.
Portability: An application compiled for one type of CPU can run on a different CPU without being modified or even recompiled.
Using multiple cooperating processes instead of one has several important advantages:
The interfaces between the processes are simple and easy to understand.
Each process can be designed and studied in isolation.
The implementation reduces idle time by overlapping the execution of multiple processes.
Different processes can utilize separate CPUs, if available, thus speeding up the execution.
Two ways exist to organize all PCBs:
An array of structures. The PCBs are marked as free or allocated, which eliminates the need for any dynamic memory management. The main drawback is a lot of wasted memory space to maintain a sufficient number of PCB slots.
An array of pointers to dynamically allocated PCBs. The pointer array wastes little space and can be made much larger than the array of structures. The drawback is the overhead of dynamic memory management to allocate each new PCB and to free the memory when the process terminates.
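A brief C sketch contrasting the two organizations, reusing the pcb_t sketch from earlier; MAX_PROCS and the helper names are assumptions for illustration.

```c
#include <stdlib.h>

#define MAX_PROCS 64      /* illustrative limit on the number of processes */

/* Option 1: a static array of PCB structures, each slot marked free or
 * allocated. No dynamic memory management, but every slot occupies memory
 * whether used or not. */
pcb_t pcb_table[MAX_PROCS];
int   pcb_in_use[MAX_PROCS];

pcb_t *alloc_pcb_static(void) {
    for (int i = 0; i < MAX_PROCS; i++)
        if (!pcb_in_use[i]) { pcb_in_use[i] = 1; return &pcb_table[i]; }
    return NULL;          /* table full */
}

/* Option 2: an array of pointers to dynamically allocated PCBs. The pointer
 * array wastes little space, but each allocation and free adds overhead. */
pcb_t *pcb_ptrs[MAX_PROCS];

pcb_t *alloc_pcb_dynamic(void) {
    for (int i = 0; i < MAX_PROCS; i++)
        if (pcb_ptrs[i] == NULL)
            return pcb_ptrs[i] = calloc(1, sizeof(pcb_t));
    return NULL;
}
```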
waiting list
A waiting list is associated with every resource and contains all processes blocked on that resource because the resource is not available.
ready list (RL)
A list containing all processes that are in the ready state and thus are able to run on the CPU.
The RL also includes the currently running process. The RL maintains all processes sorted by their importance, which is expressed by an integer value called the priority.
The RL can be a linked list in which a process's priority is implied by its current position in the list. The RL can also maintain processes in separate lists, one per priority level.
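A small C sketch of the second organization, one list per priority level, reusing the pcb_t sketch from earlier; NUM_PRIORITIES is an assumption.

```c
#define NUM_PRIORITIES 4          /* illustrative number of priority levels */

/* RL[n] holds the ready (and running) processes of priority n;
 * a higher index means a more important process. */
pcb_t *RL[NUM_PRIORITIES];

/* Insert a ready process at the tail of its priority level. */
void rl_insert(pcb_t *p) {
    pcb_t **cur = &RL[p->priority];
    while (*cur != NULL) cur = &(*cur)->next;
    p->next = NULL;
    *cur = p;
}

/* The highest-priority process is the head of the highest non-empty level. */
pcb_t *rl_highest(void) {
    for (int lvl = NUM_PRIORITIES - 1; lvl >= 0; lvl--)
        if (RL[lvl] != NULL) return RL[lvl];
    return NULL;
}
```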
process creation hierarchy
A graphical representation of the dynamically changing parent-child relationships among all processes. The process creation hierarchy changes each time a process is created or destroyed.
create process function
Allocates a new PCB, fills the PCB entries with initial values, and links the PCB to other data structures in the system.
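A hedged C sketch of such a create() function, reusing the pcb_t sketch above; the helper functions are assumptions, not a real kernel API.

```c
pcb_t *pcb_alloc(void);                           /* obtain a free PCB          */
void   child_list_add(pcb_t *parent, pcb_t *c);   /* record c as parent's child */
void   rl_insert(pcb_t *p);                       /* add p to the ready list    */
void   scheduler(void);                           /* pick the next process      */

pcb_t *create(pcb_t *parent, int priority)
{
    pcb_t *p = pcb_alloc();            /* allocate a new PCB                     */
    p->state           = READY;        /* fill the entries with initial values   */
    p->priority        = priority;
    p->parent          = parent;
    p->children        = NULL;
    p->other_resources = NULL;

    child_list_add(parent, p);         /* link into the creation hierarchy       */
    rl_insert(p);                      /* link into the RL                       */
    scheduler();                       /* the new process may outrank the caller */
    return p;
}
```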
destroy process function
Destroys a process by freeing the PCB data structure and removing any references to the PCB from the system.
Depending on the OS, the destroy function may also destroy all of the process’s descendants to prevent having “orphan” processes in the system.
The destruction of the entire hierarchy of descendants is accomplished by calling destroy(c) recursively on all children c of p.
The destroy() function performs the following steps:
- After calling destroy(c) on all child processes, remove p from either the RL or from the waiting list of a resource.
- Remove p from the list of the calling process’s children.
- Release all memory and other resources, close all files, and deallocate the PCB.
- Call the scheduler to choose the next process to run. The call must be made outside of the destroy(p) function to ensure that the scheduler executes only once, after the entire hierarchy of processes has been destroyed.
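A hedged C sketch of these steps, building on the earlier sketches; the helpers are assumptions, and the scheduler call is kept in a wrapper so it runs only once per destroyed hierarchy.

```c
void child_list_remove(pcb_t *parent, pcb_t *c);
void remove_from_rl_or_waiting_list(pcb_t *p);
void release_all_resources(pcb_t *p);      /* memory, files, other resources */
void pcb_free(pcb_t *p);

void destroy(pcb_t *p)
{
    while (p->children != NULL)            /* each call unlinks one child,   */
        destroy(p->children);              /* so the loop terminates         */

    remove_from_rl_or_waiting_list(p);     /* p is on exactly one of the two */
    child_list_remove(p->parent, p);       /* leave the creation hierarchy   */
    release_all_resources(p);
    pcb_free(p);                           /* deallocate the PCB             */
}

void destroy_process(pcb_t *p)             /* entry point used by the OS     */
{
    destroy(p);
    scheduler();                           /* choose the next process to run */
}
```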
resource control block (RCB)
A data structure that represents a resource. Its fields include resource_description, state, and waiting_list.
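A minimal C sketch of an RCB mirroring those fields; the layout is an assumption.

```c
/* Hypothetical RCB layout mirroring the fields named above. */
typedef enum { FREE, ALLOCATED } res_state_t;

typedef struct rcb {
    const char  *resource_description;  /* what kind of resource this is      */
    res_state_t  state;                 /* FREE or ALLOCATED                  */
    struct pcb  *waiting_list;          /* processes blocked on this resource */
} rcb_t;
```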
request resource function
Allocates a resource r to a process p or blocks p if r is currently allocated to another process.
If r is currently free, the state of r is changed to allocated and a pointer to r is inserted into the list of other_resources of p.
If r is not free, the calling process p is blocked. p’s PCB is moved from the RL to the waiting_list of r. Since the calling process, p, is now blocked, the scheduler must be called to select another process to run.
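A hedged C sketch of such a request() function, building on the pcb_t and rcb_t sketches above; the helper names are assumptions.

```c
void other_resources_add(pcb_t *p, rcb_t *r);   /* record that p holds r     */
void rl_remove(pcb_t *p);                       /* take p off the ready list */
void waiting_list_add(rcb_t *r, pcb_t *p);      /* append p to r's list      */

void request(pcb_t *p, rcb_t *r)        /* p is the currently running process */
{
    if (r->state == FREE) {
        r->state = ALLOCATED;           /* give r to p                          */
        other_resources_add(p, r);      /* insert r into p's other_resources    */
    } else {
        p->state = BLOCKED;             /* p cannot continue                    */
        rl_remove(p);                   /* move p's PCB from the RL ...         */
        waiting_list_add(r, p);         /* ... to r's waiting_list              */
        scheduler();                    /* p is blocked; select another process */
    }
}
```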
release resource function
Allocates the resource r to the next process on r's waiting list. If the waiting list is empty, r is marked as free.
If r’s waiting_list has no processes then the state of r is changed to free and p continues executing.
If the waiting list of r is not empty then the process q at the head of the list is allocated r, the state of q is changed to ready, and q is moved from the waiting_list to RL.
Since a new process (q) is now on RL, the scheduler must be called to decide which process (p or q) should continue to run.
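A matching C sketch of release(), again with assumed helper names and the types sketched earlier.

```c
void   other_resources_remove(pcb_t *p, rcb_t *r);
pcb_t *waiting_list_pop(rcb_t *r);      /* detach the head of r's waiting_list */

void release(pcb_t *p, rcb_t *r)        /* p is the currently running process  */
{
    other_resources_remove(p, r);       /* p no longer holds r                 */

    if (r->waiting_list == NULL) {
        r->state = FREE;                /* nobody is waiting; p keeps running  */
    } else {
        pcb_t *q = waiting_list_pop(r); /* the head of the list is allocated r */
        q->state = READY;
        other_resources_add(q, r);
        rl_insert(q);                   /* q moves to the RL                   */
        scheduler();                    /* decide whether p or q runs next     */
    }
}
```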
scheduler function
Determines which process should run next and starts that process. The scheduler function is called at the end of each of the process and resource management functions: create, destroy, request, and release.
Assuming the RL is implemented as a priority list, the scheduler() function performs the following tasks:
- Find the highest priority process q on the RL.
- Perform a context switch from p to q if either of the following conditions is met:
The priority of the running process is less than the priority of another process. This condition is true when the scheduler() is called from create() or release(). In these cases, process q could have a higher priority than p.
The state of the running process p is blocked. This is true when the scheduler() is called from request() and the resource is unavailable.
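A hedged C sketch of scheduler() following these tasks; the running pointer and the helpers are assumptions carried over from the earlier sketches.

```c
extern pcb_t *running;                  /* process currently holding the CPU  */

void scheduler(void)
{
    pcb_t *q = rl_highest();            /* highest-priority process on the RL */

    /* Switch only if the running process blocked (request on a busy resource)
     * or was outranked by a newly readied process (create or release). */
    if (running->state == BLOCKED || q->priority > running->priority) {
        pcb_t *p = running;
        running  = q;
        q->state = RUNNING;
        context_switch(p, q);           /* save p's CPU state, restore q's    */
    }
}
```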
thread
A thread is an instance of executing a portion of a program within a process without incurring the overhead of creating and managing separate PCBs.
thread control block (TCB)
A data structure that holds a separate copy of the dynamically changing information necessary for a thread to execute independently.
The replication of only the bare minimum of information in each TCB, while sharing the same code, global data, resources, and open files, is what makes threads much more efficient to manage than processes.
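A small POSIX threads example in C illustrating the point: the two threads share the process's global data and synchronize on it, while each has its own stack and register state.

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                           /* shared by every thread   */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {            /* each thread has its own  */
        pthread_mutex_lock(&lock);                /* stack and registers, but */
        shared_counter++;                         /* the global data is one   */
        pthread_mutex_unlock(&lock);              /* copy shared by both      */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);      /* no new PCBs: both threads */
    pthread_create(&t2, NULL, worker, NULL);      /* run inside this process   */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);     /* prints 200000             */
    return 0;
}
```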
independent process
A process that does not share data with any other process executing in the system.
cooperating process
A process that can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.
interprocess communication (IPC)
The mechanism that allows cooperating processes to exchange data, that is, to send data to and receive data from each other. There are two fundamental models of interprocess communication: shared memory and message passing.
shared memory
In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
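A minimal runnable C example of the shared-memory model: a parent and child share an anonymous mmap region. The one-second sleep is a crude stand-in for real synchronization, and MAP_ANONYMOUS is a Linux/BSD extension.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Establish a region of memory shared by parent and child.
     * Strictly POSIX code would use shm_open() instead of MAP_ANONYMOUS. */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (fork() == 0) {                       /* child: reads from the region */
        sleep(1);                            /* crude wait; real code would  */
        printf("child read: %s\n", region);  /* use a semaphore or similar   */
        return 0;
    }
    strcpy(region, "hello through shared memory");  /* parent: writes        */
    wait(NULL);
    munmap(region, 4096);
    return 0;
}
```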
message passing
In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.
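A minimal runnable C example of the message-passing model using a pipe: the parent sends a message and the child receives it through the kernel rather than through any shared memory region.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                                /* kernel-managed one-way channel */

    if (fork() == 0) {                       /* child: the receiver            */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* receive()          */
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }
    close(fd[0]);
    const char *msg = "hello via message passing";
    write(fd[1], msg, strlen(msg));          /* send()                         */
    close(fd[1]);
    wait(NULL);
    return 0;
}
```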
browser process
Responsible for managing the user interface as well as disk and network I/O. A new browser process is created when Chrome is started. Only one browser process is created.
Renderer processes
Contain logic for rendering web pages. Thus, they contain the logic for handling HTML, JavaScript, images, and so forth.
As a general rule, a new renderer process is created for each website opened in a new tab, so several renderer processes may be active at the same time.
plug-in process
Created for each type of plug-in (such as Flash or QuickTime) in use. Plug-in processes contain the code for the plug-in as well as additional code that enables the plug-in to communicate with associated renderer processes and the browser process.
sandbox
A contained environment (e.g., a virtual machine).
Renderer processes run in a sandbox, which means that access to disk and network I/O is restricted, minimizing the effects of any security exploits.
unbounded buffer
Places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items.
bounded buffer
Assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
producer
A process role in which the process produces information that is consumed by a consumer process.
consumer
A process role in which the process consumes information produced by a producer process.
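A classic bounded-buffer sketch in C with a circular array; the busy-wait loops stand in for real synchronization, and the producer and consumer would normally run as separate processes (over shared memory) or threads.

```c
#define BUFFER_SIZE 8                  /* illustrative fixed capacity          */

typedef struct { int data; } item_t;

item_t buffer[BUFFER_SIZE];
int in  = 0;                           /* next free slot (producer's index)    */
int out = 0;                           /* next item to consume                 */

/* Producer: must wait while the buffer is full. With this test the buffer
 * holds at most BUFFER_SIZE - 1 items, the usual textbook convention. */
void produce(item_t item) {
    while ((in + 1) % BUFFER_SIZE == out)
        ;                              /* full: busy-wait                      */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: must wait while the buffer is empty. */
item_t consume(void) {
    while (in == out)
        ;                              /* empty: busy-wait                     */
    item_t item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
```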
communication link
If processes P and Q want to communicate, they must send messages to and receive messages from each other: a communication link must exist between them.
Here are several methods for logically implementing a link and the send()/receive() operations:
Direct or indirect communication
Synchronous or asynchronous communication
Automatic or explicit buffering
direct communication
Each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:
send(P, message)—Send a message to process P.
receive(Q, message)—Receive a message from process Q.
symmetry in addressing
Both the sender process and the receiver process must name the other to communicate.
asymmetry in addressing
Here, only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send() and receive() primitives are defined as follows:
send(P, message)—Send a message to process P.
receive(id, message)—Receive a message from any process. The variable id is set to the name of the process with which communication has taken place.
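Hypothetical C prototypes only (not a real IPC API), showing how an asymmetric receive returns the sender's identity through an out-parameter.

```c
/* Hypothetical prototypes; no real IPC library is implied. */
typedef struct { char text[128]; } message_t;

/* Symmetric addressing: both sides must name each other. */
void send(int dest_pid, const message_t *msg);
void receive_from(int src_pid, message_t *msg);

/* Asymmetric addressing: the receiver accepts a message from any process
 * and learns the sender's identity through the id out-parameter. */
void receive(int *id, message_t *msg);
```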
The disadvantage of both symmetric and asymmetric addressing
The limited modularity of the resulting process definitions. Changing the identifier of a process may necessitate examining all other process definitions.
All references to the old identifier must be found so that they can be modified to the new identifier.