Process creation and management Flashcards

1
Q

There are two basic operations that can be performed on a process:

A

creation and deletion

2
Q

A process passes through several stages from its beginning to its end; there must be a minimum of five states.

A

Each process goes through several stages throughout its life cycle. Although a process is in exactly one of these states at any moment during execution, the names of the states are not standardized across operating systems.

3
Q

state of process:
New (create)

A

In this state, the process is about to be created but has not yet been created: it is the program, present in secondary memory, that the OS will pick up to create the process.

4
Q

state of process:
Ready

A

New -> Ready. After creation, the process enters the ready state, i.e. it is loaded into main memory. The process is now ready to run and is waiting to get CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a queue called the ready queue.

5
Q

state of process:
Run

A

The process is chosen from the ready queue by the CPU scheduler for execution, and its instructions are executed by one of the available CPU cores.

6
Q

state of process:
Blocked or Wait

A

Whenever a process requests I/O, needs input from the user, or needs access to a critical region (whose lock is already acquired), it enters the blocked or wait state. The process continues to wait in main memory but does not require the CPU. Once the I/O operation is completed, the process goes back to the ready state.

7
Q

state of process:
Terminated or Completed

A

The process is killed and its PCB is deleted. The resources allocated to the process are released or deallocated.

8
Q

state of process:
Suspend Ready

A

A process that was initially in the ready state but was swapped out of main memory (see the virtual memory topic) and placed onto external storage by the scheduler is said to be in the suspend ready state. The process transitions back to the ready state whenever it is brought into main memory again.

9
Q

state of process:
Suspend wait or suspend blocked

A

Similar to suspend ready, but for a process that was blocked performing an I/O operation when a shortage of main memory caused it to be moved to secondary memory. When the I/O work finishes, it may go to suspend ready.

10
Q

CPU and I/O Bound Processes

A

If a process is intensive in terms of CPU operations, it is called a CPU-bound process. Similarly, if a process is intensive in terms of I/O operations, it is called an I/O-bound process.

11
Q

How does a process move between different states in an operating system?

A

A process can move between different states in an operating system based on its execution status and resource availability.

12
Q

New to ready

A

When a process is created, it is in a new state. It moves to the ready state when the operating system has allocated resources to it and it is ready to be executed.

13
Q

Ready to running

A

When the CPU becomes available, the operating system selects a process from the ready queue depending on various scheduling algorithms and moves it to the running state.

14
Q

Running to blocked

A

When a process needs to wait for an event to occur (I/O operation or system call), it moves to the blocked state. For example, if a process needs to wait for user input, it moves to the blocked state until the user provides the input.

15
Q

Running to ready

A

When a running process is preempted by the operating system, it moves to the ready state. For example, if a higher-priority process becomes ready, the operating system may preempt the running process and move it to the ready state.

16
Q

Blocked to ready

A

When the event a blocked process was waiting for occurs, the process moves to the ready state. For example, if a process was waiting for user input and the input is provided, it moves to the ready state.

17
Q

Running to terminated

A

When a process completes its execution or is terminated by the operating system, it moves to the terminated state.
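
The transitions in the cards above can be summarized as a small state machine. This is an illustrative model only — the state and event names here are invented for the sketch, not taken from any particular operating system:

```python
# Illustrative process state machine covering the transitions above.
# Event names ("admit", "dispatch", ...) are invented for this sketch.
ALLOWED = {
    ("new", "admit"): "ready",            # new -> ready
    ("ready", "dispatch"): "running",     # ready -> running
    ("running", "wait"): "blocked",       # running -> blocked (I/O, event)
    ("running", "preempt"): "ready",      # running -> ready (preemption)
    ("blocked", "event_done"): "ready",   # blocked -> ready
    ("running", "exit"): "terminated",    # running -> terminated
}

def transition(state, event):
    """Return the next state, or raise on an illegal transition."""
    try:
        return ALLOWED[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# A process that runs, blocks on I/O, resumes, and exits:
s = "new"
for e in ["admit", "dispatch", "wait", "event_done", "dispatch", "exit"]:
    s = transition(s, e)
print(s)  # terminated
```

Note how ready, running, and blocked can repeat in the event trace, while new and terminated each occur once — matching the life-cycle rule in a later card.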

18
Q

Scheduler:
long-term- performance

A

Decides how many processes should be admitted into the ready state; this determines the degree of multiprogramming. Once a decision is taken it lasts for a long time, which also means the scheduler runs infrequently. Hence it is called the long-term scheduler.

19
Q

Scheduler:
short-term- context switching time

A

The short-term scheduler decides which process is to be executed next and then calls the dispatcher. The dispatcher is the software that moves a process from ready to running and vice versa; in other words, it performs context switching. The short-term scheduler runs frequently and is also called the CPU scheduler.
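
As a toy illustration of the short-term scheduler picking the next process from the ready queue — FIFO order here, purely for the sketch; real CPU schedulers use more sophisticated policies:

```python
from collections import deque

# PIDs waiting in the ready queue (names invented for the sketch).
ready_queue = deque(["p1", "p2", "p3"])

def dispatch(queue):
    """Short-term scheduling decision: pick the next process (FIFO here).
    The dispatcher would then context-switch to the chosen process."""
    return queue.popleft() if queue else None

running = dispatch(ready_queue)   # "p1" starts running
ready_queue.append(running)       # on preemption, p1 re-enters the queue
```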

20
Q

Scheduler:
Medium-term- swapping time

A

The suspension decision is taken by the medium-term scheduler. It is used for swapping, i.e. moving a process from main memory to secondary memory and vice versa. Swapping is done to reduce the degree of multiprogramming.

21
Q

multiprogramming:
Preemption

A

The process is forcefully removed from the CPU. Preemption is also called time sharing or multitasking.

22
Q

multiprogramming:
Non-preemption

A

Processes are not removed from the CPU until they complete execution. Once the CPU is given to a process, control cannot be taken back forcibly until the process releases it by itself.

23
Q

degree of multiprogramming

A

The maximum number of processes that can reside in the ready state determines the degree of multiprogramming; e.g., if the degree of multiprogramming = 100, then at most 100 processes can reside in the ready state.

24
Q

Operation on the process:
Creation

A

Once the process has been created, it enters the ready queue (in main memory) and is prepared for execution.

25
Q

Operation on the process:
Scheduling

A

The operating system picks one process to begin executing from among the numerous processes that are currently in the ready queue. Scheduling is the process of choosing the next process to run.

26
Q

Operation on the process:
Execution

A

The processor begins running the process as soon as it is scheduled to run. During execution, a process may become blocked or wait, at which point the processor switches to executing the other processes.

27
Q

Operation on the process:
Killing or Deletion

A

The OS will terminate the process once its purpose has been fulfilled; the process’s context is then destroyed.

28
Q

Operation on the process:
Blocking

A

When a process is waiting for an event or resource, it is blocked. The operating system will place it in a blocked state, and it will not be able to execute until the event or resource becomes available.

29
Q

Operation on the process:
Resumption

A

When the event or resource that caused a process to block becomes available, the process is removed from the blocked state and added back to the ready queue.

30
Q

Operation on the process:
Context Switching

A

When the operating system switches from executing one process to another, it must save the current process’s context and load the context of the next process to execute. This is known as context switching.

31
Q

Operation on the process:
Inter-process communication

A

Processes may need to communicate with each other to share data or coordinate actions. The operating system provides mechanisms for inter-process communication, such as shared memory, message passing, and synchronization primitives.

32
Q

Operation on the process:
Process Synchronization

A

Multiple processes may need to access a shared resource or critical section of code simultaneously. The operating system provides synchronization mechanisms to ensure that only one process can access the resource or critical section at a time.

33
Q

Operation on the process:
Process status

A

Processes may be in one of several states, including ready, running, waiting, and terminated. The operating system manages the process states and transitions between them.

34
Q

A process can move from the running state to the ____ ______ if it needs to wait for a resource to become available.

A process can move from the _____ _____ to the ready state when the resource it was waiting for becomes available.

A

waiting state

35
Q

A process can move from the ____ ____ to the running state when it is selected by the operating system for execution.

The scheduling algorithm used by the operating system determines which process is selected to execute from the ____ _____

A

ready state

36
Q

The operating system may also move a process from the ____ _____ to the ready state to allow other processes to execute.

A process can move from the ____ _____ to the terminated state when it completes its execution.

A

running state

37
Q

A process can move from the waiting state directly to the terminated state if it is aborted or killed by the operating system or another process.

A process can go through the ready, running, and waiting states any number of times in its life cycle, but the new and terminated states occur only once.

A

The process state includes information about the program counter, CPU registers, memory allocation, and other resources used by the process.

The operating system maintains a process control block (PCB) for each process, which contains information about the process state, priority, scheduling information, and other process-related data.

The process state diagram is used to represent the transitions between different states of a process and is an essential concept in process management in operating systems.

38
Q

While creating a process the operating system performs several operations. To identify the processes, it assigns a process identification number (PID) to each process. As the operating system supports multi-programming, it needs to keep track of all the processes.

A

For this task, the process control block (PCB) is used to track the process’s execution status. Each PCB contains information about the process state, program counter, stack pointer, status of opened files, scheduling algorithms, etc. All this information must be saved when the process is switched from one state to another; when a process makes such a transition, the operating system must update the information in its PCB. A PCB thus contains information about the process, i.e. registers, quantum, priority, etc. The process table is an array of PCBs, meaning it logically contains a PCB for every current process in the system.

39
Q

Process Control Block:
Pointer
Process State
Process number
Program counter
Registers
Memory Limits
Open File Lists
Misc. Accounting and status data

A

Pointer – It is a stack pointer which is required to be saved when the process is switched from one state to another to retain the current position of the process.

Process state – It stores the respective state of the process.

Process number – Every process is assigned with a unique id known as process ID or PID which stores the process identifier.

Program counter – It stores the counter which contains the address of the next instruction that is to be executed for the process.

Registers – the CPU registers, which include the accumulator, base and index registers, and general-purpose registers.

Memory limits – this field contains information about the memory-management system used by the operating system, such as page tables or segment tables.

Open files list – This information includes the list of files opened for a process.
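
The fields listed above can be sketched as a record. Field names below mirror the card and are illustrative only, not any real kernel’s PCB layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block mirroring the fields above."""
    pid: int                        # process number (unique ID)
    state: str = "new"              # process state
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    memory_limits: tuple = (0, 0)   # e.g. (base, limit) of the address space
    open_files: list = field(default_factory=list)  # open file list
    stack_pointer: int = 0          # saved to retain the process's position

# The process table: logically, one PCB per current process.
process_table = {pcb.pid: pcb for pcb in (PCB(1), PCB(2), PCB(3))}
process_table[2].state = "ready"
```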

40
Q

Interrupt handling

A

The PCB also contains information about the interrupts that a process may have generated and how they were handled by the operating system.

41
Q

context switching

A

The process of switching from one process to another is called context switching. The PCB plays a crucial role in context switching by saving the state of the current process and restoring the state of the next process.
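
Context switching can be mimicked with dictionaries playing the role of the CPU and the PCBs — save the outgoing process’s “CPU state”, restore the incoming one’s. This is a simulation, not real register access; the register names and values are invented:

```python
# Simulated CPU registers and per-process PCBs (dicts stand in for hardware).
cpu = {"pc": 0, "acc": 0}
pcbs = {"p1": {"pc": 100, "acc": 7}, "p2": {"pc": 200, "acc": 3}}

def context_switch(cpu, pcbs, outgoing, incoming):
    """Save the outgoing process's CPU state into its PCB,
    then load the incoming process's saved state onto the CPU."""
    pcbs[outgoing].update(cpu)   # save current context
    cpu.update(pcbs[incoming])   # restore next context

cpu.update(pcbs["p1"])           # dispatch p1
cpu["pc"] += 5                   # p1 executes a few instructions
context_switch(cpu, pcbs, "p1", "p2")
print(cpu["pc"])                 # 200 -- p2 resumes where it left off
```

When p1 is later dispatched again, restoring its PCB would resume it at pc 105, exactly where it was preempted.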

42
Q

Real-time systems

A

Real-time operating systems may require additional information in the PCB, such as deadlines and priorities, to ensure that time-critical processes are executed in a timely manner.

43
Q

Virtual memory management

A

The PCB may contain information about a process’s virtual memory management, such as page tables and page fault handling.

44
Q

Inter-process communication

A

The PCB can be used to facilitate inter-process communication by storing information about shared resources and communication channels between processes.

45
Q

Fault tolerance

A

Some operating systems may use multiple copies of the PCB to provide fault tolerance in case of hardware failures or software errors.

46
Q

Advantages of Process table:
Efficient process management

A

The process table and PCB provide an efficient way to manage processes in an operating system. The process table contains all the information about each process, while the PCB contains the current state of the process, such as the program counter and CPU registers.

47
Q

Advantages of process table:
Resource management

A

The process table and PCB allow the operating system to manage system resources, such as memory and CPU time, efficiently. By keeping track of each process’s resource usage, the operating system can ensure that all processes have access to the resources they need.

48
Q

Advantages of process table:
Process synchronization

A

The process table and PCB can be used to synchronize processes in an operating system. The PCB contains information about each process’s synchronization state, such as its waiting status and the resources it is waiting for.

49
Q

Advantages of process table:
Process scheduling

A

The process table and PCB can be used to schedule processes for execution. By keeping track of each process’s state and resource usage, the operating system can determine which processes should be executed next.

50
Q

Disadvantages of process table:
Overhead and complexity

A

The process table and PCB can introduce overhead and reduce system performance. The operating system must maintain the process table and PCB for each process, which can consume system resources.

The process table and PCB can increase system complexity and make it more challenging to develop and maintain operating systems. The need to manage and synchronize multiple processes can make it more difficult to design and implement system features and ensure system stability.

51
Q

Disadvantages of process table:
Scalability and Security

A

The process table and PCB may not scale well for large-scale systems with many processes. As the number of processes increases, the process table and PCB can become larger and more difficult to manage efficiently.

The process table and PCB can introduce security risks if they are not implemented correctly. Malicious programs can potentially access or modify the process table and PCB to gain unauthorized access to system resources or cause system instability.

52
Q

Process Control Block:
Misc. accounting and status data

A

This field includes information about the amount of CPU used, time constraints, job or process numbers, etc. The process control block also stores the register contents, known as the execution context of the processor, saved when the process was blocked from running. This execution context enables the operating system to restore a process’s execution context when the process returns to the running state. When the process makes a transition from one state to another, the operating system updates the information in the process’s PCB. The operating system maintains pointers to each process’s PCB in a process table so that it can access the PCB quickly.

53
Q

To increase CPU utilization in multiprogramming, a memory management scheme known as swapping can be used. Swapping is the process of bringing a process into main memory and, after it has run for a while, temporarily copying it out to disk.

A

The purpose of swapping in an operating system is to move data between the hard disk and RAM so that application programs can use it. It is important to remember that swapping is used only when the data is not available in RAM. Although swapping degrades system performance, it allows larger processes, and more of them, to run concurrently. For this reason, swapping is also sometimes described as a technique for memory compaction.

54
Q

The CPU scheduler determines which processes are swapped in and which are swapped out. Consider a multiprogramming environment that employs a priority-based scheduling algorithm.

A

When a high-priority process enters the input queue, a low-priority process is swapped out so the high-priority process can be loaded and executed. When this process terminates, the low priority process is swapped back into memory to continue its execution.

55
Q

Swapping has been subdivided into two concepts: swap-in and swap-out.

A

Swap-out is a technique for moving a process from RAM to the hard disk.

Swap-in is a method of transferring a program from the hard disk to main memory (RAM).
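
Swap-out and swap-in can be sketched by moving PIDs between two sets standing in for RAM and the backing store. This is illustrative only; the PIDs and the priority scenario (from the previous card) are invented:

```python
ram = {"low_prio"}   # processes resident in main memory
disk = set()         # swapped-out processes on the backing store

def swap_out(pid):
    ram.remove(pid); disk.add(pid)   # RAM -> hard disk

def swap_in(pid):
    disk.remove(pid); ram.add(pid)   # hard disk -> RAM

# Priority-based example: a high-priority process arrives, so the
# low-priority process is swapped out to make room for it.
swap_out("low_prio")
ram.add("high_prio")      # load and execute the high-priority process
ram.remove("high_prio")   # it terminates...
swap_in("low_prio")       # ...and the low-priority process is swapped back
```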

56
Q

Advantages of swapping

A

If main memory is low, some processes may otherwise have to wait a long time; with swapping, processes do not have to wait as long for execution on the CPU.

It improves utilization of main memory.

Using only a single main memory, the CPU can run multiple processes with the help of a swap partition.

The concept of virtual memory starts here, putting memory to better use.

It can be combined with priority-based scheduling to optimize process execution.

57
Q

Disadvantages of swapping

A

If main memory is scarce, the user is executing too many processes, and the system power suddenly goes off, data may be erased for the processes that were taking part in swapping.

The number of page faults may increase.

Processing performance is reduced.

58
Q

In a single-tasking operating system, only one process occupies the user program area of memory, and it remains in memory until the process is completed.

A

When all of the active processes in a multitasking operating system cannot fit in main memory at once, a process is swapped out of main memory so that other processes can enter it.

59
Q

Compaction is a technique to collect all the free memory present in form of fragments into one large chunk of free memory, which can be used to run other processes.

A

It does that by moving all the processes towards one end of the memory and all the available free space towards the other end of the memory so that it becomes contiguous.
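
Compaction can be illustrated on a list of memory segments: move all occupied segments toward one end so the free fragments merge into one contiguous chunk. The segment sizes below are arbitrary:

```python
def compact(segments):
    """segments: list of ("occupied" | "free", size_mb) in memory order.
    Returns occupied segments first, then one merged free segment."""
    occupied = [s for s in segments if s[0] == "occupied"]
    free_total = sum(size for kind, size in segments if kind == "free")
    return occupied + [("free", free_total)]

before = [("occupied", 3), ("free", 1), ("occupied", 6),
          ("occupied", 7), ("free", 2)]
after = compact(before)
# after: occupied 3, 6, 7 at one end, then a single contiguous 3 MB free chunk
```

The scattered 1 MB and 2 MB fragments, individually useless to a 3 MB request, become one allocatable 3 MB hole.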

60
Q

It is not always easy to do compaction. Compaction can be done only when the relocation is dynamic and done at execution time.

A

Compaction can not be done when relocation is static and is performed at load time or assembly time.

61
Q

Before compaction, the main memory has some free space between occupied space. This condition is known as external fragmentation. Due to less free space between occupied spaces, large processes cannot be loaded into them.

After compaction, all the occupied space has been moved up and the free space at the bottom. This makes the space contiguous and removes external fragmentation. Processes with large memory requirements can be now loaded into the main memory.

A

Main Memory (before compaction):
Occupied space
Free space
Occupied space
Occupied space
Free space

Main Memory (after compaction):
Occupied space
Occupied space
Occupied space
Free space
Free space

62
Q

Purpose of compaction in operating system

A

While allocating memory to a process, the operating system often faces a situation where there is a sufficient total amount of free memory to satisfy the process’s demand, yet the request cannot be fulfilled because the free memory is non-contiguous. This problem is referred to as external fragmentation, and the compaction technique is used to solve it.

63
Q

Issues with compaction

A

Although the compaction technique is very useful for making memory utilization efficient and reducing external fragmentation, a large amount of time is wasted in the process, during which the CPU sits idle, reducing the efficiency of the system.

64
Q

Advantages of compaction

A

Reduces external fragmentation.

Makes memory usage efficient.

Memory becomes contiguous.

Since memory becomes contiguous, more processes can be loaded into memory.

65
Q

Disadvantages of compaction

A

System efficiency reduces.

A huge amount of time is wasted in performing compaction.

CPU sits idle for a long time.

Not always easy to perform compaction.

66
Q

Internal fragmentation

A

Internal fragmentation happens when memory is split into fixed-sized blocks. Whenever a process requests memory, a fixed-sized block is allotted to it. When the memory allotted to the process is somewhat larger than the memory requested, the difference between allotted and requested memory is called internal fragmentation. Fixing the sizes of the memory blocks is what causes this issue; it can be avoided by using dynamic partitioning to allot space to processes.

67
Q

External fragmentation

A

External fragmentation happens when there is a sufficient total quantity of free memory to satisfy a process’s request, but the request cannot be fulfilled because the free memory is non-contiguous. Both the first-fit and best-fit memory allocation strategies suffer from external fragmentation.

68
Q

Non-contiguous allocation

A

Also known as dynamic or linked allocation, this is a memory allocation technique used in operating systems to allocate memory to processes that do not require a contiguous block of memory. In this technique, each process is allocated a series of non-contiguous blocks of memory that can be located anywhere in physical memory.

Non-contiguous allocation involves the use of pointers to link the non-contiguous memory blocks allocated to a process. These pointers are used by the operating system to keep track of the memory blocks allocated to the process and to locate them during the execution of the process.
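
The pointer-linked blocks described above can be sketched as a chain of block numbers, each block carrying a “next” pointer. This is a toy model — real systems track these links in kernel tables, and the block numbers and data here are invented:

```python
# A process's memory as non-contiguous blocks linked by "next" pointers.
# blocks maps a block number to (data, next_block); None ends the chain.
blocks = {
    5: ("chunk-A", 12),
    12: ("chunk-B", 3),
    3: ("chunk-C", None),
}

def walk(blocks, head):
    """Follow the pointers from the head block and collect the data."""
    data, cur = [], head
    while cur is not None:
        chunk, cur = blocks[cur]
        data.append(chunk)
    return data

print(walk(blocks, 5))  # ['chunk-A', 'chunk-B', 'chunk-C']
```

Note the blocks sit at arbitrary, non-adjacent locations (5, 12, 3), yet the chain reconstructs the process’s data in order — and each traversal step is the pointer-chasing overhead the next card mentions.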

69
Q

advantages to non-contiguous allocation

A

First, it reduces internal fragmentation since memory blocks can be allocated as needed, regardless of their physical location. Second, it allows processes to be allocated memory in a more flexible and efficient manner since the operating system can allocate memory to a process wherever free memory is available.

70
Q

non-contiguous allocation also has some disadvantages

A

It can lead to external fragmentation, where the available memory is broken into small, non-contiguous blocks, making it difficult to allocate large blocks of memory to a process. Additionally, the use of pointers to link memory blocks can introduce additional overhead, leading to slower memory allocation and deallocation times.

71
Q

The main memory is central to the operation of a Modern Computer.

A

Main memory is a large array of words or bytes, ranging in size from hundreds of thousands to billions. It is a repository of rapidly available information shared by the CPU and I/O devices, and it is where programs and data are kept while the processor is actively using them. Because main memory is closely associated with the processor, moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile: RAM loses its data when a power interruption occurs.

72
Q

In a multiprogramming computer, the operating system resides in a part of memory and the rest is used by multiple processes. The task of subdividing the memory among different processes is called memory management.

A

Memory management is a method in the operating system to manage operations between main memory and disk during process execution. The main aim of memory management is to achieve efficient utilization of memory.

73
Q

Why Memory Management is Required?

A

To allocate and de-allocate memory before and after process execution.

To keep track of the memory space used by processes.

To minimize fragmentation issues.

To utilize main memory properly.

To maintain data integrity during process execution.

74
Q

Logical address space

A

An address generated by the CPU is known as a “Logical Address”. It is also known as a Virtual address. Logical address space can be defined as the size of the process. A logical address can be changed.

75
Q

Physical address space

A

An address seen by the memory unit (i.e., the one loaded into the memory address register) is commonly known as a “Physical Address”, also called a real address. The set of all physical addresses corresponding to the logical addresses is known as the physical address space. Physical addresses are computed by the MMU: the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.

76
Q

static loading

A

Static Loading is basically loading the entire program into a fixed address. It requires more memory space.

77
Q

dynamic loading

A

The entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To improve memory utilization, dynamic loading is used: a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. One advantage of dynamic loading is that a routine that is never used is never loaded, which is useful when large amounts of code are needed only to handle infrequently occurring cases.

78
Q

static linking

A

In static linking, the linker combines all necessary program modules into a single executable program. So there is no runtime dependency. Some operating systems support only static linking, in which system language libraries are treated like any other object module.

79
Q

dynamic linking

A

The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, “Stub” is included for each appropriate library routine reference. A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory or not. If not available then the program loads the routine into memory.

80
Q

swapping

A

When a process is executed, it must reside in memory. Swapping is the act of temporarily moving a process from main memory to secondary memory (main memory being fast compared to secondary memory). Swapping allows more processes to fit in memory and run at one time. The main cost of swapping is transfer time, which is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process, then load and execute the higher-priority one. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.

81
Q

Multiple partition allocation

A

In this method, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for other processes.

82
Q

Fixed partition allocation

A

In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a “hole”. When a process arrives and needs memory, we search for a hole that is large enough to store it. If one is found, memory is allocated to the process and the rest is kept available to satisfy future requests. While allocating memory, the dynamic storage-allocation problem arises: how to satisfy a request of size n from a list of free holes.
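
The hole search described above can be sketched with a first-fit scan over a free-hole list. First-fit is one of several textbook strategies for the dynamic storage-allocation problem; the hole sizes below are arbitrary:

```python
def first_fit(holes, request):
    """Return the index of the first hole >= request, or None if no
    single hole is large enough. holes is a list of free-hole sizes (MB)."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

holes = [2, 6, 3, 10]
i = first_fit(holes, 5)   # picks the 6 MB hole (index 1)
holes[i] -= 5             # the remaining 1 MB of that hole stays free
```

That leftover 1 MB sliver is exactly the kind of small hole that accumulates into fragmentation, as the next cards describe.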

83
Q

Fragmentation

A

Fragmentation occurs when processes are loaded into and removed from memory after execution, creating small free holes. These holes cannot be assigned to new processes because they are not combined and do not satisfy the processes’ memory requirements. To maintain the degree of multiprogramming, we must reduce this waste of memory. Operating systems exhibit two types of fragmentation:
Internal
External

84
Q

Internal fragmentation

A

Internal fragmentation occurs when the memory block allocated to a process is larger than its requested size, leaving some unused space inside the block. Example: suppose fixed partitioning is used and memory has blocks of sizes 3 MB, 6 MB, and 7 MB. A new process p4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.
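
The 3 MB / 2 MB example above, in code — with fixed-sized blocks, the wasted space inside the allocated block is the internal fragmentation (the numbers are taken from the card):

```python
def internal_fragmentation(block_mb, request_mb):
    """Wasted space when a fixed-sized block serves a smaller request."""
    assert block_mb >= request_mb, "block too small for the request"
    return block_mb - request_mb

# p4 requests 2 MB and receives the fixed 3 MB block:
waste = internal_fragmentation(3, 2)
print(waste)  # 1 (MB wasted inside the block)
```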

85
Q

External fragmentation

A

occurs when there is enough total free memory to satisfy a request, but it cannot be assigned because the free blocks are not contiguous. Example (continuing the above): processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB, and are allocated the 3 MB, 6 MB, and 7 MB blocks respectively, leaving 1 MB free after p1 and 2 MB free after p2. Now a new process p4 arrives demanding a 3 MB block of memory. That much free memory exists in total, but it cannot be assigned because the free space is not contiguous. This is external fragmentation.

86
Q

paging

A

a memory management scheme that eliminates the need for a contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.

Logical Address or Virtual Address (represented in bits): An address generated by the CPU

Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all logical addresses generated by a program

Physical Address (represented in bits): An address actually available on a memory unit

Physical Address Space (represented in words or bytes): The set of all physical addresses corresponding to the logical addresses
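
A minimal sketch of the logical-to-physical translation that paging performs (the 4 KiB page size and the tiny page table are assumed values for illustration):

```python
# Under paging, a logical address splits into (page number, offset); the
# page table maps page number -> frame number, and the physical address
# is frame * page_size + offset.
PAGE_SIZE = 4096  # 4 KiB pages: a common, assumed value

def translate(logical_addr, page_table):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]           # a missing entry models a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}              # page 0 -> frame 5, page 1 -> frame 2
print(translate(4100, page_table))     # page 1, offset 4 -> 2*4096 + 4 = 8196
```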

87
Q

Process management

A

refers to the techniques and strategies used by organizations to design, monitor, and control their business processes to achieve their goals efficiently and effectively. It involves identifying the steps involved in completing a task, assessing the resources required for each step, and determining the best way to execute the task.

88
Q

Process management can help organizations improve their operational efficiency, reduce costs, increase customer satisfaction, and maintain compliance with regulatory requirements. It involves analyzing the performance of existing processes, identifying bottlenecks, and making changes to optimize the process flow.

A

Process management includes various tools and techniques such as process mapping, process analysis, process improvement, process automation, and process control. By applying these tools and techniques, organizations can streamline their processes, eliminate waste, and improve productivity.

89
Q

If the operating system supports multiple users, the services in this category are very important. The operating system has to keep track of all processes, schedule them, and dispatch them one after another, while each user feels that they have full control of the CPU.

Some of the system calls in this category are as follows.

A

Create a child process identical to the parent
Terminate a process
Wait for a child process to terminate
Change the priority of the process
Block the process
Ready the process
Dispatch a process
Suspend a process
Resume a process
Delay a process
Fork a process
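
Several of the calls above (create a child identical to the parent, wait for a child to terminate, fork) map directly onto POSIX fork/wait. A minimal sketch in Python, POSIX-only (`os.fork` does not exist on Windows); the exit status 7 is an arbitrary illustrative value:

```python
import os

def spawn_and_wait():
    """Fork a child, have it exit with status 7, and reap it in the parent."""
    pid = os.fork()
    if pid == 0:
        # Child: an identical copy of the parent from this point on.
        os._exit(7)                    # terminate the child with status 7
    # Parent: block until the child terminates, then decode its status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(spawn_and_wait())                # 7 on POSIX systems
```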

90
Q

Explanation of process

A

Text Section: contains the compiled program code; the process's current activity is represented by the value of the Program Counter.
Stack: contains temporary data such as function parameters, return addresses, and local variables.
Data Section: contains the global variables.
Heap Section: memory dynamically allocated to the process during its run time.

91
Q

Attributes or characteristics of a process

A

Process Id: A unique identifier assigned by the operating system

Process State: Can be ready, running, etc.

CPU registers: Like the Program Counter (CPU registers must be saved and restored when a process is swapped in and out of the CPU)

Accounting information: Amount of CPU used for process execution, time limits, execution ID, etc

I/O status information: For example, devices allocated to the process, open files, etc

CPU scheduling information: For example, Priority (Different processes may have different priorities, for example, a shorter process assigned high priority in the shortest job first scheduling)
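
These attributes are what the operating system collects into the Process Control Block. A sketch in Python; the field names here are illustrative, not taken from any particular kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block: one record per process."""
    pid: int                                          # process ID
    state: str = "New"                                # process state
    program_counter: int = 0                          # saved PC
    registers: dict = field(default_factory=dict)     # saved CPU registers
    priority: int = 0                                 # scheduling information
    open_files: list = field(default_factory=list)    # I/O status information
    cpu_time_used: float = 0.0                        # accounting information

pcb = PCB(pid=42)
pcb.state = "Ready"      # the OS updates the state as the process moves along
print(pcb.pid, pcb.state)   # 42 Ready
```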

92
Q

States of process

A

New: Newly Created Process (or) being-created process.

Ready: After creation, the process moves to the ready state, i.e. the process is ready for execution.

Run: Currently running process in CPU (only one process at a time can be under execution in a single processor)

Wait (or Block): When a process requests I/O access.

Complete (or Terminated): The process completed its execution.

Suspended Ready: When the ready queue becomes full, some processes are moved to a suspended ready state

Suspended Block: When the waiting queue becomes full.

93
Q

Context switching

A

The process of saving the context of one process and loading the context of another is known as context switching. In simple terms, it is like unloading one process from the CPU (moving it from the running state back to the ready state) and loading another in its place.

94
Q

when does context switching happen?

A
  1. When a high-priority process comes to a ready state (i.e. with higher priority than the running process)
  2. An Interrupt occurs
  3. User and kernel-mode switch (It is not necessary though)
  4. Preemptive CPU scheduling is used.
95
Q

Context switch vs mode switch

A

A mode switch occurs when the CPU privilege level changes, for example when a system call is made or a fault occurs. The kernel runs in a more privileged mode than a standard user task; if a user process wants to access things that only the kernel may access, a mode switch must occur. The currently executing process need not change during a mode switch. A mode switch must typically occur before a process context switch can occur. Only the kernel can cause a context switch.

96
Q

CPU-Bound vs I/O-Bound Processes

A

A CPU-bound process requires more CPU time or spends more time in the running state. An I/O-bound process requires more I/O time and less CPU time. An I/O-bound process spends more time in the waiting state.

Process scheduling is an integral part of process management in the operating system. It refers to the mechanism used by the operating system to determine which process runs next. The goal of process scheduling is to improve overall system performance by maximizing CPU utilization, minimizing turnaround time, and improving system response time.

97
Q

Scheduling algorithm:
first-come, first served (FCFS)

A

This is the simplest scheduling algorithm, where the process is executed on a first-come, first-served basis. FCFS is non-preemptive, which means that once a process starts executing, it continues until it is finished or waiting for I/O.
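
A minimal sketch of FCFS, assuming all processes arrive at time 0 in queue order: each process waits for the total burst time of everyone ahead of it.

```python
# FCFS waiting times: process i waits for the sum of the bursts of
# processes 0..i-1 (all arrivals assumed at time 0).
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # this process starts when the others finish
        elapsed += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27]
```

Note how a long first burst (24) makes every later process wait: the classic convoy effect of FCFS.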

98
Q

Scheduling algorithm:
Shortest Job First (SJF)

A

SJF is a scheduling algorithm that selects the process with the shortest burst time; in its basic form it is non-preemptive (the preemptive variant is Shortest Remaining Time First). The burst time is the time a process takes to complete its execution. SJF minimizes the average waiting time of processes.
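
A sketch of non-preemptive SJF, assuming all processes arrive at time 0: sort by burst time, then accumulate waiting times as in FCFS.

```python
# Non-preemptive SJF (all arrivals at time 0): running the shortest
# bursts first minimizes the average waiting time.
def sjf_average_wait(bursts):
    elapsed, total_wait = 0, 0
    for burst in sorted(bursts):   # shortest job first
        total_wait += elapsed      # this job waited for all shorter ones
        elapsed += burst
    return total_wait / len(bursts)

print(sjf_average_wait([6, 8, 7, 3]))   # 7.0  (waits are 0, 3, 9, 16)
```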

99
Q

Scheduling algorithm:
Round Robin (RR)

A

RR is a preemptive scheduling algorithm that gives each process a fixed time slice (quantum) in turn. If a process does not complete its execution within its quantum, it is preempted and added to the end of the ready queue. RR ensures a fair distribution of CPU time to all processes and avoids starvation.
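
A sketch of the Round Robin mechanics, assuming all processes arrive at time 0 and ignoring context-switch overhead:

```python
from collections import deque

# Round Robin: each process runs for at most `quantum` time units; if it
# still has work left, it is preempted and sent to the back of the queue.
def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))   # (pid, remaining burst time)
    finish, clock = {}, 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))   # preempted, requeued
        else:
            finish[pid] = clock                    # completion time
    return finish

print(round_robin([5, 3, 1], quantum=2))   # {2: 5, 1: 8, 0: 9}
```

The short 1-unit job (pid 2) finishes early instead of starving behind the 5-unit job, which is exactly the fairness property RR is chosen for.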

100
Q

Scheduling algorithm:
Priority Scheduling

A

This scheduling algorithm assigns priority to each process and the process with the highest priority is executed first. Priority can be set based on process type, importance, or resource requirements.
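
The ready queue of a priority scheduler is naturally a min-heap. A sketch using Python's `heapq`, assuming the common convention that a lower number means a higher priority (the process names are made up for illustration):

```python
import heapq

# Priority scheduling: always dispatch the highest-priority ready process.
def priority_order(processes):
    """processes: list of (priority, name); return names in dispatch order."""
    heap = list(processes)
    heapq.heapify(heap)            # min-heap keyed on priority
    order = []
    while heap:
        _, name = heapq.heappop(heap)   # pop the highest-priority process
        order.append(name)
    return order

print(priority_order([(3, "logger"), (1, "shell"), (2, "editor")]))
# ['shell', 'editor', 'logger']
```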

101
Q

Scheduling algorithm:
Multilevel queue

A

This scheduling algorithm divides the ready queue into several separate queues, each queue having a different priority. Processes are queued based on their priority, and each queue uses its own scheduling algorithm. This scheduling algorithm is useful in scenarios where different types of processes have different priorities.

102
Q

Advantages of Process management

A

Improved Efficiency: Process management can help organizations identify bottlenecks and inefficiencies in their processes, allowing them to make changes to streamline workflows and increase productivity.

Cost Savings: By identifying and eliminating waste and inefficiencies, process management can help organizations reduce costs associated with their business operations.

Improved Quality: Process management can help organizations improve the quality of their products or services by standardizing processes and reducing errors.

Increased Customer Satisfaction: By improving efficiency and quality, process management can enhance the customer experience and increase satisfaction.

Compliance with Regulations: Process management can help organizations comply with regulatory requirements by ensuring that processes are properly documented, controlled, and monitored.

103
Q

Disadvantages of process management

A

Time and Resource Intensive: Implementing and maintaining process management initiatives can be time-consuming and require significant resources.

Resistance to Change: Some employees may resist changes to established processes, which can slow down or hinder the implementation of process management initiatives.

Overemphasis on Process: Overemphasis on the process can lead to a lack of focus on customer needs and other important aspects of business operations.

Risk of Standardization: Standardizing processes too much can limit flexibility and creativity, potentially stifling innovation.

Difficulty in Measuring Results: Measuring the effectiveness of process management initiatives can be difficult, making it challenging to determine their impact on organizational performance.

104
Q

Multiprogramming in an operating system: as the name suggests, multi means more than one and programming means the execution of programs. When more than one program can reside in memory and execute under an operating system, it is termed a multiprogramming operating system.

A

Before the concept of multiprogramming, computing did not use the CPU efficiently: the CPU executed only one program at a time, and whenever that program entered a waiting state for an input/output operation, the CPU sat idle. This underutilization of the CPU led to poor performance. Multiprogramming addresses this issue.

105
Q

Features of multiprogramming

A

Needs only a single CPU.

Context switches between processes.

Switching happens when the current process enters a waiting state.

CPU idle time is reduced.

High resource utilization.

High performance.

106
Q

Disadvantages of multiprogramming

A

Prior knowledge of scheduling algorithms is required.
If there are a large number of jobs, long-running jobs will have to wait a long time.

Memory management is needed, because all the tasks are stored in main memory.

Using multiprogramming to a large extent can cause the system to heat up.

107
Q

Multitasking type:
Preemptive scheduling algorithm

A

In preemptive scheduling, the operating system can take the CPU away from a running process, for example when a higher-priority process becomes ready or when the running process's time slice expires; the preempted process is moved back to the ready queue.

108
Q

Multitasking type:
Non-preemptive scheduling algorithm

A

In non-preemptive scheduling, once a process is allocated the CPU it keeps the CPU until it terminates or blocks (for example, for I/O); the operating system does not take the CPU away from it.

109
Q

In a multiprogramming system, multiple programs are stored in memory, and each program is given a specific portion of memory; a program in execution is known as a process. The operating system handles all these processes and their states. Before execution, the operating system selects a ready process by checking which process should run next. While the chosen process runs on the CPU, it may need an input/output operation; at that point the process leaves the CPU (and may be temporarily moved out of main memory to secondary storage) for the I/O operation, and the CPU switches to the next ready process.

A

When the process that went for the I/O operation returns after completing its work, the CPU eventually switches back to it. This switching happens so fast and so frequently that it creates an illusion of simultaneous execution.

110
Q

Process creation

A
  1. When a new process is created, the operating system assigns it a unique Process Identifier (PID) and inserts a new entry in the primary process table.
  2. Required memory space for all the elements of the process, such as the program, data, and stack, is then allocated, including space for its Process Control Block (PCB).
  3. Next, the various values in the PCB are initialized:

The process identification part is filled with the PID assigned in step 1, along with its parent's PID.

The processor register values are mostly filled with zeroes, except for the stack pointer and program counter. The stack pointer is filled with the address of the stack allocated in step 2, and the program counter is filled with the address of the program's entry point.

The process state information is set to 'New'.

Priority is lowest by default, but the user can specify a priority during creation.

  4. The operating system then links this process into the scheduling queue and changes its state from 'New' to 'Ready'. The process is now competing for the CPU.
  5. Additionally, the operating system creates some other data structures, such as log files or accounting files, to keep track of process activity.
111
Q

Process Deletion

A

Processes terminate themselves when they finish executing their last statement; the operating system then uses the exit() system call to delete the process's context. All the resources held by that process, such as physical and virtual memory, I/O buffers, and open files, are reclaimed by the operating system. A process P can also be terminated either by the operating system or by P's parent process.

A parent may terminate a process due to one of the following reasons:

When the task assigned to the child is no longer required.

When the child has taken more resources than its limit.

The parent process is exiting; as a result, all its children are also deleted. This is called cascaded termination.