Operating System Concepts Flashcards

1
Q

What is an Operating System (OS)?

A

An OS is system software that manages computer hardware and software resources and provides common services for computer programs.

2
Q

Describe User Mode and Kernel Mode.

A

User Mode and Kernel Mode are the two modes of operation of the CPU. User Mode is restricted: code cannot execute privileged instructions or access hardware directly, and can only touch its own address space. Kernel Mode has full access to all hardware and all memory.

3
Q

Define Cache

A

A cache is a hardware or software component that stores data so that future requests for that data can be served faster.

4
Q

What is an Interrupt?

A

An interrupt is a signal to the processor indicating an event needing immediate attention.

5
Q

What is a System Call?

A

A system call is the programmatic way a computer program requests a service from the kernel of the operating system.

6
Q

Define a Process.

A

A process is a program in execution, with its own dedicated system resources.

7
Q

Define a Program.

A

A program is a sequence of instructions that tell a computer what tasks to perform.

8
Q

What is Address Space?

A

Address Space is a range of valid addresses in memory that a process or thread can use.

9
Q

What is Multiprogramming?

A

Multiprogramming is the rapid switching of the CPU among multiple processes held in memory. It allows several programs to share a single CPU and run in (pseudo) parallel, and it increases efficiency by keeping the CPU busy whenever there is work to do.

10
Q

Explain Process Creation.

A

Process creation is the act of creating a new process. This is typically done by a parent process creating a child process.

System call in UNIX: fork()
Creates a near-exact clone of the calling process:
- Separate address space containing a copy of the parent's memory image (often implemented with copy-on-write)
- Same program counter and registers at the moment of the call
- Same open files, shared with the parent

11
Q

Explain Process Termination.

A

Process termination occurs when a process completes its execution or is explicitly killed.

Typical conditions which terminate a process:
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)

12
Q

What are the different Process States?

A

A process may be in one of three states:
1. Running: currently using the CPU
2. Ready: runnable, but temporarily stopped while another process runs
3. Blocked: unable to run until some external event (such as I/O completion) happens

Possible transitions:
running -> blocked (the process blocks, e.g. waiting for I/O)
running -> ready (the scheduler preempts the process)
ready -> running (the scheduler dispatches the process)
blocked -> ready (the awaited event occurs)
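
The states and transitions above can be sketched as a small lookup table (the event names are illustrative labels, not OS API names):

```python
# The legal state transitions, as a lookup table.
# Keys are (current state, event); values are the resulting state.
TRANSITIONS = {
    ("running", "blocks on I/O"):        "blocked",
    ("running", "scheduler preempts"):   "ready",
    ("ready",   "scheduler dispatches"): "running",
    ("blocked", "event completes"):      "ready",
}

def next_state(state: str, event: str) -> str:
    """Return the next process state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state!r} on {event!r}") from None

print(next_state("running", "blocks on I/O"))  # blocked
```

Note there is no blocked -> running edge: a blocked process must first become ready and be dispatched by the scheduler.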

13
Q

Define Threads.

A

A thread is the smallest sequence of programmed instructions that can be managed independently by the scheduler, which is part of the operating system. Threads are lighter-weight than processes and share the same memory space, which lets them communicate with each other more easily. This is useful when you want a program to perform multiple tasks concurrently.

14
Q

Compare Processes vs Threads.

A

Both are independent sequences of execution. The typical difference is that threads run in a shared memory space, while processes run in separate memory spaces.

15
Q

What is Multi-threading?

A

Multi-threading is the ability of a central processing unit (CPU) to provide multiple threads of execution concurrently.

16
Q

What is Interprocess Communication (IPC)?

A

IPC is a set of methods for the exchange of data among multiple threads in one or more processes.

17
Q

Define Race Conditions.

A

A race condition occurs when two or more threads can access shared data and they try to change it at the same time.

18
Q

What is a Critical Region?

A

A critical region is a piece of code in a process during which the process is accessing a shared resource, like a data structure, a peripheral device, or a network connection.

19
Q

Explain Mutual Exclusion.

A

Mutual Exclusion is a property of concurrency control, which is instituted for the purpose of preventing race conditions.

20
Q

What is Busy Waiting?

A

Busy waiting is a method where a process repeatedly checks to see if a condition is true, such as whether keyboard input or a lock is available.

21
Q

Define Blocking in OS.

A

Blocking is a state where a process is waiting for an event such as an I/O operation to complete.

22
Q

What is a Mutex?

A

A mutex (mutual exclusion lock) is a program object that allows multiple program threads to share the same resource, but not simultaneously.
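
A short Python sketch using `threading.Lock` as the mutex: four threads increment a shared counter, and the lock serializes each read-modify-write so no updates are lost.

```python
import threading

# Sketch: a shared counter protected by a mutex (threading.Lock).
# Without the lock, concurrent read-modify-write of `counter` could
# interleave and lose updates (a race condition).
counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:          # only one thread may hold the mutex at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 — no updates lost
```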

23
Q

Explain Deadlocks.

A

Deadlocks occur when two or more processes are unable to proceed because each is waiting for the other to release resources.

24
Q

What is a Process Scheduler?

A

The process scheduler is the OS component that selects one of the processes in the ready state and allocates the CPU to it.

25
Q

Explain Preemption in OS.

A

Preemption is the ability of the operating system to interrupt a currently running process and return it to the ready state so that another process can run.

26
Q

Define Context Switch.

A

Context Switching is the procedure that a computer’s CPU (central processing unit) follows to change from one task (or process) to another while ensuring that the tasks do not conflict.

27
Q

What are Scheduling Algorithms?

A

Scheduling algorithms are the policies the OS uses to decide which ready process runs next. Examples include priority scheduling, shortest job first, and round robin.

28
Q

Explain Nonpreemptive Scheduling.

A

In nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or switching to the waiting state.

30
Q

What is Preemptive Scheduling?

A

In preemptive scheduling, if a new process arrives with a higher priority, the scheduler can preempt the currently executing process and allocate the CPU to the higher-priority process.

31
Q

What is Memory Management?

A

Memory Management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize overall system performance.

32
Q

Explain Memory Abstraction.

A

Memory abstraction frees programmers from worrying about the amount of physical memory available or what is currently stored in it. It is typically implemented by giving each process its own address space, with the OS and hardware mapping those addresses onto physical memory.

33
Q

What is a Memory Management Unit (MMU)?

A

The MMU is a computer hardware unit that translates virtual addresses to physical addresses and handles the memory-protection and caching operations associated with the processor.

34
Q

What is Virtual Memory?

A

Virtual Memory is a memory management capability of an operating system that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily transferring data from RAM to disk storage.

35
Q

What is Paging?

A

Paging is a memory-management mechanism the OS uses to bring processes from secondary storage into main memory in fixed-size blocks called pages. Each process is divided into pages, and main memory is divided into equally sized frames into which pages are loaded.

36
Q

What does TLB stand for and its role?

A

TLB stands for Translation Lookaside Buffer. It is a memory cache that stores recent translations of virtual memory to physical addresses for faster retrieval.

37
Q

What is a Page Fault?

A

A page fault occurs when a program attempts to access data or code that is in its address space, but is not currently located in the system RAM.

38
Q

Explain a Page Replacement Algorithm.

A

Page Replacement Algorithms decide which memory pages to page out when a page of memory needs to be allocated. They are critical in determining the effectiveness of virtual memory systems.

39
Q

What is the focus of Security & Cryptography in operating systems?

A

It involves techniques for ensuring that data stored in a system cannot be read or compromised by any individuals without authorization.

40
Q

Define Encryption/Decryption.

A

Encryption is the process of translating plain text data into something that appears to be random and meaningless (ciphertext). Decryption is the process of converting ciphertext back to its original format.

41
Q

What is Symmetric Cryptography? Give examples.

A

Symmetric Cryptography is a type of encryption where a single key is used for encryption and decryption. Examples are AES (Advanced Encryption Standard) and DES (Data Encryption Standard).

42
Q

Define ECB and CBC in the context of cryptography.

A

ECB (Electronic Code Book) and CBC (Cipher Block Chaining) are modes of operation for a block cipher. In ECB, each block of plaintext is encrypted independently, so identical plaintext blocks produce identical ciphertext blocks; in CBC, each block is XORed with the previous ciphertext block before encryption, which hides such patterns.
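
A toy sketch of the two modes in Python, using a single-byte XOR in place of a real block cipher (for illustration only — this is not secure):

```python
# Toy illustration of ECB vs CBC using a single-byte XOR "cipher".
# This is NOT real cryptography — it only shows how the modes chain blocks.
BLOCK = 4  # toy block size in bytes

def toy_encrypt(block: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in block)

def ecb_encrypt(plaintext: bytes, key: int) -> bytes:
    # ECB: every block is encrypted independently, so
    # identical plaintext blocks yield identical ciphertext blocks.
    blocks = [plaintext[i:i+BLOCK] for i in range(0, len(plaintext), BLOCK)]
    return b"".join(toy_encrypt(b, key) for b in blocks)

def cbc_encrypt(plaintext: bytes, key: int, iv: bytes) -> bytes:
    # CBC: each block is XORed with the previous ciphertext block
    # (the IV for the first block) before encryption.
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(p ^ c for p, c in zip(plaintext[i:i+BLOCK], prev))
        prev = toy_encrypt(mixed, key)
        out.append(prev)
    return b"".join(out)

msg = b"AAAAAAAA"  # two identical plaintext blocks
ecb = ecb_encrypt(msg, key=0x5A)
cbc = cbc_encrypt(msg, key=0x5A, iv=b"\x01\x02\x03\x04")
print(ecb[:BLOCK] == ecb[BLOCK:])  # True  — ECB leaks the repetition
print(cbc[:BLOCK] == cbc[BLOCK:])  # False — CBC hides it
```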

43
Q

Explain Asymmetric Cryptography.

A

Asymmetric cryptography, also known as public key cryptography, uses different keys for encryption and decryption. An example is the RSA algorithm.

44
Q

What is Key Exchange?

A

Key exchange is a method in cryptography by which cryptographic keys are exchanged between two parties, allowing use of a cryptographic algorithm.

45
Q

What are Digital Signatures?

A

A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents.

46
Q

Define Cryptographic Hashing.

A

A cryptographic hash function is a special class of hash function with properties such as preimage resistance and collision resistance that make it suitable for use in cryptography.

47
Q

What does Secure Communication entail?

A

Secure Communication involves encrypting the data that is sent between two systems to prevent potential eavesdropping.

48
Q

What is a Message Authentication Code (MAC)?

A

A MAC is a short piece of information used to authenticate a message and to provide integrity and authenticity assurances on the message.

49
Q

Explain the Diffie-Hellman Key Exchange.

A

The Diffie-Hellman Key Exchange is a method of securely exchanging cryptographic keys over a public channel, allowing two parties, each having a public-private key pair, to establish a shared secret over an insecure channel.
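
A toy sketch with deliberately tiny, insecure parameters (real deployments use large primes or elliptic curves):

```python
# Toy Diffie-Hellman with tiny, insecure parameters (illustration only).
p, g = 23, 5          # public prime modulus and generator

a = 6                  # Alice's private value (secret)
b = 15                 # Bob's private value (secret)

A = pow(g, a, p)       # Alice sends A = g^a mod p over the public channel
B = pow(g, b, p)       # Bob sends   B = g^b mod p

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p
print(shared_alice, shared_bob)  # both sides derive the same secret
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from those requires solving the discrete logarithm problem, which is infeasible for properly sized parameters.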

50
Q

What are Digital Certificates and Public Key?

A

A digital certificate is a digital form of identification, while a public key is a large numerical value that is used to encrypt data. The certificate associates the public key with an entity that holds the corresponding private key.

51
Q

What is Transport Layer Security (TLS)?

A

TLS is a cryptographic protocol designed to provide communications security over a computer network. It is used to secure web traffic, preventing tampering, eavesdropping and message forgery.

52
Q

Explain the Handshake Protocol in the context of networking.

A

The Handshake Protocol is part of the SSL/TLS protocol for initiating a secure connection. It allows the server and client to authenticate each other and to negotiate an encryption algorithm and cryptographic keys before data is exchanged.

53
Q

What is a Trap Instruction?

A

A trap instruction, also known as a software interrupt or an exception, is a mechanism used in computer architecture to transfer control from user mode to kernel mode. It is typically triggered by a software event or a specific instruction executed by a program.

54
Q

What is Starvation?

A

Starvation occurs when low-priority jobs never get to execute because more important jobs continually arrive.

55
Q

Optimal Page Replacement Algorithm

A

The optimal page replacement algorithm, also known as Belady’s MIN algorithm, replaces the page that will not be used for the longest period of time in the future. It is the most efficient algorithm but is difficult to implement in practice because it requires future knowledge of the reference string.

56
Q

Not Recently Used (NRU) Algorithm

A

The NRU algorithm classifies pages into four categories based on the reference and modified bits and removes a page at random from the lowest non-empty category. It’s a simple and efficient algorithm but doesn’t always provide the best performance.

57
Q

First-In, First-Out (FIFO) Algorithm

A

The FIFO algorithm replaces the oldest page, i.e., the page that was loaded first. It’s simple and inexpensive to implement but can suffer from Belady’s anomaly, where increasing the number of page frames results in more page faults.
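
A minimal FIFO sketch in Python; the reference string below is the classic example of Belady’s anomaly, where 4 frames incur more faults than 3:

```python
from collections import deque

# Sketch of FIFO page replacement: evict the page that was loaded first.
def fifo_page_faults(references: list[int], frames: int) -> int:
    memory, queue, faults = set(), deque(), 0
    for page in references:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())  # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

# Belady's anomaly: this reference string incurs MORE faults with
# 4 frames than with 3, despite the extra memory.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3), fifo_page_faults(refs, 4))  # 9 10
```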

58
Q

Second-Chance Algorithm

A

The Second-Chance algorithm is a variation of the FIFO algorithm that avoids the problem of replacing a heavily used page by checking the reference bit and giving a second chance if it’s set.

59
Q

Clock Page Replacement Algorithm

A

The Clock algorithm is an efficient implementation of the Second-Chance algorithm that arranges pages in a circular queue (like a clock) and uses a single hand to sweep around replacing pages.
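
A sketch of the Clock algorithm in Python (the class layout is illustrative, not a kernel API): resident pages sit in a circular buffer, each with a reference bit; the hand sweeps, clearing set bits (the second chance) until it finds a clear bit and replaces that page.

```python
# Sketch of the Clock page replacement algorithm.
class Clock:
    def __init__(self, frames: int):
        self.frames = frames
        self.pages: list[int] = []        # circular buffer of resident pages
        self.ref_bit: dict[int, int] = {}
        self.hand = 0
        self.faults = 0

    def access(self, page: int) -> None:
        if page in self.ref_bit:          # hit: just set the reference bit
            self.ref_bit[page] = 1
            return
        self.faults += 1
        if len(self.pages) < self.frames:
            self.pages.append(page)       # free frame still available
        else:
            while self.ref_bit[self.pages[self.hand]] == 1:
                self.ref_bit[self.pages[self.hand]] = 0   # second chance
                self.hand = (self.hand + 1) % self.frames
            del self.ref_bit[self.pages[self.hand]]       # evict victim
            self.pages[self.hand] = page
            self.hand = (self.hand + 1) % self.frames
        self.ref_bit[page] = 1

clock = Clock(frames=3)
for p in [1, 2, 3, 2, 4, 1]:
    clock.access(p)
print(clock.faults)  # 5
```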

60
Q

Least Recently Used (LRU) Algorithm

A

The LRU algorithm replaces the page that hasn’t been used for the longest time. It’s good for temporal locality but can be expensive to implement perfectly.
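
A minimal LRU sketch using `OrderedDict` as the recency list — most recently used pages move to the end, and eviction pops from the front:

```python
from collections import OrderedDict

# Sketch of LRU page replacement.
def lru_page_faults(references: list[int], frames: int) -> int:
    memory: OrderedDict[int, None] = OrderedDict()
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)          # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used
            memory[page] = None
    return faults

print(lru_page_faults([1, 2, 3, 1, 4, 2], frames=3))  # 5
```

Real hardware cannot afford a full recency list per memory access, which is why approximations such as NRU and Clock are used in practice.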

61
Q

Working Set Page Replacement Algorithm

A

The Working Set algorithm tries to keep the set of pages that a program is currently using (its working set) in memory to minimize page faults. It’s based on the principle of locality but can be difficult to determine the working set accurately.

62
Q

WSClock Page Replacement Algorithm

A

The WSClock algorithm is a combination of the Clock algorithm and the Working Set algorithm that tries to get the benefits of both. It’s more complex but can provide good performance in a variety of situations.

63
Q

What’s a page fault?

A

A page fault is an exception that’s raised by computer hardware when a running program accesses a memory page that is mapped into the virtual address space, but not loaded into physical memory.

64
Q

Page Fault Handling

A

Page fault handling is a crucial part of a system’s memory management. Here are the steps taken when a page fault occurs:

  1. Interrupt and handler invocation: A page fault triggers an interrupt. The system stops the execution of the current process and invokes the page fault handler routine in the kernel.
  2. Verification: The handler first verifies that the page reference was legal, meaning that the page is indeed part of the process’s virtual address space.
  3. Page Frame Allocation: If the page is not already loaded in physical memory, the handler looks for a free page frame. If there are no free page frames, it must select one for replacement using a page replacement algorithm, which may involve writing the replaced page back to disk if it has been modified.
  4. Page Loading: The handler then loads the requested page into the allocated page frame from the backing store (typically a disk or SSD).
  5. Page Table Update: The handler updates the process’s page table to reflect the new mapping between the virtual page and the physical page frame.
  6. Process Restart: Finally, the handler restarts the process from the instruction that caused the page fault. Because the process’s address space now includes the previously faulted page, the instruction can execute successfully this time.
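
The steps above can be sketched as a toy simulation; every structure here (page table, free-frame list, backing store) is an illustrative stand-in for the kernel's data structures, with FIFO as the replacement policy:

```python
from collections import deque

# Toy stand-ins for kernel data structures, not a real OS implementation.
page_table: dict[int, int] = {}           # virtual page -> physical frame
free_frames = deque([0, 1])               # two physical frames available
resident = deque()                        # FIFO queue for replacement
backing_store = {0: "code", 1: "data", 2: "stack"}  # pages on "disk"
valid_pages = set(backing_store)          # the process's address space

def handle_page_fault(page: int) -> int:
    # 2. Verification: is the page reference legal?
    if page not in valid_pages:
        raise MemoryError(f"segmentation fault: page {page}")
    # 3. Page frame allocation: take a free frame, or evict one (FIFO here;
    #    a modified victim would first be written back to disk).
    if free_frames:
        frame = free_frames.popleft()
    else:
        victim = resident.popleft()
        frame = page_table.pop(victim)
    # 4. Page loading: fetch the page contents from the backing store.
    _contents = backing_store[page]
    # 5. Page table update: record the new virtual-to-physical mapping.
    page_table[page] = frame
    resident.append(page)
    # 6. Process restart: the faulting instruction can now re-execute.
    return frame

def access(page: int) -> int:
    """Return the frame for `page`, faulting it in if necessary."""
    if page in page_table:
        return page_table[page]
    return handle_page_fault(page)        # 1. the fault traps into the handler

print(access(0), access(1), access(2))    # 0 1 0 — the third access evicts page 0
```
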
65
Q

What is paging?

A

Storage mechanism used to retrieve processes from the secondary storage into the main memory in the form of pages.

66
Q

What are the advantages and disadvantages of using larger page sizes in a paging system?

A

Advantages include fewer page faults when the process’s working set fits in fewer, larger pages; a smaller page table with fewer entries to manage; and potentially better TLB performance due to a lower miss rate. Disadvantages include increased internal fragmentation, since not all of the space in a large page may be used, and potentially less efficient use of memory when a process that needs only a small amount of additional memory forces the allocation of an entire large page.

67
Q

Consider a computer system with a 32-bit logical address space, 4KB pages and 4 bytes per page table entry. How many entries will there be in a single-level page table?

A

The page size is 4KB, or 2^12 bytes (since 2^10 = 1K). This means the offset within a page is 12 bits, leaving 20 bits for the page number in a 32-bit address. Therefore, there will be 2^20 = 1,048,576 entries in the page table.
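
The arithmetic can be checked numerically:

```python
# Page-table size for a 32-bit address space with 4KB pages.
logical_address_bits = 32
page_size = 4 * 1024            # 4 KB = 2**12 bytes

offset_bits = page_size.bit_length() - 1                 # 12-bit offset
page_number_bits = logical_address_bits - offset_bits    # 20-bit page number
entries = 2 ** page_number_bits

print(offset_bits, page_number_bits, entries)  # 12 20 1048576

# At 4 bytes per entry, the table itself occupies 4 MB per process:
print(entries * 4 // (1024 * 1024), "MB")      # 4 MB
```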

68
Q

Describe a situation where a multi-level page table would be beneficial. What are the trade-offs in using a multi-level page table?

A

A multi-level page table can be beneficial when the logical address space is large, but the actual amount of memory used by a process is relatively small. In this case, a single-level page table might be very large, but mostly empty. A multi-level page table allows memory to be allocated for page table entries only as they are needed. The trade-off is that translating a logical address to a physical address may require accessing memory multiple times: once for each level of the page table.

69
Q

How does paging support the sharing of code or data between processes? What are the potential security issues and how can they be addressed?

A

Paging supports the sharing of code or data by allowing different processes to have page table entries that point to the same physical page. This is often used to allow multiple processes to share a single copy of a code library. Security issues arise because this sharing can potentially allow one process to read or modify the data of another process. This can be addressed through mechanisms such as copy-on-write (where a shared page is duplicated if a process tries to write to it) and setting permissions on pages to control which processes can read, write, or execute them.

70
Q

Consider a system with a TLB hit rate of 80%, a main memory access time of 100ns, and a TLB access time of 20ns. What is the effective memory access time for this system?

A

Effective access time is calculated as hit_rate * (TLB access time + memory access time) + (1 - hit_rate) * (TLB access time + 2 * memory access time), since a TLB miss costs an extra memory access for the page-table lookup. Therefore, the effective access time is 0.8 * (20ns + 100ns) + 0.2 * (20ns + 200ns) = 96ns + 44ns = 140ns.
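
Checking the arithmetic numerically (a TLB miss costs the TLB probe plus two memory accesses — one for the page-table entry, one for the data):

```python
# Effective access time (EAT) with the numbers from the card above.
tlb_access = 20      # ns
mem_access = 100     # ns
hit_rate = 0.8

hit_time  = tlb_access + mem_access        # TLB hit: one memory access
miss_time = tlb_access + 2 * mem_access    # miss: page-table walk + access

eat = hit_rate * hit_time + (1 - hit_rate) * miss_time
print(eat)  # ~140 ns
```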

71
Q

What’s Translation Lookaside Buffer

A

Hardware cache in a computer’s memory management unit (MMU) that improves virtual-to-physical address translation speed. This translation process is crucial for memory accesses in systems using virtual memory.

In a system that uses virtual memory, the memory address referenced by a program (the virtual address) is not the same as the physical address where the data is actually stored in memory. Instead, the system maintains a map (typically a page table) from virtual addresses to physical addresses.

72
Q

How does associativity affect the performance of a TLB?

A

Associativity in a TLB refers to the number of slots or entries in the TLB that a given page can potentially be mapped to. The greater the degree of associativity, the greater the number of potential slots for each page, which can potentially increase the TLB hit rate and hence improve performance. However, higher associativity can make the TLB hardware more complex and potentially slower, and it can also increase power consumption.

73
Q

How does the size of the TLB, the size of the pages, and the organization of the virtual memory (such as the number of levels in the page table) interact to affect system performance?

A

This is a complex question as the interaction between these factors can be quite intricate. In general, larger TLBs can potentially improve the hit rate and thus reduce the average memory access time. Larger pages can also improve the TLB hit rate, but they can potentially increase internal fragmentation. A multi-level page table can reduce the memory needed for the page table, but it can potentially increase the number of memory accesses needed for a page table walk on a TLB miss. The optimal combination of these parameters depends on the specifics of the workload and the hardware.

74
Q

In a system that supports both page-level and segment-level memory management, where in the memory management process would the TLB come into play?

A

The TLB is part of the paging mechanism of memory management. It caches translations from virtual page numbers to physical page numbers. So, if a system supports both segmentation and paging, the TLB would typically be used after the segmentation mechanism has translated a logical address to a virtual address, and now the paging mechanism needs to translate this virtual address to a physical address.