Operating Systems Flashcards

1
Q

What is kernel mode

A

TLDR: It gives unrestricted access to both user and OS memory, hardware, and system resources.

2
Q

Explain variable partition scheme and what the hole is

A

It is when memory is separated into partitions, and each variably sized partition contains one process.

Initially, when all memory is available, this space is one large block called the hole

3
Q

What is a problem suffered by the first-fit and best-fit solutions? Explain.

A

External fragmentation is a problem. This is when there is enough total memory space to satisfy a request, but the available spaces are not contiguous.

4
Q

What is the 50 percent rule?

A

Statistical analysis of first fit shows that, given N allocated blocks, another 0.5N blocks are lost to fragmentation; that is, up to one-third of memory may be unusable.

5
Q

What is compaction, under what conditions does it need to work?

A

Compaction is shuffling the holes in memory to place all free memory together.

This is only possible if relocation is dynamic and done at execution time by changing the base registers.

6
Q

What is contiguous allocation?

A

Memory is divided into two partitions: the OS is held in low memory together with the interrupt vector, and user processes are held in high memory.

Each process is stored in a single contiguous section of memory.

7
Q

What is non contiguous allocation

A

Allows a process to acquire several memory blocks at different locations in memory. A process is split up into different partitions in memory. Reduces memory wastage from external and internal fragmentation.

Done through paging and segmentation.

8
Q

Internal fragmentation (L17)

A

Allocated memory for a process may be larger than the requested memory, leaving a small hole. Internal fragmentation is this size difference between the memory being used and the memory allocated, all within a partition.

9
Q

In terms of paging, what is the worst case fragmentation? (L17)

A

Worst-case fragmentation is when a frame contains only one byte of a process: the rest of the frame is wasted.

10
Q

Explain how segmentation works. (L17)

A

Programs are a collection of segments for logical units such as the main program, procedures, arrays, and variables. The logical address consists of a tuple <segment number, offset>.

There is a segment table indexed by segment number, where each entry holds a base address (the physical address where the segment starts in memory) and a limit (the length of the segment). The offset must be less than the limit for the reference to be legal.

Segmentation suffers from external fragmentation.

11
Q

Explain the difference between segmentation and paging. (L17)

A

Long ass table, look it up in mindmap. In short: a page is a fixed-size physical unit that is invisible to the programmer, while a segment is a variable-size logical unit (code, data, stack) that is visible to the programmer. Paging suffers internal fragmentation; segmentation suffers external fragmentation.

12
Q

What is demand paging?

A

It’s like swapping and paging combined: processes are split into pages, and pages are loaded only when needed.

Demand paging is executing partially loaded processes. Because processes are split into pages, a process does not need to be fully in main memory for part of it to run, so each program takes less memory. This allows logical memory to exceed physical memory, which allows a higher degree of multiprogramming. It works by using secondary storage (swap space on a hard disk) to back pages of a process; pages are swapped into and out of main memory when a page fault occurs, and their status is tracked in the page table.

Essentially, when a context switch occurs, the new process begins to execute after loading its first page, and the remaining pages are fetched on demand (similar to a paging system with swapping).

13
Q

What is virtual memory?

A

Virtual memory is the technique that allows the execution of processes that are not completely in memory, by storing part of a program (pages) outside of main memory in secondary storage. Therefore, the logical address space can exceed the physical memory space and programs take less memory to run.

This increases CPU utilisation and throughput with no increase in response time or turnaround time, increasing the degree of multiprogramming.

14
Q

What is a page fault? How is it handled?

A

A page fault occurs when a process accesses a page whose page-table entry is marked invalid.

  1. Check internal table to see if reference was to a valid or invalid memory access.
  2. If invalid, terminate process. If page is just not in memory, then page it in.
  3. Find a free frame.
  4. Schedule the secondary storage operation to read the designated page into the newly allocated frame.
  5. When storage read is completed, modify the process’ page table to indicate it is in memory.
  6. Restart instruction and process accesses page. We save the state of interrupted process when page fault occurs.
15
Q

What is degree of multiprogramming?

A

The degree of multiprogramming describes the maximum number of processes that a single-processor system can accommodate efficiently. It is the number of processes in ready state.

16
Q

What does it mean when a bit is set to valid or invalid on a page table? What are two ways a bit can be invalid?

A

When a bit is set to valid in a page table, that page is in the process’s logical address space and resident in memory, so the CPU can access it. When a bit is set to invalid, either the page is not part of the logical address space, or it is valid but currently in secondary storage. Access to a page marked invalid causes a page fault, which traps to the OS.

17
Q

What hardware is required to support demand paging?

A
  1. A page table is needed to mark validity
  2. Secondary memory is needed to swap pages into and out of main memory.
  3. The ability to restart instruction when a page fault occurs.
18
Q

What is copy on write?

A

Copy on write is when a page of a process is copied when it is modified.

Parent and child processes can share pages in main memory. If either processes modify a shared page, only then is the page copied. This allows for more efficient process creation as only modified pages are copied.

Instead of immediately creating a copy of data when a process is duplicated, the system lets the copies share the same memory space until one of the copies is modified.

19
Q

What is vfork()? Should it always be used?

A

vfork() is a variation of fork() available on UNIX. Instead of using copy-on-write, the parent process is suspended while the child uses the parent’s address space, so the child must be careful not to modify it. vfork() should only be used when the child calls exec() immediately.

20
Q

What causes a need for page replacement?

A

When main memory is full and there are no free frames, but you need to bring in a page from secondary memory to continue a process, you need to find a page in memory that is not in use and page it out.

21
Q

What is standard swapping and why is it no longer in use?

A

Entire processes can be temporarily swapped out of memory to a backing store; mostly idle processes are good candidates. This makes it possible for the total physical memory demand of all processes to exceed physical memory.

Most systems swap pages rather than entire processes now.

22
Q

Explain the steps of basic page replacement.

A
  1. You find the location of a desired page in secondary store.
  2. If there are no free frames, use page replacement algorithm to select a victim frame. Write the victim frame to secondary store, update page/frame table accordingly.
  3. Read desired page into the newly freed frame, change page/frame table.
  4. Continue process by restarting the instruction that caused the trap (page fault)
23
Q

How is overhead from basic page replacement reduced with a modify/dirty bit?

A

Each page or frame has a modify/dirty bit associated with it in hardware. The bit is set by hardware whenever a page is modified, so we know to write the page back to storage when it is replaced. If the bit is not set, there is no need to write it back, because the copy already in secondary storage is still current (when a page is swapped into main memory, it still exists in secondary storage).

24
Q

What is the idea behind page fault algorithms?

A

We want the lowest page-fault rates, on both first access and re-access. An algorithm is evaluated by running it on a string of memory references (a reference string) and computing the number of page faults on that string.

If we have a reference to a page p, then any references to p that immediately follow will not cause a page fault, since the page is already in memory.

The result of an algorithm depends on the number of frames available (generally, more frames means fewer faults).

25
Q

What is the FIFO algo for page replacement?

A

TLDR: The FIFO algorithm removes the oldest page that came into memory first.

Each page is associated with the time that page was brought into memory. When a page must be replaced, the oldest page is chosen.

There is no need to record the exact time a page was brought in: we can keep a FIFO queue of the pages in memory and always replace the page at the head.

Con: it may remove active pages

Belady’s anomaly: this algorithm may even increase page fault rates as number of frames increase.

26
Q

What is optimal page replacement and what are the cons?

A

Optimal page replacement is when you replace the pages that will not be used for the longest period of time.

This is hard to implement because you can’t read into the future (oracle)

Instead, it is used as a benchmark to compare other algorithms against.

27
Q

What is the least recently used algorithm?

A

This uses past knowledge and replaces the page that has not been used for the longest time.

Time of its last use is associated with each page. It is generally a good algorithm and frequently used.

This requires substantial hardware assistance.

28
Q

What are two ways to implement the LRU algorithm?

A
  1. Counters: Every page-table entry has a counter, and every time the page is referenced, the clock is copied into the counter. When a page needs to be replaced, the page table is searched for the smallest counter value.
  2. Stack implementation: A stack of page numbers is kept in doubly linked form. When a page is referenced, it is moved to the top of the stack; this requires changing 6 pointers at worst. No search is needed for replacement (the tail pointer points to the LRU page at the bottom of the stack), but each update is a little more expensive.
29
Q

Describe a hard disk drive (HDD)

A

A hard disk drive is a secondary-storage I/O device containing many platters. Each platter is a disc covered in magnetic material that stores information magnetically; a read/write head above each platter reads and writes the magnetic patterns.

Disk drive motor spins at 60 to 250 times per second.

Disks are usually removable.

30
Q

What is transfer rate in regard to HDDs?

A

It is the rate at which data flows between the drive and the computer.

31
Q

What is constant angular velocity?

A

It is when you are reading the tracks of a HDD, the rotating speed is constant and each track on the disk has the same number of sectors.

The density of bits decreases from inner to outer tracks to keep data rate constant.

32
Q

Describe nonvolatile memory devices

A

Electrical rather than mechanical. A nonvolatile memory device packaged like a hard disk drive is a solid-state drive (SSD); USB flash drives are another form.

More reliable, but more expensive per MB.

Shorter lifespan, less capacity, but faster.

No moving parts so no seek time or rotational latency.

33
Q

How is data transfer carried out between secondary storage and the computer?

A

A secondary storage device is attached to a computer by the system bus or an I/O bus. Data transfers on a bus are carried out by special electronic processors called controllers: there is a host controller at the computer’s end of the bus and a device controller at the device’s end.

34
Q

Explain address mapping and logical block address

A

Storage devices are addressed as a large one dimensional array of logical blocks. A (LBA) logical block address is used to refer to each sector of the drive.

For HDDs: sectors are mapped sequentially, first through a track, then through the rest of the tracks in that cylinder, then through the rest of the cylinders.

For NVM: chips/pages/blocks are mapped to array of logical blocks.

Ascending logical address = ascending physical address

35
Q

What is disk bandwidth?

A

Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the transfer.

36
Q

Explain FCFS Scheduling for servicing disk I/O requests. What is the con of it?

A

It is when disk requests are serviced on a first come first serve basis.

The con is that it can produce a large amount of head movement.

37
Q

What is scan scheduling?

A

The disk arm/head moves from one end of the disk to the other, servicing requests along the way; when it reaches the end, the movement reverses and servicing continues (the elevator algorithm). The con: just after the head reverses direction, few requests are near it, because those cylinders were recently serviced; the heaviest density of requests is at the far end, and those requests wait the longest.

However, less starvation and good for heavy load of requests.

38
Q

What is C-Scan Scheduling?

A

Disk arm/head is moved from one end of disk to another. But the head movement is not reversed and returns to the beginning immediately once reaching the end without servicing any requests on the return trip. This allows for more uniform wait time as it treats cylinders as a circular list that wraps around from the last cylinder to the first.

However, less starvation and good for heavy load of requests.

39
Q

What is SSTF?

A

This stands for shortest seek time first. It services the request with the least seek time from the current head position. However, similar to shortest job first, this can lead to starvation. It is a common default and improves performance over FCFS.

40
Q

What is LOOK and C-LOOK?

A

These are alternate versions of SCAN and C-SCAN. The arm only goes as far as the final request in each direction, then reverses immediately without travelling to the end of the disk. LOOK is a natural default choice.

41
Q

What is pure demand paging?

A

Pure demand paging is the extreme case: a process starts running with no pages in memory, so its very first instruction immediately page-faults, and pages are brought in only as they are referenced.

42
Q

What is position/random access time?

A

It is the time necessary to move the disk head to the desired cylinder (seek time) and the time for desired sector to rotate to under the disk head (rotational latency)

43
Q

What is a head crash?

A

If a disk head makes contact with the disk surface, it scratches the magnetic surface, typically destroying the data stored there and often the drive itself.

44
Q

Describe the relationship between processes and main memory (L17).

A

Processes are kept in main memory as CPU is shared between processes.

Programs are brought from disk into memory and run as processes.

The CPU can only access main memory and registers as storage and the memory only sees streams of addresses and read/write requests.

Accessing main memory may take many CPU cycles, causing the CPU to stall. This is remediated by a cache that sits between main memory and the CPU.

45
Q

What is a base and limit register? (L17)

A

To protect the memory, each process must have their own memory space by determining the range of legal addresses with a base and limit register. They help define a contiguous block of memory accessible to a program.

The base register holds the smallest legal memory address and the limit register specifies size of that range.

The CPU checks every memory address generated in user mode to ensure it falls between base and base + limit for that user.

Base/limit registers can only be loaded by OS in kernel mode.

46
Q

What is instructions binding and how does it work?

A

Before a program runs, it needs to be translated to see where those instructions go in the memory.

The binding of instructions to memory involves associating program instructions with specific memory addresses, ensuring that the CPU can correctly locate and execute these instructions during program execution.

This binding can occur at different stages, and the term “binding” refers to the process of connecting symbolic addresses in the source code to physical addresses in memory.

The compiler binds symbolic addresses (e.g. the variable count) to relocatable addresses (e.g. 14 bytes from the beginning of the module).

The linker/loader binds relocatable addresses to absolute addresses (actual memory addresses).

47
Q

Binding to a memory address can happen at three stages.

A

Compile time: If the memory location is known at compile time, absolute code can be generated.

Load time: If it is not known at compile time where the code will reside in memory, relocatable code is generated; binding happens at load time, when the program is loaded into memory for execution.

Execution time: Binding is delayed until run time if the process can be moved from one segment of memory to another during execution. This needs hardware support, but is the most common.

48
Q

What is a logical and physical address?

A

A physical address is the address seen by the memory unit, the actual location that is accessed. A logical (virtual) address is generated by the CPU; it is a conceptual address within the program’s own address space, not a real location in memory.

These would be the same address if binding happens at compile or load time.

49
Q

What is the memory management unit (MMU) and the relocation register (L17)?

A

The MMU is the hardware responsible for mapping a logical address to a physical address. In the simplest scheme, the value in the relocation register (base register) is added to every logical address generated by a user process.

User programs never access the actual physical address. The logical address is not a real location in memory.

50
Q

What is contiguous allocation? (L17)

A

Contiguous allocation is when each process is contained in a single contiguous section of memory.

51
Q

What is variable partition scheme? What is a hole? (L17)

A

It is when each process is stored in one variably sized partition. The OS keeps track of which parts of memory are available. Initially, all of memory is available: it is one large hole.

When a process exits memory, it frees up a partition and adjacent holes combine.

OS has info about allocated partitions and free partitions.

52
Q

What is the dynamic storage location problem and what are 3 ways to allocate storage? (L17)

A

The dynamic storage allocation problem is how to satisfy a request of size n from the list of free holes.

  1. First fit: the process is stored in the first hole that is big enough. The search can start at the beginning of the hole list or where the previous first-fit search ended.
  2. Best fit: the process is stored in the smallest hole that is big enough. This requires a search of the whole list, unless it is ordered by size.
  3. Worst fit: the process is stored in the largest hole. (Produces the largest leftover hole; worst in speed and storage utilisation.)
53
Q

What is external fragmentation?

A

When there is enough total memory space to satisfy a request, but the available spaces are not contiguous.

54
Q

What is paging? (L17)

A

Physical memory is divided into fixed-size blocks called frames, and logical memory is divided into blocks of the same size called pages. The OS keeps track of free frames; to run a program of N pages, find N free frames and load the program.

A page table is used to translate logical to physical address.

Needs ability for dynamic relocation.

No external fragmentation. However, this can still have internal fragmentation.

55
Q

What is address translation scheme? (L7)

A

Address translation scheme is how the CPU translates a logical address into a physical one. An address generated by the CPU has two parts, the page number and the offset. The page number is the index on the page table, which contains the base address of each page in physical memory. The page offset helps you index on the page itself, it is combined with the base address to find the physical address.

To find the physical address, look up the frame number corresponding to the page number in the page table, multiply it by the frame size, and add the offset (physical = frame number × frame size + offset).

56
Q

How to calculate internal fragmentation when paging? (L17)

A

Take the remainder of the process size divided by the page size; if it is nonzero, subtract it from the page size. The result is the wasted space in the last page.

57
Q

What is multiprogramming? (L16)

A

When a process is waiting for I/O requests, instead of having it wait idly, we want to give the CPU core to another process to maximise CPU utilisation.

58
Q

How does process execution work, explain it in terms of CPU burst and I/O burst? (L16)

A

Process execution consists of cycles of CPU execution and I/O wait, alternating between them.

Process execution begins with a CPU burst, where the process executes on the CPU. It is followed by an I/O burst, where I/O is done.

The main concern: the CPU burst distribution.

59
Q

What is the short term scheduler and when may CPU scheduling take place? (L16)

A

A short term scheduler selects processes in ready queue to be executed in CPU.

CPU scheduling may take place when a process:
1. Switches from running to waiting
2. Switches from running to ready
3. Switches from waiting to ready
4. Terminates

60
Q

What is pre-emptive and nonpreemptive scheduling?

A

Nonpreemptive: once CPU is allocated to a process, that process keeps the CPU until switches from running to waiting or terminates.

Pre-emptive: a type of scheduling where processes are involuntarily moved to waiting or ready. Considers interrupts.

61
Q

What is the dispatcher? What is dispatch latency? (L16)

A

It is a component of CPU scheduling. It is a kernel module that gives control of CPU to process selected by CPU scheduler.

Dispatch latency is the time it takes for dispatcher to stop one process and to start another.

62
Q

What are the scheduling criteria? Which of the criteria is used to determine if an algorithm is good? (L16)

A
  1. CPU utilisation: How busy the CPU is kept (should be 40 to 90%)
  2. Throughput: Number of processes completed per unit time
  3. Turnaround time: Amount of time it takes to complete process
  4. Waiting time: Amount of time a process waits in ready queue
  5. Response time: Amount of time from when a request was submitted until the first response was produced. (Some parts of a picture shows as whole picture loads)

Waiting time is used to evaluate algorithms; for simplicity, we consider only one CPU burst per process.

63
Q

What is the first come first serve (FCFS) scheduling? (L16)

If P1 has burst time of 24, P2 has burst time of 3, and P3 has burst time of 3, and they all arrive at time 0, how long is the average wait time?

A

It is when processes that request the CPU first gets allocated the CPU first.

When process enters ready queue, its PCB is linked to the tail of queue. When CPU is free, it is allocated to the process at the head and the process is removed from queue.

Average wait time is (0+24+27) / 3 = 17

Average wait time varies depending on the order of arrival. FCFS is nonpreemptive.

64
Q

What is the convoy effect? (L16)

A

The convoy effect is when short processes are scheduled behind long processes, causing average wait time to be long and suboptimal CPU utilisation.

65
Q

What is shortest job first (SJF) scheduling? (L16)

What is the second version of this algorithm?

If P1 has burst time of 6, P2 has burst time of 8, P3 has burst time of 7, and P4 has burst time of 3, and they all arrive at the same time, how long is the average wait time for version 1?

A

It is when processes with the shortest next CPU burst time is selected. FCFS is used to break ties. This is optimal because it gives minimum average wait times.

However, the difficulty is knowing the length of the next CPU request. To predict it, use exponential averaging over the lengths of previous bursts.

The second version is shortest remaining time first. Pre-emptive SJF. When a new process arrives, if its burst time is shorter than the one currently executing, pre-empt it and run the new one.

Version 1: 7 (units of time)

66
Q

Name the scheduling algorithms. (L16)

A
  1. FCFS
  2. SJF
    2.1 Shortest remaining time first (pre-emptive SJF)
  3. Round Robin
  4. Multilevel queue scheduling
67
Q

What is priority scheduling? (L16) How does this relate to SJF and what is one problem and what is the solution to that problem?

A

Priority is associated with each process and CPU is allocated to process with highest priority (lowest number). Ties are FCFS.

SJF is a type of priority scheduling where the predicted next CPU burst time is the priority integer.

Could be pre-emptive, if it is, then if a new process arrives in ready queue with a higher priority, current process will be pre-empted.

Starvation is a problem as low priority processes may never execute. The solution would be aging, which is to increase priority of process as time progresses.

68
Q

What is round robin scheduling?

If P1 has burst time of 24, P2 has burst time of 3, and P3 has burst time of 3, and they all arrive at time 0 and time quantum is 4 seconds, how long is the average wait time?

L16

A

It is similar to FCFS but pre-emption is added to enable system to switch between processes. The ready queue is a circular queue.

Each process gets a small unit of CPU time (time quantum, q, usually 10 to 100 ms). After this time passed, process pre-empted and added to end of ready queue. Timer interrupts every quantum.

Average wait time is 17/3 ≈ 5.67 time units.

Higher turnaround time than SJF (and it does not always improve as quantum size increases), but better response time.

69
Q

What is multilevel queue scheduling? (L16)

A

The ready queue is split into separate queues, such as foreground and background. Each queue has its own scheduling algorithm, and each process stays in its queue permanently.

The CPU switches between those queues; scheduling is also done between queues:
- Fixed priority scheduling: serve all of one queue before the next, but with the possibility of starvation
- Time slicing: each queue gets a portion of CPU time.

70
Q

What is multilevel feedback queue? L16

A

Processes can move between various queues. If a process does not finish in a queue, it can move to a lower priority queue and get executed through that queue. Or if a process is waiting for too long, it can move to a higher priority queue (aging). This prevents starvation.

71
Q

What is the idea behind process synchronisation? (L15)

A

Processes can execute concurrently and may be interrupted before it completes. Concurrent access to data of the same file may result in inconsistent data. Need to maintain data consistency.

72
Q

What is the critical section problem? (L15)

A

Each process has a section of code called the critical section (CS), where only one process may execute at a time.

The problem is to design a protocol for processes to synchronise their access to the critical section.

A process must ask permission to enter in the entry section; the CS is followed by the exit section and the remainder section.

73
Q

What is race condition? (L15)

A

When several processes access same data concurrently and outcome depends on order of access.

74
Q

What are the 3 requirements for a solution to the critical section problem? (L15)

A
  1. Mutual exclusion: If process Pi is executing in its CS, no other process can be executing in its CS.
  2. Progress: if no process is executing in CS and there is a process that wants to, selection of that process to enter cannot be postponed.
  3. Bounded Waiting: A bound must exist on the number of times other processes can enter CS after a process has made a request to do so.
75
Q

What is Peterson’s solution? (L15)

A

It is a two-process software solution in which the processes alternate execution between their critical sections and remainder sections. It is not guaranteed to work on modern multicore architectures, because loads and stores might be reordered.

Share 2 variables:
int turn: whose turn it is to enter the CS
Boolean flag[2]: array of 2 elements indicating whether each process is ready to enter the CS

When process i wants to enter the CS, it sets flag[i] to true and sets turn = j, giving the turn to the other process. If the other process is not interested (flag[j] is false) or the turn is not j, the while-loop condition is false and process i enters the CS. When done, it sets flag[i] to false. If process j then wants to enter, flag[i] is now false, so j breaks out of its while loop and enters the CS.

All 3 criteria are met.

76
Q

Process synchronisation with hardware through locks. What are the two types of instructions? (L15)

A

Protect the CS with locks. The hardware provides special atomic instructions (atomic means they operate as one uninterruptible unit): checking the lock and setting the lock happen as a single uninterrupted operation.

test_and_set() and compare_and_swap()

77
Q

Explain test_and_set() (L15)

A

If two of these instructions execute at the same time on different processors, they will be executed sequentially/atomically.

Mutual exclusion is done through a lock variable, initialised to false and shared between the processes. If the lock is false, a process sets it to true and executes its CS.

Essentially: a process will pass the shared lock variable address to test_and_set() function. Initially the lock variable is false. The lock variable will be saved to a separate variable and then lock set to true. Return separate variable. Since the returned variable is false, this will break process out of while loop and execute CS. After executing CS, then the lock variable will be set back to false.

If a different process wants to execute while first process is in CS, it will pass shared lock variable (value=true) to test_and_set(). The lock variable will be saved to a separate variable and then lock set to true (didn’t change anything). Return separate variable. Since the returned variable is true, this will keep looping, and not break process out of while loop and will not enter CS until the lock is set to false by the first process. Only then will test_and_set() return false and break process out of while loop.

Analogy: one room for one person but multiple people want to use it. Can only enter when room is unlocked and no one is in there.

Does not satisfy bounded waiting: there is no bound on how many times other processes can acquire the lock while a given process is still waiting.

78
Q

Explain compare_and_swap() (L15)

A

If two of these instructions execute at the same time on different processors, they will be executed sequentially/atomically.

Essentially: a process passes the global lock variable's address to compare_and_swap(), along with int expected = 0 and int new_value = 1. Initially the lock is 0 (false). The function saves the lock's value to a separate variable, then compares the lock to expected. If lock == expected == 0, the lock is set to new_value = 1. The saved value is returned. Since the returned value is 0 (false), the process breaks out of its while loop and executes the CS. After executing the CS, it sets the lock back to 0.

If a different process wants to enter while the first is in the CS, it passes the global lock variable (value = 1) to compare_and_swap(). The function saves the lock's value; since the lock does not equal expected, it is not set to new_value. The saved value is returned. Since the returned value is 1 (true), the process keeps looping and cannot enter the CS until the first process sets the lock back to 0. Only then will compare_and_swap() return 0 and let it break out of the while loop.

Does not satisfy bounded waiting: there is no bound on how many times other processes can acquire the lock while a given process is still waiting.

79
Q

What are mutex locks and why do we use them? (L15)

A

Process synchronisation with hardware is inaccessible to programmers generally.

Mutex lock (mutual exclusion) is a software solution to the CS problem. A process must acquire the lock before entering its CS. Calls to acquire() and release() must be atomic.

A process calls acquire(): if the lock is false, it breaks out of the while loop and sets the lock to true. It then returns to the process and executes the CS. Afterwards it calls release(), which sets the lock to false, "releasing" the lock.

If another process wants to enter CS, it is stuck in the loop in acquire() until the lock is released.

80
Q

What is a spin lock and how does it relate to busy waiting? (L15)

A

Busy waiting: while one process is executing its CS, any other process that tries to enter must loop continuously in acquire(), which wastes CPU cycles.

This lock is called a spinlock.

Technically happens for test_and_set() and compare_and_swap() as well.

81
Q

What are semaphores? (L15)

A

Semaphore is an integer variable (s)

More sophisticated than mutex locks for process synchronisation.

Two atomic operations wait/(p) and signal/(v).

Can be binary or counting. Each process that wishes to use a resource calls wait(): it busy-waits while s <= 0, then decreases the count s by 1.

When a process releases a resource, it calls signal(), increase count of s by 1.

When count is 0, all resources are used.

82
Q

What is semaphores without busy waiting? (L15)

A

Each semaphore has an integer value and a list of processes.

If wait() is called and s value is not positive, put process in waiting queue instead of busy waiting.

Here, there is a list of processes waiting on a semaphore. When wait() is called, decrease s. If the value of s is less than zero, add the process to the list and call block(), placing the process in the waiting queue.

When signal() is called, increase s. If s is still less than or equal to zero, a process was waiting: remove it from the list and pass it to wakeup() to place it in the ready queue.

83
Q

What is a process and where are processes from? (L14)

A

It is a program in execution. Processes can be from user (apps) or OS (kernel) code. The OS is a process itself.

84
Q

What does a process look like in memory? (L14)

A

It has 4 sections.
1. Text: executable code
2. Data: global variables
3. Heap: dynamically allocated at runtime
4. Stack: Temporary storage when invoking functions

85
Q

What are the states that a process can be in? (L14)

A
  1. New: process being created
  2. Ready: waiting to be assigned a processor
  3. Waiting: process waiting for event (such as I/O)
  4. Running: process being executed.
  5. Terminated: finished execution
86
Q

What is the PCB? (L14)

A

It is the process control block. It contains information about a process, kept by the OS in RAM.

Info such as process state, ID, CPU registers, CPU scheduling info, memory management, accounting info, I/O status.

Need to save info because when you are switching between processes, the state of that process that was stopped from running must be saved.

87
Q

What is a context switch? (L14)

A

When CPU switches between executing different processes, must save state of old process and load state of new process via context switch, which has overhead time.

88
Q

Describe process scheduling. (L14)

A

Selects among available processes for next execution in CPU to maximize CPU use.

89
Q

Explain the difference between job, ready, and device queue. (L14)

A

The job queue is the set of all processes in the system.

The ready queue is all the processes in memory that are ready to be assigned to a core.

The device queue is all the processes waiting for an I/O device request. (wait queue)

90
Q

What is the short term and long term scheduler? (L14)

A

Short term scheduler (CPU scheduler) selects which process should be executed next from the ready queue. Sometimes the only scheduler present.

Long term scheduler selects which processes from the job pool should be brought into the ready queue. Controls the degree of multiprogramming.

91
Q

What is process creation? What are the resource sharing and execution options? (L14)

A

Parent process creates child process, forming a tree.

Processes ID’ed with PID.

Resource sharing:
1. Parent and child share all resources
2. Child shares a subset of the parent's resources
3. Parent and child share none

Execution:
1. Parent and child execute concurrently
2. Parent waits until child terminates

92
Q

What is process termination and what are the 3 sys calls related to it? (L14)

A

A process executes its last statement and asks the OS to delete it using the exit() system call; its resources are then deallocated by the OS.

wait() returns status data to parent.

Parent can terminate a child process with abort(). Reasons:
1. Child overuses resources
2. Child no longer needed
3. Cascading termination: parent exits and the OS does not allow children to continue without the parent

93
Q

Explain multiprocess architecture with chrome. (L14)

A

Multiple processes instead of the whole browser running as one process, so one site having problems won't affect other sites.

  1. Browser process: manages UI, disk, and network I/O. Created when Chrome opens.
  2. Renderer process: renders web pages (HTML, JavaScript). Runs in a sandbox with restricted disk/I/O access.
  3. Plug-in process: one created for each type of plug-in.
94
Q

What is interprocess communication? (L14)

A

Processes are either independent or cooperating.

Processes cooperate to share information, for convenience and speed, to divide system functions, and to break tasks into subtasks.

95
Q

What are the two models of IPC? Explain them (L14)

A
  1. Message passing: communication via messages exchanged between cooperating processes. Process A sends a message to a message queue and process B reads from it, all managed by the kernel.
  2. Shared memory: establish a region of memory to be shared; processes then read/write data to the region. Controlled by the user processes, as the processes decide the form of the data.
96
Q

Communications in client/server systems: pipes (L14)

A

Way for processes to communicate.

Ordinary pipes: only accessible by the process that created them. For use between parent and child. Unidirectional and local; terminates after use.

Named pipes: work over a network, are bidirectional, and continue to exist after communication ends. Several processes can use one, with no parent/child relationship needed.

97
Q

Communications in client/server systems: sockets (L14)

A

One endpoint of a two-way communication between 2 programs; a pair of communicating processes has 2 sockets. A socket is identified by an IP address and port number. All ports below 1024 are for standard services. The special loopback IP refers to the system the process itself is running on.

98
Q

Communications in client/server systems: remote procedure call (L14)

A

Abstracts procedure calls between processes on separate networked systems. Uses ports and stubs (placeholder functions that send and receive the messages).

99
Q

Roll out, roll in

A

Swapping variant used for priority-based scheduling algorithms. Lower priority processes are swapped out so a higher priority process can be loaded and executed. (Relates to swapping.)

100
Q

What are the OS services for users? (L13)

A
  1. UI: GUI, command line interface, touch screen interface
  2. Program execution: able to load program into memory and run until termination
  3. I/O operations: Programs may involve I/O device or files
  4. Communications: Processes exchange info within computer or across.
  5. File system manipulation: Programs need to write/read files, create/delete them.
  6. Error detection: OS needs to be constantly aware of errors and take appropriate action.
101
Q

What are the OS services for ensuring efficient operation of system? (L13)

A
  1. Resource allocation: when multiple users/processes are running concurrently, resources (memory, cpu cycles, i/o) must be allocated accordingly.
  2. Accounting: keep track of which users use how much and what kinds of computer resources.
  3. Protection and security: owners of info stored in a multiuser/networked computer system may want to control use of that info, make sure processes don’t interfere with each other.
102
Q

Explain command line interpreter (L13)

A

Direct command entry

Different shells (BASH, Korn)

Fetches a command from the user and executes it. Calls built-in commands or system programs (e.g. rm), which are files loaded into memory to do something.

103
Q

Explain GUI (L13)

A

Desktop metaphor: mouse, keyboard, and monitor, with icons to represent files, actions, programs. Mouse over objects lead to an action.

Systems have both GUI and CLI

104
Q

Explain touch screen interface (L13)

A

Touchscreen devices require new interface, mouse is not possible.

Action/selection based on gesture.

Virtual keyboard for text and voice commands.

105
Q

What are system calls and how does this relate to API? (L13)

A

Provides an interface to the services made available by an operating system.

Processes request services from the OS kernel, like interacting with another process or a peripheral device.

Programs access system calls through an API (application programming interface). APIs specify the set of functions available to the programmer.

106
Q

Name all the types of system calls (L13)

A
  1. Process control
  2. File management
  3. Device management
  4. Information maintenance
  5. Communications
  6. Protection
107
Q

Guideline for OS design and implementation and what are the 2 principles to separate? (L13)

A

Define goals/specifications. Highest level is choice of hardware and system type.

Consider user goals and system goals. User: OS convenient to use/reliable/fast. System: OS easy to design/implement/maintain.

Principle of separating policy from mechanism: what will be done versus how to do it, to allow for flexibility.

108
Q

What is emulation? (L13)

A

Emulation is to allow an OS to run on non native hardware.

109
Q

What are loadable kernel modules?

A

The kernel has a set of core components and can link in additional services via modules, at either boot time or run time.

110
Q

What are core dump and crash dump files? L13

A

Core dump files: log files capturing the memory of a process on application failure.

Crash dump files: log files from operating system (kernel) failures.

111
Q

What is trace listing and profiling? L13

A

Trace listing is collecting data for a specific event for analysis.

Profiling is periodic sampling of instruction pointer to look for trends.

112
Q

What is SYSGEN? (L13)

A

System generator: scans your hardware and compiles/configures the kernel code to optimise it for your hardware.

113
Q

what is an OS (L12)? what are the four components?

A

It is a layer of code that acts as an intermediary between users/apps and the hardware.

It is a resource allocator and a control program; the one program running at all times is the kernel. It makes the computer system easier to use.

The four components: hardware, OS, apps, users.

114
Q

Explain how a device connects to the CPU (L12)

A

Every I/O device has a device controller with a buffer. The OS has a device driver for every controller to present a uniform interface to the rest of the OS.

The controller causes interrupts to tell CPU it is finished with operation.

Buffers help with data transfer because devices operate on different speeds.

115
Q

Two ways CPU finds out about an interrupt (when device needs services) L12

A
  1. Polling: CPU periodically checks each device to see if it needs service. Takes time even when there are no requests, and shortening the polling interval is inefficient; it can be efficient if events arrive frequently.
  2. Interrupts: each device has interrupt line to signal to processor (change normal flow of instructions) and CPU executes interrupt handler by looking up vector table to see what type of interrupt it is.
116
Q

what is caching? L12

A

A place to store data temporarily in faster storage that is easier to access. If the data is there it can be used directly; otherwise it is copied into the cache from slower storage.

117
Q

describe asymmetric and symmetric processing L12

A

asymmetric: multiprocessing where each processor is assigned a specified task

symmetric: each processor performs all tasks. all share buses and have access to memory

118
Q

what is a clustered system? L12

A

Multiple systems working together, formed from 2 or more individual computer systems (nodes). Some have a distributed lock manager to avoid conflicting operations.

Asymmetric: one node on standby to take the place of another node if it fails.

Symmetric: all nodes run apps and monitor each other; more efficient.

Beowulf clusters use cheap personal computers to perform parallel computing when idle.

119
Q

what does it mean to be interrupt driven L12

A

Hardware interrupts come from devices.

Software interrupts come from exceptions or traps: errors, requests for OS service, infinite loops, processes modifying each other.

120
Q

difference between program and process L12

A

A program is a set of instructions stored on disk.

A process is when that set of instructions is loaded into RAM and is being executed.

121
Q

explain the kernel data structures L12

A
  1. Singly linked list: every element has a pointer to the next
  2. Doubly linked list: every element has pointers to the previous and next
  3. Circular linked list: a singly linked list where the last element has a reference to the first
  4. Stack: push and pop (last in, first out)
  5. Queue: first in, first out
  6. Binary search tree
  7. Hash map
122
Q

what is distributed computing? L12

A

Separate systems networked together; a network is a communications path between them.

123
Q

peer to peer computing environment L12

A

All nodes are peers (each can be a client or a server).

Either a central lookup service on the network, or nodes broadcast requests for service and respond to requests.

124
Q

what is cloud computing? L12

A

Delivers computing, storage, and apps as services across a network.

125
Q

Explain the boot up process

A

When the computer turns on, the processor has the hard-coded address of the BIOS in ROM/NOR flash, which is copied into RAM. The BIOS loads its configurable settings from the CMOS chip. The BIOS runs POST (power-on self test) to make sure memory has no faults and can be read. It initialises hardware, sets up the vector table, and searches for a bootable device (e.g. hard disk or floppy disk for Linux). It then locates the boot loader code (GRUB or LILO in Linux) in the MBR (master boot record) and loads the Linux kernel/OS into RAM, and the processor executes the core OS. The computer is then ready for user interaction to load user executable code (apps) into RAM.

126
Q

What changes mode to kernel from user? L12

A

System calls change the mode to kernel mode; after returning from the call it goes back to user mode.

127
Q

What is a mode bit

A

It is the bit that distinguishes whether the system is running in kernel mode or user mode