OS Flashcards

1
Q

What is An Operating System? (Two functions)

A

An operating system is software that performs 2 functions:
- Extended Machine
- Resource Manager

2
Q

What is extended machine?

A

Extends the underlying machine architecture
(instruction set, memory org., I/O, and bus structure)
– E.g., a disk driver instead of reading a 450-page SATA manual
– Provides abstractions like files and processes

3
Q

What is a resource manager?

A
  • Manages resources such as CPUs, memories, timers,
    disks and NICs
  • Allocates resources to programs
  • Manages concurrent requests to resources, e.g., time
    multiplexing on the CPU, space multiplexing on RAM
4
Q

OS Examples UNIX

A

OpenBSD, FreeBSD, NetBSD, macOS

5
Q

OS Examples LINUX

A
  • Debian (MX, Mint, Ubuntu),
  • Red Hat (RHEL, Fedora, CentOS)
  • Arch (Manjaro)
  • Gentoo (Pentoo)
  • Android, …

Not Linux: Windows

6
Q

What is the CPU? What is the loop (FDLE)?

A

The “brain” of the computer
In an infinite loop, it:
* Fetches an instruction from memory
* Decodes it
* Loads its operands (if any) and
* Executes it

7
Q

What does a CPU have?

A

-Each CPU has its own instruction set: the
one in your phone is completely different
from the one in your laptop
-A CPU has a set of registers to store data
about to be used

8
Q

CPU Registers

A
  • Program Counter contains the address of the
    next instruction to execute
  • Stack Pointer points to the top of the stack in
    memory
  • The stack contains a frame for each function call that
    has not returned yet
  • A frame contains input parameters, local variables, etc.
  • Program Status Word (PSW) is set by
    comparison instructions, it contains the condition
    code bits
9
Q

Memory hierarchy (R,C,MM,MD)

A

Registers, Cache, Main Memory, Magnetic Disk

10
Q

What is L1 and L2 caching

A

L1 and L2 cache are small, high-speed memory storage areas that are built into the processor chip of a computer. The purpose of caching is to reduce the time it takes for the processor to access data from the slower main memory, which is usually DRAM (Dynamic Random Access Memory).

L1 cache (primary) is the smallest and fastest cache on the processor chip.

L2 cache (secondary) is a larger cache that is slower than L1 cache but faster than main memory.

11
Q

Disks: OS as an Extended Machine

A

read & write: 13 parameters in 9 bytes
1) address of the disk;
2) number of surface;
3) number of cylinder;
4) number of track;
5) kind of recording mode
(continuous, interleaf, …);
6) what to do if the data is not found;
7) …
Is the motor on or off?

12
Q

Disk Speed

A

-Platters rotate at 5400, 7200, 10800 RPM
-It takes 1 msec to move the head from one
cylinder to the next
* 5 to 10 msec to move to a random one
-It takes 5 to 10 msec for a sector to be
under the arm
-Reading speeds are in the range of 50 to
160 MB/s

13
Q

Booting-up facts

A
  • The PC has a motherboard
  • The Basic Input/Output System (BIOS) is
    a program stored on the motherboard
  • BIOS has low-level software to use the
    keyboard, screen, disks, etc.
  • BIOS is stored in flash RAM
14
Q

BIOS - 6 Steps

A
  1. Checks how much RAM is installed and whether the
    keyboard, etc., are working (USB, PS/2)
  2. Scans for PCI and PCIe devices and installs them
  3. Checks in CMOS memory for a list of booting
    devices and tries them
    - Complementary Metal–Oxide–Semiconductor (CMOS)
  4. Loads the first sector from the booting device
  5. Runs its code, which normally examines the
    partition table at the end of the boot sector to
    determine which partition is active
  6. This results in running a secondary boot loader from
    that partition which loads the OS
15
Q

OS Booting up

A
  • Queries the BIOS for the configuration info.
  • Detect if the hardware configuration changed
    since last restart
  • For each device, check if it has its driver
16
Q

OS Types

A

Mainframe OS
* Goliath computers! Heavily oriented toward
processing many jobs at the same time
* Offer batch, transaction processing and
timesharing services
* E.g., OS/390, descendant of OS/360

Server OS
* Serve multiple users at once (Web, file, print services)
* E.g., Linux server distros, Solaris, FreeBSD,
Windows Server

Multiprocessor OS
* Runs on multiple CPUs in a single system

Personal OS
* E.g., Linux, Windows, FreeBSD, OpenBSD

Handheld OS
* E.g., Android

Embedded OS
* Control non-computer devices, e.g., oven
* End-user can’t install software on it
* E.g., Embedded Linux

Sensor-Node OS
* Sensors have limited power
* Run a tiny OS, often event-driven
- Respond to an external event or an internal clock
* E.g., TinyOS

Real-time OS
* Need to perform tasks within limited time
* Tasks are triggered by events
* Hard real-time OS: if an action absolutely
must occur at a certain moment or within a
certain range
-E.g., eCos
-Users can’t add their software
* Soft real-time OS: it’s acceptable, but not
desirable, to occasionally miss a deadline

17
Q

What is a Process?

A

A process is a program in execution
A process has its own address space

It also has a set of resources:
* Registers (including the program counter and stack pointer)
* List of open files
* Lists of related processes
* etc.

18
Q

One main task of OS

A

One of the main tasks of an OS is to
decide which process runs at an instant t
* And in which processor core
-For example, you might be using Firefox
and LibreOffice at the same time on a
single-core machine
After running Firefox for a certain time, the
OS needs to pause Firefox to run LibreOffice
- In order to be able to resume Firefox once
LibreOffice’s time is over, the OS needs to
store the status of Firefox before pausing it:
* Save its address space (core image) as well as
* Other information (registers content, file pointers,
etc.)
- Process information, other than its address
space, is usually stored by the OS in a data
structure inside a process table

19
Q

What is a linux time slice?

A

In Linux, the time slice, also known as the time quantum, is the amount of time allocated to a process for it to execute on the CPU. The time slice is determined by the kernel scheduler and can vary depending on the specific system and kernel configuration.
By default, the time slice in Linux is usually around 10 milliseconds (ms)

20
Q

What happens when “gedit” is entered in shell?

A

This effectively asks the “shell”
process to create a new process and run “gedit” in it
- In this scenario, “gedit” is a child of “shell”. “shell” is its
parent
- A process could create many processes to help it
perform a complex operation, e.g., multiplication of
large matrices
- Linux processes form a tree
- The parent and child processes need to coordinate the work
* They need to use some form of inter-process
communication, e.g., by sending signals
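The fork/exec/wait sequence described above can be sketched in Python (a minimal sketch, POSIX-only; the short-lived program "true" stands in for "gedit" so the example terminates immediately):

```python
import os

# Sketch of what the shell does for a typed command:
# fork() a child, exec() the program in it, wait() in the parent.
pid = os.fork()
if pid == 0:
    os.execvp("true", ["true"])        # child: become the requested program
else:
    _, status = os.waitpid(pid, 0)     # parent (the shell): wait for the child
    exit_code = os.WEXITSTATUS(status)
    print("child exited with", exit_code)  # child exited with 0
```

After `execvp`, the child process keeps its PID but runs a completely new program image, exactly as when the shell launches "gedit".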

21
Q

What is a thread?

A

A thread is one flow (thread) of execution
within a process
A process can have one or more threads
in it
- E.g., LibreOffice
Threads share the same memory space,
making it easier for them to communicate
than different processes
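A minimal Python sketch of the point above: two threads in one process update the same variable, because threads share the process's memory space (the lock avoids lost updates):

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    # Both threads see and update the same 'counter': shared memory.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

t1 = threading.Thread(target=work, args=(10_000,))
t2 = threading.Thread(target=work, args=(10_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 20000
```

Two separate processes doing the same work would each have their own `counter` and would need explicit inter-process communication to combine results.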

22
Q

Address Space

A

A protection mechanism is required to
prevent a program from accessing
another’s space
* Provided by the hardware, managed by the OS

  • Virtual memory uses the disk as an extension
    of the actual physical memory
  • The address space is an abstraction of
    the physical memory
23
Q

Files

A

An abstraction of a collection of bytes
Organised into folders
System calls to manipulate both
Notions of
* Path
* Working directory
* File descriptor

24
Q

*nix special files

A
  • Block: randomly addressable, e.g., disks
  • Character: takes input or outputs a text stream
    -E.g., keyboard, modem, printers, etc.
  • Under /dev (e.g., /dev/lp could be a printer)
25
Q

Kernel Mode

A

Kernel Mode
- Code has access to all CPU instructions
- Code has access to all memory addresses
- Only accessible to most trusted parts of the OS

26
Q

User Mode

A
  • Code does not directly access hardware or
    memory
  • Uses OS API to do so
  • Crashes are recoverable
27
Q

The Shell

A

The most common shell on Linux is bash.
Desktop environments are sometimes called
graphical shells.
In the classic layering, from top to bottom:
application programs
system programs (the shell is one)
hardware

28
Q

System Calls

A

The lowest-level interface provided by the OS to
user programs
System calls manipulate processes, files,
memory, inter-process communication, etc.
Process management: fork, waitpid, execve,
exit, …
File management: open, close, read, write,
lseek, stat, unlink, …
Folder management: mkdir, rmdir, mount,
umount
Memory: brk, … (malloc and free are C library
functions layered on top of calls such as brk)
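The file-management calls above map directly onto Python's os module wrappers; a minimal sketch (the temporary file exists only for the demo):

```python
import os, tempfile

# The os module exposes thin wrappers over the file-management
# system calls named on this card.
fd, path = tempfile.mkstemp()          # open(2) under the hood
os.write(fd, b"hello syscalls")        # write(2)
os.close(fd)                           # close(2)

fd = os.open(path, os.O_RDONLY)        # open(2), read-only
os.lseek(fd, 6, os.SEEK_SET)           # lseek(2): skip "hello "
data = os.read(fd, 8)                  # read(2): the next 8 bytes
os.close(fd)
os.unlink(path)                        # unlink(2): remove the file
print(data)  # b'syscalls'
```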

29
Q

Monolithic Systems

A

Runs entirely in kernel mode
* Including drivers
* Any crash in a procedure brings the system down
Each procedure in the system can call any other
* No layering
* No restrictions
Can have minimum structure
* A service procedure per system call
* A service utility used by one or more service procedures
E.g., Unix, Linux, OpenVMS, MS-DOS, Windows 9x

30
Q

Layered Systems (OUIOMP)

A

The system is divided into layers
LAYER FUNCTION
5 the operator
4 user programs
3 input/output management
2 operator-process communication
1 memory and drum management
0 processor allocation and multiprogramming

31
Q

Microkernel

A

2-10 bugs per thousand lines of code
- Linux kernel is around 5M lines of code
- Microkernel OS minimises code that runs
in kernel mode

32
Q

Hybrid Kernel

A

Combines Monolithic and Microkernel approaches
- Runs almost all OS services in kernel space
- Used in Windows NT (including 10)
- Linus Torvalds doesn’t find enough difference
between hybrid and monolithic kernels

33
Q

Time-Multiplexing

A

Computer resources can be shared
between processes/users using either:
* Time-multiplexing: processes take turns at the
resource
- E.g., CPU core, printer (actual printing)
* Or space-multiplexing, or both

34
Q

Space-multiplexing

A

Computer resources can be shared
between processes/users using either:
* Space-multiplexing: multiple processes can
use/share the resource at the same time
- E.g., hard drive, RAM, printer’s buffer (if available)
* Or time-multiplexing, or both

35
Q

Process States - Which steps result in/from calling the
scheduling algorithm?

A

All of them! Transitions between Running, Ready and Blocked all involve the scheduler.

36
Q

I/O Operations

A

I/O operations are usually much slower
than CPU instructions

37
Q

What does the scheduler do?

A

The Scheduler chooses the next process
to run.
Process Queue
* The process queue(s) can be ordered in
different ways
- FIFO
- Priority

38
Q

Fairness in processes?

A
  • Fairness: each process gets a “fair share”
39
Q

Throughput

A
  • Throughput: number of processes that
    complete their execution per time unit
40
Q

Turnaround time

A
  • Turnaround time: time from submission to
    completion
41
Q

Waiting time

A
  • Waiting time: the amount of time a process
    spends in the ready queue
42
Q

Response time

A
  • Response time: the amount of time from
    submission to the first response
43
Q

Non-preemptive

A
  • A running process will release the CPU only
    when it exits or is blocked
    Non-preemptive scheduling is typically more suitable for batch systems than interactive ones as jobs are submitted in advance.
    If no clock interrupt is available, non-preemptive scheduling is the only option
44
Q

Preemptive scheduling

A
  • A running process can be suspended after some
    time
  • Requires a clock interrupt at the end of time
    intervals (e.g., 50Hz, 60Hz)
45
Q

Dispatcher

A

Switching the CPU from process A to a
new one

46
Q

Context Switching steps

A

Steps
* Switch from user mode to kernel mode
* Save A’s state (e.g., registers)
* Select process B
* Load B’s state
* Start B
The time it takes is the Dispatch Latency

47
Q

First-Come, First-Served (FCFS) Scheduling

A

The CPU is allocated to processes in their
order of arrival
* FIFO process queue
- Each process owns the CPU until it is
done or blocked
This is a non-preemptive algorithm
FCFS suitable for
* Batch systems

48
Q

FCFS Advantages and Disadvantages

A

Advantages
* Simple to implement
Disadvantages
* Waiting time can be large if short requests
wait behind the long ones
* Not suitable for multiprogramming systems

49
Q

Shortest-Job-First (SJF) Scheduling

A

For each process, determine the length of its
next CPU burst.
Run processes in ascending burst-length
order.
SJF exists in both non-preemptive and preemptive variants.
SJF is suitable for batch systems

In SJF, if A (L=20) is available at t=0 and B (L=1)
arrives at t=1
* A will run from 0 to 20, B runs from 20 to 21
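A small sketch of why burst order matters, using hypothetical bursts that all arrive at t=0 (not an example from the course):

```python
# Hypothetical CPU bursts, all available at t=0.
bursts = [6, 8, 7, 3]

def avg_waiting(order):
    # Each job's waiting time = sum of the bursts that ran before it.
    t, total = 0, 0
    for b in order:
        total += t
        t += b
    return total / len(order)

fcfs = avg_waiting(bursts)            # run in arrival order
sjf  = avg_waiting(sorted(bursts))    # run shortest first
print(fcfs, sjf)  # 10.25 7.0
```

Sorting by burst length (SJF) gives the smallest possible average waiting time for jobs that are all present at t=0.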

50
Q

SJF Advantages and Disadvantages

A

Advantage
* SJF minimises the average waiting time

Disadvantages
* Difficult to get a good estimate of the burst
length
- Easier if similar tasks ran in the past
* Starvation
- The longest process(es) might never get to run

51
Q

Shortest Remaining Time next (SRT)

A

SRT takes into consideration the fact that processes
arrive at different times
* When a new process arrives, interrupt the running process
if its remaining time is longer than the new process
- With SRT
* A runs from 0 to 1
* At 1, B arrives, B’s length (1) is shorter than A’s remaining
time (19)
* Run B from 1 to 2, Run A from 2 to 21
SRT is preemptive
Typically used in interactive systems where response time is critical

52
Q

Round-Robin (RR)

A

The most intuitive multiprogramming
algorithm
* Split the timeline into equally long segments
1. Give each process a segment
2. Once all present processes had a segment each, go
to 1.
* The length of the segment is called quantum
- Usually 10-100ms
* At the end of the timeslot, the process is pre-empted

More suited for interactive systems
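A toy round-robin simulation (hypothetical burst lengths; context-switch cost ignored):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Completion time of each process under RR (switch cost ignored)."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))    # circular ready queue
    t, done = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i]) # one quantum, or less if burst ends
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)              # preempted: back of the queue
        else:
            done[i] = t                  # finished at time t
    return done

print(round_robin([5, 3, 1], quantum=2))  # [9, 8, 5]
```

The short job (burst 1) finishes early instead of waiting behind the long ones, which is the interactive-friendly behaviour RR is chosen for.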

53
Q

RR Facts

A

If the quantum is too short
* The cost of context switches becomes the
main performance factor
If the quantum is too long
* Becomes FCFS
Performance criteria
* Length of time quantum
* Number of context switches
* Length of CPU bursts

54
Q

RR Advantages & Disadvantages

A

Advantages
* Simple to implement (circular queue)
* Very suitable for multi-programming
Disadvantages
* Challenging to find the optimal value of the
quantum
* Assumes all processes are equally important

55
Q

Priority Scheduling

A

Each process has a priority value
* Lower integer value → higher priority
- The scheduler picks processes in their
descending priority order
Two variants
* Preemptive: pause the current process if a higher
priority one arrives
* Non-preemptive: current process cannot be
interrupted by higher-priority processes

56
Q

Priority Scheduling Advantages & Disadvantages

A

Advantages
* Can define more sophisticated policies
- E.g., SJF can be seen as priority scheduling where
the priority value is the length of CPU burst time
Disadvantages
* Starvation for low-priority processes
- Can include an ageing factor: increase the priority
of waiting processes over time

57
Q

What is a semaphore?

A

Semaphores are an abstract data type used for synchronization between multiple threads or processes in a concurrent system.
Semaphores can be used to protect shared resources or critical sections of code, by ensuring that only one thread or process can access them at a time. They can also be used to signal events between threads or processes
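A minimal sketch using Python's threading.Semaphore as a binary semaphore to protect a shared list (critical section):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: at most one thread inside
shared = []

def worker(tag):
    for _ in range(1000):
        sem.acquire()          # "down": enter the critical section
        shared.append(tag)     # only one thread appends at a time
        sem.release()          # "up": leave the critical section

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(len(shared))  # 3000
```

With an initial count greater than 1, the same object becomes a counting semaphore that limits how many threads use a resource concurrently.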

58
Q

What is true about process scheduling?

A

The waiting time is always larger than or equal to the response time.

59
Q

What is the waiting time in process scheduling?

A

The time spent by a process in the ready queue waiting for CPU time

60
Q

What is the response time in process scheduling?

A

The time between a user request and the first response produced by the system.

61
Q

A page fault occurs when:

A

A requested virtual memory page is not present in RAM.

62
Q

The address space of a process refers to the space it uses in the ______?

A

RAM

63
Q

In an OS that implements paging, what statement is accurate?

A

The size of a virtual memory page has to be identical to the size of a page frame.

64
Q

A binary file can be defined as:

A

A file that uses 8 bits out of each byte

65
Q

For which of the following systems is nonpreemptive scheduling more suitable?

A

Batch Systems

66
Q

In preemptive priority scheduling, if the current running process has a priority of 3 and a process of priority 10 arrives, then what does the scheduler do?

A

Nothing, the current process has a higher priority

67
Q

“On average, only 50% of each memory page will contain useful data.”
Is this true?

A

False, only the last memory page of a process will, on average, contain 50% useful data.

68
Q

Examples of “Character” special files include (select two)

A
  • a network interface
  • a terminal
69
Q

Which of the following statements is correct:
1- SJF is suitable for interactive operating systems
2- SRT is preemptive
3- Both preemptive and non-preemptive variants of priority scheduling are suitable for a smartphone OS

A

Only 2

70
Q

Is SRT non-preemptive or preemptive

A

preemptive

71
Q

Which of the following statements is true? (select two)

  • A process can have multiple threads in it
  • A thread can have multiple processes in it
  • Threads from the same process share the process’ local
    variables
  • Processes from the same thread share the thread’s local variables
A
  • A process can have multiple threads in it
  • Threads from the same process share the process’ local
    variables
72
Q

A process references memory address 2000. If it is running on an OS that uses dynamic relocation where the base address of that process is 3000 and its limit is 1000, then…

A

The access request will fail because the logical address 2000 exceeds the limit 1000.

In dynamic relocation, the OS maps the logical address of a process to a physical address by adding the base address of the process to the logical address, but only after checking the logical address against the limit. Here the check fails (2000 ≥ 1000), so the access is rejected; had it passed, the physical address would have been 3000 + 2000 = 5000.
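The base/limit check can be sketched as follows (`translate` is a hypothetical helper for illustration, not a real OS API):

```python
def translate(logical, base, limit):
    """Dynamic relocation with a base/limit register pair (sketch)."""
    if logical >= limit:          # the limit check happens first
        raise MemoryError("address beyond the process's limit")
    return base + logical         # physical = base + logical

print(translate(500, 3000, 1000))   # 3500: within the limit
try:
    translate(2000, 3000, 1000)     # 2000 >= 1000 -> rejected (trap)
except MemoryError as e:
    print("trap:", e)
```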

73
Q

A stack frame is created when…

A

a function is called.

74
Q

In the context of file management systems, “random access” refers to

A

No need to read bytes 1 to n-1 before reading or writing byte n

75
Q

How do you calculate the number of page frames in the physical memory?

A

To calculate the number of page frames in physical memory, divide the total physical memory size by the page size. For example, with 2 GB of physical memory (2^31 bytes) and 4 KB pages (2^12 bytes):

2^31 / 2^12 = 2^19 = 524,288 page frames

No further rounding is needed when both sizes are powers of two.
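A quick sanity check of the arithmetic, assuming the example sizes of 2 GB physical memory and 4 KB pages:

```python
# Frames = physical memory size / page size (both powers of two here).
phys_bytes = 2 ** 31        # 2 GB of physical memory (assumed example)
page_size = 2 ** 12         # 4 KB pages
frames = phys_bytes // page_size
print(frames)               # 524288, i.e. 2**19
```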

76
Q

Is the following statement true or false?

Using the Hoare implementation, a signal will immediately transfer ownership of the monitor to the one thread woken by the signal. Under the Mesa implementation, a signal wakes at least one waiting thread, which will then compete normally to obtain ownership of the monitor in competition with any other threads that are trying to obtain ownership of that monitor.

A

True

77
Q

what implementation of monitors transfers ownership of the monitor immediately to the thread that is woken up by a signal.

A

Hoare implementation of monitors transfers ownership of the monitor immediately to the thread that is woken up by a signal. This means that the woken thread gains exclusive access to the monitor and continues executing its critical section without interference from other threads.

78
Q

what implementation only wakes up one waiting thread when a signal is sent.

A

Mesa implementation only wakes up one waiting thread when a signal is sent.
The woken thread then competes with other threads that are waiting to acquire the monitor by using a lock or semaphore mechanism.
This means that the woken thread may not immediately gain access to the monitor and may have to wait until it acquires the lock or semaphore

79
Q

The bounded buffer problem is also known as the _______________ problem.

A

Producer-Consumer.
The producers generate data items and add them to the buffer, while the consumers remove data items from the buffer and process them. The problem is to ensure that the producers do not add items to the buffer when it is full, and the consumers do not remove items from the buffer when it is empty.
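A minimal sketch using Python's queue.Queue as the bounded buffer (its put/get already block when the buffer is full/empty):

```python
import threading, queue

buf = queue.Queue(maxsize=4)   # bounded buffer of capacity 4

def producer(n):
    for i in range(n):
        buf.put(i)             # blocks while the buffer is full
    buf.put(None)              # sentinel: no more items

consumed = []
def consumer():
    while True:
        item = buf.get()       # blocks while the buffer is empty
        if item is None:
            break
        consumed.append(item)

p = threading.Thread(target=producer, args=(100,))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(len(consumed))  # 100
```

Internally, Queue solves the same synchronisation problem the card describes, using a lock and condition variables for "not full" and "not empty".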

80
Q

When the method release() of the Java Semaphore class is called, it will:

A

sometimes unblock a waiting thread that called acquire().

81
Q

What are the advantages of using the Synchronized keyword (i.e. Intrinsic Monitors) over the ReentrantLock class (i.e. Extrinsic Monitors)?

A
  • Guarantees that ownership is released at the end of methods that invoke the mutual exclusion protection
  • less error-prone.
82
Q

advantages that Monitors have over Semaphores:

A
  • Clear recommended programming pattern
  • Has separate and clear syntax for mutual exclusion
  • Programming less prone to errors
  • No constructor arguments are required.
83
Q

What kind of storage do we need

A

Need:
* Long-term storage
* Bigger size
* Non-volatile
* Easier sharing
* With hardware abstraction
- Read block k
- Write block k

84
Q

Files

A
  • Files are persistent logical units of
    information
  • Created and manipulated by processes
  • File (Management) System is the part of
    the OS dedicated to file structures, names,
    uses, etc.
  • Process, Address Space and Files are the
    most important abstractions an OS
    provides
85
Q

Files Systems Timeline

A

Big names for end-user computers include:
* 1984: FAT-16 created by IBM and Microsoft and used
in MS-DOS, then in Windows 95 and 98
* 1987: Minix V1 FS created by Andrew Tanenbaum for
Minix 1.0
* 1992: ext and ext2 (1993) created by Rémy Card for
Linux
* 1993: NTFS 1.0 introduced in Windows NT 3.1
* 1996: FAT-32 introduced in Windows 95 OSR2
* 1999: ext3
* 2006: ext4

86
Q

File Extensions

A

Windows
* Extensions dictate the choice of the program
to open the file, e.g., jpg, pptx, etc.

Linux
* Less binding between extensions and
programs

87
Q

File Types

A

Files types:
* Regular: your files, ASCII or binary
* Directories (folders): contain collections of
files
* “Character” special files: model serial I/O
devices, such as terminals, printers and
networks
* “Block” special files: model disks
Under Linux, use “ls -l”

88
Q

Regular and Binary Files

A

ASCII files
* Lines terminated by a carriage return, line feed, or
both (Windows)
Binary files start with magic numbers that identify
their formats
* 50 4B 03 04 (PK??) for zip (and derivatives, e.g.,
JAR)
* 7F 45 4C 46 (.ELF) for ELF files
* 25 50 44 46 2d (%PDF-) guess!
* 4D 5A (MZ) Windows Executables, Mark Zbikowski
* https://en.wikipedia.org/wiki/List_of_file_signatures
OS must, by default, be able to parse its own
executable formats
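A small sketch that sniffs a file's magic number (the signature table below is a subset of the ones listed on this card; the demo file is fabricated, not a real PDF):

```python
import os, tempfile

# Subset of the signatures on this card, matched at offset 0.
MAGIC = {
    b"PK\x03\x04": "zip/jar",
    b"\x7fELF": "ELF",
    b"%PDF-": "PDF",
    b"MZ": "Windows executable",
}

def sniff(path):
    """Return the format whose magic number starts the file."""
    with open(path, "rb") as f:
        head = f.read(8)
    for sig, name in MAGIC.items():
        if head.startswith(sig):
            return name
    return "unknown"

# Demo: write a fake minimal "PDF" header and sniff it.
fd, path = tempfile.mkstemp()
os.write(fd, b"%PDF-1.7 fake header")
os.close(fd)
result = sniff(path)
os.unlink(path)
print(result)  # PDF
```

This is essentially what the Unix `file` command does, with a much larger signature database.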

89
Q

Files Access

A

Sequential: access a file byte-by-byte or
record-by-record
* Inconvenience: if you want to read the nth
byte, then you must read everything before it!
* Files can be rewound, a name from the
magnetic-tape age (before disks)
Random access: access any byte/record
out of order

90
Q

Attributes (aka metadata) include:

A

Attributes (aka metadata) include:
* Protection information
* Creator
* Creation time
* Owner
* Hidden Flag
* Time of last access
* Time of last change
* Current size

91
Q

File Attribute Manipulation in Linux

A

Check permissions using “ls -l some_path”
* -rwxrw-r-- 1 user group 2 Jan 24 19:59 test.txt
* https://wiki.archlinux.org/index.php/File_permissions_and_attributes
- chmod [u/g/o/a][=/+/-][r/w/x] file
- chown user:group file
- lsattr and chattr to list and change attributes

92
Q

Implementers focus on

A
  • How files and folders are implemented
  • How disk space is managed
  • How to make the file system efficient and
    reliable
  • How to integrate the FS with the rest of the
    OS
93
Q

About (Magnetic) Disks

A

Units
A. Track
B. Geometrical sector
C. Track sector
D. Cluster
Block vs. Sector
* blockdev --getbsz /dev/sda

94
Q

Sector’s size is:

A

512B on traditional HDD
* 4096B on newer ones
* Smallest storage unit
- SSDs simulate sectors

95
Q

Allocation of Blocks to Files

A

Different ways to store files in blocks
1. Contiguous allocation
2. Linked list allocation
3. Linked list with a Table
4. i-nodes

96
Q

Contiguous Allocation Pros and Cons

A

Pros
* Easy to implement
* Fast to store a file
Cons
* Creates unused fragments of disk space
- Used in CD-ROMs

97
Q

Linked List Allocation Pros and Cons

A

Pros
* No more empty fragments
Cons
* Takes longer to read all blocks of a file
* Slow random (vs. sequential) access
* You can easily end up with a disk full of
fragmented files
* “Pointer” bytes wasted
- Storing a block uses a bit more than a block

98
Q

Linked List with a Table Description + Pros and Cons

A

Store pointers in a table
for faster access
* File Allocation Table (FAT), as
implemented in MS-DOS
Pros
* Faster access
Cons
* Table size grows linearly
with disk size
* 1TB disk with 1 KB blocks
has 1 billion entries
* For performance, have the
FAT in RAM

99
Q

i-nodes

A

An i-node stores information about file
blocks + its properties
- An i-node stores a list of block addresses
of a file
- The OS loads in RAM i-nodes of active
files
* If each file has N blocks and there are K
active files, then only N*K addresses loaded

But N is not a constant
* Use the last pointer to point to another i-node
Used in
* Unix File System
- A Unix directory is a list of pairs of filenames and
i-node numbers
* ext to ext4
- ls -i displays the i-node number of a file
- “struct inode” in linux/fs.h
* NTFS

100
Q

Magnetic Disks

A

A collection of platters
Each platter is covered with magnetic
layer
* On both faces
Heads can be positioned on tracks

101
Q

Disk Scheduling on HDD

A

Different in SSD!
Disks merge requests from all processes in one
queue
Performance Criteria: time taken to get data
1. Access time: time to get to the first byte
- Seek time: time taken to move the arm to the right cylinder
- Rotational latency: time needed to get the right sector under the head
2. Disk bandwidth: number of transferred bytes divided
by the time between the first request and the last byte
transferred
Seek time is usually the longest, so we will focus on it

102
Q

Disk Scheduling on HDD: An Example

A

Scenario (Tanenbaum)
* Range of cylinders: 0-39
* Head starting position: cylinder 11
* Request queue: 1, 36, 16, 34, 9, 12
Disks can store pending requests to
cylinders as separate linked lists
* Run requests of a given cylinder in one go

103
Q

First Come First Served

A

Process requests in the same order they were
received
Arm moved by (11, 1, 36, 16, 34, 9, 12):
* 11-> 1: 10
* 1-> 36: 35
* 36-> 16: 20
* 16-> 34: 18
* 34-> 9: 25
* 9-> 12: 3
* Total: 111 arm moves
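The arithmetic above can be checked with a short helper (illustrative code, not from the course):

```python
def total_seek(start, requests):
    """Total arm movement when servicing requests in the given order."""
    pos, moved = start, 0
    for cyl in requests:
        moved += abs(cyl - pos)   # cylinders crossed for this request
        pos = cyl
    return moved

print(total_seek(11, [1, 36, 16, 34, 9, 12]))  # 111
```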

104
Q

Shortest Seek Time First

A

Instead of making non-optimal arm moves
by going to the next received request,
move to the closest requested cylinder

105
Q

Shortest Seek Time First

A

Arm moved by (11, 1, 36, 16, 34, 9, 12):
* 11-> 12: 1
* 12-> 9: 3
* 9->16: 7
* 16->1: 15
* 1->34: 33
* 34->36: 2
* Total: 61
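A greedy sketch of SSF, reproducing the order and total above (names are illustrative):

```python
def sstf_schedule(start, requests):
    """Serve the closest pending cylinder first; return (order, total moves)."""
    pending, pos, order, total = list(requests), start, [], 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # closest cylinder
        pending.remove(nxt)
        order.append(nxt)
        total += abs(nxt - pos)
        pos = nxt
    return order, total

order, total = sstf_schedule(11, [1, 36, 16, 34, 9, 12])
print(order, total)  # -> [12, 9, 16, 1, 34, 36] 61
```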

106
Q

Shortest Seek Time First

A

Problem with SSF
* Say, during processing of request for 16 (1
waiting)
- Request comes for 8: 8 runs, 1 waits
- Request comes for 13: 13 runs, 1 waits
- Etc. and 1 waits …
* On a heavily loaded disk, the arm will tend to
be in the middle and requests on the
extremities will starve
Similar issue with elevators of tall
buildings!

107
Q

Elevator Algorithm

A
  • Maintain 1 bit that gives the direction
  • Find a request in the current direction
  • Direction is reversed when no more
    requests left in the current direction
  • Say, the bit is initially set to UP …
108
Q

Elevator Algorithm: An Example

A

Arm moved by (11, 1, 36, 16, 34, 9, 12):
* 11-> 12: 1
* 12-> 16: 4
* 16-> 34: 18
* 34->36: 2
* 36->9: 27
* 9->1: 8
* Total: 60
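A sketch of the elevator (SCAN) algorithm with the direction bit initially UP, as in the example (the one-shot version here only handles the single sweep up and back):

```python
def elevator_schedule(start, requests):
    """SCAN: sweep up from the start, then reverse; return (order, total moves)."""
    up = sorted(c for c in requests if c >= start)               # served going up
    down = sorted((c for c in requests if c < start), reverse=True)  # then down
    order = up + down
    total, pos = 0, start
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return order, total

order, total = elevator_schedule(11, [1, 36, 16, 34, 9, 12])
print(order, total)  # -> [12, 16, 34, 36, 9, 1] 60
```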

109
Q

Elevator Algorithm Properties

A

Properties
* Usually slower than SSF, but covers the entire
disk
* Upper bound for covering all requests: twice the
size of the entire range
Circular Elevator Algorithm
* Always in the same direction
* Once no more requests in the chosen direction,
start over again
* 11, 12, 16, 34, 36, 1, 9
- instead of 9, 1 in Elevator

110
Q

Review of Disk Scheduling Algorithms

A

On a lightly loaded hard-drive, all
algorithms “converge to” FCFS!

Elevator and derivatives perform better on
heavily loaded hard-drives

File allocation also has a say in the
performance of file access

111
Q

Master Boot Record (MBR)

A

Reminder: disks are divided into sectors
Sector Zero of a disk is the MBR
* Used to boot the computer
The partition table is at the end of the
MBR
* Gives the type, start and end of each partition
* One of the partitions is marked as active
partition

112
Q

Boot process

A

When a machine is booted
* Executes the MBR
* The MBR program locates the active partition
* MBR program reads the first block of the
partition
Aka the Boot Block
* The Boot Block program loads the OS from
the partition

113
Q

Programmer’s Wishlist

A

All programmers dream of a memory that
is:
* Private: their program’s memory is
inaccessible to other processes
* Infinitely large
* Infinitely fast
* Nonvolatile: does not lose content when
powered down
* Inexpensive?!

114
Q

Memory Manager

A

Memory hierarchy: a sequence of layers of
memory chunks, the first is directly connected to
the CPU

When you move away from CPU, the memory is
* Slower, Larger and Cheaper

The memory manager is heavily involved in the
management of this hierarchy
* Allocates memory to processes, knows free memory,
etc.

115
Q

Raw Memory Access: No Memory Abstraction

A

The simplest scenario: processes directly
access physical memory
An instruction like
* MOV R1, 4th_byte
* accesses the actual 4th byte of memory

Cannot have 2 programs in RAM at the
same time: what if both access Byte 4?

Three possible implementations (+minor
variations)

116
Q

Raw Memory Access

A

One process in memory at a time
* (a) in original mainframes, rarely used now
* (b) the OS in Read-Only-Memory; embedded
and handheld devices
* (c) early personal computers used ROM for
the BIOS

117
Q

Limitations Raw Memory Access

A

Only one process can run at a time
* Still can swap processes between RAM and
disk

Processes can access OS memory in
scenarios (a) and (c) and might crash it

Obsolete for computers, but still in use for
very simple systems
* Your washing machine, fridge, kitchen
machine, etc.

118
Q

Raw Memory Access with Many Processes

A

Program Status Word: Early IBM 360
* Can swap processes between RAM and disk
* Split memory in 2KB blocks, each has a 4-bit
protection key
* Each process has a Program Status Word
* A process can access a chunk if its PSW
matches the protection key
* Only the OS can change the PSW

119
Q

Program Status Word Limitations

A

Code in both programs refers to the same
set of addresses
* JMP 28, in process 2 (white), fails in (c)

120
Q

Static Relocation

A

Code in both programs
refers to the same set of
addresses

Solution: Static
Relocation
* Process P2 (white) is loaded at address 16384
* Each address referenced by P2 is incremented by 16384

121
Q

Static Relocation Limitations

A

The OS must set the 4-bit key of each
block of 2KB of the process memory
* Time-consuming

In an instruction like MOV R1, 400 you
need to know if 400 is an address

Need some form of memory abstraction
* Access to abstract addresses rather than real
ones

122
Q

Solving the Protection and Relocation Problems

A

The use of keys and PSW addresses the
protection problem
* But not the relocation one

IBM’s static relocation addresses the
relocation problem
* But slow

Both non-scalable

123
Q

Address Space

A

An address space is an abstraction of the
physical memory
* The set of all memory addresses that a
process can access
Numbered in a different way to the actual physical
addresses being used
* Identical values of addresses from different
processes refer to different physical locations
Address 400 of P1 could be 0x10640
Address 400 of P2 could be 0x20880

124
Q

Dynamic Relocation

A

Used in CDC 6600 and Intel 8088

Uses base and limit registers

A process’s base address is where it was loaded

The limit gives the size of the program
* Access to an @>limit generates a fault

For each memory access, add the base’s value

125
Q

Dynamic Relocation Limitations

A

Intel 8088 did not provide a limit register

Cost
* Addition of @s to base, a bit of a problem
- Propagation of the carry bit
* Comparison to limit, OK

What if two programs, combined, don’t fit in memory?
* Two approaches: swapping and virtual memory

126
Q

Swapping

A

Swapping simply consists of backing up a process
(or more) on disk if the next process to load
doesn’t fit in memory
* Note: the whole process memory is copied

127
Q

Swapping Limitations: Memory holes

A

Swapping creates “holes” in memory:
small ranges of unused memory

Solution? Compact memory by moving all used slots up or down to remove the holes
* BUT: this is quite time-consuming
- If you have 16GB of RAM that copies 8B in 8ns, it
could take 16s

128
Q

Swapping Limitations: Process Size

A

Processes can grow during execution
* Stack: for every function call (a return frees memory)
* Heap: for each memory allocation instruction

OS might already provide room for growth

If the process exceeds the max size, then needs to be relocated

Another Limitation: Bloatware

129
Q

Virtual Memory Context

A

A process might not fit in memory

But also, a collection of processes might not fit in memory
* Swapping entire processes is resource-consuming
* SATA disks transfer rates of 100s MBps, takes
seconds to swap out a process of 1GB and
similar time to swap in a similar process

130
Q

Overlays

A

* Solution from the 60s
* Slice a program into multiple overlays
* Only load the overlay manager (OM)
- The overlay manager loads overlay 0
- Once done, overlay 0 asks the OM to load overlay 1
* Overlays kept on disk, swapped in on demand
* Slicing a program is done by its creator
- A nightmare! Error-prone

131
Q

Virtual Memory and Memory Pages

A

Needed to make the OS slice programs
Virtual Memory
* Each process has an address space
* The @ space is cut into chunks, aka memory pages, by the OS
* A memory page is mapped onto a contiguous
range of physical addresses
E.g., range 20480000-20484095 of virtual mem. maps
onto 10240000-10244095 of physical mem.
* When a process references a virtual address, the corresponding page is brought from the disk, if not already loaded in RAM

132
Q

What is Memory Management Unit (MMU)

A

Does the translation of virtual addresses
into physical addresses
E.g., translates virtual @1 into physical @
1024000001

133
Q

MMU Operations: Example

A

* Consider a machine with
- 32KB physical memory
- 4KB pages
- Uses virtual memory to simulate a memory of 64KB
* A page fits in a page frame
- 16 virtual pages
- 8 physical page frames
* Sizes of memory addresses?
- 16-bit virtual, 15-bit physical
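A sketch of the translation the MMU performs for this machine: split the virtual address into page number and offset, then swap the page number for a frame number via the page table. The sample page-table contents are illustrative.

```python
PAGE_SIZE = 4096  # 4 KB pages, as in the example

def mmu_translate(virtual_addr, page_table):
    """Split a virtual address into page number + offset,
    then replace the page number with its physical frame number."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table[page]
    if frame is None:          # present/absent bit is 0 -> page fault
        raise KeyError(f"page fault on page {page}")
    return frame * PAGE_SIZE + offset

# 16 virtual pages, 8 frames; None = page not in RAM
page_table = [2, 1, None, 5] + [None] * 12
print(mmu_translate(4, page_table))  # page 0, offset 4 -> frame 2 -> 8196
```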

134
Q

MMU Operations: Page Fault

A

The OS keeps track of which pages are
loaded in RAM
* A present/absent bit

If referenced address is in a page not present in RAM, the CPU traps to the OS
1. Page fault trap
2. The OS chooses a page from RAM and evicts it from its page frame
- Backs it up to disk if its content has changed
3. Brings the requested page from the disk

135
Q

Working Set and Thrashing

A

Pages currently in RAM are the process’ working set

Thrashing when a process keeps causing page faults

136
Q

Page Size

A

Hardware comes with its pre-defined page size, e.g., 4096B (4KB)

The OS can define its software page size to, say, 2 hardware pages

On average, half of the last page will be unused
* Internal fragmentation
* This pushes to use small page sizes
* But very small pages result in higher management costs

137
Q

Paging as a Performance Bottleneck

A

Page information stored in a page table
* The number of a virtual page is used as index
* If the present/absent bit is 1
- The cell contains the page location in RAM
* Else, the page is not loaded

Need to access the page table for each memory reference

Worse, need to access the page table in order to read next instruction to execute

Big decision: where do we store the page
table?
* The table can be large
- 4KB pages, 32-bit address => 1 million pages
- One-page table per process!
- Naive approaches
* Store the page table in main memory
* Load the whole page table of a process into the MMU

138
Q

Translation Lookaside Buffer (TLB)

A

Most programs access a small number of pages most of the time

Add Translation Lookaside Buffer (TLB) to MMU
* Part of paging hardware
Stores frequently accessed frames
* Acts as a cache for the page table
* Rarely more than 256 entries

If referenced memory address is in TLB, then no need to access the page table
If not, then fetch the page entry from the page table and load it into the TLB

Valid bit indicates whether page is in use or not

139
Q

Effective Access Time

A

Effective access time assumes
* The requested virtual page is in RAM

Given
* It takes 30 nanoseconds to search the TLB
* It takes 100 nanoseconds to access memory
* An 80% hit ratio, i.e., the desired page number is in the TLB 80% of the time
Finding the Effective Access Time for a memory address
* If the page number is in the TLB: 30ns TLB + 100ns memory = 130ns
* Otherwise, not in the TLB (30ns)
- Search memory for the page table and frame number (+100ns)
- Search the desired byte in memory (+100ns)
- 230 nanoseconds in total
* effective access time = 0.80 x 130 + 0.20 x 230 = 150ns
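The weighted-average computation above as a sketch (function name and parameters are illustrative):

```python
def effective_access_time(tlb_ns, mem_ns, hit_ratio, miss_ratio):
    """Weighted average of the TLB-hit and TLB-miss access paths."""
    hit = tlb_ns + mem_ns             # TLB search + one memory access
    miss = tlb_ns + mem_ns + mem_ns   # TLB search + page table + the byte itself
    return hit_ratio * hit + miss_ratio * miss

print(effective_access_time(30, 100, 0.80, 0.20))  # -> 150.0
```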

140
Q

Segments

A

Each component of the program can be given a segment

A segment is a contiguous set of addresses

Segments sizes might be different

141
Q

Page Replacement Algorithms

A

Required to decide on which page to evict
* Should not evict a page that is going to be used soon

If a page has been modified, it must be written back to disk

142
Q

Optimal Replacement Algorithms

A

Estimate the time it will take before you need to
access each page

Order the pages in ascending order of their next use
* E.g., page 3 will be used next, then page 17, then page 23, …, then page 5

Evict the last page
* E.g., page 5

Pick the one which will not be used before the longest
time

Not possible unless we know when pages will be referenced (crystal ball)

Used as ideal reference algorithm
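A sketch of the (unimplementable) victim choice, given a crystal ball that reveals the future reference order; the example mirrors the page numbers above:

```python
def optimal_victim(frames, future_refs):
    """Evict the resident page whose next use lies farthest in the future
    (a page never used again is the perfect victim)."""
    def next_use(page):
        return future_refs.index(page) if page in future_refs else float("inf")
    return max(frames, key=next_use)

# Pages 3, 17, 23 and 5 are resident; they will next be used in that order.
print(optimal_victim([3, 17, 23, 5], [3, 17, 23, 5]))  # -> 5
```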

143
Q

Not Recently Used

A

Uses two bits:
* R: recently used
* M: modified

Periodically, set R to Zero
A page belongs to one of four classes
* Class 0: not referenced, not modified
* Class 1: not referenced, modified
* Class 2: referenced, not modified
* Class 3: referenced, modified
Evict a page from the lowest class

144
Q

First Come, First Served (First-In, First-Out)

A

Maintains a list ordered by ascending arrival time
Evict the head of the list
Problem
* If a page in the list is used again, it does not
change its position
* The oldest page might still be in heavy use

145
Q

Second Chance Algorithm

A

Similar to FIFO, but checks the R bit
If 1
* Put the page at the end of the list, set R to 0
If 0
* Evict
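A sketch of second chance as a queue of (page, R) pairs; the reference string and frame count are illustrative:

```python
from collections import deque

def second_chance(frames, refs):
    """FIFO with an R bit: a referenced page gets a second trip round the
    queue before eviction. Returns the number of page faults."""
    queue = deque()  # (page, R) pairs; head = oldest
    faults = 0
    for page in refs:
        if page in {p for p, _ in queue}:
            # Hit: set the page's R bit to 1
            queue = deque((p, 1 if p == page else r) for p, r in queue)
            continue
        faults += 1
        if len(queue) == frames:
            while queue[0][1] == 1:        # R == 1: give it a second chance
                p, _ = queue.popleft()
                queue.append((p, 0))
            queue.popleft()                # evict the head with R == 0
        queue.append((page, 0))
    return faults

print(second_chance(3, [1, 2, 3, 1, 4]))  # -> 4 (page 2 is the eviction victim)
```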

146
Q

Least Recently Used (LRU)

A

“Least” not as in a counter
* Not “Least Frequently Used” (LFU)

Evict the page that has not been used for the longest time
Maintain a list of pages according to their
recency of usage
* From most recent (head) to least recent (tail)
Resource consuming, will potentially need
to frequently move pages in the queue
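A sketch of the LRU recency list using an ordered dict (most recently used at the tail); reference string and frame count are illustrative:

```python
from collections import OrderedDict

def lru_faults(frames, refs):
    """LRU page replacement; returns the number of page faults."""
    cache = OrderedDict()
    faults = 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict the least recently used
            cache[page] = True
    return faults

print(lru_faults(3, [1, 2, 3, 1, 4, 5]))  # -> 5
```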

147
Q

Process ID (PID)

A

A unique identifier for each process

148
Q

Parent Process ID (PPID)

A

The PID of the parent process

149
Q

Process State

A

The current state of the process (e.g., running, blocked, suspended, etc.).

150
Q

Process Priority

A

The priority level is assigned to the process by the operating system.

151
Q

Process Resources

A

The resources allocated to the process by the operating system (e.g., memory, CPU time, I/O devices).

152
Q

Thread ID (TID)

A

A unique identifier for each thread within a process.

153
Q

Thread State

A

The current state of the thread (e.g., running, blocked, suspended, etc.).

154
Q

Register Set

A

The set of registers used by the thread for storing its execution context.

155
Q

Stack Pointer (SP)

A

The memory address pointing to the current top of the thread’s stack.

156
Q

Thread-specific Data

A

Data structures specific to a particular thread, such as thread-local storage or thread-specific variables.

157
Q

Blocked (B): (state diagram of a process)

A

The process is waiting for a resource (e.g., I/O) to become available, and it cannot continue until the resource is ready.

158
Q

Running (R): (state diagram of a process)

A

The process is currently running on the CPU.

159
Q

Ready (R) : (state diagram of a process)

A

The process is waiting for the CPU to become available, and it is ready to execute.

160
Q

Blocked to Ready state of a process

A

A process that was waiting for an I/O operation or another event to complete becomes ready to run again once the event has finished or the resource becomes available.

161
Q

Ready to Running state of a process

A

A process in the ready queue is selected by the scheduler to run on the CPU.

162
Q

Running to Blocked state of a process

A

A process that needs to perform an I/O operation or another event that cannot be completed immediately must wait for the event to complete, and therefore, it becomes blocked.

163
Q

Running to Ready state of a process

A

A process that is preempted by the scheduler or finishes its CPU burst returns to the ready queue, waiting for another opportunity to run on the CPU.

164
Q

First-Come, First-Served (FCFS)

A

The operating system schedules processes in the order they arrive, with the first process in the queue being the first to receive CPU time.

165
Q

Round Robin (RR):

A

The operating system allocates a fixed time slice or quantum of CPU time to each process in turn, with processes being rotated through the CPU based on their arrival time and the length of their CPU bursts.

166
Q

Priority-based scheduling:

A

The operating system assigns each process a priority level based on its characteristics, such as its level of interactivity or its importance to the system, and schedules processes with higher priorities before those with lower priorities.

167
Q

What is a race condition?

A

A race condition is a situation that occurs when two or more processes or threads access a shared resource or data concurrently and try to modify it at the same time, leading to unpredictable or incorrect behaviour.

Proper synchronization techniques and mutual exclusion mechanisms such as locks, semaphores, or monitors can be used to prevent race conditions and ensure safe access to shared resources.

168
Q

What is a critical region

A

A critical region is a section of code or a segment of a program where shared resources, such as variables, data structures, or I/O devices, are accessed and modified by multiple processes or threads concurrently.

Since the execution of the critical region by multiple processes or threads can lead to race conditions and inconsistencies, it is necessary to ensure mutual exclusion and synchronization of access to the shared resources.

To achieve this, critical regions are typically protected by synchronization mechanisms such as locks, semaphores, or monitors that allow only one process or thread at a time to access the critical region and ensure that the modifications to the shared resources are made atomically and in a consistent order.

The use of critical regions and synchronization mechanisms is essential to prevent race conditions, ensure data integrity and consistency, and avoid conflicts and deadlocks in concurrent software systems.
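A minimal Python sketch of a critical region protected by a lock; the counter increment is the shared read-modify-write that would otherwise race:

```python
import threading

counter = 0
lock = threading.Lock()          # one lock guards the critical region

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # only one thread at a time executes this
            counter += 1         # read-modify-write on shared state

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # -> 40000 (without the lock, lost updates could make it lower)
```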

169
Q

What is TSL

A

The TSL (Test and Set Lock) instruction is a low-level synchronization primitive that is used to implement mutual exclusion and critical sections in concurrent software systems.

The TSL instruction typically takes two arguments: a memory location (the lock) and a register. It atomically copies the old value of the memory location into the register and stores a non-zero value into the location; the caller then tests the saved value to see whether the lock was free.

The TSL instruction guarantees that the entire operation is executed atomically and is not interrupted by other threads or processes.

However, the TSL instruction is a low-level primitive and can be prone to several issues such as busy waiting, starvation, or deadlocks if not used correctly.

Therefore, higher-level synchronization primitives such as locks, semaphores, or monitors are typically used instead of the TSL instruction in modern concurrent programming.

170
Q

Explain the abstract idea of semaphores.

A

Semaphores are an abstract synchronization primitive used to control access to shared resources in a concurrent system.

They provide two basic operations: wait and signal, which can be used to coordinate the execution of multiple threads or processes. Semaphores are a fundamental building block for many higher-level synchronization mechanisms such as locks, monitors, and condition variables.

171
Q

How can you use a semaphore to implement mutual exclusion?

A

Mutual exclusion can be implemented using a semaphore, also known as a mutex.
The mutex semaphore is initialized to 1, indicating that the critical section is initially free.
When a thread wants to enter the critical section, it first executes a wait operation on the mutex semaphore.
If the value of the semaphore is 1, indicating that the critical section is free, the thread decrements the value of the semaphore to 0 and enters the critical section.
If the value of the semaphore is 0, indicating that the critical section is already being used by another thread, the wait operation blocks the calling thread until the value of the semaphore becomes 1.
When the thread leaves the critical section, it executes a signal operation on the mutex semaphore, which increments the value of the semaphore to 1 and wakes up any waiting threads. This ensures that only one thread can be in the critical section at any given time, providing mutual exclusion.
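The scheme above, sketched with Python's counting semaphore initialized to 1 (variable names are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # initialized to 1: critical section is free
balance = 0                      # shared state protected by the mutex

def deposit(times):
    global balance
    for _ in range(times):
        mutex.acquire()          # "wait": blocks while the value is 0
        balance += 1             # critical section
        mutex.release()          # "signal": value back to 1, wakes a waiter

threads = [threading.Thread(target=deposit, args=(5_000,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # -> 15000
```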

172
Q

Concurrency definition

A

Concurrency refers to the property of a system in which multiple independent tasks or processes can make progress simultaneously, potentially overlapping in time.

173
Q

Parallelism Definition

A

Parallelism refers to the execution of multiple tasks or processes simultaneously using multiple computing resources, such as multiple CPUs, cores, or nodes.

174
Q

Give an example of concurrency that is not parallelism

A

One example of concurrency that is not parallelism is a single-threaded web server handling multiple requests from different clients.
While the server can only execute one request at a time, it can handle multiple requests concurrently by switching between them in rapid succession.

For example, while one request is blocked waiting on I/O, the server can switch to another incoming request and handle it in a similar manner. This way, the server can handle multiple requests concurrently, even though it is not using multiple processors or cores.

175
Q

Explain how Latency and Throughput are related on a single core
system and how pipelining (using more than one core) can affect
that relation.

A

On a single-core system, latency and throughput are often inversely related: reducing latency usually comes at the cost of reducing throughput, and vice versa.
This is because increasing the speed at which tasks are executed (to reduce latency) often requires more resources, which can reduce the number of tasks that can be executed in parallel (thus reducing throughput).

Pipelining (using more than one core) can help to mitigate this by allowing multiple tasks to be executed simultaneously, potentially increasing both throughput and reducing latency.

By splitting tasks into smaller sub-tasks and assigning them to different cores, pipelining can increase the total amount of work that can be done in a given period of time (throughput) while also reducing the amount of time it takes for a task to be completed (latency).

176
Q

Consider a stream between two processes: a producer process P
and a consumer process C.
(What UNIX construct can be used to implement this?

A

The Unix construct that can be used to implement a stream between two processes is a pipe.
A pipe is a communication mechanism that allows the output of one process (the producer) to be connected to the input of another process (the consumer) so that data can be transferred between them. The pipe behaves like a unidirectional stream, with data flowing from the producer to the consumer.
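A minimal sketch with `os.pipe`; for brevity the same process plays both the producer and consumer roles, but each file descriptor could just as well be handed to a separate process:

```python
import os

# Producer writes into one end of the pipe; consumer reads the other end.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"hello from the producer")
os.close(write_fd)                 # closing the write end signals EOF
data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())  # -> hello from the producer
```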

177
Q

Can your pipe approach be used if more than one producer /
the consumer is involved?
If so, how? If not, why not?

A

Yes, the pipe approach can be used when more than one producer and/or consumer is involved.

Multiple producers can write data to the same pipe, which can be read by a single consumer. The consumer can use techniques such as polling or blocking to read data from the pipe as it becomes available.

Similarly, multiple consumers can read data from the same pipe, which is produced by a single producer. The producer can write data to the pipe as it becomes available, and the consumers can use techniques such as polling or blocking to read data from the pipe as it arrives.

178
Q

A stream between threads can be implemented in a lock-free manner if the following conditions are met:

A
  1. The data is transferred in fixed-size chunks, with a well-defined boundary between each chunk.
  2. The producer and consumer threads operate independently, without requiring any synchronization or coordination between them.
  3. The producer thread writes data to a shared buffer, and the consumer thread reads data from the same buffer.
  4. The buffer is implemented using lock-free data structures, such as circular buffers or linked lists. This allows the producer and consumer threads to access the buffer without requiring any locks or other synchronization primitives.
179
Q

A program counter is:

A

A register used to store the address of the next instruction to execute for a program. Each process has its own program counter.

180
Q

A binary file can be defined as:

A

A file that uses 8 bits out of each byte

181
Q

The address space of a process refers to the space it uses in the:

A

RAM

182
Q

In the context of file management systems, “random access” refers to:

A

No need to read bytes 1 to n-1 before reading or writing byte n

183
Q

A stack frame is created

A

when a function is called

184
Q

What makes a text file executable

A

chmod a+x file

185
Q

For which systems is non-preemptive scheduling more suitable?

A

Batch

186
Q

In an OS that implements paging, what is true about page frame size

A

The size of a page has to be identical to the size of a page frame

187
Q

A page fault occurs when

A

A requested virtual memory page is not present in RAM

188
Q

“On average, only 50% of each memory page will contain useful data.”

A

False, only the last page of a process will, on average, contain 50% of useful data.

189
Q

A process references memory address 2000. If it is running on an OS that uses dynamic relocation where the base address of that process is 3000 and its limit is 1000, then:

A

The access request will fail because the address 2000 exceeds the limit of 1000

190
Q

Which of the following statements is correct:
1- SJF is suitable for interactive operating systems
2- SRT is preemptive
3- Both preemptive and non-preemptive variants of priority scheduling are suitable for a
smartphone OS

A

2- SRT is preemptive

191
Q

Latency Definition

A

Latency refers to the amount of time it takes for a single task to complete. It is the delay between the initiation of a request and the time when the response is received. It is usually measured in units of time, such as seconds, milliseconds, or microseconds.

192
Q

Threads vs Processes

A

Threads
– Abstracts & hides CPUs
– OS shares CPUs over Threads
– Hence schedules threads to CPUs

Process
– Abstracts & hides computer resources (except CPUs)
– Shares computer resources (RAM, Disc and I/O) over processes

193
Q

What is the Context of a Thread?

A

Thread Context
= all info needed to restore a Thread to run in a CPU
e.g.
program counter, stack pointer, thread state,
thread register values, pointer to thread code,
pointer to parent process etc.

194
Q

What causes Context Switches? (Non pre-emptive Scheduler)

A
  • Running thread calls an OS service e.g.
  • I/O service
  • Synchronisation primitive
  • Delay function
  • etc.
195
Q

What causes Context Switches? pre-emptive Scheduler

A

Pre-emptive Scheduler
* All non pre-emptive reasons plus
interrupts e.g.
* Timer interrupt for time slicing
* I/O interrupt (e.g. disc or comms
controller has finished DMA, GUI
button pressed etc.)
* Software exception

196
Q

“OS” as used in the Concurrency part of
F29OC

A

The code that is responsible for sharing the CPU(s) between threads.
Note that this can be:
a) Part of a standard Operating System (e.g. Linux).
b) Part of a ‘program’ e.g. part of Java’s JVM (executes in user space).
c) A combination of (a) and (b).

197
Q

How does the “OS” execute on multiple CPUs?

A

The OS will most likely run on the last interrupted CPU or
where a Thread calls an OS service

198
Q

Can a Thread execute on multiple CPUs?

A

Note that the OS can switch threads across CPUs (this will be dependent upon the scheduling algorithm)
* But also note that a single thread will not execute on multiple CPUs at the same time

199
Q

Advantages of Multithreaded Systems

A
  • Speed of execution on multicore systems (inc. logical cores)
  • Responsiveness
  • Ease of programming
200
Q

Disadvantages of Multithreaded Systems

A

Disadvantages
* You need to understand and program in a multithreaded
paradigm

201
Q

Processes vs Threads (Mikes Explanation)

A

In a multithreaded OS:
* Threads used to share CPU(s)
* Context Switching between Threads is generally fast
* Context and status stored in TCB
* Processes used to share other resources
(e.g. memory, I/O systems etc.)

  • Processes can have many threads
  • Generally threads can access all of a parent process’s resources
  • Switching between processes generally much slower than a Thread
    Context Switch.
  • Process info. stored in its PCB
    (process control block)
202
Q

Definition of Race Condition (Mike)

A

where the answer depends upon the scheduling of the
threads!!!!

(code is not “thread-safe”)

Solution: Protect with Critical Region

203
Q

Critical Region (mike)

A

“Critical Region” = section of code that can only
be executed by one thread at a time

204
Q

Multiple Critical regions

A

A thread that has control (ownership) of one critical region prevents access to all critical regions protected by that lock.

205
Q

Multiple locks

A

You can use multiple locks to protect multiple critical regions
* A thread that has control (ownership) of one lock does not prevent other threads from gaining control of other locks and their critical
regions

206
Q

What is a ‘Shared Resource’ or ‘Shared State’?

A

Any mutable object that can accessed by more than one
thread

207
Q

How Critical Regions prevent races

A

Prevent concurrent access to shared objects and
variables

208
Q

Critical Region (Mike)

A

“Critical Region” = section of code that can only
be executed by one thread at a time

209
Q

Java Extrinsic Monitors

A

– Use for Mutual Exclusion
– Use for thread synchronization

210
Q

Classic Monitor Model
– MX Properties

A

Monitor (instances) contain a set of mutually exclusive methods. They’re similar but more flexible than critical regions.

Thus once a thread has entered a monitor (i.e. started to execute any of its methods) it is said to have ownership of the monitor.

Once it finishes executing a method – then it releases ownership.
If a thread has ownership then all other threads will be blocked when they attempt to enter the monitor.

211
Q

Intrinsic Monitors

A

Use synchronized key word and a single implicit condition
variable per synchronised object

212
Q

Extrinsic Monitors

A

Uses explicit locks (e.g. ReentrantLock) and explicit condition
variables that are properties of their locks

213
Q

Use for Synchronisation

A

We can declare and use Condition variables
with locks in order to synchronise the operation
of threads

213
Q

Monitor Terminology (Ownership, Signaller, Waiter)

A
  • Ownership: the thread that has MX control of the monitor
  • Signaller: a thread that calls lock.signal()
  • Waiter: a thread that calls lock.await()
214
Q

Interface Condition

A

The key property that waiting for a condition provides is that it atomically releases the associated lock and suspends the current thread.

215
Q

Hoare (signaller yields) (Mike)

A
  • OS selects (say) t1 from cv1 queue
  • Signaller releases ownership to waiter (t1) and t1 gains ownership and starts running
  • Note that if the signal is not at the end of the signaller’s (t3’s) method call, then ownership will revert back to the signaller (t3) at some stage (but after t1 releases ownership) so that it (t3) can finish execution of its method

– Gotcha: signaller releases MX control
and must then regain it

216
Q

Mesa (signaller continues) (Mike)

A
  • OS selects (say) t1 from cv1 wait queue
  • waiter (t1) added to MX (ownership) queue and signaller (t3) continues.
  • Signaller (t3) exits monitor method and releases ownership
  • Next thread queued on ownership Q (say t1) moved to ready Q …
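Python's `threading.Condition` follows Mesa semantics: the signaller keeps ownership, so a woken waiter must re-check its condition in a `while` loop. A minimal sketch (names are illustrative):

```python
import threading

lock = threading.Lock()
nonempty = threading.Condition(lock)   # condition variable tied to the lock
items = []                             # shared state guarded by the lock

def consumer(out):
    with lock:                         # take ownership of the monitor
        while not items:               # Mesa-style re-check, not "if"
            nonempty.wait()            # atomically release lock + suspend
        out.append(items.pop(0))

def producer(value):
    with lock:
        items.append(value)
        nonempty.notify()              # signaller continues; waiter re-queues

result = []
c = threading.Thread(target=consumer, args=(result,))
c.start()
producer(42)
c.join()
print(result)  # -> [42]
```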
217
Q

Why Use a Buffer?

A
  1. Decouple Timing of Consumer and Producer
  2. Act as Collector for Multiple Producers
  3. Act as Distributor for Multiple Consumers
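The decoupling in point 1 can be sketched with Python's thread-safe bounded queue (sizes and values are illustrative):

```python
import queue
import threading

# A bounded buffer decouples producer and consumer timing: the producer
# blocks when the buffer is full, the consumer blocks when it is empty.
buf = queue.Queue(maxsize=2)
received = []

def producer():
    for i in range(5):
        buf.put(i)                   # blocks if 2 items are already waiting

def consumer():
    for _ in range(5):
        received.append(buf.get())   # blocks if the buffer is empty

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(received)  # -> [0, 1, 2, 3, 4]
```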
217
Q

Why Use a Buffer?

A
  1. Decouple Timing of Consumer and Producer
  2. Act as Collector for Multiple Producers
  3. Act as Distributor for Multiple Consumers
218
Q

Shared State?

A
  • Pass by value to thread = not part of shared state
  • Passed by reference to thread = Part of shared state
219
Q

Thread States in Busy Wait

A

Always Runnable – in either ReadyToRun or
Running (on a CPU)

  • Can use lots of processor cycles and make
    system unresponsive!

For safety can add sleep into loop
* Prevents processor hogging but increases
average latency

220
Q

What OS-provided software mechanisms can be used to achieve
communication between two processes? Name 3 different
concepts and explain in one sentence each what they provide
and how that can be used for communication.

A

Three OS-provided software mechanisms for achieving communication between two processes are:

Pipes: A pipe is a unidirectional communication channel that allows the output of one process to be connected to the input of another process. Pipes are typically used for simple, sequential communication between two processes.

Sockets: Sockets are used for communication between processes over a network. They provide a standardized interface for communication, allowing processes to communicate with each other regardless of their location or underlying hardware.

Shared memory: Shared memory allows multiple processes to access the same physical memory space, allowing for efficient communication between them. This can be used for high-performance, low-latency communication between processes that require frequent exchange of data.