Exam 1 Flashcards

1
Q

Simple OS Definition?

P1L2: Introduction to Operating Systems

A

Special piece of software that abstracts and arbitrates the use of a computer system

Arbitration (manage hardware) | Abstractions (simplify view of hardware)

P1L2: Introduction to Operating Systems

2
Q

High-level OS tasks?

How is an OS like a Toy Shop Manager?

P1L2: Introduction to Operating Systems

A
  1. Direct operational resources - control use of hardware resources
  2. Enforce working policies - fair resource access/limit resource usage/etc
  3. Mitigate difficulty of complex tasks - abstract hardware details (system calls)

P1L2: Introduction to Operating Systems

3
Q

What does a computing system consist of?

P1L2: Introduction to Operating Systems

A
  1. Processing Element - CPUs (CPUs with multiple cores have multiple processing elements)
  2. Memory - RAM
  3. Network Interconnects - WiFi Card / Ethernet ports
  4. Graphical Processing Elements - GPUs
  5. Storage - HDDs, SSDs, Flash Devices (USB drives)

All are hardware components, and typically are used by multiple apps

P1L2: Introduction to Operating Systems

4
Q

What is an Operating System?

P1L2: Introduction to Operating Systems

A

A layer of systems software that sits between the hardware and the software applications.
* Directly has privileged access to the underlying hardware.
* Hides hardware complexity.
* Manages hardware on behalf of one or more applications according to some predefined policies.
* Ensures applications are isolated and protected from one another.

There is not ONE formal definition of what an OS is.

P1L2: Introduction to Operating Systems

5
Q

What are the roles of an OS?

P1L2: Introduction to Operating Systems

A
  1. Hide hardware complexity from both the applications and application developers.
  2. Manages the resources of the hardware on behalf of the executing applications.
  3. Ensures that each application is appropriately isolated and protected so that it may complete its task(s).

P1L2: Introduction to Operating Systems

6
Q

Which of the following are likely components of an operating system?
* File Editor
* File System
* Device Driver
* Cache Memory
* Web Browser
* Scheduler

P1L2: Introduction to Operating Systems

A
  • File System
  • Device Driver
  • Scheduler

P1L2: Introduction to Operating Systems

7
Q

Indicate for each of the following options whether they are an example of Abstractions (B) or Arbitration (R).
* Distributing memory between multiple processes
* Supporting different types of speakers
* Interchangeable access of hard disk or SSD

Arbitration (manage hardware) | Abstractions (simplify view of hardware)

P1L2: Introduction to Operating Systems

A
  • Distributing memory between multiple processes - Arbitration (R)
  • Supporting different types of speakers - Abstractions (B)
  • Interchangeable access of hard disk or SSD - Abstractions (B)

P1L2: Introduction to Operating Systems

8
Q

What are some examples of OS environment types?

P1L2: Introduction to Operating Systems

A
  • Desktop
  • Embedded
  • Ultra High-end machines (mainframes)

P1L2: Introduction to Operating Systems

9
Q

What are examples of Desktop OS environments?

P1L2: Introduction to Operating Systems

A
  • Microsoft Windows
  • Unix-based (MacOS X [BSD], Linux)

P1L2: Introduction to Operating Systems

10
Q

What are examples of Embedded OS environments

P1L2: Introduction to Operating Systems

A
  • Android
  • iOS
  • Symbian

P1L2: Introduction to Operating Systems

11
Q

What are some OS Abstraction examples?

P1L2: Introduction to Operating Systems

A
  • Process
  • Thread
  • File
  • Socket
  • Memory Page

P1L2: Introduction to Operating Systems

12
Q

What are some OS Mechanism examples?

P1L2: Introduction to Operating Systems

A
  • Create
  • Schedule
  • Open
  • Write
  • Allocate

P1L2: Introduction to Operating Systems

13
Q

What are some OS Policy examples?

P1L2: Introduction to Operating Systems

A
  • Least-Recently Used (LRU)
  • Earliest Deadline First (EDF)

P1L2: Introduction to Operating Systems

14
Q

OS Design Principles

Describe “Separation of Mechanism & Policy”

P1L2: Introduction to Operating Systems

A
  • Implement flexible mechanisms to support many policies
  • Examples: LRU, LFU, random

P1L2: Introduction to Operating Systems

15
Q

OS Design Principles

Describe “Optimize for Common Case”

P1L2: Introduction to Operating Systems

A

Based on the common case, pick a policy (or policies) that best supports it, given the underlying mechanisms and abstractions available/supported.

Questions used to determine Common Case:
* Where will the OS be used?
* What will the user want to execute on that machine?
* What are the workload requirements?

P1L2: Introduction to Operating Systems

16
Q

What are the modes computer platforms typically distinguish?

P1L2: Introduction to Operating Systems

A
  • (Unprivileged) User-Level - applications
  • (Privileged) Kernel-Level - operating systems with direct hardware access

P1L2: Introduction to Operating Systems

17
Q

Can you switch from User to Kernel mode?

P1L2: Introduction to Operating Systems

A

Yes. Most modern hardware supports switching between User and Kernel modes; a special bit within the CPU indicates the current mode.

P1L2: Introduction to Operating Systems

18
Q

What are Trap Instructions?

P1L2: Introduction to Operating Systems

A

Special instructions that can be used to switch to Kernel mode, initiated when user-level applications attempt a kernel-level task.

The OS will determine if the task should be allowed, and if so, switch to kernel-level and execute the task.

P1L2: Introduction to Operating Systems

19
Q

What are System Calls?

P1L2: Introduction to Operating Systems

A

The OS provides an interface: a set of operations that applications can invoke, allowing specific kernel-level services/tasks to be executed on their behalf.

  • open (file)
  • send (socket)
  • mmap (memory)

P1L2: Introduction to Operating Systems
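
A minimal sketch of system calls in action, using Python purely for illustration (the file path is a hypothetical example): Python's `os` module functions are thin wrappers around the corresponding system calls.

```python
import os

# Hypothetical demo file path; os.open/os.write/os.read wrap the
# open/write/read system calls the kernel executes on our behalf.
path = "/tmp/syscall_demo.txt"

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open(2)
os.write(fd, b"hello")                                     # write(2)
os.close(fd)                                               # close(2)

fd = os.open(path, os.O_RDONLY)                            # open(2)
data = os.read(fd, 5)                                      # read(2)
os.close(fd)
print(data)  # b'hello'
```

Each of these calls traps into the kernel, which performs the privileged work and returns the result to the application.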

20
Q

What are Signals?

P1L2: Introduction to Operating Systems

A

A mechanism for the operating system to pass notifications into applications.

P1L2: Introduction to Operating Systems
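
A small sketch of the OS passing a notification into an application, shown with Python's `signal` module (Unix-only; `SIGUSR1` is just an example signal chosen here):

```python
import os
import signal

received = []

def handler(signum, frame):
    # The OS delivers the signal; this handler runs inside the application.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)  # register a handler for SIGUSR1
os.kill(os.getpid(), signal.SIGUSR1)    # ask the OS to deliver the signal to us
print(received == [signal.SIGUSR1])     # True
```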

21
Q

What must an Application do to make a System Call?

P1L2: Introduction to Operating Systems

A
  • Write arguments
  • Save relevant data at well-defined location
  • Execute/make system call

P1L2: Introduction to Operating Systems

22
Q

What does it mean when System Calls are executed in “Synchronous Mode?”

P1L2: Introduction to Operating Systems

A

The Application will wait until the System Call is fully completed before moving to the next step/task.

P1L2: Introduction to Operating Systems

23
Q

What impact can User/Kernel Transitions have?

P1L2: Introduction to Operating Systems

A
  • Are not cheap in terms of resource usage (cache memory)
  • Can cause a switch of locality (hardware cache affected)

P1L2: Introduction to Operating Systems

24
Q

Describe Hot and Cold Cache

P1L2: Introduction to Operating Systems

A
  • Hot - an application is accessing the cache when it contains the data/addresses it needs
  • Cold - an application is accessing the cache when it does not contain the data/address it needs, forcing it to retrieve the data/address from main memory

P1L2: Introduction to Operating Systems

25
Q

What are OS Services?

P1L2: Introduction to Operating Systems

A

Services provided by the OS that give applications and application developers a number of useful types of functionality.
  • Process management
  • File management
  • Device management
  • Memory management
  • Storage management
  • Security
  • etc.

P1L2: Introduction to Operating Systems

26
Q

What is Monolithic OS design?

P1L2: Introduction to Operating Systems

A

Every possible service that any application would require, or that any type of hardware would demand, is already part of the OS.

P1L2: Introduction to Operating Systems

27
Q

What are benefits of Monolithic OS design?

P1L2: Introduction to Operating Systems

A
  • Everything is included
  • Inlining, compile-time optimizations

P1L2: Introduction to Operating Systems

28
Q

What are some downsides to Monolithic OS design?

P1L2: Introduction to Operating Systems

A
  • Customization, portability, manageability are difficult
  • Large memory footprint
  • Performance

P1L2: Introduction to Operating Systems

29
Q

What is Modular OS design?

P1L2: Introduction to Operating Systems

A

A more common approach (e.g., Linux). The OS has a number of basic services and APIs, but specifies certain interfaces that any module must implement in order to be part of the operating system. Modules can be dynamically loaded or replaced to implement the interfaces in the desired way.

P1L2: Introduction to Operating Systems

30
Q

What are some benefits of Modular OS design?

P1L2: Introduction to Operating Systems

A
  • Maintainable
  • Smaller footprint
  • Fewer resource needs

P1L2: Introduction to Operating Systems

31
Q

What are some downsides to Modular OS design?

P1L2: Introduction to Operating Systems

A
  • Indirection can impact performance
  • Maintenance can still be an issue, since modules may come from many disparate sources

P1L2: Introduction to Operating Systems

32
Q

What is the Microkernel OS design?

P1L2: Introduction to Operating Systems

A

An OS design that contains only the most basic primitives at the OS level. Address spaces, threads, and IPC are the typical services included at the OS level; all other services run at the user (unprivileged) level.

P1L2: Introduction to Operating Systems

33
Q

What do Microkernel OSes typically support for services?

P1L2: Introduction to Operating Systems

A

Inter-Process Communication (IPC) is typically included as a core abstraction/mechanism within Microkernel OSes.

P1L2: Introduction to Operating Systems

34
Q

What are some benefits of Microkernel OS design?

P1L2: Introduction to Operating Systems

A
  • Size (very small)
  • Verifiability

P1L2: Introduction to Operating Systems

35
Q

What are some downsides of Microkernel OS design?

P1L2: Introduction to Operating Systems

A
  • Portability
  • Complexity of software development
  • Cost of user/kernel crossings

P1L2: Introduction to Operating Systems

36
Q

Describe the Linux Architecture

P1L2: Introduction to Operating Systems

A

User Mode
  • Users - user interface
  • Standard Utility Programs (shell, editors, compilers, etc.) - library interface
  • Standard Library (open, close, read, write, fork, etc.) - system call interface

Kernel Mode
  • Linux Operating System (process management, memory management, the file system, I/O, etc.)

Physical
  • Hardware (CPU, memory, disks, terminals, etc.)

P1L2: Introduction to Operating Systems

37
Q

Describe the Mac OS X Architecture

P1L2: Introduction to Operating Systems

A

Graphical User Interface
  • Aqua

Application Environments and Services
  • Java, Cocoa, Quicktime, etc.

Kernel Environment
  • Mach - core of the kernel, implements key primitives
  • BSD - provides Unix interoperability via the BSD CLI, POSIX API support, and network I/O

I/O Kit and Kernel Extensions
  • Environments for development of drivers and kernel modules that can be dynamically loaded into the kernel

P1L2: Introduction to Operating Systems

38
Q

Define what a Process is in simple terms

P2L1: Processes and Process Management

A

An instance of an executing program (also called a "task" or "job").

P2L1: Processes and Process Management

39
Q

What is a process?

P2L1: Processes and Process Management

A

The state of a program when executing, loaded in memory (an active entity).

P2L1: Processes and Process Management

40
Q

Is an Application a Process?

P2L1: Processes and Process Management

A

No. Processes are **active entities**, while Applications are **static entities**.

P2L1: Processes and Process Management

41
Q

What OS abstraction encapsulates a process?

P2L1: Processes and Process Management

A

An Address Space
  • Spans V0 to Vmax
  • Contains all the addresses of the process parts (stack, heap, data, text/code)

P2L1: Processes and Process Management

42
Q

Types of State of a Process

P2L1: Processes and Process Management

A
  • Text and Data - static state when the process first loads
  • Heap - dynamically created during execution
  • Stack - grows and shrinks (LIFO queue)

P2L1: Processes and Process Management

43
Q

Address Space

P2L1: Processes and Process Management

A

The "in memory" representation of a process, which uses virtual addresses.

P2L1: Processes and Process Management

44
Q

Page Tables

P2L1: Processes and Process Management

A

Mapping of virtual to physical addresses.

P2L1: Processes and Process Management

45
Q

Physical Addresses

P2L1: Processes and Process Management

A

True/actual locations in physical memory, managed by the OS.

P2L1: Processes and Process Management

46
Q

Virtual Address

P2L1: Processes and Process Management

A

The address value(s) used by a process within the process address space.

P2L1: Processes and Process Management

47
Q

Page Table Entry

P2L1: Processes and Process Management

A

The record that defines a virtual-address-to-physical-address mapping.

P2L1: Processes and Process Management

48
Q

How does an OS manage space with processes that need more space than available?

P2L1: Processes and Process Management

A

It stores parts of processes on disk and swaps them into memory when required, maintaining the mapping of virtual addresses to physical addresses.

P2L1: Processes and Process Management

49
Q

If two Processes are running at the same time, what will the Virtual Address Space ranges be?

P2L1: Processes and Process Management

A

The same. Virtual address spaces are decoupled from physical addresses; the OS manages the mapping of those addresses to physical memory.

P2L1: Processes and Process Management

50
Q

How does the OS know where in the Program the Process is?

P2L1: Processes and Process Management

A

The Program Counter.

P2L1: Processes and Process Management

51
Q

Where is the Program Counter maintained?

P2L1: Processes and Process Management

A

In registers within the CPU.

P2L1: Processes and Process Management

52
Q

What defines what a process is doing?

P2L1: Processes and Process Management

A

The Process Stack.

P2L1: Processes and Process Management

53
Q

What defines the top of a process stack and why is it important?

P2L1: Processes and Process Management

A

The Stack Pointer; it ensures we know where the top of the stack is for LIFO operations.

P2L1: Processes and Process Management

54
Q

What does the OS Process Control Block (PCB) do?

P2L1: Processes and Process Management

A

The OS maintains the PCB, which contains and maintains all the useful information for every process.

P2L1: Processes and Process Management

55
Q

What is a Process Control Block (PCB)?

P2L1: Processes and Process Management

A

A data structure that the OS maintains for every one of the processes that it manages.

P2L1: Processes and Process Management

56
Q

What is a Process Control Block (PCB) comprised of?

P2L1: Processes and Process Management

A
  • Process State - program counter, stack pointer, essentially all CPU registers and their values
  • Memory Mappings - virtual-to-physical mappings
  • Other Information - list of open files, scheduling info, allocation information, etc.

P2L1: Processes and Process Management

57
Q

When is a Process Control Block (PCB) created?

P2L1: Processes and Process Management

A

When the process itself is created.

P2L1: Processes and Process Management

58
Q

Can Process Control Block (PCB) fields be updated?

P2L1: Processes and Process Management

A

Yes, as needed when process state changes occur.

P2L1: Processes and Process Management

59
Q

How are frequently updated values handled differently within Process Control Blocks (PCBs)?

P2L1: Processes and Process Management

A

Frequently updated values (such as the program counter) are stored in dedicated registers that allow for quick, efficient read/write operations.

P2L1: Processes and Process Management

60
Q

What happens/how is the PCB used in a situation when an OS switches from one process to another? And what is this called?

P2L1: Processes and Process Management

A

The OS, currently holding PCB information for Process 1 (P1) in the CPU registers, saves the PCB values for P1 into memory, then loads the PCB values for Process 2 into the CPU registers. This is a "Context Switch."

P2L1: Processes and Process Management

61
Q

Define Context Switch

P2L1: Processes and Process Management

A

Switching the CPU from the context of one process to the context of another.

P2L1: Processes and Process Management

62
Q

Is Context Switching expensive or inexpensive? Explain.

P2L1: Processes and Process Management

A

They are expensive:
  • direct costs - number of cycles for load and store instructions
  • indirect costs - cold cache, cache misses

P2L1: Processes and Process Management

63
Q

For the following sentence, check all options that correctly complete it: When a cache is hot...
  • it can malfunction so we must context switch to another process
  • most process data is in the cache so the process performance will be at its best
  • sometimes we must context switch

P2L1: Processes and Process Management

A
  • most process data is in the cache so the process performance will be at its best
  • sometimes we must context switch

P2L1: Processes and Process Management

64
Q

What possible states can a Process be in? Define them.

P2L1: Processes and Process Management

A
  • New - initialization of the process; the OS determines if the process can run and, if so, provisions the necessary memory and PCB (can be admitted to the Ready state)
  • Ready - not currently being executed, but ready to execute (moves to Running when chosen by the scheduler dispatch)
  • Running - the process is currently being executed (can be interrupted back to Ready, exit to Terminated, or be placed in Waiting by an event or I/O)
  • Waiting - the process is performing a longer operation or waiting for data (moves back to Ready once the operation completes)
  • Terminated - the process is killed

P2L1: Processes and Process Management

65
Q

The CPU is able to execute a process when the process is in which state(s)?

P2L1: Processes and Process Management

A

Running and Ready

P2L1: Processes and Process Management

66
Q

Define the mechanisms for process creation

P2L1: Processes and Process Management

A
  • Fork - copies the parent PCB into a new child PCB; both parent and child continue execution at the instruction after the fork
  • Exec - replaces the child image: loads a new program and starts from its first instruction; the child's PCB now points to the new program

P2L1: Processes and Process Management

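
A minimal sketch of fork semantics, shown with Python's `os.fork` (Unix-only; the pipe is just a hypothetical way to observe the child). Both parent and child continue after the `fork()` call; a comment marks where an `exec` would go.

```python
import os

r, w = os.pipe()
pid = os.fork()              # the parent's PCB is copied; both continue here
if pid == 0:
    # Child process. A real exec step would call os.execv() here to
    # replace the child image with a new program.
    os.close(r)
    os.write(w, b"child")
    os._exit(0)
else:
    os.close(w)
    msg = os.read(r, 5)      # parent reads what the child wrote
    os.waitpid(pid, 0)       # reap the child
    print(msg)               # b'child'
```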
67
Q

How does Exec relate to Fork?

P2L1: Processes and Process Management

A

When using Exec, you first Fork the parent process to create the child, then replace the child image with the new program.

P2L1: Processes and Process Management

68
Q

On UNIX-based OSs, which process is often regarded as "the parent of all processes?" On Android OS, which process is regarded as "the parent of all App processes?"

P2L1: Processes and Process Management

A

UNIX OS - init
Android OS - Zygote

P2L1: Processes and Process Management

69
Q

What is the role of the CPU scheduler?

P2L1: Processes and Process Management

A

The OS component that determines which one of the currently ready processes will be dispatched to the CPU to start running, and how long it should run for.

P2L1: Processes and Process Management

70
Q

Describe the steps the OS goes through when running a new process

P2L1: Processes and Process Management

A
  • Preempt - interrupt and save the current context
  • Schedule - run the scheduler to choose the next process
  • Dispatch - dispatch the process and switch into its context

P2L1: Processes and Process Management

71
Q

What is a Timeslice?

P2L1: Processes and Process Management

A

The time (Tp) allocated to a process on the CPU.

P2L1: Processes and Process Management

72
Q

Equation to determine efficiency of CPU work time? Implications?

P2L1: Processes and Process Management

A

Useful CPU work = Total Processing Time / Total Time = (2 × Tp) / (2 × Tp + 2 × t_sched)

where Tp = timeslice and t_sched = time spent scheduling.

The less time spent scheduling relative to the total processing time, the more efficient the use of the CPU's processing resources.

P2L1: Processes and Process Management

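
The formula above can be checked with a tiny sketch (the function name and the generalization to n timeslices are my own):

```python
def useful_cpu_fraction(t_p, t_sched, n=2):
    """Fraction of total time spent on useful work when n timeslices of
    length t_p each incur a scheduling interval of length t_sched."""
    return (n * t_p) / (n * t_p + n * t_sched)

# With a 10 ms timeslice and 0.1 ms of scheduling overhead, ~99% of the
# CPU's time goes to useful work.
print(round(useful_cpu_fraction(10.0, 0.1), 2))  # 0.99
```

Note that n cancels out: the fraction depends only on the ratio of timeslice to scheduling cost.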
73
Q

List the ways a process makes its way to the Ready Queue

P2L1: Processes and Process Management

A
  • I/O Request - the process requests I/O and, after going through the I/O queue and receiving the necessary data, moves back to the Ready Queue
  • Timeslice Expires - the process's timeslice expires and it moves from the Running state back to the Ready Queue
  • Fork a Child - a new process is created through the fork call and moves to the Ready Queue
  • Wait for an Interrupt - once the interrupt occurs, the process moves to the Ready Queue

P2L1: Processes and Process Management

74
Q

Which of the following is NOT a responsibility of the CPU scheduler? (Pick all that apply)
  • maintaining the I/O queue
  • maintaining the ready queue
  • decision on when to context switch
  • decision on when to generate an event that a process is waiting on

P2L1: Processes and Process Management

A
  • maintaining the I/O queue
  • decision on when to generate an event that a process is waiting on

P2L1: Processes and Process Management

75
Q

Can processes interact and how?

P2L1: Processes and Process Management

A

Yes, through Inter-Process Communication (IPC).

P2L1: Processes and Process Management

76
Q

Define Inter-Process Communication (IPC) Mechanisms

P2L1: Processes and Process Management

A
  • transfer data/info between address spaces
  • maintain protection and isolation
  • provide flexibility and performance

P2L1: Processes and Process Management

77
Q

Define Message-Passing IPC

P2L1: Processes and Process Management

A
  • The OS provides a communication channel, like a shared buffer
  • Processes write (send) / read (recv) messages to/from the channel
  • OS managed
  • Has overhead: data must be copied from user space into the channel in kernel memory, then back into user space

P2L1: Processes and Process Management

78
Q

Define Shared Memory IPC

P2L1: Processes and Process Management

A
  • The OS establishes a shared channel and maps it into each process's address space
  • Processes directly read/write from this memory
  • The OS is out of the way (main advantage)
  • Can be more error prone, and developers may have to reimplement code for APIs the OS typically provides for managing memory

P2L1: Processes and Process Management

79
Q

Does shared-memory-based communication perform better than message-passing communication?

P2L1: Processes and Process Management

A

It depends:
  • Shared-memory IPC has cheap individual data exchanges (data does not have to be copied in/out of the kernel), but mapping memory between two processes can be expensive
  • It only makes sense to use shared-memory IPC if the cost of mapping memory between the processes can be amortized across a sufficiently large number of messages

P2L1: Processes and Process Management

80
Q

What parts of a Process do Threads share?

P2L2: Threads and Concurrency

A

The Virtual Address Space for:
  • code
  • data
  • files

P2L2: Threads and Concurrency

81
Q

What parts do Threads not share within a Process?

P2L2: Threads and Concurrency

A
  • Registers
  • Stack

P2L2: Threads and Concurrency

82
Q

What is the difference between a Process and Thread?

P2L2: Threads and Concurrency

A

Processes represent the totality of an executing program, while Threads are individual sequences of instructions within a Process.

P2L2: Threads and Concurrency

83
Q

What benefits do Threads present?

P2L2: Threads and Concurrency

A
  • Parallelization - increased speed
  • Specialization - hot cache
  • Single Address Space - memory efficient
  • No IPC required - less resource cost

Summary: multithreaded programs are more efficient in their resource requirements and incur lower overheads for inter-thread communication.

P2L2: Threads and Concurrency

84
Q

Why or why not might threads be useful on Single CPUs?

P2L2: Threads and Concurrency

A

If the idle time for a potential thread is greater than the time to context switch to another thread and back, then threads increase efficiency (they effectively hide the idle time).

if t_idle > 2 × t_ctx_switch, then threads are useful

t_ctx_switch between threads < t_ctx_switch between processes

P2L2: Threads and Concurrency

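
The threshold can be expressed as a one-line predicate (function and parameter names are my own):

```python
def threads_hide_idle_time(t_idle, t_ctx_switch):
    """Threading on a single CPU pays off when the idle wait exceeds the
    cost of switching away and back: t_idle > 2 * t_ctx_switch."""
    return t_idle > 2 * t_ctx_switch

print(threads_hide_idle_time(t_idle=5.0, t_ctx_switch=1.0))  # True
print(threads_hide_idle_time(t_idle=1.5, t_ctx_switch=1.0))  # False
```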
85
Q

What OS benefits does multithreading present?

P2L2: Threads and Concurrency

A
  • Support for multiple execution contexts
  • threads working on behalf of apps
  • OS-level services like daemons or drivers

P2L2: Threads and Concurrency

86
Q

Do the following statements apply to Processes (P), Threads (T), or Both (B)?
  • Can share a virtual address space
  • Take longer to context switch
  • Have an execution context
  • Usually result in hotter caches when multiple exist
  • Make use of some communication mechanisms

P2L2: Threads and Concurrency

A
  • T - Can share a virtual address space
  • P - Take longer to context switch
  • B - Have an execution context
  • T - Usually result in hotter caches when multiple exist
  • B - Make use of some communication mechanisms

P2L2: Threads and Concurrency

87
Q

What is needed to support Threads?

P2L2: Threads and Concurrency

A
  • Thread Data Structure (identify threads, keep track of resource usage, etc.)
  • Mechanisms to _create_ and _manage_ threads
  • Mechanisms to safely _coordinate_ among threads running _concurrently_ in the same address space

P2L2: Threads and Concurrency

88
Q

What is a potential downfall of Threads in regards to having shared addresses?

P2L2: Threads and Concurrency

A

There is the potential for inconsistencies if the same address is accessed at the same time - a data race.

P2L2: Threads and Concurrency

89
Q

How do we deal with concurrency issues in multithreaded operations?

P2L2: Threads and Concurrency

A

Synchronization Mechanisms:
  • Mutual Exclusion - exclusive access for only one thread at a time (mutex)
  • Waiting - waiting on other threads to complete their executions, or on a specific condition, before proceeding (condition variables)

P2L2: Threads and Concurrency

90
Q

Define the data structure of a thread (thread type) | Birrell - An Introduction to Programming with Threads

P2L2: Threads and Concurrency

A

Thread Type
  • Thread ID
  • Program Counter
  • Stack Pointer
  • Registers
  • Stack
  • Additional Attributes

P2L2: Threads and Concurrency

91
Q

How are Threads Created? | Birrell - An Introduction to Programming with Threads

P2L2: Threads and Concurrency

A

Fork (proc, args)
  • proc - the procedure the thread will start executing
  • args - arguments for the procedure

NOT a UNIX fork.

When a thread is created, a new data structure is created with the program counter pointing to the first instruction in proc.

P2L2: Threads and Concurrency

92
Q

What happens when a (child) thread, created from another (parent) thread, finishes its execution instructions? | Birrell - An Introduction to Programming with Threads

P2L2: Threads and Concurrency

A
  • a result or a status is returned
  • the result is stored at a well-defined location and a notification is made that the result is available

OR a Join call

P2L2: Threads and Concurrency

93
Q

Describe a JOIN for threads | Birrell - An Introduction to Programming with Threads

P2L2: Threads and Concurrency

A

Join (thread)

The parent thread is blocked until the child thread completes its execution instructions. Join returns the result of the child thread's computation, then the child thread terminates. The parent thread is then able to terminate or move forward.

P2L2: Threads and Concurrency

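
Birrell's fork/join pattern can be sketched with Python's `threading` module (the `worker` function and the shared `out` list, used as the "well-defined location" for the result, are illustrative choices):

```python
import threading

def worker(n, out):
    # The "result" is stored at a well-defined location (the shared list).
    out.append(n * n)

out = []
t = threading.Thread(target=worker, args=(7, out))  # ~ Birrell's fork(proc, args)
t.start()
t.join()    # parent blocks until the child thread completes
print(out)  # [49]
```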
94
Q

Thread thread1;
Shared_list list;
thread1 = fork(safe_insert, 4);
safe_insert(6);
join(thread1); // optional

In the above example, thread0 executes the instructions shown. What order will the elements of list be in once all executions are complete?

P2L2: Threads and Concurrency

A

Unknown; it could be either 4,6,nil or 6,4,nil. There is no guarantee that one thread finishes before the other in this example.

P2L2: Threads and Concurrency

95
Q

Define mutex

P2L2: Threads and Concurrency

A

A construct that acts like a lock for a shared resource.

Mutex data structure:
  • locked? - bool
  • owner - thread
  • blocked_threads - list of threads (not necessarily ordered)

P2L2: Threads and Concurrency

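
A minimal mutex sketch using Python's `threading.Lock` (the counter workload is a hypothetical example):

```python
import threading

counter = 0
mutex = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with mutex:       # critical section: only one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000; without the lock, increments could be lost
```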
96
Q

What is the portion of code locked by Lock(mutex) called?

P2L2: Threads and Concurrency

A

Critical Section

P2L2: Threads and Concurrency

97
Q

Explain Birrell's advocacy of mutual exclusion for threads

P2L2: Threads and Concurrency

A

A binary operation: a resource is either free and accessible, or locked, and other threads must wait.

P2L2: Threads and Concurrency

98
Q

Producer / Consumer Model

P2L2: Threads and Concurrency

A

Many producers create/fill a shared bucket of data (a list, for example); a single consumer waits for the bucket to be full, then processes the data.

P2L2: Threads and Concurrency

99
Q

Explain condition variables

P2L2: Threads and Concurrency

A

Allow signaling of a condition so that waiting threads become active.

P2L2: Threads and Concurrency

100
Q

Why do we use "while" instead of "if" when wrapping a wait call with mutex and conditions?

P2L2: Threads and Concurrency

A
  1. "while" can support multiple consumer threads
  2. we cannot guarantee access to the mutex once the condition has been signaled
  3. the list/critical data could change before the consumer gets access again

P2L2: Threads and Concurrency

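
A small producer/consumer sketch with Python's `threading.Condition`, showing the "while, not if" recheck (the `done` flag and the specific buffer contents are illustrative):

```python
import threading

buffer, done = [], False
cond = threading.Condition()

def producer(items):
    global done
    with cond:
        buffer.extend(items)
        done = True
        cond.notify_all()    # signal the waiting consumer(s)

def consumer(out):
    with cond:
        while not done:      # "while", not "if": recheck the predicate on wake-up
            cond.wait()
        out.extend(buffer)

out = []
c = threading.Thread(target=consumer, args=(out,))
p = threading.Thread(target=producer, args=([1, 2, 3],))
c.start()
p.start()
c.join()
p.join()
print(out)  # [1, 2, 3]
```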
100
Explain Reader/Writer Problem ## Footnote P2L2: Threads and Concurrency
Readers - 0 or more can access a file Writers - 0 or 1 can access a file if 0 readers and 0 writers are accessing the file then read and write OK if 1 or more readers are accessing the file then read OK but write NOT OK if a writer is accessing the file then read and write NOT OK a proxy/helper variable/expression, which reflects the state, can be used to indirectly control access ## Footnote P2L2: Threads and Concurrency
101
Describe a typical critical section structure in pseudo code ## Footnote P2L2: Threads and Concurrency
`lock(mutex) { while (!predicate_indicating_access_ok) wait(mutex, cond_var); update_state => update predicate; signal/broadcast(cond_var); } unlock(mutex)` ## Footnote P2L2: Threads and Concurrency
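The structure above can be sketched directly in pthreads. The predicate `ready` and the payload `shared_value` are illustrative names, not from the lecture:

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative shared state guarded by one mutex + condition variable. */
static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static bool ready = false;          /* the predicate */
static int  shared_value = 0;

/* Consumer side: wait until the predicate holds, then read. */
int wait_for_value(void) {
    pthread_mutex_lock(&m);
    while (!ready)                  /* "while", not "if" */
        pthread_cond_wait(&cv, &m);
    int v = shared_value;
    ready = false;                  /* update state => update predicate */
    pthread_mutex_unlock(&m);
    return v;
}

/* Producer side: update state, then signal. */
void publish_value(int v) {
    pthread_mutex_lock(&m);
    shared_value = v;
    ready = true;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&m);
}
```

If `publish_value` has already run, `wait_for_value` sees the predicate true and returns without blocking, which is exactly why the wait is wrapped in a predicate check.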
102
What are some common mistakes found in multithreaded applications? ## Footnote P2L2: Threads and Concurrency
* Keeping track of the mutex/condition variables used with a resource * Always ensure that critical sections throughout the program are within the correct lock and unlock * Allowing multiple mutexes to access a single resource; one mutex per resource is correct * Ensure that the correct condition variable is used in the signal/broadcast * Ensure you do not use signal when broadcast is needed * If priority guarantees are needed, make sure they are hard-coded in * Spurious wake-ups * Deadlocks ## Footnote P2L2: Threads and Concurrency
103
What is a Spurious Wakeup ## Footnote P2L2: Threads and Concurrency
Unnecessary wakeups of threads. Typically happens when you signal/broadcast before unlocking even though the shared resource is no longer needed. If the shared resource is still needed, then signaling/broadcasting before the unlock is required. ## Footnote P2L2: Threads and Concurrency
104
What is a deadlock? ## Footnote P2L2: Threads and Concurrency
two or more competing threads are each waiting on the other to complete, but none of them ever do - the threads are stuck ## Footnote P2L2: Threads and Concurrency
105
What is it called when you lock and then unlock a mutex before locking another mutex? ## Footnote P2L2: Threads and Concurrency
fine-grained locking ## Footnote P2L2: Threads and Concurrency
106
What is the most utilized way to avoid deadlocks? ## Footnote P2L2: Threads and Concurrency
Maintaining lock order, if two threads need mutexA and mutexB, both should lock and unlock the mutexes in the same order. ## Footnote P2L2: Threads and Concurrency
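A minimal pthreads sketch of consistent lock ordering; the `transfer` scenario and the variable names are invented for illustration:

```c
#include <pthread.h>

/* Two resources guarded by two mutexes; names are illustrative. */
static pthread_mutex_t mutexA = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutexB = PTHREAD_MUTEX_INITIALIZER;
static int balanceA = 100, balanceB = 0;

/* Every thread that needs both locks takes A before B, so a cycle
 * (one thread holding A waiting for B while another holds B waiting
 * for A) can never form. */
void transfer(int amount) {
    pthread_mutex_lock(&mutexA);   /* always first  */
    pthread_mutex_lock(&mutexB);   /* always second */
    balanceA -= amount;
    balanceB += amount;
    pthread_mutex_unlock(&mutexB);
    pthread_mutex_unlock(&mutexA);
}
```

If a second function locked B before A, two threads calling the two functions concurrently could deadlock; keeping one global order removes that possibility.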
107
One-to-One Model ## Footnote P2L2: Threads and Concurrency
Each user-level thread uses one kernel-level thread +OS sees/understands threads, synchronization, blocking -Must go to OS for all operations (may be expensive) -OS may have limits on policies or number of threads -Portability ## Footnote P2L2: Threads and Concurrency
108
Many-to-One Model ## Footnote P2L2: Threads and Concurrency
Many user-level threads run on one kernel-level thread +Totally portable; does not depend on OS limits and policies -OS has no insight into application needs -OS may block entire process if one user-level thread blocks on I/O ## Footnote P2L2: Threads and Concurrency
109
Many-to-Many Model ## Footnote P2L2: Threads and Concurrency
One or more user-level threads can be mapped to one or more kernel-level threads +Can be best of both worlds +Can have bound or unbound threads -Requires coordination between user-level and kernel-level thread managers ## Footnote P2L2: Threads and Concurrency
110
Process Scope ## Footnote P2L2: Threads and Concurrency
User-level library manages threads within a single process ## Footnote P2L2: Threads and Concurrency
110
System Scope ## Footnote P2L2: Threads and Concurrency
System-wide thread management by OS-level thread managers (e.g. CPU scheduler) ## Footnote P2L2: Threads and Concurrency
110
How many workers should be used for Boss/Worker Pattern? ## Footnote P2L2: Threads and Concurrency
Could either add workers on demand as needed (not used often) or have a pool of workers available to do the work. Typically we allow more threads to be added dynamically to the pool if the work is not being completed efficiently, starting from a predecided base number for the pool +simplicity -thread pool management -locality ## Footnote P2L2: Threads and Concurrency
110
What implementation of a Boss/Worker multithreaded pattern is most often used and why? ## Footnote P2L2: Threads and Concurrency
Producer/Consumer queue; it improves throughput, as the Boss is no longer required to track what each worker is doing and can more quickly hand off/place work into the queue and move on to the next request ## Footnote P2L2: Threads and Concurrency
110
Boss/Worker Pattern ## Footnote P2L2: Threads and Concurrency
Boss - assigns work to workers Workers - perform entire task The throughput of the system is limited by the boss thread (must keep boss efficient) Throughput = 1/boss_time_per_order Boss assigns work by directly signalling a specific worker +workers don't need to synchronize -Boss must track what each worker is doing -Throughput might suffer Some of these issues can be mitigated by implementing a queue similar to a Producer/Consumer queue +Boss doesn't need to know details about workers -Queue synchronization ## Footnote P2L2: Threads and Concurrency
111
Boss Worker Variants ## Footnote P2L2: Threads and Concurrency
All Workers created Equal vs. Workers specialized for certain tasks +better locality; QoS (quality of service) management -load balancing (how many threads should be assigned to each task?) ## Footnote P2L2: Threads and Concurrency
111
Pipeline Pattern ## Footnote P2L2: Threads and Concurrency
* Threads assigned one subtask in the system * Entire task == pipeline of threads * Multiple tasks concurrently in the system, in different pipeline stages * Throughput == weakest link (can mitigate by adding a thread pool at the weakest point) * Shared-buffer-based communication between stages Pipeline: * Sequence of stages * Stage == subtask * Each stage == thread pool * Buffer-based communication +Specialization and locality improvements of threads -Balancing and synchronization overheads ## Footnote P2L2: Threads and Concurrency
112
Layered Pattern ## Footnote P2L2: Threads and Concurrency
* Each layer == a group of related subtasks * End-to-End task must pass up and down through all layers +Specialization +Less fine-grained than pipeline -Not suitable for all applications -Synchronization ## Footnote P2L2: Threads and Concurrency
113
pthread creation? ## Footnote P2L3: Threads Case Study: PThreads
`pthread_t aThread; //thread type` `int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg); //thread create function` `int pthread_join(pthread_t thread, void **status); //join thread` ## Footnote P2L3: Threads Case Study: PThreads
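A self-contained sketch of the create/join pair above; the worker function `double_it` and the pointer-as-integer convention for passing the result are illustrative choices, not from the lecture:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical worker: doubles the integer it is handed. */
static void *double_it(void *arg) {
    long in = (long)arg;
    return (void *)(in * 2);        /* returned pointer carries the result */
}

long spawn_and_join(long input) {
    pthread_t t;
    void *result;
    if (pthread_create(&t, NULL, double_it, (void *)input) != 0)
        abort();                    /* always check return values */
    pthread_join(t, &result);       /* blocks until the child finishes */
    return (long)result;
}
```

`pthread_join` is the Birrell-style join: the caller blocks, then receives whatever the child's start routine returned.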
114
Boss/Worker Formula ## Footnote P2L2: Threads and Concurrency
timeToFinish1Order * ceiling(numOrders / numConcurrentThreads) where ceiling is the value rounded up ## Footnote P2L2: Threads and Concurrency
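The formula can be checked in a few lines of C; the order counts and times used below are made up for illustration:

```c
/* Boss/Worker completion time: each wave of numThreads orders takes
 * timePerOrder; the integer ceiling counts the waves. */
int boss_worker_time(int timePerOrder, int numOrders, int numThreads) {
    int waves = (numOrders + numThreads - 1) / numThreads; /* ceil() */
    return timePerOrder * waves;
}
```

For example, 10 orders on 3 concurrent threads at 2 time units each: ceil(10/3) = 4 waves, so 8 time units total.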
114
Pipeline Formula ## Footnote P2L2: Threads and Concurrency
timeToFinishFirstOrder + (remainingOrders * timeToFinishLastStage) ## Footnote P2L2: Threads and Concurrency
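The pipeline formula in C, with invented example numbers:

```c
/* Pipeline completion time: the first order traverses the full
 * pipeline, then one finished order drains per last-stage interval. */
int pipeline_time(int firstOrderTime, int numOrders, int lastStageTime) {
    return firstOrderTime + (numOrders - 1) * lastStageTime;
}
```

For a 3-stage pipeline at 2 time units per stage handling 10 orders: 6 + 9 * 2 = 24 time units.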
114
PThreads stands for? ## Footnote P2L3: Threads Case Study: PThreads
POSIX Threads POSIX versions of Birrell's API Specifies syntax and semantics of the operations (mutexes, conditions, etc) ## Footnote P2L3: Threads Case Study: PThreads
115
POSIX stands for? ## Footnote P2L3: Threads Case Study: PThreads
Portable Operating System Interface ## Footnote P2L3: Threads Case Study: PThreads
116
pthread Attributes ## Footnote P2L3: Threads Case Study: PThreads
- Specified in pthread_create - Defines features of the new thread - NULL in pthread_create gives default behavior Can be used to specify the stack size, inheritance, joinable, scheduling policy, priority, system/process scope `int pthread_attr_init(pthread_attr_t *attr);` `int pthread_attr_destroy(pthread_attr_t *attr);` `pthread_attr_{set/get}{attribute}` ## Footnote P2L3: Threads Case Study: PThreads
116
What is the default behavior of pthreads as related to Birrells API? ## Footnote P2L3: Threads Case Study: PThreads
pthreads are set to joinable by default, meaning they can join back into the parent thread once their work is complete ## Footnote P2L3: Threads Case Study: PThreads
117
Detached threads ## Footnote P2L3: Threads Case Study: PThreads
Not originally discussed in Birrell's API. Can be set using a pthread attribute. Allows the child thread to continue on without the parent thread, so the parent may end before the child finishes its work. ## Footnote P2L3: Threads Case Study: PThreads
118
Proper compiling methods/checks for pthreads ## Footnote P2L3: Threads Case Study: PThreads
1. `#include <pthread.h>` in the main file 2. Compile source with `-lpthread` or `-pthread` `mythreadedapp ~ => gcc -o main main.c -lpthread` `mythreadedapp ~ => gcc -o main main.c -pthread` 3. Check return values of common functions ## Footnote P2L3: Threads Case Study: PThreads
119
What is an issue that can occur with globally visible variables? ## Footnote P2L3: Threads Case Study: PThreads
Data Race or Race Condition ## Footnote P2L3: Threads Case Study: PThreads
119
Define Data Race/Race Condition ## Footnote P2L3: Threads Case Study: PThreads
When one thread tries to read a value while another thread modifies it. Happens with improper locking of mutexes or with globally visible variables ## Footnote P2L3: Threads Case Study: PThreads
120
How do we solve mutual exclusion problems among concurrent threads within pthreads? ## Footnote P2L3: Threads Case Study: PThreads
pthread mutexes ## Footnote P2L3: Threads Case Study: PThreads
121
creation and use of pthread mutexes ## Footnote P2L3: Threads Case Study: PThreads
create mutex `pthread_mutex_t aMutex;` lock and unlock mutex `int pthread_mutex_lock(pthread_mutex_t *mutex); //explicit lock` `int pthread_mutex_unlock(pthread_mutex_t *mutex); //explicit unlock` clean up `int pthread_mutex_destroy(pthread_mutex_t *mutex);` ## Footnote P2L3: Threads Case Study: PThreads
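A minimal sketch putting these calls together: several threads bump one shared counter under a mutex. The thread count and iteration count are arbitrary illustrative values:

```c
#include <pthread.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

/* Each thread bumps the shared counter many times; without the mutex
 * the increments would race and the final value would be unpredictable. */
static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;                       /* critical section */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

long count_with_threads(int nthreads) {  /* nthreads must be <= 16 */
    pthread_t t[16];
    counter = 0;
    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, bump, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```

With the lock/unlock pair removed, the result of `count_with_threads(4)` would vary run to run, which is the data race the mutex prevents.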
121
How are mutexes initialized? ## Footnote P2L3: Threads Case Study: PThreads
`int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);` The mutex attributes specifies mutex behavior when a mutex is shared among processes ## Footnote P2L3: Threads Case Study: PThreads
122
What are pthread condition variables? ## Footnote P2L3: Threads Case Study: PThreads
Synchronization constructs which allow blocked threads to be notified once a specific condition occurs ## Footnote P2L3: Threads Case Study: PThreads
122
describe the behavior of `int pthread_mutex_trylock(pthread_mutex_t *mutex);` ## Footnote P2L3: Threads Case Study: PThreads
Does not block the thread if the mutex is in use; if it is in use will return immediately and let the thread know that the mutex is not available, allowing the thread to move on to do something else while the mutex is locked; if it is free it locks the mutex ## Footnote P2L3: Threads Case Study: PThreads
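A sketch of that non-blocking behavior, relying on the POSIX `EBUSY` return code; `try_enter`/`leave` are hypothetical wrapper names:

```c
#include <pthread.h>
#include <errno.h>

static pthread_mutex_t busy_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns 1 if we acquired the lock, 0 if it was busy; never blocks. */
int try_enter(void) {
    int rc = pthread_mutex_trylock(&busy_lock);
    if (rc == EBUSY)
        return 0;       /* lock held elsewhere: go do other work */
    return rc == 0;     /* 0 => we now hold the lock */
}

void leave(void) { pthread_mutex_unlock(&busy_lock); }
```

The caller can react to a 0 return by doing unrelated work and retrying later, instead of sleeping on the mutex queue as `pthread_mutex_lock` would.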
122
Mutex safety tips for pthreads? ## Footnote P2L3: Threads Case Study: PThreads
- Shared data should always be accessed through a single mutex - Mutex scope must be visible to all - Globally order locks -> for all threads, lock mutexes in order - Always unlock mutexes, unlock them in the correct order, and ensure you are unlocking the correct mutex ## Footnote P2L3: Threads Case Study: PThreads
122
describe the three main pthread condition variable mechanisms ## Footnote P2L3: Threads Case Study: PThreads
Create condition variable `pthread_cond_t aCond;` Wait `int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);` Signal/Broadcast `int pthread_cond_{signal/broadcast}(pthread_cond_t *cond);` ## Footnote P2L3: Threads Case Study: PThreads
123
describe how to properly create and cleanup a pthread condition variable ## Footnote P2L3: Threads Case Study: PThreads
init `int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr);` cleanup/destroy `int pthread_cond_destroy(pthread_cond_t *cond);` ## Footnote P2L3: Threads Case Study: PThreads
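The init/destroy lifecycle can be exercised in a few lines; `cond_lifecycle` is an illustrative name, and passing an attribute object (rather than NULL) is shown only to demonstrate the full sequence:

```c
#include <pthread.h>

/* Dynamic init and cleanup of a condition variable; returns 0 on success. */
int cond_lifecycle(void) {
    pthread_cond_t cond;
    pthread_condattr_t cattr;
    int rc = 0;
    rc |= pthread_condattr_init(&cattr);
    rc |= pthread_cond_init(&cond, &cattr);  /* attr may also be NULL */
    rc |= pthread_condattr_destroy(&cattr);  /* attr no longer needed   */
    rc |= pthread_cond_destroy(&cond);       /* only once no waiters remain */
    return rc;
}
```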
123
What does a Kernel maintain for threads? ## Footnote P2L4: Thread Design Considerations
- thread abstraction - scheduling, synchronization, etc. Supporting these means the OS kernel is itself multithreaded ## Footnote P2L4: Thread Design Considerations
123
pthread condition variable safety tips ## Footnote P2L3: Threads Case Study: PThreads
- Do not forget to notify waiting threads; predicate change => singal/broadcast correct condition variable - When in doubt broadcast (performance loss) - You do not need a mutex to signal/broadcast ## Footnote P2L3: Threads Case Study: PThreads
123
Describe the data structures across user and kernel for a Single CPU environment ## Footnote P2L4: Thread Design Considerations
User Level Threads - UL thread ID - UL Regs. - Thread stack Process State (PCB) - Virtual address mapping Kernel Level Threads - stack - registers Each Kernel Level Thread looks like a virtual CPU to the User Level ## Footnote P2L4: Thread Design Considerations
124
What do user-level thread libraries provide? ## Footnote P2L4: Thread Design Considerations
- thread abstraction - scheduling, synchronization, etc ## Footnote P2L4: Thread Design Considerations
125
Define Hard Process State ## Footnote P2L4: Thread Design Considerations
the parts of a Process Control Block that are relevant to all the user-level threads that execute within process (e.g. virtual address mapping) ## Footnote P2L4: Thread Design Considerations
126
Define Soft Process State ## Footnote P2L4: Thread Design Considerations
the parts of a Process Control Block that are relevant to the subset of user-level threads (signal mask, system call args, etc) currently associated with a kernel-level thread ## Footnote P2L4: Thread Design Considerations
126
In the context of thread design, describe a multi-data structure set up ## Footnote P2L4: Thread Design Considerations
Multiple Data Structures - smaller data structures - easier to share - on context switch only save and restore what needs to change - user-level library need only update portion of the state Benefits - scalability - overheads reduced - performance - flexibility ## Footnote P2L4: Thread Design Considerations
127
In the context of thread design, describe a Single PCB structure and its limitations ## Footnote P2L4: Thread Design Considerations
Single PCB - large continuous data structure - private for each entity - saved and restored on each context switch - update for any changes Limitations - scalability - overheads - performance - flexibility ## Footnote P2L4: Thread Design Considerations
127
What is the name of the Kernel Thread Structure (name of C structure)? ## Footnote P2L4: Thread Design Considerations
kthread_worker | https://elixir.bootlin.com/linux/v3.17/source/ ## Footnote P2L4: Thread Design Considerations
128
What is the name of the data structure, contained in the Kernel Thread Structure, that describes the process the Kernel thread is running? ## Footnote P2L4: Thread Design Considerations
task_struct | https://elixir.bootlin.com/linux/v3.17/source/ ## Footnote P2L4: Thread Design Considerations
129
Describe the SunOS 5.0 Threading Model ## Footnote P2L4: Thread Design Considerations
Multiple CPUs/Processors Kernel itself is multithreaded, with multiple kernel-level threads Processes can be single- or multithreaded Both many-to-many and one-to-one mappings are allowed Each kernel-level thread executing a user-level thread has a lightweight process data structure associated with it The user-level library sees lightweight processes as virtual CPUs on which it schedules user-level threads At the kernel level there is a kernel-level scheduler that manages the kernel-level threads and schedules them onto the physical CPUs ## Footnote P2L4: Thread Design Considerations
130
Describe User-level Lightweight Thread Data Structures ## Footnote P2L4: Thread Design Considerations
- thread creation => thread ID (tid) - tid => index into table of pointers - table of pointers point to per thread data structure - Contains a number of fields (execution context, registers, signal mask, priority, stack pointer, thread local storage, stack) ## Footnote P2L4: Thread Design Considerations
131
Describe the major issue with User-level lightweight thread data structures and the solution to it ## Footnote P2L4: Thread Design Considerations
The stack of one thread can grow so large that it overwrites/corrupts another thread's data. This is only caught when an error surfaces in the corrupted thread, making it hard to determine the original problem thread. Solution: use a red zone, a portion of the virtual address space between thread regions that is never allocated, so overruns fault immediately. ## Footnote P2L4: Thread Design Considerations
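POSIX exposes a similar red-zone idea as a per-thread stack guard area. This sketch assumes `pthread_attr_setguardsize`; the 8192-byte size is an arbitrary illustrative choice, and implementations may round it up to a whole page:

```c
#include <pthread.h>

/* Request a guard region adjacent to the new thread's stack; a thread
 * that overruns its stack then faults immediately instead of silently
 * corrupting a neighboring thread's memory. */
int guarded_attr(pthread_attr_t *attr) {
    int rc = pthread_attr_init(attr);
    if (rc == 0)
        rc = pthread_attr_setguardsize(attr, 8192); /* illustrative size */
    return rc;
}
```

The attribute object would then be passed to `pthread_create` so the new thread's stack is created with the guard in place.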
132
Describe Kernel-level Data Structures ## Footnote P2L4: Thread Design Considerations
Process - list of kernel-level threads - virtual address space - user credentials - signal handlers Light-Weight Process (LWP) - user-level registers - system call args - resource usage info (on a per-kernel-thread basis) - signal mask Similar to ULT, but visible to the kernel; not needed when the process is not running, so it does not always have to be present in memory => swappable Kernel-level threads - kernel-level registers - stack pointer - scheduling info (class, ...) - pointers to associated LWP, process, CPU structures Information needed even when the process is not running => not swappable CPU - current thread - list of kernel-level threads - dispatching & interrupt handling information On SPARC a dedicated register holds the current thread ## Footnote P2L4: Thread Design Considerations
133
Basic thread management interactions ## Footnote P2L4: Thread Design Considerations
system calls and special signals that allow the kernel and ULT library to coordinate and interact with each other ## Footnote P2L4: Thread Design Considerations
134
In the pthreads library, which function sets the concurrency level (function name) and for that function, which concurrency value instructs the implementation to manage the concurrency level as it deems appropriate (integer)? ## Footnote P2L4: Thread Design Considerations
pthread_setconcurrency() 0 ## Footnote P2L4: Thread Design Considerations
135
describe "bound" threads ## Footnote P2L4: Thread Design Considerations
When a user-level thread is permanently associated with a kernel-level thread ## Footnote P2L4: Thread Design Considerations
136
Describe "pinning" ## Footnote P2L4: Thread Design Considerations
when a kernel-level thread is permanently associated to a CPU ## Footnote P2L4: Thread Design Considerations
138
When does a process jump to the UL library scheduler? ## Footnote P2L4: Thread Design Considerations
- ULTs explicitly yield - timer set by UL library expires - ULTs call library functions like lock/unlock - blocked threads become runnable ## Footnote P2L4: Thread Design Considerations
139
When does the UL library scheduler execute? ## Footnote P2L4: Thread Design Considerations
- runs on ULT operations - runs on signals from timer or kernel ## Footnote P2L4: Thread Design Considerations
140
What type of issues can occur with multiple CPUs ## Footnote P2L4: Thread Design Considerations
- scheduling from one CPU to another - Synchronization issues ## Footnote P2L4: Thread Design Considerations
141
How do we address synchronization-related issues on multi-CPU systems? ## Footnote P2L4: Thread Design Considerations
Adaptive Mutexes - if the critical section is short => don't block, spin - for long critical sections use the default blocking behavior with a mutex queue ## Footnote P2L4: Thread Design Considerations
142
Why might we not immediately destroy a thread? ## Footnote P2L4: Thread Design Considerations
- reuse the thread instead - reuse of thread structures/stacks means performance gains When a thread exits, put it on "death row"; it is periodically destroyed by a reaper thread ## Footnote P2L4: Thread Design Considerations
143
What is the minimum number of threads needed to allow the Linux kernel to boot? What is the name of the variable used to set this limit? ## Footnote P2L4: Thread Design Considerations
20 threads (see fork.c) max_threads
144
Define Interrupts ## Footnote P2L4: Thread Design Considerations
- events generated externally by components other than the CPU (I/O devices, timers, other CPUs) - determined based on the physical platform - appear asynchronously; they are not a direct response to some specific action that is taking place on the CPU ## Footnote P2L4: Thread Design Considerations
145
define Signals ## Footnote P2L4: Thread Design Considerations
- events triggered by the CPU and software running on it - determined based on the operating system - appear synchronously (in response to a specific action that took place on the CPU) or asynchronously (not a direct response to some specific action on the CPU) ## Footnote P2L4: Thread Design Considerations
146
What aspects of Interrupts and Signals are similar? ## Footnote P2L4: Thread Design Considerations
- have a unique ID depending on the hardware or OS - can be masked and disabled/suspended via corresponding mask (per-CPU interrupt mask, per-process signal mask) - if enabled, trigger corresponding handler (interrupt handler set for entire system by OS) (signal handler set on per process basis, by process) ## Footnote P2L4: Thread Design Considerations
147
describe steps of interrupts ## Footnote P2L4: Thread Design Considerations
1. Message - INT# (MSI#) - send interrupt signal to the CPU 2. Send interrupt and interrupt the execution of the thread 3. Check the interrupt code/# on the Interrupt handler table to get the starting address of the interrupt handler routine 4. Execute the matching interrupt handler routine based on the interrupt code/# ## Footnote P2L4: Thread Design Considerations
148
what defines the interrupt code/# ## Footnote P2L4: Thread Design Considerations
hardware defined ## Footnote P2L4: Thread Design Considerations
149
what defines the interrupt handler routine and address ## Footnote P2L4: Thread Design Considerations
OS defined ## Footnote P2L4: Thread Design Considerations
150
define the steps for signals ## Footnote P2L4: Thread Design Considerations
1. Executing thread attempts an action (eg illegal memory access) that causes OS to generate signal 2. Send signal to stop execution of thread 3. Check the signal code/# against the signal handler table and match the code/# to the correct signal handler routine 4. Execute the signal handler routine ## Footnote P2L4: Thread Design Considerations
151
What defines the signal code/#s ## Footnote P2L4: Thread Design Considerations
OS defined ## Footnote P2L4: Thread Design Considerations
152
What defines the signal handler routines? ## Footnote P2L4: Thread Design Considerations
Process defined/specific ## Footnote P2L4: Thread Design Considerations
153
Why might we disable interrupts or signals and how do we? ## Footnote P2L4: Thread Design Considerations
Disabling interrupts or signals ensures mutexes can be properly unlocked and critical code can finish executing; we use interrupt/signal masks to disable them ## Footnote P2L4: Thread Design Considerations
155
interrupt masks are per < blank > signal masks are per < blank > ## Footnote P2L4: Thread Design Considerations
interrupt masks are per CPU => the hardware interrupt routing mechanism will not deliver interrupt to CPU when mask disables interrupt signal masks are per execution context => kernel sees mask and will not interrupt corresponding thread when mask disables signal ## Footnote P2L4: Thread Design Considerations
156
detail multicore system interrupts ## Footnote P2L4: Thread Design Considerations
- Interrupts can be directed to any CPU that has them enabled - May set interrupts to target just a single core => avoids overheads and perturbations on all other cores, improving performance ## Footnote P2L4: Thread Design Considerations
157
types of signals ## Footnote P2L4: Thread Design Considerations
One-shot Signals - "n signals pending == 1 signal pending" : at least once (overwriting behavior) - must be explicitly re-enabled Real Time Signals - "if n signals raised, then handler is called n times" (queueing behavior) ## Footnote P2L4: Thread Design Considerations
158
indicate the correct signal names for the following events: 1. terminal interrupt signal 2. high bandwidth data is available on a socket 3. background process attempting write 4. file size limit exceeded ## Footnote P2L4: Thread Design Considerations
1. SIGINT 2. SIGURG 3. SIGTTOU 4. SIGXFSZ found in the POSIX standard ## Footnote P2L4: Thread Design Considerations
159
How to decide if an interrupt handler should be executed on interrupted thread stack or turned into real thread? How do you optimize it? ## Footnote P2L4: Thread Design Considerations
if the handler doesn't lock => execute on the interrupted thread's stack if the handler can block => turn into a real thread (optimize by precreating and preinitializing thread structures for interrupt routines) ## Footnote P2L4: Thread Design Considerations
160
What is the top half interrupt ## Footnote P2L4: Thread Design Considerations
fast, non-blocking, min amount of processing executes immediately when an interrupt occurs ## Footnote P2L4: Thread Design Considerations
161
What is the bottom half interrupt ## Footnote P2L4: Thread Design Considerations
arbitrary complexity; executes like any other thread, scheduled and blocked as needed ## Footnote P2L4: Thread Design Considerations
162
What are performance overall costs associated with interrupts as threads? ## Footnote P2L4: Thread Design Considerations
- overhead of 40 SPARC instructions per interrupt - saving of 12 instructions PER mutex (no changes in interrupt mask, level...) - fewer interrupts than mutex lock/unlock operations => overall win ## Footnote P2L4: Thread Design Considerations
163
Where are signal masks located? ## Footnote P2L4: Thread Design Considerations
Both at the user level (masks associated with the user-level process and with each user-level thread, visible only at this level) and at the kernel level (visible only to the kernel-level thread / lightweight process) ## Footnote P2L4: Thread Design Considerations
164
Describe how signals are optimized within multithreads ## Footnote P2L4: Thread Design Considerations
- signals are less frequent than signal mask updates - system calls avoided; it is cheaper to update the UL mask - signal handling is more expensive ## Footnote P2L4: Thread Design Considerations
165
What are tasks? ## Footnote P2L4: Thread Design Considerations
the execution context of a kernel level thread ## Footnote P2L4: Thread Design Considerations
166
How is task created ## Footnote P2L4: Thread Design Considerations
`clone(function, stack_ptr, sharing_flags, args)` ## Footnote P2L4: Thread Design Considerations
167
What are the two ways/metrics to determine what type of model to implement? ## Footnote P2L5: Thread Performance Considerations
execution time or avg. time to complete order ## Footnote P2L5: Thread Performance Considerations
168
Why are threads useful? ## Footnote P2L5: Thread Performance Considerations
- parallelization => speedup - specialization => hot cache - efficiency => lower memory requirement and cheaper synchronization - hide latency of I/O operations (single CPUs) ## Footnote P2L5: Thread Performance Considerations
169
define metrics ## Footnote P2L5: Thread Performance Considerations
a measurement standard - measurable and/or quantifiable property - associated with the system we're interested in - can be used to evaluate the system behavior ## Footnote P2L5: Thread Performance Considerations
170
What are some performance metrics? ## Footnote P2L5: Thread Performance Considerations
- execution time - throughput - request rate - CPU utilization - performance/W - wait time - platform efficiency - performance/$ - percentage of SLA violations - client-perceived performance - aggregate performance - average resource usage ## Footnote P2L5: Thread Performance Considerations
171
What is one way to achieve concurrency? ## Footnote P2L5: Thread Performance Considerations
simply duplicate the number of (same) processes running => multi-process ## Footnote P2L5: Thread Performance Considerations
172
What are the benefits/disadvantages of multiprocessing? ## Footnote P2L5: Thread Performance Considerations
Benefits - simple programming Disadvantages - many processes => high memory usage - costly context switching - hard/costly to maintain shared state ## Footnote P2L5: Thread Performance Considerations
173
What are the benefits/disadvantages of multithreading? ## Footnote P2L5: Thread Performance Considerations
Benefits - shared address space - shared state - cheap context switch Disadvantages - not simple implementation - requires synchronization - underlying support for threads (not much of an issue today) ## Footnote P2L5: Thread Performance Considerations
174
describe the Event-Driven Model ## Footnote P2L5: Thread Performance Considerations
Event Dispatch calls event handlers (accept conn, read req, send header, read file/send data) Dispatcher == state machine on external events => call handler == jump to code Handler - run to completion - if they need to block => initiate blocking operation and pass control to dispatch loop ## Footnote P2L5: Thread Performance Considerations
175
How do we achieve concurrency in the Event-Driven Model? ## Footnote P2L5: Thread Performance Considerations
many requests interleaved in a single execution context; the dispatcher switches among processing of different requests ## Footnote P2L5: Thread Performance Considerations
176
Why does the Event-Driven Model work? ## Footnote P2L5: Thread Performance Considerations
on 1 CPU "threads hide latency" `if (t_idle > 2 * t_ctx_switch)` => ctx_switch to hide latency `if (t_idle == 0)` then context switching just wastes cycles that could have been used for request processing => When a request hits a point that would cause blocking/wait, switch to processing another request Still works on Multi CPUs => multiple event-driven processes ## Footnote P2L5: Thread Performance Considerations
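A minimal single-step dispatcher can be sketched with `select()` over a pipe; the pipe stands in for a client connection, and all names here are illustrative rather than from the lecture:

```c
#include <sys/select.h>
#include <unistd.h>

/* One dispatch step: block in select() until the fd is readable, then
 * "handle the event" by reading one byte. Returns the byte, or -1. */
int dispatch_once(int readfd) {
    fd_set rds;
    FD_ZERO(&rds);
    FD_SET(readfd, &rds);
    if (select(readfd + 1, &rds, NULL, NULL, NULL) <= 0)
        return -1;
    char c;
    if (read(readfd, &c, 1) != 1)
        return -1;
    return c;
}

/* Self-contained demo: a pipe write plays the role of the external
 * event that makes the fd ready. */
int demo_event(void) {
    int fds[2];
    if (pipe(fds) != 0) return -1;
    if (write(fds[1], "X", 1) != 1) return -1;  /* the "event" */
    int got = dispatch_once(fds[0]);
    close(fds[0]); close(fds[1]);
    return got;
}
```

A real event-driven server keeps many fds in the `fd_set` and loops, jumping to the handler for whichever fd becomes ready; nothing blocks except the `select()` itself.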
177
What are the benefits of the Event-Driven Model? ## Footnote P2L5: Thread Performance Considerations
- single address space - single flow of control - smaller memory requirement - no context switching - no synchronization ## Footnote P2L5: Thread Performance Considerations
178
What are the challenges of the Event-Driven Model? And how do we handle them? ## Footnote P2L5: Thread Performance Considerations
- a blocking request/handler will block the entire process Handled through Asynchronous System Call ## Footnote P2L5: Thread Performance Considerations
179
Describe Asynchronous System Calls ## Footnote P2L5: Thread Performance Considerations
- process/thread makes system call - OS obtains all relevant information from stack and either learns where to return results, or tells caller where to get results later - process/thread can continue Requires support from Kernel (e.g. threads) and/or device (e.g. DMA) ## Footnote P2L5: Thread Performance Considerations
180
Describe Helpers ## Footnote P2L5: Thread Performance Considerations
- designated for blocking I/O operations only - pipe/socket based communication with event dispatcher - select()/poll() still okay - helper blocks, but main event loop (and process) continues uninterrupted and does not block AMPED => Asymmetric Multi-Process Event-Driven Model AMTED => Asymmetric Multi-Threaded Event-Driven Model ## Footnote P2L5: Thread Performance Considerations
181
What are benefits/downsides of helper threads/processes ## Footnote P2L5: Thread Performance Considerations
Benefits - resolves portability limitations of basic event-driven model - smaller footprint than regular worker thread Downsides - applicability to certain classes of applications - event routing on multi CPU systems ## Footnote P2L5: Thread Performance Considerations
182
what is Flash ## Footnote P2L5: Thread Performance Considerations
- an event driven webserver (AMPED) - with asymmetric helper processes - helpers used for disk reads - pipes used for communication w/ dispatcher - helper reads file in memory (via mmap) - dispatcher checks (via mincore) if pages are in memory to decide 'local' handler or helper => extra check but possible big savings ## Footnote P2L5: Thread Performance Considerations
183
What optimizations does Flash utilize? ## Footnote P2L5: Thread Performance Considerations
- application level caching (data and computation) - alignment for DMA - use of DMA with scatter-gather => vector I/O operations all now fairly common optimizations now ## Footnote P2L5: Thread Performance Considerations
184
How does Apache compare to Flash ## Footnote P2L5: Thread Performance Considerations
core = basic server skeleton modules: per functionality flow of control: similar to event driven model BUT Apache is - combination of multi process + multi thread - each process == boss/worker with dynamic thread pool - # of processes can also be dynamically adjusted ## Footnote P2L5: Thread Performance Considerations