Midterm 2 Flashcards

1
Q

deadlock

A

A set of processes or threads is waiting and needs something to make progress, but the thing each one needs is held by another waiting process or thread

2
Q

request (on a resource)

A

When a process wants to use a resource

3
Q

use (on a resource)

A

When a resource is in use by a process and usually cannot be used by any other processes

4
Q

release (on a resource)

A

When a process is done using a resource. Allows another process to use the resource

5
Q

livelock

A

A condition in which a thread continuously attempts an action that fails.

6
Q

necessary conditions (for deadlock)

A

Mutual exclusion (some resources can only be used by one process at a time), hold and wait (a process can hold some resources while asking for more), no preemption (the OS gets a resource back only once the process holding it is done with it), and circular wait (it is possible to build a cycle of processes P1..Pn in which each process waits for a resource held by the next)

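A minimal sketch (my own illustration, not from the cards) of how these four conditions show up in ordinary threaded code: two locks, each acquired in the opposite order by the other thread.

```python
import threading

lock_a = threading.Lock()   # mutual exclusion: only one holder at a time
lock_b = threading.Lock()

def worker_1():
    with lock_a:            # holds A (hold and wait, no preemption) ...
        with lock_b:        # ... and now waits for B
            pass

def worker_2():
    with lock_b:            # holds B ...
        with lock_a:        # ... and waits for A: circular wait with worker_1
            pass

# daemon threads so the demo still exits even if the two workers deadlock
t1 = threading.Thread(target=worker_1, daemon=True)
t2 = threading.Thread(target=worker_2, daemon=True)
t1.start(); t2.start()
# With unlucky timing both threads block forever. Making every thread acquire
# the locks in one agreed-upon order (always A before B) removes circular wait.
t1.join(timeout=1); t2.join(timeout=1)
```
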
7
Q

mutual exclusion (as a necessary condition for deadlock)

A

some resources can only be used by one process at a time

8
Q

hold and wait (as a necessary condition for deadlock)

A

processes can have some resources and ask for more

9
Q

no preemption (as a necessary condition for deadlock)

A

OS gets a resource back only once a process is done using it

10
Q

circular wait (as a necessary condition for deadlock)

A

It is possible to build a cycle of processes P1..Pn such that each process waits for a resource held by the next process (and Pn waits for one held by P1)

11
Q

resource allocation graph

A

Shows which processes have which resources and which processes are requesting which resources

12
Q

request edge

A

directed edge from a process to a resource in a resource allocation graph

13
Q

assignment edge

A

directed edge from a resource instance to a process in a resource allocation graph

14
Q

deadlock prevention

A

Building a system where deadlock can’t happen by preventing at least one of the necessary deadlock conditions from happening

15
Q

deadlock detection

A

The strategy of letting deadlocks happen and dealing with them when they occur

16
Q

deadlock avoidance

A

Deal with deadlock by staying on a safe path: never let a process do something that could cause deadlock in the future

17
Q

claim edge

A

dashed arrow from process to resource in the resource allocation graph when the process might request the resource

18
Q

safe state (in deadlock avoidance)

A

A state from which the processes cannot be forced into deadlock: there is some order in which every process can get its resources and finish

19
Q

unsafe state (in deadlock avoidance)

A

A state where processes could get into deadlock and the OS would not stop them

20
Q

banker’s algorithm

A

Before a resource is granted, check whether granting the request would lead to an unsafe state. If it would, the OS does not give the process the resource

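A small sketch (assumed data layout, not the course's code) of the safety check at the heart of the banker's algorithm: `available`, `allocation`, and `need` are per-resource-type vectors, and the state is safe if every process can finish in some order.

```python
def is_safe(available, allocation, need):
    work = list(available)                  # instances currently free
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # process i could run to completion and release what it holds
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                progress = True
    return all(finish)

# One resource type with 2 free instances; P0 holds 1 and may need 3 more,
# P1 holds 2 and needs at most 1 more, so P1 can finish first: safe.
print(is_safe([2], [[1], [2]], [[3], [1]]))   # True
```

A request is granted only if the state that would result from granting it still passes this check; otherwise the requesting process waits.
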
21
Q

wait-for graph

A

In deadlock detection, a variant of the resource-allocation graph with resource nodes removed; indicates a deadlock if the graph contains a cycle

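A quick sketch (hypothetical processes, not from the card) of the cycle check used for detection: the wait-for graph is a dict mapping each process to the processes it is waiting on, and a cycle means deadlock.

```python
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in graph}

    def visit(p):
        color[p] = GRAY
        for q in graph.get(p, ()):
            if color.get(q, WHITE) == GRAY:               # back edge -> cycle
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in graph)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```
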
22
Q

recovery (from deadlock)

A

Usually have to terminate one or more processes

23
Q

victim selection (in deadlock recovery)

A

Choose the best process to terminate

24
Q

rollback (in deadlock recovery)

A

Terminating a process and restarting it later

25
stall (in memory access)
A CPU state occurring when the CPU is waiting for data from main memory and must delay execution.
26
cache
Small amount of very fast memory, usually in CPU. Reduces delay of going to main memory for every instruction
27
base register
marks the start of a process in memory
28
limit register
marks the end of a process’s memory (it holds the size of the range, so base + limit is the end)
29
address binding
choosing a physical memory address where a program will run
30
compile-time address binding
The compiler chooses an address when it builds the program. This produces absolute code, which must be run at that particular location and must be rebuilt if it is going to run elsewhere
31
load-time address binding
Choose memory address when program starts running
32
execution-time address binding
Can move after it starts running. Needs more help from hardware.
33
absolute code
Code that cannot easily be relocated
34
relocatable code
Position independent code that can be relocated
35
logical address space
Address space that the user program sees. The address space can be the same, no matter where the program is running.
36
logical address
An address that the user program sees that might differ from the physical address
37
address translation
runtime conversion between logical and physical addresses
38
physical address space
The physical addresses on the computer hardware
39
physical address
The actual address where something is stored
40
memory-management unit
Hardware device that handles mapping of logical to physical addresses. Usually a part of the CPU
41
relocation register
A CPU register whose value is added to every logical address to create a physical address (for primitive memory management).
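A toy example (made-up register values) of what the MMU does with a relocation register plus a limit register: check the logical address against the limit, then add the relocation value.

```python
RELOCATION = 14000   # where the process was placed in physical memory
LIMIT = 3000         # size of the process's logical address space

def translate(logical_addr):
    if logical_addr >= LIMIT:
        raise MemoryError("trap: address beyond limit")
    return logical_addr + RELOCATION

print(translate(346))    # -> 14346
```
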
42
dynamic loading
Bring in needed libraries at runtime. Loads parts of the program when they are first needed
43
static linking
Do all linking at build time: application code and libraries are linked into a single load image, ready to execute once it is copied into memory. The executable includes its own copy of the needed libraries (just the parts it uses), so the same library code duplicated across executables can waste a lot of space
44
dynamic linking
Linking at load time (startup). Gets the latest version of every library with smaller executable size. Can be done by writing code with a stub, which is a small piece of code that gets called instead of the subroutine. On the first call, the stub finds and replaces itself with the actual function. Programs share common code, but not necessarily in memory. Code is usually copied into an individual program’s memory
45
dynamically linked library
Just one copy of the library on the disk that is copied into shared memory and used by multiple programs
46
shared library
A dynamically linked library that can be used by multiple programs
47
contiguous memory allocation
all memory for a process must be in a big block
48
variable partitioning
Processes are put into memory one after another with no gaps. When a process exits, a new one can fill the space (the OS clears the memory before giving it to another process). Can cause external fragmentation: leftover pieces of memory that are too small to hold a process
49
dynamic storage allocation
OS needs to find space for processes as they start and exit. Just like malloc has to find space
50
hole (in memory allocation)
Pieces of memory not currently in use
51
first-fit
Choose first block that is big enough. Sometimes wastes space. Can be done quickly
52
next-fit
Choose the first sufficiently large hole after the one allocated most recently. Might promote locality
53
best-fit
Find smallest that is big enough. Doesn't waste as much space, but could take longer
54
worst-fit
Find the largest hole and use it, turning the leftover part into a new hole; the leftover is bigger and may be more useful than the tiny scraps best-fit leaves behind
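A compact sketch (my own example, not from the cards) of how first-fit, best-fit, and worst-fit each choose a hole; `holes` is a list of (start, size) pairs of free memory.

```python
def pick_hole(holes, request, policy):
    candidates = [h for h in holes if h[1] >= request]
    if not candidates:
        return None
    if policy == "first":   # first hole (in address order) that is big enough
        return min(candidates, key=lambda h: h[0])
    if policy == "best":    # smallest hole that still fits
        return min(candidates, key=lambda h: h[1])
    if policy == "worst":   # largest hole
        return max(candidates, key=lambda h: h[1])

holes = [(0, 100), (300, 500), (900, 200)]
print(pick_hole(holes, 150, "first"))  # (300, 500)
print(pick_hole(holes, 150, "best"))   # (900, 200)
print(pick_hole(holes, 150, "worst"))  # (300, 500)
```
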
55
external fragmentation
Wasted memory in small pieces between allocations that are too small to use
56
50-percent rule
With first-fit, statistical analysis shows that if N blocks of memory have been allocated, another 0.5N blocks will be lost to fragmentation
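A quick check of the arithmetic behind the rule (worked out here, not on the card): if N blocks are allocated and another 0.5N are lost to holes, the wasted fraction is 0.5N / (N + 0.5N) = 1/3, which is why the rule is often summarized as "one-third of memory may be unusable."
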
57
internal fragmentation
The allocator may give a process a little extra memory rather than keep track of a tiny leftover hole, e.g., giving 1064 bytes when 1063 were requested; the unused extra inside the allocation is wasted
58
compaction
Move allocated blocks to combine holes
59
paging
A common memory management scheme that avoids external fragmentation by splitting physical memory into fixed-sized frames and logical memory into blocks of the same size called pages.
60
memory pages
fixed-sized blocks of logical memory.
61
memory frames
Fixed-sized blocks of physical memory
62
page number
Part of a memory address generated by the CPU in a system using paged memory; an index into the page table
63
page offset
Part of a memory address generated by the CPU in a system using paged memory; the offset of the location within the page of the word being addressed.
64
page table
Per-process table listing where each page is located, i.e., which frame holds each page
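A small sketch (hypothetical page size and page table) of how the pieces above fit together: split the logical address into page number and offset, look the page up in the page table, and glue the frame number onto the offset.

```python
PAGE_SIZE = 4096                       # 4 KB pages (2^12 bytes)
page_table = {0: 5, 1: 9, 2: 1}        # page number -> frame number

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE   # high bits: page number
    offset = logical_addr % PAGE_SIZE  # low bits: page offset
    frame = page_table[page]           # page-table lookup
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1abc)))          # page 1 -> frame 9 -> 0x9abc
```
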
65
frame table
In paged memory, the table containing frame details, including which frames are allocated, which are free, total frames in the system, etc
66
page-table base register
register holding the address of the page table for the current process
67
translation look-aside buffer
Cache for each CPU core for a subset of the page table. Speeds up memory access by storing frequently accessed page table entries
68
address space identifier
A part of a TLB entry that identifies the process associated with that entry and, if the requesting process doesn’t match the ID, causes a TLB miss for address space protection
69
page-table length register
records how large a page table is for a process
70
TLB hit / miss
When the translation for a page is or isn't found in the TLB
71
TLB hit ratio
fraction of memory accesses whose translations are found in the TLB
72
TLB flush
Removing everything from the TLB
73
effective memory-access time
74
valid-invalid bit
Bit in each page-table entry marking whether that entry is valid (the page is legal and in memory) or invalid
75
hierarchical paging
Reduces the amount of memory the page table takes up by breaking the page table itself into page-sized pieces. The outer (2nd-level) page table stores where each piece of the page table is; the inner (1st-level) pieces store where the actual pages are in memory. Needs a little more memory and an extra lookup per translation
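A sketch of the classic two-level split (assuming a 32-bit logical address and 4 KB pages): 10 bits index the outer page table, 10 bits index within the selected page of the page table, and 12 bits are the page offset.

```python
def split_two_level(addr):
    offset = addr & 0xFFF           # low 12 bits: offset within the page
    inner  = (addr >> 12) & 0x3FF   # next 10 bits: index into the inner table
    outer  = (addr >> 22) & 0x3FF   # top 10 bits: index into the outer table
    return outer, inner, offset

print(split_two_level(0x00ABCDEF))  # -> (2, 700, 3567)
```
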
76
hashed page table
Store only the pages that are actually being used in a hash table keyed on page number. Useful when the logical address space is large but only a small amount of it is actually used by the process: the sparse mapping becomes a hash table, and the empty pages are simply left out
77
inverted page table
One entry for every frame instead of every page. Lookup is more complicated: it might require a linear search on each memory access, which would be inefficient, so it depends heavily on a TLB doing most of the address translation. Also makes it more difficult to have pages shared by multiple processes
78
swapping
When running lots of processes, memory might fill up. Temporarily remove an entire process from memory and put it somewhere else
79
backing store
Secondary memory
80
virtual memory
Extends memory to use secondary storage
81
sparse (address space)
An address space with large unused gaps between the regions that are actually in use
82
demand paging
Move less-used pages to backing store. Bring a page in when needed. Less physical memory per process
83
page fault
Happens when a process accesses a page marked invalid in its page table, typically because the page is not currently in memory and must be brought in from the backing store
84
memory resident
Page stored in main memory
85
pure demand paging
Don’t load pages until they are first used. Less I/O at startup. More concurrent processes and less physical memory per process
86
locality of reference
The tendency of a process to reference addresses close to ones it referenced recently, so nearby references tend to fall in the same pages
87
page-fault rate
How often page faults happen for a process (e.g., faults per memory reference)
88
copy-on-write
Generally, the practice by which any write causes the data to first be copied and then modified, rather than overwritten. In virtual memory, on a write attempt to a shared page, the page is first copied, and the write is made to that copy.
89
page replacement
Removing a page from main memory and replacing it with a page from the backing store
90
modify bit (dirty bit)
A bit representing whether a page has been modified
91
frame-allocation algorithm
92
page-replacement algorithm
Choosing a good page to take out of memory and replace
93
reference string
a sequence of pages to reference
94
FIFO page replacement
Throw out page that’s been in memory the longest. Efficient victim choosing. Could be implemented with a next victim pointer
95
Belady’s anomaly
Having more frames can sometimes give more page faults
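A small simulation (the standard 12-reference example, coded here as a sketch) that shows the anomaly: FIFO takes more faults with 4 frames than with 3.

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:       # evict the page resident longest
                frames.remove(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults
```
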
96
stack algorithm
Immune to Belady’s anomaly. The set of pages in memory with f frames is always a subset of the set of pages that would be in memory with f + 1 frames. FIFO is not a stack algorithm
97
optimal page-replacement algorithm (OPT)
Victim page is always the page that will not be used again for the longest amount of time. Best algorithm. Requires knowledge of the future, so not possible to implement
98
least-recently-used page replacement
Approximation of OPT: the victim is the page that has been used least recently, which eliminates the need to know which pages will be referenced in the future. Exact LRU is inefficient to implement, so it is itself approximated in practice, e.g., using a reference bit
99
reference bit
A bit indicating whether a page has been referenced
100
second-chance page-replacement algorithm
Keep a pointer to the next victim, like in FIFO. If the page’s reference bit is set, clear the bit and move to the next page; if it is clear, that page is the victim. An approximation of least recently used
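A short sketch (not the course's code) of the clock-style victim search: pages sit in a circular list with reference bits, and a set bit buys the page one more pass of the hand.

```python
def choose_victim(pages, ref_bits, hand):
    """pages / ref_bits are parallel lists; hand is the next-victim index.
    Returns (victim_index, new_hand_position)."""
    while True:
        if ref_bits[hand] == 0:              # no second chance left: victim
            return hand, (hand + 1) % len(pages)
        ref_bits[hand] = 0                   # spend its second chance
        hand = (hand + 1) % len(pages)

pages    = [7, 0, 1, 2]
ref_bits = [1, 0, 1, 1]
victim, hand = choose_victim(pages, ref_bits, 0)
print(pages[victim])   # 0 -- page 7's bit was cleared, page 0's was already 0
```
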
101
least frequently used page-replacement algorithm
Count the number of references to each page and replace the page with the smallest count. If a page is used heavily early on and then not used, it might stay in memory longer than it needs to
102
page buffering
Keep a pool of free frames so that on a page fault the needed page can be read into a free frame right away, before the victim page has been written out
103
equal allocation
An allocation algorithm that assigns equal amounts of a resource to all requestors. In virtual memory, assigning an equal number of frames to each process.
104
proportional allocation
An allocation algorithm that assigns a resource in proportion to some aspect of the requestor. In virtual memory, the assignment of page frames in proportion to the size of each process.
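A worked example (numbers assumed): with 62 free frames and two processes of sizes 10 and 127 pages, proportional allocation gives the small process about 10/137 × 62 ≈ 4 frames and the large one about 127/137 × 62 ≈ 57 frames.
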
105
global replacement
remove a page from any process
106
local replacement
remove a page only from the faulting process’s own frames
107
thrashing
when the CPU can’t complete a lot of work because most of the time is spent servicing page faults
108
working-set model
The set of pages a process is actively using at a given time (its working set). If a process is page faulting too much, it likely needs more frames; if it is rarely page faulting, its number of frames can be decreased
109
page-fault frequency
How often a process page faults
110
memory-mapped file
A file that is loaded into physical memory via virtual memory methods, allowing access by reading and writing to the memory addresses occupied by the file
111
distributed system
A collection of computers (nodes) connected by a network that work together. Allows for resource sharing, computational speedups, reliability, communication, and cost effectiveness. Ideally, remote resources should be just as easy to use as local resources
112
resource sharing
A process at one site can use resources available at other sites (files, devices, compute)
113
computation speedup
Computations can be completed faster on distributed systems
114
load sharing (in a distributed system)
Moving processes among the nodes of a distributed system to spread the load evenly
115
reliability
If one site in a distributed system fails, the remaining sites can continue operating, making the system as a whole more reliable than a single machine
116
communication
Processes at different sites can exchange information, for example by passing messages over the network
117
remote login
Logging in to a remote machine over the network and running commands there
118
session (in network communication)
The time when systems are connected together
119
network operating system
An operating system in which users are aware of the separate machines and must access remote resources explicitly (e.g., remote login or file transfer), which requires extra programming effort
120
distributed operating system
An operating system that makes a collection of loosely coupled nodes interconnected by a communication network look like a single system: users access remote resources the same way they access local ones
121
data migration
Moving data between sites, e.g., transferring all or part of a file to the site that needs it
122
computation migration
Moving a computation between sites, e.g., running the computation at the site where the data lives instead of moving the data
123
process migration
Moving processes between sites so they execute where it makes the most sense
124
local area network
A network that connects computers within a room, a building, or a campus.
125
wide-area network
A network that links buildings, cities, or countries.
126
physical layer
Handles the transmission of raw bits over the physical medium (signals, voltages, cabling)
127
data-link layer
Handles frames between directly connected nodes, including framing and error detection
128
network layer
Handles routing packets between hosts across the network (e.g., IP addressing and routing)
129
transport layer
Handles end-to-end delivery between hosts, breaking messages into packets and, depending on the protocol, providing ordering and reliability (e.g., TCP, UDP)
130
user datagram protocol
Connectionless, unreliable transport protocol: sends individual datagrams with low overhead but no guarantee of delivery or ordering
131
transmission control protocol
Connection-oriented, reliable transport protocol: provides an ordered byte stream with acknowledgments and retransmission