Midterm 2 Flashcards

1
Q

deadlock

A

A set of processes or threads is waiting, and each needs something to make progress, but the thing it needs is held by another waiting process or thread

2
Q

request (on a server)

A

When a process wants to use a resource

3
Q

use (on a resource)

A

When a resource is in use by a process and usually cannot be used by any other processes

4
Q

release (on a resource)

A

When a process is done using a resource, allowing another process to use it

5
Q

livelock

A

A condition in which a thread continuously attempts an action that fails.

6
Q

necessary conditions (for deadlock)

A

Mutual exclusion (some resources can be used by only one process at a time), hold and wait (a process can hold some resources while asking for more), no preemption (the OS gets a resource back only when the process is done using it), and circular wait (it is possible to build a cycle of processes P1..Pn in which each process waits for a resource held by the next)
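
A minimal sketch may make the conditions concrete. The example below is an illustrative assumption (not from the cards): two POSIX threads each hold one mutex while waiting for the other, exhibiting mutual exclusion, hold and wait, no preemption, and circular wait.

```c
/* Illustrative deadlock: two threads acquire two mutexes in opposite order.
 * Build with: gcc deadlock.c -pthread */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);   /* hold A ... */
    pthread_mutex_lock(&lock_b);   /* ... and wait for B */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *worker2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_b);   /* hold B ... */
    pthread_mutex_lock(&lock_a);   /* ... and wait for A: a possible cycle */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);        /* may never return if the cycle forms */
    pthread_join(t2, NULL);
    puts("no deadlock on this run");
    return 0;
}
```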

7
Q

mutual exclusion (as a necessary condition for deadlock)

A

some resources can only be used by one process at a time

8
Q

hold and wait (as a necessary condition for deadlock)

A

a process can hold some resources while asking for more

9
Q

no preemption (as a necessary condition for deadlock)

A

OS gets a resource back only once a process is done using it

10
Q

circular wait (as a necessary condition for deadlock)

A

it is possible to build a cycle of processes P1..Pn in which each process waits for a resource held by the next

11
Q

resource allocation graph

A

Shows which processes have which resources and which processes are requesting which resources

12
Q

request edge

A

directed edge from a process to a resource in a resource allocation graph

13
Q

assignment edge

A

directed edge from a resource instance to a process in a resource allocation graph

14
Q

deadlock prevention

A

Building a system where deadlock can’t happen by preventing at least one of the deadlock conditions from happening

15
Q

deadlock detection

A

The strategy of letting deadlocks happen and dealing with them when they occur

16
Q

deadlock avoidance

A

Deal with deadlock by staying on a safe path: never let a process do something that could lead to deadlock in the future

17
Q

claim edge

A

dashed arrow from process to resource in the resource allocation graph when the process might request the resource

18
Q

safe state (in deadlock avoidance)

A

A state in which the OS can still guarantee that every process finishes, so the processes cannot create deadlock

19
Q

unsafe state (in deadlock avoidance)

A

A state from which processes could get into deadlock; the OS can no longer guarantee that they will all finish

20
Q

banker’s algorithm

A

Before a resource is granted, check whether granting the request would leave the system in an unsafe state. If it would, the OS does not give the process the resource (the process must wait)
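
The heart of the algorithm is a safety test, sketched below. The array names (available, alloc, need) and the sizes N and M are assumptions for illustration.

```c
/* Sketch of the safety check at the core of the banker's algorithm.
 * Array names and the sizes N and M are assumptions for illustration. */
#include <stdbool.h>

#define N 5   /* number of processes (assumed)      */
#define M 3   /* number of resource types (assumed) */

bool is_safe(int available[M], int alloc[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = available[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < M; j++) work[j] += alloc[i][j]; /* i releases */
                finish[i] = true;
                done++;
                progressed = true;
            }
        }
        if (!progressed) return false;  /* nobody else can finish: unsafe */
    }
    return true;                        /* every process can finish: safe */
}
```

A request would then be granted only if tentatively granting it still leaves is_safe() returning true.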

21
Q

wait-for graph

A

In deadlock detection, a variant of the resource-allocation graph with resource nodes removed; indicates a deadlock if the graph contains a cycle

22
Q

recovery (from deadlock)

A

Usually have to terminate one or more processes

23
Q

victim selection (in deadlock recovery)

A

Choose the best process to terminate

24
Q

rollback (in deadlock recovery)

A

Terminating a process and restarting later

25
Q

stall (in memory access)

A

A CPU state occurring when the CPU is waiting for data from main memory and must delay execution.

26
Q

cache

A

A small amount of very fast memory, usually in the CPU, that reduces the delay of going to main memory for every instruction

27
Q

base register

A

marks the start of a process’s memory (its smallest legal physical address)

28
Q

limit register

A

holds the size of a process’s memory range; together with the base register it marks where the process’s memory ends

29
Q

address binding

A

choosing a physical memory address where a program will run

30
Q

compile-time address binding

A

The compiler chooses an address when it builds the program, producing absolute code; the program must run at that particular location and must be rebuilt if it is going to run elsewhere

31
Q

load-time address binding

A

The memory address is chosen when the program is loaded to start running

32
Q

execution-time address binding

A

The program can be moved in memory after it starts running. Needs more help from hardware

33
Q

absolute code

A

Code that cannot easily be relocated

34
Q

relocatable code

A

Position independent code that can be relocated

35
Q

logical address space

A

The address space that the user program sees. It can be the same no matter where in physical memory the program is actually loaded.

36
Q

logical address

A

An address that the user program sees that might differ from the physical address

37
Q

address translation

A

runtime conversion between logical and physical addresses

38
Q

physical address space

A

The physical addresses on the computer hardware

39
Q

physical address

A

The actual address where something is stored

40
Q

memory-management unit

A

Hardware device that handles mapping of logical to physical addresses. Usually a part of the CPU

41
Q

relocation register

A

A CPU register whose value is added to every logical address to create a physical address (for primitive memory management).
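
A tiny sketch of the idea follows, with assumed register values: every logical address is checked against the limit and then offset by the relocation register.

```c
/* Sketch of relocation + limit translation (the values are assumptions). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static const uint32_t relocation = 0x14000; /* base of the process in memory */
static const uint32_t limit      = 0x08000; /* size of the process's space   */

static uint32_t translate(uint32_t logical) {
    if (logical >= limit) {                 /* outside the process: trap */
        fprintf(stderr, "addressing error: %#x\n", (unsigned)logical);
        exit(1);
    }
    return logical + relocation;            /* physical = logical + base */
}

int main(void) {
    printf("logical 0x346 -> physical %#x\n", (unsigned)translate(0x346));
    return 0;
}
```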

42
Q

dynamic loading

A

Bring in needed routines or libraries at run time: parts of the program are loaded only when they are first needed

43
Q

static linking

A

All linking is done at build time: application code and libraries are combined into a single load image that is ready to execute once it is copied into memory. The executable includes its own copy of the library parts it needs, which can waste a lot of space when many programs duplicate the same code

44
Q

dynamic linking

A

Linking is postponed until load time (startup), so the program gets the latest version of each library and the executable is smaller. Often done with a stub: a small piece of code that gets called instead of the real routine and, on the first call, finds the routine and replaces itself with it. Programs share the common code on disk, but not necessarily in memory; the code is usually copied into each program’s own memory

45
Q

dynamically linked library

A

Just one copy of the library on the disk that is copied into shared memory and used by multiple programs

46
Q

shared library

A

A dynamically linked library that can be used by multiple programs

47
Q

contiguous memory allocation

A

all of a process’s memory must be in one contiguous block

48
Q

variable partitioning

A

Processes are put into memory one after another with no gaps. When a process exits, a new one can fill its space (the OS clears the memory before giving it to another process). Can cause external fragmentation: pieces of memory too small to hold a process

49
Q

dynamic storage allocation

A

OS needs to find space for processes as they start and exit. Just like malloc has to find space

50
Q

hole (in memory allocation)

A

A piece of memory not currently in use

51
Q

first-fit

A

Choose first block that is big enough. Sometimes wastes space. Can be done quickly
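
A sketch of the scan follows, assuming a hypothetical linked free list (the struct and field names are made up for illustration); next-fit, best-fit, and worst-fit on the following cards differ only in which hole this kind of scan returns.

```c
/* Sketch of a first-fit search over a hypothetical free list
 * (struct and field names are assumptions for illustration). */
#include <stddef.h>

struct hole {
    size_t       start;  /* start address of the free block  */
    size_t       size;   /* size of the free block, in bytes */
    struct hole *next;
};

/* Return the first hole large enough for `request`, or NULL if none fits. */
struct hole *first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request)
            return h;    /* stop at the first sufficiently large hole */
    return NULL;
}
```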

52
Q

next-fit

A

Choose the first sufficiently large block found after the previously allocated one (the search resumes where the last one ended). Might promote locality

53
Q

best-fit

A

Find smallest that is big enough. Doesn’t waste as much space, but could take longer

54
Q

worst-fit

A

Find largest hole and use it. Make the leftover part into a hole. Possibly more efficient to implement

55
Q

external fragmentation

A

Wasted memory in small pieces between allocations that are too small to use

56
Q

50-percent rule

A

With first-fit, statistically, if N blocks of memory have been allocated, about another 0.5N blocks are lost to fragmentation
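
A quick consequence of the rule (the algebra below is illustrative, not from the cards): roughly one-third of memory may end up unusable.

```latex
\frac{0.5N}{N + 0.5N} \;=\; \frac{1}{3} \quad\text{of memory may be unusable}
```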

57
Q

internal fragmentation

A

Wasted memory inside an allocated block: the allocator may give a process a little extra memory rather than leave a tiny hole (e.g., granting 1064 bytes when 1063 were requested)

58
Q

compaction

A

Move allocated blocks to combine holes

59
Q

paging

A

A common memory management scheme that avoids external fragmentation by splitting physical memory into fixed-sized frames and logical memory into blocks of the same size called pages.

60
Q

memory pages

A

fixed-sized blocks of logical memory.

61
Q

memory frames

A

Fixed-sized blocks of physical memory

62
Q

page number

A

Part of a memory address generated by the CPU in a system using paged memory; an index into the page table

63
Q

page offset

A

Part of a memory address generated by the CPU in a system using paged memory; the offset of the location within the page of the word being addressed.

64
Q

page table

A

A per-process table listing which frame each page is stored in
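
A sketch of how the page number, page offset, and page table fit together, assuming 4 KiB pages and a made-up page_table[] mapping page numbers to frame numbers.

```c
/* Sketch of paged address translation, assuming 4 KiB pages and a made-up
 * page_table[] that maps page numbers to frame numbers. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define OFFSET_BITS 12u   /* log2(PAGE_SIZE) */

static uint32_t page_table[] = { 5, 9, 2, 7 };   /* illustrative frames */

static uint32_t translate(uint32_t logical) {
    uint32_t page   = logical >> OFFSET_BITS;      /* page number       */
    uint32_t offset = logical & (PAGE_SIZE - 1);   /* page offset       */
    uint32_t frame  = page_table[page];            /* page-table lookup */
    return frame * PAGE_SIZE + offset;             /* physical address  */
}

int main(void) {
    uint32_t logical = 1u * PAGE_SIZE + 100u;      /* page 1, offset 100 */
    printf("logical %u -> physical %u\n",
           (unsigned)logical, (unsigned)translate(logical));
    return 0;
}
```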

65
Q

frame table

A

In paged memory, the table containing frame details, including which frames are allocated, which are free, total frames in the system, etc

66
Q

page-table base register

A

register holding the address of the page table for the current process

67
Q

translation look-aside buffer

A

Cache for each CPU core for a subset of the page table. Speeds up memory access by storing frequently accessed page table entries

68
Q

address space identifier

A

A part of a TLB entry that identifies the process associated with that entry and, if the requesting process doesn’t match the ID, causes a TLB miss for address space protection

69
Q

page-table length register

A

records how large a page table is for a process

70
Q

TLB hit / miss

A

When the translation for a page is (hit) or is not (miss) found in the TLB

71
Q

TLB hit ratio

A

The fraction of memory accesses whose page translation is found in the TLB

72
Q

TLB flush

A

Removing everything from the TLB

73
Q

effective memory-access time

A

The average time to access memory once the TLB is taken into account: a weighted average of the access time on a TLB hit and the slower access time on a TLB miss, weighted by the hit ratio
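
A worked example may help; the numbers (80% hit ratio, 10 ns TLB lookup, 100 ns memory access) are illustrative assumptions, and a miss is charged one extra memory access to read the page table.

```latex
\mathrm{EAT} = \alpha\,(t_{\mathrm{TLB}} + t_{\mathrm{mem}}) + (1-\alpha)\,(t_{\mathrm{TLB}} + 2\,t_{\mathrm{mem}})
             = 0.80\,(10 + 100) + 0.20\,(10 + 200) = 130\ \mathrm{ns}
```
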
74
Q

valid-invalid bit

A

A page-table bit stating whether a page is valid (legal and in memory) or invalid (illegal, or not currently in memory)

75
Q

hierarchical paging

A

Reduces the amount of memory a page table takes up by breaking the page table into page-sized pieces. The outer (2nd-level) page table stores where each piece of the page table is; the inner (1st-level) pieces store where the actual pages are. Needs a little more memory and an extra lookup, but unused pieces need not be kept
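
A sketch of how a 32-bit logical address can be split for two-level paging, assuming 4 KiB pages and 10-bit indices for the outer and inner tables (the split is an illustrative assumption).

```c
/* Sketch: splitting a 32-bit logical address for two-level paging,
 * assuming 4 KiB pages and 10-bit outer/inner indices. */
#include <stdint.h>

#define OFFSET_BITS 12u   /* 4 KiB pages (assumed)          */
#define INNER_BITS  10u   /* index into one page-table piece */

static inline uint32_t outer_index(uint32_t addr) {
    return addr >> (OFFSET_BITS + INNER_BITS);               /* top 10 bits    */
}
static inline uint32_t inner_index(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INNER_BITS) - 1); /* middle 10 bits */
}
static inline uint32_t page_offset(uint32_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);                 /* low 12 bits    */
}
```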

76
Q

hashed page table

A

Store the pages that are actually in use in a hash table. Useful when the logical address space is large but only a small amount of it is actually used by the process: the sparse mapping becomes a hash table that simply leaves out the empty pages

77
Q

inverted page table

A

Entry for every frame instead of every page. More complicated lookup. Might have to do a linear search on each memory access, which would be inefficient. Depends heavily on a TLB. TLB does most address translation. More difficult to have pages shared by multiple processes

78
Q

swapping

A

When running lots of processes, memory might fill up. Temporarily remove an entire process from memory and put it somewhere else

79
Q

backing store

A

Secondary memory

80
Q

virtual memory

A

Extends memory to use secondary storage

81
Q

sparse (address space)

A

An address space with large unused gaps between the regions that are actually in use

82
Q

demand paging

A

Move less-used pages to backing store. Bring a page in when needed. Less physical memory per process

83
Q

page fault

A

Happens when a process tries to access a page marked invalid, typically because the page is not currently in memory

84
Q

memory resident

A

Page stored in main memory

85
Q

pure demand paging

A

Don’t load pages until they are first used. Less I/O at startup. More concurrent processes and less physical memory per process

86
Q

locality of reference

A

The tendency of a program to reference addresses near ones it referenced recently, so accesses tend to fall within the same pages

87
Q

page-fault rate

A

The rate at which page faults happen for a process

88
Q

copy-on-write

A

Generally, the practice by which any write causes the data to first be copied and then modified, rather than overwritten. In virtual memory, on a write attempt to a shared page, the page is first copied, and the write is made to that copy.

89
Q

page replacement

A

Removing a page from main memory and replacing it with a page from the backing store

90
Q

modify bit (dirty bit)

A

A bit representing whether a page has been modified

91
Q

frame-allocation algorithm

A

Deciding how many frames of physical memory to give each process (e.g., equal or proportional allocation)
92
Q

page-replacement algorithm

A

Choosing a good page to take out of memory and replace

93
Q

reference string

A

a sequence of page references used to evaluate a page-replacement algorithm

94
Q

FIFO page replacement

A

Throw out page that’s been in memory the longest. Efficient victim choosing. Could be implemented with a next victim pointer

95
Q

Belady’s anomaly

A

For some algorithms (such as FIFO), giving a process more frames can sometimes produce more page faults

96
Q

stack algorithm

A

Immune to Belady’s anomaly: the set of pages in memory with f frames is always a subset of the set of pages that would be in memory with f + 1 frames. FIFO is not a stack algorithm

97
Q

optimal page-replacement algorithm (OPT)

A

Victim page is always the page that will not be used again for the longest amount of time. Best algorithm. Requires knowledge of the future, so not possible to implement

98
Q

least-recently-used page replacement

A

Approximation of OPT: the victim is the page that has been used least recently, which eliminates the need to know what pages will be referenced in the future. Exact LRU is inefficient to implement, so it is in turn approximated, e.g., using a reference bit

99
Q

reference bit

A

A bit indicating whether a page has been referenced

100
Q

second-chance page-replacement algorithm

A

Keep a pointer to the next victim as in FIFO. If the candidate’s reference bit is set, clear the bit and move to the next page; otherwise evict it. An approximation of least recently used
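
A minimal sketch of the victim search, assuming hypothetical ref_bit[] and hand variables; the hardware is assumed to set a frame’s reference bit whenever its page is accessed.

```c
/* Sketch of the second-chance (clock) victim search.
 * ref_bit[] and hand are assumed bookkeeping for illustration. */
#define NFRAMES 8                  /* assumed number of frames */

static int ref_bit[NFRAMES];       /* set by hardware when a page is accessed */
static int hand = 0;               /* next-victim pointer, advances FIFO-style */

/* Return the index of the frame whose page should be evicted. */
int choose_victim(void) {
    for (;;) {
        if (ref_bit[hand] == 0) {  /* not recently used: this is the victim */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;         /* recently used: give it a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}
```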

101
Q

least frequently used page-replacement algorithm

A

Count the number of references to each page and replace the page with the smallest count. If a page is used a lot early on and then no longer used, its high count may keep it in memory longer than it needs to be

102
Q

page buffering

A

Keep a pool of free frames so that when a page fault occurs, the needed page can be read into a free frame before the victim is written out; the system may also remember which page each free frame held so it can be reused without re-reading it

103
Q

equal allocation

A

An allocation algorithm that assigns equal amounts of a resource to all requestors. In virtual memory, assigning an equal number of frames to each process.

104
Q

proportional allocation

A

An allocation algorithm that assigns a resource in proportion to some aspect of the requestor. In virtual memory, the assignment of page frames in proportion to the size of each process.
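
As a worked example (the frame count and process sizes below are illustrative assumptions, with s_i the size of process i, S the total size, and m the total number of frames):

```latex
a_i = \frac{s_i}{S}\cdot m, \qquad S = \sum_j s_j
% e.g. m = 62 frames, s_1 = 10, s_2 = 127, S = 137:
a_1 = \tfrac{10}{137}\cdot 62 \approx 4, \qquad a_2 = \tfrac{127}{137}\cdot 62 \approx 57
```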

105
Q

global replacement

A

remove a page from any process, not just the one that faulted

106
Q

local replacement

A

remove a page only from the faulting process’s own frames

107
Q

thrashing

A

when the CPU can’t complete a lot of work because most of the time is spent servicing page faults

108
Q

working-set model

A

The set of pages a process is actively using at a given time (the pages referenced in its most recent window of references). If a process is page faulting too much, it may need more frames; if it is rarely faulting, its number of frames can be decreased
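
The usual bookkeeping behind the model, stated with assumed symbols (WSS_i for process i’s working-set size, m for the total number of frames):

```latex
D = \sum_i \mathrm{WSS}_i \quad\text{(total frame demand)}; \qquad D > m \;\Rightarrow\; \text{thrashing is likely}
```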

109
Q

page-fault frequency

A

The rate at which a process page faults

110
Q

memory-mapped file

A

A file that is loaded into physical memory via virtual memory methods, allowing access by reading and writing to the memory addresses occupied by the file
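
A minimal POSIX sketch of the idea follows; the file name is a made-up placeholder, and error handling is kept short.

```c
/* Minimal sketch of memory-mapping a file for reading on a POSIX system
 * ("example.txt" is a hypothetical placeholder). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file; its pages are brought into memory on demand. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(data, 1, st.st_size, stdout);   /* access the file as memory */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```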

111
Q

distributed system

A

Allows for resource sharing, computational speedups, reliability, communication, and cost effectiveness. Ideally, remote resources should be just as easy to use as local resources

112
Q

resource sharing

A

Multiple processes using a resource at once

113
Q

computation speedup

A

Computations can be completed faster on distributed systems

114
Q

load sharing (in a distributed system)

A

Moving processes around to distribute them equally

115
Q

reliability

A

Distributed systems are more reliable than single systems: if one site fails, the remaining sites can continue operating

116
Q

communication

A

Processes on different nodes of a distributed system can exchange information by passing messages

117
Q

remote login

A

Users can log in to a remote machine in the distributed system over the network

118
Q

session (in network communication)

A

The period during which two systems stay connected and communicate

119
Q

network operating system

A

An operating system in which users are aware of the separate machines and must make extra programming effort to access remote resources

120
Q

distributed operating system

A

An OS that manages a collection of loosely coupled nodes, interconnected by a communication network, so that remote resources can be used much like local ones

121
Q

data migration

A

Transferring data (such as a file) from one site to another

122
Q

computation migration

A

Moving a computation to the site where the data resides, instead of moving the data

123
Q

process migration

A

Moving an entire process (or parts of it) to another site to be executed there

124
Q

local area network

A

A network that connects computers within a room, a building, or a campus.

125
Q

wide-area network

A

A network that links buildings, cities, or countries.

126
Q

physical layer

A
127
Q

data-link layer

A

Responsible for handling frames between directly connected nodes, including error detection and recovery on the physical link

128
Q

network layer

A

Responsible for providing connections and routing packets through the communication network (e.g., the Internet Protocol)

129
Q

transport layer

A

Responsible for end-to-end transfer of messages between hosts, including breaking messages into packets and ordering them (e.g., TCP and UDP)

130
Q

user datagram protocol

A

An unreliable, connectionless transport protocol: datagrams may be lost, duplicated, or arrive out of order, but the overhead is low

131
Q

transmission control protocol

A

A reliable, connection-oriented, stream-based transport protocol that uses acknowledgments and retransmission to guarantee in-order delivery