Unit 6 Flashcards

1
Q

The locality principle stating that if a data location is referenced, then it will tend to be referenced again soon.

A

Temporal locality

2
Q

The locality principle stating that if a data location is referenced, data locations with nearby addresses will tend to be referenced soon.

A

Spatial locality

3
Q

The high likelihood of accessing multiple elements within array A is an example of _____ locality.

A

spatial

4
Q

The high likelihood of accessing i = i + 1 repeatedly is an example of _____ locality.

A

temporal

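The two kinds of locality in cards 3 and 4 show up together in any simple array loop. A minimal sketch (Python used purely for illustration; the deck implies no particular language):

```python
# Summing an array illustrates both kinds of locality.
a = list(range(1000))

total = 0                  # 'total' and 'i' are reused on every
for i in range(len(a)):    # iteration -> temporal locality
    total += a[i]          # a[0], a[1], a[2], ... sit at adjacent
                           # addresses -> spatial locality
print(total)               # 499500
```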
5
Q

A structure that uses multiple levels of memories; as the distance from the processor increases, the size of the memories and the access time both increase.

A

Memory hierarchy

6
Q

The minimum unit of information that can be either present or not present in a cache.

A

Block (or line)

7
Q

The fraction of memory accesses found in a level of the memory hierarchy.

A

Hit rate

8
Q

The fraction of memory accesses not found in a level of the memory hierarchy.

A

Miss rate

9
Q

The time required to access a level of the memory hierarchy, including the time needed to determine whether the access is a hit or a miss.

A

Hit time

10
Q

The time required to fetch a block into a level of the memory hierarchy from the lower level, including the time to access the block, transmit it from one level to the other, insert it in the level that experienced the miss, and then pass the block to the requestor.

A

Miss penalty

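Hit time, miss rate, and miss penalty (cards 7–10) combine in the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty. A quick sketch with made-up numbers:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in the same units as the inputs."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical cache: 1-cycle hit, 5% miss rate, 100-cycle miss penalty.
print(amat(1, 0.05, 100))  # 1 + 0.05 * 100 = 6.0 cycles
```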
11
Q

Blank exhibit both temporal locality, the tendency to reuse recently accessed data items, and spatial locality, the tendency to reference data items that are close to other recently accessed items.

A

Programs

12
Q

Memory hierarchies take advantage of temporal locality by keeping more recently accessed data items closer to the blank.

A

processor

13
Q

Memory hierarchies take advantage of spatial locality by moving blocks consisting of multiple contiguous words in memory to blank of the hierarchy.

A

upper levels

14
Q

A memory hierarchy uses smaller and faster memory technologies blank to the processor.

A

close

15
Q

In most systems, the memory is a true blank, meaning that data cannot be present in level i unless they are also present in level i + 1.

A

hierarchy

16
Q

Blank are simply integrated circuits that are memory arrays with (usually) a single access port that can provide either a read or a write. Blank have a fixed access time to any datum, though the read and write access times may differ.

A

SRAMs

17
Q

SRAMs don’t need to blank and so the access time is very close to the cycle time.

A

refresh

18
Q

Cache type of memory

A

SRAMs

19
Q

In a blank, the value kept in a cell is stored as a charge in a capacitor.

A

dynamic RAM (DRAM)

20
Q

Because DRAMs use only blank per bit of storage, they are much denser and cheaper per bit than SRAM.

A

one transistor

21
Q

As DRAMs store the charge on a capacitor, it cannot be kept indefinitely and must periodically be blank

A

refreshed.

22
Q

Main memory type

A

DRAM

23
Q

To improve performance, DRAMs buffer blank for repeated access.

A

rows

24
Q

Modern DRAMs are organized in banks. Each bank consists of a series of ___

A

rows

25
DRAMs enable fast access to data by transferring bits in bursts. Successive bits are transferred on each _____.
clock edge
26
Between 1980 and 2012, the average column access time to an existing row _____.
decreased
27
Blank is a type of electrically erasable programmable read-only memory (EEPROM).
Flash memory
28
Unlike disks and DRAM, but like other EEPROM technologies, writes can wear out flash memory bits. To cope with such limits, most flash products include a controller to spread the writes by remapping blocks that have been written many times to less trodden blocks. This technique is called blank.
wear leveling
29
One of thousands of concentric circles that make up the surface of a magnetic disk.
Track
30
One of the segments that make up a track on a magnetic disk; a sector is the smallest amount of information that is read or written on a disk.
Sector
31
Also called rotational delay. The time required for the desired sector of a disk to rotate under the read/write head; usually assumed to be half the rotation time.
Rotational latency
32
The process of positioning a read/write head over the proper track on a disk
Seek
33
The time required to transfer a block of bits.
Transfer time
34
The time required to move the head to the desired track.
Seek time
35
The time required for the desired sector to rotate under the head.
Rotational latency
36
A cache structure in which each memory location is mapped to exactly one location in the cache
Direct-mapped cache
37
A field in a table used for a memory hierarchy that contains the address information required to identify whether the associated block in the hierarchy corresponds to a requested word.
Tag
38
A field in the tables of a memory hierarchy that indicates that the associated block in the hierarchy contains valid data.
Valid bit
39
Caching is perhaps the most important example of the big idea of blank.
prediction
40
Blank relies on the principle of locality to try to find the desired data in the higher levels of the memory hierarchy, and provides mechanisms to ensure that when the prediction is wrong it finds and uses the proper data from the lower levels of the memory hierarchy.
Caching
41
The hit rates of the cache prediction on modern computers are often above blank
95%
42
The blank size is 2^n blocks, so n bits are used for the index
cache
43
The blank is 2^m words (2^(m+2) bytes), so m bits are used for the word within the block, and two bits are used for the byte part of the address
block size
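Cards 42–43 describe how a byte address decomposes into tag, index, word, and byte fields. A sketch that splits a 32-bit address for a hypothetical direct-mapped cache with 2^n blocks and 2^m words per block (the cache geometry below is made up):

```python
def split_address(addr, n, m):
    """Split a 32-bit byte address for a direct-mapped cache with
    2**n blocks and 2**m words (2**(m+2) bytes) per block."""
    byte_offset = addr & 0b11                     # 2 bits: byte within word
    word_offset = (addr >> 2) & ((1 << m) - 1)    # m bits: word within block
    index = (addr >> (2 + m)) & ((1 << n) - 1)    # n bits: cache index
    tag = addr >> (2 + m + n)                     # remaining upper bits
    return tag, index, word_offset, byte_offset

# 1024-block cache (n=10), 4-word blocks (m=2):
print(split_address(0x12345678, 10, 2))
```

Reassembling the four fields (shifted back into place) reproduces the original address, which is a handy sanity check on the bit widths.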
44
A request for data from the cache that cannot be filled because the data are not present in the cache
Cache miss
45
Steps to be taken on an instruction cache miss:
1. Send the original PC value to the memory. 2. Instruct main memory to perform a read and wait for the memory to complete its access. 3. Write the cache entry: put the data from memory in the data portion of the entry, write the upper bits of the address (from the ALU) into the tag field, and turn the valid bit on. 4. Restart the instruction execution at the first step, which will refetch the instruction, this time finding it in the cache.
46
A scheme in which writes always update both the cache and the next lower level of the memory hierarchy, ensuring that data are always consistent between the two.
Write-through
47
A queue that holds data while the data are waiting to be written to memory
Write buffer
48
A scheme that handles writes by updating values only to the block in the cache, then writing the modified block to the lower level of the hierarchy when the block is replaced.
Write-back
49
A value is read from the cache and modified. The modified value is written to the cache and the corresponding memory location.
Write-through scheme
50
A value is read from the cache and modified. The modified value is written to the cache and to a queue that stores the value while waiting to be written to the corresponding memory location.
Write buffer
51
A value is read from the cache and modified. The modified value is written to the cache. The modified value is only written from the cache to memory when the cache block is replaced.
Write-back scheme
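Cards 49–51 contrast how many memory writes each policy generates. A toy sketch for repeated stores to one cached block (a deliberately simplified model, not real hardware):

```python
def count_memory_writes(writes, policy):
    """writes: sequence of stores to the same cached block.
    Returns how many writes reach main memory under each policy."""
    if policy == "write-through":
        return len(writes)           # every store also updates memory
    elif policy == "write-back":
        return 1 if writes else 0    # only on block replacement
    raise ValueError(policy)

stores = [10, 20, 30, 40]
print(count_memory_writes(stores, "write-through"))  # 4
print(count_memory_writes(stores, "write-back"))     # 1
```

This is why write-back caches (usually paired with a dirty bit) reduce memory traffic when the same block is written many times.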
52
A scheme in which a level of the memory hierarchy is composed of two independent caches that operate in parallel with each other, with one handling instructions and one handling data.
Split cache
53
To take advantage of blank, a cache must have a block size larger than one word.
spatial locality
54
A cache structure in which a block can be placed in any location in the cache.
Fully associative cache
55
A cache that has a fixed number of locations (at least two) where each block can be placed.
Set-associative cache
56
Remember that in a direct-mapped cache, the position of a memory block is given by blank
block number modulo number of blocks in cache
57
In a set-associative cache, the set containing a memory block is given by blank
block number modulo number of sets in cache
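The two modulo rules in cards 56 and 57 are one-liners; a quick sketch with made-up cache sizes:

```python
def direct_mapped_location(block_number, num_blocks):
    # Card 56: block number modulo number of blocks in the cache.
    return block_number % num_blocks

def set_associative_set(block_number, num_sets):
    # Card 57: block number modulo number of sets in the cache.
    return block_number % num_sets

# Memory block 12 in an 8-block direct-mapped cache maps to block 4;
# in a set-associative cache with 4 sets it maps to set 0.
print(direct_mapped_location(12, 8))  # 4
print(set_associative_set(12, 4))     # 0
```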
58
A replacement scheme in which the block replaced is the one that has been unused for the longest time.
Least recently used (LRU)
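LRU replacement can be sketched in a few lines with an ordered dictionary (a software model of the policy, not how hardware tracks recency):

```python
from collections import OrderedDict

def simulate_lru(accesses, capacity):
    """Hit count for a fully associative cache of `capacity` blocks
    using LRU replacement."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # now most recently used
        else:
            if len(cache) == capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True
    return hits

print(simulate_lru([0, 1, 2, 0, 3, 0], capacity=3))  # 2 hits
```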
59
A memory hierarchy with multiple levels of caches, rather than just a cache and main memory.
Multilevel cache
60
The fraction of references that miss in all levels of a multilevel cache.
Global miss rate
61
The fraction of references to one level of a cache that miss; used in multilevel hierarchies.
Local miss rate
62
Blank is a measure of the continuous service accomplishment—or, equivalently, of the time to failure—from a reference point.
Reliability
63
Blank where the service is delivered as specified
Service accomplishment
64
Blank where the delivered service is different from the specified service
Service interruption
65
A related term is blank which is just the percentage of devices that would be expected to fail in a year for a given MTTF
annual failure rate (AFR)
66
Blank is measured as mean time to repair (MTTR). Mean time between failures (MTBF) is simply the sum of MTTF + MTTR. Although MTBF is widely used, MTTF is often the more appropriate term.
Service interruption
67
Blank is then a measure of service accomplishment with respect to the alternation between the two states of accomplishment and interruption.
Availability
68
Preventing fault occurrence by construction.
Fault avoidance
69
Using redundancy to allow the service to comply with the service specification despite faults occurring.
Fault tolerance
70
Predicting the presence and creation of faults, allowing the component to be replaced before it fails.
Fault forecasting
71
Four nines of availability per year indicates that the service is available _____% of the year.
99.99
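It is worth seeing what four nines (card 71) means in absolute terms; a quick back-of-the-envelope calculation:

```python
# Four nines of availability (99.99%) still allows some downtime:
availability = 0.9999
minutes_per_year = 365 * 24 * 60            # 525,600 minutes
downtime_minutes = (1 - availability) * minutes_per_year
print(round(downtime_minutes, 1))           # about 52.6 minutes per year
```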
72
A code that enables the detection of an error in data, but not the precise location and, hence, correction of the error.
Error detection code
73
Richard Hamming invented a popular redundancy scheme for memory, for which he received the Turing Award in 1968. To invent redundant codes, it is helpful to talk about how "close" correct bit patterns can be. What we call the blank is just the minimum number of bits that are different between any two correct bit patterns.
Hamming distance
74
Hamming used a blank for error detection. In a parity code, the number of 1s in a word is counted; the word has odd parity if the number of 1s is odd and even otherwise. When a word is written into memory, the parity bit is also written (1 for odd, 0 for even). That is, the parity of the N+1 bit word should always be even.
parity code
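Hamming distance (card 73) and even parity (card 74) are both easy to compute by counting 1 bits; a small sketch:

```python
def hamming_distance(a, b):
    # Card 73: number of bit positions in which a and b differ.
    return bin(a ^ b).count("1")

def even_parity_bit(word):
    # Card 74: parity bit making the total number of 1s even
    # (1 if the word has an odd number of 1s, 0 otherwise).
    return bin(word).count("1") % 2

print(hamming_distance(0b1011, 0b1001))  # differ in one bit -> 1
print(even_parity_bit(0b1011))           # three 1s -> parity bit 1
print(even_parity_bit(0b1001))           # two 1s   -> parity bit 0
```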
75
blank were first developed in the mid-1960s, and they have remained an important part of mainframe computing over the years.
Virtual machines (VM)
76
VMs provide two other benefits that are commercially significant:
Managing software. Managing hardware.
77
A technique that uses main memory as a "cache" for secondary storage.
Virtual memory
78
An address in main memory.
Physical address
79
A set of mechanisms for ensuring that multiple processes sharing the processor, memory, or I/O devices cannot interfere, intentionally or unintentionally, with one another by reading or writing each other's data. These mechanisms also isolate the operating system from a user process.
Protection
80
An event that occurs when an accessed page is not present in main memory.
Page fault
81
An address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed.
Virtual address
82
Also called address mapping. The process by which a virtual address is mapped to an address used to access memory.
Address translation
83
A virtual memory miss.
Page fault
84
A variable-size address mapping scheme in which an address consists of two parts: a segment number, which is mapped to a physical address, and a segment offset.
Segmentation
85
The table containing the virtual to physical address translations in a virtual memory system. The table, which is stored in memory, is typically indexed by the virtual page number; each entry in the table contains the physical page number for that virtual page if the page is currently in memory.
Page table
86
The space on the disk reserved for the full virtual memory space of a process.
Swap space
87
Also called use bit or access bit. A field that is set whenever a page is accessed and that is used to implement LRU or other replacement schemes.
Reference bit
88
A cache that keeps track of recently used address mappings to try to avoid an access to the page table.
Translation-lookaside buffer (TLB)
89
A cache that is accessed with a virtual address rather than a physical address.
Virtually addressed cache
90
A situation in which two addresses access the same object; it can occur in virtual memory when there are two virtual addresses for the same physical page.
Aliasing
91
A cache that is addressed by a physical address.
Physically addressed cache
92
Also called kernel mode. A mode indicating that a running process is an operating system process.
Supervisor mode
93
A special instruction that transfers control from user mode to a dedicated location in supervisor code space, invoking the exception mechanism in the process.
System call
94
A changing of the internal state of the processor to allow a different process to use the processor that includes saving the state needed to return to the currently executing process.
Context switch
95
Also called interrupt enable. A signal or action that controls whether the process responds to an exception or not; necessary for preventing the occurrence of exceptions during intervals before the processor has safely saved the state needed to restart.
Exception enable
96
An instruction that can resume execution after an exception is resolved without the exception's affecting the result of the instruction.
Restartable instruction
97
How is a block found?
there are four methods: indexing (as in a direct-mapped cache), limited search (as in a set-associative cache), full search (as in a fully associative cache), and a separate lookup table (as in a page table).
98
A cache model in which all cache misses are classified into one of three categories: compulsory misses, capacity misses, and conflict misses.
Three Cs model
99
Also called cold-start miss. A cache miss caused by the first access to a block that has never been in the cache.
Compulsory miss
100
A cache miss that occurs because the cache, even with full associativity, cannot contain all the blocks needed to satisfy the request.
Capacity miss
101
Also called collision miss. A cache miss that occurs in a set-associative or direct-mapped cache when multiple blocks compete for the same set; such misses are eliminated in a fully associative cache of the same size.
Conflict miss
102
A sequential logic function consisting of a set of inputs and outputs, a next-state function that maps the current state and the inputs to a new state, and an output function that maps the current state and possibly the inputs to a set of asserted outputs.
Finite-state machine
103
A combinational function that, given the inputs and the current state, determines the next state of a finite-state machine.
Next-state function
104
When two unrelated shared variables are located in the same cache block and the full block is exchanged between processors even though the processors are accessing different variables.
False sharing
105
An organization of disks that uses an array of small and inexpensive disks so as to increase both performance and reliability.
Redundant arrays of inexpensive disks (RAID)
106
Allocation of logically sequential blocks to separate disks to allow higher performance than a single disk can deliver.
Striping
107
No redundancy
(RAID 0)
108
Mirroring
(RAID 1)
109
Writing identical data to multiple disks to increase data availability.
Mirroring
110
Error detecting and correcting code
(RAID 2)
111
Bit-interleaved parity
(RAID 3)
112
The group of data disks or blocks that share a common check disk or block.
Protection group
113
Block-interleaved parity
(RAID 4)
114
Distributed block-interleaved parity
(RAID 5)
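The parity used in RAID 3–5 (cards 111–114) is just the XOR of the data blocks in a protection group, which is what lets any single lost block be rebuilt. A minimal sketch with made-up block values:

```python
# Block-interleaved parity: the parity block is the XOR of the data
# blocks, so any one lost block can be rebuilt from the survivors.
data = [0b1010, 0b1100, 0b0110]   # three data "disks"
parity = 0
for block in data:
    parity ^= block               # contents of the parity disk

# Suppose disk 1 fails; XOR the survivors with the parity to recover it.
recovered = parity ^ data[0] ^ data[2]
print(recovered == data[1])       # True
```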
115
P + Q redundancy
(RAID 6)
116
Replacing a hardware component while the system is running
Hot-swapping
117
Reserve hardware resources that can immediately take the place of a failed component.
Standby spares
118
A cache that allows the processor to make references to the cache while the cache is handling an earlier miss.
Nonblocking cache
119
The two "non-cache" or "no allocate" blank and blank are intended for streaming through lots of data, so the data are unlikely to be used in the future; that is, no temporal locality.
load pair (LDNP) and store pair (STNP)
120
A technique in which data blocks needed in the future are brought into the cache early by using special instructions that specify the address of the block.
Prefetching
121
J. Presper Eckert developed technology based on _____ delay lines to act as registers and to replace vacuum tube technology.
mercury
122
_____ had the first known working mercury delay lines.
EDSAC
123
In the 1950s, _____ memory emerged as a cheaper, faster, and more reliable storage method than previous technologies.
core
124
Building a _____ became possible because of the development of the integrated circuit.
DRAM
125
Currently, DRAMs are packaged with multiple chips on a little board called a _____.
DIMM
126
First computer to employ two level memory hierarchy.
ATLAS
127
Replaced by the 32-bit VAX because of too little address space
PDP-11
128
The computer for which the term translation-lookaside buffer was coined
IBM 370
129
The first RISC architecture to transition from 32-bit addressing to 64-bit addressing.
MIPS
130
The first _____ implementation was based on tunnel diode memory, the fastest form of memory available at the time.
cache
131
The 1956 blank was the first computer to use a moving-head disk storage system.
IBM 305 RAMAC
132
Blank developed removable hard disk drives in 1962, six years before Intel existed.
IBM
133
The sealed blank disk replaced removable disks in the 1980s partly because the cost of disk electronics continued to decrease.
Winchester
134
Flash memory was first used in blank
digital cameras
135
A pioneering database management system created at General Electric in 1961.
IDS
136
Database systems created to test the viability of relational databases.
System R and Ingres
137
In the 1990s, _____ databases emerged for analytic processing and data mining.
parallel
138
The first personal computer.
Alto
139
A timesharing system that incorporated the good ideas from MULTICS and left out the more complex features.
UNIX
140
First timesharing system
CTSS
141
The operating system that Microsoft provided to IBM.
MS-DOS
142
Berkeley Software Distribution
BSD
143
Berkeley timesharing system that added paging virtual memory hardware to the SDS 920 computer and had a new operating system.
CAL TSS