final dump Flashcards

1
Q

UMA

A

all processors share the same physical memory
uniform (same) latency for all memory accesses
latency becomes large in a network with many processors and memory units
crossbar interconnect used to connect processors and memory

2
Q

NUMA

A

local memory for each processor
low latency when accessing local memory, higher latency when accessing remote memory

3
Q

temporal locality

A

When an information item (instruction or data) is first needed, it should be brought into the cache and kept there, as it is likely to be needed again soon

4
Q

spatial locality

A

Most time in a program is spent looping through the same block of instructions; it is therefore useful to fetch several items located at adjacent addresses, as they are likely to be used together
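The two locality cards above can be illustrated with a toy cache-block model (a made-up sketch, not a model of any real cache; `BLOCK_SIZE` and `hit_rate` are invented names):

```python
# Toy sketch: with 4-word cache blocks, a sequential scan misses only once
# per block fetched, so spatial locality alone gives a 75% hit rate here.
BLOCK_SIZE = 4  # words per cache block (assumed for illustration)

def hit_rate(addresses, block_size=BLOCK_SIZE):
    cached_blocks = set()  # unbounded cache, no eviction: a simplification
    hits = 0
    for addr in addresses:
        block = addr // block_size  # adjacent addresses share a block
        if block in cached_blocks:
            hits += 1
        else:
            cached_blocks.add(block)  # miss: fetch the whole block
    return hits / len(addresses)

sequential = list(range(100))     # loop over adjacent addresses
strided = list(range(0, 400, 4))  # exactly one access per block
print(hit_rate(sequential))  # 0.75 - one miss per 4-word block
print(hit_rate(strided))     # 0.0  - no spatial locality exploited
```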

5
Q

fully associative cache

A

any memory block can be placed anywhere in the cache
+ very flexible
- slow to search

6
Q

direct mapping

A

each memory block mapped to a specific cache block
+ fast to search
- inflexible

7
Q

n-way set-associative

A

each memory block maps to one set and can be placed in any of the n cache blocks of that set
+/- reasonably fast to search
+/- reasonably flexible
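The three mapping schemes differ only in how an address is split into tag, set index, and block offset. A sketch of that split (`split_address` and the example sizes are invented for illustration):

```python
# Sketch: decompose a memory address for an n-way set-associative cache.
# Direct mapping is the special case where each set holds 1 block;
# fully associative is the special case of a single set holding all blocks.
def split_address(addr, num_sets, block_size):
    offset = addr % block_size                   # byte within the block
    set_index = (addr // block_size) % num_sets  # which set it maps to
    tag = addr // (block_size * num_sets)        # identifies the block
    return tag, set_index, offset

# e.g. a cache with 8 sets of 16-byte blocks:
print(split_address(0x1234, num_sets=8, block_size=16))  # (36, 3, 4)
```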

8
Q

LRU

A

cache keeps time stamps of accesses
replace least recently used block
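The time-stamp idea above can be sketched with Python's `OrderedDict` standing in for the stamps (an illustrative model, not a hardware implementation; the `LRUCache` class is invented):

```python
from collections import OrderedDict

# LRU replacement sketch: the dict keeps blocks in access order, so the
# front entry is always the least recently used and is evicted when full.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block, data=None):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # hit: refresh "time stamp"
            return self.blocks[block]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = data            # miss: load the block
        return data

cache = LRUCache(capacity=2)
cache.access("A", 1)
cache.access("B", 2)
cache.access("A")          # A becomes most recently used
cache.access("C", 3)       # cache full: B (least recently used) is evicted
print(list(cache.blocks))  # ['A', 'C']
```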

9
Q

DMA

A

process of transferring data between main memory and a hardware subsystem (i/o) without involving the processor

10
Q

cycle stealing

A

a method DMA applies to avoid competition on the memory bus between the CPU and the DMA engine

11
Q

DMA +/-

A

+ delivers high bulk data performance
+ frees the CPU from doing bulk data transfers from device -> memory
- can interfere with CPU memory access
- requires extra intelligence (a standalone chip) in devices to access memory

12
Q

DMA stages

A
  1. CPU programs the DMA engine of the storage device
  2. internal processing
  3. DMA transfer from the device
  4. interrupt to the CPU
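The four stages above can be walked through in a toy simulation (pure illustration; the `DMAEngine` class and its method names are invented, not a real device interface):

```python
# Toy model of the DMA stages: the CPU only programs the engine; the copy
# into main memory and the completion interrupt happen without it.
class DMAEngine:
    def __init__(self, device_data, main_memory):
        self.device_data = device_data
        self.memory = main_memory
        self.interrupt_raised = False

    def program(self, dest, length):     # 1. CPU programs the DMA engine
        self.dest, self.length = dest, length

    def run(self):
        chunk = self.device_data[:self.length]                  # 2. internal processing
        self.memory[self.dest:self.dest + self.length] = chunk  # 3. DMA transfer
        self.interrupt_raised = True                            # 4. interrupt the CPU

memory = [0] * 8
dma = DMAEngine(device_data=[10, 20, 30], main_memory=memory)
dma.program(dest=2, length=3)  # CPU sets up the transfer, then continues
dma.run()                      # transfer completes without CPU involvement
print(memory, dma.interrupt_raised)  # [0, 0, 10, 20, 30, 0, 0, 0] True
```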
13
Q

memory mapped io

A

uses the same address space to address both memory and i/o devices
a memory address may refer to a portion of RAM or to the registers of an i/o device

14
Q

port mapped io

A

io devices mapped to a separate address space
-> requires a different set of signal lines to indicate a memory access vs a port access

15
Q

memory mapped io +/-

A

+ requires less internal logic -> cheaper
+ easier to build, faster, consumes less power and can be physically smaller
- address bus must be fully decoded for every device

16
Q

port mapped io +/-

A

+ less logic needed to decode a discrete address, therefore cheaper to add hardware to the machine
- more instructions required to accomplish the same task

17
Q

RISC

A

reduced instruction set computer
fewer and simpler instructions
emphasis on software
larger code, fewer cycles per I
transistors used for registers rather than for complex instructions
more power efficient
single-word I
increases I per program, reduces CPI

18
Q

CISC

A

complex instruction set computer
lots of complex instructions
memory-to-memory operations
multiple word I
instructions may take multiple clock cycles
more energy hungry
increases CPI, reduces I per program

19
Q

ISA trade-offs

A

complexity, performance, energy use, security

20
Q

bubble

A

one clock cycle of idle time

21
Q

What recognises data dependencies?

22
Q

superscalar processors

A

CPU that implements a form of parallelism called instruction-level parallelism within a single processor
increased throughput
contain multiple execution units

23
Q

superscalar vs pipelining

A

superscalar: multiple I executed in parallel using multiple execution units
<->
pipelining: multiple I executed in parallel by the same units, with execution divided into phases

24
Q

crossbar

A

network that provides a direct link between any pair of units connected to the network
-> used in UMA to connect processors and memory
enables simultaneous transfers as long as no two requests target the same unit

25
Q

switches formula

A

for n processors and k memory units, n * k switches are needed

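The switch-count formula as code (`crossbar_switches` is an invented name): a crossbar needs one switch at every (processor, memory) crossing.

```python
# One switch per crossing of a processor line and a memory line: n * k.
def crossbar_switches(n_processors, k_memories):
    crossings = [(p, m) for p in range(n_processors)
                        for m in range(k_memories)]
    return len(crossings)

print(crossbar_switches(4, 2))  # 8
```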
26
Q

MDR

A

CPU register that contains the data to be stored in main memory, or the data fetched from memory

27
Q

MAR

A

CPU register that stores the memory address of data to be fetched into the CPU, or the address to which data will be written

28
Q

given a memory cell of size x^a * z^b with y^c data pins

A

a = column bits, b - c = row bits

29
Q

flynn taxonomy

A

categorisation of forms of parallel computer architectures

30
Q

SIMD

A

single instruction, multiple data
enables processing of multiple data items with a single instruction
ex: retrieving multiple files at the same time, GPUs, vector computers, single-core superscalar processors

31
Q

MIMD

A

employed to achieve parallelism
machines that use it have many asynchronous and independent processors
ex: multiprocessors

32
Q

SISD

A

one instruction on a single data stream
ex: single-core CPU with pipelining, traditional Von Neumann

33
Q

MISD

A

multiple instructions on a single data stream

34
Q

SIMD <-> MIMD

A

SIMD: simpler, smaller, faster; operations can take longer
MIMD: capable of more complex operations, can perform complex operations concurrently

35
Q

arbitration

A

process of resolving conflicts that arise when multiple devices attempt to access the bus at the same time
resolved using addresses, a first-come-first-served basis, or daisy chaining

36
Q

arbiter circuit

A

receives bus requests and processes them (granted/denied)
selects one request for bus access

37
Q

interrupts

A

a request for the processor to interrupt the currently executing code so that an event can be processed

38
Q

interrupt service routine/handler (ISR)

A

function executed when an interrupt occurs

39
Q

interrupt cycle

A
  1. device raises interrupt request
  2. processor interrupts program execution
  3. interrupts are disabled
  4. device lowers interrupt request
  5. interrupt handled by interrupt handler
  6. interrupts are enabled
  7. execution of interrupted program is resumed

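The interrupt cycle can be traced in a toy simulation (pure illustration; the `cpu_step`/`isr` functions and the `device` dict are invented, not a real CPU interface):

```python
# Toy walk-through of the interrupt cycle: each log entry matches a stage.
log = []

def isr(device):
    device["irq"] = False               # 4. device lowers its request
    log.append("handler runs")          # 5. handler services the device

def cpu_step(device, interrupts_enabled=True):
    if interrupts_enabled and device["irq"]:   # 1. device raised a request
        log.append("program interrupted")      # 2. execution interrupted
        log.append("interrupts disabled")      # 3. disable interrupts
        isr(device)
        log.append("interrupts enabled")       # 6. re-enable interrupts
        log.append("program resumed")          # 7. resume the program
    else:
        log.append("program runs")

device = {"irq": True}
cpu_step(device)
print(log[-1], device["irq"])  # program resumed False
```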
40
Q

thread

A

path of execution within a process
context: program counter, registers of the process

41
Q

process

A

program (instructions) + state (data)