10 - Architecture and Parallelism Flashcards
How can you create an ALU?
By integrating a full adder, a 2's complementer, a shifter, and a comparator.
There is one unit per bit, each with its own carry-in, carry-out, decoder, and logic circuitry.
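A minimal sketch of the adder part in Python (a toy model, not actual hardware; function names are made up) showing how one full-adder bit slice chains its carry-out into the next slice's carry-in:

```python
# Hypothetical Python model of a 1-bit full adder and a ripple-carry chain of slices.
def full_adder(a, b, carry_in):
    """Return (sum, carry_out) for one bit position."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(a_bits, b_bits):
    """Add two little-endian bit lists by feeding each carry_out into the next carry_in."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 3 + 5 = 8 with 4-bit operands (little-endian bit lists)
print(ripple_add([1, 1, 0, 0], [1, 0, 1, 0]))  # ([0, 0, 0, 1], 0)
```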
What does the internal bus do?
It connects the control unit, ALU, registers, addressing unit, etc., so they can transfer data among themselves.
The speed of the entire system depends on bus width (the number of bits that can transfer simultaneously) and bus length (the motivation for miniaturizing computers).
Bus arbitration - the problem is that only one set of signals can be sent per clock cycle (e.g., a register needs to transfer something to the ALU at the same time as a data transfer to a general register). The bus arbitration system decides which goes first.
Also, there may be multiple buses over which the transfers can travel.
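A toy sketch of one possible arbitration scheme, fixed priority, in Python (the unit names and the ranking are invented for illustration):

```python
# Hypothetical fixed-priority bus arbiter: the lowest rank wins the bus this cycle.
PRIORITY = {"dma": 0, "cpu_register_to_alu": 1, "memory_to_register": 2}

def arbitrate(requests):
    """Given the set of units requesting the bus, grant it to the highest-priority one."""
    if not requests:
        return None
    return min(requests, key=lambda unit: PRIORITY[unit])

print(arbitrate({"cpu_register_to_alu", "memory_to_register"}))  # cpu_register_to_alu
```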
How is memory made up of gates?
Gates combine to make switches, which combine to make memory cells, which are then combined and integrated to make memory chips.
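A sketch of the gates-to-memory-cell step: two cross-coupled NOR gates form an SR latch that remembers one bit (a simplified Python model of the feedback loop):

```python
# Hypothetical model of an SR latch built from two cross-coupled NOR gates.
def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q_prev):
    """Iterate the cross-coupled NOR feedback a few times until the output settles."""
    q, q_bar = q_prev, 1 - q_prev
    for _ in range(4):
        q_bar = nor(s, q)
        q = nor(r, q_bar)
    return q

q = 0
q = sr_latch(1, 0, q)   # set   -> q = 1
q = sr_latch(0, 0, q)   # hold  -> q stays 1 (the "memory" behavior)
q = sr_latch(0, 1, q)   # reset -> q = 0
print(q)
```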
What are the two major types of memory?
- RAM (Random access memory) - programs can access and manipulate memory cells while the computer is running.
- - This can be addressed by machine instructions through the memory address register, manipulated through the data register, etc.
- ROM (read-only memory) - cannot be changed while the computer is running.
- - Ordinarily burned into a single configuration (e.g. bootup).
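A small sketch of RAM accessed through a memory address register and data register, with a ROM region that rejects writes (the class and its layout are invented for illustration):

```python
# Hypothetical memory model: RAM addressed via MAR/MDR, ROM region is read-only.
class Memory:
    def __init__(self, size, rom_end):
        self.cells = [0] * size
        self.rom_end = rom_end      # addresses below rom_end are burned-in ROM
        self.mar = 0                # memory address register
        self.mdr = 0                # memory data register

    def read(self, address):
        self.mar = address
        self.mdr = self.cells[self.mar]
        return self.mdr

    def write(self, address, value):
        self.mar, self.mdr = address, value
        if self.mar < self.rom_end:
            raise PermissionError("ROM cannot be changed while the computer is running")
        self.cells[self.mar] = self.mdr

mem = Memory(size=256, rom_end=16)
mem.write(32, 99)
print(mem.read(32))  # 99
```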
How quick is a clock cycle?
The computer can transition to a new “state” at every tick of the system clock; signals travel between components at near light speed.
Clock cycle length largely determines CPU speed. However, the minimum clock cycle length depends on the distance between components (signals need time to propagate).
How has CISC architecture been improved?
- more efficient microprograms
- more powerful ISA level instructions
- cache memory
- more registers
- wider buses
- making it smaller
- more processors
- floating point instructions
What are the limitations of improving CISC?
Improving a specific architecture requires instructions to be backward compatible.
However, the improvements you can make come at the expense of backward compatibility (some companies have built in both the old and the new, which is not an improvement, just a transition).
What is RISC?
Reduced instruction set computer. RISC instructions are similar to CISC micro-instructions.
There’s a much smaller set of instructions at the ISA level because there’s no need to go through microprogram decoding.
For example, smartphones use RISC. Even though the programs look much longer, they execute faster (because the hardware can work on several instructions at once). RISC architecture is generally used in embedded systems so that programs execute much faster.
What are the major RISC design principles?
- Instructions are executed directly by the hardware (no microprograms)
- Instruction cache to maximize the rate of fetching instructions (a separate fetch unit, often with its own cache).
- Instructions are easy to decode.
- Only two instructions reference memory (LOAD and STORE); see the sketch after this list.
- Plenty of registers.
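A toy sketch of the load/store principle, using a made-up Python "machine": only LOAD and STORE touch memory, and arithmetic works purely on registers:

```python
# Toy load/store machine: memory is reached only through LOAD and STORE;
# ADD operates on registers alone.
memory = {"x": 7, "y": 5, "z": 0}
regs = [0] * 8

def LOAD(r, addr):   regs[r] = memory[addr]
def STORE(r, addr):  memory[addr] = regs[r]
def ADD(rd, ra, rb): regs[rd] = regs[ra] + regs[rb]

# z = x + y expressed in RISC style
LOAD(0, "x")
LOAD(1, "y")
ADD(2, 0, 1)
STORE(2, "z")
print(memory["z"])   # 12
```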
How is speed generally improving now?
- Try to minimize memory and I/O accesses
- - Cache
- - Separate I/O unit (buffers/processing)
- - Separate network communication unit (NIC)
- Parallel processing
What are the two types of parallelism?
- Instruction-level parallelism
- - pipeline
- - cache
- Processor-level parallelism
- - multiprocessor (multiple CPUs, common memory)
- - multicomputer (multiple CPUs, each with own memory)
What is pipelining?
The hardware provides separate units, each responsible for one part (stage) of instruction execution; as soon as a unit finishes its stage for one instruction, it starts on the next instruction while the later stages keep working on the earlier ones.
- U-1 - instruction fetch
- U-2 - instruction decode
- U-3 - operand fetch
- U-4 - instruction execute
- U-5 - operand store
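A sketch of the five-unit pipeline as a timing diagram, assuming no stalls or hazards, showing which instruction occupies each unit on each cycle:

```python
# Hypothetical 5-stage pipeline timing diagram (no hazards or stalls modeled).
STAGES = ["fetch", "decode", "operand fetch", "execute", "operand store"]
instructions = ["I1", "I2", "I3", "I4"]

total_cycles = len(instructions) + len(STAGES) - 1
for cycle in range(total_cycles):
    occupancy = []
    for stage_index, stage in enumerate(STAGES):
        instr_index = cycle - stage_index
        if 0 <= instr_index < len(instructions):
            occupancy.append(f"{stage}:{instructions[instr_index]}")
    print(f"cycle {cycle + 1}: " + ", ".join(occupancy))
# Once the pipeline is full, one instruction completes every cycle.
```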
What is instruction caching?
The hardware provides area for multiple instructions in the CPU.
- reduces number of memory accesses
- instructions available for immediate execution
- might cause problems with decision, repetition, and procedure structures in programs
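A sketch of a tiny direct-mapped instruction cache, counting how many memory accesses a repeated loop actually needs (sizes and instruction names are illustrative):

```python
# Hypothetical direct-mapped instruction cache with 4 lines.
CACHE_LINES = 4
cache = {}                    # line index -> (tag, instruction)
memory_accesses = 0

def fetch(pc, program):
    """Return the instruction at pc, going to main memory only on a cache miss."""
    global memory_accesses
    line, tag = pc % CACHE_LINES, pc // CACHE_LINES
    if cache.get(line, (None, None))[0] != tag:
        memory_accesses += 1              # miss: fetch from main memory
        cache[line] = (tag, program[pc])
    return cache[line][1]

program = ["LOAD", "ADD", "STORE", "JUMP 0"]
for _ in range(10):                        # a loop body executed 10 times
    for pc in range(len(program)):
        fetch(pc, program)
print(memory_accesses)                     # 4 misses instead of 40 memory accesses
```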
What is multiprocessor parallelism?
Multiple processors all accessing the same shared memory (the jobs can be split among all the processors).
One way they are managed is to have a master processor direct the others. Another way is to have them communicate with each other.
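A sketch of the shared-memory idea using Python threads as stand-ins for processors: every "processor" sees the same array and sums its own slice of it:

```python
# Hypothetical shared-memory split: each "processor" (thread) sums its slice
# of one array that all of them can see.
import threading

data = list(range(1_000))          # the shared memory
partial_sums = [0, 0, 0, 0]        # one result slot per processor

def worker(processor_id, num_processors):
    chunk = data[processor_id::num_processors]     # this processor's share of the job
    partial_sums[processor_id] = sum(chunk)

threads = [threading.Thread(target=worker, args=(i, 4)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(partial_sums))           # 499500, the same as sum(data)
```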
What is multicomputer parallelism?
Each of the processors has its own memory and communicates with the others through an interconnection network (the job is split up and assigned to those that have their own memory, etc.).
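A sketch of the multicomputer idea using Python processes as stand-ins for nodes: each node works on data in its own memory and sends its result back over a queue standing in for the interconnection network:

```python
# Hypothetical message-passing split: each "computer" (process) has private
# memory and sends its result back over a queue (the interconnection network).
from multiprocessing import Process, Queue

def node(node_id, numbers, results):
    local_sum = sum(numbers)       # computed entirely in this node's own memory
    results.put((node_id, local_sum))

if __name__ == "__main__":
    results = Queue()
    chunks = [list(range(i, 1_000, 4)) for i in range(4)]
    procs = [Process(target=node, args=(i, chunks[i], results)) for i in range(4)]
    for p in procs: p.start()
    total = sum(results.get()[1] for _ in range(4))
    for p in procs: p.join()
    print(total)                   # 499500, the same as summing everything on one node
```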