15 - hardware Flashcards
CISC processors
Complex Instruction Set Computer
- uses more internal instruction formats
- carries out a task with as few lines of assembly code as possible
- the hardware must therefore be able to handle complex instructions
- complex instructions are converted by the processor into sub-instructions to carry out the operation
2 types of processors
RISC
CISC
RISC processors
Reduced Instruction Set Computer
- fewer built-in instruction formats
- uses less complex instructions - breaks the code up into a number of simple single-cycle instructions
- smaller, but more optimised instructions
CISC features
- many instruction formats
- more addressing modes
- multi cycle instructions
- variable length instructions
- longer execution time
- more complex decoding
- hard to pipeline
- emphasis on hardware
- uses the memory unit to allow complex instructions to be carried out
RISC features
- fewer instruction formats/sets
- fewer addressing modes
- single cycle instructions
- fixed length instructions
- faster execution time
- uses general multi-purpose registers
- easier to pipeline
- emphasis on software
- processor chips require fewer transistors
pipelining
allows several instructions to be processed simultaneously without waiting for the previous instructions to complete
- once instruction A has finished the first step of execution it moves to the second step while instruction B starts step 1, etc
- needs several registers to store the result of each stage
how is execution of an instruction split
- instruction fetch cycle
- instruction decode cycle
- operand fetch cycle
- instruction execution cycle
- writeback result process
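The five stages above can be sketched as a small scheduling simulation - a hypothetical Python example, assuming each stage takes exactly one clock cycle:

```python
# Minimal sketch of a 5-stage pipeline: each clock cycle every
# in-flight instruction advances one stage, so a new instruction
# can be fetched while earlier ones are still being executed.
STAGES = ["fetch", "decode", "operand fetch", "execute", "writeback"]

def pipeline_schedule(instructions):
    """Return {cycle: [(instruction, stage), ...]}."""
    schedule = {}
    for i, instr in enumerate(instructions):
        for s, stage in enumerate(STAGES):
            # instruction i enters stage s at cycle i + s
            schedule.setdefault(i + s, []).append((instr, stage))
    return schedule

sched = pipeline_schedule(["A", "B", "C"])
for cycle in sorted(sched):
    print(cycle, sched[cycle])
```

Three instructions complete in 7 cycles here, instead of the 15 a purely sequential machine would need.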
interrupts
- once the processor detects an interrupt
- the running program is stopped if the interrupt has higher priority
interrupts with pipelining
- when there is an interrupt there could be multiple instructions in the pipeline
- discard all instructions except the last one in the write back stage
- apply the interrupt handler routine to this instruction
- once done the processor restarts with the next instruction
OR - can store the contents of all the stages in registers, allowing the pipeline to be restored to its previous status
parallel processor systems
- an operation which allows a process to be split up and each part to be executed by a different processor at the same time
SISD
SIMD
MISD
MIMD
SISD
single instruction single data
one processor that can handle one instruction at a time and uses one data source
processes sequentially
doesn’t allow parallel processing
SIMD
single instruction multiple data
uses many processors
each processor executes the same instruction but on different data
- array processors
applications of SIMD
graphics cards - eg an image of 400 pixels - each processor can do the same thing to each of the different data (pixels) eg increase the brightness by the same amount at the same time
sound sampling
- alter a large number of items by the same amount
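The brightness example can be sketched in plain Python - a hedged illustration of the idea only, since a real SIMD unit would apply the operation to all elements in a single vector instruction:

```python
# SIMD idea: one instruction ("add 10"), many data items (pixels).
pixels = [100, 150, 200, 250]

def brighten(pixels, amount, maximum=255):
    # the same operation is applied to every data element,
    # clamped to the maximum brightness value
    return [min(p + amount, maximum) for p in pixels]

print(brighten(pixels, 10))  # → [110, 160, 210, 255]
```

Libraries such as NumPy expose this style directly: one array operation compiled down to vectorised hardware instructions.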
MISD
multiple instruction single data
uses several processors - each use a different instruction over the same data source
eg the American space shuttle control system, which ran multiple computers at the same time on the same data in case one or more of the processors failed
MIMD
multiple instruction multiple data
uses multiple processors
each one takes instructions independently and each processor can use a separate data source
- used in multicore systems
parallel processing
- processors need to be able to communicate - data needs to be transferred between processors
- software must be capable of processing data from multiple processors at once
- faster for large volumes of independent data - dependent data is not suitable
- overcomes the von neumann bottleneck - data is always moving between memory and processor leading to latency
- more expensive hardware
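Splitting independent data across workers can be sketched as follows - a hypothetical Python example where threads stand in for separate processors (for genuinely CPU-bound work you would use processes or separate cores):

```python
# Split independent data into chunks, process each chunk on its
# own worker, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # each worker handles its own independent slice of the data
    return sum(x * x for x in chunk)

data = list(range(100))
chunks = [data[i:i + 25] for i in range(0, 100, 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))

total = sum(partials)
print(total)  # same result as the sequential sum of squares
```

The chunks must be independent - if one chunk's result depended on another's, the workers would have to communicate and the speed-up would shrink.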
cluster
a number of computers, each containing SIMD processors, that are networked together
together they form a larger pseudo-parallel system, acting like a supercomputer
super computer
a powerful mainframe computer.
massively parallel computers
the linking together of a number of computers, effectively forming one machine with thousands of processors
- increases the processing power of a ‘single machine’ massively
- different to a cluster, where each computer remains independent
- each processor carries out part of processing and communicates via data pathways
de morgans laws
to split a NOT over an expression, apply the NOT to each term and change the operator between them from OR to AND, or vice versa - eg NOT(A OR B) = NOT A AND NOT B
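Both forms of the law can be checked exhaustively - a small Python sketch that tests every input combination:

```python
# Truth-table check of De Morgan's laws:
#   NOT (A OR B)  == (NOT A) AND (NOT B)
#   NOT (A AND B) == (NOT A) OR  (NOT B)
from itertools import product

for a, b in product([False, True], repeat=2):
    assert (not (a or b)) == ((not a) and (not b))
    assert (not (a and b)) == ((not a) or (not b))
print("De Morgan's laws hold for all inputs")
```

Because each law is an identity, the assertions pass for all four input combinations.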
half adder
does binary addition on 2 bits
outputs the sum and carry
formed of an XOR outputting the sum and an AND outputting the carry
can't add more bits (no carry-in)
full adder
does binary addition on multiple bits
joins half adders to achieve this
eg
to add A, B and a carry-in C, do a half adder on A and B
then another half adder on the sum of AB and C
- the sum of that is the final sum
- and OR the two carries to get the carry-out
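The gate structure above maps directly onto bitwise operators - a minimal Python sketch of both adders:

```python
# Half adder: XOR gives the sum, AND gives the carry.
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

# Full adder: two half adders joined; OR the two carries
# to get the carry-out.
def full_adder(a, b, c_in):
    s1, c1 = half_adder(a, b)      # first half adder on A and B
    s2, c2 = half_adder(s1, c_in)  # second on that sum and C
    return s2, c1 | c2             # (sum, carry-out)

print(full_adder(1, 1, 1))  # → (1, 1): 1 + 1 + 1 = binary 11
```

Chaining full adders, each taking the previous carry-out as its carry-in, gives a ripple-carry adder for whole binary numbers.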
combination circuits
output depends entirely on input eg half and full adders
sequential circuit
output depends on the current input and the previous output eg flip flop
SR flip flop
two cross coupled NAND or NOR gates with inputs S and R and outputs Q and Qnot
use of SR flip flop
as memory/ storage for one bit
problems with SR
invalid S, R combinations - lead to conflicting Q and NOT Q - need to be avoided
- if the outputs don't arrive at the same time the circuit can become unstable
JK flip flop
overcomes problems of SR
has a clock and additional gates to synchronise the two inputs and prevent illegal states
- when both 0 there is no change in output of Q
- if J and K have different values then Q takes the value of J
- if both = 1 then Q toggles (switches) after each clock pulse
use of JK
several can produce shift registers in a comp
simple binary counter can be made by linking JK circuits
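The three input cases above can be captured behaviourally - a hypothetical Python sketch (gate-level timing ignored, one call per clock pulse):

```python
# Behavioural sketch of a clocked JK flip flop:
#   J=K=0 -> hold Q,  J != K -> Q follows J,  J=K=1 -> toggle Q.
def jk_clock(q, j, k):
    if j == 0 and k == 0:
        return q      # no change in output
    if j == 1 and k == 1:
        return 1 - q  # toggle after the clock pulse
    return j          # Q takes the value of J

# With J=K=1 the output toggles every pulse - the basis of a
# binary counter built from linked JK flip flops.
q = 0
history = []
for _ in range(4):
    q = jk_clock(q, 1, 1)
    history.append(q)
print(history)  # → [1, 0, 1, 0]
```

The toggling output halves the clock frequency, which is why chaining JK stages produces a binary counter.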
sum of products
write all the places where the output is one using AND, OR, NOT
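As an illustration, here is a hypothetical two-input truth table turned into a sum-of-products expression in Python (one AND term per row where the output is 1, OR-ed together):

```python
# Hypothetical truth table: output is 1 for (A,B) = (0,1) and (1,1),
# so the sum of products is Z = NOT A.B + A.B.
truth_table = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}

def sop(a, b):
    # one product (AND) term per row with output 1, joined by OR
    return ((1 - a) & b) | (a & b)

for (a, b), z in truth_table.items():
    assert sop(a, b) == z
print("SOP expression matches the truth table")
```

This particular expression simplifies to just B, which is the kind of reduction a Karnaugh map makes easy to spot.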
karnaugh maps
always 00 01 11 10 - gray codes
- groups must contain a power-of-two number of 1s: 1, 2, 4, 8
- must be as large as possible
- may overlap
- the final expression considers the values that remain constant in the group
- groups can wrap around at the edges - the map joins like a cylinder
simplify Z + NOT Z.X
Z+X
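The identity can be confirmed by brute force over the truth table - a short Python check:

```python
# Verify the simplification Z + NOT Z.X == Z + X for all inputs.
from itertools import product

for z, x in product([0, 1], repeat=2):
    lhs = z | ((1 - z) & x)  # Z + NOT Z.X
    rhs = z | x              # Z + X
    assert lhs == rhs
print("Z + NOT Z.X == Z + X for all inputs")
```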
features of SISD, SIMD, MISD, MIMD
if it's single instruction - the instructions can be performed sequentially, taking advantage of pipelining
if multiple instruction - Each processor works independently
all except SISD - parallel computers with multiple processors