5. Computing Systems Flashcards
Embedded Systems
Computers that are designed to perform a narrow range of functions as part of a larger system
Often located on a single microprocessor chip, with the programs stored in ROM
Bit-level Parallelism
Achieved by increasing the processor's WORD SIZE
→ reduces the number of instructions the processor must execute to operate on data wider than one word
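A minimal sketch of this idea in Python (the byte values are arbitrary illustration data): packing eight bytes into one 64-bit word lets a single XOR replace eight byte-sized XORs, so the wider word size cuts the operation count by 8×.

```python
# Illustration data: two buffers of 64 arbitrary bytes
data_a = bytes(range(64))
data_b = bytes(reversed(range(64)))

# 8-bit "word": one XOR per byte -> 64 operations
out_8bit = bytes(a ^ b for a, b in zip(data_a, data_b))

# 64-bit word: pack 8 bytes into one integer, so one XOR covers 8 bytes -> 8 operations
out_64bit = bytearray()
for i in range(0, len(data_a), 8):
    word_a = int.from_bytes(data_a[i:i + 8], "little")
    word_b = int.from_bytes(data_b[i:i + 8], "little")
    out_64bit += (word_a ^ word_b).to_bytes(8, "little")

assert bytes(out_64bit) == out_8bit  # same result, 1/8 the XOR operations
```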
Instruction-level parallelism
Simultaneous execution of a sequence of instructions in a program (possible because some instructions can be carried out independently)
Techniques that are used to achieve instruction-level parallelism
1) Instruction pipelining: execution of multiple instructions can be partially overlapped
2) Superscalar execution: Multiple execution units are used to execute multiple instructions in parallel during a clock cycle
3) Out-of-order execution: Instructions execute in any order that does not violate data dependencies (standard in modern high-performance CPUs)
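The benefit of pipelining (technique 1 above) can be shown with simple cycle-count arithmetic. This is a sketch that assumes an idealized pipeline with no stalls or hazards:

```python
PIPELINE_STAGES = 5   # e.g. fetch, decode, execute, memory, write-back
N_INSTRUCTIONS = 10

def cycles_sequential(n_instr, n_stages):
    # No overlap: each instruction occupies the whole processor in turn
    return n_instr * n_stages

def cycles_pipelined(n_instr, n_stages):
    # Fill the pipeline once, then one instruction completes per cycle
    return n_stages + (n_instr - 1)

print(cycles_sequential(N_INSTRUCTIONS, PIPELINE_STAGES))  # 50 cycles
print(cycles_pipelined(N_INSTRUCTIONS, PIPELINE_STAGES))   # 14 cycles
```

As the instruction count grows, the pipelined throughput approaches one instruction per cycle, regardless of pipeline depth.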
Data-level Parallelism
Single set of instructions run on multiple data sets at the same time
→ effective when the same process needs to be applied to multiple datasets
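A minimal Python sketch of the pattern: one function (the single set of instructions) is mapped over several data chunks at once. Note that CPython threads interleave rather than truly run CPU-bound work in parallel (because of the GIL); for real multi-core execution you would swap in `ProcessPoolExecutor`.

```python
from concurrent.futures import ThreadPoolExecutor

def normalize(chunk):
    # The SAME instructions applied to each chunk of data
    lo, hi = min(chunk), max(chunk)
    return [(x - lo) / (hi - lo) for x in chunk]

# Multiple independent datasets (arbitrary illustration values)
chunks = [[1, 2, 3], [10, 20, 30], [5, 50, 95]]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(normalize, chunks))

print(results)  # each chunk scaled independently to [0, 1]
```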
Task-level parallelism (function parallelism)
Running many different tasks at the same time on the same data
→ focus on distributing tasks concurrently (each task can run on a different processor)
Emphasizes the distributed nature of independent tasks
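The contrast with data parallelism can be sketched as follows: here several DIFFERENT tasks run concurrently over the SAME data. The same GIL caveat as above applies; this illustrates the structure, not multi-core speedup.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 11))  # one shared dataset

# Three different tasks, each an independent unit of work
def total(xs):   return sum(xs)
def largest(xs): return max(xs)
def mean(xs):    return sum(xs) / len(xs)

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(task, data) for task in (total, largest, mean)]
    results = [f.result() for f in futures]

print(results)  # [55, 10, 5.5]
```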