Module 6 Flashcards

1
Q

Note on Flynn’s Classification and its types

A

Flynn’s Classification - Detailed Explanation
Flynn’s taxonomy classifies computer architectures based on Instruction Streams and Data Streams:

Instruction Stream: Sequence of instructions executed by the processor.
Data Stream: Sequence of data elements being processed.
This classification leads to four architectures, as explained below:

1. Single Instruction, Single Data Stream (SISD)
Description:

  • A single processor fetches a single instruction and executes it on a single data element at a time.
  • Common in traditional sequential computers.

Characteristics:
- Single processor: Only one Processing Unit (PU).
- Single instruction stream: One instruction fetched and executed at a time.
- Single data stream: Operates on one data element at a time.
- Data stored in a single memory block.
- Example: Early single-core CPUs (e.g., Intel 8086).

Diagram:
An Instruction Pool sends a single instruction to the Processing Unit (PU), which processes data from a Data Pool.

2. Single Instruction, Multiple Data Streams (SIMD)

Description:
- A single instruction is applied to multiple data elements in parallel.
- Used for vector processing and data-parallel tasks (e.g., image and signal processing).

Characteristics:
- Single machine instruction: All processing units execute the same instruction simultaneously.
- Multiple data streams: Each processor works on a separate data set.
- Useful in scenarios where the same operation needs to be performed on a large data set.
- Example: GPUs (used in gaming and machine learning).

Diagram:
An Instruction Pool sends a single instruction to multiple Processing Units (PUs), each connected to a Data Pool with separate data streams.
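The lockstep behaviour above can be sketched in a few lines of Python (an illustrative model, not a real SIMD API; the names are hypothetical):

```python
# SIMD sketch: one instruction is broadcast to every processing unit,
# and each PU applies it to its own element of the data stream.

def simd_execute(instruction, data_streams):
    """Apply the same instruction to every data stream in lockstep."""
    return [instruction(x) for x in data_streams]

def add_ten(x):        # the single machine instruction, e.g. "add 10"
    return x + 10

# Four PUs, each with its own data element.
result = simd_execute(add_ten, [1, 2, 3, 4])
print(result)  # [11, 12, 13, 14]
```

Real SIMD hardware performs all four additions in a single instruction; the loop here only models the "same operation, separate data" pattern.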

3. Multiple Instruction, Single Data Stream (MISD)

Description:
- Multiple processors execute different instructions on the same data stream simultaneously.
- Rarely implemented in practice due to limited use cases.

Characteristics:
- Multiple instructions: Each processor executes its own instruction stream.
- Single data stream: A common data set is fed to all processors.
- Used in fault-tolerant systems, where the same data is processed redundantly to detect and correct errors.
- Example: Largely theoretical; no widespread practical implementation.

Diagram:
Multiple Instruction Pools feed instructions to Processing Units, all of which operate on the same Data Pool.
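The fault-tolerance use case mentioned above can be sketched as redundant units with a majority voter (a hypothetical model for illustration):

```python
from collections import Counter

# MISD-style redundancy sketch: three "processors" run different
# implementations of the same computation on one data stream, and a
# voter takes the majority result to mask a faulty unit.

def square_a(x): return x * x
def square_b(x): return x ** 2
def square_faulty(x): return x * x + 1   # simulated faulty unit

def vote(outputs):
    """Majority vote over the processors' outputs."""
    value, _count = Counter(outputs).most_common(1)[0]
    return value

data = 7                                  # the single data stream
outputs = [p(data) for p in (square_a, square_b, square_faulty)]
print(vote(outputs))  # 49 — the faulty unit's 50 is outvoted
```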

4. Multiple Instruction, Multiple Data Streams (MIMD)

Description:
- Multiple processors operate independently, executing different instructions on different data sets.
- Most common in modern computing systems.

Characteristics:
- Multiple instructions: Each processor has its own instruction stream.
- Multiple data streams: Each processor works on its own separate data set.
- Highly scalable and flexible, used in multiprocessing and distributed computing.
- Example: Multi-core processors, NUMA (Non-Uniform Memory Access) systems, and SMPs (Symmetric Multiprocessing Systems).

Diagram:
Each Instruction Pool feeds a separate instruction stream to a corresponding Processing Unit (PU), which operates on a separate Data Pool.
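The independence of instruction and data streams in MIMD can be sketched with two threads running different code on different data (hypothetical worker names; real MIMD systems run these on separate cores):

```python
import threading

# MIMD sketch: independent workers execute *different* instruction
# streams on *different* data sets concurrently.

results = {}

def summer(data):                  # instruction stream 1
    results["sum"] = sum(data)

def maxer(data):                   # instruction stream 2
    results["max"] = max(data)

t1 = threading.Thread(target=summer, args=([1, 2, 3],))
t2 = threading.Thread(target=maxer, args=([9, 4, 7],))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results.items()))  # [('max', 9), ('sum', 6)]
```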

2
Q

Pipelining and its Stages

A

Pipeline Processing
- Temporal overlapping of processing stages.
- The input task (process) is divided into sequential subtasks.
- Each subtask is executed by specialized hardware, operating concurrently with other stages in the pipeline.
- Benefit: Increases overall processing speed by executing multiple instructions in overlapping stages.

Key Idea:
- Next instructions can be fetched while the processor is performing current arithmetic/logic operations.
- These instructions are stored in a buffer near the processor until execution.

Goal: Ensure a continuous flow of instructions, minimizing idle time.

Six Stages of Instruction Pipelining

  1. Fetch Instruction (FI):
    Reads the next instruction from memory into a buffer.
  2. Decode Instruction (DI):
    Determines the operation (opcode) and the operands required.
  3. Calculate Operands (CO):
    Calculates effective memory addresses for the operands (if required).
  4. Fetch Operands (FO):
    Retrieves operands from memory or registers.
  5. Execute Instruction (EI):
    Performs the specified operation (e.g., arithmetic or logic).
  6. Write Operand (WO):
    Stores the result back into memory or a register.

Pipeline Overlap: While one instruction is in the execution stage, others can simultaneously be in fetch, decode, or operand calculation stages.
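The benefit of this overlap can be quantified: with k stages and n instructions, an ideal (hazard-free) pipeline finishes in k + (n − 1) cycles instead of k × n. A small sketch:

```python
# Ideal timing for the six-stage pipeline above (no hazards, no stalls).

STAGES = ["FI", "DI", "CO", "FO", "EI", "WO"]

def pipeline_cycles(n_instructions, k=len(STAGES)):
    # First instruction takes k cycles; each later one finishes
    # one cycle after the previous (the stages overlap).
    return k + (n_instructions - 1)

def sequential_cycles(n_instructions, k=len(STAGES)):
    # Without pipelining, every instruction takes all k cycles.
    return k * n_instructions

n = 10
print(pipeline_cycles(n))    # 15 cycles pipelined
print(sequential_cycles(n))  # 60 cycles sequential
```

As n grows, the speedup approaches k (here, 6×), which is why deeper pipelines promise higher throughput in the ideal case.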

3
Q

Pipelining Hazards

A

Pipeline Hazards
Pipeline hazards are situations that disrupt the smooth flow of instructions through a pipeline. They can cause delays or stalling in the execution of instructions. The major types of hazards include:

Resource Hazard (Structural Hazard)
- Occurs when multiple instructions compete for the same hardware resource.
- Commonly caused by memory access conflicts, especially if both data and instructions share the same memory.
- Solution: Use separate instruction and data caches to avoid conflicts.

Data Hazard
- Types:
– RAW (Read After Write): Example: A = B + C followed by D = A + E. The second instruction cannot fetch A until the first has written it.
– WAW (Write After Write): Two instructions write to the same register or memory location; the writes must complete in program order.
– WAR (Write After Read): An instruction writes to a location before an earlier instruction has finished reading it.
- Solution: Use pipeline scheduling or instruction reordering.
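A scheduler must first detect such dependencies; a minimal sketch (hypothetical instruction encoding as (destination, sources) tuples):

```python
# RAW-hazard detection sketch: a hazard exists when an instruction
# reads a register that a still-in-flight earlier instruction writes.

def find_raw_hazards(program, window=3):
    """Report (earlier, later) index pairs that conflict within
    `window` slots (operands are fetched before write-back completes)."""
    hazards = []
    for j, (_dest_j, srcs_j) in enumerate(program):
        for i in range(max(0, j - window), j):
            dest_i, _srcs_i = program[i]
            if dest_i in srcs_j:
                hazards.append((i, j))
    return hazards

# A = B + C ; D = A + E  → instruction 1 reads A before 0 writes it.
program = [("A", ("B", "C")),
           ("D", ("A", "E"))]
print(find_raw_hazards(program))  # [(0, 1)]
```

On a detected pair, the scheduler can stall the later instruction or try to reorder an independent instruction between them.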

Branch Hazard
- Caused by conditional or unconditional branching.
- Solutions:
– Pipeline Flush: Clear all pending instructions from the pipeline after a mispredicted branch.
– Delayed Branching: Reorder instructions so useful work fills the slots after the branch.
– Branch Prediction: Prediction mechanisms (e.g., branch prediction algorithms) guess which path the branch will take.
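One common prediction mechanism is a 2-bit saturating counter, sketched below (an illustrative model, not any specific processor's predictor):

```python
# 2-bit saturating-counter branch predictor sketch.
# States 0,1 predict "not taken"; states 2,3 predict "taken".
# Two consecutive mispredictions are needed to flip the prediction.

class TwoBitPredictor:
    def __init__(self):
        self.state = 0

    def predict(self):
        return self.state >= 2          # True means "predict taken"

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, True, True]     # actual branch behaviour
correct = 0
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
print(correct, "of", len(outcomes), "predicted correctly")  # 2 of 4
```

The first two predictions miss while the counter warms up; once saturated, a consistently taken branch is predicted correctly every time.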

4
Q

Principles of Designing Pipelined Processors

A

Proper Data Buffering
- Buffers are used to temporarily hold data between pipeline stages.
- Prevents data congestion and ensures smooth operations by allowing data to flow without bottlenecks.
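The buffering idea can be sketched with a bounded queue between two stages (a hypothetical model; real pipelines use hardware latches and FIFOs):

```python
from queue import Queue

# Inter-stage buffering sketch: a bounded queue decouples a fetch
# stage from an execute stage so short rate mismatches don't stall flow.

buffer = Queue(maxsize=4)              # the inter-stage buffer

fetched = [f"instr{i}" for i in range(4)]
for instr in fetched:                  # producer stage fills the buffer
    buffer.put(instr)

executed = []
while not buffer.empty():              # consumer stage drains it
    executed.append(buffer.get())

print(executed)  # ['instr0', 'instr1', 'instr2', 'instr3']
```

The bounded size matters: an unbounded buffer hides congestion instead of applying back-pressure to the producing stage.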

Instruction Dependence Relationship
- Analyze and address dependencies between instructions:
- Data dependencies (e.g., RAW, WAR, WAW).
- Resource dependencies (e.g., contention for the same functional units).

Logic Hazards Detection and Resolution
- Logic hazards occur when unexpected changes in signals create errors in computations.
- Pipelines should be designed to detect and mitigate these hazards (e.g., by adding synchronization points or hazard detection units).

Avoid Collisions and Structural Hazards
- Collisions happen when multiple instructions compete for the same resource (e.g., memory, ALU).
- Structural hazards can be avoided by proper sequencing of operations or adding additional resources (e.g., separate data and instruction caches).

Reconfiguration of Pipelines
- Pipelines should be flexible enough to adapt to changes in workload or operation type.
- Reconfiguration can improve performance for specific tasks (e.g., dynamic adjustment of pipeline depth or stages).
