Module 1 Flashcards
Introducing Parallel and Distributed Concepts in Digital Logic
_ involves processing instructions one at a time, using only a single processor, without distributing tasks across multiple processors.
Serial computing (or sequential computing)
_ was introduced as computer science evolved to address the slow speeds of serial computing.
Parallel computing
_ is a method in which parallel programming enables computers to run processes and perform calculations simultaneously.
Parallel processing
_ is a process where large computing problems are broken down into smaller problems that multiple processors can solve simultaneously.
Parallel computing
Also known as parallel programming
Multiple processors working simultaneously on different parts of a task.
Example: the UK Met Office’s new weather supercomputer
Real-world applications of parallel computing span diverse domains, from scientific simulations to big data analytics and high-performance computing.
Noted
Parallel computing architectures enable efficient processing and analysis of large datasets, sophisticated simulations, and complex computational tasks.
- Task Distribution: the supercomputer splits the whole grid into sub-grids (see the sketch after this list)
- Simultaneous Computation: thousands of processors work simultaneously on different parts of the grid, each computing data stored at a different location
- Communication between Processors: processors must communicate with each other because the weather in one part of the grid can affect the areas adjacent to it
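A minimal Python sketch of this split-and-compute pattern, assuming an invented 16-cell grid, a made-up "smoothing" step, and a 4-way split (not the Met Office’s actual model):

```python
from multiprocessing import Pool

def step(subgrid):
    # Illustrative "weather" update: average each cell with its
    # neighbours inside the sub-grid (boundary exchange omitted).
    n = len(subgrid)
    return [
        sum(subgrid[max(0, i - 1):i + 2]) / len(subgrid[max(0, i - 1):i + 2])
        for i in range(n)
    ]

if __name__ == "__main__":
    grid = list(range(16))                               # whole grid, flattened
    subgrids = [grid[i:i + 4] for i in range(0, 16, 4)]  # task distribution
    with Pool(4) as pool:                                # one worker per sub-grid
        results = pool.map(step, subgrids)               # simultaneous computation
    # A real model would also exchange boundary cells between workers here,
    # because weather in one sub-grid affects the adjacent sub-grids.
    print(results)
```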
_ consist of multiple processing units, or ‘cores,’ on a single integrated circuit (IC). This structure facilitates parallel computing, which enhances performance while potentially reducing power consumption.
Multicore processors
The need for higher performance, faster response times, increased functionality, and energy efficiency has never been more pressing.
With multiple cores, a system can perform multiple tasks at once.
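As a small illustration, Python’s standard library can report the core count and spread independent tasks across cores; the `task` function here is a made-up placeholder for CPU-bound work:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def task(n):
    # Placeholder CPU-bound work: sum the first n integers.
    return sum(range(n))

if __name__ == "__main__":
    print("cores available:", os.cpu_count())
    # Each task can run on a different core at the same time.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(task, [10**5, 10**6, 10**7]))
    print(results)
```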
Parallel Computing Benefits
- SPEED AND EFFICIENCY: allows tasks to be completed faster by dividing them into smaller sub-tasks that can be processed simultaneously by multiple processors or cores.
- HANDLING LARGE DATA SETS (Scalability): essential for processing large data sets that would be impractical or too slow to handle sequentially
- SOLVING COMPLEX PROBLEMS: some problems are too large or complex for a single processor; parallel computing tackles them by leveraging multiple processors.
- FAULT TOLERANCE: Parallel systems can be designed to be fault-tolerant, meaning they can continue to operate even if one or more processors fail. This improves the reliability and availability of the system.
PARALLEL COMPUTING IS A VERSATILE TOOL APPLIED IN MANY DIFFERENT AREAS OF INDUSTRY, INCLUDING:
- SCIENTIFIC SIMULATIONS: Parallel computing is required for complex simulations in fields such as physics, chemistry, and biology. (It enables researchers to model large-scale systems.)
- DATA ANALYSIS: In genomics, astronomy, and finance, parallel computing is necessary for the analysis of large data sets. (Faster processing of massive datasets enables researchers to extract valuable insights and make informed decisions.)
- MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE: Training a large machine learning model such as a neural network demands huge computational resources. (Parallel computing accelerates the training process, enabling the development of more advanced AI systems.)
Parallel computing plays a vital role in addressing complex problems and enabling advancements in various fields. It provides the computational power necessary for scientific research, data analysis, machine learning, high-performance computing, and other demanding applications
Noted
Named after the Hungarian mathematician John von Neumann
Von Neumann Architecture
Von Neumann Architecture was named after the Hungarian mathematician _
John von Neumann
A _ computer uses the stored-program concept.
von Neumann
The CPU executes a stored program that specifies a sequence of read and write operations on the memory.
The _ gets the instructions and/or data from the memory, decodes the instructions, and then sequentially performs them.
CPU
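A toy sketch of the stored-program concept: instructions and data live in the same memory, and the CPU loop fetches, decodes, and executes them sequentially. The 3-instruction mini language below is invented for illustration:

```python
# Memory holds both the program and its data (stored-program concept).
memory = [
    ("LOAD", 6),     # 0: acc = memory[6]
    ("ADD", 7),      # 1: acc += memory[7]
    ("STORE", 8),    # 2: memory[8] = acc
    ("HALT", None),  # 3: stop
    None, None,
    2, 3, 0,         # 6, 7: operands; 8: result
]

acc, pc = 0, 0               # accumulator and program counter
while True:
    op, addr = memory[pc]    # fetch
    pc += 1
    if op == "LOAD":         # decode + execute, one instruction at a time
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[8])  # 5
```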
One of the more widely used classifications, in use since 1966.
Flynn’s Classical Taxonomy
Distinguishes multi-processor computer architectures according to how they can be classified along two independent dimensions of instruction and data.
Flynn’s Classical Taxonomy
According to Flynn’s Classical Taxonomy, each dimension can have only one of two possible states: _ or _.
single or multiple
In Flynn’s Matrix Array, the matrix defines four classification:
SISD | SIMD
MISD | MIMD
Y = INSTRUCTION
X = DATA, Y = INSTRUCTION
SISD - Single Instruction, Single Data
SIMD - Single Instruction, Multiple Data
MISD - Multiple Instruction, Single Data
MIMD - Multiple Instruction, Multiple Data
- A single processor takes data from a single memory and performs a single instruction on the data at a time.
Flynn’s SISD
Pipelining can be implemented, but only one instruction will be executed at a time.
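As a minimal illustration, SISD behaviour is just an ordinary serial loop: one instruction stream applied to one data element at a time (the doubling operation is an arbitrary example):

```python
data = [1, 2, 3, 4]
result = []
for x in data:             # one data element at a time
    result.append(x * 2)   # one instruction stream
print(result)              # [2, 4, 6, 8]
```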
A single instruction is executed on multiple different pieces of data.
Flynn’s SIMD
Instructions can be performed sequentially, taking advantage of pipelining, or in parallel using multiple processors.
GPUs, which contain vector processors and array processors, are commonly SIMD systems.
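NumPy gives a convenient stand-in for the SIMD idea: one operation applied to many elements at once (on most builds the array arithmetic maps down to the CPU’s vector instructions, though that detail is platform-dependent; assumes NumPy is installed):

```python
import numpy as np

data = np.array([1, 2, 3, 4])
result = data * 2   # single instruction applied to multiple data elements
print(result)       # [2 4 6 8]
```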
Multiple processors work on the same data, performing different instructions at the same time.
Flynn’s MISD
Example: Space shuttle flight control system
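MISD hardware is rare; a rough sketch of the idea is redundant processing for fault tolerance, as in flight control: several units run different instruction streams over the same data stream and a voter picks the result. The three "units" below are invented for illustration:

```python
from collections import Counter

# Three independent "processors" compute a checksum of the SAME data
# in different ways; a voter picks the majority result.
def unit_a(data):
    return sum(data)

def unit_b(data):
    total = 0
    for x in data:
        total += x
    return total

def unit_c(data):
    return sum(data[::-1])

stream = [3, 1, 4, 1, 5]   # single data stream shared by all units
outputs = [u(stream) for u in (unit_a, unit_b, unit_c)]
value, votes = Counter(outputs).most_common(1)[0]
print(outputs, "-> voted result:", value)   # [14, 14, 14] -> 14
```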
Autonomous processors perform operations on different pieces of data, either independently or through shared memory.
Flynn’s MIMD
Several different instructions can be executed at the same time using different data streams.
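A small Python sketch of MIMD behaviour: autonomous worker processes execute different instruction streams on different data streams at the same time (the two worker functions and their inputs are made up for illustration):

```python
from multiprocessing import Process, Queue

def count_words(text, out):   # instruction stream 1, data stream 1
    out.put(("words", len(text.split())))

def sum_numbers(nums, out):   # instruction stream 2, data stream 2
    out.put(("sum", sum(nums)))

if __name__ == "__main__":
    out = Queue()
    procs = [
        Process(target=count_words, args=("flynn taxonomy of machines", out)),
        Process(target=sum_numbers, args=([1, 2, 3, 4], out)),
    ]
    for p in procs:           # both workers run concurrently
        p.start()
    for _ in procs:           # collect one result per worker
        print(out.get())
    for p in procs:
        p.join()
```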