Module 1 Flashcards

Introducing Parallel and Distributed Concepts in Digital Logic

1
Q

_ involves processing instructions one at a time, using only a single processor, without distributing tasks across multiple processors.

A

Serial computing

or sequential computing

2
Q

_ was introduced as computer science evolved to address the slow speeds of serial computing.

A

Parallel computing

3
Q

_ is a method where parallel programming enables computers to run processes and perform calculations simultaneously.

A

Parallel processing

4
Q

_ is a process where large computing problems are broken down into smaller problems that multiple processors can solve simultaneously.

A

Parallel computing

Also known as parallel programming

Multiple processors working simultaneously on different parts of a task.

Example: the UK Met Office’s new weather supercomputer
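The card’s idea can be sketched in Python (illustrative helper names, not the Met Office’s actual code): one large summation is split into chunks that several worker processes solve at the same time, and the partial results are combined.

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    """Solve one sub-problem: sum a slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Split `data` into chunks and sum them simultaneously."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=workers) as pool:
        partials = pool.map(chunk_sum, chunks)  # sub-tasks run in parallel
    return sum(partials)                        # combine partial results
```

Calling `parallel_sum(list(range(100)), workers=2)` splits the data into two chunks and sums each in a separate process.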

5
Q

Real-world applications of parallel computing span diverse domains, from scientific simulations to big data analytics and high-performance computing.

A

Noted

6
Q

Parallel computing architectures enable efficient processing and analysis of large datasets, sophisticated simulations, and complex computational tasks.

A
  • Task Distribution: Parallel computing is the process by which a supercomputer can split the whole grid into sub-grids
  • Simultaneous Computation: Thousands of processors work simultaneously on different parts of the grid, calculating data that is stored at different locations
  • Communication between Processors: The main reason for processors to communicate with each other is the fact that the weather for one part of the grid can have an impact on the areas adjacent to it
7
Q

_ consist of multiple processing units, or ‘cores,’ on a single integrated circuit (IC). This structure facilitates parallel computing, which enhances performance while potentially reducing power consumption.

A

Multicore processors

The need for higher performance, faster response times, increased functionality, and energy efficiency has never been more pressing.

With multiple cores, a system can perform multiple tasks at once.
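A minimal Python sketch of putting every available core to work (function names are illustrative): `os.cpu_count()` reports the core count, and a process pool spreads tasks across that many workers.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def square(n):
    """One small task to run on a core."""
    return n * n

def run_on_cores(values):
    """Map `square` over `values` using one worker per available core."""
    cores = os.cpu_count() or 1  # number of cores on this machine
    with ProcessPoolExecutor(max_workers=cores) as ex:
        return list(ex.map(square, values))
```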

8
Q

Parallel Computing Benefits

A
  1. SPEED AND EFFICIENCY: allows tasks to be completed faster by dividing them into smaller sub-tasks that can be processed simultaneously by multiple processors or cores.
  2. HANDLING LARGE DATA SETS (Scalability): essential for processing large data sets that would be impractical or too slow to handle sequentially
  3. SOLVING COMPLEX PROBLEMS: allows for the tackling of such problems by leveraging multiple processors.
  4. FAULT TOLERANCE: Parallel systems can be designed to be fault-tolerant, meaning they can continue to operate even if one or more processors fail. This improves the reliability and availability of the system.
9
Q

PARALLEL COMPUTING IS A VERSATILE TOOL APPLIED IN MANY DIFFERENT AREAS OF INDUSTRY, INCLUDING:

A
  1. SCIENTIFIC SIMULATIONS: Parallel computing is required for complex simulations in fields such as physics, chemistry, and biology. (It enables researchers to model large-scale systems.)
  2. DATA ANALYSIS: In genomics, astronomy, and finance, parallel computing is necessary for the analysis of large data sets. (It enables faster processing of massive datasets, letting researchers extract valuable insights and make informed decisions.)
  3. MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE: Training a large machine learning model such as a neural network requires enormous computational resources. (Parallel computing accelerates the training process, enabling the development of more advanced AI systems.)
10
Q

Parallel computing plays a vital role in addressing complex problems and enabling advancements in various fields. It provides the computational power necessary for scientific research, data analysis, machine learning, high-performance computing, and other demanding applications

A

Noted

11
Q

Named after the Hungarian
mathematician John von Neumann

A

Von Neumann Architecture

12
Q

The von Neumann architecture was named after the Hungarian mathematician _

A

John von Neumann

13
Q

A _ computer uses the
stored-program concept

A

von Neumann

The CPU executes a stored program that specifies a sequence of read and write operations on memory.

14
Q

The _ gets the instructions and/or data from the memory, decodes the instructions, and then performs them sequentially.

A

CPU

15
Q

One of the more widely used
classifications, in use since 1966.

A

Flynn’s Classical Taxonomy

16
Q

Distinguishes multi-processor computer architectures according to how they can be classified along two independent dimensions of instruction and data.

A

Flynn’s Classical Taxonomy

17
Q

According to Flynn’s Classical Taxonomy, each dimension can have only one of two possible states: _ or _.

A

single or multiple

18
Q

In Flynn’s Matrix Array, the matrix defines four classifications:

A

SISD | SIMD
MISD | MIMD

(X = DATA, Y = INSTRUCTION)

SISD - Single Instruction, Single Data
SIMD - Single Instruction, Multiple Data
MISD - Multiple Instruction, Single Data
MIMD - Multiple Instruction, Multiple Data

19
Q
  • A single processor takes data from a single memory address space and performs a single instruction on the data at a time.
A

Flynn’s SISD

Pipelining can be implemented, but only one instruction will be executed at a time.
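A plain sequential loop models the SISD idea (a hypothetical example, not from the deck): one processor, one instruction stream, one data element at a time.

```python
def sisd_sum(data):
    """Sum `data` the SISD way: one instruction on one datum at a time."""
    total = 0
    for x in data:   # each addition executes by itself, in order
        total += x
    return total
```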

20
Q

A single instruction is executed on multiple different pieces of data.

A

Flynn’s SIMD

Instructions can be performed sequentially, taking advantage of pipelining, or in parallel using multiple processors.

GPUs, which contain vector processors and array processors, are commonly SIMD systems.
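As a conceptual sketch in pure Python (this only models the programming pattern; real SIMD hardware such as GPU lanes or CPU vector units applies the instruction to all elements in lock-step):

```python
def simd_add(vector, scalar):
    """Apply one instruction (add `scalar`) across many data elements."""
    # single instruction, multiple data: the same operation for every element
    return [x + scalar for x in vector]
```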

21
Q

Multiple processors work on the same data performing different instructions at the same time.

A

Flynn’s MISD

Example: Space shuttle flight control system

22
Q

Autonomous processors perform operations on different pieces of data, either independently or as part of a shared memory space.

A

Flynn’s MIMD

Several different instructions can be executed at the same time using different instruction streams.
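A small Python sketch of the MIMD pattern (illustrative function names): two autonomous workers execute different instruction streams on different data at the same time.

```python
from concurrent.futures import ProcessPoolExecutor

def count_words(text):
    """First instruction stream: operates on text data."""
    return len(text.split())

def sum_squares(nums):
    """Second instruction stream: operates on numeric data."""
    return sum(n * n for n in nums)

def run_mimd():
    """Run both instruction streams concurrently in separate processes."""
    with ProcessPoolExecutor(max_workers=2) as ex:
        f1 = ex.submit(count_words, "multiple instruction multiple data")
        f2 = ex.submit(sum_squares, [1, 2, 3])
        return f1.result(), f2.result()
```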

23
Q

Main reasons for using parallel programming

A
  1. Save time and/or money
  2. Solve larger / more complex problems
  3. Provide concurrency
  4. Complex, large datasets
24
Q

The word “distributed” in “distributed computing” also appears in related terms such as?

A

distributed system
distributed programming
distributed algorithm

25
Q

Originally referred to independent computers interconnected via a network that are capable of collaborating on a task.

A

Distributed Computing

26
Q

Networks of interconnected computers that work together to achieve a common goal.

A

Distributed Computing

27
Q

In distributed computing, computers are often spread across different locations and connected through a _, such as the internet or a local area network (LAN).

A

network

28
Q

In a distributed computing system, the workload is divided among the various _, which communicate and coordinate their efforts to achieve a common goal.

A

nodes

29
Q

Computers can be physically close ( _ ) or far apart ( _ ).

A

local network
wide area network

30
Q

Benefits of distributed computing

A
  1. Performance Improvement
  2. Scalability
  3. Resilience and redundancy
  4. Cost-effectiveness
  5. Efficiency of distributed applications
  6. Geographical Distribution
  7. Resource Sharing
31
Q

Advantages of Distributed Computing

A
  1. Leverage Commodity Hardware: Use less expensive, off-the-shelf hardware instead of costly, specialized servers.
  2. Horizontal Scaling: Easily add more nodes (computers) to a distributed system to handle increased workloads.
  3. Fault Tolerance: If one node fails, the system can continue to operate, ensuring uninterrupted service.
  4. Reliability: Can recover quickly from node failures or network issues.
32
Q

Disadvantages of Distributed Computing

A
  1. Network Latency: Managing network latency adds complexity, requiring careful consideration of factors like network topology, bandwidth, and routing protocols.
  2. Coordination Overhead: Ensuring that multiple nodes coordinate their actions can be challenging, especially in distributed systems with many components.
  3. Security Concerns: Distributed systems are more susceptible to security threats like hacking, data breaches, and denial-of-service attacks due to their interconnected nature.
  4. Debugging and Troubleshooting: Identifying and resolving issues in distributed systems can be difficult due to their distributed nature and the potential for interactions between multiple components.
33
Q

Examples of Distributed Computing Use Cases

A
  1. Cloud Computing: Services like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform rely on distributed computing to offer scalable and reliable cloud services.
  2. Artificial Intelligence and Machine Learning: Artificial Intelligence (AI) and Machine Learning (ML) are two of the most exciting and rapidly developing fields in technology today. They are also among the most notable use cases for distributed computing.
  3. Scientific Research and High-Performance Computing (HPC): Distributed computing is used extensively in these fields to solve complex scientific problems that require enormous computational resources.
34
Q

Distributed Computing Concept

A _ in a computer network is a device that is connected to other devices within the same computer network.

A

node

It can be a computer, a server, a switch, a hub, a router, or any other device that has an IP address and can communicate with other devices over a network.

35
Q

Distributed Computing Concept

A node can be part of an _ or a _.

A

open group
closed network

36
Q

Distributed Computing Concept

Resources can be anything, such as?

A

Files
Services
Storage facilities
Networks

37
Q

Distributed Computing Concept

_ refers to hiding the complexities of the system’s implementation details from users and applications.

A

Transparency

It aims to provide a seamless and consistent user experience regardless of the system’s underlying architecture, distribution, or configuration.

Keep as much as possible hidden from users.

38
Q

Distributed Computing Concept

A logical layer on top of nodes collectively.

A

Middleware

39
Q

Provides communication and security services while handling failures and other complexities in distributed systems

A

Middleware

the backbone of the distributed system

40
Q

_ ensures smooth collaboration between operations and events.

A

Coordination

41
Q

_ orders events and controls access to shared resources.

A

Synchronization

42
Q

_ helps with better management of complexities and describes how nodes communicate and interact in a system

A

Architectural Model

43
Q

Three Subsets of Distributed Computing

A
  1. Cluster Computing
  2. Cloud Computing
  3. Grid Computing
44
Q

_ breaks down the problem across several networked computing devices

A

Distributed computing

45
Q

_ is the world’s largest coding community for children and a coding language with a simple visual interface, designed as a gentle introduction to programming concepts.

A

Scratch

46
Q

_ is an increasingly popular choice as a first programming language in computer science curricula.

A

Python

47
Q

These architectures use smart clients that contact a server for data, and then format and display that data to the user.

A

Client-server architectures

48
Q

Typically used in application servers, these architectures use web applications to forward requests to other enterprise services.

A

N-tier system architectures

49
Q

These architectures divide all responsibilities among all peer computers, which can serve as clients or servers.

A

Peer-to-peer architectures

50
Q

PARALLEL COMPUTING VS DISTRIBUTED COMPUTING

NUMBER OF COMPUTER SYSTEMS INVOLVED

A

Parallel Computing: A task is divided into multiple sub-tasks which are then allotted to different processors on the same computer system.

Distributed Computing: A number of networked computers work toward a common task while communicating with each other with the help of message passing.
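The message-passing idea can be sketched with Python sockets (an illustrative toy: both “nodes” run on localhost here, whereas real distributed systems span separate machines):

```python
import socket
import threading

def serve_once(server):
    """Node B: accept one connection, read a message, send a reply."""
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ack:" + data)  # reply via message passing

def exchange(message):
    """Node A sends `message` to node B and returns B's reply."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))     # OS picks a free port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=serve_once, args=(server,))
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(message)
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply
```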

51
Q

PARALLEL COMPUTING VS DISTRIBUTED COMPUTING

DEPENDENCY BETWEEN PROCESSES

A

Parallel Computing: A single physical computer system hosts multiple processors.

Distributed Computing: Multiple physical computer systems, connected over a network, are involved.

52
Q

PARALLEL COMPUTING VS DISTRIBUTED COMPUTING

SCALABILITY

A

Parallel Computing: The systems that implement parallel computing have limited scalability.

Distributed Computing: Easily scalable as there are no limitations on how many systems can be added to a network.

53
Q

PARALLEL COMPUTING VS DISTRIBUTED COMPUTING

RESOURCE SHARING

A

Parallel Computing: All processors share the same memory.

Distributed Computing: Computers have their own memory and processors.

54
Q

PARALLEL COMPUTING VS DISTRIBUTED COMPUTING

SYNCHRONIZATION

A

Parallel Computing: All processors use the same master clock for synchronization.

Distributed Computing: Networks have to implement synchronization algorithms.

55
Q

PARALLEL COMPUTING VS DISTRIBUTED COMPUTING

USAGE

A

Parallel Computing: Generally preferred in places requiring faster speed and better performance.

Distributed Computing: Generally preferred in places requiring high scalability.

56
Q

Parallel computing uses multiple processors within a single computer system to divide and process tasks simultaneously, with shared memory and synchronization through a master clock. It is ideal for faster performance but has limited scalability. In contrast, distributed computing involves multiple independent systems working together via message passing, each with its own memory and processors. This allows for greater scalability and is suited for tasks requiring extensive resource sharing across systems.

A

Noted