Inno V- System design Flashcards

1
Q

Load Balancers

A

Distribute incoming traffic across multiple servers to optimize performance and ensure reliability.
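A minimal round-robin sketch of the idea (server names are made up for illustration; real load balancers also track server health and drop failed instances from rotation):

```python
import itertools

class RoundRobinBalancer:
    """Cycles incoming requests across a fixed pool of servers."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        # Each call hands back the next server in the rotation.
        return next(self._cycle)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
# Four requests wrap around the three-server pool.
assert [lb.next_server() for _ in range(4)] == ["app-1", "app-2", "app-3", "app-1"]
```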

2
Q

Key-Value Stores

A

Storage systems that manage data as pairs of keys and values, often implemented using distributed hash tables.
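A toy sketch of hash-based key partitioning, standing in for a distributed hash table (the node count and keys are illustrative; real systems use consistent hashing so that adding a node moves only a fraction of the keys):

```python
import hashlib

class PartitionedKV:
    """Routes each key to one of N nodes by hashing the key."""
    def __init__(self, n_nodes):
        self.nodes = [{} for _ in range(n_nodes)]  # each dict stands in for a node

    def _node_for(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]     # same key always maps to same node

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)

kv = PartitionedKV(3)
kv.put("user:42", "alice")
assert kv.get("user:42") == "alice"
```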

3
Q

Blob Storage

A

A service for storing large amounts of unstructured data, such as media files (e.g., YouTube, Netflix).

4
Q

Databases

A

Organized collections of data that facilitate easy access, management, and modification.

5
Q

Rate Limiters

A

Control the maximum number of requests a service can handle in a given timeframe to prevent overload.
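A minimal token-bucket sketch of the idea (the rate and capacity values are arbitrary; a production limiter would typically sit at the gateway and return HTTP 429 on rejection):

```python
import time

class TokenBucket:
    """Allows roughly `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
# The first two back-to-back requests fit in the burst; the third is rejected.
results = [bucket.allow() for _ in range(3)]
```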

6
Q

Monitoring Systems

A

Tools that enable administrators to track and analyze infrastructure performance, including bandwidth and CPU usage.

7
Q

Distributed Messaging Queues

A

Mediums that facilitate communication between producers and consumers, ensuring reliable message delivery.

8
Q

Distributed Unique ID Generators

A

Systems that generate unique identifiers for events or tasks in a distributed environment.
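A rough sketch in the style of Twitter's Snowflake scheme, which packs a timestamp, a machine ID, and a per-millisecond sequence into one 64-bit integer (the 41/10/12 bit split follows the original design; the machine ID here is arbitrary):

```python
import threading
import time

class SnowflakeLike:
    """IDs laid out as: 41-bit ms timestamp | 10-bit machine id | 12-bit sequence."""
    def __init__(self, machine_id):
        self.machine_id = machine_id & 0x3FF   # 10 bits
        self.sequence = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self):
        with self.lock:
            ms = int(time.time() * 1000)
            if ms == self.last_ms:
                # Same millisecond: bump the sequence counter (12 bits).
                self.sequence = (self.sequence + 1) & 0xFFF
            else:
                self.sequence = 0
                self.last_ms = ms
            return (ms << 22) | (self.machine_id << 12) | self.sequence

gen = SnowflakeLike(machine_id=1)
ids = [gen.next_id() for _ in range(1000)]
assert len(set(ids)) == 1000   # all unique, and roughly time-ordered
```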

9
Q

Distributed Search

A

Mechanisms that allow users to search across multiple data sources or websites for relevant information.

10
Q

Distributed Logging Services

A

Systems that collect and trace logs across services to monitor and troubleshoot applications.

11
Q

Distributed Task Schedulers

A

Tools that manage and allocate computational resources for executing tasks across a distributed system.

12
Q

Caching

A

A technique for temporarily storing frequently requested data so it can be retrieved quickly when needed again. When caching is part of the system architecture, the main database or data source carries less load, which improves performance and efficiency. Benefits: quick access, reduced database load, better performance and user experience.
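A small LRU (least recently used) cache sketch showing the eviction behavior most caches rely on (key names are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Keeps the `capacity` most recently used entries; evicts the oldest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                      # cache miss: caller falls back to the DB
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # "a" becomes most recently used
cache.put("c", 3)     # capacity exceeded: "b" is evicted
assert cache.get("b") is None and cache.get("a") == 1
```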

13
Q

Content Delivery Network (CDN)

A

A Content Delivery Network (CDN) is a network of servers spread across different regions that enables faster delivery of content to users, such as webpages, videos, and photos. When a user requests content (like a video or an image), instead of retrieving it from the origin server, the request is handled by a nearby CDN server that holds a cached copy of the content.

This reduces the distance the data has to travel, making it load faster for the user.

14
Q

API Gateways

A

An API gateway is like a central doorway or “traffic controller” for requests coming into a system. In system design, it acts as a single entry point through which clients (such as apps or websites) access multiple backend services in an organized and secure way. Key responsibilities: routing, security, monitoring, load management.

15
Q

Latency

A

is defined as the amount of time required for a single unit of data to be delivered successfully. Latency is measured in milliseconds (ms).

16
Q

Availability

A

is the percentage of time the system is up and able to serve requests.

17
Q

Redundancy

A

is defined as a concept where certain entities are duplicated with the aim of scaling up the system and reducing overall downtime.

18
Q

Consistency

A

refers to data uniformity across a system: every read reflects the most recent write.

19
Q

Different types of System Architecture Patterns include:

A

Client-Server Architecture Pattern: Separates the system into two main components: clients that request services and servers that provide them.

Event-Driven Architecture Pattern: Uses events to trigger and communicate between decoupled components, enhancing responsiveness and scalability.

Microkernel Architecture Pattern: Centers around a **core system (microkernel)** with additional features and functionalities added as plugins or extensions.

Microservices Architecture Pattern: Breaks down applications into small, independent services that can be developed, deployed, and scaled independently.

20
Q

System Design Life Cycle (SDLC)

A

Planning => feasibility study => system design => implementation => testing => deployment => maintenance and support

  1. Planning Stage
  2. Feasibility Study Stage (check if we can actually deliver the system)
  3. System Design Stage
  4. Implementation Stage
  5. Testing Stage
  6. Deployment Stage
  7. Maintenance and Support
21
Q

CAP theorem

A

C- consistency
-when the system returns information, it is always the newest, up-to-date content
A- availability
-the system always returns information, even if stale; it never stops responding
P- partition tolerance
-during a partition of the system (when nodes cannot reach each other), the system can still operate

CAP theorem: a distributed system can guarantee only two of the three properties at once; you can't have all three. In practice, during a network partition you must choose between consistency and availability.

22
Q

Lamport’s Logical Clock Theorem

A

Lamport’s Logical Clock is a process to ascertain the sequence in which events take place.

It acts as the foundation for the more complex Vector Clock algorithm. A logical clock is required because a distributed system lacks a global clock (Lamport).

It is an algorithm proposed by Leslie Lamport in 1978 that makes it possible to order events in distributed computer systems that have no shared clock. The clock does not measure time in the traditional sense; instead, it tracks the order of events in the system.
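The two clock rules, increment on every local event or send, and take max(local, received) + 1 on receive, can be sketched as:

```python
class LamportClock:
    """Lamport logical clock for ordering events without a global clock."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Rule 1: increment before any local event or message send.
        self.time += 1
        return self.time

    def receive(self, sent_time):
        # Rule 2: on receive, jump past both the local and the sender's time.
        self.time = max(self.time, sent_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_send = a.tick()      # process A sends a message stamped with its clock
b.receive(t_send)      # process B's clock moves past the sender's stamp
assert b.time > t_send
```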

23
Q

What are Functional Requirements?

A

These are the requirements that the end user specifically demands as basic facilities the system should offer. All of these functionalities must be incorporated into the system as part of the contract. They are the requirements stated by the user, and unlike non-functional requirements, they are directly visible in the final product.

24
Q

What are Non-Functional Requirements?

A

These are the quality constraints that the system must satisfy according to the project contract. The priority or extent to which these factors are implemented varies from one project to another. They are also called non-behavioral requirements. They deal with issues like:

Portability
Security
Maintainability
Reliability
Scalability
Performance
Reusability
Flexibility

25
What are Bottleneck Conditions?
A bottleneck in a system is a point where the flow of data or processing is limited, causing the overall system performance to degrade. Bottlenecks are like narrow choke points in a highway; when traffic (data or requests) surpasses the capacity of these points, it leads to congestion and delays.
26
Architectures types
Monolithic architecture is a software design methodology that combines all of an application's components into a single, inseparable unit. Under this architecture, the user interface, business logic, and data access layers are all created, deployed, and maintained as one unified unit.

Microservices are an architectural approach to developing software applications as a collection of small, independent services that communicate with each other over a network. Instead of building a monolithic application where all the functionality is tightly integrated into a single codebase, microservices break the application down into small, loosely coupled services, each focusing on a distinct business capability.
27
API Gateway, Service Registry and Discovery, Load Balancer
API Gateway: Acts as a central entry point for external clients; it manages requests, handles authentication, and routes each request to the appropriate microservice.
Service Registry and Discovery: Keeps track of the locations and addresses of all microservices, enabling them to locate and communicate with each other dynamically.
Load Balancer: Distributes incoming traffic across multiple service instances, preventing any single instance from being overwhelmed.
28
Message Broker
Facilitates communication between microservices, allowing asynchronous pub/sub interaction of events between components/microservices.
29
Database per Microservice
Each microservice usually has its own database, promoting data autonomy and allowing for independent management and scaling.
30
Caching
A cache stores frequently accessed data close to the microservice, improving performance by reducing repetitive queries.
31
Fault Tolerance and Resilience Components:
Components like circuit breakers and retry mechanisms ensure that the system can handle failures gracefully, maintaining overall functionality.
32
The Circuit Breaker pattern
is a design pattern used in microservices to enhance system resilience and fault tolerance. **It acts like an electrical circuit breaker by preventing an application from repeatedly trying to execute an operation that is likely to fail, which can lead to cascading failures across the system.**
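A stripped-down sketch of the closed-to-open transition (real breakers also add a half-open state with a recovery timeout before allowing traffic again; the threshold here is arbitrary):

```python
class CircuitBreaker:
    """Opens after `threshold` consecutive failures; an open circuit fails fast."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.state = "CLOSED"

    def call(self, fn):
        if self.state == "OPEN":
            # Reject immediately instead of hammering a failing dependency.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
            self.failures = 0           # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.state = "OPEN"     # stop forwarding calls downstream
            raise

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise ConnectionError("downstream unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

assert breaker.state == "OPEN"
```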
33
A Service Registry
serves as a centralized database or directory where information about available services and their locations is stored and maintained. It acts as a vital component of service discovery by providing a central point for service registration, lookup, and management.
34
The SAGA Design Pattern
is a pattern used to manage long-running and distributed transactions, particularly in microservices architecture. Instead of relying on a traditional, monolithic transaction (which requires a centralized transaction manager and locking databases across services), the SAGA pattern breaks a complex transaction into a series of smaller, isolated operations, each handled by a different service. These smaller operations, also called saga steps, are executed sequentially or in parallel, and each step defines a compensating action that undoes its effect if a later step fails.
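A minimal orchestration-style sketch, where each saga step pairs an action with a compensating action that undoes it on failure (the step names are invented for illustration):

```python
def run_saga(steps):
    """Runs (action, compensation) pairs; on failure, undoes completed steps in reverse."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)   # only remember compensations for completed steps
    except Exception:
        for compensate in reversed(done):
            compensate()              # roll back in reverse order
        return False
    return True

log = []

def charge_payment():
    raise RuntimeError("payment failed")   # simulated failure in the second step

steps = [
    (lambda: log.append("reserve inventory"),
     lambda: log.append("release inventory")),
    (charge_payment,
     lambda: log.append("refund payment")),
]

ok = run_saga(steps)
# The failed payment never completed, so only the inventory step is compensated.
assert ok is False and log == ["reserve inventory", "release inventory"]
```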
35
The Strangler pattern
is an architectural approach **employed during the migration from a monolithic application to a microservices-based architecture**. It derives its name from the way a vine slowly strangles a tree, gradually replacing its growth. Similarly, the Strangler pattern involves replacing parts of a monolithic application with microservices over time. Implementing the Strangler pattern involves three steps: Transform, Co-exist, Eliminate.
36
The Bulkhead Pattern
is a design principle used in software architecture to improve system resilience and fault tolerance by isolating components or resources within a system. By isolating components, the **Bulkhead Pattern helps minimize the impact of failures**, maintain system stability, and enhance overall reliability.
37
The API Composition Pattern
is a design approach in microservices architecture that allows developers to aggregate responses from multiple microservices into a single API endpoint. In a typical microservices setup, each service handles a specific business capability and exposes its own API. However, clients often require data from multiple services to fulfill a request, leading to increased complexity and potential performance issues when multiple network calls are made. The API Composition Pattern addresses this by acting as an intermediary layer that orchestrates calls to the relevant microservices, compiles their responses, and presents a unified response to the client. **This pattern can be implemented using a dedicated API gateway or a composition service that fetches data asynchronously, ensuring that the client receives all necessary information in one call.**
38
Relationship between CQS and CQRS
Command Query Separation (CQS) and CQRS are related in that CQRS extends the fundamental concept of CQS. To put it simply:

CQS: a programming principle that says you should separate operations that change data (commands) from those that read data (queries). A method should either return something or update something, but not both.

CQRS: expands on this idea by dividing the design of the entire system into two sections, one for handling commands (writing or modifying data) and another for handling queries (reading data). Each side can have its own database or model to optimize how it works.

So CQS is the basic rule, and CQRS is an advanced version of it used for bigger systems where you want to handle reading and writing differently.
39
With event-driven architecture (EDA),
various system components communicate with one another by generating, identifying, and reacting to events. These events can be important happenings, like user actions or changes in the system's state. In EDA, components are independent, meaning they can function without being tightly linked to one another. When an event takes place, a message is dispatched, prompting the relevant components to respond accordingly. This structure enhances flexibility, scalability, and real-time responsiveness in systems.
40
Serverless
Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages server provisioning and allocation. Developers are shielded from the complexity of operating these servers, including capacity planning, scaling, and server maintenance. The provider automatically executes the code in stateless, event-triggered compute containers that are fully managed by the provider.
41
Azure Functions:
With Azure Functions, a serverless compute solution from Microsoft, you can execute event-triggered code without explicitly provisioning or managing infrastructure. Azure Functions easily connects with other Azure services and supports a large number of programming languages.
42
Concurrency
Concurrency relates to an application that is processing more than one task at the same time. It is an approach used to decrease the response time of a system using a single processing unit. Concurrency creates the illusion of parallelism: the chunks of the tasks are not actually processed in parallel, but more than one task is in progress at a time inside the application. One task does not have to finish completely before the next one begins.
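A small illustration using two threads on one interpreter, where sleeping stands in for waiting on I/O; the tasks overlap in time even without multiple cores doing work simultaneously:

```python
import threading
import time

results = []

def worker(name, delay):
    time.sleep(delay)        # simulates waiting on I/O (network, disk, ...)
    results.append(name)

# Both tasks are "in progress" at once: the slow task starts first
# but the fast one overtakes it, which is only possible if they overlap.
t1 = threading.Thread(target=worker, args=("slow", 0.2))
t2 = threading.Thread(target=worker, args=("fast", 0.05))
t1.start(); t2.start()
t1.join(); t2.join()
assert results == ["fast", "slow"]
```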
43
Parallelism
Parallelism is related to an application where tasks are divided into smaller sub-tasks that are processed genuinely simultaneously on multiple processors or cores. It is used to increase the throughput and computational speed of the system.
44
Authentication Authorization
Authentication is like checking someone's ID to make sure they really are who they say they are. Authorization, on the other hand, is making sure that once someone is confirmed to be who they say they are, they only get access to the stuff they're supposed to.