System Design Flashcards

1
Q

How would you design a system to handle millions of users?

A

1. Core Principles:

Scalability:
Horizontal scaling: Add more servers rather than upgrading existing ones.
Design for statelessness: Services should not rely on local session data.

Availability:
Redundancy: Duplicate critical components to prevent single points of failure.
Fault tolerance: Design the system to handle failures gracefully.

Reliability:
Data integrity: Ensure data consistency and durability.
Monitoring and logging: Implement comprehensive monitoring to detect and resolve issues.

Performance:
Minimize latency: Optimize response times for a smooth user experience.
Efficient resource utilization: Avoid bottlenecks and optimize resource usage.

In practice:
* Use a microservices architecture.
* Implement load balancing.
* Use caching (Redis, CDN).
* Scale with Kubernetes and auto-scaling groups.

2. Architectural Components:

Load Balancing:
Distribute incoming traffic across multiple servers to prevent overload.
Use load balancers at various layers (e.g., network, application).
Application Layer:
Microservices architecture: Break down the application into smaller, independent services.
API gateways: Manage API requests and provide a centralized entry point.
Asynchronous processing: Use message queues (e.g., Kafka, RabbitMQ) for background tasks.
Data Layer:
Database scaling:
Database sharding: Partition data across multiple databases.
Database replication: Create read replicas to distribute read load.
NoSQL databases: Use NoSQL databases for unstructured or high-volume data.
Caching:
In-memory caching (e.g., Redis, Memcached) to store frequently accessed data.
Content Delivery Networks (CDNs) to cache static assets.
Infrastructure:
Cloud computing (AWS, Azure, GCP): Leverage cloud services for scalability and reliability.
Containerization (Docker, Kubernetes): Package and deploy applications in containers for portability and scalability.
Monitoring and Logging:
Centralized logging: Collect and analyze logs from all components.
Performance monitoring: Track key metrics (e.g., response time, CPU usage).
Alerting: Set up alerts for critical issues.
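The load-balancing component above can be sketched in a few lines. This is a minimal round-robin sketch in Python; the backend hostnames are made up for illustration, and a production balancer would also handle health checks and connection draining.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes incoming requests across backends in strict rotation."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def next_backend(self):
        # Each call returns the next backend, wrapping around at the end.
        return next(self._pool)

lb = RoundRobinBalancer(["app-1:8080", "app-2:8080", "app-3:8080"])
targets = [lb.next_backend() for _ in range(4)]
# First four requests go to app-1, app-2, app-3, then wrap back to app-1.
```

Real load balancers (e.g., NGINX, HAProxy, cloud-managed ones) offer richer policies such as least-connections and weighted routing, but round-robin captures the core idea of spreading traffic.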
3. Key Techniques:

Caching:
Implement caching at various layers (e.g., client-side, server-side, database).
Use appropriate caching strategies (e.g., LRU, LFU).
Asynchronous Processing:
Use message queues to decouple components and handle background tasks.
This improves responsiveness and prevents overload.
Database Optimization:
Optimize database queries and schemas.
Use database indexes to improve query performance.
Content Delivery Networks (CDNs):
Distribute static content (e.g., images, videos) to edge servers.
This reduces latency and improves performance.
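The LRU eviction strategy mentioned above can be sketched with an ordered dictionary. This is a minimal sketch; for the common case of caching function results, Python's built-in `functools.lru_cache` already does this.

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least-recently-used entry once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the oldest entry

cache = LRUCache(2)
cache.put("product:1", {"name": "Widget"})
cache.put("product:2", {"name": "Gadget"})
cache.get("product:1")                      # touch product:1 so it stays "hot"
cache.put("product:3", {"name": "Gizmo"})   # capacity exceeded: evicts product:2
```

LFU works the same way structurally but evicts by access count rather than recency.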
4. Example Scenario:

Imagine an e-commerce platform with millions of users.
Load balancers distribute traffic across multiple application servers.
Microservices handle different functionalities (e.g., product catalog, shopping cart, order processing).
A distributed cache stores frequently accessed product information.
The database is sharded to handle the large volume of user and order data.
CDNs deliver product images and other static assets.
Monitoring and logging systems track performance and detect issues.
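The sharding step in this scenario can be sketched as a stable hash of the user key. A minimal sketch, assuming a fixed shard count; note that plain modulo sharding forces large data movement when the shard count changes, which is why production systems often use consistent hashing instead.

```python
import hashlib

NUM_SHARDS = 4  # illustrative; real deployments size this via capacity planning

def shard_for(user_id: str) -> int:
    """Route a user's data to a shard via a stable hash of the key."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same key always maps to the same shard, so reads find the data
# that writes placed there.
assert shard_for("user-12345") == shard_for("user-12345")
```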

2
Q

What is microservices architecture?

A

Microservices architecture has become popular over the past decade. It breaks a large application down into smaller, independent services that can be developed, deployed, and scaled independently. Each microservice typically focuses on a specific business capability and communicates with other services through APIs.
Pros
Scalability, Flexibility, Resilience, Faster Deployment, Team Autonomy
Cons
Complexity, Inter-Service Communication, Data Management, Operational Overhead
Monolithic Approach: A monolithic architecture involves building an application as a single, unified codebase. All components of the application are tightly integrated and run as a single process.

Pros
Simplicity, Performance, Easier Debugging, Lower Operational Overhead (there is no need for complex orchestration or managing multiple deployment pipelines)
Cons
Scalability Limitations, Slower Deployment
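To make the API-based communication between microservices concrete, here is a minimal sketch of a "product catalog" service and a client call to it, using only Python's standard library. The endpoint and payload are illustrative, and real services would add service discovery, timeouts, and retries.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal "product catalog" service exposing one API endpoint.
class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/products/1":
            body = json.dumps({"id": 1, "name": "Widget"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), CatalogHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (e.g., the shopping cart) calls it over HTTP:
url = f"http://127.0.0.1:{server.server_port}/products/1"
with urllib.request.urlopen(url) as resp:
    product = json.loads(resp.read())

server.shutdown()
```

In a monolith, the same lookup would be an in-process function call; the HTTP hop is the price paid for independent deployment and scaling.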
