Concepts and Technologies Flashcards

1
Q

Load Balancing

A
  • spread traffic across a cluster of servers
    to improve responsiveness and
    availability of services
  • typically sits between client and server
  • prevents a server from being a single
    point of failure
  • performs health checks to track the
    status / health of backend resources
  • use active / passive cluster to prevent
    load balancer from being point of failure

BENEFITS:

  • faster, uninterrupted service
  • less server downtime = higher
    throughput
  • fewer failed or stressed components
  • predictive analytics = fewer bottlenecks

Ex. 1:
client -> internet -> load balancer -> servers

Ex. 2:
client -> LB -> web servers -> LB -> application servers -> LB -> databases
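The health-check and failover behavior above can be sketched in a few lines of Python. This is a minimal illustration, not a real load balancer; the server names and method names are hypothetical.

```python
import random

class LoadBalancer:
    """Illustrative load balancer that only routes to healthy servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)

    def mark_down(self, server):
        # A failed health check removes the server from rotation,
        # so one dead server does not take down the service.
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def route(self, request):
        # Spread traffic across healthy servers only.
        if not self.healthy:
            raise RuntimeError("no healthy servers")
        return random.choice(sorted(self.healthy))

lb = LoadBalancer(["app1", "app2", "app3"])
lb.mark_down("app2")                       # health check failed
assert lb.route("GET /") in {"app1", "app3"}
```

In production the load balancer itself runs as an active/passive pair so it is not a single point of failure.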

2
Q

Load Balancing Algorithms (How to pick which servers to send traffic to)

A
  • fewest active connections
  • lowest avg. response time
  • least amount of traffic (bandwidth)
  • round robin method (cycle through)
  • weighted round robin (each server given
    a weight based on processing capacity;
    higher-weight servers receive
    proportionally more traffic)
3
Q

Caching

A
  • enables better use of resources
  • “recently requested data likely to be
    requested again”
  • short term memory, limited space but
    faster
  • can exist at all levels, often nearest to
    front-end

APPLICATION SERVER CACHE:
- placing a cache on the request layer
node enables the local storage of
response data
- request node -> check cache (fast) ->
otherwise go to disk
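The check-cache-then-disk flow above is the classic cache-aside lookup. A minimal Python sketch, with plain dicts standing in for the fast local cache and the slow backing store:

```python
# Cache-aside on the request node: check the local cache first (fast);
# on a miss, fall back to the slow backing store and remember the result.
disk = {"user:1": {"name": "Ada"}}   # slow backing store (example data)
cache = {}                            # fast, limited local cache

def get(key):
    if key in cache:                  # fast path: cache hit
        return cache[key]
    value = disk[key]                 # slow path: go to disk
    cache[key] = value                # recently requested data is likely
    return value                      # to be requested again

assert get("user:1") == {"name": "Ada"}   # miss: fetched from disk
assert "user:1" in cache                  # now stored locally
assert get("user:1") == {"name": "Ada"}   # hit: served from cache
```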

CONTENT DELIVERY NETWORK (CDN)
- for sites serving large amounts of static
media
- request to CDN -> serve or request to
back end servers -> cache it locally ->
client

4
Q

Cache Invalidation

A
  • if data modified in db, needs to be
    invalidated in cache
  • write-through: data written to both at
    same time. ensures nothing lost, but
    higher latency
  • write-around: data written directly to
    storage, bypassing the cache. reads of
    recently written data miss the cache =
    higher read latency
  • write-back: only written to cache, then
    eventually storage. low latency and high
    throughput, but risk of data loss
5
Q

Cache Eviction Policies

A
  • First In, First Out (FIFO)
  • Last In, First Out (LIFO)
  • Least Recently Used (LRU)
  • Most Recently Used (MRU)
  • Least Frequently Used (LFU)
  • Random Replacement
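LRU, probably the most common policy, is a sketch away in Python using `collections.OrderedDict`, which keeps insertion order and can move entries to the end:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used eviction in a fixed-capacity cache (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")           # "a" is now most recently used
c.put("c", 3)        # over capacity: evicts "b"
assert "b" not in c.data and "a" in c.data
```

Swapping `popitem(last=False)` for `popitem(last=True)` would give MRU eviction instead.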
6
Q

Relational Database (SQL)

A
  • structured, with pre-defined schemas
  • store data in rows and columns; each
    row must conform to the schema

Querying:
- uses SQL, which is very powerful

Scalability:
- vertically scalable, which is expensive.
- horizontal is challenging and time
consuming

Reliability:
- ACID compliant, better for data reliability

Use Cases:
- data reliability and ACID compliance
- if data is structured and unchanging
- no massive growth and consistent data
and traffic

Examples:
- Microsoft SQL Server
- Oracle Database
- PostgreSQL
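The pre-defined schema and ACID transactions above can be demonstrated with Python's built-in `sqlite3` module (the table and values here are illustrative):

```python
import sqlite3

# Pre-defined schema: every row must conform to these columns and types.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)"
)
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")

# Atomic transaction: both updates commit together or not at all,
# which is the kind of reliability ACID compliance guarantees.
with conn:
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")

total = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
assert total == 150   # money moved between rows, none created or lost
```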

7
Q

Non-Relational Database (NoSQL)

A
  • unstructured, dynamic schema. each
    row doesn’t have to have data for each
    column, can create columns on the fly.
  • data is distributed

Types:
- key-value stores
- document db: data stored in documents
in collections, documents can have
different structure

Querying:
- query syntax varies by database;
document stores query collections of
documents by their fields

Scalability:
- horizontally scalable, which is much
more cost effective

Reliability:
- sacrifice reliability for performance and
scalability

Use Cases:
- if all other components are fast /
seamless, db will not be the bottleneck
- storing large volumes of data with no
structure
- make the most of cloud computing and
storage. can be very cost effective, but
requires data to be easily spread across
servers to scale up.
- rapid development

Examples:
- Amazon DynamoDB
- MongoDB
- Cassandra
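The dynamic-schema idea above can be sketched with plain Python dicts standing in for a document database such as MongoDB; the user records and the `find` helper are hypothetical.

```python
# Document-style store: each document in a collection can have a
# different structure; columns exist only where a document sets them.
users = [
    {"_id": 1, "name": "Ada", "email": "ada@example.com"},
    {"_id": 2, "name": "Bob", "tags": ["admin"]},  # no email; extra field
]

# Document-based querying: filter the collection by matching fields.
def find(collection, **criteria):
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in criteria.items())]

assert find(users, name="Bob") == [{"_id": 2, "name": "Bob", "tags": ["admin"]}]
assert find(users, email="ada@example.com")[0]["_id"] == 1
```

Because each document is self-contained, records like these shard easily across servers, which is what makes horizontal scaling cheap.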
