System Design Fundamentals Flashcards

1
Q

What questions should you think about in the first step of designing a system?

A

Who is going to use it?

How are they going to use it?

How many users are there?

What does the system do?

What are the inputs and outputs of the system?

How much data do we expect to handle?

How many requests per second do we expect?

What is the expected read to write ratio?

What are the functional requirements?

What are the non-functional requirements, such as scale? Based on the functional requirements, what do we care about: consistency, availability, partition tolerance? What about durability and fault tolerance?
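The sizing questions above feed directly into back-of-envelope math. A minimal sketch, with all input numbers invented for illustration:

```python
# Back-of-envelope load estimate. Every number here is an assumption
# for illustration, not from any real system.
daily_active_users = 10_000_000
requests_per_user_per_day = 20
read_write_ratio = 10  # assumed: 10 reads for every write

seconds_per_day = 24 * 60 * 60  # 86,400
avg_rps = daily_active_users * requests_per_user_per_day / seconds_per_day
peak_rps = avg_rps * 2  # common rule of thumb: peak is roughly 2x average

writes_per_sec = avg_rps / (read_write_ratio + 1)
reads_per_sec = avg_rps - writes_per_sec

print(f"average: {avg_rps:.0f} req/s, peak: {peak_rps:.0f} req/s")
print(f"reads: {reads_per_sec:.0f}/s, writes: {writes_per_sec:.0f}/s")
```

Roughly 2,300 req/s average here; numbers like these tell you whether a single database will cope or whether you need caching, replication, or sharding.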

2
Q

When would you use a NoSQL vs. a SQL database?

A

Reasons for SQL:

Structured data
Strict schema
Relational data
Need for complex joins
Transactions
Clear patterns for scaling
More established: developers, community, code, tools, etc
Lookups by index are very fast

Reasons for NoSQL:

Semi-structured data
Dynamic or flexible schema
Non-relational data
No need for complex joins
Store many TB (or PB) of data
Very data intensive workload
Very high throughput for IOPS

Sample data well-suited for NoSQL:

Rapid ingest of clickstream and log data
Leaderboard or scoring data
Temporary data, such as a shopping cart
Frequently accessed ('hot') tables
Metadata/lookup tables

The benefits of NoSQL:

NoSQL has better locality than SQL

  1. What this means is that for a large document, for example some sort of social media profile, it can all be stored in one area
  2. Therefore pulling this document is much faster than having to fetch multiple related rows in a SQL table
  3. However, in the event that we only need a certain part of this information, we will be sending more data over the network resulting in a slower call
  4. Since sequential disk reads are faster, because the disk does not have to move around its head, having lots of data stored in the same place results in a faster query

NoSQL is easier to shard

  • To give a quick summary of sharding (will go into more detail later), it is taking a database that is too big and splitting it up over multiple machines
  • This is complicated with a SQL database, because when using a join call, you may potentially have to access many partitions resulting in lots of network calls
  • On the other hand, the increased locality of NoSQL means that the entire document will likely be on one machine and can be quickly accessed with a single network call

NoSQL data is not rigidly formatted

  • This makes it a bit more maintainable when adding new fields to an object, or when storing related data with slightly different structures
  • Not needing to fit data into rows allows the stored format to more accurately reflect the in-memory data structure that holds the object

The benefits of SQL:

SQL allows joins, whereas NoSQL does not

  • Generally speaking SQL allows easily fetching multiple related rows of various tables using the join command, whereas there is no support for this in NoSQL
  • You can emulate joins with application code, but it will require many network calls and result in a slow query
  • As a result, SQL allows the data to be a bit more modular: you can request only certain parts of a potentially large document, at the tradeoff that fetching the entire document may take a long time

SQL has transactions

  • Transactions are something that will be covered later, but the gist is that they are an abstraction on top of database writes to provide some guarantees about them and simplify the edge cases a programmer must consider
  • However, in a distributed setting it rarely makes sense to use transactions and so this benefit is diminished
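The locality vs. join tradeoff above can be sketched in a few lines. The table and field names are made up for illustration; SQLite stands in for a relational database, and a plain dict stands in for a document store:

```python
import sqlite3

# Relational modeling: the profile is split across normalized tables,
# and reassembling it requires a join.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE posts (user_id INTEGER, body TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice')")
db.executemany("INSERT INTO posts VALUES (1, ?)", [("hi",), ("bye",)])
rows = db.execute(
    "SELECT u.name, p.body FROM users u JOIN posts p ON u.id = p.user_id"
).fetchall()

# Document modeling: the same profile lives in one place, so a single
# lookup fetches everything (better locality, but you always get it all).
documents = {
    1: {"name": "alice", "posts": [{"body": "hi"}, {"body": "bye"}]},
}
profile = documents[1]
```

The single dict lookup mirrors the single-machine, single-network-call access pattern of a document store, while the join mirrors the multi-row (and, when sharded, multi-partition) fetch of SQL.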
3
Q

What is reliability?

A

To work correctly even in the face of adversity.

So if a machine goes down does it still work? Connection going down? etc.

4
Q

What is Scalability?

A

Reasonable ways of dealing with growth.

5
Q

What is maintainability?

A

Making the system easy for many different people to work on productively over time: keeping it operable, simple, and easy to evolve.

6
Q

What do you use to store data?

A

A database, such as a SQL or NoSQL database

7
Q

How would you speed up a read?

A

Use a cache

8
Q

How would you search data?

A

Use a search index

9
Q

What is stream processing?

A

Sending a message to another process, to be handled asynchronously

10
Q

What do you call the action of “periodically crunch data”?

A

Batch processing

11
Q

Can you describe the differences between Performance vs. Scalability?

A

A service is scalable if it results in increased performance in a manner that is proportional to the resources added.

Generally, increasing performance means serving more units of work, but it can also mean handling larger units of work.

Performance Problem - the system is slow for a single user (perhaps a unit of work that the server doesn’t compute efficiently, possibly due to a bad algorithm, so the size of a single unit of work is the problem)

Scalability Problem - system is fast for a single user but slow under heavy load
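The "proportional to the resources added" definition can be made concrete with a toy calculation (the throughput numbers are invented):

```python
# Toy check of scalability: throughput should grow roughly in
# proportion to resources added.
def scaling_efficiency(base_rps, scaled_rps, base_servers, scaled_servers):
    """1.0 = perfectly linear scaling; lower = diminishing returns."""
    return (scaled_rps / base_rps) / (scaled_servers / base_servers)

# Doubling servers took us from 1,000 to 1,800 req/s: 90% efficiency.
eff = scaling_efficiency(1000, 1800, 2, 4)
print(f"{eff:.0%}")
```

A system whose efficiency stays near 1.0 as you add machines is scaling well; one whose efficiency collapses has a scalability problem even if single-user latency is fine.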

12
Q

What property does a SQL database (RDBMS) fulfill when it comes to the CAP Theorem?

A

C and A
C - consistency
A - availability

We don’t have network partitions in a SQL database (we do have partitions)

We fulfill availability and consistency with ACID transactions in a SQL database

ACID 
A - Atomic
C - Consistent
I - Isolated
D - Durable
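The "A" in ACID can be shown with SQLite's built-in transactions. A minimal sketch (account names and amounts are invented): both writes in the transfer commit together or not at all.

```python
import sqlite3

# Atomicity sketch: a money transfer is two writes that must succeed
# or fail as a unit.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
db.commit()

try:
    with db:  # opens a transaction; commits on success, rolls back on error
        db.execute("UPDATE accounts SET balance = balance - 50 WHERE name='a'")
        db.execute("UPDATE accounts SET balance = balance + 50 WHERE name='b'")
        raise RuntimeError("crash before commit")  # simulate a failure
except RuntimeError:
    pass

balances = dict(db.execute("SELECT name, balance FROM accounts"))
print(balances)  # both balances unchanged: the partial transfer rolled back
```

Without the transaction, a crash between the two UPDATEs would leave 50 units missing; with it, the programmer never sees the half-done state.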
13
Q

Describe the type of systems that we can design in regards to the CAP Theorem?

A

CP - Consistency and Partition tolerance
AP - Availability and Partition tolerance

There is only one choice to make in the case of a network partition: do you sacrifice availability or consistency?

14
Q

Give an example of a database that is consistent and available? (no partition)

A

Partition is network partition

Postgres fulfills CA. You can use replication to scale disk reads and sharding to scale writes, but you lose consistency with that scaling: at that point you have an AP system.

15
Q

Describe what a Consistent/Partition database has to do which sacrifices Availability?

A

CP databases enable consistency and partition tolerance, but not availability.

I have to be careful when labeling or given an example of a database that has Consistency and Partitioning, because it depends on the way the database is configured and what the setup looks like.

For example, by default MongoDB on a single-node instance is strongly consistent, so you get consistent reads and writes.

MongoDB is strongly consistent when you use a single connection or the correct Write/Read Concern Level (Which will cost you execution speed). As soon as you don’t meet those conditions (especially when you are reading from a secondary-replica) MongoDB becomes Eventually Consistent. So MongoDB becomes eventually consistent if you decide to have child nodes that are used for reads with a single main write node.

That’s where I mean it depends on what database we are talking about, and how its configured/setup.

When a partition occurs, the system has to turn off inconsistent nodes until the partition can be fixed. MongoDB is an example of a CP database. It’s a NoSQL database management system (DBMS) that uses documents for data storage. It’s considered schema-less, which means that it doesn’t require a defined database schema. It’s commonly used in big data and applications running in different locations. The CP system is structured so that there’s only one primary node that receives all of the write requests in a given replica set.

Secondary nodes replicate the data in the primary nodes, so if the primary node fails, a secondary node can stand-in.

16
Q

Describe and give an example of an AP Database? What happens in an Available-Partition database when a partition is happening? (eventual consistency = AP database)

A

AP databases enable availability and partition tolerance, but not consistency. In the event of a partition, all nodes are available, but they’re not all updated. For example, if a user tries to access data from a bad node, they won’t receive the most up-to-date version of the data. When the partition is eventually resolved, most AP databases will sync the nodes to ensure consistency across them.

Apache Cassandra is an example of an AP database. It’s a NoSQL database with no primary node, meaning that all of the nodes remain available. Cassandra allows for eventual consistency because users can resync their data right after a partition is resolved.

17
Q

In most scale systems you’ll be trading off against consistency vs. availability, why?

A

Networks aren’t reliable, so you’ll need to support partition tolerance. You’ll need to make a software tradeoff between consistency and availability.

18
Q

When should you use a CP approach to a system?

A

Waiting for a response from the partitioned node might result in a timeout error.

CP is a good choice if your business needs require atomic reads and writes.

Because consistency means: every read receives the most recent write or an error

19
Q

When should you use an AP approach to designing a system? (Availability and Partitioning)

A

Availability means: Every request receives a response, without guarantee that it contains the most recent version of the information

Responses return the most readily available version of the data available on any node, which might not be the latest. Writes might take some time to propagate when the partition is resolved.

AP is a good choice if the business needs allow for eventual consistency or when the system needs to continue working despite external errors.

20
Q

When should you consider/classify your system to take consistency tradeoffs vs. availability trade offs?

A

This is in the requirements/non-functional requirements stage. This will guide us to know what databases we want to use, and what approaches we’ll need to take in order to fulfill the requirements properly.

That means think about consistency and availability trade-offs early on.

CP System: Bank application, Stocks
AP System: Leaderboard, Analytics systems (can have some lag time),

21
Q

How would you choose between a SQL or NoSQL database in a chat system for example to store chat history?

A
  1. Examine the data types and read/write patterns
  2. The first type of data is generic data such as user profiles, settings, and friend lists -> this data is stored in a robust, reliable relational DB for consistency. Replication and sharding are common techniques to satisfy availability and scalability requirements for a SQL database, so it stays available via read replicas and can scale writes depending on where each write is routed.
    The second piece of data, unique to a chat system, is chat history. Chat history data has a different read/write pattern:
    - chat history data is enormous; it is a LOT of messages
    - Only recent chats are accessed frequently, not old messages
    - Recent chat history is viewed in most cases, but users might use features to jump to specific messages, and these need to be supported by the data access layer
    - The read-to-write ratio is about 1:1 for 1-on-1 chat apps; there usually isn’t a heavy-read, light-write pattern, it is generally even. For this, we will use a key-value store.

Why a key-value store for chat history?

  1. K-V stores allow for easier horizontal scaling
  2. We don’t need joins, so a K-V store suffices
  3. Key-value stores provide low latency access to the data
  4. Relational databases do not handle the long tail of data well; when indexes grow, random access becomes expensive
  5. Key-value stores have been used by FB and Discord: Facebook uses HBase, and Discord uses Cassandra (an AP database system)
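A sketch of the chat-history access pattern in a key-value / wide-column style, loosely modeled on the partition-by-channel, cluster-by-message-id layout Discord has described for Cassandra. All names and ids here are illustrative, and a dict stands in for the store:

```python
from collections import defaultdict

# partition key (channel) -> {message_id: message body}
chat_history = defaultdict(dict)

def store_message(channel_id, message_id, body):
    chat_history[channel_id][message_id] = body

def recent_messages(channel_id, limit):
    # Message ids are assumed time-ordered (like snowflake ids), so the
    # highest ids are the most recent: the hot read path scans a single
    # partition, which on a real cluster means a single node.
    ids = sorted(chat_history[channel_id], reverse=True)[:limit]
    return [chat_history[channel_id][i] for i in ids]

store_message("chan-1", 101, "hello")
store_message("chan-1", 102, "world")
store_message("chan-2", 103, "other channel")
print(recent_messages("chan-1", 1))
```

Keying by channel keeps each conversation's data together (good locality, easy sharding), and the "recent messages" query never touches the enormous cold tail of old messages.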
22
Q

Describe the 3 Consistency Patterns? (WES)

A

Weak, Eventual, Strong
Weak = Real time systems such as VoIP, video chat, realtime multi-player games
Eventual = AP System
Strong = CP System

Weak consistency: After a write, reads may or may not see it. A best effort approach is taken.

This approach is seen in systems such as memcached.

Weak consistency works well in real time use cases such as VoIP, video chat, and realtime multiplayer games. For example, if you are on a phone call and lose reception for a few seconds, when you regain connection you do not hear what was spoken during connection loss.

Eventual consistency
After a write, reads will eventually see it (typically within milliseconds). Data is replicated asynchronously.

This approach is seen in systems such as DNS and email. Eventual consistency works well in highly available systems.

Strong consistency
After a write, reads will see it. Data is replicated synchronously.

This approach is seen in file systems and RDBMSes. Strong consistency works well in systems that need transactions.
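The eventual vs. strong distinction above can be modeled with a toy primary/replica pair, where replication is either synchronous or deferred:

```python
# Toy model: one primary, one replica, and a replication queue.
class Replica:
    def __init__(self):
        self.data = {}

primary, replica = Replica(), Replica()
pending = []  # queued replication events (the asynchronous path)

def write(key, value, synchronous):
    primary.data[key] = value
    if synchronous:            # strong: replicate before acknowledging
        replica.data[key] = value
    else:                      # eventual: replicate "later"
        pending.append((key, value))

def flush():                   # replication catches up
    while pending:
        k, v = pending.pop(0)
        replica.data[k] = v

write("x", 1, synchronous=False)
stale = replica.data.get("x")  # None: a replica read misses the write
flush()
fresh = replica.data.get("x")  # 1: the replica eventually converges

write("y", 2, synchronous=True)
strong = replica.data["y"]     # 2: visible immediately after the write
```

The synchronous path buys "reads always see the write" at the cost of waiting on the replica; the asynchronous path acknowledges fast but opens the stale-read window that eventual consistency accepts.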

23
Q

Explain limits to the CAP theorem in regards to Partition Tolerance?

A

Basically, databases that claim to be CA (consistent and available) while not providing partition tolerance are incorrect, or at least misunderstand the CAP theorem.

  1. Consistency

Remember that consistency here means an update is applied to all relevant nodes at the same logical time, or the system at least locks / is unavailable until that happens.

Among other things, this means that standard database replication is not strongly consistent. As anyone whose read replicas have drifted from the master knows, special logic must be introduced to handle replication lag.

  2. Availability

And regarding availability - For a distributed system to be continuously available, every request received by a non-failing node in the system must result in a response.

Despite the notion of “100% uptime as much as possible,” there are limits to availability. If you have a single piece of data on five nodes and all five nodes die, that data is gone and any request which required it in order to be processed cannot be handled.

(N.B.: A 500 The Bees They’re In My Eyes response does not count as an actual response any more than a network timeout does. A response contains the results of the requested work.)

  3. Partition Tolerance

Partition tolerance is often misunderstood. It really concerns how many messages the network can be allowed to lose due to network partitions: dropped packets, crashed servers, failed nodes, even cases where all nodes fail and all messages are “lost”. Handling a crashed machine counts as partition tolerance.

That’s why, when you have multiple nodes, you will only ever think about C vs. A; you cannot sacrifice partition tolerance. The idea of a CA system really doesn’t make sense: a single node can go down and lose messages, so in the face of failure it is not partition tolerant, given that partition tolerance means tolerating lost messages and still being fine.

For a distributed (i.e., multi-node) system to not require partition tolerance, it would have to run on a network which is guaranteed to never drop messages (or even deliver them late) and whose nodes are guaranteed to never die. You and I do not work with these types of systems because they don’t exist. This is hard to achieve because as the number of nodes in a system increases, so does the chance that one of them dies and therefore drops a message. So the question should instead be: in the event of failures, which will this system sacrifice? Consistency or availability?

24
Q

What happens if a system chooses consistency over availability?

A

If a system chooses to provide Consistency over Availability in the presence of partitions (again, read: failures), it will preserve the guarantees of its atomic reads and writes by refusing to respond to some requests.

  1. It may decide to shut down entirely (like the clients of a single-node data store)
  2. Refuse writes (like Two-Phase Commit)
  3. Only respond to reads and writes for pieces of data whose “master” node is inside the partition component (like Membase).

This is perfectly reasonable. There are plenty of things (atomic counters, for one) which are made much easier (or even possible) by strongly consistent systems. They are a perfectly valid type of tool for satisfying a particular set of business requirements.

25
Q

What are the things to think about practically in regards to the CAP Theorem and real world distributed systems

A

I think part of the problem with practical interpretations of the CAP theorem, especially with Gilbert and Lynch’s formulation, is the fact that most real distributed systems do not require atomic consistency or perfect availability and will never be called upon to perform on a network suffering from arbitrary message loss. Consistency, Availability, and Partition Tolerance are the Platonic ideals of a distributed system–we can partake of them enough to meet business requirements, but the nature of reality is such that there will always be compromises.

26
Q

Describe the tradeoffs you have to make based on business requirements regarding answering requests. vs giving answers based on incomplete data?

A

Despite your best efforts, your system will experience enough faults that it will have to make a choice between reducing yield (i.e., stop answering requests) and reducing harvest (i.e., giving answers based on incomplete data). This decision should be based on business requirements.

27
Q

What specific system does the CAP theorem talk about?

A

CAP theorem doesn’t just describe any old system, but a very specific model of a system:

The CAP system model is a single, read-write register – that’s all. For example, the CAP theorem says nothing about transactions that touch multiple objects: they are simply out of scope of the theorem, unless you can somehow reduce them down to a single register.

The only fault considered by the CAP theorem is a network partition (i.e. nodes remain up, but the network between some of them is not working). That kind of fault absolutely does happen, but it’s not the only kind of thing that can go wrong: nodes can crash or be rebooted, you can run out of disk space, you can hit a bug in the software, etc. In building distributed systems, you need to consider a much wider range of trade-offs, and focussing too much on the CAP theorem leads to ignoring other important issues.

Also, the CAP theorem says nothing about latency, which people tend to care about more than availability. In fact, CAP-available systems are allowed to be arbitrarily slow to respond, and can still be called “available”. Going out on a limb, I’d guess that your users wouldn’t call your system “available” if it takes 2 minutes to load a page.

28
Q

Describe what a network partition is?

A

An example of a network partition is a network link getting interrupted between two datacenters that have replica sets we want to keep in sync.

29
Q

CP/AP: a false dichotomy

A

CP/AP: a false dichotomy

The fact that we haven’t been able to classify even one datastore as unambiguously “AP” or “CP” should be telling us something: those are simply not the right labels to describe systems.

I believe that we should stop putting datastores into the “AP” or “CP” buckets, because:

Within one piece of software, you may well have various operations with different consistency characteristics.

Many systems are neither consistent nor available under the CAP theorem’s definitions. However, I’ve never heard anyone call their system just “P”, presumably because it looks bad. But it’s not bad – it may be a perfectly reasonable design, it just doesn’t fit one of the two CP/AP buckets.

Even though most software doesn’t neatly fit one of those two buckets, people try to shoehorn software into one of the two buckets anyway, thereby inevitably changing the meaning of “consistency” or “availability” to whatever definition suits them. Unfortunately, if the meaning of the words is changed, the CAP theorem no longer applies, and thus the CP/AP distinction is rendered completely meaningless.

A huge amount of subtlety is lost by putting a system in one of two buckets. There are many considerations of fault-tolerance, latency, simplicity of programming model, operability, etc. that feed into the design of a distributed systems. It is simply not possible to encode this subtlety in one bit of information. For example, even though ZooKeeper has an “AP” read-only mode, this mode still provides a total ordering of historical writes, which is a vastly stronger guarantee than the “AP” in a system like Riak or Cassandra – so it’s ridiculous to throw them into the same bucket.

Even Eric Brewer admits that CAP is misleading and oversimplified. In 2000, it was meant to start a discussion about trade-offs in distributed data systems, and it did that very well. It wasn’t intended to be a breakthrough formal result, nor was it meant to be a rigorous classification scheme for data systems. 15 years later, we now have a much greater range of tools with different consistency and fault-tolerance models to choose from. CAP has served its purpose, and now it’s time to move on.

30
Q

What is horizontal scaling?

A

Horizontal scaling is running clones of servers with the same code: you create an image of one server and add a bunch of new machines from it.

You want these services to be stateless and to store data externally in some cache or database. An example would be user data in a database, or user session information in a persistent cache like Redis for better performance.

“External” means that the data does not live on the application servers; the servers themselves just have the code base, the logic that pulls data out of the DB or other types of data stores.
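A minimal sketch of why statelessness enables horizontal scaling: session state lives in an external store (a dict standing in for something like Redis), so any clone of the server can handle any request. All names here are illustrative.

```python
session_store = {}  # external store; shared by every server instance

class AppServer:
    """Holds code only -- no per-user state lives on the server."""
    def handle(self, session_id, action):
        session = session_store.setdefault(session_id, {"cart": []})
        if action == "add":
            session["cart"].append("item")
        return len(session["cart"])

# Two "machines" behind a load balancer; the user can hit either one.
server_a, server_b = AppServer(), AppServer()
server_a.handle("sess-1", "add")
count = server_b.handle("sess-1", "add")  # sees the state from server_a
```

Because neither server instance holds the cart itself, a load balancer can route each request anywhere, and adding a third clone requires no data migration.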

31
Q

[ INCOMPLETE - SKIP ]

What is Zookeeper?
When would you use it?
What problems does Zookeeper solve?
Give an example when you’d want to use Zookeeper?
Why does Zookeeper exist?
How does Zookeeper work?

A

What is Zookeeper? - It helps sync information across nodes in a cluster in the case you have multiple nodes in your cluster or infrastructure
When would you use it? - Generally for keeping data services in sync, what comes to mind is
What problems does Zookeeper solve?
Give an example when you’d want to use Zookeeper?
Why does Zookeeper exist?
How does Zookeeper work?

32
Q

What is cache invalidation?

Please give an example of cache invalidation strategies. (incomplete, add more strategies)

A

Cache invalidation is when an entry in the cache no longer contains the correct result and we have to replace or remove it.

One strategy is TTL (time to live). This has drawbacks: it depends on the TTL we set for an entry, and this can be tricky to get correct.

(1) Write Through Strategy

After we update our DB -> we update our Cache.
So an example would be a basic system such as Client -> App Service -> DB (and after the DB state is written) -> we update our cache that reflects the state in our DB.

The write-through strategy updates the cache after a state change in the DB. Therefore another service must not change the DB directly; it can only use the API we provide to update the DB, so the cache also gets invalidated through the write-through path.

Defining boundaries is really important in cache invalidation.
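The write-through flow above, as a minimal sketch (dicts stand in for the database and cache):

```python
# Write-through: every write goes through one code path that updates the
# database and then the cache, so the cache never serves a value the
# database doesn't have.
database = {}
cache = {}

def write_through(key, value):
    database[key] = value   # 1. persist to the source of truth
    cache[key] = value      # 2. update the cache in the same operation

def read(key):
    if key in cache:        # cache hit: no database access needed
        return cache[key]
    value = database[key]   # cache miss: fall back to the database
    cache[key] = value
    return value

write_through("user:1", {"name": "alice"})
```

The boundary point from the card is visible here: correctness depends on every writer calling `write_through` rather than touching `database` directly.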

33
Q

Why use Caching? What is the purpose of Caching?

A

First of all, the main purpose of caching is speed. The basic idea is simple: if you know you are going to compute the same thing, you may just load the result saved from the previous run, and skip the computing this time. There are two keywords here: “the same thing”, and “the saved result”. The latter means you are essentially trading (more) storage space for (less) time. That is the price to pay for caching, and also an important fact to be aware of when you use caching (i.e., caching is definitely not free, and sometimes the price can be fairly high).

The tricky thing is “the same thing”. How do you know that you are computing the same thing? That is all “cache invalidation” is about. When things become different, you have to invalidate the cache, and do the (presumably time-consuming) computing again.
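Both keywords show up in a plain memoization example. The 0.1-second sleep stands in for an expensive computation:

```python
from functools import lru_cache
import time

# Caching trades storage for time: the first call computes, and repeat
# calls with "the same thing" return "the saved result".
@lru_cache(maxsize=None)
def expensive(x):
    time.sleep(0.1)  # pretend this is a costly computation
    return x * x

start = time.perf_counter()
expensive(7)                       # computed (slow)
first = time.perf_counter() - start

start = time.perf_counter()
expensive(7)                       # same argument: served from cache
second = time.perf_counter() - start
print(f"first: {first:.3f}s, cached: {second:.6f}s")
```

`lru_cache` also shows the invalidation problem in miniature: if anything `expensive` depends on besides its argument changes, the saved results are silently wrong until you call `expensive.cache_clear()`.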

34
Q

What is replication lag?

A

Replication lag is the delay for a transaction or operation, calculated as the difference in execution time between the primary/master and the standby/slave node.

Basically, the read (secondary) nodes or child nodes of the primary can’t keep up or stay in sync, and take a bit of time to get the latest data.

It could either be due to I/O thread or SQL thread.

To work out what’s causing the lag, you must determine which replication thread is getting backed up. Replication relies on three threads per master/slave connection: one is created on the master and two are created on the slave.

The Slave I/O Thread. When you issue START SLAVE on a slave server, the slave creates this thread which connects to the master and requests a copy of the master’s binary log.
The Binlog Dump Thread. When the slave connects to the master, the master uses this thread to send the slave the contents of its binary log.
The Slave SQL Thread. The slave creates this SQL (or applier) thread to read the contents of the retrieved binary log and apply its contents.
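The definition above is just a time difference per event. A tiny sketch with invented timestamps:

```python
# Replication lag as a time difference: when each event committed on the
# master vs. when the slave finished applying it. Timestamps (unix
# seconds) are invented for illustration.
events = [
    # (committed_on_master, applied_on_slave)
    (1000.0, 1000.4),
    (1001.0, 1002.1),
    (1002.0, 1004.5),
]

lags = [applied - committed for committed, applied in events]
max_lag = max(lags)  # 2.5s here: the slave is falling progressively behind
```

A growing lag like this one (0.4s, 1.1s, 2.5s) is the signal that one of the three replication threads is backed up.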

35
Q

What is a binary log? Why does it exist?

A

The binary log is a set of log files that contain information about data modifications made to a MySQL server instance. The log is enabled by starting the server with the --log-bin option.

Basically it is a list of state transitions of the database: a log of all the transactions that can be applied to a database.

There are 2 reasons why a binary log exists:

  1. Replication for child nodes
  2. Backups

The binary log has two important purposes:

For replication, the binary log is used on master replication servers as a record of the statements to be sent to slave servers. Many details of binary log format and handling are specific to this purpose. The master server sends the events contained in its binary log to its slaves, which execute those events to make the same data changes that were made on the master. A slave stores events received from the master in its relay log until they can be executed. The relay log has the same format as the binary log.

Certain data recovery operations require use of the binary log. After a backup file has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup.

There are two types of binary logging:

Statement-based logging: Events contain SQL statements that produce data changes (inserts, updates, deletes)

Row-based logging: Events describe changes to individual rows

Reference: https://dev.mysql.com/doc/internals/en/binary-log-overview.html
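Why a log of state transitions serves both replication and recovery can be sketched in a few lines. This mimics the row-based style; the operations and table names are invented:

```python
# Replaying an ordered log of state transitions reproduces the same
# final state on any node -- the idea behind both replication and
# point-in-time recovery.
binlog = [
    ("insert", "users", 1, {"name": "alice"}),
    ("update", "users", 1, {"name": "alicia"}),
    ("insert", "users", 2, {"name": "bob"}),
]

def replay(log):
    tables = {"users": {}}
    for op, table, row_id, data in log:
        if op == "insert":
            tables[table][row_id] = data
        elif op == "update":
            tables[table][row_id].update(data)
        elif op == "delete":
            del tables[table][row_id]
    return tables

master_state = replay(binlog)
slave_state = replay(binlog)  # same log, same resulting state
```

A slave applying the master's events and a restored backup re-executing events recorded after the backup are both doing exactly this replay.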

36
Q

What is a CDN? Why is it used? What are the 2 types of CDNs? How do you decide between the different CDNs?

A

A content delivery network is basically a bunch of caches spread around the world: proxy servers that serve content closer to the user, typically for faster response times for static assets such as HTML/CSS/JS, and videos too. It helps increase performance in 2 ways:

  1. Users are closer to the datacenter holding the static content (we may still need some time to hit a server for dynamic content)
  2. Our servers don’t need to fulfill the requests that the CDN can.

Pros:

  • Faster responses, and takes load off our servers since the CDN serves cached items

Cons

  1. There are costs that are high financially if traffic is high
  2. Content might be stale if it is updated before the TTL expires it
  3. CDNs require changing URLs for static content to point to the CDN

The 2 types of CDNs are push and pull CDNs

Push - the CDN receives new content whenever changes occur on our server. You push content to a push CDN right away when it is uploaded.

Pull - grab (poll) new content from our server when the first user requests that content. You leave the content on your server and rewrite URLs to point to the CDN. This results in slower responses until the content is cached in the CDN.

Depending on the situation there are trade-offs to both, Push seems better as we have the latest content a user can get from the server vs. Pull where the initial requests may be slow.

The decision on which CDN type to go with revolves in large part around traffic and downloads. Travel blogs that are hosting videos and podcasts (aka. large downloads) will find a push CDN cheaper and more efficient in the long run since the CDN won’t re-download content until you actively push it to the CDN. A pull CDN can help high-traffic-small-download sites by keeping the most popular content on CDN servers. Subsequent updates (or “pulls”) for content aren’t frequent enough to drive up costs past that of a push CDN.

37
Q

What is an HTTP request?

A

An HTTP request is made by a client, to a named host, which is located on a server. The aim of the request is to access a resource on the server. To make the request, the client uses components of a URL (Uniform Resource Locator), which includes the information needed to access the resource.

HTTP itself is a protocol, or a set of rules for accessing resources on the internet in a client-server model. Resources can be HTML files, JSON, media, etc. We make HTTP requests to APIs using the HTTP protocol, that allows developers to access these resources

Reference: https://www.freecodecamp.org/news/http-request-methods-explained/

38
Q

HTTP Requests, GET vs. POST vs. PUT vs. DELETE, Explain all 4 and their use cases and features.

A
  • A GET request (the HTTP method here is GET) is a method we use to read/retrieve a resource. A successful GET request returns a response containing the information you requested.
  • A POST request, on the other hand, is used when we want to create a new resource. A POST request requires a body in which you define the data that you want to create. Usually we get a response back for a success with 200 as the response code.
  • A PUT request is used when we want to modify/update an existing resource, it is also idempotent meaning it will always produce the same result whereas if we call a post request constantly we will be creating the same resource multiple times. PUT is usually also associated with creating/updating an item by usually some explicit id, where the resource is known by the client, such as PUT /users/1 whereas POST /users post request would typically be constructed to hit the server URL and handle the logic from there and often the case the client doesn’t know the exact URL of the resource.
    • When it comes to POST vs. PUT, you often see both opinions of using one over the other, from my current experience, I think it depends on the idempotence of the action, meaning that depending on the problem, which method is safer for us to use? Does POST create side effects because we have cases where we may need to make a request multiple times and the requirements are that we should simply replace an item if needed? POST CAN be idempotent, its just not guaranteed.
  • A DELETE requestused to delete a resource from the server. Unlike GET and HEAD requests, the DELETE requests may change the server state. Sending a message body on a DELETE request might cause some servers to reject the request. But you still can send data to the server using URL parameters. You have to know the resource as well to specify what exactly you want to delete as well.
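
The idempotency contrast between POST and PUT can be sketched with a hypothetical in-memory "server" (the `store` dict and the function names below are illustrative, not a real framework API):

```python
# Minimal sketch contrasting POST and PUT idempotency: repeating a POST
# creates duplicate resources, while repeating a PUT leaves the same state.
import itertools

store = {}                      # id -> resource (stand-in for a server's storage)
_ids = itertools.count(1)       # server-assigned ids, as a POST endpoint would use

def post_user(body):
    """POST /users: the server picks the id, so each call creates a new resource."""
    new_id = next(_ids)
    store[new_id] = body
    return new_id               # a real server would return 201 Created here

def put_user(user_id, body):
    """PUT /users/<id>: the client names the resource, so repeats are idempotent."""
    store[user_id] = body

post_user({"name": "ada"})
post_user({"name": "ada"})      # second POST: a duplicate resource now exists
put_user(99, {"name": "bob"})
put_user(99, {"name": "bob"})   # second PUT: same final state, no duplicate
```
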
39
Q

Explain what TCP and UDP are, their differences, and their use cases

A

TCP is a connection-oriented protocol, whereas UDP is a connectionless protocol. A key difference between TCP and UDP is speed, as TCP is comparatively slower than UDP. Overall, UDP is a much faster, simpler, and more efficient protocol; however, retransmission of lost data packets is only possible with TCP.

  • When to use TCP or UDP
    • TCP: Use TCP when you need a connection and a guarantee that your packets will actually arrive. That includes web pages and web API calls.
    • UDP: Use UDP when you care about having the most recent data fast, such as video livestreams or HFT trading systems. UDP is datagram based, not connection based.
  • Advantages and Disadvantages of TCP
    • Advantages of TCP
      • Guarantees retransmission of lost packets, deduplication, in-order delivery, and control of the rate at which packets are sent.
      • Can do read() and write() on the TCP file descriptor
    • Disadvantages of TCP
      • More resource usage and overhead
  • Advantages and Disadvantages of UDP
    • Advantages of UDP
      • Lightweight
      • Gets the most up-to-date information, fast. No 3-way handshake.
      • UDP is stateless, no setup to be done.
    • Disadvantages of UDP
      • No guarantee
      • Duplicate, missing, and out of order packets
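
The connection vs. datagram distinction shows up directly in the sockets API. A minimal local sketch (loopback addresses, throwaway ports chosen by the OS):

```python
import socket
import threading

# TCP: connection-oriented. accept()/create_connection() perform the handshake,
# and the kernel guarantees ordered, reliable delivery on the stream.
def tcp_echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # echo one message back

tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))          # port 0 -> OS picks a free port
tcp_srv.listen(1)
threading.Thread(target=tcp_echo_once, args=(tcp_srv,), daemon=True).start()

with socket.create_connection(tcp_srv.getsockname()) as c:  # 3-way handshake
    c.sendall(b"hello tcp")
    tcp_reply = c.recv(1024)

# UDP: connectionless datagrams. No handshake, no delivery guarantee; each
# sendto() is an independent, fire-and-forget packet.
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))
udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"hello udp", udp_srv.getsockname())
udp_payload, _ = udp_srv.recvfrom(1024)
```

On loopback the UDP datagram reliably arrives, but over a real network nothing would retransmit it if it were dropped, which is exactly the trade-off above.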
40
Q

How would you describe load and load estimates in the context of a system? Talk about 3 important metrics and describe them.

A

Oftentimes in an interview we will need to estimate the performance of a system, however there are many ways to measure this. We can look at average performance, but this does not describe potential variation in calls to our service.

Three important metrics:

  1. Throughput - The number of records processed per second (Good for batch jobs)
  2. Response time - The time between the client sending a request and receiving a response. More important when talking about online systems
  3. Latency - The duration that a request is waiting to be handled

Another important concept is tail latency, which describes the latency at a certain percentile of requests. For example, the latency metric of 1 second for p95 says that 95% of requests have a lower time than 1 second, the other 5% do not.
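
A tiny worked example of why averages hide tail latency, using a simple nearest-rank percentile (one of several common percentile definitions):

```python
# Tail latency sketch: the p95 is the value below which 95% of request
# latencies fall. The average hides exactly the stragglers p95/p99 expose.
def percentile(latencies_ms, p):
    """Nearest-rank percentile: smallest sample >= p% of all samples."""
    ordered = sorted(latencies_ms)
    rank = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[rank]

latencies = [20] * 95 + [900] * 5   # 95 fast requests, 5 slow stragglers
avg = sum(latencies) / len(latencies)   # 64 ms: looks fine, misleads
p95 = percentile(latencies, 95)         # 20 ms: 95% of users are fast
p99 = percentile(latencies, 99)         # 900 ms: the tail users see this
```
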

41
Q

Describe a SQL (relational database)

A

Consists of tables holding many rows of structured data
Rows of one table can have relation to rows of another table if both rows share a common key
Has a built-in query optimizer that chooses the quickest estimated implementation for a SQL statement (declarative language)

42
Q

What is an index? What database uses it and what are its benefits?
What are the tradeoffs (pros and cons) of indexes?

A

Imagine a basic database where you want to fetch a record when making a read. If all of the rows are just stored on a hard drive, every single time a read call is made, you would need to search for the row on the disk, resulting in a very slow O(n) time complexity. Instead, this process can be sped up by creating an index, which allows a database to quickly search for rows based on certain values of the tuple that defines a record.

While indexes sound great, they also have tradeoffs:

Pros: An index speeds up reads if it is frequently used (the application often queries for values based on the column the index corresponds to)
Cons: An index slows down writes, because on every write additional work must be done behind the scenes to keep the index up to date

43
Q

What is a database transaction log in SQL?

A

Note that all writes to databases (assuming no indexes) can be done by just appending to a log, as this is the quickest way to write to disk (sequential writes). It also makes concurrency and crash recovery much easier to deal with, as there is no risk of a value being left partially overwritten.

Every SQL Server database has a transaction log that records all transactions and the database modifications made by each transaction.

The transaction log is a critical component of the database. If there is a system failure, you will need that log to bring your database back to a consistent state.

For information about the transaction log architecture and internals, see the SQL Server Transaction Log Architecture and Management Guide.

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/logs/the-transaction-log-sql-server?view=sql-server-ver15
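
The append-then-replay idea behind a transaction log can be sketched in a few lines (a toy model, not how SQL Server actually encodes its log records):

```python
# Toy write-ahead-log sketch: writes are appended sequentially, and recovery
# replays the log in order to rebuild a consistent state after a "crash".
import json
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "txn.log")

def append_write(key, value):
    with open(log_path, "a") as f:        # sequential append: fast on disk
        f.write(json.dumps({"key": key, "value": value}) + "\n")

def recover():
    """Rebuild state by replaying every log record in log order."""
    state = {}
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            state[rec["key"]] = rec["value"]   # later records supersede earlier
    return state

append_write("balance:alice", 100)
append_write("balance:alice", 70)   # a later transaction overwrites the value
append_write("balance:bob", 30)
recovered = recover()               # what crash recovery would reconstruct
```
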

44
Q

Describe what is Replication? Describe 3 purposes of replication.
Describe the main types of replication.

A

Replication is the process of storing multiple copies of the same data on multiple different computers.

It serves three main purposes.

Firstly, the redundancy of the data means that a service can still serve reads and writes if one of the database nodes crashes.
Secondly, replication can actually speed up the process of reading or writing if the operation is performed by a database node that is geographically closer to the client.
Finally, replicating data to many databases allows the reduction of load on each database. However, there are many different ways of implementing replication, each with their own benefits and drawbacks, and we will read about them below.

Types

Single leader replication
Replication log implementation
Multi leader replication
Leaderless replication
Sloppy Quorums
45
Q

What is a SQL Join?

Describe Inner and Outer Joins.

A

A JOIN clause is used to combine rows from two or more tables, based on a related column between them.

An INNER JOIN returns only the rows where the join condition matches in both tables. An OUTER JOIN (LEFT, RIGHT, or FULL) additionally keeps rows from one or both sides that have no match, filling the missing columns with NULL.
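
A minimal sketch of the difference using Python's built-in sqlite3 (the tables here are toy stand-ins):

```python
import sqlite3

# INNER JOIN keeps only matching rows; LEFT (outer) JOIN also keeps unmatched
# rows from the left table, padding the right side's columns with NULL.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE scores (user_id INTEGER, score INTEGER);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO scores VALUES (1, 42);          -- bob has no score row
""")

inner = db.execute("""
    SELECT u.username, s.score
    FROM users u JOIN scores s ON s.user_id = u.id
""").fetchall()                                  # only ada

outer = db.execute("""
    SELECT u.username, s.score
    FROM users u LEFT JOIN scores s ON s.user_id = u.id
""").fetchall()                                  # ada and (bob, NULL)
```
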

46
Q

Using Conditions to get all records in a SQL database for all rows where user_id and game_id match the givens

A

SELECT user_id, game_id
FROM scores
WHERE (user_id = '0877') AND (game_id = '9322')

3 Tables - Users, Scores, and Games

SCORE

  • id (bigint auto_increment)
  • user_id
  • game_id
  • creation_date
  • score (bigint)

USER

  • id (bigint auto_increment)
  • username

GAME

  • id (bigint auto_increment)
  • title: varchar(255)
  • description: text

GET all the score history for a given user_id and given game_id
GET /leaderboard/:game_id/:user_id
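
The same query can be sketched in Python with sqlite3 and parameter placeholders (which also avoid SQL injection); the sample rows are made up:

```python
import sqlite3

# The flashcard's SCORE table, with a parameterized query implementing
# GET /leaderboard/:game_id/:user_id (fetch all score history for a pair).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE scores (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id TEXT, game_id TEXT, creation_date TEXT, score INTEGER
    );
    INSERT INTO scores (user_id, game_id, creation_date, score) VALUES
        ('0877', '9322', '2024-01-01', 100),
        ('0877', '9322', '2024-01-02', 250),
        ('0877', '1111', '2024-01-03', 999);   -- different game, excluded
""")

def score_history(user_id, game_id):
    # ? placeholders let the driver bind values safely instead of string
    # concatenation, which is vulnerable to injection.
    return db.execute(
        "SELECT creation_date, score FROM scores "
        "WHERE user_id = ? AND game_id = ? ORDER BY creation_date",
        (user_id, game_id)).fetchall()

history = score_history("0877", "9322")
```
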

47
Q

What is single leader replication?

A

One of the nodes is designated to be the leader -> all writes are sent to this leader and they are written to the leader’s local storage.

All of the other replicas are known as followers

  • Data is sent to the followers from the leader via a replication log
  • Each follower takes the log and updates the local data in the same order that the log specifies

Clients can perform reads from either the leader or the follower

Can be performed either synchronously or asynchronously

  • Synchronous replication is when the client only receives a message that a given write was successful once the changes have been propagated to all of the replicas (strong consistency)
  • Asynchronous replication is when the client receives a message saying that their write was successful the moment it reaches the leader database, all changes to the replicas are propagated in the background (eventual consistency)
  • While synchronous replication ensures that all followers have up-to-date data, it is impractical because a crash on one of the followers, or even just a follower operating slowly, blocks the whole system
  • Typically, "synchronous" replication in practice means that only one follower is synchronous while the rest are asynchronous (semi-synchronous); if the synchronous follower fails, another follower is made synchronous
  • In a fully asynchronous system, writes to the leader that have yet to be propagated are lost on a crash

Setting up new followers is a relatively easy process, and does not affect write throughput of the system

  • Take a consistent snapshot of the leader database, and copy this to the follower node
  • Then connect to the leader, and use the replication log to catch up from the position of the snapshot in the log
  • Once caught up, start acting as a normal follower

It is very easy to recover from a follower crashing

  • Just check the log of changes to see what point the follower was up to when it crashed, then connect to the leader and apply all of the changes since then
  • After catching up, continue to act as a normal follower

If the leader fails in this configuration, the system must perform a failover
- First the system must determine that the leader has actually failed. This is impossible to do with complete certainty, as there are a variety of things that can go wrong (crashes, power outages, network issues), so most systems have databases frequently communicate with one another and use a timeout to decide if a node is dead
- Use some sort of consensus mechanism (discussed in more detail later) to determine a follower node that will become the new leader; typically a good choice is the most up-to-date follower
- Configure clients to send their write requests to the new leader, and make sure that if the old leader comes back, it now realizes it is a follower
- Failover can be a dangerous situation because you may need to discard some writes from the old leader, which can lead to inconsistencies if other systems (not the database) had already propagated those changes internally (such as a cache)
- In some scenarios two nodes may end up thinking they are the leader, which could lead to corrupted data (split brain; this can be dealt with)
- If the timeout for detecting failure is too small, the system may perform unnecessary failovers and introduce extra load
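
The leader/follower mechanics above can be sketched as a toy model (in-memory, single process; real systems ship the log over the network):

```python
# Toy single-leader replication: writes go to the leader, which appends them
# to a replication log; followers apply the log in order (async here).
class Node:
    def __init__(self):
        self.data = {}
        self.applied = 0            # position reached in the leader's log

class Leader(Node):
    def __init__(self):
        super().__init__()
        self.log = []               # ordered replication log

    def write(self, key, value):
        self.log.append((key, value))
        self.data[key] = value      # ack here + lazy followers = async (eventual)

def catch_up(follower, leader):
    """Apply every log entry the follower has not yet seen, in log order."""
    for key, value in leader.log[follower.applied:]:
        follower.data[key] = value
        follower.applied += 1

leader, follower = Leader(), Node()
leader.write("x", 1)
leader.write("y", 2)
stale_read = follower.data.get("x")   # replication lag: follower is behind
catch_up(follower, leader)            # periodic replication / new-follower setup
```

The same `catch_up` loop models both follower recovery and new-follower setup: start from a known log position and replay forward.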

48
Q

What is Replication log and how is it implemented?

A

A log/list of transactions applied to a database. Easiest is to just copy over the SQL statements. Another is to use a write ahead log the same way that databases do for indexing.

The most simple way is to just copy over the SQL statements used by the leader

  • However, this is a problem because some SQL commands are nondeterministic (e.g. NOW() or RAND())
  • While these values could be replaced with deterministic values by the original database, better solutions have been developed

Another option is to use a write ahead log in the same way that databases do for indexing

  • Append only sequence of bytes containing all writes to the database
  • This log already exists on disk, so just send it over network to followers
  • The disadvantage is that a write-ahead log records which bytes were changed, so a difference in storage engine version across replicas may render the log incompatible if data is stored in different locations; this makes rolling upgrades impossible and requires downtime

A logical log describes all the changes made to a given row (usually by primary key)
- Decoupling the storage index and the log allows for rolling upgrades and backwards compatibility

49
Q

Describe Problems with replication lag and eventual consistency

A

Here are some problems with replication lag as the readers will eventually be consistent:

  1. Reading your own writes
  • After writing data and refreshing, you may still see the old data since the changes you have made have not yet been propagated on the replica you are reading from.
  • Requires read-after-write consistency, which says that after uploading a page you will see the writes that you have just made
  • Can either always query the leader for areas of the application that are editable by the user, or keep track of the last write on the client, and for some amount of period of time afterwards only read from the leader (or a replica that is up to date as of that timestamp)
  2. Monotonic reads (problem)
  • Reads occurring on several different replicas actually can make it seem as if you are moving back in time
  • Guarantee monotonic reads, one way of doing so is to make sure that each user always reads from the same replica, can be done based off of a hash of the user ID (this can break down if said replica fails)
  3. Consistent prefix reads
  • When two things in the database have a causal relationship, but the one that precedes the other has a greater replication lag so to another user it seems like the latter write comes before the preceding one (happens when they are on different partitions, otherwise log would maintain order)
  • Could perhaps make sure causally related writes are on the same partition, but not always possible, so may have to explicitly keep track of causal dependencies
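
The "same user, same replica" strategy for monotonic reads can be sketched with a stable hash (replica names are hypothetical; note it uses a deterministic hash, not Python's per-process-randomized `hash()`):

```python
import hashlib

# Monotonic reads via routing: hash the user id to pick one replica, so a
# given user's successive reads never jump between replicas of different lag.
REPLICAS = ["replica-0", "replica-1", "replica-2"]

def replica_for(user_id):
    # sha256 is stable across processes and restarts, so routing is consistent.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return REPLICAS[int(digest, 16) % len(REPLICAS)]

first = replica_for("user-42")      # every later call returns the same replica
```

As the card notes, this scheme breaks down if the chosen replica fails, at which point the user must be rerouted (and may briefly lose the monotonicity guarantee).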
50
Q

What is Leaderless replication?

A

High-availability writes in a distributed database with leaderless replication (both Dynamo and Cassandra employ leaderless replication) requires a heuristic for conflict resolution between concurrent writes. This is essential because every replica of data is considered equal and concurrent writes on the same record at two different replicas are considered perfectly valid.

Example: Dynamo, Cassandra

Pros and Cons

Any replica can accept writes from any of the clients

No such thing as failover, simply set a threshold of the number of nodes that need to accept the write for the write to be successful, same with reads

  • If an unavailable node comes back online, a client may read from many nodes simultaneously, realize the previously offline node has an outdated value and update it accordingly (use version numbers to check which values are out of date), this process is known as read repair
  • Another way of ensuring up to date data is anti entropy, which is a background process that looks for data differences in replicas and copies the correct data over, however the writes are not copied in any particular order

If we can only write to a fraction of nodes at a time and read from a fraction, we can use a quorum in order to ensure that we always read from at least one node with the most up-to-date copy of the data

  • This occurs when the number of nodes successfully written to plus the number of nodes read from are greater than the number of total replicas
  • Typically reads and writes are sent to all replicas in parallel

There are still cases where quorum reads and writes are not perfect

  • Even if writes do not succeed on the specified number of nodes they will not be rolled back on the nodes where they have been written
  • In the event that sloppy quorums are used, the writes may end up on different nodes than reads, such that there is no overlap between them
  • If a node with a new value fails and its data is restored using a node with an old value, the new value will be lost

Works well with multi-datacenter operation
- Send writes to all nodes, but have the acknowledgements from the client’s local datacenter be sufficient to fulfill a quorum write in order to reduce the high cross datacenter latency of writes
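
The quorum condition and read repair are both small enough to sketch directly (version numbers stand in for whatever versioning scheme the store uses):

```python
# Quorum rule: with n replicas, w write acks, and r read responses, any read
# quorum overlaps any write quorum iff w + r > n.
def quorum_overlaps(n, w, r):
    return w + r > n

# Read repair: among the r responses, keep the highest-versioned value and
# note which replicas returned stale data so they can be updated in place.
def read_repair(responses):
    """responses: list of (version, value) tuples, one per replica read."""
    version, value = max(responses)
    stale = [i for i, (v, _) in enumerate(responses) if v < version]
    return value, stale

assert quorum_overlaps(n=3, w=2, r=2)    # a classic Dynamo-style setting
value, stale = read_repair([(2, "new"), (1, "old"), (2, "new")])
```
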

51
Q

What are concurrent writes? How to detect them and which replication strategies do they come up in?

A

Concurrent writes = multiple writes on different nodes which are not communicating
Leaderless and multi-leader replication both have this issue (more than one write node)

A problem that occurs in both multileader and leaderless implementations of replications is being able to detect many concurrent writes. Concurrent writes occur when two writes to the database from different clients do not know about each other. While it is most important that the database replicas all converge to a consistent state, there are certain ways of dealing with concurrency that improve durability by not arbitrarily picking one write to keep and throwing out the others.

How to detect them?

One common approach is version vectors: each replica keeps a version number per key, incremented on every write, and the client sends back the versions it has seen. Two writes are concurrent if neither write's version vector is greater than or equal to the other's, i.e. neither one "happened before" the other. In that case the database can keep both values as siblings and have the application (or a rule such as last-write-wins) merge them.
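
A minimal sketch of concurrency detection with version vectors (replica names are illustrative):

```python
# Version vectors: write B causally follows write A iff B's vector dominates
# A's in every component; if neither dominates, the writes are concurrent.
def dominates(a, b):
    """True if vector a >= b componentwise (a has seen everything b has)."""
    keys = set(a) | set(b)
    return all(a.get(k, 0) >= b.get(k, 0) for k in keys)

def concurrent(a, b):
    return not dominates(a, b) and not dominates(b, a)

v_old = {"replica1": 1}
v_new = {"replica1": 2}       # causally after v_old: dominates it
v_other = {"replica2": 1}     # written without seeing v_old: concurrent
```
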

52
Q

What is Partitioning?

4 approaches to deal with Partition the data

A

When dealing with large systems, a common issue that may occur is that a single database table actually becomes too big to store on a single machine. As a result, the table must be partitioned, or split, onto multiple different nodes. How exactly this splitting is done is an implementation detail, but being able to partition a database greatly increases the scalability of a system by allowing a given database table to get arbitrarily big, and perhaps even store more relevant data in nodes closer to the users accessing it. This being said, partitioning, also known as sharding, comes with many complications.

Dealing with partitioning data

  1. Want to split up keys so that each partition has relatively even load on it (both in data and queries), otherwise result is hot spot partitions
  2. Can partition keys by range chunks, not necessarily even ranges because some ranges will have more data and activity than others
    - These ranges can be chosen manually or automatically by the database
    - Keep keys in sorted order within the partition
    - In certain scenarios, such as partitioning by timestamp ranges, this can easily lead to hotspots if most of the queries want recent data
  3. Can partition by hash of key and split by a range of hashes, good hash functions will uniformly distribute the keys
    - Loses the ability to do fast range queries, have to check all partitions
    - Helps reduce hotspots, but if all of the activity is on one key, then hot spots will still occur. This can be mitigated for certain keys by appending a random number to the key, thus spreading its writes across partitions, but it makes reads slow because the key's data must be gathered from all of the partitions
  4. Certain databases allow partitioning by a hash of one key (for example a user id), but then allow you to do efficient range queries on other columns of the data (such as a timestamp)
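
Approaches 2 and 3 can be contrasted in a few lines (the key-range boundaries and partition count here are arbitrary examples):

```python
import hashlib

# Range partitioning vs. hash partitioning of a key space.
PARTITIONS = 4

def range_partition(key, boundaries=("g", "n", "t")):
    """Assign by key range: keys stay sorted, so range scans hit few partitions."""
    for i, upper in enumerate(boundaries):
        if key < upper:
            return i
    return len(boundaries)          # everything past the last boundary

def hash_partition(key):
    """Assign by hash of key: spreads load evenly, but loses range scans."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % PARTITIONS

p_apple = range_partition("apple")  # 0: "apple" sorts before "g"
p_zebra = range_partition("zebra")  # 3: past every boundary
```
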
53
Q

What are transactions in database? What is ACID?

A

Transactions are an abstraction used by databases to reduce all writes to either a successful one that can be committed, or an erroneous one that can be aborted. While transactions are somewhat hard to implement in distributed systems (we will discuss later), in a single database they can be rather useful. They hope to provide the safety guarantees outlined by ACID.


The meaning of ACID:
Atomicity - If a client makes several writes, but a fault occurs after only some of the writes are completed, the existing completed writes will be rolled back

Consistency - The application can rely on the properties of the database to ensure that invariants about the data will hold (in the face of faults)

Isolation - Concurrently executing transactions are isolated from one another (serializability), each transaction can pretend it is the only one running on the database. Most databases do not implement this due to performance penalties, instead use weak isolation levels

Durability - Once a transaction is completed, the data will never be forgotten, even in the face of faults

In single object writes, almost all database engines provide guarantees about atomicity and isolation so that the data for an individual key does not become corrupted or mixed with the previous value - atomicity can be implemented using a log for crash recovery, and isolation can be done using a lock on each object.
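
Atomicity is easy to demonstrate with SQLite: if any statement in a transaction fails, the whole transaction rolls back and no partial write survives (the account table is a toy example).

```python
import sqlite3

# Atomicity sketch: the second statement violates the primary key, so the
# connection context manager rolls back BOTH writes, not just the failed one.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
db.commit()

try:
    with db:   # commits on success, rolls back on exception
        db.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        db.execute(
            "INSERT INTO accounts VALUES ('alice', 999)")  # PK violation
except sqlite3.IntegrityError:
    pass       # the UPDATE above was rolled back along with the INSERT

balance = db.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
```
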

54
Q

What is two phase commit? What is it used for, the problem it solves?

A

Although we have now spoken about some problems that can be reduced to consensus, it now seems best to actually discuss some ways that consensus can be achieved. Firstly, we can talk about two phase commit, which is somewhat inefficient, but solves the problem of atomic commit (getting all replicas to agree on whether a transaction should be committed or aborted).

Two phase commit:

  • Algorithm used to solve the atomic commit problem
  • A coordinator node (the application) sends writes to each node
  • The coordinator then sends each node a prepare request, to which each node responds saying whether it will be able to commit
  • If all the nodes can commit, the coordinator tells them to do so; otherwise it tells them all to abort
  • The coordinator keeps an internal log of its decision for each transaction in the event that it crashes
  • If the request to commit or abort does not reach all the participants, the coordinator must keep retrying on all nodes until they get the message; it cannot accept a timeout
  • There are two points of no return:
    • Participants (database replicas) that say yes in the prepare stage must eventually commit the write and are not allowed to later abort it
    • Once the coordinator decides to commit or abort, it must get this decision through to all of the participant nodes
  • The coordinator is a single point of failure: if it crashes, none of the nodes can abort or commit after they have done their preparations (so it should be replicated)
    • To avoid this we would need a perfect failure detector to perform some sort of failover, which is impossible due to unbounded network delay
    • When this happens the replicas often hold locks on many rows, which may block a significant number of transactions until the coordinator node is back
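
The happy path of the protocol (prepare votes, then a unanimous decision) can be sketched as a toy simulation; real implementations add the coordinator log, retries, and locks described above:

```python
# Toy two-phase commit: the coordinator commits only if every participant
# votes "yes" in the prepare phase; a single "no" aborts the transaction.
class Participant:
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        """Phase 1: a yes vote is a point of no return for the participant."""
        self.state = "prepared" if self.can_commit else "abort-voted"
        return self.can_commit

    def finish(self, decision):
        """Phase 2: apply the coordinator's commit/abort decision."""
        self.state = decision

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]      # phase 1: prepare requests
    decision = "commit" if all(votes) else "abort"   # coordinator logs decision
    for p in participants:                           # phase 2: must reach everyone
        p.finish(decision)
    return decision

ok = two_phase_commit([Participant(), Participant()])
bad = two_phase_commit([Participant(), Participant(can_commit=False)])
```
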

Database internal distributed transactions (transactions using only the same database technology) can actually be pretty quick and optimized, however when using multiple different types of data systems (like databases, message brokers, email services), you need a transaction API (such as XA) which is often quite slow.

Unlike two phase commit, good consensus algorithms reach agreement by using a majority (quorum) of nodes, in order to improve availability. After new leaders are elected in a subsequent epoch (monotonically increasing in order to prevent split brain), consensus algorithms define a recovery process which nodes can use to get into a consistent state.

Coordination services such as ZooKeeper are used internally by many other popular libraries; they are replicated in-memory key-value stores that provide total order broadcast to your database replicas.