All Flashcards

1
Q

What is a Tier?

A

A tier is a physical separation of components in an application or a service. Usually, the physical separation matches one-to-one with the logical separation (or layer).

2
Q

Examples of components (5)

A

Database, Backend Application Server, User Interface, Messaging, Caching.

3
Q

What is a Single-Tier Application?

  • Can you give examples?
  • What are the advantages and disadvantages?
A

A single-tier application is an application where the user interface, backend business logic, and the database all reside on the same machine.

  • MS Office, PC games, or image-editing software like GIMP.
  • The main upside of single-tier applications is that they have no network latency, because every component is located on the same machine.
  • Performance of the application depends heavily on the configuration of the user's machine. Once the software is shipped, no code or feature changes can be made. The code in single-tier applications is also vulnerable to being tweaked and reverse engineered.
4
Q

What is a Two-Tier Application?

A

A two-tier application involves a client and a server. The client contains the user interface and the business logic on one machine, while the backend server hosts the database on a different machine.
It's useful when we need to reduce network latency, and also when we need to control the data.

5
Q

What is a Three-Tier Application?

A

In a three-tier application, the user interface, application logic, and the database all lie on different machines and, thus, have different tiers. They are physically separated.

6
Q

What are N-Tier Applications?

A

An n-tier application is an application that has more than three components involved, e.g.:

  • Cache
  • Message queues for asynchronous behavior
  • Load balancers
  • Search servers
  • Web services
7
Q

What is the Single Responsibility Principle?

A

The single responsibility principle means giving a component only one responsibility and letting it execute that responsibility perfectly, be it saving data, running the application logic, or ensuring the delivery of messages throughout the system.
This approach gives us a lot of flexibility and makes management easier, for instance when upgrading a database server.
We can also have dedicated teams and code repositories for every component, which keeps things cleaner.
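
As a minimal, hypothetical sketch of the principle (the class names below are illustrative, not from the card): one class is responsible only for persistence, another only for business logic.

```typescript
// Hypothetical sketch: each component owns exactly one responsibility.
interface User {
  id: string;
  email: string;
}

// Responsible only for saving and loading data.
class UserRepository {
  private store = new Map<string, User>();
  save(user: User): void {
    this.store.set(user.id, user);
  }
  find(id: string): User | undefined {
    return this.store.get(id);
  }
}

// Responsible only for the business logic; persistence is delegated.
class UserRegistrationService {
  constructor(private repo: UserRepository) {}
  register(id: string, email: string): User {
    if (!email.includes("@")) throw new Error("invalid email");
    const user: User = { id, email };
    this.repo.save(user);
    return user;
  }
}
```

Because each class has a single responsibility, the storage implementation can be upgraded or swapped without touching the registration logic.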

8
Q

What is Separation of Concerns?

A

Separation of concerns is closely related: each component should be concerned only with its own work, and should not worry about the rest of the system.

9
Q

What is the difference between Tier and Layer?

A

The difference between layers and tiers is that layers represent the conceptual organization of the code and its components, whereas tiers represent the physical separation of components.

10
Q
What is Client-Server Architecture?
What is a Client?
What technologies are used on the Client?
Thin Client vs Thick Client.
A

The architecture works on a request-response model. The client sends the request to the server for information and the server responds with it.

  • A client is the window to our application.
  • It depends on the type of client, but web-based clients are usually written with ReactJS, AngularJS, VueJS, jQuery, etc. All of these libraries use JavaScript.
  • A thin client holds the UI only, with no business logic. A thick client holds all or part of the business logic.
11
Q

What is a Web Server?
Name other kinds of servers (4).
Name web servers used to host Java applications.

A
  • The primary task of a web server is to receive requests from the client and provide the response after executing the business logic, based on the request parameters received from the client.
  • Servers running web applications are commonly known as application servers.
  • Proxy Server, Mail Server, File Server, Virtual Server.
  • Apache HTTP Server (static content), Apache Tomcat (dynamic content), or Jetty.
12
Q

What is a Request-response model?

A

The client sends the request and the server responds with the data :)

13
Q

What is HTTP?

A

HTTP is a request-response protocol that defines how information is transmitted across the web.

It’s a stateless protocol, and every process over HTTP is executed independently and has no knowledge of previous processes.

14
Q

What is REST?

REST endpoint

A

REST stands for Representational State Transfer. It’s a software architectural style for implementing web services. Web services implemented using the REST architectural style are known as the RESTful Web services.

A REST API is an API implementation that adheres to the REST architectural constraints.
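
As a minimal sketch of a client calling a RESTful endpoint (the URL and response shape below are hypothetical, for illustration only):

```typescript
// Hypothetical REST call: a resource is addressed by a URI and
// manipulated through standard HTTP methods (GET here).
interface Book {
  id: number;
  title: string;
}

async function getBook(id: number): Promise<Book> {
  const response = await fetch(`https://api.example.com/books/${id}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return (await response.json()) as Book;
}
```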

15
Q

Describe HTTP PULL vs PUSH?
What technologies are used for PULL (2)?
What technologies are used for PUSH (5)?

A

HTTP Pull = the client pulls the data. This is the default behavior.
Technologies: regular HTTP requests, AJAX calls.

HTTP Push = the client sends the request for particular information to the server just once. After the first request, the server keeps pushing new updates to the client whenever they are available.
Also known as Callback.
Technologies: AJAX Long polling, Web Sockets, HTML5 Event Source, Message Queues, Streaming over HTTP.
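
As a minimal sketch of one of these push techniques, the HTML5 Event Source API (Server-Sent Events) lets the client subscribe once and then receive pushes over a long-lived connection (the endpoint URL is hypothetical):

```typescript
// Hypothetical sketch of HTTP push via the HTML5 EventSource API
// (Server-Sent Events).
const source = new EventSource("https://example.com/updates");

// The client subscribes once; the server keeps pushing new events
// over the long-lived connection whenever data is available.
source.onmessage = (event: MessageEvent) => {
  console.log("update from server:", event.data);
};

source.onerror = () => {
  // The browser retries automatically; close() stops further pushes.
  source.close();
};
```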

16
Q

What browser object does AJAX use to make async calls?

A

XMLHttpRequest
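
A minimal sketch of an asynchronous call made with this object (the URL is hypothetical):

```typescript
const xhr = new XMLHttpRequest();
// The third argument makes the request asynchronous.
xhr.open("GET", "https://example.com/api/data", true);
xhr.onload = () => {
  if (xhr.status === 200) {
    console.log("response:", xhr.responseText);
  }
};
xhr.onerror = () => console.error("network error");
xhr.send();
```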

17
Q

What is TTL?

A

In regular client-server communication, which is HTTP PULL, there is a Time to Live (TTL) for every request. It could be 30 to 60 seconds, varying from browser to browser.
If the client doesn't receive a response from the server within the TTL, the browser kills the connection, and the client has to re-send the request, hoping it receives the data from the server before the TTL expires again.
Open connections consume resources, and there is a limit to the number of open connections a server can handle at once. If connections don't close and new ones keep being opened, over time the server will run out of memory. Hence, the TTL is used in client-server communication.

18
Q

How does a PUSH request work?

+TTL

A

In regular client-server communication, which is HTTP PULL, there is a Time to Live (TTL) for every request. It could be 30 to 60 seconds, varying from browser to browser.
If the client doesn't receive a response from the server within the TTL, the browser kills the connection, and the client has to re-send the request, hoping it receives the data from the server before the TTL expires again.
Open connections consume resources, and there is a limit to the number of open connections a server can handle at once. If connections don't close and new ones keep being opened, over time the server will run out of memory. Hence, the TTL is used in client-server communication.

PUSH requests work using a persistent connection between the client and the server that remains open for further requests and responses, as opposed to being closed after a single communication.
Now you might be wondering how a persistent connection is possible if the browser kills open connections to the server every X seconds.

Using Heartbeat Interceptors!
These are just blank request responses between the client and the server to prevent the browser from killing the connection.

Persistent connections consume a lot of resources compared to the HTTP PULL behavior.

Long-lived open connections can be implemented using several techniques, such as AJAX Long Polling, Web Sockets, Server-Sent Events, etc.
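
As a minimal sketch of one of those techniques, AJAX Long Polling from the client side might look like this (the endpoint is hypothetical):

```typescript
// The server holds each request open until it has an update (or it
// times out); the client then immediately re-issues the request, so
// the connection is effectively long-lived.
async function longPoll(url: string): Promise<void> {
  while (true) {
    try {
      const response = await fetch(url); // may be held open by the server
      if (response.ok) {
        console.log("update:", await response.text());
      }
    } catch {
      // Connection dropped (e.g., a timeout): back off briefly, then retry.
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
}

void longPoll("https://example.com/poll");
```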

19
Q

How do the various HTTP Push technologies work?

A

WebSocket is a protocol providing full-duplex communication channels over a single TCP connection.
A Web Socket connection is preferred when we need a persistent, bi-directional, low-latency data flow from the client to the server and back.

Long polling: In this technique instead of immediately returning the response, the server holds the response until it finds an update to be sent to the client.

HTML5 Event-Source API and Server-Sent Events: the client subscribes to the server once, and the server then keeps pushing new events to it over a single long-lived HTTP connection (one-way, server to client).

Streaming over HTTP: data is transmitted in chunks over a single long-lived HTTP connection; suited to streaming large objects.

Every tech has a specific use case, and AJAX is used to dynamically update the web page by polling the server at regular intervals.

Long polling has a connection open time slightly longer than the polling mechanism.

Web Sockets have bi-directional data flow, whereas server-sent events facilitate data flow from the server to the client.

Streaming over HTTP facilitates the streaming of large objects like multi-media files.
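
A minimal sketch of a full-duplex Web Socket connection from the client (the URL and messages are hypothetical):

```typescript
const socket = new WebSocket("wss://example.com/chat");

socket.onopen = () => {
  socket.send("hello"); // client -> server over the same TCP connection
};

socket.onmessage = (event: MessageEvent) => {
  // The server can push data at any time over the open connection.
  console.log("server -> client:", event.data);
};

socket.onclose = () => console.log("connection closed");
```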

20
Q

What is the advantage of Client-Side vs. Server-Side Rendering?

A

Client-Side rendering saves server processing power and is generally more dynamic, as it uses AJAX calls instead of full page reloads.

Server-Side rendering saves the browser processing power. Every time the browser receives an HTTP response, it has to run it through several components to render the page:

  • Browser engine
  • Rendering engine
  • JavaScript interpreter
  • Networking and the UI backend
  • Data storage etc.
21
Q

What is Scalability?

A

Scalability means the ability of the application to handle and withstand an increased workload without sacrificing latency.

For instance, if your app takes x seconds to respond to a user request, it should take the same x seconds to respond to each of a million concurrent user requests.

22
Q

What is Latency?
What type of latencies do you know? (2)
How do we measure it?
Why is low latency important?

A

Latency is the amount of time a system takes to respond to a user request.

  • Network latency is the amount of time that the network takes to send a data packet from point A to point B. The network should be efficient enough to handle the increased traffic load on the website. To cut down the network latency, businesses use CDN and try to deploy their servers across the globe as close to the end-user as possible.
  • Application latency is the amount of time the application takes to process a user request. There is more than one way to cut down application latency: the first step is to run stress and load tests on the application and scan for the bottlenecks that slow the system down as a whole.

Check the website for the answer to why low latency is important, but you should be able to improvise an answer, as this is more of a communication-skills question than a knowledge question.
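
As for measuring latency, a minimal sketch is to time the round trip of a request from the client (the URL is hypothetical):

```typescript
// Measures round-trip latency of a single request in milliseconds.
async function measureLatency(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url);
  return performance.now() - start;
}

void measureLatency("https://example.com/api/ping").then((ms) =>
  console.log(`round trip took ${ms.toFixed(1)} ms`)
);
```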

23
Q

What is Vertical vs Horizontal scaling?
What are the pros/cons?
What is cloud elasticity?

A

Vertical scaling means adding more power to your existing server. Vertical scaling is also called scaling up.
Pros: easy; no code changes; low administrative, monitoring, and management effort.
Cons: high availability risk, since everything runs on a single machine.

Horizontal scaling, also known as scaling out, means adding more hardware to the existing hardware resource pool.

Cloud elasticity is the ability to scale up and down dynamically. This feature is only available in cloud computing.
Pros: high availability; low latency.
Cons: more management headaches.

24
Q

What code should we avoid to allow horizontal scaling?

A
  • There should be no state in the code.
  • No static instances in the class.
  • Rather, use persistent storage, like a key-value store, to hold the data (see the sketch after this list).
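
A minimal, hypothetical sketch of what that looks like: the handler keeps no state of its own, so any replica behind a load balancer can serve any request. The key-value store interface is illustrative; in practice it could be backed by Redis or Memcached.

```typescript
// Illustrative key-value store interface; in practice this would be
// backed by an external store such as Redis or Memcached.
interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Stateless handler: no static fields, no in-process session state.
async function handleRequest(
  store: KeyValueStore,
  sessionId: string
): Promise<string> {
  // Per-user state is read from and written to the external store.
  const visits = Number((await store.get(`visits:${sessionId}`)) ?? "0") + 1;
  await store.set(`visits:${sessionId}`, String(visits));
  return `visit number ${visits}`;
}
```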
25
Q

What are the primary bottlenecks that hurt the scalability of our application? (7)

A
  • Database
  • Application architecture
  • Not using caching in the application wisely
  • Inefficient configuration and setup of load balancers
  • Adding business logic to the database
  • Not picking the right database
  • At the code level
26
Q

Name 5 techniques to Improve and Test the Scalability of our Application

A
  • Profiling: Run an application profiler and a code profiler. See which processes are taking too long and which are eating up too many resources. Find the bottlenecks and get rid of them.
  • Caching: Cache wisely, and cache everywhere. Cache all static content. Hit the database only when it is really required. Try to serve all read requests from the cache. Use a write-through cache (see the sketch after this list).
  • CDN: Use a Content Delivery Network (CDN). A CDN further reduces the latency of the application due to the proximity of the data to the requesting user.
  • Data compression: Compressed data consumes less bandwidth; consequently, the client can download the data faster.
  • Avoid unnecessary client-server requests: Avoid unnecessary round trips between the client and the server. Try to club multiple requests into one.
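
A minimal, hypothetical sketch of a write-through cache: every write goes to the database and the cache together, so reads served from the cache stay consistent with the backing store. The backing-store interface is illustrative.

```typescript
// Illustrative backing-store interface; in practice this would be a
// database client.
interface BackingStore<V> {
  read(key: string): Promise<V | undefined>;
  write(key: string, value: V): Promise<void>;
}

class WriteThroughCache<V> {
  private cache = new Map<string, V>();
  constructor(private db: BackingStore<V>) {}

  async get(key: string): Promise<V | undefined> {
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit; // served from cache
    const value = await this.db.read(key); // cache miss: go to the DB
    if (value !== undefined) this.cache.set(key, value);
    return value;
  }

  async set(key: string, value: V): Promise<void> {
    await this.db.write(key, value); // write the database first...
    this.cache.set(key, value); // ...then keep the cache in sync
  }
}
```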
27
Q

How do we test the scalability of our application?

A

We should run load and stress tests on the system by simulating concurrent traffic with the help of tools.

28
Q

What is high availability?

How is it achieved?

A

High availability, also known as HA, is the ability of the system to stay online despite failures at the infrastructural level in real time.

To meet high availability requirements, systems are designed to be fault-tolerant and their components are made redundant.

To achieve high availability at the application level, the entire massive service is architecturally broken down into smaller, loosely coupled services called microservices.

29
Q

What is fault tolerance?

A

Fault tolerance is a system’s ability to stay up despite taking hits.

30
Q

What is fail soft?

A

A fail-soft system is a system designed to shut down any nonessential components in the event of a failure.

31
Q

What are the advantages of breaking down a big monolith into several micro services?

A
  • Easier management
  • Easier development
  • Ease of adding new features
  • Ease of maintenance
  • High availability

Every microservice takes the onus of running a different feature of the application, such as image upload, comments, or instant messaging.

32
Q

What is Redundancy?

A

Redundancy is duplicating the components or instances and keeping extras on standby to take over in case the active instances go down. It is the fail-safe, backup mechanism.

33
Q

Why are distributed systems popular?

A

Distributed systems became so popular solely because we could get rid of the single points of failure present in a monolithic architecture.

34
Q

What is Replication, and how is it different from Redundancy?

Why is geographical replication important?

A

Replication means having a number of similar nodes running the workload together. There are no standby or passive instances (as in redundancy). When a single or a few nodes go down, the remaining nodes bear the load of the service.

As a contingency for natural disasters, regional power outages, and other big-scale failures, data center workloads are spread across different data centers across the world in different geographical zones.

This avoids the single point of failure in the context of a data center. Also, the latency is reduced by quite an extent due to the proximity of data to the user.

35
Q

Describe a High Availability Cluster

A

A high availability cluster, also known as a fail-over cluster, contains a set of nodes running in conjunction with each other to ensure high availability of the service.

The nodes in the cluster are connected by a private network called the heartbeat network that continuously monitors the health and the status of each node in the cluster.

A single state across all the nodes in a cluster is achieved with the help of shared distributed memory and a distributed coordination service like ZooKeeper.