AWS Autoscaling and Load Balancer Interview Questions
What is auto-scaling and how does it work?
Auto Scaling is one of the most important features AWS provides: it lets you configure and automatically provision and spin up new instances without any manual intervention. You do this by defining the thresholds and metrics to monitor.
When one of those thresholds is crossed, a new instance of your chosen type is spun up, configured, and registered with the load balancer pool. You have now scaled horizontally without the intervention of an operator.
https://www.whizlabs.com/blog/wp-content/uploads/2019/01/AUTOSCALING-GROUP.png
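To make this concrete, here is a minimal sketch using boto3 (the AWS SDK for Python) that creates an Auto Scaling group and attaches a target tracking scaling policy. The group name, launch template, subnet IDs, and target group ARN are hypothetical placeholders, not values from the article.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Create an Auto Scaling group that spans two subnets and registers
# new instances with an existing load balancer target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-launch-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"],
)

# Target tracking policy: keep average CPU around 60%. When the metric
# crosses the target, Auto Scaling launches or terminates instances for us.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```

With a policy like this, Auto Scaling itself watches the CloudWatch metric and adds or removes instances to hold the target, which is the "without your intervention" part of the answer.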
What is Server Load Balancing?
SLB (Server Load Balancing) improves network performance and content delivery by applying a series of priorities and algorithms to respond to the specific requests made to the network. In other words, Server Load Balancing distributes clients across a large group of servers and ensures that clients are sent only to healthy servers, never to servers that have failed.
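The core idea can be sketched in a few lines of Python: rotate through a pool of servers and skip any that a (hypothetical) health check has marked as failed.

```python
import itertools

# Hypothetical backend pool; "healthy" would normally come from health checks.
servers = [
    {"host": "10.0.1.10", "healthy": True},
    {"host": "10.0.1.11", "healthy": False},  # failed server, must be skipped
    {"host": "10.0.1.12", "healthy": True},
]

_rotation = itertools.cycle(range(len(servers)))

def pick_server():
    """Round-robin over the pool, sending clients only to healthy servers."""
    for _ in range(len(servers)):
        candidate = servers[next(_rotation)]
        if candidate["healthy"]:
            return candidate["host"]
    raise RuntimeError("no healthy servers available")

print(pick_server())  # 10.0.1.10
print(pick_server())  # skips 10.0.1.11 because it is marked failed
```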
What is Global Server Load Balancing (GSLB) and does Clustering need to be turned on in order to use GSLB?
GSLB (Global Server Load Balancing) is very similar to SLB (Server Load Balancing), but GSLB takes SLB to a global scale. It allows you to load balance VIPs from different geographical locations as a single entity, which gives the geographic sites scalability and fault tolerance.
Yes, you must turn on clustering and configure it in order to use Global Server Load Balancing. Every proxy within a site or cluster must have the same configuration, so that any appliance can act as the DNS server if it becomes the master for the site. Each site has its own SLB/GSLB/cluster configuration, and you use the GSLB site overflow command to add a remote GSLB site to the local appliance.
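The answer above is about the Array appliance, but since GSLB is ultimately DNS-based, the same idea can be illustrated with a rough AWS analogue: Route 53 latency-based records that answer each client with the best site. The hosted zone ID, domain name, and VIP addresses below are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# Two sites answer for the same name; Route 53 returns the lowest-latency
# record for each client, which is the essence of DNS-based GSLB.
for region, vip in [("us-east-1", "203.0.113.10"), ("eu-west-1", "198.51.100.20")]:
    route53.change_resource_record_sets(
        HostedZoneId="Z0HYPOTHETICAL",          # hypothetical hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": f"site-{region}",
                    "Region": region,            # latency-based routing
                    "TTL": 60,
                    "ResourceRecords": [{"Value": vip}],
                },
            }],
        },
    )
```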
What are the automation tools that can be used to spin up the servers?
Rolling your own scripts against the AWS API is the most common approach; such scripts can be written in any language of your choice, such as bash or Python. Another option is to use a configuration management and provisioning tool such as Puppet or, better still, its successor Opscode Chef.
One more popular option is Ansible, because it does not require an agent and can run shell scripts as-is. You might also look at CloudFormation and Terraform, which capture the entire infrastructure as code that can be checked into a git repository.
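As a minimal example of "rolling your own script against the AWS API", the sketch below uses boto3 to launch two EC2 instances; the AMI ID and key pair name are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up two instances from a (hypothetical) AMI; this is the same API call
# that tools like CloudFormation or Terraform ultimately make on your behalf.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=2,
    MaxCount=2,
    KeyName="ops-key",                 # hypothetical key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "web"}],
    }],
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```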
Which load balancing methods are supported with Array Networks GSLB, and what is a Reverse Proxy Cache?
The following Global Server Load Balancing methods are supported by the Array appliance (a sketch of the selection logic follows the list):
- Overflow: sends all requests to a different remote site once the local site is loaded to 80%.
- lc: stands for Least Connections; it sends clients to the site with the lowest count of current connections.
- rr: stands for Round Robin; it sends clients to each site in round robin fashion.
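Here is the promised sketch of the three selection methods in Python; the site names, load figures, and connection counts are made up for illustration.

```python
import itertools

# Hypothetical site state: "load" is utilisation, "connections" is the
# current connection count that the lc method compares.
sites = [
    {"name": "local",  "load": 0.85, "connections": 120},
    {"name": "remote", "load": 0.40, "connections": 45},
]
_rr = itertools.cycle(sites)

def overflow(local, remote, threshold=0.80):
    """Overflow: use the remote site once the local site exceeds the threshold."""
    return remote if local["load"] >= threshold else local

def least_connections(candidates):
    """lc: pick the site with the fewest current connections."""
    return min(candidates, key=lambda s: s["connections"])

def round_robin():
    """rr: rotate through the sites in order."""
    return next(_rr)

print(overflow(sites[0], sites[1])["name"])          # remote (local is at 85%)
print(least_connections(sites)["name"])              # remote (45 < 120)
print(round_robin()["name"], round_robin()["name"])  # local remote
```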
A Reverse Proxy Cache is a cache placed in front of the origin servers, which is the reason for the "reverse" in the name. If a client requests an object that is cached, the proxy serves the request from the cache rather than from the origin server.
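A reverse proxy cache can be reduced to a few lines of Python: check the cache first, and only go to the (stand-in) origin server on a miss.

```python
# Minimal reverse proxy cache sketch: requests hit the cache first and only
# fall through to the (hypothetical) origin server on a miss.
cache = {}

def fetch_from_origin(path):
    # Stand-in for a real HTTP request to the origin server.
    return f"<origin response for {path}>"

def handle_request(path):
    if path in cache:
        return cache[path], "HIT"          # served by the proxy, origin untouched
    body = fetch_from_origin(path)
    cache[path] = body                     # stored for subsequent clients
    return body, "MISS"

print(handle_request("/index.html"))  # ('<origin response for /index.html>', 'MISS')
print(handle_request("/index.html"))  # ('<origin response for /index.html>', 'HIT')
```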
What are the challenges in microservices debugging and troubleshooting?
In the serverless world, debugging and troubleshooting are the most difficult processes. Error and warning messages are logged to CloudWatch. This is an area that needs attention, and Amazon is working on it.
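In practice, much of that troubleshooting comes down to searching CloudWatch Logs. As an illustration, the sketch below uses boto3 to pull ERROR lines from the last hour of a hypothetical Lambda log group.

```python
import boto3
from datetime import datetime, timedelta, timezone

logs = boto3.client("logs", region_name="us-east-1")

# Pull ERROR lines from the last hour of a (hypothetical) Lambda log group.
start = int((datetime.now(timezone.utc) - timedelta(hours=1)).timestamp() * 1000)

events = logs.filter_log_events(
    logGroupName="/aws/lambda/orders-service",   # hypothetical function name
    startTime=start,
    filterPattern="ERROR",
)

for event in events["events"]:
    print(event["timestamp"], event["message"].strip())
```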