ELB & ASG Flashcards
What does vertical scalability mean in AWS?
Explain the concept of vertical scalability.
How does vertical scalability work in AWS?
Vertical scalability in AWS refers to increasing the capacity or power of a single instance (such as an EC2 instance) by enhancing its resources, like adding more CPU, RAM, or storage to handle increased demands. It involves upgrading the existing instance’s size to accommodate more workload.
Vertical scalability is like upgrading your own superhero suit when you need more powers. It means making your computer in the cloud stronger by giving it more muscles (like adding more strength or memory) so it can do bigger tasks without needing more friends to help.
Rather than adding more instances, you make a single existing instance more powerful.
Vertical scalability is like making your computer in the cloud a stronger superhero suit when it needs more powers.
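A minimal boto3 sketch of vertical scaling, assuming a hypothetical instance ID: the instance is stopped, switched to a larger instance type, and started again.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Vertical scaling: stop the instance, resize it to a bigger instance type,
# then start it again with more CPU and RAM.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "m5.2xlarge"})
ec2.start_instances(InstanceIds=[instance_id])
```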
What is horizontal scalability?
Explain the concept of horizontal scalability.
How does horizontal scalability work in expanding resources?
Horizontal scalability refers to the capability of adding more instances (like more servers or computers) to a system to handle increased workload or demand. It involves scaling out by adding more similar units, spreading the load across multiple machines rather than increasing the power of a single machine.
Horizontal scalability is like inviting more friends to help build a huge Lego castle. Instead of making one person build faster, everyone brings their own Legos and builds together, making the castle bigger and stronger.
It’s about adding more computers or servers to share the work
Horizontal scalability is like inviting more friends to build a bigger Lego castle together.
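A minimal boto3 sketch of horizontal scaling with an Auto Scaling Group (hypothetical group name): raising the desired capacity adds more instances instead of enlarging a single one.

```python
import boto3

asg = boto3.client("autoscaling")

# Horizontal scaling: ask the Auto Scaling Group for more instances
# rather than making any single instance bigger.
asg.set_desired_capacity(
    AutoScalingGroupName="web-asg",  # hypothetical ASG name
    DesiredCapacity=6,
    HonorCooldown=False,
)
```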
What is Load Balancing?
Explain the concept of load balancing.
How does load balancing work in computing?
A load balancer is a server (or managed service) that carries out the load balancing process.
Load balancing is the process of distributing incoming network traffic across multiple servers or resources to ensure efficient utilization and optimal performance and to prevent any single server from becoming overloaded. It helps spread work evenly so the system can handle varying levels of demand.
Load balancing is like sharing candies equally among friends so that no one feels left out. It’s about making sure that all the computers helping out with a task get an equal number of jobs to do, so none of them gets too tired or overwhelmed.
Ensure that all resources involved in a task share the work evenly
Load balancing is like sharing candies equally among friends, so no computer gets overloaded with too much work.
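A toy round-robin sketch (plain Python, not an AWS API, purely to illustrate the idea): each request is handed to the next server in turn, so no single server does all the work.

```python
from itertools import cycle

backends = ["10.0.1.10", "10.0.2.10", "10.0.3.10"]  # hypothetical backend server IPs
next_backend = cycle(backends)

def route(request_id):
    # Hand each incoming request to the next server in the rotation.
    target = next(next_backend)
    print(f"request {request_id} -> {target}")
    return target

for i in range(6):
    route(i)  # requests 0-5 are spread evenly across the three servers
```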
What is the purpose of using load balancers?
Why are load balancers used in computing?
How do load balancers benefit systems?
Load balancers are used to evenly distribute incoming network traffic across multiple servers or resources, ensuring no single server gets overwhelmed and optimizing performance. They help in achieving high availability, scalability, and reliability by preventing overloads and providing redundancy.
Load balancers are like traffic managers for computers, making sure that no one computer gets too much work, just like a teacher making sure each student gets a fair share of activities in class.
Ensures that no single EC2 instance gets too busy and that the system can handle a large volume of user requests.
Load balancers are like traffic managers for computers, ensuring they all share work equally and the system runs smoothly.
What are the different types of load balancers available on AWS?
Explain the variations of load balancers on AWS.
How do the different load balancers on AWS function?
On AWS, there are four main types of load balancers:
1. Application Load Balancer (ALB): operates at the application layer (Layer 7).
2. Network Load Balancer (NLB): operates at Layer 4 (TCP/UDP).
3. Gateway Load Balancer (GWLB): operates at Layer 3 and is used to deploy and scale virtual network appliances such as firewalls.
4. Classic Load Balancer (CLB): the traditional load balancer handling both layers but with fewer features. It's deprecated.
ALB is good for websites.
NLB is faster for more technical stuff.
CLB is a bit older and less fancy but still gets the job done.
AWS has ALB for websites, NLB for high-performance networking, GWLB for network appliances, and CLB, the older option.
ALB Use Cases: Web applications, API services, microservices architectures, content-based routing, modern application deployments using containers.
NLB Use Cases: High-performance scenarios, gaming applications, IoT (Internet of Things) setups, situations requiring static IP addresses, handling non-HTTP(S) protocols.
Choosing between ALBs and NLBs depends on the specific requirements of your application, the type of traffic you’re dealing with, and the level of functionality and features needed for your load balancing setup. Often, a combination of both types of load balancers is used within a system to cater to different traffic types and application needs.
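A boto3 sketch (hypothetical subnet IDs) showing that the same elbv2 API creates all current-generation load balancers; only the Type parameter changes. CLB uses the older elb API and is not shown.

```python
import boto3

elbv2 = boto3.client("elbv2")
subnets = ["subnet-aaa111", "subnet-bbb222"]  # hypothetical subnet IDs

# Same call, different Type: application (ALB), network (NLB), gateway (GWLB).
alb = elbv2.create_load_balancer(Name="demo-alb", Subnets=subnets, Type="application")
nlb = elbv2.create_load_balancer(Name="demo-nlb", Subnets=subnets, Type="network")
gwlb = elbv2.create_load_balancer(Name="demo-gwlb", Subnets=subnets, Type="gateway")
```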
What is an Application Load Balancer (ALB)?
Explain the purpose and functionalities of an Application Load Balancer.
How is an Application Load Balancer used in AWS?
An Application Load Balancer (ALB) is a type of load balancer on AWS that operates at the application layer (Layer 7) of the OSI model. It intelligently directs incoming web traffic and routes requests to specific targets (such as EC2 instances or containers) based on content, allowing for more advanced routing and support for features like path-based routing and host-based routing.
ALB on AWS directs web traffic intelligently, sending requests to specific parts of an application for better performance.
Application load balancer
Use-case Question: How can an Application Load Balancer be used to direct incoming traffic to different microservices within an application architecture?
use-cases
An ALB can utilize its advanced routing capabilities to examine incoming requests and route them to specific microservices within an application based on URL paths, headers, or hostnames. For instance, if an application has different microservices handling user authentication, profile management, and payments, the ALB can intelligently route requests to these services based on the URL paths or headers, ensuring efficient handling of different functionalities within the application.
ALBs are a great fit for microservices and container-based applications.
ALB manages web traffic, directing requests based on their content to different parts of an application running on servers
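A boto3 sketch of ALB path-based routing (ARNs are hypothetical): requests whose path starts with /payments/ are forwarded to the payments microservice's target group.

```python
import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/demo-alb/abc/def"  # hypothetical
payments_tg_arn = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/payments/123"    # hypothetical

# Layer 7 rule: any request under /payments/ is forwarded to the payments service.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/payments/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": payments_tg_arn}],
)
```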
What is a Target Group in AWS?
Explain the purpose and functionality of a Target Group in AWS.
How is a Target Group used within the AWS ecosystem?
In AWS, a Target Group is a logical grouping of targets, typically instances (like EC2 instances), containers, or IP addresses, for routing requests from a load balancer. It defines where the load balancer sends traffic by directing requests to registered targets based on configured rules and health checks.
A Target Group in AWS is like a team in a treasure hunt game. The team members (targets) are grouped together, and the Target Group (team) follows specific rules to guide the treasure (incoming requests) to the right team members, making sure the treasure hunt goes smoothly.
TG helps LB send requests to specific targets based on rules.
AWS Target Groups group together targets for load balancers to efficiently direct traffic to specific instances or resources.
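A boto3 sketch of creating a Target Group and registering instances (all IDs are hypothetical); the load balancer only routes to targets that pass the configured health check.

```python
import boto3

elbv2 = boto3.client("elbv2")

tg = elbv2.create_target_group(
    Name="web-tg",                  # hypothetical name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
    TargetType="instance",
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register two EC2 instances as targets of this group.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}],
)
```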
What is a Network Load Balancer (NLB)?
Explain the purpose and functionalities of a Network Load Balancer.
How is a Network Load Balancer utilized within the AWS infrastructure?
A Network Load Balancer (NLB) in AWS is a high-performance load balancer that operates at Layer 4 (TCP/UDP) of the OSI model.
NLB has one static IP per AZ and supports assigning elastic IPs.
NLBs are used for extreme-performance TCP/UDP traffic.
NLB efficiently manages traffic at a network level, swiftly sending requests to different targets without much processing overhead.
Lower latency compared to ALB.
AWS Network Load Balancers efficiently direct traffic at a network level, ensuring fast and reliable routing to different servers.
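A boto3 sketch of an NLB with one Elastic IP per AZ via SubnetMappings (subnet and allocation IDs are hypothetical), giving the NLB a static address in each zone.

```python
import boto3

elbv2 = boto3.client("elbv2")

# One Elastic IP (AllocationId) is pinned to the NLB in each Availability Zone subnet.
elbv2.create_load_balancer(
    Name="demo-nlb",
    Type="network",
    SubnetMappings=[
        {"SubnetId": "subnet-aaa111", "AllocationId": "eipalloc-111"},
        {"SubnetId": "subnet-bbb222", "AllocationId": "eipalloc-222"},
    ],
)
```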
What is a Gateway Load Balancer (GWLB)?
Explain the purpose and functionalities of a Gateway Load Balancer.
How is a Gateway Load Balancer used within the AWS environment?
A Gateway Load Balancer (GWLB) in AWS is a highly scalable load balancing service that allows users to deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection systems, and other network appliances, easily. It handles incoming traffic, distributing it across multiple virtual appliances to enhance security and performance.
A Gateway Load Balancer is like a superhero team leader assigning tasks to different superheroes (security appliances) to protect the city. It makes sure each superhero (virtual appliance) gets the right job (traffic) to keep the city (network) safe and running smoothly.
GWLB manages traffic flow to different security appliances to ensure better security and performance of the network.
AWS Gateway Load Balancers efficiently manage traffic among different security appliances to enhance network security and performance.
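A boto3 sketch of a GWLB setup (hypothetical IDs): the security appliances sit behind a target group that uses the GENEVE protocol on port 6081.

```python
import boto3

elbv2 = boto3.client("elbv2")

# The Gateway Load Balancer itself.
elbv2.create_load_balancer(Name="demo-gwlb", Type="gateway", Subnets=["subnet-aaa111"])

# Appliances (firewalls, IDS, etc.) register in a GENEVE target group on port 6081.
elbv2.create_target_group(
    Name="appliance-tg",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
    TargetType="instance",
)
```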
What are Sticky Sessions?
Explain the concept of Sticky Sessions in web applications.
How do Sticky Sessions impact user sessions in web environments?
Sticky Sessions, also known as session affinity, is a mechanism in web applications where a load balancer directs a user’s requests to the same server for the duration of their session. It ensures that subsequent requests from the same user are sent to the server that initially served their first request, maintaining session persistence.
Sticky Sessions are like a waiter at a restaurant who remembers your table number and always brings your food to the same table, making sure you always sit in your favorite spot and don’t have to move around.
Sticky Sessions (session affinity) ensure that your requests go to the same server, making your website experience more consistent.
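A boto3 sketch of enabling Sticky Sessions on a target group via a load-balancer-generated cookie (the ARN is hypothetical).

```python
import boto3

elbv2 = boto3.client("elbv2")

# Keep each user on the same target for up to one day (86400 seconds).
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/web-tg/123",  # hypothetical
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```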
What is Cross-Zone Load Balancing?
Explain the concept of Cross-Zone Load Balancing in AWS.
How does Cross-Zone Load Balancing impact load balancing within AWS?
Cross-Zone Load Balancing in AWS refers to the distribution of traffic evenly across instances in multiple Availability Zones. It ensures that incoming requests are directed across all available instances in different zones, optimizing performance and ensuring better fault tolerance by utilizing resources across zones.
Use-case Explanation: For instance, if an online shopping website utilizes Cross-Zone Load Balancing, it ensures that customer traffic is evenly distributed across servers in different Availability Zones. If one zone experiences high traffic or goes down, the other zones can handle the load, ensuring the website remains accessible and responsive.
Cross-Zone Load Balancing spreads traffic across different zones to make sure no single zone gets too busy, improving performance and reliability.
AWS Cross-Zone Load Balancing evenly spreads traffic across different Availability Zones, enhancing performance and fault tolerance.
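A boto3 sketch of turning on Cross-Zone Load Balancing for an NLB (the ARN is hypothetical); ALBs have it enabled by default, while NLBs and GWLBs have it off by default.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Spread traffic across registered targets in every AZ, not only the local one.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/demo-nlb/abc",  # hypothetical
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```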
What are TLS Certificates?
Explain the role and importance of TLS Certificates in web security.
How do TLS Certificates contribute to secure communication on the internet?
TLS Certificates, also known as SSL Certificates, are digital certificates that facilitate secure communication between a web browser and a server. They encrypt data transmitted over the internet, verifying the identity of websites and ensuring data integrity and confidentiality.
TLS Certificates are like secret codes that only the right people (websites and browsers) know to keep their messages safe from spies (hackers). It’s like using a special lock and key to keep a treasure box’s content secret while sending it across the internet.
TLS Certificates secure data transferred between websites and browsers.
TLS Certificates encrypt internet data, keeping it safe and ensuring that websites are trustworthy.
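A boto3 sketch of attaching a TLS certificate (for example from ACM) to an HTTPS listener so the load balancer terminates encrypted traffic (ARNs are hypothetical).

```python
import boto3

elbv2 = boto3.client("elbv2")

# HTTPS listener on port 443 that presents the certificate and forwards
# decrypted traffic to the web target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/demo-alb/abc",  # hypothetical
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/aaaa-bbbb"}],        # hypothetical
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/web-tg/123"}],
)
```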
What is Server Name Indication (SNI)?
Explain the purpose and functionality of Server Name Indication.
How does SNI enhance web server functionality?
Under the TLS protocol.
Server Name Indication (SNI) is an extension of the TLS protocol that allows a server to host multiple SSL certificates for different domains on the same IP address. It enables the server to identify which certificate to present to the client during the SSL/TLS handshake, facilitating secure communication with multiple websites on a single server.
Analogy: Server Name Indication is like a magic tag attached to different doors in a house (server), telling guests (web browsers) which room (website) they want to visit. It helps the server show the right certificate to the browser, making sure everyone goes to the correct place securely.
- Only works with ALB, NLB, or CloudFront; it does not work with CLB.
SNI allows a server to handle multiple secure websites on the same IP.
Server Name Indication helps a server manage multiple secure websites on one IP address by showing the right certificate to web browsers.
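A boto3 sketch of SNI in practice (ARNs are hypothetical): extra certificates added to the same listener are selected based on the hostname the client requests.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Additional certificates on one listener; the load balancer serves the one
# matching the SNI hostname sent by the browser.
elbv2.add_listener_certificates(
    ListenerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/demo-alb/abc/def",  # hypothetical
    Certificates=[
        {"CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/site-a"},  # hypothetical
        {"CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/site-b"},  # hypothetical
    ],
)
```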
What is Connection Draining or Deregistration Delay?
Explain the concept and purpose of Connection Draining or Deregistration Delay.
How does Connection Draining/Deregistration Delay impact load balancers?
Connection Draining or Deregistration Delay is a feature in load balancers that allows existing, in-flight connections to an instance to complete before the instance is removed from the pool of available targets.
It ensures ongoing requests are completed before taking an instance out of service, preventing disruption and loss of data during the transition.
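A boto3 sketch of tuning the deregistration delay on a target group (the ARN is hypothetical); in-flight requests get up to the configured number of seconds to finish before the target is fully removed.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Give existing connections up to 120 seconds to drain (the default is 300).
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/web-tg/123",  # hypothetical
    Attributes=[{"Key": "deregistration_delay.timeout_seconds", "Value": "120"}],
)
```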