AWS Solutions Architect Flashcards
AWS Aurora
Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL.
Handles highly transactional (OLTP, Online Transaction Processing) workloads.
The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora.
Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications.
The underlying storage grows automatically as needed, up to 64 tebibytes (TiB).
Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration.
OLTP
OLTP (Online Transactional Processing) is a category of data processing that is focused on transaction-oriented tasks.
OLTP typically involves inserting, updating, and/or deleting small amounts of data in a database. OLTP mainly deals with large numbers of transactions by a large number of users.
Think RDS
OLAP
OLAP (Online Analytical Processing) is the technology behind many Business Intelligence (BI) applications. OLAP is a powerful technology for data discovery, including capabilities for limitless report viewing, complex analytical calculations, and predictive “what if” scenario (budget, forecast) planning.
Think Redshift
AWS ECS
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service.
ECS supports Fargate to provide serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
There are two different charge models for Amazon Elastic Container Service (ECS): the Fargate launch type model and the EC2 launch type model. With Fargate, you pay for the amount of vCPU and memory resources that your containerized application requests. With the EC2 launch type, there is no additional charge; you pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments.
AWS Fargate
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
AWS Elastic Kubernetes Service
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service.
EKS is the best place to run Kubernetes for several reasons. First, you can choose to run your EKS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
EKS integrates with AWS App Mesh
App Mesh
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure.
Fanout Pattern
A “fanout” pattern is when an Amazon SNS message is sent to a topic and then replicated and pushed to multiple Amazon SQS queues, HTTP endpoints, or email addresses.
This allows for parallel asynchronous processing.
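A minimal boto3 sketch of the pattern (the topic and queue names are made up for illustration; a real setup also needs an SQS queue policy that allows SNS to send messages):

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Create the topic and one of the queues that will fan out from it.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
queue_url = sqs.create_queue(QueueName="order-processing")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue to the topic; every message published to the topic is
# then replicated to all subscribed queues/endpoints for parallel processing.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
sns.publish(TopicArn=topic_arn, Message='{"orderId": "123"}')
```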
Visibility Timeout
The period of time during which Amazon SQS prevents other consumers from receiving and processing a message that has already been picked up by a consumer.
The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
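As a rough boto3 sketch (the queue URL is a placeholder), the timeout can be set on the queue or extended for an individual message:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# Raise the queue's default visibility timeout from 30 seconds to 5 minutes.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "300"},
)

# A consumer that needs more time for a single message can extend the
# timeout for just that message (up to the 12-hour maximum).
msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for m in msgs.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=m["ReceiptHandle"],
        VisibilityTimeout=600,
    )
```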
Dead Letter Queue
Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can’t be processed (consumed) successfully.
Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed.
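A hedged boto3 sketch of attaching a dead-letter queue via a redrive policy (the source queue URL and DLQ ARN are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")
source_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:orders-dlq"                     # placeholder

# After 5 failed receive attempts, SQS moves the message to the DLQ
# so it can be inspected instead of being retried forever.
sqs.set_queue_attributes(
    QueueUrl=source_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```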
AWS CloudHSM
AWS CloudHSM enables you to generate and use your encryption keys on FIPS 140-2 Level 3 validated hardware.
CloudHSM protects your keys with exclusive, single-tenant access to tamper-resistant HSM instances in your own Amazon Virtual Private Cloud (VPC).
Attempting to log in as the administrator more than twice with the wrong password zeroizes your HSM appliance. When an HSM is zeroized, all keys, certificates, and other data on the HSM are destroyed. You can use your cluster’s security group to prevent an unauthenticated user from zeroizing your HSM.
Amazon strongly recommends that you use two or more HSMs in separate Availability Zones in any production CloudHSM Cluster to avoid loss of cryptographic keys.
ALB Health Checks
Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.
Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer.
Each load balancer node checks the health of each target, using the health check settings for the target group with which the target is registered. After your target is registered, it must pass one health check to be considered healthy.
After each health check is completed, the load balancer node closes the connection that was established for the health check.
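Health check settings live on the target group; a boto3 sketch of adjusting them (the target group ARN and the /health path are assumptions):

```python
import boto3

elbv2 = boto3.client("elbv2")
target_group_arn = "arn:aws:elasticloadbalancing:...:targetgroup/web/abc123"  # placeholder

# Probe /health every 15 seconds; a target is healthy after 3 consecutive
# successes and unhealthy after 2 consecutive failures.
elbv2.modify_target_group(
    TargetGroupArn=target_group_arn,
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=15,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)
```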
AWS Trusted Advisor
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices.
It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.
Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.
Security – identification of security settings that could make your AWS solution less secure.
Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.
Performance – recommendations that can help to improve the speed and responsiveness of your applications.
Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.
Cooldown Period
The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect.
After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.
When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. If an instance becomes unhealthy, the Auto Scaling group does not wait for the cooldown period to complete before replacing the unhealthy instance.
IAM Tagging
You can define tags on UAT and production EC2 instances and add a condition to the IAM policy that allows access only to instances with specific tags.
Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type — you can quickly identify a specific resource based on the tags you’ve assigned to it.
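A sketch of such a policy, expressed as a Python dict and attached with boto3 (the tag key/value, user name, and policy name are all hypothetical):

```python
import json
import boto3

# Identity-based policy: allow start/stop only on instances tagged
# Environment=UAT. Tag key/value, user name, and policy name are made up.
uat_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "UAT"}
            },
        }
    ],
}

boto3.client("iam").put_user_policy(
    UserName="uat-admin",
    PolicyName="UATInstanceAccess",
    PolicyDocument=json.dumps(uat_only_policy),
)
```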
AWS Resource Access Manager (RAM)
AWS Resource Access Manager (RAM) is primarily used to securely share your resources across AWS accounts or within your AWS Organization; it is not used for sharing within a single AWS account.
Edge Location
An edge location helps deliver high availability, scalability, and performance of your application for all of your customers from anywhere in the world.
Edge locations are used by other services such as Lambda@Edge and Amazon CloudFront.
CloudFront
Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds.
CloudFront delivers your files to end-users using a global network of edge locations.
ELB Access Logs
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer.
Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses.
You can use these access logs to analyze traffic patterns and troubleshoot issues.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default.
After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files.
You can disable access logging at any time.
IAM Database Authentication
You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication.
IAM database authentication works with MySQL and PostgreSQL.
With this authentication method, you don’t need to use a password when you connect to a DB instance.
An authentication token is a string of characters that you use instead of a password.
After you generate an authentication token, it’s valid for 15 minutes before it expires. If you try to connect using an expired token, the connection request is denied.
Benefits:
Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
“AWSAuthenticationPlugin”
AWS-provided plugin that works seamlessly with IAM to authenticate your IAM users.
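A minimal boto3 sketch of generating an authentication token (the endpoint, port, and user name below are placeholders); the token is then supplied as the password to the database driver over an SSL connection:

```python
import boto3

rds = boto3.client("rds")

# The generated token replaces the password and is valid for 15 minutes.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    Port=3306,
    DBUsername="app_user",                                 # placeholder DB user
)
# Pass `token` as the password to your MySQL/PostgreSQL client library.
```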
S3 Standard
General-purpose storage of frequently accessed data
S3 Standard_IA
Long-lived but less frequently accessed data. Stored redundantly across multiple geographically separated AZs.
Suitable for objects larger than 128 KB.
S3 Onezone_IA
Stores object data in only one AZ. Less expensive than Standard_IA, but not as resilient to the physical loss of an AZ.
Suitable for objects larger than 128 KB.
S3 Intelligent Tiering
Designed for customers who want to optimize storage costs automatically.
First cloud object storage class that delivers automatic cost savings by moving data between two access tiers - Frequent access and infrequent access.
Ideal for data with unknown or changing access patterns.
Monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier. If an object is accessed later, it is moved back to the frequent access tier.
No retrieval fees in intelligent tiering.
Glacier
Long-term archive
Not available for real-time access
Cannot be specified at the time an object is created.
Visible through S3 only.
Retrieval Options:
Expedited - allows quick access for urgent requests for a subset of archives. Data access is typically within 1-5 minutes. Two types:
On-Demand - similar to EC2 On-Demand and available the vast majority of the time.
Provisioned - guaranteed to be available when you need them.
Standard - Allows access to any archives within several hours. Typically 3-5 hours. Default.
Bulk - Glacier's lowest-cost retrieval option, enabling retrieval of large amounts, even petabytes, of data inexpensively in a day. Usually takes 5-12 hours.
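A hedged boto3 sketch of requesting a restore from the S3 side (the bucket, key, and 7-day window are placeholders); the Tier field selects Expedited, Standard, or Bulk:

```python
import boto3

s3 = boto3.client("s3")

# Request a temporary restore of an archived object for 7 days using the
# Standard retrieval tier (typically 3-5 hours).
s3.restore_object(
    Bucket="my-archive-bucket",                 # placeholder bucket
    Key="backups/2020-01-01.tar.gz",            # placeholder key
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},  # or "Expedited" / "Bulk"
    },
)
```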
Aurora Database Failure - Auto-healing
Failover is automatically handled by Amazon Aurora so that your applications can resume database operations as quickly as possible without manual administrative intervention.
If you have an Amazon Aurora Replica in the same or a different Availability Zone, when failing over, Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary. Start-to-finish, failover typically completes within 30 seconds.
If you are running Aurora Serverless and the DB instance or AZ becomes unavailable, Aurora will automatically recreate the DB instance in a different AZ.
If you do not have an Amazon Aurora Replica (i.e. single instance) and are not running Aurora Serverless, Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance. This replacement of the original instance is done on a best-effort basis and may not succeed, for example, if there is an issue that is broadly affecting the Availability Zone.
VPC Endpoint
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
There are two types of VPC endpoints: interface endpoints and gateway endpoints. You should create the type of VPC endpoint required by the supported service. As a rule of thumb, most AWS services use VPC Interface Endpoint except for S3 and DynamoDB, which use VPC Gateway Endpoint.
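A minimal boto3 sketch of creating a gateway endpoint for S3 (the VPC ID, the region in the service name, and the route table ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; the endpoint is added as a route in the selected
# route tables, so instances reach S3 without public IPs, NAT, or an IGW.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def5678"],
)
```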
ALB Health Check Protocols/Ports
HTTP: 80
HTTPS: 443
Succeeds if the instance returns a 200 response code within the health check interval
NLB Listener Protocols/Ports
TCP, TLS, UDP, TCP_UDP
1-65535
NLB Healthcheck
Succeeds if TCP connection succeeds
Kinesis Results Storage
Redshift, DynamoDB, S3, Amazon EMR, Kinesis Firehose
CloudTrail Default Encryption Settings
By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE).
You can also choose to encrypt your log files with an AWS Key Management Service (AWS KMS) key.
CloudTrail
Actions taken by a user, role, or an AWS service in the AWS management console, CLI, and AWS SDKs and APIs are recorded as events.
Enabled on account creation.
Focuses on auditing API activity.
Event history allows viewing, search, and download of the past 90 days of activity.
Two Types:
All regions - Records events in each region and delivers the CloudTrail log files to a specified S3 bucket (default option when creating a trail in the console).
One region - Records events only in the specified region (default option when creating a trail with the CLI or CloudTrail API).
CloudTrail events can be sent to CloudWatch Logs to trigger alarms according to metric filters.
CloudTrail log file integrity validation can verify whether a log file was modified, deleted, or unchanged after CloudTrail delivered it.
Organization Trail
A CloudTrail trail that logs all events for all AWS accounts in an organization created by AWS Organizations. Organization trails must be created from the organization's master (management) account.
AWS WAF - Allow then block
Create a web ACL with a rule that explicitly allows an approved IP.
Then create another rule with a condition like “geo match” that blocks requests that originate from a specific country.
Allow first, then block what you desire.
AWS Database Migration Service
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora.
AWS Schema Conversion Tool
Used to convert the source schema and code to match that of the target database in a heterogeneous database migration.
FSx for Windows File Server
Fully managed, highly reliable, and scalable file storage accessible over the industry-standard Server Message Block (SMB) protocol.
It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration.
Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) Namespaces to scale-out performance across multiple file systems in the same namespace up to tens of Gbps and millions of IOPS.
Lifecycle hook
Add a lifecycle hook to your Auto Scaling group so that you can perform custom actions when instances launch or terminate.
Lifecycle hook stages and hook actions
Pending -> Pending:Wait -> Pending:Proceed -> InService -> Terminating -> Terminating:Wait -> Terminating:Proceed -> Terminated
If you added an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook to your Auto Scaling group, the instances move from the Pending state to the Pending:Wait state. After you complete the lifecycle action, the instances enter the Pending:Proceed state. When the instances are fully configured, they are attached to the Auto Scaling group and they enter the InService state.
When Amazon EC2 Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. If you added an autoscaling:EC2_INSTANCE_TERMINATING lifecycle hook to your Auto Scaling group, the instances move from the Terminating state to the Terminating:Wait state. After you complete the lifecycle action, the instances enter the Terminating:Proceed state. When the instances are fully terminated, they enter the Terminated state.
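A rough boto3 sketch of a launch lifecycle hook and the completion call (the group name, hook name, and instance ID are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hold new instances in Pending:Wait for up to 5 minutes so bootstrap
# scripts can finish; abandon the launch if nothing completes the hook.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-asg",
    LifecycleHookName="wait-for-bootstrap",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,
    DefaultResult="ABANDON",
)

# Once configuration is done, signal CONTINUE to move the instance to
# Pending:Proceed and then InService.
autoscaling.complete_lifecycle_action(
    AutoScalingGroupName="web-asg",
    LifecycleHookName="wait-for-bootstrap",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",
)
```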
HDD Volumes
Cannot be used as boot volume in AWS
Large, Sequential I/O operations
Low Price
Big data, data warehouses, log processing
Throughput-oriented storage for large volumes of data that is infrequently accessed
Cost: Low
SSD Volumes
Small, random I/O operations
Best for Transactional workloads
Critical business apps that require sustained IOPS performance
Large database workloads such as MongoDB, Oracle, Microsoft SQL
Cost: moderate/high
API Gateway supported protocol?
All of the APIs created with Amazon API Gateway expose HTTPS endpoints only.
Amazon API Gateway does not support unencrypted (HTTP) endpoints. By default, Amazon API Gateway assigns an internal domain to the API that automatically uses the Amazon API Gateway certificate. When configuring your APIs to run under a custom domain name, you can provide your own certificate for the domain.
SQS Retention Period Default/Max
Amazon SQS automatically deletes messages that have been in a queue for more than the maximum message retention period. The default message retention period is 4 days.
Max is 14 days.
Amazon Workspaces
Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions.
What Route 53 Alias resource record sets allow you to do?
Allows mapping of zone apex (e.g. yourwebsite.com) DNS name to your load balancer (or other AWS resource) DNS name.
Useful because the underlying IP addresses of AWS resources change frequently, so mapping to their DNS names is the appropriate approach.
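A hedged boto3 sketch of creating an alias A record at the zone apex for an ALB (all IDs, names, and the ALB DNS name are placeholders; the HostedZoneId inside AliasTarget is the load balancer's zone ID, not your own hosted zone's):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # your hosted zone (placeholder)
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "yourwebsite.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # ALB's zone ID (placeholder)
                        "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```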
AWS Glue
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.
You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g. table definition and schema) in the AWS Glue Data Catalog.
Once cataloged, your data is immediately searchable, queryable, and available for ETL. AWS Glue generates the code to execute your data transformations and data loading processes.
ASG Default Termination Policy
- If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, choose the Availability Zone with the instances that use the oldest launch configuration.
- Determine which unprotected instances in the selected Availability Zone use the oldest launch configuration. If there is one such instance, terminate it.
- If there are multiple instances to terminate based on the above criteria, determine which unprotected instances are closest to the next billing hour. (This helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is one such instance, terminate it.
- If there is more than one unprotected instance closest to the next billing hour, choose one of these instances at random.
AWS Storage Gateway Hardware Appliance
A physical hardware appliance with the Storage Gateway software preinstalled on a validated server configuration.
The hardware appliance is a high-performance 1U server that you can deploy in your data center, or on-premises inside your corporate firewall.
When you buy and activate your hardware appliance, the activation process associates your hardware appliance with your AWS account. After activation, your hardware appliance appears in the console as a gateway on the Hardware page.
You can configure your hardware appliance as a file gateway, tape gateway, or volume gateway type. The procedure that you use to deploy and activate these gateway types on a hardware appliance is the same as on a virtual platform.
A file gateway can be configured to store and retrieve objects in Amazon S3 using the protocols NFS and SMB.
“aws ec2 describe-instances”
The describe-instances command shows the status of EC2 instances, including recently terminated instances. It also returns a StateReason explaining why an instance was terminated.
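The equivalent lookup can also be sketched with boto3 (a hedged example; the filter simply narrows the output to terminated instances):

```python
import boto3

ec2 = boto3.client("ec2")

# Inspect recently terminated instances and print why they were terminated.
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["terminated"]}]
)
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        reason = instance.get("StateReason", {})
        print(instance["InstanceId"], reason.get("Code"), reason.get("Message"))
```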
What characteristics does an Encrypted EBS Volume have?
When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
- Data at rest inside the volume
- All data moving between the volume and the instance
- All snapshots created from the volume
- All volumes created from those snapshots
Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance.
Direct Connect
Direct Connect creates a direct, private connection from your on-premises data center to AWS, letting you establish a 1-gigabit or 10-gigabit dedicated network connection using Ethernet fiber-optic cable.
S3 Scaling
S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge.
Each S3 prefix can support these request rates, making it simple to increase performance significantly.
This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications. This improvement is now available in all AWS Regions.
What is SQS ReceiveMessageWaitTimeSeconds?
The queue attribute that determines whether you are using Short or Long polling.
By default, its value is zero, which means Short polling is used. If it is set to a value greater than zero, Long polling is used.
Long Polling
- Long polling helps reduce your cost of using Amazon SQS by reducing the number of empty responses when there are no messages available to return in reply to a ReceiveMessage request sent to an Amazon SQS queue and eliminating false empty responses when messages are available in the queue but aren’t included in the response.
- Long polling reduces the number of empty responses by allowing Amazon SQS to wait until a message is available in the queue before sending a response. Unless the connection times out, the response to the ReceiveMessage request contains at least one of the available messages, up to the maximum number of messages specified in the ReceiveMessage action.
- Long polling eliminates false empty responses by querying all (rather than a limited number) of the servers. Long polling returns messages as soon as any message becomes available.
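A small boto3 sketch (the queue URL is a placeholder) showing both ways to enable long polling, on the queue and per request:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# Enable long polling at the queue level (20 seconds is the maximum).
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Or override it per request: this call waits up to 20 seconds for a
# message instead of returning an empty response immediately.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
```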
What are the best AWS tools for Distributed Session Data Management?
Think Elasticache
Redis and Memcached
Redis
Redis is an open source, in-memory data structure store used as a database, cache, and message broker.
Memcached
Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.
Elastic Network Adapter (ENA) w/ Enhanced Networking
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types.
It supports network speeds of up to 100 Gbps for supported instance types. Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to support VPC networking.
Enhanced Networking
Enhanced networking provides higher bandwidth, higher packet-per-second (PPS) performance, and consistently lower inter-instance latencies.
SR-IOV
a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces.
Elastic Fabric Adapter (EFA)
An Elastic Fabric Adapter (EFA) is simply an Elastic Network Adapter (ENA) with added capabilities. It provides all of the functionality of an ENA, with additional OS-bypass functionality. OS-bypass is an access model that allows HPC and machine learning applications to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality.
The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities.
How do you encrypt an EBS volume?
Two ways:
- By using your own keys in AWS KMS
- By using Amazon-managed keys in AWS KMS
Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes, and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data at rest security by encrypting your data using Amazon-managed keys, or keys you create and manage using the AWS Key Management Service (KMS). The encryption occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage.
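A minimal boto3 sketch of creating an encrypted volume (the AZ, size, and KMS key ARN are placeholders; omitting KmsKeyId falls back to the Amazon-managed aws/ebs key):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an encrypted gp2 volume using a customer-managed KMS key.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```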
Cloudformation Outputs
Outputs is an optional section of the CloudFormation template that describes the values that are returned whenever you view your stack’s properties.
Can provide information such as the DNS name of an ELB, hostnames of EC2 instances, etc.
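A hedged boto3 sketch of reading a stack's Outputs after deployment (the stack name is a placeholder):

```python
import boto3

cfn = boto3.client("cloudformation")

# Print every OutputKey/OutputValue the template declared in its Outputs
# section, e.g. the DNS name of an ELB.
stack = cfn.describe_stacks(StackName="my-web-stack")["Stacks"][0]
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])
```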
EC2 “Server Refused our Key” error
You might be unable to log into an EC2 instance if:
- You’re using an SSH private key but the corresponding public key is not in the authorized_keys file.
- You don’t have permissions for your authorized_keys file.
- You don’t have permissions for the .ssh folder.
- Your authorized_keys file or .ssh folder isn’t named correctly.
- Your authorized_keys file or .ssh folder was deleted.
- Your instance was launched without a key, or it was launched with an incorrect key.
To connect to your EC2 instance after receiving the error “Server refused our key,” you can update the instance’s user data to append the specified SSH public key to the authorized_keys file, which sets the appropriate ownership and file permissions for the SSH directory and files contained in it.
Throughput Optimized HDD (st1)
Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS.
This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported.
Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data.
EBS Provisioned IOPS SSD (io1) & io2
Highest performance SSD volume designed for latency-sensitive transactional workloads
I/O-intensive NoSQL and relational databases
4 GB - 16 TB
64,000 IOPS
io2 is same as io1, except it has higher durability:
99.999% vs. 99.8-99.9%
EBS General Purpose SSD (gp2)
General Purpose SSD volume that balances price performance for a wide variety of transactional workloads.
Boot volumes, low-latency interactive apps, dev and test
99.8% - 99.9% durability
1 GB - 16 TB
16,000 IOPS
250 MB/s Throughput
EBS Cold HDD (sc1)
Lowest cost HDD volume designed for less frequently accessed workloads
99.8% - 99.9% durability
Colder data requiring fewer scans per day
500 GB - 16 TB
250 IOPS max per volume
250 MB/s max throughput
What is the AWS Systems Manager Run Command?
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances.
A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager.
Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.
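A minimal boto3 sketch of sending an ad hoc command with the AWS-RunShellScript document (the instance IDs and the command are placeholders):

```python
import boto3

ssm = boto3.client("ssm")

# Run a shell command on two managed instances and print the command ID
# so the invocation results can be looked up later.
resp = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["yum -y update"]},
)
print(resp["Command"]["CommandId"])
```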
Storage Gateway - Volume Gateway
Volume Gateway presents cloud-backed iSCSI block storage volumes to your on-premises applications. Volume Gateway stores and manages on-premises data in Amazon S3 on your behalf and operates in either cache mode or stored mode.
In the cached Volume Gateway mode, your primary data is stored in Amazon S3, while retaining your frequently accessed data locally in the cache for low latency access.
In the stored Volume Gateway mode, your primary data is stored locally and your entire dataset is available for low latency access on premises while also asynchronously getting backed up to Amazon S3.
In either mode, you can take point-in-time copies of your volumes using AWS Backup, which are stored in AWS as Amazon EBS snapshots. Using Amazon EBS Snapshots enables you to make space-efficient versioned copies of your volumes for data protection, recovery, migration, and various other copy data needs.
Can an EBS volume be used when a snapshot is in progress?
EBS volumes can be used while a snapshot is in progress.
Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed.
While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume; hence, you can still use the EBS volume normally.
A non-root EBS volume can be detached or attached to a new EC2 instance while the snapshot is in progress. The only exception here is if you are taking a snapshot of your root volume.
ALB Weighted Target Groups Routing
Application Load Balancers support Weighted Target Groups routing. With this feature, you will be able to do weighted routing of the traffic forwarded by a rule to multiple target groups.
This enables various use cases like blue-green, canary and hybrid deployments without the need for multiple load balancers. It even enables zero-downtime migration between on-premises and cloud or between different compute types like EC2 and Lambda.
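A rough boto3 sketch of a 90/10 weighted forward action for a blue/green deployment (all ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Send 90% of traffic to the "blue" target group and 10% to "green".
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/web/abc/def",
    DefaultActions=[
        {
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/blue/111", "Weight": 90},
                    {"TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/green/222", "Weight": 10},
                ]
            },
        }
    ],
)
```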
Route 53 Weighted Routing
To divert 50% of the traffic to the new application in AWS and the other 50% to the existing application, you can use Route 53 with a weighted routing policy. This will split the traffic between the on-premises and AWS-hosted applications accordingly.
Weighted routing lets you associate multiple resources with a single domain name (yourwebsite.com) or subdomain name (portal.yourwebsite.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. You can set a specific percentage of how much traffic will be allocated to the resource by specifying the weights.
For example, if you want to send a tiny portion of your traffic to one resource and the rest to another resource, you might specify weights of 1 and 255. The resource with a weight of 1 gets 1/256th of the traffic (1/(1+255)), and the other resource gets 255/256ths (255/(1+255)).
You can gradually change the balance by changing the weights. If you want to stop sending traffic to a resource, you can change the weight for that record to 0.
ALB Target Types
- instance - The targets are specified by instance ID.
- ip - The targets are IP addresses.
- Lambda - The target is a Lambda function.
ALB IP CIDR block Target Supported Ranges
When the target type is ip, you can specify IP addresses from one of the following CIDR blocks:
- 10.0.0.0/8 (RFC 1918)
- 100.64.0.0/10 (RFC 6598)
- 172.16.0.0/12 (RFC 1918)
- 192.168.0.0/16 (RFC 1918)
- The subnets of the VPC for the target group
These supported CIDR blocks enable you to register the following with a target group: ClassicLink instances, instances in a VPC that is peered to the load balancer VPC, AWS resources that are addressable by IP address and port (for example, databases), and on-premises resources linked to AWS through AWS Direct Connect or a VPN connection.
Take note that you can not specify publicly routable IP addresses. If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance. If you specify targets using IP addresses, you can route traffic to an instance using any private IP address from one or more network interfaces. This enables multiple applications on an instance to use the same port. Each network interface can have its own security group.
VPC Gateway Endpoint
Used for S3, DynamoDB
All other supported services use interface endpoints
After creating admins in IAM, what do you need to do to give them access to the AWS console?
Provide a password for each user created and give these passwords to the admins.
The AWS Management Console is the web interface used to manage your AWS resources using your web browser. To access it, your users need a password that they can use to log in to the console.
Target tracking scaling - ASG
Increase or decrease the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home – you select a temperature and the thermostat does the rest.
If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in an Auto Scaling group, then it is recommended that you use target tracking scaling policies. Otherwise, it is better to use step scaling policies instead.
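A hedged boto3 sketch of a target tracking policy that keeps average CPU near 50% (the group and policy names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: the ASG adds or removes instances to hold average CPU
# utilization close to the target value, like a thermostat.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```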
Difference between step scaling and simple scaling?
Step scaling policies and simple scaling policies are two of the dynamic scaling options available for you to use. Both require you to create CloudWatch alarms for the scaling policies. Both require you to specify the high and low thresholds for the alarms. Both require you to define whether to add or remove instances, and how many, or set the group to an exact size.
The main difference between the policy types is the step adjustments that you get with step scaling policies. When step adjustments are applied, and they increase or decrease the current capacity of your Auto Scaling group, the adjustments vary based on the size of the alarm breach.
In most cases, step scaling policies are a better choice than simple scaling policies, even if you have only a single scaling adjustment.
The main issue with simple scaling is that after a scaling activity is started, the policy must wait for the scaling activity or health check replacement to complete and the cooldown period to expire before responding to additional alarms. Cooldown periods help to prevent the initiation of additional scaling activities before the effects of previous activities are visible.
Simple scaling - ASG
Increase or decrease the current capacity of the group based on a single scaling adjustment.
Scheduled Scaling - ASG
Based on a schedule that allows you to set your own scaling schedule for predictable load changes.
Client-side Encryption
Encrypting data before it is sent to S3.
To enable:
Use an AWS KMS managed customer master key,
OR
Use a client-side master key that is never sent to AWS.
AWS Shield
Network and transport layer protection.
Protects against DDoS attacks, with near real-time visibility.
Integrates with AWS WAF.
CloudWatch Events
Delivers a near real-time stream of system events that describe changes in AWS resources.
CloudWatch Events responds to operational changes and takes corrective action as necessary: sending messages to respond to the environment, activating functions, making changes, and capturing state information.
Concepts:
- Events - indicate a change in your AWS environment.
- Targets - process events.
- Rules - match incoming events and route them to targets for processing.
Example: you can increase or reduce the number of ECS tasks based on PUT or DELETE events.
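As a hedged illustration of wiring a rule to a target with boto3 (the rule name, event pattern, and Lambda ARN are all hypothetical; the target Lambda would also need a resource-based permission, not shown):

```python
import boto3

events = boto3.client("events")

# Rule that matches EC2 instance state-change events and routes them to a
# Lambda function target.
events.put_rule(
    Name="ec2-state-change",
    EventPattern='{"source": ["aws.ec2"], "detail-type": ["EC2 Instance State-change Notification"]}',
    State="ENABLED",
)
events.put_targets(
    Rule="ec2-state-change",
    Targets=[{"Id": "notify-lambda", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:on-state-change"}],
)
```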