Quick Facts Flashcards
Amazon S3
All Amazon S3 buckets have encryption configured by default
Amazon S3 managed keys (SSE-S3) is the default encryption configuration
S3 Object Lock uses a “Write Once Read Many” (WORM) model
S3 Object Lock helps prevent accidental or inappropriate deletion of data (see the sketch at the end of this section)
Stores data across multiple Availability Zones by default
With S3 Cross-Region Replication (CRR), you can replicate data between distant AWS Regions
Use S3 Inventory to audit and report for business, compliance, and regulatory needs
Enable Amazon S3 server access logging to provide detailed records of the requests that are made to a bucket
S3 Storage Lens is a cloud-storage analytics feature
Trusted Advisor includes a check for Amazon S3 buckets with “open access permissions”
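A minimal boto3 sketch of the default-encryption and Object Lock facts above; the bucket name, object key, and retention date are placeholders, and the bucket is assumed to have been created with Object Lock enabled:

```python
# Minimal sketch (boto3); names and dates are placeholders.
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Confirm the default encryption configuration (SSE-S3 unless changed).
enc = s3.get_bucket_encryption(Bucket="example-bucket")
rule = enc["ServerSideEncryptionConfiguration"]["Rules"][0]
print(rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])  # e.g. "AES256"

# Apply a WORM-style retention period to an existing object
# (requires a bucket created with Object Lock enabled).
s3.put_object_retention(
    Bucket="example-bucket",
    Key="reports/2024-01.csv",
    Retention={
        "Mode": "COMPLIANCE",  # or "GOVERNANCE"
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)
```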
Amazon AppFlow
Securely exchange data between software as a service (SaaS) applications, such as Salesforce, and AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift
Amazon AppFlow encrypts your data at rest and in transit
Integrates with AWS PrivateLink to provide private data transfer
Amazon Athena
You can run queries in Amazon Athena on encrypted data in Amazon S3 in the same Region and across a limited number of Regions
You can encrypt query results stored in Amazon S3 and metadata stored in the AWS Glue Data Catalog (see the sketch at the end of this section)
Amazon Athena uses Transport Layer Security (TLS) encryption for data in-transit between Athena and Amazon S3
Monitor Athena usage with CloudTrail and Amazon QuickSight
AWS Lake Formation allows you to define and enforce database, table, and column-level access policies when using Athena queries to read data stored in Amazon S3
AWS Lake Formation helps you centrally govern, secure, and globally share data for analytics and machine learning
Amazon Athena makes it easy to interactively run data analytics and exploration using Apache Spark
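A minimal boto3 sketch of running a query with encrypted results, as noted above; the database, table, and results bucket are placeholders:

```python
# Minimal sketch (boto3); database, table, and bucket names are placeholders.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT * FROM web_logs LIMIT 10",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={
        "OutputLocation": "s3://example-athena-results/",
        # Encrypt the query results written to Amazon S3
        # (SSE_S3 here; SSE_KMS and CSE_KMS are also options).
        "EncryptionConfiguration": {"EncryptionOption": "SSE_S3"},
    },
)
print(response["QueryExecutionId"])
```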
AWS Data Exchange
AWS Data Exchange is a service that helps AWS customers easily share and manage data entitlements from other organizations at scale
As a data receiver, you can track and manage all of your data grants and AWS Marketplace data subscriptions in one place
For data senders, it eliminates the need to build and maintain any data delivery and entitlement infrastructure
AWS Data Exchange always encrypts all data products stored in the service at rest without requiring any additional configuration. This encryption is automatic when you use AWS Data Exchange
AWS Data Pipeline
Automate the movement and transformation of data
Define data-driven workflows
For example, you can use AWS Data Pipeline to archive your web server’s logs to Amazon Simple Storage Service (Amazon S3) each day and then run a weekly Amazon EMR cluster over those logs to generate traffic reports
Amazon EMR
It’s a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark
A cluster consists of EC2 instances. Each EC2 instance is called a node, and there are three types of nodes: primary, core, and task
Regional Service
It can run on Outposts or Local Zones
Amazon EC2 security groups act as a virtual firewall for Amazon EMR cluster instances, limiting inbound and outbound network traffic.
Amazon EMR block public access (BPA) prevents you from launching a cluster in a public subnet
Secure Shell (SSH) helps provide a secure way for users to connect to the command line on cluster instances
You can use Amazon EMR security configurations to configure data encryption
AWS Glue
Helps you to discover, prepare, move, and integrate data from multiple sources
You can manage your data in a centralized data catalog
You can immediately search and query cataloged data using Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum
AWS Glue DataBrew – A visual data preparation tool that you can use to clean and normalize data without writing any code
Athena, Redshift Spectrum, EMR use AWS Glue Data Catalog
AWS Glue Streaming is more focused on high-level ETL processing, while Kinesis Data Streams provides a lower-level streaming platform
AWS Glue provides statistics about the health of your environment
Amazon Kinesis Data Streams
Used to collect and process large streams of data records in real time
A common use is the real-time aggregation of data followed by loading the aggregate data into a data warehouse or map-reduce cluster
The Kinesis Client Library enables fault-tolerant consumption of data from streams and provides scaling support for Kinesis Data Streams applications
Kinesis Client Library (KCL) provides metrics per shard, worker, and KCL application
Amazon Redshift
On-demand pricing for provisioned capacity, billed by the hour
Amazon Redshift Serverless starts as low as $3 per hour; you pay only for the compute capacity your data warehouse consumes while it is active
Managed storage pricing - You pay for data stored in managed storage at a fixed GB-month rate for your region
Amazon Redshift Spectrum allows you to directly run SQL queries against exabytes of data in Amazon S3
Amazon Redshift Spectrum pricing - You are charged for the number of bytes scanned by Redshift Spectrum, rounded up to the next megabyte
Amazon EventBridge
An event indicates a change in an environment. Events are represented as JSON objects; they all have a similar structure and the same top-level fields. For example:
1) Amazon EC2 generates an event when the state of an instance changes from pending to running.
2) Amazon EC2 Auto Scaling generates events when it launches or terminates instances.
3) AWS CloudTrail publishes events when you make API calls.
You could create a pipe with a DynamoDB stream for a source, and an event bus as the target. The pipe receives events from the DynamoDB stream and sends them to the event bus, which then sends them on to multiple targets according to the rules you’ve specified on the event bus
EventBridge encrypts event metadata and message data that it stores. By default, EventBridge encrypts data using 256-bit Advanced Encryption Standard (AES-256) under an AWS owned key
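A minimal boto3 sketch of publishing a custom event, illustrating the common top-level fields mentioned above; the bus name and detail payload are made-up examples:

```python
# Minimal sketch (boto3); the "orders" bus and payload are placeholders.
import boto3, json

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "EventBusName": "orders",        # custom event bus (assumed to exist)
            "Source": "com.example.shop",    # identifies the producer
            "DetailType": "OrderPlaced",     # used by rules to match events
            "Detail": json.dumps({"orderId": "1234", "total": 42.50}),
        }
    ]
)
```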
Amazon MQ
A message broker allows software applications and components to communicate using various programming languages, operating systems, and formal messaging protocols
Amazon SQS and Amazon SNS are queue and topic services that are highly scalable, simple to use, and don’t require you to set up message brokers. Amazon MQ is a managed message broker service that provides compatibility with many popular message brokers.
We recommend Amazon MQ for migrating applications from existing message brokers. We recommend SQS and SNS for new applications.
When you create an Amazon MQ for ActiveMQ broker or an Amazon MQ for RabbitMQ broker, you can specify the AWS KMS key that you want Amazon MQ to use to encrypt your data at rest. If you do not specify a KMS key, Amazon MQ creates an AWS owned KMS key for you and uses it on your behalf
Amazon SNS
You can develop an application that publishes a message to an SNS topic whenever an order is placed for a product. Then, SQS queues that are subscribed to the SNS topic receive identical notifications for the new order. An Amazon Elastic Compute Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle the processing or fulfillment of the order
You can receive immediate notification when an event occurs, such as a specific change to your Amazon EC2 Auto Scaling group, a new file uploaded to an Amazon S3 bucket, or a metric threshold breached in Amazon CloudWatch
You could send e-commerce order confirmations as user notifications to individuals or groups
You can use Amazon SNS to send update notifications to an app. The notification message can include a link to download and install the update
You must enable server-side encryption for data stored in SNS topics; it is not enabled by default
Message data protection safeguards the data that’s published to your Amazon SNS topics by using data protection policies to audit, mask, redact, or block the sensitive information that moves between applications or AWS services
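A minimal boto3 sketch of the SNS-to-SQS fan-out pattern described above; the topic and queue ARNs are placeholders, and the queue’s access policy is assumed to already allow the topic to deliver to it:

```python
# Minimal fan-out sketch (boto3); ARNs are placeholders.
import boto3, json

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:111122223333:orders"
queue_arn = "arn:aws:sqs:us-east-1:111122223333:order-fulfillment"

# Subscribe the SQS queue to the topic so it receives every order notification.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publish a message; all subscribed queues receive an identical copy.
sns.publish(
    TopicArn=topic_arn,
    Subject="New order",
    Message=json.dumps({"orderId": "1234", "status": "PLACED"}),
)
```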
Amazon SQS
SQS and SNS are NOT synchronous and do NOT use message brokers like Amazon MQ
Amazon SQS supports two types of queues – standard queues and FIFO queues
Standard Queue is needed if you have to send data between applications when the throughput is important
FIFO queue is needed if you have to send data between applications when the order of events is important
SSE protects the contents of messages in queues using SQS-managed encryption keys (SSE-SQS) or keys managed in the AWS Key Management Service (SSE-KMS)
Distributed queues are an example of resilience
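A minimal boto3 sketch of creating a standard queue and a FIFO queue with SQS-managed encryption (SSE-SQS); the queue names are placeholders:

```python
# Minimal sketch (boto3); queue names are placeholders.
import boto3

sqs = boto3.client("sqs")

# Standard queue: highest throughput, best-effort ordering.
sqs.create_queue(
    QueueName="clickstream-events",
    Attributes={"SqsManagedSseEnabled": "true"},
)

# FIFO queue: strict ordering; the name must end in ".fifo".
sqs.create_queue(
    QueueName="order-events.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "SqsManagedSseEnabled": "true",
    },
)
```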
AWS Step Functions
Step Functions has two workflow types. Standard workflows have exactly-once workflow execution and can run for up to one year. This means that each step in a Standard workflow will execute exactly once. Express workflows, however, have at-least-once workflow execution and can run for up to five minutes
Standard workflows are ideal for long-running, auditable workflows, as they show execution history and visual debugging. Express workflows are ideal for high-event-rate workloads, such as streaming data processing and IoT data ingestion
The Amazon Simple Workflow Service (Amazon SWF) provides a way to build, run, and scale background jobs that have parallel or sequential steps
Amazon Chime
You can use Amazon Chime for online meetings, video conferencing, calls, and chat
Amazon Pinpoint
You can use Amazon Pinpoint to send push notifications, in-app notifications, emails, text messages, voice messages, and messages over custom channels.
Using the analytics that Amazon Pinpoint provides, you can gain insight into your user base by viewing trends related to user engagement, campaign outreach, revenue, and more
Amazon SES
Amazon Simple Email Service (SES) is an email platform that provides an easy, cost-effective way for you to send and receive email using your own email addresses and domains.
You can send marketing emails such as special offers, transactional emails such as order confirmations, and other types of correspondence such as newsletters
AWS Savings Plans
Compute Savings Plans, EC2 Instance Savings Plans and SageMaker Savings Plans
With Compute Savings Plans, you can move a workload from C5 to M5, shift your usage from EU (Ireland) to EU (London), or migrate your application from Amazon EC2 to Amazon ECS using Fargate at any time.
With an EC2 Instance Savings Plan, you can change your instance size within the instance family (for example, from c5.xlarge to c5.2xlarge) or the operating system (for example, from Windows to Linux), or move from Dedicated tenancy to Default and continue to receive the discounted rate provided by your EC2 Instance Savings Plan
With SageMaker Savings Plans, you can move a workload from ml.c5 to ml.m5, shift your usage from Europe (Ireland) to Europe (London), or migrate your usage from Training to Inference at any time and continue to receive benefits
Savings Plans recommendations can be found in the AWS Cost Management console under Savings Plans -> Recommendations
OR
You can also receive your Savings Plans recommendations via the AWS Cost Explorer API
Automating Savings Plans with Amazon EventBridge
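A minimal boto3 sketch of pulling Savings Plans recommendations from the Cost Explorer API, as noted above; the parameter values are illustrative and the printed response fields are only an assumed subset:

```python
# Minimal sketch (boto3); parameter values are illustrative.
import boto3

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",      # or "EC2_INSTANCE_SP", "SAGEMAKER_SP"
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

recommendation = resp.get("SavingsPlansPurchaseRecommendation", {})
for detail in recommendation.get("SavingsPlansPurchaseRecommendationDetails", []):
    # Field names assumed from the Cost Explorer response shape.
    print(detail.get("HourlyCommitmentToPurchase"), detail.get("EstimatedSavingsAmount"))
```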
AWS App Runner
Provides a fast, simple, and cost-effective way to deploy from source code or a container image directly to a scalable and secure web application in the AWS Cloud.
AWS App Runner takes your source code or source image from a repository, and creates and maintains a running web service for you in the AWS Cloud
AWS Batch
CloudWatch Container Insights for AWS Batch collects, aggregates, and summarizes metrics and logs from your AWS Batch compute environments and jobs. The metrics include CPU, memory, disk, and network utilization. You can add these metrics to CloudWatch dashboards.
EC2
Leverage security groups as the primary mechanism for controlling network access to Amazon EC2 instances. When necessary, use network ACLs sparingly to provide stateless, coarse-grain network control
Customers often use the Windows Firewall for further visibility into network traffic and to complement security group filters
Resilience is achieved by replicating your data or applications over greater geographic distances using AWS Local Zones
AND/OR
Copying AMIs and EBS snapshots across Regions
AND/OR
Use Amazon EC2 Auto Scaling
AND/OR
Distributing incoming traffic across multiple instances in a single Availability Zone or multiple Availability Zones using Elastic Load Balancing
Amazon EC2 stores the public key on your instance, and you store the private key
Security group rules enable you to filter traffic based on protocols and port numbers, they are stateful, you can add and remove rules at any time and you can associate multiple security groups with an instance
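A minimal boto3 sketch of adding an inbound rule to a security group, illustrating the stateful filtering facts above; the group ID is a placeholder:

```python
# Minimal sketch (boto3); the group ID is a placeholder. Security groups are
# stateful, so the corresponding response traffic is allowed automatically.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
        }
    ],
)
```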
AWS Elastic Beanstalk
You can also perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console)
Elastic Beanstalk provides a managed updates feature. This feature automatically applies patch and minor updates for an Elastic Beanstalk supported platform version
The basic health reporting system provides information about the health of instances in an Elastic Beanstalk environment based on health checks performed by Elastic Load Balancing for load-balanced environments, or Amazon Elastic Compute Cloud for single-instance environments
Elastic Beanstalk also monitors the other resources in your environment and reports missing or incorrectly configured resources that can cause your environment to become unavailable to users
Additional Checks:
The environment’s Auto Scaling group is available and has at least one instance.
The environment’s security group is available and is configured to allow incoming traffic on port 80
The environment CNAME exists and is pointing to the right load balancer.
In a worker environment, the Amazon Simple Queue Service (Amazon SQS) queue is being polled at least once every three minutes
With basic health reporting, the Elastic Beanstalk service does not publish any metrics to Amazon CloudWatch
Elastic Beanstalk stores various objects in an Amazon Simple Storage Service (Amazon S3) bucket that it creates for each AWS Region in which you create environments. Elastic Beanstalk doesn’t turn on default encryption for the Amazon S3 bucket that it creates
AWS Lambda
The Lambda service runs your code inside an execution environment
Execution environments run on hardware-virtualized micro virtual machines (microVMs) that are dedicated to a single AWS account
The Lambda service stores your function code in an internal S3 bucket that’s private to your account
Lambda execution environments handle one request at a time. After the invocation has ended, the execution environment is retained for a period of time. If another request arrives, the environment is reused to handle the subsequent request
If requests arrive simultaneously, the Lambda service scales up the Lambda function to provide multiple execution environments
When not to use a Lambda function:
For functions that act as orchestrators. It is better to move the orchestration flow to AWS Step Functions
Lambda functions that transport data from one service to another without performing any business logic
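A minimal handler sketch illustrating execution-environment reuse: anything initialized outside the handler survives across warm invocations, while the handler body runs once per request. The table name is a placeholder:

```python
# Minimal sketch; "example-table" is a placeholder DynamoDB table.
import boto3

# Runs once per execution environment (cold start), then is reused
# for subsequent requests handled by the same warm environment.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")

def handler(event, context):
    # Runs on every invocation; each environment handles one request at a time.
    table.put_item(Item={"pk": event["id"], "payload": event.get("payload", "")})
    return {"statusCode": 200}
```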
Amazon Lightsail
AWS and Lightsail do not update or patch the operating system or applications on instances after you create them
Lightsail also does not update or patch the operating system and software that you configure on your Lightsail container services
Amazon Lightsail reports metric data for instances, databases, content delivery network (CDN) distributions, load balancers, container services, and buckets. You can view and monitor this data in the Lightsail console
Lightsail also offers Amazon Lightsail for Research
AWS Local Zones
There’s no additional charge for enabling Local Zones. You pay only for the resources that you deploy in your Local Zones. AWS resources in Local Zones have different prices than they do in parent AWS Regions
Local Zones have their own connections to the internet and support AWS Direct Connect
To use a Local Zone, you must first enable it. Next, you create a subnet in the Local Zone. Finally, you launch resources in the Local Zone subnet
Amazon Elastic Container Registry (ECR)
An Amazon ECR private registry is provided to each AWS account; you can create one or more repositories in your registry and store Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts in them
Image scanning helps in identifying software vulnerabilities in your container images
Cross-Region and cross-account replication
Amazon ECR stores images in Amazon S3 buckets
By default, Amazon ECR uses server-side encryption with Amazon S3-managed encryption keys
Use the CloudWatch console to visualize your service quotas
Amazon Elastic Container Service (ECS)
There are three layers in Amazon ECS:
Capacity - The infrastructure where your containers run
Controller - Deploy and manage your applications that run on the containers
Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers
Runtime Monitoring in GuardDuty is an intelligent threat detection service that protects workloads running on Fargate and EC2 container instances by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior
AWS KMS
Create multi-Region keys, which act like copies of the same KMS key in different AWS Regions.
Each AWS KMS key that you create in AWS KMS costs $1/month (prorated hourly)
You are not charged for the following:
Creation and storage of AWS managed or AWS owned KMS keys
Customer managed KMS keys that are scheduled for deletion
By default, AWS KMS generates and protects the cryptographic key material for KMS keys
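A minimal boto3 sketch of creating a multi-Region primary key and replicating it to a second Region, as described above; the Regions and description are illustrative:

```python
# Minimal sketch (boto3); Regions and description are placeholders.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a multi-Region primary key.
key = kms.create_key(
    Description="example multi-Region key",
    MultiRegion=True,
)
key_id = key["KeyMetadata"]["KeyId"]

# Create a replica that acts like a copy of the same key in another Region.
kms.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")
```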
AWS IQ
AWS IQ connects you to AWS Certified experts for hands-on help for your AWS projects. You create a request and choose from experts who respond
AWS Support Plans
All AWS customers automatically have 24x7 access to these features of Basic Support:
- One-on-one responses to account and billing questions
- Support forums
- Service health checks
- Documentation, technical papers, and best practice guides
Customers on the Developer Support plan have access to these additional features:
- Best practice guidance
- Client-side diagnostic tools
- Building-block architecture support: guidance on how to use AWS products, features, and services together
- Supports an unlimited number of support cases that can be opened by one primary contact, which is the AWS account root user.
In addition, customers on the Business, Enterprise On-Ramp, or Enterprise Support plans have access to these features:
- Use-case guidance
- AWS Trusted Advisor
- The AWS Support API
- Third-party software support
- Supports an unlimited number of AWS (IAM) users
In addition, customers on the Enterprise On-Ramp or Enterprise Support plans have access to these features:
- Application architecture guidance
- Infrastructure event management
- Technical account manager
- White-glove case routing
- Management business reviews
AWS support plan response time
Developer:
General guidance: < 24 hours
System impaired: < 12 hours
Additional in Business:
Production system impaired: < 4 hours
Production system down: < 1 hour
Additional in Enterprise On-Ramp:
Business-critical system down: < 30 minutes
Additional in Enterprise:
Business/Mission-critical system down: < 15 minutes
Amazon Aurora
Compatible with MySQL and PostgreSQL
Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region
For high availability across multiple AWS Regions, you can set up Aurora global databases
An Aurora global database consists of one primary AWS Region where your data is written, and up to five read-only secondary AWS Regions
Aurora global databases currently don’t support Aurora Auto Scaling for secondary DB clusters
Amazon Aurora can encrypt your Amazon Aurora DB clusters. Data that is encrypted at rest includes the underlying storage for DB clusters, its automated backups, read replicas, and snapshots
You can’t turn off encryption on an encrypted DB cluster
Aurora backs up your cluster volume automatically and retains restore data for the length of the backup retention period
Amazon DocumentDB
Amazon DocumentDB supports two types of clusters: instance-based clusters and elastic clusters. Elastic clusters support workloads with millions of reads/writes per second and petabytes of storage capacity
Instance based:
Amazon DocumentDB automatically grows the size of your storage volume
Amazon DocumentDB lets you scale the compute and memory resources for each of your instances up or down
Amazon DocumentDB runs in Amazon Virtual Private Cloud (Amazon VPC)
Amazon DocumentDB continuously monitors the health of your cluster
On instance failure, Amazon DocumentDB automates failover to one of up to 15 Amazon DocumentDB replicas that you create in other Availability Zones. If no replicas have been provisioned and a failure occurs, Amazon DocumentDB tries to create a new Amazon DocumentDB instance automatically
The backup capability allows you to restore your cluster to any second during your retention period, up to the last 5 minutes
Amazon MemoryDB
MemoryDB at-rest encryption is always enabled
Amazon Neptune
Fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets
With Amazon Neptune, you don’t have to worry about database management tasks like hardware provisioning, software patching, setup, configuration, or backups.
Amazon QLDB
You can use Amazon QLDB to track all application data changes, and maintain a complete and verifiable history of changes over time
All data stored in Amazon QLDB is fully encrypted at rest by default
QLDB doesn’t provide a dedicated backup and related restore feature at this time
Amazon RDS
With a database on EC2, the customer is responsible for:
Application optimization, Scaling, High availability, Database backups, Database software patching, Database software install, OS patching, OS installation
Using Amazon RDS, the customer is responsible for:
Application optimization
Supported engines: Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL
Amazon RDS Custom is an RDS management type that gives you full access to your database and operating system
You can also subscribe to Amazon RDS events to be notified about changes to a DB instance, DB snapshot, or DB parameter group
Pricing = DB instance hours + Provisioned Storage + (I/O) requests (for Amazon RDS magnetic storage only) + Provisioned IOPS + Backup storage + Data transfer in and out of your DB instance from or to the internet and other AWS Regions
To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key in the destination AWS Region. This is because KMS keys are specific to the AWS Region that they are created in
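A minimal boto3 sketch of the cross-Region encrypted snapshot copy described above; the call is made in the destination Region, and all identifiers are placeholders:

```python
# Minimal sketch (boto3); all identifiers are placeholders.
import boto3

# The copy is issued in the destination Region.
rds_dest = boto3.client("rds", region_name="eu-west-1")

rds_dest.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:snapshot:mydb-snap-2024-01-01"
    ),
    TargetDBSnapshotIdentifier="mydb-snap-2024-01-01-copy",
    # KMS key in the destination Region, because KMS keys are Region-specific.
    KmsKeyId="arn:aws:kms:eu-west-1:111122223333:key/example-key-id",
    SourceRegion="us-east-1",  # boto3 uses this to pre-sign the cross-Region copy
)
```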
Amazon Timestream
Use cases:
Monitoring metrics to improve the performance and availability of your applications.
Storage and analysis of industrial telemetry to streamline equipment management and maintenance.
Tracking user interaction with an application over time.
Storage and analysis of IoT sensor data.
Amazon CloudShell
With the shell that’s created for AWS CloudShell sessions, you can switch seamlessly between your preferred command line shells. More specifically, you can switch between Bash, PowerShell, and Z shell
AWS CloudShell is an AWS service that’s available at no additional charge. However, you pay for other AWS resources that you run with AWS CloudShell. Moreover, standard data transfer rates also apply
AWS CodeBuild
The CodeBuild console also provides a way to quickly search for your resources, such as repositories, build projects, deployment applications, and pipelines
Amazon CodeCatalyst
CodeCatalyst provides one place where you can plan work, collaborate on code, and build, test, and deploy applications with continuous integration/continuous delivery (CI/CD) tools.
It’s similar to AWS CodeStar
AWS CodeCommit
Anyone with an AWS account can get started with AWS CodeCommit for free. Your account gets 5 active users per month for free (within limits), after which you pay $1.00 per additional active user per month. There are no upfront fees or commitments.
CodeCommit is not integrated with GitHub
Things not to use CodeCommit for:
Large files that change frequently - Use S3
Audit trails - Use S3
Backups - Use AWS backup and restore solutions (for example, AWS Backup)
Large numbers of branches or references - CodeCommit can handle these, but delete branches and tags that are no longer needed
AWS CodeDeploy
CodeDeploy is able to deploy applications to three compute platforms:
EC2/On-Premises
AWS Lambda
Amazon ECS
Customer code is not stored in CodeDeploy
Service model data for deployments, deployment configuration, deployment groups, applications, and application revisions are stored in Amazon DynamoDB and encrypted at rest using an AWS owned key, owned and managed by CodeDeploy
Amazon AppStream 2.0
A streaming instance (also known as a fleet instance) is an EC2 instance that is made available to a single user for application streaming. After the user’s session completes, the instance is terminated by EC2
Amazon WorkSpaces
WorkSpaces uses your Simple AD, AD Connector, or AWS Managed Microsoft AD directory to authenticate users
Amazon WorkSpaces Web
Designed to facilitate secure browser access to internal websites and software-as-a-service (SaaS) applications
Amazon WorkSpaces Core
Managed virtual desktop infrastructure designed to work with third-party VDI solutions
AWS Amplify
Enables frontend web and mobile developers to quickly and easily build full-stack applications on AWS
Amplify uses Amazon CloudFront to serve your app to your customers
AWS AppSync
Enables developers to connect their applications and services to data and events with secure, serverless and high-performing GraphQL and Pub/Sub APIs
AWS Security Credentials
You can’t use IAM policies to deny the root user access to resources explicitly. You can only use an AWS Organizations service control policy (SCP) to limit the permissions of the root user
If you lose your root user access keys, you must be able to sign in to your account as the root user to create new ones
Only the root user can:
1. Change account settings
2. Restore IAM user permissions.
3. Close account
4. Activate IAM access to the Billing and Cost Management console
5. Register as a seller
6. Configure an Amazon S3 bucket to enable MFA Delete
7. Edit or delete an Amazon SQS resource policy that denies all principals
8. Edit or delete an Amazon S3 bucket policy that denies all principals
9. Sign up for AWS GovCloud (US)
Don’t create long-term access keys for human users who need access to applications or AWS services
Don’t store long-term access keys within an AWS compute service
Use IAM roles to generate temporary security credentials whenever possible
Don’t embed long-term access keys and secret access keys in your application code or in a code repository
AWS IAM Access Analyzer does the following:
- Identify resources that are shared with an external entity
- Identify unused access
- Validate IAM policies against AWS best practices
- Validate IAM policies against your specified security standards
- Generate IAM policies based on access activity in your AWS CloudTrail logs
AWS Service Endpoints
Services that support a general endpoint include:
EC2, Amazon EC2 Auto Scaling, and EMR
When you use a general endpoint, AWS routes the API request to US East (N. Virginia) (us-east-1), which is the default Region for API calls
AWS Tagging
Each resource can have a maximum of 50 user-created tags
For each resource, each tag key must be unique, and each tag key can have only one value
The tag value must be a minimum of 0 and a maximum of 256 Unicode characters in UTF-8
Allowed characters are letters, numbers, spaces representable in UTF-8, and the following characters: _ . : / = + - @
Tag keys and values are case sensitive
Tags for automation
Tags for access control
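A minimal boto3 sketch of tagging an EC2 instance within the constraints above (unique, case-sensitive keys; up to 50 user-created tags); the instance ID and tag values are placeholders:

```python
# Minimal sketch (boto3); the instance ID and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Environment", "Value": "production"},  # cost allocation
        {"Key": "Owner", "Value": "platform-team"},      # access control
        {"Key": "Schedule", "Value": "office-hours"},    # automation
    ],
)
```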
AWS IoT Core
Use secure tunneling to establish bidirectional communication to remote devices over a secure connection that is managed by AWS IoT
Using fleet indexing, you can query a group of devices
AWS IoT Device Defender is a security service that allows you to audit the configuration of your devices, monitor connected devices to detect abnormal behavior, and mitigate security risks
Device Advisor is a cloud-based, fully managed test capability for validating IoT devices during device software development
AWS IoT Greengrass
AWS IoT Greengrass is an open source Internet of Things (IoT) edge runtime and cloud service that helps you build, deploy and manage IoT applications on your devices
Amazon Comprehend
Find the documents about a particular subject using Amazon Comprehend topic modeling. Scan a set of documents to determine the topics discussed, and to find the documents associated with each topic. You can specify the number of topics that Amazon Comprehend should return from the document set.
If your company publishes a catalog, let Amazon Comprehend tell you what customers think of your products
Use Amazon Comprehend topic modeling to discover the topics that your customers are talking about on your forums and message boards, then use entity detection to determine the people, places, and things that they associate with the topic. Use sentiment analysis to determine how your customers feel about a topic
Amazon CodeGuru
CodeGuru Security is a rearchitected and redesigned version of CodeGuru Reviewer
Currently, CodeGuru Security is in preview release and is free to use
Types of code scans
1. Code security analysis
2. Code quality analysis
3. Secrets detection
AWS Billing Conductor
AWS Billing Conductor is a custom billing service that can support the showback and chargeback workflows of AWS Solution Providers and Enterprise customers
Group your accounts into mutually exclusive sets of accounts (billing groups)
Apply custom pricing plans
Create and apply one-time or recurring charges or credits to your billing groups
AWS Cost Categories
You can create groupings of costs using cost categories. For example, assume that your business is organized by teams and that each team has multiple accounts within. To build this structure in cost categories, create a cost category named Team. Then, you can map costs to a cost category value that’s named Team 1.
AWS Credits
The order of how credits apply if an AWS account has more than one credit:
- The soonest to expire amongst the credits
- The credit with the least number of eligible services
- The oldest of all credits
Consolidated Billing
Reserved Instances and Savings Plans discount sharing:
To share a Reserved Instance or Savings Plans discount with an account, both accounts must have sharing activated
Amazon Rekognition
Common use cases for using Amazon Rekognition include the following:
Searchable image and video libraries
Face Liveness Detection : helps you verify that a user is physically present in front of the camera and isn’t a bad actor spoofing
Face-based user verification
Facial detection and analysis - can detect and analyze different facial components and attributes, such as: emotional expressions (like happy, sad, or surprised), demographic information (like gender or age)
Facial Search
Unsafe content detection - can detect inappropriate, unwanted, or offensive content
Celebrity recognition
Text detection
Custom labels - you can find your logo in social media posts, identify your products on store shelves
AWS AppConfig
Safely release new capabilities to your customers in a controlled environment
Carefully introduce application changes while testing the impact of those changes with users in production environments
Control access to premium features or instantly block specific users without deploying new code
Keep your configuration data organized and consistent across all of your workloads
Amazon EC2 ASG
Better fault tolerance
Better availability
Better cost management
Strategies:
Maintain a fixed number of instances
Scale based on a schedule - This is useful when you know exactly when to increase or decrease the number of instances in your group
Scale dynamically based on demand
a) Step scaling
b) Simple scaling
c) Target tracking scaling
Step scaling example:
You can define different step adjustments based on the breach size of the alarm. You can define separate scaling policies to handle scaling out (increasing capacity) and scaling in (decreasing capacity) when an alarm threshold is breached
For example:
Scale out by 10 instances if the alarm metric reaches 60 percent
Scale out by 30 instances if the alarm metric reaches 75 percent
Scale out by 40 instances if the alarm metric reaches 85 percent
Simple scaling policies are similar to step scaling policies, except they’re based on a single scaling adjustment
Target tracking scaling example:
You can meet this need by creating a target tracking scaling policy that targets an average CPU utilization of 50 percent. Then, your Auto Scaling group will scale out (increase capacity) when CPU exceeds 50 percent to handle increased load. It will scale in (decrease capacity) when CPU drops below 50 percent to optimize costs during periods of low utilization.
Predictive scaling example:
Consider an application that has high usage during business hours and low usage overnight. At the start of each business day, predictive scaling can add capacity before the first influx of traffic. This helps your application maintain high availability and performance when going from a period of lower utilization to a period of higher utilization
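A minimal boto3 sketch of the target tracking example above (keep average CPU at 50 percent); the Auto Scaling group name is a placeholder:

```python
# Minimal sketch (boto3); the Auto Scaling group name is a placeholder.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-asg",
    PolicyName="cpu-50-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # scale out above 50% CPU, scale in below it
    },
)
```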
AWS CloudFormation
Public and Private extensions
Public extensions - Extensions published publicly in the CloudFormation registry by AWS or third-party publishers (for example, AWS resource types such as AWS Lambda functions or Amazon S3 buckets)
Private extensions - Extensions you register for use only in your own account, such as third-party or custom resource types
AWS CloudTrail
CloudTrail provides three ways to record events:
- Event history
- CloudTrail Lake
- Trails
Amazon CloudWatch
Services used along with Amazon CloudWatch
1. SNS
2. EC2 Auto Scaling
3. CloudTrail
4. IAM
CloudWatch cross-account observability is used to get cross-account, cross-Region metrics, logs, and traces
The CloudWatch agent can collect:
1. Metrics from Amazon EC2 instances across operating systems
2. System-level metrics from on-premises servers
3. Custom metrics from your applications or services
4. Logs from Amazon EC2 instances and on-premises servers
You can use Amazon CloudWatch Synthetics to create canaries, configurable scripts that run on a schedule, to monitor your endpoints and APIs. Canaries check the availability and latency of your endpoints and can store load time data and screenshots of the UI. By using canaries, you can discover issues before your customers do
Amazon EC2 detailed monitoring provides more frequent metrics, published at one-minute intervals, instead of the five-minute intervals used in Amazon EC2 basic monitoring
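A minimal boto3 sketch of enabling detailed monitoring and publishing a custom metric, tying together the custom-metric and detailed-monitoring facts above; the instance ID, namespace, and values are placeholders:

```python
# Minimal sketch (boto3); instance ID, namespace, and values are placeholders.
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Switch the instance from 5-minute basic monitoring to 1-minute detailed monitoring.
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])

# Publish a custom metric from an application or service.
cloudwatch.put_metric_data(
    Namespace="ExampleApp",
    MetricData=[{"MetricName": "OrdersProcessed", "Value": 42, "Unit": "Count"}],
)
```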
AWS Compute Optimizer
Compute Optimizer generates recommendations for the following resources:
- EC2
- EC2 ASG
- EBS
- Lambda functions
- ECS
- Commercial software licenses
Recommendation preferences
1. Rightsizing recommendation preferences - Free
2. Enhanced infrastructure metrics for ASG - Paid
3. Savings estimation mode - Allows Compute Optimizer to analyze specific pricing discounts when generating the estimated cost savings of rightsizing recommendations
AWS Control Tower
AWS Control Tower offers the easiest way to set up and govern a secure, compliant, multi-account AWS environment
AWS Control Tower has the following features:
1. Landing zone
2. Controls
3. Account Factory
4. Dashboard
AWS OpsWorks
AWS OpsWorks is a configuration management service that helps you configure and operate applications in a cloud enterprise by using Puppet or Chef
AWS OpsWorks Services
1. AWS OpsWorks for Puppet Enterprise
2. AWS OpsWorks for Chef Automate
3. AWS OpsWorks Stacks (to be migrated to AWS Systems Manager)
AWS Organizations
In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access. You can also define conditions for when to restrict access to AWS services, resources, and API actions
You can use backup policies to configure and automatically apply AWS Backup plans to resources across all your organization’s accounts
The user can access only what is allowed by both the AWS Organizations policies and IAM policies
AWS Organizations is a global service with a single endpoint that works from any and all AWS Regions. You don’t need to explicitly select a region to operate in
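A minimal boto3 sketch of creating and attaching an SCP, illustrating the restriction capability described above; the policy content and target OU ID are illustrative:

```python
# Minimal sketch (boto3); the policy content and OU ID are placeholders.
import boto3, json

org = boto3.client("organizations")

# Example SCP: prevent member accounts from leaving the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyLeaveOrganization",
    Description="Prevent member accounts from leaving the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",  # an OU, account ID, or root ID
)
```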
Global Services
Services whose data planes are distributed across Regions:
AWS IAM
AWS Organizations
AWS Account Management
Route 53 Application Recovery Controller (ARC)
AWS Network Manager
Route 53 Private DNS
Services whose data planes run at points of presence (PoPs):
Amazon CloudFront
AWS WAF for CloudFront
AWS Global Accelerator
AWS Shield Advanced
Some operations of these services have global impact:
Route 53
Amazon S3
CloudFront
AWS STS
AWS IAM Identity Center
Amazon S3 Storage Lens
AWS Systems Manager
Capabilities:
- Application Manager helps DevOps engineers investigate and remediate issues with their AWS resources.
- Change Manager is for requesting, approving, implementing, and reporting on operational changes to your application configuration and infrastructure
- Use Compliance to scan your fleet of managed nodes for patch compliance and configuration inconsistencies
- Incident Manager is an incident management console that helps users mitigate and recover from incidents affecting their AWS hosted applications
- Use Quick Setup to configure frequently used AWS services and features with recommended best practices
Amazon Elastic Transcoder
You can convert large, high-quality digital media files into formats that users can play back on mobile devices, tablets, web browsers, and connected televisions
You can protect your Elastic Transcoder data by encrypting any input and output files using S3 Server-Side Encryption or Client-Side Encryption Using Customer-Provided Keys
AWS Application Discovery Service
Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. These third-party solutions can help you import details about your on-premises environment directly into Migration Hub, without using any agentless collector or discovery agent.
AWS Application Migration Service
Retire: Retiring the application means that you can shut down the servers within that application stack.
Retain: This is the migration strategy for applications that you want to keep in your source environment or applications that you are not ready to migrate. You might choose to migrate these applications in the future.
Rehost (lift and shift): Rehost is the process of moving applications from your source environment to the AWS Cloud without making any changes to the application.
Relocate: Relocate is transferring a large number of servers, comprising one or more applications, at a given time from on-premises platform to a cloud version of the platform. For example, you can use this strategy to transfer servers in bulk from VMware software-defined data center (SDDC) to VMware Cloud on AWS.
Repurchase (drop and shop): Repurchase means replacing your application with a different version or product.
Replatform (lift, tinker, and shift): Replatform is moving an application to the cloud and introducing some level of optimization in order to operate the application efficiently, reduce costs, or take advantage of cloud capabilities.
Refactor: Refactor is moving an application to the cloud, and modifying its architecture by taking full advantage of cloud-native features to improve agility, performance, and scalability.
AWS DataSync
DataSync works with the following on-premises storage systems:
Network File System (NFS)
Server Message Block (SMB)
Hadoop Distributed File System (HDFS)
Object storage
The only data that DataSync handles at rest relates to the information that it discovers about your on-premises storage system and the details it needs to complete your transfer. DataSync stores this data with full at-rest encryption in Amazon DynamoDB
AWS Transfer Family
AWS Transfer Family supports transferring data from or to the following AWS storage services.
Amazon S3 and Amazon EFS
AWS Transfer Family supports transferring data over the following protocols:
Secure Shell (SSH) File Transfer Protocol (SFTP): version 3
File Transfer Protocol Secure (FTPS)
File Transfer Protocol (FTP)
Applicability Statement 2 (AS2)
Amazon API Gateway
Optimization:
Enabling API caching to enhance responsiveness
Enabling payload compression for an API
Amazon CloudFront
Function code and configuration in CloudFront Functions is always stored in an encrypted format on the edge location POPs, and in other storage locations used by CloudFront
You can improve resiliency and increase availability for specific scenarios by setting up CloudFront with origin failover
AWS Direct Connect
AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable.
AWS Direct Connect does not encrypt your traffic that is in transit by default
Elastic Load Balancing
Application Load Balancer (Layer 7)
Protocols : HTTP or HTTPS
Target Types are:
- Instance ID
- IP address
- Lambda function
When the target type is ip, you can specify IP addresses from one of the following CIDR blocks:
1. The subnets of the VPC for the target group
2. Private CIDR blocks specified by AWS (the RFC 1918 ranges and 100.64.0.0/10)
You can’t specify publicly routable IP addresses.
Network Load Balancer (Layer 4)
Protocols: TCP, TLS, UDP, TCP_UDP
Target Types are:
- Instance ID
- IP address
- Application Load Balancer
Gateway Load Balancer (Gateway Load Balancers listen for all IP packets across all ports. Layer 3)
Protocol : GENEVE
Target Types are:
- Instance ID
- IP address
AWS Global Accelerator
The static IP addresses provided by AWS Global Accelerator serve as single fixed entry points for your clients.
The static IP addresses accept incoming traffic onto the AWS global network from the edge location that is closest to your users
By using a standard accelerator, you can improve the availability and performance of your applications running on Application Load Balancers, Network Load Balancers, or Amazon EC2 instances
Amazon Route 53
Three main functions:
- Register domain names
- Route internet traffic to the resources for your domain
- Check the health of your resources
If Route 53 considers the endpoint unhealthy and you configured notification for the health check, Route 53 publishes the health check status to CloudWatch, and a CloudWatch alarm can notify you (for example, through Amazon SNS)
AWS Direct Connect - Describes establishing a private, logical connection from your remote network to Amazon VPC, using AWS Direct Connect.
AWS Direct Connect + AWS Site-to-Site VPN – Describes establishing a private, encrypted connection from your remote network to Amazon VPC, using AWS Direct Connect and AWS Site-to-Site VPN.
AWS VPN CloudHub – Describes establishing a hub-and-spoke model for connecting remote branch offices.
AWS Site-to-Site VPN
A Site-to-Site VPN connection has the following limitations.
IPv6 traffic is not supported for VPN connections on a virtual private gateway.
An AWS VPN connection does not support Path MTU Discovery.
When connecting your VPCs to a common on-premises network, we recommend that you use non-overlapping CIDR blocks for your networks
AWS Ground Station
AWS Ground Station is a fully managed service that enables you to control satellite communications, process satellite data, and scale your satellite operations
AWS Network Firewall
Intrusion detection and prevention service for your AWS VPC
Filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect
Tasks:
Pass traffic through only from known AWS service domains or IP address endpoints, such as Amazon S3.
Use custom lists of known bad domains to limit the types of domain names that your applications can access.
Perform deep packet inspection on traffic entering or leaving your VPC.
Use stateful protocol detection to filter protocols like HTTPS, independent of the port used.
AWS WAF
Monitor the HTTP(S) requests that are forwarded to your protected web application resources. Protect the following resources:
CloudFront
API Gateway REST API
ALB
AppSync GraphQL API
Amazon Cognito
App Runner
AWS Verified Access instance
You protect by defining a web access control list (ACL)
Tasks:
Allow the requests to go to the protected resource for processing and response.
Block the requests.
Count the requests.
Run CAPTCHA or challenge checks
Amazon EBS
We recommend Amazon EBS for data that must be quickly accessible and requires long-term persistence.
Primary storage for file systems, databases
You can attach multiple EBS volumes to a single instance
The volume and instance must be in the same Availability Zone
You create an EBS volume in a specific Availability Zone, and then attach it to an instance in that same Availability Zone.
To make a volume available outside of the Availability Zone, you can create a snapshot and restore that snapshot to a new volume anywhere in that Region.
You can copy snapshots to other Regions and then restore them to new volumes there, making it easier to leverage multiple AWS Regions for geographical expansion, data center migration, and disaster recovery
When you create snapshots, you incur charges in Amazon S3 based on the size of the data being backed up, not the size of the source volume.
Subsequent snapshots of the same volume are incremental snapshots.
They include only changed and new data written to the volume since the last snapshot was created, and you are charged only for this changed and new data
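A minimal boto3 sketch of snapshotting a volume and copying the snapshot to another Region, as described above; IDs and Regions are placeholders, and in practice you would wait for the snapshot to complete before copying:

```python
# Minimal sketch (boto3); volume ID and Regions are placeholders.
import boto3

ec2_src = boto3.client("ec2", region_name="us-east-1")
ec2_dst = boto3.client("ec2", region_name="eu-west-1")

# Incremental snapshot: only blocks changed since the last snapshot are stored.
snap = ec2_src.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup",
)

# The copy is issued from the destination Region (wait for the snapshot
# to reach the "completed" state before copying in real use).
ec2_dst.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Description="DR copy of nightly backup",
)
```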
AWS Instance Store
Instance store volumes can be attached to an instance only when you launch it. You can’t attach instance store volumes to an instance after you’ve launched it
After you launch an instance with attached instance store volumes, you must mount the volumes before you can access them
The data on an instance store volume persists even if the instance is rebooted.
However, the data does not persist if the instance is stopped, hibernated, or terminated
File Storage
Use Amazon S3 with Amazon EC2
Use Amazon EFS with Amazon EC2
Use Amazon FSx with Amazon EC2
Use Amazon File Cache with Amazon EC2
Amazon EFS is not supported on Windows instances.
EFS automatically grows and shrinks as you add and remove files
AWS Acceptable Use Policy
The main focus is on ensuring responsible and legal use of AWS services: security, compliance, data protection, and intellectual property