Quick Facts Flashcards
Amazon S3
All Amazon S3 buckets have encryption configured by default
Amazon S3 managed keys (SSE-S3) is the default encryption configuration
S3 Object Lock uses “Write Once Read Many” (WORM) model
S3 Object Lock helps prevent accidental or inappropriate deletion of data
Stores data across multiple Availability Zones by default
With S3 Cross-Region Replication (CRR), you can replicate data between distant AWS Regions
Use S3 Inventory to audit and report for business, compliance, and regulatory needs
Enable Amazon S3 server access logging to provide detailed records of the bucket requests
S3 Storage Lens is a cloud-storage analytics feature
Trusted Advisor checks Amazon S3 buckets for “open access permissions”
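The default SSE-S3 configuration above can be sketched as the JSON document used by the S3 PutBucketEncryption API; “AES256” is the algorithm identifier that selects Amazon S3 managed keys:

```python
import json

# Sketch of a bucket's default encryption configuration (SSE-S3), in the
# shape accepted by the S3 PutBucketEncryption API. "AES256" selects
# Amazon S3 managed keys, the default for all new buckets.
default_encryption = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }
    ]
}

print(json.dumps(default_encryption, indent=2))
```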
Amazon AppFlow
Securely exchange data between software as a service (SaaS) applications, such as Salesforce, and AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift
Amazon AppFlow encrypts your data at rest and in transit
Integrates with AWS PrivateLink to provide private data transfer
Amazon Athena
You can run queries in Amazon Athena on encrypted data in Amazon S3 in the same Region and across a limited number of Regions
You can encrypt query results in Amazon S3 and metadata in the AWS Glue Data Catalog
Amazon Athena uses Transport Layer Security (TLS) encryption for data in-transit between Athena and Amazon S3
Monitor Athena usage with CloudTrail and Amazon QuickSight
AWS Lake Formation allows you to define and enforce database, table, and column-level access policies when using Athena queries to read data stored in Amazon S3
AWS Lake Formation helps you centrally govern, secure, and globally share data for analytics and machine learning
Amazon Athena makes it easy to interactively run data analytics and exploration using Apache Spark
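The result encryption described above can be sketched as the parameters for Athena’s StartQueryExecution API (SSE_S3 here; SSE_KMS and CSE_KMS are the other options). The bucket, database, and table names are made up:

```python
# Hypothetical StartQueryExecution parameters showing how query results
# written to Amazon S3 can be encrypted. Names are illustrative only.
query_params = {
    "QueryString": "SELECT * FROM logs LIMIT 10",
    "QueryExecutionContext": {"Database": "example_db"},
    "ResultConfiguration": {
        "OutputLocation": "s3://example-athena-results/",
        # SSE_S3 = server-side encryption with S3 managed keys
        "EncryptionConfiguration": {"EncryptionOption": "SSE_S3"},
    },
}
```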
AWS Data Exchange
AWS Data Exchange is a service that helps AWS customers easily share and manage data entitlements from other organizations at scale
As a data receiver, you can track and manage all of your data grants and AWS Marketplace data subscriptions in one place
For data senders, it eliminates the need to build and maintain any data delivery and entitlement infrastructure
AWS Data Exchange always encrypts all data products stored in the service at rest without requiring any additional configuration. This encryption is automatic when you use AWS Data Exchange
AWS Data Pipeline
Automate the movement and transformation of data
Define data-driven workflows
For example, you can use AWS Data Pipeline to archive your web server’s logs to Amazon Simple Storage Service (Amazon S3) each day and then run a weekly Amazon EMR cluster over those logs to generate traffic reports
Amazon EMR
It’s a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark
A cluster consists of EC2 instances. Each EC2 instance is called a node, and there are three types of nodes – Primary, Core, and Task
Regional Service
It can run on Outposts or Local Zones
Amazon EC2 security groups act as a virtual firewall for Amazon EMR cluster instances, limiting inbound and outbound network traffic.
Amazon EMR block public access (BPA) prevents you from launching a cluster in a public subnet
Secure Shell (SSH) helps provide a secure way for users to connect to the command line on cluster instances
You can use Amazon EMR security configurations to configure data encryption
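An EMR security configuration is a JSON document; a minimal sketch enabling at-rest and in-transit encryption might look like the following (the field names are abbreviated from the EMR documentation; verify them before use):

```python
import json

# Sketch of an EMR security configuration that turns on encryption and
# uses SSE-S3 for data written to Amazon S3. Assumed shape; check the
# EMR security configuration reference for the full schema.
security_configuration = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": True,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {"EncryptionMode": "SSE-S3"}
        },
    }
}

print(json.dumps(security_configuration, indent=2))
```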
AWS Glue
Helps you to discover, prepare, move, and integrate data from multiple sources
You can manage your data in a centralized data catalog
You can immediately search and query cataloged data using Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum
AWS Glue DataBrew – A visual data preparation tool that you can use to clean and normalize data without writing any code
Athena, Redshift Spectrum, and EMR use the AWS Glue Data Catalog
AWS Glue Streaming is more focused on high-level ETL processing, while Kinesis Data Streams provides a lower-level streaming platform
AWS Glue provides statistics about the health of your environment
Amazon Kinesis Data Streams
Used to collect and process large streams of data records in real time
A common use is the real-time aggregation of data followed by loading the aggregate data into a data warehouse or map-reduce cluster
The Kinesis Client Library enables fault-tolerant consumption of data from streams and provides scaling support for Kinesis Data Streams applications
Kinesis Client Library (KCL) provides metrics per shard, worker, and KCL application
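Kinesis routes each record to a shard by hashing its partition key (MD5) into a shard’s hash-key range, which is what keeps records with the same key in order. A simplified local sketch of that routing, assuming evenly split shards:

```python
import hashlib

# Simplified model of Kinesis shard routing: real Kinesis compares the
# 128-bit MD5 hash of the partition key against explicit per-shard
# hash-key ranges; integer division over even splits approximates that.
def shard_for(partition_key: str, shard_count: int) -> int:
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * shard_count // 2**128

# records with the same partition key always land on the same shard,
# which preserves per-key ordering
assert shard_for("customer-42", 4) == shard_for("customer-42", 4)
```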
Amazon Redshift
It has on-demand pricing for provisioned capacity, billed by the hour
Amazon Redshift Serverless starts as low as $3 per hour; you pay only for the compute capacity your data warehouse consumes when it is active
Managed storage pricing - You pay for data stored in managed storage at a fixed GB-month rate for your region
Amazon Redshift Spectrum allows you to directly run SQL queries against exabytes of data in Amazon S3
Amazon Redshift Spectrum pricing - You are charged for the number of bytes scanned by Redshift Spectrum, rounded up to the next megabyte
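The rounding rule above can be sketched as a small cost function. The $5-per-TB rate and the 10 MB per-query minimum are assumptions here; check current Redshift Spectrum pricing:

```python
import math

# Sketch of the Redshift Spectrum charging rule: bytes scanned are rounded
# up to the next megabyte. Rate and per-query minimum are assumed values.
MB = 1024 ** 2
TB = 1024 ** 4

def spectrum_cost(bytes_scanned: int, usd_per_tb: float = 5.0) -> float:
    billed_mb = max(math.ceil(bytes_scanned / MB), 10)  # assumed 10 MB minimum
    return billed_mb * MB / TB * usd_per_tb

# a query scanning just over 11 MB is billed as 12 MB
assert spectrum_cost(11 * MB + 1) == spectrum_cost(12 * MB)
```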
Amazon EventBridge
An event indicates a change in an environment. Events are represented as JSON objects and they all have a similar structure, and the same top-level fields
1) Amazon EC2 generates an event when the state of an instance changes from pending to running.
2) Amazon EC2 Auto Scaling generates events when it launches or terminates instances.
3) AWS CloudTrail publishes events when you make API calls.
You could create a pipe with a DynamoDB stream for a source, and an event bus as the target. The pipe receives events from the DynamoDB stream and sends them to the event bus, which then sends them on to multiple targets according to the rules you’ve specified on the event bus
EventBridge encrypts event metadata and message data that it stores. By default, EventBridge encrypts data using 256-bit Advanced Encryption Standard (AES-256) under an AWS owned key
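The shared top-level fields mentioned above can be seen in a sketch of an EC2 state-change event (account ID, instance ID, and timestamps are illustrative placeholders):

```python
import json

# Illustrative EventBridge event for an EC2 instance state change.
# Every event shares these top-level fields; service-specific data
# lives under "detail".
event = {
    "version": "0",
    "id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "123456789012",  # placeholder account ID
    "time": "2024-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234"],
    "detail": {"instance-id": "i-0abcd1234", "state": "running"},
}

print(json.dumps(event, indent=2))
```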
Amazon MQ
A message broker allows software applications and components to communicate using various programming languages, operating systems, and formal messaging protocols
Amazon SQS and Amazon SNS are queue and topic services that are highly scalable, simple to use, and don’t require you to set up message brokers. Amazon MQ is a managed message broker service that provides compatibility with many popular message brokers.
We recommend Amazon MQ for migrating applications from existing message brokers. We recommend SQS and SNS for new applications.
When you create an Amazon MQ for ActiveMQ broker or an Amazon MQ for RabbitMQ broker, you can specify the AWS KMS key that you want Amazon MQ to use to encrypt your data at rest. If you do not specify a KMS key, Amazon MQ creates an AWS owned KMS key for you and uses it on your behalf
Amazon SNS
You can develop an application that publishes a message to an SNS topic whenever an order is placed for a product. Then, SQS queues that are subscribed to the SNS topic receive identical notifications for the new order. An Amazon Elastic Compute Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle the processing or fulfillment of the order
You can receive immediate notification when an event occurs, such as a specific change to your Amazon EC2 Auto Scaling group, a new file uploaded to an Amazon S3 bucket, or a metric threshold breached in Amazon CloudWatch
You could send e-commerce order confirmations as user notifications to individuals or groups
You can use Amazon SNS to send update notifications to an app. The notification message can include a link to download and install the update
You have to enable encryption for data stored in SNS topics; it is not enabled by default
Message data protection safeguards the data that’s published to your Amazon SNS topics by using data protection policies to audit, mask, redact, or block the sensitive information that moves between applications or AWS services
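The fan-out pattern described above (one published message, identical copies to every subscribed queue) can be modeled locally; real code would use the SNS Publish API with SQS queue subscriptions:

```python
from collections import deque

# Local simulation of SNS fan-out: a topic delivers an identical copy of
# each published message to every subscribed queue.
class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for q in self.subscribers:
            q.append(message)  # every subscriber gets its own copy

orders = Topic()
fulfillment, analytics = deque(), deque()
orders.subscribe(fulfillment)
orders.subscribe(analytics)
orders.publish({"order_id": 1, "item": "book"})

assert fulfillment[0] == analytics[0]  # identical notifications
```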
Amazon SQS
SQS and SNS are NOT synchronous and do NOT use message brokers like Amazon MQ does
Amazon SQS supports two types of queues – standard queues and FIFO queues
A standard queue is needed if you have to send data between applications when throughput is important
A FIFO queue is needed if you have to send data between applications when the order of events is important
SSE protects the contents of messages in queues using SQS-managed encryption keys (SSE-SQS) or keys managed in the AWS Key Management Service (SSE-KMS)
Distributed queues are an example of resilience
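The FIFO guarantees above (strict ordering per message group, duplicates dropped by deduplication ID) can be modeled locally; this is a sketch, not the SQS API:

```python
# Local model of a FIFO queue: ordering is preserved and a repeated
# deduplication ID within the dedup window is silently dropped.
class FifoQueue:
    def __init__(self):
        self.messages = []
        self.seen_dedup_ids = set()

    def send(self, body, group_id, dedup_id):
        if dedup_id in self.seen_dedup_ids:
            return False  # duplicate is dropped (exactly-once delivery)
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append((group_id, body))
        return True

q = FifoQueue()
q.send("order-created", group_id="order-1", dedup_id="a")
q.send("order-created", group_id="order-1", dedup_id="a")  # duplicate
q.send("order-shipped", group_id="order-1", dedup_id="b")

# the duplicate was dropped and ordering is preserved
assert [body for _, body in q.messages] == ["order-created", "order-shipped"]
```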
AWS Step Functions
Step Functions has two workflow types. Standard workflows have exactly-once workflow execution and can run for up to one year. This means that each step in a Standard workflow will execute exactly once. Express workflows, however, have at-least-once workflow execution and can run for up to five minutes
Standard workflows are ideal for long-running, auditable workflows, as they show execution history and visual debugging. Express workflows are ideal for high-event-rate workloads, such as streaming data processing and IoT data ingestion
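A Step Functions workflow of either type is defined in Amazon States Language (ASL); a minimal single-state sketch (Standard vs. Express is chosen when the state machine is created, not in the definition):

```python
import json

# Minimal Amazon States Language definition: one Pass state that emits a
# fixed result and ends the execution.
definition = {
    "Comment": "Minimal workflow sketch",
    "StartAt": "SayHello",
    "States": {
        "SayHello": {"Type": "Pass", "Result": "Hello", "End": True}
    },
}

print(json.dumps(definition, indent=2))
```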
The Amazon Simple Workflow Service (Amazon SWF) provides a way to build, run, and scale background jobs that have parallel or sequential steps
Amazon Chime
You can use Amazon Chime for online meetings, video conferencing, calls, and chat
Amazon Pinpoint
You can use Amazon Pinpoint to send push notifications, in-app notifications, emails, text messages, voice messages, and messages over custom channels.
Using the analytics that Amazon Pinpoint provides, you can gain insight into your user base by viewing trends related to user engagement, campaign outreach, revenue, and more
Amazon SES
Amazon Simple Email Service (SES) is an email platform that provides an easy, cost-effective way for you to send and receive email using your own email addresses and domains.
You can send marketing emails such as special offers, transactional emails such as order confirmations, and other types of correspondence such as newsletters
AWS Savings Plans
Compute Savings Plans, EC2 Instance Savings Plans and SageMaker Savings Plans
With Compute Savings Plans, you can move a workload from C5 to M5, shift your usage from EU (Ireland) to EU (London), or migrate your application from Amazon EC2 to Amazon ECS using Fargate at any time.
With an EC2 Instance Savings Plan, you can change your instance size within the instance family (for example, from c5.xlarge to c5.2xlarge) or the operating system (for example, from Windows to Linux), or move from Dedicated tenancy to Default and continue to receive the discounted rate provided by your EC2 Instance Savings Plan
With SageMaker Savings Plans, you can move a workload from ml.c5 to ml.m5, shift your usage from Europe (Ireland) to Europe (London), or migrate your usage from Training to Inference at any time and continue to receive benefits
Savings Plans recommendations can be found in the AWS Cost Management console under Savings Plans -> Recommendations
OR
You can also receive your Savings Plans recommendations via the AWS Cost Explorer API
Automating Savings Plans with Amazon EventBridge
AWS App Runner
Provides a fast, simple, and cost-effective way to deploy from source code or a container image directly to a scalable and secure web application in the AWS Cloud.
AWS App Runner takes your source code or source image from a repository, and creates and maintains a running web service for you in the AWS Cloud
AWS Batch
CloudWatch Container Insights for AWS Batch collects, aggregates, and summarizes metrics and logs from your AWS Batch compute environments and jobs. The metrics include CPU, memory, disk, and network utilization. You can add these metrics to CloudWatch dashboards.
EC2
Leverage security groups as the primary mechanism for controlling network access to Amazon EC2 instances. When necessary, use network ACLs sparingly to provide stateless, coarse-grain network control
Customers often use the Windows Firewall for further visibility into network traffic and to complement security group filters
Resilience is achieved by replicating your data or applications over greater geographic distances using AWS Local Zones
AND/OR
Copying AMIs and EBS snapshots across Regions
AND/OR
Use Amazon EC2 Auto Scaling
AND/OR
Distributing incoming traffic across multiple instances in a single Availability Zone or multiple Availability Zones using Elastic Load Balancing
Amazon EC2 stores the public key on your instance, and you store the private key
Security group rules enable you to filter traffic based on protocols and port numbers, they are stateful, you can add and remove rules at any time and you can associate multiple security groups with an instance
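An ingress rule can be sketched in the shape used by the EC2 AuthorizeSecurityGroupIngress API; the rule below allows HTTPS from anywhere (illustrative only, and because security groups are stateful the response traffic is allowed automatically):

```python
# Hypothetical IpPermissions entry for AuthorizeSecurityGroupIngress:
# allow inbound TCP 443 from any IPv4 address.
https_ingress = [
    {
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [
            {"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}
        ],
    }
]
```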
AWS Elastic Beanstalk
You can also perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console)
Elastic Beanstalk provides a managed updates feature. This feature automatically applies patch and minor updates for an Elastic Beanstalk supported platform version
The basic health reporting system provides information about the health of instances in an Elastic Beanstalk environment based on health checks performed by Elastic Load Balancing for load-balanced environments, or Amazon Elastic Compute Cloud for single-instance environments
Elastic Beanstalk also monitors the other resources in your environment and reports missing or incorrectly configured resources that can cause your environment to become unavailable to users
Additional Checks:
The environment’s Auto Scaling group is available and has at least one instance.
The environment’s security group is available and is configured to allow incoming traffic on port 80
The environment CNAME exists and is pointing to the right load balancer.
In a worker environment, the Amazon Simple Queue Service (Amazon SQS) queue is being polled at least once every three minutes
With basic health reporting, the Elastic Beanstalk service does not publish any metrics to Amazon CloudWatch
Elastic Beanstalk stores various objects in an Amazon Simple Storage Service (Amazon S3) bucket that it creates for each AWS Region in which you create environments. Elastic Beanstalk doesn’t turn on default encryption for the Amazon S3 bucket that it creates
AWS Lambda
The Lambda service runs your code inside an execution environment
Execution environments run on hardware-virtualized micro virtual machines (microVMs), which are dedicated to a single AWS account
The Lambda service stores your function code in an internal S3 bucket that’s private to your account
Lambda execution environments handle one request at a time. After the invocation has ended, the execution environment is retained for a period of time. If another request arrives, the environment is reused to handle the subsequent request
If requests arrive simultaneously, the Lambda service scales up the Lambda function to provide multiple execution environments
When not to use a Lambda function:
For functions that act as orchestrators. It is better to move the orchestration flow to AWS Step Functions
Lambda functions that transport data from one service to another without performing any business logic
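The execution model above can be sketched with a minimal handler; the Lambda service calls it once per request, and the event shape here is a made-up example, not a fixed AWS schema:

```python
# Minimal Lambda handler sketch. Lambda invokes `handler(event, context)`
# inside an execution environment; the event payload is caller-defined.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# invoked locally the same way Lambda would call it (context unused here)
assert handler({"name": "Ada"}, None)["body"] == "Hello, Ada!"
```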
Amazon Lightsail
AWS and Lightsail do not update or patch the operating system or applications on instances after you create them
Lightsail also does not update or patch the operating system and software that you configure on your Lightsail container services
Amazon Lightsail reports metric data for instances, databases, content delivery network (CDN) distributions, load balancers, container services, and buckets. You can view and monitor this data in the Lightsail console
Lightsail also offers Amazon Lightsail for Research
AWS Local Zones
There’s no additional charge for enabling Local Zones. You pay only for the resources that you deploy in your Local Zones. AWS resources in Local Zones have different prices than they do in parent AWS Regions
Local Zones have their own connections to the internet and support AWS Direct Connect
To use a Local Zone, you must first enable it. Next, you create a subnet in the Local Zone. Finally, you launch resources in the Local Zone subnet
Amazon Elastic Container Registry (ECR)
An Amazon ECR private registry is provided to each AWS account; you can create one or more repositories in your registry and store Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts in them
Image scanning helps in identifying software vulnerabilities in your container images
Cross-Region and cross-account replication
Amazon ECR stores images in Amazon S3 buckets
By default, Amazon ECR uses server-side encryption with Amazon S3-managed encryption keys
Use the CloudWatch console to visualize your service quotas
Amazon Elastic Container Service (ECS)
There are three layers in Amazon ECS:
Capacity - The infrastructure where your containers run
Controller - Deploy and manage your applications that run on the containers
Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers
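The capacity layer runs containers described by a task definition; a minimal sketch in the RegisterTaskDefinition shape (family, image, and resource values are made up):

```python
# Hypothetical ECS task definition fragment: one essential container
# listening on port 80. Values are illustrative, not recommendations.
task_definition = {
    "family": "example-web",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",  # assumed image
            "cpu": 256,
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 80}],
        }
    ],
}
```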
Runtime Monitoring in GuardDuty is an intelligent threat detection service that protects workloads running on Fargate and EC2 container instances by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior
AWS KMS
Create multi-Region keys, which act like copies of the same KMS key in different AWS Regions.
Each AWS KMS key that you create in AWS KMS costs $1/month (prorated hourly)
You are not charged for the following:
Creation and storage of AWS managed or AWS owned KMS keys
Customer managed KMS keys that are scheduled for deletion
By default, AWS KMS generates and protects the cryptographic key material for KMS keys
AWS IQ
AWS IQ connects you to AWS Certified experts for hands-on help for your AWS projects. You create a request and choose from experts who respond
AWS Support Plans
All AWS customers automatically have 24x7 access to these features of Basic Support:
- One-on-one responses to account and billing questions
- Support forums
- Service health checks
- Documentation, technical papers, and best practice guides
The Developer Support plan has access to these additional features:
- Best practice guidance
- Client-side diagnostic tools
- Building-block architecture support: guidance on how to use AWS products, features, and services together
- Supports an unlimited number of support cases that can be opened by one primary contact, which is the AWS account root user.
In addition, Business, Enterprise On-Ramp, and Enterprise Support plans have access to these features:
- Use-case guidance
- AWS Trusted Advisor
- The AWS Support API
- Third-party software support
- Supports an unlimited number of AWS (IAM) users
In addition, Enterprise On-Ramp and Enterprise Support plans have access to these features:
- Application architecture guidance
- Infrastructure event management
- Technical account manager
- White-glove case routing
- Management business reviews
AWS support plan response time
Developer:
General guidance: < 24 hours
System impaired: < 12 hours
Additional in Business:
Production system impaired: < 4 hours
Production system down: < 1 hour
Additional in Enterprise On-Ramp:
Business-critical system down: < 30 minutes
Additional in Enterprise:
Business/Mission-critical system down: < 15 minutes
Amazon Aurora
Compatible with MySQL and PostgreSQL
Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region
For high availability across multiple AWS Regions, you can set up Aurora global databases
An Aurora global database consists of one primary AWS Region where your data is written, and up to five read-only secondary AWS Regions
Aurora global databases currently don’t support Aurora Auto Scaling for secondary DB clusters
Amazon Aurora can encrypt your Amazon Aurora DB clusters. Data that is encrypted at rest includes the underlying storage for DB clusters, its automated backups, read replicas, and snapshots
You can’t turn off encryption on an encrypted DB cluster
Aurora backs up your cluster volume automatically and retains restore data for the length of the backup retention period
Amazon DocumentDB
Amazon DocumentDB supports two types of clusters: instance-based clusters and elastic clusters. Elastic clusters support workloads with millions of reads/writes per second and petabytes of storage capacity
Instance based:
Amazon DocumentDB automatically grows the size of your storage volume
Amazon DocumentDB lets you scale the compute and memory resources for each of your instances up or down
Amazon DocumentDB runs in Amazon Virtual Private Cloud (Amazon VPC)
Amazon DocumentDB continuously monitors the health of your cluster
On instance failure, Amazon DocumentDB automates failover to one of up to 15 Amazon DocumentDB replicas that you create in other Availability Zones. If no replicas have been provisioned and a failure occurs, Amazon DocumentDB tries to create a new Amazon DocumentDB instance automatically
The backup capability allows you to restore your cluster to any second during your retention period, up to the last 5 minutes
Amazon MemoryDB
MemoryDB at-rest encryption is always enabled
Amazon Neptune
Fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets
With Amazon Neptune, you don’t have to worry about database management tasks like hardware provisioning, software patching, setup, configuration, or backups.
Amazon QLDB
You can use Amazon QLDB to track all application data changes, and maintain a complete and verifiable history of changes over time
All data stored in Amazon QLDB is fully encrypted at rest by default
QLDB doesn’t provide a dedicated backup and related restore feature at this time