Add Stuff Flashcards
T/F: DataSync can only be used to transfer data between an on-premises source and a destination within an AWS VPC.
What is the difference between the following 2 DataSync modes?
- Transfer only data that has changed
- Transfer all data
F: For example, a solution may need to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket; no on-premises source is involved.
- Transfer only data that has changed – DataSync copies only the data and metadata that differ between the source and destination locations.
- Transfer all data – DataSync copies everything in the source to the destination without comparing differences between the locations.
https://docs.aws.amazon.com/datasync/latest/userguide/configure-metadata.html
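A minimal boto3 sketch of choosing between these two modes via the task's TransferMode option; the location ARNs below are placeholders, and the option values should be checked against the DataSync API reference.

```python
import boto3

datasync = boto3.client("datasync")

# Hypothetical location ARNs; replace with the ARNs of your configured locations.
SOURCE_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-source"
DEST_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-destination"

# TransferMode "CHANGED" copies only data/metadata that differ between locations;
# "ALL" copies everything without comparing source and destination.
task = datasync.create_task(
    SourceLocationArn=SOURCE_ARN,
    DestinationLocationArn=DEST_ARN,
    Name="incremental-copy",
    Options={"TransferMode": "CHANGED"},
)
print(task["TaskArn"])
```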
___________ Acts as a managed service to create, publish, and secure APIs at scale. Allows the creation of API endpoints that can be integrated with other web applications.
Amazon API Gateway: Acts as a managed service to create, publish, and secure APIs at scale. Allows the creation of API endpoints that can be integrated with other web applications.
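As a rough illustration of creating such an endpoint, here is a hedged boto3 sketch that builds an HTTP API with a Lambda proxy integration; the API name, route, and function ARN are made-up placeholders.

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Create an HTTP API (names are illustrative).
api = apigw.create_api(Name="orders-api", ProtocolType="HTTP")

# Point a route at a backend Lambda function via a proxy integration.
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="AWS_PROXY",
    IntegrationUri="arn:aws:lambda:us-east-1:111122223333:function:orders-handler",
    PayloadFormatVersion="2.0",
)
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="POST /orders",
    Target=f"integrations/{integration['IntegrationId']}",
)
```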
___________ is used to capture and upload streaming data to other AWS services. For example, you can capture customer activity across different web applications and store it in an Amazon S3 bucket to run analytics and make predictions.
Amazon Kinesis Data Firehose: Used to capture and upload streaming data to other AWS services. In this case, captured customer activity can be delivered to an Amazon S3 bucket for analytics and predictions.
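A small boto3 sketch of that pattern, assuming a delivery stream named clickstream-to-s3 already exists and is configured with an S3 destination.

```python
import json
import boto3

firehose = boto3.client("firehose")

# "clickstream-to-s3" is a hypothetical delivery stream with an S3 destination.
event = {"user_id": "u-123", "action": "add_to_cart", "ts": "2024-01-01T12:00:00Z"}
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```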
____________ Provides a way to control access to your APIs using Lambda functions. Allows you to implement custom authorization logic and ensures that the authorization step is performed securely.
API Gateway Lambda Authorizer: Provides a way to control access to your APIs using Lambda functions. Allows you to implement custom authorization logic. This solution offers scalability, the ability to handle unpredictable surges in activity, and integration capabilities. Using an API Gateway Lambda authorizer ensures that the authorization step is performed securely.
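A minimal sketch of a Lambda TOKEN authorizer handler; the token check is a placeholder for real validation logic (for example, verifying a JWT).

```python
# Minimal Lambda TOKEN authorizer: API Gateway passes the caller's token and the
# method ARN; the function returns an IAM policy that allows or denies the call.
def lambda_handler(event, context):
    token = event.get("authorizationToken", "")
    # Placeholder check; real logic would validate a JWT or look up the token.
    effect = "Allow" if token == "expected-secret-token" else "Deny"
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```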
______________ is an in-memory data store that can be used to store session data. It offers high availability and persistence options, making it suitable for maintaining session state.
Amazon ElastiCache for Redis: Redis is an in-memory data store that can be used to store session data. It offers high availability and persistence options, making it suitable for maintaining session state.
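A short redis-py sketch of storing session state in ElastiCache for Redis with a TTL; the endpoint hostname is a placeholder.

```python
import json
import redis  # redis-py client

# Placeholder for the ElastiCache for Redis primary endpoint.
r = redis.Redis(host="my-cluster.xxxxxx.use1.cache.amazonaws.com", port=6379)

def save_session(session_id: str, data: dict, ttl_seconds: int = 3600) -> None:
    # SETEX stores the session with an expiry so stale sessions are evicted.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```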
What would you use to ensure that sticky sessions can still be maintained even if an EC2 instance is unavailable or replaced due to automatic scaling (e.g., maintaining sticky sessions when using an Auto Scaling group)?
Sticky sessions and Auto Scaling groups: Using ElastiCache for Redis enables centralized storage of session state, ensuring that sticky sessions can still be maintained even if an EC2 instance is unavailable or replaced due to automatic scaling.
In what situation with microservices is it advantageous to use the API Gateway over an ALB to direct incoming requests to the appropriate microservices housed on an EKS backend?
Use Amazon API Gateway to connect requests to Amazon EKS when you want the solution to be cost-effective.
You are charged for each hour or partial hour that an Application Load Balancer is running, plus the number of load balancer capacity units (LCUs) used per hour. With Amazon API Gateway, you only pay when your APIs are in use.
https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
When using ElastiCache for Redis, what configuration would be the most appropriate option to achieve high availability at both the node level and the AWS Region level?
Multi-AZ Redis Replication Groups with shards containing multiple nodes is the most appropriate option to achieve high availability at both the node level and the AWS Region level in Amazon ElastiCache for Redis.
What ElastiCache for Redis configuration provides high availability at a regional level?
Multi-AZ Redis Replication Groups: Amazon ElastiCache provides Multi-AZ support for Redis, allowing the creation of replication groups that span multiple availability zones (AZs) within a region. This guarantees high availability at a regional level.
What ElastiCache for Redis configuration provides scalability and redundancy at the node level, contributing to high availability and performance?
Shards with Multi-node: Shards within the replication group can contain multiple nodes, providing scalability and redundancy at the node level. This contributes to high availability and performance.
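A hedged boto3 sketch of creating such a replication group: two shards, each with replicas spread across AZs, with Multi-AZ and automatic failover enabled. The group name, node type, and counts are illustrative.

```python
import boto3

elasticache = boto3.client("elasticache")

# Two shards (node groups), each with one primary and two replicas in different AZs.
elasticache.create_replication_group(
    ReplicationGroupId="sessions-redis",
    ReplicationGroupDescription="Session store with Multi-AZ and sharding",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=2,           # shards
    ReplicasPerNodeGroup=2,    # replicas per shard
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
)
```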
What EC2 option allows EC2 instances to persist their in-memory state to Amazon EBS? When in use, it allows an instance to quickly resume with its previous memory state intact. This is particularly useful for reducing startup time and loading memory quickly.
EC2 On-Demand Instances with Hibernation: Hibernation allows EC2 instances to persist their in-memory state to Amazon EBS. When an instance is hibernated, it can quickly resume with its previous memory state intact. This is particularly useful for reducing startup time and loading memory quickly.
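A boto3 sketch of launching a hibernation-enabled instance and later hibernating it; the AMI ID is a placeholder, and hibernation also requires an encrypted EBS root volume and a supported instance type.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an instance with hibernation enabled (placeholder AMI and instance type).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = resp["Instances"][0]["InstanceId"]

# Later, hibernate instead of a normal stop; RAM is saved to the EBS root volume.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```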
What Auto Scaling feature lets you keep a pool of pre-initialized instances on standby even when demand is low, which helps reduce the time it takes for an instance to become fully productive?
EC2 Auto Scaling Warm Pools: A warm pool keeps a set of pre-initialized instances (stopped, hibernated, or running) alongside the Auto Scaling group, even when demand is low. Because these instances have already completed initialization, they can be placed into service quickly when demand increases, which reduces the time it takes for an instance to become fully productive.
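A minimal boto3 sketch of attaching a warm pool to an existing Auto Scaling group; the group name and sizes are illustrative.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep pre-initialized instances on standby for the group "web-asg" (hypothetical name).
# PoolState controls whether warm instances sit Stopped, Running, or Hibernated.
autoscaling.put_warm_pool(
    AutoScalingGroupName="web-asg",
    MinSize=2,
    PoolState="Stopped",
)
```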
As a serverless offering, what does AWS Step Functions do?
AWS Step Functions allows you to orchestrate and scale distributed processing using the Map state. The Map state can process elements in a large data set in parallel by distributing work across multiple resources.
Step Functions is serverless, so there are no servers to manage. It will automatically scale based on demand.
AWS Step Functions is a fully managed service that makes it easier to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function helps you scale more easily and change applications more quickly.
Step Functions is a reliable way to coordinate components and step through the functions of your application. Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps. This makes it easier to build and run multi-step applications.
Step Functions automatically triggers and tracks each step and retries when there are errors, so your application executes in order and as expected. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems more quickly.
https://docs.aws.amazon.com/step-functions/latest/dg/use-dist-map-orchestrate-large-scale-parallel-workloads.html
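For context, a boto3 sketch of starting a Step Functions execution; the state machine ARN and input are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Kick off a workflow run (placeholder state machine ARN and input).
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:111122223333:stateMachine:order-pipeline",
    input=json.dumps({"orderId": "1234"}),
)
print(execution["executionArn"])
```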
Describe the AWS Step Functions feature known as using the Map state in Distributed mode.
Using the Map state in Distributed mode automatically takes care of parallel processing and scaling. Step Functions adds more workers to process the data as needed.
To set up a large-scale parallel workload in your workflows, include a Map state in Distributed mode. A Map state set to Distributed mode is known as a Distributed Map state and allows high-concurrency processing: it processes the items in a dataset in iterations called child workflow executions. You can specify the number of child workflow executions that can run in parallel, and each child workflow execution has its own execution history, separate from that of the parent workflow.
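A sketch of a Distributed Map state in Amazon States Language, written here as a Python dict; the S3 bucket, key, and Lambda function name are hypothetical, and the field names should be verified against the Step Functions documentation linked above.

```python
# ASL for a Map state in Distributed mode, expressed as a Python dict.
# Bucket, key, and function name are hypothetical placeholders.
distributed_map_state = {
    "Type": "Map",
    "ItemReader": {
        # Read the dataset directly from an S3 object.
        "Resource": "arn:aws:states:::s3:getObject",
        "ReaderConfig": {"InputType": "JSON"},
        "Parameters": {"Bucket": "example-dataset-bucket", "Key": "items.json"},
    },
    "ItemProcessor": {
        # DISTRIBUTED mode runs each iteration as a separate child workflow execution.
        "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
        "StartAt": "ProcessItem",
        "States": {
            "ProcessItem": {
                "Type": "Task",
                "Resource": "arn:aws:states:::lambda:invoke",
                "Parameters": {"FunctionName": "process-item", "Payload.$": "$"},
                "End": True,
            }
        },
    },
    "MaxConcurrency": 100,  # cap on parallel child workflow executions
    "End": True,
}
```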
T/F
There is nothing preventing you from transitioning objects to S3 Standard-IA or S3 One Zone-IA immediately after upload. For example, you can create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you create them.
F
Before you transition objects to S3 Standard-IA or S3 One Zone-IA, you must store them for at least 30 days in Amazon S3. For example, you cannot create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you create them. Amazon S3 doesn’t support this transition within the first 30 days because newer objects are often accessed more frequently or deleted sooner than is suitable for S3 Standard-IA or S3 One Zone-IA storage. Similarly, if you are transitioning noncurrent objects (in versioned buckets), you can transition only objects that are at least 30 days noncurrent to S3 Standard-IA or S3 One Zone-IA storage.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
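A boto3 sketch of a Lifecycle rule that respects the 30-day minimum when transitioning to S3 Standard-IA; the bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under "logs/" to S3 Standard-IA after 30 days (the minimum allowed).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```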