Practice Test Pearson Flashcards
Can I have cross-region replicas with Amazon Aurora?
Yes, with Aurora MySQL you can set up cross-region Aurora Replicas using either logical or physical replication.
Logical replication can replicate to up to five secondary AWS regions. Physical replication, called Aurora Global Database, uses dedicated infrastructure that leaves your databases entirely available to serve your application, and can replicate to one secondary region with typical latency of under a second. For low-latency reads and disaster recovery, Global Database is recommended.
Aurora PostgreSQL does not currently support cross-region replicas.
What are Aurora replicas?
Aurora replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability.
Why do Aurora replicas work well for read scaling?
Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write operations are managed by the primary instance. Because the cluster volume is shared among all DB instances in your DB cluster, minimal additional work is required to replicate a copy of data for each Aurora Replica.
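As an illustration, an application typically sends writes to the cluster (writer) endpoint and reads to the reader endpoint, which load-balances across the Aurora Replicas. A minimal routing sketch, using hypothetical endpoint hostnames:

```python
# Sketch: route statements to the Aurora writer or reader endpoint.
# The endpoint hostnames below are hypothetical placeholders.
WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(statement: str) -> str:
    """Send SELECTs to the reader endpoint; everything else to the writer."""
    first_word = statement.lstrip().split()[0].upper()
    return READER_ENDPOINT if first_word == "SELECT" else WRITER_ENDPOINT

endpoint_for("SELECT * FROM orders")       # routed to the reader endpoint
endpoint_for("INSERT INTO orders VALUES")  # routed to the writer endpoint
```

A real application would do this through its database driver or connection pool configuration; the point is only that reads and writes target different endpoints.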
What are some caching strategies for ElastiCache?
Lazy loading, write-through. You can also add TTL to help avoid stale data.
What is lazy loading?
Lazy loading is a caching strategy that loads data into the cache only when necessary. When an application requests data, it first makes a request to the ElastiCache cache. If the data exists and is current, ElastiCache returns the data to the application. If it does not exist or is expired, the application makes a request to the data store, which returns the requested data. Your application then writes the retrieved data to the cache. This way, it can be retrieved more quickly the next time it's requested.
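The flow above can be sketched with plain dicts standing in for the ElastiCache cache and the backing data store:

```python
# Sketch of lazy loading: a dict stands in for ElastiCache,
# another dict stands in for the backing data store.
cache = {}
database = {"user:1": "alice", "user:2": "bob"}

def get(key):
    """Return the cached value if present; otherwise load from the store and cache it."""
    if key in cache:                 # cache hit: return immediately
        return cache[key]
    value = database.get(key)        # cache miss: query the data store
    if value is not None:
        cache[key] = value           # write the retrieved data to the cache
    return value

get("user:1")   # miss: reads the database and populates the cache
get("user:1")   # hit: served from the cache
```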
What are some advantages of lazy loading?
Advantages:
- Only requested data is cached. Because most data is never requested, lazy loading avoids filling up the cache with data that isn’t requested.
- Node failures aren’t fatal for your application. When a node fails and is replaced by a new, empty node, your application continues to function, though with increased latency.
What are some disadvantages of lazy loading?
Disadvantages:
- There is a cache miss penalty. Each cache miss results in three trips. 1) Initial request for data from the cache. 2) Query of the database for the data. 3) Writing the data to the cache.
- Stale data. If data is written to the cache only when there is a cache miss, data in the cache can become stale. This occurs because there are no updates to the cache when data is changed in the database. To address this issue, you can use the write-through strategy and add a TTL.
What is the write-through caching strategy?
The write-through strategy adds or updates data in the cache whenever data is written to the database.
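Continuing the in-memory stand-ins from the lazy-loading example, write-through is a one-line change to the write path:

```python
# Sketch of write-through: every write goes to both the cache and the database.
cache = {}
database = {}

def put(key, value):
    """Update the cache whenever data is written to the database, so the cache is never stale."""
    cache[key] = value      # trip 1: write to the cache
    database[key] = value   # trip 2: write to the database

put("user:1", "alice")
```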
What are the advantages of the write-through caching strategy?
Advantages:
- Data in the cache is never stale. Because the data in the cache is updated every time it’s written to the database, the data in the cache is always current.
- Write penalty vs. read penalty. Every write involves two trips: 1) A write to the cache. 2) A write to the database. This adds latency to the process, but end users are generally more accepting of updates taking longer than reads.
What are the disadvantages of the write-through caching strategy?
Disadvantages:
- Missing data. If you spin up a new node, whether due to a node failure or scaling out, there is missing data. This data continues to be missing until it’s added or updated in the database. You can minimize this by implementing lazy loading alongside write-through.
- Cache churn. Most data is never read, which is a waste of resources. By adding a time to live (TTL) value, you can minimize wasted space.
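The TTL mentioned above can be sketched by storing an expiry time alongside each cached entry; entries past their expiry are treated as misses. A minimal illustration (the 60-second TTL is an arbitrary example value):

```python
import time

cache = {}
TTL_SECONDS = 60  # arbitrary example value

def cache_set(key, value):
    """Store the value along with the time at which it expires."""
    cache[key] = (value, time.time() + TTL_SECONDS)

def cache_get(key):
    """Return the value if present and unexpired; otherwise treat it as a miss."""
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:
        del cache[key]   # expired: evict and report a miss
        return None
    return value
```

With lazy loading, an expired entry simply triggers a reload from the data store, which bounds how stale the cache can get.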
What happens when configuration changes in an Elastic Beanstalk environment require terminating all instances and replacing them?
Configuration changes that modify the launch configuration or VPC settings require terminating all instances in your environment and replacing them. E.g., when you change the instance type or SSH key setting for your environment, the EC2 instances must be terminated and replaced. To prevent downtime during these processes, Elastic Beanstalk applies these configuration changes in batches, keeping a minimum number of instances running and serving traffic at all times. This process is known as a rolling update.
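Rolling updates can be tuned through the `aws:autoscaling:updatepolicy:rollingupdate` option namespace, e.g. in an `.ebextensions` config file. The batch sizes below are illustrative values, not recommendations:

```yaml
# .ebextensions/rolling-updates.config (illustrative values)
option_settings:
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateEnabled: true
    RollingUpdateType: Health      # wait for new instances to pass health checks
    MaxBatchSize: 2                # replace at most 2 instances at a time
    MinInstancesInService: 2       # keep at least 2 instances serving traffic
```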
What can you do to minimize the amount of time that is used to upload an item greater than 100MB?
You can use Multipart Upload. Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
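The part-splitting logic can be sketched in plain Python. The 5 MB figure below reflects S3's minimum size for every part except the last; the actual upload would use the S3 API's CreateMultipartUpload, UploadPart, and CompleteMultipartUpload operations:

```python
# Sketch: split an object's data into numbered parts for a multipart upload.
# S3 requires each part except the last to be at least 5 MB.
PART_SIZE = 5 * 1024 * 1024

def split_into_parts(data: bytes, part_size: int = PART_SIZE):
    """Yield (part_number, chunk) pairs; S3 part numbers start at 1."""
    for number, offset in enumerate(range(0, len(data), part_size), start=1):
        yield number, data[offset:offset + part_size]

# Each part can then be uploaded independently (and retried on failure),
# after which S3 assembles the parts into the final object.
parts = list(split_into_parts(b"x" * (12 * 1024 * 1024)))  # 12 MB -> 3 parts
```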
What are custom CloudWatch metrics?
Custom metrics are metrics that you publish to CloudWatch yourself, for example with the put-metric-data command. There are two types of custom metrics: standard resolution (with data having a one-minute granularity) and high resolution (with data having a one-second granularity).
Metrics produced by AWS services are standard resolution by default.
What are CloudWatch statistic sets?
You can aggregate your data before you publish to CloudWatch. When you have multiple data points per minute, aggregating data minimizes the number of calls to put-metric-data. E.g., instead of calling put-metric-data multiple times for three data points that are within 3 seconds of each other, you can aggregate the data into a statistic set that you publish with one call, using the --statistic-values parameter.
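The aggregation itself is simple: a statistic set is just the sample count, sum, minimum, and maximum of the raw data points. A sketch of computing the values you would pass to --statistic-values:

```python
def to_statistic_set(values):
    """Collapse raw data points into the fields of a CloudWatch statistic set."""
    return {
        "SampleCount": len(values),
        "Sum": sum(values),
        "Minimum": min(values),
        "Maximum": max(values),
    }

# Three data points published as one statistic set instead of three calls:
stats = to_statistic_set([120, 80, 100])
# e.g. aws cloudwatch put-metric-data ... \
#   --statistic-values SampleCount=3,Sum=300,Minimum=80,Maximum=120
```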
What are the four deployment methods for Elastic Beanstalk?
All at once, rolling, rolling with an additional batch, immutable.