Caching Information with Amazon ElastiCache Flashcards
What is ElastiCache and which engines are used
ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud.
ElastiCache supports the following open-source in-memory caching engines:
• Memcached
• Redis
Compare Memcached and Redis
- Memcached is multithreaded, whereas Redis is single-threaded
- When used in Amazon ElastiCache, Memcached clusters can easily add and remove nodes using the Auto Discovery feature.
- Redis supports structuring of data. By contrast, Memcached is designed to cache flat strings (flat HTML pages, serialized JSON, etc.).
- Data in Redis has persistence, so you can actually use it as a primary data store. Memcached lacks persistence.
- Redis offers atomic operations, such as atomically incrementing or decrementing counter values in the cache.
- Redis also offers publish/subscribe (pub/sub) messaging.
- Redis offers built-in read replicas with failover.
What is a node
A node is the smallest building block of an ElastiCache deployment. It’s a fixed-size chunk of secure, network-attached RAM.
What is a cluster
A cluster is a logical grouping of one or more nodes.
What is a replication group
A replication group is a collection of Redis clusters, with one primary read-write cluster and up to five secondary, read-only clusters, which are called read replicas.
Applications can read from any cluster in the replication group. Applications can write only to the primary cluster.
What are cache hit and cache miss
A cache hit occurs when the cache contains the information requested. A cache miss occurs when the cache does not contain the information requested.
How is data stored in ElastiCache
ElastiCache caches data as key-value pairs. An application can store an item in cache by specifying a key, value, and an expiration time (TTL). Time to live (TTL) is an integer value that specifies the number of seconds until the key expires.
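The key-value-with-TTL behavior described above can be sketched with a small in-memory class. This is an illustration of the semantics only (a plain Python dict standing in for the cache), not the ElastiCache or Redis API; all names here are hypothetical.

```python
import time

class TTLCache:
    """Minimal in-memory sketch of key-value caching with TTL semantics."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        # TTL is the number of seconds until the key expires.
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # cache miss: key was never stored
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._store[key]  # expired: treat as a miss
            return None
        return value  # cache hit

cache = TTLCache()
cache.set("user:42", "Alice", ttl_seconds=60)
cache.get("user:42")  # cache hit while the TTL has not elapsed
```

Once the TTL elapses, a subsequent `get` behaves exactly like a miss for a key that was never stored.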
What is the Lazy loading strategy
Lazy Loading is a caching strategy that loads data into the cache only when necessary. Lazy Loading avoids filling up the cache with unnecessary data.
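The lazy loading flow can be sketched in a few lines. Plain dicts stand in for the cache and the database here; the function name is hypothetical and the pattern, not the API, is the point.

```python
# Hypothetical stand-ins for illustration: one dict plays the cache,
# another plays the backing database.
cache = {}
database = {"user:1": "Alice", "user:2": "Bob"}

def get_user(key):
    # 1. Check the cache first.
    value = cache.get(key)
    if value is not None:
        return value          # cache hit: no database trip needed
    # 2. Cache miss: query the database...
    value = database[key]
    # 3. ...and copy the result into the cache for subsequent requests.
    cache[key] = value
    return value

get_user("user:1")   # miss: reads the database, then populates the cache
get_user("user:1")   # hit: served from the cache
```

Note that `user:2` never enters the cache because it is never requested, which is exactly how lazy loading avoids caching unnecessary data.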
What is the write through strategy
The write through strategy adds data or updates data in the cache whenever data is written to the database.
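A minimal sketch of write through, again using plain dicts as hypothetical stand-ins for the cache and the database: every write touches both stores, so the cached copy always matches the database.

```python
cache = {}
database = {}

def write_user(key, value):
    # Write through: every write goes to the database AND the cache,
    # so the cached copy is never stale (at the cost of two trips).
    database[key] = value
    cache[key] = value

def read_user(key):
    # Reads are served from the cache, which mirrors the database.
    return cache.get(key)

write_user("user:1", "Alice")
read_user("user:1")
```

The two-trips-per-write cost and the always-fresh reads shown here are the trade-off discussed in the advantages and disadvantages below.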
Advantages of Lazy Loading
- Only requested data is cached. Since most data is never requested, lazy loading avoids filling up the cache with data that isn’t requested.
- Node failures are not fatal.
- When a node fails and is replaced by a new, empty node, the application continues to function, though with increased latency. As requests are made to the new node, each cache miss triggers a query of the database, and the result is copied into the cache so that subsequent requests are served from the cache.
Disadvantages of Lazy Loading
- There is a cache miss penalty. Each cache miss results in three trips - the initial request for data from the cache, a query of the database for the data, and a write of the data to the cache - which can cause a noticeable delay in data reaching the application.
- Application may receive stale data because another application may have updated the data in the database behind the scenes.
Advantages of Write Through
- The data in the cache is never stale. Since the data in the cache is updated every time it is written to the database, the data in the cache is always current.
Disadvantages of Write Through
- Write penalty: Every write involves two trips - a write to the cache and a write to the database.
- Missing data: When a new node is created to scale up or to replace a failed node, the node does not contain all data. Data continues to be missing until it is added or updated in the database. In this scenario, you might choose to use a lazy caching approach to repopulate the cache.
- Unused data: Since most data is never read, the cluster can hold a large amount of cached data that is never used, wasting cache memory.
- Cache churn: The cache may be updated often if certain records are updated repeatedly.