Caching Information with Amazon ElastiCache Flashcards

1
Q

What is ElastiCache, and which engines does it support

A

ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud.

ElastiCache supports the following open-source in-memory caching engines:
• Memcached
• Redis
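
For reference, a minimal sketch of provisioning a cluster with boto3 is shown below. The cluster ID and node type are illustrative placeholders, not recommendations; set Engine to "memcached" or "redis" depending on the engine you need.

```python
# A minimal sketch of creating an ElastiCache cluster with boto3.
# The cluster ID and node type are illustrative placeholders.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

response = elasticache.create_cache_cluster(
    CacheClusterId="example-cluster",   # hypothetical name
    Engine="memcached",                 # or "redis"
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=1,                    # for Redis this must be 1
)
print(response["CacheCluster"]["CacheClusterStatus"])
```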

2
Q

Compare Memcached and Redis

A
  • Memcached is multithreaded, whereas Redis is single-threaded.
  • When used in Amazon ElastiCache, Memcached clusters can easily add and remove nodes using the Auto Discovery feature.
  • Redis supports structured data types. By contrast, Memcached is designed to cache flat strings (flat HTML pages, serialized JSON, etc.).
  • Data in Redis has persistence, so you can actually use it as a primary data store. Memcached lacks persistence.
  • Redis offers atomic operations, so you can increment or decrement values held in the cache (see the sketch after this list).
  • Redis also offers publish/subscribe (pub/sub) messaging.
  • Redis offers built-in read replicas with failover.
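
A small sketch of the Redis features called out above (atomic counters and pub/sub), using the redis-py client. The host name is a placeholder for an ElastiCache for Redis endpoint; Memcached has no equivalent of these commands.

```python
import redis

r = redis.Redis(host="example.cache.amazonaws.com", port=6379)

r.set("page:views", 0)       # plain key-value write
r.incr("page:views")         # atomic increment (INCR)  -> 1
r.incrby("page:views", 10)   # atomic increment by 10   -> 11
r.decr("page:views")         # atomic decrement (DECR)  -> 10
print(r.get("page:views"))   # b'10'

r.publish("events", "counter updated")   # publish/subscribe messaging
```
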
3
Q

What is a node

A

A node is the smallest building block of an ElastiCache deployment. It’s a fixed-size chunk of secure, network-attached RAM.

4
Q

What is a cluster

A

A cluster is a logical grouping of one or more nodes.

5
Q

What is a replication group

A

A replication group is a collection of Redis clusters, with one primary read-write cluster and up to five secondary, read-only clusters, which are called read replicas.

Applications can read from any cluster in the replication group. Applications can write only to the primary cluster.
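
As a rough illustration, an application might hold one connection to the primary endpoint for writes and another to a read replica for reads. The sketch below uses redis-py; both host names are placeholders for real replication group endpoints.

```python
import redis

primary = redis.Redis(host="example-primary.cache.amazonaws.com", port=6379)
replica = redis.Redis(host="example-replica.cache.amazonaws.com", port=6379)

primary.set("user:42:name", "Alice")   # writes: primary cluster only
print(replica.get("user:42:name"))     # reads: any cluster in the group
# Note: replication is asynchronous, so a just-written key may briefly
# be missing or stale on a replica.
```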

6
Q

What are a cache hit and a cache miss

A

A cache hit occurs when the cache contains the information requested. A cache miss occurs when the cache does not contain the information requested.

7
Q

How is data stored in ElastiCache

A

ElastiCache caches data as key-value pairs. An application can store an item in the cache by specifying a key, a value, and an expiration time, known as the time to live (TTL). The TTL is an integer value that specifies the number of seconds until the key expires.
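
For example, with the redis-py client against a placeholder ElastiCache for Redis endpoint, a key, value, and TTL can be written in one call:

```python
import json
import redis

cache = redis.Redis(host="example.cache.amazonaws.com", port=6379)

key = "product:101"
value = json.dumps({"name": "widget", "price": 9.99})

cache.setex(key, 300, value)   # store the item; it expires after 300 seconds
print(cache.ttl(key))          # remaining seconds until the key expires
```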

8
Q

What is the Lazy loading strategy

A

Lazy Loading is a caching strategy that loads data into the cache only when necessary. Lazy Loading avoids filling up the cache with unnecessary data.
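
A minimal sketch of lazy loading in Python, assuming a Redis-based cache and a hypothetical query_database helper standing in for the real data store:

```python
import json
import redis

cache = redis.Redis(host="example.cache.amazonaws.com", port=6379)

def query_database(customer_id):
    # Placeholder for a real database lookup.
    return {"id": customer_id, "name": "Alice"}

def get_customer(customer_id, ttl_seconds=300):
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:                 # cache hit: serve from the cache
        return json.loads(cached)
    record = query_database(customer_id)   # cache miss: query the database
    cache.setex(key, ttl_seconds, json.dumps(record))   # populate the cache
    return record
```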

9
Q

What is the write-through strategy

A

The write-through strategy adds or updates data in the cache whenever data is written to the database.
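
A minimal sketch of write-through in Python, assuming a Redis-based cache and a hypothetical write_to_database helper standing in for the real data store:

```python
import json
import redis

cache = redis.Redis(host="example.cache.amazonaws.com", port=6379)

def write_to_database(customer_id, record):
    # Placeholder for a real database write.
    pass

def save_customer(customer_id, record):
    write_to_database(customer_id, record)                    # write to the database
    cache.set(f"customer:{customer_id}", json.dumps(record))  # then update the cache
```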

10
Q

Advantages of Lazy Loading

A
  • Only requested data is cached. Since most data is never requested, lazy loading avoids filling the cache with data that will never be read.
  • Node failures are not fatal. When a node fails and is replaced by a new, empty node, the application continues to function, though with increased latency. As requests are made to the new node, each cache miss results in a query of the database, and the data is added to the cache so that subsequent requests are served from the cache.
11
Q

Disadvantages of Lazy Loading

A
  • There is a cache miss penalty. Each cache miss results in three trips: the initial request for data from the cache, a query of the database for the data, and a write of the data to the cache. This can cause a noticeable delay in data reaching the application.
  • The application may receive stale data, because another application may have updated the data in the database behind the scenes.
12
Q

Advantages of Write Through

A

• The data in the cache is never stale. Since the cache is updated every time the data is written to the database, the cached copy is always current.

13
Q

Disadvantages of Write Through

A
  • Write penalty: Every write involves two trips: a write to the cache and a write to the database.
  • Missing data: When a new node is created to scale up or to replace a failed node, the node does not contain all data. Data continues to be missing until it is added or updated in the database. In this scenario, you might choose to combine write-through with a lazy loading approach to repopulate the cache (see the sketch after this list).
  • Unused data: Since most data is never read, the cluster can hold a lot of data that is never requested.
  • Cache churn: The cache may be updated frequently if certain records are updated repeatedly.
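
A rough sketch of combining the two strategies as suggested above: writes go through the cache (write-through), while reads fall back to lazy loading with a TTL so that a new or replaced node repopulates itself on cache misses. The read_db and write_db helpers are hypothetical placeholders.

```python
import json
import redis

cache = redis.Redis(host="example.cache.amazonaws.com", port=6379)

def read_db(key):
    return {"key": key}      # placeholder database read

def write_db(key, record):
    pass                     # placeholder database write

def put_item(key, record, ttl=300):
    write_db(key, record)                       # write-through: database first...
    cache.setex(key, ttl, json.dumps(record))   # ...then refresh the cache

def get_item(key, ttl=300):
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    record = read_db(key)                       # miss: lazy load from the database
    cache.setex(key, ttl, json.dumps(record))
    return record
```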