Caching Layers: Flashcards
What is caching in the context of databases?
Caching stores frequently accessed data in a fast, in-memory store (like Redis or Memcached) to reduce database load and improve performance by avoiding repeated database queries.
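A minimal sketch of the idea using the redis-py client (assumes a Redis server on localhost:6379; the key name is illustrative):

```python
import redis

# Connect to a local Redis server (assumed running on the default port).
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Store a value with a 60-second TTL so stale entries expire on their own.
cache.setex("user:42:name", 60, "Alice")

# Subsequent reads are served from memory instead of the database.
print(cache.get("user:42:name"))  # "Alice" (or None after expiry)
```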
Why would you use caching in a system?
Caching reduces the time to fetch data, decreases database load, and enhances system performance, especially for frequently accessed or computationally expensive data.
What is the difference between Redis and Memcached?
Redis: Supports complex data structures like lists, sets, and sorted sets, and provides persistence.
Memcached: A simple in-memory key-value store with no persistence, best suited to straightforward caching of opaque values.
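To illustrate the difference: a Redis sorted set can hold an ordered leaderboard natively, while Memcached would store only an opaque blob (sketch assumes a local Redis server and redis-py):

```python
import redis

r = redis.Redis(decode_responses=True)  # assumes localhost:6379

# A sorted set keeps members ordered by score (e.g. a leaderboard).
r.zadd("leaderboard", {"alice": 120, "bob": 95, "carol": 150})
print(r.zrevrange("leaderboard", 0, 1, withscores=True))
# [('carol', 150.0), ('alice', 120.0)]

# Memcached exposes only get/set/delete on opaque values, so the same
# structure would have to be serialized and re-sorted client-side.
```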
When is Redis preferred for caching?
Redis is preferred when you need more advanced data structures, persistence, or features like pub/sub and replication, in addition to basic caching.
When is Memcached preferred for caching?
Memcached is better for simple, high-performance caching of small key-value pairs when persistence and complex data structures are not needed.
What is write-through caching?
In write-through caching, every write goes to both the cache and the database as part of the same synchronous operation. This keeps the cache always up-to-date but adds latency to writes.
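A sketch of the pattern (the `db` handle and its `update` method are hypothetical; a plain dict stands in for the cache):

```python
def write_through(cache: dict, db, key, value):
    db.update(key, value)  # write to the database first...
    cache[key] = value     # ...then mirror the value into the cache
    # The caller blocks until both writes finish; that is the added latency.
```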
What is write-behind caching?
In write-behind (write-back) caching, data is written to the cache first and asynchronously written to the database later, improving write performance but increasing the risk of data loss in case of cache failure.
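A sketch using a queue and a background flusher thread (the `db` handle and its `update` method are hypothetical):

```python
import queue
import threading

pending = queue.Queue()

def write_behind(cache: dict, key, value):
    cache[key] = value         # fast path: the caller returns immediately
    pending.put((key, value))  # the database write is deferred

def flusher(db):
    while True:
        key, value = pending.get()
        db.update(key, value)  # anything still queued is lost if the process dies
        pending.task_done()

# Started once at startup, e.g.:
# threading.Thread(target=flusher, args=(db,), daemon=True).start()
```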
What are cache eviction policies?
Cache eviction policies determine which data is removed from the cache when it reaches its capacity. Common strategies include LRU (Least Recently Used) and LFU (Least Frequently Used).
What is the Least Recently Used (LRU) eviction policy?
LRU eviction removes the data that has not been accessed for the longest time when the cache reaches its capacity. It assumes that data not recently used is less likely to be used soon.
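A minimal LRU cache sketch, keeping an OrderedDict in least- to most-recently-used order:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None  # cache miss
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry
```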
What is the Least Frequently Used (LFU) eviction policy?
LFU eviction removes the data that has been accessed the fewest number of times. It assumes that less frequently used data is less important.
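A simple LFU sketch that tracks an access count per key; production implementations use frequency buckets for O(1) eviction, but the linear scan here keeps the idea visible:

```python
class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = {}
        self.counts = {}  # key -> number of accesses

    def get(self, key):
        if key not in self.data:
            return None  # cache miss
        self.counts[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # fewest accesses
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1
```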
What is the Cache Aside strategy?
In Cache Aside, the application checks the cache first. If the data is not found (cache miss), the application loads it from the database, stores it in the cache, and then serves the request.
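A cache-aside sketch in which the application owns the lookup order (`load_from_db` is a hypothetical loader; a dict stands in for Redis or Memcached):

```python
cache = {}

def get_user(user_id, load_from_db):
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:                  # cache miss
        value = load_from_db(user_id)  # fall back to the database
        cache[key] = value             # populate for the next request
    return value
```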
What are the pros and cons of write-through caching?
Pros:
- Always consistent between cache and database.
- No stale data in the cache.
Cons:
- Higher write latency due to simultaneous database and cache writes.
- Every write still hits the database, so write-heavy workloads get no load relief.
What are the pros and cons of write-behind caching?
Pros:
- Lower write latency.
- Faster write operations.
Cons:
- Risk of data loss if the cache fails before the data is written to the database.
- More complex to manage.
What are the pros and cons of the LRU eviction policy?
Pros:
- Simple to implement.
- Works well for workloads where recently accessed data is likely to be accessed again.
Cons:
- May evict data that is accessed often overall but not within the most recent window.
- Performs poorly under scan-like access patterns, where a burst of one-off reads flushes the hot working set.
What are the pros and cons of the LFU eviction policy?
Pros:
- Keeps the most popular data in the cache.
- Efficient for workloads where frequently accessed data remains important over time.
Cons:
- More complex to implement.
- May keep old data that is no longer relevant but was frequently accessed in the past.