Cache Flashcards
Where can a cache be located?
- Client side (OS or browser)
- Web server caching (e.g. via a reverse proxy) and CDNs: mainly for web pages/static assets, sometimes for cached user responses
- Application caching (in-memory caches, mainly for database data)
- Database caching: your database usually includes some level of caching in its default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
Trade-off of having a Cache
Advantages:
- Improves page load times
- Can reduce the load on your servers and databases.
Disadvantage(s):
- Consistency between caches, or between the cache and the datastore, must be maintained through cache invalidation.
- Cache invalidation/update adds additional complexity.
- Increases the memory usage of the app, which can itself cause performance issues
Implementation:
- Redis
- Memcached
What do we usually cache?
1. Caching at the database query level
Whenever you query the database, hash the query as a key and store the result in the cache. This approach suffers from expiration issues:
- Hard to delete a cached result with complex queries
- If one piece of data changes, such as a table cell, you need to delete all cached queries that might include the changed cell
2. Caching at the object level
See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):
- Remove the object from the cache if its underlying data has changed
- Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
Suggestions of what to cache:
- User sessions
- Fully rendered web pages
- Activity streams
- User graph data
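The query-level approach can be sketched as follows. This is a minimal illustration with a plain dict standing in for the cache and a stub `run_query` standing in for a real database call; both names are assumptions for the example.

```python
import hashlib

cache = {}

def run_query(sql):
    # Stand-in for a real database call; returns a canned result.
    return [("alice", 30), ("bob", 25)]

def cached_query(sql):
    # Hash the SQL text into a cache key.
    key = hashlib.sha256(sql.encode()).hexdigest()
    if key not in cache:
        cache[key] = run_query(sql)  # cache miss: hit the database
    return cache[key]

rows = cached_query("SELECT name, age FROM users")
rows_again = cached_query("SELECT name, age FROM users")  # served from cache
```

Note the expiration problem described above: if any `users` row changes, every cached query whose result might include that row has to be invalidated, and nothing in the key tells you which those are.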
Cache update strategy - Write around
Write fast / Read slow
Process:
- The application is responsible for reading and writing from storage. The cache does not interact with storage directly.
- Write: data is written only to the backing store, without writing to the cache. I/O completion is confirmed as soon as the data is written to the backing store.
- Read: check the cache first; on a cache miss, load from the database and store the result in the cache.
Advantage(s):
- Lazy loading: only requested data is cached, so the cache is not flooded with data that may never be re-read.
Disadvantage(s):
- Reading recently written data is slow: each cache miss results in three trips (cache check, database read, cache write), which can cause a noticeable delay.
- Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL), which forces a refresh of the cache entry, or by using write-through.
- When a node fails, it is replaced by a new, empty node, increasing latency until the cache is warm again.
What is it good for?
Good for applications that don’t frequently re-read recently written data.
This results in lower write latency at the cost of higher read latency, which is an acceptable trade-off for these scenarios.
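The write-around process above can be sketched with plain dicts standing in for the cache and the backing store (both names are illustrative, not a real client API): writes skip the cache entirely, and reads populate it lazily on a miss.

```python
cache = {}
store = {}

def write(key, value):
    store[key] = value    # write only to the backing store
    cache.pop(key, None)  # invalidate any stale cached copy

def read(key):
    if key in cache:
        return cache[key]  # cache hit
    value = store[key]     # cache miss: go to the store
    cache[key] = value     # lazily populate the cache
    return value

write("user:1", "alice")
first = read("user:1")   # miss: loaded from the store, then cached
second = read("user:1")  # hit: served from the cache
```

The invalidation in `write` is one way to soften the staleness problem noted above; without it, a cached copy would keep serving the old value until its TTL expired.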
Cache update strategy
- Cache-aside
- Write-through
- Write-behind (write-back)
- Refresh-ahead
Cache update strategy - Write-through
Write slow / Read fast
Process: The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database.
- Application adds/updates entry in cache
- Cache synchronously writes entry to data store
- Return
Using the write-through policy, data is written to the cache and the backing store location at the same time. The significance here is not the order in which it happens or whether it happens in parallel. The significance is that I/O completion is only confirmed once the data has been written to both places.
Advantage(s):
- Reads of cached data are fast
- Data in the cache is never stale
- No data loss if the cache goes down, since every write also reaches the data store
Disadvantage(s):
- Writes are slow: every write goes to two places
What is it good for?
- Applications that write and then frequently re-read data: slightly higher write latency, but low read latency. It is acceptable to spend a bit longer writing once and then benefit from many low-latency reads.
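A minimal sketch of the write-through flow, again with plain dicts standing in for the cache and the data store (illustrative names only): the write is confirmed only after both layers have been updated, so reads can always be served from the cache.

```python
cache = {}
store = {}

def write_through(key, value):
    cache[key] = value  # application writes to the cache...
    store[key] = value  # ...and the cache synchronously writes to the store
    return True         # completion confirmed only after both writes

def read(key):
    return cache[key]   # reads are fast: data is always in the cache

write_through("user:1", "alice")
value = read("user:1")
```

In a real system the second line of `write_through` is a blocking call to the database, which is exactly why writes are slow under this policy.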
Cache update strategy - Write-behind (write-back)
Writes and reads both fast, but with data loss risk
Process:
- Add/update entry in cache
- Asynchronously write entry to the data store, improving write performance
Data is written to the cache, and then I/O completion is confirmed. The data is typically also written to the backing store in the background, but the completion confirmation is not blocked on that.
Advantage: Low latency and high throughput for write-intensive applications.
Disadvantage: Potential data loss if the cache goes down before the write to the DB completes.
What is it good for?
- write-intensive applications.
- In practice, you can add resiliency (e.g. by duplicating writes) to reduce the likelihood of data loss.
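Write-behind can be sketched with a queue of pending writes; the background worker is modeled here as an explicit `flush()` call, and the dicts are stand-ins for the cache and store (all names illustrative).

```python
from collections import deque

cache = {}
store = {}
pending = deque()  # writes acknowledged but not yet persisted

def write_behind(key, value):
    cache[key] = value            # write confirmed here (fast path)
    pending.append((key, value))  # persisted later, asynchronously

def flush():
    # In a real system a background worker drains this queue.
    while pending:
        key, value = pending.popleft()
        store[key] = value  # if the cache dies before this runs,
                            # these queued writes are lost

write_behind("user:1", "alice")
in_store_before_flush = "user:1" in store  # not yet persisted
flush()
in_store_after_flush = "user:1" in store   # now persisted
```

The gap between the acknowledgement in `write_behind` and the store update in `flush` is exactly the data-loss window described above.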
Cache update strategy - Refresh-ahead
You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration. Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
Disadvantage(s):
- If the cache cannot accurately predict which items will be needed, refresh-ahead can result in worse performance than not using it at all.
What is CDN
A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JS, photos, and videos are served from CDN, although some CDNs such as Amazon’s CloudFront support dynamic content. The site’s DNS resolution will tell clients which server to contact.
Push CDNs
Push CDNs receive new content whenever changes occur on your server. You take full responsibility for providing content, uploading directly to the CDN and rewriting URLs to point to the CDN. You can configure when content expires and when it is updated. Content is uploaded only when it is new or changed, minimizing traffic, but maximizing storage. Sites with a small amount of traffic or sites with content that isn’t often updated work well with push CDNs. Content is placed on the CDNs once, instead of being re-pulled at regular intervals.
Pull CDNs
Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN. A time-to-live (TTL) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed. Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly with only recently-requested content remaining on the CDN.
CDN trade-offs
adv: improve performance in two ways:
- Users receive content at data centers close to them
- Your servers do not have to serve requests that the CDN fulfills
Disadvantage(s):
- CDN costs could be significant depending on traffic, although this should be weighed against the additional costs you would incur without a CDN.
- Content might be stale if it is updated before the TTL expires.
- CDNs require changing URLs for static content to point to the CDN.