Chapter 7: Web Caching & Consistent Hashing Flashcards
Congested networks & overloaded servers
- Web‐based processing is prone to unpredictable delays and (frequent) failures
- Two main reasons: congested networks and overloaded servers
- Data moves slowly through congested networks
- Overloaded servers either refuse to serve requests or serve them slowly
- Problems can occur unexpectedly and without prior notice (e.g., Slashdot effect, flash crowd, DDoS: sudden surges in popularity)
- In this case, planning ahead is of limited use
- Better: adapt to changing circumstances
(Web) Caching to the rescue
- Caching employed to improve efficiency & reliability of data delivery (Web, Internet, disk, CPU, etc.)
- Cache serves a (cached) page quickly even if the originating server is swamped or path to it is congested
- “Cache hierarchy”, with caches
- at client (Web browser)
- at client’s site (institutional proxy)
- at server’s site (reverse proxy)
- at ISPs (CDN endpoints of major players, proxy)
- part of service delivery, 3rd party service (content delivery network)
- Wide use of caches engenders a general good: if requests are served by nearby caches, fewer reach the source server, reducing its load and overall network traffic for the benefit of all users
“Web cache hierarchy”
Web caching - what if the cache crashes?
- The origin server is suddenly hit by the full flood of requests
Single, shared caching machine per group of users
Drawbacks of single cache machine
- All users are impacted in case of cache failure → increased latency
- Also, potential for severe impact on the source server: the cache itself is a potential bottleneck if heavily used
Two important limits regarding hit rate of a single cache
- “False misses” for repeated requests of objects that were evicted for lack of space (see the sketch below)
- The limit on how many users one cache can serve works against aggregating requests from as many users as possible:
- **The more user requests are aggregated, the better the hit rate becomes due to locality, but the higher the impact of a failure**
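To make the “false miss” concrete, here is a minimal sketch assuming a least-recently-used (LRU) eviction policy; the card does not name a specific policy, and the class and keys are illustrative.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache illustrating a 'false miss': a previously
    cached object is evicted for lack of space, so repeating the
    request misses even though the object was requested before."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key) -> bool:
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            return True                  # hit
        return False                     # miss

    def put(self, key, value) -> None:
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("/a", "page a")
cache.put("/b", "page b")
cache.put("/c", "page c")  # cache full: "/a" is evicted
print(cache.get("/a"))     # False -> a "false miss" for a repeat request
```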
Web proxy, proxy server
Reverse proxy, Web accelerator
- Sits closer to the backend servers
- Generic logic can be pre-executed (e.g., the landing page)
Content‐delivery network (CDN)
Place content as close as possible to the user to reduce latency. The challenge is how to distribute the content
Web caching trade‐offs
- Client‐server architecture is inherently non‐scalable
- Proxies serve as a level of indirection
- Proxies reduce client response time
- Direct effect (hits are served nearby) and indirect effect (a less-loaded server also answers misses faster)
- Less load on the server: Server does not have to be over‐provisioned
- Proxies reduce network bandwidth usage
- Wide area vs. local area use
- These two objectives (reducing response time and reducing bandwidth usage) are often in conflict
- May do exhaustive local search to avoid using wide area bandwidth (e.g., in cooperative caching)
- Prefetching uses extra bandwidth to reduce client response time
Problem: Mapping objects to caches
- Given a number of caches (nodes)
- Each node should carry an equal share of objects
- Clients need to know what node to query for a given object
- Nodes should be able to come and go without disrupting the whole operation (i.e., all nodes)
Attempted solution: Use hashing (mapping objects to cache)
- Map the page referenced by URL u to one of the nodes
- Use a hash function that maps u to node h(u)
- For example, h(x) = (ax + b) mod p, where p is the size of the range of h(x)
- Interpret u as a number based on the bit pattern of the underlying URL string
- Hash functions tend to distribute their input uniformly (“randomly”) across the range of the hash function
- URLs are equally balanced across nodes
- No one cache responsible for an uneven share of URLs
- A disproportionately loaded node would be a bottleneck
h(u) = (7u + 4) mod 5 (Problem when a node fails?)
Removing a node with normal (modular) hashing changes the location of almost every object
h(u) = (7u + 4) mod 5 (Problem when adding a node?)
Adding a node changes the location of almost every object, as the sketch below shows
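A minimal Python sketch of this remapping problem (the function and variable names are illustrative, not from the card): dropping from 5 nodes to 4 changes the node of 16 out of 20 sample objects, because (7u + 4) mod 5 and (7u + 4) mod 4 rarely agree.

```python
# Modular hashing: object u is assigned to node h(u) = (7u + 4) mod n.
def h(u: int, n: int) -> int:
    return (7 * u + 4) % n

objects = range(20)
before = {u: h(u, 5) for u in objects}  # 5 caches available
after = {u: h(u, 4) for u in objects}   # one cache fails -> mod 4

moved = sum(1 for u in objects if before[u] != after[u])
print(f"{moved} of {len(objects)} objects changed node")  # prints: 16 of 20
```

Nearly every cached object now lives on the “wrong” node, so almost all requests miss and go to the source server, exactly the flood the caches were meant to prevent.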
Consistent hashing
- Goals
- Uniform distribution of objects across nodes
- Easily find objects
- Let any client perform a local computation mapping a URL to the node that contains referenced object
- Allow for nodes to be added/removed without much disruption
- D. Karger et al., MIT, 1997
- Basis for Akamai
- CDN company (content distribution network)
- Web cache as a service
Consistent hashing (how does it work?)
- Select a base hash function that maps strings to the number range [0, …, M]
- Divide by M, re‐mapping strings to [0, 1]
- Interpret this interval as the unit circle: here, a circle with circumference 1 (rather than the usual radius 1)
- Each URL is mapped to a point on the unit circle
- Also, map each cache to a point on the unit circle
- Assign each URL to the closest cache point in clockwise direction on the circle (see the sketch below)
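A minimal Python sketch of the whole scheme, assuming SHA-256 as the base hash function and M = 2^64 (the card leaves both choices open); the class and method names are illustrative.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Nodes and URLs are hashed onto the unit circle [0, 1);
    each URL is served by the first node found moving clockwise
    from the URL's position."""

    M = 2 ** 64  # range of the base hash, divided down to [0, 1)

    def __init__(self):
        self.points = []  # sorted node positions on the circle
        self.nodes = {}   # position -> node name

    def _hash(self, key: str) -> float:
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
        return h / self.M  # map [0, M) onto the unit circle [0, 1)

    def add_node(self, name: str) -> None:
        p = self._hash(name)
        self.points.insert(bisect_right(self.points, p), p)
        self.nodes[p] = name

    def remove_node(self, name: str) -> None:
        p = self._hash(name)
        self.points.remove(p)
        del self.nodes[p]

    def lookup(self, url: str) -> str:
        # First node clockwise from the URL's point; wrap around at 1.0.
        i = bisect_right(self.points, self._hash(url)) % len(self.points)
        return self.nodes[self.points[i]]

ring = ConsistentHashRing()
for name in ["cache-a", "cache-b", "cache-c"]:
    ring.add_node(name)
print(ring.lookup("http://example.org/index.html"))
```

Removing a node only reassigns the URLs that pointed to it (they move to the next node clockwise); adding a node only takes over the arc between it and its counter-clockwise neighbor. All other URLs keep their cache, which is the “little disruption” the goals above ask for.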