Consistent Hash Ring

Consistent hashing is a special kind of hashing such that when a hash table is resized, only K/n keys need to be remapped on average, where K is the number of keys and n is the number of slots. In contrast, in most traditional hash tables, a change in the number of array slots causes nearly all keys to be remapped.

Consistent hashing achieves the same goals as Rendezvous hashing (also called HRW Hashing). The two techniques use different algorithms, and were devised independently and contemporaneously.

Problem on Distributed Cache

The need for consistent hashing arose from limitations experienced while running collections of caching machines - web caches, for example. If you have a collection of n cache machines then a common way of load balancing across them is to put object o in cache machine number hash(o) mod n. This works well until you add or remove cache machines (for whatever reason), for then n changes and every object is hashed to a new location. This can be catastrophic since the originating content servers are swamped with requests from the cache machines. It's as if the cache suddenly disappeared. Which it has, in a sense. (This is why you should care - consistent hashing is needed to avoid swamping your servers!)
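To make the problem concrete, here is a small Python sketch (the key names and machine counts are illustrative) showing how many keys move when a fifth cache machine is added under the naive hash(o) mod n scheme:

```python
import hashlib

def bucket(key: str, n: int) -> int:
    """Naive placement: cache machine number = hash(o) mod n."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n

keys = [f"object-{i}" for i in range(1000)]
before = {k: bucket(k, 4) for k in keys}   # 4 cache machines
after = {k: bucket(k, 5) for k in keys}    # one machine added
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys moved")  # on average about 4/5 of them
```

Adding one machine forces roughly 1 − 1/n of all objects to new locations, which is exactly the flood of cache misses described above.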

Consistent Hashing

Consistent hashing was first described in a paper, Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web (1997) by David Karger et al. It is used in distributed storage systems like Amazon Dynamo, memcached, Project Voldemort and Riak.

It's assumed that the hash values fall in the range [0, 2^32 - 1], arranged in a circle (the hash ring). We calculate the hash value of each server (using its IP address or host name, for example) and place it on the ring, as the figure below shows.


Suppose we have 4 data objects. We calculate the hash value of each data object and place it on the hash ring, as the figure below shows:


According to the consistent hashing algorithm, data object A will be located on Server 1, data object D on Server 3, and data objects B and C on Server 2.
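A minimal hash-ring lookup can be sketched in Python as follows (the hash function and server names are illustrative; a real client such as ketama uses its own hashing):

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32  # hash values fall in [0, 2^32 - 1]

def ring_hash(key: str) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % RING_SIZE

class HashRing:
    def __init__(self, servers):
        # Each server is placed on the ring at the position of its hash.
        self.ring = sorted((ring_hash(s), s) for s in servers)

    def locate(self, key: str) -> str:
        """Walk clockwise from the key's position to the first server."""
        positions = [pos for pos, _ in self.ring]
        idx = bisect.bisect_right(positions, ring_hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["Server 1", "Server 2", "Server 3"])
print(ring.locate("data object A"))  # one of the three servers
```

The lookup is just a binary search over the sorted server positions; the modulo wraps around past the highest position back to the start of the ring.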

Fault Tolerance and Scalability

In this section, we discuss the fault tolerance and scalability of the consistent hashing algorithm.

Suppose that Server 3 goes down, as the figure below shows:


We'll find that data objects A, B, and C are still available. Only data object D will be re-located, to Server 2.

With an ordinary hashing algorithm, the locations of nearly all data objects would change. With consistent hashing, only a fraction of the data objects need to be re-located.

OK, what if we add a server to the hash ring?


We'll find that only data object B is re-located, to the new Server 4.
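We can check this property with a small Python sketch (server and key names here are made up): removing a server only moves the keys that were stored on it, because every other key's clockwise successor is unchanged.

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

def locate(key: str, servers) -> str:
    """Find the first server clockwise from the key's position."""
    ring = sorted((ring_hash(s), s) for s in servers)
    positions = [pos for pos, _ in ring]
    return ring[bisect.bisect_right(positions, ring_hash(key)) % len(ring)][1]

keys = [f"object-{i}" for i in range(1000)]
with_s3 = {k: locate(k, ["s1", "s2", "s3"]) for k in keys}
without_s3 = {k: locate(k, ["s1", "s2"]) for k in keys}
moved = [k for k in keys if with_s3[k] != without_s3[k]]
# Every key that moved was previously stored on the removed server s3.
print(all(with_s3[k] == "s3" for k in moved))
```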

Virtual Nodes

It seems that the algorithm works well. However, the size of the interval assigned to each cache is essentially random, so it is possible to have a very non-uniform distribution of objects between caches. The solution to this problem is to introduce "virtual nodes", which are replicas of cache points on the circle. Whenever we add a cache, we create a number of points on the circle for it.


It's obvious that Server 1 in the figure above will store most of the data objects, which degrades the balance of the hash ring. To solve this problem, we use virtual nodes: we append a serial number to each server's IP address or host name. For the case mentioned above, we create 3 virtual nodes for each server, such as Memcached Server 1#1, Memcached Server 1#2, and Memcached Server 1#3, and likewise for Server 2, giving 6 virtual nodes in total.


The data objects located on Memcached Server 1#1, Memcached Server 1#2, and Memcached Server 1#3 are actually all stored on Memcached Server 1. This solves the non-uniform distribution of objects between caches.
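The virtual-node scheme can be sketched in Python like this (the replica count of 150 is an assumption for illustration; ketama-style clients use a similar order of magnitude, and any reasonably large number smooths the distribution):

```python
import bisect
import hashlib
from collections import Counter

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

def build_ring(servers, replicas=150):
    """Place `replicas` virtual nodes per server, named e.g. 'Server 1#1'."""
    return sorted(
        (ring_hash(f"{server}#{i}"), server)
        for server in servers
        for i in range(1, replicas + 1)
    )

def locate(ring, key: str) -> str:
    """Map a key to the real server owning its clockwise virtual node."""
    positions = [pos for pos, _ in ring]
    return ring[bisect.bisect_right(positions, ring_hash(key)) % len(ring)][1]

ring = build_ring(["Memcached Server 1", "Memcached Server 2"])
counts = Counter(locate(ring, f"object-{i}") for i in range(10000))
print(counts)  # with enough virtual nodes, the split is close to even
```

Each virtual node hashes to its own ring position, but lookups resolve to the real server behind it, so the load spreads out without any change to where the data actually lives.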


So how can you use consistent hashing? You are most likely to meet it in a library, rather than having to code it yourself. For example, as mentioned above, memcached, a distributed memory object caching system, now has clients that support consistent hashing. Last.fm's ketama by Richard Jones was the first, and there is now a Java implementation by Dustin Sallings. It is interesting to note that only the client needs to implement the consistent hashing algorithm; the memcached server is unchanged. Other systems that employ consistent hashing include Chord, which is a distributed hash table implementation, and Amazon's Dynamo, which is a key-value store (not available outside Amazon).


Contact Us
  • Tencent AI Lab, Shenzhen, China
  • cshzxie [at] gmail [dot] com