Distributed Systems

Consistent hashing is a hashing scheme with the property that when a hash table is resized, only K/n keys need to be remapped on average, where K is the number of keys and n is the number of slots. In contrast, in most traditional hash tables a change in the number of array slots causes nearly all keys to be remapped. Consistent hashing achieves the same goals as rendezvous hashing (also called highest-random-weight, or HRW, hashing).
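To make the remapping property concrete, here is a minimal sketch of a consistent-hash ring in Java, using a TreeMap as the sorted ring. The node names, the sample key, and the use of String.hashCode() are illustrative assumptions, not part of any particular system.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: nodes and keys are hashed onto the same
// circular space; a key is owned by the first node at or after its hash.
public class ConsistentHashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    // Illustrative hash function; a real ring would use a stronger,
    // better-distributed hash such as MD5 or MurmurHash.
    private int hash(String s) {
        return s.hashCode() & 0x7fffffff;
    }

    public void addNode(String node) {
        ring.put(hash(node), node);
    }

    public void removeNode(String node) {
        ring.remove(hash(node));
    }

    // Walk clockwise from the key's hash to the next node,
    // wrapping around to the start of the ring if necessary.
    public String getNode(String key) {
        if (ring.isEmpty()) return null;
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        Integer slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slot);
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing();
        ring.addNode("node-A");
        ring.addNode("node-B");
        ring.addNode("node-C");
        System.out.println("user:42 -> " + ring.getNode("user:42"));
        // Removing a node only remaps the keys that node owned;
        // keys owned by the remaining nodes keep their placement.
        ring.removeNode("node-B");
        System.out.println("user:42 -> " + ring.getNode("user:42"));
    }
}
```

In practice each physical node is usually inserted at many virtual positions on the ring so that load stays roughly even and a departing node's keys are spread across the remaining nodes; the K/n remapping property is unchanged.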
Apache Hadoop is an open-source software project that enables the distributed processing of large data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance. Rather than relying on high-end hardware, the resiliency of these clusters comes from the software's ability to detect and handle failures at the application layer. Hadoop 1 popularized MapReduce programming for batch jobs and demonstrated the potential value of large-scale distributed processing.
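To give a flavor of the MapReduce programming model that Hadoop 1 popularized, here is the classic word-count job sketched against Hadoop's Java MapReduce API. The class name and the command-line input/output paths are illustrative; treat this as a sketch rather than a tuned production job.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Once packaged into a jar, a job like this is typically submitted with hadoop jar, passing the HDFS input and output directories as the two arguments.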
In this tutorial I will describe the required steps for setting up a pseudo-distributed, single-node Hadoop cluster backed by the Hadoop Distributed File System (HDFS), running on Ubuntu Linux. This tutorial has been tested with the following software versions: Ubuntu 13.04 and Apache Hadoop 1.1.2 (released on February 15th, 2013).

Prerequisites

Oracle Java 7

Hadoop requires a working Java 1.5+ (aka Java 5) installation. In this tutorial, I will describe the installation of Java 1.7.