Erasure Codes for Large-Scale Distributed Storage

01 Jul 2014

Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of triple-replicated systems. Reed-Solomon codes are the standard design choice, and their high rebuild cost is often considered an unavoidable price to pay for high storage efficiency and high reliability.
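To make the overhead comparison concrete, here is a minimal illustrative sketch (not the researchers' codes, and far simpler than Reed-Solomon): a single XOR parity block lets a system tolerate one lost block at (k+1)/k storage overhead instead of the 3x of triple replication. Note that repairing even one block requires reading all k surviving blocks, which is exactly the rebuild cost the article refers to.

```python
# Toy single-parity "erasure code" over equal-length byte blocks.
# Illustrative only: real systems use Reed-Solomon or similar codes.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Return the data blocks plus one parity block (XOR of all of them)."""
    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_blocks(parity, blk)
    return list(blocks) + [parity]

def repair(stored, lost_index):
    """Rebuild the block at lost_index by XORing all surviving blocks."""
    survivors = [blk for i, blk in enumerate(stored) if i != lost_index]
    rebuilt = survivors[0]
    for blk in survivors[1:]:
        rebuilt = xor_blocks(rebuilt, blk)
    return rebuilt

data = [b"AAAA", b"BBBB", b"CCCC"]   # k = 3 data blocks
stored = encode(data)                 # 4 blocks stored: ~1.33x overhead
assert repair(stored, 1) == b"BBBB"   # recover a lost block from the other 3
```

The repair step touching every surviving block is the pain point at scale: with k in the range of 10 and blocks of hundreds of megabytes, a single lost disk triggers large reads and network transfers.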

In recent work, WNCG Profs. Alex Dimakis and Sriram Vishwanath, along with WNCG students Ankit Rawat, Megasthenis Asteris, Dimitris Papailiopoulos and collaborators, show how to overcome this limitation. The researchers present a novel family of erasure codes that are efficiently repairable and offer higher reliability than Reed-Solomon codes. The team demonstrates analytically that their codes are optimal with respect to a recently identified tradeoff between locality and minimum distance.
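The tradeoff referenced above is usually stated as follows (standard notation, not spelled out in this announcement; the bound is due to Gopalan et al.): an $(n, k)$ code in which every symbol has locality $r$, meaning each symbol can be rebuilt from at most $r$ others, has minimum distance

```latex
d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2 .
```

Setting $r = k$ recovers the classical Singleton bound $d \le n - k + 1$, which Reed-Solomon codes achieve; the interesting regime is $r < k$, where small repair groups cost some minimum distance, and codes meeting the bound with equality are called optimal locally repairable codes.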

In Paper 3, the researchers further show how to modify these codes for 'hot', frequently accessed data that must support multiple read operations in parallel. They also implement the new codes in Hadoop HDFS and compare them to a Reed-Solomon-based HDFS module currently deployed on Facebook clusters. The WNCG team's modified HDFS implementation reduces repair disk I/O and repair network traffic by approximately 2x, and delivers an order of magnitude higher data availability than the previous state of the art.
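The roughly 2x repair savings can be illustrated with a toy sketch of local repair groups (an assumed structure for illustration, not the team's actual HDFS implementation): data blocks are split into small groups, each protected by its own XOR parity, so rebuilding one lost block reads only its group rather than all k blocks.

```python
# Toy locally repairable layout: k data blocks split into groups of size r,
# each group carrying one local XOR parity. Repairing a lost block reads r
# blocks instead of k; with two groups that is roughly a 2x I/O reduction.
# (Illustrative only; real LRCs also add global parities for reliability.)

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_all(blocks):
    """XOR-fold a list of equal-length blocks."""
    acc = blocks[0]
    for blk in blocks[1:]:
        acc = xor_blocks(acc, blk)
    return acc

def encode_lrc(data, group_size):
    """Split data into groups and append one local parity per group."""
    groups = [data[i:i + group_size] for i in range(0, len(data), group_size)]
    return [g + [xor_all(g)] for g in groups]

def local_repair(group, lost_index):
    """Rebuild one block from the other blocks in its small group."""
    survivors = [blk for i, blk in enumerate(group) if i != lost_index]
    return xor_all(survivors)

k = 10
data = [bytes([65 + i]) * 4 for i in range(k)]   # 10 data blocks
groups = encode_lrc(data, group_size=5)           # 2 groups, 1 parity each
# Repairing data block 7 reads the 5 other blocks in its group, not all 10:
assert local_repair(groups[1], 2) == data[7]
```

In the sketch, locality r = 5 versus k = 10, so a single-block repair touches half as many blocks; this is the flavor of saving the HDFS experiments measure in disk I/O and network traffic.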

Paper 1: Locally Repairable Codes (to appear in IEEE Transactions on Information Theory)

Paper 2: XORing Elephants: Novel Erasure Codes for Big Data

Paper 3: Locality and Availability in Distributed Storage