Explain Merkle Trees for use in Eventual Consistency

Merkle Trees are used as an anti-entropy mechanism in several distributed, replicated key/value stores:

  • Dynamo
  • Riak
  • Cassandra

No doubt an anti-entropy mechanism is A Good Thing - transient failures just happen in production. I'm just not sure I understand why Merkle trees are the popular approach.

  • Sending a complete Merkle tree to a peer involves sending the local key-space to that peer, along with hashes of each key value, stored in the lowest levels of the tree.

  • Diffing a Merkle tree sent from a peer requires having a Merkle tree of your own.

Since both peers must already have a sorted key/value-hash space on hand, why not do a linear merge to detect discrepancies?
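Concretely, the linear merge I have in mind is a single two-pointer pass over both sorted lists (a sketch of my own; `diff_sorted` and its input shapes are illustrative, not taken from any of these systems):

```python
def diff_sorted(a, b):
    """Single linear pass over two sorted (key, value_hash) lists,
    yielding keys that are missing on one side or whose value
    hashes differ."""
    i = j = 0
    while i < len(a) and j < len(b):
        ka, ha = a[i]
        kb, hb = b[j]
        if ka == kb:
            if ha != hb:
                yield ka          # same key, divergent value
            i += 1
            j += 1
        elif ka < kb:
            yield ka              # key present only on side a
            i += 1
        else:
            yield kb              # key present only on side b
            j += 1
    # One side may have a tail of keys the other lacks.
    yield from (k for k, _ in a[i:])
    yield from (k for k, _ in b[j:])
```

This is O(N) in the size of the key space, but it is one pass with no tree to maintain.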

I'm just not convinced that the tree structure provides any kind of savings once you factor in upkeep costs and the fact that linear passes over the tree's leaves are already needed just to serialize the representation over the wire.

To ground this out, a straw-man alternative might be for nodes to exchange arrays of hash digests that are incrementally updated and bucketed by modulo ring-position.
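A rough sketch of that straw-man (the bucket count, the XOR-combining trick, and all names here are my own illustration, not a real implementation; XOR digests are weaker than a tree of cryptographic hashes, but they make incremental updates O(1)):

```python
import hashlib

NUM_BUCKETS = 1024  # illustrative bucket count

def bucket_of(key: bytes) -> int:
    # "Ring position" stand-in: hash the key, take it modulo the bucket count.
    return int.from_bytes(hashlib.sha256(key).digest(), "big") % NUM_BUCKETS

class BucketDigests:
    """Straw-man anti-entropy state: one digest per ring-position
    bucket, updated incrementally as writes arrive."""
    def __init__(self):
        self.digests = [0] * NUM_BUCKETS

    def _entry_hash(self, key: bytes, value_hash: bytes) -> int:
        return int.from_bytes(hashlib.sha256(key + value_hash).digest(), "big")

    def apply_write(self, key, old_value_hash, new_value_hash):
        b = bucket_of(key)
        if old_value_hash is not None:
            # Remove the old entry's contribution before adding the new one.
            self.digests[b] ^= self._entry_hash(key, old_value_hash)
        self.digests[b] ^= self._entry_hash(key, new_value_hash)

    def mismatched_buckets(self, other):
        # Peers exchange the fixed-size arrays and compare element-wise;
        # only keys in mismatched buckets need a full comparison.
        return [i for i in range(NUM_BUCKETS)
                if self.digests[i] != other.digests[i]]
```

Exchanging these arrays costs a fixed amount per round, independent of the key space size.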

What am I missing?


Solution 1:

Merkle trees limit the amount of data transferred when synchronizing. The general assumptions are:

  1. Network I/O is more expensive than local I/O + computing the hashes.
  2. Transferring the entire sorted key space is more expensive than progressively limiting the comparison over several steps.
  3. The key spaces have fewer discrepancies than similarities.

A Merkle Tree exchange would look like this:

  1. Start with the root of the tree (a list of one hash value).
  2. The origin sends the list of hashes at the current level.
  3. The destination diffs the list of hashes against its own and then requests subtrees that are different. If there are no differences, the request can terminate.
  4. Repeat steps 2 and 3 until leaf nodes are reached.
  5. The origin sends the values of the keys in the resulting set.
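The exchange above can be sketched as follows. This is a toy model, assuming both trees are built over the same sorted key space so their shapes line up; `Node`, `build`, and `diff_keys` are illustrative names, not APIs from any of these stores:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Node:
    """Minimal Merkle node: leaves carry their key; internal nodes
    carry the hash of their children's concatenated hashes."""
    def __init__(self, hash_, children=(), keys=()):
        self.hash, self.children, self.keys = hash_, tuple(children), tuple(keys)

def build(sorted_items):
    # Leaves hash (key + value_hash); pair nodes upward until one root remains.
    nodes = [Node(h(k + vh), keys=(k,)) for k, vh in sorted_items]
    while len(nodes) > 1:
        nodes = [Node(h(b"".join(c.hash for c in pair)), children=pair)
                 for pair in (nodes[i:i + 2] for i in range(0, len(nodes), 2))]
    return nodes[0]

def diff_keys(origin, dest):
    """Steps 2-4 of the exchange: compare hashes and descend only
    into subtrees that differ, returning the keys to send."""
    if origin.hash == dest.hash:
        return []                      # identical subtree: prune here
    if not origin.children:            # differing leaf reached
        return list(origin.keys)
    out = []
    for oc, dc in zip(origin.children, dest.children):
        out += diff_keys(oc, dc)
    return out
```

The pruning in `diff_keys` is where the savings come from: identical subtrees are skipped after comparing a single hash.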

In the typical case, the complexity of synchronizing the key spaces will be O(log N). Yes, at the extreme where there are no keys in common, the operation is equivalent to sending the entire sorted list of hashes, O(N). One could amortize the expense of building Merkle trees by updating them dynamically as writes come in and keeping the serialized form on disk.
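One way to sketch that dynamic maintenance is a complete binary tree in a flat array, so a single write only rehashes the leaf-to-root path (a purely illustrative model, assuming a power-of-two leaf count; not how any of these stores actually lays out its trees):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class DynamicMerkle:
    """Complete binary Merkle tree in a flat array (heap layout):
    tree[1] is the root, leaves live at indices n .. 2n-1, and a
    write touches only the O(log N) hashes on its path to the root."""
    def __init__(self, num_leaves):
        self.n = num_leaves                  # assumed a power of two
        self.tree = [b""] * (2 * num_leaves)

    def update(self, leaf_index, value_hash):
        i = self.n + leaf_index
        self.tree[i] = value_hash
        while i > 1:                         # rehash the path upward
            i //= 2
            self.tree[i] = h(self.tree[2 * i] + self.tree[2 * i + 1])

    def root(self):
        return self.tree[1]
```

The flat array is also convenient to keep serialized on disk, since an update rewrites only a handful of slots.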

I can't speak to how Dynamo or Cassandra use Merkle trees, but Riak stopped using them for intra-cluster synchronization (hinted handoff and read-repair are sufficient in most cases). We have plans to add them back later after some internal architectural bits have changed.

For more information about Riak, we encourage you to join the mailing list: http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com