Choosing a Good Hash Function, Part 3

Author’s note: Part three of a series studying hash functions. My last post identified a few candidate algorithms that are subjected to further scrutiny here today.

The Story So Far

The simplest attribute on which one could imagine differentiating candidate hash functions is the number of collisions produced when hashing a fixed pool of keys. By that standard, my last post identified Murmur3, Jenkins, City, Spooky, FNV1/1a, SDBM, AP, and RS as possible contenders. Today we’re going to see how they compare to each other on some more rigorous tests.

Random Uniformity

A hash function ought to distribute its keys uniformly across its output range. To see how these functions stack up, we’ll put our 42 million unique keys through each hash function, bin the output, and compare the bin counts with expectation:

For bins of equal size, E[bin_i] = (number of items hashed) / (number of bins)

Now, uniformity is different from random uniformity. In general the latter is not always necessary for building a good hash table, but the analysis of some schemes assumes it. For our purposes, we’re going to want our hashes to look like they are drawn from a random uniform distribution — simple uniformity won’t cut it for our applications. This means that when we look at our bin counts, we want them to be neither too smooth nor too lumpy. To quantify this concept, we’ll use a chi-squared test.

In Volume II of TAOCP, Donald Knuth provides a somewhat ad hoc but easy-to-understand method for interpreting the p-values calculated by a chi-squared test of randomness. If your p-value is less than 0.01 or greater than 0.99, the process that generated those results is almost certainly non-random. Something less than 0.05 or greater than 0.95 should be considered suspect. Finally, he designates a p-value of less than 0.10 or greater than 0.90 as “almost suspect”.
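
To make the machinery concrete, here is a minimal Java sketch (hypothetical names, not the code behind the table below) of the statistic in play: bin the hash values, compute Pearson's chi-squared statistic against the flat expectation, and, because the bin counts are huge, approximate the p-value via a standard normal, as the footnote under the table notes.

    import java.util.Arrays;

    public final class UniformityCheck {
        /** Pearson's chi-squared statistic for observed bin counts against a flat expectation. */
        static double chiSquared(long[] binCounts, long totalItems) {
            double expected = (double) totalItems / binCounts.length;   // E[bin_i]
            double chi2 = 0.0;
            for (long observed : binCounts) {
                double diff = observed - expected;
                chi2 += diff * diff / expected;
            }
            return chi2;
        }

        /** Normal approximation: chi-squared with df degrees of freedom has mean df, variance 2*df. */
        static double zScore(double chi2, int bins) {
            int df = bins - 1;
            return (chi2 - df) / Math.sqrt(2.0 * df);
        }

        public static void main(String[] args) {
            long[] bins = new long[100];
            Arrays.fill(bins, 420_000);     // a suspiciously flat toy example
            double z = zScore(chiSquared(bins, 42_000_000L), bins.length);
            // Read p off a standard normal table: a large positive z is "too lumpy",
            // a large negative z (as here) is "too smooth".
            System.out.printf("z = %.2f%n", z);
        }
    }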

Here I’ve cut the whole 64-bit output space into 100 bins, and again into 1,000,000 bins. For a final test I took the hash values modulo 2^20 to check the distribution of the bottom 20 bits in isolation.

Hash Function  1 Million bins*  Bottom 20 bits*  100 bins
AP             0.70             0.50             <0.01
City           0.07             0.29             0.46
FNV64-1        <0.01            >0.99            0.97
FNV64-1a       >0.99            >0.99            0.87
Jenkins        0.17             0.46             0.72
Murmur3        0.14             0.31             0.08
RS             >0.99            >0.99            0.23
SDBM           >0.99            >0.99            >0.99
Spooky         0.84             0.27             0.98

*p-values estimated from a standard normal distribution

Jenkins passes all three of these nicely. City and Murmur each come up “almost suspect” once, and Spooky shows some suspicious behavior in the 100 bin test. I put the heaviest weight on the bottom 20 bit test, and can pretty comfortably give these four functions a pass here. AP does dramatically better at higher bin counts, which is interesting. We can pretty solidly eliminate RS, SDBM, AP, and both FNV variants based on this analysis alone.

As a final note, hash functions are not meant to be RNGs! This test holds them to a very rigid standard that is not generally necessary to build a good hash table. It’s just that in our specific application, we’re going to want our hash values to be somewhat random looking.

Using Keyspace Structure

Before I continue, let me explain a little bit more of the structure of the data I am working with. I have 251 namespaces, each of which has a variable number of 192- and 256-bit keys associated with it. All told I have in the neighborhood of 66 million data points of the form (namespace, key). Only the key portion of these tuples actually gets hashed, however. Up until this point, we have been ignoring the namespace attribute of these data points, and thus have been restricted to looking at the 42 million unique (key, hash(key)) pairs. Let’s see if we can exploit this larger set of data by including the namespaces!

In the chi-squared analysis above, we did our binning over the union of all namespaces. Now let’s individually bin the hash values of each namespace. All said and done, we have 251 namespaces ranging in size from a tiny handful to several million elements. This gives us 251 vectors of size 100, with

V_{n,i} = number of items of namespace n hashed to the i-th bin

For each namespace, we can compute the mean and variance of its count vector. I’ll leave it as an exercise to the reader, but it’s a pretty simple calculation to show that if you sample from a random uniform distribution, the variance of such a bin-count vector should equal its mean. If the variance is lower than the mean, it implies that the distribution is flatter than expected. Conversely, if the variance is higher, it implies the existence of hot spots on the range that are getting more than their fair share of data points hashed to them.
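
For the curious, the exercise goes roughly like this: if a namespace hashes N items uniformly at random into k bins, each bin count is Binomial(N, 1/k), so

Var[V_{n,i}] = N * (1/k) * (1 - 1/k) ≈ N/k = E[V_{n,i}]

for any reasonably large k, such as our 100 bins.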

Enough with the words, let’s look at the graphs! To generate these, I took the subset of namespaces that had at least 100,000 elements, of which there are 83. Each point is a namespace, and the green line shows the theoretical variance = mean relationship we’d expect from binning a random uniform distribution. Finally, I ran a Bonferroni corrected chi-squared test within each namespace. Those that come out “almost suspect” or worse are highlighted in red.

You can think of these namespaces as small experiments. Together, they help give us a picture of what the chi-squared test done on the whole dataset tells us.

A few observations:

  • Under the 100 bin chi-squared test, SDBM was flagged as being way too uniformly distributed. We can see that quite clearly here. Generally, the variance of the bin counts is quite a bit lower than the mean bin count.
  • On the other hand, AP has a comparatively high variance. This translates, again, to some bins being overly “favored” by the hash function.
  • These pictures also give us some idea of how noisy the functions are on a namespace by namespace basis. Compare Spooky and Murmur3. The residuals for all of the namespaces are quite low, and basically equal for Spooky, whereas Murmur3’s residuals show a lot more variability.

So far we’ve been taking our input sets as a given, and examining the statistical properties of the outputs. While powerful, we need not limit ourselves to these techniques. Onward to avalanche!

Avalanche Analysis

A common test of hash function performance is whether or not it achieves “avalanche.” This refers to the desirable characteristic that

P(Output bit i changes | Input bit j changes) = 0.5 for all i, j

Basically, if we keep all of the input bits the same, save for exactly 1 which we flip, we’d hope that each of our hash function’s output bits changes with probability 1/2.

I generated the following avalanche diagrams by using a random sample of 4000 keys (2000 of each type). The x-axis is the input key bit, the y-axis is the output hash bit, and the color of the (x,y) tile is a measure of the bias that I/O pair has. Black indicates the desired 50% flip probability, while bright green indicates that the output bit is “stuck”: ceteris paribus, it doesn’t ever vary as a result of flipping just that input bit.
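
The bookkeeping behind each diagram is simple enough to sketch. Here’s a toy Java version that works on 64-bit keys with a pluggable 64-bit hash (our real keys are 192 and 256 bits, but the procedure is identical): for a sample of random keys, flip each input bit in turn and record how often each output bit flips.

    import java.util.Random;
    import java.util.function.LongUnaryOperator;

    public final class Avalanche {
        /** Estimate P(output bit i flips | input bit j flips); 0.5 everywhere is ideal avalanche. */
        static double[][] flipProbabilities(LongUnaryOperator hash, int samples, long seed) {
            double[][] flips = new double[64][64];      // [input bit j][output bit i]
            Random rng = new Random(seed);
            for (int s = 0; s < samples; s++) {
                long key = rng.nextLong();
                long base = hash.applyAsLong(key);
                for (int j = 0; j < 64; j++) {
                    long delta = base ^ hash.applyAsLong(key ^ (1L << j));   // flip input bit j
                    for (int i = 0; i < 64; i++) {
                        if (((delta >>> i) & 1L) != 0) flips[j][i]++;
                    }
                }
            }
            for (double[] row : flips)
                for (int i = 0; i < 64; i++) row[i] /= samples;
            return flips;
        }

        public static void main(String[] args) {
            // A deliberately weak multiplicative mixer: flipping a high input bit can never reach the low output bits.
            double[][] p = flipProbabilities(k -> k * 0x9E3779B97F4A7C15L, 4_000, 42L);
            System.out.printf("P(output bit 0 flips | input bit 63 flips) = %.3f%n", p[63][0]);   // prints 0.000
        }
    }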

Avalanche Diagram

This test absolutely wrecks AP, SDBM, both FNV twins, and RS. Jenkins has some poor mixing in its upper bits, but that is mentioned in the implementation. It’s very small, but a slight bias can be observed in City’s lowest bits on the Creative keys. Murmur3 and Spooky are the only two functions left unscathed by this test. Given some of our algorithmic needs, this is a very slight knock against both Jenkins and City.

Conclusion

After all of this, Murmur3, Jenkins, City, and Spooky are the only functions that I’m really pleased with for our work. I’ll give a slight edge to Murmur3 and City over Jenkins due to the avalanche results, and City’s incredible speed. Spooky’s performance here is notable, but I’m a little uneasy putting it forward as a candidate for use in production, as it is still in beta. I’ll be keeping my eye on it. Based on these results it shows a lot of promise!

The next logical step is to plug some of these in to Timon’s work, and see how they serve as the keystone of our hash table!

Choosing a Good Hash Function, Part 2

Author’s note: Part two of a series in which I investigate the performance of a menagerie of hash functions on our data. In today’s episode the analysis begins in earnest with an investigation of collision rates.

Hash function designers have many tools at their disposal, but at their heart, most algorithms follow the same pattern: bytewise iteration over a key during which some internal state is mixed up with the key bits via some combination of ANDs, ORs, XORs, ADDs, shifts, magic numbers, modular arithmetic, and similar tools. As an example, consider the famous FNV hash function, which is astoundingly simple in its construction:

#include <stdint.h>

/* FNV-1: start from the 64-bit offset basis, then for each byte of the key
 * multiply by the FNV prime and XOR in the byte. */
uint64_t fnv1_hash (void *key, int n_bytes)
{
    unsigned char *p = key;
    uint64_t h = 14695981039346656037ULL;   /* 64-bit FNV offset basis */
    int i;
    for (i = 0; i < n_bytes; i++) {
        h = (h * 1099511628211ULL) ^ p[i];  /* 1099511628211 is the 64-bit FNV prime */
    }
    return h;
}

With all hash functions, the hope is that one may sufficiently mix up the input bits such that, on average, the output is uniformly distributed across its available range. If you think that designing such an algorithm sounds tricky, you’re right!

Over the years many hash functions have been developed that vary widely in quality and complexity. There are many that, despite some demonstrable theoretical flaws, have worked well in enough practical applications to have gained popularity. Other algorithms have been designed from the ground up to achieve a variety of theoretical benchmarks. To get started with this project, I spent some time looking around and came up with a list of 16 reasonably well-known functions that span a pretty wide breadth of quality, from negative control to veteran. I started with the simplest test imaginable: I have ~42 million keys available, each of which is either 192 or 256 bits long. Given my entire available set of keys, what fraction can be hashed without collision?

Fraction of keys hashed without collision

A few notes about this graph:

  • All hashes are 64 bits.
  • Hashing is hard. Many of these functions do quite poorly compared to sampling from a random uniform distribution. The theoretical expectation here is that effectively zero keys should collide (see the quick estimate after this list).
  • It looks like there is a significant hurdle at ~85% of the keys.
  • Although hard to see on this chart, OAT (Bob Jenkins’ less popular one-at-a-time hash) came in just under 100%. While this is a standout performance in comparison to most of the functions tested, it is still below what is expected by theory.
  • Unsurprisingly, Murmur3 and Jenkins eat this data set for lunch. They are carefully designed to work well on a broad variety of inputs, thoroughly tested, and I would have been shocked to see them fail here. They are matched by Google’s City Hash, Spooky Hash (Jenkins’ most recent project, which is still under development), FNV-1/1a, SDBM hash (also known as x65599), RS (Arash Partow‘s version of a hash function designed by Robert Sedgewick), and AP, another function of Partow’s own creation.
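
For a sense of just how close to zero that theoretical expectation is, the standard birthday-problem estimate with n ≈ 42 million keys and a 64-bit output gives an expected number of colliding pairs of roughly

n(n-1)/2 / 2^64 ≈ (4.2 × 10^7)^2 / 2^65 ≈ 5 × 10^-5

so a truly random 64-bit hash would almost certainly produce no collisions at all on this key set.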

We’re by no means done here — we’ve simply thinned our list to a few algorithms that merit deeper exploration. The challenge now becomes distinguishing our high performers, and for that we’ll need tools a little bit more sophisticated than simple collision counts. Bring your statistics thinking cap to part 3!

Appendix: Further Reading

  • Unsurprisingly, Donald Knuth’s chapter from The Art of Computer Programming, Volume III: Sorting and Searching is an excellent piece.
  • Bob Jenkins wrote a great article in Dr. Dobb’s back in 1997 that is also a great starting place.
  • More generally, Jenkins’ own website is a treasure trove of material on the subject of hashing.
  • There’s a lot of material about FNV to be had here.
  • And let’s not leave out Murmur Hash and City Hash.

Big Memory, Part 4

Author’s Note: This is part 4 of a series of posts about my adventures in building a “large”, in-memory hash table. This post is a summary of some pure Java hash table libraries.

Background

In my last post, I discussed the results of rerunning Nick Welch’s benchmark of C/C++ hash tables. However, since we use the JVM in production, those results were purely academic. I need to either verify or strike down my suspicions in the Java world.

For this test, I’m only concerned with primitive long-long mappings. For those following along, this is a departure from the methodology used in part 3; the “Mapping types” section below explains why.

Goals

I was hoping to replicate some fraction of the amazing throughput seen in the C/C++ benchmark. (A million inserts per second wouldn’t be a bad start!) More importantly, I sought to learn something about the relative performance of different schemes on the JVM. Would the bit-masking used in power-of-2-sized tables prove faster than the modulus operator used by prime-sized tables? Would the extra pointer hops of an externally chained implementation be more or less costly than the probe chains of open addressing schemes? Would Java even be able to handle a 128GiB hash table? What effect would GC have? What implementation tricks could be learned?
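
For readers unfamiliar with that first distinction, it is just a question of how a 64-bit hash gets reduced to a bucket index. A minimal sketch with hypothetical names, not code from any of the libraries below:

    public final class BucketIndex {
        /** Power-of-2-sized table: a single AND, but only the low bits of the hash participate. */
        static int maskIndex(long hash, int tableSize) {
            return (int) (hash & (tableSize - 1));      // tableSize must be a power of two
        }

        /** Prime-sized table: a (much slower) modulus, but every bit of the hash participates. */
        static int modIndex(long hash, int tableSize) {
            int idx = (int) (hash % tableSize);
            return idx < 0 ? idx + tableSize : idx;     // % can go negative for negative hashes
        }
    }

The AND is cheap but only ever sees the low bits of the hash, which is part of why the power-of-2 maps in the table below pair it with a strong mixer like MurmurHash3.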

I’ll save you some time right now if you’re actually looking for the exact answers to all those questions: I don’t have most of them. I’ll hopefully get to them in later posts. The available, open-source primitive hash maps simply did not cover enough of the algorithm space to reliably demonstrate the isolated effect of a particular design choice such as power-of-2- vs prime-sized bucket arrays.

Instead, this post’s goals are more modest: I wish to answer the following questions:

  • What is the effect of load factor on the performance of these maps?
  • Given the out-of-the-box implementations available, which designs are the most appealing?
  • What optimizations can be applied to their implementation, given my requirements?
  • Are any of these suitable for my needs right out of the box?

Candidates

Lib.                Ver.      Rel. Date  Class                      Coll. Res.         No. Buckets  Hash Fun.    Bytes/Entry
mahout-collections  1.0.0     2010       OpenLongLongHashMap        double hashing     prime        [1]          17
Trove               3.0.1     2011       TLongLongHashMap           double hashing     prime        [2]          17
PCJ                 1.2       2003       LongKeyLongOpenHashMap     double hashing     prime        [2]          17
fastutil            6.4.1     2011       Long2LongOpenHashMap       linear probing     2^n          MurmurHash3  17
hppc                0.4.1     2011       LongLongOpenHashMap        linear probing     2^n          MurmurHash3  17
PCJ                 1.2       2003       LongKeyLongChainedHashMap  external chaining  prime        [2]          48
JDK                 1.6.0_27  2011       HashMap<Long, Long>        external chaining  2^n          [3]          100 (see aside below)

[1] value ^ (value >> 32)
[2] value ^ (value >>> 32)
[3] h ^= (h >>> 20) ^ (h >>> 12); h ^ (h >>> 7) ^ (h >>> 4);

Setup

  • 2 warmup runs, 2 observation runs for each combination of (Class, Load Factor, Size Hint)
  • CentOS 5.4, dual-Xeon X7542 (2.67GHz, 6 cores each), 256 GB of RAM
  • 4.72 billion 64-bit integer keys (976 million unique) inserted with random long values, read from disk
  • baseline read/parse rate was roughly 10 million records per second
  • JVM flags: -server -Xmx128g -XX:+PrintGCTimeStamps
  • Code is on GitHub, added to the benchmark used in part 3. See the section of the README that discusses the “libraries” comparison.

The runs completed are listed below, as are some odd failures that occurred in testing.

The test pushed several hash tables to their breaking point, since they either imposed artificial limits on the backing array length (usually 2^30 ~ 1.074B) or they ran into Java’s built-in limit, Integer.MAX_VALUE, of 2^31 - 1 ~ 2.147B. Extremely high load factors were required to even get some tables to finish the benchmark, which I understand is suboptimal.

In general, I tried to run three different maximum load factors where possible: 0.50, 0.75, and 0.91. The first two are common default load factors for the hash tables in question, and the last one is the highest load factor such that 976,000,000 / loadFactor < 2^30. What’s important to note here is that I actively avoided resizes during the runs by setting an initial capacity corresponding to the load factor. The number of “usable” buckets was held constant (at 976M) while the number of empty buckets varied in order to test the effect of load factor on performance.
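
For concreteness: 976,000,000 / 0.91 ≈ 1.0725 × 10^9 buckets, which just squeaks in under 2^30 ≈ 1.0737 × 10^9, while 976,000,000 / 0.90 ≈ 1.084 × 10^9 would already blow past that cap; hence 0.91 as the top load factor.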

Results

The broad strokes are that all of these libraries live in the 1M to 2M inserts-per-second range, with performance degrading as they approach their maximum load factors, some more than others. The drastically different bottom quartiles are what stand out to me.

Worst-cases for the chained maps are close to full stops whereas the open addressing maps’ throughputs fall by about 25%. HashMap<Long, Long> and LongKeyLongChainedHashMap clearly fall apart in a serious way on a regular basis, roughly every 50 million new keys. It’s not clear to me that the code would allow resizes/rehashes given the initial capacity provided, so my guess was that these were stop-the-world GCs.

I reran the test with -XX:+PrintGCTimeStamps -verbose:gc to verify, and 78% of the real time of the two observation runs was consumed by GC, with regular pauses up to 100 seconds. For contrast, Long2LongOpenHashMap‘s GCs amounted to less than 1% of real time with no pause greater than 0.2 seconds (with the exception of the Full GC that occurred between observation runs). I computed those percentages by dividing the sum of the reported GC durations by the ‘real’ value from time(1).

It makes sense that GC pauses would uniquely affect the externally-chained implementations, since they produce so many objects. (All the open addressing implementations only create a few handfuls of objects per map instance.) In any case, that problem alone disqualifies them for my uses: such significant dips in throughput are unacceptable. Indeed, any externally-chained map will be suspect at these sizes because of the need to create a new object per entry. Perhaps different collectors can mitigate this problem, but given the extremely sound performance of the open addressing implementations, I’m inclined to table the externally chained maps for a while. The operational advantage of simply not worrying about tuning GC is huge for me.

The two most interesting maps proved to be HPPC’s LongLongOpenHashMap and fastutil’s Long2LongOpenHashMap. Both were blazing fast and quite stable, with minor throughput degradation only after 700 million keys. LongKeyLongOpenHashMap, TLongLongHashMap, and OpenLongLongHashMap began their degradation right from the get-go, dropping by over 50% over the course of the test. fastutil’s implementation’s variance was also extremely low, which is a very admirable quality from an operational perspective.

It’s fascinating that a design as straightforward as power-of-two bucket counts and linear probing could be so amazingly effective. It’s the classic example of a ‘flawed’ open addressing scheme: it’s susceptible to primary clustering, the number of probes shoots up as you go past 90% occupancy, and it’s unforgiving of mediocre hash functions.

After thinking about it, though, it makes great sense that it performed so well here. Both favorites use a byte[] to store the state of each bucket, which means that each 64-byte cache line has on average the next 32 bucket statuses pre-cached. This means that walking the linear probe chain is effectively free for the next 32 buckets (compared to going to L1 or, heaven forbid, main memory). At 90% occupancy, the average successful lookup requires 5.5 probes while unsuccessful lookups require 50 probes. This means that on average we’re fetching at most two cache lines for the probing. Compare this to double hashing, which by design is meant to probe far from the original hash and thus requires a line fetch per probe. At 2.5/10 average probes for successful/unsuccessful lookups, you’ll be doing far more fetches with double hashing than with linear probing. If you roll your own status array on top of a long[] and some masking tricks, you could potentially fit 256 2-bit statuses on each cache line, further improving linear probing’s efficiency. ($ cat /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size to get your cache line size in bytes.)
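
For the curious, the masking trick is tiny. A hypothetical sketch (not from either library) of packing 2-bit bucket states into a long[]:

    public final class PackedStatus {
        // 2-bit bucket states: 32 per long, 256 per 64-byte cache line.
        static final int EMPTY = 0, OCCUPIED = 1, REMOVED = 2;

        private final long[] words;

        PackedStatus(int buckets) {
            words = new long[(buckets + 31) / 32];
        }

        int get(int bucket) {
            int shift = (bucket & 31) << 1;                       // 2 bits per bucket
            return (int) ((words[bucket >>> 5] >>> shift) & 3L);
        }

        void set(int bucket, int status) {
            int word = bucket >>> 5;
            int shift = (bucket & 31) << 1;
            words[word] = (words[word] & ~(3L << shift)) | ((long) status << shift);
        }
    }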

If you combine linear probing with a good hash function, like Murmur3, you can avoid primary clustering in practice. What I find curious is that so many of the double-hashing libraries use such a weak hash function. Sure, it’s only 2 ops, but is the hash function the real bottleneck in their code? Can they really not afford the extra 6 ops? I doubt it. I’m going to add some instrumentation to a few of these implementations to produce probe-length histograms and verify what I’ve speculated. Stay tuned.
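
For a sense of scale, here are the two mixers side by side: the two-op fold the prime-sized maps use (footnotes [1] and [2] in the candidates table) and MurmurHash3’s 64-bit finalizer, which is roughly what the extra 6 ops buy you. (The fmix64 constants come from the public MurmurHash3 reference; the Java transcription below is mine.)

    public final class Mixers {
        /** The 2-op mix used by the double-hashing maps: fold the high 32 bits onto the low 32. */
        static long foldMix(long value) {
            return value ^ (value >>> 32);
        }

        /** MurmurHash3's 64-bit finalizer (fmix64): ~8 ops, far better bit diffusion. */
        static long fmix64(long k) {
            k ^= k >>> 33;
            k *= 0xff51afd7ed558ccdL;
            k ^= k >>> 33;
            k *= 0xc4ceb9fe1a85ec53L;
            k ^= k >>> 33;
            return k;
        }
    }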

Conclusions

In order to meet the requirements I’ve enumerated in my previous posts, I’m going to need more than 976 million occupied buckets. I’m going to need 1.5 billion, and a fraction of that again for empty buckets. Unfortunately, 2^30 is just a bit over 1.073 billion, and well short of 1.5 billion. This means that I can’t use one of the two favorites from my testing (fastutil’s Long2LongOpenHashMap) out of the box. Luckily, HPPC’s implementation does not have an artificial cap on the maximum number of buckets, so moving forward that will be a viable out-of-the-box option. (Edit: It has the same 2^30 cap as fastutil.) In order to complete my testing, I’ll likely dig up another month’s worth of data and run HPPC and modified variants of the other open addressing hash tables up to about 2 billion keys.


Mapping types

In part 3 of this series of posts, I tested out several hash table “services” to see if any of them would give me the end-to-end storage functionality I needed for this project. In that test, I used an append semantic for the performance comparison because I was hoping that one of those “services” would solve all of my storage needs in one shot. Ideally, such a service would provide a fast algorithm, efficient memory management, and solid monitoring facilities. Sadly, all of them failed on the first count and I was forced to move onto a more hands-on approach.

This means worrying about the actual storage implementation and its interaction with the JVM’s GC. If I take the naive route and simply allocate a new long[] for every user’s activity, the memory allocation time alone could cripple the application. It has become clear that storing 1.5 billion long-lived long[]s (about 80GB’s worth) on a high-occupancy heap will cause unacceptable GC pauses. As such, I’ll be storing the user activity chains in a slab allocator (of my own construction) and using the hash table to translate between user IDs and their storage reference. Thus, I’m transitioning from benchmarking appends to puts, and I’m now mapping from long to long instead of long to long[]. Since the slab allocator will return roughly random longs as storage references, I’ve simulated the values of the hash table with a call to Random#nextLong().

An Aside on HashMap<Long, Long> and External Chaining

Unlike the other candidates, HashMap<Long, Long> stores boxed keys and values, wrapping each pair in an Entry object.

    public final class Long extends Number implements Comparable<Long> {
        ...
        private final long value;
        ...
    }

    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        Entry<K,V> next;
        final int hash;
        ...
    }

On a 64-bit JVM with a starting heap greater than 32GB, each of our 976 million unique users (wrapped in an Entry instance) requires:

  • 8 bytes for the pointer to the Entry from the backing Entry[],
  • 16 bytes for the object header of the Entry,
  • 8 bytes for the pointer to final K key,
  • 8 bytes for the pointer to V value,
  • 8 bytes for the pointer to Entry next,
  • 4 bytes for int hash,
  • 16 bytes for the object header of final K key,
  • 8 bytes for the underlying long of final K key,
  • 16 bytes for the object header of V value, and
  • 8 bytes for the underlying long of V value

for a grand total of 100 bytes per unique key. This means that at the default 0.75 load factor, you’re looking at 97,600,000,000 bytes for the used buckets and 2,600,000,000 bytes for the empty buckets. 93 GiB later and I haven’t even stored the events themselves. Since boxing has its own CPU costs, I’m not expecting much in terms of performance or memory footprint from HashMap. It’s included here mostly as a baseline, but also to demonstrate its unsuitability for this scale of work. The same goes for LongKeyLongChainedHashMap, minus the boxing. Even at a more modest 48 bytes per entry, it’s still using 55 GiB plus the cost of maintaining a billion items on the heap. External chaining is simply unviable at these sizes.

Runs

For each load factor, the initial capacity of the table was hinted to be at least 976,000,000/loadFactor in order to avoid resizes, where possible. However, Trove and fastutil automatically do this computation by requesting the number of unique elements expected, instead of a lower bound on the initial bucket count. They were given 976,000,000 as their starting size hint regardless of load factor. As a last resort, 976,000,000 was used as the size hint when a map could or would not finish a test run.

The “Constructor” column indicates whether the constructor expects the initial capacity (“IC”) or the expected elements count (“EE”).

Lib.                Class                      Constructor  LF    Size Hint
mahout-collections  OpenLongLongHashMap        IC           0.91  1,073M
                                                            0.75  1,302M
                                                            0.50  1,952M
Trove               TLongLongHashMap           EE           0.91  976M
                                                            0.75  976M
                                                            0.50  976M
PCJ                 LongKeyLongOpenHashMap     IC           0.91  1,073M
                                                            0.75  1,302M
                                                            0.50  1,952M
fastutil            Long2LongOpenHashMap       EE           0.91  976M
hppc                LongLongOpenHashMap        IC           0.91  1,073M
PCJ                 LongKeyLongChainedHashMap  IC           0.91  976M
JDK                 HashMap<Long, Long>        IC           0.91  976M
                                                            0.75  1,302M
                                                            0.50  1,952M

Oddities

When running initial tests, the configuration (TLongLongHashMap, 0.50, 976M) forced a catastrophic six-hour-long rehash at roughly 1.78 billion records in. At first I suspected a pathological probe chain followed by a rehash; however, I could not reproduce this particular case despite using identical data and identical runtime parameters. This is utterly horrifying from an operational perspective, but I’m willing to write this off as some artifact of the machine or perhaps the screen session. In another initial test, the configuration (LongKeyLongChainedHashMap, 0.50, 976M) crashed the Parallel Mark-Sweep garbage collector thread. Again, I couldn’t reproduce the problem despite my best efforts, so I’m writing it off as an artifact. It could have been the 128GB heap, it could have been the hundreds of millions of objects, it could have been anything. Sadly, I don’t even have a reproducible test case to give the JVM team, so I don’t foresee much progress on the bug.

Choosing a Good Hash Function, Part 1

Author’s note: Hello, reader! I’m Colin, a new data scientist on the team. This is the first in a series of posts in which I will be describing my efforts to characterize various hash functions for use here at AK. Future posts will discuss the statistical and computational properties exhibited by these algorithms on our data. Additionally, I will be tackling the problem of  trying to use the data that we have available to uncover potentially pathological input sets.

At AK, every event that we track is encoded as an n-tuple of 64-bit integers:

key component #1, key component #2, … , key component #n

This is a convenient form for summary and analysis, but obviously not optimal from a storage perspective. Internet advertising is no stranger to large numbers, but 2^(64n) is enormous. The set of keys that we will draw from this theoretical universe of keys is comparatively quite small. We find ourselves posed with a problem that looks very much like a natural fit for hashing!

A well-chosen hash function, operating at the heart of a solidly designed hash table, could give us a big win on both the internal storage/representation front and in the wild, freeing up space in client cookies, etc.

Paraphrasing Knuth, one should not choose a random hash function to generate a good hash table. As with any hashing task, there are the three classical issues to consider:

  • The size of the hash output, in bits: large enough to meet your collision goals (a collision being two distinct keys hashing to the same value), yet small enough to stay within your storage constraints
  • The distributions of hashes on your input data, and the related problem of collisions
  • Computation time

Over the next several posts, I will be putting a number of hash functions through the wringer in an effort to identify a handful that perform well on our data.

Big Memory, Part 3.5: Google sparsehash!

Author’s Note: This is part 3.5 of a series of posts about my adventures in building a “large”, in-memory hash table. This post is a handful of observations I made while running someone else’s benchmark of C/C++ hash table implementations.

I ran across Nick Welch’s Hash Table Benchmarks during my research and decided to rerun a subset of his benchmark for much larger key counts. (You can find my fork of his code on GitHub.)

The differences between the benchmarks he ran and the ones I ran are:

  • I’m using sparsehash 1.11 (vs. 1.5), Boost 1.41 (vs. 1.38), and Qt 4.6 (vs. 4.5).
  • I removed Ruby’s hash implementation because of its abysmal performance in his benchmark.
  • I increased the insertion count to 1.5 billion (from 40 million) and the step size to 100 million (from 2 million).
  • I only ran the Random Inserts: Execution Time (integers) and the Memory Usage (integers) tests.

I did a best-of-three, using the recommended ionice/nice run command on an m2.4xlarge EC2 instance. (I added a dummy entry for Ruby’s hash so that the colors of my plots would match the originals.)

As Nick noted, Google’s sparsehash maps are the real deal. They sustain their performance across a broad range of key counts, offering strong tradeoffs between memory and speed. When memory usage must be low and allocation smooth, sparse_hash_map is the clear choice. Interestingly, it’s simply quadratic probing built on top of a dynamic array. Dynamic arrays only store addressed entries, so no matter how large the array address space may be, you only pay for the entries you’re using, plus some bookkeeping overhead. The dynamic array mitigates the memory cost of maintaining the low load factor required to operate a fast, large hash table with quadratic probing. At sparse_hash_map‘s default load factor of 0.8, a conventional array-backed quadratic probing scheme ‘wastes’ 20% of allocated memory. This benchmark maps to and from 64-bit integers, so at the claimed 1-2 bits of overhead per entry the dynamic array implementation only ‘wastes’ about 1% of allocated memory. It also allocates memory one entry at a time, so each insert is slower but requires constant time. Rehashes are the only allocation hiccup in this scheme.
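
The core trick behind that dynamic array is easy to sketch: each fixed-size group keeps a bitmap of which logical slots are occupied, and a population count of the bits below a slot gives its position in a densely packed values array. Here is a toy Java version with 64-slot groups (the real sparsetable uses different group sizes and a more careful layout):

    /** Toy sparse group: pays only for occupied slots plus one bitmap word. */
    final class SparseGroup {
        private long bitmap;                     // bit i set => logical slot i is occupied
        private long[] values = new long[0];     // densely packed, in slot order

        private int rank(int slot) {             // occupied slots strictly below 'slot'
            return Long.bitCount(bitmap & ((1L << slot) - 1));
        }

        boolean contains(int slot) { return ((bitmap >>> slot) & 1L) != 0; }

        long get(int slot) { return values[rank(slot)]; }

        void put(int slot, long value) {
            int pos = rank(slot);
            if (!contains(slot)) {               // grow the dense array by one and shift the tail
                long[] grown = new long[values.length + 1];
                System.arraycopy(values, 0, grown, 0, pos);
                System.arraycopy(values, pos, grown, pos + 1, values.length - pos);
                values = grown;
                bitmap |= 1L << slot;
            }
            values[pos] = value;
        }
    }

That rank() call is the bitmap population count mentioned a couple of paragraphs down, and the one-element array growth is the per-insert allocation cost the benchmark pays for.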

A good review of the algorithm of the sparsetable used to back the sparse_hash_map is provided by the Google folks here, under insert() and in the source in sparsetable.cpp, sparsehashtable.h, and sparse_hash_map. You’ll also find an interesting claim about their particular choice of quadratic probing with triangular numbers, which I’ve been unable to find evidence for elsewhere. Regardless of the theoretical details, these benchmark numbers are enough to convince me of the efficacy of that choice.

In the opposite case where throughput is of paramount importance, dense_hash_map shines. It maintains a 0.5 load factor and stores entries in a regular C array, which means it has to pay for all the buckets it addresses (not just the ones it uses) and also leaves half of them empty. This avoids the costly bitmap population counts and the constant reallocation needed by the sparse_hash_map at the cost of infrequent, lumpy allocation and a large per-key overhead. So lumpy, in fact, that it couldn’t quite make it to the 1.5 billion key mark on the provided 68GB of memory.

The algorithm behind sparse_hash_map seemed to suit my needs perfectly: smooth allocation, compact footprint, and still plenty fast despite being the slowest of the group. I dug in a little further to get a feel for the throughput profile and ended up modifying the benchmark to report the amount of time between every million inserts up to 1.5 billion. I ran 4 warmup runs and 16 observation runs. This gave a familiar view, but with a twist. The throughput costs of the different allocation strategies were exposed:

(Note: Both plots share the bottom plot’s legend.)

The top plot shows the number of inserts per second, measured over 1-million-record intervals. Overall, the results are as expected: the sparse_hash_map and dense_hash_map sandwiching the other three, which may as well have been the same library. (They practically are: the versions of Boost, Qt, and GCC that I used are all prime-sized, chained hash tables.) The throughputs are pretty stable overall: dense_hash_map at around 4.5M inserts/sec, the unordered_maps and QHash at about 3M, and sparse_hash_map at about 1M.

The bottom plot shows the slowest million-record intervals, plotting the number of seconds needed to insert 1 million records. These correspond to the dips in throughput of the top plot. I can only assume these are the observations that spanned a resizing of the table. sparse_hash_map was the worst offender, taking more than three minutes to resize and rehash the table in the worst case, while dense_hash_map had the best worst-case at 43 seconds. The other three predictably fell somewhere in between. (See the five rightmost points on the bottom plot.)

A brief aside: it’s curious that GCC’s std::unordered_map performed roughly 20% faster after its final reallocation (near the 700M record mark) than both Boost’s unordered_map and Qt’s QHash. I’ll have to dig in later to figure out what’s actually happening there, but I suspect it has to do with the simple fact that it reallocates to a much larger memory footprint than the other two, leaving it with a much lower load factor through that section of the test.

In any case, the valuable lesson for me was that unless you properly allocate the initial table size, you will be forced to suffer catastrophically slow resizes, even in C/C++. But what recourse do you have if your table crosses the rehash point between prunings? This also raises important questions about most open-addressing schemes’ need to periodically rehash to clear tombstone entries. Is it even possible to initially allocate a table under these schemes that will be able to balance insertions and deletions without either being incredibly wasteful or forcing a rehash? I suppose it depends deeply on your workload. There are some promising proposals such as hopscotch hashing that are meant to accommodate very high load factors, but it remains to be seen if the algorithm is resilient to long periods of constant use and my particular workload. Another issue I foresee with open addressing is the concurrence of a pathological probing sequence with a hot key (my workload is append-heavy and highly variable from key to key). When combined, you could see meaningful but hard-to-diagnose dips in throughput, which is a worst nightmare of sorts. The extra pointers needed for a chained hash table are looking more and more palatable if they provide better worst-case scenarios.

All this means that I’ll have to spend a great deal of time analyzing our traffic to solve for the correct size such that the deletions and insertions tally up correctly. I’ll also have to take into account maintaining an acceptable load factor if I use open addressing so that “standard” operational throughput doesn’t suffer either. This has wide-ranging implications for the garbage-collection/trimming algorithm I choose as well as the backing storage, but I’ll get into that in the next few posts.

Big Memory, Part 3

Author’s Note: This is part 3 of a series of posts about my adventures in building a “large”, in-memory hash table. Part 1 introduced our goals and our approach to the task at hand. This post is a summary of some candidate hash table “services”.

Goals

To recap, I need a hash table that can support the following:

  • 1.5 billion 64-bit keys, uniformly and randomly distributed
  • values between 16 bytes and 16 kilobytes, with sizes in a Zipfian distribution
  • deployed to one machine, all in-memory
  • sustained 200,000 writes per second over the course of many hours

The API should support a non-bulk, mutable, key-value interface with an append command.

The final requirement is that the source be obtainable. After all, this is just as much about finding a viable candidate as understanding how the results are achieved.

My approach to testing the initial viability of candidates was to replicate a subset of the required production load using some of our production logs. The test amounted to writing 212 million records to a bit over 78 million unique keys. Each record’s key is 8 bytes and its value 16 bytes. The value bytes are simply appended to the existing value corresponding to the key. This closely mimics our real write workload for the project.

Note that throughput and latency are the primary concerns here: we seek a consistently high write rate. Memory overhead, at this stage, is not under scrutiny. (This may strike some as odd, given the hard bounds on a single machine’s memory, but honestly the raw data set we’re seeking to store is easily within the bounds of the servers I described in my previous post. As long as nothing absurd is going on, we can afford to trade some memory for speed.)

Candidates

Given the API requirements, the candidates that immediately came to mind were:

  • Berkeley DB
  • Kyoto Cabinet
  • Redis
  • Memcached

Note that the scope here is restricted to hash table “services”, not hash table libraries. Specifically, I don’t want to manage memory, rehashing, growing, or shrinking. I’ll be covering libraries in the next post.

Under the hood these all use slightly different hashing and collision resolution schemes.

Berkeley DB uses an implementation of Litwin’s Extended Linear Hashing. In particular, it implements linear hashing using a hybrid split control: bucket overflow and load factor independently trigger splits. (Look for ffactor and do_expand in hash_page.c’s __ham_add_el().) Notably, BDB chains memory pages, not object pointers. This is a sensible optimization in a world where main memory is small and disk seeks are costly. However, the cost in code complexity is immense. For an idea of just how much attention to detail is required, download BDB’s source and check out hash_page.c’s __ham_replpair() and __ham_add_el(). It is fascinating to see how much work goes into managing the differences between small and large values. [1]

Kyoto Cabinet “boringly” uses the C++ stdlib’s std::unordered_map. I had trouble finding implementations other than GCC’s, so I can’t really speak to anything but that. The tr1/hashtable implementation uses chaining, with a prime bucket count and a max-load-factor-based rehashing policy. When the ratio of elements to buckets passes a certain threshold (1, I believe), a full stop-the-world rehash is performed. (_M_rehash() on line 1146) The resizing policy finds the smallest prime greater than twice the current number of buckets, and the table is resized. (_M_need_rehash() on line 455) The prime policy default can be seen here.

Redis implements its own hash table that uses chaining as well with a target load factor of 1. Interestingly, it rehashes the keys incrementally in the background, pushing updates to a new table while checking both the old and new tables for reads. The incremental work is spread over all subsequent reads and writes issued to the table. This is perhaps the easiest of the four implementations to fully understand on the first read.
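
The shape of that incremental scheme is simple enough to sketch. This is just the gist in Java (Redis’s actual implementation works bucket-by-bucket over its own chained table), but it shows how the migration cost gets smeared across ordinary gets and puts:

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    /** Gist of incremental rehashing: keep two tables live and migrate a little on every operation. */
    final class IncrementallyRehashedMap<K, V> {
        private Map<K, V> oldTable = new HashMap<>();
        private Map<K, V> newTable = null;               // non-null only while a rehash is in flight
        private Iterator<Map.Entry<K, V>> migration;

        void startRehash() {
            newTable = new HashMap<>(oldTable.size() * 2);
            migration = oldTable.entrySet().iterator();
        }

        private void migrateSome(int steps) {            // piggybacks on every get/put
            while (newTable != null && steps-- > 0) {
                if (!migration.hasNext()) {               // migration finished: the new table takes over
                    oldTable = newTable;
                    newTable = null;
                    return;
                }
                Map.Entry<K, V> e = migration.next();
                newTable.put(e.getKey(), e.getValue());
                migration.remove();
            }
        }

        V get(K key) {
            migrateSome(1);
            if (newTable != null && newTable.containsKey(key)) return newTable.get(key);
            return oldTable.get(key);
        }

        void put(K key, V value) {
            migrateSome(1);
            if (newTable == null) { oldTable.put(key, value); return; }
            if (oldTable.containsKey(key)) oldTable.put(key, value);  // update in place; it migrates later
            else newTable.put(key, value);
        }
    }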

Similar to BDB, memcached implements linear hashing, but it chains object pointers, not memory pages. It uses what the paper calls “load control[led]” splits, meaning that incremental rehashing occurs when the load factor exceeds a certain value. (In this case, 3/2.) Unlike Redis, it does the rehashing in another thread in the background, not as a part of the read or write operations. assoc.c very nicely illustrates the gist of linear hashing with controlled splits; check out assoc_find() and assoc_insert(). Beware, assoc_expand() just sets up some state to signal incremental rehashing. The real guts of the rehashing are in assoc_maintenance_thread(). It is notable how much simpler the code for object-pointer chaining is than the page chaining used in BDB.
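
The addressing rule that makes splitting one bucket at a time possible is compact enough to show here. This is a textbook Litwin-style sketch, not a transcription of BDB’s or memcached’s code:

    /** Linear hashing addressing: only buckets already split this round use the doubled modulus. */
    final class LinearHashAddressing {
        private final int initialBuckets;    // N
        private int level = 0;               // completed doubling rounds
        private int splitPointer = 0;        // next bucket to split in the current round

        LinearHashAddressing(int initialBuckets) { this.initialBuckets = initialBuckets; }

        int bucketFor(long hash) {
            long h = hash & Long.MAX_VALUE;                          // force non-negative
            int bucket = (int) (h % ((long) initialBuckets << level));
            if (bucket < splitPointer) {                             // this bucket was already split
                bucket = (int) (h % ((long) initialBuckets << (level + 1)));
            }
            return bucket;
        }

        /** Call after splitting one bucket, e.g. whenever the load factor crosses its threshold. */
        void recordSplit() {
            splitPointer++;
            if (splitPointer == (initialBuckets << level)) {         // finished a full round
                splitPointer = 0;
                level++;
            }
        }
    }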

Ease of Use

Note: I’m talking specifically about ease of use from a developer perspective. I’m not qualified or interested in commenting on their operational merits here.

Without a doubt, the easiest candidate to set up, use, and analyze was Redis. Between the trivial build from source, the simplicity of the Jedis API, and the visibility provided by the INFO command, using Redis was a walk in the park. The redis.conf file has a lot of knobs but most of them can safely be ignored and the inline documentation is ample.

Kyoto Cabinet came in a close second. I forgot to set $JAVA_HOME before installing the Java bindings, which caused me some grief, but once I figured that bit out everything was right as rain. Instantiating and using it were painless if somewhat sparsely documented.

Memcached was actually a pain to use, not because of the daemon itself, but because of the client libraries available in Java. The fact that the append command required a CAS value in some clients and not in others was the main culprit. One qualm with the daemon itself is that an append command only succeeds if used on an existing key.

Finally, BDB was by far the most frustrating candidate. The configuration is arcane and poorly documented. The errors are unhelpful and often cryptic. Setting the proper combination of permissions for a client is exceedingly difficult unless you peruse the documentation with a very keen eye. The distinction between what configuration should be done on the EnvironmentConfig versus the DatabaseConfig is unclear and poorly documented. Despite specifying an in-memory hash database, a home directory for BDB is still required, even though it never touches it. One has to manually initialize the memory subsystem. Blah! Maybe I’m just uninitiated, but I don’t think I’ve ever been more frustrated with a piece of software. To boot, only the Heap, Queue, and Recno access methods support append puts, leaving me to manually do a get/append/put in the client. Even if BDB is fast enough, there’s absolutely no chance I’ll use it in production due to these limitations.

Results

I’ll briefly note that memcached was so slow that it didn’t complete the test suite in the two days I left it running. As such, I’ve removed it entirely from this comparison. I was probably doing something wrong vis-a-vis configuration of the client and server. Similarly, a simple un-pipelined Redis connection proved to be incredibly slow, at least an order of magnitude slower than BDB. As such, I reran the original Redis test with a pipelined connection, flushing every 10,000 records. Both versions of the test are included in the source for posterity.

These plots come from 30 runs over the data set, preceded by 10 warmup runs. The hash marks are the average value of the number of records processed per second at the particular record count, and the points are the actual observations with 10% alpha.

The first plot includes a baseline processing rate (‘xfer’ in the legend) which indicates how quickly the records can be read and prepared. The second simply excludes the baseline, for a clearer view. You can click through for larger versions of the plots.





You can find the source code used to run these comparisons on GitHub.

Notable bits

  • Despite the drastically different algorithms used by BDB and KC, their results were roughly equivalent. KC’s performance proved to be slightly smoother, and seems to have reached a stable point at around 170 million records while BDB continued to degrade. A concern is that they were the only two packages that were used through JNI. This may have limited performance, but I am disinclined to investigate further as we use the JVM in production, which necessitates this cost when interacting with these services. That said, tr1/hashtable’s underlying algorithm is still quite attractive. It performed more smoothly despite not having a hint about the number of unique keys, while BDB did.
  • Though Redis’ throughput proved to be about 50% greater than KC and BDB, the precipitous drops during (what I assume is) resizing are extremely worrisome. (I’m guessing it’s resizing since the distance between drop-offs roughly doubles each time.) The performance drop-off just doesn’t jibe with the goal of continuously high throughput. Equally worrying is the cost of at least doubling memory use during rehashing. Even though I mentioned this is a secondary concern in this comparison, it is an important operational problem.
  • The performance difference between tr1/hashtable (KC) and Redis is marked, given they both use chaining. I suspect this is either a result of pipelining or JNI overhead. The purpose of adding the pipelined version of the Redis test was to emulate a scenario where issuing commands did not carry network and serialization overhead. Perhaps it optimized Redis access unfairly by instead emulating a bulk command API as opposed to just mitigating protocol overhead.

Conclusions

It seems that the overhead of a general-purpose hash table “service” makes these options unsuitable for my needs. I’m honestly not sure whether it’s the broad feature sets, the JNI bridge, or the serialization/deserialization overhead of having a network server on top of the hash table, and as a practical matter I won’t pursue any further. All three problems can be circumvented by using a Java-resident hash table, so that is where we’ll go in the next post. -Xmx128g here we come!

Footnotes

[1] Per the Berkeley DB license I am including a link to Oracle’s site where one can find the full source of the database.

Big Memory, Part 2

Author’s Note: This is part 2 of a series of posts about my adventures in building a “large”, in-memory hash table. Part 1 introduced our goals and our approach to the task at hand. This post is a summary of some of the research I’ve done to familiarize myself with the problem.

Background

Our current approach to custom attribution models is very simple: pay Amazon thousands of dollars a month and “do it in the cloud” with Elastic MapReduce. In Hadoop, we partition the data by user, sort by time, identify their conversion events, and run an attribution model on these conversion-terminated “chains” of events. This is both costly and more cumbersome than we’d like.

A faster, cheaper, and arguably more transparent approach might be to pipe the live events to a service that could buffer and assemble these chains in memory and output “completed” chains (when a conversion event arrived) to a separate service to do the model computation.

We’ve come to the conclusion that a large in-memory hash table could be suitable to the task. Our specifications for said hash table are:

  • 1.5 billion 64-bit keys, uniformly and randomly distributed
  • values between 16 bytes and 16 kilobytes
  • deployed to one machine, all in-memory
  • sustained 200,000 updates per second over the course of a 14-hour “internet day”

Before jumping into building one of these, I thought I’d learn a bit about hash tables themselves.

The Research

Naturally, my research began with Wikipedia. The article on hash tables is a fairly comprehensive overview. From there, I read a handful of papers and articles to dig a little deeper. Below is a selection that helped me immensely.

Dynamic Hashing

  • Fagin’s Extendible Hashing – A Fast Access Method for Dynamic Files

    One of those seminal IBM papers that everyone seems to reference. It provides some interesting historical context for the introduction of dynamic hashing. The central thesis is an insight as impressive now as it was then: by separating the hash space from the storage addressing space, a hash table can be made incrementally extendible.

  • Seltzer and Yigit’s A New Hashing Package for UNIX

    An older overview of hashing algorithms for use in and out of main memory. Includes exquisite insight into the implementation concerns the authors took into account while building a general hashing library for UNIX.

  • Rathi, Lu, and Hedrick’s Performance Comparison of Extendible Hashing And Linear Hashing Techniques

    An old but very useful comparison of linear and extendible hashing that demonstrates certain periodic performance characteristics that may make one or the other unsuitable for your application.

  • Baeza-Yates and Soza-Pollman’s Analysis of Linear Hashing Revisited

    A theoretical analysis of different overflow control functions in linear hashing. Lots of math, but very clearly demonstrates the differences between global versus local overflow resolution functions and the impact of page sizes.

Hash Functions

  • Jenkins Hash

    A solid general-purpose hash whose source and documentation are a masterwork of explication and thoroughness.

  • (Minimal) Perfect Hashing: some theory, some practice

    For when you have all of your keys ahead of time and want 100% occupancy.

Collision Resolution

  • Pagh and Rodler‘s Cuckoo Hashing

    The original cuckoo hashing paper that compares cuckoo hashing against chaining methods and linear probing. Includes a nice section at the end recapping earlier hashing schemes and their historical context.

  • Erlingsson, Manasse, and McSherry’s A Cool and Practical Alternative to Hash Tables

    They present an empirical analysis of parametrized cuckoo hashing (in number of hash functions and bucket size). There’s an interesting bit at the end discussing dynamic expansion by adding bins and/or new hash functions.

  • Lehman and Panigrahy’s 3.5-Way Cuckoo Hashing for the Price of 2-and-a-Bit

    Describes a parametrized cuckoo hashing scheme with overlapping bins. Improves likely utilization by several percent.

  • Herlihy, Shavit, and Tzafrir’s Hopscotch Hashing

    Incorporates ideas from linear probing, cuckoo, and chaining techniques to avoid any of their individual pitfalls.

  • Panigrahy’s Efficient Hashing with Lookups in two Memory Accesses

    Provides a lucid graph-theoretic description of the problem of collision resolution. The solution proposed is all-theory, so don’t bother looking for a practical result therein.

  • Dietzfelbinger and Schellbach’s On Risks of Using Cuckoo Hashing with Simple Universal Hash Classes

    Discusses the unsuitability of certain linear and multiplicative hash functions for use with cuckoo hashing, using a graph theoretic argument.

Observations

After my academic explorations, I started to look for candidate data stores. In doing this, I began digging into the history of key-value stores and hash table implementations. A few things jumped out immediately:

  • Engineering effort seems to have been diverted from hash table development to distributed hash table development in the past 5 years.
  • Dynamic hashing innovation seems to have stopped at linear and extendible hashing.
  • No benchmarks I ran into exceeded 100M insertions. In fact, this benchmark is the only one that I found that exceeded 10M insertions.

The first seems obvious given the meteoric rise in data captured from the web and the relatively fixed decrease in RAM price and increase in density. With dozens or hundreds of terabytes of “online” data, one can hardly be blamed for steering toward mid-range commodity servers en masse. However, this approach comes at a cost: coordinating and maintaining a cluster of servers is no mean feat. In fact, I consider the consensus and commitment protocols that make said coordination possible significantly more challenging to understand, let alone implement, than any of the hashing subjects mentioned above. (Just look at the Wikipedia entry for Paxos!) Similarly, hot-node issues and debugging distributed systems strike me as being an order of magnitude harder to solve than the problem of building a “better” hash table. To be clear, I’m not arguing that these two things solve the same problems. Rather, given the choice of implementing a “huge”, performant hash table in memory or the algorithms to support a clustered solution, I would choose the former.

Despite the fact that the progress of Dynamo, Cassandra, Riak, and Voldemort took most of the headlines from 2005 to 2010, work still progressed on in-memory and disk-based non-distributed hash tables like Tokyo Cabinet and Kyoto Cabinet, Redis, and even the venerable Berkeley DB. (If you’re at all interested in the history of “NoSQL” data stores, you should check out this handy timeline.) That said, little in terms of novel hash table technology came from these efforts. As far as I know, BDB still uses a variant of linear hashing, Redis uses standard chaining, and Kyoto Cabinet falls back on std::unordered_map for its in-memory hash table.

This brings us to the other two points: indeed, how could traditional hash table development cease (practically) in light of the advances of DHTs? With “web-scale” data sets even a single node’s data storage needs should easily exceed anything seen 5 years prior, right? In fairness, some work has been done in the last few years to add concurrency to linear hashing as well as some work on optimizing hash table algorithms for modern cache hierarchies, but this doesn’t feel like the same kind of fundamental, basic result as, say, the introduction of extendible hashing.

I suppose the fact that there has been little visible engineering progress on this front is a testament to the quality of the existing algorithms and code. Either that or existing workloads have not yet exceeded that high watermark of 100M entries and we’re just waiting for the next jump to inspire new work in the field.

Next post: a roundup of existing candidates, benchmarks, and observations about their ease-of-use.

Big Memory, Part 1

Author’s note: This will be the first of a series of posts about my adventures in building a “large”, in-memory hash table. This first post will focus on a few philosophical notes that inspired this adventure. Research summaries, benchmarks, engineering notes, and so on will follow in future posts.

Memories

A few years ago, I recall being flabbergasted when I was told that Google had deployed a Perforce server with 256GB of RAM. Production machines at my then job had 16GB of RAM and I had certainly heard of 32- and 64-GB boxes, but 256GB struck me as an unthinkable amount. Our whole production database in RAM twice over! Wham! Pow! Smack!

Fast-forward to a month ago when I was told that we had two “leftover” boxes with a dozen cores and 256GB of RAM each. Impressive, yes, but a pleasant surprise at best. How the times have changed!

Brainstorming

The availability of the hardware got Rob and me thinking about novel things we could do in RAM. After some brainstorming, we came up with some basic tenets that should guide our exploration of the space.

We’re not in the business of saving lives.

We track ads online. Lots of them. Not all components of our system require perfect uptime and not all of our data has to be perfectly accurate. I think perhaps this scenario is more common than many are willing to admit or embrace, especially in the analytics community. My main beef with MapReduce is the a priori necessity of examining every last piece of data. Throw out what doesn’t matter! Live a little!

That said, “in-memory” does not mean unstable or lossy.

If your data fits in memory and you can easily reconstruct your data store by replaying the input stream, there’s really no reason to dismiss a volatile design. Hell, the extra speed you’re likely to pick up with an in-memory design can actually make your recoveries quicker than with a persistent solution. By the same token, writing a persistence layer for a data store is arguably the most complicated part. (See these two posts, for instance.) Mitigate the volatility of an in-memory solution by going back to the simplicity, transparency, and composability espoused by the Unix philosophy.

K.I.S.S.!

One thread does all the reading and writing, all in-memory, with only one I/O format: protobuf messages over 0MQ. See how fast that baby can go and how big she can grow before you get any fancier. Sure I could wave my hands about all kinds of fancy things like context switching, but that’s not the justification here. We’re really trying to test the limits of a relatively simple computation model, without working around it: how much can you do with a fast processor and gobs of RAM?

Benchmark with the future in mind.

Test at capacities that significantly exceed current needs. Push the envelope well past what roughly similar projects are doing. Stress it until it genuinely breaks.

Action

Since Rob has already started tackling our aggregation bottlenecks with the Summarizer (which he will surely write more about soon, nudge, nudge), I decided to try my hand at our custom attribution problem. We need a way to store user interaction streams and run attribution models over them more quickly than in Hadoop, but not quite “in real-time”.

Practically, the problem amounts to storing a billion or so randomly and uniformly distributed 64-bit integer keys with a Zipfian distribution of values ranging between 16 bytes and 16 kilobytes of structured data. The combination of an extremely heavy write workload, zero key locality, and no durability requirements points to an in-memory hash table.

Next post, I’ll cover the research I did and am doing to familiarize myself with the problem of large, in-memory hash tables.
