Big Memory, Part 4

Author’s Note: This is part 4 of a series of posts about my adventures in building a “large”, in-memory hash table. This post is a summary of some pure Java hash table libraries.

Background

In my last post, I discussed the results of rerunning Nick Welch’s benchmark of C/C++ hash tables. However, since we use the JVM in production, those results were purely academic. I need to either verify or strike down my suspicions in the Java world.

For this test, I’m only concerned with primitive long-long mappings. For those following along, this is a departure from the methodology used in part 3; the “Mapping types” section below explains why.

Goals

I was hoping to replicate some fraction of the amazing throughput seen in the C/C++ benchmark. (A million inserts per second wouldn’t be a bad start!) More importantly, I sought to learn something about the relative performance of different schemes on the JVM. Would the bit-masking used in power-of-2-sized tables prove faster than the modulus operator used by prime-sized tables? Would the extra pointer hops of an externally chained implementation be more or less costly than the probe chains of open addressing schemes? Would Java even be able to handle a 128GiB hash table? What effect would GC have? What implementation tricks could be learned?

I’ll save you some time right now if you’re actually looking for the exact answers to all those questions: I don’t have most of them. I’ll hopefully get to them in later posts. The available, open-source primitive hash maps simply did not cover enough of the algorithm space to reliably demonstrate the isolated effect of a particular design choice such as power-of-2- vs prime-sized bucket arrays.

Instead, this post’s goals are more modest: I wish to answer the following questions:

  • What is the effect of load factor on the performance of these maps?
  • Given the out-of-the-box implementations available, which designs are the most appealing?
  • What optimizations can be applied to their implementation, given my requirements?
  • Are any of these suitable for my needs right out of the box?

Candidates

Lib.                Ver.      Rel. Date  Class                      Coll. Res.         No. Buckets  Hash Fun.    Bytes/Entry
mahout-collections  1.0.0     2010       OpenLongLongHashMap        double hashing     prime        [1]          17
Trove               3.0.1     2011       TLongLongHashMap           double hashing     prime        [2]          17
PCJ                 1.2       2003       LongKeyLongOpenHashMap     double hashing     prime        [2]          17
fastutil            6.4.1     2011       Long2LongOpenHashMap       linear probing     2^n          MurmurHash3  17
hppc                0.4.1     2011       LongLongOpenHashMap        linear probing     2^n          MurmurHash3  17
PCJ                 1.2       2003       LongKeyLongChainedHashMap  external chaining  prime        [2]          48
JDK                 1.6.0_27  2011       HashMap<Long, Long>        external chaining  2^n          [3]          100 [4]

[1] value ^ (value >> 32)
[2] value ^ (value >>> 32)
[3] h ^= (h >>> 20) ^ (h >>> 12); return h ^ (h >>> 7) ^ (h >>> 4);
[4] See “An Aside on HashMap<Long, Long> and External Chaining” below.

Setup

  • 2 warmup runs, 2 observation runs for each combination of (Class, Load Factor, Size Hint)
  • CentOS 5.4, dual Xeon X7542 (2.67GHz, 6 cores each), 256 GB of RAM
  • 4.72 billion 64-bit integer keys (976 million unique), read from disk and inserted with random long values
  • baseline read/parse rate was roughly 10 million records per second
  • JVM flags: -server -Xmx128g -XX:+PrintGCTimeStamps
  • Code is on GitHub, added to the benchmark used in part 3. See the section of the README that discusses the “libraries” comparison.

The runs completed are listed below, as are some odd failures that occurred in testing.

The test pushed several hash tables to their breaking point, since they either imposed artificial limits on the backing array length (usually 2^30 ~ 1.074B) or they ran into Java’s built-in array-length limit of Integer.MAX_VALUE (2^31 - 1 ~ 2.147B). Extremely high load factors were required to even get some tables to finish the benchmark, which I understand is suboptimal.

In general, I tried to run three different maximum load factors where possible: 0.50, 0.75, and 0.91. The first two are common default load factors for the hash tables in question, and the last is the lowest load factor such that 976,000,000/loadFactor < 2^30. What’s important to note here is that I actively avoided resizes during the runs by setting an initial capacity corresponding to the load factor. The number of “usable” buckets was held constant (at 976M) while the number of empty buckets varied in order to test the effect of load factor on performance.
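
As a quick sanity check on that 0.91 figure, the arithmetic behind it looks like this (a sketch; the class and variable names are mine):

    // Why 0.91 is the effective floor on the load factor (illustrative sketch).
    public class LoadFactorFloor {
        public static void main(String[] args) {
            long uniques = 976000000L;   // unique keys to be stored
            long maxBuckets = 1L << 30;  // typical cap on the backing array (~1.074B buckets)

            // The cap bounds the load factor from below: any lower and the required
            // bucket count (uniques / loadFactor) would exceed 2^30.
            double minLoadFactor = (double) uniques / maxBuckets;
            System.out.println(minLoadFactor);  // ~0.909, rounded up to 0.91
        }
    }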

Results

The broad strokes are that all of these libraries live in the 1M to 2M inserts-per-second range, with performance degrading as they approach their maximum load factors, some more than others. The drastically different bottom quartiles are what stand out to me.

Worst cases for the chained maps are close to full stops, whereas the open addressing maps’ throughputs fall by about 25%. HashMap<Long, Long> and LongKeyLongChainedHashMap clearly fall apart in a serious way on a regular basis, roughly every 50 million new keys. It’s not clear to me that the code would allow resizes/rehashes given the initial capacity provided, so my guess was that these were stop-the-world GCs.

I reran the test with -XX:+PrintGCTimeStamps -verbose:gc to verify, and 78% of the real time of the two observation runs was consumed by GC, with regular pauses up to 100 seconds. For contrast, Long2LongOpenHashMap‘s GCs amounted to less than 1% of real time with no pause greater than 0.2 seconds (with the exception of the Full GC that occurred between observation runs). I computed those percentages by dividing the sum of the reported GC durations by the ‘real’ value from time(1).

It makes sense that GC pauses would uniquely affect the externally-chained implementations, since they produce so many objects. (All the open addressing implementations create only a few handfuls of objects per map instance.) In any case, that problem alone disqualifies them for my uses: such significant dips in throughput are unacceptable. Indeed, any externally-chained map will be suspect at these sizes because of the need to create a new object per entry. Perhaps different collectors can mitigate this problem, but given the extremely sound performance of the open addressing implementations, I’m inclined to table the externally chained maps for a while. The operational advantage of simply not worrying about tuning GC is huge for me.

The two most interesting maps proved to be HPPC’s LongLongOpenHashMap and fastutil’s Long2LongOpenHashMap. Both were blazing fast and quite stable, with minor throughput degradation only after 700 million keys. LongKeyLongOpenHashMap, TLongLongHashMap, and OpenLongLongHashMap began their degradation right from the get-go, dropping by over 50% over the course of the test. fastutil’s implementation’s variance was also extremely low, which is a very admirable quality from an operational perspective.

It’s fascinating that a design as straightforward as power-of-two bucket counts and linear probing could be so amazingly effective. It’s the classic example of a ‘flawed’ open addressing scheme: it’s susceptible to primary clustering, the number of probes shoots up as you go past 90% occupancy, and it’s unforgiving of mediocre hash functions.

After thinking about it, though, it makes great sense that it performed so well here. Both favorites use a byte[] to store the state of each bucket, which means that each 64-byte cache line has on average the next 32 bucket statuses pre-cached. This means that walking the linear probe chain is effectively free for the next 32 buckets (compared to going to L1 or, heaven forbid, main memory). At 90% occupancy, the average successful lookup requires 5.5 probes while unsuccessful lookups require 50 probes. This means that on average we’re fetching at most two cache lines for the probing. Compare this to double hashing, which by design is meant to probe far from the original hash, which requires a line fetch per probe. At 2.5/10 average probes for successful/unsuccessful lookups, you’ll be doing far more fetches with double hashing than with linear probing. If you roll your own status array on top of a long[] and some masking tricks, you could potentially fit 256 2-bit statuses on each cache line, further improving linear probing’s efficiency. ($ cat /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size to get your cache line size in bytes.)
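
As a rough illustration of that last idea, here is a minimal sketch of a 2-bit status array packed into a long[]. The encoding and names below are mine, not something taken from fastutil or HPPC:

    // Minimal sketch of a packed 2-bit bucket-status array (encoding and names are mine).
    // 32 statuses fit in each long, so a 64-byte cache line holds 256 of them.
    public final class PackedStatus {
        static final int EMPTY = 0, OCCUPIED = 1, REMOVED = 2;  // 2 bits per bucket

        private final long[] words;

        public PackedStatus(int buckets) {
            words = new long[(buckets + 31) / 32];  // 32 statuses per 64-bit word
        }

        public int get(int bucket) {
            int shift = (bucket & 31) << 1;         // 2 * (bucket mod 32)
            return (int) ((words[bucket >>> 5] >>> shift) & 3L);
        }

        public void set(int bucket, int status) {
            int word = bucket >>> 5;                // bucket / 32
            int shift = (bucket & 31) << 1;
            words[word] = (words[word] & ~(3L << shift)) | ((long) status << shift);
        }
    }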

If you combine linear probing with a good hash function, like MurmurHash3, you can avoid primary clustering in practice. What I find curious is that so many of the double-hashing libraries use such a weak hash function. Sure, it’s only 2 ops, but is the hash function the real bottleneck in their code? Can they really not afford the extra 6 ops? I doubt it. I’m going to add some instrumentation to a few of these implementations to produce probe-length histograms and verify what I’ve speculated. Stay tuned.
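
For reference, here is roughly the gap in work being discussed: the two-op xor-fold used by the double-hashing libraries (footnote [2] above) next to MurmurHash3’s 64-bit finalizer. The class and method names are mine; the constants are MurmurHash3’s published fmix64 step:

    public final class Hashes {
        // The ~2-op hash used by the double-hashing libraries (footnote [2] above).
        static int weakHash(long value) {
            return (int) (value ^ (value >>> 32));
        }

        // MurmurHash3's 64-bit finalizer ("fmix64"): a few extra multiplies and
        // shifts, but far better bit mixing.
        static long murmur3Fmix64(long k) {
            k ^= k >>> 33;
            k *= 0xff51afd7ed558ccdL;
            k ^= k >>> 33;
            k *= 0xc4ceb9fe1a85ec53L;
            k ^= k >>> 33;
            return k;
        }
    }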

Conclusions

In order to meet the requirements I’ve enumerated in my previous posts, I’m going to need more than 976 million occupied buckets. I’m going to need 1.5 billion, and a fraction of that again for empty buckets. Unfortunately, 2^30 is just a bit over 1.073 billion, and well short of 1.5 billion. This means that I can’t use one of the two favorites from my testing (fastutil’s Long2LongOpenHashMap) out of the box. Luckily, HPPC’s implementation does not have an artificial cap on the maximum number of buckets, so moving forward that will be a viable out-of-the-box option. (Edit: It has the same 2^30 cap as fastutil.) In order to complete my testing, I’ll likely dig up another month’s worth of data and run HPPC and modified variants of the other open addressing hash tables up to about 2 billion keys.


Mapping types

In part 3 of this series of posts, I tested out several hash table “services” to see if any of them would give me the end-to-end storage functionality I needed for this project. In that test, I used an append semantic for the performance comparison because I was hoping that one of those “services” would solve all of my storage needs in one shot. Ideally, such a service would provide a fast algorithm, efficient memory management, and solid monitoring facilities. Sadly, all of them failed on the first count and I was forced to move on to a more hands-on approach.

This means worrying about the actual storage implementation and its interaction with the JVM’s GC. If I take the naive route and simply allocate a new long[] for every user’s activity, the memory allocation time alone could cripple the application. It has become clear that storing 1.5 billion long-lived long[]s (about 80GB’s worth) on a high-occupancy heap will cause unacceptable GC pauses. As such, I’ll be storing the user activity chains in a slab allocator (of my own construction) and using the hash table to translate between user IDs and their storage reference. Thus, I’m transitioning from benchmarking appends to puts, and I’m now mapping from long to long instead of long to long[]. Since the slab allocator will return roughly random longs as storage references, I’ve simulated the values of the hash table with a call to Random#nextLong().
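
Concretely, the timed inner loop now looks roughly like this. This is a sketch of the shape of the benchmark, not the actual harness code; the interface is a stand-in for whichever primitive map is under test:

    import java.util.Random;

    // Rough shape of the timed insert loop (illustrative, not the actual harness).
    public class PutBenchmarkSketch {
        // Stand-in for whichever primitive long-to-long map is under test.
        interface LongLongMap {
            long put(long key, long value);
        }

        static void runInserts(LongLongMap map, long[] userIds) {
            Random random = new Random();
            for (long userId : userIds) {
                // Values simulate slab-allocator storage references with random longs.
                map.put(userId, random.nextLong());
            }
        }
    }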

An Aside on HashMap<Long, Long> and External Chaining

Unlike the other candidates, HashMap<Long, Long> stores boxed keys and values, wrapping each pair in an Entry object.

    public final class Long extends Number implements Comparable<Long> {
        ...
        private final long value;
        ...
    }

    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        Entry<K,V> next;
        final int hash;
        ...
    }

On a 64-bit JVM with a starting heap greater than 32GB, each of our 976 million unique users (wrapped in an Entry instance) requires:

  • 8 bytes for the pointer to the Entry from the backing Entry[],
  • 16 bytes for the object header of the Entry,
  • 8 bytes for the pointer to final K key,
  • 8 bytes for the pointer to V value,
  • 8 bytes for the pointer to Entry next,
  • 4 bytes for int hash,
  • 16 bytes for the object header of final K key,
  • 8 bytes for the underlying long of final K key,
  • 16 bytes for the object header of V value, and
  • 8 bytes for the underlying long of V value

for a grand total of 100 bytes per unique key. This means that at the default 0.75 load factor, you’re looking at 97,600,000,000 bytes for the used buckets and 2,600,000,000 bytes for the empty buckets. 93 GiB later and I haven’t even stored the events themselves. Since boxing has its own CPU costs, I’m not expecting much in terms of performance or memory footprint from HashMap. It’s included here mostly as a baseline, but also to demonstrate its unsuitability for this scale of work. The same goes for LongKeyLongChainedHashMap, minus the boxing. Even at a more modest 48 bytes per entry, it’s still using 55 GiB plus the cost of maintaining a billion items on the heap. External chaining is simply unviable at these sizes.

Runs

For each load factor, the initial capacity of the table was hinted to be at least 976,000,000/loadFactor in order to avoid resizes, where possible. However, Trove and fastutil do this computation themselves: their constructors take the expected number of unique elements rather than a lower bound on the initial bucket count, so they were given 976,000,000 as their starting size hint regardless of load factor. As a last resort, 976,000,000 was used as the size hint when a map could or would not finish a test run.
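
To make the distinction concrete, here is a sketch of how the size hints in the table below were derived. The “IC”/“EE” split is generalized; these are not the libraries’ exact constructor signatures:

    // How the size hints below were derived (sketch; not the libraries' exact APIs).
    public class SizeHints {
        public static void main(String[] args) {
            long uniques = 976000000L;

            // "IC" constructors take a lower bound on the bucket count, so the
            // division by the load factor happens on the caller's side.
            for (double lf : new double[] {0.91, 0.75, 0.50}) {
                long initialCapacity = (long) Math.ceil(uniques / lf);
                // Matches the size hints in the table below, to within rounding.
                System.out.printf("lf=%.2f -> %,d buckets requested%n", lf, initialCapacity);
            }

            // "EE" constructors (Trove, fastutil) take the expected element count and
            // divide by their load factor internally, so the hint is 976M regardless.
            long expectedElements = uniques;
            System.out.println(expectedElements);
        }
    }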

The “Constructor” column indicates whether the constructor expects the initial capacity (“IC”) or the expected elements count (“EE”).

Lib.                Class                      Constructor  LF    Size Hint
mahout-collections  OpenLongLongHashMap        IC           0.91  1,073M
                                                            0.75  1,302M
                                                            0.50  1,952M
Trove               TLongLongHashMap           EE           0.91  976M
                                                            0.75  976M
                                                            0.50  976M
PCJ                 LongKeyLongOpenHashMap     IC           0.91  1,073M
                                                            0.75  1,302M
                                                            0.50  1,952M
fastutil            Long2LongOpenHashMap       EE           0.91  976M
hppc                LongLongOpenHashMap        IC           0.91  1,073M
PCJ                 LongKeyLongChainedHashMap  IC           0.91  976M
JDK                 HashMap<Long, Long>        IC           0.91  976M
                                                            0.75  1,302M
                                                            0.50  1,952M

Oddities

When running initial tests, the configuration (TLongLongHashMap, 0.50, 976M) forced a catastrophic six-hour-long rehash at roughly 1.78 billion records in. At first I suspected a pathological probe chain followed by a rehash; however, I could not reproduce this particular case despite using identical data and identical runtime parameters. This is utterly horrifying from an operational perspective, but I’m willing to write it off as some artifact of the machine or perhaps the screen session. In another initial test, the configuration (LongKeyLongChainedHashMap, 0.50, 976M) crashed the Parallel Mark-Sweep garbage collector thread. Again, I couldn’t reproduce the problem despite my best efforts, so I’m writing it off as an artifact. It could have been the 128GB heap, it could have been the hundreds of millions of objects, it could have been anything. Sadly, I don’t even have a reproducible test case to give the JVM team, so I don’t foresee much progress on the bug.

Very large heaps in Java

I’ve been wanting to write up a juicy post on how we deal with very large heaps in Java to reduce GC pauses. Unfortunately I keep getting sidetracked getting the data together. The latest bump in the road is due to a JVM bug of sorts.

Backstory: Todd Lipcon’s Twitter post pointed me to the JVM option -XX:PrintFLSStatistics=1 as a way to get some good information about heap fragmentation. He was even kind enough to provide the Python and R scripts! I figured that it would be a few minutes of fiddling and I’d have some good data for a post. No such luck. Our JVM GC/heap options are -XX:+UseConcMarkSweepGC -Xms65g -Xmx65g. When -XX:PrintFLSStatistics=1 is added to those, the following output appears:

Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: -1824684952
Max   Chunk Size: -1824684952
Number of Blocks: 1
Av.  Block  Size: -1824684952
Tree      Height: 1

A few seconds of digging into the Hotspot source reveals:

void BinaryTreeDictionary::reportStatistics() const {
  verify_par_locked();
  gclog_or_tty->print("Statistics for BinaryTreeDictionary:\n"
         "------------------------------------\n");
  size_t totalSize = totalChunkSize(debug_only(NULL));
  size_t    freeBlocks = numFreeBlocks();
  gclog_or_tty->print("Total Free Space: %d\n", totalSize);
  gclog_or_tty->print("Max   Chunk Size: %d\n", maxChunkSize());
  gclog_or_tty->print("Number of Blocks: %d\n", freeBlocks);
  if (freeBlocks > 0) {
    gclog_or_tty->print("Av.  Block  Size: %d\n", totalSize/freeBlocks);
  }
  gclog_or_tty->print("Tree      Height: %d\n", treeHeight());
}

in hotspot/src/share/vm/gc_implementation/concurrentMarkSweep/binaryTreeDictionary.cpp. (“%d” just doesn’t cut it with a “long”’s worth of data.) I filed a Hotspot bug, so hopefully it will be fixed in some release in the not-too-distant future.

I can work around this but it has slowed down my getting to the juicy blog post. Stay tuned!
