Set Operations On HLLs of Different Sizes

Introduction

Here at AK, we’re in the business of storing huge amounts of information in the form of 64-bit keys. As shown in other blog posts and in the HLL post by Matt, one efficient way of getting an estimate of the size of the set of these keys is by using the HyperLogLog (HLL) algorithm. There are two important decisions one has to make when implementing this algorithm: how many bins to use, and the maximum value to allow in each bin. The amount of space the structure takes up is then the number of bins times the log of the maximum value allowed in each bin. For this post we’ll ignore the second consideration and focus instead on the number of bins. The relative error of an estimate is given approximately by 1.04/√b, where b is the number of bins, so there is a tradeoff between the accuracy of the estimate and the amount of space you wish to dedicate to it. Certainly, different projects will have requirements that call for different numbers of bins.
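
To make the tradeoff concrete, here’s a quick back-of-the-envelope look at that error formula for a few bin counts (a minimal sketch; the bin counts are chosen purely for illustration):

from math import sqrt

# Relative error of an HLL estimate is approximately 1.04 / sqrt(b),
# where b is the number of bins.
for exponent in (4, 8, 10, 14):
    b = 2 ** exponent
    print(f"2^{exponent} = {b} bins -> ~{1.04 / sqrt(b):.2%} relative error")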

The HLL algorithm natively supports the union operation.  However, one requirement for this operation is that the HLLs involved are of the same size, i.e. have the same number of bins.  In practice, there’s no guarantee that HLLs will satisfy this requirement.  In this post, I’ll outline the method by which we transform an HLL with a certain number of bins into one with fewer bins, allowing us to perform set operations on any two HLLs, regardless of size.
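
As a refresher, the native union of two same-sized HLLs is just the bin-wise maximum of their values. A minimal sketch, assuming each HLL is represented simply as a list of bin values:

def hll_union(bins_a, bins_b):
    # Native union: requires the same number of bins, and takes the
    # bin-wise maximum of the two register arrays.
    if len(bins_a) != len(bins_b):
        raise ValueError("native union requires HLLs with the same number of bins")
    return [max(a, b) for a, b in zip(bins_a, bins_b)]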

Key Processing

As discussed in the HyperLogLog paper, to get a cardinality estimate with an HLL with 2^n bins on a data set we pass over each key, using the placement of the rightmost “1” to determine the value of the key and the next n digits to the left to determine in which bin to place that value.  In each bin, we only store the maximum value that that bin has “seen.”
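
Here is a minimal sketch of that bin/value extraction, following the convention just described (the rightmost “1” gives the value, the next n bits to its left give the bin index); the function names are just for illustration:

def process_key(key, n):
    # Assumes a nonzero key. The value is the position (1-indexed) of the
    # rightmost "1"; the bin index is the next n bits to its left.
    value = (key & -key).bit_length()
    bin_index = (key >> value) & ((1 << n) - 1)
    return bin_index, value

def add_key(bins, key, n):
    # Each bin only stores the maximum value it has "seen."
    i, v = process_key(key, n)
    bins[i] = max(bins[i], v)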

Below I’ve shown how two HLLs (one of size 2^3 and one of size 2^4) process two different keys.  Here, the keys have the same value, because the purpose of this example is to illustrate how the location in which we place the key changes when the HLL has twice the number of bins.

HLL Folding: Simple Bin/Reg Example

Above, the keys which are attributed to the fifth and thirteenth bins in the larger HLL would both have been attributed to the fifth bin in the smaller HLL.  Hence, unraveling the algorithm a bit, we see that the values which are seen by the fifth and thirteenth bins in the larger HLL would have been seen by the fifth bin in the smaller HLL had they run on the same dataset.  Because of this, in the case where the two algorithms estimate the same dataset, the value stored in the fifth bin in the smaller HLL is the maximum of the values stored in the fifth and thirteenth bins in the larger HLL.

Folding HLLs

What happened above is not an isolated phenomenon.  In general, if one uses the HLL algorithm twice on a dataset, once with 2^(n+1) bins and once with 2^n bins, the value in the kth bin of the smaller HLL will be the maximum of the values in the kth and (k + 2^n)th bins of the larger HLL.  As a result, if given an HLL of size 2^(n+1) that one wishes to transform into an HLL of size 2^n, one can simply fold the HLL by letting the value of the kth bin in the folded HLL be given by the maximum of the values in the kth and (k + 2^n)th bins of the original HLL.

In fact, we can fold any HLL an arbitrary number of times.  Repeating this process, we can take an HLL of size 2^n to an HLL of size 2^m for any m which is less than or equal to n.  Hence if we wish to perform a set operation on two HLLs of different sizes, we can simply fold the larger HLL repeatedly until it is the same size as the smaller HLL.  After this, we can take unions and intersections as we wish.
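
A minimal sketch of the folding step, again assuming an HLL is represented as a list of bin values whose length is a power of two:

def fold_once(bins):
    # Fold 2^(n+1) bins down to 2^n bins: the kth folded bin is the max of
    # the kth and (k + 2^n)th bins of the original.
    half = len(bins) // 2
    return [max(bins[k], bins[k + half]) for k in range(half)]

def fold_to(bins, target_size):
    # Repeatedly fold until we reach target_size bins (a smaller power of two).
    while len(bins) > target_size:
        bins = fold_once(bins)
    return bins

Once the larger HLL has been folded down to the smaller one’s size, the bin-wise union sketched earlier applies directly.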

Folding – An Example

Below, we show a simple example of how folding works.  Here we have an HLL with 2^3 bins which we fold into an HLL with 2^2 bins.  In the diagram, I’ve torn an HLL of size 2^3 in half and placed the strips side by side to emphasize how we line up bins and take maximums in the folding process.  Notice that the values in the bins of the folded HLL are the maximum of the relevant bins in the larger HLL.

HLL Folding Example

Advantages and Drawbacks

This technique gives us the flexibility to perform set operations on any two HLLs regardless of the number of bins used in the algorithms.  Its usefulness in this regard is offset a bit by the fact that the accuracy of the resulting estimate is limited by the accuracy of the least accurate HLL involved.  For example, an HLL of size 2^10 will have accuracy roughly 2^3 times better than an HLL of size 2^4 (to see where I’m getting these numbers from, you’ll have to read the paper!).  Unfortunately, if we combine these with a set operation, our resulting estimate will have the same accuracy as the smaller HLL.

Summary

The HyperLogLog algorithm supports set operations in a nice way only if the number of bins used is fixed.  Using folding, one can correct for this by reducing the size of the larger HLL to the size of the smaller.  The cost of this convenience is in the accuracy of the estimates after the folding process.  In my next post, I’ll explore some methods of performing the set operations without this loss of accuracy.

No BS Data Salon #3

On Saturday Aggregate Knowledge hosted the third No BS Data Salon on databases and data infrastructure. A handful of people showed up to hear Scott Andreas of Boundary talk about distributed, streaming service architecture, and I also gave a talk about AK’s use of probabilistic data structures.

The smaller group made for some fantastic, honest conversation about the different approaches to streaming architectures, the perils of distributing analytics workloads in a streaming setting, and the challenges of pushing scientific and engineering breakthroughs all the way through to product innovation.

We’re all looking forward to the next event, which will be in San Francisco, in a month or two. If you have topics you’d like to see covered, we’d love to hear from you in the comments below!

As promised, I’ve assembled something of a “References” section to my talk, which you can find below.

(Hyper)LogLog

Random

  • Sean Gourley’s talk on human-scale analytics and decision-making
  • Muthu Muthukrishnan’s home page, where research on streaming in general abounds
  • A collection of C and Java implementations of different probabilistic sketches

Attendees of the third No BS Data Salon

Sketching the last year

Sketching is an area of big-data science that has been getting a lot of attention lately, and I personally am very excited about this. Sketching analytics has been a primary focus of our platform and one of my personal interests for quite a while now. The field has been slow to unfold (thanks, Strata, for declining our last two proposals on sketching talks!), but clearly the tide is turning. In fact, our summarizer technology, which relies heavily on our implementation of Distinct Value (DV) sketches, has been in the wild for almost a year now (and, obviously, we were working on it for many months before that).

Fast, But Fickle

The R&D of the summarizer was fun but, as with most technical implementations, it’s never as easy as reading the papers and writing some code. The majority of the work we have done to make our DV sketches perform in production has nothing to do with the actual implementation.  We spend a lot of time focusing on how we tune them, how we feed them, and how we make them play well with the rest of our stack.

Likewise, setting proper bounds on our sketches is an ongoing area of work for us and has led down some very interesting paths.  We have gained insights into not just high-level business problems, but very low-level, watchmaker-type details as well.  Hash function behaviors and stream entropy, alongside the skewness of the data sets themselves, are areas we are constantly looking into to improve our implementations. This work has helped us refine and find optimizations around storage that aren’t limited to the sketches themselves, but extend to the architecture of the system as a whole.

Human Time Analytics

Leveraging DV sketches as more than just counters has proven unbelievably useful for us. The DV sketches we use provide arbitrary set operations. This comes in amazingly handy when our customers ask, “How many users did we see on Facebook and on AOL this month that purchased something?” You can imagine how far these types of questions go in a real analytics platform. We have found that DV counts alongside set-operation queries satisfy a large portion of our analytics platform’s needs.

Using sketches for internal analytics has been a blast as well. Writing implementations and libraries in scripting languages enables our data-science team to perform very cool ad-hoc analyses faster and in “human-time”. Integrating DV sketches as custom data-types into existing databases has proven to be a boon for analysts and engineers alike.

Reap The Rewards

Over the course of the year that we’ve been using DV sketches to power analytics, the key takeaways we’ve found are: be VERY careful when choosing and implementing sketches; and leverage as many of their properties as possible.  When you get the formula right, these are powerful little structures. Enabling in-memory DV counting and set operations is pretty amazing when you think of the amount of data and analysis we support. Sketching as an area of big-data science seems to have (finally!) arrived and I, for one, welcome our new sketching overlords.

Streaming Algorithms and Sketches

Here at Aggregate Knowledge we spend a lot of time thinking about how to do analytics on a massive amount of data. Rob recently posted about building our streaming datastore and the architecture that helps us deal with “big data”. Given a streaming architecture, the obvious question for the data scientist is “How do we fit in?”. Clearly we need to look towards streaming algorithms to match the speed and performance of our datastore.

A streaming algorithm is generally defined as one that uses finite memory – significantly smaller than the data presented to it – and processes the input in one pass. Streaming algorithms start pretty simple, for instance counting the number of elements in the stream:

counter = 0
for event in stream:
    counter += 1

While eventually counter will overflow (and you can be somewhat clever about avoiding that), this is way better than the non-streaming alternative:

# The non-streaming alternative: materialize the entire stream in memory first.
elements = list(stream)
counter = len(elements)

Pretty simple stuff. Even a novice programmer can tell you why the second method is way worse than the first. You can get more complicated and keep the same basic approach – computing the mean of a floating-point number stream is almost as simple: keep around counter as above, add a running total_sum that you update with total_sum += new_value, and divide at the end. Now that we’re feeling smart, what about the quantiles of the stream? Ah! Now that is harder.
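
As a quick sketch of that running-mean idea (variable names are purely illustrative):

# Streaming mean: one counter and one running total, updated per element.
counter = 0
total_sum = 0.0
for value in stream:
    counter += 1
    total_sum += value
mean = total_sum / counter if counter else 0.0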

While it may not be immediately obvious, you can prove (as Munro and Paterson did in 1980) that computing exact quantiles of a stream requires memory that is at least linear with respect to the size of the stream. So, we’re left approximating a solution to the quantiles problem. A first stab might be sampling, where you keep, say, every 1000th element. While this isn’t horrible, it has its downsides – if your stream is infinite, you’ll still run out of space. It’s a good thing there are much better solutions. One of the first and most elegant was proposed by Cormode and Muthukrishnan in 2003, where they introduce the Count-Min sketch data structure. (A nice reference for sketching data structures can be found here.)

The Count-Min sketch works much like a Bloom filter. You compose k empty tables and k hash functions. For each incoming element we simply hash it through each function and increment the appropriate cell in the corresponding table. To find out how many times we have historically seen a particular element we simply hash our query and take the MINIMUM value that we find across the tables. In this way we limit the effects of hash collisions, and clearly we balance the size of the Count-Min sketch against the accuracy we require for the final answer. Here’s how it works:
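
A minimal sketch of that update/query logic (the table width, depth, and salted-hash scheme here are illustrative choices, not a production implementation):

import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.tables = [[0] * width for _ in range(depth)]

    def _index(self, item, i):
        # One cheap way to get k "different" hash functions: salt a single hash.
        digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item, count=1):
        # Increment the corresponding cell in each of the k tables.
        for i in range(self.depth):
            self.tables[i][self._index(item, i)] += count

    def estimate(self, item):
        # Take the MINIMUM across tables to limit the effect of collisions.
        return min(self.tables[i][self._index(item, i)] for i in range(self.depth))

For each element in the stream you call add(element); to ask how many times you’ve seen x, you call estimate(x).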

The Count-Min sketch is an approximation to the histogram of the incoming data; in fact, it’s really only probabilistic when hashes collide. In order to compute quantiles we want to find the “mass” of the histogram above/below a certain point. Luckily, Count-Min sketches support range queries of the type “select count(*) where val between 1 and x;”. Now it is just a matter of finding the quantile of choice.

Actually finding the quantiles is slightly tricky, but not that hard. You basically have to perform a binary search with the range queries. So to find the first decile value, supposing you kept around the number of elements you have seen in the stream, you would binary search through values of x until the count returned by the range query is 1/10 of the total count.
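
A sketch of that binary search, assuming a helper range_count(x) that returns the (approximate) number of stream elements less than or equal to x; the helper, the total count, and the value bounds are all assumptions for illustration:

def quantile(q, total, range_count, lo=0, hi=2**32):
    # Binary search the value domain for the smallest x whose (approximate)
    # rank range_count(x) reaches q * total.
    target = q * total
    while lo < hi:
        mid = (lo + hi) // 2
        if range_count(mid) < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

# e.g. the first decile: quantile(0.10, counter, my_range_count)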

Pretty neat, huh?