*Author’s Note: This post is related to a few previous posts dealing with the HyperLogLog algorithm. See Matt’s overview of HLL, and see this post for an overview of “folding” or shrinking HLLs in order to perform set operations. It is also the first in a series of three posts on doubling the size of HLLs – the next two will be about set operations and utilizing additional bits of data, respectively.*

### Overview

In this post, we explore the error of the cardinality estimate of an HLL whose size has been doubled using several different fill techniques. Specifically, we’re looking at how the error changes as additional keys are seen by the HLL.

#### A Quick Reminder – Terminology and Fill Strategies

If we have an HLL of size *2^{n}* and we double it to be an HLL of size *2^{n+1}*, we call two bins “partners” if their bin numbers differ by *2^{n}*. For example, in an HLL doubled to be size *2^{11}*, the bins *0* and *1024* are partners, as are *1* and *1025*, etc. The *Zeroes* doubling strategy fills in the newly created bins with zeroes. The *Concatenate* strategy fills in the newly created bins with the values of their partner bins. *MinusTwo* fills in each bin with two less than its partner bin’s value. *RE* fills in the newly created bins according to the empirical distribution of each bin.
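As a rough illustration, the four fill strategies can be sketched in Python as follows. This is a hypothetical sketch, not the actual implementation: registers are held in a plain list, and the *RE* rule is simplified to resampling from the current registers rather than from a separately fitted empirical distribution.

```python
import random

def double_hll(bins, strategy, rng=random.Random(42)):
    """Double an HLL's register list, filling the new bin j + m (the
    "partner" of bin j) according to the chosen strategy."""
    m = len(bins)
    if strategy == "zeroes":
        new = [0] * m
    elif strategy == "concatenate":
        new = list(bins)  # copy each partner bin's value
    elif strategy == "minus_two":
        new = [max(b - 2, 0) for b in bins]  # partner's value minus two
    elif strategy == "re":
        # simplified: draw from the empirical distribution of current registers
        new = [rng.choice(bins) for _ in range(m)]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return bins + new
```

For example, `double_hll([3, 5], "concatenate")` returns `[3, 5, 3, 5]`, while the *Zeroes* rule returns `[3, 5, 0, 0]`.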

### Some Sample Recovery Paths

Below, we ran four experiments to check recovery time. Each experiment consisted of running an HLL of size *2^{10}* on 500,000 unique hashed keys (modeled here using a random number generator), doubling the HLL to be size *2^{11}*, and then running 500,000 more hashed keys through the HLL. Below, we have graphs showing how the error decreases as more keys are added. Both graphs show the same data (the only difference being the scale on the y-axis). We have also graphed “Large,” an HLL of size *2^{11}*, and “Small,” an HLL of size *2^{10}*, which are shown only for comparison and are never doubled. One thing to note about the graphs is that the error is relative.

Notice that *Concatenate* and *Zeroes* perform particularly poorly. Even after 500,000 extra keys have been added, they still don’t come within 5% of the true value! For *Zeroes*, this isn’t too surprising. Clearly the initial error of *Zeroes*, that is the error immediately after doubling, should be high. A quick look at the harmonic mean shows why this occurs. If a single bin has a zero as its value, the harmonic mean of the values in the bins will be zero. Essentially, the harmonic mean of a list always tends towards the lowest elements of the list. Hence, even after all the zeroes have been replaced with positive values, the cardinality estimate will be very low.
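To see numerically how strongly the harmonic mean is dragged toward its smallest element, consider this small example:

```python
def harmonic_mean(xs):
    # the harmonic mean tends to 0 as any element tends to 0
    if any(x == 0 for x in xs):
        return 0.0
    return len(xs) / sum(1.0 / x for x in xs)

print(harmonic_mean([10, 10, 10, 10]))  # ≈ 10: all elements equal
print(harmonic_mean([10, 10, 10, 1]))   # ≈ 3.08: one small element dominates
print(harmonic_mean([10, 10, 10, 0]))   # 0.0: a single zero kills the mean
```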

On the other hand, a more surprising result is that *Concatenate* gives such a poor guess. To see this we need to look at the formula for the estimate again. The formula for the cardinality estimate is

$$E = \alpha_m \, m^2 \left( \sum_{j=1}^{m} 2^{-M_j} \right)^{-1}$$

where $M_j$ is the value in the $j^{th}$ bin, $m$ is the number of bins, and $\alpha_m$ is a constant approaching about $0.72$. For *Concatenate*, the value $M_{j + 2^{10}}$ is equal to $M_j$. Hence we have that the cardinality estimate for *Concatenate* is:

$$\alpha_{2m} (2m)^2 \left( \sum_{j=1}^{2m} 2^{-M_j} \right)^{-1} = \alpha_{2m} (2m)^2 \left( 2 \sum_{j=1}^{m} 2^{-M_j} \right)^{-1} = \frac{2\,\alpha_{2m}}{\alpha_m} \cdot \alpha_m m^2 \left( \sum_{j=1}^{m} 2^{-M_j} \right)^{-1} \approx 2E$$

Notice that this last term is about equal to 2 times the cardinality estimate of the HLL before doubling. One quick thing that we can take away from this is that it is unlikely for two “partner” bins to have the same value in them (since if this happens frequently, we get an estimate close to that given by *Concatenate* – which is very inaccurate!).
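We can check this numerically with the raw HLL estimate (ignoring the small- and large-range corrections, and using a common large-$m$ approximation for the bias constant). Duplicating every bin, as *Concatenate* does, doubles both the bin count and the indicator sum, so the estimate roughly doubles:

```python
def alpha(m):
    # common large-m approximation of the HLL bias constant
    return 0.7213 / (1 + 1.079 / m)

def raw_estimate(bins):
    m = len(bins)
    return alpha(m) * m * m / sum(2.0 ** -b for b in bins)

bins = [(j % 4) + 4 for j in range(1024)]   # some plausible register values
e_before = raw_estimate(bins)
e_concat = raw_estimate(bins + bins)        # Concatenate: duplicate every bin
print(e_concat / e_before)                  # ≈ 2
```

The ratio is exactly $2\alpha_{2m}/\alpha_m$, which tends to $2$ as $m$ grows.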

As for *MinusTwo* and *RE*, these have small initial error and the error only falls afterwards. The initial error is small since the rules for these give guesses approximately equal to the guess of the original HLL before doubling. From here, the error should continue to shrink, and eventually, it should match that of the large HLL.

One thing we noticed was that the error for *Concatenate* in the graph above suggested that the absolute error wasn’t decreasing at all. To check this we looked at the trials and, sure enough, the absolute error stays pretty flat. Essentially, *Concatenate* overestimates pretty badly, putting the HLL in a state where it thinks it has seen twice as many keys as it actually has. In the short term, it will continue to make estimates as if it has seen 500,000 extra keys. We can see this clearly in the graphs below.

### Recovery Time Data

I also ran 100 experiments where we doubled the HLLs after adding 500,000 keys, then continued to add keys until the cardinality estimate fell within 5% of the true cardinality. The HLLs were set up to stop running at 2,000,000 keys if they hadn’t arrived at the error bound.

Notice how badly *Concatenate* did! In no trials did it make it under 5% error. *Zeroes* did poorly as well, though it did recover eventually. My guess here is that the harmonic mean had a bit to do with this – any bin with a low value, call it *b*, in it would pull the estimate down to be about *α_{m}·m^{2}·2^{b}*. As a result, the estimate produced by the *Zeroes* HLL will remain depressed until every bin is hit with a(n unlikely) high value. *Zeroes* and *Concatenate* should not do well since essentially the initial estimate (after doubling) of each HLL is off by a very large fixed amount. The graph of absolute errors, above, shows this.
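A quick numerical check of this claim, again using the raw estimate with a generic bias constant (a sketch, not the exact corrected estimator): with every register high except a single bin stuck at a low value *b*, the estimate lands near *α·m^{2}·2^{b}*, no matter how large the other registers are.

```python
ALPHA = 0.7213  # large-m HLL bias constant

def raw_estimate(bins):
    m = len(bins)
    return ALPHA * m * m / sum(2.0 ** -b for b in bins)

m = 1024
bins = [20] * m     # every bin holds a high value...
bins[0] = 3         # ...except one bin stuck at the low value b = 3
est = raw_estimate(bins)
approx = ALPHA * m * m * 2 ** 3   # the single low bin dominates the sum
print(est / approx)               # ≈ 0.99: the low bin sets the estimate
```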

On the other hand, *RE* and *MinusTwo* performed fairly well. Certainly, *RE* looks better in terms of median and middle 50%, though its variance is much higher than *MinusTwo*’s. This should make sense, as we are injecting a lot of randomness into *RE* when we fill in the values, whereas *MinusTwo*’s bins are filled in deterministically.

### Recovery Time As A Function of Size

One might wonder whether the recovery times of *MinusTwo* and *RE* depend on the size of the HLL before the doubling process. To get a quick view of whether or not this is true, we did 1,000 trials like those above, but doubling after adding 200K, 400K, 600K, 800K, and 1M keys, and with a cutoff of 3% this time. Below, we have the box plots of the data for each of these. The heading of each graph gives the size of the HLL before doubling, and the y-axis gives the fractional recovery time (the true recovery time divided by the size of the HLL before doubling).

Notice that, for each doubling rule, there is almost no variation between each of the plots. This suggests that the size of the HLL before doubling doesn’t change the fractional recovery time. As a side note, one thing that we found really surprising is that *RE* is no longer king – *MinusTwo* has a slightly better average case. We think that this is just a factor of the higher variation of *RE* and the change in cutoff.

### Summary

Of the four rules, *MinusTwo* and *RE* are clearly the best. Both take about 50–75% more keys after doubling to get within 3% error, and both recover extremely quickly if you only ask them to get within 5% error.

To leave you with one last little brainteaser: an HLL of size *2^{n}*, which is then doubled, will eventually have the same values in its bins as an HLL of size *2^{n+1}* which ran on the same data. About how long will it take for these HLLs to converge? One (weak) requirement for this to happen is to have the value in every bin of both HLLs be changed. To get an upper bound on how long this should take, one should read about the coupon collector problem.
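For intuition on that upper bound, here is a sketch. Each key lands in a uniformly random bin, and the expected number of uniform draws needed to hit all *m* bins at least once is *m·H_{m} ≈ m·ln m* (the coupon collector expectation). For the *2^{11}* bins of the doubled HLL in our experiments, every register would be touched after roughly 17,000 keys in expectation – though touching a bin does not guarantee changing its value, which is why this only bounds a weak necessary condition.

```python
def coupon_collector_expectation(m):
    """Expected number of uniform draws to see all m bins at least once:
    m * H_m, where H_m is the m-th harmonic number."""
    return m * sum(1.0 / k for k in range(1, m + 1))

print(coupon_collector_expectation(2 ** 11))  # ≈ 16,800 draws for 2048 bins
```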