Birthday Problem and Conficker

Hiding behind huge numbers makes fighting back very expensive

The birthday problem, or birthday paradox, concerns the probability that, in a given group of people, two of them share the same birthday. It is called a paradox because the result defies common sense: for a group of 23 people, the chance that two of them share a birthday is greater than 50%, and for a group of 57 people it is higher than 99%.

The best-known use of the birthday paradox is probably the cryptographic attack known as the birthday attack. It exploits the math behind the paradox: by looking for collisions within a small set, an attacker gets a much higher collision chance than intuition suggests.

Recently I came across a different use of the same paradox, in none other than the infamous Conficker worm. Here the statistical trick serves a different purpose: making the fight against the worm much harder.

Here is the problem explained: each day there is a pool of 50,000 (pseudo)randomly generated URLs, out of which each infected computer randomly chooses 500. The total number of possible draws is huge: it has 1,215 digits in its decimal representation, a number hard even to imagine. Just for the fun of it, I annexed it at the end of my post. However, as you will see, in practice things work on a much smaller scale.

Registering all of them would be an incredibly challenging task, and here lies the power of the aforementioned statistical problem. The question is: what is the probability that a random group of 500 URLs contains at least one URL from a smaller registered set (I will refer to such cases from here on as hits)? The result is amazing: if one registers 50 URLs, the chance of a hit is 39.514%, and if one registers 500 URLs, the chance of a hit becomes 99.359%. That is, with only 1% of the pool registered, one can achieve a more than 99% success rate in spreading new malware content through Conficker.
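
These percentages follow directly from the complement count derived in Appendix 1. Here is a minimal sketch in Python, using only the pool size and draw size given above:

```python
from math import comb

N = 50_000  # daily URL pool
K = 500     # URLs drawn by each infected machine

def hit_prob(m: int) -> float:
    """Chance that a random draw of K URLs out of N
    contains at least one of m registered URLs."""
    return 1 - comb(N - m, K) / comb(N, K)

print(f"{hit_prob(50):.3%}")   # roughly 39.5%
print(f"{hit_prob(500):.3%}")  # roughly 99.4%
```

Python's arbitrary-precision integers make the huge binomial coefficients unproblematic here; the division of the two big integers yields a correctly rounded float.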

The graph of the hit chance is shown here:

On the horizontal axis is the number of registered URLs; on the vertical axis is the chance that a random draw hits at least one of them, or in other words, the chance that a computer infected with Conficker will access one of these URLs.

This shows the importance of blocking as many of the 50,000 URLs as possible. A single missed URL that happens to be registered for malicious purposes gives roughly a 1% chance of spreading malware to each Conficker-infected machine.

Randomly blocking some of the URLs has limited benefit, since the pool is fairly big and the number of URLs actually used by the malware is relatively small (at least two orders of magnitude smaller).

Even so, there are some particularities that help overcome this. The domains have to be registered through a limited number of registrars, determined by their TLDs. By working with the registrars directly, blocking large numbers of domains in bulk becomes less of a problem than Conficker's authors had foreseen, and with all the attention this worm is getting, people are willing to put in a lot of work to see this threat ended.

The “good guys” can also use this paradox to their own advantage: it provides a means of estimating the real size of the infection worldwide. By registering a limited number of URLs, one can monitor the incoming requests and, knowing the chance that a URL is picked, extrapolate to the number of infected machines.
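
As a sketch of that extrapolation: if p is the chance that a single infected machine's daily draw hits our registered URLs, and we observe H distinct machines, the total population is roughly H/p. The observation counts below are hypothetical; only the pool parameters come from the post.

```python
from math import comb

N, K = 50_000, 500

def hit_prob(m: int) -> float:
    # chance that one machine's draw of K URLs hits any of our m registered URLs
    return 1 - comb(N - m, K) / comb(N, K)

m = 10                       # hypothetical: we registered 10 of the day's URLs
observed_machines = 120_000  # hypothetical: distinct machines seen at those URLs
p = hit_prob(m)              # about 9.56% (see the table in Appendix 2)

estimated_infected = observed_machines / p
print(round(estimated_infected))
```

This is only a back-of-the-envelope estimator; it assumes each infected machine completes exactly one draw per day and that distinct machines can be told apart.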

Appendix 1 - Mathematical reasoning behind the numbers I’ve presented here.

Let’s denote by C(n,k) the number of combinations of size k chosen from a set S of n elements. Our problem is to determine the probability that a randomly chosen k-set hits at least one element from a smaller subset M of S, where Card(M) = m. The total number of possible k-sized sets out of S is C(n,k).

In order to count how many of them contain at least one element from M, we first count the complement: the k-sets that contain no element from M. For such sets to exist, m has to be smaller than or equal to n − k; in other words, if m is greater than n − k, there is no possible k-set that avoids M. If m ≤ n − k, subtracting M from S gives a subset S' = S \ M with n − m elements. Every k-set from S that contains no element of M is also a k-set of S', and every k-set of S' is a k-set of S that avoids M, so the two collections are identical. Thus the number of k-sets from S that contain no element of M equals the number of k-sets of S', which is C(n−m,k).
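
The complement argument can be checked by brute force on small numbers (n, k, m here are toy values, not the Conficker parameters):

```python
from itertools import combinations
from math import comb

n, k, m = 10, 4, 3
S = range(n)
M = set(range(m))  # the "registered" subset

# directly count the k-sets that contain at least one element of M
hits = sum(1 for c in combinations(S, k) if M & set(c))

# complement counting predicts C(n,k) - C(n-m,k) = 210 - 35 = 175
assert hits == comb(n, k) - comb(n - m, k)
print(hits)
```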

As a direct result, the number of k-sets from S containing at least one element from M is C(n,k) − C(n−m,k). To compute the probability, we divide this number by the total number of sets, getting P(m) = 1 − C(n−m,k)/C(n,k). If we break the ratio down into factors, we get

C(n−m,k)/C(n,k) = [(n−k)/n] · [(n−k−1)/(n−1)] · ... · [(n−k−m+1)/(n−m+1)]

As we see, the second term is a product of m sub-unitary factors, and it decreases towards 0 as m grows. In fact, each factor in the product is at most the first factor (n−k)/n (trivial to prove under the assumption 1 ≤ j ≤ m ≤ n−k), resulting in the following bound:

C(n−m,k)/C(n,k) ≤ ((n−k)/n)^m = (1 − k/n)^m,

which goes to 0 at least as fast as an exponential. This means that our probability can be approximated with the following formula:

P(m) ≈ 1 − (1 − k/n)^m

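
A quick numeric check of how close the exponential approximation stays to the exact formula, sketched with the post's parameters n = 50,000 and k = 500:

```python
from math import comb

N, K = 50_000, 500

def exact(m: int) -> float:
    # P(m) = 1 - C(N-m, K) / C(N, K)
    return 1 - comb(N - m, K) / comb(N, K)

def approx(m: int) -> float:
    # P(m) ~ 1 - (1 - K/N)^m
    return 1 - (1 - K / N) ** m

for m in (10, 50, 100, 500):
    print(m, f"{exact(m):.5f}", f"{approx(m):.5f}")
```

The approximation always slightly underestimates the exact value (the bound on the product goes in one direction), but the gap stays well under a tenth of a percentage point across this range.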
Another debate may be started around the fact that Conficker doesn't check for duplicates when picking the 500 URLs. To take this into account, we have to estimate the average number of duplicates in a random pick of 500 out of the 50,000 choices, with replacement. The standard collision-counting result says that for k draws with replacement from n items, the expected number of distinct values is n(1 − (1 − 1/n)^k), so the expected number of duplicates is k minus that. Applied to our case, this gives an estimate of 2.4867 duplicates per random draw. To account for them, we can redo the previous calculations with 497 instead of 500, but this doesn't induce a notable difference in the results.
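
The duplicate estimate can be reproduced directly from that expected-distinct-values formula, using the post's parameters:

```python
n, k = 50_000, 500

# expected number of distinct URLs among k independent uniform picks from n
expected_distinct = n * (1 - (1 - 1 / n) ** k)
expected_duplicates = k - expected_distinct

print(f"{expected_duplicates:.4f}")  # about 2.4867
```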

Another approach to the same argument is to count combinations with repetitions rather than plain combinations. This replaces C(n,k) with C(n+k−1,k) in the formulas above; with the substitution n' = n+k−1, we get the same formulas with n' in place of n. The differences in the resulting numbers are insignificant, and this holds whenever n is much bigger than k.
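
To see how little the with-repetitions variant changes things, the two probabilities can be compared numerically (a sketch; n' = n + k − 1 as above):

```python
from math import comb

N, K = 50_000, 500
N2 = N + K - 1  # n' for combinations with repetitions

def p_plain(m: int) -> float:
    return 1 - comb(N - m, K) / comb(N, K)

def p_reps(m: int) -> float:
    # same formula with n' substituted for n
    return 1 - comb(N2 - m, K) / comb(N2, K)

for m in (50, 500):
    print(m, f"{p_plain(m):.4f}", f"{p_reps(m):.4f}")
```

The two versions agree to within a fraction of a percentage point, confirming that the distinction does not matter when n is so much larger than k.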

Appendix 2 - Here is a table showing the probability of a hit for up to 100 registered URLs. The values are computed with the exact formula, not the approximation, though in most cases, especially with large numbers, the approximation gives a pretty good idea.

#URLs Chance | #URLs Chance | #URLs Chance | #URLs Chance
1 1.00% 26 23.00% 51 40.12% 76 53.44%
2 1.99% 27 23.77% 52 40.72% 77 53.91%
3 2.97% 28 24.53% 53 41.31% 78 54.37%
4 3.94% 29 25.29% 54 41.90% 79 54.82%
5 4.90% 30 26.04% 55 42.48% 80 55.28%
6 5.85% 31 26.78% 56 43.06% 81 55.72%
7 6.79% 32 27.51% 57 43.63% 82 56.17%
8 7.73% 33 28.23% 58 44.19% 83 56.61%
9 8.65% 34 28.95% 59 44.75% 84 57.04%
10 9.56% 35 29.66% 60 45.30% 85 57.47%
11 10.47% 36 30.37% 61 45.85% 86 57.90%
12 11.36% 37 31.06% 62 46.39% 87 58.32%
13 12.25% 38 31.75% 63 46.93% 88 58.74%
14 13.13% 39 32.44% 64 47.46% 89 59.15%
15 14.00% 40 33.11% 65 47.99% 90 59.56%
16 14.86% 41 33.78% 66 48.51% 91 59.96%
17 15.71% 42 34.45% 67 49.02% 92 60.37%
18 16.55% 43 35.10% 68 49.53% 93 60.76%
19 17.39% 44 35.75% 69 50.04% 94 61.16%
20 18.21% 45 36.39% 70 50.54% 95 61.55%
21 19.03% 46 37.03% 71 51.04% 96 61.93%
22 19.84% 47 37.66% 72 51.53% 97 62.31%
23 20.64% 48 38.29% 73 52.01% 98 62.69%
24 21.44% 49 38.90% 74 52.49% 99 63.06%
25 22.22% 50 39.51% 75 52.97% 100 63.43%

Appendix 3 - The number of 500-sized groups out of a pool of 50,000
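
The annexed number is C(50,000, 500). A one-liner sketch that computes it and confirms the digit count quoted earlier:

```python
from math import comb

total_draws = comb(50_000, 500)
print(len(str(total_draws)))  # 1215 decimal digits
```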


 --Dan Nicolescu
