What, if anything, is wrong with this shuffling algorithm and how can I know?

General remark

My personal approach to the correctness of probability-using algorithms: if you know how to prove it's correct, then it's probably correct; if you don't, it's certainly wrong.

Said differently, it's generally hopeless to try to analyse every algorithm you could come up with: you have to keep looking for an algorithm until you find one that you can prove correct.

Analysing a random algorithm by computing the distribution

I know of one way to "automatically" analyse a shuffle (or, more generally, a random-using algorithm) that is stronger than simply running lots of tests and checking for uniformity: you can mechanically compute the distribution associated with each input of your algorithm.

The general idea is that a random-using algorithm explores part of a world of possibilities. Each time your algorithm asks for a random element of a set ({true, false} when flipping a coin), there is one possible outcome per element of that set, and one of them is chosen. You can change your algorithm so that, instead of returning one of the possible outcomes, it explores all of them in parallel and returns every possible outcome with its associated probability.

In general, that would require rewriting your algorithm in depth. If your language supports delimited continuations, you don't have to: you can implement the "exploration of all possible outcomes" inside the function that asks for a random element (the idea is that the random generator, instead of returning a result, captures the continuation associated with your program and runs it once for each possible result). For an example of this approach, see Oleg Kiselyov's HANSEI.

An intermediate, and probably less arcane, solution is to represent this "world of possible outcomes" as a monad, and to use a language such as Haskell with facilities for monadic programming. Here is an example implementation of a variant¹ of your algorithm, in Haskell, using the probability monad from the probability package:

import Numeric.Probability.Distribution

shuffleM :: (Num prob, Fractional prob) => [a] -> T prob [a]
shuffleM [] = return []
shuffleM [x] = return [x]
shuffleM (pivot:li) = do
        (left, right) <- partition li
        sleft <- shuffleM left
        sright <- shuffleM right
        return (sleft ++ [pivot] ++ sright)
  where partition [] = return ([], [])
        partition (x:xs) = do
                  (left, right) <- partition xs
                  uniform [(x:left, right), (left, x:right)]

You can run it for a given input, and get the output distribution:

*Main> shuffleM [1,2]
fromFreqs [([1,2],0.5),([2,1],0.5)]
*Main> shuffleM [1,2,3]
fromFreqs
  [([2,1,3],0.25),([3,1,2],0.25),([1,2,3],0.125),
   ([1,3,2],0.125),([2,3,1],0.125),([3,2,1],0.125)]

You can see that this algorithm is uniform with inputs of size 2, but non-uniform on inputs of size 3.

The difference with the test-based approach is that we can gain absolute certainty in a finite number of steps: that number can be quite big, as it amounts to an exhaustive exploration of the world of possibilities (though generally smaller than 2^N, as similar outcomes are factored together), but if the computation returns a non-uniform distribution, we know for sure that the algorithm is wrong. Of course, if it returns a uniform distribution for [1..N] with 1 <= N <= 100, you only know that your algorithm is uniform on lists up to size 100; it may still be wrong.

¹: this algorithm is a variant of your Erlang implementation because of the specific pivot handling. If I use no pivot, as in your case, the input size doesn't decrease at each step anymore: the algorithm also considers the case where all inputs end up in the left list (or the right list), and gets lost in an infinite loop. This is a weakness of the probability monad implementation (if an algorithm has probability 0 of non-termination, the distribution computation may still diverge), which I don't yet know how to fix.

Sort-based shuffles

Here is a simple algorithm that I feel confident I could prove correct:

  1. Pick a random key for each element in your collection.
  2. If the keys are not all distinct, restart from step 1.
  3. Sort the collection by these random keys.

You can omit step 2 if you know the probability of a collision (two of the random keys being equal) is sufficiently low, but without it the shuffle is not perfectly uniform.

If you pick your keys in [1..N], where N is the length of your collection, you'll have lots of collisions (birthday problem). If you pick your keys as 32-bit integers, the probability of a collision is low in practice, but still subject to the birthday problem.
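For concreteness, here is a minimal Haskell sketch of the finite-key variant of steps 1–3, restarting on collisions. It uses System.Random; the name shuffleByKeys and the choice of full-range Int keys are illustrative assumptions, not part of the algorithm above (the discussion mentions 32-bit keys, which play the same role).

import Data.List (nub, sortOn)
import System.Random (randomRIO)

shuffleByKeys :: [a] -> IO [a]
shuffleByKeys xs = do
  -- step 1: pick a random key for each element
  keys <- mapM (const (randomRIO (minBound, maxBound :: Int))) xs
  if length (nub keys) /= length keys
    then shuffleByKeys xs                              -- step 2: collision, restart
    else return (map snd (sortOn fst (zip keys xs)))   -- step 3: sort by the keys

The nub-based distinctness check is quadratic, which is fine for a sketch but not for large inputs.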

If you use infinite (lazily evaluated) bitstrings as keys, rather than finite-length keys, the probability of a collision becomes 0, and checking for distinctness is no longer necessary.

Here is a shuffle implementation in OCaml, using lazy real numbers as infinite bitstrings:

type 'a stream = Cons of 'a * 'a stream lazy_t

let rec real_number () =
  Cons (Random.bool (), lazy (real_number ()))

let rec compare_real a b = match a, b with
| Cons (true, _), Cons (false, _) -> 1
| Cons (false, _), Cons (true, _) -> -1
| Cons (_, lazy a'), Cons (_, lazy b') ->
    compare_real a' b'

let shuffle list =
  List.map snd
    (List.sort (fun (ra, _) (rb, _) -> compare_real ra rb)
       (List.map (fun x -> real_number (), x) list))

There are other approaches to "pure shuffling". A nice one is apfelmus's mergesort-based solution.

Algorithmic considerations: the complexity of the previous algorithm depends on the probability that all keys are distinct. If you pick them as 32-bit integers, the probability that one particular pair of keys collides is about one in 4 billion. Sorting by these keys is O(n log n), assuming picking a random number is O(1).

If you use infinite bitstrings, you never have to restart the picking, but the complexity is then related to how many elements of the streams are evaluated on average. I conjecture it is O(log n) on average (hence still O(n log n) in total), but have no proof.

... and I think your algorithm works

After more reflection, I think (like douplep) that your implementation is correct. Here is an informal explanation.

Each element in your list is tested by several random:uniform() < 0.5 tests. To each element you can associate the list of outcomes of those tests, as a list of booleans or {0, 1}. At the beginning of the algorithm, you don't know the list associated with any element. After the first partition call, you know the first element of each list, and so on. When your algorithm returns, the lists of test outcomes are completely known and the elements are sorted according to those lists (sorted in lexicographic order, or considered as binary representations of real numbers).

So your algorithm is equivalent to sorting by infinite bitstring keys. The action of partitioning the list, reminiscent of quicksort's partition over a pivot element, is actually a way of separating, for a given position in the bitstring, the elements whose digit at that position is 0 from those whose digit is 1.
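To make the correspondence concrete, here is a hedged Haskell rendering of the no-pivot, partition-based shuffle as I understand it from the description above (a sketch for illustration, not the original Erlang code):

import System.Random (randomIO)

shuffleIO :: [a] -> IO [a]
shuffleIO []  = return []
shuffleIO [x] = return [x]
shuffleIO xs = do
  (left, right) <- partition xs
  sleft  <- shuffleIO left
  sright <- shuffleIO right
  return (sleft ++ sright)
  where
    partition [] = return ([], [])
    partition (y:ys) = do
      (l, r) <- partition ys
      b <- randomIO        -- the random:uniform() < 0.5 test: one more digit of y's key
      return (if b then (y : l, r) else (l, y : r))

A partition that happens to put every element on the same side simply recurses on a list of the same length, which is exactly the probabilistic-termination issue discussed below.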

The sort is uniform because the bitstrings are all different. Indeed, two elements whose real numbers are equal up to the n-th bit are on the same side of any partition occurring during a recursive shuffle call of depth n. The algorithm only terminates when all the lists resulting from partitions are empty or singletons: all elements have then been separated by at least one test, and therefore have distinct binary expansions.

Probabilistic termination

A subtle point about your algorithm (or my equivalent sort-based method) is that the termination condition is probabilistic. Fisher-Yates always terminates after a known number of steps (the number of elements in the array). With your algorithm, the termination depends on the output of the random number generator.

There are possible outputs that would make your algorithm diverge, not terminate. For example, if the random number generator always outputs 0, each partition call will return the input list unchanged, on which you recursively call the shuffle: you will loop indefinitely.

However, this is not an issue if you're confident that your random number generator is fair: it does not cheat and always returns independent, uniformly distributed results. In that case, the probability that the test random:uniform() < 0.5 always returns true (or false) is exactly 0:

  • the probability that the first N calls return true is 2^{-N}
  • the probability that all calls return true is the probability of the infinite intersection, for all N, of the event that the first N calls return true; it is the infimum limit¹ of the 2^{-N}, which is 0

¹: for the mathematical details, see http://en.wikipedia.org/wiki/Measure_(mathematics)#Measures_of_infinite_intersections_of_measurable_sets

More generally, the algorithm does not terminate if and only if some of the elements get associated with the same boolean stream, that is, if at least two elements have the same boolean stream. But the probability that two random boolean streams are equal is again 0: the probability that their digits at position K are equal is 1/2, so the probability that the first N digits are equal is 2^{-N}, and the same analysis applies.

Therefore, you know that your algorithm terminates with probability 1. This is a slightly weaker guarantee than that of the Fisher-Yates algorithm, which always terminates. In particular, you're vulnerable to an attack by an evil adversary that controls your random number generator.

With more probability theory, you could also compute the distribution of running times of your algorithm for a given input length. This is beyond my technical abilities, but I assume it's good: I suppose that you only need to look at the first O(log N) digits on average to check that all N lazy streams are different, and that the probability of much higher running times decreases exponentially.


Your algorithm is a sort-based shuffle, as discussed in the Wikipedia article.

Generally speaking, the computational complexity of a sort-based shuffle is the same as that of the underlying sort algorithm (e.g. O(n log n) average, O(n²) worst case for a quicksort-based shuffle), and while the distribution is not perfectly uniform, it should be close enough to uniform for most practical purposes.

Oleg Kiselyov provides the following article / discussion:

  • Provably perfect random shuffling and its pure functional implementations

which covers the limitations of sort-based shuffles in more detail, and also offers two adaptations of the Fisher–Yates strategy: a naive O(n²) one, and a binary-tree-based O(n log n) one.

"Sadly the functional programming world doesn't give you access to mutable state."

This is not true: while purely functional programming avoids side effects, it still supports access to mutable state through first-class effects.

In this case, you can use Haskell's mutable arrays to implement the mutating Fisher–Yates algorithm as described in this tutorial (a minimal sketch follows the link below):

  • Haskell Shuffling (Brett Hall)
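For illustration, here is a minimal sketch of the in-place Fisher–Yates shuffle using an STArray and a StdGen threaded through an STRef. It is not the code from the linked tutorial, and it assumes only the standard array and random packages:

import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, getElems, newListArray, readArray, writeArray)
import Data.STRef (newSTRef, readSTRef, writeSTRef)
import System.Random (StdGen, randomR)

-- building the array through a helper pins down the array type for MArray
mkArray :: [a] -> ST s (STArray s Int a)
mkArray xs = newListArray (0, length xs - 1) xs

fisherYates :: StdGen -> [a] -> [a]
fisherYates gen0 xs = runST $ do
  arr  <- mkArray xs
  genR <- newSTRef gen0
  let n = length xs
      swap i j = do
        vi <- readArray arr i
        vj <- readArray arr j
        writeArray arr i vj
        writeArray arr j vi
  mapM_
    (\i -> do
       g <- readSTRef genR
       let (j, g') = randomR (0, i) g   -- random index in [0, i]
       writeSTRef genR g'
       swap i j)
    [n - 1, n - 2 .. 1]
  getElems arr

For example, fisherYates (mkStdGen 42) [1 .. 10] produces a shuffle determined entirely by the seed; in real code you would obtain the generator from newStdGen or thread one through your program.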

Addendum

The specific foundation of your sort-based shuffle is actually an infinite-key radix sort: as gasche points out, each partition corresponds to a digit grouping.

The main disadvantage is the same as for any other infinite-key sorting shuffle: there is no termination guarantee. Although the probability of having terminated increases as the comparison proceeds, there is never an upper bound: the worst-case complexity is O(∞).


I was doing some similar work a while ago, and in particular you might be interested in Clojure's vectors, which are functional and immutable yet still offer effectively O(1) random access and update. These two gists have several implementations of "take N elements at random from this M-sized list"; at least one of them turns into a functional implementation of Fisher-Yates if you let N = M.

https://gist.github.com/805546

https://gist.github.com/805747


Based on How to test randomness (case in point - Shuffling), I propose:

Shuffle medium-sized arrays composed of equal numbers of zeroes and ones. Repeat and concatenate until bored. Use these as input to the diehard tests. If you have a good shuffle, then you should be generating random sequences of zeroes and ones (with the caveat that the cumulative excess of zeroes (or ones) is zero at the boundaries of the medium-sized arrays, which you would hope the tests detect; but the larger "medium" is, the less likely they are to do so).
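As a hedged Haskell sketch of that setup (the shuffle argument stands for whatever shuffle implementation you are testing; the exact file format the diehard or dieharder binaries expect should be checked against their documentation):

import Data.Bits (shiftL, (.|.))
import qualified Data.ByteString as BS

-- shuffle balanced 0/1 arrays repeatedly and dump the concatenated bits
-- as raw bytes, to be used as input for the statistical test suites
dumpShuffleBits :: ([Int] -> IO [Int]) -> Int -> Int -> FilePath -> IO ()
dumpShuffleBits shuffle arraySize repeats path = do
  let half  = arraySize `div` 2
      input = replicate half 0 ++ replicate half 1   -- equal numbers of zeroes and ones
  bits <- concat <$> mapM (const (shuffle input)) [1 .. repeats]
  BS.writeFile path (BS.pack (toBytes bits))
  where
    -- pack each group of 8 bits into one byte, dropping any remainder
    toBytes (b0:b1:b2:b3:b4:b5:b6:b7:rest) =
      fromIntegral (foldl (\acc b -> (acc `shiftL` 1) .|. b) 0 [b0,b1,b2,b3,b4,b5,b6,b7])
        : toBytes rest
    toBytes _ = []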

Note that a test can reject your shuffle for three reasons:

  • the shuffle algorithm is bad,
  • the random number generator used by the shuffler or during initialization is bad, or
  • the test implementation is bad.

You'll have to resolve which is the case if any test rejects.

Here are various adaptations of the diehard tests (to pin down certain numbers, I used the source from the diehard page). The principal mechanism of adaptation is to make the shuffle algorithm act as a source of uniformly distributed random bits.

  • Birthday spacings: In an array of n zeroes, insert log n ones. Shuffle. Repeat until bored. Construct the distribution of inter-one distances and compare it with the exponential distribution. You should perform this experiment with different initialization strategies: the ones at the front, the ones at the end, the ones together in the middle, the ones scattered at random. (The latter carries the greatest risk that a bad initialization randomization, rather than the shuffling randomization, causes rejection of the shuffle.) This can actually be done with blocks of identical values, but that introduces correlation in the distributions (a one and a two can't be at the same location in a single shuffle).
  • Overlapping permutations: shuffle five values a bunch of times. Verify that the 120 outcomes are about equally likely. (Chi-squared test, 119 degrees of freedom -- the diehard test (cdoperm5.c) uses 99 degrees of freedom, but this is (mostly) an artifact of the sequential correlation caused by using overlapping subsequences of the input sequence.) A sketch of this tally appears after this list.
  • Ranks of matrices: from 2*(6*8)^2 = 4608 bits obtained by shuffling equal numbers of zeroes and ones, select 6 non-overlapping 8-bit substrings. Treat these as a 6-by-8 binary matrix and compute its rank. Repeat for 100,000 matrices. (Pool together ranks of 0-4; ranks are then either 6, 5, or 0-4.) The expected fractions of ranks are 0.773118, 0.217439, 0.009443. Compare with the observed fractions using a chi-squared test with two degrees of freedom. The 31-by-31 and 32-by-32 tests are similar; ranks of 0-28 and 0-29 are pooled, respectively. Expected fractions are 0.2887880952, 0.5775761902, 0.1283502644, 0.0052854502, and the chi-squared test has three degrees of freedom.
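As a hedged Haskell sketch of the overlapping-permutations tally mentioned above (again parameterised over the shuffle under test; the returned chi-squared statistic is to be compared against the distribution with 119 degrees of freedom):

import qualified Data.Map.Strict as Map

-- shuffle [1..5] many times, tally the 120 possible orderings, and compute
-- the chi-squared statistic against a uniform expectation
permutationChi2 :: ([Int] -> IO [Int]) -> Int -> IO Double
permutationChi2 shuffle trials = do
  results <- mapM (const (shuffle [1 .. 5])) [1 .. trials]
  let counts   = Map.fromListWith (+) [ (r, 1 :: Double) | r <- results ]
      expected = fromIntegral trials / 120
      missing  = 120 - Map.size counts                -- orderings never observed
      chi2     = sum [ (c - expected) ^ 2 / expected | c <- Map.elems counts ]
                   + fromIntegral missing * expected  -- each missing cell contributes (0 - e)^2 / e = e
  return chi2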

and so on...

You may also wish to leverage dieharder and/or ent to make similar adapted tests.