Why is the minimalist, example Haskell quicksort not a "true" quicksort?

The true quicksort has two beautiful aspects:

  1. Divide and conquer: break the problem into two smaller problems.
  2. Partition the elements in-place.

The short Haskell example demonstrates (1), but not (2). How (2) is done may not be obvious if you don't already know the technique!
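
For reference, the short example under discussion is usually written something like this (a sketch of the commonly quoted list-based version):

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                   ++ [p] ++
                   quicksort [x | x <- xs, x >= p]

It divides the problem around a pivot, but it builds brand-new lists at every step rather than partitioning anything in place.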


A true in-place quicksort in Haskell:

import qualified Data.Vector.Generic as V
import qualified Data.Vector.Generic.Mutable as M

qsort :: (V.Vector v a, Ord a) => v a -> v a
qsort = V.modify go where
    go xs | M.length xs < 2 = return ()
          | otherwise = do
            p <- M.read xs (M.length xs `div` 2)   -- pivot: the middle element
            j <- M.unstablePartition (< p) xs      -- move elements < p to the front
            let (l, pr) = M.splitAt j xs
            k <- M.unstablePartition (== p) pr     -- group pivot-equal elements next
            go l; go $ M.drop k pr                 -- recurse on the parts below and above the pivot
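
A possible usage sketch, assuming the vector package (the boxed Data.Vector works just as well here, since it instantiates V.Vector):

import qualified Data.Vector as DV

main :: IO ()
main = print (qsort (DV.fromList [3, 1, 4, 1, 5, 9, 2, 6]) :: DV.Vector Int)
-- prints: [1,1,2,3,4,5,6,9]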

Here is a transliteration of the "true" quicksort C code into Haskell. Brace yourself.

import Control.Monad
import Data.Array.IO
import Data.IORef

qsort :: IOUArray Int Int -> Int -> Int -> IO ()
qsort a lo hi = do
  (h,l,p,t) <- liftM4 (,,,) z z z z

  when (lo < hi) $ do
    l .= lo
    h .= hi
    p .=. (a!hi)

    doWhile (get l .< get h) $ do
      while ((get l .< get h) .&& ((a.!l) .<= get p)) $ do
        modifyIORef l succ
      while ((get h .> get l) .&& ((a.!h) .>= get p)) $ do
        modifyIORef h pred
      b <- get l .< get h
      when b $ do
        t .=. (a.!l)
        lVal <- get l
        hVal <- get h
        writeArray a lVal =<< a!hVal
        writeArray a hVal =<< get t

    lVal <- get l
    writeArray a hi =<< a!lVal
    writeArray a lVal =<< get p

    hi' <- fmap pred (get l)
    qsort a lo hi'
    lo' <- fmap succ (get l)
    qsort a lo' hi

That was fun, wasn't it? I actually cut out the large let at the beginning, as well as the where at the end of the function; together they define all of the helpers that make the preceding code look somewhat pretty.

  let z :: IO (IORef Int)
      z = newIORef 0
      (.=) = writeIORef
      ref .=. action = do v <- action; ref .= v
      (!) = readArray
      (.!) a ref = readArray a =<< get ref
      get = readIORef
      (.<) = liftM2 (<)
      (.>) = liftM2 (>)
      (.<=) = liftM2 (<=)
      (.>=) = liftM2 (>=)
      (.&&) = liftM2 (&&)
  -- ...
  where doWhile cond foo = do
          foo
          b <- cond
          when b $ doWhile cond foo
        while cond foo = do
          b <- cond
          when b $ foo >> while cond foo

And here is a dumb test to see if it works.

main = do
    a <- (newListArray (0,9) [10,9..1]) :: IO (IOUArray Int Int)
    printArr a
    putStrLn "Sorting..."
    qsort a 0 9
    putStrLn "Sorted."
    printArr a
  where printArr a = mapM_ (\x -> print =<< readArray a x) [0..9]

I don't write imperative code very often in Haskell, so I'm sure there are plenty of ways to clean this code up.

So what?

You will notice that the above code is very, very long. The heart of it is about as long as the C code, though each line is often a bit more verbose. This is because C secretly does a lot of nasty things that you might take for granted. Take a[l] = a[h];, for example: it reads the mutable variables l and h, then reads the mutable array a, and then mutates the mutable array a. Holy mutation, Batman! In Haskell, mutation and access to mutable variables are explicit. The "fake" qsort is attractive for various reasons, but chief among them is that it does not use mutation; this self-imposed restriction makes it much easier to understand at a glance.
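
To make that point concrete, here is a rough sketch of what that single C statement turns into once every read and write is spelled out (copyElem is just an illustrative name, reusing the IORef/IOUArray vocabulary from the code above):

import Data.Array.IO (IOUArray, readArray, writeArray)
import Data.IORef (IORef, readIORef)

-- what C's  a[l] = a[h];  looks like with every effect made explicit
copyElem :: IOUArray Int Int -> IORef Int -> IORef Int -> IO ()
copyElem a l h = do
  lVal <- readIORef l        -- read the mutable variable l
  hVal <- readIORef h        -- read the mutable variable h
  v    <- readArray a hVal   -- read the mutable array at index h
  writeArray a lVal v        -- mutate the array at index l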


In my opinion, saying that it's "not a true quicksort" overstates the case. I think it's a valid implementation of the Quicksort algorithm, just not a particularly efficient one.


I think the point this argument tries to make is that quicksort is commonly used because it's in-place and, as a result, fairly cache-friendly. Since you don't get those benefits with Haskell lists, its main raison d'être is gone, and you might as well use merge sort, which guarantees O(n log n), whereas with quicksort you either have to use randomization or a complicated partitioning scheme to avoid O(n²) run time in the worst case.
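
For comparison, a minimal merge sort on lists might look like the sketch below (GHC's Data.List.sort is a more refined merge sort); it gets the O(n log n) guarantee without any of the machinery above:

mergeSort :: Ord a => [a] -> [a]
mergeSort []  = []
mergeSort [x] = [x]
mergeSort xs  = merge (mergeSort front) (mergeSort back)
  where
    -- split the list in half, sort each half, then merge the results
    (front, back) = splitAt (length xs `div` 2) xs
    merge as []   = as
    merge []  bs  = bs
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs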