Different std in pandas vs numpy

The standard deviation differs between pandas and numpy. Why, and which one is correct? (The relative difference is 3.5%, which seems too large to be a rounding artifact.)

Example

import numpy as np
import pandas as pd
from io import StringIO  # Python 3; the original Python 2 code used "from StringIO import StringIO"

a='''0.057411
0.024367
 0.021247
-0.001809
-0.010874
-0.035845
0.001663
0.043282
0.004433
-0.007242
0.029294
0.023699
0.049654
0.034422
-0.005380'''


df = pd.read_csv(StringIO(a.strip()), sep=r"\s+", header=None)  # sep=r"\s+" replaces the deprecated delim_whitespace=True

df.std() == np.std(df)  # False
df.std()                # 0.025801
np.std(df)              # 0.024926

(0.025801 - 0.024926) / 0.024926  # ~3.5% relative difference

I use these versions:

pandas '0.14.0'
numpy '1.8.1'

Solution 1:

In a nutshell, neither is "incorrect". pandas uses the unbiased (sample) estimator, dividing by N-1, whereas NumPy by default uses the biased (population) estimator, dividing by N.
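To see where the 3.5% figure comes from, here is a minimal sketch that computes both estimators by hand on the fifteen values from the question; the two results differ by exactly the factor sqrt(N/(N-1)) = sqrt(15/14) ≈ 1.0351:

import numpy as np

x = np.array([0.057411, 0.024367, 0.021247, -0.001809, -0.010874,
              -0.035845, 0.001663, 0.043282, 0.004433, -0.007242,
              0.029294, 0.023699, 0.049654, 0.034422, -0.005380])

n = x.size                        # N = 15
ss = ((x - x.mean()) ** 2).sum()  # sum of squared deviations from the mean

np.sqrt(ss / (n - 1))             # unbiased, pandas default: 0.025801
np.sqrt(ss / n)                   # biased, numpy default:    0.024926
np.sqrt(n / (n - 1))              # ratio ~ 1.0351, i.e. the 3.5% gap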

To make them behave the same, pass ddof=1 to numpy.std().
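For example, with the df from the question (numpy forwards ddof to the DataFrame's own std method, so either call below should work):

np.std(df, ddof=1)         # 0.025801, matches df.std()
np.std(df.values, ddof=1)  # same result via the underlying ndarray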

For further discussion, see

  • Can someone explain biased/unbiased population/sample standard deviation?
  • Population variance and sample variance.
  • Why divide by n-1?

Solution 2:

To make pandas behave the same as numpy, pass ddof=0: df.std(ddof=0).
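For instance, with the df from the question:

df.std(ddof=0)  # 0.024926, now matches
np.std(df)      # 0.024926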

This short video explains quite well why n-1 might be preferred for samples: https://www.youtube.com/watch?v=Cn0skMJ2F3c