How can I use numpy.correlate to do autocorrelation?
To answer your first question, numpy.correlate(a, v, mode) performs the convolution of a with the reverse of v, giving the results clipped by the specified mode. The definition of convolution, C(t) = ∑_{i=-∞}^{∞} a_i v_{t+i} where -∞ < t < ∞, allows for results from -∞ to ∞, but you obviously can't store an infinitely long array. So it has to be clipped, and that is where the mode comes in. There are 3 different modes: full, same, and valid:
- "full" mode returns results for every
t
where botha
andv
have some overlap. - "same" mode returns a result with the same length as the shortest vector (
a
orv
). - "valid" mode returns results only when
a
andv
completely overlap each other. The documentation fornumpy.convolve
gives more detail on the modes.
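As a quick illustration, this mirrors the small example in the numpy documentation for numpy.correlate; the comments show the output so you can see how each mode clips the result:

import numpy

a = numpy.array([1, 2, 3])
v = numpy.array([0, 1, 0.5])

print(numpy.correlate(a, v, mode='valid'))  # [ 3.5]                     1 result
print(numpy.correlate(a, v, mode='same'))   # [ 2.   3.5  3. ]           3 results
print(numpy.correlate(a, v, mode='full'))   # [ 0.5  2.   3.5  3.   0. ] 5 results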
For your second question, I think numpy.correlate is giving you the autocorrelation, it is just giving you a little more as well. The autocorrelation is used to find how similar a signal, or function, is to itself at a certain time difference. At a time difference of 0, the autocorrelation should be the highest because the signal is identical to itself, so you expected the first element in the autocorrelation result array to be the greatest. However, the correlation does not start at a time difference of 0. It starts at a negative time difference, works its way up to 0, and then goes positive. That is, you were expecting:

autocorrelation(a) = ∑_{i=-∞}^{∞} a_i a_{t+i}, where 0 ≤ t < ∞

But what you got was:

autocorrelation(a) = ∑_{i=-∞}^{∞} a_i a_{t+i}, where -∞ < t < ∞

What you need to do is take the last half of your correlation result, and that should be the autocorrelation you are looking for. A simple Python function to do that would be:
import numpy

def autocorr(x):
    result = numpy.correlate(x, x, mode='full')
    # lag 0 is at the middle; use // so this also works on Python 3
    return result[result.size // 2:]
You will, of course, need error checking to make sure that x is actually a 1-d array. Also, this explanation probably isn't the most mathematically rigorous. I've been throwing around infinities because the definition of convolution uses them, but that doesn't necessarily apply for autocorrelation. So, the theoretical portion of this explanation may be slightly wonky, but hopefully the practical results are helpful. These pages on autocorrelation are pretty helpful, and can give you a much better theoretical background if you don't mind wading through the notation and heavy concepts.
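A quick sanity check of autocorr on a made-up signal (the sine wave here is just an illustrative input, not from the question):

import numpy

x = numpy.sin(numpy.linspace(0, 4 * numpy.pi, 50))
r = autocorr(x)
print(numpy.argmax(r))  # 0: the autocorrelation peaks at zero lag, as expected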
Auto-correlation comes in two versions: statistical and convolution. They both do the same thing, except for one little detail: the statistical version is normalized to the interval [-1, 1]. Here is an example of how you do the statistical one:
import numpy

def acf(x, length=20):
    # lag 0 is 1 by definition; each subsequent entry is the correlation
    # of the series with itself shifted by i
    return numpy.array([1] + [numpy.corrcoef(x[:-i], x[i:])[0, 1]
                              for i in range(1, length)])
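For example, on random data (just for illustration), every value stays within [-1, 1]:

import numpy

x = numpy.random.randn(200)
print(acf(x, length=5))  # 5 values; the first is exactly 1, the rest are in [-1, 1]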
I think there are 2 things that add confusion to this topic:

- statistical vs. signal-processing definition: as others have pointed out, in statistics we normalize the auto-correlation into [-1, 1].
- partial vs. non-partial mean/variance: when the timeseries is shifted by a lag > 0, the overlapping part is always shorter than the original length. Whether we use the mean and std of the original series (non-partial), or compute a new mean and std from the ever-changing overlap (partial), makes a difference. (There's probably a formal term for this, but I'm gonna use "partial" for now.)
I've created 5 functions that compute the auto-correlation of a 1d array, with partial vs. non-partial distinctions. Some use the formula from statistics; some use correlate in the signal-processing sense, which can also be done via FFT. But all results are auto-correlations in the statistics definition, so they illustrate how the approaches are linked to each other. Code below:
import numpy
import matplotlib.pyplot as plt

def autocorr1(x, lags):
    '''numpy.corrcoef, partial'''
    corr = [1. if l == 0 else numpy.corrcoef(x[l:], x[:-l])[0][1] for l in lags]
    return numpy.array(corr)

def autocorr2(x, lags):
    '''manually compute, non-partial'''
    mean = numpy.mean(x)
    var = numpy.var(x)
    xp = x - mean
    corr = [1. if l == 0 else numpy.sum(xp[l:] * xp[:-l]) / len(x) / var for l in lags]
    return numpy.array(corr)

def autocorr3(x, lags):
    '''fft, pad 0s, non-partial'''
    n = len(x)
    # pad 0s to 2n-1
    ext_size = 2 * n - 1
    # nearest power of 2
    fsize = 2 ** numpy.ceil(numpy.log2(ext_size)).astype('int')
    xp = x - numpy.mean(x)
    var = numpy.var(x)
    # do fft and ifft
    cf = numpy.fft.fft(xp, fsize)
    sf = cf.conjugate() * cf
    corr = numpy.fft.ifft(sf).real
    corr = corr / var / n
    return corr[:len(lags)]

def autocorr4(x, lags):
    '''fft, don't pad 0s, non-partial'''
    mean = x.mean()
    var = numpy.var(x)
    xp = x - mean
    cf = numpy.fft.fft(xp)
    sf = cf.conjugate() * cf
    corr = numpy.fft.ifft(sf).real / var / len(x)
    return corr[:len(lags)]

def autocorr5(x, lags):
    '''numpy.correlate, non-partial'''
    mean = x.mean()
    var = numpy.var(x)
    xp = x - mean
    corr = numpy.correlate(xp, xp, 'full')[len(x) - 1:] / var / len(x)
    return corr[:len(lags)]

if __name__ == '__main__':

    y = [28, 28, 26, 19, 16, 24, 26, 24, 24, 29, 29, 27, 31, 26, 38, 23, 13,
         14, 28, 19, 19, 17, 22, 2, 4, 5, 7, 8, 14, 14, 23]
    y = numpy.array(y).astype('float')

    lags = range(15)
    fig, ax = plt.subplots()

    for funcii, labelii in zip([autocorr1, autocorr2, autocorr3, autocorr4,
                                autocorr5],
                               ['np.corrcoef, partial', 'manual, non-partial',
                                'fft, pad 0s, non-partial',
                                'fft, no padding, non-partial',
                                'np.correlate, non-partial']):

        cii = funcii(y, lags)
        print(labelii)
        print(cii)
        ax.plot(lags, cii, label=labelii)

    ax.set_xlabel('lag')
    ax.set_ylabel('correlation coefficient')
    ax.legend()
    plt.show()
Here is the output figure:
We don't see all 5 lines because 3 of them overlap (at the purple line). The overlapping ones are all non-partial auto-correlations. This is because the signal-processing methods (np.correlate, FFT) don't compute a different mean/std for each overlap.
Also note that the fft, no padding, non-partial (red line) result is different, because the timeseries was not padded with 0s before the FFT, so it is a circular FFT: multiplying the spectrum by its own conjugate gives a circular correlation, in which the shifted copy wraps around from the end of the array back to the beginning, so samples near the end get correlated with samples near the start at every non-zero lag. Padding to length 2n-1 or more pushes that wrap-around into the zeros, recovering the linear correlation.
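A minimal sketch of that difference, using a tiny made-up array (the padded version reproduces numpy.correlate; the unpadded one wraps around):

import numpy

x = numpy.array([1., 2., 3., 4.])
xp = x - x.mean()
n = len(x)

# no padding: ifft(conj(F) * F) is the *circular* autocorrelation
f = numpy.fft.fft(xp)
circ = numpy.fft.ifft(f.conjugate() * f).real

# padded to >= 2n-1: the wrap-around lands in the zeros, so the first n
# entries match the linear autocorrelation
fp = numpy.fft.fft(xp, 2 * n)
lin = numpy.fft.ifft(fp.conjugate() * fp).real[:n]

print(circ)                                     # [ 5.   -1.   -3.   -1.  ]
print(lin)                                      # [ 5.    1.25 -1.5  -2.25]
print(numpy.correlate(xp, xp, 'full')[n - 1:])  # [ 5.    1.25 -1.5  -2.25]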
Use the numpy.corrcoef function instead of numpy.correlate to calculate the statistical correlation for a lag of t:
import numpy

def autocorr(x, t=1):
    # the off-diagonal entry of the 2x2 correlation matrix is the
    # lag-t autocorrelation coefficient
    return numpy.corrcoef(numpy.array([x[:-t], x[t:]]))[0, 1]
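For instance, on a hypothetical slowly varying signal, neighboring samples should be strongly correlated:

import numpy

x = numpy.sin(numpy.linspace(0, 10 * numpy.pi, 500))
print(autocorr(x, t=1))  # close to 1, since adjacent samples barely differ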