I've implemented the autocorrelation function in Python according to the normalized autocovariance function for discrete signals, i.e.: $$\gamma(k)=\frac{1}{N-1}\sum_{i=0}^{N-k-1}(x(i+k)-x_{s})(x(i)-x_{s}),$$ where $x_{s}=\frac{1}{N}\sum_{i=0}^{N-1}x(i)$ is the sample mean, and for normalization $\rho(k)=\frac{\gamma(k)}{\gamma(0)}$.
$x(i)$ is my signal, e.g. the vector $[1, 2, 3]$, and $k$ is its shift by a given value, say $k=2$. Now, the implementation below works just fine for returning a single value of the autocorrelation coefficient:
import numpy as np

Xi = np.array([1, 2, 3])
N = np.size(Xi)
k = 2
Xs = np.average(Xi)

def autocovariance(Xi, N, k, Xs):
    autoCov = 0
    for i in np.arange(0, N - k):
        autoCov += (Xi[i + k] - Xs) * (Xi[i] - Xs)
    return (1 / (N - 1)) * autoCov

def autocorrelation():
    return autocovariance(Xi, N, k, Xs) / autocovariance(Xi, N, 0, Xs)

print("Autocorrelation:", autocorrelation())
But here is the thing: I've got a little exercise on the page where I'm learning about signals, in which I'm instructed to write an autocorrelation function that, for the input $[1, 2, 3]$ and $k=2$, returns the vector $[-0.5, 0, 1, 0, -0.5]$.
It is important to note that I know of the existence of the numpy.correlate() function, but I want to implement it by myself, to understand how it actually works. I hope I described my problem clearly. Thank you in advance.
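As a sanity check for a hand-rolled version, one way (the demean-and-normalize steps here are my assumption about the intended normalization) is to apply numpy.correlate to the mean-removed signal and scale by the zero-lag value:

```python
import numpy as np

x = np.array([1, 2, 3], dtype=float)
xd = x - x.mean()                      # remove the sample mean first
c = np.correlate(xd, xd, mode="full")  # raw autocorrelation for all lags -(N-1)..(N-1)
rho = c / c.max()                      # normalize so that rho(0) == 1
print(rho)
```

The `mode="full"` argument makes numpy.correlate return one value per lag, which is what produces a vector rather than a single coefficient.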
Problem solved. What I did was calculate the autocorrelation coefficient for each shift value in the range $[-k, k]$, using the symmetry $\gamma(-k)=\gamma(k)$, where $k\in \mathbb{N}_{+}\cup\left \{ 0 \right \}$. And that's it.