According to Wikipedia, autocorrelation has two definitions. Oh my god!
In statistics, the autocorrelation between times $s$ and $t$ is defined as: $$\displaystyle R(s,t) = \frac{\mathbb{E}[(X_t-\mu_t)(X_s-\mu_s)]}{\sigma_t\sigma_s}$$
However, in signal processing, the above definition is often used without normalization.
According to *Digital Communications* by Bernard Sklar, $$R_x(\tau)=\int_{-\infty}^{\infty}x(t)\,x(t+\tau)\,dt$$
$$R_X(\tau)=\mathbb{E}\{X(t)X(t+\tau)\}$$
where \begin{array}{ll} \tau & \mbox{ is the difference between } t \mbox{ and } s,\\ x(t) & \mbox{ is a real-valued energy signal,}\\ X(t) & \mbox{ is a random process. }\end{array}
For stationary stochastic processes (random signals), $R_X(\tau)=\mathbb{E}\{X(t)X(t+\tau)\}$ is usually called the (auto)correlation. This implicitly assumes that the signal has zero mean (as is usually the case). Still, in standard probability terminology it should be called a covariance, because it is not normalized to lie in $[-1,1]$. It's just a matter of convention.
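To see the two conventions side by side, here is a minimal NumPy sketch (the moving-average toy signal and the function name `autocorr_sp` are my own choices for illustration): the unnormalized lag product is the signal-processing "autocorrelation" (a covariance), and dividing by its lag-0 value recovers the statistics-style coefficient in $[-1,1]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean stationary toy signal: white noise smoothed by a length-4
# moving average (an arbitrary choice, just to get nonzero lag correlation).
x = np.convolve(rng.standard_normal(100_000), np.ones(4) / 4, mode="valid")

def autocorr_sp(x, tau):
    """Signal-processing convention: estimate of E{X(t) X(t+tau)}, unnormalized."""
    n = len(x)
    return np.mean(x[: n - tau] * x[tau:]) if tau else np.mean(x * x)

r0 = autocorr_sp(x, 0)   # lag 0: the variance ("energy") of the signal
r1 = autocorr_sp(x, 1)   # lag 1: a covariance, not bounded by 1
print(r1)                # unnormalized (signal-processing) value
print(r1 / r0)           # normalized (statistics) value, always in [-1, 1]
```

For this moving-average example the normalized lag-1 value comes out near $3/4$, while the raw value depends on the noise variance, which is exactly the distinction between the two definitions.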
In classical statistical signal processing (with zero-mean, weakly stationary signals), the autocorrelation function (or its Fourier transform, the power spectrum) comprises all the statistically relevant information (the second-order statistics), and, in particular, the autocorrelation at zero lag gives the variance (or "energy") of the stationary signal.
When we switch from stochastic processes to deterministic signals, one would estimate the autocorrelation, for discrete signals, with something like $R_x(\tau)=\frac{1}{N}\sum_{n=1}^{N-\tau} x(n)\,x(n+\tau)$. Sometimes the $1/N$ factor is also dropped (for example, in linear least-squares estimation). That, too, is an omitted normalization (but, mind you, a totally different one from the previous); it should be obvious from the context why it's justified.
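The discrete estimator above, with and without the $1/N$ factor, can be sketched as follows (the function name `autocorr_estimate` is my own; `np.correlate` in `"full"` mode is shown only as a cross-check, since it returns exactly these unnormalized lag sums):

```python
import numpy as np

def autocorr_estimate(x, tau, normalize=True):
    """Biased estimator R(tau) = (1/N) * sum_n x(n) x(n+tau).

    With normalize=False the 1/N factor is dropped, as in the
    linear least-squares setting mentioned above.
    """
    N = len(x)
    s = np.dot(x[: N - tau], x[tau:])  # sum over the overlapping samples
    return s / N if normalize else s

x = np.array([1.0, 2.0, 3.0, 4.0])
print(autocorr_estimate(x, 1))          # (1*2 + 2*3 + 3*4)/4 = 5.0
print(autocorr_estimate(x, 1, False))   # 20.0, the raw lag sum
print(np.correlate(x, x, mode="full"))  # same unnormalized sums at every lag
```

Note that the biased estimator divides by $N$ rather than by the number of overlapping terms $N-\tau$; that choice trades a small bias at large lags for a guaranteed positive semidefinite estimate.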