In both signals and systems and in statistics, correlation is defined as a measure normalized with respect to the Cauchy-Schwarz inequality.
In systems and signals, correlation is defined as
$$\rho = \frac{ \langle x(t), \space y(t) \rangle}{\| x(t)\|\|y(t)\|} = \frac{1}{\sqrt{E_xE_y}}\int_{-\infty}^{\infty}x(t)y^*(t)dt$$
Here, the inner product is normalized by dividing it by the product of the two signals' norms, i.e. the square root of the product of their energies. The Cauchy-Schwarz inequality for this case is stated as
$$ |\langle x(t), \space y(t)\rangle| \leq \| x(t)\|\|y(t)\|$$
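A quick numerical sanity check of this bound (the sinusoids and the 0.3 rad phase offset are arbitrary choices for illustration):

```python
import numpy as np

# Two hypothetical finite-energy signals: a sinusoid and a phase-shifted copy
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
y = np.sin(2 * np.pi * 5 * t + 0.3)

# Discrete stand-ins for the inner product and the energies
inner = np.vdot(y, x)          # sum of x[n] * conj(y[n])
Ex = np.sum(np.abs(x) ** 2)
Ey = np.sum(np.abs(y) ** 2)

rho = inner / np.sqrt(Ex * Ey)
print(rho)                     # about cos(0.3) ~ 0.955, and |rho| <= 1
```

With an exact number of full cycles, $\rho$ works out to $\cos(0.3)$, comfortably inside the Cauchy-Schwarz bound.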
In statistics, correlation is defined as
$$\rho_{xy} = \frac{\operatorname{Cov}(X,Y)}{\sigma_x\sigma_y}$$
Here, the covariance is normalized by dividing it by the square root of the variances, or the standard deviations, of the two random variables in question. The Cauchy-Schwarz inequality for this case is stated as
$$|\operatorname{Cov}(X,Y)| \leq \sigma_x\sigma_y$$
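The same check works for the statistical version; the data and the 0.8/0.6 mixing coefficients below are made up, chosen so that the true correlation is 0.8:

```python
import numpy as np

# Synthetic correlated data (coefficients chosen for illustration)
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.8 * x + 0.6 * rng.normal(size=10_000)   # Corr(X, Y) = 0.8 in theory

cov = np.cov(x, y)[0, 1]                       # sample covariance (ddof=1)
rho = cov / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(rho)                                     # close to 0.8

# np.corrcoef applies exactly this normalization
print(np.corrcoef(x, y)[0, 1])
```

Note the `ddof=1` on `np.std`: `np.cov` uses the unbiased (n-1) normalization by default, and the two must match for the ratio to equal `np.corrcoef`.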
So, it seems as if there is a universal trend of defining correlation in a normalized sense. That is, until we get to cross-correlation. For some reason, cross-correlation is defined as
$$ \psi(\tau) = \langle x(t), \space y(t-\tau) \rangle = \int_{-\infty}^{\infty} x(t)y^*(t-\tau)dt $$
Here, the cross-correlation is simply the inner product of $x(t)$ with a time-shifted copy of $y(t)$. No normalization has been applied in this definition.
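For what it's worth, nothing stops you from normalizing it yourself: dividing $\psi(\tau)$ by $\sqrt{E_x E_y}$ bounds every lag by 1, again by Cauchy-Schwarz. A sketch using NumPy, whose `np.correlate` convention ($c[k] = \sum_n x[n+k]\,y^*[n]$) matches the $\psi(\tau)$ above with $\tau = k$; the signals and the 30-sample delay are invented for the example:

```python
import numpy as np

# Hypothetical test signals: y is x delayed by 30 samples (circularly, for simplicity)
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.roll(x, 30)                  # y[n] = x[n - 30]

# Unnormalized cross-correlation, as in the definition above
psi = np.correlate(x, y, mode="full")
lags = np.arange(-(len(y) - 1), len(x))

# Normalizing by sqrt(Ex * Ey) bounds the magnitude by 1 (Cauchy-Schwarz)
rho = psi / np.sqrt(np.sum(x**2) * np.sum(y**2))

peak_lag = lags[np.argmax(np.abs(rho))]
print(peak_lag)                     # -30: x matches y advanced by 30 samples
```

The peak location is unchanged by the normalization; only the scale becomes interpretable as a correlation coefficient per lag.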
So, why do we define correlation in a normalized sense across the board, but when it comes to its time-delayed extension, cross-correlation, we suddenly "forget" to normalize?