The standard definition of the autocorrelation of a stochastic process X(t) is $$R_X(s,t)=\frac{\mathbb{E}[(X(s)-\mu_s)(X(t)-\mu_t)]}{\sigma_s\sigma_t}$$
I'm currently studying time series theory, in particular the case where X(t) is a WSS (wide-sense stationary) process. The mean $\mu$ and the variance $\sigma^2$ are then time-independent, and one can easily show that the autocorrelation depends only on the lag $\tau=s-t$, so we have $$R_X(\tau)=\frac{\mathbb{E}[(X_{t+\tau}-\mu)(X_t-\mu)]}{\sigma^2}$$
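To make the definition concrete, here is a small NumPy sketch (my own illustration, with an AR(1) process chosen as an arbitrary WSS example) that estimates the normalized autocorrelation above from a sample path via time averages and compares it with the known theoretical value $\phi^{|\tau|}$ for AR(1):

```python
import numpy as np

# Simulate a stationary AR(1) process: X_t = mu + phi*(X_{t-1} - mu) + eps_t.
# For this process the normalized autocorrelation is R_X(tau) = phi**|tau|.
rng = np.random.default_rng(0)
mu, phi, n = 5.0, 0.8, 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + eps[t]

def autocorr(x, tau):
    """Estimate the normalized autocorrelation at lag tau:
    subtract the sample mean, then divide by the sample variance."""
    if tau == 0:
        return 1.0
    xc = x - x.mean()
    return np.dot(xc[tau:], xc[:-tau]) / np.dot(xc, xc)

for tau in (1, 2, 3):
    print(tau, autocorr(x, tau), phi**tau)  # estimate vs. theory
```

Note that the estimator uses a *time* average over one realization, which agrees with the ensemble expectation in the formula only under an ergodicity assumption; that distinction is part of what my first question below is about.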
I have some questions about this. First of all, what do the parameters $\mu$ and $\sigma^2$ mean here? Their defining integrals (taken in the classical way, over time) aren't necessarily well defined, because X(t) is a WSS process and therefore doesn't necessarily tend to zero for large values of $t$.
Also, I have seen on several sites another "equivalent" definition of the autocorrelation, obtained by simply dropping the $\mu$ and $\sigma^2$ parameters from the equation and defining $$R_X(\tau)=\mathbb{E}[X_{t+\tau}X_t]$$ This latter definition is the one used to define the Power Spectral Density (PSD) function. In what sense can these two definitions be called "equivalent"? What are the advantages of using one instead of the other?
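For what it's worth, the two quantities satisfy the algebraic relation $\mathbb{E}[X_{t+\tau}X_t]=\sigma^2 R_X(\tau)+\mu^2$ for a WSS process (expand the product in the numerator of the normalized definition), so they carry the same information and coincide up to scale exactly when $\mu=0$. A small NumPy sketch checking this numerically (reusing an AR(1) process as an arbitrary WSS example):

```python
import numpy as np

# Numerical check of: E[X_{t+tau} X_t] = sigma^2 * R_X(tau) + mu^2,
# i.e. the un-normalized definition equals the normalized one rescaled
# by the variance, plus the squared mean.
rng = np.random.default_rng(1)
mu, phi, n, tau = 5.0, 0.8, 200_000, 2
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + eps[t]

mu_hat = x.mean()
sigma2 = x.var()
xc = x - mu_hat
rho = np.dot(xc[tau:], xc[:-tau]) / np.dot(xc, xc)  # normalized R_X(tau)
raw = np.mean(x[tau:] * x[:-tau])                   # E[X_{t+tau} X_t]
print(raw, sigma2 * rho + mu_hat**2)  # the two sides should nearly agree
```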
Thank you very much.