I'm learning time series analysis and I'm a little confused as to how the auto-covariance of a time series dataset works.
We define the autocovariance function of a time series as: $\gamma(h) = E[(Y_t - \mu_t)(Y_s - \mu_s)]$, where $h = t - s$.
Here $t$ and $s$ are different time indices in the dataset. But this is where I'm confused: aren't the values at $t$ and $s$ deterministic? For example, if we consider Google's stock price over the past two years, then on day 30 ($t$) and day 75 ($s$) the stock had only one value each, since we're using the closing price for the dataset. So where does the distribution over these time points come from when computing the autocovariance function?
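To make my confusion concrete, here is how I understand the *sample* version would be computed from a single realization (assuming weak stationarity, so $\gamma$ depends only on the lag $h$; the series below is just stand-in random data, not real prices):

```python
import numpy as np

# A single realized series (e.g., two years of daily closing prices).
# Stand-in random data here, since this is just for illustration.
rng = np.random.default_rng(0)
y = rng.normal(size=500)

def sample_autocov(y, h):
    """Sample autocovariance at lag h, assuming weak stationarity:
    gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (y_{t+h} - ybar) * (y_t - ybar)."""
    n = len(y)
    ybar = y.mean()
    return np.sum((y[h:] - ybar) * (y[: n - h] - ybar)) / n

print(sample_autocov(y, 0))  # equals the sample variance np.var(y)
print(sample_autocov(y, 5))  # autocovariance at lag 5
```

I can compute this from the one observed path, but I don't see what *random variables* $Y_t$ and $Y_s$ the expectation in the definition is taken over when I only have one observed value at each time point.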