The autocovariance of an Ornstein–Uhlenbeck process
$$ dX(t) = \theta (\mu - X(t))dt + \sigma dW(t) $$
is given on Wikipedia as
$$ Cov(X(s),X(t)) = \frac{\sigma^2}{2\theta}\left(e^{-\theta|t-s|} - e^{-\theta(t+s)}\right) \quad \quad (1)$$
which seems to depend on the choice of time origin, since it contains a $t+s$ term.
On the other hand, the discrete-time analogue of the O-U process is the AR(1) process $$ X_{i+1} = \theta' (\mu' - X_i) + \sigma' Z_{i+1} $$
which has autocovariance (again according to Wikipedia)
$$Cov(X_{i+n},X_i) = \frac{(\sigma')^2}{1-(\theta')^2}(\theta')^{|n|} \quad \quad (2)$$
which only depends on the lag $n$. How does one reconcile the two? I can see that, in the limit $t,s \to \infty$ with $t-s = n$ held fixed, $(1)$ becomes
$$ Cov(X(s),X(t)) = \frac{\sigma^2}{2\theta}e^{-\theta|n|} \quad \quad (3)$$
but it is not clear how this is related to $(2)$.
Also, if we have a time series from a single O-U realisation (for which we do not know the time origin), what do we actually get when we compute the sample autocovariance: $(1)$ or $(2)$?
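As a numerical experiment (a sketch only; the parameter values are arbitrary), one can simulate a long realisation via the exact AR(1) representation of the O-U transition and compare the sample autocovariance with the stationary formula $(3)$:

```python
import numpy as np

# Simulate a long O-U path via its exact AR(1) transition (a sketch;
# theta, mu, sigma and the step size dt are arbitrary choices).
theta, mu, sigma = 1.0, 0.0, 1.0
dt, n_steps = 0.01, 1_000_000
rng = np.random.default_rng(0)

phi = np.exp(-theta * dt)                         # AR(1) coefficient e^{-theta dt}
sd = sigma * np.sqrt((1 - phi**2) / (2 * theta))  # exact innovation std dev

x = np.empty(n_steps)
x[0] = rng.normal(mu, sigma / np.sqrt(2 * theta))  # start in the stationary law
z = rng.normal(size=n_steps)
for i in range(1, n_steps):
    x[i] = mu + phi * (x[i - 1] - mu) + sd * z[i]

# Sample autocovariance at lag n steps vs the stationary formula (3)
n = 50                                             # time lag is n * dt = 0.5
xc = x - x.mean()
sample_cov = np.mean(xc[:-n] * xc[n:])
theory = sigma**2 / (2 * theta) * np.exp(-theta * n * dt)
print(sample_cov, theory)
```

The two printed numbers should agree to within Monte Carlo error, suggesting that the sample autocovariance of a long realisation estimates the stationary covariance $(3)$, not the transient expression $(1)$.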
Addendum 1
If I discretise the O-U process, then I get
$$ X_{t+1} - X_t = \theta (\mu - X_t) \delta t + \sigma \sqrt{\delta t} Z_{t+1} $$
or, after rearranging, $$ X_{t+1} = \theta \mu \delta t + (1- \theta \delta t) X_t + \sigma \sqrt{\delta t} Z_{t+1} .$$
Comparing this with the AR(1) form above, I see that $\theta'= \theta \delta t - 1$ and $\sigma' = \sigma \sqrt{\delta t}$, so that on substituting into $(3)$ I get
$$ Cov(X(s),X(t)) = \frac{(\sigma')^2 /\delta t}{2(1+\theta')/\delta t}e^{-\theta|n|} = \frac{(\sigma')^2}{2(1+\theta')}e^{-\theta|n|}\quad \quad (4)$$
but it still does not have the form of $(2)$.
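For reference, a quick numerical check (a sketch, with arbitrary $\theta$ and $\sigma$) of how the Euler coefficients behave when substituted into the AR(1) variance formula $(2)$: the resulting stationary variance $\sigma^2\delta t / \bigl(1-(1-\theta\delta t)^2\bigr) = \sigma^2/(2\theta - \theta^2\delta t)$ matches $\sigma^2/2\theta$ only in the limit $\delta t \to 0$.

```python
import numpy as np

# Euler coefficients of the discretised O-U process (theta, sigma arbitrary)
theta, sigma = 1.0, 1.0
for dt in (0.5, 0.1, 0.01, 0.001):
    phi = 1 - theta * dt            # coefficient of X_t in the Euler scheme
    s2 = sigma**2 * dt              # Euler innovation variance (sigma')^2
    ar1_var = s2 / (1 - phi**2)     # stationary AR(1) variance, i.e. (2) at n = 0
    print(dt, ar1_var)

print(sigma**2 / (2 * theta))       # continuous-time stationary variance from (3)
```

This makes visible that the Euler discretisation only reproduces the O-U covariance approximately, with an $O(\delta t)$ error.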
Instead of using the discretization, you can use the continuous time solution. Following the substitutions from this answer, we have that
$$\begin{align} (\sigma')^2 &= \frac{\sigma^2}{2\theta}(1-e^{-2\theta\delta t})\\ \theta' &= e^{-\theta\delta t} \end{align}$$
which, when applied to your formula for the AR(1) covariance, yields
$$ \frac{(\sigma')^2}{1-(\theta')^2}(\theta')^{|n|} = \frac{\sigma^2}{2\theta}\,\frac{1-e^{-2\theta\delta t}}{1-e^{-2\theta\delta t}}\left(e^{-\theta\delta t}\right)^{|n|} = \frac{\sigma^2}{2\theta}e^{-\theta\delta t|n|} $$
Now, identifying the continuous-time lag as $t-s = n\,\delta t$, you should have your answer.
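The cancellation above can also be verified numerically (a sketch; the parameter values are arbitrary): with the exact substitutions, the AR(1) covariance $(2)$ and the stationary O-U covariance agree to machine precision at every lag.

```python
import numpy as np

theta, sigma, dt = 0.7, 1.3, 0.05       # arbitrary test values
phi = np.exp(-theta * dt)               # theta'
s2 = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))  # (sigma')^2

for n in range(0, 40, 7):
    ar1_cov = s2 / (1 - phi**2) * phi**n                       # formula (2)
    ou_cov = sigma**2 / (2 * theta) * np.exp(-theta * dt * n)  # stationary O-U covariance
    assert abs(ar1_cov - ou_cov) < 1e-12
print("formulas agree")
```

Because the substitution is exact (it comes from the closed-form O-U transition, not from an Euler step), there is no $O(\delta t)$ error here.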
I think that, because your derivation depends on an approximation (i.e. the discretisation), your answer is only an approximation.
There are two different covariances: one conditional (usually on the value at time $0$) and one unconditional. The Wikipedia formula is the conditional one:
$$\begin{align} cov(x_s, x_t) &= E((x_s - E(x_s))(x_t-E(x_t))) \\&= E\left(\int_{\color{red}0}^s\sigma e^{\theta(u-s)}\,\mathrm d W_u \int_{\color{red}0}^t\sigma e^{\theta(v-t)}\,\mathrm d W_v\right) \\&= \sigma^2 e^{-\theta(s+t)}E\left(\int_{\color{red}0}^s e^{\theta u}\,\mathrm d W_u \int_{\color{red}0}^t e^{\theta v}\,\mathrm d W_v\right) \\&= \frac{\sigma^2}{2\theta} e^{-\theta(s+t)}(e^{2\theta \min(s,t)} \color{red}{- 1}) \\&= \frac{\sigma^2}{2\theta}(e^{-\theta|s-t|} \color{red}{- e^{-\theta(s+t)}}) \end{align}$$
Changing the lower limit to $-\infty$, i.e. starting the process in its stationary distribution, causes the red terms to vanish.
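As a sanity check, the conditional formula $(1)$ can be verified by Monte Carlo: simulate many paths started from a fixed $X(0)=x_0$ and compare the across-path covariance of $X(s)$ and $X(t)$ with $(1)$. This is only a sketch, assuming $\mu = 0$ as in the derivation above and using the exact one-step transition; all parameter values are arbitrary.

```python
import numpy as np

theta, sigma, x0 = 1.0, 1.0, 0.3
dt, n_paths = 0.01, 200_000
i_s, i_t = 50, 100                        # so s = 0.5 and t = 1.0 below
rng = np.random.default_rng(1)

phi = np.exp(-theta * dt)                 # exact one-step O-U transition
sd = sigma * np.sqrt((1 - phi**2) / (2 * theta))

x = np.full(n_paths, x0)                  # every path starts at x0 (conditioning)
xs = None
for k in range(1, i_t + 1):
    x = phi * x + sd * rng.normal(size=n_paths)
    if k == i_s:
        xs = x.copy()                     # snapshot at time s
xt = x                                    # snapshot at time t

s, t = i_s * dt, i_t * dt
mc_cov = np.mean((xs - xs.mean()) * (xt - xt.mean()))
theory = sigma**2 / (2 * theta) * (np.exp(-theta * (t - s)) - np.exp(-theta * (t + s)))
print(mc_cov, theory)
```

The across-path (ensemble) covariance matches $(1)$, whereas the time-average covariance of a single long path matches the stationary expression, which is exactly the conditional/unconditional distinction.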