Within the lecture notes provided for a module on stochastic modeling, the following statement may be found:
Let $S_0$ denote the spot price of an asset at time $t=0$ and let $S_\delta^+ = S_0 e^{\mu \delta + \sigma \sqrt{\delta}}$ and $S_\delta^- = S_0 e^{\mu \delta - \sigma \sqrt{\delta}}$ be the two possible values that the asset price may take at time $\delta$. Let the price take values $S_\delta^+$ and $S_\delta^-$ with probability $0.5$ each. Then \begin{equation*} \mathbb{E}[\log(S_\delta) - \log(S_0)] = \mu \delta \hspace{10mm} (1) \end{equation*} where $\mu$ is known as the drift of the process.
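For reference, my derivation of $(1)$ simply averages the two equally likely log-returns:
$$\mathbb{E}[\log(S_\delta) - \log(S_0)] = \tfrac{1}{2}\left(\mu \delta + \sigma \sqrt{\delta}\right) + \tfrac{1}{2}\left(\mu \delta - \sigma \sqrt{\delta}\right) = \mu \delta.$$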
I have no problem deriving $(1)$. However, Wikipedia states that the drift of such a process refers to its "expected return". Based on this, I would expect to have $$ \mathbb{E}[S_\delta - S_0] = \mu \delta $$
So where has the $\log$ term come from in $(1)$?
Actually, it is the relative return that is considered:
$$\frac{S_{\delta}-S_0}{S_0}$$
When you compare this to the log return
$$\log S_{\delta} - \log S_0 = \log\left(\frac{S_{\delta}}{S_0}\right) = \log\left(1+\frac{S_{\delta}-S_0}{S_0}\right)$$
and using the first-order Taylor expansion $\log(1+x)\approx x$, you see that the two are approximately equal when the relative return is small.
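As a quick numerical sanity check (a minimal sketch; the parameter values below are arbitrary and not from the question), one can verify that the expected log-return equals $\mu\delta$ exactly for the two-point distribution, and that the log return and relative return agree to first order:

```python
import math

# Arbitrary illustrative parameters (assumptions, not from the original post)
S0, mu, sigma, delta = 100.0, 0.05, 0.2, 1 / 252

# The two equally likely prices at time delta
S_up = S0 * math.exp(mu * delta + sigma * math.sqrt(delta))
S_dn = S0 * math.exp(mu * delta - sigma * math.sqrt(delta))

# Expected log-return: 0.5*log(S_up/S0) + 0.5*log(S_dn/S0) = mu*delta exactly,
# since the +/- sigma*sqrt(delta) terms cancel
exp_log_ret = 0.5 * math.log(S_up / S0) + 0.5 * math.log(S_dn / S0)
print(abs(exp_log_ret - mu * delta) < 1e-12)  # True

# Log return vs relative return: the gap is O(x^2) by log(1+x) ~ x - x^2/2
x = (S_up - S0) / S0            # relative return
log_ret = math.log(S_up / S0)   # log return
print(abs(log_ret - x) < x**2)  # True
```

The second check makes the Taylor argument concrete: for a daily-sized move the two return measures differ only at second order in the relative return.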