How do I compute the following six expectations under the stated conditions:
(i) $E\left[W_s \Big| \mathcal{F}_t \right]$, where t > s
(ii) $E\left[W_s \Big| \mathcal{F}_t \right]$, where t < s
(iii) $E\left[W_s \Big| W_t \right]$, where t > s
(iv) $E\left[W_s \Big| W_t \right]$, where t < s
(v) $E\left[\int_{0}^{s} W_udu\Big|W_t\right]$, where t > s
(vi) $E\left[\int_{0}^{s} W_udu\Big|W_t\right]$, where t < s
Here $W_t$ is a standard Brownian motion and $\{\mathcal{F}_t, t\ge 0\}$ denotes its standard filtration.
EDIT:
From @SpettroDiA's hints:
(i) $=W_s$
(ii) $=W_t$
(iii) See @SpettroDiA's answer
(iv) $=E\left[W_s-W_t \Big| \mathcal{F}_{W_t}\right]+ E\left[W_t \Big| \mathcal{F}_{W_t}\right] = 0 + W_t = W_t$
(v)
(vi)
Any hints on how to approach (v) and (vi)?
I'll give you some hints:
(i) Remember that if $X$ is $\Sigma$-measurable, then $E(X | \Sigma)=X$.
(ii) The Wiener process is a martingale with respect to its standard filtration.
(iii) / (iv) Recall the definition of conditional expectation with respect to a random variable, $E(X \mid Y)= E(X \mid \sigma(Y))$; hence $E(W_t \mid W_s)= E(W_t \mid \mathcal{F}_{W_s})$ for every $t,s$, where $\mathcal{F}_{W_s} := \sigma(W_s)$. Furthermore, if $X$ is independent of $\Sigma$, then $E(X \mid \Sigma)=E(X)$. With this in mind, exploit the increments (on which we have a lot of information): for instance, if $t < s$,
$$ E(W_s | W_t )=E(W_s | \mathcal{F}_{W_t}) = E(W_s-W_t | \mathcal{F}_{W_t})+ E(W_t | \mathcal{F}_{W_t}) = 0 + W_t=W_t $$
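If you want a numerical sanity check of $E(W_s \mid W_t)=W_t$ for $t<s$, you can condition "by hand": keep only the sample paths on which $W_t$ lands near a fixed value and average $W_s$ over that event. (The values $t=0.4$, $s=1$, the target $0.5$, the bin width and the sample size below are arbitrary choices for the demo.)

```python
import random

# Monte Carlo illustration of E(W_s | W_t) = W_t for t < s, by conditioning
# "by hand": restrict to paths with W_t near a fixed value and compare the
# average of W_s on that event with the value itself.
random.seed(0)
t, s, n = 0.4, 1.0, 400_000
target, width = 0.5, 0.05

sum_ws, count = 0.0, 0
for _ in range(n):
    w_t = random.gauss(0.0, t ** 0.5)                # W_t ~ N(0, t)
    w_s = w_t + random.gauss(0.0, (s - t) ** 0.5)    # independent increment
    if abs(w_t - target) < width:
        sum_ws += w_s
        count += 1

cond_mean = sum_ws / count   # estimate of E(W_s | W_t ≈ 0.5); close to 0.5
print(round(cond_mean, 2))
```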
(v)/(vi) Over which interval are these integrals taken? Maybe you want to consider the conditional expectation of the process $Y_t= \int_{0}^{t} W_r \, dr $, interchanging the conditional expectation with the time integral (Fubini) and using the answers to (iii)/(iv).
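To make the hint concrete for (v) (the case $t>s$): Fubini gives $E\left[\int_0^s W_u\,du \,\middle|\, W_t\right]=\int_0^s E[W_u \mid W_t]\,du$, and plugging in the answer to (iii), $E[W_u \mid W_t]=\frac{u}{t}W_t$ for $u<t$, suggests the candidate $\frac{s^2}{2t}W_t$. Since the pair $\left(\int_0^s W_u\,du,\, W_t\right)$ is centered Gaussian, the conditional mean is the regression line, so a Monte Carlo estimate of its slope should be close to $\frac{s^2}{2t}$ ($=0.125$ for the demo values $s=0.5$, $t=1$ below):

```python
import random

# Monte Carlo check of the candidate E[int_0^s W_u du | W_t] = (s^2/2t) W_t
# for s < t: estimate the regression slope of the integral on W_t.
random.seed(1)
s, t = 0.5, 1.0          # s < t, arbitrary demo values
m, n = 100, 10_000       # time steps per path, number of paths
dt = t / m
k = m // 2               # grid index of time s = t/2

num = den = 0.0
for _ in range(n):
    w, path = 0.0, [0.0]
    for _ in range(m):
        w += random.gauss(0.0, dt ** 0.5)
        path.append(w)
    # trapezoidal approximation of int_0^s W_u du over grid points 0..k
    integral = dt * (sum(path[1:k]) + 0.5 * (path[0] + path[k]))
    num += integral * w      # w equals W_t after the inner loop
    den += w * w

slope = num / den            # should be close to s**2 / (2 * t) = 0.125
print(round(slope, 3))
```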
Hope this helps, let me know if you need a deeper discussion.
As pointed out in a comment, (iii) requires a different approach. (Note the relabeling: below I compute $E(W_t \mid W_s)$ with $t < s$, which is (iii) with the roles of $s$ and $t$ exchanged.) Let $\phi(x)=E(W_t \mid W_s = x)$; we want an explicit form for $\phi$, to be eventually evaluated at $W_s$. $$ \phi(x)=\int y\, \mathbb{P}(W_t \in dy \mid W_s=x ) $$
Now recall that, for $t<s$, by definition of Brownian motion (independent, stationary, Gaussian increments):
$$ \mathbb{P}(W_t \in J, W_s \in I )=\int_{I} \int_{J} g_t(y)\, g_{s-t}(x-y) \, dy \, dx $$
where $I,J$ are Borel sets of $\mathbb{R}$ and $g_t(x)=(2 \pi t)^{-1/2} \exp\left( -\frac{x^2}{2t} \right)$ is the centered Gaussian density with variance $t$. Furthermore, the definition of conditional probability leads (after some routine algebra, completing the square in $y$) to the following density identity: \begin{align*} \mathbb{P}(W_t \in dy \mid W_s=x) & =\frac{\mathbb{P}(W_t \in dy, W_s \in dx)}{\mathbb{P}(W_s \in dx)}=\frac{g_t(y)g_{s-t}(x-y)\,dx\, dy}{g_s(x)\, dx} \\ & = g_{\frac{t}{s}(s-t)}\left( y-\frac{tx}{s} \right) dy \end{align*} Hence we recognize that $X_x= (W_t \mid W_s=x) \sim \mathcal{N}\left(\frac{tx}{s},\frac{t}{s}(s-t) \right)$. In conclusion:
$$ E(W_t | W_s )=\phi(x) \bigg\rvert_{x=W_s} = \int y g_{\frac{t}{s}(s-t)}\left(y-\frac{tW_s}{s}\right) \, dy = \frac{t}{s} W_s $$
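This conclusion is easy to check numerically: $(W_t, W_s)$ is centered Gaussian, so $E(W_t \mid W_s)$ is linear in $W_s$ with slope $\operatorname{Cov}(W_t,W_s)/\operatorname{Var}(W_s)=t/s$, and a least-squares estimate of that slope should match. (The values $t=0.4$, $s=1$ and the sample size are arbitrary demo choices.)

```python
import random

# Monte Carlo sanity check of E(W_t | W_s) = (t/s) W_s for t < s:
# estimate the regression slope of W_t on W_s, which should be t/s.
random.seed(2)
t, s, n = 0.4, 1.0, 200_000

num = den = 0.0
for _ in range(n):
    w_t = random.gauss(0.0, t ** 0.5)                # W_t ~ N(0, t)
    w_s = w_t + random.gauss(0.0, (s - t) ** 0.5)    # independent increment
    num += w_t * w_s                                 # accumulates Cov(W_t, W_s)
    den += w_s * w_s                                 # accumulates Var(W_s)

slope = num / den    # should be close to t / s = 0.4
print(round(slope, 3))
```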
Interesting remark
If we set $s=1 $ and $x=0$ we obtain:
$$ X_0=(W_t | W_1=0) \sim \mathcal{N}(0,t(1-t)) \ \ \text{with} \ \ t \in [0,1] $$
which is a Brownian motion started at zero and conditioned to end at $W_1=0$: the well-known "Brownian bridge". See https://en.wikipedia.org/wiki/Brownian_bridge for more details and further connections with part (iii) of this exercise.
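The remark can also be checked by simulation. One classical realization of the bridge (not derived above, but standard) is $B_t = W_t - t\,W_1$; its variance at time $t$ should come out as $t(1-t)$. (The values $t=0.3$ and the sample size below are arbitrary demo choices.)

```python
import random

# Numerical check of the remark: the Brownian bridge at time t has
# variance t(1 - t).  Uses the classical representation B_t = W_t - t*W_1.
random.seed(3)
t, n = 0.3, 200_000

acc = 0.0
for _ in range(n):
    w_t = random.gauss(0.0, t ** 0.5)                # W_t ~ N(0, t)
    w_1 = w_t + random.gauss(0.0, (1 - t) ** 0.5)    # W_1 via the increment
    b = w_t - t * w_1                                # bridge value at time t
    acc += b * b                                     # E[B_t] = 0, so this estimates Var(B_t)

var = acc / n    # should be close to t * (1 - t) = 0.21
print(round(var, 3))
```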