I know the following statement:
if $f$ is a deterministic continuous function, i.e. $f\in C^0([0,T],\mathbb{R})$, then $\int_0^T f(s)\,dW_s$ is normally distributed with mean zero and variance $\int_0^T f^2(s)\,ds$.
Now my first question: does this result remain true if we drop the continuity assumption, i.e. does it hold for $f\in L^2([0,T])$? Can I deduce this from the continuous case? If not, how does one prove it?
My second question is the following: suppose $\gamma$ is some nice process, i.e. predictable and locally bounded, so that $\int \gamma_s\, dW_s$ makes sense and is a local martingale. Are there assumptions on $\gamma$ under which $E[\int \gamma_s\, dW_s]=0$? Even more, is the distribution of this integral known? I guess that to know the distribution one would at least have to know the interplay between $\gamma$ and $W$. It seems that in general it is rather hard (if not impossible) to say anything about the distribution of $\int\gamma\, dW$, since both the integrator and the integrand are stochastic.
Of course if $\gamma \in L^2(W)$, i.e. $E[\int \gamma^2_s\, ds]<\infty$, then $\int \gamma_s\, dW_s$ is a true (square-integrable) martingale starting at $0$, so $E[\int \gamma_s\, dW_s]=0$ is immediate. I'm therefore interested in the nontrivial case.
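To see that a stochastic integrand can preserve the mean while destroying Gaussianity, consider the classical example $\gamma_s = W_s$: Itô's formula gives $\int_0^T W_s\,dW_s = (W_T^2 - T)/2$, which has mean zero but is a (shifted, scaled) $\chi^2_1$ variable, not Gaussian. A quick NumPy simulation (a sanity check only; the grid size, horizon, and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 1_000, 1.0
dt = T / n_steps

# Brownian increments and paths
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # left endpoints (Ito convention)

# Ito integral of W dW, approximated by left-point Riemann sums
ito = np.sum(W_left * dW, axis=1)

# Ito's formula gives the closed form (W_T^2 - T)/2
closed = (W[:, -1] ** 2 - T) / 2

print(ito.mean())                  # close to 0, as the martingale property predicts
print(np.abs(ito - closed).max())  # small discretization error

# Strong positive skewness: the law is a shifted chi-squared, not a Gaussian
z = ito - ito.mean()
print((z**3).mean() / ito.std()**3)
```

The sample skewness comes out near $\sqrt{8}\approx 2.8$ (the skewness of $\chi^2_1$), far from the $0$ of a normal law, even though the sample mean is essentially zero.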
Let $f \in L^2([0,T])$ and let $\{f_n\}$ be a sequence in $C([0,T])$ converging to $f$ in $L^2([0,T])$. Then $$ I_n = \int_0^T f_n(t)\,dW_t $$ defines a Cauchy sequence in $L^2(\Omega)$ (hence convergent), since by the Itô isometry $$E\left[(I_n - I_m)^2\right] = E\left[\left(\int_0^T (f_n(t)-f_m(t))\,dW_t\right)^2\right] = \int_0^T (f_n(t)-f_m(t))^2\,dt.$$
Notice that, by definition, $$ \int_0^T f(t)\, dW_t = \lim_{n\to\infty} I_n, $$ where the limit is taken in $L^2(\Omega)$. Each $I_n$ is Gaussian with mean zero and variance $\int_0^T f_n^2(t)\,dt$, and an $L^2$ limit of centered Gaussian random variables is again centered Gaussian, with variance the limit of the variances. Hence $\int_0^T f(t)\,dW_t \sim N\!\left(0, \int_0^T f^2(t)\,dt\right)$, which answers your first question.
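As a numerical sanity check (not part of the proof), one can simulate the Wiener integral of a genuinely discontinuous $f \in L^2([0,1])$, say a step function, and verify mean zero, variance $\int_0^1 f^2$, and Gaussian tails; the particular step function, grid, and seed below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 200_000, 1_000, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps, endpoint=False)  # left endpoints of the grid cells

# A discontinuous f in L^2([0,1]): f = 1 on [0, 1/2), f = 3 on [1/2, 1]
f = np.where(t < 0.5, 1.0, 3.0)
var_theory = 0.5 * 1.0 + 0.5 * 9.0  # integral of f^2 over [0,1] equals 5

# Wiener integral as a sum of f(t_i) * Brownian increments
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
I = dW @ f

print(I.mean())  # close to 0
print(I.var())   # close to var_theory = 5

# Fourth standardized moment of a Gaussian is 3
z = (I - I.mean()) / I.std()
print((z**4).mean())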