I would like to compute $\int |dW|$ where $W$ is a Wiener process. I know that this is a divergent integral by the following argument:
$$ 1 = \sum |W_{t_{i+1}}-W_{t_{i}}|^2 \le \max_i|W_{t_{i+1}}-W_{t_{i}}|\times\sum |W_{t_{i+1}}-W_{t_{i}}| $$
where the $t_i$ form a partition of $[0,1]$, the left-hand side is the quadratic variation of $W$ on $[0,1]$, and $\max_i|W_{t_{i+1}}-W_{t_{i}}|$ tends to $0$ as the mesh $\max_i(t_{i+1}-t_i)$ tends to $0$, which forces the sum on the right to diverge. The divergent sum can be interpreted as the arc length of the Brownian motion curve. However, I would like to be able to compute it explicitly.
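To make this concrete, here is a quick numerical sketch (Python with NumPy; the grid sizes and the seed are arbitrary choices, not part of the argument): refining the partition of a single simulated path makes the first-variation sum blow up roughly like $\sqrt{n}$, while the quadratic variation stays near $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one Brownian path on [0, 1] on a fine grid, then measure the
# first and quadratic variation on coarser and coarser sub-grids of it.
n_fine = 2**20
dW = rng.normal(0.0, np.sqrt(1.0 / n_fine), size=n_fine)
W = np.concatenate(([0.0], np.cumsum(dW)))

first_vars, quad_vars = [], []
for k in (8, 12, 16, 20):
    n = 2**k
    inc = np.diff(W[:: n_fine // n])      # increments on an n-point grid
    first_vars.append(np.abs(inc).sum())  # grows roughly like sqrt(n)
    quad_vars.append((inc**2).sum())      # stays close to T = 1
    print(n, first_vars[-1], quad_vars[-1])
```

The same path is used at every resolution, so the growth of the first column is purely an effect of the mesh going to zero.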
In addition, this seems a little counterintuitive to me because, on the other hand, I have computed $E(|dW|)$ and the result is finite. Moreover, it is not hard to prove that $\int_0^T dW = W_T$ (also convergent). This leads to the conclusion that taking the modulus has a deep impact on the computation.
Brownian motion has infinite first variation on any interval with probability $1$; that is, the sum $$ \sum |W_{t_{i+1}}-W_{t_i}| $$ diverges as $\max_i{|t_{i+1} - t_i|} \to 0$.
So if you use that as the definition of arc length, it is always infinite.
However, the so-called second variation, or quadratic variation, that you wrote down is always finite: $$ \sum |W_{t_{i+1}}-W_{t_i}|^2 \to T \textrm{ (in probability) as } \max_i{|t_{i+1} - t_i|} \to 0, $$ where $T=b-a$ is the length of the interval $[a,b]$ and $a=t_0 < t_1 < \ldots < t_N = b$ is the partition you are summing over.
I guess you can see that as some sort of generalized length of the second order. At least I like to see it that way in practical calculations.
Edit: Ok, I think I know what you mean now. You know that the first variation of the Brownian motion is infinite, but at the same time you know that the increment $$ |\Delta W| = |W_{t_{i+1}} - W_{t_{i}}| $$ is distributed like $$ |W_{t_{i+1}} - W_{t_{i}}| \sim |N(0, \Delta t)| = |Z| \cdot \sqrt{\Delta t}, $$ where $\Delta t = t_{i+1} - t_{i}$ and $Z \sim N(0, 1)$. So you get that $$ E[|W_{t_{i+1}} - W_{t_{i}}|] = C \sqrt{\Delta t}, $$ for the constant $C = E[|Z|] = \sqrt{2/\pi}$. Therefore you wonder why you can't calculate the first variation as the sum of these $C\sqrt{\Delta t}$ terms over the partition?
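A quick Monte Carlo sketch of this identity (Python/NumPy; the sample sizes and seed are arbitrary choices) checks that the empirical mean of $|\Delta W|$ matches $C\sqrt{\Delta t}$ with $C = \sqrt{2/\pi}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# E[|W_{t+dt} - W_t|] should equal C * sqrt(dt) with C = E[|Z|] = sqrt(2/pi).
C = np.sqrt(2.0 / np.pi)
emp, theory = [], []
for dt in (1.0, 0.01, 1e-4):
    samples = np.abs(rng.normal(0.0, np.sqrt(dt), size=1_000_000))
    emp.append(samples.mean())        # empirical E[|increment|]
    theory.append(C * np.sqrt(dt))    # predicted C * sqrt(dt)
    print(dt, emp[-1], theory[-1])
```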
In general you are not allowed to move the expectation inside the limit. We have $$ E[\lim_{\Delta t \to 0} \sum |W_{t_{i+1}}-W_{t_i}|] = E[\infty] = \infty. $$ If we cheat and still put the expectation inside the limit, we get $$ \lim_{\Delta t \to 0} E[\sum |W_{t_{i+1}}-W_{t_i}|] = \lim_{\Delta t \to 0} \sum E[|W_{t_{i+1}}-W_{t_i}|] = \lim_{\Delta t \to 0} C \cdot \sum \sqrt{\Delta t}. $$ However, this too goes to $\infty$, from what I can see. Put $\Delta t=1/n$ and assume for simplicity that the interval in question is $[0,1]$; then the last limit is $$ \lim_{\Delta t \to 0} C \cdot \sum_{i=1}^{n} \sqrt{\Delta t} = \lim_{n \to \infty} C \cdot n \cdot \sqrt{\frac{1}{n}} = \lim_{n \to \infty} C \sqrt{n} = \infty. $$ So in this case we get infinity even if we cheat and put the expectation inside the limit (which is not allowed in general). So I still don't get how you get a finite value.
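The divergence of $E[\sum |W_{t_{i+1}}-W_{t_i}|] = C\sqrt{n}$ can also be seen numerically; here is a rough Monte Carlo sketch (Python/NumPy; the grid and path counts are arbitrary choices) on $[0,1]$:

```python
import numpy as np

rng = np.random.default_rng(3)

C = np.sqrt(2.0 / np.pi)  # C = E[|Z|] for Z ~ N(0, 1)
means = []
for n in (100, 10_000):
    # 1000 independent paths, each with n increments ~ N(0, 1/n)
    inc = rng.normal(0.0, np.sqrt(1.0 / n), size=(1000, n))
    means.append(np.abs(inc).sum(axis=1).mean())
    print(n, means[-1], C * np.sqrt(n))  # empirical mean vs C * sqrt(n)
```

Doubling the number of grid points multiplies the expected sum by $\sqrt{2}$, so the expectation itself diverges: a finite $E[|dW|]$ per increment does not give a finite sum.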