Let $(B_t)_{t \geq 0}$ be a Brownian motion on $\mathbb R^n$ and let $\ell \in \mathbb S$, where $\mathbb S$ is the space of all increasing càdlàg functions from $(0,\infty)$ to $(0,\infty)$ with $\lim_{s\downarrow 0} \ell_s = 0$. Then the quadratic variation of $(B_{\ell_t})_{t\geq 0}$ is given by $$[B_{\ell}]_t = \ell_t - \sum_{0<s\leq t}\Delta \ell_s + \sum_{0<s\leq t} | \Delta B_{\ell_s} |^2.$$
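To see the formula in action, here is a small Monte Carlo sketch (an illustration only, not part of any proof) for the one-dimensional case with the hypothetical time change $\ell_t = t + \mathbf 1_{\{t \geq 1/2\}}$, whose only jump is $\Delta \ell_{1/2} = 1$; the formula then predicts $[B_\ell]_1 = \ell_1 - 1 + (B_{3/2} - B_{1/2})^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time change: l(t) = t + 1_{t >= 1/2}, a single jump of height 1.
def ell(t):
    return t + (np.asarray(t) >= 0.5)

# One Brownian path, sampled on a fine grid of the image time interval [0, 2].
n, T = 1_000_000, 2.0
dt = T / n
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

def B_at(u):
    # nearest-grid-point evaluation of the simulated path
    return B[np.rint(np.asarray(u) / dt).astype(int)]

# Approximate [B_l]_1 by summed squared increments over a partition of [0, 1].
t_part = np.linspace(0.0, 1.0, 100_001)
qv = np.sum(np.diff(B_at(ell(t_part))) ** 2)

# Prediction: l_1 minus the jump of l plus the squared jump of B_l.
predicted = ell(1.0) - 1.0 + (B_at(1.5) - B_at(0.5)) ** 2
```

The partition interval straddling $t = 1/2$ contributes roughly $(B_{3/2} - B_{1/2})^2$, while the remaining intervals accumulate the continuous mass $\ell_1 - \Delta \ell_{1/2} = 1$, so `qv` and `predicted` agree up to Monte Carlo noise.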
Heuristically, this is clear: the square bracket splits into a continuous part and a jump part. The jumps of $B_\ell$ account for the third summand, while the continuous part should be the continuous part of $\ell$; since $\ell$ is càdlàg, this means $[B_\ell]_t^c = \ell_t - \sum_{0<s\leq t} \Delta \ell_s$, i.e. the first two summands.
My thoughts so far: by definition, the quadratic variation of a semimartingale $X$ with $X_0 = 0$ is given by $$[X]_t = X_t^2 - 2\int_0^t X_{s-} \ \mathrm dX_s.$$ We have to calculate the stochastic integral for $X_t := B_{\ell_t}$. Let $\Pi_k = \{0 = s_0^k \leq s_1^k \leq \ldots \leq s_{n(k)}^k = t\}$ be partitions of the interval $[0,t]$ with mesh tending to $0$. Then $$\sum_j B_{\ell(s_j)} (B_{\ell(s_{j+1})} - B_{\ell(s_j)}) \overset{ucp}\longrightarrow \int_0^t B_{\ell_{s-}} \ \mathrm d B_{\ell_s}.$$
If there are reasonable index sets $A$ and $B$ (say, $A$ the intervals containing a large jump of $\ell$ and $B$ the remaining ones), then using the elementary equality $b^2 - a^2 - (b-a)^2 = 2a (b-a)$, we find \begin{align*} 2 \sum_j B_{\ell(s_j)} (B_{\ell(s_{j+1})} - B_{\ell(s_j)}) &= \sum_j (B_{\ell(s_{j+1})}^2 - B_{\ell(s_j)}^2) - \sum_j (B_{\ell(s_{j+1})} - B_{\ell(s_j)})^2\\ &= \underbrace{ \sum_j (B_{\ell(s_{j+1})}^2 - B_{\ell(s_j)}^2) }_{= \ B_{\ell_t}^2} - \Bigg( \underbrace{ \sum_{j \in B} (B_{\ell(s_{j+1})} - B_{\ell(s_j)})^2 }_{\overset{ucp}{\longrightarrow} \ \ell_t - \sum_{0<s\leq t} \Delta \ell_s} + \sum_{j \in A} (B_{\ell(s_{j+1})} - B_{\ell(s_j)} )^2 \Bigg)\\ &\overset{k\to \infty}\longrightarrow B_{\ell_t}^2 - \Big( \ell_t - \sum_{0<s\leq t} (\ell_s - \ell_{s-}) \Big) - \sum_{0<s\leq t} (B_{\ell_s} - B_{\ell_{s-}} )^2 \\ &= B_{\ell_t}^2 - \ell_t + \sum_{0<s\leq t} \Delta \ell_s - \sum_{0<s\leq t} \Delta B_{\ell_s}^2. \end{align*}
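Before any limits are taken, the first equality above is a purely algebraic fact: summing $b^2 - a^2 - (b-a)^2 = 2a(b-a)$ over a partition and telescoping gives an identity that holds exactly for every discrete path. A throwaway numerical sanity check (NumPy used only for array arithmetic):

```python
import numpy as np

rng = np.random.default_rng(1)
# An arbitrary discrete path x with x_0 = 0, standing in for (B_{l(s_j)})_j.
x = np.concatenate(([0.0], np.cumsum(rng.normal(size=1_000))))

left_sum = np.sum(x[:-1] * np.diff(x))   # sum_j x_j (x_{j+1} - x_j)
sq_incr = np.sum(np.diff(x) ** 2)        # sum_j (x_{j+1} - x_j)^2

# Telescoping: sum_j (x_{j+1}^2 - x_j^2) = x_n^2, hence
# 2 * left_sum = x_n^2 - sq_incr, exactly (up to float rounding).
```

All convergence questions therefore concern only the two squared-increment sums, not this identity.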
Is this decomposition of the sums possible? My idea was something like $$J(\epsilon) := \{s \in (0,t] : |\Delta X_s| > \epsilon\}.$$ Since $s \mapsto X_s$ is càdlàg, $J(\epsilon)$ is a.s. finite for each $\epsilon > 0$, and $\sum_{s \in J(\epsilon)} \overset{\epsilon \to 0}\longrightarrow \sum_{0<s\leq t}$ a.s.
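The mechanism behind this approximation can be illustrated on a hypothetical jump structure with infinitely many summable jumps: each threshold $\epsilon$ singles out only finitely many jumps, and the truncated sums increase to the full sum as $\epsilon \downarrow 0$:

```python
# Hypothetical jump sizes 2^{-k}: infinitely many jumps, finite total mass.
# (Truncated at k = 59, below double-precision resolution of the total.)
jump_sizes = [2.0 ** -k for k in range(1, 60)]

def over_threshold(eps):
    """Jumps with size exceeding eps -- the analogue of J(eps)."""
    return [s for s in jump_sizes if s > eps]

total = sum(jump_sizes)                  # = 1 - 2^{-59}, essentially 1
partial = [sum(over_threshold(10.0 ** -m)) for m in range(1, 12)]
counts = [len(over_threshold(10.0 ** -m)) for m in range(1, 12)]
```

Each `over_threshold(eps)` is a finite set, `partial` is nondecreasing in the threshold index, and it converges to `total`.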
Is there an easier way? Thanks.
I think it is not obvious that the third and fourth term (in your calculation) converge to their prospective limits; however, the idea of your proof is basically correct.
Fix $t>0$. By definition,
$$[B_{\ell}]_t = \mathbb{P}-\lim_{|\Pi| \to 0} \sum_{j=1}^n (B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}})^2;$$
here $\Pi = \{0=t_0<\ldots<t_n = t\}$ denotes a partition of the interval $[0,t]$ with mesh size $|\Pi| := \max_j |t_j-t_{j-1}|$. For fixed $\epsilon>0$ we denote by $S = S(\epsilon) := \{s_1<s_2<\ldots<s_m\}$ the jump times of $\ell$ up to time $t$ with jump height at least $\epsilon$. (Since $\ell$ is càdlàg, there are only finitely many such jumps.) It follows directly from the continuity of the sample paths of Brownian motion and the fact that $|S|<\infty$ that
\begin{equation} \sum_{j: (t_{j-1},t_j] \cap S \neq \emptyset} (B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}})^2 \xrightarrow[]{|\Pi| \to 0} \sum_{s \leq t, \Delta \ell_s \geq \epsilon} \Delta B_{\ell_s}^2 \tag{1} \end{equation}
almost surely. Using exactly the same argument as for the "standard" quadratic variation of Brownian motion (that is, $\ell(t)=t$), we find
$$\mathbb{E} \left( \left| \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}})^2- \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (\ell_{t_j}-\ell_{t_{j-1}}) \right|^2 \right) \leq C \sup_{j: (t_{j-1},t_j] \cap S = \emptyset} (\ell_{t_j} - \ell_{t_{j-1}})$$ for some constant $C>0$ which does not depend on $\Pi$ and $\epsilon$; the idea is basically to use the scaling property of Brownian motion, for more details see e.g. René Schilling & Lothar Partzsch: Brownian Motion – An Introduction to Stochastic Processes, Theorem 9.1. Consequently,
\begin{equation} \limsup_{|\Pi| \to 0} \mathbb{E} \left( \left| \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}})^2- \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (\ell_{t_j}-\ell_{t_{j-1}}) \right|^2 \right)\leq C \epsilon. \tag{2} \end{equation}
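For orientation, the "standard" case $\ell(t)=t$ behind this $L^2$ bound is easy to simulate: the sum of squared increments concentrates around $t$, with $\mathbb{E}|{\textstyle\sum_j} (B_{t_j}-B_{t_{j-1}})^2 - t|^2 = 2t^2/n$ for an equidistant partition into $n$ pieces. A quick illustration with a fixed seed (not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
t = 1.0

# Squared-increment sums of a Brownian path on [0, t] for two mesh sizes.
qv_by_mesh = {}
for n in (100, 1_000_000):
    incr = rng.normal(0.0, np.sqrt(t / n), size=n)   # B_{t_j} - B_{t_{j-1}}
    qv_by_mesh[n] = np.sum(incr ** 2)

# Each increment^2 has variance 2 (t/n)^2, so the sum has variance 2 t^2 / n:
# the finer the mesh, the tighter the concentration around t.
```

This is exactly the behaviour the estimate $(2)$ captures on the intervals that avoid the large jumps of $\ell$.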
Moreover, we note that
\begin{equation} \ell_t - \sum_{s \leq t, \Delta \ell_s \geq \epsilon} \Delta \ell_s = \lim_{|\Pi| \to 0} \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (\ell_{t_j}-\ell_{t_{j-1}}). \tag{3} \end{equation}
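Equation $(3)$ is purely deterministic and can be checked numerically for a hypothetical time change, e.g. $\ell(t) = t + \mathbf 1_{\{t \geq 0.3\}} + \tfrac12 \mathbf 1_{\{t \geq 0.7\}}$ with $\epsilon = 0.6$, so that $S(\epsilon) = \{0.3\}$ and the small jump at $t=0.7$ stays in the sum:

```python
import numpy as np

def ell(t):
    # hypothetical time change: jump of height 1 at t=0.3, height 0.5 at t=0.7
    return t + 1.0 * (np.asarray(t) >= 0.3) + 0.5 * (np.asarray(t) >= 0.7)

t, eps = 1.0, 0.6
S = [0.3]                                  # jump times with height >= eps

n = 10_000
tj = np.linspace(0.0, t, n + 1)
# Keep the intervals (t_{j-1}, t_j] that contain no jump time from S.
keep = np.array([not any(a < s <= b for s in S)
                 for a, b in zip(tj[:-1], tj[1:])])
rhs = np.sum(np.diff(ell(tj))[keep])       # right-hand side of (3), fine mesh
lhs = ell(t) - 1.0                         # l_t minus the jumps of height >= eps
```

The kept increments recover all of $\ell_t$ except the large jump (plus an $O(|\Pi|)$ sliver from the discarded interval), matching $(3)$.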
Now we are ready to prove the convergence in probability. Fix $\gamma>0$. By the triangle inequality, we have
$$\mathbb{P} \left( \left| \sum_{j} |B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}}|^2 - \left( \sum_{s \leq t} \Delta B_{\ell_s}^2 + \ell_t - \sum_{s \leq t} \Delta \ell_s \right) \right| > \gamma \right) \leq I_1+I_2+I_3+I_4$$
where \begin{align*} I_1 &:= \mathbb{P} \left( \left| \sum_{j: (t_{j-1},t_j] \cap S \neq \emptyset} (B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}})^2- \sum_{s \leq t, \Delta \ell_s \geq \epsilon} \Delta B_{\ell_s}^2 \right|> \frac{\gamma}{4} \right) \\ I_2 &:= \mathbb{P} \left( \left| \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}})^2- \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (\ell_{t_j}-\ell_{t_{j-1}}) \right| > \frac{\gamma}{4} \right) \\ I_3 &:= \mathbb{P} \left( \left| \ell_t - \sum_{s \leq t, \Delta \ell_s \geq \epsilon} \Delta \ell_s - \sum_{j: (t_{j-1},t_j] \cap S = \emptyset} (\ell_{t_j}-\ell_{t_{j-1}}) \right| > \frac{\gamma}{4} \right) \\ I_4 &:= \mathbb{P} \left( \left| \sum_{s \leq t, \Delta \ell_s < \epsilon} \Delta B_{\ell_s}^2 \right| + \left| \sum_{s \leq t, \Delta \ell_s < \epsilon} \Delta \ell_s \right|> \frac{\gamma}{4} \right). \end{align*}
We consider the terms separately: by $(1)$, $I_1 \to 0$ as $|\Pi| \to 0$; by Markov's inequality and $(2)$, $\limsup_{|\Pi| \to 0} I_2 \leq \frac{16 C \epsilon}{\gamma^2}$; by $(3)$, $I_3 \to 0$ as $|\Pi| \to 0$; and $I_4$ does not depend on $\Pi$ at all and tends to $0$ as $\epsilon \to 0$, since the sums $\sum_{s \leq t} \Delta B_{\ell_s}^2$ and $\sum_{s \leq t} \Delta \ell_s$ are a.s. finite. Adding all up and letting $\epsilon \to 0$ (which is admissible since the left-hand side below does not depend on $\epsilon$) gives
$$\lim_{|\Pi| \to 0} \mathbb{P} \left( \left| \sum_{j} |B_{\ell_{t_j}}-B_{\ell_{t_{j-1}}}|^2 - \left( \sum_{s \leq t} \Delta B_{\ell_s}^2 + \ell_t - \sum_{s \leq t} \Delta \ell_s \right) \right| > \gamma \right) = 0.$$
Remark: