I quote Kuo (2006):
Let $C$ be the Banach space of real-valued continuous functions $\omega$ on $[0,1]$ with $\omega(0)=0$.
A cylindrical subset $A$ of $C$ is a set of the form $$A=\{\omega\in C: (\omega(t_1),\omega(t_2),\ldots,\omega(t_n))\in U\}\tag{1}$$ where $0<t_1<t_2<\ldots<t_n\leq 1$ and $U\in\mathcal{B}(\mathbb{R}^n)$, the Borel $\sigma$-field.
Let $\mathcal{R}$ be the collection of all cylindrical subsets of $C$. Obviously, $\mathcal{R}$ is a field. However, it is not a $\sigma$-field.
Suppose $A\in\mathcal{R}$ is given by $(1)$. Define $\mu(A)$ by $$\mu(A)=\displaystyle{\int_U \prod_{i=1}^n}\bigg(\frac{1}{\sqrt{2\pi(t_i-t_{i-1})}}\exp\bigg[-\frac{(u_i-u_{i-1})^2}{2(t_i-t_{i-1})}\bigg]\bigg)du_1\ldots du_n\tag{2}$$ where $t_0=u_0=0$.
Theorem: The stochastic process $B(t,\omega)=\omega(t)$, $0\leq t\leq 1$, $\omega\in C$, is a Brownian motion.
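To convince myself that $(2)$ makes sense, here is a quick symbolic sanity check (my own, not Kuo's) that for $n=1$ the integrand in $(2)$ is the $\mathcal N(0,t_1)$ density: it integrates to $1$ and has second moment $t_1$. The variable names are mine.

```python
import sympy as sp

u1 = sp.Symbol('u1', real=True)
t1 = sp.Symbol('t1', positive=True)

# One-dimensional case of (2): the candidate density of B(t1).
density = sp.exp(-u1**2 / (2*t1)) / sp.sqrt(2*sp.pi*t1)

total = sp.integrate(density, (u1, -sp.oo, sp.oo))                  # should be 1
second_moment = sp.integrate(u1**2 * density, (u1, -sp.oo, sp.oo))  # should be t1

print(total, second_moment)
```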
Regarding the theorem above, I am struggling to show independence of the increments jointly, not just pairwise (as the definition requires), namely that $$B(t_1), B(t_2)-B(t_1),\ldots, B(t_n)-B(t_{n-1})\text{ are independent.}$$ This would follow if one showed that $$\begin{split}\mu\{B(t_1)\leq a_1, B(t_2)-B(t_1)\leq a_2,\ldots, B(t_i)-B(t_{i-1})\leq a_i\}=\\=\mu\{B(t_1)\leq a_1\}\mu\{B(t_2)-B(t_1)\leq a_2\}\cdots\mu\{B(t_i)-B(t_{i-1})\leq a_i\}\end{split}\tag{3}$$ for every $i\in\{1,\ldots,n\}$.
I was trying to show $(3)$ by induction.
$(3)$ clearly holds for $i=1$. Suppose now that it holds for $i=n-1$; I have to show that it holds for $i=n$ as well. So the starting point is:
$$\begin{split}\mu\{B(t_1)\leq a_1, B(t_2)-B(t_1)\leq a_2,\ldots, B(t_{n-1})-B(t_{n-2})\leq a_{n-1}\}=\\=\mu\{B(t_1)\leq a_1\}\mu\{B(t_2)-B(t_1)\leq a_2\}\cdots\mu\{B(t_{n-1})-B(t_{n-2})\leq a_{n-1}\}\end{split}\tag{4}$$
and I have to get to:
$$\begin{split}\mu\{B(t_1)\leq a_1, B(t_2)-B(t_1)\leq a_2,\ldots, B(t_{n})-B(t_{n-1})\leq a_{n}\}=\\=\mu\{B(t_1)\leq a_1\}\mu\{B(t_2)-B(t_1)\leq a_2\}\cdots\mu\{B(t_{n})-B(t_{n-1})\leq a_{n}\}\end{split}\tag{5}$$
Is there any good way to pass from $(4)$ to $(5)$ relying on definition $(2)$?

Everything boils down to the joint behaviour of Brownian motion at two fixed times.
Let $0\le s < t$ and consider $(B(s),B(t))$. By $(2)$, the joint density of this vector is given by
$$ g_{s,t}(x,y) = \frac{1}{\sqrt{2\pi s}}\exp(-\frac{x^2}{2s})\frac{1}{\sqrt{2\pi (t-s)}}\exp(-\frac{(y-x)^2}{2(t-s)}) $$
We are especially interested in $Cov(B(s),B(t)) = \mathbb E[B(s)B(t)]$; the term $\mathbb E[B(s)]\mathbb E[B(t)]$ vanishes, again due to $(2)$: indeed, applying $(2)$ with a single time, we get $B(r) \sim \mathcal N(0,r)$ for any $r$, so both expectations are zero.
We need to compute this expectation. By Fubini's theorem (justified by integrability),
$$ \mathbb E[B(s)B(t)] = \int_{\mathbb R} \frac{x}{\sqrt{2\pi s}} \exp(-\frac{x^2}{2s}) \int_{\mathbb R} y \frac{1}{\sqrt{2\pi (t-s)}}\exp(-\frac{(y-x)^2}{2(t-s)}) dy dx $$
The inner integral is just $\mathbb E[Z]$, where $Z \sim \mathcal N(x,t-s)$, so it equals $x$; hence: $$ \mathbb E[B(s)B(t)] = \int_{\mathbb R} \frac{x^2}{\sqrt{2\pi s}} \exp(-\frac{x^2}{2s}) dx $$
Again we recognize a familiar quantity: this is $\mathbb E[Y^2]$, where $Y \sim \mathcal N(0,s)$, so it equals $s$, and we're done.
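The two-step integral computation above can be verified symbolically. This is just a sanity check, not part of the proof; I introduce a symbol $d$ standing for $t-s>0$ so the computer algebra system knows the sign, and the variable names are my own.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
s, d = sp.symbols('s d', positive=True)   # d plays the role of t - s

# Inner integral: E[Z] for Z ~ N(x, d) -- should come out as x.
inner = sp.simplify(sp.integrate(
    y * sp.exp(-(y - x)**2 / (2*d)) / sp.sqrt(2*sp.pi*d),
    (y, -sp.oo, sp.oo)))

# Outer integral: E[Y^2] for Y ~ N(0, s) -- should come out as s.
outer = sp.simplify(sp.integrate(
    x * inner * sp.exp(-x**2 / (2*s)) / sp.sqrt(2*sp.pi*s),
    (x, -sp.oo, sp.oo)))

print(inner, outer)
```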
We have shown that $Cov(B(s),B(t)) = s$ for any $0 \le s < t \le 1$. The case $t=s$ holds as well, since $B(r) \sim \mathcal N(0,r)$ as noted above. By the symmetry of covariance, we conclude that $Cov(B(s),B(t)) = \min\{s,t\}$ for all $s,t \in [0,1]$.
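As an illustration (not a proof, since the simulation builds the paths from independent Gaussian increments), one can check the formula $Cov(B(s),B(t)) = \min\{s,t\}$ by Monte Carlo; the time grid and sample size below are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.1, 0.35, 0.6, 1.0])        # arbitrary grid 0 < t_1 < ... <= 1
dt = np.diff(np.concatenate(([0.0], t)))   # t_k - t_{k-1}
n_paths = 200_000

# Build B(t_k) as cumulative sums of N(0, t_k - t_{k-1}) increments.
incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(t)))
B = np.cumsum(incr, axis=1)

emp_cov = np.cov(B, rowvar=False)
expected = np.minimum.outer(t, t)          # the matrix min(t_j, t_k)
err = np.max(np.abs(emp_cov - expected))
print(err)                                 # small sampling error
```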
Now we're ready to proceed. Take any $0=t_0 < t_1 < \ldots < t_n \le 1$. We want to show that $\{B(t_k)-B(t_{k-1}) : k \in \{1,\ldots,n\} \}$ is an independent family. Note that the vector $(B(t_1)-B(t_0),\ldots,B(t_n)-B(t_{n-1}))$ is Gaussian, being a linear transformation of the vector $(B(t_0),\ldots,B(t_n))$, which is Gaussian (by assumption $(2)$ we know its density). Hence it is enough to show that its covariance matrix is diagonal, i.e. that for $j \neq k$ (WLOG $j < k$, by symmetry) we have $Cov(B(t_j)-B(t_{j-1}),B(t_k)-B(t_{k-1}))=0$. Indeed, by bilinearity:
$$ Cov( B(t_j) - B(t_{j-1}),B(t_k)-B(t_{k-1})) = Cov(B(t_j),B(t_k)) - Cov(B(t_j),B(t_{k-1})) - Cov(B(t_{j-1}),B(t_k)) + Cov(B(t_{j-1}),B(t_{k-1}))$$
Since $j<k$, we also have $j \le k-1$; using the result above, we get:
$$ Cov( B(t_j) - B(t_{j-1}),B(t_k)-B(t_{k-1})) = t_j - t_j - t_{j-1} + t_{j-1} = 0$$
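The four-term cancellation above can be checked numerically in one line of linear algebra: with $\Sigma_{jk} = \min\{t_j,t_k\}$ the covariance of $(B(t_1),\ldots,B(t_n))$ and $A$ the differencing matrix mapping this vector to the increments, $A\Sigma A^\top$ should be exactly diagonal. The grid is an arbitrary choice of mine.

```python
import numpy as np

t = np.array([0.1, 0.35, 0.6, 1.0])        # arbitrary grid, my choice
Sigma = np.minimum.outer(t, t)             # Cov(B(t_j), B(t_k)) = min(t_j, t_k)

n = len(t)
A = np.eye(n) - np.eye(n, k=-1)            # row k maps B to B(t_k) - B(t_{k-1})
cov_incr = A @ Sigma @ A.T                 # covariance of the increments

print(np.round(cov_incr, 10))
# Should equal diag(t_1, t_2 - t_1, ..., t_n - t_{n-1}).
assert np.allclose(cov_incr, np.diag(np.diff(np.concatenate(([0.0], t)))))
```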
Moreover (though not needed here):
$$ Var(B(t_k) - B(t_{k-1})) = Var(B(t_k)) + Var(B(t_{k-1})) - 2Cov(B(t_k), B(t_{k-1})) = t_k + t_{k-1} - 2 t_{k-1} = t_k - t_{k-1} $$
So the covariance matrix of our Gaussian vector $(B(t_1)-B(t_0),\ldots,B(t_n)-B(t_{n-1}))$ is the $n \times n$ matrix:
$$ \begin{bmatrix} t_1 & 0 & \cdots & 0 \\ 0 & t_2-t_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & t_n-t_{n-1} \end{bmatrix} $$
Hence our family is indeed independent: the law of a Gaussian vector with mean vector $(0,0,\ldots,0)$ (as ours has) and a diagonal covariance matrix is the product of $n$ one-dimensional measures $\mu_1,\ldots,\mu_n$, where $\mu_k$ is the law of $B(t_k)-B(t_{k-1})$, i.e. $\mathcal N(0,t_k - t_{k-1})$.
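The last step (diagonal covariance $\Rightarrow$ product measure) is pure algebra of densities; here is a symbolic check for $n=2$, with variance symbols $v_1,v_2$ of my own choosing standing for $t_1$ and $t_2-t_1$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
v1, v2 = sp.symbols('v1 v2', positive=True)

# Centered Gaussian density on R^2 with diagonal covariance diag(v1, v2).
joint = sp.exp(-(x1**2 / v1 + x2**2 / v2) / 2) / (2 * sp.pi * sp.sqrt(v1 * v2))

# One-dimensional N(0, v1) and N(0, v2) densities.
marg1 = sp.exp(-x1**2 / (2 * v1)) / sp.sqrt(2 * sp.pi * v1)
marg2 = sp.exp(-x2**2 / (2 * v2)) / sp.sqrt(2 * sp.pi * v2)

diff = sp.simplify(joint - marg1 * marg2)
print(diff)   # 0: the joint density factorizes, so the law is a product measure
```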