Sometimes, nice results from analysis appear unexpectedly in probability theory.
Here are a couple of examples:
$1.$ If $Z \sim \mathcal{N}(0,1)$, then $Z^2 \sim \Gamma(1/2,2)$
To prove this, one computes that $Z^2$ has density $x \mapsto \frac{1}{\sqrt{2\pi}} x^{-1/2} e^{-x/2}$ for $x > 0$. Comparing this with the density of the $\Gamma(1/2,2)$ distribution, and using the fact that any density $f$ satisfies $\int_{-\infty}^{+\infty} f(x)\,dx = 1$, it follows that $\boxed{\Gamma(1/2) = \sqrt{\pi}}$
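As a quick numerical sanity check (not a proof), the standard library's gamma function confirms the boxed identity:

```python
import math

# Check numerically that Gamma(1/2) = sqrt(pi).
lhs = math.gamma(0.5)
rhs = math.sqrt(math.pi)
print(lhs, rhs)  # both approximately 1.7724538509
```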
$2.$ If $X \sim \Gamma(\alpha_1, \beta), Y \sim \Gamma( \alpha_2, \beta)$ and $X,Y$ are independent, then $X+Y \sim\Gamma(\alpha_1 + \alpha_2, \beta)$
While proving this, one can find the identity
$$\boxed{\int_0^1 u^{\alpha_1 -1}(1-u)^{\alpha_2 -1}du = \frac{\Gamma(\alpha_1)\Gamma(\alpha_2)}{\Gamma(\alpha_1 + \alpha_2)}}$$
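This identity is also easy to check numerically; the parameter values below are arbitrary illustrative choices (taken larger than $1$ so the integrand has no endpoint singularities):

```python
import math

# Numerical check of the Beta-Gamma identity
#   B(a1, a2) = Gamma(a1) Gamma(a2) / Gamma(a1 + a2)
# for the (arbitrary) parameters a1 = 2.5, a2 = 3.5.
a1, a2 = 2.5, 3.5

# Midpoint-rule approximation of the integral over (0, 1).
n = 100_000
integral = sum(((k + 0.5) / n) ** (a1 - 1) * (1 - (k + 0.5) / n) ** (a2 - 1)
               for k in range(n)) / n

rhs = math.gamma(a1) * math.gamma(a2) / math.gamma(a1 + a2)
print(integral, rhs)  # the two values agree to many digits
```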
So my question is: what are other examples where we can find interesting results from analysis (or other branches of mathematics) using probability theory?
Brownian motion is a stochastic process whose sample paths are (with probability one) Hölder continuous but nowhere differentiable. In particular, this shows the existence of functions with these properties.
Moreover, there is a close connection between PDEs and Brownian motion, and therefore Brownian motion can be used to give probabilistic proofs of PDE results, for instance to study existence and uniqueness of solutions to the heat equation or the Dirichlet problem. Take a look at the book Brownian motion by Schilling & Partzsch if you are interested in the topic.
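As a small illustration of this connection, here is a Monte Carlo sketch of the probabilistic representation $u(x) = \mathbb{E}^x[g(B_\tau)]$ for the Dirichlet problem on the unit disc, where $\tau$ is the exit time of Brownian motion started at $x$. The step size and sample count are illustrative choices, and the boundary data $g(x,y) = x^2 - y^2$ is chosen because its harmonic extension is known, so the estimate can be checked:

```python
import math
import random

def estimate(x, y, n_paths=8_000, dt=1e-3):
    """Estimate u(x, y) = E[g(B_tau)] for the Dirichlet problem on the
    unit disc with boundary data g(x, y) = x^2 - y^2, by simulating
    Brownian motion with Gaussian increments of variance dt until it
    leaves the disc."""
    total = 0.0
    s = math.sqrt(dt)
    for _ in range(n_paths):
        px, py = x, y
        while px * px + py * py < 1.0:
            px += random.gauss(0.0, s)
            py += random.gauss(0.0, s)
        # Project the overshoot back onto the unit circle.
        r = math.hypot(px, py)
        px, py = px / r, py / r
        total += px * px - py * py  # g evaluated at the exit point
    return total / n_paths

print(estimate(0.5, 0.0))  # should be close to u(0.5, 0) = 0.25
```

Since $u(x,y) = x^2 - y^2$ is itself harmonic, the exact value at $(0.5, 0)$ is $0.25$; the estimate matches up to Monte Carlo error and a small discretization bias.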
There is a probabilistic proof of this statement which relies on the martingale convergence theorem, see this question here for details.
The strong law of large numbers can be used to compute $\pi$ numerically. Indeed, if we consider a sequence of independent random variables $(X_n)_{n \geq 1}$ which are uniformly distributed on the square $[-1,1] \times [-1,1]$, then
$$\frac{1}{n} \sum_{i=1}^n 1_{\{|X_i| \leq 1\}}(\omega) = \frac{1}{n} \# \{1 \leq i \leq n : |X_i(\omega)| \leq 1\}$$
converges almost surely to $\pi/4$ as $n \to \infty$. Sampling such a sequence $(X_n)_{n \in \mathbb{N}}$ is pretty easy, and therefore this is a nice way to calculate $\pi$ numerically.
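A minimal sketch of this scheme (the sample size is an arbitrary choice):

```python
import random

def estimate_pi(n):
    """Monte Carlo estimate of pi: sample n uniform points in the square
    [-1, 1]^2 and count the fraction landing in the unit disc, which
    converges almost surely to pi/4 by the strong law of large numbers."""
    inside = sum(1 for _ in range(n)
                 if random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1)
    return 4 * inside / n

print(estimate_pi(1_000_000))  # close to 3.14159...
```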
There is a probabilistic proof of the fundamental theorem of algebra; it relies on a martingale convergence theorem and the (neighbourhood) recurrence of Brownian motion in dimension $d=2$; see here or the book by Rogers & Williams for details.
There is a probabilistic proof of the open mapping theorem for analytic functions, see this article; the proof relies on the conformal invariance of Brownian motion.
The existence of normal numbers can be shown by applying the strong law of large numbers. Borel used probabilistic methods to prove that Lebesgue-almost all real numbers are normal.
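The SLLN step in Borel's argument can be illustrated by simulation: for a number whose decimal digits are i.i.d. uniform on $\{0,\dots,9\}$ (which holds for Lebesgue-almost every number in $[0,1]$), each digit's frequency converges almost surely to $1/10$. A minimal sketch:

```python
import random
from collections import Counter

def digit_frequencies(n):
    """Empirical frequencies of n i.i.d. uniform decimal digits; by the
    strong law of large numbers each frequency converges a.s. to 1/10."""
    counts = Counter(random.randrange(10) for _ in range(n))
    return {d: counts[d] / n for d in range(10)}

print(digit_frequencies(100_000))  # every frequency close to 0.1
```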