Choose $n$ numbers $x_1,\cdots, x_n$ uniformly and independently from $[0,1]$. Find the expected value of $$\max_{1\leq i\leq n} x_i - \min_{1\leq i\leq n} x_i$$
The vector $(x_1,\cdots, x_n)$ is uniformly distributed on $[0,1]^n$. The expected value of $f(x_1,\cdots, x_n) := \max_{1\leq i\leq n} x_i - \min_{1\leq i\leq n} x_i$ equals $\int_0^1\cdots\int_0^1 f(x_1,\cdots, x_n)\,dx_n\cdots dx_1$. Note that $\max_{1\leq i\leq n} x_i$ is Borel measurable as a function on $\mathbb{R}^n$, since its inverse image of any set of the form $(-\infty, a]$ is the closed set $(-\infty, a]^n$, and sets of that form generate the Borel sigma algebra of $\mathbb{R}$.
Isn't there a simpler, more elementary way to justify the continuity of $(x_1,\cdots, x_n)\mapsto \max_{1\leq i\leq n} x_i$?
The above integral seems fairly complex to evaluate directly. I'm not exactly sure, but there may be a lot of symmetry in this problem. For instance, the desired integral may in fact equal $n!\int_0^1 \int_0^{x_1}\cdots \int_0^{x_{n-1}} (x_1 - x_n)\,dx_n\cdots dx_1$, i.e. $n!$ times the integral over the region where $x_1\ge x_2\ge\cdots \ge x_n$.
Does this equality actually hold? And if so, how would one prove it?
One can then apply a downward induction argument, starting from the base case $i=n$, to see that for $1 \le i \le n$,
$$\int_0^{x_0}\cdots \int_0^{x_{n-1}} (x_1-x_n)\, dx_n\cdots dx_1 = \int_0^{x_0} \cdots \int_0^{x_{i-1}} \left(x_1 \dfrac{x_i^{n-i}}{(n-i)!} - \dfrac{x_i^{n-i+1}}{(n-i+1)!}\right) dx_i \cdots dx_1$$
where $x_0=1$. Thus substituting $i=1,$ we see that the ordered integral equals $\int_0^{1} x_1^n \left(\dfrac{1}{(n-1)!}-\dfrac{1}{n!}\right)dx_1 = \dfrac{n-1}{n!}\cdot \dfrac{1}{n+1} = \dfrac{n-1}{(n+1)!}.$ Hence, after multiplying by $n!$, if I were to guess, I'd say the desired integral and hence the expected value is $\dfrac{n-1}{n+1}.$
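This computation can be verified exactly with rational arithmetic: the code below (a sketch; the function name `ordered_integral` and the two-coefficient bookkeeping are my own) just carries out the induction step above, tracking the integrand in the form $c_1\, x_1 x_i^{n-i} + c_2\, x_i^{n-i+1}$.

```python
from fractions import Fraction
from math import factorial

def ordered_integral(n):
    """Exactly evaluate the ordered integral
    ∫_0^1 ∫_0^{x_1} ... ∫_0^{x_{n-1}} (x_1 - x_n) dx_n ... dx_1.

    After integrating out x_n, ..., x_{i+1}, the integrand has the form
    c1 * x_1 * x_i**(n-i) + c2 * x_i**(n-i+1), so we only track (c1, c2).
    """
    c1, c2 = Fraction(1), Fraction(-1)   # integrand x_1 - x_n, i.e. the case i = n
    for i in range(n, 1, -1):            # integrate over x_i from 0 to x_{i-1}
        c1 /= (n - i + 1)
        c2 /= (n - i + 2)
    # last step: integrand is (c1 + c2) * x_1**n, integrated over [0, 1]
    return (c1 + c2) / (n + 1)

# check the guessed closed form n! * (n-1)/(n+1)! = (n-1)/(n+1)
for n in range(2, 8):
    assert factorial(n) * ordered_integral(n) == Fraction(n - 1, n + 1)
```

For instance, `ordered_integral(3)` returns $\frac{1}{12} = \frac{3-1}{4!}$, matching the formula $\frac{n-1}{(n+1)!}$.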
Alternatively, it may be possible to induct on $n$ for this problem. For $n=1,$ we just have the uniform $[0,1]$ distribution, and the value of $f(x_1)$ is always $0$, so the expected value is $0$. For $n=2$, we have the uniform distribution on $[0,1]^2$, and the integral we need to evaluate is $\int_0^1 \int_0^1 \left(\max\{x_1,x_2\} - \min\{x_1,x_2\}\right)dx_2\,dx_1 = \int_0^1 \int_0^{x_1} (x_1 - x_2)\, dx_2\, dx_1 + \int_0^1 \int_{x_1}^1 (x_2 - x_1)\,dx_2\,dx_1 = \dfrac{1}3.$ So at least for $n=1,2$ the claim holds.
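As a further sanity check, a quick Monte Carlo simulation (a sketch; the function name and sample count are my own choices) agrees with the guessed value $\frac{n-1}{n+1}$:

```python
import random

def estimate_range_mean(n, trials=200_000, seed=0):
    """Monte Carlo estimate of E[max_i x_i - min_i x_i]
    for n i.i.d. Uniform[0,1] samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        total += max(xs) - min(xs)
    return total / trials

for n in (1, 2, 5):
    print(n, estimate_range_mean(n), (n - 1) / (n + 1))
```

With these sample sizes the estimates land within about $10^{-2}$ of $0$, $\frac13$, and $\frac23$ respectively.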
The maximum of any finite number of continuous functions is also continuous: for two functions this follows from the identity $\max\{f,g\} = \frac{f+g+|f-g|}{2}$, and the general case follows by induction.
You correctly identified a symmetry for evaluating this integral that can be formalized as follows: For each permutation $\tau\in S_n$ (here, $S_n$ denotes the group of permutations on $\{1,\dots,n\}$), consider the Borel set $A_\tau = \{(x_1,\dots,x_n)\in[0,1]^n: x_{\tau(1)}\ge x_{\tau(2)}\ge\dots\ge x_{\tau(n)}\}$. We have $$\bigcup_{\tau\in S_n} A_\tau = [0,1]^n$$ and the pairwise intersection of any two distinct $A_\tau$ is a set of Lebesgue measure $0$ (exercise).
Therefore,
$$\int_{[0,1]^n} f(x)\,\mathrm dx = \sum_{\tau\in S_n}\int_{A_\tau} f(x)\,\mathrm dx.$$
Now notice, using the change of variables $(x_1,\dots,x_n)\mapsto(x_{\tau(1)},\dots, x_{\tau(n)})$ (details left to you), that for all $\tau\in S_n$, $$\int_{A_\tau} f(x)\,\mathrm dx = \int_{A_{\text{id}}} f(x)\,\mathrm dx,$$ where $\text{id}\in S_n$ denotes the identity map, i.e. the permutation leaving all elements fixed.
Since $S_n$ contains $n!$ elements and $\int_{A_{\text{id}}} f$ is exactly the ordered integral you wrote down (with slightly different notation), this proves your conjectured identity.
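The decomposition can also be checked numerically for $n=3$: restricting a Monte Carlo sample to each region $A_\tau$ should give (approximately) the same value of $\int_{A_\tau} f$, namely $\frac{n-1}{(n+1)!} = \frac{1}{12}$, and the six estimates should sum to the full integral $\frac12$. (A sketch; the function name `region_integral` and the sample parameters are my own.)

```python
import random
from itertools import permutations

def region_integral(tau, n=3, trials=200_000, seed=1):
    """Monte Carlo estimate of ∫_{A_tau} (max - min) dx over [0,1]^n,
    where A_tau = {x : x[tau[0]] >= x[tau[1]] >= ... >= x[tau[n-1]]}
    (0-indexed version of the regions defined above)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [rng.random() for _ in range(n)]
        if all(x[tau[k]] >= x[tau[k + 1]] for k in range(n - 1)):
            total += max(x) - min(x)
    return total / trials

# one estimate per permutation tau in S_3; all should be close to 1/12
estimates = [region_integral(tau) for tau in permutations(range(3))]
```

Since every call uses the same seed, the six regions partition the same sample (ties have probability zero), so the estimates necessarily sum to the plain sample mean of $f$.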
The result $\frac{n-1}{n+1}$ is also correct. Alternatively you could note that if $X_1,\dots, X_n$ are independent, $\text{Uniform}([0,1])$-distributed random variables, then $$\mathbb E\left(\sup_{k\in\{1,\dots,n\}} X_k\right) = \frac{n}{n+1}$$ and $$\mathbb E\left(\inf_{k\in\{1,\dots,n\}} X_k\right) = \frac{1}{n+1}$$ so $$\mathbb E\left(\sup_{k\in\{1,\dots,n\}} X_k-\inf_{k\in\{1,\dots,n\}} X_k\right) = \frac{n-1}{n+1}.$$
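For completeness, the first of these expectations follows directly from the distribution function of the maximum: by independence,
$$\mathbb P\left(\max_{k} X_k \le t\right) = \prod_{k=1}^n \mathbb P(X_k \le t) = t^n, \qquad t\in[0,1],$$
so the maximum has density $n t^{n-1}$ on $[0,1]$ and
$$\mathbb E\left(\max_{k} X_k\right) = \int_0^1 t\cdot n t^{n-1}\,dt = \frac{n}{n+1}.$$
Since $1-X_1,\dots,1-X_n$ are again independent and $\text{Uniform}([0,1])$-distributed, and $\min_k X_k = 1 - \max_k (1-X_k)$, the second expectation follows: $\mathbb E\left(\min_k X_k\right) = 1 - \frac{n}{n+1} = \frac{1}{n+1}$.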