My question sounds rather simple, but sadly I have not been able to find an answer online, which is rather strange as this seems very basic. I apologize if I simply haven't looked well enough.
The problem is as follows: you have $N$ measurements $x_i$, and each comes with an error bar, so for example $x_1 = 0.21 \pm 0.04$. What I'm looking for is a measure for the mean that takes these error bars into account. My first thought was to use the formula for propagation of uncertainty
$$ \sigma_{\bar x} = \frac{1}{N}\sqrt{\sum_{i=1}^N \sigma_{x_i}^2} \text{.} $$

Now this gives you an indication for the error of the mean. But how would one combine this with the notion of, say, a confidence interval? For that you would look at the standard deviation, right? I don't see how one would incorporate the errors in that.
If the random variables $X_1,\ldots,X_n$ are independent and normally distributed with mean $\mu_i$ and variance $\sigma_i^2$ (i.e., standard deviation $\sigma_i$), then the sum $$ S_n = X_1 + \ldots + X_n $$ is normally distributed with mean $\mu = \mu_1 + \ldots + \mu_n$ and variance $\sigma^2 := \sigma_1^2 + \ldots + \sigma_n^2$. Thus, the average $$ A_n = \frac{1}{n}S_n $$ is normally distributed with $$ \text{mean } \bar\mu = \frac{1}{n}\sum_{k=1}^n \mu_k \text{ and variance } \bar\sigma^2 = \frac{1}{n^2} \sum_{k=1}^n \sigma_k^2 \text{.} $$
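These formulas for the mean and variance of the average are easy to check numerically. A minimal Monte Carlo sketch (the measurement values and errors below are made-up examples):

```python
import numpy as np

# Hypothetical example: three independent normal measurements.
mu = np.array([0.21, 0.25, 0.19])
sigma = np.array([0.04, 0.05, 0.03])
n = len(mu)

rng = np.random.default_rng(0)
# Draw many independent realizations of (X_1, ..., X_n).
samples = rng.normal(mu, sigma, size=(1_000_000, n))
avg = samples.mean(axis=1)  # one realization of A_n per row

print(avg.mean())                   # ~ (1/n) * sum(mu_k)
print(avg.var())                    # ~ (1/n^2) * sum(sigma_k^2)
print((sigma**2).sum() / n**2)      # theoretical variance of A_n
```

The empirical mean and variance of `avg` should agree with $\bar\mu$ and $\bar\sigma^2$ up to sampling noise.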
Now say you have a series of measurements with error intervals, i.e. a sequence of intervals $$ \left((l_k,u_k)\right)_{1 \leq k \leq n} \text{.} $$ We are going to interpret that as $X_k \in (l_k,u_k)$ with probability $1-\alpha$ for some fixed $\alpha$ you get to pick. Instead of a sequence of intervals, we can also view this as a sequence of centers $x_k$ and error bounds $e_k$, i.e. simply set $$ x_k = \frac{l_k + u_k}{2} \text{ and } e_k = \frac{u_k - l_k}{2} \text{.} $$ This then matches your notation of the $k$-th measurement being $x_k \pm e_k$.
If we now assume that all the $X_k$ are normally distributed, we can determine the parameters $\mu_k,\sigma^2_k$ of these distributions from the $x_k$ and $e_k$. We set $$ \mu_k = x_k \text{ and } \sigma_k = \frac{e_k}{\textrm{erf}^{-1}(1-\alpha)\sqrt{2}} \text{.} $$ This then yields, as we required, that $P(l_k < X_k < u_k) = 1 - \alpha$. Note that if you set $\alpha \approx 0.32$, you get $\sigma_k = e_k$, because the probability of a normal variable lying more than one standard deviation away from its mean is about $0.32$.
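The conversion from an error bound $e_k$ and coverage level $\alpha$ to $\sigma_k$ can be sketched in a few lines. Since the standard library has `erf` but no inverse, this sketch inverts it by bisection (the function name `sigma_from_error` is mine, not standard):

```python
from math import erf, sqrt

def erfinv(y, lo=0.0, hi=6.0, iters=80):
    """Invert erf on [0, 6) by bisection; erf is increasing,
    so this suffices for y in [0, 1)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sigma_from_error(e, alpha):
    """Standard deviation sigma_k such that a normal variable lies
    within +/- e of its mean with probability 1 - alpha."""
    return e / (erfinv(1 - alpha) * sqrt(2))

# With alpha ~ 0.3173 (one-sigma coverage), sigma_k equals e_k:
print(sigma_from_error(0.04, 0.3173))  # ~ 0.04
```

For $\alpha = 0.05$ the denominator $\textrm{erf}^{-1}(0.95)\sqrt{2} \approx 1.96$, the familiar 95% z-value, so a 95% error bar is shrunk by that factor to get $\sigma_k$.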
We can now compute the mean $\bar\mu$ and standard deviation $\bar\sigma$ of the sample average $A_n$, using the formulas from above, which were $$ \bar\mu = \frac{1}{n} \sum_{k=1}^n \mu_k \text{ and } \bar\sigma = \sqrt{\frac{1}{n^2} \sum_{k=1}^n \sigma_k^2} = \frac{1}{\sqrt{n}}\sqrt{\frac{1}{n}\sum_{k=1}^n \sigma^2_k} \text{.} $$ We can turn this back into an error interval by finding $\bar e$ such that $P(\bar\mu - \bar e < A_n < \bar\mu + \bar e) = 1 - \alpha$. This yields $$ \bar e = \bar\sigma \sqrt{2} \textrm{erf}^{-1}(1-\alpha) \text{.} $$
If we put it all together, we get for measurements $x_k \pm e_k$ that the average is $$ \left(\frac{1}{n}\sum_{k=1}^n x_k\right) \pm \left(\frac{1}{\sqrt{n}}\sqrt{\frac{1}{n}\sum_{k=1}^n e^2_k}\right) \text{.} $$ Note that the result is independent of the $\alpha$ we picked, and thus holds for every $\alpha$, i.e. every assumed probability that the true result lies within the error interval. Also note that the error bound decreases as $n$ increases, but not linearly: it shrinks with a factor of $\frac{1}{\sqrt{n}}$. Thus, to halve the length of the error interval of the average, you need to quadruple the number of measurements.
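The final formula is short enough to implement directly. A minimal sketch (function name and example data are mine; note that $\frac{1}{\sqrt{n}}\sqrt{\frac{1}{n}\sum e_k^2}$ simplifies to $\frac{1}{n}\sqrt{\sum e_k^2}$):

```python
from math import sqrt

def combine(measurements):
    """Average a list of (x_k, e_k) pairs, returning (x_bar, e_bar)
    per the formula above: x_bar = mean of x_k,
    e_bar = sqrt(sum of e_k^2) / n."""
    n = len(measurements)
    x_bar = sum(x for x, _ in measurements) / n
    e_bar = sqrt(sum(e * e for _, e in measurements)) / n
    return x_bar, e_bar

# Hypothetical data: x_bar ~ 0.2167, e_bar ~ 0.0236
print(combine([(0.21, 0.04), (0.25, 0.05), (0.19, 0.03)]))
```

Note that each error bar enters quadratically, so a single measurement with a much larger error than the rest dominates $\bar e$.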