Let's have three "true" values $\mu_1, \mu_2, \mu_3 \in \mathbb{R}$. Their "true" mean is simply $y = \frac{1}{N} \sum_{i = 1}^{N} \mu_i$ where $N = 3$.
Now say I observe each "true" value with some error in measurement, and this error is distributed as a Laplace random variable, namely the observations are independent random variables $X_1 \sim \mathrm{Lap}(\mu_1; b)$, $X_2 \sim \mathrm{Lap}(\mu_2; b)$, and $X_3 \sim \mathrm{Lap}(\mu_3; b)$. Note that all three have the same scale $b$.
Now I simply say the "noisy" mean is $Y = \frac{1}{N} \sum_{i = 1}^{N} X_i$. How big is my error on this estimate? Namely, $$ \Pr\left(\left|Y - y \right| > t\right) = \Pr\left(\left|\frac{1}{N} \sum_{i = 1}^{N} X_i - \frac{1}{N}\sum_{i = 1}^{N} \mu_i \right| > t\right) = ? $$ It seems like a pretty standard thing to ask, but I've struggled to find a hint. These $X$ are not i.i.d. (because they have different means).
A tail bound for a single Laplace random variable is easy (because we can center it at zero), but here it gets somehow more complicated.
I've considered an MGF approach followed by Chernoff bounds, but (1) it doesn't look straightforward and (2) it only gives an upper bound, not the exact probability.
Am I missing something (so it's a known and solved problem) or is it really non-trivial?
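For concreteness, here is a quick Monte Carlo sketch of the probability I'm after (the values of $\mu_i$, $b$, and $t$ below are arbitrary, just for illustration):

```python
import numpy as np

# Hypothetical example values (not fixed by the problem): three "true" means,
# a common scale b, and a threshold t.
rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0, 5.0])
b, t, trials = 0.5, 1.0, 200_000

# Each row is one experiment: observe each mu_i with Laplace(mu_i, b) noise.
X = rng.laplace(loc=mu, scale=b, size=(trials, 3))
Y = X.mean(axis=1)   # noisy mean
y = mu.mean()        # true mean

# Monte Carlo estimate of P(|Y - y| > t)
print(np.mean(np.abs(Y - y) > t))
```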
One has $X_k = \mu_k + bW_k$, where the $W_k$ are i.i.d. with distribution Lap$(0,1)$. Hence $$Y-y = \frac{b}{N}\sum_{k=1}^N W_k,$$ so for any $t \ge 0$, $$P[|Y-y|>t] = P[|W_1+\cdots+W_N|>Nt/b].$$ So we are led to compute the distribution of $S_N := W_1+\cdots+W_N$.
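This reduction is easy to sanity-check numerically: the error $Y-y$ depends only on $N$ and $b$, not on the individual $\mu_k$. A minimal sketch (the particular $\mu_k$, $b$, $t$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([-3.0, 0.0, 10.0])  # arbitrary "true" values
N, b, t, trials = 3, 2.0, 1.5, 100_000

# Direct simulation of the event |Y - y| > t ...
X = rng.laplace(mu, b, size=(trials, N))
lhs = np.mean(np.abs(X.mean(axis=1) - mu.mean()) > t)

# ... versus the reduced event |W_1 + ... + W_N| > N t / b with W_k ~ Lap(0,1).
W = rng.laplace(0.0, 1.0, size=(trials, N))
rhs = np.mean(np.abs(W.sum(axis=1)) > N * t / b)

print(lhs, rhs)  # the two frequencies agree up to Monte Carlo noise
```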
The characteristic function is very simple: since the $W_k$ are i.i.d., $$\phi_{S_N}(\theta) = \phi_{W_1}(\theta)^N = \frac{1}{(1+\theta^2)^N}.$$ Using Fourier (or Laplace) inversion, one can derive the density function, which is a Sargan density: https://www.researchgate.net/publication/4856056_Sargan_densities_which_one
Here is another method. A possible way to construct i.i.d. random variables with distribution Lap$(0,1)$ is to start from i.i.d. random variables $U_1,V_1,\ldots,U_N,V_N$ with distribution Exp$(1)$ and to set $W_k:=U_k-V_k$ for each $k$. Hence $S_N = (U_1+\cdots+U_N)-(V_1+\cdots+V_N)$ is the difference of two i.i.d. random variables with distribution Gamma$(N,1)$.

We derive its density as a convolution product. For every $s \in \mathbb{R}$, $$f_{S_N}(s) = \int_{\mathbb{R}} 1_{x>\max(0,-s)} \frac{(s+x)^{N-1}}{(N-1)!} e^{-(s+x)}\frac{x^{N-1}}{(N-1)!} e^{-x} \,\mathrm{d}x.$$ This density is an even function, so let us compute it for $s \ge 0$: $$f_{S_N}(s) = \frac{e^{-s}}{(N-1)!^2} \int_0^{+\infty} (s+x)^{N-1} x^{N-1} e^{-2x} \,\mathrm{d}x.$$ Using the binomial expansion $$(s+x)^{N-1} = \sum_{k=0}^{N-1} {N-1 \choose k} s^k x^{N-1-k}$$ and the identity $\int_0^{+\infty} x^m e^{-2x} \,\mathrm{d}x = m!/2^{m+1}$, one derives $$f_{S_N}(s) = \frac{e^{-s}}{(N-1)!^2} \sum_{k=0}^{N-1} s^k {N-1 \choose k} \frac{(2N-2-k)!}{2^{2N-1-k}} = \frac{e^{-s}}{(N-1)!} \sum_{k=0}^{N-1} \frac{s^k}{k!} \frac{(2N-2-k)!}{2^{2N-1-k}(N-1-k)!} \text{ for } s \ge 0.$$

Set $$P_k(s) = \sum_{\ell=0}^k \frac{s^\ell}{\ell!}.$$ Since $P'_k = P_{k-1}$, observe that $$e^{-s}\frac{s^k}{k!} = e^{-s}\big(P_k(s)-P'_k(s)\big) = -\frac{\mathrm{d}}{\mathrm{d}s} \big(e^{-s}P_k(s)\big).$$ Hence, integrating the density $f_{S_N}$ from $s$ to infinity, one derives $$P[S_N>s] = \frac{e^{-s}}{(N-1)!} \sum_{k=0}^{N-1} P_k(s) \frac{(2N-2-k)!}{2^{2N-1-k}(N-1-k)!} \text{ for } s \ge 0.$$ By symmetry, $P[|S_N|>s] = 2P[S_N>s]$ for $s \ge 0$.
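The closed form above is easy to implement and check. A sketch (function names are my own): `tail_SN` computes $P[S_N > s]$ from the last display, and `mean_error_tail` answers the original question via $P[|Y-y|>t] = 2\,P[S_N > Nt/b]$. For $N=1$ it reduces to the familiar single-Laplace tail $e^{-t/b}$, and it matches a Monte Carlo estimate for $N=3$:

```python
import numpy as np
from math import exp, factorial

def tail_SN(s, N):
    """Closed-form P[S_N > s] for s >= 0, S_N a sum of N i.i.d. Lap(0,1)."""
    Pk = lambda k: sum(s**l / factorial(l) for l in range(k + 1))
    total = sum(Pk(k) * factorial(2*N - 2 - k)
                / (2**(2*N - 1 - k) * factorial(N - 1 - k))
                for k in range(N))
    return exp(-s) / factorial(N - 1) * total

def mean_error_tail(t, N, b):
    """P[|Y - y| > t] = 2 * P[S_N > N t / b]."""
    return 2 * tail_SN(N * t / b, N)

# Sanity check: N = 1 reduces to the single-Laplace tail e^{-t/b}.
assert abs(mean_error_tail(1.0, 1, 1.0) - exp(-1.0)) < 1e-12

# Cross-check against Monte Carlo for N = 3.
rng = np.random.default_rng(0)
S = rng.laplace(size=(200_000, 3)).sum(axis=1)
print(mean_error_tail(0.5, 3, 1.0), np.mean(np.abs(S) > 1.5))
```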