Given $n$ iid samples $x_1, \ldots, x_n$ from a Laplace($\mu$, $b$) distribution, I know that the MLE for $b$ is: $$\hat{b} = \frac{1}{n} \sum_{i=1}^n \lvert x_i - \hat{\mu} \rvert$$ where $$\hat{\mu} = \operatorname{med}(x_1,\ldots,x_n)$$ is the sample median.
It can also be shown that $b = E[|x_i - \mu|]$. I noticed that the relationship between $\hat{b}$ and $b$ looks a lot like the relationship between the sample variance and the true variance. The sample variance is a downward biased estimate for the true variance, since we use the sample mean to compute it. This is what motivates the Bessel correction.
Based on this, I conjectured that $\hat{b}$ is a downward biased estimate for $b$, as we're using the sample median $\hat{\mu}$ to compute it. A quick Monte Carlo script in Python seems to corroborate my hunch. For small values of $n$, the downward bias is fairly large, and it tends toward zero for larger n.
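For reference, a minimal version of such a Monte Carlo check (a sketch in NumPy; the exact script I used differs) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, b, n, reps = 0.0, 1.0, 5, 200_000
samples = rng.laplace(mu, b, size=(reps, n))
medians = np.median(samples, axis=1, keepdims=True)
# MLE of b for each replicate: mean absolute deviation about the sample median
b_hats = np.abs(samples - medians).mean(axis=1)
print(f"mean of b_hat over {reps} replicates: {b_hats.mean():.4f}  (true b = {b})")
```

For $n = 5$ the average of $\hat{b}$ comes out clearly below the true $b = 1$; increasing `n` shrinks the gap.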
But when I tried deriving the result analytically, I found myself totally stuck. In the past, I derived the Bessel correction; that was relatively easy, since we can expand the square and we're dealing with a sample mean (for which there's an easy formula) rather than a sample median. Here, however, we have an absolute value and a median. I tried focusing on an individual $E[|x_i - \hat{\mu}|]$, setting up $n$ iterated integrals with respect to each Laplace density, but I'm not sure how to deal with the fact that $\hat{\mu}$ is a function of $(x_1, \ldots, x_n)$.
Is it even possible to derive something like the Bessel correction for this? If not, how can I get an unbiased estimate of $b$ from the sample?
This is not a complete answer, but it illustrates some of the calculation steps.
Assume the parametrization follows the definition from Wikipedia.
The pdf of each sample is
$$ f(x) = \frac {1} {2b} \exp\left\{- \frac {|x-\mu|} {b} \right\}$$
The CDF is
$$ F(x) = \begin{cases} \displaystyle \frac {1} {2} \exp\left\{\frac {x-\mu} {b} \right\} && \text{if } x \leq \mu \\ \displaystyle 1 - \frac {1} {2} \exp\left\{- \frac {x-\mu} {b} \right\} && \text{if } x > \mu \end{cases}$$
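Both expressions are easy to sanity-check numerically: the finite-difference derivative of $F$ should recover $f$. A quick sketch in NumPy:

```python
import numpy as np

def laplace_pdf(x, mu, b):
    # f(x) = exp(-|x - mu| / b) / (2b)
    return np.exp(-np.abs(x - mu) / b) / (2 * b)

def laplace_cdf(x, mu, b):
    # piecewise CDF, split at x = mu
    return np.where(x <= mu,
                    0.5 * np.exp((x - mu) / b),
                    1.0 - 0.5 * np.exp(-(x - mu) / b))

mu, b = 0.5, 2.0
xs = np.linspace(-5.0, 5.0, 101)
h = 1e-6
# central difference of F should match f everywhere
num_deriv = (laplace_cdf(xs + h, mu, b) - laplace_cdf(xs - h, mu, b)) / (2 * h)
print(np.max(np.abs(num_deriv - laplace_pdf(xs, mu, b))))
```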
With $n$ the sample size, let
$$ X_{(1)} < X_{(2)} < \ldots < X_{(n)} $$
denote the order statistics of the sample.
As $n$ is a natural number, it is either odd or even, so there exists another natural number $k$ such that $$n = \begin{cases} 2k - 1 && \text{if } n \text{ is odd}\\ 2k && \text{if } n \text{ is even} \end{cases}$$
Then the sample median $\hat{\mu}$ is defined by
$$ \hat{\mu} = \begin{cases} X_{(k)} && \text{if } n = 2k - 1\\ \displaystyle \frac {X_{(k)} + X_{(k+1)}} {2} && \text{if } n = 2k \end{cases}$$
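As a quick sanity check, this piecewise definition agrees with `np.median` in both parity cases:

```python
import numpy as np

rng = np.random.default_rng(3)
for n in (7, 8):                       # one odd and one even sample size
    x = rng.laplace(size=n)
    xs = np.sort(x)                    # order statistics X_(1) <= ... <= X_(n)
    k = (n + 1) // 2                   # n = 2k - 1 (odd) or n = 2k (even)
    med = xs[k - 1] if n % 2 else (xs[k - 1] + xs[k]) / 2
    assert np.isclose(med, np.median(x))
```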
Since summation is symmetric in its arguments, $n$ times the MLE of $b$ can be rewritten as
$$ \begin{align} n\hat{b} &= \sum_{i=1}^n |X_{i} - \hat{\mu}| = \sum_{i=1}^n |X_{(i)} - \hat{\mu}| \\ &= \begin{cases}\displaystyle \sum_{i=1}^{k-1} (X_{(k)} - X_{(i)}) + \sum_{j=k+1}^{2k-1} (X_{(j)} - X_{(k)}) && \text{if } n = 2k - 1\\ \displaystyle \sum_{i=1}^{k} \left( \frac {X_{(k)} + X_{(k+1)}} {2} - X_{(i)} \right) + \sum_{j=k+1}^{2k} \left(X_{(j)} - \frac {X_{(k)} + X_{(k+1)}} {2} \right) && \text{if } n = 2k \end{cases} \\ &= \begin{cases}\displaystyle \sum_{j=k+1}^{2k-1} X_{(j)} - \sum_{i=1}^{k-1} X_{(i)} && \text{if } n = 2k - 1 \\ \displaystyle \sum_{j=k+1}^{2k} X_{(j)} - \sum_{i=1}^{k} X_{(i)} && \text{if } n = 2k \end{cases} \end{align} $$
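The cancellation of the median terms in the last step can be verified numerically; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def abs_dev_sum(x):
    # total absolute deviation about the sample median (n times the MLE of b)
    return np.sum(np.abs(x - np.median(x)))

def half_sum_form(x):
    # upper-half sum minus lower-half sum of the order statistics
    xs = np.sort(x)
    m = len(xs) // 2          # m = k - 1 if n = 2k - 1, m = k if n = 2k
    return xs[-m:].sum() - xs[:m].sum()

for n in (5, 6, 11, 12):
    x = rng.laplace(size=n)
    assert np.isclose(abs_dev_sum(x), half_sum_form(x))
```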
Following the standard multinomial argument (see Wikipedia), the pdf of the order statistic $X_{(i)}$ is
$$ \begin{align} &~~ f_{X_{(i)}}(x) \\ &= \frac {n!} {(i-1)!(n-i)!} f(x)F(x)^{i-1}[1 - F(x)]^{n-i} \\ &= \begin{cases} \displaystyle \frac {n!} {(i-1)!(n-i)!} \frac {1} {2b} \exp\left\{\frac {x - \mu} {b} \right\} \left[\frac {1} {2} \exp\left\{\frac {x-\mu} {b} \right\}\right]^{i-1} \left[1 - \frac {1} {2} \exp\left\{\frac {x-\mu} {b} \right\}\right]^{n-i} && \text{if } x \leq \mu \\ \displaystyle \frac {n!} {(i-1)!(n-i)!} \frac {1} {2b} \exp\left\{- \frac {x - \mu} {b} \right\} \left[1 - \frac {1} {2} \exp\left\{- \frac {x-\mu} {b} \right\}\right]^{i-1} \left[\frac {1} {2} \exp\left\{- \frac {x-\mu} {b} \right\}\right]^{n-i} && \text{if } x > \mu \end{cases} \\ &= \begin{cases} \displaystyle \frac {n!} {2^ib(i-1)!(n-i)!} \exp\left\{\frac {i(x - \mu)} {b} \right\} \sum_{l=0}^{n-i} \frac {(n-i)!} {l!(n-i-l)!} \frac {(-1)^l} {2^l} \exp\left\{\frac {l(x-\mu)} {b} \right\} && \text{if } x \leq \mu \\ \displaystyle \frac {n!} {2^{n-i+1}b(i-1)!(n-i)!} \exp\left\{- \frac {(n-i+1)(x - \mu)} {b} \right\} \sum_{l=0}^{i-1} \frac {(i-1)!} {l!(i-1-l)!} \frac {(-1)^l} {2^l} \exp\left\{- \frac {l(x-\mu)} {b} \right\} && \text{if } x > \mu \end{cases} \\ &= \begin{cases} \displaystyle \sum_{l=0}^{n-i} \frac {(-1)^ln!} {2^{i+l}b(i-1)!l!(n-i-l)!} \exp\left\{\frac {(i+l)(x - \mu)} {b} \right\} && \text{if } x \leq \mu \\ \displaystyle \sum_{l=0}^{i-1} \frac {(-1)^ln!} {2^{n-i+1+l}b(n-i)!l!(i-1-l)!} \exp\left\{- \frac {(n-i+1+l)(x - \mu)} {b} \right\} && \text{if } x > \mu \end{cases} \\ \end{align} $$
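A numerical spot check that the fully expanded form on the last line agrees with the compact product $f(x)F(x)^{i-1}[1-F(x)]^{n-i}$:

```python
from math import exp, factorial

def order_stat_pdf_compact(x, i, n, mu, b):
    # n!/((i-1)!(n-i)!) * f(x) * F(x)^(i-1) * (1 - F(x))^(n-i)
    f = exp(-abs(x - mu) / b) / (2 * b)
    F = 0.5 * exp((x - mu) / b) if x <= mu else 1 - 0.5 * exp(-(x - mu) / b)
    return factorial(n) // (factorial(i - 1) * factorial(n - i)) * f * F**(i - 1) * (1 - F)**(n - i)

def order_stat_pdf_expanded(x, i, n, mu, b):
    # the binomial-expanded form from the last line of the derivation
    if x <= mu:
        return sum((-1)**l * factorial(n)
                   / (2**(i + l) * b * factorial(i - 1) * factorial(l) * factorial(n - i - l))
                   * exp((i + l) * (x - mu) / b) for l in range(n - i + 1))
    return sum((-1)**l * factorial(n)
               / (2**(n - i + 1 + l) * b * factorial(n - i) * factorial(l) * factorial(i - 1 - l))
               * exp(-(n - i + 1 + l) * (x - mu) / b) for l in range(i))

# both forms agree at a few points for n = 7, i = 4
for x in (-2.0, 0.0, 1.5):
    assert abs(order_stat_pdf_compact(x, 4, 7, 0.3, 1.5)
               - order_stat_pdf_expanded(x, 4, 7, 0.3, 1.5)) < 1e-12
```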
So the expected value of $X_{(i)}$ is
$$ \begin{align} E[X_{(i)}] &= \int_{-\infty}^{+\infty}xf_{X_{(i)}}(x)dx = \int_{-\infty}^{\mu} x f_{X_{(i)}}(x)dx + \int_{\mu}^{+\infty} x f_{X_{(i)}}(x)dx \\ &= \sum_{l=0}^{n-i} \frac {(-1)^ln!} {2^{i+l}b(i-1)!l!(n-i-l)!} \int_{-\infty}^{\mu} x \exp\left\{\frac {(i+l)(x - \mu)} {b} \right\}dx \\ &+ \sum_{l=0}^{i-1} \frac {(-1)^ln!} {2^{n-i+1+l}b(n-i)!l!(i-1-l)!} \int_{\mu}^{+\infty} x \exp\left\{- \frac {(n-i+1+l)(x - \mu)} {b} \right\} dx \\ &= \sum_{l=0}^{n-i} \frac {(-1)^ln!} {2^{i+l}b(i-1)!l!(n-i-l)!} \frac {b} {i+l}\left(\mu - \frac {b} {i+l}\right) \\ &+ \sum_{l=0}^{i-1} \frac {(-1)^ln!} {2^{n-i+1+l}b(n-i)!l!(i-1-l)!} \frac {b} {n-i+1+l} \left(\mu + \frac {b} {n-i+1+l} \right) \\ &= \frac {n!} {2^i(n-i)!(i-1)!} \sum_{l=0}^{n-i} \left(-\frac {1} {2} \right)^l \frac {(n-i)!} {l!(n-i-l)!} \frac {1} {i+l} \left(\mu - \frac {b} {i+l}\right) \\ &+ \frac {n!} {2^{n-i+1}(n-i)!(i-1)!} \sum_{l=0}^{i-1} \left(-\frac {1} {2} \right)^l \frac {(i-1)!} {l!(i-1-l)!} \frac {1} {n-i+1+l} \left(\mu + \frac {b} {n-i+1+l} \right) \end{align} $$
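This final formula can be cross-checked against a Monte Carlo estimate of each $E[X_{(i)}]$; a sketch:

```python
import numpy as np
from math import comb, factorial

def expected_order_stat(i, n, mu, b):
    # E[X_(i)] assembled from the two binomial sums in the last display
    pref = factorial(n) / (factorial(n - i) * factorial(i - 1))
    lower = sum((-0.5)**l * comb(n - i, l) / (i + l) * (mu - b / (i + l))
                for l in range(n - i + 1)) / 2**i
    upper = sum((-0.5)**l * comb(i - 1, l) / (n - i + 1 + l) * (mu + b / (n - i + 1 + l))
                for l in range(i)) / 2**(n - i + 1)
    return pref * (lower + upper)

# cross-check against simulated order statistics for n = 5
rng = np.random.default_rng(2)
n, mu, b = 5, 1.0, 2.0
sims = np.sort(rng.laplace(mu, b, size=(400_000, n)), axis=1)
for i in range(1, n + 1):
    exact = expected_order_stat(i, n, mu, b)
    print(f"E[X_({i})] exact {exact:+.4f}  Monte Carlo {sims[:, i - 1].mean():+.4f}")
```

As a sanity check, $n = 1$ gives $E[X_{(1)}] = \mu$ and $n = 2$ gives $E[X_{(1)}] = \mu - \tfrac{3b}{4}$, both of which match known results.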
I am not sure whether the sums can be simplified further; Mark Fischler mentioned a related result in Harmonic series and the binomial theorem.
Note that $\hat{b}$ is unchanged when the same constant is added to every sample, so its expectation cannot depend on $\mu$: the $\mu$ terms must cancel when the order-statistic expectations are combined. The final result is therefore linear in $b$.
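Putting the pieces together numerically: combining the order-statistic expectations as in the rewritten MLE (and dividing by $n$, per the original definition of $\hat{b}$) gives $E[\hat{b}]$ exactly, confirms that $\mu$ cancels, and produces the bias factor for each $n$. A sketch:

```python
from math import comb, factorial

def expected_order_stat(i, n, mu, b):
    # E[X_(i)] from the two binomial sums derived above
    pref = factorial(n) / (factorial(n - i) * factorial(i - 1))
    lower = sum((-0.5)**l * comb(n - i, l) / (i + l) * (mu - b / (i + l))
                for l in range(n - i + 1)) / 2**i
    upper = sum((-0.5)**l * comb(i - 1, l) / (n - i + 1 + l) * (mu + b / (n - i + 1 + l))
                for l in range(i)) / 2**(n - i + 1)
    return pref * (lower + upper)

def expected_b_hat(n, mu, b):
    # E[b_hat] = (1/n) * (upper-half sum minus lower-half sum of E[X_(i)])
    m = n // 2
    upper = sum(expected_order_stat(j, n, mu, b) for j in range(n - m + 1, n + 1))
    lower = sum(expected_order_stat(i, n, mu, b) for i in range(1, m + 1))
    return (upper - lower) / n

for n in (2, 3, 5, 10, 20):
    print(n, expected_b_hat(n, 0.0, 1.0))   # below b = 1, approaching 1 as n grows
```

Dividing $\hat{b}$ by the factor `expected_b_hat(n, 0, 1)` for the observed $n$ then gives an exactly unbiased estimator of $b$, even without a closed-form correction.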