If I have $n$ values drawn from a normal distribution, I can calculate the sample mean $\hat\mu$ and the sample standard deviation $\hat\sigma$ very straightforwardly. As I understand it, these are the maximum-likelihood estimates of the parameters of the normal distribution from which the values were drawn; in other words, some other normal distribution could have produced these numbers, but it is less likely to have done so.
Is there a comparable method for figuring out the most likely parameters $a$ and $b$ such that our numbers were chosen uniformly from $[a,b]$?
The German tank problem is related to this, although it assumes a discrete uniform random variable and, in the first instance, assumes that we know the lower bound and are only looking for the upper bound. However, we can use it as inspiration for the bidirectional continuous case.
Broadly, what we would do is:
1. Express the probability of drawing a sample resembling the observed values as a function of $a$ and $b$ (the likelihood).
2. Reverse-engineer this to produce estimators of $a$ and $b$, based on the sample values, with appropriate properties (e.g. maximum likelihood, or unbiasedness).
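To make the two steps concrete, here is a minimal sketch in Python. For $U(a,b)$ the likelihood $(b-a)^{-n}$ is maximized by shrinking $[a,b]$ onto the sample range, so the MLE is simply $(\min, \max)$. Since $E[\min] = a + (b-a)/(n+1)$ and $E[\max] = b - (b-a)/(n+1)$, the bidirectional analogue of the German tank correction expands the sample range by $(\max-\min)/(n-1)$ on each side to give unbiased estimators. The function names are mine, for illustration only.

```python
import random

def uniform_mle(xs):
    # The likelihood (b - a)^(-n) grows as [a, b] shrinks, so it is
    # maximized by the tightest interval containing the sample.
    return min(xs), max(xs)

def uniform_unbiased(xs):
    # German-tank-style correction in both directions: the sample min
    # and max each sit an expected (b - a)/(n + 1) inside the true
    # endpoints, and expanding by (max - min)/(n - 1) per side
    # cancels that bias exactly.
    n = len(xs)
    lo, hi = min(xs), max(xs)
    spread = (hi - lo) / (n - 1)
    return lo - spread, hi + spread

random.seed(0)
sample = [random.uniform(3.0, 7.0) for _ in range(1000)]
print(uniform_mle(sample))       # always inside the true [3, 7]
print(uniform_unbiased(sample))  # slightly wider than the sample range
```

Note that the MLE is necessarily biased inward (the sample can never extend beyond $[a,b]$, so $\min \ge a$ and $\max \le b$ always), which is exactly why the unbiased version widens the interval.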