So, I've got this problem:
I need to estimate via MLE the position of a user given the distances from a certain number of fixed points. I know that (in my exercises) the distance between a point and the user can be modeled as a Gaussian distribution $N(d_{i},\sigma )$, where the subscript $i$ indexes each point.
$d_i$ and $\sigma$ are known
Now, I know I have to maximize the likelihood function, which is the product of all my Gaussian density functions:
$\prod_{i=1}^{n} \frac{1}{\sqrt{2\pi }\sigma }e^{- \frac{(x-d_i)^2}{2\sigma^{2}}}$
but I haven't really understood what the link is between the MLE and the actual estimated position.
So you observe a sample of distances $\{d_i\}_{i=1}^n$ perturbed by normal noise with a known standard deviation of $\sigma$. The true location is $x_0$ and you want an estimator $\hat{x}$. Great.
The likelihood function is $$ \mathcal{L}(x;d,\sigma) = \prod_{i=1}^n \dfrac{1}{\sqrt{2\pi}\sigma} e^{-\dfrac{(d_i-x)^2}{2\sigma^2}} $$ if the signals are normally distributed. Like you get a data point $d_i$ from Google every hour $i$, but there is noise in the measurement, so $d_i = x + \sigma \varepsilon_i$ where $\varepsilon_i \sim N(0,1)$.
First take the log to simplify the calculations: $$ \log \mathcal{L} = \sum_{i=1}^n \left\lbrace -\log(\sqrt{2\pi}\sigma) - \dfrac{(d_i-x)^2}{2\sigma^2} \right\rbrace. $$ We want to pick the $\hat{x}$ that maximizes the log-likelihood. The reason is that this estimator is consistent, so as $n$ becomes large, $\hat{x}_n \rightarrow x_0$. The best explanation for this is the study of $M$-estimators, which really makes the identification and convergence arguments clear without relying too much on the structure of any given statistical model.
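To make the log-likelihood concrete, here's a minimal Python sketch; the measurement vector `d` and the value of `sigma` are made up for illustration:

```python
import numpy as np

def log_likelihood(x, d, sigma):
    """Gaussian log-likelihood of candidate position x given measurements d."""
    return np.sum(-np.log(np.sqrt(2 * np.pi) * sigma)
                  - (d - x) ** 2 / (2 * sigma ** 2))

# hypothetical measurements with known sigma
d = np.array([4.8, 5.1, 5.3, 4.9])
sigma = 0.5

# candidates near the data score higher than candidates far from it
print(log_likelihood(5.0, d, sigma) > log_likelihood(7.0, d, sigma))  # True
```

Evaluating this function over a grid of candidate positions $x$ and picking the highest value is exactly what maximizing the likelihood means in practice.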
Anyway, we get candidate maximizers by taking the first derivative and setting it equal to zero: $$ \sum_{i=1}^n \dfrac{(d_i-\hat{x})}{\sigma^2} = 0 $$ with second-order condition $$ \sum_{i=1}^n\dfrac{-1}{\sigma^2} <0, $$ so the problem is globally concave: there is only one maximizer, and it is the solution to the FONC.
Then $$ \hat{x} = \dfrac{1}{n} \sum_{i=1}^n d_i, $$ so the MLE is the average of the measurements. Since $\mathbb{E}[d_i] = \mathbb{E}[x+\sigma \varepsilon_i] = x$, it follows that $\mathbb{E}[\hat{x}] = x$, and your estimator is even unbiased. Congratulations!
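As a quick sanity check, a short simulation (with a hypothetical true position `x_true` and noise level `sigma`) confirms that the sample mean lands close to the true position and that no nearby candidate scores a higher log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true, sigma, n = 10.0, 2.0, 1000

# simulated noisy measurements: d_i = x + sigma * eps_i, eps_i ~ N(0, 1)
d = x_true + sigma * rng.standard_normal(n)

# MLE in closed form: the sample mean
x_hat = d.mean()

# log-likelihood up to the constant term, which does not depend on x
def log_lik(x):
    return np.sum(-(d - x) ** 2 / (2 * sigma ** 2))

# the quadratic is maximized exactly at the mean, so perturbing x only hurts
assert log_lik(x_hat) >= max(log_lik(x_hat - 0.1), log_lik(x_hat + 0.1))
print(round(x_hat, 2))  # close to 10.0
```

By consistency, making `n` larger drives `x_hat` toward `x_true` at rate $\sigma/\sqrt{n}$.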