There are three points in 3d space: $p_1$, $p_2$, $p_3$ (or more).
These points form a triangle, so you can assume they are not collinear.
There exists an additional unknown point $p_\star$, for which I only know the distances $d_i = \| p_\star - p_i \|$ for $i=1,2,3$ (or more).
Easy question: How to find $p_\star$?
Hard question: If the measurements $p_1$, $p_2$, $p_3$, $d_1$, $d_2$, $d_3$ are corrupted by noise, so that an exact solution does not exist (or there are infinitely many solutions), how do I find the $p_\star$ that minimizes the error norm, also exploiting the fact that I can measure points and distances for more than 3 points?
This is a problem that you could solve with a gradient descent method, and it extends naturally to arbitrarily many points (within practical limits). It isn't completely explicit in your question, but it seems that you want to find the point $\vec P$ that minimises the squared mismatch between the measured distances $d_i$ and the distances from $\vec P$ to your known points.
Explaining the Method
I'll give you a short explanation here. Basically, you want to minimise the term that I'm going to call D:
$$ D(\vec P) = \sum_i \left( \left| \vec p_i - \vec P \right|^2 - d_i^2 \right)^2 $$
I've squared the distances to keep square roots out of the upcoming math; this isn't strictly necessary, and in the noise-free case it won't change the end result (just how the residuals are weighted and how quickly you converge). You could rewrite this in three dimensions with the standard distance formula as:
$$ D(\vec P) = \sum_i \left( (p_{ix}-P_x)^2 + (p_{iy}-P_y)^2 + (p_{iz}-P_z)^2 - d_i^2 \right)^2 $$
Note that $D$ is zero exactly when $\vec P$ lies at distance $d_i$ from every $\vec p_i$; with noisy measurements, its minimiser is the least-squares compromise.
So in this case, you want to find the value of $\vec P$ that minimises $D$ (which is usually called a cost function). To achieve this, you would start off with a (reasonable) guess for $ \vec P = \begin{bmatrix} P_x & P_y & P_z \end{bmatrix}^T $; my initial guess would just be the average of all of your known points.
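As a minimal sketch (assuming NumPy and a hypothetical `anchors` array holding your known points, one per row), the centroid guess is just:

```python
import numpy as np

# Hypothetical anchor points p_1, p_2, p_3, one per row.
anchors = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

# Initial guess: the average (centroid) of the known points.
P0 = anchors.mean(axis=0)
```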
Solving the Problem
In order to solve the problem, we can just use a method called gradient descent to move towards the minimum. This will work despite any noise that you may have in the measurements. You simply find the gradient of $D$ with respect to $\vec P$, so we define:
$$ \nabla D = \begin{bmatrix} {\partial D \over \partial P_x} \\ {\partial D \over \partial P_y} \\ {\partial D \over \partial P_z} \end{bmatrix}$$
Then we can just take steps, starting at our initial guess, in the direction of steepest descent towards the minimum. In this particular problem (especially with four or more non-coplanar anchor points), local minima are not a large issue. Hence we have:
$$ \vec P_{i+1} = \vec P_{i} - \alpha \nabla D $$
Where $\alpha$ (the step size) determines how aggressively we descend towards the minimum. You can analytically find $\nabla D$, which I will leave as an exercise to you :P
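If you want to check your exercise against code (spoiler: it contains the analytic gradient), here is a minimal sketch of the whole iteration in Python/NumPy. The names `grad_D` and `locate`, the fixed step size, and the iteration count are my own choices, not part of the question:

```python
import numpy as np

def grad_D(P, points, dists):
    # Analytic gradient of D(P) = sum_i (||p_i - P||^2 - d_i^2)^2:
    #   dD/dP = 4 * sum_i (||p_i - P||^2 - d_i^2) * (P - p_i)
    r = np.sum((points - P) ** 2, axis=1) - dists ** 2  # residuals r_i
    return 4.0 * np.sum(r[:, None] * (P - points), axis=0)

def locate(points, dists, alpha=0.01, iters=5000):
    P = points.mean(axis=0)  # centroid initial guess
    for _ in range(iters):
        P = P - alpha * grad_D(P, points, dists)  # steepest-descent step
    return P

# Toy check: four non-coplanar anchors and exact distances
# to a known p_star, which the descent should recover.
anchors = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
p_star = np.array([0.3, 0.4, 0.2])
d = np.linalg.norm(anchors - p_star, axis=1)
p_est = locate(anchors, d)
```

With noisy measurements and more than three points, the same loop simply settles on the least-squares compromise; a fixed $\alpha$ works for this toy case, but a line search or an off-the-shelf optimiser would be more robust in practice.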
Hope that helps!