Euclidean distance is known to behave badly in high dimensions. However, in multiple regression the idea is to minimize the squared distances from the data points to a hyperplane.
Other data analysis techniques, such as nearest neighbors, have been considered problematic precisely because of their reliance on Euclidean distances, and dimensionality-reduction techniques have been proposed to mitigate this.
Why is this not a problem in multiple regression?
I'm not sure how regression applies to this question, but Euclidean distance is valid in any number of dimensions, as shown by the generalized Pythagorean theorem: $$A^2+B^2+C^2+\cdots+X^2=Y^2,$$ where $A,B,C,\ldots,X$ are the side lengths and $Y$ is the diagonal. For example, we know that $3^2+4^2=5^2$, so $5$ can be the diagonal on the front face of a box. If the depth of the box is $12$, then the distance between opposite corners satisfies $3^2+4^2+12^2=13^2$, because the $5^2$ in the triple $(5,12,13)$ can be replaced by $3^2+4^2$.
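As a quick numerical check of the box example above, here is a minimal Python sketch (using only the standard library) showing that chaining the face diagonal with the depth gives the same result as summing all three squared sides at once:

```python
import math

# Front face of the box: sqrt(3^2 + 4^2) = 5
face_diagonal = math.hypot(3, 4)

# Chain that diagonal with the depth 12: sqrt(5^2 + 12^2) = 13
space_diagonal = math.hypot(face_diagonal, 12)

# The same corner-to-corner distance computed directly over all three axes
direct = math.sqrt(3**2 + 4**2 + 12**2)

print(face_diagonal, space_diagonal, direct)  # 5.0 13.0 13.0
```

The two-step chaining and the one-step sum agree exactly here because $(3,4,5)$ and $(5,12,13)$ are integer triples.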
Likewise, for a $4$D box, $\quad 3^2+4^2+12^2+84^2=85^2\quad$ because the triples $\quad (3,4,5)\quad (5,12,13)\quad (13,84,85)\quad$ can all be similarly joined into a quintuple. The process can be reversed for dimensional reduction. For example, $$3^2+4^2+12^2+84^2 = 3^2+12.64911064^2+84^2 = 85^2,$$ where $12.64911064\approx\sqrt{4^2+12^2}$ merges two components into one.
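The $4$D example and the reduction step can be checked the same way; this sketch merges the $4$ and $12$ components into a single component of length $\sqrt{4^2+12^2}$ and confirms the total distance is unchanged (up to floating-point rounding):

```python
import math

# 4-D "box" built from the chained triples (3,4,5), (5,12,13), (13,84,85)
d4 = math.sqrt(3**2 + 4**2 + 12**2 + 84**2)  # sqrt(7225) = 85.0

# Dimensional reduction: replace the 4- and 12-components with one
# component of length sqrt(4^2 + 12^2) ~ 12.64911064
merged = math.hypot(4, 12)
d3 = math.sqrt(3**2 + merged**2 + 84**2)

print(d4, math.isclose(d3, d4))
```

The merged version is only approximately equal in floating point, since $\sqrt{160}$ is irrational, which is why the comparison uses `math.isclose` rather than `==`.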
At least one form of regression works by minimizing such distances, and the corner-to-corner distance here is the shortest [straight-line] distance. The examples above use integer solutions, but everything works the same with non-integers, such as the coordinate differences $\space A=x_1-x_0\quad B=y_1-y_0\quad C=z_1-z_0\quad \cdots$
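The coordinate-difference construction above amounts to the usual distance formula between two points in any number of dimensions; a minimal sketch (the function name `euclidean` is my own choice for illustration):

```python
import math

def euclidean(p, q):
    """Straight-line distance between two points, built from the
    coordinate differences A = x1-x0, B = y1-y0, C = z1-z0, ..."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Works identically in 2, 3, or more dimensions, integer or not:
print(euclidean((0, 0), (3, 4)))                      # 5.0
print(euclidean((0, 0, 0), (3, 4, 12)))               # 13.0
print(euclidean((1.5, -2.0, 0.25), (0.0, 1.0, 3.0)))
```

Nothing in the formula depends on the number of dimensions, which is the point of the answer: the geometry stays valid, even if distances become less discriminating in high dimensions.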
There are methods for finding a missing value when one of the terms is unknown, and I have been compiling many of these in a paper. If they would be useful, I can probably translate them to your application once I understand what your application is.