My question concerns the minimum Euclidean distance $d_\mathrm{min}$ between a point $\vec{x}_1$ on hyperplane 1 and a point $\vec{x}_2$ on hyperplane 2.
Setup
Let the dimension of the ambient space be $D$, i.e. $\vec{x}_1, \vec{x}_2 \in \mathbb{R}^D$. Let hyperplane 1 be $d$-dimensional, defined by a single point $\vec{p} \in \mathbb{R}^D$ and a set of $d$ orthogonal basis vectors $\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_d$. Let hyperplane 2 be $n$-dimensional, defined by a single point $\vec{q} \in \mathbb{R}^D$ and a set of $n$ orthogonal basis vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. Note that generically the dimensions $d$ and $n$ are not the same.
Problem statement
I want to know the minimum possible distance between a point $\vec{x}_1$ on hyperplane 1 and a point $\vec{x}_2$ on hyperplane 2, where I can move $\vec{x}_1$ and $\vec{x}_2$ freely as long as they stay on their respective hyperplanes.
I can parametrize the position on hyperplane 1 by the components $\vec{\alpha} \in \mathbb{R}^d$ along the basis vectors $\{ \vec{u}_i \}$.
$$ \vec{x}_1(\vec{\alpha}) = \vec{p} + \sum_{i=1}^d \alpha_i \vec{u}_i \, . $$
Equivalently, on hyperplane 2 $$ \vec{x}_2(\vec{\beta}) = \vec{q} + \sum_{i=1}^n \beta_i \vec{v}_i \, , $$ where $\vec{\beta} \in \mathbb{R}^n$ is the position vector within hyperplane 2.
My question is now: what is the minimum Euclidean ($L_2$) distance $|\vec{x}_2 - \vec{x}_1|$? I am looking for $$ d_\mathrm{min} = \min_{\vec{\alpha},\vec{\beta}} \left| \vec{q} + \sum_{i=1}^n \beta_i \vec{v}_i - \vec{p} - \sum_{i=1}^d \alpha_i \vec{u}_i \right| \, , $$ given the hyperplane specifications $\vec{p}$, $\vec{q}$, $d$, $n$, $\{ \vec{u}_i \}$, and $\{ \vec{v}_i \}$.
My progress so far
I know that solutions exist for restricted cases. For example, if I had $d=0$ (just a single point) and $n=D-1$ (a true hyperplane partitioning the space into two half-spaces), I know I could solve it analytically. However, I haven't made much progress on the general case.
It would be great to have an analytic solution, however, a nice numerical method would be almost equally useful for me. Right now, I just run gradient descent on $\vec{\alpha}$ and $\vec{\beta}$.
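For reference, this is roughly what my gradient-descent loop looks like (a minimal NumPy sketch; the function name, step size, and iteration count are arbitrary choices of mine, and I stack the basis vectors $\{\vec{u}_i\}$, $\{\vec{v}_i\}$ as columns of matrices `U` and `V`):

```python
import numpy as np

def grad_descent_dist(p, U, q, V, lr=0.1, steps=5000):
    """Gradient descent on the in-plane coordinates alpha, beta.

    p, q are points in R^D; U is (D, d) and V is (D, n), with the
    basis vectors u_j, v_j stacked as columns.
    """
    alpha = np.zeros(U.shape[1])
    beta = np.zeros(V.shape[1])
    for _ in range(steps):
        r = (q + V @ beta) - (p + U @ alpha)  # residual x2 - x1
        # gradient of |r|^2 is -2 U^T r w.r.t. alpha and +2 V^T r w.r.t. beta
        alpha += lr * (U.T @ r)               # factor of 2 absorbed into lr
        beta -= lr * (V.T @ r)
    return np.linalg.norm((q + V @ beta) - (p + U @ alpha))
```
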
Thank you!
I will assume throughout that the vectors parametrising your affine subspaces are given in terms of the canonical basis $(e_1,\dots,e_D)$.
Let's say that $M_1$ is given by point $p$ and vectors $u_1,\dots, u_d$ while $M_2$ is given by point $q$ and vectors $v_1,\dots, v_n$, where \begin{align*} p=&\sum_{i=1}^Dp_ie_i,\\ q=&\sum_{i=1}^Dq_ie_i \end{align*} for some real coefficients $p_i$, $q_i$ and where for every $j\leq d$ (resp. $j\leq n$) \begin{align*} u_j=&\sum_{i=1}^Du_{j,i}e_i,\\ v_j=&\sum_{i=1}^Dv_{j,i}e_i \end{align*} for some real coefficients $u_{j,i}$, $v_{j,i}$. You write general points $x_1$, $x_2$ on $M_1$, $M_2$ respectively as \begin{align*} x_1(\alpha)=&p+\sum_{j=1}^d\alpha_ju_j =\sum_{i=1}^Dp_ie_i+\sum_{j=1}^d\alpha_j\sum_{i=1}^Du_{j,i}e_i =\sum_{i=1}^D\Big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}\Big)e_i\\ x_2(\beta)=&q+\sum_{j=1}^n\beta_jv_j =\sum_{i=1}^Dq_ie_i+\sum_{j=1}^n\beta_j\sum_{i=1}^Dv_{j,i}e_i =\sum_{i=1}^D\Big(q_i+\sum_{j=1}^n\beta_jv_{j,i}\Big)e_i, \end{align*} so $d(x_1,x_2)$ becomes $$ d(x_1,x_2) =d\Big(\sum_{i=1}^D\Big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}\Big)e_i,\sum_{i=1}^D\Big(q_i+\sum_{j=1}^n\beta_jv_{j,i}\Big)e_i\Big) =\sqrt{\sum_{i=1}^D\Big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}-q_i-\sum_{j=1}^n\beta_jv_{j,i}\Big)^2}. $$ Now let's compute \begin{align*} \frac{\partial d(x_1,x_2)^2}{\partial\alpha_k} =&\frac{\partial \sum_{i=1}^D\Big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}-q_i-\sum_{j=1}^n\beta_jv_{j,i}\Big)^2}{\partial\alpha_k} \\ =&\sum_{i=1}^D2\Big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}-q_i-\sum_{j=1}^n\beta_jv_{j,i}\Big)\frac{\partial\big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}-q_i-\sum_{j=1}^n\beta_jv_{j,i}\big)}{\partial\alpha_k} \\ =&\sum_{i=1}^D2\Big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}-q_i-\sum_{j=1}^n\beta_jv_{j,i}\Big)u_{k,i} \\ \end{align*} and similarly $$ \frac{\partial d(x_1,x_2)^2}{\partial\beta_k} =-\sum_{i=1}^D2\Big(p_i+\sum_{j=1}^d\alpha_ju_{j,i}-q_i-\sum_{j=1}^n\beta_jv_{j,i}\Big)v_{k,i}.$$
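These closed-form derivatives can be sanity-checked against a central finite difference (a small numerical sketch with random data; I stack the coefficients $u_{j,i}$ so that `U[:, j]` is the vector $u_j$, and all names here are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, n = 5, 2, 3
p, q = rng.normal(size=D), rng.normal(size=D)
U = np.linalg.qr(rng.normal(size=(D, d)))[0]  # columns are the u_j
V = np.linalg.qr(rng.normal(size=(D, n)))[0]  # columns are the v_j
alpha, beta = rng.normal(size=d), rng.normal(size=n)

def sq_dist(a, b):
    """d(x1, x2)^2 as a function of the in-plane coordinates."""
    return np.sum((p + U @ a - q - V @ b) ** 2)

# closed-form gradients derived above: 2 U^T r and -2 V^T r
r = p + U @ alpha - q - V @ beta
grad_alpha = 2 * U.T @ r
grad_beta = -2 * V.T @ r

# central finite difference in the alpha_0 direction
eps = 1e-6
e0 = np.eye(d)[0]
fd = (sq_dist(alpha + eps * e0, beta) - sq_dist(alpha - eps * e0, beta)) / (2 * eps)
assert abs(fd - grad_alpha[0]) < 1e-5
```
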
Now we want all these derivatives to be zero, which gives $d+n$ linear equations in the $d+n$ variables $\alpha_1,\dots,\alpha_d,\beta_1,\dots,\beta_n$. Since the squared distance is a convex quadratic that attains its minimum, the "no solution" case cannot occur here: the system has either exactly one solution, or infinitely many (when $M_1$ is parallel to a subspace of $M_2$ or vice versa).
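Numerically, this linear system can be assembled and solved in one shot. Stacking the basis vectors as columns of matrices $U$ and $V$, the stationarity conditions are exactly the normal equations of the least-squares problem $\min_z |Az - (q-p)|$ with $A = [U, -V]$ and $z = (\alpha, \beta)$. A sketch (names are mine, not from the question):

```python
import numpy as np

def min_distance(p, U, q, V):
    """Minimum distance between p + span(U) and q + span(V).

    U is (D, d), V is (D, n), basis vectors as columns.  The stationarity
    conditions above are the normal equations of the least-squares problem
    min_z |A z - (q - p)| with A = [U, -V] and z = (alpha, beta).
    """
    A = np.hstack([U, -V])
    # lstsq returns the minimum-norm solution in the rank-deficient
    # (parallel subspaces) case, so no special-casing is needed
    z = np.linalg.lstsq(A, q - p, rcond=None)[0]
    alpha, beta = z[:U.shape[1]], z[U.shape[1]:]
    return np.linalg.norm((q + V @ beta) - (p + U @ alpha))
```
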