Let's say I have an $N$-dimensional space and some subspace with a non-orthogonal basis of unit vectors: $$ \vec{Ss} : (\vec{u_1}, \vec{u_2}, \cdots, \vec{u_{n}})$$ All I know about my vectors is their scalar products with each other.
Now, let's say I have some unit vector $\vec{v}$, and I want to know the norm of its projection onto $\vec{Ss}$. Again, all I know about $\vec{v}$ is its scalar products with the vectors of $\vec{Ss}$.
I'm thinking of a recursive algorithm to perform this (recursive in the subspace dimension $n$), but it doesn't work; I must have made some mistake. Here's my R code:
myFunction_orthogonalize <- function(subScalars)
{
  # subScalars : square matrix of subspace scalar products
  if (is.null(ncol(subScalars))) # subspace dimension is 1
  {
    return(1)
  }
  else
  {
    thisDimensions <- nrow(subScalars)
    previousSubspaceExpression <- myFunction_orthogonalize(subScalars[-thisDimensions, -thisDimensions])
    thisNewVect_withoutLastScalar <- t(previousSubspaceExpression) %*%
      matrix(subScalars[, thisDimensions][-thisDimensions], nrow = thisDimensions - 1, ncol = 1)
    daNorm <- sqrt(1 - sum(thisNewVect_withoutLastScalar^2))
    beforsLast <- c(rep(0, thisDimensions - 1), 1) - c(thisNewVect_withoutLastScalar, 0)
    return(cbind(rbind(previousSubspaceExpression, rep(0, thisDimensions - 1)), beforsLast / daNorm))
  }
}
myFunction_norm <- function(subScalars, toProject_scalars)
{
  # subScalars : square matrix of subspace scalar products
  # toProject_scalars : vector of scalar products of the vector to project with the subspace vectors
  return(norm(t(myFunction_orthogonalize(subScalars)) %*% toProject_scalars, type = '2'))
}
Do you need a recursive algorithm because you have to work with very large dimensions? Or can you invert an $n\times n$ symmetric matrix (or Hermitian one, if working over $\mathbb C$) without too much rounding error and in reasonable time?
In that case, you can see that the norm of the projection $\vec v\,'$ of $\vec v\in \mathbb V$ onto the $n$-dimensional subspace $S\subseteq \mathbb V$ can be obtained from $$\|\vec v\,'\|^2=\vec \beta\cdot M^{-1}\cdot \vec \beta\,^T,$$
where $$\vec \beta=\big(\begin{matrix}\langle \vec v,\vec u_1\rangle&\langle \vec v,\vec u_2\rangle &\ldots &\langle \vec v,\vec u_n\rangle\\\end{matrix}\big)$$ and $$M=\left(\begin{matrix}\langle \vec u_1,\vec u_1\rangle&\langle \vec u_1,\vec u_2\rangle&\cdots&\langle \vec u_1,\vec u_n\rangle\\\langle\vec u_2,\vec u_1\rangle&\langle \vec u_2,\vec u_2\rangle&\cdots&\langle \vec u_2,\vec u_n\rangle\\\vdots&\vdots&\ddots&\vdots\\\langle\vec u_n,\vec u_1\rangle&\langle \vec u_n,\vec u_2\rangle&\cdots&\langle \vec u_n,\vec u_n\rangle\\\end{matrix}\right)$$ (assuming that the scalars are real numbers; if not, the transpose needs to be changed to the conjugate-transpose and $M$ is not symmetrical but Hermitian).
Proof: (For the real case.) Since $\vec v\,'$ is actually in $S$, you can say that $$\vec v\,'=\sum_{i=1}^n \alpha_i \vec u_i$$ (all sums in the following go from $1$ to $n$) for some scalars $\alpha_i$ $(*)$. Let's put them all together in a row vector $$\vec\alpha=\big(\begin{matrix}\alpha_1&\alpha_2&\ldots&\alpha_n\\\end{matrix}\big).$$
Also, since $\vec v-\vec v\,'$ lies in the orthogonal space of $S$ and $\vec u_i\in S$ for all $i=1,\ldots,n$, it is true for all $i$ that $\langle \vec v-\vec v\,',\vec u_i\rangle=0$, that is $$\langle \vec v,\vec u_i\rangle=\langle \vec v\,',\vec u_i\rangle.$$
Now, we see that $$\beta_j=\langle \vec v,\vec u_j\rangle=\langle \vec v\,',\vec u_j\rangle=\left\langle \sum_i \alpha_i \vec u_i,\vec u_j\right\rangle=\sum_i \alpha_i \langle\vec u_i,\vec u_j\rangle=\sum_i \alpha_i M_{ij}=\big(\vec\alpha\cdot M\big)_j.$$ In matrix notation: $$\vec\alpha \cdot M=\vec \beta,$$ and so $$\vec\alpha =\vec \beta \cdot M^{-1}.$$ But we need $\|\vec v\,'\|=\sqrt{\langle \vec v\,',\vec v\,'\rangle}$. Now, $$\langle \vec v\,',\vec v\,'\rangle=\left\langle\sum_i \alpha_i \vec u_i,\sum_j \alpha_j \vec u_j\right\rangle=\sum_i \sum_j \alpha_i \langle \vec u_i,\vec u_j \rangle \alpha_j=\sum_i \alpha_i \left(\sum_j M_{ij} \alpha_j\right)=$$ $$=\sum_i \alpha_i \big(M\cdot \vec \alpha\,^T\big)_i=\vec \alpha\cdot M\cdot \vec \alpha\,^T.$$
And finally, since $\vec \alpha=\vec \beta\cdot M^{-1}$ and $M^T=M$, we have $$\|\vec v\,'\|^2=\vec \beta\cdot M^{-1}\cdot M \cdot \left(\vec \beta\cdot M^{-1}\right)^T=\vec \beta \cdot M^{-1} \cdot\vec \beta\,^T,$$ as desired.
$(*)$ Actually, this holds whenever $S$ is generated by the vectors $\vec u_1,\ldots,\vec u_n$; but if $\big(\vec u_1,\ldots,\vec u_n\big)$ is linearly independent, that is, a basis of $S$, the scalars $\alpha_i$ are uniquely determined. Moreover, this assumption is essential for the matrix $M$ defined above to be invertible.
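If inverting $M$ is feasible, the formula above can be checked numerically in a few lines of base R. The vectors `u1`, `u2`, `v` below are made-up example data (any unit vectors would do); `solve(M, t(beta))` computes $M^{-1}\vec\beta\,^T$ without forming the inverse explicitly, which is numerically preferable:

```r
# Example basis of unit (but non-orthogonal) vectors, and a unit vector v.
# These particular vectors are just illustrative data.
u1 <- c(1, 0, 0)
u2 <- c(1, 1, 0) / sqrt(2)   # unit vector, not orthogonal to u1
v  <- c(1, 1, 1) / sqrt(3)

U <- cbind(u1, u2)
M    <- t(U) %*% U           # Gram matrix of scalar products <u_i, u_j>
beta <- t(v)  %*% U          # row vector of scalar products <v, u_i>

# ||v'||^2 = beta . M^{-1} . beta^T
projNormSq <- as.numeric(beta %*% solve(M, t(beta)))
projNorm   <- sqrt(projNormSq)

# Cross-check against an explicit projection onto span(u1, u2):
P <- U %*% solve(M, t(U))    # projection matrix onto the subspace
stopifnot(abs(projNorm - sqrt(sum((P %*% v)^2))) < 1e-12)
```

Here the subspace is the $xy$-plane, so the projection of $\vec v$ is $(1,1,0)/\sqrt 3$ and `projNormSq` comes out as $2/3$, matching the formula.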