I've been trying to pin down the following intuitive statement:
Let $\pi$ be a $k$-dimensional subspace of $\mathbb R^N$ with ON basis $e_1,\ldots, e_k$, and extend to an ON basis $e_1,\ldots, e_k, f_1,\ldots, f_{N-k}$ of $\mathbb R^N$. Let $v_1,\ldots, v_{N-k}$ be vectors in $\pi$.
Now define the tilted plane $\bar\pi$ by $$\bar\pi=\{y+L(y) : y\in \pi\} $$ where $$ L(y):= \sum_{j=1}^{N-k}(v_j\cdot y)f_j$$ Thus $\bar\pi$ is $\pi$ after being ``tilted'' in the directions of the $v_j$.
The estimate I want is the following: $$\|P_\pi-P_{\bar\pi}\|\leqslant C\sum_{j=1}^{N-k}\|v_j\|^2$$ (or possibly with exponent $1$ instead of $2$) for a purely dimensional constant $C$ not depending upon the $v_j$, where the $P_{-}$ are the orthogonal projections onto the respective planes.
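For whatever it's worth, here is a quick numerical sanity check in NumPy (the setup and all variable names are my own; projections are computed the naive way as $B(B^tB)^{-1}B^t$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 6, 3

# Random orthonormal basis of R^N; the first k columns span pi.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
E, F = Q[:, :k], Q[:, k:]                    # e_1..e_k and f_1..f_{N-k}

# Tilt vectors: row j of V holds the e-coordinates of v_j.
V = 0.1 * rng.standard_normal((N - k, k))

# Basis of the tilted plane: zeta_i = e_i + sum_j (v_j . e_i) f_j.
Z = E + F @ V

def proj(B):
    """Orthogonal projection onto the column span of B."""
    return B @ np.linalg.solve(B.T @ B, B.T)

gap = np.linalg.norm(proj(E) - proj(Z), 2)   # operator norm of P_pi - P_pibar
C1 = np.linalg.norm(V, axis=1).sum()         # sum_j ||v_j||
C2 = (np.linalg.norm(V, axis=1) ** 2).sum()  # sum_j ||v_j||^2
print(gap, C1, C2)
```

In experiments like this the gap scales like the first power of the $\|v_j\|$ rather than the second, which is consistent with the exponent-$1$ version of the estimate.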
As for progress I've made, a basis $\zeta_1,\ldots,\zeta_k$ of $\bar\pi$ defined by $$ \zeta_i=e_i+L(e_i)=e_i+\sum_{j=1}^{N-k}(v_j\cdot e_i)f_j$$ yields the metric $$g=I+L^tL,$$ where $L$ is the $(N-k)\times k$ matrix whose $(i,j)$ entry is $v_i\cdot e_j$.
This yields an estimate of the desired form $$\|g-I\|\leqslant C\sum_{j=1}^{N-k}\|v_j\|^2.$$
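Both the identity $g=I+L^tL$ and this bound are easy to confirm numerically; here is a small NumPy check (the setup and names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
E, F = Q[:, :k], Q[:, k:]                    # ON bases of pi and pi-perp
V = 0.3 * rng.standard_normal((N - k, k))    # row j = e-coordinates of v_j
Z = E + F @ V                                # zeta_i = e_i + L(e_i)

G = Z.T @ Z                                  # Gram matrix g of the zeta basis
ok_identity = np.allclose(G, np.eye(k) + V.T @ V)        # g = I + L^t L
bound = (np.linalg.norm(V, axis=1) ** 2).sum()           # sum_j ||v_j||^2
ok_bound = np.linalg.norm(G - np.eye(k), 2) <= bound + 1e-12
print(ok_identity, ok_bound)  # both should be True
```

Here the bound even holds with $C=1$, since $\|L^tL\|$ is at most the squared Frobenius norm of $L$.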
If the RHS were small enough, we could apply the Neumann series to invert $g$ and obtain estimates, but I can't do this without choosing a new norm depending on the $v_j$, thus introducing non-dimensional dependencies into the constant $C$.
At any rate, just subtracting the projections applied to a vector $x$ of norm $1$ yields something like $$P_{\bar\pi}x-P_\pi x=(g^{ij}-\delta^{ij})(x\cdot e_i)e_j+g^{ij}(x\cdot L(e_i))e_j+g^{ij}(x\cdot e_i)L(e_j)+g^{ij}(x\cdot L(e_i))L(e_j).$$ So any estimate has to come from the $g^{ij}$, and I am at a loss for how to do this. Of course, an estimate on the inverse metric would yield an estimate on $g^{ij}-\delta^{ij}=g^{ik}(\delta_{kj}-g_{kj})$, taking care of the first term as well. A potential issue I've come across in my scratch work is that it's very easy to introduce higher powers of the $\|v_j\|$, which I can't control as far as I can tell.
Perhaps a related question is what can be said about estimating the norm of $(I+A)^{-1}$ where $A$ is symmetric. I've seen some posts around that give rather convoluted expressions involving $A^{-1}$, which I would rather not deal with if possible, since then I need estimates on the entries of the inverse.
Throughout all of this, I am quite shocked that this hasn't been more immediate; the statement is so intuitive after all!
Thanks so much!
A friend came up with the following simplified approach:
Let $T:=P_{\bar\pi}-P_\pi$, and let $C=\sum_{j=1}^{N-k}\|v_j\|$. We aim to show that $$\|Tx\|\leqslant \sqrt{2}\,C\|x\| $$ for all $x\in\mathbb R^N$.
It suffices to show this bound separately when $x\in \pi$ and when $x\in\pi^\perp$, since any $x\in\mathbb R^N$ has an orthogonal decomposition $x=x^{\mathrm T}+x^\perp$ with components in $\pi$ and $\pi^\perp$, allowing us to estimate $$\|Tx\|=\|Tx^\mathrm{T}+Tx^\perp\|\leqslant \|Tx^\mathrm{T}\|+\|Tx^\perp\|\leqslant C\|x^\mathrm{T}\|+C\|x^\perp\|\leqslant \sqrt{2}C\|x\|,$$ where the last inequality follows from Jensen.
Thus, in case $x\in\pi$ and $\|x\|=1$, we have $$ \|P_{\pi}x-P_{\bar\pi}x\|=\|x-P_{\bar\pi}x\|\leqslant\|x-y\| $$ for all $y\in\bar\pi$. In particular the inequality holds for $y=x+L(x)\in\bar\pi$, giving $$\|P_{\pi}x-P_{\bar\pi}x\|\leqslant\|L(x)\|=\Big(\sum_{j=1}^{N-k}(v_j\cdot x)^2\Big)^{1/2}\leqslant\Big(\sum_{j=1}^{N-k}\|v_j\|^2\Big)^{1/2}\leqslant C.$$
In case $x\in\pi^\perp$ and $\|x\|=1$, we have $$\|P_{\bar\pi} x-P_\pi x\|^2=\|P_{\bar\pi} x\|^2=\langle P_{\bar\pi} x,P_{\bar\pi} x\rangle =\frac{\langle x,P_{\bar\pi} x\rangle^2}{\|P_{\bar\pi} x\|^2}=\frac{\langle x,y+L(y)\rangle^2}{\|y+L(y)\|^2}$$ for some $y\in \pi$ (namely the $y$ with $P_{\bar\pi}x=y+L(y)$), which yields
$$\|P_{\bar\pi} x-P_\pi x\|\leqslant\left(\frac{\|L(y)\|^2}{\|y\|^2+\|L(y)\|^2}\right)^{1/2}=\left(\frac{\|L(\hat y)\|^2}{1+\|L(\hat y)\|^2}\right)^{1/2}\leqslant\|L(\hat y)\|\leqslant C,$$ where we have written $\hat y=y/\|y\|$ and used that $\langle x,y+L(y)\rangle=\langle x,L(y)\rangle\leqslant\|L(y)\|$ (since $x\perp\pi$ and by Cauchy–Schwarz) together with $\|y+L(y)\|^2=\|y\|^2+\|L(y)\|^2$.
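The two cases can also be checked numerically; here is a small NumPy sketch (the setup and names are my own) testing a unit vector in $\pi$ and one in $\pi^\perp$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
E, F = Q[:, :k], Q[:, k:]                  # ON bases of pi and pi-perp
V = 0.2 * rng.standard_normal((N - k, k))  # row j = e-coordinates of v_j
Z = E + F @ V                              # basis zeta_i of the tilted plane
C = np.linalg.norm(V, axis=1).sum()        # C = sum_j ||v_j||

def proj(B):
    """Orthogonal projection onto the column span of B."""
    return B @ np.linalg.solve(B.T @ B, B.T)

T = proj(Z) - proj(E)

# Case x in pi: take the unit vector x = e_1.
case_pi = np.linalg.norm(T @ E[:, 0])

# Case x in pi-perp: take the unit vector x = f_1.
case_perp = np.linalg.norm(T @ F[:, 0])
print(case_pi <= C, case_perp <= C)  # both should be True
```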
And that's it!