Minimum bound of integral of $f^2$ subject to inner product constraints


I'm copying this problem from AoPS (the work I have done so far is also posted there); I wanted to get some more exposure to it from a different audience.

Given $f\in L^2[0,1]$ satisfying $\int_0^1 f(x)\, dx=\int_0^1 xf(x)\, dx=\cdots=\int_0^1 x^nf(x)\, dx=1$ find the minimum value of $\int_0^1 f(x)^2\, dx$.

From some pattern recognition, I believe the answer is $(n+1)^2$. The progress I have so far follows a method already posted. Define $P_n(x)$ to be a polynomial of degree $n$ satisfying, for every $i\in\{0,1,\ldots,n\}$, $$\int_0^1 P_n(x)x^i\, dx=1.$$ This polynomial exists and can be found explicitly by recognizing that the integral is an inner product.

Going back to more familiar ground with matrices: under the obvious bijection between polynomials of degree at most $n$ and vectors in $\mathbb{R}^{n+1}$, the inner product given by integrating the product over $[0,1]$ becomes $$x^TH_{n+1}y,$$ where $H_{n+1}$ is the Hilbert matrix. If $p$ is the vector corresponding to $P_n(x)$, then $p$ should satisfy $$p^TH_{n+1}y=\begin{pmatrix}1&\cdots&1\end{pmatrix}y$$ for all $y\in\mathbb{R}^{n+1}$, which is equivalent to $$p=H_{n+1}^{-1}\begin{pmatrix}1\\\vdots\\1\end{pmatrix}.$$

Looking at the Hilbert matrix wiki page, I see that $p$ can be computed explicitly from a rather unpleasant binomial expression. A proof using that could be interesting to see, but I'm not especially drawn to it, since the explicit formula for the inverse doesn't look very enlightening. Rather, as I will show below, the minimum value of $\int_0^1 f(x)^2\, dx$ is attained at $f(x)=P_n(x)$, so by linearity of the inner product the minimum value is $P_n(1)$, which equals $$\begin{pmatrix}1&\cdots&1\end{pmatrix}H_{n+1}^{-1}\begin{pmatrix}1\\\vdots\\1\end{pmatrix}$$

I think there might be some possibility of using the matrix determinant lemma to just skip ahead and calculate this value: since $\det(H_{n+1}+\mathbf{1}\mathbf{1}^T)=\det(H_{n+1})\left(1+\mathbf{1}^TH_{n+1}^{-1}\mathbf{1}\right)$, all one would need to do is calculate the determinant of the $(n+1)\times (n+1)$ matrix $A$ with $A_{ij}=\frac{i+j}{i+j-1}$. However, I really haven't had much luck trying to do something similar to the proof of Cauchy's determinant formula. I'm not sure if I'm missing some slick row operations that would make that feasible.
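As a sanity check on the conjectured value, here is a short exact-arithmetic computation (my own, not from the AoPS thread): solve $H_{n+1}\,p = \mathbf{1}$ with `Fraction`s and verify that $P_n(1)=\sum_i p_i$ equals $(n+1)^2$ for small $n$. The `solve` helper is a plain Gauss–Jordan elimination I wrote for this check.

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b exactly by Gauss-Jordan elimination over Fractions."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

for n in range(6):
    m = n + 1
    # Hilbert matrix H_{n+1}, entries 1/(i+j-1) with 1-based indices
    H = [[Fraction(1, i + j + 1) for j in range(m)] for i in range(m)]
    p = solve(H, [Fraction(1)] * m)  # coefficients of P_n
    # minimum value = P_n(1) = sum of the coefficients
    assert sum(p) == (n + 1) ** 2
```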


The proof that $f(x)=P_n(x)$ attains the minimum goes as follows. Let $g(x)=f(x)-P_n(x)$. Since $f$ and $P_n$ satisfy the same moment constraints, $g(x)$ is orthogonal to every polynomial of degree at most $n$, in particular to $P_n$. We have that \begin{align*} \langle g(x),g(x)\rangle &\geq 0\\ \langle f(x)-P_n(x),g(x)\rangle &\geq 0\\ \langle f(x),g(x)\rangle &\geq 0\\ \langle f(x),f(x)-P_n(x)\rangle &\geq 0\\ \langle f(x),f(x)\rangle &\geq \langle f(x),P_n(x)\rangle\\ \langle f(x),f(x)\rangle &\geq \langle g(x)+P_n(x),P_n(x)\rangle\\ \langle f(x),f(x)\rangle &\geq \langle P_n(x),P_n(x)\rangle\\ \end{align*} (the third and last lines use $\langle P_n(x),g(x)\rangle=0$), and equality is clearly attained when $f(x)=P_n(x)$.
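To make the equality case concrete, here is a small check of the $n=1$ instance with exact arithmetic (the specific polynomials are my own computation, not from the thread): $P_1(x)=6x-2$ solves the two constraints, and $q(x)=6x^2-6x+1$ (the shifted Legendre polynomial of degree $2$, up to scaling) is orthogonal to $1$ and $x$ on $[0,1]$, so $f=P_1+tq$ keeps the constraints while $\int_0^1 f^2 = 4 + t^2\|q\|^2 \geq 4 = (1+1)^2$.

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral over [0,1] of p(x) q(x), coefficients in ascending powers."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            total += Fraction(a) * Fraction(b) / (i + j + 1)
    return total

P1 = [-2, 6]     # P_1(x) = 6x - 2, solves the n = 1 constraints
q  = [1, -6, 6]  # 6x^2 - 6x + 1, orthogonal to 1 and x on [0, 1]

# P_1 meets both constraints and attains the bound (1+1)^2 = 4
assert inner(P1, [1]) == 1 and inner(P1, [0, 1]) == 1
assert inner(P1, P1) == 4

# perturbing along q preserves the constraints but only raises the norm
for t in (Fraction(1, 2), 3, -7):
    f = [a + t * b for a, b in zip(P1 + [0], q)]
    assert inner(f, [1]) == 1 and inner(f, [0, 1]) == 1
    assert inner(f, f) == 4 + t * t * inner(q, q)
```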


So it actually looks like the sum of the elements of the inverse of a Cauchy matrix is already a well-known result (proof here). There's a lot of machinery involved, so I'm still working through it. If anyone has other ideas, especially ones using either of the two approaches I started, that would be interesting.
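For reference, the known identity is: if $C_{ij}=\frac{1}{x_i+y_j}$ with the $x_i$ distinct and the $y_j$ distinct, then the sum of all entries of $C^{-1}$ equals $\sum_i x_i + \sum_j y_j$. For the Hilbert matrix $H_{n+1}$ one can take $x_i=i$ and $y_j=j-1$, giving $\frac{(n+1)(n+2)}{2}+\frac{n(n+1)}{2}=(n+1)^2$, matching the conjecture. A quick randomized exact-arithmetic check of the identity (the `solve` routine is a plain Gauss–Jordan elimination, my own helper):

```python
from fractions import Fraction
import random

def solve(A, b):
    """Solve A x = b exactly by Gauss-Jordan elimination over Fractions."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

random.seed(0)
for _ in range(20):
    n = random.randint(1, 6)
    xs = random.sample(range(1, 50), n)    # distinct nodes
    ys = random.sample(range(50, 100), n)  # distinct, x + y never zero
    C = [[Fraction(1, x + y) for y in ys] for x in xs]
    s = solve(C, [Fraction(1)] * n)  # s = C^{-1} 1, so sum(s) = 1^T C^{-1} 1
    assert sum(s) == sum(xs) + sum(ys)
```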