Does the following integral inequality hold?


For any square, symmetric, positive definite matrix $M\in\Re^{m\times{m}}$, a scalar $\gamma>0$, and a vector-valued function $\omega:[0,\gamma]\to\Re^{m}$, does the following inequality hold? \begin{equation}\label{eq:1} \gamma\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)~{d\beta}\geq\Big(\int_{0}^{\gamma}\omega(\beta)~d\beta\Big)^{T}~M~\Big(\int_{0}^{\gamma}\omega(\beta)~d\beta\Big). \tag{1} \end{equation}

To show that the claim is true, I use the Schur complement as follows. The matrix

\begin{equation} H(\beta)=\begin{bmatrix}{\omega^{T}(\beta)~M~\omega(\beta)} & \omega^{T}(\beta)\\\omega(\beta) & M^{-1}\end{bmatrix}\geq{0}, \end{equation}

since $M>{0}$ implies $M^{-1}>{0}$, and the Schur complement of the $M^{-1}$ block is $\omega^{T}(\beta)~M~\omega(\beta)-\omega^{T}(\beta)(M^{-1})^{-1}\omega(\beta)=0$. This shows why $H(\beta)\geq{0}$. Integrating $H$ over the interval $[0,\gamma]$ gives

$G(\gamma)=\int_{0}^{\gamma}H(\beta)~d\beta=\begin{bmatrix}{\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)}~d\beta & \int_{0}^{\gamma}\omega^{T}(\beta)~d\beta\\\int_{0}^{\gamma}\omega(\beta)~d\beta & {\gamma}M^{-1}\end{bmatrix}\geq{0}$.
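As a numerical sanity check (a sketch of my own; the Riemann-sum discretization and the particular $\omega$ are arbitrary choices), one can assemble $G(\gamma)$ from samples of $H(\beta)$ and confirm that it is positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(1)

m, gamma, n_steps = 2, 1.5, 500
A = rng.standard_normal((m, m))
M = A @ A.T + 0.1 * np.eye(m)        # symmetric positive definite
M_inv = np.linalg.inv(M)

d_beta = gamma / n_steps
G = np.zeros((m + 1, m + 1))
for b in np.linspace(0.0, gamma, n_steps, endpoint=False):
    w = np.array([np.sin(b), np.exp(-b)])    # an arbitrary omega(beta)
    H = np.zeros((m + 1, m + 1))
    H[0, 0] = w @ M @ w                      # (1,1) block of H(beta)
    H[0, 1:] = w
    H[1:, 0] = w
    H[1:, 1:] = M_inv
    G += H * d_beta                          # Riemann sum for G(gamma)

# every H(beta) is PSD, so the integral G(gamma) is PSD as well
print(np.linalg.eigvalsh(G).min() >= -1e-9)  # prints: True
```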

Applying the Schur complement once more, now to $G(\gamma)$, yields

$\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)~d\beta~-~\int_{0}^{\gamma}\omega^{T}(\beta)~d\beta~\big[\dfrac{1}{\gamma}M\big]~\int_{0}^{\gamma}\omega(\beta)~d\beta\geq{0}$

from which inequality \eqref{eq:1} follows. However, I would like to know whether inequality \eqref{eq:1} also holds for a positive semi-definite $M$. Instead of $M^{-1}$, I am using the Moore–Penrose pseudoinverse $M^{+}$ in the expression for $H(\beta)$. I am not sure whether this is the right way to approach the problem, so any suggestions are much appreciated.
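To build intuition, here is a quick numerical check of inequality \eqref{eq:1} (a sketch of my own; the random $M$, the particular $\omega$, and the Riemann-sum discretization are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

m, gamma, n_steps = 3, 2.0, 1000
A = rng.standard_normal((m, m))
M = A @ A.T + 0.1 * np.eye(m)            # symmetric positive definite

betas = np.linspace(0.0, gamma, n_steps, endpoint=False)
d_beta = gamma / n_steps
# an arbitrary smooth vector-valued omega(beta)
omega = np.stack([np.array([np.sin(b), np.cos(2 * b), b]) for b in betas])

# LHS: gamma * int omega^T M omega;  RHS: (int omega)^T M (int omega)
lhs = gamma * sum(w @ M @ w for w in omega) * d_beta
v = omega.sum(axis=0) * d_beta           # integral of omega over [0, gamma]
rhs = v @ M @ v

print(lhs >= rhs)   # prints: True
```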

Best answer:

I think I have figured out a solution; please correct me if the derivation outlined below is wrong. A positive semi-definite matrix $M\in\Re^{n\times{n}}$ can be expressed as $M=J\,D_{M}\,J^{T}$, where $J$ is orthogonal ($J^{T}=J^{-1}$) and $D_{M}$ is a diagonal matrix with the eigenvalues of $M$ on its diagonal.
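This decomposition can be checked numerically, e.g. with `numpy.linalg.eigh` (a sketch; the rank-deficient test matrix is my own construction):

```python
import numpy as np

rng = np.random.default_rng(2)

n, r = 4, 2
B = rng.standard_normal((n, r))
M = B @ B.T                          # PSD with rank r < n

eigvals, J = np.linalg.eigh(M)       # eigh returns orthonormal eigenvectors for symmetric M
D_M = np.diag(eigvals)

print(np.allclose(J @ D_M @ J.T, M))        # M = J D_M J^T, prints: True
print(np.allclose(J.T @ J, np.eye(n)))      # J^T = J^{-1}, prints: True
```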

$\gamma~\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)~d\beta=\gamma~\int_{0}^{\gamma}\omega^{T}(\beta)~J\,D_{M}\,J^{T}~\omega(\beta)~d\beta$.

Take a vector valued function $\omega^{*}(\beta)=J^{T}~\omega(\beta)$ and rewrite the above equation as

$ \gamma~\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)~d\beta=\gamma~\int_{0}^{\gamma}\omega^{*^T}(\beta)~D_{M}~\omega^{*}(\beta)~d\beta~\tag{2} $

Take $\omega^{*}(\beta)=\begin{bmatrix}\omega^{*}_{1}(\beta)\\\omega^{*}_{2}(\beta)\\\vdots\\\omega^{*}_{n}(\beta)\end{bmatrix}$ and $D_{M}=\mathrm{diag}(\lambda_{1},\lambda_{2},\cdots,\lambda_{r},0,\cdots,0)$,

with $M$ assumed to have $(n-r)$ zero eigenvalues, i.e. $\lambda_{i}=0$ for $i=r+1,r+2,\cdots,n$. Then equation (2) gives

\begin{equation}{\label{eq:2}} \gamma~\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)~d\beta=\gamma~\int_{0}^{\gamma}\Big[\sum_{i=1}^{r}\lambda_{i}\omega^{*^T}_{i}(\beta)~\omega^{*}_{i}(\beta)\Big]~d\beta\tag{3}. \end{equation}
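Each scalar component obeys the one-dimensional version of the question's inequality; here is a quick numerical sanity check of that scalar case (a sketch, with an arbitrary scalar function of my choosing):

```python
import numpy as np

gamma, n_steps = 2.0, 1000
d_beta = gamma / n_steps
betas = np.linspace(0.0, gamma, n_steps, endpoint=False)

w = np.sin(3 * betas)                # an arbitrary scalar omega*_i(beta)

# gamma * int w^2 d_beta  vs  (int w d_beta)^2
lhs = gamma * np.sum(w ** 2) * d_beta
rhs = np.sum(w * d_beta) ** 2

print(lhs >= rhs)   # prints: True
```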

As $\gamma~\int_{0}^{\gamma}\omega^{*^T}_{i}(\beta)~\omega^{*}_{i}(\beta)~d\beta~\geq~\Big[\int_{0}^{\gamma}\omega^{*}_{i}(\beta)~d\beta\Big]^{T}\Big[\int_{0}^{\gamma}\omega^{*}_{i}(\beta)~d\beta\Big]$ (the scalar case $M=1$ of the procedure given in the question), equation \eqref{eq:2} yields

\begin{equation}{\label{eq:3}} \begin{aligned} &\gamma~\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)~d\beta\geq\\ &\sum_{i=1}^{r}\lambda_{i}\Big[\int_{0}^{\gamma}\omega^{*}_{i}(\beta)~d\beta\Big]^T~\Big[\int_{0}^{\gamma}\omega^{*}_{i}(\beta)~d\beta\Big]\\ & =\sum_{i=1}^{n}\lambda_{i}\Big[\int_{0}^{\gamma}\omega^{*}_{i}(\beta)~d\beta\Big]^T~\Big[\int_{0}^{\gamma}\omega^{*}_{i}(\beta)~d\beta\Big]\\ & =\begin{bmatrix}\int_{0}^{\gamma}\omega^{*^T}_{1}(\beta)~d\beta,\cdots,\int_{0}^{\gamma}\omega^{*^T}_{n}(\beta)~d\beta\end{bmatrix}\begin{bmatrix}\lambda_{1} & 0 & 0 & \cdots & 0\\0 & \lambda_{2} & 0 & \cdots & 0\\\vdots & \vdots & \vdots & \vdots & \vdots\\0 & 0 & 0 & \cdots & \lambda_{n}\end{bmatrix}\begin{bmatrix}\int_{0}^{\gamma}\omega^{*}_{1}(\beta)~d\beta\\\vdots\\\int_{0}^{\gamma}\omega^{*}_{n}(\beta)~d\beta\end{bmatrix}\\ & =\Big[\int_{0}^{\gamma}\omega^{*^T}(\beta)~d\beta\Big]~D_{M}~\Big[\int_{0}^{\gamma}\omega^{*}(\beta)~d\beta\Big]\\ & =\Big[\int_{0}^{\gamma}\omega^{T}(\beta)~d\beta\Big]~J\,D_{M}\,J^{T}~\Big[\int_{0}^{\gamma}\omega(\beta)~d\beta\Big]\\ & =\Big[\int_{0}^{\gamma}\omega^{T}(\beta)~d\beta\Big]~M~\Big[\int_{0}^{\gamma}\omega(\beta)~d\beta\Big]. \end{aligned} \end{equation}

Therefore $\gamma~\int_{0}^{\gamma}\omega^{T}(\beta)~M~\omega(\beta)~d\beta\geq\Big[\int_{0}^{\gamma}\omega^{T}(\beta)~d\beta\Big]~M~\Big[\int_{0}^{\gamma}\omega(\beta)~d\beta\Big]$, so the inequality also holds for positive semi-definite $M$.
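Finally, the semi-definite case can also be checked numerically end to end (a sketch; the rank-deficient $M$ and the choice of $\omega$ are mine):

```python
import numpy as np

rng = np.random.default_rng(3)

n, r, gamma, n_steps = 4, 2, 1.0, 800
B = rng.standard_normal((n, r))
M = B @ B.T                              # positive semi-definite, rank r < n

betas = np.linspace(0.0, gamma, n_steps, endpoint=False)
d_beta = gamma / n_steps
# an arbitrary smooth omega(beta) in R^n
omega = np.stack([np.cos(np.arange(1, n + 1) * b) for b in betas])

lhs = gamma * sum(w @ M @ w for w in omega) * d_beta
v = omega.sum(axis=0) * d_beta           # integral of omega
rhs = v @ M @ v

print(lhs >= rhs)   # prints: True
```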