Pseudo-eigenvector times matrix inverse


Actually, I don't know what a good title for this question would be.

Here comes the simplified version of the question; let's call it case 1. As we know, for a non-singular Hermitian matrix $\textbf{A}$ with eigenvalue decomposition $\textbf{A}=\textbf{U}\Lambda \textbf{U}^H$, we have

$$\lambda_i \textbf{u}_i^H \textbf{A}^{-1} \textbf{u}_i=1,$$ where $\Lambda=\text{diag}\{\lambda_i \}$ and $\textbf{u}_i$ is the $i$th column of $\textbf{U}$. The proof is simple: $\textbf{U}$ is unitary, i.e. $\textbf{U}^{-1}=\textbf{U}^H$, which implies $\textbf{A}^{-1}=\textbf{U}\Lambda^{-1} \textbf{U}^H$. The identity then follows from the orthonormality of the columns of $\textbf{U}$.
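As a quick sanity check (not part of the proof), the case-1 identity can be verified numerically with NumPy; the random Hermitian test matrix below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random Hermitian non-singular matrix A = U diag(lam) U^H.
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X + X.conj().T            # Hermitian, so eigenvalues are real
lam, U = np.linalg.eigh(A)    # columns of U are orthonormal eigenvectors

Ainv = np.linalg.inv(A)

# lambda_i * u_i^H A^{-1} u_i should equal 1 for every i.
vals = [lam[i] * U[:, i].conj() @ Ainv @ U[:, i] for i in range(n)]
print(np.allclose(vals, 1.0))
```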

Here comes the generalized version, called case 2. I found that, under some conditions, if we have $\textbf{B}=\sum_i p_i \textbf{v}_i \textbf{v}_i^H$ (each $p_i$ is a positive scalar and each $\textbf{v}_i$ a vector), we get $$p_i \textbf{v}_i^H \textbf{B}^{-1} \textbf{v}_i=1,$$ where the $\textbf{v}_i$ are not necessarily orthogonal. But I don't know how to prove it. Does anyone know?
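A numerical experiment suggests what the needed condition is: with a square, nonsingular $\textbf{V}=[\textbf{v}_1,\dots,\textbf{v}_n]$ the identity holds to machine precision. A sketch, with arbitrary randomly chosen $p_i$ and $\textbf{v}_i$:

```python
import numpy as np

rng = np.random.default_rng(1)

# n vectors v_i in C^n (square, generically nonsingular V), positive weights p_i.
n = 4
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
p = rng.uniform(0.5, 2.0, size=n)

# B = sum_i p_i v_i v_i^H, with non-orthogonal, non-normalized v_i.
B = sum(p[i] * np.outer(V[:, i], V[:, i].conj()) for i in range(n))
Binv = np.linalg.inv(B)

vals = [p[i] * V[:, i].conj() @ Binv @ V[:, i] for i in range(n)]
print(np.allclose(vals, 1.0))
```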

Here comes my problem, case 3. I have a matrix $\textbf{C}=\sum_i p_i \textbf{v}_i \textbf{v}_i^H + \sigma \textbf{I}=\textbf{V}\textbf{P}\textbf{V}^H+\sigma \textbf{I}$, where $\textbf{V}=[\textbf{v}_1,\textbf{v}_2,\dots]$, $\textbf{P}=\text{diag}\{p_i\}$, the $\textbf{v}_i$ are neither orthogonal to each other nor normalized, and $\textbf{I}$ is the identity matrix. I know that for very small $\sigma$, $$p_i \textbf{v}_i^H \textbf{C}^{-1} \textbf{v}_i\approx 1.$$ I wonder if there is a way to quantify the gap $1-p_i \textbf{v}_i^H \textbf{C}^{-1} \textbf{v}_i$?
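One way to quantify the gap to first order, assuming $\textbf{B}=\textbf{V}\textbf{P}\textbf{V}^H$ is nonsingular and the case-2 identity $p_i \textbf{v}_i^H \textbf{B}^{-1} \textbf{v}_i=1$ holds, is the expansion $(\textbf{B}+\sigma\textbf{I})^{-1}=\textbf{B}^{-1}-\sigma\textbf{B}^{-2}+O(\sigma^2)$, which gives $1-p_i\textbf{v}_i^H\textbf{C}^{-1}\textbf{v}_i\approx\sigma\,p_i\,\textbf{v}_i^H\textbf{B}^{-2}\textbf{v}_i$. A numerical sketch (random, well-conditioned test data assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random test instance: square nonsingular V, positive weights, small sigma.
n = 4
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
p = rng.uniform(0.5, 2.0, size=n)
B = (V * p) @ V.conj().T           # B = V P V^H (columns of V scaled by p)
sigma = 1e-6
C = B + sigma * np.eye(n)

Binv = np.linalg.inv(B)
Cinv = np.linalg.inv(C)

gaps, approx = [], []
for i in range(n):
    v = V[:, i]
    gaps.append(1 - (p[i] * v.conj() @ Cinv @ v).real)                # exact gap
    approx.append(sigma * (p[i] * v.conj() @ Binv @ Binv @ v).real)   # sigma * p_i v^H B^{-2} v
print(gaps)
print(approx)   # matches gaps up to O(sigma^2)
```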


I apologize if this comes off as rambling, I am way too tired for math proper.

Take the matrix $${\bf B}= \sum_{i}p_i {\bf v}_i{\bf v}_i^H.$$ Assuming that $\bf B$ has an eigenvalue decomposition with orthonormal eigenvectors ${\bf u}_k$, each ${\bf v}_i$ can be expanded in that basis: $${\bf v}_i=\sum_k C_{i,k} {\bf u}_k.$$ Here the scalar $C_{i,k}={\bf u}_k^H{\bf v}_i$ is the projection of ${\bf v}_i$ onto ${\bf u}_k$. Since the ${\bf u}_k$ diagonalize $\bf B$, the cross terms ${\bf u}_k{\bf u}_l^H$ with $k\neq l$ cancel after summing over $i$, and $\bf B$ can be written as $${\bf B}=\sum_{i,k}p_i|C_{i,k}|^2{\bf u}_k{\bf u}_k^H.$$ Since ${\bf B}^{-1}$ is diagonal in the same eigenbasis, it can be written as $${\bf B}^{-1}=\sum_{l}L_{l}{\bf u}_l{\bf u}_l^H,$$ where the weighting of the $l$th eigenvector is the coefficient $L_l$. Looking at what this yields: $${\bf BB}^{-1}= \sum_{i,k}p_i|C_{i,k}|^2{\bf u}_k{\bf u}_k^H\sum_{l}L_{l}{\bf u}_l{\bf u}_l^H =\sum_{i,k,l}p_i|C_{i,k}|^2L_l{\bf u}_k{\bf u}_k^H{\bf u}_l{\bf u}_l^H.$$ In the middle of the vector part, the term ${\bf u}_k^H{\bf u}_l$ can be recognized as the inner product of these two vectors. Exploiting orthonormality, this equals $\delta_{k,l}$. So $${\bf BB}^{-1}=\sum_{i,k,l}p_i|C_{i,k}|^2L_l{\bf u}_k\delta_{k,l}{\bf u}_l^H =\sum_{i,k}p_i|C_{i,k}|^2L_k{\bf u}_k{\bf u}_k^H={\bf I}.$$ A nice thing, however, is that ${\bf I}= \sum_k{\bf u}_k{\bf u}_k^H$ for any complete orthonormal basis. Matching coefficients of ${\bf u}_k{\bf u}_k^H$ then shows that $L_k= \frac{1}{\sum_i p_i|C_{i,k}|^2}$. And even though all of this work seems like it is for nothing, we are almost finished with case 2.
The statement to prove is $$p_i{\bf v}_i^H{\bf B}^{-1}{\bf v}_i=1.$$ Using the expansions above for ${\bf B}^{-1}$ and for the vectors ${\bf v}_i$: $$p_i{\bf v}_i^H{\bf B}^{-1}{\bf v}_i=p_i\Big(\sum_{l} C_{i,l} {\bf u}_l\Big)^H\sum_{k}L_k{\bf u}_k{\bf u}_k^H\sum_j C_{i,j} {\bf u}_j$$ $$=p_i\sum_{k,l,j}C_{i,l}^*C_{i,j}L_k\,{\bf u}_l^H{\bf u}_k{\bf u}_k^H{\bf u}_j$$ $$=p_i\sum_{k,l,j}C_{i,l}^*C_{i,j}L_k\,\delta_{l,k}\delta_{k,j}$$ $$=p_i\sum_{k}|C_{i,k}|^2L_k.$$ If the ${\bf v}_i$ are mutually orthogonal, each ${\bf v}_i$ is proportional to a single eigenvector ${\bf u}_k$, so $L_k=\frac{1}{p_i|C_{i,k}|^2}$ for that one contributing $i$, only one term of the sum survives, and $$p_i\sum_{k}|C_{i,k}|^2L_k=p_i\frac{|C_{i,k}|^2}{p_i|C_{i,k}|^2}=p_i\frac{1}{p_i}=1.$$ Amazingly it works out. For the third case, you will want to go about an eigenvalue decomposition again, just as done above. And if you want to remove the eigenvectors ${\bf u}_k$ from the equation, you will be looking for stronger statements on the projection coefficients $C_{i,k}$. Hope this helps.
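The two ingredients this argument leans on, completeness of the eigenbasis and the spectral expansion of ${\bf B}^{-1}$, are easy to check numerically. A sketch with arbitrary random test data:

```python
import numpy as np

rng = np.random.default_rng(3)

# B = sum_i p_i v_i v_i^H, Hermitian positive definite for generic V.
n = 4
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
p = rng.uniform(0.5, 2.0, size=n)
B = (V * p) @ V.conj().T

lam, U = np.linalg.eigh(B)        # B = U diag(lam) U^H, columns orthonormal

# Completeness of the eigenbasis: I = sum_k u_k u_k^H.
completeness = sum(np.outer(U[:, k], U[:, k].conj()) for k in range(n))
print(np.allclose(completeness, np.eye(n)))

# Spectral expansion of the inverse: B^{-1} = sum_l (1/lam_l) u_l u_l^H.
Binv_spectral = sum(np.outer(U[:, l], U[:, l].conj()) / lam[l] for l in range(n))
print(np.allclose(Binv_spectral, np.linalg.inv(B)))
```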


I have found a proof for case 2 when $V=[v_1,v_2,\dots,v_n]$ is square and nonsingular, as follows.

$B$ is actually $VV^H$ if we absorb $\sqrt{p_i}$ into $v_i$, i.e. replace each $v_i$ by $\sqrt{p_i}\,v_i$.

We know $\operatorname{Tr}\big(v_i v_i^H (VV^H)^{-1}\big)=v_i^H (VV^H)^{-1}v_i$ by the cyclic property of the trace.

Since $V$ is nonsingular, we have $V^{-1}v_i=e_i$, where $e_i$ is the vector with a $1$ in the $i$th entry and zeros elsewhere. The reason is that $V^{-1} V=I$, so $V^{-1}v_i$ is the $i$th column of the identity matrix $I$.

Thus, since $(VV^H)^{-1}=V^{-H}V^{-1}$, we get $v_i^H (VV^H)^{-1}v_i=(V^{-1}v_i)^H (V^{-1}v_i)=e_i^H e_i=1$.
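For what it's worth, both steps of this proof check out numerically. A sketch with a random nonsingular $V$ (weights already absorbed into the columns):

```python
import numpy as np

rng = np.random.default_rng(4)

# Random square V; columns play the role of v_i with sqrt(p_i) absorbed.
n = 4
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Step 1: V^{-1} v_i = e_i, the i-th standard basis vector.
Vinv = np.linalg.inv(V)
print(np.allclose(Vinv @ V, np.eye(n)))

# Step 2: hence v_i^H (V V^H)^{-1} v_i = (V^{-1} v_i)^H (V^{-1} v_i) = 1.
Binv = np.linalg.inv(V @ V.conj().T)
vals = [V[:, i].conj() @ Binv @ V[:, i] for i in range(n)]
print(np.allclose(vals, 1.0))
```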