Effect of a change in a column of a matrix on its determinant and eigenvalues


I have a matrix $R$ from which another matrix $H$ is computed as $$ H(R) =\frac { (R R^T+ sI)^{1/2} } { \operatorname{trace}\left((R R^T+ sI)^{1/2}\right)}, $$ where $s$ is a small positive constant.

Now one column of $R$, say column $j$, is changed by $\Delta$, i.e. $R_{*j} \leftarrow R_{*j}+\Delta$. How can I compute the resulting change in $\det(H)$? Also, how can I compute $\min(\operatorname{eig}(H))$ and $\max(\operatorname{eig}(H))$, or even $\operatorname{cond}(H)$?
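For what it's worth, all of these quantities are straightforward to evaluate numerically for a given $R$. A minimal NumPy sketch, assuming the formula above (the helper name `H_of_R` and the value $s=10^{-3}$ are illustrative choices):

```python
import numpy as np

def H_of_R(R, s=1e-3):
    """Compute H(R) = (R R^T + s I)^{1/2} / trace((R R^T + s I)^{1/2})."""
    M = R @ R.T + s * np.eye(R.shape[0])
    # symmetric square root of M via its eigendecomposition
    w, V = np.linalg.eigh(M)
    Msqrt = (V * np.sqrt(w)) @ V.T
    return Msqrt / np.trace(Msqrt)

rng = np.random.default_rng(0)
R = rng.standard_normal((4, 6))
H = H_of_R(R)
eigs = np.linalg.eigvalsh(H)
print(np.linalg.det(H), eigs.min(), eigs.max(), np.linalg.cond(H))
```

Note that by construction $\operatorname{trace}(H)=1$, so the eigenvalues of $H$ are positive and sum to one.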

Is computing $ \det(H(R)^{-2}) $ easier? My actual aim is to solve $$ \min_R \det\left(H(R)^{-2}\right). $$
How can I find $ \frac {\partial \det(H(R)^{-2})} {\partial R} $?

Can I assume $ \operatorname{trace}\left((R R^T+ sI)^{1/2}\right)=\text{constant} $ when minimizing $ \det(H(R)^{-2}) $?
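Regarding the last two questions: since $H$ is symmetric positive definite, $\det(H(R)^{-2}) = \det(H(R))^{-2}$, and the trace in the denominator does depend on $R$, so treating it as constant changes the objective. Before deriving an analytic expression for the gradient, it can be useful to have a central finite-difference gradient to check any candidate formula against. A sketch (the helper `H_of_R` and the step size `eps` are my own choices):

```python
import numpy as np

def H_of_R(R, s=1e-3):
    M = R @ R.T + s * np.eye(R.shape[0])
    w, V = np.linalg.eigh(M)
    Msqrt = (V * np.sqrt(w)) @ V.T
    return Msqrt / np.trace(Msqrt)

def g(R):
    # det(H^{-2}) = det(H)^{-2} for symmetric positive definite H
    return np.linalg.det(H_of_R(R)) ** -2

rng = np.random.default_rng(2)
R = rng.standard_normal((3, 4))
eps = 1e-6
grad = np.zeros_like(R)
for i in range(R.shape[0]):
    for j in range(R.shape[1]):
        E = np.zeros_like(R)
        E[i, j] = eps
        # central finite difference in the (i, j) coordinate direction
        grad[i, j] = (g(R + E) - g(R - E)) / (2 * eps)
```

This is only a numerical check, not a closed-form derivative, but it makes it easy to test whether dropping the trace term from the gradient is a valid simplification.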

Thank you.

BEST ANSWER

It looks like this problem falls into the category of smooth optimization of a function of the eigenvalues of a symmetric matrix, i.e., the constrained optimization problem

\begin{align} \min_{X,U,\Lambda} & f(\Lambda) \\ \mathrm{such~that~} & XU = U\Lambda \mathrm{~is~the~eigenvalue~decomposition~of~X}. \end{align}

You can apply Newton's method to your optimization problem via eigenvalue sensitivity techniques and the multidimensional chain and product rules. Basically, $$X_{k+1} = X_k - [d^2f]^{-1}df,$$ where $$df=f'(\Lambda)\,d\Lambda,$$ $$d^2 f = f'(\Lambda)\,d^2\Lambda + f''(\Lambda)\,d\Lambda_1\,d\Lambda_2.$$

To determine the eigenvalue and eigenvector sensitivities $d\Lambda$ and $dU$, differentiate the eigenvalue equation to obtain the first derivatives of the eigenvalue and eigenvector matrices, $$d\Lambda = I \circ (U^T ~dX~ U),$$ $$dU = UC,$$ where $I \circ$ denotes the Hadamard product with the identity (i.e., keep the diagonal entries of the matrix and set the off-diagonal entries to zero), and the coefficient matrix is $$C_{ij} = \begin{cases} \dfrac{u_i^T \,dX\, u_j}{\lambda_j - \lambda_i}, & i \neq j, \\ 0, & i=j. \end{cases}$$ For details see here or here. Differentiating again and using the product rule yields the second derivative, $$ d^2 \Lambda = 2 I \circ (dU_2^T ~dX_1~ U).$$ Here $dX_1$ is the first matrix direction in which you differentiated. In other words (abusing notation), $$\frac{d^2 \Lambda}{dX_1 dX_2} = dX_2^T [2 I \circ (dU_2^T ~dX_1~ U)].$$
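The first-order formula for $d\Lambda$ is easy to sanity-check numerically against a finite-difference perturbation of the eigenvalues. A sketch, assuming distinct eigenvalues so that the sorted orderings from `eigh` and `eigvalsh` match:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
X = (A + A.T) / 2        # symmetric; eigenvalues distinct with probability 1
B = rng.standard_normal((n, n))
dX = (B + B.T) / 2       # symmetric perturbation direction

lam, U = np.linalg.eigh(X)
# dLambda = I o (U^T dX U): keep only the diagonal entries
dlam = np.diag(U.T @ dX @ U)

# forward finite difference of the (sorted) eigenvalues
eps = 1e-6
lam_plus = np.linalg.eigvalsh(X + eps * dX)
fd = (lam_plus - lam) / eps
print(np.max(np.abs(fd - dlam)))  # small: FD agrees with the formula
```

The same finite-difference approach can be used to test the eigenvector and second-derivative formulas before building them into a Newton iteration.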

These formulas only hold when the eigenvalues are all distinct (if there are repeated eigenvalues, the eigenvalue decomposition is not unique). However, this is not a serious obstacle: the following paper shows that Newton's method for the optimization problem converges correctly using these formulas even when the solution it converges to has repeated eigenvalues,

Overton, Michael L., and Robert S. Womersley. "Second derivatives for optimizing eigenvalues of symmetric matrices." SIAM Journal on Matrix Analysis and Applications 16.3 (1995): 697-718. http://ftp.cs.nyu.edu/cs/faculty/overton/papers/pdffiles/eighess.pdf