Non-negative inverse eigenvalue problem


I need to construct a non-negative matrix with desired eigenvalues. To that end, I came up with a block matrix of the following form:

$$ \mathbf{M} = \begin{bmatrix} \mathbf{A} & \mathbf{b} \\ \mathbf{c}^T & d \end{bmatrix} $$

where the matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ is a free parameter, and the vectors $\mathbf{b}, \mathbf{c} \in \mathbb{R}^{n}$ and the scalar $d$ depend on $\mathbf{A}$ and on a set of desired eigenvalues (given as the diagonal elements of a matrix $\mathbf{\Lambda} \in \mathbb{R}^{n \times n}$).

The equation that relates $\mathbf{c}$ to $\mathbf{A}$ and $\mathbf{\Lambda}$ is quite complicated and involves the inverse of the solution to a Sylvester equation, namely

$$ \mathbf{c^T} = \left[\text{diag}(\mathbf{\Lambda}) - \mathbf{1^T}(\text{tr}(\mathbf{\Lambda}-\mathbf{A}) + \sigma)\right]\mathbf{K^{-1}} $$

where $\mathbf{K}$ is the solution to the following Sylvester equation:

$$ \mathbf{A}\mathbf{K}-\mathbf{K}\mathbf{\Lambda} = (\mathbf{A}-\sigma\mathbf{I}_n)\mathbf{1} \cdot\mathbf{1^T} $$
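As a numerical sanity check, this Sylvester equation can be handed to `scipy.linalg.solve_sylvester`, which solves $AX + XB = Q$, so one passes $B = -\mathbf{\Lambda}$. A minimal sketch, assuming a diagonal $\mathbf{\Lambda}$ and a random non-negative $\mathbf{A}$ (both placeholder choices, not part of the original problem):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 3
sigma = 5.0                      # desired common row sum (placeholder value)
A = rng.random((n, n))           # free non-negative parameter (placeholder)
Lam = np.diag([1.0, 2.0, 3.0])   # desired eigenvalues other than sigma

ones = np.ones((n, 1))
Q = (A - sigma * np.eye(n)) @ ones @ ones.T   # right-hand side (A - sigma I) 1 1^T

# solve_sylvester solves A X + X B = Q, so B = -Lam gives A K - K Lam = Q
K = solve_sylvester(A, -Lam, Q)
residual = np.linalg.norm(A @ K - K @ Lam - Q)
```

The equation has a unique solution as long as the spectra of $\mathbf{A}$ and $\mathbf{\Lambda}$ are disjoint, which holds for a generic choice of $\mathbf{A}$.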

Questions:

How to find constraints on parameter $\mathbf{A}$ which will render $\mathbf{M}$ non-negative?

$\mathbf{A}$ itself must of course be non-negative, and $\mathbf{b}$ and $d$ depend on $\mathbf{A}$ in a straightforward way, so that part is easy. I have no idea, however, what to do with $\mathbf{c}$, because it involves $\mathbf{K}^{-1}$, and I only have a closed form for $\text{vec}(\mathbf{K})$, not for $\mathbf{K}^{-1}$. How could I approach the question analytically? Or is there an efficient numerical way to solve the problem (e.g. given $\mathbf{\Lambda}$, find the $\mathbf{A}$ that maximizes the smallest element of $\mathbf{M}$)?
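Absent an analytic handle on $\mathbf{K}^{-1}$, one numerical route is derivative-free optimization: parametrize $\mathbf{A}$, build $\mathbf{M}$ from the similarity relation in the Details section (which sidesteps $\mathbf{c}$ and $\mathbf{K}^{-1}$ entirely), and maximize the smallest entry of $\mathbf{M}$. A hedged sketch using `scipy.optimize.minimize` with Nelder–Mead; the optimizer choice, the $|\cdot|$ reparametrization to keep $\mathbf{A}$ entrywise non-negative, and all numeric values are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import solve_sylvester
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2
sigma = 4.0                             # desired common row sum (placeholder)
Lam = np.diag([1.0, 2.0])               # desired eigenvalues besides sigma
ones = np.ones((n, 1))

def build_M(A):
    """Assemble M from A via K and the similarity relation in the post."""
    Q = (A - sigma * np.eye(n)) @ ones @ ones.T
    K = solve_sylvester(A, -Lam, Q)     # solves A K - K Lam = Q
    P = np.block([[K, ones], [ones.T, np.ones((1, 1))]])
    D = np.diag([1.0, 2.0, sigma])
    return P @ D @ np.linalg.inv(P)

def objective(x):
    A = np.abs(x).reshape(n, n)         # |.| keeps A entrywise non-negative
    try:
        M = build_M(A)
    except np.linalg.LinAlgError:
        return np.inf                   # singular K or P: infeasible point
    if not np.all(np.isfinite(M)):
        return np.inf
    return -M.min()                     # maximize the smallest entry of M

x0 = rng.random(n * n)                  # random starting guess for A
obj0 = objective(x0)
res = minimize(objective, x0, method="Nelder-Mead")
M_best = build_M(np.abs(res.x).reshape(n, n))
```

Since $\mathbf{M}$ has row sums $\sigma$ by construction, its smallest entry is bounded above by $\sigma/(n+1)$, so the objective cannot diverge.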


Details:

Here's the dependence of $\mathbf{b}$ and $d$ on $\mathbf{A}$ and $\mathbf{\Lambda}$:

$$ \mathbf{b} = (\sigma \mathbf{I}_n - \mathbf{A})\mathbf{1} $$

$$ d = \text{tr}(\mathbf{\Lambda} - \mathbf{A}) + \sigma $$

and the vectorized (column-major) solution for $\mathbf{K}$, using $\text{vec}(\mathbf{K}\mathbf{\Lambda}) = (\mathbf{\Lambda}^T \otimes \mathbf{I}_n)\text{vec}(\mathbf{K})$ and $\text{vec}\big((\mathbf{A}-\sigma\mathbf{I}_n)\mathbf{1}\mathbf{1}^T\big) = -(\mathbf{1} \otimes \mathbf{b})$:

$$ \text{vec}(\mathbf{K}) = -(\mathbf{I}_n \otimes \mathbf{A} - \mathbf{\Lambda}^T \otimes \mathbf{I}_n)^{-1}(\mathbf{1} \otimes \mathbf{b}) $$
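The Kronecker form is easy to check against a direct Sylvester solve. With column-major vectorization, $\text{vec}(\mathbf{A}\mathbf{K}) = (\mathbf{I}\otimes\mathbf{A})\text{vec}(\mathbf{K})$ and $\text{vec}(\mathbf{K}\mathbf{\Lambda}) = (\mathbf{\Lambda}^T\otimes\mathbf{I})\text{vec}(\mathbf{K})$; note also that $\text{vec}\big((\mathbf{A}-\sigma\mathbf{I}_n)\mathbf{1}\mathbf{1}^T\big) = -(\mathbf{1}\otimes\mathbf{b})$. A small numerical sketch (diagonal $\mathbf{\Lambda}$ and random non-negative $\mathbf{A}$ assumed):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
n = 3
sigma = 5.0
A = rng.random((n, n))
Lam = np.diag([1.0, 2.0, 3.0])
I = np.eye(n)
ones = np.ones((n, 1))

Q = (A - sigma * I) @ ones @ ones.T            # RHS of the Sylvester equation

# Kronecker form: (I (x) A - Lam^T (x) I) vec(K) = vec(Q), column-major vec
vecK = np.linalg.solve(np.kron(I, A) - np.kron(Lam.T, I), Q.flatten(order="F"))
K_kron = vecK.reshape(n, n, order="F")

K_syl = solve_sylvester(A, -Lam, Q)            # direct solve of A K - K Lam = Q
```

Both routes should agree to machine precision whenever the spectra of $\mathbf{A}$ and $\mathbf{\Lambda}$ are disjoint.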

Given these relations, the resulting matrix $\mathbf{M}$ is equivalent to $$ \mathbf{M} = \begin{bmatrix} \mathbf{A} & \mathbf{b} \\ \mathbf{c}^T & d \end{bmatrix} = \begin{bmatrix} \mathbf{K} & \mathbf{1} \\ \mathbf{1}^T & 1 \end{bmatrix} \begin{bmatrix} \mathbf{\Lambda} & \mathbf{0} \\ \mathbf{0} & \sigma \end{bmatrix} \begin{bmatrix} \mathbf{K} & \mathbf{1} \\ \mathbf{1}^T & 1 \end{bmatrix}^{-1} $$
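This similarity form also gives a direct way to test a candidate $\mathbf{A}$: build $\mathbf{K}$, form $\mathbf{M}$, and confirm that the spectrum is $\{\lambda_1,\dots,\lambda_n,\sigma\}$ and that every row sums to $\sigma$ (the latter follows because the all-ones vector is the column of the similarity transform paired with $\sigma$). A sketch, assuming a diagonal $\mathbf{\Lambda}$ and a random non-negative $\mathbf{A}$ for illustration:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
n = 3
sigma = 5.0
A = rng.random((n, n))
Lam = np.diag([1.0, 2.0, 3.0])
ones = np.ones((n, 1))

Q = (A - sigma * np.eye(n)) @ ones @ ones.T
K = solve_sylvester(A, -Lam, Q)                  # A K - K Lam = Q

P = np.block([[K, ones], [ones.T, np.ones((1, 1))]])
D = np.diag([1.0, 2.0, 3.0, sigma])
M = P @ D @ np.linalg.inv(P)                     # M = P diag(Lam, sigma) P^{-1}

eigs = np.sort(np.linalg.eigvals(M).real)
row_sums = M.sum(axis=1)
```

Whether $\mathbf{M}$ is also entrywise non-negative for this $\mathbf{A}$ is exactly the open question; the construction only guarantees the spectrum and the row sums.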

$\mathbf{\Lambda}$ need not be diagonal, but its eigenvalues are given as an input argument.


UPD: OK, sorry for the confusing question. I'll try to clear things up with a $2 \times 2$ special case. It can be shown that a $2 \times 2$ matrix of the form

$$ \mathbf{M} = \begin{bmatrix} a & \sigma - a \\ a - \lambda & \sigma - (a - \lambda) \end{bmatrix} $$

has a common row sum $\sigma$ and eigenvalues $\{\lambda, \sigma\}$ for any choice of $a$. Now, because all the equations are scalar, it is easy to find constraints on $a$ and $\lambda$ that result in a non-negative $\mathbf{M}$ by solving a simple system of inequalities. The result: for $\mathbf{M}$ to be non-negative,

$$ \begin{cases} 0 \le a \le \sigma \\ \lambda \le a \le \lambda + \sigma \\ -\sigma < \lambda < \sigma \end{cases} $$

The original question is an attempt to generalize this problem to $(n+1)\times(n+1)$, where $a$ becomes $\mathbf{A} \in \mathbb{R}^{n \times n}$ and so forth. You can see that the equations for $\mathbf{b}$, $\mathbf{c}$ and $d$ in the first section of the post collapse to these scalar equations for the entries of $\mathbf{M}$ when $n$ is set to 1.
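The $2 \times 2$ claim is easy to spot-check numerically: pick an $(a, \lambda, \sigma)$ satisfying the inequalities and confirm the non-negativity, row sums, and spectrum. A minimal sketch with arbitrary values:

```python
import numpy as np

sigma, lam, a = 3.0, 1.0, 2.0     # satisfies 0 <= a <= sigma, lam <= a <= lam + sigma, -sigma < lam < sigma
M = np.array([[a,       sigma - a],
              [a - lam, sigma - (a - lam)]])

nonneg = (M >= 0).all()                       # entrywise non-negativity
row_sums = M.sum(axis=1)                      # should both equal sigma
eigs = np.sort(np.linalg.eigvals(M).real)     # should be {lam, sigma}
```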

Answer:

Forgive me if I have completely misunderstood you, but it seems in the comments you are after a matrix (call it $\mathbf{M}$) that has:

  • some desired eigenvalues
  • equal row sum.

If those are the only constraints, consider an arbitrary square diagonalizable matrix $\mathbf{M}$: $$\mathbf{M}=\mathbf{S}\mathbf{D}\mathbf{S}^{-1}.$$ The column vectors of $\mathbf{S}$ are the right eigenvectors of $\mathbf{M}$, and $\mathbf{D}$ is the diagonal matrix containing the eigenvalues of $\mathbf{M}$. To construct your $\mathbf{M}$, simply fill $\mathbf{D}$ with the desired eigenvalues. Then, to ensure that the row sums of $\mathbf{M}$ are equal, set every element of one of the column vectors of $\mathbf{S}$ (call it $E_1$) equal to $1$, fill up the rest of $\mathbf{S}$ however you please so long as $\det[\mathbf{S}]\ne0$, and finally compute the inverse $\mathbf{S}^{-1}$; you then have your $\mathbf{M}$. The common row sum equals the eigenvalue (call it $e_1$) corresponding to $E_1$, as can be seen by noting that in $$e_1E_1=\mathbf{M}E_1$$ the right-hand side is a column vector containing the row sums, while the left-hand side is a column vector whose every entry is $e_1$.
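This recipe is straightforward to code: fix one eigenvector to the all-ones vector, fill the rest of $\mathbf{S}$ arbitrarily (keeping it invertible), and conjugate. A sketch with placeholder eigenvalues and a random remainder of $\mathbf{S}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
sigma = 5.0
eigenvalues = [sigma, 1.0, 2.0]               # first entry pairs with the all-ones eigenvector
D = np.diag(eigenvalues)

# First column of S is the all-ones eigenvector E_1; the rest is arbitrary.
S = np.column_stack([np.ones(n), rng.random((n, n - 1))])
assert abs(np.linalg.det(S)) > 1e-12          # S must be invertible

M = S @ D @ np.linalg.inv(S)

row_sums = M.sum(axis=1)                      # every row sums to e_1 = sigma
eigs = np.sort(np.linalg.eigvals(M).real)
```

Note that nothing here enforces entrywise non-negativity of $\mathbf{M}$; only the spectrum and the common row sum are guaranteed.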

Example:

Take $\mathbf{D}$ as the matrix containing your given set of eigenvalues on the diagonal: $$\mathbf{D}= \left[ \begin{array}{ccc} \sigma&0&0\\ 0&1&0 \\ 0&0&2\end{array} \right], $$ then construct $\mathbf{S}$: $$\mathbf{S}= \left[ \begin{array}{ccc} 1&2&3\\ 1&3&4 \\ 1&4&6\end{array} \right],$$ then find the inverse: $$ \mathbf{S}^{-1}=\left[ \begin{array}{ccc} 2&0&-1\\ -2&3&-1 \\ 1&-2&1\end{array} \right], $$ then build $\mathbf{M}$: $$ \mathbf{M}=\mathbf{S}\mathbf{D}\mathbf{S}^{-1}=\left[ \begin{array}{ccc} 2\,\sigma+2&-6&-\sigma+4 \\ 2\,\sigma+2&-7&-\sigma+5\\ 2\,\sigma+4&-12&-\sigma+8\end{array} \right], $$ $$\text{row sum}=\sigma.$$
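The arithmetic above can be reproduced numerically; for a concrete value of $\sigma$ (say $\sigma = 3$, an assumption made only for the check), the product matches the closed-form entries:

```python
import numpy as np

sigma = 3.0
D = np.diag([sigma, 1.0, 2.0])
S = np.array([[1.0, 2.0, 3.0],
              [1.0, 3.0, 4.0],
              [1.0, 4.0, 6.0]])
Sinv = np.linalg.inv(S)

M = S @ D @ Sinv

# Closed-form entries of M from the worked example above
expected = np.array([[2*sigma + 2,  -6.0, -sigma + 4],
                     [2*sigma + 2,  -7.0, -sigma + 5],
                     [2*sigma + 4, -12.0, -sigma + 8]])
```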

I'm sure this is not what you're asking for, as I ignored the Sylvester equation etc., but I'll post it just in case. I'll delete it when you highlight my ignorance...