Existence of solution for matrix equation $ (I - \alpha A) \bar{x}=\bar{b}$


This is my first question in here and I would be really thankful if someone could help me with understanding the matter.

I am solving a matrix equation $(I-\alpha A) \bar{x} = \bar{b}$ for a positive vector $\bar{x}$. I don't know anything about the signs of the scalar $\alpha$ and the vector $\bar{b}$ (but I can always split the solution into several cases). The matrix $A$ is symmetric with $tr(A)=0$ (it is the adjacency matrix of an undirected graph, so it consists only of 0s and 1s, with 0s on the diagonal).

My approach would be simply to write $\bar{x} = (I-\alpha A)^{-1} \bar{b}$ whenever $(I-\alpha A)$ is nonsingular (as far as I understand, singularity occurs only for finitely many values of $\alpha$, namely when $1/\alpha$ equals an eigenvalue of $A$, so the set of "inappropriate" $\alpha$ has Lebesgue measure 0).
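This approach can be checked numerically. The sketch below (using NumPy; the path graph on 3 vertices is an assumed example, not from the question) verifies that $1/\alpha$ misses the spectrum of $A$ and then solves the system directly:

```python
import numpy as np

# Adjacency matrix of an assumed example graph: the path on 3 vertices.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

eigvals = np.linalg.eigvalsh(A)  # A is symmetric, so eigenvalues are real

alpha = 0.3
b = np.array([1.0, 2.0, 1.0])
M = np.eye(3) - alpha * A

# M is singular exactly when 1/alpha is an eigenvalue of A,
# i.e. for at most n values of alpha.
assert not np.isclose(eigvals, 1 / alpha).any()

x = np.linalg.solve(M, b)  # solves (I - alpha A) x = b without forming the inverse
assert np.allclose(M @ x, b)
```

Using `np.linalg.solve` rather than explicitly inverting the matrix is the standard numerically stable choice here.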

In most of the literature, however, it is required that $1 > \alpha \cdot \lambda_{max}(A)$ for the convergence of $I+\alpha A + \alpha^2 A^2+\ldots$ and for the solution $\bar{x}$ to exist (where $\lambda_{max}(A)$ is the largest eigenvalue of $A$). Moreover, in that case $det(I-\alpha A) = \prod_{i} (1-\alpha \lambda_i)$; if $\alpha > 0$ and $\alpha \cdot \lambda_{max}(A) < 1$, every factor is positive (for $\lambda_i \le 0$ we even have $1-\alpha \lambda_i \ge 1$), so the determinant is positive and the matrix is invertible.
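For an adjacency matrix (entrywise nonnegative), the Perron–Frobenius theorem gives spectral radius $\rho(A) = \lambda_{max}(A)$, so for $\alpha > 0$ the condition $\alpha \cdot \lambda_{max}(A) < 1$ really does imply convergence of the Neumann series to the inverse. A small numerical sketch (the example graph is assumed):

```python
import numpy as np

# Assumed example graph: the path on 3 vertices.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
lam_max = np.linalg.eigvalsh(A).max()   # = spectral radius for an adjacency matrix

alpha = 0.5 / lam_max                   # satisfies alpha * lam_max < 1
inv_exact = np.linalg.inv(np.eye(3) - alpha * A)

# Partial sums of the Neumann series I + aA + a^2 A^2 + ...
S = np.eye(3)
term = np.eye(3)
for _ in range(200):
    term = alpha * term @ A
    S += term

assert np.allclose(S, inv_exact)        # series converges to (I - alpha A)^{-1}
```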

So I have two questions here:

Q1: Why do I need convergence of $I+\alpha A + \alpha^2 A^2+\ldots$ here? I understand that if it converges, then $\sum_{k=0}^{+\infty} \alpha^k A^k = (I-\alpha A)^{-1}$, but why is that needed for a solution $\bar{x}$ to exist? For instance, take a simple example: $$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \alpha=30.$$ Then $\lambda_{max}(A)=1$, so the convergence condition is violated: $30 > 1$. However, I can still calculate $\bar{x}$: the inverse of $(I-\alpha A)$ exists and equals $-\frac{1}{899}\begin{pmatrix} 1 & 30 \\ 30 & 1 \end{pmatrix}$. If, for instance, $\bar{b}$ were negative, this would even give me a positive $\bar{x}$. Or is the condition $1 > \alpha \cdot \lambda_{max}(A)$ needed to guarantee that the inverse gives a positive solution when both $\alpha$ and $\bar{b}$ are positive?
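The $2\times 2$ example above can be verified directly in NumPy:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
alpha = 30.0
M = np.eye(2) - alpha * A             # [[1, -30], [-30, 1]], det = 1 - 900 = -899

# The inverse exists even though alpha * lam_max = 30 > 1.
Minv = np.linalg.inv(M)
assert np.allclose(Minv, -np.array([[1.0, 30.0],
                                    [30.0, 1.0]]) / 899)

# A negative b still yields a positive x in this case:
b = np.array([-1.0, -1.0])
x = np.linalg.solve(M, b)
assert (x > 0).all()                  # x = [31/899, 31/899]
```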

Q2: What happens if $\alpha < 0$ (and possibly $\bar{b} < 0$)? What are the conditions for a solution $\bar{x}$ to exist? What are the conditions for it to be positive?

I realize this question is a bit messy, but that reflects the confusion in my head. Thanks in advance for helping me conquer this confusion :-)

Accepted answer:

Q1. In the way you set up your problem, this condition is needed only if you approximate the inverse by that series. Otherwise you only need $(I-\alpha A)$ to be nonsingular, i.e. $1/\alpha$ should not equal any eigenvalue of $A$. But I think you should analyze your problem more deeply: the nature of $b$, $x$, and $\alpha$. It might happen that without the condition $\alpha \cdot \lambda_{max}(A) < 1$ your $x$ does not make sense for the underlying application. Indeed, as $1/\alpha$ approaches any eigenvalue, the matrix becomes nearly singular and the solution degenerates. Maybe $b$ also depends on $\alpha$?
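The near-singular behavior can be illustrated numerically: as $1/\alpha$ approaches an eigenvalue of $A$, the solution norm blows up. A sketch using the $2\times 2$ matrix from the question (the vector $b$ is an assumed example):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([1.0, 2.0])              # assumed example right-hand side

# As 1/alpha approaches an eigenvalue of A (here lambda = 1),
# (I - alpha A) becomes nearly singular and ||x|| grows without bound.
norms = []
for eps in (1e-2, 1e-4, 1e-6):
    alpha = 1.0 / (1.0 + eps)         # 1/alpha = 1 + eps, close to lambda = 1
    x = np.linalg.solve(np.eye(2) - alpha * A, b)
    norms.append(np.linalg.norm(x))

assert norms[0] < norms[1] < norms[2]  # solution norm diverges as eps -> 0
```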

Q2. If $\alpha < 0$, the matrix $(I-\alpha A)$ has all entries nonnegative (the off-diagonal entries are $-\alpha A_{ij} \ge 0$). Consequently, if $\bar{x}$ were positive, every entry of $\bar{b} = (I-\alpha A)\bar{x}$ would be positive too; so for a negative $b$ some elements of $x$ must be negative. I don't know anything special beyond that for this case. The matrix can still be singular or close to singular.
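A small NumPy check of this claim (the values of $\alpha$ and $b$ are assumed examples):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
alpha = -2.0                          # alpha < 0
M = np.eye(2) - alpha * A             # [[1, 2], [2, 1]]: all entries nonnegative
assert (M >= 0).all()

# With entrywise-nonnegative M, a positive x would force b = M x > 0,
# so a negative b cannot admit an all-positive solution:
b = np.array([-1.0, -3.0])
x = np.linalg.solve(M, b)
assert not (x > 0).all()              # here x = [-5/3, 1/3]
```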