I have been thinking about inverse functions of matrices lately.
(Yes, yes, I know that for anything more complicated than the reals we usually need to choose a branch, and even for the reals we need to select an interval of validity.)
One of the things I got to thinking about is using the Cayley-Hamilton theorem, $$P_1(A) = 0$$ (where $P_1$ is the characteristic polynomial), to show how to calculate polynomials $P_2, P_3$ such that $$A^{n} = P_2(A)$$ and $$A^{-1}=P_3(A),$$ somehow in combination with a power series expansion.
- Is this theoretically correct?
- Is this practically useful in cases where we want to solve $$f(A)=B$$?
Let $A$ be a $k\times k$ matrix.
The Cayley-Hamilton theorem has the consequence that, for every $n$, there exists a polynomial of degree at most $k-1$, $P_n(x)=a_{0,n}+a_{1,n}x+\dots+a_{k-1,n}x^{k-1}$, such that $$ A^n=P_n(A) $$ (only nonnegative $n$ if $A$ is not invertible).
Indeed, $P_n(x)=x^n$ for $0\le n\le k-1$; for $n=k$ we can consider the characteristic polynomial $\chi_A(x)=\det(xI-A)=c_0+c_1x+\dots+c_{k-1}x^{k-1}+x^k$ and write $$ A^k=-c_0I-c_1A-\dots-c_{k-1}A^{k-1} $$ and do induction on $n$ for $n>k$.
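As a sanity check, here is a small numpy sketch of this reduction (the matrix `A` and the exponent `n` are made-up examples): since $\chi_A(A)=0$, the remainder of $x^n$ divided by $\chi_A$ gives exactly the degree $\le k-1$ polynomial $P_n$.

```python
import numpy as np

# Example matrix (hypothetical); k = 2.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
k = A.shape[0]

# np.poly(A) returns the characteristic polynomial coefficients
# in decreasing order of degree: [1, c_{k-1}, ..., c_1, c_0].
chi = np.poly(A)

n = 7
# Coefficients of x^n, in the same decreasing-order convention.
xn = np.zeros(n + 1)
xn[0] = 1.0

# Divide x^n by chi_A; the remainder has degree <= k-1 and is P_n.
_, rem = np.polydiv(xn, chi)

# Evaluate P_n at A and compare with A^n directly.
Pn_A = sum(coef * np.linalg.matrix_power(A, deg)
           for deg, coef in enumerate(rem[::-1]))
assert np.allclose(Pn_A, np.linalg.matrix_power(A, n))
```

Polynomial division mod $\chi_A$ and the inductive reduction in the text compute the same remainder, so this also illustrates why the degree never exceeds $k-1$.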
If $A$ is invertible, we know that $c_0\ne0$ (since $c_0=\chi_A(0)=(-1)^k\det A$), so multiplying $\chi_A(A)=0$ by $A^{-1}$ gives $$ c_0A^{-1}=-c_1I-c_2A-\dots-c_{k-1}A^{k-2}-A^{k-1}, $$ and dividing by $c_0$ we obtain $A^{-1}=P_{-1}(A)$. Then we can do induction again for negative exponents.
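The inverse formula can be checked numerically too; a minimal sketch, reusing the same hypothetical example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
k = A.shape[0]

# Reverse np.poly's output so c[i] is the coefficient of x^i;
# c[0] = c_0, ..., c[k] = 1 (monic leading coefficient).
c = np.poly(A)[::-1]

# A is invertible iff c_0 = chi_A(0) = (-1)^k det(A) != 0.
assert abs(c[0]) > 1e-12

# A^{-1} = -(c_1 I + c_2 A + ... + c_{k-1} A^{k-2} + A^{k-1}) / c_0
Ainv = -sum(c[i + 1] * np.linalg.matrix_power(A, i)
            for i in range(k)) / c[0]
assert np.allclose(Ainv, np.linalg.inv(A))
```

Note that this is essentially the adjugate formula $A^{-1}=\operatorname{adj}(A)/\det(A)$ in disguise, expressed as a polynomial in $A$.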
The induction proof will also provide recursive formulas for the coefficients $a_{i,n}$.
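For instance, writing $A^{n+1}=A\,P_n(A)$ and replacing $A^k$ by $-c_0I-\dots-c_{k-1}A^{k-1}$ yields the recursion $a_{i,n+1}=a_{i-1,n}-c_i\,a_{k-1,n}$ (with $a_{-1,n}=0$). A short sketch of that recursion, again on a made-up $2\times2$ example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
k = A.shape[0]

# c[0..k-1]: non-leading coefficients, chi_A(x) = x^k + sum_i c[i] x^i.
c = np.poly(A)[::-1][:k]

# a[i] holds a_{i,n}; start at n = 0 with P_0(x) = 1.
a = np.zeros(k)
a[0] = 1.0

n_target = 7
for _ in range(n_target):
    # a_{i,n+1} = a_{i-1,n} - c_i * a_{k-1,n}  (a_{-1,n} = 0):
    top = a[-1]            # a_{k-1,n}
    a = np.roll(a, 1)      # shift: position i now holds a_{i-1,n}
    a[0] = 0.0             # the a_{-1,n} = 0 convention
    a -= c * top

# Verify P_{n_target}(A) == A^{n_target}.
Pn_A = sum(a[i] * np.linalg.matrix_power(A, i) for i in range(k))
assert np.allclose(Pn_A, np.linalg.matrix_power(A, n_target))
```

Each step costs only $O(k)$ arithmetic operations on the coefficient vector, so computing $P_n$ this way avoids any matrix multiplication until the final evaluation.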