What is the *correct* (matrix) square-root of $A_2=\begin{bmatrix} 0&-1 \\ 1& 2 \end{bmatrix} $?


In studying a (possibly trivial?) generalization of the NSW numbers [OEIS, Wikipedia] (see my other related question), one detail came up where I think I have the correct answer but might be wrong.

Background
Basic motivating problem: The NSW numbers can be expressed by the recursion $$ a_6(n) = -a_6(n-2)+6 \cdot a_6(n-1) $$ where I set the initial values $$a_6(0)=-1, \qquad a_6(1)=1$$ (offset by one from the convention in Wikipedia and elsewhere). Because prominent properties of the NSW numbers $a_6(n)$ are expressed using, so to speak, a "pseudo-index" $j=2n+1$ (for instance the multiplicativity of the sequence), I find it interesting to interpolate to the intermediate integers, so as to have a formula for all natural $j$.
I think the simplest way is to represent the $n$'th recursion step by the $n$'th power of the matrix $$ \small A_6 = \begin{bmatrix} 0&-1 \\ 1 & 6 \end{bmatrix} $$ right-multiplied to the row vector of initial values $[-1,1]$ (which then gives an odd value of the pseudo-index $j$). The interpolation to even indexes $j$ then becomes possible once one has the square root $ \small B_6 = \sqrt{A_6}$, and I get the sequence of values $b_6(j)$. This all works out nicely, and using diagonalization I obtain the matrix $$\small B_6 = {1\over \sqrt 8} \cdot \begin{bmatrix} 1&-1 \\ 1& 7\end{bmatrix}. $$
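As a quick numerical sanity check (a Python sketch rather than Pari/GP; the helper `matmul` and the variable names are mine), one can verify that $B_6^2=A_6$ and that the row vector of initial values, right-multiplied by powers of $A_6$, walks along the recursion:

```python
import math

# Minimal 2x2 helper (matrices as lists of rows) -- only for checking the claims above.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A6 = [[0, -1], [1, 6]]
s = 1 / math.sqrt(8)
B6 = [[s * 1, s * -1], [s * 1, s * 7]]

# B6 squared should reproduce A6 ...
B6sq = matmul(B6, B6)
assert all(abs(B6sq[i][j] - A6[i][j]) < 1e-12 for i in range(2) for j in range(2))

# ... and the row vector [-1, 1], right-multiplied by A6 repeatedly,
# walks along the recursion a(n) = -a(n-2) + 6*a(n-1).
a = [-1, 1]
for _ in range(4):
    a = [a[0] * A6[0][0] + a[1] * A6[1][0], a[0] * A6[0][1] + a[1] * A6[1][1]]
    print(a)  # successive pairs (a(n-1), a(n))
```

The printed pairs reproduce the NSW values $7, 41, 239, 1393, \dots$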

Generalization: Now the generalization goes via $$ \small A_w = \begin{bmatrix} 0&-1 \\ 1 & w \end{bmatrix} \qquad \qquad a_w(0)=-1,a_w(1)=1$$ to $$ \small B_w = {1\over \sqrt {2+w}}\begin{bmatrix} 1&-1 \\ 1 & 1+w \end{bmatrix} $$
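The claimed closed form for $B_w$ can be checked numerically for several $w$ (a Python sketch, assuming $w>-2$ so that $\sqrt{2+w}$ is real; the helper name is mine):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Check B_w^2 = A_w for a few values of w.
for w in (3, 4, 5, 6, 10):
    A = [[0, -1], [1, w]]
    c = 1 / math.sqrt(2 + w)
    B = [[c, -c], [c, c * (1 + w)]]
    Bsq = matmul(B, B)
    assert all(abs(Bsq[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("B_w^2 = A_w verified")
```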

Problematic case: The complicated case is now $w=2$: the diagonalization yields a defective eigensystem, and the standard procedure (using Pari/GP's diagonalization routine) then produces
$$ \small B_2^* = {1\over \sqrt {2+2}}\begin{bmatrix} 2& 0 \\ -2 & 0 \end{bmatrix} $$
which, however, gives the constant sequence $b_2(j)=-2$.
Clearly one would expect that $$ \small B_2 = {1\over \sqrt {2+2}}\begin{bmatrix} 1& -1 \\ 1 & 3 \end{bmatrix} $$
I seem to get this (expected) solution as the limit of $w = 2 \pm \epsilon$ for $\epsilon \to 0$, and interestingly this gives (in the limit, if that is allowed here(!)) the sequence of odd numbers $a_2(n)=2n-1$. (Why is this "interesting"? Because the multiplicativity in the sense indicated above is now "perfect", while the other generalizations are "imperfect" in a similar aspect, much like the multiplicativity of the Mersenne numbers.)
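The limit argument can be sketched numerically (Python; the function names are mine, and the code assumes $w>2$ so that the eigenvalues $\lambda=(w\pm\sqrt{w^2-4})/2$ are real and distinct, with eigenvectors $(1,-\lambda)$): diagonalize $A_{2+\epsilon}$, take square roots of the eigenvalues, and watch the result approach the expected $B_2$. The same sketch checks that the $w=2$ recursion produces the odd numbers.

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sqrt_by_diagonalization(w):
    # For w > 2, A_w = [[0,-1],[1,w]] has distinct eigenvalues
    # lam = (w +- sqrt(w^2-4))/2 with eigenvectors (1, -lam).
    d = math.sqrt(w * w - 4)
    l1, l2 = (w + d) / 2, (w - d) / 2
    V = [[1, 1], [-l1, -l2]]              # eigenvector columns
    Vinv = [[-l2 / d, -1 / d], [l1 / d, 1 / d]]
    D = [[math.sqrt(l1), 0], [0, math.sqrt(l2)]]
    return matmul(matmul(V, D), Vinv)

B2_expected = [[0.5, -0.5], [0.5, 1.5]]
for eps in (1e-2, 1e-4, 1e-6):
    B = sqrt_by_diagonalization(2 + eps)
    err = max(abs(B[i][j] - B2_expected[i][j]) for i in range(2) for j in range(2))
    print(eps, err)  # err shrinks as eps -> 0

# The w = 2 recursion a(n) = -a(n-2) + 2*a(n-1), a(0) = -1, a(1) = 1,
# produces the odd numbers 2n - 1:
a = [-1, 1]
for n in range(2, 8):
    a.append(-a[n - 2] + 2 * a[n - 1])
assert a == [2 * n - 1 for n in range(8)]
```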

Question
So I now want to know: Q: Is the second matrix $B_2$ the correct square root, or must I still take $B_2^*$ as the correct one and regard $B_2$ as valid only for indexes not equal to $2$?

There are 3 answers below.

BEST ANSWER

You can see Higham's book "Functions of Matrices". This is linked to the primary matrix function $f(A)$. If $J$ is a Jordan block $\lambda I_n+N$, then the first row of $f(J)$ is $[f(\lambda),f'(\lambda),\cdots,\dfrac{f^{(n-1)}(\lambda)}{(n-1)!}]$; the second row is a right shift of the first row, and so on. We define $f(A)$ using the Jordan decomposition of $A$; then $f(A)$ is a polynomial in $A$.

In particular, if $A$, a complex matrix, has no eigenvalues on $\mathbb{R}^-$, then there exists a unique square root $X$ of $A$ all of whose eigenvalues lie in the open right half-plane (the principal square root $X=A^{1/2}$). Moreover $A^{1/2}=\dfrac{2}{\pi}A\int_0^{+\infty}(t^2I+A)^{-1}dt$.
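This integral representation can be tested numerically on $A_2$ (a Python sketch; the substitution $t=u/(1-u)$, which turns the integrand into the smooth $(u^2I+(1-u)^2A)^{-1}\,du$ on $[0,1]$, and the helper names are my own choices):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

A = [[0.0, -1.0], [1.0, 2.0]]

def g(u):
    # Integrand after substituting t = u/(1-u):
    # (t^2 I + A)^{-1} dt  becomes  (u^2 I + (1-u)^2 A)^{-1} du  on [0, 1].
    return inv2([[u * u + (1 - u) ** 2 * A[i][j] if i == j else (1 - u) ** 2 * A[i][j]
                  for j in range(2)] for i in range(2)])

# Composite Simpson's rule on [0, 1].
n = 2000
h = 1.0 / n
acc = [[0.0, 0.0], [0.0, 0.0]]
for k in range(n + 1):
    wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
    Gk = g(k * h)
    for i in range(2):
        for j in range(2):
            acc[i][j] += wgt * Gk[i][j]
integral = [[acc[i][j] * h / 3 for j in range(2)] for i in range(2)]

root = matmul(A, integral)
root = [[2 / math.pi * root[i][j] for j in range(2)] for i in range(2)]
print(root)  # close to [[1/2, -1/2], [1/2, 3/2]]
```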

Here, using Gottfried's post, it is sufficient to calculate his $J^{1/2}=\begin{pmatrix}f(1)&f'(1)\\0&f(1)\end{pmatrix}=\begin{pmatrix}1&1/2\\0&1\end{pmatrix}=1/2I+1/2J$.
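This $J^{1/2}$ can be checked exactly in rational arithmetic (a Python sketch using `fractions`; the helper name is mine):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

J = [[F(1), F(1)], [F(0), F(1)]]
# f = sqrt at the eigenvalue 1:  f(1) = 1, f'(1) = 1/2
Jroot = [[F(1), F(1, 2)], [F(0), F(1)]]

assert matmul(Jroot, Jroot) == J  # it really is a square root of J
# ... and it is the polynomial (1/2)I + (1/2)J evaluated at J:
half = F(1, 2)
assert Jroot == [[half * (1 if i == j else 0) + half * J[i][j]
                  for j in range(2)] for i in range(2)]
```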

EDIT 1: I give some instances when $A$ is non-singular.

  1. If $A=I_n$, then the square roots $\pm I$ are the primary square roots of $A$; the other square roots (infinitely many!) are not primary square roots.

  2. If $A=J_n(\lambda)=\lambda I_n+N$ is a Jordan block, then $A$ has exactly $2$ square roots $\pm (J_n)^{1/2}$, and they are primary square roots (cf. the formula above).

  3. If $\lambda\not=\mu$ and $A=\operatorname{diag}(J_n(\lambda),J_n(\mu))$, then $A$ has $4$ primary square roots $\operatorname{diag}(\pm {J_n(\lambda)}^{1/2},\pm {J_n(\mu)}^{1/2})$.

  4. If $A=\operatorname{diag}(J_n(\lambda),J_n(\lambda))$, then $A$ is a derogatory matrix and has only $2$ primary square roots $\pm \operatorname{diag}({J_n(\lambda)}^{1/2},{J_n(\lambda)}^{1/2})$. The other square roots are not primary square roots.

In general, a square root $B$ of $A$ is a primary square root if and only if the sum of any $2$ of the eigenvalues of $B$ is non-zero.
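This criterion is easy to test in the $2\times 2$ case at hand (a Python sketch; `eigvals2`, `passes_primary_criterion` and the tolerance are my own choices, and the code assumes real eigenvalues, which holds here):

```python
import math

def eigvals2(M):
    # Eigenvalues of a 2x2 matrix via trace/determinant (real case assumed).
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    r = math.sqrt(disc) if disc >= 0 else 0.0  # our matrices have disc = 0
    return ((tr + r) / 2, (tr - r) / 2)

def passes_primary_criterion(B):
    # No two eigenvalues of B (repetitions allowed) may sum to zero.
    e = eigvals2(B)
    return all(abs(e[i] + e[j]) > 1e-12 for i in range(2) for j in range(2))

B2 = [[0.5, -0.5], [0.5, 1.5]]
negB2 = [[-0.5, 0.5], [-0.5, -1.5]]
print(passes_primary_criterion(B2), passes_primary_criterion(negB2))
```

Both $\pm B_2$ pass the criterion, consistent with item 2 of the list above (here $A_2$ is similar to the Jordan block $J_2(1)$).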

EDIT 2: As Markyan wrote, the command "MatrixFunction(A,sqrt(x),x)" of Maple gives the principal square root of $A$. Maple works correctly, but the calculation is incredibly slow. Moreover, Maple gives a result even when $A$ has negative eigenvalues but, beware, the function "square root" is no longer continuous in a neighborhood of such matrices!

ANSWER

The Maple code $$S := LinearAlgebra:-MatrixFunction(Matrix([[0, -1], [1, 2]]), sqrt(x), x) $$ outputs $$ \left[ \begin {array}{cc} 1/2&-1/2\\ 1/2&3/2 \end {array} \right] .$$

ANSWER

Ah, I found a new, possibly final, argument myself. Using the Jordan form of the matrix, I get (with the help of WolframAlpha) $$ A_2 = S \cdot J \cdot S^{-1} $$ with $$ S = \begin{bmatrix} -1&1 \\ 1 & 0 \end{bmatrix} \qquad S^{-1} = \begin{bmatrix} 0&1 \\ 1 & 1 \end{bmatrix} \qquad J = \begin{bmatrix} 1&1 \\ 0 & 1 \end{bmatrix} $$ and $J$ can be seen as the truncated Pascal matrix, for which I know that the square root can be determined using the matrix logarithm, with the effect that $$ J^{1/2} = \begin{bmatrix} 1&1/2 \\ 0 & 1 \end{bmatrix} $$ so that we arrive at the expected result.
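The whole chain can be verified in exact rational arithmetic (a Python sketch with `fractions`; the helper `matmul` is mine):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

S     = [[F(-1), F(1)], [F(1), F(0)]]
Sinv  = [[F(0), F(1)], [F(1), F(1)]]
J     = [[F(1), F(1)], [F(0), F(1)]]
Jroot = [[F(1), F(1, 2)], [F(0), F(1)]]
A2    = [[F(0), F(-1)], [F(1), F(2)]]

assert matmul(S, Sinv) == [[F(1), F(0)], [F(0), F(1)]]  # S^{-1} really inverts S
assert matmul(matmul(S, J), Sinv) == A2                 # A_2 = S J S^{-1}

B2 = matmul(matmul(S, Jroot), Sinv)                     # S J^{1/2} S^{-1}
assert B2 == [[F(1, 2), F(-1, 2)], [F(1, 2), F(3, 2)]]  # the expected matrix
assert matmul(B2, B2) == A2                             # and it squares to A_2
```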

This now looks much better, and I think it should be the correct answer; however, I still do not know whether this is the formally correct way, because introducing the matrix logarithm again means introducing some limit process.