Let $ \begin{bmatrix} a&b&c \\ d&e&f\\ g&h&i\\ \end{bmatrix}^2 = \begin{bmatrix} x&0&0\\ 0&0&-y\\ 0&1&-z\\ \end{bmatrix} $
where all of the entries are integers.
I am trying to figure out what conditions on $a,b,c,d,e,f,g,h,i$ (or on $x, y, z$) make the equation true.
For example, a diagonal integer matrix has a square root over $\mathbb{Z}$ if its diagonal entries are perfect squares. I am looking for a condition of that sort.
I tried to solve it as a system of equations, but it is VERY hard because it means solving $9$ equations at once, and I could not do it. All I managed to show is that $$y = -\frac{f}{h}$$ and $$cdh = bgf,$$ and that's all. I'm stuck.
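For what it's worth, the nine entry-wise equations can at least be generated mechanically instead of expanded by hand; here is a sketch, assuming SymPy is available:

```python
# Sketch (assuming SymPy): generate the 9 entry-wise equations
# from M^2 = T instead of expanding them by hand.
import sympy as sp

a, b, c, d, e, f, g, h, i = sp.symbols('a b c d e f g h i', integer=True)
x, y, z = sp.symbols('x y z', integer=True)

M = sp.Matrix([[a, b, c], [d, e, f], [g, h, i]])
T = sp.Matrix([[x, 0, 0], [0, 0, -y], [0, 1, -z]])

# One equation per entry of the 3x3 matrices, read row by row.
equations = [sp.Eq(lhs, rhs) for lhs, rhs in zip(M**2, T)]
for eq in equations:
    print(eq)
```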
My question is: is there any other way, or another approach, to solve this? Please help me with this.
Yeah, so a very standard approach to this sort of equation is to generalize to the reals and diagonalize. Suppose your matrix can be factored as $$ M = CDC^{-1} $$ for $D$ diagonal; then its square can be factored as $$M^2 = C D C^{-1}CDC^{-1} = C D^2 C^{-1},$$ so the same coordinate-change matrix $C$ is used for the square as for the original, and $D^2$ is very easy to compute for a diagonal matrix -- just square each diagonal entry. This argument probably works in reverse, but it likely misses out on part of the "necessary" branch of a "necessary and sufficient" condition -- if you work it out, you may find that non-diagonalizable matrices can also have square roots. You might be able to work out the full necessary condition by using generalized eigenvectors, since any matrix always has a complete set of generalized eigenvectors, but that might be a step ahead of where you are right now.
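To see the key identity in action, here is a small check (a sketch assuming SymPy; the matrices $C$ and $D$ are arbitrary illustrative choices, not from the problem):

```python
# Check that if M = C D C^{-1} with D diagonal, then M^2 = C D^2 C^{-1},
# i.e. the same change-of-basis matrix C diagonalizes the square.
import sympy as sp

C = sp.Matrix([[2, 1], [1, 1]])   # any invertible change-of-basis matrix
D = sp.diag(3, 5)                 # any diagonal matrix
M = C * D * C.inv()

assert M**2 == C * D**2 * C.inv()
```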
Still, one can persuasively argue for example from this that $b=c=d=g=0$ and $a = \sqrt{x}$, as that part of the diagonalization is "already done for you." So what's left is diagonalizing $$\begin{bmatrix}0&-y\\1&-z\end{bmatrix}.$$ To do this it is helpful to get the eigenvalues, and to get those it's helpful to know that the determinant is $y$, which must be the product of the eigenvalues, while the trace is $-z$, which must be their sum; solving $\lambda_+\lambda_- = y$ with $\lambda_+ + \lambda_- =-z$ gives $$\lambda_\pm = \frac{-z \pm \sqrt{z^2 - 4y}}{2}.$$ An eigenvector for $\lambda$ is $(\lambda + z,\, 1)$, and since $\lambda_+ + z = -\lambda_-$ (and vice versa), the initial $(0,1)$ column makes this rather easy to diagonalize: one gets $$C = \begin{bmatrix}-\lambda_-&-\lambda_+\\1&1\end{bmatrix},$$ whose determinant is $\lambda_+ - \lambda_-$, and therefore a final solution looks something like $$\begin{bmatrix}e&f\\h&i\end{bmatrix} = \frac{1}{\lambda_+ - \lambda_-} \begin{bmatrix} -\lambda_-&-\lambda_+\\1&1\end{bmatrix} \begin{bmatrix} \sqrt{\lambda_+}&0\\0&\sqrt{\lambda_-}\end{bmatrix} \begin{bmatrix} 1&\lambda_+\\-1&-\lambda_-\end{bmatrix}. $$ It's a bit messy, of course, but one can derive from this that it is sufficient for a real square root to exist that $x \ge 0$ and $z^2 - 4y > 0$ while $-z -\sqrt{z^2 - 4y} > 0$ as well (i.e. both eigenvalues are positive). There might be some other family of solutions lurking about -- I don't know -- but those are the "main" ones.
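As a sanity check, here is a sketch (assuming SymPy) that verifies both the diagonalization and the resulting square root for the sample values $y = 6$, $z = -5$, which satisfy all the conditions above (the eigenvalues come out to $3$ and $2$):

```python
# Verify the diagonalization and square-root formula for the 2x2 block
# with sample values y = 6, z = -5 (so z^2 - 4y > 0 and both
# eigenvalues are positive).
import sympy as sp

y, z = 6, -5
disc = sp.sqrt(z**2 - 4*y)                     # = 1 for these values
lp, lm = sp.S(-z + disc)/2, sp.S(-z - disc)/2  # eigenvalues 3 and 2

A = sp.Matrix([[0, -y], [1, -z]])
C = sp.Matrix([[-lm, -lp], [1, 1]])            # columns are eigenvectors
Cinv = sp.Matrix([[1, lp], [-1, -lm]]) / (lp - lm)

assert C * sp.diag(lp, lm) * Cinv == A         # diagonalization is correct

R = C * sp.diag(sp.sqrt(lp), sp.sqrt(lm)) * Cinv  # candidate square root
assert (R**2).expand() == A                    # R really squares to A
```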