Why are the eigenvalues of a matrix with random numbers from a standard uniform distribution real?


I'm studying for finals, and on one of the previous exams there was the following question:

Take a look at the following code:

import torch

x = torch.rand((100, 2, 2))
torch.linalg.eigvals(x)

What can you say about the eigenvalues? Can you explain it?

So, as I understand it, this creates a three-dimensional tensor of shape $[100,2,2]$ containing random numbers from a standard uniform distribution, and then finds the eigenvalues of the $2\times 2$ matrix obtained for each value of the first dimension. Looking at the result:

tensor([[-0.5570+0.j,  1.2446+0.j],
        [ 1.1396+0.j, -0.0692+0.j],
        [ 0.1051+0.j,  1.5977+0.j],
        [-0.3857+0.j,  1.2068+0.j],
        [-0.1900+0.j,  0.6888+0.j],
        [ 0.8390+0.j, -0.1490+0.j],
        [-0.0808+0.j,  1.1531+0.j],
        [ 1.0626+0.j,  0.1750+0.j],
        [ 1.1559+0.j, -0.0850+0.j],
        [ 0.8241+0.j,  0.3729+0.j],
        [-0.1503+0.j,  1.1238+0.j],
        [ 0.9225+0.j,  0.2806+0.j],
        [-0.4166+0.j,  1.0259+0.j],
        [ 0.4550+0.j,  0.2162+0.j],
        [-0.3492+0.j,  0.5358+0.j],
        [-0.3372+0.j,  1.5667+0.j],
        [ 0.8774+0.j,  0.3880+0.j],
        [ 1.2072+0.j,  0.4624+0.j],
        [ 1.3175+0.j, -0.3315+0.j],
        [ 1.2090+0.j,  0.4176+0.j],
        [ 0.3264+0.j,  0.8513+0.j],
        [-0.1040+0.j,  1.3297+0.j],
        [ 1.3935+0.j,  0.4920+0.j],
        [-0.0139+0.j,  0.7395+0.j],
        [ 1.1510+0.j,  0.0133+0.j],
        [ 0.6787+0.j,  0.1693+0.j],
        [ 0.4437+0.j,  0.9047+0.j],
        [ 1.0557+0.j,  0.1924+0.j],
        [ 0.2404+0.j,  0.9783+0.j],
        [-0.0271+0.j,  1.0466+0.j],
        [ 0.0557+0.j,  0.9318+0.j],
        [ 0.2873+0.j,  1.3448+0.j],
        [ 0.9613+0.j,  0.0234+0.j],
        [-0.6177+0.j,  0.7919+0.j],
        [ 1.0627+0.j,  0.0449+0.j],
        [ 1.5548+0.j,  0.2818+0.j],
        [ 1.1050+0.j, -0.0872+0.j],
        [ 0.2118+0.j,  1.2132+0.j],
        [ 0.2067+0.j,  0.7218+0.j],
        [ 0.4729+0.j,  0.1361+0.j],
        [ 1.3679+0.j, -0.2658+0.j],
        [ 1.2630+0.j, -0.1738+0.j],
        [ 1.0363+0.j, -0.0702+0.j],
        [ 0.0820+0.j,  0.4279+0.j],
        [ 1.1039+0.j, -0.2253+0.j],
        [ 0.0337+0.j,  0.9844+0.j],
        [ 0.4379+0.j,  0.9513+0.j],
        [-0.0751+0.j,  0.8960+0.j],
        [ 1.6588+0.j,  0.1274+0.j],
        [ 0.4472+0.j, -0.2579+0.j],
        [ 0.7036+0.j, -0.4937+0.j],
        [-0.0448+0.j,  1.2299+0.j],
        [-0.4491+0.j,  0.8446+0.j],
        [ 0.1992+0.j,  1.0333+0.j],
        [-0.3112+0.j,  1.4406+0.j],
        [-0.1636+0.j,  0.7642+0.j],
        [ 0.9079+0.j,  0.1707+0.j],
        [-0.2145+0.j,  1.0980+0.j],
        [ 0.0138+0.j,  0.9005+0.j],
        [ 1.0440+0.j,  0.7159+0.j],
        [ 0.1382+0.j,  0.4394+0.j],
        [ 0.1148+0.j,  1.1513+0.j],
        [ 1.7433+0.j, -0.0715+0.j],
        [ 1.3076+0.j,  0.2794+0.j],
        [ 0.4560+0.j,  1.2436+0.j],
        [ 0.7295+0.j, -0.1123+0.j],
        [ 0.2718+0.j,  1.1274+0.j],
        [ 1.1747+0.j, -0.3776+0.j],
        [ 0.0591+0.j,  0.8922+0.j],
        [ 1.1071+0.j, -0.0657+0.j],
        [ 0.1129+0.j,  0.9661+0.j],
        [ 0.8626+0.j,  0.3020+0.j],
        [ 0.9777+0.j, -0.0466+0.j],
        [ 1.3685+0.j,  0.0440+0.j],
        [ 0.6077+0.j, -0.3720+0.j],
        [ 0.8696+0.j,  0.3922+0.j],
        [ 1.0548+0.j, -0.4926+0.j],
        [ 0.1512+0.j,  1.0271+0.j],
        [ 0.7386+0.j, -0.1197+0.j],
        [-0.2226+0.j,  0.5897+0.j],
        [ 0.1573+0.j,  0.9751+0.j],
        [ 0.6483+0.j,  0.2715+0.j],
        [ 1.4565+0.j,  0.1442+0.j],
        [ 1.2801+0.j, -0.2555+0.j],
        [-0.4170+0.j,  1.3144+0.j],
        [ 0.3950+0.j, -0.1008+0.j],
        [ 1.0641+0.j,  0.0123+0.j],
        [-0.1150+0.j,  1.6959+0.j],
        [ 0.9549+0.j,  0.7390+0.j],
        [ 0.2701+0.j,  0.9601+0.j],
        [-0.0525+0.j,  1.8261+0.j],
        [ 1.4911+0.j, -0.1839+0.j],
        [ 0.0693+0.j,  0.4429+0.j],
        [ 0.6547+0.j, -0.0035+0.j],
        [-0.0339+0.j,  1.0083+0.j],
        [ 1.6416+0.j, -0.2071+0.j],
        [ 1.2116+0.j, -0.5143+0.j],
        [ 0.7792+0.j, -0.0854+0.j],
        [ 0.3977+0.j,  1.0889+0.j],
        [ 1.1936+0.j, -0.2374+0.j]])

Clearly the imaginary part is $0$. But what is the explanation for it?
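This is easy to check numerically. Below is a sketch using NumPy rather than torch (an assumption for portability; `rng.random` samples the same $U[0,1)$ distribution as `torch.rand`, and `np.linalg.eigvals` accepts a stack of matrices just like `torch.linalg.eigvals`):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((100, 2, 2))      # entries uniform on [0, 1), like torch.rand
eigs = np.linalg.eigvals(x)      # eigenvalues of each 2x2 slice

print(np.abs(eigs.imag).max())   # prints 0.0: every eigenvalue is real
```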

To make it a mathematical question:

Why are the eigenvalues of each $2\times 2$ matrix in a three-dimensional tensor of shape $[100,2,2]$, filled with random numbers from a standard uniform distribution, real?

Best answer:

If you have a $2\times 2$ real matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$, its characteristic polynomial is $\lambda^2-(a+d)\lambda+(ad-bc)$, whose discriminant is $(a+d)^2-4(ad-bc)=(a-d)^2+4bc$. If $b$ and $c$ are nonnegative, the discriminant is nonnegative, so the roots are real. With torch.rand the entries are drawn uniformly at random from $[0,1)$, so $b$ and $c$ are certainly nonnegative.
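
This argument can be verified directly: compute the discriminant by hand, confirm it is nonnegative, and check that the quadratic-formula roots match the eigenvalue routine. A minimal sketch (using NumPy, an assumed stand-in for torch; the math is identical):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.random(4)            # entries uniform on [0, 1), like torch.rand

# discriminant of the characteristic polynomial t^2 - (a+d)t + (ad - bc)
disc = (a - d) ** 2 + 4 * b * c
assert disc >= 0                      # guaranteed because b, c >= 0

# real roots from the quadratic formula
roots = np.sort([(a + d - np.sqrt(disc)) / 2, (a + d + np.sqrt(disc)) / 2])

# they agree with the eigenvalues of [[a, b], [c, d]]
eigs = np.sort(np.linalg.eigvals(np.array([[a, b], [c, d]])))
print(np.allclose(roots, eigs))       # prints True
```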

When torch.linalg.eigvals receives a 3-D tensor like this, it treats it as a batch of $2\times 2$ matrices and computes the eigenvalues of each one in turn. The result is therefore a list of eigenvalues of $2\times 2$ matrices whose entries were drawn uniformly at random from $[0,1)$, and which are consequently all real.
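
The batched behavior can be checked as well: calling the eigenvalue routine once on the whole stack gives the same result as looping over the matrices one at a time. A sketch (again assuming NumPy in place of torch; `np.linalg.eigvals` batches over leading dimensions the same way):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random((100, 2, 2))

batched = np.linalg.eigvals(x)                        # one call on the whole stack
looped = np.stack([np.linalg.eigvals(m) for m in x])  # matrix by matrix

print(np.allclose(np.sort(batched, axis=-1), np.sort(looped, axis=-1)))  # prints True
```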