I was plotting the eigenvalues of a real symmetric matrix $h_2$ continuously transformed into another real symmetric matrix $h_1$ as a parameter $\alpha$ is varied from $0$ to $1$:
$$ h = \alpha h_1 + (1-\alpha)h_2 $$
I generated a random real square matrix (entries drawn uniformly from $[0,1)$) and added its transpose to make it symmetric. Nothing unusual there. Subtracting the transpose instead would give an antisymmetric matrix. Now, instead of subtracting exactly the transpose of the same matrix, I scaled it by a small factor. Say I have two real random matrices $r_1$ and $r_2$:
$$h_1 = r_1 - 1.1\,r_1^{T}, \qquad h_2 = r_2 - 1.1\,r_2^{T}$$
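For reference, a minimal sketch of this construction (my own reconstruction, not the linked code; the seed, size `n`, and the name `factor` are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (assumption)
n = 10                          # matrix size (assumption)
factor = 1.1                    # slight deviation from exact antisymmetry

# Two random real matrices with entries uniform in [0, 1)
r1 = rng.random((n, n))
r2 = rng.random((n, n))

# Nearly antisymmetric matrices: h = r - 1.1 * r^T
h1 = r1 - factor * r1.T
h2 = r2 - factor * r2.T

def h(alpha):
    """Linear interpolation between h2 (alpha = 0) and h1 (alpha = 1)."""
    return alpha * h1 + (1 - alpha) * h2

# h(alpha) is real but not symmetric, so its eigenvalues are either
# real or come in complex-conjugate pairs.
evals = np.linalg.eigvals(h(0.5))
```

Because the matrix is real, the imaginary parts of the spectrum cancel pairwise, which is why plotting only the real parts against $\alpha$ still captures every eigenvalue.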
Some of the eigenvalues are degenerate, but as the parameter $\alpha$ changes, I started seeing these bubbles, as shown in the figure below. Can anyone explain the math behind them? To be precise, I am plotting the real parts of the eigenvalues against $\alpha$.
To make it more concrete, I have added a 3D plot of the locus of the eigenvalues for the $2\times2$ case. The x and y axes represent the real and imaginary parts of the eigenvalues, and the z axis shows $\alpha$.
In case someone wants to check the Python code I am using, here is the GitHub link.


