Suppose I sample random variables A and B from normal distributions. When I do a scatter plot of these two variables I see a radial pattern centered around (0,0). If I zoom into the circle I see that the points look roughly uniform, which is what I expect since they are not correlated.
Now say I have two random variables C and D from t-distributions. When I do a scatter plot of these two variables I see a 't' shape which seems to imply they are somehow correlated. However they are not. I am wondering if I am interpreting these correctly or if I need to perform some transformation in order to see the uniformity that I expect.
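To make the comparison concrete, here is a minimal NumPy sketch of the two scatterplots described above (the seed, sample size, and the choice of 3 degrees of freedom for the t-distribution are my assumptions, not from the question). The heavy tails of the t-distribution put extreme points along the axes — one coordinate huge while the other stays moderate — producing the '+'/'t' shape even though the two variables are independent:

```python
import numpy as np

rng = np.random.default_rng(1)  # assumed seed, for reproducibility
n = 30_000                      # assumed number of points

# Independent standard normals: a rotationally symmetric cloud.
a, b = rng.standard_normal(n), rng.standard_normal(n)

# Independent t-variables (df = 3 assumed): heavy tails produce extreme
# points along the axes, the 't' shape -- despite independence.
c, d = rng.standard_t(3, n), rng.standard_t(3, n)

# Both sample correlations are near 0; shape is not correlation.
print(np.corrcoef(a, b)[0, 1])
print(np.corrcoef(c, d)[0, 1])
```

In both cases the sample correlation is essentially zero; the visual difference is entirely a tail-weight effect, not a sign of dependence.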
Here are some of my favorite examples.
Sample mean and variance of normal data. If $X_1, X_2, \dots, X_n$ is a random sample from a normal population then one can prove that the random variables $\bar X$ and $S = \sqrt{\frac{\sum (X_i - \bar X)^2}{n-1}}$ are (probabilistically) independent. This may seem strange because they are not *functionally* independent ($\bar X$ appears in the definition of $S$). This is true only for normal data.
Below is a scatterplot of $(\bar X, S)$ pairs for 30,000 standard normal samples of size $n = 5.$ They look independent and they have a sample correlation of $r = 0.006,$ which is consistent with a population correlation $\rho = 0.$
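A simulation along these lines is easy to reproduce; here is a sketch in NumPy (the seed is my own choice, so the exact value of $r$ will differ slightly from the $0.006$ quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed

# 30,000 standard normal samples, each of size n = 5.
samples = rng.standard_normal((30_000, 5))

xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)  # sample SD with n-1 denominator

# Sample correlation of the (xbar, S) pairs: consistent with rho = 0.
r = np.corrcoef(xbar, s)[0, 1]
print(round(r, 3))
```

A scatterplot of `xbar` against `s` shows the featureless cloud expected under independence.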
Rounded normal data. When data from $Norm(\mu = 0, \sigma=3)$ are rounded to the nearest integer, a similar plot shows strange effects, some of which may be due to the resolution of the plotting. Strictly speaking, independence is destroyed by rounding, but the correlation is essentially $0.$
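The rounded case only changes one line of the simulation; a sketch (again with an assumed seed):

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed

# Norm(mu = 0, sigma = 3) data rounded to the nearest integer,
# 30,000 samples of size n = 5.
samples = np.round(rng.normal(0, 3, size=(30_000, 5)))

xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)

# Rounding destroys exact independence, but the correlation stays near 0.
print(np.corrcoef(xbar, s)[0, 1])
```

The banding visible in the resulting scatterplot comes from the discreteness of the rounded values (only multiples of $0.2$ are possible for $\bar X$ when $n = 5$).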
Sample mean and variance for data from $Beta(.1, .1).$ This is a symmetrical 'bathtub' shaped distribution that puts most of its probability near $0$ and $1.$ (See the Wikipedia article on 'beta distribution'.) Because the independence of $\bar X$ and $S$ holds only for normal data, we might wonder what we would see in a similar scatterplot of $(\bar X, S)$ pairs from samples of size $5$ from this distribution.
Based on symmetry one expects $0$ correlation, and the sample correlation is again very nearly zero. However, it is obvious that $\bar X$ and $S$ are not independent. For example, events $\{0 < \bar X < .05\}$ (green) and $\{.4 < S < .45\}$ (orange) both have positive probability, while their intersection has probability $0$. (If all five observations $X_i$ are very near $0,$ then their SD cannot be as large as $.4$.) We have zero correlation without independence.
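The zero-correlation-without-independence claim can be checked numerically. In the sketch below (seed assumed), both events occur with positive empirical frequency, their intersection never occurs, and the sample correlation is still essentially zero:

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed

# 30,000 samples of size n = 5 from Beta(.1, .1).
x = rng.beta(0.1, 0.1, size=(30_000, 5))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)

r = np.corrcoef(xbar, s)[0, 1]          # near 0 by symmetry
A = (xbar > 0) & (xbar < 0.05)          # the green event
B = (s > 0.4) & (s < 0.45)              # the orange event

# A and B each occur, but never together: if all five observations are
# near 0, the sample SD cannot reach 0.4.
print(r, A.mean(), B.mean(), (A & B).sum())
```

The impossibility of the intersection is not a sampling accident: $\bar X < .05$ with nonnegative data forces every $X_i \le .25$, which caps $S$ well below $.4$.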
The original data points lie near the faces, edges, and corners of a 5-D hypercube. The transformation to $(\bar X, S)$ has 'squashed' this hypercube onto a 2-D space, but we can still see evidence of concentrations of points from the edges and corners.