I am studying GANs (generative adversarial networks), where the discriminator tries to predict whether a data point comes from the real distribution $p_r$ (real images, for example) or from the fake distribution $p_g$ (produced by the generator). Below is a passage from my university lecture notes (for context):
The dimensions of many real-world datasets, represented by $p_r$, only appear to be high. They usually concentrate on a lower-dimensional manifold. Thinking of real-world images: once the theme or the contained object is fixed, the images have many restrictions to follow, e.g., a dog should have two ears and a tail. These restrictions keep images from filling the full high-dimensional representation space.
$p_g$ is contained in a low-dimensional manifold, too. Whenever the generator is asked to produce a much larger image, say 64 × 64, from a noise input $z$ of small dimension, say 100, the distribution of colors over these 4096 pixels is entirely determined by the 100-dimensional random vector and can hardly fill up the whole high-dimensional space.
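To make the intuition concrete, here is a small numerical sketch (my own toy example, not from the notes): take a *linear* generator $G(z) = Wz$ with a fixed random matrix $W$. Its entire output set is the column space of $W$, a subspace of dimension at most 100 inside $\mathbb{R}^{4096}$, and a generic point of $\mathbb{R}^{4096}$ has a large component orthogonal to that subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
d_noise, d_image = 100, 4096  # 100-dim noise z, flattened 64x64 "image"

# Toy linear generator G(z) = W z: every possible output lies in the
# column space of W, a subspace of dimension <= 100 inside R^4096.
W = rng.standard_normal((d_image, d_noise))

# A "real" sample drawn from a full-dimensional distribution.
x_real = rng.standard_normal(d_image)

# Least-squares coefficients give the orthogonal projection of x_real
# onto the generator's output subspace.
coeffs, *_ = np.linalg.lstsq(W, x_real, rcond=None)
residual = x_real - W @coeffs

# The residual is far from zero: x_real lies outside the generator's
# 100-dimensional output manifold (with probability 1 for a density
# on R^4096).
print(np.linalg.norm(residual) / np.linalg.norm(x_real))
```

A real generator is nonlinear, so its outputs form a curved manifold rather than a subspace, but the dimension-counting argument is the same: the image of a 100-dimensional input under a smooth map has dimension at most 100.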
My question is: why are the supports of $p_g$ and $p_r$ almost surely disjoint?