TLDR: I made a toy version of a GAN. During model training, I observed a feedback loop where the discriminator "runs away" from the generator, and then the generator "follows" it. How is this problem addressed in practice?
I've created a toy version of a GAN. The true inputs $X$ follow a uniform distribution $U(5, 7)$. The generator is fed random values $Z$ drawn from the standard uniform distribution $U(0, 1)$.
The discriminator seeks to fit parameters $a$ and $b$ for the following prediction function:
$D(x) = \exp(-(a \cdot x + b)^2)$.
The generator seeks to fit parameters $c$ and $d$ for the following generator function:
$G(z) = c \cdot z + d$.
The cost function is defined as the following expression:
$Cost = {1 \over N_X} \sum\limits_{x \in X} [1 - D(x)] + {1 \over N_Z} \sum\limits_{z \in Z} [D(G(z))]$.
The discriminator seeks to minimize cost, and the generator seeks to maximize cost. I am training the GAN using gradient descent.
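The setup above can be sketched as follows. This is a minimal NumPy sketch, not code from the linked repo: the gradients are derived by hand from the cost function above, and the hyperparameters (learning rate, batch size, step count) are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x, a, b):
    # Discriminator: D(x) = exp(-(a*x + b)^2), output in (0, 1]
    return np.exp(-(a * x + b) ** 2)

def G(z, c, d):
    # Generator: G(z) = c*z + d
    return c * z + d

def cost(a, b, c, d, x, z):
    # Cost = mean[1 - D(x)] + mean[D(G(z))]
    return np.mean(1 - D(x, a, b)) + np.mean(D(G(z, c, d), a, b))

# Hypothetical hyperparameters and initial parameters
lr = 0.01
a, b, c, d = 1.0, 0.0, 1.0, 0.0

for step in range(1000):
    x = rng.uniform(5, 7, size=64)   # true data ~ U(5, 7)
    z = rng.uniform(0, 1, size=64)   # noise ~ U(0, 1)

    g = G(z, c, d)
    u_x = a * x + b                  # argument of the exponent at real data
    u_g = a * g + b                  # argument of the exponent at fake data
    Dx = np.exp(-u_x ** 2)
    Dg = np.exp(-u_g ** 2)

    # Gradients of Cost w.r.t. discriminator parameters (to minimize)
    dCost_da = np.mean(2 * u_x * x * Dx) + np.mean(-2 * u_g * g * Dg)
    dCost_db = np.mean(2 * u_x * Dx) + np.mean(-2 * u_g * Dg)

    # Gradients of Cost w.r.t. generator parameters (to maximize)
    dCost_dc = np.mean(-2 * a * u_g * Dg * z)
    dCost_dd = np.mean(-2 * a * u_g * Dg)

    # Simultaneous gradient descent (discriminator) / ascent (generator)
    a -= lr * dCost_da
    b -= lr * dCost_db
    c += lr * dCost_dc
    d += lr * dCost_dd
```

Note that both players are updated simultaneously from gradients of the same cost; this kind of simultaneous descent/ascent is exactly the dynamic that can orbit or chase rather than converge.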
During training, it seems that the generator "figures out" that the true data lies in the range [5, 7], and it adjusts its parameters to produce values in that range. With nowhere to go, the discriminator appears to "run away" from the generator by changing its parameters so that it classifies the range [5, 7] as a $0$. The generator then "follows" the discriminator, creating a feedback loop in which the discriminator keeps "running away" and the generator keeps "following" it.
How is this problem addressed in practice?
Project repo here:
https://gitlab.com/tyler194/toy_gan
The blue line represents the discriminator function (x-axis is input, y-axis is output). The red line represents the range of the generator function (the domain [0, 1] maps to the output range shown on the x-axis; the y-axis carries no meaning).
The generator moves toward the range [5, 7] while the discriminator slowly flees:

![fig2](https://i.stack.imgur.com/nmGK7.png)

Stability is reached, but the generator makes its range smaller and smaller around the x value corresponding to the peak of the discriminator function.