This is a recurring thought I have from time to time, and I suspect it is flawed, so my question is: why can't you do the following?
I imagine that, given a seed and a random number generator, the binary output at a given length might have something in common with some target data that you want to compress. In the case where it just happened to be identical, you'd get a phenomenal amount of compression: you would only need to store the seed and the length.
That said, I suspect this is in the same league as the perpetual motion machine I thought I invented when I was a kid (air lock at the bottom of a tube of water, ball floats to top and falls back down to air lock...)
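To make the idea concrete, here is a minimal sketch of the hypothetical scheme, using Python's built-in `random` module as the generator (the function names `try_compress` and `decompress` are just illustrative, not an existing library):

```python
import random

def try_compress(target_bits, max_seed=1 << 16):
    """Search for a seed whose PRNG output reproduces target_bits.

    This is the hypothetical scheme from the question: if some seed's
    output equals the data, storing the (small) seed alone would
    "compress" the (large) data. Python's Mersenne Twister is used
    purely for illustration.
    """
    n = len(target_bits)
    for seed in range(max_seed):
        rng = random.Random(seed)
        if format(rng.getrandbits(n), f"0{n}b") == target_bits:
            return seed  # store this seed instead of the data
    return None  # no seed in range reproduced the string

def decompress(seed, n):
    """Regenerate the n-bit string from its seed."""
    rng = random.Random(seed)
    return format(rng.getrandbits(n), f"0{n}b")
```

The decompression side genuinely works; the catch, as the answer below explains, is that for any realistic data `try_compress` will essentially never find a matching seed.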
"In the case it just happened to be the same" is the key. It will (almost) never happen. Say your files are million-bit strings. There are $2^{1000000}\approx 10^{301030}$ of them. If your string happened to equal, say, the first million bits of $\pi$, you would get a phenomenal amount of compression, but that happens very rarely. If your random number generator is good, its output will match your file just as rarely. The compression we actually use exploits the non-randomness of the files we compress. There are tremendous correlations in text (because it is natural language, not random noise) and in photos (large areas are close in color). Compression takes advantage of that.
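The counting argument above can be checked numerically. A $k$-bit seed can name at most $2^k$ distinct outputs, so out of all $2^n$ strings of length $n$, the fraction any seed-based scheme could possibly reproduce is at most $2^{k-n}$. A small sketch (the function name is mine, and the bound is worked in $\log_{10}$ because the raw fraction underflows a float):

```python
import math

def log10_max_fraction(seed_bits, data_bits):
    """log10 of the largest fraction of data_bits-long strings that
    seed_bits-long seeds could possibly name: 2**(seed_bits - data_bits),
    by the pigeonhole principle."""
    return (seed_bits - data_bits) * math.log10(2)

# A 64-bit seed against a million-bit file: the reachable fraction
# is around 10**(-301011), i.e. effectively zero.
bound = log10_max_fraction(64, 1_000_000)
```

With equal seed and data lengths the bound is $10^0 = 1$, which is exactly the point: to cover every possible file you need seeds as long as the files themselves, and then nothing has been compressed.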