I'm not sure the title is the best fit, but I don't know how else to put it without a paragraph's worth of describing what I want to do:
Imagine you have an image of some fixed resolution, say $1024\times1024$, with three color channels and eight bits per channel. (The precise parameters don't really matter.) That gives us $1024\times1024\times8\times3=25165824$ bits and, therefore, $2^{25165824}$ different images.
It's extremely easy to sample that many random bits to obtain a random image in a more or less unbiased way. However, the image you get that way will almost always be extremely low in structure, i.e. it will look like white noise. Which makes sense, of course: it's nearly impossible to randomly hit a configuration where the bits end up such that the entire left half of the image is white and the entire right half is black (that happens in exactly one case out of those $2^{25165824}$), and even landing anywhere near something that looks like that is very unlikely.
Similarly, though far more flexibly in principle, it's practically impossible to run into an image that just so happens to feature a cat.
But what if I want to sample in a way that boosts the likelihood of running into any sort of structure?
I have a first attempt, which goes like this:
- sample a random number of bits (so in this case from the closed interval $\left[0,25165824\right]$)
- set that many bits to $1$ and the rest to $0$
- randomly permute this collection of bits
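The three steps above, for a plain bitstring of length $n$, can be sketched as follows (a minimal 1-D illustration; the function name `sample_energy_uniform` is my own label, not anything standard):

```python
import numpy as np

rng = np.random.default_rng()

def sample_energy_uniform(n):
    # step 1: sample the number of 1-bits uniformly from the closed interval [0, n]
    k = rng.integers(0, n + 1)
    # step 2: build k ones and n - k zeros
    bits = np.concatenate((np.ones(k, dtype=bool),
                           np.zeros(n - k, dtype=bool)))
    # step 3: randomly permute the collection of bits
    return rng.permutation(bits)
```

Every bitstring is still reachable, but now each "energy level" $k$ is hit with equal probability $1/(n+1)$ before the microstates within it are chosen uniformly.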
That way, I at least make more structures far more likely. In effect, this samples from the space of all images in a way that makes every total energy (number of $1$-bits) equally likely. In particular, the all-black and all-white cases now each have a probability of $1/(25165824+1)$ instead of $2^{-25165824}$.
I can extend this so that I can run into any average color, by doing the above separately for each color channel. (This actually decreases the chance of hitting pure black or white again, but it makes it more likely to run into, say, a uniformly red image.)
However, even with these changes to the sampling, the end result is typically something like a tinted, darkened, or brightened version of white noise. I.e. the only statistic I really made uniform here is the average color of the entire image.
So how might I go further? How might I make it just as likely to run into an accidental cat as into white noise? (Or at least get as close to that as possible.)
If you prefer not to think of images, it's perhaps sufficient to think of any possible pattern a string of bits could form. I would like to boost the probabilities of "more ordered" bitstrings such that every "degree of order" is equally likely to occur, rather than every string. Indeed, this kind of sampling would make some strings exponentially less likely, because their "class of orderliness" is so large.
I guess what it comes down to is something like defining appropriate macrostates, sampling a macrostate uniformly at random, and then sampling uniformly at random one of the microstates that correspond to that macrostate. Both of those tasks can be a problem:
- how best to define my macrostates (the current example, which just counts how many bits are $1$, is too simplistic for my goals)
- how to efficiently sample a microstate from an arbitrary macrostate (for the bit-counting macrostates, uniform permutations suffice, but for more sophisticated macrostates this may not hold)
(I do still want every single microstate to be possible)
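To make the two-stage idea concrete, here is one richer macrostate I can imagine for 1-D bitstrings: the number of runs (maximal blocks of equal bits), equivalently the number of $0\leftrightarrow1$ transitions. Sampling the transition count uniformly and then placing the transitions uniformly gives highly ordered strings (few runs) the same weight as noisy ones (many runs), and every microstate remains reachable. This is only a sketch of the scheme, and `sample_by_runs` is a hypothetical helper of my own, not an established method:

```python
import numpy as np

rng = np.random.default_rng()

def sample_by_runs(n):
    # macrostate: number of 0<->1 transitions, sampled uniformly from [0, n-1]
    k = rng.integers(0, n)
    # microstate: place the k transitions uniformly among the n-1 gaps
    flips = np.zeros(n, dtype=bool)
    gaps = rng.choice(n - 1, size=k, replace=False)
    flips[gaps + 1] = True
    # cumulative XOR turns transition markers into the actual bitstring
    bits = np.logical_xor.accumulate(flips)
    if rng.integers(2):  # random starting bit
        bits = ~bits
    return bits
```

The constant strings now appear with probability on the order of $1/n$ rather than $2^{-n}$, for the same reason as in the bit-counting scheme; the open question is which macrostate plays this role for "contains a cat".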
Some examples of what my current approach ends up producing:

Silly Python code that accomplishes this:
```python
from random import randint

import numpy as np
from PIL import Image

rng = np.random.default_rng()

def sample():
    nbytes = 1  # bytes per channel
    dimx = 512
    dimy = 512
    totalbits_R = 8 * nbytes * dimx * dimy
    totalbits_G = 8 * nbytes * dimx * dimy
    totalbits_B = 8 * nbytes * dimx * dimy
    # sample the number of on-bits per channel uniformly (closed interval)
    on_bits_R = randint(0, totalbits_R)
    on_bits_G = randint(0, totalbits_G)
    on_bits_B = randint(0, totalbits_B)
    # build that many 1-bits, pad with 0-bits, and permute
    bits_R = rng.permutation(np.concatenate(
        (np.zeros(totalbits_R - on_bits_R, dtype=bool),
         np.ones(on_bits_R, dtype=bool)),
        axis=0))
    bits_G = rng.permutation(np.concatenate(
        (np.zeros(totalbits_G - on_bits_G, dtype=bool),
         np.ones(on_bits_G, dtype=bool)),
        axis=0))
    bits_B = rng.permutation(np.concatenate(
        (np.zeros(totalbits_B - on_bits_B, dtype=bool),
         np.ones(on_bits_B, dtype=bool)),
        axis=0))
    # pack each channel's bits into 8-bit pixel values
    bits_R = bits_R.reshape((dimx, dimy, 8 * nbytes))
    bits_G = bits_G.reshape((dimx, dimy, 8 * nbytes))
    bits_B = bits_B.reshape((dimx, dimy, 8 * nbytes))
    ints_R = np.packbits(bits_R, axis=-1)
    ints_G = np.packbits(bits_G, axis=-1)
    ints_B = np.packbits(bits_B, axis=-1)
    ints = np.concatenate((ints_R, ints_G, ints_B), axis=-1)
    image = Image.fromarray(ints, mode="RGB")
    image.show('test')

sample()
```