I'm working on a script that generates histograms for images. It first takes a histogram of all pixels, and then tries various distributions of sample pixels, such as a grid of pixels, horizontal stripes, a diagonal cross...
The idea is that when it's used in an embedded system, you need a fast method of getting the histogram, and fewer pixels means less work.
Here are some example histograms:
Reference - all pixels
Pixels: 65536

Grid
Pixels: 841
Chi squared: 0.05546

Diagonals
Pixels: 512
Chi squared: 0.1213

Now I have the histograms, but I need to determine their "quality": some good metric that gives clear results. So far I've tried the chi-squared test, but I'm not sure it's the right choice.
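For comparing a sampled histogram against the full-pixel reference, a common approach is to normalize both to probability distributions first (so the different pixel counts don't dominate) and then apply a distance. Here's a minimal sketch of two such distances; the function names, the epsilon guard, and the choice of Bhattacharyya as the alternative are my assumptions, not anything from your script:

```python
import numpy as np

def chi2_distance(ref, sample, eps=1e-10):
    # Normalize both histograms to sum to 1 so that the very
    # different pixel counts (65536 vs. 841) don't dominate.
    p = ref / ref.sum()
    q = sample / sample.sum()
    # Symmetric chi-squared distance; 0 means identical distributions.
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def bhattacharyya_distance(ref, sample, eps=1e-10):
    # An alternative metric that is bounded and less sensitive to
    # near-empty bins than chi-squared; 0 means identical.
    p = ref / ref.sum()
    q = sample / sample.sum()
    bc = np.sum(np.sqrt(p * q))
    return -np.log(bc + eps)
```

Chi-squared can blow up when the reference has near-empty bins that the sample misses entirely; a bounded metric like Bhattacharyya (or earth mover's distance, if bin adjacency matters to you) tends to behave more smoothly there.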
Also I'd like to calculate a "score" based on the number of pixels and the quality of the histogram: the fewer the pixels and the better the quality, the higher the score. I tried multiplying the chi-squared value by the number of pixels, but that skewed the results oddly (a lower score would then mean better).
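Multiplying two "lower is better" quantities still gives a "lower is better" number, which is why the direction felt inverted. One way to get a "higher is better" score is to invert each factor separately; this is just one possible formula, and the `alpha` weight and normalization are assumptions you'd tune on your image set:

```python
def pattern_score(chi2, n_sample, n_total=65536, alpha=1.0):
    # Hypothetical score, higher = better.
    # quality: 1 for a perfect match, falling toward 0 as chi2 grows.
    quality = 1.0 / (1.0 + chi2)
    # savings: reward for sampling fewer pixels; alpha controls how
    # strongly fewer pixels is rewarded relative to quality.
    savings = (n_total / n_sample) ** alpha
    return quality * savings
```

With your numbers (Grid: 841 px, 0.05546; Diagonals: 512 px, 0.1213) the weight `alpha` decides which pattern wins, which is really the crux: the score encodes how much histogram accuracy you're willing to trade for speed.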
Maybe instead of calculating a score, I could plot the two values against each other?
(I'll run this on a larger set of images, so the scores can be averaged)
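A plot is arguably the more honest view, since any single score hides the trade-off. A scatter of pixel count vs. distance (averaged over the image set) lets you eyeball the Pareto front: patterns below-left of others dominate them. A minimal matplotlib sketch, using the numbers from the question as placeholder data:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# Placeholder results from the question; replace with averages
# over the full image set.
results = [
    ("Grid", 841, 0.05546),
    ("Diagonals", 512, 0.1213),
]

fig, ax = plt.subplots()
for name, pixels, chi2 in results:
    ax.scatter(pixels, chi2)
    ax.annotate(name, (pixels, chi2))
ax.set_xscale("log")  # pixel counts span orders of magnitude
ax.set_xlabel("Sample pixels")
ax.set_ylabel("Chi-squared distance to full histogram")
ax.set_title("Sampling patterns: cost vs. quality")
fig.savefig("patterns.png")
```

Averaging per pattern across many images before plotting (perhaps with error bars from the per-image spread) would show whether a pattern is consistently good or just lucky on a few images.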