I have about 50 equally sized photos of magazine covers, which I'm attempting to blend into one composite image that shows the "average" cover. Each of the covers has a single face on it, so the result should look pretty cool.
Each photo is 400 x 525 pixels, so for each pixel position I have 50 RGB values (e.g. (100, 140, 255)), one from each source photo.
I initially averaged the values to get the resulting image. It was a good start, but very muddy. Then I quantized the pixels to a 256-color palette and took the per-pixel mode. This was very pixelated and tended to favor black and white.
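For reference, here's a minimal sketch of the two approaches I tried, assuming the photos are stacked into one NumPy array of shape (n_photos, height, width, 3). The demo uses random data and a small image size in place of the real covers; the 3-3-2 bit palette is just one way to get 256 colors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the real stack of covers: (n_photos, height, width, 3)
stack = rng.integers(0, 256, size=(50, 32, 32, 3), dtype=np.uint8)

# Approach 1: per-pixel mean -- the "muddy" composite.
mean_img = stack.mean(axis=0).astype(np.uint8)

# Approach 2: quantize each pixel to a 256-color palette
# (3 bits R, 3 bits G, 2 bits B), then take the per-pixel mode
# -- the "pixelated" composite.
idx = ((stack[..., 0] >> 5).astype(np.uint16) << 5
       | (stack[..., 1] >> 5).astype(np.uint16) << 2
       | (stack[..., 2] >> 6).astype(np.uint16))
mode_idx = np.apply_along_axis(
    lambda v: np.bincount(v, minlength=256).argmax(), 0, idx)

# Expand the winning palette index back to displayable RGB.
mode_img = np.stack([(mode_idx >> 5 & 0x7) * 36,
                     (mode_idx >> 2 & 0x7) * 36,
                     (mode_idx & 0x3) * 85], axis=-1).astype(np.uint8)
```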
I think what I need is, for each vector of 50 RGB values, to find the largest cluster of values with some minimum density X, then take the centroid of that cluster. But I don't know whether that's a sensible approach, or the most expedient way to find that value. It's a sort of "average approximate mode", I guess.
Expedience is key because I'd like to be able to do this for 20,000 photos at some point, so computationally heavy K-means algos and so forth will not be feasible.
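One cheap way to get that "average approximate mode" without iterative clustering: bin each pixel's 50 samples into a coarse 3-D color histogram, find the densest bin, and average only the samples that landed there. It's a single pass per pixel, so it scales linearly with the number of photos. The function below is a sketch under assumptions (the `bits` and `min_density` parameters are made up, and the mean fallback when no bin is dense enough is my choice):

```python
import numpy as np

def approx_mode(samples, bits=4, min_density=0.2):
    """samples: (n, 3) uint8 RGB values for one pixel position.
    Returns the centroid of the most populated coarse color bin,
    or the plain mean if no bin reaches min_density."""
    n = len(samples)
    shift = 8 - bits
    q = samples >> shift                          # coarse 3-D bin coordinates
    keys = (q[:, 0].astype(np.int32) << (2 * bits)
            | q[:, 1].astype(np.int32) << bits
            | q[:, 2].astype(np.int32))           # flatten bins to one key
    counts = np.bincount(keys, minlength=1 << (3 * bits))
    best = counts.argmax()
    if counts[best] < min_density * n:            # no dominant cluster: fall back
        return samples.mean(axis=0)
    return samples[keys == best].mean(axis=0)     # centroid of the densest bin
```

You'd call this once per pixel position (a plain loop over the 400 x 525 grid). Widening the bins (smaller `bits`) trades color precision for more robust cluster detection.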
Thank you!