I am trying to figure out how to display a histogram of a digital image in the face of massive outliers (lots of shadows, lots of highlights, or lots of anything in between). If I simply choose the bin with the most entries as the '100% height' of my fixed display area, the remaining bins are dwarfed and the display conveys almost no useful information.
I tried computing the standard deviation of the bin heights and only considering bins within a certain number of standard deviations of the mean when picking the '100% height' bin, but it didn't work well in the general case: a threshold that worked for some images failed for others.
A good example of what I want is Photoshop's histogram, but I'm not sure how it's done and my Google searches have come up short. Does anyone have any advice?
You could reject the tallest 5% (for example) of the bins, and choose the height within which the remaining 95% fit. That is, sort the heights of the $n$ bins and use the height of the bin that is $\lceil 0.05n \rceil$-th from the top as the '100% height'; any taller bins are simply clipped to the top of the display. I don't know how Photoshop does it, but this is how I would do it if I were writing Photoshop.
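A minimal sketch of that idea in Python (the function and parameter names are mine, not from Photoshop or any library):

```python
def display_heights(counts, reject_fraction=0.05, display_height=100):
    """Scale histogram bin counts for display, ignoring the tallest bins.

    The '100% height' is the height of the bin at the
    (1 - reject_fraction) position of the sorted heights, i.e. the
    tallest `reject_fraction` of bins are rejected when choosing the
    cap. Bins taller than the cap are clipped to the full display height.
    """
    n = len(counts)
    cap_index = max(0, int(n * (1 - reject_fraction)) - 1)
    cap = sorted(counts)[cap_index]
    if cap <= 0:
        # Degenerate case: almost all bins empty; fall back to the max.
        cap = max(counts) or 1
    return [min(c / cap, 1.0) * display_height for c in counts]
```

With, say, `counts = [1, 2, ..., 9, 1000]` and `reject_fraction=0.1`, the cap becomes 9, so the outlier bin is clipped to the full display height while the others remain distinguishable.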
Logarithmic scaling of the heights is also a nice solution, but it changes the apparent shape of the histogram, which is presumably why most image editors offer a choice between linear and logarithmic scaling. So one still needs a solution for when the user chooses linear.
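For completeness, a sketch of the logarithmic variant (again, names are mine):

```python
import math

def log_heights(counts, display_height=100):
    """Scale bin counts logarithmically for display.

    log1p (i.e. log(1 + c)) keeps empty bins at exactly zero while
    compressing tall peaks; heights are then normalized so the tallest
    bin fills the display.
    """
    logs = [math.log1p(c) for c in counts]
    top = max(logs) or 1.0  # avoid division by zero for an all-empty histogram
    return [v / top * display_height for v in logs]
```

Note that a bin with 99 entries now draws at half the height of one with 9 entries times ten, rather than eleven times taller, which is exactly the shape distortion mentioned above.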