I would like to know how the MSU Video Quality Measurement Tool calculates its blur metric.
What mathematical computation could be used to measure blur?
I want to do something similar in my image pipeline.
EDIT: If I have a blurry image, upsize or downsize it, and then compute the blur metric again on the resampled image, would the result be the same, higher, or lower?
There are many possible blur metrics. For a list of some, see this student paper (from Stanford U). For many more, Google "blurring metric image". Most appear to measure some form of local contrast. (This suggests downsized images would be measured as less blurry than the originals and upsized images as more blurry, but as @mattdm comments, the blurriness of the upsized image depends on the resampling algorithm.)

Do not expect any simple mathematical formula. For example, one method first runs an edge detector over the image and then estimates the edge widths (wider = blurrier, of course).

For the details, and to answer the edited question, you would need to consult the MSU people, because they do not appear to have publicly documented their metric.
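To make "local contrast" concrete: a minimal sketch of one common local-contrast metric is the variance of the Laplacian response (this is an illustration of the general idea, not MSU's actual metric, which is undocumented). Sharper images have stronger high-frequency content, so the Laplacian response has higher variance; blurring attenuates it.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 3x3 Laplacian response over a 2-D grayscale
    float array. Higher value = sharper; lower = blurrier."""
    k = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    # valid-mode 2-D convolution via shifted slices (no SciPy needed)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def box_blur(img):
    """Simple 3x3 mean filter, used here only to demo the metric."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += img[dy:dy + h - 2, dx:dx + w - 2]
    return out / 9.0

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))   # noisy test image (lots of high frequencies)
soft = box_blur(sharp)         # blurred copy should score lower
print(laplacian_variance(sharp), laplacian_variance(soft))
```

The edge-width approach mentioned above is more involved (edge detection, then measuring the transition width across each edge), but it rests on the same intuition: blur spreads intensity transitions over more pixels, lowering local contrast.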