I am writing a Gaussian blur filter in graphics shader code, and I want the blur to be parameterized by radius from the user's perspective. The best method I can come up with is to pick a suitable cutoff value for y, say 0.001, and solve for the variance to plug into the normal distribution so that the curve reaches that value of y at the radius.
Unfortunately, I cannot for the life of me solve this equation for $v$...
$x = 10$ (blur radius)
$$0.001 = \frac{1}{2 \pi v^2}e^{-\frac{x^2}{2v^2}}$$
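For what it's worth, even without a closed form, the equation can be solved numerically. A minimal sketch in Python, assuming we want the smaller of the two roots (the expression, viewed as a function of $v$ at fixed $x$, rises until $v = x/\sqrt{2}$ and then falls, so bisecting on the rising branch finds the tighter Gaussian whose tail beyond the radius stays below the cutoff). The function names here are made up for illustration:

```python
import math

def y(v, x=10.0):
    # The question's expression: y = exp(-x^2 / (2 v^2)) / (2 pi v^2)
    return math.exp(-x * x / (2.0 * v * v)) / (2.0 * math.pi * v * v)

def solve_v(target=0.001, x=10.0, lo=1e-3):
    """Bisect for the smaller root of y(v) = target.

    y(v) at fixed x is increasing on (0, x/sqrt(2)), where it peaks,
    so we bisect on that rising branch.
    """
    hi = x / math.sqrt(2.0)  # location of the maximum of y over v
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if y(mid, x) < target:
            lo = mid  # root lies above mid on the rising branch
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Note that a target of 0.001 is only achievable here when the peak value $e^{-1}/(\pi x^2)$ exceeds it; for $x = 10$ it does, barely.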
I don't think the radius has to involve the variance. You could just use the radius to scale the x-axis.
So you could always sample the distribution from -1 to 1. If the radius is 4, you would take 9 samples; for a radius of 10, you would take 21 samples between -1 and 1. The variance can stay the same.
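The idea above can be sketched as building a normalized kernel of $2r + 1$ weights from a fixed-variance Gaussian sampled on $[-1, 1]$. This is a minimal Python illustration; the `sigma = 0.4` default is my own arbitrary choice (the answer leaves the variance unspecified), picked so the tails are near zero at $x = \pm 1$:

```python
import math

def gaussian_kernel(radius, sigma=0.4):
    """Sample a fixed-sigma Gaussian at 2*radius + 1 evenly spaced
    points on [-1, 1], normalized so the weights sum to 1."""
    n = 2 * radius + 1
    if n == 1:
        return [1.0]  # radius 0: a single full-weight tap
    xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    w = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    total = sum(w)  # normalize so the blur preserves brightness
    return [wi / total for wi in w]
```

For example, `gaussian_kernel(4)` yields 9 weights and `gaussian_kernel(10)` yields 21, matching the sample counts above; in a shader you would typically precompute these weights and pass them as uniforms.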
You might want to try asking this question on https://gamedev.stackexchange.com/ as well.