I recently came across the formula referenced in this answer: https://math.stackexchange.com/a/493434/106050.
This is almost exactly what I would like to use in a statistical library I'm working on, but I was wondering whether it can be simplified so that f(x) yields the decimal fraction of the population covered by x standard deviations of a normal distribution; i.e.
f(1) = .6827
f(2) = .9545
f(3) = .9973, etc...
I'd like this to work for non-integer standard deviations as well. What is the simplest equation that satisfies this? I'm hoping it won't require any integral calculation...
Unfortunately, it does require integration.
A normal distribution can be modeled as
$$\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$
So, the probability that a point lies in the range $(-t\sigma+\mu, t\sigma+\mu)$ is
$$\int_{-t\sigma+\mu}^{t\sigma+\mu} \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}} dx = \int_{-t}^t \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} dy.$$
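Since this integral has no elementary closed form, in practice it is evaluated numerically. As a minimal sketch (assuming Python; `coverage_numeric` and the step count are illustrative choices, not part of the original answer), the right-hand integral can be approximated with the trapezoidal rule:

```python
import math


def coverage_numeric(t, n=200_000):
    """Approximate the integral of the standard normal density
    exp(-y^2/2)/sqrt(2*pi) over (-t, t) with the trapezoidal rule."""
    a, b = -t, t
    h = (b - a) / n

    def phi(y):
        return math.exp(-y * y / 2) / math.sqrt(2 * math.pi)

    # Trapezoidal rule: half-weight the endpoints, full weight interior points.
    total = (phi(a) + phi(b)) / 2 + sum(phi(a + i * h) for i in range(1, n))
    return total * h


print(round(coverage_numeric(1), 4))  # 0.6827
print(round(coverage_numeric(2), 4))  # 0.9545
```

With a fine enough step this reproduces the familiar 68/95/99.7 values to four decimal places.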
This integral can be expressed in terms of the error function as $\operatorname{erf}\!\left(\frac{t}{\sqrt{2}}\right)$, but it cannot be written in terms of elementary functions.