I'd like to evaluate the following expression efficiently (numerically):
$$g_a(x) := \int_0^x e^{-t^2} \int_0^{at} e^{-s^2} dsdt$$
If I want a given fixed accuracy and evaluate both integrals using, e.g., a summed quadrature formula for each integral, the cost is about $O(ax^2)$. Is there a better way than this naive one?
As the commenter noted, implicit in the question is whether it is better to do 2D quadrature than to evaluate an error function at each sample point of a 1D quadrature. Since $\int_0^{at} e^{-s^2}\,ds = \frac{\sqrt{\pi}}{2}\operatorname{erf}(at)$, the question is whether this structure in the definition of the error function allows a speedup relative to the naive nested-quadrature implementation. So I wrote a little code to find out. Using `boost::math::erf` and `std::exp`, I determined that it is faster to evaluate `erf` than `std::exp` (though keep in mind that Boost is not bound by the 1 ULP accuracy goal of the standard library math functions). Next you just need a good quadrature rule, say tanh-sinh for small $x$ and the trapezoidal rule for large $x$. These converge exponentially in about 30 function evaluations (double precision), so your total compute time should be about $(12+9)\cdot 30\,\mathrm{ns} \approx 0.5\,\mu\mathrm{s}$. That's as fast as I can think to get it.
Code to compute the speed: