I want to integrate $$f(x) = (2\pi)^{-\frac{p}2}\operatorname{det}(\Sigma)^{-\frac 12}\exp\left(-\frac 12 (x-\mu)'\Sigma^{-1}(x-\mu)\right)$$ over the $p$-dimensional box $[a,b]^p = \prod_{i=1}^p [a_i, b_i]$. Since $p$ may be very large, I generate a low discrepancy sequence $z_1, z_2,\dots, z_n$ (Halton) and approximate the integral as follows: $$\int_{[a,b]^p} f(x)\,\mathrm dx \approx \frac{\prod_{i=1}^p(b_i - a_i)}n\sum_{k=1}^n f(Tz_k),$$ where $Tz = \operatorname{diag}(b_1 - a_1, b_2 - a_2, \dots, b_p - a_p)z + a$. Since the low discrepancy sequence is only defined on $[0, 1)^p$, I have to scale it so that it matches $[a,b]^p$. This approach works quite well, even if $p$ is large.
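For concreteness, here is a minimal sketch of the estimator, using SciPy's `scipy.stats.qmc.Halton` sequence. The dimension $p = 3$, identity covariance, bounds $[-2, 2]^3$, and $n = 2^{12}$ are illustrative choices, not part of the question:

```python
import numpy as np
from scipy.stats import qmc

def mvn_density(x, mu, Sigma_inv, log_norm):
    # multivariate normal density evaluated at each row of x
    d = x - mu
    return np.exp(log_norm - 0.5 * np.einsum('ij,jk,ik->i', d, Sigma_inv, d))

def qmc_box_integral(mu, Sigma, a, b, n=2**12, seed=0):
    p = len(mu)
    Sigma_inv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    log_norm = -0.5 * (p * np.log(2 * np.pi) + logdet)
    z = qmc.Halton(d=p, seed=seed).random(n)   # Halton points in [0, 1)^p
    x = a + (b - a) * z                        # T z: scale/shift to the box
    vol = np.prod(b - a)                       # prod_i (b_i - a_i)
    return vol * mvn_density(x, mu, Sigma_inv, log_norm).mean()

p = 3
mu = np.zeros(p)
Sigma = np.eye(p)
a, b = -2.0 * np.ones(p), 2.0 * np.ones(p)
est = qmc_box_integral(mu, Sigma, a, b)
```

In this small example the truth is $(\Phi(2) - \Phi(-2))^3 \approx 0.87$, and the QMC estimate lands close to it.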
However, there is one big problem: if $a$ and $b$ are large in magnitude, then I would expect the true value to be close to one (since the integral over the whole of $\mathbb R^p$ equals one). But the approximation yields a value close to zero, since $f$ has very light tails and almost all sample points land where $f$ is vanishingly small. Is there a way to avoid this issue? Of course, I could just manually threshold (i.e., set the estimated value to one if the hypercube exceeds a certain volume), but I don't think that's a good idea, as I would first have to determine a threshold and then hope that it is correct.
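The failure mode is easy to reproduce. In the hypothetical setting below ($\mu = 0$, $\Sigma = I$, $p = 30$, box $[-10, 10]^{30}$, $n = 2^{12}$, all illustrative), the true value $(\Phi(10) - \Phi(-10))^{30}$ is $1$ to many decimal places, yet the uniform-QMC estimate collapses to essentially zero:

```python
import numpy as np
from scipy.stats import qmc

p, n = 30, 2**12
a, b = -10.0, 10.0
z = qmc.Halton(d=p, seed=0).random(n)   # Halton points in [0, 1)^p
x = a + (b - a) * z                     # scale to the box [-10, 10]^p
# standard normal density (mu = 0, Sigma = I)
f = (2 * np.pi) ** (-p / 2) * np.exp(-0.5 * np.sum(x**2, axis=1))
estimate = (b - a) ** p * f.mean()
print(estimate)  # ~0, even though the true value is ~1
```

A typical uniform point in this box has $\sum_i x_i^2 \approx 1000$, so $f$ is of order $e^{-500}$ there; the enormous box volume cannot compensate, and no sample ever lands in the tiny region around $\mu$ that carries virtually all of the mass.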
Are there better ways?