I am looking to solve the following equation for $x$:
$$\int_{0}^x e^{-t^2}dt=2xe^{-x^2}.$$
Multiplying both sides by $2/\sqrt{\pi}$ turns the LHS into the error function $\operatorname{erf}(x)$, while the RHS becomes $x$ times a Gaussian (up to a constant factor).
I have no idea what to do with this. I would love a closed-form solution, or $x$ expressed in terms of functions that are easily computable in MATLAB (such as the error function or Bessel functions), but I would be satisfied with a numerical method that is more efficient than searching for a zero of $2xe^{-x^2}-\int_{0}^x e^{-t^2}dt$ (using, say, MATLAB's fzero). Any help?
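For reference, the zero search I have in mind is roughly the following sketch (it uses MATLAB's built-in erf, so no quadrature is needed; the names F and x are arbitrary):

```matlab
% Sketch of the baseline zero search.
% Since int_0^x exp(-t^2) dt = sqrt(pi)/2 * erf(x), the residual is:
F = @(x) 2*x.*exp(-x.^2) - sqrt(pi)/2*erf(x);
x = fzero(F, 1)    % initial guess taken near the crossing seen in the plots
```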
Looking at the plots, the solution seems to be close to $x=1$.
If I differentiate both sides, I obtain $e^{-x^2}=2e^{-x^2}-4x^2e^{-x^2}$, which yields $x=\pm 1/2$. That does not match the plots, so is differentiating both sides even a correct approach? I might be missing something very simple. Can anyone elucidate?
If you can rapidly compute the special function $\mathrm{Erf}(x)$, then you can rapidly solve this equation using Newton's method. Recall that to find a simple (first-order) zero of a differentiable function $F(x)$, the iteration is
$$ x_{n+1} = x_n - F(x_n) / F'(x_n). $$
In our case, $F(x) = \tfrac{1}{2} \sqrt{\pi}\, \mathrm{Erf}(x) - 2x e^{-x^2}$ (Matlab provides this as erf), and its derivative simplifies to $F'(x) = -e^{-x^2} + 4x^2 e^{-x^2}$. Given an initial guess $x_0$, we can plug the update into Matlab as a single line.
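A minimal sketch of that line, using MATLAB's built-in erf (the variable name x0 is my choice for the current iterate):

```matlab
x0 = x0 - (sqrt(pi)/2*erf(x0) - 2*x0*exp(-x0^2)) / (-exp(-x0^2) + 4*x0^2*exp(-x0^2))
```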
Repeatedly enter this line until convergence. Depending on whether $x_0 < 0.5$ or $x_0 > 0.5$ (note that you cannot use $x_0 = 0.5$, since $F'(0.5) = 0$), this converges to $0$ or to $0.9899...$, and furthermore, it does so in only a few iterations.
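To illustrate, a small script (a sketch; the loop bound and starting points are arbitrary choices of mine) that runs the same update once from each side of $0.5$:

```matlab
% Run Newton's iteration from two initial guesses, one per basin.
for x0 = [0.2 0.6]
    x = x0;
    for k = 1:50
        F  = sqrt(pi)/2*erf(x) - 2*x*exp(-x^2);
        dF = (-1 + 4*x^2)*exp(-x^2);
        x  = x - F/dF;
    end
    fprintf('x0 = %.1f  ->  x = %.4f\n', x0, x)    % roots 0 and 0.9899
end
```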