Peace be upon you,
I have the following system of equations \begin{align*} \begin{cases} \psi(x_1)-\psi(x_1+x_2)+0.6931471805599456=0\\ \psi(x_2)-\psi(x_1+x_2)+0.6931471805599456=0 \end{cases} \end{align*} or, equivalently, in MATLAB:
f=@(x) [psi(x(1)) - psi(x(1)+x(2)) + 0.6931471805599456; psi(x(2)) - psi(x(1)+x(2)) + 0.6931471805599456];
When I set the tolerances to a small value such as $1.1 \times 10^{-20}$ and run fsolve(), I get a residual of about $10^{-5}$, which is acceptable.
options = optimoptions('fsolve','TolFun',1.1e-20,'TolX',1.1e-20,'MaxIter',Inf,'MaxFunEvals',Inf);
>> fsolve(f,[1;1],options)
Equation solved, fsolve stalled.
fsolve stopped because the relative size of the current step is less than the
selected value of the step size tolerance squared and the vector of function values
is near zero as measured by the selected value of the function tolerance.
<stopping criteria details>
ans =
1.0e+04 *
8.3620
8.3620
>> f(ans)
ans =
1.0e-05 *
-0.2990
-0.2990
But I am looking for a way to set a bound on the residual directly, so that I can be sure of the accuracy. Any illuminating ideas?
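To be clear, I know I can verify the residual myself after the call, roughly along these lines (resTol is just an illustrative name for the bound I have in mind):

```matlab
f = @(x) [psi(x(1)) - psi(x(1)+x(2)) + 0.6931471805599456; ...
          psi(x(2)) - psi(x(1)+x(2)) + 0.6931471805599456];
opts = optimoptions('fsolve','TolFun',1e-20,'TolX',1e-20,'Display','off');

resTol = 1e-8;                 % the residual bound I would like enforced
x = fsolve(f, [1;1], opts);
if norm(f(x), inf) > resTol    % manual check, after the solver has stopped
    warning('Residual bound %g not met: norm(f(x)) = %g', ...
            resTol, norm(f(x), inf));
end
```

But this only detects a failure after the solver has already stopped; I would prefer fsolve to enforce such a bound itself.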
This is an important point:
Since numerical methods work by evaluating a (continuous) function at discrete points, they have no information about the function's behavior outside of those points and thus cannot "guarantee" anything (unless they use symbolic reasoning, but then they are not purely numerical, and as far as I know MATLAB does not implement such hybrid algorithms). Instead, they employ heuristics to determine when they are likely to be "close" to the solution.
One common heuristic in iterative methods is the step size: when it falls below a fixed or dynamically computed threshold, the stopping criterion is triggered. For real-world applications and reasonably well-behaved functions, this criterion usually yields very accurate approximations.
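To make the idea concrete, here is a minimal sketch of such a step-size criterion inside a hand-rolled Newton iteration (this only illustrates the heuristic, not fsolve's actual trust-region algorithm; the function and parameter names are mine):

```matlab
function x = newton_sketch(f, J, x, tolX, maxIter)
% Minimal Newton iteration stopped by a relative step-size heuristic.
% f: residual function, J: its Jacobian, x: initial guess.
    for k = 1:maxIter
        dx = -J(x) \ f(x);                  % Newton step
        x  = x + dx;
        if norm(dx) <= tolX * (1 + norm(x)) % step "small enough":
            return                          % assume we have converged --
        end                                 % no guarantee f(x) is small!
    end
end
```

Note that the test looks only at the step dx, never at f(x) itself; that is exactly why tightening a step-size tolerance does not translate into a guaranteed bound on the residual.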