I'm trying to wrap my head around how to minimize the total computational error (defined as the sum of the bounds on the truncation and rounding errors) when approximating the second derivative of a sufficiently differentiable function $f : \mathbb{R} \rightarrow \mathbb{R}$ by the finite difference
$$ f''(x) \approx \frac{f(x + h) - 2f(x) + f(x-h)}{h^2} $$
I know that, by Taylor's Theorem
$$ f(x + h) = f(x) + f'(x)h + f''(x)\frac{h^2}{2} + f'''(\theta)\frac{h^3}{6} $$
for some $\theta \in [x, x + h]$.
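Carrying the expansion one order further, and doing the same at $x - h$,
$$ f(x \pm h) = f(x) \pm f'(x)h + f''(x)\frac{h^2}{2} \pm f'''(x)\frac{h^3}{6} + f^{(4)}(\theta_\pm)\frac{h^4}{24}, $$
adding the two expansions cancels the odd-order terms, so that
$$ \frac{f(x + h) - 2f(x) + f(x - h)}{h^2} = f''(x) + \frac{f^{(4)}(\theta_+) + f^{(4)}(\theta_-)}{24}h^2, $$
i.e. the truncation error of the difference quotient is $O(h^2)$.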
How would you determine the value of $h$ for which a bound on the total computational error is minimized?
You know that evaluating the function in floating point returns some number $f(x+kh)(1+\varepsilon_k)$, where $|\varepsilon_k|$ is bounded by a small multiple $m$ of the machine precision $\mu$, i.e. $|\varepsilon_k| \le m\mu$.
Thus the total error of your expression is bounded by something like (discounting other minor contributions) $$ \frac{4|f(x)|m\mu}{h^2}+\frac2{4!}|f^{(4)}(x)|h^2. $$ Balancing the two terms, $\frac{A}{h^2}+Bh^2$ is minimized at $h=(A/B)^{1/4}$, which tells us that in not too exceptional situations the minimal computational error is found, magnitude-wise, at $h\sim\sqrt[4]\mu$.
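A minimal Python sketch confirms this scaling in double precision (the choice of $\sin$ and the point $x_0=1$ are mine, just for illustration):

```python
import numpy as np

def central_second(f, x, h):
    """Central difference approximation of f''(x) with step h."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

x0 = 1.0
exact = -np.sin(x0)  # (sin)'' = -sin

# Scan h over several decades and record the absolute error.
for k in range(1, 9):
    h = 10.0 ** (-k)
    err = abs(central_second(np.sin, x0, h) - exact)
    print(f"h = 1e-{k}: error = {err:.3e}")

# In double precision (mu ~ 1e-16) the error typically bottoms out
# around h ~ 1e-4, matching the mu**(1/4) estimate.
```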
The image shows the error of the second derivative for a test function, $\sin x\exp(x^2\cos x)$: first with the central difference formula as above, with a clear minimum at $h=\sqrt[4]{10^{-16}}=10^{-4}$, and below that with the first Richardson extrapolant, which raises the truncation error from second to fourth order and is thus minimal at $h\sim\sqrt[6]\mu$, with $\sqrt[6]{10^{-15}}\approx 10^{-2.5}\approx 0.003$.
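For anyone who wants to reproduce those curves, here is a sketch (the evaluation point $x_0=1$ and the $h$ grid are my choices; sympy supplies the exact second derivative for reference):

```python
import numpy as np
import sympy as sp

# Test function from the answer: sin(x) * exp(x^2 * cos(x)).
xs = sp.symbols('x')
expr = sp.sin(xs) * sp.exp(xs**2 * sp.cos(xs))
exact_d2 = sp.lambdify(xs, sp.diff(expr, xs, 2), 'numpy')

def f(x):
    return np.sin(x) * np.exp(x**2 * np.cos(x))

def d2_central(x, h):
    # Second-order central difference for f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def d2_richardson(x, h):
    # First Richardson extrapolant: combining steps h and h/2
    # cancels the h^2 term of the truncation error, leaving O(h^4).
    return (4.0 * d2_central(x, h / 2) - d2_central(x, h)) / 3.0

x0 = 1.0
truth = exact_d2(x0)
hs = np.logspace(-8, 0, 81)
err_central = [abs(d2_central(x0, h) - truth) for h in hs]
err_rich = [abs(d2_richardson(x0, h) - truth) for h in hs]

# The central-difference error should bottom out roughly at h ~ 1e-4
# and the extrapolated one roughly at h ~ 3e-3, as in the plot.
print(f"central difference: best h ~ {hs[int(np.argmin(err_central))]:.1e}")
print(f"Richardson:         best h ~ {hs[int(np.argmin(err_rich))]:.1e}")
```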