Let's say I have a single-variable function $f$ of a variable $x$ defined on the domain $[a, b]$. I want to find the absolute maximum of this function on this interval by just sampling the function at a bunch of points with spacing $h$; for example, with $h=1$, I would calculate $f(a), f(a+1), f(a+2), \cdots, f(b-1), f(b)$ and take the maximum of those values. If that maximum is $f(x_0)$, then I declare the maximum to be at $x = x_0$. I want to make an argument that since I sampled points with spacing $h$, my uncertainty in $x_0$ is at most $h$. However, this is clearly not true for an arbitrary $f$. With $h=1$, I could have a situation where $f(k) = 1$, $f(k + 0.3) = 1000$, and $f(k+1) = 0.6$ for some integer $k$, but the maximum I find is some $x_0$ elsewhere with, say, $f(x_0) = 10$, since I never checked $x = k + 0.3$. The distance between $x_0$, my alleged maximum, and $k + 0.3$, the actual maximum, could be arbitrarily large, so it doesn't make sense to say that the maximum is $x = x_0 \pm h$.
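To make the failure concrete, here is a small numerical sketch of this kind of counterexample (the specific function, spike location, and widths are hypothetical choices, not from the problem): a tall, narrow spike sits between grid points, so unit-spaced sampling reports an argmax far more than $h$ away from the true maximum.

```python
import numpy as np

# Hypothetical test function: a broad bump of height ~10 near x = 2,
# plus a very narrow spike of height ~1000 at x = 5.3 that unit-spaced
# sampling steps right over.
def f(x):
    base = 10.0 * np.exp(-((x - 2.0) / 1.0) ** 2)        # broad bump, peak 10 at x = 2
    spike = 1000.0 * np.exp(-((x - 5.3) / 0.01) ** 2)    # narrow spike, peak 1000 at x = 5.3
    return base + spike

a, b, h = 0.0, 10.0, 1.0
xs = np.arange(a, b + h / 2, h)      # samples a, a+1, ..., b
x0 = xs[np.argmax(f(xs))]            # sampled argmax: lands on the broad bump

true_argmax = 5.3                    # location of the actual maximum
print(x0, abs(x0 - true_argmax))     # the error is much larger than h
```

The spike contributes essentially nothing at any integer sample point, so the grid search settles on the broad bump instead, and $|x_0 - 5.3|$ exceeds $h$ by a wide margin.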
However, if I instead have a function that I know is monotonically increasing before the maximum and monotonically decreasing after it (i.e., unimodal), then I think I can say with certainty that the maximum is $x = x_0 \pm h$. I can't seem to justify this to myself mathematically, though, and I'm curious to see a rigorous mathematical argument for it. I also suspect this condition can be made more lenient and the property will still hold. So, essentially, I want to know: what specific condition on $f$ is required to justify saying that the maximum is $x = x_0 \pm h$?
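For what it's worth, the unimodal claim does hold empirically. Here is a quick sanity check (a sketch under my own assumptions: a downward parabola as the unimodal function, with a randomly placed peak `x_star`): the grid argmax never lands farther than $h$ from the true peak.

```python
import numpy as np

# Unimodal test function: increasing before x_star, decreasing after.
def f(x, x_star):
    return -(x - x_star) ** 2        # peak at x_star

a, b, h = 0.0, 10.0, 0.25
xs = np.arange(a, b + h / 2, h)      # fixed sample grid with spacing h
rng = np.random.default_rng(0)

max_dev = 0.0
for _ in range(1000):
    x_star = rng.uniform(a, b)                 # random true peak location
    x0 = xs[np.argmax(f(xs, x_star))]          # sampled argmax
    max_dev = max(max_dev, abs(x0 - x_star))   # track worst-case error

print(max_dev)    # stays at or below h in every trial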
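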
Thanks!