I have the following problem:
I am computing $\log X$ with iterative methods (a Taylor series expansion, for example); each additional iteration makes the result more precise.
My task is to use the bisection method to find the minimum number of iterations such that every value of $\log X$ on a given interval, let's say $[0.25, 3.24]$, is precise enough.
By "precise enough" I mean that, for any number in that interval, the difference between the log my function computes and the actual log (from a calculator, say) is no greater than some small tolerance I specify.
What I am struggling with is the endpoints of the bisection: which ones do I select?
Do I just pick two iteration counts, say the 2nd and the 50th, and bisect between them?
Will that give me the correct result?
For reference, I already know the number of iterations required for my first function to be precise enough, but I calculated that using a different method, so the bisection approach should of course give me the same number.
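To make the setup concrete, here is a rough Python sketch of what I have in mind. The tolerance `1e-6`, the sampled grid standing in for "every value in the interval", and the atanh-form log series in `log_taylor` are all just placeholders for illustration; my actual functions may differ. The key observation is that the worst-case error is decreasing in the iteration count, so bisection works as long as the upper endpoint is already precise enough:

```python
import math

def log_taylor(x, n):
    # n terms of ln(x) = 2 * sum_{k>=0} y^(2k+1) / (2k+1), with y = (x-1)/(x+1)
    # (one possible "iterative function"; converges for all x > 0)
    y = (x - 1) / (x + 1)
    return 2 * sum(y ** (2 * k + 1) / (2 * k + 1) for k in range(n))

def max_error(n, lo=0.25, hi=3.24, samples=1000):
    # worst-case error over a sampled grid of the interval
    # (an approximation of "every value in the interval")
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    return max(abs(log_taylor(x, n) - math.log(x)) for x in xs)

def min_iterations(tol=1e-6, lo_n=1, hi_n=200):
    # Bisection on the iteration count: max_error is decreasing in n,
    # so we look for the smallest n with max_error(n) <= tol.
    # The upper endpoint must already satisfy the tolerance, otherwise
    # bisection has nothing to find.
    assert max_error(hi_n) <= tol, "upper endpoint is not precise enough"
    while lo_n < hi_n:
        mid = (lo_n + hi_n) // 2
        if max_error(mid) <= tol:
            hi_n = mid   # mid is precise enough: answer is at or below mid
        else:
            lo_n = mid + 1  # mid is not precise enough: answer is above mid
    return lo_n
```

So my current thinking is that the lower endpoint just has to be too imprecise and the upper endpoint precise enough; is that the right way to choose them?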
Let me know what you guys think! Thanks a lot!