How is error in numerical methods defined (Big O notation, O(h))?


I am trying to understand how the error of a numerical method is measured in O(h). In my understanding, big O notation depends on the dominant term of an expression; for example, for f(x) = x^2 + x, the big O notation is O(x^2), since x^2 dominates as x grows.

But how is this related to numerical methods? Every numerical method I have seen has a big O notation in terms of the step size, for example O(h) or O(h^2). How are these measured?
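For concreteness, here is a small numerical experiment I can imagine running (a sketch of my own, not from any particular textbook): compare a forward-difference derivative, whose error is said to be O(h), with a central-difference derivative, whose error is said to be O(h^2), and watch how the measured error shrinks as h is reduced.

```python
import math

def forward_diff(f, x, h):
    # Forward-difference approximation of f'(x); truncation error is O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Central-difference approximation of f'(x); truncation error is O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
exact = math.cos(x)  # exact derivative of sin at x

for h in [1e-1, 1e-2, 1e-3]:
    err_fwd = abs(forward_diff(math.sin, x, h) - exact)
    err_cen = abs(central_diff(math.sin, x, h) - exact)
    print(f"h={h:.0e}  forward error={err_fwd:.2e}  central error={err_cen:.2e}")
```

Shrinking h by a factor of 10 should shrink the forward-difference error by roughly a factor of 10 (consistent with O(h)) and the central-difference error by roughly a factor of 100 (consistent with O(h^2)) — is this the kind of measurement the notation refers to?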

Thank you.