Linear approximation of a function around a point via its Taylor series requires the first-order derivative to be nonzero; otherwise you only recover the value of the function at that point. Moreover, the approximation is accurate only extremely close to the chosen point. Is there a better way to linearly approximate such a function?
For example, using the Taylor series to build a linear approximation of $f(x) = (1+x^2)^{1/2}$ around $x=0$ yields $1$, which is accurate only extremely close to $x=0$. It is also not really a first-order approximation, since it is constant.
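(To spell out why the linear term vanishes in this example: the derivative is zero at the expansion point, so the tangent line is horizontal.)
$$f'(x)=\frac{x}{\sqrt{1+x^2}}, \qquad f'(0)=0, \qquad L(x)=f(0)+f'(0)\,x=1$$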
Is there another way to approximate it, let's say, if I know the lower and upper bounds on $x$?
For simplicity I will use your function as an example. The pointwise squared error of a linear approximation can be defined as $$dE=\big(\sqrt{1+x^2}-ax-b \big)^2$$ If you integrate this over your domain (let's say from $-x_1$ to $x_1$) you get the total error over the domain $$E=\int_{-x_1}^{x_1}dE=\int_{-x_1}^{x_1}\big(\sqrt{1+x^2}-ax-b \big)^2dx$$ Carrying out the integration gives the closed form $$E= \frac23(1+a^2)x_1^3+2x_1\bigg(1+b^2-b\sqrt{1+x_1^2}\bigg)-2b\operatorname{arcsinh}(x_1)$$ Now you can minimize the error by taking the partial derivatives with respect to $a$ and $b$ and setting them to zero. Since the cross term in $a$ integrates to zero (its integrand is odd), this yields $a=0$ and $$b=\frac{x_1\sqrt{1+x_1^2}+\operatorname{arcsinh}(x_1)}{2x_1}$$
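Here is a minimal numerical sketch of this least-squares fit (the function names `total_error` and `best_fit` are my own; $x_1$, $a$, $b$ follow the notation above). Setting $\partial E/\partial a=0$ gives $a=0$, and $\partial E/\partial b=0$ gives $b=\frac{x_1\sqrt{1+x_1^2}+\operatorname{arcsinh}(x_1)}{2x_1}$; the midpoint-rule integration confirms this choice beats nearby $(a,b)$ pairs.

```python
import math

def total_error(a, b, x1, n=10000):
    # Midpoint-rule approximation of the total squared error
    #   E = integral over [-x1, x1] of (sqrt(1+x^2) - a*x - b)^2 dx
    h = 2 * x1 / n
    return sum(
        (math.sqrt(1 + x * x) - a * x - b) ** 2 * h
        for x in (-x1 + (i + 0.5) * h for i in range(n))
    )

def best_fit(x1):
    # Minimizers from setting the partial derivatives of E to zero:
    # dE/da = (4/3) a x1^3 = 0            ->  a = 0
    # dE/db = 4 b x1 - 2 x1 sqrt(1+x1^2)
    #         - 2 arcsinh(x1) = 0         ->  b as below
    a = 0.0
    b = (x1 * math.sqrt(1 + x1 * x1) + math.asinh(x1)) / (2 * x1)
    return a, b

if __name__ == "__main__":
    a, b = best_fit(1.0)
    print(f"a = {a}, b = {b:.5f}")          # b ≈ 1.14779 for x1 = 1
    print(f"E_min ≈ {total_error(a, b, 1.0):.6f}")
```

For $x_1=1$ the best line is the constant $b\approx 1.148$, noticeably better over the whole interval than the Taylor value $1$.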