Given an integral of a continuous function on a (small) closed interval, one can approximate its value using Simpson's rule or another quadrature rule. But what happens when the integrand is not defined at an endpoint of the interval, even though the improper integral converges?
Specifically, I'm interested in finding an approximation for the following integral, which I don't think has a closed form: $$I = -\int_{-\alpha}^0\frac{k \left(\beta x^2+x+1\right) \log ^k\left(\frac{1}{x+1}\right)}{x}dx$$ The integrand diverges as $x\to 0$, but the integral converges to a finite value. It is claimed (and appears to be correct) that when $\alpha \ll 1$, this integral can be approximated by the simpler form: $$I(\alpha\to 0)\approx -\int_{-\alpha}^0\frac{k \log ^k\left(\frac{1}{x+1}\right)}{x}dx \approx \alpha^k$$ Can anyone explain how this approximation is derived? It would be nice to have a method that generalizes to similar cases.
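For what it's worth, the claim checks out numerically. Here is a quick sanity check using composite Simpson's rule (the sample values $k=2$, $\beta=1$ are arbitrary choices for illustration, not from the original problem); the removable point $x=0$ is filled in by the limit of the integrand so the closed-interval rule applies:

```python
# Numerical sanity check that I ≈ α^k (sample values k = 2, β = 1 are arbitrary).
import math

def integrand(x, k, beta):
    """-k (β x² + x + 1) log^k(1/(x+1)) / x, with the removable point x = 0
    filled in by its limit -k(-1)^k x^(k-1): 1 if k == 1, else 0."""
    if x == 0.0:
        return 1.0 if k == 1 else 0.0
    return -k * (beta * x**2 + x + 1.0) * math.log(1.0 / (x + 1.0))**k / x

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

k, beta = 2, 1.0
for alpha in (1e-1, 1e-2, 1e-3):
    I = simpson(lambda x: integrand(x, k, beta), -alpha, 0.0)
    print(f"alpha={alpha:g}  I={I:.6e}  alpha^k={alpha**k:.6e}  ratio={I / alpha**k:.6f}")
```

The ratio $I/\alpha^k$ tends to $1$ as $\alpha$ shrinks, consistent with the claimed approximation.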
I now realize that this particular approximation probably has to do with the fact that: $$\log\left(\frac{1}{x+1}\right)= -x+\frac{x^2}{2}+O(x^3)$$ So that, keeping only the leading term: $$I(\alpha\to 0)\approx -\int_{-\alpha}^0\frac{k \log ^k\left(\frac{1}{x+1}\right)}{x}dx \approx -\int_{-\alpha}^0\frac{k (-x)^k}{x}dx=\alpha^k$$ Still, it would be nice to see a more rigorous method.
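One way this sketch might be made precise (for fixed $k$; this is only an outline): on $(-\alpha,0)$ write $$\log\left(\frac{1}{x+1}\right) = -x\,\bigl(1+\rho(x)\bigr), \qquad \rho(x)=-\frac{x}{2}+\frac{x^2}{3}-\cdots,$$ where $|\rho(x)|\le |x|$ for $|x|\le \tfrac12$. Then $$\log^k\left(\frac{1}{x+1}\right)=(-x)^k\bigl(1+O(|x|)\bigr), \qquad \beta x^2+x+1 = 1+O(|x|),$$ uniformly on $(-\alpha,0)$, so the full integrand equals $-\frac{k(-x)^k}{x}\bigl(1+O(|x|)\bigr)$, and integrating term by term gives $$I = \alpha^k\bigl(1+O(\alpha)\bigr), \qquad \alpha\to 0,$$ which both justifies the approximation and quantifies its error.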