Asymptotic expansion of integral involving Bessel Function and Logarithm


I would like to obtain the asymptotic expression for $b \rightarrow \infty$ of the following integral

$$ \tag 1 I = \int_0^1 dx \ln(x) \frac{x}{x^4+a^2} J_0(bx), $$

where $a$ is a real constant and $J_0$ is the Bessel function of order 0. Numerical experiments suggest that the integral is dominated by small $x$, so I believe that replacing the upper limit $1$ by $\infty$ should be a valid approximation.
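A quick numerical check supports this. The sketch below (using scipy, with the arbitrarily chosen test values $a=1$, $b=50$; the truncation point $x=20$ stands in for $\infty$, since the integrand decays like $\ln(x)/x^3$ times a bounded oscillation) compares the integral over $[0,1]$ with the extended one:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

a, b = 1.0, 50.0  # arbitrary test values, not from the original question

def integrand(x):
    # x*ln(x)/(x^4 + a^2) * J0(b*x); the x*ln(x) factor vanishes as x -> 0+
    return np.log(x) * x / (x**4 + a**2) * j0(b * x)

I_01, _ = quad(integrand, 0.0, 1.0, limit=2000)
# Truncating the "infinite" upper limit at x = 20 is more than enough here:
# the oscillatory tail beyond that point is negligible.
I_0inf, _ = quad(integrand, 0.0, 20.0, limit=5000)

rel_diff = abs(I_0inf - I_01) / abs(I_01)
print(I_01, I_0inf, rel_diff)
```

For these values the two integrals agree to within a few percent, consistent with the small-$x$ dominance claim.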

I tried extending the integral to the complex plane, and also differentiating with respect to $b$ to try to find a differential equation for $I$, but none of these approaches seems to work.

Any idea about how to tackle this kind of problem?


Best answer

EDIT: Thanks to a helpful comment by @Maxim, the answer has been corrected to fix a typo and to take into account the contribution of the region $x>1$ to the asymptotic behavior of the integral for $b\to\infty$.

Decompose the integral as \begin{align} I&=I_0-I_1\\ &=\int_0^\infty \ln(x) \frac{x}{x^4+a^2} J_0(bx)\,dx-\int_1^\infty \ln(x) \frac{x}{x^4+a^2} J_0(bx)\,dx. \end{align} As remarked in the OP, it will be shown that the main asymptotic behavior of the integral is given by $I_0$. Writing \begin{align} I_0&=\frac{1}{2}\int_0^\infty\ln(x^2) \frac{x}{x^4+a^2} J_0(bx)\,dx, \end{align} we can use the result of Gabutti and Lepora, which states that, under rather mild conditions, the Hankel transform of an even function, \begin{equation} \mathcal{H}^{(\nu)}\left[\omega,f\right]=\int_0^\infty J_\nu(\omega x)f(x^2)x^{\nu+1}\,dx, \end{equation} can be written as \begin{equation} \mathcal{H}^{(\nu)}\left[\omega,f\right]=\frac{\omega^\nu}{2^{\nu+1}}\int_0^\infty \exp\left( -\frac{\omega^2}{4t} \right)F(t)t^{-\nu-1}\,dt \end{equation} when \begin{equation} f(s)=\int_0^\infty e^{-st}F(t)\,dt, \end{equation} i.e. when $F(\cdot)$ is the inverse Laplace transform of $f(\cdot)$.
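As a sanity check of the Gabutti–Lepora identity, here is a sketch instance with $\nu=0$ and the test pair $f(s)=1/(s+1)$, $F(t)=e^{-t}$ (my choice, not from the answer). For this pair the left-hand side is the classical Hankel transform $\int_0^\infty x J_0(\omega x)/(x^2+1)\,dx$, known in closed form to equal $K_0(\omega)$, so the right-hand side should reproduce $K_0(\omega)$ numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

omega = 2.0  # arbitrary test value

# f(s) = 1/(s+1) has inverse Laplace transform F(t) = exp(-t).
def rhs_integrand(t):
    # exp(-omega^2/(4t)) * F(t) / t ; vanishes rapidly as t -> 0+
    return np.exp(-omega**2 / (4.0 * t)) * np.exp(-t) / t

rhs, _ = quad(rhs_integrand, 0.0, np.inf)
rhs *= 0.5  # omega^nu / 2^(nu+1) = 1/2 for nu = 0
print(rhs, k0(omega))  # the two values should agree
```

Both sides agree to high accuracy, which is exactly the statement of the identity for this $f$.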

Here, with $\nu=0$, $\omega=b$, we take $f(s)=\frac{\ln s}{s^2+a^2}$ to write \begin{equation} I_0= \frac{1}{4a}\int_0^\infty \exp\left( -\frac{b^2}{4t} \right)F(t)\frac{dt}{t} \end{equation} where (see, for example, Erdélyi, Tables of Integral Transforms I, 5.7.6 p. 251) \begin{equation} F(t)=\cos (at)\operatorname{Si}(at)+\sin (at)\left[\ln a-\operatorname{Ci}(at)\right]. \end{equation} When $b\to \infty$, the main contribution to the integral comes from large values of $t$, where (DLMF) \begin{equation} F(t)\sim \frac{\pi}{2}\cos(at)+\sin(at)\ln a-\frac{1}{at}\left( 1-\frac{2!}{a^2t^2}+\frac{4!}{a^4t^4}-\frac{6!}{a^6t^6}+\cdots \right). \end{equation} This expansion can be integrated term by term. For the first two terms, \begin{align} K&=\int_0^\infty \exp\left( -\frac{b^2}{4t} \right)\exp(iat)\frac{dt}{t}\\ &=\int_0^\infty \exp\left( -\frac{b}{4}\left(u-\frac{4ia}{u} \right) \right)\frac{du}{u}\\ &=2K_0(b\sqrt{ a}\,e^{-i\pi/4})\\ &=2\operatorname{ker}(b\sqrt{a})-2i\operatorname{kei}(b\sqrt{a}), \end{align} where an integral representation of the modified Bessel function was used (G&R 8.432.7), together with the Kelvin functions $\operatorname{ker}$ and $\operatorname{kei}$ (G&R 8.567.2). The saddle point method shows that this term decreases exponentially when $b\sqrt{a}\gg1$.
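The identity $K=2\operatorname{ker}(b\sqrt{a})-2i\operatorname{kei}(b\sqrt{a})$ can itself be checked numerically. A sketch (arbitrarily chosen $a=1$, $b=3$, my test values): the integral converges only conditionally at the large-$t$ end, so the real and imaginary parts are computed with scipy's Fourier-weighted quadrature (QUADPACK's QAWF routine), which is designed for such oscillatory tails:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ker, kei

a, b = 1.0, 3.0  # arbitrary test values
z = b * np.sqrt(a)

def damped(t):
    # exp(-b^2/(4t))/t ; underflows cleanly to 0 as t -> 0+
    return np.exp(-b**2 / (4.0 * t)) / t

# K = int_0^inf exp(-b^2/(4t)) exp(i a t) dt/t, split into real and imaginary
# parts. weight='cos'/'sin' with an infinite upper limit invokes QAWF.
re_K, _ = quad(damped, 1e-12, np.inf, weight='cos', wvar=a)
im_K, _ = quad(damped, 1e-12, np.inf, weight='sin', wvar=a)
print(re_K, 2 * ker(z))   # real part should match  2 ker(b sqrt(a))
print(im_K, -2 * kei(z))  # imaginary part should match -2 kei(b sqrt(a))
```

The match confirms both the saddle-point identification $K=2K_0(b\sqrt{a}\,e^{-i\pi/4})$ and the Kelvin-function decomposition.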

The contributions of the other terms can be evaluated using \begin{align} Q_n&=\int_0^\infty \exp\left( -\frac{b^2}{4t} \right)\frac{dt}{t^{n+1}}\\ &=\int_0^\infty \exp\left( -\frac{b^2}{4}u \right)u^{n-1}\,du\\ &=\Gamma(n)\left( \frac{2}{b} \right)^{2n}. \end{align} We thus obtain \begin{equation} I_0\sim \frac{1}{4a}\left( \pi\operatorname{ker}(b\sqrt{a})-2\ln (a)\operatorname{kei}(b\sqrt{a})-\frac{(0!)^22^2}{ab^2}+\frac{(2!)^22^6}{a^3b^6}-\frac{(4!)^22^{10}}{a^5b^{10}}+\cdots \right). \end{equation} We can also use the asymptotic approximations for the Kelvin functions (G&R 8.566), \begin{align} \operatorname{ker}(z)&=\sqrt{\frac{\pi}{2z}}e^{\alpha(-z)}\cos(\beta(-z))\\ \operatorname{kei}(z)&=\sqrt{\frac{\pi}{2z}}e^{\alpha(-z)}\sin(\beta(-z))\\ \alpha(z)&\sim \frac{z}{\sqrt{2}}+\frac{1}{8z\sqrt{2}}+\ldots\\ \beta(z)&\sim \frac{z}{\sqrt{2}}-\frac{\pi}{8}-\frac{1}{8z\sqrt{2}}+\ldots \end{align} For sufficiently large values of $b\sqrt{a}$, the contribution of the Kelvin functions can be neglected, leaving \begin{equation} I_0=-\frac{1}{a^2b^2}+O\left(\frac1{b^6}\right). \end{equation} To evaluate $I_1$, one integrates by parts twice, remarking that $\int xJ_0( bx)\,dx=b^{-1}xJ_1(bx)$ and $\int J_1( bx)\,dx=-b^{-1}J_0(bx)$. One obtains $$I_1=-\frac{J_0(b)}{b^2(1+a^2)}+O\left(\frac1{b^3}\right) $$ and thus $$ I=-\frac{1}{a^2b^2}+\frac{J_0(b)}{b^2(1+a^2)}+O\left(\frac1{b^3}\right).$$ Keeping the first term only, we find $\ln(-I)\simeq -2\ln(b)-2\ln(a)$. With $b=10^k$, this explains the result proposed by @Claude Leibovici: $\gamma=-\ln(100)$ (and also the order of magnitude of $\alpha$ and $\beta$, through a linear regression over the values $1<a<10$).
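The final expansion can be cross-checked numerically. A sketch (arbitrarily chosen $a=2$, $b=200$; the 10% tolerance in the comparison hedges the unknown constant in the $O(b^{-3})$ remainder), which also verifies the closed form of $Q_n$ for $n=1,2$:

```python
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.special import j0

a, b = 2.0, 200.0  # arbitrary test values

# Direct numerical evaluation of the original integral over [0, 1]
I_num, _ = quad(lambda x: np.log(x) * x / (x**4 + a**2) * j0(b * x),
                0.0, 1.0, limit=2000)
# Two-term asymptotic prediction derived above
I_asym = -1.0 / (a**2 * b**2) + j0(b) / (b**2 * (1.0 + a**2))
print(I_num, I_asym)

# Side check of Q_n = Gamma(n) (2/b)^(2n), here for n = 1 and n = 2
Q_num = []
for n in (1, 2):
    val, _ = quad(lambda t, n=n: np.exp(-b**2 / (4.0 * t)) / t**(n + 1),
                  0.0, np.inf, epsabs=1e-14, epsrel=1e-10)
    Q_num.append(val)
    print(val, gamma(n) * (2.0 / b)**(2 * n))
```

For these values the direct quadrature and the asymptotic formula agree to a few percent, and $Q_1$, $Q_2$ match $\Gamma(n)(2/b)^{2n}$ to quadrature accuracy.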

Second answer

I have not been able to make much progress of potential interest; so, to (maybe) get some idea, I considered numerically the behaviour of the integral $$I_{a,k} = \int_0^1 \frac{x \log(x)}{x^4+a^2}\, J_0(10^k\,x)\,dx$$ for $a=1,2,\cdots,10$ and $k=1,2,\cdots,10$, and after a few graphical attempts decided to look at the behaviour of $\log(-I_{a,k})$, which at least seems to depend linearly on $k$ for a given $a$ (with a slope almost independent of the value of $a$).
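The linear dependence on $k$ can be reproduced for a single $a$. A sketch (my choices: $a=1$ and the pair $k=2,3$; since $I\approx -1/(a^2 b^2)$ with $b=10^k$, the slope of $\log(-I_{a,k})$ in $k$ should be close to $-2\log 10=-\log 100$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

a = 1.0  # arbitrary choice

def I(k):
    # I_{a,k} evaluated by adaptive quadrature; large limit to resolve
    # the ~b/pi oscillations of J0(b x) on [0, 1]
    b = 10.0**k
    val, _ = quad(lambda x: np.log(x) * x / (x**4 + a**2) * j0(b * x),
                  0.0, 1.0, limit=6000, epsabs=1e-10)
    return val

I2, I3 = I(2), I(3)
# Finite-difference slope of log(-I) between k = 2 and k = 3
slope = np.log(-I3) - np.log(-I2)
print(slope, -np.log(100.0))
```

The computed slope lands very close to $-\log(100)\approx-4.605$, matching the regression estimate of $\gamma$ below.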

So, I considered the complete bilinear model $$\log(-I_{a,k})=\alpha + \beta \,a + \gamma\, k +\delta \,a\,k$$ The results of the first regression $(R^2=0.999795)$ are $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ \alpha & -0.59129 & 0.21257 & \{-1.0133,-0.1693\} \\ \beta & -0.45739 & 0.03426 & \{-0.5254,-0.3894\} \\ \gamma & -4.53646 & 0.03426 & \{-4.6045,-4.4684\} \\ \delta & -0.00047 & 0.00552 & \{-0.0114,+0.0105\} \\ \end{array}$$

As expected from the previous remark, the cross parameter $\delta$ is not significant. Repeating the calculations without it gives the following results $(R^2=0.999795)$ $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ \alpha & -0.57699 & 0.13150 & \{-0.83805,-0.31593\} \\ \beta & -0.45999 & 0.01586 & \{-0.49147,-0.42850\} \\ \gamma & -4.53906 & 0.01586 & \{-4.57055,-4.50757\} \end{array}$$ which roughly halve the previous standard errors.

At this point, being very cautious, it seems that $\gamma \sim -\log(100)$ and $\beta \sim -\frac 1 {10}\log(100)$.