Integral involving power, exponential and confluent hypergeometric function


I am seeking a solution for the following integral: \begin{equation} \int\frac{e^{-\beta t}}{t}\,U(a,b,t) \,\mathrm{d}t \end{equation} where $0<\beta<1$, $a<1$, $b<2$, and $U(a,b,t)$ is the confluent hypergeometric function of the second kind. I have already solved this integral for $a=0,-1,-2,\dots$.
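As a quick numerical illustration of the simplest of those special cases (the parameter values below are arbitrary test choices, not values from the problem): for $a=0$ one has $U(0,b,t)=1$, so the integrand reduces to $e^{-\beta t}/t$, whose antiderivative is the exponential integral $\operatorname{Ei}(-\beta t)$. A minimal sketch with mpmath:

```python
# Sanity check of the a = 0 special case: U(0, b, t) = 1, so the
# antiderivative of e^{-beta*t}/t is the exponential integral Ei(-beta*t).
# beta, b, t below are arbitrary test values (assumptions).
from mpmath import mp, mpf, hyperu, ei, exp, diff

mp.dps = 30  # working precision in decimal digits

beta, b, t = mpf('0.4'), mpf('1.5'), mpf('2.3')

# U(0, b, t) = 1 for any b and t
assert abs(hyperu(0, b, t) - 1) < mpf('1e-20')

# d/dt Ei(-beta*t) = e^{-beta*t}/t, verified by numerical differentiation
lhs = diff(lambda x: ei(-beta * x), t)
rhs = exp(-beta * t) / t
assert abs(lhs - rhs) < mpf('1e-15')
```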


Motivation for asking this question:

I am trying to compute the following Cauchy principal value integral \begin{equation} \lim_{\varepsilon\to 0^{+}}\int_{\varepsilon}^{\infty}\frac{1}{t}\left(\frac{e^{-\beta_{1} t}}{\Gamma(\alpha_{1})}U(1-\alpha_{1},2-\alpha_{3},\beta_{3}t) - \frac{e^{-\beta_{2} t}}{\Gamma(\alpha_{2})}U(1-\alpha_{2},2-\alpha_{3},\beta_{3}t) \right)\mathrm{d}t, \end{equation} where $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}>0$, $\alpha_{3}=\alpha_{1}+\alpha_{2}$, and $\beta_{3}=\beta_{1}+\beta_{2}$. I have already solved this integral for the following increasingly general special cases:

$\qquad(1).\ \ $ $\alpha_{1}=\alpha_{2}$ and $\beta_{1}=\beta_{2}$ (trivial case).

$\qquad(2).\ \ $ $\alpha_{1}=\alpha_{2}=1$.

$\qquad(3).\ \ $ $\alpha_{1}\in\mathbb{N}^{+}$ and $\alpha_{2}\in\mathbb{N}^{+}$.

My ultimate goal is to find the solution for $\alpha_{1},\alpha_{2}\in\mathbb{R}^{+}$.
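For what it is worth, special case (2) can be checked numerically: with $\alpha_{1}=\alpha_{2}=1$ the $U$ factors reduce to $1$, and the integral collapses to the Frullani integral $\int_{0}^{\infty}(e^{-\beta_{1}t}-e^{-\beta_{2}t})\,t^{-1}\,\mathrm{d}t=\log(\beta_{2}/\beta_{1})$. A minimal mpmath sketch, with $\beta_{1},\beta_{2}$ chosen arbitrarily:

```python
# Special case alpha1 = alpha2 = 1: U(0, *, *) = 1, so the integrand collapses
# to a Frullani integral whose value is log(b2/b1). b1, b2 are arbitrary
# test values (assumptions).
from mpmath import mp, mpf, quad, exp, log, inf

mp.dps = 25

b1, b2 = mpf('0.5'), mpf('2.0')
numeric = quad(lambda t: (exp(-b1 * t) - exp(-b2 * t)) / t, [0, 1, inf])
assert abs(numeric - log(b2 / b1)) < mpf('1e-12')
```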

There are 2 answers below.

Accepted answer:

We begin with the following Cauchy principal value integral: \begin{equation} \tag{1} I= \lim_{\varepsilon\to 0} \frac{1}{\Gamma(\alpha_{1})}\int_{\varepsilon}^{1/\varepsilon} e^{-\beta_{1}y}\,y^{-1}\,U(1-\alpha_{1},2-\alpha_{3},\beta_{3}y) \,\mathrm{d}y\\ - \frac{1}{\Gamma(\alpha_{2})}\int_{\varepsilon}^{1/\varepsilon} e^{-\beta_{2}y}\,y^{-1}\,U(1-\alpha_{2},2-\alpha_{3},\beta_{3}y) \,\mathrm{d}y\,, \end{equation} where $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}>0$, $\alpha_{3}=\alpha_{1}+\alpha_{2}$, and $\beta_{3}=\beta_{1}+\beta_{2}$. In pursuit of a closed-form expression for $I$ we attack these integrals from another direction, namely, by increasing the power component of the integrands by a small positive constant $\varepsilon$, i.e. $y^{\varepsilon-1}$, which subsequently allows the limits of integration to extend over the interval $y\in[0,\infty)$ without causing the integrals to diverge. With the substitution $t=\beta_{3}y$, the expression for $I$ is rewritten as \begin{equation} \tag{2} I= \lim_{\varepsilon\to 0}\, \frac{\beta_{3}^{-\varepsilon}}{\Gamma(\alpha_{1})}\int_{0}^{\infty} \exp\left(-\tfrac{\beta_{1}}{\beta_{3}}t\right)\,t^{\varepsilon-1}\,U(1-\alpha_{1},2-\alpha_{3},t) \,\mathrm{d}t\\ - \frac{\beta_{3}^{-\varepsilon}}{\Gamma(\alpha_{2})}\int_{0}^{\infty} \exp\left(-\tfrac{\beta_{2}}{\beta_{3}}t\right)\,t^{\varepsilon-1}\,U(1-\alpha_{2},2-\alpha_{3},t) \,\mathrm{d}t\,, \end{equation} which now puts it into the form of DLMF 13.10.7: \begin{equation} \tag{3} \int_{0}^{\infty}e^{-zt}t^{\varepsilon-1}U(a,b,t)\,\mathrm{d}t=\frac{\Gamma(\varepsilon)\Gamma(\varepsilon-b+1)}{\Gamma(a-b+1+\varepsilon)}z^{-\varepsilon} {_{2}F_{1}}\left(a,\varepsilon;a-b+1+\varepsilon;1-\tfrac{1}{z}\right), \end{equation} valid for $\Re\varepsilon>\max(\Re b-1,0)$ and $\Re z>0$. Since $\varepsilon$ will ultimately approach zero, use of this formula requires that $\alpha_{3}>1$.
Nevertheless, making use of this formula, $I$ assumes the form \begin{equation} \tag{4} I= \lim_{\varepsilon\to 0} C\Gamma(\varepsilon) \left( \frac{{_{2}F_{1}}\left(1-\alpha_{1},\varepsilon;\alpha_{2}+\varepsilon;-\tfrac{\beta_{2}}{\beta_{1}}\right)}{\Gamma(\alpha_{1})\Gamma(\alpha_{2}+\varepsilon)\beta_{1}^{\varepsilon}} - \frac{{_{2}F_{1}}\left(1-\alpha_{2},\varepsilon;\alpha_{1}+\varepsilon;-\tfrac{\beta_{1}}{\beta_{2}}\right)}{\Gamma(\alpha_{1}+\varepsilon)\Gamma(\alpha_{2})\beta_{2}^{\varepsilon}} \right), \end{equation} where $C=\Gamma(\alpha_{3}-1)$. It is clear that $\Gamma(\varepsilon)$ is the only factor that diverges as $\varepsilon\to0$. Therefore, to help simplify the limit, consider for a moment the Laurent series of the gamma function about the origin: \begin{equation} \tag{5} \Gamma(\varepsilon)=\frac{1}{\varepsilon}-\gamma+\mathcal{O}(\varepsilon)\qquad\text{for}\ |\varepsilon|<1\land\varepsilon\neq0\,. \end{equation} Since the limit of the quantity $(\cdots-\cdots)$ in Eq. $4$ is zero, the constant and $\mathcal{O}(\varepsilon)$ terms can be dropped; thus, $\Gamma(\varepsilon)$ is replaced by $1/\varepsilon$, which puts the entire limit into a $0/0$ indeterminate form. As a consequence, L'Hôpital's rule is applied, resulting in \begin{equation} \tag{6} I= \lim_{\varepsilon\to 0} \frac{\partial}{\partial\varepsilon}C \left( \frac{{_{2}F_{1}}\left(1-\alpha_{1},\varepsilon;\alpha_{2}+\varepsilon;-\tfrac{\beta_{2}}{\beta_{1}}\right)}{\Gamma(\alpha_{1})\Gamma(\alpha_{2}+\varepsilon)\beta_{1}^{\varepsilon}} - \frac{{_{2}F_{1}}\left(1-\alpha_{2},\varepsilon;\alpha_{1}+\varepsilon;-\tfrac{\beta_{1}}{\beta_{2}}\right)}{\Gamma(\alpha_{1}+\varepsilon)\Gamma(\alpha_{2})\beta_{2}^{\varepsilon}} \right). \end{equation}

For the sake of brevity, we will define the following functions: \begin{gather} f_{1}(\varepsilon) = {_{2}F_{1}}\left(1-\alpha_{1},\varepsilon;\alpha_{2}+\varepsilon;-\tfrac{\beta_{2}}{\beta_{1}}\right),\\ f_{2}(\varepsilon) = {_{2}F_{1}}\left(1-\alpha_{2},\varepsilon;\alpha_{1}+\varepsilon;-\tfrac{\beta_{1}}{\beta_{2}}\right),\\ g_{1}(\varepsilon) = \Gamma(\alpha_{1})\Gamma(\alpha_{2}+\varepsilon)\beta_{1}^{\varepsilon},\\ g_{2}(\varepsilon) = \Gamma(\alpha_{1}+\varepsilon)\Gamma(\alpha_{2})\beta_{2}^{\varepsilon}, \end{gather} and then compute the derivative in Eq. $6$ to arrive at \begin{equation} \tag{7} I= \lim_{\varepsilon\to 0}C \left( \frac{f_{1}^{\prime}(\varepsilon)}{g_{1}(\varepsilon)} - \frac{f_{1}(\varepsilon)\, g_{1}^{\prime}(\varepsilon)}{g_{1}^{2}(\varepsilon)} - \frac{f_{2}^{\prime}(\varepsilon)}{g_{2}(\varepsilon)} + \frac{f_{2}(\varepsilon)\, g_{2}^{\prime}(\varepsilon)}{g_{2}^{2}(\varepsilon)} \right). \end{equation}
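The replacement of $\Gamma(\varepsilon)$ by $1/\varepsilon$ and the subsequent application of L'Hôpital's rule can also be sanity-checked numerically: for small $\varepsilon$, $\Gamma(\varepsilon)$ times the parenthesized difference in Eq. $4$ should approach the $\varepsilon$-derivative of that difference at $0$, which is what Eq. $7$ evaluates. A sketch with mpmath, using arbitrary test values of $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}$ (assumptions, with $\alpha_{3}>1$):

```python
# Check that Gamma(eps) * (f1/g1 - f2/g2) at small eps approaches the
# eps-derivative of (f1/g1 - f2/g2) at 0, as used via L'Hopital's rule.
# a1, a2, b1, b2 are arbitrary test values (assumptions) with a1 + a2 > 1.
from mpmath import mp, mpf, gamma, hyp2f1, diff

mp.dps = 30

a1, a2, b1, b2 = mpf('1.3'), mpf('0.8'), mpf('1.0'), mpf('1.25')

def bracket(e):
    """The parenthesized difference appearing in Eqs. 4 and 6."""
    f1 = hyp2f1(1 - a1, e, a2 + e, -b2 / b1)
    f2 = hyp2f1(1 - a2, e, a1 + e, -b1 / b2)
    g1 = gamma(a1) * gamma(a2 + e) * b1**e
    g2 = gamma(a1 + e) * gamma(a2) * b2**e
    return f1 / g1 - f2 / g2

eps = mpf('1e-6')
via_gamma = gamma(eps) * bracket(eps)   # Eq. 4 form at small eps
via_lhopital = diff(bracket, 0)         # derivative at 0, as in Eqs. 6-7
assert abs(via_gamma - via_lhopital) < mpf('1e-4')
```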

At this point all that is left to do is find the limits of the functions $f_{1}$, $f_{2}$, $g_{1}$, and $g_{2}$ and their corresponding derivatives. Starting with the function $g(\varepsilon)=\Gamma(a)\Gamma(b+\varepsilon)c^{\varepsilon}$, which is of the same form as $g_{1}$ and $g_{2}$, it is straightforward to show that \begin{equation} \tag{8} \lim_{\varepsilon\to0}g(\varepsilon)= \Gamma(a)\Gamma(b), \end{equation} and \begin{equation} \tag{9} \lim_{\varepsilon\to0}g^{\prime}(\varepsilon)= \Gamma(a)\Gamma(b)(\psi(b)+\log c), \end{equation} where $\psi(z)$ is the digamma function. Next, we consider the function $f(\varepsilon)={_{2}F_{1}}\left(1-a,\varepsilon;b+\varepsilon;-c/d\right)$, which is of the same form as $f_{1}$ and $f_{2}$. Writing $f$ using its power series representation we find \begin{equation} \tag{10} \lim_{\varepsilon\to0} f(\varepsilon)= \lim_{\varepsilon\to0} 1+\sum_{k=1}^{\infty}\frac{(1-a)_{k}(\varepsilon)_{k}}{(b+\varepsilon)_{k}\,k!}\,\left(-\frac{c}{d}\right)^{k}=1\,. \end{equation} For $\lim_{\varepsilon\to0}f^{\prime}(\varepsilon)$, we first compute the derivative of each term, yielding \begin{equation} \tag{11} \frac{\partial}{\partial\varepsilon} \frac{\theta_{k}\,(\varepsilon)_{k}}{(b+\varepsilon)_{k}}= \frac{\theta_{k}\,(\varepsilon)_{k}}{(b+\varepsilon)_{k}}(\psi(b+\varepsilon)-\psi(\varepsilon)+\psi(k+\varepsilon)-\psi(b+k+\varepsilon))\,, \end{equation} where $\theta_{k}$ is the constant part of each term w.r.t. $\varepsilon$. In the limit, all of the $\psi(\cdots+\varepsilon)$ terms can be dropped since $\lim_{\varepsilon\to0}(\varepsilon)_{k}=0$, leaving \begin{equation} \tag{12} \lim_{\varepsilon\to0} \frac{\partial}{\partial\varepsilon} \frac{\theta_{k}\,(\varepsilon)_{k}}{(b+\varepsilon)_{k}}= -\theta_{k}\frac{\Gamma(k)}{(b)_{k}}\lim_{\varepsilon\to0}\frac{\psi(\varepsilon)}{\Gamma(\varepsilon)}\,. 
\end{equation} Now consider the following asymptotically equivalent forms for $\varepsilon\approx0$: \begin{gather} \tag{13} \psi(\varepsilon)=-\frac{1}{\varepsilon}-\gamma+\mathcal{O}(\varepsilon)\,,\\ \tag{14} \frac{1}{\Gamma(\varepsilon)}=\varepsilon+\mathcal{O}(\varepsilon^{2})\,. \end{gather} From these limiting forms it is clear that the remaining limit in Eq. $12$ is equal to $-1$, so that \begin{equation} \tag{15} \lim_{\varepsilon\to 0} f^{\prime}(\varepsilon) = \sum_{k=1}^{\infty}\frac{(1-a)_{k}}{k!}\left(-\frac{c}{d}\right)^{k}\frac{\Gamma(k)}{(b)_{k}}\,. \end{equation} Shifting the index down to start at $k=0$ and then simplifying yields \begin{equation} \tag{16} \lim_{\varepsilon\to 0} f^{\prime}(\varepsilon) = \frac{(a-1)\,c}{b\,d} \sum_{k=0}^{\infty}\frac{(2-a)_{k}(1)_{k}(1)_{k}}{(1+b)_{k}(2)_{k}}\,\frac{1}{k!} \left(-\frac{c}{d}\right)^{k}, \end{equation} which is now in the form of the generalized hypergeometric function. Thus \begin{equation} \tag{17} \lim_{\varepsilon\to 0} f^{\prime}(\varepsilon) = \frac{(a-1)\,c}{b\,d} {_{3}F_{2}}\left(2-a,1,1;1+b,2;-\tfrac{c}{d}\right), \end{equation} where ${_{p}}F_{q}(a_{1},\dots,a_{p};b_{1},\dots,b_{q};z)$ is the generalized hypergeometric function. It is important to note that if $2-a\neq 0,-1,-2,\dots$, the series in Eq. $16$ only converges for $|c/d|<1$; therefore, for $|c/d|\geq 1$ the result of Eq. $17$ is defined by the analytic continuation of ${_{3}F_{2}}(\cdots;z)$ w.r.t. $z$. With these results at hand, the limit of each component in Eq. $7$ is solved, yielding the final solution for $I$ when $\alpha_{3}>1$: \begin{multline} \tag{18} I= C\biggl( \log\tfrac{\beta_{2}}{\beta_{1}}+\psi(\alpha_{1})-\psi(\alpha_{2}) + \tfrac{(\alpha_{1}-1)\,\beta_{2}}{\alpha_{2}\,\beta_{1}}{_{3}F_{2}}\left(2-\alpha_{1},1,1;1+\alpha_{2},2;-\tfrac{\beta_{2}}{\beta_{1}}\right)\\ - \tfrac{(\alpha_{2}-1)\,\beta_{1}}{\alpha_{1}\,\beta_{2}}{_{3}F_{2}}\left(2-\alpha_{2},1,1;1+\alpha_{1},2;-\tfrac{\beta_{1}}{\beta_{2}}\right) \biggr), \end{multline} where \begin{equation} C = \frac{\Gamma(\alpha_{3}-1)} {\Gamma(\alpha_{1})\Gamma(\alpha_{2})}. \end{equation}
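The closed form above can be compared against direct numerical quadrature of the original integral; the parameter values below are arbitrary test choices (assumptions, with $\alpha_{3}>1$), and the check is a sketch rather than a proof:

```python
# Compare the closed form for I against direct numerical integration of the
# original integral. a1, a2, b1, b2 are arbitrary test values (assumptions)
# with a1 + a2 > 1.
from mpmath import (mp, mpf, quad, gamma, digamma, hyperu, hyp3f2,
                    exp, log, inf)

mp.dps = 25

a1, a2, b1, b2 = mpf('1.3'), mpf('0.8'), mpf('1.0'), mpf('1.25')
a3, b3 = a1 + a2, b1 + b2

def integrand(t):
    # The two terms individually diverge like 1/t at the origin, but their
    # leading constants cancel, so the combined integrand is integrable.
    return (exp(-b1 * t) / gamma(a1) * hyperu(1 - a1, 2 - a3, b3 * t)
          - exp(-b2 * t) / gamma(a2) * hyperu(1 - a2, 2 - a3, b3 * t)) / t

numeric = quad(integrand, [0, 1, inf])

# Closed form for I in terms of digamma and 3F2 functions
C = gamma(a3 - 1) / (gamma(a1) * gamma(a2))
closed = C * (log(b2 / b1) + digamma(a1) - digamma(a2)
              + (a1 - 1) * b2 / (a2 * b1)
                * hyp3f2(2 - a1, 1, 1, 1 + a2, 2, -b2 / b1)
              - (a2 - 1) * b1 / (a1 * b2)
                * hyp3f2(2 - a2, 1, 1, 1 + a1, 2, -b1 / b2))

assert abs(numeric - closed) < mpf('1e-8')
```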

Second answer:

If you consider this not to be a relevant answer, just tell me and I will delete it. Of course, constructive remarks are always welcome.
Be aware that $U()$ has a lot of special cases (computational branch points) that I have ignored in favour of what I consider typical. If you have special cases of interest that make this blow up, I will try to look into them.
The integral of interest has the form ${\displaystyle \int_{0}^{\infty}}t^{s-1}e^{-\alpha t}U\left(a,b,\lambda\cdot t\right)dt$.
Using the Mellin-transform interpretation of http://dlmf.nist.gov/13.10#E7,
${\displaystyle \int_{0}^{\infty}}t^{s-1}e^{-z\cdot t}U\left(a,c,t\right)dt=\Gamma\left(s\right)\cdot\Gamma\left(s-c+1\right)\cdot z^{-s}\cdot\,_{2}F_{1}\left(a,s;a+s-c+1;1-\frac{1}{z}\right)$
requiring $\Re\left(z\right)>0$ and $\Re\left(s\right)>\max\left(\Re\left(c\right)-1,0\right)$.
We can examine the terms of the right-hand side and make sure the power-series expansion matches up, term by term, after inversion. Failure of the inverse Mellin transform implies that when $\Re\left(s\right)\leq0$, either the closure of the contour around the left-hand poles fails because of an essential or logarithmic singularity, or the obvious pole/zero cancellation causes a problem. But let's just move the goal posts a little using http://dlmf.nist.gov/15.5.E15 and some substitution:
$_{2}F_{1}\left(a,s;a+s-c+1;1-\frac{1}{z}\right)=\left(1-\frac{b}{c}\right)\cdot\left(1-\frac{1}{z}\right)\cdot\,_{2}F_{1}\left(a,s+1;a+s-c+2;1-\frac{1}{z}\right)-\frac{1}{z}\cdot\,_{2}F_{1}\left(a,s+1;a+s-c+1;1-\frac{1}{z}\right)$
$\Gamma\left(s\right)\cdot\Gamma\left(s-c+1\right)\Rightarrow\frac{\Gamma\left(s+1\right)}{s}\cdot\Gamma\left(s-c+1\right)$
This pulls out $\frac{1}{s}$ as a simple pole, and the other terms are well behaved. In fact, we can recognize the alteration as almost a simple integration.
Now I could probably pull this back to two integrals of the form $\int e^{-z\cdot t}U(..)\,dt$ and do a term-by-term check; but given the complexity of the series expansion of $U()$, you probably won't like it, and I would put it in a link. An alternative is to rearrange the original to $U()=M()\cdot...+M()\cdot...$ using http://dlmf.nist.gov/13.2.E42, but that has the restriction that $c$ not be an integer.
Yet another alternative is to use the Cauchy principal value, which, as I recall, deals with this sort of thing for the Casimir effect. I think I can find an explanation of its usage in an MAA article.