Define $\mathrm{B}(x,y;\alpha,\beta)$ as the generalized incomplete beta function:
$$\mathrm{B}(x,y;\alpha,\beta) = \int_x^y t^{\alpha-1}(1-t)^{\beta-1}\mathrm{d}t$$
You can assume that $x,y\in[0,1]$ and that $\alpha,\beta > 0$.
The ordinary incomplete Beta function is defined as:
$$\mathrm{B}(x;\alpha,\beta) = \int_0^x t^{\alpha-1}(1-t)^{\beta-1}\mathrm{d}t$$
Computing $\mathrm{B}(x,y;\alpha,\beta)$ as the difference $\mathrm{B}(y;\alpha,\beta) - \mathrm{B}(x;\alpha,\beta)$ can suffer severe loss of precision, because for $\alpha,\beta$ large this expression can be the difference of two very close numbers.
Moreover, computing $\mathrm{B}(x,y;\alpha,\beta)$ directly can result in very large values. So I am interested in computing the logarithm of the generalized incomplete Beta function:
$$\log \mathrm{B}(x,y;\alpha,\beta) = \log \int_x^y t^{\alpha-1}(1-t)^{\beta-1}\mathrm{d}t$$
How can I compute this numerically? Is there a C++ implementation available somewhere?
Sample numerical data: I decided to add a numerical example, so that you know the typical sizes of the numbers I am dealing with.
$$\alpha = 19.2294, \, \beta = 31.6308, \, x = 0.875002, \, y = 1.$$
As computed by Mathematica, we have (I'm not sure how accurate this is):
$$\mathrm{B}(x, y; \alpha, \beta) = 8.18326\times10^{-32}$$
Attempting to compute $\mathrm{B}(x, y; \alpha, \beta)$ as the difference $\mathrm{B}(y; \alpha, \beta) - \mathrm{B}(x; \alpha, \beta)$ results in catastrophic cancellation, because these two values are very close:
$$\mathrm{B}(y; \alpha, \beta) \approx \mathrm{B}(x; \alpha, \beta) \approx 1.64211\times10^{-15}$$
Updated 20140607: After another day of inspecting the problem and "fiddling with inequalities", I recommend the following scheme: $$ \hat{B}(x,y;a,b) = \begin{cases} B(y;a,b) - B(x;a,b) &, 0 \leq x < y \leq \frac{a-1}{a+b-2} \\ B(1-x;b,a) - B(1-y;b,a) &, \frac{a-1}{a+b-2} \leq x < y \leq 1 \\ B(a,b) - B(x;a,b) - B(1-y;b,a) &, \text{otherwise} \end{cases} $$
In the first case, $x$ and $y$ are to the left of the peak of the "bump" in the integrand. In the second case, both are to the right; substantial cancellation occurs in $B(y; a,b) - B(x; a,b)$ because both beta values include the bulk of the area of the bump, swamping the small difference between them. This is overcome by computing the areas from the right using the reflection identity below. In the last case, $x$ is to the left and $y$ is to the right of the bump, and the area between them is the complete integral minus the tails on the right and left. My numerical analysis suggests that these expressions maintain as much precision as your floating point representation can accommodate, up to the limits of your library's implementation of the complete and incomplete beta functions.
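Since the question asks for C++, here is a self-contained sketch of this scheme using only the standard library. The names (`beta_cf`, `inc_beta`, `generalized_beta`) are mine; the continued-fraction kernel follows the classic `betacf` recipe (modified Lentz's method), and in production you would likely prefer a vetted library routine such as `boost::math::ibeta`. The sketch assumes $a, b > 1$ so the mode $(a-1)/(a+b-2)$ exists.

```cpp
#include <cmath>

// Continued-fraction kernel for the incomplete beta function
// (modified Lentz's method, as in the classic betacf routine).
static double beta_cf(double a, double b, double x) {
    const double eps = 1e-15, tiny = 1e-300;
    double qab = a + b, qap = a + 1.0, qam = a - 1.0;
    double c = 1.0, d = 1.0 - qab * x / qap;
    if (std::fabs(d) < tiny) d = tiny;
    d = 1.0 / d;
    double h = d;
    for (int m = 1; m <= 300; ++m) {
        int m2 = 2 * m;
        double aa = m * (b - m) * x / ((qam + m2) * (a + m2));
        d = 1.0 + aa * d; if (std::fabs(d) < tiny) d = tiny;
        c = 1.0 + aa / c; if (std::fabs(c) < tiny) c = tiny;
        d = 1.0 / d;  h *= d * c;
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2));
        d = 1.0 + aa * d; if (std::fabs(d) < tiny) d = tiny;
        c = 1.0 + aa / c; if (std::fabs(c) < tiny) c = tiny;
        d = 1.0 / d;
        double del = d * c;  h *= del;
        if (std::fabs(del - 1.0) < eps) break;  // converged
    }
    return h;
}

// Complete beta B(a,b) via lgamma.
static double beta_complete(double a, double b) {
    return std::exp(std::lgamma(a) + std::lgamma(b) - std::lgamma(a + b));
}

// Unnormalized incomplete beta B(x;a,b) = \int_0^x t^{a-1} (1-t)^{b-1} dt.
static double inc_beta(double x, double a, double b) {
    if (x <= 0.0) return 0.0;
    if (x >= 1.0) return beta_complete(a, b);
    double bt = std::exp(a * std::log(x) + b * std::log1p(-x)); // x^a (1-x)^b
    if (x < (a + 1.0) / (a + b + 2.0))
        return bt * beta_cf(a, b, x) / a;
    // Reflect for faster continued-fraction convergence.
    return beta_complete(a, b) - bt * beta_cf(b, a, 1.0 - x) / b;
}

// The three-case scheme above; assumes a, b > 1 (unimodal integrand).
double generalized_beta(double x, double y, double a, double b) {
    double mode = (a - 1.0) / (a + b - 2.0);
    if (y <= mode) return inc_beta(y, a, b) - inc_beta(x, a, b);
    if (x >= mode) return inc_beta(1.0 - x, b, a) - inc_beta(1.0 - y, b, a);
    return beta_complete(a, b) - inc_beta(x, a, b) - inc_beta(1.0 - y, b, a);
}
```

With the sample parameters above ($x = 0.875002$, $y = 1$), both endpoints are to the right of the mode ($\approx 0.373$), so the second (reflected) case applies and the catastrophic cancellation is avoided.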
If you are serious about wanting logarithms instead of function values, use this:
$$ \log\hat{B}(x,y;a,b) = \begin{cases} \log(B(y;a,b)) + \log\left(1 - \frac{B(x;a,b)}{B(y;a,b)}\right) &, 0 \leq x < y \leq \frac{a-1}{a+b-2} \\ \log (B(1-x;b,a)) + \log\left(1 - \frac{B(1-y;b,a)}{B(1-x;b,a)}\right) &, \frac{a-1}{a+b-2} \leq x < y \leq 1 \\ \log(B(a,b)) + \log\left(1 - \frac{B(x,a,b) + B(1-y,b,a)}{B(a,b)}\right) &, \text{otherwise} \end{cases} $$
where the $\log(1 - \dots)$ terms should be evaluated using a function like log1p() from the C standard library. (Functions with this name are also available in NumPy and PHP.)
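The first two cases are both of the shape $\log A + \mathrm{log1p}(-B/A)$ for $0 < B < A$, and this can be done entirely in log space so that $A$ and $B$ themselves never need to be representable. A minimal sketch (the helper name `log_diff` is mine):

```cpp
#include <cmath>

// Stable log(A - B) for 0 < B < A, given only logA and logB.
// The subtraction happens near 1, where log1p keeps full precision,
// so A and B may be far below the double underflow threshold.
double log_diff(double logA, double logB) {
    return logA + std::log1p(-std::exp(logB - logA));
}
```

For example, `log_diff(-800.0, -800.0001)` is a perfectly ordinary finite number, even though $e^{-800}$ underflows to zero and the naive `std::log(A - B)` would return `-inf`.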
It is worth noting that the above recipes behave poorly when $y$ is too near zero, $x$ is too near one, or $x$ and $y$ are too near each other. In each case "too near" means within a few orders of magnitude of the minimum positive value representable in your choice of floating point number. Whether your library evaluates beta by numerical integration or by summation of a hypergeometric series, under the "too near" conditions the integrand will underflow to zero, the "delta t" in the numerical integration will underflow to zero, or one or more of the summands will underflow to zero. This is not a defect of the function evaluation, since the result of the computation cannot be represented; but it is a defect of the logarithmic version. For instance, the log of a "too near" beta value could be $-1000$, yet your floating point representation has no hope of representing the $10^{-1000}$s that appear in the recipe. If these sorts of numbers are where you are going, then more attention will be needed in the "too near" cases.
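A tiny illustration of that point (helper name mine):

```cpp
#include <cmath>

// A logarithm like -1000 is an unremarkable double, but the value it
// encodes, e^-1000 ~ 5e-435, lies far below the smallest positive
// double (~4.9e-324) and underflows to exactly zero.
bool underflows_to_zero(double log_value) {
    return std::exp(log_value) == 0.0;
}
```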
Updated 20140606: For the range of parameters $x \in [0,1], y \in [0,1], \alpha \in [0,100], \beta \in [0,100]$, the function $$\hat{B}(x,y;\alpha, \beta) = \begin{cases} \frac{y-x}{2}\left( x^{\alpha -1}(1-x)^{\beta-1} + y^{\alpha-1}(1-y)^{\beta-1} \right) &, 1-10^{-5} \leq \frac{B(x;a,b)}{B(y;a,b)} \leq 1+10^{-5} \\ B(y;a,b)-B(x;a,b) &, \text{otherwise} \end{cases}$$ has, when computed with 10 digits of precision (about 33 bits, within typical double floating point precision), absolute error bounded by the precision ($ \sim 10^{-10}$) and relative error bounded by a factor of $49$. (This poor relative error occurs in the corner with $x$ near $1$, $y=1$, and both $\alpha$ and $\beta$ near $100$. For $x>0.99$, $B(x,1;100,100) < 10^{-200}$, so perhaps this relative error is not significant?)
(This large relative error can be addressed using the identity $$B(x,y;\alpha,\beta) = B(1-y,1-x;\beta,\alpha)$$ to rewrite betas with $1/2<x<y$.)
The cases break into "the beta values at $x$ and $y$ agree in the first few digits, so cancellation is likely, but the integrand is also minuscule, so the integral may be approximated by the trapezoid on the integrand evaluated at $x$ and $y$" and "the beta values at $x$ and $y$ differ in magnitude, or at least do not agree in the first few digits, so catastrophic cancellation will not occur".
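The trapezoid branch of the scheme is a one-liner; a sketch (function names mine):

```cpp
#include <cmath>

// The beta integrand t^(a-1) (1-t)^(b-1).
double beta_integrand(double t, double a, double b) {
    return std::pow(t, a - 1.0) * std::pow(1.0 - t, b - 1.0);
}

// Single-trapezoid approximation of \int_x^y t^(a-1)(1-t)^(b-1) dt, used
// when B(x;a,b)/B(y;a,b) is within 1e-5 of 1 and cancellation looms.
double trapezoid_beta(double x, double y, double a, double b) {
    return 0.5 * (y - x) * (beta_integrand(x, a, b) + beta_integrand(y, a, b));
}
```

It is exact when the integrand is linear on $[x,y]$ (e.g. $\alpha=2$, $\beta=1$), which is close to the truth on the very short intervals where this branch fires.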
Original answer:
\begin{align} B(x,y;\alpha, \beta) &= \frac{y^\alpha}{\alpha} {}_2F_1(\alpha, 1-\beta; \alpha+1; y) - \frac{x^\alpha}{\alpha} {}_2F_1(\alpha, 1-\beta; \alpha+1; x) \\ &= \frac{y^\alpha - x^\alpha}{\alpha} + \frac{\alpha (1-\beta)}{(\alpha+1) 1!} \frac{y^{\alpha+1} - x^{\alpha+1}}{\alpha} + \frac{\alpha(\alpha+1) (1-\beta)(2-\beta)}{(\alpha+1)(\alpha+2) 2!} \frac{y^{\alpha+2} - x^{\alpha+2}}{\alpha} + \dots \end{align}
The pattern is that the numerator is successive rising factorials of $\alpha$ and $1-\beta$, the denominator is rising factorials of $\alpha+1$ and $1$, and the powers of $y$ and $x$ increment by one each time.
With $x,y \in [0,1]$, the powers $y^{\alpha+n}$ and $x^{\alpha+n}$ get small fast. If even this is an underflow problem for you, I recommend dividing $y^\alpha$ out of every term, yielding \begin{align} B(x,y;\alpha, \beta) &= \frac{y^\alpha - x^\alpha}{\alpha} + \frac{\alpha (1-\beta)}{(\alpha+1) 1!} \frac{y^{\alpha+1} - x^{\alpha+1}}{\alpha} + \frac{\alpha(\alpha+1) (1-\beta)(2-\beta)}{(\alpha+1)(\alpha+2) 2!} \frac{y^{\alpha+2} - x^{\alpha+2}}{\alpha} + \dots \\ &= \frac{y^\alpha}{\alpha}\left( 1- \left(\frac{x}{y}\right)^\alpha + \frac{\alpha (1-\beta)}{(\alpha+1) 1!} \left(y - x \left(\frac{x}{y}\right)^\alpha \right) \right.\\ &\quad \left. + \frac{\alpha(\alpha+1) (1-\beta)(2-\beta)}{(\alpha+1)(\alpha+2) 2!} \left(y^2 - x^2 \left(\frac{x}{y}\right)^\alpha \right) + \dots \right) \end{align}
If $\alpha$ or $1-\beta$ is zero or is a negative integer, then these terms are eventually zero. If $\alpha+1$ is zero or a negative integer, we have to use limits to evaluate the series. Try to avoid that.
Note that for $\alpha$ large, the dominating term is the first one, and its approximate logarithm is $\alpha \log y - \log \alpha$. To improve your confidence in a result, add the logarithm of what is in the parentheses. Continue summing terms (and taking the logs of these partial sums, yielding $\alpha \log y - \log \alpha + \log(\text{sum})$) until you reach whatever your confidence requirements are. (We could use various estimates to determine which term will be the largest, but most likely the pattern of term values will be evident.)
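A sketch of this summation, computing $\log B(x,y;\alpha,\beta) = \alpha \log y - \log \alpha + \log(\text{sum})$ directly from the factored series (the function name, the 500-term cap, and the $10^{-17}$ relative stopping tolerance are my choices, not prescribed by the derivation):

```cpp
#include <cmath>

// log of the generalized incomplete beta via the factored 2F1 series:
// sum of c_n * (y^n - x^n * (x/y)^a), with c_0 = 1 and
// c_{n+1} = c_n * (a+n)(1-b+n) / ((a+n+1)(n+1)).
// Converges well for x, y away from 1; for x, y near 1, reflect first.
double log_gen_beta_series(double x, double y, double a, double b) {
    double r = std::pow(x / y, a);          // (x/y)^a <= 1
    double c = 1.0, xp = 1.0, yp = 1.0, sum = 0.0;
    for (int n = 0; n < 500; ++n) {
        double term = c * (yp - xp * r);
        sum += term;
        if (n > 0 && std::fabs(term) < 1e-17 * std::fabs(sum)) break;
        c *= (a + n) * (1.0 - b + n) / ((a + n + 1.0) * (n + 1.0));
        xp *= x;  yp *= y;
    }
    return a * std::log(y) - std::log(a) + std::log(sum);
}
```

When $\beta$ is a positive integer the series terminates exactly (the factor $1-\beta+n$ hits zero), so small integer cases such as $\alpha=2$, $\beta=3$ make convenient checks against the polynomial antiderivative.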
The sum converges rapidly for $x,y$ near zero. For $x,y$ near one, start instead with the complementary incomplete beta function and perform the same analysis; there are no interesting details.