I am having some difficulty deriving $\sigma_{X}^{2}(t)=\begin{cases} x_{0}\frac{\beta}{\alpha}e^{\alpha t}[e^{\alpha t}-1], & \alpha \neq 0\\ x_{0}\beta t, & \alpha = 0 \end{cases}$
Could you please elaborate on the calculation that led to it?
[from An Introduction to Stochastic Processes with Applications to Biology, by Linda J. S. Allen, 2010, p. 368]
The mean and variance can be written as $$\mu_X(t)=\int_0^\infty x p(x,t)\,dx$$ and $$\sigma^2_X(t)=\int_0^\infty x^2p(x,t)\,dx-\mu^2_X(t),$$ with initial conditions $\mu_X(0)=\int_{-\infty}^\infty x\delta(x-x_0)\,dx=x_0$ and $\sigma^2_X(0)=0.$
Applying the forward Kolmogorov equation $(8.16)$ gives
$$\begin{align}{d\mu_X\over dt}&={d\over dt}\int_0^\infty xp(x,t)\,dx=\int_0^\infty x{\partial p\over \partial t}dx\\ &=\int_0^\infty\left[-\alpha x{\partial(xp)\over\partial x}+{1\over 2}\beta x{\partial^2(xp)\over\partial x^2} \right]\,dx\\ &=-\alpha x^2p(x,t)\big|_0^\infty+\alpha\int_0^\infty xp(x,t)\,dx\\ &\quad +{1\over 2}\beta x{\partial(xp)\over\partial x}\bigg|_0^\infty-{1\over 2}\beta\int_0^\infty{\partial(xp)\over\partial x}\,dx\\ &=\alpha\int_0^\infty xp(x,t)\,dx. \end{align}$$
Thus, $${d\mu_X\over dt}=\alpha\mu_X(t).$$
In the preceding derivation it was assumed that
$$\lim\limits_{x\to\infty}x^2{\partial p(x,t)\over\partial x}=\lim\limits_{x\to\infty}x^2p(x,t)=\lim\limits_{x\to\infty}x p(x,t)=0,$$
which follows from the existence of the mean and variance of $X(t).$ The solution of the differential equation for the mean $\mu_X(t)$ with $\mu_X(0)=x_0$ is
$$\bbox[5px,border:2px solid orange]{\mu_X(t)=x_0e^{\alpha t}.}$$
The mean grows exponentially if $\alpha>0.$ The variance is equal to
$$\bbox[5px,border:2px solid orange]{\sigma^2_X(t)=\begin{cases}x_0{\beta\over\alpha}e^{\alpha t}[e^{\alpha t}-1]&\alpha\ne 0\\ x_0\beta t&\alpha=0.\end{cases}}$$
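Both boxed formulas can be sanity-checked numerically by simulating the diffusion with infinitesimal mean $\alpha x$ and infinitesimal variance $\beta x$, i.e. $dX=\alpha X\,dt+\sqrt{\beta X}\,dW$, and comparing sample moments against $\mu_X(t)$ and $\sigma^2_X(t)$. The sketch below is my own check, not part of the book: the parameter values are arbitrary, and the truncation $\max(X,0)$ under the square root is a standard device to keep the Euler–Maruyama scheme real-valued.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters
alpha, beta, x0 = 0.5, 0.2, 1.0
T, n_steps, n_paths = 1.0, 200, 100_000
dt = T / n_steps

# Euler-Maruyama for dX = alpha*X dt + sqrt(beta*X) dW,
# with full truncation max(X, 0) so the square root stays real
X = np.full(n_paths, x0)
for _ in range(n_steps):
    Xp = np.maximum(X, 0.0)
    X = X + alpha * Xp * dt + np.sqrt(beta * Xp * dt) * rng.standard_normal(n_paths)

# Theoretical moments from the boxed formulas (alpha != 0 branch)
mean_theory = x0 * np.exp(alpha * T)
var_theory = x0 * (beta / alpha) * np.exp(alpha * T) * (np.exp(alpha * T) - 1)

print(X.mean(), mean_theory)   # both close to 1.6487
print(X.var(), var_theory)     # both close to 0.4279
```

With $10^5$ paths the Monte Carlo error in the sample mean is of order $\sqrt{\sigma^2_X(T)/n}\approx 0.002$, so agreement to two decimals is expected.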
That result is for a time-homogeneous diffusion process $\{X(t)\}_{t\ge 0}$ such that $X(0)=x_0$, with linear infinitesimal mean and variance ($\alpha x$ and $\beta x$, respectively); then the pdf $p(x,t)$ satisfies the forward Kolmogorov equation: $$\partial_t\,p=-\alpha\,\partial_x(x\,p)+{1\over 2}\beta\,\partial^2_x(x\,p), $$ so $\sigma^2_X(t)$ must satisfy the following differential equation, where $\mu_X(t)=x_0\,e^{\alpha t}$: $$\begin{align}D_t\,\sigma^2_X(t) &=D_t\left(\int_0^\infty x^2\,p\,dx -\mu^2_X(t)\right)\\[2ex] &=\int_0^\infty x^2\,\partial_tp\,dx -2\alpha\mu^2_X(t)\\[2ex] &=-\alpha\int_0^\infty x^2\partial_x(x\,p)\,dx+{1\over 2}\beta \int_0^\infty x^2\partial^2_x(x\,p)\,dx - 2\alpha\mu^2_X(t)\\[2ex] &=-\alpha\left(\left[x^3p\right]_0^\infty-\int_0^\infty2x^2p\,dx \right) \\ &\quad+{1\over 2}\beta\left(\left[x^2\partial_x(xp)\right]_0^\infty- \int_0^\infty 2x\partial_x(xp)dx\right)-2\alpha\mu^2_X(t)\\[2ex] &=2\alpha\left(\int_0^\infty x^2p\,dx \right) -\beta\left(\int_0^\infty x\partial_x(xp)dx\right)-2\alpha\mu^2_X(t)\\[2ex] &=2\alpha\left(\sigma^2_X(t)+\mu^2_X(t) \right) -\beta\left(\left[x^2p\right]_0^\infty-\int_0^\infty xp\,dx\right)-2\alpha\mu^2_X(t)\\[2ex] &=2\alpha\left(\sigma^2_X(t)+\mu^2_X(t) \right) +\beta\mu_X(t)-2\alpha\mu^2_X(t)\\[2ex] \color{blue}{D_t\,\sigma^2_X(t)}&=\color{blue}{2\alpha\,\sigma^2_X(t)+x_0\beta\,e^{\alpha t}}\tag{Eq.1} \end{align}$$ where we have repeatedly applied integration by parts, and have assumed that $\lim\limits_{x\to\infty}x^kp(x,t)=0$ for $k=1,2,3,$ and also that $\lim\limits_{x\to\infty}x^2\partial_xp(x,t)=0.$
It is now readily verified that Eq.1 is satisfied by the stated solution: $$\sigma^2_X(t)=\begin{cases}x_0{\beta\over\alpha}e^{\alpha t}(e^{\alpha t}-1)&\alpha\ne 0\\ x_0\beta t&\alpha=0.\end{cases}$$
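That verification can also be automated; a short SymPy sketch (my own check, not from the book) substitutes each branch of the stated solution into Eq.1 and confirms the residual vanishes:

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
a, b, x0 = sp.symbols('alpha beta x_0', positive=True)

# alpha != 0 branch of the stated solution
y = x0 * (b / a) * sp.exp(a * t) * (sp.exp(a * t) - 1)
# Residual of Eq.1: y' - (2*alpha*y + x0*beta*e^{alpha t})
residual = sp.diff(y, t) - (2 * a * y + x0 * b * sp.exp(a * t))
print(sp.simplify(residual))   # 0

# alpha = 0 branch: Eq.1 reduces to y' = x0*beta
y0 = x0 * b * t
print(sp.simplify(sp.diff(y0, t) - x0 * b))   # 0
```

Both residuals simplify to zero, and both branches satisfy the initial condition $y(0)=0$.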
NB: Eq.1 is a simple nonhomogeneous linear first-order ordinary differential equation; consequently, rather than just verifying the stated solution, it's straightforward to obtain it directly by standard methods, letting $y(t)=\sigma^2_X(t):$
If $\alpha=0$ then Eq.1 is just $y'=x_0 \beta.$ Integration then gives $y=x_0\beta t+c,$ and the initial condition $y(0)=0$ (because $V[X(0)]=V[x_0]=0$) then requires $c=0;$ hence, the solution is $y=x_0\beta t.$
If $\alpha\ne 0$ then apply the standard methods:
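One way to carry out those standard methods is an integrating factor. Write Eq.1 as $y'-2\alpha y=x_0\beta\,e^{\alpha t}$ and multiply by $e^{-2\alpha t}$: $$\left(e^{-2\alpha t}y\right)'=x_0\beta\,e^{-\alpha t}.$$ Integrating gives $$e^{-2\alpha t}y=-{x_0\beta\over\alpha}e^{-\alpha t}+c,\qquad\text{i.e.}\qquad y=-{x_0\beta\over\alpha}e^{\alpha t}+c\,e^{2\alpha t}.$$ The initial condition $y(0)=0$ forces $c={x_0\beta\over\alpha},$ hence $$y(t)={x_0\beta\over\alpha}e^{\alpha t}\left(e^{\alpha t}-1\right),$$ in agreement with the boxed formula.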