Suppose that $\alpha(t)$ vanishes as $t$ goes to infinity, i.e., $\lim_{t\rightarrow\infty}\alpha(t)=0$.
Consider the ODE $$ \dot{x}(t)=-\gamma x(t) + \alpha(t), \quad \gamma>0. $$ Can we prove that $x$ goes to zero as $t$ goes to infinity, i.e., $\lim_{t\rightarrow\infty}x(t)=0$?
If we define the Lyapunov function candidate $L = x^2/2$, we get $$ \dot{L}=-\gamma x^2+\alpha x. $$ Can we still use the analysis methods based on Lyapunov stability theory? How? Can we say that since $\lim_{t\rightarrow\infty}\alpha=0$, we have $\lim_{t\rightarrow\infty}\dot{L}\leq0$, and therefore $x$ is stable or even goes to zero?
As Did mentions, in the above case (in which the system is linear) you can simply obtain the solution analytically, namely $x(t)=x(0)e^{-\gamma t}+\int_0^t e^{-\gamma(t-\tau)}\alpha(\tau)\,d\tau$, and take its limit to deduce the asymptotic behavior.
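To spell out that limit argument (a sketch): the first term obviously decays, and for the convolution term one can fix $\varepsilon>0$, pick $T$ such that $|\alpha(\tau)|<\varepsilon$ for $\tau\ge T$, and split the integral:

$$\left|\int_0^t e^{-\gamma(t-\tau)}\alpha(\tau)\,d\tau\right| \le e^{-\gamma t}\underbrace{\int_0^T e^{\gamma\tau}|\alpha(\tau)|\,d\tau}_{=:C_T} + \varepsilon\int_T^t e^{-\gamma(t-\tau)}\,d\tau \le e^{-\gamma t}C_T + \frac{\varepsilon}{\gamma}.$$

The first term vanishes as $t\rightarrow\infty$, and $\varepsilon$ was arbitrary, so the convolution term, and hence $x(t)$, tends to zero.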
Alternatively, if one still wants to rely on Lyapunov functions (which is useful because the same approach handles nonlinear systems, where an analytical solution is unavailable), what you are looking for is the notion of converging-input converging-state (CICS) systems, which is related to the better-known concept of input-to-state stability (ISS).
As the name implies, a CICS system is one in which convergence of the input to zero (i.e., $\alpha(t)\rightarrow 0$ as $t\rightarrow\infty$) implies convergence of the state to zero (i.e., $x(t)\rightarrow 0$ as $t\rightarrow\infty$). One usually verifies the CICS property with Lyapunov-type functions (the one you have above does the trick).
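As a quick numerical sanity check (only an illustration, not a proof; the values of $\gamma$, $x(0)$, the step size, and the choice $\alpha(t)=e^{-t/2}$ are all assumptions made for the example), one can forward-Euler integrate the ODE and watch the state decay along with the input:

```python
import math

def simulate(gamma=1.0, x0=5.0, dt=0.01, t_end=40.0):
    """Forward-Euler integration of x' = -gamma*x + alpha(t)."""
    alpha = lambda t: math.exp(-0.5 * t)  # an input converging to zero
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-gamma * x + alpha(t))
        t += dt
    return x

# The state tracks the decaying input and converges to zero.
print(abs(simulate()))
```

Running it with a larger `t_end` drives the final state closer to zero, consistent with the CICS behavior.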
For more info, see http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01178918 or http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=70365. Sorry I don't go into much detail; the full argument is actually relatively lengthy, and in the linear case it is much easier to just look at the solution. I just wanted to point out that you were thinking along the right lines for more general systems.