I am trying to prove the asymptotic stability of the trivial equilibrium $(0,0)$ of the two-dimensional nonlinear ODE system:
\begin{align} \frac{dH}{dt}=\mu\frac{(H+F)^2}{K^2+(H+F)^2}-d_1 H-H\left(\sigma_1-\sigma_2\frac{F}{H+F}\right)\\ \frac{dF}{dt}= H\left(\sigma_1-\sigma_2\frac{F}{H+F}\right)-(p+d_2)F \end{align} where $H$ and $F$ are the dependent variables and are positive.
All other parameters are nonnegative, with $\sigma_2>\sigma_1$.
I tried using the Jacobian matrix, but the term $F/(H+F)$ is undefined at $(0,0)$, so the Jacobian blows up there. So I thought of choosing a region around the equilibrium point and proving that it is positively invariant — that is, solutions entering the region must stay in it and converge to the equilibrium, which would give stability. But I don't know how to prove this using differential inequalities!
Any kind of help/suggestion/guidance will be appreciated! Thanks!
You need $K$ to be strictly positive: if $K=0$ and $\mu>0$, then in a neighborhood of the origin $dH/dt \approx \mu >0$, so the origin cannot be stable. So suppose $K>0$.
In the first equation, the term $\mu\frac{(H+F)^2}{K^2+(H+F)^2}$ is of second order of smallness near the origin: it is bounded by $\mu(H+F)^2/K^2$. The term $\sigma_2 H\frac{F}{H+F}$, on the other hand, is only first order (on the diagonal $H=F$ it equals $\sigma_2 H/2$), so it cannot simply be neglected. What saves us is that the entire transfer term $H\left(\sigma_1-\sigma_2\frac{F}{H+F}\right)$ is subtracted in the first equation and added in the second, so it cancels exactly when the two equations are summed.
In the second equation, the term $-\sigma_2 H\frac{F}{H+F}$ is nonpositive, so dropping it gives the upper bound $\frac{dF}{dt}\le \sigma_1 H-(p+d_2)F$. This looks troublesome, but if $H$ goes to zero, $F$ will be forced to follow.
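To make "forced to follow" precise: treating $H(t)$ as a known forcing term, the inequality $\frac{dF}{dt}\le\sigma_1 H-(p+d_2)F$ can be integrated (multiply by $e^{(p+d_2)t}$ and integrate, i.e. variation of constants plus a Grönwall-type comparison):
$$F(t)\le e^{-(p+d_2)t}F(0)+\sigma_1\int_0^t e^{-(p+d_2)(t-s)}H(s)\,ds,$$
and if $p+d_2>0$ and $H(s)\to 0$, both terms on the right tend to zero as $t\to\infty$.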
Let's summarize. Add the two equations: the transfer term $H\left(\sigma_1-\sigma_2\frac{F}{H+F}\right)$ cancels exactly, leaving $$\frac{d(H+F)}{dt}=\mu\frac{(H+F)^2}{K^2+(H+F)^2}-d_1 H-(p+d_2)F.$$ (I work in the positive quadrant $H,F>0$, which is what you are interested in.) For every $\epsilon>0$ there is a neighborhood of the origin in which the first term is smaller than $\epsilon(H+F)$, so there $$\frac{d(H+F)}{dt}<-d_1 H-(p+d_2)F+\epsilon(H+F)\le-\left(\min(d_1,p+d_2)-\epsilon\right)(H+F),$$ which is negative provided $d_1>0$, $p+d_2>0$, and $\epsilon$ is chosen sufficiently small. Hence $V=H+F$ decays (at least exponentially, by Grönwall) in that neighborhood, and since $0<H,F\le V$, both components tend to zero: the origin is asymptotically stable.
Why $H+F$ and not, say, $2H+F$ (to bring $\sigma_1$ into the coefficient of $H$)? Because with unequal weights the transfer term no longer cancels: $2H+F$ leaves behind $+\sigma_2\frac{HF}{H+F}$, which is genuinely first order near the origin (on the diagonal $H=F$ it equals $\sigma_2 H/2$) and so cannot be absorbed into the $\epsilon$ term. The exact cancellation in $H+F$ is what makes the differential inequality close; the price is that the argument needs $d_1>0$.
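Not a substitute for the proof, but here is a quick numerical sanity check with scipy. The parameter values are my own illustrative assumptions, chosen to satisfy $\sigma_2>\sigma_1$, $d_1>0$, $p+d_2>0$, $K>0$; starting from a small positive initial condition, $H+F$ decays toward $0$.

```python
# Numerical sanity check (not a proof): integrate the system from a small
# positive initial condition and watch the trajectory decay to the origin.
# The parameter values below are illustrative assumptions satisfying
# sigma2 > sigma1, d1 > 0, p + d2 > 0, K > 0.
import numpy as np
from scipy.integrate import solve_ivp

mu, K = 1.0, 1.0
d1, d2, p = 0.5, 0.4, 0.2
sigma1, sigma2 = 0.3, 0.6          # sigma2 > sigma1

def rhs(t, y):
    H, F = y
    s = H + F
    if s == 0.0:                   # the vector field vanishes at the origin
        return [0.0, 0.0]
    recruit = mu * s**2 / (K**2 + s**2)       # second-order small near 0
    transfer = H * (sigma1 - sigma2 * F / s)  # cancels in d(H+F)/dt
    return [recruit - d1 * H - transfer,
            transfer - (p + d2) * F]

sol = solve_ivp(rhs, (0.0, 30.0), [0.05, 0.05], rtol=1e-9, atol=1e-12)
V = sol.y[0] + sol.y[1]            # candidate Lyapunov function H + F
print(f"V(0) = {V[0]:.3f},  V(30) = {V[-1]:.2e}")
```

For these values, $dV/dt \le \mu V^2/K^2 - \min(d_1,p+d_2)V = V^2 - 0.5V < 0$ whenever $0<V<0.5$, so the whole run stays inside the region where $V$ is strictly decreasing.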