Let $g: \Theta \to \mathbb R$ be a function of an unknown parameter $\theta$. We want to estimate $g$. Let $\hat g_n$ be a sequence of estimators for $g(\theta)$ such that for all $\theta \in \Theta$: $$\lim_{n\to\infty} E[\hat g_n]=g(\theta) \hspace{1cm} \text{ and} \hspace{1cm} \lim_{n\to\infty} \text{Var}(\hat g_n)=0$$
Show that the sequence $\hat g_n$ is consistent.
My attempt: I was thinking of using Chebyshev's inequality here, but I was not sure whether it can be applied to sequences like this. For $\hat g_n$ to be consistent, I need to show that for every $\varepsilon > 0$:
$$\lim_{n \to \infty} \Pr(\vert \hat g_n-g \vert > \varepsilon )=0$$
I am given that $\hat g_n$ has expected value $g$, so I can use Chebyshev's inequality:
$$ \begin{split}&\phantom{\implies} \, \, \Pr(\vert X-E[X]\vert > \varepsilon)\le\frac{\text{Var}(X)}{\varepsilon^2} \\[10pt] & \implies \Pr(\vert \hat g_n-g\vert > \varepsilon) \le \frac{\text{Var}(\hat g_n)}{\varepsilon^2} \\[10pt] & \implies \lim_{n \to \infty}\Pr(\vert \hat g_n-g\vert > \varepsilon) \le \lim_{n \to \infty} \frac{\text{Var}(\hat g_n)}{\varepsilon^2}=\frac{1}{\varepsilon^2}\lim_{n\to \infty}\text{Var}(\hat g_n)=0\end{split}$$
Is this the correct way to do this, or am I "not allowed" to use Chebyshev's inequality here?
You can use Chebyshev's inequality, but I suspect not quite like that, as you do not know that $E[\hat g_n]=g(\theta)$ for any finite $n$.
Instead perhaps you could say something like: for any $\varepsilon>0$ and any $\delta >0$ there is an $N$ such that for all $n \ge N$ we have $\vert E[\hat g_n]-g(\theta)\vert \le \varepsilon/2$ and $\text{Var}(\hat g_n) \le \delta\varepsilon^2/4$, so by the triangle inequality and Chebyshev's inequality $$\Pr(\vert \hat g_n-g(\theta) \vert > \varepsilon) \le \Pr(\vert \hat g_n-E[\hat g_n] \vert > \varepsilon/2) \le \frac{\text{Var}(\hat g_n)}{(\varepsilon/2)^2} \le \delta$$
and, since $\delta$ was arbitrary, this gives you $\lim\limits_{n \to \infty} \Pr(\vert \hat g_n-g(\theta) \vert > \varepsilon )=0$, i.e. convergence in probability.
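If it helps to see the phenomenon numerically, here is a small Monte Carlo sketch (not part of the argument above) using a hypothetical estimator $\hat g_n = \bar X_n + 1/n$ of $\theta$ from i.i.d. $N(\theta, 1)$ draws: its expectation $\theta + 1/n$ differs from $\theta$ for every finite $n$, yet the bias and the variance both vanish, so the exceedance probability still goes to $0$.

```python
import random

random.seed(0)

theta, eps, trials = 2.0, 0.2, 2000

def exceed_prob(n):
    """Monte Carlo estimate of Pr(|g_hat_n - theta| > eps)."""
    count = 0
    for _ in range(trials):
        # sample mean of n i.i.d. N(theta, 1) draws
        xbar = sum(random.gauss(theta, 1) for _ in range(n)) / n
        g_hat = xbar + 1.0 / n  # biased for every finite n, bias -> 0
        if abs(g_hat - theta) > eps:
            count += 1
    return count / trials

probs = [exceed_prob(n) for n in (10, 100, 1000)]
print(probs)
```

The printed probabilities shrink toward $0$ as $n$ grows, matching the convergence-in-probability conclusion even though $E[\hat g_n] \ne \theta$ at every finite $n$.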