I am dealing with a real-valued random function $\sigma_n(\theta)$, where $\theta\in\mathbb{R}$ is a parameter. I know that
$$
\sigma_n(\theta) \overset{p}{\rightarrow} \sigma(\theta),
$$
uniformly, as $n \rightarrow \infty$. My understanding is that, by the Continuous Mapping Theorem (applicable provided $\sigma(\theta) \neq 0$ almost surely), we have
$$
1/\sigma_n(\theta) \overset{p}{\rightarrow} 1/\sigma(\theta)
$$
as well, pointwise in $\theta$, but not necessarily uniformly on $\mathbb{R}$, because the reciprocal function is not uniformly continuous near zero.
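For concreteness, here is a small deterministic illustration of how uniformity can fail (the choices $\sigma(\theta) = e^{-\theta}$ on $[0,30]$ and $\sigma_n = \sigma + 1/n$ are purely illustrative; deterministic convergence implies convergence in probability):

```python
import numpy as np

# Hypothetical deterministic example: sigma(theta) = exp(-theta) on [0, 30]
# and sigma_n(theta) = sigma(theta) + 1/n. Then sigma_n -> sigma uniformly,
# but 1/sigma_n -> 1/sigma fails to be uniform because sigma approaches 0.
theta = np.linspace(0.0, 30.0, 301)
sig = np.exp(-theta)

sup_diff = {}        # sup_theta |sigma_n - sigma|, equals 1/n -> 0
sup_recip_diff = {}  # sup_theta |1/sigma_n - 1/sigma|, stays huge
for n in (10, 100, 1000):
    sig_n = sig + 1.0 / n
    sup_diff[n] = np.max(np.abs(sig_n - sig))
    sup_recip_diff[n] = np.max(np.abs(1.0 / sig_n - 1.0 / sig))
    print(n, sup_diff[n], sup_recip_diff[n])
```

The first supremum shrinks like $1/n$, while the second does not shrink at all, because it is dominated by the region where $\sigma$ is nearly zero.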
My question is: given that $1/x$ is uniformly continuous on $[\mu, \infty)$ for $\mu > 0$, if I impose something like $$ \operatorname{Prob}\{\sigma_n(\theta)<\mu\}\rightarrow 0 \quad \text{as } n\rightarrow \infty $$ for each $\theta$, will this be sufficient to get uniform convergence in probability? If not, what kind of conditions should I impose?
I suspect that, in order to get uniform convergence, you may need to require $P(\sigma_n(\theta) < \mu) \to 0$ uniformly in $\theta$. Probably there are general results to this effect, but a direct proof from scratch is also not difficult.
Lemma. Let $\sup_\theta P(\sigma_n(\theta) < \mu) \to 0$ as $n\to\infty$ and $\sigma_n(\theta) \to \sigma(\theta)$ in probability, uniformly in $\theta$. Then $1/\sigma_n(\theta) \to 1/\sigma(\theta)$ in probability, uniformly in $\theta$.
Proof. We first show that $\sigma(\theta) \geq \mu$ a.s. for every $\theta$. Fix $\theta$ and $\epsilon > 0$; then
$$\begin{align*} 0 &\leq P\bigl(\sigma(\theta) < \mu-\epsilon\bigr) \\ &= P\bigl(\sigma(\theta) < \mu-\epsilon, \sigma_n(\theta)<\mu \bigr) + P\bigl(\sigma(\theta) < \mu-\epsilon, \sigma_n(\theta) \geq \mu\bigr) \\ &\leq P\bigl(\sigma_n(\theta)<\mu \bigr) + P\bigl(\bigl|\sigma_n(\theta) - \sigma(\theta)\bigr| >\epsilon\bigr) \\ &\to 0+0 \\ &= 0, \end{align*}$$ and thus $P(\sigma(\theta) < \mu-\epsilon) = 0$ for all $\epsilon>0$; letting $\epsilon \downarrow 0$ and using continuity of measure, $P(\sigma(\theta) < \mu) = 0$ for all $\theta$.
Now we can show the required convergence. Let $\epsilon > 0$ and choose $\delta > 0$ such that $|1/x - 1/y| \leq \epsilon$ for all $x,y\geq \mu$ with $|x-y| \leq \delta$; for example, $\delta = \epsilon\mu^2$ works, since $|1/x - 1/y| = |x-y|/(xy) \leq |x-y|/\mu^2$ on $[\mu, \infty)$. For fixed $\theta$ we get $$\begin{align*} P\Bigl(\Bigl|\frac{1}{\sigma_n(\theta)}-\frac{1}{\sigma(\theta)}\Bigr| > \epsilon\Bigr) &\leq P\bigl(\bigl|\sigma_n(\theta)-\sigma(\theta)\bigr| > \delta \text{ or } \sigma_n(\theta) < \mu \text{ or } \sigma(\theta) < \mu\bigr) \\ &\leq P\bigl(\bigl|\sigma_n(\theta)-\sigma(\theta)\bigr| > \delta\bigr) + P\bigl(\sigma_n(\theta) < \mu \bigr) \\ &\leq \sup_\theta P\bigl(\bigl|\sigma_n(\theta)-\sigma(\theta)\bigr| > \delta\bigr) + \sup_\theta P\bigl(\sigma_n(\theta) < \mu \bigr), \end{align*}$$ where the term $P\bigl(\sigma(\theta) < \mu\bigr)$ has been dropped because it is zero by the first step. Taking the supremum over $\theta$ on the left-hand side, $$\begin{align*} \sup_\theta P\Bigl(\Bigl|\frac{1}{\sigma_n(\theta)}-\frac{1}{\sigma(\theta)}\Bigr| > \epsilon\Bigr) &\leq \sup_\theta P\bigl(\bigl|\sigma_n(\theta)-\sigma(\theta)\bigr| > \delta\bigr) + \sup_\theta P\bigl(\sigma_n(\theta) < \mu \bigr) \\ &\to 0 + 0 \\ &= 0. \end{align*}$$ This completes the proof.
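To sanity-check the lemma numerically, here is a small Monte Carlo sketch. All concrete choices here ($\sigma(\theta) = 1 + |\theta|/(1+|\theta|)$, Gaussian noise of scale $1/\sqrt{n}$, the $\theta$-grid, and $\epsilon = 0.05$) are my own illustrative assumptions, not part of the lemma:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup satisfying the lemma's hypotheses with mu = 1:
# sigma(theta) = 1 + |theta|/(1 + |theta|) >= 1 for all theta, and
# sigma_n(theta) = sigma(theta) + Gaussian noise of scale 1/sqrt(n), so
# sigma_n -> sigma in probability uniformly, and sup_theta P(sigma_n < 1) -> 0.
def sigma(theta):
    return 1.0 + np.abs(theta) / (1.0 + np.abs(theta))

thetas = np.linspace(-10.0, 10.0, 41)
eps, reps = 0.05, 20_000

worst = {}  # empirical sup over the theta-grid of P(|1/sigma_n - 1/sigma| > eps)
for n in (10, 100, 1000):
    probs = []
    for th in thetas:
        s = sigma(th) + rng.normal(scale=1.0 / np.sqrt(n), size=reps)
        probs.append(np.mean(np.abs(1.0 / s - 1.0 / sigma(th)) > eps))
    worst[n] = max(probs)
    print(n, worst[n])
```

The worst-case exceedance probability across the grid decreases toward $0$ as $n$ grows, consistent with the uniform convergence the lemma asserts.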