So, here is the formal statement:
Let $m^*$ denote the Lebesgue outer measure on $\mathbb{R}^n$, and suppose $E \subset \mathbb{R}^n$ with $m^*(E) < \infty$. Let $\sigma_m = \{x\in \mathbb{R}^n : d(x,E) < \frac{1}{m}\}$. Show that if $E$ is compact, then $m^*(E) = \lim_{m \to \infty } m^*(\sigma_m)$.
I have come up with a proof attempt, but I'm highly suspicious of it because I never actually used the fact that $E$ is compact. I was hoping someone could point out a flaw in my argument, or suggest how to incorporate this hypothesis more explicitly if it turns out I'm using it implicitly.
Proof Attempt: Fix $\epsilon > 0$ and choose a countable cover of $E$ by open sets $Q_j$ such that $$\sum_{j=1}^{\infty} m^*(Q_j) \leq m^*(E) + \epsilon,$$ and set $Q = \bigcup_{j=1}^{\infty} Q_j$.
Since $m^*$ is countably subadditive, $m^*(Q) \leq \sum_{j=1}^{\infty} m^*(Q_j)$, and hence $m^*(Q) \leq m^*(E) + \epsilon$.

Now (I suspect this is where my argument begins to fail), for all $m$ large enough we have $\sigma_m \subset Q$, so by monotonicity of $m^*$ it follows that $m^*(\sigma_m) \leq m^*(Q)$ for all such $m$. Combining these inequalities, $m^*(\sigma_m) \leq m^*(E) + \epsilon$, i.e. $m^*(\sigma_m) - m^*(E) \leq \epsilon$.

On the other hand, $E \subset \sigma_m$ for every $m$, so monotonicity gives $m^*(E) \leq m^*(\sigma_m)$, i.e. $0 \leq m^*(\sigma_m) - m^*(E)$.

Thus, for all $m$ large enough, $|m^*(\sigma_m) - m^*(E)| = m^*(\sigma_m) - m^*(E) \leq \epsilon$. This is what we wished to show.
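As a quick sanity check of the statement itself (not a proof), here is a small numerical experiment for one concrete compact set of my own choosing, $E = [0,1] \cup \{2\} \subset \mathbb{R}$, where $m^*(E) = 1$ and the measure of $\sigma_m$ can be computed exactly:

```python
# Sanity check (not a proof) for the concrete compact set E = [0,1] ∪ {2} in R.
# Once 1/m < 1/2, sigma_m = (-1/m, 1 + 1/m) ∪ (2 - 1/m, 2 + 1/m) is a disjoint
# union of two intervals, so m*(sigma_m) = (1 + 2/m) + 2/m = 1 + 4/m.

def outer_measure_sigma_m(m):
    """Exact Lebesgue measure of sigma_m for E = [0,1] ∪ {2},
    assuming m >= 3 so the two intervals above are disjoint."""
    assert m >= 3
    return (1 + 2 / m) + (2 / m)

measure_E = 1.0
for m in [10, 100, 1000, 10_000]:
    # The gap m*(sigma_m) - m*(E) = 4/m shrinks to 0 as m grows,
    # consistent with m*(sigma_m) -> m*(E).
    print(m, outer_measure_sigma_m(m) - measure_E)
```

Of course this only illustrates the claimed limit for one compact $E$; it says nothing about where compactness is used in the argument above.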