Let's consider a set $\{X_i\}_{i=1}^N$ of $N$ i.i.d. random variables drawn from the distribution $P_X(x) = \mathcal{N}(\mu, \sigma^2)$. Define the variable $$\hat{\sigma}^2 = \frac{1}{N} \sum_i (X_i - \hat{\mu})^2$$ where $\hat{\mu} \equiv \frac{1}{N}\sum_i X_i$; it follows that (see for example here):
$$\hat{\sigma}^2 \sim f(x) = \frac{1}{2^{\frac{N-1}{2}}\Gamma\left( \frac{N-1}{2}\right)} \left( \frac{N}{\sigma^2} \right)^{\frac{N-1}{2}} x^{\frac{N-3}{2}} \, e^{-\frac{x}{2} \frac{N}{\sigma^2}} \qquad\qquad (1)$$
Now, since $\hat{\sigma}^2$ is the sample variance, the law of large numbers guarantees that, as $N \to \infty$, $\hat\sigma^2$ converges in probability to the expected value $\sigma^2$.
Question:
Is it possible to show, directly from the distribution in Eq. (1), that as $N \to \infty$ the measure of the distribution of $\hat\sigma^2$ concentrates around the value $\sigma^2$, i.e. $$\lim_{N \to \infty} P(|\hat \sigma^2 - \sigma^2|>\epsilon )=0 \quad \forall \epsilon>0 \; ?$$
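As a quick numerical sanity check of the claimed concentration, one can estimate the tail probability by Monte Carlo (a minimal sketch; the parameter values $\mu=0$, $\sigma^2=4$, $\epsilon=0.5$ and the sample sizes are arbitrary choices, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, eps, trials = 0.0, 4.0, 0.5, 5000

def tail_prob(N):
    # Draw `trials` independent samples of size N and form the
    # (biased, 1/N-normalized) sample variance defined above.
    X = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))
    s2 = X.var(axis=1)  # ddof=0 matches the 1/N definition
    # Estimate P(|sigma_hat^2 - sigma^2| > eps) by the empirical frequency.
    return np.mean(np.abs(s2 - sigma2) > eps)

probs = [tail_prob(N) for N in (10, 100, 1000)]
print(probs)  # the estimated tail probabilities shrink as N grows
```

The printed frequencies decrease toward zero as $N$ increases, consistent with convergence in probability.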
Your assertion needs some clarification: $\hat{\sigma}^2$ is not a deterministic function of $N$, it is a random variable, so we need to specify what type of convergence we are talking about.
Typically, in this case (an estimator), we want convergence in probability (a consistent estimator). A sufficient (but not necessary) condition for this is convergence in mean square, which in turn holds if the estimator is unbiased (or at least asymptotically unbiased) and its variance tends to zero.
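For the present case this program can be carried out explicitly. A sketch, using the standard fact (equivalent to Eq. (1)) that $N\hat\sigma^2/\sigma^2 \sim \chi^2_{N-1}$, whose mean is $N-1$ and variance $2(N-1)$:

$$\mathbb{E}[\hat\sigma^2] = \frac{N-1}{N}\,\sigma^2 \xrightarrow[N\to\infty]{} \sigma^2, \qquad \operatorname{Var}(\hat\sigma^2) = \frac{2(N-1)}{N^2}\,\sigma^4 \xrightarrow[N\to\infty]{} 0,$$

so the estimator is asymptotically unbiased with vanishing variance, and the mean squared error

$$\mathbb{E}\big[(\hat\sigma^2-\sigma^2)^2\big] = \operatorname{Var}(\hat\sigma^2) + \left(\frac{\sigma^2}{N}\right)^2 = \frac{2N-1}{N^2}\,\sigma^4 \to 0.$$

Markov's inequality then gives $P(|\hat\sigma^2-\sigma^2|>\epsilon) \le \mathbb{E}[(\hat\sigma^2-\sigma^2)^2]/\epsilon^2 \to 0$ for every $\epsilon>0$, which is exactly the concentration statement asked for.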