Given $\hat \theta=$ the maximum likelihood estimator for a parameter $\theta$ of a distribution, we know that $$\sqrt{n}(\hat \theta-\theta)\rightarrow^d N(0,V(\hat\theta))$$ where the $\rightarrow^d$ represents convergence in distribution. However, does this imply that: $$\hat\theta\sim_a N\left( \theta,\frac{V(\hat\theta)}{n} \right)?$$
Meaning that, for large enough $n$, the distribution of $\hat\theta$ can be approximated by a normal distribution with the specified mean and variance? $\sim_a$ is just the notation I used to represent "approximately follows". I know that the actual distribution of $\hat\theta$ may not be normal.
Don't forget that (under the "regularity" conditions; inter alia, a well-defined and finite Fisher information) you have $$ \sqrt{n}( \hat{\theta}_n - \theta )\xrightarrow{D}N(0, \mathcal{I}^{-1}(\theta)), $$
where $\mathcal{I}^{-1}(\theta)$ is the inverse of the Fisher information matrix/scalar. That is, asymptotically the variance of an MLE attains the Cramér-Rao lower bound for unbiased estimators (even if the MLE is biased for any finite $n$). Hence, for large enough $n$, indeed $$ \hat{\theta}_n \sim_{approx} N\left(\theta,\, n^{-1}\mathcal{I}^{-1}(\theta)\right). $$
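You can check this approximation numerically. A minimal simulation sketch (my own illustration, not part of the theory above): for an Exponential distribution with rate $\lambda$, the MLE is $\hat\lambda = 1/\bar{X}_n$ and the Fisher information is $\mathcal{I}(\lambda) = 1/\lambda^2$, so $\sqrt{n}(\hat\lambda_n - \lambda)$ should be approximately $N(0, \lambda^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0            # true rate of the Exponential(lam) distribution
n, reps = 2000, 5000

# MLE of the rate is 1 / sample mean; since I(lam) = 1/lam^2, the
# asymptotic variance of sqrt(n) * (mle - lam) is I^{-1}(lam) = lam^2.
samples = rng.exponential(scale=1 / lam, size=(reps, n))
mle = 1.0 / samples.mean(axis=1)

z = np.sqrt(n) * (mle - lam)
print(z.mean(), z.std())  # roughly 0 and lam = 2, as the theory predicts
```

Note that $\hat\lambda$ is biased for every finite $n$ (here $E[\hat\lambda] = n\lambda/(n-1)$), yet the centered and scaled statistic still looks normal with the Cramér-Rao variance.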
Whether this approximation is good or bad is essentially the same question as when we can use the CLT (stressed: a LIMIT theorem) to say that $$ \bar{X}_n \sim_{approx} N(\mu,n^{-1}\sigma^2), $$ regardless of the original distribution. And the answer is: for symmetric, "well-behaved" distributions (small variance, unimodal, etc.), moderate $n$ will be OK; for not-so-nice distributions, it won't. The general answer? Without any further information beyond the validity of the regularity conditions, you can only be sure as $n\to \infty$.
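To see the contrast concretely, here is a small sketch (my own example, with arbitrarily chosen distributions) comparing sample means of a symmetric distribution against a heavily skewed one at the same $n$, using skewness of the simulated means as a rough proxy for distance from normality:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 30, 20000

def skew(x):
    # crude moment-based sample skewness; 0 for a normal distribution
    x = x - x.mean()
    return (x**3).mean() / (x**2).mean() ** 1.5

# Sample means from a symmetric (Uniform) vs a skewed (Lognormal) parent.
unif_means = rng.uniform(size=(reps, n)).mean(axis=1)
logn_means = rng.lognormal(mean=0.0, sigma=1.0, size=(reps, n)).mean(axis=1)

print(skew(unif_means), skew(logn_means))
# the Uniform means are already nearly normal at n = 30,
# while the Lognormal means remain visibly skewed at the same n
```

This is exactly the point above: the normal approximation kicks in quickly for nice parents and slowly for skewed or heavy-tailed ones, and without such information only $n \to \infty$ is a guarantee.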