Suppose I have an estimator $\hat{\theta}_n$ of $\theta$ where $\mathbb{E}[\hat{\theta}_n] \neq \theta$, but I do know that $\displaystyle\lim_{n \to \infty} \mathbb{E}[\hat{\theta}_n] = \theta$.
Also suppose that $\hat{\theta}_n \sim \mathcal{N}(\mu_n, \sigma_n^2)$ and from the above we then have that $\displaystyle\lim_{n\to\infty} \mu_n = \theta$.
Is there any way to construct a confidence interval, i.e. a statement of the form "with probability $\geq 1 - \delta$, $\theta \in [\text{lower bound}, \text{upper bound}]$"?
I was trying to find an upper bound $$ \mathbb{P}(|\hat{\theta}_n - \theta| > \varepsilon) < \text{???} $$ which we could then invert to get something of the form "with probability $\geq 1 - \delta$, $\theta \in [L(\hat{\theta}_n, n), U(\hat{\theta}_n, n)]$", where $L$ and $U$ are the lower and upper bounds.
Since $\hat{\theta}_n$ isn't unbiased for finite $n$, we can't directly apply standard Gaussian tail bounds centred at $\theta$, so I'm not sure what the best approach is here.
At the moment your stated distribution for $\hat{\theta}_n$ does not specify the relationship between the moments $\mu_n$ and $\sigma_n^2$ and the true parameter $\theta$. This means that there is presently insufficient information to form a pivotal quantity that would be the basis for a confidence interval. If we make some assumptions about the relationship of these moments to the underlying parameter then we can form a pivotal quantity and derive an appropriate confidence interval formula.
For simplicity, I will assume that the variance $\sigma_n^2$ is unrelated to the underlying parameter, and that the mean is related to the underlying parameter through a monotonic function. That is, you have $\mu_n = f_n(\theta)$ via some monotonically increasing function $f_n$ (we will denote the inverse function by $g_n = f_n^{-1}$). Then you can write the pivotal quantity:
$$\frac{\hat{\theta}_n - f_n(\theta)}{\sigma_n} \sim \mathcal{N}(0,1).$$
Taking $z_{\alpha/2}$ as the quantile of the standard normal distribution with upper tail area $\alpha/2$, you have:
$$\begin{equation} \begin{aligned} 1-\alpha &= \mathbb{P} \Big( -z_{\alpha/2} \leqslant \frac{\hat{\theta}_n - f_n(\theta)}{\sigma_n} \leqslant z_{\alpha/2} \Big) \\[6pt] &= \mathbb{P} \Big( -\sigma_n \cdot z_{\alpha/2} \leqslant \hat{\theta}_n - f_n(\theta) \leqslant \sigma_n \cdot z_{\alpha/2} \Big) \\[6pt] &= \mathbb{P} \Big( \hat{\theta}_n - \sigma_n \cdot z_{\alpha/2} \leqslant f_n(\theta) \leqslant \hat{\theta}_n + \sigma_n \cdot z_{\alpha/2} \Big) \\[6pt] &= \mathbb{P} \Big( g_n(\hat{\theta}_n - \sigma_n \cdot z_{\alpha/2}) \leqslant \theta \leqslant g_n(\hat{\theta}_n + \sigma_n \cdot z_{\alpha/2}) \Big). \\[6pt] \end{aligned} \end{equation}$$
In the above equations we have taken $\hat{\theta}_n$ to be a random variable (i.e., prior to fixing the data). Once the data are observed, this statistic becomes a fixed value $\hat{\theta}_{n,\text{obs}}$ and we have the $1-\alpha$ level confidence interval:
$$\text{CI}_\theta(1-\alpha) = \Big[ g_n(\hat{\theta}_{n,\text{obs}} - \sigma_n \cdot z_{\alpha/2}), \text{ } g_n(\hat{\theta}_{n,\text{obs}} + \sigma_n \cdot z_{\alpha/2}) \Big].$$
(Note that this formula is for a monotonically increasing function $f_n$. If the function were instead monotonically decreasing then the bounds of the confidence interval would be reversed.)
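To make the formula concrete, here is a minimal numerical sketch. It assumes a *hypothetical* bias function $f_n(\theta) = \theta\,(1 - 1/n)$ (a shrinkage-type bias, chosen only for illustration — your actual $f_n$ would come from your estimator), whose inverse is $g_n(x) = x \cdot n/(n-1)$, and takes $\sigma_n$ as known. The simulation at the end checks the empirical coverage of the resulting interval:

```python
import numpy as np
from scipy import stats

# Hypothetical bias model: f_n(theta) = theta * (1 - 1/n), so the estimator
# shrinks toward zero for finite n but is asymptotically unbiased.
def g_n(x, n):
    """Inverse of the assumed f_n, i.e. g_n(x) = x * n / (n - 1)."""
    return x * n / (n - 1)

def confidence_interval(theta_hat_obs, sigma_n, n, alpha=0.05):
    """1 - alpha confidence interval for theta, with sigma_n taken as known."""
    z = stats.norm.ppf(1 - alpha / 2)  # upper-tail critical value z_{alpha/2}
    return (g_n(theta_hat_obs - sigma_n * z, n),
            g_n(theta_hat_obs + sigma_n * z, n))

# Coverage check by simulation: draw theta_hat ~ N(f_n(theta), sigma_n^2)
# and count how often the interval contains the true theta.
rng = np.random.default_rng(0)
theta, n, sigma_n = 2.0, 50, 0.3
draws = rng.normal(theta * (1 - 1 / n), sigma_n, size=100_000)
lo, hi = confidence_interval(draws, sigma_n, n)
coverage = np.mean((lo <= theta) & (theta <= hi))
print(f"empirical coverage: {coverage:.3f}")  # close to the nominal 0.95
```

Because $g_n$ is the exact inverse of $f_n$ and the normality is exact, the interval has exact $1-\alpha$ coverage in this toy model; with a misspecified $f_n$ the coverage would degrade accordingly.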
For simplicity, this particular confidence interval takes $\sigma_n$ as known. If it is unknown then you would need to specify its relationship to the sample variance, and you would probably then derive a similar interval using the critical values of the Student's $t$-distribution instead of the normal distribution.
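As a rough sketch of that variant, suppose (hypothetically) that $\hat{\sigma}_n^2$ is an estimate of $\sigma_n^2$ distributed as $\sigma_n^2 \chi^2_\nu / \nu$ independently of $\hat{\theta}_n$, and keep the illustrative bias model $f_n(\theta) = \theta\,(1 - 1/n)$. Then the only change is swapping the normal critical value for a $t$ critical value with $\nu$ degrees of freedom:

```python
from scipy import stats

def confidence_interval_t(theta_hat_obs, sigma_hat_n, df, n, alpha=0.05):
    """1 - alpha interval with sigma_n replaced by an estimate sigma_hat_n.

    Hypothetical assumptions: sigma_hat_n^2 ~ sigma_n^2 * chi2(df) / df,
    independent of theta_hat_n, and mu_n = f_n(theta) = theta * (1 - 1/n),
    whose inverse is g_n(x) = x * n / (n - 1).
    """
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # replaces z_{alpha/2}
    g = lambda x: x * n / (n - 1)
    return (g(theta_hat_obs - sigma_hat_n * t_crit),
            g(theta_hat_obs + sigma_hat_n * t_crit))

# Example call with illustrative numbers; the t interval is wider than the
# corresponding z interval for any finite df.
lo, hi = confidence_interval_t(1.96, sigma_hat_n=0.3, df=49, n=50)
```

The $t$ critical value exceeds the normal one for every finite $\nu$, so this interval is strictly wider, reflecting the extra uncertainty from estimating $\sigma_n$.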