Say I have my original objective function $\|Y - X \beta\|_2^2$, and for some reason (a separate motivation), I want to add a penalty term and obtain a new objective function.
The target parameter $\beta_0$ is the unique minimizer of $E_{\beta_0}\left[(y - x^T\beta)^2\right]$ over $\beta$. Under standard regularity assumptions on the errors, the OLS estimator $\hat\beta_{\mathrm{OLS}}$ is unbiased for $\beta_0$.
After adding a penalty, I obtain a new estimator $\tilde{\beta}$, the minimizer of the objective function plus the penalty. This $\tilde{\beta}$ is clearly biased. Ideally, I can compute the bias of this estimator, denote it by $B$, and then subtract it off to create a new estimator $\hat \beta = \tilde{\beta} - B$, which is unbiased. This $\hat \beta$ will be good for constructing confidence intervals.
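As a concrete instance (a minimal numerical sketch, not necessarily your setting): with a ridge penalty $\lambda\|\beta\|_2^2$ and a fixed design, the bias $B$ has a closed form, $B = \left[(X^TX + \lambda I)^{-1}X^TX - I\right]\beta_0$, and a simulation confirms that $\tilde\beta - B$ is unbiased. All dimensions, the penalty level, and the true coefficients below are hypothetical. Note the catch this example exposes: the exact $B$ depends on the unknown $\beta_0$, so in practice it must itself be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 5, 10.0                   # hypothetical sample size, dimension, ridge penalty
X = rng.standard_normal((n, p))            # fixed design
beta0 = np.arange(1, p + 1, dtype=float)   # hypothetical true coefficients

# Ridge estimator: beta_tilde = argmin ||y - X b||^2 + lam * ||b||^2
#                             = (X'X + lam I)^{-1} X' y
M = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

# Exact bias for this fixed X: E[beta_tilde] - beta0 = (M X - I) beta0
B = (M @ X - np.eye(p)) @ beta0

# Monte Carlo check: the average of beta_tilde - B over many noise draws
# should be close to beta0 (i.e., the corrected estimator is unbiased)
est = np.mean([M @ (X @ beta0 + rng.standard_normal(n)) for _ in range(5000)],
              axis=0)
print(np.max(np.abs(est - B - beta0)))   # small Monte Carlo error
```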
But here is my question: the reason I study the objective function plus a penalty in the first place is that I am interested in its corresponding minimizer, or in $\tilde \beta$ as an estimator of that minimizer. After bias correction I am left with $\hat \beta$, which is no longer an estimator of the minimizer I actually care about.
Intuitively, I feel that working with the bias-corrected estimator $\hat \beta$ somehow offsets the benefit I gained by adding the penalty, because, after all, what I am really interested in is estimating the minimizer of the objective function plus that penalty term.
Please let me know if anything is unclear; I will do my best to edit the question.