In the context of high-dimensional problems, I have observed a pattern in which some papers start by proving variable selection consistency, i.e., the probability that the estimated set of important variables equals the true set converges to 1 as n goes to infinity.
Then, conditional on the event that the estimated set equals the true set, the authors prove an l2-error convergence rate and a Gaussian approximation theorem.
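To make the first pattern concrete, here is a small simulation of my own (not taken from any particular paper, with an ad hoc penalty level) that checks empirically how the probability of exact support recovery by the lasso behaves as n grows, under a strong beta-min and an independent Gaussian design:

```python
# Empirical check of variable selection consistency for the lasso:
# with a strong beta-min and an independent design, the frequency of
# exact support recovery should approach 1 as n grows.
import numpy as np
from sklearn.linear_model import Lasso

def support_recovery_rate(n, p=50, s=5, n_rep=20, seed=0):
    """Fraction of replications where the lasso recovers the true support."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_rep):
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:s] = 2.0                      # strong signals (beta-min well above noise)
        y = X @ beta + rng.standard_normal(n)
        lam = np.sqrt(2 * np.log(p) / n)    # usual order of the lasso penalty
        fit = Lasso(alpha=lam).fit(X, y)
        if set(np.flatnonzero(fit.coef_)) == set(range(s)):
            hits += 1
    return hits / n_rep

print(support_recovery_rate(40), support_recovery_rate(400))
```

In my runs the recovery frequency at n = 400 is much closer to 1 than at n = 40, which is the behavior the consistency statements formalize.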
However, when reading lasso-related methods, I found that one group of researchers focuses purely on variable selection consistency statements.
Meanwhile, another group of researchers is concerned with the assumptions the first group makes, such as the beta-min and irrepresentable conditions, which are not checkable in practice. Motivated by relaxing these assumptions, they do not try to show variable selection consistency of the lasso estimator; instead, they develop hypothesis tests for the statistical significance of the parameters of interest.
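If I understand correctly, a representative example of this second line of work is the debiased (desparsified) lasso. Here is a rough sketch of the one-step bias correction with a nodewise-lasso estimate of the precision matrix; all tuning parameters below are ad hoc choices of mine, not the theoretically calibrated ones:

```python
# Rough sketch of the debiased lasso: correct the lasso's shrinkage bias
# with an approximate inverse covariance Theta built from nodewise lasso
# regressions, so each coordinate is asymptotically Gaussian without a
# beta-min condition. Penalty levels here are illustrative, not tuned.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, s = 200, 30, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + rng.standard_normal(n)

lam = 0.2                                   # ad hoc penalty level
b_lasso = Lasso(alpha=lam).fit(X, y).coef_

# Nodewise lasso: regress each column on the others to estimate
# row j of Theta, an approximation to (X^T X / n)^{-1}.
lam_nw = 0.1
Theta = np.zeros((p, p))
for j in range(p):
    idx = [k for k in range(p) if k != j]
    gamma = Lasso(alpha=lam_nw).fit(X[:, idx], X[:, j]).coef_
    resid = X[:, j] - X[:, idx] @ gamma
    tau2 = resid @ X[:, j] / n + lam_nw * np.abs(gamma).sum()
    row = np.zeros(p)
    row[j] = 1.0
    row[idx] = -gamma
    Theta[j] = row / tau2

# One-step correction: add back a projection of the residuals.
b_debiased = b_lasso + Theta @ X.T @ (y - X @ b_lasso) / n
```

The point of the construction is that `b_debiased` is no longer sparse, but each coordinate admits a Gaussian limit, so one can test significance of individual parameters without ever claiming the selected support is exactly correct.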
Is my observation correct? In short, is the proof technique for hypothesis testing with lasso-based methods different from that of other methods that also target high-dimensional problems, such as debiased thresholded ridge regression?