Let $X_1$ and $X_2$ be $n \times k$ and $n \times p$ matrices, respectively. Then $X=(X_1,X_2)$ is an $n\times (k+p)$ matrix. Define $H= X^{\top}X$. Then
$$H = \begin{pmatrix} X_1^{\top}X_1 & X_1^{\top}X_2\\ X_2^{\top}X_1 & X_2^{\top}X_2 \end{pmatrix}.$$
In the linear regression model $Y = X \beta + \epsilon$, the least squares estimator (LSE) is consistent if the smallest eigenvalue of $H$ diverges as $n\to\infty$.
By the eigenvalue interlacing theorem, divergence of the smallest eigenvalue of $H$ implies that the smallest eigenvalues of $X_1^{\top}X_1$ and $X_2^{\top}X_2$ also diverge. I am interested in a converse, i.e., the smallest eigenvalues of $X_1^{\top}X_1$ and $X_2^{\top}X_2$ diverge $+\,\boxed{something}\Rightarrow$ the smallest eigenvalue of $H$ diverges. The easiest (and impractical) choice of the $\boxed{something}$ is the orthogonality condition $X_1^{\top}X_2=0$. I am interested in a choice of the $\boxed{something}$ that can be justified in real-life situations. I would appreciate any comments or suggested references in this direction.
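To make concrete why some condition on $X_1^{\top}X_2$ is needed, here is a small numerical sketch (my own illustrative example, with $k=p=1$): take $x_1=(1,\dots,1)^{\top}$ and $x_2=x_1+e_1$, so the two columns are nearly collinear. Both block eigenvalues $x_1^{\top}x_1=n$ and $x_2^{\top}x_2=n+3$ diverge, yet the smallest eigenvalue of $H$ stays bounded.

```python
import numpy as np

# Illustrative counterexample (k = p = 1): block eigenvalues diverge,
# but the smallest eigenvalue of H = X'X stays bounded because the
# canonical correlation between the column spaces tends to 1.
for n in [10, 100, 1000, 10000]:
    x1 = np.ones((n, 1))                 # x1 = (1, ..., 1)'
    e1 = np.zeros((n, 1)); e1[0] = 1.0   # first standard basis vector
    x2 = x1 + e1                         # nearly collinear with x1
    X = np.hstack([x1, x2])
    H = X.T @ X
    lam_min = np.linalg.eigvalsh(H)[0]   # smallest eigenvalue of H
    print(n,
          (x1.T @ x1).item(),            # = n, diverges
          (x2.T @ x2).item(),            # = n + 3, diverges
          lam_min)                       # stays bounded (tends to 1/2)
```

Here $\det H = n-1$ while $\operatorname{tr} H = 2n+3$, so $\lambda_{\min}(H)\approx (n-1)/(2n+3)\to 1/2$. This suggests the $\boxed{something}$ should control how close the maximal canonical correlation between the column spaces of $X_1$ and $X_2$ can get to $1$.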