In this paper by Bickel and Levina, I am confused about result (A15), which claims that, since $$ (A14) \qquad \| \text{Var}(\mathbf{X}) - \widehat{\text{Var}}(\mathbf{X})\|_{\max} = O_P(n^{-1/2} \log^{1/2}p), $$ it follows that $$ (A15) \qquad \| \text{Var}^{-1}(\mathbf{Z}_j^{(k)}) - \widehat{\text{Var}}^{-1}(\mathbf{Z}_{j}^{(k)})\|_{\max} = O_P(n^{-1/2} \log^{1/2}p), $$
where $\mathbb{R}^p \ni \mathbf{X} = (X_1,\dots,X_p) \sim N(0,\Sigma_p)$ and
$$ \mathbf{Z}_j^{(k)} =(X_{\max(1, j-k)}, \dots, X_j) $$ is the collection of $X_j$ and its $k$ previous neighbours in $\mathbf{X}$. The estimated quantities are based on an i.i.d. sample $\mathbf{X}_1,\dots, \mathbf{X}_n \sim N(0,\Sigma_p)$, so
$$ \widehat{\text{Var}}(\mathbf{X}) := \frac{1}{n} \sum_{i=1}^n (\mathbf{X}_i - \overline{\mathbf{X}})(\mathbf{X}_i - \overline{\mathbf{X}})^T $$ and
$$ \widehat{\text{Var}}^{-1}(\mathbf{Z}_j^{(k)}) := \left ( \frac{1}{n} \sum_{i=1}^n (\mathbf{Z}_{i,j}^{(k)} - {\overline{\mathbf{Z}}_{j}^{(k)} })(\mathbf{Z}_{i,j}^{(k)} - {\overline{\mathbf{Z}}_{j}^{(k)} })^T\right)^{-1}, $$ where $\mathbf{Z}_{i,j}^{(k)} = (X_{i, \max(1, j-k)}, \dots, X_{i,j})$, i.e. the collection of $X_{i,j}$ and its $k$ previous neighbours in the $i$-th observation. The authors take $k \asymp (n^{-1} \log p)^{-1/(2(\alpha+1))}$. For the purpose of this question I think the exact meaning of $\alpha$ is unimportant; we can treat it as some positive constant. I do not see how the entrywise bound on the sample covariance matrix in (A14) yields the same bound for the inverse covariance in (A15). It appears to be a trivial step in their proof, so I feel I must be missing something obvious here.