Suppose that $\mathbf{X}\in\mathbb{R}^{p\times p}$ is a symmetric, positive definite random matrix with $\mathbb{E}\mathbf{X} \succeq \lambda I_{p\times p}$ for some $\lambda > 0$. If $\mathbb{E}\|\mathbf{X}\|_2^v < \infty$, is it possible to bound $\mathbb{E}\|\mathbf{X}^{-1}\|_2^v$? Here $\|\mathbf{X}\|_2$ denotes the spectral norm of $\mathbf{X}$, which equals its largest eigenvalue since $\mathbf{X}$ is symmetric positive definite.
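(A one-dimensional sanity check I tried, which suggests extra assumptions may be needed: take $p=1$ and $\mathbf{X} = U$ with $U \sim \mathrm{Uniform}(0,1)$. Then $\mathbb{E}\mathbf{X} = \tfrac{1}{2} \geq \lambda$ for $\lambda = \tfrac{1}{2}$ and $\mathbb{E}\|\mathbf{X}\|_2^v \leq 1 < \infty$ for every $v > 0$, yet for any $v \geq 1$ $$\mathbb{E}\|\mathbf{X}^{-1}\|_2^v = \mathbb{E}\, U^{-v} = \int_0^1 u^{-v}\,du = \infty.$$ So it seems one needs some control on the lower tail of $\lambda_{\min}(\mathbf{X})$, not just on $\mathbb{E}\mathbf{X}$ and $\mathbb{E}\|\mathbf{X}\|_2^v$; I would be happy with a bound under such an additional condition.)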
Actually, the problem comes from M-estimation in statistics. Suppose that $\{Z_i\}_{i=1}^n$ are i.i.d. random vectors in $\mathbb{R}^d$, and the parameter of interest $\theta^*\in\mathbb{R}^p$ is defined as $$\theta^* = \underset{\theta \in \Theta}{\operatorname{arg\,min}}\ \mathbb{E}L(Z_1,\theta),$$ where $L(\cdot,\cdot)$ is the objective function. In practice we minimize the sample version of the objective to obtain $\hat{\theta}$: $$\hat{\theta} = \underset{\theta \in \Theta}{\operatorname{arg\,min}}\ L_n(\theta),$$ where $L_n(\theta) = \frac{1}{n}\sum_{i=1}^n L(Z_i,\theta)$. Under some regularity conditions, $$\sqrt{n}(\hat{\theta}- \theta^*)\overset{d}{\rightarrow}\mathcal{N}(\mathbf{0},\Sigma),$$ where $$\Sigma = \{\mathbb{E}\nabla_{\theta}^2L(Z_1,\theta^*)\}^{-1}\,\mathbb{E}\{\nabla_{\theta}L(Z_1,\theta^*)\nabla_{\theta}L(Z_1,\theta^*)^T\}\,\{\mathbb{E}\nabla_{\theta}^2L(Z_1,\theta^*)\}^{-1}.$$
To conduct statistical inference, we need a consistent estimator of this asymptotic variance, and usually we use the sandwich-type estimator, with the unknown $\theta^*$ replaced by $\hat{\theta}$: $$\hat{\Sigma} = \Big\{\frac{1}{n}\sum_{i=1}^n\nabla_{\theta}^2L(Z_i,\hat{\theta})\Big\}^{-1}\frac{1}{n}\sum_{i=1}^n\nabla_{\theta}L(Z_i,\hat{\theta})\nabla_{\theta}L(Z_i,\hat{\theta})^T\Big\{\frac{1}{n}\sum_{i=1}^n\nabla_{\theta}^2L(Z_i,\hat{\theta})\Big\}^{-1}.$$
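To make the setup concrete, here is a minimal numerical sketch of the sandwich estimator, assuming squared-error loss $L(Z_i,\theta) = \tfrac{1}{2}(y_i - x_i^T\theta)^2$ with $Z_i = (x_i, y_i)$ (so the M-estimator is ordinary least squares); the heteroskedastic noise model and all variable names are illustrative choices, not part of the question:

```python
import numpy as np

# Sketch: sandwich variance estimator for the squared-error loss
#   L(Z_i, theta) = (y_i - x_i^T theta)^2 / 2,  Z_i = (x_i, y_i).
# The data-generating model below is an illustrative assumption.
rng = np.random.default_rng(0)
n, p = 5000, 3
X = rng.normal(size=(n, p))
theta_star = np.array([1.0, -2.0, 0.5])
# Heteroskedastic noise, so the sandwich differs from the classical variance.
eps = rng.normal(size=n) * (1.0 + 0.5 * np.abs(X[:, 0]))
y = X @ theta_star + eps

# M-estimator: minimize L_n(theta); for squared loss this is OLS.
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Per-sample gradient and Hessian at theta_hat:
#   grad L(Z_i, theta) = -(y_i - x_i^T theta) x_i,   Hess L(Z_i, theta) = x_i x_i^T.
resid = y - X @ theta_hat
grads = -resid[:, None] * X          # n x p matrix of per-sample gradients
H = X.T @ X / n                      # (1/n) sum of Hessians   ("bread")
M = grads.T @ grads / n              # (1/n) sum grad grad^T   ("meat")
H_inv = np.linalg.inv(H)
Sigma_hat = H_inv @ M @ H_inv        # sandwich estimate of the asymptotic variance

# Since sqrt(n)(theta_hat - theta_star) -> N(0, Sigma), standard errors are:
se = np.sqrt(np.diag(Sigma_hat) / n)
print(theta_hat, se)
```

The inverse of the averaged Hessian $\{\frac{1}{n}\sum_i \nabla_\theta^2 L\}^{-1}$ (here `H_inv`) is exactly the random inverse matrix whose moments I am asking about above.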
I'm wondering whether it is possible to bound the risk $\mathbb{E}\|\hat{\Sigma} - \Sigma\|_2^v$; this is where a bound on $\mathbb{E}\|\mathbf{X}^{-1}\|_2^v$ for the inverse averaged Hessian would come in. Thank you very much!