How does the covariance matrix converge to the covariance function, especially in terms of eigenvalues and eigenvectors?


In spatial statistics, one wants to estimate a covariance function $C(x,y)$, $x,y\in [0,1]$. Because this covariance function is continuous and positive definite, Mercer's theorem gives the decomposition $$C(x,y)=\sum_{i=1}^\infty\lambda_i \nu_i(x)\nu_i(y),$$ where $\lambda_i$ and $\nu_i$ are the eigenvalues and eigenfunctions, respectively.

However, in practice one can usually only estimate a covariance matrix $C_n=\big(C(\tfrac{j}{n},\tfrac{k}{n})\big)_{j,k=1,\dots,n}$, where $n$ is the number of grid points, and then compute the eigenvalues and eigenvectors of $C_n$, say $\lambda_{n,i}$ and $\nu_{n,i}$. I was wondering in what norm and at what rate $\lambda_{n,i}$ and $\nu_{n,i}$ converge to $\lambda_i$ and $\nu_i$. Feel free to add standard conditions, e.g. on smoothness or on the decay rate of the $\lambda_i$. I would appreciate a concise proof or a reference. Thanks a lot!
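To make the question concrete, here is a minimal numerical sketch (my own illustration, not part of the question): I take the Brownian-motion kernel $C(x,y)=\min(x,y)$, whose Mercer eigenpairs are known in closed form ($\lambda_i = ((i-\tfrac12)\pi)^{-2}$, $\nu_i(x)=\sqrt{2}\sin((i-\tfrac12)\pi x)$), discretize it on the grid $j/n$, and compare the eigenvalues of the scaled matrix $\tfrac1n C_n$ with the true $\lambda_i$. The $1/n$ scaling is what turns the matrix eigenproblem into a quadrature approximation of the integral operator with kernel $C$.

```python
import numpy as np

n = 200
grid = np.arange(1, n + 1) / n          # grid points j/n, j = 1, ..., n
C_n = np.minimum.outer(grid, grid)      # C_n[j, k] = C(j/n, k/n) = min(j/n, k/n)

# Eigenvalues of (1/n) * C_n approximate the operator eigenvalues lambda_i;
# eigvalsh returns them in ascending order, so reverse to descending.
eigvals = np.linalg.eigvalsh(C_n / n)[::-1]

# Closed-form Mercer eigenvalues of min(x, y) on [0, 1].
true_eigvals = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)

print("discretized:", eigvals[:5])
print("exact:      ", true_eigvals)
```

Empirically the leading eigenvalues agree to a few digits already at moderate $n$, and the question is precisely what the general rate and norm of this convergence are (for eigenvectors, presumably after the analogous $\sqrt{n}$ rescaling to match the $L^2[0,1]$ normalization of the $\nu_i$).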