I have a large matrix $L$ of size 400,000 $\times$ 400,000. I'm using $L$ in the following way:
Lin = L$^{-1}$;
C = D - B * Lin * B';
$B$ and $D$ are of appropriate sizes. The matrix $L$ is sparse and doesn't have any particular structure to it.
From $C$, which is a dense matrix, I'm computing the bottom $m \approx 10$ eigenvectors to do further computations.
I'm working on a toy problem. On real-world datasets, the size of $L$ will be much larger than this.
Can someone direct me to some literature or solvers that can handle this problem? Thank you.
As far as I understand the problem, you want to compute a few eigenvectors associated with the smallest eigenvalues of $C:=D-BL^{-1}B^T$ (presumably $C$ is symmetric and positive definite?). Since the matrices involved are sparse, you can use methods based on the Arnoldi method as implemented, e.g., in ARPACK. A good feature of these methods is that you do not need to compute $C$ explicitly; instead, you only need to be able to compute matrix-vector products with it.
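For illustration, here is a minimal sketch in Python/SciPy (whose `eigsh` wraps ARPACK), using small hypothetical stand-ins for $L$, $B$, $D$; the point is that ARPACK only ever sees a matrix-free `LinearOperator`:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical toy stand-ins for L, B, D (the real L is 400,000 x 400,000).
rng = np.random.default_rng(0)
n, m = 200, 50
L = sp.random(n, n, density=0.05, random_state=0)
L = (L + L.T + n * sp.identity(n)).tocsc()   # symmetric, diagonally dominant toy L
B = rng.standard_normal((m, n))
D = np.diag(rng.uniform(1.0, 2.0, size=m)) + 10.0 * np.eye(m)

lu = spla.splu(L)                            # sparse LU of L, computed once

def C_matvec(u):
    """Apply C = D - B L^{-1} B^T to a vector without ever forming C."""
    w = lu.solve(B.T @ u)                    # w = L^{-1} (B^T u)
    return D @ u - B @ w

C_op = spla.LinearOperator((m, m), matvec=C_matvec, dtype=float)

# ARPACK via eigsh on the matrix-free operator; 'SA' = smallest algebraic
# eigenvalues, i.e., the "bottom" of the spectrum for this SPD toy C.
vals, vecs = spla.eigsh(C_op, k=4, which='SA')
```

At your scale, the sparse factorization of $L$ is the dominant cost; if even that is infeasible, the inner product $L^{-1}(B^Tu)$ can itself be replaced by an iterative solve.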
An issue here is that since you want to compute eigenvectors from the "bottom" part of the spectrum (does that mean smallest in magnitude if $C$ is not SPD?), you might need to use an analogue of inverse power iteration; that is, you need to be able to compute matvecs with the inverse of $C$, or, equivalently, solve systems of the form $Cx=f$ for given right-hand side vectors $f$.
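A sketch of this shift-invert idea with SciPy's `eigsh(..., sigma=0, OPinv=...)`, where each inner application of $C^{-1}$ is done by CG on the same matrix-free operator (toy data, all names hypothetical; tolerances are illustrative only):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Same hypothetical toy data as before.
rng = np.random.default_rng(0)
n, m = 200, 50
L = sp.random(n, n, density=0.05, random_state=0)
L = (L + L.T + n * sp.identity(n)).tocsc()
B = rng.standard_normal((m, n))
D = np.diag(rng.uniform(1.0, 2.0, size=m)) + 10.0 * np.eye(m)
lu = spla.splu(L)

C_op = spla.LinearOperator((m, m), dtype=float,
                           matvec=lambda u: D @ u - B @ lu.solve(B.T @ u))

def C_inv_matvec(f):
    # Inner solve C x = f by CG on the matrix-free operator (C assumed SPD).
    x, info = spla.cg(C_op, f)
    assert info == 0, "inner CG did not converge"
    return x

OPinv = spla.LinearOperator((m, m), matvec=C_inv_matvec, dtype=float)

# Shift-invert about sigma = 0: ARPACK iterates with C^{-1}, so the
# eigenvalues closest to zero (the bottom of the spectrum) converge fastest.
vals, vecs = spla.eigsh(C_op, k=4, sigma=0.0, which='LM', OPinv=OPinv, tol=1e-6)
```

The accuracy of the computed eigenpairs is limited by the tolerance of the inner CG solves, so the two tolerances should be chosen consistently.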
The matrix $C$ has the form of a Schur complement and, as you already noticed, $L^{-1}$ is dense, as usually happens when inverting sparse matrices, so inverting $L$ or trying to compute $C$ explicitly is not a good option. If $C$ is reasonably well-conditioned, you can solve $Cx=f$ iteratively, e.g., by conjugate gradients (CG) or GMRES. These methods, similarly to the Arnoldi method in ARPACK, require only products with $C$, so you could, e.g., factorize $L$ once, after which each product with $C$ involves only multiplications with $B$, $B^T$, $D$, and one solve with $L$: $$ \begin{align} g=Cu=(D-BL^{-1}B^T)u: && &v:=B^Tu,\\ && &\text{solve}\;Lw=v,\\ && &g:=Du-Bw. \end{align} $$ A possible drawback is that if $C$ is ill-conditioned, the convergence of CG can be slow.
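The three-step product and an iterative solve of $Cx=f$ might look as follows in Python/SciPy (a toy sketch with hypothetical matrices; note the minus sign coming from $C=D-BL^{-1}B^T$):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical toy data standing in for the real (huge) L, B, D.
rng = np.random.default_rng(0)
n, m = 200, 50
L = sp.random(n, n, density=0.05, random_state=0)
L = (L + L.T + n * sp.identity(n)).tocsc()
B = rng.standard_normal((m, n))
D = np.diag(rng.uniform(1.0, 2.0, size=m)) + 10.0 * np.eye(m)

lu = spla.splu(L)              # factorize L once, reuse in every product

def C_matvec(u):
    v = B.T @ u                # v := B^T u
    w = lu.solve(v)            # solve L w = v
    return D @ u - B @ w       # g := D u - B w   (C = D - B L^{-1} B^T)

C_op = spla.LinearOperator((m, m), matvec=C_matvec, dtype=float)

f = rng.standard_normal(m)
x, info = spla.cg(C_op, f)     # CG assumes C is SPD; otherwise use GMRES
```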
If you are not afraid of factorizing (or solving in some other way) sparse indefinite systems, you can also consider the following alternative for obtaining the solution of $Cx=f$. Consider the "augmented" system $$\tag{$*$} Mu:=\pmatrix{L&B^T\\B&D}\pmatrix{y\\x}=\pmatrix{0\\f}. $$ You can verify by block elimination that the $x$-component of the solution vector $u$ solves $Cx=f$: the first block row gives $y=-L^{-1}B^Tx$, and substituting into the second row yields $(D-BL^{-1}B^T)x=f$. This approach has a small disadvantage: depending on the properties of your matrices, $M$ may be indefinite, so some care when factorizing it is needed. Nevertheless, the factors of $M$ are far more likely to be sparse than $C$ itself.
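A sketch of the augmented-system approach with a sparse direct solver (toy hypothetical data; SciPy's `spsolve` stands in for a careful factorization of the possibly indefinite block matrix, and the signs are one consistent choice for $C=D-BL^{-1}B^T$):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical toy data; signs chosen so that the x-block solves
# (D - B L^{-1} B^T) x = f, matching the definition of C above.
rng = np.random.default_rng(0)
n, m = 200, 50
L = sp.random(n, n, density=0.05, random_state=0)
L = (L + L.T + n * sp.identity(n)).tocsc()
B = rng.standard_normal((m, n))
D = np.diag(rng.uniform(1.0, 2.0, size=m)) + 10.0 * np.eye(m)
f = rng.standard_normal(m)

# Augmented block system M [y; x] = [0; f]; a sparse direct solver
# factorizes M as a whole instead of forming the dense Schur complement.
M = sp.bmat([[L, sp.csr_matrix(B.T)],
             [sp.csr_matrix(B), sp.csr_matrix(D)]], format='csc')
rhs = np.concatenate([np.zeros(n), f])
u = spla.spsolve(M, rhs)
x = u[n:]      # first block row gives y = -L^{-1} B^T x, so C x = f
```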
For more information on how to solve systems like ($*$), see, e.g., this paper. You can find there a good overview of methods for solving such problems, including plenty of references (although the paper is already a few years old).