It is folklore that the Lanczos method works well for finding the few smallest or largest eigenvalues, but is not useful for finding those in the middle of the spectrum. This seems to be a general rule for many similar algorithms, such as the Davidson method.
Is there a general reason for this limitation?
In general, Krylov-subspace methods such as the Lanczos method yield an approximation of the "dominant space" of a matrix, i.e. the space spanned by the eigenvectors corresponding to the largest or smallest eigenvalues. The advantage of these methods is that computing a small number of eigenvectors (small compared to the size of the matrix) is much cheaper than computing a full factorization of the matrix. For many applications, the largest or smallest eigenvalues are all that is needed.
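As a minimal sketch of this, here is SciPy's ARPACK-based Lanczos wrapper `eigsh` computing a few extremal eigenvalues of a sparse matrix; the 1-D discrete Laplacian used here is just an illustrative test matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Illustrative sparse symmetric matrix: the 1-D discrete Laplacian.
n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

# Lanczos only needs matrix-vector products, so a few extremal
# eigenvalues come cheap even for large sparse matrices.
vals_large, _ = eigsh(A, k=5, which="LA")  # 5 largest (algebraic)
vals_small, _ = eigsh(A, k=5, which="SA")  # 5 smallest (algebraic)
```

Note that only `k` extremal eigenpairs are computed; the cost of a full dense factorization (e.g. `numpy.linalg.eigh`) is avoided entirely.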
It is not possible to directly approximate the middle of the spectrum with the Lanczos method. If you wanted to do so, you would need a "dominant space" containing approximations of all eigenvectors from the largest (or smallest) down to those in the middle, i.e. more than $n/2$ of them for an $n \times n$ matrix. This is no longer efficient and can lead to numerical instabilities.
If you can estimate a value close to the middle eigenvalue, you can use inverse iteration (or shift-and-invert) to compute the eigenvalues closest to that prescribed value. Otherwise, you would have to compute at least half of the eigenvalues, which can be very expensive.
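The shift-and-invert idea can be sketched as follows: eigenvalues of $(A - \sigma I)^{-1}$ that are largest in magnitude correspond to eigenvalues of $A$ nearest the shift $\sigma$, so Lanczos on the inverted operator targets the interior of the spectrum. SciPy's `eigsh` does this when you pass `sigma`; the matrix and the shift below are illustrative:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Illustrative test matrix: 1-D discrete Laplacian, spectrum in (0, 4).
n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

# Shift-and-invert: Lanczos is applied to (A - sigma*I)^{-1}, whose
# dominant eigenvalues correspond to eigenvalues of A nearest sigma.
sigma = 1.9  # a guess near the middle of the spectrum
vals_mid, _ = eigsh(A, k=5, sigma=sigma, which="LM")
```

The price is a factorization of $A - \sigma I$ (or a linear solve per iteration), which is exactly the cost that plain Lanczos avoids.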
In the latter case, it might be better to directly compute all eigenvalues using a suitable algorithm (see https://en.wikipedia.org/wiki/List_of_numerical_analysis_topics#Eigenvalue_algorithms). These direct eigensolvers typically involve computing a tridiagonal form or Schur decomposition, from which you obtain the eigenvalues (see also https://en.wikipedia.org/wiki/Eigenvalue_algorithm).
To better understand why most of these algorithms compute the largest or smallest eigenvalues, it helps to start with the power method. By computing $A^k x$ for $k \to \infty$ you obtain an approximation of the eigenvector belonging to the eigenvalue largest in magnitude. In Krylov methods you project your matrix onto the Krylov subspace $\{ x, Ax, A^2x, A^3x, \dots\}$. By construction, this subspace tends to contain good approximations of the eigenvectors belonging to the largest eigenvalues.
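The power method itself is a few lines of NumPy; the diagonal test matrix below is chosen only so the dominant eigenvalue (10) is obvious:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric matrix with a clearly dominant eigenvalue.
A = np.diag([1.0, 2.0, 3.0, 10.0])

# Power method: repeatedly apply A and normalize. The component along
# the dominant eigenvector grows fastest, so x converges to it.
x = rng.standard_normal(4)
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)

# The Rayleigh quotient then approximates the dominant eigenvalue.
lam = x @ A @ x
```

Convergence is geometric with ratio $|\lambda_2/\lambda_1|$ (here $3/10$), which is why a well-separated dominant eigenvalue is found quickly.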
To get the smallest eigenvalues you compute $A^{-k}x$ instead, which corresponds to the Krylov subspace $\{ x, A^{-1}x, A^{-2}x, \dots\}$.
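In practice one never forms $A^{-1}$; instead, $A$ is factored once and each application of $A^{-1}$ becomes a pair of triangular solves. A minimal inverse-iteration sketch, reusing the illustrative diagonal matrix from above:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative matrix; smallest eigenvalue is 1.0.
A = np.diag([1.0, 2.0, 3.0, 10.0])

# Factor A once, then apply A^{-1} cheaply at every iteration.
lu, piv = lu_factor(A)
x = np.ones(4)
for _ in range(100):
    x = lu_solve((lu, piv), x)   # x <- A^{-1} x
    x /= np.linalg.norm(x)

lam = x @ A @ x  # Rayleigh quotient: approximates the smallest eigenvalue
```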
You cannot directly build a Krylov subspace for the eigenvalues in the middle of the spectrum.