I have a symmetric matrix $K\in \mathbb{R}^{(2N+1)\times(2N+1)}$, with $i,j=-N,\dots,N$,
$$ K_{ij}=-\frac{N}{2\pi[{(i-j)}^2-1/4]} $$
Because the matrix is strictly diagonally dominant with a positive diagonal, all its eigenvalues are positive. I want to know whether the limit of the smallest eigenvalue
$$\lim\limits_{N\to\infty}\lambda_0$$
exists. Numerical tests suggest that $\lambda_0\approx0.58$ and that it increases very slowly with $N$. How can I prove that the limit exists?
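The numerical tests are easy to reproduce; here is a minimal NumPy sketch (the helper name `lambda_min` is mine):

```python
import numpy as np

def lambda_min(N):
    """Smallest eigenvalue of K^{(N)} by dense diagonalization."""
    idx = np.arange(-N, N + 1)
    diff = idx[:, None] - idx[None, :]       # i - j
    K = -N / (2 * np.pi * (diff**2 - 0.25))
    return np.linalg.eigvalsh(K)[0]          # eigvalsh returns ascending eigenvalues

# lambda_min grows slowly with N and stays near 0.58
for N in (10, 40, 160):
    print(N, lambda_min(N))
```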
Observation 1
The minimum eigenvalue can be computed as the minimum of the Rayleigh quotient: $$ R_N(x) \equiv \frac{(x, K^{(N)} x)}{(x, x)}\\ \lambda_0^{(N)} = \min_x R_N(x) $$
For large $N$ the eigenvector of the smallest eigenvalue (blue in the plot) looks like a bell-shaped function, somewhat similar to a parabola (orange).
Let's try to estimate $\lambda_0$ from above by computing $R_N(x)$ on a quadratic function. Taking $x_i^{(N)} = 1 - \left(\frac{i}{N}\right)^2$ and using Mathematica to compute the sums gives $$ (x^{(N)}, Kx^{(N)}) = \frac{(16N^2 - 1) H\left(2N + \frac{1}{2}\right)+8N (8N^3 + N (\log 16-5)-1)+2-\log 4}{32\pi N^3} \gtrsim \frac{2N}{\pi}, \quad N > 1\\ (x^{(N)}, x^{(N)}) = \frac{16 N^4 - 1}{15 N^3} \lesssim \frac{16 N}{15}. $$ Here $H(n) = \int_0^1 \frac{1-x^n}{1-x} dx$ is the harmonic number function, and $H(n) = \gamma + \log n + O(n^{-1})$.
So, $$ R_N(x^{(N)}) > \frac{15}{8\pi}, \quad N > 1\\ \lim_{N \to \infty} R_N(x^{(N)}) = \frac{15}{8\pi}. $$
If $\lim_{N \to \infty} \lambda_0^{(N)}$ exists, then it cannot exceed $\frac{15}{8\pi} \approx 0.596831$.
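The upper bound is straightforward to verify numerically; a NumPy sketch (the helper name `rayleigh_quadratic` is mine) evaluating $R_N$ on the quadratic trial vector:

```python
import numpy as np

def rayleigh_quadratic(N):
    """Rayleigh quotient of K^{(N)} on the trial vector x_i = 1 - (i/N)^2."""
    idx = np.arange(-N, N + 1)
    diff = idx[:, None] - idx[None, :]
    K = -N / (2 * np.pi * (diff**2 - 0.25))
    x = 1.0 - (idx / N) ** 2
    return (x @ K @ x) / (x @ x)

# stays above 15/(8 pi) and approaches it from above as N grows
for N in (10, 100, 1000):
    print(N, rayleigh_quadratic(N), 15 / (8 * np.pi))
```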
Observation 2
The Gershgorin circle theorem gives a lower bound on the minimal eigenvalue of a diagonally dominant matrix: $$ \lambda_0^{(N)} \geq \min_{i \in [-N, N]} K_{ii}^{(N)} - \sum_{j \neq i} |K_{ij}^{(N)}| $$ The minimum is attained at $i = 0$ and equals $$ K^{(N)}_{0,0} - \sum_{j \neq 0} |K^{(N)}_{0,j}| = \frac{4N}{2\pi} - 2\sum_{j = 1}^{N} \frac{N}{2\pi (j^2 - 1/4)} = \\ = \frac{2N}{\pi} - \frac{N}{\pi}\frac{4N}{2N+1} = \frac{2N}{\pi (2N+1)} \to \frac{1}{\pi} \approx 0.3183. $$
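The closed form for the Gershgorin bound can be cross-checked directly (a NumPy sketch; the helper name `gershgorin_lower` is mine):

```python
import numpy as np

def gershgorin_lower(N):
    """min_i ( K_ii - sum_{j != i} |K_ij| ) for the matrix K^{(N)}."""
    idx = np.arange(-N, N + 1)
    diff = idx[:, None] - idx[None, :]
    K = -N / (2 * np.pi * (diff**2 - 0.25))
    radii = np.abs(K).sum(axis=1) - np.abs(np.diag(K))
    return (np.diag(K) - radii).min()

# matches 2N / (pi (2N+1)), which increases toward 1/pi
for N in (10, 100, 1000):
    print(N, gershgorin_lower(N), 2 * N / (np.pi * (2 * N + 1)))
```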
Update
On evaluating $(x, Kx)$. Summing along diagonals we get $$ (x, Kx) = \sum_{i,j=-N}^N K_{ij} x_i x_j = \frac{2N}{\pi} \sum_{j=-N}^N x_j^2 - \frac{N}{\pi}\sum_{s=1}^{2N} \frac{1}{s^2 - 1/4} \sum_{j=-N+s}^N x_j x_{j-s}. $$ For $x_i = 1 - \frac{i^2}{N^2}$: $$ \sum_{j=-N}^N x_j^2 = \sum_{j=-N}^N \left(1 - 2\frac{j^2}{N^2} + \frac{j^4}{N^4}\right) = \frac{16N^4 - 1}{15 N^3}. $$ Sums of powers of $j$ are easily computed from Faulhaber's formula. $$ \sum_{j=-N+s}^N x_j x_{j-s} = \frac{(2N - s - 1) (2N - s) (2N - s + 1) (4N^2 + 6 N s + s^2 + 1)}{30 N^4} $$ This is essentially a fifth-degree polynomial in $s$, so we need to compute $$ \sum_{s=1}^{2N} \frac{s^k}{s^2 - 1/4}, \quad k = 0, 1, \dots, 5. $$ To evaluate these sums, divide $s^k$ by $s^2 - 1/4$: $$ s^{2q} = P_{2q}(s) (s^2 - 1/4) + R_{2q}\\ s^{2q+1} = s P_{2q}(s) (s^2 - 1/4) + s R_{2q}\\ $$ For even powers the remainder is a constant, while for odd powers the remainder is linear in $s$. $$ \sum_{s=1}^{2N} \frac{s^{2q}}{s^2 - 1/4} = \sum_{s=1}^{2N} P_{2q}(s) + R_{2q} \sum_{s=1}^{2N} \frac{1}{s^2 - 1/4} \\ \sum_{s=1}^{2N} \frac{s^{2q+1}}{s^2 - 1/4} = \sum_{s=1}^{2N} s P_{2q}(s) + R_{2q} \sum_{s=1}^{2N} \frac{s}{s^2 - 1/4}. $$ Next, $$ \sum_{s=1}^{2N} \frac{1}{s^2 - 1/4} = \sum_{s=1}^{2N} \left(\frac{2}{2s-1} - \frac{2}{2s+1}\right) = ({\rm telescoping}) = 2 - \frac{2}{4N+1}. $$ And the most complex sum, $$ \sum_{s=1}^{2N} \frac{s}{s^2 - 1/4} = \sum_{s=1}^{2N} \left(\frac{2s}{2s-1} - \frac{2s}{2s+1}\right) = \sum_{s=1}^{2N} \left(1 + \frac{1}{2s-1} - 1 + \frac{1}{2s+1}\right) = \\ = \sum_{s=1}^{2N} \frac{1}{2s-1} + \sum_{s=1}^{2N} \frac{1}{2s+1} = \frac{1}{2} \sum_{s=1}^{2N} \frac{1}{s-1/2} + \frac{1}{2} \sum_{s=1}^{2N} \frac{1}{s+1/2} = \\ = 1 + \frac{1}{2} \sum_{s=2}^{2N} \frac{1}{s-1/2} + \frac{1}{4N+1} + \frac{1}{2} \sum_{s=1}^{2N-1} \frac{1}{s+1/2} = 1 + \frac{1}{4N+1} +\sum_{s=1}^{2N-1} \frac{1}{s+1/2}. 
$$ The last sum may be expressed via harmonic numbers: $$ \sum_{s=1}^{2N-1} \frac{1}{s+1/2} = \frac{1}{3/2} + \frac{1}{5/2} + \frac{1}{7/2} + \dots + \frac{1}{(4N-1)/2} = \\ = (H_{3/2} - H_{1/2}) + (H_{5/2} - H_{3/2}) + \dots + (H_{(4N-1)/2} - H_{(4N-3)/2}) = ({\rm telescoping}) = \\ = H_{(4N-1)/2} - H_{1/2} = H_{(4N-1)/2} - 2 + \log 4. $$ In principle one could do all of this by hand, but a CAS like Mathematica does it faster and without errors (usually).
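The two telescoping identities can be verified exactly in rational arithmetic; a short Python sketch with `fractions` (the helper name `check` is mine):

```python
from fractions import Fraction

def check(N):
    """Verify both telescoping identities exactly for a given N."""
    half = Fraction(1, 2)
    # sum_{s=1}^{2N} 1/(s^2 - 1/4) and sum_{s=1}^{2N} s/(s^2 - 1/4)
    s0 = sum(1 / (Fraction(s) ** 2 - Fraction(1, 4)) for s in range(1, 2 * N + 1))
    s1 = sum(Fraction(s) / (Fraction(s) ** 2 - Fraction(1, 4)) for s in range(1, 2 * N + 1))
    rhs0 = 2 - Fraction(2, 4 * N + 1)
    rhs1 = 1 + Fraction(1, 4 * N + 1) + sum(1 / (s + half) for s in range(1, 2 * N))
    return s0 == rhs0 and s1 == rhs1

print(all(check(N) for N in range(1, 8)))
```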
Update 2. Observation 3
For this section I would move the $-\frac{1}{2\pi}$ factor to the right hand side.
The matrix problem $$ K u = -2\pi \lambda u, \quad u \in \mathbb R^{2N+1} $$ is an approximation of an eigenvalue problem for an integral operator with a hypersingular kernel: $$ \mathcal K[u](x) \equiv \int_{-1}^{1} \frac{u(y)}{(y-x)^2} dy. $$ Consider a grid defined as $$ x_i = ih, \quad i = -N, \dots, N. $$ These points will be referred to as collocation points. Here $h \equiv \frac{1}{N}$ is the grid step. Consider also the functions $$ \psi_i(x) = \begin{cases} 1, &|x - x_i| \leq \frac{h}{2},\\ 0, &\text{otherwise}. \end{cases} $$ These functions will be referred to as piecewise constant basis functions.
The idea of the discretization is to seek the solution in the form $$ u_h(x) = \sum_{k=-N}^N c_k \psi_k(x), $$ i.e. the solution is sought as a linear combination of piecewise-constant basis functions. The coefficients $c_k$ are found by requiring exact equality $\mathcal K[u_h](x) = -2\pi \lambda u_h(x)$ at the collocation points: $$ \mathcal K[u_h](x_i) = -2\pi\lambda u_h(x_i), \quad i = -N,\dots,N. $$ A minor correction is necessary to have a full match between this method and the original matrix problem: the integration domain of the integral operator needs to be expanded by $h/2$: $$ {\mathcal K}_h[u](x) \equiv \int\limits_{-1-h/2}^{1+h/2} \frac{u(y)}{(y-x)^2} dy. $$ Let's obtain the matrix formulation from $\mathcal K_h[u_h](x_i) = -2\pi\lambda u_h(x_i)$. On the right-hand side we have simply $c_i$, since all other terms in the sum vanish. On the left side we have $$ \sum_{k=-N}^N c_k \int_{-1-h/2}^{1+h/2} \frac{\psi_k(y)}{(y-x)^2}dy = \sum_{k=-N}^N c_k \int_{x_k-h/2}^{x_k+h/2} \frac{1}{(y-x)^2}dy. $$ The integrals are hypersingular and need to be treated accordingly. When $x$ is outside of $[x_k - h/2, x_k + h/2]$ there is no singularity in the integration domain, so $$ \int_{x_k-h/2}^{x_k+h/2} \frac{1}{(y-x)^2}dy = -\left.\frac{1}{y-x}\right|_{x_k-h/2}^{x_k+h/2} = \frac{1}{x_k-x-h/2} - \frac{1}{x_k-x+h/2} = \\ = \frac{h}{(x_k-x+h/2)(x_k-x-h/2)} = \frac{h}{(x_k - x)^2 - h^2/4}. $$ So for $i \neq k$ we have $$ \int_{x_k-h/2}^{x_k+h/2} \frac{1}{(y-x_i)^2}dy = \frac{h}{(x_k - x_i)^2 - h^2/4} = \frac{N}{(k - i)^2 - 1/4}. 
$$ Now let's consider the singular case: $$ \int_{x_k-h/2}^{x_k+h/2} \frac{1}{(y-x_k)^2}dy = \\ = \lim_{\epsilon \to 0} \left[ \int_{x_k-h/2}^{x_k-\epsilon} \frac{1}{(y-x_k)^2}dy + \int_{x_k+\epsilon}^{x_k+h/2} \frac{1}{(y-x_k)^2}dy - \text{ singular terms in } \epsilon \right] = \\ = \lim_{\epsilon \to 0} \left[ \int_{-h/2}^{-\epsilon} \frac{1}{z^2}dz + \int_{\epsilon}^{h/2} \frac{1}{z^2}dz - \text{ singular terms in } \epsilon \right] = \\ = \lim_{\epsilon \to 0} \left[ -\frac{2}{h} + \frac{1}{\epsilon} -\frac{2}{h} + \frac{1}{\epsilon} -\text{ singular terms in } \epsilon \right] = \\ = -\frac{4}{h} = \frac{N}{(k-k)^2 - 1/4}. $$ So the discrete problem has the following form $$ \sum_{k=-N}^N \frac{N}{(i-k)^2 - 1/4} c_k = -2\pi\lambda c_i, \quad i = -N, \dots, N. $$
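The off-diagonal formula can be cross-checked by direct numerical quadrature of $\int_{x_k-h/2}^{x_k+h/2} (y-x_i)^{-2}\,dy$; a sketch with a composite Simpson rule (the helper name `entry` and the choice $N = 8$ are mine):

```python
import numpy as np

N = 8
h = 1.0 / N

def entry(i, k, n=4001):
    """Composite-Simpson integral of 1/(y - x_i)^2 over the k-th cell, i != k."""
    xi, a, b = i * h, k * h - h / 2, k * h + h / 2
    y = np.linspace(a, b, n)          # n odd -> even number of subintervals
    f = 1.0 / (y - xi) ** 2
    w = (b - a) / (n - 1)
    return w / 3 * (f[0] + f[-1] + 4 * f[1:-1:2].sum() + 2 * f[2:-1:2].sum())

# should match N / ((k - i)^2 - 1/4)
for i, k in [(0, 3), (2, -1), (-4, 5)]:
    print(entry(i, k), N / ((k - i) ** 2 - 0.25))
```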
I do not consider myself proficient in numerical solving of the hypersingular integral equations, but it seems that the lowest eigenpair of the discrete problem approximates the corresponding eigenpair of $$ \int_{-1}^{1} \frac{u(y)}{(y-x)^2} dy = -2\pi\lambda u(x)\\ u(-1) = u(1) = 0. $$ Numerical experiment confirms $O(h)$ convergence (that is $\lambda_0^{(N)} = \lambda_0 + c h + O(h^2)$): $$ \begin{array}{c|c|c} N & \lambda_0^{(N)} & \lambda_0^{(N)} - \lambda_0^{(N/2)} = -ch + O(h^2)\\\hline 10 & 0.537368328299425 & -\\ 20 & 0.557608872089796 & 0.02024054\\ 40 & 0.5681282883636342 & 0.01051942\\ 80 & 0.5734807004118339 & 0.00535241\\ 160 & 0.5761778623915926 & 0.00269716\\ 320 & 0.5775311031471828 & 0.00135324 \end{array}. $$
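Assuming the expansion $\lambda_0^{(N)} = \lambda_0 + ch + O(h^2)$ holds, first-order Richardson extrapolation $2\lambda_0^{(2N)} - \lambda_0^{(N)}$ should cancel the $O(h)$ term; a NumPy sketch (the helper name `lam` is mine):

```python
import numpy as np

def lam(N):
    """Smallest eigenvalue of the discrete problem, rescaled as K c = lambda c."""
    idx = np.arange(-N, N + 1)
    diff = idx[:, None] - idx[None, :]
    K = -N / (2 * np.pi * (diff**2 - 0.25))
    return np.linalg.eigvalsh(K)[0]

# if lam(N) = L + c/N + O(1/N^2), then 2*lam(2N) - lam(N) = L + O(1/N^2)
for N in (20, 40, 80):
    print(N, 2 * lam(2 * N) - lam(N))
```

The extrapolated values settle near $0.5789$, consistent with the spectral computation in the next section.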
Looking at the eigenvector, I noticed square-root-like behavior near $x=\pm 1$. By some trial and error I found a function, $\frac{\cos \frac{\pi x}{2}}{\sqrt{1-x^2}}$, that is very close to the eigenvector:
Let's seek $u(x)$ in the form $\sqrt{1-x^2}\, s(x)$, where $s(x)$ is a smooth function. The modified equation has the form $$ \int_{-1}^1 \frac{\sqrt{1-y^2} s(y)}{(y-x)^2} dy = -2\pi\lambda \sqrt{1-x^2} s(x) $$ Following this review, let's express $s(x) = \sum_{k=0}^\infty c_k U_{2k}(x)$, where $U_m(x)$ are the Chebyshev polynomials of the second kind. $$ \int_{-1}^1 \frac{\sqrt{1-y^2} s(y)}{(y-x)^2} dy = \sum_{k=0}^\infty c_k \int_{-1}^1 \frac{\sqrt{1-y^2} U_{2k}(y)}{(y-x)^2} dy. $$ Let's evaluate the integrals $\int_{-1}^1 \frac{\sqrt{1-y^2} U_{2k}(y)}{(y-x)^2} dy$. First, let's reduce each from hypersingular to singular by writing it as a derivative with respect to $x$: $$ \int_{-1}^1 \frac{\sqrt{1-y^2} U_{2k}(y)}{(y-x)^2} dy = \frac{d}{dx} \int_{-1}^1 \frac{\sqrt{1-y^2} U_{2k}(y)}{y-x} dy. $$ This integral is known: $$ \int_{-1}^1 \frac{\sqrt{1-y^2} U_{2k}(y)}{y-x} dy = -\pi T_{2k+1}(x). $$ Taking the derivative with respect to $x$ and noting that $T_n'(x) = n U_{n-1}(x)$, we get the relation for the hypersingular integral: $$ \int_{-1}^1 \frac{\sqrt{1-y^2} U_{2k}(y)}{(y-x)^2} dy = -\pi (2k+1) U_{2k}(x). $$ This greatly simplifies the original problem, which now becomes $$ -\pi \sum_{k=0}^\infty c_k (2k+1) U_{2k}(x) = -2\pi \lambda \sqrt{1-x^2} \sum_{k=0}^\infty c_k U_{2k}(x),\\ \sum_{k=0}^\infty c_k (2k+1) U_{2k}(x) = 2 \lambda \sqrt{1-x^2} \sum_{k=0}^\infty c_k U_{2k}(x). $$ Note that the $U_{2k}(x)$ are orthogonal with weight $\sqrt{1-x^2}$. Let's multiply both sides by $U_{2m}(x)$ and integrate: $$ \sum_{k=0}^\infty c_k (2k+1) \int_{-1}^1 U_{2k}(x) U_{2m}(x) dx = 2 \lambda \sum_{k=0}^\infty c_k \int_{-1}^1 U_{2k}(x) U_{2m}(x) \sqrt{1-x^2} dx = 2\lambda c_m \frac{\pi}{2}. $$ We obtain another eigenproblem for $\lambda$ and $c_k$: $$ \frac{1}{\pi} \sum_{k=0}^\infty c_k (2k+1) \int_{-1}^1 U_{2k}(x) U_{2m}(x) dx = \lambda c_m. 
$$ Multiplying by $\sqrt{\frac{2m+1}{2k+1}}$ makes the problem symmetric in the unknowns $d_m = \sqrt{2m+1}\, c_m$: $$ \frac{1}{\pi} \sum_{k=0}^\infty d_k \sqrt{(2k+1)(2m+1)} \int_{-1}^1 U_{2k}(x) U_{2m}(x) dx = \lambda d_m. $$ Its matrix is $$ A_{k,m} = \frac{\sqrt{(2k+1)(2m+1)}}{\pi} \int_{-1}^1 U_{2k}(x) U_{2m}(x) dx. $$ The product $U_{2k}(x) U_{2m}(x)$ is expressible as a sum: $$ U_{2k}(x) U_{2m}(x) = \sum_{p=|k-m|}^{k+m} U_{2p}(x). $$ Integrating and using $\int_{-1}^1 U_{2p}(x) dx = \frac{2}{2p+1}$, we get $$ \int_{-1}^1 U_{2k}(x) U_{2m}(x) dx = \sum_{p=|k-m|}^{k+m} \frac{2}{2p+1}. $$ Then the matrix takes its final form $$ A_{k,m} = \frac{2\sqrt{(2k+1)(2m+1)}}{\pi} \sum_{p=|m-k|}^{m+k} \frac{1}{2p+1}. $$
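The closed form for $\int_{-1}^1 U_{2k} U_{2m}\, dx$ can be cross-checked by exact polynomial integration; a NumPy sketch (the helper names `U`, `exact_integral`, `closed_form` are mine):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def U(n):
    """Chebyshev polynomial of the second kind via the three-term recurrence."""
    u_prev, u = P([1.0]), P([0.0, 2.0])   # U_0 = 1, U_1 = 2x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, P([0.0, 2.0]) * u - u_prev
    return u

def exact_integral(k, m):
    """Integral of U_{2k} U_{2m} over [-1, 1] by exact antidifferentiation."""
    q = (U(2 * k) * U(2 * m)).integ()
    return q(1.0) - q(-1.0)

def closed_form(k, m):
    return sum(2.0 / (2 * p + 1) for p in range(abs(k - m), k + m + 1))

for k, m in [(0, 0), (1, 2), (3, 1), (4, 4)]:
    print(k, m, exact_integral(k, m), closed_form(k, m))
```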
Truncating the matrix to size $M \times M$ and solving the corresponding finite-dimensional problem, we obtain a sequence of eigenvalues $$ \begin{array}{c|c|c} M & \lambda_0^{(M)} & \lambda_0^{(M)} - \lambda_0^{(M/2)}\\\hline 5 & 0.578898528414741 & -\\ 10 & 0.578887652711596 & -1.088 \cdot 10^{-5}\\ 20 & 0.578886985185902 & -6.675 \cdot 10^{-7}\\ 40 & 0.578886944510590 & -4.068 \cdot 10^{-8}\\ 80 & 0.578886942013462 & -2.497 \cdot 10^{-9}\\ 160 & 0.578886941859036 & -1.544 \cdot 10^{-10}\\ 320 & 0.578886941849439 & -9.598 \cdot 10^{-12} \end{array} $$ The difference $\lambda_0^{(M)} - \lambda_0^{(M/2)}$ decreases by a factor of approximately $16$ each time $M$ is doubled. This means the method has 4-th order convergence (and I have no idea why it is 4-th).
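The truncated eigenproblem takes only a few lines of NumPy (the helper name `A_matrix` is mine):

```python
import numpy as np

def A_matrix(M):
    """Truncated M x M matrix A_{k,m} of the spectral (Chebyshev) formulation."""
    A = np.empty((M, M))
    for k in range(M):
        for m in range(M):
            s = sum(1.0 / (2 * p + 1) for p in range(abs(k - m), k + m + 1))
            A[k, m] = 2.0 * np.sqrt((2 * k + 1) * (2 * m + 1)) / np.pi * s
    return A

# lowest eigenvalue decreases monotonically toward ~0.5788869418...
for M in (5, 20, 80):
    print(M, np.linalg.eigvalsh(A_matrix(M))[0])
```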
A notable property is that the eigenvalue estimates are decreasing. This is a consequence of the Cauchy eigenvalue interlacing theorem: the truncated matrix $A$ of size $M \times M$ is a leading principal submatrix of the truncated matrix of size $(M + 1) \times (M + 1)$, so $\lambda_0^{(M+1)} \leq \lambda_0^{(M)}$.
The limiting eigenfunction normalized by $u_0(0) = 1$ is approximately $$ u_0(x) = \sqrt{1 - x^2} (0.92051141827 - 0.08051912092 (4x^2 - 1) - 0.001599248738 (16 x^4 - 12 x^2 + 1) + \dots). $$