I know (at first empirically, then from reading) that second-order finite-difference matrices like
$$\begin{pmatrix} -2&1&0&0&0\\ 1&-2&1&0&0\\ 0&1&-2&1&0\\ 0&0&1&-2&1\\ 0&0&0&1&-2 \end{pmatrix},$$
and other larger and smaller $n\times n$ matrices with the $(1,-2,1)$ stencil on the diagonal have eigenvalues given by the analytical expression
$$\lambda_k=-4\sin^2\left(\frac\pi{n+1}\frac k2\right).$$
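This is easy to check numerically; here's a minimal sketch with NumPy (not part of the original claim, just one way to verify it):

```python
import numpy as np

# Quick numerical check of the claimed eigenvalue formula; n is arbitrary.
n = 100
D2 = -2*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # (1, -2, 1) stencil

k = np.arange(1, n + 1)
analytic = -4*np.sin(np.pi/(n + 1) * k/2)**2

numeric = np.linalg.eigvalsh(D2)            # returned in ascending order
assert np.allclose(numeric, np.sort(analytic))
```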
I'm now interested in higher-order finite differences. For example, the 4th-order matrix has $\left(-\frac1{12},\frac43,-\frac52,\frac43,-\frac1{12}\right)$ on the diagonal, and the 6th-order one has $\left(\frac1{90},-\frac3{20},\frac32,-\frac{49}{18},\frac32,-\frac3{20},\frac1{90}\right)$.
Here's a plot of the (numerically computed) spectra of 2nd-, 4th-, 6th-, 8th-, 10th- and 20th-order $100\times 100$ matrices (the higher the order, the lower the curve), and the limit spectrum of the continuous operator (black curve):

Is there some analytical expression for spectra of such matrices? Is there some general result for $q$th order finite differences $n\times n$ matrix? How are such formulas found?
First let's show that the eigenvectors of the infinite-dimensional 2nd-order finite-difference matrix for the 2nd derivative are $$v_{k,x}=\sin\left(\frac\pi{n+1}kx\right),$$
where $v_{k,x}$ is $k$th eigenvector's $x$th component, with $k=1,2,\dots$, and $x\in\mathbb Z$. It's easy to calculate that
$$v_{k,x+1}-2v_{k,x}+v_{k,x-1}=-4\sin^2\left(\frac\pi{n+1}\frac k2\right)v_{k,x}.$$
I.e. the 2nd-order finite difference of $v_{k,x}$ is just $v_{k,x}$ multiplied by the eigenvalue given in the question.
For a finite $n\times n$ matrix we force the function to be zero at the borders (implied at $x=0$ and $x=n+1$); the matrix uses those zero values automatically when we write its top row as $(-2,1,0,\dots,0)$ and similarly for the bottom row. This is consistent with our choice of $v_{k,x}$ above.
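The consistency claim can be confirmed directly: the sine vectors are exact eigenvectors of the finite matrix, not just approximate ones. A sketch (NumPy, small $n$ for illustration; any $n$ works):

```python
import numpy as np

# Check that v_{k,x} = sin(pi*k*x/(n+1)) are exact eigenvectors of the
# n x n tridiagonal (1, -2, 1) matrix with Dirichlet boundaries.
n = 7
D2 = -2*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

x = np.arange(1, n + 1)
for k in range(1, n + 1):
    v = np.sin(np.pi/(n + 1) * k * x)           # candidate eigenvector
    lam = -4*np.sin(np.pi/(n + 1) * k/2)**2     # claimed eigenvalue
    assert np.allclose(D2 @ v, lam * v)
```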
Now the same calculation for the infinite-dimensional matrix of 4th order of precision shows that its eigenvalues are
$$\lambda_k^{(4th)}=\frac23\left(\cos\left(\frac{\pi}{n+1}k\right)-7\right)\sin^2\left(\frac\pi{n+1}\frac k2\right).$$
The pitfall here is how to correctly impose the boundary conditions. We want zero Dirichlet boundary conditions, and for the 4th-order finite-difference matrix with $(c,b,a,b,c)=\left(-\frac1{12},\frac43,-\frac52,\frac43,-\frac1{12}\right)$ on the diagonal it would be tempting to just write:
$$D^{(4th)}=\begin{pmatrix} a&b&c&0&0&0\\ b&a&b&c&0&0\\ c&b&a&b&c&0\\ 0&c&b&a&b&c\\ 0&0&c&b&a&b\\ 0&0&0&c&b&a \end{pmatrix}.$$
But such a matrix assumes that the target function is zero beyond the border, which is quite bad: such a function is not differentiable at the border, so the finite difference loses its high order of precision there.
As we (the users) really don't care how the function behaves outside the domain, what we have to do is continue the function antisymmetrically beyond the borders, so that it remains completely differentiable$^\dagger$. Currently the top row reads
$$\underbrace{\color{gray}{c\;\;\;\;b}}_\text{implied}\;\;\underbrace{(a\;\;\;\;b\;\;\;\;c\;\;\;\;0\;\;\;\;0\;\;\;\;0)}_\text{actual row},$$
and now the implied $b$ sits at the border, so it is multiplied by $0$; but $c$ is outside the domain, and since we continue the function antisymmetrically, $c$ must be multiplied by minus the same value as $a$ in the actual row. I.e. the updated row is
$$(a-c\;\;\;\;b\;\;\;\;c\;\;\;\;0\;\;\;\;0\;\;\;\;0),$$
and similarly for the bottom row. For 6th order we have to modify two rows at the top and bottom, for 8th order three, and so on: as many rows as there are points needed from outside the domain.
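The corrected boundary rows can be checked numerically: with the $(a-c)$ corner entries, the spectrum matches the 4th-order formula above exactly. A sketch with NumPy (names are mine, not from the post):

```python
import numpy as np

# 4th-order matrix with the antisymmetric boundary correction:
# the out-of-domain c folds back onto the corner with a minus sign.
n = 100
c, b, a = -1/12, 4/3, -5/2
D4 = (a*np.eye(n)
      + b*(np.eye(n, k=1) + np.eye(n, k=-1))
      + c*(np.eye(n, k=2) + np.eye(n, k=-2)))
D4[0, 0] -= c      # top row becomes (a-c, b, c, 0, ...)
D4[-1, -1] -= c    # bottom row, mirrored

k = np.arange(1, n + 1)
th = np.pi*k/(n + 1)
analytic = 2/3*(np.cos(th) - 7)*np.sin(th/2)**2
assert np.allclose(np.linalg.eigvalsh(D4), np.sort(analytic))
```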
A calculation similar to the 4th-order one above gives for 6th order:
$$\lambda_k^{(6th)}=\frac2{45}\left(23\cos\left(\frac\pi{n+1}k\right)-2\cos\left(\frac{2\pi}{n+1}k\right)-111\right)\sin^2\left(\frac\pi{n+1}\frac k2\right).$$
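For 6th order two rows at each end need the antisymmetric correction, as described above. A numerical sketch (again NumPy, with my own variable names) confirming that the corrected matrix reproduces this formula:

```python
import numpy as np

# 6th-order stencil (d, c, b, a, b, c, d); two boundary rows per side
# need correction, since the stencil reaches two points past the border.
n = 100
d, c, b, a = 1/90, -3/20, 3/2, -49/18
D6 = (a*np.eye(n)
      + b*(np.eye(n, k=1) + np.eye(n, k=-1))
      + c*(np.eye(n, k=2) + np.eye(n, k=-2))
      + d*(np.eye(n, k=3) + np.eye(n, k=-3)))
# Antisymmetric continuation folds out-of-domain points back with a minus:
# row 1: v_{-1} -> -v_1 (coef c), v_{-2} -> -v_2 (coef d);
# row 2: v_{-1} -> -v_1 (coef d); mirrored at the bottom.
for i, j, coef in [(0, 0, c), (0, 1, d), (1, 0, d)]:
    D6[i, j] -= coef
    D6[n - 1 - i, n - 1 - j] -= coef

k = np.arange(1, n + 1)
th = np.pi*k/(n + 1)
analytic = 2/45*(23*np.cos(th) - 2*np.cos(2*th) - 111)*np.sin(th/2)**2
assert np.allclose(np.linalg.eigvalsh(D6), np.sort(analytic))
```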
$^\dagger$Actually this is not quite correct: if we antisymmetrically continue a generic analytic function about some point $x_0$, we make some of its (even-order) derivatives discontinuous at $x_0$, which still affects the precision of the finite-difference scheme. But this happens for a higher-order derivative than if we simply zeroed out the function beyond $x_0$, so the degradation is much smaller.