$1$D mass-spring chain leading to a 'discrete ODE' in space


Background (Not strictly necessary, you can jump to 'Question')

Problem statement

I'm studying a $1$D system of $N$ masses, of equal mass $m$, connected to one another by springs of equal stiffness $k$. The masses start at locations $x_n\equiv x(n)=na$, where $n\in\{0,\dots,N\}$ and $a\in\mathbb{R}$, the initial distance between adjacent masses, is a known constant parameter. No external forces are applied.

The system is then modeled by $N$ constant-coefficient differential equations of the form

$$\qquad\qquad m\,\ddot u(x_n,t)+2k\,u(x_n,t)-k\,[u(x_{n+1},t)+u(x_{n-1},t)]=0\qquad n=1,..,N\qquad (1)$$

where $u(x_n,t)$ is the displacement of mass $n$ at time $t$, relative to its initial position $x_n$.

This set of equations accounts for the inertial force on mass $n$ and the spring force contributions of the left- and right-hand springs, which connect mass $n$ to masses $n-1$ and $n+1$ respectively.

Note: for $a\to0$ the system reduces to the so-called wave equation $$u_{tt}-c^{2}\,u_{xx}=0$$ with $c=\sqrt{E/\rho}$. To see this, take $m\equiv \rho Aa$ and $k\equiv EA/a$, where $\rho$ is the volume density of the continuous rod (whose discretization produces the $1$D system), $A$ is the cross-sectional area and $E$ is Young's modulus. Finally, use the fact that $\lim_{a\to 0}\,[u(x+a,t)-2\,u(x,t)+u(x-a,t)]/a^2=u_{xx}$, by definition of the $2$nd derivative.
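That limit is easy to check numerically; a minimal sketch (the test function $u(x)=\sin x$ and the step sizes are arbitrary choices for illustration):

```python
import numpy as np

# Check numerically that the centred second difference
# [u(x+a) - 2 u(x) + u(x-a)] / a^2 approaches u_xx as a -> 0.
# Test function u(x) = sin(x) is an arbitrary choice; u_xx = -sin(x).
def second_difference(u, x, a):
    return (u(x + a) - 2.0 * u(x) + u(x - a)) / a**2

x0 = 0.7
exact = -np.sin(x0)                      # u_xx for u = sin
for a in (1e-1, 1e-2, 1e-3):
    approx = second_difference(np.sin, x0, a)
    print(a, approx - exact)             # error shrinks like a^2
```

The error decreases quadratically in $a$, as expected from the Taylor expansion.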

Note: the last note justifies intuitively why the following steps are taken; still, I'm posting this question because I want to justify them rigorously, this being, in the end, a purely mathematical problem.

Solution - Step I

Impose a harmonic solution: $$u(x_n,t)=u_n(t)=u_n(\omega)\,e^{-i\omega t}$$

No further justification is given; I explained this to myself in the following way:

If we define $u_n(t)\equiv u(x_n,t)$ and write system $(1)$ in matrix form, we have

\begin{align} \underset{M}{\underbrace{\begin{bmatrix} m&0&0&\cdots&0\\ 0&m&0&\cdots&0\\ 0&0&m&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&m \end{bmatrix}}} \underset{\ddot{\mathbf{u}}(t) }{\underbrace{\begin{Bmatrix} \ddot u_1(t)\\ \ddot u_2(t)\\ \ddot u_3(t)\\ \vdots\\ \ddot u_N(t) \end{Bmatrix}}} + \underset{K}{\underbrace{\begin{bmatrix} 2k&-k&0&\cdots&0\\ -k&2k&-k&\cdots&0\\ 0&-k&2k&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&2k \end{bmatrix}}} \underset{\mathbf{u}(t) }{\underbrace{\begin{Bmatrix} u_1(t)\\ u_2(t)\\ u_3(t)\\ \vdots\\ u_N(t) \end{Bmatrix}}} = \mathbf{0} \end{align} Then, since $\det(M)\neq 0$ and thus matrix $M$ is invertible, we can write \begin{align} \left\{\begin{matrix} \dot{\mathbf{u}}(t)=\dot{\mathbf{u}}(t)\qquad\qquad\\ \ddot{\mathbf{u}}(t)=-M^{-1}K\,\mathbf{u}(t) \end{matrix}\right. \end{align} Next, defining $\mathbf{z}(t)\equiv \begin{Bmatrix}\mathbf{u}(t)\\\dot{\mathbf{u}}(t)\end{Bmatrix}$, we can compact the last system into the form

$$\dot{\mathbf{z}}(t)= \underset{A}{\underbrace{\begin{bmatrix} 0&\mathcal{I}_{N\times N}\\ -M^{-1}K&0 \end{bmatrix}}}\mathbf{z}(t)$$ where $\mathcal{I}_{N\times N}$ is the identity matrix of order $N$ (note the minus sign in the lower-left block, inherited from $\ddot{\mathbf{u}}(t)=-M^{-1}K\,\mathbf{u}(t)$).

Eventually, we have achieved the manageable form of the constant-coefficient linear ODE system $$\dot{\mathbf{z}}(t)=A\,\mathbf{z}(t)$$ which has solution

$$\mathbf{z}(t)=e^{A(t-t_0)}\,\mathbf{z}(t_0)$$
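The state-space construction above can be sketched numerically (a minimal sketch; the chain size and parameter values are arbitrary, and the lower-left block carries the minus sign from $\ddot{\mathbf{u}}=-M^{-1}K\,\mathbf{u}$):

```python
import numpy as np
from scipy.linalg import expm

# Build M, K for an N-mass fixed-fixed chain (arbitrary N, m, k), form
# A = [[0, I], [-M^{-1}K, 0]], and check that z(t) = expm(A t) z(0)
# (i) still satisfies the second-order system M u'' + K u = 0, and
# (ii) satisfies z' = A z (verified by a centred finite difference).
N, m, k = 5, 1.0, 2.0
M = m * np.eye(N)
K = k * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
A = np.block([[np.zeros((N, N)), np.eye(N)],
              [-np.linalg.solve(M, K), np.zeros((N, N))]])

rng = np.random.default_rng(0)
z0 = rng.standard_normal(2 * N)          # arbitrary initial displacements/velocities
t = 1.3
z = expm(A * t) @ z0
u, udd = z[:N], (A @ A @ z)[:N]          # u(t) and u''(t) read off the state form
residual = M @ udd + K @ u               # should vanish
print(np.abs(residual).max())

h = 1e-5
zdot = (expm(A * (t + h)) @ z0 - expm(A * (t - h)) @ z0) / (2 * h)
print(np.abs(zdot - A @ z).max())        # small: z(t) solves z' = A z
```

The first residual vanishes by construction of the block matrix; the second confirms that the matrix exponential actually propagates the first-order system.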

This solution, under the condition that $A$ is diagonalizable* (see this stack question about it), can be reduced to the form

$$\mathbf{z}(t)=\sum_{i=1}^{2N}c_i\,\mathbf{z}_i\,e^{\lambda_i t}$$ where the $\mathbf{z}_i$ are eigenvectors of $A$ and, for this $A$, the eigenvalues come in purely imaginary pairs $\lambda_i=\pm i\,\omega_i$ (since $M^{-1}K$ is symmetric with positive eigenvalues $\omega_i^2$).

From this form, finally, for a specific $\omega_i$, one can extract the type of harmonic solution used in the quote

$$\boxed{u_{n,\,\omega_i}(t)=u_n(\omega_i)\,e^{-i\omega_i t}}$$

*Note: regarding $A$ being diagonalizable, for this case, I couldn't actually manage to carry out a proof; maybe it will be the object of a specific stack question.
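For what it's worth, diagonalizability can at least be made plausible numerically: with $M=m\,\mathcal{I}$ and $K$ symmetric positive definite, $M^{-1}K$ is symmetric with positive eigenvalues $\mu_i$, so $A$ has the $2N$ purely imaginary eigenvalues $\pm i\sqrt{\mu_i}$. A minimal sketch with arbitrary parameters:

```python
import numpy as np

# For M = m I and symmetric positive-definite K, M^{-1}K is symmetric with
# positive eigenvalues mu_i, and A = [[0, I], [-M^{-1}K, 0]] has eigenvalues
# +/- i sqrt(mu_i): a purely imaginary spectrum, pairing up with sqrt(mu_i).
N, m, k = 6, 1.0, 3.0
K = k * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
A = np.block([[np.zeros((N, N)), np.eye(N)],
              [-K / m, np.zeros((N, N))]])

lam = np.linalg.eigvals(A)
mu = np.linalg.eigvalsh(K / m)           # real and positive
print(np.abs(lam.real).max())            # ~ 0: purely imaginary spectrum
print(np.sort(np.abs(lam.imag)))         # pairs up as sqrt(mu), twice each
```

Since the $\mu_i$ of this tridiagonal $K$ are distinct, the $2N$ eigenvalues of $A$ are distinct too, which is sufficient for diagonalizability.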

Solution - Step II

Impose a wave solution: $$u_n(\omega)=u_0[\kappa(\omega)]\,e^{-i\kappa(\omega) x_n}$$

My explanation: imposing what was obtained in 'Solution - Step I', system $(1)$ reduces to

\begin{align} \{-\omega_i^2\,m\,u_{n}(\omega_i)+2k\,u_{n}(\omega_i)-k\,[u_{n+1}(\omega_i)+u_{n-1}(\omega_i)]\}\,e^{-i\omega_i t}&=0\\ & n=1,..,N,\; i=1,..,N \end{align}

Eventually, remembering that $u_n=u(x_n)$, dropping the $i,n$ notation and cancelling $e^{-i\omega_i t}\neq 0$, we end up with

$$-\omega^2\,m\,u(x,\omega)+2k\,u(x,\omega)-k\,[u(x+a,\omega)+u(x-a,\omega)]=0$$

This equation should then be solved by the solution provided in the second quote. Now, this is not a differential equation, although its solution is imposed to be of that kind. Nor is it a usual form I know how to deal with.

From this non-understanding arises my question, which I will formulate better in the next section (there I will omit the $\omega$ dependency, as $\omega$ is fixed, i.e. a separate equation is meant to exist for each $\omega$).


Question

Consider an equation of the form

$$-\omega^2a^2\,u(x)+c^2\,[2\,u(x)-u(x+a)-u(x-a)]=0$$

with $u\in C^2(\mathbb{R})$, $u:\mathbb{R}\to\mathbb{R}$, and $a,c,\omega\in\mathbb{R}$ known parameters.

So, I ask:

  • Which kind of equation is this? Does it fall into some particular category?
  • How to find the complete set of solutions for $u(x)$?

Hint1: Some solutions are of the form $u(x)=u_0(\kappa)\,e^{i\kappa x}$

Hint2: If we divide by $a^2$ and take the limit $a\to 0$, the equation turns into the $2$nd-order constant-coefficient linear ODE $\omega^2u+c^2\,u_{xx}=0$ (but $a$ is actually meant to stay nonzero)

Hint3: The equation contains many parameters, which I kept just to stay consistent with the theory the equation comes from; if they bother you, feel free to rename or group them


My solution attempt (incomplete)

Due to the presence of arguments like $x\pm a$, and the fact that solutions can have exponential form, my idea is to apply the Fourier transform to both sides, obtaining

\begin{align} -\omega^2 a^2\,\hat u(\kappa)+c^2\,[2\,\hat u(\kappa)-e^{i\kappa a}\,\hat u(\kappa)-e^{-i\kappa a}\,\hat u(\kappa)]&=0\\ [-\omega^2 a^2+2 c^2(1-\cos(\kappa a))]\,\hat u(\kappa)&=0 \end{align} Eventually, looking for non-zero solutions and combining the exponentials with Euler's formula, we obtain $$-\omega^2 a^2+2 c^2\,[1-\cos(\kappa a)]=0$$ Solving for $\kappa$ $$\kappa=\pm\frac{1}{a}\,\cos^{-1}\bigg(1-\frac{\omega^2 a^2}{2\,c^2}\bigg)$$ Then, depending on the parameter values (and modulo the $2\pi/a$ periodicity of the cosine), $\kappa$ has either two distinct solutions $\pm\kappa_1$ or a single one.
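The dispersion relation can be checked numerically; a sketch with arbitrary parameter values, writing the bracket as $c^2[2u(x)-u(x+a)-u(x-a)]$, the grouping consistent with $-\omega^2a^2+2c^2[1-\cos(\kappa a)]=0$:

```python
import numpy as np

# Pick parameters with |1 - w^2 a^2 / (2 c^2)| <= 1, recover kappa from the
# dispersion relation, and check that u(x) = exp(i kappa x) satisfies the
# difference equation  -w^2 a^2 u(x) + c^2 [2 u(x) - u(x+a) - u(x-a)] = 0.
a, c, w = 0.5, 1.0, 1.0                  # arbitrary illustrative values
kappa = np.arccos(1.0 - (w * a)**2 / (2.0 * c**2)) / a

u = lambda x: np.exp(1j * kappa * x)
x = np.linspace(-2.0, 2.0, 7)
residual = -w**2 * a**2 * u(x) + c**2 * (2 * u(x) - u(x + a) - u(x - a))
print(np.abs(residual).max())            # ~ 0
```

The residual vanishes to machine precision, since the dispersion relation is exactly the condition for $e^{i\kappa x}$ to be a solution.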

To recover $u(x)$, we can compute the inverse Fourier transform. Since the bracket vanishes only at the isolated points $\kappa_1,\kappa_2$ (with $\kappa_2=-\kappa_1$), $\hat u(\kappa)$ must be a combination of Dirac deltas supported there, e.g. $\hat u(\kappa)=2\pi\,[u_0(\kappa_1)\,\delta(\kappa-\kappa_1)+u_0(\kappa_2)\,\delta(\kappa-\kappa_2)]$ (this is the step I had trouble formalizing as an ordinary integration). Then \begin{align} u(x)&=\frac 1 {2\pi}\,\int_{-\infty}^{+\infty}\hat u(\kappa)\,e^{i\kappa x}\,d\kappa\\ &= \left\{\begin{matrix} u_0(\kappa_1)\,e^{i\kappa_1 x}+u_0(\kappa_2)\,e^{i\kappa_2 x}\quad\text{(two distinct roots)}\qquad \\ [c_1+c_2\,x]\,e^{i\kappa_1 x}\quad\text{(repeated root; just a sound guess)} \end{matrix}\right. \end{align}

This should be the complete set of solutions.


Update: googling 'discrete ODE' I just found out that an equation like the one I'm looking for solutions to might be called a 'difference equation', and so it could be rewritten as $$-\omega^2 a^2\,u_n+c^2\,[2u_n-u_{n+1}-u_{n-1}]=0$$
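Read this way, the relation becomes a three-term recurrence that can be iterated directly; a sketch (using the grouping $c^2[2u_n-u_{n+1}-u_{n-1}]$, which matches the dispersion relation above, and arbitrary parameter values):

```python
import numpy as np

# Iterate the three-term recurrence u_{n+1} = (2 - w^2 a^2 / c^2) u_n - u_{n-1}
# and check that it reproduces u_n = cos(kappa a n), a real solution built
# from the exponentials e^{+/- i kappa a n}.
a, c, w = 0.5, 1.0, 1.0                  # arbitrary illustrative values
kappa = np.arccos(1.0 - (w * a)**2 / (2.0 * c**2)) / a

u = [1.0, np.cos(kappa * a)]             # u_0, u_1 seeded from the ansatz
for n in range(1, 30):
    u.append((2.0 - (w * a / c)**2) * u[n] - u[n - 1])

exact = np.cos(kappa * a * np.arange(31))
print(np.abs(np.array(u) - exact).max())  # ~ 0
```

The agreement follows from the identity $\cos((n+1)\theta)+\cos((n-1)\theta)=2\cos\theta\,\cos(n\theta)$ with $\theta=\kappa a$ and $2\cos\theta = 2-\omega^2a^2/c^2$.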

Answer

While it seems you have tracked down most of the pieces of the puzzle, perhaps it would be of some use to sketch out a logical pathway toward a solution, from the point of view of one step leading to another, starting with an assumption of little background knowledge. To this end consider an $N$-mass system where the position of each mass is given by $x_n(t),\ n=0,1,\dots,N-1$, coupled by $N-1$ interconnecting springs each having the same equilibrium length $L_0$, say. For now we will assume each spring has the same spring constant $k$. Newton's laws imply $$m_0\,x_0''=k\left( x_1(t)-x_0(t)-L_0 \right)$$ $$m_n\,x_n''=-k\left( x_n(t)-x_{n-1}(t)-L_0 \right)+k\left( x_{n+1}(t)-x_n(t)-L_0 \right)$$ $$m_{N-1}\,x_{N-1}''=-k\left( x_{N-1}(t)-x_{N-2}(t)-L_0 \right)$$ Hence in this configuration the length of each spring is simply the distance between adjacent masses, i.e. $L_n(t)=x_n(t)-x_{n-1}(t)$. We will simplify the problem by introducing spring displacements from the equilibrium length. Hence let $x_n(t)-x_{n-1}(t)-L_0=L_n(t)-L_0=\eta_n(t)$ represent the amount of displacement of the $n$th spring. Taking all masses equal, $m_n=m$, and writing $\kappa\equiv k/m$, we may then write the coupled set of ordinary differential equations for the positions in terms of displacements. $$\frac{d^2}{dt^2}\left[ \begin{matrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \vdots \\ \eta_{N-2} \\ \eta_{N-1} \\ \end{matrix} \right]=\kappa \left[ \begin{matrix} -2 & 1 & 0 & 0 & \cdots & 0 \\ 1 & -2 & 1 & 0 & \cdots & 0 \\ 0 & 1 & -2 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & -2 & 1 \\ 0 & 0 & \cdots & 0 & 1 & -2 \\ \end{matrix} \right]\left[ \begin{matrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \vdots \\ \eta_{N-2} \\ \eta_{N-1} \\ \end{matrix} \right]$$ We set initial conditions $\eta_n(0)=f(n),\ \eta_n'(0)=0$. Here I have left the initial displacements of each spring arbitrary (i.e. a function of $n$) but insist that each mass starts from rest. We know we want to describe the motion of individual masses, which is probably oscillatory in nature. We know from having attempted to decouple lower-dimensional systems that we run into trouble very quickly (from memory I think you can easily decouple up to $n=3$). So we need a different way of pulling this thing to pieces.

We can start by first considering what would happen if each mass were allowed to move with the same frequency (normal modes). If such a solution could exist then it would possibly take the form $\eta_n(t)=v_n\cos(\omega t)$, so that $\eta_n''=-\omega^2\eta_n$. Supposing this were true, then $$\left( \frac{-\omega^2}{\kappa}\mathbf{I}-\mathbf{A} \right)\pmb{\eta}=\mathbf{0}$$ which we will write as $$\left( \mathbf{A}-\lambda \mathbf{I} \right)\pmb{\eta}=\mathbf{0}$$ This seems like quite a natural thing to do, as it has left us with a set of simultaneous equations the parameters must satisfy should such motion exist. We now start building up our knowledge. For example, we dig around and determine that we need to let the determinant be zero to obtain non-trivial solutions (you could interpret that a number of ways; for now just take the most obvious interpretation, as a requirement for solutions with $\pmb{\eta}\ne\mathbf{0}$). So consider therefore $$\det \left( \mathbf{A}-\lambda \mathbf{I} \right)=U_{N-1}=\left| \begin{matrix} -2-\lambda & 1 & 0 & 0 & \cdots & 0 \\ 1 & -2-\lambda & 1 & 0 & \cdots & 0 \\ 0 & 1 & -2-\lambda & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & -2-\lambda & 1 \\ 0 & 0 & \cdots & 0 & 1 & -2-\lambda \\ \end{matrix} \right|$$ Let $D=-2-\lambda$ and consider the following sequence $$\begin{align} U_1&=\left| D \right|=D\\ U_2&=\left| \begin{matrix} D & 1 \\ 1 & D \\ \end{matrix} \right|=D^2-1\\ U_3&=\left| \begin{matrix} D & 1 & 0 \\ 1 & D & 1 \\ 0 & 1 & D \\ \end{matrix} \right|=DU_2-1\left| \begin{matrix} 1 & 1 \\ 0 & D \\ \end{matrix} \right|=DU_2-U_1\\ U_4&=\left| \begin{matrix} D & 1 & 0 & 0 \\ 1 & D & 1 & 0 \\ 0 & 1 & D & 1 \\ 0 & 0 & 1 & D \\ \end{matrix} \right|=DU_3-\left| \begin{matrix} 1 & 1 & 0 \\ 0 & D & 1 \\ 0 & 1 & D \\ \end{matrix} \right|=DU_3-U_2\end{align}$$ Hence we conjecture the following recurrence relationship $$U_{n+2}-DU_{n+1}+U_n=0,\ \ U_1=D,\ U_0=1$$ Next define the generating function in the following way $$u\left( x \right)=\sum\limits_{n=0}^{\infty}U_n x^n$$ Multiply the recurrence relation by $x^{2+n}$, and we have $$U_{n+2}x^{2+n}-DU_{n+1}x^{2+n}+U_n x^{2+n}=0$$ And so $$\sum\limits_{n=0}^{\infty}U_{n+2}x^{2+n}-\sum\limits_{n=0}^{\infty}DU_{n+1}x^{2+n}+\sum\limits_{n=0}^{\infty}U_n x^{2+n}=0$$ Or $$\sum\limits_{n=2}^{\infty}U_n x^n-xD\sum\limits_{n=1}^{\infty}U_n x^n+x^2\sum\limits_{n=0}^{\infty}U_n x^n=0$$ Which we can write as $$\left( u\left( x \right)-U_0-U_1 x \right)-xD\left( u\left( x \right)-U_0 \right)+x^2 u\left( x \right)=0$$ Applying the initial conditions of the recurrence relation we have $$u\left( x \right)=\frac{1}{x^2-xD+1}$$ This is the generating function for the $U_n$ (and, incidentally, the generating function for the Chebyshev polynomials of the second kind). We now write $u\left( x \right)$ as a series in the following way

$$\begin{align}u\left( x \right)&=\frac{1}{{{x}^{2}}-xD+1}\\&=\frac{1}{\left( x-{{\beta }_{+}} \right)\left( x-{{\beta }_{-}} \right)} \\&=\frac{1}{\left( {{\beta }_{+}}-{{\beta }_{-}} \right)}\left( \frac{1}{\left( x-{{\beta }_{+}} \right)}-\frac{1}{\left( x-{{\beta }_{-}} \right)} \right)\\&=\frac{1}{\left( {{\beta }_{+}}-{{\beta }_{-}} \right)}\sum\limits_{n=0}^{\infty }{\left\{ {{\left( \frac{1}{{{\beta }_{-}}} \right)}^{n+1}}-{{\left( \frac{1}{{{\beta }_{+}}} \right)}^{n+1}} \right\}{{x}^{n}}}\end{align}$$ Here ${{\beta }_{\pm }}^{2}-{{\beta }_{\pm }}D+1=0\Rightarrow {{\beta }_{\pm }}=\frac{D\pm \sqrt{{{D}^{2}}-4}}{2}$, therefore ${{\beta }_{+}}{{\beta }_{-}}=1$, and so $${{U}_{N-1}}=\frac{1}{\left( {{\beta }_{+}}-{{\beta }_{-}} \right)}\left\{ {{\beta }_{+}}^{N}-{{\beta }_{-}}^{N} \right\}$$ In order for $\det \left( \mathbf{A}-\lambda \mathbf{I} \right)={{U}_{N-1}}=0$, we have then the condition ${{\beta }_{+}}\ne {{\beta }_{-}}$, and ${{\beta }_{+}}^{N}={{\beta }_{-}}^{N}$. These conditions imply that ${{\beta }_{-}}={{\beta }_{+}}{{e}^{i\theta }}$ where $\theta \ne 2\pi n,\,\,n\in \mathbb{Z}$ such that $$\theta =\frac{2\pi n}{N},\,\,\,\,n=1,2,...,N-1$$ Hence $${{\beta }_{-}}={{\beta }_{+}}{{e}^{i\frac{2\pi n}{N}}}$$ Using this condition we see $$\left( D-\sqrt{{{D}^{2}}-4} \right)=\left( D+\sqrt{{{D}^{2}}-4} \right){{e}^{i\frac{2\pi n}{N}}}$$ We now solve for D. 
Squaring both sides and dividing through by $2D$ yields $$\left( D-\sqrt{D^2-4}-\frac{2}{D} \right)=\left( D+\sqrt{D^2-4}-\frac{2}{D} \right)e^{i\frac{4\pi n}{N}}$$ Then substituting for $D-\sqrt{D^2-4}$, $$D\left( D+\sqrt{D^2-4} \right)=\frac{2\left( 1-e^{i\frac{4\pi n}{N}} \right)}{e^{i\frac{2\pi n}{N}}\left( 1-e^{i\frac{2\pi n}{N}} \right)}$$ Squaring once more and performing another substitution we find $$D^2=\frac{\left( e^{i\frac{2\pi n}{N}}-e^{-i\frac{2\pi n}{N}} \right)^2}{e^{i\frac{2\pi n}{N}}+e^{-i\frac{2\pi n}{N}}-2}$$ which may be written as $$D^2=2\left( \cos \left( \frac{2\pi n}{N} \right)+1 \right)$$ or $$D^2=4\cos^2\left( \frac{\pi n}{N} \right)\Rightarrow D_\pm=\pm 2\left| \cos \left( \frac{\pi n}{N} \right) \right|$$ Now let $m$ be such that for $n=1,2,\dots,m$, $\cos \left( \frac{\pi n}{N} \right)>0$, and for $n=m+1,\dots,N-1$, $\cos \left( \frac{\pi n}{N} \right)<0$; then $$D_\pm=\pm\left\{ \begin{matrix} 2\cos \left( \frac{\pi n}{N} \right) & n=1,\dots,m \\ -2\cos \left( \frac{\pi n}{N} \right) & n=m+1,\dots,N-1 \\ \end{matrix} \right.$$ We therefore have two situations for each $D$ and two situations for $\beta$. In each case the $\beta$ must satisfy $\beta_+\beta_-=1$ and $\beta_-=\beta_+e^{i\frac{2\pi n}{N}}$. First we consider the $n=1,2,\dots,m$ case. We have two possibilities $$\beta_\pm=e^{\pm i\frac{\pi n}{N}}\quad\text{or}\quad\beta_-=-e^{i\frac{\pi n}{N}},\ \beta_+=-e^{-i\frac{\pi n}{N}},\qquad n=1,2,\dots,m$$ All these solutions satisfy the first condition, but only the second set satisfies the last condition. Similarly for the $n=m+1,\dots,N-1$ case we have $$\beta_+=-e^{-i\frac{\pi n}{N}},\ \beta_-=-e^{i\frac{\pi n}{N}}\quad\text{or}\quad\beta_\pm=e^{\pm i\frac{\pi n}{N}},\qquad n=m+1,\dots,N-1$$ Only the first of these satisfies the two required conditions.
In either case it is the $D=-2\cos \left( \frac{\pi n}{N} \right)$ solution that satisfies all conditions and therefore we have for all n $${{\beta }_{+}}=-{{e}^{-i\frac{\pi n}{N}}},\ \ {{\beta }_{-}}=-{{e}^{i\frac{\pi n}{N}}},\ \ {{D}_{n}}=-2\cos \left( \frac{\pi n}{N} \right),\ \ \ n=1,2,...,N-1$$ Now $D=-2-\lambda =\frac{{{\omega }^{2}}}{\kappa }-2$ , and hence $${{\omega }_{n}}=\sqrt{\frac{2k}{m}\left( 1-\cos \left( \frac{\pi n}{N} \right) \right)},\,n=1,2,...,N-1$$ These are the resonant frequencies of the system. Now we dig a bit further and note that each of these normal modes represents a solution (vector) to the linear system of equations and so surely a series of such solutions is also a solution to the system. In fact why not just take sum over all possible normal modes weighting each one effectively to build up a solution satisfying our boundary conditions. We note the eigenvalues are $${{\lambda }_{n}}=-\frac{{{\omega }_{n}}^{2}}{\kappa }=2\left( \cos \left( \frac{\pi n}{N} \right)-1 \right)$$ and so let the corresponding eigenvectors be given by ${{\mathbf{v}}_{\mathbf{n}}}={{\left[ {{v}_{1,n}},{{v}_{2,n}},{{v}_{3,n}}...,{{v}_{N-1,n}} \right]}^{T}}$ . Each eigenvector must satisfy the original equation $\left( \mathbf{A}-{{\lambda }_{n}}\mathbf{I} \right){{\mathbf{v}}_{n}}=\mathbf{0}$ and so to find the eigenvectors we have the following set of simultaneous equations to solve $$\left[ \begin{matrix} {{D}_{n}} & 1 & 0 & 0 & ... & 0 \\ 1 & {{D}_{n}} & 1 & 0 & ... & 0 \\ 0 & 1 & {{D}_{n}} & 1 & ... & 0 \\ ... & ... & ... & ... & ... & ... \\ 0 & 0 & ... & 1 & {{D}_{n}} & 1 \\ 0 & 0 & ... & 0 & 1 & {{D}_{n}} \\ \end{matrix} \right]\left[ \begin{matrix} {{v}_{1,n}} \\ {{v}_{2,n}} \\ {{v}_{3,n}} \\ ... \\ {{v}_{N-2,n}} \\ {{v}_{N-1,n}} \\ \end{matrix} \right]=\mathbf{0}$$ where ${{D}_{n}}=-2-{{\lambda }_{n}}$. 
Expanding the matrix we find $$\begin{align} {{D}_{n}}{{v}_{1,n}}+{{v}_{2,n}}&=0 \\ {{v}_{j,n}}+{{D}_{n}}{{v}_{j+1,n}}+{{v}_{j+2,n}}&=0 \\ {{v}_{N-2,n}}+{{D}_{n}}{{v}_{N-1,n}}&=0 \\ \end{align}$$ Let ${{v}_{1,n}}=1$, (we can do this because we will scale/weight the eigenvectors later so we choose this for convenience now) then for example $${{v}_{2,n}}=-{{D}_{n}}=-{{U}_{1,n}},\,\,{{v}_{3,n}}={{D}_{n}}^{2}-1={{U}_{2,n}},\,\,{{v}_{4,n}}=2{{D}_{n}}-{{D}_{n}}^{3}=-{{U}_{3,n}},...$$ Hence $${{v}_{j,n}}={{\left( -1 \right)}^{j-1}}{{U}_{j-1,n}},\ {{U}_{0,n}}=1$$ Which we may write as $${{v}_{j,n}}=\frac{{{\left( -1 \right)}^{j-1}}}{\sqrt{{{D}_{n}}^{2}-4}}\left\{ {{\beta }_{n,+}}^{j}-{{\beta }_{n,-}}^{j} \right\}$$ Performing the substitutions for ${{\beta }_{+}}=-{{e}^{-i\frac{\pi n}{N}}}$, ${{\beta }_{-}}=-{{e}^{i\frac{\pi n}{N}}}$,${{D}_{n}}=-2\cos \left( \frac{\pi n}{N} \right)$, we have then the eigenvalues and the components of the corresponding eigenvectors $${{\omega }_{n}}=\sqrt{\frac{2k}{m}\left( 1-\cos \left( \frac{\pi n}{N} \right) \right)},\ {{v}_{j,n}}=\frac{\sin \left( \frac{\pi n}{N}j \right)}{\sin \left( \frac{\pi n}{N} \right)},\ \ n,j=1,2,...,N-1$$ Each displacement may therefore be represented as a linear combination of the eigenvectors in the following manner $$\pmb{\eta }={{\gamma }_{1}}{{\mathbf{v}}_{\mathbf{1}}}\cos \left( {{\omega }_{1}}t \right)+{{\gamma }_{2}}{{\mathbf{v}}_{2}}\cos \left( {{\omega }_{2}}t \right)+..+{{\gamma }_{N-1}}{{\mathbf{v}}_{N-1}}\cos \left( {{\omega }_{N-1}}t \right)$$ Or $${{\eta }_{n}}\left( t \right)=\sum\limits_{j=1}^{N-1}{{{\gamma }_{j}}{{v}_{j,n}}\cos \left( {{\omega }_{j}}t \right)}$$ where ${{\gamma }_{j}}$ are constants pertaining to superposition amplitudes. 
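The closed-form eigenvalues and eigenvector components can be checked against a numerical eigensolver (a minimal sketch; $N=8$ and the mode $n=3$ are arbitrary choices):

```python
import numpy as np

# Check the closed forms for the (N-1)x(N-1) tridiagonal matrix A with -2 on
# the diagonal and 1 on the off-diagonals:
#   eigenvalues        lambda_n = 2 (cos(pi n / N) - 1)
#   eigenvector parts  v_{j,n}  = sin(pi n j / N) / sin(pi n / N)
N = 8
A = -2 * np.eye(N - 1) + np.eye(N - 1, k=1) + np.eye(N - 1, k=-1)

n = np.arange(1, N)
lam_closed = 2.0 * (np.cos(np.pi * n / N) - 1.0)
print(np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(lam_closed)))  # True

# eigenvector check for one mode, say n = 3
j = np.arange(1, N)
v = np.sin(np.pi * 3 * j / N) / np.sin(np.pi * 3 / N)
print(np.abs(A @ v - lam_closed[2] * v).max())  # ~ 0
```

The boundary rows work out because $\sin(0)=0$ and $\sin(\pi n)=0$, i.e. the closed-form components extend naturally to $v_{0,n}=v_{N,n}=0$.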
The magnitudes of the eigenvectors are therefore given by $$\left| \mathbf{v}_n \right|^2=\frac{1}{\sin^2\left( \frac{\pi n}{N} \right)}\sum\limits_{j=1}^{N-1}\sin^2\left( \frac{\pi n}{N}j \right)=\frac{1}{4\sin^2\left( \frac{\pi n}{N} \right)}\left\{ 2N-1-\frac{\sin \left( \frac{\pi n}{N}\left( 2N-1 \right) \right)}{\sin \left( \frac{\pi n}{N} \right)} \right\}$$ so $$\left| \mathbf{v}_n \right|=\frac{\sqrt{N}}{\sqrt{2}\sin \left( \frac{\pi n}{N} \right)}$$ and therefore the components of the unit eigenvectors are given by $$\hat{v}_{j,n}=\sqrt{\frac{2}{N}}\sin \left( \frac{\pi n}{N}j \right)$$ And now we're sort of stuck. But we delve deeper (quite a bit deeper in fact) and note that a real symmetric matrix has eigenvectors associated with distinct eigenvalues that are orthogonal, forming an orthonormal basis once normalized. Expanding the initial data in the unit eigenvectors, this implies $$\gamma_n=\pmb{\eta}\left( 0 \right)\cdot \hat{\mathbf{v}}_n$$ (effectively absorbing the scalar $\left| \mathbf{v}_n \right|$ into the $\gamma_n$ term without loss of generality). It's probably at this point that you might backtrack and restart the entire operation with the spectral theorem in hand (depending on the flavour you want). Anyway… we get $$\gamma_j=\sqrt{\frac{2}{N}}\sum\limits_{n=1}^{N-1}\eta_n\left( 0 \right)\sin \left( \frac{\pi n}{N}j \right)$$ and once the initial displacements are specified we have the future displacements $$\eta_n\left( t \right)=\sum\limits_{j=1}^{N-1}\gamma_j\,\hat{v}_{j,n}\cos \left( \omega_j t \right)$$ As a specific example suppose the initial displacement is uniform for all masses, i.e.
${{\eta }_{n}}\left( 0 \right)=\alpha $, then we have $${{\gamma }_{j}}=\alpha \sqrt{\frac{2}{N}}\sum\limits_{n=1}^{N-1}{\sin \left( \frac{\pi n}{N}j \right)}$$ Completing the summation and simplifying we find $${{\gamma }_{2j}}=0,\,\,{{\gamma }_{2j-1}}=\alpha \sqrt{\frac{2}{N}}\cot \left( \frac{\left( 2j-1 \right)\pi }{2N} \right)$$ So we have then $${{\eta }_{n}}\left( t \right)=\frac{2\alpha }{N}\sum\limits_{j=1}^{\left\lfloor \tfrac{1}{2}N \right\rfloor }{\cot \left( \frac{\left( 2j-1 \right)\pi }{2N} \right)\sin \left( \frac{\pi n}{N}\left( 2j-1 \right) \right)\cos \left( {{\omega }_{2j-1}}t \right)}$$ Take a simple N=4 mass system/ 3 spring system, and put all coefficients to unity (including initial displacements) , then the solutions from above become $$\begin{align} {{\eta }_{1,3}}&=\frac{1}{2\sqrt{2}}\left\{ \cot\left( \tfrac{1}{8}\pi \right)\cos \left( \sqrt{2-\sqrt{2}}t \right)+\tan \left( \tfrac{1}{8}\pi \right)\cos \left( \sqrt{2+\sqrt{2}}t \right) \right\} \\ {{\eta }_{2}}&=\frac{1}{2}\left\{ \cot\left( \tfrac{1}{8}\pi \right)\cos \left( \sqrt{2-\sqrt{2}}t \right)-\tan \left( \tfrac{1}{8}\pi \right)\cos \left( \sqrt{2+\sqrt{2}}t \right) \right\} \\ \end{align}$$ It’s rather tedious but you can show that these satisfy the system $$\begin{align} {{\eta }_{1}}''&=-2{{\eta }_{1}}+{{\eta }_{2}}\\ {{\eta }_{2}}''&={{\eta }_{1}}-2{{\eta }_{2}}+{{\eta }_{3}}\\ {{\eta }_{3}}''&={{\eta }_{2}}-2{{\eta }_{3}}\end{align}$$ and the initial condition $\pmb{\eta }\left( 0 \right)=\mathbf{1}$.
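A quick numerical check of the quoted $N=4$ closed forms (a sketch; the second derivative is approximated by a centred finite difference, and the time sample is arbitrary):

```python
import numpy as np

# Verify the quoted N = 4 (three-spring) closed forms against the system
# eta'' = A eta with unit parameters, plus the initial condition eta(0) = 1.
w1, w3 = np.sqrt(2 - np.sqrt(2)), np.sqrt(2 + np.sqrt(2))
cot8, tan8 = 1 / np.tan(np.pi / 8), np.tan(np.pi / 8)

def eta(t):
    e13 = (cot8 * np.cos(w1 * t) + tan8 * np.cos(w3 * t)) / (2 * np.sqrt(2))
    e2 = (cot8 * np.cos(w1 * t) - tan8 * np.cos(w3 * t)) / 2
    return np.array([e13, e2, e13])

A = np.array([[-2.0, 1, 0], [1, -2, 1], [0, 1, -2]])
t, h = 0.9, 1e-4
second = (eta(t + h) - 2 * eta(t) + eta(t - h)) / h**2   # eta'' via finite difference
print(np.abs(second - A @ eta(t)).max())                 # small truncation error
print(eta(0.0))                                          # initial displacements ~ [1, 1, 1]
```

Note $\cot(\pi/8)+\tan(\pi/8)=2\sqrt{2}$ and $\cot(\pi/8)-\tan(\pi/8)=2$, which is exactly why $\pmb{\eta}(0)=\mathbf{1}$ comes out.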

Apologies in advance (is that correct given it's the end of the post?) for any typos, omissions etc.; however, perhaps this is enough of an idea to fill in the gaps.

Here is a curious observation: if you force the entire system to accelerate (constant acceleration), then the masses near the ends of the system 'jump' in energy/velocity, much like the levels in a quantum oscillator (see for example "falling coupled oscillators & trigonometric sums", Z. Angew. Math. Phys. (2018) 69:19).