I have a symmetric $n\times n$ matrix $\mathbb A$ with entries:
$$A_{ij} = (a_i + a_{i-1})\delta_{ij} - a_i\delta_{i,j-1}-a_{j}\delta_{i-1,j}$$
where $a_0,\dots,a_n$ are given positive numbers.
Is there an analytical formula for the inverse of $\mathbb A$?
Old: I've found numerically that $\mathbb A^{-1}$ can also be tridiagonal, but I have not been able to prove this.
Edit: $\mathbb A^{-1}$ is not tridiagonal in general, as shown by a simple counterexample given by Jean-Claude in the comments. But I'm still interested in a closed-form formula for $\mathbb A^{-1}$, if it exists.
First of all, it is convenient to index the matrices from $0$: I will denote by ${\mathbf{X}_{ \, h} }$ a square matrix with indices in $[0,h]^2$.
Then it is convenient to set $a_n = 0$ for $n < 0$; keeping your definition, but starting the index from $0$, the matrix $\bf A$ becomes, e.g. for $h=3$,
$$ {\bf A}_{\,3} = \left( {\matrix{ {a_{\,0} } & { - a_{\,0} } & 0 & 0 \cr { - a_{\,0} } & {a_0 + a_{\,1} } & { - a_{\,1} } & 0 \cr 0 & { - a_{\,1} } & {a_{\,1} + a_{\,2} } & { - a_{\,2} } \cr 0 & 0 & { - a_{\,2} } & {a_{\,2} + a_{\,3} } \cr } } \right) $$ We can see that the lower diagonal block contains the matrix as you defined it.
It is not difficult to demonstrate that the determinant is simply
$$ d(h) = \left| {\;{\bf A}_{\,h} \;} \right| = \prod\limits_{0\, \le \,k\, \le \,h} {a_{\,k} } $$ while that of the matrix defined by you is $$ d_1 (h) = \left| {\;{\bf A}_{\,1 \ldots h} \;} \right| = \sum\limits_{0\, \le \,j\, \le \,h} {\prod\limits_{0\, \le \,k\, \ne \;j\, \le \,h} {a_{\,k} } } = \left( {\prod\limits_{0\, \le \,k\, \le \,h} {a_{\,k} } } \right)\sum\limits_{0\, \le \,j\, \le \,h} {{1 \over {a_{\,j} }}} $$
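Both determinant formulas are easy to check numerically. Below is a quick Python sketch (the weights `a` are arbitrary positive test values, and `A_ext` / `A_op` are my encodings of the extended matrix and of the matrix as defined in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
h = 5
a = rng.uniform(0.5, 2.0, size=h + 1)   # arbitrary positive a_0, ..., a_h

# Extended matrix A_h, indexed 0..h (first diagonal entry is a_0 alone)
A_ext = (np.diag(a + np.concatenate(([0.0], a[:-1])))
         - np.diag(a[:-1], 1) - np.diag(a[:-1], -1))

# The matrix as defined in the question: diag a_{n-1}+a_n, off-diag -a_n
A_op = (np.diag(a[:-1] + a[1:])
        - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1))

# It is indeed the lower-right block of the extended matrix
assert np.allclose(A_op, A_ext[1:, 1:])

det_ext_formula = np.prod(a)                    # prod_k a_k
det_op_formula = np.prod(a) * np.sum(1.0 / a)   # prod_k a_k * sum_j 1/a_j

assert np.isclose(np.linalg.det(A_ext), det_ext_formula)
assert np.isclose(np.linalg.det(A_op), det_op_formula)
```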
The eigenvalues, however, are complicated, and so is the Jordan decomposition.
Trying instead the LU decomposition for the lowest values of $h$, we get the hint that it might be quite simple and straightforward.
We get $$ {\bf A}_{\,h} = {\bf L}_{\,h} \,{\bf U}_{\,h} = {\bf L}_{\,h} \,{\bf D}_{\,h} \;\overline {{\bf L}_{\,h} } $$ where the overbar denotes the transpose, and where we adopt the following notation $$ \eqalign{ & {\bf D}_{\,h} = \left( {a_{\,n} \circ {\bf I}_{\,h} } \right)\quad \left| {\quad \left( {f(n) \circ {\bf I}} \right)_{\,n,\,m} = f(n)\;\delta _{\,n,\,m} } \right. \cr & {\bf L}_{\,h} = {\bf I}_{\,h} - {\bf E}_{\,h} \quad \left| {\quad {\bf E}_{\,n,\,m} = \;\delta _{\,n,\,m + 1} } \right. \cr} $$
In fact $$ \eqalign{ & {\bf A}_{\,h} = {\bf L}_{\,h} \,{\bf D}_{\,h} \;\overline {{\bf L}_{\,h} } = \left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)\left( {a_{\,n} \circ {\bf I}_{\,h} } \right)\left( {{\bf I}_{\,h} - \overline {{\bf E}_{\,h} } } \right) = \cr & = \left( {a_{\,n} \circ {\bf I}_{\,h} } \right) - {\bf E}_{\,h} \left( {a_{\,n} \circ {\bf I}_{\,h} } \right) - \left( {a_{\,n} \circ {\bf I}_{\,h} } \right)\overline {{\bf E}_{\,h} } + {\bf E}_{\,h} \left( {a_{\,n} \circ {\bf I}_{\,h} } \right)\overline {{\bf E}_{\,h} } = \cr & = \left( {a_{\,n} \circ {\bf I}_{\,h} } \right) + \left( {a_{\,n - 1} \circ {\bf I}_{\,h} } \right){\bf E}_{\,h} \overline {{\bf E}_{\,h} } - {\bf E}_{\,h} \left( {a_{\,n} \circ {\bf I}_{\,h} } \right) - \left( {a_{\,n} \circ {\bf I}_{\,h} } \right)\overline {{\bf E}_{\,h} } = \cr & = \left( {\left( {a_{\,n} + \left[ {1 \le n} \right]a_{\,n - 1} } \right) \circ {\bf I}_{\,h} } \right) - {\bf E}_{\,h} \left( {a_{\,n} \circ {\bf I}_{\,h} } \right) - \left( {a_{\,n} \circ {\bf I}_{\,h} } \right)\overline {{\bf E}_{\,h} } \cr} $$ which is the definition of ${\bf A}$
(the square brackets denote the Iverson bracket).
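The factorization ${\bf A}_{\,h} = {\bf L}_{\,h} \,{\bf D}_{\,h} \;\overline {{\bf L}_{\,h} }$ can also be verified numerically; a Python sketch with arbitrary positive test weights:

```python
import numpy as np

rng = np.random.default_rng(1)
h = 5
a = rng.uniform(0.5, 2.0, size=h + 1)   # arbitrary positive a_0, ..., a_h

# Extended matrix A_h, indexed 0..h
A_ext = (np.diag(a + np.concatenate(([0.0], a[:-1])))
         - np.diag(a[:-1], 1) - np.diag(a[:-1], -1))

D = np.diag(a)                  # D_h = (a_n ∘ I)
E = np.diag(np.ones(h), -1)     # E_{n,m} = delta_{n,m+1}: ones on the subdiagonal
L = np.eye(h + 1) - E           # L_h = I - E

assert np.allclose(L @ D @ L.T, A_ext)
```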
Since the inverse of $\left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)$ is the "Summing" matrix $ {\bf S}_{\,h}$ $$ \left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)^{ - \,{\bf 1}} = {\bf S}_{\,h} \quad \left| {\;S_{\,n,\,m} = \left[ {m \le n} \right]} \right. $$ then we conclude that $$ \eqalign{ & {\bf A}_{\,h} ^{ - \,{\bf 1}} = \left( {{\bf I}_{\,h} - \overline {{\bf E}_{\,h} } } \right)^{ - \,{\bf 1}} \left( {1/a_{\,n} \circ {\bf I}_{\,h} } \right)\left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)^{ - \,{\bf 1}} = \cr & = \overline {{\bf S}_{\,h} } \left( {1/a_{\,n} \circ {\bf I}_{\,h} } \right){\bf S}_{\,h} \cr} $$ that is $$ \eqalign{ & \left( {{\bf A}_{\,h} ^{ - \,{\bf 1}} } \right)_{\,n,\,m} = \sum\limits_{0\, \le \,j\, \le \,h} {\sum\limits_{0\, \le \,k\, \le \,h} {\left[ {n \le k} \right]{{\left[ {k = j} \right]} \over {a_{\,k} }}\left[ {m \le j} \right]} } = \cr & = \sum\limits_{0\, \le \,k\, \le \,h} {\left[ {n \le k} \right]{1 \over {a_{\,k} }}\left[ {m \le k} \right] = \sum\limits_{\max \left( {n,m} \right)\, \le \,k\, \le \,h} {{1 \over {a_{\,k} }}} } \cr} $$
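Again, a quick numerical check of the entrywise formula $\left( {\bf A}_{\,h}^{-1} \right)_{n,m} = \sum_{\max(n,m) \le k \le h} 1/a_k$ (Python sketch, arbitrary positive test weights):

```python
import numpy as np

rng = np.random.default_rng(2)
h = 5
a = rng.uniform(0.5, 2.0, size=h + 1)   # arbitrary positive a_0, ..., a_h

# Extended matrix A_h, indexed 0..h
A_ext = (np.diag(a + np.concatenate(([0.0], a[:-1])))
         - np.diag(a[:-1], 1) - np.diag(a[:-1], -1))

tail = np.cumsum((1.0 / a)[::-1])[::-1]   # tail[k] = sum_{j >= k} 1/a_j
idx = np.maximum.outer(np.arange(h + 1), np.arange(h + 1))
closed = tail[idx]                        # entry (n,m) = sum_{k >= max(n,m)} 1/a_k

assert np.allclose(np.linalg.inv(A_ext), closed)
```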
From here, by partitioning $\bf A$ into four blocks (splitting off the first row and the first column) and applying inversion by blocks, we can deduce the inverse of the matrix as you defined it.
---------- your actual matrix -----------
$$ \eqalign{ & {\bf A}_{\,h} = \left( {\matrix{ {a_{\,0} + a_{\,1} } & { - a_{\,1} } & 0 & \cdots \cr { - a_{\,1} } & {a_{\,1} + a_{\,2} } & { - a_{\,2} } & \ddots \cr 0 & { - a_{\,2} } & {a_{\,2} + a_{\,3} } & \ddots \cr \vdots & \ddots & \ddots & \ddots \cr } } \right) = \cr & = \left( {\left( {a(n) + a(n - 1)} \right) \circ {\bf I}_{\,h} } \right) - \left( {a(n - 1) \circ {\bf I}_{\,h} } \right){\bf E}_{\,h} - \overline {{\bf E}_{\,h} } \left( {a(n) \circ {\bf I}_{\,h} } \right) \cr & \cr} $$
The determinant now is $$ d (h) = \left| {\;{\bf A}_{\,h} \;} \right| = \sum\limits_{0\, \le \,j\, \le \,h} {\prod\limits_{0\, \le \,k\, \ne \;j\, \le \,h} {a_{\,k} } } = \left( {\prod\limits_{0\, \le \,k\, \le \,h} {a_{\,k} } } \right)\sum\limits_{0\, \le \,j\, \le \,h} {{1 \over {a_{\,j} }}} $$ and we conventionally put $d(0)=1$.
The LU decomposition again gives hints that $$ {\bf A}_{\,h} = {\bf L}_{\,h} \,{\bf U}_{\,h} = {\bf L}_{\,h} \,{\bf D}_{\,h} \;\overline {{\bf L}_{\,h} } $$ with $$ \left\{ \matrix{ {\bf D}_{\,h} = \left( {{{d(n)} \over {d(n - 1)}} \circ {\bf I}_{\,h} } \right) \hfill \cr {\bf L}_{\,h} = {\bf I}_{\,h} - {\bf E}_{\,h} \left( {a(n) \circ {\bf I}_{\,h} } \right)\left( {{{d(n - 1)} \over {d(n)}} \circ {\bf I}_{\,h} } \right) = \hfill \cr = {\bf I}_{\,h} - {\bf E}_{\,h} \left( {a(n){{d(n - 1)} \over {d(n)}} \circ {\bf I}_{\,h} } \right) = \hfill \cr = {\bf I}_{\,h} - {\bf E}_{\,h} \left( {a(n) \circ {\bf I}_{\,h} } \right){\bf D}_{\,h} ^{ - \,{\bf 1}} \hfill \cr} \right. $$
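This factorization, with pivots $d(n)/d(n-1)$ and subdiagonal entries $-a_{n-1}\,d(n-2)/d(n-1)$ in $\bf L$, can be checked numerically too. A Python sketch, where the matrix indices $1 \ldots h$ of the answer become rows $0 \ldots h-1$:

```python
import numpy as np

rng = np.random.default_rng(3)
h = 5
a = rng.uniform(0.5, 2.0, size=h + 1)   # arbitrary positive a_0, ..., a_h

# The matrix from the question, indexed 1..h (rows 0..h-1 here)
A_op = (np.diag(a[:-1] + a[1:])
        - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1))

# d(n) = prod_{k<=n} a_k * sum_{j<=n} 1/a_j  (the formula gives d(0) = 1)
d = np.array([np.prod(a[:k + 1]) * np.sum(1.0 / a[:k + 1]) for k in range(h + 1)])

D = np.diag(d[1:] / d[:-1])              # pivots d(n)/d(n-1), n = 1..h
sub = -a[1:-1] * d[:-2] / d[1:-1]        # L_{n,n-1} = -a_{n-1} d(n-2)/d(n-1)
L = np.eye(h) + np.diag(sub, -1)

assert np.allclose(L @ D @ L.T, A_op)
```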
Since $$ \eqalign{ & {\bf I}_{\,h} - \left( {f(n - 1) \circ {\bf I}_{\,h} } \right){\bf E}_{\,h} = {\bf I}_{\,h} - {\bf E}_{\,h} \left( {f(n) \circ {\bf I}_{\,h} } \right)\quad \left| {\;0 \ne f(n)} \right.\;\left| {\;n = 1 \ldots h} \right.\quad = \cr & = \left( {\left( {\prod\limits_{1\, \le k\, \le \,n - 1} {f(k)} } \right) \circ {\bf I}_{\,h} } \right)\;\,\left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)\,\;\left( {\left( {\prod\limits_{1\, \le k\, \le \,n - 1} {f(k)} } \right) \circ {\bf I}_{\,h} } \right)^{\,{\bf - }\,{\bf 1}} \cr} $$ then $$ \eqalign{ & {\bf L}_{\,h} = {\bf I}_{\,h} - {\bf E}_{\,h} \left( {a(n){{d(n - 1)} \over {d(n)}} \circ {\bf I}_{\,h} } \right) = \cr & = \left( {\left( {{1 \over {d(n - 1)}}\prod\limits_{1\, \le k\, \le \,n - 1} {a(k)} } \right) \circ {\bf I}_{\,h} } \right)\;\,\left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)\,\;\left( {\left( {{1 \over {d(n - 1)}}\prod\limits_{1\, \le k\, \le \,n - 1} {a(k)} } \right) \circ {\bf I}_{\,h} } \right)^{\,{\bf - }\,{\bf 1}} = \cr & = \left( {\left( {{{a_{\,0} } \over {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} }}} \right) \circ {\bf I}_{\,h} } \right)\;\,\left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)\,\;\left( {\left( {{{a_{\,0} } \over {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} }}} \right) \circ {\bf I}_{\,h} } \right)^{\,{\bf - }\,{\bf 1}} = \cr & = \left( {\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} } \right) \circ {\bf I}_{\,h} } \right)^{\,{\bf - }\,{\bf 1}} \;\,\left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)\,\;\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} \circ {\bf I}_{\,h} } \right) \cr} $$
and the path to the conclusion is now clear, i.e.
$$ \bbox[lightyellow] { \eqalign{ & {\bf A}_{\,h} ^{\,{\bf - }\,{\bf 1}} = \overline {{\bf L}_{\,h} } ^{\,{\bf - }\,{\bf 1}} \,\;{\bf D}_{\,h} ^{\,{\bf - }\,{\bf 1}} \;{\bf L}_{\,h} ^{\,{\bf - }\,{\bf 1}} \; = \cr & = \left( {\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} } \right) \circ {\bf I}_{\,h} } \right)\;\,\left( {{\bf I}_{\,h} - \overline {{\bf E}_{\,h} } } \right)^{\,{\bf - }\,{\bf 1}} \,\, \cdot \cr & \cdot \;\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} \circ {\bf I}_{\,h} } \right)^{\,{\bf - }\,{\bf 1}} \left( {{{\left( {\prod\limits_{0\, \le \,k\, \le \,n - 1} {a_{\,k} } } \right)\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} } \over {\left( {\prod\limits_{0\, \le \,k\, \le \,n} {a_{\,k} } } \right)\sum\limits_{0\, \le \,j\, \le \,n} {{1 \over {a_{\,j} }}} }}} \right)\left( {\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} } \right) \circ {\bf I}_{\,h} } \right)^{\,{\bf - }\,{\bf 1}} \;\,\, \cdot \cr & \cdot \,\left( {{\bf I}_{\,h} - {\bf E}_{\,h} } \right)\,\;\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} \circ {\bf I}_{\,h} } \right) = \cr & = \left( {\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} } \right) \circ {\bf I}_{\,h} } \right)\;\,\overline {{\bf S}_{\,h} } \;\left( {\left( {a_{\,n} \sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} \sum\limits_{0\, \le \,k\, \le \,n} {{1 \over {a_{\,k} }}} } \right) \circ {\bf I}_{\,h} } \right)^{\,{\bf - }\,{\bf 1}} \;{\bf S}_{\,h} \,\;\left( {\sum\limits_{0\, \le \,j\, \le \,n - 1} {{1 \over {a_{\,j} }}} \circ {\bf I}_{\,h} } \right) \cr} }$$
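As a final check: with $s(n) = \sum_{0 \le j \le n-1} 1/a_j$, my reading of the boxed formula entrywise is $\left( {\bf A}_{\,h}^{-1} \right)_{n,m} = s(n)\,s(m) \sum_{\max(n,m)\, \le \,k\, \le \,h} \frac{1}{a_k\, s(k)\, s(k+1)}$, which agrees with the numerical inverse (Python sketch, arbitrary positive test weights):

```python
import numpy as np

rng = np.random.default_rng(4)
h = 5
a = rng.uniform(0.5, 2.0, size=h + 1)   # arbitrary positive a_0, ..., a_h

# The matrix from the question, indexed 1..h (rows 0..h-1 here)
A_op = (np.diag(a[:-1] + a[1:])
        - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1))

s = np.concatenate(([0.0], np.cumsum(1.0 / a)))   # s[n] = sum_{j < n} 1/a_j
w = 1.0 / (a[1:] * s[1:-1] * s[2:])               # w[k-1] = 1/(a_k s(k) s(k+1))
tail = np.cumsum(w[::-1])[::-1]                   # tail[k-1] = sum over j >= k

n = np.arange(1, h + 1)
M = np.maximum.outer(n, n)                        # max(n, m)
closed = np.outer(s[1:-1], s[1:-1]) * tail[M - 1]

assert np.allclose(np.linalg.inv(A_op), closed)
```

Note that $1/(a_k\,s(k)\,s(k+1)) = 1/s(k) - 1/s(k+1)$ telescopes, which is what makes the inner sum so simple.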