The Problem
Show that if $a\neq b$, then for the $n\times n$ matrix we have $$\det\begin{pmatrix} a+b & ab & 0 & \cdots & 0 & 0 \\ 1 & a+b & ab & \cdots & 0 & 0 \\ 0 & 1 & a+b & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & a+b & ab \\ 0 & 0 & 0 & \cdots & 1 & a+b\end{pmatrix} =\frac{a^{n+1}-b^{n+1}}{a-b}.$$ What if $a=b$?
My Questions
I am not entirely sure how to begin this proof; I suspect I am missing something that makes it quite simple. I looked for similar problems and noticed a common theme: using row operations to rewrite the matrix, from which the determinant was then found. However, I still didn't understand many of the intermediate computations when it came to actually finding the determinant. My questions are as follows.
- Should I use row operations to rewrite this matrix? If so, what would be an example of how that would look computationally?
- In either case, how should I go about actually computing the determinant to show the statement is true? Is there an algorithm that is helpful here?
- I noticed, for the follow-up question, that if $a=b$ then the determinant becomes $$\det\begin{pmatrix} 2a & a^2 & 0 & \cdots & 0 & 0 \\ 1 & 2a & a^2 & \cdots & 0 & 0 \\ 0 & 1 & 2a & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 2a & a^2 \\ 0 & 0 & 0 & \cdots & 1 & 2a\end{pmatrix}.$$ Am I right in my thinking there?
Other Details
The book used in the course is Abstract Linear Algebra by Curtis. It has been of little help to me here...
Let's call $A_n$ the matrix whose determinant you want to compute, and let's see how it can be constructed iteratively: $$ A_{n+1} = \begin{pmatrix} a+b & ab & 0 & \cdots & 0 & 0 \\ 1 & a+b & ab & \cdots & 0 & 0 \\ 0 & 1 & a+b & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & a+b & ab \\ 0 & 0 & 0 & \cdots & 1 & a+b\end{pmatrix} = \begin{pmatrix} A_n & \begin{matrix} 0 \\ \vdots \\ 0 \\ ab \end{matrix} \\ \begin{matrix} 0 & \cdots & 0 & 1 \end{matrix} & a+b \end{pmatrix} $$
Such a layout cries out for induction to compute the determinant $D_n = \det(A_n)$. Since the recurrence we will derive is second order, we check two base cases. For $n=1$, we have $A_1 = (a+b)$, so $D_1 = a+b$ and $D_1 \cdot (a-b) = a^2 - b^2$. For $n=2$, $D_2 = (a+b)^2 - ab = a^2 + ab + b^2$, so $D_2 \cdot (a-b) = a^3 - b^3$, as required.
Now, if we assume the formula is true up to an integer $n$, can we prove that it is also true for $n+1$ ?
We can use the Laplace expansion along the last column: the entry $a+b$ in the bottom-right corner has cofactor $D_n$, while the entry $ab$ just above it has cofactor $-D_{n-1}$ (its minor is block triangular, with determinant $D_{n-1} \cdot 1$). Hence: $$ D_{n+1} = (a+b)\cdot D_n - ab \cdot D_{n-1}$$
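As a sanity check (not part of the proof), the recurrence can be verified symbolically for small $n$ with sympy; the helper names `A` and `D` below are my own:

```python
import sympy as sp

a, b = sp.symbols('a b')

def A(n):
    # n x n tridiagonal matrix: a+b on the diagonal,
    # ab on the superdiagonal, 1 on the subdiagonal
    return sp.Matrix(n, n, lambda i, j:
                     a + b if i == j
                     else a * b if j == i + 1
                     else 1 if j == i - 1
                     else 0)

# D[n] = det(A_n); the convention D[0] = 1 makes the recurrence work from n = 1
D = [sp.Integer(1)] + [A(n).det() for n in range(1, 6)]

# check D_{n+1} = (a+b) D_n - ab D_{n-1} for n = 1..4
for n in range(1, 5):
    assert sp.expand(D[n + 1] - (a + b) * D[n] + a * b * D[n - 1]) == 0
```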
Therefore: $$ \begin{align} (a-b)\cdot D_{n+1} & = (a+b)\cdot (a-b) D_n - ab \cdot(a-b) D_{n-1} \\ & = (a+b)\cdot( a^{n+1} - b^{n+1} ) - ab \cdot( a^{n} - b^{n} ) \\ & = a^{n+2}- ab^{n+1} + ba^{n+1} - b^{n+2} - ba^{n+1}+ ab^{n+1} \\ & = a^{n+2}- b^{n+2}\\ \end{align} $$
And we are done for the first question.
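To see the closed form in action, here is a quick numerical check (my own sketch with NumPy; the helper `det_An` and the sample values of $a$, $b$ are arbitrary):

```python
import numpy as np

def det_An(n, a, b):
    """Determinant of the n x n tridiagonal matrix from the problem."""
    M = np.zeros((n, n))
    np.fill_diagonal(M, a + b)          # main diagonal: a+b
    np.fill_diagonal(M[:, 1:], a * b)   # superdiagonal: ab
    np.fill_diagonal(M[1:, :], 1.0)     # subdiagonal: 1
    return np.linalg.det(M)

a, b = 2.0, 3.0
for n in range(1, 9):
    closed = (a**(n + 1) - b**(n + 1)) / (a - b)
    assert abs(det_An(n, a, b) - closed) < 1e-9 * abs(closed)
```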
The second question asks what happens when $a=b$. Obviously we cannot apply the formula above, since it is only valid for $a \ne b$. But we can take $b$ as close as we want to $a$, say $b_k = a + \frac{1}{k}$. Applying the previous result, for every positive integer $k$: $$ \det \begin{pmatrix} a+b_k & ab_k & 0 & \cdots & 0 & 0 \\ 1 & a+b_k & ab_k & \cdots & 0 & 0 \\ 0 & 1 & a+b_k & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & a+b_k & ab_k \\ 0 & 0 & 0 & \cdots & 1 & a+b_k\end{pmatrix} = \frac{\left(a + \frac{1}{k}\right)^{n+1} - a^{n+1}}{ \frac{1}{k} } $$
To conclude, use the continuity of the determinant (it is a polynomial in the matrix entries) and take the limit on both sides as $k \to \infty$. On the left you have the determinant of the matrix $A_n$ with $a=b$; on the right you have the derivative of the function $x \mapsto x^{n+1}$ at $x=a$, which is $(n+1)a^n$. So for $a=b$ the determinant equals $(n+1)a^n$.
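A small numerical check of this limit value (again my own sketch; `det_An` and the sample value of $a$ are arbitrary):

```python
import numpy as np

def det_An(n, a, b):
    # n x n tridiagonal matrix assembled from its three diagonals
    M = (np.diag([a + b] * n)
         + np.diag([a * b] * (n - 1), k=1)
         + np.diag([1.0] * (n - 1), k=-1))
    return np.linalg.det(M)

a = 1.5
for n in range(1, 9):
    # with a = b the determinant should be (n+1) * a^n
    assert abs(det_An(n, a, a) - (n + 1) * a**n) < 1e-9 * (n + 1) * a**n
```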