A problem with a determinant


Let $A$ and $B$ be $n\times n$ matrices. Let $x$ be a scalar variable and define $D(x):=\det(A+Bx)$. It can easily be shown by induction on $n$ that $D(x)$ is a polynomial of degree at most $n$. The problem is to find an explicit polynomial when the matrices have the following form (with $a\neq b$):

$$A= \begin{pmatrix} \lambda_1 & a & a & \dots & a \\ b & \lambda_2 & a & \dots & a \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b & b & b & \dots & \lambda_n \end{pmatrix}$$

$$B= \begin{pmatrix} 1 & 1 & 1 & \dots & 1 \\ 1 & 1 & 1 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \dots & 1 \end{pmatrix}$$

I already spotted an inductive pattern, though not a very useful one. Expanding the determinant along the first column, the first term is $(\lambda_1+x)\cdot D_{n-1}$ and the remaining terms are sums of the form $(b+x)(a+x)\cdot D_{n-2}$, where each $D$ is the determinant of a submatrix with different $\lambda$'s. But this doesn't really help for finding an explicit formula; at most it seems useful for proving that an already-guessed polynomial is the right one.

There are 2 answers below.

Best answer:

It is easy to see that $\deg D(x) \le 1$ regardless of $A$, because all rows of $B$ are equal: subtracting the first row of $A+Bx$ from each of the other rows leaves $x$ only in the first row, and expanding along that first row then shows each term is affine in $x$. Thus $$D(x) = \alpha + \beta x$$ for some $\alpha, \beta$. Now it is easy to see that $\alpha = D(0) = \det(A)$ and $\beta = D(1)-D(0) = \det(A+B)-\det(A)$.

For this particular $A$ and $B$, note that $A-aB$ is lower triangular and $A-bB$ is upper triangular, so $$C_a := D(-a) = (\lambda_1-a) \cdots (\lambda_n-a)$$ and $$C_b := D(-b) = (\lambda_1-b) \cdots (\lambda_n-b).$$ Since $a\ne b$, interpolating the affine function $D$ through these two values gives

$$D(x) = -\frac{C_b-C_a}{b-a} (x + a) + C_a.$$
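A quick numerical sanity check of this formula (a NumPy sketch; the size $n$, the $\lambda_i$, $a$, $b$, and the test points are arbitrary sample values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
lam = rng.normal(size=n)
a, b = 2.0, -1.0  # must satisfy a != b

# Build A: lam on the diagonal, a above the diagonal, b below.
A = np.full((n, n), a)
A[np.tril_indices(n, -1)] = b
np.fill_diagonal(A, lam)
B = np.ones((n, n))

# Closed form: D(x) = -(C_b - C_a)/(b - a) * (x + a) + C_a
C_a = np.prod(lam - a)
C_b = np.prod(lam - b)
D = lambda x: -(C_b - C_a) / (b - a) * (x + a) + C_a

# Compare against the actual determinant at a few points.
for x in [-3.0, 0.0, 1.5]:
    assert np.isclose(np.linalg.det(A + x * B), D(x))
```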

Second answer:

So here is what I think is the "nicest" formula for this determinant.

We will include the extra assumptions that $\lambda_i\neq b$ for all $i$ and that $b\neq 0$. Denote by $S$ the upper triangular matrix $$S := A - bB = \begin{pmatrix} \lambda_1-b & a-b & \cdots & a-b \\ & \lambda_2-b &\cdots&a-b \\ && \ddots&\vdots\\ &&&\lambda_n-b \end{pmatrix},$$ and let $\gamma = (\gamma_1,\cdots, \gamma_n)^T$ be the unique solution of $S\gamma=(b,b,\cdots, b)^T$. Notice that then $S\gamma(1+x/b)=(b+x,b+x,\cdots, b+x)^T$.

Since $A+Bx = S + (b+x)B$, and every column of $(b+x)B$ is the constant vector $(b+x,\cdots,b+x)^T$, the usual rules for determinants (multilinearity in the columns, and the fact that a determinant vanishes when two columns coincide) give $$D(x) = |S| + |S_1|+\cdots+|S_n|, $$ where $S_i$ denotes the matrix equal to $S$ except that its $i$-th column is replaced by the all-$(b+x)$ column. By Cramer's rule, $|S_i| = |S|\,\gamma_i(1+x/b)$, so $$D(x) = |S|\left(1+(1+x/b)(\gamma_1+\cdots + \gamma_n)\right)=\left(1+(1+x/b)\sum_{i=1}^n\gamma_i\right)\prod_{i=1}^n(\lambda_i-b). $$

Why I think this is 'nice': since $S$ is upper triangular, the system $S\gamma=(b,\cdots,b)^T$ can be solved by back-substitution, so this formula should be fairly efficient for larger $n$.
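This formula can also be checked numerically (a NumPy sketch under the stated assumptions $\lambda_i\neq b$ and $b\neq 0$; the sample values are arbitrary, and `np.linalg.solve` stands in for the back-substitution):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
lam = rng.normal(size=n) + 5.0  # shift so that lam_i != b
a, b = 2.0, -1.0                # b must be nonzero here

# S = A - b*B: upper triangular, lam_i - b on the diagonal, a - b above.
S = np.triu(np.full((n, n), a - b), 1) + np.diag(lam - b)
gamma = np.linalg.solve(S, np.full(n, b))

# D(x) = (1 + (1 + x/b) * sum(gamma)) * prod(lam_i - b)
D = lambda x: (1 + (1 + x / b) * gamma.sum()) * np.prod(lam - b)

# Rebuild A and B and compare against the actual determinant.
A = np.full((n, n), a)
A[np.tril_indices(n, -1)] = b
np.fill_diagonal(A, lam)
B = np.ones((n, n))
for x in [-3.0, 0.0, 1.5]:
    assert np.isclose(np.linalg.det(A + x * B), D(x))
```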

There is an analogous formula in $a$ when $\lambda_i \neq a$ for all $i$ (and $a \neq 0$).