Convergence of an infinite determinant


I'm stuck on the following exercise from *A Course of Modern Analysis* by Whittaker and Watson:

Show that the necessary and sufficient condition for the absolute convergence of the infinite determinant $$\lim_{m\rightarrow\infty}\begin{vmatrix}1&a_1&0&0&\cdots&0\\b_1&1&a_2&0&\cdots&0\\0&b_2&1&a_3&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\0&0&0&\cdots&1&a_m\\0&0&0&\cdots&b_m&1\end{vmatrix}$$ is that the series $$a_1b_1 + a_2b_2 + a_3b_3 + \cdots$$ shall converge absolutely.

The "sufficient" part is easy. Let $$f(m) = \begin{vmatrix}1&a_1&0&0&\cdots&0\\b_1&1&a_2&0&\cdots&0\\0&b_2&1&a_3&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\0&0&0&\cdots&1&a_m\\0&0&0&\cdots&b_m&1\end{vmatrix}$$ Then $f(m)$ satisfies the recurrence $$f(m) = f(m - 1) - c_{m}f(m - 2)$$ where $$c_k = a_kb_k$$ If the series $$|c_1| + |c_2| + |c_3| + \cdots$$ converges, then so does the product $$(1+|c_1|)(1+|c_2|)(1+|c_3|)\cdots$$ Each term in the expansion of $f(m)$ corresponds to a term of equal magnitude in the expansion of this product. Hence the determinant converges absolutely.
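As a numerical sanity check (a sketch: `det_f` and the choice $c_k = 1/k^2$ are my own illustration, not from the book), the recurrence lets us evaluate $f(m)$ without building the matrix, and the partial products of $\prod(1+|c_k|)$ bound $|f(m)|$:

```python
def det_f(c, m):
    """f(m) via the three-term recurrence f(m) = f(m-1) - c(m) * f(m-2),
    with f(0) = 1 (the 1x1 determinant) and f(1) = 1 - c(1)."""
    if m == 0:
        return 1.0
    prev2, prev1 = 1.0, 1.0 - c(1)  # f(0), f(1)
    for k in range(2, m + 1):
        prev2, prev1 = prev1, prev1 - c(k) * prev2
    return prev1

# Example: c_k = 1/k^2 is absolutely summable, so f(m) should settle down
# and stay within the product bound prod_{k<=m} (1 + |c_k|).
c = lambda k: 1.0 / k ** 2
bound = 1.0
for k in range(1, 1001):
    bound *= 1 + abs(c(k))
print(det_f(c, 100), det_f(c, 1000), bound)
```

The printed values of $f(100)$ and $f(1000)$ should be nearly identical, and both well inside the product bound.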

However, the "necessary" part seems problematic. Suppose that $$\lim_{m\rightarrow\infty} f(m) = D$$ Consider the case $D > 0$, and choose any number $0 < r < D$. Then there exists an integer $m_0$ such that $$0 < D - r < f(m) < D + r \quad \forall m > m_0$$ Rearranging the recurrence, we obtain $$c_m = \frac{f(m - 1) - f(m)}{f(m - 2)}$$ so that $$|c_m| = \frac{|f(m - 1) - f(m)|}{|f(m - 2)|} > \frac{1}{D + r}|f(m - 1) - f(m)| \quad \forall m > m_0 + 2$$ These equations imply that, by choosing appropriate values of $c_k$, we can make $f(m)$ equal to any prescribed sequence converging to $D$.

Now take $$f(m) = \sum_{k = 1}^m \frac{(-1)^{k + 1}}{k}$$ (with $f(0) = 1$, so that $c_1 = 0$). Then $f(m)$ converges to $\ln 2 > 0$, and for all $m$ we have $1/2 \leq f(m) \leq 1$. Since $|f(m - 1) - f(m)| = 1/m$, we get $|c_m| = \dfrac{1/m}{f(m - 2)} \geq \dfrac{1}{m}$, and therefore $$|c_3| + |c_4| + |c_5| + \cdots \geq \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots$$ which obviously diverges.
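This construction is easy to check numerically (a sketch; `S` below is the partial sum above with the convention $S(0) = 1$, and `c` is recovered from the rearranged recurrence):

```python
import math

def S(m):
    """Target sequence: partial sums of the alternating harmonic series,
    with S(0) = 1 taken as the determinant convention f(0) = 1."""
    if m == 0:
        return 1.0
    return sum((-1) ** (k + 1) / k for k in range(1, m + 1))

def c(m):
    """c_m recovered from the recurrence: c_m = (f(m-1) - f(m)) / f(m-2)."""
    if m == 1:
        return 1.0 - S(1)  # f(1) = 1 - c_1, so c_1 = 0 here
    return (S(m - 1) - S(m)) / S(m - 2)

# The recurrence with these c_m reproduces S ...
prev2, prev1 = S(0), S(1)
for m in range(2, 50):
    prev2, prev1 = prev1, prev1 - c(m) * prev2
    assert math.isclose(prev1, S(m))

# ... yet sum |c_m| dominates the harmonic tail, since |c_m| = (1/m)/S(m-2) >= 1/m.
abs_sum = sum(abs(c(m)) for m in range(3, 200))
harmonic_tail = sum(1.0 / m for m in range(3, 200))
print(abs_sum, harmonic_tail)
```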

Am I making a mistake in interpreting the question?

Update: I found another, simpler counterexample: let $$a_1 = b_1 = \frac{\sqrt{2}}{2}$$ $$a_k = b_k = \frac{1}{2} \quad \forall k > 1$$ Then one can verify that $f(m) = 2^{-m}$, which converges to $0$. However, $$a_1b_1 + a_2b_2 + a_3b_3 + \cdots = \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{4} + \cdots$$ which diverges.
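A quick numerical check of this counterexample (a sketch of the verification; every value is a power of two, so the floating-point arithmetic is exact):

```python
# c_1 = (sqrt(2)/2)^2 = 1/2, and c_k = (1/2)^2 = 1/4 for k > 1.
def f(m):
    """f(m) for the counterexample, via f(m) = f(m-1) - c_m * f(m-2)."""
    if m == 0:
        return 1.0
    prev2, prev1 = 1.0, 1.0 - 0.5  # f(0) = 1, f(1) = 1 - c_1 = 1/2
    for k in range(2, m + 1):
        prev2, prev1 = prev1, prev1 - 0.25 * prev2
    return prev1

print([f(m) for m in range(6)])  # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```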

Best answer:

The book does not define what is meant by "absolute convergence of determinants".

This paper seems to suggest that "absolute convergence" actually means that the determinant still converges after each term in the expansion of $f(m)$ is replaced by its absolute value.

In this case, the problem is very easy. Let $g(m)$ be obtained from $f(m)$ by replacing each term of its expansion with its absolute value. Note that $g(m) \geq 1$ and $$g(m) = g(m - 1) + |c_m|\,g(m - 2) \geq g(m - 1) + |c_m|$$ Hence, by induction, $$g(m) \geq 1 + |a_1b_1| + |a_2b_2| + \cdots + |a_mb_m|$$

Sufficient: If $$a_1b_1 + a_2b_2 + a_3b_3 + \cdots$$ converges absolutely, then the product $$(1+|a_1b_1|)(1+|a_2b_2|)(1+|a_3b_3|)\cdots$$ converges, and by induction its partial products dominate $g(m)$. Since $g(m)$ is increasing and bounded above, it converges.

Necessary: Since $$g(m) \geq 1 + |a_1b_1| + |a_2b_2| + \cdots + |a_mb_m|$$ if $g(m)$ converges, then the series $$a_1b_1 + a_2b_2 + a_3b_3 + \cdots$$ converges absolutely.
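Both bounds sandwich $g(m)$, which is easy to confirm numerically (a sketch; the recurrence for $g$ and the test coefficients are my own illustration):

```python
def g_values(cs):
    """g(0), ..., g(len(cs)) via g(m) = g(m-1) + |c_m| * g(m-2),
    with g(0) = 1 and g(1) = 1 + |c_1|."""
    vals = [1.0, 1.0 + abs(cs[0])]
    for k in range(2, len(cs) + 1):
        vals.append(vals[-1] + abs(cs[k - 1]) * vals[-2])
    return vals

cs = [(-1) ** k / k ** 2 for k in range(1, 201)]  # any absolutely summable c_k
gv = g_values(cs)

lower, upper = 1.0, 1.0
for m, ck in enumerate(cs, start=1):
    lower += abs(ck)       # 1 + |c_1| + ... + |c_m|
    upper *= 1 + abs(ck)   # prod_{k<=m} (1 + |c_k|)
    assert lower - 1e-12 <= gv[m] <= upper + 1e-12
print(lower, gv[-1], upper)
```

With these coefficients the sandwich $1 + \sum|c_k| \leq g(m) \leq \prod(1+|c_k|)$ holds at every step, so $g(m)$ converges exactly when the series does.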