Flawed Power Series Uniqueness Proof, and related question [Abbott's Understanding Analysis]


I've been working through some problems on power series in Abbott's Understanding Analysis, in particular an exercise on the uniqueness of power series representations. Comparing my answer to the beginnings of others' proofs, it seems my argument must be incorrect, but I can't find the flaw in the logic (at least over the real numbers).

The question starts from the assumption that \begin{equation} f(x) = \sum_{n=0}^{\infty}a_{n}x^{n} = \sum_{n=0}^{\infty}b_{n}x^{n} \quad \forall x \in (-R,R). \end{equation}

My logic then starts as follows: choose $x_{1}$ with $0 < x_{1} < R$, so that each series converges (indeed, converges uniformly on $[-x_{1}, x_{1}]$). We can then form the following new power series, which must also converge at $x_{1}$:

\begin{equation} \sum_{n=0}^{\infty}c_{n}x^{n} = \sum_{n=0}^{\infty}(a_{n} - b_{n})x^{n} = \sum_{n=0}^{\infty}a_{n}x^{n} -\sum_{n=0}^{\infty} b_{n}x^{n} = f(x) - f(x) = 0, \end{equation} where $c_{n} = a_{n} - b_{n}$ for all $n$.
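The termwise subtraction above uses only linearity of convergent series. As a quick numerical sanity check of the identity on partial sums, here is a sketch with arbitrary sample coefficients (my own choices, not the $a_n$, $b_n$ of the exercise):

```python
import math

# Arbitrary sample coefficients, purely for illustration.
N = 30
a = [1 / math.factorial(n) for n in range(N)]  # coefficients of e^x
b = [0.5 ** n for n in range(N)]               # geometric coefficients
c = [an - bn for an, bn in zip(a, b)]          # c_n = a_n - b_n

x = 0.4  # a point inside both radii of convergence

# Linearity: the partial sum of the difference equals the
# difference of the partial sums.
lhs = sum(cn * x ** n for n, cn in enumerate(c))
rhs = (sum(an * x ** n for n, an in enumerate(a))
       - sum(bn * x ** n for n, bn in enumerate(b)))
assert abs(lhs - rhs) < 1e-12
```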

Now, since this series converges at $x_{1}$, we can define $x_{2} = \frac{1}{2} x_{1} > 0$. Then, because $|x_{2}| < |x_{1}|$, the series converges absolutely at $x_{2}$. As a result we can write the following:

\begin{equation} \sum_{n=0}^{\infty}|c_{n}x_{2}^{n}| = \sum_{n=0}^{\infty}|c_{n}||x_{2}|^{n} = 0. \end{equation} Now, for this new series the sequence of partial sums is non-decreasing, so if any term is nonzero the series cannot converge to zero. We must therefore conclude that $$c_{n} = 0 \hspace{20pt} \forall n$$ (since $|x_{2}|^{n} > 0$).
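The sub-claim used here, that a series of nonnegative terms has non-decreasing partial sums and hence cannot converge to zero once a single positive term appears, can be sketched numerically (the sample terms are my own, not from the exercise):

```python
# Nonnegative terms with a single positive entry at index 2 (illustrative).
terms = [0.0, 0.0, 0.3, 0.0, 0.05, 0.0]

partials = []
running = 0.0
for t in terms:
    running += t
    partials.append(running)

# Partial sums never decrease...
assert all(s <= t for s, t in zip(partials, partials[1:]))
# ...and once a positive term has appeared they stay bounded away
# from 0, so the series cannot converge to 0.
assert all(s >= 0.3 for s in partials[2:])
```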

This in turn implies that $a_{n} = b_{n}$ for all $n$.

I was wondering if you might be able to help me find where my logic goes wrong.

This is also in turn related to exercise 6.5.10 in Abbott:

Let $g(x) = \sum_{n=0}^{\infty}b_{n}x^{n}$ converge on $(-R, R)$, and assume $(x_{n}) \to 0$ with $x_{n} \neq 0$. If $g(x_{n}) = 0$ for all $n \in \mathbb{N}$, show that $g(x)$ must be identically zero on all of $(-R, R)$.

Here I'd like to use the same logic: take an element of the sequence at which the series converges, then find another element of the sequence that is smaller in absolute value, at which the series must converge absolutely, and use this to show that all the coefficients are zero, in a similar manner to the above.

Is this method of proof for this second exercise similarly flawed?

Thanks in advance for any help you can supply.