Following up on this question:
I need help understanding a step in the matrix representation of bounded linear operators.
The book said:
"Now, $$A \phi_{j} = \sum_{k}<A \phi_{j},\phi_{k}> \phi_{k}......(2),$$ Combining (1) (where (1) is $Ax = \sum_{j}<x,\phi_{j}> A\phi_{j}$) and (2) gives, $$Ax = \sum_{j} \sum_{k}<x,\phi_{j}> <A \phi_{j},\phi_{k}> \phi_{k} = \sum_{k} \sum_{j}<x,\phi_{j}> <A \phi_{j},\phi_{k}> \phi_{k}.$$ "
My professor said that the interchange of the summation signs in the last step relies on conditional versus absolute convergence. Could someone explain this to me in detail, please?
Thanks!
We do agree that:
$Ax=\sum_{k}\langle x,\phi_k\rangle A\phi_k$
Now we can again take its projections onto the basis:
$\langle Ax,\phi_j\rangle=\sum_{k}\langle x,\phi_k\rangle \langle A\phi_k,\phi_j\rangle$
Again by definition:
$Ax=\sum_{j}\langle Ax,\phi_j\rangle\phi_j=\sum_{j}(\sum_{k}\langle x,\phi_k\rangle \langle A\phi_k,\phi_j\rangle)\phi_j$.
As you can see, this is the correct way to proceed, and there is no need at all to interchange sums of vectors.
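As a finite-dimensional sanity check of the formula $Ax=\sum_{j}\left(\sum_{k}\langle x,\phi_k\rangle \langle A\phi_k,\phi_j\rangle\right)\phi_j$, here is a short numpy sketch (variable names are mine, not from the book): it builds a random orthonormal basis of $\mathbb{C}^n$ via a QR factorization and reconstructs $Ax$ from the matrix elements $\langle A\phi_k,\phi_j\rangle$, using the convention that the inner product is linear in its first argument.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random unitary matrix: its columns form an orthonormal basis of C^n.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
phi = [Q[:, k] for k in range(n)]

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
x = rng.normal(size=n) + 1j * rng.normal(size=n)

def ip(u, v):
    # <u, v>: linear in the first argument, conjugate-linear in the second.
    # np.vdot conjugates its first argument, so ip(u, v) = sum(u * conj(v)).
    return np.vdot(v, u)

# Reconstruct Ax from the matrix elements <A phi_k, phi_j>.
Ax = sum(ip(x, phi[k]) * ip(A @ phi[k], phi[j]) * phi[j]
         for k in range(n) for j in range(n))

assert np.allclose(Ax, A @ x)
```

In finite dimensions both iterated sums are finite, so no convergence issue arises; the subtlety discussed here only appears for infinite orthonormal bases.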
In any case, in order to interchange sums of real numbers you need to check a Fubini–Tonelli-type condition (e.g. absolute convergence); see for instance http://www.math.ubc.ca/~feldman/m321/twosum.pdf.
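To see why some such condition is needed, here is a classic counterexample (not absolutely summable) where the two iterated sums genuinely disagree: $a_{jk}=1$ if $k=j$, $a_{jk}=-1$ if $k=j+1$, and $0$ otherwise. Every row sums to $0$, while column $0$ sums to $1$ and every other column to $0$. Each row and column has finite support, so the sums below are exact, not truncations:

```python
def a(j, k):
    # a_{jk} = 1 on the diagonal, -1 on the superdiagonal, 0 elsewhere.
    if k == j:
        return 1
    if k == j + 1:
        return -1
    return 0

# Row j is supported on k in {j, j+1}; column k on j in {k-1, k}.
row_sums = [sum(a(j, k) for k in range(j + 2)) for j in range(100)]
col_sums = [sum(a(j, k) for j in range(max(k, 1) + 1)) for k in range(100)]

print(sum(row_sums))  # sum_j sum_k a_{jk} = 0
print(sum(col_sums))  # sum_k sum_j a_{jk} = 1
```

Since $\sum_{j}\sum_{k} a_{jk} = 0 \neq 1 = \sum_{k}\sum_{j} a_{jk}$, interchanging the order of summation is not automatic; absolute summability rules out examples like this one.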