I have often seen derivations where people take two series that should be equal and then set the terms equal, term by term. Under what conditions is this valid?
For example, say I expand a real function, $f\left(x\right)$ using some arbitrary real orthogonal polynomials, $P_k\left(x\right)$, as $f(x) = \sum_k a_k P_k\left(x\right)$. Since $f\left(x\right)$ is real we can find a relation for the coefficients as:
\begin{equation} \begin{array}{rcl} f\left(x\right) & = & f^*\left(x\right) \\ \sum_k a_k P_k\left(x\right) & = & \sum_k a_k^* P_k\left(x\right) \\ \end{array} \end{equation}
Then people usually say that the coefficients of the $k$-th term must be equal, so you get $a_k=a_k^*$, i.e. the coefficients are real. But why do the terms have to be equal term by term? Is it something special about orthogonal polynomials, or about orthogonal basis functions in general?
For instance, say I have a symmetric adjacency matrix $A_{ij}$, and I define node weights as:
$$w_i=\sum_j \frac{w_j}{d_j}A_{ij}$$
where $d_j$ is the degree of node $j$ (the number of edges connected to it). I could try to prove that the weights are proportional to the degrees as follows:
\begin{equation} \begin{array}{rcl} \sum_i w_i &= & \sum_i\sum_j \frac{w_j}{d_j}A_{ij} \\ \sum_i\sum_j \frac{w_j}{d_j}A_{ij} &= &\sum_i\sum_j \frac{w_j}{d_j}A_{ji} \\ \sum_i\sum_j \frac{w_j}{d_j}A_{ij} &= &\sum_i\sum_j \frac{w_i}{d_i}A_{ij} \\ \end{array} \end{equation}
Then if the "coefficients" of the adjacency matrix have to be equal term by term, we get: $$\frac{w_j}{d_j}=\frac{w_i}{d_i}=C.$$ However, I was told that this is inappropriate, and I don't understand why the terms of the summation can't be set equal term by term in this case.
Decided to make my comments into an answer ...
The crucial property is linear independence. In fact, uniqueness of coefficients in linear combinations is essentially the definition of linear independence, and once coefficients are unique you can of course equate them exactly as you described.
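Concretely, suppose $\sum_k a_k v_k = \sum_k b_k v_k$ for linearly independent vectors $v_k$. Subtracting one side from the other gives

\begin{equation}
\sum_k \left(a_k - b_k\right) v_k = 0,
\end{equation}

and linear independence says the only vanishing linear combination of the $v_k$ is the trivial one, so $a_k = b_k$ for every $k$. In your first example, $b_k = a_k^*$, which yields $a_k = a_k^*$, i.e. real coefficients.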
Two special cases are worth mentioning, and are discussed below: (1) when the set of vectors forms a basis, and (2) when the vectors are orthogonal.
The vectors in a basis of a vector space are linearly independent by definition, so the "equating" technique works whenever you have linear combinations of basis vectors. The other key property of a basis, that it spans the entire vector space, is not relevant here -- we only need the linear independence of the basis vectors.
If a collection of nonzero vectors is pairwise orthogonal, then it is certainly linearly independent. So orthogonality is sufficient for the "equating" step (though certainly not necessary).
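Orthogonality also gives a direct way to see the uniqueness: taking the inner product of $f = \sum_k a_k P_k$ with a fixed $P_m$ kills every term except the $m$-th,

\begin{equation}
\langle f, P_m \rangle = \sum_k a_k \langle P_k, P_m \rangle = a_m \langle P_m, P_m \rangle
\quad\Longrightarrow\quad
a_m = \frac{\langle f, P_m \rangle}{\langle P_m, P_m \rangle},
\end{equation}

so each coefficient is determined by $f$ alone, which is exactly the uniqueness needed to equate term by term.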
All of the above assumes that the sums are finite.
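Your adjacency-matrix argument fails precisely because no linear independence is in play: after summing over $i$, both sides of each line are single numbers, and equality of two sums of scalars says nothing about the individual terms. A small counterexample (a hypothetical graph made of two disjoint edges, checked here with plain NumPy) exhibits weights that satisfy the defining equation even though $w_j/d_j$ is not constant:

```python
import numpy as np

# Two disjoint edges: nodes 0-1 form one component, nodes 2-3 another.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)          # degrees: all equal to 1

# Weights that differ between the two components.
w = np.array([1.0, 1.0, 2.0, 2.0])

# Check the defining equation w_i = sum_j (w_j / d_j) * A_ij.
lhs = w
rhs = A @ (w / d)
print(np.allclose(lhs, rhs))   # True: the equation holds ...

# ... yet w_j / d_j is NOT the same for every node:
print(w / d)                   # [1. 1. 2. 2.]
```

(For a connected graph the conclusion $w \propto d$ does in fact hold, but that follows from the Perron-Frobenius theorem applied to $A D^{-1}$, not from equating terms of the double sum.)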