One reason the Einstein summation convention seems useful, at least in my very limited experience, is that in calculations involving some form of the chain rule together with changes of coordinates, it lets one skip many of the steps in which the order of summation is changed, steps that appeal essentially to the distributive and commutative properties of multiplication.
Question: can the Einstein summation convention be relevant in settings where one has infinite sums for which interchanging summation is not guaranteed and requires an appeal to something like the dominated convergence theorem, say by acting as if the variables satisfy some sort of non-commutative multiplication?
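As a reminder of the phenomenon the question refers to, here is a standard textbook counterexample (not part of the original question, just included for concreteness) in which the two iterated sums of a doubly indexed family disagree, so no blanket interchange is possible: $$a_{ij} = \begin{cases} 1 & j = i,\\ -1 & j = i+1,\\ 0 & \text{otherwise}, \end{cases} \qquad \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} a_{ij} = \sum_{i=1}^{\infty} (1 - 1) = 0 \;\neq\; 1 = \sum_{j=1}^{\infty}\sum_{i=1}^{\infty} a_{ij},$$ since every row sums to $0$, while the first column sums to $1$ and every later column to $0$.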
Or is the usefulness of the Einstein summation convention restricted inherently to settings where finite index sets are always guaranteed?
I have added an example of the phenomenon I am referring to as a community-wiki answer.
Example: Say we have an open subset $\mathscr{U}$ of an $n$-dimensional manifold $M$ and two smooth charts $\varphi, \tilde{\varphi}: \mathscr{U} \to U \subseteq \mathbb{R}^n$ (which by definition are diffeomorphisms onto their image $U$). For a point $p \in \mathscr{U}$, denote the two corresponding sets of local coordinates by$$\varphi(p)=:(x^1, \dots, x^n),\quad\quad\tilde{\varphi}(p)=:(\tilde{x}^1, \dots, \tilde{x}^n). $$ Then $T_p\mathscr{U} \cong T_pM$ (the isomorphism is valid because $\mathscr{U}$ is an open subset of $M$) is spanned by either of the following two sets of derivations $\mathcal{C}^{\infty}(M)\to\mathbb{R}$: $$\left\{ \left.\frac{\partial}{\partial x^1} \right|_p, \dots,\left.\frac{\partial}{\partial x^n}\right|_p \right\}, \quad\quad \left\{ \left.\frac{\partial}{\partial \tilde{x}^1}\right|_p, \dots, \left.\frac{\partial}{\partial \tilde{x}^n}\right|_p \right\}, $$ which are defined for $f \in \mathcal{C}^{\infty}(M)$ by $$\left.\frac{\partial}{\partial x^j}\right|_p(f) := \partial_j (f\circ \varphi^{-1})(\varphi(p))\quad \text{and}\quad \left.\frac{\partial}{\partial \tilde{x}^i}\right|_p(f):=\partial_i(f \circ \tilde{\varphi}^{-1})(\tilde{\varphi}(p)),$$ i.e. simply as the pullbacks (pre-composition is always contravariant) of the partial derivative operators on $U \subseteq \mathbb{R}^n$ under the respective inverse charts $\varphi^{-1}, \tilde{\varphi}^{-1}$.
Anyway, let $X \in T_p\mathscr{U}$. Then $X$ has both of the following coordinate representations: $$X = \sum_j X^j \left.\frac{\partial}{\partial x^j}\right|_p, \quad \quad X=\sum_i \tilde{X}^i \left.\frac{\partial}{\partial \tilde{x}^i}\right|_p. $$ We want to express the components $\tilde{X}^i$ in terms of the components $X^j$.
Without Einstein Summation Convention: By the Chain Rule, one has $$\left.\frac{\partial}{\partial x^j}\right|_p = \sum_i \frac{\partial \tilde{x}^i}{\partial x^j} \left.\frac{\partial}{\partial \tilde{x}^i}\right|_p, $$ so using the two coordinate formulae for $X$, we get $$\sum_i \tilde{X}^i \left.\frac{\partial}{\partial \tilde{x}^i}\right|_p = X = \sum_j X^j \left.\frac{\partial}{\partial x^j}\right|_p = \sum_j X^j \left( \sum_i \frac{\partial \tilde{x}^i}{\partial x^j} \left.\frac{\partial}{\partial \tilde{x}^i}\right|_p \right) = \sum_j \left( \sum_i X^j \frac{\partial \tilde{x}^i}{\partial x^j} \left. \frac{\partial}{\partial \tilde{x}^i} \right|_p \right). $$ Now comes the crucial moment, where the finiteness of the indexing sets lets us switch the order of summation: $$\sum_i \left( \sum_j X^j \frac{\partial \tilde{x}^i}{\partial x^j}\left. \frac{\partial}{\partial \tilde{x}^i}\right|_p \right) = \sum_i \left( \sum_j X^j \frac{\partial \tilde{x}^i}{\partial x^j} \right) \left.\frac{\partial}{\partial \tilde{x}^i}\right|_p = \sum_i \left( \sum_j \frac{\partial \tilde{x}^i}{\partial x^j} X^j \right) \left. \frac{\partial}{\partial \tilde{x}^i} \right|_p. $$ Using the linear independence of the $\left.\frac{\partial}{\partial \tilde{x}^i}\right|_p$ and comparing coefficients, we get our result: $$\tilde{X}^i = \sum_j \frac{\partial \tilde{x}^i}{\partial x ^j} X^j. $$
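Numerically, the explicit-summation derivation corresponds to nested loops over both indices. A minimal sketch (the Jacobian matrix and vector here are made-up sample values, not from any particular pair of charts):

```python
import numpy as np

n = 2
# J[i, j] plays the role of the Jacobian entry d(tilde x^i)/d(x^j) at p
# (sample values only, chosen arbitrarily for illustration).
J = np.array([[0.6, 0.8],
              [-0.8, 0.6]])
# Components X^j of the tangent vector in the x-coordinates.
X = np.array([1.0, 2.0])

# Explicit nested finite sums, mirroring the step-by-step derivation:
# tilde X^i = sum over j of (d tilde x^i / d x^j) * X^j.
X_tilde = np.zeros(n)
for i in range(n):
    for j in range(n):
        X_tilde[i] += J[i, j] * X[j]
```

Because both index sets are finite, the loop order over `i` and `j` is irrelevant here, which is exactly the interchange step the derivation makes explicit.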
With Einstein Summation Convention: By the Chain Rule, one has $$\left. \frac{\partial}{\partial x^j}\right|_p = \frac{\partial \tilde{x}^i}{\partial x^j}\left.\frac{\partial}{\partial \tilde{x}^i} \right|_p,$$ so comparing the two coordinate representations of $X$, $$\tilde{X}^i \left. \frac{\partial}{\partial \tilde{x}^i}\right|_p = X = X^j \left. \frac{\partial}{\partial x^j}\right|_p = X^j \frac{\partial \tilde{x}^i}{\partial x^j} \left.\frac{\partial}{\partial \tilde{x}^i} \right|_p = \frac{\partial\tilde{x}^i}{\partial x^j}X^j \left.\frac{\partial}{\partial \tilde{x}^i} \right|_p, $$ from which we again read off the desired result $$\tilde{X}^i = \frac{\partial \tilde{x}^i}{\partial x^j}X^j. $$
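Incidentally, NumPy's `einsum` implements exactly this notational convention for finite index sets: a repeated index in the subscript string is summed over implicitly. A sketch of the same computation (again with made-up sample values for the Jacobian and the components):

```python
import numpy as np

# Sample Jacobian J[i, j] ~ d(tilde x^i)/d(x^j) and components X^j
# (arbitrary illustrative values, as above).
J = np.array([[0.6, 0.8],
              [-0.8, 0.6]])
X = np.array([1.0, 2.0])

# 'ij,j->i' encodes tilde X^i = (d tilde x^i / d x^j) X^j:
# the repeated index j is summed over implicitly, Einstein-style.
X_tilde = np.einsum('ij,j->i', J, X)
```

Note that `einsum` is free to reorder and group the sums internally precisely because everything is finite.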
Comparing the two derivations, one could conclude (as I have) that the efficiency of the Einstein notation in this instance comes from suppressing the information about the order of summation that the standard notation makes explicit.
So in situations with infinitely many summation indices, can the Einstein summation convention still be used by pretending that multiplication is no longer commutative (i.e. to encode the non-interchangeability of the order of summation), or should it be avoided altogether in such situations?