I would like to compute an oscillating series in a numerically stable way. Imagine I have a signal $C(n)$, $n\in\mathbb{N}$, which decays exponentially with $n$, e.g. $C(n) = e^{-2n}$. Also, imagine I want to combine it with increasing and oscillating coefficients $g(n)$, say e.g. exponentially increasing $g(n) = e^n \sin{(n)}$, and in any case such that $\sum_n C(n) g(n) < \infty$. For instance, the above example converges to $-e \sin{(1)}/(2e\cos{(1)} - e^2 - 1)$.
Now, imagine I want to numerically compute the sum $\sum_n C(n)g(n)$. This procedure involves a great many cancellations of small numbers. I was wondering whether there is a known procedure for this type of problem, or if someone has a clever idea on how to compute it, since the numerical computation is highly unstable and possibly affected by rounding errors.
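For the toy series above, a standard safeguard against accumulated rounding in long sums is compensated (Kahan) summation. A minimal Python sketch (the helper name `kahan_sum` is mine, not from the question) comparing the floating-point sum against the closed form:

```python
import math

def kahan_sum(terms):
    """Compensated (Kahan) summation: carries a running correction term
    that recovers the low-order bits lost in each floating-point add."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for t in terms:
        y = t - c
        s = total + y
        c = (s - total) - y  # algebraically zero; captures the rounding error
        total = s
    return total

# Toy series from the question: sum_{n>=1} C(n) g(n) = sum_{n>=1} e^{-n} sin(n).
approx = kahan_sum(math.exp(-n) * math.sin(n) for n in range(1, 60))
exact = math.e * math.sin(1) / (math.e ** 2 + 1 - 2 * math.e * math.cos(1))
```

Note that this toy series is actually benign (the terms decay geometrically), so plain summation would also work here; compensated summation only becomes valuable when large terms of opposite sign nearly cancel.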
EDIT Thanks for the answer, @heropup. I will try to give a specific example. I am solving a linear system of the form $$ Hg = f \,,\quad H_{ij} = \frac{1}{i+j-1}\,,\quad i,j=1,\dots,N $$ where $H$ is the Hilbert matrix, $f$ is a known vector and $g$ is the unknown vector of coefficients. I am not interested in the coefficients themselves, but in their linear combination with a particular function $C(t)$, which I know should converge to a finite number. Numerically, though, the coefficients $g$ oscillate over many orders of magnitude (with $N=47$ they reach $10^{31}$), and many cancellations occur in the sum before convergence. I have tried many options, including regularization, but they are always affected by a systematic error. I was wondering whether some knowledge about this type of problem was available.
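Since the Hilbert matrix has exactly representable rational entries, one way to sidestep the cancellation entirely (at a cost in speed) is to solve the system in exact rational arithmetic with Python's `fractions`. This is only a sketch of the idea, not the asker's actual setup: the right-hand side $f$ below is fabricated from a known solution so the recovery can be checked.

```python
from fractions import Fraction

def hilbert(n):
    """Exact n-by-n Hilbert matrix, H[i][j] = 1/(i+j-1) with 1-based i, j."""
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def solve_exact(A, b):
    """Gaussian elimination with partial pivoting over the rationals.
    No rounding occurs, so the ill-conditioning of H causes no error."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Made-up example: choose g_true, build f = H g_true, then recover g exactly.
N = 8
H = hilbert(N)
g_true = [Fraction(1)] * N
f = [sum(row[j] * g_true[j] for j in range(N)) for row in H]
g = solve_exact(H, f)
```

In double precision this recovery already degrades badly for moderate $N$ (the condition number of the Hilbert matrix grows roughly like $e^{3.5N}$), whereas the rational solve is exact for any $N$; an arbitrary-precision library such as mpmath is a faster middle ground if $f$ is not exactly rational.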
Thanks!
This is not a complete answer, but there are a number of methods for improving the convergence of series in general, e.g., the Euler transform, and also Wynn's epsilon method. Their implementation and effectiveness depend on the particular summand.