This question is related to this other one. I have this function:
$$f(n):=\sum_{k=1}^n\tan(k)$$
and I'm evaluating it using machine-precision numbers for each addend (my knowledge of how computers perform computations is poor, so apologies if I write something naive); that is, I'm not using symbolic evaluation for each $f(n)$. If I'm not wrong, machine numbers are floating-point numbers of 32 or 64 bits, depending on the machine's CPU.
Now observe that the function $\tan(k)$ takes arbitrarily big or small values (in absolute value); that is, it can be the case that
$$|\tan(k_1)|\approx 100000.000\ldots,\quad |\tan(k_2)|\approx 0.00000005\ldots$$
for suitable $k_1$ and $k_2$. Then the rounding of the values of $\tan(k)$ to machine precision produces totally different "levels" of absolute error depending on $k$.
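To make this concrete, here is a small check in Python (assuming IEEE double precision; `math.ulp` gives the spacing between a float and the next representable one, so half an ulp bounds the absolute rounding error of a correctly rounded value):

```python
import math

# Two integers whose tangents have very different magnitudes:
# 11 is close to 7*pi/2, so tan(11) is large (about -225.95);
# 355 is close to 113*pi, so tan(355) is tiny (about 3.0e-5).
big = math.tan(11)
small = math.tan(355)

# Half of math.ulp(x) bounds the absolute rounding error when
# storing x as the nearest double.
print(math.ulp(abs(big)))    # spacing near |tan(11)|
print(math.ulp(abs(small)))  # spacing near |tan(355)|, many orders smaller
```

So the absolute error attached to the large value is millions of times bigger than the one attached to the small value, even though the relative error of each is comparable (about one unit in the last place).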
How, then, can one estimate the accumulated error of $f$? I'm interested in the most commonly used analytic and computer-aided estimations of the error.
If the question is too broad, I would like to know whether there is a standard or generally accepted way to proceed for the function $f$ above.
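For example, one purely computational estimate I could imagine is this sketch in Python (an assumption on my part, not necessarily the standard method): `math.fsum` returns the correctly rounded sum of the given doubles, so comparing it with naive left-to-right summation isolates the error added by the summation itself, separately from the error already present in each computed $\tan(k)$.

```python
import math

n = 1000
terms = [math.tan(k) for k in range(1, n + 1)]

# Plain left-to-right accumulation in double precision.
naive = 0.0
for t in terms:
    naive += t

# Correctly rounded sum of exactly these doubles.
exact = math.fsum(terms)

# Error introduced by the summation order/rounding alone.
summation_error = abs(naive - exact)

# Condition number of the sum: large values signal heavy
# cancellation, i.e. the input errors in each tan(k) can be
# amplified by roughly this factor.
cond = sum(abs(t) for t in terms) / abs(exact)

print(summation_error, cond)
```

The condition number is the part that matters for the errors in the addends themselves: if it is large, even perfectly summed terms with relative error of one ulp each can give a result with few correct digits.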