I recently wrote How calculators do trigonometry, in which I gave a simple program for computing $\sin(x)$. For completeness, I include a slightly modified version of the program here:
function y=mysin(x)
    z=mod(x,2*pi);
    if 3*pi/2<=z
        y=-sin1(2*pi-z);
    elseif pi<=z
        y=-sin1(z-pi);
    elseif pi/2<=z
        y=sin1(pi-z);
    else
        y=sin1(z);
    end
end

function y=sin1(x)
    if x>pi/4
        y=cos1(pi/2-x);
    else
        n=0:7;
        m=2*n+1;
        y=sum((-1).^n.*x.^m./factorial(m));
    end
end

function y=cos1(x)
    if x>pi/4
        y=sin1(pi/2-x);
    else
        n=0:8;
        m=2*n;
        y=sum((-1).^n.*x.^m./factorial(m));
    end
end
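The kernels above keep only finitely many terms of the Taylor series: sin1 stops at $x^{15}/15!$ and cos1 at $x^{16}/16!$. Since both series alternate with decreasing terms on $[0,\pi/4]$, the truncation error is bounded by the first omitted term. A quick numerical check (a Python sketch, not part of the Matlab program):

```python
import math

# Alternating-series bound: truncating sin's series after x^15/15! (as the
# sin1 kernel does) gives an error of at most the first omitted term,
# x^17/17!, for x in [0, pi/4]; similarly x^18/18! for the cos1 kernel.
x = math.pi / 4
sin_bound = x**17 / math.factorial(17)
cos_bound = x**18 / math.factorial(18)
print(sin_bound, cos_bound)
```

Both bounds come out below $10^{-16}$, consistent with the accuracy claim below for exact arithmetic.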
Provided all of the calculations were done in exact arithmetic, it would compute $\sin(x)$ for any real $x$ to within an accuracy of $10^{-16}$. I noticed that it nevertheless has some error for large arguments; in particular, in Matlab I get
abs(sin(50000)-mysin(50000))=3.4861e-14.
I have concluded that this is caused by the rounding in the first step, where I compute mod(x,2*pi), because in Matlab I get
abs(mysin(50000)-sin(mod(50000,2*pi)))=0
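The size of this rounding effect can be estimated directly (a Python sketch, not the Matlab code above; the 40-digit value of $2\pi$ is an assumption, correct to the digits shown): the double-precision mod uses the double closest to $2\pi$ as the modulus, and that constant's representation error of about $2.4\cdot 10^{-16}$ gets multiplied by the roughly $7957$ periods removed from $x=50000$:

```python
import math
from fractions import Fraction

# 2*pi to 40 digits (assumed correct), versus the double used by mod/fmod.
TWO_PI_EXACT = Fraction("6.283185307179586476925286766559005768394")

k = Fraction(50000) // TWO_PI_EXACT           # number of full periods (7957)
exact_r = Fraction(50000) - k * TWO_PI_EXACT  # exact reduced argument
double_r = Fraction(math.fmod(50000, 2 * math.pi))  # fmod's result is exact

# The reduction error is roughly k * (2*pi - double(2*pi)), i.e. ~2e-12.
print(float(double_r - exact_r))
```

So the reduced argument itself is already off by the order of $10^{-12}$ before the sine kernel ever runs.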
How would I fix this? Would I need to perform this first step in higher precision somehow?
Edit: upon discussion with LutzL, I see that I can pose this problem in a more well-defined way as follows. Given any real number $x$, how can I calculate $x$ mod $2 \pi$ to within double precision, given a method for computing $\pi$ itself to arbitrary precision?
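One straightforward approach, given digits of $\pi$ to sufficient precision, is to do the reduction in exact rational arithmetic and round only the final result back to a double. A Python sketch (the 40-digit constant is an assumption; any sufficiently long expansion of $2\pi$ works):

```python
import math
from fractions import Fraction

# 2*pi to 40 digits; by assumption pi is computable to arbitrary precision.
TWO_PI = Fraction("6.283185307179586476925286766559005768394")

def mod_two_pi(x):
    """Reduce the double x modulo 2*pi, carrying out the arithmetic exactly
    in rationals; only the final conversion back to double rounds."""
    fx = Fraction(x)        # a double converts to a Fraction exactly
    k = fx // TWO_PI        # integer number of full periods (floored)
    return float(fx - k * TWO_PI)

print(abs(math.sin(mod_two_pi(50000)) - math.sin(50000)))
```

With 40 digits the constant's own error stays negligible for inputs of very large magnitude, and the discrepancy against the library sine at $x=50000$ drops to the order of one ulp; more digits can be appended if needed.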
The floating point number $50000$ actually represents an interval of numbers contained in $[50000·(1-\mu),50000·(1+\mu)]$, with $\mu=2^{-53}$. Determining the exact interval would require an analysis of the binary representation of $50000$.
Since the computer has no way of knowing which of the numbers inside this interval is the intended one, even the most exact reduction by $2\pi$ still has an uncertainty of $50000·\mu\approx 5·10^{-12}$, which translates into an error interval of the same magnitude in the sine computation.
Thus the observed error of $3·10^{-14}$ between the two methods of computation is well within these margins.
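The bound in this argument is easy to reproduce numerically (a Python sketch):

```python
# The spacing argument above: near 50000 the input is only known to a
# relative accuracy of mu = 2^-53, i.e. an absolute slack of ~5.6e-12.
mu = 2.0 ** -53
uncertainty = 50000 * mu
observed = 3.4861e-14          # the discrepancy measured in the question
print(uncertainty, observed < uncertainty)
```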