I want to thank those who answered my previous post. Now,
I understand $\pi$ has been computed to 31 trillion decimal digits. Suppose I have computed $\pi$ to 3.3 trillion binary digits, which (since $\log_2 10 \approx 3.32$) is roughly equivalent to computing $\pi$ to a trillion decimal digits. How long would it take to convert the result to decimal?
For a small number of digits, I assume you just multiply the fractional part by, say, $10^6$ and pick off six digits at a time, repeating until you have all the digits. For a large number of digits, my guess is that this is much too slow.
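That digit-by-digit scheme can be sketched as follows (my own toy Python, with made-up names; the binary fraction is held as an integer numerator over $2^{\text{bits}}$). Each pass does one big-integer multiplication over the whole remaining numerator, so producing $N$ digits costs roughly $N/6$ full-length multiplications, which is quadratic overall:

```python
def binary_fraction_to_decimal(num, bits, ndigits):
    """Convert the binary fraction num / 2**bits to ndigits decimal digits,
    six at a time, by repeated multiplication (the naive method)."""
    digits = []
    while len(digits) * 6 < ndigits:
        num *= 10**6                          # shift six decimal places left
        chunk, num = divmod(num, 1 << bits)   # integer part = next six digits
        digits.append(f"{chunk:06d}")
    return "".join(digits)[:ndigits]

# Example: 1/3 rounded to 60 binary places, converted back to decimal.
frac = (1 << 60) // 3
print("0." + binary_fraction_to_decimal(frac, 60, 12))  # → 0.333333333333
```

With only 60 bits of input the later decimal digits would eventually drift from the true value of $1/3$, which is why the example stops at 12 digits.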
Also, when they compute $\pi$, do they compute in decimal or do they convert from binary?
Thanks for listening.
Well, if you need arbitrary-precision floating-point arithmetic, you have to implement it in software, and there you make the choice of whether to represent numbers internally in decimal or binary. Algorithms such as AGM or Ramanujan–Chudnovsky can be run in either.
Converting between arbitrary-precision binary and decimal floating point is slow work, even with divide-and-conquer splitting it scales as $O(M(N)\log N)$, where $M(N)$ is the cost of multiplying two $N$-digit numbers, and should be done once at the end rather than repeatedly.
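The standard fast conversion is divide-and-conquer: split the number around a power of ten with one big division, then recurse on both halves. A minimal Python sketch of the idea (illustrative only; real implementations such as GMP's radix conversion cache the powers of ten instead of recomputing them, and tune the base case):

```python
def to_decimal(n):
    """Convert a nonnegative integer to its decimal string by recursive
    splitting. Cost is O(M(N) log N) with fast multiplication/division."""
    if n < 10**18:
        return str(n)                 # base case: small enough to format directly
    # Estimate half the decimal digit count from the bit length (log10(2) ~ 0.30103).
    d = int(n.bit_length() * 0.30103) // 2
    hi, lo = divmod(n, 10**d)         # one big division splits the number
    return to_decimal(hi) + to_decimal(lo).zfill(d)  # low half needs leading zeros

print(to_decimal(2**100))  # → 1267650600228229401496703205376
```

The payoff over the digit-by-digit method is that each level of recursion halves the operand size, so the total work is dominated by a logarithmic number of full-size multiplications/divisions rather than a linear number of them.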