How do I best represent the number $\pi$ in binary?
I have been thinking about it for some time and, if it were me, I would use four bits for each digit, because four bits are sufficient to represent the decimal (base-10) digits. Something like:
0011, 0001, 0100, 0001, ...
I also wouldn't know how to represent the decimal point here.
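A minimal sketch of this four-bits-per-digit idea (binary-coded decimal) in Python, assuming we simply encode each decimal digit character and skip the point, since it has no standard code of its own:

```python
# Hypothetical sketch: encode the decimal digits of pi as 4-bit groups
# (binary-coded decimal).  The decimal point is simply skipped here.
PI_DIGITS = "3.14159265358979"  # first few digits; extend as needed

def to_bcd(digits: str) -> str:
    """Return each decimal digit as a 4-bit binary group."""
    return " ".join(format(int(ch), "04b") for ch in digits if ch.isdigit())

print(to_bcd(PI_DIGITS))  # starts: 0011 0001 0100 0001 ...
```

The first groups, 0011 0001 0100 0001, encode the digits 3, 1, 4, 1, matching the example above.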
I hope my question is clear enough.
Update
I will put this into context. I didn't want to, because I thought you would find it stupid.
I want to get a full back tattoo representing pi in binary, so what I am looking for is a representation that will be more or less simple to scale, as I will want thousands of digits, and I want it to be precise.
What you're suggesting is known as binary-coded decimal (BCD) representation. It is used by some simple calculators, and it used to be common in financial computing, because it can represent dollar amounts without rounding the cents (and it also makes it easier to produce decimal output for human consumption). But it is harder to calculate with than true binary representation, which for $\pi$ is
$$ \pi = 11.00100100001111110110101010001000\ldots_2 $$
as found here. Here each bit after the point represents successively $1/2$, $1/4$, $1/8$, $1/16$, ..., $2^{-n}$, ..., so according to this representation $$ \pi = 2+1+1/8+1/64+1/2048+1/4096+\cdots$$
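The binary digits after the point can be generated by repeatedly doubling the fractional part and taking the integer part as the next bit. A small sketch, using the rational approximation $355/113$ of $\pi$ as an assumed stand-in (it agrees with $\pi$ to about seven decimal digits, so only the first twenty or so binary digits it produces are correct):

```python
from fractions import Fraction

def binary_fraction_digits(x: Fraction, n: int) -> str:
    """First n binary digits after the point of x's fractional part."""
    frac = x - int(x)
    bits = []
    for _ in range(n):
        frac *= 2          # doubling shifts the binary point right by one
        bit = int(frac)    # the integer part is the next binary digit
        bits.append(str(bit))
        frac -= bit
    return "".join(bits)

approx_pi = Fraction(355, 113)  # assumption: a rational approximation of pi
print(binary_fraction_digits(approx_pi, 12))  # 001001000011
```

The twelve bits 001001000011 correspond exactly to the terms $1/8$, $1/64$, $1/2048$, $1/4096$ in the sum above. For a tattoo with thousands of correct digits you would feed in a correspondingly more precise value of $\pi$.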
In practical computing, irrational numbers are nowadays almost always represented in binary, rounded to a fixed number of significant binary digits and then represented in a binary variant of scientific notation. There is a widely used standard for how to do this, IEEE 754, which allows different programs or systems to exchange floating-point values without first converting them to decimal notation. (A recent revision of the standard defines more formats, but these are not as widely adopted.)
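For instance, the IEEE 754 double-precision (binary64) bit pattern of $\pi$ can be inspected directly with the Python standard library; the sketch below unpacks the sign, exponent, and 52-bit fraction fields:

```python
import math
import struct

# Pack pi as a big-endian IEEE 754 double and examine its 64 bits.
bits = struct.pack(">d", math.pi)
print(bits.hex())  # 400921fb54442d18

as_int = int.from_bytes(bits, "big")
sign = as_int >> 63                   # 1 sign bit
exponent = (as_int >> 52) & 0x7FF     # 11-bit biased exponent
mantissa = as_int & ((1 << 52) - 1)   # 52 fraction bits

# Unbiased exponent is 1, since pi lies in [2, 4).
print(sign, exponent - 1023, format(mantissa, "052b"))
```

The 52 fraction bits are exactly the binary digits of $\pi$ after the leading 1, rounded; everything beyond them is discarded, which is why fixed-size floats would be unsuitable for the thousands of digits the question asks for.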