Should I use powers of $2$ when possible in computations?


For example, if I want the most accuracy and efficiency when performing millions of iterations, wouldn't it be better to use $2^k$ -- as opposed to some other nearby number -- whenever possible, since the computer can divide by powers of $2$ (and perform some other arithmetic on them) without error?

More specifically, let's say for a problem I had to run some algorithm from time $0$ roughly to time $T = 2000$ using a time step of roughly $h = 0.005$. During each step of the algorithm I'm saving values to vectors and/or matrices. Would it be better to use something like $T = 2048$ and a time step $h = 1/256 \approx 0.0039$? I'm finding that some of my graphs look better (more as expected) when I use powers of $2$.
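One way to see the difference in the scenario above is to accumulate the time variable step by step, as a time-stepping loop would. This is a Python sketch; the step counts are chosen to match the $T$ and $h$ values in the question, and the exact drift for $h = 0.005$ depends on the platform's double precision:

```python
# h = 1/256 is a dyadic rational, so every partial sum k/256 is exactly
# representable in binary floating point and the loop time stays exact.
# h = 0.005 is not representable, so each addition rounds and errors build up.

def accumulated_time(h, n_steps):
    t = 0.0
    for _ in range(n_steps):
        t += h
    return t

h_exact = 1.0 / 256.0                           # exactly representable
t_exact = accumulated_time(h_exact, 2048 * 256)  # integrate up to T = 2048
print(t_exact == 2048.0)                         # True: no rounding at any step

h_approx = 0.005                                 # rounded to the nearest double
t_approx = accumulated_time(h_approx, 400_000)   # 400000 * 0.005 = 2000
print(t_approx)                                  # close to 2000, but typically not exact
```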

It's strange, though: I've read through a few numerical analysis texts and taken a few classes on the fundamentals of numerical methods, and I've never heard anyone specifically state such a thing.

BEST ANSWER

The value for $T$ does not matter.

The one for $h$ may be relevant: on computers, real numbers are stored in the form $x=m\,2^n$ with $m,n$ integers in a limited range.

So the best accuracy is reached when the numbers involved are exactly representable in this form.

For instance $h=\dfrac 1{300}$ is badly represented (because $300$ has prime factors other than $2$, so $\frac 1{300}$ has an infinite expansion in base $2$), while $h=\dfrac 1{256}$ is exactly represented.
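This can be inspected directly in Python, a sketch using `float.as_integer_ratio`, which returns the exact fraction a double actually stores, and `float.hex`, which shows the underlying $m\,2^n$ form:

```python
# 1/256 is stored exactly: the float really is the fraction 1/256.
print((1 / 256).as_integer_ratio())   # (1, 256)

# 1/300 cannot be stored exactly: 300 = 2^2 * 3 * 5^2 has prime factors
# other than 2, so its binary expansion is infinite and the stored value
# is only the nearest representable dyadic fraction.
num, den = (1 / 300).as_integer_ratio()
print(den != 300)                     # True: the stored denominator is a power of 2

# float.hex shows the m * 2^n form directly: 1/256 = 1.0 * 2^(-8).
print((1 / 256).hex())                # '0x1.0000000000000p-8'
```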

But it also depends on the operations you perform on $h$: ideally, the best accuracy is reached when every operation involving $h$ yields a representable number.

However, this is improbable unless you are performing only simple additions or multiplications. As soon as divisions or more complex operations are involved, the result is unlikely to be exact in base $2$ with a finite number of digits, so it will suffer rounding and error accumulation over time.
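The contrast shows up even in one-line checks. A Python sketch; these facts hold for any IEEE 754 binary floating-point format:

```python
# Operations that stay within representable dyadic rationals are exact...
print(0.25 + 0.5 == 0.75)   # True: all three values are exact in binary
x = 7.3
print(x / 2 * 2 == x)       # True: dividing/multiplying by 2 only shifts the exponent

# ...but a single operation with a non-dyadic result already rounds.
print(0.1 + 0.2 == 0.3)     # False: none of these decimals is representable
```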