Floating point division resulting in a value exceeding 1 but should be equal to 1...maybe!


I am computing the apparent magnitude of comets and minor planets using data from the Minor Planet Center. The formula I use has a division and that result is passed to the arccosine function (in Python).

Occasionally the result of the division is slightly greater than 1, for example

6.272370848320445 / 6.2723697617591405 = 1.0000001732297912

I assume the result is not actually greater than 1; rather the result is an artefact of floating point arithmetic, particularly given the number of zeros after the decimal point.

If my assumption is correct, I want to catch this sort of situation and treat the result as if the division yielded exactly 1 (and so can safely be passed to arccosine).

My current idea is to pick an arbitrary number of decimal places, say five, and truncate the division result: if the truncated value is 1, I treat the quotient as exactly 1; otherwise, I drop the result.

I'm not sure whether what I'm proposing is mathematically sound, so I'm looking for guidance. I have assumed this issue arises from floating point arithmetic, but if that assumption is wrong, I'm not sure what else I can do. It is also possible that the source data from the Minor Planet Center contains spurious values causing a genuine error (in which case I shouldn't work around it but should just let it fail).
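To make the failure concrete, here is a minimal Python reproduction using the numbers from the question; `math.acos` raises `ValueError` for any argument outside $[-1, 1]$:

```python
import math

# Two nearly equal distances; the quotient comes out just above 1,
# which is outside the domain of math.acos.
numerator = 6.272370848320445
denominator = 6.2723697617591405

ratio = numerator / denominator
print(ratio)  # slightly greater than 1

try:
    beta = math.acos(ratio)
except ValueError as err:
    print("math.acos failed:", err)
```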

The formula can be found here (using the H,G model).

3 Answers

Accepted answer:

I have experienced exactly the same situation with round-off (in a calculation of finite rotations in C#). You know theoretically that the division result is at most 1, so you can implement it as follows:

$\max(-1.0, \min(1.0, x/y))~$ instead of $~x/y$

and your code will be safe.
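In Python the same clamp can be wrapped in a small helper (a sketch; `safe_acos` is just an illustrative name, not part of any library):

```python
import math

def safe_acos(x):
    """Arccosine with the argument clamped to [-1.0, 1.0].

    Guards against arguments that stray outside the domain of acos
    only through floating-point round-off.
    """
    return math.acos(max(-1.0, min(1.0, x)))

# The problematic quotient from the question now yields 0.0
# instead of raising ValueError.
print(safe_acos(6.272370848320445 / 6.2723697617591405))
```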

Answer:

In your example the numerator is slightly larger than the denominator (look at the fifth decimal place), so the quotient really should be greater than $1$.

Pasting the division into the Google search bar gives $1.00000017323$.

Your program is calculating the quotient correctly. If it should be no larger than $1$ the problem is upstream where the data comes from.

Answer:

I looked at the link you mention in your question and I don't have time to study all of the details. However, the expression

beta = acos((rp*rp + rho*rho - rsn*rsn)/(2*rp*rho));

looks like an instance of the law of cosines. This can be rewritten as

beta = acos(1 - (rsn+rho-rp)*(rsn-rho+rp)/(2*rp*rho));

The problem in computing this is that rsn+rho-rp or rsn-rho+rp loses precision when rsn and |rho-rp| are of comparable magnitude, and the product may even come out negative. This happens when the triangle is close to degenerate, i.e. when one of its angles becomes very close to zero. In these cases you can use the haversine idea and the formula

beta = 2*asin(sqrt(max(0, (rsn+rho-rp)*(rsn-rho+rp)/(4*rp*rho))));

or the approximation when the angle is close to zero

beta = sqrt(max(0, (rsn+rho-rp)*(rsn-rho+rp)/(rp*rho)));

where the max(0,.) ensures that we don't attempt to take the square root of a negative number.

Note that the acos(x) formula is great when $x$ is close to zero, but the asin(x) formula is better when $x$ is close to one in absolute value.
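A short Python sketch of both forms (the variable names rp, rho, rsn follow the answer above; the test values are made up for illustration, and the clamps mirror the max(0, .) and domain guards discussed):

```python
import math

def beta_acos(rp, rho, rsn):
    """Direct law-of-cosines form; argument clamped to the domain of acos."""
    c = (rp*rp + rho*rho - rsn*rsn) / (2*rp*rho)
    return math.acos(max(-1.0, min(1.0, c)))

def beta_asin(rp, rho, rsn):
    """Haversine-style form; better conditioned when beta is small."""
    s2 = max(0.0, (rsn + rho - rp) * (rsn - rho + rp) / (4*rp*rho))
    return 2 * math.asin(math.sqrt(s2))

# A 3-4-5 right triangle: both forms give beta = pi/2.
print(beta_acos(3.0, 4.0, 5.0), beta_asin(3.0, 4.0, 5.0))

# A nearly degenerate triangle (rsn barely above |rho - rp|): the
# asin form stays well behaved as beta approaches zero.
print(beta_acos(2.0, 1.0, 1.0000000001),
      beta_asin(2.0, 1.0, 1.0000000001))
```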