Finding good approximation for $x^{1/2.4}$

I would like to find a good approximation (8 bits of accuracy) for $x^{1/2.4}$ on the range $[0, 1]$. This transform is used for converting linear intensities to sRGB-encoded values, so it's important that it runs fast.

Plot of the function: *(image not included)*

Using a simple polynomial isn't practical because

  1. the function's derivatives blow up near $x = 0$ (the slope is infinite at the origin), and
  2. away from $0$ the function is roughly asymptotic to $x$, which is very different from the behavior of high-order polynomials.

I already have code that constructs an arbitrary-degree polynomial for any function by minimizing the squared error, but even with a 10th-degree polynomial the accuracy is still only about 6 significant bits.

Then I learned about rational-function approximations, which have much better asymptotic behavior. The problem is that I don't know how to find the optimal coefficients. There's the Padé construction, which builds an approximation around a single point, but since it uses no global information it can be a very bad fit overall, just like a Taylor series.

I had Mathematica create an approximation of the form $(a_0 + a_1 x + a_2 x^2) / (1 + a_3 x)$ with PadeApproximant[x^(1/2.4), {x, 0.2, {3, 2}}], which is much better than a simple 3rd-degree polynomial but still not good enough, so I want to find a globally optimal solution, probably of the same form.

I tried finding a least-squares solution as before, but it leads to 4 huge non-linear polynomial equations, which Mathematica has been grinding on forever (I've waited half an hour so far).

Can anyone suggest how to solve those non-linear equations, or another way to find a rational function approximation, or an entirely different approximation?

Thanks for any help.

There are 3 answers below.

---

Consider that $\frac{1}{2.4} = \frac{5}{12} \approx 0.4167$; perhaps starting with a square root and approximating the remaining ratio is easier. Or just use approximations of the logarithm and exponential: those functions are implemented directly in hardware nowadays and are quite cheap. Or select some points judiciously and interpolate. Or approximate with splines. The magic fast inverse square root algorithm (really, its justification) might also give some ideas.

Just make sure this operation really is performance-relevant before going down this path.
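For instance, the exp/log identity $x^p = e^{p \ln x}$ mentioned above can be sketched directly (in Python here, standing in for the hardware intrinsics; the function name is illustrative):

```python
import math

def pow_via_explog(x: float, p: float = 1 / 2.4) -> float:
    """Compute x**p through the identity x^p = exp(p * log(x)).

    On hardware with fast exp/log this is often cheap enough by itself.
    """
    if x == 0.0:
        return 0.0  # log(0) is undefined; the limit of x^p as x -> 0+ is 0
    return math.exp(p * math.log(x))

# Agrees with the direct power to machine precision on (0, 1].
print(abs(pow_via_explog(0.5) - 0.5 ** (1 / 2.4)))
```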

---

I think your reasoning is correct -- a rational approximation is probably the best solution. But constructing rational approximations is difficult.

You have to decide how you will measure error. If you look at the maximum difference between the function and its approximation, you are doing "uniform" or "minimax" approximation, which is usually harder than least-squares approximation. As you mentioned, Padé approximation is not very good for this, because it's a Taylor-series-like approach fixated on a single point.
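As an aside, the least-squares fit you were attempting becomes tractable if you multiply through by the denominator: $\frac{a_0 + a_1 x + a_2 x^2}{1 + a_3 x} \approx y$ becomes $a_0 + a_1 x + a_2 x^2 - a_3 x y \approx y$, which is *linear* in the coefficients. (This weights the residuals by the denominator, so it is not exactly the original least-squares problem, but it gives a good starting fit instantly.) A sketch in Python with NumPy:

```python
import numpy as np

# Fit f(x) = (a0 + a1*x + a2*x^2) / (1 + a3*x) to x^(1/2.4) on [0, 1].
# Multiplying by the denominator gives a0 + a1*x + a2*x^2 - a3*x*y ~ y,
# which is linear in (a0, a1, a2, a3): ordinary least squares solves it.
x = np.linspace(0.0, 1.0, 1001)
y = x ** (1 / 2.4)

A = np.column_stack([np.ones_like(x), x, x**2, -x * y])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
a0, a1, a2, a3 = coeffs

approx = (a0 + a1 * x + a2 * x**2) / (1 + a3 * x)
max_err = np.max(np.abs(approx - y))
print(max_err)
```

The worst error will still sit near $x = 0$, where the infinite slope defeats any low-degree rational of this form, but the solve is immediate rather than taking half an hour.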

I would recommend that you try the Mathematica RationalInterpolation function, if you haven't already.

Or, if you have access to Matlab, there is an add-on called Chebfun that does a very good job of constructing minimax polynomial and rational approximations. There are commands named ratinterp and remez, and a couple of others. The name "remez" comes from the Remez Exchange Algorithm, which is the standard way of computing minimax approximations. This section of the Chebfun documentation explains the techniques that are available. Section 4.8 covers rational approximations.

There is a nice book by Trefethen, *Approximation Theory and Approximation Practice*, that provides a modern, computationally oriented account of approximation theory with a focus on Chebfun. Insightful and easy to read.

Edit

I tried the following Mathematica code (GeneralMiniMaxApproximation lives in the FunctionApproximations` package, which must be loaded first):

Needs["FunctionApproximations`"]
gx = GeneralMiniMaxApproximation[{t, 1 - t + t^(1/2.4)}, 
                             {t, {0, 1}, 3, 3}, x, 
                              MaxIterations -> 200, WorkingPrecision -> 50]

and then

hx = gx - 1 + x

This gave an approximation $hx$ of your function

$$ \frac{x^4 - 1.3730926529x^3 - 1.2231795079x^2 - 0.0120332034x - 3.892203211 \times 10^{-7}} {x^3 - 2.47604929086x^2 - 0.13996763348x - 0.00008072719}$$

The error function looks like this: *(error plot not included)*

Not quite 8 bits, but getting close. You can fiddle with the degrees of the numerator and denominator to try to get something better.
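A quick numerical check bears this out; here is the rational above evaluated in Horner form in Python, with the coefficients transcribed from the formula:

```python
def h(x: float) -> float:
    """The (4,3)-rational approximation of x^(1/2.4) found above, in Horner form."""
    # Coefficients copied from the GeneralMiniMaxApproximation result.
    num = (((x - 1.3730926529) * x - 1.2231795079) * x - 0.0120332034) * x \
          - 3.892203211e-7
    den = ((x - 2.47604929086) * x - 0.13996763348) * x - 0.00008072719
    return num / den

# Maximum error over a 1001-point grid on [0, 1].
max_err = max(abs(h(i / 1000) - (i / 1000) ** (1 / 2.4)) for i in range(1001))
print(max_err)  # roughly 0.005, a bit short of 8-bit (1/256) accuracy
```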

---

Continuing from the simple approximation method I provided in my other answer, it appears that the method works quite well for approximations like this one.

Thus, for example, noting that $\frac{5}{12} = \frac{6.666\ldots}{16}$, i.e. $\frac{5}{12} = \frac{1}{3}\left(\frac{6}{16} + 2 \cdot \frac{7}{16}\right)$, gives us the approximation $x^\frac{5}{12} \approx \frac{x^\frac{6}{16} + 2x^\frac{7}{16}}{3}$, which achieves 9-bit precision over the desired range and can be calculated quickly given a square root.
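Since the exponents have denominator 16, the whole thing needs only four nested square roots; a sketch in Python (the function name is illustrative):

```python
import math

def pow_5_12(x: float) -> float:
    """Approximate x**(5/12) using four square roots:
    x^(5/12) ~ (x^(6/16) + 2*x^(7/16)) / 3, valid since (6/16 + 2*7/16)/3 = 5/12.
    """
    s2 = math.sqrt(x)      # x^(1/2)
    s4 = math.sqrt(s2)     # x^(1/4)
    s8 = math.sqrt(s4)     # x^(1/8)
    s16 = math.sqrt(s8)    # x^(1/16)
    x_6_16 = s4 * s8       # x^(6/16) = x^(1/4 + 1/8)
    x_7_16 = x_6_16 * s16  # x^(7/16) = x^(6/16 + 1/16)
    return (x_6_16 + 2.0 * x_7_16) / 3.0

# Maximum error over a 1001-point grid on [0, 1].
err = max(abs(pow_5_12(i / 1000) - (i / 1000) ** (5 / 12)) for i in range(1001))
print(err)  # below 2**-9 ~ 0.00195, i.e. the claimed 9-bit precision
```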