Computational complexity of sine of Gamma function


In terms of bit complexity, how hard is it to compute $$\sin(a\Gamma(x))$$ where $a$ is a constant and $x>1$? Is it possible to avoid computing the gamma function as a first step? Is there a way to determine whether the expression equals $0$ without computing it directly?


Best answer:

All of this assumes we are only talking about computing values to arbitrary precision: the quantities involved are generally irrational, so an exact calculation would take infinite time regardless of the algorithm.

To answer your last question first: the zeros of $\sin(\theta)$ occur at $\theta = n\pi$ for integer $n$, so $\sin(a\Gamma(x)) = 0$ exactly when $a\Gamma(x) = n\pi$ for some integer $n$. You therefore still need to compute $\Gamma(x)$ (to enough precision to decide whether $a\Gamma(x)/\pi$ is an integer), but you don't then need to evaluate the sine itself.
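A minimal numerical sketch of that zero test, using mpmath (function name and the precision schedule are my own choices). Note the asymmetry: this can certify that the expression is *nonzero* once the distance to the nearest multiple of $\pi$ exceeds the error bound, but a persistent near-zero result is only evidence, not a proof, that it is exactly zero.

```python
from mpmath import mp, mpf, gamma, pi, nint, fabs

def is_probably_nonzero(a, x, max_prec=200):
    """Decide whether sin(a*Gamma(x)) is nonzero by checking whether
    a*Gamma(x)/pi is close to an integer at increasing precision."""
    for prec in (50, 100, max_prec):
        mp.dps = prec
        t = mpf(a) * gamma(mpf(x)) / pi   # sin(a*Gamma(x)) = 0 iff t is an integer
        dist = fabs(t - nint(t))          # distance to the nearest integer
        if dist > mpf(10) ** (-prec // 2):
            return True                   # far from any multiple of pi: nonzero
    return False                          # inconclusive: possibly zero
```

For example, `is_probably_nonzero(1, 2.5)` returns `True`, since $\Gamma(2.5)/\pi \approx 0.423$ is nowhere near an integer.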

To answer your first question: it is possible, but difficult. Most efficient implementations of trigonometric functions exploit their periodicity, so they do something like this:

  1. Reduce the argument of the function to something in a small interval (e.g. instead of looking at $\sin(1000000)$, look at $\sin(1000000-318309\pi) \approx \sin(2.784)$, remembering that subtracting the odd multiple $318309\pi$ flips the sign).

  2. Use something like a Taylor series or a Chebyshev polynomial approximation to evaluate the function on that small interval.
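The two steps above can be sketched in plain Python (double precision only; real libraries use a higher-precision value of $\pi$ for the reduction and Chebyshev-style polynomials rather than a raw Taylor series):

```python
import math

def sin_reduced(t):
    """Evaluate sin(t): reduce the argument modulo pi, then sum a
    Taylor series on the small remainder."""
    n = round(t / math.pi)      # nearest multiple of pi
    r = t - n * math.pi         # reduced argument, |r| <= pi/2
    # Taylor series: sin(r) = r - r^3/3! + r^5/5! - ...
    term, total = r, 0.0
    for k in range(1, 12):
        total += term
        term *= -r * r / ((2 * k) * (2 * k + 1))
    # sin(t) = (-1)^n sin(r): an odd multiple of pi flips the sign
    return -total if n % 2 else total
```

For moderate arguments this matches `math.sin` closely; for very large arguments the double-precision reduction itself starts to lose digits, which is exactly the problem discussed below.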

This post on StackOverflow goes into a little more detail on how that's implemented in computers.

So if you wanted to calculate $\sin(a \Gamma(x))$ accurately and efficiently for small $x$, you might try to find a Chebyshev expansion and use that, although even that would involve several steps of numerical approximation. But once $x$ gets large, things get hairy: $\Gamma(x)$ grows so quickly that the argument of the sine is soon enormous, so you need roughly $\log_{10} \Gamma(x)$ extra digits of working precision just to reduce it modulo $2\pi$, and getting a decent approximation becomes very expensive.
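To make the precision cost concrete, here is a sketch with mpmath (the function name and the guard-digit heuristic are mine). mpmath's `sin` handles the reduction internally at raised precision, but the input $a\Gamma(x)$ must itself carry enough digits, which is why the working precision is bumped by about $\log_{10}\Gamma(x)$:

```python
from mpmath import mp, mpf, gamma, loggamma, sin, log

def sin_a_gamma(a, x, target_dps=15):
    """Evaluate sin(a*Gamma(x)) to roughly target_dps accurate digits.

    Since the sine's argument has magnitude ~Gamma(x), we add about
    log10(Gamma(x)) guard digits so the reduction modulo 2*pi is meaningful."""
    mp.dps = 30
    guard = int(loggamma(mpf(x)) / log(10)) + 10  # digits consumed by the huge argument
    mp.dps = target_dps + guard
    return sin(mpf(a) * gamma(mpf(x)))
```

Already at $x = 30$ this needs roughly $31$ guard digits (since $\Gamma(30) = 29! \approx 8.8\times 10^{30}$), and the requirement keeps growing with $x$.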