Why are the elementary functions the ones that they are?


Many antiderivatives exist but cannot be expressed in terms of elementary functions. This got me thinking, well, what if we expanded the set of elementary functions? Would it work then?

Let's think about some elementary functions, such as addition. Addition is used to define multiplication (at least in the reals): a product is just addition repeated a certain number of times, and exponentiation is just multiplication repeated a certain number of times. This suggests an infinite sequence: let $f(x, y, n)$ be the "super exponential" where $x, y$ are reals and $f(x, y, n) = f(x, f(x, y - 1, n), n - 1)$, i.e. you apply the level-$(n-1)$ operation to $x$, $y$ times.

Simply put, $f(x,y,0) = x + y$ (addition), and $f(x,y,1) = f(x, f(x, \dots, 0), 0) = x + x + \dots + x$, with $y$ copies of $x$ (multiplication). I am too lazy to write out exponentiation, but you can see how the recursive definition gives it as well, by applying multiplication $y$ times, which itself applies addition repeatedly at every nesting. Thus the pattern continues on to infinity. However, as far as I know, we only consider the operations up to $n = 2$ (exponentiation) to be elementary.
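The recursion above can be sketched in a few lines of Python. This is a minimal illustration, not a rigorous definition: it uses the question's indexing ($n = 0$ is addition), assumes $y$ is a positive integer, and the name `hyper` is my own.

```python
def hyper(x, y, n):
    """Hyperoperation sequence with the question's indexing:
    n=0 addition, n=1 multiplication, n=2 exponentiation, n=3 tetration, ...
    Assumes y is a positive integer."""
    if n == 0:
        return x + y
    if y == 1:
        return x  # applying the lower-level operation "once" just gives x
    # apply the level-(n-1) operation to x, y times:
    return hyper(x, hyper(x, y - 1, n), n - 1)

print(hyper(2, 3, 0))  # 5   (2 + 3)
print(hyper(2, 3, 1))  # 6   (2 * 3)
print(hyper(2, 3, 2))  # 8   (2 ** 3)
print(hyper(2, 3, 3))  # 16  (2 ** 2 ** 2, tetration)
```

The values grow explosively with $n$, which already hints that the higher levels behave very differently from the first three.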

Trig has a similar thing: cos and sin are the 2D coordinates of a point on the unit circle. Well, I can define a "transine" as the $z$ coordinate of a point on a sphere in 3D and come up with generalized versions of sine and cosine that work there. And I can do this for any hypersphere, once again getting infinitely many such functions, each with its own properties.

The question isn't whether either of these two ideas actually solves the problem of finding closed forms for antiderivatives, but rather: what's so special about the functions we picked as elementary? Do they capture something so essential that adding more complexity, by increasing the dimension or the depth of recursion, won't really bring anything new to the table?