A friend of mine mentioned that, given functions $g(x, n)$ and $f(x)$ such that
$$g(x,n)=\overbrace{f(f(\cdots f(x)\cdots))}^{n\text{ times}}$$ for example $$g(x,1)= f(x)\\g(x,2) = f(f(x))$$
there is a closed-form solution. I don't know exactly where I would begin, but any help or pointers would be very much appreciated. I am also not sure whether I denoted the function $g(x,n)$ using the correct notation.
Thank you in advance.
Surely, for a function $f(x) = K + ax$ the iteration has a tractable form. The iteration is often written $f°^h(x)$, where the small circle indicates that the superscript $h$ means the iteration "height". For iterates of this $f(x)$ we repeatedly substitute $f(x)$ for $x$, so that we get
$$ \begin{array} {rlllll} f°^0(x) =& x & \qquad & \\ f°^1(x) =& f(x) & = K + ax \qquad & \\ f°^2(x) =& f(f(x)) & = K + a\cdot f(x) &= K + aK + a^2x \qquad & \\ f°^3(x) =& f(f°^2(x)) & = K + a\cdot f°^2(x) &= K + aK + a^2K + a^3x \qquad & \\ \ldots \\ \\ f°^h(x) =& f(f°^{h-1}(x)) & &= K(1+a+a^2+...+a^{h-1}) + a^h x \qquad & \\ \\ \\ =& K \cdot h+ x && \text{ for }a = 1 \\ =& K { a^h-1\over a-1} + a^h x && \text{ for }a \ne 1 \end{array} $$
The key here is to substitute the function for $x$ (and accordingly for its powers, if they occur) and then collect coefficients of like powers of $x$.
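The closed form above can also be checked numerically. Here is a minimal sketch (the function names and test values are my own, chosen arbitrarily):

```python
def iterate_f(x, K, a, h):
    """Apply f(x) = K + a*x exactly h times."""
    for _ in range(h):
        x = K + a * x
    return x

def closed_form(x, K, a, h):
    """Closed form: K*h + x if a == 1, else K*(a^h - 1)/(a - 1) + a^h * x."""
    if a == 1:
        return K * h + x
    return K * (a**h - 1) / (a - 1) + a**h * x

# A few spot checks, including the a = 1 case.
for K, a, x, h in [(2, 3, 5, 4), (2, 1, 5, 7), (0.5, -2, 1.5, 6)]:
    assert abs(iterate_f(x, K, a, h) - closed_form(x, K, a, h)) < 1e-9
```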
While the expression for the above linear function is surely tractable, polynomials of higher degree require much more complicated expressions when iterated, and functions given in terms of power series may be very complex or completely intractable when expressed in terms of an indeterminate symbol for the iteration height.
I found an approach to this problem via "matrix operators" or "Carleman matrices", which "convert function composition into matrix multiplication". To begin with, this is simply a notational framework for the manipulation of formal power series, namely composition and self-composition (a.k.a. iteration). Later it also suggests a route to iteration of fractional heights via fractional powers of matrices, as far as that can actually be done.
Perhaps the simplest of the Carleman matrices is the Pascal matrix, which, thanks to the binomial theorem, "implements" the increment function: the mapping $x^k \to (x+1)^k$, and via its $h$-th powers the mappings $x^k \to (x+h)^k$, which for exponent $k=1$ means iteration of the function $f(x) = x+1$. I write this as $ V(x) \cdot P = V(x+1) $ and $ V(x) \cdot P^h = V(x+h) $. Here $V(x)$ denotes a row vector of consecutive powers of its argument $x$, of the appropriate dimension (usually infinite), so $V(x)=[1,x,x^2,x^3,\ldots]$. A matrix multiplication with the Pascal matrix, $$V(x) \cdot P = V(y) = V(f(x))= V(x+1), $$ then looks like $$ \small [1 \;\; x \;\; x^2 \;\; x^3 \;\; x^4 \;\cdots] \cdot \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & \cdots \\ . & 1 & 2 & 3 & 4 & \cdots \\ . & . & 1 & 3 & 6 & \cdots \\ . & . & . & 1 & 4 & \cdots \\ . & . & . & . & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} = [1 \;\; y \;\; y^2 \;\; y^3 \;\; y^4 \;\cdots] \\ \ \\ \text {where } y = f(x) = 1+x . $$ Because the result has the same form as the argument (namely a $V(x)$-vector of consecutive powers of its argument, which I call a "Vandermonde vector"), we see immediately that this operation can be iterated; multiple applications of the Carleman matrix are then simply expressed by its powers, which perform iterations of the associated function $f(x)$.
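A truncated version of the Pascal matrix makes this concrete. The sketch below (with an arbitrary truncation size `N` of my choosing) checks that $V(x)\cdot P = V(x+1)$ and $V(x)\cdot P^h = V(x+h)$; since the entry $P[n,k]=\binom{k}{n}$ vanishes for $n>k$, the truncated products happen to be exact here:

```python
import numpy as np
from math import comb

N = 6  # truncation size (the true Carleman matrix is infinite)

# Pascal matrix as a Carleman matrix: P[n, k] = C(k, n),
# so that sum_n x^n * P[n, k] = (x + 1)^k.
P = np.array([[comb(k, n) for k in range(N)] for n in range(N)], dtype=float)

def V(x):
    """Row vector of consecutive powers ("Vandermonde vector")."""
    return np.array([x**n for n in range(N)], dtype=float)

x, h = 0.5, 3
assert np.allclose(V(x) @ P, V(x + 1))                             # one step
assert np.allclose(V(x) @ np.linalg.matrix_power(P, h), V(x + h))  # h steps
```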
Another convenient example is the Carleman matrix $S_2$ for the function $f(x)=\exp(x)-1$, given by $$V(x) \cdot S_2 = V(\exp(x)-1). $$ Here the Stirling numbers of the second kind are involved, and in the second column of the Carleman matrix we recognize the power-series coefficients of $f(x)$:
$$ \small [1 \;\; x \;\; x^2 \;\; x^3 \;\; x^4 \;\cdots] \cdot \begin{bmatrix} 1 & . & . & . & . & \cdots \\ . & 1 & . & . & . & \cdots \\ . & 1/2 & 1 & . & . & \cdots \\ . & 1/6 & 1 & 1 & . & \cdots \\ . & 1/24 & 7/12 & 3/2 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} = [1 \;\; y \;\; y^2 \;\; y^3 \;\; y^4 \;\cdots] \\ \ \\ \text {where } y = f(x)= \exp(x)-1 . $$
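In the same spirit, a truncated $S_2$ can be built from the Stirling numbers of the second kind, using the fact that the coefficient of $x^n$ in $(\exp(x)-1)^k$ is $k!\,S(n,k)/n!$. A sketch (the truncation order `N` and the helper `stirling2` are my own):

```python
import numpy as np
from math import comb, factorial

N = 8  # truncation order

def stirling2(n, k):
    """Stirling number of the second kind, via the standard alternating sum."""
    return sum((-1)**(k - j) * comb(k, j) * j**n for j in range(k + 1)) // factorial(k)

# S2[n, k] = coefficient of x^n in (exp(x) - 1)^k = k! * S(n, k) / n!
S2 = np.array([[factorial(k) * stirling2(n, k) / factorial(n)
                for k in range(N)] for n in range(N)])

def V(x):
    """Row vector of consecutive powers ("Vandermonde vector")."""
    return np.array([x**n for n in range(N)], dtype=float)

# The second column holds the series coefficients of exp(x) - 1.
assert np.allclose(S2[:5, 1], [0, 1, 1/2, 1/6, 1/24])

# For small x the truncated series reproduces V(exp(x) - 1) closely.
x = 0.05
assert np.allclose(V(x) @ S2, V(np.exp(x) - 1), atol=1e-8)
```

Unlike the Pascal case, the products here are only approximate, since the columns of $S_2$ are genuinely infinite power series cut off at order $N$.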
When I began to explore this field I wrote a small treatise; it is still very amateurish and should be updated, but it may have some motivating/orienting value.
If you have access to a library, you might look for *Advanced Combinatorics* by L. Comtet; around pages 140-150 he explains the same idea under the notion of "Bell polynomials".
A nice article online is "Continuous Iterations of Dynamical Maps" by Aldrovandi and Freitas on arXiv, and for the most general version of Carleman matrices you might consult the articles of Eri Jabotinsky from the 1940s and 1950s, which can also be found online.