Consider real polynomials of one variable $x$.
From Taylor's theorem we know that entire functions are expressible as "a polynomial of infinite degree", i.e. a power series expansion. We also know that if we take that power series and discard all the terms $x^M$ with $M$ larger than $n$, we get the best-fitting polynomial of degree $n$ near the expansion point.
So let us call that reducing (or truncating) mod $x^{n+1}$.
Now consider finding entire solutions like
$$ a(2 x) = b(a(x)) $$
where we search for an entire $a$, and an entire $b$ is given.
Step 1) Truncate $b$ to degree $n$ (trivially done if $b$ is a polynomial already).
Step 2) Truncate $a$ to the same degree and use dummy variables as its coefficients.
Now you can set up the $n+1$ equations in the $n+1$ unknown coefficients by expanding
$$ a(2x) = b(a(x)) \mod x^{n+1} $$
Let us call these solutions $a(n,x)$.
Now the logical question: do we have
$$ \lim_{n \to \infty} a(n,x) = a(x), $$
where $a(x)$ is a solution to the original equation?
So for instance consider
$$ f(3x) = 3 f(x) - 4 f(x)^3 $$
Do the solutions $f_n$ converge to $\sin(x)$? Or to another valid solution $f(x)$?
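As a concrete check (a sketch, not part of the question itself): if we fix the solution branch $a_0 = 0$, $a_1 = 1$ (other choices of $a_0$ turn out to force constant solutions), then matching coefficients of $f(3x) = 3f(x) - 4f(x)^3$ mod $x^{n+1}$ determines every further coefficient uniquely, and the values agree with the Taylor coefficients of $\sin x$. The function name `solve_coeffs` is ad hoc:

```python
from fractions import Fraction
from math import factorial

def solve_coeffs(N):
    """Match coefficients of f(3x) = 3 f(x) - 4 f(x)^3 mod x^(N+1),
    taking the branch a_0 = 0, a_1 = 1."""
    a = [Fraction(0)] * (N + 1)
    a[1] = Fraction(1)
    for k in range(2, N + 1):
        # c = [x^k] f(x)^3; since a_0 = 0, only a_1..a_{k-2} contribute
        c = sum(a[i] * a[j] * a[k - i - j]
                for i in range(1, k) for j in range(1, k - i))
        # comparing [x^k]:  3^k a_k = 3 a_k - 4 c   =>   a_k = -4c / (3^k - 3)
        a[k] = Fraction(-4) * c / (3**k - 3)
    return a

a = solve_coeffs(9)
# the truncated solutions reproduce the Taylor coefficients of sin(x)
for m in range(5):
    assert a[2 * m + 1] == Fraction((-1)**m, factorial(2 * m + 1))
```

Since each new coefficient is forced by the earlier ones, the truncated solutions $f_n$ are simply the Taylor polynomials of one fixed series, so (on this branch) the coefficients stabilize as $n$ grows.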
When does this method work, and when not?
Or do we get a double limit, like jumping from one correct solution to another?
To keep my question specific and short I mainly ask about the sine here, but of course this is a general question.
Also I was thinking about some variants of the idea above, but the ideas are not solid yet. Maybe it relates, or maybe it can be improved.
Here it is:
There is a unique solution to the following:
1) $d(x)$ is nonconstant.
2) $d(x)$ is analytic.
3) $d(3x) = 3 d(x) - 4 d(x)^3$
4) $d(5x) = 16 d(x)^5 - 20 d(x)^3 + 5 d(x) $
Then $d(x) = \sin(x)$.
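Conditions 3) and 4) are exactly the triple- and quintuple-angle formulas for the sine, so $\sin$ does satisfy the system; a quick numerical sanity check (of the identities only, not of uniqueness):

```python
import math

# conditions 3) and 4) are the triple- and quintuple-angle formulas
for x in [0.3, 1.1, 2.5, -0.7]:
    s = math.sin(x)
    assert abs(math.sin(3 * x) - (3 * s - 4 * s**3)) < 1e-12
    assert abs(math.sin(5 * x) - (16 * s**5 - 20 * s**3 + 5 * s)) < 1e-12
```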
So maybe this system of equations can somehow be solved with a kind of truncated polynomials, and then forced to converge to $\sin$?
And of course the sine is just an example again. This is again part of a bigger question.
There are more questions, but this is already long enough. I will ask the others later in a separate question.
Note: the limits may go to an analytic but not entire function, or the method might diverge, or the radius of convergence might approach $0$. Or something weird might happen, like $\sin(x) + \sin(nx)/n$, messing up the intuition of the derivative. I'm not sure such weirdness can occur, but any answer attempt should take this into account.
Sorry for this lengthy question.
Edit:
To give some idea about the more general case: would this method work to find, say,
$$ f(2x) = \exp(f(x)) \, ? $$
For this reason I'd like to add the tag tetration, but I have too many tags :/
Specific example
Let $f(X) = \sum_{n=0}^{\infty} a_n X^n $. Then if $$ f(3X) - 3f(X) + 4f(X)^3 \equiv 0 \pmod{X^k}, $$ we obtain $k$ equations for the first $k$ coefficients. Moreover, these equations have a very particular structure, as becomes clear by writing down the first few: \begin{align} a_0-3a_0+4a_0^3 &= 0 && (X^0) \\ 12 a_0^2 a_1 &= 0 && (X^1) \\ 12 a_1^2 a_0+(12a_0^2+6) a_2 &= 0 && (X^2) \\ 4 a_1^3+24 a_0 a_2 a_1+(12 a_0^2+24) a_3 &= 0 && (X^3). \end{align} Importantly, in all but the first, the highest-index coefficient in the $X^i$ equation is $a_i$; it appears only linearly, and its coefficient will turn out to be nonzero. Hence once we determine $a_0$ from the cubic, there is only one choice for each $a_i$.
Now, let's look at the first two equations. The first has three solutions, $$ a_0 = 0, \quad a_0 = \pm 1/\sqrt{2}. $$ Let's show that the latter force the solution to be constant. Suppose the lowest nonconstant power of $X$ is $n$, i.e. $a_1,\dotsc,a_{n-1}=0$ and $a_n \neq 0$. Then $$ (a_0+a_n 3^nX^n+\dotsb)-3(a_0+a_n X^n+\dotsb)+4(a_0+a_nX^n+\dotsb)^3 \\ \equiv (-2a_0+4a_0^3)+((3^n-3)+12a_0^2)a_n X^n \pmod{X^{n+1}} $$ The first term vanishes since $a_0=\pm 1/\sqrt{2}$. But for the second term to be zero, since $(3^n-3)+12a_0^2>0$ for any $n \geq 1$, we must have $a_n=0$, contradicting our assumption. Hence there can be no nonconstant power of $X$, so the only solutions with $a_0 \neq 0$ are constant.
For $a_0=0$, the second equation says we can choose $a_1$ freely. Call it $a$. Now, we know that one such solution is $\sin{aX}=\sum_{n=0}^{\infty} (-1)^na^{2n+1} X^{2n+1}/(2n+1)!$. But because each of the remaining equations is linear in its highest-index coefficient, with nonzero coefficient, the series is entirely determined by the choice of $a$, so it must be equal to $\sin{aX}$.
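A computational check of this (a sketch; the recursion below just restates the coefficient equations with $a_0 = 0$ and $a_1 = a$, and the function name `coeffs` is ad hoc): for, say, $a = 2$, the forced coefficients match those of $\sin 2X$.

```python
from fractions import Fraction
from math import factorial

def coeffs(a1, N):
    """Coefficients forced by f(3X) = 3 f(X) - 4 f(X)^3, given a_0 = 0, a_1 = a1."""
    a = [Fraction(0)] * (N + 1)
    a[1] = Fraction(a1)
    for k in range(2, N + 1):
        # [X^k] of f(X)^3; only indices >= 1 contribute since a_0 = 0
        c = sum(a[i] * a[j] * a[k - i - j]
                for i in range(1, k) for j in range(1, k - i))
        a[k] = Fraction(-4) * c / (3**k - 3)   # from (3^k - 3) a_k + 4 c = 0
    return a

a = coeffs(2, 9)
# coefficients of sin(2X): (-1)^m 2^(2m+1) / (2m+1)!
for m in range(5):
    assert a[2 * m + 1] == Fraction((-1)**m * 2**(2 * m + 1), factorial(2 * m + 1))
```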
As far as convergence goes, we have been working in the ring of formal power series $\mathbb{C}[[X]]$, which has a topology in which a sequence of series converges if all the coefficients are eventually constant. This is rather different from the ordinary convergence of a complex power series, but we know that the series will converge to an analytic function on an open disk centred at zero if it converges for some nonzero evaluation of $X$. In this case, we know that $\sin{ax}$ converges everywhere, so there's no problem. But it is useful to note that any such computations are really about formal series, since we don't use the analytic structure of $\mathbb{C}$ until we actually want to talk about convergence to a function.
So the following is a characterisation of $\sin{x}$: it is the unique nonconstant analytic solution $f$ of $f(3x) = 3f(x) - 4f(x)^3$ with $f'(0) = 1$.
More generally
The above gives two methods of showing the only possibilities: using the form of the equations for the coefficients, or trying to produce a contradiction from supposing that there is a different solution. Both of these may be used on the general problem. Notice the importance of the coefficient of the highest possible power not being zero: if this were not the case, one would have a free choice of the appropriate coefficient, as in the $a_0=0$ case. This will work for any analytic functional equation of the form $f(nx)=g(f(x))$ with $g$ analytic near $f(0)$, provided that there are sufficiently many nonzero coefficients to keep the number of possible solutions we are working with small (it may well be possible to prove this, but it seems unlikely to hold for general $g$).
The example $f(2x)=e^{f(x)}$ is unfortunately rather boring: comparing coefficients as above, you can check that it only has the constant solutions $f(x) = c$ with $c = e^c$, i.e. $f(x)=-W_n(-1)$, where $W_n$ is the $n$th branch of the Lambert W function.
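A numerical sanity check of one such constant (a sketch; this is just Newton's method on $g(c) = c - e^c$, started near the known fixed point of $\exp$ in the upper half-plane, rather than an evaluation of any particular $W_n$):

```python
import cmath

# Newton iteration for g(c) = c - exp(c) = 0, i.e. the constant c = e^c
c = 0.3 + 1.3j            # rough starting guess in the upper half-plane
for _ in range(50):
    c = c - (c - cmath.exp(c)) / (1 - cmath.exp(c))

# c = e^c, so the constant function f(x) = c solves f(2x) = e^{f(x)}
assert abs(c - cmath.exp(c)) < 1e-12
```

Note that $c = e^c$ has no real solutions (since $e^c > c$ on $\mathbb{R}$), which is why the constant solutions are genuinely complex.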