Number of zeros in polynomial-exponential sums


Is there a bound (or even an exact count) on the number of real roots of a polynomial-exponential sum of the form $$f(x) = a_1b_1^x+a_2b_2^x+\cdots=\sum_{i=1}^N a_i b_i^x,$$ where the $b_i>0$ are distinct and $a_i\in\mathbb{R}$?

Clearly, if $N=1$ and $a_1\ne 0$, there is no root ($a_1 b_1^x \ne 0$ for all $x$). For $N=2$, the equation $a_1-a_2b_2^x=0$ has the single root $x=\frac{\ln(a_1/a_2)}{\ln b_2}$ (provided $a_1/a_2>0$ and $b_2\ne 1$). Can we say something about the number of roots in general (at least a bound depending on $N$, analogous to ordinary polynomials, where the number of distinct roots is at most the degree)?
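As a quick numerical sanity check of the $N=2$ root formula (the constants $a_1=6$, $a_2=2$, $b_2=3$ are arbitrary illustrative values):

```python
import math

# Illustrative constants (assumed values): f(x) = a1 - a2 * b2**x
a1, a2, b2 = 6.0, 2.0, 3.0

# Closed-form root: x = ln(a1/a2) / ln(b2)
x = math.log(a1 / a2) / math.log(b2)

print(x)                 # location of the unique root
print(a1 - a2 * b2**x)   # residual, should be ~0
```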

EDIT: I cannot find an answer to this question. Does some version of Descartes' rule of signs also apply to "exponential polynomials"? (It seems to be the case for $N=2$.)

There are 3 answers below.

Answer 1:

Consider $f_N(x) =\sum_{i=1}^N a_i b_i^x$, with $a_i\in\mathbb{R}$ and the $b_i>0$ distinct.

The idea is to use induction to show that $f_N$ has at most $N-1$ roots.

As you noted, the base cases $N=1$ and $N=2$ are easy to verify.

Induction hypothesis: assume that every such sum $f_n$ with $1\le n\le N$ terms (coefficients with the required properties) has at most $n-1$ roots.

Let $f_{N+1}(x)=\sum_{i=1}^{N+1} a_i b_i^x$ for arbitrary coefficients $a_i\in\mathbb{R}$, $b_i>0$, with not all $a_i$ zero; WLOG assume $a_1\ne 0$.

Then $$f_{N+1}(x)=\sum_{i=1}^{N+1} a_i b_i^x =a_1b_1^x \Big(1+\sum_{i=2}^{N+1} \frac{a_i}{a_1} \Big(\frac{b_i}{b_1}\Big)^x\Big)=a_1b_1^x\,(1+f_n(x))$$ for some exponential sum $f_n$ with $n \le N$ terms.
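The factoring step can be checked numerically; here is a minimal sketch with random coefficients (the seed and ranges are arbitrary; bases drawn from a continuous range are distinct with probability 1):

```python
import random

# Numerical check of the factoring f_{N+1}(x) = a_1 b_1^x (1 + f_n(x))
random.seed(0)
N = 4
a = [random.uniform(-2.0, 2.0) for _ in range(N + 1)]
b = [random.uniform(0.5, 3.0) for _ in range(N + 1)]
a[0] = 1.5  # ensure a_1 != 0

def f(x):
    # f_{N+1}(x) = sum_i a_i b_i^x
    return sum(ai * bi**x for ai, bi in zip(a, b))

def factored(x):
    # a_1 b_1^x * (1 + sum_{i>=2} (a_i/a_1) (b_i/b_1)^x)
    inner = 1.0 + sum((a[i] / a[0]) * (b[i] / b[0])**x for i in range(1, N + 1))
    return a[0] * b[0]**x * inner

for x in (-1.0, 0.0, 0.7, 2.5):
    assert abs(f(x) - factored(x)) <= 1e-9 * max(1.0, abs(f(x)))
print("factorization agrees")
```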

Suppose, contrary to our claim, that $f_{N+1}$ has more than $N$ roots. Since $a_1b_1^x$ never vanishes, $(1+f_n)$ has more than $N$ roots, and by Rolle's theorem $(1+f_n)'=f_n'$ has more than $N-1$ roots. But $f_n'$ is itself an exponential sum with $n\le N$ terms (its coefficients $\frac{a_i}{a_1}\ln\frac{b_i}{b_1}$ are nonzero because the bases are distinct), so by the induction hypothesis it has at most $n-1\le N-1$ roots, a contradiction. Therefore $f_{N+1}$ has at most $N$ roots.

To see that the bound is sharp, take $$f_N(x)=S(x,N):=\frac{1}{N!}\sum_{j=0}^{N-1} (-1)^j {N \choose j}(N-j)^x.$$

This $f_N$ vanishes precisely at $x = 1,\cdots,N-1$ (up to the factor $1/N!$ this is the inclusion-exclusion formula for the Stirling numbers of the second kind, which are $0$ at these integers). Hence $N-1$ is the best possible bound on the number of zeros.
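A minimal sketch verifying this extremal example numerically (the choice $N=5$ is arbitrary):

```python
from math import comb, factorial

def S(x, N):
    # S(x, N) = (1/N!) * sum_{j=0}^{N-1} (-1)^j * C(N, j) * (N - j)^x
    return sum((-1)**j * comb(N, j) * (N - j)**x for j in range(N)) / factorial(N)

N = 5
zeros = [x for x in range(1, N) if abs(S(x, N)) < 1e-9]
print(zeros)     # all of x = 1, ..., N-1 are zeros
print(S(N, N))   # nonzero at x = N
```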

Answer 2:

The original questioner asked if there was a version of Descartes' rule of signs for sums of exponential functions. The answer is yes.

Theorem. For $n\ge 0$, let $p_0>p_1>\cdots > p_n > 0$, and let $\alpha_j\ne 0$ be a real number for $j = 0$, $1$, ... , $n$. Then the function $$f(t) = \sum_{j=0}^n \alpha_j p_j^t$$ has no real zeros if $n=0$, and for $n\ge 1$ has at most as many real zeros as there are sign changes among the $n$ pairs of successive coefficients $$[\alpha_0,\alpha_1],\ [\alpha_1,\alpha_2],\ \cdots, [\alpha_{n-1},\alpha_n],$$ that is, pairs in which the two coefficients have opposite sign.
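As a quick numerical illustration of the theorem (all constants below are arbitrary assumed example values):

```python
# f(t) = 4^t - 4*3^t + 2^t + 1, sign pattern + - + +  =>  m = 2 sign changes
p     = [4.0, 3.0, 2.0, 1.0]    # strictly decreasing, positive
alpha = [1.0, -4.0, 1.0, 1.0]   # nonzero coefficients

def f(t):
    return sum(a * pj**t for a, pj in zip(alpha, p))

# m = number of successive coefficient pairs with opposite signs
m = sum(1 for u, v in zip(alpha, alpha[1:]) if u * v < 0)

# Count sign changes of f on a fine grid; each one brackets at least one zero
ts = [-20.0 + 0.001 * k for k in range(40001)]
crossings = sum(1 for t0, t1 in zip(ts, ts[1:]) if f(t0) * f(t1) < 0)

print(m, crossings)
assert crossings <= m   # the theorem's bound
```

Here the bound is attained: the example has exactly two real zeros.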

We note that the sum in the case $n=0$ plainly has no zeros, because it consists of the single term $\alpha_0 p_0^t$, which is either strictly positive or strictly negative. We also note that for arbitrary $n\ge 1$, the sum has no zeros if the entries in all $n$ pairs have the same sign, because then all the terms in the sum are either strictly positive or strictly negative. Equally elementary arguments show the result is true in the case $n=1$. The cases remaining are those with $n\ge 2$ in which the number $m$ of opposite-sign pairs satisfies $m\ge 1$. These cases we establish by induction on $n$.

Induction step: assume the theorem is true for an index $n\ge 1$ and every admissible $m$ with $0\le m\le n$; we show it must therefore also be true for the index $n+1$ and any $m$ in the as-yet unestablished cases where $1\le m \le n+1$.

Examine the sum $$f(t) = \sum_{j=0}^{n+1} \alpha_j p_j^t\ ,$$ and pull out some particular term, say the $k^{\rm th}$: $$f(t) = \alpha_k p_k^t +\!\!\! \sum_{j=0,j\ne k}^{n+1} \alpha_j p_j^t\ .$$ Since $p_k>0$, we can factor out the strictly positive quantity $p_k^t$ to get $$f(t) = p_k^t \Big[\alpha_k + \!\!\!\sum_{j=0,j\ne k}^{n+1} \alpha_j ({p_j}/{p_k})^{t}\Big]\ .$$ Now we define $$q_j = p_j/p_k\ .$$ Each constant $q_j$ is the ratio of two positive numbers, so $q_j>0$, and the sequence $q_j$ is strictly decreasing: $q_i>q_j$ for $i<j$. We define $$g(t) = \alpha_k+ \!\!\!\sum_{j=0,j\ne k}^{n+1} \alpha_j q_j^t\ ,$$ so that $f(t) = p_k^t g(t)$, and consider the derivative $$g'(t) = \sum_{j=0,j\ne k}^{n+1} \alpha_j\ln(q_j)\, q_j^t\ .$$ We define the nonzero constants $$\beta_j = \alpha_j\ln q_j\ ,$$ so that $$g'(t) = \sum_{j=0,j\ne k}^{n+1}\beta_j q_j^t\ .$$ We see that $g'$ is a function that matches the assumptions of the theorem, with one fewer term in the sum (index $n$). The sign of the ratio $\beta_j/\alpha_j$ is positive for $j<k$ and negative for $j>k$: since the $p_j$ are strictly decreasing we have $q_j>1$ for $j<k$ and $0<q_j<1$ for $j>k$, and $\ln x$ is positive for $x>1$ and negative for $x<1$.

Examine an arbitrary pattern of signs for the constants $\alpha_j$ in the function $f$. For example, if $f$ were the sum of ten terms, the sign of $\alpha_j$ might vary with $j$ as $$\pmatrix{ 0&1&2&3&4&5&6&7&8&9\cr +&+&-&-&+&-&-&+&-&+\cr}\ .$$ We have assumed $m\ge 1$, so there must be at least one pair of successive entries where the numbers in the pair differ in sign. In particular, there has to be a smallest value of $j$ where the sign of $\alpha_j$ differs from the sign of $\alpha_0$; for such a $j$ we must have $j\ge 1$. In the example given, we have $j=2$. Now in the expression for $g'$, choose $k={j-1}$; since we know $j\ge 1$, we must have $k\ge 0$, so there is indeed a constant $\alpha_k$ corresponding to that value of $k$. The resulting sum for the function $g'$ will have the term with the constant $\beta_k$ omitted, will have $\beta_j$ of the same sign as $\alpha_j$ for $j<k$, and will have $\beta_j$ of the opposite sign to $\alpha_j$ for $j>k$. In the example given, the pattern of signs of the coefficients $\beta_j$ will be $$\pmatrix{ 0&1&2&3&4&5&6&7&8&9\cr +&.&+&+&-&+&+&-&+&-\cr}$$ The dot indicates that the coefficient $\beta_1$ does not appear in the sum that defines $g'$. The key observation is that this process not only reduces the number of terms in the sum by one, but also reduces the number of sign changes by one: $\alpha_0,\dots,\alpha_k$ all share one sign, so $\beta_0,\dots,\beta_{k-1}$ and $\beta_{k+1}$ all share one sign (the flip at $k+1$ cancels the original sign change there), while the sign changes among the later pairs are preserved by the uniform flip. If $f$ has $m$ such sign changes, then $g'$ will have $m-1$. Since $g'$ is otherwise an exponential sum in the proper form to apply the theorem, by assumption $g'$ will have at most $m-1$ zeros.
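The sign bookkeeping of the reduction can be checked mechanically. A minimal sketch using the ten-term sign pattern from the example (the decreasing bases $p_j = 10, 9, \dots, 1$ are arbitrary assumed values; only their ordering matters):

```python
import math

# Ten-term example: signs of the alpha_j
signs = [+1, +1, -1, -1, +1, -1, -1, +1, -1, +1]
# Arbitrary strictly decreasing positive bases (assumed values)
p = [float(v) for v in range(10, 0, -1)]   # 10, 9, ..., 1

def sign_changes(s):
    return sum(1 for u, v in zip(s, s[1:]) if u * v < 0)

# k = j - 1, where j is the smallest index whose sign differs from alpha_0
j = next(i for i, s in enumerate(signs) if s != signs[0])
k = j - 1

# beta_i = alpha_i * ln(p_i / p_k); the term i = k is omitted.
# With p decreasing, ln(p_i/p_k) > 0 for i < k and < 0 for i > k.
beta_signs = [s * (1 if math.log(p[i] / p[k]) > 0 else -1)
              for i, s in enumerate(signs) if i != k]

print(sign_changes(signs), sign_changes(beta_signs))   # m and m - 1
```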

By an application of Rolle's theorem, since $g'$ has at most $m-1$ zeros, its antiderivative $g$ has at most $m$ zeros (between any two zeros of $g$ there must lie a zero of $g'$). But $f$ differs from $g$ only by the factor $p_k^t$, which is never zero, so $f$ must also have at most $m$ zeros. That completes the inductive step and proves the theorem.

Answer 3:

This seems related to something I worked on recently: a method to find all the real roots of any equation of the form $$f(x) = A_1 e^{B_1x} + A_2 e^{B_2x} + \ldots + A_N e^{B_Nx} = 0.$$

Any term $a_i b_i^x$ with $b_i > 0$ can be written as $a_i e^{\ln(b_i) x}$, so these are equivalent problems, with $A_i = a_i$ and $B_i = \ln(b_i)$.
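A minimal sketch of this change of form, checking numerically that the two representations agree (the coefficients are arbitrary illustrative values):

```python
import math

# Convert a_i * b_i**x  ->  A_i * exp(B_i * x), with A_i = a_i, B_i = ln(b_i)
a, b = [2.0, -1.5, 0.5], [0.5, 2.0, 3.0]   # illustrative values
A = a
B = [math.log(bi) for bi in b]

def f_base(x):
    return sum(ai * bi**x for ai, bi in zip(a, b))

def f_exp(x):
    return sum(Ai * math.exp(Bi * x) for Ai, Bi in zip(A, B))

for x in (-2.0, 0.0, 1.3, 4.0):
    assert abs(f_base(x) - f_exp(x)) <= 1e-9 * max(1.0, abs(f_base(x)))
print("equivalent")
```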

Therefore I think my method, described here and also implemented here: solve-exponentials would work to numerically find all the roots.

I know this doesn't really answer the question of how many roots there can be in an arbitrary case, but once you plug in concrete constants you can solve for all the roots and then you'll know.
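For concreteness, here is a generic scan-and-bisect sketch of that idea (not the linked method; the search window, step count, and tolerance are arbitrary heuristics, and only simple sign-change crossings are detected):

```python
import math

def find_roots(A, B, lo=-50.0, hi=50.0, steps=100000, tol=1e-12):
    """Numerically locate real roots of sum(A_i * exp(B_i * x)) by scanning
    a grid for sign changes and refining each bracket with bisection."""
    f = lambda x: sum(ai * math.exp(bi * x) for ai, bi in zip(A, B))
    roots = []
    step = (hi - lo) / steps
    x0, f0 = lo, f(lo)
    for k in range(1, steps + 1):
        x1 = lo + k * step
        f1 = f(x1)
        if f0 == 0.0:
            roots.append(x0)            # grid point landed exactly on a root
        elif f0 * f1 < 0:               # sign change brackets a root
            a_, b_ = x0, x1
            while b_ - a_ > tol:        # bisect down to the tolerance
                mid = 0.5 * (a_ + b_)
                if f(a_) * f(mid) <= 0:
                    b_ = mid
                else:
                    a_ = mid
            roots.append(0.5 * (a_ + b_))
        x0, f0 = x1, f1
    return roots

# Example: 2^x - 3 = 0  ->  x = log2(3)
print(find_roots([1.0, -3.0], [math.log(2.0), 0.0]))
```

Once the coefficients are fixed, the number of roots this returns can be compared against the Descartes-type bound from the other answer.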