Find the smallest k such that $n^k > \sum_{i=0}^{n-1} i^k$


Let $n \in \mathbb{N}$. Is it possible to find the smallest $k \in \mathbb{N}$ such that $$n^k > \sum_{i=1}^{n-1} i^k \ ?$$


It's easy to prove that such a $k$ exists, because:

$$n^k > 1^k + 2^k + 3^k + 4^k + \dots + (n-2)^k + (n-1)^k$$ $$ \iff 1 > \left(\frac 1n\right)^k + \left(\frac 2n\right)^k + \left(\frac 3n\right)^k + \dots + \left(\frac {n-1}n\right)^k$$

And since every fraction on the right tends to zero as $k \to \infty$, the right-hand side eventually drops below $1$, so such a $k$ must exist.
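For small $n$, the least such $k$ is easy to compute by brute force (a quick sketch; `smallest_k` is an ad-hoc name):

```python
def smallest_k(n):
    # least k in N with n^k > 1^k + 2^k + ... + (n-1)^k (exact integer arithmetic)
    k = 1
    while n**k <= sum(i**k for i in range(1, n)):
        k += 1
    return k

print([smallest_k(n) for n in range(2, 11)])  # → [1, 2, 2, 3, 4, 4, 5, 6, 6]
```

The values already suggest growth at a rate of roughly $0.7$ per unit of $n$.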

I've tried using logarithms, but I wasn't able to get anywhere.

Then I tried to find the value of the sum using Faulhaber's formula, but Faulhaber's formula depends on the exponent $k$, and since $k$ is not fixed here, that is of little use.

Any other way to solve this?

There are 4 answers below.

Best answer

Using the inequality $$ e^{-x/(1-x/k)}\le\left(1-\frac xk\right)^k\le e^{-x}\tag{1} $$ and setting $x=\frac{ik}{n}$ we get $$ \begin{align} \frac1{n^k}\sum_{i=0}^{n-1}i^k &=\frac1{n^k}\sum_{i=1}^n(n-i)^k\\ &=\sum_{i=1}^n\left(1-\frac in\right)^k\\ &\le\sum_{i=1}^ne^{-ik/n}\\ &\le\frac1{e^{k/n}-1}\tag{2} \end{align} $$ The right-hand side of $(2)$ is at most $1$ as soon as $k\ge n\log(2)$; thus, $k=\log(2)n$ is an upper bound for the smallest $k$.
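As a numerical sanity check (not part of the proof), one can verify that the ratio in $(2)$ stays below the geometric bound, and below $1$, at $k=\lceil n\log 2\rceil$; `ratio` is an ad-hoc name:

```python
import math

def ratio(n, k):
    # (1/n^k) * sum_{i=0}^{n-1} i^k, written as a sum of (i/n)^k to avoid huge powers
    return sum((i / n) ** k for i in range(1, n))

for n in [10, 50, 200, 1000]:
    k = math.ceil(n * math.log(2))
    geom = 1 / (math.exp(k / n) - 1)  # the bound 1/(e^{k/n} - 1) from (2)
    assert ratio(n, k) <= geom <= 1
```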

Assuming $k/n$ is near $\log(2)$, we have for any $\alpha\gt0$, $$ \begin{align} \hspace{-1cm}\frac1{n^k}\sum_{i=0}^{n-1}i^k &=\frac1{n^k}\sum_{i=1}^n(n-i)^k\\ &=\sum_{i=1}^n\left(1-\frac in\right)^k\\ &\ge\sum_{i=1}^ne^{-ik/(n-i)}\\ &=\sum_{i=1}^ne^{-ik/n}-\sum_{i=1}^ne^{-ik/n}\left(1-e^{-i^2k/n/(n-i)}\right)\\ &\ge\color{#00A000}{\sum_{i=1}^ne^{-ik/n}}\color{#C00000}{-\sum_{i=1}^ne^{-ik/n}\frac{i^2k}{n(n-i)}}\\ &\ge\color{#00A000}{\sum_{i=1}^\infty e^{-ik/n}}\color{#C00000}{-\frac1{1-\alpha}\sum_{i=1}^{\alpha n}e^{-ik/n}\frac{i^2k}{n^2}-\sum_{i=\alpha n+1}^ne^{-ik/n}\frac{i^2k}{n}}\color{#00A000}{-\sum_{i=n+1}^\infty e^{-ik/n}}\\ &\ge\frac1{e^{k/n}-1}-\frac1{1-\alpha}\frac{k}{n^2}\frac{e^{-k/n}+e^{-2k/n}}{(1-e^{-k/n})^3}\color{#0000FF}{-e^{-\alpha k}\frac{nk}{e^{k/n}-1}-\frac{e^{-n}}{e^{k/n}-1}}\\ &\sim\frac1{e^{k/n}-1}-\frac{6\log(2)}{(1-\alpha)n}\tag{3} \end{align} $$ because the blue terms decay exponentially.

Since the derivative of $\frac1{e^x-1}$ at $x=\log(2)$ is $-2$, we get $\frac1{e^{k/n}-1}-\frac{6\log(2)}{n}=1$ at approximately $k=\log(2)(n-3)$. With a bit more care, we see that more precisely: $$ k=\log(2)(n-3)-\frac{3\log(2)(17\log(2)-6)}{2n}+\dots\tag{4} $$ Therefore, $k=\log(2)(n-3)$ is a good estimate for the lower bound.
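A brute-force cross-check (exact integer arithmetic; `smallest_k` is an ad-hoc name) confirms that the true threshold sits between the estimate $\log(2)(n-3)$ and the upper bound $\lceil n\log 2\rceil$ for a few sample $n$:

```python
import math

def smallest_k(n):
    # least integer k with n^k > 1^k + ... + (n-1)^k
    k = 1
    while n**k <= sum(i**k for i in range(1, n)):
        k += 1
    return k

for n in [20, 50, 100]:
    k = smallest_k(n)
    assert math.log(2) * (n - 3) <= k <= math.ceil(n * math.log(2))
```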


Proof of the Inequality

Since $1+x\le e^x$ for $x\in\mathbb{R}$, we have $$ 1-\frac xk\le e^{-x/k}\le\frac1{1+\frac xk}=1-\frac{x}{k+x}\tag{5} $$ Therefore, $$ e^{-x/(k-x)}\le1-\frac xk\le e^{-x/k}\tag{6} $$ Raising to the $k^\text{th}$ power, $$ e^{-x/(1-x/k)}\le\left(1-\frac xk\right)^k\le e^{-x}\tag{7} $$ as desired.
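The inequality is also easy to spot-check numerically for sample values of $x$ and $k$ (a sanity check only, not a substitute for the proof):

```python
import math

# check e^{-x/(1-x/k)} <= (1 - x/k)^k <= e^{-x} for sample 0 < x < k
for k in [5, 20, 100]:
    for x in [0.5, 1.0, k / 2, 0.9 * k]:
        mid = (1 - x / k) ** k
        assert math.exp(-x / (1 - x / k)) <= mid <= math.exp(-x)
```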

Answer

Note: This post contains only numerics; see the answers by robjohn and Jack D'Aurizio for actual math.

Numerical computations show that $$k(n) = \lfloor n\log 2-0.039721\rfloor \quad \text{ for } \ 2\le n<944029$$ which is kind of nice. However, this empirical formula breaks down for $n=944029$. What is worse, it turns out there is no $c$ such that $k(n) = \lfloor n\log 2-c\rfloor$ holds for all $n$. Indeed, we have $$k(944029) = 654351, \quad \text{with } 944029\log(2)-654351 < 0.039717$$ $$k(603162) = 418079, \quad \text{with } \ 603162 \log 2 - 418079 > 1.0397208$$ So, the range of values of $n\log 2-k(n)$ exceeds $1$, if only by a tiny bit.
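For small $n$, the empirical formula is easy to reproduce with exact integer arithmetic (`smallest_k` is an ad-hoc name for the threshold $k(n)$):

```python
import math

def smallest_k(n):
    # k(n): least integer k with n^k > 1^k + ... + (n-1)^k
    k = 1
    while n**k <= sum(i**k for i in range(1, n)):
        k += 1
    return k

# the empirical formula on a (tiny) part of the claimed range 2 <= n < 944029
for n in range(2, 60):
    assert smallest_k(n) == math.floor(n * math.log(2) - 0.039721)
```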

Still, it is reasonable to conjecture that $$\lfloor n\log 2\rfloor-1 \le k(n) \le \lfloor n\log 2\rfloor \tag{??}$$ for all $n\ge 2$.

The upper bound was proved by Jack D'Aurizio and by robjohn; also, robjohn gave a lower bound that is very close to (??).

Answer

I was having some luck by defining $$ S_n(x) = \sum_{i=1}^{n-1} \left(\frac{i}{n}\right)^x. $$

At the very least, you get a reasonable justification for the $\log{2}$ coefficient. With minimal algebra, one sees $$ S_n(x) = \left(\frac{n-1}{n}\right)^x \left[1+S_{n-1}(x)\right]. $$
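The recursion is exact and easy to verify numerically (names here are ad hoc):

```python
def S(n, x):
    # S_n(x) = sum_{i=1}^{n-1} (i/n)^x
    return sum((i / n) ** x for i in range(1, n))

# check S_n(x) = ((n-1)/n)^x * (1 + S_{n-1}(x)) at sample points
for n in [5, 12, 30]:
    for x in [1.0, 2.5, 7.0]:
        assert abs(S(n, x) - ((n - 1) / n) ** x * (1 + S(n - 1, x))) < 1e-9
```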

Define $x_n$ such that $S_n(x_n)=1$. We write $x_{n}=x_{n-1}+\Delta_n$. So, $$ S_n(x_n) = 1 = \left(\frac{n-1}{n}\right)^{x_n}\left[1+S_{n-1}(x_{n-1}+\Delta_n)\right] = \left(\frac{n-1}{n}\right)^{x_n}\left[2+\Delta_nS_{n-1}'(x_{n-1}) + O(\Delta_n^2)\right]. $$

The part I don't like is neglecting everything else, but suppose the derivatives of $S_{n-1}$ are small at $x_{n-1}$, in which case $$ x_n \approx \frac{\log{2}}{\log(n/(n-1))} = \frac{(n-1)\log{2}}{1+O(1/(n-1))} \approx (n-1)\log{2}. $$

However, this might motivate a perturbation solution $x_n-x_{n-1} = \log{2}+O(\epsilon)$ or a limit $\lim_{n\to\infty} (x_n-x_{n-1}) = \log{2}$.

So, not a full solution, but I think a possible, worthwhile line of argument to pursue.
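The conjectured limit $\lim_{n\to\infty}(x_n-x_{n-1})=\log 2$ can be probed numerically by solving $S_n(x)=1$ with bisection. This is a sketch (names are mine), using the bracket $[0,n]$, which contains the root for $n\ge 3$ since $S_n(0)=n-1>1$ while $S_n(n)<1$:

```python
import math

def S(n, x):
    # S_n(x) = sum_{i=1}^{n-1} (i/n)^x
    return sum((i / n) ** x for i in range(1, n))

def x_root(n):
    # solve S_n(x) = 1 by bisection; S_n is decreasing in x, and for n >= 3
    # the bracket [0, n] works: S_n(0) = n - 1 > 1 while S_n(n) < 1/(e-1) < 1
    lo, hi = 0.0, float(n)
    for _ in range(60):
        mid = (lo + hi) / 2
        if S(n, mid) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

gaps = [x_root(n) - x_root(n - 1) for n in range(40, 45)]
print(gaps)  # each gap is close to log 2 ≈ 0.6931
```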

Answer

Borrowing @Jason's notation: $$S_n(x)=\sum_{i=1}^{n-1}\left(\frac{i}{n}\right)^x$$ we have that: $$S_n(x)=-1+(1+1/n)^x\cdot S_{n+1}(x)\leq -1+e^{x/n}\cdot S_{n+1}(x),\tag{1}$$ so in order to have $S_n(x)<1$, it is sufficient that $S_{n+1}(x)<2\cdot e^{-x/n}$.

Now I state, for later proof: $$ 0<S_{n+1}(x)-S_{n}(x)<\frac{1}{x+1}\tag{2}.$$ Combining $(1)$ and $(2)$ we get: $$S_n(x)< -1+(1+1/n)^x\cdot\left(\frac{1}{x+1}+S_n(x)\right),$$ $$\frac{S_n(x)}{S_n(x)+\frac{1}{x+1}}<-1+(1+1/n)^x<e^{x/n}-1,$$

$$S_n(x) < \frac{e^{x/n}-1}{(2-e^{x/n})(x+1)}.\tag{3}$$

Now it is clear that if $x$ is about $n\log 2$ (how far off can be determined by solving $\mathrm{RHS}=1$, i.e. $e^{x/n}=\frac{2x+3}{x+2}$, or $x=n\cdot\log\left(2-\frac{1}{x+2}\right)$, through Newton's method, for instance), the $\mathrm{RHS}$ is less than $1$ and we are done:

$$ S_n\left(n\cdot \log\left(2-\frac{1}{n\log 2}\right)\right)<1\tag{4}$$

is a straightforward consequence of $(3)$; moreover, it goes a little beyond the $n\cdot\log 2$ wall. Now, proving that we cannot hope to have $S_n(x)<1$ for an $x$ well below $n\cdot\log 2$ is just an exercise in reversing inequalities: we just need a $\frac{C}{x+1}$ lower bound for $S_{n+1}(x)-S_{n}(x)$ to replace the LHS of $(2)$ (maybe through the Newton–Cotes formulas with a quantitative error term?). Moreover, we already know that $S_n(x)<1$ implies $x>\frac{2n}{3}-1$ by a straightforward Riemann-sums-and-convexity argument shown by @127.0.9.6: just a tiny piece of work is still needed.
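The implicit equation $x=n\cdot\log\left(2-\frac{1}{x+2}\right)$ mentioned above is also easy to solve by plain fixed-point iteration (a sketch; `threshold` is my name, and Newton's method would converge even faster):

```python
import math

def threshold(n, iters=60):
    # iterate x <- n*log(2 - 1/(x+2)) from the starting guess n*log 2;
    # the map is a strong contraction near the root, so a few iterations suffice
    x = n * math.log(2)
    for _ in range(iters):
        x = n * math.log(2 - 1 / (x + 2))
    return x

x100 = threshold(100)
print(x100)  # sits a bit below 100*log(2) ≈ 69.31
```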