The context of this question is the construction of the Daubechies wavelet.
$f$ is a polynomial of degree $p-1$ which satisfies the equation:
$$ x^pf(1-x) + (1-x)^pf(x) = 1 \tag{1} $$
Since
$$ f(x) = \frac{1}{(1-x)^p} - \frac{x^p}{(1-x)^p} f(1-x) $$
and
$$ \frac{1}{(1-x)^p} = \sum_{k=0}^{\infty} \begin{pmatrix} p-1+k \\ k \end{pmatrix} x^k $$
it is argued that
$$ f(x) = \sum_{k=0}^{p-1} \begin{pmatrix} p-1+k \\ k \end{pmatrix} x^k + O(x^p) = \sum_{k=0}^{p-1} \begin{pmatrix} p-1+k \\ k \end{pmatrix} x^k $$
where higher order terms are zero because $f$ is degree $p-1$.
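For what it's worth, a symbolic check (with sympy, assumed installed) confirms that $(1)$ does hold with this $f$ for the first few $p$:

```python
import sympy as sp

x = sp.symbols('x')

def f(p):
    # candidate solution: sum_{k=0}^{p-1} C(p-1+k, k) x^k
    return sum(sp.binomial(p - 1 + k, k) * x**k for k in range(p))

# check x^p f(1-x) + (1-x)^p f(x) = 1 symbolically for small p
for p in range(1, 8):
    lhs = x**p * f(p).subs(x, 1 - x) + (1 - x)**p * f(p)
    assert sp.expand(lhs) == 1, p
```

So the identity is certainly true; what I'm after is a clean algebraic argument.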
My problem is that I can't seem to verify that $(1)$ holds with this $f$. I've tried expanding the $(1-x)^p$ and $(1-x)^k$ terms. This turns into a mess. I've tried doing induction on $p$. Again, I get nowhere. I've tried other approaches with no success. I feel like I'm missing something simple!
Here is an example of one of my attempts:
\begin{align*} x^pf(1-x) + (1-x)^pf(x) &= x^p \sum_{k=0}^{p-1} \begin{pmatrix} p-1+k \\ k \end{pmatrix} (1-x)^k + (1-x)^p \sum_{k=0}^{p-1} \begin{pmatrix} p-1+k \\ k \end{pmatrix} x^k \\ &= \sum_{k=0}^{p-1} \begin{pmatrix} p-1+k \\ k \end{pmatrix}\left( x^p(1-x)^k + x^k(1-x)^p \right) \\ &= \sum_{k=0}^{p-1} \begin{pmatrix} p-1+k \\ k \end{pmatrix}\left( x^{p-k}[x(1-x)]^k + [x(1-x)]^k(1-x)^{p-k} \right) \end{align*}
I get to this point, and I can't help but think of using the Binomial Theorem, but I cannot seem to manipulate the factorial terms in a nice way.
I've found a method of validation which uses a system-of-equations type approach to find the coefficients of the polynomial which satisfies:
$$ x^pf(1-x) + (1-x)^pf(x) = 1 \tag{1} $$
I then use induction to show that these coefficients are equal to the ones given above. It's somewhat tedious. I typed it up in markdown some time ago, and I thought I'd post it here for historical reasons, given the popularity of the Daubechies wavelet.
Let $g(x) = (1-x)^p f(x)$. This implies $f(x) = \frac{g(x)}{(1-x)^p}$. Since $f$ is a polynomial, this implies $g$ has a zero of order $p$ at $x=1$. So, $g^{(n)}(1) = 0, \;n=0,1,\ldots,p-1$. We can make the factorization:
$$ g(x) = (1-x)^p \sum_{j=0}^{p-1} a_j x^j \tag{2} $$
It's clear that the coefficients, $a_j$, are the coefficients of $f$. How do we find them? Notice that, since $g(1-x) = x^p f(1-x)$, condition $(1)$ reads $g(x) + g(1-x) = 1$; differentiating $n$ times yields:
$$ g^{(n)}(x) + (-1)^n g^{(n)}(1-x) = \delta[n] $$
where $\delta[n]$ is one if $n=0$ and zero otherwise. Plugging in $x=1$ and using $g^{(n)}(1) = 0$ (the right-hand side vanishes for $n>0$, so the sign $(-1)^n$ is harmless), we find:
$$ g^{(n)}(0) = \delta[n], \; n=0,1,\ldots,p-1 $$
This yields $p$ linear equations. Thus, to have a unique solution, there must be exactly $p$ unknowns, which justifies why $f$ should be degree $p-1$.
Using $(2)$ and $n=0$, it's immediate that $a_0=1$. On the other hand, viewing $g(x) = r(x)s(x)$ with $r(x) = (1-x)^p$ and $s(x) = \sum_j a_j x^j$, one can apply the general Leibniz rule to find:
$$ g^{(n)}(x) = \sum_{l=0}^n \sum_{j=n-l}^{p-1} \begin{pmatrix} n \\ l \end{pmatrix} \frac{p!}{(p-l)!} \frac{j!}{(j-(n-l))!} (-1)^l a_j x^{j-(n-l)} (1-x)^{p-l} $$
Thus, since at $x=0$ only the $j=n-l$ term of the inner sum survives,
$$ g^{(n)}(0) = \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} \frac{p!(n-l)!}{(p-l)!} (-1)^l a_{n-l} $$
This yields explicitly the set of $p$ linear equations:
$$ \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} \frac{p!(n-l)!}{(p-l)!} (-1)^l a_{n-l} = \delta[n], \; n=0,1,\ldots,p-1 \tag{3} $$
This is a lower-triangular linear system whose diagonal entries, $n!$, are nonzero, so the polynomial, $f$, satisfying $(1)$ exists and is unique. It is not difficult to manipulate $(3)$ into a recurrence beginning with $a_0$. However, it turns out that there is a nicer closed-form expression for the coefficients.
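Assuming sympy is available, here is a short sketch that builds the triangular system $(3)$ for a given $p$, solves it, and compares the result with the closed form $(4)$:

```python
import sympy as sp

def coeffs_from_system(p):
    # unknowns a_0, ..., a_{p-1}
    a = sp.symbols(f'a0:{p}')
    eqs = []
    for n in range(p):
        # equation (3): sum_l C(n,l) p!(n-l)!/(p-l)! (-1)^l a_{n-l} = delta[n]
        lhs = sum(sp.binomial(n, l)
                  * sp.factorial(p) * sp.factorial(n - l) / sp.factorial(p - l)
                  * (-1)**l * a[n - l]
                  for l in range(n + 1))
        eqs.append(sp.Eq(lhs, 1 if n == 0 else 0))
    sol = sp.solve(eqs, a)
    return [sol[a[n]] for n in range(p)]

# compare with the closed form a_n = C(p-1+n, n)
p = 6
assert coeffs_from_system(p) == [sp.binomial(p - 1 + n, n) for n in range(p)]
```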
Proposition
$$ a_n = \begin{pmatrix} p-1+n \\ n \end{pmatrix} \tag{4} $$
Proof
Clearly, $a_0=1$ using the above. We use induction to show that substituting $(4)$ into $(3)$ yields a true statement for all $n>0$. That is, we show that
$$ \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} \frac{p!(n-l)!}{(p-l)!} (-1)^l \begin{pmatrix} p-1+n-l \\ n-l \end{pmatrix} = p \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l+n-1)!}{(p-l)!} = 0 $$
holds for $n>0$. Since $p \neq 0$, it suffices to show that the second sum vanishes. The base case $n=1$ holds:
$$ \sum_{l=0}^1 \begin{pmatrix} 1 \\ l \end{pmatrix} (-1)^l \frac{(p-l)!}{(p-l)!} = 1 - 1 = 0 $$
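Before the inductive step, the vanishing sum at the heart of the induction is easy to spot-check with plain Python (only the standard `math` module is assumed):

```python
from math import comb, factorial

def S(n, p):
    # S(n, p) = sum_{l=0}^{n} C(n,l) (-1)^l (p-l+n-1)! / (p-l)!
    # the division is exact (a falling factorial), so integer arithmetic is safe
    return sum(comb(n, l) * (-1)**l * (factorial(p - l + n - 1) // factorial(p - l))
               for l in range(n + 1))

# the claim is S(n, p) = 0 for every n >= 1 (here checked for n <= p - 1)
assert all(S(n, p) == 0 for p in range(2, 12) for n in range(1, p))
```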
Next, we show that
$$ \sum_{l=0}^{n+1} \begin{pmatrix} n+1 \\ l \end{pmatrix} (-1)^l \frac{(p-l+n)!}{(p-l)!} = 0 \tag{5} $$
under the induction hypothesis. First, split the sum:
$$ (5) = \sum_{l=0}^n \begin{pmatrix} n+1 \\ l \end{pmatrix} (-1)^l \frac{(p-l+n)!}{(p-l)!} + (-1)^{n+1} \frac{(p-1)!}{(p-n-1)!} =: (a) + (b) $$
Next, we use an identity sometimes called Pascal's rule to split the binomial term in $(a)$:
$$ (a) = \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l+n)!}{(p-l)!} + \sum_{l=1}^n \begin{pmatrix} n \\ l-1 \end{pmatrix} (-1)^l \frac{(p-l+n)!}{(p-l)!} =: (c) + (d) $$
First, we focus on $(c)$:
\begin{align*} (c) &= \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l+n-1)!}{(p-l)!} ((p+n) - l) \\ &= (p+n) \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l+n-1)!}{(p-l)!} + \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l+n-1)!}{(p-l)!} (-l) \\ &= 0 + \sum_{l=0}^n \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l+n-1)!}{(p-l)!} (-l) \end{align*}
The first term is $0$ due to the induction hypothesis. Next, let's look at $(d)$:
\begin{align*} (d) &= \sum_{l=1}^n \begin{pmatrix} n \\ l-1 \end{pmatrix} (-1)^l \frac{(p-l+n)!}{(p-l)!} \\ &= -\sum_{l=0}^{n-1} \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l)(p-l+n-1)!}{(p-l)!} \\ &= -\sum_{l=0}^{n} \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l)(p-l+n-1)!}{(p-l)!} + (-1)^n \frac{(p-n)(p-1)!}{(p-n)!} \\ &= 0 + \sum_{l=0}^{n} \begin{pmatrix} n \\ l \end{pmatrix} (-1)^l \frac{(p-l+n-1)!}{(p-l)!}l + (-1)^n \frac{(p-n)(p-1)!}{(p-n)!} \end{align*}
where the last step is again due to the induction hypothesis. Wrapping up, we have that:
$$ (c) + (d) = (-1)^n \frac{(p-1)!}{(p-n-1)!} $$
So that:
$$ (5) = (a) + (b) = 0 $$
which completes the proof.
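Assuming sympy, the whole argument can be spot-checked end to end for a sample $p$:

```python
import sympy as sp

x = sp.symbols('x')
p = 5
f = sum(sp.binomial(p - 1 + k, k) * x**k for k in range(p))  # closed form (4)
g = (1 - x)**p * f

# the reformulation of (1): g(x) + g(1-x) = 1
assert sp.expand(g + g.subs(x, 1 - x)) == 1
# g has a zero of order p at x = 1, i.e. g^{(n)}(1) = 0 for n < p
assert all(sp.diff(g, x, n).subs(x, 1) == 0 for n in range(p))
# and g^{(n)}(0) = delta[n] for n = 0, ..., p-1
assert [sp.diff(g, x, n).subs(x, 0) for n in range(p)] == [1] + [0] * (p - 1)
```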