Let $$ u_n(r) = \frac{1}{2\pi} \int_0^{2\pi} f(r,\theta)e^{-in\theta} d\theta. $$ Prove that, for all $(r,\theta) \in \mathbb{R}^2$, $$ f(r,\theta) = u_0(r) + \sum_{n=1}^{+\infty} \left(u_n(r) e^{in\theta} + u_{-n}(r)e^{-in\theta}\right). $$
I’m really lost; to be quite honest, I’m still a learner, but I would appreciate a proof of this.
But I think I cracked it: the $n$ subscript refers to the order of integration of the function in question. What it appears to be setting up is the rule that the natural exponential follows when integrated. So it seems you need to prove that this integration rule holds, but I haven’t dug deep into this theory and I’m struggling. Thanks.
The first thing is that the "theorem" you've stated here is false for a great many functions $f$. For instance, it's essential that $f$ be integrable in $\theta$. It's also important for $f$ to be continuous (with continuity at the origin being, I suspect, particularly important, but I haven't checked this).
In short: theorems have hypotheses, and you can't just trim them out and ignore them. They're a little like laws. We have a law that says that if you murder someone, we can throw you in prison. You wouldn't want that to be applied without the "if" part, i.e., someone saying "the law says we can throw you in prison!", right?
The next thing is to look carefully at the definition of $u_n$. Holding $f$ fixed for the time being, for each integer $n$, we're defining a function called $u_n$. For instance, for $n = 3$, the definition says that
$$ u_{\color{red}{3}}(r) = \frac{1}{2\pi} \int_0^{2\pi} f(r,\theta)e^{-{\color{red}{3}}i\theta} d\theta. $$
You see how (in red) the 3 appears on both sides? We can make this kind of definition for every integer $n$. For instance,
$$ u_{\color{red}{0}}(r) = \frac{1}{2\pi} \int_0^{2\pi} f(r,\theta)e^{-{\color{red}{0}}i\theta} d\theta = \frac{1}{2\pi} \int_0^{2\pi} f(r,\theta)e^{0} d\theta = \frac{1}{2\pi} \int_0^{2\pi} f(r,\theta) d\theta $$ so that $u_0(r)$ is just the average value of $f(r, \theta)$, averaged over $\theta$ (you can now see why it's important that $f$ be integrable in $\theta$: if it were not, then $u_0$ could not even be defined!).
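Nothing in the proof depends on this, but if you want to see the "average value" interpretation concretely, here is a small numeric sketch. The sample function $f$ and the step count $N$ are my own illustrative choices, not part of the problem:

```python
# Numerical sanity check: for a sample integrable f, u_0(r) is the
# theta-average of f(r, .). Uses a plain Riemann sum over [0, 2*pi].
import cmath
import math

def u(n, r, f, N=4096):
    """Approximate u_n(r) = (1/(2 pi)) * integral of f(r,t) e^{-i n t} dt."""
    h = 2 * math.pi / N
    total = 0 + 0j
    for j in range(N):
        t = j * h
        total += f(r, t) * cmath.exp(-1j * n * t)
    return total * h / (2 * math.pi)

# Example: f(r, theta) = r cos(theta) + 2; its theta-average is 2 for every r.
f = lambda r, t: r * math.cos(t) + 2
print(abs(u(0, 3.0, f) - 2))  # tiny (floating-point error only)
```

The cosine term averages out over a full period, leaving exactly the constant 2, which is what the sum reports.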
The functions $u_{\pm 1}, u_{\pm 2}, \ldots$ have similar, but more complicated, interpretations.
The claim being made is that if you actually write down all these functions, and then multiply $u_k$ by $\exp(ki\theta)$, and sum up over $k$, you get back $f$.
Now let me tell you why it's not true. Let's look at $$ f_1(r, \theta) = \exp(i\theta) $$ First note that $f_1$ is independent of $r$, so each $u_k(r)$ will be a constant function.
Second, it'll take a little work, but you could show that using this function for $f$, when you compute $u_k$ for $k \ne \pm 1$, you get exactly $0$. For $k = 1$, we compute: \begin{align} u_1(r) &= \frac{1}{2\pi} \int_0^{2\pi} f_1(r, \theta) \exp(-1i\theta) ~d \theta \\ &= \frac{1}{2\pi} \int_0^{2\pi} \exp(+1i\theta) \exp(-1i\theta) ~d \theta & \text{def'n of $f_1$}\\ &= \frac{1}{2\pi} \int_0^{2\pi} \exp(i\theta) \exp(-i\theta) ~d \theta \\ &= \frac{1}{2\pi} \int_0^{2\pi} \exp((i- i)\theta) ~d \theta & \text{property of $\exp$}\\ &= \frac{1}{2\pi} \int_0^{2\pi} \exp(0\theta) ~d \theta\\ &= \frac{1}{2\pi} \int_0^{2\pi} 1 ~d \theta\\ &= \frac{1}{2\pi} 2\pi\\ &= 1. \end{align} For $k = -1$, we end up integrating $\exp(2i \theta)$, and the integral becomes zero. So it turns out that $$ \vdots\\ u_{-2}(r) = 0\\ u_{-1}(r) = 0\\ u_0(r) = 0\\ u_1(r) = 1\\ u_2(r) = 0\\ u_3(r) = 0\\ \vdots $$ so the sum on the right-hand side becomes just $$ 1 \cdot \exp(i \theta) $$ which is exactly $f_1(r, \theta)$.
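If you'd rather check this computation numerically than by hand, here is a small sketch. The cutoff $|k| \le 3$ and the step count are my own choices; since $f_1$ does not depend on $r$, I drop that argument:

```python
# Numerically compute the coefficients u_k for f_1(theta) = exp(i theta):
# only u_1 should be nonzero, and it should equal 1.
import cmath
import math

def u(n, f, N=4096):
    """Riemann-sum approximation of (1/(2 pi)) * integral of f(t) e^{-i n t} dt."""
    h = 2 * math.pi / N
    return sum(f(j * h) * cmath.exp(-1j * n * j * h)
               for j in range(N)) * h / (2 * math.pi)

f1 = lambda t: cmath.exp(1j * t)  # f_1, with the unused r argument dropped

coeffs = {k: u(k, f1) for k in range(-3, 4)}
# coeffs[1] is (up to floating-point error) 1; every other entry is ~0,
# matching the hand computation: the integrand e^{i(1-k)theta} averages
# to zero over a full period unless k = 1.
```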
Wait...didn't I claim that the theorem was false?? It seems as if I just proved it was exactly true for $f_1$.
That's right. It is true. Now let's define $$ f_2(r, \theta) = \begin{cases} 17 & r = 1, \theta = 0\\ f_1(r, \theta) & \text{otherwise} \end{cases} $$ You can see that $f_1$ and $f_2$ are identical everywhere except at one point. That means that their integrals (and the integrals of their products with those exponential functions) are in fact equal. Now if the theorem were true for the function $f_2$, we'd know that the right-hand side had to sum up to exactly $f_2$. But it in fact sums up to exactly $f_1$. So if the theorem, as stated, were true, then we'd have to have $f_2 = f_1$, which is false.
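You can watch this failure happen numerically too. This is my own illustrative sketch, not part of the argument: I again drop the unused $r$ argument, and I use midpoint quadrature nodes, which never land on $\theta = 0$ (a single point can't affect the integral anyway):

```python
# Change f_1 at a single point: the coefficients u_k cannot tell the
# difference, so the series sums back to f_1, not to the modified f_2.
import cmath
import math

def u(n, f, N=4096):
    h = 2 * math.pi / N
    # midpoint rule: the nodes (j + 1/2) h never hit theta = 0
    return sum(f((j + 0.5) * h) * cmath.exp(-1j * n * (j + 0.5) * h)
               for j in range(N)) * h / (2 * math.pi)

f1 = lambda t: cmath.exp(1j * t)
f2 = lambda t: 17 if t == 0 else f1(t)  # differs from f1 at one point only

# Every coefficient of f2 matches the corresponding coefficient of f1...
same = all(abs(u(k, f1) - u(k, f2)) < 1e-9 for k in range(-3, 4))

# ...so the series at theta = 0 sums to f1(0) = 1, not to f2(0) = 17.
recon0 = sum(u(k, f2) * cmath.exp(1j * k * 0) for k in range(-3, 4))
```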
In other words, the hypotheses of the theorem are essential: without them, it's simply not true. What that means is that if you want to try to prove the theorem, you're going to need to use the hypotheses somewhere. Trying without them is just a waste of time.