I would like to solve a constrained optimization problem where the choice is a function over an interval rather than a finite number of variables or a sequence. The problem is given by:
$\max_{[x(i)]_{i=0}^1} \int_{i=0}^1 f(x(i))di$ subject to $\int_{i=0}^1 x(i)di = X$
where $x(i) > 0, \forall i \in [0,1]$, and $f:\mathbb{R}^{++}\rightarrow\mathbb{R}$ is a twice differentiable, strictly increasing, and strictly concave function, i.e. $f'(x) > 0$ and $f''(x) < 0, \forall x \in \mathbb{R}^{++}$. An example would be $f(x) = \ln(x)$.
I intuitively know that a solution is $x(i) = X, \forall i \in [0,1]$. I can also see that $x(i) = X$ must hold almost everywhere for any solution. Mimicking the heuristics of constrained optimization over finitely or countably many variables, I would assign a multiplier $\lambda$ to the constraint, and then obtain a candidate solution by:
$\frac{\partial}{\partial x(i)}\left(f(x(i))\right) + \frac{\partial}{\partial x(i)} \left(-\lambda x(i)\right) = 0, \forall i \in [0,1]$
$f'(x(i)) = \lambda, \forall i \in [0,1]$
$x(i) = x(j) \equiv x, \forall i,j \in [0,1]$
$\int_{i=0}^1 xdi = X \Rightarrow x = X = x(i), \forall i \in [0,1]$
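As a quick numerical sanity check of this candidate (not a proof), one can discretize the interval into $n$ grid points and hand the finite-dimensional analogue to an off-the-shelf solver; the values $n = 50$, $X = 2$, and the choice $f = \ln$ below are purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Finite-dimensional analogue: max (1/n) * sum f(x_k)  s.t.  (1/n) * sum x_k = X,
# with the illustrative choices f = ln, X = 2, n = 50.
n, X = 50, 2.0

def objective(x):
    return -np.mean(np.log(x))        # negated, because scipy minimizes

constraint = {"type": "eq", "fun": lambda x: np.mean(x) - X}
bounds = [(1e-9, None)] * n           # enforce x_k > 0
x0 = np.linspace(1.0, 3.0, n)         # feasible starting point (mean = X)

res = minimize(objective, x0, bounds=bounds, constraints=[constraint])
print(res.x.round(4))                 # every entry comes out approximately 2.0 = X
```

The solver returns a vector whose entries all equal $X$ up to numerical tolerance, which matches the candidate above.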
However, I do not know how to prove this, or which theorem to invoke. If the choice variable were a sequence, I would use the Karush–Kuhn–Tucker theorem. Which theorem should I use when the choice is a function over an interval rather than a sequence? What is the general name for this type of constrained optimization problem?
Thanks a lot in advance.
Generally it is a good idea to frame the question in terms of some well-known space. (The general setting you are asking about is the calculus of variations, or more broadly, optimization in Banach spaces.) In this case $x$ must be integrable, so as a first cut we take $x \in L^1[0,1]$.
Your intuition is correct, and we can proceed directly or by using standard techniques such as Lagrange multipliers.
It is generally more difficult with such problems to assert the existence of extrema. In this particular example we can appeal to the concavity of $f$ (through Jensen's inequality) to show that the constant function equal to the average is the maximising value.
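Concretely, since $f$ is concave, Jensen's inequality gives, for any feasible $x$,
$$\int_0^1 f(x(i))\,di \;\le\; f\!\left(\int_0^1 x(i)\,di\right) = f(X),$$
and the bound is attained by $x(i) \equiv X$; by strict concavity, equality holds only if $x(i) = X$ for a.e. $i$. So a maximiser exists and is essentially unique.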
Given that we know a maximum exists, we can use Lagrange multipliers to compute it. The relevant result is the Lagrange multiplier theorem in Banach spaces (see, e.g., Luenberger's "Optimization by Vector Space Methods"); one version requires that the derivative of the constraint map be surjective.
The problem is (slightly relaxed) $\max \{ F(x) \mid G(x) = 0 \}$ where $x \in L^1[0,1]$, $F(x) = \int_0^1 f(x(t))\,dt$, and $G(x) = \int_0^1 x(t)\,dt - X$ (capitalised to avoid clashing with the pointwise $f$). It is straightforward to compute $DF(x)h = \int_0^1 f'(x(t))\,h(t)\,dt$ and $DG(x)h = \int_0^1 h(t)\,dt$, to check that these derivatives are continuous, and to see that $DG(x)$ is surjective onto $\mathbb{R}$.
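For instance, the first of these comes from differentiating along a direction $h$ (passing the derivative under the integral sign, which should be justified for the particular $f$ at hand):
$$DF(x)h = \frac{d}{d\epsilon}\bigg|_{\epsilon = 0} \int_0^1 f\big(x(t) + \epsilon h(t)\big)\,dt = \int_0^1 f'(x(t))\,h(t)\,dt.$$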
Then at a maximum, say $x^*$, there exists a multiplier $\lambda \in \mathbb{R}$ such that $DF(x^*) + \lambda\,DG(x^*) = 0$.
This reduces to $f'(x^*(t)) + \lambda = 0$ for a.e. $t$. Since $f'$ is strictly decreasing ($f'' < 0$), it is injective, so there is some constant $c$ such that $x^*(t) = c$ for a.e. $t$, and from $G(x^*) = 0$, i.e. $\int_0^1 x^*(t)\,dt = X$, we get $c = X$.
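To make this concrete with the example $f(x) = \ln(x)$: here $f'(x) = 1/x$, so the first-order condition reads
$$\frac{1}{x^*(t)} + \lambda = 0 \quad \text{for a.e. } t,$$
which forces the constant value $x^*(t) = -1/\lambda$; the constraint then gives $x^*(t) = X$ for a.e. $t$, with multiplier $\lambda = -1/X$.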