So I'm curious about the distinction between the fields of PDE optimization and control theory applied to PDEs. To me they seem exactly the same, but if they were, why the different names?
For example, suppose we have some cost functional, constrained by a governing PDE, with some initial conditions. Let's suppose I want to find the initial condition which will minimize my cost functional. I see clearly that this is a PDE optimization problem. However, we could say this initial condition we want to find is our 'control', which makes me think of this as a control problem as well.
Thoughts?
Let's make things a little more concrete. Suppose you wish to choose a function $u$ to maximise
$$ \int_0^T f(t,x(t),u(t)) \, \mathrm{d}t $$ subject to $$ \dot{x}(t) = g(t,x(t),u(t)) $$ and $u(t) \in U$ for $t\in[0,T]$. You also have that $x(0)=x_0$. Here, $x$ is the state variable while $u$ is the control. The dynamics of the state variable are given by an ODE for simplicity.
Subject to some regularity conditions (see your favourite optimal control reference), you can find an optimal control, which we will denote by $u^*$. The chosen $u^*$ induces a function $x^*$ determined by the dynamics of $x$ along with its initial condition. That is, $x^*$ solves the ODE above with $u = u^*$ substituted and the initial condition $x^*(0)=x_0$.
If we evaluate our objective using $x^*$ and $u^*$, we obtain a value function
$$ V(x_0) = \int_0^T f(t,x^*(t),u^*(t))\,\mathrm{d}t. $$
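As a concrete illustration (my own example, not part of the original question), take the linear-quadratic case $f(t,x,u) = -(x^2 + u^2)$ and $g(t,x,u) = u$, so that
$$ V(x_0) = \max_u \int_0^T -\bigl(x(t)^2 + u(t)^2\bigr)\,\mathrm{d}t, \qquad \dot{x} = u, \quad x(0) = x_0. $$
The HJB equation with the ansatz $V(t,x) = -P(t)\,x^2$ reduces to the Riccati equation $\dot{P} = P^2 - 1$ with $P(T) = 0$, whose solution is $P(t) = \tanh(T-t)$, giving
$$ V(x_0) = -\tanh(T)\,x_0^2, $$
which is maximised at $x_0 = 0$.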
You can then look for the value of $x_0$ that maximises the function $V$. (One way to make this easy is to find conditions that guarantee differentiability of $V$. Again, see your favourite reference.) Note, however, that this is a one-dimensional optimisation problem, and hence is not really thought of as an "optimal control" problem. Problems in optimal control, at least to the best of my knowledge, are typically multidimensional, and in fact are often infinite-dimensional.
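To make the two stages tangible, here is a rough numerical sketch (my own construction, not from the answer above) for the linear-quadratic problem $f(t,x,u) = -(x^2+u^2)$, $g(t,x,u) = u$, whose exact value function works out to $V(x_0) = -\tanh(T)\,x_0^2$. The Euler discretisation, step count, and use of SciPy's L-BFGS-B solver are all choices of mine, not anything prescribed by the theory.

```python
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 50
dt = T / N

def value(x0):
    """Approximate V(x0) = max_u integral of -(x^2 + u^2) dt, with xdot = u, x(0) = x0."""
    def cost(u):  # negative of the objective, so we can minimise it
        x, total = x0, 0.0
        for uk in u:
            total += (x**2 + uk**2) * dt  # left-endpoint quadrature of the running cost
            x += uk * dt                  # forward Euler step of the state dynamics
        return total
    res = minimize(cost, np.zeros(N), method="L-BFGS-B")
    return -res.fun                       # V(x0) is minus the minimised cost

# Outer problem: a one-dimensional search over the initial condition x0.
grid = np.linspace(-1.0, 1.0, 21)
best_x0 = grid[np.argmax([value(x0) for x0 in grid])]
```

Here the inner `minimize` call plays the role of finding $u^*$ for each candidate $x_0$, and the outer `argmax` is the one-dimensional optimisation of $V$; in practice one would establish differentiability of $V$ and use a gradient (an adjoint computation) rather than a grid search.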