I am trying to get a better understanding of the difference between the methods used in optimal control and those used in operational research. Both fields are similar (I think) in that they seek to minimize some objective function, but I think the main difference lies in the mathematical structure each applies.
From what I have gathered, optimal control works largely in the world of ODEs and PDEs to describe the system under study, while operational research methods tend to use linear programming, integer programming, mixed integer programming, etc., a branch of mathematics that allows for conditional logic and binary variables.
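To make the contrast concrete, here is how I understand the prototypical problems in each field (stylized notation of my own, not taken from any particular text). A typical optimal control problem minimizes a cost functional subject to dynamics given by an ODE:

$$\min_{u(\cdot)} \int_0^T L\bigl(x(t), u(t), t\bigr)\,dt \quad \text{subject to} \quad \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \quad x(0) = x_0,$$

while a mixed integer linear program minimizes a linear objective over linear constraints, with some variables restricted to be integer:

$$\min_{x} \; c^{\top} x \quad \text{subject to} \quad Ax \le b, \quad x \ge 0, \quad x_i \in \mathbb{Z} \text{ for } i \in I.$$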
I would love it if someone could illuminate any further differences, or compare and contrast the two fields to deepen my understanding.
I think optimal control tends to focus primarily on control of physical systems (such as guidance of a vehicle) while operations research is used primarily (though not necessarily exclusively) for making "managerial", "operational" or "strategic" decisions (such as where to base the vehicle). Some OR problems (but not all) involve decisions with discrete domains (yes/no, pick $n$ of the following choices, ...) whereas optimal control is (as far as I know) restricted to problems where the variables have continuous domains.
On the mathematical side, OR encompasses many models and methods besides optimization, including some where "optimal" is not an issue (such as discrete event simulation and queuing models in many applications). It often deals with stochastic systems. (I have no idea if there is a stochastic version of optimal control. My contact with it was limited to part of one course in grad school, in a previous millennium.) It also deals with some very non-smooth types of constraints, such as logical constraints (for instance, shipments out of this warehouse are at most $C$ if you build the warehouse and $0$ if you do not).
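That warehouse condition, for instance, is usually modeled with a binary variable and a "big-$M$"-style constraint (a standard MIP device; notation mine):

$$x \le C\,y, \qquad y \in \{0, 1\},$$

where $x$ is the shipment volume out of the warehouse and $y = 1$ if and only if the warehouse is built. If $y = 0$, the constraint forces $x = 0$; if $y = 1$, shipments are capped at $C$. Any fixed construction cost would then enter the objective as a term multiplied by $y$.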
Update: According to the Wikipedia entry on optimal control, "as optimal control solutions are now often implemented digitally, contemporary control theory is now primarily concerned with discrete time systems and solutions." (My limited exposure to optimal control theory does not exactly predate digital computers, but it comes close.) The remainder of the Wikipedia description suggests that the discrete time problems are approximations to continuous systems, rather than problems with discrete decisions. Maybe.
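For what it's worth, a discrete-time optimal control problem replaces the continuous cost integral and ODE dynamics with a sum and a difference equation, something like (notation mine):

$$\min_{u_0, \dots, u_{N-1}} \sum_{k=0}^{N-1} L(x_k, u_k) \quad \text{subject to} \quad x_{k+1} = f(x_k, u_k), \quad x_0 \text{ given},$$

which, consistent with the Wikipedia passage, is typically a sampled approximation of a continuous system rather than a model with genuinely discrete decisions: the controls $u_k$ still range over continuous domains.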