I am trying to control a spacecraft moving in 2D space through a number of waypoints as fast as possible. The spacecraft has directional acceleration (with an upper limit on its magnitude, and a limit on the rate at which its direction can change). The spacecraft must come within a certain distance of a waypoint to 'pass through' it.
My knowledge of optimal control is limited. I have spent time researching the calculus of variations and dynamic programming; however, both of these approaches seem inapplicable in their raw form.
What are some suggestions for numerical control algorithms that would be successful in tackling this problem?
I am currently reading through a survey of numerical optimal control methods, but I'm finding it difficult to understand where my problem fits in.
I'm assuming that your problem is non-convex and relatively high-dimensional. Note that if you have only 4-5 states or fewer to keep track of, DP (or variations thereof) should be a viable option, and will give you the strongest result (i.e. you can actually prove global optimality). For bigger problems, you will run into the curse of dimensionality.
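To make the dimensionality point concrete, here is a minimal value-iteration sketch for a 1D double integrator (position and velocity, so only 2 states), assuming only your acceleration-magnitude limit (no rate limit on direction). The grid sizes, goal tolerance and time step are all my own illustrative choices — the point is that each extra state dimension multiplies the grid size:

```python
import numpy as np

# 1D double integrator: state (x, v), control a in {-1, 0, +1} (|a| <= 1).
# Minimum time-to-goal via value iteration on a coarse grid.
dt = 0.1
xs = np.linspace(-1.0, 1.0, 41)
vs = np.linspace(-1.0, 1.0, 21)
X, Vel = np.meshgrid(xs, vs, indexing="ij")
goal = (np.abs(X) < 0.1) & (np.abs(Vel) < 0.1)   # goal: near rest at the origin
actions = [-1.0, 0.0, 1.0]

def snap(grid, vals):
    """Nearest grid indices for an array of values."""
    return np.argmin(np.abs(vals[..., None] - grid), axis=-1)

# Precompute successor grid indices for every (state, action) pair.
succ = []
for a in actions:
    x2 = np.clip(X + Vel * dt, xs[0], xs[-1])
    v2 = np.clip(Vel + a * dt, vs[0], vs[-1])
    succ.append((snap(xs, x2), snap(vs, v2)))

V = np.where(goal, 0.0, 1e6)  # cost-to-go, initialised pessimistically
for _ in range(300):          # Bellman backups until (approximate) convergence
    best = np.min([V[si, sj] for si, sj in succ], axis=0)
    V_new = np.where(goal, 0.0, dt + best)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

# Approximate minimum time from rest at x = -1 (a bang-bang analysis
# suggests roughly 2*sqrt(0.9) ~ 1.9 s, up to grid-resolution error).
t_start = V[np.argmin(np.abs(xs + 1.0)), np.argmin(np.abs(vs))]
print(round(float(t_start), 2))
```

With a 2D position, 2D velocity and an acceleration-direction state (as in your problem), the same table would have 5 dimensions, which is exactly where this approach starts to fall over.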
With these assumptions, I'd say you have 3 main choices (that I'm aware of).
Trajectory optimization
For a good explanation of this, see here. For good papers on this, see either Hargraves '85 or von Stryk '92.
This is a gradient-based approach, meaning a) you need a decent initial guess, b) you're only guaranteed a locally optimal solution (which is one reason you need a good initial guess) and c) you need to either supply the Jacobian of the dynamics, or the algorithm will have to estimate it numerically at each iteration. Calculating it numerically is generally quite slow and can lead to numerical issues.
Oh, and this will work a lot better if you have a good non-linear solver. I've used SNOPT (which is commercial, with a free academic license for relatively small problems), and I've also heard good things about NLOPT. MATLAB's fmincon will also work, but it's slow (and not as good).
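To give a flavour of what a direct transcription looks like, here is a minimal trapezoidal-collocation sketch using SciPy's SLSQP in place of the solvers above. It solves a single minimum-time start-to-goal leg for a 2D double integrator with only the acceleration-magnitude limit (I've dropped the direction-rate limit for brevity); the segment count and bounds are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

# Direct (trapezoidal) collocation for one minimum-time point-to-point leg.
# State X = (x, y, vx, vy), control U = (ax, ay), |U| <= A_MAX.
N = 10          # trajectory segments
A_MAX = 1.0
x0 = np.array([0.0, 0.0, 0.0, 0.0])   # start at rest
xf = np.array([1.0, 0.0, 0.0, 0.0])   # goal at rest

def unpack(z):
    T = z[0]
    X = z[1:1 + 4 * (N + 1)].reshape(N + 1, 4)
    U = z[1 + 4 * (N + 1):].reshape(N + 1, 2)
    return T, X, U

def dyn(X, U):
    # xdot = (vx, vy, ax, ay) at every knot point at once
    return np.hstack([X[:, 2:4], U])

def defects(z):
    # Trapezoidal collocation: the dynamics must hold across each segment.
    T, X, U = unpack(z)
    h = T / N
    F = dyn(X, U)
    return (X[1:] - X[:-1] - 0.5 * h * (F[:-1] + F[1:])).ravel()

def boundary(z):
    _, X, _ = unpack(z)
    return np.concatenate([X[0] - x0, X[-1] - xf])

def accel_limit(z):
    _, _, U = unpack(z)
    return A_MAX**2 - np.sum(U**2, axis=1)   # >= 0

# Initial guess: T = 2 s, straight-line states, zero controls.
z0 = np.concatenate([
    [2.0],
    np.linspace(x0, xf, N + 1).ravel(),
    np.zeros(2 * (N + 1)),
])
res = minimize(
    lambda z: z[0],                 # objective: the final time itself
    z0,
    bounds=[(0.1, 10.0)] + [(None, None)] * (len(z0) - 1),
    constraints=[
        {"type": "eq", "fun": defects},
        {"type": "eq", "fun": boundary},
        {"type": "ineq", "fun": accel_limit},
    ],
    method="SLSQP",
    options={"maxiter": 500},
)
T_opt = res.x[0]
print(res.success, round(T_opt, 3))
```

For this rest-to-rest leg a bang-bang analysis predicts T = 2.0, so the solver result should land near that. Note that SLSQP is estimating all gradients numerically here — exactly the slow path point c) above warns about; with SNOPT you would supply the (sparse) Jacobian of `defects` instead.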
"Blackbox" optimization
These are not actually optimal control, but just straight-up optimization: you parameterize everything about your problem (including the control inputs), and then the algorithm tries to find a combination of the parameters which yields good solutions. The advantages are that it is generally quick/easy to apply (the algorithm doesn't care at all about the structure of your problem), and it's generally quite good for very non-convex problems, due to the 'random' nature of the algorithm (it can get out of local minima). The disadvantages are that you have no real guarantees on your solution, and often you get a solution where you have no idea why that particular combination of parameters is actually good. It has also been applied quite a bit to space orbits; for example, see PyGMO.
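As a concrete sketch of the black-box route, here is differential evolution (SciPy's implementation, though PyGMO offers similar algorithms) applied to a single waypoint. The parameterization — a handful of piecewise-constant thrust directions at full throttle, with the direction-rate limit ignored — plus the horizon and capture radius are all my own illustrative assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Black-box approach: parameterize the control as M piecewise-constant
# thrust directions (at full throttle), simulate, and score the result.
DT, T_MAX, M = 0.05, 3.0, 6         # sim step, horizon, control segments
A_MAX, RADIUS = 1.0, 0.2            # accel limit, waypoint capture radius
WAYPOINT = np.array([1.0, 0.0])

def cost(theta):
    """Time of first waypoint capture; heavy penalty for missing it."""
    pos = np.zeros(2)
    vel = np.zeros(2)
    for k in range(int(T_MAX / DT)):
        seg = min(int(k * DT / (T_MAX / M)), M - 1)
        acc = A_MAX * np.array([np.cos(theta[seg]), np.sin(theta[seg])])
        vel += acc * DT
        pos += vel * DT
        if np.linalg.norm(pos - WAYPOINT) < RADIUS:
            return (k + 1) * DT          # reached: cost = elapsed time
    return T_MAX + np.linalg.norm(pos - WAYPOINT)  # missed: penalize distance

res = differential_evolution(
    cost,
    bounds=[(-np.pi, np.pi)] * M,       # one thrust angle per segment
    seed=0, maxiter=200, polish=True,
)
print(round(res.fun, 2))
```

Note how little the optimizer needs to know: `cost` is just a simulation, so you can bolt on the direction-rate limit, multiple waypoints, or anything else without touching the algorithm — which is exactly the appeal (and also why you get no guarantees).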
Pontryagin Minimum Principle (PMP)
Since you (I'm assuming) know the order in which the waypoints need to be reached, you can simply split the problem up into N smaller problems of reaching the next waypoint from the current one in the fastest time possible, which is a typical application of the PMP. Depending on your dynamics, it might not be possible to get a closed-form solution (you might have to solve a non-linear two-point boundary-value problem numerically), but for your case it seems to me that it should be doable (without having thought too much about it, though).
At the very least, you can check whether the resulting Hamiltonian is linear w.r.t. your control inputs. If it is, you are guaranteed that the optimal controller must be bang-bang (you just wouldn't know when to switch from one bang to the other). Knowing this is very powerful: for example, it allows you to parameterize your control inputs much more tightly in your black-box optimization.
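To see what the bang-bang structure buys you, take the textbook 1D double integrator (which is a simplified stand-in for one leg of your problem, not your full dynamics): PMP gives full acceleration up to a single switch time, then full braking, and for a rest-to-rest transfer over distance d the whole trajectory collapses to one formula, T = 2*sqrt(d / a_max), with the switch at T/2. A quick simulation confirms it:

```python
import numpy as np

# Minimum-time 1D double integrator, |a| <= a_max, rest-to-rest over
# distance d. PMP says: bang-bang, switch at T/2, T = 2*sqrt(d / a_max).
a_max, d = 1.0, 1.0
T = 2.0 * np.sqrt(d / a_max)
t_switch = T / 2.0

# Verify by forward-simulating the bang-bang control law.
dt = 1e-4
x, v = 0.0, 0.0
for t in np.arange(0.0, T, dt):
    a = a_max if t < t_switch else -a_max   # one bang, then the other
    v += a * dt
    x += v * dt

print(round(T, 3), round(x, 3), round(v, 3))
```

So instead of optimizing a whole control trajectory, a black-box optimizer would only need to search over a few switch times per leg — a far tighter parameterization.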