Solve linear system of equations with constraints


We are working on a game which requires computing the set of active thrusters needed to maintain desired linear and angular velocities. We know the desired `velocity.x`, `velocity.y`, and `angular_velocity` as a column vector (a0, b0, c0).

Then we have n thrusters, and each of them, when fully enabled, changes `velocity.x` by ai, `velocity.y` by bi, and `angular_velocity` by ci.

Each thruster can be enabled with an arbitrary coefficient in the range [0, 1], where 0 means it has no impact on the velocities.

The problem is to programmatically solve the following linear system while respecting the constraint that each ki lies in [0, 1]. Simply solving it the usual way yields arbitrary coefficients outside the [0, 1] range.

a0 = k1 * a1 + k2 * a2 + ... + kn * an
b0 = k1 * b1 + k2 * b2 + ... + kn * bn
c0 = k1 * c1 + k2 * c2 + ... + kn * cn

The ideal solution would be a row (k1 k2 ... kn) where each ki belongs to [0, 1]; anything that comes close to that would be highly appreciated as well.

The closest we have come to solving this is an imprecise O(N^2) algorithm: iterate over the array of not-yet-enabled thrusters N times, each time picking the thruster that introduces the smallest error (i.e., the highest score in terms of the sum of both velocity deltas) and enabling it fully (so each ki is only 0 or 1).
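That greedy scheme can be sketched as follows (a minimal pure-Python version; the function name and data layout are my assumptions, not from the question):

```python
import math

def greedy_thrusters(thrusters, target):
    """Greedy 0/1 selection: repeatedly enable the not-yet-enabled
    thruster that most reduces the residual between achieved and
    desired (velocity.x, velocity.y, angular_velocity).
    `thrusters` is a list of (a_i, b_i, c_i); `target` is (a0, b0, c0).
    Returns a list of 0/1 coefficients."""
    n = len(thrusters)
    k = [0.0] * n
    achieved = [0.0, 0.0, 0.0]

    def err(vec):
        return math.sqrt(sum((t - v) ** 2 for t, v in zip(target, vec)))

    for _ in range(n):
        best_i, best_err = None, err(achieved)
        for i in range(n):
            if k[i] == 1.0:
                continue
            trial = [v + t for v, t in zip(achieved, thrusters[i])]
            e = err(trial)
            if e < best_err:
                best_i, best_err = i, e
        if best_i is None:  # no remaining thruster improves the residual
            break
        k[best_i] = 1.0
        achieved = [v + t for v, t in zip(achieved, thrusters[best_i])]
    return k
```

As the question notes, this only produces 0/1 coefficients, so it cannot hit targets that need a fractional thruster setting.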

There are 2 best solutions below


Your system of equations is underdetermined (as $n>3$). This means that (in all but extremely degenerate cases) you have infinitely many solutions to your problem. Write your system of equations as $A\cdot k = y$ with $A = \begin{pmatrix} a_1 & a_2 & a_3 &\cdots \\ b_1 &b_2& b_3 &\cdots \\ c_1 & c_2 & c_3 &\cdots \end{pmatrix} \in \mathbb{R}^{3 \times n}$, $k = \begin{pmatrix}k_1 &k_2 &k_3&\cdots &k_n \end{pmatrix}^T \in \mathbb{R}^n$ and your desired target vector $y = \begin{pmatrix} a_0 & b_0 &c_0\end{pmatrix}^T\in \mathbb{R}^3$. You have $0 \leq k_i \leq 1$ as restriction.

I propose the following scheme to find a solution. First, redefine your problem with symmetric bounds on the solution: set $k_i=0.5$ for all $i$ as the standard value and rewrite the system accordingly. This leaves us with \begin{align}A\cdot (k_{off}+k_{std}) &= y \\ A\cdot k_{off} &= y-A\cdot k_{std}, \end{align} where $k_{std}$ is the vector of all $0.5$s and $k_{off}$ is the desired offset. You can evaluate the new right-hand side directly. The new restriction is $\|k_{off}\|_\infty \leq 0.5$, which is symmetric around zero. (You can also multiply the new RHS by two to get a bound of 1, but that's up to you.)
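The recentring step is a single matrix-vector product; a small numpy sketch (the matrix and target values here are made up for illustration):

```python
import numpy as np

# Recentring: with k_std = 0.5 everywhere, the system A @ k = y
# becomes A @ k_off = y - A @ k_std, with |k_off_i| <= 0.5.
# (Symbol names follow the answer; the numbers are illustrative.)
A = np.array([[1.0, 0.5, 0.0, 2.0],
              [0.0, 1.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 1.0]])
y = np.array([1.0, 1.2, 0.8])

k_std = np.full(A.shape[1], 0.5)
rhs = y - A @ k_std  # right-hand side of the recentred offset problem
```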

The second part is using the Moore-Penrose pseudoinverse of $A$ (which can be computed via the singular value decomposition) to find the solution with minimum 2-norm. This is a first approximation to a solution with minimum infinity norm.

From there on, you can fix some coefficients at the maximum value and repeat this process with the remaining matrix columns, always finding minimum 2-norm solutions to the reduced problem.
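A minimal numpy sketch of this iterative scheme (the function name and stopping details are my assumptions): find the minimum 2-norm solution via the pseudoinverse, clamp any coefficient that exceeds the symmetric bound, move its contribution into the right-hand side, and repeat on the remaining columns.

```python
import numpy as np

def clipped_pinv_solve(A, rhs, bound=0.5):
    """Solve A @ k_off = rhs approximately, with |k_off_i| <= bound,
    by repeated minimum 2-norm solves on the still-free columns."""
    n = A.shape[1]
    k = np.zeros(n)
    free = np.ones(n, dtype=bool)
    rhs = np.asarray(rhs, dtype=float).copy()
    while free.any():
        # minimum 2-norm solution on the currently free columns
        sol = np.linalg.pinv(A[:, free]) @ rhs
        over = np.abs(sol) > bound
        if not over.any():
            k[free] = sol
            break
        # clamp offending coefficients and remove their columns
        idx = np.flatnonzero(free)
        fixed = idx[over]
        k[fixed] = np.sign(sol[over]) * bound
        rhs = rhs - A[:, fixed] @ k[fixed]
        free[fixed] = False
    return k
```

The final thruster settings would then be `k_std + k_off`, i.e. `0.5 + clipped_pinv_solve(A, rhs)` in the recentred formulation.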

This approach is somewhat costly, as it computes many Moore-Penrose pseudoinverses, but it is the best method I found for solving underdetermined linear systems with restrictions on the maximum absolute value of the parameters. (I did a research project on this subject at university.)


You can solve this problem via quadratic programming by introducing an error variable $e_i$ for each constraint and minimizing $\sum_{i=1}^3 e_i^2$ subject to \begin{align} \sum_{j=1}^n a_j k_j + e_1 &= a_0 \\ \sum_{j=1}^n b_j k_j + e_2 &= b_0 \\ \sum_{j=1}^n c_j k_j + e_3 &= c_0 \\ 0 \le k_j &\le 1 &&\text{for all $j$} \end{align}
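Since the errors enter only through $\sum e_i^2$, this QP is equivalent to bounded least squares: minimize $\|Ak - y\|^2$ subject to $0 \le k_j \le 1$. A dedicated QP or bounded-least-squares solver (e.g. `scipy.optimize.lsq_linear`) handles this directly; as a self-contained illustration, here is a pure-Python projected-gradient sketch (the function name and step-size choice are my assumptions, not part of the answer):

```python
def solve_qp_box(A, y, iters=2000, step=None):
    """Projected gradient descent for min ||A k - y||^2
    subject to 0 <= k_j <= 1 (A as list of rows)."""
    m, n = len(A), len(A[0])
    # Crude Lipschitz bound from the largest absolute row-sum of A^T A,
    # which upper-bounds its spectral norm; step = 1/L is then safe.
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    L = max(sum(abs(v) for v in row) for row in AtA) or 1.0
    step = step or 1.0 / L
    k = [0.5] * n  # start in the middle of the box
    for _ in range(iters):
        # residual r = A k - y and gradient g = A^T r
        r = [sum(A[i][j] * k[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then project back onto the box [0, 1]^n
        k = [min(1.0, max(0.0, k[j] - step * g[j])) for j in range(n)]
    return k
```

When the target is unreachable, this returns the feasible coefficients minimizing the squared error, which is exactly what the QP formulation asks for.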