Solving $f = Ax + b$ when $x$ saturates but I want to preserve directionality


I have a given equation of the form

$f_{tot} = A x^\prime + \beta \tag{1}$

where $A \in \mathbb{R}^{n \times m}$ and we can assume $m > n$.

Suppose that I want to produce a desired vector $f_{des}$, but I want to ensure that $-0.5\leq x^\prime_i \leq 0.5$. If a solution does not exist that satisfies that constraint, then I want the vector $f_{tot}$ to be in the same direction as $f_{des}$: ($\hat{f_{des}}\cdot \hat{f_{tot}} = 1$).

Is there a way to easily solve this using matrix algebra that does not require sending it to an optimizer?

The visual representation is something like this:

[Figure: sketch of what I am trying to do; the red $Ax^\prime$ violates the inequality constraints.]

This is the optimization problem

$\min \; \lVert f_{des} - f_{tot} \rVert_{2} \tag{2}$

s.t. $\hat{f}_{des}\cdot \hat{f}_{tot} = 1 \tag{2.1}$

$-0.5 \leq x^\prime_i \leq 0.5 \tag{2.2}$

Current attempted solution

My current thinking is to approach it like this:

Let $A = USV^\prime$ be the singular value decomposition of matrix $A$. Note that since $m > n$, $A$ has a nontrivial nullspace: the trailing columns of $V$ satisfy $AV_i = 0$. To solve for $x^\prime$ I can find the linear combination of the columns of $V$ that produces $f_{des}$:

$\alpha = (US)^* ( f_{des} - \beta) \tag{3}$

where ${}^*$ denotes the pseudoinverse (or Matlab's \ operator). Then,

$x^\prime = V \alpha \tag{4}$.
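As a concrete sketch, eqs. (3)–(4) in NumPy might look like the following (the matrices here are randomly generated stand-ins; with a full-rank $A$ and $m > n$, any $f_{des}$ is reachable):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5                       # m > n, so A has a nontrivial nullspace
A = rng.standard_normal((n, m))
beta = rng.standard_normal(n)
f_des = rng.standard_normal(n)

# A = U S V'  (full SVD: U is n x n, S is n x m, V is m x m)
U, s, Vt = np.linalg.svd(A)
V = Vt.T
S = np.zeros((n, m))
np.fill_diagonal(S, s)

# eq. (3): alpha = (US)^* (f_des - beta)
alpha = np.linalg.pinv(U @ S) @ (f_des - beta)

# eq. (4): x' = V alpha
x_prime = V @ alpha

# sanity check: A x' + beta reproduces f_des
assert np.allclose(A @ x_prime + beta, f_des)
```

Note that $V (US)^* = A^*$ (because $V$ is orthogonal), so `np.linalg.pinv(A) @ (f_des - beta)` gives the same minimum-norm solution in one step.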

If $\lVert x^\prime \rVert_{\infty} > 0.5$ then I can try to re-solve the problem with a scaled $f_{des}$

$\min \quad \gamma \tag{5}$

s.t. $\lVert x^{\prime} + x^{\prime\prime}\rVert_{\infty} \leq 0.5 \tag{5.1}$

$x^{\prime\prime} = V ( (US)^* (-\gamma f_{des})) \tag{5.2}$

Once I find $\gamma$, I can repeat the original procedure with the scaled target: the new $\alpha = (US)^* ((1-\gamma) f_{des} - \beta)$ from eq. (3), and then the new $x^\prime$ from eq. (4).
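One possible simplification: since $x^{\prime\prime}$ in (5.2) is linear in $\gamma$, each component of (5.1) gives an interval of feasible $\gamma$ values, so the smallest feasible $\gamma$ can be computed directly without a general optimizer. A sketch of that idea (assuming $\gamma \geq 0$; `min_gamma` is an illustrative name):

```python
import numpy as np

def min_gamma(A, beta, f_des, bound=0.5):
    """Smallest gamma >= 0 with || x' + x'' ||_inf <= bound, where
    x'  = V (US)^* (f_des - beta)   and   x'' = V (US)^* (-gamma f_des).
    Each component b_i - gamma*d_i in [-bound, bound] is an interval in gamma."""
    A_pinv = np.linalg.pinv(A)         # V (US)^* equals pinv(A), V orthogonal
    base = A_pinv @ (f_des - beta)     # x'
    d = A_pinv @ f_des                 # x'' = -gamma * d
    lo, hi = 0.0, np.inf
    for b, di in zip(base, d):
        if abs(di) < 1e-12:            # gamma has no effect on this component
            if abs(b) > bound:
                return None            # infeasible for any gamma
            continue
        g1, g2 = (b - bound) / di, (b + bound) / di
        lo = max(lo, min(g1, g2))      # intersect the per-component intervals
        hi = min(hi, max(g1, g2))
    return lo if lo <= hi else None    # minimal feasible gamma, or None
```

Here `base` is $x^\prime$ and `-gamma * d` is $x^{\prime\prime}$; the returned $\gamma$ is the smallest value for which (5.1) holds.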

At this point, however, I think I made the problem more difficult than needed. I would appreciate any insights into how I can simplify the problem and avoid an optimization formulation. I'm also thinking I could exploit the nullspace of $A$ (the trailing columns of $V$) to reduce the entries of $x^\prime$, but I am unsure how to do that as of now.

Answer
I realized that the optimization problem (5) in my attempted solution was not doing what I wanted in optimization problem (2). However, the updated approach (6) below currently works and is significantly faster than solving problem (2) when using the same fmincon solver for both.

$\min \quad \gamma \tag{6}$

s.t. $\lVert V \left( (US)^{*}\left( (1-\gamma) f_{des}-\beta \right) \right) + V_{null}\phi \rVert_{\infty} \leq 0.5 \tag{6.1}$

$0 \leq \gamma \leq 1 \tag{6.2}$

$-1\leq \phi_i \leq 1 \tag{6.3}$

Where $V_{null} \in \mathbb{R}^{m \times (m-r)}$ is the nullspace basis obtained from the SVD of $A$ (its columns are the trailing $m-r$ columns of $V$), $r$ is the rank of $A$, and $\phi \in \mathbb{R}^{m-r}$.

Once $\gamma_{opt}$ and $\phi_{opt}$ are obtained, I can find x' by using the following,

$x'= V \left( (US)^{*}\left( (1-\gamma_{opt}) f_{des}-\beta \right) \right) + V_{null}\phi_{opt}$.
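A Python analogue of problem (6), using `scipy.optimize.minimize` (SLSQP) in place of fmincon; constraint (6.1) is written componentwise, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def solve_problem6(A, beta, f_des, bound=0.5):
    n, m = A.shape
    U, s, Vt = np.linalg.svd(A)
    V = Vt.T
    r = int(np.sum(s > 1e-12))          # rank of A
    S = np.zeros((n, m))
    np.fill_diagonal(S, s)
    US_pinv = np.linalg.pinv(U @ S)
    V_null = V[:, r:]                   # nullspace basis of A, shape (m, m - r)

    # x'(gamma, phi) per eq. (6): V (US)^* ((1-gamma) f_des - beta) + V_null phi
    def x_of(z):
        gamma, phi = z[0], z[1:]
        return V @ (US_pinv @ ((1.0 - gamma) * f_des - beta)) + V_null @ phi

    # (6.1) as componentwise inequalities: -bound <= x'_i <= bound
    cons = {"type": "ineq",
            "fun": lambda z: np.concatenate([bound - x_of(z), x_of(z) + bound])}
    bounds = [(0.0, 1.0)] + [(-1.0, 1.0)] * (m - r)   # (6.2) and (6.3)
    res = minimize(lambda z: z[0], np.zeros(1 + m - r), method="SLSQP",
                   bounds=bounds, constraints=cons)
    return res.x[0], res.x[1:], x_of(res.x)
```

Since $x'$ is linear in $(\gamma, \phi)$ and the objective is $\gamma$ itself, (6) is in fact a linear program, so a dedicated LP solver would also work here.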

This preserves the directionality of $f_{des}$ as well as the constraint that every element of $x'$ has magnitude at most 0.5.