Disaggregating a signal into non-negative and fully negative components


I have $n$ signals $x_1,x_2,\dots,x_n$. Each signal is the sum of its own non-negative vector $d_i$ and a fully negative vector $c$ that is shared by all signals:

$x_1=d_1+c\\x_2=d_2+c\\\dots\\x_n=d_n+c$

where $d_i \geq 0$ and $c \leq 0$ (elementwise).

This problem can also be written as:

$X=BA$

where $X=(x_1,x_2,\dots,x_n)\in\mathbb{R}^{t\times n}$, $B=(d_1,d_2,\dots,d_n,c)\in\mathbb{R}^{t\times(n+1)}$, and $A\in\mathbb{R}^{(n+1)\times n}$ is the fixed selection matrix

$A=\begin{bmatrix} 1 & 0 & \dots & 0\\ 0 & 1 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & 1\\ 1 & 1 & \dots & 1 \end{bmatrix}$
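As a sanity check, this factorization can be reproduced numerically. The following is a minimal NumPy sketch; the sizes $t=5$, $n=3$ and the random data are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

t, n = 5, 3  # hypothetical sizes: t time samples, n signals

# Non-negative components d_i and a shared non-positive component c.
D = rng.uniform(0.0, 1.0, size=(t, n))   # columns d_1, ..., d_n >= 0
c = -rng.uniform(0.0, 1.0, size=(t, 1))  # shared component c <= 0

B = np.hstack([D, c])                        # B = (d_1, ..., d_n, c), shape t x (n+1)
A = np.vstack([np.eye(n), np.ones((1, n))])  # identity rows plus an all-ones row

X = B @ A                        # each column of X is x_i = d_i + c
assert np.allclose(X, D + c)     # broadcasting adds c to every column
```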

This problem is under-determined: there are $n$ equations in $n+1$ unknowns. However, the sign constraints may help to solve it. Given access only to $x_1,x_2,\dots,x_n$, how can I exploit those constraints to disaggregate the signals and recover $c$? ($d_1,d_2,\dots,d_n$ need not be recovered.)

Can anyone suggest an algorithm for recovering $c$, or a dimensionality reduction that converts this into a determined problem?

Under-determined ICA? A hidden Markov model? Or some other unsupervised learning method?

Best Answer

For convenience, I will reformulate your system using a "column" notation (working one time sample at a time) with all unknown variables non-negative, i.e. $$ \mathbf{x}=\mathbf{A}\mathbf{b}, $$ with $\mathbf{x} \triangleq [x_1,x_2,\ldots,x_n]^T$, $\mathbf{b} \triangleq [d_1,d_2,\ldots,d_n, -c]^T$ (note the sign flip on $c$, which makes every element of $\mathbf{b}$ non-negative), $\mathbf{A}\triangleq[\mathbf{I}_n,-\mathbf{1}_n]$, where $\mathbf{I}_n$ is the $n\times n$ identity matrix and $\mathbf{1}_n$ is the $n\times 1$ all-ones column.
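A quick numerical check of this reformulation; the values of $n$, $d$, and $c$ below are arbitrary illustrative assumptions:

```python
import numpy as np

n = 4  # hypothetical number of signals

d = np.array([0.5, 1.0, 0.0, 2.0])  # d_i >= 0
c = -0.3                            # c <= 0

# Unknown vector with all entries non-negative: note the sign flip on c.
b = np.append(d, -c)

A = np.hstack([np.eye(n), -np.ones((n, 1))])  # A = [I_n, -1_n]

x = A @ b
assert np.allclose(x, d + c)  # recovers x_i = d_i + c from the original system
```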

Since your system is underdetermined, there are multiple choices for $c$ (and $\{d_i\}$) that are valid. In principle, you have to identify all of them and choose the one most suitable according to some criterion. This criterion will in general be represented by some function $f(\cdot)$ of $c$ and $\{d_i\}$.

The above implies that you are essentially dealing with an optimization problem of the form

$$ \begin{align} \text{maximize } & f(\mathbf{b})\\ \text{subject to } & \mathbf{x}=\mathbf{A}\mathbf{b},\\ & \mathbf{b}\geq \mathbf{0}, \end{align} $$ with optimization variables the elements of $\mathbf{b}\in\mathbb{R}^{n+1}$ and notation $\mathbf{b}\geq \mathbf{0}$ representing the assumption (constraint) that all the elements of $\mathbf{b}$ are non-negative.

Of course, the solution of (and the method for solving) the problem will depend on the choice of the objective function $f(\cdot)$. For example, a linear objective of the form $f(\mathbf{b})=\mathbf{w}^T \mathbf{b}$, for some $\mathbf{w}\in \mathbb{R}^{n+1}$, results in a so-called linear programming problem, for which efficient numerical solvers exist.
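As a sketch of this approach, the LP can be handed to an off-the-shelf solver such as SciPy's `linprog`. The objective below, minimizing $\mathbf{1}^T\mathbf{b}$ (the total magnitude of the components), is only an illustrative choice, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 3

# Synthetic data generated from known d_i >= 0 and c <= 0 (assumed for the demo).
d_true = rng.uniform(0.0, 1.0, size=n)
c_true = -0.4
x = d_true + c_true

A_eq = np.hstack([np.eye(n), -np.ones((n, 1))])  # A = [I_n, -1_n]

# Illustrative criterion: minimize 1^T b.  (linprog minimizes, so to
# *maximize* f(b) = w^T b one would pass -w as the cost vector.)
res = linprog(c=np.ones(n + 1), A_eq=A_eq, b_eq=x,
              bounds=[(0, None)] * (n + 1))

b = res.x
assert res.success
assert np.allclose(A_eq @ b, x)   # equality constraint x = A b satisfied
assert np.all(b >= -1e-9)         # non-negativity satisfied
c_recovered = -b[-1]              # undo the sign flip to read off c
```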

Note that the above problem formulation can be simplified by observing that any vector $\mathbf{b}$ that satisfies $\mathbf{x}=\mathbf{A}\mathbf{b}$ must be of the form $$ \mathbf{b}=\mathbf{b}_0 + g \mathbf{1}_{n+1}, $$ where $\mathbf{b}_0$ is any (particular) solution of $\mathbf{x}=\mathbf{A}\mathbf{b}$ and $g\in \mathbb{R}$. Given any particular solution, say, the least squares solution $\mathbf{b}_0=\mathbf{A}^T(\mathbf{A}\mathbf{A}^T)^{-1} \mathbf{x}$, you could pose the optimization problem with respect to the scalar variable $g$ as $$ \begin{align} \text{maximize } & f(g)\\ \text{subject to } & \mathbf{b}_0 + g \mathbf{1}_{n+1} \geq \mathbf{0},\\ \end{align} $$
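This scalar reduction is easy to carry out with NumPy. In the sketch below, the observed vector $\mathbf{x}$ is made up for illustration, and the criterion "pick the smallest feasible $g$" is an assumed choice of $f(\cdot)$, not part of the original problem:

```python
import numpy as np

n = 3
x = np.array([0.2, 0.7, -0.1])  # observed signals (one time sample, hypothetical)

A = np.hstack([np.eye(n), -np.ones((n, 1))])  # A = [I_n, -1_n]

# Least-squares particular solution b0 = A^T (A A^T)^{-1} x.
b0 = A.T @ np.linalg.solve(A @ A.T, x)

# Every solution has the form b0 + g * 1 (since A @ 1 = 0).
# Feasibility b0 + g >= 0 elementwise gives g >= -min(b0).
g_min = -b0.min()

# Assumed criterion: take the smallest feasible g; any other f(g) works too.
b = b0 + g_min
assert np.all(b >= -1e-12)     # non-negativity holds
assert np.allclose(A @ b, x)   # the data constraint still holds
c = -b[-1]                     # the shared non-positive component
```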