Can we recover a vector after convolution with another vector, when only the first value and the absolute values of that other vector are known?


Assume we have the 4 x 4 Hadamard-Walsh matrix (in natural order):

$$W = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}$$

And we have a random vector, for example $h=[2, 3, 1, 5]$. $S$ is the result of convolving $h$ with one row chosen randomly from the matrix $W$; let's choose row 2, which is $W_2 = [1, -1, 1, -1]$.

So the result is $S = h*W_2 = [2, 1, 0, 5, -7, 4, -5]$, where $*$ denotes the convolution operation.
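As a quick sanity check, this convolution can be reproduced with NumPy (a minimal sketch; `np.convolve` computes the full linear convolution by default):

```python
import numpy as np

h = np.array([2, 3, 1, 5])
W2 = np.array([1, -1, 1, -1])   # row 2 of the 4x4 Hadamard-Walsh matrix

S = np.convolve(h, W2)          # full linear convolution, length 4 + 4 - 1 = 7
print(S.tolist())               # [2, 1, 0, 5, -7, 4, -5]
```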

What I am looking for is to estimate which row of the matrix $W$ was convolved with $h$, based on $S$; in other words, whether it is possible to recover $h$.

This can be done, since convolution is nothing but the multiplication of a Toeplitz matrix with a vector. For example, we can build the Toeplitz matrix of $h$ and then solve for the unknowns; if the system has more than one solution, we choose the one consistent with the known entry magnitudes $|W_{ij}| = 1$.
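That equivalence can be sketched numerically; `conv_matrix` below is a hypothetical helper (not from the question) that builds the tall Toeplitz matrix of a vector, so that multiplying it reproduces `np.convolve`:

```python
import numpy as np

def conv_matrix(h, n):
    """Toeplitz matrix T of shape (len(h)+n-1, n) with T @ x == np.convolve(h, x)."""
    T = np.zeros((len(h) + n - 1, n))
    for j in range(n):
        T[j:j + len(h), j] = h   # column j is h shifted down by j positions
    return T

h = np.array([2, 3, 1, 5])
W2 = np.array([1, -1, 1, -1])
T = conv_matrix(h, len(W2))
print(np.array_equal(T @ W2, np.convolve(h, W2)))   # True
```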

For example, take the case of the 2 x 2 Hadamard-Walsh matrix

$$W = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$

and $h$ is generated randomly, say $h = [2, 4]$. In that case, the result of the convolution between $W_2 = [1, -1]$ and $h$ is $S = [2, 2, -4]$. Hence, we can calculate $h$ and $W_2$ from $S$ by writing $S = T(h)\,W_2$, where $T(h)$ is the Toeplitz matrix of $h$:

$$\begin{bmatrix} 2 \\ 2 \\ -4 \end{bmatrix} = \begin{bmatrix} h_1 & 0 \\ h_2 & h_1 \\ 0 & h_2 \end{bmatrix} \begin{bmatrix} W_{21} \\ W_{22} \end{bmatrix}$$

Therefore, we have three equations:

1- $h_1 W_{21} = 2$. Since $W_{21}$ equals 1 in all cases, $h_1 = 2$ and $W_{21} = 1$.

2- $h_2 W_{21} + h_1 W_{22} = 2$. With $W_{21}$ and $h_1$ already known from equation 1, this becomes $h_2 + 2 W_{22} = 2$ (equation 2).

3- $h_2 W_{22} = -4$ (equation 3).

Based on equations 2 and 3, we conclude that either $h_2 = 4$ or $h_2 = -2$. We take $h_2 = 4$, since we know that $|W_{22}| = 1$ (though not its sign), and $h_2 = -2$ would force $|W_{22}| = 2$. Then, from $h_2 = 4$ and equation 3, we get $W_{22} = -1$.
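This elimination can be sketched in plain Python: substituting $W_{22} = (S_2 - h_2)/h_1$ from equation 2 into equation 3 yields the quadratic $h_2^2 - S_2 h_2 + h_1 S_3 = 0$, and the magnitude constraint $|W_{22}| = 1$ selects the root (note the code uses 0-based indices for $S$):

```python
import math

S = [2, 2, -4]   # observed convolution from the 2x2 example
h1 = S[0]        # equation 1: W_21 = 1 in every case, so h1 = S[0] = 2
# Substituting W_22 = (S[1] - h2) / h1 into h2 * W_22 = S[2] gives
#   h2**2 - S[1]*h2 + h1*S[2] = 0
disc = math.sqrt(S[1] ** 2 - 4 * h1 * S[2])
roots = [(S[1] + disc) / 2, (S[1] - disc) / 2]       # candidates: 4.0 and -2.0
# keep the root whose implied W_22 has absolute value 1
h2, W22 = next((r, (S[1] - r) / h1) for r in roots
               if math.isclose(abs((S[1] - r) / h1), 1.0))
print(h1, h2, W22)   # 2 4.0 -1.0
```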

So, as seen above, we could extract the vector $h$ based on the first value of the vector $W_2$ and the known absolute values of the entries of $W$.

My question: is there an easier method or known algorithm that can be applied in this case, to avoid the high complexity that results when the matrix $W$ and the vector $h$ are large? Or could a machine learning or deep learning algorithm be used to do this?

Thank you in advance.

Best Answer

I am not 100% clear on your question, but perhaps this will help.

Let $\mathbf{W}_m$ be the convolution matrix formed from the $m^{\mathrm{th}}$ row of matrix $\mathbf{W}\in\mathbb{C}^{M \times N}$, which is full rank and has no repeated rows. You have an observation given by $\mathbf{s}$ and the model for this observation is $\mathbf{W}_i \mathbf{h} = \mathbf{s}$ for some $i$. From this you wish to find both $\mathbf{h}$ and $i$.

Using the Moore-Penrose inverse we know that $\mathbf{h} = (\mathbf{W}_i^\mathrm{H} \mathbf{W}_i )^{-1} \mathbf{W}_i^\mathrm{H} \mathbf{s}$.

Thus, use the Moore-Penrose inverse to calculate $\mathbf{h}_m = (\mathbf{W}_m^\mathrm{H} \mathbf{W}_m )^{-1} \mathbf{W}_m^\mathrm{H} \mathbf{s}$ for $m=1,\dots,M$. One of these vectors is equal to $\mathbf{h}$.

If $m=i$, then $\mathbf{h}_m = \mathbf{h}$ and $\mathbf{W}_m = \mathbf{W}_i$, and, thus, $\mathbf{W}_m \mathbf{h}_m = \mathbf{s}$.

On the other hand, if $m \neq i$, then $\mathbf{W}_m \mathbf{h}_m = \mathbf{W}_m (\mathbf{W}_m^\mathrm{H} \mathbf{W}_m )^{-1} \mathbf{W}_m^\mathrm{H} \mathbf{s}$ is the projection of $\mathbf{s}$ onto the column space of $\mathbf{W}_m$, which differs from $\mathbf{s}$ as long as $\mathbf{s}$ does not also lie in that column space; since no two rows of $\mathbf{W}$ are the same, this is the generic case.

Thus, the algorithm would be to compute each $\mathbf{h}_m$ and see if $\mathbf{W}_m \mathbf{h}_m = \mathbf{s}$. If it does, then $m=i$ and $\mathbf{h} = \mathbf{h}_m$.
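A minimal NumPy sketch of this algorithm, using `np.linalg.lstsq` to compute the Moore-Penrose solution; `conv_matrix` and `recover` are illustrative names, not part of the answer:

```python
import numpy as np

def conv_matrix(w, n):
    """Tall Toeplitz matrix W_m with W_m @ h == np.convolve(w, h) for len(h) == n."""
    T = np.zeros((len(w) + n - 1, n))
    for j in range(n):
        T[j:j + len(w), j] = w
    return T

def recover(W, s):
    """Try each row of W; return (row index, h_m) for the first m with W_m h_m = s."""
    n = len(s) - W.shape[1] + 1                      # length of h
    for m, row in enumerate(W):
        Wm = conv_matrix(row, n)
        hm, *_ = np.linalg.lstsq(Wm, s, rcond=None)  # Moore-Penrose solution
        if np.allclose(Wm @ hm, s):                  # s reproduced  =>  m = i
            return m, hm
    return None

# 4x4 Hadamard-Walsh matrix (natural order) and the observation from the question
W = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]])
s = np.array([2, 1, 0, 5, -7, 4, -5], dtype=float)
m, h = recover(W, s)
print(m + 1)             # 2 -- row 2 generated the observation
print(np.round(h, 6))    # h recovered, approximately [2, 3, 1, 5]
```

Note that this is still a linear scan over all $M$ rows, with one least-squares solve per row.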