Find Linear Map between Majorized Vectors


Suppose I am given two vectors $x, y$ such that $x\prec y$ ($x$ is majorized by $y$). Suppose further that these are infinite-dimensional vectors with $\sum_i x_i = \sum_i y_i = 1$, already arranged in decreasing order. As a consequence of the majorization relation, we can write $x = Dy$, where $D$ is a doubly stochastic matrix ($\sum_i a_{ij} = \sum_j a_{ij} = 1$, $a_{ij} \geq 0 \ \forall \ i,j$). Is it possible to find $D$ analytically? If so, is there some continuous-variable method that we can use to achieve this?

On BEST ANSWER

I believe there is a general procedure for solving this problem. We have two sequences $x = (x_1, x_2, \cdots, x_n)$ and $y = (y_1, y_2, \cdots, y_n)$ such that $x\prec y$. Suppose these are already in descending order (if not, a permutation of the vectors yields the desired ordering). First, we identify that the following two conditions are true:

$$ y_n \leq x_1 \leq y_1 \\ y_k \leq x_1 \leq y_{k-1}, $$

for some $k$. We can thus write:

$$ x_1 = t_1y_1 + (1-t_1)y_k $$

and define new vectors $x^{'}, y^{'}$ such that:

$$ x^{'} = (x_2, \cdots, x_n)\\ y^{'} = (y_2, \cdots, y_{k-1}, (1-t_1)y_1 + t_1y_k, y_{k+1}, \cdots, y_n) $$
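This single reduction step can be sketched in Python. It is a minimal illustration, not library code: the function name `one_step` and the 0-based indexing are my own, and it assumes descending inputs with $x \prec y$ (so that $y_n \leq x_1 \leq y_1$ holds and the crossing index $k$ exists).

```python
def one_step(x, y):
    """One reduction step of the T-transform construction (a sketch).

    Assumes x and y are in descending order with x majorized by y.
    Returns the mixing weight t, the crossing index k (0-based), and
    the reduced vectors x', y'."""
    # k is the crossing index: y[k] <= x[0] <= y[k-1].
    k = next(i for i in range(1, len(y)) if y[i] <= x[0])
    # Solve x[0] = t*y[0] + (1-t)*y[k] for t.
    t = 1.0 if y[0] == y[k] else (x[0] - y[k]) / (y[0] - y[k])
    y_new = list(y[1:])
    # Slot k-1 of the reduced y holds the leftover mass (1-t)*y[0] + t*y[k].
    y_new[k - 1] = (1 - t) * y[0] + t * y[k]
    return t, k, list(x[1:]), y_new
```

On the worked example below, `one_step([5, 3, 2], [6, 3, 1])` returns $t_1 = 2/3$, $k = 1$ (i.e. $y_2$ in 1-based notation), $x' = (3, 2)$, and $y' = (4, 1)$.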

Now, we can write

$$ y^{'}_{n-1} \leq x^{'}_1 \leq y^{'}_1 \\ y^{'}_j \leq x^{'}_1 \leq y^{'}_{j-1}, $$

for some index $j$. We can thus write:

$$ x^{'}_1 = t_2y^{'}_1 + (1-t_2)y^{'}_j $$

Now we may define new vectors $x^{''}, y^{''}$ such that:

$$ x^{''} = (x_3, \cdots, x_n)\\ y^{''} = (y^{'}_2, \cdots, y^{'}_{j-1}, (1-t_2)y^{'}_1 + t_2y^{'}_j, y^{'}_{j+1}, \cdots, y^{'}_{n-1}) $$

This procedure continues until we can write $x = Dy$, where $D$ is a doubly stochastic matrix ($\sum_i a_{ij} = \sum_j a_{ij} = 1$, $a_{i,j} \geq 0 \ \forall \ i,j$) composed of T-transforms, i.e. $D = T_r\cdots T_2 T_1$ with $r \leq n-1$, since each step fixes one coordinate. A T-transform acts on just 2 components of a vector and as the identity on all others; restricted to those two components it has the form:

$$ \begin{bmatrix} t&1-t\\ 1-t & t \end{bmatrix} $$

Each T-transform is a convex combination of the identity and a transposition (the permutation swapping the two affected components). Consequently $D$, as a product of T-transforms, is doubly stochastic, and by Birkhoff's theorem it can be written as a convex combination of permutation matrices.
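The full construction can be sketched as follows. This is my own rendering, not a library routine: the names `t_transform` and `majorization_map` are made up, the inputs are assumed finite, descending, and satisfying $x \prec y$, and the code works on the whole vector with explicit indices rather than shrinking it, bringing the largest remaining entry to the front at each step (a swap is itself a T-transform with $t = 0$, so $D$ remains a product of T-transforms).

```python
import numpy as np

def t_transform(n, i, j, t):
    """n x n T-transform: mixes coordinates i and j with weight t,
    identity on all other coordinates."""
    T = np.eye(n)
    T[i, i] = T[j, j] = t
    T[i, j] = T[j, i] = 1 - t
    return T

def majorization_map(x, y):
    """Sketch of the construction above: return a doubly stochastic D
    with x = D y, built as a product of T-transforms. Assumes finite
    x, y in descending order with x majorized by y."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(y, dtype=float).copy()
    n = len(z)
    D = np.eye(n)
    for m in range(n - 1):
        # Bring the largest remaining entry of z into slot m; a swap is
        # a T-transform with t = 0, so D stays a T-transform product.
        i = m + int(np.argmax(z[m:]))
        if i != m:
            T = t_transform(n, m, i, 0.0)
            z, D = T @ z, T @ D
        if np.isclose(z[m], x[m]):
            continue  # this coordinate already matches
        # Crossing slot: the largest remaining value not exceeding x[m]
        # (the index k with y_k <= x_1 <= y_{k-1} in the text).
        j = max((p for p in range(m + 1, n) if z[p] <= x[m] + 1e-12),
                key=lambda p: z[p])
        # Solve x[m] = t*z[m] + (1-t)*z[j] for the mixing weight t.
        t = (x[m] - z[j]) / (z[m] - z[j])
        T = t_transform(n, m, j, t)
        z, D = T @ z, T @ D
    return D
```

On the example below, `majorization_map([5, 3, 2], [6, 3, 1])` reproduces $D = T_2 T_1$ with $t_1 = t_2 = 2/3$.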


Let's consider an example:

$$ x = (5, 3, 2)\\ y = (6, 3, 1) $$

We have that:

$$ y_n = 1 \leq x_1 = 5 \leq y_1 = 6\\ y_k = y_2 = 3 \leq x_1 = 5 \leq y_{k-1} = y_1 = 6 $$

Therefore, we have that

$$ 5 = x_1 = t_1y_1 + (1-t_1)y_k = 6t_1 + 3(1-t_1)\\ \therefore t_1 = \frac{2}{3} $$

We now consider the reduced vectors:

$$ x^{'} = (3, 2)\\ y^{'} = ((1-t_1)y_1 + t_1y_k, 1) = ((\frac{1}{3})\cdot6 + \frac{2}{3}\cdot3, 1) = (4,1) $$

Note that $x^{'}\prec y^{'}$. Solving for the second t-parameter:

$$ y^{'}_2 = 1 \leq x^{'}_1 = 3 \leq y^{'}_1 = 4\\ y^{'}_j = y^{'}_2 = 1 \leq x^{'}_1 = 3 \leq y^{'}_{j-1} = y^{'}_1 = 4\\ 3 = x^{'}_1 = t_2y^{'}_1 + (1-t_2)y^{'}_j = 4t_2 + 1(1-t_2)\\ \therefore t_2 = \frac{2}{3} $$

Now note that:

$$ x^{''} = (2)\\ y^{''} = ((1-t_2)y^{'}_1 + t_2y^{'}_j) = (\tfrac{1}{3}\cdot4 + \tfrac{2}{3}\cdot1) = (2) $$

Hence, we have solved for the overall transformation matrix:

$$ x = Dy = T_2 T_1 y\\ \begin{pmatrix}5\\3\\2 \end{pmatrix} = \begin{bmatrix} 1 & 0 & 0\\ 0&t_2&1-t_2\\ 0&1-t_2 & t_2 \end{bmatrix} \begin{bmatrix} t_1&1-t_1 &0\\ 1-t_1 & t_1&0\\ 0&0&1 \end{bmatrix} \begin{pmatrix}6\\3\\1 \end{pmatrix} $$
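As a sanity check, this factorization can be verified numerically by multiplying out the two T-transforms with $t_1 = t_2 = \frac{2}{3}$:

```python
import numpy as np

t1, t2 = 2/3, 2/3
# T1 mixes components 1 and 2; T2 mixes components 2 and 3.
T1 = np.array([[t1, 1 - t1, 0],
               [1 - t1, t1, 0],
               [0, 0, 1]])
T2 = np.array([[1, 0, 0],
               [0, t2, 1 - t2],
               [0, 1 - t2, t2]])
D = T2 @ T1
y = np.array([6.0, 3.0, 1.0])
print(D @ y)  # [5. 3. 2.]
```

The product $D$ has nonnegative entries and row and column sums equal to 1, confirming it is doubly stochastic.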