From vector to gradient matrix - reverse Singular Value Decomposition?


Assume that I have a scalar gradient $g=\frac{||x||}{||y||}$, where $||x||$ is the $L_2$-norm of a vector $x$ in some space $\mathbb{R}^m$ and $||y||$ is the $L_2$-norm of a vector $y$ in some space $\mathbb{R}^n$. Since $x$ and $y$ are just vectors, it does not really matter how large $m$ and $n$ are, as long as $m,n > 0$. We could re-orient the bases of $\mathbb{R}^m$ and $\mathbb{R}^n$ so that $x$ and $y$ each lie along a single basis vector, and in that basis the gradient $g$ is just a scalar.

If this re-orientation/rotation is not known, we would likely have the gradient not in scalar form but as a gradient matrix $\nabla_y x$, an $n\times m$ matrix:

$$\nabla_y x= \left[ \begin{matrix} \frac{x_1}{y_1} & \dots & \frac{x_m}{y_1} \\ \vdots & \ddots & \vdots \\ \frac{x_1}{y_n} & \dots & \frac{x_m}{y_n} \\ \end{matrix} \right]$$

for $x=\left[x_1,...,x_m\right]^T$ and $y=\left[y_1,...,y_n\right]^T$. If we then take a singular value decomposition $\nabla_y x=USV^T$, we would find that $S$ contains only a single non-zero singular value, corresponding to the scalar gradient $g$; the first column of $U$ would correspond to the normalized vector $\bar y=\frac{y}{||y||}$, and the first row of $V^T$ would correspond to the normalized vector $\bar x=\frac{x}{||x||}$. This way we would recover $\bar y$, $g$, and $\bar x$ from $\nabla_y x$.

Is this correct so far?
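The claim above is easy to probe numerically. Here is a minimal sketch using NumPy (the sizes $m=3$, $n=4$ and the random vectors are my own arbitrary choices): it assembles the matrix with entries $x_j/y_i$, takes its SVD, and prints the quantities the claim refers to so they can be compared.

```python
import numpy as np

# Arbitrary example vectors (m = 3, n = 4)
rng = np.random.default_rng(0)
x = rng.standard_normal(3)   # x in R^m
y = rng.standard_normal(4)   # y in R^n

# Assemble the n x m matrix with entries x_j / y_i, as in the question.
# Note this is the outer product of the vector (1/y_1, ..., 1/y_n) with x,
# so it is rank one: only the first singular value is non-zero.
G = x[np.newaxis, :] / y[:, np.newaxis]

# Singular value decomposition G = U S V^T
U, S, Vt = np.linalg.svd(G)

g = np.linalg.norm(x) / np.linalg.norm(y)  # the scalar gradient ||x||/||y||

print(S)          # singular values: compare S[0] with g
print(U[:, 0])    # first left singular vector: compare with y/||y||
print(Vt[0])      # first right singular vector: compare with x/||x||
print(g)
```

(Signs of singular vectors are only determined up to a common flip, so the comparison should be made up to sign.)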

If so, my question is whether this process can be inverted, i.e. solved the other way around. If we have $\bar y$, $g$, and $\bar x$, could we then assemble the gradient matrix in the following way:

$$\nabla_y x= {\bar y}g{\bar x}^T$$

Is this assumption correct? If not, where is my error?
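For concreteness, here is a minimal sketch of the proposed reconstruction, again with arbitrary example vectors of my own choosing: it assembles the rank-one matrix $\bar y\,g\,\bar x^T$ and compares it entrywise with the matrix $\nabla_y x$ defined above.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(3)   # x in R^m
y = rng.standard_normal(4)   # y in R^n

x_bar = x / np.linalg.norm(x)                 # normalized x
y_bar = y / np.linalg.norm(y)                 # normalized y
g = np.linalg.norm(x) / np.linalg.norm(y)     # scalar gradient ||x||/||y||

# Proposed reconstruction: y_bar * g * x_bar^T (an n x m rank-one matrix)
G_rec = g * np.outer(y_bar, x_bar)

# Matrix assembled directly from the entries x_j / y_i, for comparison
G = x[np.newaxis, :] / y[:, np.newaxis]

print(G_rec.shape)            # (n, m)
print(np.allclose(G_rec, G))  # do the two matrices agree entrywise?
```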