I was about to solve this system of equations:
$$z_1 = Ax$$ $$z_2 = Bz_1$$ $$z_3 = Cz_2$$ $$b = Dz_3$$
where $b$ and $x$ are vectors and $A, B, C, D$ are matrices. But then I realized that $A, B, C, D$ are unknown to me, so I had to use a simpler example.
So let's assume that we have this equation:
$$b = Ax$$
Where we want to find $A$. What is the best way to find $A$ if $b$ and $x$ are known vectors?
Here are three options that I have found.
- Pseudo inverse: Using numerical linear algebra
- Backpropagation: Using gradient descent
- Estimation: Using a kalman filter
So what do you recommend if I want to solve for $A$, or in the other case, find $DCBA$ from $b = DCBAx$?
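For concreteness, the pseudoinverse option from the list above can be sketched in NumPy. The vectors here are made up for illustration; any nonzero $x$ works:

```python
import numpy as np

# Hypothetical example vectors (treated as columns).
x = np.array([[1.0], [2.0], [3.0]])
b = np.array([[4.0], [5.0], [6.0]])

# Minimum-norm solution of b = Ax via the pseudoinverse of x.
# For a single column vector, pinv(x) = x^T / ||x||^2,
# so this yields A = b x^T / ||x||^2.
A = b @ np.linalg.pinv(x)

print(np.allclose(A @ x, b))  # True
```

Note that with only one pair $(x, b)$ the system is underdetermined, so the pseudoinverse picks one particular solution out of infinitely many.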
My deleted answer did not address the question, as correctly observed by @Winter.
Given two vectors $x$ and $b$, the objective is to construct a matrix $A$ such that $$Ax=b.$$
If $x=0$, then $Ax=0$ for any $A$ and we must have $b=0$ or the problem has no solution.
If $x=b=0$, then any matrix $A$ will suffice.
If $x\not=0$, then the matrix given by $$A=\frac{1}{\|x\|^2_2}bx^T$$ satisfies $$Ax = \frac{1}{\|x\|^2_2}bx^Tx = b.$$
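A quick numerical check of this construction (the vectors are made-up examples; any $x \neq 0$ will do):

```python
import numpy as np

# Hypothetical example vectors.
x = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Rank-1 construction: A = b x^T / ||x||_2^2
A = np.outer(b, x) / np.dot(x, x)

print(np.allclose(A @ x, b))     # True: the construction satisfies Ax = b
print(np.linalg.matrix_rank(A))  # 1, i.e. A is singular
```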
There is a significant downside to this matrix $A$. It has rank 1, so it is likely to break any software based on Gaussian elimination.
To make further progress we turn to the singular value decomposition and write a general matrix $A$ as $$ A = \sum_{j=1}^m \sigma_j v_j u_j^T.$$ We want to choose the singular values and the singular vectors such that $$Ax=b$$ and $A$ will not break software based on Gaussian elimination.
Again if $x=0$, then we must have $b = 0$ and $A=I$ will work.
If $x \not = 0$, then we consider $b$ as follows.
If $b = 0$, then any acceptable $A$ will be singular and it should break software based on Gaussian elimination.
If $b \not =0$, then we set $u_1 = x/\|x\|_2$ and extend $u_1$ to an orthogonal matrix $U$ using the Gram-Schmidt process. We set $v_1 = b/\|b\|_2$ and extend $v_1$ to an orthogonal matrix $V$ using the Gram-Schmidt process. Then $$Ax = \sum_{j=1}^m \sigma_j v_j u_j^T x = \sigma_1 v_1 u_1^Tx = \sigma_1 \frac{b}{\|b\|_2} \frac{x^Tx}{\|x\|_2} = \sigma_1 \frac{\|x\|_2}{\|b\|_2} b.$$ It follows that we should choose $\sigma_1 = \frac{\|b\|_2}{\|x\|_2}$. We are free to choose the remaining $\sigma_j$. The choice $$ \sigma_j = \frac{\|b\|_2}{\|x\|_2}$$ is attractive because it ensures that the matrix is well-conditioned; all singular values are equal, so the condition number is 1. On the other hand, we also have the freedom to make $A$ arbitrarily ill-conditioned.
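The well-conditioned choice can be sketched numerically. A QR factorization plays the role of the Gram-Schmidt process here, and the example vectors are again made up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

def extend_to_orthogonal(v, rng):
    """Return an orthogonal matrix whose first column is v / ||v||_2.

    QR on [v | random columns] does the Gram-Schmidt work; the random
    columns are almost surely linearly independent of v.
    """
    n = v.size
    Q, _ = np.linalg.qr(np.column_stack([v, rng.standard_normal((n, n - 1))]))
    if np.dot(Q[:, 0], v) < 0:  # QR may flip the sign of the first column
        Q[:, 0] = -Q[:, 0]
    return Q

U = extend_to_orthogonal(x, rng)
V = extend_to_orthogonal(b, rng)

# All singular values equal to ||b||_2 / ||x||_2  =>  condition number 1.
sigma = np.linalg.norm(b) / np.linalg.norm(x)
A = sigma * V @ U.T

print(np.allclose(A @ x, b))                # True
print(np.isclose(np.linalg.cond(A), 1.0))   # True
```

Scaling the diagonal of singular values non-uniformly instead would give the arbitrarily ill-conditioned variant mentioned above.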