I have the following system of 3 equations and 3 unknowns: $$c_{0} = \frac{x_0}{x_0 + x_1},\ \ c_{1} = \frac{x_1}{x_1 + x_2},\ \ \ c_{2} = \frac{x_2}{x_2 + x_0},$$ where $c_i\!\in\!(0,1)$ are known and $x_i > 0$ are unknown. This is described by the system $\textbf{A}\textbf{x}=\textbf{0}$, where
$$\mathbf{A}=\left[\begin{matrix}(c_0-1)& c_0 & 0 \\ 0 & (c_1-1) & c_1 \\ c_2 & 0 & (c_2-1) \end{matrix}\right]\ \ \text{ and }\ \ \textbf{x}=\left[\begin{matrix}x_0 \\ x_1 \\ x_2 \end{matrix}\right]$$
I initially posted here: Can some advise me on how to solve this system of equations?, hoping to get advice on finding a basis for the nullspace of $\textbf{A}$. It turns out, however, that in the general case $\textbf{A}$ is nonsingular, so only the trivial solution $\textbf{x}=\textbf{0}$ holds; under the conditions on $c_i$ and $x_i$ above there is therefore no exact solution. I would like to reformulate my problem as follows: given the system $\textbf{A}\textbf{x}=\textbf{b}$, compute $\textbf{x}$ such that $\|\textbf{b}\|$ is minimised.
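As a concrete check of the "only the trivial solution" claim, here is a small NumPy sketch (the helper name `build_A` and the particular $c_i$ values are my own, for illustration). Generic $c_i \in (0,1)$ give a nonzero determinant, while the special case $c_0=c_1=c_2=\tfrac12$ makes $\textbf{A}$ singular with $\textbf{x}=(1,1,1)$ an exact solution:

```python
import numpy as np

def build_A(c0, c1, c2):
    """Coefficient matrix of the system A x = 0 from the question."""
    return np.array([
        [c0 - 1.0, c0,       0.0     ],
        [0.0,      c1 - 1.0, c1      ],
        [c2,       0.0,      c2 - 1.0],
    ])

# Generic c_i in (0,1): det A = (c0-1)(c1-1)(c2-1) + c0*c1*c2 is nonzero,
# so the kernel is trivial and x = 0 is the only exact solution.
A = build_A(0.2, 0.5, 0.7)
print(np.linalg.det(A))  # ≈ -0.05: nonsingular

# Special case c0 = c1 = c2 = 0.5: A is singular and x = (1, 1, 1)
# solves the system exactly.
A_sing = build_A(0.5, 0.5, 0.5)
print(np.linalg.det(A_sing))               # ≈ 0
print(A_sing @ np.array([1.0, 1.0, 1.0]))  # [0. 0. 0.]
```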
Can anyone give me advice on how to solve the above minimisation problem?
P.S. It was pointed out to me by @brenderson that the above reformulation is meaningless: since $\textbf{b}=\textbf{A}\textbf{x}$, taking $\textbf{x}=\textbf{0}$ trivially minimises $\|\textbf{b}\|$. So my question is now: how do I reformulate the "exact" system above in such a way that I can get the "best possible" numerical solution for $\textbf{x}$? I am no longer sure how to state this mathematically.
I am not sure exactly what you want, but here is one approach: you can try to minimize $\|A \mathbf{x}\|$ over all $\mathbf{x}$ of a fixed norm. When $A$ has a nontrivial kernel, this gives you an $\mathbf{x}$ of the given norm lying inside the null space. If $A$ is nonsingular, then you are actually solving for the
$$\mathbf{x} \quad \text{ that minimizes } \quad \frac{\|A \mathbf{x}\|}{\|\mathbf{x}\|}$$
Note however that such an $\mathbf{x}$ also has the property that:
$$\mathbf{y} = A\mathbf{x} \quad \text{ maximises } \quad \frac{\|A^{-1} \mathbf{y} \|}{\|\mathbf{y}\|}$$
This sort of problem amounts to determining the operator norm of $A^{-1}$:
$$\| A^{-1} \| = \sup\{\|A^{-1} \mathbf{y} \| : \mathbf{y} \in K^n \text{ with } \|\mathbf{y}\|=1\},$$
and for different norms there are different results. In the case of the $L^2$ norm (Euclidean distance), the operator norm $\|A^{-1}\|_2$ coincides with the spectral norm, i.e. the largest singular value of $A^{-1}$. The $\mathbf{y}$ you want lives in the eigenspace of the largest eigenvalue of $(A^{-1})^T A^{-1}$; equivalently, $\mathbf{x}$ lives in the eigenspace of the smallest eigenvalue of $A^T A$, which is something you can compute for your explicit example.
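In concrete terms, for the Euclidean norm the minimizer of $\|A\mathbf{x}\|/\|\mathbf{x}\|$ is the right singular vector of $A$ belonging to its smallest singular value. A short NumPy sketch (the particular $c_i$ values are my own illustrative choice, not from the question):

```python
import numpy as np

# Example c_i in (0,1); build the coefficient matrix from the question.
c0, c1, c2 = 0.2, 0.5, 0.7
A = np.array([
    [c0 - 1.0, c0,       0.0     ],
    [0.0,      c1 - 1.0, c1      ],
    [c2,       0.0,      c2 - 1.0],
])

# SVD: singular values in s are sorted in descending order, so the last
# row of Vt is the right singular vector for the smallest singular value,
# i.e. the unit vector minimizing ||A x||.
U, s, Vt = np.linalg.svd(A)
x = Vt[-1]

print(s[-1])                  # minimum of ||A x|| over unit vectors x
print(np.linalg.norm(A @ x))  # same value, up to rounding
```

Note that nothing in this computation enforces the original positivity constraint: the components of this $\mathbf{x}$ may come out with mixed signs (flip the overall sign if they are all negative), so $x_i > 0$ still has to be checked separately.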