I'm currently trying to fit an experimental linear relation,
$$ y^{exp} = \alpha^{exp} x^{exp}, $$
with a model that has several parameters $t_1,\ldots,t_4$, where $x = x(t_1,\ldots,t_4)$ and $y = y(t_1,\ldots,t_4)$. I will call the $y$ and $x$ sets that result from the simulation $y^{sim}$ and $x^{sim}$, while the arbitrary set of parameters used in the simulation is $t_1^{sim},\ldots,t_4^{sim}$.
The variations of the real parameters occurring in nature (not measured in the experiment) that generate the curve $y^{exp} = \alpha^{exp} x^{exp}$ are the unknowns of my problem: $dt_1^{exp},\ldots,dt_4^{exp}$.
Next I write down
$$ dy^{exp} = \frac{\partial y^{sim}}{\partial t_1^{sim}}dt^{exp}_1 + \dots + \frac{\partial y^{sim}}{\partial t_4^{sim}}dt_4^{exp}, $$
$$ dx^{exp} = \frac{\partial x^{sim}}{\partial t_1^{sim}}dt_1^{exp} + \dots + \frac{\partial x^{sim}}{\partial t_4^{sim}}dt_4^{exp}, $$
which is a linear system whose unknowns are $dt_1^{exp},\ldots,dt_4^{exp}$. Since its rank is at most $2$, not $4$, there is no unique solution; however, the following weirdness happens:
I split the linear system into the $\binom{4}{2} = 6$ solvable $2\times 2$ subsystems,
$$ dy^{exp} = \frac{\partial y^{sim}}{\partial t_i^{sim}}dt^{exp}_i + \frac{\partial y^{sim}}{\partial t_j^{sim}}dt_j^{exp}, $$
$$ dx^{exp} = \frac{\partial x^{sim}}{\partial t_i^{sim}}dt_i^{exp} + \frac{\partial x^{sim}}{\partial t_j^{sim}}dt_j^{exp}, $$
where $i,j = 1,\ldots,4$ and $i\neq j$. Of course, the component $dt_{i,n}^{exp}$ obtained by solving the $n^{th}$ subsystem differs from the component $dt_{i,m}^{exp}$ obtained from the $m^{th}$ subsystem when $n\neq m$.
HOWEVER,
if, for example, the first subsystem fits the experimental data with $dt_{1,1}$ and $dt_{2,1}$, the second with $dt_{2,2}$ and $dt_{3,2}$, and the third with $dt_{1,3}$ and $dt_{3,3}$, it just so happens that
$$ \frac{dt_{1,1}}{dt_{2,1}} \frac{dt_{2,2}}{dt_{3,2}} \equiv \frac{dt_{1,3}}{dt_{3,3}}, $$
in other words, the relative change is immutable, regardless of which chain of subsystems I use to compute it:
$$ \frac{dt_{i,n}}{dt_{j,n}} \frac{dt_{j,m}}{dt_{k,m}} \equiv \frac{dt_{i,p}}{dt_{k,p}}. $$
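This invariance is easy to check numerically. Below is a minimal sketch, where random numbers stand in for the simulated partial derivatives (all values here are hypothetical, not my actual simulation); for generic data the chained ratio and the direct ratio agree up to an overall sign:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the simulated derivatives: row 0 plays the
# role of (dy/dt_1, ..., dy/dt_4), row 1 of (dx/dt_1, ..., dx/dt_4),
# and b of (dy_exp, dx_exp). Any generic values show the same invariance.
J = rng.standard_normal((2, 4))
b = rng.standard_normal(2)

def ratio(i, j):
    """Solve the 2x2 subsystem in (dt_i, dt_j) and return dt_i / dt_j."""
    dti, dtj = np.linalg.solve(J[:, [i, j]], b)
    return dti / dtj

# Chain 1->2 and 2->3 versus the direct ratio 1->3 (0-based indices here):
chained = ratio(0, 1) * ratio(1, 2)
direct = ratio(0, 2)
print(chained, direct)  # equal in magnitude, opposite in sign
```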
Finally, even though I don't have absolute values for the $dt_i$, I do end up with exact relative ratios that depend on two free parameters $\beta$ and $\gamma$, such that $y^{exp} = \gamma y^{sim}$ and $x^{exp} = \beta x^{sim}$.
Had my mathematical knowledge been stronger I probably could have seen it coming, and even though I hoped for some degree of self-consistency, this result holds down to the machine precision of the computer I'm running the simulations on, regardless of the arbitrary set of simulation parameters that I choose. WHY? It looks like, even though a matrix of non-maximal rank is non-invertible, the solutions of the split subsystems are constrained to maintain these relative ratios.
- Does this thing have a name?
- Is it so trivial that it doesn't have one?
- Would it have worked if the number of free parameters $(\beta,\gamma)$ plus the rank of the original matrix $(2)$ were not equal to the number of parameters $(t_1,\ldots,t_4)$ of the model?
Thank you very much in advance
This is a special property of exactly this case: two linear equations in three variables, $Ax=b$ with $A=(a_1,a_2,a_3)$ (for your $2\times 4$ system, the same computation applies to any three of the four columns). By Cramer's rule the ratios can be expressed as $$ \frac{x_{1,1}}{x_{2,1}}=\frac{\det(b,a_2)}{\det(a_1,b)},\qquad \frac{x_{2,2}}{x_{3,2}}=\frac{\det(b,a_3)}{\det(a_2,b)},\qquad \frac{x_{1,3}}{x_{3,3}}=\frac{\det(b,a_3)}{\det(a_1,b)}. $$ Up to sign, the three factors on the right cancel in the product $$ \frac{x_{1,1}}{x_{2,1}}\frac{x_{2,2}}{x_{3,2}}\frac{x_{3,3}}{x_{1,3}}=-1. $$
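For what it's worth, this cancellation can be verified numerically; here is a quick sketch on a random generic system (the particular values are arbitrary, only the Cramer's-rule structure matters):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random generic 2x3 system A x = b; the cancellation below does not
# depend on the particular values, only on the Cramer's-rule structure.
A = rng.standard_normal((2, 3))
b = rng.standard_normal(2)

def pair_solution(i, j):
    """Solve the 2x2 subsystem using only columns i and j of A."""
    return np.linalg.solve(A[:, [i, j]], b)

x11, x21 = pair_solution(0, 1)  # subsystem in (x_1, x_2)
x22, x32 = pair_solution(1, 2)  # subsystem in (x_2, x_3)
x13, x33 = pair_solution(0, 2)  # subsystem in (x_1, x_3)

product = (x11 / x21) * (x22 / x32) * (x33 / x13)
print(product)  # -1 up to rounding
```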