Identifying changes in new data using previously trained regression model


I would like some ideas on the following problem. I have data $$x(y,z) = [x_1(y,z), x_2(y,z), x_3(y,z)],$$ whose components are functions of $(y,z)$, but neither $y$ nor $z$ is available. Therefore I set up a regression model: $$\tilde{x}_1 = ax_2 + bx_3$$ $$\tilde{x}_2 = ax_1 + bx_3 \tag{1}$$ $$\tilde{x}_3 = ax_1 + bx_2$$

Such a model (abbreviating the notation to $\tilde{x}_i = ABx_{\notin{i}}$) can reflect a change in $x$ that is not due to $(y,z)$: when the relationship among the components is broken (as opposed to their dependence on $(y,z)$), the model outputs a $\tilde{x}$ that differs from the given $x$.
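For concreteness, a minimal sketch of how such a model can be fit on toy data. The mixing matrix and noise level are made up, and each equation gets its own coefficient pair rather than the shared $(a, b)$ written in $(1)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for x1, x2, x3: all driven by two hidden variables (y, z)
m = 200
yz = rng.normal(size=(m, 2))                    # the unavailable (y, z)
mix = np.array([[1.0, 0.5, -0.7],
                [0.2, -1.0, 0.8]])              # hypothetical mixing
X = yz @ mix + 0.01 * rng.normal(size=(m, 3))   # x_i(y, z) plus noise

# Fit each component from the other two, as in (1)
coefs = []
preds = np.empty_like(X)
for i in range(3):
    others = [j for j in range(3) if j != i]
    c, *_ = np.linalg.lstsq(X[:, others], X[:, i], rcond=None)
    coefs.append(c)
    preds[:, i] = X[:, others] @ c

residual = np.abs(X - preds).mean()   # small while the data follows (1)
print(residual)
```

On data generated this way the mean absolute residual stays near the noise level; on data where the inter-component relationship is broken it grows, which is the change-detection idea described above.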

Now, I also know that the new incoming data $x_{new}$ can differ from $x$ only in a particular way, let's say:

$$x = n^{(1)}x_{new} + n^{(2)} \tag{2}$$

where $n^{(1)}$ and $n^{(2)}$ are vectors of the same dimension as $x$ and the product in $(2)$ is taken elementwise. This gives $2\dim(x)$ unknowns.


QUESTION:

Is it possible to recover the vectors $n^{(1)}$ and $n^{(2)}$, given sufficient data $x_{new}$?

The whole problem again in simple words: the regression model is trained on $x_{train}$, yielding $(1)$. Then the input changes, so that feeding $x_{new}$ into $(1)$ produces an error. Can I find the map $(2)$ such that transforming $x_{new}$ by $(2)$ gives an $x$ that is consistent with $(1)$?
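To illustrate the setup end to end, a toy sketch with a single assumed relation standing in for the trained model and made-up $n^{(1)}$, $n^{(2)}$: training-like data fits $(1)$, the changed data $x_{new}$ does not, and applying $(2)$ restores the fit:

```python
import numpy as np

# Assumed coefficients standing in for the trained model (1)
a, b = 0.7, -1.3

def model_error(X):
    """Mean absolute error of the first equation of (1): x1 ~ a*x2 + b*x3."""
    return np.abs(X[:, 0] - a * X[:, 1] - b * X[:, 2]).mean()

rng = np.random.default_rng(2)
x23 = rng.normal(size=(100, 2))
x = np.column_stack([a * x23[:, 0] + b * x23[:, 1], x23])  # satisfies (1)

# A change of the form (2): x = n1 * x_new + n2, product elementwise
n1 = np.array([1.5, 0.8, 2.0])
n2 = np.array([0.3, -0.7, 1.1])
x_new = (x - n2) / n1

print(model_error(x))                # ~0: training-like data fits (1)
print(model_error(x_new))            # large: the change breaks (1)
print(model_error(n1 * x_new + n2))  # ~0: applying (2) restores the fit
```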

Is there any material or technique that could help me analyze this situation and solve the problem?


P.S.

I have simply plugged $(2)$ into $(1)$, using at least two samples of $x_{new}$ to obtain $n^{(1)}$ and $n^{(2)}$. Why two? Because there are $2\dim(x)$ unknowns. This should work in principle, but it is non-convex and relatively difficult to optimize.
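One observation: if the trained coefficients in $(1)$ are held fixed, the substituted equations are linear and homogeneous in $(n^{(1)}, n^{(2)})$, so they can be stacked into a system $Av = 0$ with $v = (n^{(1)}, n^{(2)})$ and analyzed with an SVD instead of a non-convex search. A sketch with synthetic data and assumed coefficients, under the simplifying assumption that all three equations of $(1)$ encode one and the same linear relation (which, in this degenerate case, leaves a 3-dimensional solution set: the inherent global scale of $v$, plus shifts of $n^{(2)}$ along the model's own relation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed coefficients: all three equations of (1) are taken here as one
# linear relation x1 = a1*x2 + b1*x3, solved for each component in turn.
a1, b1 = 0.7, -1.3
eqs = [
    (0, 1, 2, a1, b1),          # x1 = a1*x2 + b1*x3
    (1, 0, 2, 1/a1, -b1/a1),    # x2 solved from the same relation
    (2, 0, 1, 1/b1, -a1/b1),    # x3 solved from the same relation
]

# Data x that satisfies the relation exactly, and its changed version (2)
m = 50
x23 = rng.normal(size=(m, 2))
x = np.column_stack([a1 * x23[:, 0] + b1 * x23[:, 1], x23])
n1_true = np.array([1.5, 0.8, 2.0])
n2_true = np.array([0.3, -0.7, 1.1])
x_new = (x - n2_true) / n1_true           # so that x = n1*x_new + n2

# Substituting (2) into (1): each sample and each equation gives one row
# of a homogeneous linear system A v = 0 with v = (n1, n2) in R^6
rows = []
for i, j, k, a, b in eqs:
    for u in x_new:
        r = np.zeros(6)
        r[i], r[j], r[k] = u[i], -a * u[j], -b * u[k]   # n1 part
        r[3 + i], r[3 + j], r[3 + k] = 1.0, -a, -b      # n2 part
        rows.append(r)
A = np.asarray(rows)

# Right singular vectors with (near-)zero singular values span the
# solution set; the true v lies inside that span
s, Vt = np.linalg.svd(A)[1:]
null_basis = Vt[s < 1e-8 * s[0]]
v_true = np.concatenate([n1_true, n2_true])
proj = null_basis.T @ (null_basis @ v_true)
print(np.allclose(proj, v_true))          # True: v_true is in the span
```

So at least in this degenerate case, $n^{(1)}$ is identifiable only up to a common scale and $n^{(2)}$ only up to shifts the model cannot see; how much of that degeneracy survives depends on how independent the three equations of $(1)$ actually are.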

P.S.2

I can use a stochastic search algorithm, but I would rather try a more mathematically reasoned technique.