Assume I know my forward model exactly; it is represented by $n$ non-linear functions (or some probabilistic models):
$\vec{R}=f(x,y,z)$ , $f:\mathbb{R}^3\to\mathbb{R}^n$
where each component $R_i$ corresponds to a different function $f_i$ applied to the same input $(x,y,z)$.
Now I am interested in the inverse model: mapping a vector $\vec{R}$ back to $(x,y,z)$. I can't invert my functions analytically, but I can generate as many $\vec{R}$ samples as I'd like, for any combination of inputs $(x,y,z)$. So I can create a lot of labeled data $\{\vec{R},(x,y,z)\}$ for the regression, free of noise and outliers.
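To make the setup concrete, here is a minimal sketch of this "simulate, then regress the inverse" idea. The `forward` function below is a hypothetical stand-in for the real $f:\mathbb{R}^3\to\mathbb{R}^5$, and the "regressor" is just a nearest-neighbour lookup over the simulated table; any off-the-shelf regressor could replace it:

```python
import numpy as np

# Hypothetical stand-in for the real forward model f: R^3 -> R^5.
def forward(p):
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    return np.stack([x + y, x * z, np.sin(y), y - z, x**2], axis=-1)

rng = np.random.default_rng(0)
inputs = rng.uniform(-1.0, 1.0, size=(10_000, 3))  # sampled (x, y, z)
targets = forward(inputs)                          # noiseless R vectors

# Labeled data {R, (x,y,z)}: invert by nearest-neighbour lookup in R-space.
def invert(R_query):
    dists = np.linalg.norm(targets - R_query, axis=1)
    return inputs[np.argmin(dists)]

p_true = np.array([0.3, -0.5, 0.8])
p_hat = invert(forward(p_true))
print(np.abs(p_hat - p_true).max())  # small when the sampled grid is dense
```

This is exactly the generic-regression route: once the table is built, the forward model plays no further role, and accuracy is limited by how densely the input space was sampled.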
I've looked at all kinds of regression methods (trees/forests, Support Vector Regression, Lasso/Ridge, ...), but none of them uses any information about the forward model. It feels strange to use trees or SVR, which just minimize some error over the data, without incorporating the forward model or the exact knowledge of how the data was created in the first place.
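One direct way to use the forward model itself, rather than a model-agnostic regressor, is to invert it numerically: given $\vec{R}$, minimize $\lVert f(x,y,z)-\vec{R}\rVert^2$ over the inputs. A minimal Gauss-Newton sketch (again with a hypothetical stand-in for $f$, and a numerical Jacobian since only function evaluations are assumed available):

```python
import numpy as np

# Hypothetical stand-in for the real forward model f: R^3 -> R^5.
def forward(p):
    x, y, z = p
    return np.array([x + y, x * z, np.sin(y), y - z, x**2])

def invert(R, p0, iters=50):
    """Gauss-Newton: minimize ||forward(p) - R||^2 using f directly."""
    p = p0.astype(float)
    for _ in range(iters):
        r = forward(p) - R
        # Central-difference Jacobian of the forward model at p.
        eps = 1e-6
        J = np.stack([(forward(p + eps * e) - forward(p - eps * e)) / (2 * eps)
                      for e in np.eye(3)], axis=1)
        # Damped normal equations (Levenberg-style) for a stable step.
        step = np.linalg.solve(J.T @ J + 1e-8 * np.eye(3), J.T @ r)
        p = p - step
    return p

p_true = np.array([0.3, -0.5, 0.8])
p_hat = invert(forward(p_true), p0=np.zeros(3))
print(np.abs(p_hat - p_true).max())  # near machine precision on this example
```

Unlike a generic regressor, this exploits the exact forward model at query time, at the cost of an iterative solve per query and the usual caveats (local minima, non-injective regions of $f$).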
Am I missing something, or is this the best we can do in such cases?