I have a large system of difference equations: each function ($x$) is expressed as an $n$-th order difference equation in all of the other functions.
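Concretely, something like the following (a sketch of the form I mean — $m$ is the number of functions, and the index layout of $A$ is illustrative):

$$x_i[t] = \sum_{j=1}^{m} \sum_{k=1}^{n} A_{ij}^{(k)}\, x_j[t-k], \qquad i = 1,\dots,m.$$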
Background:
Each function ($x$) is an approximation of an arbitrary set of data points; there is a unique data set for each function ($x$).
The coefficients ($A$) are solved for such that each function best approximates its corresponding data set.
Problem:
When I use the set of equations with the optimized coefficients to predict future data points, I find that most of the functions go off to infinity (positive and negative). This indicates that the set of equations, while optimized, is not stable. I want to add a stability criterion when solving for the coefficients ($A$).
How can I do this?
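For reference, here is a sketch of the stability condition I believe applies, assuming a linear form $x[t] = A_1 x[t-1] + \dots + A_n x[t-n]$ (the function names and matrix layout below are illustrative, not my actual code): the system is stable iff every eigenvalue of the block companion matrix lies strictly inside the unit circle.

```python
import numpy as np

def companion_matrix(A_list):
    """Stack the lag matrices A_1..A_n of x[t] = sum_k A_k x[t-k]
    into the (m*n) x (m*n) block companion matrix."""
    m = A_list[0].shape[0]          # number of functions
    n = len(A_list)                 # order of the difference equations
    C = np.zeros((m * n, m * n))
    C[:m, :] = np.hstack(A_list)    # top block row: [A_1 A_2 ... A_n]
    C[m:, :-m] = np.eye(m * (n - 1))  # shifted identities below
    return C

def is_stable(A_list):
    """Stable iff the spectral radius of the companion matrix is < 1."""
    eigs = np.linalg.eigvals(companion_matrix(A_list))
    return np.max(np.abs(eigs)) < 1.0

# Example: 2 functions, 2 lags, small coefficients -> stable
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.1, 0.0], [0.2, 0.1]])
print(is_stable([A1, A2]))
```

I could check this after fitting, but what I really want is to enforce it *during* the fit, which seems to make the problem a constrained optimization.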

