Do Real-World Models "Escape" the Runge Phenomenon?
As we know, the Runge Phenomenon describes the (mathematically provable) effect of polynomial interpolation exhibiting large oscillations near the endpoints of the interval over which the function is being approximated - as a result, greatly reducing the overall quality of the approximation. This oscillatory behavior worsens as the degree of the polynomial increases, and the Runge Phenomenon is classically said to occur for Runge's function, interpolated at equally spaced nodes on $[-1, 1]$:

$$f(x) = \frac{1}{1 + 25x^2}$$
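To make this concrete, here is a minimal sketch (assuming NumPy is available; the helper names are my own) that reproduces the effect: the maximum error of equispaced polynomial interpolation of Runge's function grows as the degree increases, rather than shrinking.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def runge(x):
    """Runge's classic example function."""
    return 1.0 / (1.0 + 25.0 * x**2)

x_dense = np.linspace(-1, 1, 2001)  # fine grid for measuring the error

for degree in (4, 8, 12, 16, 20):
    nodes = np.linspace(-1, 1, degree + 1)  # equally spaced interpolation nodes
    # With degree + 1 nodes, the least-squares fit is (up to floating-point
    # conditioning) the exact interpolating polynomial.
    coeffs = P.polyfit(nodes, runge(nodes), degree)
    max_err = np.max(np.abs(P.polyval(x_dense, coeffs) - runge(x_dense)))
    print(f"degree {degree:2d}: max |f - p| = {max_err:.3g}")
# The maximum error grows with the degree; the oscillations blow up near x = ±1.
```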
However, fields like Statistical Modelling and Machine Learning continue to approximate complicated real-world functions (using other functions) with increasing accuracy, thus seemingly defying the Runge Phenomenon.
According to the Wikipedia article, there are some methods that can apparently "mitigate" the Runge Phenomenon, such as:
- Piecewise Polynomials
- Least Squares
- Constrained Norm Minimization (i.e. "Regularization" - I can understand that this "forces" a high-order polynomial fit to behave like a relatively lower-order one by shrinking its coefficients, thus potentially reducing the Runge Phenomenon, since the phenomenon is said to be more severe for higher-order polynomials than for lower-order ones; see the sketch after this list)
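Here is a minimal sketch of the three mitigations above (assuming NumPy and SciPy are available; the penalty strength `lam` is a hypothetical choice, not a tuned value). Each one tames the end-of-interval oscillations that exact high-degree interpolation at equally spaced nodes produces.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.interpolate import CubicSpline

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

x_dense = np.linspace(-1, 1, 2001)
x_fit = np.linspace(-1, 1, 200)  # many more samples than coefficients

# Least squares: degree-10 fit to 200 samples instead of interpolating 11 nodes.
c_ls = P.polyfit(x_fit, runge(x_fit), 10)
err_ls = np.max(np.abs(P.polyval(x_dense, c_ls) - runge(x_dense)))

# Constrained norm minimization ("ridge"): degree 20, penalized coefficient norm.
deg, lam = 20, 1e-6  # lam is a hypothetical penalty strength
A = P.polyvander(x_fit, deg)  # design matrix of monomials up to degree 20
c_ridge = np.linalg.solve(A.T @ A + lam * np.eye(deg + 1), A.T @ runge(x_fit))
err_ridge = np.max(np.abs(P.polyval(x_dense, c_ridge) - runge(x_dense)))

# Piecewise polynomial: cubic spline through 11 equally spaced nodes.
nodes = np.linspace(-1, 1, 11)
err_spl = np.max(np.abs(CubicSpline(nodes, runge(nodes))(x_dense) - runge(x_dense)))

print(f"least squares (deg 10):  max error = {err_ls:.3g}")
print(f"ridge fit     (deg 20):  max error = {err_ridge:.3g}")
print(f"cubic spline (11 nodes): max error = {err_spl:.3g}")
```

Under these assumptions, all three maximum errors come out far smaller than what exact interpolation of comparable degree at equally spaced nodes gives for this function.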
But it remains unclear why some functions greatly suffer from the Runge Phenomenon, whereas other functions are able to escape it. For example, Neural Network models have been observed to approximate very complicated real-world functions.
Can someone please comment on this?
Thanks!
References:

- "Runge's phenomenon", Wikipedia: https://en.wikipedia.org/wiki/Runge%27s_phenomenon