This problem originates from data analysis, but the question arises after we have already found an $m$th-order polynomial in $n$ variables that fits the data well and satisfies our statistical metrics.
The question is: given a polynomial of the form
$$P(x_1, x_2, \ldots, x_n) = \sum_{i_1=0}^{m}\sum_{i_2=0}^{m}\cdots\sum_{i_n=0}^{m} C_j\, x_1^{i_1} x_2^{i_2}\cdots x_n^{i_n},$$
where $i_1 + i_2 + \cdots + i_n \leq m$ and $j$ indexes the distinct combinations of exponents $(i_1, \ldots, i_n)$,
can we define a metric for how "smooth" the polynomial is?
We have upper and lower bounds on each variable $x_i$, $i = 1, \ldots, n$.
My first thought was to compute the gradient of $P$ and find the maximum of its norm over the domain, $\max \vert\nabla P\vert$. But how would I compute that over ranges in $n$ dimensions? And what values would indicate a polynomial too "spiky" to use?
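As a starting point, $\max \vert\nabla P\vert$ over the bounding box can be estimated numerically. Below is a minimal sketch, assuming only that $P$ can be evaluated as a black-box function; it samples random points in the box and approximates the gradient by central finite differences. The toy 2-D polynomial at the end is purely illustrative, not the actual fitted model.

```python
import numpy as np

def max_gradient_norm(P, lower, upper, n_samples=10_000, h=1e-6, seed=None):
    """Estimate max ||grad P|| over the box [lower, upper] by random
    sampling with central finite differences. Crude but works in any
    dimension; refine with a local optimizer around the best sample."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    n = lower.size
    samples = rng.uniform(lower, upper, size=(n_samples, n))
    best = 0.0
    for x in samples:
        grad = np.empty(n)
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            grad[i] = (P(x + e) - P(x - e)) / (2 * h)  # central difference
        best = max(best, np.linalg.norm(grad))
    return best

# Toy example: P(x, y) = x^2 + 3y on [-1, 1]^2; grad = (2x, 3),
# so the true maximum norm is sqrt(4 + 9) = sqrt(13) ~ 3.606.
P = lambda x: x[0]**2 + 3 * x[1]
est = max_gradient_norm(P, [-1, -1], [1, 1], n_samples=2000)
```

One could compare this estimate against the typical gradient magnitude (e.g. the median over the samples): a large ratio of max to median suggests a localized spike rather than an overall steep function.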
Again, the polynomial was found by a process that maximizes $R^2$ under cross-validation. I am fairly confident that it provides a good fit without over-fitting, but since it is hard to visualize (the actual data has five inputs and one output), we want to make sure the function has no sudden spikes.
Calculus based or numerical answers are both fine.
EDIT: adding qualitative examples of smooth vs. non-smooth functions in 3D; the desired answer is for $n$ dimensions.
Also, suggesting a simpler fit is not an answer to this question. We have a function, it fits the test data, and we are only deciding whether or not to discard it.
This is what I am trying to avoid: I want a metric that detects functions like this so I can discard them.
This is an example of a smooth curve fit


The volume of the Newton polytope might help: $${\rm NP}(f) = {\rm conv}(\{\alpha\mid c_\alpha\ne 0\})\subseteq {\Bbb R}^n$$ where $f=\sum_{\alpha} c_\alpha X^\alpha$ is a polynomial in unknowns $X_1,\ldots,X_n$ and the $X^\alpha =X_1^{\alpha_1}\cdots X_n^{\alpha_n}$ are the involved monomials (i.e., $c_\alpha\ne0$) with $\alpha=(\alpha_1,\ldots,\alpha_n)\in{\Bbb N}_0^n$.
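The Newton polytope volume can be computed directly from the nonzero exponent vectors. Below is a minimal sketch using `scipy.spatial.ConvexHull` (an assumed dependency); the polynomial is represented as a dictionary mapping exponent tuples $\alpha$ to coefficients $c_\alpha$, and the example polynomial is hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def newton_polytope_volume(coeffs):
    """Volume of NP(f) = conv({alpha : c_alpha != 0}).
    `coeffs` maps exponent tuples alpha to coefficients c_alpha.
    In 2-D, ConvexHull.volume is the area of the hull."""
    points = np.array([a for a, c in coeffs.items() if c != 0], dtype=float)
    return ConvexHull(points).volume

# f = 1 + x^2*y + x*y^2: the exponent vectors (0,0), (2,1), (1,2)
# span a triangle of area |2*2 - 1*1| / 2 = 3/2.
f = {(0, 0): 1.0, (2, 1): 1.0, (1, 2): 1.0}
vol = newton_polytope_volume(f)
```

Note that `ConvexHull` requires at least $n+1$ affinely independent exponent vectors; a polynomial whose support is degenerate (e.g. all monomials on a line) has a lower-dimensional Newton polytope and would need separate handling.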