Does a larger change in $R^2$ indicate a given parameter is "more important" in model fitting?


I have a set of experimental data and a given model that is supposed to fit that data. Let's say this model has $n$ parameters. I need to write a parameter estimation algorithm to determine the optimum values of these $n$ parameters, such that the coefficient of determination ($R^2$) is maximised (or as close to 1 as possible within computation-time constraints) when this model is fit to my data.

One possible algorithm I am designing depends on knowing which of the parameters has the "biggest effect" on how well the model fits my data. For example, a parameter $k$, which could physically be anywhere between 0.01 and 0.02, may have either a very large or an insignificantly small effect on the shape of the model as its value varies from 0.01 to 0.02.

As I know the upper and lower bound of what each parameter could be, my thought was to do the following:

  1. Set all parameters to their mid-point (e.g. $k = 0.015$).
  2. For each parameter, select (say) 10 uniformly spaced points within its bounds.
  3. For each resulting parameter set (varying one parameter and leaving the others at their mid-points), evaluate the model against the data and calculate $R^2$.
  4. Rank the parameters by the variance of their $R^2$ values.

The parameter with the highest variance in $R^2$ has the "biggest" effect on the model, and the parameter with the lowest variance in $R^2$ has the "smallest" effect on the fit of the model.
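The procedure above can be sketched in a few lines of NumPy. This is a minimal one-at-a-time sweep under stated assumptions: `model` is a hypothetical callable taking the independent variable and keyword parameters, and `bounds` maps each parameter name to its (lower, upper) physical bounds; neither name comes from the question itself.

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def parameter_sensitivity(model, bounds, x, y, n_points=10):
    """One-at-a-time sweep: vary each parameter over its bounds while
    holding the others at their mid-points, and rank parameters by the
    variance of the resulting R^2 values (highest variance first)."""
    # Step 1: every parameter starts at the mid-point of its bounds.
    mids = {name: (lo + hi) / 2.0 for name, (lo, hi) in bounds.items()}
    variances = {}
    for name, (lo, hi) in bounds.items():
        r2_values = []
        # Step 2: uniformly spaced points within this parameter's bounds.
        for value in np.linspace(lo, hi, n_points):
            params = dict(mids, **{name: value})  # vary one, fix the rest
            # Step 3: evaluate the model and record R^2.
            r2_values.append(r_squared(y, model(x, **params)))
        variances[name] = np.var(r2_values)
    # Step 4: rank by variance of R^2, largest ("biggest effect") first.
    return sorted(variances.items(), key=lambda kv: -kv[1])
```

As a toy check, with a linear model `y = k*x + c`, sweeping an intercept over a wide range perturbs the fit far more than sweeping a small slope, so the intercept ranks first. Note this only probes each parameter along one axis through the mid-point of the box, so it can miss interaction effects between parameters.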

In short: Is this a correct/adequate way of determining the relative importance of the parameters, and if not, is there another way?