I have a set of data $(x_{\rm data}, y_{\rm data}, \Delta y_{{\rm data}})$, where $\Delta y_{{\rm data}}$ is the uncertainty in the measurement. I also have some model used to explain the data. The model is non-linear:$$y_{\rm model} =\mathcal{F}(\mathbf{p}; x_{\rm model})~,$$ $\mathbf{p}$ being the model parameters. I have some prior knowledge about $\mathbf{p}$, so I run an MCMC chain to estimate the posterior distribution of $\mathbf{p}$, and use that distribution to find the best-fit values and the uncertainties for $\mathbf{p}$. This is all good. However, I have a few (fewer than 10) candidate models to explain the dataset, and the true model is unknown. Each of these models gives different estimates of the best-fit values and uncertainties of $\mathbf{p}$. Thus, I need a sound method for calculating the best-fit values and uncertainties of $\mathbf{p}$ that also accounts for the model uncertainty. While calculating such a systematic uncertainty, I also need to keep in mind that some of these models fit the data poorly compared to others (as measured by some metric like $\chi^2 =\sum (y_{\rm data}- y_{\rm model})^2/\Delta y_{\rm data}^2$). Any suggestions?
P.S. For model selection, I came across information criteria such as the Akaike Information Criterion, so my naive thought was to choose the best model according to such a criterion and report the best-fit values and uncertainties of the parameters for that model alone.
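One way to go beyond picking a single winner is to weight the candidate models by their relative AIC support (Akaike weights) and propagate the between-model scatter into the parameter uncertainty (the model-averaging formula of Buckland et al. 1997). Below is a minimal sketch for one parameter; all the numbers (`chi2`, `k`, `p_hat`, `sigma`) are made-up per-model fit results for illustration:

```python
import numpy as np

# Hypothetical per-model fit results (assumed values, for illustration only):
# chi2:  goodness-of-fit of each candidate model
# k:     number of free parameters of each model
# p_hat: best-fit value of one parameter of interest, per model
# sigma: posterior standard deviation of that parameter, per model
chi2  = np.array([12.3, 10.1, 25.7])
k     = np.array([2, 3, 2])
p_hat = np.array([1.05, 0.98, 1.40])
sigma = np.array([0.10, 0.12, 0.08])

# For Gaussian errors, AIC = chi^2 + 2k up to an additive constant
# that is common to all models and cancels in the differences below.
aic = chi2 + 2 * k
d_aic = aic - aic.min()

# Akaike weights: relative support for each model given the data.
w = np.exp(-0.5 * d_aic)
w /= w.sum()

# Model-averaged estimate and its uncertainty: the variance combines each
# model's own variance with the between-model spread, so poorly fitting
# models are automatically down-weighted.
p_avg = np.sum(w * p_hat)
var_avg = np.sum(w * (sigma**2 + (p_hat - p_avg)**2))
print(p_avg, np.sqrt(var_avg))
```

Note that the combined uncertainty can never be smaller than the smallest single-model uncertainty, since the between-model term is non-negative.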
The uncertainty in the estimated parameters $\mathbf{p}$ can be quantified by their (estimated) standard errors and confidence intervals. Comparison between different models (on the same data) can indeed be conducted using the AIC or BIC criteria. You can report the standard error or confidence interval of the difference in AIC between models, which gives you some measure of the significance of the improvement (or of the final model). If you compare models within the same family, e.g., linear models, then there are well-established model-selection algorithms (an F test, or AIC combined with backward/forward/stepwise variable selection). If you compare models of different types (e.g., a linear model with fixed effects versus a non-linear model with mixed effects), then AIC/BIC together with uncertainty metrics such as standard errors is probably the most suitable choice.
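To make the AIC/BIC comparison concrete, here is a small sketch computing both criteria from maximized log-likelihoods; the `logL`, `k`, and `n` values are made up for illustration, and the example also shows how BIC penalizes extra parameters more heavily than AIC at this sample size:

```python
import numpy as np

# Hypothetical maximized log-likelihoods (assumed values for illustration),
# parameter counts k, and sample size n for two competing models.
logL = np.array([-45.2, -41.8])
k    = np.array([2, 4])
n    = 30

aic = 2 * k - 2 * logL          # Akaike Information Criterion
bic = k * np.log(n) - 2 * logL  # Bayesian Information Criterion

# Differences relative to the best (lowest) value of each criterion.
# A common rule of thumb: Delta > ~10 means essentially no support
# for that model relative to the best one.
d_aic = aic - aic.min()
d_bic = bic - bic.min()
print(d_aic, d_bic)
```

In this toy example AIC prefers the richer model, while BIC's stronger complexity penalty makes the two models nearly indistinguishable.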