I am following a Bayesian approach (specifying an underlying class of models and a prior) in order to produce a predictive distribution of some quantity. The question troubling me is: how can I check whether my underlying model is a good approximation of reality?
My concern is that the prior distribution I have specified may produce a posterior distribution that fits the data well (by some measure), even though, for any given set of parameter values, the underlying model may fit the data poorly. In other words, ideally I would first check that the "form" of the underlying model is appropriate (that it represents reality in some meaningful way), and only then determine the predictive/posterior distribution.
This question has arisen because I have moved from an approach that used simple point estimates of the parameters to a Bayesian one that gives me a distribution over the parameters. When I was using point estimates, I felt I could compare the model output (at least for a single set of parameter values) to the available data and conclude that the form of the underlying model was not unreasonable. Now that there is a distribution over the parameters, it seems possible that the form of the underlying model is wrong, but that after averaging the model over this distribution, the resulting posterior/predictive distribution fits the data well anyway.
I think it is possible to construct examples where a poorly chosen prior results in a posterior distribution that fits the data well but where the underlying model is inappropriate.
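To make the worry concrete, here is a toy sketch (in Python with NumPy; the conjugate Normal model, the broad prior, and the skewness statistic are all my own illustrative choices, not anything from my actual problem): data are drawn from a skewed distribution but modelled as Normal, so the posterior over the mean matches the data mean well, yet a posterior predictive check on skewness exposes the wrong model form.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: actually drawn from a skewed (exponential) distribution,
# but we will model it as Normal -- the form of the model is wrong.
y = rng.exponential(scale=1.0, size=200)
n = len(y)

# Conjugate Normal model with known variance (fixed at the sample variance,
# purely for simplicity) and a broad Normal prior on the mean.
sigma2 = y.var()
mu0, tau2 = 0.0, 100.0  # assumed prior mean and prior variance
tau2_post = 1.0 / (1.0 / tau2 + n / sigma2)
mu_post = tau2_post * (mu0 / tau2 + y.sum() / sigma2)

def skewness(x):
    """Sample skewness: a statistic the Normal model cannot reproduce."""
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

# Posterior predictive check: simulate replicated datasets by first drawing
# a parameter value from the posterior, then data from the model.
reps = []
for _ in range(1000):
    mu = rng.normal(mu_post, np.sqrt(tau2_post))
    y_rep = rng.normal(mu, np.sqrt(sigma2), size=n)
    reps.append(skewness(y_rep))

# Posterior predictive p-value: fraction of replicates at least as skewed
# as the observed data. A value near 0 or 1 flags model-form misfit,
# even though the posterior over the mean fits the data mean well.
p_value = np.mean(np.array(reps) >= skewness(y))
print(p_value)  # near 0 here: the Normal form cannot reproduce the skew
```

The point of the sketch is that a fit criterion based on the mean alone would look fine, while a check on a statistic the model family cannot reproduce reveals the problem.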
What should I be checking to satisfy myself that the form of the underlying model is appropriate?