I have a simple question that I think will have potentially many solutions, depending on the level of complexity with which one wants to approach it.
I've built a generative model with two variables and have used Bayesian analysis to estimate the parameters. It is a joint distribution $\mathbf{f_{x,s}(\vec{\theta})}$, where $\mathbf{x}$ is continuous, and $\mathbf{s}$ is an unobserved discrete-choice variable. $\mathbf{\vec{\theta}}$ is a vector of parameters.
That all worked fine, and I was able to estimate the model and perform inference as required. But I have now obtained additional data $\mathbf{y}$ to use in this model, which I have grouped into quantile bins to turn it into a discrete variable with five levels, say $\mathbf{y \in \{ y_{1},y_{2},y_{3},y_{4},y_{5} \}}$.
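In case it matters, here is roughly how I did the grouping (a toy sketch using `pandas.qcut`; the lognormal draw is just a placeholder for my actual $\mathbf{y}$ data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
y = rng.lognormal(mean=10, sigma=0.5, size=1000)  # placeholder, e.g. income

# Bin y into quintiles, giving a discrete variable with 5 levels y_1..y_5
y_group = pd.qcut(y, q=5, labels=[f"y_{i}" for i in range(1, 6)])

print(y_group.value_counts())  # roughly 200 observations per bin
```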
The question is: if I re-estimate the model separately within each grouping of $y$, is this equivalent to estimating $\mathbf{f_{x,s|y}}$?
Is this a good place to start, i.e., simply grouping the data and re-estimating the model? Would you use a Bayesian hierarchical model? Bayesian ANOVA?
This is my first Bayesian model, and I would love some pointers on how to approach inference when the data can be grouped into categories after the model has already been specified.
I understand that the more principled route would be to re-specify the model to include $\mathbf{y}$ and estimate $\mathbf{f_{x,y,s}}$. But is this absolutely necessary?
To ground the example, suppose $\mathbf{s}$ is a binary choice variable (to buy or not to buy), $\mathbf{x}$ is the price of an asset, and $\mathbf{y}$ is a demographic category like income or age.
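To make what I mean by "grouping and re-estimating" executable, here is a NumPy-only toy version of that setup (the parameter values, and the crude EM fit standing in for my Bayesian estimation, are placeholders, not my actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_group(n, p_buy, mu_buy, mu_nobuy, sigma=1.0):
    """Simulate (x, s) for one demographic group:
    s ~ Bernoulli(p_buy), x | s ~ Normal(mu_s, sigma)."""
    s = rng.random(n) < p_buy
    x = np.where(s, rng.normal(mu_buy, sigma, n), rng.normal(mu_nobuy, sigma, n))
    return x, s

def em_mixture(x, n_iter=200):
    """Crude EM for a two-component Gaussian mixture with known sigma = 1,
    standing in for the real estimation. Returns (weight, mu0, mu1)."""
    p, mu0, mu1 = 0.5, x.min(), x.max()
    for _ in range(n_iter):
        # E-step: responsibility of the "buy" component for each x
        d1 = p * np.exp(-0.5 * (x - mu1) ** 2)
        d0 = (1 - p) * np.exp(-0.5 * (x - mu0) ** 2)
        r = d1 / (d0 + d1)
        # M-step: update weight and component means
        p = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu0 = ((1 - r) * x).sum() / (1 - r).sum()
    return p, mu0, mu1

# Two income groups with different purchase probabilities
x_low, _ = simulate_group(5000, p_buy=0.2, mu_buy=3.0, mu_nobuy=0.0)
x_high, _ = simulate_group(5000, p_buy=0.7, mu_buy=3.0, mu_nobuy=0.0)

# Re-estimating separately per group recovers group-specific parameters,
# i.e. something like f_{x,s|y}
print(em_mixture(x_low))   # weight near 0.2, means near 0 and 3
print(em_mixture(x_high))  # weight near 0.7, means near 0 and 3
```

The idea is that the per-group fits differ only because the groups differ, which is what I hope the split-and-refit approach is capturing.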