One often reads that Bayesian inference, i.e. using Bayes' theorem to compute the posterior
$p(\theta|x)$, with $\theta$ a continuous random variable (and possibly $x$ too),
can be intractable, i.e. not computable, because of the integral in the denominator.
However, it seems to me that if the numerator is computable, then we can evaluate it at many values of $\theta$, e.g. on a "grid approximation", and then divide by the resulting (scaled) sum to normalize (at least when $\theta$ is low-dimensional?).
That is, the whole (continuous) Bayes theorem:
$p(\theta|x)=\frac{p(x|\theta)\,p(\theta)}{\int_{\Theta} p(x|u)\,p(u)\,du}$
where $\theta \in \Theta$.
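For concreteness, here is a minimal sketch of the grid approximation I have in mind (using a hypothetical Bernoulli likelihood with a uniform prior, chosen only for illustration):

```python
import numpy as np

# Hypothetical example: Bernoulli likelihood, Uniform(0, 1) prior on theta.
# Observed data: 7 successes out of 10 trials.
successes, trials = 7, 10

theta = np.linspace(0.001, 0.999, 1000)   # grid over Theta = (0, 1)
prior = np.ones_like(theta)               # uniform prior density
likelihood = theta**successes * (1 - theta)**(trials - successes)

unnormalized = likelihood * prior         # the (computable) numerator
dtheta = theta[1] - theta[0]
# Riemann-sum approximation of the normalizing integral (the "evidence")
evidence = np.sum(unnormalized) * dtheta
posterior = unnormalized / evidence       # approximate posterior density on the grid

# sanity check: the approximate density should integrate to ~1
print(np.sum(posterior) * dtheta)
```

So numerically, normalizing seems to just be "sum over the grid and divide", which is what makes me wonder where the intractability really comes from.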
Or are we talking only about computing the analytical form of the integral, i.e. finding an antiderivative to get a closed form?
Sometimes, e.g. for MLE or MAP, one can ignore the denominator and work with just the likelihood (or the prior times the likelihood), e.g. as a product, or as a sum of logs, the log-likelihood... But I guess the problem is that if we want a distribution, i.e. the posterior distribution rather than a point estimate like the MLE or MAP, then we would need to evaluate the numerator at infinitely many values of the continuous $\theta$ and then (if we could do that, which of course we can't) sum and divide by that sum, which is of course the integral... Is that an "intuitive" way to say it?
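To illustrate what I mean by "ignoring the denominator" for a point estimate (hypothetical example again, same Bernoulli data but now with a Beta(2, 2) prior):

```python
import numpy as np

# MAP from the *unnormalized* log-posterior: the evidence term is a constant
# in theta, so it does not affect the argmax and can simply be dropped.
successes, trials = 7, 10
theta = np.linspace(0.001, 0.999, 1000)

log_likelihood = successes * np.log(theta) + (trials - successes) * np.log1p(-theta)
log_prior = np.log(theta) + np.log1p(-theta)     # Beta(2, 2), up to an additive constant
log_unnormalized = log_likelihood + log_prior    # log numerator only

theta_map = theta[np.argmax(log_unnormalized)]   # point estimate, no integral needed
```

So for the point estimate no integral is ever needed, whereas reporting the full posterior density (with correct absolute heights) is exactly where the normalizing integral comes back in.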
I would just like someone to validate and/or reformulate this statement, and possibly develop it a bit.