I am trying to find the posterior predictive distribution for a future observation under a Bayesian model, but I'm stuck on which prior to use and on how to combine it with the likelihood to obtain the posterior predictive distribution.
$D=\{x_{1},\dots,x_{n}\}$ is the data set
$a$ is the parameter
The data are modeled as i.i.d. Uniform$(-a, a)$, so the likelihood is:
$p(D|a) = (\frac{1}{2a})^{n} \times I( \ D \in [-a, a] \ )$
where $I(x)=1$ if $x$ is true and $I(x)=0$ if $x$ is false
and $D \in [-a, a]$ is true if all $\{x_{1}, ... , x_{n}\}$ are inside the interval $[-a, a]$
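To make this concrete, here is a small sketch of the likelihood as a function of $a$ (the data values below are made up for illustration). Note that $p(D \mid a)$ is zero until $a$ reaches $\max_i |x_i|$, and then decays like $a^{-n}$:

```python
import numpy as np

def likelihood(a, D):
    """p(D|a) = (1/(2a))^n * I(all x_i in [-a, a])."""
    D = np.asarray(D, dtype=float)
    if a <= 0 or np.max(np.abs(D)) > a:
        return 0.0  # indicator fails: some x_i lies outside [-a, a]
    return (1.0 / (2.0 * a)) ** len(D)

# Example data (hypothetical): max |x_i| = 1.2
D = [-0.4, 1.2, 0.7]
print(likelihood(1.0, D))  # 0.0, since 1.2 is outside [-1.0, 1.0]
print(likelihood(1.5, D))  # (1/3)^3
```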
Now to obtain the posterior distribution, we do the following:
$p(a)$ represents the prior, $p(D|a)$ represents the likelihood, and $p(a|D)$ represents the posterior, so...
$p(a|D) \propto p(D|a) \times p(a)$
$p(a|D) = \dfrac{p(D|a) \, p(a)}{\int_{0}^{\infty} p(D|a) \, p(a) \, da}$
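To make the normalization concrete, here is a numerical sketch of the denominator. The Exponential$(1)$ prior and the data values are my own hypothetical choices for illustration, not part of the question:

```python
import numpy as np
from scipy.integrate import quad

def likelihood(a, D):
    """p(D|a) = (1/(2a))^n * I(all x_i in [-a, a])."""
    D = np.asarray(D, dtype=float)
    if a <= 0 or np.max(np.abs(D)) > a:
        return 0.0
    return (1.0 / (2.0 * a)) ** len(D)

def prior(a):
    # Hypothetical prior for illustration: Exponential(1) on a > 0.
    return np.exp(-a)

D = [-0.4, 1.2, 0.7]          # made-up data
m = np.max(np.abs(D))         # likelihood support starts at max|x_i|

# Denominator of Bayes' rule; the integrand is zero below a = m,
# so we integrate from m to infinity.
Z, _ = quad(lambda a: likelihood(a, D) * prior(a), m, np.inf)
print(Z)  # a finite positive number

# Normalized posterior density evaluated at some a > m:
a0 = 2.0
print(likelihood(a0, D) * prior(a0) / Z)
```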
My question is this: the Uniform distribution works fine for the likelihood, since $D$ is held fixed and $p(D|a)$ is viewed as a function of $a$, but what distribution should we use for the prior $p(a)$? Under the prior, $a$ is no longer a fixed parameter but a random variable supported on $(0, \infty)$. My worry is that, for the density to integrate to $1$ over an infinite interval, it must be zero (or vanishingly close to zero) everywhere from $0$ to $\infty$, which would make the integral in the denominator evaluate to zero.
Am I wrong about this? If so, why? And if not, which distribution should we use for the prior?
Thanks in advance