Bayes estimator from Beta prior Geometric data


Let $X$ be a random variable with geometric distribution $f_{\theta}(x)=(1- \theta)\cdot \theta^x$ for $x=0,1,\dots .\;$

Find the Bayes estimator of $\theta$ based on the observation $X=0$, that is, $E(\Theta|X=0)$.

My problem is with the prior distribution, which is $\pi(\theta)=3\theta^2,\;\theta\in(0,1).$ Why is that?

Best answer

The prior distribution $\theta \sim \mathsf{Beta}(3,1)$ with density function $\pi(\theta) = 3\theta^2 = 3\theta^{3-1}(1-\theta)^{1-1},$ for $0 < \theta < 1,$ has mean $E(\theta) = 3/(3+1) = 3/4$ and places probability 0.875 in $(.5, 1).$ You can learn more about beta distributions from your text or from the Wikipedia article. The probability 0.875 can be obtained by integration of the density function or from software. The computation in R statistical software is shown below.

1-pbeta(.5, 3, 1)
## 0.875

Thus you begin this inferential procedure with the prior information, or personal belief, that $\theta,$ which must lie in $(0,1),$ is noticeably greater than $1/2.$ All Bayesian inferences begin with a prior distribution.
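As a numerical check (a minimal sketch using only base R), the prior mean $3/4$ and the tail probability $0.875$ quoted above can also be obtained by integrating the prior density directly, without calling `pbeta`:

```r
# Prior density pi(theta) = 3*theta^2 on (0, 1), i.e. Beta(3, 1)
prior <- function(t) 3 * t^2

# Prior mean E(theta) = integral of t * pi(t) dt over (0, 1) = 3/4
integrate(function(t) t * prior(t), 0, 1)$value   # 0.75

# P(theta > 0.5) = integral of pi(t) over (0.5, 1) = 1 - 0.5^3 = 0.875
integrate(prior, 0.5, 1)$value                    # 0.875
```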

The geometric likelihood function corresponding to the observation $x = 0$ is $\pi(x|\theta) = (1 - \theta)\theta^x,$ which gives $\pi(0|\theta) = 1-\theta.$

By Bayes' Theorem, the posterior distribution is $$\pi(\theta|x) \propto \pi(\theta)\,\pi(x|\theta) = 3\theta^2 \times (1 - \theta) \propto \theta^2(1-\theta) = \theta^{3-1}(1-\theta)^{2-1},$$ where the symbol $\propto$ (read 'proportional to') indicates that we are working with the kernels of the prior and posterior density functions, omitting the constants, which are not needed to identify the posterior.
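For completeness, the omitted normalizing constant can be recovered explicitly by integrating the posterior kernel:

$$\int_0^1 \theta^2(1-\theta)\,d\theta = \frac{1}{3} - \frac{1}{4} = \frac{1}{12},
\qquad\text{so}\qquad
\pi(\theta|0) = 12\,\theta^2(1-\theta), \quad 0 < \theta < 1,$$

which is exactly the $\mathsf{Beta}(3,2)$ density.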

We recognize the posterior kernel as that of $\mathsf{Beta}(3,2),$ which has mean $3/(3+2) = 0.6.$ Thus $E(\Theta|X=0) = 0.6.$ The central idea of Bayesian estimation is that the information in the prior distribution and the data are combined to yield a posterior distribution. In this case the observation $X=0$ has pulled the estimate down from the prior mean 0.75 to the posterior mean 0.6.
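Since the $\mathsf{Beta}(3,2)$ density is $12\,\theta^2(1-\theta),$ the posterior mean can also be verified by direct integration:

$$E(\Theta|X=0) = \int_0^1 \theta \cdot 12\,\theta^2(1-\theta)\,d\theta
= 12\left(\frac{1}{4} - \frac{1}{5}\right) = \frac{12}{20} = 0.6.$$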

This easy identification of the posterior distribution is possible because the prior distribution and the likelihood function are conjugate (that is, 'mathematically compatible'), so that the kernel of the posterior density is easily recognized.

Note: A 95% Bayesian posterior interval estimate of $\Theta$ is $(.194, .932).$

qbeta(c(.025,.975), 3, 2)
## 0.1941204 0.9324140
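As a final sanity check (a simulation sketch; the seed and sample size are arbitrary), the posterior mean and the interval endpoints can be approximated by sampling from the $\mathsf{Beta}(3,2)$ posterior:

```r
set.seed(2023)                  # arbitrary seed, for reproducibility
th <- rbeta(10^6, 3, 2)         # a million draws from the posterior
mean(th)                        # approximately 0.6
quantile(th, c(.025, .975))     # approximately 0.194 and 0.932
```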