Confused about Bayesian parameter estimation vs. Maximum Likelihood Estimation


I'm confused by an example comparing MLE and Bayesian estimation.

It's about coin tossing: we toss a coin n times, and the probability of observing heads is theta, which is an unknown parameter in this case.

The textbook denotes x as:

1 (heads), 0 (tails)

and D as the sample set of all the x's defined above. Then with MLE:

[image: MLE derivation]

but with Bayesian estimation on the same problem (the textbook takes the prior p(theta) = theta(1 - theta)):

[image: Bayesian derivation]

How is the solution in the Bayesian method possible, given that theta_hat comes out as a fixed value? What I learned was that the estimated parameter theta_hat is a fixed value in MLE, but that in Bayesian estimation the parameter is treated as a random variable, not a fixed value. I also don't understand why the Bayesian estimate is derived by setting the gradient equation to zero. How is this possible?
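For concreteness, here is a minimal numerical sketch of the two point estimates. It assumes the textbook's Bayesian step is the MAP (maximum a posteriori) estimate under the stated prior p(theta) = theta(1 - theta), i.e. a Beta(2,2) prior up to normalization; that is the estimate you get by setting the gradient of the log-posterior to zero:

```python
# Hypothetical sketch: MLE vs. MAP point estimates for coin tossing,
# assuming the prior p(theta) = theta * (1 - theta) (Beta(2,2) up to a constant).

def mle(samples):
    # MLE for a Bernoulli parameter: theta_hat = k / n, the sample mean.
    return sum(samples) / len(samples)

def map_estimate(samples):
    # Posterior is proportional to theta^(k+1) * (1 - theta)^(n-k+1);
    # setting the gradient of the log-posterior to zero gives (k+1)/(n+2).
    n, k = len(samples), sum(samples)
    return (k + 1) / (n + 2)

D = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical data: 7 heads in 10 tosses
print(mle(D))           # 7/10 = 0.7
print(map_estimate(D))  # 8/12 ~ 0.667
```

Note that both of these are single numbers, which is exactly the source of the question: the MAP estimate summarizes the posterior distribution by its mode rather than reporting the whole distribution.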