Bayes' theorem expansion

I've been reading the paper by Amini, Alexander, et al., and in Eq. 6 the authors show the following transition, which I believe uses Bayes' theorem: $$ p(y_i \mid m) = \frac{p(y_i \mid \theta, m)\, p(\theta \mid m)}{p(\theta \mid y_i, m)} $$ where $y_i$ is an observation (the label, in the machine-learning context), $m$ the evidential distribution parameters, and $\theta$ the likelihood parameters.

My question

I can't seem to work out how this transition holds... Could someone help?

Reference

Amini, Alexander, et al. "Deep evidential regression." Advances in Neural Information Processing Systems 33 (2020): 14927-14937.

2 Answers

Answer 1

It is because $$p(y_{i}\mid \theta, m) = \dfrac{p(\theta \mid y_{i},m)\,p(y_{i}\mid m)}{p(\theta \mid m)}$$ by Bayes' theorem: recall that $P(A\mid B) = \frac{P(B\mid A)\,P(A)}{P(B)}$, and this also holds when everything is additionally conditioned on $C$, i.e. $P(A\mid B, C) = \frac{P(B\mid A, C)\,P(A\mid C)}{P(B\mid C)}$. Multiplying both sides by $\frac{p(\theta\mid m)}{p(\theta\mid y_{i},m)}$ and solving for $p(y_{i}\mid m)$ gives the equation you want.
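
As a quick numerical sanity check (illustrative only, not from the paper; the toy joint table and all variable names below are my own), you can build a discrete joint distribution over $(\theta, y_i)$ for a fixed $m$, read off the marginals and conditionals, and confirm that the identity holds entrywise:

```python
import numpy as np

# Toy joint p(theta, y_i) for one fixed m (illustrative values, not from the paper).
rng = np.random.default_rng(0)
joint = rng.random((3, 4))     # rows index theta values, columns index y_i values
joint /= joint.sum()           # normalize into a probability table

p_theta = joint.sum(axis=1)                 # p(theta | m)
p_y = joint.sum(axis=0)                     # p(y_i | m)
p_y_given_theta = joint / p_theta[:, None]  # p(y_i | theta, m)
p_theta_given_y = joint / p_y[None, :]      # p(theta | y_i, m)

# Check p(y_i | m) = p(y_i | theta, m) p(theta | m) / p(theta | y_i, m)
# for every (theta, y_i) pair simultaneously:
rhs = p_y_given_theta * p_theta[:, None] / p_theta_given_y
assert np.allclose(rhs, np.broadcast_to(p_y, joint.shape))
```

Note that the right-hand side comes out the same for every value of $\theta$: the identity holds for any $\theta$, which is exactly why it can be used to eliminate $\theta$ from the marginal likelihood.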

Answer 2

Since everything is conditioned on $m$, we can drop it from the notation.

Now, note that $$ p(y_{i},\theta)=p(y_{i}\mid\theta)\,p(\theta) $$ and $$ p(y_{i},\theta)=p(\theta\mid y_{i})\,p(y_{i}). $$ Dividing the first equation by the second, we get $$ 1=\frac{p(y_{i}\mid\theta)\,p(\theta)}{p(\theta\mid y_{i})\,p(y_{i})}. $$ Multiplying both sides by $p(y_{i})$ gives $$ p(y_{i})=\frac{p(y_{i}\mid\theta)\,p(\theta)}{p(\theta\mid y_{i})}, $$ which, after restoring the conditioning on $m$, is exactly Eq. 6 of the paper.