Inference in probabilistic graphical models (Bayesian networks)


I've been given a practice final exam that uses a network from CMU and asks me to determine some probabilities; the network diagram and conditional probability tables were provided as images.

I've been able to solve all of the questions so far with the exception of two:

$P(C=1 | R=1, H=1, M=0)$

$P(M=1 | H=1, C=0)$


When I first look at these, I see from the graph that $C$ and $M$ have no relevant parents ($M$ depends on $V$, but $V$ does not appear in the question), so I keep thinking the answers are just $P(C=1)$ and $P(M=1)$. I feel like I'm missing something and would appreciate a push in the right direction! I assume I'll need Bayes' rule, the law of total probability, the definition of conditional probability, or some combination of these.


edit: The graphical model contains conditional independence relationships. The assumptions needed to answer the questions are:

$P(h|r,c,m) = P(h|c,m)$

$P(m|c) = P(m)$

$P(c|m) = P(c)$
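Putting these independences together with Bayes' rule gives, for example, $P(C=1 \mid R=1, H=1, M=0) \propto P(C=1)\,P(R=1 \mid C=1)\,P(H=1 \mid C=1, M=0)$. A minimal numeric sketch of that computation follows; all CPT values are hypothetical placeholders (the exam's actual tables are in the images), and $R$ is assumed to depend only on $C$, consistent with the listed independences:

```python
# Hypothetical CPTs -- substitute the values from the exam's tables.
# Factorization assumed from the stated independences:
#   P(c, m, r, h) = P(c) P(m) P(r | c) P(h | c, m)
P_C = {1: 0.3, 0: 0.7}                      # P(C = c)
P_M = {1: 0.2, 0: 0.8}                      # P(M = m)
P_R_given_C = {1: {1: 0.9, 0: 0.1},         # P(R = r | C = c),
               0: {1: 0.2, 0: 0.8}}         # indexed as [c][r]
P_H_given_CM = {(1, 1): {1: 0.9, 0: 0.1},   # P(H = h | C = c, M = m),
                (1, 0): {1: 0.8, 0: 0.2},   # indexed as [(c, m)][h]
                (0, 1): {1: 0.7, 0: 0.3},
                (0, 0): {1: 0.1, 0: 0.9}}

def posterior_C(r, h, m):
    """P(C = 1 | R = r, H = h, M = m) by Bayes' rule.

    Factors not involving C (here P(M = m)) cancel in the
    normalization, so only P(c) P(r | c) P(h | c, m) matters.
    """
    unnorm = {c: P_C[c] * P_R_given_C[c][r] * P_H_given_CM[(c, m)][h]
              for c in (0, 1)}
    return unnorm[1] / (unnorm[0] + unnorm[1])

print(posterior_C(1, 1, 0))  # P(C = 1 | R = 1, H = 1, M = 0)
```

Note that conditioning on the children $R$ and $H$ shifts the posterior on $C$ away from the prior $P(C=1)$, which is exactly why the answer is not simply $P(C=1)$.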

There is 1 answer below.

Do you know about message passing in Bayesian networks? The fact that $C$ has no parents doesn't mean you can't reason about it: $C$ is a parent of $R$ and $H$, which is what the arrows represent. The query $P(C=1 \mid \ldots)$ says you have an observation of $R$, $H$, and $M$ and want the probability that $C=1$ given that observation. Observing a node's children (here $R$ and $H$) updates your belief about the node itself, so the answer is generally not the prior $P(C=1)$.

How to compute this depends on the message-passing scheme you are using; it usually involves first distributing information from the observed nodes to all unknown nodes, then unifying beliefs across the network. Which message-passing scheme are you using? Knowing that will help here.
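That said, a network this small can also be answered by enumerating the joint distribution directly, without any message-passing machinery. A sketch in Python, again with hypothetical placeholder CPTs, assuming the factorization $P(c)\,P(m)\,P(r \mid c)\,P(h \mid c, m)$ implied by the independences stated in the question's edit:

```python
from itertools import product

# Hypothetical placeholder CPTs -- use the exam's tables instead.
P_C = {1: 0.3, 0: 0.7}
P_M = {1: 0.2, 0: 0.8}
P_R_given_C = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.2, 0: 0.8}}
P_H_given_CM = {(1, 1): {1: 0.9, 0: 0.1}, (1, 0): {1: 0.8, 0: 0.2},
                (0, 1): {1: 0.7, 0: 0.3}, (0, 0): {1: 0.1, 0: 0.9}}

def joint(assign):
    """Joint probability under P(c) P(m) P(r | c) P(h | c, m)."""
    c, m, r, h = assign["C"], assign["M"], assign["R"], assign["H"]
    return P_C[c] * P_M[m] * P_R_given_C[c][r] * P_H_given_CM[(c, m)][h]

def query(var, val, evidence):
    """P(var = val | evidence) by summing the full joint over all
    assignments consistent with the evidence (hidden vars sum out)."""
    names = ["C", "M", "R", "H"]
    num = den = 0.0
    for values in product((0, 1), repeat=len(names)):
        assign = dict(zip(names, values))
        if any(assign[k] != v for k, v in evidence.items()):
            continue
        p = joint(assign)
        den += p
        if assign[var] == val:
            num += p
    return num / den

print(query("M", 1, {"H": 1, "C": 0}))  # P(M = 1 | H = 1, C = 0)
```

Enumeration is exponential in the number of variables, which is exactly what message-passing schemes avoid, but it is a useful sanity check on a four-variable exam problem.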