I have a Bayesian network like this:
B -> A <- F
and these are the values for A:
P(A=true | B=true, F=false) = 0.01
P(A=true | B=true, F=true) = 0.92
P(A=true | B=false, F=false) = 1.00
P(A=true | B=false, F=true) = 1.00
Now I want to sum out the variable B because I need the values P(A=true | F=true) and P(A=true | F=false). What I thought I had to do is sum the lines where the F values are the same, once with B=true and once with B=false: so lines 1 and 3 on the one side, and lines 2 and 4 on the other. My result would be:
P(A=true | F=true) = 1.92
P(A=true | F=false) = 1.01
I was quite sure this was right, but now I wonder why the sum is greater than 1. I thought probabilities have to be between 0 and 1? Am I doing something wrong, or where is my mistake?
Don't you need the prior probabilities of $B$ and $F$?
$P(A=t|F=t) = \frac{P(A=t,F=t)}{P(F=t)} = \frac{P(A=t,B=t,F=t)+P(A=t,B=f,F=t)}{P(F=t)}$
$ = \frac{P(A=t|B=t,F=t)\,P(B=t)\,P(F=t) + P(A=t|B=f,F=t)\,P(B=f)\,P(F=t)}{P(F=t)}$ (using that $B$ and $F$ are independent, since both are root nodes of the network)
$ = 0.92\,P(B=t) + 1 \cdot P(B=f)$
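To make this concrete, here is a minimal sketch of the marginalization in Python. The CPT entries are the ones from the question; the prior $P(B=t) = 0.3$ is a made-up value for illustration, since the question does not give one. Note that each conditional is *weighted* by the prior of B, rather than the entries being summed directly, which is why the result stays between 0 and 1.

```python
# P(A=true | B, F) from the question's CPT, keyed by (B, F)
cpt = {
    (True, False): 0.01,
    (True, True): 0.92,
    (False, False): 1.00,
    (False, True): 1.00,
}

P_B_TRUE = 0.3  # hypothetical prior P(B=true); not given in the question


def p_a_given_f(f, p_b=P_B_TRUE):
    """P(A=true | F=f), marginalizing B out with its prior.

    Assumes B and F are independent root nodes, so P(F) cancels
    and only the prior on B is needed.
    """
    return cpt[(True, f)] * p_b + cpt[(False, f)] * (1 - p_b)


print(p_a_given_f(True))   # 0.92 * 0.3 + 1.00 * 0.7 = 0.976
print(p_a_given_f(False))  # 0.01 * 0.3 + 1.00 * 0.7 = 0.703
```

Whatever prior you plug in, the result is a convex combination of the two CPT entries, so it can never exceed 1.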