Currently reading through Pearl [1988]. Here is the Bayesian network shown in the book:
Where the probability vector of each $X$ node is $(q_i, p_i)$, with $q_i$ the probability of the input $X_i$ being off (so $q_i = 1 - p_i$). I know that the lambda message from a node $X$ to one of its parents $u_i$, where the other parents are $u_k$ for $k \neq i$, is computed as
$$\lambda_X(u_i) = \beta \sum_x \lambda(x) \sum_{u_k : k \neq i} P(x \mid u_1, \dots, u_n) \prod_{k \neq i} \pi_X(u_k).$$
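To sanity-check how the formula behaves numerically, here is a minimal Python sketch of the lambda-message computation for binary variables. The `lambda_message` helper, the deterministic-OR CPT, and the uniform $\pi$ messages in the demo are all made up for illustration; they are not the network from the book.

```python
import itertools
import numpy as np

def lambda_message(cpt, lam_x, pi_msgs, i):
    """Unnormalized-then-normalized lambda message from binary X to parent i.

    cpt:     array of shape (2,)*n + (2,), cpt[u1, ..., un, x] = P(x | u).
    lam_x:   lambda(x), a length-2 vector.
    pi_msgs: list of n pi_X(u_k) vectors, one per parent.
    i:       index of the parent receiving the message.
    """
    n = len(pi_msgs)
    msg = np.zeros(2)
    for ui in range(2):  # value of the target parent u_i
        total = 0.0
        # sum over configurations of the OTHER parents only
        for other in itertools.product(range(2), repeat=n - 1):
            u = list(other[:i]) + [ui] + list(other[i:])
            weight = 1.0
            for k in range(n):
                if k != i:           # u_i itself carries no pi weight
                    weight *= pi_msgs[k][u[k]]
            for x in range(2):
                total += lam_x[x] * cpt[tuple(u) + (x,)] * weight
        msg[ui] = total
    return msg / msg.sum()           # beta normalizes the message

# Hypothetical demo: X = OR(u1, u2), X observed on, uniform pi messages.
cpt = np.zeros((2, 2, 2))
for u1 in range(2):
    for u2 in range(2):
        cpt[u1, u2, 1 if (u1 or u2) else 0] = 1.0
half = np.array([0.5, 0.5])
print(lambda_message(cpt, np.array([0.0, 1.0]), [half, half], 0))
# -> [1/3, 2/3]: observing X on makes u1 = 1 twice as likely as u1 = 0
```

The key point the sketch encodes is that the inner sum ranges over the *other* parents' configurations, and the product of $\pi$ messages likewise excludes the target parent $u_i$.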
In the network above, when the evidence $Y_0 = 1$, $X_2 = 1$, $Y_3 = 0$ is instantiated, the book gives the lambda messages sent to the nodes $Y_2$ and $X_3$ as $\lambda_{y_3}(y_2) = (1, q_3)$ and $\lambda_{y_3}(x_3) = (1, 1 - p_1 p_2)$. I cannot figure out how these were obtained, since for $\lambda_{y_3}(x_3)$ I get
$$\lambda_{y_3}(x_3) = \lambda(y_3) \sum P(y_3 \mid y_2, x_3)\, \pi_{y_3}(y_2) = (1,1)\big((1,0)+(1,0)+(1,0)+(0,1)\big)\big(1 - p_1 p_2,\; p_1 p_2\big) = \big(3(1 - p_1 p_2),\; p_1 p_2\big)$$
which is obviously not correct. What am I messing up here?