There is a problem which I solved wrong. I know the correct solution from the book, but I cannot find the gap in my reasoning.
The problem is: three fair dice are rolled at the same time. What is the probability of getting at least one "1", given that at least one die shows a "6"?
The solution in the book is quite complicated, with a result of 30/91.
My idea, before checking the book's solution, was: let's just throw out the die that rolled a "6"; now we have two dice, and we are looking for the probability of getting at least one "1" from them. This is fairly easy: 1 - (5/6)^2 = 11/36. WRONG.
Yes, I know this is wrong. I went as far as to write a program that simulated 10,000,000 rolls of three dice, and the relevant fraction came out very close to 30/91, the result in the book, not to my result.
But where did I go wrong?
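For reference, here is a minimal sketch of the kind of simulation described above (a reconstruction, not the original program; the function name and trial count are arbitrary):

```python
import random

def simulate(trials=1_000_000, seed=0):
    """Estimate P(at least one '1' | at least one '6') for three fair dice."""
    rng = random.Random(seed)
    has_six = 0   # rolls satisfying the condition: at least one 6
    has_both = 0  # rolls with at least one 6 AND at least one 1
    for _ in range(trials):
        roll = [rng.randint(1, 6) for _ in range(3)]
        if 6 in roll:
            has_six += 1
            if 1 in roll:
                has_both += 1
    return has_both / has_six

print(simulate())   # close to 30/91 ≈ 0.3297, not 11/36 ≈ 0.3056
```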
You computed the probability of at least one "1" when rolling 2 dice, and as such your calculation is correct. But the book's problem is different: you don't know *which* die shows the "6", and that is why the conditional information affects the answer.
I am not sure it is possible to answer your question more precisely than that. You assume that the conditional information is irrelevant and does not change the answer, but that is not so. The situation reminds me of the Monty Hall problem, where it is also not obvious that the conditional information matters, and you had better "shut up and calculate", as physicists say.
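If it helps to make "shut up and calculate" concrete, the sample space is small enough to enumerate exactly. A quick sketch (my own illustration, not from the book):

```python
from fractions import Fraction
from itertools import product

# All 6^3 = 216 equally likely outcomes of three fair dice.
outcomes = list(product(range(1, 7), repeat=3))

cond = [r for r in outcomes if 6 in r]  # at least one "6": 216 - 5^3 = 91 outcomes
both = [r for r in cond if 1 in r]      # ...and at least one "1": 30 outcomes

print(Fraction(len(both), len(cond)))   # 30/91
```

Counting over the full unordered sample space is what makes this come out to 30/91 rather than 11/36: conditioning on "at least one 6" does not single out a particular die to discard.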