I'm currently reading "Labyrinths of Reason" by William Poundstone. My experience and ability when it comes to mathematics and formal logic is pretty limited, but the book seems to imply the following:
A) There is an algorithm that can determine whether or not a set of statements in a formal language is consistent.
B) In order to show that a statement (X) can be deduced from a given set of statements (S), it is sufficient to show that X is consistent with S, and that 'not X' is not consistent with S.
If these two are true, it would seem that there must be a solution to Hilbert's "decision problem", which is impossible.
I'm sure the problem is with my interpretation of the book and/or my reasoning, but I haven't been able to find it. Are A and/or B in fact false?
edit: The book in question contains a chapter called "a computer as big as the universe" which claims that if such a computer (with components the size of protons, switching times equal to the time it takes light to cross a proton's diameter, etc.) started adding "beliefs" to a list one by one and checking for contradictions along the way, it would only be able to verify the consistency of around 558 beliefs in the span of a few hundred billion years. It's mentioned that "for the computer's benefit" the beliefs are encoded as logical statements about boolean unknowns.
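To make the book's setup concrete, here is a minimal sketch of that consistency check, with the beliefs encoded as Python predicates over a dictionary of boolean unknowns (my encoding, not the book's). The point is the cost: with n unknowns there are 2**n assignments to try, and 2**558 is roughly 10**168 — far beyond even a universe-sized computer.

```python
from itertools import product

def consistent(beliefs, names):
    """Brute-force consistency check: try every truth assignment
    to the boolean unknowns and see if one satisfies all beliefs.
    With n unknowns this inspects up to 2**n assignments."""
    for values in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(belief(assignment) for belief in beliefs):
            return True   # found a model: the set is consistent
    return False          # no assignment works: contradictory

# Example: {p or q, not p} is consistent (take p=False, q=True),
# but adding "not q" makes the set contradictory.
beliefs = [lambda a: a["p"] or a["q"], lambda a: not a["p"]]
print(consistent(beliefs, ["p", "q"]))                           # True
print(consistent(beliefs + [lambda a: not a["q"]], ["p", "q"]))  # False
```

Note that for purely boolean beliefs this problem is decidable (the loop always terminates) — it is just exponentially expensive, which is the book's point.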
B is true ... though the part about showing that X is consistent with S is unnecessary: as long as you can show that 'not X' is inconsistent with S, then X logically follows from S. (If S itself happens to be inconsistent, everything follows from it anyway, so the conclusion still holds, if vacuously.)
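A small propositional sketch of that reduction, again assuming my own encoding of statements as Python predicates over boolean unknowns: X follows from S exactly when S together with 'not X' has no satisfying assignment.

```python
from itertools import product

def satisfiable(statements, names):
    """True iff some truth assignment makes every statement true."""
    return any(
        all(s(dict(zip(names, vals))) for s in statements)
        for vals in product([False, True], repeat=len(names))
    )

def follows(S, X, names):
    """X follows from S iff S together with 'not X' is unsatisfiable."""
    return not satisfiable(S + [lambda a: not X(a)], names)

# Example: from {p -> q, p} we can deduce q, but not 'not q'.
S = [lambda a: (not a["p"]) or a["q"], lambda a: a["p"]]
print(follows(S, lambda a: a["q"], ["p", "q"]))      # True
print(follows(S, lambda a: not a["q"], ["p", "q"]))  # False
```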
A is false ... assuming we are talking about first-order logic, there is no such algorithm. There is, however, an algorithm that can always detect that a set of first-order sentences is inconsistent (which, combined with B, means the algorithm can verify that a first-order statement X logically follows from a set S of first-order statements), but this algorithm cannot recognize every consistent set of statements as consistent: on some of those consistent sets it will run forever, so you will never learn that the set is consistent. Thus we say that first-order logic is 'semi-decidable' ... but it is not (fully) decidable.
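To illustrate the shape of such a semi-decision procedure with a toy, here is resolution refutation for propositional clauses (my simplification — the discussion above is about first-order logic). The saturation loop below always terminates in the propositional case, because only finitely many clauses can be formed from a finite vocabulary; in first-order logic the analogous saturation can keep producing genuinely new clauses forever, which is exactly why a consistent first-order set may never be confirmed consistent.

```python
def resolve(c1, c2):
    """All resolvents of two clauses.  A clause is a frozenset of
    literals; a literal is a (name, polarity) pair."""
    out = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            out.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return out

def refutes(clauses):
    """Saturate the clause set under resolution.  Deriving the
    empty clause proves inconsistency; saturating without it
    proves consistency (in the propositional case only)."""
    known = set(clauses)
    while True:
        new = set()
        for c1 in known:
            for c2 in known:
                for r in resolve(c1, c2):
                    if len(r) == 0:
                        return True   # derived a contradiction
                    if r not in known:
                        new.add(r)
        if not new:
            return False              # saturated: consistent
        known |= new

# {p, not p or q, not q} is inconsistent; drop 'not q' and it isn't.
p, np = ("p", True), ("p", False)
q, nq = ("q", True), ("q", False)
print(refutes([frozenset([p]), frozenset([np, q]), frozenset([nq])]))  # True
print(refutes([frozenset([p]), frozenset([np, q])]))                   # False
```

In the first-order setting, picture the same loop but with unification generating ever-larger clauses: the "return True" branch is still reachable whenever the set is inconsistent, but the "return False" branch may never be reached.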