Various organizations, such as DARPA, have claimed to create virtually "hack-proof" software systems using mathematical techniques such as formal verification. In short, the idea is to mathematically prove the correctness of an algorithm, thereby demonstrating that it can perform only the task it was designed to perform. Wikipedia gives a more precise summary:
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
The verification of these systems is done by providing a formal proof on an abstract mathematical model of the system, the correspondence between the mathematical model and the nature of the system being otherwise known by construction.
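To make the idea concrete, here is a toy sketch (not any organization's actual methodology) of what such a proof looks like in the Lean proof assistant: we write a small function, state a formal specification as a theorem, and prove it. The function name `myMax` and the property chosen are my own illustrative choices.

```lean
-- A toy function: return the larger of two natural numbers.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- A formal specification: the result is at least as large as the
-- first argument. The proof is checked by Lean's kernel, so the
-- property holds for *all* inputs, not just tested ones.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax
  split
  · assumption          -- case a ≤ b: the hypothesis closes the goal
  · exact Nat.le_refl a -- case ¬(a ≤ b): a ≤ a trivially
```

Real verified systems (e.g., the seL4 kernel) scale this same idea up: the "theorem" becomes a full behavioral specification, and the proof covers every execution path of the implementation.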
Putting aside the engineering and practical challenges: is it mathematically possible to create a "logic system" (e.g., software) on the order of complexity of today's technology that is provably "hack-proof"* (through formal verification or other means)?
What does mathematics tell us about this?
I'm familiar with the terms and concepts of mathematical logic and proofs, and if the discussion requires or would benefit from "technical" details, I'm happy to spend the time necessary to understand a less superficial answer.
*I know that "hack-proof" is a bit vague. We'll agree that "hack-proof" means that a logic system (algorithm, etc.) can and will do ONLY the task it was designed to do, and will do so precisely as intended.