After reading the very interesting question Examples of mathematical discoveries which were kept a secret, I came to think of something: most math discoveries seem to have been made centuries ago, and with Pascal and Leibniz we already had machines that could add and multiply (some consider them the first computers). Of course, modern computers needed electricity and electric signals that can be converted into 1's and 0's, but we have had those since the 19th century. The first computers were built with vacuum tubes; transistors, which later replaced them, have existed since the late 1940's.
Modern computers are, at their core, not so different from the first machines that followed the von Neumann architecture (with its arithmetic-logic unit). That is: they know how to move data from one place to another, how to add and how to multiply, and how to perform logical operations (AND, OR, XOR, NOT), and from these they derive all the other operations (some can subtract, divide and do a few other things directly, but it is all very primitive in that sense). Of course, modern computers work at a much higher level of abstraction than their predecessors: while in the beginning programming had to be done in commands composed of 0's and 1's or, if the programmer was lucky, in assembly language, we now have high-level programming languages that (roughly speaking) bundle many of those primitive commands into a single high-level one.
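To illustrate the point above, here is a small sketch (my own illustration, not part of the original question) of how "subtract" and "multiply" can be derived from an even smaller set of primitives: addition, bitwise logic, and shifts, working in fixed-width arithmetic the way a tiny ALU would.

```python
BITS = 8
MASK = (1 << BITS) - 1  # work in 8-bit arithmetic, like a tiny ALU

def neg(x):
    # Two's-complement negation: invert all bits, then add 1.
    return ((x ^ MASK) + 1) & MASK

def subtract(a, b):
    # a - b is just a + (-b), so subtraction needs no new hardware.
    return (a + neg(b)) & MASK

def multiply(a, b):
    # Shift-and-add multiplication: for every set bit of b,
    # add a correspondingly shifted copy of a.
    result = 0
    while b:
        if b & 1:
            result = (result + a) & MASK
        a = (a << 1) & MASK
        b >>= 1
    return result

print(subtract(9, 4))   # 5
print(subtract(3, 5))   # 254, i.e. -2 in 8-bit two's complement
print(multiply(6, 7))   # 42
```

This is essentially what early ALUs did: the richer instruction sets of later machines are conveniences layered on top of the same handful of primitives.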
And when I see pretty graphics in video games, programs like Photoshop, 3D rendering, CAD programs and such, I always think of all the calculus I learned and how it must be applied to achieve those wonderful results.
And this is where my question arises: all of this mathematical knowledge has been available to us for far longer than computers have existed. So, are there any modern mathematical discoveries that enabled the giant leap from the first computers of the 40's to what we have now? Or did old math simply start being applied in a new way at some point?
The mathematical foundations of modern computer science began to be laid by Kurt Gödel with his incompleteness theorems (1931), in which he showed that there are limits to what can be proved or disproved within a formal system. This led Gödel and others to define and describe such formal systems, including concepts like the mu-recursive functions and the lambda-definable functions.
In 1936, Alan Turing and Alonzo Church independently introduced formalizations of the notion of an algorithm, with limits on what can be computed, and a "purely mechanical" model of computation. This became the Church–Turing thesis, a hypothesis about the nature of mechanical calculating devices such as electronic computers: any calculation that is at all possible can be performed by an algorithm running on a computer, provided that sufficient time and storage space are available.
Also in 1936, Turing published his seminal paper on Turing machines, abstract digital computing machines of which the most general is now simply called the universal Turing machine. This machine embodies the principle of the modern computer and is the origin of the stored-program concept that almost all modern computers use. These hypothetical machines were designed to determine formally, mathematically, what can be computed, taking into account the limitations of mechanical computation. A task that a Turing machine can carry out is called Turing computable; a system that can simulate any Turing machine is called Turing complete (the two terms are sometimes conflated, but they are not the same thing).
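A Turing machine is simple enough to simulate in a few lines. The sketch below (my own toy example, not Turing's notation) runs a transition table over an infinite tape; the particular table shown is a hypothetical machine that increments a binary number.

```python
def run(tape, rules, state="scan", blank="_", steps=1000):
    # The tape is a dict from position to symbol; unwritten cells are blank.
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        # Each rule maps (state, symbol) -> (symbol to write, move, next state).
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Increment a binary number: scan right to the end of the number,
# then add 1 with carry while moving back left.
rules = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # carried past the leftmost digit
}

print(run("1011", rules))  # 1100  (binary 11 + 1 = 12)
print(run("111", rules))   # 1000  (binary  7 + 1 =  8)
```

The remarkable part of Turing's result is that one fixed machine of this kind (the universal one) can read any such rule table from its own tape and simulate it, which is exactly the stored-program idea.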
Up to and during the 1930s, electrical engineers were able to build electronic circuits to solve mathematical and logic problems, but most did so in an ad hoc manner, lacking any theoretical rigor. This changed with Claude Elwood Shannon's publication of his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. While taking an undergraduate philosophy class, Shannon had been exposed to Boole's work, and recognized that it could be used to arrange electromechanical relays (then used in telephone routing switches) to solve logic problems. This idea, using the properties of electrical switches to implement logic, is the basic concept underlying all electronic digital computers, and his thesis became the foundation of practical digital circuit design once it became widely known in the electrical engineering community during and after World War II.
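Shannon's insight was that Boolean algebra describes switching circuits. As a small sketch of what that buys you, here is the standard textbook one-bit full adder (not taken from Shannon's thesis) expressed purely with AND, XOR, and OR, then chained into a 4-bit ripple-carry adder exactly as hardware does it:

```python
def full_adder(a, b, carry_in):
    # Sum and carry of three input bits, using only logic operations.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_4bit(x, y):
    # Chain four full adders, feeding each carry into the next stage.
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # wraps modulo 16, like a real 4-bit adder

print(add_4bit(0b0101, 0b0110))  # 11
print(add_4bit(0b1001, 0b1001))  # 2  (18 mod 16: the carry out is dropped)
```

Every gate here corresponds to an arrangement of relays or transistors; arithmetic falls out of logic, which is precisely the bridge Shannon built between Boole's algebra and physical circuits.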
Shannon went on to found the field of information theory with his paper titled A Mathematical Theory of Communication, which applied probability theory to the problem of how to best encode the information a sender wants to transmit. This work is one of the theoretical foundations for many areas of study, including data compression and cryptography.
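The central quantity of that paper is entropy, the average number of bits per symbol needed under an optimal encoding. A minimal illustration of Shannon's definition (my own example values):

```python
from math import log2

def entropy(probs):
    # Shannon entropy: H = -sum(p * log2(p)), in bits per symbol.
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin genuinely needs 1 bit per toss; a heavily biased coin
# needs far less, and that slack is what data compression exploits.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.9, 0.1]))   # ~0.469
```

The same quantity sets hard limits in both directions: no lossless code can beat the entropy rate, and no channel can reliably carry more than its capacity, which is why the paper underpins compression and communication alike.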