Is computer science a branch of mathematics?


I have been wondering, is computer science a branch of mathematics? No one has ever adequately described it to me. It all seems very math-like to me. My second question is, are there any books about computer science/programming that are very rigorous and take an axiomatic approach? Basically, putting computer science and programming on a rigorous foundation.

There are 13 answers below.

Accepted answer (score 2)

I think your first question has been answered eloquently by others here. I'd just like to add a group of references for your second question. As a math major / CS minor, I was taught CS by people in "Dijkstra's school", which I would consider more rigorously grounded in mathematics than Knuth's. To get a taste of Dijkstra's ideas, you can read all his articles, letters etc., in all their glorious idiosyncrasy, at http://www.cs.utexas.edu/users/EWD/; a favourite of mine is The notational conventions I adopted, and why (EWD1300). An excellent introduction to his view of programming and computer science is:

Anne Kaldewaij, "Programming: The Derivation of Algorithms" (Prentice Hall, 1990).

This was the foundational book for me, really the only CS book of my first year at university. Kaldewaij is Dijkstra's academic grandchild, by the way.

I believe the following book may have similar content:

Answer (score 18)

I would say that computer science is a branch of mathematics.

Donald Knuth is a famous computer scientist and is also considered a great mathematician. He wrote a series of books called "The Art of Computer Programming" which is extremely rigorous and mathematical.

Edit:

To make my position clearer, since it is apparently controversial:

Almost all of computer science is about answering two questions: "What can a Turing machine do?" and "How do we make a Turing machine do X?". A Turing machine is an abstract mathematical object, and any question about Turing machines or their capabilities is mathematical in nature.
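To make "abstract mathematical object" concrete, here is a minimal sketch of a one-tape Turing machine simulator in Python. The rule format and the little bit-flipping machine are my own illustration, not any canonical encoding:

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    rule applies to the current (state, symbol) pair.
    """
    cells = dict(enumerate(tape))  # sparse tape; blank outside
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: halt
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips every bit and halts at the first blank:
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_turing_machine(flip, "10110"))  # -> 01001
```

Everything above is pure mathematics: a dictionary of transition rules and a partial function on tape configurations. The physical computer only animates it.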

Now, we have implemented Turing machines in various ways (e.g. your PC), but the details of the particular implementation are the subject of computer engineering, not computer science.

Answer (score 15)

Theoretical computer science could certainly be considered a branch of mathematics. This branch of computer science deals with computers and computer programs as mathematical objects. Theoretical computer scientists could be described as computer scientists who know little about computers.

However, when people say "computer science" they usually include many things which would not be considered mathematics, for instance computer architecture, specific programming languages, etc.

Answer (score 3)

Is Computer Science a branch of mathematics?

In my opinion it is not. It's true that computer science uses nearly every aspect of mathematics and depends heavily on logic. But the questions posed are quite different from the ones in mathematics.

In mathematics you normally try to understand the structure of objects and to categorize them; whether they are easy to compute is mostly irrelevant. For example, the determinant of an $n\times n$ matrix $A$ is given by the Leibniz formula $\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)}\cdots a_{n\sigma(n)}$. This formula is highly useful in mathematics, yet computing with it directly is practically impossible, since $|S_n| = n!$. It says a lot about the structure of matrices but nothing about how to compute the determinant efficiently.
So the main work of a mathematician is bending your brain until you have a proof of your claim.
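To make the contrast concrete, here is a Python sketch (my own illustration, not from the answer): the Leibniz formula above sums $n!$ signed terms, while Gaussian elimination computes the same number in $O(n^3)$ operations.

```python
from itertools import permutations

def sign(perm):
    # Parity of a permutation, computed by counting inversions.
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    # Direct evaluation of the Leibniz sum over S_n: n! terms,
    # structurally illuminating, computationally hopeless beyond small n.
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= a[i][perm[i]]
        total += term
    return total

def det_gauss(a):
    # Gaussian elimination with partial pivoting: O(n^3) operations.
    a = [row[:] for row in a]  # work on a copy
    n = len(a)
    det = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0:
            return 0.0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det  # a row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

m = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
print(det_leibniz(m), det_gauss(m))  # both 8
```

Both functions compute the same invariant; only one of them is an algorithm you would actually run.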

You encounter these kinds of problems in computer science too, but normally you want a solution that is fast and precise rather than perfect. Computer science also contains parts, like software engineering, which are not mathematical at all, or the theory behind building a computer, which belongs to electrical engineering. It also uses many parts of mathematics in various areas without really needing the small proofs, lemmas, and axioms you depend on in math.

For me it's like physics: all written down in the language of math, but knowing math alone will not get you there. You could equally ask whether chemistry is just part of physics because it relies heavily on it, or whether biology is part of chemistry because it in turn depends heavily on that.

But I guess there are a lot of different opinions. I have always felt that I have to bend my brain differently when I want to solve problems in computer science, and that a lot of mathematicians can't really program at all!

Is there a good axiomatic book?

I've never read it myself, but "The Art of Computer Programming" by Donald Knuth is often said to be very good, although it is quite old.

Answer (score 7)

It is not uncommon to hear ideas along the lines that

  • computer science is computer programming without practical constraints

  • theoretical computer science is computer science without physical constraints

  • mathematics is computer science without finiteness constraints

Each subject in the chain is seen as a limiting case of the one before, where some parameter describing constraints or resource limits goes to zero or infinity. From this point of view, mathematics is a degenerate special case of theoretical computer science, which is a degenerate case of computer science, which is a degenerate case of computer programming.

Answer (score 0)

I say that computer science is a branch of mathematics, but it has many branches connected to other sciences and fields of study as well. Many of the founding contributors to the field were mathematicians. The analogy of a musician who also plays sports has no bearing here, because the person's sports accomplishments aren't a by-product of their musical talents; computer science, by contrast, was carved directly out of mathematics.

Answer (score 1)

Computer Science touches a lot of different areas of study, and Mathematics is one of them.

I would probably reword the idea to say that Computer Science involves Mathematics. Computer math is a specialized field, because the way computers have been designed has created a focus on mathematics involving base 2, base 8, and other number systems, and a focus on algorithm design and the efficient use of algorithms.
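For instance, rewriting an integer in base 2 or base 8 is a one-loop algorithm; a minimal Python sketch (`to_base` is my own illustrative helper, not a standard function):

```python
def to_base(n, base):
    """Represent a non-negative integer in the given base (2 to 36)."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)   # peel off the least significant digit
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(200, 2), to_base(200, 8), to_base(200, 16))  # 11001000 310 c8
```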

The study of Cryptology is almost 100% mathematics.

I think of Computer Science as Mathematics with a whole bunch of Application and O/S development mixed in.

I think Computer Science is really a Branch of Engineering. Now whether Engineering is a branch of Mathematics, or whether Mathematics is a branch of Engineering will entirely depend on your perspective and your definitions of both. Because Mathematics is a system of tools, or an abstracted view of accounting for things, I would say Math is more of a branch of Language, and a tool for Engineering and Science.

Answer (score 11)

Let $C$ and $M$ be the set of all things which are considered "computer science" and "mathematics", respectively. If I understand you correctly, your question is: Is $C\subset M$?

If this is the case, your question is not well posed, because neither $C$ nor $M$ is well defined. How do you draw the line between what is math and what is not math without ambiguity? No reasonable answer can be given if you try to treat them as sets.

However, I do believe that $C\cap M\neq \varnothing$ because certain aspects of both "sets" are shared, as others have pointed out (e.g. logic, proofs, etc.)

Answer (score 0)

Any science, whether it is physics, computer science, or statistics, is driven forward by a combination of engineering "this usually works" and mathematics "this can be proved to be true under certain assumptions".

I would say there is mathematics in computer science, but computer science has elements which are not yet mathematics.

Answer (score 2)

That depends on whether you consider software engineering to be computer science. I don't. The theory of computation is absolutely a branch of mathematics, and one of the most difficult. Forget P vs. NP, we can't even decide the Collatz conjecture, which can be understood by the average third-grader in a minute or two.
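The Collatz map itself is trivial to write down; whether the loop below halts for every starting value is exactly the open question (a minimal Python sketch):

```python
def collatz_steps(n):
    # Iterate the Collatz map until n reaches 1, counting the steps.
    # That this loop terminates for every n >= 1 is the unproven conjecture.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # -> 111
```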

On the other hand, software engineering is applied psychology: how do we economically build and maintain complex software systems, given the limits of human intellect?

Answer (score 6)

If you create a Cayley graph of a group, is that mathematics? What if you simulate the orbit of an element of the group? Where do mathematical models cease to be mathematics? The natural numbers still behave the way they do in an accurate model of them even when the model isn't being used by any person. The sole purpose of a computer is to model mathematical objects.
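For instance, constructing the Cayley graph of a small group is a few lines of code, and the output is the same mathematical object whether or not anyone runs it. A minimal Python sketch for $S_3$ (the choice of generators is my own):

```python
from itertools import permutations

def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[i] for i in q)

def cayley_edges(elements, generators, op):
    # One directed edge g -> g*s for each element g and generator s.
    return {(g, op(g, s)) for g in elements for s in generators}

s3 = list(permutations(range(3)))
gens = [(1, 0, 2), (1, 2, 0)]           # a transposition and a 3-cycle
edges = cayley_edges(s3, gens, compose)
print(len(edges))                        # 12 edges: 6 elements x 2 generators
```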

The goal of computer science is to construct or describe a model of mathematical objects, even if there's no implementation of them on any computer or way to get the result that the program is to compute.


Algorithms, ironically, are the aspect of computer science that has the most influence on other fields of mathematics.

A simple case: if you determine that an algorithm's time complexity is too great to actually implement it, that is still a contribution to computer science. On the other hand, if you show that two problems are equivalent (say, hypothetically, the clique decision problem and factoring large numbers) and there is a lower-complexity algorithm for one of them, then there must be an algorithm of that same complexity for the other. The question is how much information about one problem can be extracted from the solutions, or exhaustive search, of another.

The time complexity of an algorithm is an invariant measure of it that gives you a feel for how hard a problem is. When two problems are not equally hard, complex, or time consuming, the harder one cannot be solved using just the easier one's solutions, unless those solutions are processed by an algorithm that closes the difference in complexity. Studying different problems and their time complexities, you see how the generality of a problem relates to the difficulty of solving it in general. It's also useful to see, for example, logic gates implemented as configurations in a game of Minesweeper, because it shows you what the properties of a Turing-complete system look and feel like.
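As a sketch of the brute-force side of that hypothetical pairing, here is a $k$-clique decision procedure in Python (the adjacency-matrix encoding is my own choice). Its cost, roughly $\binom{n}{k}$ subsets each checked in $O(k^2)$ time, is a theorem about the algorithm whether or not it is ever feasible to run:

```python
from itertools import combinations

def has_clique(adj, k):
    # Try every k-subset of vertices; accept if some subset is fully connected.
    vertices = range(len(adj))
    return any(
        all(adj[u][v] for u, v in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

# A 4-cycle has edges (2-cliques) but no triangle.
cycle4 = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]
print(has_clique(cycle4, 2), has_clique(cycle4, 3))  # True False
```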

However, these things can also be deceptive. Initially it might look like you must check every permutation of a type of object to find out which subset of those permutations satisfies a property, which indicates a hard problem if the number of permutations grows quickly. But the search might have a second stage where independent information saturates: the permutations tried no longer contribute just themselves to the solution, or else are all determined by the information already collected, like reaching the minimum of 3 points needed to determine a circle.

There are also theorems that characterize the type of data that are viable members of the search space. This is sort of a Mandelbrot program: use your eyes to see what solutions to the problem look like, find a way to enforce those characteristics, and show that they hold for all possible solutions.

A good example of this is projective planes. The incidence diagrams for finite planes don't have enough symmetry to decide whether even large groups of arrangements forming partial matches are viable pieces of the incidence diagram or not. This leads most algorithms to require runtimes orders of magnitude beyond the age of the universe to determine the projective planes of a given order, and even those that have managed to find a second stage require massive amounts of searching and weaving of the data together over years of actual runtime to come to a conclusion. The picture I'm painting here is not a success story; it reflects a real lack of understanding of what incidence diagrams for projective planes are like and what rules govern them.

For an understood object, there should be an algorithm from whose examination one can understand the difference between that object and pure noise, a strategy that works well enough to make guesses about the nature of the solutions. So are projective planes' incidence diagrams pure noise? Looking at the incidence diagrams for small planes, they look very distinctive, so the suggestion is that the typical plane looks nothing like any known plane. But in fact that overall pattern can be induced in a binary matrix and shown to hold for planes in general, which means it's possible, starting from that characterization, to greatly reduce the problem's search space. Hence, characterizing the search space of a hard problem is a lot like characterizing the object itself through invariant properties.

I would say that problems relate to their complexity in much the same way as the positions of elements of an infinite set of natural numbers relate to the set's density.


But what is an algorithm? What is a program?

I refer you to Wikipedia on Martin-Löf type theory and the Calculus of Constructions for some specific implementations of computation. For full coverage, see Practical Foundations for Programming Languages (Harper). For a treatment of domain theory, I refer you to Domain Theory in Logical Form (Abramsky).

One answer, given in domain theory by Scott domains, is that the logical structure of a program, as a space of properties and inferences, is like a lattice of subalgebras or normal subalgebras of an abstract algebra that need not exist as a head containing them all: a forked or flawed crystal converging toward the space where that head would be. To formulate recursive definitions of program logic is to find the fixed points of the endomorphisms of this order structure, which are equivalent to continuous self-maps of a topological space derived from this unfinished lattice. The Stone dual of this is the executed program operating according to this logic: a locale whose points are the algorithms. In actual domain theory you have to generalize this a bit, because Scott domains don't form a cartesian closed category, meaning they aren't enough like a topos for programs represented by Scott domains to be arbitrarily expressive.

All of this is a buried version of Lawvere's famous statement that there's a Galois connection between syntax and semantics. More specifically, between a theory and models of the theory, or between a logic and the space of computations it performs (denotational semantics).

On the other hand, there is the relationship between constructions of an object and proofs of a theorem. I don't know if this is the exact reasoning, but if propositions can be converted to existential, simultaneously holding properties, then in aggregate they describe a kind of object defined as the thing which simultaneously holds those properties, so that to prove those propositions true is to prove the type is occupied by some actual thing. The Curry-Howard correspondence is a proof that proofs are equivalent to programs or constructions, and more generally that intuitionistic logics correspond to typed lambda calculi, and thence to cartesian closed categories (which agrees with how, for logicians, a topos is just the category of sets for an intuitionistic logic). This affects language design quite a bit, because it provides a means of computing with proofs instead of algorithms. It is the basis for the philosophy of Homotopy Type Theory (the Univalent Foundations Program), as well as much of intuitionistic logic and computer science.
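A minimal sketch of the propositions-as-types reading, written with Python type hints (the names are my own; in a genuine proof assistant the type checker, not the reader, would certify these "proofs"):

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Reading: a value of type T is a proof of proposition T;
# "A implies B" is the function type A -> B, "A and B" is the pair type.

def modus_ponens(f: Callable[[A], B], a: A) -> B:
    # From proofs of A -> B and A, obtain a proof of B: apply the function.
    return f(a)

def and_intro(a: A, b: B) -> Tuple[A, B]:
    # A proof of "A and B" is literally a pair of proofs.
    return (a, b)

def implication_transitive(
    f: Callable[[A], B], g: Callable[[B], C]
) -> Callable[[A], C]:
    # (A -> B) and (B -> C) yield (A -> C): composition of proofs.
    return lambda a: g(f(a))
```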

There is some degree of interplay between linear logic (i.e. noncommutative geometry) and computer science here. Physics, Topology, Logic and Computation: A Rosetta Stone (Baez, Stay) shows that if you replace cartesian closed categories with closed monoidal categories, you can generalize the Curry-Howard isomorphism to get all sorts of wonderful quantum behavior and semantics. Stone duality features again in the study of Chu spaces, which are a model of linear logic that is ultimately pretty similar to domain theory in effect.

So, if all this applies to intuitionistic logic, what's classical logic? It turns out that certain implementations of continuation passing, like call/cc from Scheme, as well as introducing control-flow passing or procedural programming into a purely functional language like Haskell, amount to making the logic nonconstructive. The thing that makes Haskell so bothersome (many say outright useless) is that you can't make a program communicate with the outside world, or depend on outside communication such as runtime-dependent parameters, without breaking the format of the language to talk in procedures via the so-called IO monad, or else destroying all the built-in logic that verifies your program's behavior. So the lesson is that non-constructive mathematics is like an interactive program, and constructive mathematics is like a library.

Answer (score 2)

For a rigorous approach inspired by Bourbaki, you may be interested in Alexander Stepanov and Paul McJones's Elements of Programming (2009).

It works with a subset of C++ and proves various interesting theorems, though not every proposition is proved (e.g., proofs of the first two lemmas are omitted).

The authors apparently wanted to work in the "Axiomatic method", which seems to be the OP's interest...
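As a taste of that style: the book's signature early example develops exponentiation with respect to any associative operation, where associativity is precisely the axiom licensing the $O(\log n)$ regrouping. The book works in its C++ subset; the following is my own rough Python transliteration of the idea, not the authors' code:

```python
def power(x, n, op):
    # Compute x op x op ... op x (n copies, n >= 1) in O(log n) applications.
    # Associativity of op is the axiom that justifies the regrouping.
    result = None
    while n > 0:
        if n % 2 == 1:
            result = x if result is None else op(result, x)
        x = op(x, x)  # square
        n //= 2
    return result

print(power(2, 10, lambda a, b: a * b))    # 1024
print(power("ab", 3, lambda a, b: a + b))  # "ababab": any monoid works
```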

Answer (score 1)

The two fields of CS and mathematics are becoming increasingly intertwined, especially on the theoretical side, and once-sharper boundaries are being blurred by various active/ongoing research programs and developments; one would expect this trend to continue and intensify gradually over the long term, this century. A whole essay, paper, or book could be written on this subject. Some examples/highlights where there are strong connections or overlap: