I've always had problems understanding what I can infer from some math definitions, such as:
"A relation R is antisymmetric if for all a and b in X, if R(a, b) and R(b, a), then a = b"
This is not everyday English and it's confusing. With that definition, for example, if $a \not\mathrel{R} b$ and $b \not\mathrel{R} a$, I would infer that the relation is not antisymmetric, when in fact it is.
So, looking for a solution, I read this book: https://www.amazon.com/Meaning-Argument-Introduction-Through-Language/dp/1405196734/ref=sr_1_2?keywords=meaning+and+argument&qid=1579256405&sr=8-2 since part of the description says: "Concentrates on symbolization and works out all the technical logic with truth tables instead of derivations". It explains some propositional logic and first-order logic in natural language, and I learned a lot from it, but there are still a lot of definitions I can't symbolize, such as: "a function f: X→Y is injective if and only if for all x₁, x₂∈X, if f(x₁)=f(x₂) then x₁=x₂".
I sent an email to the author of the book and he told me:
"For that all you need is higher-order quantification. That goes beyond what we discuss in M&A, but the extension is not so difficult. You need variables that range over functions, in addition to those that range over objects. When it comes to metalogic, things get more interesting, as logic with second-order quantification has no complete proof system, since it has the expressive power to represent the axioms of arithmetic"
This confused me even more, since he said "has no complete proof system", but at least I understood (I guess) that I have to learn second-order logic.
So my question is: is there a logic that lets you symbolize "every" mathematical definition/theorem, etc., so you can check whether what you infer is valid (truth tables, truth trees, etc.)? If yes, what do I have to study? If not, what is the purpose of logic? lol
Thank you!
Disclaimer: by "second/higher order logic" below I refer - as does the author - to the standard semantics. There is also Henkin semantics, which avoids the pathologies mentioned below ... but Henkin semantics is really just first-order logic "in disguise," so it's not really a good solution to these issues. I'll also - when referring to logics as opposed to theories (see below) - write "complete" for "sound and complete."
I find the author's response highly misleading in two key ways; I'll treat these first, and then try to address your question proper.
That said, since this answer is rather long, I'll note up front that the short answer to your title question is in the first bulletpoint of the last section of this answer.
The role of higher-order quantification
In my opinion, the author over-emphasizes the importance of higher-order logic here.
If $\Sigma$ is any first-order language (= set of function, relation, and constant symbols) and $f$ is a unary function symbol in $\Sigma$, we can write a single sentence $\eta$ such that for every $\Sigma$-structure $M$ we have $M\models\eta$ iff $M$ interprets the symbol $f$ as an injective function:
$$\forall x,y(f(x)=f(y)\leftrightarrow x=y).$$
Actually, only the "$\rightarrow$" direction is necessary; the "$\leftarrow$" direction holds automatically for every function.
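As an aside (this sketch is mine, not part of the original answer): on a finite domain, the sentence $\eta$ can be checked by brute force, mirroring "for all $x,y$: $f(x)=f(y)$ implies $x=y$" directly. This is one concrete sense in which checking such inferences is mechanical.

```python
def is_injective(f, domain):
    """Return True iff f (a dict from domain to values) is injective
    on the given finite domain, by checking the first-order sentence
    "for all x, y: f(x) = f(y) implies x = y" exhaustively."""
    return all(x == y
               for x in domain
               for y in domain
               if f[x] == f[y])

domain = [0, 1, 2]
print(is_injective({0: 5, 1: 6, 2: 7}, domain))  # True
print(is_injective({0: 5, 1: 5, 2: 7}, domain))  # False: f(0) = f(1) but 0 != 1
```

Of course, on an infinite domain no such exhaustive check is possible; that is where proof, rather than evaluation, takes over.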
What you need higher-order quantification for is to express something like "Every injective function is surjective" (which is true about a structure iff$^1$ that structure is finite):
$$\forall f([\forall x,y(f(x)=f(y)\leftrightarrow x=y)]\rightarrow[\forall x\exists y(f(y)=x)]).$$ The outermost quantifier ranges over all (unary) functions, and so is a second-order quantifier.
But we don't always need to do that. Higher-order quantification is indeed extremely important to understand, but the idea that it's already implicit in the definition of injectivity is in my opinion quite misleading.
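To make the second-order quantifier concrete (again, this sketch is mine, not the author's): over a *finite* structure, "for all functions $f$" becomes a finite loop, so we can verify "every injective function is surjective" directly by enumerating all functions on the universe.

```python
from itertools import product

def every_injection_is_surjection(universe):
    """Check the second-order sentence
    "for all f: (f injective) implies (f surjective)"
    on a finite universe, by enumerating every function universe -> universe."""
    for outputs in product(universe, repeat=len(universe)):
        f = dict(zip(universe, outputs))
        injective = all(x == y for x in universe for y in universe
                        if f[x] == f[y])
        surjective = all(any(f[y] == x for y in universe) for x in universe)
        if injective and not surjective:
            return False
    return True

print(every_injection_is_surjection([0, 1, 2]))  # True
```

On an infinite universe the quantifier ranges over uncountably many functions, so no such enumeration exists; that asymmetry is a first hint of why second-order logic behaves so differently.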
$^1$OK, fine: Dedekind-finite; the two notions aren't necessarily equivalent if we drop the axiom of choice. But meh.
The complexity of higher-order logic
Here the author is at best speaking imprecisely about a rather finicky topic.
There are two notions of completeness in logic: completeness of a logic and completeness of a theory in a given logic. The tension between these two can be seen in the following two theorems:
Gödel's completeness theorem: there is a computably enumerable "proof system" (a precise term) which is complete for first-order logic.
Gödel's incompleteness theorem: the theory $TA$ of true arithmetic (= the set of sentences true in the structure $(\mathbb{N}; +,\times)$) is not computably enumerable; moreover, there is a single sentence $\varphi$ in $TA$ such that no consistent computably enumerable theory containing $\varphi$ is complete.
The phrase "axioms of arithmetic" that the author uses is a bad choice in this context: at a first glance, I would read that as "axioms of first-order Peano arithmetic," which is a particular computably enumerable first-order theory (and as such, "any logic interpreting the axioms of arithmetic is incomplete" would be false since it would apply to first-order logic already).
What is true is that there is a single sentence $\pi$ in second-order logic which characterizes $(\mathbb{N};+,\times)$ up to isomorphism (this is the conjunction of the finitely many axioms of second-order Peano arithmetic). This lets us show that there is no computably enumerable proof system which is complete for second-order logic: if there were, $TA$ would be computable (to tell if $\varphi\in TA$ just ask "does $\pi$ entail $\varphi$?").
Annoyingly, this is often abbreviated as "second-order logic is incomplete," which I quite dislike; it doesn't make sense to say that a logic itself is complete/incomplete, but rather that a proof system is complete/incomplete with respect to a logic.
In fact, this $\pi$ shows that second-order logic isn't even compact (this is a good exercise). And this only scratches the surface of the pathological nature of second-order logic, although the deeper pathologies are more technical to state. Stewart Shapiro's book *Foundations without Foundationalism* has some positive things to say about second-order logic, however.
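A hint for that exercise, for readers who want one (spoiler warning; this sketch is mine, not part of the original answer): add a fresh constant symbol $c$ to the language and consider

```latex
\Sigma \;=\; \{\pi\} \cup \{\, c \neq \underline{n} \;:\; n \in \mathbb{N} \,\},
```

where $\underline{n}$ is the numeral naming $n$. Every finite subset of $\Sigma$ is satisfiable (interpret $c$ as a large enough natural number), but $\Sigma$ itself is not: any model of $\pi$ is isomorphic to $(\mathbb{N};+,\times)$, so every element is named by some numeral.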
Your actual question
With those two points out of the way, let me address your question proper.
We get around the tension between the terrible-ness of higher-order logic and its ubiquity in natural-language reasoning about mathematics by embedding our practice of mathematics inside ZFC (or similar), which is a first-order theory of higher-order objects; roughly, think of ZFC as what we get if we decide to study "all orders" of mathematical objects but restrict ourselves to the framework of first-order logic (and computably enumerable axiom systems) alone.
This isn't a perfect system, but the results above demonstrate that no perfect system exists: as soon as we try to get around one bad feature (incompleteness of our theories) we hit another bad feature (lack of a complete proof system for our logic). In practice, this amounts to a tension between the two goals of power and usability. The latter goal has won out: a strong axiom system/logic is useless to us if we can't actually tell what a valid proof in that system is.
The goal of logic is not to answer every question, and the position that it's not worth studying unless it can ("if not what is the purpose of logic? lol") isn't a good one; answering some questions is still a good thing! Here are a few pro-logic points:
- It gives us a common framework for doing mathematics in. First-order ZFC has been established as the default system for mathematics: if I claim "I've proved $P$," I'm implicitly claiming that I've demonstrated that $P$ (appropriately phrased in the language of set theory according to standard methods) is provable in ZFC. Moreover, by the completeness theorem we'll never have a fundamental disagreement on whether or not something is a valid ZFC-proof (indeed, this is computer-verifiable in principle). This guarantees the coherence of mathematical practice - something we take for granted now, but which was a point of worry "back in the day."
- It is useful to, and a field of, pure math independently of foundational issues. Applications of the compactness theorem, e.g. to a proof of Ax-Grothendieck, constitute some good examples of the former point - see also this paper on logic's role in theoretical computer science - and the whole subject of mathematical logic as it is practiced today demonstrates the latter point.
- Combining the foundational and pragmatic aspects, logic tells us what we can expect from mathematics. Theorems like the incompleteness theorem are not just relevant to our choice of common framework, or mathematically interesting in their own right; they're also key to understanding what mathematics is and can be. Good or bad, there's no way around them - and I think they are worth knowing. (For more in this direction, I've written a bit about what I consider to be some "key" results in logic here.)
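On the "computer-verifiable in principle" point: proof assistants make this concrete. Here is a tiny machine-checked example in Lean 4 (my own illustration, not part of the original answer; `injective` is a name chosen just for this sketch):

```lean
-- A function on the naturals is injective if f x = f y implies x = y.
def injective (f : Nat → Nat) : Prop :=
  ∀ x y, f x = f y → x = y

-- The successor function is injective. The proof term is checked by Lean's
-- kernel, so there is no room for disagreement about whether it is valid.
example : injective Nat.succ :=
  fun _ _ h => Nat.succ.inj h
```

If the proof term were wrong, the file simply would not compile; this is the "no fundamental disagreement about validity" point in its sharpest form.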
I'll end by mentioning one key result which I think constitutes a good defense of first-order logic: Lindström's theorem, which says (roughly) that first-order logic is the strongest regular logic satisfying both the compactness theorem and the downward Löwenheim-Skolem theorem. There are other results along these lines as well.
Basically, any time we try to strengthen first-order logic we either give up the notion of proof in any reasonable sense or we fold distinctions between higher levels of infinity into our base logic as opposed to making them mathematical topics which arise later on. (The value of this latter point is rather technical, but despite initial negativity it's generally agreed that it is a Good Thing.) Lindström's theorem more-or-less initiated the general study of logical systems, called "abstract model theory."