This term I'm taking three mathematics courses in my Computer Science degree: Calculus 1M (dealing especially with limits, sets, etc.), Linear Algebra, and Discrete Math.
So far, I have accumulated some essential questions:
- First, in Calculus 1M we encounter the notion of a 'field': an object with two operations, from which we have to infer other properties. Beyond that, the term is still vague to me. I haven't grasped the intrinsic difference between a field and a set, and I would be glad to get a good explanation of it.
Secondly, the proofs: they have to be very rigorous, to such an extent that many times I really don't know how to start writing an answer. Where do I begin? Is it legal to start from a given statement or not? It all feels very ambiguous. I don't know whether you can give me concrete help with this, but I'll leave the question here.
A technical misunderstanding: I haven't found a good explanation of what an order relation is, or what an ordered field is. I have learned by rote that an ordered field must satisfy certain properties (compatibility with addition and multiplication, transitivity, and so on), but I don't understand deeply why those specific properties were chosen. What is behind this?
Concerning Linear Algebra: we have learned about matrices and the operations on them, but again like robots. No one really understands what a matrix is, or why those operations on it work the way they do. In general, we have no intuition about it: what does a matrix represent? Why can systems of linear equations be solved with matrices? Why do the elementary row operations work?
Discrete math: later on...
Thank you.
Here's a wordy explanation which is a bit lighter on the advanced mathematics compared to other answers:
A set is roughly a collection of elements. Examples of sets are $\{A, B, F, X\}$, $\{1,-4,\frac{1}{2}, 0, 12\}$, and $\{\triangle, \circ, \square \}$, as well as some pre-defined sets like $\mathbb{N}$, $\mathbb{Z}$, and $\mathbb{R}$. Note that the elements of a set have no inherent order, no built-in operations that transform one or more elements into another element, and no inherent relationships to one another. Most of the time, sets contain the same "kind" of objects (letters, numbers, shapes, animals, etc.), but this is not a hard requirement.
Some sets are a little more structured than this, though. We might want to consider some kind of number system, where we can add and subtract, multiply and divide, and combine these operations in a meaningful way so that we always get an answer out. Fields are one kind of structured set, motivated by the numbers and operations we use in daily life. There are many "rules" or axioms which need to be satisfied for a set with some operations to be called a field, but basically, a field is just a set of numbers which acts like the real numbers (or complex numbers) with addition and multiplication. Things like associativity, commutativity, identity, and inverse are required for each of addition and multiplication, and distributivity is required for their combination. With all this structure, you have a lot of room to do algebra on these kinds of numbers, which is great. In order to do calculus (limits), you need some additional structure (completeness), which the real and complex numbers have, but not every field does.
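As a concrete (non-real-number) example, the integers mod 5 form a finite field. Here is a minimal sketch that brute-force checks a few of the axioms mentioned above; the helper names `add` and `mul` are my own, and of course an exhaustive check like this only works because the set is finite:

```python
# Brute-force check of some field axioms in Z_5 = {0, 1, 2, 3, 4},
# with addition and multiplication taken mod 5.

P = 5
ELEMENTS = range(P)

def add(a, b):
    return (a + b) % P

def mul(a, b):
    return (a * b) % P

# Commutativity of both operations
assert all(add(a, b) == add(b, a) for a in ELEMENTS for b in ELEMENTS)
assert all(mul(a, b) == mul(b, a) for a in ELEMENTS for b in ELEMENTS)

# Distributivity: a*(b+c) == a*b + a*c
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in ELEMENTS for b in ELEMENTS for c in ELEMENTS)

# Every nonzero element has a multiplicative inverse
assert all(any(mul(a, b) == 1 for b in ELEMENTS)
           for a in ELEMENTS if a != 0)

print("Z_5 passes these field-axiom checks")
```

Note that $\mathbb{Z}_5$ satisfies the field axioms but is not complete (it isn't even ordered), so you can do algebra in it but not calculus.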
In a 100% rigorous system, the only legal statements without proof are axioms. Whatever axioms you have in the system you are trying to prove something under, you can use whenever you want, as they necessarily hold. In addition to axioms, any statements which have been previously proven with the currently legal statements are legal. That is, you must be able to prove a statement with only axioms and the statements you have already proven before you can use that statement to prove any other statements. This is the general process of building a theoretical framework. Very quickly, you stop referencing axioms and start referencing the theorems you have proven, but you need to be careful about taking statements proven by other people. If you see a statement proven by someone else, you can use it (proofs and theorems are not copyright), but you need to be sure that the proof they give is valid within your existing framework.
An ordered field can be defined by the existence of a relation which satisfies certain conditions/axioms, but intuitively, it is exactly what you know as order. If I give you two numbers $a$ and $b$, you can say for sure that $a \leq b$ or $b \leq a$, and both of those statements are true if and only if $a = b$. You also know that if $a \leq b$ and $b \leq c$, then $a \leq c$. Lastly, and most obviously, any number $a$ satisfies $a \leq a$ (because $a=a$). These intuitive ideas of order don't necessarily exist in all sets, but ones that do have them are called ordered. If the set is a field, then it is an ordered field. The real numbers are an ordered field, but the complex numbers are not (what does it mean to say $3+i \leq 2+2i$?).
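For reference, here is one standard way to state these axioms; the exact names vary by textbook, but this is the usual list of total-order axioms for a relation $\leq$ on a field $F$, plus the two compatibility conditions that make $(F, \leq)$ an ordered field:

$$
\begin{aligned}
&\text{totality:} && a \leq b \ \text{or}\ b \leq a \\
&\text{antisymmetry:} && (a \leq b \ \text{and}\ b \leq a) \implies a = b \\
&\text{transitivity:} && (a \leq b \ \text{and}\ b \leq c) \implies a \leq c \\
&\text{compatibility with addition:} && a \leq b \implies a + c \leq b + c \\
&\text{compatibility with multiplication:} && (0 \leq a \ \text{and}\ 0 \leq b) \implies 0 \leq ab
\end{aligned}
$$

The two compatibility axioms are exactly what let you manipulate inequalities algebraically (add the same thing to both sides, multiply by something nonnegative). That is the answer to "why specifically those properties": they are the minimum needed for order to interact sensibly with the field operations.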
Linear algebra, in a sense, starts from linear systems. If you have a system of equations,
$$
\begin{aligned}
3x+5y-2z &= 12 \\
2x+2y+6z &= 3 \\
-x+z &= -9
\end{aligned}
$$
we might want to write it in a shorthand way, where we don't repeat things like $x,y,z$ and the $=$ sign up to 3 times each (or more for higher dimensions). We first look at all the coefficients, which are already laid out in a rectangular shape, so let's just make a box of coefficients
$$ A = \begin{bmatrix} 3 & 5 & -2 \\ 2 & 2 & 6 \\ -1 & 0 & 1 \end{bmatrix} $$
On the right hand side, we have a column of numbers, so let's make a column-shaped box
$$ \mathbf b = \begin{bmatrix} 12 \\ 3 \\ -9 \end{bmatrix} $$
Since the output of the system is a column, we should make the input of the system a column too, that way we can use the output of this system as the input to another one if we want
$$ \mathbf x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} $$
Then, simply define the multiplication of a box and a column to be the way in which we "unwrap" this shorthand: each element in a given row of $A$ is the coefficient of the corresponding column of $\mathbf x$. This means our system can now be written as
$$ A\mathbf x = \mathbf b $$
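To make the "unwrapping" concrete, here is a minimal Python sketch of the box-times-column rule applied to the example matrix, using plain lists and no libraries (the function name `matvec` is my own):

```python
# Multiply a "box" (matrix, stored as a list of rows) by a "column"
# (a flat list): each entry of the result is one unwrapped
# left-hand side of the linear system.

def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[3, 5, -2],
     [2, 2,  6],
     [-1, 0, 1]]

# Plugging in a trial column x = [1, 2, 3] evaluates all three
# left-hand sides 3x+5y-2z, 2x+2y+6z, -x+z at once:
print(matvec(A, [1, 2, 3]))  # [7, 24, 2]
```

Solving the system $A\mathbf x = \mathbf b$ is then the reverse question: which column $\mathbf x$ does `matvec` send to $\mathbf b$?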
In this form, we can look at $A$ as a way of transforming a column $\mathbf x$ into another column $\mathbf b$; we say $L(\mathbf x) = A\mathbf x$ is a linear transformation.
So how do we multiply two matrices? Well, since $A\mathbf x$ is the left side of a linear system with coefficients from $A$, $BA\mathbf x$ should be the left side of a linear system with coefficients from $BA$. $A\mathbf x$ is a column, so we can easily calculate $BA\mathbf x$ using our definition of a box times a column. Then, we can simply take the coefficients of the resulting linear system as the components of $BA$. In this way, the standard matrix multiplication definition is formed. The limitations on the size of matrices to be multiplied and the size of their product should be deducible from this definition.
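One way to watch this definition emerge is to compute each column of $BA$ by feeding the standard basis columns through $A$ and then through $B$; column $j$ of $BA$ is just $B(A\mathbf e_j)$. A sketch of that idea (helper names are mine, and `matvec` is the box-times-column rule from before):

```python
def matvec(A, x):
    # box times column, as defined above
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul(B, A):
    # Column j of BA is B applied to (A applied to e_j), where e_j is
    # the j-th standard basis column [0, ..., 1, ..., 0].
    n = len(A[0])
    cols = []
    for j in range(n):
        e = [1 if i == j else 0 for i in range(n)]
        cols.append(matvec(B, matvec(A, e)))
    # cols holds the columns of BA; rearrange them into rows
    return [[cols[j][i] for j in range(n)] for i in range(len(B))]

B = [[1, 0], [1, 1]]
A = [[2, 3], [4, 5]]
print(matmul(B, A))  # [[2, 3], [6, 8]]
```

Since the inner `matvec(A, e)` needs `e` to match the number of columns of $A$, and the outer `matvec(B, ...)` needs that result to match the number of columns of $B$, the familiar size restriction (columns of $B$ must equal rows of $A$) falls out of the definition automatically.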
Elementary row operations are simply the kinds of operations you were allowed to do on a linear system, but written in the new shorthand. Swapping rows is like swapping equations: absolutely nothing changes, but the new order might be a more convenient arrangement for you. Multiplying an equation (row) by some nonzero constant is obviously allowed, as if $a = b$, then $ca = cb$ (and nonzero $c$ means we can undo it). For the same reason, we can add two equations (rows); if $a=b$ and $c=d$, then $a+c = b+d$. When adding two equations, it might be confusing why we end up with the same number of equations; simply, the sum equation doesn't offer any new information on its own, so we replace one of the summand equations with it. Since each of these operations individually doesn't change the solution set of the system, any combination of them also preserves the solutions. Thus, we can freely apply these operations in any order, as many times as we need, to reduce the system to one which is solved.
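The three operations above are enough to solve the example system mechanically; here is a minimal Gauss-Jordan elimination sketch using exact fractions so the answer comes out as exact rationals rather than floats (the function name `solve` is mine):

```python
from fractions import Fraction

def solve(A, b):
    # Reduce the augmented matrix [A | b] using only the three
    # elementary row operations, then read off the solution.
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        # operation 1: swap rows so the pivot entry is nonzero
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # operation 2: scale the pivot row by a nonzero constant
        M[col] = [v / M[col][col] for v in M[col]]
        # operation 3: add multiples of the pivot row to the others
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [v - factor * p for v, p in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

A = [[3, 5, -2], [2, 2, 6], [-1, 0, 1]]
b = [12, 3, -9]
x = solve(A, b)
# sanity check: plugging x back into the system reproduces b
assert all(sum(a * xi for a, xi in zip(row, x)) == bi
           for row, bi in zip(A, b))
```

This assumes the system has a unique solution (a nonzero pivot exists in every column); handling inconsistent or underdetermined systems needs the full row-echelon analysis from the course.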
It turns out that the shorthand notation for linear systems has the kind of structure needed to do some algebra, so we study columns (formally vectors), and boxes (matrices) with their associated linear transformations in general. This leads to a large number of techniques for analyzing different kinds of linear systems (algebraic, differential, etc.) which may not have been developed by taking the system of equations at face value.