Origin of min/max notation

Here I am referring to the notation $x \wedge y = \min \{ x,y \}$ and $x \vee y = \max \{ x,y \}$. These seem to reference the corresponding usages in logic, where $\wedge$ means "and" and $\vee$ means "or". That is, they reference the usages in logic in the sense that both systems form Boolean lattices.

But this interpretation of the notation works exactly when we are dealing with $>$ and $\geq$ inequalities. That is, $x \wedge y > z$ can be read as "the minimum of $x$ and $y$ is greater than $z$" or as "$x$ and $y$ are greater than $z$". The analogous correspondence works with $\vee$.
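This correspondence can be checked mechanically. Here is a minimal Python sketch (the sample values are arbitrary, chosen only for illustration): for every triple, $x \wedge y > z$ holds exactly when both $x > z$ and $y > z$ do, and dually for $\vee$.

```python
import itertools

# min corresponds to "and", max to "or", when both sit on the
# greater side of a strict inequality.
vals = [-1.0, 0.0, 0.5, 1.0]
for x, y, z in itertools.product(vals, repeat=3):
    assert (min(x, y) > z) == (x > z and y > z)   # x ∧ y > z
    assert (max(x, y) > z) == (x > z or y > z)    # x ∨ y > z
```

The same equivalences hold with $\geq$ in place of $>$; they fail if the min/max expression is moved to the lesser side, which is the asymmetry the question is about.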

My question is: since analysts much more frequently deal with $<$ and $\leq$, why do we not use $\wedge$ for supremum and $\vee$ for infimum? Did this happen in order theory outside analysis and then transfer over? Historical references would be fantastic.

(Also, I blindly guessed how to tag this. Feel free to edit my tags.)

3 Answers

Answer 1 (2 votes)

It is not essential to write it the way you have written it. For example, we could instead write $$z\leq x\wedge y,$$ which reads as "$z$ is less than or equal to $x$ and $y$". So if the way you have written it seems backwards, write it the other way around and it is no longer backwards.

Answer 2 (0 votes)

When constructing real numbers as Dedekind cuts, people tend to focus on one of the two sets, saving themselves the trouble of carrying around the notation for both. And it tends to be the lower set: e.g., the construction of the reals in Rudin's Principles of Mathematical Analysis goes through the lower sets.

When real numbers are defined as lower cuts (downward closed subsets of $\mathbb Q$), their intersection corresponds to taking the minimum and union to the maximum. This reinforces the parallels $$\wedge : \cap : \min$$ $$\vee : \cup : \max$$
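The set-theoretic side of this parallel is easy to illustrate. A rough Python sketch, restricting each lower cut to a finite grid of rationals purely for illustration (real Dedekind cuts are of course infinite, and the cut endpoints here are arbitrary sample values):

```python
from fractions import Fraction

# A finite sample grid of rationals, standing in for Q.
grid = [Fraction(n, 4) for n in range(-8, 9)]

def cut(r):
    """The lower cut of r, restricted to the sample grid:
    all grid rationals strictly below r."""
    return {q for q in grid if q < r}

a, b = Fraction(1, 2), Fraction(3, 2)
assert cut(a) & cut(b) == cut(min(a, b))  # intersection ~ min ~ ∧
assert cut(a) | cut(b) == cut(max(a, b))  # union ~ max ~ ∨
```

Since lower cuts are downward closed, the intersection of two cuts is the smaller cut and the union is the larger one, which is exactly the $\wedge : \cap : \min$ and $\vee : \cup : \max$ parallel above.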

Answer 3 (0 votes)

I have to admit that the motivation and meaning of your question is not very clear to me; I don't know if this "answer" addresses it at all. Here goes nothing.

If we regard the truth-value "true" as being greater than the truth-value "false", say by identifying "true" with $1$ and "false" with $0$, then the truth-value of the disjunction $A\text{ OR }B$ is the greater of the two truth-values, while the truth-value of the conjunction $A\text{ AND }B$ is the lesser of the two truth-values. Thus it makes sense to equate disjunction with supremum and conjunction with infimum.
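This identification can be verified directly over the two truth values; a small Python check (using `int(bool(...))` only to move between the $\{0,1\}$ and true/false views):

```python
# With "true" = 1 and "false" = 0, OR computes the max of the
# truth-values and AND computes the min.
for a in (0, 1):
    for b in (0, 1):
        assert max(a, b) == int(bool(a) or bool(b))   # disjunction = sup
        assert min(a, b) == int(bool(a) and bool(b))  # conjunction = inf
```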

If we make the opposite convention, identifying "true" with $0$ and "false" with $1$, then of course a disjunction is an infimum and a conjunction is a supremum.

Is your question about why logicians usually (but not always) use $1$ for "true" and $0$ for "false" rather than the other way round? It seems to be quite arbitrary, but it is traditional. I believe the tradition goes back to George Boole's The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854). Of course Boole did not use the symbols $\wedge$ and $\vee$.