At the beginning of [Kanerva 1988], a Boolean algebra over $\{0, 1\}^n$ with bitwise OR and AND is introduced.
Example for bitwise OR: $$101 + 001 = 101$$ Example for bitwise AND: $$101 * 001 = 001$$
A bit later, Kanerva introduces orthogonality. Two points $x, y \in \{0, 1\}^n$ are orthogonal if $$ d(x, y) = \frac{n}{2}, $$ that is, $x$ and $y$ are orthogonal if the distance $d$, the norm of the difference (the Hamming distance) of the two points, $$ d(x, y) = \| x - y \|, $$ equals half the dimensionality of the space.
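For concreteness, here is a small sketch (not from Kanerva's book) of this definition, with hypothetical helper names `hamming` and `orthogonal`:

```python
def hamming(x, y):
    """Number of coordinates in which x and y differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def orthogonal(x, y):
    """Kanerva's definition: x and y are orthogonal iff d(x, y) = n/2."""
    n = len(x)
    return 2 * hamming(x, y) == n

# (1,0,1,0) and (1,1,0,0) differ in two of four coordinates, so d = n/2:
print(orthogonal((1, 0, 1, 0), (1, 1, 0, 0)))  # True
```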
What is the intuition behind this?
My intuition seems to be hindered by looking at the above from a geometric angle. If we apply the usual dot product to two Boolean-valued vectors in the plane, we get that $(1, 0)$ and $(0, 1)$ are orthogonal, since $\sum_{i=1}^{n} x_i y_i = 0$, e.g. $$1 \cdot 0 + 0 \cdot 1 = 0.$$
With the other definition of orthogonality, we get that in two dimensions, $n = 2$, the points $(1, 0)$ and $(1, 1)$ are orthogonal, since $$ d((1, 0), (1, 1)) = \| (1, 0) - (1, 1) \| = \| (0, 1) \| = 1 = \frac{n}{2}$$
Update: I have a very rudimentary idea of what Kanerva's orthogonality conveys, but I would be happy if you could correct me where I am wrong.
Kanerva calls the distance $\frac{n}{2}$ the indifference distance of $\{0, 1\}^n$. Two points could not be more indifferent to each other in this space: they agree on half the coordinates and disagree on the other half.
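A quick sanity check of this intuition (my own sketch, not from the book): the Hamming distance between two uniformly random points of $\{0, 1\}^n$ concentrates around $n/2$, so a "typical" unrelated point sits at the indifference distance.

```python
# Estimate the average Hamming distance between random points of {0,1}^n.
import random

def hamming(x, y):
    return sum(xi != yi for xi, yi in zip(x, y))

random.seed(0)
n, trials = 1000, 200
avg = sum(
    hamming([random.getrandbits(1) for _ in range(n)],
            [random.getrandbits(1) for _ in range(n)])
    for _ in range(trials)
) / trials
print(avg)  # close to n/2 = 500
```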
[Kanerva 1988] Kanerva, Pentti. Sparse distributed memory. MIT Press, 1988.
It may be by analogy with "orthogonal" points on a (hyper)sphere, if we define those to be ones separated by 90°.
On a sphere, then, the points orthogonal to $x$ are exactly those whose distance from $x$ is half the (geodesic) diameter of the space.
Half the diameter is also the $d$ such that the set of points with distance $d$ from $x$ has the largest possible measure -- both for the sphere and for the hypercube defined in the question.
The kicker, however, is probably that this does reduce to $u\cdot v=0$ if we consider the space $\{-1,1\}^n$ instead of $\{0,1\}^n$.
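A short sketch of that last point (my own illustration, with hypothetical helper names): mapping $0 \mapsto -1$ gives $u \cdot v = n - 2\,d(x, y)$, so $d(x, y) = n/2$ exactly when the dot product vanishes.

```python
# Map {0,1}^n to {-1,1}^n and check that Hamming distance n/2
# corresponds to a zero dot product.
def to_pm1(x):
    return [2 * xi - 1 for xi in x]  # 0 -> -1, 1 -> +1

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x, y = (1, 0, 1, 0), (1, 1, 0, 0)   # Hamming distance 2 = n/2
print(dot(to_pm1(x), to_pm1(y)))    # 0
```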