So I'm reading Landau and Lifshitz' Theory of Elasticity (https://archive.org/details/TheoryOfElasticity) and they have done, among (many) other things, something I simply don't understand.
On page 5, they start talking about the moment of force, defined I guess as $F \times r$. They have a little footnote at the bottom saying you can write the components of a vector product as an antisymmetrical tensor of rank two.
I...really really don't know what the hell they are talking about, or how that is possible. Can someone help?
Let's start from the top. If you have any programming background, then the following might make some more sense to you:
A tensor is merely a function that is linear in each of its arguments and produces a number. For instance, let $f: \mathbb R^n \to \mathbb R$. That is, for any vector $v \in \mathbb R^n$, $f(v)$ is some real number. If $f$ is linear--that is, for another vector $w$, $f(v) + f(w) = f(v+w)$, and for a scalar $\alpha$, $f(\alpha v) = \alpha f(v)$--then $f$ is also a tensor.
In physics parlance, a "tensor of rank two" means "a tensor that is a function of two arguments." So we have some tensor $A: \mathbb R^n \times \mathbb R^n \to \mathbb R$, or $A(v,w)$ is a number.
How does this connect with index notation? Because of linearity, you can fully describe $A(v,w)$ for any vectors $v,w$ by examining $A(e_i, e_j)$, where $e_i$ and $e_j$ are stand-ins for basis vectors. Explicitly, it's this:
$$A(v,w) = \sum_{i,j} v^i w^j A(e_i, e_j)$$
And it is typical to write $A_{ij} = A(e_i, e_j)$.
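This is easy to check numerically. Here's a minimal NumPy sketch, where the bilinear map and the test vectors are arbitrary choices made for illustration:

```python
import numpy as np

# An arbitrary bilinear map A(v, w) = v . (M w), for some fixed matrix M.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 2.0]])

def A(v, w):
    return v @ M @ w

# Components A_ij = A(e_i, e_j), with e_i the standard basis vectors.
e = np.eye(3)
A_components = np.array([[A(e[i], e[j]) for j in range(3)] for i in range(3)])

# Bilinearity guarantees A(v, w) = sum_ij v^i w^j A_ij for any v, w.
v = np.array([1.0, -2.0, 0.5])
w = np.array([3.0, 0.0, -1.0])
direct = A(v, w)
via_components = sum(v[i] * w[j] * A_components[i, j]
                     for i in range(3) for j in range(3))
print(direct, via_components)  # the two agree
```

Knowing the nine numbers $A_{ij}$ is exactly as good as knowing the function $A$ itself.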
You mention being lost on the subtle difference between a matrix and a rank-2 tensor here. I would say it's this: for a rank-2 tensor, you can carry out the rules above entirely in terms of matrix multiplication, but you are not required to do so--the tensor is the bilinear map itself, and the matrix is just one convenient way of organizing its components.
(Nb. A tensor's components also have particular transformation laws under change of basis, but for the purposes of this answer, I'm taking the position that such transformations are passive transformations, and that these different sets of components represent the same geometric object merely in different bases. I'm well aware that active and passive transformations are indistinguishable, but I feel this point is out of scope for this answer.)
Now, what does this have to do with the cross product?
Consider $f(v,w) = v \cdot (F \times w)$ for vectors $v, F, w$. We've defined some function $f$, but we have not proved this is a tensor. Yet.
All we need to do is check that $f$ is linear in both arguments. Is it?
Let $c$ be another vector and $\alpha$ a scalar. Then...
Is it true that $f(v+c,w) = f(v,w) + f(c,w)?$ Yes. See that $(v+c) \cdot (F \times w) = v \cdot (F \times w) + c \cdot (F \times w)$.
Is it true that $f(v, w+c) = f(v,w) + f(v,c)$? Again, yes. This follows from distributivity of the cross product over addition.
Is it true that $f(\alpha v, w) = \alpha f(v,w)$? That $f(v,\alpha w) = \alpha f(v,w)$? Again, yes and yes. Both the dot and cross products respect scalar multiplication.
Thus, $f$ is linear in both arguments, and it is therefore a tensor (of rank 2, in physics parlance).
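Each of the checks above can also be spot-checked numerically. A small sketch (the vectors and scalar here are random, and the function `f` is the triple product defined above):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal(3)  # an arbitrary fixed vector F

def f(v, w):
    return v @ np.cross(F, w)  # f(v, w) = v . (F x w)

v, w, c = rng.standard_normal((3, 3))
alpha = 2.5

# Additivity and homogeneity in each argument:
print(np.isclose(f(v + c, w), f(v, w) + f(c, w)))    # True
print(np.isclose(f(v, w + c), f(v, w) + f(v, c)))    # True
print(np.isclose(f(alpha * v, w), alpha * f(v, w)))  # True
print(np.isclose(f(v, alpha * w), alpha * f(v, w)))  # True
```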
Thus far, I've proven that for any vector $F$, there is a tensor $f$ such that $f(v,w) = v \cdot (F \times w)$. Using this prescription, we can extract the components:
$$f_{ij} = e_i \cdot (F \times e_j)$$
For instance, $f_{xy} = \hat x \cdot (F \times \hat y) = -F^z$ in Cartesian coordinates.
As I mentioned earlier, one can use matrices and matrix multiplication to facilitate computation with rank-2 tensors. The resulting matrix representation would look like this:
$$[f] = \begin{bmatrix} 0 & -F^z & F^y \\ F^z & 0 & -F^x \\ -F^y & F^x & 0 \end{bmatrix}$$
And then we would just compute $f(v,w) = [v]^T [f] [w]$.
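And one can verify that the matrix form reproduces the original triple product (again with arbitrary example vectors):

```python
import numpy as np

F = np.array([1.0, 2.0, 3.0])  # arbitrary (F^x, F^y, F^z)
v = np.array([0.5, -1.0, 2.0])
w = np.array([4.0, 0.0, -3.0])

# The antisymmetric matrix [f] built from the components of F:
fmat = np.array([[  0.0, -F[2],  F[1]],
                 [ F[2],   0.0, -F[0]],
                 [-F[1],  F[0],   0.0]])

print(v @ fmat @ w)        # matrix form [v]^T [f] [w]
print(v @ np.cross(F, w))  # direct triple product v . (F x w)
# the two agree: the antisymmetric matrix encodes the cross product with F
```

This is exactly the sense of Landau and Lifshitz's footnote: the three components of $F$ have been repackaged as an antisymmetric rank-2 tensor, with no information lost.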