I want to show that all bivectors in three dimensions are simple.
If I understand correctly, a bivector is simply an element from the two-fold exterior product $\bigwedge^2V$ of a vector space $V$, right?
We can define $\wedge(e_i\otimes e_j):=e_i\otimes e_j - e_j\otimes e_i$ and map $T=t^{ij}e_i\wedge e_j\mapsto t^{ij}(e_i\otimes e_j - e_j\otimes e_i)=(t^{ij}-t^{ji})e_i\otimes e_j$. This map is injective, because the wedge product, viewed as a linear map from the tensor product to the exterior product, sends precisely the symmetric tensors to $0$.
We see that totally antisymmetric tensors are in this case represented by skew-symmetric matrices. To show that they are all simple, I would have to show that the rank of any $3\times 3$ skew-symmetric matrix is $1$.
But the rank of a skew-symmetric matrix is never one.
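This can be checked numerically. The following sketch (the variable names are mine) draws random $3\times 3$ skew-symmetric matrices and confirms that their rank is always even, i.e. $0$ or $2$, never $1$:

```python
import numpy as np

# Sanity check: the rank of a skew-symmetric matrix is always even,
# so a 3x3 skew-symmetric matrix has rank 0 or 2 -- never 1.
rng = np.random.default_rng(0)
for _ in range(100):
    t = rng.standard_normal((3, 3))
    A = t - t.T                      # antisymmetrize: A is skew-symmetric
    assert np.allclose(A, -A.T)
    assert np.linalg.matrix_rank(A) in (0, 2)
```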
I must have made a conceptual mistake somewhere again. Does anybody have a hint for me?
Geometrically, one can use the canonical isomorphism between the two-fold exterior product and $\mathbb{R}^3$ itself to show that any antisymmetric tensor can be thought of as a vector in 3D, which in turn can be written as the cross product of two non-collinear vectors, each orthogonal to that vector.
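This construction can be made explicit. Below is a sketch (the helper name `split_bivector` is mine, not standard): given the vector $b$ corresponding to a nonzero bivector under $\Lambda^2(\mathbb{R}^3)\cong\mathbb{R}^3$, it produces two vectors $u, w$ with $u\times w = b$, exhibiting the bivector as the simple bivector $u\wedge w$.

```python
import numpy as np

def split_bivector(b):
    """Write a nonzero 3D vector b as a cross product u x w.

    Under the isomorphism Lambda^2(R^3) ~ R^3, this exhibits the
    bivector corresponding to b as the simple bivector u ^ w.
    """
    b = np.asarray(b, dtype=float)
    # Pick any vector a not parallel to b (b is assumed nonzero).
    a = np.array([1.0, 0.0, 0.0])
    if np.allclose(np.cross(b, a), 0):
        a = np.array([0.0, 1.0, 0.0])
    u = np.cross(b, a)                   # u is orthogonal to b
    w = np.cross(b, u) / np.dot(u, u)    # BAC-CAB rule then gives u x w = b
    return u, w

u, w = split_bivector([2.0, -1.0, 3.0])
assert np.allclose(np.cross(u, w), [2.0, -1.0, 3.0])
```

The key identity is $u \times \big(b \times u\big) = b\,(u\cdot u) - u\,(u\cdot b)$, whose second term vanishes because $u = b\times a$ is orthogonal to $b$.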
This doesn't follow, since as you say the rank of a skew-symmetric matrix can never be $1$. You're conflating $e_i \otimes e_j - e_j \otimes e_i$, which as a tensor has rank $2$ or $0$, with $e_i \wedge e_j$. These are not the same object; one of them lives in $V^{\otimes 2}$ and the other one lives in $\Lambda^2(V)$. In general I don't recommend thinking in terms of antisymmetric tensors; it makes the exterior product look much more complicated than it is.
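To make the distinction concrete, here is a quick NumPy check (my own illustration, not from the answer) that the antisymmetrized tensor $e_1 \otimes e_2 - e_2 \otimes e_1$ has tensor rank $2$ as a matrix, even though the bivector $e_1 \wedge e_2$ it represents is simple:

```python
import numpy as np

e1, e2 = np.eye(3)[0], np.eye(3)[1]
# The tensor e1 (x) e2 - e2 (x) e1, written as a matrix.
T = np.outer(e1, e2) - np.outer(e2, e1)
# As a tensor it has rank 2 (a sum of two decomposable tensors) ...
assert np.linalg.matrix_rank(T) == 2
# ... yet the bivector e1 ^ e2 it represents is a single wedge, hence simple.
```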
Anyway, here's a proof of darij's more general claim in the comments. Let $v_1 \in \Lambda^{n-1}(V)$ be a vector, where $\dim V = n$. Choose a nonzero element $\omega \in \Lambda^n(V)$, hence an identification of $\Lambda^n(V)$ with the ground field $k$. Then the exterior product
$$\wedge : V \times \Lambda^{n-1}(V) \to \Lambda^n(V) \cong k$$
is a nondegenerate bilinear pairing. Extend $v_1$ to a basis $v_1, \dots, v_n \in \Lambda^{n-1}(V)$. Then this basis has a unique dual basis $e_1, \dots, e_n \in V$ defined by the condition that
$$e_i \wedge v_j = \delta_{ij} \omega \in \Lambda^n(V).$$
Then the $v_i$ must also be the dual basis of the $e_i$ with respect to this pairing. But this dual basis in turn must be
$$v_i = (-1)^{i-1} \frac{\omega}{e_1 \wedge \dots \wedge e_n} e_1 \wedge \dots \wedge \widehat{e_i} \wedge \dots \wedge e_n$$
where the hat denotes that we omit $e_i$, and in particular
$$v_1 = \frac{\omega}{e_1 \wedge \dots \wedge e_n} e_2 \wedge \dots \wedge e_n.$$
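For concreteness, take $n = 3$ and $\omega = e_1 \wedge e_2 \wedge e_3$, so the scalar prefactor is $1$. The formula then gives

$$v_1 = e_2 \wedge e_3, \qquad v_2 = -\,e_1 \wedge e_3, \qquad v_3 = e_1 \wedge e_2,$$

and one checks directly that $e_i \wedge v_j = \delta_{ij}\,\omega$: for instance $e_2 \wedge v_2 = -\,e_2 \wedge e_1 \wedge e_3 = e_1 \wedge e_2 \wedge e_3 = \omega$, while $e_1 \wedge v_2 = -\,e_1 \wedge e_1 \wedge e_3 = 0$. Since every element of $\Lambda^{n-1}(V)$ can play the role of $v_1$, every such element is a single wedge of $n-1$ vectors, which for $n = 3$ is exactly the claim in the question.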