Plücker Relations


Let $K$ be a field, $1 \leq d \leq n$ integers and $V$ an $n$-dimensional vector space. The Plücker relations are quadratic forms on $\wedge^d V$ whose zero set is exactly the set of decomposable vectors in $\wedge^d V$ (i.e. those of the form $v_1 \wedge \ldots \wedge v_d$), thus describing the ideal corresponding to the Plücker embedding $\text{Gr}_d(V) \to \mathbb{P}(\wedge^d V)$. But in every book I've read so far, these Plücker relations are constructed by means of many identifications between duals, exterior powers, etc., so that I am not able to write them down explicitly. Although I've tried, the many signs and sums confuse me.

Question. Is it possible to write down these Plücker relations explicitly as a set of polynomials in the ring $K[\{x_H\}]$, where $H$ runs through the subsets of $\{1,\ldots,n\}$ with $d$ elements? (Of course it is possible, but I wonder how to do this in general.)

Edit: Following the answer below, here is the

Answer: Instead of using these subsets $H$, use indices $1 \leq i_1 < \ldots < i_d \leq n$, and extend the definition of $x_{i_1,\ldots,i_d}$ to all $d$-tuples in such a way that $x_{i_1,\ldots,i_d} = 0$ if the $i_j$ are not pairwise distinct, and otherwise $x_{i_1,\ldots,i_d} = \operatorname{sgn}(\sigma) \cdot x_{i_{\sigma(1)},\ldots,i_{\sigma(d)}}$, where $\sigma$ is the unique permutation of $1,\ldots,d$ which makes $i_{\sigma(1)} < \ldots < i_{\sigma(d)}$. Then the Plücker relations are

$\sum\limits_{j=0}^{d} (-1)^j \, x_{i_1,\ldots,i_{d-1},k_j} \cdot x_{k_0,\ldots,\hat{k_j},\ldots,k_d} = 0$

for all integers $i_1,\ldots,i_{d-1},k_0,\ldots,k_d$ between $1$ and $n$.
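To make the formula concrete, here is a small self-contained Python check (the matrix `M` and all helper names are my own illustration, not from any cited source): it computes the Plücker coordinates of the row span of a $2\times 4$ matrix as maximal minors, extends the indexing to arbitrary tuples with the sign convention above (0-based indices), and verifies that every instance of the displayed sum vanishes.

```python
import itertools

def det(rows):
    # Laplace expansion along the first row; fine for small d
    if len(rows) == 1:
        return rows[0][0]
    return sum((-1) ** j * rows[0][j] * det([r[:j] + r[j + 1:] for r in rows[1:]])
               for j in range(len(rows)))

def x(coords, idx):
    # extended Plücker coordinate: 0 on repeated indices,
    # otherwise the sign of the sorting permutation times the sorted coordinate
    if len(set(idx)) < len(idx):
        return 0
    perm = sorted(range(len(idx)), key=lambda t: idx[t])
    inversions = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
                     if perm[a] > perm[b])
    return (-1) ** inversions * coords[tuple(sorted(idx))]

d, n = 2, 4
M = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
# Plücker coordinates of the row span of M: d x d minors on column subsets
coords = {cols: det([[M[r][c] for c in cols] for r in range(d)])
          for cols in itertools.combinations(range(n), d)}

# check sum_{j=0}^{d} (-1)^j x_{i_1..i_{d-1},k_j} x_{k_0..k_j^..k_d} = 0
for i in itertools.product(range(n), repeat=d - 1):
    for k in itertools.product(range(n), repeat=d + 1):
        assert sum((-1) ** j * x(coords, i + (k[j],)) * x(coords, k[:j] + k[j + 1:])
                   for j in range(d + 1)) == 0
print("all relations vanish")
```

For $d = 2$, $n = 4$ the only nontrivial relation is the classical $x_{12}x_{34} - x_{13}x_{24} + x_{14}x_{23} = 0$.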


BEST ANSWER

Yes, the Plücker relations are written down totally explicitly in terms of the polynomials you require on page 110, equation (3.4.10), of Jacobson's book Finite-Dimensional Division Algebras over Fields. The proof, attributed by the author to Faulkner (a student of his?), is completely down-to-earth: no identifications, no duality,...

Edit Since Martin doesn't have access to the book, I'm adding an online presentation, with the relevant equations on page 21. It is very elementary, with concrete examples, and might appeal to readers whose interest has been whetted by Martin's question.
And the bibliography contains a reference to a masterful article by Kleiman and Laksov, which also contains the Plücker relations handled with minors of determinants and nothing else.

ANOTHER ANSWER

Consider the vector space $W = k^d$ and let $v_1$, $\ldots$, $v_d$ be a basis of $W$. Then every vector $v \in W$ has a unique expression

$$v = \sum_{i=1}^d c_i v_i,$$ where $$c_i = \frac{(v_1, \ldots, v,\ldots, v_d)}{(v_1, \ldots, v_d)},$$ the bracket $(u_1, \ldots, u_d)$ denotes the mixed product (determinant) of $d$ vectors, and $v$ replaces $v_i$ in the $i$-th slot (Cramer's rule).

From the above we get the equality

$$(v_1, \ldots, v_d)\, v = \sum_{i=1}^d (v_1, \ldots, v, \ldots, v_d)\, v_i$$ This last equality is valid even if $(v_1, \ldots, v_d) = 0$ (by continuity, or over an arbitrary field because it is a polynomial identity). This gives a general relation between $d+1$ vectors in a $d$-dimensional space.
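As a sanity check, the identity can be verified numerically. The sketch below uses arbitrary sample vectors of my own choosing (with $d = 3$) and compares both sides coordinate by coordinate:

```python
def det(rows):
    """Determinant by Laplace expansion along the first row (small matrices)."""
    if len(rows) == 1:
        return rows[0][0]
    return sum((-1) ** j * rows[0][j] * det([r[:j] + r[j + 1:] for r in rows[1:]])
               for j in range(len(rows)))

d = 3
vs = [[1, 0, 2], [3, 1, 1], [0, 2, 5]]  # v_1, v_2, v_3 as rows
v = [4, 7, 6]

lhs = [det(vs) * c for c in v]          # (v_1,...,v_d) v
rhs = [0] * d
for i in range(d):
    coef = det(vs[:i] + [v] + vs[i + 1:])  # v replaces v_i in the bracket
    rhs = [r + coef * c for r, c in zip(rhs, vs[i])]
assert lhs == rhs
print(lhs)  # [60, 105, 90]
```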


$\bf{Added:}$ Another way to look at the above equality: the $d+1$ vectors $v_1$, $\ldots$, $v_{d+1}$ are linearly dependent, so

$$v_1 \wedge v_2 \wedge \cdots \wedge v_{d+1}=0$$ in $\wedge^{d+1}(W)$, which is the zero space. But now consider the comultiplication map

$$\wedge^{d+1}(W) \to \wedge^d (W)\otimes W$$

and observe that $0=v_1 \wedge \cdots \wedge v_{d+1}$ maps to

$$\sum_{k=1}^{d+1} (-1)^{d+1-k} (v_1 \wedge \cdots \wedge \hat v_k \wedge \cdots \wedge v_{d+1}) \otimes v_k = 0$$


Now, consider $d-1$ more vectors $w_1$, $\ldots$, $w_{d-1}$. From the above we get

$$ (v_1, v_2, \ldots, v_d)(v, w_1, \ldots, w_{d-1}) = \sum_{i=1}^d (v_1, \ldots, v, \ldots, v_d)( v_i, w_1, \ldots, w_{d-1})$$

Example: from $$(v_1, v_2)v_3 =(v_3, v_2) v_1 + (v_1, v_3) v_2 $$ we can get $$( v_1, v_2)(v_3, w) = (v_3, v_2)(v_1, w) + (v_1, v_3)(v_2, w) $$
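The $d = 2$ example is easy to test numerically. In this sketch the vectors $v_1, v_2, v_3, w$ are arbitrary sample data, not taken from the answer:

```python
def br(a, b):
    """2x2 determinant with a, b as columns."""
    return a[0] * b[1] - a[1] * b[0]

v1, v2, v3, w = [1, 4], [2, 3], [5, 1], [7, 2]
lhs = br(v1, v2) * br(v3, w)
rhs = br(v3, v2) * br(v1, w) + br(v1, v3) * br(v2, w)
assert lhs == rhs
print(lhs, rhs)  # -15 -15
```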

Now, consider the $v_i$ and the $w_j$ as columns chosen from a $d\times n$ matrix with entries in a commutative ring $k$. We get the Plücker relations.

$\bf{Added:}$ While the necessity of the conditions is at the level of first-year linear algebra, the sufficiency is a bit more involved.

Consider $\omega\in \wedge^d(V)$. Recall that we have a bilinear map (interior product) $$\wedge^d(V) \times \wedge^{d-1}(V^{\star}) \to V$$

Fixing $\omega$ in the left factor, we get a map $\wedge^{d-1}(V^{\star})\to V$. Denote the image of this map by $S(\omega)$ (a subspace of $V$ associated to $\omega$). The Plücker relations in invariant form are $$\omega \wedge w = 0$$ for all $w \in S(\omega)$.

Now, use the following general lemma, valid for every $\omega \in \wedge^d(V)$ and every $0 \ne w \in V$:

$\omega \wedge w = 0$ if and only if $\omega = \eta\wedge w$ for some $\eta \in \wedge^{d-1}(V)$.
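The lemma can be illustrated in coordinates. The sketch below (my own minimal exterior-algebra code for $n = 4$, $d = 2$; `wedge_vector` is a hypothetical helper, not a library function) wedges a 2-vector with a vector and confirms that $e_1$ divides $e_1 \wedge e_2$ but does not divide $e_1 \wedge e_2 + e_3 \wedge e_4$:

```python
import itertools

n = 4  # ambient dimension; omega lives in Λ^2(k^4)

def wedge_vector(omega, w):
    """Wedge a 2-vector omega (dict over sorted index pairs) with a vector w;
    the result is a 3-vector, a dict over sorted index triples."""
    out = {t: 0 for t in itertools.combinations(range(n), 3)}
    for (i, j), c in omega.items():
        for k in range(n):
            if k in (i, j) or c == 0 or w[k] == 0:
                continue
            key = tuple(sorted((i, j, k)))
            # moving e_k into sorted position passes over (2 - position) factors
            out[key] += (-1) ** (2 - key.index(k)) * c * w[k]
    return out

e12 = {(0, 1): 1}                    # decomposable: e1 ∧ e2
mix = {(0, 1): 1, (2, 3): 1}         # e1 ∧ e2 + e3 ∧ e4
e1 = [1, 0, 0, 0]

print(all(v == 0 for v in wedge_vector(e12, e1).values()))  # True: e1 divides e12
print(any(v != 0 for v in wedge_vector(mix, e1).values()))  # True: e1 does not divide mix
```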

The proof is now fairly simple.

Note: for every $0\ne\omega \in \wedge^{d}(V)$ we have the space of vectors $w$ such that $\omega\wedge w = 0$; denote it by $K(\omega)$ (the nucleus of $\omega$). We also have $S(\omega)$ from above. Moreover, we also have $S'(\omega)$, the smallest subspace $V'\subset V$ such that $\omega \in \wedge^d(V')$. A priori, $S'(\omega) \supset S(\omega)$. It turns out that $$S'(\omega) = S(\omega)$$ (the span of $\omega$).

Clearly, $K(\omega) \subset S(\omega)$. Moreover, if $k = \dim K(\omega)$, then $\wedge^k(K(\omega))$ is a factor of $\omega$ (the largest possible split factor).
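To illustrate the nucleus, here is a standalone sketch (all names are hypothetical; $n = 4$, $d = 2$) that computes $\dim K(\omega)$ as the kernel dimension of the linear map $w \mapsto \omega \wedge w$; it is $2$ for the decomposable $e_1 \wedge e_2$ and $0$ for the non-decomposable $e_1 \wedge e_2 + e_3 \wedge e_4$:

```python
import itertools

n = 4
triples = list(itertools.combinations(range(n), 3))

def wedge_matrix(omega):
    """Matrix (rows = Λ^3 coordinates, columns = e_1..e_n) of w ↦ omega ∧ w."""
    cols = []
    for k in range(n):
        coord = {t: 0 for t in triples}
        for (i, j), c in omega.items():
            if k in (i, j):
                continue
            key = tuple(sorted((i, j, k)))
            coord[key] += (-1) ** (2 - key.index(k)) * c
        cols.append([coord[t] for t in triples])
    return [list(row) for row in zip(*cols)]  # transpose

def rank(M):
    """Rank by Gaussian elimination (floating point, small matrices)."""
    M = [[float(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > 1e-9), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > 1e-9:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

dim_K = lambda omega: n - rank(wedge_matrix(omega))
print(dim_K({(0, 1): 1}))             # 2: K = span(e1, e2) for e1 ∧ e2
print(dim_K({(0, 1): 1, (2, 3): 1}))  # 0: e1 ∧ e2 + e3 ∧ e4 has trivial nucleus
```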