Kernel of a map in Graph Theory (toric ideals)


Suppose we have an $n$-cycle with edges $e_1 =\{x_1,x_2 \},\ e_2 = \{x_2, x_3 \},\dots,\ e_n = \{x_n,x_1\}$, and a $k$-algebra homomorphism $\phi: k[e_1,\dots, e_n] \to k[x_1,\dots, x_n]$ defined by $\phi(e_i) = x_ix_{i+1}$, where $x_{n+1} = x_1$. What is the kernel of this map if $n > 3$?

I tried the case $n = 4$ (a square) and I got $$ \{(x_1x_2)x_3 - (x_2x_3)x_1,\ (x_2x_3)x_4-(x_3x_4)x_2,\ (x_3x_4)x_1 - (x_4x_1)x_3\} $$

But then I realized that every element of this set is $0$, so I must be doing something fundamentally wrong.

I've been following this link for guidance.

Best answer

This is going to be long, so I'm omitting most proofs.

First of all, I suspect that your confusion about how I "define the product $e_{1}e_{3}$" is due to a certain abuse of notation in the statement. Namely, the symbols $e_{1},e_{2},\ldots,e_{n}$ are used both for the edges of the $n$-cycle and for $n$ commuting indeterminates which "correspond" to these edges. An expression such as $e_{1}e_{3}$ is to be understood as a product of indeterminates, not as a product of edges (whatever this would be). To make matters less confusing, it would be helpful to use different letters for the edges and for the indeterminates. I actually don't think you need to mention the edges at all (after all, the definition of $\phi$ does not use the $n$-cycle), so let me use the letters $e_{1},e_{2},\ldots,e_{n}$ for the indeterminates.

Kernels of linearizations

Now, let me comment on how to find the kernel of such a map. I start in greater generality. Let $\mathbf{k}$ be a commutative ring with unity. If $S$ is a set, then $\mathbf{k}S$ will mean the free $\mathbf{k}$-module with basis $S$; thus, the elements of $\mathbf{k}S$ are formal $\mathbf{k}$-linear combinations of elements of $S$. To every element $s$ of $S$, there is a corresponding basis vector $\left[ s\right] $ of $\mathbf{k}S$. It is customary to abbreviate this basis vector $\left[ s\right] $ as $s$, but this is an abuse of notation and thus I'll try to avoid it.

If $f:S\rightarrow T$ is a map from a set $S$ to a set $T$, then we define a $\mathbf{k}$-linear map $\mathbf{k}f:\mathbf{k}S\rightarrow\mathbf{k}T$ by the requirement that $\left( \mathbf{k}f\right) \left[ s\right] =\left[ f\left( s\right) \right] $ for every $s\in S$. We say that the map $\mathbf{k}f$ is the linearization of the map $f$.

Now, the following simple fact allows you to compute the kernel of $\mathbf{k}f$ if you understand $f$ well enough:

Proposition 1. Let $S$ and $T$ be two sets. Let $f:S\rightarrow T$ be a map. Then, the $\mathbf{k}$-module $\operatorname*{Ker}\left( \mathbf{k} f\right) $ is spanned by the elements $\left[ s\right] -\left[ s^{\prime }\right] $ for all $\left( s,s^{\prime}\right) \in S\times S$ satisfying $f\left( s\right) =f\left( s^{\prime}\right) $.

Of course, it is clear that if $\left( s,s^{\prime}\right) \in S\times S$ satisfies $f\left( s\right) =f\left( s^{\prime}\right) $, then $\left[ s\right] -\left[ s^{\prime}\right] \in\operatorname*{Ker}\left( \mathbf{k}f\right) $ (because $\mathbf{k}f$ sends both $\left[ s\right] $ and $\left[ s^{\prime}\right] $ to the same image $\left[ f\left( s\right) \right] =\left[ f\left( s^{\prime}\right) \right] $). Proposition 1 says that $\operatorname*{Ker}\left( \mathbf{k}f\right) $ is spanned by such elements. The proof is simple, and I'm leaving it to you.

[Hints to the proof of Proposition 1: Let $v \in \operatorname*{Ker}\left( \mathbf{k} f\right)$. Write $v$ as $\sum_{s \in S} v_s \left[s\right]$, where $v_s \in \mathbf{k}$, and observe that $\left(\mathbf{k} f\right)\left(v\right) = 0$ entails that $\sum_{s \in S;\ f\left(s\right) = t} v_s = 0$ for each $t \in T$. Say that a $t \in T$ is alive if there exists some $s \in S$ satisfying $v_s \neq 0$ and $f\left(s\right) = t$. There are only finitely many alive $t \in T$. For each alive $t \in T$, pick some $s \in S$ satisfying $f\left(s\right) = t$, and denote it by $r\left(t\right)$. Now, show that $v = \sum_{s \in S;\ v_s \neq 0} v_s \left(\left[s\right] - \left[r\left(f\left(s\right)\right)\right]\right)$, and argue that this equality represents $v$ as a $\mathbf{k}$-linear combination of elements $\left[ s\right] -\left[ s^{\prime }\right] $ with $\left( s,s^{\prime}\right) \in S\times S$ satisfying $f\left( s\right) =f\left( s^{\prime}\right) $.]
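Proposition 1 can be sanity-checked on a tiny example. The following Python sketch is illustrative only (the sets $S$, $T$ and the map $f$ are made up, not taken from the text): over $\mathbb{Q}$, we write $\mathbf{k}f$ as the matrix whose columns are indexed by $S$ and rows by $T$, and observe that the fiber difference $\left[s_1\right]-\left[s_2\right]$ lies in its kernel.

```python
# A tiny check of Proposition 1 (hypothetical example, not from the text):
# S = {s1, s2, s3}, T = {t1, t2}, f(s1) = f(s2) = t1, f(s3) = t2.
S = ["s1", "s2", "s3"]
T = ["t1", "t2"]
f = {"s1": "t1", "s2": "t1", "s3": "t2"}

# The matrix of kf over Q: rows indexed by T, columns by S.
M = [[1 if f[s] == t else 0 for s in S] for t in T]

def apply(M, v):
    """Apply the matrix M to the coordinate vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# [s1] - [s2] lies in Ker(kf), as Proposition 1 predicts:
assert apply(M, [1, -1, 0]) == [0, 0]
# Conversely, any kernel vector (v1, v2, v3) satisfies v1 + v2 = 0 and
# v3 = 0 (the fiber sums vanish), so it is a multiple of [s1] - [s2].
```

Here the kernel is one-dimensional because $f$ glues exactly one pair of elements; in general the fiber sums $\sum_{f(s)=t} v_s = 0$ are precisely the hint's equations.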

When $S$ and $T$ have more structure, then more can be said. When $S$ is a monoid, the $\mathbf{k}$-module $\mathbf{k}S$ becomes a $\mathbf{k}$-algebra (not necessarily commutative) in the "obvious" way: its multiplication is given by the rule that $\left[ s_{1}\right] \left[ s_{2}\right] =\left[ s_{1}s_{2}\right] $ for any $s_{1},s_{2}\in S$. (And by the requirement that it be $\mathbf{k} $-bilinear.) The $\mathbf{k}$-algebra $\mathbf{k}S$ is called the monoid algebra of $S$ (over $\mathbf{k}$). When $S$ is a commutative monoid, then this $\mathbf{k}$-algebra $\mathbf{k}S$ is commutative.

Monomials and polynomials

Let me now go back to the basics and clarify the notions of monomials and polynomials. You probably know most of this, but I figure it doesn't hurt to repeat it, in order to completely stamp out your confusion about products of $e_{i}$'s.

What is a monomial? What is a polynomial?

For any set $U$, we can define a certain commutative monoid $\operatorname*{Mon}U$, which consists of the monomials over $U$. Informally speaking, a monomial is a formal product of finitely many elements of $U$, with no regard for their order. For instance, if $U=\left\{ x,y,z\right\} $, then $xxy$, $xyx$ and $yz$ are three monomials over $U$, although the first two of them are identical. (Also, the empty product is a monomial.) There are several ways to define a monomial over $U$ formally:

  • A word over $U$ is defined to be a finite list of elements of $U$. Given two such words $p$ and $q$, we write $p\sim q$ if the word $p$ is a permutation of $q$. Then, $\sim$ is an equivalence relation on the set of all words over $U$. Its equivalence classes are called the monomials over $U$.

  • A monomial over $U$ is defined to be a family $\left( a_{u}\right) _{u\in U}\in\mathbb{N}^{U}$ of nonnegative integers such that all but finitely many $u\in U$ satisfy $a_{u}=0$.

  • A monomial over $U$ is defined to be a finite multiset of elements of $U$.

  • A monomial over $U$ is defined to be a "formal $\mathbb{N}$-linear combination of the elements of $U$, written multiplicatively", i.e., really an element of $\mathbb{N}U$. (I am not going into details on this.)

The four definitions are equivalent. It is up to you which one you prefer. Anyway, to every $v\in U$ corresponds a certain special monomial over $U$, which we will identify with $v$. (If we define monomials as equivalence classes of words, then this monomial is the equivalence class of the one-element word $\left( v\right) $. If we define monomials as families $\left( a_{u}\right) _{u\in U}$, then this monomial is the family $\left( \delta_{u,v}\right) _{u\in U}$, where $\delta_{u,v}= \begin{cases} 1, & \text{if }u=v;\\ 0, & \text{if }u\neq v \end{cases} $. If we define monomials as finite multisets, then this monomial is the multiset consisting of the element $v$, appearing only once.) Thus, $U\subseteq\operatorname*{Mon}U$. There is also a monomial called $1$, which is in some sense the "empty" monomial. (If we define monomials as equivalence classes of words, then this monomial is the equivalence class of the empty word $\left( {}\right) $. With the other definitions, it is the family $\left( 0\right) _{u\in U}$, or the empty multiset.)

Moreover, we can define a product operation on $\operatorname*{Mon}U$, which makes $\operatorname*{Mon}U$ a monoid (with neutral element $1$). (If we define monomials as equivalence classes of words, then its product is given by the rule that the equivalence class of $\left( u_{1},u_{2},\ldots ,u_{a}\right) $ times the equivalence class of $\left( v_{1},v_{2} ,\ldots,v_{b}\right) $ is the equivalence class of $\left( u_{1} ,u_{2},\ldots,u_{a},v_{1},v_{2},\ldots,v_{b}\right) $. If we define monomials as families $\left( a_{u}\right) _{u\in U}$, then the product is given by the rule $\left( a_{u}\right) _{u\in U}\cdot\left( b_{u}\right) _{u\in U}=\left( a_{u}+b_{u}\right) _{u\in U}$. If we define monomials as multisets, then the product is the disjoint union of multisets -- although this is more a definition of disjoint union than a definition of the product.)
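The second definition (exponent families, with product given by componentwise addition) is easy to model directly. Here is a minimal Python sketch using `collections.Counter` as a stand-in for a finitely supported family $\left( a_{u}\right) _{u\in U}$; the helper names are my own choices:

```python
from collections import Counter

# A monomial over U as a finitely supported exponent family: a Counter
# mapping each letter u to its exponent a_u (absent letters have a_u = 0).
def mon(*letters):
    return Counter(letters)        # e.g. mon('x', 'x', 'y') is x^2 y

def product(m1, m2):
    # (a_u)_{u in U} * (b_u)_{u in U} = (a_u + b_u)_{u in U}
    return m1 + m2

one = Counter()                    # the "empty" monomial 1

# xxy and xyx are the same monomial (order is irrelevant):
assert mon('x', 'x', 'y') == mon('x', 'y', 'x')
# The product concatenates exponents:
assert product(mon('x', 'x', 'y'), mon('y', 'z')) == mon('x', 'x', 'y', 'y', 'z')
# 1 is the neutral element of the monoid Mon U:
assert product(one, mon('x')) == mon('x')
```

Commutativity and associativity of this product are inherited from addition in $\mathbb{N}$, which is exactly why $\operatorname*{Mon}U$ is a commutative monoid.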

Now, $\mathbf{k}\left( \operatorname*{Mon}U\right) $ is a commutative $\mathbf{k}$-algebra (since $\operatorname*{Mon}U$ is a monoid). It is called the polynomial ring over $\mathbf{k}$ with indeterminates in the set $U$, and its elements are called polynomials in $U$ (over $\mathbf{k}$). This is probably the quickest definition of a multivariate polynomial ring.

You need to be careful with notations, so as to avoid ambiguities. For instance, if there already is some multiplication defined on the set $U$ (for example, $U$ can be $\mathbb{N}$), then our identification of $U$ with a subset of $\operatorname*{Mon}U$ is a bad idea, and we should instead denote the monomial corresponding to a given $v\in U$ by something like $x_{v}$ (instead of $v$). For instance, if $U=\left\{ 1,2,\ldots,n\right\} $ for some nonnegative integer $n$, then it is usual to denote the monomial corresponding to a $v\in\left\{ 1,2,\ldots,n\right\} $ by $x_{v}$. In this case, $\mathbf{k}\left( \operatorname*{Mon}U\right) $ is the polynomial ring in the $n$ indeterminates $x_{1},x_{2},\ldots,x_{n}$ over $\mathbf{k}$; it is denoted by $\mathbf{k}\left[ x_{1},x_{2},\ldots,x_{n}\right] $.

Morphisms from free monoids

Now, let $U$ be a finite set, and let $A$ be a commutative monoid. Let $g:U\rightarrow A$ be some map. Then, we can define a map $\widetilde{g}:\operatorname*{Mon} U\rightarrow A$ which takes any monomial $u_{1}u_{2}\cdots u_{k}$ over $U$ to the product $g\left( u_{1}\right) g\left( u_{2}\right) \cdots g\left( u_{k}\right) $ in $A$ (where $u_{1},u_{2},\ldots,u_{k}\in U$). This map $\widetilde{g}$ is a monoid homomorphism (and, in fact, is the only monoid homomorphism from $\operatorname{Mon} U$ to $A$ that extends the map $g$). Hence, it is easy to check that $\mathbf{k} \widetilde{g}$ is a $\mathbf{k}$-algebra homomorphism $\mathbf{k}\left( \operatorname*{Mon} U\right) \rightarrow\mathbf{k} A$. How to compute the kernel of $\mathbf{k} \widetilde{g}$?

Proposition 1 (applied to $S=\operatorname*{Mon}U$, $T=A$ and $f=\widetilde{g}$) shows that

(1) the $\mathbf{k}$-module $\operatorname*{Ker}\left( \mathbf{k} \widetilde{g} \right) $ is spanned by the elements $\left[ s\right] -\left[ s^{\prime}\right] $ for all $\left( s,s^{\prime}\right) \in\left( \operatorname*{Mon}U\right) \times\left( \operatorname*{Mon} U\right) $ satisfying $\widetilde{g} \left( s\right) =\widetilde{g} \left( s^{\prime}\right) $.

This is not yet a very good answer, because there often will be infinitely many such elements.

Your question

Now let me get to your question. Fix an integer $n\geq2$ (I don't know why you want to require $n>3$). Let $\phi$ be the $\mathbf{k}$-algebra homomorphism $\mathbf{k}\left[ e_{1},e_{2},\ldots,e_{n}\right] \rightarrow\mathbf{k} \left[ x_{1},x_{2},\ldots,x_{n}\right] $ (between two polynomial rings) which sends every $e_{i}$ to $x_{i}x_{i+1}$ (where $x_{n+1}$ means $x_{1}$). We can view $\mathbf{k}\left[ e_{1},e_{2},\ldots,e_{n}\right] $ as $\mathbf{k}\left( \operatorname*{Mon}\left\{ e_{1},e_{2},\ldots ,e_{n}\right\} \right) $, where $e_{1},e_{2},\ldots,e_{n}$ are $n$ distinct symbols; similarly we can view $\mathbf{k}\left[ x_{1},x_{2},\ldots ,x_{n}\right] $ as $\mathbf{k}\left( \operatorname*{Mon}\left\{ x_{1} ,x_{2},\ldots,x_{n}\right\} \right) $. Then, we can define a map $g:\left\{ e_{1},e_{2},\ldots,e_{n}\right\} \rightarrow \operatorname{Mon} \left\{ x_{1},x_{2},\ldots ,x_{n}\right\} $ by

$g\left( e_{i}\right) =x_{i}x_{i+1}$ for every $i\in\left\{ 1,2,\ldots ,n\right\} $.

(Note that $g$ is a map of finite sets, not an algebra homomorphism!)

As above, we extend $g$ to a monoid homomorphism $\widetilde{g} : \operatorname{Mon} \left\{e_{1},e_{2},\ldots,e_{n}\right\} \rightarrow \operatorname{Mon} \left\{ x_{1},x_{2},\ldots ,x_{n}\right\} $. Explicitly, this $\widetilde{g}$ sends any monomial $e_{i_1} e_{i_2} \cdots e_{i_k}$ to $g\left(e_{i_1}\right) g\left(e_{i_{2}}\right) \cdots g\left(e_{i_k}\right) = \left(x_{i_1} x_{i_1 + 1}\right) \left(x_{i_2} x_{i_2 + 1}\right) \cdots \left(x_{i_k} x_{i_k + 1}\right)$.
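This $\widetilde{g}$ can be sketched in a few lines of Python, again modelling monomials as `Counter` exponent families. The string names `'e1'`, `'x1'`, etc. are my own illustrative encoding, and I take $n=4$ for concreteness:

```python
from collections import Counter

# g on the generators of the 4-cycle: g(e_i) = x_i x_{i+1}, indices mod n.
n = 4
g = {f"e{i}": Counter([f"x{i}", f"x{i % n + 1}"]) for i in range(1, n + 1)}
# e.g. g['e4'] is the monomial x4 * x1

def g_tilde(m):
    """Extend g multiplicatively: a monomial in the e_i (a Counter) maps
    to the product of the images of its letters, with multiplicity."""
    out = Counter()
    for e, exp in m.items():
        for _ in range(exp):
            out += g[e]            # monomial product = add exponents
    return out

# g~(e1 e3) = (x1 x2)(x3 x4) and g~(e2 e4) = (x2 x3)(x4 x1) coincide,
# both being x1 x2 x3 x4 -- a first glimpse of a nontrivial kernel element.
assert g_tilde(Counter(["e1", "e3"])) == g_tilde(Counter(["e2", "e4"]))
```

The collision found here is exactly the pair $\left(s,s'\right) = \left(e_1e_3,\ e_2e_4\right)$ that will generate the kernel in the even case below.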

Now, it is easy to see that $\phi=\mathbf{k} \widetilde{g}$ (because both maps $\phi$ and $\mathbf{k} \widetilde{g}$ are $\mathbf{k}$-algebra homomorphisms $\mathbf{k}\left[ e_{1},e_{2},\ldots,e_{n}\right] \rightarrow\mathbf{k} \left[ x_{1},x_{2},\ldots,x_{n}\right] $, and both of them send the generators $e_{i}$ to the same images $x_{i}x_{i+1}$). Hence, $\operatorname*{Ker}\phi=\operatorname*{Ker}\left( \mathbf{k} \widetilde{g} \right) $ can be computed using (1).

In order to make use of this, we need to figure out which $\left( s,s^{\prime}\right) \in\left( \operatorname*{Mon}\left\{ e_{1},e_{2} ,\ldots,e_{n}\right\} \right) \times\left( \operatorname*{Mon}\left\{ e_{1},e_{2},\ldots,e_{n}\right\} \right) $ satisfy $\widetilde{g} \left( s\right) =\widetilde{g} \left( s^{\prime}\right) $. So let $s=e_{1}^{a_{1}}e_{2}^{a_{2} }\cdots e_{n}^{a_{n}}$ and $s^{\prime}=e_{1}^{b_{1}}e_{2}^{b_{2}}\cdots e_{n}^{b_{n}}$ for some nonnegative integers $a_{1},a_{2},\ldots,a_{n}$ and $b_{1},b_{2},\ldots,b_{n}$. Then, $\widetilde{g} \left( s\right) =x_{1}^{a_{n}+a_{1}}x_{2}^{a_{1}+a_{2}}\cdots x_{n} ^{a_{n-1}+a_{n}}$ and $\widetilde{g} \left( s^{\prime }\right) =x_{1}^{b_{n}+b_{1}}x_{2}^{b_{1}+b_{2}}\cdots x_{n}^{b_{n-1}+b_{n}} $. Hence, $\widetilde{g} \left( s\right) = \widetilde{g} \left( s^{\prime}\right) $ is equivalent to $x_{1}^{a_{n}+a_{1}}x_{2}^{a_{1}+a_{2}}\cdots x_{n}^{a_{n-1}+a_{n}} =x_{1}^{b_{n}+b_{1}}x_{2}^{b_{1}+b_{2}}\cdots x_{n}^{b_{n-1}+b_{n}}$. The latter equality, in turn, is equivalent to the system of equations

(2) $\left\{ \begin{array}[c]{c} a_{n}+a_{1}=b_{n}+b_{1};\\ a_{1}+a_{2}=b_{1}+b_{2};\\ \vdots\\ a_{n-1}+a_{n}=b_{n-1}+b_{n} \end{array} \right. $.

It remains to solve the system (2). Now, we are almost in the realm of linear algebra, except that we are looking for solutions in nonnegative integers.

The answer shall depend on whether $n$ is even or odd, so we will treat these two cases separately.
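Before doing the two cases by hand, the dichotomy can be seen computationally. In the sketch below (function and variable names are my own), I use the observation that once $a_1 = b_1 + c$ is chosen, the first $n-1$ equations of (2) force $a_i = b_i + (-1)^{i+1} c$; the wrap-around equation $a_n + a_1 = b_n + b_1$ then holds for every $c$ when $n$ is even, but forces $c = 0$ when $n$ is odd:

```python
def solutions(b, c):
    """Candidate solution of system (2) for given b and parameter c:
    a_1 = b_1 + c, and the recurrence a_{i+1} = b_i + b_{i+1} - a_i
    forces the alternating pattern a_i = b_i + (-1)^(i+1) c.
    Returns (a, ok) where ok records whether all n cyclic equations hold."""
    n = len(b)
    a = [b[i] + (-1) ** i * c for i in range(n)]   # 0-indexed: a[0] = b[0] + c
    ok = all(a[i] + a[(i + 1) % n] == b[i] + b[(i + 1) % n] for i in range(n))
    return a, ok

# Odd n: only c = 0 survives the wrap-around equation, so s = s'.
assert not solutions([2, 0, 5], c=1)[1]
assert solutions([2, 0, 5], c=0)[1]

# Even n: every c works, giving (a) = (b) + (c, -c, ..., c, -c).
assert solutions([1, 2, 3, 4], c=1) == ([2, 1, 4, 3], True)
```

This is precisely the case distinction carried out in the next two paragraphs.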

Let us first assume that $n$ is odd. Then, simple linear algebra shows that (2) is equivalent to $\left( a_{1},a_{2},\ldots,a_{n}\right) =\left( b_{1},b_{2},\ldots,b_{n}\right) $ (in fact, if we treat (2) as a system of equations in $a_{1},a_{2},\ldots,a_{n}$ for fixed $b_{1},b_{2},\ldots ,b_{n}$, then this system has a unique solution, and it is $\left( a_{1},a_{2},\ldots,a_{n}\right) =\left( b_{1},b_{2},\ldots,b_{n}\right) $). Of course, $\left( a_{1},a_{2},\ldots,a_{n}\right) =\left( b_{1} ,b_{2},\ldots,b_{n}\right) $ means that $s=s^{\prime}$. Hence, if $n$ is odd, then the only $\left( s,s^{\prime}\right) \in\left( \operatorname*{Mon} \left\{ e_{1},e_{2},\ldots,e_{n}\right\} \right) \times\left( \operatorname*{Mon}\left\{ e_{1},e_{2},\ldots,e_{n}\right\} \right) $ satisfying $\widetilde{g} \left( s\right) = \widetilde{g} \left( s^{\prime}\right) $ are those that satisfy $s=s^{\prime}$. Consequently, (1) shows that the $\mathbf{k} $-module $\operatorname*{Ker}\left( \mathbf{k}\widetilde{g} \right) $ is spanned by the elements $\left[ s\right] -\left[ s^{\prime}\right] $ for all $\left( s,s^{\prime}\right) $ satisfying $s=s^{\prime}$. These elements, of course, are $0$, and therefore the $\mathbf{k}$-module $\operatorname*{Ker}\left( \mathbf{k} \widetilde{g} \right) $ is spanned by $0$. In other words, $\operatorname*{Ker}\left( \mathbf{k} \widetilde{g} \right) =0$. Hence, $\operatorname*{Ker}\phi=\operatorname*{Ker}\left( \mathbf{k}\widetilde{g} \right) =0$. So this is our answer for the case when $n$ is odd.

Let us now assume that $n$ is even. Then, $n=2m$ for some $m\in\mathbb{N}$. Consider this $m$. Again using linear algebra, we can see that (2) is equivalent to having

(3) $\left( a_{1},a_{2},\ldots,a_{n}\right) =\left( b_{1},b_{2} ,\ldots,b_{n}\right) +\left( c,-c,c,-c,\ldots,c,-c\right) $ for some $c\in\mathbb{Z}$

(where the addition is usual addition of vectors). But in terms of the monomials $s$ and $s^{\prime}$, the condition (3) rewrites as

(4) $s\cdot\left( e_{2}e_{4}\cdots e_{2m}\right) ^{c}=s^{\prime} \cdot\left( e_{1}e_{3}\cdots e_{2m-1}\right) ^{c}$ for some $c\in\mathbb{Z}$.

(Here, we are slightly abusing notation and allowing monomials with negative exponents, i.e., Laurent monomials.)

Consequently, (1) yields that the $\mathbf{k}$-module $\operatorname*{Ker} \phi=\operatorname*{Ker}\left( \mathbf{k} \widetilde{g} \right) $ is spanned by the elements $\left[ s\right] -\left[ s^{\prime }\right] $ for all $\left( s,s^{\prime}\right) $ satisfying (4). In this situation, it is harmless to identify each monomial $s$ with the element $\left[ s\right] $ of the polynomial ring; thus, we do so, and conclude that

(5) the $\mathbf{k}$-module $\operatorname*{Ker}\phi$ is spanned by the elements $s-s^{\prime}$ for all $\left( s,s^{\prime}\right) $ satisfying (4).

This is quite a lot of elements.

But now, let me claim that

(6) $\operatorname*{Ker}\phi$ is the ideal of $\mathbf{k}\left[ e_{1},e_{2},\ldots,e_{n}\right] $ generated by the element $e_{1}e_{3}\cdots e_{2m-1}-e_{2}e_{4}\cdots e_{2m}$.

Indeed, it is clear that the element $e_{1}e_{3}\cdots e_{2m-1}-e_{2} e_{4}\cdots e_{2m}$ belongs to $\operatorname*{Ker}\phi$. Hence, in order to prove (6), it suffices to show that every element of $\operatorname*{Ker} \phi$ is a multiple of $e_{1}e_{3}\cdots e_{2m-1}-e_{2}e_{4}\cdots e_{2m}$. Because of (5), we only need to check that every element of the form $s-s^{\prime}$, with $\left( s,s^{\prime}\right) $ satisfying (4), is a multiple of $e_{1}e_{3}\cdots e_{2m-1}-e_{2}e_{4}\cdots e_{2m}$ (because (5) says that every element of $\operatorname*{Ker}\phi$ is a $\mathbf{k} $-linear combination of elements of this form). So let $\left( s,s^{\prime }\right) $ satisfy (4); we then need to prove that $s-s^{\prime}$ is a multiple of $e_{1}e_{3}\cdots e_{2m-1}-e_{2}e_{4}\cdots e_{2m}$.

Since $\left( s,s^{\prime}\right) $ satisfies (4), it is easy to see that there exists some monomial $t\in\operatorname*{Mon}\left\{ e_{1} ,e_{2},\ldots,e_{n}\right\} $ and some $d\in\mathbb{N}$ (actually, $d=\left\vert c\right\vert $) such that

either $s=t\cdot\left( e_{2}e_{4}\cdots e_{2m}\right) ^{d}$ and $s^{\prime}=t\cdot\left( e_{1}e_{3}\cdots e_{2m-1}\right) ^{d}$,

or $s=t\cdot\left( e_{1}e_{3}\cdots e_{2m-1}\right) ^{d}$ and $s^{\prime}=t\cdot\left( e_{2}e_{4}\cdots e_{2m}\right) ^{d}$

(indeed, the first case occurs when $c\leq0$, and the second when $c>0$). In either of these two cases, we have

$s-s^{\prime}=\pm t\cdot\left( \left( e_{1}e_{3}\cdots e_{2m-1}\right) ^{d}-\left( e_{2}e_{4}\cdots e_{2m}\right) ^{d}\right) $.

Hence, $s-s^{\prime}$ is a multiple of $\left( e_{1}e_{3}\cdots e_{2m-1}\right) ^{d}-\left( e_{2}e_{4}\cdots e_{2m}\right) ^{d}$. But $\left( e_{1}e_{3}\cdots e_{2m-1}\right) ^{d}-\left( e_{2}e_{4}\cdots e_{2m}\right) ^{d}$, in turn, is a multiple of $e_{1}e_{3}\cdots e_{2m-1}-e_{2}e_{4}\cdots e_{2m}$. Therefore, $s-s^{\prime}$ is a multiple of $e_{1}e_{3}\cdots e_{2m-1}-e_{2}e_{4}\cdots e_{2m}$. As we said above, this proves (6).
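As a closing sanity check, one direction of (6) (that $e_1e_3\cdots e_{2m-1}-e_2e_4\cdots e_{2m}$ and its powers lie in $\operatorname*{Ker}\phi$) can be verified by evaluating at integer points; this of course does not prove that the single element generates the whole kernel, which is what the argument above establishes. A sketch for $n = 4$ (so $m = 2$):

```python
import itertools

# Under phi, each e_i becomes x_i * x_{i+1} (indices mod n). Evaluating at
# many integer points detects the polynomial identity
# phi(e1 e3 - e2 e4) = x1 x2 x3 x4 - x2 x3 x4 x1 = 0.
n = 4
for xs in itertools.product(range(1, 4), repeat=n):
    e = [xs[i] * xs[(i + 1) % n] for i in range(n)]  # e_i = x_i * x_{i+1}
    assert e[0] * e[2] - e[1] * e[3] == 0

# Likewise (e1 e3)^d - (e2 e4)^d vanishes under phi for every d, consistent
# with E^d - F^d being a multiple of E - F in the last step of the proof.
for xs in itertools.product(range(1, 4), repeat=n):
    e = [xs[i] * xs[(i + 1) % n] for i in range(n)]
    for d in range(4):
        assert e[0] ** d * e[2] ** d - e[1] ** d * e[3] ** d == 0
```

For larger even $n$ the same check applies to $e_1e_3\cdots e_{n-1} - e_2e_4\cdots e_n$, since both products evaluate to $x_1x_2\cdots x_n$.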