Generators of the ideal corresponding to the $d$-uple embedding

I am trying to mimic the interesting proof from Hartshorne Problem 1.2.14 on the Segre embedding to give a generating set of the ideal corresponding to the $d$-uple embedding:

We fix the notation $\Delta = \big\{ (\nu_0, \dots, \nu_n) \in {\mathbb{N}}^{n+1}_0 ~:~ \nu_0 + \dots + \nu_n = d \big\}$. The map in question is the ring homomorphism $\theta : k[y_\nu : \nu \in \Delta] \rightarrow k[x_0, \dotsc, x_n]$ given by $$ \theta ( y_\nu ) := x^{\nu_0}_0 \cdots x^{\nu_n}_n $$ and we know that ${\mathrm{ker}}~\theta$ is the ideal of the $d$-uple embedding, i.e., $\rho_d({\mathbb{P}}^n) = Z({\mathrm{ker}}~\theta)$ (cf. Hartshorne, Exercise I.2.12).
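For concreteness, in the smallest nontrivial case $n = 1$, $d = 2$ we have $\Delta = \{(2,0), (1,1), (0,2)\}$ and

$$\theta(y_{(2,0)}) = x_0^2, \qquad \theta(y_{(1,1)}) = x_0 x_1, \qquad \theta(y_{(0,2)}) = x_1^2,$$

so $\rho_2({\mathbb{P}}^1)$ is a conic in ${\mathbb{P}}^2$.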

Now let $W \leq k[y_\nu : \nu \in \Delta]$ denote the ideal $$ W := \Big\langle y_{\tau_1} y_{\tau_2} - y_{\tau^{\prime}_1} y_{\tau^{\prime}_2} ~:~ \tau_1, \tau_2, \tau^{\prime}_1, \tau^{\prime}_2 \in \Delta, \tau_1 + \tau_2 = \tau^{\prime}_1 + \tau^{\prime}_2 \Big\rangle $$
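In the example above ($n = 1$, $d = 2$), the only coincidence $\tau_1 + \tau_2 = \tau^{\prime}_1 + \tau^{\prime}_2$ with $\{\tau_1, \tau_2\} \neq \{\tau^{\prime}_1, \tau^{\prime}_2\}$ is $(2,0) + (0,2) = (1,1) + (1,1)$, so $W$ is generated by the single binomial $y_{(2,0)} y_{(0,2)} - y^2_{(1,1)}$, which is indeed the ideal of the conic; that is, $W = {\mathrm{ker}}~\theta$ in this small case.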

Clearly, $W \subseteq {\mathrm{ker}}~\theta$. Using the lemma in the above link, it suffices to produce a $k$-subspace $T \subseteq k[y_\nu : \nu \in \Delta]$ satisfying $$ T + W = k[y_\nu : \nu \in \Delta], \hspace{.2in} \theta \lvert_{T} ~\text{is injective}. $$
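(To recall why these two conditions suffice: if $f \in {\mathrm{ker}}~\theta$, write $f = t + w$ with $t \in T$, $w \in W$; since $W \subseteq {\mathrm{ker}}~\theta$ we get $\theta(t) = 0 = \theta(0)$, and injectivity of $\theta \lvert_T$ then forces $t = 0$, so $f = w \in W$.)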

After some thought, the following subspace seems a reasonable candidate for $T$:

First define the equivalence relation on $\Delta \times \Delta$ given by $(\tau_1, \tau_2) \sim (\tau^{\prime}_1, \tau^{\prime}_2)$ if $\tau_1 + \tau_2 = \tau^{\prime}_1 + \tau^{\prime}_2$. Let $\Omega \subseteq \Delta \times \Delta$ be a subset with a unique representative from each class of $\sim$.

Now let $T$ be the $k$-span of the monomials $y^{\alpha_1}_{\tau_1} \dotsc y^{\alpha_r}_{\tau_r}$ (with distinct elements $\tau_1, \dotsc, \tau_r \in \Delta$ and exponents $\alpha_1, \dotsc, \alpha_r \geq 1$) such that $(\tau_i, \tau_j) \in \Omega$ for each $i \neq j$. Clearly, $\theta$ is injective when restricted to $T$.

I got lost at the following step: each monomial $M$ of $k[y_\nu : \nu \in \Delta]$ satisfies $M \equiv b$ mod $W$ for some $b \in T$. Did I do anything wrong?


Solution 1:

Here is an outline of a proof. (Thanks to Darij Grinberg for the proof.)

We relabel the variables $x_0, \dotsc, x_n$ as $x_1, \dotsc, x_{n+1}$ to avoid confusing notation. Here ${\mathbb{N}}_0 = {\mathbb{N}} \cup \{ 0 \}$.

${\textbf{ Combinatorial background:}}$

${\textbf{Definition 1.}}$ Given two integer tuples $\alpha = (\alpha_1, \dotsc, \alpha_{n+1}) \in {\mathbb{N}}^{n+1}_0$ and $\beta = (\beta_1, \dotsc, \beta_m) \in {\mathbb{N}}^m_0$, we define an $(\alpha, \beta)$-${\textbf{contingency table}}$ as a matrix

$$A = \begin{pmatrix} \nu_{11} & \nu_{12} & \dotsc & \nu_{1m} \\ \nu_{21} & \nu_{22} & \dotsc & \nu_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \nu_{n+1,1} & \nu_{n+1,2} & \dotsc & \nu_{n+1,m} \end{pmatrix}$$

of size $(n+1) \times m$ with entries from ${\mathbb{N}}_0$ that satisfy the properties:

(i) the $i$-th row sum is $\nu_{i1} + \nu_{i2} + \dotsc + \nu_{im} = \alpha_i$, the $i$-th entry of $\alpha$;

(ii) the $j$-th column sum is $\nu_{1j} + \nu_{2j} + \dotsc + \nu_{n+1,j} = \beta_j$, the $j$-th entry of $\beta$.

Let ${\mathrm{CT}}(\alpha, \beta)$ denote the set of all $(\alpha, \beta)$-contingency tables.
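As a sanity check of Definition 1, here is a small Python sketch; the function name and the list-of-rows representation of a table are mine, not from the text.

```python
def is_contingency_table(A, alpha, beta):
    """Return True if the matrix A (a list of rows of nonnegative integers)
    has row sums alpha and column sums beta, i.e. A lies in CT(alpha, beta)."""
    return (all(entry >= 0 for row in A for entry in row)
            and [sum(row) for row in A] == list(alpha)
            and [sum(col) for col in zip(*A)] == list(beta))

# Example: row sums (2, 2) and column sums (2, 2).
print(is_contingency_table([[2, 0], [0, 2]], (2, 2), (2, 2)))  # True
print(is_contingency_table([[2, 1], [0, 2]], (2, 2), (2, 2)))  # False
```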

${\textbf{Definition 2.}}$ Let $\alpha = (\alpha_1, \dotsc, \alpha_{n+1}) \in {\mathbb{N}}^{n+1}_0$, $\beta = (\beta_1, \dotsc, \beta_m) \in {\mathbb{N}}^m_0$ and let $A$ be an $(\alpha, \beta)$-contingency table. For $1 \leq i \leq n$, $1 \leq j \leq m-1$ and integers $1 \leq s \leq n+1-i$, $1 \leq t \leq m-j$, we define the ${\textbf{positive}}$ and ${\textbf{negative swap}}$, denoted by $S(i,j;s,t;+)$ and $S(i,j;s,t;-)$ respectively, as the operations on $A$ which change the $2 \times 2$ submatrix of $A$ formed by rows $i, i+s$ and columns $j, j+t$ as

$$\begin{pmatrix} \nu_{ij} & \nu_{i,j+t} \\ \nu_{i+s,j} & \nu_{i+s, j+t} \end{pmatrix} \xrightarrow{S(i,j;s,t;+)} \begin{pmatrix} \nu_{ij} + 1 & \nu_{i,j+t} - 1 \\ \nu_{i+s,j} - 1 & \nu_{i+s, j+t} + 1 \end{pmatrix}$$

and

$$\begin{pmatrix} \nu_{ij} & \nu_{i,j+t} \\ \nu_{i+s,j} & \nu_{i+s, j+t} \end{pmatrix} \xrightarrow{S(i,j;s,t;-)} \begin{pmatrix} \nu_{ij} - 1 & \nu_{i,j+t} + 1 \\ \nu_{i+s,j} + 1 & \nu_{i+s, j+t} - 1 \end{pmatrix}$$

and keep the remaining entries unchanged. The entry $\nu_{ij}$ is called the ${\textbf{hook}}$ of the swap operation. For $A \in {\mathrm{CT}}(\alpha, \beta)$, a swap operation $S$ as above is said to be ${\textbf{legal}}$ if $S(A) \in {\mathrm{CT}}(\alpha, \beta)$. Since swaps preserve all row and column sums, this simply means that the entries of $S(A)$ are still in ${\mathbb{N}}_0$.
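Definition 2 and the legality condition can likewise be sketched in Python (again with names of my choosing, and 0-based indices where the text uses 1-based):

```python
def swap(A, i, j, s, t, sign):
    """Apply S(i, j; s, t; sign) of Definition 2 to a copy of A, where sign is +1 or -1.
    Only the four entries in rows i, i+s and columns j, j+t change."""
    B = [row[:] for row in A]
    B[i][j]         += sign
    B[i][j + t]     -= sign
    B[i + s][j]     -= sign
    B[i + s][j + t] += sign
    return B

def is_legal(A, i, j, s, t, sign):
    """Swaps preserve all row and column sums, so a swap is legal
    exactly when every entry of the result is still nonnegative."""
    return all(entry >= 0 for row in swap(A, i, j, s, t, sign) for entry in row)

# Example: the negative swap S(0, 0; 1, 1; -) turns [[2, 0], [0, 2]] into [[1, 1], [1, 1]].
print(swap([[2, 0], [0, 2]], 0, 0, 1, 1, -1))      # [[1, 1], [1, 1]]
print(is_legal([[1, 1], [1, 0]], 0, 0, 1, 1, -1))  # False: one entry would become -1
```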

${\textbf{Lemma 3.}}$ For any $A, B \in {\mathrm{CT}}(\alpha, \beta)$, there exists a sequence of legal swaps $S_1, \dotsc, S_r$ transforming $A$ into $B$.

${\textbf{Proof.}}$ Write $A = [\nu_{ij}]$ and $B = [\mu_{ij}]$. First suppose $\nu_{11} > \mu_{11}$. Since the row and column sums of $A$ and $B$ agree, there exist $s, t \geq 1$ such that $\nu_{1+s,1+t} \geq 1$. Indeed, if this were not the case, then every entry of $A$ outside the first row and first column would be zero, forcing $\nu_{1j} = \beta_j$ for $j \geq 2$, so the two tables would be

$$A = \begin{pmatrix} \nu_{11} & \beta_2 & \dotsc & \beta_m \\ \nu_{21} & 0 & \dotsc & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \nu_{n+1,1} & 0 & \dotsc & 0 \end{pmatrix},$$

$$B = \begin{pmatrix} \mu_{11} & \mu_{12} & \dotsc & \mu_{1m} \\ \mu_{21} & \mu_{22} & \dotsc & \mu_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \mu_{n+1,1} & \mu_{n+1,2} & \dotsc & \mu_{n+1,m} \end{pmatrix}.$$

Comparing the first row sums (both equal to $\alpha_1$) gives

$$\nu_{11} + \beta_2 + \dotsc + \beta_m = \mu_{11} + \mu_{12} + \dotsc + \mu_{1m},$$

and since $\nu_{11} > \mu_{11}$, at least one of the inequalities

$$\beta_2 < \mu_{12}, \quad \dotsc, \quad \beta_m < \mu_{1m}$$

must hold. This contradicts the fact that, from the second column onwards, the column sums of $B$ are $\beta_2, \dotsc, \beta_m$ and all entries are nonnegative. Having chosen such $s$ and $t$, we apply the swap operation $S(1,1;s,t;-)$:

$$\begin{pmatrix} \nu_{11} & \nu_{1,1+t} \\ \nu_{1+s,1} & \nu_{1+s,1+t} \end{pmatrix} \xrightarrow{S(1,1;s,t;-)} \begin{pmatrix} \nu_{11} - 1 & \nu_{1,1+t} + 1 \\ \nu_{1+s,1} + 1 & \nu_{1+s, 1+t} - 1 \end{pmatrix}$$

to reduce the hook, repeating until it becomes $\mu_{11}$ or $\nu_{1+s,1+t}$ becomes $0$, whichever happens first. In the second situation, if the hook is still larger than $\mu_{11}$, we choose another submatrix (one exists by the same argument) and continue until the hook coincides with $\mu_{11}$. The case $\nu_{11} < \mu_{11}$ is handled similarly, using positive swaps. Once we have ensured $\nu_{11} = \mu_{11}$, we apply the same procedure to the remaining entries of the first row except the last entry $\nu_{1,m}$, which then coincides automatically because the row sum is invariant under swap operations. The result now follows by induction on $n+1+m$: the swaps needed after this point do not involve the first row, so we may delete it and apply the induction hypothesis.
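For instance, with $\alpha = \beta = (2, 2)$, Lemma 3 is visible directly: the two extreme tables below are connected by two legal negative swaps,

$$\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \xrightarrow{S(1,1;1,1;-)} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \xrightarrow{S(1,1;1,1;-)} \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix}.$$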

${\textbf{Definition 4.}}$ An $(\alpha, \beta)$-contingency table $A$ is said to be in ${\textbf{normal form}}$ if it has no $2 \times 2$ submatrix (formed by rows $i < i^{\prime}$ and columns $j < j^{\prime}$) with both diagonal entries $\nu_{ij}$ and $\nu_{i^{\prime} j^{\prime}}$ positive; equivalently, if no negative swap is legal on $A$.

${\textbf{Lemma 5.}}$ Given $\alpha \in {\mathbb{N}}^{n+1}_0$ and $\beta \in {\mathbb{N}}^m_0$ with equal total sums (so that ${\mathrm{CT}}(\alpha, \beta) \neq \emptyset$), there exists a unique $A \in {\mathrm{CT}}(\alpha, \beta)$ which is in normal form.
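Lemma 5 can be made concrete as follows: starting from any table and applying legal negative swaps in any order, the process terminates, since each negative swap strictly decreases $\sum_{i,j} i \, j \, \nu_{ij}$, and the terminal table has no legal negative swap, i.e. it is in normal form; Lemma 5 says this terminal table does not depend on the choices made. Here is a small Python sketch of this procedure (the function name, the list-of-rows representation, and the 0-based indexing are mine):

```python
from itertools import combinations

def normal_form(A):
    """Apply legal negative swaps (Definition 2) to a copy of A until no 2x2 submatrix
    has both diagonal entries positive, i.e. until A is in normal form (Definition 4)."""
    A = [row[:] for row in A]
    changed = True
    while changed:
        changed = False
        for i, i2 in combinations(range(len(A)), 2):
            for j, j2 in combinations(range(len(A[0])), 2):
                if A[i][j] > 0 and A[i2][j2] > 0:   # a legal negative swap exists here
                    A[i][j] -= 1; A[i2][j2] -= 1
                    A[i][j2] += 1; A[i2][j] += 1
                    changed = True
    return A

# Both tables below lie in CT((2,2), (2,2)) and reach the same normal form,
# illustrating Lemma 5 in a tiny case.
print(normal_form([[2, 0], [0, 2]]))  # [[0, 2], [2, 0]]
print(normal_form([[1, 1], [1, 1]]))  # [[0, 2], [2, 0]]
```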

${\textbf{Back to the proof of the main result:}}$ We have $W \subseteq {\mathrm{ker}}~\theta$. A monomial $y_{\tau_1} \dotsc y_{\tau_m} \in k[y_{\nu} : \nu \in \Delta]$ with

$$\tau_j = (\tau_{1j}, \dotsc, \tau_{n+1,j}), \qquad 1 \leq j \leq m,$$

corresponds to the $(\alpha, \beta)$-contingency table $A = [\tau_{ij}]$ of size $(n+1) \times m$, where $\beta = (d, d, \dotsc, d) \in {\mathbb{N}}^m_0$ and $\alpha$ is the vector of row sums of $A$. Conversely, every $(\alpha, \beta)$-contingency table with $\beta = (d, d, \dotsc, d)$ corresponds to a monomial in $k[y_{\nu} : \nu \in \Delta]$. In addition, the contingency table representing a given monomial is unique up to a permutation of its columns.
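For example, with $n = 1$ and $d = 2$ (so $\Delta = \{(2,0), (1,1), (0,2)\}$ and $m = 2$), the monomials $y_{(2,0)} y_{(0,2)}$ and $y^2_{(1,1)}$ correspond to the tables

$$\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},$$

respectively; both have $\beta = (2, 2)$ and the same row-sum vector $\alpha = (2, 2)$.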

Now fix $\beta = (d, d, \dotsc, d)$ and let $M_1, M_2 \in k[y_{\nu} : \nu \in \Delta]$ be two monomials with contingency tables $A = [\nu_{ij}]$ and $B = [\mu_{ij}]$. Suppose $B$ is obtained from $A$ by applying the positive swap $S(i,j;s,t;+)$. Then $A$ and $B$ differ only in their $j$-th and $(j+t)$-th columns. Thus if we write

$$M_1 = y_{\nu_1} \dotsc y_{\nu_m}, \qquad M_2 = y_{\mu_1} \dotsc y_{\mu_m}$$

then $\nu_l = \mu_l$ for each $l \neq j, j+t$ and

$$\mu_j = \begin{pmatrix} \nu_{1j} \\ \vdots \\ \nu_{ij} + 1 \\ \vdots \\ \nu_{i+s,j} - 1 \\ \vdots \\ \nu_{n+1,j} \end{pmatrix}$$

$$\mu_{j+t} = \begin{pmatrix} \nu_{1,j+t} \\ \vdots \\ \nu_{i,j+t} - 1 \\ \vdots \\ \nu_{i+s,j+t} + 1 \\ \vdots \\ \nu_{n+1,j+t} \end{pmatrix}$$

Since $\mu_j + \mu_{j+t} = \nu_j + \nu_{j+t}$ (the swap only moves mass between these two columns), and the columns of $B$ still sum to $d$ so that $\mu_j, \mu_{j+t} \in \Delta$, we get $y_{\mu_j} y_{\mu_{j+t}} - y_{\nu_j} y_{\nu_{j+t}} \in W$ and hence $M_1 - M_2 \in W$. Combining this with Lemma 3 and induction on the number of swaps, it follows that if $M_1$ and $M_2$ are two monomials whose $(\alpha, \beta)$-contingency tables have the same row-sum vector $\alpha \in {\mathbb{N}}^{n+1}_0$, then $M_1 \equiv M_2$ mod $W$.
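In the running example ($n = 1$, $d = 2$), a single swap connects the two tables displayed above, and the corresponding element of $W$ is $y_{(2,0)} y_{(0,2)} - y^2_{(1,1)}$, the generator of the ideal of the conic $\rho_2({\mathbb{P}}^1)$.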

Now for every $m \in {\mathbb{N}}_0$ and every $\alpha = (\alpha_1, \dotsc, \alpha_{n+1}) \in {\mathbb{N}}^{n+1}_0$ with $\sum_{i=1}^{n+1} \alpha_i = dm$, let $U(\alpha, m)$ denote the monomial corresponding to the normal form in ${\mathrm{CT}}\big(\alpha, (d, \dotsc, d)\big)$, which is unique by Lemma 5. Let $T$ denote the $k$-linear subspace of $k[y_{\nu} : \nu \in \Delta]$ spanned by these monomials. Notice that

$$\theta \Big( U(\alpha, m) \Big) = x^{\alpha_1}_1 \dotsc x^{\alpha_{n+1}}_{n+1}$$

and the image is a monomial in $k[x_1, \dotsc, x_{n+1}]$ of degree $dm$. These images are pairwise distinct monomials of $k[x_1, \dotsc, x_{n+1}]$, hence linearly independent, and consequently $\theta \lvert_T$ is injective. Moreover, by the discussion above (every monomial is congruent mod $W$ to the monomial of the corresponding normal form), for any monomial $M \in k[y_{\nu} : \nu \in \Delta]$ there exists a monomial $M^{\prime} \in T$ such that $M \equiv M^{\prime}$ mod $W$. This proves that $k[y_{\nu} : \nu \in \Delta] = T + W$. Now, using the lemma from Hartshorne Problem 1.2.14 on the Segre embedding, it follows that $W = {\mathrm{ker}}~\theta$.