Generalize exterior algebra: vectors are nilcube instead of nilsquare

The exterior product on a ($d$-dimensional) vector space $V$ is defined to be associative and bilinear, and to make any vector square to $0$, and is otherwise unrestricted. Formally, the exterior algebra $\Lambda V$ is a quotient of the tensor algebra.

What happens if, instead, the $n$th power of any vector is $0$? Let's call this the $n$th nilpotent algebra, $N_nV$.

The $0$th nilpotent algebra is trivial, with only one element $1=a^0=0$, so $N_0V=\{0\}$, which is $0$-dimensional.

The $1$st nilpotent algebra is the scalar field, because any vector is $a^1=0$; thus $N_1V=\mathbb F$, which is $1$-dimensional.

The $2$nd nilpotent algebra, with $a^2=a\wedge a=0$, is the exterior algebra $N_2V=\Lambda V$, which is $2^d$-dimensional.

What is the $3$rd nilpotent algebra? Would $N_3V$ be $3^d$-dimensional? Would $N_nV$ be $n^d$-dimensional?

If $d=1$, so $V$ has a basis $\{a\}$, then $N_nV$ has a basis $\{1,a,a^2,\cdots,a^{n-1}\}$, so it is indeed $n^1$-dimensional.

If $d=2$, so $V$ has a basis $\{a,b\}$, then $N_0V$ has basis $\{\}$, and $N_1V$ has basis $\{1\}$, and $N_2V$ has basis $\{1,a,b,ab\}$. What is a basis for $N_3V$?

Since any vector cubes to $0$, we have

$$0=(a+b)^3=a^3+aab+aba+abb+baa+bab+bba+b^3$$

and

$$0=(a-b)^3=a^3-aab-aba+abb-baa+bab+bba-b^3.$$

Subtract these, divide by $2$, and use $b^3=0$ to get

$$0=aab+aba+baa.$$

Are there any other relations like this, not immediately obvious from the definition? Naively using only $a^3=b^3=0$, we would expect this to be a basis:

$$\{1,a,b,a^2,ab,ba,b^2,a^2b,aba,ab^2,ba^2,bab,b^2a,$$

$$a^2ba,a^2b^2,aba^2,abab,ab^2a,ba^2b,baba,bab^2,b^2a^2,b^2ab,\cdots\}$$

which would contain infinitely many terms like $a^2ba^2ba^2b$; but some terms must be removed, since they are linearly dependent on the others:

$$ba^2=-a^2b-aba$$

$$b^2a=-ab^2-bab$$

$$a(ba^2)=0-a^2ba$$

$$a(b^2a)=-a^2b^2-abab$$

$$(ba^2)b=-a^2b^2-abab$$

$$b^2a^2=a^2b^2+abab-baba$$

$$(b^2a)b=0-bab^2$$

$$\vdots$$

Will there be only finitely many independent terms left?


Solution 1:

I assume $V$ is a vector space over a field of characteristic zero. For $k\geq 0$ let $N_n^kV$ be the degree-$k$ part of $N_nV.$ Then:

$\dim N_n^kV$ is the number of length-$k$ words on the alphabet $\{1,\dots,\dim V\}$ containing no $n$ consecutive letters in non-decreasing order.

In particular, $N_nV$ is infinite-dimensional for $n\geq 3$ and $\dim V\geq 2.$ The sequence $\dim N_n^kV$ for $k=0,1,2,\dots$ (the coefficient sequence of the Hilbert series) can be computed directly from the algebraic description of $N_nV$ using Singular - I've included a script at the end that gives the following values for $\dim N_3^kV$:

\begin{align*} (\dim V=2)&&&1,2,4,4,5,4,5,4,5,4,5,\dots\\ (\dim V=3)&&&1,3,9,17,36,63,126,225,441,801,1548,\dots \end{align*}
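These values can be checked against the word-counting description by brute force. Here is a short Python sketch (not part of the argument; the helper names count_words and has_nondecreasing_run are mine) that counts the length-$k$ words on $\{1,\dots,d\}$ with no $n$ consecutive letters in non-decreasing order:

    from itertools import product

    def has_nondecreasing_run(word, n):
        # True if some n consecutive letters of word are in non-decreasing order
        return any(all(word[i + j] <= word[i + j + 1] for j in range(n - 1))
                   for i in range(len(word) - n + 1))

    def count_words(d, n, k):
        # number of length-k words on {1,...,d} with no non-decreasing run of n letters
        return sum(1 for w in product(range(1, d + 1), repeat=k)
                   if not has_nondecreasing_run(w, n))

    print([count_words(2, 3, k) for k in range(11)])  # [1, 2, 4, 4, 5, 4, 5, 4, 5, 4, 5]
    print([count_words(3, 3, k) for k in range(11)])  # [1, 3, 9, 17, 36, 63, 126, 225, 441, 801, 1548]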


To prove this formula I will use a variant of the result from "The diamond lemma for ring theory" mentioned in darij grinberg's comment, and you'll need that paper to understand what I'm writing. I won't use the language of Gröbner bases, but this does show that the obvious generators of the ideal defining $N_nV$ in the free associative algebra are in fact a (noncommutative) Gröbner basis - see for example https://arxiv.org/abs/1303.0920, in particular the "Reduce" algorithm. With a computer algebra system such as Singular one can easily verify directly that a particular generating set is a Gröbner basis.

A convenient basis for the ideal

Let $V$ have basis $x_1,\dots,x_d$ and, to avoid excessive subscripting, let $[i_1,\dots,i_m]$ denote the monomial $x_{i_1}\dots x_{i_m}.$

First I will argue that $N_nV$ can be defined by the two-sided ideal generated by the noncommutative polynomials of the form $$\sum_{\pi}[\pi(i_1,\dots,i_n)]\tag{*}$$ where the sum runs over all permutations of $n$ elements. These identities are sufficient because any defining relation $(\sum_{j=1}^d t_jx_j)^n,$ with scalars $t_j,$ can be grouped by monomials in $t,$ and each group is a scalar multiple of (*). For the converse let $(c_{i,j})$ with $0\leq i,j\leq n$ be an inverse Vandermonde matrix such that $\sum_{t=0}^{n} c_{i,t}p(t)$ is the $x^i$ coefficient of $p$ for any univariate polynomial $p(x)$ of degree at most $n.$ Then $\sum_{t\in\{0,\dots,n\}^d}c_{i_1,t_1}\dots c_{i_d,t_d}(\sum_{j=1}^d t_jx_j)^n$ is the coefficient of $t_1^{i_1}\dots t_d^{i_d}$ in $(\sum_{j=1}^d t_jx_j)^n,$ which is a scalar multiple of (*).
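For instance, with $d=2$ and $n=3$ (writing $a,b$ for $x_1,x_2$), the generator (*) for the multiset $\{1,1,2\}$ is exactly the relation found in the question, and the question's computation already exhibits it as a combination of cubes: $$aab+aba+baa=\tfrac12\big((a+b)^3-(a-b)^3\big)-b^3.$$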

Reductions

So we can work with (*) instead of the nilpotence condition. The set $S$ of reductions we will use consists of

$$[i_1,\dots,i_n]\mapsto -\sum_{\pi\neq id} [\pi(i_1,\dots,i_n)]$$

for any $i_1\leq i_2\leq \dots \leq i_n,$ where the sum is over the set of permutations of $(i_1,\dots,i_n)$; permutations that give the same result are now only counted once. For example, the monomials $x_i^n$ are reduced to zero, since in that case there are no other distinct rearrangements.
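For instance, with $n=3$ and $d=2$ (writing $a,b$ for $x_1,x_2$), the set $S$ consists of the four reductions $$a^3\mapsto 0,\qquad b^3\mapsto 0,\qquad a^2b\mapsto -(aba+ba^2),\qquad ab^2\mapsto -(bab+b^2a),$$ the last two being the question's relations $ba^2=-a^2b-aba$ and $b^2a=-ab^2-bab$ rewritten with the non-decreasing word on the left.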

A monomial $[i_1,\dots,i_k]$ is "$S$-irreducible" if this rule cannot be applied to any of its subwords, or equivalently if $i_1\dots i_k$ contains no $n$ consecutive letters in non-decreasing order. We want to use Bergman's Theorem 1.2(c) to show that these monomials give a basis of $N_n^kV.$

Example: for $n=3$ and $k\geq 3,$ writing $a,b$ instead of $x_1,x_2,$ the $S$-irreducible (normalized) words are exactly those of the form "$(a|b|\epsilon)(ba)^m(a|b|\epsilon)$", by which I mean: some number of repetitions of $ba,$ optionally with an $a$ or $b$ before and optionally with an $a$ or $b$ after. This explains the dimension counts and shows that the $4,5$ periodic pattern shown above does continue.
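As a sanity check on this description (again not part of the proof; the brute-force helper and the regular expression are mine), one can verify that for $n=3$, $d=2$ the $S$-irreducible words of each length $k\geq 3$ are exactly the words matching the pattern above:

    import re
    from itertools import product

    # S-irreducible for n = 3 over a < b: no three consecutive letters in non-decreasing order
    def irreducible(w):
        return not any(w[i] <= w[i + 1] <= w[i + 2] for i in range(len(w) - 2))

    pattern = re.compile(r"[ab]?(?:ba)*[ab]?")  # (a|b|eps)(ba)^m(a|b|eps)

    for k in range(3, 13):
        words = ["".join(w) for w in product("ab", repeat=k)]
        irr = {w for w in words if irreducible(w)}
        assert irr == {w for w in words if pattern.fullmatch(w)}
        print(k, len(irr))  # prints 4, 5, 4, 5, ...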

We need to show these reductions have only resolvable ambiguities in the sense of Bergman's paper. Consider an overlap ambiguity of length $n+1$: given $i_1\leq\dots\leq i_{n+1},$ there are two reductions that can be applied to an input $[i_1,\dots,i_{n+1}]$ - one to the first $n$ letters and one to the last $n$ letters.

Lemma. For $i_1\leq \dots\leq i_{n+1},$ the two ways of reducing $[i_1,\dots,i_{n+1}]$ give the same result under some sequence of reductions.

Proof. I will use a table notation in which $[j,\dots,k]$ is recorded in the row for $j$ and the column for $k$: a "$1$" stands for the word whose middle elements are in non-decreasing order, and "$\pi$" for the sum over all other orderings of the middle elements (which may be an empty sum). A star in a row header means to sum over the remaining options (for the first case, $\{i_1,\dots,i_{n+1}\}\setminus\{i_1,i_2,i_{n+1}\}$), and similarly for the columns.

Case $i_1<i_2\leq i_n<i_{n+1}$

$[i_1,\dots,i_{n+1}]$ is represented by: $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&1\\ \hline i_2 &&&&\\ \hline * &&&&\\ \hline i_{n+1}&&&&\\ \hline \end{array}$$

Applying the reduction rule to the first $n$ letters is the same as subtracting these terms: $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&1+\pi&1+\pi&\color{red}{1}+\pi\\ \hline i_2 &&&&\\ \hline * &&&&\\ \hline i_{n+1}&&&&\\ \hline \end{array}$$ I have drawn the left-hand side of the reduction rule in red.

We can then reduce the last $n$ letters of some of these monomials, which is the same as adding these terms: $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&\color{red}{1}+\pi&\color{red}{1}+\pi&\\ \hline i_2 &&1+\pi&1+\pi&\\ \hline * &&1+\pi&1+\pi&\\ \hline i_{n+1}&&1+\pi&1+\pi&\\ \hline \end{array}$$

Finally we subtract $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&\\ \hline i_2 &&&&\\ \hline * &&&&\\ \hline i_{n+1}&1+\pi&1+\pi&\color{red}{1}+\pi&\\ \hline \end{array}$$

The final result is $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&-\pi\\ \hline i_2 &&1+\pi&1+\pi&\\ \hline * &&1+\pi&1+\pi&\\ \hline i_{n+1}&-\pi&&&\\ \hline \end{array}$$

If we started by applying the reduction rule to the last $n$ letters we would get the same result - just flip the table contents so the top-left becomes the bottom-right.

Case $i_1=i_2< i_n<i_{n+1}$

$[i_1,\dots,i_{n+1}]$ is represented by: $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&1\\ \hline * &&&&\\ \hline i_{n+1}&&&&\\ \hline \end{array}$$

Reducing the first $n$ letters is represented by subtracting $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &1+\pi&1+\pi&1+\pi&\color{red}{1}+\pi\\ \hline * &&&&\\ \hline i_{n+1}&&&&\\ \hline \end{array}$$

Add: $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &\color{red}{1}+\pi&\color{red}{1}+\pi&\color{red}{1}+\pi&\\ \hline * &1+\pi&1+\pi&1+\pi&\\ \hline i_{n+1}&1+\pi&1+\pi&1+\pi&\\ \hline \end{array}$$

Subtract: $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&\\ \hline * &&&&\\ \hline i_{n+1}&1+\pi&1+\pi&\color{red}{1}+\pi&\\ \hline \end{array}$$

The result is $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&\\ \hline * &1+\pi&1+\pi&1+\pi&\\ \hline i_{n+1}&&&&\\ \hline \end{array}$$

If we started by reducing the last $n$ letters we would instead subtract $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&\color{red}{1}+\pi\\ \hline * &&&&1+\pi\\ \hline i_{n+1}&&&&\\ \hline \end{array}$$

Then add $$\begin{array} {|c|c|c|c|c|}\hline &i_1&*&i_n&i_{n+1}\\\hline i_1 &&&&\\ \hline * &1+\pi&1+\pi&1+\pi&\color{red}{1}+\pi\\ \hline i_{n+1}&&&&\\ \hline \end{array}$$

which is the same result.

Case $i_1<i_2< i_n=i_{n+1}$

This is dual to the previous case.

Case $i_1=i_2<i_n=i_{n+1}$

I'll omit the steps, but the result is $$\begin{array} {|c|c|c|c|}\hline &i_1&*&i_{n+1}\\\hline i_1 &&&-\pi\\ \hline * &1+\pi&1+\pi&\\ \hline i_{n+1}&1+\pi&1+\pi&\\ \hline \end{array}$$

Cases $i_1=i_n<i_{n+1}$ and $i_1<i_2=i_{n+1}$ and $i_1=i_{n+1}$

Left to the reader for now.

$\Box$

Diamond lemma variant

We have shown that overlap ambiguities of $[i_1,\dots,i_{n+1}]$ are resolvable, but longer overlaps are possible. Say that an overlap ambiguity $(\sigma,\nu,A,B,C)$ is decomposable if there are overlap ambiguities $(\sigma,\tau,A',A''B,C')$ and $(\tau,\nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''.$ I claim that in Bergman's Theorem 1.2 one can replace "all ambiguities" by "all indecomposable ambiguities".

The modification needed is in "Case 1". If the overlap ambiguity $(\sigma,\nu,A,B,C)$ is indecomposable, the proof is the same: one uses the resolvability to show $L(f_\sigma C-Af_\nu)M\in I_D.$ For a decomposable overlap ambiguity $(\sigma,\nu,A,B,C)$ with word $D=LABCM$ we similarly need to show that $L(f_\sigma C-Af_\nu)M\in I_D.$ By decomposability there are overlap ambiguities $(\sigma,\tau,A',A''B,C')$ and $(\tau,\nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''$ (so $D=LA'A''BC'C''M$).

We can prove this by induction on the length of $ABC.$ If $(\sigma,\tau,A',A''B,C')$ is indecomposable then there are reductions proving that $L(f_\sigma C'-A'f_\tau)C''M\in I_D.$ Otherwise we get the same result by the induction hypothesis. By the same argument for $(\tau,\nu,A'',BC',C'')$ we get $LA'(f_{\tau} C''-A''f_\nu)M\in I_D.$ Adding these gives $L(f_\sigma C-Af_\nu)M\in I_D$ as required.
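Explicitly, since $A=A'A''$ and $C=C'C''$, the two memberships add up by telescoping: $$L(f_\sigma C'-A'f_\tau)C''M + LA'(f_\tau C''-A''f_\nu)M = L(f_\sigma C-Af_\nu)M.$$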

By this variant of Bergman's Theorem 1.2(c), and noting that there are no inclusion ambiguities, the $S$-irreducible monomials give a basis of $N_nV,$ which proves the word-counting formula for $\dim N_n^kV$ stated above.


This part isn't entirely serious, but I have seen a quotient of $N_3^4V$ in the wild. In Riemannian normal co-ordinates near a point $0,$ the Riemannian metric has the form $g_{ac}(x)=g_{ac}(0)+\sum_{b,d}\tfrac12g_{ac,bd}x^bx^d+O(\|x\|^3).$ The Hessian $g_{ac,bd}$ has various symmetries, but in particular, summing over permutations of $a,c,b$ or of $c,b,d$ gives zero. In this situation, the Hessian is equivalent to the Riemannian curvature tensor - they can be defined in terms of each other. See, for example, http://users.monash.edu.au/~leo/research/papers/files/lcb96-01.pdf for more details.


Singular script to generate the dimensions directly:

    LIB "fpadim.lib";

    // d must be 2 or 3
    int d=2;
    //int d=3;

    // letterplace rings (degree bound 10) for d=2 and d=3
    ring r2 = 0,(a,b),dp;
    def R2 = makeLetterplaceRing(10);
    ring r3 = 0,(a,b,c),dp;
    def R3 = makeLetterplaceRing(10);
    if (d==2)
    {
      setring R2;
      list monoms=list(a(1),b(1));
    }
    else
    {
      setring R3;
      list monoms=list(a(1),b(1),c(1));
    }

    // Construct ideal of all monomial nilcube relations
    ideal G=0;
    int m1,m2,m3;
    poly p;
    for (m1=1; m1<=d; m1=m1+1)
    {
      for (m2=1; m2<=d; m2=m2+1)
      {
        for (m3=1; m3<=d; m3=m3+1)
        {
          p=0;
          p = p + lpMult(lpMult(monoms[m1],monoms[m2]),monoms[m3]);
          p = p + lpMult(lpMult(monoms[m2],monoms[m3]),monoms[m1]);
          p = p + lpMult(lpMult(monoms[m3],monoms[m1]),monoms[m2]);
          p = p + lpMult(lpMult(monoms[m3],monoms[m2]),monoms[m1]);
          p = p + lpMult(lpMult(monoms[m2],monoms[m1]),monoms[m3]);
          p = p + lpMult(lpMult(monoms[m1],monoms[m3]),monoms[m2]);
          G=G+p;
        }
      }
    }

    // degree-bounded Hilbert series (dimensions of the graded pieces)
    lpDHilbert(G);