Does a four-variable analog of the Hall-Witt identity exist?

Lately I have been thinking about commutator formulas, sparked by rereading the following paragraph in Isaacs (p.125):

An amazing commutator formula is the Hall-Witt identity: $$[x,y^{-1},z]^y[y,z^{-1},x]^z[z,x^{-1},y]^x=1,$$ which holds for any three elements of every group. $\ldots$ One can think of the Hall-Witt formula as a kind of three-variable version of the much more elementary two-variable identity, $[x,y][y,x]=1$. This observation hints at the possibility that a corresponding four-variable formula might exist, but if there is such a four-variable identity, it has yet to be discovered.

To my knowledge a four-variable formula hasn't been discovered since this was written. I was thinking about this and found myself unable to even decide whether or not I thought one could exist (i.e. whether one would, hypothetically, try to find an identity or disprove its existence). Thus, I am attempting to "write down the problem."

How can one rigorously formulate the question, "Does a four-variable analog of the Hall-Witt identity exist?"

Let's start by explicitly constructing the free group on $4$ letters. Suppose we have a free generating set $A=\{a,b,c,d\}$, with $A^{-1}=\{a^{-1},b^{-1},c^{-1},d^{-1}\}$ and $S=A\cup A^{-1}$. Let $F_A$ be the group of all reduced words in $S$, where a word is reduced when it can no longer be simplified by cancelling adjacent occurrences of $x$ and $x^{-1}$ (for $x\in A$).

Cyclically reduced words are reduced words whose first and last letters are not inverse to each other; for instance, $aba^{-1}$ is reduced but not cyclically reduced, and its cyclic reduction is $b$. Every word is conjugate to a cyclically reduced word, so we can consider $\hat{F_A}$, the quotient of $F_A$ by the equivalence relation identifying each word with its cyclic reduction. (Note that this is not the equivalence relation of being conjugate.)

Let $\Phi$ be the set of functions $\varphi:A^4\rightarrow \hat{F_A}$ defining words in $\hat{F_A}$ that contain at least one instance of each free generator or its inverse. Now let $\Psi$ be formally defined on $\Phi$ by $$\Psi(\varphi)(u,x,y,z)=\varphi(u,x,y,z)\varphi(x,y,z,u)\varphi(y,z,u,x)\varphi(z,u,x,y),$$ so that $\Psi(\varphi)$ is again a function $A^4\rightarrow\hat{F_A}$. So, if a $4$-variable Hall-Witt identity exists, it will be among the functions in the preimage under $\Psi$ of the constant function $\mathbf{1}$ (the function which just maps everything to the empty word).

Question: Assuming the above formulation is sound, does there exist a (nontrivial) four-variable analog to the Hall-Witt identity? Can this approach be used to further refine the question?

My use of "nontrivial" above is somewhat ambiguous: what I mean is that the identity should be built from commutators and conjugations in a free generating set of four letters, in such a way that it does not reduce to the two- or three-variable commutator identities by a substitution.


Progress.

Let $$\begin{eqnarray*} W(a,b,c) & \triangleq & [a,b^{-1},c]^b \\ &=& b^{-1}[a,b^{-1}]^{-1}c^{-1}[a,b^{-1}]cb\\ &=&a^{-1}b^{-1}ac^{-1}a^{-1}bab^{-1}cb.\end{eqnarray*}$$ Let's chop $W(a,b,c)$ in half and name the two parts. Define $$w_1(a,b,c)\triangleq a^{-1}b^{-1}ac^{-1}a^{-1} \hspace{30pt} \text{and} \hspace{30pt} w_2(a,b,c)\triangleq bab^{-1}cb,$$ so that $$W(a,b,c)=w_1(a,b,c)w_2(a,b,c).$$ Now, the Hall-Witt Identity can be written as $$W(x,y,z)W(y,z,x)W(z,x,y)=1,$$ that is, $$\overbrace{\underbrace{x^{-1}y^{-1}xz^{-1}x^{-1}}_{w_1(x,y,z)}\underbrace{yxy^{-1}zy}_{w_2(x,y,z)}}^{W(x,y,z)} \overbrace{\underbrace{y^{-1}z^{-1}yx^{-1}y^{-1}}_{w_1(y,z,x)}\underbrace{zyz^{-1}xz}_{w_2(y,z,x)}}^{W(y,z,x)} \overbrace{\underbrace{z^{-1}x^{-1}zy^{-1}z^{-1}}_{w_1(z,x,y)}\underbrace{xzx^{-1}yx}_{w_2(z,x,y)}}^{W(z,x,y)} =1.$$ It's clear that the cancellation works because $w_1(b,c,a)=w_2(a,b,c)^{-1}$. This makes sense: we should be able to cyclically permute the overall word and have it still work, since $1^a=1$. So, what we should be looking for are words $w_1$ and $w_2$ in four letters such that $w_2(a,b,c,d)$ is the inverse of $w_1(b,c,d,a)$.

Update: In fact, the above observation is not just sufficient, but necessary for the existence of a four-variable Hall-Witt identity. Consider for example the case where $W(a,b,c,d)$ would be split into three subwords, rather than two: $$W(a,b,c,d)=w_1(a,b,c,d)w_2(a,b,c,d)w_3(a,b,c,d).$$ We would in this case have to insist that $$w_3(a,b,c,d)=w_1(b,c,d,a)^{-1}\hspace{14pt}\text{ and }\hspace{14pt}w_2(a,b,c,d)=w_2(b,c,d,a)^{-1}.$$ After the $w_1$'s and $w_3$'s cancelled out, we'd be left with $$w_2(x,y,z,u)w_2(y,z,u,x)w_2(z,u,x,y)w_2(u,x,y,z)=1.$$ But of course asking whether that can happen is the same as asking whether $W$ can exist, so eventually we will see a separation into two subwords, or we have a contradiction by infinite descent. If we divided $W(a,b,c,d)$ up into $n>3$ subwords, we would, for odd $n$, reduce to the case of a self-inverse word analogously to the $n=3$ case, or we would have an even number of subwords, which is the same thing as the $2$-subword case.

So, it suffices to either find a word $W(a,b,c,d)=w_1(a,b,c,d)w_2(a,b,c,d)$, nontrivial in each variable, such that $w_2(a,b,c,d)=w_1(b,c,d,a)^{-1}$, or to prove that such a word does not exist.
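For experimenting with candidates, here is a small sanity-check harness (just a sketch; the names `reduce_word`, `rotate`, and `is_half_pair` are mine). It encodes a word as a list of (letter, exponent) pairs, freely reduces, and tests the criterion above, namely that $w_2(a,b,c,d)\,w_1(b,c,d,a)$ reduces to the empty word and that $W=w_1w_2$ involves every letter. As a check, the three-variable halves of the Hall-Witt identity found above pass the analogous test.

```python
# Sketch of a checker for the splitting criterion; a word is a list of
# (letter, exponent) pairs with exponent +1 or -1.

def reduce_word(word):
    """Freely reduce a word by cancelling adjacent inverse pairs (stack-based)."""
    out = []
    for letter, exp in word:
        if out and out[-1] == (letter, -exp):
            out.pop()
        else:
            out.append((letter, exp))
    return out

def rotate(word, letters):
    """Replace each letter by the next one in the cyclic order `letters`,
    turning w(a,b,...,z) into w(b,...,z,a)."""
    nxt = {x: letters[(i + 1) % len(letters)] for i, x in enumerate(letters)}
    return [(nxt[letter], exp) for letter, exp in word]

def is_half_pair(w1, w2, letters):
    """Does w2(a,b,...) * w1(b,...,a) reduce to the empty word, with
    W = w1 * w2 using every letter?"""
    cancels = reduce_word(w2 + rotate(w1, letters)) == []
    uses_all = {letter for letter, _ in reduce_word(w1 + w2)} == set(letters)
    return cancels and uses_all

# Sanity check with the three-variable Hall-Witt halves found above:
# w1(x,y,z) = x^-1 y^-1 x z^-1 x^-1   and   w2(x,y,z) = y x y^-1 z y.
w1 = [('x', -1), ('y', -1), ('x', 1), ('z', -1), ('x', -1)]
w2 = [('y', 1), ('x', 1), ('y', -1), ('z', 1), ('y', 1)]
print(is_half_pair(w1, w2, ['x', 'y', 'z']))   # True
```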


Solution 1:

$\renewcommand{\mod}[1]{~(\text{mod $#1$})} \newcommand{\congr}{\equiv} $The last statement in your question (which by now is more than a mere question) also suggests a solution of the more general problem, namely the $n$-variable analog of the Hall-Witt formula for any $n\geq 2$.

Let $x_1$, $\ldots$, $x_n$ be the variables. If $w=w(x_1,x_2,\ldots,x_n)$ is any word in the $x_i$'s and their inverses, define the word $\gamma w$ by $(\gamma w)(x_1,x_2,\ldots,x_n):=w(x_2,\ldots,x_n,x_1)$, and set $W:=w(\gamma w)^{-1}$. Then $$W(\gamma W)\cdots(\gamma^{n-1}W)=1~.$$ The requirement that $W$ be nontrivial in each variable is easily satisfied. In this way you can produce $n$-variable analogs of the Hall-Witt formula by the truckload.
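To spell out why the displayed product collapses: $\gamma$ extends to an automorphism of the free group, and $\gamma^n$ is the identity, so the factors telescope, $$W(\gamma W)\cdots(\gamma^{n-1}W)=w(\gamma w)^{-1}\,(\gamma w)(\gamma^{2}w)^{-1}\cdots(\gamma^{n-1}w)(\gamma^{n}w)^{-1}=w(\gamma^{n}w)^{-1}=ww^{-1}=1~.$$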

To make things more interesting, you may require, say, that $W$ must be representable by an expression built from the variables and their inverses (the 'tokens') by making commutators $[u,v]$, where $u$ and $v$ are already built expressions, and by conjugations $u^y$, where $u$ is an already built expression, but not a token, and $y$ is a token; then the requirement is that $W=u$ for some expression $u$ built in this way that is not a single token. Now things are no longer so simple. Am I far wrong in supposing that in fact you had some such additional conditions in mind when you formulated your question?

I have no idea how to go about finding an example for $n=4$ or proving that it does not exist. The only (rather weak) restriction on the word $w$ I have found so far is $l_1=l_2=\cdots=l_n$, where $l_i$ is the sum of the exponents of the appearances of $x_i^{\pm1}$ in $w$.

If we relax the condition imposed on $W$ and require only that $W\in[G,G]$, where $G$ is the free group generated by the variables $x_1$, $\ldots$, $x_n$, then we can give the full solution of the problem, since for every $w\in G$ we have $w(\gamma w)^{-1}\in[G,G]$ if and only if $l_1(w)=l_2(w)=\cdots=l_n(w)$. Here $\gamma$ is the automorphism of $G$ that sends $x_i$ to $x_{i+1}$ for $1\leq i<n$, and sends $x_n$ to $x_1$. We must also tell how the functions $l_i: G\to\mathbb{Z}$ are defined. Let $A$ be the free abelian (additive) group generated by $x_1$, $\ldots$, $x_n$, and let $h\colon G\to A$ be the homomorphism sending $x_i\in G$ to $x_i\in A$ for $1\leq i\leq n$; note that $\ker h=[G,G]$. For every $w\in G$ we have $h(w)=\sum_{i=1}^n l_i(w)x_i$: this defines the $l_i$'s.
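To see the stated equivalence, just compute the image under $h$ (indices read modulo $n$): $$h\bigl(w(\gamma w)^{-1}\bigr)=h(w)-h(\gamma w)=\sum_{i=1}^{n}l_i(w)\,x_i-\sum_{i=1}^{n}l_i(w)\,x_{i+1}=\sum_{i=1}^{n}\bigl(l_i(w)-l_{i-1}(w)\bigr)x_i~,$$ and this vanishes, that is, $w(\gamma w)^{-1}$ lies in $\ker h=[G,G]$, precisely when $l_1(w)=l_2(w)=\cdots=l_n(w)$.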

For example, if $w=x_1^{-1}x_2^{-1}\cdots x_{n-1}^{-1}x_n^{-1}$, then $W=w(\gamma w)^{-1}=[x_1,x_nx_{n-1}\cdots x_2]$. When $n=3$ we obtain the identity $$ [x,zy][y,xz][z,yx]=1~, $$ which is a humble cousin of the Hall-Witt formula, and is probably quite useless (nice as it is).
If you wish you can rewrite $W=[x,zy]=x^{-1}y^{-1}z^{-1}xzy$ as a product of iterated commutators, $W=[x,y][x,z][[x,z],y]$, or perhaps in the well-known form $W=[x,y][x,z]^y$. (Mark that here $[x,y]=x^{-1}y^{-1}xy$; some authors define the commutator as $[x,y]:=xyx^{-1}y^{-1}$.) Note that the example $w=w_1(a,b,c,d)=a^{-1}bc^{-1}b^{-1}dad^{-1}$ provided by weux082690 has $l_a(w)=l_b(w)=l_d(w)=0$ but $l_c(w)=-1$, thus $w(\gamma w)^{-1}\notin[G,G]$.

By a sort of retrograde progress, let us return to the beginning. We are going to prove the following:

Let $G$ be the free group with the free generators $x_1$, $\ldots$, $x_n$, $n\geq 2$, and let $\gamma$ be the automorphism of $G$ that rotates the generators one place to the right, that is, $\gamma x_i=x_{i+1}$ for $1\leq i<n$ and $\gamma x_n=x_1$. Then $g\in G$ has the property that $g(\gamma g)\cdots(\gamma^{n-1}g)=1$ iff there exists $h\in G$ such that $g=h(\gamma h)^{-1}$.

Proof. $~$The sufficiency is clear.

Necessity. Let $T$ be the set of 'tokens' $\{x_1,x_1^{-1},\ldots,x_n,x_n^{-1}\}$, and let $T^*$ denote the free monoid (of 'words') generated by $T$; the neutral element of $T^*$ is the empty word $\varepsilon$. Then $G=T^*/{\sim}$, where $\sim$ is the congruence on the monoid $T^*$ generated by $x_i^\alpha x_i^{-\alpha}\sim\varepsilon$, $1\leq i\leq n$, $\alpha=\pm1$. The equivalence class of the empty word is the multiplicative identity of the free group: $\varepsilon/{\sim}=1_G=1$. Every equivalence class $g\in G$ contains a unique reduced word $\varrho(g)$ which does not contain any pair of consecutive tokens that are inverses of each other. Given a word $w\in T^*$, we obtain the reduced word $\varrho(w/{\sim})$ by repeatedly applying the reductions $ux_i^{\alpha}x_i^{-\alpha}v\to uv$, $1\leq i\leq n$, $\alpha=\pm1$, $u,v\in T^*$; it is easy to verify that this system of reductions is terminating (each reduction shortens the word) and locally confluent, thus it has the Church-Rosser property, and so there is in fact a unique reduced word in each equivalence class. The rotation $\gamma$ of generators induces the double rotation of tokens (the rotation of the tokens with the exponent $1$ and also the rotation of the tokens with the exponent $-1$); the automorphism of the free monoid $T^*$ determined by this double rotation we still denote by $\gamma$.
$\quad$Let $w=x_{i_1}^{\alpha_1}x_{i_2}^{\alpha_2}\cdots x_{i_m}^{\alpha_m}\in T^*$. We denote by $|w|$ the length $m$ of the word $w$. For each $i$, $1\leq i\leq n$, we denote by $l_i(w)$ the sum of the exponents of all tokens $x_i^{\pm1}$ appearing in $w$, and by $l(w)$ we denote the sum of all exponents $\alpha_1+\alpha_2+\cdots+\alpha_m=l_1(w)+\cdots+l_n(w)$. Always $|w|\congr l(w)\mod{2}$, because every term in $|w|-l(w)=(1-\alpha_1)+(1-\alpha_2)+\cdots+(1-\alpha_m)$ is either $0$ or $2$. For each $i$, $1\leq i\leq n$, we have $l_i(uv)=l_i(u)+l_i(v)$ for all $u,v\in T^*$, and $l_i$ is constant on every equivalence class $g\in G$.
$\quad$Now suppose that $g\in G$ has the property $g(\gamma g)\cdots(\gamma^{n-1} g)=1$, and let $w:=\varrho(g)$. Then $l_n(w)+l_n(\gamma w)+\cdots+l_n(\gamma^{n-1} w)=l_n(\varepsilon)=0$; since $l_n(\gamma w)=l_{n-1}(w)$, $\ldots$, $l_n(\gamma^{n-1} w)=l_1(w)$ we have $l(w)=l_1(w)+\cdots+l_n(w)=0$, therefore the word $w$ is of even length, $|w|=2k$. We split the word $w$ as $w=w_1w_2$, where $|w_1|=|w_2|=k$. We are assuming that $k>0$, since the case $k=0$ is trivial. The reduction process must reduce the word $$ W := w_1w_2(\gamma w_1)(\gamma w_2)(\gamma^2w_1)(\gamma^2w_2)\cdots (\gamma^{n-1}w_1)(\gamma^{n-1}w_2) $$ to the empty word. Since the words $w_1w_2$, $(\gamma w_1)(\gamma w_2)$, $\ldots$, $(\gamma^{n-1}w_1)(\gamma^{n-1}w_2)$ are reduced, the only places in the word $W$ where the reductions can be applied are at the points of contact between subwords $w_2$ and $\gamma w_1$, $\gamma w_2$ and $\gamma^2 w_1$, $\ldots\,$ Consider the effect of the reduction process on the subword $w_2(\gamma w_1)$ (mark that we can carry out the reductions in any order we choose, the result will be always the same). The first reduction eliminates some product $t\,t^{-1}$ from the center of $w_2(\gamma w_1)$, where $t$ is the last token in $w_2$ and $t^{-1}$ is the first token in $\gamma w_1$. We are left with $w_2'(\gamma w_1')$, where $|w_1'|=|w_2'|=k-1$. If $k>1$, there may be another reduction applicable at the center of $w_2'(\gamma w_1')$ (and nowhere else in this word), so we apply it, and so on. In fact the reductions must proceed to the bitter end, reducing the initial word $w_2(\gamma w_1)$ to the final empty word. For suppose that the reduction process stops after $r<k$ reductions; then the reduction process, applied to each of the subwords $(\gamma w_2)(\gamma^2 w_1)$, $\ldots$, $(\gamma^{n-2}w_2)(\gamma^{n-1}w_1)$, will likewise stop after $r$ reductions, and we will have a nonempty reduced word on our hands, which cannot be, because the word $W$ must reduce to the empty word. Let $h:=w_1/{\sim}$. Since $w_2(\gamma w_1)\sim\varepsilon$, it follows that $w_2/{\sim}=(\gamma h)^{-1}$, whence $g=(w_1/{\sim})(w_2/{\sim})=h(\gamma h)^{-1}$.$~$ Done.
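A tiny illustration of the splitting used in the proof: for $n=2$ take $g=[x_1,x_2]=x_1^{-1}x_2^{-1}x_1x_2$, which satisfies $g(\gamma g)=[x_1,x_2][x_2,x_1]=1$. Here $|w|=4$, the halves are $w_1=x_1^{-1}x_2^{-1}$ and $w_2=x_1x_2$, the word $w_2(\gamma w_1)=x_1x_2x_2^{-1}x_1^{-1}$ reduces to $\varepsilon$, and indeed $h=w_1/{\sim}$ gives $h(\gamma h)^{-1}=x_1^{-1}x_2^{-1}\,(x_2^{-1}x_1^{-1})^{-1}=x_1^{-1}x_2^{-1}x_1x_2=g$.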

Solution 2:

I don't know how you would reformulate it in terms of commutators, but $W(a,b,c,d) = a^{-1}bc^{-1}b^{-1}dad^{-1}ab^{-1}a^{-1}cdc^{-1}b$ works, with $w_1(a,b,c,d) = a^{-1}bc^{-1}b^{-1}dad^{-1}$ and $w_2(a,b,c,d) = ab^{-1}a^{-1}cdc^{-1}b$. Indeed, $$w_2(a,b,c,d)\,w_1(b,c,d,a) = ab^{-1}a^{-1}cdc^{-1}b \cdot b^{-1}cd^{-1}c^{-1}aba^{-1} = 1.$$

Update: by shuffling the letters around a bit, I got it into more commutator-like terms: $w_1(a,b,c,d) = b^{-1}c^{-1}ba^{-1}dad^{-1} = (c^{-1})^b \cdot [a, d^{-1}]$ and $w_2(a,b,c,d) = ab^{-1}a^{-1}bc^{-1}dc = [a^{-1}, b] \cdot d^c$, so that $W(a,b,c,d) = (c^{-1})^b \cdot [a, d^{-1}] \cdot [a^{-1}, b] \cdot d^c$.
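As a quick spot-check of this commutator form (not a proof, and the helper names below are just illustrative), one can evaluate $W(a,b,c,d)=(c^{-1})^b[a,d^{-1}][a^{-1},b]d^c$ on random permutations and confirm that the cyclic product $W(a,b,c,d)\,W(b,c,d,a)\,W(c,d,a,b)\,W(d,a,b,c)$ is the identity; the conventions are $[x,y]=x^{-1}y^{-1}xy$ and $x^y=y^{-1}xy$, as above.

```python
# Spot-check the four-variable identity on random permutations of degree N.
import random

N = 7  # degree of the test permutations (arbitrary choice)

def compose(p, q):
    """Group product p*q with permutations acting on the left: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(N))

def inverse(p):
    inv = [0] * N
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def comm(x, y):    # [x, y] = x^-1 y^-1 x y
    return compose(compose(inverse(x), inverse(y)), compose(x, y))

def conj(x, y):    # x^y = y^-1 x y
    return compose(compose(inverse(y), x), y)

def W(a, b, c, d):
    # W(a,b,c,d) = (c^-1)^b [a, d^-1] [a^-1, b] d^c
    return compose(compose(conj(inverse(c), b), comm(a, inverse(d))),
                   compose(comm(inverse(a), b), conj(d, c)))

identity = tuple(range(N))
for _ in range(100):
    a, b, c, d = [tuple(random.sample(range(N), N)) for _ in range(4)]
    product = compose(compose(W(a, b, c, d), W(b, c, d, a)),
                      compose(W(c, d, a, b), W(d, a, b, c)))
    assert product == identity
print("cyclic product was the identity in all trials")
```

Of course the free cancellation $w_2(a,b,c,d)\,w_1(b,c,d,a)=1$ shown above is already a complete proof; the script is only there to catch typos.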