The quadratic form $g(x,y) = xy$ can be diagonalized as $u^2 - v^2$ by the change of variables $x = u + v$ and $y = u - v$.
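As a quick sanity check, this change of variables can be verified symbolically with sympy:

```python
import sympy as sp

u, v = sp.symbols('u v')

# Substitute x = u + v and y = u - v into g(x, y) = x*y
g = (u + v) * (u - v)

# Expanding gives the diagonal form u^2 - v^2
print(sp.expand(g))  # u**2 - v**2
```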

However, it seems unlikely that the cubic form $f(x,y,z) = xyz$ can be diagonalized by a linear change of variables.

Is there a short computational or theoretical proof of this?

Thanks.


Solution 1:

The diagonalisation of quadratic forms comes from the correspondence with symmetric matrices, and the fact that you can diagonalise symmetric matrices.

Cubic forms correspond to third-order symmetric tensors, and diagonalising a cubic form is equivalent to diagonalising the corresponding tensor. I don't know much about the theory of diagonalising higher-order tensors, but a Google search turns up a few results for "diagonalising rank three tensors" and "diagonalising higher order tensors", so there may be something out there.

EDIT: I decided to do a bit more research. If you have access to JSTOR, the paper "The Transformation of Tensors into Diagonal Form" by Oliver Aberth, SIAM Journal on Applied Mathematics, Vol. 15, No. 5 (Sep., 1967), pp. 1247-1252, which can be found at http://www.jstor.org/stable/2099163 , gives conditions for the diagonalisation of general tensors, and should be understandable with a basic knowledge of the summation convention. (Think of tensors as arrays; the summation convention says that if an index appears exactly twice in a product, you sum over it.)

Corollary 1 from this paper says that:

A Cartesian tensor $A_{i_1 i_2 \ldots i_s}$ can be transformed so that it is diagonalised in the $s$ indices $i_1,\ldots,i_s$, $2 \leq s \leq n$, if and only if it is symmetric in these indices and the tensor $$A_{t i_2 \ldots i_s}A_{t j_2 \ldots j_s}$$ is symmetric in the indices $i_2, j_2$.

The condition is trivial in $2$ dimensions, but is not implied by symmetry in higher dimensions. It also shows that $xyz$ can't be diagonalised: its corresponding symmetric tensor is given by $T_{ijk} = 1/6$ if $i,j,k$ are all different, and $0$ if any two are the same. However, the tensor in the condition is not symmetric in $i_2, j_2$: the entry $A_{t12}A_{t21}$ (summed over $t$) is nonzero, equal to $1/36$, but swapping $i_2$ and $j_2$ gives $A_{t22}A_{t11} = 0$.
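The failure of the symmetry condition for $xyz$ can be checked numerically; here is a short numpy sketch (index values $0,1,2$ stand in for $1,2,3$):

```python
import itertools
import numpy as np

# Symmetric tensor A for f(x,y,z) = xyz: A[i,j,k] = 1/6 whenever
# (i,j,k) is a permutation of (0,1,2), and 0 otherwise.
A = np.zeros((3, 3, 3))
for perm in itertools.permutations(range(3)):
    A[perm] = 1.0 / 6.0

# Condition tensor B[i2,i3,j2,j3] = sum_t A[t,i2,i3] * A[t,j2,j3]
B = np.einsum('tab,tcd->abcd', A, A)

# Symmetry in i2 and j2 would require B[i2,i3,j2,j3] == B[j2,i3,i2,j3].
print(B[0, 1, 1, 0])  # 1/36 -- nonzero
print(B[1, 1, 0, 0])  # 0 -- swapping i2 and j2 changes the value
```

Since the two printed entries differ, the corollary's condition fails, so $xyz$ cannot be diagonalised.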

Solution 2:

If I understand your question correctly, you are asking whether it is possible to write $xyz = \ell_1^3 + \ell_2^3 + \ell_3^3$ for some linear forms $\ell_1=\ell_1(x,y,z)$, $\ell_2,\ell_3$.

We can prove that $xyz \neq \ell_1^3 + \ell_2^3 + \ell_3^3$ in a few different ways. Here is a short theoretical proof that uses a little bit of projective geometry.

Lemma: If $xyz = \ell_1^3 + \ell_2^3 + \ell_3^3$, then $\{\ell_1,\ell_2,\ell_3\}$ are linearly independent.

Proof: The second derivatives of $xyz$ include $x$, $y$, and $z$ (for example, $\partial^2(xyz)/\partial x\,\partial y = z$), so they span a $3$-dimensional space. If $\ell_1,\ell_2,\ell_3$ spanned a space of dimension less than $3$, then they would depend on only $2$ (or $1$) variables, the second derivatives of each $\ell_i^3$ would still depend on only those variables, and so they would span a space of dimension less than $3$, a contradiction. $\square$
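The first observation in the proof is easy to confirm with sympy: the second partials of $xyz$ include all three coordinates.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x * y * z

# All second partial derivatives of xyz (e.g. d^2 f / dx dy = z)
seconds = {sp.diff(f, a, b) for a in (x, y, z) for b in (x, y, z)}
print(seconds)  # contains x, y and z (plus 0), spanning 3 dimensions
```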

Claim: $xyz \neq \ell_1^3 + \ell_2^3 + \ell_3^3$.

Proof: For convenience write $u=\ell_1$, $v=\ell_2$, $w=\ell_3$; by the lemma these are linearly independent, so $(u,v,w)$ is a linear change of coordinates. The projective curve defined by $xyz=0$ is singular, in fact reducible. However, the curve defined by $u^3+v^3+w^3=0$ is nonsingular. No linear change of coordinates can carry a singular curve to a nonsingular one. $\square$

However, if you are not interested in projective geometry, it is still possible to deal with this via other approaches.

There is some literature on this subject under the names "Waring rank", Waring decompositions, symmetric tensors, and symmetric tensor rank. Some general introductions include Landsberg, Tensors: Geometry and Applications, and Carlini, Grieve, Oeding, Four Lectures on Secant Varieties. The book and the lecture notes have basic explanations and references for further reading. I hope it helps.

Solution 3:

The "diagonalization" is given on page 101 of this paper:

https://www.sciencedirect.com/science/article/pii/0024379587902898

Here is a picture of the formula:

[Image: Expressing $xyz$ as a minimal sum of cubes of linear polynomials]
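For reference, a well-known minimal decomposition (which may be the formula pictured) writes $xyz$ as a sum of four cubes, $xyz = \frac{1}{24}\left[(x+y+z)^3 + (x-y-z)^3 + (-x+y-z)^3 + (-x-y+z)^3\right]$; it is easy to verify with sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A sum of four cubes of linear forms (the Waring rank of xyz is 4,
# consistent with the impossibility of a three-cube decomposition above)
rhs = ((x + y + z)**3 + (x - y - z)**3
       + (-x + y - z)**3 + (-x - y + z)**3) / 24

print(sp.expand(rhs))  # x*y*z
```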