In Geometric Algebra, is there a geometric product between matrices?

Thanks for your help in advance.

I have literally just started to self-study geometric algebra.

I have some coursework background in linear algebra and was trying to make an educational bridge between what I know and what I'm trying to learn.

My question: Is there a geometric product for matrices in geometric algebra, like there is a geometric product for vectors? If so, how would one compute the geometric product between matrices?

Thanks


Solution 1:

Let me address this more from the angle of how linear algebra is presented in some GA material.

In traditional linear algebra, you use a lot of matrices and column/row vectors because this gives you an easy way to compute the action of a linear map or operator on a vector. What I want to emphasize is that this is a representation. It's a way of talking about linear maps, but it's not the only way.

In GA, there are reasons we don't often use matrices explicitly. One reason is that we have a natural extension of a linear operator to all kinds of blades, not just vectors. If you have a linear operator $\underline T$, and you want to compute its action on a bivector $a \wedge b$ with matrices, you would have to compute a totally different matrix from the one you would use just considering $\underline T$ acting on a vector (this matrix's components would describe its action on basis bivectors, not basis vectors). This is one reason why matrices become rather useless.

Thus, since we tend to look at linear maps and operators merely as linear functions, we have to develop ways to talk about common linear algebra concepts without reference to matrices at all. This is how we arrive at a basis-independent definition of the determinant using the pseudoscalar $I$, saying $\underline T(I) = I \det \underline T$ for instance. Texts on GA and GC also develop ways to talk about traces and other interesting linear algebra concepts without matrices.
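(A quick numerical aside, my sketch rather than part of the original answer: in 2D, $\underline T(I)=\underline T(e_1)\wedge\underline T(e_2)$, and the coefficient of $e_1\wedge e_2$ is exactly $\det T$, assuming $\underline T$ is given by an ordinary $2\times2$ matrix on an orthonormal basis.)

```python
import numpy as np

# Minimal sketch: T(I) = (det T) I in 2D, with T an ordinary 2x2 matrix.
T = np.array([[2.0, 1.0],
              [0.5, 3.0]])

Te1, Te2 = T[:, 0], T[:, 1]                      # images of the basis vectors
# T(e1) ∧ T(e2) = (coefficient) e1 ∧ e2; in 2D that coefficient is:
wedge_coeff = Te1[0] * Te2[1] - Te1[1] * Te2[0]

assert np.isclose(wedge_coeff, np.linalg.det(T))  # T(I) = (det T) I
```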

With all that in mind, since we don't talk about matrices when doing linear algebra in GA, we don't have to think about geometric products of matrices. We just talk about compositions of maps (which would be represented through matrix multiplication) when applying several maps in succession.

Solution 2:

I think you're giving undue distinction to matrices.

Matrices are, after all, just fancily written vectors with $n^2$ entries. You can use the vector space $M_n(\Bbb R)$ and develop a geometric algebra containing it, but it would be the same as taking $\Bbb R^{n^2}$ with the same bilinear product and developing that geometric algebra.
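(As a concrete illustration, my choice rather than the answer's: if you take the Frobenius inner product $\operatorname{tr}(A^{\mathsf T}B)$ as the bilinear form on $M_n(\Bbb R)$, it is literally the standard dot product on $\Bbb R^{n^2}$ after flattening.)

```python
import numpy as np

# Sketch: M_n(R) with the Frobenius inner product is just R^(n^2) with the
# usual dot product, so "matrix-shaped" vectors generate the same GA.
rng = np.random.default_rng(0)
n = 3
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))

frobenius = np.trace(A.T @ B)               # inner product on M_n(R)
flattened = A.reshape(-1) @ B.reshape(-1)   # dot product on R^(n^2)

assert np.isclose(frobenius, flattened)
```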

The important thing about the geometric algebra is that you are taking the metric vector space $V$ that you're interested in and generating an algebra around it that has nice properties that we find useful. Nobody cares if the vectors are shaped like squares or hieroglyphs or ninja throwing stars, the only thing we care about is that it's a vector space with an inner product.


In case you are still looking for more material on geometric algebra, you might find questions with the Clifford-algebras tag useful, and the solutions there, especially this one and also maybe this one. I found Alan Macdonald's online introductory material very helpful.

Solution 3:

There are actually two ways to do this (in addition to the other answers'). They both use the same background, as follows.

Given an $n$-dimensional real vector space $V$, we can construct a $2n$-dimensional space $V\oplus V^*$, using the dual space $V^*$ (the set of all linear functions from $V$ to $\mathbb R$). Define a dot product on $V\oplus V^*$ by

$$(a+\alpha)\cdot(b+\beta)=a\cdot\beta+\alpha\cdot b=\beta(a)+\alpha(b)$$

where $a\in V,\alpha\in V^*,b\in V,\beta\in V^*$. Thus the dot product of any two vectors in $V$ is $0$ (so we don't have an "inner product" or "metric tensor" on $V$).

Take a basis $\{e_i\}=\{e_1,e_2,\cdots,e_n\}$ for $V$, and the dual basis $\{\varepsilon^i\}$ for $V^*$, satisfying $\varepsilon^i\cdot e_i=1$ and otherwise $\varepsilon^i\cdot e_j=0$. These together form a basis for $V\oplus V^*$. We can make a different basis $\{\sigma_i,\tau_i\}$, defined by

$$\sigma_i=\frac{e_i+\varepsilon^i}{\sqrt2},\qquad\tau_i=\frac{e_i-\varepsilon^i}{\sqrt2}.$$

(If you want to avoid $\sqrt2$ for some reason (like using $\mathbb Q$ as the scalar field), then define $\sigma_i=\frac12e_i+\varepsilon^i,\;\tau_i=\frac12e_i-\varepsilon^i$. The result is the same.)

It can be seen that $\sigma_i\cdot\tau_j=0$, and $\sigma_i\cdot\sigma_i=1=-\tau_i\cdot\tau_i$ and otherwise $\sigma_i\cdot\sigma_j=0=\tau_i\cdot\tau_j$. So we have an orthonormal basis of $n$ vectors $\sigma_i$ squaring to ${^+}1$ and $n$ vectors $\tau_i$ squaring to ${^-}1$, showing that $V\oplus V^*$ is isomorphic to the pseudo-Euclidean space $\mathbb R^{n,n}$.
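(A quick numerical sanity check of the signature, my sketch: write the Gram matrix of $\{e_i,\varepsilon^i\}$ and change to the $\{\sigma_i,\tau_i\}$ basis; it becomes $\operatorname{diag}(1,\dots,1,-1,\dots,-1)$.)

```python
import numpy as np

n = 3
I, Z = np.eye(n), np.zeros((n, n))

# Gram matrix of (e_1..e_n, eps^1..eps^n): e_i·e_j = eps^i·eps^j = 0, e_i·eps^j = δ_ij
G = np.block([[Z, I],
              [I, Z]])

# Rows of P: coordinates of sigma_i = (e_i + eps^i)/√2 and tau_i = (e_i - eps^i)/√2
P = np.block([[I, I],
              [I, -I]]) / np.sqrt(2)

G_new = P @ G @ P.T   # Gram matrix in the (sigma, tau) basis

expected = np.diag(np.concatenate([np.ones(n), -np.ones(n)]))
assert np.allclose(G_new, expected)   # signature (n, n): the space is R^{n,n}
```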


Method 1: Bivectors

Any $n\times n$ matrix (or linear transformation on $V$) can be represented by a bivector in the geometric algebra over $V\oplus V^*$. Given the scalar components $M^i\!_j$ of a matrix, the corresponding bivector is

$$M=\sum_{i,j}M^i\!_j\,e_i\wedge\varepsilon^j.$$

For example, with $n=2$, we would have

$$M=\begin{pmatrix}M^1\!_1e_1\wedge\varepsilon^1+M^1\!_2e_1\wedge\varepsilon^2 \\ +M^2\!_1e_2\wedge\varepsilon^1+M^2\!_2e_2\wedge\varepsilon^2 \end{pmatrix}\cong\begin{bmatrix}M^1\!_1 & M^1\!_2 \\ M^2\!_1 & M^2\!_2\end{bmatrix}.$$

The transformation, applied to a vector $a=\sum_ia^ie_i$, is

$$a\mapsto M\bullet a=M\,\llcorner\,a=M\times a=-a\bullet M$$

$$=\sum_{i,j,k}M^i\!_ja^k(e_i\wedge\varepsilon^j)\bullet e_k$$

$$=\sum_{i,j,k}M^i\!_ja^k\big(e_i(\varepsilon^j\cdot e_k)-(e_i\cdot e_k)\varepsilon^j\big)$$

$$=\sum_{i,j,k}M^i\!_ja^k\big(e_i(\delta^j_k)-0\big)$$

$$=\sum_{i,j}M^i\!_ja^je_i.$$

There I used the bac-cab identity $(a\wedge b)\bullet c=a(b\cdot c)-(a\cdot c)b$, and the products $\bullet$, $\llcorner$, $\times$ defined here.
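(Here is a direct numerical transcription of that computation, my sketch: it uses only the basis dot products $e_i\cdot e_k=0$, $\varepsilon^j\cdot e_k=\delta^j_k$ and the bac-cab identity, and confirms that the contraction reduces to the ordinary matrix-vector product.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n))   # components M^i_j
a = rng.normal(size=n)        # components a^k

# Basis dot products in V ⊕ V*:  e_i · e_k = 0,  eps^j · e_k = δ^j_k.
def e_dot_e(i, k):   return 0.0
def eps_dot_e(j, k): return 1.0 if j == k else 0.0

# Apply (e_i ∧ eps^j) • e_k = e_i (eps^j · e_k) − (e_i · e_k) eps^j term by term,
# collecting the e- and eps-components of the result.
out_e, out_eps = np.zeros(n), np.zeros(n)
for i in range(n):
    for j in range(n):
        for k in range(n):
            c = M[i, j] * a[k]
            out_e[i]   += c * eps_dot_e(j, k)
            out_eps[j] -= c * e_dot_e(i, k)

assert np.allclose(out_e, M @ a)     # M • a = Σ M^i_j a^j e_i
assert np.allclose(out_eps, 0.0)     # no eps-component: V is mapped into V
```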

(Now, much of the remainder of this post is about a single bivector. For the product of two bivectors, you may skip to the highlighted equation.)

The pullback/adjoint transformation on $V^*$ is $\alpha\mapsto\alpha\bullet M=-M\bullet\alpha=\sum_{i,j}\alpha_iM^i\!_j\varepsilon^j$. This relates to ordinary matrix multiplication, in that row vectors go on the left, vs column vectors on the right. Also relevant is the multivector identity $(A\,\lrcorner\,B)\,\llcorner\,C=A\,\lrcorner\,(B\,\llcorner\,C)$, which implies $(\alpha\bullet M)\cdot b=\alpha\cdot(M\bullet b)$. This relates to the associativity of matrix multiplication, or the definition of the adjoint.
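(In matrix language that last identity is just $(M^{\mathsf T}\alpha)\cdot b=\alpha\cdot(Mb)$; a one-line numerical check, my sketch:)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))
alpha, b = rng.normal(size=n), rng.normal(size=n)

# (alpha • M) · b  =  alpha · (M • b): the pullback/adjoint is the transpose.
assert np.isclose((M.T @ alpha) @ b, alpha @ (M @ b))
```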


The outermorphism can be calculated using the exterior powers of $M$ :

$$(M\bullet a)\wedge(M\bullet b)=\frac{M\wedge M}{2}\bullet(a\wedge b)$$

$$(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c)=\frac{M\wedge M\wedge M}{6}\bullet(a\wedge b\wedge c)$$

$$(M\bullet a_1)\wedge(M\bullet a_2)\wedge\cdots\wedge(M\bullet a_n)=\frac{1(\wedge M)^n}{n!}\bullet(a_1\wedge a_2\wedge\cdots\wedge a_n)$$

$$=\frac{M\wedge M\wedge\cdots\wedge M}{1\;\cdot\;2\;\cdot\;\cdots\;\cdot\;n}\bullet(a_1\wedge a_2\wedge\cdots\wedge a_n)$$

(This notation, $1(\wedge M)^n$, is sometimes replaced with $\wedge^nM$ or $M^{\wedge n}$, but those don't look right to me.)

I'll prove the trivector case; the others are similar. I'll use the identities $A\,\llcorner\,(B\wedge C)=(A\,\llcorner\,B)\,\llcorner\,C$, and $a\,\lrcorner\,(B\wedge C)=(a\,\lrcorner\,B)\wedge C+(-1)^kB\wedge(a\,\lrcorner\,C)$ when $a$ has grade $1$ and $B$ has grade $k$.

$$\frac{M\wedge M\wedge M}{6}\bullet(a\wedge b\wedge c)$$

$$=\bigg(\frac{M\wedge M\wedge M}{6}\bullet a\bigg)\bullet(b\wedge c)$$

$$=\bigg(\frac{M\wedge M\wedge(M\bullet a)+M\wedge(M\bullet a)\wedge M+(M\bullet a)\wedge M\wedge M}{6}\bigg)\bullet(b\wedge c)$$

(bivector $\wedge$ is commutative, so these are all the same)

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge M}{2}\bigg)\bullet(b\wedge c)$$

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge M}{2}\bullet b\bigg)\bullet c$$

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge(M\bullet b)+(M\bullet a)\wedge(M\bullet b)\wedge M+\big((M\bullet a)\cdot b\big)\wedge M\wedge M}{2}\bigg)\bullet c$$

(remember, all vectors in $V$ are orthogonal, so $(M\bullet a)\cdot b=0$ )

$$=\Big((M\bullet a)\wedge(M\bullet b)\wedge M\Big)\bullet c$$

$$=(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c)+(M\bullet a)\wedge\big((M\bullet b)\cdot c\big)\wedge M+\big((M\bullet a)\cdot c\big)\wedge(M\bullet b)\wedge M$$

$$=(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c).$$

This provides a formula for the determinant. Take the $n$-blade $E=e_1\wedge e_2\wedge\cdots\wedge e_n=e_1e_2\cdots e_n$. (This is basis-dependent, though unique up to a scalar.) Then

$$\frac{1(\wedge M)^n}{n!}\bullet E=(\det M)E.$$
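(In components this says that the coefficient of $E$ in $(M\bullet e_1)\wedge\cdots\wedge(M\bullet e_n)$ is the Leibniz sum over permutations, which is $\det M$; a quick check, my sketch:)

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
n = 4
M = rng.normal(size=(n, n))

def sign(p):
    """Parity of a permutation, via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

# Coefficient of E = e_1 ∧ ... ∧ e_n in (M•e_1) ∧ ... ∧ (M•e_n), where M•e_i
# has coordinates given by the i-th column of M.
coeff_on_E = sum(sign(p) * np.prod([M[p[i], i] for i in range(n)])
                 for p in permutations(range(n)))

assert np.isclose(coeff_on_E, np.linalg.det(M))   # (1(∧M)^n / n!) • E = (det M) E
```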

And, using the commutator identity $A\times(BC)=(A\times B)C+B(A\times C)$, we find the trace:

$$ME=M\,\lrcorner\,E+M\times E+M\wedge E=0+M\times E+0$$

$$=(M\times e_1)e_2\cdots e_n+e_1(M\times e_2)\cdots e_n+\cdots+e_1e_2\cdots(M\times e_n)$$

$$=\Big(\sum_iM^i\!_1e_i\Big)e_2\cdots e_n+e_1\Big(\sum_iM^i\!_2e_i\Big)\cdots e_n+\cdots+e_1e_2\cdots\Big(\sum_iM^i\!_ne_i\Big)$$

(most of the terms disappear because $e_ie_i=0$ )

$$=(M^1\!_1e_1)e_2\cdots e_n+e_1(M^2\!_2e_2)\cdots e_n+\cdots+e_1e_2\cdots(M^n\!_ne_n)$$

$$=(M^1\!_1+M^2\!_2+\cdots+M^n\!_n)e_1e_2\cdots e_n=(\text{tr}\,M)E.$$

More generally, the characteristic polynomial coefficients are determined by the geometric product

$$\frac{1(\wedge M)^k}{k!}E=c_kE.$$

These can be combined into (a variant of) the polynomial itself. With the exterior exponential defined by

$$\exp\!\wedge(A)=\sum_k\frac{1(\wedge A)^k}{k!}=1+A+\frac{A\wedge A}2+\frac{A\wedge A\wedge A}{6}+\cdots,$$

we have

$$\big(\exp\!\wedge(tM)\big)E=\Big(\sum_kc_kt^k\Big)E=\big(1+(\text{tr}\,M)t+c_2t^2+\cdots+(\det M)t^n\big)E$$

$$=t^n\bigg(\frac{1}{t^n}+\frac{\text{tr}\,M}{t^{n-1}}+\frac{c_2}{t^{n-2}}+\cdots+\frac{\det M}{1}\bigg)E.$$
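(In matrix terms the $c_k$ are the sums of the principal $k\times k$ minors of $M$, i.e. the elementary symmetric functions of its eigenvalues, with $c_1=\operatorname{tr}M$ and $c_n=\det M$; a quick numerical check against the characteristic polynomial, my sketch:)

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 4
M = rng.normal(size=(n, n))

def c(k):
    """Sum of the principal k×k minors of M (c_1 = trace, c_n = determinant)."""
    return sum(np.linalg.det(M[np.ix_(rows, rows)])
               for rows in combinations(range(n), k))

# np.poly(M) lists the coefficients of det(tI − M) in decreasing powers of t:
# t^n − c_1 t^(n−1) + c_2 t^(n−2) − ... + (−1)^n c_n.
char = np.poly(M)
for k in range(1, n + 1):
    assert np.isclose(c(k), (-1) ** k * char[k])
```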


The reverse of a multivector is $\tilde A=\sum_k(-1)^{k(k-1)/2}\langle A\rangle_k$; the reverse of a product is $(AB)^\sim=\tilde B\tilde A$. It can be shown that the scalar product of two blades, with one reversed, is the determinant of the matrix of dot products of the blades' component vectors. For example, $(a_2\wedge a_1)\bullet(b_1\wedge b_2)=(a_1\cdot b_1)(a_2\cdot b_2)-(a_1\cdot b_2)(a_2\cdot b_1)$.

Given the above, and the blades $E=e_1\cdots e_n$ and $\cal E=\varepsilon^1\cdots\varepsilon^n$, it follows that $E\bullet\tilde{\cal E}=1$. The full geometric product happens to be the exterior exponential $E\tilde{\cal E}=\exp\!\wedge K$, where $K=\sum_ie_i\wedge\varepsilon^i$ represents the identity transformation. So we can multiply this equation

$$\frac{1(\wedge M)^k}{k!}E=c_kE$$

by $\tilde{\cal E}$ to get

$$\frac{1(\wedge M)^k}{k!}\exp\!\wedge K=c_k\exp\!\wedge K$$

and take the scalar part, to isolate the polynomial coefficients

$$\frac{1(\wedge M)^k}{k!}\bullet\frac{1(\wedge K)^k}{k!}=c_k.$$

Or, multiply the $\exp\!\wedge(tM)$ equation by $\tilde{\cal E}$ to get

$$\big(\exp\!\wedge(tM)\big)\exp\!\wedge K=\Big(\sum_kc_kt^k\Big)\exp\!\wedge K.$$

This can be wedged with $\exp\!\wedge(-K)$ to isolate the polynomial, because $(\exp\!\wedge A)\wedge(\exp\!\wedge B)=\exp\!\wedge(A+B)$ if $A$ or $B$ has even grade.

We also have the adjugate, which can be used to calculate the matrix inverse:

$$\frac{1(\wedge M)^{n-1}}{(n-1)!}\bullet\frac{1(\wedge K)^n}{n!}=\text{adj}\,M.$$
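(The adjugate here is, in matrix terms, the transposed cofactor matrix, satisfying $\operatorname{adj}(M)\,M=(\det M)I$, which is why it gives the inverse; a quick check of that matrix-level fact, my sketch, independent of the GA formula above:)

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.normal(size=(n, n))

# adj(M): transposed cofactor matrix, built from (n−1)×(n−1) minors of M.
adj = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
        adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)

assert np.allclose(adj @ M, np.linalg.det(M) * np.eye(n))     # adj(M) M = (det M) I
assert np.allclose(adj / np.linalg.det(M), np.linalg.inv(M))  # so M^{-1} = adj(M)/det M
```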


The geometric product of two transformation bivectors, $M$ and $N$, has three parts (with grades $0,2,4$); each one is significant.

$$MN=M\bullet N+M\times N+M\wedge N$$

The first part is the trace of the matrix product:

$$M\bullet N=\sum_{i,j,k,l}M^i\!_jN^k\!_l(e_i\wedge\varepsilon^j)\bullet(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i\!_jN^k\!_l(\delta^j_k\delta^l_i)$$

$$=\sum_{i,j}M^i\!_jN^j\!_i=\text{tr}(M\boxdot N).$$

The second part is the commutator with respect to the matrix product:

$$M\times N=\sum_{i,j,k,l}M^i\!_jN^k\!_l(e_i\wedge\varepsilon^j)\times(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i\!_jN^k\!_l(\delta^j_ke_i\wedge\varepsilon^l+\delta^l_i\varepsilon^j\wedge e_k)$$

$$=\sum_{i,j,l}M^i\!_jN^j\!_le_i\wedge\varepsilon^l-\sum_{j,k,l}N^k\!_lM^l\!_je_k\wedge\varepsilon^j=M\boxdot N-N\boxdot M.$$

(This can also be justified by Jacobi's identity $(M\times N)\times a=M\times(N\times a)-N\times(M\times a)$.)
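(Both component formulas can be checked numerically by brute-force summation over the basis blades, my sketch: the scalar part comes out as $\operatorname{tr}(MN)$ and the grade-2 part as the matrix commutator $MN-NM$.)

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
M, N = rng.normal(size=(n, n)), rng.normal(size=(n, n))

scalar_part = 0.0                  # coefficient of grade 0 in  M • N
commutator = np.zeros((n, n))      # coefficients of e_i ∧ eps^l in  M × N

for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                c = M[i, j] * N[k, l]
                # (e_i ∧ eps^j) • (e_k ∧ eps^l) = δ^j_k δ^l_i
                if j == k and l == i:
                    scalar_part += c
                # (e_i ∧ eps^j) × (e_k ∧ eps^l) = δ^j_k e_i ∧ eps^l + δ^l_i eps^j ∧ e_k
                if j == k:
                    commutator[i, l] += c
                if l == i:
                    commutator[k, j] -= c   # eps^j ∧ e_k = −(e_k ∧ eps^j)

assert np.isclose(scalar_part, np.trace(M @ N))     # M • N = tr(M ⊡ N)
assert np.allclose(commutator, M @ N - N @ M)       # M × N = M ⊡ N − N ⊡ M
```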

The third part is similar to an outermorphism; when applied to a bivector from $V$, it produces

$$(M\wedge N)\bullet(a\wedge b)=(M\bullet a)\wedge(N\bullet b)+(N\bullet a)\wedge(M\bullet b).$$

Unfortunately, there doesn't seem to be a simple expression for the ordinary matrix product. This is the best I could find, again using $K=\sum_ie_i\wedge\varepsilon^i$:

$$M\boxdot N=\frac{M\times N+(M\bullet K)N+(N\bullet K)M-(M\wedge N)\bullet K}{2}=\sum_{i,j,k}M^i\!_jN^j\!_ke_i\wedge\varepsilon^k$$

Note that $M\bullet K=\text{tr}\,M$. And, of course, we have the defining relation $(M\boxdot N)\bullet a=M\bullet(N\bullet a)$.

(That formula is unnecessary for transformations between different spaces, say $V$ and $W$. Using the geometric algebra over $V\oplus V^*\oplus W\oplus W^*$, with basis $\{e_i,\varepsilon^i,f_i,\phi^i\}$, if $M=\sum_{i,j}M^i\!_je_i\wedge\varepsilon^j$ maps $V$ to itself, and $N=\sum_{i,j}N^i\!_je_i\wedge\phi^j$ maps $W$ to $V$, then the matrix product is simply $M\boxdot N=M\times N$.)


Method 2: Rotors

Any general linear transformation on $V$ can be represented by a rotor $R=r_{2k}r_{2k-1}\cdots r_2r_1$, a geometric product of an even number of invertible vectors in $V\oplus V^*$. Each vector squares to a positive or negative number. If the numbers of positive and negative vectors are both even, then the transformation's determinant is positive; if they're both odd, then the determinant is negative. The transformation is done by the "sandwich product"

$$a\mapsto RaR^{-1}=r_{2k}\cdots r_2r_1ar_1^{-1}r_2^{-1}\cdots r_{2k}^{-1}.$$

Any such transformation respects the geometric product: $(RAR^{-1})(RBR^{-1})=R(AB)R^{-1}$; in particular, for vectors, $(RaR^{-1})\cdot(RbR^{-1})=R(a\cdot b)R^{-1}=a\cdot b$, and $(RaR^{-1})\wedge(RbR^{-1})=R(a\wedge b)R^{-1}$. So the outermorphism uses the same formula for an arbitrary multivector: $A\mapsto RAR^{-1}$.

The composition of two transformations, with rotors $R$ and $S$, is represented by the geometric product $RS$:

$$a\mapsto R(SaS^{-1})R^{-1}=(RS)a(RS)^{-1}.$$


Here are some examples, using $\sigma_i=(e_i+\varepsilon^i)/\sqrt2,\;\tau_i=(e_i-\varepsilon^i)/\sqrt2$, and

$$a=\sum_ia^ie_i=a^1\frac{\sigma_1+\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}.$$

Reflection along $e_1$:

$$R=\tau_1\sigma_1=e_1\wedge\varepsilon^1$$

$$RaR^{-1}=a^1\frac{-\sigma_1-\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=-a^1e_1+a^2e_2+\cdots+a^ne_n$$

Stretching by factor $\exp\theta$ along $e_1$:

$$R=\exp\Big(\frac\theta2\tau_1\sigma_1\Big)=\cosh\frac\theta2+\tau_1\sigma_1\sinh\frac\theta2$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_1\sinh\frac\theta2\Big)\sigma_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_1\sinh\theta)+(\tau_1\cosh\theta+\sigma_1\sinh\theta)}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1e_1\exp\theta+a^2e_2+\cdots+a^ne_n$$

Circular rotation by $\theta$ from $e_1$ towards $e_2$ (note that $\sigma_2\sigma_1$ commutes with $\tau_2\tau_1$, and both square to $-1$ so Euler's formula applies) :

$$R=\exp\Big(\frac\theta2(\sigma_2\sigma_1-\tau_2\tau_1)\Big)=\exp\Big(\frac\theta2\sigma_2\sigma_1\Big)\exp\Big(-\frac\theta2\tau_2\tau_1\Big)$$

$$=\Big(\sigma_1\cos\frac\theta2+\sigma_2\sin\frac\theta2\Big)\sigma_1\Big(-\tau_1\cos\frac\theta2-\tau_2\sin\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cos\theta+\sigma_2\sin\theta)+(\tau_1\cos\theta+\tau_2\sin\theta)}{\sqrt2}+a^2\frac{(-\sigma_1\sin\theta+\sigma_2\cos\theta)+(-\tau_1\sin\theta+\tau_2\cos\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cos\theta+e_2\sin\theta)+a^2(-e_1\sin\theta+e_2\cos\theta)+a^3e_3+\cdots+a^ne_n$$

Hyperbolic rotation by $\theta$ from $e_1$ towards $e_2$:

$$R=\exp\Big(\frac\theta2(\tau_2\sigma_1-\sigma_2\tau_1)\Big)=\exp\Big(\frac\theta2\tau_2\sigma_1\Big)\exp\Big(-\frac\theta2\sigma_2\tau_1\Big)$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_2\sinh\frac\theta2\Big)\sigma_1\Big(-\tau_1\cosh\frac\theta2-\sigma_2\sinh\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_2\sinh\theta)+(\tau_1\cosh\theta+\sigma_2\sinh\theta)}{\sqrt2}+a^2\frac{(\tau_1\sinh\theta+\sigma_2\cosh\theta)+(\sigma_1\sinh\theta+\tau_2\cosh\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cosh\theta+e_2\sinh\theta)+a^2(e_1\sinh\theta+e_2\cosh\theta)+a^3e_3+\cdots+a^ne_n$$

Shear by $\theta$ from $e_1$ towards $e_2$:

$$R=\exp\Big(\frac\theta2e_2\wedge\varepsilon^1\Big)=1+\frac\theta2e_2\wedge\varepsilon^1$$

$$=-\frac14\Big(e_1-\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1-\varepsilon^1-\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1-\frac\theta4e_2\Big)$$

$$RaR^{-1}=a^1(e_1+\theta e_2)+a^2e_2+a^3e_3+\cdots+a^ne_n$$
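(A cross-check of these examples, my gloss rather than anything from the references: apart from the reflection, each rotor above is $\exp(B/2)$ for a bivector that can be rewritten as $B=\sum_{i,j} A^i\!_j\,e_i\wedge\varepsilon^j$, and on vectors of $V$ the sandwich $a\mapsto RaR^{-1}$ exponentiates the linear map $a\mapsto B\bullet a$, so it should agree with the matrix exponential of $A$ acting on coordinates.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
n, theta = 3, 0.7
a = rng.normal(size=n)

def unit(i, j):
    """Matrix of the bivector e_i ∧ eps^j acting on V (a matrix unit)."""
    U = np.zeros((n, n))
    U[i, j] = 1.0
    return U

# Stretch by exp(theta) along e_1:  B = theta e_1 ∧ eps^1
assert np.allclose(expm(theta * unit(0, 0)) @ a,
                   [a[0] * np.exp(theta), a[1], a[2]])

# Circular rotation from e_1 towards e_2:  B = theta (e_2 ∧ eps^1 − e_1 ∧ eps^2)
assert np.allclose(expm(theta * (unit(1, 0) - unit(0, 1))) @ a,
                   [a[0] * np.cos(theta) - a[1] * np.sin(theta),
                    a[0] * np.sin(theta) + a[1] * np.cos(theta), a[2]])

# Hyperbolic rotation from e_1 towards e_2:  B = theta (e_2 ∧ eps^1 + e_1 ∧ eps^2)
assert np.allclose(expm(theta * (unit(1, 0) + unit(0, 1))) @ a,
                   [a[0] * np.cosh(theta) + a[1] * np.sinh(theta),
                    a[0] * np.sinh(theta) + a[1] * np.cosh(theta), a[2]])

# Shear from e_1 towards e_2:  B = theta e_2 ∧ eps^1
assert np.allclose(expm(theta * unit(1, 0)) @ a,
                   [a[0], a[1] + theta * a[0], a[2]])
```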


This post is too long...

Some of this is described in Doran, Hestenes, Sommen, & Van Acker's "Lie Groups as Spin Groups": http://geocalc.clas.asu.edu/html/GeoAlg.html . (Beware that $E,e$ have different meanings from mine, though $K$ is the same.)

Solution 4:

The only thing required to form matrices of multivectors is to take care to retain the ordering of any products. If you have $ A = [a_{ij}] $ and $ B = [b_{ij}] $, where the matrix elements are multivector expressions, then your product is

$$A B = \begin{bmatrix}\sum_k a_{ik} b_{kj}\end{bmatrix},$$

and not $$A B = \begin{bmatrix}\sum_k b_{kj} a_{ik}\end{bmatrix}.$$
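(A quick way to see that the ordering matters, my illustration: take the entries from any non-commutative algebra; below, $2\times2$ real matrices stand in for multivector entries.)

```python
import numpy as np

# Sketch: entries of A and B are non-commuting objects (here 2×2 real matrices
# standing in for multivectors), so the order inside each Σ_k a_ik b_kj matters.
rng = np.random.default_rng(7)
N, d = 2, 2                      # N×N "matrix of multivectors", entries are d×d matrices
A = rng.normal(size=(N, N, d, d))
B = rng.normal(size=(N, N, d, d))

ordered  = np.array([[sum(A[i, k] @ B[k, j] for k in range(N)) for j in range(N)]
                     for i in range(N)])
swapped  = np.array([[sum(B[k, j] @ A[i, k] for k in range(N)) for j in range(N)]
                     for i in range(N)])

assert not np.allclose(ordered, swapped)   # the two "products" genuinely differ
```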

Such matrices can occur naturally when factoring certain multivector expressions. See, for example, the chapter "Spherical polar pendulum for one and multiple masses (Take II)", where multivector matrix factors were used to express the Lagrangian for a chain of $N$ spherical pendulums.