Canonical isomorphism between $\mathfrak{so}(3)$ and $\mathbb R^3$ with vector cross product
There is a well-known isomorphism between the Lie algebra $\mathfrak{so}(3)$ and $\mathbb{R}^3$ which maps the Lie bracket to the vector cross product. It looks like
$$ \begin{pmatrix} 0 & -z &y\\ z & 0 & -x\\ -y & x & 0 \end{pmatrix} \mapsto \begin{pmatrix}x\\y\\z\end{pmatrix}. $$
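This correspondence is easy to check numerically. Below is a sketch of the inverse map (vector to matrix, often written $x\mapsto\hat{x}$; the name `hat` is mine), together with the two defining properties: $\hat{x}$ acts as "cross product with $x$", and the matrix commutator goes over to the cross product.

```python
import numpy as np

def hat(v):
    """R^3 -> so(3): the inverse of the displayed map, sending (x,y,z) to
    the skew-symmetric matrix shown above."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

rng = np.random.default_rng(0)
x, y, w = (rng.standard_normal(3) for _ in range(3))

# hat(x) acts on a vector as cross product with x ...
assert np.allclose(hat(x) @ w, np.cross(x, w))
# ... and the commutator corresponds to the cross product:
assert np.allclose(hat(x) @ hat(y) - hat(y) @ hat(x), hat(np.cross(x, y)))
```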
There is a more geometric description to this isomorphism, which in broad strokes involves identifying $\mathbb{R}^3$ with $\Lambda^2(\mathbb{R}^3)$ via the Hodge star, and identifying $\mathfrak{so}(3)\subseteq \operatorname{End}(\mathbb{R}^3)\cong\mathbb{R}^3\otimes(\mathbb{R}^3)^*$ with $\Lambda^2(\mathbb{R}^3)$.
If you work in $\mathbb{R}^3$ with its canonical basis, canonical inner product, and canonical orientation, it's easy to see that these isomorphisms yield the specified result. Now I'm trying to check the details of these identifications for a general vector space (with $\dim = 3$ injected into the argument when necessary), giving all the isomorphisms explicitly and without choosing a basis. I found this excellent answer by Qiaochu Yuan to be helpful for some of the steps.
Let me walk through the steps, as I see them:
- First let's set notation. Let $V$ be an arbitrary real vector space with inner product $g$ and volume form $\Omega$. The automorphism group $SO(V)=\{O\in \operatorname{Aut}(V)\mid g(Ov,Ow)=g(v,w)\}$ has Lie algebra $\mathfrak{so}(V)=\{X\in \operatorname{End}(V)\mid g(Xv,w)+g(v,Xw)=0\}$.
- Therefore the map defined by $\tilde{\alpha}(X)=\operatorname{eval}\circ(\flat\circ X\otimes\operatorname{id})\colon v\otimes w\mapsto g(Xv,w)$ is skew-symmetric. Here $\operatorname{eval}$ is the evaluation map $V\otimes V^*\to\mathbb{R}$ which takes $v\otimes\sigma\mapsto \sigma(v)$, and $\flat$ is the canonical isomorphism $V\to V^*$ induced by the non-degenerate bilinear form $g$, given by $u\mapsto (v\mapsto g(u,v))$.
- Since $\tilde{\alpha}(X)\colon V\otimes V\to \mathbb{R}$ is skew-symmetric, it factors through a map $\alpha(X)\colon\Lambda^2(V)\to\mathbb{R}$, i.e. $\alpha(X)\in\Lambda^2V^*$. Now, applying the inverse map $\sharp\colon V^*\to V$ to the first tensor factor, we have $\beta(X)=(\sharp\wedge\operatorname{id})(\alpha(X))\in V\wedge V^*$. To make this construction a little more concrete: given a one-parameter group of rotations in the plane spanned by two vectors $v,w$, one can check that its generator $X$ (an element of the Lie algebra) satisfies $\beta(X)=w\wedge v^\flat$.
- Apply $\sharp$ to the second tensor factor to get an element of the standard exterior algebra: $\gamma(X)=(\operatorname{id}\wedge\sharp)(\beta(X))\in\Lambda^2V$.
- Apply the Hodge star operator to get an element of the vector space $\delta(X)=*\gamma(X)\in V.$
- Check that $\delta$ is an isomorphism of Lie algebras: $\delta([X,Y])=\delta(X)\times\delta(Y)=*(\delta(X)\wedge\delta(Y))$. (Clearly it's linear; I suppose it should also be verified that $\delta$ is bijective.)
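In the standard case $V=\mathbb{R}^3$ with the dot product and standard orientation (so that $\flat$ and $\sharp$ are identity maps), the whole chain collapses to "transpose, then contract with the volume form", and both remaining worries in the last step (bijectivity and the bracket condition) can at least be sanity-checked numerically. A sketch, with my own helper names (`hat` is the inverse of the map displayed at the top):

```python
import numpy as np

# the volume form Omega on R^3 = the Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def hat(v):
    """R^3 -> so(3), the inverse of the map in the question."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def delta(X):
    """so(3) -> R^3: with the standard metric, flat and sharp are identities,
    gamma(X)_{ij} = g(X e_i, e_j) = (X^T)_{ij}, and the Hodge star reads
    (*omega)_k = (1/2) eps_{kij} omega_{ij}."""
    gamma = X.T
    return 0.5 * np.einsum('kij,ij->k', eps, gamma)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
X, Y = hat(x), hat(y)

assert np.allclose(delta(X), x)  # delta inverts hat, so delta is bijective
assert np.allclose(delta(X @ Y - Y @ X), np.cross(x, y))  # the bracket condition
```

This is only a check in the standard basis, of course, not the basis-free argument asked for.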
So I have several problems.
- Step 2 above seems very convoluted and inelegant, with all the raising and lowering operators just to check whether something is an antisymmetric map.
- Step 2 is not just convoluted, but of dubious construction. Usually one considers the exterior power of a vector space with itself, not a vector space wedged with its dual space. That construction seems nonstandard to me, so I'm in unfamiliar territory and wonder whether this is the wrong path.
- I can't really see how to do step 6. I can't push the Lie bracket through all these opaque constructions.
- The $\operatorname{eval}$ map used in step 2 is also somewhat mysterious, at least in one direction. That is, there's really only a canonical injection $\operatorname{eval}\colon V\otimes V^*\to \operatorname{End}(V)$ given by $v\otimes\sigma\mapsto (w\mapsto \sigma(w)v)$. In the case that $V$ is not finite dimensional, this will not be an isomorphism and does not have an inverse. In the finite dimensional case we do have an inverse: any endomorphism can be written as a sum of tensor products of vectors and dual vectors, but not canonically. Mapping $\mathfrak{so}(V)$ to $\Lambda^2(V)$ seems to require as an intermediate step mapping $\operatorname{End}(V)$ to $V\otimes V^*$. Can this be done canonically?
- Given the concerns about the direction of the map $V\otimes V^*\to \operatorname{End}(V)$, perhaps I should try to go the other way, but I can't make any progress that way either, because I don't know any identity for the Hodge star operator that will allow me to evaluate expressions like $*(X\wedge Y)$. How does the Hodge star operator interact with the wedge product?
Those are the issues I'm running into with my approach. I'd appreciate any resolutions to those issues; alternatively, if there is a better approach to this question, or some reason why the question itself is not a good one, I'd love to hear about it. Thanks.
Solution 1:
Convoluted and inelegant is in the eye of the beholder. Once you get used to identifying in your head everything which is related by a raising or lowering operator, things aren't so bad.
$V$ is an inner product space, so $V \otimes V^{\ast}$ can be canonically identified with both $V \otimes V$ and $V^{\ast} \otimes V^{\ast}$.
I have some ideas for how to do this, but they're not particularly nice. One way is to use the representation theory of $\text{SO}(V)$. Because everything you've done is canonical, it's all $\text{SO}(V)$-equivariant. $V$ is an irreducible representation, so it follows by Schur's lemma that any two isomorphisms $\Lambda^2(V) \to V$ are a scalar multiple of each other. In particular, the Lie bracket and the exterior product are scalar multiples of each other, and from here you can probably finish by arguing using inner products.
The inverse of a canonical map, when the inverse exists, is also canonical.
See above.
Solution 2:
This problem is nearly trivial in the index notation, not because passing to the index notation involves picking a basis, but simply because it's much easier to express the relevant maps in that notation. For example, in the comments to Qiaochu's answer you define the iso $\Lambda^2V\rightarrow\mathfrak{so}(3)$ by $a\wedge b\mapsto a\otimes b^\flat-b\otimes a^\flat$, using the fact that every element of $\Lambda^2V$ is of the form $a\wedge b$ for some $a$ and $b$ (in dimension $>3$ we would have to write "a sum of elements of the form..."). But in the index notation we just let the metric be $g_{ab}$ and write $X^{ab}\mapsto X^{ac}g_{cb}$, with no need to pass to a decomposition into simple tensors.
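The index computation can also be mirrored numerically with `einsum`. The sketch below (variable names are mine) generates a random metric $g_{ab}$ and checks that lowering one index of an antisymmetric $X^{ab}$ really lands in $\mathfrak{so}(V)$, i.e. $g(Xv,w)+g(v,Xw)=0$, which in matrix form is $X^Tg+gX=0$:

```python
import numpy as np

rng = np.random.default_rng(0)

# a random inner product g_{ab}: any symmetric positive-definite matrix
A = rng.standard_normal((3, 3))
g = A @ A.T + 3.0 * np.eye(3)

# an antisymmetric X^{ab}, i.e. an element of Lambda^2 V in index form
M = rng.standard_normal((3, 3))
X_upper = M - M.T

# lower the second index: X^a{}_b = X^{ac} g_{cb}
X_end = np.einsum('ac,cb->ab', X_upper, g)

# the resulting endomorphism is in so(V): X^T g + g X = 0
assert np.allclose(X_end.T @ g + g @ X_end, np.zeros((3, 3)))
```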
The key fact in the proof using index notation is that $$\Omega_{abc}\Omega^{cde}=\delta^d_a\delta^e_b-\delta^e_a\delta^d_b$$ where $\Omega_{abc}$ is the volume form (usually we would write $\varepsilon_{abc}$) and $\Omega^{abc}$ is the corresponding volume form on the dual space (or equivalently just what you get when you raise all the indices of $\Omega_{abc}$ using the (inverse of the) metric).
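This contraction identity is itself easy to verify by brute force, e.g. with `einsum` (a small sketch; in the Euclidean setting the index positions don't matter, so both copies of the volume form are the same array):

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# the Levi-Civita symbol in three dimensions
eps = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps[p] = perm_sign(p)

d = np.eye(3)  # the Kronecker delta
lhs = np.einsum('abc,cde->abde', eps, eps)
rhs = np.einsum('ad,be->abde', d, d) - np.einsum('ae,bd->abde', d, d)
assert np.allclose(lhs, rhs)
```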
If we want to give an index-free proof we will need something equivalent to the above identity. This turns out to be a formula for the wedge product of the duals of some things $$*(a\wedge b)\wedge*(c\wedge d)=b\wedge d\langle a,c\rangle-b\wedge c\langle a,d\rangle-a\wedge d\langle b,c\rangle+a\wedge c\langle b,d\rangle$$ (Warning! Works only in dimension $=3$; it is the case $n=3$, $p=q=2$ of the general formula below.)
If we pick $x,y\in V$ then in $\Lambda^2V$ they map to $*x,*y$. In order to map these into $\mathfrak{so}(3)$ we need to represent them as $*x=a\wedge b$ and $*y=c\wedge d$. Then these map to $\mathfrak{so}(3)$ to give $a\otimes b^\flat-b\otimes a^\flat$ and $c\otimes d^\flat-d\otimes c^\flat$. We can then take their Lie bracket in $\operatorname{End}(V)$, getting $8$ terms, the first of which is $$(a\otimes b^\flat)\circ(c\otimes d^\flat)=a\otimes d^\flat\langle b,c\rangle.$$ If you write out these $8$ terms you'll spot that they are the image under our iso from $\Lambda^2V$ of $$a\wedge d\langle b,c\rangle-a\wedge c\langle b,d\rangle-b\wedge d\langle a,c\rangle+b\wedge c\langle a,d\rangle$$ which by our above identity is $$-*(a\wedge b)\wedge*(c\wedge d)=-x\wedge y.$$ Mapping back into $V$ from $\Lambda^2V$ gives $-*(x\wedge y)$, which is the cross-product up to an overall sign! (The sign is an artifact of the sign convention in the iso $\Lambda^2V\to\mathfrak{so}(3)$: using $a\wedge b\mapsto b\otimes a^\flat-a\otimes b^\flat$ instead removes it.)
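Whatever sign conventions one settles on, the dimension-$3$ formula itself can be machine-checked: apply $*$ to both sides and use $*(u\wedge v)=u\times v$ for vectors $u,v\in\mathbb{R}^3$, which turns it into a pure cross-product identity (signs as in the $n=3$, $p=q=2$ case of the general formula at the end of this answer). A quick numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))

# star of the left-hand side: *( *(a^b) ^ *(c^d) ) = (a x b) x (c x d)
lhs = np.cross(np.cross(a, b), np.cross(c, d))

# star of the right-hand side, term by term, using *(u^v) = u x v
rhs = (np.cross(b, d) * (a @ c) - np.cross(b, c) * (a @ d)
       - np.cross(a, d) * (b @ c) + np.cross(a, c) * (b @ d))

assert np.allclose(lhs, rhs)
```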
So where does our identity above come from? I don't know. I just proved it using index notation. The general form is the following:
We're working in dimension $n$ and we want to know $$*(a_1\wedge\dots\wedge a_p)\wedge*(b_1\wedge\dots\wedge b_q).$$ Let $k=p+q-n$. Let $P=\{1,\dots,p\}$ and $Q=\{1,\dots,q\}$. For $K\subseteq P$ define $a_K$ to be $$a_{k_1}\wedge\dots\wedge a_{k_{|K|}}$$ where $k_1<\dots<k_{|K|}$ are the elements of $K$ in increasing order. Also, define $\Sigma K$ to be the sum of the elements of $K$. Similarly $b_L$ and $\Sigma L$ for $L\subseteq Q$.
Then $$*(a_1\wedge\dots\wedge a_p)\wedge*(b_1\wedge\dots\wedge b_q)=\sum_{|K|=k}\sum_{|L|=k}(-1)^{\Sigma K+\Sigma L}a_{P\setminus K}\wedge b_{Q\setminus L}\left\langle a_K,b_L\right\rangle$$ where the inner product is induced on $\Lambda^kV$ by the one on $V$.
I can prove this formula using index notation, but surprisingly I can't find any references for it. Perhaps I will ask a question to see if anyone recognises it and perhaps they'll be able to give an index-free proof.
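For what it's worth, the general formula can also be spot-checked numerically. The sketch below (all names mine) represents $k$-vectors as fully antisymmetric arrays, with the conventions $(a\wedge b)^{ij}=a^ib^j-a^jb^i$ and $(*A)_{j_1\dots j_{n-k}}=\frac{1}{k!}A^{i_1\dots i_k}\varepsilon_{i_1\dots i_k j_1\dots j_{n-k}}$, and tests the case $n=4$, $p=2$, $q=3$ (so $k=1$) on random vectors:

```python
import itertools
import math
import numpy as np

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alt(T):
    """Full antisymmetrization of a k-index array."""
    out = np.zeros_like(T)
    for p in itertools.permutations(range(T.ndim)):
        out += perm_sign(p) * np.transpose(T, p)
    return out / math.factorial(T.ndim)

def wedge(A, B):
    # determinant convention: (a ^ b)^{ij} = a^i b^j - a^j b^i
    return math.comb(A.ndim + B.ndim, A.ndim) * alt(np.multiply.outer(A, B))

def levi_civita(n):
    eps = np.zeros((n,) * n)
    for p in itertools.permutations(range(n)):
        eps[p] = perm_sign(p)
    return eps

n = 4
eps = levi_civita(n)

def hodge(A):
    k = A.ndim
    return np.einsum(A, list(range(k)), eps, list(range(n)),
                     list(range(k, n))) / math.factorial(k)

rng = np.random.default_rng(0)
a = [rng.standard_normal(n) for _ in range(2)]   # p = 2
b = [rng.standard_normal(n) for _ in range(3)]   # q = 3, so k = p + q - n = 1

lhs = wedge(hodge(wedge(a[0], a[1])),
            hodge(wedge(wedge(b[0], b[1]), b[2])))

# right-hand side: sum over singletons K = {i}, L = {j} (1-based in the formula,
# so (-1)^{Sigma K + Sigma L} = (-1)^{(i+1)+(j+1)} = (-1)^{i+j} for 0-based i, j)
rhs = np.zeros((n,) * 3)
for i in range(2):
    for j in range(3):
        rest_b = [b[m] for m in range(3) if m != j]
        rhs += ((-1) ** (i + j)
                * (a[i] @ b[j])                               # <a_K, b_L>
                * wedge(a[1 - i], wedge(rest_b[0], rest_b[1])))

assert np.allclose(lhs, rhs)
```

This is of course just a check of one instance, not a proof, but it is reassuring that the signs come out right.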