Understanding the Musical Isomorphisms in Vector Spaces

I am trying to solidify my understanding of the musical isomorphisms in the context of vector spaces. I believe I understand the definitions but would appreciate corrections if my understanding is not correct. Also, as I have had some difficulty tracking down related material, I would welcome any suggested references that expand upon this material and related results.

So, let $V$ be a finite-dimensional vector space over $\mathbb{R}$ with inner product $\langle \cdot, \cdot \rangle$, and let $V^*$ denote its dual. For each element $v \in V$, one can define a mapping $V \rightarrow V^*$ by $v \mapsto \langle \cdot, v \rangle$. By the Riesz representation theorem this mapping determines an isomorphism that allows us to identify each element of $V$ with a unique functional in $V^*$. The "flat" operator $\flat$ is defined by $v^{\flat}(u) = \langle u , v \rangle$ for all $u \in V$, and is thus just the Riesz isomorphism in the direction $V \rightarrow V^*$ defined above. On the other hand, given a linear functional $f_v \in V^*$, we know that there exists a unique $v \in V$ such that $f_v(u) = \langle u , v \rangle$ for all $u \in V$; the "sharp" operator $\sharp$ is defined by $f_v^{\sharp} = v$, and this represents the other direction of the Riesz isomorphism.

Is this understanding correct? Can anyone provide a reference to some examples/exercises that would explore these operators in a concrete way? The Wikipedia page on the topic isn't of much help.

Update: I am adding a bounty to this question in hopes that someone will be able to provide examples/exercises (or references to such) that illustrate the use of the musical isomorphisms in the context of vector spaces.


Solution 1:

"Musical isomorphism" sounds like "catastrophe theory". But, irony aside, what you think and write is absolutely correct. (A minor point: You are not "given a functional $f_v\in V^*$", but you are given a functional $f\in V^*$, and the isomorphism in question guarantees you the existence of a unique $v\in V$ such that $f(u)=\langle v,u\rangle$ for all $u\in V$.)

Now it seems that you cannot make the connection with the Wikipedia page about the "musical isomorphism". This has to do with the way a scalar product has to be encoded when the given basis is not orthonormal to start with. In this respect one can say the following:

If $V$ is a real vector space provided with a scalar product $\langle\cdot,\cdot\rangle$, and if $(e_1,\ldots,e_n)$ is an arbitrary (i.e. not necessarily orthonormal) basis of $V$, then (a) any vector $x\in V$ is represented by an $n\times 1$ column vector, again denoted $x\in{\mathbb R}^n$, such that $x=\sum_{k=1}^n x_k\> e_k$, and (b) there is a matrix $G=[g_{ik}]$ such that $$\langle x,y\rangle \ =\ x'\> G \> y\qquad \forall x,\forall y\ .$$ This implies that the functional $v^\flat$ appears in the form $$v^\flat(u)=\langle v,u\rangle=v'\> G\> u\ .$$ Putting $\sum_{i=1}^n v_i g_{ik}=: \bar v_k$, one therefore has $$v^\flat(u)=\sum_{k=1}^n \bar v_k\> u_k\ ,$$ so that one can interpret the coefficients $\bar v_k$ as "coordinates" of the functional $v^\flat$.
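For concreteness, here is a minimal numerical sketch of this computation (the matrix $G$ and the coordinate vectors below are made-up example values):

```python
import numpy as np

# Gram matrix G of a non-orthonormal basis (e_1, e_2), i.e. g_ik = <e_i, e_k>.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])

v = np.array([1.0, -2.0])   # coordinates of v in the basis (e_1, e_2)
u = np.array([0.5, 4.0])    # coordinates of u in the same basis

# Coordinates of the functional v^flat: v_bar_k = sum_i v_i g_ik, i.e. G' v (= G v since G is symmetric).
v_flat = G.T @ v

# v^flat(u) computed two ways: via the coefficients v_bar, and directly as v' G u.
print(v_flat @ u)   # sum_k v_bar_k u_k
print(v @ G @ u)    # v' G u -- the same number
```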

Similar considerations apply to the operator $\sharp$, and going through the motions one finds that the inverse of the matrix $G$ comes into play. The details are as follows: a given functional $f\in V^*$ appears "coordinatewise" as $$f(u)\ =\ \sum_{k=1}^n f_k u_k\quad (u\in V)\ ,$$ where the coefficient vector $f=(f_1,\ldots, f_n)$ is some real $n$-tuple. Now you want to represent $f$ by means of the scalar product as a vector $f^\sharp\in V$ in the following way: we should have $$f(u)=\langle f^\sharp, u\rangle\qquad \forall u\in V\ ,$$ which in terms of matrix products means $$f'\> u\ =\ (f^\sharp)'\> G\>u\qquad\forall u\in{\mathbb R}^n\ ,$$ hence $f'=(f^\sharp)'\>G$, or $$f^\sharp =G^{-1}\>f$$ (note that the matrix $G$ is symmetric). This formula gives you the coordinates of $f^\sharp$ with respect to the basis $(e_1,\ldots,e_n)$.
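And the $\sharp$ direction, continuing the same made-up example:

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # Gram matrix of the basis, as above

f = np.array([1.0, 5.0])              # coefficients f_k of a functional, f(u) = sum_k f_k u_k

f_sharp = np.linalg.solve(G, f)       # f^sharp = G^{-1} f

# Check f(u) = <f^sharp, u> = (f^sharp)' G u for a few random u.
for _ in range(3):
    u = np.random.randn(2)
    assert np.isclose(f @ u, f_sharp @ G @ u)
```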

In old books you read about "covariant" and "contravariant" coordinates of one and the same vector (or functional) $v$ resp. $f$.

Solution 2:

I do complex differential geometry, so for me these things pop up in the context of finite dimensional linear algebra. I'll address your question in those terms.

For references, I've profited much from Lee's Riemannian Manifolds and Coffman's Trace, metric and reality, an amazing text that treats these things in a coordinate-free way, which is almost essential to understanding what is going on.

So, let's take a finite dimensional real vector space $V$, and equip it with an inner product $g$. Here $g(u,v) = \langle u, v \rangle$ in your notation.

Each vector space $V$ is canonically isomorphic to its double dual $V^{**}$, but not canonically isomorphic to its dual $V^*$. Thus if we want to transport vectors from $V$ to $V^*$ (and we want to do this a lot; just note that the transpose of a vector is an element of the dual space), we need to choose an isomorphism $V \to V^*$.

The inner product $g$ is exactly the choice of such an isomorphism. More precisely, given $u$ in $V$ we send it to the linear form $v \mapsto g(u,v)$. We could also have sent $u$ to the linear form $v \mapsto g(v,u)$, but as $g$ is symmetric this gives the same form.

Suppose now that we have a vector space of the form $V^{\otimes p} \otimes V^{*\otimes q}$. An element of this space is often called a $(p,q)$ tensor. Pick one of the spaces $V$, say the first one. Then the inner product $g$ induces an isomorphism $V^{\otimes p} \otimes V^{*\otimes q} \to V^{\otimes p-1} \otimes V^{*\otimes q+1}$. In the literature, this map is called "lowering the index", or, in musical notation, the "flat" map.
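Here is a small sketch of "lowering an index" in coordinates, say on a $(1,1)$ tensor (the metric and tensor components below are arbitrary example values):

```python
import numpy as np

# Gram matrix g_ik for a basis of V, and a (1,1) tensor T with components T^i_j.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
T = np.array([[1.0, 0.0],
              [4.0, -2.0]])

# Lowering the contravariant index: (T_flat)_{kj} = g_{ki} T^i_j,
# turning the (1,1) tensor into a (0,2) tensor, i.e. a bilinear form.
T_flat = np.einsum('ki,ij->kj', G, T)   # same as G @ T

# Raising the index again with the inverse metric recovers T.
T_back = np.einsum('ki,ij->kj', np.linalg.inv(G), T_flat)
print(np.allclose(T_back, T))           # True
```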

The classical "raising of the index", or the "sharp" map, is now just the inverse of the isomorphism $V \to V^*$ given by $g$. As we are working over finite-dimensional spaces, so that $V^{**} = V$, it coincides with the map $V^* \to V^{**} = V$ induced by the dual inner product on $V^*$.

This is far from being (only) abstract nonsense. As one example, this point of view clarifies immensely the definition of positive-definiteness of a matrix $A$. Recall that a matrix $A$ (in the canonical basis of $\mathbb R^n$) is positive-definite if ${}^t u A u > 0$ for all non-zero vectors $u$. Recalling that we have the standard inner product on $\mathbb R^n$, we re-interpret this condition as saying that $\langle Au, u \rangle > 0$ for all non-zero vectors $u$.

But now it is immediately clear that this property depends on the choice of an inner product, not on a choice of basis. So, let $g$ be an inner product on $V$. Then we say that a linear morphism $f : V \to V$ is $g$-positive-definite if the bilinear form (i.e. map $V \to V^*$) $g \circ f$ is positive-definite, in the sense that $g \circ f (u,u) = g(f(u),u) > 0$ for all non-zero $u$. Note, however, that the bilinear form $(u,v) \mapsto g(f(u),v)$ does not in general coincide with $(u,v) \mapsto g(u,f(v))$, so $g \circ f$ need not be symmetric.

To get around this, note that usually we only talk about positive-definiteness for symmetric matrices $A$. Such a matrix represents a linear map $f : V \to V$, or in other words, an element of $V^* \otimes V$. This last space is self-dual, so the adjoint $f^*$ is again a linear map $V \to V$. Saying that $A$ is symmetric means exactly that $f^* = f$. We are now free to define a $g$-positive-definite linear map $f : V \to V$ to be a symmetric map, as above, that satisfies the positivity condition.
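As a numerical illustration (the helper below is my own hypothetical sketch, not a standard routine, and the matrices are made-up example values): in coordinates with Gram matrix $G$, the bilinear form $(u,v)\mapsto g(f(u),v)$ has matrix $F'\,G$, so $g$-self-adjointness and $g$-positive-definiteness can be checked on that matrix.

```python
import numpy as np

def is_g_positive_definite(F, G, tol=1e-10):
    """Hypothetical helper: check whether the linear map with matrix F is
    g-self-adjoint and g-positive-definite for the inner product <u, v>_g = u' G v."""
    B = F.T @ G                                   # matrix of the bilinear form (u, v) -> g(f(u), v)
    symmetric = np.allclose(B, B.T, atol=tol)     # g-self-adjointness: F' G = G F
    return symmetric and bool(np.all(np.linalg.eigvalsh(B) > tol))

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])                        # a made-up inner product on R^2
S = np.array([[3.0, 0.5],
              [0.5, 1.0]])                        # symmetric positive definite
F = np.linalg.solve(G, S)                         # F = G^{-1} S, so g(f(u), v) = u' S v

print(is_g_positive_definite(F, G))               # True
print(is_g_positive_definite(np.array([[0.0, 1.0],
                                        [-1.0, 0.0]]), G))   # False: not g-self-adjoint
```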

As another example, this point of view will make clear what "trace with respect to a metric" actually means. (Hint: the trace of a linear map is independent of the choice of basis and has nothing to do with an inner product. A 2-form, however, is something else. If only there were a way to convert a 2-form into a linear map.)
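Following the hint, here is a short sketch of what "trace with respect to a metric" comes down to in coordinates (the example values are made up): raise one index of the 2-form with $\sharp$, i.e. multiply by $G^{-1}$, and take the ordinary trace.

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # matrix of the metric g
H = np.array([[1.0, 0.0],
              [0.0, 4.0]])            # matrix of a symmetric bilinear form h

# tr_g(h): turn h into an endomorphism by raising one index, then take the usual trace.
trace_g = np.trace(np.linalg.solve(G, H))   # tr(G^{-1} H) = g^{ij} h_{ij}

# The result does not depend on the basis: change basis by an invertible P and recompute.
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])
G2, H2 = P.T @ G @ P, P.T @ H @ P
print(np.isclose(trace_g, np.trace(np.linalg.solve(G2, H2))))   # True
```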

Solution 3:

My copy of Spivak is in storage, unfortunately, but I think he discusses it in his Comprehensive Introduction to Differential Geometry (volume 1, probably). I would also guess that Boothby discusses it in his introduction to differentiable manifolds, but I am again not sure. I am almost certain that Spivak has a great exercise that works through showing that this isomorphism is not canonical.

As far as I can tell, this question shows up in three places. The first is in Hilbert space theory, where Willie Wong's remark is important -- in particular, you end up with some differences between the Banach space adjoint of an operator and the Hilbert space adjoint. This caught me by surprise when I first studied this. Reed & Simon, Functional Analysis, volume 1, has a discussion of this.

The second context I have seen this in is Riemannian geometry. Here, you end up with an isomorphism between $TM$ and $T^*M$ induced by the metric. The third context is symplectic geometry, where the symplectic form induces this isomorphism (no longer called a musical isomorphism). This is one of the cases covered by Willie Wong's second comment. (Presumably this also shows up in Lorentzian geometry, but I haven't ever studied that myself.)

This isomorphism induced by the metric is quite important in Riemannian geometry -- it's what allows you to do the "raising and lowering of indices". This is why Riemannian geometry books often have discussions of these isomorphisms.

Another possible source for a discussion of this is in Wendl's notes on connections, Appendix A. http://www.homepages.ucl.ac.uk/~ucahcwe/connections.html

As for problems, I don't think any of these references actually gives you exercises. However, there are some simple things you can compute directly that are worth playing around with.

Let $V$ be an $n$-dimensional real vector space with an inner product on it.
Choose a basis and use this to identify $V$ with $\mathbb{R}^n$. Use the dual basis to identify $V^*$ with $\mathbb{R}^n$.

(a) Show that the inner product becomes $\langle x, A y\rangle$, where $\langle\cdot,\cdot\rangle$ denotes the standard inner product on $\mathbb{R}^n$ and $A$ is a matrix. Show $A$ is positive definite and symmetric. Also show that any positive definite symmetric matrix induces an inner product on $V$.
(b) What does the "musical" isomorphism induced by the inner product look like as a map $\mathbb{R}^n \to \mathbb{R}^n$, using the isomorphisms to $V$ and $V^*$ given by the basis/dual-basis identifications?
(c) How do these isomorphisms transform if you change basis for $V$ (and thus change dual basis for $V^*$)? (A numerical sketch of (a)-(c) appears below.)
(d) How about if you take arbitrary bases for $V$ and $V^*$ (i.e. not necessarily dual bases)?
(e) Now consider how to represent a linear map $W \to V$ and a linear map $V \to W$, composed/precomposed with the musical isomorphisms.
(f) The musical isomorphism allows you to put an inner product on $V^*$. What does its corresponding self-adjoint positive definite matrix look like?
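Here is a rough numerical sketch of parts (a)-(c) (the particular basis and change-of-basis matrix are arbitrary choices of mine, with the standard inner product on $\mathbb{R}^3$ playing the role of $\langle\cdot,\cdot\rangle$ on $V$):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((3, 3))          # columns = a (generically non-orthonormal) basis of V = R^3

# (a) In these coordinates the inner product becomes <x, A y> with A_ij = <e_i, e_j>.
A = E.T @ E
print(np.allclose(A, A.T), np.all(np.linalg.eigvalsh(A) > 0))   # symmetric, positive definite

# (b) Flat is x |-> A x and sharp is xi |-> A^{-1} xi.
x, y = rng.standard_normal(3), rng.standard_normal(3)
x_flat = A @ x
print(np.isclose(x_flat @ y, (E @ x) @ (E @ y)))                 # v^flat(w) = <v, w>

# (c) Change of basis: new basis vectors are the columns of E @ P; A transforms as P' A P.
P = rng.standard_normal((3, 3))
A_new = (E @ P).T @ (E @ P)
print(np.allclose(A_new, P.T @ A @ P))                           # True
```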

Most of these are actually redundant, but once you've fiddled with this, I think you will understand the concept a lot better. I don't think it's the musical isomorphism that you don't understand, but the non-canonical nature of the isomorphism between $V$ and $V^*$.

Solution 4:

We can go back to Euclidean space to get some concrete intuition for what's going on. A motivation for introducing differential forms comes in the context of work being done on a particle along a curve $\gamma$ with some force field present. Now, one learns in a manifolds course that what is essential for integration is only the fact that we have linear functionals on the tangent space at each point; physically, this means we know the amount of work needed to push the particle an infinitesimal amount in some tangent direction.

If we have an inner product, there's a very natural way to associate vector fields with such linear functionals. Thinking about this physically, imagine the particle on our curve with some tangent vector; we want to see how much the force field is pushing the particle in the direction of this tangent vector, and the most obvious thing to do is to look at the angle formed, which is defined using the Euclidean dot product. So given any vector field, we can use the dot product to get a linear way of assigning the amount of work required to push the particle in some direction.

If we want to formulate the physical intuition above in terms of Riemannian manifolds, it would look something like this: for any vector field $X$ on a Riemannian manifold $M$ with metric $\langle\cdot, \cdot\rangle_p$, the map $\langle X_p, \cdot \rangle_p$ defines a linear functional on $T_p M$ at each point $p$. This is exactly the identification of the tangent space with its cotangent space induced by the metric.
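To make the work computation concrete, here is a small numerical sketch in Euclidean $\mathbb{R}^2$ (the force field and curve are arbitrary choices of mine): the work is the integral of the 1-form $F^\flat$ along the curve, i.e. of $\langle F(\gamma(t)), \gamma'(t)\rangle$.

```python
import numpy as np

def F(p):                     # a made-up force field F(x, y) = (-y, x)
    return np.array([-p[1], p[0]])

t = np.linspace(0.0, 2.0 * np.pi, 2001)
gamma = np.column_stack([np.cos(t), np.sin(t)])          # the unit circle
velocity = np.gradient(gamma, t, axis=0)                 # gamma'(t), numerically

# F^flat(gamma(t)) applied to gamma'(t) is <F(gamma(t)), gamma'(t)>; integrate over t.
integrand = np.array([F(p) @ v for p, v in zip(gamma, velocity)])
work = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))   # trapezoid rule
print(work)    # approximately 2*pi for this field and curve
```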