Isomorphisms Between a Finite-Dimensional Vector Space and its Dual

So, we know that there is no "natural" isomorphism between a finite-dimensional vector space $V$ and its dual $V^{\vee}$. However, given a basis $(e_1, \dots, e_n)$ for $V$, we can always construct a dual basis $(e^1, \dots, e^n)$ for $V^{\vee}$ satisfying $e^i(e_j) = \delta^i_j$. It follows that $V \approx V^{\vee}$ simply because the two spaces have the same finite dimension $n$. Now, it seems to me that we can construct an isomorphism by assigning $$ e_i \mapsto e^i $$ and extending by linearity. That is, for an arbitrary vector $v \in V$ written as $v = \sum v^i e_i$, we can specify a linear map

$$\phi:V \rightarrow V^{\vee}$$

by

$$ \phi\big(\sum v^i e_i\big) := \sum v^i e^i. $$

It is a quick exercise to check that $\phi$ is an isomorphism. Although this construction seems correct, something about it doesn't quite sit right with me. For one thing, there is no way to make sense of the summation convention here: the right-hand side always contains two superscripts, so an explicit summation sign is required. The other thing that feels off is taking a lower index and moving it to an upper index, i.e., $e_i \mapsto e^i$ (which, of course, is exactly why the summation convention fails).
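A minimal NumPy sketch of this construction in coordinates, assuming vectors of $V$ are represented by their coordinate columns in the basis $(e_i)$ and elements of $V^{\vee}$ by the rows of their values on that basis (the variable names below are only for illustration):

```python
import numpy as np

# Coordinates relative to the chosen basis (e_1, ..., e_n):
# a vector v = sum_i v^i e_i is the column (v^1, ..., v^n),
# a functional is the row of its values on (e_1, ..., e_n),
# so the dual basis vector e^i is the i-th standard row.
n = 3
rng = np.random.default_rng(0)

v = rng.standard_normal(n)      # components v^i of a vector in V
phi_v = v.copy()                # phi(v) = sum_i v^i e^i: same numbers, now read as a row

# phi(v) evaluated on w = sum_j w^j e_j gives sum_i v^i w^i:
w = rng.standard_normal(n)
assert np.isclose(phi_v @ w, np.dot(v, w))
```

In these coordinates $\phi$ is literally the identity on the components, which already hints that the whole construction is tied to the particular basis used to set up the coordinates.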

So, with this background my questions are

  1. Is there any sense in which the above isomorphism is "favored" or "canonical"?

  2. Is there another way to construct an isomorphism between a vector space and its dual, one that would "conserve indices", for lack of a better way to state it? My guess is that the answer is no in general, but that it becomes possible if one assumes $V$ has an inner product.


Solution 1:

Question 1: The problem is that if you change the basis, then the dual basis also changes, but in a different way. More precisely, suppose we decide to use the basis $f_i = M e_i$ instead of the basis $e_i$. Then the corresponding dual basis $f_i^{\ast}$ is given by $f_i^{\ast} = e_i^{\ast} M^{-1}$ (here $e_i^{\ast}$ is regarded as a linear functional $V \to k$, so that composing with $M^{-1}$ on the right makes sense). Indeed the defining property of the dual basis, namely that $$e_i^{\ast}(e_j) = \delta_{ij}$$

(where $\delta_{ij} = 1$ if $i = j$ and is $0$ otherwise) is satisfied here, since $$f_i^{\ast}(f_j) = e_i^{\ast} M^{-1} M e_j = e_i^{\ast} e_j = \delta_{ij}.$$

Note that $M^{-1}$ acts on the right instead of on the left, so if one insists on writing linear transformations as matrices acting on the left, then we additionally need to take the transpose: the dual basis transforms by $(M^{-1})^T = (M^T)^{-1}$ rather than by $M$.
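A quick numerical check of this transformation law, sketched in NumPy under the convention that basis vectors are stored as the columns of a matrix `E` and dual basis functionals as the rows of `inv(E)` (the names are only for illustration):

```python
import numpy as np

# Basis vectors as columns of E, dual basis functionals as rows of inv(E),
# everything expressed in a fixed background coordinate system.
n = 3
rng = np.random.default_rng(1)

E = rng.standard_normal((n, n))      # columns e_1, ..., e_n (generically invertible)
M = rng.standard_normal((n, n))      # change of basis, f_i = M e_i
F = M @ E                            # columns f_1, ..., f_n

dual_E = np.linalg.inv(E)            # rows are e_1*, ..., e_n*  (dual_E @ E = identity)
dual_F = np.linalg.inv(F)            # rows are f_1*, ..., f_n*

# The dual basis transforms by M^{-1} acting on the right:
assert np.allclose(dual_F, dual_E @ np.linalg.inv(M))

# Equivalently, stacking the dual basis as columns, it transforms by (M^T)^{-1} on the left:
assert np.allclose(dual_F.T, np.linalg.inv(M.T) @ dual_E.T)
```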

This is a reflection of the fact that taking dual spaces is a contravariant functor rather than a covariant one.

Question 2: For $V$ finite-dimensional, specifying an isomorphism $\phi : V \to V^{\ast}$ is equivalent to specifying a nondegenerate bilinear form $B : V \times V \to k$, via $B(v, w) = \phi(v)(w)$. This form need not be symmetric or an inner product in general. (In fact the notion makes sense over an arbitrary field, and an arbitrary field need not have a notion of positivity.)
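A small NumPy sketch of this correspondence over $k = \mathbb{R}$, with the matrix `B` and the helper functions below as hypothetical names for illustration: a single invertible matrix encodes both the isomorphism and the form.

```python
import numpy as np

# An invertible matrix B simultaneously encodes, in coordinates:
#   * an isomorphism  phi: V -> V*,  phi(v) = the functional  w |-> v^T B w,
#   * a nondegenerate bilinear form  B(v, w) = v^T B w.
n = 3
rng = np.random.default_rng(2)

B = rng.standard_normal((n, n))      # generically invertible, not symmetric in general

def phi(v):
    """Return phi(v) in V* as the row vector of its coefficients."""
    return v @ B

def bilinear_form(v, w):
    return v @ B @ w

v, w = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(phi(v) @ w, bilinear_form(v, w))

# Nondegeneracy of the form  <=>  invertibility of B  <=>  phi is an isomorphism.
assert np.linalg.matrix_rank(B) == n
```

An inner product is the special case where the matrix is symmetric and positive-definite, which is the situation guessed at in Question 2 of the post.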