How to understand "tensor" in commutative algebra?

Solution 1:

Think of the tensor product of two modules as an object that is built out of the original modules, but with linearity encoded in each variable separately. This means that, as opposed to taking a sum in both variables simultaneously as in a direct product: $(m,n) + (m',n') = (m+m',n+n')$, the operations are naturally "one variable at a time", as in: $m \otimes n + m' \otimes n = (m+m') \otimes n$. Similarly, multiplication by a scalar is done one variable at a time: $\alpha(m \otimes n) = (\alpha m) \otimes n$ instead of both variables at once as in a direct product.
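The "one variable at a time" identities can be checked numerically. A minimal sketch, with the (standard) assumption that simple tensors $m \otimes n$ in $\mathbb{R}^2 \otimes \mathbb{R}^2$ are modeled as outer products $mn^T$:

```python
import numpy as np

# Model simple tensors m ⊗ n in R^2 ⊗ R^2 as outer products m n^T.
m, m2, n = np.array([1., 2.]), np.array([3., -1.]), np.array([0., 5.])
alpha = 7.0

# Linearity one variable at a time: m ⊗ n + m' ⊗ n = (m + m') ⊗ n
assert np.allclose(np.outer(m, n) + np.outer(m2, n), np.outer(m + m2, n))

# Scalars move through either factor: α(m ⊗ n) = (αm) ⊗ n = m ⊗ (αn)
assert np.allclose(alpha * np.outer(m, n), np.outer(alpha * m, n))
assert np.allclose(alpha * np.outer(m, n), np.outer(m, alpha * n))
```

Note that $(m+m') \otimes (n+n')$ is *not* $m \otimes n + m' \otimes n'$: expanding one variable at a time produces four terms, which is exactly the difference from the direct product.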

This is better stated in terms of maps, where the universal property tells us exactly how the tensor product is built to encode bilinearity: essentially by definition, the tensor product is a gadget that transforms bilinear maps $M \times N \to P$ into linear maps $M \otimes N \to P$. As such, it greatly simplifies the study of bilinear maps. (And, analogously, the study of multilinear maps is reduced to the study of linear maps out of the tensor product of several modules.)

Let's conclude with a concrete example: the determinant map $\det:M_n(\mathbb{K})\cong (\mathbb{K}^n)^{\oplus n} \to \mathbb{K}$ is the unique $n$-linear alternating map such that $\det(I) = 1$. (We view it as an $n$-linear map that takes $n$ columns—i.e., elements of $\mathbb{K}^n$—as input.) To consider this map abstractly in terms of linear algebra, we can use tensor products to encode the multilinearity. So the determinant should be a linear map out of the $n$-fold tensor product, $\det:(\mathbb{K}^n)^{\otimes n} \to \mathbb{K}$, except that this doesn't take care of the alternating property. To remedy this, we pass to a quotient of the tensor power, denoted $(\mathbb{K}^n)^{\wedge n}$ and called the exterior power, so that linear maps out of it, viewed as maps out of the tensor product, are precisely the alternating ones. Finally, as $\dim_\mathbb{K}\operatorname{Hom}((\mathbb{K}^n)^{\wedge n},\mathbb{K}) = 1$, the condition $\det(I) = 1$ singles out a preferred basis in this one-dimensional space and gives us the determinant map.
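The three defining properties—multilinearity in the columns, the alternating property, and the normalization $\det(I)=1$—can each be verified directly on random columns. A small numerical sketch for $n = 3$ over $\mathbb{R}$:

```python
import numpy as np

# det as an n-linear, alternating function of the columns (n = 3 here).
rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

det = lambda *cols: np.linalg.det(np.column_stack(cols))

# Multilinear in each column: det(a + d, b, c) = det(a, b, c) + det(d, b, c)
assert np.isclose(det(a + d, b, c), det(a, b, c) + det(d, b, c))

# Alternating: swapping two columns flips the sign; a repeated column gives 0
assert np.isclose(det(a, b, c), -det(b, a, c))
assert np.isclose(det(a, a, c), 0.0)

# Normalization: det(I) = det(e1, e2, e3) = 1
e1, e2, e3 = np.eye(3)
assert np.isclose(det(e1, e2, e3), 1.0)
```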

Solution 2:

Let's start with bilinear maps. In particular, choose a bilinear map $f:M\times N\to P$. Now consider the pairs $(u,\alpha v)$ and $(\alpha u,v)$ where $u\in M$, $v\in N$ and $\alpha$ is a scalar. We find that $f(u,\alpha v)=\alpha f(u,v) = f(\alpha u, v)$, and this does not depend on which bilinear map we have chosen. In other words, as far as bilinear maps are concerned, there's no difference between the pairs $(u,\alpha v)$ and $(\alpha u,v)$.
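As a sanity check, here is that identity on a concrete bilinear map $f(u,v) = u^T A v$ on $\mathbb{R}^3 \times \mathbb{R}^3$ (the matrix $A$ is arbitrary, chosen only for illustration):

```python
import numpy as np

# A concrete bilinear map f(u, v) = u^T A v on R^3 x R^3 (A arbitrary).
A = np.array([[1., 2., 0.], [0., 1., 3.], [4., 0., 1.]])
f = lambda u, v: u @ A @ v

u, v, alpha = np.array([1., 0., 2.]), np.array([3., 1., 1.]), 5.0

# (u, αv) and (αu, v) are indistinguishable to any bilinear map:
assert np.isclose(f(u, alpha * v), alpha * f(u, v))
assert np.isclose(f(alpha * u, v), f(u, alpha * v))
```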

Therefore, a natural question arises: Can we write down an object that captures exactly the features of pairs of vectors that are essential for the bilinear map? That is, can we have a set $T$ and a function $g:M\times N\to T$ so that $g(u,v) = g(u',v')$ exactly if for any bilinear map $f$ we have $f(u,v)=f(u',v')$?

Obviously, if we can find such a function $g$, then we can write any bilinear map as $f = f'\circ g$ where $f'$ maps objects from $T$ to $P$. After all, by definition the objects in $T$ capture exactly those properties of the pairs of vectors which are relevant for $f$.
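This factorization can be sketched numerically for $f(u,v) = u^T A v$, under the illustrative identifications $T = \mathbb{R}^3 \otimes \mathbb{R}^3 \cong$ ($3\times 3$ matrices) and $g(u,v) = u \otimes v$ (the outer product); the factor $f'$ is then linear in its single tensor argument:

```python
import numpy as np

# Sketch of f = f' ∘ g for the bilinear map f(u, v) = u^T A v, with
# g(u, v) = u ⊗ v playing the role of the universal map into
# T = R^3 ⊗ R^3, modeled here as 3x3 matrices.
A = np.array([[1., 2., 0.], [0., 1., 3.], [4., 0., 1.]])

f = lambda u, v: u @ A @ v         # bilinear in the pair (u, v)
g = np.outer                       # universal map (u, v) -> u ⊗ v
f_prime = lambda T: np.sum(A * T)  # LINEAR in the single variable T

u, v = np.array([1., 0., 2.]), np.array([3., 1., 1.])
assert np.isclose(f(u, v), f_prime(g(u, v)))  # f = f' ∘ g
```

The point of the construction is that $f'$ is an ordinary linear map, so all of linear algebra applies to it, while the bilinearity has been absorbed once and for all into $g$.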

Now since we are doing linear algebra, we would additionally like $T$ to be a linear space and $f'$ to be linear. It is not hard to see that in this case, the function $g$ has to be bilinear.

The theorem you quoted now states that such a function $g$ not only always exists, but moreover is unique up to isomorphism.