Tensor product and Kronecker product

Is there any difference between tensor product and Kronecker Product?


Solution 1:

The two notions represent operations on different objects: Kronecker product on matrices; tensor product on linear maps between vector spaces.

But there is a connection: Given two matrices, we can think of them as representing linear maps between vector spaces equipped with a chosen basis.

The Kronecker product of the two matrices then represents the tensor product of the two linear maps.

(This claim makes sense because the tensor product of two vector spaces with distinguished bases comes with a distinguished basis.)
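A quick numerical sketch of this correspondence, using NumPy's `np.kron` (which uses the standard lexicographic ordering of the induced basis $e_i\otimes f_j$):

```python
import numpy as np

# Matrices representing two linear maps in chosen bases
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# The Kronecker product is the matrix of the tensor product of the
# two maps, in the induced (lexicographically ordered) basis.
K = np.kron(A, B)

# K is built from blocks: the (i, j) block is A[i, j] * B.
assert np.array_equal(K[:2, :2], A[0, 0] * B)
assert np.array_equal(K[:2, 2:], A[0, 1] * B)
assert np.array_equal(K[2:, :2], A[1, 0] * B)
assert np.array_equal(K[2:, 2:], A[1, 1] * B)
```

The block structure makes the basis-dependence explicit: a different ordering of the induced basis would permute the rows and columns of `K`.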

All this and more is explained on Wikipedia.

Solution 2:

Note: I am adding a new answer after all these years because the existing answer takes a rather restricted view of the term "tensor product". In a sense, my answer does the same, as I only talk about tensor products of vector spaces, whereas one can talk about tensor products in many other settings. But my feeling is that it is common enough, even in the restricted setting of vector spaces, to have to deal with tensor products of objects other than just linear maps. Also, I think that there is nothing wrong in talking about the tensor product of matrices, and that this is different from, but closely related to, the Kronecker product of matrices. After all, one can talk about tensor products in any vector space, and matrices form a perfectly good vector space.

Usage is not entirely consistent between different fields and between different authors, but it seems a good practice to think of the Kronecker product as a particular kind of universal bilinear map between particular kinds of vector spaces, and to think of the tensor product as a universal bilinear map carefully constructed so as not to let in assumptions about the particular nature of the map or of the vector spaces.

The Kronecker product is a particular universal bilinear map on a pair of vector spaces, each of which consists of matrices of a specified size. The tensor product is a universal bilinear map on a pair of vector spaces (of any sort). In some abstract treatments, this last sentence alone defines the tensor product. It is also common to see the tensor product constructed as a certain quotient, with expressions like $v\otimes w$ treated as formal symbols having no properties apart from those implied by the property of being a universal bilinear map. The Kronecker product, on the other hand, does have such additional properties: for example, it represents a matrix of a specified shape with elements computed in a particular way.

These additional properties of the Kronecker product mean (1) that its use is restricted to particular types of vector spaces, namely spaces of matrices; (2) that one has some nice compatibility properties between Kronecker products on related spaces of matrices. So if $M$ and $N$ are matrices that represent linear transformations of vectors in $\mathbf{R}^m$ and $\mathbf{R}^n$, and if $v$ and $w$ are particular elements of those spaces, then (using the symbol $\otimes_K$ for the Kronecker product) we have $(M\otimes_K N)(v\otimes_K w)=(Mv)\otimes_K(Nw)$. In contrast, the abstract tensor product $M\otimes N$ has no predefined action on $v\otimes w$. One can, of course, define many actions, including one that has the nice compatibility property that the Kronecker product satisfies.
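The compatibility property $(M\otimes_K N)(v\otimes_K w)=(Mv)\otimes_K(Nw)$ (often called the mixed-product property) can be checked numerically; here is a sketch with random matrices via NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
M = rng.standard_normal((m, m))  # linear map on R^m
N = rng.standard_normal((n, n))  # linear map on R^n
v = rng.standard_normal(m)
w = rng.standard_normal(n)

# Applied to 1-D arrays, np.kron(v, w) gives the length-(m*n) vector
# of all products v[i] * w[j], i.e. v ⊗_K w in the induced basis.
lhs = np.kron(M, N) @ np.kron(v, w)   # (M ⊗_K N)(v ⊗_K w)
rhs = np.kron(M @ v, N @ w)           # (Mv) ⊗_K (Nw)
assert np.allclose(lhs, rhs)
```

This identity is exactly what justifies saying the Kronecker product of the matrices represents the tensor product of the maps.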

To put this more concretely, when you see $v\otimes_K w$, you know to combine the matrix elements of $v$ and $w$ in a particular way to construct a larger matrix. And the predefined notions of matrix multiplication that come with spaces of matrices play nicely with Kronecker products. When you see the symbol $v\otimes w$, no particular operations are to be performed: $v$ and $w$ are simply labels that identify the object $v\otimes w$ (up to certain equivalences implied by bilinearity). This is true even in the case where $v$ and $w$ are concrete matrices. To express it yet another way: in the case of vector spaces of matrices, $v\otimes_K w$ is a particular matrix; $v\otimes w$ is an equivalence class of pairs of matrices.* This is all consistent: for example, $(2v)\otimes\left(\frac{1}{2}w\right)$ is in the same equivalence class as $v\otimes w$, while $(2v)\otimes_K\left(\frac{1}{2}w\right)$ is the same matrix as $v\otimes_K w$.
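The final example above, that $(2v)\otimes_K\left(\frac{1}{2}w\right)$ is literally the same matrix as $v\otimes_K w$, is a one-line computation:

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])

# Rescaling one factor up and the other down by the same amount
# produces the identical concrete Kronecker product.
assert np.allclose(np.kron(2 * v, 0.5 * w), np.kron(v, w))
```

For the abstract tensor product there is nothing to compute: $(2v)\otimes\left(\frac{1}{2}w\right)$ and $v\otimes w$ are merely two labels for the same equivalence class.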

* Actually, the equivalence class contains much more than pairs of matrices. It also contains linear combinations of pairs of matrices.