What are the Differences Between a Matrix and a Tensor?

What is the difference between a matrix and a tensor? Or, what makes a tensor, a tensor? I know that a matrix is a table of values, right? But, a tensor?


To see the difference between rank-2 tensors and matrices, it is probably best to look at a concrete example. This is actually something that confused me a great deal back in my linear algebra course (where we learned only about matrices, not about tensors).

As you may know, you can specify a linear transformation $a$ between vectors by a matrix. Let's call that matrix $A$. Now if you do a basis transformation, this can also be written as a linear transformation, so that if the vector in the old basis is $v$, the vector in the new basis is $T^{-1}v$ (where $v$ is a column vector). Now you can ask what matrix describes the transformation $a$ in the new basis. Well, it's the matrix $T^{-1}AT$.
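
If it helps, here is a quick numerical sanity check of that rule, as a minimal Python/numpy sketch (the particular matrices $A$ and $T$ are arbitrary choices; $T$ just has to be invertible):

    import numpy as np

    # A represents the linear map a in the old basis; T is the basis-change matrix.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    T = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    Tinv = np.linalg.inv(T)

    v = np.array([1.0, 2.0])   # coordinates of some vector in the old basis
    v_new = Tinv @ v           # the same vector's coordinates in the new basis

    A_new = Tinv @ A @ T       # matrix of the map a in the new basis

    # Applying a in the old basis and converting the result to the new basis
    # agrees with applying A_new directly to the new coordinates.
    assert np.allclose(Tinv @ (A @ v), A_new @ v_new)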

Well, so far, so good. What I memorized back then is that under a basis change, a matrix transforms as $T^{-1}AT$.

But then we learned about bilinear forms (and the quadratic forms they define). Those are evaluated using a matrix $A$ as $u^TAv$. Still no problem, until we learned how to do basis changes. Now, suddenly, the matrix did not transform as $T^{-1}AT$, but rather as $T^TAT$. This confused me like hell: how could one and the same object transform differently when used in different contexts?
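
Again a small Python/numpy sketch (same arbitrary $A$ and $T$ as above, purely for illustration): the scalar $u^TAv$ must not depend on the basis, and that is exactly what forces the $T^TAT$ rule rather than $T^{-1}AT$:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])   # matrix of the bilinear form in the old basis
    T = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    Tinv = np.linalg.inv(T)

    u = np.array([1.0, -1.0])
    v = np.array([2.0, 0.5])
    u_new, v_new = Tinv @ u, Tinv @ v   # coordinates in the new basis

    A_form = T.T @ A @ T    # transforms like the (0,2) tensor of the form
    A_map  = Tinv @ A @ T   # transforms like the (1,1) tensor of a linear map

    assert np.allclose(u @ A @ v, u_new @ A_form @ v_new)   # the scalar survives
    assert not np.allclose(A_form, A_map)                    # the two rules really differ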

Well, the solution is: because we are actually talking about different objects! In the first case, we are talking about a tensor that takes vectors to vectors. In the second case, we are talking about a tensor that takes two vectors to a scalar, or equivalently, one that takes a vector to a covector.

Now both tensors have $n^2$ components, and therefore it is possible to write those components in an $n\times n$ matrix. And since all the operations involved are linear or bilinear, the usual matrix-matrix and matrix-vector products, together with transposition, can be used to express the operations of the tensor. Only when looking at basis transformations do you see that the two are, indeed, not the same, and the course did us (well, at least me) a disservice by not telling us that we are really looking at two different objects, and not just at two different uses of the same object, the matrix.

Indeed, speaking simply of a rank-2 tensor is not really accurate; the rank of a tensor is properly given by two numbers. The vector-to-vector mapping is given by a rank-(1,1) tensor, while the bilinear form is given by a rank-(0,2) tensor. There is also the type (2,0), which also corresponds to a matrix, but which maps two covectors to a number, and which again transforms differently.
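
Written out in index notation (with the convention from above that coordinates transform as $v \mapsto T^{-1}v$), the three transformation laws for the components read

$$(A')^i{}_j=(T^{-1})^i{}_k\,A^k{}_l\,T^l{}_j,\qquad (A')_{ij}=T^k{}_i\,A_{kl}\,T^l{}_j,\qquad (A')^{ij}=(T^{-1})^i{}_k\,(T^{-1})^j{}_l\,A^{kl},$$

that is, $T^{-1}AT$, $T^TAT$ and $T^{-1}A(T^{-1})^T$ in matrix notation.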

The bottom line of this is:

  • The components of a rank-2 tensor can be written in a matrix.
  • The tensor is not that matrix, because different types of tensors can correspond to the same matrix.
  • The differences between those tensor types are uncovered by the basis transformations (hence the physicist's definition: "A tensor is what transforms like a tensor").

Of course, another difference between matrices and tensors is that matrices are by definition two-index objects, while tensors can have any rank.      


Indeed, there are some "confusions" that arise when people talk about tensors. This happens mainly in Physics, where tensors are usually described as "objects with components which transform in the right way". To really understand this matter, let's first remember that those objects belong to the realm of linear algebra. Even though they are used a lot in many branches of mathematics, the area of mathematics devoted to the systematic study of those objects is really linear algebra.

So let's start with two vector spaces $V,W$ over some field of scalars $\Bbb F$. Now, let $T : V \to W$ be a linear transformation. I'll assume that you know that we can associate a matrix with $T$. Now, you might say: so linear transformations and matrices are all the same! And if you say that, you'll be wrong. The point is: one can associate a matrix with $T$ only after one fixes some basis of $V$ and some basis of $W$. In that case we get $T$ represented in those bases, but if we don't introduce them, $T$ will be $T$ and matrices will be matrices (rectangular arrays of numbers, or whatever definition you like).
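
A tiny Python/numpy illustration of this point (a sketch under the assumption $V = W = \Bbb R^2$; the map and the bases are arbitrary choices): the map is one fixed object, but its matrix changes with the chosen basis.

    import numpy as np

    def T(v):
        # one fixed linear map R^2 -> R^2 (chosen arbitrarily)
        return np.array([2.0 * v[0] + v[1], 3.0 * v[1]])

    def matrix_of(T, basis):
        # columns are the coordinates (in `basis`) of the images of the basis vectors
        B = np.column_stack(basis)
        return np.linalg.inv(B) @ np.column_stack([T(b) for b in basis])

    standard = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    other    = [np.array([1.0, 1.0]), np.array([0.0, 1.0])]

    print(matrix_of(T, standard))   # one matrix...
    print(matrix_of(T, other))      # ...and a different one, for the same map T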

Now, the construction of tensors is much more elaborate than just saying: "take a set of numbers, label them as components, let them transform in the correct way, and you get a tensor". In truth, this "definition" is a consequence of the actual definition. Indeed, the actual definition of a tensor is meant to capture what we call a "universal property".

The point is that if we have a collection of $p$ vector spaces $V_i$ and another vector space $W$, we can form functions of several variables $f: V_1\times \cdots \times V_p \to W$. A function like this is called multilinear if it is linear in each argument with the others held fixed. Now, since we know how to study linear transformations, we ask ourselves: is there a construction of a vector space $S$ together with one universal multilinear map $T : V_1 \times \cdots \times V_p \to S$ such that every such $f$ factors as $f = g \circ T$ for some linear $g : S \to W$? If that's always possible, we'll have reduced the study of multilinear maps to the study of linear maps.

The happy part of the story is that this is always possible: the construction is well defined, the space $S$ is denoted $V_1 \otimes \cdots \otimes V_p$ and is called the tensor product of the vector spaces, and the universal map $T$ sends a tuple of vectors to their tensor product $v_1 \otimes \cdots \otimes v_p$. An element $t \in S$ is called a tensor. Now it is possible to prove that if $V_i$ has dimension $n_i$, then the following relation holds:

$$\dim(V_1\otimes \cdots \otimes V_p)=\prod_{i=1}^p n_i$$

This means that $S$ has a basis with $\prod_{i=1}^p n_i$ elements. In that case, as we know from basic linear algebra, we can associate with every $t \in S$ its components in some basis. Those components are what people usually call "the tensor". Indeed, when you see people in Physics saying "consider the tensor $T^{\alpha \beta}$", what they are really saying is "consider the tensor $T$ whose components in some basis, understood from context, are $T^{\alpha \beta}$".

So if we consider two vector spaces $V_1$ and $V_2$ with dimensions respectively $n$ and $m$, by the result I've stated $\dim(V_1 \otimes V_2)=nm$. So with every tensor $t \in V_1 \otimes V_2$ one can associate a set of $nm$ scalars (the components of $t$), and we can arrange those values in an $n\times m$ matrix $M(t)$; in this way there is a correspondence between tensors of rank $2$ and matrices.
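
A minimal Python/numpy sketch of this correspondence in the concrete case $V_1 = \Bbb R^n$, $V_2 = \Bbb R^m$ with the standard bases (realizing $e_i \otimes e_j$ as outer products; the names are only for illustration):

    import numpy as np

    n, m = 3, 2
    e = lambda k, dim: np.eye(dim)[k]   # k-th standard basis vector

    # Basis of R^n ⊗ R^m: the nm outer products e_i ⊗ e_j.
    basis = [np.outer(e(i, n), e(j, m)) for i in range(n) for j in range(m)]
    print(len(basis))   # nm = 6, matching dim(V1 ⊗ V2) = n * m

    # A rank-2 tensor t with components t_ij in this basis corresponds to
    # exactly the n x m matrix M(t) of those components.
    components = np.arange(1.0, n * m + 1).reshape(n, m)
    t = sum(components[i, j] * np.outer(e(i, n), e(j, m))
            for i in range(n) for j in range(m))
    assert np.allclose(t, components)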

However, exactly as in the linear transformation case, this correspondence is only possible once we have selected bases on the vector spaces we are dealing with. Finally, with every tensor it is also possible to associate a multilinear map. So tensors can be understood in their fully abstract and algebraic way as elements of the tensor product of vector spaces, and they can also be understood as multilinear maps (which is better for intuition), and we can associate matrices with them.
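
To make the multilinear-map picture and the factorization $f = g \circ T$ concrete in the simplest case, here is a Python/numpy sketch in which the tensor product of two vectors is realized as their outer product and $g$ is the linear map determined by a coefficient matrix $B$ (all names here are illustrative, not standard notation):

    import numpy as np

    n, m = 3, 2
    rng = np.random.default_rng(0)
    B = rng.normal(size=(n, m))   # coefficients of an arbitrary bilinear map f

    def f(u, v):
        # a bilinear map f : R^n x R^m -> R
        return u @ B @ v

    def tensor(u, v):
        # the universal multilinear map T(u, v) = u ⊗ v, here the outer product
        return np.outer(u, v)

    def g(t):
        # the linear map g on the tensor product space with f = g ∘ T
        return float(np.sum(B * t))

    u = rng.normal(size=n)
    v = rng.normal(size=m)
    assert np.isclose(f(u, v), g(tensor(u, v)))   # f factors through the tensor product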

So after all this hassle with linear algebra, the short answer to your question is: matrices are matrices and tensors of rank 2 are tensors of rank 2; however, there is a correspondence between them whenever you fix bases on the vector spaces involved.

My suggestion is that you read Kostrikin's "Linear Algebra and Geometry", chapter $4$, on multilinear algebra. The book is hard, but it's good for really getting the ideas. You can also read about tensors (constructed in terms of multilinear maps) in good books on multivariable analysis, like "Calculus on Manifolds" by Michael Spivak or "Analysis on Manifolds" by James Munkres.


As a place-holder answer, waiting perhaps for clarification from the questioner's (and others') reactions: given that in your context a matrix is a table of values (which can be entirely reasonable)...

In that context, a "vector" is a list of values, a "matrix" is a table (or list of lists), the next item would be a list of tables (equivalently, a table of lists, or list of lists of lists), then a table of tables (equivalently, a list of tables of lists, or list of lists of tables...). And so on. All these are "tensors".
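
In numpy terms (just to make the slogan concrete), each further level of nesting adds one index:

    import numpy as np

    vector = np.zeros(3)              # a list of values: shape (3,)
    matrix = np.zeros((3, 4))         # a table, i.e. a list of lists: shape (3, 4)
    rank3  = np.zeros((2, 3, 4))      # a list of tables: shape (2, 3, 4)
    rank4  = np.zeros((2, 2, 3, 4))   # a table of tables: shape (2, 2, 3, 4)

    for t in (vector, matrix, rank3, rank4):
        print(t.ndim, t.shape)        # the number of indices grows by one at each step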

Unsurprisingly, there are many more sophisticated viewpoints that can be taken, but perhaps this bit of sloganeering is useful?