Tensors in the context of engineering mechanics: can they be explained in an intuitive way?

I've spent a few weeks scouring the internet for an explanation of tensors in the context of engineering mechanics. You know, the ones every engineering student knows and loves (stress, strain, etc.). But I cannot find any explanations of tensors without running into abstract formalisms like "homomorphisms" and "inner product spaces". I'm not looking for an explanation of tensors using abstract algebra or infinite, generalized vector spaces. I just want some clarification on what they actually mean and are doing in nice 3D Euclidean space, especially in the context of mechanics. There are a few questions that have been bugging me that I'm hoping all you smart people here can answer:

  1. What's the difference between a linear transformation and a tensor? Somehow they can both be represented by a $3\times 3$ matrix, but do they do different things when acting on a vector? The columns of the $3 \times 3$ matrix of a linear transformation tell you where the basis vectors end up, but the same columns of a tensor don't represent basis vectors at all?

  2. Furthermore, a linear transformation transforms all of space but a tensor is defined at every point in space? Does a tensor act on vectors the same way as linear transformations do?

  3. What is the difference between a tensor product, dyadic product, and outer product and why are engineering tensors like the Cauchy stress built from the tensor product of two vectors (i.e. traction vector and normal vector)?

  4. Is it true that scalars and vectors are just $0^\mathrm{th}$ order and $1^\mathrm{st}$ order tensors, respectively? How are all these things related to each other?

  5. What topics and/or subtopics of linear algebra are essential to grasp the essence of tensors in the context of physics and engineering? Are they really just objects that act on vectors to produce other vectors (or numbers) or are they something more?

I have plenty more questions, but I figure the answers to these could already be enough to fill a whole textbook. Just to note, I have already searched Math.StackExchange for tensors but haven't found any explanations that make sense to me yet.

Thanks!


Solution 1:

Although this is a bit of a lengthy subject, I'll try to give you the exact mathematical information in as short an introduction as possible. Let's start directly with the definition.

DEFINITION OF TENSOR --- A ($p$,$q$)-tensor $T$ (with p and q integers) is a multilinear transformation $$T:\underbrace{V^*\times V^*\times\dots V^*}_{p\text{ times}}\times\underbrace{V\times V\times\dots V}_{q\text{ times}}\to\mathbb R$$ where $V$ is a vector space, $V^*$ is its dual vector space and $\mathbb R$ is the set of real numbers. The integer $p+q$ is the rank of the tensor.

Example: a (1,1) tensor is a multilinear transformation $T:V^*\times V\to \mathbb R$. Using the same information we can construct an object $T:V\to V$, as shown later. We recognise this as a simple linear transformation of vectors, represented by a matrix. Hence a matrix is a (1,1) tensor.


What does that mean? It means that a tensor takes $p$ covectors and $q$ vectors and converts them multilinearly to a real number. The main thing to understand here is the difference between a vector (a member of a vector space) and a covector (a member of the dual vector space). If you already know about this, you can skip this section. A vector is defined as a member of a vector space, which itself is defined as a set with an addition and a scalar multiplication satisfying certain axioms.* A covector is defined as follows:

Definition (Dual space) The set of all linear transformations $\boldsymbol\omega:V\to\mathbb R$ is called the dual vector space and denoted by $V^*$. The members of the dual vector space are called covectors.

Theorem (without proof) The dual of the dual space of a finite dimensional vector space $V$ is $V$ itself, i.e., $$(V^*)^*=V$$

We usually denote vectors by $\boldsymbol v$ and covectors by $\boldsymbol \omega$. Also by convention, vectors have indices up and covectors have indices down. (The indices representing coordinates)

$$\boldsymbol{v}=v^i\boldsymbol e_i, \quad\boldsymbol\omega=\omega_i \boldsymbol \epsilon^i$$ Here the $\boldsymbol e_i$ are the basis vectors and the $\boldsymbol\epsilon^i$ are the dual basis covectors. Whenever you see an index up and the same index down, you sum over that index, as in the above equations ($\boldsymbol v = \sum_i v^i \boldsymbol e_i$).
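
As a concrete illustration of the summation convention, here is a small NumPy sketch (the component values and basis vectors are just made-up examples):

```python
import numpy as np

# Components v^i of a vector and basis vectors e_i (row i of `basis` is e_i).
v_components = np.array([2.0, -1.0, 3.0])     # v^1, v^2, v^3
basis = np.array([[1.0, 0.0, 0.0],            # e_1
                  [1.0, 1.0, 0.0],            # e_2 (the basis need not be orthonormal)
                  [0.0, 0.0, 1.0]])           # e_3

# v = v^i e_i: the repeated index i (one up, one down) is summed over.
v = np.einsum('i,ij->j', v_components, basis)
print(v)   # same as 2*e_1 - 1*e_2 + 3*e_3
```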

Notice that a covector is a (0,1) tensor and a real number is a (0,0) tensor. This can be seen readily from the definition. We can show that a vector is a (1,0) tensor, using the above-mentioned theorem, although it is not very obvious.


How to represent tensors in a basis? Let's say we want to represent a (1,2)-tensor in a given basis. We apply it to an arbitrary input: $$T(\boldsymbol \omega, \boldsymbol v, \boldsymbol w)=T(\omega_a \boldsymbol\epsilon^a,v^b\boldsymbol e_b,w^c\boldsymbol e_c)=\omega_a v^b w^c T(\boldsymbol\epsilon^a,\boldsymbol e_b,\boldsymbol e_c)$$

Here the objects $T(\boldsymbol\epsilon^a,\boldsymbol e_b,\boldsymbol e_c)$ are simply real numbers, and they can be labelled $T^a_{bc}$. Hence a tensor can be represented by a set of $(\dim V)^{p+q}$ numbers. A tensor $T$ of type $(p,q)$ and rank $p+q$ has $p$ indices up and $q$ indices down.
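
A minimal numerical sketch of this bookkeeping (the components below are random placeholders, not anything physical): store the $3^3$ numbers $T^a_{bc}$ in an array and contract them with the components of $\boldsymbol\omega$, $\boldsymbol v$, $\boldsymbol w$.

```python
import numpy as np

dim = 3
rng = np.random.default_rng(0)

T = rng.normal(size=(dim, dim, dim))   # components T^a_bc in a chosen basis
omega = rng.normal(size=dim)           # covector components omega_a
v = rng.normal(size=dim)               # vector components v^b
w = rng.normal(size=dim)               # vector components w^c

# T(omega, v, w) = omega_a v^b w^c T^a_bc  (summed over a, b, c)
value = np.einsum('abc,a,b,c->', T, omega, v, w)
print(value)   # a single real number
```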


Theorem In the definition mentioned above, we can transfer a factor of $V$ or $V^*$ to the other side of the arrow by removing or adding a $*$ to it.

Consider a (1,1) tensor. It is an object $T^a_b$ which takes a vector and a covector and converts it to a real number, like so: $$T:V^*\times V\to\mathbb R$$ $$T^a_b \omega_a v^b = r, \,\, r\in\mathbb R.$$ However the same object can be used like so: $$T^a_bv^b = w^a,$$ here it has converted a vector to another, $$T:V\to V.$$

A matrix can do the same things; just think row vector = covector, column vector = vector, and ($N\times N$) matrix = tensor. Then: covector * matrix * vector = real number, while matrix * vector = vector. The entries of the matrix are precisely the numbers $T^a_b$.

Hence, a matrix is simply a (1,1) tensor. However, the notation of matrices requires us to use them in particular ways: you can write covector * matrix * vector but not matrix * vector * covector.
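
Here is a short NumPy sketch of those two uses of the same array of numbers $T^a_b$ (the values are arbitrary):

```python
import numpy as np

T = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])       # the numbers T^a_b
v = np.array([1.0, 2.0, 3.0])         # vector components v^b
omega = np.array([0.5, -1.0, 2.0])    # covector components omega_a

# T: V* x V -> R   (covector * matrix * vector = real number)
r = omega @ T @ v
# T: V -> V        (matrix * vector = vector)
w = T @ v
print(r, w)
```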


Footnotes:

*The axioms are CANI ADDU. For addition: Commutativity, Associativity, Neutral element (the zero vector) exists, Inverse elements exist. For scalar multiplication and addition: Associativity, two Distributivities, Unit element ($1\cdot\boldsymbol v=\boldsymbol v$).

Please note this is only a mathematical introduction to the subject. I have not answered all your questions, but this is an attempt to make things precise so that you don't learn something wrong, and you will probably now be able to answer those questions yourself.

Solution 2:

I will try to answer only part of your questions.

A rank $k$ covariant tensor on $\Bbb R^3$, or simply a $k$-tensor, is a multilinear function $T:(\Bbb R^3)^k\to\Bbb R$, that is, $T(v_1,...,cu_i+v_i,...,v_k)=c\,T(v_1,...,u_i,...,v_k)+T(v_1,...,v_i,...,v_k)$ for each $i=1,...,k$. The set of all $k$-tensors forms a vector space with addition and scalar multiplication defined by $(T_1+T_2)(x):=T_1(x)+T_2(x)$ and $(cT_1)(x):=c(T_1(x))$. This vector space is known to have dimension $3^k$.
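
For a concrete feel for this, here is a small sketch (with made-up numbers) of a $2$-tensor as a bilinear function whose $3^2=9$ components sit in a $3\times 3$ array:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])   # the 3**2 = 9 components of a 2-tensor

def T(u, v):
    # bilinear in both arguments: T(u, v) = A_ij u_i v_j
    return u @ A @ v

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
print(T(u, v))                   # a real number
print(T(2 * u, v), 2 * T(u, v))  # linearity in the first slot: both agree
```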

When $k=2$, the dimension of this vector space equals the dimension of the space of all $3\times 3$ matrices, so we can represent a $2$-tensor by a $3\times 3$ matrix. However, being representable as a matrix does not mean that it can act on a vector. Not all tensors have to be able to act on a vector; it depends highly on the type of tensor. I will explain "type of tensor" later.

A rank $k$ contravariant tensor is a multilinear function $T:((\Bbb R^3)^*)^k\to\Bbb R$. If you know that double dual of a finite-dimensional vector space is canonically isomorphic to the original space, then you should know that a rank $1$ contravariant tensor can be represented by a usual vector in $\Bbb R^3$.

A type $(p,q)$ tensor (or $(q,p)$, this depends on author's choice, but we stick to $(p,q)$ here) is a multilinear function $T:((\Bbb R^3)^*)^p\times(\Bbb R^3)^q\to\Bbb R$.

Each linear transformation from $\Bbb R^3$ to itself can be regarded as a type $(1,1)$ tensor. The reason is that for a $3\times 3$ matrix $M$ representing a linear transformation, we can define a bilinear function $T_M:(\Bbb R^3)^*\times\Bbb R^3\to\Bbb R$ by $T_M(f,v):=f(Mv)$. This draws a connection between linear transformations and type $(1,1)$ tensors. This type of tensor may act on a vector, but others need not.
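
A direct transcription of $T_M(f,v):=f(Mv)$ into NumPy, with the covector $f$ represented by its row of components (all values below are arbitrary):

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])     # matrix of a linear transformation R^3 -> R^3

def T_M(f, v):
    # the (1,1) tensor built from M: feed it a covector f and a vector v
    return f @ (M @ v)              # f(Mv)

f = np.array([1.0, -1.0, 2.0])      # components of a linear functional f
v = np.array([0.5, 1.0, 0.0])
print(T_M(f, v))                    # a real number, bilinear in f and v
```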

A type $(1,0)$ tensor is simply a vector, while a type $(0,0)$ tensor, according to Wikipedia, is simply a scalar, and a type $(0,1)$ tensor is simply a linear functional. I am not sure why the second one is the case; it is most probably a convention. As you can see, a type $(1,0)$ tensor cannot really act on a vector, but a type $(0,1)$ tensor can. In fact, every type $(p,1)$ tensor can act on a vector to give a $(p,0)$ tensor, i.e. a rank $p$ contravariant tensor.

When you see that a tensor is defined everywhere in space, it is actually not a tensor, but is a tensor field, which is a function that assigns to each point a tensor.

As noted on Wikipedia, there is no difference between the tensor product, the dyadic product and the outer product.

Solution 3:

These are good questions. I'll try to keep the maths to a minimum, below.

Zero-th question: why do we need tensors at all?

A lot of things in engineering and physics can be represented by a function (which is nothing more than a number, or a magnitude) which takes a different value at different points in space – think of the pressure at different points in a room. A lot of other things can be represented by a magnitude and a direction – think of the electric field at different points in a room. A few things are best represented by a quantity which involves a magnitude and two directions. The first lot of things are represented by a scalar field, the second by a vector field, and the third lot by (rank 2) tensors. I can't think of any physical quantities which are modelled by rank-3 tensors.

The air pressure in a room is a scalar, $p$ (we're thinking of this as a field, so there's an implicit dependence on position, $p(\mathbf r)$). If you want to know the pressure at a particular point, your answer is: ‘it's $p$’. Simple.

Now think of the electrostatic force $\mathbf F$ on a charged particle, which is moved through a displacement $\mathbf s$ (remember $\mathbf F$ is a field, so it takes a different value at different points). How much work is done during this displacement? To answer this, think of $\mathbf F$ as a function which takes the displacement as an argument: the work done is $\mathbf F(\mathbf s)$. That's a long way of giving the answer you were doubtless about to give, namely the inner product of the two vectors $\mathbf F\cdot\mathbf s$.

Now think of the stress tensor $\sigma$. Given a surface within the body with normal $\mathbf n$ and a unit reference direction $\mathbf e$, the magnitude of the stress in the direction $\mathbf e$, when you're considering a surface $\mathbf n$, is $\sigma(\mathbf n,\mathbf e)$ – a number. If we postpone thinking about the reference direction $\mathbf e$, and only apply one of the two arguments, we get $\sigma(\mathbf n, \cdot)$. That's a thing which is waiting for a single further vector argument: you may be more used to thinking of that as the stress vector, $\mathbf T^{(n)}$: the number $\sigma(\mathbf n,\mathbf e_x)$ is just $\mathbf T^{(n)}(\mathbf e_x) = \mathbf T^{(n)}\cdot\mathbf e_x = T^{(n)}_x$ (I may have slightly garbled the definition of $\sigma$ above; that's not important).
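
To make the last two paragraphs concrete, here is a small NumPy sketch (the force, displacement and stress components are invented numbers, not from any real problem): the work is the inner product $\mathbf F\cdot\mathbf s$, and partially applying $\sigma$ to $\mathbf n$ gives the traction vector $\mathbf T^{(n)}$, whose component along $\mathbf e_x$ is a single number.

```python
import numpy as np

# Work done by a force through a displacement: F(s) = F . s
F = np.array([1.0, 0.0, 2.0])
s = np.array([0.5, 0.5, 0.0])
work = F @ s

# A (symmetric) stress tensor in some chosen axes; the numbers are arbitrary.
sigma = np.array([[200.0,  30.0,   0.0],
                  [ 30.0, 150.0,  10.0],
                  [  0.0,  10.0, 100.0]])

n = np.array([1.0, 0.0, 0.0])     # unit normal of the surface
e_x = np.array([1.0, 0.0, 0.0])   # reference direction

traction = sigma @ n              # sigma(n, .) -- the stress (traction) vector T^(n)
component = traction @ e_x        # sigma(n, e_x) -- a single number
print(work, traction, component)
```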

The short version:

  • A rank-0 tensor is a scalar: it's a field with zero vector arguments.
  • A rank-1 tensor is a vector: it's a field with one vector argument, and the way that that field acts on its argument is what we are more used to calling the inner product.
  • A rank-2 tensor is what is commonly called just ‘a tensor’; it has two vector arguments. If you give it only one argument, then you're left with a thing which has a single remaining vector argument, such as $\mathbf T^{(n)} = \sigma(\mathbf n,\cdot)$.

The above is true for flat euclidean space – that is, the space of our ordinary experience. If you want to handle these mathematical ideas ‘properly’, or in odd coordinates, or in non-flat spaces (eg, general relativity), then you have to worry about vectors versus co-vectors, metrics, inner products, contractions, and observe a couple more distinctions which I've glossed over here, but the core intuitions are as above.

So, returning to your questions...

  1. What's the difference between a linear transformation and a tensor?

They're very different, but unfortunately they look very similar, because they're both represented by a matrix of numbers.

If you ask for the components of a vector, the answer is a set of numbers $\mathbf n=(n_x,n_y,n_z)$ (three of them, in 3D) with respect to a particular set of coordinate axes. If you change your mind about the axes, then the same vector $\mathbf n$ will have different components $\mathbf n=(n_x',n_y',n_z')$, which are systematically related to $(n_x,n_y,n_z)$ by the change-of-basis linear transformation matrix.

Exactly analogously, if you ask for the components of a (rank 2) tensor, then the answer is a $3\times3$ matrix of numbers, again with respect to a particular set of axes. If you change your mind about the axes, the components are a different matrix of numbers, again systematically related to the original set via (two applications of) the transformation matrix.
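
As a sketch of "two applications of the transformation matrix" (using an arbitrary rotation of the axes about $z$, and made-up components): vector components pick up one factor of the rotation matrix $R$, while rank-2 tensor components pick up two.

```python
import numpy as np

theta = np.pi / 6                  # rotate the axes by 30 degrees about z
c, s = np.cos(theta), np.sin(theta)
R = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])    # change-of-basis (rotation) matrix

n = np.array([1.0, 2.0, 0.0])      # components of a vector in the old axes
sigma = np.array([[200.0,  30.0,   0.0],
                  [ 30.0, 150.0,   0.0],
                  [  0.0,   0.0, 100.0]])   # tensor components in the old axes

n_new = R @ n                      # one application of R
sigma_new = R @ sigma @ R.T        # two applications of R
print(n_new)
print(sigma_new)
```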

  2. Does a tensor act on vectors the same way as linear transformations do?

See 1: no, a tensor is a very different thing from a linear transformation matrix.

But also see point zero: you can use a (rank 2) tensor to turn one vector into another one. For example, $\mathbf T^{(n)}=\sigma(\mathbf n,\cdot)$ effectively relates the vector $\mathbf n$ and the vector $\mathbf T^{(n)}$.

  3. What is the difference between a tensor product, dyadic product, and outer product and why are engineering tensors like the Cauchy stress built from the tensor product of two vectors (i.e. traction vector and normal vector)?

The tensor/outer/dyadic product (I hadn't heard the last name before!) are different names for the same thing (I expect a mathematician would quibble about this in full generality, but let's not worry about them).

The outer product is one way of making a rank-2 tensor from a couple of handy rank-1 tensors (ie, vectors). Any tensor can be written as a sum of outer products.
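
A quick NumPy sketch of both claims (arbitrary vectors; the sum-of-outer-products decomposition here is obtained via the SVD, which is just one convenient way to do it):

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])

# Outer (tensor/dyadic) product: a rank-2 tensor with components (a ⊗ b)_ij = a_i b_j
ab = np.outer(a, b)
print(ab)

# Any rank-2 tensor can be written as a sum of such products, e.g. via its SVD:
sigma = np.array([[200.0,  30.0,   0.0],
                  [ 30.0, 150.0,   0.0],
                  [  0.0,   0.0, 100.0]])
U, svals, Vt = np.linalg.svd(sigma)
reconstructed = sum(svals[k] * np.outer(U[:, k], Vt[k, :]) for k in range(3))
print(np.allclose(sigma, reconstructed))   # True
```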

  4. Is it true that scalars and vectors are just 0th order and 1st order tensors, respectively?

Yes.

  5. What topics and/or subtopics of linear algebra are essential to grasp the essence of tensors in the context of physics and engineering?

If you're going to study linear algebra in any mathematical context, then tensors are going to be at the beginning of that. In a more applied context, if you have a clear idea of the relationship between a vector, a set of basis vectors, and the vector's components with respect to that basis, then you're off to a good start.

In my experience with students, the idea that ‘a (rank 2) tensor models a physical quantity which depends on two directions’ is a bit of an aha! moment.

Solution 4:

Vectors need one subscript: $V_x, V_y, V_z$ are all components of the vector $V$. Tensors may have more than one; $V_{xy}$ is an example of a component of a rank $2$ tensor. For example, the stress on a face has a tensor representation: $V_{xy}$ can represent the shear stress on the $x$ face in the $y$ direction.

Each vector component is associated with one basis unit vector; each tensor component is associated with several basis unit vectors. From this you may see why a vector is actually a rank $1$ tensor, a matrix is a rank $2$ tensor, and so on.
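
A small indexing sketch of this idea (stress values are invented; NumPy indices start at 0, so $x,y,z$ correspond to indices $0,1,2$):

```python
import numpy as np

V = np.array([1.0, 2.0, 3.0])            # a vector: one subscript, V[0] = V_x
sigma = np.array([[ 50.0, 12.0,  0.0],
                  [ 12.0, 80.0,  5.0],
                  [  0.0,  5.0, 30.0]])  # a rank-2 tensor: two subscripts

print(V[0])          # V_x
print(sigma[0, 1])   # sigma_xy: shear stress on the x face in the y direction
```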

Solution 5:

Preface: As an old student of Physics and someone trying to re-grasp Applied Math, I will admit my own notion of tensors is shaky at best, as I haven't yet been exposed to a text that gives a decent definition of "tensor". But in my exposure so far to tensors, I do have a slight intuitive notion of what they are (further buttressed by Wikipedia). I can't answer all your questions, but maybe my shoddy explanation can give you a better sense of things. I invite anyone with constructive criticisms to comment, as I would like to be on firmer ground with this myself.

1) I believe that a linear transformation can be considered a "subset" of a tensor, in a way. From what I've found, tensors can represent linear transformations, but they aren't restricted to being linear transformations of vectors in physical scenarios. From what I gather, linear transformations (mapping one vector space to another) are a specific case of what tensors do in general: mapping mathematical objects in one space to another. To put this in more physical terms, something like the rotation tensor maps a point in one coordinate system to a new point in a different coordinate system that may have resulted from multiple transformations.

2) Don't have a great enough handle to take a stab at this one.

3) Same as 2

4) From the literature I've read, this is indeed the case. The relation comes from tensors needing multiple indices to designate one single component of the whole tensor. Ever notice that vectors are usually designated by one index? Let $\beta = \{v_1 , v_2, ... , v_n\}$ be a basis for some vector space $V$ (it may help to think of it as a basis for $n$-dimensional Cartesian space $\mathbb{R}^n$). Since any vector in $V$ is a linear combination of the vectors in the basis, only one index is needed to specify the constants that correspond to the elements of the basis for $v$ (i.e. $v = x_i v_i$, where Einstein summation notation is used with the index $i$ ranging from $1$ to $n$). Note that with tensors, however, in order to specify a component of a tensor, you need more than one index to get to that component. Here we see the "generalization" aspect of it again.

5) To be honest, I'm not sure. I have come across tensors very seldom in my study of Physics (please note I have yet to get to an advanced EM, or intro Quantum Mech class yet).

I hope that helps a bit. If there's something murky about one of my explanations, let me know, and I'll do my best to clarify.