Mathematical properties of rank-$N$ tensors where $N>2$

Solution 1:

Tensors are a source of great conceptual confusion for many. The reason is that they are often introduced in a very different way from vectors and matrices. When we first learn about vectors and matrices, they are presented as nothing but a list or array of numbers. In some applications, like computer science and programming, this is indeed all they are and all they need to be. But, as we know, there are much more enlightening and useful ways to think about these objects. For instance, we can think of a (column) vector $\underline{v}$ as a linear map that takes in one row (co)vector (hence order 1) and spits out a real number (or an element of whatever other field you want to use). For instance, $$\begin{bmatrix} \omega_{1} & \omega_{2} \end{bmatrix}\begin{bmatrix} v^{1}\\ v^{2} \end{bmatrix} =v^{1} \omega_{1} +v^{2} \omega_{2}$$ And we can view covectors as linear maps that take in one vector (hence order 1) and spit out real numbers, if you simply reverse the way you read the multiplication above. We could write $$\underline{v}(\underline{\omega})=\underline{\omega}\,\underline{v}=\underline{\omega}(\underline{v})$$ And importantly, these maps are linear, e.g. $$\underline{v}(a\underline{\alpha}+b\underline{\beta})=a\underline{v}(\underline{\alpha})+b\underline{v}(\underline{\beta})$$ with $a,b$ scalars and $\underline{\alpha},\underline{\beta}$ covectors. The same is true for $\underline{\omega}$.
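To make the linearity concrete, here is a minimal numerical sketch (the particular vector, covectors, and scalars are made up purely for illustration):

```python
import numpy as np

# A vector viewed as a linear map: feed it a covector (a row), get a scalar.
v = np.array([2.0, 3.0])        # components v^i
alpha = np.array([1.0, -1.0])   # covector components alpha_i
beta = np.array([0.5, 4.0])     # covector components beta_i
a, b = 2.0, -3.0                # arbitrary scalars

def v_as_map(omega):
    # v(omega) = omega_1 v^1 + omega_2 v^2, an ordinary dot product
    return omega @ v

# Linearity check: v(a*alpha + b*beta) == a*v(alpha) + b*v(beta)
lhs = v_as_map(a * alpha + b * beta)
rhs = a * v_as_map(alpha) + b * v_as_map(beta)
print(np.isclose(lhs, rhs))  # True
```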

We can extend this idea to matrices. We can think of a matrix as a bilinear map that takes in one covector and one vector (two arguments total, hence order 2) and outputs a real number. I'll illustrate below: $$\begin{bmatrix} \omega_{1} & \omega_{2} \end{bmatrix}\begin{bmatrix} M^{1}_{1} & M^{1}_{2}\\ M^{2}_{1} & M^{2}_{2} \end{bmatrix}\begin{bmatrix} v^{1}\\ v^{2} \end{bmatrix} =\omega_{1} M^{1}_{1} v^{1} +\omega_{1} M^{1}_{2} v^{2} +\omega_{2} M^{2}_{1} v^{1} +\omega_{2} M^{2}_{2} v^{2}$$ The multiplication is carried out from right to left. We could write this as $$\underline{\underline{M}}(\underline{\omega},\underline{v})=\underline{\omega}~\underline{\underline{M}}~\underline{v}$$ REMARK: One can trivially think about a scalar in this way as well - a scalar is an order-zero tensor that takes in zero arguments and spits out a real number.
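To see the bilinear-map picture in code, here is a small sketch using `numpy.einsum`, which spells out the double sum explicitly (the matrix and vectors are again made-up examples):

```python
import numpy as np

# A matrix viewed as a bilinear map: M(omega, v) = omega_i M^i_j v^j.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def M_as_map(omega, v):
    # Sum over both indices i and j, yielding a scalar.
    return np.einsum('i,ij,j->', omega, M, v)

omega = np.array([1.0, -1.0])
v = np.array([2.0, 5.0])
print(M_as_map(omega, v))   # identical to the chained product below
print(omega @ M @ v)
```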

We don't actually need to have numbers arranged in arrays to create constructions like the above. We can think of the column vectors as simply members of an arbitrary real vector space $\mathcal{V}$ and covectors as members of its dual space $\mathcal{V}^*$. Then, instead of using the matrix multiplication illustrated above, we simply use the more abstract linear maps. In this setting, don't think of a matrix as a box of numbers - simply think of it as a bilinear map that takes in a covector and a vector and outputs a real number. Normally, when we talk about the indices of a vector (or any such object), we are referring to the entries of its corresponding array. However, when we don't have the notion of an array in the general setting, we instead have to use a basis of the vector space and the corresponding dual basis: $$v^i=\underline{v}(\underline{e^i})~;~\omega_i=\underline{\omega}(\underline{e_i})$$ Here, $\underline{e_1},...,\underline{e_n}$ is a basis of the $n$ dimensional vector space and $\underline{e^1},...,\underline{e^n}$ is the corresponding dual basis of the $n$ dimensional dual space. We can of course verify this in the usual array setting: $$v^1=\underline{v}(\underline{e^1})=\begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} v^{1}\\ v^{2} \end{bmatrix} =v^{1}$$
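In array form, this component extraction is literally just multiplying by the standard (dual) basis vectors; a quick sketch with made-up numbers:

```python
import numpy as np

# Recover components by feeding in dual basis covectors: v^i = v(e^i).
v = np.array([7.0, -2.0])
e1_dual = np.array([1.0, 0.0])   # e^1 written as a row vector
e2_dual = np.array([0.0, 1.0])   # e^2

print(e1_dual @ v)   # 7.0  == v^1
print(e2_dual @ v)   # -2.0 == v^2
```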


This leads us to the most often used definition of an $(r,s)$ tensor. Given a real vector space $\mathcal{V}$ and its dual space $\mathcal{V}^*$, an $(r,s)$ tensor is a multilinear map: $$\mathbf{T}:(\mathcal{V}^*)^r\times \mathcal{V}^s\to\mathbb{R}$$ Where the power denotes the repeated Cartesian product $\times$. This tensor has order $r+s$. If we have some notion of array multiplication like the above, we could write $$\mathbf{T}(\underline{\omega^1},...,\underline{\omega^r},\underline{v_1},...,\underline{v_s})=\underline{\omega^1}\cdots\underline{\omega^r}~\mathbf{T}~\underline{v_1}\cdots\underline{v_s}=t$$ Where $t$ is some scalar. The components of a tensor are defined in much the same way as we retrieved the components of vectors: $$T^{i_1,...,i_r}_{j_1,...,j_s}=\mathbf{T}(\underline{e^{i_1}},...,\underline{e^{i_r}},\underline{e_{j_1}},...,\underline{e_{j_s}})$$
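As a concrete illustration, here is a minimal numpy sketch of a $(1,2)$ tensor on a 2-dimensional space, showing both the multilinear-map view and component extraction via basis (co)vectors (the random component array is purely illustrative):

```python
import numpy as np

# A (1,2) tensor stored as a 3-way array of components T^i_{jk}.
# As a map it takes one covector and two vectors:
#   T(omega, u, v) = omega_i T^i_{jk} u^j v^k   (summed over i, j, k)
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2))

def T_as_map(omega, u, v):
    return np.einsum('i,ijk,j,k->', omega, T, u, v)

# Component extraction: feed in basis covectors and basis vectors.
e = np.eye(2)  # rows serve as both basis vectors and dual basis covectors here
i, j, k = 0, 1, 1
print(np.isclose(T_as_map(e[i], e[j], e[k]), T[i, j, k]))  # True
```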

Concerning the geometry, this is kind of hard to pin down. I'm sure there are some nice examples of visualizing the Ricci tensor, etc., but it's hard to talk intuitively about the geometry of these complex objects. But hopefully my answer does a good job connecting the notions of vectors, matrices, and tensors, and you get some benefit out of it. As a final comment, the above only covers "square" tensors - that is, tensors whose indices all have the same range. We can actually define non-square tensors as maps over arbitrary vector spaces and dual spaces: $$\mathbf{T}:\mathcal{W}_1^*\times ...\times \mathcal{W}_r^*\times \mathcal{V}_1\times...\times\mathcal{V}_s\to \mathbb{R}$$ But they don't have many applications, so they are not talked about much.


BONUS - A nice notation for sets of real tensors

When talking about real tensors that we can view as arrays, I like to use the notation $$({}^r_s\mathbb{R})^n$$ for the set of $n$ dimensional $(r,s)$ real square tensors. If the tensor is non-square, we might instead write $${}^r_s\mathbb{R}^{m_1,...,m_r}_{n_1,...,n_s}$$ I think this is rather nice. More generally, I use the notation ${}^r_s\mathcal{V}$ to denote the set of $(r,s)$ tensors over a vector space $\mathcal{V}$.

Sorry about the rambling answer. Please comment with any questions.