Definition of a tensor for a manifold

Solution 1:

To see that a linear map of vector spaces is a $(1,1)$-tensor, realize that such an object eats a vector $X$ and a covector $\omega$ (a linear form to $\mathbb R$!) and gives you a number, i.e. $$\mathbf T(X,\omega)=\sum_{i,j} X^i\omega_j\mathbf T(\partial_i,dx^j)=\sum_{i,j} \omega_jT^j_iX^i,\;\text{where }T^j_i:=\mathbf T(\partial_i,dx^j)\in\mathbb R.$$ In the previous formula one can see that a $(1,1)$-tensor "is" a matrix $T^j_i$ which takes a vector $X$ and gives a vector with components $\sum_i T^j_iX^i$, which is exactly the matrix characterization of a linear map between vector spaces: namely the linear map $V\rightarrow V$ given by $X\mapsto\mathbf T(X,\cdot\,)$, where the element $\mathbf T(X,\cdot\,)\in V^{**}$ is identified with a vector via the canonical isomorphism $V^{**}\cong V$.
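As a quick numerical sanity check, here is a minimal numpy sketch (the matrix and the component vectors are arbitrary illustrative choices, not taken from the text above): the same number comes out whether you read $T^j_i$ as a bilinear map on a vector and a covector, or as a linear map $V\to V$ paired with a covector afterwards.

```python
import numpy as np

# Components T^j_i of a (1,1)-tensor in some basis: a plain matrix.
# (Arbitrary illustrative numbers.)
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

X = np.array([1.0, 2.0])       # vector components X^i
omega = np.array([4.0, -1.0])  # covector components omega_j

# Viewed as a bilinear map: T(X, omega) = sum_{i,j} omega_j T^j_i X^i
as_bilinear = omega @ T @ X

# Viewed as a linear map V -> V: first X |-> T X, then pair with omega
as_linear_map = omega @ (T @ X)

assert np.isclose(as_bilinear, as_linear_map)
print(as_bilinear)  # same number either way
```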

Thinking of tensors as multi-dimensional arrays is indeed a valid conceptual picture of what they are, as long as you imagine them acting multi-linearly on vectors. You may be interested in the long answer I gave to another question concerning the concept and construction of covectors on manifolds (which then generalizes to tensors). You can think of your multi-dimensional multi-linear arrays as having entries that depend smoothly on the points of the manifold, in such a way that the whole object remains linear when acting on smooth vector fields (i.e. sections of the tangent bundle). For this to be true, their components have to transform in a particular way (generalizing the transformation law derived in the linked answer above).

If covectors are smoothly varying linear forms $\omega\vert_p :T_pM\rightarrow\mathbb R$ such that $\omega (aX+bY)=a\omega(X)+b\omega(Y)\in\mathbb R$ for all $a,b\in\mathbb R$ and any smooth vector fields $X,Y$, then they are completely determined, by linearity (check!), by their action on any coordinate basis of any chart: $$\omega(\partial_i)=:\omega_i\Rightarrow \omega (X)=\sum_i X^i\omega(\partial_i)=\sum_i X^i\omega_i\,.$$ Since $X^i$ and $\omega_i$ are, by the definition of vectors and covectors, smooth scalar fields on $M$, this confirms that such $\omega$ are indeed the smooth linear forms $TM\rightarrow\mathbb R$. Now, define covariant $k$-tensors to be similarly generalized from pointwise multi-linear forms $\Omega\vert_p:\otimes^k T_pM\rightarrow\mathbb R$, that is to say, multi-linear functionals on $k$ vector fields: $$\Omega(aX_1+bY_1,X_2,...,X_k) =a\Omega(X_1,X_2,...,X_k)+ b\Omega(Y_1,X_2,...,X_k)\text{ and similarly for the other slots}.$$ Because of this multi-linearity, their action on any $k$ vector fields reduces to their action on the coordinate basis: $$\Omega(\partial_{i_1},...,\partial_{i_k}) =:\Omega_{i_1...i_k}\Rightarrow \Omega(X_1,...,X_k)=\sum_{i_1,...,i_k}X^{i_1}_1\cdots X^{i_k}_k\Omega_{i_1...i_k}\,.$$ In order to extend these algebraic (co)tensors at every point to tensor fields on the manifold, their multi-array components $\Omega_{i_1...i_k}(P)$ must be smooth functions of the points $P\in M$, i.e. $\Omega_{i_1...i_k}:M\rightarrow\mathbb R$; but for the whole array to behave coherently and multi-linearly, since coordinate bases transform between charts, their components must patch together as: $$\partial'_i=\sum_j\frac{\partial x^j}{\partial y^i}\partial_j\Rightarrow \Omega'_{i_1...i_k}:=\Omega(\partial'_{i_1},...,\partial'_{i_k})=\sum_{j_1,...,j_k}\frac{\partial x^{j_1}}{\partial y^{i_1}}\cdots\frac{\partial x^{j_k}}{\partial y^{i_k}}\Omega_{j_1...j_k}\,.$$ This is the reason for the often confusing fact that the components of a covector transform like the basis of vectors, and the components of a vector like the basis of covectors!
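To make the transformation law concrete, here is a hedged numpy sketch for a covariant $2$-tensor at a single point under a linear change of charts $x = Ay$ (all the arrays are random illustrative data; the only claim being checked is that the fully contracted scalar is chart-independent):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

Omega = rng.standard_normal((n, n))  # components Omega_{j1 j2} in chart x
X = rng.standard_normal(n)           # vector components X^j in chart x
Y = rng.standard_normal(n)

# A linear change of charts x = A y, so J[j, i] = dx^j / dy^i = A[j, i].
A = rng.standard_normal((n, n))
J = A
Jinv = np.linalg.inv(A)              # Jinv[i, j] = dy^i / dx^j

# Covariant components pick up one Jacobian factor per slot ...
Omega_prime = np.einsum('ja,kb,jk->ab', J, J, Omega)
# ... while vector components transform with the inverse Jacobian,
X_prime = Jinv @ X
Y_prime = Jinv @ Y

# so the scalar Omega(X, Y) is the same in both charts.
assert np.isclose(np.einsum('jk,j,k->', Omega, X, Y),
                  np.einsum('ab,a,b->', Omega_prime, X_prime, Y_prime))
```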

If you generalize this to include contravariant $k$-tensors $A\vert_p:\otimes^k T_p^*M\rightarrow\mathbb R$, it is easy to deduce that their transformation between charts uses the opposite (inverse) Jacobian matrices. Finally you put all of this together to define $(r,s)$-tensors $T\vert_p:(\otimes^r T_pM)\otimes(\otimes^s T_p^*M)\rightarrow\mathbb R$, which are multi-linear objects that take $r$ vectors and $s$ covectors and give numbers; make them into tensor fields by letting their array components vary smoothly on $M$, and ensure that they patch together on overlapping charts so as to preserve multi-linearity: $$T'^{\,i_1...i_s}_{\,j_1...j_r}=\sum_{l_1,...,l_s}\sum_{k_1,...,k_r}\frac{\partial x^{k_1}}{\partial y^{j_1}}\cdots\frac{\partial x^{k_r}}{\partial y^{j_r}}\cdot\frac{\partial y^{i_1}}{\partial x^{l_1}}\cdots\frac{\partial y^{i_s}}{\partial x^{l_s}}\,T^{l_1...l_s}_{k_1...k_r}\,.$$ Therefore, indeed you can think of tensors as multi-linear multi-dimensional arrays of smooth real functions on every chart of your manifold, such that all of them patch together nicely on the charts' intersections. (Think of charts as coordinate systems, like observers in physics: each of them has a bunch of functions making up these arrays, and any two observers agree they are talking about the same array-object by checking that its action on any input is the same regardless of their coordinate transformations.) So actually, you end up realizing that you could have defined tensor fields as sections of the tensor product bundle of several copies of the tangent and cotangent bundles, since that is the most geometric, intrinsic and coordinate-independent definition possible. By taking only anti-symmetric covariant tensors you get the differential forms of the other answer.
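The same kind of numerical check works for mixed tensors. Below is a minimal sketch (again with random illustrative data and a linear chart change $x = Ay$) of the $(1,1)$ instance of the law above: the lower slot contracts with $\partial x/\partial y$, the upper slot with $\partial y/\partial x$, and pairing with a vector and a covector gives a chart-independent scalar.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

T = rng.standard_normal((n, n))  # T^l_k: upper index first, lower second
A = rng.standard_normal((n, n))  # linear chart change x = A y
J = A                            # J[k, j] = dx^k / dy^j
Jinv = np.linalg.inv(A)          # Jinv[i, l] = dy^i / dx^l

# Mixed law: T'^i_j = sum_{k,l} (dx^k/dy^j) (dy^i/dx^l) T^l_k
T_prime = np.einsum('kj,il,lk->ij', J, Jinv, T)

# Sanity check: contracting with a covector and a vector gives the
# same scalar in either chart.
X, omega = rng.standard_normal(n), rng.standard_normal(n)
X_prime = Jinv @ X          # vector components: inverse Jacobian
omega_prime = J.T @ omega   # covector components transform like the basis
assert np.isclose(np.einsum('lk,l,k->', T, omega, X),
                  np.einsum('ij,i,j->', T_prime, omega_prime, X_prime))
```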

Solution 2:

Both Santiago and Javier have given very good answers. At the risk of repeating what they've said, let's do some concrete examples.

A matrix is just an array of numbers, literally. In particular, a matrix is not a function until we specify how it acts. For example, let's look at the matrix $$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$ This matrix defines many maps. For example, if we choose a basis for $V = \mathbb{R}^3$, then we can define the linear map $$f\colon V \to V$$ $$f(v) = Av.$$ As another example, if we choose bases for $V = \mathbb{R}^3$ and $V^* = (\mathbb{R}^3)^*$, then we can define a $(1,1)$-tensor $$T\colon V^* \times V \to \mathbb{R}$$ $$T(u,v) = u^{\top}Av.$$ So, if we take $v = (0, 1, 0)$ in the standard basis $\{e_1, e_2, e_3\}$, and $u = (1,0,0)$ in the dual basis $\{\epsilon^1, \epsilon^2, \epsilon^3\}$, we get $$T(u,v) = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = 2.$$ Of course, if we denote the entries of our matrix as $A = (A^j_i)$, then all we've said is $$T(\epsilon^1, e_2) = A^1_2.$$ Let's change notation slightly: since the matrix $A$ completely defines the tensor $T$, we'll write $T^j_i = A^j_i$. In general, then, we see that $$T(\epsilon^j, e_i) = T^j_i.$$ In other words: the array of numbers $(T^j_i)$ is the result of evaluating the $(1,1)$-tensor $T$ on basis elements.
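For completeness, the computation above transcribes directly into numpy (nothing here is assumed beyond the matrix $A$ and the two basis vectors from the text):

```python
import numpy as np

# The matrix A from the text.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

u = np.array([1, 0, 0])  # epsilon^1 in the dual basis
v = np.array([0, 1, 0])  # e_2 in the standard basis

print(u @ A @ v)  # T(u, v) = u^T A v = A^1_2 = 2
```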

The exact same story holds for general $(p,q)$-tensors, but now we need to use $(p+q)$-dimensional arrays of numbers rather than matrices. This means that to specify an entry in the array, we will need $p$ superscript indices and $q$ subscript indices: $$T(\epsilon^{j_1}, \ldots, \epsilon^{j_p}, e_{i_1}, \ldots, e_{i_q}) = T^{j_1\ldots j_p}_{i_1\ldots i_q}$$
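As an illustrative sketch (the storage convention, superscript axis first, is an arbitrary choice on my part), a $(1,2)$-tensor becomes a $3$-dimensional numpy array: evaluating it on basis elements just reads off an entry, and evaluating it on general arguments is a full contraction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# A (1,2)-tensor as a 3-dimensional array: axis 0 is the superscript j,
# axes 1 and 2 are the subscripts i1, i2. (Random illustrative data.)
T = rng.standard_normal((n, n, n))

# Evaluating on basis elements reads off an entry of the array ...
j, i1, i2 = 0, 1, 2
eps = np.eye(n)[j]                 # dual basis element epsilon^j
e1, e2 = np.eye(n)[i1], np.eye(n)[i2]
assert np.isclose(np.einsum('jab,j,a,b->', T, eps, e1, e2), T[j, i1, i2])

# ... while evaluating on general arguments is a full contraction.
u, x, y = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
print(np.einsum('jab,j,a,b->', T, u, x, y))
```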

When we work on manifolds, we just take $V = T_pM$ to be the tangent space (at a fixed point $p \in M$), so that $V^* = T_p^*M$ is the cotangent space. After choosing local coordinates, we can define a standard basis $\{\partial_1|_p, \ldots, \partial_n|_p\}$ for $V = T_pM$, and then let $\{dx^1|_p, \ldots, dx^n|_p\}$ denote the dual basis for $V^* = T_p^*M$.

Solution 3:

To give a linear map $V \to V$ is the same as to give a linear map $V^* \otimes V \to \mathbb R$, assuming $V$ is a finite-dimensional real vector space. The correspondence assigns to a linear map $T\colon V \to V$ the linear map $V^* \otimes V \to \mathbb R$ defined on pure tensors by

$f \otimes v \mapsto f(Tv)$

and then extending linearly to all tensors. In this way a linear map $V \to V$ is the same as a $(1,1)$-tensor. A similar idea works for more general tensors.
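In coordinates this correspondence is immediate: $f(Tv)$ is just the pairing `f @ (T @ v)`, so the $(1,1)$-tensor attached to $T$ has components equal to the matrix of $T$ itself. A minimal numpy sketch (random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
T = rng.standard_normal((n, n))  # a linear map V -> V, in some basis

def as_11_tensor(T):
    """The (1,1)-tensor V* x V -> R defined on pure tensors by f(x)v |-> f(Tv)."""
    return lambda f, v: f @ (T @ v)

tensor = as_11_tensor(T)

# Its components tensor(eps^j, e_i) recover the matrix entries T[j, i].
E = np.eye(n)
comps = np.array([[tensor(E[j], E[i]) for i in range(n)] for j in range(n)])
assert np.allclose(comps, T)

# Evaluating on arbitrary arguments: some scalar f(Tv).
f, v = rng.standard_normal(n), rng.standard_normal(n)
print(tensor(f, v))
```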