Tensors as Multilinear maps?
Today I learned about tensors as multilinear maps. I usually think of a tensor as a multidimensional array of numbers with fixed transformation laws, and I am having trouble understanding how a tensor could instead be a multilinear map taking a set of dual vectors and vectors to the real numbers $\mathbb{R}$. More specifically, I am having a hard time understanding the concept of a multilinear map itself. A definition of tensors similar to how I think of them: http://en.wikipedia.org/wiki/Tensor#As_multidimensional_arrays A definition of tensors in terms of multilinear maps: http://en.wikipedia.org/wiki/Tensor#As_multilinear_maps or "Spacetime and Geometry: An Introduction to General Relativity" by Sean Carroll, page 21.
Solution 1:
Background: I work in the field of numerical relativity. I've read Carroll's book, but not recently.
It's pretty common for physics students to reach this point in their education, not really knowing anything about what tensors are or how they're talked about in higher mathematics. That's not really the students' fault. If your education was anything like mine, your first exposure to this stuff probably came from an electromagnetism course, or maybe a classical mechanics course. You stuck with vector calculus, and maybe the odd matrix now and then to do transformations, and that's all you needed.
Let's start with matrices, though: you might've thought of matrices as arrays of numbers, just with some funny "matrix multiplication" operation that lets you multiply matrices and vectors to get other vectors. That's good enough to do the computation, but it's a very narrow way of looking at things.
Instead, think of the matrix abstractly as corresponding to a vector-valued linear function of a vector. If you picture the input as a position vector, that's exactly what you've been taught to call a vector field: a vector-valued function of a vector. The only additional property we're imposing is that this function be linear.
Example: consider the matrix
$$T = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$$
You can write $T$ like a function. Given a vector $\vec v$ and a basis $\vec u_1, \vec u_2, \vec u_3$, you could write
$$\begin{align*} T(\vec v) &= [(a \vec u_1 + b \vec u_2 + c \vec u_3) \cdot \vec v ] \vec u_1 \\ & + [(d \vec u_1 + e \vec u_2 + f \vec u_3) \cdot \vec v ] \vec u_2 \\ & + [(g \vec u_1 + h \vec u_2 + i \vec u_3) \cdot \vec v ] \vec u_3\end{align*}$$
Each of those dot products is just doing the row-column approach to matrix multiplication that you already know. This expression, for a general matrix, is rather tedious and tiresome, but most geometric transformations can be written more compactly.
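Here's a quick numerical sketch of that equivalence in Python/NumPy (the matrix entries and vector are made-up numbers, standing in for $a$ through $i$): building $T(\vec v)$ term by term from the row-dot-product expansion above gives exactly the same answer as ordinary matrix multiplication.

```python
import numpy as np

# Made-up values standing in for the entries a..i of the example matrix.
T = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
v = np.array([0.5, -1.0, 2.0])

# Standard basis vectors u_1, u_2, u_3.
u = np.eye(3)

# Build T(v) from the expansion: each row of T, dotted with v,
# gives the coefficient multiplying the corresponding basis vector.
Tv = sum(np.dot(T[i], v) * u[i] for i in range(3))

# Identical to the row-times-column matrix product.
assert np.allclose(Tv, T @ v)
```

The point isn't the computation itself but that the function-of-a-vector picture and the array-of-numbers picture are literally the same operation written two ways.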
So, a matrix isn't just an array of numbers with some arcane multiplication rule attached. It corresponds to a linear function--a linear map, as mathematicians would say. You can see in the above example that the components of the matrix are tied to the basis we used to write out the function $T$. If you change basis, you change components. That much becomes obvious when written this way.
General tensors correspond to maps just as matrices do. Here, we showed a matrix can correspond to a map from a vector to a vector. A tensor could map a vector to another vector, or a vector to a covector, or several vectors to a scalar, for instance.
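As a sketch of the "several vectors to a scalar" case: a $(0,2)$-tensor is a bilinear map eating two vectors and returning a number, and its components are just its values on pairs of basis vectors. (The component values below are made up for illustration.)

```python
import numpy as np

# A (0,2)-tensor: components g[i, j] are the values g(u_i, u_j)
# on pairs of basis vectors (made-up numbers for illustration).
g = np.array([[2., 1., 0.],
              [1., 3., 0.],
              [0., 0., 1.]])

def g_map(v, w):
    """Evaluate the tensor as a function: g(v, w) = g_ij v^i w^j."""
    return np.einsum('ij,i,j->', g, v, w)

v = np.array([1., 0., 2.])
w = np.array([0., 1., 1.])

# Multilinear means linear in each argument slot separately:
a, b = 2.0, -3.0
assert np.isclose(g_map(a * v + b * w, w),
                  a * g_map(v, w) + b * g_map(w, w))
```

The `einsum` contraction is just the index expression $g_{ij} v^i w^j$ spelled out; the assertion checks linearity in the first slot, which together with linearity in the second slot is what "bilinear" means.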
On component transformation laws: physicists usually take the point of view that a change of basis doesn't change the underlying vector being described; it merely changes the basis used to describe that vector. The change of basis means you have different vector components, but the vector itself hasn't changed. When you think of a tensor as a map--as some linear function--you ought to be able to describe the arguments in any basis you like. This changes the components of the tensor as expressed in that basis, but not the tensor itself.
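You can check this basis-independence numerically. The sketch below (random made-up components, a $(0,2)$-tensor for concreteness) transforms the tensor's components with the usual law $g'_{ij} = A^k{}_i A^l{}_j g_{kl}$ and verifies that the scalar the map produces is the same in either basis.

```python
import numpy as np

rng = np.random.default_rng(0)

g = rng.normal(size=(3, 3))   # tensor components in the old basis
A = rng.normal(size=(3, 3))   # change-of-basis matrix (almost surely invertible)

# Vector components in the new basis; old-basis components are A @ v_new.
v_new = rng.normal(size=3)
w_new = rng.normal(size=3)
v_old, w_old = A @ v_new, A @ w_new

# Transformation law for the components: g'_{ij} = A^k_i A^l_j g_{kl}.
g_new = np.einsum('ki,lj,kl->ij', A, A, g)

# The value of the map g(v, w) doesn't care which basis we used.
s_old = np.einsum('kl,k,l->', g, v_old, w_old)
s_new = np.einsum('ij,i,j->', g_new, v_new, w_new)
assert np.isclose(s_old, s_new)
```

The components shuffle around under the change of basis, but the number the tensor spits out--the thing with physical meaning--is untouched.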
Now, even this answer is only just the tip of the iceberg. I would definitely criticize physicists for not presenting tensors as linear functions; if they had put more emphasis on this, the transformation laws would be obvious from the chain rule and hardly need comment.
However, I think a physicist should not be so eager to treat geometric objects (like vectors and such) as general tensors. You can do this, but doing so deprives you of the geometric intuition you have probably built up. Instead, geometric objects like tangent directions to curves, tangent planes to surfaces, and the like, should be thought of as elements of an exterior (or Clifford) algebra instead. These formalisms let you ignore the "map" definition of vectors and such, so you can focus on building planes, volumes, and the like.
For calculus at this level, the mathematician's preferred tool of choice is differential forms. A physicist might find forms inelegantly integrated into Carroll's text alongside the vanilla index-manipulation sludge of plain old tensor calculus. Do yourself a favor: at the least, learn forms. They make all the calculus here as easy as electromagnetism's vector calculus was. I have issues with some of the conventions forms people tend to use--for reasons totally irrelevant to general relativity, they prefer to do everything in terms of forms rather than actual $k$-vector fields, an arbitrary choice that leads to circuitous garbage like defining inner products in terms of the Hodge star, which is backwards as sin--but it's still a big improvement over index manipulation.