Why don't we define vector multiplication component-wise?

Solution 1:

Unlike the usual operations of vector calculus, the product $\bullet$ you defined here is not covariant under Cartesian coordinate changes. This means that an equation involving $\bullet$ is not guaranteed to remain true when both sides undergo an orthogonal change of coordinates, such as a rotation of the axes.

For a two-dimensional example, consider the following equation: \begin{equation} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \bullet \begin{bmatrix} 0 \\ 1\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}. \end{equation} If we rotate the plane 45° counterclockwise, then \begin{align} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \to \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix},& & \begin{bmatrix} 0 \\ 1 \end{bmatrix} \to \begin{bmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix},&&\begin{bmatrix} 0 \\ 0 \end{bmatrix} \to \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \end{align} but \begin{equation}\tag{!!} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \bullet \begin{bmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}\end{bmatrix} \ne\begin{bmatrix} 0 \\ 0\end{bmatrix}. \end{equation} From the physicist's point of view, then, this operation is ill-posed: it ought to be independent of the particular coordinate system one chooses to describe physical space, but it is not. The dot product and the cross product, by contrast, are independent of such a choice.
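As a quick numerical sanity check (a minimal sketch using NumPy; the rotation matrix, variable names, and printed values are illustrative and not part of the original answer), the dot product survives the rotation while the component-wise product does not:

```python
import numpy as np

# 45-degree counterclockwise rotation of the plane
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

u_rot, v_rot = R @ u, R @ v

# The dot product is invariant under the rotation...
print(np.dot(u, v), np.dot(u_rot, v_rot))   # 0.0  0.0 (up to rounding)

# ...but the component-wise product is not:
print(u * v)           # [0. 0.]
print(u_rot * v_rot)   # [-0.5  0.5] -- no longer the zero vector
```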

Solution 2:

This is the Hadamard product, which is defined for matrices, and hence for column vectors. See the Wikipedia page: Hadamard product.
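For what it's worth, numerical libraries do expose this operation directly; here is a minimal NumPy sketch (the array values are arbitrary, chosen only for illustration):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Element-wise (Hadamard) product: NumPy's * on arrays is defined exactly this way.
print(a * b)              # [ 4 10 18]
print(np.multiply(a, b))  # same result
```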

Solution 3:

To elaborate yet a bit more on what Guiseppe Negro, James S. Cook and Michael Joyce have already said:

A vector is not a tuple of individual components. A vector is an element of some vector space.

When you're writing a vector as such a tuple, you're only referring to the expansion of the vector in some particular basis. That basis is very often not even specified, which is actually OK, because the "normal" vector operations don't in fact depend on the choice: if you rewrote all your vectors in some other basis, you would have all different numbers, but the same calculations would still yield correct results.

But that wouldn't work for component-wise multiplication, as was already shown. This operation simply does not act on the vectors themselves but on their basis representations, and a basis representation is only well-defined for some fixed choice of basis, which is not what you're actually interested in when studying vectors.

Of course, there are plenty of applications where you are in fact interested in tuples of numbers, but those aren't vectors then. There's nothing wrong with the Hadamard product, but it doesn't work on vectors but on matrices1. If you want to multiply components, then your objects may be called tuples or arrays or lists or whatever, but hardly vectors.

Unfortunately, many people have got this wrong, and that's why e.g. C++ programmers are blessed with2 an std::vector class that is in fact for dynamic arrays, which are even less accurately vectors than static arrays.


1Matrices suffer from a similar problem: many people use "matrix" and "linear mapping" as synonyms, but they aren't in fact the same; matrices refer to a particular basis while linear mappings need no such thing.

2Nothing against std::vector, it's great – it's just not a vector class, just like "functors" aren't in fact functors.

Solution 4:

I think the reason you don't normally see it is just because it doesn't really have an application in linear algebra.

If you look at $\mathbb{F}^n$ as a ring instead of a vector space over $\mathbb{F}$, then what you have suggested (coordinatewise multiplication) is exactly the product ring structure of the ring $\mathbb{F}^n$. It's completely natural, and useful.
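Written out explicitly (this merely restates the above in symbols; the zero-divisor example mirrors the two-dimensional computation from Solution 1):
$$ (a_1,\dots,a_n)\cdot(b_1,\dots,b_n) = (a_1 b_1,\dots,a_n b_n), \qquad 1_{\mathbb{F}^n} = (1,\dots,1). $$
For $n \ge 2$ this ring has zero divisors, e.g. $(1,0,\dots,0)\cdot(0,1,\dots,0) = (0,0,\dots,0)$, so it is a perfectly good commutative ring, but not a field.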

It's just not mentioned in linear algebra because you are rarely thinking of $\mathbb{F}^n$ as a ring, you are usually focused upon its vector space identity.

I think a lot of people have probably been cognitively fooled into thinking of your product "like the cross product" or "like the inner product". Those two are really useful in linear algebra, but the coordinatewise product does not compare.

Solution 5:

I often see this product as an incorrect answer in my freshman mechanics course. If they told me they thought I wanted the direct product of $\mathbb{R}$ with itself then I suppose I would let them have their points back.

This multiplication has been on my mind lately. The algebra defined on $\mathbb{R}^2$ by this Hadamard product is isomorphic to the hyperbolic numbers $\mathbb{R} \oplus j\mathbb{R}$, where $j^2=1$. Let's call your algebra $\mathcal{A}_1$ and the hyperbolic numbers $\mathcal{A}_2$.

The isomorphism is given by $\Phi: \mathcal{A}_1 \rightarrow \mathcal{A}_2$ with $\Phi(a,b) = \frac{1}{2}(a+b)+\frac{1}{2}j(b-a)$. Notice that the identity for $\mathcal{A}_1$ is $(1,1)$ and $\Phi(1,1)=1$. Furthermore, $\Phi^{-1}(x+jy) = (x-y, x+y)$ which allows us to see that $\Phi^{-1}(j) = (-1,1)$. In other words, $(-1,1)$ is the "$j$" for the Hadamard product.
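A small numerical check of this isomorphism (a sketch; the hyperbolic-number arithmetic is coded by hand as pairs $(x, y)$ meaning $x + jy$, and the test values are arbitrary):

```python
import random

# Hyperbolic (split-complex) numbers as pairs (x, y) meaning x + j*y, with j**2 = 1.
def hyp_mul(p, q):
    (x1, y1), (x2, y2) = p, q
    return (x1 * x2 + y1 * y2, x1 * y2 + y1 * x2)

# Phi: A_1 -> A_2,  Phi(a, b) = (a + b)/2 + j*(b - a)/2
def phi(v):
    a, b = v
    return ((a + b) / 2, (b - a) / 2)

# Hadamard (component-wise) product on R^2
def hadamard(u, v):
    return (u[0] * v[0], u[1] * v[1])

# Check Phi(u . v) == Phi(u) * Phi(v) on random inputs
for _ in range(5):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    lhs = phi(hadamard(u, v))
    rhs = hyp_mul(phi(u), phi(v))
    assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))

print("Phi(1, 1) =", phi((1, 1)))    # (1.0, 0.0), i.e. the identity 1
print("Phi(-1, 1) =", phi((-1, 1)))  # (0.0, 1.0), i.e. j, as claimed above
```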

The geometry of $\mathcal{A}_2$ is in part exposed by thinking about $j$-multiplication:

$$ j(x+jy) = jx+j^2y = y+jx $$

Multiplication by $j$ reflects about the line $y=x$. This is obviously different from the complex numbers $\mathbb{R} \oplus i\mathbb{R}$, where multiplication by $i$ maps $(x,y)$ to $(-y,x)$. See http://en.wikipedia.org/wiki/Split-complex_number for the geometry of these hyperbolic numbers.