On defining the cross (vector) product.

This has been bugging me for years, so I finally decided to "derive" (for lack of a better term) the definition of the cross product in $\mathbb R^3$. Here was my method: find a vector $\mathbf w = \mathbf u \times \mathbf v$ such that $\mathbf w \cdot \mathbf u = \mathbf w \cdot \mathbf v = 0$, where $\mathbf u = \begin{bmatrix} a & b & c \end{bmatrix}$ and $\mathbf v = \begin{bmatrix} d & e & f \end{bmatrix}$. This of course expresses the orthogonality of $\mathbf w$ to both $\mathbf u$ and $\mathbf v$. I set up the $2 \times 3$ system to solve for $\mathbf w = \begin{bmatrix} w_1 & w_2 & w_3 \end{bmatrix}$ as follows: $$\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$ Of course this is 3 unknowns in 2 equations, so I knew there would have to be an arbitrary parameter. I was fine with this for the time being, and after some dirty work I ended up with the following:

$$\begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = t \begin{bmatrix} \frac{\begin{vmatrix}b & c \\ e & f\end{vmatrix}}{\begin{vmatrix}a & b \\ d & e\end{vmatrix}} \\ -\frac{\begin{vmatrix}a & c \\ d & f\end{vmatrix}}{\begin{vmatrix}a & b \\ d & e\end{vmatrix}} \\ 1 \end{bmatrix}$$
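(In case it saves anyone the algebra: the "dirty work" here is just Cramer's rule. Treat $w_3 = t$ as the free parameter, move it to the right-hand side, and solve the remaining $2 \times 2$ system, assuming $\begin{vmatrix}a & b \\ d & e\end{vmatrix} \neq 0$:

$$\begin{cases} a w_1 + b w_2 = -ct \\ d w_1 + e w_2 = -ft \end{cases} \implies w_1 = t\,\frac{\begin{vmatrix}b & c \\ e & f\end{vmatrix}}{\begin{vmatrix}a & b \\ d & e\end{vmatrix}}, \qquad w_2 = -t\,\frac{\begin{vmatrix}a & c \\ d & f\end{vmatrix}}{\begin{vmatrix}a & b \\ d & e\end{vmatrix}}.)$$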


This looked very much like the "traditional" definition of the cross product, so I chose $t = \begin{vmatrix}a & b \\ d & e\end{vmatrix}$ and finally ended up with $$\mathbf w = \begin{pmatrix} \begin{vmatrix}b & c \\ e & f\end{vmatrix} \\ -\begin{vmatrix}a & c \\ d & f\end{vmatrix} \\ \begin{vmatrix}a & b \\ d & e\end{vmatrix} \end{pmatrix},$$ which is the definition of the cross product that I've seen in pretty much all of my calculus and physics texts (also shown in determinant form with unit vectors). But where does that value for $t$ come from? Why does that particular value of $t$ work, besides my hunch to make it look like a definition that is universally accepted? Is the rationale behind $t$ being negative for $\mathbf w = \mathbf v \times \mathbf u$ just to satisfy the right-hand rule?
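As a quick numerical sanity check (a minimal sketch in Python with NumPy; the example vectors are arbitrary, chosen only for illustration), the component formula above is orthogonal to both inputs and agrees with NumPy's built-in cross product:

```python
import numpy as np

# Two arbitrary example vectors u = [a, b, c], v = [d, e, f]
u = np.array([2.0, -1.0, 3.0])
v = np.array([4.0, 0.5, -2.0])

a, b, c = u
d, e, f = v

# Component formula derived above, with t = |a b; d e|
w = np.array([
    b * f - c * e,      #  |b c; e f|
    -(a * f - c * d),   # -|a c; d f|
    a * e - b * d,      #  |a b; d e|
])

# w should be orthogonal to both u and v ...
assert abs(np.dot(w, u)) < 1e-12
assert abs(np.dot(w, v)) < 1e-12
# ... and should agree with the standard cross product
assert np.allclose(w, np.cross(u, v))
print(w)
```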

Sorry if anything is messed up; this is my first time ever using MathJax.

By the way, I've checked similar questions that ask for the rationale behind the cross product's existence, which I've already learned from studying electromagnetics myself. What I wanted was the rationale behind the length of the vector, hence my question about the value of $t$. Thanks for any help you can offer!


Solution 1:

I'm not sure if this is what you are looking for exactly, but that particular choice of $t$ is nice geometrically because, if $\mathbf{w}=\mathbf{u}\times \mathbf{v}$ with the $t$ defined as such, then the magnitude of $\mathbf{w}$ is the same as the area of the parallelogram defined by $\mathbf{u}$ and $\mathbf{v}$ (this generalizes to $n$ dimensions as well). It is also the unique choice of $t$ which makes $\mathbf{e_1}\times\mathbf{e_2}=\mathbf{e_3}$ (the standard basis vectors), which is a nice property I suppose.
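To make the area claim concrete, one can verify Lagrange's identity directly from the component formula:

$$\|\mathbf u \times \mathbf v\|^2 = \|\mathbf u\|^2\|\mathbf v\|^2 - (\mathbf u \cdot \mathbf v)^2 = \|\mathbf u\|^2\|\mathbf v\|^2(1 - \cos^2\theta),$$

so $\|\mathbf u \times \mathbf v\| = \|\mathbf u\|\,\|\mathbf v\|\sin\theta$, i.e. base times height of the parallelogram spanned by $\mathbf u$ and $\mathbf v$. This particular $t$ is exactly the one for which the magnitude equals that area; any other multiple would rescale it.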

Solution 2:

It depends on your definition of the cross product.

The history of cross products has been discussed in depth in Michael Crowe (2002), A History of Vector Analysis (see also this earlier thread). Apparently, there are two paths of development, one led by Hamilton and the other by Grassmann and the French mathematician Adhémar Barré, Comte de Saint-Venant.

According to Crowe (2002), Hamilton noted in 1846 that --- in modern language --- given two purely imaginary quaternions $Q=x\mathbf i+y\mathbf j+z\mathbf k$ and $Q'=x'\mathbf i+y'\mathbf j+z'\mathbf k$, the real part (called "scalar part" by Hamilton) of $QQ'$ is equal to $-(xx'+yy'+zz')$ (which is the negative of the modern dot product) and the imaginary part (called "vector part") of $QQ'$ is equal to $\mathbf i(yz'-zy')+\mathbf j(zx'-xz')+\mathbf k(xy'-yx')$, which is the modern cross product. Crowe comments that "This will be very significant historically; in fact, it was precisely along this path that modern vector analysis originated."
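For the record, Hamilton's observation is a routine expansion using the quaternion relations $\mathbf i^2 = \mathbf j^2 = \mathbf k^2 = -1$, $\mathbf{ij} = \mathbf k = -\mathbf{ji}$, $\mathbf{jk} = \mathbf i = -\mathbf{kj}$, $\mathbf{ki} = \mathbf j = -\mathbf{ik}$:

$$QQ' = -(xx' + yy' + zz') + \mathbf i(yz' - zy') + \mathbf j(zx' - xz') + \mathbf k(xy' - yx').$$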

On the other front, Grassmann had already devised in 1840 something numerically equivalent to the modern cross product, and Barré de Saint-Venant also "lays out a number of the fundamental ideas of vector analysis, including a version of the cross product". However, both viewed the results of vector products only as directed areas, not as vectors. This is understandable, because their studies of cross products were motivated by physical applications. Unfortunately, according to Crowe,

Grassmann and Saint-Venant correspond for a time, but Saint-Venant's ideas do not seem to have attracted significant attention. They do show, however, that the search for a vectorial system was “in the air”.

The earliest known explicit definitions of the modern cross product were given in Tait's An Elementary Treatise on Quaternions (1867) and Gibbs's Vector Analysis (1881). In Tait's Treatise, the cross product is motivated exactly in Hamilton's way (i.e. by considering the imaginary part of the product of two purely imaginary quaternions), while in Gibbs's Vector Analysis, the cross product $C = A \times B$ is a vector whose length is the area of the parallelogram with edges $A$ and $B$ and whose direction is determined by the right-hand rule, so that the scalar triple product $A \cdot (B \times C)$ or $(A \times B) \cdot C$ gives the signed volume of the parallelepiped with concurrent edges $A, B, C$; this volume is equal to the determinant of the $3 \times 3$ matrix with rows $A, B, C$.
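In modern notation, that final determinant statement is the identity

$$A \cdot (B \times C) = \begin{vmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{vmatrix},$$

whose absolute value is the volume of the parallelepiped and whose sign records the orientation (handedness) of the triple $(A, B, C)$.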