What are the "building blocks" of a vector?

Solution 1:

Think of it this way: those column matrices you're talking about are really just coordinates of some vector in a vector space. They're not necessarily the vector you are discussing, they're just a representation of that vector in an easier vector space.

So what are coordinates? There is a theorem in linear algebra that says that every vector space has a (Hamel) basis. In fact most of the vector spaces you'll deal with have an infinite number of possible bases. Once you've chosen one, the coordinates of a vector in that space are just the coefficients of the expansion of that vector in the basis vectors.

Let's look at an example. Consider the degree $2$ polynomial space, denoted $P_2(\Bbb R)$. This is the set of all polynomials of degree at most $2$ with real coefficients along with these definitions for addition and scalar multiplication:
Let $p_1 = a_2x^2 + a_1x+a_0$ and $p_2 = b_2x^2 + b_1x + b_0$ be two arbitrary elements of $P_2(\Bbb R)$ and let $k \in \Bbb R$. Then $$p_1 + p_2 = (a_2 + b_2)x^2 + (a_1+b_1)x + (a_0 + b_0) \\ kp_1 = (ka_2)x^2 + (ka_1)x + (ka_0)$$

It can be proven that this is in fact a vector space over $\Bbb R$.
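As a quick sanity check, these operations can be sketched in code by representing $a_2x^2 + a_1x + a_0$ as the coefficient tuple $(a_0, a_1, a_2)$. (The helper names `add` and `scale` below are illustrative, not from any standard library.)

```python
# A minimal sketch of P_2(R): represent a_2*x^2 + a_1*x + a_0
# as the coefficient tuple (a0, a1, a2).

def add(p, q):
    """Vector addition: add coefficients term by term."""
    return tuple(a + b for a, b in zip(p, q))

def scale(k, p):
    """Scalar multiplication: multiply every coefficient by k."""
    return tuple(k * a for a in p)

p1 = (-2, 0, 3)   # 3x^2 - 2
p2 = (0, 3, 0)    # 3x

print(add(p1, p2))    # (-2, 3, 3), i.e. 3x^2 + 3x - 2
print(scale(2, p1))   # (-4, 0, 6), i.e. 6x^2 - 4
```

This is exactly the componentwise arithmetic in the definitions above, just written out on tuples of coefficients.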

So first, let's choose a basis for this space. In this case there are an infinite number to choose from, but let's just choose the easiest one: $\epsilon = \{1, x, x^2\}$.

Now we'll consider some specific vectors in this space. Let $p_1 = 3x^2 -2$ and $p_2 = 3x$. The coordinates of each of these two vectors are then elements of the vector space $\Bbb R^3$ and are usually represented as column vectors. Remember, though, that coordinates are always given with respect to some set of basis vectors. If we chose a different basis, the coordinates of a given vector would generally change.

In this case $[p_1]_\epsilon = \begin{bmatrix} -2 \\ 0 \\ 3\end{bmatrix}$ and $[p_2]_\epsilon = \begin{bmatrix} 0 \\ 3 \\ 0\end{bmatrix}$. This is because the first coordinate corresponds to the coefficient on $1$, the second on $x$, and the third on $x^2$. So because $p_1 = (3)x^2 + (0)x + (-2)1$, we get the above coordinate vector.
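To see concretely that coordinates depend on the chosen basis, here is a small numerical sketch. The second basis $\beta = \{1,\ 1+x,\ 1+x+x^2\}$ is my own illustrative choice, not something from the discussion above; its vectors, written in $\epsilon$-coordinates, form the columns of the matrix $B$, and solving $B\,c = [p_1]_\epsilon$ gives $[p_1]_\beta$.

```python
import numpy as np

# [p1]_eps for p1 = 3x^2 - 2 in eps = {1, x, x^2}:
p1_eps = np.array([-2.0, 0.0, 3.0])

# Columns of B are the eps-coordinates of the (hypothetical)
# basis beta = {1, 1 + x, 1 + x + x^2}:
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Solve B @ c = [p1]_eps for c = [p1]_beta:
p1_beta = np.linalg.solve(B, p1_eps)
print(p1_beta)   # [-2. -3.  3.]
```

Indeed $-2\cdot 1 - 3\cdot(1+x) + 3\cdot(1+x+x^2) = 3x^2 - 2$: same vector, different coordinates.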

The unfortunate thing is that first courses in linear algebra often stick almost exclusively to discussing $\Bbb R^n$ which is not a very good vector space for understanding things like coordinates or motivating things like having different bases for your vector space. The reason is that it has too many nice qualities. For instance, there is an obvious coordinate vector associated with every single element of $\Bbb R^n$ -- itself.


As for dimension: just remember dimension is a property of a vector space, not of a vector OR EVEN of some arbitrary set of vectors. It has (almost) nothing to do with the number of entries of the coordinates of a vector.

The number of entries of a coordinate vector just tells you the dimension of the space that you've embedded your vector in. But sometimes that's not what you care about. Sometimes you care about what subspace your vector is an element of, and you can't just count your coordinates to tell you that one.

For instance, if I asked you what the dimension of $\operatorname{span}(p_1, p_2)$ is (with $p_1, p_2$ as defined above), you wouldn't get the right answer by saying that there are $3$ coordinates of each of their coordinate vectors and thus the dimension of this subspace is $3$. That is wrong. The answer is actually $2$. All the number of coordinates of the coordinate vectors of $p_1$ and $p_2$ tell you is that the dimension of $P_2(\Bbb R)$ is $3$ -- but we already knew that because we found a basis for it earlier.
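This can be checked numerically: the dimension of $\operatorname{span}(p_1, p_2)$ equals the rank of the matrix whose columns are the coordinate vectors $[p_1]_\epsilon$ and $[p_2]_\epsilon$. A minimal sketch with numpy:

```python
import numpy as np

# Columns are [p1]_eps and [p2]_eps:
A = np.array([[-2, 0],
              [ 0, 3],
              [ 3, 0]])

# The rank of A is the dimension of span(p1, p2):
print(np.linalg.matrix_rank(A))   # 2, not 3
```

The coordinate vectors live in $\Bbb R^3$, but the subspace they span is only $2$-dimensional.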


Does that answer some of your questions?

Solution 2:

To start with, a vector doesn't have a dimension. A vector space has a dimension. In particular, every vector is a member of a one-dimensional vector space, namely the one spanned by it (unless it is the zero vector, in which case it spans the zero-dimensional vector space consisting only of the zero vector).

Now what do those three numbers mean? Well, it means you've written the vector as a member of a three-dimensional vector space, spanned by three basis vectors. In particular, if you call those basis vectors $e_1$, $e_2$ and $e_3$, then your vector is $1e_1 + 2e_2 - 1e_3$. Note that in that basis, the basis vectors themselves are, of course, $e_1=1e_1+0e_2+0e_3$, $e_2=0e_1+1e_2+0e_3$ and $e_3=0e_1+0e_2+1e_3$; that is, written in itself, the basis is $$e_1 = \pmatrix{1\\0\\0}, \quad e_2 = \pmatrix{0\\1\\0}, \quad e_3=\pmatrix{0\\0\\1}.$$ So by writing the vector in column form, you are taking as given that it is a member of a three-dimensional vector space with a given basis of three vectors. The entries of the column are then the coordinates of the vector in that basis.

Now it is customary for three-dimensional Euclidean space (which has extra structure not available in all vector spaces, structure that in particular allows one to define orthogonality) to choose three orthogonal directions, and those coordinates are then usually called $x$, $y$ and $z$.

However, not all vector spaces admit that orthogonality structure. Think for example of the vector space of "stock orders" (well, it's not really a vector space because you cannot order a fraction of a stock, but let's ignore that): a vector in that space could, for example, be the order "buy 2 Microsoft stock and one Apple stock". In the "Microsoft/Apple basis", this would be the vector $\pmatrix{2\\1}$. Another order could be "sell one Microsoft stock and buy two Apple", which is the vector $\pmatrix{-1\\2}$ in the same basis.

The vector space operations are all meaningful: addition just means placing both orders (for example, buying 2 Microsoft stock and one Apple, and then selling one Microsoft and buying two more Apple, amounts to buying one Microsoft and three Apple stock, according to $\pmatrix{2\\1}+\pmatrix{-1\\2} = \pmatrix{1\\3}$), and buying/selling twice as much of each corresponds to multiplying the vector by $2$. However, it would clearly not make sense to ask whether those two orders are orthogonal to each other.

So to summarize:

  • The dimension is a property of a vector space, not of a vector. It tells you how many vectors you need to describe every other vector as a linear combination of them. A vector that is a member of a vector space is also a member of many of its proper subspaces, which all have lower dimension, down to dimension $1$ (or even dimension $0$ for the zero vector).

  • The entries in the column form are the coefficients relative to a specific basis, that is, the coefficients the basis vectors get when you write the vector as a linear combination of them.

  • A vector need not describe something in ordinary space, and there are not necessarily $x$, $y$ and $z$ directions. There could just as well be directions labelled "Microsoft" and "Apple" (or "Red", "Green", "Blue"). In particular, not everything you know from ordinary space must exist in every vector space (for example, it makes no sense to ask whether Microsoft and Apple are orthogonal, or what the angle between Pink and Orange is).