In Linear Algebra, what is a vector?

In modern mathematics, there's a tendency to define things in terms of what they do rather than in terms of what they are.

As an example, suppose that I claim that there are objects called "pizkwats" that obey the following laws:

  • $\forall x. \forall y. \exists z. x + y = z$
  • $\exists x. x = 0$
  • $\forall x. x + 0 = 0 + x = x$
  • $\forall x. \forall y. \forall z. (x + y) + z = x + (y + z)$
  • $\forall x. x + x = 0$

These rules specify what pizkwats do by saying what rules they obey, but they don't say anything about what pizkwats are. We can find all sorts of things that we could call pizkwats. For example, we could imagine that pizkwats are the numbers 0 and 1, with addition done modulo 2. They could also be bitstrings of length 137, with "addition" meaning "bitwise XOR." Or they could be sets, with "addition" meaning "symmetric difference." Each of these collections of objects obeys the rules for what pizkwats do, but none of them "are" pizkwats.
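As a quick sanity check on the last model, the symmetric difference of a set with itself is empty, which is exactly the rule $\forall x. x + x = 0$; concretely,

$$\{1, 2\} \,\triangle\, \{2, 3\} = \{1, 3\}, \qquad \{1, 2\} \,\triangle\, \{1, 2\} = \emptyset.$$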

The advantage of this approach is that we can prove results about pizkwats knowing purely how they behave rather than what they fundamentally are. For example, as a fun exercise, see if you can use the above rules to prove that

$\forall x. \forall y. x + y = y + x$.

This means that anything that "acts like a pizkwat" must support a commutative addition operator. Similarly, we could prove that

$\forall x. \forall y. (x + y = 0 \rightarrow x = y)$.

The advantage of setting things up this way is that any time we find something that "looks like a pizkwat," in the sense that it obeys the rules given above, we're guaranteed that it must have some other properties: its addition is commutative, and every element is its own (and only) inverse. We could develop a whole elaborate theory of how pizkwats behave purely from the rules they obey, and since we never actually said what a pizkwat is, anything we find that looks like a pizkwat instantly falls into our theory.
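If you'd like to check your work on the exercises (spoilers ahead), here is one derivation of commutativity using only the rules above:

$$\begin{aligned} x + y &= x + 0 + y \\ &= x + \bigl((x + y) + (x + y)\bigr) + y \\ &= (x + x) + (y + x) + (y + y) \\ &= 0 + (y + x) + 0 \\ &= y + x. \end{aligned}$$

(Associativity is what lets us regroup the terms freely.) For the second claim, add $x$ on the left of both sides of $x + y = 0$: then $(x + x) + y = x + 0$, so $0 + y = x$, i.e. $y = x$.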

In your case, you're asking about what a vector is. In a sense, there is no single thing called "a vector," because a vector is just something that obeys a bunch of rules. But any time you find something that looks like a vector, you immediately get a bunch of interesting facts about it - you can ask questions about spans, about changing basis, etc. - regardless of whether that thing you're looking at is a vector in the classical sense (a list of numbers, or an arrow pointing somewhere) or a vector in a more abstract sense (say, a function acting as a vector in a "vector space" made of functions).

As a concluding remark, Grant Sanderson of 3blue1brown has an excellent video on what vectors are that explores this idea in more depth.


When I was 14, I was introduced to vectors in a freshman physics course (algebra-based). We were told that a vector is a quantity with magnitude and direction - things like force, momentum, and electric field.

Three years later, in precalculus, we thought of them as "points," but with arrows emanating from the origin to each point - just another thing. This was the concept that stuck until I took linear algebra two years after that.

But now, in the abstract sense, vectors don't have to be these "arrows." They can be anything we want: functions, numbers, matrices, operators, whatever. When we build vector spaces (called linear spaces in other texts), we just call the objects vectors - who cares what they look like? It's a name for an abstract object.

For example, in $\mathbb{R}^n$ our vectors are ordered $n$-tuples. In $\mathcal{C}[a,b]$ our vectors are now functions - continuous functions on $[a, b]$ at that. In $L^2(\mathbb{R})$ our vectors are those functions for which

$$ \int_{\mathbb{R}} | f |^2 < \infty $$

where the integral is taken in the Lebesgue sense.
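In both function-space examples the vector operations are the usual pointwise ones, $(f + g)(x) = f(x) + g(x)$ and $(cf)(x) = c\,f(x)$. As a concrete vector in $L^2(\mathbb{R})$, take $f(x) = e^{-x^2}$: by the standard Gaussian integral,

$$\int_{\mathbb{R}} \bigl|e^{-x^2}\bigr|^2 = \int_{\mathbb{R}} e^{-2x^2} = \sqrt{\pi/2} < \infty.$$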

Vectors are whatever we take them to be in the appropriate context.


This may be disconcerting at first, but the whole point of the abstract notion of a vector is to not tell you precisely what vectors are. In practice (that is, when linear algebra is used in other areas of mathematics and the sciences, and there are a lot of areas that use it), a vector could be a real- or complex-valued function, a power series, a translation in Euclidean space, a description of a state of a quantum mechanical system, or something quite different still.

The reason all these diverse things are gathered under the common name of vector is that, for certain types of questions about all these things, a common way of reasoning can be applied; this is what linear algebra is about. In all cases there must be a definite (large) set of vectors (the vector space in which the vectors live), and operations of addition and scalar multiplication of vectors must be defined. What these operations are concretely may vary according to the nature of the vectors. Certain properties are required to hold in order to serve as a foundation for reasoning; these axioms say, for instance, that there must be a distinguished "zero" vector that is neutral for addition, and that addition of vectors is commutative (a good linear algebra course will give you the complete list, reproduced below for reference).
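The standard list of axioms, for all vectors $u, v, w$ and all scalars $a, b$, is:

  • $u + v = v + u$
  • $(u + v) + w = u + (v + w)$
  • there is a zero vector $0$ with $v + 0 = v$ for every $v$
  • every $v$ has an additive inverse $-v$ with $v + (-v) = 0$
  • $a(u + v) = au + av$
  • $(a + b)v = av + bv$
  • $(ab)v = a(bv)$
  • $1v = v$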

Linear algebra will tell you what facts about vectors, formulated exclusively in terms of the vector space operations, can be deduced purely from those axioms. Some kinds of vectors have more operations defined than just those of linear algebra: for instance, power series can be multiplied together (while in general one cannot multiply two vectors), and functions allow talking about taking limits. However, proving statements about such operations will be based on facts other than the axioms of linear algebra, and will require a different kind of reasoning adapted to each case. In contrast, linear algebra focuses on a large body of common properties that can be derived in exactly the same way in all of these examples, because it does not involve these additional structures at all, even when they happen to be present. It is for that reason that linear algebra speaks of vectors in an abstract manner, and limits its language to the operations of addition and scalar multiplication (and other notions that can be entirely defined in terms of them).
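A tiny example of such a deduction: the identity $0v = 0$ (the scalar zero times any vector gives the zero vector) holds in every vector space, whatever the vectors concretely are, because

$$0v = (0 + 0)v = 0v + 0v,$$

and adding $-(0v)$ to both sides leaves $0v = 0$.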


It's an element of a set which is endowed with a certain structure, i.e. one satisfying the axioms of a vector space.