I'm studying for the Math GRE subject test and I'm currently going over Linear Algebra. I'm carefully re-reading my course book "Linear Algebra Done Right" by Sheldon Axler (Third Edition). I have a few questions regarding the fundamental concepts:

Going over the definition of a Vector Space, it doesn't seem obvious to me why we need the scalars adjoined to the vector set to be members of a Field. If we loosen this definition a little bit, I think we can get algebraic structures that are analogous to a Vector Space and therefore still meaningful. Hence my first question: Is the Real Number Line a Vector Space?

If so, this would imply that numbers themselves can be interpreted as vectors. Then, in trying to think of "small" Vector Spaces (other than the trivial case of the singleton zero vector set $\{0\}$), I arrive at my second question: can there be Vector Spaces contained inside the Real Number Line?

It is here that loosening the definition of a Vector Space can be of merit. If we allow our scalars to be Integers and our vector set to be all the multiples (positive and negative) of a Natural Number, then that vector set, together with the operations of vector addition and scalar multiplication, would manifest all the characteristics that define a Vector Space. Try, for example, the set of all multiples of three: $\{x \mid x = 3\alpha \text{ where } \alpha \in \mathbb{Z}\}$.

So why would such a structure not be a vector space? Is it missing something? Or why do we demand that the scalars come from a field? I'm interested in knowing what you think.


This is a pretty broad question, so I'm not optimistic it'll stay open for long. That said, it's a great observation!

First, I'll answer something you implicitly mentioned before your first question:


What if we don't require scalars to form a field?

If we instead just ask them to form a commutative ring (i.e. we can do everything you're used to in a field except division), then the structure we get isn't a vector space any more; it's a slightly different structure called a module.

There are a lot of interesting things to be said about modules, but the main point is that without division, scaling down isn't generally possible.
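For a quick concrete instance of what "no scaling down" means: in the $\mathbb{Z}$-module $\mathbb{Z}$, there is no scalar $\lambda \in \mathbb{Z}$ with $\lambda \cdot 2 = 1$, so the vector $2$ cannot be scaled back down to $1$; over the field $\mathbb{Q}$ you would simply take $\lambda = \tfrac{1}{2}$.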

This leads to some surprising differences from vector spaces. For instance, in a vector space, if you have a spanning set which is not linearly independent, you can throw away redundant vectors and make a basis. This isn't always possible in a module, and the notion of 'dimension' is less well behaved for modules.

For instance, consider $\mathbb{Z}$ as a module over the base ring $\mathbb{Z}$. The set $\{2,3\}$ is linearly dependent (in the sense you're used to) and spans $\mathbb{Z}$ since $2$ and $3$ are coprime, but neither $\{2\}$ nor $\{3\}$ spans $\mathbb{Z}$ on its own.
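Spelling that out: coprimality gives the relation $1 = 1 \cdot 3 + (-1) \cdot 2$, so every integer $n = n \cdot 3 + (-n) \cdot 2$ lies in the span of $\{2,3\}$, while $$3 \cdot 2 + (-2) \cdot 3 = 0$$ is a dependence relation with nonzero coefficients. Yet $\{2\}$ spans only the even integers and $\{3\}$ only the multiples of three.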

In particular, when dealing with vector spaces you'll likely use the fact that if a finite set is linearly dependent, then you can write one of the vectors in terms of the others. How? Well, linear dependence means there is a relation $$\sum_{i=1}^n \lambda_i v_i = \lambda_1 v_1 + \dotsb + \lambda_n v_n = 0$$ with $\lambda_i \in \mathbb{F}$ and $v_i \in V$ in which some $\lambda_j \neq 0$. Then we can just rearrange to get

$$\lambda_j v_j = -\sum_{\substack{i=1 \\ i\neq j}}^n \lambda_i v_i \implies v_j = -\sum_{\substack{i=1 \\ i\neq j}}^n \frac{\lambda_i}{\lambda_j} v_i$$ by dividing through by $\lambda_j$. But in a module we can't divide by scalars, so such an expression isn't always possible, which leads to behaviour that can be surprising when you're used to vector spaces.
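You can see this failure concretely in the $\mathbb{Z}$-module $\mathbb{Z}$ from before: in the dependence $3 \cdot 2 + (-2) \cdot 3 = 0$ neither coefficient is invertible in $\mathbb{Z}$, and indeed neither vector is an integer multiple of the other, so neither $2$ nor $3$ can be written in terms of the other.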


Is the real number line a vector space?

Yes! In a few ways, actually. First, it's a vector space over itself: take the base field to be the reals, and the reals form a one-dimensional vector space, with $\{1\}$ as a basis (every real $r$ is just $r \cdot 1$).

Another way you can make the reals a vector space is to take the base field to be the rationals. Then the real numbers form a vector space over the rationals, and it's not difficult to see that it's infinite dimensional. What is difficult, however, is writing down a basis: no explicit basis is known, and even proving that one exists requires the Axiom of Choice.
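One quick way to see the infinite-dimensionality: a $\mathbb{Q}$-vector space of finite dimension $n$ is in bijection with $\mathbb{Q}^n$ (via coordinates with respect to a basis), hence countable, while $\mathbb{R}$ is uncountable.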


Can there be Vector Spaces contained inside the Real Number Line?

Yes! Going in this direction pushes you into Galois theory. For example, taking $\mathbb{Q}$ as the base field again, the span of $1$ and $\sqrt{2}$ is a ($2$-dimensional) vector space contained entirely in the reals (we call it $\mathbb{Q}(\sqrt{2})$). It has elements of the form $a + b \sqrt{2}$, where $a,b$ are rational numbers. Alternatively, you can put $\sqrt[3]{2}$ in your vector space instead, and get the ($3$-dimensional) vector space with basis $1, \sqrt[3]{2}, \sqrt[3]{4}$, also contained in the reals.
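To check that $\{1, \sqrt{2}\}$ really is a basis over $\mathbb{Q}$, and not just a spanning set, note that if $a + b\sqrt{2} = c + d\sqrt{2}$ with $b \neq d$, then $\sqrt{2} = \frac{a-c}{d-b}$ would be rational, which it isn't; so the coordinates $(a,b)$ of each element are unique and the dimension is exactly $2$.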

You can get even bigger ones by considering the algebraic numbers, which form a vector space of countably infinite dimension over $\mathbb{Q}$.

You can make the base field larger too: for instance, you could use the base field $\mathbb{Q}(\sqrt{2})$ and consider the numbers of the form $a + b \sqrt{3}$, where $a,b \in \mathbb{Q}(\sqrt{2})$. This space has dimension $2$ over $\mathbb{Q}(\sqrt{2})$ (and dimension $4$ over $\mathbb{Q}$).
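The dimension over $\mathbb{Q}$ is the product of the dimensions in the tower: multiplying the bases together, $$\{1, \sqrt{2}\} \cdot \{1, \sqrt{3}\} = \{1,\ \sqrt{2},\ \sqrt{3},\ \sqrt{6}\}$$ gives a $\mathbb{Q}$-basis with $2 \times 2 = 4$ elements.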

Finally, as mentioned earlier you can take $\mathbb{R}$ over $\mathbb{Q}$, which has uncountable dimension.


Can we relax the definition? Vector spaces are, by definition, over a field. That is all. If you relax the definition so that the scalars come from a ring, you get, by definition, an object called a module. There are many good places to read about modules; I learned them from Atiyah & Macdonald's Introduction to Commutative Algebra, but most algebra books cover them.

Vector spaces are a special case of modules, since every field is also a ring. However, some of the nice intuitive properties of vector spaces don't hold for a general module. For example, a submodule of a finitely generated module is not necessarily finitely generated, which runs counter to the fact that a subspace of a finite-dimensional vector space is always finite dimensional. Also, every vector space has a basis (even the infinite-dimensional ones), while not every module has one.

Since modules and vector spaces differ by exactly one property, the ability to 'divide' by scalars, any time something holds for vector spaces but not for modules, that property must be invoked somewhere in the proof for vector spaces, or in the proof of a theorem it relies on. In particular, it is invoked somewhere in the proof that every vector space has a basis; otherwise that theorem would also be true for modules. I just read through the proof on Wikipedia to see if I could find it, and it is indeed used implicitly, without mention, in that proof. See if you can spot it. This is a huge turning point between the theory of modules and the theory of vector spaces, because any theorem about a general vector space $V$ whose proof invokes the existence of a basis is ultimately relying on the scalars coming from a field. The existence of a basis is fundamental to the theory of vector spaces.

Why are there two separate notions, a vector space and a module? As mentioned above, if you study how the theorems about vector spaces depend on one another, you will see that many of them invoke the fact that the scalars come from a field. Any time you need to divide by a scalar (i.e. multiply by its multiplicative inverse), the theorem officially becomes a theorem about vector spaces, not about modules. As the theory progresses, the properties of modules and of vector spaces become very different, even though the objects are similar by definition. This makes it worth having two separate definitions and two separate objects of study, although there is plenty of interplay. Module theory takes full advantage whenever a module can be turned into a vector space by some operation, because every theorem about vector spaces then applies. Several branches of math share the theme of reducing problems to linear algebra by turning some object of interest into a vector space; this happens in algebraic geometry (studying divisors on curves), field theory, and representation theory, to name a few.

Also, relaxing definitions is not unique to vector spaces and modules. Most algebraic objects have a 'relaxed' version. For example, you might be familiar with groups. There is an object called a monoid that is simply a group without the axiom that requires inverses. That is, a monoid is a set with a binary associative operation and an identity element. So every group is automatically a monoid, but not every monoid is a group.
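For a concrete example: the natural numbers $\{0, 1, 2, \dots\}$ under addition form a monoid (addition is associative and $0$ is the identity) but not a group, since $1$ has no additive inverse among the natural numbers.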

Is the real number line a vector space? Absolutely. In fact, any field $K$ forms a vector space over itself. So in particular, $\mathbb{R}$ is a real vector space, that is, a vector space over $\mathbb{R}$. This is just one of the ways that $\mathbb{R}$ can be seen as a vector space: it is also possible to view $\mathbb{R}$ as a vector space with operations other than the usual addition and multiplication, or as a vector space over other fields.

Can there be vector spaces contained in the real number line? Well, if we are talking about the real number line as a vector space over itself, then $\mathbb{R}$ is a one-dimensional vector space, so it has exactly one proper subspace, the zero vector space $\{0\}$, which is trivial and uninteresting. (Any subspace containing some nonzero $v$ also contains $(r/v) \cdot v = r$ for every real $r$, and hence is all of $\mathbb{R}$.) But if you want to look at more abstract operations than the usual addition and multiplication, you can find all sorts of ways for subsets of $\mathbb{R}$ to be vector spaces.

Exercise:

Prove that $V = (-1,1) \subset \mathbb{R}$ is a vector space over $\mathbb{R}$ where addition is defined as $$u \oplus v = \frac{u+v}{1+uv}, \text{for all } u,v \in V$$ and multiplication is defined as $$\alpha \cdot v = \frac{(1+v)^{\alpha} - (1-v)^{\alpha} }{(1+v)^{\alpha} + (1-v)^{\alpha} } \text{ for all } v \in V, \alpha \in \mathbb{R}.$$

It is not hard, merely a bunch of symbol crunching, to check that the axioms of a vector space are satisfied. But the interesting questions, once we know this is a vector space, are questions such as: what is a basis? What is the dimension? What are the linear transformations?
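If you want some reassurance before doing the symbol crunching, here is a minimal numerical sanity check of the exercise's operations on $(-1,1)$. It spot-checks the axioms at random points, so it is evidence rather than a proof, and the helper names `vadd` and `smul` are just illustrative:

```python
import random

def vadd(u, v):
    """'Vector addition' on V = (-1, 1):  u ⊕ v = (u + v) / (1 + uv)."""
    return (u + v) / (1 + u * v)

def smul(a, v):
    """Scalar multiplication:  a · v = ((1+v)^a - (1-v)^a) / ((1+v)^a + (1-v)^a)."""
    p, q = (1 + v) ** a, (1 - v) ** a
    return (p - q) / (p + q)

def close(x, y, tol=1e-6):
    # Loose tolerance: the formulas are numerically delicate near ±1.
    return abs(x - y) < tol

random.seed(0)
for _ in range(1000):
    u, v, w = (random.uniform(-0.99, 0.99) for _ in range(3))
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)

    assert -1 < vadd(u, v) < 1                                       # closure of ⊕
    assert close(vadd(u, v), vadd(v, u))                             # commutativity
    assert close(vadd(vadd(u, v), w), vadd(u, vadd(v, w)))           # associativity
    assert close(vadd(u, 0), u)                                      # 0 is the zero vector
    assert close(vadd(u, -u), 0)                                     # -u is the additive inverse of u
    assert close(smul(1, u), u)                                      # 1 · u = u
    assert close(smul(a, smul(b, u)), smul(a * b, u))                # a · (b · u) = (ab) · u
    assert close(smul(a, vadd(u, v)), vadd(smul(a, u), smul(a, v)))  # a · (u ⊕ v) = a·u ⊕ a·v
    assert close(smul(a + b, u), vadd(smul(a, u), smul(b, u)))       # (a+b) · u = a·u ⊕ b·u

print("All axioms passed at the sampled points.")
```

A passing run only makes the exercise plausible; the actual verification is the symbol crunching described above.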


Division by a scalar is the same as multiplication by the inverse of that scalar. To ensure that every nonzero scalar has an inverse, we need the scalars to come from a field.