Why study finite-dimensional vector spaces in the abstract if they are all isomorphic to $\mathbb{R}^n$?

Timothy Gowers asks "Why study finite-dimensional vector spaces in the abstract if they are all isomorphic to $\mathbb{R}^n$?" and lists some reasons. The most powerful of these is probably the following:

There are many important examples throughout mathematics of infinite-dimensional vector spaces. If one has understood finite-dimensional spaces in a coordinate-free way, then the relevant part of the theory carries over easily. If one has not, then it doesn't.

I mean sure, but what else? Does anyone know examples of specific vector spaces?


For any integer $k$, the set $M_k$ of complex-differentiable functions $f$ defined on the upper half-plane $\{x+iy: \, y > 0\}$ that satisfy the equations $$f(z+1) = f(z), \; \; f(-1/z) = z^k f(z)$$ and for which the limit $\lim_{y \rightarrow \infty} f(iy)$ exists is a vector space over $\mathbb{C}$.

Two specific examples of such functions are $$E_4(z) = 1 + 240 \sum_{n=1}^{\infty} \sigma_3(n) e^{2\pi i n z} \in M_4$$ and $$E_8(z) = 1 + 480 \sum_{n=1}^{\infty} \sigma_7(n) e^{2\pi i nz} \in M_8.$$ Here, $\sigma_r(n)$ denotes the divisor power sum $\sum_{d \mid n} d^r$.

Assuming that $E_4 \in M_4$, it is rather easy to show that $E_4^2 \in M_8$.
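Spelling this out: since $E_4(z+1) = E_4(z)$ and $E_4(-1/z) = z^4 E_4(z)$, we get $$\bigl(E_4^2\bigr)(z+1) = E_4(z)^2, \qquad \bigl(E_4^2\bigr)(-1/z) = \bigl(z^4 E_4(z)\bigr)^2 = z^8\, E_4(z)^2,$$ and $\lim_{y \rightarrow \infty} E_4(iy)^2 = 1$, so $E_4^2$ satisfies the three defining conditions of $M_8$.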

It can be proved that $M_8$ is one-dimensional, so $E_4^2$ is a multiple of $E_8$. Comparing constant coefficients tells you that they must be equal, and comparing the others gives you the formula $\sigma_7(n) = \sigma_3(n) + 120 \sum_{m=1}^{n-1} \sigma_3(m) \sigma_3(n-m).$
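Writing $q = e^{2\pi i z}$ and squaring the series for $E_4$ shows where the constant $120$ comes from: $$E_4(z)^2 = 1 + 480 \sum_{n=1}^{\infty} \sigma_3(n)\, q^n + 240^2 \sum_{n=1}^{\infty} \left( \sum_{m=1}^{n-1} \sigma_3(m)\, \sigma_3(n-m) \right) q^n.$$ Equating the coefficient of $q^n$ with $480\,\sigma_7(n)$ from $E_8$ and dividing by $480$ gives the formula, since $240^2 / 480 = 120$.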

For example $$\sigma_7(2) = 1 + 2^7 = 1 + 2^3 + 120$$ and $$\sigma_7(3) = 1 + 3^7 = 1 + 3^3 + 120(1+2^3 + 1 + 2^3).$$
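A quick Python check (a throwaway script, not part of the argument above) confirms the identity for small $n$:

```python
def sigma(k, n):
    """Divisor power sum: sum of d**k over the divisors d of n."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

for n in range(1, 20):
    lhs = sigma(7, n)
    rhs = sigma(3, n) + 120 * sum(sigma(3, m) * sigma(3, n - m) for m in range(1, n))
    assert lhs == rhs, (n, lhs, rhs)
    print(f"n={n:2d}: sigma_7(n) = {lhs}")
```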

A lot of vector spaces like this show up in number theory. They are typically finite-dimensional but working out a basis is pretty hard (certainly harder than showing that they are finite-dimensional).


If you decided to call "vector spaces" only those of the form $\mathbb{R}^n$, you would find yourself in the position that subspaces are no longer vector spaces. For example, the plane $\{(x,y,z) \in \mathbb{R}^3 : x+y+z=0\}$ is a two-dimensional subspace of $\mathbb{R}^3$, but it is not literally $\mathbb{R}^2$.


Consider an analogous question:

Why consider finite sets in the abstract if they're all isomorphic to $\{1,\ldots,n\}$ for some $n$?

  • Because there could be names for the elements that are more natural for a given situation than $1,\ldots,n$, e.g. we may want to refer to $$\{\text{red},\text{green},\text{blue}\}$$ instead of $$\{1,2,3\}\text{ where we agree that 1 stands for red, 2 for green, 3 for blue}$$

  • Because, in general, the specific names of the elements are not important to the argument

  • Because there are many subsets of a set of the form $\{1,\ldots,n\}$ that are not themselves of that form (for example, $\{2,5\} \subset \{1,\ldots,5\}$)


The cheeky answer is that we would not know that all finite-dimensional vector spaces are isomorphic to $\mathbb{R}^n$, if we did not study finite-dimensional vector spaces in their own right. In mathematics, we generally like to use as few assumptions as possible and to isolate them in the form of axioms.

See https://en.wikipedia.org/wiki/Examples_of_vector_spaces for examples of vector spaces that seem very different from those found in the world of geometry. Function spaces are good examples; the space $C(X, \mathbb{R})$ of all continuous functions from a given topological space $X$ to $\mathbb{R}$ is a natural example.
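As a minimal sketch (plain Python, with ordinary functions standing in for elements of a function space), the vector-space structure on real-valued functions is defined pointwise:

```python
import math

# Pointwise addition and scalar multiplication make real-valued
# functions on a common domain into a vector space.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)

# The zero vector is the constant-zero function.
zero = lambda x: 0.0

# Example: the linear combination 2*sin + cos, evaluated at a point.
h = add(scale(2.0, math.sin), math.cos)
print(h(1.0))  # 2*sin(1) + cos(1)
```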


I would claim we study them precisely because they are isomorphic to $\mathbb{R}^n$. What do I mean?
I mean that, since we are already familiar with $\mathbb{R}^n$, we can use that intuition to understand vector spaces in general; and once we have done that, we can generalize the concepts to other, less intuitive objects (such as infinite-dimensional vector spaces) while carrying our understanding over.