Linear Algebra Versus Functional Analysis

As mentioned in Sheldon Axler's answer in this post, we usually restrict linear algebra to finite-dimensional linear spaces and study the infinite-dimensional ones in functional analysis.

I am wondering which parts of the theory of linear algebra restrict it to finite dimensions. To clarify, here is the main question.

Question.
What are the main theorems in linear algebra that are valid only for finite-dimensional linear spaces and that propagate through the rest of the theory, i.e., are used to prove the subsequent theorems?

Please note that I want to know the main theorems that are valid only for finite dimensions, not all of them. By main, I mean a minimal set of theorems of this kind from which all the other such theorems can be deduced.


In finite-dimensional spaces, the main theorem is the one that leads to the definition of dimension itself: that any two bases have the same number of vectors. All the others (e.g., reducing a quadratic form to a sum of squares) rest on this one.

In infinite-dimensional spaces, (1) the linearity of an operator generally does not imply continuity (boundedness); for normed spaces, (2) "closed and bounded" does not imply "compact"; and (3) the canonical embedding of a vector space into its double dual need not be an isomorphism.
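
A standard illustration of (1): on the space of polynomials on $[0,1]$ with the sup norm, the differentiation operator $Dp = p'$ is linear but unbounded, since $\|x^n\|_\infty = 1$ while $\|Dx^n\|_\infty = n$.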

Furthermore, in infinite-dimensional vector spaces there is no natural definition of a volume form.
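
One way to see this: a volume form on an $n$-dimensional space $V$ is a nonzero alternating $n$-linear form, i.e., a nonzero element of the one-dimensional space $\Lambda^n V^*$; when $\dim V$ is infinite there is no "top" exterior power to play that role.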

That's why Halmos's Finite-Dimensional Vector Spaces is probably the best book on the subject: he was a functional analyst and taught finite-dimensional while thinking infinite-dimensional.


To add to avs's answer, in finite dimensions you have the result that a linear operator $V\to V$ is injective iff surjective. This fails in infinite dimensions.
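
A standard pair of counterexamples, on the space of sequences $(x_1, x_2, \dots)$: the right shift $S(x_1, x_2, \dots) = (0, x_1, x_2, \dots)$ is injective but not surjective, while the left shift $T(x_1, x_2, \dots) = (x_2, x_3, \dots)$ is surjective but not injective.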


Regarding important results of linear algebra that need the space to be finite-dimensional, besides the one already mentioned,

An endomorphism is injective if and only if it is surjective.

I would nominate:

For every endomorphism $f: V \to V$ there is a nonzero polynomial $p$ such that $p(f) = 0$.

While not all presentations focus on this, the fact is not hard to prove yet pretty powerful. (For example, over an algebraically closed field it immediately implies that every endomorphism has an eigenvalue.)
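
To spell out that implication: over an algebraically closed field, factor $p(t) = c(t-\lambda_1)\cdots(t-\lambda_m)$ with $c \neq 0$. Since $p(f) = c(f-\lambda_1\,\mathrm{id})\cdots(f-\lambda_m\,\mathrm{id}) = 0$ is not injective, at least one factor $f-\lambda_i\,\mathrm{id}$ fails to be injective, so $\lambda_i$ is an eigenvalue of $f$. For contrast, on the infinite-dimensional space $K[x]$ the differentiation operator $D$ satisfies no nonzero polynomial: applying $a_0\,\mathrm{id} + a_1 D + \cdots + a_m D^m$ to $x^m$ gives $a_0 x^m + a_1 m x^{m-1} + \cdots + a_m m!$, which vanishes only if all $a_i = 0$.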

Another, but that's almost cheating:

Every ascending chain of subspaces becomes stationary.

That is, finite-dimensional vector spaces are exactly the Noetherian modules over fields.
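
In an infinite-dimensional space with a linearly independent sequence $e_1, e_2, e_3, \dots$ this fails: the chain $\operatorname{span}(e_1) \subsetneq \operatorname{span}(e_1, e_2) \subsetneq \operatorname{span}(e_1, e_2, e_3) \subsetneq \cdots$ never becomes stationary.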

In addition, the fact that one can represent linear maps conveniently via matrices and that one has the determinant function (also already mentioned) helps, too.

The former especially is more of a practical consideration, but I do think it is part of the reason why many courses restrict themselves entirely to finite-dimensional spaces.


Working in finite-dimensional linear spaces, and knowing that dimension is independent of the basis, leads to many interesting properties that do not hold for infinite-dimensional spaces.

For example, every square matrix $A$ must have a minimal polynomial $m$ for which $m(A)=0$. This follows because the linear space of $n\times n$ matrices has dimension $n^2$, which means that $\{ I,A,A^2,\cdots, A^{n^2} \}$ must be a linearly dependent set of matrices; so there is a unique monic polynomial of lowest degree for which $m(A)=0$. If $m(\lambda)=\lambda^k+a_{k-1}\lambda^{k-1}+\cdots +a_{1}\lambda+a_0$, then it can be seen that $A$ is invertible iff $m(0)=a_0\ne 0$. Indeed, if $a_0\ne 0$, $$ I=-\left[\frac{1}{a_0}(A^{k-1}+a_{k-1}A^{k-2}+\cdots+a_1 I)\right]A \\ = -A \left[\frac{1}{a_0}(A^{k-1}+a_{k-1}A^{k-2}+\cdots+a_1 I)\right]. $$ So $A$ has a left inverse iff it has a right inverse, and, in that case, the left and right inverses are the same polynomial in $A$. That is most definitely not true in infinite-dimensional spaces.

More generally, $m(\lambda) \ne 0$ iff $A-\lambda I$ is invertible, and $m(\lambda)=0$ iff $A-\lambda I$ has a non-trivial kernel. So $A-\lambda I$ is non-invertible iff it has a non-trivial kernel, which consists of the eigenvectors of $A$ with eigenvalue $\lambda$ (together with $0$). The rank and the nullity of any $n\times n$ matrix are also nicely related (rank plus nullity equals $n$), which again does not happen in infinite-dimensional spaces, even if the kernel and the complement of the range are finite-dimensional.
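
As a small illustration of this argument, here is a sketch in SymPy (the helper `minimal_polynomial_of` is written just for this illustration, not a library routine): it finds the first linear dependence among the powers of $A$ and then recovers $A^{-1}$ as a polynomial in $A$.

```python
import sympy as sp

def minimal_polynomial_of(A):
    """Return the coefficients a_0, ..., a_{k-1}, 1 of the monic polynomial
    of lowest degree with m(A) = 0, found as the first linear dependence
    among I, A, A^2, ... (flattened into column vectors)."""
    n = A.shape[0]
    powers = [sp.eye(n)]
    for k in range(1, n * n + 2):       # degree n^2 always suffices, as in the text
        powers.append(powers[-1] * A)
        M = sp.Matrix.hstack(*[P.reshape(n * n, 1) for P in powers])
        null = M.nullspace()
        if null:                        # first dependence appears at degree k
            c = null[0] / null[0][k]    # normalize so the leading coefficient is 1
            return list(c)

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
coeffs = minimal_polynomial_of(A)       # here: [-12, 16, -7, 1], i.e. (x-2)^2 (x-3)
a0, k = coeffs[0], len(coeffs) - 1

# Since a_0 != 0, the inverse is the polynomial in A from the displayed identity:
A_inv = sp.zeros(3, 3)
for j in range(1, k + 1):
    A_inv += coeffs[j] * A**(j - 1)
A_inv = -A_inv / a0
assert A_inv == A.inv()
```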

Then, if you're working over $\mathbb{C}$, the minimal polynomial factors as $m(\lambda)=(\lambda-\lambda_1)^{r_1}(\lambda-\lambda_2)^{r_2}\cdots(\lambda-\lambda_k)^{r_k}$ with distinct $\lambda_1,\dots,\lambda_k$, so that $$ (A-\lambda_1 I)^{r_1}(A-\lambda_2 I)^{r_2}\cdots(A-\lambda_k I)^{r_k}=0. $$ Such a factoring leads to the Jordan Canonical Form, which is also something you don't generally get in infinite-dimensional spaces.

And $A$ can be diagonalized iff the minimal polynomial has no repeated factors, which basically comes down to a trick with Lagrange polynomials: $p_j$ is the unique polynomial of degree $k-1$ with $p_j(\lambda_i)=\delta_{ij}$. Since both sides below are polynomials of degree at most $k-1$ that agree at the $k$ points $\lambda_i$, $$ 1 \equiv \sum_{j=1}^{k}p_j(\lambda) \implies I = \sum_{j=1}^{k}p_j(A), $$ and, when the minimal polynomial has no repeated factors, $(A-\lambda_j I)p_j(A)=0$ because $(\lambda-\lambda_j)p_j(\lambda)$ is a scalar multiple of $m(\lambda)$. That's how you get a full basis of eigenvectors for $A$ in the no-repeated-factors case: the matrices $p_j(A)$ are projections onto the eigenspaces with eigenvalues $\lambda_j$, and every vector can be written in terms of the ranges of these projections, which consist of eigenvectors. Normal (and self-adjoint) matrices $N$ are special cases where the minimal polynomial has no repeated factors, because $\mathcal{N}((N-\lambda I)^2)=\mathcal{N}(N-\lambda I)$ for all $\lambda$. This algebraic formalism is not generally available for infinite-dimensional spaces.
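
To make the trick concrete, here is a small SymPy sketch under the stated assumptions (a matrix whose minimal polynomial has distinct roots; the eigenvalues are supplied by hand rather than computed): it builds the Lagrange polynomials, evaluates them at $A$, and checks that the results are projections that sum to the identity.

```python
import sympy as sp

# A diagonalizable example: the minimal polynomial (x-1)(x-2)(x-3) has no repeated factors.
A = sp.Matrix([[1, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
eigs = [1, 2, 3]                  # distinct eigenvalues, supplied by hand
x = sp.symbols('x')
n = A.shape[0]

def poly_at_matrix(expr, M):
    """Evaluate a scalar polynomial in x at a square matrix, by Horner's rule."""
    result = sp.zeros(*M.shape)
    for c in sp.Poly(expr, x).all_coeffs():   # highest-degree coefficient first
        result = result * M + c * sp.eye(M.shape[0])
    return result

projections = []
for lam_j in eigs:
    # Lagrange polynomial p_j with p_j(lam_i) = delta_{ij}
    p_j = sp.Integer(1)
    for lam_i in eigs:
        if lam_i != lam_j:
            p_j *= (x - lam_i) / (lam_j - lam_i)
    projections.append(poly_at_matrix(p_j, A))

# The p_j(A) are idempotent, sum to I, and are killed by (A - lam_j I):
assert sum(projections, sp.zeros(n, n)) == sp.eye(n)
assert all(P * P == P for P in projections)
assert all((A - lam * sp.eye(n)) * P == sp.zeros(n, n)
           for lam, P in zip(eigs, projections))
```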

Though a determinant is not essential for finite-dimensional analysis, it is nice, and there is no determinant for the general infinite-dimensional space.