What changes for linear algebra over a finite field?
This question asks which standard results from linear algebra over a field no longer hold when we generalize the algebraic structure of the scalars to be an arbitrary division ring.
My question is similar but considers a less drastic generalization. In elementary courses on linear algebra, the underlying field is virtually always assumed to be either the real or the complex numbers. (Maybe once in a blue moon, the rationals.) As such, all my intuition is for infinite fields. Moreover, I know that fields of characteristic 2 are especially problematic.
Which theorems from linear algebra no longer hold when we go from an infinite field to a finite field of characteristic greater than 2?
Which further theorems break down (nontrivially) when we go from characteristic greater than 2 to characteristic 2?
This is a rough overview of which generalizations can be explored in an early course on linear algebra.
The short answer is that everything that does not use the fact that $\Bbb R$ is ordered, that $\Bbb C$ has a norm, or that $\Bbb C=\Bbb R[i]=\overline{\Bbb R}$ carries over verbatim to all fields, and it can, in principle and in practice, be taught directly as "linear algebra" rather than "$\Bbb R$-or-$\Bbb C$ linear algebra". More specifically:
Everything that is genuinely linear (bases, matrix representations of finite-dimensional spaces, duals and biduals, Gaussian elimination, determinants, the Rouché-Capelli theorem) carries over verbatim or with very obvious adjustments.
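To make this concrete: Gaussian elimination only ever uses the four field operations, so the same row-reduction algorithm computes rank over $\Bbb F_p$ exactly as it does over $\Bbb R$. Here is a minimal sketch over $\Bbb F_p$ (the function name is illustrative); the only field-specific ingredient is the inverse of a pivot, obtained with Python's `pow(a, -1, p)`.

```python
def rank_mod_p(rows, p):
    """Row-reduce a matrix with entries in F_p (p prime) and return its rank."""
    A = [[x % p for x in row] for row in rows]
    rank = 0
    for col in range(len(A[0]) if A else 0):
        # find a pivot in this column at or below row `rank`
        pivot = next((r for r in range(rank, len(A)) if A[r][col]), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        inv = pow(A[rank][col], -1, p)  # field inverse; exists since p is prime
        A[rank] = [(x * inv) % p for x in A[rank]]
        # clear the rest of the column
        for r in range(len(A)):
            if r != rank and A[r][col]:
                c = A[r][col]
                A[r] = [(a - c * b) % p for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank
```

For instance, `rank_mod_p([[1, 2], [2, 4]], 5)` returns 1, since the second row is twice the first; the exact same steps would run over $\Bbb Q$ with `inv = 1 / pivot`.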
The results around the Jordan normal form remain unchanged for algebraically closed fields. Phenomena like the "real Jordan normal form", though, rely heavily on the fact that $\dim_{\Bbb R}\Bbb C=2$, and need substantial amendment to generalize to other extensions (which are almost always of infinite degree) in an interesting way.
The "theory of real inner products on finite-dimensional spaces" is generalized by the theory of quadratic forms, and it is interesting even as part of an early course. It studies the symmetric bilinear maps $\phi:k^n\times k^n\to k$. There are generalized notions of orthogonality, of adjoint, of degenerate quadratic forms, and of orthogonal maps (sometimes called isometries). The main differences revolve around the fact that:
$\Bbb R$ is ordered, and so there is a notion of sign and positive definiteness that can be used to control/distinguish a lot of things. For a general field, the only thing that can be controlled is the presence of vectors $v$ such that $\phi(v,v)=0$ (isotropic vectors) and/or such that $\phi(v,w)=0$ for all $w$ (vectors orthogonal to the whole space). This is reflected in the terminology and in the choice of "canonical forms". If you want to quickly gauge the flavour of it, have a look at these results by Witt.
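A quick illustration of the isotropy phenomenon: over $\Bbb R$, the standard form $\phi(v,v)=v_1^2+v_2^2$ vanishes only at $v=0$, but over many finite fields it has nonzero isotropic vectors. The brute-force search below (illustrative only, not a general algorithm) finds them over $\Bbb F_p$.

```python
def isotropic_vectors(p):
    """Nonzero v in F_p^2 with v1^2 + v2^2 = 0 (mod p)."""
    return [(a, b) for a in range(p) for b in range(p)
            if (a, b) != (0, 0) and (a * a + b * b) % p == 0]
```

Over $\Bbb F_5$ the vector $(1,2)$ is isotropic, since $1+4\equiv 0 \pmod 5$; over $\Bbb F_3$ there are none, because $-1$ is not a square mod $3$. So "positive definite" is not a meaningful dividing line here, while isotropy is.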
Fields of characteristic $2$, and $\Bbb F_2$ especially, need (if anything) a separate treatment. The issue is that, in fields where $1+1\ne0$, there is a bijective correspondence between symmetric bilinear maps and homogeneous polynomial functions of degree $2$ - i.e., maps $q:k^n\to k$ that can be written as $q(v)=\sum_{i,j}q_{ij}v_iv_j$ for some constants $q_{ij}$. This correspondence is established by setting $Q_{\phi}(v)=\phi(v,v)$ and $\Phi_q(v,w)=\frac{q(v+w)-q(v)-q(w)}{2}$. It is straightforward to verify that $\Phi_{Q_\phi}=\phi$ and $Q_{\Phi_q}=q$. You can't divide by $2$ when $1+1=0$, and it turns out that the map $\phi\mapsto Q_\phi$ is not injective in characteristic $2$.
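The failure of injectivity is easy to exhibit concretely over $\Bbb F_2$: the nonzero symmetric form $\phi(v,w)=v_1w_2+v_2w_1$ satisfies $Q_\phi(v)=\phi(v,v)=2v_1v_2=0$, so it has the same associated quadratic form as the zero bilinear map. A few lines of Python verify this by exhausting $\Bbb F_2^2$:

```python
def phi(v, w):
    # the symmetric bilinear form phi(v, w) = v1*w2 + v2*w1 over F_2
    return (v[0] * w[1] + v[1] * w[0]) % 2

F2_sq = [(a, b) for a in range(2) for b in range(2)]  # all of F_2^2

# phi is not the zero form...
assert any(phi(v, w) != 0 for v in F2_sq for w in F2_sq)
# ...yet Q_phi(v) = phi(v, v) vanishes identically, just like for phi = 0
assert all(phi(v, v) == 0 for v in F2_sq)
```

In odd characteristic this cannot happen: $\phi$ is recovered from $Q_\phi$ by the polarization formula above, which divides by $2$.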
However, you may want to look into an actual textbook for further detail on quadratic forms; "Introduction to Quadratic Forms over Fields" by Lam (or its earlier, more famous version, "The Algebraic Theory of Quadratic Forms") is something you may find in your local library. It doesn't quite cover what happens in characteristic $2$, though.
The problems that arise over arbitrary fields mostly have nothing to do with linear algebra itself, but rather with its applications.
You have to realize that linear algebra arose as a conglomerate of many different concepts and applications: solving linear equations, linear transformations between vector spaces, general matrix theory, matrix groups and rings, geometric problems, engineering applications.
All of the algebra essentially depends only on the fact that you are working over a field. But over an arbitrary field you often don't have a notion of distance, angles, slopes, etc. So:
- anything that involves one quantity being greater than or less than another can create a problem (as mentioned in another answer, inner products can give problems because requiring $x \cdot x \geq 0$ for all $x$ can become meaningless)
- anything involving length requires a bit of consideration or redefinition (what does it mean to "normalize" a vector if you don't have a way to measure its length? How do you measure the distance between a vector and a subspace for a least-squares problem?)
Many things can be redefined, however.
- Distance metrics such as the Hamming distance can be imposed (as well as others).
- Sometimes "normalizing" a vector can just mean scaling so the first (or last) nonzero entry is a 1.
- Orthogonality can be defined in terms of an arbitrary bilinear or sesquilinear form $B: V\times V \to \mathbb{F}$ (usually we require that $B$ is reflexive, that is that $B(x,y) = 0$ if and only if $B(y,x) = 0$ so that orthogonality is a symmetric relation).
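The first two redefinitions above can be sketched in a few lines of Python (function names are illustrative): the Hamming distance simply counts differing coordinates, and "normalizing" over $\Bbb F_p$ means scaling so the first nonzero entry is $1$.

```python
def hamming(u, v):
    """Hamming distance: the number of coordinates where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def normalize(v, p):
    """Scale nonzero v in F_p^n (p prime) so its first nonzero entry is 1."""
    lead = next(x for x in v if x % p)      # first nonzero entry
    inv = pow(lead, -1, p)                  # its inverse in F_p
    return tuple((x * inv) % p for x in v)
```

For example, `normalize((0, 2, 3), 5)` gives `(0, 1, 4)`, since $2^{-1}=3$ in $\Bbb F_5$. This normalization picks a canonical representative of each line through the origin, which is exactly what one wants when enumerating points of a projective space over $\Bbb F_p$.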
Making these redefinitions requires verifying and reproving that properties analogous to those in the real and complex case still hold. Often they do, but with some subtle differences (e.g. you can often have nonzero vectors that are orthogonal to themselves).