If the field of a vector space weren't characteristic zero, then what would change in the theory?

In the book Linear Algebra by Werner Greub, whenever we choose a field for our vector spaces, we always take an arbitrary field $F$ of characteristic zero. To understand the importance of this property, I am wondering: what would we lose if the field weren't of characteristic zero?

I mean, right now I'm in the middle of Chapter 4, and so far we have used the fact that the field has characteristic zero only once, in a single proof. So, in terms of the main theorems and properties, what would we lose if the field weren't of characteristic zero?

Note: I'm asking this particular question to understand the importance and the place of this fact in the subject, so if you have any other way of conveying this, I'm also OK with that.

Note: Since this is a broad question, it is unlikely that one person will cover all the cases, so I will not accept any single answer; that way, you can always post new answers.


The equivalence between symmetric bilinear forms and quadratic forms given by the polarization identity breaks down in characteristic $2$.
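To spell out where the characteristic enters: the polarization identity recovers a symmetric bilinear form $b$ from its quadratic form $q(x)=b(x,x)$ via $$b(x,y)=\tfrac{1}{2}\bigl(q(x+y)-q(x)-q(y)\bigr),$$ which requires dividing by $2$. Over $\Bbb F_2$, for instance, every symmetric bilinear form on $\Bbb F_2^2$ has $b(x,x)=a\,x_1^2+d\,x_2^2$ (the cross term $2c\,x_1x_2$ vanishes), so the quadratic form $q(x_1,x_2)=x_1x_2$ does not arise from any symmetric bilinear form.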


Many arguments using the trace of a matrix are no longer true in general. For example, a matrix $A\in M_n(K)$ over a field of characteristic zero is nilpotent, i.e., satisfies $A^n=0$, if and only if $\operatorname{tr}(A^k)=0$ for all $1\le k\le n$. For fields of prime characteristic $p$ with $p\mid n$, however, this fails. For example, the identity matrix $A=I_n$ then satisfies $\operatorname{tr}(A^k)=0$ for all $1\le k\le n$, but is not nilpotent.
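If it helps to see this concretely, here is a minimal sketch in Python (the helper names are ad hoc; it just reduces integer matrices mod $p$ and checks both conditions for $A=I_3$ over $\Bbb F_3$):

```python
import numpy as np

def traces_of_powers_mod_p(A, p):
    """Return [tr(A^1), ..., tr(A^n)] mod p, where n is the size of A."""
    A = np.array(A, dtype=int) % p
    n = len(A)
    M = np.eye(n, dtype=int)
    traces = []
    for _ in range(n):
        M = (M @ A) % p
        traces.append(int(np.trace(M)) % p)
    return traces

def is_nilpotent_mod_p(A, p):
    """Over a field, an n x n matrix A is nilpotent iff A^n = 0."""
    A = np.array(A, dtype=int) % p
    return not (np.linalg.matrix_power(A, len(A)) % p).any()

p = 3
I = np.eye(3, dtype=int)
print(traces_of_powers_mod_p(I, p))  # [0, 0, 0]: all traces vanish mod 3
print(is_nilpotent_mod_p(I, p))      # False: yet the identity is not nilpotent
```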
The pathology of linear algebra over fields of characteristic $2$ has already been discussed here.


One important difference (which I don't see in any other answer) is that in fields of non-zero characteristic, we can't have a "norm" or "inner product" the way we might over $\Bbb R, \Bbb C,$ or even $\Bbb Q$. In particular, in order to make sense of conditions like $$ \|\alpha x\| = |\alpha| \cdot \|x\| \quad\text{and}\quad \langle x,x \rangle \geq 0, $$ it is important to have a notion of "positive numbers" (i.e. we must have an ordered subfield), which we lack in non-zero characteristic.
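For a concrete illustration: over $\Bbb F_5$, the obvious candidate $\langle x,y\rangle = x_1y_1+x_2y_2$ gives $$\langle (1,2),(1,2)\rangle = 1+4 = 5 = 0,$$ so a nonzero vector pairs to zero with itself, and no rescaling can fix this, because $\Bbb F_5$ admits no ordering at all: in an ordered field $1>0$ forces $1+1+1+1+1>0$, yet this sum is $0$ in $\Bbb F_5$.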


When doing basic linear algebra, there is no real advantage for the theory in assuming a field of characteristic zero. (Nor, I should add, is there any real advantage in assuming commutativity: until doing eigenvalue problems, working over a division ring is perfectly fine. Indeed not assuming commutativity is a very good exercise in mental discipline, keeping scalars to one side and matrices to the other.)

There is a practical advantage that, in examples, one can write down explicit scalars that are obviously unequal; without any assumption on the characteristic, any integer other than $-1$ and $1$ might fail to be nonzero, and beginning students might be surprised, e.g., that $\frac{13}{16}=\frac{9}{14}$ when the characteristic is $19$.
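One can check that congruence quickly (a throwaway Python snippet; `pow(a, -1, p)` for the inverse mod $p$ requires Python 3.8+):

```python
p = 19
# interpret 13/16 and 9/14 as 13 * 16^(-1) and 9 * 14^(-1) in F_19
lhs = 13 * pow(16, -1, p) % p
rhs = 9 * pow(14, -1, p) % p
print(lhs, rhs, lhs == rhs)  # 2 2 True
```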


When dealing with inner products, you need to use inequalities, so you have to work with an ordered field (in general $\Bbb R$). Thus anything that is proved using inner products need not be true over a field of positive characteristic; for example, a symmetric matrix over a finite field is not necessarily diagonalizable. In a field of characteristic $2$, the matrix $$\begin{pmatrix}1 & 1 \\ 1 & 1\end{pmatrix}$$ is nilpotent but not zero, and thus it is not diagonalizable.
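To verify the claim: writing $A$ for that matrix, $$A^2=\begin{pmatrix}2 & 2\\ 2 & 2\end{pmatrix}=\begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}\quad\text{in characteristic }2,$$ and a nonzero nilpotent matrix can never be diagonalized: its only eigenvalue is $0$, so a diagonalization would make it the zero matrix.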

In fact, your previous question is another example (perhaps that's the case you mention in your question): it doesn't hold in characteristic $2$, because in the proof you need to divide by $2$. For example, the bilinear form $$\phi :\Bbb F_2^2\times \Bbb F_2^2\to \Bbb F_2:((x_1,x_2),(y_1,y_2))\mapsto x_1y_1+x_2y_2$$ is skew-symmetric in the sense that $\phi(\vec{x},\vec{y})=\phi(\vec{y},\vec{x})=-\phi(\vec{y},\vec{x})$, but $\phi((1,0),(1,0))\neq 0$.
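Spelled out: since $-1=1$ in $\Bbb F_2$, "symmetric" and "skew-symmetric" are the same condition there, and indeed $$\phi((1,0),(1,0)) = 1\cdot 1 + 0\cdot 0 = 1 \neq 0,$$ so skew-symmetry no longer forces $\phi(\vec{x},\vec{x})=0$; that implication is exactly where the division by $2$ is used.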