Insights into linear algebra from abstract algebra
Solution 1:
The "Rank-Nullity" Theorem from Linear Algebra can be viewed as a corollary of the First Isomorphism Theorem, which may be more intuitive.
Suppose $V$ is finite-dimensional and $T:V\to V$ is a linear transformation. Then by the First Isomorphism Theorem, $V/\ker T\cong T(V)$.
Since $\dim(V/\ker T)=\dim V-\dim\ker T$, this gives $\dim V-\operatorname{Null}(T)=\operatorname{Rank}(T)$, which is the Rank-Nullity Theorem.
This may be more intuitive than the traditional Linear Algebra proof of the Rank-Nullity Theorem (see https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem).
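For a concrete illustration, take the projection $T:\mathbb{R}^3\to\mathbb{R}^3$ defined by $T(x,y,z)=(x,y,0)$. Its kernel is the $z$-axis, so $\operatorname{Null}(T)=1$, while $\mathbb{R}^3/\ker T\cong T(\mathbb{R}^3)$ is the $xy$-plane, so $\operatorname{Rank}(T)=2$; indeed $3-1=2$.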
Solution 2:
There are tons of ways that abstract algebra informs linear algebra; here is just one example. Suppose you have a vector space $V$ over a field $k$ with a linear map $T:V\to V$. Given a polynomial $p(x)$ with coefficients in $k$, you get a linear map $p(T):V\to V$. This makes $V$ a module over the ring $k[x]$ of polynomials with coefficients in $k$: given $p(x)\in k[x]$ and $v\in V$, the scalar multiplication $p(x)\cdot v$ is just $p(T)v$. In particular, multiplication by $x$ corresponds to the linear map $T$.
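To make this concrete, here is a minimal sketch in Python (using NumPy, with a matrix $T$, polynomial $p$, and vector $v$ chosen only for illustration) of the scalar multiplication $p(x)\cdot v = p(T)v$:

```python
import numpy as np

# An arbitrary linear map T : R^2 -> R^2 and a vector v, chosen for illustration.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 1.0])

# p(x) = x^2 + 1, a polynomial with real coefficients.
# The k[x]-module structure on R^2 is  p(x) . v := p(T) v.
p_of_T = T @ T + np.eye(2)
print(p_of_T @ v)   # p(x) . v

# Multiplication by x itself corresponds to applying T.
print(T @ v)
```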
Conversely, given a $k[x]$-module $V$, it is a $k$-vector space by considering multiplication by constant polynomials, and multiplication by $x$ gives a $k$-linear map $T:V\to V$. So any $k[x]$-module $V$ can be thought of as a vector space together with a linear map $V\to V$, and this is inverse to the construction described in the previous paragraph.
So vector spaces $V$ together with a chosen linear map $V\to V$ are essentially the same thing as $k[x]$-modules. This is really powerful because $k[x]$ is a very nice ring: it is a principal ideal domain, and there is a very nice classification of all finitely generated modules over any principal ideal domain. This gives us a classification of all linear maps from a finite-dimensional vector space to itself, up to isomorphism. When you represent linear maps by matrices, "up to isomorphism" ends up meaning "up to conjugation". So this gives a classification of all $n\times n$ matrices over a field $k$, up to conjugation by invertible $n\times n$ matrices, called the rational canonical form. In the case that $k$ is algebraically closed (for instance, $k=\mathbb{C}$), you can go further and get the very powerful Jordan normal form from this classification.
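As a small illustration (using SymPy, with a $2\times 2$ matrix chosen only for this purpose), here is how the Jordan normal form realizes "up to conjugation" computationally:

```python
from sympy import Matrix

# A 2x2 matrix chosen for illustration; it is not diagonalizable.
A = Matrix([[3, 1],
            [-1, 1]])

# jordan_form returns P and J with A = P * J * P**(-1),
# i.e. J is A "up to conjugation" by the invertible matrix P.
P, J = A.jordan_form()
print(J)                       # a single 2x2 Jordan block with eigenvalue 2
print(P * J * P.inv() == A)    # True
```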
Now of course, these canonical forms for matrices can be obtained without all this language of abstract algebra: you can formulate the arguments in this particular case purely in the language of matrices if you really want to. But the general framework provided by abstract algebra provides a lot of context that can make these ideas easier to understand (for instance, you can think of this classification of matrices as being very closely analogous to the classification of finite abelian groups, since that is just the same result applied to the ring $\mathbb{Z}$ instead of $k[x]$). It also provides a framework to generalize these results to more difficult situations. For instance, if you want to consider a vector space together with two linear maps which commute with each other, that is now equivalent to a $k[x,y]$-module. There is not such a nice classification in this case, but the language of rings and modules lets you formulate and think about this question using the same tools as when you had just one linear map.
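For instance, here is a minimal sketch (with two commuting matrices chosen only for illustration) of how a two-variable polynomial $p(x,y)$ acts on such a vector space via $p(x,y)\cdot v := p(S,T)v$:

```python
import numpy as np

# Two commuting maps on R^2 (S = T^2 is a polynomial in T, so S and T commute).
T = np.array([[2.0, 1.0],
              [0.0, 2.0]])
S = T @ T
v = np.array([1.0, 0.0])

# The k[x, y]-module structure:  p(x, y) . v := p(S, T) v.
# Here p(x, y) = x*y + 3*y + 1, so p(S, T) = S T + 3 T + I.
print(np.allclose(S @ T, T @ S))          # True: the action is well defined
print((S @ T + 3 * T + np.eye(2)) @ v)
```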
Solution 3:
A vector space over a field $k$ is a special case of what's known as a module over a ring $R$. The idea is very similar: we want a place where we can add elements together and multiply them by elements from some other structure; only here, that other structure is a ring rather than a field. An example of such a ring is $k[t]$, the ring of polynomials of arbitrary degree with coefficients in a field $k$.
As an example, there's the concept of the Smith Normal Form in Linear Algebra. The idea is that if $A$ is an $m\times n$ matrix, then we can find invertible $m\times m$ and $n\times n$ matrices $S$ and $T$ such that $SAT$ is:
1) Diagonal;
2) The diagonal elements ($a_1,a_2,\dots$) satisfy $a_i\mid a_{i+1}$ for "small enough" $i$ (essentially, some of the trailing $a_i$ might be zero, and we want to ignore those).
Moreover, the diagonal elements are unique up to multiplication by units. Over a field this is rather boring, as all non-zero elements of a field are invertible (which is what it means to be a unit). But the Smith Normal Form exists over many rings (any principal ideal domain), so we can work with integer matrices and compute the Smith Normal Form, and the diagonal elements are unique up to multiplication by $-1$ (the only non-identity unit in $\mathbb Z$). We could even do this for matrices with entries in $k[t]$! (although I'm not sure if it'd be useful).
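For instance, assuming SymPy's smith_normal_form helper (in sympy.matrices.normalforms), here is a small sketch of computing the Smith Normal Form of an integer matrix:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# An integer diagonal matrix that does NOT yet satisfy a_i | a_{i+1}.
A = Matrix([[2, 0],
            [0, 3]])

# Over the PID Z the invariant factors are gcd(2, 3) = 1 and det(A) = 6,
# so the Smith Normal Form should be diag(1, 6), up to units (signs).
print(smith_normal_form(A, domain=ZZ))
```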
This kind of idea tends to be true for plenty of things in Linear Algebra. You get taught a version for vector spaces, but often it's implicitly true over more general rings (as a field is just a special kind of ring).
Solution 4:
The other responders above (below?) have given several excellent examples of how more general algebraic principles can be used to clarify linear algebra, but there's one I'm surprised no one's brought up. Actually, that's not entirely true: several HAVE mentioned it, they just couched it in different terms than the one I have in mind.
Consider a group action on a set:
Def: Let $G$ be a group and $X$ a set. A map $\phi : G \times X \to X$ such that $\phi(e,x)=x$ and $\phi(g,\phi(h,x))=\phi(gh,x)$ for every $x \in X$ and every $g,h \in G$ is called a group action of $G$ on $X$.
Then consider the definition of an R-module over a ring R.
Def: Suppose that $R$ is a ring and $1 \in R$ is its multiplicative identity. A left $R$-module $M$ consists of an Abelian group $(M, +)$ and an operation $\cdot : R \times M \to M$ such that for all $r, s \in R$ and $x, y \in M$, we have:
1) $r\cdot(x+y)=r\cdot x+r\cdot y$
2) $(r+s)\cdot x=r\cdot x+s\cdot x$
3) $(rs)\cdot x=r\cdot(s\cdot x)$
4) $1\cdot x=x$
(A right module is defined similarly.)
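A standard example: any Abelian group $(M,+)$ is automatically a left $\mathbb{Z}$-module, with $n\cdot x = x + \dots + x$ ($n$ times) for $n>0$, $0\cdot x=0$, and $(-n)\cdot x=-(n\cdot x)$; the axioms above then reduce to familiar facts about repeated addition.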
Looking carefully at this definition, we notice that if we write the scalar action as $L_r$, so that $L_r(x) = r \cdot x$, and write $L$ for the map that takes each $r$ to its corresponding map $L_r$, then (1) states that every $L_r$ is a group endomorphism of $(M,+)$.
Also, (2)-(4) assert that the map $L : R \to \operatorname{End}(M)$ given by $r \mapsto L_r$ is a ring homomorphism from $R$ to the endomorphism ring $\operatorname{End}(M)$. But this means every left (right) module is a ring action on an Abelian group!
Therefore, every vector space can be thought of as such a ring action, in which the field of scalars "acts" on the Abelian group of vectors via scalar multiplication.
Another observation worth mentioning is that if $R$ is a field and $G$ is a group, then a group representation of $G$ is a left module over the group ring $R[G]$. Representation theory is a major branch of abstract algebra with enormous utility in many areas of both pure and applied mathematics, where the structure of a group is analyzed via specific group actions on a given vector space.
How's that?
Solution 5:
Jordan normal form is about how a matrix can be almost diagonalized. The proof is technical and derives from the same, more general, theorem (the structure theorem for finitely generated modules over a principal ideal domain) that also yields the Fundamental Theorem of Finitely Generated Abelian Groups.
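To spell the analogy out: applied to the PID $\mathbb{Z}$, the structure theorem decomposes a finitely generated Abelian group into cyclic summands $\mathbb{Z}/p^k\mathbb{Z}$ (plus copies of $\mathbb{Z}$); applied to the PID $k[x]$, with $k$ algebraically closed and $x$ acting on $V$ as the given matrix, it decomposes $V$ into summands of the form $k[x]/(x-\lambda)^m$, and each such summand corresponds to an $m\times m$ Jordan block with eigenvalue $\lambda$.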