What is the motivation for defining matrix similarity?

I'm taking the course Linear Algebra 1, and recently we learned about matrix similarity. What is the motivation for defining it? Or, what are the uses/applications of this definition?

Thanks


Solution 1:

Suppose $A$ is an $n \times n$-matrix (with real entries, say). Then $A$ defines a linear map $A : \mathbb{R}^n \to \mathbb{R}^n$ by $A(x) = Ax$. In fact, you can figure out the entries of $A$ just by applying $A$ to the standard basis vectors $e_1, \dots, e_n$ of $\mathbb{R}^n$: the $j$-th column of $A$ is simply $A(e_j)$.
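For instance, just to make this concrete: with

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad A(e_1) = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \qquad A(e_2) = \begin{pmatrix} 2 \\ 4 \end{pmatrix},$$

so applying $A$ to $e_1$ and $e_2$ hands back the two columns.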

Now sometimes (as in, very very often) we don't start with the matrix but with the linear map, and we could try to study this linear map by associating a matrix to it as above: by simply applying it to the standard basis vectors (what we get out is often called the transformation matrix).

Now we should ask ourselves what would happen if we had used a different basis than the standard one (I should perhaps elaborate on how this works, but let's keep it rough and simple for now). In mathematics we often prefer to make as few unnatural choices as possible. Moreover, in linear algebra, vector spaces often do not come with preferred bases at all. Now, as it turns out, had we chosen a different basis, we would get a different matrix for our map (perhaps not very surprising), but the two different matrices are similar!
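Here is a rough sketch of how this works: if we take the new basis vectors $v_1, \dots, v_n$ as the columns of an invertible matrix $P$, then the matrix $B$ of the same map with respect to the new basis is

$$B = P^{-1}AP,$$

because $P$ translates coordinates in the new basis into standard coordinates, $A$ applies the map there, and $P^{-1}$ translates back. So the matrices obtained from different bases are exactly the similar ones.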

Solution 2:

Matrix similarity is all about change of basis.

An $F$-linear transformation from $F^n$ to $F^n$ has an existence independent of the numbers we fill a square array with. However, to do computations, we pick a basis and then compute the numbers that belong in the square matrix.

But you can pick bases in a lot of different ways. Shouldn't their matrices be related somehow? The answer is yes, of course: if you have a matrix $A$ for a transformation $T$ in one basis, and a matrix $B$ for $T$ in another basis, then $A=XBX^{-1}$ for some invertible matrix $X$.
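A small example: let $T$ be orthogonal projection of $\mathbb{R}^2$ onto the line $y=x$. In the basis $(1,1),(1,-1)$ its matrix is $B=\begin{pmatrix}1&0\\0&0\end{pmatrix}$, while in the standard basis it is $A=\frac12\begin{pmatrix}1&1\\1&1\end{pmatrix}$, and indeed

$$A = XBX^{-1} \quad\text{with}\quad X=\begin{pmatrix}1&1\\1&-1\end{pmatrix}.$$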

What does this buy us? In many cases, if you have a matrix, you can "move to a different basis" to make the current matrix look simpler and easier to do algebra with. This is the spirit of the example GitGud already gave, so I won't repeat it.

This is especially true when computing powers of matrices. It's hard to compute powers of matrices in general, but easy to compute powers of diagonal matrices. Thus if a matrix is diagonalizable (similar to a diagonal matrix), you can compute polynomial expressions in it very easily. Say, if $A=XDX^{-1}$ for a diagonal matrix $D$, then $A^{5000}=XD^{5000}X^{-1}$, and $D^{5000}$ is trivial to compute.
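To see why the conjugation survives taking powers, note that the inner factors cancel:

$$A^2 = (XDX^{-1})(XDX^{-1}) = XD(X^{-1}X)DX^{-1} = XD^2X^{-1},$$

and by induction $A^n = XD^nX^{-1}$. And if $D=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$, then $D^{5000}=\operatorname{diag}(\lambda_1^{5000},\dots,\lambda_n^{5000})$.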

You can see it extends to polynomials too: $A^5+5A^3-2A=X(D^5+5D^3-2D)X^{-1}$.

Solution 3:

It's good to make up exercises for students.

Now seriously, a useful application is the following.

If $A=PJP^{-1}$, then $A^n=PJ^nP^{-1}$ for all $n\in \mathbb N$ (why?).

So if $J^n$ is simple to find, so is $A^n$, and it happens that (at least over $\mathbb{C}$) there is always a matrix $J$ similar to $A$ for which $J^n$ is easy to find: it's called a Jordan matrix.
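For example, the simplest non-diagonal Jordan block already has an explicit power formula:

$$J = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}, \qquad J^n = \begin{pmatrix} \lambda^n & n\lambda^{n-1} \\ 0 & \lambda^n \end{pmatrix},$$

which you can check by induction, or by writing $J=\lambda I+N$ with $N^2=0$ and expanding with the binomial theorem.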

Sometimes it happens that $J$ is diagonal, and that makes matters trivial.

One reason why finding these powers is useful is to compute $e^A\color{grey}{:=\sum \limits_{n=0}^\infty\left(\dfrac 1{n!}A^n\right)}$. This is useful for finding solutions of systems of linear differential equations, which in turn have a googol of applications to the real world.
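For instance, the system $x'(t)=Ax(t)$ with initial value $x(0)=x_0$ is solved by $x(t)=e^{tA}x_0$, and if $A=XDX^{-1}$ with $D=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$, then

$$e^{tA} = Xe^{tD}X^{-1} = X\operatorname{diag}(e^{\lambda_1 t},\dots,e^{\lambda_n t})X^{-1},$$

so the whole problem reduces to finding the eigenvalues and eigenvectors of $A$.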