Is basis change ever useful in practical linear algebra?
Changing basis allows you to convert a matrix from a complicated form to a simple form. It is often possible to represent a matrix in a basis where the only nonzero elements are on the diagonal, which is exceptionally simple; these diagonal elements are the eigenvalues of the matrix. This is especially helpful in solving linear systems of differential equations. Often in physics, engineering, logistics, and probably lots of other places, you have a system of differential equations that all depend on each other. To solve the system directly, you would have to solve all of the equations at once, which is hard. We can use a matrix to describe such a system. By changing basis, you may be able to make that matrix diagonal, which effectively separates the differential equations from each other, so you can solve them one at a time. This is comparatively easy (there is a small numerical sketch of this decoupling after the list below). Situations where I know this comes up are:
- Quantum mechanics, solving the Schrödinger equation to describe the state of matter at the quantum level.
- Electrical engineering, understanding the time-evolution of an electrical circuit.
- Mechanical engineering, understanding the motion of a linear mechanical system, such as a coupled spring-mass system.
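Here is a minimal numerical sketch of the decoupling idea in Python (NumPy/SciPy); the matrix $A$ and the initial condition below are made up purely for illustration, and the code assumes $A$ has a full basis of eigenvectors.

```python
import numpy as np
from scipy.linalg import expm

# A made-up coupled system x'(t) = A x(t); the off-diagonal entries
# are what couple the two equations to each other.
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])   # initial condition x(0)
t = 1.5                     # time at which to evaluate the solution

# Diagonalize: columns of P are eigenvectors, so A = P D P^{-1}.
eigvals, P = np.linalg.eig(A)

# In the eigenbasis the system decouples into independent scalar equations
# y_i'(t) = lambda_i * y_i(t), each solved by y_i(t) = exp(lambda_i * t) * y_i(0).
y0 = np.linalg.solve(P, x0)      # coordinates of x0 in the eigenbasis
y_t = np.exp(eigvals * t) * y0   # solve each scalar ODE on its own
x_t = P @ y_t                    # change back to the standard basis

# Sanity check against the matrix-exponential solution x(t) = expm(A t) x0.
print(x_t)
print(expm(A * t) @ x0)          # should agree
```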
Edit:
Here is another very important reason for change of basis I just remembered: suppose you have a matrix $A$, and you want to calculate $A^n$ for some large $n$. This takes a while, even for computers. But, if you can find a change of basis matrix $P$ so that $A=P^{-1}DP$ for a diagonal matrix $D$, then $$A^n=P^{-1}DP\,P^{-1}DP\,P^{-1}DP \cdots P^{-1}DP.$$ All of the terms $PP^{-1}$ are the identity, so we get $$A^n=P^{-1}D^nP.$$ Powers of diagonal matrices are really easy: just raise each diagonal element to the $n$th power. So this method lets us find powers of matrices very cheaply. When would we ever want to take large powers of matrices? One common place is in finding numerical solutions to differential equations, partial or ordinary. This comes up all the time, so we are glad that we can change bases!
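A quick sketch of the $A^n=P^{-1}D^nP$ trick in Python (NumPy); the matrix and exponent are made up, and note that NumPy returns eigenvectors as columns, so the factorization comes out as $A=PDP^{-1}$, i.e. the same identity with the roles of $P$ and $P^{-1}$ swapped relative to the notation above.

```python
import numpy as np

# A small made-up symmetric matrix; real symmetric matrices are always
# diagonalizable, so this is a safe example.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = 50

# Diagonalize: A = P D P^{-1}, with eigenvectors as the columns of P.
eigvals, P = np.linalg.eig(A)

# A^n = P D^n P^{-1}, and D^n just raises each eigenvalue to the n-th power.
A_power = P @ np.diag(eigvals ** n) @ np.linalg.inv(P)

# Compare with repeated multiplication, which is what we are avoiding for large n.
print(np.allclose(A_power, np.linalg.matrix_power(A, n)))   # True
```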
Changing basis can make it easier to understand a given linear transformation.
Suppose $T:V \to V$ is a linear transformation. It may seem difficult to understand or to visualize what effect $T$ has when it is applied to a vector $x$. However, suppose we are lucky enough to find a vector $v$ with the special property that $T(v) = \lambda v$ for some scalar $\lambda$. Then, it's easy enough to understand what $T$ does to $v$, at least.
Suppose we are lucky enough to find an entire basis $\{v_1,\ldots,v_n\}$ of these special vectors. So $T(v_i) = \lambda_i v_i$, for some scalars $\lambda_i, i =1,\ldots,n$. Given any vector $x$, we can write $x$ as a linear combination of the vectors $v_i$: \begin{equation} x = c_1 v_1 + \cdots + c_n v_n. \end{equation} And now it seems easier to think about $T(x)$: \begin{align} T(x) &= c_1 T(v_1) + \cdots + c_n T(v_n) \\ &= c_1 \lambda_1 v_1 + \cdots + c_n \lambda_n v_n. \end{align} That is fairly simple. Each component of $x$ (with respect to our special basis) simply got scaled by a factor $\lambda_i$.
So if we can find a basis of eigenvectors for $T$ (and often we can), then it helps us to understand $T$.
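To make this concrete, here is a small Python (NumPy) sketch of the computation above; the matrix standing in for $T$ and the vector $x$ are arbitrary made-up examples, chosen so that a basis of eigenvectors exists.

```python
import numpy as np

# A made-up diagonalizable matrix standing in for T, and an arbitrary vector x.
T = np.array([[4.0, 1.0],
              [2.0, 3.0]])
x = np.array([3.0, -1.0])

eigvals, V = np.linalg.eig(T)   # columns of V are the eigenvectors v_1, ..., v_n

# Write x = c_1 v_1 + ... + c_n v_n by solving V c = x for the coefficients c_i.
c = np.linalg.solve(V, x)

# In the eigenbasis, applying T just scales each coordinate by its eigenvalue:
# T(x) = c_1 lambda_1 v_1 + ... + c_n lambda_n v_n.
Tx_via_eigenbasis = V @ (eigvals * c)

print(np.allclose(Tx_via_eigenbasis, T @ x))   # True: matches applying T directly
```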
By the way, one great practical example of a change of basis is computing a convolution efficiently using the fast Fourier transform (FFT) algorithm. Any discrete circular convolution operator is diagonalized by a special basis, the discrete Fourier basis. So, to perform a convolution on an image (in image processing), you take the FFT of the image (you change basis to the Fourier basis), then you multiply pointwise by the eigenvalues of the convolution operator, then you take the inverse FFT (change back to the standard basis). This approach is much, much faster than performing the convolution in the space domain.
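As a rough illustration of that last point, here is a Python (NumPy) sketch comparing a direct circular convolution with the FFT route; the signal and kernel are random stand-ins for an image and a filter.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(8)   # a tiny 1-D stand-in for an image
kernel = rng.standard_normal(8)   # convolution kernel of the same length (circular)

# Direct circular convolution in the space domain: O(N^2) operations.
N = len(signal)
direct = np.array([
    sum(signal[j] * kernel[(i - j) % N] for j in range(N))
    for i in range(N)
])

# The same convolution via the FFT: change to the Fourier basis, multiply
# pointwise by the eigenvalues of the convolution operator (the DFT of the
# kernel), then change back. O(N log N) operations.
via_fft = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real

print(np.allclose(direct, via_fft))   # True
```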