Solution 1:

The rank of a matrix is probably the most important concept you learn in Matrix Algebra. There are two ways to look at the rank of a matrix: one from a theoretical setting and the other from an applied setting.

From a theoretical setting, if we say that a linear operator has rank $p$, it means that the range of the linear operator is a $p$-dimensional space. From a matrix algebra point of view, the column rank denotes the number of independent columns of a matrix, while the row rank denotes the number of independent rows. An interesting, and I think non-obvious (though the proof is not hard), fact is that the row rank is the same as the column rank. When we say a matrix $A \in \mathbb{R}^{n \times n}$ has rank $p$, what it means is that if we take all vectors $x \in \mathbb{R}^{n \times 1}$, then $Ax$ spans a $p$-dimensional subspace. Let us see this in a 2D setting. For instance, if
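
If you want to see this concretely, here is a minimal NumPy sketch (the matrix is just an example I cooked up): `np.linalg.matrix_rank` computes the rank, and applying it to the transpose as well confirms that the row rank equals the column rank.

```python
import numpy as np

# An example 3x4 matrix (my own choice): the third row is the sum of
# the first two, so only two rows (and two columns) are independent.
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 3.],
              [1., 1., 3., 4.]])

print(np.linalg.matrix_rank(A))    # 2  (column rank)
print(np.linalg.matrix_rank(A.T))  # 2  (row rank equals column rank)
```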

$A = \left( \begin{array}{cc} 1 & 2 \\ 2 & 4 \end{array} \right) \in \mathbb{R}^{2 \times 2}$ and let $x = \left( \begin{array}{c} x_1 \\ x_2 \end{array} \right) \in \mathbb{R}^{2 \times 1}$, then $\left( \begin{array}{c} y_1 \\ y_2 \end{array} \right) = y = Ax = \left( \begin{array}{c} x_1 + 2x_2 \\ 2x_1 + 4x_2 \end{array} \right)$.

The rank of the matrix $A$ is $1$, and we find that $y_2 = 2y_1$, which is nothing but a line passing through the origin in the plane.
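
A quick numerical check of this, again with NumPy (the random test vectors are arbitrary):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])
print(np.linalg.matrix_rank(A))  # 1

# Every image vector y = Ax lands on the line y2 = 2*y1.
rng = np.random.default_rng(0)
for x in rng.standard_normal((5, 2)):
    y = A @ x
    assert np.isclose(y[1], 2 * y[0])
```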

What has happened is that the points $(x_1,x_2)$ in the $x_1 - x_2$ plane have all been mapped onto the line $y_2 = 2y_1$. Looking closely, the points in the $x_1 - x_2$ plane along the line $x_1 + 2x_2 = c$, for a constant $c$, have all been mapped onto the single point $(c,2c)$ in the $y_1 - y_2$ plane. So the single point $(c,2c)$ in the $y_1 - y_2$ plane represents the straight line $x_1 + 2x_2 = c$ in the $x_1 - x_2$ plane.

This is the reason why you cannot solve a linear system when it is rank deficient. The rank-deficient matrix $A$ maps $x$ to $y$, and this transformation is neither onto (points in the $y_1 - y_2$ plane not on the line $y_2 = 2y_1$, e.g. $(2,3)$, are never reached, which results in no solutions) nor one-to-one (every point $(c,2c)$ on the line $y_2 = 2y_1$ corresponds to the entire line $x_1 + 2x_2 = c$ in the $x_1 - x_2$ plane, which results in infinitely many solutions).

An observation you can make here is that the product of the slopes of the lines $x_1 + 2x_2 = c$ and $y_2 = 2y_1$ is $-1$, i.e. the two lines are perpendicular. This is no accident: the lines $x_1 + 2x_2 = c$ are translates of the null space of $A$, the image line is the column space of $A$, and for a symmetric matrix like this one the column space equals the row space, which is always orthogonal to the null space. The analogous statement holds in higher dimensions as well.
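
Both observations are easy to verify numerically; in the sketch below, the value $c = 3$ and the parametrization of the line are arbitrary choices of mine.

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])
c = 3.0  # an arbitrary constant

# Every point on the line x1 + 2*x2 = c is sent to the same point (c, 2c).
for t in np.linspace(-2.0, 2.0, 5):
    x = np.array([c - 2.0 * t, t])          # satisfies x1 + 2*x2 = c
    assert np.allclose(A @ x, [c, 2.0 * c])

# The input line has direction (-2, 1) (slope -1/2); the image line has
# direction (1, 2) (slope 2). Their dot product is zero: perpendicular.
print(np.dot([-2.0, 1.0], [1.0, 2.0]))  # 0.0
```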

From an applied setting, the rank of a matrix denotes the information content of the matrix. The lower the rank, the lower the "information content". For instance, a rank $1$ matrix can be written as the product of a column vector times a row vector, i.e. if $u$ and $v$ are column vectors, the matrix $uv^T$ is a rank-one matrix. So, for an $n \times n$ matrix, all we need to represent it is $2n-1$ numbers.

In general, if we know that a matrix $A \in \mathbb{R}^{m \times n}$ is of rank $p$, then we can write $A$ as $UV^T$, where $U \in \mathbb{R}^{m \times p}$ and $V \in \mathbb{R}^{n \times p}$ are both of rank $p$. So if we know that a matrix $A$ is of rank $p$, all we need is $(m+n)p - p^2$ numbers instead of all $mn$ entries (the $-p^2$ accounts for the freedom to replace $U$ by $UG$ and $V$ by $VG^{-T}$ for any invertible $G \in \mathbb{R}^{p \times p}$). So if we know that a matrix is of low rank, then we can compress and store it and do efficient matrix operations with it. The above ideas can be extended to any linear operator, and they in fact form the basis for various compression techniques. You might also want to look up the Singular Value Decomposition, which gives us a nice (though expensive) way to make low-rank approximations of a matrix, which allows for compression.
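
To illustrate the compression idea, here is a small NumPy sketch (the sizes and the random factors are my own arbitrary choices): we build an exactly rank-$3$ matrix as $UV^T$, read its rank off the singular values, and reconstruct it from roughly $(m+n)p$ numbers instead of $mn$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 100, 80, 3

# Build an exactly rank-3 matrix as U V^T from random factors.
U = rng.standard_normal((m, p))
V = rng.standard_normal((n, p))
A = U @ V.T

# The singular values reveal the numerical rank ...
u, s, vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))
print(k)  # 3

# ... and the truncated SVD stores A in about (m + n) * k numbers
# instead of m * n, with no loss for an exactly rank-k matrix.
A_k = (u[:, :k] * s[:k]) @ vt[:k]
print(np.allclose(A, A_k))  # True
```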

From the point of view of solving a linear system, when the square matrix is rank deficient, it means that we do not have complete information about the system: depending on the right-hand side, the system has either no solution or infinitely many, ergo we cannot solve the system uniquely.
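
Both failure modes can be seen on the $2 \times 2$ example from above; in this NumPy sketch, the right-hand sides $(2,3)$ and $(1,2)$ are the ones discussed earlier.

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])  # rank 1, hence rank deficient

# (2, 3) is not on the line y2 = 2*y1, so Ax = b has no solution.
try:
    np.linalg.solve(A, np.array([2., 3.]))
except np.linalg.LinAlgError as err:
    print(err)  # Singular matrix

# (1, 2) is on the line, so Ax = b has infinitely many solutions;
# least squares just picks one of them.
x, *_ = np.linalg.lstsq(A, np.array([1., 2.]), rcond=None)
print(np.allclose(A @ x, [1., 2.]))  # True
```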

Solution 2:

The rank of a matrix is of major importance. It is closely connected to the nullity of the matrix (which is the dimension of the solution space of the equation $A\mathbf{x}=\mathbf{0}$), via the Dimension Theorem:

Dimension Theorem. Let $A$ be an $m\times n$ matrix. Then $\mathrm{rank}(A)+\mathrm{nullity}(A) = n$.

Even if all you know about matrices is that they can be used to solve systems of linear equations, this tells you that the rank is very important, because it tells you whether $A\mathbf{x}=\mathbf{0}$ has a unique solution (the trivial one) or multiple solutions: the nullity is positive exactly when the rank is less than $n$.
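
Here is a small numerical illustration of the Dimension Theorem (the matrix is an arbitrary example of mine, and I use the SVD to get a null space basis): the rows of $V^T$ beyond the rank span the null space, so rank plus nullity comes out to $n$.

```python
import numpy as np

A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 3.],
              [1., 1., 3., 4.]])  # m = 3, n = 4

u, s, vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10 * s[0]))

# Rows of vt beyond the rank span the null space of A.
null_basis = vt[rank:]
nullity = null_basis.shape[0]

print(rank, nullity, rank + nullity)     # 2 2 4, i.e. rank + nullity = n
print(np.allclose(A @ null_basis.T, 0))  # True
```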

When you think of matrices as linear transformations (an $m\times n$ matrix with coefficients in a field $\mathbf{F}$ corresponds to a linear transformation from an $n$-dimensional vector space over $\mathbf{F}$ with a given basis to an $m$-dimensional vector space with a given basis), the rank of the matrix is the dimension of the image of that linear transformation.

The simplest way of computing the Jordan Canonical Form of a matrix (an important way of representing a matrix) is to use the ranks of certain matrices associated to $A$; the same is true for the Rational Canonical Form.
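
To sketch how this works in the Jordan case (with a hand-picked example matrix): the number of Jordan blocks of size at least $k$ for an eigenvalue $\lambda$ is $\mathrm{rank}\big((A-\lambda I)^{k-1}\big) - \mathrm{rank}\big((A-\lambda I)^k\big)$, so the ranks of the powers of $A - \lambda I$ pin down the block structure.

```python
import numpy as np

# A hand-picked matrix with eigenvalue 3 and Jordan blocks of sizes 2 and 1.
A = np.array([[3., 1., 0.],
              [0., 3., 0.],
              [0., 0., 3.]])
lam, n = 3.0, 3

# ranks[k] = rank((A - lam*I)^k), with ranks[0] = rank(I) = n.
N = A - lam * np.eye(n)
ranks = [n]
P = np.eye(n)
for _ in range(n):
    P = P @ N
    ranks.append(np.linalg.matrix_rank(P))

# Number of Jordan blocks of size >= k for eigenvalue lam.
for k in range(1, n + 1):
    print(f"blocks of size >= {k}:", ranks[k - 1] - ranks[k])
# blocks of size >= 1: 2
# blocks of size >= 2: 1
# blocks of size >= 3: 0
```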

Really, the rank just shows up all over the place; it is usually relatively easy to compute, and it has a lot of applications and important properties. These will likely not be completely apparent until you start seeing the myriad applications of matrices to things like vector calculus, linear algebra, and the like, but trust me, they're there.