Degrees of freedom for a matrix

What does it mean for a matrix to have degrees of freedom? How do the degrees of freedom relate to constraints on what those values could be in the context of an optimization problem? I'm specifically confused about the last paragraph in this screenshot, but a more general explanation would be much appreciated.



There are several different ways to think about degrees of freedom of a matrix.

Consider an $m\times n$ matrix. This matrix has $mn$ entries, and each entry can be chosen independently of the others, so the matrix has $mn$ degrees of freedom. For example, a $2\times 3$ matrix has $6$ free entries.

What if we had a square $m\times m$ matrix that we knew was upper triangular? Well then, we know that every entry below the diagonal is $0$. There are only $m + (m-1) + \cdots + 2 + 1 = \frac{m(m+1)}{2}$ entries free to vary, and so that's the number of degrees of freedom of the matrix.
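As a quick sanity check, here is a small Python sketch (the helper name `upper_triangular_dof` is mine, just for illustration) that counts the entries on or above the diagonal and compares the count with the closed form $\frac{m(m+1)}{2}$:

```python
# Count the entries of an m x m matrix that an upper-triangular
# constraint leaves free: those on or above the diagonal.
def upper_triangular_dof(m: int) -> int:
    return sum(1 for i in range(m) for j in range(m) if j >= i)

# The count matches the closed form m(m+1)/2.
for m in range(1, 6):
    assert upper_triangular_dof(m) == m * (m + 1) // 2
    print(m, upper_triangular_dof(m))
```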

What if we had a $2\times 2$ matrix that we knew was a rotation matrix? That puts huge constraints on the possible values in the matrix. Indeed, once one of the entries is chosen, the remaining entries are determined up to a sign, and fixing the rotation angle determines all four entries at once. There is only one degree of freedom in this matrix. This is easy to see geometrically: a rotation matrix on $\mathbb{R}^2$ can only rotate by an angle, and that angle is its single degree of freedom.
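To make the single parameter explicit, here is a minimal Python sketch (the function name is mine) that builds the entire matrix from one angle and then reads that angle back off the entries:

```python
import math

# Every 2x2 rotation matrix has the form [[cos t, -sin t], [sin t, cos t]],
# so a single angle determines all four entries.
def rotation_matrix(theta: float):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s,  c]]

R = rotation_matrix(math.radians(30))  # rotate by 30 degrees
# The one degree of freedom can be recovered from the matrix.
theta = math.atan2(R[1][0], R[0][0])
print(R, math.degrees(theta))
```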

What if we had "equivalence classes"? What if we knew that all scalings of any matrix were equivalent. How many degrees of freedom do we have left? For any matrix, when the $(1,1)$ element is non-zero, we can divide all elements of the matrix by the first element to make it $1$. So if we had two matrices $A$ and $B=2 A$, when we scaled these matrices so that their first elements were $1$, we'd see that they were equivalent. And thus, we've eliminated a degree of freedom. This is the case with homographies. So, for a $3\times 3$ homography matrix, there are only 8 degrees of freedom. These degrees of freedom can also be interpreted geometrically.