Is the "determinant" that shows up accidental?

I'll put it more simply. If the determinant is zero, then the linear functions $ax+b$ and $cx+d$ are linearly dependent, and the quotient $\frac{ax+b}{cx+d}$ collapses to a constant function, which is neither increasing nor decreasing. The zero-determinant condition is thereby a natural boundary between increasing and decreasing functions.
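To make that concrete: if $ad-bc = 0$ and $(c,d) \neq (0,0)$, then $(a,b) = \lambda(c,d)$ for some constant $\lambda$, and $$\frac{ax+b}{cx+d} = \frac{\lambda(cx+d)}{cx+d} = \lambda \qquad (x \neq -d/c),$$ so the function is constant wherever it is defined.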


What you are looking at is a Möbius transformation. The relationship between matrices and these functions is given in some detail in the Wikipedia article. Most of this is not anything that I know much about; perhaps another responder will give better details.

What you can find is that the composition of two of these functions corresponds to matrix multiplication, with the matrix of coefficients defined just as you inferred from the determinant.
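Here is a quick numerical check of that correspondence (a sketch in Python; the helper `mobius` is my own, not a standard library function):

```python
import numpy as np

def mobius(m):
    # Mobius transformation x -> (a x + b) / (c x + d) for m = [[a, b], [c, d]]
    (a, b), (c, d) = m
    return lambda x: (a * x + b) / (c * x + d)

M1 = np.array([[1.0, 2.0], [3.0, 4.0]])
M2 = np.array([[5.0, 6.0], [7.0, 8.0]])

f1, f2 = mobius(M1), mobius(M2)
f12 = mobius(M2 @ M1)        # note the order: M2 @ M1 corresponds to f2 after f1

x = 0.37
print(f2(f1(x)), f12(x))     # both print the same value (~0.7397)
```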

These are also related to continued fraction arithmetic, since a continued fraction is just a composition of these functions. A simple continued fraction is a number $a_0+\frac{1}{a_1 + \frac{1}{a_2 + \cdots}}$, and you can see almost directly that each level of the continued fraction is something like $t+\frac{1}{x} = \frac{tx+1}{x}$, where $x$ is "the rest of the continued fraction." Each time we expand a bit more of the continued fraction, we perform exactly this kind of composition. Gosper used this relationship to do term-at-a-time arithmetic on continued fractions; in practice this means representing a continued fraction as a matrix product.

For instance, $1+\sqrt{2} = 2 + \frac{1}{2+\frac{1}{2 + \cdots}}$ so you could represent it as $$\prod^{\infty} \pmatrix{2 & 1 \\ 1 & 0}$$ And to find out what $\frac{3}{5}(1+\sqrt{2})$ is you could then calculate, to arbitrary precision, $$\pmatrix{3 & 0 \\ 0 & 5}\times \prod^{\infty} \pmatrix{2 & 1 \\ 1 & 0}$$
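As a minimal sketch of the idea in Python (truncating the infinite product after finitely many factors; this is not Gosper's full algorithm, which interleaves consuming and emitting terms):

```python
def mul(m, n):
    # 2x2 integer matrix product
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

# Truncate the infinite product for 1 + sqrt(2) after 20 factors.
M = ((1, 0), (0, 1))
for _ in range(20):
    M = mul(M, ((2, 1), (1, 0)))

# The first column of the product holds a convergent p/q.
(p, _), (q, _) = M
print(p / q)   # 2.414213562... ~ 1 + sqrt(2)

# Prepend the homography x -> 3x/5, i.e. the matrix [[3, 0], [0, 5]].
(p, _), (q, _) = mul(((3, 0), (0, 5)), M)
print(p / q)   # 1.448528137... ~ (3/5)(1 + sqrt(2))
```

Taking more factors gives more correct digits, which is the sense in which the precision is arbitrary.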


One way to see it is via the action of $\mathrm{GL}(2, \mathbb{R})$ on the projective line, $\mathbb{P}^1(\mathbb{R})$ (I will assume the concept is known to you; otherwise please refer to the linked article).

There is the usual embedding $\psi : \mathbb{R} \to \mathbb{P}^1(\mathbb{R})$ given by $\psi(x) = [x : 1]$ (see homogeneous coordinates), which we use to identify $\mathbb{R}$ with a subset of $\mathbb{P}^1(\mathbb{R})$ (in fact $\mathbb{P}^1(\mathbb{R}) = \psi[\mathbb{R}] \cup \{ [1 : 0] \}$ and $[1 : 0]$ is referred to as the point at infinity).

Now fix an invertible linear transformation $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. It acts on $\mathbb{R}^2$ and maps lines passing through the origin to lines passing through the origin, hence it also acts on $\mathbb{P}^1(\mathbb{R})$. Let's see what this action looks like on the 'finite part' $\psi[\mathbb{R}] \subseteq \mathbb{P}^1(\mathbb{R})$.

Consider a point $p \in \psi[\mathbb{R}]$ expressed as $[x : 1]$ in the homogeneous coordinates. Then

$$A p = [ax + b : cx + d].$$

Since we again want to see this point as an element of $\mathbb{R}$, we divide both coordinates by the second coordinate, so that the latter becomes $1$ (this is possible whenever $cx + d \neq 0$; if $cx + d = 0$, the image is the point at infinity $[1:0]$):

$$Ap = \left[ \frac{ax+b}{cx+d} : 1 \right] = \psi \left( \frac{ax+b}{cx+d} \right).$$
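For experimenting, here is a small Python sketch of this action, with the point at infinity handled explicitly (the `INF` sentinel and the name `act` are my own choices):

```python
from fractions import Fraction

INF = "inf"  # stands for the point at infinity [1 : 0]

def act(A, x):
    # Apply A = ((a, b), (c, d)) to x in Q u {INF} via [x : 1] -> [ax+b : cx+d].
    (a, b), (c, d) = A
    if x == INF:                   # A sends [1 : 0] to [a : c]
        return Fraction(a, c) if c != 0 else INF
    num, den = a * x + b, c * x + d
    if den == 0:                   # [num : 0] is the point at infinity
        return INF
    return num / den               # rescale so the second coordinate is 1

A = ((1, 2), (3, 4))
print(act(A, Fraction(1)))        # (1 + 2)/(3 + 4) = 3/7
print(act(A, Fraction(-4, 3)))    # cx + d = 0, so we land at infinity
print(act(A, INF))                # [1 : 0] maps to [a : c] = 1/3
```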

So the action that $A$ induces on $\mathbb{R}$ is given by the rational function $f : x \mapsto \frac{ax+b}{cx+d}$. Now assume that $x < y$. The two corresponding elements $[x:1], [y:1]$ of the projective line form a negatively oriented pair of vectors (or lines) in $\mathbb{R}^2$, since $\det \begin{pmatrix} x & y \\ 1 & 1 \end{pmatrix} = x - y < 0$.

  • If $\det A > 0$, then $A$ is orientation-preserving, hence $[f(x) : 1], [f(y) : 1]$ will again be negatively oriented, therefore $f(x) < f(y)$.

  • If on the other hand $\det A < 0$, then $A$ is not orientation-preserving, hence $[f(x) : 1], [f(y) : 1]$ will now be positively oriented, therefore $f(x) > f(y)$.

So $f$ is increasing if $ad-bc > 0$ and it is decreasing if $ad-bc < 0$.
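The same sign rule can also be checked by direct computation: $$f(y) - f(x) = \frac{(ay+b)(cx+d) - (ax+b)(cy+d)}{(cy+d)(cx+d)} = \frac{(ad-bc)(y-x)}{(cy+d)(cx+d)},$$ and on any interval avoiding the pole $x = -d/c$ the denominator $(cy+d)(cx+d)$ is positive, so the sign of $f(y)-f(x)$ for $y > x$ is the sign of $ad-bc$.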


One of the principles at work here is the group property of the linear mappings $x \mapsto \frac{ax+b}{cx+d}$.

The following is from Chapter V, *Normal forms and particular linear mappings*, of *Elements of the Theory of Functions* by K. Knopp.

We consider the mapping \begin{align*} y=\frac{a_1x+b_1}{c_1x+d_1}=l_1(x) \end{align*} followed by a mapping \begin{align*} z=\frac{a_2y+b_2}{c_2y+d_2}=l_2(y) \end{align*}

A simple calculation shows that the direct transition from the $x$- to the $z$-plane is effected by the function \begin{align*} z=\frac{ax+b}{cx+d}=l(x)\tag{3} \end{align*} whose four coefficients can be read off from the matrix equation

$$ \begin{pmatrix} a&b\\ c&d\\ \end{pmatrix} = \begin{pmatrix} a_2&b_2\\ c_2&d_2\\ \end{pmatrix} \begin{pmatrix} a_1&b_1\\ c_1&d_1\\ \end{pmatrix} = \begin{pmatrix} a_2a_1+b_2c_1&a_2b_1+b_2d_1\\ c_2a_1+d_2c_1&c_2b_1+d_2d_1\\ \end{pmatrix} $$

By compounding two linear mappings $y=l_1(x), z=l_2(y)$ we thus again obtain a linear mapping \begin{align*} l(x)=l_2(l_1(x))=l_2l_1(x) \end{align*}

If $l_1$ and $l_2$ do not degenerate, neither does $l$. For, according to the multiplication theorem for determinants, or by a simple calculation, we find that $$ \begin{vmatrix} a&b\\ c&d\\ \end{vmatrix} = \begin{vmatrix} a_2&b_2\\ c_2&d_2\\ \end{vmatrix} \cdot \begin{vmatrix} a_1&b_1\\ c_1&d_1\\ \end{vmatrix} $$ and since neither factor is zero, the product is not zero.

It is also easy to verify that this compounding or symbolic multiplication of linear functions is associative, i.e., \begin{align*} l_3(l_2l_1)=(l_3l_2)l_1 \end{align*}

Every function also has an inverse. The inverse of (3) is \begin{align*} z=\frac{-dx+b}{cx-a} \end{align*} It is denoted by $l^{-1}(x)$. When compounded with $l(x)$, it yields the identity: \begin{align*} ll^{-1}(x)=l^{-1}l(x)=x, \end{align*} which corresponds to the coefficient array $$ \begin{pmatrix} 1&0\\ 0&1\\ \end{pmatrix} $$

On the basis of these facts, we can state the following

Theorem: The linear mappings form a group if the compounding of linear functions is employed as group multiplication. The identity function is the identity element of the group, and inverse functions are inverse elements.
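A quick sanity check of the inverse formula (a Python sketch; the coefficients are an arbitrary nondegenerate choice of mine):

```python
def mobius(a, b, c, d):
    return lambda x: (a * x + b) / (c * x + d)

a, b, c, d = 2.0, 3.0, 5.0, 7.0     # determinant ad - bc = -1, so nondegenerate
l = mobius(a, b, c, d)
l_inv = mobius(-d, b, c, -a)        # the inverse (-dx + b)/(cx - a) from the text

for x in (0.0, 1.0, -2.5):
    print(l_inv(l(x)), l(l_inv(x))) # each pair recovers x (up to rounding)
```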