Assuming $AB=I$ prove $BA=I$ [duplicate]
Possible Duplicate:
If $AB = I$ then $BA = I$
Most introductory linear algebra texts define the inverse of a square matrix $A$ as follows:
The inverse of $A$, if it exists, is a matrix $B$ such that $AB=BA=I$.
That definition, in my opinion, is problematic. A few books (fewer than 20% in my sample) give a different definition:
The inverse of $A$, if it exists, is a matrix $B$ such that $AB=I$. They then go on to prove that $BA=I$.
Do you know of a proof other than one that defines the inverse through determinants or uses row reduction (rref)?
Is there a general setting in algebra in which $ab=e$ implies $ba=e$, where $e$ is the identity?
Solution 1:
Multiply both sides of $AB-I=0$ on the left by $B$ to get $$ (BA-I)B=0\tag{1} $$ Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that $\{Be_j\}$ are linearly independent: suppose that $$ \sum_{j=1}^n a_jBe_j=0\tag{2} $$ then, multiplying $(2)$ on the left by $A$ gives $$ \sum_{j=1}^n a_je_j=0\tag{3} $$ which implies that $a_j=0$ since $\{e_j\}$ is a basis. Thus, $\{Be_j\}$ is also a basis for $\mathbb{R}^n$.
Multiplying $(1)$ on the right by $e_j$ yields $$ (BA-I)Be_j=0\tag{4} $$ for each basis vector $Be_j$. Since $BA-I$ annihilates every element of the basis $\{Be_j\}$, it annihilates all of $\mathbb{R}^n$, so $BA-I=0$; that is, $BA=I$.
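As a quick numerical sanity check (an illustration only, not part of the proof), one can build a right inverse of a random square matrix with NumPy and confirm that it is also a left inverse. The matrix size and seed below are arbitrary choices:

```python
import numpy as np

# Sanity check: construct B with A @ B = I and verify that B @ A = I as well.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))  # a random Gaussian matrix is invertible with probability 1

# Solve A @ B = I for B, one column of the identity at a time.
B = np.linalg.solve(A, np.eye(n))

assert np.allclose(A @ B, np.eye(n))  # right inverse, by construction
assert np.allclose(B @ A, np.eye(n))  # left inverse, as the proof claims
print("AB = I and BA = I (up to floating-point error)")
```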
Failure in Infinite Dimensions
Let $A$ and $B$ be operators on the space of infinite sequences: $B$ shifts a sequence one place to the right, filling the first entry with $0$, while $A$ shifts a sequence one place to the left, dropping the first entry.
Then $AB=I$, but $BA$ replaces the first entry with $0$.
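For concreteness, here is a small Python sketch of these two operators, modeling an infinite sequence as a function from index to value (the names `left_shift` and `right_shift` and the sample sequence are my own choices for illustration):

```python
# Model an infinite sequence x as a function from index 0, 1, 2, ... to its value.

def right_shift(x):
    # B: (x0, x1, x2, ...) -> (0, x0, x1, ...)
    return lambda n: 0 if n == 0 else x(n - 1)

def left_shift(x):
    # A: (x0, x1, x2, ...) -> (x1, x2, x3, ...)
    return lambda n: x(n + 1)

x = lambda n: n + 1                      # the sequence 1, 2, 3, ...

AB_x = left_shift(right_shift(x))        # apply B, then A
BA_x = right_shift(left_shift(x))        # apply A, then B

print([AB_x(n) for n in range(5)])       # [1, 2, 3, 4, 5] -- AB acts as the identity
print([BA_x(n) for n in range(5)])       # [0, 2, 3, 4, 5] -- BA zeroes the first entry
```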
Arguments that assume $A^{-1}$ or $B^{-1}$ exists and that make no reference to the finite dimensionality of the vector space usually fail against this counterexample.
Solution 2:
Without the assumption of $A$ and $B$ being square matrices, we can find counterexamples. For example: $$ \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{array}\right) \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array}\right) = \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array}\right) $$ and $$ \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array}\right) \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{array}\right) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array}\right). $$
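The two products can be checked directly, for instance with a short NumPy snippet (an illustrative check, not part of the argument):

```python
import numpy as np

# The rectangular matrices from the counterexample above.
A = np.array([[1, 0, 0],
              [0, 1, 0]])   # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [0, 0]])      # 3 x 2

print(A @ B)  # the 2 x 2 identity: AB = I
print(B @ A)  # a 3 x 3 matrix whose last row and column are zero: BA != I
```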
For square matrices, the result was proved in several ways in the question:
If $AB = I$ then $BA = I$