Matrix determinant lemma with adjugate matrix

I would like a proof of the following result, given on Wikipedia.

For all square matrices $\mathbf{A}$ and column vectors $\mathbf{u}$ and $\mathbf{v}$ over some field $\mathbb{F}$, $$ \det(\mathbf{A}+\mathbf{uv}^\mathrm{T}) = \det(\mathbf{A}) + \mathbf{v}^\mathrm{T}\mathrm{adj}(\mathbf{A})\mathbf{u}, $$ where $\mathrm{adj}(\mathbf{A})$ is the adjugate matrix of $\mathbf{A}$.

Note that $\mathbf{A}$ may be singular. However, the proof given on Wikipedia requires that $\mathbf{A}$ be nonsingular.
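Just to convince yourself that the claim really does survive $\det(\mathbf{A})=0$: here is a minimal sanity check in sympy, with arbitrary illustrative matrices (an example, of course, not a proof).

```python
import sympy as sp

# A deliberately singular matrix (its second row is twice its first),
# together with arbitrary column vectors u and v.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [0, 1, 1]])
u = sp.Matrix([1, -1, 2])
v = sp.Matrix([3, 0, 1])

assert A.det() == 0  # singular, so the formula with A^{-1} is unavailable

lhs = (A + u * v.T).det()
rhs = A.det() + (v.T * A.adjugate() * u)[0, 0]
assert lhs == rhs
```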


Put ${\bf A}=x{\bf I}-{\bf M}$ in the result

$\det({\bf A}+{\bf u}{\bf v}^T)=\det({\bf A})\left(1+{\bf v}^T{\bf A}^{-1}{\bf u}\right)$ (valid whenever ${\bf A}$ is invertible)

so that you obtain

$\det(x{\bf I}-{\bf M}+{\bf u}{\bf v}^T)=\det(x{\bf I}-{\bf M})\left(1+{\bf v}^T(x{\bf I}-{\bf M})^{-1}{\bf u}\right),$

and hence, since $\det(x{\bf I}-{\bf M})\,(x{\bf I}-{\bf M})^{-1}=\mathrm{adj}(x{\bf I}-{\bf M})$,

$\det(x{\bf I}-{\bf M}+{\bf u}{\bf v}^T)=\det(x{\bf I}-{\bf M})+{\bf v}^T\mathrm{adj}(x{\bf I}-{\bf M})\,{\bf u}. \quad [*]$

Then substitute $x=0$ and ${\bf A}=-{\bf M}$ in $[*]$ to recover the desired identity.
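If you want to see both steps concretely, here is a minimal sympy sketch: it checks that the two sides of $[*]$ agree as polynomials in $x$, and then performs the substitution $x=0$. The particular $\mathbf{M}$, $\mathbf{u}$ and $\mathbf{v}$ are illustrative choices, not part of the argument.

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[1, 2], [3, 4]])
u = sp.Matrix([5, 6])
v = sp.Matrix([7, 8])
I = sp.eye(2)

# Both sides of [*] are polynomials in x, and they agree identically:
lhs = (x * I - M + u * v.T).det()
rhs = (x * I - M).det() + (v.T * (x * I - M).adjugate() * u)[0, 0]
assert sp.expand(lhs - rhs) == 0

# Substituting x = 0 recovers the lemma for A = -M:
A = -M
assert lhs.subs(x, 0) == (A + u * v.T).det()
assert rhs.subs(x, 0) == A.det() + (v.T * A.adjugate() * u)[0, 0]
```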


Just so that this question isn't left unanswered:

There are ways to prove the result without using the "polynomial identity trick" (although, of course, the proofs using the trick, in any of its forms, are much shorter). For one such way, see the solution to Exercise 6.59 in my Notes on the combinatorial fundamentals of algebra, version of 10 January 2019. (Note that my $A$, $u$ and $v$ are your $\mathbf{A}$, $\mathbf{u}$ and $\mathbf{v}^T$.)

However, let me also give a justification for @baronbrixius's answer. It is essentially a complete proof; what it lacks is the definition of a ring over which the matrix $x\mathbf{I} - \mathbf{M}$ is invertible. Fortunately, such a ring is easy to construct: namely, you can take the ring of anti-Laurent series in the indeterminate $x$ over your base ring.

Let me explain this a bit: If $R$ is any commutative ring, then the anti-Laurent series in the indeterminate $x$ over $R$ are the sequences $\left(\ldots,r_{-2},r_{-1},r_0,r_1,r_2,\ldots\right) \in R^{\mathbb{Z}}$ (infinite in both directions) such that all but finitely many positive integers $n$ satisfy $r_n = 0$ (whereas we don't care about how many negative integers $n$ satisfy $r_n = 0$). The set of all these anti-Laurent series will be denoted by $R\left(\left(1/x\right)\right)$. This set can be made into a ring in the same way as the polynomial ring $R\left[x\right]$ is made into a ring: Two anti-Laurent series are added entrywise, and multiplied by the rule

$\left(\ldots,r_{-2},r_{-1},r_0,r_1,r_2,\ldots\right) \cdot \left(\ldots,s_{-2},s_{-1},s_0,s_1,s_2,\ldots\right) = \left(\ldots,u_{-2},u_{-1},u_0,u_1,u_2,\ldots\right)$,

where $u_k = \sum_{\left(i,j\right) \in \mathbb{Z}^2;\ i+j=k} r_i s_j$. (The sum $\sum_{\left(i,j\right) \in \mathbb{Z}^2;\ i+j=k} r_i s_j$ has infinitely many addends, but all but finitely many of them are $0$, so this sum is well-defined.) You need to prove that this ring actually satisfies the ring axioms, such as associativity of multiplication; but this is all easy and well-known (if you have seen it done for polynomials, then you'll be able to use the same arguments here).

The anti-Laurent series $\left(\ldots,r_{-2},r_{-1},r_0,r_1,r_2,\ldots\right) \in R^{\mathbb{Z}}$ with $r_1 = 1$ and $r_i = 0$ for all $i \neq 1$ is denoted by $x$; thus, we can rewrite any $\left(\ldots,r_{-2},r_{-1},r_0,r_1,r_2,\ldots\right) \in R\left(\left(1/x\right)\right)$ as $\sum\limits_{k\in\mathbb{Z}} r_k x^k = \cdots + r_{-2}x^{-2} + r_{-1}x^{-1} + r_0x^0 + r_1x^1 + r_2x^2 + \cdots$. To make sense of such infinite sums, we need to define a topology on $R\left(\left(1/x\right)\right)$; but this is fairly straightforward: it is the product topology on $R^{\mathbb{Z}}$.

Now, I claim that if $R$ is the base ring for your matrix $\mathbf{M}$, then the matrix $x \mathbf{I} - \mathbf{M}$ is invertible when regarded as an $n\times n$-matrix over this ring $R\left(\left(1/x\right)\right)$. Indeed, its inverse is the infinite sum $x^{-1}\mathbf{I} + x^{-2} \mathbf{M} + x^{-3} \mathbf{M}^2 + \cdots = \sum_{k \geq 0} x^{-k-1} \mathbf{M}^k$. (The proof that this infinite sum is well-defined is easy, since it converges entrywise; and the proof that it is actually an inverse to $x \mathbf{I} - \mathbf{M}$ is easy as well, since the product telescopes.)
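If you want to see the telescoping concretely without constructing $R\left(\left(1/x\right)\right)$ in full, here is a minimal sympy sketch using a truncated partial sum of the claimed inverse; the matrix $\mathbf{M}$ and the cutoff $N$ are illustrative choices.

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[0, 1], [2, 3]])
I = sp.eye(2)
N = 5  # truncation order

# Partial sum of the claimed inverse: sum_{k=0}^{N} x^{-k-1} M^k.
partial = sum((x**(-k - 1) * M**k for k in range(N + 1)), sp.zeros(2, 2))

# Multiplying by x*I - M telescopes: everything cancels except I and a
# single leftover term -x^{-N-1} M^{N+1}, which goes to 0 entrywise
# (in the product topology) as N grows.
leftover = ((x * I - M) * partial - I).expand()
assert leftover == (-x**(-N - 1) * M**(N + 1)).expand()
```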

Here is one caveat: Of course, you cannot "substitute $x=0$" into a matrix over the ring $R\left(\left(1/x\right)\right)$. However, @baronbrixius's argument doesn't do that. He uses the ring $R\left(\left(1/x\right)\right)$ to obtain the identity that he calls $[*]$. But this identity can also be regarded as an identity over the subring $R\left[x\right]$ of the ring $R\left(\left(1/x\right)\right)$ (because no negative powers of $x$ occur in this identity); and you can substitute $x=0$ into such an identity.

One final warning: The requirement that "all but finitely many positive integers $n$ satisfy $r_n = 0$" in our definition of the ring $R\left(\left(1/x\right)\right)$ was important. If you drop it, then the ring structure is no longer well-defined, since the sum $\sum_{\left(i,j\right) \in \mathbb{Z}^2;\ i+j=k} r_i s_j$ might have infinitely many nonzero addends. (For instance, squaring the doubly infinite series $\sum_{n \in \mathbb{Z}} x^n$ would require each coefficient of the product to be $\sum_{\left(i,j\right) \in \mathbb{Z}^2;\ i+j=k} 1 \cdot 1$, which has infinitely many nonzero addends.)