Determinant of a sum of matrices

Let me outline two other proofs. Let me first rename your $m$ and $n$ as $n$ and $r$, since I find it confusing when $n$ is not the size of the square matrices involved. So you are claiming the following:

Theorem 1. Let $\mathbb{K}$ be a commutative ring. Let $n\in\mathbb{N}$ and $r\in\mathbb{N}$ be such that $n<r$. Let $A_{1},A_{2},\ldots,A_{r}$ be $n\times n$-matrices over $\mathbb{K}$. Then, \begin{equation} \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0. \end{equation}

Notice that I've snuck in one more little change into your formula: I've added the addend for $I=\varnothing$. This addend usually contributes nothing, because $\det\left( \sum\limits_{i\in\varnothing}A_{i}\right) =\det\left( 0_{n\times n}\right) $ is $0$ whenever $n>0$... but if $n=0$, it contributes $\det\left( 0_{0\times0}\right) =1$ (keep in mind that there is only one $0\times0$-matrix and its determinant is $1$), and the whole equality fails if this addend is missing.
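Before going into the proofs, here is a quick computational sanity check of Theorem 1 (a minimal sketch in Python, using sympy for exact determinants; the choices $n=3$, $r=4$ and the random integer entries are mine, not part of the theorem):

```python
# Sanity check of Theorem 1: the signed sum of determinants over all
# subsets I of {1, ..., r} vanishes when n < r. Uses sympy for exact
# integer determinants. (n = 3, r = 4 and the entries are arbitrary.)
from itertools import combinations
import random
import sympy

n, r = 3, 4  # we need n < r
random.seed(0)
A = [sympy.Matrix(n, n, lambda i, j: random.randint(-5, 5)) for _ in range(r)]

total = 0
for size in range(r + 1):  # size 0 includes the addend for I = {} as well
    for I in combinations(range(r), size):
        S = sympy.zeros(n, n)  # the empty sum of matrices is the zero matrix
        for i in I:
            S += A[i]
        total += (-1) ** len(I) * S.det()

print(total)  # expected: 0
```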

A first proof of Theorem 1 appears in (the solution to) Exercise 6.53 in my Notes on the combinatorial fundamentals of algebra, version of 10 January 2019. (To obtain Theorem 1 from this exercise, set $G=\left\{ 1,2,\ldots,r\right\} $.) The main idea of this proof is that Theorem 1 holds not only for determinants, but also for each of the $n!$ products that make up the determinant (assuming that you define the determinant of an $n\times n$-matrix as a sum over the $n!$ permutations); this is proven by interchanging summation signs and exploiting discrete "destructive interference" (i.e., the fact that if $G$ is a finite set and $R$ is a subset of $G$, then $\sum\limits_{\substack{I\subseteq G;\\R\subseteq I}}\left( -1\right) ^{\left\vert I\right\vert }= \begin{cases} \left( -1\right) ^{\left\vert G\right\vert }, & \text{if }R=G;\\ 0, & \text{if }R\neq G \end{cases} $).
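This destructive-interference fact is easy to verify by brute force; here is a small script (the example sets $G$ and $R$ below are arbitrary choices of mine):

```python
# Brute-force check of: the sum of (-1)^|I| over all I with R <= I <= G
# equals (-1)^|G| if R = G, and 0 otherwise. (G and R are arbitrary
# example sets.)
from itertools import combinations

def interference(G, R):
    G, R = set(G), set(R)
    rest = sorted(G - R)  # the elements we are free to add to R
    total = 0
    for size in range(len(rest) + 1):
        for extra in combinations(rest, size):
            I = R | set(extra)
            total += (-1) ** len(I)
    return total

G = {1, 2, 3, 4}
print(interference(G, G))       # (-1)^|G| = 1 here, since |G| = 4
print(interference(G, {1, 3}))  # 0, since R != G
```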

Let me now sketch a second proof of Theorem 1, which shows that it isn't really about determinants. It is about finite differences, in a slightly more general context than they are usually studied.

Let $M$ be any $\mathbb{K}$-module. The dual $\mathbb{K}$-module $M^{\vee }=\operatorname{Hom}_{\mathbb{K}}\left( M,\mathbb{K}\right) $ of $M$ consists of all $\mathbb{K}$-linear maps $M\rightarrow\mathbb{K}$. Thus, $M^{\vee}$ is a $\mathbb{K}$-submodule of the $\mathbb{K}$-module $\mathbb{K}^{M}$ of all maps $M\rightarrow\mathbb{K}$. The $\mathbb{K} $-module $\mathbb{K}^{M}$ becomes a commutative $\mathbb{K}$-algebra (we just define multiplication to be pointwise, i.e., the product $fg$ of two maps $f,g:M\rightarrow\mathbb{K}$ sends each $m\in M$ to $f\left( m\right) g\left( m\right) \in\mathbb{K}$).

For any $d\in\mathbb{N}$, we let $M^{\vee d}$ be the $\mathbb{K}$-linear span of all elements of $\mathbb{K}^{M}$ of the form $f_{1}f_{2}\cdots f_{d}$ for $f_{1},f_{2},\ldots,f_{d}\in M^{\vee}$. (For $d=0$, the only such element is the empty product $1$, so $M^{\vee0}$ consists of the constant maps $M\rightarrow\mathbb{K}$. Notice also that $M^{\vee1}=M^{\vee}$.) The elements of $M^{\vee d}$ are called homogeneous polynomial functions of degree $d$ on $M$. The underlying idea is that if $M$ is a free $\mathbb{K}$-module with a given basis, then the elements of $M^{\vee d}$ are the maps $M\rightarrow \mathbb{K}$ that can be expressed as polynomials of the coordinate functions with respect to this basis; but the $\mathbb{K}$-module $M^{\vee d}$ makes perfect sense whether or not $M$ is free.

We also set $M^{\vee d}=0$ (the zero $\mathbb{K}$-submodule of $\mathbb{K} ^{M}$) for $d<0$.

For each $d \in \mathbb{Z}$, we define a $\mathbb{K}$-submodule $M^{\vee \leq d}$ of $\mathbb{K}^M$ by \begin{equation} M^{\vee \leq d} = \sum\limits_{i \leq d} M^{\vee i} . \end{equation} The elements of $M^{\vee \leq d}$ are called (inhomogeneous) polynomial functions of degree $\leq d$ on $M$. The submodules $M^{\vee \leq d}$ satisfy \begin{equation} M^{\vee \leq d} M^{\vee \leq e} \subseteq M^{\vee \leq \left(d+e\right)} \end{equation} for any integers $d$ and $e$.

For any $x\in M$, we define the $\mathbb{K}$-linear map $S_{x}:\mathbb{K} ^{M}\rightarrow\mathbb{K}^{M}$ by setting \begin{equation} \left( S_{x}f\right) \left( m\right) =f\left( m+x\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}. \end{equation} This map $S_{x}$ is called a shift operator. It is an endomorphism of the $\mathbb{K}$-algebra $\mathbb{K}^{M}$ and preserves all the $\mathbb{K} $-submodules $M^{\vee \leq d}$ (for all $d\in\mathbb{Z}$).

Moreover, for any $x\in M$, we define the $\mathbb{K}$-linear map $\Delta _{x}:\mathbb{K}^{M}\rightarrow\mathbb{K}^{M}$ by $\Delta_{x} =\operatorname*{id}-S_{x}$. Hence, \begin{equation} \left( \Delta_{x}f\right) \left( m\right) =f\left( m\right) -f\left( m+x\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}. \end{equation} This map $\Delta_{x}$ is called a difference operator. The following crucial fact shows that it "decrements the degree" of a polynomial function, similarly to how differentiation decrements the degree of a polynomial:

Lemma 2. Let $x \in M$. Then, $\Delta_{x}M^{\vee d}\subseteq M^{\vee \leq \left( d-1\right)}$ for each $d\in\mathbb{Z}$.

[Let me sketch a proof of Lemma 2:

Lemma 2 clearly holds for $d < 0$ (since $M^{\vee d} = 0$ if $d < 0$). Hence, it remains to prove Lemma 2 for $d \geq 0$. We shall prove this by induction on $d$. The induction base is the case $d = 0$, which is easy to check (indeed, each $f \in M^{\vee 0}$ is a constant map, and thus satisfies $\Delta_x f = 0$; therefore, $\Delta_{x}M^{\vee 0} = 0 \subseteq M^{\vee \leq \left( 0-1\right) }$).

For the induction step, we fix some nonnegative integer $e$, and assume that Lemma 2 holds for $d = e$. We must then show that Lemma 2 holds for $d = e+1$.

We have assumed that Lemma 2 holds for $d = e$. In other words, we have $\Delta_{x}M^{\vee e}\subseteq M^{\vee \leq \left( e-1\right)}$.

Our goal is to show that Lemma 2 holds for $d = e+1$. In other words, our goal is to show that $\Delta_{x}M^{\vee \left(e+1\right)}\subseteq M^{\vee \leq e}$.

But the $\mathbb{K}$-module $M^{\vee \left(e+1\right)}$ is spanned by maps of the form $fg$ with $f\in M^{\vee e}$ and $g\in M^{\vee}$ (since it is spanned by products of the form $f_1 f_2 \cdots f_{e+1}$ with $f_1, f_2, \ldots, f_{e+1} \in M^{\vee}$, but each such product can be rewritten in the form $fg$ with $f = f_1 f_2 \cdots f_e \in M^{\vee e}$ and $g = f_{e+1} \in M^{\vee}$). Hence, it suffices to show that $\Delta_x \left( fg \right) \in M^{\vee \leq e}$ for each $f\in M^{\vee e}$ and $g\in M^{\vee}$.

Let us first notice that if $g \in M^{\vee}$ is arbitrary, then $\Delta_x g$ is the constant map whose value is $- g\left(x\right)$ (because each $m \in M$ satisfies \begin{equation} \left(\Delta_x g\right) \left(m\right) = g\left(m\right) - \underbrace{g\left(m+x\right)}_{\substack{=g\left(m\right) + g\left(x\right)\\ \text{(since }g \text{ is } \mathbb{K}\text{-linear)}}} = g\left(m\right) - \left(g\left(m\right) + g\left(x\right)\right) = - g\left(x\right) \end{equation} ), and thus belongs to $M^{\vee 0}$. In other words, $\Delta_x M^{\vee} \subseteq M^{\vee 0}$.

For each $f \in \mathbb{K}^M$ and $g \in \mathbb{K}^M$, we have \begin{align*} \Delta_{x}\left( fg\right) & =\left( \operatorname*{id}-S_{x}\right) \left( fg\right) \qquad\left( \text{since }\Delta_{x}=\operatorname*{id} -S_{x}\right) \\ & =fg-\underbrace{S_{x}\left( fg\right) }_{\substack{=\left( S_{x}f\right) \left( S_{x}g\right) \\\text{(since }S_{x}\text{ is an endomorphism} \\\text{of the }\mathbb{K}\text{-algebra }\mathbb{K}^{M}\text{)}}}\\ & =fg-\left( S_{x}f\right) \left( S_{x}g\right) =\underbrace{\left( f-S_{x}f\right) }_{=\left( \operatorname*{id}-S_{x}\right) f}g+\left( S_{x}f\right) \underbrace{\left( g-S_{x}g\right) }_{=\left( \operatorname*{id}-S_{x}\right) g}\\ & =\left( \underbrace{\left( \operatorname*{id}-S_{x}\right) }_{=\Delta _{x}}f\right) g+\left( S_{x}f\right) \left( \underbrace{\left( \operatorname*{id}-S_{x}\right) }_{=\Delta_{x}}g\right) \\ & =\left( \Delta_{x}f\right) g+\left( \underbrace{S_{x}}_{\substack{=\operatorname*{id}-\Delta_{x}\\ \text{(since }\Delta _{x}=\operatorname*{id}-S_{x}\text{)}}}f\right) \left( \Delta_{x}g\right) \\ & =\left( \Delta_{x}f\right) g+\underbrace{\left( \left( \operatorname*{id}-\Delta_{x}\right) f\right) }_{=f-\Delta_{x}f}\left( \Delta_{x}g\right) \\ & =\left( \Delta_{x}f\right) g+\left( f-\Delta_{x}f\right) \left( \Delta_{x}g\right) \\ & =\left( \Delta_{x}f\right) g+f\left( \Delta_{x}g\right) -\left( \Delta_{x}f\right) \left( \Delta_{x}g\right) . \end{align*} Hence, for each $f\in M^{\vee e}$ and $g\in M^{\vee}$, we have \begin{align*} \Delta_{x}\left( fg\right) & =\left( \Delta_{x}\underbrace{f}_{\in M^{\vee e}}\right) \underbrace{g}_{\in M^{\vee}}+\underbrace{f}_{\in M^{\vee e}}\left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) -\left( \Delta _{x}\underbrace{f}_{\in M^{\vee e}}\right) \left( \Delta_{x}\underbrace{g}_{\in M^{\vee}}\right) \\ & \in\underbrace{\left( \Delta_{x}M^{\vee e}\right) }_{\subseteq M^{\vee \leq\left( e-1\right) }}M^{\vee}+M^{\vee e}\underbrace{\left( \Delta _{x}M^{\vee}\right) }_{\subseteq M^{\vee0}}-\underbrace{\left( \Delta _{x}M^{\vee e}\right) }_{\subseteq M^{\vee\leq\left( e-1\right) } }\underbrace{\left( \Delta_{x}M^{\vee}\right) }_{\subseteq M^{\vee0}}\\ & \subseteq\underbrace{M^{\vee\leq\left( e-1\right) }M^{\vee}}_{\subseteq M^{\vee\leq e}}+\underbrace{M^{\vee e}M^{\vee0}}_{\subseteq M^{\vee e}\subseteq M^{\vee\leq e}}-\underbrace{M^{\vee\leq\left( e-1\right) }M^{\vee0}}_{\subseteq M^{\vee\leq\left( e-1\right) }\subseteq M^{\vee\leq e}}\\ & \subseteq M^{\vee\leq e}+M^{\vee\leq e}-M^{\vee\leq e}\subseteq M^{\vee\leq e}. \end{align*} This proves that $\Delta_{x}\left( M^{\vee\left( e+1\right) }\right) \subseteq M^{\vee\leq e}$, as we intended to prove.

Thus, the induction step is complete, and Lemma 2 is proven.]
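The product rule derived above, $\Delta_{x}\left( fg\right) =\left( \Delta_{x}f\right) g+f\left( \Delta_{x}g\right) -\left( \Delta_{x}f\right) \left( \Delta_{x}g\right) $, is exactly where difference operators deviate from the Leibniz rule for derivatives; here is a tiny numeric check in the toy case $\mathbb{K}=M=\mathbb{Z}$ (with $f$, $g$, $x$ and $m$ arbitrary examples):

```python
# Check of the twisted product rule
#   Delta_x(fg) = (Delta_x f) g + f (Delta_x g) - (Delta_x f)(Delta_x g)
# in the toy case K = M = Z. (f, g, x, m are arbitrary examples.)

def delta(x, f):
    """The text's difference operator: (Delta_x f)(m) = f(m) - f(m + x)."""
    return lambda m: f(m) - f(m + x)

f = lambda m: m**2 + 1
g = lambda m: 3 * m - 2
x, m = 4, 5

lhs = delta(x, lambda t: f(t) * g(t))(m)
rhs = (delta(x, f)(m) * g(m) + f(m) * delta(x, g)(m)
       - delta(x, f)(m) * delta(x, g)(m))

print(lhs, rhs)  # expected: two equal numbers
```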

The following fact follows by induction using Lemma 2:

Corollary 3. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Then, \begin{equation} \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}\subseteq M^{\vee \leq \left( d-r\right) } \end{equation} for each $d\in\mathbb{Z}$.

And as a consequence of this, we obtain the following:

Corollary 4. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Then, \begin{equation} \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}M^{\vee d}=0 \end{equation} for each $d\in\mathbb{Z}$ satisfying $d<r$.

[In fact, Corollary 4 follows immediately from Corollary 3, because $d<r$ implies $M^{\vee \leq \left( d-r\right) }=0$.]
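To see what Corollary 4 says in the most down-to-earth case, take $\mathbb{K}=\mathbb{Z}$ and $M=\mathbb{Z}$; then $M^{\vee\leq d}$ consists of the integer polynomial functions of degree $\leq d$, and Corollary 4 says that any $r$ difference operators annihilate such a function whenever $d<r$. A minimal sketch (the polynomial $f$ and the shifts are arbitrary examples):

```python
# Corollary 4 in the toy case K = M = Z: applying r > d difference
# operators to a polynomial function of degree d gives the zero
# function. (f and the shifts are arbitrary examples.)

def delta(x, f):
    """The text's difference operator: (Delta_x f)(m) = f(m) - f(m + x)."""
    return lambda m: f(m) - f(m + x)

f = lambda m: 2 * m**3 - m + 7  # degree d = 3

g = f
for x in [2, -1, 5, 3]:  # r = 4 > d = 3 difference operators
    g = delta(x, g)

print([g(m) for m in range(-3, 4)])  # expected: all zeros
```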

To make use of Corollary 4, we want a more-or-less explicit expression for how $\Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}$ acts on maps in $\mathbb{K}^{M}$. This is the following fact:

Proposition 5. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Then, \begin{equation} \left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left( m\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \qquad\text{for each }m\in M\text{ and }f\in\mathbb{K}^{M}. \end{equation}

[Proposition 5 can be proven by induction over $r$, where the induction step involves splitting the sum on the right hand side into the part with the $I$ that contain $r$ and the part with the $I$ that don't. But there is also a slicker argument, which needs some preparation. The maps $S_{x}\in \operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $ for different elements $x\in M$ commute; better yet, they satisfy the multiplication rule $S_{x}S_{y}=S_{x+y}$ (as can be checked immediately). Hence, by induction over $\left\vert I\right\vert $, we conclude that if $I$ is any finite set, and if $x_{i}$ is an element of $M$ for each $i\in I$, then \begin{equation} \prod\limits_{i\in I}S_{x_{i}}=S_{\sum\limits_{i\in I}x_{i}} \qquad \text{in the ring } \operatorname{End}_{\mathbb{K}} \left(\mathbb{K}^M\right) . \end{equation} I shall refer to this fact as the S-multiplication rule.

Now, let us prove Proposition 5. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Recall the well-known formula \begin{equation} \prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( 1-a_{i}\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\prod\limits_{i\in I}a_{i}, \end{equation} which holds whenever $a_{1},a_{2},\ldots,a_{r}$ are commuting elements of some ring. Applying this formula to $a_{i}=S_{x_{i}}$, we obtain \begin{equation} \prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id} -S_{x_{i}}\right) =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\prod\limits_{i\in I}S_{x_{i}} \end{equation} (since $S_{x_{1}},S_{x_{2}},\ldots,S_{x_{r}}$ are commuting elements of the ring $\operatorname{End}_{\mathbb{K}}\left( \mathbb{K}^{M}\right) $). Thus, \begin{align*} \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}} & =\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\underbrace{\Delta_{x_{i}}} _{\substack{=\operatorname*{id}-S_{x_{i}}\\\text{(by the definition of } \Delta_{x_{i}}\text{)}}}=\prod\limits_{i\in\left\{ 1,2,\ldots,r\right\} }\left( \operatorname*{id}-S_{x_{i}}\right) \\ & =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\underbrace{\prod\limits_{i\in I}S_{x_{i}}} _{\substack{=S_{\sum\limits_{i\in I}x_{i}}\\\text{(by the S-multiplication rule)} }}=\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}. \end{align*} Hence, for each $m\in M$ and $f\in\mathbb{K}^{M}$, we obtain \begin{align*} & \left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r}}f\right) \left( m\right) \\ & =\left( \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }S_{\sum\limits_{i\in I}x_{i}}f\right) \left( m\right) \\ & =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\underbrace{\left( S_{\sum\limits_{i\in I}x_{i}}f\right) \left( m\right) }_{\substack{=f\left( m+\sum\limits_{i\in I}x_{i}\right) \\\text{(by the definition of the shift operators)}}}\\ & =\sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) . \end{align*} Thus, Proposition 5 is proven.]
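Proposition 5 can also be confirmed numerically, say in the toy case $\mathbb{K}=M=\mathbb{Z}$ (note that it holds for arbitrary $f\in\mathbb{K}^{M}$, not just for polynomial functions; the $f$, $m$ and $x_{i}$ below are arbitrary examples):

```python
# Check of Proposition 5 in the toy case K = M = Z: iterated difference
# operators agree with the signed subset-sum expansion. (f, m and the
# shifts xs are arbitrary examples.)
from itertools import combinations

def delta(x, f):
    return lambda m: f(m) - f(m + x)

f = lambda m: m**5 - 3 * m**2 + 1
xs = [2, -1, 4]
m = 7

g = f
for x in xs:  # the order is irrelevant: the Delta_x commute
    g = delta(x, g)
lhs = g(m)

rhs = sum(
    (-1) ** len(I) * f(m + sum(xs[i] for i in I))
    for size in range(len(xs) + 1)
    for I in combinations(range(len(xs)), size)
)

print(lhs, rhs)  # expected: two equal (nonzero) numbers
```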

We can now combine Corollary 4 with Proposition 5 and obtain the following:

Corollary 6. Let $r\in\mathbb{N}$. Let $x_{1},x_{2},\ldots,x_{r}$ be $r$ elements of $M$. Let $d\in\mathbb{Z}$ be such that $d<r$. Let $f\in M^{\vee d}$ and $m\in M$. Then, \begin{equation} \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) =0. \end{equation}

[Indeed, Corollary 6 follows from the computation \begin{align*} & \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }f\left( m+\sum\limits_{i\in I}x_{i}\right) \\ & =\underbrace{\left( \Delta_{x_{1}}\Delta_{x_{2}}\cdots\Delta_{x_{r} }f\right) }_{\substack{=0\\\text{(by Corollary 4, since } f \in M^{\vee d} \text{)}}}\left( m\right) \qquad\left( \text{by Proposition 5}\right) \\ & =0. \end{align*} ]

Finally, let us prove Theorem 1. The matrix ring $\mathbb{K}^{n\times n}$ is a $\mathbb{K}$-module. Let $M$ be this $\mathbb{K}$-module $\mathbb{K}^{n\times n}$. For each $i,j\in\left\{ 1,2,\ldots,n\right\} $, we let $x_{i,j}$ be the map $M\rightarrow\mathbb{K}$ that sends each matrix $A\in\mathbb{K}^{n\times n}$ to its $\left( i,j\right) $-th entry; this map $x_{i,j}$ is $\mathbb{K}$-linear and thus belongs to $M^{\vee}$.

It is easy to see that the map $\det:\mathbb{K}^{n\times n}\rightarrow \mathbb{K}$ (sending each $n\times n$-matrix to its determinant) is a homogeneous polynomial function of degree $n$ on $M$; indeed, it can be represented in the commutative $\mathbb{K}$-algebra $\mathbb{K}^M$ as \begin{equation} \det=\sum\limits_{\sigma\in S_{n}}\left( -1\right) ^{\sigma}x_{1,\sigma\left( 1\right) }x_{2,\sigma\left( 2\right) }\cdots x_{n,\sigma\left( n\right) }, \end{equation} where $S_{n}$ is the $n$-th symmetric group, and where $\left( -1\right) ^{\sigma}$ denotes the sign of a permutation $\sigma$. In other words, $\det\in M^{\vee n}$. Hence, Corollary 6 (applied to $x_{i}=A_{i}$, $d=n$, $f=\det$ and $m=0$) yields \begin{equation} \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\det\left( 0+\sum\limits_{i\in I}A_{i}\right) =0. \end{equation} In other words, \begin{equation} \sum\limits_{I\subseteq\left\{ 1,2,\ldots,r\right\} }\left( -1\right) ^{\left\vert I\right\vert }\det\left( \sum\limits_{i\in I}A_{i}\right) =0. \end{equation} This proves Theorem 1. $\blacksquare$
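As a cross-check of the step where $\det$ was recognized as an element of $M^{\vee n}$, one can compare the Leibniz sum above with a library determinant (a sketch; $n=3$ and the random entries are arbitrary choices of mine):

```python
# Check that the Leibniz sum over permutations reproduces the
# determinant. (n = 3 and the random entries are arbitrary.)
from itertools import permutations
from math import prod
import random
import sympy

def sign(p):
    """Sign of the permutation tuple p, via its inversion count."""
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

n = 3
random.seed(1)
A = sympy.Matrix(n, n, lambda i, j: random.randint(-5, 5))

leibniz = sum(sign(p) * prod(A[i, p[i]] for i in range(n))
              for p in permutations(range(n)))

print(leibniz == A.det())  # expected: True
```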


Here is yet another proof, via exterior algebras. Given integers $n > m > 0$, let $[n]$ be a shorthand for the set $\{1,\ldots,n\}$.

For any $t \in \mathbb{R}$ and $x_1, \ldots, x_n \in \mathbb{C}$, we have the identity

$$\prod_{k=1}^n (1 - e^{tx_k}) = \sum_{P \subseteq [n]} (-1)^{|P|} e^{t\sum_{k\in P} x_k}$$

Treat both sides as functions of $t$ and expand them in powers of $t$. On the LHS, each factor $1 - e^{tx_k}$ is divisible by $t$, so the coefficient of $t^k$ vanishes whenever $k < n$. Comparing the coefficients of $t^m$ on both sides (and clearing the common factor $1/m!$), we obtain:

$$ 0 = \sum_{P\subseteq [n]} (-1)^{|P|} \left(\sum_{k\in P} x_k\right)^m\tag{*1}$$

Notice that the RHS of $(*1)$ is a polynomial in $x_1,\ldots,x_n$ with integer coefficients. Since it evaluates to $0$ for all $(x_1,\ldots,x_n) \in \mathbb{C}^n$, and since a polynomial over the infinite field $\mathbb{C}$ that vanishes as a function is the zero polynomial, $(*1)$ is valid as a polynomial identity in $n$ indeterminates with integer coefficients. As a corollary, it remains valid when $x_1, x_2, \ldots, x_n$ are elements taken from any commutative algebra.
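Here is a brute-force check of $(*1)$ (a sketch; $n=4$, $m=2$ and the sample integers are arbitrary choices of mine):

```python
# Brute-force check of (*1): the signed subset-sum of m-th powers
# vanishes whenever m < n. (n = 4, m = 2 and xs are arbitrary.)
from itertools import combinations

n, m = 4, 2  # we need m < n
xs = [3, -1, 7, 2]

total = sum(
    (-1) ** len(P) * sum(xs[k] for k in P) ** m
    for size in range(n + 1)
    for P in combinations(range(n), size)
)

print(total)  # expected: 0
```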

Let $V$ be a vector space over $\mathbb{C}$ with basis $\eta_1, \ldots, \eta_m, \bar{\eta}_1, \ldots, \bar{\eta}_m$ (so that $\dim V = 2m$).

Let $\Lambda^{e}(V) = \bigoplus_{k=0}^m \Lambda^{2k}(V)$ be the 'even' part of its exterior algebra. Since elements of even degree commute with everything in $\Lambda(V)$, the algebra $\Lambda^{e}(V)$ is commutative.

For any $m \times m$ matrix $A$, let $\tilde{A} \in \Lambda^e(V)$ be the element defined by:

$$A = (a_{ij}) \quad\longrightarrow\quad \tilde{A} = \sum_{i=1}^m\sum_{j=1}^m a_{ij}\bar{\eta}_i \wedge \eta_j$$

Notice that the $m$-fold power of $\tilde{A}$ satisfies an interesting identity:

$$\tilde{A}^m = \underbrace{\tilde{A} \wedge \cdots \wedge \tilde{A}}_{m \text{ times}} = \det(A) \omega \quad\text{ where }\quad \omega = m!\, \bar{\eta}_1 \wedge \eta_1 \wedge \cdots \wedge \bar{\eta}_m \wedge \eta_m\tag{*2}$$
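Identity $(*2)$ can be verified by machine if one encodes wedge monomials as sorted tuples of basis indices and tracks signs by counting inversions. In the sketch below (with the arbitrary choices $m=3$ and random integer entries), the zero-based index $2i$ encodes $\bar{\eta}_{i+1}$ and $2i+1$ encodes $\eta_{i+1}$, so that $\omega$ is $m!$ times the fully sorted monomial $(0, 1, \ldots, 2m-1)$:

```python
# Exact check of (*2): A-tilde^m = det(A) * omega, where
# omega = m! * bar-eta_1 ^ eta_1 ^ ... ^ bar-eta_m ^ eta_m.
# Basis encoding (zero-based): 2*i <-> bar-eta_{i+1}, 2*i+1 <-> eta_{i+1}.
# (m = 3 and the random integer matrix are arbitrary choices.)
import math
import random
import sympy

def wedge(u, v):
    """Wedge product of elements given as dicts {sorted index tuple: coeff}."""
    out = {}
    for a, ca in u.items():
        for b, cb in v.items():
            if set(a) & set(b):
                continue  # a repeated basis vector kills the term
            c = a + b
            inv = sum(1 for i in range(len(c))
                      for j in range(i + 1, len(c)) if c[i] > c[j])
            key = tuple(sorted(c))
            out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return out

m = 3
random.seed(2)
A = sympy.Matrix(m, m, lambda i, j: random.randint(-3, 3))

# Build A-tilde = sum_{i,j} a_ij bar-eta_i ^ eta_j.
At = {}
for i in range(m):
    for j in range(m):
        for key, c in wedge({(2 * i,): A[i, j]}, {(2 * j + 1,): 1}).items():
            At[key] = At.get(key, 0) + c

power = {(): 1}  # the scalar 1 (empty wedge product)
for _ in range(m):
    power = wedge(power, At)

top = power.get(tuple(range(2 * m)), 0)  # coefficient of the sorted top monomial
print(top == math.factorial(m) * A.det())  # expected: True
```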

Given any $n$-tuple of matrices $A_1, \ldots, A_n \in M_{m\times m}(\mathbb{C})$, if we substitute $\tilde{A}_k$ for $x_k$ in $(*1)$ and apply $(*2)$ (note that $A \mapsto \tilde{A}$ is linear, so $\sum_{k\in P} \tilde{A}_k$ is the element corresponding to the matrix $\sum_{k\in P} A_k$), we find

$$ \sum_{P\subseteq [n]} (-1)^{|P|} \left(\sum_{k\in P} \tilde{A}_k\right)^m = \sum_{P\subseteq [n]} (-1)^{|P|} \det\left(\sum_{k\in P} A_k\right)\omega = 0 $$ Extracting the coefficient of $\omega$ yields the desired identity: $$\sum_{P\subseteq [n]} (-1)^{|P|} \det\left(\sum_{k\in P} A_k\right) = 0$$