Expressing the determinant of a sum of two matrices?

Can

$$\det(A + B)$$

be expressed in terms of

$$\det(A), \det(B), n$$

where $A,B$ are $n\times n$ matrices?


I made the edit to allow $n$ to be factored in.


Solution 1:

When $n=2$ and $A$ is invertible, you can easily show that

$\det(A+B)=\det A+\det B+\det A\,\cdot \mathrm{Tr}(A^{-1}B)$.
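A quick numerical sanity check of this $n=2$ identity; a sketch using NumPy with arbitrary random matrices (a random $2\times2$ matrix is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))  # invertible almost surely
B = rng.standard_normal((2, 2))

lhs = np.linalg.det(A + B)
rhs = (np.linalg.det(A) + np.linalg.det(B)
       + np.linalg.det(A) * np.trace(np.linalg.inv(A) @ B))

# The two sides agree up to floating-point error.
assert abs(lhs - rhs) < 1e-10
```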


Let me give a general method to find the determinant of the sum of two matrices $A,B$ with $A$ invertible and symmetric. (The following result might also apply to the non-symmetric case; I might verify that later.) I am a physicist, so I will use index notation, $A_{ij}$ and $B_{ij}$, with $i,j=1,2,\cdots,n$. Let $A^{ij}$ denote the inverse of $A_{ij}$, so that $A^{il}A_{lj}=\delta^i_j=A_{jl}A^{li}$. We can use $A_{ij}$ to lower indices and its inverse to raise them; for example, $A^{il}B_{lj}=B^i{}_j$. Here and in the following, the Einstein summation convention is assumed.

Let $\epsilon^{i_1\cdots i_n}$ be the totally antisymmetric tensor, with $\epsilon^{1\cdots n}=1$. Define a new tensor $\tilde\epsilon^{i_1\cdots i_n}=\epsilon^{i_1\cdots i_n}/\sqrt{|\det A|}$. We can use $A_{ij}$ to lower the indices of $\tilde\epsilon^{i_1\cdots i_n}$, and define $\tilde\epsilon_{i_1\cdots i_n}=A_{i_1j_1}\cdots A_{i_nj_n}\tilde\epsilon^{j_1\cdots j_n}$. Then there is a useful property: $$ \tilde\epsilon_{i_1\cdots i_kl_{k+1}\cdots l_n}\tilde\epsilon^{j_1\cdots j_kl_{k+1}\cdots l_n}=(-1)^sk!(n-k)!\delta^{[j_1}_{i_1}\cdots\delta^{j_k]}_{i_k}, $$ where the square brackets $[\,]$ denote antisymmetrization of the indices enclosed by them, and $s$ is the number of negative elements of $A_{ij}$ after it has been diagonalized.

So now the determinant of $A+B$ can be obtained in the following way $$ \det(A+B)=\frac{1}{n!}\epsilon^{i_1\cdots i_n}\epsilon^{j_1\cdots j_n}(A+B)_{i_1j_1}\cdots(A+B)_{i_nj_n} $$ $$ =\frac{(-1)^s\det A}{n!}\tilde\epsilon^{i_1\cdots i_n}\tilde\epsilon^{j_1\cdots j_n}\sum_{k=0}^n C_n^kA_{i_1j_1}\cdots A_{i_kj_k}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$ $$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon^{j_1\cdots j_k}{}_{i_{k+1}\cdots i_n}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n} $$ $$ =\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon_{j_1\cdots j_ki_{k+1}\cdots i_n}B_{i_{k+1}}{}^{j_{k+1}}\cdots B_{i_n}{}^{j_n} $$ $$ =\frac{\det A}{n!}\sum_{k=0}^nC_n^kk!(n-k)!B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$ $$ =\det A\sum_{k=0}^nB_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]} $$ $$ =\det A+\det A\sum_{k=1}^{n-1}B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]}+\det B. $$
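A sanity check of the structure of this expansion: the $k$-sum is $\sum_m e_m(A^{-1}B)$, the elementary symmetric polynomials of the eigenvalues of $A^{-1}B$, so the whole expression equals $\det A\cdot\det(I+A^{-1}B)$. A NumPy sketch (the random symmetric invertible $A$ is an arbitrary choice of test data, and $e_m$ are read off the characteristic polynomial):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Random symmetric invertible A, as assumed in the derivation; generic B.
A = rng.standard_normal((n, n))
A = A + A.T + n * np.eye(n)
B = rng.standard_normal((n, n))

M = np.linalg.inv(A) @ B
# det(I + M) = sum_m e_m(M) collects the k-sum of the expansion above.
# Evaluate it via the characteristic polynomial p(x) = det(xI - M):
# det(I + M) = (-1)^n p(-1).
p = np.poly(M)                      # coefficients of det(xI - M)
det_I_plus_M = (-1) ** n * np.polyval(p, -1.0)

lhs = np.linalg.det(A + B)
rhs = np.linalg.det(A) * det_I_plus_M
assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```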

This reproduces the result for $n=2$. An interesting result for physicists is when $n=4$,

$$\begin{split} \det(A+B)=&\det A+\det A\cdot\text{Tr}(A^{-1}B)+\frac{\det A}{2}\{[\text{Tr}(A^{-1}B)]^2-\text{Tr}(BA^{-1}BA^{-1})\}\\ &+\frac{\det A}{6}\{[\text{Tr}(BA^{-1})]^3-3\text{Tr}(BA^{-1})\text{Tr}(BA^{-1}BA^{-1})+2\text{Tr}(BA^{-1}BA^{-1}BA^{-1})\}\\ &+\det B. \end{split}$$
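The $n=4$ trace formula can be verified numerically; a sketch (the symmetric invertible $A$ is again an arbitrary test choice, and $\text{Tr}(A^{-1}B)=\text{Tr}(BA^{-1})$ by cyclicity):

```python
import numpy as np

rng = np.random.default_rng(2)
# Random symmetric invertible 4x4 A and arbitrary B.
A = rng.standard_normal((4, 4))
A = A + A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4))

M = np.linalg.inv(A) @ B            # A^{-1}B; its traces match Tr(BA^{-1}) etc.
t1 = np.trace(M)
t2 = np.trace(M @ M)
t3 = np.trace(M @ M @ M)
dA, dB = np.linalg.det(A), np.linalg.det(B)

rhs = (dA
       + dA * t1
       + dA / 2 * (t1**2 - t2)
       + dA / 6 * (t1**3 - 3 * t1 * t2 + 2 * t3)
       + dB)
lhs = np.linalg.det(A + B)
assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
```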

Solution 2:

When $n\ge2$, the answer is no. To illustrate, consider $$ A=I_n,\quad B_1=\pmatrix{1&1\\ 0&0}\oplus0,\quad B_2=\pmatrix{1&1\\ 1&1}\oplus0. $$ If $\det(A+B)=f\left(\det(A),\det(B),n\right)$ for some function $f$, you should get $\det(A+B_1)=f(1,0,n)=\det(A+B_2)$. But in fact, $\det(A+B_1)=2\ne3=\det(A+B_2)$ over any field.
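The counterexample is easy to check numerically; a sketch with $n=3$, padding the $2\times2$ blocks with zeros as in the direct sums above:

```python
import numpy as np

n = 3  # any n >= 2 works; the 2x2 blocks are padded with zeros
A = np.eye(n)
B1 = np.zeros((n, n)); B1[0, :2] = [1, 1]   # [[1,1],[0,0]] (+) 0
B2 = np.zeros((n, n)); B2[:2, :2] = 1       # [[1,1],[1,1]] (+) 0

# det(B1) = det(B2) = 0 and det(A) = 1, yet the sums differ:
d1 = np.linalg.det(A + B1)   # 2
d2 = np.linalg.det(A + B2)   # 3
```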

Solution 3:

From the MAA (Mathematical Association of America) there is a general formula here: https://www.maa.org/programs/faculty-and-departments/classroom-capsules-and-notes/determinants-of-sums

There is a proof in the article, but in general: $$ \det(A + B) = \sum_r \sum_{\alpha, \beta} (-1)^{s(\alpha) + s(\beta)} \det(A[\alpha | \beta]) \det(B(\alpha | \beta)),$$ where the outer sum runs over the integers $r=0,\dots,n$, and the inner sum runs over all strictly increasing sequences $\alpha$ and $\beta$ of length $r$ chosen from $1,\dots,n$.

$A[\alpha|\beta]$ is the $r$ by $r$ square submatrix of $A$ lying in rows $\alpha$ and columns $\beta$.

$B(\alpha|\beta)$ is the $(n-r)$-square submatrix of $B$ lying in rows complementary to $\alpha$ and columns complementary to $\beta$.

$s(\alpha)$ is the sum of the integers in $\alpha$.
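The formula translates directly into code; a sketch using NumPy (it uses 0-based index sets, which leaves the sign $(-1)^{s(\alpha)+s(\beta)}$ unchanged, since each of $s(\alpha)$ and $s(\beta)$ shifts by $r$ and the total shift $2r$ is even):

```python
import numpy as np
from itertools import combinations

def det_of_sum(A, B):
    """det(A + B) via the expansion over complementary minors."""
    n = A.shape[0]
    total = 0.0
    for r in range(n + 1):
        for alpha in combinations(range(n), r):
            rows_c = [i for i in range(n) if i not in alpha]
            for beta in combinations(range(n), r):
                cols_c = [j for j in range(n) if j not in beta]
                # (-1)^(s(alpha)+s(beta)); parity is the same for
                # 0-based and 1-based index sets of equal length.
                sign = (-1) ** (sum(alpha) + sum(beta))
                # Empty (r = 0) and full (r = n) minors are 1 by convention.
                dA = np.linalg.det(A[np.ix_(alpha, beta)]) if r > 0 else 1.0
                dB = np.linalg.det(B[np.ix_(rows_c, cols_c)]) if r < n else 1.0
                total += sign * dA * dB
    return total

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert abs(det_of_sum(A, B) - np.linalg.det(A + B)) < 1e-10
```

Note the cost: the double inner sum has $\binom{n}{r}^2$ terms for each $r$, so this is only practical for small $n$; it is a verification of the identity, not an efficient determinant algorithm.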