Understanding a proof related to the Hungarian algorithm

I am looking at the Hungarian algorithm's matrix interpretation (more specifically, the trick needed for steps 3 and 4 on the Wikipedia page, https://en.wikipedia.org/wiki/Hungarian_algorithm) and trying to work through the following proof connected to it.

Theorem

I just have one question about this proof, since it is not explained in the text: why exactly does the matrix $\boldsymbol{E}$ have $b$ independent zeroes (and $\boldsymbol{D}$ have $a$)? I can't see why this holds, so I would be grateful if someone could explain.


There are $a$ rows, and this is indicated by the matrices $C$ and $D$.

There are $b$ columns, and this is indicated by the matrices $C$ and $E$.

They are not being multiplied; the notation used is that of an augmented (block) matrix.

So, $D$ has $a$ zeroes, and $E$ has $b$ zeroes.
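
To picture this, here is one possible block layout consistent with the dimensions described above. This is only my own sketch, since the quoted theorem is not reproduced here, so the exact arrangement may differ:

$$
\left(\begin{array}{c|c}
C_{\,a \times b} & D_{\,a \times \ast} \\ \hline
E_{\,\ast \times b} &
\end{array}\right)
$$

$D$ spans all $a$ rows and $E$ spans all $b$ columns, which is where the counts $a$ and $b$ come from.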

A far simpler proof of what the author is trying to prove: by the pigeonhole principle there can be at most $l=\min(m,n)$ independent zeroes (if there were more than $l$, at least one row or column would contain two of them), and the bound is attained by exhibiting a matrix with $l$ independent zeroes, e.g. a diagonal line of zeroes starting from a corner.
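
To make the pigeonhole bound concrete, here is a small Python sketch (not part of the original argument; the function name `max_independent_zeros` is just for illustration). It finds the largest set of independent zeroes via a standard augmenting-path matching on the zero cells and checks that the diagonal example attains $\min(m,n)$:

```python
from typing import List

def max_independent_zeros(mat: List[List[int]]) -> int:
    """Size of the largest set of zero entries with no two sharing a row
    or column, computed as a maximum bipartite matching on the zero cells."""
    m, n = len(mat), len(mat[0])
    match_col = [-1] * n  # match_col[j] = row currently assigned to column j

    def try_row(i: int, seen: List[bool]) -> bool:
        # Augmenting-path step: try to assign row i to some unclaimed zero column.
        for j in range(n):
            if mat[i][j] == 0 and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or try_row(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    return sum(try_row(i, [False] * n) for i in range(m))

# A 3x5 matrix whose only zeroes lie on a diagonal from the top-left corner.
m, n = 3, 5
mat = [[0 if i == j else 1 for j in range(n)] for i in range(m)]
print(max_independent_zeros(mat), min(m, n))  # both are 3
```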