An unusual type of linear algebra problem

Solution 1:

This expands on my comment above, though it's still not a full answer.

We'll assume that $C$, $a$ and $b$ are all non-negative. For any subset $S$ of rows, let $N(S)$ denote the set of columns having at least one nonzero entry in some row of $S$. Similarly, for a subset $T$ of columns, let $M(T)$ denote the set of rows having at least one nonzero entry in some column of $T$.
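(In code these neighborhoods are easy to compute; the helpers below are my own sketch, mirroring the notation above, and are reused further down.)

```python
import numpy as np

def N(C, S):
    # Columns having at least one nonzero entry in some row of S.
    return np.flatnonzero(np.abs(C[list(S), :]).sum(axis=0) > 0)

def M(C, T):
    # Rows having at least one nonzero entry in some column of T.
    return np.flatnonzero(np.abs(C[:, list(T)]).sum(axis=1) > 0)
```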

One necessary condition (the analogue of the "non-zero permanent" condition in the all-$1$ case) is that for all such $S$ and $T$ we have $$\sum_{i \in S} a_i \leq \sum_{j \in N(S)} b_j$$ $$\sum_{j \in T} b_j \leq \sum_{i \in M(T)} a_i.$$ (In the diagonal case, this reduces to $a_i=b_i$ for all $i$. If all the entries of $C$ are positive, it reduces to $\sum_i a_i = \sum_j b_j$.)
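To spell out why the first inequality is necessary: a nonnegative solution $X$ whose support is contained in that of $C$ has $X_{ij}=0$ whenever $i \in S$ and $j \notin N(S)$, so $$\sum_{i \in S} a_i = \sum_{i \in S} \sum_{j \in N(S)} X_{ij} \leq \sum_{j \in N(S)} \sum_{i} X_{ij} = \sum_{j \in N(S)} b_j,$$ and the second inequality follows from the symmetric argument applied to columns.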

I suspect that this condition is in fact sufficient as well as necessary. This is motivated partly by the all-$1$'s case (where it is sufficient), and partly by some numerical examples I ran using the analogue of the Sinkhorn-Knopp algorithm (a code sketch follows the two steps below): Repeatedly

  1. Scale each row so that its sum equals the required value $a_i$.
  2. Scale each column so that its sum equals the required value $b_j$.
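Here is a minimal sketch of this iteration (the function name `scale_to_margins` is mine; it assumes $a, b > 0$, that every row and column of $C$ has a nonzero entry, and that $\sum_i a_i = \sum_j b_j$):

```python
def scale_to_margins(C, a, b, iters=100, tol=1e-8):
    # Alternate row and column rescaling (Sinkhorn-Knopp style) so the
    # row sums of X approach a and the column sums approach b, while X
    # keeps the zero pattern of C.
    X = np.asarray(C, dtype=float).copy()
    for _ in range(iters):
        X *= (a / X.sum(axis=1))[:, None]  # step 1: set row sums to a
        X *= b / X.sum(axis=0)             # step 2: set column sums to b
        if np.abs(X.sum(axis=1) - a).max() < tol:
            break  # row sums stayed within tol after the column step
    return X
```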

The examples I tried had $a_i=b_i=i$ and $C$ a $50 \times 50$ matrix whose lower-right $k \times k$ block is set to $0$, with the remaining entries drawn uniformly from $[0,1]$.
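The test matrices can be generated like this (the helper name and the fixed seed are my own choices):

```python
def zeroed_corner_example(n=50, k=15, seed=0):
    # n x n matrix, entries uniform on [0, 1], with the lower-right
    # k x k block set to 0.
    rng = np.random.default_rng(seed)
    C = rng.uniform(size=(n, n))
    C[n - k:, n - k:] = 0.0
    return C
```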

If $k=15$, then the above condition fails for $S=\{36,37,\dots,50\}$: here $N(S)=\{1,\dots,35\}$, and $\sum_{i=36}^{50} a_i = 645 > 630 = \sum_{j=1}^{35} b_j$. As expected, the algorithm failed to converge. If $k=14$, the algorithm converged fairly rapidly on the examples I tried (all row sums were within $10^{-8}$ of the correct values within $100$ iterations).
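Putting the sketches together reproduces both behaviors (indices are zero-based, so rows $36,\dots,50$ become `range(35, 50)`):

```python
a = b = np.arange(1.0, 51.0)            # a_i = b_i = i
C = zeroed_corner_example(k=15)
S = list(range(35, 50))                 # rows 36..50
print(a[S].sum(), b[N(C, S)].sum())     # 645.0 vs 630.0: condition fails

X = scale_to_margins(zeroed_corner_example(k=14), a, b)
print(np.abs(X.sum(axis=1) - a).max())  # small if converged, as observed above
```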