In combinatorics, why is asking the opposite problem often easier?

In combinatorics, answering “and”-style questions is easy because it amounts to multiplication. Multiplication is convenient: you can cancel common factors between numerators and denominators and use the binomial coefficient (the “choose” function). Also, any time a 1 or 0 comes up, the operation becomes trivial.

However, answering “or”-style questions is harder, since you have to add the counts, work out where you have double-counted, and subtract the overlap.

De Morgan's law $\neg (a \vee b) = (\neg a \wedge \neg b)$ lets you transform an “or” problem into a “not and” problem, which is easier.
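As a quick sanity check (a sketch added for illustration, not part of the original question; the toy universe and the events `a`, `b` are arbitrary), a few lines of Python confirm that counting an “or” event by inclusion-exclusion agrees with counting the De Morgan complement and subtracting it from the total:

```python
from itertools import product

# Toy universe: all strings of length 3 over {'x', 'y'}.
universe = [''.join(s) for s in product('xy', repeat=3)]

a = lambda s: s[0] == 'x'      # event a: first letter is 'x'
b = lambda s: s[-1] == 'y'     # event b: last letter is 'y'

# "or" counted directly with inclusion-exclusion: |A| + |B| - |A and B|
count_or = (sum(a(s) for s in universe)
            + sum(b(s) for s in universe)
            - sum(a(s) and b(s) for s in universe))

# De Morgan: |A or B| = |universe| - |not A and not B|
count_via_complement = len(universe) - sum((not a(s)) and (not b(s)) for s in universe)

print(count_or, count_via_complement)  # both 6
```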


Put it this way: every counting problem can be posed in two equivalent ways, each a simple complement of the other. There may be a disparity: one phrasing is significantly simpler than the other. But they are immediately equivalent; you can ask either question in place of the other.

So the issue becomes: why are people choosing to pose the less simple version to you? I think the reason is probably pedagogical: looking at the complement problem is an effort-inexpensive but extremely valuable tool for solving combinatorial problems. It's a lesson worth learning that you should look at the complement problem, right off the bat, to see if it's any easier.

There might be deeper reasons, perhaps some correlation between the harder of the two phrasings usually having superior aesthetics, but this kind of question is not exactly the wheelhouse of a mere mathematician such as myself.


You have basically answered this, in your dot points.

I would articulate it as follows:

Answering the complementary version of a combinatorics question is often easier if the original question contains (directly or indirectly) the constraint "at least one".

In such cases, obtaining the answer directly typically requires enumerating a very large number of combinations, whereas the complementary version contains only a small number of possibilities to enumerate and is thus more amenable to direct calculation.

For example, the birthday problem can be worded as, “Given $N$ people, what is the likelihood that at least one pair of people shares a birthday?”

The complementary version is, “Given $N$ people, what is the likelihood that no pair of people has a common birthday?”
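A minimal sketch in Python (assuming 365 equally likely birthdays; the function name `p_shared_birthday` is just illustrative) shows how the complementary “all birthdays distinct” formulation reduces the computation to a simple product:

```python
from math import prod

def p_shared_birthday(n, days=365):
    """Probability that at least one pair among n people shares a birthday,
    computed via the complementary event (all n birthdays distinct)."""
    # P(all distinct) = (365/365) * (364/365) * ... * ((365-n+1)/365)
    p_all_distinct = prod((days - k) / days for k in range(n))
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 4))  # ~0.5073, the classic 23-person threshold
```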

Other math stackexchange questions where you can see this phenomenon:

  • How many combinations contain at least two As

  • Combinations of pizza toppings with at least one vegetable and at least one meat.

  • Combinations including "at most"


In combinatorics, when we know the total number of possibilities, counting the possibilities satisfying a condition C and counting those violating it are equivalent problems: knowing one, you get the other by simple subtraction from the known total.

Often, counting the possibilities satisfying condition C directly is difficult. If C can be broken up into disjoint cases, you count each case and add the results.

So it boils down to whether the condition C or its negation (complement) involves more cases; choose the one involving fewer. In an experiment counting how many outcomes of tossing 10 coins produce at least 2 heads, the possibilities are 2 heads, 3 heads, and so on up to 10 heads. The complement, however, is just 0 heads or 1 head.

It is clear that the second one is easier.
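A small Python check (illustrative only, not part of the original answer) confirms that the nine-case direct count and the two-case complement count agree:

```python
from math import comb

n = 10
# Direct count: sum over the nine cases "exactly k heads" for k = 2..10
direct = sum(comb(n, k) for k in range(2, n + 1))

# Complement count: all 2**n outcomes minus the two cases "0 heads" and "1 head"
via_complement = 2**n - (comb(n, 0) + comb(n, 1))

print(direct, via_complement)  # both 1013
```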

The key point is how many (mutually exclusive, i.e. disjoint) cases you have to analyze. This tells you which is easier: the original problem or the complementary one.


The set of all possible combinatorics problems is symmetric with respect to taking the complement. But the problems we encounter are not randomly chosen from that set; usually they are cooked-up problems that are challenging enough to be interesting but still solvable. A large class of such problems are those that simplify when you take the complement.