The meaning of 0% and 100% as opposed to other percentages?

"100%" is equivalent to "all". There is no rounding with "all"; either you get all of something or you don't. If a product advertised itself as "kills all bacteria" and then you found that there were 3 bacteria that it didn't kill, it doesn't matter whether that's 3 out of 10 or 3 out of 28 million; it's not all of them.

Even in ordinary conversation, if your child says "I picked up all the blocks" and you find 1 block left on the floor, you can legitimately say that they did not, in fact, pick up all the blocks. Doesn't matter if there were 10 blocks or 20,000; if there's one left on the floor, they did not pick up 100% of them.

(Similarly, "0%" = "none"; if you say "there are none left" and there's one left, you're wrong, regardless of how many there used to be.)


Rounding percentages is not merely a mathematical operation; how a percentage should be rounded depends heavily on the real-life quantity it represents. In your example, the complementary percentage represents the proportion of bacteria that survive after applying the soap. Let's consider the following examples, without any rounding:

  1. If soap A kills 40% of bacteria and soap B kills 39.99%, the proportions that survive are nearly identical (60% and 60.01%). Therefore A is only slightly better.
  2. If soap A kills 99.99% and soap B kills 99.98% of bacteria, the amount remaining after applying A (0.01%) is half the amount remaining after applying B (0.02%). Therefore A is significantly better.
  3. If soap A kills 100% and soap B kills 99.99% of bacteria, nothing at all remains after applying A (0%), while something still remains after applying B (0.01%). No finite ratio captures that gap, so A is much, much better.

You can see from these examples that the same 0.01% gap behaves very differently across the percentage scale: near the edges of the scale it has far more impact. That is why, for percentages close to an edge of the scale, rounding by even 0.01% can be considered deceptive.
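To make the arithmetic concrete, here is a minimal Python sketch of the three comparisons above (the kill rates are the hypothetical figures from the examples, not real measurements):

    # Hypothetical (kill_A, kill_B) rates from the three examples above.
    pairs = [
        (40.00, 39.99),   # example 1: middle of the scale
        (99.99, 99.98),   # example 2: near the top
        (100.00, 99.99),  # example 3: at the very top
    ]

    for kill_a, kill_b in pairs:
        surv_a = 100.0 - kill_a   # % of bacteria surviving soap A
        surv_b = 100.0 - kill_b   # % of bacteria surviving soap B
        # How many times more bacteria does B leave behind than A?
        ratio = surv_b / surv_a if surv_a > 0 else float("inf")
        print(f"A kills {kill_a}%, B kills {kill_b}%: "
              f"B leaves {ratio:.2f}x as many survivors as A")

The printed ratios (roughly 1.00, 2.00, and inf) correspond to "slightly better," "significantly better," and "much, much better": the same 0.01% gap explodes as you approach the edge of the scale.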


The answers here are correct, but I wanted to give some statistical background on the terms.

When we think about measuring error, we often phrase it in terms of Type I and Type II errors:

  • Type I errors are the "false alarm" errors, or the "boy who cried wolf" errors. They occur when nothing is actually present but the detector triggers anyway (often due to random noise).
  • Type II errors are the "sleeping watchman" errors. These occur when the stimulus is present, but the detector doesn't detect it.

We often tune our systems to balance these two types of error. The more sensitive the detector, the fewer Type II errors we make, but we pay for it with more Type I errors, because the detector also responds to noise. Likewise, we can dull the sensitivity to minimize Type I errors, but that increases Type II errors.

With 0% and 100%, these terms fall apart. If you are looking for "all" or "nothing," there's no way to tune the detector to see none of one type without forcing yourself to deal with tons of the other type of error.
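A small simulation makes this trade-off visible; the signal level, noise spread, and threshold values below are made up purely for illustration:

    import random

    random.seed(0)

    def error_rates(threshold, trials=100_000, signal=1.0, noise_sd=0.5):
        """Estimate Type I (false alarm) and Type II (miss) rates for a
        simple threshold detector on noisy readings."""
        false_alarms = misses = 0
        for _ in range(trials):
            # No stimulus: the reading is pure noise.
            if random.gauss(0.0, noise_sd) > threshold:
                false_alarms += 1
            # Stimulus present: the reading is signal plus noise.
            if random.gauss(signal, noise_sd) <= threshold:
                misses += 1
        return false_alarms / trials, misses / trials

    for threshold in (-1.0, 0.25, 0.5, 0.75, 2.0):
        t1, t2 = error_rates(threshold)
        print(f"threshold {threshold:+.2f}:  Type I {t1:6.1%}   Type II {t2:6.1%}")

At a threshold of -1.0 the misses essentially vanish, but nearly every no-stimulus trial triggers a false alarm; at +2.0 the reverse happens. No setting drives both error rates to zero at once.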

In scientific settings, more numbers are presented (such as confidence intervals), which paint a more complete picture. In advertising, however, nobody uses those figures because they are too technical.
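For instance, here is a rough sketch of the kind of figure a scientific report would give instead of a bare "99.9%": a Wilson score confidence interval for an observed kill rate (the counts are invented for illustration):

    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score confidence interval for a binomial proportion."""
        p_hat = successes / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Invented lab result: 4996 of 5000 bacteria killed (99.92% observed).
    low, high = wilson_interval(4996, 5000)
    print(f"95% CI for the kill rate: {low:.4%} to {high:.4%}")

Notice that even a test in which every single bacterium died would still produce a lower confidence bound below 100%, which is exactly why a careful report avoids the bare word "all."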

As such, terms like "100%" are reserved for subjective claims like a "100% satisfaction guarantee," which specifically means that you can return the product for any reason at all, just by claiming that you were not satisfied.