Why do we say "lowest common denominator" when we mean "greatest common divisor"?

Solution 1:

Because people think of something as "low" and "common", both of which it is, and then get themselves mixed up and say "lowest common denominator", which is not what they mean. (This is vaguely similar to the way people say "steep learning curve" based on informal ideas of steep surfaces being difficult to climb, without thinking carefully about what a learning curve is and what steepness actually means for one.)

In short, the typical usage of "lowest common denominator" is a logical mistake. As for why it happens, my guess is that it's brought about by the desire (rightly) to use the terms "low" and "common", and the temptation to use (incorrectly) the technical-sounding phrase "lowest common denominator". That, and the fact that one has seen others use the phrase in that sense!

Using something that's actually logically right — like "highest common factor" — sounds positive rather than negative, so it is used less often. (Although, in your HTML example, you could say something like "largest common subset of features" and be both right and understood.)


That's the answer; what follows is an explanation of the literal meaning.

To take the example of TV shows — because the citation given on Wiktionary is "Reality TV really is appealing to the lowest common denominator in audiences" — it is often said of low-quality populist shows that they are dumbed down, aimed at the "lowest common denominator", in order to cater to a large population. In this case, it is true that the quality may be low, and that the intention is to make something whose acceptability is common to that large population. But the resulting quality of the show is actually the greatest common factor! It is at the greatest level of quality that is still common to everyone (that's why it's low: it needs to be common). If they were actually picking the lowest common quality, it would be even lower, or zero.

[Mathematical background: In mathematics, there are two related concepts. The greatest (or highest) common divisor/factor of two integers, say 15 and 24, is the greatest factor common to both, in this case 3. (Note that 3 is lower than 15 and 24.) The lowest common multiple of the integers is the smallest multiple common to both, in this case 120. (Note that 120 is greater than 15 and 24.) So the lowest common multiple is greater than the numbers, and the greatest common factor is lower than the numbers. This alone should be enough to suggest that if what you want to talk of is something low, you must say "greatest common…", not "lowest common…". But most people don't think it through that carefully.

Lowest common denominator is another term which is used when adding fractions: if you're adding the fractions 1/15 and 1/24, you convert them both to the lowest common denominator, which is the lowest common multiple of the denominators. In this case, the lowest common denominator is 120, and you write 1/15 as 8/120 and 1/24 as 5/120, so that you can add them: 1/15 + 1/24 = 8/120 + 5/120 = 13/120.
Even in this case, the lowest common denominator (120) is larger than the original denominators (15 and 24).]
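
As a quick sanity check on the numbers in the bracketed background above, here is a short, illustrative Python sketch (standard library only) that reproduces them: the greatest common factor of 15 and 24, their lowest common multiple, and the sum 1/15 + 1/24 over the lowest common denominator.

    from math import gcd
    from fractions import Fraction

    a, b = 15, 24
    g = gcd(a, b)            # greatest common factor: 3 (lower than both numbers)
    l = a * b // g           # lowest common multiple: 120 (higher than both numbers)
    print(g, l)              # 3 120

    # Adding 1/15 and 1/24 over the lowest common denominator (also 120):
    total = Fraction(1, a) + Fraction(1, b)
    print(total)             # 13/120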

The actual "lowest common factor" of any set of integers is 1, irrespective of what the numbers are, so it's not a very useful term. Similarly, the literal meaning of "lowest common denominator", when used in its usual context, refers to a quality that is always zero (or the minimum possible) irrespective of the population: the lowest common denominator among high-school graduates, the lowest common denominator among the whole population, and the lowest common denominator among people with PhDs would all be the same: abysmal.

Edit: Someone on Wikipedia has already explained this:

In common non-mathematical usage, the term "least common denominator" is often misused for the concept of the greatest common divisor. For example, a graphic toolkit which rendered features like lines and polygons into either Microsoft VML or standard SVG might choose to implement only the maximum set of graphic attributes common to both destination formats, which is an easy analogy to the concept of the greatest common divisor (The greatest common divisor of 12 and 18 is 6, which is the largest factor evenly dividing both numbers). If the systems being compared are very similar, then the common functionality can be a powerful subset (as the greatest common divisor of 375 and 250 is 125), while if the systems are very dissimilar the common capabilities might be very minimal (as the greatest common divisor of 270 and 98 is only 2). With additional systems (or numbers), the set of features common to all cannot grow and often shrinks (likewise for finding the greatest common divisor for a series of numbers).

This approach of making use of only the greatest subset of function common to all supported systems is often disparaged when the common feature set is sparse or weak (by analogy, having a small "greatest common divisor"). In this context colloquial usage has conflated the concept of "greatest common divisor" with the familiar sounding jargon of "least common denominator", which seems to emphasize smallness of overlap through the word "least", but actually refers to a different and inappropriate mathematical concept.
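
The toolkit analogy in the quoted passage can be sketched with set intersections. The feature names below are made up for illustration, not taken from any real VML or SVG feature list.

    # Hypothetical feature sets for two rendering targets, plus an older third one.
    svg = {"lines", "polygons", "gradients", "opacity", "filters"}
    vml = {"lines", "polygons", "gradients", "opacity"}
    legacy = {"lines", "polygons"}

    common = svg & vml
    print(sorted(common))    # sizeable overlap between two similar systems
    common &= legacy
    print(sorted(common))    # ['lines', 'polygons'] -- another target can only shrink it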

Edit 2: In case it helps, below is a vague picture illustrating what I mean. (Anyone please feel free to make a better image and replace this.)

[Image: Highest common factor, etc.]

When most people say "lowest common denominator", they mean the second horizontal line, near the bottom: it's very low, because it must be common to so many, but it's also not zero. If you take "lowest common denominator" in its mathematical sense of being an lcm (of some set of existing denominators), then it's actually higher than the elements of the set, and is the highest line. If you use "denominator" to mean just "trait", then the "lowest common denominator" is literally the lowest common trait, which is the very bottom-most line: it's always zero (or 1 in mathematics, if you're talking of factors), independent of your original population, and even lower than it needs to be.

Solution 2:

A “common denominator”, in the extended usage, is a trait or theme that all elements of a group have in common. The “lowest common denominator” is the denominator that is most practical, or easiest, to use when summing or comparing fractions. In the analogy, the “lowest common denominator” is the trait or theme that is most practical or easiest to take advantage of or use.

The suggestion to use “greatest common factor” instead kind of misses the point that the analogy is that people or web browsers (or whatever) are fractions, not integers. Sure, if we wanted to make the analogy that web browsers are integers, then speaking of the “greatest common factor” would make sense. But that analogy has no currency, so the suggestion is dead on arrival.

Solution 3:

I don't think it's being misused. To add several fractions, you convert them to a common denominator. To add 1/2 and 1/3, you could convert them to 6/12 and 4/12 respectively; their sum is obviously 10/12, or 5/6.

The lowest common denominator is the lowest to which all of your addends can be converted.
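
In this example the lowest common denominator is 6. A minimal Python sketch of the conversion (variable names are mine, chosen only for illustration):

    from math import gcd
    from fractions import Fraction

    a, b = Fraction(1, 2), Fraction(1, 3)
    lcd = a.denominator * b.denominator // gcd(a.denominator, b.denominator)
    print(lcd)                                              # 6
    print(f"{a.numerator * (lcd // a.denominator)}/{lcd}")  # 3/6
    print(f"{b.numerator * (lcd // b.denominator)}/{lcd}")  # 2/6
    print(a + b)                                            # 5/6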

A concept (or argument or piece of entertainment) has to be expressed in a way that is accessible to every member of the intended audience. This is labeled, by way of analogy, the "common denominator" -- and whatever expression is easiest, or most denigrated, is the "lowest" common denominator.

The only real error comes in when someone says reducing to the lowest common denominator. No reduction is involved: when you convert the fraction to the l.c.d., you aren't changing its value, only the way it's written.

Solution 4:

It's an analogy.

When comparing disparate things, you can't use simple, direct comparisons. Whether that comparison is between individuals, between distinct populations or, in the case of your example, between different technologies, there is no apples-to-apples comparison. So how can you compare apples to oranges and get away with it?

Well, we all learned in school that you can compare things that are different not just in magnitude but in substance. Rational numbers with different denominators can be rewritten as numbers with a common denominator, and can then be treated as things which differ only in magnitude (specifically, in the magnitude of the numerator). The lowest (or least) common denominator is the number (or term) that expresses the greatest commonality between the elements to be compared; the once-disparate elements have been distilled down to the simplest expression of their sameness.
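
A tiny worked example of that idea, with fractions chosen only for illustration: rewrite 2/3 and 3/5 over their lowest common denominator, 15, and comparing the fractions becomes comparing numerators.

    from fractions import Fraction

    x, y = Fraction(2, 3), Fraction(3, 5)
    lcd = 15                                      # lowest common multiple of 3 and 5
    print(x.numerator * (lcd // x.denominator))   # 10, i.e. 10/15
    print(y.numerator * (lcd // y.denominator))   # 9,  i.e.  9/15
    print(x > y)                                  # True: 10/15 > 9/15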

Solution 5:

Because people mix up the concepts, and use the one that sounds more like what they want to express. The lowest common denominator sounds like it's smaller than the greatest common divisor, when in fact it's the opposite.

It's not a very good expression to use outside mathematics, as it sounds better when it's used incorrectly.