How do you pronounce numbers written in different bases? [closed]
The decimal (base 10) number "2" can also be represented as the binary (base 2) number "10".
Let's use binary "10" (equivalent to decimal "2") as an example. I could see a few different ways to go here. Assume that the base doesn't need to be specified, and is understood from the context of the conversation (e.g. two programmers talking about memory addresses would understand that they were using hexadecimal).
It could be "ten", since that is what it looks like. One might even argue that ten, as a concept, refers to a one followed by a zero irrespective of the radix. In other words, ten means "a quantity exactly equal to the base it's represented in".
On the other hand, you could argue that "ten" refers specifically to the quantity; in other words, "1010" in binary, "10" in decimal, and "12" in octal would all be pronounced "ten," and "10" in binary should be pronounced "two".
So how would you pronounce the following numbers?
"10" binary ("2" decimal)
"10" octal ("8" decimal)
"10" hexadecimal ("16" decimal)
"1F" hexadecimal ("31" decimal)
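(For reference, the decimal equivalents in parentheses can be checked with Python's built-in `int`, which parses a numeral string in an explicit base:)

```python
# Check the decimal equivalents of the numerals above.
for digits, base in [("10", 2), ("10", 8), ("10", 16), ("1F", 16)]:
    print(f'"{digits}" in base {base} is {int(digits, base)} in decimal')
```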
I pronounce your examples "ten", "ten", "ten", and "one ef". I count in hexadecimal, "One, two, three, four, five, six, seven, eight, nine, ay, bee, see, dee, ee, ef, ten, eleven, twelve, ..., one-ee, one-ef, twenty, twenty-one, ..." etc.
I've heard some people argue that, since a "number" is a concept independent of the numerals and radix used to represent it, we should read binary 10 as "two", octal 12 as "ten", etc., because that is the concept these strings of digits represent. I was on another forum once where several people were quite adamant about this, and insisted that anyone who read octal 10 as "ten" was demonstrating profound mathematical ignorance, corrupting the youth, and so forth. I disagree with that idea on two grounds: one philosophical, one practical.
On the philosophical, who says that "thirteen" means "this many: X X X X X X X X X X X X X" and not "the string of digits consisting of a one followed by a three"? There are many possible representations of "this many fingers", including decimal 13, octal 15, Roman numerals XIII, Hebrew symbols yod-gimel, etc etc. Who says that the only correct way to read all these representations is by the word "thirteen"? Are French people "wrong" because they read it as "treize" rather than as "thirteen"? If it's linguistic chauvinism to say that the French are wrong to use French words rather than English words, perhaps it is "radix chauvinism" to say that names derived from the decimal number system are "right" and names derived from any other number system are "wrong". Need I point out that "thirteen" is obviously derived from a string of digits, "1" and "3". To look at (octal) "15" and read it "thirteen" is clearly imposing a decimal-based name on an octal representation.
On more practical terms, trying to read numbers in other bases using names derived from their decimal equivalents quickly becomes wildly impractical. If you insist that octal 10 be read "eight", then presumably we keep counting 11=nine, 12=ten, 13=eleven, 14=twelve, ... 20=sixteen, 21=seventeen, ... 100=sixty-four, ... etc. Imagine trying to read off a series of octal numbers to another person for him to copy. Would you really look at octal 34702 and read it "fourteen thousand seven hundred eighty-six", and then expect the other person to hear this and type in "34702"? Such a process would be very difficult and error-prone. It makes a lot more sense to read it "three four seven zero two" or "thirty-four thousand seven hundred two".
Once you grant that when numbers exceed two or three digits it is most natural and practical to read them using the digits given, and not to use the same words you would use for "this many" in decimal, it follows that for consistency we should always do this. If I read octal 12 as "ten" but octal 1000 as "one thousand", then we would have to define some cut-off point where we transition from "decimal names" to "octal names". As such a cut-off point would be arbitrary, it would likely be confusing. Better to just consistently use the natural octal reading.
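The arithmetic behind that octal example is easy to verify in, say, Python (a small check of my own, not part of the argument above):

```python
# Octal 34702 really does denote decimal 14786, so the "decimal name"
# reading ("fourteen thousand seven hundred eighty-six") hides every digit.
value = int("34702", 8)   # parse the numeral as base 8
print(value)              # 14786
print(oct(value))         # 0o34702
```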
By convention:
- "one-zero binary" (people rarely say "base 2" in my experience)
- "octal one-zero" or "one-zero octal"
- "hex one-zero"
- "hex one-eff"
If you say "hex ten" to a developer, they will mentally translate it to "hex one-zero" anyway, so you're better off saying "hex one-zero" in the first place.
In general, developers tend to
- pronounce every digit in bases other than decimal
- pronounce groups of four in binary when unambiguous (e.g. "1011" is said "ten-eleven", but "1000" is pronounced "one-zero-zero-zero")
That being said, 0xdeadbeef is always pronounced "dead beef." But then, you've entered the realm of hexspeak.
Pronouncing the hexadecimal letters A through F
The default pronunciation for the letters is simply their English names: "ay, bee, see, dee, ee, eff."
When reading off a hex MAC address, I have both used and heard the NATO phonetic alphabet used for the letters A through F. Since both the speaker and the listener know that a hex string is coming, they will pronounce
1A-48-0F-CF-3B-24
as "One alpha, four eight, zero foxtrot, charlie foxtrot, three bravo, two four."
With hexadecimal numbers, I have also heard a somewhat simplified phonetic alphabet.
A=Abel
B=Baker (or Boy)
C=Charlie
D=Dog
E=Easy
F=Fox
So the above string would be pronounced "One abel, four eight, zero fox, charlie fox, three baker, two four."
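This octet-by-octet reading can be sketched in a few lines of Python. (The helper name `speak_mac` and the exact output format are my own illustration, not any standard tool; it uses the NATO alphabet for A-F and plain English names for the decimal digits.)

```python
# Sketch: read a dash-separated hex MAC address aloud, symbol by symbol,
# using the NATO phonetic alphabet for the letters A-F.
NATO = {"A": "alpha", "B": "bravo", "C": "charlie",
        "D": "delta", "E": "echo", "F": "foxtrot"}
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def speak_mac(mac: str) -> str:
    """Spell out each octet of a dash-separated MAC address."""
    octets = mac.upper().split("-")
    return ", ".join(
        " ".join(NATO.get(c, DIGITS.get(c, c)) for c in octet)
        for octet in octets
    )

print(speak_mac("1A-48-0F-CF-3B-24"))
# -> one alpha, four eight, zero foxtrot, charlie foxtrot, three bravo, two four
```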
Last week, I provisioned a modem and had to pronounce the MAC address of said device. I'm an American English speaker and the hearer was Filipino. I pronounced the address using the simplified phonetic alphabet, and he confirmed the address using the NATO phonetic alphabet.
There are other spelling alphabets used around the world but the NATO phonetic alphabet is the most common.
Pronouncing individual digits versus pronouncing as if they comprised a decimal number
When I have taught computer science classes, I would always pronounce a binary number like 1010 as "one oh one oh, base 2." Pronouncing it as if it were a decimal number, "one thousand and ten," seemed to invite confusion. Thus, I would never pronounce 10₂ as "ten, base two" or "ten, binary."
This professor, who teaches cryptography and algorithms, uses a similar convention.
If decimal, just say the number (with the word "decimal" if we're mixing contexts)
If any other base, read the digits and say the name of the base
So I might say, "therefore the answer is one-zero-one binary, or 5 decimal."
I would never call 10 hex "ten". Nor would I call 10 binary "two."
Said professor pronounces, "one zero one," while I might shorten and say "one oh one."
(And please click through to the picture of the T-shirt. It's only funny if you misread "one zero base 2" as "ten.")
One HBO Silicon Valley episode
I taught Computer Science back in the day when mighty dinosaurs ruled the earth, punch cards were on the wane, and floppy disks were still floppy.
Acknowledging that language evolves, here is a link to a blog entitled "How to pronounce hexadecimal", dated 2015. The blog author is Bzarg.
It includes a dialogue from an HBO series, Silicon Valley:
Kid: Here it is: Bit… soup. It’s like alphabet soup, BUT… it’s ones and zeros instead of letters.
Bachman: {silence}
Kid: ‘Cause it’s binary? You know, binary’s just ones and zeroes.
Bachman: Yeah, I know what binary is. Jesus Christ, I memorized the hexadecimal times tables when I was fourteen writing machine code. Okay? Ask me what nine times F is. It’s fleventy-five. I don’t need you to tell me what binary is.
Bzarg goes on to propose a pronunciation convention.
For the hex digits in the units place, pronounce the usual 0-9 and "ay, bee, see, dee, ee, eff."
For the hex digits in the sixteens place, the author proposes these (based on "fleventy"): "atta, bibbity, city, dickety, ebbity, fleventy."
For four hex digits, Bzarg suggests separating each two digits by "bitey."
So the MAC address above might be:
- 1A-48 = "abteen bitey forty-eight"
- 0F-CF = "eff bitey city-eff"
- 3B-24 = "thirty-bee bitey twenty-four"
Despite Bzarg's heroic efforts (and an echo from http://www.xanthir.com/b4ej0), I have not observed this in practice, even once.
How to pronounce your examples
I think your examples depend on whether you are doing math or doing programming.
In math, numbers in other bases are written with the base following as a subscript. (Please see https://math.stackexchange.com/a/638782/5220 for 1100₂ = C₁₆.) Math also routinely talks about logarithms in base 2, base 10, or natural logarithms, written with the base following the log as a subscript:
log₂ (pronounced "log base 2")
log₁₀ (pronounced "log base 10")
ln (pronounced "natural log")
So I would pronounce your numbers in a math context as:
- 10₂, pronounced "one zero base two"
- 10₈, pronounced "one zero base eight"
- 10₁₆, pronounced "one zero base sixteen"
- 1F₁₆, pronounced "one eff base sixteen"
In programming, there is a convention that literals in other bases are preceded with 0b, 0o, or 0x for binary, octal, or hexadecimal, respectively. (See the Python proposal "Integer Literal Support and Syntax" at https://www.python.org/dev/peps/pep-3127/.)
The proposal is that:
octal literals must now be specified with a leading "0o" or "0O" instead of "0";
binary literals are now supported via a leading "0b" or "0B"; and
provision will be made for binary numbers in string formatting.
(Python, C, C++, and Java already precede hexadecimal literals with "0x").
Their motivation was:
The default octal representation of integers is silently confusing to people unfamiliar with C-like languages. It is extremely easy to inadvertently create an integer object with the wrong value, because '013' means 'decimal 11', not 'decimal 13', to the Python language itself, which is not the meaning that most humans would assign to this literal.
(Note the Pythonic displeasure with the C and C++ convention that writes 13₈ as 013.)
So with this in mind, if you were pronouncing your examples in a programming context, the literal is preceded by a base marker and the pronunciation follows suit:
- 10₂, written 0b10, pronounced "binary one zero" or "binary one oh"
- 10₈, written 0o10 in Python or 010 in C++, pronounced "octal one zero"
- 10₁₆, written 0x10, pronounced "hex one zero"
- 1F₁₆, written 0x1F, pronounced "hex one foxtrot" or "hex one eff" or "hex one fox"
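In Python specifically, these base-marked literals (and PEP 3127's motivating pitfall) can be checked directly:

```python
# Each base-marked literal denotes the same value as its decimal reading.
assert 0b10 == 2       # "binary one zero"
assert 0o10 == 8       # "octal one zero"
assert 0x10 == 16      # "hex one zero"
assert 0x1F == 31      # "hex one eff"

# PEP 3127's motivation: a bare leading zero is easy to misread as decimal.
assert int("013", 8) == 11   # octal 13 is decimal 11, not 13
print("all literals check out")
```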
In notations other than decimal, always read out the symbols, which is what they are.
Do not even call the individual elements "digits" when the number system is not binary, octal, or decimal, because higher notations also use letters, which makes the word "digit" illogical (though perhaps not technically incorrect).
When we read 'one' in say, hex, we are not referring to a value of unity, only the name of the symbol.
I find the simplest way to pronounce any numeral in any base, using any symbols, is to organize the numeral into groups of three. For example:
123 456 789 abc def in hexadecimal
I'd call this one, two, three - tera; four, five, six - giga; seven, eight, nine - mega; ay, bee, cee - kilo; dee, ee, eff
This method's advantage is that it makes describing a numeral in any base simple and correct:
001 000 000 000 binary is one giga.
020 000 000 decimal is two, zero mega.
a00 000 hexadecimal is ay, zero, zero kilo.
One should be careful, because 123 giga hexadecimal is not one hundred twenty-three giga; it should be viewed as:
(1×10² + 2×10¹ + 3) × 10⁹, all in hexadecimal notation
Another example: 101 giga binary should be viewed as:
(1×10² + 0×10¹ + 1) × 10⁹, all in binary notation
And lastly, for something familiar, 456 giga decimal should be viewed as:
(4×10² + 5×10¹ + 6) × 10⁹, all in decimal notation
This coincides with four hundred fifty-six giga in decimal notation.
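As a rough illustration of this grouping scheme (the helper name `read_grouped`, the separators, and the handling of long inputs are my own, not an established convention):

```python
# Sketch: split a numeral (any base, any symbols) into groups of three from
# the right, then tag each group with a scale word, as proposed above.
SCALES = ["", "kilo", "mega", "giga", "tera"]

def read_grouped(numeral: str) -> str:
    groups = []
    while numeral:
        groups.append(numeral[-3:])   # peel off three symbols from the right
        numeral = numeral[:-3]
    parts = []
    for i, group in reversed(list(enumerate(groups))):
        spoken = ", ".join(group)     # read each symbol individually
        scale = SCALES[i] if i < len(SCALES) else ""
        parts.append(spoken + (f" - {scale}" if scale else ""))
    return "; ".join(parts)

print(read_grouped("123456789abcdef"))
# -> 1, 2, 3 - tera; 4, 5, 6 - giga; 7, 8, 9 - mega; a, b, c - kilo; d, e, f
```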
The quantity these numerals represent is another story. For example, 144 (one, four, four) is not a number; it is a numeral that could represent a number. Whereas the words one, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen, sixteen, seventeen, eighteen, nineteen, twenty, thirty, forty, etc. all name specific numbers representing a specific quantity in English.
In addition, one hundred forty-four and a gross (twelve times twelve) both represent the same number. This same number could be represented by fourteen-ten and four, or seven-twenty and four. I could go on and on. Of course, all this requires some knowledge of multiplication and addition, except for a gross.
One could say that 10 (one, zero) binary is a number, and they would be correct in mathematics. However, "one zero binary" is jargon and means nothing in common English. It must be translated to "two" if you wish to be understood.