Octal number literals: When? Why? Ever? [closed]

I have never used octal numbers in my code nor come across any code that used them (hexadecimal and bit twiddling notwithstanding).

I started programming in C/C++ around 1994, so maybe I'm too young for this? Does older code use octal? C includes support for these literals by prefixing them with a 0, but where is the code that actually uses these base-8 number literals?


I recently had to write network protocol code that accesses 3-bit fields. Octal comes in handy when you want to debug that.

Just for effect, can you tell me what the 3-bit fields of this are?

0x492492

On the other hand, this same number in octal:

022222222

Now, finally, in binary (in groups of 3):

010 010 010 010 010 010 010 010
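
To make this concrete, here is a minimal sketch (plain C99; the 24-bit word is just the example value above, not any real protocol field layout) showing how octal masks and shifts pull those 3-bit fields back out:

    #include <stdio.h>

    int main(void)
    {
        /* The 24-bit value from the example above, written as an
           octal literal; it is the same number as 0x492492. */
        unsigned int word = 022222222;

        /* Walk the eight 3-bit fields from most significant to least
           significant; masking with 07 keeps exactly one octal digit. */
        for (int shift = 21; shift >= 0; shift -= 3)
            printf("%o ", (word >> shift) & 07);
        printf("\n");   /* prints: 2 2 2 2 2 2 2 2 */

        return 0;
    }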

The only place I come across octal literals these days is when dealing with file permission bits on Linux. These are normally represented as three octal digits, where each digit holds the permissions for the file owner, the group, and other users, respectively.

e.g. 0755 (also just 755 with most command-line tools) means the file owner has full permissions (read, write, execute), while the group and other users have only read and execute permissions.

Representing these bits in octal makes it easier to figure out what permissions are set. You can tell at a glance what 0755 means, but not 493 or 0x1ed.
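
As a small sketch (assuming a POSIX system; the file name is just a placeholder), the same octal literal goes straight into code:

    #include <stdio.h>
    #include <sys/stat.h>   /* chmod() -- POSIX */

    int main(void)
    {
        /* 0755: rwx for the owner, r-x for the group and for others. */
        if (chmod("example.txt", 0755) != 0)
            perror("chmod");

        /* Same value three ways -- only the octal form reads as
           permission bits: prints "0755 = 493 = 0x1ed". */
        printf("%#o = %d = %#x\n", 0755, 0755, 0755);
        return 0;
    }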


From Wikipedia

At the time when octal originally became widely used in computing, systems such as the IBM mainframes employed 24-bit (or 36-bit) words. Octal was an ideal abbreviation of binary for these machines because eight (or twelve) digits could concisely display an entire machine word (each octal digit covering three binary digits). It also cut costs by allowing Nixie tubes, seven-segment displays, and calculators to be used for the operator consoles; where binary displays were too complex to use, decimal displays needed complex hardware to convert radixes, and hexadecimal displays needed to display letters.

All modern computing platforms, however, use 16-, 32-, or 64-bit words, with eight bits making up a byte. On such systems three octal digits would be required, with the most significant octal digit inelegantly representing only two binary digits (and in a series the same octal digit would represent one binary digit from the next byte). Hence hexadecimal is more commonly used in programming languages today, since a hexadecimal digit covers four binary digits and all modern computing platforms have machine words that are evenly divisible by four. Some platforms with a power-of-two word size still have instruction subwords that are more easily understood if displayed in octal; this includes the PDP-11. The modern-day ubiquitous x86 architecture belongs to this category as well, but octal is almost never used on this platform.

-Adam


I have never used octal numbers in my code nor come across any code that used it.

I bet you have. According to the standard, numeric literals that start with a zero are octal. This includes, trivially, 0. So every time you have used or seen a literal zero, it has been octal. Strange but true. :-)
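
A tiny sketch of that rule, and of its occasionally surprising side effect:

    #include <stdio.h>

    int main(void)
    {
        /* A leading zero makes an integer literal octal in C, so 010
           is eight, not ten -- and a bare 0 is itself an octal literal. */
        printf("%d %d %d\n", 0, 010, 0x10);   /* prints: 0 8 16 */
        return 0;
    }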


Commercial aviation uses octal "labels" (basically message-type IDs) in the venerable ARINC 429 bus standard, so being able to specify label values in octal when writing code for avionics applications is nice...
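
For illustration only (the label names and values below are made-up placeholders, not taken from any real ARINC 429 label table), octal literals line up naturally with how such labels are conventionally written:

    #include <stdio.h>

    /* Hypothetical label constants. ARINC 429 labels are 8 bits and
       conventionally listed in octal (000-377); these values are
       placeholders for the sketch. */
    enum {
        LABEL_EXAMPLE_A = 0203,
        LABEL_EXAMPLE_B = 0310
    };

    int main(void)
    {
        /* %03o keeps the conventional three-digit octal presentation. */
        printf("label %03o (decimal %d)\n", LABEL_EXAMPLE_A, LABEL_EXAMPLE_A);
        return 0;
    }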