Question

I have never used octal numbers in my code, nor have I come across any code that uses them (hexadecimal and bit twiddling notwithstanding).

I started programming in C/C++ around 1994, so maybe I'm too young for this? Does older code use octal? C includes support for them via a leading 0, but where is the code that uses these base-8 number literals?


Solution

I recently had to write network protocol code that accesses 3-bit fields. Octal comes in handy when you want to debug that.

Just for effect, can you tell me what the 3-bit fields of this are?

0x492492

On the other hand, this same number in octal:

022222222

Now, finally, in binary (in groups of 3):

010 010 010 010 010 010 010 010
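
Each octal digit maps straight onto one 3-bit field, so a %o in a debug printf decodes the fields for free. A minimal C sketch of the idea, using the value from above:

    #include <stdio.h>

    int main(void) {
        unsigned v = 0x492492;            /* same value as 022222222 */
        printf("hex:   %x\n", v);
        printf("octal: %o\n", v);         /* one digit per 3-bit field */
        /* walk the eight 3-bit fields from most to least significant */
        for (int i = 7; i >= 0; i--)
            printf("field %d: %o\n", i, (v >> (3 * i)) & 07);
        return 0;
    }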

OTHER TIPS

The only place I come across octal literals these days is when dealing with the permission bits on files in Linux, which are normally represented as 3 octal digits, where each digit represents the permissions for the file owner, group and other users respectively.

e.g. 0755 (also just 755 with most command line tools) means the file owner has full permissions (read, write, execute), and the group and other users just have read and execute permissions.

Representing these bits in octal makes it easier to figure out what permissions are set. You can tell at a glance what 0755 means, but not 493 or 0x1ed.
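
A quick C check makes the point, printing the same mode in all three bases:

    #include <stdio.h>

    int main(void) {
        int mode = 0755;                /* rwxr-xr-x */
        printf("octal:   %o\n", mode);  /* 755 */
        printf("decimal: %d\n", mode);  /* 493 */
        printf("hex:     %x\n", mode);  /* 1ed */
        return 0;
    }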

From Wikipedia

At the time when octal originally became widely used in computing, systems such as IBM mainframes employed 24-bit (or 36-bit) words. Octal was an ideal abbreviation of binary for these machines because eight (or twelve) digits could concisely display an entire machine word (each octal digit covering three binary digits). It also cut costs by allowing Nixie tubes, seven-segment displays, and calculators to be used for the operator consoles, where binary displays were too complex to use, decimal displays needed complex hardware to convert radixes, and hexadecimal displays needed to display letters.

All modern computing platforms, however, use 16-, 32-, or 64-bit words, with eight bits making up a byte. On such systems three octal digits would be required, with the most significant octal digit inelegantly representing only two binary digits (and in a series the same octal digit would represent one binary digit from the next byte). Hence hexadecimal is more commonly used in programming languages today, since a hexadecimal digit covers four binary digits and all modern computing platforms have machine words that are evenly divisible by four. Some platforms with a power-of-two word size still have instruction subwords that are more easily understood if displayed in octal; this includes the PDP-11. The modern-day ubiquitous x86 architecture belongs to this category as well, but octal is almost never used on this platform.
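
To see the mismatch described above, compare an 8-bit and a 16-bit all-ones value in octal (a small illustrative snippet):

    #include <stdio.h>

    int main(void) {
        unsigned char  b = 0xFF;    /* 8 bits  */
        unsigned short w = 0xFFFF;  /* 16 bits */
        printf("%o\n", b);          /* 377: leading digit covers only 2 bits */
        printf("%o\n", w);          /* 177777: leading digit covers only 1 bit */
        return 0;
    }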

-Adam

I have never used octal numbers in my code, nor have I come across any code that uses them.

I bet you have. According to the standard, numeric literals which start with zero are octal. This includes, trivially, 0. Every time you have used or seen a literal zero, this has been octal. Strange but true. :-)
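
The flip side of that rule is a classic trap: pad a literal with a leading zero for alignment and its base silently changes. For example:

    #include <stdio.h>

    int main(void) {
        int a = 10;               /* decimal ten */
        int b = 010;              /* octal: decimal eight */
        printf("%d %d\n", a, b);  /* prints: 10 8 */
        return 0;
    }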

Commercial aviation uses octal "labels" (basically message-type IDs) in the venerable ARINC 429 bus standard. So being able to specify label values in octal when writing code for avionics applications is nice...

I have also seen octal used in aircraft transponders. A Mode 3/A transponder code is a 12-bit number that everyone deals with as 4 octal digits. There is a bit more information on Wikipedia. I know it's not generally computer related, but the FAA uses computers too :).
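
As a sketch of why octal fits here (illustrative only, not avionics code): the well-known emergency squawk 7700 is four 3-bit fields, so an octal literal preserves exactly the digits dialed into the transponder.

    #include <stdio.h>

    int main(void) {
        unsigned squawk = 07700;  /* 12-bit emergency code, written in octal */
        printf("raw value: %u (0x%x)\n", squawk, squawk);
        /* each of the four octal digits is one 3-bit field of the code */
        printf("digits: %o %o %o %o\n",
               (squawk >> 9) & 07, (squawk >> 6) & 07,
               (squawk >> 3) & 07, (squawk >> 0) & 07);
        return 0;
    }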

It's useful for the chmod and mkdir functions in Unix land, but aside from that I can't think of any other common uses.
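
Both calls take the mode as a plain integer, and octal literals are the idiomatic way to write it. A minimal POSIX sketch (the path names are made up):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        /* "demo" is a hypothetical directory name */
        if (mkdir("demo", 0755) != 0)   /* rwxr-xr-x */
            perror("mkdir");
        if (chmod("demo", 0700) != 0)   /* restrict to owner: rwx------ */
            perror("chmod");
        return 0;
    }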

I came into contact with Octal through PDP-11, and so, apparently, did the C language :)

tar files store numeric header fields (such as file size, mode, and modification time) as octal ASCII strings
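
For example, the ustar header stores the file size as a 12-byte octal ASCII field, so reading it back is just a base-8 strtol (the field contents here are made up):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* hypothetical 12-byte ustar size field: octal ASCII, NUL-terminated */
        const char size_field[12] = "00000001750";
        long size = strtol(size_field, NULL, 8);  /* parse base 8 */
        printf("%ld bytes\n", size);              /* 1000 */
        return 0;
    }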

There is no earthly reason to modify a standard that goes back to the birth of the language and exists in untold numbers of programs. I still remember ASCII characters by their octal values; I would have to think to come up with the hex value of A, but it is 101 in octal; numeric 0 is 060... ^C is 003...

That is to say, I often use the octal representation.
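
C still supports this directly: octal escape sequences work in character and string literals, so those exact values can be written as-is.

    #include <stdio.h>

    int main(void) {
        /* octal escape sequences in character literals */
        printf("%c %c\n", '\101', '\060');  /* prints: A 0 */
        printf("%d\n", '\003');             /* ETX (Ctrl-C) has value 3 */
        return 0;
    }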

Now if you really want to bend your mind, take a look at the word format for the PDP-10...

There are still a bunch of old process control systems (Honeywell H4400, H45000, etc.) out there from the late 60s and 70s which are arranged to use 24-bit words with octal addressing. As one example, think about when the last nuclear power plants were constructed in the United States.

Replacing these industrial systems is a pretty major undertaking, so you may just be lucky enough to encounter one in the wild before they go extinct and gape in awe at their magnificent custom floating-point formats!

Anyone who learned to program on a PDP-8 has a warm spot in his heart for octal numbers. Word size was 12 bits divided into 4 groups of 3 bits each, so -1 was 7777 octal. This scheme was perpetuated in the PDP-11 which had 16 bit words but still used octal representation for various things, hence the *NIX file permission scheme which lives to this day.
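
That 12-bit behaviour is easy to reproduce on a modern machine by masking a value down to four octal digits (a tiny sketch):

    #include <stdio.h>

    int main(void) {
        /* two's-complement -1 truncated to a 12-bit word, PDP-8 style */
        printf("%o\n", (unsigned)-1 & 07777);  /* prints: 7777 */
        return 0;
    }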

Octal is and was most useful with the first available display hardware (7-segment displays). These original displays did not have the decoders that became available later.

Thus the digital register outputs were grouped to fit the available display, which was capable of displaying only eight (8) symbols: 0, 1, 2, 3, 4, 5, 6, 7.

Also, the first CRT display tubes were raster-scan displays, and the simplest character-symbol generators were equivalent to the 7-segment displays.

The motivating driver was, as always, the least expensive display possible.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow