The appropriate numbering system to represent a value depends on the application. There are advantages and disadvantages to each numbering system.
When interfacing with non-engineers, decimal remains the numbering system of choice. For example, I'd have a heart attack if the IRS told me that I owed them 1000000 dollars when they actually meant 1000000 binary, which is only 64 decimal.
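The misread in that joke comes from interpreting a string of binary digits as if it were decimal. As a minimal sketch, the standard C library's strtol can parse the same digit string in either base (the function names as_decimal and as_binary are my own for illustration):

```c
#include <stdlib.h>

/* Interpret the same digit string under two different bases. */
long as_decimal(const char *digits) { return strtol(digits, NULL, 10); }
long as_binary(const char *digits)  { return strtol(digits, NULL, 2);  }

/* as_binary("1000000") yields 64, not one million. */
```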
Modern computers have lots of memory and disk space for programs. However, many low-end microcontrollers can be quite tight on space. Outputting a decimal value to a serial port or display requires the number be divided by 10 to break off each digit. Depending on the capabilities of the microcontroller, the divide-by-ten routine alone can take up more room than an entire hexadecimal output routine!
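To see why, compare the two routines side by side. This is a rough sketch (the names to_decimal and to_hex are mine): the decimal path needs a divide and a modulo per digit, which a small microcontroller without a hardware divider must emulate in software, while the hexadecimal path needs only a shift and a mask per nibble.

```c
#include <stdint.h>

/* Decimal output: repeated division by 10 peels off one digit per
   pass. Costly on a tiny MCU with no hardware divide instruction. */
void to_decimal(uint8_t value, char *out)
{
    char tmp[4];
    int i = 0, j = 0;
    do {
        tmp[i++] = '0' + (value % 10);  /* lowest digit first */
        value /= 10;
    } while (value);
    while (i) out[j++] = tmp[--i];      /* digits came out reversed */
    out[j] = '\0';
}

/* Hexadecimal output: pure shifts and masks, no division at all. */
void to_hex(uint8_t value, char *out)
{
    static const char digits[] = "0123456789ABCDEF";
    out[0] = digits[value >> 4];    /* high nibble */
    out[1] = digits[value & 0x0F];  /* low nibble */
    out[2] = '\0';
}
```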
When diagnosing an output pin logic problem, it isn’t immediately helpful to see that the microcontroller’s port B is set to 180 decimal. I’d prefer to see the pins as 10110100 binary. The same thing is true when looking at bit flags in memory. A disadvantage of binary is that it gets lengthy fairly quickly.
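A short sketch of that pin's-eye view (function names are my own): one helper renders a port byte as a binary string, MSB first, so each character lines up with one pin, and another tests a single pin with a mask.

```c
#include <stdint.h>

/* Render an 8-bit port value as a binary string, MSB first,
   so each character corresponds to one pin. */
void to_binary(uint8_t value, char out[9])
{
    for (int bit = 7; bit >= 0; bit--)
        *out++ = (value & (1u << bit)) ? '1' : '0';
    *out = '\0';
}

/* Check one pin directly (pin 0 is the least significant bit). */
int pin_is_high(uint8_t port, int pin)
{
    return (port >> pin) & 1u;
}
```

With port B reading 180 decimal, to_binary produces "10110100", and pin_is_high picks out any individual bit without mental conversion.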
When debugging dumps of memory, disks, or data streams on modern microcontrollers and computers, hexadecimal provides a compact format that is easy to break into bytes and convert into bits. In my software career, I routinely examine lengthy log files detailing all the bytes sent over the air on wireless networks.
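The classic presentation is the hex dump: an offset, the bytes in hexadecimal, and an ASCII gloss. A minimal sketch of one line of such a dump (dump_line is a hypothetical name; real tools like hexdump or od do the full job):

```c
#include <stdint.h>
#include <stdio.h>
#include <ctype.h>
#include <stddef.h>

/* Format up to 16 bytes as one hex-dump line:
   "OFFSET  HH HH ...  ascii". Each byte is exactly two hex digits,
   one per nibble, so the columns split cleanly. out should hold
   at least 80 chars; returns the number of characters written. */
int dump_line(uint32_t offset, const uint8_t *data, size_t n, char *out)
{
    int pos = sprintf(out, "%08X ", offset);
    for (size_t i = 0; i < n && i < 16; i++)
        pos += sprintf(out + pos, " %02X", data[i]);
    pos += sprintf(out + pos, "  ");
    for (size_t i = 0; i < n && i < 16; i++)
        out[pos++] = isprint(data[i]) ? (char)data[i] : '.';
    out[pos] = '\0';
    return pos;
}
```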
Octal has fallen out of usage over the last thirty years because microprocessors and microcontrollers process data in bit chunks that are not evenly divisible by 3 (4-bit, 8-bit, 16-bit, 32-bit, 64-bit). Octal does have the advantage of requiring only number symbols (0-7), as opposed to hexadecimal (0-9, A-F). So, octal still finds some limited usage on numeric-only displays.
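The mismatch is easy to see in code. Each octal digit covers exactly 3 bits, so an 8-bit value splits as 2+3+3 bits, leaving an awkward stub at the top. A quick sketch (to_octal is my own name for illustration):

```c
#include <stdint.h>

/* Octal peels off 3 bits per digit. An 8-bit value splits unevenly:
   180 decimal = 10|110|100 binary = 264 octal. */
void to_octal(uint8_t value, char out[4])
{
    out[0] = '0' + ((value >> 6) & 07);  /* top digit: only 2 bits */
    out[1] = '0' + ((value >> 3) & 07);
    out[2] = '0' + (value & 07);
    out[3] = '\0';
}
```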
To summarize, I switch between using binary, decimal, and hexadecimal on a regular basis, depending on which format most clearly expresses the data that the number represents.