Question

Excuse me if you find this silly: I am not able to understand the purpose of BCD and how it is used in programming.

What I understand so far is that BCD is an internal way of representing data in a byte or a nibble. In a simple C program, we declare variables as int, float, etc. and manipulate them as we like. Internally they may be represented as binary numbers or as BCD.

Modern calculators use the BCD format only. On Intel chips, the FBLD instruction is used to move a packed BCD value into an FPU register.

I have seen a few exercises in assembly language programming that convert BCD into decimal and the other way round.

How does it help a programmer to know that?

What is the usefulness of the knowledge of BCD to a high-level language programmer?


Solution 3

Intel chips do not use a BCD representation internally. They use a binary representation, including 2's complement for negative integers. However, they have certain instructions (AAA, AAS, AAM, AAD, DAA, DAS) that convert the binary results of addition, subtraction, multiplication and division on unsigned integer values into unpacked or packed BCD results. Therefore, Intel chips can produce BCD results for unsigned integers indirectly. These instructions use an implied operand located in the AL register and place the result of the conversion in the AL register. There are also more advanced BCD-handling instructions, such as FBLD, which moves an 80-bit packed signed BCD value from memory into an FPU register, where it is converted automatically into binary form, processed, and converted back into BCD format.
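To make that packed format concrete, here is a minimal C sketch (not how the CPU does it internally) that packs an unsigned integer into packed BCD, two decimal digits per byte; this is essentially the digit layout an instruction like FBLD consumes from memory. The function name and the 9-byte digit buffer are just choices for this example.

    #include <stdint.h>
    #include <stdio.h>

    /* Pack an unsigned integer into packed BCD: two decimal digits
     * per byte, least significant digits first. FBLD works on a
     * similar layout (18 digits in 9 bytes, plus a sign byte). */
    static void to_packed_bcd(uint64_t value, uint8_t out[9])
    {
        for (int i = 0; i < 9; i++) {
            uint8_t low  = value % 10; value /= 10;   /* low nibble  */
            uint8_t high = value % 10; value /= 10;   /* high nibble */
            out[i] = (uint8_t)((high << 4) | low);
        }
    }

    int main(void)
    {
        uint8_t bcd[9];
        to_packed_bcd(1234, bcd);
        printf("%02X %02X\n", bcd[1], bcd[0]);  /* prints "12 34" */
        return 0;
    }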

OTHER TIPS

BCD does exactly what it is named for: Binary Coded Decimal. In essence, this means that instead of storing a hexadecimal digit in every 4 bits, we store only a decimal digit, wasting the remaining code space (the nibble values 1010-1111 go unused).

The point of doing this is mostly arithmetic that needs to be exact (in the sense of least rounding error) in the decimal system, but not necessarily in the hexadecimal (or binary) system, when looking at digits after the decimal point.

In former times this was important in e.g. accounting software, but it has become a non-issue as natural word lengths have grown larger. Today's solutions typically use integer arithmetic in units of 1/10th or 1/100th of a cent.
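As a rough sketch of that modern approach (the prices here are made up for the example):

    #include <stdint.h>
    #include <stdio.h>

    /* Keep money as an integer number of cents and only format it
     * as dollars-and-cents on output; all arithmetic stays exact. */
    int main(void)
    {
        int64_t price_cents = 1999;              /* $19.99             */
        int64_t total_cents = 3 * price_cents;   /* exact, no rounding */

        printf("total: $%lld.%02lld\n",
               (long long)(total_cents / 100),
               (long long)(total_cents % 100));  /* total: $59.97 */
        return 0;
    }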

Another former use was easier interfacing with 7-segment LED displays: numbers encoded in BCD can be displayed nibble by nibble, while a binary representation needs modulo operations.
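A minimal sketch of that difference, with printf standing in for the real segment driver:

    #include <stdint.h>
    #include <stdio.h>

    static void digit_to_segments(uint8_t digit)
    {
        printf("[%u]", (unsigned)digit);    /* stand-in for driving the segments */
    }

    static void show_bcd(uint8_t packed)    /* e.g. 0x42 -> [4][2] */
    {
        digit_to_segments(packed >> 4);     /* high nibble         */
        digit_to_segments(packed & 0x0F);   /* low nibble          */
    }

    static void show_binary(uint8_t value)  /* e.g. 42 -> [4][2]   */
    {
        digit_to_segments(value / 10);      /* needs a division    */
        digit_to_segments(value % 10);      /* and a modulo        */
    }

    int main(void)
    {
        show_bcd(0x42);
        show_binary(42);
        printf("\n");
        return 0;
    }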

I am sure that in today's world you will encounter BCD at the bit level only in very specialised circumstances.

There is no BCD format in C, although you can store BCD values in normal integer variables and print them out in hex.
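A tiny illustration of that point:

    #include <stdio.h>

    /* C has no BCD type, but a packed BCD value kept in an ordinary
     * int reads as decimal when printed in hex, because every nibble
     * holds one decimal digit. */
    int main(void)
    {
        unsigned int bcd = 0x1234;   /* packed BCD for 1234 */
        printf("%X\n", bcd);         /* prints 1234 */
        printf("%u\n", bcd);         /* prints 4660, its plain binary value */
        return 0;
    }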

In the old times BCD was mostly used to store decimal values that were infrequently involved in calculations (especially in accounting software, as Eugen Rieck said). In that case the cost of converting between decimal and binary on input and output could outweigh the cost of the simple calculations involved, because divisions were slow or there was no hardware divider. Some old architectures also had instructions for BCD arithmetic, so BCD could be used to improve performance.

But that is hardly a concern nowadays, since most modern architectures have dropped support for BCD maths, and doing it in larger registers (> 8 bits) may result in worse performance.

To a high-level programmer, BCD is not important. To a low-level programmer in the early days, imagine the following situation, even simpler than a calculator: you have an integer variable in your code that you want to show to the user on a seven-segment display.

It would be easy to display it in hex, but users prefer decimal numbers. So you find you need a hex-to-decimal conversion, and then you need to represent internally (in hex) the decimal digits you want to display.

Very early on, they identified that it would be easy to use the bit sequences 0000-1001 to represent the decimal digits 0 to 9. While somewhat wasteful, with one byte you can represent two digits, and while you are at it, why not implement arithmetic directly on these decimal digits? Then no extra conversion is needed to interact with the user, and by combining more bytes you can have more digits.

They noticed that, with the help of some extra CPU instructions, they could 'correct' the binary arithmetic instructions already present in the CPU so that they operate on BCD. They could then perform all calculations in BCD, the preferred way for calculators. As a bonus, they could handle a decimal point, and fractional numbers such as 0.1 need none of the ugly approximations of the float representation. BCD was adopted in this domain for quite some time.

When we get to the C language, we are already far away from these considerations. People consider C a 'low-level' language, but that is true only in relative terms. C does not expose details of the CPU architecture, such as the availability of BCD instructions or even the carry flag, which is so important for implementing variable-precision arithmetic.

Instead of using a few assembly instructions, you can write much more complicated code in C to handle BCD, but it is up to the compiler to recognise that code and map it back to these special instructions. Most likely that will not happen: reverse-engineering such code is a very complex task for the compiler, and most compilers simply ignore these instructions.
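For a flavour of what that hand-written C code looks like, here is a rough sketch of the fix-up that a single DAA instruction performs after an ordinary binary addition of two packed BCD bytes (simplified: the carry out of the top digit is dropped here, whereas the real DAA would report it in the carry flag):

    #include <stdint.h>
    #include <stdio.h>

    /* Add two packed BCD bytes with a plain binary ADD, then correct
     * any decimal digit that overflowed past 9. */
    static uint8_t bcd_add(uint8_t a, uint8_t b)
    {
        unsigned sum = a + b;                   /* ordinary binary add    */
        if (((a & 0x0F) + (b & 0x0F)) > 9)      /* low digit overflowed?  */
            sum += 0x06;
        if (sum > 0x99)                         /* high digit overflowed? */
            sum += 0x60;                        /* carry out is dropped   */
        return (uint8_t)sum;
    }

    int main(void)
    {
        /* 0x27 + 0x38 means 27 + 38 in BCD; the result is 0x65. */
        printf("%02X\n", bcd_add(0x27, 0x38));
        return 0;
    }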

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow