Question

I know BCD is a more intuitive data type if you don't know binary. But I don't see why you would use this encoding; it seems to make little sense, since it wastes part of each 4-bit group (the bit patterns above 9 are never used).

I also believe x86 only supports addition and subtraction on BCD directly (you can convert it via the FPU).

Is it possible this comes from old machines or other architectures?

Thanks!


Solution

I think BCD is useful for many things, for the reasons given above. One thing that seems to have been overlooked is providing instructions to go from binary to BCD and the other way around. That could be very useful for converting an ASCII number to binary for arithmetic.

One of the posters was wrong about numbers often being stored in ASCII; in practice a lot of numbers are stored in binary because it is more efficient, and converting ASCII to binary is a bit involved. BCD sits somewhere between ASCII and binary; if there were BCDTOINT and INTTOBCD instructions, conversions would be very easy. All ASCII values must be converted to binary for arithmetic, so BCD is really useful in that ASCII-to-binary conversion.
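To make that round trip concrete, here is a minimal C sketch of ASCII-to-BCD and BCD-to-binary conversion; the helper names are purely illustrative (there are no BCDTOINT/INTTOBCD instructions in x86):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack an ASCII digit string such as "1234" into packed BCD,
 * two digits per byte, most significant digit first. */
static uint32_t ascii_to_packed_bcd(const char *s)
{
    uint32_t bcd = 0;
    while (*s) {
        bcd = (bcd << 4) | (uint32_t)(*s - '0'); /* each nibble holds one digit */
        s++;
    }
    return bcd;
}

/* Convert packed BCD to plain binary, one decimal digit at a time. */
static uint32_t packed_bcd_to_int(uint32_t bcd)
{
    uint32_t value = 0;
    for (int shift = 28; shift >= 0; shift -= 4)
        value = value * 10 + ((bcd >> shift) & 0xF);
    return value;
}

int main(void)
{
    uint32_t bcd = ascii_to_packed_bcd("1234");
    printf("BCD 0x%X -> binary %u\n", bcd, packed_bcd_to_int(bcd));
    return 0;
}
```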

Other tips

BCD arithmetic is useful for exact decimal calculations, which is often a requirement for financial applications, accountancy, etc. It also makes things like multiplying/dividing by powers of 10 easier. These days there are better alternatives.
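For example, on a packed-BCD value a multiply or divide by 10 is nothing more than a 4-bit shift; a minimal sketch in C, just to illustrate the idea:

```c
#include <stdint.h>
#include <stdio.h>

/* On packed BCD (one decimal digit per nibble), multiplying or dividing
 * by 10 is just a 4-bit shift; no hardware multiplier or divider needed. */
static uint32_t bcd_mul10(uint32_t bcd) { return bcd << 4; }
static uint32_t bcd_div10(uint32_t bcd) { return bcd >> 4; }

int main(void)
{
    printf("0x%X 0x%X\n", bcd_mul10(0x0123), bcd_div10(0x1234)); /* 0x1230 0x123 */
    return 0;
}
```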

There's a good Wikipedia article which discusses the pros and cons.

BCD is useful at the very low end of the electronics spectrum, when the value in a register is displayed by some output device. For example, say you have a calculator with a number of seven-segment displays that show a number. It is convenient if each display is controlled by separate bits.

It may seem implausible that a modern x86 processor would be used in a device with these kinds of displays, but x86 goes back a long way, and the ISA maintains a great deal of backward compatibility.
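To make the display example concrete, here is a hypothetical C sketch; the segment table is one common active-high a-to-g encoding, an assumption for illustration rather than any particular chip's datasheet:

```c
#include <stdint.h>
#include <stdio.h>

/* One common seven-segment encoding: bit 0 = segment a ... bit 6 = segment g,
 * active high. With BCD, each display digit is just a 4-bit table lookup. */
static const uint8_t seg_table[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

int main(void)
{
    uint8_t packed = 0x42;                 /* packed BCD: digits 4 and 2 */
    uint8_t high = (packed >> 4) & 0xF;    /* tens digit straight from its nibble */
    uint8_t low  = packed & 0xF;           /* units digit */
    printf("display patterns: %02X %02X\n", seg_table[high], seg_table[low]);
    return 0;
}
```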

BCD is space-wise wasteful, that's true, but it has the advantage of being a "fixed pitch" format, making it easy to find the nth digit in a particular number.

Another advantage is that it allows for exact arithmetic calculations on arbitrary-size numbers. Also, using the "fixed pitch" characteristic mentioned above, such arithmetic operations can easily be chunked into multiple threads (parallel processing).
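As a concrete illustration of the "fixed pitch" point, a minimal C sketch of finding the nth digit (the helper is purely illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* With packed BCD the position of every digit is fixed, so picking out
 * the nth decimal digit (0 = least significant) is a shift and a mask. */
static unsigned bcd_digit(uint64_t bcd, unsigned n)
{
    return (unsigned)((bcd >> (4 * n)) & 0xF);
}

int main(void)
{
    uint64_t bcd = 0x987654;                       /* packed BCD for 987,654 */
    printf("digit 2 is %u\n", bcd_digit(bcd, 2));  /* prints 6 */
    return 0;
}
```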

BCD has been in x86 CPUs ever since the original 8086 processor, and all x86 CPUs are 8086-compatible. The BCD operations in x86 were used to support business applications back then. BCD support in the processor itself is no longer used.

Note that BCD is an exact representation of decimal numbers, whereas floating point is not, and that implementing BCD in hardware is much simpler than implementing floating point. This sort of thing mattered more when processors had fewer than a million transistors and ran at a few megahertz.

Nowadays, it's common to store numbers in binary format, and convert them to decimal format for display purposes, but the conversion does take some time. If the primary purpose of a number is to be displayed, or to be added to a number which will be displayed, it may be more practical to perform computations in a decimal format than to perform computations in binary and convert to decimal. Many devices with numerical readouts, and many video games, stored numbers in packed BCD format, which stores two digits per byte. This is why many score counters overflow at 1,000,000 points rather than some power-of-two value.

If hardware did not facilitate packed-BCD arithmetic, the alternative would not be to use binary, but to use unpacked decimal. Converting packed BCD to unpacked decimal at the moment it's displayed can easily be done a digit at a time. Converting binary to decimal, by contrast, is much slower, and requires operating on the entire quantity.
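A rough sketch of such a score counter, assuming three packed-BCD bytes for six digits (the layout is an assumption for illustration, not any particular game's code):

```c
#include <stdint.h>
#include <stdio.h>

/* A six-digit score kept as three packed-BCD bytes, as many old games did.
 * Adding a point is a ripple of per-digit increments; the counter wraps
 * at 1,000,000 because that is where the digits run out, not at a power of two. */
static void score_add_one(uint8_t score[3])   /* score[0] = least significant pair */
{
    for (int i = 0; i < 3; i++) {
        uint8_t lo = (score[i] & 0x0F) + 1;
        if (lo <= 9) { score[i] = (score[i] & 0xF0) | lo; return; }
        uint8_t hi = ((score[i] >> 4) & 0x0F) + 1;
        if (hi <= 9) { score[i] = (uint8_t)(hi << 4); return; }  /* low digit wrapped to 0 */
        score[i] = 0;                          /* both digits wrapped; carry to next byte */
    }
}

/* Display is just a nibble-by-nibble unpack; no binary-to-decimal division needed. */
static void score_print(const uint8_t score[3])
{
    for (int i = 2; i >= 0; i--)
        printf("%u%u", (score[i] >> 4) & 0xF, score[i] & 0xF);
    printf("\n");
}

int main(void)
{
    uint8_t score[3] = { 0x99, 0x99, 0x09 };  /* 099,999 */
    score_add_one(score);
    score_print(score);                        /* prints 100000 */
    return 0;
}
```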

Incidentally, the 8086 instruction set is the only one I've seen with instructions for "ASCII Adjust for Division" and "ASCII Adjust for Multiplication", one of which multiplies a byte by ten and the other of which divides by ten. Curiously, the value "0A" is part of the machine instructions, and substituting a different number will cause those instructions to multiply or divide by other quantities, but the instructions are not documented as being general-purpose multiply/divide-by-constant instructions. I wonder why that feature wasn't documented, given that it could have been useful?

It's also interesting to note the variety of approaches processors used for adding or subtracting packed BCD. Many perform a binary addition but use a flag to keep track of whether a carry occurred from bit 3 to bit 4 during an addition; they may then expect code to clean up the result (e.g. PIC), supply an opcode to cleanup addition but not subtraction, supply one opcode to clean up addition and another for subtraction (e.g. x86), or use a flag to track whether the last operation was addition or subtraction and use the same opcode to clean up both (e.g. Z80). Some use separate opcodes for BCD arithmetic (e.g. 68000), and some use a flag to indicate whether add/subtract operations should use binary or BCD (e.g. 6502 derivatives). Interestingly, the original 6502 performs BCD math at the same speed as binary math, but CMOS derivatives of it require an extra cycle for BCD operations.
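For the curious, here is a rough C model of the "binary add, then clean up" approach; it loosely mimics what x86's DAA does on one packed-BCD byte, with the flag handling omitted:

```c
#include <stdint.h>
#include <stdio.h>

/* Add two packed-BCD bytes the way many CPUs do it: a plain binary add,
 * then a "decimal adjust" that fixes each nibble that overflowed past 9. */
static uint8_t bcd_add(uint8_t a, uint8_t b, int *carry_out)
{
    unsigned sum = a + b;
    /* Low nibble produced a non-decimal digit or carried into bit 4? Add 6. */
    if ((sum & 0x0F) > 9 || ((a & 0x0F) + (b & 0x0F)) > 0x0F)
        sum += 0x06;
    /* Same correction for the high nibble. */
    if ((sum & 0xF0) > 0x90 || sum > 0xFF)
        sum += 0x60;
    *carry_out = sum > 0xFF;
    return (uint8_t)sum;
}

int main(void)
{
    int carry;
    uint8_t r = bcd_add(0x38, 0x45, &carry);   /* 38 + 45 */
    printf("%02X carry=%d\n", r, carry);       /* prints 83 carry=0 */
    return 0;
}
```

The real DAA/DAS instructions also update the carry and auxiliary-carry flags, which is what lets multi-byte BCD additions chain from one byte to the next.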

I'm sure the Wiki article linked to earlier goes into more detail, but I used BCD on IBM mainframe programming (in PL/I). BCD not only guaranteed that you could look at particular areas of a byte to find an individual digit - which is useful sometimes - but also allowed the hardware to apply simple rules to calculate the required precision and scale for e.g. adding or multiplying two numbers together.

As I recall, I was told that on mainframes, support for BCD was implemented in hardware and at that time, was our only option for representing floating point numbers. (We're talking 18 years ago here!)

When I was in college over 30 years ago, I was told the reasons why BCD (COMP-3 in COBOL) was a good format.

None of those reasons are still relevant with modern hardware. We have fast, binary fixed point arithmetic. We no longer need to be able to convert BCD to a displayable format by adding an offset to each BCD digit. We rarely store numbers as eight bits per digit, so the fact that BCD only takes four bits per digit isn't very interesting.

BCD is a relic, and should be left in the past, where it belongs.

Very few humans can make sense of amounts expressed in hexadecimal, so it is useful to show, or at least allow viewing, intermediate results in decimal, especially in the financial or accounting world.

Modern computing has emphasized coding that captures the design logic rather than optimizing a few cpu cycles here or there. The value of the time and/or memory saved often isn't worth writing special bit-level routines.

That being said, BCD is still occasionally useful.

The one example I can think of is when you have huge database flat files or other such big data in an ASCII format like CSV. BCD is great if all you're doing is looking for values between some limits. Converting every value to binary as you scan all that data would greatly increase processing time.
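A small sketch of why that works: with fixed-width packed BCD, numeric order matches byte order, so a scan can filter on a range without converting anything (the field width here is just an example):

```c
#include <stdint.h>
#include <stdio.h>

/* With fixed-width packed BCD (all digits valid, same number of nibbles),
 * numeric order matches the order of the raw bytes, so a range check
 * during a scan needs no binary conversion at all. */
static int bcd_in_range(uint32_t value_bcd, uint32_t lo_bcd, uint32_t hi_bcd)
{
    return value_bcd >= lo_bcd && value_bcd <= hi_bcd;  /* plain integer compare */
}

int main(void)
{
    /* 0x00234567 represents 234,567; bounds are 100,000 and 500,000. */
    printf("%d\n", bcd_in_range(0x00234567, 0x00100000, 0x00500000));  /* prints 1 */
    return 0;
}
```

The same property holds for zero-padded, fixed-width ASCII digit fields, which is why such flat-file columns can be range-checked with a plain byte comparison.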

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow