Question

I've been asked to write a program in assembly language, starting from the following formula: (yy+h+m)-(d+d), where the variables are stored as bytes, and only the value of h is greater than 128.

There must be two programs, one using the unsigned convention and the other the signed convention, and I don't know what this means, since there are no multiplications or divisions that would call for imul or idiv...

Should I use the cbw instruction, and if so, how?


Solution

To do this calculation, you need to convert all the byte values to words and then do the computation on words, because the intermediate results will not fit in a byte. For example, if yy = 100, h = 200 and m = 50, then yy + h + m = 350, which is already bigger than 255.

Extending a byte value to a word (two bytes) works differently for signed and unsigned numbers, because the content of the high-order byte depends on the convention.

If the byte value is unsigned, the high-order byte of the word has to be set to 0. For example, $8c is zero-extended to the word $008c.

If the byte value is signed, the high-order byte has to be filled with the value of the sign bit of the byte. The same example: $8c has its sign bit set (as a signed byte it represents -116), so it is extended to $ff8c, which is still -116 as a signed word. The instruction cbw performs exactly this signed conversion of AL into AX.

In code it looks like this:

; unsigned: zero-extend AL into AX by clearing AH
    mov  al, byte [SomeByteVariable]
    mov  ah, 0    ; high-order byte = 0
    add  ax, 1234 ; here AX holds an unsigned word value

; signed: sign-extend AL into AX with cbw
    mov  al, byte [SomeByteVariable]
    cbw           ; AH is filled with the sign bit of AL
    add  ax, 1234 ; here AX holds a signed word value
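
To tie this back to the original formula, a minimal sketch of the unsigned program could look like the fragment below. The names yy, h, m, d and result are placeholders taken from the formula, not from any real code, and the sketch only shows the computation itself (no program entry or exit). The signed program is identical except that every mov ah, 0 is replaced by cbw.

; assumed data layout for (yy + h + m) - (d + d); values are examples only
yy      db 100
h       db 200          ; the one value greater than 128
m       db 50
d       db 30
result  dw 0

; unsigned convention: zero-extend every byte before adding
compute_unsigned:
    mov  al, byte [yy]
    mov  ah, 0          ; AX = yy, zero-extended
    mov  bx, ax         ; BX accumulates the result

    mov  al, byte [h]
    mov  ah, 0
    add  bx, ax         ; BX = yy + h

    mov  al, byte [m]
    mov  ah, 0
    add  bx, ax         ; BX = yy + h + m

    mov  al, byte [d]
    mov  ah, 0
    add  ax, ax         ; AX = d + d
    sub  bx, ax         ; BX = (yy + h + m) - (d + d)

    mov  [result], bx
    ret

; signed convention: the same sequence, with cbw instead of mov ah, 0

Note that only the extension step differs between the two conventions; the add and sub instructions themselves are the same for signed and unsigned word values.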
Licensed under: CC-BY-SA with attribution