Question

I tested this code on x86.

void func()
{
  int a, b;
  unsigned int c, d;
  int ret;

  ret = a / b;  // this line uses idivl, expected
  ret = c / d;  // this line uses divl, expected
  ret = a / c;  // this line uses divl..., surprised
  ret = c / a;  // this line uses divl..., surprised
  ret = a * c;  // this line uses imull, expected
}

I have pasted the assembly code here:

func:
    pushl   %ebp
    movl    %esp, %ebp
    subl    $36, %esp
    # ret = a / b: signed / signed, sign-extend %eax into %edx, then idivl
    movl    -4(%ebp), %eax
    movl    %eax, %edx
    sarl    $31, %edx
    idivl   -8(%ebp)
    movl    %eax, -20(%ebp)

    # ret = c / d: unsigned / unsigned, zero %edx, then divl
    movl    -12(%ebp), %eax
    movl    $0, %edx
    divl    -16(%ebp)
    movl    %eax, -20(%ebp)

    # ret = a / c: unsigned division (divl)
    movl    -4(%ebp), %eax
    movl    $0, %edx
    divl    -12(%ebp)
    movl    %eax, -20(%ebp)

    # ret = c / a: unsigned division (divl), a spilled to a temporary
    movl    -4(%ebp), %eax
    movl    %eax, -36(%ebp)
    movl    -12(%ebp), %eax
    movl    $0, %edx
    divl    -36(%ebp)
    movl    %eax, -20(%ebp)

    # ret = a * c: signed multiply instruction
    movl    -4(%ebp), %eax
    imull   -12(%ebp), %eax
    movl    %eax, -20(%ebp)
    leave
    ret

Could you please tell me why the division between an int and an unsigned int uses divl instead of idivl?

Solution

Since int and unsigned int have the same conversion rank, but a is signed and c is unsigned, the usual arithmetic conversions convert a to unsigned int before the division, in both a / c and c / a.

The compiler thus emits the unsigned division instruction divl for these cases (as well as for c / d, where both operands are unsigned).
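
As an aside, the effect of that conversion is visible at run time as well, not only in the generated instructions. Here is a minimal sketch of my own (the values are made up for illustration) showing that a negative int divided by an unsigned int is carried out as an unsigned division, and that casting the unsigned operand to int brings back signed division:

#include <stdio.h>

int main(void)
{
  int a = -6;
  unsigned int c = 3;

  /* a is converted to unsigned int (4294967290 with a 32-bit int),
     so the division is unsigned and the result is not -2. */
  printf("a / c      = %u\n", a / c);      /* prints 1431655763 */

  /* Casting c to int keeps the operation signed (idivl). */
  printf("a / (int)c = %d\n", a / (int)c); /* prints -2 */

  return 0;
}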

The multiplication a * c is also an unsigned multiplication. Here the compiler can get away with using the signed multiplication instruction imull, because the truncated (low 32-bit) result is identical whether mull or imull is used; only the flags differ, and the generated code does not test them.
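
To see why that is safe, here is another small sketch of my own (again with made-up values) that compares the bit pattern of the product when the operands are multiplied as unsigned versus as signed; the low 32 bits come out the same:

#include <stdio.h>

int main(void)
{
  int a = -7;
  unsigned int c = 5;

  unsigned int u = (unsigned int)a * c;  /* unsigned multiply */
  int s = a * (int)c;                    /* signed multiply */

  /* Both have the bit pattern 0xFFFFFFDD: -35 as signed, 4294967261 as unsigned. */
  printf("unsigned product: %u (0x%08X)\n", u, u);
  printf("signed product:   %d (0x%08X)\n", s, (unsigned int)s);

  return 0;
}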

Licensed under: CC-BY-SA with attribution