I'm curious how many ways there are to set a register to zero in x86 assembly, using one instruction. Someone told me that he managed to find at least 10 ways to do it.

The ones I can think of are:

xor ax,ax
mov ax, 0
and ax, 0

Solution

There are a lot of possibilities for how to mov 0 into ax under IA32...

    lea eax, [0]
    mov eax, 0FFFF0000h         //Any constant from the set (0..0FFFFh) << 16
    shr eax, 16                 //Any shift count from 16..31
    shl eax, 16                 //Any shift count from 16..31

And perhaps the strangest... :)

@movzx:
    movzx eax, byte ptr[@movzx + 6]   //Because the last byte of this instruction is 0

and...

  @movzx:
    movzx ax, byte ptr[@movzx + 7]

Edit:

And for 16-bit x86 CPU mode, not tested...:

    lea  ax, [0]

and...

  @movzx:
    movzx ax, byte ptr cs:[@movzx + 7]   //Check if 7 is right offset

The cs: prefix is only needed in case the ds segment register is not equal to the cs segment register.

Other tips

See this answer for the best way to zero registers: xor eax,eax (performance advantages, and smaller encoding).


I'll consider just the ways that a single instruction can zero a register. There are far too many ways if you allow loading a zero from memory, so we'll mostly exclude instructions that load from memory.

I've found 10 different single instructions that zero a 32bit register (and thus the full 64bit register in long mode), with no pre-conditions or loads from any other memory. This is not counting different encodings of the same insn, or the different forms of mov. If you count loading from memory that's known to hold a zero, or from segment registers or whatever, there are a boatload of ways. There are also a zillion ways to zero vector registers.

For most of these, the eax and rax versions are separate encodings for the same functionality, both zeroing the full 64-bit registers, either zeroing the upper half implicitly or explicitly writing the full register with a REX.W prefix.
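
As a concrete sketch of that point (NASM 64-bit syntax; the byte listings are the standard encodings):

    xor eax, eax        ; 31 C0     -- 2 bytes; implicitly zeros the upper 32 bits of rax as well
    xor rax, rax        ; 48 31 C0  -- 3 bytes; REX.W prefix, explicitly writes the full 64-bit register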

Integer registers:

# Works on any reg unless noted, usually of any size.  eax/ax/al as placeholders
and    eax, 0         ; three encodings: imm8, imm32, and eax-only imm32
andn   eax, eax,eax   ; BMI1 instruction set: dest = ~s1 & s2
imul   eax, any,0     ; eax = something * 0.  two encodings: imm8, imm32
lea    eax, [0]       ; absolute encoding (disp32 with no base or index).  Use [abs 0] in NASM if you used DEFAULT REL
lea    eax, [rel 0]   ; YASM supports this, but NASM doesn't: use a RIP-relative encoding to address a specific absolute address, making position-dependent code

mov    eax, 0         ; 5 bytes to encode (B8 imm32)
mov    rax, strict dword 0   ; 7 bytes: REX mov r/m64, sign-extended-imm32.    NASM optimizes mov rax,0 to the 5B version, but dword or strict dword stops it for some reason
mov    rax, strict qword 0   ; 10 bytes to encode (REX B8 imm64).  movabs mnemonic for AT&T.  normally assemblers choose smaller encodings if the operand fits, but strict qword forces the imm64.

sub    eax, eax         ; recognized as a zeroing idiom on some but maybe not all CPUs
xor    eax, eax         ; Preferred idiom: recognized on all CPUs

@movzx:
  movzx eax, byte ptr[@movzx + 6]   //Because the last byte of this instruction is 0.  neat hack from GJ.'s answer

.l: loop .l             ; clears e/rcx... eventually.  from I. J. Kennedy's answer.  To operate on only ECX, use an address-size prefix.
; rep lodsb             ; not counted because it's not safe (potential segfaults), but also zeros ecx

"Shift all the bits out one end" isn't possible for regular-size GP registers, only partial registers. shl and shr shift counts are masked: count &= 31;, equivalent to count %= 32;. (But 286 and earlier are 16bit-only, so ax is a "full" register. The shr r/m16, imm8 variable-count form of the instruction was added 286, so there were CPUs where a shift can zero a full integer register.)

Also note that shift counts for vectors saturate instead of wrapping.

# Zeroing methods that only work on 16bit or 8bit regs:
shl    ax, 16           ; shift count is still masked to 0x1F for any operand size less than 64b.  i.e. count %= 32
shr    al, 16           ; so 8b and 16b shifts can zero registers.

# zeroing ah/bh/ch/dh:  low byte of the reg = whatever garbage was in the high byte of the 16-bit reg
movzx  eax, ah          ; From Jerry Coffin's answer

Depending on other existing conditions (other than having a zero in another reg) -- see the short sketch after this list:

bextr  eax,  any, eax  ; if al >= 32, or ah = 0.  BMI1
BLSR   eax,  src       ; if src only has one set bit
CDQ                    ; edx = sign-extend(eax)
sbb    eax, eax        ; if CF=0.  (Only recognized on AMD CPUs as dependent only on flags (not eax))
setcc  al              ; with a condition that will produce a zero based on known state of flags

PSHUFB   xmm0, all-ones  ; xmm0 bytes are cleared when the mask bytes have their high bit set
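
For the flag-dependent entries above, here's a sketch where the flag state is forced first (the cmp is only there to establish the pre-condition):

    cmp   eax, eax        ; comparing a register with itself always sets ZF=1 and CF=0
    sbb   ebx, ebx        ; ebx = ebx - ebx - CF = 0, because CF=0
    setne cl              ; ZF=1 means "equal", so the "not equal" condition is false: cl = 0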

vector regs:

Some of these SSE2 integer instructions can also be used on MMX registers (mm0 - mm7). Again, best choice is some form of xor. Either PXOR / VPXOR, or XORPS / VXORPS.

AVX vxorps xmm0,xmm0,xmm0 zeros the full ymm0/zmm0, and is better than vxorps ymm0,ymm0,ymm0 on AMD CPUs. These zeroing instructions have three encodings: legacy SSE, AVX (VEX prefix), and AVX512 (EVEX prefix), although the SSE version only zeros the bottom 128, which isn't the full register on CPUs that support AVX or AVX512. Anyway, depending on how you count, each entry can be three different instructions (same opcode, though, just different prefixes). Except vzeroall, which AVX512 didn't change (and doesn't zero zmm16-31).
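
To make the encoding point concrete, here's a rough sketch (register numbers are arbitrary; the EVEX line uses vpxord because the EVEX form of vxorps additionally requires AVX512DQ):

    xorps  xmm0, xmm0             ; legacy SSE encoding: only the low 128 bits are written
    vxorps xmm1, xmm1, xmm1       ; VEX encoding: zeros the full ymm1/zmm1
    vpxord xmm16, xmm16, xmm16    ; EVEX encoding (AVX512F+VL): needed to reach xmm16..31 at all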

ANDNPD    xmm0, xmm0
ANDNPS    xmm0, xmm0
PANDN     xmm0, xmm0     ; dest = ~dest & src

PCMPGTB   xmm0, xmm0     ; n > n is always false.
PCMPGTW   xmm0, xmm0     ; similarly, pcmpeqd is a good way to do _mm_set1_epi32(-1)
PCMPGTD   xmm0, xmm0
PCMPGTQ   xmm0, xmm0     ; SSE4.2, and slower than byte/word/dword


PSADBW    xmm0, xmm0     ; sum of absolute differences
MPSADBW   xmm0, xmm0, 0  ; SSE4.1.  careful: only word 0 of the result is guaranteed zero; the other SADs compare shifted windows against the base block, so this doesn't reliably zero the whole register

  ; shift-counts saturate and zero the reg, unlike for GP-register shifts
PSLLDQ    xmm0, 16       ;  left-shift the bytes in xmm0
PSRLDQ    xmm0, 16       ; right-shift the bytes in xmm0
PSLLW     xmm0, 16       ; left-shift the bits in each word
PSLLD     xmm0, 32       ;           double-word
PSLLQ     xmm0, 64       ;             quad-word
PSRLW/PSRLD/PSRLQ  ; same but right shift

PSUBB/W/D/Q   xmm0, xmm0     ; subtract packed elements, byte/word/dword/qword
PSUBSB/W   xmm0, xmm0     ; sub with signed saturation
PSUBUSB/W  xmm0, xmm0     ; sub with unsigned saturation

PXOR       xmm0, xmm0
XORPD      xmm0, xmm0
XORPS      xmm0, xmm0

VZEROALL

# Can raise an exception on SNaN, so only usable if you know exceptions are masked
CMPLTPD    xmm0, xmm0         # exception on QNaN or SNaN, or denormal
VCMPLT_OQPD xmm0, xmm0,xmm0   # exception only on SNaN or denormal
VCMPLT_OQPS ditto

VCMPFALSE_OQPD xmm0, xmm0, xmm0   # This is really just another imm8 predicate value for the same VCMPPD xmm,xmm,xmm, imm8 instruction.  Same exception behaviour as LT_OQ.

SUBPS xmm0, xmm0 and similar won't work because NaN-NaN = NaN, not zero.

Also, FP instructions can raise exceptions on NaN arguments, so even CMPPS/PD is only safe if you know exceptions are masked, and you don't care about possibly setting the exception bits in MXCSR. Even the AVX version, with its expanded choice of predicates, will raise #IA on SNaN. The "quiet" predicates only suppress #IA for QNaN. CMPPS/PD can also raise the Denormal exception.

(See the table in the insn set ref entry for CMPPD, or preferably in Intel's original PDF since the HTML extract mangles that table.)

AVX512:

There are probably several options here, but I'm not curious enough right now to go digging through the instruction set list looking for all of them.

There is one interesting one worth mentioning, though: VPTERNLOGD/Q can set a register to all-ones instead, with imm8 = 0xFF. (But has a false dependency on the old value, on current implementations). Since the compare instructions all compare into a mask, VPTERNLOGD seems to be the best way to set a vector to all-ones on Skylake-AVX512 in my testing, although it doesn't special-case the imm8=0xFF case to avoid a false dependency.

VPTERNLOGD zmm0, zmm0,zmm0, 0     ; inputs can be any registers you like.
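
And the all-ones counterpart mentioned above, with the same caveat about the false dependency on the old value:

    VPTERNLOGD zmm0, zmm0,zmm0, 0xFF  ; truth table 0xFF outputs 1 for every input combination: all bits set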

x87 FP:

Only one choice (because sub doesn't work if the old value was infinity or NaN).

FLDZ    ; push +0.0

A couple more possibilities:

sub ax, ax

movzx eax, ah

Edit: I should note that the movzx doesn't zero all of eax -- it just zeros ah (plus the top 16 bits that aren't accessible as a register in themselves).

As for being the fastest, if memory serves the sub and xor are equivalent. They're faster than (most) others because they're common enough that the CPU designers added special optimization for them. Specifically, with a normal sub or xor the result depends on the previous value in the register. The CPU recognizes the xor-with-self and subtract-from-self specially so it knows the dependency chain is broken there. Any instructions after that won't depend on any previous value so it can execute previous and subsequent instructions in parallel using rename registers.
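
A small sketch of that dependency-breaking effect (the surrounding instructions are hypothetical):

    imul  eax, ebx          ; some long-latency instruction that writes eax
    xor   eax, eax          ; recognized zeroing idiom: breaks the dependency on the old eax
    add   eax, ecx          ; can start as soon as ecx is ready, without waiting for the imul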

Especially on older processors, we expect 'mov reg, 0' to be slower simply because it has an extra 16 bits of data, and most early processors (especially the 8088) were limited primarily by their ability to load the instruction stream from memory -- in fact, on an 8088 you can estimate run time pretty accurately from any reference sheet at all, just by paying attention to the number of bytes involved. That does break down for the div and idiv instructions, but that's about it. OTOH, I should probably shut up, since the 8088 really is of little interest to much of anybody (for at least a decade now).
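
For the byte-count point, the 16-bit encodings look like this:

    xor ax, ax      ; 31 C0     -- 2 bytes
    sub ax, ax      ; 29 C0     -- 2 bytes
    mov ax, 0       ; B8 00 00  -- 3 bytes: the immediate is the "extra 16 bits of data"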

You can set register CX to 0 with LOOP $.

Of course, specific cases have additional ways to set a register to 0: e.g. if you have eax set to a positive integer, you can set edx to 0 with cdq/cltd (this trick is used in a famous 24-byte shellcode, which appears in "Insecure programming by example").
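
A minimal sketch of that cdq trick (the xor is only there to guarantee eax is non-negative for the example):

    xor eax, eax    ; any value with the sign bit clear would do
    cdq             ; edx = sign-extension of eax = 0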

This thread is old, but here are a few other examples. Simple ones:

xor eax,eax

sub eax,eax

and eax,0

lea eax,[0] ; it doesn't look "natural" in the binary

more complex combinations:

; flip all those 1111... bits to 0000
or  eax,-1  ;  eax = 0FFFFFFFFh
not eax     ; ~eax = 0

; XOR EAX,-1 works the same as NOT EAX instruction in this case, flipping 1 bits to 0
or  eax,-1  ;  eax = 0FFFFFFFFh
xor eax,-1  ; ~eax = 0

; -1 + 1 = 0
or  eax,-1 ;  eax = 0FFFFFFFFh, i.e. -1 as a signed int
inc eax    ;++eax = 0
mov eax,0  
shl eax,32  ;careful: the count is masked to 32 & 31 = 0, so this does NOT zero a 32-bit register
shr eax,32  ;same problem: eax is left unchanged
imul eax,0 
sub eax,eax 
xor eax,eax   
and eax,0  
andn eax,eax,eax 

loop $ ;ecx only (counts ecx down to zero)

;together:
push dword 0    
pop eax

or eax,0xFFFFFFFF  
not eax

xor al,al ;("mov al,0","sub al,al",...)  
movzx eax,al
...