Does using xor reg, reg give advantage over mov reg, 0? [duplicate]
There are two well-known ways to set an integer register to zero on x86.
Either
mov reg, 0
or
xor reg, reg
There's an opinion that the second variant is better, since the value 0 is not stored in the code, which saves several bytes of produced machine code. This is definitely good: less instruction cache is used, and that can sometimes allow for faster code execution. Many compilers produce such code.
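For illustration, the 32-bit encodings show the size difference directly:

xor eax, eax    ; 31 C0            (2 bytes)
mov eax, 0      ; B8 00 00 00 00   (5 bytes)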
However, there is formally an inter-instruction dependency between the xor instruction and whatever earlier instruction last wrote the same register. Since there is a dependency, the xor would have to wait until that earlier instruction completes, which could reduce the load on the processor's execution units and hurt performance. Consider:
add reg, 17
;do something else with reg here
xor reg, reg
It's obvious that the result of the xor will be exactly the same regardless of the initial register value. But is the processor able to recognize this?
I tried the following test in VC++7:
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

// 10 billion does not fit in a 32-bit int, so use a 64-bit counter
const long long Count = 10LL * 1000 * 1000 * 1000;

int _tmain(int argc, _TCHAR* argv[])
{
    long long i;

    DWORD start = GetTickCount();
    for( i = 0; i < Count; i++ ) {
        __asm {
            mov eax, 10
            xor eax, eax
        };
    }
    DWORD diff = GetTickCount() - start;
    printf( "xor eax, eax: %lu ms\n", diff );

    start = GetTickCount();
    for( i = 0; i < Count; i++ ) {
        __asm {
            mov eax, 10
            mov eax, 0
        };
    }
    diff = GetTickCount() - start;
    printf( "mov eax, 0:   %lu ms\n", diff );

    return 0;
}
With optimizations off, both loops take exactly the same time. Does this reasonably prove that the processor recognizes that there's no dependency of the xor eax, eax instruction on the earlier mov eax, 10 instruction? What could be a better test to check this?
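A sharper test, as a sketch: put the zeroing instruction in the middle of a dependency chain fed by a long-latency instruction, and keep the whole loop inside one __asm block so the compiler's own loop code can't interfere. Here the imul is just a stand-in for a slow producer of eax, and the iteration count is a placeholder:

__asm {
        mov  ecx, 100000000       ; iteration count (placeholder)
    again:
        imul eax, eax, 100        ; long-latency instruction that writes eax
        xor  eax, eax             ; does this have to wait for the imul?
        add  eax, 1               ; consumes the zeroed value
        dec  ecx
        jnz  again
};

If the processor recognizes xor eax, eax as a dependency-breaking idiom, this loop should run at about the same speed as the same loop with mov eax, 0 in its place; if it doesn't, the xor version should be measurably slower because each iteration has to wait for the imul result.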
An actual answer for you:
Intel 64 and IA-32 Architectures Optimization Reference Manual
Section 3.5.1.8 is where you want to look.
In short, there are situations where either an xor or a mov may be preferred. The issues center around dependency chains and preservation of condition codes.
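To make the condition-code point concrete (a sketch of my own, not taken from the manual's text, with a placeholder label): xor writes the arithmetic flags while a mov of an immediate leaves them alone, so mov is the variant to use when an earlier flag result still needs to be consumed:

cmp  ecx, edx        ; sets the condition codes
mov  eax, 0          ; does not touch the flags
jl   negative_case   ; still tests the result of the cmp
; xor eax, eax here would have clobbered the flags before the jl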
On modern CPUs the XOR pattern is preferred. It is smaller, and faster.
Smaller actually does matter because on many real workloads one of the main factors limiting performance is i-cache misses. This wouldn't be captured in a micro-benchmark comparing the two options, but in the real world it will make code run slightly faster.
And, ignoring the reduced i-cache misses, XOR on any CPU in the last many years is the same speed or faster than MOV. What could be faster than executing a MOV instruction? Not executing any instruction at all! On recent Intel processors the dispatch/rename logic recognizes the XOR pattern, 'realizes' that the result will be zero, and just points the register at a physical zero-register. It then throws away the instruction because there is no need to execute it.
The net result is that the XOR pattern uses zero execution resources and can, on recent Intel CPUs, 'execute' four instructions per cycle. MOV tops out at three instructions per cycle.
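As a rough illustration of those limits (an unrolled sketch of the kind of sequence you would time, not a measurement): four zeroing xors can all go through rename in one cycle because none of them needs an execution unit, while four mov reg, 0 instructions have to share the three ALU ports that can execute them:

; zeroing idiom: handled at rename, no execution unit needed
xor eax, eax
xor ebx, ebx
xor ecx, ecx
xor edx, edx

; immediate move: each one occupies an ALU port
mov eax, 0
mov ebx, 0
mov ecx, 0
mov edx, 0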
For details see this blog post that I wrote:
https://randomascii.wordpress.com/2012/12/29/the-surprising-subtleties-of-zeroing-a-register/
Most programmers shouldn't be worrying about this, but compiler writers do have to worry, and it's good to understand the code that is being generated, and it's just frickin' cool!