What does (x ^ 0x1) != 0 mean?

I came across the following code snippet:

if( 0 != ( x ^ 0x1 ) )
     encode( x, m );

What does x ^ 0x1 mean? Is this some standard technique?


The XOR operation (x ^ 0x1) inverts bit 0 of x and leaves every other bit unchanged. So the result is non-zero, and the test is true, if bit 0 of x is 0 or if any other bit of x is 1.

Conversely, the expression is false only when x == 1.

So the test is the same as:

if (x != 1)

and is therefore (arguably) unnecessarily obfuscated.
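
A quick way to convince yourself, as a sketch of my own (assuming any C compiler; the original snippet gives no context beyond the if):

    #include <stdio.h>

    int main(void)
    {
        /* Compare the obfuscated test with the plain one
           over a small range, including negative values. */
        for (int x = -4; x <= 4; x++) {
            int obfuscated = (0 != (x ^ 0x1));
            int plain      = (x != 1);
            printf("x = %2d: obfuscated = %d, plain = %d\n",
                   x, obfuscated, plain);
        }
        return 0;
    }

Both columns print 1 for every x except x = 1, where both print 0.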


  • ^ is the bitwise XOR operation
  • 0x1 is 1 in hex notation
  • x ^ 0x1 will invert the last bit of x, as demonstrated in the sketch below (see also the XOR truth table in the answer further down if that's not clear to you).

So, the condition (0 != ( x ^ 0x1 )) is true whenever any bit of x differs from the bit pattern of 1, which leaves x == 1 as the only value for which the condition is false. So it is equivalent to

if (x != 1)
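
To see the last-bit inversion mentioned in the list above, here is a minimal sketch of my own:

    #include <stdio.h>

    int main(void)
    {
        /* XOR with 0x1 flips only the least significant bit:
           each even value pairs up with the odd value above it. */
        for (unsigned x = 0; x < 8; x++)
            printf("%u ^ 0x1 = %u\n", x, x ^ 0x1u);
        return 0;
    }

The output swaps values within the pairs 0/1, 2/3, 4/5, 6/7; only x = 1 is mapped to 0.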

P. S. Hell of a way to implement such a simple condition, I might add. Don't do that. And if you must write complicated code, leave a comment. I beg of you.


This may seem like an oversimplified explanation, but for anyone who would like to go through it slowly, here it is:

^ is the bitwise XOR operator in C, C++ and C#.

A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits.

Exclusive OR is a logical operation that outputs true whenever its two inputs differ (one is true, the other is false).

The truth table of a xor b:

a           b        a xor b
----------------------------
1           1           0
1           0           1
0           1           1
0           0           0
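
If you want to check the table yourself, it can be reproduced with the ^ operator directly (a small sketch of my own, in C):

    #include <stdio.h>

    int main(void)
    {
        /* Print the XOR truth table by evaluating a ^ b
           for each combination of single bits a and b. */
        printf("a b a^b\n");
        for (int a = 1; a >= 0; a--)
            for (int b = 1; b >= 0; b--)
                printf("%d %d  %d\n", a, b, a ^ b);
        return 0;
    }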

So let's illustrate the 0 == ( x ^ 0x1 ) expression at the binary level:

             what? xxxxxxxx (8 bits)
               xor 00000001 (hex 0x1 or 0x01, decimal 1)    
             gives 00000000
---------------------------
the only answer is 00000001

so:

   0 == ( x ^ 0x1 )    =>    x == 1
   0 != ( x ^ 0x1 )    =>    x != 1
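
One more way to see it (my own sketch, assuming the usual two's-complement int): XOR is its own inverse, so XOR-ing both sides of 0 == ( x ^ 0x1 ) with 0x1 yields x == 0x1 directly:

    #include <assert.h>

    int main(void)
    {
        for (int x = -8; x <= 8; x++) {
            /* (x ^ k) ^ k == x: XOR-ing with the same value
               twice restores the original.                   */
            assert(((x ^ 0x1) ^ 0x1) == x);

            /* Hence 0 == (x ^ 0x1) holds exactly when x == 1. */
            assert((0 == (x ^ 0x1)) == (x == 1));
        }
        return 0;
    }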