Why would one use MACRO+0 !=0

In my current codebase I see this following pattern:

#if SOMETHING_SUPPORTED+0 != 0
...
#endif

Unfortunately, this is a very old codebase and nobody knows how or why the pattern started. I think it began in C, was slowly converted to "C with classes", and is now drifting toward C++.

I can't see any obvious advantage of using the previous construct instead of the "classic" form, but maybe I'm missing something:

#if SOMETHING_SUPPORTED
...
#endif

Do you know why would one use #if MACRO+0 != 0 instead of #if MACRO?


Solution 1:

The clue here is that the code base is very old.

This trick likely exists because the code was once ported to a compiler whose very old preprocessor didn't treat undefined macros as 0 in #if conditionals.

That is to say, as of 1989 ANSI C it was standardized that if we have:

#if foo + bar - xyzzy

the directive is subject to macro replacement, so that if foo, bar or xyzzy are macros, they are replaced. Then any remaining identifiers which were not replaced are replaced with 0. So if foo is defined as 42, but bar and xyzzy are not defined at all, we get:

#if 42 + 0 - 0

and not, say, bad syntax:

#if 42 + -

or some other behavior, like diagnostics about bar not being defined.

On a preprocessor where undefined macros are treated as blanks, #if SOMETHING_SUPPORTED expands to just #if, which is then erroneous.

This is the only way in which this IDENT+0 trick makes any real sense. You simply wouldn't ever want to do this if you can rely on preprocessing being ISO C conforming.

The reason is that if SOMETHING_SUPPORTED is expected to carry a numeric value, defining it as a blank is simply a mistake. You ideally want to detect when that has happened and stop the compilation with a diagnostic.

Secondly, if you do support such careless usage, you almost certainly want an explicitly defined but blank symbol to behave as if it had the value 1, not the value 0. Otherwise, you're creating a trap. Someone might do this on the compiler command line:

 -DSOMETHING_SUPPORTED=$SHELL_VAR  # oops, SHELL_VAR expanded to nothing

or in code:

 #define SOMETHING_SUPPORTED  /* oops, forgot "1" */

Nobody is going to add a #define or -D for a symbol with the intent of turning off the feature that it controls! The programmer who inserts a #define SOMETHING_SUPPORTED without the 1 will be surprised by the behavior of

 #if SOMETHING_SUPPORTED+0

which skips the material which was intended to be enabled.

This is why I suspect that few C programmers reading this have ever seen such a usage, and why I suspect that it's just a workaround for preprocessor behavior whose intended effect is to skip the block if SOMETHING_SUPPORTED is missing. The fact that it lays a "programmer trap" is just a side-effect of the workaround.

The way to work around such a preprocessor issue without creating a programmer trap is to have, somewhere early in the translation unit, this:

#ifndef SOMETHING_SUPPORTED
#define SOMETHING_SUPPORTED 0
#endif

and then elsewhere just use #if SOMETHING_SUPPORTED. Maybe that approach didn't occur to the original programmer, or perhaps that programmer thought the +0 trick was neat and valued its self-containment.

Solution 2:

#if X+0 != 0 differs from #if X in the case where X is defined as empty (note: this is different from the case of X not being defined at all), e.g.:

#define X

#if X          // error
#if X+0 != 0   // no error; test fails

It is very common to define empty macros: project configuration may generate some common header that contains a bunch of lines #define USE_FOO, #define USE_BAR to enable features that the system supports, and so on.

The != 0 is redundant; the code could have just been #if X+0.


So, the benefit of using #if X+0 is that if X is defined as empty then compilation continues with the block being skipped, instead of triggering an error.

Whether this is a good idea is debatable. Personally, I would use #ifdef for boolean macros like USE_SOME_FEATURE, and #if for macros whose value might span a range of integers; and I would want to see an error if I accidentally used #if with something defined as empty.

Solution 3:

Let's make a table!

X       #if X     #if X+0 != 0
<undef> false     false
<empty> error     false
0       false     false
1       true      true
2       true      true
a       false     false
xyz     false     false
12a     error     error
12 a    error     error

So the only difference we've found (thanks to commenters) is the case where X is defined but has no value (i.e. is defined as empty). I've never seen the +0 != 0 variant before.