How do you implement "#ifdef" in python?
Solution 1:
If you just want to disable logging methods, use the logging module. If the log level is set to exclude, say, debug statements, then logging.debug will be very close to a no-op (it just checks the log level and returns without interpolating the log string).
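A minimal sketch of that behaviour (the level and messages here are arbitrary):

import logging

logging.basicConfig(level=logging.INFO)   # DEBUG is below INFO, so...

# ...this call is close to a no-op: the level check fails and the call
# returns without ever interpolating "value=%s" with its argument.
logging.debug("value=%s", 42)

logging.info("this message is emitted")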
If you want to actually remove chunks of code at bytecode compile time conditional on a particular variable, your only option is the rather enigmatic __debug__ global variable. This variable is set to True unless the -O flag is passed to Python (or PYTHONOPTIMIZE is set to something nonempty in the environment).
If __debug__ is used in an if statement, the if statement is actually compiled into only the True branch. This particular optimization is as close to a preprocessor macro as Python ever gets.
Note that, unlike macros, your code must still be syntactically correct in both branches of the if.
To show how __debug__ works, consider these two functions:

def f():
    if __debug__: return 3
    else: return 4

def g():
    if True: return 3
    else: return 4
Now check them out with dis:
>>> dis.dis(f)
  2           0 LOAD_CONST               1 (3)
              3 RETURN_VALUE
>>> dis.dis(g)
  2           0 LOAD_GLOBAL              0 (True)
              3 JUMP_IF_FALSE            5 (to 11)
              6 POP_TOP
              7 LOAD_CONST               1 (3)
             10 RETURN_VALUE
        >>   11 POP_TOP
  3          12 LOAD_CONST               2 (4)
             15 RETURN_VALUE
             16 LOAD_CONST               0 (None)
             19 RETURN_VALUE
As you can see, only f is "optimized".
Solution 2:
It is important to understand that in Python def and class are two regular executable statements...
import os

if os.name == "posix":
    def foo(x):
        return x * x
else:
    def foo(x):
        return x + 42
...
So to do what you do with the preprocessor in C and C++, you can just use the regular Python language.
The Python language is fundamentally different from C and C++ on this point because there is no concept of "compile time"; the only two phases are "parse time" (when the source code is read in) and "run time", when the parsed code (normally mostly composed of definition statements, but in fact arbitrary Python code) is executed.
I am using the term "parse time" even though, technically, reading in the source code results in a full compilation to bytecode, because the semantics of C and C++ compilation are different: there, for example, a function is defined during that phase, while in Python function definition happens at runtime.
Even the equivalent of C and C++'s #include (which in Python is import) is a regular statement that is executed at run time, not at compile (parse) time, so it can be placed inside a regular Python if. It is quite common, for example, to have an import inside a try block that provides alternate definitions for some functions if a specific optional Python library is not present on the system.
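A common shape for that pattern (the module names are just an illustration; any optional dependency works the same way):

try:
    # Prefer the optional C-accelerated JSON parser if it is installed.
    import ujson as json
except ImportError:
    # Fall back to the standard library implementation.
    import json

def load_config(path):
    with open(path) as f:
        return json.load(f)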
Finally, note that in Python you can even create new functions and classes at runtime from scratch by using exec, without necessarily having them in your source code. You can also assemble those objects directly in code, because classes and functions are indeed just regular objects (this is normally done only for classes, however).
There are some tools that instead try to treat def and class definitions and import statements as "static", for example to do static analysis of Python code and generate warnings on suspicious fragments, or to create a self-contained deployable package that doesn't depend on a specific Python installation being present on the system to run the program. All of them, however, need to account for the fact that Python is more dynamic than C or C++ in this area, and they also allow adding exceptions for the cases where the automatic analysis fails.