Difference between the built-in pow() and math.pow() for floats, in Python?
Quick Check
From the signatures, we can tell that they are different:
pow(x, y[, z])
math.pow(x, y)
Also, trying it in the shell will give you a quick idea:
>>> pow is math.pow
False
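The most visible consequence of those signatures is the optional third argument, which only the built-in pow() accepts (the exact wording of the error varies slightly between Python versions):

>>> pow(2, 10, 100)
24
>>> math.pow(2, 10, 100)
Traceback (most recent call last):
  ...
TypeError: pow expected 2 arguments, got 3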
Testing the differences
Another way to understand the differences in behaviour between the two functions is to test for them:
import math
import traceback

inf = float("inf")
NaN = float("nan")

vals = [inf, NaN, 0.0, 1.0, 2.2, -1.0, -0.0, -2.2, -inf, 1, 0, 2]

# Build every ordered pair of test values
tests = set()
for vala in vals:
    for valb in vals:
        tests.add((vala, valb))
        tests.add((valb, vala))

for a, b in tests:
    print("math.pow(%f,%f)" % (a, b))
    try:
        print(" %f " % math.pow(a, b))
    except Exception:
        traceback.print_exc()

    print("__builtins__.pow(%f,%f)" % (a, b))
    try:
        print(" %f " % __builtins__.pow(a, b))
    except Exception:
        traceback.print_exc()
We can then notice some subtle differences. For example:
math.pow(0.000000,-2.200000)
ValueError: math domain error
__builtins__.pow(0.000000,-2.200000)
ZeroDivisionError: 0.0 cannot be raised to a negative power
There are other differences, and the test list above is not complete (no long integers, no complex numbers, etc.), but it gives a pragmatic picture of how the two functions behave differently. I would also recommend extending the test to check the type that each function returns. You could probably write something similar that produces a report of the differences between the two functions.
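For instance, here is a minimal sketch of such a report (the helper name compare_pows is made up for illustration; it records either the result's repr and type, or the exception type, and keeps only the pairs where the two functions disagree):

import math

def compare_pows(pairs):
    """Return the pairs for which math.pow and the built-in pow disagree."""
    def outcome(func, a, b):
        # Summarise a call as either its value (repr + type) or its exception type
        try:
            result = func(a, b)
            return ("value", repr(result), type(result).__name__)
        except Exception as exc:
            return ("error", type(exc).__name__)

    report = {}
    for a, b in pairs:
        m = outcome(math.pow, a, b)
        p = outcome(pow, a, b)
        if m != p:
            report[(a, b)] = {"math.pow": m, "pow": p}
    return report

Feeding it the tests set built above gives a compact view of every disagreement, including pairs where only the return type differs (e.g. pow(1, 0) is 1 while math.pow(1, 0) is 1.0).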
math.pow()
math.pow() handles its arguments very differently from the built-in ** or pow(). This comes at the cost of flexibility. Having a look at the CPython source, we can see that the arguments to math.pow() are cast directly to C doubles:
static PyObject *
math_pow(PyObject *self, PyObject *args)
{
    PyObject *ox, *oy;
    double r, x, y;
    int odd_y;

    if (! PyArg_UnpackTuple(args, "pow", 2, 2, &ox, &oy))
        return NULL;
    x = PyFloat_AsDouble(ox);
    y = PyFloat_AsDouble(oy);
    /*...*/
The checks are then carried out against the doubles for validity, and then the result is passed to the underlying C math library.
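One practical consequence of that cast, not visible in the excerpt, is that large integer arguments lose precision because everything goes through C doubles, whereas the built-in pow() keeps integer arithmetic exact:

>>> pow(2, 64) + 1
18446744073709551617
>>> math.pow(2, 64) + 1
1.8446744073709552e+19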
builtin pow()
The built-in pow() (same as the ** operator), on the other hand, behaves very differently: it actually uses the object's own implementation of the ** operator, which can be overridden by the end user if need be by replacing a number's __pow__(), __rpow__() or __ipow__() method.
For the built-in types, it is instructive to study the differences between the power implementations of the various numeric types, for example float, int and complex.
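A quick throwaway class makes that dispatch visible (the name Spy is only for illustration, and the exact TypeError wording depends on the Python version):

>>> class Spy:
...     def __pow__(self, other):
...         return "Spy.__pow__ called"
...     def __rpow__(self, other):
...         return "Spy.__rpow__ called"
...
>>> Spy() ** 2
'Spy.__pow__ called'
>>> pow(2, Spy())
'Spy.__rpow__ called'
>>> math.pow(Spy(), 2)
Traceback (most recent call last):
  ...
TypeError: must be real number, not Spy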
Overriding the default behaviour
Emulating numeric types is described in the Python data model documentation. Essentially, if you are creating a new type for numbers with uncertainty, you will have to provide the __pow__(), __rpow__() and possibly __ipow__() methods for your type. This will allow your numbers to be used with the ** operator:
import math

class Uncertain:
    def __init__(self, x, delta=0):
        self.x = x
        self.delta = delta

    def __pow__(self, other):
        return Uncertain(
            self.x ** other.x,
            Uncertain._propagate_power(self, other)
        )

    @staticmethod
    def _propagate_power(A, B):
        # First-order error propagation for f = A**B:
        # sigma_f**2 = (B*A**(B-1))**2 * sigma_A**2 + (A**B * ln(A))**2 * sigma_B**2
        return math.sqrt(
            ((B.x * (A.x ** (B.x - 1))) ** 2) * A.delta * A.delta +
            (((A.x ** B.x) * math.log(A.x)) ** 2) * B.delta * B.delta
        )
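With that in place, the ** operator works on Uncertain instances directly; a small usage example (the numbers follow from the propagation formula above, with zero uncertainty on the exponent):

>>> a = Uncertain(2.0, 0.1)
>>> b = Uncertain(3.0)
>>> c = a ** b
>>> c.x
8.0
>>> round(c.delta, 3)
1.2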
In order to override math.pow() you will have to monkey-patch it to support your new type:
def new_pow(a, b):
    _a = Uncertain(a)
    _b = Uncertain(b)
    return _a ** _b

math.pow = new_pow
Note that for this to work, you'll have to wrangle the Uncertain class to cope with an Uncertain instance as an input to __init__().
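One way to do that wrangling, sketched here as a drop-in replacement for the __init__() above (just one possible approach):

def __init__(self, x, delta=0):
    # Accept either a plain number or an existing Uncertain instance,
    # so that math.pow(Uncertain(2.0, 0.1), 3) also works after the patch
    if isinstance(x, Uncertain):
        self.x = x.x
        self.delta = x.delta
    else:
        self.x = x
        self.delta = delta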
math.pow() implicitly converts its arguments to float:
>>> from decimal import Decimal
>>> from fractions import Fraction
>>> math.pow(Fraction(1, 3), 2)
0.1111111111111111
>>> math.pow(Decimal(10), -1)
0.1
but the built-in pow does not:
>>> pow(Fraction(1, 3), 2)
Fraction(1, 9)
>>> pow(Decimal(10), -1)
Decimal('0.1')
"My goal is to provide an implementation of both the built-in pow() and of math.pow() for numbers with uncertainty."
You can overload pow and ** by defining __pow__ and __rpow__ methods for your class. However, you can't overload math.pow (without hacks like math.pow = pow). You can make a class usable with math.pow by defining a __float__ conversion, but then you'll lose the uncertainty attached to your numbers.
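For example, a minimal sketch (the class name UncertainFloat is only for illustration): adding __float__ makes math.pow accept the instance, but the uncertainty never reaches the result:

import math

class UncertainFloat:
    def __init__(self, x, delta=0):
        self.x = x
        self.delta = delta

    def __float__(self):
        # math.pow() only ever sees this plain float; self.delta is discarded
        return float(self.x)

print(math.pow(UncertainFloat(2.0, 0.1), 3))   # 8.0 -- just a float, no uncertainty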
Python's built-in pow also accepts an optional third argument: pow(2, 3, 2) performs modular exponentiation, which is faster than computing (2 ** 3) % 2 in two steps (of course, you'll only notice that with large numbers).
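A rough illustration: with the modulus argument the intermediate result never grows, whereas the two-step form first materialises an integer of roughly 845,000 digits:

>>> pow(7, 10**6, 13)
9
>>> (7 ** 10**6) % 13
9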
Another big difference is how the two functions handle other argument types, such as complex numbers.
>>> pow(2, 1+0.5j)
(1.8810842093664877+0.679354250205337j)
>>> math.pow(2, 1+0.5j)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't convert complex to float
However, I have no idea why anyone would prefer math.pow over pow.
Just adding a %timeit comparison:
In [1]: def pair_generator():
...: yield (random.random()*10, random.random()*10)
...:
In [2]: %timeit [a**b for a, b in pair_generator()]
538 ns ± 1.94 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [3]: %timeit [math.pow(a, b) for a, b in pair_generator()]
632 ns ± 2.77 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
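For reference, roughly the same comparison with the standard timeit module instead of IPython (the 1000-pair list here is an addition for illustration, and the absolute numbers will differ from the run above):

import math
import random
import timeit

# Positive bases only, so both variants succeed for every pair
pairs = [(random.random() * 10, random.random() * 10) for _ in range(1000)]

print(timeit.timeit('[a ** b for a, b in pairs]',
                    globals=globals(), number=1000))
print(timeit.timeit('[math.pow(a, b) for a, b in pairs]',
                    globals=globals(), number=1000))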