What is the internal precision of numpy.float128?
What precision does numpy.float128 map to internally? Is it __float128 or long double? Or something else entirely?
A potential follow-on question, if anybody knows: is it safe in C to cast a __float128 to a (16-byte) long double, with just a loss of precision? (This is for interfacing with a C library that operates on long doubles.)
Edit: In response to the comment, the platform is 'Linux-3.0.0-14-generic-x86_64-with-Ubuntu-11.10-oneiric'. Now, if numpy.float128 has varying precision depending on the platform, that is also useful knowledge for me! Just to be clear, it is the precision I am interested in, not the size of an element.
numpy.longdouble refers to whatever type your C compiler calls long double. Currently, this is the only extended-precision floating-point type that numpy supports. On x86-32 and x86-64, this is an 80-bit floating-point type. On more exotic systems it may be something else (IIRC, on SPARC it's an actual 128-bit IEEE float, and on PPC it's double-double). (It may also depend on what OS and compiler you're using -- e.g., MSVC on Windows doesn't support any kind of extended precision at all.)
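You can check what your platform gives you with numpy itself. A minimal sketch (the values shown assume an x86 build; SPARC or PPC would report differently):

    import numpy as np

    # x87 extended precision: 1 sign bit, 15 exponent bits, and a 64-bit
    # mantissa (finfo reports 63 because the integer bit is stored explicitly).
    ld = np.finfo(np.longdouble)
    print(ld.nmant, ld.nexp)   # 63 15 on x86; other values on SPARC/PPC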
Numpy will also export some name like numpy.float96 or numpy.float128. Which of these names is exported depends on your platform/compiler, but whatever you get always refers to the same underlying type as longdouble. Also, these names are highly misleading: they do not indicate a 96- or 128-bit IEEE floating-point format. Instead, they indicate the number of bits of alignment used by the underlying long double type. So e.g. on x86-32, long double is 80 bits, but gets padded up to 96 bits to maintain 32-bit alignment, and numpy calls this float96. On x86-64, long double is again the identical 80-bit type, but now it gets padded up to 128 bits to maintain 64-bit alignment, and numpy calls this float128. There's no extra precision, just extra padding.
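You can verify that the padding carries no precision; a minimal sketch, assuming the x86-64 Linux platform from the question:

    import numpy as np

    # The storage size includes the padding; the precision does not grow.
    print(np.dtype(np.longdouble).itemsize * 8)   # 128 stored bits on x86-64
    print(np.finfo(np.longdouble).nmant)          # still only 63 mantissa bits
    print(np.finfo(np.float64).nmant)             # 52, for comparison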
Recommendation: ignore the float96/float128 names and just use numpy.longdouble. Or, better yet, stick to doubles unless you have a truly compelling reason not to. They'll be faster, more portable, etc.
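On a platform where float128 is exported at all (e.g. the asker's x86-64 Linux), it should be literally the same type object as longdouble, so using the portable name costs nothing:

    import numpy as np

    # float128 is just an alias; there is no separate 128-bit type to reach for.
    print(np.float128 is np.longdouble)   # True on x86-64 Linux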
It's recommended to use longdouble instead of float128, since float128 is quite a mess at the moment. Also beware that Python will cast values through float64 during initialization: a Python float literal is a 64-bit double, so precision is lost before the long double is even constructed.
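A sketch of that pitfall (assuming an 80-bit long double, as on x86 Linux; constructing from a string avoids the intermediate double):

    import numpy as np

    # 0.1 is a Python float (a 64-bit double), so it is rounded to double
    # precision before the longdouble constructor ever sees it:
    a = np.longdouble(0.1)     # widened double: only 52 fraction bits survive
    b = np.longdouble("0.1")   # parsed directly at long double precision
    print(a == b)              # False
    print(b - a)               # tiny, but nonzero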
Inside numpy, it can be a double or a long double. It's defined in npy_common.h and depends on your platform. I don't know whether you can include it out-of-the-box in your source code.
If you don't need performance in this part of your algorithm, a safer way would be to export it to a string and use strtold afterwards.
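On the Python side, the round-trip through a decimal string might look like this (a sketch; it assumes a numpy recent enough, 1.14+, to print shortest round-trippable reprs):

    import numpy as np

    # Serialize to a decimal string; a C consumer can then recover the full
    # value with strtold(s, NULL), with no precision lost along the way.
    x = np.longdouble("3.14159265358979323846")
    s = repr(x)
    print(s)                      # enough digits to reconstruct x exactly
    print(np.longdouble(s) == x)  # True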
TL;DR from the numpy docs: np.longdouble is padded to the system default; np.float96 and np.float128 are provided for users who want specific padding. In spite of the names, np.float96 and np.float128 provide only as much precision as np.longdouble, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.