"sys.getsizeof(int)" returns an unreasonably large value?

I want to check the size of the int data type in Python:

import sys
sys.getsizeof(int)

It comes out to be 436, which doesn't make sense to me. In any case, I want to know how many bytes (2, 4, ...?) an int will take on my machine.


Solution 1:

The short answer

You're getting the size of the class, not of an instance of the class. Call int() to create an instance and measure that:

>>> sys.getsizeof(int())
24

If that size still seems a little bit large, remember that a Python int is very different from an int in (for example) C. In Python, an int is a fully-fledged object. This means there's extra overhead.

Every Python object contains at least a refcount and a reference to the object's type in addition to other storage; on a 64-bit machine, that takes up 16 bytes! The int internals (as determined by the standard CPython implementation) have also changed over time, so that the amount of additional storage taken depends on your version.
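You can see that baseline directly: a bare object carries nothing but the refcount and the type pointer, so on a 64-bit CPython build its reported size is just the 16-byte header:

>>> sys.getsizeof(object())
16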

Some details about int objects in Python 2 and 3

Here's the situation in Python 2. (Some of this is adapted from a blog post by Laurent Luce.) Each integer object is a block of memory with the following structure:

typedef struct {
    PyObject_HEAD
    long ob_ival;
} PyIntObject;

PyObject_HEAD is a macro defining the storage for the refcount and the object type; it's described in some detail in the CPython C API documentation.
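If you're curious, you can peek at those two header fields from Python itself. This is a CPython-specific sketch in Python 3 syntax: it assumes id() returns the object's address and a 64-bit layout (and it predates the refcount changes in CPython 3.12):

import ctypes
import sys

x = 123456  # outside the small-int cache, so a freshly allocated object

# On CPython, id(x) is the object's address. The first machine word of
# the header is ob_refcnt; the second is the ob_type pointer.
refcnt = ctypes.c_ssize_t.from_address(id(x)).value
type_ptr = ctypes.c_void_p.from_address(id(x) + 8).value

print(refcnt, sys.getrefcount(x) - 1)  # the two counts roughly agree
print(type_ptr == id(int))             # True: the type field points at int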

The memory is allocated in large blocks so that there's not an allocation bottleneck for every new integer. The structure for the block looks like this:

struct _intblock {
    struct _intblock *next;
    PyIntObject objects[N_INTOBJECTS];
};
typedef struct _intblock PyIntBlock;

These are all empty at first. Then, each time a new integer is created, Python uses the memory pointed at by next and increments next to point to the next free integer object in the block.
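To make that concrete, here's an illustrative Python 2 snippet (the block allocator went away along with PyIntObject in Python 3). Fresh ints outside the small-int cache tend to be carved out of the same block, so their addresses usually sit sizeof(PyIntObject) bytes apart:

# Python 2 only: values outside the -5..256 cache come from a shared block
xs = [1000 + i for i in range(5)]
print([id(x) for x in xs])
print([id(xs[i + 1]) - id(xs[i]) for i in range(4)])  # typically +/-24 each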

Once a value exceeds the capacity of an ordinary integer (sys.maxint, which is 2 ** 63 - 1 on a 64-bit build), Python 2 transparently switches to the arbitrary-precision long type, and the size of the object grows. On my machine, in Python 2:

>>> sys.getsizeof(0)
24
>>> sys.getsizeof(1)
24
>>> sys.getsizeof(2 ** 62)
24
>>> sys.getsizeof(2 ** 63)
36

In Python 3, the general picture is similar, but the size of an integer grows in a more piecemeal way:

>>> sys.getsizeof(0)
24
>>> sys.getsizeof(1)
28
>>> sys.getsizeof(2 ** 30 - 1)
28
>>> sys.getsizeof(2 ** 30)
32
>>> sys.getsizeof(2 ** 60 - 1)
32
>>> sys.getsizeof(2 ** 60)
36

These results are, of course, all hardware-dependent! YMMV.
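The 4-byte steps suggest a simple model: a fixed header plus one 30-bit digit per 30 bits of magnitude. Here's a hypothetical predictor for a 64-bit CPython 3 build; the constants 24 and 4 are assumptions about that particular layout, not documented API:

import math
import sys

def predicted_size(n):
    # Assumed layout: a 24-byte variable-size object header, plus one
    # 4-byte, 30-bit digit per 30 bits of magnitude (zero stores no digits).
    ndigits = math.ceil(n.bit_length() / 30) if n else 0
    return 24 + 4 * ndigits

for n in (0, 1, 2 ** 30 - 1, 2 ** 30, 2 ** 60 - 1, 2 ** 60):
    print(n, predicted_size(n), sys.getsizeof(n))  # matches the sizes above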

The variability in integer size in Python 3 hints that ints behave more like variable-length types (such as lists). And indeed, that turns out to be true. Here's the definition of the C struct for int objects in Python 3:

struct _longobject {
    PyObject_VAR_HEAD
    digit ob_digit[1];
};

The comments that accompany this definition summarize Python 3's representation of integers. Zero is represented not by a stored value, but by an object whose ob_size is zero (which is why sys.getsizeof(0) is 24 bytes while sys.getsizeof(1) is 28). Negative numbers are represented by objects with a negative ob_size! So weird.
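You can poke at ob_size directly to confirm this. Again, a CPython-specific sketch: it assumes a 64-bit build older than 3.12 (where the internal layout changed), with ob_size stored as the third machine word of the header:

import ctypes

def ob_size(n):
    # Assumed layout: refcount (8 bytes), type pointer (8 bytes), then
    # ob_size as a signed machine word at offset 16.
    return ctypes.c_ssize_t.from_address(id(n) + 16).value

print(ob_size(0))        # 0: zero stores no digits at all
print(ob_size(1))        # 1: one 30-bit digit
print(ob_size(-1))       # -1: the sign lives in ob_size
print(ob_size(2 ** 40))  # 2: 41 bits need two 30-bit digits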