Dictionary vs Object - which is more efficient and why?
Which is more efficient in Python in terms of memory usage and CPU consumption - a dictionary or an object?

Background: I have to load a huge amount of data into Python. I created an object that is just a field container. Creating 4M instances and putting them into a dictionary took about 10 minutes and ~6 GB of memory. Once the dictionary is ready, accessing it is a blink of an eye.
Example: To check the performance I wrote two simple programs that do the same thing - one using objects, the other a dictionary:
Object (execution time ~18 sec):

class Obj(object):
    def __init__(self, i):
        self.i = i
        self.l = []

all = {}
for i in range(1000000):
    all[i] = Obj(i)
Dictionary (execution time ~12 sec):

all = {}
for i in range(1000000):
    o = {}
    o['i'] = i
    o['l'] = []
    all[i] = o
Question: Am I doing something wrong, or is a dictionary just faster than an object? If the dictionary indeed performs better, can somebody explain why?
Have you tried using __slots__?
From the documentation:
By default, instances of both old and new-style classes have a dictionary for attribute storage. This wastes space for objects having very few instance variables. The space consumption can become acute when creating large numbers of instances.
The default can be overridden by defining __slots__ in a new-style class definition. The __slots__ declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because __dict__ is not created for each instance.
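A minimal sketch of what the documentation describes (the exact error message varies by Python version):

class Slotted(object):
    __slots__ = ('i', 'l')  # only these attributes are allowed

    def __init__(self, i):
        self.i = i
        self.l = []

s = Slotted(1)
print(hasattr(s, '__dict__'))  # False - no per-instance dict is created
try:
    s.x = 42                   # not listed in __slots__
except AttributeError as e:
    print(e)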
So does this save time as well as memory?
Comparing the three approaches on my computer:
test_slots.py:
class Obj(object):
    __slots__ = ('i', 'l')

    def __init__(self, i):
        self.i = i
        self.l = []

all = {}
for i in range(1000000):
    all[i] = Obj(i)
test_obj.py:
class Obj(object):
    def __init__(self, i):
        self.i = i
        self.l = []

all = {}
for i in range(1000000):
    all[i] = Obj(i)
test_dict.py:
all = {}
for i in range(1000000):
    o = {}
    o['i'] = i
    o['l'] = []
    all[i] = o
test_namedtuple.py (namedtuple is supported in Python 2.6+):

import collections

Obj = collections.namedtuple('Obj', 'i l')

all = {}
for i in range(1000000):
    all[i] = Obj(i, [])
Run benchmark (using CPython 2.5):
$ lshw | grep product | head -n 1
product: Intel(R) Pentium(R) M processor 1.60GHz
$ python --version
Python 2.5
$ time python test_obj.py && time python test_dict.py && time python test_slots.py
real 0m27.398s (using 'normal' object)
real 0m16.747s (using __dict__)
real 0m11.777s (using __slots__)
Using CPython 2.6.2, including the named tuple test:
$ python --version
Python 2.6.2
$ time python test_obj.py && time python test_dict.py && time python test_slots.py && time python test_namedtuple.py
real 0m27.197s (using 'normal' object)
real 0m17.657s (using __dict__)
real 0m12.249s (using __slots__)
real 0m12.262s (using namedtuple)
So yes (not really a surprise), using __slots__ is a performance optimization. Using a named tuple gives similar performance to __slots__.
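To sanity-check the memory side as well, here is a rough sketch using sys.getsizeof (available from Python 2.6; exact numbers vary by version and platform, and getsizeof does not follow references, so the plain instance's attribute dict is counted explicitly):

import sys

class Plain(object):
    def __init__(self):
        self.i = 0
        self.l = []

class Slotted(object):
    __slots__ = ('i', 'l')

    def __init__(self):
        self.i = 0
        self.l = []

p, s = Plain(), Slotted()
print(sys.getsizeof(p) + sys.getsizeof(p.__dict__))  # instance + its __dict__
print(sys.getsizeof(s))                              # no __dict__ to add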
Attribute access in an object uses dictionary access behind the scenes, so by using attribute access you are adding extra overhead. On top of that, the object case incurs additional overhead from extra memory allocations and code execution (e.g. of the __init__ method).
In your code, if o is an Obj instance, o.attr is equivalent to o.__dict__['attr'], with a small amount of extra overhead.
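A minimal sketch of that equivalence:

class Obj(object):
    def __init__(self, i):
        self.i = i

o = Obj(1)
print(o.i)                     # attribute access
print(o.__dict__['i'])         # the dictionary lookup it performs behind the scenes
print(o.i is o.__dict__['i'])  # True - both return the same stored object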
Have you considered using a namedtuple? (link for Python 2.4/2.5)

It's the new standard way of representing structured data that gives you the performance of a tuple and the convenience of a class.

Its only downside compared with dictionaries is that (like tuples) it doesn't give you the ability to change attributes after creation.
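For illustration, a minimal sketch (note that _replace, part of the namedtuple API, returns a new modified copy rather than mutating in place):

import collections

Obj = collections.namedtuple('Obj', 'i l')

o = Obj(1, [])
print(o.i)            # attribute-style access with tuple performance
try:
    o.i = 2           # namedtuple fields are read-only
except AttributeError as e:
    print(e)
o2 = o._replace(i=2)  # the supported way to get a modified copy
print(o2.i)           # 2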