How expensive are Python dictionaries to handle?
As the title states, how expensive are Python dictionaries to handle? Creation, insertion, updating, deletion, all of it.
Asymptotic time complexities are interesting in themselves, but so is how they compare to e.g. tuples or normal lists.
Solution 1:
dicts (just like sets when you don't need to associate a value to each key but simply record if a key is present or absent) are pretty heavily optimized. Creating a dict from N keys or key/value pairs is O(N), fetching is O(1), putting is amortized O(1), and so forth. Can't really do anything substantially better for any non-tiny container!
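You can spot-check those asymptotics with timeit rather than take them on faith; here's a minimal sketch (the variable names are mine) that times a worst-case membership test as N grows -- dict lookups stay roughly flat, list scans grow with N:
import timeit

for n in (10, 1_000, 100_000):
    d = dict.fromkeys(range(n))
    lst = list(range(n))
    # look up the last element: worst case for a list's linear scan,
    # just another hash lookup for the dict
    dict_t = timeit.timeit('n - 1 in d', globals={'d': d, 'n': n}, number=100_000)
    list_t = timeit.timeit('n - 1 in lst', globals={'lst': lst, 'n': n}, number=100_000)
    print(f'n={n:>7}: dict {dict_t:.4f}s  list {list_t:.4f}s')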
For tiny containers, you can easily check the boundaries with timeit-based benchmarks. For example:
$ python -mtimeit -s'empty=()' '23 in empty'
10000000 loops, best of 3: 0.0709 usec per loop
$ python -mtimeit -s'empty=set()' '23 in empty'
10000000 loops, best of 3: 0.101 usec per loop
$ python -mtimeit -s'empty=[]' '23 in empty'
10000000 loops, best of 3: 0.0716 usec per loop
$ python -mtimeit -s'empty=dict()' '23 in empty'
10000000 loops, best of 3: 0.0926 usec per loop
This shows that checking membership in empty lists or tuples is faster, by a whopping 20-30 nanoseconds, than checking membership in empty sets or dicts; when every nanosecond matters, this info might be relevant to you. Moving up a bit...:
$ python -mtimeit -s'empty=range(7)' '23 in empty'
1000000 loops, best of 3: 0.318 usec per loop
$ python -mtimeit -s'empty=tuple(range(7))' '23 in empty'
1000000 loops, best of 3: 0.311 usec per loop
$ python -mtimeit -s'empty=set(range(7))' '23 in empty'
10000000 loops, best of 3: 0.109 usec per loop
$ python -mtimeit -s'empty=dict.fromkeys(range(7))' '23 in empty'
10000000 loops, best of 3: 0.0933 usec per loop
You see that for 7-item containers (not including the one of interest) the balance of performance has shifted, and now dicts and sets have the advantage by HUNDREDS of nanoseconds. When the item of interest IS present:
$ python -mtimeit -s'empty=range(7)' '5 in empty'
1000000 loops, best of 3: 0.246 usec per loop
$ python -mtimeit -s'empty=tuple(range(7))' '5 in empty'
1000000 loops, best of 3: 0.25 usec per loop
$ python -mtimeit -s'empty=dict.fromkeys(range(7))' '5 in empty'
10000000 loops, best of 3: 0.0921 usec per loop
$ python -mtimeit -s'empty=set(range(7))' '5 in empty'
10000000 loops, best of 3: 0.112 usec per loop
Dicts and sets don't gain much, but tuples and lists do (a linear scan can stop as soon as it finds the item), even though dicts and sets remain vastly faster.
And so on, and so forth -- timeit makes it trivially easy to run micro-benchmarks (strictly speaking, warranted only for those exceedingly rare situations where nanoseconds DO matter, but easy enough to do that it's no big hardship to check for OTHER cases ;-).
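If you'd rather stay inside Python than shell out, the timeit module runs the same micro-benchmarks programmatically; a rough equivalent of the last pair of runs above:
import timeit

for setup in ('empty = set(range(7))', 'empty = dict.fromkeys(range(7))'):
    # repeat three times and keep the best, like the CLI's "best of 3"
    best = min(timeit.repeat('5 in empty', setup=setup, number=1_000_000, repeat=3))
    print(f'{setup!r}: {best:.3f} sec per 1,000,000 loops')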
Solution 2:
Dictionaries are one of the more heavily tuned parts of Python, since they underlie so much of the language. For example, members of a class and variables in a stack frame are both stored internally in dictionaries. They will be a good choice if they are the right data structure.
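You can see the class case directly: an ordinary instance keeps its attributes in a plain dict (a small sketch, the class name is mine):
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(2, 3)
print(p.__dict__)              # {'x': 2, 'y': 3}: attributes live in a dict
print(vars(p) is p.__dict__)   # True: vars() returns that same dict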
Choosing between lists and dicts based on performance seems odd: they do different things. Maybe you can tell us more about the problem you are trying to solve.