NOTES ON OPTIMIZING DICTIONARIES
================================


Principal Use Cases for Dictionaries
------------------------------------

Passing keyword arguments
    Typically, one read and one write for 1 to 3 elements.
    Occurs frequently in normal python code.

Class method lookup
    Dictionaries vary in size with 8 to 16 elements being common.
    Usually written once with many lookups.
    When base classes are used, there are many failed lookups
    followed by a lookup in a base class.

Instance attribute lookup and Global variables
    Dictionaries vary in size.  4 to 10 elements are common.
    Both reads and writes are common.

Builtins
    Frequent reads.  Almost never written.
    Size: 126 interned strings (as of Py2.3b1).
    A few keys are accessed much more frequently than others.

Uniquification
    Dictionaries of any size.  Bulk of work is in creation.
    Repeated writes to a smaller set of keys.
    Single read of each key.
    Some use cases have two consecutive accesses to the same key.

    * Removing duplicates from a sequence.
        dict.fromkeys(seqn).keys()

    * Counting elements in a sequence.
        for e in seqn:
            d[e] = d.get(e, 0) + 1

    * Accumulating references in a dictionary of lists:

        for pagenumber, page in enumerate(pages):
            for word in page:
                d.setdefault(word, []).append(pagenumber)

    Note, the second example is a use case characterized by a get and set
    to the same key.  There are similar use cases with a __contains__
    followed by a get, set, or del to the same key.  Part of the
    justification for d.setdefault is combining the two lookups into one.

Membership Testing
    Dictionaries of any size.  Created once and then rarely changes.
    Single write to each key.
    Many calls to __contains__() or has_key().
    Similar access patterns occur with replacement dictionaries
    such as with the % formatting operator.
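
    For illustration, a membership test and a replacement dictionary
    used with the % operator (wordlist, candidate, and stats are
    invented names for this example):

        words = dict.fromkeys(wordlist)         # single write per key
        if candidate in words:                  # many __contains__() calls
            print 'found'
        print '%(user)s: %(count)d' % stats     # replacement dictionary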

Dynamic Mappings
    Characterized by deletions interspersed with adds and replacements.
    Performance benefits greatly from the re-use of dummy entries.


Data Layout (assuming a 32-bit box with 64 bytes per cache line)
----------------------------------------------------------------

Smalldicts (8 entries) are attached to the dictobject structure
and the whole group nearly fills two consecutive cache lines.

Larger dicts use the first half of the dictobject structure (one cache
line) and a separate, contiguous block of entries (at 12 bytes each
for a total of 5.333 entries per cache line).
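
The arithmetic behind those figures, assuming a 4-byte hash field and
4-byte key and value pointers per entry (a sketch; the names in the
comments follow the dictobject entry layout):

    CACHE_LINE = 64                      # bytes per cache line
    ENTRY = 4 + 4 + 4                    # me_hash + me_key + me_value
    print CACHE_LINE / float(ENTRY)      # 5.333 entries per cache line
    print 8 * ENTRY / float(CACHE_LINE)  # each 8 entries span 1.5 lines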


Tunable Dictionary Parameters
-----------------------------

* PyDict_MINSIZE.  Currently set to 8.
  Must be a power of two.  New dicts have to zero-out every cell.
  Each additional 8 entries consumes 1.5 cache lines.  Increasing
  improves the sparseness of small dictionaries but costs time to
  read in the additional cache lines if they are not already in
  cache.  That case is common when keyword arguments are passed.

* Maximum dictionary load in PyDict_SetItem.  Currently set to 2/3.
  Increasing this ratio makes dictionaries more dense, resulting
  in more collisions.  Decreasing it improves sparseness at the
  expense of spreading entries over more cache lines and at the
  cost of total memory consumed.

  The load test occurs in highly time sensitive code.  Efforts
  to make the test more complex (for example, varying the load
  for different sizes) have degraded performance.

* Growth rate upon hitting maximum load.  Currently set to *2.
  Raising this to *4 results in half the number of resizes, less
  effort to resize, better sparseness for some (but not all) dict
  sizes, and potentially doubled memory consumption, depending on
  the size of the dictionary.  Setting to *4 eliminates every
  other resize step.  (A toy model of the load test and growth
  policy is sketched below.)

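As an illustrative Python model of the 2/3 load test and the
power-of-two growth policy described above (a sketch of the policy,
not the actual C code):

    def needs_resize(fill, mask):
        # Trigger a resize once the table is more than 2/3 full;
        # fill counts active plus dummy slots, mask + 1 is table size.
        return fill * 3 >= (mask + 1) * 2

    def new_size(used, growth=2, minsize=8):
        # Choose the smallest power of two holding growth * used entries.
        newsize = minsize
        while newsize <= growth * used:
            newsize <<= 1
        return newsize
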
Tune-ups should be measured across a broad range of applications and
use cases.  A change to any parameter will help in some situations and
hurt in others.  The key is to find settings that help the most common
cases and do the least damage to the less common cases.  Results will
vary dramatically depending on the exact number of keys, whether the
keys are all strings, whether reads or writes dominate, and the exact
hash values of the keys (some sets of values have fewer collisions than
others).  Any one test or benchmark is likely to prove misleading.

While making a dictionary more sparse reduces collisions, it impairs
iteration and key listing.  Those methods loop over every potential
entry.  Doubling the size of a dictionary results in twice as many
non-overlapping memory accesses for keys(), items(), values(),
__iter__(), iterkeys(), iteritems(), itervalues(), and update().
Also, every dictionary is iterated over at least twice, once by the
memset() when it is created and once by dealloc().


Results of Cache Locality Experiments
-------------------------------------

When an entry is retrieved from memory, 4.333 adjacent entries are also
retrieved into a cache line.  Since accessing items in cache is *much*
cheaper than a cache miss, an enticing idea is to probe the adjacent
entries as a first step in collision resolution.  Unfortunately, the
introduction of any regularity into collision searches results in more
collisions than the current random chaining approach.

Exploiting cache locality at the expense of additional collisions fails
to pay off when the entries are already loaded in cache (the expense
is paid with no compensating benefit).  This occurs in small dictionaries
where the whole dictionary fits into a pair of cache lines.  It also
occurs frequently in large dictionaries which have a common access pattern
where some keys are accessed much more frequently than others.  The
more popular entries *and* their collision chains tend to remain in cache.

To exploit cache locality, change the collision resolution section
in lookdict() and lookdict_string().  Set i^=1 at the top of the
loop and move the i = (i << 2) + i + perturb + 1 to an unrolled
version of the loop (see the sketch below).
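
One way to read that suggestion, rendered as an illustrative Python
generator (a sketch of the probe order, not the C implementation):

    def probe_order(hashvalue, mask, PERTURB_SHIFT=5):
        # Try each slot's cache-line partner (i ^ 1) before advancing
        # with the usual perturbed recurrence.
        i = hashvalue & mask
        perturb = hashvalue
        while 1:
            yield i & mask
            yield (i ^ 1) & mask    # adjacent entry, same cache line
            i = (i << 2) + i + perturb + 1
            perturb >>= PERTURB_SHIFT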

This optimization strategy can be leveraged in several ways:

* If the dictionary is kept sparse (through the tunable parameters),
  then the occurrence of additional collisions is lessened.

* If lookdict() and lookdict_string() are specialized for small dicts
  and for large dicts, then the versions for large dicts can be given
  an alternate search strategy without increasing collisions in small
  dicts which already have the maximum benefit of cache locality.

* If the use case for a dictionary is known to have a random key
  access pattern (as opposed to a more common pattern with a Zipf's law
  distribution), then there will be more benefit for large dictionaries
  because any given key is no more likely than another to already be
  in cache.

* In use cases with paired accesses to the same key, the second access
  is always in cache and gets no benefit from efforts to further improve
  cache locality.

Optimizing the Search of Small Dictionaries
-------------------------------------------

If lookdict() and lookdict_string() are specialized for smaller dictionaries,
then a custom search approach can be implemented that exploits the small
search space and cache locality.

* The simplest example is a linear search of contiguous entries.  This is
  simple to implement, guaranteed to terminate rapidly, never searches
  the same entry twice, and precludes the need to check for dummy entries
  (see the sketch after this list).

* A more advanced example is a self-organizing search so that the most
  frequently accessed entries get probed first.  The organization
  adapts if the access pattern changes over time.  Treaps are ideally
  suited for self-organization with the most common entries at the
  top of the heap and a rapid binary search pattern.  Most probes and
  results are located near the top of the tree, allowing them all to
  fit in one or two cache lines.

* Also, small dictionaries may be made more dense, perhaps filling all
  eight cells to take the maximum advantage of two cache lines.

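A minimal sketch of the linear-search idea in Python (illustrative
only; the real routine would be C, and entries stands for a contiguous
array of (hash, key, value) triples):

    def small_lookup(entries, key, hashvalue):
        # Scan every slot in order: no probing, no dummy-entry test,
        # and no slot is ever examined twice.
        for h, k, v in entries:
            if k is key or (h == hashvalue and k == key):
                return v
        raise KeyError(key)
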

Strategy Pattern
----------------

Consider allowing the user to set the tunable parameters or to select a
particular search method.  Since some dictionary use cases have known
sizes and access patterns, the user may be able to provide useful hints.

1) For example, if membership testing or lookups dominate runtime and memory
   is not at a premium, the user may benefit from setting the maximum load
   ratio at 5% or 10% instead of the usual 66.7%.  This will sharply
   curtail the number of collisions but will increase iteration time.
   The builtin namespace is a prime example of a dictionary that can
   benefit from being highly sparse.

2) Dictionary creation time can be shortened in cases where the ultimate
   size of the dictionary is known in advance.  The dictionary can be
   pre-sized so that no resize operations are required during creation.
   Not only does this save resizes, but the key insertion will go
   more quickly because the first half of the keys will be inserted into
   a more sparse environment than before.  The preconditions for this
   strategy arise whenever a dictionary is created from a key or item
   sequence and the number of *unique* keys is known.  (The sizing
   arithmetic is sketched after this list.)

3) If the key space is large and the access pattern is known to be random,
   then search strategies exploiting cache locality can be fruitful.
   The preconditions for this strategy arise in simulations and
   numerical analysis.

4) If the keys are fixed and the access pattern strongly favors some of
   the keys, then the entries can be stored contiguously and accessed
   with a linear search or treap.  This exploits knowledge of the data,
   cache locality, and a simplified search routine.  It also eliminates
   the need to test for dummy entries on each probe.  The preconditions
   for this strategy arise in symbol tables and in the builtin dictionary.
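
For strategy 2), a sketch of the sizing arithmetic such a pre-sizing
hint would use, assuming the 2/3 maximum load described earlier
(illustrative Python; no such hook currently exists):

    def presize_target(nkeys, minsize=8):
        # Smallest power of two that keeps nkeys under a 2/3 load.
        newsize = minsize
        while nkeys * 3 >= newsize * 2:
            newsize <<= 1
        return newsize

    presize_target(1000)    # -> 2048, so no resizes during creation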


Readonly Dictionaries
---------------------
Some dictionary use cases pass through a build stage and then move to a
more heavily exercised lookup stage with no further changes to the
dictionary.

An idea that emerged on python-dev is to be able to convert a dictionary
to a read-only state.  This can help prevent programming errors and also
provide knowledge that can be exploited for lookup optimization.

The dictionary can be immediately rebuilt (eliminating dummy entries),
resized (to an appropriate level of sparseness), and the keys can be
jostled (to minimize collisions).  The lookdict() routine can then
eliminate the test for dummy entries (saving about 1/4 of the time
spent in the collision resolution loop).
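
A hypothetical usage (the freeze() method below is invented for
illustration, and items and queries are placeholders; no such API
exists):

    d = {}
    for key, value in items:    # build stage: writes only
        d[key] = value
    d.freeze()                  # hypothetical: rebuild without dummies,
                                # resize, and bar further modification
    found = [d[k] for k in queries]     # lookup stage: reads only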

An additional possibility is to insert links into the empty spaces
so that dictionary iteration can proceed in len(d) steps instead of
(mp->mask + 1) steps.  Alternatively, a separate tuple of keys can be
kept just for iteration.


Caching Lookups
---------------
The idea is to exploit key access patterns by anticipating future lookups
based on previous lookups.

The simplest incarnation is to save the most recently accessed entry.
This gives optimal performance for use cases where every get is followed
by a set or del to the same key.
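
A minimal Python sketch of that incarnation (illustrative only; the
real version would cache the entry slot inside the C lookup routine so
that a following set or del skips the probe entirely, and bulk methods
such as update() are ignored here for brevity):

    class CachingDict(dict):
        # Remember the most recently accessed (key, value) pair.
        def __init__(self, *args, **kwds):
            dict.__init__(self, *args, **kwds)
            self._last = None

        def __getitem__(self, key):
            last = self._last
            if last is not None and last[0] == key:
                return last[1]
            value = dict.__getitem__(self, key)
            self._last = (key, value)
            return value

        def __setitem__(self, key, value):
            dict.__setitem__(self, key, value)
            self._last = (key, value)

        def __delitem__(self, key):
            dict.__delitem__(self, key)
            self._last = None           # invalidate the cache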