NOTES ON OPTIMIZING DICTIONARIES
================================


Principal Use Cases for Dictionaries
------------------------------------

Passing keyword arguments
    Typically, one read and one write for 1 to 3 elements.
    Occurs frequently in normal python code.

Class method lookup
    Dictionaries vary in size with 8 to 16 elements being common.
    Usually written once with many lookups.
    When base classes are used, there are many failed lookups
    followed by a lookup in a base class.

Instance attribute lookup and Global variables
    Dictionaries vary in size.  4 to 10 elements are common.
    Both reads and writes are common.

Builtins
    Frequent reads.  Almost never written.
    Size 126 interned strings (as of Py2.3b1).
    A few keys are accessed much more frequently than others.

Uniquification
    Dictionaries of any size.  Bulk of work is in creation.
    Repeated writes to a smaller set of keys.
    Single read of each key.
    Some use cases have two consecutive accesses to the same key.

    * Removing duplicates from a sequence.
        dict.fromkeys(seqn).keys()

    * Counting elements in a sequence.
        for e in seqn:
            d[e] = d.get(e, 0) + 1

    * Accumulating references in a dictionary of lists:

        for pagenumber, page in enumerate(pages):
            for word in page:
                d.setdefault(word, []).append(pagenumber)

    Note, the second example is a use case characterized by a get and set
    to the same key.  There are similar use cases with a __contains__
    followed by a get, set, or del to the same key.  Part of the
    justification for d.setdefault is combining the two lookups into one.

Membership Testing
    Dictionaries of any size.  Created once and then rarely changed.
    Single write to each key.
    Many calls to __contains__() or has_key().
    Similar access patterns occur with replacement dictionaries
    such as with the % formatting operator.
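
    For example (a sketch; the names are illustrative only):

        d = dict.fromkeys(['yes', 'no', 'maybe'])    # built once
        'yes' in d                                   # then read many times
        'checked %(n)d entries' % {'n': 3}           # replacement dictionary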

Dynamic Mappings
    Characterized by deletions interspersed with adds and replacements.
    Performance benefits greatly from the re-use of dummy entries.


Data Layout (assuming a 32-bit box with 64 bytes per cache line)
----------------------------------------------------------------

Small dicts (8 entries) are attached to the dictobject structure
and the whole group nearly fills two consecutive cache lines.

Larger dicts use the first half of the dictobject structure (one cache
line) and a separate, contiguous block of entries (at 12 bytes each
for a total of 5.333 entries per cache line).
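
The arithmetic behind those numbers, as a quick check (assuming the
12-byte entry is the me_hash, me_key, and me_value fields at 4 bytes
apiece on a 32-bit box):

    entry_size = 3 * 4      # me_hash + me_key + me_value
    cache_line = 64
    print(cache_line / float(entry_size))      # 5.333 entries per line
    print(8 * entry_size / float(cache_line))  # 8 entries span 1.5 lines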


Tunable Dictionary Parameters
-----------------------------

* PyDict_MINSIZE.  Currently set to 8.
    Must be a power of two.  New dicts have to zero-out every cell.
    Each additional 8 consumes 1.5 cache lines.  Increasing improves
    the sparseness of small dictionaries but costs time to read in
    the additional cache lines if they are not already in cache.
    That case is common when keyword arguments are passed.

* Maximum dictionary load in PyDict_SetItem.  Currently set to 2/3.
    Increasing this ratio makes dictionaries more dense, resulting
    in more collisions.  Decreasing it improves sparseness at the
    expense of spreading entries over more cache lines and at the
    cost of total memory consumed.

    The load test occurs in highly time-sensitive code.  Efforts
    to make the test more complex (for example, varying the load
    for different sizes) have degraded performance.

* Growth rate upon hitting maximum load.  Currently set to *2.
    Raising this to *4 results in half the number of resizes,
    less effort to resize, better sparseness for some (but not
    all) dict sizes, and potentially double memory consumption
    depending on the size of the dictionary.  Setting to *4
    eliminates every other resize step.

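The effect of the growth factor can be checked with a rough simulation
of the policy above (a model only, not the C code; the real dictresize()
also rounds the new size up to a power of two):

    def count_resizes(n_keys, minsize=8, growth=2):
        size, resizes = minsize, 0
        for used in range(1, n_keys + 1):
            if used * 3 >= size * 2:        # the 2/3 load test
                size *= growth
                resizes += 1
        return resizes

    print(count_resizes(100000, growth=2))   # 15 resizes
    print(count_resizes(100000, growth=4))   # 8 resizes, roughly half
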
Tune-ups should be measured across a broad range of applications and
use cases.  A change to any parameter will help in some situations and
hurt in others.  The key is to find settings that help the most common
cases and do the least damage to the less common cases.  Results will
vary dramatically depending on the exact number of keys, whether the
keys are all strings, whether reads or writes dominate, and the exact
hash values of the keys (some sets of values have fewer collisions than
others).  Any one test or benchmark is likely to prove misleading.

While making a dictionary more sparse reduces collisions, it impairs
iteration and key listing.  Those methods loop over every potential
entry.  Doubling the size of a dictionary results in twice as many
non-overlapping memory accesses for keys(), items(), values(),
__iter__(), iterkeys(), iteritems(), itervalues(), and update().


Results of Cache Locality Experiments
-------------------------------------

When an entry is retrieved from memory, 4.333 adjacent entries are also
retrieved into a cache line.  Since accessing items in cache is *much*
cheaper than a cache miss, an enticing idea is to probe the adjacent
entries as a first step in collision resolution.  Unfortunately, the
introduction of any regularity into collision searches results in more
collisions than the current random chaining approach.

Exploiting cache locality at the expense of additional collisions fails
to pay off when the entries are already loaded in cache (the expense
is paid with no compensating benefit).  This occurs in small dictionaries
where the whole dictionary fits into a pair of cache lines.  It also
occurs frequently in large dictionaries which have a common access pattern
where some keys are accessed much more frequently than others.  The
more popular entries *and* their collision chains tend to remain in cache.

To exploit cache locality, change the collision resolution section
in lookdict() and lookdict_string().  Set i^=1 at the top of the
loop and move the i = (i << 2) + i + perturb + 1 to an unrolled
version of the loop.  A rough rendering of the resulting probe
order is sketched below.

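This is a Python model of that modified probe order, for illustration
only (the real loop is C; PERTURB_SHIFT is 5 in dictobject.c):

    PERTURB_SHIFT = 5

    def probe_order(h, mask, nprobes):
        # Visit a slot, then its neighbor (i ^ 1), then continue with
        # the usual perturb recurrence.
        i = perturb = h
        slots = [i & mask]
        while len(slots) < nprobes:
            slots.append((i & mask) ^ 1)     # adjacent, likely cached
            i = (i << 2) + i + perturb + 1
            perturb >>= PERTURB_SHIFT
            slots.append(i & mask)
        return slots[:nprobes]

    print(probe_order(123456789, 31, 6))     # [21, 20, 31, 30, 4, 5]
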
This optimization strategy can be leveraged in several ways:

* If the dictionary is kept sparse (through the tunable parameters),
  then the occurrence of additional collisions is lessened.

* If lookdict() and lookdict_string() are specialized for small dicts
  and for large dicts, then the versions for large dicts can be given
  an alternate search strategy without increasing collisions in small
  dicts which already have the maximum benefit of cache locality.

* If the use case for a dictionary is known to have a random key
  access pattern (as opposed to a more common pattern with a Zipf's law
  distribution), then there will be more benefit for large dictionaries
  because any given key is no more likely than another to already be
  in cache.

* In use cases with paired accesses to the same key, the second access
  is always in cache and gets no benefit from efforts to further improve
  cache locality.

Optimizing the Search of Small Dictionaries
-------------------------------------------

If lookdict() and lookdict_string() are specialized for smaller dictionaries,
then a custom search approach can be implemented that exploits the small
search space and cache locality.

* The simplest example is a linear search of contiguous entries.  This is
  simple to implement, guaranteed to terminate rapidly, never searches
  the same entry twice, and precludes the need to check for dummy entries
  (see the sketch after this list).

* A more advanced example is a self-organizing search so that the most
  frequently accessed entries get probed first.  The organization
  adapts if the access pattern changes over time.  Treaps are ideally
  suited for self-organization with the most common entries at the
  top of the heap and a rapid binary search pattern.  Most probes and
  results are located near the top of the tree, allowing them all to
  fit in one or two cache lines.

* Also, small dictionaries may be made more dense, perhaps filling all
  eight cells to take the maximum advantage of two cache lines.
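
Here is a pure-Python sketch of the linear-search idea; the list of
triples stands in for the contiguous C entry table:

    def small_lookup(entries, h, key):
        # 'entries' is a short, dense list of (hash, key, value) triples;
        # a deletion removes its triple outright, so there are no dummy
        # entries and every slot is visited at most once.
        for eh, ekey, evalue in entries:
            if eh == h and (ekey is key or ekey == key):
                return evalue
        raise KeyError(key)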


Strategy Pattern
----------------

Consider allowing the user to set the tunable parameters or to select a
particular search method.  Since some dictionary use cases have known
sizes and access patterns, the user may be able to provide useful hints
(a hypothetical interface is sketched after the numbered cases below).

1) For example, if membership testing or lookups dominate runtime and memory
   is not at a premium, the user may benefit from setting the maximum load
   ratio at 5% or 10% instead of the usual 66.7%.  This will sharply
   curtail the number of collisions but will increase iteration time.

2) Dictionary creation time can be shortened in cases where the ultimate
   size of the dictionary is known in advance.  The dictionary can be
   pre-sized so that no resize operations are required during creation.
   Not only does this save resizes, but the key insertion will go
   more quickly because the first half of the keys will be inserted into
   a more sparse environment than before.  The preconditions for this
   strategy arise whenever a dictionary is created from a key or item
   sequence and the number of unique keys is known.

3) If the key space is large and the access pattern is known to be random,
   then search strategies exploiting cache locality can be fruitful.
   The preconditions for this strategy arise in simulations and
   numerical analysis.

4) If the keys are fixed and the access pattern strongly favors some of
   the keys, then the entries can be stored contiguously and accessed
   with a linear search or treap.  This exploits knowledge of the data,
   cache locality, and a simplified search routine.  It also eliminates
   the need to test for dummy entries on each probe.  The preconditions
   for this strategy arise in symbol tables and in the builtin dictionary.
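
None of the following API exists; the names are purely hypothetical,
only to make the idea of user-supplied hints concrete:

    d = dict.presized(len(items))    # hypothetical: case 2, known final size
    d.set_max_load(0.10)             # hypothetical: case 1, lookup-heavy use
    d.select_search('linear')        # hypothetical: case 4, fixed hot keys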


Readonly Dictionaries
---------------------
Some dictionary use cases pass through a build stage and then move to a
more heavily exercised lookup stage with no further changes to the
dictionary.

An idea that emerged on python-dev is to be able to convert a dictionary
to a read-only state.  This can help prevent programming errors and also
provide knowledge that can be exploited for lookup optimization.

The dictionary can be immediately rebuilt (eliminating dummy entries),
resized (to an appropriate level of sparseness), and the keys can be
jostled (to minimize collisions).  The lookdict() routine can then
eliminate the test for dummy entries (saving about 1/4 of the time
spent in the collision resolution loop).

An additional possibility is to insert links into the empty spaces
so that dictionary iteration can proceed in len(d) steps instead of
(mp->mask + 1) steps.
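
The rebuild step can be approximated in pure Python today; copying
re-inserts only the live entries, so the dummies vanish (a freeze()
method itself remains hypothetical):

    compact = dict(stale)    # fresh table with no dummy entries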


Caching Lookups
---------------
The idea is to exploit key access patterns by anticipating future lookups
based on previous lookups.

The simplest incarnation is to save the most recently accessed entry.
This gives optimal performance for use cases where every get is followed
by a set or del to the same key.
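
A pure-Python model of that simplest incarnation follows; the real
version would cache an entry pointer inside the C lookup routines so
that the following set or del could skip the probe entirely:

    _unset = object()

    class CachingDict(dict):
        # Remembers the most recently read (key, value) pair.
        def __init__(self, *args, **kwds):
            dict.__init__(self, *args, **kwds)
            self._last_key = self._last_value = _unset

        def __getitem__(self, key):
            if key is self._last_key:        # paired access: cache hit
                return self._last_value
            value = dict.__getitem__(self, key)
            self._last_key, self._last_value = key, value
            return value

        def __setitem__(self, key, value):
            dict.__setitem__(self, key, value)
            if key is self._last_key:        # keep the cache coherent
                self._last_value = value

        def __delitem__(self, key):
            dict.__delitem__(self, key)
            self._last_key = self._last_value = _unset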