:mod:`heapq` --- Heap queue algorithm
=====================================

.. module:: heapq
   :synopsis: Heap queue algorithm (a.k.a. priority queue).
.. moduleauthor:: Kevin O'Connor
.. sectionauthor:: Guido van Rossum <guido@python.org>
.. sectionauthor:: François Pinard
.. sectionauthor:: Raymond Hettinger

.. versionadded:: 2.3

**Source code:** :source:`Lib/heapq.py`

--------------

This module provides an implementation of the heap queue algorithm, also known
as the priority queue algorithm.

Heaps are binary trees for which every parent node has a value less than or
equal to any of its children.  This implementation uses arrays for which
``heap[k] <= heap[2*k+1]`` and ``heap[k] <= heap[2*k+2]`` for all *k*, counting
elements from zero.  For the sake of comparison, non-existing elements are
considered to be infinite.  The interesting property of a heap is that its
smallest element is always the root, ``heap[0]``.

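The invariant is easy to check directly on any list maintained by this module;
a minimal sketch (using :func:`heapify`, described below):

```python
import heapq

heap = [3, 1, 4, 1, 5, 9, 2, 6]
heapq.heapify(heap)

# Every parent must be <= each of its children; children that do not
# exist are treated as infinite, so they can never violate the invariant.
n = len(heap)
invariant_holds = all(heap[k] <= heap[c]
                      for k in range(n)
                      for c in (2*k + 1, 2*k + 2)
                      if c < n)
# invariant_holds is True, and heap[0] is the smallest element, 1.
```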
The API below differs from textbook heap algorithms in two aspects: (a) We use
zero-based indexing.  This makes the relationship between the index for a node
and the indexes for its children slightly less obvious, but is more suitable
since Python uses zero-based indexing.  (b) Our pop method returns the smallest
item, not the largest (called a "min heap" in textbooks; a "max heap" is more
common in texts because of its suitability for in-place sorting).

These two aspects make it possible to view the heap as a regular Python list
without surprises: ``heap[0]`` is the smallest item, and ``heap.sort()``
maintains the heap invariant!

To create a heap, use a list initialized to ``[]``, or you can transform a
populated list into a heap via function :func:`heapify`.

The following functions are provided:


.. function:: heappush(heap, item)

   Push the value *item* onto the *heap*, maintaining the heap invariant.


.. function:: heappop(heap)

   Pop and return the smallest item from the *heap*, maintaining the heap
   invariant.  If the heap is empty, :exc:`IndexError` is raised.

.. function:: heappushpop(heap, item)

   Push *item* on the heap, then pop and return the smallest item from the
   *heap*.  The combined action runs more efficiently than :func:`heappush`
   followed by a separate call to :func:`heappop`.

   .. versionadded:: 2.6
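
As a quick illustration, the combined call below gives the same result as a
push followed by a pop, in one faster operation:

```python
from heapq import heapify, heappushpop

h = [1, 3, 5]
heapify(h)            # already a heap here, shown for completeness

# Same result as heappush(h, 4) followed by heappop(h): the new item
# is compared against h[0] first, often avoiding any heap movement.
smallest = heappushpop(h, 4)   # returns 1; the pushed 4 stays in the heap
```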

.. function:: heapify(x)

   Transform list *x* into a heap, in-place, in linear time.


.. function:: heapreplace(heap, item)

   Pop and return the smallest item from the *heap*, and also push the new
   *item*.  The heap size doesn't change.  If the heap is empty,
   :exc:`IndexError` is raised.

   This one-step operation is more efficient than a :func:`heappop` followed by
   :func:`heappush` and can be more appropriate when using a fixed-size heap.
   The pop/push combination always returns an element from the heap and
   replaces it with *item*.

   The value returned may be larger than the *item* added.  If that isn't
   desired, consider using :func:`heappushpop` instead.  Its push/pop
   combination returns the smaller of the two values, leaving the larger value
   on the heap.
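
A minimal sketch contrasting the two functions on the same starting heap:

```python
from heapq import heapify, heapreplace, heappushpop

a = [2, 4, 6]
heapify(a)
# heapreplace pops first: the returned value may be smaller than the
# pushed item, which then sits at the root.
replaced = heapreplace(a, 1)        # returns 2; a[0] is now 1

b = [2, 4, 6]
heapify(b)
# heappushpop pushes first: the smaller of the two values is returned,
# so the pushed 1 comes straight back out and b is unchanged.
pushed_popped = heappushpop(b, 1)   # returns 1; b[0] is still 2
```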


The module also offers three general purpose functions based on heaps.


.. function:: merge(*iterables)

   Merge multiple sorted inputs into a single sorted output (for example, merge
   timestamped entries from multiple log files).  Returns an :term:`iterator`
   over the sorted values.

   Similar to ``sorted(itertools.chain(*iterables))`` but returns an iterable,
   does not pull the data into memory all at once, and assumes that each of the
   input streams is already sorted (smallest to largest).

   .. versionadded:: 2.6

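
A small sketch of merging several already-sorted streams:

```python
from heapq import merge

# Each input must already be sorted; merge() yields lazily, keeping only
# one element per input stream pending at a time.
merged = list(merge([1, 4, 7], [2, 5, 8], [3, 6, 9]))
# merged == [1, 2, 3, 4, 5, 6, 7, 8, 9]
```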

.. function:: nlargest(n, iterable[, key])

   Return a list with the *n* largest elements from the dataset defined by
   *iterable*.  *key*, if provided, specifies a function of one argument that
   is used to extract a comparison key from each element in the iterable (for
   example, ``key=str.lower``).  Equivalent to:  ``sorted(iterable, key=key,
   reverse=True)[:n]``

   .. versionadded:: 2.4

   .. versionchanged:: 2.5
      Added the optional *key* argument.


.. function:: nsmallest(n, iterable[, key])

   Return a list with the *n* smallest elements from the dataset defined by
   *iterable*.  *key*, if provided, specifies a function of one argument that
   is used to extract a comparison key from each element in the iterable (for
   example, ``key=str.lower``).  Equivalent to:  ``sorted(iterable,
   key=key)[:n]``

   .. versionadded:: 2.4

   .. versionchanged:: 2.5
      Added the optional *key* argument.

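As a small illustration of the *key* parameter (the portfolio data below is
made up):

```python
from heapq import nlargest, nsmallest

portfolio = [
    {'name': 'IBM', 'price': 91.1},
    {'name': 'AAPL', 'price': 543.22},
    {'name': 'FB', 'price': 21.09},
]
# The key function extracts the value each element is compared by.
cheapest = nsmallest(2, portfolio, key=lambda s: s['price'])
dearest = nlargest(1, portfolio, key=lambda s: s['price'])
# cheapest holds the FB and IBM records; dearest holds the AAPL record.
```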
The latter two functions perform best for smaller values of *n*.  For larger
values, it is more efficient to use the :func:`sorted` function.  Also, when
``n==1``, it is more efficient to use the built-in :func:`min` and :func:`max`
functions.


Basic Examples
--------------

A `heapsort <http://en.wikipedia.org/wiki/Heapsort>`_ can be implemented by
pushing all values onto a heap and then popping off the smallest values one at
a time::

   >>> from heapq import heappush, heappop
   >>> def heapsort(iterable):
   ...     h = []
   ...     for value in iterable:
   ...         heappush(h, value)
   ...     return [heappop(h) for i in range(len(h))]
   ...
   >>> heapsort([1, 3, 5, 7, 9, 2, 4, 6, 8, 0])
   [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

This is similar to ``sorted(iterable)``, but unlike :func:`sorted`, this
implementation is not stable.

Heap elements can be tuples.  This is useful for assigning comparison values
(such as task priorities) alongside the main record being tracked::

   >>> h = []
   >>> heappush(h, (5, 'write code'))
   >>> heappush(h, (7, 'release product'))
   >>> heappush(h, (1, 'write spec'))
   >>> heappush(h, (3, 'create tests'))
   >>> heappop(h)
   (1, 'write spec')


Priority Queue Implementation Notes
-----------------------------------

A `priority queue <http://en.wikipedia.org/wiki/Priority_queue>`_ is a common
use for a heap, and it presents several implementation challenges:

* Sort stability: how do you get two tasks with equal priorities to be returned
  in the order they were originally added?

* In Python 3, tuple comparison breaks for ``(priority, task)`` pairs if the
  priorities are equal and the tasks do not have a default comparison order.

* If the priority of a task changes, how do you move it to a new position in
  the heap?

* Or if a pending task needs to be deleted, how do you find it and remove it
  from the queue?
A solution to the first two challenges is to store entries as a 3-element list
including the priority, an entry count, and the task.  The entry count serves
as a tie-breaker so that two tasks with the same priority are returned in the
order they were added.  And since no two entry counts are the same, the
comparison of entries will never attempt to directly compare two tasks.

The remaining challenges revolve around finding a pending task and making
changes to its priority or removing it entirely.  Finding a task can be done
with a dictionary pointing to an entry in the queue.

Removing the entry or changing its priority is more difficult because it would
break the heap structure invariants.  So, a possible solution is to mark the
existing entry as removed and add a new entry with the revised priority::

    import itertools
    from heapq import heappush, heappop

    pq = []                         # list of entries arranged in a heap
    entry_finder = {}               # mapping of tasks to entries
    REMOVED = '<removed-task>'      # placeholder for a removed task
    counter = itertools.count()     # unique sequence count

    def add_task(task, priority=0):
        'Add a new task or update the priority of an existing task'
        if task in entry_finder:
            remove_task(task)
        count = next(counter)
        entry = [priority, count, task]
        entry_finder[task] = entry
        heappush(pq, entry)

    def remove_task(task):
        'Mark an existing task as REMOVED.  Raise KeyError if not found.'
        entry = entry_finder.pop(task)
        entry[-1] = REMOVED

    def pop_task():
        'Remove and return the lowest priority task.  Raise KeyError if empty.'
        while pq:
            priority, count, task = heappop(pq)
            if task is not REMOVED:
                del entry_finder[task]
                return task
        raise KeyError('pop from an empty priority queue')


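A quick self-contained check of the approach (repeating the helpers above so
the sketch runs on its own; the task names are made up):

```python
import itertools
from heapq import heappush, heappop

pq = []                         # list of entries arranged in a heap
entry_finder = {}               # mapping of tasks to entries
REMOVED = '<removed-task>'      # placeholder for a removed task
counter = itertools.count()     # unique sequence count

def add_task(task, priority=0):
    if task in entry_finder:
        remove_task(task)
    entry = [priority, next(counter), task]
    entry_finder[task] = entry
    heappush(pq, entry)

def remove_task(task):
    entry = entry_finder.pop(task)
    entry[-1] = REMOVED

def pop_task():
    while pq:
        priority, count, task = heappop(pq)
        if task is not REMOVED:
            del entry_finder[task]
            return task
    raise KeyError('pop from an empty priority queue')

add_task('write code', 5)
add_task('write spec', 1)
add_task('write code', 0)   # re-adding updates the priority
first = pop_task()          # 'write code' now outranks 'write spec'
second = pop_task()         # 'write spec'; the stale entry is skipped
```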
Theory
------

Heaps are arrays for which ``a[k] <= a[2*k+1]`` and ``a[k] <= a[2*k+2]`` for
all *k*, counting elements from 0.  For the sake of comparison, non-existing
elements are considered to be infinite.  The interesting property of a heap is
that ``a[0]`` is always its smallest element.

The strange invariant above is meant to be an efficient memory representation
for a tournament.  The numbers below are *k*, not ``a[k]``::

                                   0

                  1                                 2

          3               4                5               6

      7       8       9       10      11      12      13      14

    15 16   17 18   19 20   21 22   23 24   25 26   27 28   29 30

In the tree above, each cell *k* is topping ``2*k+1`` and ``2*k+2``.  In a
usual binary tournament we see in sports, each cell is the winner over the two
cells it tops, and we can trace the winner down the tree to see all opponents
s/he had.  However, in many computer applications of such tournaments, we do
not need to trace the history of a winner.  To be more memory efficient, when a
winner is promoted, we try to replace it by something else at a lower level,
and the rule becomes that a cell and the two cells it tops contain three
different items, but the top cell "wins" over the two topped cells.

If this heap invariant is protected at all times, index 0 is clearly the
overall winner.  The simplest algorithmic way to remove it and find the "next"
winner is to move some loser (let's say cell 30 in the diagram above) into the
0 position, and then percolate this new 0 down the tree, exchanging values,
until the invariant is re-established.  This is clearly logarithmic on the
total number of items in the tree.  By iterating over all items, you get an
O(n log n) sort.

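The percolation step can be sketched directly; this is a simplified sift-down
for illustration only (the module's own implementation uses a different,
faster strategy):

```python
def sift_down(a, k):
    """Restore the heap invariant at index k by percolating a[k] down."""
    n = len(a)
    while True:
        child = 2*k + 1                          # left child
        if child >= n:
            break
        if child + 1 < n and a[child + 1] < a[child]:
            child += 1                           # right child is smaller
        if a[k] <= a[child]:
            break                                # invariant re-established
        a[k], a[child] = a[child], a[k]          # exchange values
        k = child

# Remove the winner: move a loser (the last leaf) into position 0,
# then percolate it down the tree.
a = [0, 1, 2, 3, 4, 5, 6]
a[0] = a.pop()
sift_down(a, 0)
# a[0] is now 1, the "next" winner.
```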
A nice feature of this sort is that you can efficiently insert new items while
the sort is going on, provided that the inserted items are not "better" than
the last 0'th element you extracted.  This is especially useful in simulation
contexts, where the tree holds all incoming events, and the "win" condition
means the smallest scheduled time.  When an event schedules other events for
execution, they are scheduled into the future, so they can easily go into the
heap.  So, a heap is a good structure for implementing schedulers (this is what
I used for my MIDI sequencer :-).

Various structures for implementing schedulers have been extensively studied,
and heaps are good for this, as they are reasonably speedy, the speed is almost
constant, and the worst case is not much different than the average case.
However, there are other representations which are more efficient overall, yet
the worst cases might be terrible.

Heaps are also very useful in big disk sorts.  You most probably all know that
a big sort implies producing "runs" (which are pre-sorted sequences, whose size
is usually related to the amount of CPU memory), followed by merging passes for
these runs, which merging is often very cleverly organised [#]_.  It is very
important that the initial sort produces the longest runs possible.
Tournaments are a good way to achieve that.  If, using all the memory available
to hold a tournament, you replace and percolate items that happen to fit the
current run, you'll produce runs which are twice the size of the memory for
random input, and much better for input fuzzily ordered.

Moreover, if you output the 0'th item on disk and get an input which may not
fit in the current tournament (because the value "wins" over the last output
value), it cannot fit in the heap, so the size of the heap decreases.  The
freed memory could be cleverly reused immediately for progressively building a
second heap, which grows at exactly the same rate the first heap is melting.
When the first heap completely vanishes, you switch heaps and start a new run.
Clever and quite effective!

In a word, heaps are useful memory structures to know.  I use them in a few
applications, and I think it is good to keep a 'heap' module around. :-)

.. rubric:: Footnotes

.. [#] The disk balancing algorithms which are current, nowadays, are more
   annoying than clever, and this is a consequence of the seeking capabilities
   of the disks.  On devices which cannot seek, like big tape drives, the story
   was quite different, and one had to be very clever to ensure (far in
   advance) that each tape movement will be the most effective possible (that
   is, will best participate at "progressing" the merge).  Some tapes were even
   able to read backwards, and this was also used to avoid the rewinding time.
   Believe me, real good tape sorts were quite spectacular to watch!  From all
   times, sorting has always been a Great Art! :-)
