:mod:`heapq` --- Heap queue algorithm
=====================================

.. module:: heapq
   :synopsis: Heap queue algorithm (a.k.a. priority queue).
.. moduleauthor:: Kevin O'Connor
.. sectionauthor:: Guido van Rossum <guido@python.org>
.. sectionauthor:: François Pinard
.. sectionauthor:: Raymond Hettinger

**Source code:** :source:`Lib/heapq.py`

--------------

This module provides an implementation of the heap queue algorithm, also known
as the priority queue algorithm.

Heaps are binary trees for which every parent node has a value less than or
equal to any of its children.  This implementation uses arrays for which
``heap[k] <= heap[2*k+1]`` and ``heap[k] <= heap[2*k+2]`` for all *k*, counting
elements from zero.  For the sake of comparison, non-existing elements are
considered to be infinite.  The interesting property of a heap is that its
smallest element is always the root, ``heap[0]``.

The API below differs from textbook heap algorithms in two aspects: (a) We use
zero-based indexing.  This makes the relationship between the index for a node
and the indexes for its children slightly less obvious, but is more suitable
since Python uses zero-based indexing. (b) Our pop method returns the smallest
item, not the largest (called a "min heap" in textbooks; a "max heap" is more
common in texts because of its suitability for in-place sorting).

These two make it possible to view the heap as a regular Python list without
surprises: ``heap[0]`` is the smallest item, and ``heap.sort()`` maintains the
heap invariant!

To create a heap, use a list initialized to ``[]``, or you can transform a
populated list into a heap via function :func:`heapify`.
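
For example, both approaches produce a list that satisfies the heap invariant
(a small illustrative session)::

   >>> from heapq import heappush, heapify
   >>> h = []                        # build a heap by pushing onto an empty list
   >>> for value in [5, 1, 3]:
   ...     heappush(h, value)
   ...
   >>> h[0]                          # the smallest item is always at index 0
   1
   >>> data = [5, 1, 3]
   >>> heapify(data)                 # or transform an existing list in-place
   >>> data[0]
   1
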

The following functions are provided:


.. function:: heappush(heap, item)

   Push the value *item* onto the *heap*, maintaining the heap invariant.


.. function:: heappop(heap)

   Pop and return the smallest item from the *heap*, maintaining the heap
   invariant.  If the heap is empty, :exc:`IndexError` is raised.  To access the
   smallest item without popping it, use ``heap[0]``.
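
   A brief illustrative session, assuming the functions have been imported from
   :mod:`heapq`::

      >>> h = [1, 3, 5, 7]           # a list that is already a valid heap
      >>> heappop(h)                 # removes and returns the smallest item
      1
      >>> h[0]                       # the next smallest item moves to the root
      3
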


.. function:: heappushpop(heap, item)

   Push *item* on the heap, then pop and return the smallest item from the
   *heap*.  The combined action runs more efficiently than :func:`heappush`
   followed by a separate call to :func:`heappop`.
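
   The result is always the smaller of *item* and the previous heap root
   (a short illustrative session)::

      >>> h = [2, 4, 6]
      >>> heappushpop(h, 1)          # 1 is smaller than the root, so it comes right back
      1
      >>> heappushpop(h, 5)          # the root 2 is popped and 5 stays on the heap
      2
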


.. function:: heapify(x)

   Transform list *x* into a heap, in-place, in linear time.


.. function:: heapreplace(heap, item)

   Pop and return the smallest item from the *heap*, and also push the new *item*.
   The heap size doesn't change. If the heap is empty, :exc:`IndexError` is raised.

   This one-step operation is more efficient than a :func:`heappop` followed by
   :func:`heappush` and can be more appropriate when using a fixed-size heap.
   The pop/push combination always returns an element from the heap and replaces
   it with *item*.

   The value returned may be larger than the *item* added.  If that isn't
   desired, consider using :func:`heappushpop` instead.  Its push/pop
   combination returns the smaller of the two values, leaving the larger value
   on the heap.
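
   For example, the two functions differ when *item* is smaller than every
   element already on the heap (an illustrative comparison)::

      >>> h = [10, 20, 30]
      >>> heapreplace(h, 5)          # pops 10 first, even though 5 is smaller
      10
      >>> h[0]
      5
      >>> h = [10, 20, 30]
      >>> heappushpop(h, 5)          # returns 5 and leaves the heap untouched
      5
      >>> h[0]
      10
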


The module also offers three general purpose functions based on heaps.


.. function:: merge(*iterables, key=None, reverse=False)

   Merge multiple sorted inputs into a single sorted output (for example, merge
   timestamped entries from multiple log files).  Returns an :term:`iterator`
   over the sorted values.

   Similar to ``sorted(itertools.chain(*iterables))`` but returns an iterable, does
   not pull the data into memory all at once, and assumes that each of the input
   streams is already sorted (smallest to largest).

   Has two optional arguments which must be specified as keyword arguments.

   *key* specifies a :term:`key function` of one argument that is used to
   extract a comparison key from each input element.  The default value is
   ``None`` (compare the elements directly).

   *reverse* is a boolean value.  If set to ``True``, then the input elements
   are merged as if each comparison were reversed.

   .. versionchanged:: 3.5
      Added the optional *key* and *reverse* parameters.
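
   A brief illustrative session (with *reverse*, the inputs must be sorted from
   largest to smallest)::

      >>> from heapq import merge
      >>> list(merge([1, 3, 5], [2, 4, 6]))
      [1, 2, 3, 4, 5, 6]
      >>> list(merge(['ant', 'bee'], ['CAT', 'DOG'], key=str.lower))
      ['ant', 'bee', 'CAT', 'DOG']
      >>> list(merge([5, 3, 1], [6, 4, 2], reverse=True))
      [6, 5, 4, 3, 2, 1]
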


.. function:: nlargest(n, iterable, key=None)

   Return a list with the *n* largest elements from the dataset defined by
   *iterable*.  *key*, if provided, specifies a function of one argument that is
   used to extract a comparison key from each element in the iterable (for
   example, ``key=str.lower``).  Equivalent to:  ``sorted(iterable, key=key,
   reverse=True)[:n]``.
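
   For example::

      >>> from heapq import nlargest
      >>> nlargest(3, [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2])
      [42, 37, 23]
      >>> nlargest(2, ['apple', 'fig', 'cherry'], key=len)
      ['cherry', 'apple']
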


.. function:: nsmallest(n, iterable, key=None)

   Return a list with the *n* smallest elements from the dataset defined by
   *iterable*.  *key*, if provided, specifies a function of one argument that is
   used to extract a comparison key from each element in the iterable (for
   example, ``key=str.lower``).  Equivalent to:  ``sorted(iterable, key=key)[:n]``.
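
   For example::

      >>> from heapq import nsmallest
      >>> nsmallest(3, [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2])
      [-4, 1, 2]
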


The latter two functions perform best for smaller values of *n*.  For larger
values, it is more efficient to use the :func:`sorted` function.  Also, when
``n==1``, it is more efficient to use the built-in :func:`min` and :func:`max`
functions.  If repeated usage of these functions is required, consider turning
the iterable into an actual heap.


Basic Examples
--------------

A `heapsort <http://en.wikipedia.org/wiki/Heapsort>`_ can be implemented by
pushing all values onto a heap and then popping off the smallest values one at a
time::

   >>> from heapq import heappush, heappop
   >>> def heapsort(iterable):
   ...     h = []
   ...     for value in iterable:
   ...         heappush(h, value)
   ...     return [heappop(h) for i in range(len(h))]
   ...
   >>> heapsort([1, 3, 5, 7, 9, 2, 4, 6, 8, 0])
   [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

This is similar to ``sorted(iterable)``, but unlike :func:`sorted`, this
implementation is not stable.
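
When stability matters (for example, when records with equal keys should keep
their original order), a common workaround is to decorate each entry with an
insertion index so that ties are broken by arrival order.  A minimal sketch of
the idea; the helper name and its *key* parameter are illustrative only, not
part of the module::

   >>> from heapq import heapify, heappop
   >>> def stable_heapsort_by(iterable, key):
   ...     h = [(key(value), index, value) for index, value in enumerate(iterable)]
   ...     heapify(h)
   ...     return [heappop(h)[-1] for i in range(len(h))]
   ...
   >>> stable_heapsort_by(['bb', 'a', 'cc', 'dd'], key=len)
   ['a', 'bb', 'cc', 'dd']
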

Heap elements can be tuples.  This is useful for assigning comparison values
(such as task priorities) alongside the main record being tracked::

    >>> h = []
    >>> heappush(h, (5, 'write code'))
    >>> heappush(h, (7, 'release product'))
    >>> heappush(h, (1, 'write spec'))
    >>> heappush(h, (3, 'create tests'))
    >>> heappop(h)
    (1, 'write spec')


Priority Queue Implementation Notes
-----------------------------------

A `priority queue <http://en.wikipedia.org/wiki/Priority_queue>`_ is a common
use for a heap, and it presents several implementation challenges:

* Sort stability:  how do you get two tasks with equal priorities to be returned
  in the order they were originally added?

* Tuple comparison breaks for (priority, task) pairs if the priorities are equal
  and the tasks do not have a default comparison order.

* If the priority of a task changes, how do you move it to a new position in
  the heap?

* Or if a pending task needs to be deleted, how do you find it and remove it
  from the queue?

A solution to the first two challenges is to store entries as a 3-element list
including the priority, an entry count, and the task.  The entry count serves as
a tie-breaker so that two tasks with the same priority are returned in the order
they were added. And since no two entry counts are the same, the comparison
will never attempt to directly compare two tasks.
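
For example, two entries with equal priorities are ordered by their counts, and
the wrapped tasks are never compared at all (a small illustration using objects
that do not define any ordering)::

    >>> task_a, task_b = object(), object()
    >>> [0, 0, task_a] < [0, 1, task_b]      # decided entirely by the counts
    True
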

The remaining challenges revolve around finding a pending task and making
changes to its priority or removing it entirely.  Finding a task can be done
with a dictionary pointing to an entry in the queue.

Removing the entry or changing its priority is more difficult because it would
break the heap structure invariants.  So, a possible solution is to mark the
entry as removed and add a new entry with the revised priority::

    import itertools
    from heapq import heappush, heappop

    pq = []                         # list of entries arranged in a heap
    entry_finder = {}               # mapping of tasks to entries
    REMOVED = '<removed-task>'      # placeholder for a removed task
    counter = itertools.count()     # unique sequence count

    def add_task(task, priority=0):
        'Add a new task or update the priority of an existing task'
        if task in entry_finder:
            remove_task(task)
        count = next(counter)
        entry = [priority, count, task]
        entry_finder[task] = entry
        heappush(pq, entry)

    def remove_task(task):
        'Mark an existing task as REMOVED.  Raise KeyError if not found.'
        entry = entry_finder.pop(task)
        entry[-1] = REMOVED

    def pop_task():
        'Remove and return the lowest priority task. Raise KeyError if empty.'
        while pq:
            priority, count, task = heappop(pq)
            if task is not REMOVED:
                del entry_finder[task]
                return task
        raise KeyError('pop from an empty priority queue')
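
A brief usage sketch of the functions above::

    add_task('write code', priority=5)
    add_task('write spec', priority=1)
    add_task('release product', priority=7)
    add_task('write spec', priority=3)      # re-prioritizes the existing task
    pop_task()                              # returns 'write spec'
    pop_task()                              # returns 'write code'
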


Theory
------

Heaps are arrays for which ``a[k] <= a[2*k+1]`` and ``a[k] <= a[2*k+2]`` for all
*k*, counting elements from 0.  For the sake of comparison, non-existing
elements are considered to be infinite.  The interesting property of a heap is
that ``a[0]`` is always its smallest element.

The strange invariant above is meant to be an efficient memory representation
for a tournament.  The numbers below are *k*, not ``a[k]``::

                                   0

                  1                                 2

          3               4                5               6

      7       8       9       10      11      12      13      14

    15 16   17 18   19 20   21 22   23 24   25 26   27 28   29 30

In the tree above, each cell *k* is topping ``2*k+1`` and ``2*k+2``. In a usual
binary tournament we see in sports, each cell is the winner over the two cells
it tops, and we can trace the winner down the tree to see all opponents s/he
had.  However, in many computer applications of such tournaments, we do not need
to trace the history of a winner. To be more memory efficient, when a winner is
promoted, we try to replace it by something else at a lower level, and the rule
becomes that a cell and the two cells it tops contain three different items, but
the top cell "wins" over the two topped cells.

If this heap invariant is protected at all times, index 0 is clearly the overall
winner.  The simplest algorithmic way to remove it and find the "next" winner is
to move some loser (let's say cell 30 in the diagram above) into the 0 position,
and then percolate this new 0 down the tree, exchanging values, until the
invariant is re-established. This is clearly logarithmic on the total number of
items in the tree. By iterating over all items, you get an O(n log n) sort.
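
The percolation step can be sketched as follows.  This is an illustrative
helper only, not the module's own implementation (the module itself uses a
different, comparison-saving sift strategy)::

   def percolate_down(a, k):
       'Move a[k] down until it is no larger than its children.'
       n = len(a)
       while True:
           child = 2 * k + 1                      # left child
           if child + 1 < n and a[child + 1] < a[child]:
               child += 1                         # prefer the smaller child
           if child >= n or a[k] <= a[child]:
               break                              # invariant re-established
           a[k], a[child] = a[child], a[k]        # exchange values and continue
           k = child
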

A nice feature of this sort is that you can efficiently insert new items while
the sort is going on, provided that the inserted items are not "better" than the
last 0'th element you extracted.  This is especially useful in simulation
contexts, where the tree holds all incoming events, and the "win" condition
means the smallest scheduled time.  When an event schedules other events for
execution, they are scheduled into the future, so they can easily go into the
heap.  So, a heap is a good structure for implementing schedulers (this is what
I used for my MIDI sequencer :-).
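
As a sketch of the idea (the event representation here is invented for
illustration), a simulation loop can keep ``(time, count, action)`` tuples on a
heap and always run the earliest one next; actions may schedule further events
while the loop runs::

   import itertools
   from heapq import heappush, heappop

   events = []                                # heap of (time, count, action)
   counter = itertools.count()                # tie-breaker for equal times

   def schedule(time, action):
       heappush(events, (time, next(counter), action))

   def run():
       while events:
           time, _, action = heappop(events)  # always the earliest pending event
           action(time)                       # the action may call schedule()
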

Various structures for implementing schedulers have been extensively studied,
and heaps are good for this, as they are reasonably speedy, the speed is almost
constant, and the worst case is not much different from the average case.
However, there are other representations which are more efficient overall, yet
the worst cases might be terrible.

Heaps are also very useful in big disk sorts.  You most probably all know that a
big sort implies producing "runs" (which are pre-sorted sequences, whose size is
usually related to the amount of CPU memory), followed by merging passes for
these runs, which merging is often very cleverly organised [#]_. It is very
important that the initial sort produces the longest runs possible.  Tournaments
are a good way to achieve that.  If, using all the memory available to hold a
tournament, you replace and percolate items that happen to fit the current run,
you'll produce runs which are twice the size of the memory for random input, and
much better for input fuzzily ordered.

Moreover, if you output the 0'th item on disk and get an input which may not fit
in the current tournament (because the value "wins" over the last output value),
it cannot fit in the heap, so the size of the heap decreases.  The freed memory
could be cleverly reused immediately for progressively building a second heap,
which grows at exactly the same rate the first heap is melting.  When the first
heap completely vanishes, you switch heaps and start a new run.  Clever and
quite effective!
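
A sketch of the basic run-producing technique follows.  For simplicity it sets
aside items for the next run in a plain list and heapifies them when the run
changes, rather than growing a second heap in place as described above; the
function name and parameters are illustrative only::

   from heapq import heapify, heappop, heappush
   from itertools import islice

   def sorted_runs(iterable, memory_size):
       'Yield sorted runs, typically about twice memory_size for random input.'
       it = iter(iterable)
       heap = list(islice(it, memory_size))
       heapify(heap)
       missing = object()                      # sentinel for exhausted input
       while heap:
           run, pending = [], []
           while heap:
               smallest = heappop(heap)
               run.append(smallest)            # output the current winner
               item = next(it, missing)
               if item is missing:
                   continue
               if item >= smallest:
                   heappush(heap, item)        # still fits the current run
               else:
                   pending.append(item)        # must wait for the next run
           yield run
           heap = pending
           heapify(heap)
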

In a word, heaps are useful memory structures to know.  I use them in a few
applications, and I think it is good to keep a 'heap' module around. :-)

.. rubric:: Footnotes

.. [#] The disk balancing algorithms which are current, nowadays, are more annoying
   than clever, and this is a consequence of the seeking capabilities of the disks.
   On devices which cannot seek, like big tape drives, the story was quite
   different, and one had to be very clever to ensure (far in advance) that each
   tape movement will be the most effective possible (that is, will best
   participate at "progressing" the merge).  Some tapes were even able to read
   backwards, and this was also used to avoid the rewinding time. Believe me, real
   good tape sorts were quite spectacular to watch! From all times, sorting has
   always been a Great Art! :-)