\section{\module{heapq} ---
         Heap queue algorithm}

\declaremodule{standard}{heapq}
\modulesynopsis{Heap queue algorithm (a.k.a. priority queue).}
\moduleauthor{Kevin O'Connor}{}
\sectionauthor{Guido van Rossum}{guido@python.org}
% Theoretical explanation:
\sectionauthor{Fran\c cois Pinard}{}
\versionadded{2.3}


This module provides an implementation of the heap queue algorithm,
also known as the priority queue algorithm.

Heaps are arrays for which
\code{\var{heap}[\var{k}] <= \var{heap}[2*\var{k}+1]} and
\code{\var{heap}[\var{k}] <= \var{heap}[2*\var{k}+2]}
for all \var{k}, counting elements from zero.  For the sake of
comparison, non-existing elements are considered to be infinite.  The
interesting property of a heap is that \code{\var{heap}[0]} is always
its smallest element.

The API below differs from textbook heap algorithms in two aspects:
(a) We use zero-based indexing.  This makes the relationship between the
index for a node and the indexes for its children slightly less
obvious, but is more suitable since Python uses zero-based indexing.
(b) Our pop method returns the smallest item, not the largest (this is
called a ``min heap'' in textbooks; a ``max heap'' is more common in
texts because of its suitability for in-place sorting).

These two aspects make it possible to view the heap as a regular Python
list without surprises: \code{\var{heap}[0]} is the smallest item, and
\code{\var{heap}.sort()} maintains the heap invariant!
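
For instance, here is a small interactive sketch checking the invariant
directly (the particular list below is just one arbitrary example of a
valid heap):

\begin{verbatim}
>>> heap = [0, 1, 2, 6, 3, 5, 4, 7, 8, 9]
>>> violations = [k for k in range(len(heap))
...               for c in (2*k+1, 2*k+2)
...               if c < len(heap) and heap[k] > heap[c]]
>>> violations        # no parent is larger than either of its children
[]
>>> heap[0]           # the smallest item sits at index 0
0
\end{verbatim}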

To create a heap, use a list initialized to \code{[]}, or transform a
populated list into a heap via the \function{heapify()} function.

The following functions are provided:

\begin{funcdesc}{heappush}{heap, item}
Push the value \var{item} onto the \var{heap}, maintaining the
heap invariant.
\end{funcdesc}

\begin{funcdesc}{heappop}{heap}
Pop and return the smallest item from the \var{heap}, maintaining the
heap invariant.  If the heap is empty, \exception{IndexError} is raised.
\end{funcdesc}

\begin{funcdesc}{heapify}{x}
Transform list \var{x} into a heap, in-place, in linear time.
\end{funcdesc}
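
For example, a short session sketching \function{heapify()} in action:

\begin{verbatim}
>>> from heapq import heapify, heappop
>>> x = [9, 7, 5, 3, 1, 8, 6, 4, 2, 0]
>>> heapify(x)        # rearranges x in-place, into heap order
>>> x[0]              # the smallest element is now first
0
>>> heappop(x)
0
>>> heappop(x)
1
\end{verbatim}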

\begin{funcdesc}{heapreplace}{heap, item}
Pop and return the smallest item from the \var{heap}, and also push
the new \var{item}.  The heap size doesn't change.
If the heap is empty, \exception{IndexError} is raised.
This is more efficient than \function{heappop()} followed
by \function{heappush()}, and can be more appropriate when using
a fixed-size heap.  Note that the value returned may be larger
than \var{item}!  That constrains reasonable uses of this routine.
\end{funcdesc}
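
For instance, in this short sketch the value returned by
\function{heapreplace()} (the old smallest element) is larger than the
value just pushed:

\begin{verbatim}
>>> from heapq import heapify, heapreplace
>>> heap = [5, 7, 9]
>>> heapify(heap)
>>> heapreplace(heap, 1)   # pops 5 first, then pushes 1 ...
5
>>> heap[0]                # ... so 1 is now the new smallest item
1
\end{verbatim}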

Example of use:

\begin{verbatim}
>>> from heapq import heappush, heappop
>>> heap = []
>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
>>> for item in data:
...     heappush(heap, item)
...
>>> sorted = []
>>> while heap:
...     sorted.append(heappop(heap))
...
>>> print sorted
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> data.sort()
>>> print data == sorted
True
>>>
\end{verbatim}


\subsection{Theory}

(This explanation is due to Fran\c cois Pinard.  The Python
code for this module was contributed by Kevin O'Connor.)

Heaps are arrays for which \code{a[\var{k}] <= a[2*\var{k}+1]} and
\code{a[\var{k}] <= a[2*\var{k}+2]}
for all \var{k}, counting elements from 0.  For the sake of comparison,
non-existing elements are considered to be infinite.  The interesting
property of a heap is that \code{a[0]} is always its smallest element.

The strange invariant above is meant to be an efficient memory
representation for a tournament.  The numbers below are \var{k}, not
\code{a[\var{k}]}:

\begin{verbatim}
                              0

              1                               2

      3               4               5               6

  7       8       9      10      11      12      13      14

15 16   17 18   19 20   21 22   23 24   25 26   27 28   29 30
\end{verbatim}

In the tree above, each cell \var{k} is topping \code{2*\var{k}+1} and
\code{2*\var{k}+2}.
In a usual binary tournament, as seen in sports, each cell is the
winner over the two cells it tops, and we can trace the winner down
the tree to see all the opponents s/he had.  However, in many computer
applications of such tournaments, we do not need to trace the history
of a winner.  To be more memory efficient, when a winner is promoted,
we try to replace it by something else at a lower level, and the rule
becomes that a cell and the two cells it tops contain three different
items, but the top cell ``wins'' over the two topped cells.

If this heap invariant is protected at all times, index 0 is clearly
the overall winner.  The simplest algorithmic way to remove it and
find the ``next'' winner is to move some loser (let's say cell 30 in
the diagram above) into the 0 position, and then percolate this new 0
down the tree, exchanging values, until the invariant is
re-established.  This is clearly logarithmic in the total number of
items in the tree.  By iterating over all items, you get an
O(n log n) sort.
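
In terms of this module, that sort can be sketched in a few lines (a
simple illustration rather than the most memory-frugal way to sort;
the \code{heapsort()} function here is ours, not part of the module):

\begin{verbatim}
from heapq import heapify, heappop

def heapsort(iterable):
    # Build a heap in linear time, then pop the current smallest
    # item n times, for an O(n log n) sort overall.
    heap = list(iterable)
    heapify(heap)
    return [heappop(heap) for i in range(len(heap))]
\end{verbatim}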

A nice feature of this sort is that you can efficiently insert new
items while the sort is going on, provided that the inserted items are
not ``better'' than the last 0'th element you extracted.  This is
especially useful in simulation contexts, where the tree holds all
incoming events, and the ``win'' condition means the smallest scheduled
time.  When an event schedules other events for execution, they are
scheduled into the future, so they can easily go into the heap.  So, a
heap is a good structure for implementing schedulers (this is what I
used for my MIDI sequencer :-).
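
A minimal sketch of such a scheduler (the \code{(time, action)} event
format is only an illustrative assumption; tuples compare item by
item, so the earliest time always wins):

\begin{verbatim}
from heapq import heappush, heappop

events = []                          # heap of (time, action) pairs
heappush(events, (30, 'stop'))
heappush(events, (10, 'start'))
heappush(events, (20, 'note on'))

while events:
    time, action = heappop(events)   # always the earliest event
    print time, action
\end{verbatim}

While handling an event, new events may safely be pushed onto the
heap, as long as they are scheduled at later times.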

Various structures for implementing schedulers have been extensively
studied, and heaps are good for this, as they are reasonably speedy,
the speed is almost constant, and the worst case is not much different
from the average case.  However, there are other representations which
are more efficient overall, yet the worst cases might be terrible.

Heaps are also very useful in big disk sorts.  You most probably all
know that a big sort implies producing ``runs'' (which are pre-sorted
sequences, whose size is usually related to the amount of CPU memory),
followed by merging passes for these runs, which merging is often
very cleverly organised\footnote{The disk balancing algorithms which
are current, nowadays, are more annoying than clever, and this is a
consequence of the seeking capabilities of the disks.  On devices
which cannot seek, like big tape drives, the story was quite
different, and one had to be very clever to ensure (far in advance)
that each tape movement would be the most effective possible (that
is, would best advance the merge).  Some tapes were even able to read
backwards, and this was also used to avoid the rewinding time.
Believe me, real good tape sorts were quite spectacular to watch!
From all times, sorting has always been a Great Art! :-)}.
It is very important that the initial
sort produces the longest runs possible.  Tournaments are a good way
to achieve that.  If, using all the memory available to hold a
tournament, you replace and percolate items that happen to fit the
current run, you'll produce runs which are twice the size of the
memory for random input, and much better for fuzzily ordered input.

Moreover, if you output the 0'th item on disk and get an input which
may not fit in the current tournament (because the value ``wins'' over
the last output value), it cannot fit in the heap, so the size of the
heap decreases.  The freed memory could be cleverly reused immediately
for progressively building a second heap, which grows at exactly the
same rate as the first heap is melting.  When the first heap
completely vanishes, you switch heaps and start a new run.  Clever
and quite effective!
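
A rough sketch of this run-producing technique (known in the
literature as replacement selection) might look as follows; the
\code{makeruns()} function and its deliberately tiny \code{memory}
parameter are only illustrative:

\begin{verbatim}
from heapq import heapify, heappop, heapreplace

def makeruns(data, memory=4):
    # Split data into sorted runs, keeping at most `memory'
    # items in the live heap at any moment.
    heap, rest = data[:memory], iter(data[memory:])
    heapify(heap)
    runs, run, next_heap = [], [], []
    while heap:
        smallest = heap[0]
        run.append(smallest)            # "output" the current winner
        try:
            item = rest.next()          # read one new input item
        except StopIteration:
            heappop(heap)               # input exhausted; drain heap
        else:
            if smallest <= item:
                heapreplace(heap, item) # item can still join this run
            else:
                heappop(heap)           # item "wins" over the last
                next_heap.append(item)  # output; save it for next run
        if not heap:                    # run finished: swap in the
            runs.append(run)            # second heap and start over
            run, heap, next_heap = [], next_heap, []
            heapify(heap)
    return runs
\end{verbatim}

For example, \code{makeruns([3, 1, 4, 1, 5, 9, 2, 6])} returns
\code{[[1, 1, 3, 4, 5, 6, 9], [2]]}: one long run, then a short one
for the item that arrived too late to join it.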

In a word, heaps are useful memory structures to know.  I use them in
a few applications, and I think it is good to keep a `heap' module
around.  :-)