#! /usr/bin/env python3

"""
Module difflib -- helpers for computing deltas between objects.

Function get_close_matches(word, possibilities, n=3, cutoff=0.6):
    Use SequenceMatcher to return list of the best "good enough" matches.

Function context_diff(a, b):
    For two lists of strings, return a delta in context diff format.

Function ndiff(a, b):
    Return a delta: the difference between `a` and `b` (lists of strings).

Function restore(delta, which):
    Return one of the two sequences that generated an ndiff delta.

Function unified_diff(a, b):
    For two lists of strings, return a delta in unified diff format.

Class SequenceMatcher:
    A flexible class for comparing pairs of sequences of any type.

Class Differ:
    For producing human-readable deltas from sequences of lines of text.

Class HtmlDiff:
    For producing HTML side by side comparison with change highlights.
"""
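As a quick illustration of the helpers summarized above, here is a usage sketch (not part of the module itself) using the function whose output is most self-explanatory:

```python
# get_close_matches() ranks candidates by SequenceMatcher.ratio()
# and keeps those at or above the cutoff.
from difflib import get_close_matches

print(get_close_matches("appel", ["ape", "apple", "peach", "puppy"]))
# → ['apple', 'ape']
```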

__all__ = ['get_close_matches', 'ndiff', 'restore', 'SequenceMatcher',
           'Differ', 'IS_CHARACTER_JUNK', 'IS_LINE_JUNK', 'context_diff',
           'unified_diff', 'HtmlDiff', 'Match']

import warnings
import heapq
from collections import namedtuple as _namedtuple

Match = _namedtuple('Match', 'a b size')

def _calculate_ratio(matches, length):
    if length:
        return 2.0 * matches / length
    return 1.0

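The helper above is the core similarity formula: twice the number of matched elements divided by the total number of elements in both sequences. A minimal standalone sketch (`ratio_sketch` is a hypothetical name used only for this illustration):

```python
def ratio_sketch(matches, length):
    # 2*M / T, where M = number of matched elements and
    # T = combined length of the two sequences being compared.
    if length:
        return 2.0 * matches / length
    return 1.0   # two empty sequences are considered identical

# "abcd" vs "bcde": 3 matching elements, 8 elements total -> 0.75
print(ratio_sketch(3, 8))
```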
class SequenceMatcher:

    """
    SequenceMatcher is a flexible class for comparing pairs of sequences of
    any type, so long as the sequence elements are hashable.  The basic
    algorithm predates, and is a little fancier than, an algorithm
    published in the late 1980's by Ratcliff and Obershelp under the
    hyperbolic name "gestalt pattern matching".  The basic idea is to find
    the longest contiguous matching subsequence that contains no "junk"
    elements (R-O doesn't address junk).  The same idea is then applied
    recursively to the pieces of the sequences to the left and to the right
    of the matching subsequence.  This does not yield minimal edit
    sequences, but does tend to yield matches that "look right" to people.

    SequenceMatcher tries to compute a "human-friendly diff" between two
    sequences.  Unlike e.g. UNIX(tm) diff, the fundamental notion is the
    longest *contiguous* & junk-free matching subsequence.  That's what
    catches people's eyes.  The Windows(tm) windiff has another interesting
    notion, pairing up elements that appear uniquely in each sequence.
    That, and the method here, appear to yield more intuitive difference
    reports than does diff.  This method appears to be the least vulnerable
    to synching up on blocks of "junk lines", though (like blank lines in
    ordinary text files, or maybe "<P>" lines in HTML files).  That may be
    because this is the only method of the 3 that has a *concept* of
    "junk" <wink>.

    Example, comparing two strings, and considering blanks to be "junk":

    >>> s = SequenceMatcher(lambda x: x == " ",
    ...                     "private Thread currentThread;",
    ...                     "private volatile Thread currentThread;")
    >>>

    .ratio() returns a float in [0, 1], measuring the "similarity" of the
    sequences.  As a rule of thumb, a .ratio() value over 0.6 means the
    sequences are close matches:

    >>> print(round(s.ratio(), 3))
    0.866
    >>>

    If you're only interested in where the sequences match,
    .get_matching_blocks() is handy:

    >>> for block in s.get_matching_blocks():
    ...     print("a[%d] and b[%d] match for %d elements" % block)
    a[0] and b[0] match for 8 elements
    a[8] and b[17] match for 21 elements
    a[29] and b[38] match for 0 elements

    Note that the last tuple returned by .get_matching_blocks() is always a
    dummy, (len(a), len(b), 0), and this is the only case in which the last
    tuple element (number of elements matched) is 0.

    If you want to know how to change the first sequence into the second,
    use .get_opcodes():

    >>> for opcode in s.get_opcodes():
    ...     print("%6s a[%d:%d] b[%d:%d]" % opcode)
     equal a[0:8] b[0:8]
    insert a[8:8] b[8:17]
     equal a[8:29] b[17:38]

    See the Differ class for a fancy human-friendly file differencer, which
    uses SequenceMatcher both to compare sequences of lines, and to compare
    sequences of characters within similar (near-matching) lines.

    See also function get_close_matches() in this module, which shows how
    simple code building on SequenceMatcher can be used to do useful work.

    Timing:  Basic R-O is cubic time worst case and quadratic time expected
    case.  SequenceMatcher is quadratic time for the worst case and has
    expected-case behavior dependent in a complicated way on how many
    elements the sequences have in common; best case time is linear.

    Methods:

    __init__(isjunk=None, a='', b='')
        Construct a SequenceMatcher.

    set_seqs(a, b)
        Set the two sequences to be compared.

    set_seq1(a)
        Set the first sequence to be compared.

    set_seq2(b)
        Set the second sequence to be compared.

    find_longest_match(alo, ahi, blo, bhi)
        Find longest matching block in a[alo:ahi] and b[blo:bhi].

    get_matching_blocks()
        Return list of triples describing matching subsequences.

    get_opcodes()
        Return list of 5-tuples describing how to turn a into b.

    ratio()
        Return a measure of the sequences' similarity (float in [0,1]).

    quick_ratio()
        Return an upper bound on .ratio() relatively quickly.

    real_quick_ratio()
        Return an upper bound on ratio() very quickly.
    """

    def __init__(self, isjunk=None, a='', b='', autojunk=True):
        """Construct a SequenceMatcher.

        Optional arg isjunk is None (the default), or a one-argument
        function that takes a sequence element and returns true iff the
        element is junk.  None is equivalent to passing "lambda x: 0", i.e.
        no elements are considered to be junk.  For example, pass
            lambda x: x in " \\t"
        if you're comparing lines as sequences of characters, and don't
        want to synch up on blanks or hard tabs.

        Optional arg a is the first of two sequences to be compared.  By
        default, an empty string.  The elements of a must be hashable.  See
        also .set_seqs() and .set_seq1().

        Optional arg b is the second of two sequences to be compared.  By
        default, an empty string.  The elements of b must be hashable.  See
        also .set_seqs() and .set_seq2().

        Optional arg autojunk should be set to False to disable the
        "automatic junk heuristic" that treats popular elements as junk
        (see module documentation for more information).
        """

        # Members:
        # a
        #      first sequence
        # b
        #      second sequence; differences are computed as "what do
        #      we need to do to 'a' to change it into 'b'?"
        # b2j
        #      for x in b, b2j[x] is a list of the indices (into b)
        #      at which x appears; junk and popular elements do not appear
        # fullbcount
        #      for x in b, fullbcount[x] == the number of times x
        #      appears in b; only materialized if really needed (used
        #      only for computing quick_ratio())
        # matching_blocks
        #      a list of (i, j, k) triples, where a[i:i+k] == b[j:j+k];
        #      ascending & non-overlapping in i and in j; terminated by
        #      a dummy (len(a), len(b), 0) sentinel
        # opcodes
        #      a list of (tag, i1, i2, j1, j2) tuples, where tag is
        #      one of
        #          'replace'   a[i1:i2] should be replaced by b[j1:j2]
        #          'delete'    a[i1:i2] should be deleted
        #          'insert'    b[j1:j2] should be inserted
        #          'equal'     a[i1:i2] == b[j1:j2]
        # isjunk
        #      a user-supplied function taking a sequence element and
        #      returning true iff the element is "junk" -- this has
        #      subtle but helpful effects on the algorithm, which I'll
        #      get around to writing up someday <0.9 wink>.
        #      DON'T USE!  Only __chain_b uses this.  Use "in self.bjunk".
        # bjunk
        #      the items in b for which isjunk is True.
        # bpopular
        #      nonjunk items in b treated as junk by the heuristic (if used).

        self.isjunk = isjunk
        self.a = self.b = None
        self.autojunk = autojunk
        self.set_seqs(a, b)

    def set_seqs(self, a, b):
        """Set the two sequences to be compared.

        >>> s = SequenceMatcher()
        >>> s.set_seqs("abcd", "bcde")
        >>> s.ratio()
        0.75
        """

        self.set_seq1(a)
        self.set_seq2(b)

    def set_seq1(self, a):
        """Set the first sequence to be compared.

        The second sequence to be compared is not changed.

        >>> s = SequenceMatcher(None, "abcd", "bcde")
        >>> s.ratio()
        0.75
        >>> s.set_seq1("bcde")
        >>> s.ratio()
        1.0
        >>>

        SequenceMatcher computes and caches detailed information about the
        second sequence, so if you want to compare one sequence S against
        many sequences, use .set_seq2(S) once and call .set_seq1(x)
        repeatedly for each of the other sequences.

        See also set_seqs() and set_seq2().
        """

        if a is self.a:
            return
        self.a = a
        self.matching_blocks = self.opcodes = None

    def set_seq2(self, b):
        """Set the second sequence to be compared.

        The first sequence to be compared is not changed.

        >>> s = SequenceMatcher(None, "abcd", "bcde")
        >>> s.ratio()
        0.75
        >>> s.set_seq2("abcd")
        >>> s.ratio()
        1.0
        >>>

        SequenceMatcher computes and caches detailed information about the
        second sequence, so if you want to compare one sequence S against
        many sequences, use .set_seq2(S) once and call .set_seq1(x)
        repeatedly for each of the other sequences.

        See also set_seqs() and set_seq1().
        """

        if b is self.b:
            return
        self.b = b
        self.matching_blocks = self.opcodes = None
        self.fullbcount = None
        self.__chain_b()

    # For each element x in b, set b2j[x] to a list of the indices in
    # b where x appears; the indices are in increasing order; note that
    # the number of times x appears in b is len(b2j[x]) ...
    # when self.isjunk is defined, junk elements don't show up in this
    # map at all, which stops the central find_longest_match method
    # from starting any matching block at a junk element ...
    # also creates the junk set self.bjunk for fast membership tests ...
    # b2j also does not contain entries for "popular" elements, meaning
    # elements that account for more than 1 + 1% of the total elements, and
    # when the sequence is reasonably large (>= 200 elements); this can
    # be viewed as an adaptive notion of semi-junk, and yields an enormous
    # speedup when, e.g., comparing program files with hundreds of
    # instances of "return NULL;" ...
    # note that this is only called when b changes; so for cross-product
    # kinds of matches, it's best to call set_seq2 once, then set_seq1
    # repeatedly

    def __chain_b(self):
        # Because isjunk is a user-defined (not C) function, and we test
        # for junk a LOT, it's important to minimize the number of calls.
        # Before the tricks described here, __chain_b was by far the most
        # time-consuming routine in the whole module!  If anyone sees
        # Jim Roskind, thank him again for profile.py -- I never would
        # have guessed that.
        # The first trick is to build b2j ignoring the possibility
        # of junk.  I.e., we don't call isjunk at all yet.  Throwing
        # out the junk later is much cheaper than building b2j "right"
        # from the start.
        b = self.b
        self.b2j = b2j = {}

        for i, elt in enumerate(b):
            indices = b2j.setdefault(elt, [])
            indices.append(i)

        # Purge junk elements
        self.bjunk = junk = set()
        isjunk = self.isjunk
        if isjunk:
            for elt in b2j.keys():
                if isjunk(elt):
                    junk.add(elt)
            for elt in junk: # separate loop avoids separate list of keys
                del b2j[elt]

        # Purge popular elements that are not junk
        self.bpopular = popular = set()
        n = len(b)
        if self.autojunk and n >= 200:
            ntest = n // 100 + 1
            for elt, idxs in b2j.items():
                if len(idxs) > ntest:
                    popular.add(elt)
            for elt in popular: # ditto; as fast for 1% deletion
                del b2j[elt]

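The effect of the junk purge can be observed directly. A small sketch (note that `b2j` and `bjunk` are implementation attributes; inspecting them here is only for illustration):

```python
from difflib import SequenceMatcher

# Junk elements are dropped from the b2j index but recorded in bjunk,
# so find_longest_match can never start a block on a junk element.
sm = SequenceMatcher(lambda x: x == " ", "abc", "ab ab")
print(sorted(sm.b2j))      # index keys: ['a', 'b'] -- the blank is gone
print(sm.b2j["a"])         # [0, 3]: every index where 'a' occurs in b
print(sm.bjunk)            # {' '}
```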
    def isbjunk(self, item):
        "Deprecated; use 'item in SequenceMatcher().bjunk'."
        warnings.warn("'SequenceMatcher().isbjunk(item)' is deprecated;\n"
                      "use 'item in SMinstance.bjunk' instead.",
                      DeprecationWarning, 2)
        return item in self.bjunk

    def isbpopular(self, item):
        "Deprecated; use 'item in SequenceMatcher().bpopular'."
        warnings.warn("'SequenceMatcher().isbpopular(item)' is deprecated;\n"
                      "use 'item in SMinstance.bpopular' instead.",
                      DeprecationWarning, 2)
        return item in self.bpopular
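The deprecation messages above point at the replacement idiom: plain set membership on the `bjunk`/`bpopular` attributes. A short sketch of the preferred form:

```python
from difflib import SequenceMatcher

sm = SequenceMatcher(lambda x: x == " ", "a b", "a b c")
# Preferred over the deprecated sm.isbjunk(" "):
print(" " in sm.bjunk)     # True: blanks were classified as junk
```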

    def find_longest_match(self, alo, ahi, blo, bhi):
        """Find longest matching block in a[alo:ahi] and b[blo:bhi].

        If isjunk is not defined:

        Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
            alo <= i <= i+k <= ahi
            blo <= j <= j+k <= bhi
        and for all (i',j',k') meeting those conditions,
            k >= k'
            i <= i'
            and if i == i', j <= j'

        In other words, of all maximal matching blocks, return one that
        starts earliest in a, and of all those maximal matching blocks that
        start earliest in a, return the one that starts earliest in b.

        >>> s = SequenceMatcher(None, " abcd", "abcd abcd")
        >>> s.find_longest_match(0, 5, 0, 9)
        Match(a=0, b=4, size=5)

        If isjunk is defined, first the longest matching block is
        determined as above, but with the additional restriction that no
        junk element appears in the block.  Then that block is extended as
        far as possible by matching (only) junk elements on both sides.  So
        the resulting block never matches on junk except as identical junk
        happens to be adjacent to an "interesting" match.

        Here's the same example as before, but considering blanks to be
        junk.  That prevents " abcd" from matching the " abcd" at the tail
        end of the second sequence directly.  Instead only the "abcd" can
        match, and matches the leftmost "abcd" in the second sequence:

        >>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd")
        >>> s.find_longest_match(0, 5, 0, 9)
        Match(a=1, b=0, size=4)

        If no blocks match, return (alo, blo, 0).

        >>> s = SequenceMatcher(None, "ab", "c")
        >>> s.find_longest_match(0, 2, 0, 1)
        Match(a=0, b=0, size=0)
        """

        # CAUTION:  stripping common prefix or suffix would be incorrect.
        # E.g.,
        #    ab
        #    acab
        # Longest matching block is "ab", but if common prefix is
        # stripped, it's "a" (tied with "b").  UNIX(tm) diff does so
        # strip, so ends up claiming that ab is changed to acab by
        # inserting "ca" in the middle.  That's minimal but unintuitive:
        # "it's obvious" that someone inserted "ac" at the front.
        # Windiff ends up at the same place as diff, but by pairing up
        # the unique 'b's and then matching the first two 'a's.

        a, b, b2j, isbjunk = self.a, self.b, self.b2j, self.bjunk.__contains__
        besti, bestj, bestsize = alo, blo, 0
        # find longest junk-free match
        # during an iteration of the loop, j2len[j] = length of longest
        # junk-free match ending with a[i-1] and b[j]
        j2len = {}
        nothing = []
        for i in range(alo, ahi):
            # look at all instances of a[i] in b; note that because
            # b2j has no junk keys, the loop is skipped if a[i] is junk
            j2lenget = j2len.get
            newj2len = {}
            for j in b2j.get(a[i], nothing):
                # a[i] matches b[j]
                if j < blo:
                    continue
                if j >= bhi:
                    break
                k = newj2len[j] = j2lenget(j-1, 0) + 1
                if k > bestsize:
                    besti, bestj, bestsize = i-k+1, j-k+1, k
            j2len = newj2len

        # Extend the best by non-junk elements on each end.  In particular,
        # "popular" non-junk elements aren't in b2j, which greatly speeds
        # the inner loop above, but also means "the best" match so far
        # doesn't contain any junk *or* popular non-junk elements.
        while besti > alo and bestj > blo and \
              not isbjunk(b[bestj-1]) and \
              a[besti-1] == b[bestj-1]:
            besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
        while besti+bestsize < ahi and bestj+bestsize < bhi and \
              not isbjunk(b[bestj+bestsize]) and \
              a[besti+bestsize] == b[bestj+bestsize]:
            bestsize += 1

        # Now that we have a wholly interesting match (albeit possibly
        # empty!), we may as well suck up the matching junk on each
        # side of it too.  Can't think of a good reason not to, and it
        # saves post-processing the (possibly considerable) expense of
        # figuring out what to do with it.  In the case of an empty
        # interesting match, this is clearly the right thing to do,
        # because no other kind of match is possible in the regions.
        while besti > alo and bestj > blo and \
              isbjunk(b[bestj-1]) and \
              a[besti-1] == b[bestj-1]:
            besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
        while besti+bestsize < ahi and bestj+bestsize < bhi and \
              isbjunk(b[bestj+bestsize]) and \
              a[besti+bestsize] == b[bestj+bestsize]:
            bestsize += 1

        return Match(besti, bestj, bestsize)
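The two docstring examples above can be run side by side to see how a junk predicate changes the result; this sketch simply exercises the outputs the docstring already documents:

```python
from difflib import SequenceMatcher

# Without junk, the blank participates, so the full " abcd" matches at b[4:9].
m1 = SequenceMatcher(None, " abcd", "abcd abcd").find_longest_match(0, 5, 0, 9)
print(m1)   # Match(a=0, b=4, size=5)

# Treating blanks as junk forbids starting a match on " ", so only "abcd"
# can match, and it pairs with the leftmost occurrence in b.
m2 = SequenceMatcher(lambda x: x == " ",
                     " abcd", "abcd abcd").find_longest_match(0, 5, 0, 9)
print(m2)   # Match(a=1, b=0, size=4)
```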

    def get_matching_blocks(self):
        """Return list of triples describing matching subsequences.

        Each triple is of the form (i, j, n), and means that
        a[i:i+n] == b[j:j+n].  The triples are monotonically increasing in
        i and in j.  New in Python 2.5, it's also guaranteed that if
        (i, j, n) and (i', j', n') are adjacent triples in the list, and
        the second is not the last triple in the list, then i+n != i' or
        j+n != j'.  IOW, adjacent triples never describe adjacent equal
        blocks.

        The last triple is a dummy, (len(a), len(b), 0), and is the only
        triple with n==0.

        >>> s = SequenceMatcher(None, "abxcd", "abcd")
        >>> list(s.get_matching_blocks())
        [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)]
        """

        if self.matching_blocks is not None:
            return self.matching_blocks
        la, lb = len(self.a), len(self.b)

        # This is most naturally expressed as a recursive algorithm, but
        # at least one user bumped into extreme use cases that exceeded
        # the recursion limit on their box.  So, now we maintain a list
        # (`queue`) of blocks we still need to look at, and append partial
        # results to `matching_blocks` in a loop; the matches are sorted
        # at the end.
        queue = [(0, la, 0, lb)]
        matching_blocks = []
        while queue:
            alo, ahi, blo, bhi = queue.pop()
            i, j, k = x = self.find_longest_match(alo, ahi, blo, bhi)
            # a[alo:i] vs b[blo:j] unknown
            # a[i:i+k] same as b[j:j+k]
            # a[i+k:ahi] vs b[j+k:bhi] unknown
            if k:   # if k is 0, there was no matching block
                matching_blocks.append(x)
                if alo < i and blo < j:
                    queue.append((alo, i, blo, j))
                if i+k < ahi and j+k < bhi:
                    queue.append((i+k, ahi, j+k, bhi))
        matching_blocks.sort()

        # It's possible that we have adjacent equal blocks in the
 | 510 |         # matching_blocks list now.  Starting with 2.5, this code was added | 
 | 511 |         # to collapse them. | 
 | 512 |         i1 = j1 = k1 = 0 | 
 | 513 |         non_adjacent = [] | 
 | 514 |         for i2, j2, k2 in matching_blocks: | 
 | 515 |             # Is this block adjacent to i1, j1, k1? | 
 | 516 |             if i1 + k1 == i2 and j1 + k1 == j2: | 
 | 517 |                 # Yes, so collapse them -- this just increases the length of | 
 | 518 |                 # the first block by the length of the second, and the first | 
 | 519 |                 # block so lengthened remains the block to compare against. | 
 | 520 |                 k1 += k2 | 
 | 521 |             else: | 
 | 522 |                 # Not adjacent.  Remember the first block (k1==0 means it's | 
 | 523 |                 # the dummy we started with), and make the second block the | 
 | 524 |                 # new block to compare against. | 
 | 525 |                 if k1: | 
 | 526 |                     non_adjacent.append((i1, j1, k1)) | 
 | 527 |                 i1, j1, k1 = i2, j2, k2 | 
 | 528 |         if k1: | 
 | 529 |             non_adjacent.append((i1, j1, k1)) | 
 | 530 |  | 
 | 531 |         non_adjacent.append( (la, lb, 0) ) | 
 | 532 |         self.matching_blocks = non_adjacent | 
| Christian Heimes | 25bb783 | 2008-01-11 16:17:00 +0000 | [diff] [blame] | 533 |         return map(Match._make, self.matching_blocks) | 
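The guarantees documented above (a trailing dummy triple, and no adjacent equal blocks) can be checked directly. This is an illustrative sketch, not part of difflib itself; the input strings are arbitrary examples:

```python
from difflib import SequenceMatcher

a = "private Thread currentThread;"
b = "private volatile Thread currentThread;"
blocks = list(SequenceMatcher(None, a, b).get_matching_blocks())

# Every triple except the trailing dummy has size > 0,
# and the dummy is exactly (len(a), len(b), 0).
assert all(n > 0 for _, _, n in blocks[:-1])
assert blocks[-1] == (len(a), len(b), 0)

# Consecutive non-dummy triples are never adjacent in both i and j:
# adjacent equal blocks have already been collapsed.
for (i, j, n), (i2, j2, n2) in zip(blocks, blocks[1:]):
    if n2:
        assert not (i + n == i2 and j + n == j2)
```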

    def get_opcodes(self):
        """Return list of 5-tuples describing how to turn a into b.

        Each tuple is of the form (tag, i1, i2, j1, j2).  The first tuple
        has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the
        tuple preceding it, and likewise for j1 == the previous j2.

        The tags are strings, with these meanings:

        'replace':  a[i1:i2] should be replaced by b[j1:j2]
        'delete':   a[i1:i2] should be deleted.
                    Note that j1==j2 in this case.
        'insert':   b[j1:j2] should be inserted at a[i1:i1].
                    Note that i1==i2 in this case.
        'equal':    a[i1:i2] == b[j1:j2]

        >>> a = "qabxcd"
        >>> b = "abycdf"
        >>> s = SequenceMatcher(None, a, b)
        >>> for tag, i1, i2, j1, j2 in s.get_opcodes():
        ...    print(("%7s a[%d:%d] (%s) b[%d:%d] (%s)" %
        ...           (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2])))
         delete a[0:1] (q) b[0:0] ()
          equal a[1:3] (ab) b[0:2] (ab)
        replace a[3:4] (x) b[2:3] (y)
          equal a[4:6] (cd) b[3:5] (cd)
         insert a[6:6] () b[5:6] (f)
        """

        if self.opcodes is not None:
            return self.opcodes
        i = j = 0
        self.opcodes = answer = []
        for ai, bj, size in self.get_matching_blocks():
            # invariant:  we've pumped out correct diffs to change
            # a[:i] into b[:j], and the next matching block is
            # a[ai:ai+size] == b[bj:bj+size].  So we need to pump
            # out a diff to change a[i:ai] into b[j:bj], pump out
            # the matching block, and move (i,j) beyond the match
            tag = ''
            if i < ai and j < bj:
                tag = 'replace'
            elif i < ai:
                tag = 'delete'
            elif j < bj:
                tag = 'insert'
            if tag:
                answer.append( (tag, i, ai, j, bj) )
            i, j = ai+size, bj+size
            # the list of matching blocks is terminated by a
            # sentinel with size 0
            if size:
                answer.append( ('equal', ai, i, bj, j) )
        return answer
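Because the opcodes cover both sequences without gaps, they form a complete edit script: replaying them left to right rebuilds b from a. A small sketch (the helper name `apply_opcodes` is hypothetical, not difflib API):

```python
from difflib import SequenceMatcher

def apply_opcodes(a, b):
    """Rebuild b from a by replaying SequenceMatcher.get_opcodes()."""
    out = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if tag == 'equal':
            out.append(a[i1:i2])      # unchanged run, copy from a
        elif tag in ('replace', 'insert'):
            out.append(b[j1:j2])      # take the new material from b
        # 'delete' contributes nothing to the output
    return ''.join(out)

assert apply_opcodes("qabxcd", "abycdf") == "abycdf"
assert apply_opcodes("kitten", "sitting") == "sitting"
```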

    def get_grouped_opcodes(self, n=3):
        """ Isolate change clusters by eliminating ranges with no changes.

        Return a generator of groups with up to n lines of context.
        Each group is in the same format as returned by get_opcodes().

        >>> from pprint import pprint
        >>> a = list(map(str, range(1,40)))
        >>> b = a[:]
        >>> b[8:8] = ['i']     # Make an insertion
        >>> b[20] += 'x'       # Make a replacement
        >>> b[23:28] = []      # Make a deletion
        >>> b[30] += 'y'       # Make another replacement
        >>> pprint(list(SequenceMatcher(None,a,b).get_grouped_opcodes()))
        [[('equal', 5, 8, 5, 8), ('insert', 8, 8, 8, 9), ('equal', 8, 11, 9, 12)],
         [('equal', 16, 19, 17, 20),
          ('replace', 19, 20, 20, 21),
          ('equal', 20, 22, 21, 23),
          ('delete', 22, 27, 23, 23),
          ('equal', 27, 30, 23, 26)],
         [('equal', 31, 34, 27, 30),
          ('replace', 34, 35, 30, 31),
          ('equal', 35, 38, 31, 34)]]
        """

        codes = self.get_opcodes()
        if not codes:
            codes = [("equal", 0, 1, 0, 1)]
        # Fixup leading and trailing groups if they show no changes.
        if codes[0][0] == 'equal':
            tag, i1, i2, j1, j2 = codes[0]
            codes[0] = tag, max(i1, i2-n), i2, max(j1, j2-n), j2
        if codes[-1][0] == 'equal':
            tag, i1, i2, j1, j2 = codes[-1]
            codes[-1] = tag, i1, min(i2, i1+n), j1, min(j2, j1+n)

        nn = n + n
        group = []
        for tag, i1, i2, j1, j2 in codes:
            # End the current group and start a new one whenever
            # there is a large range with no changes.
            if tag == 'equal' and i2-i1 > nn:
                group.append((tag, i1, min(i2, i1+n), j1, min(j2, j1+n)))
                yield group
                group = []
                i1, j1 = max(i1, i2-n), max(j1, j2-n)
            group.append((tag, i1, i2, j1, j2))
        if group and not (len(group)==1 and group[0][0] == 'equal'):
            yield group
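The context width n controls how changes cluster: an equal run splits two groups only when it is longer than 2*n lines. In this sketch (illustrative, not part of difflib), two changes 14 lines apart land in separate groups at the default n=3 but merge into one group at n=20:

```python
from difflib import SequenceMatcher

a = [str(i) for i in range(30)]
b = a[:]
b[5] += 'x'    # change near the top
b[20] += 'y'   # change 14 equal lines further down

# Fresh matchers for each call, since grouping trims the cached opcodes.
assert len(list(SequenceMatcher(None, a, b).get_grouped_opcodes(n=3))) == 2
assert len(list(SequenceMatcher(None, a, b).get_grouped_opcodes(n=20))) == 1
```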

    def ratio(self):
        """Return a measure of the sequences' similarity (float in [0,1]).

        Where T is the total number of elements in both sequences, and
        M is the number of matches, this is 2.0*M / T.
        Note that this is 1 if the sequences are identical, and 0 if
        they have nothing in common.

        .ratio() is expensive to compute if you haven't already computed
        .get_matching_blocks() or .get_opcodes(), in which case you may
        want to try .quick_ratio() or .real_quick_ratio() first to get an
        upper bound.

        >>> s = SequenceMatcher(None, "abcd", "bcde")
        >>> s.ratio()
        0.75
        >>> s.quick_ratio()
        0.75
        >>> s.real_quick_ratio()
        1.0
        """

        matches = sum(triple[-1] for triple in self.get_matching_blocks())
        return _calculate_ratio(matches, len(self.a) + len(self.b))

    def quick_ratio(self):
        """Return an upper bound on ratio() relatively quickly.

        This isn't defined beyond that it is an upper bound on .ratio(), and
        is faster to compute.
        """

        # viewing a and b as multisets, set matches to the cardinality
        # of their intersection; this counts the number of matches
        # without regard to order, so is clearly an upper bound
        if self.fullbcount is None:
            self.fullbcount = fullbcount = {}
            for elt in self.b:
                fullbcount[elt] = fullbcount.get(elt, 0) + 1
        fullbcount = self.fullbcount
        # avail[x] is the number of times x appears in 'b' less the
        # number of times we've seen it in 'a' so far ... kinda
        avail = {}
        availhas, matches = avail.__contains__, 0
        for elt in self.a:
            if availhas(elt):
                numb = avail[elt]
            else:
                numb = fullbcount.get(elt, 0)
            avail[elt] = numb - 1
            if numb > 0:
                matches = matches + 1
        return _calculate_ratio(matches, len(self.a) + len(self.b))

    def real_quick_ratio(self):
        """Return an upper bound on ratio() very quickly.

        This isn't defined beyond that it is an upper bound on .ratio(), and
        is faster to compute than either .ratio() or .quick_ratio().
        """

        la, lb = len(self.a), len(self.b)
        # can't have more matches than the number of elements in the
        # shorter sequence
        return _calculate_ratio(min(la, lb), la + lb)
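The three ratios form a chain of upper bounds, real_quick_ratio() >= quick_ratio() >= ratio(), which is what makes the filtering idiom cheap: check the fast bounds first and only pay for ratio() when they clear the threshold. A sketch of that idiom (the helper name `similar_enough` is hypothetical):

```python
from difflib import SequenceMatcher

def similar_enough(a, b, cutoff=0.6):
    """Threshold test using the cheap upper bounds before ratio()."""
    sm = SequenceMatcher(None, a, b)
    return (sm.real_quick_ratio() >= cutoff and
            sm.quick_ratio() >= cutoff and
            sm.ratio() >= cutoff)

assert similar_enough("abcd", "bcde", cutoff=0.7)     # ratio() is 0.75
assert not similar_enough("abcd", "wxyz", cutoff=0.1) # no elements in common
```

The short-circuit matters: for "abcd" vs "wxyz", quick_ratio() already returns 0.0, so the expensive ratio() is never computed.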
| Tim Peters | 9ae2148 | 2001-02-10 08:00:53 +0000 | [diff] [blame] | 705 |  | 
 | 706 | def get_close_matches(word, possibilities, n=3, cutoff=0.6): | 
 | 707 |     """Use SequenceMatcher to return list of the best "good enough" matches. | 
 | 708 |  | 
 | 709 |     word is a sequence for which close matches are desired (typically a | 
 | 710 |     string). | 
 | 711 |  | 
 | 712 |     possibilities is a list of sequences against which to match word | 
 | 713 |     (typically a list of strings). | 
 | 714 |  | 
 | 715 |     Optional arg n (default 3) is the maximum number of close matches to | 
 | 716 |     return.  n must be > 0. | 
 | 717 |  | 
 | 718 |     Optional arg cutoff (default 0.6) is a float in [0, 1].  Possibilities | 
 | 719 |     that don't score at least that similar to word are ignored. | 
 | 720 |  | 
 | 721 |     The best (no more than n) matches among the possibilities are returned | 
 | 722 |     in a list, sorted by similarity score, most similar first. | 
 | 723 |  | 
 | 724 |     >>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"]) | 
 | 725 |     ['apple', 'ape'] | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 726 |     >>> import keyword as _keyword | 
 | 727 |     >>> get_close_matches("wheel", _keyword.kwlist) | 
| Tim Peters | 9ae2148 | 2001-02-10 08:00:53 +0000 | [diff] [blame] | 728 |     ['while'] | 
| Guido van Rossum | 486364b | 2007-06-30 05:01:58 +0000 | [diff] [blame] | 729 |     >>> get_close_matches("Apple", _keyword.kwlist) | 
| Tim Peters | 9ae2148 | 2001-02-10 08:00:53 +0000 | [diff] [blame] | 730 |     [] | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 731 |     >>> get_close_matches("accept", _keyword.kwlist) | 
| Tim Peters | 9ae2148 | 2001-02-10 08:00:53 +0000 | [diff] [blame] | 732 |     ['except'] | 
 | 733 |     """ | 
 | 734 |  | 
 | 735 |     if not n >  0: | 
| Walter Dörwald | 70a6b49 | 2004-02-12 17:35:32 +0000 | [diff] [blame] | 736 |         raise ValueError("n must be > 0: %r" % (n,)) | 
| Tim Peters | 9ae2148 | 2001-02-10 08:00:53 +0000 | [diff] [blame] | 737 |     if not 0.0 <= cutoff <= 1.0: | 
| Walter Dörwald | 70a6b49 | 2004-02-12 17:35:32 +0000 | [diff] [blame] | 738 |         raise ValueError("cutoff must be in [0.0, 1.0]: %r" % (cutoff,)) | 
| Tim Peters | 9ae2148 | 2001-02-10 08:00:53 +0000 | [diff] [blame] | 739 |     result = [] | 
 | 740 |     s = SequenceMatcher() | 
 | 741 |     s.set_seq2(word) | 
 | 742 |     for x in possibilities: | 
 | 743 |         s.set_seq1(x) | 
 | 744 |         if s.real_quick_ratio() >= cutoff and \ | 
 | 745 |            s.quick_ratio() >= cutoff and \ | 
 | 746 |            s.ratio() >= cutoff: | 
 | 747 |             result.append((s.ratio(), x)) | 
| Tim Peters | 9ae2148 | 2001-02-10 08:00:53 +0000 | [diff] [blame] | 748 |  | 
| Raymond Hettinger | 6b59f5f | 2003-10-16 05:53:16 +0000 | [diff] [blame] | 749 |     # Move the best scorers to head of list | 
| Raymond Hettinger | aefde43 | 2004-06-15 23:53:35 +0000 | [diff] [blame] | 750 |     result = heapq.nlargest(n, result) | 
| Raymond Hettinger | 6b59f5f | 2003-10-16 05:53:16 +0000 | [diff] [blame] | 751 |     # Strip scores for the best n matches | 
| Raymond Hettinger | bb6b734 | 2004-06-13 09:57:33 +0000 | [diff] [blame] | 752 |     return [x for score, x in result] | 
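How n and cutoff interact with the result can be seen on the docstring's own word list: n caps how many survivors come back, and raising cutoff drops the weaker matches entirely. An illustrative sketch:

```python
from difflib import get_close_matches

words = ["ape", "apple", "peach", "puppy"]

# Default n=3, cutoff=0.6: two candidates clear the bar.
assert get_close_matches("appel", words) == ['apple', 'ape']

# n=1 keeps only the single best scorer.
assert get_close_matches("appel", words, n=1) == ['apple']

# A stricter cutoff drops 'ape' (ratio 0.75) but keeps 'apple' (ratio 0.8).
assert get_close_matches("appel", words, cutoff=0.8) == ['apple']
```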

def _count_leading(line, ch):
    """
    Return number of `ch` characters at the start of `line`.

    Example:

    >>> _count_leading('   abc', ' ')
    3
    """

    i, n = 0, len(line)
    while i < n and line[i] == ch:
        i += 1
    return i

class Differ:
    r"""
    Differ is a class for comparing sequences of lines of text, and
    producing human-readable differences or deltas.  Differ uses
    SequenceMatcher both to compare sequences of lines, and to compare
    sequences of characters within similar (near-matching) lines.

    Each line of a Differ delta begins with a two-letter code:

        '- '    line unique to sequence 1
        '+ '    line unique to sequence 2
        '  '    line common to both sequences
        '? '    line not present in either input sequence

    Lines beginning with '? ' attempt to guide the eye to intraline
    differences, and were not present in either input sequence.  These lines
    can be confusing if the sequences contain tab characters.

    Note that Differ makes no claim to produce a *minimal* diff.  To the
    contrary, minimal diffs are often counter-intuitive, because they synch
    up anywhere possible, sometimes accidental matches 100 pages apart.
    Restricting synch points to contiguous matches preserves some notion of
    locality, at the occasional cost of producing a longer diff.

    Example: Comparing two texts.

    First we set up the texts, sequences of individual single-line strings
    ending with newlines (such sequences can also be obtained from the
    `readlines()` method of file-like objects):

    >>> text1 = '''  1. Beautiful is better than ugly.
    ...   2. Explicit is better than implicit.
    ...   3. Simple is better than complex.
    ...   4. Complex is better than complicated.
    ... '''.splitlines(1)
    >>> len(text1)
    4
    >>> text1[0][-1]
    '\n'
    >>> text2 = '''  1. Beautiful is better than ugly.
    ...   3.   Simple is better than complex.
    ...   4. Complicated is better than complex.
    ...   5. Flat is better than nested.
    ... '''.splitlines(1)

    Next we instantiate a Differ object:

    >>> d = Differ()

    Note that when instantiating a Differ object we may pass functions to
    filter out line and character 'junk'.  See Differ.__init__ for details.

    Finally, we compare the two:

    >>> result = list(d.compare(text1, text2))

    'result' is a list of strings, so let's pretty-print it:

    >>> from pprint import pprint as _pprint
    >>> _pprint(result)
    ['    1. Beautiful is better than ugly.\n',
     '-   2. Explicit is better than implicit.\n',
     '-   3. Simple is better than complex.\n',
     '+   3.   Simple is better than complex.\n',
     '?     ++\n',
     '-   4. Complex is better than complicated.\n',
     '?            ^                     ---- ^\n',
     '+   4. Complicated is better than complex.\n',
     '?           ++++ ^                      ^\n',
     '+   5. Flat is better than nested.\n']

    As a single multi-line string it looks like this:

    >>> print(''.join(result), end="")
        1. Beautiful is better than ugly.
    -   2. Explicit is better than implicit.
    -   3. Simple is better than complex.
    +   3.   Simple is better than complex.
    ?     ++
    -   4. Complex is better than complicated.
    ?            ^                     ---- ^
    +   4. Complicated is better than complex.
    ?           ++++ ^                      ^
    +   5. Flat is better than nested.

    Methods:

    __init__(linejunk=None, charjunk=None)
        Construct a text differencer, with optional filters.

    compare(a, b)
        Compare two sequences of lines; generate the resulting delta.
    """

    def __init__(self, linejunk=None, charjunk=None):
        """
        Construct a text differencer, with optional filters.

        The two optional keyword parameters are for filter functions:

        - `linejunk`: A function that should accept a single string argument,
          and return true iff the string is junk. The module-level function
          `IS_LINE_JUNK` may be used to filter out lines without visible
          characters, except for at most one splat ('#').  It is recommended
          to leave linejunk None; as of Python 2.3, the underlying
          SequenceMatcher class has grown an adaptive notion of "noise" lines
          that's better than any static definition the author has ever been
          able to craft.

        - `charjunk`: A function that should accept a string of length 1. The
          module-level function `IS_CHARACTER_JUNK` may be used to filter out
          whitespace characters (a blank or tab; **note**: bad idea to include
          newline in this!).  Use of IS_CHARACTER_JUNK is recommended.
        """

        self.linejunk = linejunk
        self.charjunk = charjunk

    def compare(self, a, b):
        r"""
        Compare two sequences of lines; generate the resulting delta.

        Each sequence must contain individual single-line strings ending with
        newlines. Such sequences can be obtained from the `readlines()` method
        of file-like objects.  The delta generated also consists of newline-
        terminated strings, ready to be printed as-is via the writelines()
        method of a file-like object.

        Example:

        >>> print(''.join(Differ().compare('one\ntwo\nthree\n'.splitlines(1),
        ...                                'ore\ntree\nemu\n'.splitlines(1))),
        ...       end="")
        - one
        ?  ^
        + ore
        ?  ^
        - two
        - three
        ?  -
        + tree
        + emu
        """

        cruncher = SequenceMatcher(self.linejunk, a, b)
        for tag, alo, ahi, blo, bhi in cruncher.get_opcodes():
            if tag == 'replace':
                g = self._fancy_replace(a, alo, ahi, b, blo, bhi)
            elif tag == 'delete':
                g = self._dump('-', a, alo, ahi)
            elif tag == 'insert':
                g = self._dump('+', b, blo, bhi)
            elif tag == 'equal':
                g = self._dump(' ', a, alo, ahi)
            else:
                raise ValueError('unknown tag %r' % (tag,))

            for line in g:
                yield line
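Because every delta line carries its two-letter prefix, either input sequence can be recovered from the delta by keeping the right prefixes: '  ' plus '- ' lines reconstruct sequence 1, '  ' plus '+ ' lines reconstruct sequence 2, and '? ' guide lines are discarded. An illustrative sketch of that recovery:

```python
from difflib import Differ

a = 'one\ntwo\nthree\n'.splitlines(keepends=True)
b = 'ore\ntree\nemu\n'.splitlines(keepends=True)
delta = list(Differ().compare(a, b))

# Strip the two-letter codes, keeping only the lines from each side.
recovered_a = [ln[2:] for ln in delta if ln.startswith(('- ', '  '))]
recovered_b = [ln[2:] for ln in delta if ln.startswith(('+ ', '  '))]
assert recovered_a == a
assert recovered_b == b
```

This is the same idea the module-level restore() function implements for ndiff deltas.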

    def _dump(self, tag, x, lo, hi):
        """Generate comparison results for a same-tagged range."""
        for i in range(lo, hi):
            yield '%s %s' % (tag, x[i])

    def _plain_replace(self, a, alo, ahi, b, blo, bhi):
        assert alo < ahi and blo < bhi
        # dump the shorter block first -- reduces the burden on short-term
        # memory if the blocks are of very different sizes
        if bhi - blo < ahi - alo:
            first  = self._dump('+', b, blo, bhi)
            second = self._dump('-', a, alo, ahi)
        else:
            first  = self._dump('-', a, alo, ahi)
            second = self._dump('+', b, blo, bhi)

        for g in first, second:
            for line in g:
                yield line

    def _fancy_replace(self, a, alo, ahi, b, blo, bhi):
        r"""
        When replacing one block of lines with another, search the blocks
        for *similar* lines; the best-matching pair (if any) is used as a
        synch point, and intraline difference marking is done on the
        similar pair. Lots of work, but often worth it.

        Example:

        >>> d = Differ()
        >>> results = d._fancy_replace(['abcDefghiJkl\n'], 0, 1,
        ...                            ['abcdefGhijkl\n'], 0, 1)
        >>> print(''.join(results), end="")
        - abcDefghiJkl
        ?    ^  ^  ^
        + abcdefGhijkl
        ?    ^  ^  ^
        """

        # don't synch up unless the lines have a similarity score of at
        # least cutoff; best_ratio tracks the best score seen so far
        best_ratio, cutoff = 0.74, 0.75
        cruncher = SequenceMatcher(self.charjunk)
        eqi, eqj = None, None   # 1st indices of equal lines (if any)

        # search for the pair that matches best without being identical
        # (identical lines must be junk lines, & we don't want to synch up
        # on junk -- unless we have to)
        for j in range(blo, bhi):
            bj = b[j]
            cruncher.set_seq2(bj)
            for i in range(alo, ahi):
                ai = a[i]
                if ai == bj:
                    if eqi is None:
                        eqi, eqj = i, j
                    continue
                cruncher.set_seq1(ai)
                # computing similarity is expensive, so use the quick
                # upper bounds first -- have seen this speed up messy
                # compares by a factor of 3.
                # note that ratio() is only expensive to compute the first
                # time it's called on a sequence pair; the expensive part
                # of the computation is cached by cruncher
                if cruncher.real_quick_ratio() > best_ratio and \
                      cruncher.quick_ratio() > best_ratio and \
                      cruncher.ratio() > best_ratio:
                    best_ratio, best_i, best_j = cruncher.ratio(), i, j
        if best_ratio < cutoff:
            # no non-identical "pretty close" pair
            if eqi is None:
                # no identical pair either -- treat it as a straight replace
                for line in self._plain_replace(a, alo, ahi, b, blo, bhi):
                    yield line
                return
            # no close pair, but an identical pair -- synch up on that
            best_i, best_j, best_ratio = eqi, eqj, 1.0
        else:
            # there's a close pair, so forget the identical pair (if any)
            eqi = None

        # a[best_i] very similar to b[best_j]; eqi is None iff they're not
        # identical

        # pump out diffs from before the synch point
        for line in self._fancy_helper(a, alo, best_i, b, blo, best_j):
            yield line

        # do intraline marking on the synch pair
        aelt, belt = a[best_i], b[best_j]
        if eqi is None:
            # pump out a '-', '?', '+', '?' quad for the synched lines
            atags = btags = ""
            cruncher.set_seqs(aelt, belt)
            for tag, ai1, ai2, bj1, bj2 in cruncher.get_opcodes():
 | 1024 |                 la, lb = ai2 - ai1, bj2 - bj1 | 
 | 1025 |                 if tag == 'replace': | 
 | 1026 |                     atags += '^' * la | 
 | 1027 |                     btags += '^' * lb | 
 | 1028 |                 elif tag == 'delete': | 
 | 1029 |                     atags += '-' * la | 
 | 1030 |                 elif tag == 'insert': | 
 | 1031 |                     btags += '+' * lb | 
 | 1032 |                 elif tag == 'equal': | 
 | 1033 |                     atags += ' ' * la | 
 | 1034 |                     btags += ' ' * lb | 
 | 1035 |                 else: | 
| Collin Winter | ce36ad8 | 2007-08-30 01:19:48 +0000 | [diff] [blame] | 1036 |                     raise ValueError('unknown tag %r' % (tag,)) | 
| Tim Peters | 8a9c284 | 2001-09-22 21:30:22 +0000 | [diff] [blame] | 1037 |             for line in self._qformat(aelt, belt, atags, btags): | 
 | 1038 |                 yield line | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 1039 |         else: | 
 | 1040 |             # the synch pair is identical | 
| Tim Peters | 8a9c284 | 2001-09-22 21:30:22 +0000 | [diff] [blame] | 1041 |             yield '  ' + aelt | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 1042 |  | 
 | 1043 |         # pump out diffs from after the synch point | 
| Tim Peters | 8a9c284 | 2001-09-22 21:30:22 +0000 | [diff] [blame] | 1044 |         for line in self._fancy_helper(a, best_i+1, ahi, b, best_j+1, bhi): | 
 | 1045 |             yield line | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 1046 |  | 
 | 1047 |     def _fancy_helper(self, a, alo, ahi, b, blo, bhi): | 
| Tim Peters | 8a9c284 | 2001-09-22 21:30:22 +0000 | [diff] [blame] | 1048 |         g = [] | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 1049 |         if alo < ahi: | 
 | 1050 |             if blo < bhi: | 
| Tim Peters | 8a9c284 | 2001-09-22 21:30:22 +0000 | [diff] [blame] | 1051 |                 g = self._fancy_replace(a, alo, ahi, b, blo, bhi) | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 1052 |             else: | 
| Tim Peters | 8a9c284 | 2001-09-22 21:30:22 +0000 | [diff] [blame] | 1053 |                 g = self._dump('-', a, alo, ahi) | 
| Tim Peters | 5e824c3 | 2001-08-12 22:25:01 +0000 | [diff] [blame] | 1054 |         elif blo < bhi: | 
| Tim Peters | 8a9c284 | 2001-09-22 21:30:22 +0000 | [diff] [blame] | 1055 |             g = self._dump('+', b, blo, bhi) | 
 | 1056 |  | 
 | 1057 |         for line in g: | 
 | 1058 |             yield line | 

    def _qformat(self, aline, bline, atags, btags):
        r"""
        Format "?" output and deal with leading tabs.

        Example:

        >>> d = Differ()
        >>> results = d._qformat('\tabcDefghiJkl\n', '\tabcdefGhijkl\n',
        ...                      '  ^ ^  ^      ', '  ^ ^  ^      ')
        >>> for line in results: print(repr(line))
        ...
        '- \tabcDefghiJkl\n'
        '? \t ^ ^  ^\n'
        '+ \tabcdefGhijkl\n'
        '? \t ^ ^  ^\n'
        """

        # Can hurt, but will probably help most of the time.
        common = min(_count_leading(aline, "\t"),
                     _count_leading(bline, "\t"))
        common = min(common, _count_leading(atags[:common], " "))
        common = min(common, _count_leading(btags[:common], " "))
        atags = atags[common:].rstrip()
        btags = btags[common:].rstrip()

        yield "- " + aline
        if atags:
            yield "? %s%s\n" % ("\t" * common, atags)

        yield "+ " + bline
        if btags:
            yield "? %s%s\n" % ("\t" * common, btags)
# With respect to junk, an earlier version of ndiff simply refused to
# *start* a match with a junk element.  The result was cases like this:
#     before: private Thread currentThread;
#     after:  private volatile Thread currentThread;
# If you consider whitespace to be junk, the longest contiguous match
# not starting with junk is "e Thread currentThread".  So ndiff reported
# that "e volatil" was inserted between the 't' and the 'e' in "private".
# While an accurate view, to people that's absurd.  The current version
# looks for matching blocks that are entirely junk-free, then extends the
# longest one of those as far as possible but only with matching junk.
# So now "currentThread" is matched, then extended to suck up the
# preceding blank; then "private" is matched, and extended to suck up the
# following blank; then "Thread" is matched; and finally ndiff reports
# that "volatile " was inserted before "Thread".  The only quibble
# remaining is that perhaps it was really the case that " volatile"
# was inserted after "private".  I can live with that <wink>.
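The junk-extension behavior described in the comment above can be checked directly with SequenceMatcher. This is a standalone sketch (not part of the module) using the stdlib difflib and a whitespace-junk predicate equivalent to IS_CHARACTER_JUNK:

```python
# Sketch: check that with whitespace treated as junk, the matcher reports
# "volatile" (plus surrounding blank) as the insertion, rather than the
# "e volatil" split the older ndiff produced.
from difflib import SequenceMatcher

a = "private Thread currentThread;"
b = "private volatile Thread currentThread;"

# Treat blanks and tabs as junk, as IS_CHARACTER_JUNK does.
sm = SequenceMatcher(lambda ch: ch in " \t", a, b)
inserted = ''.join(b[j1:j2] for tag, i1, i2, j1, j2 in sm.get_opcodes()
                   if tag == 'insert')
print(repr(inserted))
```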

import re

def IS_LINE_JUNK(line, pat=re.compile(r"\s*#?\s*$").match):
    r"""
    Return True for ignorable line: iff `line` is blank or contains a single '#'.

    Examples:

    >>> IS_LINE_JUNK('\n')
    True
    >>> IS_LINE_JUNK('  #   \n')
    True
    >>> IS_LINE_JUNK('hello\n')
    False
    """

    return pat(line) is not None

def IS_CHARACTER_JUNK(ch, ws=" \t"):
    r"""
    Return True for ignorable character: iff `ch` is a space or tab.

    Examples:

    >>> IS_CHARACTER_JUNK(' ')
    True
    >>> IS_CHARACTER_JUNK('\t')
    True
    >>> IS_CHARACTER_JUNK('\n')
    False
    >>> IS_CHARACTER_JUNK('x')
    False
    """

    return ch in ws


def unified_diff(a, b, fromfile='', tofile='', fromfiledate='',
                 tofiledate='', n=3, lineterm='\n'):
    r"""
    Compare two sequences of lines; generate the delta as a unified diff.

    Unified diffs are a compact way of showing line changes and a few
    lines of context.  The number of context lines is set by 'n' which
    defaults to three.

    By default, the diff control lines (those with ---, +++, or @@) are
    created with a trailing newline.  This is helpful so that inputs
    created from file.readlines() result in diffs that are suitable for
    file.writelines() since both the inputs and outputs have trailing
    newlines.

    For inputs that do not have trailing newlines, set the lineterm
    argument to "" so that the output will be uniformly newline free.

    The unidiff format normally has a header for filenames and modification
    times.  Any or all of these may be specified using strings for
    'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
    The modification times are normally expressed in the ISO 8601 format.

    Example:

    >>> for line in unified_diff('one two three four'.split(),
    ...             'zero one tree four'.split(), 'Original', 'Current',
    ...             '2005-01-26 23:30:50', '2010-04-02 10:20:52',
    ...             lineterm=''):
    ...     print(line)                 # doctest: +NORMALIZE_WHITESPACE
    --- Original        2005-01-26 23:30:50
    +++ Current         2010-04-02 10:20:52
    @@ -1,4 +1,4 @@
    +zero
     one
    -two
    -three
    +tree
     four
    """

    started = False
    for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n):
        if not started:
            fromdate = '\t%s' % fromfiledate if fromfiledate else ''
            todate = '\t%s' % tofiledate if tofiledate else ''
            yield '--- %s%s%s' % (fromfile, fromdate, lineterm)
            yield '+++ %s%s%s' % (tofile, todate, lineterm)
            started = True
        i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4]
        yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm)
        for tag, i1, i2, j1, j2 in group:
            if tag == 'equal':
                for line in a[i1:i2]:
                    yield ' ' + line
                continue
            if tag == 'replace' or tag == 'delete':
                for line in a[i1:i2]:
                    yield '-' + line
            if tag == 'replace' or tag == 'insert':
                for line in b[j1:j2]:
                    yield '+' + line
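As the docstring notes, inputs that already end in newlines yield output that can be passed straight to file.writelines(). A minimal standalone sketch (hypothetical file names, not part of the module):

```python
from difflib import unified_diff

before = ['one\n', 'two\n']
after = ['one\n', 'three\n']

# Lines already end in '\n', so the default lineterm='\n' keeps the
# output uniform: every yielded line is writelines()-ready.
diff = list(unified_diff(before, after, 'before.txt', 'after.txt'))
print(''.join(diff))
```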

# See http://www.unix.org/single_unix_specification/
def context_diff(a, b, fromfile='', tofile='',
                 fromfiledate='', tofiledate='', n=3, lineterm='\n'):
    r"""
    Compare two sequences of lines; generate the delta as a context diff.

    Context diffs are a compact way of showing line changes and a few
    lines of context.  The number of context lines is set by 'n' which
    defaults to three.

    By default, the diff control lines (those with *** or ---) are
    created with a trailing newline.  This is helpful so that inputs
    created from file.readlines() result in diffs that are suitable for
    file.writelines() since both the inputs and outputs have trailing
    newlines.

    For inputs that do not have trailing newlines, set the lineterm
    argument to "" so that the output will be uniformly newline free.

    The context diff format normally has a header for filenames and
    modification times.  Any or all of these may be specified using
    strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
    The modification times are normally expressed in the ISO 8601 format.
    If not specified, the strings default to blanks.

    Example:

    >>> print(''.join(context_diff('one\ntwo\nthree\nfour\n'.splitlines(1),
    ...       'zero\none\ntree\nfour\n'.splitlines(1), 'Original', 'Current')),
    ...       end="")
    *** Original
    --- Current
    ***************
    *** 1,4 ****
      one
    ! two
    ! three
      four
    --- 1,4 ----
    + zero
      one
    ! tree
      four
    """

    started = False
    prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! ', 'equal':'  '}
    for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n):
        if not started:
            fromdate = '\t%s' % fromfiledate if fromfiledate else ''
            todate = '\t%s' % tofiledate if tofiledate else ''
            yield '*** %s%s%s' % (fromfile, fromdate, lineterm)
            yield '--- %s%s%s' % (tofile, todate, lineterm)
            started = True

        yield '***************%s' % (lineterm,)
        if group[-1][2] - group[0][1] >= 2:
            yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm)
        else:
            yield '*** %d ****%s' % (group[-1][2], lineterm)
        visiblechanges = [e for e in group if e[0] in ('replace', 'delete')]
        if visiblechanges:
            for tag, i1, i2, _, _ in group:
                if tag != 'insert':
                    for line in a[i1:i2]:
                        yield prefixmap[tag] + line

        if group[-1][4] - group[0][3] >= 2:
            yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm)
        else:
            yield '--- %d ----%s' % (group[-1][4], lineterm)
        visiblechanges = [e for e in group if e[0] in ('replace', 'insert')]
        if visiblechanges:
            for tag, _, _, j1, j2 in group:
                if tag != 'delete':
                    for line in b[j1:j2]:
                        yield prefixmap[tag] + line
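For comparison with unified_diff, here is the same kind of two-line change rendered in context format. A standalone sketch (hypothetical file names, not part of the module); note the replace is marked '! ' on both sides:

```python
from difflib import context_diff

before = ['one\n', 'two\n']
after = ['one\n', 'three\n']

# 'two' -> 'three' is a replace opcode, so both the *** (from) and
# --- (to) sections show the changed line with a '! ' prefix.
lines = list(context_diff(before, after, 'before.txt', 'after.txt'))
print(''.join(lines))
```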
def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK):
    r"""
    Compare `a` and `b` (lists of strings); return a `Differ`-style delta.

    Optional keyword parameters `linejunk` and `charjunk` are for filter
    functions (or None):

    - linejunk: A function that should accept a single string argument, and
      return true iff the string is junk.  The default is None, and is
      recommended; as of Python 2.3, an adaptive notion of "noise" lines is
      used that does a good job on its own.

    - charjunk: A function that should accept a string of length 1. The
      default is module-level function IS_CHARACTER_JUNK, which filters out
      whitespace characters (a blank or tab; note: bad idea to include newline
      in this!).

    Tools/scripts/ndiff.py is a command-line front-end to this function.

    Example:

    >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(1),
    ...              'ore\ntree\nemu\n'.splitlines(1))
    >>> print(''.join(diff), end="")
    - one
    ?  ^
    + ore
    ?  ^
    - two
    - three
    ?  -
    + tree
    + emu
    """
    return Differ(linejunk, charjunk).compare(a, b)

def _mdiff(fromlines, tolines, context=None, linejunk=None,
           charjunk=IS_CHARACTER_JUNK):
    r"""Returns generator yielding marked up from/to side by side differences.

    Arguments:
    fromlines -- list of text lines to be compared to tolines
    tolines -- list of text lines to be compared to fromlines
    context -- number of context lines to display on each side of difference,
               if None, all from/to text lines will be generated.
    linejunk -- passed on to ndiff (see ndiff documentation)
    charjunk -- passed on to ndiff (see ndiff documentation)

    This function returns an iterator which returns a tuple:
    (from line tuple, to line tuple, boolean flag)

    from/to line tuple -- (line num, line text)
        line num -- integer or None (to indicate a context separation)
        line text -- original line text with following markers inserted:
            '\0+' -- marks start of added text
            '\0-' -- marks start of deleted text
            '\0^' -- marks start of changed text
            '\1' -- marks end of added/deleted/changed text

    boolean flag -- None indicates context separation, True indicates
        either "from" or "to" line contains a change, otherwise False.

    This function/iterator was originally developed to generate side by side
    file difference for making HTML pages (see HtmlDiff class for example
    usage).

    Note, this function utilizes the ndiff function to generate the side by
    side difference markup.  Optional ndiff arguments may be passed to this
    function and they in turn will be passed to ndiff.
    """

    # regular expression for finding intraline change indices
    change_re = re.compile(r'(\++|\-+|\^+)')

    # create the difference iterator to generate the differences
    diff_lines_iterator = ndiff(fromlines,tolines,linejunk,charjunk)

    def _make_line(lines, format_key, side, num_lines=[0,0]):
        """Returns line of text with user's change markup and line formatting.

        lines -- list of lines from the ndiff generator to produce a line of
                 text from.  When producing the line of text to return, the
                 lines used are removed from this list.
        format_key -- '+' return first line in list with "add" markup around
                          the entire line.
                      '-' return first line in list with "delete" markup around
                          the entire line.
                      '?' return first line in list with add/delete/change
                          intraline markup (indices obtained from second line)
                      None return first line in list with no markup
        side -- index into the num_lines list (0=from,1=to)
        num_lines -- from/to current line number.  This is NOT intended to be a
                     passed parameter.  It is present as a keyword argument to
                     maintain memory of the current line numbers between calls
                     of this function.

        Note, this function is purposefully not defined at the module scope so
        that data it needs from its parent function (within whose context it
        is defined) does not need to be of module scope.
        """
        num_lines[side] += 1
        # Handle case where no user markup is to be added, just return line of
        # text with user's line format to allow for usage of the line number.
        if format_key is None:
            return (num_lines[side],lines.pop(0)[2:])
        # Handle case of intraline changes
        if format_key == '?':
            text, markers = lines.pop(0), lines.pop(0)
            # find intraline changes (store change type and indices in tuples)
            sub_info = []
            def record_sub_info(match_object,sub_info=sub_info):
                sub_info.append([match_object.group(1)[0],match_object.span()])
                return match_object.group(1)
            change_re.sub(record_sub_info,markers)
            # process each tuple inserting our special marks that won't be
            # noticed by an xml/html escaper.
            for key,(begin,end) in sub_info[::-1]:
                text = text[0:begin]+'\0'+key+text[begin:end]+'\1'+text[end:]
            text = text[2:]
        # Handle case of add/delete entire line
        else:
            text = lines.pop(0)[2:]
            # if line of text is just a newline, insert a space so there is
            # something for the user to highlight and see.
            if not text:
                text = ' '
            # insert marks that won't be noticed by an xml/html escaper.
            text = '\0' + format_key + text + '\1'
        # Return line of text, first allow user's line formatter to do its
        # thing (such as adding the line number) then replace the special
        # marks with the user's change markup.
        return (num_lines[side],text)
| Martin v. Löwis | e064b41 | 2004-08-29 16:34:40 +0000 | [diff] [blame] | 1422 |     def _line_iterator(): | 
 | 1423 |         """Yields from/to lines of text with a change indication. | 
 | 1424 |  | 
 | 1425 |         This function is an iterator.  It itself pulls lines from a | 
 | 1426 |         differencing iterator, processes them and yields them.  When it can | 
 | 1427 |         it yields both a "from" and a "to" line, otherwise it will yield one | 
 | 1428 |         or the other.  In addition to yielding the lines of from/to text, a | 
 | 1429 |         boolean flag is yielded to indicate if the text line(s) have | 
 | 1430 |         differences in them. | 
 | 1431 |  | 
 | 1432 |         Note, this function is purposefully not defined at the module scope so | 
 | 1433 |         that data it needs from its parent function (within whose context it | 
 | 1434 |         is defined) does not need to be of module scope. | 
 | 1435 |         """ | 
 | 1436 |         lines = [] | 
 | 1437 |         num_blanks_pending, num_blanks_to_yield = 0, 0 | 
| Tim Peters | 48bd7f3 | 2004-08-29 22:38:38 +0000 | [diff] [blame] | 1438 |         while True: | 
| Martin v. Löwis | e064b41 | 2004-08-29 16:34:40 +0000 | [diff] [blame] | 1439 |             # Load up next 4 lines so we can look ahead, create strings which | 
 | 1440 |             # are a concatenation of the first character of each of the 4 lines | 
 | 1441 |             # so we can do some very readable comparisons. | 
 | 1442 |             while len(lines) < 4: | 
 | 1443 |                 try: | 
| Georg Brandl | a18af4e | 2007-04-21 15:47:16 +0000 | [diff] [blame] | 1444 |                     lines.append(next(diff_lines_iterator)) | 
| Martin v. Löwis | e064b41 | 2004-08-29 16:34:40 +0000 | [diff] [blame] | 1445 |                 except StopIteration: | 
 | 1446 |                     lines.append('X') | 
 | 1447 |             s = ''.join([line[0] for line in lines]) | 
            if s.startswith('X'):
                # When no more lines, pump out any remaining blank lines so the
                # corresponding add/delete lines get a matching blank line so
                # all line pairs get yielded at the next level.
                num_blanks_to_yield = num_blanks_pending
            elif s.startswith('-?+?'):
                # simple intraline change
                yield _make_line(lines,'?',0), _make_line(lines,'?',1), True
                continue
            elif s.startswith('--++'):
                # in delete block, add block coming: we do NOT want to get
                # caught up on blank lines yet, just process the delete line
                num_blanks_pending -= 1
                yield _make_line(lines,'-',0), None, True
                continue
            elif s.startswith(('--?+', '--+', '- ')):
                # in delete block and see an intraline change or unchanged line
                # coming: yield the delete line and then blanks
                from_line,to_line = _make_line(lines,'-',0), None
                num_blanks_to_yield,num_blanks_pending = num_blanks_pending-1,0
            elif s.startswith('-+?'):
                # intraline change
                yield _make_line(lines,None,0), _make_line(lines,'?',1), True
                continue
            elif s.startswith('-?+'):
                # intraline change
                yield _make_line(lines,'?',0), _make_line(lines,None,1), True
                continue
            elif s.startswith('-'):
                # delete FROM line
                num_blanks_pending -= 1
                yield _make_line(lines,'-',0), None, True
                continue
            elif s.startswith('+--'):
                # in add block, delete block coming: we do NOT want to get
                # caught up on blank lines yet, just process the add line
                num_blanks_pending += 1
                yield None, _make_line(lines,'+',1), True
                continue
            elif s.startswith(('+ ', '+-')):
                # will be leaving an add block: yield blanks then add line
                from_line, to_line = None, _make_line(lines,'+',1)
                num_blanks_to_yield,num_blanks_pending = num_blanks_pending+1,0
            elif s.startswith('+'):
                # inside an add block, yield the add line
                num_blanks_pending += 1
                yield None, _make_line(lines,'+',1), True
                continue
            elif s.startswith(' '):
                # unchanged text, yield it to both sides
                yield _make_line(lines[:],None,0),_make_line(lines,None,1),False
                continue
            # Catch up on the blank lines so when we yield the next from/to
            # pair, they are lined up.
            while num_blanks_to_yield < 0:
                num_blanks_to_yield += 1
                yield None,('','\n'),True
            while num_blanks_to_yield > 0:
                num_blanks_to_yield -= 1
                yield ('','\n'),None,True
            if s.startswith('X'):
                # Exhausted: end the generator with return rather than
                # raising StopIteration (PEP 479).
                return
            else:
                yield from_line,to_line,True

    def _line_pair_iterator():
        """Yields from/to lines of text with a change indication.

        This function is an iterator.  It itself pulls lines from the line
        iterator.  Its difference from that iterator is that this function
        always yields a pair of from/to text lines (with the change
        indication).  If necessary it will collect single from/to lines
        until it has a matching from/to pair to yield.

        Note, this function is purposefully not defined at the module scope so
        that data it needs from its parent function (within whose context it
        is defined) does not need to be of module scope.
        """
        line_iterator = _line_iterator()
        fromlines,tolines=[],[]
        while True:
            # Collecting lines of text until we have a from/to pair
            while (len(fromlines)==0 or len(tolines)==0):
                try:
                    from_line, to_line, found_diff = next(line_iterator)
                except StopIteration:
                    return
                if from_line is not None:
                    fromlines.append((from_line,found_diff))
                if to_line is not None:
                    tolines.append((to_line,found_diff))
            # Once we have a pair, remove them from the collection and yield it
            from_line, from_diff = fromlines.pop(0)
            to_line, to_diff = tolines.pop(0)
            yield (from_line,to_line,from_diff or to_diff)

    # Handle case where user does not want context differencing, just yield
    # them up without doing anything else with them.
    line_pair_iterator = _line_pair_iterator()
    if context is None:
        yield from line_pair_iterator
    # Handle case where user wants context differencing.  We must do some
    # storage of lines until we know for sure that they are to be yielded.
    else:
        context += 1
        lines_to_write = 0
        while True:
            # Store lines up until we find a difference, note use of a
            # circular queue because we only need to keep around what
            # we need for context.
            index, contextLines = 0, [None]*(context)
            found_diff = False
            while found_diff is False:
                try:
                    from_line, to_line, found_diff = next(line_pair_iterator)
                except StopIteration:
                    return
                i = index % context
                contextLines[i] = (from_line, to_line, found_diff)
                index += 1
            # Yield lines that we have collected so far, but first yield
            # the user's separator.
            if index > context:
                yield None, None, None
                lines_to_write = context
            else:
                lines_to_write = index
                index = 0
            while lines_to_write:
                i = index % context
                index += 1
                yield contextLines[i]
                lines_to_write -= 1
            # Now yield the context lines after the change
            lines_to_write = context-1
            while lines_to_write:
                try:
                    from_line, to_line, found_diff = next(line_pair_iterator)
                except StopIteration:
                    # Catch exception and end the while loop
                    return
                # If another change within the context, extend the context
                if found_diff:
                    lines_to_write = context-1
                else:
                    lines_to_write -= 1
                yield from_line, to_line, found_diff

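The tuple stream produced above can be sketched as follows (an illustrative probe of the private `_mdiff` helper; it is internal, so its interface may change):

```python
import difflib

# _mdiff yields ((from_num, from_text), (to_num, to_text), has_change)
# triples, one per side-by-side row; changed text carries \0/\1 markers.
results = list(difflib._mdiff(['one\n', 'two\n'], ['one\n', 'tree\n']))

for from_data, to_data, has_change in results:
    print(from_data, to_data, has_change)
```

The first row (the unchanged `one` line) carries a `False` flag; the changed second line pairs up with a `True` flag.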

_file_template = """
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html>

<head>
    <meta http-equiv="Content-Type"
          content="text/html; charset=ISO-8859-1" />
    <title></title>
    <style type="text/css">%(styles)s
    </style>
</head>

<body>
    %(table)s%(legend)s
</body>

</html>"""

_styles = """
        table.diff {font-family:Courier; border:medium;}
        .diff_header {background-color:#e0e0e0}
        td.diff_header {text-align:right}
        .diff_next {background-color:#c0c0c0}
        .diff_add {background-color:#aaffaa}
        .diff_chg {background-color:#ffff77}
        .diff_sub {background-color:#ffaaaa}"""

_table_template = """
    <table class="diff" id="difflib_chg_%(prefix)s_top"
           cellspacing="0" cellpadding="0" rules="groups" >
        <colgroup></colgroup> <colgroup></colgroup> <colgroup></colgroup>
        <colgroup></colgroup> <colgroup></colgroup> <colgroup></colgroup>
        %(header_row)s
        <tbody>
%(data_rows)s        </tbody>
    </table>"""

_legend = """
    <table class="diff" summary="Legends">
        <tr> <th colspan="2"> Legends </th> </tr>
        <tr> <td> <table border="" summary="Colors">
                      <tr><th> Colors </th> </tr>
                      <tr><td class="diff_add">&nbsp;Added&nbsp;</td></tr>
                      <tr><td class="diff_chg">Changed</td> </tr>
                      <tr><td class="diff_sub">Deleted</td> </tr>
                  </table></td>
             <td> <table border="" summary="Links">
                      <tr><th colspan="2"> Links </th> </tr>
                      <tr><td>(f)irst change</td> </tr>
                      <tr><td>(n)ext change</td> </tr>
                      <tr><td>(t)op</td> </tr>
                  </table></td> </tr>
    </table>"""


class HtmlDiff(object):
    """For producing HTML side by side comparison with change highlights.

    This class can be used to create an HTML table (or a complete HTML file
    containing the table) showing a side by side, line by line comparison
    of text with inter-line and intra-line change highlights.  The table can
    be generated in either full or contextual difference mode.

    The following methods are provided for HTML generation:

    make_table -- generates HTML for a single side by side table
    make_file -- generates complete HTML file with a single side by side table

    See tools/scripts/diff.py for an example usage of this class.
    """

    _file_template = _file_template
    _styles = _styles
    _table_template = _table_template
    _legend = _legend
    _default_prefix = 0

    def __init__(self,tabsize=8,wrapcolumn=None,linejunk=None,
                 charjunk=IS_CHARACTER_JUNK):
        """HtmlDiff instance initializer

        Arguments:
        tabsize -- tab stop spacing, defaults to 8.
        wrapcolumn -- column number where lines are broken and wrapped,
            defaults to None where lines are not wrapped.
        linejunk,charjunk -- keyword arguments passed into ndiff() (used by
            HtmlDiff() to generate the side by side HTML differences).  See
            ndiff() documentation for argument default values and descriptions.
        """
        self._tabsize = tabsize
        self._wrapcolumn = wrapcolumn
        self._linejunk = linejunk
        self._charjunk = charjunk

    def make_file(self,fromlines,tolines,fromdesc='',todesc='',context=False,
                  numlines=5):
        """Returns HTML file of side by side comparison with change highlights

        Arguments:
        fromlines -- list of "from" lines
        tolines -- list of "to" lines
        fromdesc -- "from" file column header string
        todesc -- "to" file column header string
        context -- set to True for contextual differences (defaults to False
            which shows full differences).
        numlines -- number of context lines.  When context is set True,
            controls number of lines displayed before and after the change.
            When context is False, controls the number of lines to place
            the "next" link anchors before the next change (so click of
            "next" link jumps to just before the change).
        """

        return self._file_template % dict(
            styles = self._styles,
            legend = self._legend,
            table = self.make_table(fromlines,tolines,fromdesc,todesc,
                                    context=context,numlines=numlines))
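A minimal usage sketch for `make_file` (the line lists and column headers here are invented for illustration):

```python
import difflib

# Build a complete XHTML page comparing two small "files".
html = difflib.HtmlDiff(wrapcolumn=60).make_file(
    ['one\n', 'two\n'],        # "from" lines
    ['one\n', 'three\n'],      # "to" lines
    fromdesc='before', todesc='after')
```

The returned string is a full document: the `%(styles)s`, `%(legend)s` and `%(table)s` slots of `_file_template` are filled in, with the column headers appearing in the table's `<thead>` row.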

    def _tab_newline_replace(self,fromlines,tolines):
        """Returns from/to line lists with tabs expanded and newlines removed.

        Instead of tab characters being replaced by the number of spaces
        needed to fill in to the next tab stop, this function will fill
        the space with tab characters.  This is done so that the difference
        algorithms can identify changes in a file when tabs are replaced by
        spaces and vice versa.  At the end of the HTML generation, the tab
        characters will be replaced with a non-breaking space.
        """
        def expand_tabs(line):
            # hide real spaces
            line = line.replace(' ','\0')
            # expand tabs into spaces
            line = line.expandtabs(self._tabsize)
            # replace spaces from expanded tabs back into tab characters
            # (we'll replace them with markup after we do differencing)
            line = line.replace(' ','\t')
            return line.replace('\0',' ').rstrip('\n')
        fromlines = [expand_tabs(line) for line in fromlines]
        tolines = [expand_tabs(line) for line in tolines]
        return fromlines,tolines
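The space/tab shuffle above can be sketched standalone (a hypothetical copy of the inner helper, not part of the module's API): real spaces are hidden behind NUL bytes, tabs are expanded, and only the spaces that came from tabs are turned back into tab characters.

```python
# Illustrative standalone version of the expand_tabs trick (tabsize=8).
def expand_tabs(line, tabsize=8):
    line = line.replace(' ', '\0')   # hide real spaces behind NUL
    line = line.expandtabs(tabsize)  # turn tabs into runs of spaces
    line = line.replace(' ', '\t')   # those spaces all came from tabs
    return line.replace('\0', ' ').rstrip('\n')  # restore real spaces
```

So `'a\tb'` becomes `'a'` followed by seven tab characters and `'b'`, while a genuine space in `'a b'` survives untouched.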

    def _split_line(self,data_list,line_num,text):
        """Builds list of text lines by splitting text lines at wrap point

        This function will determine if the input text line needs to be
        wrapped (split) into separate lines.  If so, the first wrap point
        will be determined and the first line appended to the output
        text line list.  This function is used recursively to handle
        the second part of the split line to further split it.
        """
        # if blank line or context separator, just add it to the output list
        if not line_num:
            data_list.append((line_num,text))
            return

        # if line text doesn't need wrapping, just add it to the output list
        size = len(text)
        max_len = self._wrapcolumn
        if (size <= max_len) or ((size -(text.count('\0')*3)) <= max_len):
            data_list.append((line_num,text))
            return

        # scan text looking for the wrap point, keeping track if the wrap
        # point is inside markers
        i = 0
        n = 0
        mark = ''
        while n < max_len and i < size:
            if text[i] == '\0':
                i += 1
                mark = text[i]
                i += 1
            elif text[i] == '\1':
                i += 1
                mark = ''
            else:
                i += 1
                n += 1

        # wrap point is inside text, break it up into separate lines
        line1 = text[:i]
        line2 = text[i:]

        # if wrap point is inside markers, place end marker at end of first
        # line and start marker at beginning of second line because each
        # line will have its own table tag markup around it.
        if mark:
            line1 = line1 + '\1'
            line2 = '\0' + mark + line2

        # tack on first line onto the output list
        data_list.append((line_num,line1))

        # use this routine again to wrap the remaining text
        self._split_line(data_list,'>',line2)

    def _line_wrapper(self,diffs):
        """Returns iterator that splits (wraps) mdiff text lines"""

        # pull from/to data and flags from mdiff iterator
        for fromdata,todata,flag in diffs:
            # check for context separators and pass them through
            if flag is None:
                yield fromdata,todata,flag
                continue
            (fromline,fromtext),(toline,totext) = fromdata,todata
            # for each from/to line split it at the wrap column to form
            # list of text lines.
            fromlist,tolist = [],[]
            self._split_line(fromlist,fromline,fromtext)
            self._split_line(tolist,toline,totext)
            # yield from/to line in pairs inserting blank lines as
            # necessary when one side has more wrapped lines
            while fromlist or tolist:
                if fromlist:
                    fromdata = fromlist.pop(0)
                else:
                    fromdata = ('',' ')
                if tolist:
                    todata = tolist.pop(0)
                else:
                    todata = ('',' ')
                yield fromdata,todata,flag

    def _collect_lines(self,diffs):
        """Collects mdiff output into separate lists

        Before storing the mdiff from/to data into a list, it is converted
        into a single line of text with HTML markup.
        """

        fromlist,tolist,flaglist = [],[],[]
        # pull from/to data and flags from mdiff style iterator
        for fromdata,todata,flag in diffs:
            try:
                # store HTML markup of the lines into the lists
                fromlist.append(self._format_line(0,flag,*fromdata))
                tolist.append(self._format_line(1,flag,*todata))
            except TypeError:
                # exceptions occur for lines where context separators go
                fromlist.append(None)
                tolist.append(None)
            flaglist.append(flag)
        return fromlist,tolist,flaglist

    def _format_line(self,side,flag,linenum,text):
        """Returns HTML markup of "from" / "to" text lines

        side -- 0 or 1 indicating "from" or "to" text
        flag -- indicates if difference on line
        linenum -- line number (used for line number column)
        text -- line text to be marked up
        """
        try:
            linenum = '%d' % linenum
            id = ' id="%s%s"' % (self._prefix[side],linenum)
        except TypeError:
            # handle blank lines where linenum is '>' or ''
            id = ''
        # replace those things that would get confused with HTML symbols
        text=text.replace("&","&amp;").replace(">","&gt;").replace("<","&lt;")

        # make spaces non-breaking so they don't get compressed or line wrapped
        text = text.replace(' ','&nbsp;').rstrip()

        return '<td class="diff_header"%s>%s</td><td nowrap="nowrap">%s</td>' \
               % (id,linenum,text)
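The escaping behavior can be exercised directly (an illustrative call to the private `_format_line`; `_make_prefix` must run first so the anchor prefixes exist):

```python
import difflib

hd = difflib.HtmlDiff()
hd._make_prefix()  # set up self._prefix, used for the id attribute

# '<', '>' and '&' are entity-escaped, and spaces become &nbsp;
# so the diff columns don't collapse or wrap in the browser.
cell = hd._format_line(0, True, 1, 'a<b & c\n')
print(cell)
```

The returned string is a pair of `<td>` cells: one `diff_header` cell holding the line number and anchor id, and one `nowrap` cell holding the escaped text.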

    def _make_prefix(self):
        """Create unique anchor prefixes"""

        # Generate a unique anchor prefix so multiple tables
        # can exist on the same HTML page without conflicts.
        fromprefix = "from%d_" % HtmlDiff._default_prefix
        toprefix = "to%d_" % HtmlDiff._default_prefix
        HtmlDiff._default_prefix += 1
        # store prefixes so line format method has access
        self._prefix = [fromprefix,toprefix]

    def _convert_flags(self,fromlist,tolist,flaglist,context,numlines):
        """Makes list of "next" links"""

        # all anchor names will be generated using the unique "to" prefix
        toprefix = self._prefix[1]

        # process change flags, generating middle column of next anchors/links
        next_id = ['']*len(flaglist)
        next_href = ['']*len(flaglist)
        num_chg, in_change = 0, False
        last = 0
        for i,flag in enumerate(flaglist):
            if flag:
                if not in_change:
                    in_change = True
                    last = i
                    # at the beginning of a change, drop an anchor a few lines
                    # (the context lines) before the change for the previous
                    # link
                    i = max([0,i-numlines])
                    next_id[i] = ' id="difflib_chg_%s_%d"' % (toprefix,num_chg)
                    # at the beginning of a change, drop a link to the next
                    # change
                    num_chg += 1
                    next_href[last] = '<a href="#difflib_chg_%s_%d">n</a>' % (
                         toprefix,num_chg)
            else:
                in_change = False
        # check for cases where there is no content to avoid exceptions
        if not flaglist:
            flaglist = [False]
            next_id = ['']
            next_href = ['']
            last = 0
            if context:
                fromlist = ['<td></td><td>&nbsp;No Differences Found&nbsp;</td>']
                tolist = fromlist
            else:
                fromlist = tolist = ['<td></td><td>&nbsp;Empty File&nbsp;</td>']
        # if not a change on first line, drop a link
        if not flaglist[0]:
            next_href[0] = '<a href="#difflib_chg_%s_0">f</a>' % toprefix
        # redo the last link to link to the top
        next_href[last] = '<a href="#difflib_chg_%s_top">t</a>' % (toprefix)

        return fromlist,tolist,flaglist,next_href,next_id

    def make_table(self,fromlines,tolines,fromdesc='',todesc='',context=False,
                   numlines=5):
        """Returns HTML table of side by side comparison with change highlights

        Arguments:
        fromlines -- list of "from" lines
        tolines -- list of "to" lines
        fromdesc -- "from" file column header string
        todesc -- "to" file column header string
        context -- set to True for contextual differences (defaults to False
            which shows full differences).
        numlines -- number of context lines.  When context is set True,
            controls number of lines displayed before and after the change.
            When context is False, controls the number of lines to place
            the "next" link anchors before the next change (so click of
            "next" link jumps to just before the change).
        """

        # make unique anchor prefixes so that multiple tables may exist
        # on the same page without conflict.
        self._make_prefix()

        # change tabs to spaces before it gets more difficult after we insert
        # markup
        fromlines,tolines = self._tab_newline_replace(fromlines,tolines)

        # create diffs iterator which generates side by side from/to data
        if context:
            context_lines = numlines
        else:
            context_lines = None
        diffs = _mdiff(fromlines,tolines,context_lines,linejunk=self._linejunk,
                      charjunk=self._charjunk)
 | 1948 |  | 
        # set up iterator to wrap lines that exceed desired width
        if self._wrapcolumn:
            diffs = self._line_wrapper(diffs)

        # collect up from/to lines and flags into lists (also format the lines)
        fromlist, tolist, flaglist = self._collect_lines(diffs)

        # process change flags, generating middle column of next anchors/links
        fromlist, tolist, flaglist, next_href, next_id = self._convert_flags(
            fromlist, tolist, flaglist, context, numlines)

        s = []
        fmt = '            <tr><td class="diff_next"%s>%s</td>%s' + \
              '<td class="diff_next">%s</td>%s</tr>\n'
        for i in range(len(flaglist)):
            if flaglist[i] is None:
                # mdiff yields None on separator lines; skip the bogus one
                # generated for the first line
                if i > 0:
                    s.append('        </tbody>        \n        <tbody>\n')
            else:
                s.append(fmt % (next_id[i], next_href[i], fromlist[i],
                                next_href[i], tolist[i]))
        if fromdesc or todesc:
            header_row = '<thead><tr>%s%s%s%s</tr></thead>' % (
                '<th class="diff_next"><br /></th>',
                '<th colspan="2" class="diff_header">%s</th>' % fromdesc,
                '<th class="diff_next"><br /></th>',
                '<th colspan="2" class="diff_header">%s</th>' % todesc)
        else:
            header_row = ''

        table = self._table_template % dict(
            data_rows=''.join(s),
            header_row=header_row,
            prefix=self._prefix[1])

        return table.replace('\0+','<span class="diff_add">'). \
                     replace('\0-','<span class="diff_sub">'). \
                     replace('\0^','<span class="diff_chg">'). \
                     replace('\1','</span>'). \
                     replace('\t','&nbsp;')

del re

def restore(delta, which):
    r"""
    Generate one of the two sequences that generated a delta.

    Given a `delta` produced by `Differ.compare()` or `ndiff()`, extract
    lines originating from file 1 or 2 (parameter `which`), stripping off line
    prefixes.

    Examples:

    >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True),
    ...              'ore\ntree\nemu\n'.splitlines(keepends=True))
    >>> diff = list(diff)
    >>> print(''.join(restore(diff, 1)), end="")
    one
    two
    three
    >>> print(''.join(restore(diff, 2)), end="")
    ore
    tree
    emu
    """
    try:
        tag = {1: "- ", 2: "+ "}[int(which)]
    except KeyError:
        raise ValueError('unknown delta choice (must be 1 or 2): %r'
                         % which)
    prefixes = ("  ", tag)
    for line in delta:
        if line[:2] in prefixes:
            yield line[2:]

def _test():
    import doctest, difflib
    return doctest.testmod(difflib)

if __name__ == "__main__":
    _test()