  1. fc354f0 bpo-25324: copy tok_name before changing it (#1608) by Albert-Jan Nijburg · 7 years ago
  2. c471ca4 bpo-30377: Simplify handling of COMMENT and NL in tokenize.py (#1607) by Albert-Jan Nijburg · 7 years ago
  3. 3972628 bpo-30296 Remove unnecessary tuples, lists, sets, and dicts (#1489) by Jon Dufresne · 7 years ago
  4. d4914e9 Add ELLIPSIS and RARROW. Add tests (#666) by Jim Fasarakis-Hilliard · 8 years ago
  5. a721aba Issue #26331: Implement the parsing part of PEP 515. by Brett Cannon · 8 years ago
  6. a051bf3 Issue #26581: Use the first coding cookie on a line, not the last one. by Serhiy Storchaka · 9 years ago
  7. e431d3c Issue #26581: Use the first coding cookie on a line, not the last one. by Serhiy Storchaka · 9 years ago
  8. a7161e7 Issue #25977: Fix typos in Lib/tokenize.py by Berker Peksag · 9 years ago
  9. ff8d087 Issue #25977: Fix typos in Lib/tokenize.py by Berker Peksag · 9 years ago
  10. 1c8222c Issue 25311: Add support for f-strings to tokenize.py. Also added some comments to explain what's happening, since it's not so obvious. by Eric V. Smith · 9 years ago
  11. 96ec934 Issue #24619: Simplify async/await tokenization. by Yury Selivanov · 9 years ago
  12. 8fb307c Issue #24619: New approach for tokenizing async/await. by Yury Selivanov · 9 years ago
  13. a95a476 Issue #20387: Merge test and patch from 3.4.4 by Jason R. Coombs · 9 years ago
  14. e411b66 Issue #20387: Restore retention of indentation during untokenize. by Dingyuan Wang · 9 years ago
  15. 24d262a (Merge 3.5) Issue #23840: tokenize.open() now closes the temporary binary file by Victor Stinner · 9 years ago
  16. 387729e Issue #23840: tokenize.open() now closes the temporary binary file on error to by Victor Stinner · 9 years ago
  17. 7544508 PEP 0492 -- Coroutines with async and await syntax. Issue #24017. by Yury Selivanov · 9 years ago
  18. ca8b644 Issue #23615: Modules bz2, tarfile and tokenize now can be reloaded with by Serhiy Storchaka · 10 years ago
  19. cf4a2f2 Issue #23615: Modules bz2, tarfile and tokenize now can be reloaded with by Serhiy Storchaka · 10 years ago
  20. 845b14c Removed duplicated dict entries. by Serhiy Storchaka · 10 years ago
  21. 9691750 Issue #22599: Enhance tokenize.open() to be able to call it during Python by Victor Stinner · 10 years ago
  22. 9d279b8 (Merge 3.4) Issue #22599: Enhance tokenize.open() to be able to call it during by Victor Stinner · 10 years ago
  23. d51374e PEP 465: a dedicated infix operator for matrix multiplication (closes #21176) by Benjamin Peterson · 11 years ago
  24. 58719a7 Merge with 3.3 by Terry Jan Reedy · 11 years ago
  25. f106f8f whitespace by Terry Jan Reedy · 11 years ago
  26. 40f8c67 Merge with 3.3 by Terry Jan Reedy · 11 years ago
  27. 9dc3a36 Issue #9974: When untokenizing, use row info to insert backslash+newline. by Terry Jan Reedy · 11 years ago
  28. 79bf899 Merge with 3.3 by Terry Jan Reedy · 11 years ago
  29. 5b8d2c3 Issue #8478: Untokenizer.compat now processes first token from iterator input. by Terry Jan Reedy · 11 years ago
  30. 8c8d772 Untokenize, bad assert: Merge with 3.3 by Terry Jan Reedy · 11 years ago
  31. 5e6db31 Untokenize: An logically incorrect assert tested user input validity. by Terry Jan Reedy · 11 years ago
  32. 7282ff6 Issue #18960: Fix bugs with Python source code encoding in the second line. by Serhiy Storchaka · 11 years ago
  33. 768c16c Issue #18960: Fix bugs with Python source code encoding in the second line. by Serhiy Storchaka · 11 years ago
  34. 5833c00 #19620: merge with 3.3. by Ezio Melotti · 11 years ago
  35. 4bcc796 #19620: Fix typo in docstring (noticed by Christopher Welborn). by Ezio Melotti · 11 years ago
  36. 9353494 Issue #18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script by Serhiy Storchaka · 11 years ago
  37. dafea85 Issue #18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script by Serhiy Storchaka · 11 years ago
  38. f7a17b4 Replace IOError with OSError (#16715) by Andrew Svetlov · 12 years ago
  39. fafa8b7 #16152: merge with 3.2. by Ezio Melotti · 12 years ago
  40. 2cc3b4b #16152: fix tokenize to ignore whitespace at the end of the code when no newline is found. Patch by Ned Batchelder. by Ezio Melotti · 12 years ago
  41. fed2c51 Merge branch by Florent Xicluna · 12 years ago
  42. 11f0b41 Issue #14990: tokenize: correctly fail with SyntaxError on invalid encoding declaration. by Florent Xicluna · 12 years ago
  43. 0b3847d Issue #15096: Drop support for the ur string prefix by Christian Heimes · 12 years ago
  44. 8d5c0b8 Issue #15054: Fix incorrect tokenization of 'b' string literals. by Meador Inge · 12 years ago
  45. c33f3f2 Issue #14629: Mention the filename in SyntaxError exceptions from by Brett Cannon · 13 years ago
  46. 63c39fe merge 3.2: issue 14629 by Martin v. Löwis · 13 years ago
  47. 63674f4 Issue #14629: Raise SyntaxError in tokenizer.detect_encoding by Martin v. Löwis · 13 years ago
  48. c0eaeca Updated tokenize to support the inverse byte literals new in 3.3 by Armin Ronacher · 13 years ago
  49. 6ecf77b Basic support for PEP 414 without docs or tests. by Armin Ronacher · 13 years ago
  50. 00c7f85 Issue #2134: Add support for tokenize.TokenInfo.exact_type. by Meador Inge · 13 years ago
  51. 10a99b0 Issue #13150: The tokenize module doesn't compile large regular expressions at startup anymore. by Antoine Pitrou · 13 years ago
  52. 14c0f03 Issue #12943: python -m tokenize support has been added to tokenize. by Meador Inge · 13 years ago
  53. 45b96d3 Merged revisions 88498 via svnmerge from by Brett Cannon · 14 years ago
  54. f304278 Issue #11074: Make 'tokenize' so it can be reloaded. by Brett Cannon · 14 years ago
  55. b9d10d0 Issue #10386: Added __all__ to token module; this simplifies importing by Alexander Belopolsky · 14 years ago
  56. 58c0752 Issue #10335: Add tokenize.open(), detect the file encoding using by Victor Stinner · 14 years ago
  57. a0e7940 A little bit more readable repr method. by Raymond Hettinger · 14 years ago
  58. 3fb79c7 Experiment: Let collections.namedtuple() do the work. This should work now that _collections is pre-built. The buildbots will tell us shortly. by Raymond Hettinger · 14 years ago
  59. 6c60d09 Improve the repr for the TokenInfo named tuple. by Raymond Hettinger · 14 years ago
  60. 43e4ea1 Remove unused import, fix typo and rewrap docstrings. by Florent Xicluna · 14 years ago
  61. 33856de handle names starting with non-ascii characters correctly #9712 by Benjamin Peterson · 14 years ago
  62. 1613ed8 fix for files with coding cookies and BOMs by Benjamin Peterson · 15 years ago
  63. 689a558 in tokenize.detect_encoding(), return utf-8-sig when a BOM is found by Benjamin Peterson · 15 years ago
  64. 81dd8b9 use some more itertools magic to make '' be yielded after readline is done by Benjamin Peterson · 15 years ago
  65. 21db77e simply by using itertools.chain() by Benjamin Peterson · 15 years ago
  66. a0dfa82 Merged revisions 75149,75260-75263,75265-75267,75292,75300,75376,75405,75429-75433,75437,75445,75501,75551,75572,75589-75591,75657,75742,75868,75952-75957,76057,76105,76139,76143,76162,76223 via svnmerge from by Benjamin Peterson · 15 years ago
  67. d3afada normalize latin-1 and utf-8 variant encodings like the builtin tokenizer does by Benjamin Peterson · 15 years ago
  68. aa17a7f Remove dependency on the collections module. by Raymond Hettinger · 16 years ago
  69. a48db39 Issue #5857: tokenize.tokenize() now returns named tuples. by Raymond Hettinger · 16 years ago
  70. 9b8d24b reuse tokenize.detect_encoding in linecache instead of a custom solution by Benjamin Peterson · 16 years ago
  71. 433f32c raise a SyntaxError in detect_encoding() when a codec lookup fails like the builtin parser #4021 by Benjamin Peterson · 16 years ago
  72. fd03645 #2834: Change re module semantics, so that str and bytes mixing is forbidden, by Antoine Pitrou · 16 years ago
  73. 0fe1438 use the more idomatic (and Py3k faster) while True by Benjamin Peterson · 16 years ago
  74. ba4af49 Merged revisions 61964-61979 via svnmerge from by Christian Heimes · 17 years ago
  75. 428de65 - Issue #719888: Updated tokenize to use a bytes API. generate_tokens has been by Trent Nelson · 17 years ago
  76. fceab5a Merged revisions 60080-60089,60091-60093 via svnmerge from by Georg Brandl · 17 years ago
  77. 4fe72f9 Patch 1420 by Ron Adam. by Guido van Rossum · 17 years ago
  78. ce36ad8 Raise statement normalization in Lib/. by Collin Winter · 17 years ago
  79. cd16bf6 Merged revisions 55817-55961 via svnmerge from by Guido van Rossum · 17 years ago
  80. 1bc535d Merged revisions 55328-55341 via svnmerge from by Guido van Rossum · 17 years ago
  81. a18af4e PEP 3114: rename .next() to .__next__() and add next() builtin. by Georg Brandl · 18 years ago
  82. 650f0d0 Hide list comp variables and support set comprehensions by Nick Coghlan · 18 years ago
  83. dde0028 Make ELLIPSIS a separate token. This makes it a syntax error to write ". . ." for Ellipsis. by Georg Brandl · 18 years ago
  84. be19ed7 Fix most trivially-findable print statements. by Guido van Rossum · 18 years ago
  85. c150536 PEP 3107 - Function Annotations thanks to Tony Lownds by Neal Norwitz · 18 years ago
  86. 89f507f Four months of trunk changes (including a few releases...) by Thomas Wouters · 18 years ago
  87. cf588f6 Remove support for backticks from the grammar and compiler. by Brett Cannon · 18 years ago
  88. b053cd8 Killed the <> operator. You must now use !=. by Guido van Rossum · 18 years ago
  89. 00ee7ba Merge current trunk into p3yk. This includes the PyNumber_Index API change, by Thomas Wouters · 18 years ago
  90. 49fd7fa Merge p3yk branch with the trunk up to revision 45595. This breaks a fair by Thomas Wouters · 19 years ago
  91. da99d1c SF bug #1224621: tokenize module does not detect inconsistent dedents by Raymond Hettinger · 19 years ago
  92. 68c0453 Add untokenize() function to allow full round-trip tokenization. by Raymond Hettinger · 19 years ago
  93. c2a5a63 PEP-0318, @decorator-style. In Guido's words: by Anthony Baxter · 20 years ago
  94. 68468eb Get rid of many apply() calls. by Guido van Rossum · 22 years ago
  95. 78a7aee SF 633560: tokenize.__all__ needs "generate_tokens" by Raymond Hettinger · 22 years ago
  96. 9d6897a Speed up the most egregious "if token in (long tuple)" cases by using by Guido van Rossum · 22 years ago
  97. 8ac1495 Whitespace normalization. by Tim Peters · 22 years ago
  98. d1fa3db Added docstrings excerpted from Python Library Reference. Closes patch 556161. by Raymond Hettinger · 22 years ago
  99. 496563a Remove some now-obsolete generator future statements. by Tim Peters · 23 years ago
  100. e98d16e Cleanup x so it is not left in module by Neal Norwitz · 23 years ago
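Entry 5 above (commit a721aba) implemented the parsing part of PEP 515, so `tokenize` accepts underscores in numeric literals. A minimal sketch, assuming Python 3.6 or later:

```python
import io
import tokenize

# PEP 515: underscores in a numeric literal arrive as one NUMBER token.
tokens = list(tokenize.tokenize(io.BytesIO(b"n = 1_000_000\n").readline))
num = next(t for t in tokens if t.type == tokenize.NUMBER)
print(num.string)  # 1_000_000
```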
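Entry 50 (commit 00c7f85) added `tokenize.TokenInfo.exact_type`. Operator tokens all share the generic `OP` type; `exact_type` identifies the specific operator, as this sketch shows:

```python
import io
import tokenize

# Tokenize a small expression from a bytes buffer.
tokens = list(tokenize.tokenize(io.BytesIO(b"x + 1\n").readline))

# The "+" token has the generic OP type, but exact_type
# pins down which operator it is.
plus = next(t for t in tokens if t.string == "+")
print(tokenize.tok_name[plus.type])        # OP
print(tokenize.tok_name[plus.exact_type])  # PLUS
```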
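Entry 56 (commit 58c0752) added `tokenize.open()`, which detects the file's source encoding before opening it. A minimal sketch using a temporary file with a coding cookie:

```python
import os
import tempfile
import tokenize

# Write a source file that declares a non-default encoding.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "wb") as f:
    f.write(b"# -*- coding: latin-1 -*-\ns = '\xe9'\n")

# tokenize.open() reads the coding cookie (or BOM) and returns a
# text stream already decoded with the detected encoding.
with tokenize.open(path) as f:
    text = f.read()
os.remove(path)
print("\xe9" in text)  # True
```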
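Entry 63 (commit 689a558) made `tokenize.detect_encoding()` report `utf-8-sig` when a UTF-8 BOM is present, so the BOM is stripped on decoding:

```python
import io
import tokenize

# A source buffer that starts with a UTF-8 BOM and has no coding cookie.
buf = io.BytesIO(b"\xef\xbb\xbfprint('hi')\n")
encoding, first_lines = tokenize.detect_encoding(buf.readline)
print(encoding)  # utf-8-sig
```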
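Entry 69 (commit a48db39, Issue #5857) made `tokenize.tokenize()` yield named tuples. Each `TokenInfo` still unpacks as a plain 5-tuple but also exposes `type`, `string`, `start`, `end`, and `line` fields:

```python
import io
import tokenize

tokens = list(tokenize.tokenize(io.BytesIO(b"answer = 42\n").readline))

# tokens[0] is the ENCODING token; tokens[1] is the first real token.
name = tokens[1]
print(name.type == name[0])               # True: field access mirrors indexing
print(name.string, name.start, name.end)  # answer (1, 0) (1, 6)
```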
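Entries 47 and 71 (commits 63674f4 and 433f32c) made `detect_encoding()` raise `SyntaxError` when the coding cookie names a codec the `codecs` lookup cannot find, matching the builtin parser. A sketch:

```python
import io
import tokenize

# A coding cookie naming a codec that does not exist.
buf = io.BytesIO(b"# -*- coding: no-such-codec -*-\n")
try:
    tokenize.detect_encoding(buf.readline)
    rejected = False
except SyntaxError:
    rejected = True
print(rejected)  # True
```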
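Entry 92 (commit 68468eb's neighbor, 68c0453... specifically commit 68c0453 at entry 91 fixed dedent detection, and entry 92's commit added `untokenize()`) introduced full round-trip tokenization. When fed complete 5-tuple tokens, `untokenize()` reproduces the original source bytes exactly:

```python
import io
import tokenize

src = b"def f(x):\n    return x + 1\n"
tokens = list(tokenize.tokenize(io.BytesIO(src).readline))

# With full 5-tuples (including positions), untokenize()
# rebuilds the source byte-for-byte.
rebuilt = tokenize.untokenize(tokens)
print(rebuilt == src)  # True
```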