Merged revisions 46753-51188 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk

........
  r46755 | brett.cannon | 2006-06-08 18:23:04 +0200 (Thu, 08 Jun 2006) | 4 lines

  Make binascii.hexlify() use s# for its arguments instead of t# to actually
  match its documentation stating it accepts any read-only buffer.
........
  r46757 | brett.cannon | 2006-06-08 19:00:45 +0200 (Thu, 08 Jun 2006) | 8 lines

  Buffer objects would return the read or write buffer for a wrapped object when
  the char buffer was requested.  Now it actually returns the char buffer if
  available or raises a TypeError if it isn't (as is raised for the other buffer
  types if they are not present but requested).

  Not a backport candidate since it does change semantics of the buffer object
  (although it could be argued this is enough of a bug to bother backporting).
........
  r46760 | andrew.kuchling | 2006-06-09 03:10:17 +0200 (Fri, 09 Jun 2006) | 1 line

  Update functools section
........
  r46762 | tim.peters | 2006-06-09 04:11:02 +0200 (Fri, 09 Jun 2006) | 6 lines

  Whitespace normalization.

  Since test_file is implicated in mysterious test failures
  when followed by test_optparse, if I had any brains I'd
  look at the checkin that last changed test_file ;-)
........
  r46763 | tim.peters | 2006-06-09 05:09:42 +0200 (Fri, 09 Jun 2006) | 5 lines

  To boost morale :-), force test_optparse to run immediately
  after test_file until we can figure out how to fix it.
  (See python-dev; at the moment we don't even know which checkin
  caused the problem.)
........
  r46764 | tim.peters | 2006-06-09 05:51:41 +0200 (Fri, 09 Jun 2006) | 6 lines

  AutoFileTests.tearDown():  Removed mysterious undocumented
  try/except.  Remove TESTFN.

  Throughout:  used open() instead of file(), and wrapped
  long lines.
........
  r46765 | tim.peters | 2006-06-09 06:02:06 +0200 (Fri, 09 Jun 2006) | 8 lines

  testUnicodeOpen():  I have no idea why, but making this
  test clean up after itself appears to fix the test failures
  when test_optparse follows test_file.

  test_main():  Get rid of TESTFN no matter what.  That's
  also enough to fix the mystery failures.  Doesn't hurt
  to fix them twice :-)
........
  r46766 | tim.peters | 2006-06-09 07:12:40 +0200 (Fri, 09 Jun 2006) | 6 lines

  Remove the temporary hack to force test_optparse to
  run immediately after test_file.  At least 8 buildbot
  boxes passed since the underlying problem got fixed,
  and they all failed before the fix, so there's no point
  to this anymore.
........
  r46767 | neal.norwitz | 2006-06-09 07:54:18 +0200 (Fri, 09 Jun 2006) | 1 line

  Fix grammar and reflow
........
  r46769 | andrew.kuchling | 2006-06-09 12:22:35 +0200 (Fri, 09 Jun 2006) | 1 line

  Markup fix
........
  r46773 | andrew.kuchling | 2006-06-09 15:15:57 +0200 (Fri, 09 Jun 2006) | 1 line

  [Bug #1472827] Make saxutils.XMLGenerator handle \r\n\t in attribute values by escaping them properly.   2.4 bugfix candidate.
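
  For illustration, a minimal hedged sketch of the behaviour being fixed (the
  in-memory output stream and the attribute value are made up):

      from StringIO import StringIO
      from xml.sax.saxutils import XMLGenerator
      from xml.sax.xmlreader import AttributesImpl

      out = StringIO()
      gen = XMLGenerator(out)
      gen.startDocument()
      # The attribute value contains a newline and a tab; with the fix these are
      # escaped as character references so they survive a round trip through a parser.
      gen.startElement("doc", AttributesImpl({"note": "line1\nline2\ttab"}))
      gen.endElement("doc")
      gen.endDocument()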
........
  r46778 | kristjan.jonsson | 2006-06-09 18:28:01 +0200 (Fri, 09 Jun 2006) | 2 lines

  Turn off the warning about deprecated CRT functions for VisualStudio .NET 2005.
  Make the definition #ARRAYSIZE conditional.  VisualStudio .NET 2005 already has it defined using a better gimmick.
........
  r46779 | phillip.eby | 2006-06-09 18:40:18 +0200 (Fri, 09 Jun 2006) | 2 lines

  Import wsgiref into the stdlib, as of the external version 0.1-r2181.
........
  r46783 | andrew.kuchling | 2006-06-09 18:44:40 +0200 (Fri, 09 Jun 2006) | 1 line

  Add note about XMLGenerator bugfix
........
  r46784 | andrew.kuchling | 2006-06-09 18:46:51 +0200 (Fri, 09 Jun 2006) | 1 line

  Add note about wsgiref
........
  r46785 | brett.cannon | 2006-06-09 19:05:48 +0200 (Fri, 09 Jun 2006) | 2 lines

  Fix inconsistency in naming within an enum.
........
  r46787 | tim.peters | 2006-06-09 19:47:00 +0200 (Fri, 09 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46792 | georg.brandl | 2006-06-09 20:29:52 +0200 (Fri, 09 Jun 2006) | 3 lines

  Test file.__exit__.
........
  r46794 | brett.cannon | 2006-06-09 20:40:46 +0200 (Fri, 09 Jun 2006) | 2 lines

  svn:ignore .pyc and .pyo files.
........
  r46795 | georg.brandl | 2006-06-09 20:45:48 +0200 (Fri, 09 Jun 2006) | 3 lines

  RFE #1491485: str/unicode.endswith()/startswith() now accept a tuple as first argument.
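
  For illustration, the new semantics in a nutshell:

      # A tuple of prefixes/suffixes is now accepted; matching any element counts.
      assert "setup.py".endswith((".py", ".pyw"))
      assert "https://python.org".startswith(("http://", "https://"))
      assert not "README.txt".endswith((".py", ".pyw"))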
........
  r46798 | andrew.kuchling | 2006-06-09 21:03:16 +0200 (Fri, 09 Jun 2006) | 1 line

  Describe startswith()/endswith() change; add reminder about wsgiref
........
  r46799 | tim.peters | 2006-06-09 21:24:44 +0200 (Fri, 09 Jun 2006) | 11 lines

  Implementing a happy idea from Georg Brandl:  make runtest() try to
  clean up files and directories the tests often leave behind by
  mistake.  This is the first time in history I don't have a bogus
  "db_home" directory after running the tests ;-)

  Also worked on runtest's docstring, to say something about all the
  arguments, and to document the non-obvious return values.

  New functions runtest_inner() and cleanup_test_droppings() in
  support of the above.
........
  r46800 | andrew.kuchling | 2006-06-09 21:43:25 +0200 (Fri, 09 Jun 2006) | 1 line

  Remove unused variable
........
  r46801 | andrew.kuchling | 2006-06-09 21:56:05 +0200 (Fri, 09 Jun 2006) | 1 line

  Add some wsgiref text
........
  r46803 | thomas.heller | 2006-06-09 21:59:11 +0200 (Fri, 09 Jun 2006) | 1 line

  set eol-style svn property
........
  r46804 | thomas.heller | 2006-06-09 22:01:01 +0200 (Fri, 09 Jun 2006) | 1 line

  set eol-style svn property
........
  r46805 | georg.brandl | 2006-06-09 22:43:48 +0200 (Fri, 09 Jun 2006) | 3 lines

  Make use of new str.startswith/endswith semantics.
  Occurrences in email and compiler were ignored due to backwards-compatibility requirements.
........
  r46806 | brett.cannon | 2006-06-10 00:31:23 +0200 (Sat, 10 Jun 2006) | 4 lines

  When an object with __call__ as an attribute is called, that attribute is itself checked for __call__, and the lookup continues until an object without the attribute is found.  This can lead to infinite recursion.

  Closes bug #532646, again.  Will be backported.
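
  Roughly, the construct that used to blow the stack (a minimal sketch, not the
  actual test case):

      class A:                 # classic class
          pass

      a = A()
      a.__call__ = a           # __call__ resolves to the object itself
      # Calling a() kept searching for a real __call__ forever; with the fix the
      # interpreter raises a RuntimeError instead of recursing until it crashes.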
........
  r46808 | brett.cannon | 2006-06-10 00:45:54 +0200 (Sat, 10 Jun 2006) | 2 lines

  Fix bug introduced in rev. 46806 by not having variable declaration at the top of a block.
........
  r46812 | georg.brandl | 2006-06-10 08:40:50 +0200 (Sat, 10 Jun 2006) | 4 lines

  Apply perky's fix for #1503157: "/".join([u"", u""]) raising OverflowError.
  Also improve error message on overflow.
........
  r46817 | martin.v.loewis | 2006-06-10 10:14:03 +0200 (Sat, 10 Jun 2006) | 2 lines

  Port cygwin kill_python changes from 2.4 branch.
........
  r46818 | armin.rigo | 2006-06-10 12:57:40 +0200 (Sat, 10 Jun 2006) | 4 lines

  SF bug #1503294.

  PyThreadState_GET() complains if the tstate is NULL, but only in debug mode.
........
  r46819 | martin.v.loewis | 2006-06-10 14:23:46 +0200 (Sat, 10 Jun 2006) | 4 lines

  Patch #1495999: Part two of Windows CE changes.
  - update header checks, using autoconf
  - provide dummies for getenv, environ, and GetVersion
  - adjust MSC_VER check in socketmodule.c
........
  r46820 | skip.montanaro | 2006-06-10 16:09:11 +0200 (Sat, 10 Jun 2006) | 1 line

  document the class, not its initializer
........
  r46821 | greg.ward | 2006-06-10 18:40:01 +0200 (Sat, 10 Jun 2006) | 4 lines

  Sync with Optik docs (rev 518):
    * restore "Extending optparse" section
    * document ALWAYS_TYPED_ACTIONS (SF #1449311)
........
  r46824 | thomas.heller | 2006-06-10 21:51:46 +0200 (Sat, 10 Jun 2006) | 8 lines

  Upgrade to ctypes version 0.9.9.7.

  Summary of changes:

  - support for 'variable sized' data
  - support for anonymous structure/union fields
  - fix severe bug with certain arrays or structures containing more than 256 fields
........
  r46825 | thomas.heller | 2006-06-10 21:55:36 +0200 (Sat, 10 Jun 2006) | 8 lines

  Upgrade to ctypes version 0.9.9.7.

  Summary of changes:

  - support for 'variable sized' data
  - support for anonymous structure/union fields
  - fix severe bug with certain arrays or structures containing more than 256 fields
........
  r46826 | fred.drake | 2006-06-10 22:01:34 +0200 (Sat, 10 Jun 2006) | 4 lines

  SF patch #1303595: improve description of __builtins__, explaining how it
  varies between __main__ and other modules, and strongly suggest not touching
  it but using __builtin__ if absolutely necessary
........
  r46827 | fred.drake | 2006-06-10 22:02:58 +0200 (Sat, 10 Jun 2006) | 1 line

  credit for SF patch #1303595
........
  r46831 | thomas.heller | 2006-06-10 22:29:34 +0200 (Sat, 10 Jun 2006) | 2 lines

  New docs for ctypes.
........
  r46834 | thomas.heller | 2006-06-10 23:07:19 +0200 (Sat, 10 Jun 2006) | 1 line

  Fix a wrong printf format.
........
  r46835 | thomas.heller | 2006-06-10 23:17:58 +0200 (Sat, 10 Jun 2006) | 1 line

  Fix the second occurrence of the problematic printf format.
........
  r46837 | thomas.heller | 2006-06-10 23:56:03 +0200 (Sat, 10 Jun 2006) | 1 line

  Don't use C++ comment.
........
  r46838 | thomas.heller | 2006-06-11 00:01:50 +0200 (Sun, 11 Jun 2006) | 1 line

  Handle failure of PyMem_Realloc.
........
  r46839 | skip.montanaro | 2006-06-11 00:38:13 +0200 (Sun, 11 Jun 2006) | 2 lines

  Suppress warning on MacOSX about possible use before set of proc.
........
  r46840 | tim.peters | 2006-06-11 00:51:45 +0200 (Sun, 11 Jun 2006) | 8 lines

  shuffle() docstring:  Removed warning about sequence length
  versus generator period.  While this was a real weakness of the
  older WH generator for lists with just a few dozen elements,
  and so could potentially bite the naive ;-), the Twister should
  show excellent behavior up to at least 600 elements.

  Module docstring:  reflowed some jarringly short lines.
........
  r46844 | greg.ward | 2006-06-11 02:40:49 +0200 (Sun, 11 Jun 2006) | 4 lines

  Bug #1361643: fix textwrap.dedent() so it handles tabs appropriately,
  i.e. do *not* expand tabs, but treat them as whitespace that is not
  equivalent to spaces.  Add a couple of test cases.  Clarify docs.
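
  For illustration, two small sketches of the intended behaviour:

      import textwrap
      # A tab and four spaces are not a common prefix, so nothing is stripped,
      # and the tab is no longer expanded to spaces.
      s = "\thello\n    world\n"
      assert textwrap.dedent(s) == s
      # A genuinely common prefix is still removed.
      assert textwrap.dedent("\tfoo\n\tbar\n") == "foo\nbar\n"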
........
  r46850 | neal.norwitz | 2006-06-11 07:44:18 +0200 (Sun, 11 Jun 2006) | 5 lines

  Fix Coverity # 146.  newDBSequenceObject would deref dbobj, so it can't be NULL.

  We know it's not NULL from the ParseTuple and DbObject_Check will verify
  it's not NULL.
........
  r46851 | neal.norwitz | 2006-06-11 07:45:25 +0200 (Sun, 11 Jun 2006) | 4 lines

  Wrap some long lines
  Top/Bottom factor out some common expressions
  Add an XXX comment about widening the offset.
........
  r46852 | neal.norwitz | 2006-06-11 07:45:47 +0200 (Sun, 11 Jun 2006) | 1 line

  Add versionadded to doc
........
  r46853 | neal.norwitz | 2006-06-11 07:47:14 +0200 (Sun, 11 Jun 2006) | 3 lines

  Update doc to make it agree with code.
  Bottom factor out some common code.
........
  r46854 | neal.norwitz | 2006-06-11 07:48:14 +0200 (Sun, 11 Jun 2006) | 3 lines

  f_code can't be NULL based on Frame_New and other code that derefs it.
  So there doesn't seem to be much point to checking here.
........
  r46855 | neal.norwitz | 2006-06-11 09:26:27 +0200 (Sun, 11 Jun 2006) | 1 line

  Fix errors found by pychecker
........
  r46856 | neal.norwitz | 2006-06-11 09:26:50 +0200 (Sun, 11 Jun 2006) | 1 line

  warnings was imported at module scope, no need to import again
........
  r46857 | neal.norwitz | 2006-06-11 09:27:56 +0200 (Sun, 11 Jun 2006) | 5 lines

  Fix errors found by pychecker.
  I think these changes are correct, but I'm not sure.  Could someone
  who knows how this module works test it?  It can at least start on
  the cmd line.
........
  r46858 | neal.norwitz | 2006-06-11 10:35:14 +0200 (Sun, 11 Jun 2006) | 1 line

  Fix errors found by pychecker
........
  r46859 | ronald.oussoren | 2006-06-11 16:33:36 +0200 (Sun, 11 Jun 2006) | 4 lines

  This patch improves the L&F of IDLE on OSX. The changes are conditionalized on
  being in an IDLE.app bundle on darwin. This does a slight reorganisation of the
  menus and adds support for file-open events.
........
  r46860 | greg.ward | 2006-06-11 16:42:41 +0200 (Sun, 11 Jun 2006) | 1 line

  SF #1366250: optparse docs: fix inconsistency in variable name; minor tweaks.
........
  r46861 | greg.ward | 2006-06-11 18:24:11 +0200 (Sun, 11 Jun 2006) | 3 lines

  Bug #1498146: fix optparse to handle Unicode strings in option help,
  description, and epilog.
........
  r46862 | thomas.heller | 2006-06-11 19:04:22 +0200 (Sun, 11 Jun 2006) | 2 lines

  Release the GIL during COM method calls, to avoid deadlocks in
  Python coded COM objects.
........
  r46863 | tim.peters | 2006-06-11 21:42:51 +0200 (Sun, 11 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46864 | tim.peters | 2006-06-11 21:43:49 +0200 (Sun, 11 Jun 2006) | 2 lines

  Add missing svn:eol-style property to text files.
........
  r46865 | ronald.oussoren | 2006-06-11 21:45:57 +0200 (Sun, 11 Jun 2006) | 2 lines

  Remove message about using make frameworkinstall; that's no longer necessary
........
  r46866 | ronald.oussoren | 2006-06-11 22:23:29 +0200 (Sun, 11 Jun 2006) | 2 lines

  Use configure to substitute the correct prefix instead of hardcoding
........
  r46867 | ronald.oussoren | 2006-06-11 22:24:45 +0200 (Sun, 11 Jun 2006) | 4 lines

  - Change fixapplepython23.py to ensure that it will run with /usr/bin/python
    on intel macs.
  - Fix some minor problems in the installer for OSX
........
  r46868 | neal.norwitz | 2006-06-11 22:25:56 +0200 (Sun, 11 Jun 2006) | 5 lines

  Try to fix several networking tests.  The problem is that if hosts have
  a search path setup, some of these hosts resolve to the wrong address.
  By appending a period to the hostname, the hostname should only resolve
  to what we want it to resolve to.  Hopefully this doesn't break different bots.
........
  r46869 | neal.norwitz | 2006-06-11 22:42:02 +0200 (Sun, 11 Jun 2006) | 7 lines

  Try to fix another networking test.  The problem is that if hosts have
  a search path setup, some of these hosts resolve to the wrong address.
  By appending a period to the hostname, the hostname should only resolve
  to what we want it to resolve to.  Hopefully this doesn't break different bots.

  Also add more info to failure message to aid debugging test failure.
........
  r46870 | neal.norwitz | 2006-06-11 22:46:46 +0200 (Sun, 11 Jun 2006) | 4 lines

  Fix test on PPC64 buildbot.  It raised an IOError (really a URLError, which
  derives from IOError).  That seems valid.  EnvironmentError includes both OSError
  and IOError, so this seems like a reasonable fix.
........
  r46871 | tim.peters | 2006-06-11 22:52:59 +0200 (Sun, 11 Jun 2006) | 10 lines

  compare_generic_iter():  Fixed the failure of test_wsgiref's testFileWrapper
  when running with -O.

  test_simple_validation_error still fails under -O.  That appears to be because
  wsgiref's validate.py uses `assert` statements all over the place to check
  arguments for sanity.  That should all be changed (it's not a logical error
  in the software if a user passes bogus arguments, so this isn't a reasonable
  use for `assert` -- checking external preconditions should generally raise
  ValueError or TypeError instead, as appropriate).
........
  r46872 | neal.norwitz | 2006-06-11 23:38:38 +0200 (Sun, 11 Jun 2006) | 1 line

  Get test to pass on S/390.  Shout if you think this change is incorrect.
........
  r46873 | neal.norwitz | 2006-06-12 04:05:55 +0200 (Mon, 12 Jun 2006) | 1 line

  Cleanup Py_ssize_t a little (get rid of second #ifdef)
........
  r46874 | neal.norwitz | 2006-06-12 04:06:17 +0200 (Mon, 12 Jun 2006) | 1 line

  Fix some Py_ssize_t issues
........
  r46875 | neal.norwitz | 2006-06-12 04:06:42 +0200 (Mon, 12 Jun 2006) | 1 line

  Fix some Py_ssize_t issues
........
  r46876 | neal.norwitz | 2006-06-12 04:07:24 +0200 (Mon, 12 Jun 2006) | 2 lines

  Cleanup: Remove the import of types to get StringTypes; we can just use basestring.
........
  r46877 | neal.norwitz | 2006-06-12 04:07:57 +0200 (Mon, 12 Jun 2006) | 1 line

  Don't truncate if size_t is bigger than uint
........
  r46878 | neal.norwitz | 2006-06-12 04:08:41 +0200 (Mon, 12 Jun 2006) | 1 line

  Don't leak the list object if there's an error allocating the item storage.  Backport candidate
........
  r46879 | neal.norwitz | 2006-06-12 04:09:03 +0200 (Mon, 12 Jun 2006) | 1 line

  Fix typo.  Backport if anyone cares. :-)
........
  r46880 | neal.norwitz | 2006-06-12 04:09:34 +0200 (Mon, 12 Jun 2006) | 1 line

  Fix indentation of case and a Py_ssize_t issue.
........
  r46881 | neal.norwitz | 2006-06-12 04:11:18 +0200 (Mon, 12 Jun 2006) | 3 lines

  Get rid of f_restricted too.  Doc the other 4 ints that were already removed
  at the NeedForSpeed sprint.
........
  r46882 | neal.norwitz | 2006-06-12 04:13:21 +0200 (Mon, 12 Jun 2006) | 1 line

  Fix the socket tests so they can be run concurrently.  Backport candidate
........
  r46883 | neal.norwitz | 2006-06-12 04:16:10 +0200 (Mon, 12 Jun 2006) | 1 line

  i and j are initialized below when used.  No need to do it twice
........
  r46884 | neal.norwitz | 2006-06-12 05:05:03 +0200 (Mon, 12 Jun 2006) | 1 line

  Remove unused import
........
  r46885 | neal.norwitz | 2006-06-12 05:05:40 +0200 (Mon, 12 Jun 2006) | 1 line

  Impl ssize_t
........
  r46886 | neal.norwitz | 2006-06-12 05:33:09 +0200 (Mon, 12 Jun 2006) | 6 lines

  Patch #1503046, Conditional compilation of zlib.(de)compressobj.copy

  copy is only in newer versions of zlib.  This should allow zlibmodule
  to work with older versions like the Tru64 buildbot.
........
  r46887 | phillip.eby | 2006-06-12 06:04:32 +0200 (Mon, 12 Jun 2006) | 2 lines

  Sync w/external release 0.1.2.  Please see PEP 360 before making changes to external packages.
........
  r46888 | martin.v.loewis | 2006-06-12 06:26:31 +0200 (Mon, 12 Jun 2006) | 2 lines

  Get rid of function pointer cast.
........
  r46889 | thomas.heller | 2006-06-12 08:05:57 +0200 (Mon, 12 Jun 2006) | 3 lines

  I don't know how that happened, but the entire file contents were
  duplicated.  Thanks to Simon Percivall for the heads up.
........
  r46890 | nick.coghlan | 2006-06-12 10:19:37 +0200 (Mon, 12 Jun 2006) | 1 line

  Fix site module docstring to match the code
........
  r46891 | nick.coghlan | 2006-06-12 10:23:02 +0200 (Mon, 12 Jun 2006) | 1 line

  Fix site module docstring to match the code for Mac OSX, too
........
  r46892 | nick.coghlan | 2006-06-12 10:27:13 +0200 (Mon, 12 Jun 2006) | 1 line

  The site module documentation also described the Windows behaviour incorrectly.
........
  r46893 | nick.coghlan | 2006-06-12 12:17:11 +0200 (Mon, 12 Jun 2006) | 1 line

  Make the -m switch conform to the documentation of sys.path by behaving like the -c switch
........
  r46894 | kristjan.jonsson | 2006-06-12 17:45:12 +0200 (Mon, 12 Jun 2006) | 2 lines

  Fix the CRT argument error handling for VisualStudio .NET 2005.  Install a CRT error handler and disable the assertion for debug builds.  This causes CRT to set errno to EINVAL.
  This update fixes crash cases in the test suite where the default CRT error handler would cause process exit.
........
  r46899 | thomas.heller | 2006-06-12 22:56:48 +0200 (Mon, 12 Jun 2006) | 1 line

  Add pep-291 compatibility markers.
........
  r46901 | ka-ping.yee | 2006-06-13 01:47:52 +0200 (Tue, 13 Jun 2006) | 5 lines

  Add the uuid module.

  This module has been tested so far on Windows XP (Python 2.4 and 2.5a2),
  Mac OS X (Python 2.3, 2.4, and 2.5a2), and Linux (Python 2.4 and 2.5a2).
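
  A quick usage sketch of the new module:

      import uuid
      # uuid4() is random, uuid5() is name-based (SHA-1), per RFC 4122.
      u = uuid.uuid4()
      print u, u.hex, u.version
      print uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")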
........
  r46902 | tim.peters | 2006-06-13 02:30:01 +0200 (Tue, 13 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46903 | tim.peters | 2006-06-13 02:30:50 +0200 (Tue, 13 Jun 2006) | 2 lines

  Added missing svn:eol-style property to text files.
........
  r46905 | tim.peters | 2006-06-13 05:30:07 +0200 (Tue, 13 Jun 2006) | 5 lines

  get_matching_blocks():  rewrote code & comments so they match; added
  more comments about why it's this way at all; and removed what looked
  like needless expense (sorting (i, j, k) triples directly should give
  exactly the same order as sorting (i, (i, j, k)) pairs).
........
  r46906 | neal.norwitz | 2006-06-13 06:08:53 +0200 (Tue, 13 Jun 2006) | 1 line

  Don't fail if another process is listening on our port.
........
  r46908 | neal.norwitz | 2006-06-13 10:28:19 +0200 (Tue, 13 Jun 2006) | 2 lines

  Initialize the type object so pychecker can't crash the interpreter.
........
  r46909 | neal.norwitz | 2006-06-13 10:41:06 +0200 (Tue, 13 Jun 2006) | 1 line

  Verify that the crash due to EncodingMap not being initialized does not return
........
  r46910 | thomas.heller | 2006-06-13 10:56:14 +0200 (Tue, 13 Jun 2006) | 3 lines

  Add some windows datatypes that were missing from this file, and add
  the aliases defined in windows header files for the structures.
........
  r46911 | thomas.heller | 2006-06-13 11:40:14 +0200 (Tue, 13 Jun 2006) | 3 lines

  Add back WCHAR, UINT, DOUBLE, _LARGE_INTEGER, _ULARGE_INTEGER.
  VARIANT_BOOL is a special _ctypes data type, not c_short.
........
  r46912 | ronald.oussoren | 2006-06-13 13:19:56 +0200 (Tue, 13 Jun 2006) | 4 lines

  Linecache contains support for PEP302 loaders, but fails to deal with loaders
  that return None to indicate that the module is valid but no source is
  available. This patch fixes that.
........
  r46913 | andrew.kuchling | 2006-06-13 13:57:04 +0200 (Tue, 13 Jun 2006) | 1 line

  Mention uuid module
........
  r46915 | walter.doerwald | 2006-06-13 14:02:12 +0200 (Tue, 13 Jun 2006) | 2 lines

  Fix passing errors to the encoder and decoder functions.
........
  r46917 | walter.doerwald | 2006-06-13 14:04:43 +0200 (Tue, 13 Jun 2006) | 3 lines

  errors is an attribute in the incremental decoder
  not an argument.
........
  r46919 | andrew.macintyre | 2006-06-13 17:04:24 +0200 (Tue, 13 Jun 2006) | 11 lines

  Patch #1454481:  Make thread stack size runtime tunable.

  Heavily revised, comprising revisions:
  46640 - original trunk revision (backed out in r46655)
  46647 - markup fix (backed out in r46655)
  46692:46918 merged from branch aimacintyre-sf1454481

  branch tested on buildbots (Windows buildbots had problems
  not related to these changes).
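
  A minimal sketch of the new knob (the sizes here are only examples):

      import threading
      old = threading.stack_size()          # 0 means the platform default
      threading.stack_size(512 * 1024)      # request 512 KiB for new threads
      t = threading.Thread(target=lambda: None)
      t.start(); t.join()
      threading.stack_size(old)             # restore the previous setting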
........
  r46920 | brett.cannon | 2006-06-13 18:06:55 +0200 (Tue, 13 Jun 2006) | 2 lines

  Remove unused variable.
........
  r46921 | andrew.kuchling | 2006-06-13 18:41:41 +0200 (Tue, 13 Jun 2006) | 1 line

  Add ability to set stack size
........
  r46923 | marc-andre.lemburg | 2006-06-13 19:04:26 +0200 (Tue, 13 Jun 2006) | 2 lines

  Update pybench to version 2.0.
........
  r46924 | marc-andre.lemburg | 2006-06-13 19:07:14 +0200 (Tue, 13 Jun 2006) | 2 lines

  Revert wrong svn copy.
........
  r46925 | andrew.macintyre | 2006-06-13 19:14:36 +0200 (Tue, 13 Jun 2006) | 2 lines

  fix exception usage
........
  r46927 | tim.peters | 2006-06-13 20:37:07 +0200 (Tue, 13 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46928 | marc-andre.lemburg | 2006-06-13 20:56:56 +0200 (Tue, 13 Jun 2006) | 9 lines

  Updated to pybench 2.0.

  See svn.python.org/external/pybench-2.0 for the original import of that
  version.

  Note that platform.py was not copied over from pybench-2.0 since
  it is already part of Python 2.5.
........
  r46929 | andrew.macintyre | 2006-06-13 21:02:35 +0200 (Tue, 13 Jun 2006) | 5 lines

  Increase the small thread stack size to get the test
  to pass reliably on the one buildbot that insists on
  more than 32kB of thread stack.
........
  r46930 | marc-andre.lemburg | 2006-06-13 21:20:07 +0200 (Tue, 13 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46931 | thomas.heller | 2006-06-13 22:18:43 +0200 (Tue, 13 Jun 2006) | 2 lines

  More docs for ctypes.
........
  r46932 | brett.cannon | 2006-06-13 23:34:24 +0200 (Tue, 13 Jun 2006) | 2 lines

  Ignore .pyc and .pyo files in Pybench.
........
  r46933 | brett.cannon | 2006-06-13 23:46:41 +0200 (Tue, 13 Jun 2006) | 7 lines

  If a classic class defined a __coerce__() method that just returned its two
  arguments in reverse, the interpreter would infinitely recurse trying to get a
  coercion that worked.  So put in a recursion check after a coercion is made and
  before the next call that attempts to use the coerced values.

  Fixes bug #992017 and closes crashers/coerce.py .
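
  The offending pattern, roughly (a sketch, not the actual crasher):

      class Swapper:                     # classic class
          def __coerce__(self, other):
              return other, self         # just swaps the operands
      # Swapper() + 1 used to coerce forever; the recursion check now stops it
      # with an error instead of crashing the interpreter.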
........
  r46936 | gerhard.haering | 2006-06-14 00:24:47 +0200 (Wed, 14 Jun 2006) | 3 lines

  Merged changes from external pysqlite 2.3.0 release. Documentation updates will
  follow in a few hours at the latest. Then we should be ready for beta1.
........
  r46937 | brett.cannon | 2006-06-14 00:26:13 +0200 (Wed, 14 Jun 2006) | 2 lines

  Missed test for rev. 46933; infinite recursion from __coerce__() returning its arguments reversed.
........
  r46938 | gerhard.haering | 2006-06-14 00:53:48 +0200 (Wed, 14 Jun 2006) | 2 lines

  Updated documentation for pysqlite 2.3.0 API.
........
  r46939 | tim.peters | 2006-06-14 06:09:25 +0200 (Wed, 14 Jun 2006) | 10 lines

  SequenceMatcher.get_matching_blocks():  This now guarantees that
  adjacent triples in the result list describe non-adjacent matching
  blocks.  That's _nice_ to have, and Guido said he wanted it.

  Not a bugfix candidate:  Guido or not ;-), this changes visible
  endcase semantics (note that some tests had to change), and
  nothing about this was documented before.  Since it was working
  as designed, and behavior was consistent with the docs, it wasn't
  "a bug".
........
  r46940 | tim.peters | 2006-06-14 06:13:00 +0200 (Wed, 14 Jun 2006) | 2 lines

  Repaired typo in new comment.
........
  r46941 | tim.peters | 2006-06-14 06:15:27 +0200 (Wed, 14 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46942 | fred.drake | 2006-06-14 06:25:02 +0200 (Wed, 14 Jun 2006) | 3 lines

  - make some disabled tests run what they intend when enabled
  - remove some over-zealous triple-quoting
........
  r46943 | fred.drake | 2006-06-14 07:04:47 +0200 (Wed, 14 Jun 2006) | 3 lines

  add tests for two cases that are handled correctly in the current code,
  but that SF patch 1504676 as written mis-handles
........
  r46944 | fred.drake | 2006-06-14 07:15:51 +0200 (Wed, 14 Jun 2006) | 1 line

  explain an XXX in more detail
........
  r46945 | martin.v.loewis | 2006-06-14 07:21:04 +0200 (Wed, 14 Jun 2006) | 1 line

  Patch #1455898: Incremental mode for "mbcs" codec.
........
  r46946 | georg.brandl | 2006-06-14 08:08:31 +0200 (Wed, 14 Jun 2006) | 3 lines

  Bug #1339007: Shelf objects now don't raise an exception in their
  __del__ method when initialization failed.
........
  r46948 | thomas.heller | 2006-06-14 08:18:15 +0200 (Wed, 14 Jun 2006) | 1 line

  Fix docstring.
........
  r46949 | georg.brandl | 2006-06-14 08:29:07 +0200 (Wed, 14 Jun 2006) | 2 lines

  Bug #1501122: mention __gt__ &co in description of comparison order.
........
  r46951 | thomas.heller | 2006-06-14 09:08:38 +0200 (Wed, 14 Jun 2006) | 1 line

  Write more docs.
........
  r46952 | georg.brandl | 2006-06-14 10:31:39 +0200 (Wed, 14 Jun 2006) | 3 lines

  Bug #1153163: describe __add__ vs __radd__ behavior when adding
  objects of same type/of subclasses of the other.
........
  r46954 | georg.brandl | 2006-06-14 10:42:11 +0200 (Wed, 14 Jun 2006) | 3 lines

  Bug #1202018: add some common mime.types locations.
........
  r46955 | georg.brandl | 2006-06-14 10:50:03 +0200 (Wed, 14 Jun 2006) | 3 lines

  Bug #1117556: SimpleHTTPServer now tries to find and use the system's
  mime.types file for determining MIME types.
........
  r46957 | thomas.heller | 2006-06-14 11:09:08 +0200 (Wed, 14 Jun 2006) | 1 line

  Document paramflags.
........
  r46958 | thomas.heller | 2006-06-14 11:20:11 +0200 (Wed, 14 Jun 2006) | 1 line

  Add an __all__ list, since this module does 'from ctypes import *'.
........
  r46959 | andrew.kuchling | 2006-06-14 15:59:15 +0200 (Wed, 14 Jun 2006) | 1 line

  Add item
........
  r46961 | georg.brandl | 2006-06-14 18:46:43 +0200 (Wed, 14 Jun 2006) | 3 lines

  Bug #805015: doc error in PyUnicode_FromEncodedObject.
........
  r46962 | gerhard.haering | 2006-06-15 00:28:37 +0200 (Thu, 15 Jun 2006) | 10 lines

  - Added version checks in C code to make sure we don't trigger bugs in older
    SQLite versions.
  - Added version checks in test suite so that we don't execute tests that we
    know will fail with older (buggy) SQLite versions.

  Now, all tests should run against all SQLite versions from 3.0.8 until 3.3.6
  (latest one now). The sqlite3 module can be built against all these SQLite
  versions and the sqlite3 module does its best to not trigger bugs in SQLite,
  but using SQLite 3.3.3 or later is recommended.
........
  r46963 | tim.peters | 2006-06-15 00:38:13 +0200 (Thu, 15 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46964 | neal.norwitz | 2006-06-15 06:54:29 +0200 (Thu, 15 Jun 2006) | 9 lines

  Speculative checkin (requires approval of Gerhard Haering)

  This backs out the test changes in 46962 which prevented crashes
  by not running the tests via a version check.  All the version checks
  added in that rev were removed from the tests.

  Code was added to the error handler in connection.c that seems
  to work with older versions of sqlite including 3.1.3.
........
  r46965 | neal.norwitz | 2006-06-15 07:55:49 +0200 (Thu, 15 Jun 2006) | 1 line

  Try to narrow window of failure on slow/busy boxes (ppc64 buildbot)
........
  r46966 | martin.v.loewis | 2006-06-15 08:45:05 +0200 (Thu, 15 Jun 2006) | 2 lines

  Make import/lookup of mbcs fail on non-Windows systems.
........
  r46967 | ronald.oussoren | 2006-06-15 10:14:18 +0200 (Thu, 15 Jun 2006) | 2 lines

  Patch #1446489	(zipfile: support for ZIP64)
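
  A usage sketch (the file name is only an example); ZIP64 extensions still have
  to be requested explicitly:

      import zipfile
      # allowZip64 enables archives and members larger than 2 GiB.
      zf = zipfile.ZipFile("big.zip", "w", zipfile.ZIP_DEFLATED, allowZip64=True)
      zf.writestr("readme.txt", "hello")
      zf.close()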
........
  r46968 | neal.norwitz | 2006-06-15 10:16:44 +0200 (Thu, 15 Jun 2006) | 6 lines

  Re-revert this change.  Install the version check and don't run the test
  until Gerhard has time to fully debug the issue.  This affects versions
  before 3.2.1 (possibly only versions earlier than 3.1.3).

  Based on discussion on python-checkins.
........
  r46969 | gregory.p.smith | 2006-06-15 10:52:32 +0200 (Thu, 15 Jun 2006) | 6 lines

  - bsddb: multithreaded DB access using the simple bsddb module interface
    now works reliably.  It has been updated to use automatic BerkeleyDB
    deadlock detection and the bsddb.dbutils.DeadlockWrap wrapper to retry
    database calls that would previously deadlock. [SF python bug #775414]
........
  r46970 | gregory.p.smith | 2006-06-15 11:23:52 +0200 (Thu, 15 Jun 2006) | 2 lines

  minor documentation cleanup.  mention the bsddb.db interface explicitly by name.
........
  r46971 | neal.norwitz | 2006-06-15 11:57:03 +0200 (Thu, 15 Jun 2006) | 5 lines

  Steal the trick from test_compiler to print out a slow msg.
  This will hopefully get the buildbots to pass.  Not sure this
  test will be feasible or even work.  But everything is red now,
  so it can't get much worse.
........
  r46972 | neal.norwitz | 2006-06-15 12:24:49 +0200 (Thu, 15 Jun 2006) | 1 line

  Print some more info to get an idea of how much longer the test will last
........
  r46981 | tim.peters | 2006-06-15 20:04:40 +0200 (Thu, 15 Jun 2006) | 6 lines

  Try to reduce the extreme peak memory and disk-space use
  of this test.  It probably still requires more disk space
  than most buildbots have, and in any case is still so
  intrusive that if we don't find another way to test this I'm
  taking my buildbot offline permanently ;-)
........
  r46982 | tim.peters | 2006-06-15 20:06:29 +0200 (Thu, 15 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r46983 | tim.peters | 2006-06-15 20:07:28 +0200 (Thu, 15 Jun 2006) | 2 lines

  Add missing svn:eol-style property to text files.
........
  r46984 | tim.peters | 2006-06-15 20:38:19 +0200 (Thu, 15 Jun 2006) | 2 lines

  Oops -- I introduced an off-by-6436159488 error.
........
  r46990 | neal.norwitz | 2006-06-16 06:30:34 +0200 (Fri, 16 Jun 2006) | 1 line

  Disable this test until we can determine what to do about it
........
  r46991 | neal.norwitz | 2006-06-16 06:31:06 +0200 (Fri, 16 Jun 2006) | 1 line

  Param name is dir, not directory.  Update docstring.  Backport candidate
........
  r46992 | neal.norwitz | 2006-06-16 06:31:28 +0200 (Fri, 16 Jun 2006) | 1 line

  Add missing period in comment.
........
  r46993 | neal.norwitz | 2006-06-16 06:32:43 +0200 (Fri, 16 Jun 2006) | 1 line

  Fix whitespace, there are memory leaks in this module.
........
  r46995 | fred.drake | 2006-06-17 01:45:06 +0200 (Sat, 17 Jun 2006) | 3 lines

  SF patch 1504676: Make sgmllib char and entity references pluggable
  (implementation/tests contributed by Sam Ruby)
........
  r46996 | fred.drake | 2006-06-17 03:07:54 +0200 (Sat, 17 Jun 2006) | 1 line

  fix change that broke the htmllib tests
........
  r46998 | martin.v.loewis | 2006-06-17 11:15:14 +0200 (Sat, 17 Jun 2006) | 3 lines

  Patch #763580:  Add name and value arguments to
  Tkinter variable classes.
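
  A small sketch of the new arguments (the variable name is made up):

      import Tkinter
      root = Tkinter.Tk()
      # 'value' sets the initial contents, 'name' picks the Tcl variable name.
      v = Tkinter.StringVar(root, value="hello", name="greeting")
      assert root.getvar("greeting") == "hello"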
........
  r46999 | martin.v.loewis | 2006-06-17 11:20:41 +0200 (Sat, 17 Jun 2006) | 2 lines

  Patch #1096231: Add default argument to wm_iconbitmap.
........
  r47000 | martin.v.loewis | 2006-06-17 11:25:15 +0200 (Sat, 17 Jun 2006) | 2 lines

  Patch #1494750: Destroy master after deleting children.
........
  r47003 | george.yoshida | 2006-06-17 18:31:52 +0200 (Sat, 17 Jun 2006) | 2 lines

  markup fix
........
  r47005 | george.yoshida | 2006-06-17 18:39:13 +0200 (Sat, 17 Jun 2006) | 4 lines

  Update url.

  Old URL returned status code 301 (Moved Permanently).
........
  r47007 | martin.v.loewis | 2006-06-17 20:44:27 +0200 (Sat, 17 Jun 2006) | 2 lines

  Patch #812986: Update the canvas even if not tracing.
........
  r47008 | martin.v.loewis | 2006-06-17 21:03:26 +0200 (Sat, 17 Jun 2006) | 2 lines

  Patch #815924: Restore ability to pass type= and icon=
........
  r47009 | neal.norwitz | 2006-06-18 00:37:45 +0200 (Sun, 18 Jun 2006) | 1 line

  Fix typo in docstring
........
  r47010 | neal.norwitz | 2006-06-18 00:38:15 +0200 (Sun, 18 Jun 2006) | 1 line

  Fix memory leak reported by valgrind while running test_subprocess
........
  r47011 | fred.drake | 2006-06-18 04:57:35 +0200 (Sun, 18 Jun 2006) | 1 line

  remove unnecessary markup
........
  r47013 | neal.norwitz | 2006-06-18 21:35:01 +0200 (Sun, 18 Jun 2006) | 7 lines

  Prevent spurious leaks when running regrtest.py -R.  There may be more
  issues that crop up from time to time, but this change seems to have been
  pretty stable (no spurious warnings) for about a week.

  Other modules which use threads may require similar use of
  threading_setup/threading_cleanup from test_support.
........
  r47014 | neal.norwitz | 2006-06-18 21:37:40 +0200 (Sun, 18 Jun 2006) | 9 lines

  The hppa ubuntu box sometimes hangs forever in these tests.  My guess
  is that the wait is failing for some reason.  Use WNOHANG, so we won't
  wait until the buildbot kills the test suite.

  I haven't been able to reproduce the failure, so I'm not sure if
  this will help or not.  Hopefully, this change will cause the test
  to fail, rather than hang.  That will be better since we will get
  the rest of the test results.  It may also help us debug the real problem.
........
  r47015 | neal.norwitz | 2006-06-18 22:10:24 +0200 (Sun, 18 Jun 2006) | 1 line

  Revert 47014 until it is more robust
........
  r47016 | thomas.heller | 2006-06-18 23:27:04 +0200 (Sun, 18 Jun 2006) | 6 lines

  Fix typos.
  Fix doctest example.
  Mention in the tutorial that 'errcheck' is explained in the ref manual.
  Use better wording in some places.
  Remove code examples that shouldn't be in the tutorial.
  Remove some XXX notices.
........
  r47017 | georg.brandl | 2006-06-19 00:17:29 +0200 (Mon, 19 Jun 2006) | 3 lines

  Patch #1507676: improve exception messages in abstract.c, object.c and typeobject.c.
........
  r47018 | neal.norwitz | 2006-06-19 07:40:44 +0200 (Mon, 19 Jun 2006) | 1 line

  Use Py_ssize_t
........
  r47019 | georg.brandl | 2006-06-19 08:35:54 +0200 (Mon, 19 Jun 2006) | 3 lines

  Add news entry about error msg improvement.
........
  r47020 | thomas.heller | 2006-06-19 09:07:49 +0200 (Mon, 19 Jun 2006) | 2 lines

  Try to repair the failing test on the OpenBSD buildbot.  Trial and error...
........
  r47021 | tim.peters | 2006-06-19 09:45:16 +0200 (Mon, 19 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r47022 | walter.doerwald | 2006-06-19 10:07:50 +0200 (Mon, 19 Jun 2006) | 4 lines

  Patch #1506645: add Python wrappers for the curses functions
  is_term_resized, resize_term and resizeterm. This uses three
  separate configure checks (one for each function).
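
  A hedged sketch of how the new wrappers might be used (the sizes are arbitrary;
  availability depends on the configure checks mentioned above):

      import curses

      def main(stdscr):
          # Check whether curses' idea of the terminal size needs updating,
          # then resize its data structures to the new geometry.
          if curses.is_term_resized(40, 100):
              curses.resizeterm(40, 100)

      curses.wrapper(main)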
........
  r47023 | walter.doerwald | 2006-06-19 10:14:09 +0200 (Mon, 19 Jun 2006) | 2 lines

  Make check order match in configure and configure.in.
........
  r47024 | tim.peters | 2006-06-19 10:14:28 +0200 (Mon, 19 Jun 2006) | 3 lines

  Repair KeyError when running test_threaded_import under -R,
  as reported by Neal on python-dev.
........
  r47025 | thomas.heller | 2006-06-19 10:32:46 +0200 (Mon, 19 Jun 2006) | 3 lines

  Next try to fix the OpenBSD buildbot tests:
  Use ctypes.util.find_library to locate the C runtime library
  on platforms where it returns useful results.
........
  r47026 | tim.peters | 2006-06-19 11:09:44 +0200 (Mon, 19 Jun 2006) | 13 lines

  TestHelp.make_parser():  This was making a permanent change to
  os.environ (setting envar COLUMNS), which at least caused
  test_float_default() to fail if the tests were run more than once.

  This repairs the test_optparse -R failures Neal reported on
  python-dev.  It also explains some seemingly bizarre test_optparse
  failures we saw a couple weeks ago on the buildbots, when
  test_optparse failed due to test_file failing to clean up after
  itself, and then test_optparse failed in an entirely different
  way when regrtest's -w option ran test_optparse a second time.
  It's now obvious that make_parser() permanently changing os.environ
  was responsible for the second half of that.
........
  r47027 | anthony.baxter | 2006-06-19 14:04:15 +0200 (Mon, 19 Jun 2006) | 2 lines

  Preparing for 2.5b1.
........
  r47029 | fred.drake | 2006-06-19 19:31:16 +0200 (Mon, 19 Jun 2006) | 1 line

  remove non-working document formats from edist
........
  r47030 | gerhard.haering | 2006-06-19 23:17:35 +0200 (Mon, 19 Jun 2006) | 5 lines

  Fixed a memory leak that was introduced with incorrect usage of the Python weak
  reference API in pysqlite 2.2.1.

  Bumped the pysqlite version number to the upcoming pysqlite 2.3.1 release.
........
  r47032 | ka-ping.yee | 2006-06-20 00:49:36 +0200 (Tue, 20 Jun 2006) | 2 lines

  Remove Python 2.3 compatibility comment.
........
  r47033 | trent.mick | 2006-06-20 01:21:25 +0200 (Tue, 20 Jun 2006) | 2 lines

  Upgrade pyexpat to expat 2.0.0 (http://python.org/sf/1462338).
........
  r47034 | trent.mick | 2006-06-20 01:57:41 +0200 (Tue, 20 Jun 2006) | 3 lines

  [ 1295808 ] expat symbols should be namespaced in pyexpat
  (http://python.org/sf/1295808)
........
  r47039 | andrew.kuchling | 2006-06-20 13:52:16 +0200 (Tue, 20 Jun 2006) | 1 line

  Uncomment wsgiref section
........
  r47040 | andrew.kuchling | 2006-06-20 14:15:09 +0200 (Tue, 20 Jun 2006) | 1 line

  Add four library items
........
  r47041 | andrew.kuchling | 2006-06-20 14:19:54 +0200 (Tue, 20 Jun 2006) | 1 line

  Terminology and typography fixes
........
  r47042 | andrew.kuchling | 2006-06-20 15:05:12 +0200 (Tue, 20 Jun 2006) | 1 line

  Add introductory paragraphs summarizing the release; minor edits
........
  r47043 | andrew.kuchling | 2006-06-20 15:11:29 +0200 (Tue, 20 Jun 2006) | 1 line

  Minor edits and rearrangements; markup fix
........
  r47044 | andrew.kuchling | 2006-06-20 15:20:30 +0200 (Tue, 20 Jun 2006) | 1 line

  [Bug #1504456] Mention xml -> xmlcore change
........
  r47047 | brett.cannon | 2006-06-20 19:30:26 +0200 (Tue, 20 Jun 2006) | 2 lines

  Raise TestSkipped when the test socket connection is refused.
........
  r47049 | brett.cannon | 2006-06-20 21:20:17 +0200 (Tue, 20 Jun 2006) | 2 lines

  Fix typo of exception name.
........
  r47053 | brett.cannon | 2006-06-21 18:57:57 +0200 (Wed, 21 Jun 2006) | 5 lines

  At the C level, tuple arguments are passed in directly to the exception
  constructor, meaning it is treated as *args, not as a single argument.  This
  means using the 'message' attribute won't work (until Py3K comes around),
  and so one must grab from 'args' to get the error number.
........
  r47054 | andrew.kuchling | 2006-06-21 19:10:18 +0200 (Wed, 21 Jun 2006) | 1 line

  Link to LibRef module documentation
........
  r47055 | andrew.kuchling | 2006-06-21 19:17:10 +0200 (Wed, 21 Jun 2006) | 1 line

  Note some of Barry's work
........
  r47056 | andrew.kuchling | 2006-06-21 19:17:28 +0200 (Wed, 21 Jun 2006) | 1 line

  Bump version
........
  r47057 | georg.brandl | 2006-06-21 19:45:17 +0200 (Wed, 21 Jun 2006) | 3 lines

  fix [ 1509132 ] compiler module builds incorrect AST for TryExceptFinally
........
  r47058 | georg.brandl | 2006-06-21 19:52:36 +0200 (Wed, 21 Jun 2006) | 3 lines

  Make test_fcntl aware of netbsd3.
........
  r47059 | georg.brandl | 2006-06-21 19:53:17 +0200 (Wed, 21 Jun 2006) | 3 lines

  Patch #1509001: expected skips for netbsd3.
........
  r47060 | gerhard.haering | 2006-06-21 22:55:04 +0200 (Wed, 21 Jun 2006) | 2 lines

  Removed call to enable_callback_tracebacks that slipped in by accident.
........
  r47061 | armin.rigo | 2006-06-21 23:58:50 +0200 (Wed, 21 Jun 2006) | 13 lines

  Fix for an obscure bug introduced by revs 46806 and 46808, with a test.
  The problem of checking too eagerly for recursive calls is the
  following: if a RuntimeError is caused by recursion, and if code needs
  to normalize it immediately (as in the 2nd test), then
  PyErr_NormalizeException() needs a call to the RuntimeError class to
  instantiate it, and this hits the recursion limit again...  causing
  PyErr_NormalizeException() to never finish.

  Moved this particular recursion check to slot_tp_call(), which is not
  involved in instantiating built-in exceptions.

  Backport candidate.
........
  r47064 | neal.norwitz | 2006-06-22 08:30:50 +0200 (Thu, 22 Jun 2006) | 3 lines

  Copy the wsgiref package during make install.
........
  r47065 | neal.norwitz | 2006-06-22 08:35:30 +0200 (Thu, 22 Jun 2006) | 1 line

  Reset the doc date to today for the automatic doc builds
........
  r47067 | andrew.kuchling | 2006-06-22 15:10:23 +0200 (Thu, 22 Jun 2006) | 1 line

  Mention how to suppress warnings
........
  r47069 | georg.brandl | 2006-06-22 16:46:17 +0200 (Thu, 22 Jun 2006) | 3 lines

  Set lineno correctly on list, tuple and dict literals.
........
  r47070 | georg.brandl | 2006-06-22 16:46:46 +0200 (Thu, 22 Jun 2006) | 4 lines

  Test for correct compilation of try-except-finally stmt.
  Test for correct lineno on list, tuple, dict literals.
........
  r47071 | fred.drake | 2006-06-22 17:50:08 +0200 (Thu, 22 Jun 2006) | 1 line

  fix markup nit
........
  r47072 | brett.cannon | 2006-06-22 18:49:14 +0200 (Thu, 22 Jun 2006) | 6 lines

  'warnings' was improperly requiring that a command-line Warning category be
  both a subclass of Warning and a subclass of types.ClassType.  The latter is no
  longer true thanks to new-style exceptions.

  Closes bug #1510580.  Thanks to AMK for the test.
........
  r47073 | ronald.oussoren | 2006-06-22 20:33:54 +0200 (Thu, 22 Jun 2006) | 3 lines

  MacOSX: Add a message to the first screen of the installer that tells
  users how to avoid updates to their shell profile.
........
  r47074 | georg.brandl | 2006-06-22 21:02:18 +0200 (Thu, 22 Jun 2006) | 3 lines

  Fix my name ;)
........
  r47075 | thomas.heller | 2006-06-22 21:07:36 +0200 (Thu, 22 Jun 2006) | 2 lines

  Small fixes, mostly in the markup.
........
  r47076 | peter.astrand | 2006-06-22 22:06:46 +0200 (Thu, 22 Jun 2006) | 1 line

  Make it possible to run test_subprocess.py on Python 2.2, which lacks test_support.is_resource_enabled.
........
  r47077 | peter.astrand | 2006-06-22 22:21:26 +0200 (Thu, 22 Jun 2006) | 1 line

  Applied patch #1506758: Prevent MemoryErrors with large MAXFD.
........
  r47079 | neal.norwitz | 2006-06-23 05:32:44 +0200 (Fri, 23 Jun 2006) | 1 line

  Fix refleak
........
  r47080 | fred.drake | 2006-06-23 08:03:45 +0200 (Fri, 23 Jun 2006) | 9 lines

  - SF bug #853506: IP6 address parsing in sgmllib
    ('[' and ']' were not accepted in unquoted attribute values)

  - cleaned up tests of character and entity reference decoding so the
    tests cover the documented relationships among handle_charref,
    handle_entityref, convert_charref, convert_codepoint, and
    convert_entityref, without bringing up Unicode issues that sgmllib
    cannot be involved in
........
  r47085 | andrew.kuchling | 2006-06-23 21:23:40 +0200 (Fri, 23 Jun 2006) | 11 lines

  Fit the Makefile to the Python doc environment better; this is a step toward
  including the howtos in the build process.

  	* Put LaTeX output in ../paper-<whatever>/.
  	* Put HTML output in ../html/
  	* Explain some of the Makefile variables
  	* Remove some cruft dating to my environment (e.g. the 'web' target)

  This makefile isn't currently invoked by the documentation build process,
  so these changes won't destabilize anything.
........
  r47086 | hyeshik.chang | 2006-06-23 23:16:18 +0200 (Fri, 23 Jun 2006) | 5 lines

  Bug #1511381: codec_getstreamcodec() in codec.c is corrected to
  omit a default "error" argument for NULL pointer.  This allows
  the parser to take a codec from cjkcodecs again.
  (Reported by Taewook Kang and reviewed by Walter Doerwald)
........
  r47091 | ronald.oussoren | 2006-06-25 22:44:16 +0200 (Sun, 25 Jun 2006) | 6 lines

  Workaround for bug #1512124

  Without this patch IDLE will get unresponsive when you open the debugger
  window on OSX. This happens both with the system Tcl/Tk on Tiger and with the
  latest universal download from tk-components.sf.net.
........
  r47092 | ronald.oussoren | 2006-06-25 23:14:19 +0200 (Sun, 25 Jun 2006) | 3 lines

  Drop the calldll demos for MacOS; calldll isn't present anymore, so there's no
  need to keep the demos around.
........
  r47093 | ronald.oussoren | 2006-06-25 23:15:58 +0200 (Sun, 25 Jun 2006) | 3 lines

  Use a path without a double slash to compile the .py files after installation
  (macosx, binary installer). This fixes bug #1508369 for python 2.5.
........
  r47094 | ronald.oussoren | 2006-06-25 23:19:06 +0200 (Sun, 25 Jun 2006) | 3 lines

  Also install the .egg-info files in Lib. This will cause wsgiref.egg-info to
  be installed.
........
  r47097 | andrew.kuchling | 2006-06-26 14:40:02 +0200 (Mon, 26 Jun 2006) | 1 line

  [Bug #1511998] Various comments from Nick Coghlan; thanks!
........
  r47098 | andrew.kuchling | 2006-06-26 14:43:43 +0200 (Mon, 26 Jun 2006) | 1 line

  Describe workaround for PyRange_New()'s removal
........
  r47099 | andrew.kuchling | 2006-06-26 15:08:24 +0200 (Mon, 26 Jun 2006) | 5 lines

  [Bug #1512163] Fix typo.

  This change will probably break tests on FreeBSD buildbots, but I'll check in
  a fix for that next.
........
  r47100 | andrew.kuchling | 2006-06-26 15:12:16 +0200 (Mon, 26 Jun 2006) | 9 lines

  [Bug #1512163] Use one set of locking methods, lockf();
  remove the flock() calls.

  On FreeBSD, the two methods lockf() and flock() end up using the same
  mechanism and the second one fails.  A Linux man page claims that the
  two methods are orthogonal (so locks acquired one way don't interact
  with locks acquired the other way) but that clearly must be false.
........
  r47101 | andrew.kuchling | 2006-06-26 15:23:10 +0200 (Mon, 26 Jun 2006) | 5 lines

  Add a test for a conflicting lock.

  On slow machines, maybe the time intervals (2 sec, 0.5 sec) will be too tight.
  I'll see how the buildbots like it.
........
  r47103 | andrew.kuchling | 2006-06-26 16:33:24 +0200 (Mon, 26 Jun 2006) | 1 line

  Windows doesn't have os.fork().  I'll just disable this test for now
........
  r47106 | andrew.kuchling | 2006-06-26 19:00:35 +0200 (Mon, 26 Jun 2006) | 9 lines

  Attempt to fix build failure on OS X and Debian alpha; the symptom is
  consistent with os.wait() returning immediately because some other
  subprocess had previously exited; the test suite then immediately
  tries to lock the mailbox and gets an error saying it's already
  locked.

  To fix this, do a waitpid() so the test suite only continues once
  the intended child process has exited.
........
  r47113 | neal.norwitz | 2006-06-27 06:06:46 +0200 (Tue, 27 Jun 2006) | 1 line

  Ignore some more warnings in the dynamic linker on an older gentoo
........
  r47114 | neal.norwitz | 2006-06-27 06:09:13 +0200 (Tue, 27 Jun 2006) | 6 lines

  Instead of doing a make test, run the regression tests out of the installed
  copy.  This will hopefully catch problems where directories are added
  under Lib/ but not to Makefile.pre.in.  This breaks out the 2 runs
  of the test suite with and without -O which is also nicer.
........
  r47115 | neal.norwitz | 2006-06-27 06:12:58 +0200 (Tue, 27 Jun 2006) | 5 lines

  Fix SF bug #1513032, 'make install' failure on FreeBSD 5.3.

  No need to install lib-old, it's empty in 2.5.
........
  r47116 | neal.norwitz | 2006-06-27 06:23:06 +0200 (Tue, 27 Jun 2006) | 1 line

  Test unimportant change to verify buildbot does not try to build
........
  r47117 | neal.norwitz | 2006-06-27 06:26:30 +0200 (Tue, 27 Jun 2006) | 1 line

  Try again: test unimportant change to verify buildbot does not try to build
........
  r47118 | neal.norwitz | 2006-06-27 06:28:56 +0200 (Tue, 27 Jun 2006) | 1 line

  Verify buildbot picks up these changes (really needs testing after last change to Makefile.pre.in)
........
  r47121 | vinay.sajip | 2006-06-27 09:34:37 +0200 (Tue, 27 Jun 2006) | 1 line

  Removed buggy exception handling in doRollover of rotating file handlers. Exceptions now propagate to caller.
........
  r47123 | ronald.oussoren | 2006-06-27 12:08:25 +0200 (Tue, 27 Jun 2006) | 3 lines

  MacOSX: fix a rather dumb buglet that made it impossible to create extensions on
  OSX 10.3 when using a binary distribution built on 10.4.
........
  r47125 | tim.peters | 2006-06-27 13:52:49 +0200 (Tue, 27 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r47128 | ronald.oussoren | 2006-06-27 14:53:52 +0200 (Tue, 27 Jun 2006) | 8 lines

  Use statically built copies of zlib and bzip2 to build the OSX installer; that
  way the resulting binaries have a better chance of running on 10.3.

  This patch also updates the search logic for sleepycat db3/4, without this
  patch you cannot use a sleepycat build with a non-standard prefix; with this
  you can (at least on OSX) if you add the prefix to CPPFLAGS/LDFLAGS at
  configure-time. This change is needed to build the binary installer for OSX.
........
  r47131 | ronald.oussoren | 2006-06-27 17:45:32 +0200 (Tue, 27 Jun 2006) | 5 lines

  macosx: Install a libpython2.5.a inside the framework as a symlink to the actual
  dylib at the root of the framework, that way tools that expect a unix-like
  install (python-config, but more importantly external products like
  mod_python) work correctly.
........
  r47137 | neal.norwitz | 2006-06-28 07:03:22 +0200 (Wed, 28 Jun 2006) | 4 lines

  According to the man pages on Gentoo Linux and Tru64, EACCES or EAGAIN
  can be returned if fcntl (lockf) fails.  This fixes the test failure
  on Tru64 by checking for either error rather than just EAGAIN.
........
  r47139 | neal.norwitz | 2006-06-28 08:28:31 +0200 (Wed, 28 Jun 2006) | 5 lines

  Fix bug #1512695: cPickle.loads could crash if it was interrupted with
  a KeyboardInterrupt since PyTuple_Pack was passed a NULL.

  Will backport.
........
  r47142 | nick.coghlan | 2006-06-28 12:41:47 +0200 (Wed, 28 Jun 2006) | 1 line

  Make full module name available as __module_name__ even when __name__ is set to something else (like '__main__')
........
  r47143 | armin.rigo | 2006-06-28 12:49:51 +0200 (Wed, 28 Jun 2006) | 2 lines

  A couple of crashers of the "won't fix" kind.
........
  r47147 | andrew.kuchling | 2006-06-28 16:25:20 +0200 (Wed, 28 Jun 2006) | 1 line

  [Bug #1508766] Add docs for uuid module; docs written by George Yoshida, with minor rearrangements by me.
........
  r47148 | andrew.kuchling | 2006-06-28 16:27:21 +0200 (Wed, 28 Jun 2006) | 1 line

  [Bug #1508766] Add docs for uuid module; this puts the module in the 'Internet Protocols' section.  Arguably this module could also have gone in the chapters on strings or encodings, maybe even the crypto chapter.  Fred, please move if you see fit.
........
  r47151 | georg.brandl | 2006-06-28 22:23:25 +0200 (Wed, 28 Jun 2006) | 3 lines

  Fix end_fill().
........
  r47153 | trent.mick | 2006-06-28 22:30:41 +0200 (Wed, 28 Jun 2006) | 2 lines

  Mention the expat upgrade and pyexpat fix I put in 2.5b1.
........
  r47154 | fred.drake | 2006-06-29 02:51:53 +0200 (Thu, 29 Jun 2006) | 6 lines

  SF bug #1504333: sgmllib should allow angle brackets in quoted values
  (modified patch by Sam Ruby; changed to use separate REs for start and end
   tags to reduce matching cost for end tags; extended tests; updated to avoid
   breaking previous changes to support IPv6 addresses in unquoted attribute
   values)
........
  r47156 | fred.drake | 2006-06-29 04:57:48 +0200 (Thu, 29 Jun 2006) | 1 line

  document recent bugfixes in sgmllib
........
  r47158 | neal.norwitz | 2006-06-29 06:10:08 +0200 (Thu, 29 Jun 2006) | 10 lines

  Add new utility function, reap_children(), to test_support.  This should
  be called at the end of each test that spawns children (perhaps it
  should be called from regrtest instead?).  This will hopefully prevent
  some of the unexplained failures in the buildbots (hppa and alpha)
  during tests that spawn children.  The problems were not reproducible.
  There were many zombies that remained at the end of several tests.
  In the worst case, this shouldn't cause any more problems,
  though it may not help either.  Time will tell.
........
  r47159 | neal.norwitz | 2006-06-29 07:48:14 +0200 (Thu, 29 Jun 2006) | 5 lines

  This should fix the buildbot failure on s/390 which can't connect to gmail.org.
  It makes the error message consistent and always sends to stderr.

  It would be much better for all the networking tests to hit only python.org.
........
  r47161 | thomas.heller | 2006-06-29 20:34:15 +0200 (Thu, 29 Jun 2006) | 3 lines

  Protect the thread api calls in the _ctypes extension module within
  #ifdef WITH_THREADS/#endif blocks.  Found by Sam Rushing.
........
  r47162 | martin.v.loewis | 2006-06-29 20:58:44 +0200 (Thu, 29 Jun 2006) | 2 lines

  Patch #1509163: MS Toolkit Compiler no longer available
........
  r47163 | skip.montanaro | 2006-06-29 21:20:09 +0200 (Thu, 29 Jun 2006) | 1 line

  add string methods to index
........
  r47164 | vinay.sajip | 2006-06-30 02:13:08 +0200 (Fri, 30 Jun 2006) | 1 line

  Fixed bug in fileConfig() which failed to clear logging._handlerList
........
  r47166 | tim.peters | 2006-06-30 08:18:39 +0200 (Fri, 30 Jun 2006) | 2 lines

  Whitespace normalization.
........
  r47170 | neal.norwitz | 2006-06-30 09:32:16 +0200 (Fri, 30 Jun 2006) | 1 line

  Silence compiler warning
........
  r47171 | neal.norwitz | 2006-06-30 09:32:46 +0200 (Fri, 30 Jun 2006) | 1 line

  Another problem reported by Coverity.  Backport candidate.
........
  r47175 | thomas.heller | 2006-06-30 19:44:54 +0200 (Fri, 30 Jun 2006) | 2 lines

  Revert the use of PY_FORMAT_SIZE_T in PyErr_Format.
........
  r47176 | tim.peters | 2006-06-30 20:34:51 +0200 (Fri, 30 Jun 2006) | 2 lines

  Remove now-unused fiddling with PY_FORMAT_SIZE_T.
........
  r47177 | georg.brandl | 2006-06-30 20:47:56 +0200 (Fri, 30 Jun 2006) | 3 lines

  Document decorator usage of property.
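
  The pattern being documented, in short:

      class Temperature(object):
          def __init__(self):
              self._celsius = 0.0

          @property                 # read-only attribute via decorator syntax
          def celsius(self):
              return self._celsius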
........
  r47181 | fred.drake | 2006-06-30 21:29:25 +0200 (Fri, 30 Jun 2006) | 4 lines

  - consistency nit: always include "()" in \function and \method
    (*should* be done by the presentation, but that requires changes all over)
  - avoid spreading the __name meme
........
  r47188 | vinay.sajip | 2006-07-01 12:45:20 +0200 (Sat, 01 Jul 2006) | 1 line

  Added entry for fileConfig() bugfix.
........
  r47189 | vinay.sajip | 2006-07-01 12:47:20 +0200 (Sat, 01 Jul 2006) | 1 line

  Added duplicate call to fileConfig() to ensure that it cleans up after itself correctly.
........
  r47190 | martin.v.loewis | 2006-07-01 17:33:37 +0200 (Sat, 01 Jul 2006) | 2 lines

  Release all forwarded functions in .close. Fixes #1513223.
........
  r47191 | fred.drake | 2006-07-01 18:28:20 +0200 (Sat, 01 Jul 2006) | 7 lines

  SF bug #1296433 (Expat bug #1515266): Unchecked calls to character data
  handler would cause a segfault.  This merges in Expat's lib/xmlparse.c
  revisions 1.154 and 1.155, which fix this and a closely related problem
  (the latter does not affect Python).

  Moved the crasher test to the tests for xml.parsers.expat.
........
  r47197 | gerhard.haering | 2006-07-02 19:48:30 +0200 (Sun, 02 Jul 2006) | 4 lines

  The sqlite3 module did cut off data from the SQLite database at the first null
  character before sending it to a custom converter. This has been fixed now.
........
  r47198 | martin.v.loewis | 2006-07-02 20:44:00 +0200 (Sun, 02 Jul 2006) | 1 line

  Correct arithmetic in access on Win32. Fixes #1513646.
........
  r47203 | thomas.heller | 2006-07-03 09:58:09 +0200 (Mon, 03 Jul 2006) | 1 line

  Cleanup: Remove commented out code.
........
  r47204 | thomas.heller | 2006-07-03 09:59:50 +0200 (Mon, 03 Jul 2006) | 1 line

  Don't run the doctests with Python 2.3 because it doesn't have the ELLIPSIS flag.
........
  r47205 | thomas.heller | 2006-07-03 10:04:05 +0200 (Mon, 03 Jul 2006) | 7 lines

  Fixes so that _ctypes can be compiled with the MingW compiler.

  It seems that the definition of '__attribute__(x)' was responsible for
  the compiler ignoring the '__fastcall' attribute on the
  ffi_closure_SYSV function in libffi_msvc/ffi.c; it took me quite some
  time to figure this out.
........
  r47206 | thomas.heller | 2006-07-03 10:08:14 +0200 (Mon, 03 Jul 2006) | 11 lines

  Add a new function uses_seh() to the _ctypes extension module.  This
  will return True if Windows Structured Exception handling (SEH) is
  used when calling functions, False otherwise.

  Currently, only MSVC supports SEH.

  Fix the test so that it doesn't crash when run with MingW compiled
  _ctypes.  Note that two tests are still failing when MingW is used; I
  suspect differences in structure layout and function calling conventions
  between MSVC and MingW.
........
  r47207 | tim.peters | 2006-07-03 10:23:19 +0200 (Mon, 03 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r47208 | martin.v.loewis | 2006-07-03 11:44:00 +0200 (Mon, 03 Jul 2006) | 3 lines

  Only setup canvas when it is first created.
  Fixes #1514703
........
  r47209 | martin.v.loewis | 2006-07-03 12:05:30 +0200 (Mon, 03 Jul 2006) | 3 lines

  Reimplement turtle.circle using a polyline, to allow correct
  filling of arcs. Also fixes #1514693.
........
  r47210 | martin.v.loewis | 2006-07-03 12:19:49 +0200 (Mon, 03 Jul 2006) | 3 lines

  Bug #1514693: Update turtle's heading when switching between
  degrees and radians.
........
  r47211 | martin.v.loewis | 2006-07-03 13:12:06 +0200 (Mon, 03 Jul 2006) | 2 lines

  Document functions added in 2.3 and 2.5.
........
  r47212 | martin.v.loewis | 2006-07-03 14:19:50 +0200 (Mon, 03 Jul 2006) | 3 lines

  Bug #1417699: Reject locale-specific decimal point in float()
  and atof().
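
  For illustration, a minimal sketch of what this means for callers;
  locale.atof() remains the locale-aware alternative:

      import locale

      float("3.14")       # always accepted: float() now only understands '.'
      # float("3,14")     # ValueError, even when the LC_NUMERIC locale uses ','
      # locale.atof("3,14")  # the locale-aware way to parse such input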
........
  r47213 | martin.v.loewis | 2006-07-03 14:28:58 +0200 (Mon, 03 Jul 2006) | 3 lines

  Bug #1267547: Put proper recursive setup.py call into the
  spec file generated by bdist_rpm.
........
  r47215 | martin.v.loewis | 2006-07-03 15:01:35 +0200 (Mon, 03 Jul 2006) | 3 lines

  Patch #825417: Fix timeout processing in expect,
  read_until. Will backport to 2.4.
........
  r47218 | martin.v.loewis | 2006-07-03 15:47:40 +0200 (Mon, 03 Jul 2006) | 2 lines

  Put method-wrappers into trashcan. Fixes #927248.
........
  r47219 | andrew.kuchling | 2006-07-03 16:07:30 +0200 (Mon, 03 Jul 2006) | 1 line

  [Bug #1515932] Clarify description of slice assignment
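
  For example (a quick illustration of the behaviour being documented):

      a = [1, 2, 3, 4, 5]
      a[1:3] = ["x", "y", "z"]   # the replacement need not match the slice's length
      a                          # -> [1, 'x', 'y', 'z', 4, 5]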
........
  r47220 | andrew.kuchling | 2006-07-03 16:16:09 +0200 (Mon, 03 Jul 2006) | 4 lines

  [Bug #1511911] Clarify description of optional arguments to sorted()
     by improving the xref to the section on lists, and by
     copying the explanations of the arguments (with a slight modification).
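
  A couple of quick examples of the arguments being clarified (a sketch, not
  taken from the docs themselves):

      words = ["banana", "Apple", "cherry"]
      sorted(words, key=str.lower)            # case-insensitive ordering
      sorted(words, key=len, reverse=True)    # longest strings first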
........
  r47223 | kristjan.jonsson | 2006-07-03 16:59:05 +0200 (Mon, 03 Jul 2006) | 1 line

  Fix build problems with the platform SDK on windows.  It is not sufficient to test for the C compiler version when determining if we have the secure CRT from microsoft.  Must test with an undocumented macro, __STDC_SECURE_LIB__ too.
........
  r47224 | ronald.oussoren | 2006-07-04 14:30:22 +0200 (Tue, 04 Jul 2006) | 7 lines

  Sync the darwin/x86 port libffi with the copy in PyObjC. This fixes a number
  of bugs in that port. The most annoying ones were due to some subtle differences
  between the documented ABI and the actual implementation :-(

  (there are no python unittests that fail without this patch, but without it
   some of libffi's unittests fail).
........
  r47234 | georg.brandl | 2006-07-05 10:21:00 +0200 (Wed, 05 Jul 2006) | 3 lines

  Remove remaining references to OverflowWarning.
........
  r47236 | thomas.heller | 2006-07-05 11:13:56 +0200 (Wed, 05 Jul 2006) | 3 lines

  Fix the bitfield test when _ctypes is compiled with MingW.  Structures
  containing bitfields may have a different layout on MSVC and MingW.
........
  r47237 | thomas.wouters | 2006-07-05 13:03:49 +0200 (Wed, 05 Jul 2006) | 15 lines


  Fix bug in passing tuples to string.Template. All other values (with working
  str() or repr()) would work, just not multi-value tuples. Probably not a
  backport candidate, since it changes the behaviour of passing a
  single-element tuple:

  >>> string.Template("$foo").substitute(dict(foo=(1,)))

  '(1,)'

  versus

  '1'
........
  r47241 | georg.brandl | 2006-07-05 16:18:45 +0200 (Wed, 05 Jul 2006) | 2 lines

  Patch #1517490: fix glitches in filter() docs.
........
  r47244 | georg.brandl | 2006-07-05 17:50:05 +0200 (Wed, 05 Jul 2006) | 2 lines

  no need to elaborate "string".
........
  r47251 | neal.norwitz | 2006-07-06 06:28:59 +0200 (Thu, 06 Jul 2006) | 3 lines

  Fix refleaks reported by Shane Hathaway in SF patch #1515361.  This change
  contains only the changes related to leaking the copy variable.
........
  r47253 | fred.drake | 2006-07-06 07:13:22 +0200 (Thu, 06 Jul 2006) | 4 lines

  - back out Expat change; the final fix to Expat will be different
  - change the pyexpat wrapper to not be so sensitive to this detail of the
    Expat implementation (the ex-crasher test still passes)
........
  r47257 | neal.norwitz | 2006-07-06 08:45:08 +0200 (Thu, 06 Jul 2006) | 1 line

  Add a NEWS entry for a recent pyexpat fix
........
  r47258 | martin.v.loewis | 2006-07-06 08:55:58 +0200 (Thu, 06 Jul 2006) | 2 lines

  Add sqlite3.dll to the DLLs component, not to the TkDLLs component.
  Fixes #1517388.
........
  r47259 | martin.v.loewis | 2006-07-06 09:05:21 +0200 (Thu, 06 Jul 2006) | 1 line

  Properly quote compileall and Lib paths in case TARGETDIR has a space.
........
  r47260 | thomas.heller | 2006-07-06 09:50:18 +0200 (Thu, 06 Jul 2006) | 5 lines

  Revert the change done in svn revision 47206:

  Add a new function uses_seh() to the _ctypes extension module.  This
  will return True if Windows Structured Exception handling (SEH) is
  used when calling functions, False otherwise.
........
  r47261 | armin.rigo | 2006-07-06 09:58:18 +0200 (Thu, 06 Jul 2006) | 3 lines

  A couple of examples about how to attack the fact that _PyType_Lookup()
  returns a borrowed ref.  Many of the calls are open to attack.
........
  r47262 | thomas.heller | 2006-07-06 10:28:14 +0200 (Thu, 06 Jul 2006) | 2 lines

  The test that calls a function with invalid arguments and catches the
  resulting Windows access violation will not be run by default.
........
  r47263 | thomas.heller | 2006-07-06 10:48:35 +0200 (Thu, 06 Jul 2006) | 5 lines

  Patch #1517790: It is now possible to use custom objects in the ctypes
  foreign function argtypes sequence as long as they provide a
  from_param method, no longer is it required that the object is a
  ctypes type.
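
  A hedged sketch of what patch #1517790 permits; the class and the library
  name below are purely illustrative:

      import ctypes

      class Number(object):
          # Any object providing from_param may now appear in argtypes;
          # it no longer has to be a ctypes type itself.
          @classmethod
          def from_param(cls, value):
              return ctypes.c_int(int(value))

      libc = ctypes.CDLL("libc.so.6")   # assumes a Linux libc is loadable
      libc.abs.argtypes = [Number]
      libc.abs("-42")                   # from_param converts the string; returns 42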
........
  r47264 | thomas.heller | 2006-07-06 10:58:40 +0200 (Thu, 06 Jul 2006) | 2 lines

  Document the Structure and Union constructors.
........
  r47265 | thomas.heller | 2006-07-06 11:11:22 +0200 (Thu, 06 Jul 2006) | 2 lines

  Document the changes in svn revision 47263, from patch #1517790.
........
  r47267 | ronald.oussoren | 2006-07-06 12:13:35 +0200 (Thu, 06 Jul 2006) | 7 lines

  This patch solves the problem Skip was seeing with zlib: it ensures that
  configure uses compiler flags similar to those used by setup.py when doing
  the zlib test.

  Without this patch configure would use the first shared library on the
  linker path; with this patch it uses the first shared or static library on
  that path, just like setup.py.
........
  r47268 | thomas.wouters | 2006-07-06 12:48:28 +0200 (Thu, 06 Jul 2006) | 4 lines


  NEWS entry for r47267: fixing configure's zlib probing.
........
  r47269 | fredrik.lundh | 2006-07-06 14:29:24 +0200 (Thu, 06 Jul 2006) | 3 lines

  added XMLParser alias for cElementTree compatibility
........
  r47271 | nick.coghlan | 2006-07-06 14:53:04 +0200 (Thu, 06 Jul 2006) | 1 line

  Revert the __module_name__ changes made in rev 47142. We'll revisit this in Python 2.6
........
  r47272 | nick.coghlan | 2006-07-06 15:04:56 +0200 (Thu, 06 Jul 2006) | 1 line

  Update the tutorial section on relative imports
........
  r47273 | nick.coghlan | 2006-07-06 15:35:27 +0200 (Thu, 06 Jul 2006) | 1 line

  Ignore ImportWarning by default
........
  r47274 | nick.coghlan | 2006-07-06 15:41:34 +0200 (Thu, 06 Jul 2006) | 1 line

  Cover ImportWarning, PendingDeprecationWarning and simplefilter() in the warnings module docs
........
  r47275 | nick.coghlan | 2006-07-06 15:47:18 +0200 (Thu, 06 Jul 2006) | 1 line

  Add NEWS entries for the ImportWarning change and documentation update
........
  r47276 | andrew.kuchling | 2006-07-06 15:57:28 +0200 (Thu, 06 Jul 2006) | 1 line

  ImportWarning is now silent by default
........
  r47277 | thomas.heller | 2006-07-06 17:06:05 +0200 (Thu, 06 Jul 2006) | 2 lines

  Document the correct return type of PyLong_AsUnsignedLongLongMask.
........
  r47278 | hyeshik.chang | 2006-07-06 17:21:52 +0200 (Thu, 06 Jul 2006) | 2 lines

  Add a testcase for r47086 which fixed a bug in codec_getstreamcodec().
........
  r47279 | hyeshik.chang | 2006-07-06 17:39:24 +0200 (Thu, 06 Jul 2006) | 3 lines

  Test using all CJK encodings for the testcases which don't require
  specific encodings.
........
  r47280 | martin.v.loewis | 2006-07-06 21:28:03 +0200 (Thu, 06 Jul 2006) | 2 lines

  Properly generate logical file ids. Fixes #1515998.
  Also correct typo in Control.mapping.
........
  r47287 | neal.norwitz | 2006-07-07 08:03:15 +0200 (Fri, 07 Jul 2006) | 17 lines

  Restore rev 47014:

  The hppa ubuntu box sometimes hangs forever in these tests.  My guess
  is that the wait is failing for some reason.  Use WNOHANG, so we won't
  wait until the buildbot kills the test suite.

  I haven't been able to reproduce the failure, so I'm not sure if
  this will help or not.  Hopefully, this change will cause the test
  to fail, rather than hang.  That will be better since we will get
  the rest of the test results.  It may also help us debug the real problem.

  *** The reason this originally failed was that there were many
  zombie children outstanding before rev 47158 cleaned them up.
  There are still hangs in test_subprocess that need to be addressed,
  but that will take more work.  This should close some holes.
........
  r47289 | georg.brandl | 2006-07-07 10:15:12 +0200 (Fri, 07 Jul 2006) | 3 lines

  Fix RFC number.
........
  r50489 | neal.norwitz | 2006-07-08 07:31:37 +0200 (Sat, 08 Jul 2006) | 1 line

  Fix SF bug #1519018: 'as' is now validated properly in import statements
........
  r50490 | georg.brandl | 2006-07-08 14:15:27 +0200 (Sat, 08 Jul 2006) | 3 lines

  Add an additional test for bug #1519018.
........
  r50491 | tim.peters | 2006-07-08 21:55:05 +0200 (Sat, 08 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50493 | neil.schemenauer | 2006-07-09 18:16:34 +0200 (Sun, 09 Jul 2006) | 2 lines

  Fix AST compiler bug #1501934: incorrect LOAD/STORE_GLOBAL generation.
........
  r50495 | neil.schemenauer | 2006-07-09 23:19:29 +0200 (Sun, 09 Jul 2006) | 2 lines

  Fix SF bug 1441486: bad unary minus folding in compiler.
........
  r50497 | neal.norwitz | 2006-07-10 00:14:42 +0200 (Mon, 10 Jul 2006) | 4 lines

  On 64 bit systems, int literals that use less than 64 bits are now ints
  rather than longs.  This also fixes the test for eval(-sys.maxint - 1).
........
  r50500 | neal.norwitz | 2006-07-10 02:04:44 +0200 (Mon, 10 Jul 2006) | 4 lines

  Bug #1512814, Fix incorrect lineno's when code at module scope
  started after line 256.
........
  r50501 | neal.norwitz | 2006-07-10 02:05:34 +0200 (Mon, 10 Jul 2006) | 1 line

  Fix doco.  Backport candidate.
........
  r50503 | neal.norwitz | 2006-07-10 02:23:17 +0200 (Mon, 10 Jul 2006) | 5 lines

  Part of SF patch #1484695.  This removes dead code.  The chksum was
  already verified in .frombuf() on the lines above.  If there was
  a problem, an exception would have been raised, so there was no way this
  condition could have been true.
........
  r50504 | neal.norwitz | 2006-07-10 03:18:57 +0200 (Mon, 10 Jul 2006) | 3 lines

  Patch #1516912: improve Modules support for OpenVMS.
........
  r50506 | neal.norwitz | 2006-07-10 04:36:41 +0200 (Mon, 10 Jul 2006) | 7 lines

  Patch #1504046: Add documentation for xml.etree.

  /F wrote the text docs, Englebert Gruber massaged them into LaTeX, and I
  did some more massaging to try to improve the consistency and fix some
  name mismatches between the declaration and text.
........
  r50509 | martin.v.loewis | 2006-07-10 09:23:48 +0200 (Mon, 10 Jul 2006) | 2 lines

  Introduce DISTUTILS_USE_SDK as a flag to determine whether the
  SDK environment should be used. Fixes #1508010.
........
  r50510 | martin.v.loewis | 2006-07-10 09:26:41 +0200 (Mon, 10 Jul 2006) | 1 line

  Change error message to indicate that VS2003 is necessary to build extension modules, not the .NET SDK.
........
  r50511 | martin.v.loewis | 2006-07-10 09:29:41 +0200 (Mon, 10 Jul 2006) | 1 line

  Add svn:ignore.
........
  r50512 | anthony.baxter | 2006-07-10 09:41:04 +0200 (Mon, 10 Jul 2006) | 1 line

  preparing for 2.5b2
........
  r50513 | thomas.heller | 2006-07-10 11:10:28 +0200 (Mon, 10 Jul 2006) | 2 lines

  Fix bug #1518190: accept any integer or long value in the
  ctypes.c_void_p constructor.
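
  That is, both of these now construct a c_void_p (illustrative only):

      import ctypes

      ctypes.c_void_p(42)
      ctypes.c_void_p(0xDEADBEEFL)   # long values are accepted as well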
........
  r50514 | thomas.heller | 2006-07-10 11:31:06 +0200 (Mon, 10 Jul 2006) | 3 lines

  Fixed a segfault when ctypes.wintypes were imported on
  non-Windows machines.
........
  r50516 | thomas.heller | 2006-07-10 13:11:10 +0200 (Mon, 10 Jul 2006) | 3 lines

  Assigning None to pointer-type structure fields possibly overwrote the
  wrong fields.
........
  r50517 | thomas.heller | 2006-07-10 13:17:37 +0200 (Mon, 10 Jul 2006) | 5 lines

  Moved the ctypes news entries from the 'Library' section into the
  'Extension Modules' section where they belong, probably.

  This destroys the original order of the news entries; I don't know
  if that is important or not.
........
  r50526 | phillip.eby | 2006-07-10 21:03:29 +0200 (Mon, 10 Jul 2006) | 2 lines

  Fix SF#1516184 and add a test to prevent regression.
........
  r50528 | phillip.eby | 2006-07-10 21:18:35 +0200 (Mon, 10 Jul 2006) | 2 lines

  Fix SF#1457312: bad socket error handling in distutils "upload" command.
........
  r50537 | peter.astrand | 2006-07-10 22:39:49 +0200 (Mon, 10 Jul 2006) | 1 line

  Make it possible to run test_subprocess.py with Python 2.2, which lacks test_support.reap_children().
........
  r50541 | tim.peters | 2006-07-10 23:08:24 +0200 (Mon, 10 Jul 2006) | 5 lines

  After approval from Anthony, merge the tim-current_frames
  branch into the trunk.  This adds a new sys._current_frames()
  function, which returns a dict mapping thread id to topmost
  thread stack frame.
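
  For example, a debugging helper along these lines becomes possible (a sketch
  only; the function name is made up):

      import sys, traceback

      def dump_thread_stacks():
          # sys._current_frames() maps thread id -> topmost frame of that thread
          for thread_id, frame in sys._current_frames().items():
              print "Thread %#x:" % thread_id
              traceback.print_stack(frame)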
........
  r50542 | tim.peters | 2006-07-10 23:11:49 +0200 (Mon, 10 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50553 | martin.v.loewis | 2006-07-11 00:11:28 +0200 (Tue, 11 Jul 2006) | 4 lines

  Patch #1519566: Remove unused _tofill member.
  Make begin_fill idempotent.
  Update demo2 to demonstrate filling of concave shapes.
........
  r50567 | anthony.baxter | 2006-07-11 04:04:09 +0200 (Tue, 11 Jul 2006) | 4 lines

  #1494314: Fix a regression with high-numbered sockets in 2.4.3. This
  means that select() on sockets > FD_SETSIZE (typically 1024) work again.
  The patch makes sockets use poll() internally where available.
........
  r50568 | tim.peters | 2006-07-11 04:17:48 +0200 (Tue, 11 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50575 | thomas.heller | 2006-07-11 18:42:05 +0200 (Tue, 11 Jul 2006) | 1 line

  Add missing Py_DECREF.
........
  r50576 | thomas.heller | 2006-07-11 18:44:25 +0200 (Tue, 11 Jul 2006) | 1 line

  Add missing Py_DECREFs.
........
  r50579 | andrew.kuchling | 2006-07-11 19:20:16 +0200 (Tue, 11 Jul 2006) | 1 line

  Bump version number;  add sys._current_frames
........
  r50582 | thomas.heller | 2006-07-11 20:28:35 +0200 (Tue, 11 Jul 2006) | 3 lines

  When a foreign function is retrieved by calling __getitem__ on a ctypes
  library instance, do not set it as an attribute.
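
  In other words (the library name is only an assumption):

      import ctypes

      libc = ctypes.CDLL("libc.so.6")
      f = libc["abs"]    # retrieved via __getitem__: not cached as an attribute
      g = libc.abs       # attribute access still creates and caches the function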
........
  r50583 | thomas.heller | 2006-07-11 20:40:50 +0200 (Tue, 11 Jul 2006) | 2 lines

  Change the ctypes version number to 1.0.0.
........
  r50597 | neal.norwitz | 2006-07-12 07:26:17 +0200 (Wed, 12 Jul 2006) | 3 lines

  Bug #1520864: unpacking singleton tuples in a for loop (for x, in) works again.
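
  That is, loops of this shape work again:

      rows = [(1,), (2,), (3,)]
      for value, in rows:    # unpack the one-element tuple in the loop target
          print value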
........
  r50598 | neal.norwitz | 2006-07-12 07:26:35 +0200 (Wed, 12 Jul 2006) | 1 line

  Fix function name in error msg
........
  r50599 | neal.norwitz | 2006-07-12 07:27:46 +0200 (Wed, 12 Jul 2006) | 4 lines

  Fix uninitialized memory read reported by Valgrind when running doctest.
  This could happen if size == 0.
........
  r50600 | neal.norwitz | 2006-07-12 09:28:29 +0200 (Wed, 12 Jul 2006) | 1 line

  Actually change the MAGIC #.  Create a new section for 2.5c1 and mention the impact of changing the MAGIC #.
........
  r50601 | thomas.heller | 2006-07-12 10:43:47 +0200 (Wed, 12 Jul 2006) | 3 lines

  Fix #1467450: ctypes now uses RTLD_GLOBAL by default on OSX 10.3 to
  load shared libraries.
........
  r50604 | thomas.heller | 2006-07-12 16:25:18 +0200 (Wed, 12 Jul 2006) | 3 lines

  Fix the wrong description of LibraryLoader.LoadLibrary, and document
  the DEFAULT_MODE constant.
........
  r50607 | georg.brandl | 2006-07-12 17:31:17 +0200 (Wed, 12 Jul 2006) | 3 lines

  Accept long options "--help" and "--version".
........
  r50617 | thomas.heller | 2006-07-13 11:53:47 +0200 (Thu, 13 Jul 2006) | 3 lines

  A misspelled preprocessor symbol caused ctypes to be always compiled
  without thread support.  Replaced WITH_THREADS with WITH_THREAD.
........
  r50619 | thomas.heller | 2006-07-13 19:01:14 +0200 (Thu, 13 Jul 2006) | 3 lines

  Fix #1521375.  When running with root privileges, 'gcc -o /dev/null'
  overwrote /dev/null.  Use a temporary file instead of /dev/null.
........
  r50620 | thomas.heller | 2006-07-13 19:05:13 +0200 (Thu, 13 Jul 2006) | 2 lines

  Fix misleading words.
........
  r50622 | andrew.kuchling | 2006-07-13 19:37:26 +0200 (Thu, 13 Jul 2006) | 1 line

  Typo fix
........
  r50629 | georg.brandl | 2006-07-14 09:12:54 +0200 (Fri, 14 Jul 2006) | 3 lines

  Patch #1521874: grammar errors in doanddont.tex.
........
  r50630 | neal.norwitz | 2006-07-14 09:20:04 +0200 (Fri, 14 Jul 2006) | 1 line

  Try to improve grammar further.
........
  r50631 | martin.v.loewis | 2006-07-14 11:58:55 +0200 (Fri, 14 Jul 2006) | 1 line

  Extend build_ssl to Win64, using VSExtComp.
........
  r50632 | martin.v.loewis | 2006-07-14 14:10:09 +0200 (Fri, 14 Jul 2006) | 1 line

  Add debug output to analyse buildbot failure.
........
  r50633 | martin.v.loewis | 2006-07-14 14:31:05 +0200 (Fri, 14 Jul 2006) | 1 line

  Fix Debug build of _ssl.
........
  r50636 | andrew.kuchling | 2006-07-14 15:32:38 +0200 (Fri, 14 Jul 2006) | 1 line

  Mention new options
........
  r50638 | peter.astrand | 2006-07-14 16:04:45 +0200 (Fri, 14 Jul 2006) | 1 line

  Bug #1223937: CalledProcessError.errno -> CalledProcessError.returncode.
........
  r50640 | thomas.heller | 2006-07-14 17:01:05 +0200 (Fri, 14 Jul 2006) | 4 lines

  Make the prototypes of our private PyUnicode_FromWideChar and
  PyUnicode_AsWideChar replacement functions compatible to the official
  functions by using Py_ssize_t instead of int.
........
  r50643 | thomas.heller | 2006-07-14 19:51:14 +0200 (Fri, 14 Jul 2006) | 3 lines

  Patch #1521817: The index range checking on ctypes arrays containing
  exactly one element is enabled again.
........
  r50647 | thomas.heller | 2006-07-14 20:22:50 +0200 (Fri, 14 Jul 2006) | 2 lines

  Updates for the ctypes documentation.
........
  r50655 | fredrik.lundh | 2006-07-14 23:45:48 +0200 (Fri, 14 Jul 2006) | 3 lines

  typo
........
  r50664 | george.yoshida | 2006-07-15 18:03:49 +0200 (Sat, 15 Jul 2006) | 2 lines

  Bug #15187702: ext/win-cookbook.html has a broken link to distutils
........
  r50667 | bob.ippolito | 2006-07-15 18:53:15 +0200 (Sat, 15 Jul 2006) | 1 line

  Patch #1220874: Update the binhex module for Mach-O.
........
  r50671 | fred.drake | 2006-07-16 03:21:20 +0200 (Sun, 16 Jul 2006) | 1 line

  clean up some link markup
........
  r50673 | neal.norwitz | 2006-07-16 03:50:38 +0200 (Sun, 16 Jul 2006) | 4 lines

  Bug #1512814, Fix incorrect lineno's when code within a function
  had more than 255 blank lines.  Byte codes need to go first, line #s second.
........
  r50674 | neal.norwitz | 2006-07-16 04:00:32 +0200 (Sun, 16 Jul 2006) | 5 lines

  a & b were dereffed above, so they are known to be valid pointers.
  z is known to be NULL, nothing to DECREF.

  Reported by Klockwork, #107.
........
  r50675 | neal.norwitz | 2006-07-16 04:02:57 +0200 (Sun, 16 Jul 2006) | 5 lines

  self is dereffed (and passed as first arg), so it's known to be good.
  func is returned from PyArg_ParseTuple and also dereffed.

  Reported by Klocwork, #30 (self one at least).
........
  r50676 | neal.norwitz | 2006-07-16 04:05:35 +0200 (Sun, 16 Jul 2006) | 4 lines

  proto was dereffed above and is known to be good.  No need for X.

  Reported by Klocwork, #39.
........
  r50677 | neal.norwitz | 2006-07-16 04:15:27 +0200 (Sun, 16 Jul 2006) | 5 lines

  Fix memory leaks in some conditions.

  Reported by Klocwork #152.
........
  r50678 | neal.norwitz | 2006-07-16 04:17:36 +0200 (Sun, 16 Jul 2006) | 4 lines

  Fix memory leak under some conditions.

  Reported by Klocwork, #98.
........
  r50679 | neal.norwitz | 2006-07-16 04:22:30 +0200 (Sun, 16 Jul 2006) | 8 lines

  Use sizeof(buffer) instead of duplicating the constants to ensure they won't
  be wrong.

  The real change is to pass (bufsz - 1) to PyOS_ascii_formatd and 1
  to strncat.  strncat copies n+1 bytes from src (not dest).

  Reported by Klocwork #58.
........
  r50680 | neal.norwitz | 2006-07-16 04:32:03 +0200 (Sun, 16 Jul 2006) | 5 lines

  Handle a NULL name properly.

  Reported by Klocwork #67
........
  r50681 | neal.norwitz | 2006-07-16 04:35:47 +0200 (Sun, 16 Jul 2006) | 6 lines

  PyFunction_SetDefaults() is documented as taking None or a tuple.
  A NULL would crash the PyTuple_Check().  Now make NULL return a SystemError.

  Reported by Klocwork #73.
........
  r50683 | neal.norwitz | 2006-07-17 02:55:45 +0200 (Mon, 17 Jul 2006) | 5 lines

  Stop INCREFing name, then checking if it's NULL.  name (f_name) should never
  be NULL so assert it.  Fix one place where we could have passed NULL.

  Reported by Klocwork #66.
........
  r50684 | neal.norwitz | 2006-07-17 02:57:15 +0200 (Mon, 17 Jul 2006) | 5 lines

  otherset is known to be non-NULL based on checks before and DECREF after.
  DECREF otherset rather than XDECREF in error conditions too.

  Reported by Klocwork #154.
........
  r50685 | neal.norwitz | 2006-07-17 02:59:04 +0200 (Mon, 17 Jul 2006) | 7 lines

  Reported by Klocwork #151.

  v2 can be NULL if exception2 is NULL.  I don't think that condition can happen,
  but I'm not sure it can't either.  Now the code will protect against either
  being NULL.
........
  r50686 | neal.norwitz | 2006-07-17 03:00:16 +0200 (Mon, 17 Jul 2006) | 1 line

  Add NEWS entry for a bunch of fixes due to warnings produced by Klocworks static analysis tool.
........
  r50687 | fred.drake | 2006-07-17 07:47:52 +0200 (Mon, 17 Jul 2006) | 3 lines

  document xmlcore (still minimal; needs mention in each of the xml.* modules)
  SF bug #1504456 (partial)
........
  r50688 | georg.brandl | 2006-07-17 15:23:46 +0200 (Mon, 17 Jul 2006) | 3 lines

  Remove usage of sets module (patch #1500609).
........
  r50689 | georg.brandl | 2006-07-17 15:26:33 +0200 (Mon, 17 Jul 2006) | 3 lines

  Add missing NEWS item (#1522771)
........
  r50690 | andrew.kuchling | 2006-07-17 18:47:54 +0200 (Mon, 17 Jul 2006) | 1 line

  Attribute more features
........
  r50692 | kurt.kaiser | 2006-07-17 23:59:27 +0200 (Mon, 17 Jul 2006) | 8 lines

  Patch 1479219 - Tal Einat
  1. 'as' highlighted as builtin in comment string on import line
  2. Comments such as "#False identity" which start with a keyword immediately
     after the '#' character aren't colored as comments.
  3. u or U beginning unicode string not correctly highlighted

  Closes bug 1325071
........
  r50693 | barry.warsaw | 2006-07-18 01:07:51 +0200 (Tue, 18 Jul 2006) | 16 lines

  decode_rfc2231(): Be more robust against buggy RFC 2231 encodings.
  Specifically, instead of raising a ValueError when there is a single tick in
  the parameter, simply return the entire string unquoted, with None for
  both the charset and the language.  Also, if there are more than 2 ticks in
  the parameter, interpret the first three parts as the standard RFC 2231 parts,
  then the rest of the parts as the encoded string.

  Test cases added.

  Original fewer-than-3-parts fix by Tokio Kikuchi.

  Resolves SF bug # 1218081.  I will back port the fix and tests to Python 2.4
  (email 3.0) and Python 2.3 (email 2.5).

  Also, bump the version number to email 4.0.1, removing the 'alpha' moniker.
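
  Roughly, the new behaviour looks like this (a sketch; the input strings are
  made up and the exact return types are glossed over):

      from email.utils import decode_rfc2231   # email.Utils in the backports

      # well-formed: charset, language and the encoded text are split apart
      decode_rfc2231("us-ascii'en'Some%20encoded%20text")

      # a single tick no longer raises ValueError; the whole string comes back
      # with None for both the charset and the language
      decode_rfc2231("broken'value")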
........
  r50695 | kurt.kaiser | 2006-07-18 06:03:16 +0200 (Tue, 18 Jul 2006) | 2 lines

  Rebinding Tab key was inserting 'tab' instead of 'Tab'.  Bug 1179168.
........
  r50696 | brett.cannon | 2006-07-18 06:41:36 +0200 (Tue, 18 Jul 2006) | 6 lines

  Fix bug #1520914.  Starting in 2.4, time.strftime() began to check the bounds
  of values in the time tuple passed in.  Unfortunately, people came to rely on
  the undocumented behaviour of setting unneeded values to 0, regardless of
  whether that was within the valid range.  Now those values are internally
  forced to the minimum valid value when 0 is passed in.
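
  In other words, hand-built tuples padded with zeros keep working (a minimal
  illustration):

      import time

      # day-of-month 0 is out of range, but is now forced up to the minimum (1)
      # instead of being rejected by the bounds check
      time.strftime("%Y-%m", (2006, 7, 0, 0, 0, 0, 0, 0, 0))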
........
  r50697 | facundo.batista | 2006-07-18 14:16:13 +0200 (Tue, 18 Jul 2006) | 1 line

  Comments and docs cleanups, and some little fixes, provided by Santiágo Peresón
........
  r50704 | martin.v.loewis | 2006-07-18 19:46:31 +0200 (Tue, 18 Jul 2006) | 2 lines

  Patch #1524429: Use repr instead of backticks again.
........
  r50706 | tim.peters | 2006-07-18 23:55:15 +0200 (Tue, 18 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50708 | tim.peters | 2006-07-19 02:03:19 +0200 (Wed, 19 Jul 2006) | 18 lines

  SF bug 1524317: configure --without-threads fails to build

  Moved the code for _PyThread_CurrentFrames() up, so it's no longer
  in a huge "#ifdef WITH_THREAD" block (I didn't realize it /was/ in
  one).

  Changed test_sys's test_current_frames() so it passes with or without
  thread supported compiled in.

  Note that test_sys fails when Python is compiled without threads,
  but for an unrelated reason (the old test_exit() fails with an
  indirect ImportError on the `thread` module).  There are also
  other unrelated compilation failures without threads, in extension
  modules (like ctypes); at least the core compiles again.

  Do we really support --without-threads?  If so, there are several
  problems remaining.
........
  r50713 | thomas.heller | 2006-07-19 11:09:32 +0200 (Wed, 19 Jul 2006) | 4 lines

  Make sure the _ctypes extension can be compiled when WITH_THREAD is
  not defined on Windows, even if that configuration is probably not
  supported at all.
........
  r50715 | martin.v.loewis | 2006-07-19 19:18:32 +0200 (Wed, 19 Jul 2006) | 4 lines

  Revert r50706 (Whitespace normalization) and
  r50697: Comments and docs cleanups, and some little fixes
  per recommendation from Raymond Hettinger.
........
  r50719 | phillip.eby | 2006-07-20 17:54:16 +0200 (Thu, 20 Jul 2006) | 4 lines

  Fix SF#1516184 (again) and add a test to prevent regression.
  (There was a problem with empty filenames still causing recursion)
........
  r50720 | georg.brandl | 2006-07-20 18:28:39 +0200 (Thu, 20 Jul 2006) | 3 lines

  Guard for _active being None in __del__ method.
........
  r50721 | vinay.sajip | 2006-07-20 18:28:39 +0200 (Thu, 20 Jul 2006) | 1 line

  Updated documentation for TimedRotatingFileHandler relating to how rollover files are named. The previous documentation was wrongly the same as for RotatingFileHandler.
........
  r50731 | fred.drake | 2006-07-20 22:11:57 +0200 (Thu, 20 Jul 2006) | 1 line

  markup fix
........
  r50739 | kurt.kaiser | 2006-07-21 00:22:52 +0200 (Fri, 21 Jul 2006) | 7 lines

  Avoid occasional failure to detect closing paren properly.
  Patch 1407280 Tal Einat

  M    ParenMatch.py
  M    NEWS.txt
  M    CREDITS.txt
........
  r50740 | vinay.sajip | 2006-07-21 01:20:12 +0200 (Fri, 21 Jul 2006) | 1 line

  Addressed SF#1524081 by using a dictionary to map level names to syslog priority names, rather than a string.lower().
........
  r50741 | neal.norwitz | 2006-07-21 07:29:58 +0200 (Fri, 21 Jul 2006) | 1 line

  Add some asserts that we got good params passed
........
  r50742 | neal.norwitz | 2006-07-21 07:31:02 +0200 (Fri, 21 Jul 2006) | 5 lines

  Move the initialization of some pointers earlier.  The problem is
  that if we call Py_DECREF(frame) like we do if allocating locals fails,
  frame_dealloc() will try to use these bogus values and crash.
........
  r50743 | neal.norwitz | 2006-07-21 07:32:28 +0200 (Fri, 21 Jul 2006) | 4 lines

  Handle allocation failures gracefully.  Found with failmalloc.
  Many (all?) of these could be backported.
........
  r50745 | neal.norwitz | 2006-07-21 09:59:02 +0200 (Fri, 21 Jul 2006) | 1 line

  Speel initialise write.  Tanks Anthony.
........
  r50746 | neal.norwitz | 2006-07-21 09:59:47 +0200 (Fri, 21 Jul 2006) | 2 lines

  Handle more memory allocation failures without crashing.
........
  r50754 | barry.warsaw | 2006-07-21 16:51:07 +0200 (Fri, 21 Jul 2006) | 23 lines

  More RFC 2231 improvements for the email 4.0 package.  As Mark Sapiro rightly
  points out there are really two types of continued headers defined in this
  RFC (i.e. "encoded" parameters with the form "name*0*=" and unencoded
  parameters with the form "name*0="), but we were handling them both the
  same way and that isn't correct.

  This patch should be much more RFC compliant in that only encoded params are
  %-decoded and the charset/language information is only extracted if there are
  any encoded params in the segments.  If there are no encoded params then the
  RFC says that there will be no charset/language parts.

  Note however that this will change the return value for Message.get_param() in
  some cases.  For example, whereas before if you had all unencoded param
  continuations you would have still gotten a 3-tuple back from this method
  (with charset and language == None), you will now get just a string.  I don't
  believe this is a backward incompatible change though because the
  documentation for this method already indicates that either return value is
  possible and that you must do an isinstance(val, tuple) check to discriminate
  between the two.  (Yeah that API kind of sucks but we can't change /that/
  without breaking code.)

  Test cases, some documentation updates, and a NEWS item accompany this patch.
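
  Callers should therefore keep the documented isinstance() check, e.g. (a
  sketch; the header value is made up):

      from email.message import Message

      msg = Message()
      msg["Content-Disposition"] = 'attachment; filename*0="long"; filename*1="-name.txt"'
      value = msg.get_param("filename", header="content-disposition")
      if isinstance(value, tuple):   # encoded params: (charset, language, value)
          value = value[2]           # unencoded continuations now yield a plain string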
........
  r50759 | georg.brandl | 2006-07-21 19:36:31 +0200 (Fri, 21 Jul 2006) | 3 lines

  Fix check for empty list (vs. None).
........
  r50771 | brett.cannon | 2006-07-22 00:44:07 +0200 (Sat, 22 Jul 2006) | 2 lines

  Remove an XXX marker in a comment.
........
  r50773 | neal.norwitz | 2006-07-22 18:20:49 +0200 (Sat, 22 Jul 2006) | 1 line

  Fix more memory allocation issues found with failmalloc.
........
  r50774 | neal.norwitz | 2006-07-22 19:00:57 +0200 (Sat, 22 Jul 2006) | 1 line

  Don't fail if the directory already exists
........
  r50775 | greg.ward | 2006-07-23 04:25:53 +0200 (Sun, 23 Jul 2006) | 6 lines

  Be a lot smarter about whether this test passes: instead of assuming
  that a 2.93 sec audio file will always take 3.1 sec (as it did on the
  hardware I had when I first wrote the test), expect that it will take
  2.93 sec +/- 10%, and only fail if it's outside of that range.
  Compute the expected
........
  r50776 | kurt.kaiser | 2006-07-23 06:19:49 +0200 (Sun, 23 Jul 2006) | 2 lines

  Tooltips failed on new-style class __init__ args.  Bug 1027566 Loren Guthrie
........
  r50777 | neal.norwitz | 2006-07-23 09:50:36 +0200 (Sun, 23 Jul 2006) | 1 line

  Handle more mem alloc issues found with failmalloc
........
  r50778 | neal.norwitz | 2006-07-23 09:51:58 +0200 (Sun, 23 Jul 2006) | 5 lines

  If the for loop isn't entered, entryblock will be NULL.  If passed
  to stackdepth_walk it will be dereffed.

  Not sure if I found this with failmalloc or Klocwork #55.
........
  r50779 | neal.norwitz | 2006-07-23 09:53:14 +0200 (Sun, 23 Jul 2006) | 4 lines

  Move the initialization of size_a down below the check for a being NULL.

  Reported by Klocwork #106
........
  r50780 | neal.norwitz | 2006-07-23 09:55:55 +0200 (Sun, 23 Jul 2006) | 9 lines

  Check the allocation of b_objects and return if there was a failure.
  Also fix a few memory leaks in other failure scenarios.

  It seems that if b_objects == Py_None, we will have an extra ref to
  b_objects.  Add XXX comment so hopefully someone documents why the
  else isn't necessary or adds it in.

  Reported by Klocwork #20
........
  r50781 | neal.norwitz | 2006-07-23 09:57:11 +0200 (Sun, 23 Jul 2006) | 2 lines

  Fix memory leaks spotted by Klocwork #37.
........
  r50782 | neal.norwitz | 2006-07-23 09:59:00 +0200 (Sun, 23 Jul 2006) | 5 lines

  nextlink can be NULL if teedataobject_new fails, so use XINCREF.
  Ensure that dataobj is never NULL.

  Reported by Klocwork #102
........
  r50783 | neal.norwitz | 2006-07-23 10:01:43 +0200 (Sun, 23 Jul 2006) | 8 lines

  Ensure we don't write beyond errText.  I think I got this right, but
  it definitely could use some review to ensure I'm not off by one
  and there's no possible overflow/wrap-around of bytes_left.
  Reported by Klocwork #1.

  Fix a problem if there is a failure allocating self->db.
  Found with failmalloc.
........
  r50784 | ronald.oussoren | 2006-07-23 11:41:09 +0200 (Sun, 23 Jul 2006) | 3 lines

  Without this patch CMD-W won't close EditorWindows on MacOS X. This solves
  part of bug #1517990.
........
  r50785 | ronald.oussoren | 2006-07-23 11:46:11 +0200 (Sun, 23 Jul 2006) | 5 lines

  Fix for bug #1517996: Class and Path browsers show Tk menu

  This patch replaces the menubar that is used by AquaTk for windows without a
  menubar of their own by one that is more appropriate for IDLE.
........
  r50786 | andrew.macintyre | 2006-07-23 14:57:02 +0200 (Sun, 23 Jul 2006) | 2 lines

  Build updates for OS/2 EMX port
........
  r50787 | andrew.macintyre | 2006-07-23 15:00:04 +0200 (Sun, 23 Jul 2006) | 3 lines

  bugfix: PyThread_start_new_thread() returns the thread ID, not a flag;
  will backport.
........
  r50789 | andrew.macintyre | 2006-07-23 15:04:00 +0200 (Sun, 23 Jul 2006) | 2 lines

  Get mailbox module working on OS/2 EMX port.
........
  r50791 | greg.ward | 2006-07-23 18:05:51 +0200 (Sun, 23 Jul 2006) | 1 line

  Resync optparse with Optik 1.5.3: minor tweaks for/to tests.
........
  r50794 | martin.v.loewis | 2006-07-24 07:05:22 +0200 (Mon, 24 Jul 2006) | 2 lines

  Update list of unsupported systems. Fixes #1510853.
........
  r50795 | martin.v.loewis | 2006-07-24 12:26:33 +0200 (Mon, 24 Jul 2006) | 1 line

  Patch #1448199: Release GIL around ConnectRegistry.
........
  r50796 | martin.v.loewis | 2006-07-24 13:54:53 +0200 (Mon, 24 Jul 2006) | 3 lines

  Patch #1232023: Don't include empty path component from registry,
  so that the current directory does not get added to sys.path.
  Also fixes #1526785.
........
  r50797 | martin.v.loewis | 2006-07-24 14:54:17 +0200 (Mon, 24 Jul 2006) | 3 lines

  Bug #1524310: Properly report errors from FindNextFile in os.listdir.
  Will backport to 2.4.
........
  r50800 | georg.brandl | 2006-07-24 15:28:57 +0200 (Mon, 24 Jul 2006) | 7 lines

  Patch #1523356: fix determining include dirs in python-config.

  Also don't install "python-config" when doing altinstall, but
  always install "python-config2.x" and make a link to it like
  with the main executable.
........
  r50802 | georg.brandl | 2006-07-24 15:46:47 +0200 (Mon, 24 Jul 2006) | 3 lines

  Patch #1527744: right order of includes in order to have HAVE_CONIO_H defined properly.
........
  r50803 | georg.brandl | 2006-07-24 16:09:56 +0200 (Mon, 24 Jul 2006) | 3 lines

  Patch #1515343: Fix printing of deprecated string exceptions with a
  value in the traceback module.
........
  r50804 | kurt.kaiser | 2006-07-24 19:13:23 +0200 (Mon, 24 Jul 2006) | 7 lines

  EditorWindow failed when used stand-alone if sys.ps1 not set.
  Bug 1010370 Dave Florek

  M    EditorWindow.py
  M    PyShell.py
  M    NEWS.txt
........
  r50805 | kurt.kaiser | 2006-07-24 20:05:51 +0200 (Mon, 24 Jul 2006) | 6 lines

  - EditorWindow.test() was failing.  Bug 1417598

  M    EditorWindow.py
  M    ScriptBinding.py
  M    NEWS.txt
........
  r50808 | georg.brandl | 2006-07-24 22:11:35 +0200 (Mon, 24 Jul 2006) | 3 lines

  Repair accidental NameError.
........
  r50809 | tim.peters | 2006-07-24 23:02:15 +0200 (Mon, 24 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50810 | greg.ward | 2006-07-25 04:11:12 +0200 (Tue, 25 Jul 2006) | 3 lines

  Don't use standard assert: want tests to fail even when run with -O.
  Delete cruft.
........
  r50811 | tim.peters | 2006-07-25 06:07:22 +0200 (Tue, 25 Jul 2006) | 10 lines

  current_frames_with_threads():  There's actually no way
  to guess /which/ line the spawned thread is in at the time
  sys._current_frames() is called:  we know it finished
  enter_g.set(), but can't know whether the instruction
  counter has advanced to the following leave_g.wait().
  The latter is overwhelmingly likely, but not guaranteed,
  and I see that the "x86 Ubuntu dapper (icc) trunk" buildbot
  found it on the other line once.  Changed the test so it
  passes in either case.
........
  r50815 | martin.v.loewis | 2006-07-25 11:53:12 +0200 (Tue, 25 Jul 2006) | 2 lines

  Bug #1525817: Don't truncate short lines in IDLE's tool tips.
........
  r50816 | martin.v.loewis | 2006-07-25 12:05:47 +0200 (Tue, 25 Jul 2006) | 3 lines

  Bug #978833: Really close underlying socket in _socketobject.close.
  Will backport to 2.4.
........
  r50817 | martin.v.loewis | 2006-07-25 12:11:14 +0200 (Tue, 25 Jul 2006) | 1 line

  Revert incomplete checkin.
........
  r50819 | georg.brandl | 2006-07-25 12:22:34 +0200 (Tue, 25 Jul 2006) | 4 lines

  Patch #1525766: correctly pass onerror arg to recursive calls
  of pkg.walk_packages. Also improve the docstrings.
........
  r50825 | brett.cannon | 2006-07-25 19:32:20 +0200 (Tue, 25 Jul 2006) | 2 lines

  Add comment for changes to test_ossaudiodev.
........
  r50826 | brett.cannon | 2006-07-25 19:34:36 +0200 (Tue, 25 Jul 2006) | 3 lines

  Fix a bug in the messages for an assert failure where not enough arguments to a string
  were being converted in the format.
........
  r50828 | armin.rigo | 2006-07-25 20:09:57 +0200 (Tue, 25 Jul 2006) | 2 lines

  Document what is and is not a good way to fix the gc_inspection crasher.
........
  r50829 | armin.rigo | 2006-07-25 20:11:07 +0200 (Tue, 25 Jul 2006) | 5 lines

  Added another crasher, which hit me today (I was not intentionally
  writing such code, of course, but it took some gdb time to figure out
  what my bug was).
........
  r50830 | armin.rigo | 2006-07-25 20:38:39 +0200 (Tue, 25 Jul 2006) | 3 lines

  Document the crashers that will not go away soon as "won't fix",
  and explain why.
........
  r50831 | ronald.oussoren | 2006-07-25 21:13:35 +0200 (Tue, 25 Jul 2006) | 3 lines

  Install the compatibility symlink to libpython.a on OSX using 'ln -sf' instead
  of 'ln -s'; this avoids problems when reinstalling python.
........
  r50832 | ronald.oussoren | 2006-07-25 21:20:54 +0200 (Tue, 25 Jul 2006) | 7 lines

  Fix for bug #1525447 (renaming to MacOSmodule.c would also work, but not
  without causing problems for anyone that is on a case-insensitive filesystem).

  Setup.py tries to compile the MacOS extension from MacOSmodule.c, while the
  actual file is named macosmodule.c. This is no problem on the (default)
  case-insensitive filesystem, but doesn't work on case-sensitive filesystems.
........
  r50833 | ronald.oussoren | 2006-07-25 22:28:55 +0200 (Tue, 25 Jul 2006) | 7 lines

  Fix bug #1517990: IDLE keybindings on OSX

  This adds a new key definition for OSX, which is slightly different from the
  classic mac definition.

  Also add NEWS item for a couple of bugfixes I added recently.
........
  r50834 | tim.peters | 2006-07-26 00:30:24 +0200 (Wed, 26 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50839 | neal.norwitz | 2006-07-26 06:00:18 +0200 (Wed, 26 Jul 2006) | 1 line

  Hmm, only python2.x is installed, not plain python.  Did that change recently?
........
  r50840 | barry.warsaw | 2006-07-26 07:54:46 +0200 (Wed, 26 Jul 2006) | 6 lines

  Forward port some fixes that were in email 2.5 but for some reason didn't make
  it into email 4.0.  Specifically, in Message.get_content_charset(), handle RFC
  2231 headers that contain an encoding not known to Python, or a character in
  the data that isn't in the charset encoding.  Also forward port the
  appropriate unit tests.
........
  r50841 | georg.brandl | 2006-07-26 09:23:32 +0200 (Wed, 26 Jul 2006) | 3 lines

  NEWS entry for #1525766.
........
  r50842 | georg.brandl | 2006-07-26 09:40:17 +0200 (Wed, 26 Jul 2006) | 3 lines

  Bug #1459963: properly capitalize HTTP header names.
........
  r50843 | georg.brandl | 2006-07-26 10:03:10 +0200 (Wed, 26 Jul 2006) | 6 lines

  Part of bug #1523610: fix miscalculation of buffer length.

  Also add a guard against NULL in converttuple and add a test case
  (that previously would have crashed).
........
  r50844 | martin.v.loewis | 2006-07-26 14:12:56 +0200 (Wed, 26 Jul 2006) | 3 lines

  Bug #978833: Really close underlying socket in _socketobject.close.
  Fix httplib.HTTPConnection.getresponse to not close the
  socket if it is still needed for the response.
........
  r50845 | andrew.kuchling | 2006-07-26 19:16:52 +0200 (Wed, 26 Jul 2006) | 1 line

  [Bug #1471938] Fix build problem on Solaris 8 by conditionalizing the use of mvwgetnstr(); it was conditionalized a few lines below.  Fix from Paul Eggert.  I also tried out the STRICT_SYSV_CURSES case and am therefore removing the 'untested' comment.
........
  r50846 | andrew.kuchling | 2006-07-26 19:18:01 +0200 (Wed, 26 Jul 2006) | 1 line

  Correct error message
........
  r50847 | andrew.kuchling | 2006-07-26 19:19:39 +0200 (Wed, 26 Jul 2006) | 1 line

  Minor grammar fix
........
  r50848 | andrew.kuchling | 2006-07-26 19:22:21 +0200 (Wed, 26 Jul 2006) | 1 line

  Put news item in right section
........
  r50850 | andrew.kuchling | 2006-07-26 20:03:12 +0200 (Wed, 26 Jul 2006) | 1 line

  Use sys.exc_info()
........
  r50851 | andrew.kuchling | 2006-07-26 20:15:45 +0200 (Wed, 26 Jul 2006) | 1 line

  Use sys.exc_info()
........
  r50852 | phillip.eby | 2006-07-26 21:48:27 +0200 (Wed, 26 Jul 2006) | 4 lines

  Allow the 'onerror' argument to walk_packages() to catch any Exception, not
  just ImportError.  This allows documentation tools to better skip unimportable
  packages.
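
  A sketch of how a documentation tool might use this (the callback name is
  made up):

      import pkgutil

      def note_failure(name):
          # now invoked for any exception raised while inspecting a package,
          # not only ImportError
          print "skipping %s" % name

      names = [name for _, name, _ in pkgutil.walk_packages(onerror=note_failure)]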
........
  r50854 | tim.peters | 2006-07-27 01:23:15 +0200 (Thu, 27 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50855 | tim.peters | 2006-07-27 03:14:53 +0200 (Thu, 27 Jul 2006) | 21 lines

  Bug #1521947:  possible bug in mystrtol.c with recent gcc.

  In general, C doesn't define anything about what happens when
  an operation on a signed integral type overflows, and PyOS_strtol()
  did several formally undefined things of that nature on signed
  longs.  Some version of gcc apparently tries to exploit that now,
  and PyOS_strtol() could fail to detect overflow then.

  Tried to repair all that, although it seems at least as likely to me
  that we'll get screwed by bad platform definitions for LONG_MIN
  and/or LONG_MAX now.  For that reason, I don't recommend backporting
  this.

  Note that I have no box on which this makes a lick of difference --
  can't really test it, except to note that it didn't break anything
  on my boxes.

  Silent change:  PyOS_strtol() used to return the hard-coded 0x7fffffff
  in case of overflow.  Now it returns LONG_MAX.  They're the same only on
  32-bit boxes (although C doesn't guarantee that either ...).
........
  r50856 | neal.norwitz | 2006-07-27 05:51:58 +0200 (Thu, 27 Jul 2006) | 6 lines

  Don't kill a normal instance of python running on windows when checking
  to kill a cygwin instance.  build\\python.exe was matching a normal windows
  instance.  Prefix that with a \\ to ensure build is a directory and not
  PCbuild.  As discussed on python-dev.
........
  r50857 | neal.norwitz | 2006-07-27 05:55:39 +0200 (Thu, 27 Jul 2006) | 5 lines

  Closure can't be NULL at this point since we know it's a tuple.

  Reported by Klocwork # 74.
........
  r50858 | neal.norwitz | 2006-07-27 06:04:50 +0200 (Thu, 27 Jul 2006) | 1 line

  No functional change.  Add comment and assert to describe why there cannot be overflow which was reported by Klocwork.  Discussed on python-dev
........
  r50859 | martin.v.loewis | 2006-07-27 08:38:16 +0200 (Thu, 27 Jul 2006) | 3 lines

  Bump distutils version to 2.5, as several new features
  have been introduced since 2.4.
........
  r50860 | andrew.kuchling | 2006-07-27 14:18:20 +0200 (Thu, 27 Jul 2006) | 1 line

  Reformat docstring; fix typo
........
  r50861 | georg.brandl | 2006-07-27 17:05:36 +0200 (Thu, 27 Jul 2006) | 6 lines

  Add test_main() methods. These three tests were never run
  by regrtest.py.

  We really need a simpler testing framework.
........
  r50862 | tim.peters | 2006-07-27 17:09:20 +0200 (Thu, 27 Jul 2006) | 2 lines

  News for patch #1529686.
........
  r50863 | tim.peters | 2006-07-27 17:11:00 +0200 (Thu, 27 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50864 | georg.brandl | 2006-07-27 17:38:33 +0200 (Thu, 27 Jul 2006) | 3 lines

  Amend news entry.
........
  r50865 | georg.brandl | 2006-07-27 18:08:15 +0200 (Thu, 27 Jul 2006) | 3 lines

  Make uuid test suite pass on this box by requesting output with LC_ALL=C.
........
  r50866 | andrew.kuchling | 2006-07-27 20:37:33 +0200 (Thu, 27 Jul 2006) | 1 line

  Add example
........
  r50867 | thomas.heller | 2006-07-27 20:39:55 +0200 (Thu, 27 Jul 2006) | 9 lines

  Remove code that is no longer used (ctypes.com).

  Fix the DllGetClassObject and DllCanUnloadNow so that they forward the
  call to the comtypes.server.inprocserver module.

  The latter was never documented, never used by published code, and
  didn't work anyway, so I think it does not deserve a NEWS entry (but I
  might be wrong).
........
  r50868 | andrew.kuchling | 2006-07-27 20:41:21 +0200 (Thu, 27 Jul 2006) | 1 line

  Typo fix ('publically' is rare, poss. non-standard)
........
  r50869 | andrew.kuchling | 2006-07-27 20:42:41 +0200 (Thu, 27 Jul 2006) | 1 line

  Add missing word
........
  r50870 | andrew.kuchling | 2006-07-27 20:44:10 +0200 (Thu, 27 Jul 2006) | 1 line

  Repair typos
........
  r50872 | andrew.kuchling | 2006-07-27 20:53:33 +0200 (Thu, 27 Jul 2006) | 1 line

  Update URL; add example
........
  r50873 | andrew.kuchling | 2006-07-27 21:07:29 +0200 (Thu, 27 Jul 2006) | 1 line

  Add punctuation mark; add some examples
........
  r50874 | andrew.kuchling | 2006-07-27 21:11:07 +0200 (Thu, 27 Jul 2006) | 1 line

  Mention base64 module; rewrite last sentence to be more positive
........
  r50875 | andrew.kuchling | 2006-07-27 21:12:49 +0200 (Thu, 27 Jul 2006) | 1 line

  If binhex is higher-level than binascii, it should come first in the chapter
........
  r50876 | tim.peters | 2006-07-27 22:47:24 +0200 (Thu, 27 Jul 2006) | 28 lines

  check_node():  stop spraying mystery output to stderr.

  When a node number disagrees, keep track of all sources & the
  node numbers they reported, and stick all that in the error message.

  Changed all callers to supply a non-empty "source" argument; made
  the "source" argument non-optional.

  On my box, test_uuid still fails, but with the less confusing output:

  AssertionError: different sources disagree on node:
      from source 'getnode1', node was 00038a000015
      from source 'getnode2', node was 00038a000015
      from source 'ipconfig', node was 001111b2b7bf

  Only the last one appears to be correct; e.g.,

  C:\Code\python\PCbuild>getmac

  Physical Address    Transport Name
  =================== ==========================================================
  00-11-11-B2-B7-BF   \Device\Tcpip_{190FB163-5AFD-4483-86A1-2FE16AC61FF1}
  62-A1-AC-6C-FD-BE   \Device\Tcpip_{8F77DF5A-EA3D-4F1D-975E-D472CEE6438A}
  E2-1F-01-C6-5D-88   \Device\Tcpip_{CD18F76B-2EF3-409F-9B8A-6481EE70A1E4}

  I can't find anything on my box with MAC 00-03-8a-00-00-15, and am
  not clear on where that comes from.
........
  r50878 | andrew.kuchling | 2006-07-28 00:40:05 +0200 (Fri, 28 Jul 2006) | 1 line

  Reword paragraph
........
  r50879 | andrew.kuchling | 2006-07-28 00:49:38 +0200 (Fri, 28 Jul 2006) | 1 line

  Add example
........
  r50880 | andrew.kuchling | 2006-07-28 00:49:54 +0200 (Fri, 28 Jul 2006) | 1 line

  Add example
........
  r50881 | barry.warsaw | 2006-07-28 01:43:15 +0200 (Fri, 28 Jul 2006) | 27 lines

  Patch #1520294: Support for getset and member descriptors in types.py,
  inspect.py, and pydoc.py.  Specifically, this allows for querying the type of
  an object against these built-in C types and more importantly, for getting
  their docstrings printed in the interactive interpreter's help() function.

  This patch includes a new built-in module called _types which provides
  definitions of getset and member descriptors for use by the types.py module.
  These types are exposed as types.GetSetDescriptorType and
  types.MemberDescriptorType.  Query functions are provided as
  inspect.isgetsetdescriptor() and inspect.ismemberdescriptor().  The
  implementations of these are robust enough to work with Python implementations
  other than CPython, which may not have these fundamental types.

  The patch also includes documentation and test suite updates.

  I commit these changes now under these guiding principles:

  1. Silence is assent.  The release manager has not said "no", and of the few
     people that cared enough to respond to the thread, the worst vote was "0".

  2. It's easier to ask for forgiveness than permission.

  3. It's so dang easy to revert stuff in svn, that you could view this as a
     forcing function. :)

  Windows build patches will follow.
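
  A short illustration of the new names (assuming CPython, where e.g.
  datetime.date.year is a getset descriptor implemented in C):

      import types, inspect, datetime

      descr = datetime.date.__dict__["year"]
      isinstance(descr, types.GetSetDescriptorType)   # -> True
      inspect.isgetsetdescriptor(descr)               # -> True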
........
  r50882 | tim.peters | 2006-07-28 01:44:37 +0200 (Fri, 28 Jul 2006) | 4 lines

  Bug #1529297:  The rewrite of doctest for Python 2.4 unintentionally
  lost the property that tests are sorted by name before being run.
  ``DocTestFinder`` has been changed to sort the list of tests it returns.
........
  r50883 | tim.peters | 2006-07-28 01:45:48 +0200 (Fri, 28 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50884 | tim.peters | 2006-07-28 01:46:36 +0200 (Fri, 28 Jul 2006) | 2 lines

  Add missing svn:eol-style property to text files.
........
  r50885 | barry.warsaw | 2006-07-28 01:50:40 +0200 (Fri, 28 Jul 2006) | 4 lines

  Enable the building of the _types module on Windows.

  Note that this has only been tested for VS 2003 since that's all I have.
........
  r50887 | tim.peters | 2006-07-28 02:23:15 +0200 (Fri, 28 Jul 2006) | 7 lines

  defdict_reduce():  Plug leaks.

  We didn't notice these before because test_defaultdict didn't
  actually do anything before Georg fixed that earlier today.
  Neal's next refleak run then showed test_defaultdict leaking
  9 references on each run.  That's repaired by this checkin.
........
  r50888 | tim.peters | 2006-07-28 02:30:00 +0200 (Fri, 28 Jul 2006) | 2 lines

  News about the repaired memory leak in defaultdict.
........
  r50889 | gregory.p.smith | 2006-07-28 03:35:25 +0200 (Fri, 28 Jul 2006) | 7 lines

  - pybsddb Bug #1527939: bsddb module DBEnv dbremove and dbrename
    methods now allow their database parameter to be None as the
    sleepycat API allows.

  Also adds an appropriate test case for DBEnv.dbrename and dbremove.
........
  r50895 | neal.norwitz | 2006-07-28 06:22:34 +0200 (Fri, 28 Jul 2006) | 1 line

  Ensure the actual number matches the expected count
........
  r50896 | tim.peters | 2006-07-28 06:51:59 +0200 (Fri, 28 Jul 2006) | 6 lines

  Live with the fact that "the hardware address" is an ill-defined
  concept, and that different ways of trying to find "the
  hardware address" may return different results.  Certainly
  true on both of my Windows boxes, and in different ways
  (see whining on python-dev).
........
  r50897 | neal.norwitz | 2006-07-28 09:21:27 +0200 (Fri, 28 Jul 2006) | 3 lines

  Try to find the MAC addr on various flavours of Unix.  This seems hopeless.
  This reduces the test_uuid failures, but there's still another method failing.
........
  r50898 | martin.v.loewis | 2006-07-28 09:45:49 +0200 (Fri, 28 Jul 2006) | 2 lines

  Add UUID for upcoming 2.5b3.
........
  r50899 | matt.fleming | 2006-07-28 13:27:27 +0200 (Fri, 28 Jul 2006) | 3 lines

  Allow socketmodule to compile on NetBSD -current, whose bluetooth API
  differs from both Linux and FreeBSD. Accepted by Neal Norwitz.
........
  r50900 | andrew.kuchling | 2006-07-28 14:07:12 +0200 (Fri, 28 Jul 2006) | 1 line

  [Patch #1529811] Correction to description of r|* mode
........
  r50901 | andrew.kuchling | 2006-07-28 14:18:22 +0200 (Fri, 28 Jul 2006) | 1 line

  Typo fix
........
  r50902 | andrew.kuchling | 2006-07-28 14:32:43 +0200 (Fri, 28 Jul 2006) | 1 line

  Add example
........
  r50903 | andrew.kuchling | 2006-07-28 14:33:19 +0200 (Fri, 28 Jul 2006) | 1 line

  Add example
........
  r50904 | andrew.kuchling | 2006-07-28 14:45:55 +0200 (Fri, 28 Jul 2006) | 1 line

  Don't overwrite built-in name; add some blank lines for readability
........
  r50905 | andrew.kuchling | 2006-07-28 14:48:07 +0200 (Fri, 28 Jul 2006) | 1 line

  Add example.  Should I propagate this example to all the other DBM-ish modules, too?
........
  r50912 | georg.brandl | 2006-07-28 20:31:39 +0200 (Fri, 28 Jul 2006) | 3 lines

  Patch #1529686: also run test_email_codecs with regrtest.py.
........
  r50913 | georg.brandl | 2006-07-28 20:36:01 +0200 (Fri, 28 Jul 2006) | 3 lines

  Fix spelling.
........
  r50915 | thomas.heller | 2006-07-28 21:42:40 +0200 (Fri, 28 Jul 2006) | 3 lines

  Remove a useless XXX comment.
  Cosmetic changes to the code so that the #ifdef _UNICODE block
  doesn't mess up emacs code formatting.
........
  r50916 | phillip.eby | 2006-07-28 23:12:07 +0200 (Fri, 28 Jul 2006) | 5 lines

  Bug #1529871: The speed enhancement patch #921466 broke Python's compliance
  with PEP 302.  This was fixed by adding an ``imp.NullImporter`` type that is
  used in ``sys.path_importer_cache`` to cache non-directory paths and avoid
  excessive filesystem operations during imports.
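
  Roughly, the effect is this (a sketch; the path string is made up):

      import imp, sys

      ni = imp.NullImporter("no-such-path-entry")   # anything that is not a directory
      ni.find_module("whatever")                    # -> None, always
      # after a failed lookup, sys.path_importer_cache maps such sys.path
      # entries to NullImporter instances instead of re-scanning them on
      # every import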
........
  r50917 | phillip.eby | 2006-07-28 23:31:54 +0200 (Fri, 28 Jul 2006) | 2 lines

  Fix svn merge spew.
........
  r50918 | thomas.heller | 2006-07-28 23:43:20 +0200 (Fri, 28 Jul 2006) | 4 lines

  Patch #1529514: More openbsd platforms for ctypes.
  Regenerated Modules/_ctypes/libffi/configure with autoconf 2.59.

  Approved by Neal.
........
  r50922 | georg.brandl | 2006-07-29 10:51:21 +0200 (Sat, 29 Jul 2006) | 2 lines

  Bug #835255: The "closure" argument to new.function() is now documented.
........
  r50924 | georg.brandl | 2006-07-29 11:33:26 +0200 (Sat, 29 Jul 2006) | 3 lines

  Bug #1441397: The compiler module now recognizes module and function
  docstrings correctly as it did in Python 2.4.
........
  r50925 | georg.brandl | 2006-07-29 12:25:46 +0200 (Sat, 29 Jul 2006) | 4 lines

  Revert rev 42617, it was introduced to work around bug #1441397.
  test_compiler now passes again.
........
  r50926 | fred.drake | 2006-07-29 15:22:49 +0200 (Sat, 29 Jul 2006) | 1 line

  update target version number
........
  r50927 | andrew.kuchling | 2006-07-29 15:56:48 +0200 (Sat, 29 Jul 2006) | 1 line

  Add example
........
  r50928 | andrew.kuchling | 2006-07-29 16:04:47 +0200 (Sat, 29 Jul 2006) | 1 line

  Update URL
........
  r50930 | andrew.kuchling | 2006-07-29 16:08:15 +0200 (Sat, 29 Jul 2006) | 1 line

  Reword paragraph to match the order of the subsequent sections
........
  r50931 | andrew.kuchling | 2006-07-29 16:21:15 +0200 (Sat, 29 Jul 2006) | 1 line

  [Bug #1529157] Mention raw_input() and input(); while I'm at it, reword the description a bit
........
  r50932 | andrew.kuchling | 2006-07-29 16:42:48 +0200 (Sat, 29 Jul 2006) | 1 line

  [Bug #1519571] Document some missing functions: setup(), title(), done()
........
  r50933 | andrew.kuchling | 2006-07-29 16:43:55 +0200 (Sat, 29 Jul 2006) | 1 line

  Fix docstring punctuation
........
  r50934 | andrew.kuchling | 2006-07-29 17:10:32 +0200 (Sat, 29 Jul 2006) | 1 line

  [Bug #1414697] Change docstring of set/frozenset types to specify that the contents are unique.  Raymond, please feel free to edit or revert.
........
  r50935 | andrew.kuchling | 2006-07-29 17:35:21 +0200 (Sat, 29 Jul 2006) | 1 line

  [Bug #1530382] Document SSL.server(), .issuer() methods
........
  r50936 | andrew.kuchling | 2006-07-29 17:42:46 +0200 (Sat, 29 Jul 2006) | 1 line

  Typo fix
........
  r50937 | andrew.kuchling | 2006-07-29 17:43:13 +0200 (Sat, 29 Jul 2006) | 1 line

  Tweak wording
........
  r50938 | matt.fleming | 2006-07-29 17:55:30 +0200 (Sat, 29 Jul 2006) | 2 lines

  Fix typo
........
  r50939 | andrew.kuchling | 2006-07-29 17:57:08 +0200 (Sat, 29 Jul 2006) | 6 lines

  [Bug #1528258] Mention that the 'data' argument can be None.

  The constructor docs referred the reader to the add_data() method's docs,
  but they weren't very helpful.  I've simply copied an earlier explanation
  of 'data' that's more useful.
........
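Assuming the entry above refers to urllib2.Request (whose add_data() method it mentions), a small sketch of the documented behaviour: with data=None the request is a plain GET, and passing a string switches it to a POST.

    import urllib2

    get_req = urllib2.Request("http://example.com/", data=None)        # plain GET
    post_req = urllib2.Request("http://example.com/", data="a=1&b=2")  # becomes a POST
    print get_req.get_method(), post_req.get_method()                  # GET POST
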
  r50940 | andrew.kuchling | 2006-07-29 18:08:40 +0200 (Sat, 29 Jul 2006) | 1 line

  Set bug/patch count.  Take a bow, everyone!
........
  r50941 | fred.drake | 2006-07-29 18:56:15 +0200 (Sat, 29 Jul 2006) | 18 lines

  expunge the xmlcore changes:
    41667, 41668 - initial switch to xmlcore
    47044        - mention of xmlcore in What's New
    50687        - mention of xmlcore in the library reference

  re-apply xmlcore changes to xml:
    41674        - line ending changes (re-applied manually), directory props
    41677        - add cElementTree wrapper
    41678        - PSF licensing for etree
    41812        - whitespace normalization
    42724        - fix svn:eol-style settings
    43681, 43682 - remove Python version-compatibility cruft from minidom
    46773        - fix encoding of \r\n\t in attr values in saxutils
    47269        - added XMLParser alias for cElementTree compatibility

  additional tests were added in Lib/test/test_sax.py that failed with
  the xmlcore changes; these relate to SF bugs #1511497, #1513611
........
  r50942 | andrew.kuchling | 2006-07-29 20:14:07 +0200 (Sat, 29 Jul 2006) | 17 lines

  Reorganize the docs for 'file' and 'open()' after some discussion with Fred.

  We want to encourage users to write open() when opening a file, but
  open() was described with a single paragraph and
  'file' had lots of explanation of the mode and bufsize arguments.

  I've shrunk the description of 'file' to cross-reference to the 'File
  objects' section, and to open() for an explanation of the arguments.

  open() now has all the paragraphs about the mode string.  The bufsize
  argument was moved up so that it isn't buried at the end; now there's
  1 paragraph on mode, 1 on bufsize, and then 3 more on mode.  Various
  other edits and rearrangements were made in the process.

  It's probably best to read the final text and not to try to make sense
  of the diffs.
........
  r50943 | fred.drake | 2006-07-29 20:19:19 +0200 (Sat, 29 Jul 2006) | 1 line

  restore test un-intentionally removed in the xmlcore purge (revision 50941)
........
  r50944 | fred.drake | 2006-07-29 20:33:29 +0200 (Sat, 29 Jul 2006) | 3 lines

  make the reference to older versions of the documentation a link
  to the right page on python.org
........
  r50945 | fred.drake | 2006-07-29 21:09:01 +0200 (Sat, 29 Jul 2006) | 1 line

  document the footnote usage pattern
........
  r50947 | fred.drake | 2006-07-29 21:14:10 +0200 (Sat, 29 Jul 2006) | 1 line

  emphasize an oddball nuance of LaTeX comment syntax
........
  r50948 | andrew.kuchling | 2006-07-29 21:24:04 +0200 (Sat, 29 Jul 2006) | 1 line

  [Patch #1490989 from Skip Montanaro]  Mention debugging builds in the API documentation.  I've changed Skip's patch to point to Misc/SpecialBuilds and fiddled with the markup a bit.
........
  r50949 | neal.norwitz | 2006-07-29 21:29:35 +0200 (Sat, 29 Jul 2006) | 6 lines

  Disable these tests until they are reliable across platforms.
  These problems may mask more important, real problems.

  One or both methods are known to fail on: Solaris, OpenBSD, Debian, Ubuntu.
  They pass on Windows and some Linux boxes.
........
  r50950 | andrew.kuchling | 2006-07-29 21:50:37 +0200 (Sat, 29 Jul 2006) | 1 line

  [Patch #1068277] Clarify that os.path.exists() can return False depending on permissions.  Fred approved committing this patch in December 2004!
........
  r50952 | fred.drake | 2006-07-29 22:04:42 +0200 (Sat, 29 Jul 2006) | 6 lines

  SF bug #1193966: Weakref types documentation misplaced

  The information about supporting weakrefs with types defined in C extensions
  is moved to the Extending & Embedding manual.  Py_TPFLAGS_HAVE_WEAKREFS is
  no longer mentioned since it is part of Py_TPFLAGS_DEFAULT.
........
  r50953 | skip.montanaro | 2006-07-29 22:06:05 +0200 (Sat, 29 Jul 2006) | 4 lines

  Add a comment to the csv reader documentation that explains why the
  treatment of newlines changed in 2.5.  Pulled almost verbatim from a comment
  by Andrew McNamara in <http://python.org/sf/1465014>.
........
  r50954 | neal.norwitz | 2006-07-29 22:20:52 +0200 (Sat, 29 Jul 2006) | 3 lines

  If the executable doesn't exist, there's no reason to try to start it.
  This prevents garbage about command not found being printed on Solaris.
........
  r50955 | fred.drake | 2006-07-29 22:21:25 +0200 (Sat, 29 Jul 2006) | 1 line

  fix minor markup error that introduced extra punctuation
........
  r50957 | neal.norwitz | 2006-07-29 22:37:08 +0200 (Sat, 29 Jul 2006) | 3 lines

  Disable test_getnode too, since this is also unreliable.
........
  r50958 | andrew.kuchling | 2006-07-29 23:27:12 +0200 (Sat, 29 Jul 2006) | 1 line

  Follow TeX's conventions for hyphens
........
  r50959 | andrew.kuchling | 2006-07-29 23:30:21 +0200 (Sat, 29 Jul 2006) | 1 line

  Fix case for 'Unix'
........
  r50960 | fred.drake | 2006-07-30 01:34:57 +0200 (Sun, 30 Jul 2006) | 1 line

  markup cleanups
........
  r50961 | andrew.kuchling | 2006-07-30 02:27:34 +0200 (Sun, 30 Jul 2006) | 1 line

  Minor typo fixes
........
  r50962 | andrew.kuchling | 2006-07-30 02:37:56 +0200 (Sun, 30 Jul 2006) | 1 line

  [Bug #793553] Correct description of keyword arguments for SSL authentication
........
  r50963 | tim.peters | 2006-07-30 02:58:15 +0200 (Sun, 30 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50964 | fred.drake | 2006-07-30 05:03:43 +0200 (Sun, 30 Jul 2006) | 1 line

  lots of markup nits, most commonly Unix/unix --> \UNIX
........
  r50965 | fred.drake | 2006-07-30 07:41:28 +0200 (Sun, 30 Jul 2006) | 1 line

  update information on wxPython, from Robin Dunn
........
  r50966 | fred.drake | 2006-07-30 07:49:49 +0200 (Sun, 30 Jul 2006) | 4 lines

  remove possibly-outdated comment on what GUI toolkit is most commonly used;
  it is hard to know whether this is right, and it does not add valuable reference information
  at any rate
........
  r50967 | fred.drake | 2006-07-30 07:55:39 +0200 (Sun, 30 Jul 2006) | 3 lines

  - remove yet another reference to how commonly Tkinter is (thought to be) used
  - fix an internal section reference
........
  r50968 | neal.norwitz | 2006-07-30 08:53:31 +0200 (Sun, 30 Jul 2006) | 4 lines

  Patch #1531113: Fix augmented assignment with yield expressions.
  Also fix a SystemError when trying to assign to yield expressions.
........
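A minimal sketch of the construct fixed above -- an augmented assignment whose right-hand side is a yield expression (PEP 342 generators):

    def running_total():
        total = 0
        while True:
            total += yield total   # augmented assignment with a yield expression

    gen = running_total()
    gen.next()           # prime the generator; yields 0
    print gen.send(5)    # 5
    print gen.send(7)    # 12
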
  r50969 | neal.norwitz | 2006-07-30 08:55:48 +0200 (Sun, 30 Jul 2006) | 5 lines

  Add PyErr_WarnEx() so C code can pass the stacklevel to warnings.warn().
  This provides the proper warning for struct.pack().
  PyErr_Warn() is now deprecated in favor of PyErr_WarnEx().
  As mentioned by Tim Peters on python-dev.
........
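PyErr_WarnEx() is the C-level counterpart of passing 'stacklevel' to warnings.warn(). A pure-Python sketch of what that stacklevel buys (the function names here are made up):

    import warnings

    def old_api():
        # stacklevel=2 attributes the warning to old_api()'s caller,
        # which is what C code can now request via PyErr_WarnEx().
        warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)

    def user_code():
        old_api()

    user_code()   # the warning report points at the call inside user_code()
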
  r50970 | neal.norwitz | 2006-07-30 08:57:04 +0200 (Sun, 30 Jul 2006) | 3 lines

  Bug #1515471: string.replace() accepts character buffers again.
  Pass the char* and size around rather than PyObject's.
........
  r50971 | neal.norwitz | 2006-07-30 08:59:13 +0200 (Sun, 30 Jul 2006) | 1 line

  Whitespace normalization
........
  r50973 | georg.brandl | 2006-07-30 12:53:32 +0200 (Sun, 30 Jul 2006) | 3 lines

  Clarify that __op__ methods must return NotImplemented if they don't support the operation.
........
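A short illustration of the rule being documented: an __op__ method that cannot handle its operand should return NotImplemented so the interpreter can try the reflected method or fall back to TypeError (the class name is made up):

    class Metres(object):
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            if not isinstance(other, Metres):
                return NotImplemented   # let Python try other.__radd__, else TypeError
            return Metres(self.value + other.value)
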
  r50974 | georg.brandl | 2006-07-30 13:07:23 +0200 (Sun, 30 Jul 2006) | 3 lines

  Bug #1002398: The documentation for os.path.sameopenfile now correctly
  refers to file descriptors, not file objects.
........
  r50977 | martin.v.loewis | 2006-07-30 15:00:31 +0200 (Sun, 30 Jul 2006) | 3 lines

  Don't copy directory stat times in shutil.copytree on Windows
  Fixes #1525866.
........
  r50978 | martin.v.loewis | 2006-07-30 15:14:05 +0200 (Sun, 30 Jul 2006) | 3 lines

  Base __version__ on sys.version_info, as distutils is
  no longer maintained separately.
........
  r50979 | martin.v.loewis | 2006-07-30 15:27:31 +0200 (Sun, 30 Jul 2006) | 3 lines

  Mention Cygwin in distutils error message about a missing VS 2003.
  Fixes #1257728.
........
  r50982 | martin.v.loewis | 2006-07-30 16:09:47 +0200 (Sun, 30 Jul 2006) | 5 lines

  Drop usage of test -e in configure as it is not portable.
  Fixes #1439538
  Will backport to 2.4
  Also regenerate pyconfig.h.in.
........
  r50984 | georg.brandl | 2006-07-30 18:20:10 +0200 (Sun, 30 Jul 2006) | 3 lines

  Fix makefile changes for python-config.
........
  r50985 | george.yoshida | 2006-07-30 18:37:37 +0200 (Sun, 30 Jul 2006) | 2 lines

  Rename struct.pack_to to struct.pack_into as changed in revision 46642.
........
  r50986 | george.yoshida | 2006-07-30 18:41:30 +0200 (Sun, 30 Jul 2006) | 2 lines

  Typo fix
........
  r50987 | neal.norwitz | 2006-07-30 21:18:13 +0200 (Sun, 30 Jul 2006) | 1 line

  Add some asserts and update comments
........
  r50988 | neal.norwitz | 2006-07-30 21:18:38 +0200 (Sun, 30 Jul 2006) | 1 line

  Verify that the signal handlers were really called
........
  r50989 | neal.norwitz | 2006-07-30 21:20:42 +0200 (Sun, 30 Jul 2006) | 3 lines

  Try to prevent hangs on Tru64/Alpha buildbot.  I'm not certain this will help
  and may need to be reverted if it causes problems.
........
  r50990 | georg.brandl | 2006-07-30 22:18:51 +0200 (Sun, 30 Jul 2006) | 2 lines

  Bug #1531349: right <-> left glitch in __rop__ description.
........
  r50992 | tim.peters | 2006-07-31 03:46:03 +0200 (Mon, 31 Jul 2006) | 2 lines

  Whitespace normalization.
........
  r50993 | andrew.mcnamara | 2006-07-31 04:27:48 +0200 (Mon, 31 Jul 2006) | 2 lines

  Redo the comment about the 2.5 change in quoted-newline handling.
........
  r50994 | tim.peters | 2006-07-31 04:40:23 +0200 (Mon, 31 Jul 2006) | 10 lines

  ZipFile.close():  Killed one of the struct.pack deprecation
  warnings on Win32.

  Also added an XXX about the line:

                  pos3 = self.fp.tell()

  `pos3` is never referenced, and I have no idea what the code
  intended to do instead.
........
  r50996 | tim.peters | 2006-07-31 04:53:03 +0200 (Mon, 31 Jul 2006) | 8 lines

  ZipFile.close():  Kill the other struct.pack deprecation
  warning on Windows.

  Afraid I can't detect a pattern to when the pack formats decide
  to use a signed or unsigned format code -- appears nearly
  arbitrary to my eyes.  So I left all the pack formats alone and
  changed the special-case data values instead.
........
  r50997 | skip.montanaro | 2006-07-31 05:09:45 +0200 (Mon, 31 Jul 2006) | 1 line

  minor tweaks
........
  r50998 | skip.montanaro | 2006-07-31 05:11:11 +0200 (Mon, 31 Jul 2006) | 1 line

  minor tweaks
........
  r50999 | andrew.kuchling | 2006-07-31 14:20:24 +0200 (Mon, 31 Jul 2006) | 1 line

  Add refcounts for PyErr_WarnEx
........
  r51000 | andrew.kuchling | 2006-07-31 14:39:05 +0200 (Mon, 31 Jul 2006) | 9 lines

  Document PyErr_WarnEx.  (Bad Neal!  No biscuit!)

  Is the explanation of the 'stacklevel' parameter clear?  Please feel free
  to edit it.

  I don't have LaTeX installed on this machine, so haven't verified that the
  markup is correct.  Will check tonight, or maybe the automatic doc build will
  tell me.
........
  r51001 | andrew.kuchling | 2006-07-31 14:52:26 +0200 (Mon, 31 Jul 2006) | 1 line

  Add PyErr_WarnEx()
........
  r51002 | andrew.kuchling | 2006-07-31 15:18:27 +0200 (Mon, 31 Jul 2006) | 1 line

  Mention csv newline changes
........
  r51003 | andrew.kuchling | 2006-07-31 17:22:58 +0200 (Mon, 31 Jul 2006) | 1 line

  Typo fix
........
  r51004 | andrew.kuchling | 2006-07-31 17:23:43 +0200 (Mon, 31 Jul 2006) | 1 line

  Remove reference to  notation
........
  r51005 | georg.brandl | 2006-07-31 18:00:34 +0200 (Mon, 31 Jul 2006) | 3 lines

  Fix function name.
........
  r51006 | andrew.kuchling | 2006-07-31 18:10:24 +0200 (Mon, 31 Jul 2006) | 1 line

  [Bug #1514540] Instead of putting the standard types in a section, put them in a chapter of their own.  This means string methods will now show up in the ToC.  (Should the types come before or after the functions+exceptions+constants chapter?  I've put them after, for now.)
........
  r51007 | andrew.kuchling | 2006-07-31 18:22:05 +0200 (Mon, 31 Jul 2006) | 1 line

  [Bug #848556] Remove \d* from second alternative to avoid exponential case when repeating match
........
  r51008 | andrew.kuchling | 2006-07-31 18:27:57 +0200 (Mon, 31 Jul 2006) | 1 line

  Update list of files; fix a typo
........
  r51013 | andrew.kuchling | 2006-08-01 18:24:30 +0200 (Tue, 01 Aug 2006) | 1 line

  typo fix
........
  r51018 | thomas.heller | 2006-08-01 18:54:43 +0200 (Tue, 01 Aug 2006) | 2 lines

  Fix a potential segfault and various potential refcount leaks
  in the cast() function.
........
  r51020 | thomas.heller | 2006-08-01 19:46:10 +0200 (Tue, 01 Aug 2006) | 1 line

  Minimal useful docstring for CopyComPointer.
........
  r51021 | andrew.kuchling | 2006-08-01 20:16:15 +0200 (Tue, 01 Aug 2006) | 8 lines

  [Patch #1520905] Attempt to suppress core file created by test_subprocess.py.
  Patch by Douglas Greiman.

  The test_run_abort() testcase produces a core file on Unix systems,
  even though the test is successful. This can be confusing or alarming
  to someone who runs 'make test' and then finds that the Python
  interpreter apparently crashed.
........
  r51023 | georg.brandl | 2006-08-01 20:49:24 +0200 (Tue, 01 Aug 2006) | 3 lines

  os.urandom no longer masks unrelated exceptions like SystemExit or
  KeyboardInterrupt.
........
  r51025 | thomas.heller | 2006-08-01 21:14:15 +0200 (Tue, 01 Aug 2006) | 2 lines

  Speed up PyType_stgdict and PyObject_stgdict.
........
  r51027 | ronald.oussoren | 2006-08-01 22:30:31 +0200 (Tue, 01 Aug 2006) | 3 lines

  Make sure the postinstall action that optionally updates the user's profile
  on MacOS X actually works correctly in all cases.
........
  r51028 | ronald.oussoren | 2006-08-01 23:00:57 +0200 (Tue, 01 Aug 2006) | 4 lines

  This fixes bug #1527397: PythonLauncher runs scripts with the wrong working
  directory. It also fixes a bug where PythonLauncher failed to launch scripts
  when the scriptname (or the path to the script) contains quotes.
........
  r51031 | tim.peters | 2006-08-02 05:27:46 +0200 (Wed, 02 Aug 2006) | 2 lines

  Whitespace normalization.
........
  r51032 | tim.peters | 2006-08-02 06:12:36 +0200 (Wed, 02 Aug 2006) | 19 lines

  Try to squash struct.pack warnings on the "amd64 gentoo trunk"
  buildbot (& possibly other 64-bit boxes) during test_gzip.

  The native zlib crc32 function returns an unsigned 32-bit integer,
  which the Python wrapper implicitly casts to C long.  Therefore the
  same crc can "look negative" on a 32-bit box but "look positive" on
  a 64-bit box.  This patch papers over that platform difference when
  writing the crc to file.

  It may be better to change the Python wrapper, either to make
  the result "look positive" on all platforms (which means it may
  have to return a Python long at times on a 32-bit box), or to
  keep the sign the same across boxes.  But that would be a visible
  change in what users see, while the current hack changes no
  visible behavior (well, apart from stopping the struct deprecation
  warning).

  Note that the module-level write32() function is no longer used.
........
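A minimal sketch of the papering-over described above: mask the crc down to its unsigned 32-bit value before packing, so 32-bit and 64-bit builds write identical bytes.

    import zlib, struct

    crc = zlib.crc32("some data")    # may come back negative on 32-bit builds
    crc &= 0xffffffffL               # same unsigned 32-bit view on every platform
    packed = struct.pack("<L", crc)  # no sign-related struct warning
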
  r51033 | neal.norwitz | 2006-08-02 06:27:11 +0200 (Wed, 02 Aug 2006) | 4 lines

  Prevent memory leak on error.

  Reported by Klocwork #36
........
  r51034 | tim.peters | 2006-08-02 07:20:08 +0200 (Wed, 02 Aug 2006) | 9 lines

  _Stream.close():  Try to kill struct.pack() warnings when
  writing the crc to file on the "PPC64 Debian trunk" buildbot
  when running test_tarfile.

  This is again a case where the native zlib crc is an unsigned
  32-bit int, but the Python wrapper implicitly casts it to
  signed C long, so that "the sign bit looks different" on
  different platforms.
........
  r51035 | ronald.oussoren | 2006-08-02 08:10:10 +0200 (Wed, 02 Aug 2006) | 2 lines

  Updated documentation for the script that builds the OSX installer.
........
  r51036 | neal.norwitz | 2006-08-02 08:14:22 +0200 (Wed, 02 Aug 2006) | 2 lines

  _PyWeakref_GetWeakrefCount() now returns a Py_ssize_t instead of long.
........
  r51037 | neal.norwitz | 2006-08-02 08:15:10 +0200 (Wed, 02 Aug 2006) | 1 line

  v is already checked for NULL, so just DECREF it
........
  r51038 | neal.norwitz | 2006-08-02 08:19:19 +0200 (Wed, 02 Aug 2006) | 1 line

  Let us know when there was a problem and the child had to kill the parent
........
  r51039 | neal.norwitz | 2006-08-02 08:46:21 +0200 (Wed, 02 Aug 2006) | 5 lines

  Patch #1519025 and bug #926423: If a KeyboardInterrupt occurs during
  a socket operation on a socket with a timeout, the exception will be
  caught correctly.  Previously, the exception was not caught.
........
  r51040 | neal.norwitz | 2006-08-02 09:09:32 +0200 (Wed, 02 Aug 2006) | 1 line

  Add some explanation about Klocwork and Coverity static analysis
........
  r51041 | anthony.baxter | 2006-08-02 09:43:09 +0200 (Wed, 02 Aug 2006) | 1 line

  pre-release machinations
........
  r51043 | thomas.heller | 2006-08-02 13:35:31 +0200 (Wed, 02 Aug 2006) | 4 lines

  A few more words about what ctypes does.
  Document that using the wrong calling convention can also raise
  'ValueError: Procedure called with the wrong number of arguments'.
........
  r51045 | thomas.heller | 2006-08-02 14:00:13 +0200 (Wed, 02 Aug 2006) | 1 line

  Fix a mistake.
........
  r51046 | martin.v.loewis | 2006-08-02 15:53:55 +0200 (Wed, 02 Aug 2006) | 3 lines

  Correction of patch #1455898: In the mbcs decoder, set final=False
  for stream decoder, but final=True for the decode function.
........
  r51049 | tim.peters | 2006-08-02 20:19:35 +0200 (Wed, 02 Aug 2006) | 2 lines

  Add missing svn:eol-style property to text files.
........
  r51079 | neal.norwitz | 2006-08-04 06:50:21 +0200 (Fri, 04 Aug 2006) | 3 lines

  Bug #1531405, format_exception no longer raises an exception if
  str(exception) raised an exception.
........
  r51080 | neal.norwitz | 2006-08-04 06:58:47 +0200 (Fri, 04 Aug 2006) | 11 lines

  Bug #1191458: tracing over for loops now produces a line event
  on each iteration.  I'm not positive this is the best way to handle
  this.  I'm also not sure that there aren't other cases where
  the lnotab is generated incorrectly.  It would be great if people
  that use pdb or tracing could test heavily.

  Also:
   * Remove dead/duplicated code that wasn't used/necessary
     because we already handled the docstring prior to entering the loop.
   * add some debugging code into the compiler (#if 0'd out).
........
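A small way to observe the change described above -- install a trace function and watch a 'line' event fire on every iteration of the loop, not only the first time the 'for' line is reached:

    import sys

    def show_lines(frame, event, arg):
        if event == "line":
            print "line", frame.f_lineno
        return show_lines

    def loop():
        for i in range(3):
            pass

    sys.settrace(show_lines)
    loop()
    sys.settrace(None)
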
  r51081 | neal.norwitz | 2006-08-04 07:09:28 +0200 (Fri, 04 Aug 2006) | 4 lines

  Bug #1333982: string/number constants were inappropriately stored
  in the byte code and co_consts even if they were not used, i.e.
  immediately popped off the stack.
........
  r51082 | neal.norwitz | 2006-08-04 07:12:19 +0200 (Fri, 04 Aug 2006) | 1 line

  There were really two issues
........
  r51084 | fred.drake | 2006-08-04 07:17:21 +0200 (Fri, 04 Aug 2006) | 1 line

  SF patch #1534048 (bug #1531003): fix typo in error message
........
  r51085 | gregory.p.smith | 2006-08-04 07:17:47 +0200 (Fri, 04 Aug 2006) | 3 lines

  fix typos
........
  r51087 | georg.brandl | 2006-08-04 08:03:53 +0200 (Fri, 04 Aug 2006) | 3 lines

  Fix bug caused by first decrefing, then increfing.
........
  r51109 | neil.schemenauer | 2006-08-04 18:20:30 +0200 (Fri, 04 Aug 2006) | 5 lines

  Fix the 'compiler' package to generate correct code for MAKE_CLOSURE.
  In the 2.5 development cycle, MAKE_CLOSURE was changed to take free
  variables as a tuple rather than as individual items on the stack.
  Closes patch #1534084.
........
  r51110 | georg.brandl | 2006-08-04 20:03:37 +0200 (Fri, 04 Aug 2006) | 3 lines

  Change fix for segfaulting property(), add a NEWS entry and a test.
........
  r51111 | georg.brandl | 2006-08-04 20:07:34 +0200 (Fri, 04 Aug 2006) | 3 lines

  Better fix for bug #1531405, not executing str(value) twice.
........
  r51112 | thomas.heller | 2006-08-04 20:17:40 +0200 (Fri, 04 Aug 2006) | 1 line

  On Windows, make PyErr_Warn an exported function again.
........
  r51113 | thomas.heller | 2006-08-04 20:57:34 +0200 (Fri, 04 Aug 2006) | 4 lines

  Fix #1530448 - fix ctypes build failure on solaris 10.

  The '-mimpure-text' linker flag is required when linking _ctypes.so.
........
  r51114 | thomas.heller | 2006-08-04 21:49:31 +0200 (Fri, 04 Aug 2006) | 3 lines

  Fix #1534738: win32 debug version of _msi must be _msi_d.pyd, not _msi.pyd.
  Fix the name of the pdb file as well.
........
  r51115 | andrew.kuchling | 2006-08-04 22:37:43 +0200 (Fri, 04 Aug 2006) | 1 line

  Typo fixes
........
  r51116 | andrew.kuchling | 2006-08-04 23:10:03 +0200 (Fri, 04 Aug 2006) | 1 line

  Fix mangled sentence
........
  r51118 | tim.peters | 2006-08-05 00:00:35 +0200 (Sat, 05 Aug 2006) | 2 lines

  Whitespace normalization.
........
  r51119 | bob.ippolito | 2006-08-05 01:59:21 +0200 (Sat, 05 Aug 2006) | 5 lines

  Fix #1530559, struct.pack raises TypeError where it used to convert.
  Passing float arguments to struct.pack when integers are expected
  now triggers a DeprecationWarning.
........
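A sketch of how to surface the DeprecationWarning described above as an exception; this reflects the 2.5 behaviour noted in the entry, and later releases turn the float-for-int case into a hard error, so both are caught here:

    import struct, warnings

    warnings.simplefilter("error", DeprecationWarning)  # turn the warning into an error
    try:
        struct.pack("i", 3.7)    # float where an integer is expected
    except (DeprecationWarning, struct.error), exc:
        print "rejected:", exc
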
  r51123 | georg.brandl | 2006-08-05 08:10:54 +0200 (Sat, 05 Aug 2006) | 3 lines

  Patch #1534922: correct and enhance unittest docs.
........
  r51126 | georg.brandl | 2006-08-06 09:06:33 +0200 (Sun, 06 Aug 2006) | 2 lines

  Bug #1535182: really test the xreadlines() method of bz2 objects.
........
  r51128 | georg.brandl | 2006-08-06 09:26:21 +0200 (Sun, 06 Aug 2006) | 4 lines

  Bug #1535081: A leading underscore has been added to the names of
  the md5 and sha modules, so add it in Modules/Setup.dist too.
........
  r51129 | georg.brandl | 2006-08-06 10:23:54 +0200 (Sun, 06 Aug 2006) | 3 lines

  Bug #1535165: fixed a segfault in input() and raw_input() when
  sys.stdin is closed.
........
  r51131 | georg.brandl | 2006-08-06 11:17:16 +0200 (Sun, 06 Aug 2006) | 2 lines

  Don't produce output in test_builtin.
........
  r51133 | andrew.macintyre | 2006-08-06 14:37:03 +0200 (Sun, 06 Aug 2006) | 4 lines

  test_threading now skips testing alternate thread stack sizes on
  platforms that don't support changing thread stack size.
........
  r51134 | andrew.kuchling | 2006-08-07 00:07:04 +0200 (Mon, 07 Aug 2006) | 2 lines

  [Patch #1464056] Ensure that we use the panelw library when linking with ncursesw.
  Once I see how the buildbots react, I'll backport this to 2.4.
........
  r51137 | georg.brandl | 2006-08-08 13:52:34 +0200 (Tue, 08 Aug 2006) | 3 lines

  webbrowser: Silence stderr output if no gconftool or gnome browser found
........
  r51138 | georg.brandl | 2006-08-08 13:56:21 +0200 (Tue, 08 Aug 2006) | 7 lines

  Remove "non-mapping" and "non-sequence" from TypeErrors raised by
  PyMapping_Size and PySequence_Size.

  Because len() tries the sequence size first, then the mapping size, it will
  always raise a "non-mapping object has no len" error, which is confusing.
........
  r51139 | thomas.heller | 2006-08-08 19:37:00 +0200 (Tue, 08 Aug 2006) | 3 lines

  memcmp() can return values other than -1, 0, and +1 but tp_compare
  must not.
........
  r51140 | thomas.heller | 2006-08-08 19:39:20 +0200 (Tue, 08 Aug 2006) | 1 line

  Remove accidentally committed, duplicated test.
........
  r51147 | andrew.kuchling | 2006-08-08 20:50:14 +0200 (Tue, 08 Aug 2006) | 1 line

  Reword paragraph to clarify
........
  r51148 | andrew.kuchling | 2006-08-08 20:56:08 +0200 (Tue, 08 Aug 2006) | 1 line

  Move obmalloc item into C API section
........
  r51149 | andrew.kuchling | 2006-08-08 21:00:14 +0200 (Tue, 08 Aug 2006) | 1 line

  'Other changes' section now has only one item; move the item elsewhere and remove the section
........
  r51150 | andrew.kuchling | 2006-08-08 21:00:34 +0200 (Tue, 08 Aug 2006) | 1 line

  Bump version number
........
  r51151 | georg.brandl | 2006-08-08 22:11:22 +0200 (Tue, 08 Aug 2006) | 2 lines

  Bug #1536828: typo: TypeType should have been StringType.
........
  r51153 | georg.brandl | 2006-08-08 22:13:13 +0200 (Tue, 08 Aug 2006) | 2 lines

  Bug #1536660: separate two words.
........
  r51155 | georg.brandl | 2006-08-08 22:48:10 +0200 (Tue, 08 Aug 2006) | 3 lines

  ``str`` is now the same object as ``types.StringType``.
........
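The identity being documented above can be checked directly:

    import types
    print str is types.StringType   # True
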
  r51156 | tim.peters | 2006-08-09 02:52:26 +0200 (Wed, 09 Aug 2006) | 2 lines

  Whitespace normalization.
........
  r51158 | georg.brandl | 2006-08-09 09:03:22 +0200 (Wed, 09 Aug 2006) | 4 lines

  Introduce an upper bound on tuple nesting depth in
  C argument format strings; fixes rest of #1523610.
........
  r51160 | martin.v.loewis | 2006-08-09 09:57:39 +0200 (Wed, 09 Aug 2006) | 4 lines

  __hash__ may now return long int; the final hash
    value is obtained by invoking hash on the long int.
  Fixes #1536021.
........
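A short sketch of the new rule: __hash__ may return a long, and the interpreter derives the final hash by hashing that long (the class name is made up):

    class BigKey(object):
        def __hash__(self):
            return 10 ** 20                      # returning a long is now acceptable

    print hash(BigKey()) == hash(10 ** 20)       # True: final hash = hash(the long)
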
  r51168 | andrew.kuchling | 2006-08-09 15:03:41 +0200 (Wed, 09 Aug 2006) | 1 line

  [Bug #1536021] Mention __hash__ change
........
  r51169 | andrew.kuchling | 2006-08-09 15:57:05 +0200 (Wed, 09 Aug 2006) | 1 line

  [Patch #1534027] Add notes on locale module changes
........
  r51170 | andrew.kuchling | 2006-08-09 16:05:35 +0200 (Wed, 09 Aug 2006) | 1 line

  Add missing 'self' parameters
........
  r51171 | andrew.kuchling | 2006-08-09 16:06:19 +0200 (Wed, 09 Aug 2006) | 1 line

  Reindent code
........
  r51172 | armin.rigo | 2006-08-09 16:55:26 +0200 (Wed, 09 Aug 2006) | 2 lines

  Fix and test for an infinite C recursion.
........
  r51173 | ronald.oussoren | 2006-08-09 16:56:33 +0200 (Wed, 09 Aug 2006) | 2 lines

  It's unlikely that future versions will require _POSIX_C_SOURCE
........
  r51178 | armin.rigo | 2006-08-09 17:37:26 +0200 (Wed, 09 Aug 2006) | 2 lines

  Concatenation on a long string breaks (SF #1526585).
........
  r51180 | kurt.kaiser | 2006-08-09 18:46:15 +0200 (Wed, 09 Aug 2006) | 8 lines

  1.  When used w/o subprocess, all exceptions were preceded by an error
      message claiming they were IDLE internal errors (since 1.2a1).
  2.  Add Ronald Oussoren to CREDITS

  M    NEWS.txt
  M    PyShell.py
  M    CREDITS.txt
........
  r51181 | kurt.kaiser | 2006-08-09 19:47:15 +0200 (Wed, 09 Aug 2006) | 4 lines

  As a slight enhancement to the previous checkin, improve the
  internal error reporting by moving the message to the IDLE console.
........
  r51182 | andrew.kuchling | 2006-08-09 20:23:14 +0200 (Wed, 09 Aug 2006) | 1 line

  Typo fix
........
  r51183 | kurt.kaiser | 2006-08-09 22:34:46 +0200 (Wed, 09 Aug 2006) | 2 lines

  ToggleTab dialog was setting indent to 8 even if cancelled (since 1.2a1).
........
  r51184 | martin.v.loewis | 2006-08-10 01:42:18 +0200 (Thu, 10 Aug 2006) | 2 lines

  Add some commentary on -mimpure-text.
........
  r51185 | tim.peters | 2006-08-10 02:58:49 +0200 (Thu, 10 Aug 2006) | 2 lines

  Add missing svn:eol-style property to text files.
........
  r51186 | kurt.kaiser | 2006-08-10 03:41:17 +0200 (Thu, 10 Aug 2006) | 2 lines

  Changing tokenize (39046) to detect dedent broke tabnanny check (since 1.2a1)
........
  r51187 | tim.peters | 2006-08-10 05:01:26 +0200 (Thu, 10 Aug 2006) | 13 lines

  test_copytree_simple():  This was leaving behind two new temp
  directories each time it ran, at least on Windows.

  Several changes:  explicitly closed all files; wrapped long
  lines; stopped suppressing errors when removing a file or
  directory fails (removing /shouldn't/ fail!); and changed
  what appeared to be incorrect usage of os.removedirs() (that
  doesn't remove empty directories at and /under/ the given
  path; instead it must be given an empty leaf directory and
  then deletes empty directories moving /up/ the path -- could
  be that the conceptually simpler shutil.rmtree() was actually
  intended here).
........
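To make the os.removedirs() point above concrete -- it expects an empty leaf directory and prunes empty parents upward, whereas shutil.rmtree() deletes a whole tree -- a minimal sketch:

    import os, shutil, tempfile

    base = tempfile.mkdtemp()
    leaf = os.path.join(base, "a", "b", "c")
    os.makedirs(leaf)

    # removedirs() needs an empty leaf; it then removes empty parent
    # directories upward and stops silently at the first non-empty one.
    os.removedirs(leaf)

    # rmtree() would instead delete everything under (and including) base.
    shutil.rmtree(base, ignore_errors=True)   # base may already be gone here
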
diff --git a/Lib/Queue.py b/Lib/Queue.py
index 51ad354..0f80584 100644
--- a/Lib/Queue.py
+++ b/Lib/Queue.py
@@ -14,11 +14,11 @@
     pass
 
 class Queue:
-    def __init__(self, maxsize=0):
-        """Initialize a queue object with a given maximum size.
+    """Create a queue object with a given maximum size.
 
-        If maxsize is <= 0, the queue size is infinite.
-        """
+    If maxsize is <= 0, the queue size is infinite.
+    """
+    def __init__(self, maxsize=0):
         try:
             import threading
         except ImportError:
diff --git a/Lib/SimpleHTTPServer.py b/Lib/SimpleHTTPServer.py
index 089936f..fae551a 100644
--- a/Lib/SimpleHTTPServer.py
+++ b/Lib/SimpleHTTPServer.py
@@ -192,6 +192,8 @@
         else:
             return self.extensions_map['']
 
+    if not mimetypes.inited:
+        mimetypes.init() # try to read system mime.types
     extensions_map = mimetypes.types_map.copy()
     extensions_map.update({
         '': 'application/octet-stream', # Default
diff --git a/Lib/UserString.py b/Lib/UserString.py
index 473ee88..60dc34b 100755
--- a/Lib/UserString.py
+++ b/Lib/UserString.py
@@ -5,14 +5,13 @@
 Note: string objects have grown methods in Python 1.6
 This module requires Python 1.6 or later.
 """
-from types import StringTypes
 import sys
 
 __all__ = ["UserString","MutableString"]
 
 class UserString:
     def __init__(self, seq):
-        if isinstance(seq, StringTypes):
+        if isinstance(seq, basestring):
             self.data = seq
         elif isinstance(seq, UserString):
             self.data = seq.data[:]
@@ -43,12 +42,12 @@
     def __add__(self, other):
         if isinstance(other, UserString):
             return self.__class__(self.data + other.data)
-        elif isinstance(other, StringTypes):
+        elif isinstance(other, basestring):
             return self.__class__(self.data + other)
         else:
             return self.__class__(self.data + str(other))
     def __radd__(self, other):
-        if isinstance(other, StringTypes):
+        if isinstance(other, basestring):
             return self.__class__(other + self.data)
         else:
             return self.__class__(str(other) + self.data)
@@ -163,7 +162,7 @@
         start = max(start, 0); end = max(end, 0)
         if isinstance(sub, UserString):
             self.data = self.data[:start]+sub.data+self.data[end:]
-        elif isinstance(sub, StringTypes):
+        elif isinstance(sub, basestring):
             self.data = self.data[:start]+sub+self.data[end:]
         else:
             self.data =  self.data[:start]+str(sub)+self.data[end:]
@@ -175,7 +174,7 @@
     def __iadd__(self, other):
         if isinstance(other, UserString):
             self.data += other.data
-        elif isinstance(other, StringTypes):
+        elif isinstance(other, basestring):
             self.data += other
         else:
             self.data += str(other)
diff --git a/Lib/_MozillaCookieJar.py b/Lib/_MozillaCookieJar.py
index 1776b93..4fd6de3 100644
--- a/Lib/_MozillaCookieJar.py
+++ b/Lib/_MozillaCookieJar.py
@@ -63,8 +63,7 @@
                 if line.endswith("\n"): line = line[:-1]
 
                 # skip comments and blank lines XXX what is $ for?
-                if (line.strip().startswith("#") or
-                    line.strip().startswith("$") or
+                if (line.strip().startswith(("#", "$")) or
                     line.strip() == ""):
                     continue
 
diff --git a/Lib/binhex.py b/Lib/binhex.py
index 4f3882a..0f3e3c4 100644
--- a/Lib/binhex.py
+++ b/Lib/binhex.py
@@ -44,22 +44,14 @@
 
 #
 # Workarounds for non-mac machines.
-if os.name == 'mac':
-    import macfs
-    import MacOS
-    try:
-        openrf = MacOS.openrf
-    except AttributeError:
-        # Backward compatibility
-        openrf = open
-
-    def FInfo():
-        return macfs.FInfo()
+try:
+    from Carbon.File import FSSpec, FInfo
+    from MacOS import openrf
 
     def getfileinfo(name):
-        finfo = macfs.FSSpec(name).GetFInfo()
+        finfo = FSSpec(name).FSpGetFInfo()
         dir, file = os.path.split(name)
-        # XXXX Get resource/data sizes
+        # XXX Get resource/data sizes
         fp = open(name, 'rb')
         fp.seek(0, 2)
         dlen = fp.tell()
@@ -75,7 +67,7 @@
             mode = '*' + mode[0]
         return openrf(name, mode)
 
-else:
+except ImportError:
     #
     # Glue code for non-macintosh usage
     #
@@ -183,7 +175,7 @@
             ofname = ofp
             ofp = open(ofname, 'w')
             if os.name == 'mac':
-                fss = macfs.FSSpec(ofname)
+                fss = FSSpec(ofname)
                 fss.SetCreatorType('BnHq', 'TEXT')
         ofp.write('(This file must be converted with BinHex 4.0)\n\n:')
         hqxer = _Hqxcoderengine(ofp)
@@ -486,7 +478,7 @@
     if not out:
         out = ifp.FName
     if os.name == 'mac':
-        ofss = macfs.FSSpec(out)
+        ofss = FSSpec(out)
         out = ofss.as_pathname()
 
     ofp = open(out, 'wb')
@@ -519,6 +511,7 @@
 
 def _test():
     if os.name == 'mac':
+        import macfs
         fss, ok = macfs.PromptGetFile('File to convert:')
         if not ok:
             sys.exit(0)
diff --git a/Lib/bsddb/__init__.py b/Lib/bsddb/__init__.py
index 90ed362..cf32668 100644
--- a/Lib/bsddb/__init__.py
+++ b/Lib/bsddb/__init__.py
@@ -33,7 +33,10 @@
 #----------------------------------------------------------------------
 
 
-"""Support for BerkeleyDB 3.2 through 4.2.
+"""Support for BerkeleyDB 3.3 through 4.4 with a simple interface.
+
+For the full featured object oriented interface use the bsddb.db module
+instead.  It mirrors the Sleepycat BerkeleyDB C API.
 """
 
 try:
@@ -43,8 +46,10 @@
         # python as bsddb._bsddb.
         import _pybsddb
         _bsddb = _pybsddb
+        from bsddb3.dbutils import DeadlockWrap as _DeadlockWrap
     else:
         import _bsddb
+        from bsddb.dbutils import DeadlockWrap as _DeadlockWrap
 except ImportError:
     # Remove ourselves from sys.modules
     import sys
@@ -70,7 +75,7 @@
     exec """
 class _iter_mixin(UserDict.DictMixin):
     def _make_iter_cursor(self):
-        cur = self.db.cursor()
+        cur = _DeadlockWrap(self.db.cursor)
         key = id(cur)
         self._cursor_refs[key] = ref(cur, self._gen_cref_cleaner(key))
         return cur
@@ -90,19 +95,19 @@
 
             # since we're only returning keys, we call the cursor
             # methods with flags=0, dlen=0, dofs=0
-            key = cur.first(0,0,0)[0]
+            key = _DeadlockWrap(cur.first, 0,0,0)[0]
             yield key
 
             next = cur.next
             while 1:
                 try:
-                    key = next(0,0,0)[0]
+                    key = _DeadlockWrap(next, 0,0,0)[0]
                     yield key
                 except _bsddb.DBCursorClosedError:
                     cur = self._make_iter_cursor()
                     # FIXME-20031101-greg: race condition.  cursor could
                     # be closed by another thread before this call.
-                    cur.set(key,0,0,0)
+                    _DeadlockWrap(cur.set, key,0,0,0)
                     next = cur.next
         except _bsddb.DBNotFoundError:
             return
@@ -119,21 +124,21 @@
             # FIXME-20031102-greg: race condition.  cursor could
             # be closed by another thread before this call.
 
-            kv = cur.first()
+            kv = _DeadlockWrap(cur.first)
             key = kv[0]
             yield kv
 
             next = cur.next
             while 1:
                 try:
-                    kv = next()
+                    kv = _DeadlockWrap(next)
                     key = kv[0]
                     yield kv
                 except _bsddb.DBCursorClosedError:
                     cur = self._make_iter_cursor()
                     # FIXME-20031101-greg: race condition.  cursor could
                     # be closed by another thread before this call.
-                    cur.set(key,0,0,0)
+                    _DeadlockWrap(cur.set, key,0,0,0)
                     next = cur.next
         except _bsddb.DBNotFoundError:
             return
@@ -177,9 +182,9 @@
 
     def _checkCursor(self):
         if self.dbc is None:
-            self.dbc = self.db.cursor()
+            self.dbc = _DeadlockWrap(self.db.cursor)
             if self.saved_dbc_key is not None:
-                self.dbc.set(self.saved_dbc_key)
+                _DeadlockWrap(self.dbc.set, self.saved_dbc_key)
                 self.saved_dbc_key = None
 
     # This method is needed for all non-cursor DB calls to avoid
@@ -192,15 +197,15 @@
             self.dbc = None
             if save:
                 try:
-                    self.saved_dbc_key = c.current(0,0,0)[0]
+                    self.saved_dbc_key = _DeadlockWrap(c.current, 0,0,0)[0]
                 except db.DBError:
                     pass
-            c.close()
+            _DeadlockWrap(c.close)
             del c
         for cref in self._cursor_refs.values():
             c = cref()
             if c is not None:
-                c.close()
+                _DeadlockWrap(c.close)
 
     def _checkOpen(self):
         if self.db is None:
@@ -211,73 +216,77 @@
 
     def __len__(self):
         self._checkOpen()
-        return len(self.db)
+        return _DeadlockWrap(lambda: len(self.db))  # len(self.db)
 
     def __getitem__(self, key):
         self._checkOpen()
-        return self.db[key]
+        return _DeadlockWrap(lambda: self.db[key])  # self.db[key]
 
     def __setitem__(self, key, value):
         self._checkOpen()
         self._closeCursors()
-        self.db[key] = value
+        def wrapF():
+            self.db[key] = value
+        _DeadlockWrap(wrapF)  # self.db[key] = value
 
     def __delitem__(self, key):
         self._checkOpen()
         self._closeCursors()
-        del self.db[key]
+        def wrapF():
+            del self.db[key]
+        _DeadlockWrap(wrapF)  # del self.db[key]
 
     def close(self):
         self._closeCursors(save=0)
         if self.dbc is not None:
-            self.dbc.close()
+            _DeadlockWrap(self.dbc.close)
         v = 0
         if self.db is not None:
-            v = self.db.close()
+            v = _DeadlockWrap(self.db.close)
         self.dbc = None
         self.db = None
         return v
 
     def keys(self):
         self._checkOpen()
-        return self.db.keys()
+        return _DeadlockWrap(self.db.keys)
 
     def has_key(self, key):
         self._checkOpen()
-        return self.db.has_key(key)
+        return _DeadlockWrap(self.db.has_key, key)
 
     def set_location(self, key):
         self._checkOpen()
         self._checkCursor()
-        return self.dbc.set_range(key)
+        return _DeadlockWrap(self.dbc.set_range, key)
 
     def next(self):
         self._checkOpen()
         self._checkCursor()
-        rv = self.dbc.next()
+        rv = _DeadlockWrap(self.dbc.next)
         return rv
 
     def previous(self):
         self._checkOpen()
         self._checkCursor()
-        rv = self.dbc.prev()
+        rv = _DeadlockWrap(self.dbc.prev)
         return rv
 
     def first(self):
         self._checkOpen()
         self._checkCursor()
-        rv = self.dbc.first()
+        rv = _DeadlockWrap(self.dbc.first)
         return rv
 
     def last(self):
         self._checkOpen()
         self._checkCursor()
-        rv = self.dbc.last()
+        rv = _DeadlockWrap(self.dbc.last)
         return rv
 
     def sync(self):
         self._checkOpen()
-        return self.db.sync()
+        return _DeadlockWrap(self.db.sync)
 
 
 #----------------------------------------------------------------------
@@ -385,5 +394,4 @@
 except ImportError:
     db.DB_THREAD = 0
 
-
 #----------------------------------------------------------------------
diff --git a/Lib/bsddb/dbrecio.py b/Lib/bsddb/dbrecio.py
index 22e382a..d439f32 100644
--- a/Lib/bsddb/dbrecio.py
+++ b/Lib/bsddb/dbrecio.py
@@ -75,7 +75,7 @@
 
         dlen = newpos - self.pos
 
-        r = self.db.get(key, txn=self.txn, dlen=dlen, doff=self.pos)
+        r = self.db.get(self.key, txn=self.txn, dlen=dlen, doff=self.pos)
         self.pos = newpos
         return r
 
@@ -121,7 +121,7 @@
                                       "Negative size not allowed")
         elif size < self.pos:
             self.pos = size
-        self.db.put(key, "", txn=self.txn, dlen=self.len-size, doff=size)
+        self.db.put(self.key, "", txn=self.txn, dlen=self.len-size, doff=size)
 
     def write(self, s):
         if self.closed:
@@ -131,7 +131,7 @@
             self.buflist.append('\0'*(self.pos - self.len))
             self.len = self.pos
         newpos = self.pos + len(s)
-        self.db.put(key, s, txn=self.txn, dlen=len(s), doff=self.pos)
+        self.db.put(self.key, s, txn=self.txn, dlen=len(s), doff=self.pos)
         self.pos = newpos
 
     def writelines(self, list):
diff --git a/Lib/bsddb/dbtables.py b/Lib/bsddb/dbtables.py
index 369db43..492d5fd 100644
--- a/Lib/bsddb/dbtables.py
+++ b/Lib/bsddb/dbtables.py
@@ -32,6 +32,12 @@
     # For Python 2.3
     from bsddb.db import *
 
+# XXX(nnorwitz): is this correct? DBIncompleteError is conditional in _bsddb.c
+try:
+    DBIncompleteError
+except NameError:
+    class DBIncompleteError(Exception):
+        pass
 
 class TableDBError(StandardError):
     pass
diff --git a/Lib/bsddb/dbutils.py b/Lib/bsddb/dbutils.py
index 3f63842..6dcfdd5 100644
--- a/Lib/bsddb/dbutils.py
+++ b/Lib/bsddb/dbutils.py
@@ -22,14 +22,14 @@
 
 #
 # import the time.sleep function in a namespace safe way to allow
-# "from bsddb.db import *"
+# "from bsddb.dbutils import *"
 #
 from time import sleep as _sleep
 
 import db
 
 # always sleep at least N seconds between retrys
-_deadlock_MinSleepTime = 1.0/64
+_deadlock_MinSleepTime = 1.0/128
 # never sleep more than N seconds between retrys
 _deadlock_MaxSleepTime = 3.14159
 
@@ -57,7 +57,7 @@
     max_retries = _kwargs.get('max_retries', -1)
     if _kwargs.has_key('max_retries'):
         del _kwargs['max_retries']
-    while 1:
+    while True:
         try:
             return function(*_args, **_kwargs)
         except db.DBLockDeadlockError:
diff --git a/Lib/bsddb/test/test_basics.py b/Lib/bsddb/test/test_basics.py
index bec5da3..d6d507f 100644
--- a/Lib/bsddb/test/test_basics.py
+++ b/Lib/bsddb/test/test_basics.py
@@ -562,6 +562,9 @@
         num = d.truncate()
         assert num == 0, "truncate on empty DB returned nonzero (%r)" % (num,)
 
+    #----------------------------------------
+
+
 #----------------------------------------------------------------------
 
 
@@ -583,18 +586,40 @@
     dbopenflags = db.DB_THREAD
 
 
-class BasicBTreeWithEnvTestCase(BasicTestCase):
+class BasicWithEnvTestCase(BasicTestCase):
+    dbopenflags = db.DB_THREAD
+    useEnv = 1
+    envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
+
+    #----------------------------------------
+
+    def test07_EnvRemoveAndRename(self):
+        if not self.env:
+            return
+
+        if verbose:
+            print '\n', '-=' * 30
+            print "Running %s.test07_EnvRemoveAndRename..." % self.__class__.__name__
+
+        # can't rename or remove an open DB
+        self.d.close()
+
+        newname = self.filename + '.renamed'
+        self.env.dbrename(self.filename, None, newname)
+        self.env.dbremove(newname)
+
+    # dbremove and dbrename are in 4.1 and later
+    if db.version() < (4,1):
+        del test07_EnvRemoveAndRename
+
+    #----------------------------------------
+
+class BasicBTreeWithEnvTestCase(BasicWithEnvTestCase):
     dbtype = db.DB_BTREE
-    dbopenflags = db.DB_THREAD
-    useEnv = 1
-    envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
 
 
-class BasicHashWithEnvTestCase(BasicTestCase):
+class BasicHashWithEnvTestCase(BasicWithEnvTestCase):
     dbtype = db.DB_HASH
-    dbopenflags = db.DB_THREAD
-    useEnv = 1
-    envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
 
 
 #----------------------------------------------------------------------
diff --git a/Lib/compiler/future.py b/Lib/compiler/future.py
index 39c3bb9..fef189e 100644
--- a/Lib/compiler/future.py
+++ b/Lib/compiler/future.py
@@ -23,14 +23,7 @@
 
     def visitModule(self, node):
         stmt = node.node
-        found_docstring = False
         for s in stmt.nodes:
-            # Skip over docstrings
-            if not found_docstring and isinstance(s, ast.Discard) \
-               and isinstance(s.expr, ast.Const) \
-               and isinstance(s.expr.value, str):
-                found_docstring = True
-                continue
             if not self.check_stmt(s):
                 break
 
diff --git a/Lib/compiler/pycodegen.py b/Lib/compiler/pycodegen.py
index c093128..c8a9779 100644
--- a/Lib/compiler/pycodegen.py
+++ b/Lib/compiler/pycodegen.py
@@ -380,16 +380,7 @@
         self.set_lineno(node)
         for default in node.defaults:
             self.visit(default)
-        frees = gen.scope.get_free_vars()
-        if frees:
-            for name in frees:
-                self.emit('LOAD_CLOSURE', name)
-            self.emit('LOAD_CONST', gen)
-            self.emit('MAKE_CLOSURE', len(node.defaults))
-        else:
-            self.emit('LOAD_CONST', gen)
-            self.emit('MAKE_FUNCTION', len(node.defaults))
-
+        self._makeClosure(gen, len(node.defaults))
         for i in range(ndecorators):
             self.emit('CALL_FUNCTION', 1)
 
@@ -403,14 +394,7 @@
         for base in node.bases:
             self.visit(base)
         self.emit('BUILD_TUPLE', len(node.bases))
-        frees = gen.scope.get_free_vars()
-        for name in frees:
-            self.emit('LOAD_CLOSURE', name)
-        self.emit('LOAD_CONST', gen)
-        if frees:
-            self.emit('MAKE_CLOSURE', 0)
-        else:
-            self.emit('MAKE_FUNCTION', 0)
+        self._makeClosure(gen, 0)
         self.emit('CALL_FUNCTION', 0)
         self.emit('BUILD_CLASS')
         self.storeName(node.name)
@@ -642,22 +626,25 @@
         self.newBlock()
         self.emit('POP_TOP')
 
+    def _makeClosure(self, gen, args):
+        frees = gen.scope.get_free_vars()
+        if frees:
+            for name in frees:
+                self.emit('LOAD_CLOSURE', name)
+            self.emit('BUILD_TUPLE', len(frees))
+            self.emit('LOAD_CONST', gen)
+            self.emit('MAKE_CLOSURE', args)
+        else:
+            self.emit('LOAD_CONST', gen)
+            self.emit('MAKE_FUNCTION', args)
+
     def visitGenExpr(self, node):
         gen = GenExprCodeGenerator(node, self.scopes, self.class_name,
                                    self.get_module())
         walk(node.code, gen)
         gen.finish()
         self.set_lineno(node)
-        frees = gen.scope.get_free_vars()
-        if frees:
-            for name in frees:
-                self.emit('LOAD_CLOSURE', name)
-            self.emit('LOAD_CONST', gen)
-            self.emit('MAKE_CLOSURE', 0)
-        else:
-            self.emit('LOAD_CONST', gen)
-            self.emit('MAKE_FUNCTION', 0)
-
+        self._makeClosure(gen, 0)
         # precomputation of outmost iterable
         self.visit(node.code.quals[0].iter)
         self.emit('GET_ITER')
diff --git a/Lib/compiler/symbols.py b/Lib/compiler/symbols.py
index c608f64..8eb5fce 100644
--- a/Lib/compiler/symbols.py
+++ b/Lib/compiler/symbols.py
@@ -191,7 +191,7 @@
         self.add_param('[outmost-iterable]')
 
     def get_names(self):
-        keys = Scope.get_names()
+        keys = Scope.get_names(self)
         return keys
 
 class LambdaScope(FunctionScope):
diff --git a/Lib/compiler/transformer.py b/Lib/compiler/transformer.py
index 96bcce3..8d256ed 100644
--- a/Lib/compiler/transformer.py
+++ b/Lib/compiler/transformer.py
@@ -536,12 +536,7 @@
                    lineno=nodelist[0][2])
 
     def try_stmt(self, nodelist):
-        # 'try' ':' suite (except_clause ':' suite)+ ['else' ':' suite]
-        # | 'try' ':' suite 'finally' ':' suite
-        if nodelist[3][0] != symbol.except_clause:
-            return self.com_try_finally(nodelist)
-
-        return self.com_try_except(nodelist)
+        return self.com_try_except_finally(nodelist)
 
     def with_stmt(self, nodelist):
         return self.com_with(nodelist)
@@ -729,22 +724,20 @@
 
     def atom(self, nodelist):
         return self._atom_dispatch[nodelist[0][0]](nodelist)
-        n.lineno = nodelist[0][2]
-        return n
 
     def atom_lpar(self, nodelist):
         if nodelist[1][0] == token.RPAR:
-            return Tuple(())
+            return Tuple((), lineno=nodelist[0][2])
         return self.com_node(nodelist[1])
 
     def atom_lsqb(self, nodelist):
         if nodelist[1][0] == token.RSQB:
-            return List(())
+            return List((), lineno=nodelist[0][2])
         return self.com_list_constructor(nodelist[1])
 
     def atom_lbrace(self, nodelist):
         if nodelist[1][0] == token.RBRACE:
-            return Dict(())
+            return Dict((), lineno=nodelist[0][2])
         return self.com_dictmaker(nodelist[1])
 
     def atom_backquote(self, nodelist):
@@ -919,18 +912,21 @@
             bases.append(self.com_node(node[i]))
         return bases
 
-    def com_try_finally(self, nodelist):
-        # try_fin_stmt: "try" ":" suite "finally" ":" suite
-        return TryFinally(self.com_node(nodelist[2]),
-                       self.com_node(nodelist[5]),
-                       lineno=nodelist[0][2])
+    def com_try_except_finally(self, nodelist):
+        # ('try' ':' suite
+        #  ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite]
+        #   | 'finally' ':' suite))
 
-    def com_try_except(self, nodelist):
-        # try_except: 'try' ':' suite (except_clause ':' suite)* ['else' suite]
+        if nodelist[3][0] == token.NAME:
+            # first clause is a finally clause: only try-finally
+            return TryFinally(self.com_node(nodelist[2]),
+                              self.com_node(nodelist[5]),
+                              lineno=nodelist[0][2])
+
         #tryexcept:  [TryNode, [except_clauses], elseNode)]
-        stmt = self.com_node(nodelist[2])
         clauses = []
         elseNode = None
+        finallyNode = None
         for i in range(3, len(nodelist), 3):
             node = nodelist[i]
             if node[0] == symbol.except_clause:
@@ -946,9 +942,16 @@
                 clauses.append((expr1, expr2, self.com_node(nodelist[i+2])))
 
             if node[0] == token.NAME:
-                elseNode = self.com_node(nodelist[i+2])
-        return TryExcept(self.com_node(nodelist[2]), clauses, elseNode,
-                         lineno=nodelist[0][2])
+                if node[1] == 'else':
+                    elseNode = self.com_node(nodelist[i+2])
+                elif node[1] == 'finally':
+                    finallyNode = self.com_node(nodelist[i+2])
+        try_except = TryExcept(self.com_node(nodelist[2]), clauses, elseNode,
+                               lineno=nodelist[0][2])
+        if finallyNode:
+            return TryFinally(try_except, finallyNode, lineno=nodelist[0][2])
+        else:
+            return try_except
 
     def com_with(self, nodelist):
         # with_stmt: 'with' expr [with_var] ':' suite
@@ -1138,7 +1141,7 @@
             values = []
             for i in range(1, len(nodelist), 2):
                 values.append(self.com_node(nodelist[i]))
-            return List(values)
+            return List(values, lineno=values[0].lineno)
 
     if hasattr(symbol, 'gen_for'):
         def com_generator_expression(self, expr, node):
@@ -1185,7 +1188,7 @@
         for i in range(1, len(nodelist), 4):
             items.append((self.com_node(nodelist[i]),
                           self.com_node(nodelist[i+2])))
-        return Dict(items)
+        return Dict(items, lineno=items[0][0].lineno)
 
     def com_apply_trailer(self, primaryNode, nodelist):
         t = nodelist[1][0]
@@ -1379,6 +1382,7 @@
     symbol.testlist,
     symbol.testlist_safe,
     symbol.test,
+    symbol.or_test,
     symbol.and_test,
     symbol.not_test,
     symbol.comparison,
diff --git a/Lib/ctypes/__init__.py b/Lib/ctypes/__init__.py
index f2ddbaa..a4e3c36 100644
--- a/Lib/ctypes/__init__.py
+++ b/Lib/ctypes/__init__.py
@@ -1,9 +1,11 @@
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 """create and manipulate C data types in Python"""
 
 import os as _os, sys as _sys
-from itertools import chain as _chain
 
-__version__ = "0.9.9.6"
+__version__ = "1.0.0"
 
 from _ctypes import Union, Structure, Array
 from _ctypes import _Pointer
@@ -20,6 +22,23 @@
 if _os.name in ("nt", "ce"):
     from _ctypes import FormatError
 
+DEFAULT_MODE = RTLD_LOCAL
+if _os.name == "posix" and _sys.platform == "darwin":
+    import gestalt
+
+    # gestalt.gestalt("sysv") returns the version number of the
+    # currently active system file as BCD.
+    # On OS X 10.4.6 -> 0x1046
+    # On OS X 10.2.8 -> 0x1028
+    # See also http://www.rgaros.nl/gestalt/
+    #
+    # On OS X 10.3, we use RTLD_GLOBAL as default mode
+    # because RTLD_LOCAL does not work at least on some
+    # libraries.
+
+    if gestalt.gestalt("sysv") < 0x1040:
+        DEFAULT_MODE = RTLD_GLOBAL
+
 from _ctypes import FUNCFLAG_CDECL as _FUNCFLAG_CDECL, \
      FUNCFLAG_PYTHONAPI as _FUNCFLAG_PYTHONAPI
 
@@ -67,7 +86,7 @@
     restype: the result type
     argtypes: a sequence specifying the argument types
 
-    The function prototype can be called in three ways to create a
+    The function prototype can be called in different ways to create a
     callable object:
 
     prototype(integer address) -> foreign function
@@ -111,7 +130,7 @@
 elif _os.name == "posix":
     from _ctypes import dlopen as _dlopen
 
-from _ctypes import sizeof, byref, addressof, alignment
+from _ctypes import sizeof, byref, addressof, alignment, resize
 from _ctypes import _SimpleCData
 
 class py_object(_SimpleCData):
@@ -282,7 +301,7 @@
         _flags_ = _FUNCFLAG_CDECL
         _restype_ = c_int # default, can be overridden in instances
 
-    def __init__(self, name, mode=RTLD_LOCAL, handle=None):
+    def __init__(self, name, mode=DEFAULT_MODE, handle=None):
         self._name = name
         if handle is None:
             self._handle = _dlopen(self._name, mode)
@@ -293,18 +312,19 @@
         return "<%s '%s', handle %x at %x>" % \
                (self.__class__.__name__, self._name,
                 (self._handle & (_sys.maxint*2 + 1)),
-                id(self))
+                id(self) & (_sys.maxint*2 + 1))
 
     def __getattr__(self, name):
         if name.startswith('__') and name.endswith('__'):
             raise AttributeError, name
-        return self.__getitem__(name)
+        func = self.__getitem__(name)
+        setattr(self, name, func)
+        return func
 
     def __getitem__(self, name_or_ordinal):
         func = self._FuncPtr((name_or_ordinal, self))
         if not isinstance(name_or_ordinal, (int, long)):
             func.__name__ = name_or_ordinal
-            setattr(self, name_or_ordinal, func)
         return func
 
 class PyDLL(CDLL):
@@ -419,12 +439,10 @@
         _restype_ = restype
         _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI
     return CFunctionType
-_cast = PYFUNCTYPE(py_object, c_void_p, py_object)(_cast_addr)
 
+_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr)
 def cast(obj, typ):
-    result = _cast(obj, typ)
-    result.__keepref = obj
-    return result
+    return _cast(obj, obj, typ)
 
 _string_at = CFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr)
 def string_at(ptr, size=0):
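The public cast() signature is unchanged; only the internal _cast call gained an extra argument so the source object can be recorded in the result's _objects instead of the old __keepref attribute. A short usage sketch:

from ctypes import POINTER, c_int, cast

array = (c_int * 3)(42, 17, 2)
ptr = cast(array, POINTER(c_int))
assert [ptr[i] for i in range(3)] == [42, 17, 2]
# The source buffer stays alive through the cast result's _objects mapping,
# so no separate reference to `array` is needed while `ptr` is in use.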
@@ -446,52 +464,21 @@
         return _wstring_at(ptr, size)
 
 
-if _os.name == "nt": # COM stuff
+if _os.name in ("nt", "ce"): # COM stuff
     def DllGetClassObject(rclsid, riid, ppv):
-        # First ask ctypes.com.server than comtypes.server for the
-        # class object.
-
-        # trick py2exe by doing dynamic imports
-        result = -2147221231 # CLASS_E_CLASSNOTAVAILABLE
         try:
-            ctcom = __import__("ctypes.com.server", globals(), locals(), ['*'])
+            ccom = __import__("comtypes.server.inprocserver", globals(), locals(), ['*'])
         except ImportError:
-            pass
+            return -2147221231 # CLASS_E_CLASSNOTAVAILABLE
         else:
-            result = ctcom.DllGetClassObject(rclsid, riid, ppv)
-
-        if result == -2147221231: # CLASS_E_CLASSNOTAVAILABLE
-            try:
-                ccom = __import__("comtypes.server", globals(), locals(), ['*'])
-            except ImportError:
-                pass
-            else:
-                result = ccom.DllGetClassObject(rclsid, riid, ppv)
-
-        return result
+            return ccom.DllGetClassObject(rclsid, riid, ppv)
 
     def DllCanUnloadNow():
-        # First ask ctypes.com.server than comtypes.server if we can unload or not.
-        # trick py2exe by doing dynamic imports
-        result = 0 # S_OK
         try:
-            ctcom = __import__("ctypes.com.server", globals(), locals(), ['*'])
+            ccom = __import__("comtypes.server.inprocserver", globals(), locals(), ['*'])
         except ImportError:
-            pass
-        else:
-            result = ctcom.DllCanUnloadNow()
-            if result != 0: # != S_OK
-                return result
-
-        try:
-            ccom = __import__("comtypes.server", globals(), locals(), ['*'])
-        except ImportError:
-            return result
-        try:
-            return ccom.DllCanUnloadNow()
-        except AttributeError:
-            pass
-        return result
+            return 0 # S_OK
+        return ccom.DllCanUnloadNow()
 
 from ctypes._endian import BigEndianStructure, LittleEndianStructure
 
diff --git a/Lib/ctypes/_endian.py b/Lib/ctypes/_endian.py
index 5818ae1..6de0d47 100644
--- a/Lib/ctypes/_endian.py
+++ b/Lib/ctypes/_endian.py
@@ -1,3 +1,6 @@
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 import sys
 from ctypes import *
 
diff --git a/Lib/ctypes/macholib/__init__.py b/Lib/ctypes/macholib/__init__.py
index 5621def..36149d2 100644
--- a/Lib/ctypes/macholib/__init__.py
+++ b/Lib/ctypes/macholib/__init__.py
@@ -1,3 +1,6 @@
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 """
 Enough Mach-O to make your head spin.
 
diff --git a/Lib/ctypes/macholib/dyld.py b/Lib/ctypes/macholib/dyld.py
index a336fd0..14e2139 100644
--- a/Lib/ctypes/macholib/dyld.py
+++ b/Lib/ctypes/macholib/dyld.py
@@ -1,3 +1,6 @@
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 """
 dyld emulation
 """
diff --git a/Lib/ctypes/macholib/dylib.py b/Lib/ctypes/macholib/dylib.py
index aa10750..ea3dd38 100644
--- a/Lib/ctypes/macholib/dylib.py
+++ b/Lib/ctypes/macholib/dylib.py
@@ -1,3 +1,6 @@
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 """
 Generic dylib path manipulation
 """
diff --git a/Lib/ctypes/macholib/framework.py b/Lib/ctypes/macholib/framework.py
index ad6ed55..dd7fb2f 100644
--- a/Lib/ctypes/macholib/framework.py
+++ b/Lib/ctypes/macholib/framework.py
@@ -1,3 +1,6 @@
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 """
 Generic framework path manipulation
 """
diff --git a/Lib/ctypes/test/test_anon.py b/Lib/ctypes/test/test_anon.py
new file mode 100644
index 0000000..99e02cb
--- /dev/null
+++ b/Lib/ctypes/test/test_anon.py
@@ -0,0 +1,60 @@
+import unittest
+from ctypes import *
+
+class AnonTest(unittest.TestCase):
+
+    def test_anon(self):
+        class ANON(Union):
+            _fields_ = [("a", c_int),
+                        ("b", c_int)]
+
+        class Y(Structure):
+            _fields_ = [("x", c_int),
+                        ("_", ANON),
+                        ("y", c_int)]
+            _anonymous_ = ["_"]
+
+        self.failUnlessEqual(Y.a.offset, sizeof(c_int))
+        self.failUnlessEqual(Y.b.offset, sizeof(c_int))
+
+        self.failUnlessEqual(ANON.a.offset, 0)
+        self.failUnlessEqual(ANON.b.offset, 0)
+
+    def test_anon_nonseq(self):
+        # TypeError: _anonymous_ must be a sequence
+        self.failUnlessRaises(TypeError,
+                              lambda: type(Structure)("Name",
+                                                      (Structure,),
+                                                      {"_fields_": [], "_anonymous_": 42}))
+
+    def test_anon_nonmember(self):
+        # AttributeError: type object 'Name' has no attribute 'x'
+        self.failUnlessRaises(AttributeError,
+                              lambda: type(Structure)("Name",
+                                                      (Structure,),
+                                                      {"_fields_": [],
+                                                       "_anonymous_": ["x"]}))
+
+    def test_nested(self):
+        class ANON_S(Structure):
+            _fields_ = [("a", c_int)]
+
+        class ANON_U(Union):
+            _fields_ = [("_", ANON_S),
+                        ("b", c_int)]
+            _anonymous_ = ["_"]
+
+        class Y(Structure):
+            _fields_ = [("x", c_int),
+                        ("_", ANON_U),
+                        ("y", c_int)]
+            _anonymous_ = ["_"]
+
+        self.failUnlessEqual(Y.x.offset, 0)
+        self.failUnlessEqual(Y.a.offset, sizeof(c_int))
+        self.failUnlessEqual(Y.b.offset, sizeof(c_int))
+        self.failUnlessEqual(Y._.offset, sizeof(c_int))
+        self.failUnlessEqual(Y.y.offset, sizeof(c_int) * 2)
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/Lib/ctypes/test/test_cast.py b/Lib/ctypes/test/test_cast.py
index 821ce3f..09e928f 100644
--- a/Lib/ctypes/test/test_cast.py
+++ b/Lib/ctypes/test/test_cast.py
@@ -30,17 +30,32 @@
         ptr = cast(address, POINTER(c_int))
         self.failUnlessEqual([ptr[i] for i in range(3)], [42, 17, 2])
 
+    def test_p2a_objects(self):
+        array = (c_char_p * 5)()
+        self.failUnlessEqual(array._objects, None)
+        array[0] = "foo bar"
+        self.failUnlessEqual(array._objects, {'0': "foo bar"})
 
-    def test_ptr2array(self):
-        array = (c_int * 3)(42, 17, 2)
+        p = cast(array, POINTER(c_char_p))
+        # array and p share a common _objects attribute
+        self.failUnless(p._objects is array._objects)
+        self.failUnlessEqual(array._objects, {'0': "foo bar", id(array): array})
+        p[0] = "spam spam"
+        self.failUnlessEqual(p._objects, {'0': "spam spam", id(array): array})
+        self.failUnless(array._objects is p._objects)
+        p[1] = "foo bar"
+        self.failUnlessEqual(p._objects, {'1': 'foo bar', '0': "spam spam", id(array): array})
+        self.failUnless(array._objects is p._objects)
 
-        from sys import getrefcount
-
-        before = getrefcount(array)
-        ptr = cast(array, POINTER(c_int))
-        self.failUnlessEqual(getrefcount(array), before + 1)
-        del ptr
-        self.failUnlessEqual(getrefcount(array), before)
+    def test_other(self):
+        p = cast((c_int * 4)(1, 2, 3, 4), POINTER(c_int))
+        self.failUnlessEqual(p[:4], [1, 2, 3, 4])
+        c_int()
+        self.failUnlessEqual(p[:4], [1, 2, 3, 4])
+        p[2] = 96
+        self.failUnlessEqual(p[:4], [1, 2, 96, 4])
+        c_int()
+        self.failUnlessEqual(p[:4], [1, 2, 96, 4])
 
 if __name__ == "__main__":
     unittest.main()
diff --git a/Lib/ctypes/test/test_keeprefs.py b/Lib/ctypes/test/test_keeprefs.py
index 7318f29..80b6ca2 100644
--- a/Lib/ctypes/test/test_keeprefs.py
+++ b/Lib/ctypes/test/test_keeprefs.py
@@ -61,6 +61,8 @@
         r.ul.x = 22
         r.ul.y = 44
         self.assertEquals(r._objects, {'0': {}})
+        r.lr = POINT()
+        self.assertEquals(r._objects, {'0': {}, '1': {}})
 
 class ArrayTestCase(unittest.TestCase):
     def test_cint_array(self):
@@ -86,9 +88,10 @@
         self.assertEquals(x._objects, {'1': {}})
 
 class PointerTestCase(unittest.TestCase):
-    def X_test_p_cint(self):
-        x = pointer(c_int(42))
-        print x._objects
+    def test_p_cint(self):
+        i = c_int(42)
+        x = pointer(i)
+        self.failUnlessEqual(x._objects, {'1': i})
 
 class DeletePointerTestCase(unittest.TestCase):
     def X_test(self):
diff --git a/Lib/ctypes/test/test_loading.py b/Lib/ctypes/test/test_loading.py
index 45585ae..28c83fd4 100644
--- a/Lib/ctypes/test/test_loading.py
+++ b/Lib/ctypes/test/test_loading.py
@@ -9,18 +9,10 @@
     libc_name = "msvcrt"
 elif os.name == "ce":
     libc_name = "coredll"
-elif sys.platform == "darwin":
-    libc_name = "libc.dylib"
 elif sys.platform == "cygwin":
     libc_name = "cygwin1.dll"
 else:
-    for line in os.popen("ldd %s" % sys.executable):
-        if "libc.so" in line:
-            if sys.platform == "openbsd3":
-                libc_name = line.split()[4]
-            else:
-                libc_name = line.split()[2]
-            break
+    libc_name = find_library("c")
 
 if is_resource_enabled("printing"):
     print "libc_name is", libc_name
diff --git a/Lib/ctypes/test/test_objects.py b/Lib/ctypes/test/test_objects.py
new file mode 100644
index 0000000..4d921d2
--- /dev/null
+++ b/Lib/ctypes/test/test_objects.py
@@ -0,0 +1,70 @@
+r'''
+This tests the '_objects' attribute of ctypes instances.  '_objects'
+holds references to objects that must be kept alive as long as the
+ctypes instance, to make sure that the memory buffer is valid.
+
+WARNING: The '_objects' attribute is exposed ONLY for debugging ctypes itself,
+it MUST NEVER BE MODIFIED!
+
+'_objects' is initialized to a dictionary on first use, before that it
+is None.
+
+Here is an array of string pointers:
+
+>>> from ctypes import *
+>>> array = (c_char_p * 5)()
+>>> print array._objects
+None
+>>>
+
+The memory block stores pointers to strings, and the strings themselves
+assigned from Python must be kept.
+
+>>> array[4] = 'foo bar'
+>>> array._objects
+{'4': 'foo bar'}
+>>> array[4]
+'foo bar'
+>>>
+
+It gets more complicated when the ctypes instance itself is contained
+in a 'base' object.
+
+>>> class X(Structure):
+...     _fields_ = [("x", c_int), ("y", c_int), ("array", c_char_p * 5)]
+...
+>>> x = X()
+>>> print x._objects
+None
+>>>
+
+The 'array' attribute of the 'x' object shares part of the memory buffer
+of 'x' ('_b_base_' is either None, or the root object owning the memory block):
+
+>>> print x.array._b_base_ # doctest: +ELLIPSIS
+<ctypes.test.test_objects.X object at 0x...>
+>>>
+
+>>> x.array[0] = 'spam spam spam'
+>>> x._objects
+{'0:2': 'spam spam spam'}
+>>> x.array._b_base_._objects
+{'0:2': 'spam spam spam'}
+>>>
+
+'''
+
+import unittest, doctest, sys
+
+import ctypes.test.test_objects
+
+class TestCase(unittest.TestCase):
+    if sys.hexversion > 0x02040000:
+        # Python 2.3 has no ELLIPSIS flag, so we don't test with this
+        # version:
+        def test(self):
+            doctest.testmod(ctypes.test.test_objects)
+
+if __name__ == '__main__':
+    if sys.hexversion > 0x02040000:
+        doctest.testmod(ctypes.test.test_objects)
diff --git a/Lib/ctypes/test/test_parameters.py b/Lib/ctypes/test/test_parameters.py
index 9537400..1b7f0dc 100644
--- a/Lib/ctypes/test/test_parameters.py
+++ b/Lib/ctypes/test/test_parameters.py
@@ -147,6 +147,41 @@
 ##    def test_performance(self):
 ##        check_perf()
 
+    def test_noctypes_argtype(self):
+        import _ctypes_test
+        from ctypes import CDLL, c_void_p, ArgumentError
+
+        func = CDLL(_ctypes_test.__file__)._testfunc_p_p
+        func.restype = c_void_p
+        # TypeError: has no from_param method
+        self.assertRaises(TypeError, setattr, func, "argtypes", (object,))
+
+        class Adapter(object):
+            def from_param(cls, obj):
+                return None
+
+        func.argtypes = (Adapter(),)
+        self.failUnlessEqual(func(None), None)
+        self.failUnlessEqual(func(object()), None)
+
+        class Adapter(object):
+            def from_param(cls, obj):
+                return obj
+
+        func.argtypes = (Adapter(),)
+        # don't know how to convert parameter 1
+        self.assertRaises(ArgumentError, func, object())
+        self.failUnlessEqual(func(c_void_p(42)), 42)
+
+        class Adapter(object):
+            def from_param(cls, obj):
+                raise ValueError(obj)
+
+        func.argtypes = (Adapter(),)
+        # ArgumentError: argument 1: ValueError: 99
+        self.assertRaises(ArgumentError, func, 99)
+
+
 ################################################################
 
 if __name__ == '__main__':
diff --git a/Lib/ctypes/test/test_pointers.py b/Lib/ctypes/test/test_pointers.py
index a7a2802..586655a 100644
--- a/Lib/ctypes/test/test_pointers.py
+++ b/Lib/ctypes/test/test_pointers.py
@@ -157,6 +157,23 @@
         q = pointer(y)
         pp[0] = q         # <==
         self.failUnlessEqual(p[0], 6)
+    def test_c_void_p(self):
+        # http://sourceforge.net/tracker/?func=detail&aid=1518190&group_id=5470&atid=105470
+        if sizeof(c_void_p) == 4:
+            self.failUnlessEqual(c_void_p(0xFFFFFFFFL).value,
+                                 c_void_p(-1).value)
+            self.failUnlessEqual(c_void_p(0xFFFFFFFFFFFFFFFFL).value,
+                                 c_void_p(-1).value)
+        elif sizeof(c_void_p) == 8:
+            self.failUnlessEqual(c_void_p(0xFFFFFFFFL).value,
+                                 0xFFFFFFFFL)
+            self.failUnlessEqual(c_void_p(0xFFFFFFFFFFFFFFFFL).value,
+                                 c_void_p(-1).value)
+            self.failUnlessEqual(c_void_p(0xFFFFFFFFFFFFFFFFFFFFFFFFL).value,
+                                 c_void_p(-1).value)
+
+        self.assertRaises(TypeError, c_void_p, 3.14) # make sure floats are NOT accepted
+        self.assertRaises(TypeError, c_void_p, object()) # nor other objects
 
 if __name__ == '__main__':
     unittest.main()
diff --git a/Lib/ctypes/test/test_slicing.py b/Lib/ctypes/test/test_slicing.py
index 08c811e..511c3d3 100644
--- a/Lib/ctypes/test/test_slicing.py
+++ b/Lib/ctypes/test/test_slicing.py
@@ -35,7 +35,7 @@
         self.assertRaises(ValueError, setslice, a, 0, 5, range(32))
 
     def test_char_ptr(self):
-        s = "abcdefghijklmnopqrstuvwxyz\0"
+        s = "abcdefghijklmnopqrstuvwxyz"
 
         dll = CDLL(_ctypes_test.__file__)
         dll.my_strdup.restype = POINTER(c_char)
@@ -50,9 +50,31 @@
 
         dll.my_strdup.restype = POINTER(c_byte)
         res = dll.my_strdup(s)
-        self.failUnlessEqual(res[:len(s)-1], range(ord("a"), ord("z")+1))
+        self.failUnlessEqual(res[:len(s)], range(ord("a"), ord("z")+1))
         dll.my_free(res)
 
+    def test_char_ptr_with_free(self):
+        dll = CDLL(_ctypes_test.__file__)
+        s = "abcdefghijklmnopqrstuvwxyz"
+
+        class allocated_c_char_p(c_char_p):
+            pass
+
+        dll.my_free.restype = None
+        def errcheck(result, func, args):
+            retval = result.value
+            dll.my_free(result)
+            return retval
+
+        dll.my_strdup.restype = allocated_c_char_p
+        dll.my_strdup.errcheck = errcheck
+        try:
+            res = dll.my_strdup(s)
+            self.failUnlessEqual(res, s)
+        finally:
+            del dll.my_strdup.errcheck
+
+
     def test_char_array(self):
         s = "abcdefghijklmnopqrstuvwxyz\0"
 
diff --git a/Lib/ctypes/test/test_structures.py b/Lib/ctypes/test/test_structures.py
index 49f064b..8a4531d 100644
--- a/Lib/ctypes/test/test_structures.py
+++ b/Lib/ctypes/test/test_structures.py
@@ -138,8 +138,8 @@
         self.failUnlessEqual(X.y.size, sizeof(c_char))
 
         # readonly
-        self.assertRaises(AttributeError, setattr, X.x, "offset", 92)
-        self.assertRaises(AttributeError, setattr, X.x, "size", 92)
+        self.assertRaises((TypeError, AttributeError), setattr, X.x, "offset", 92)
+        self.assertRaises((TypeError, AttributeError), setattr, X.x, "size", 92)
 
         class X(Union):
             _fields_ = [("x", c_int),
@@ -152,8 +152,8 @@
         self.failUnlessEqual(X.y.size, sizeof(c_char))
 
         # readonly
-        self.assertRaises(AttributeError, setattr, X.x, "offset", 92)
-        self.assertRaises(AttributeError, setattr, X.x, "size", 92)
+        self.assertRaises((TypeError, AttributeError), setattr, X.x, "offset", 92)
+        self.assertRaises((TypeError, AttributeError), setattr, X.x, "size", 92)
 
         # XXX Should we check nested data types also?
         # offset is always relative to the class...
@@ -298,7 +298,7 @@
                                  "expected string or Unicode object, int found")
         else:
             self.failUnlessEqual(msg,
-                                 "(Phone) TypeError: "
+                                 "(Phone) exceptions.TypeError: "
                                  "expected string or Unicode object, int found")
 
         cls, msg = self.get_except(Person, "Someone", ("a", "b", "c"))
@@ -307,7 +307,7 @@
             self.failUnlessEqual(msg,
                                  "(Phone) <type 'exceptions.ValueError'>: too many initializers")
         else:
-            self.failUnlessEqual(msg, "(Phone) ValueError: too many initializers")
+            self.failUnlessEqual(msg, "(Phone) exceptions.ValueError: too many initializers")
 
 
     def get_except(self, func, *args):
@@ -371,5 +371,15 @@
         items = [s.array[i] for i in range(3)]
         self.failUnlessEqual(items, [1, 2, 3])
 
+    def test_none_to_pointer_fields(self):
+        class S(Structure):
+            _fields_ = [("x", c_int),
+                        ("p", POINTER(c_int))]
+
+        s = S()
+        s.x = 12345678
+        s.p = None
+        self.failUnlessEqual(s.x, 12345678)
+
 if __name__ == '__main__':
     unittest.main()
diff --git a/Lib/ctypes/test/test_varsize_struct.py b/Lib/ctypes/test/test_varsize_struct.py
new file mode 100644
index 0000000..06d2323
--- /dev/null
+++ b/Lib/ctypes/test/test_varsize_struct.py
@@ -0,0 +1,50 @@
+from ctypes import *
+import unittest
+
+class VarSizeTest(unittest.TestCase):
+    def test_resize(self):
+        class X(Structure):
+            _fields_ = [("item", c_int),
+                        ("array", c_int * 1)]
+
+        self.failUnlessEqual(sizeof(X), sizeof(c_int) * 2)
+        x = X()
+        x.item = 42
+        x.array[0] = 100
+        self.failUnlessEqual(sizeof(x), sizeof(c_int) * 2)
+
+        # make room for one additional item
+        new_size = sizeof(X) + sizeof(c_int) * 1
+        resize(x, new_size)
+        self.failUnlessEqual(sizeof(x), new_size)
+        self.failUnlessEqual((x.item, x.array[0]), (42, 100))
+
+        # make room for 10 additional items
+        new_size = sizeof(X) + sizeof(c_int) * 9
+        resize(x, new_size)
+        self.failUnlessEqual(sizeof(x), new_size)
+        self.failUnlessEqual((x.item, x.array[0]), (42, 100))
+
+        # make room for one additional item
+        new_size = sizeof(X) + sizeof(c_int) * 1
+        resize(x, new_size)
+        self.failUnlessEqual(sizeof(x), new_size)
+        self.failUnlessEqual((x.item, x.array[0]), (42, 100))
+
+    def test_array_invalid_length(self):
+        # cannot create arrays with non-positive size
+        self.failUnlessRaises(ValueError, lambda: c_int * -1)
+        self.failUnlessRaises(ValueError, lambda: c_int * -3)
+
+    def test_zerosized_array(self):
+        array = (c_int * 0)()
+        # accessing elements of zero-sized arrays raises IndexError
+        self.failUnlessRaises(IndexError, array.__setitem__, 0, None)
+        self.failUnlessRaises(IndexError, array.__getitem__, 0)
+        self.failUnlessRaises(IndexError, array.__setitem__, 1, None)
+        self.failUnlessRaises(IndexError, array.__getitem__, 1)
+        self.failUnlessRaises(IndexError, array.__setitem__, -1, None)
+        self.failUnlessRaises(IndexError, array.__getitem__, -1)
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/Lib/ctypes/test/test_win32.py b/Lib/ctypes/test/test_win32.py
index 8247d37..db530d3 100644
--- a/Lib/ctypes/test/test_win32.py
+++ b/Lib/ctypes/test/test_win32.py
@@ -1,6 +1,7 @@
 # Windows specific tests
 
 from ctypes import *
+from ctypes.test import is_resource_enabled
 import unittest, sys
 
 import _ctypes_test
@@ -30,15 +31,10 @@
             # or wrong calling convention
             self.assertRaises(ValueError, IsWindow, None)
 
-        def test_SEH(self):
-            # Call functions with invalid arguments, and make sure that access violations
-            # are trapped and raise an exception.
-            #
-            # Normally, in a debug build of the _ctypes extension
-            # module, exceptions are not trapped, so we can only run
-            # this test in a release build.
-            import sys
-            if not hasattr(sys, "getobjects"):
+        if is_resource_enabled("SEH"):
+            def test_SEH(self):
+                # Call functions with invalid arguments, and make sure that access violations
+                # are trapped and raise an exception.
                 self.assertRaises(WindowsError, windll.kernel32.GetModuleHandleA, 32)
 
 class Structures(unittest.TestCase):
diff --git a/Lib/ctypes/util.py b/Lib/ctypes/util.py
index d756c1c..2ee2968 100644
--- a/Lib/ctypes/util.py
+++ b/Lib/ctypes/util.py
@@ -1,5 +1,7 @@
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 import sys, os
-import ctypes
 
 # find_library(name) returns the pathname of a library, or None.
 if os.name == "nt":
@@ -41,14 +43,17 @@
 
 elif os.name == "posix":
     # Andreas Degert's find functions, using gcc, /sbin/ldconfig, objdump
-    import re, tempfile
+    import re, tempfile, errno
 
     def _findLib_gcc(name):
         expr = '[^\(\)\s]*lib%s\.[^\(\)\s]*' % name
+        fdout, ccout = tempfile.mkstemp()
+        os.close(fdout)
         cmd = 'if type gcc &>/dev/null; then CC=gcc; else CC=cc; fi;' \
-              '$CC -Wl,-t -o /dev/null 2>&1 -l' + name
+              '$CC -Wl,-t -o ' + ccout + ' 2>&1 -l' + name
         try:
             fdout, outfile =  tempfile.mkstemp()
+            os.close(fdout)
             fd = os.popen(cmd)
             trace = fd.read()
             err = fd.close()
@@ -58,6 +63,11 @@
             except OSError, e:
                 if e.errno != errno.ENOENT:
                     raise
+            try:
+                os.unlink(ccout)
+            except OSError, e:
+                if e.errno != errno.ENOENT:
+                    raise
         res = re.search(expr, trace)
         if not res:
             return None
diff --git a/Lib/ctypes/wintypes.py b/Lib/ctypes/wintypes.py
index 92b79d2..9768233 100644
--- a/Lib/ctypes/wintypes.py
+++ b/Lib/ctypes/wintypes.py
@@ -1,60 +1,117 @@
-# XXX This module needs cleanup.
+######################################################################
+#  This file should be kept compatible with Python 2.3, see PEP 291. #
+######################################################################
 
+# The most useful windows datatypes
 from ctypes import *
 
-DWORD = c_ulong
-WORD = c_ushort
 BYTE = c_byte
+WORD = c_ushort
+DWORD = c_ulong
+
+WCHAR = c_wchar
+UINT = c_uint
+
+DOUBLE = c_double
+
+BOOLEAN = BYTE
+BOOL = c_long
+
+from ctypes import _SimpleCData
+class VARIANT_BOOL(_SimpleCData):
+    _type_ = "v"
+    def __repr__(self):
+        return "%s(%r)" % (self.__class__.__name__, self.value)
 
 ULONG = c_ulong
 LONG = c_long
 
-LARGE_INTEGER = c_longlong
-ULARGE_INTEGER = c_ulonglong
+# in the windows header files, these are structures.
+_LARGE_INTEGER = LARGE_INTEGER = c_longlong
+_ULARGE_INTEGER = ULARGE_INTEGER = c_ulonglong
 
-
-HANDLE = c_ulong # in the header files: void *
-
-HWND = HANDLE
-HDC = HANDLE
-HMODULE = HANDLE
-HINSTANCE = HANDLE
-HRGN = HANDLE
-HTASK = HANDLE
-HKEY = HANDLE
-HPEN = HANDLE
-HGDIOBJ = HANDLE
-HMENU = HANDLE
-
-LCID = DWORD
+LPCOLESTR = LPOLESTR = OLESTR = c_wchar_p
+LPCWSTR = LPWSTR = c_wchar_p
+LPCSTR = LPSTR = c_char_p
 
 WPARAM = c_uint
 LPARAM = c_long
 
-BOOL = c_long
-VARIANT_BOOL = c_short
+ATOM = WORD
+LANGID = WORD
 
-LPCOLESTR = LPOLESTR = OLESTR = c_wchar_p
-LPCWSTR = LPWSTR = c_wchar_p
+COLORREF = DWORD
+LGRPID = DWORD
+LCTYPE = DWORD
 
-LPCSTR = LPSTR = c_char_p
+LCID = DWORD
+
+################################################################
+# HANDLE types
+HANDLE = c_ulong # in the header files: void *
+
+HACCEL = HANDLE
+HBITMAP = HANDLE
+HBRUSH = HANDLE
+HCOLORSPACE = HANDLE
+HDC = HANDLE
+HDESK = HANDLE
+HDWP = HANDLE
+HENHMETAFILE = HANDLE
+HFONT = HANDLE
+HGDIOBJ = HANDLE
+HGLOBAL = HANDLE
+HHOOK = HANDLE
+HICON = HANDLE
+HINSTANCE = HANDLE
+HKEY = HANDLE
+HKL = HANDLE
+HLOCAL = HANDLE
+HMENU = HANDLE
+HMETAFILE = HANDLE
+HMODULE = HANDLE
+HMONITOR = HANDLE
+HPALETTE = HANDLE
+HPEN = HANDLE
+HRGN = HANDLE
+HRSRC = HANDLE
+HSTR = HANDLE
+HTASK = HANDLE
+HWINSTA = HANDLE
+HWND = HANDLE
+SC_HANDLE = HANDLE
+SERVICE_STATUS_HANDLE = HANDLE
+
+################################################################
+# Some important structure definitions
 
 class RECT(Structure):
     _fields_ = [("left", c_long),
                 ("top", c_long),
                 ("right", c_long),
                 ("bottom", c_long)]
-RECTL = RECT
+tagRECT = _RECTL = RECTL = RECT
+
+class _SMALL_RECT(Structure):
+    _fields_ = [('Left', c_short),
+                ('Top', c_short),
+                ('Right', c_short),
+                ('Bottom', c_short)]
+SMALL_RECT = _SMALL_RECT
+
+class _COORD(Structure):
+    _fields_ = [('X', c_short),
+                ('Y', c_short)]
 
 class POINT(Structure):
     _fields_ = [("x", c_long),
                 ("y", c_long)]
-POINTL = POINT
+tagPOINT = _POINTL = POINTL = POINT
 
 class SIZE(Structure):
     _fields_ = [("cx", c_long),
                 ("cy", c_long)]
-SIZEL = SIZE
+tagSIZE = SIZEL = SIZE
 
 def RGB(red, green, blue):
     return red + (green << 8) + (blue << 16)
@@ -62,6 +119,7 @@
 class FILETIME(Structure):
     _fields_ = [("dwLowDateTime", DWORD),
                 ("dwHighDateTime", DWORD)]
+_FILETIME = FILETIME
 
 class MSG(Structure):
     _fields_ = [("hWnd", HWND),
@@ -70,6 +128,7 @@
                 ("lParam", LPARAM),
                 ("time", DWORD),
                 ("pt", POINT)]
+tagMSG = MSG
 MAX_PATH = 260
 
 class WIN32_FIND_DATAA(Structure):
@@ -95,3 +154,19 @@
                 ("dwReserved1", DWORD),
                 ("cFileName", c_wchar * MAX_PATH),
                 ("cAlternameFileName", c_wchar * 14)]
+
+__all__ = ['ATOM', 'BOOL', 'BOOLEAN', 'BYTE', 'COLORREF', 'DOUBLE',
+           'DWORD', 'FILETIME', 'HACCEL', 'HANDLE', 'HBITMAP', 'HBRUSH',
+           'HCOLORSPACE', 'HDC', 'HDESK', 'HDWP', 'HENHMETAFILE', 'HFONT',
+           'HGDIOBJ', 'HGLOBAL', 'HHOOK', 'HICON', 'HINSTANCE', 'HKEY',
+           'HKL', 'HLOCAL', 'HMENU', 'HMETAFILE', 'HMODULE', 'HMONITOR',
+           'HPALETTE', 'HPEN', 'HRGN', 'HRSRC', 'HSTR', 'HTASK', 'HWINSTA',
+           'HWND', 'LANGID', 'LARGE_INTEGER', 'LCID', 'LCTYPE', 'LGRPID',
+           'LONG', 'LPARAM', 'LPCOLESTR', 'LPCSTR', 'LPCWSTR', 'LPOLESTR',
+           'LPSTR', 'LPWSTR', 'MAX_PATH', 'MSG', 'OLESTR', 'POINT',
+           'POINTL', 'RECT', 'RECTL', 'RGB', 'SC_HANDLE',
+           'SERVICE_STATUS_HANDLE', 'SIZE', 'SIZEL', 'SMALL_RECT', 'UINT',
+           'ULARGE_INTEGER', 'ULONG', 'VARIANT_BOOL', 'WCHAR',
+           'WIN32_FIND_DATAA', 'WIN32_FIND_DATAW', 'WORD', 'WPARAM', '_COORD',
+           '_FILETIME', '_LARGE_INTEGER', '_POINTL', '_RECTL', '_SMALL_RECT',
+           '_ULARGE_INTEGER', 'tagMSG', 'tagPOINT', 'tagRECT', 'tagSIZE']
diff --git a/Lib/difflib.py b/Lib/difflib.py
index 55f69ba..3e28b18 100644
--- a/Lib/difflib.py
+++ b/Lib/difflib.py
@@ -86,8 +86,7 @@
     >>> for block in s.get_matching_blocks():
     ...     print "a[%d] and b[%d] match for %d elements" % block
     a[0] and b[0] match for 8 elements
-    a[8] and b[17] match for 6 elements
-    a[14] and b[23] match for 15 elements
+    a[8] and b[17] match for 21 elements
     a[29] and b[38] match for 0 elements
 
     Note that the last tuple returned by .get_matching_blocks() is always a
@@ -101,8 +100,7 @@
     ...     print "%6s a[%d:%d] b[%d:%d]" % opcode
      equal a[0:8] b[0:8]
     insert a[8:8] b[8:17]
-     equal a[8:14] b[17:23]
-     equal a[14:29] b[23:38]
+     equal a[8:29] b[17:38]
 
     See the Differ class for a fancy human-friendly file differencer, which
     uses SequenceMatcher both to compare sequences of lines, and to compare
@@ -461,7 +459,11 @@
 
         Each triple is of the form (i, j, n), and means that
         a[i:i+n] == b[j:j+n].  The triples are monotonically increasing in
-        i and in j.
+        i and in j.  New in Python 2.5, it's also guaranteed that if
+        (i, j, n) and (i', j', n') are adjacent triples in the list, and
+        the second is not the last triple in the list, then i+n != i' or
+        j+n != j'.  IOW, adjacent triples never describe adjacent equal
+        blocks.
 
         The last triple is a dummy, (len(a), len(b), 0), and is the only
         triple with n==0.
@@ -475,28 +477,52 @@
             return self.matching_blocks
         la, lb = len(self.a), len(self.b)
 
-        indexed_blocks = []
+        # This is most naturally expressed as a recursive algorithm, but
+        # at least one user bumped into extreme use cases that exceeded
+        # the recursion limit on their box.  So, now we maintain a list
+        # ('queue`) of blocks we still need to look at, and append partial
+        # results to `matching_blocks` in a loop; the matches are sorted
+        # at the end.
         queue = [(0, la, 0, lb)]
+        matching_blocks = []
         while queue:
-            # builds list of matching blocks covering a[alo:ahi] and
-            # b[blo:bhi], appending them in increasing order to answer
             alo, ahi, blo, bhi = queue.pop()
-
+            i, j, k = x = self.find_longest_match(alo, ahi, blo, bhi)
             # a[alo:i] vs b[blo:j] unknown
             # a[i:i+k] same as b[j:j+k]
             # a[i+k:ahi] vs b[j+k:bhi] unknown
-            i, j, k = x = self.find_longest_match(alo, ahi, blo, bhi)
-
-            if k:
+            if k:   # if k is 0, there was no matching block
+                matching_blocks.append(x)
                 if alo < i and blo < j:
                     queue.append((alo, i, blo, j))
-                indexed_blocks.append((i, x))
                 if i+k < ahi and j+k < bhi:
                     queue.append((i+k, ahi, j+k, bhi))
-        indexed_blocks.sort()
+        matching_blocks.sort()
 
-        self.matching_blocks = [elem[1] for elem in indexed_blocks]
-        self.matching_blocks.append( (la, lb, 0) )
+        # It's possible that we have adjacent equal blocks in the
+        # matching_blocks list now.  Starting with 2.5, this code was added
+        # to collapse them.
+        i1 = j1 = k1 = 0
+        non_adjacent = []
+        for i2, j2, k2 in matching_blocks:
+            # Is this block adjacent to i1, j1, k1?
+            if i1 + k1 == i2 and j1 + k1 == j2:
+                # Yes, so collapse them -- this just increases the length of
+                # the first block by the length of the second, and the first
+                # block so lengthened remains the block to compare against.
+                k1 += k2
+            else:
+                # Not adjacent.  Remember the first block (k1==0 means it's
+                # the dummy we started with), and make the second block the
+                # new block to compare against.
+                if k1:
+                    non_adjacent.append((i1, j1, k1))
+                i1, j1, k1 = i2, j2, k2
+        if k1:
+            non_adjacent.append((i1, j1, k1))
+
+        non_adjacent.append( (la, lb, 0) )
+        self.matching_blocks = non_adjacent
         return self.matching_blocks
 
     def get_opcodes(self):
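The collapsing pass above can be read in isolation: the sorted (i, j, n) triples are scanned once, any triple that starts exactly where the previous one ends is folded into it, and the (len(a), len(b), 0) sentinel is appended last. A standalone sketch of the same pass (collapse_adjacent is a hypothetical helper, with made-up triples matching the docstring example):

def collapse_adjacent(matching_blocks, la, lb):
    # Merge runs of adjacent equal blocks, then append the dummy triple.
    i1 = j1 = k1 = 0
    non_adjacent = []
    for i2, j2, k2 in matching_blocks:
        if i1 + k1 == i2 and j1 + k1 == j2:
            k1 += k2                      # extend the block being built
        else:
            if k1:
                non_adjacent.append((i1, j1, k1))
            i1, j1, k1 = i2, j2, k2
    if k1:
        non_adjacent.append((i1, j1, k1))
    non_adjacent.append((la, lb, 0))
    return non_adjacent

# (8, 17, 6) and (14, 23, 15) are adjacent, so they merge into (8, 17, 21):
assert collapse_adjacent([(0, 0, 8), (8, 17, 6), (14, 23, 15)], 29, 38) == \
       [(0, 0, 8), (8, 17, 21), (29, 38, 0)]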
@@ -1422,8 +1448,7 @@
                 num_blanks_pending -= 1
                 yield _make_line(lines,'-',0), None, True
                 continue
-            elif s.startswith('--?+') or s.startswith('--+') or \
-                 s.startswith('- '):
+            elif s.startswith(('--?+', '--+', '- ')):
                 # in delete block and see an intraline change or unchanged line
                 # coming: yield the delete line and then blanks
                 from_line,to_line = _make_line(lines,'-',0), None
@@ -1447,7 +1472,7 @@
                 num_blanks_pending += 1
                 yield None, _make_line(lines,'+',1), True
                 continue
-            elif s.startswith('+ ') or s.startswith('+-'):
+            elif s.startswith(('+ ', '+-')):
                 # will be leaving an add block: yield blanks then add line
                 from_line, to_line = None, _make_line(lines,'+',1)
                 num_blanks_to_yield,num_blanks_pending = num_blanks_pending+1,0
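Both hunks above lean on str.startswith() accepting a tuple of prefixes, new in Python 2.5: the call returns True as soon as any one prefix matches. For example:

# Equivalent to s.startswith('--?+') or s.startswith('--+') or s.startswith('- ')
assert '--?+ marker line'.startswith(('--?+', '--+', '- '))
assert not '+ added line'.startswith(('--?+', '--+', '- '))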
diff --git a/Lib/distutils/__init__.py b/Lib/distutils/__init__.py
index a1dbb4b..9c60e54 100644
--- a/Lib/distutils/__init__.py
+++ b/Lib/distutils/__init__.py
@@ -12,4 +12,6 @@
 
 __revision__ = "$Id$"
 
-__version__ = "2.4.0"
+import sys
+__version__ = "%d.%d.%d" % sys.version_info[:3]
+del sys
diff --git a/Lib/distutils/command/bdist_rpm.py b/Lib/distutils/command/bdist_rpm.py
index 738e3f7..5b09965 100644
--- a/Lib/distutils/command/bdist_rpm.py
+++ b/Lib/distutils/command/bdist_rpm.py
@@ -467,7 +467,8 @@
 
         # rpm scripts
         # figure out default build script
-        def_build = "%s setup.py build" % self.python
+        def_setup_call = "%s %s" % (self.python,os.path.basename(sys.argv[0]))
+        def_build = "%s build" % def_setup_call
         if self.use_rpm_opt_flags:
             def_build = 'env CFLAGS="$RPM_OPT_FLAGS" ' + def_build
 
@@ -481,9 +482,9 @@
             ('prep', 'prep_script', "%setup"),
             ('build', 'build_script', def_build),
             ('install', 'install_script',
-             ("%s setup.py install "
+             ("%s install "
               "--root=$RPM_BUILD_ROOT "
-              "--record=INSTALLED_FILES") % self.python),
+              "--record=INSTALLED_FILES") % def_setup_call),
             ('clean', 'clean_script', "rm -rf $RPM_BUILD_ROOT"),
             ('verifyscript', 'verify_script', None),
             ('pre', 'pre_install', None),
diff --git a/Lib/distutils/command/upload.py b/Lib/distutils/command/upload.py
index 4a9ed39..67ba080 100644
--- a/Lib/distutils/command/upload.py
+++ b/Lib/distutils/command/upload.py
@@ -185,7 +185,7 @@
             http.endheaders()
             http.send(body)
         except socket.error, e:
-            self.announce(e.msg, log.ERROR)
+            self.announce(str(e), log.ERROR)
             return
 
         r = http.getresponse()
diff --git a/Lib/distutils/msvccompiler.py b/Lib/distutils/msvccompiler.py
index d24d0ac..0d72837 100644
--- a/Lib/distutils/msvccompiler.py
+++ b/Lib/distutils/msvccompiler.py
@@ -131,8 +131,10 @@
                 self.set_macro("FrameworkSDKDir", net, "sdkinstallroot")
         except KeyError, exc: #
             raise DistutilsPlatformError, \
-                  ("The .NET Framework SDK needs to be installed before "
-                   "building extensions for Python.")
+                  ("""Python was built with Visual Studio 2003;
+extensions must be built with a compiler that can generate compatible binaries.
+Visual Studio 2003 was not found on this system. If you have Cygwin installed,
+you can try compiling with MingW32, by passing "-c mingw32" to setup.py.""")
 
         p = r"Software\Microsoft\NET Framework Setup\Product"
         for base in HKEYS:
@@ -237,7 +239,7 @@
 
     def initialize(self):
         self.__paths = []
-        if os.environ.has_key("MSSdk") and self.find_exe("cl.exe"):
+        if os.environ.has_key("DISTUTILS_USE_SDK") and os.environ.has_key("MSSdk") and self.find_exe("cl.exe"):
             # Assume that the SDK set up everything alright; don't try to be
             # smarter
             self.cc = "cl.exe"
diff --git a/Lib/distutils/sysconfig.py b/Lib/distutils/sysconfig.py
index e1397a1..76fe256 100644
--- a/Lib/distutils/sysconfig.py
+++ b/Lib/distutils/sysconfig.py
@@ -512,7 +512,7 @@
                 for key in ('LDFLAGS', 'BASECFLAGS'):
                     flags = _config_vars[key]
                     flags = re.sub('-arch\s+\w+\s', ' ', flags)
-                    flags = re.sub('-isysroot [^ \t]* ', ' ', flags)
+                    flags = re.sub('-isysroot [^ \t]*', ' ', flags)
                     _config_vars[key] = flags
 
     if args:
diff --git a/Lib/distutils/unixccompiler.py b/Lib/distutils/unixccompiler.py
index 324819d..6cd14f7 100644
--- a/Lib/distutils/unixccompiler.py
+++ b/Lib/distutils/unixccompiler.py
@@ -78,7 +78,7 @@
         try:
             index = compiler_so.index('-isysroot')
             # Strip this argument and the next one:
-            del compiler_so[index:index+1]
+            del compiler_so[index:index+2]
         except ValueError:
             pass
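The off-by-one fix above matters because '-isysroot' takes a separate path argument: deleting index:index+2 removes both tokens, whereas the old index:index+1 left the SDK path behind as a stray compiler argument. A quick illustration with a made-up argument list:

compiler_so = ['gcc', '-isysroot', '/Developer/SDKs/MacOSX10.4u.sdk', '-c']
index = compiler_so.index('-isysroot')
del compiler_so[index:index+2]      # strips the flag and its path argument
assert compiler_so == ['gcc', '-c']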
 
diff --git a/Lib/doctest.py b/Lib/doctest.py
index 47b3aae..fe734b3 100644
--- a/Lib/doctest.py
+++ b/Lib/doctest.py
@@ -95,7 +95,7 @@
 
 import __future__
 
-import sys, traceback, inspect, linecache, os, re, types
+import sys, traceback, inspect, linecache, os, re
 import unittest, difflib, pdb, tempfile
 import warnings
 from StringIO import StringIO
@@ -821,6 +821,11 @@
         # Recursively explore `obj`, extracting DocTests.
         tests = []
         self._find(tests, obj, name, module, source_lines, globs, {})
+        # Sort the tests by alpha order of names, for consistency in
+        # verbose-mode output.  This was a feature of doctest in Pythons
+        # <= 2.3 that got lost by accident in 2.4.  It was repaired in
+        # 2.4.4 and 2.5.
+        tests.sort()
         return tests
 
     def _from_module(self, module, object):
diff --git a/Lib/dummy_thread.py b/Lib/dummy_thread.py
index 21fd03f..a72c927 100644
--- a/Lib/dummy_thread.py
+++ b/Lib/dummy_thread.py
@@ -20,6 +20,7 @@
            'interrupt_main', 'LockType']
 
 import traceback as _traceback
+import warnings
 
 class error(Exception):
     """Dummy implementation of thread.error."""
@@ -75,6 +76,12 @@
     """Dummy implementation of thread.allocate_lock()."""
     return LockType()
 
+def stack_size(size=None):
+    """Dummy implementation of thread.stack_size()."""
+    if size is not None:
+        raise error("setting thread stack size not supported")
+    return 0
+
 class LockType(object):
     """Class implementing dummy implementation of thread.LockType.
 
diff --git a/Lib/email/__init__.py b/Lib/email/__init__.py
index f01260f..8d230fd 100644
--- a/Lib/email/__init__.py
+++ b/Lib/email/__init__.py
@@ -4,7 +4,7 @@
 
 """A package for parsing, handling, and generating email messages."""
 
-__version__ = '4.0a2'
+__version__ = '4.0.1'
 
 __all__ = [
     # Old names
diff --git a/Lib/email/message.py b/Lib/email/message.py
index 50d90b4..79c5c4c 100644
--- a/Lib/email/message.py
+++ b/Lib/email/message.py
@@ -747,7 +747,18 @@
         if isinstance(charset, tuple):
             # RFC 2231 encoded, so decode it, and it better end up as ascii.
             pcharset = charset[0] or 'us-ascii'
-            charset = unicode(charset[2], pcharset).encode('us-ascii')
+            try:
+                # LookupError will be raised if the charset isn't known to
+                # Python.  UnicodeError will be raised if the encoded text
+                # contains a character not in the charset.
+                charset = unicode(charset[2], pcharset).encode('us-ascii')
+            except (LookupError, UnicodeError):
+                charset = charset[2]
+        # charset characters must be in the us-ascii range
+        try:
+            charset = unicode(charset, 'us-ascii').encode('us-ascii')
+        except UnicodeError:
+            return failobj
         # RFC 2046, $4.1.2 says charsets are not case sensitive
         return charset.lower()
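The added try/except blocks above amount to two guards: decode the RFC 2231 segment with its declared charset when Python knows that charset, then insist that the resulting charset name is itself plain ASCII, falling back to failobj otherwise. A standalone Python 2 sketch of the second guard (charset_or_failobj is a hypothetical helper name):

def charset_or_failobj(charset, failobj=None):
    # Mirrors the final check above: a charset name containing non-ASCII
    # bytes is rejected rather than returned.
    try:
        charset = unicode(charset, 'us-ascii').encode('us-ascii')
    except UnicodeError:
        return failobj
    return charset.lower()

assert charset_or_failobj('UTF-8') == 'utf-8'
assert charset_or_failobj('utf-8\xe2\x80\x9d') is None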
 
diff --git a/Lib/email/test/test_email.py b/Lib/email/test/test_email.py
index a197a36..13801dc 100644
--- a/Lib/email/test/test_email.py
+++ b/Lib/email/test/test_email.py
@@ -3005,26 +3005,67 @@
 
 '''
         msg = email.message_from_string(m)
-        self.assertEqual(msg.get_param('NAME'),
-                         (None, None, 'file____C__DOCUMENTS_20AND_20SETTINGS_FABIEN_LOCAL_20SETTINGS_TEMP_nsmail.htm'))
+        param = msg.get_param('NAME')
+        self.failIf(isinstance(param, tuple))
+        self.assertEqual(
+            param,
+            'file____C__DOCUMENTS_20AND_20SETTINGS_FABIEN_LOCAL_20SETTINGS_TEMP_nsmail.htm')
 
     def test_rfc2231_no_language_or_charset_in_filename(self):
         m = '''\
 Content-Disposition: inline;
+\tfilename*0*="''This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         'This is even more ***fun*** is it not.pdf')
+
+    def test_rfc2231_no_language_or_charset_in_filename_encoded(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0*="''This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         'This is even more ***fun*** is it not.pdf')
+
+    def test_rfc2231_partly_encoded(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0="''This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(
+            msg.get_filename(),
+            'This%20is%20even%20more%20***fun*** is it not.pdf')
+
+    def test_rfc2231_partly_nonencoded(self):
+        m = '''\
+Content-Disposition: inline;
 \tfilename*0="This%20is%20even%20more%20";
 \tfilename*1="%2A%2A%2Afun%2A%2A%2A%20";
 \tfilename*2="is it not.pdf"
 
 '''
         msg = email.message_from_string(m)
-        self.assertEqual(msg.get_filename(),
-                         'This is even more ***fun*** is it not.pdf')
+        self.assertEqual(
+            msg.get_filename(),
+            'This%20is%20even%20more%20%2A%2A%2Afun%2A%2A%2A%20is it not.pdf')
 
     def test_rfc2231_no_language_or_charset_in_boundary(self):
         m = '''\
 Content-Type: multipart/alternative;
-\tboundary*0="This%20is%20even%20more%20";
-\tboundary*1="%2A%2A%2Afun%2A%2A%2A%20";
+\tboundary*0*="''This%20is%20even%20more%20";
+\tboundary*1*="%2A%2A%2Afun%2A%2A%2A%20";
 \tboundary*2="is it not.pdf"
 
 '''
@@ -3036,8 +3077,8 @@
         # This is a nonsensical charset value, but tests the code anyway
         m = '''\
 Content-Type: text/plain;
-\tcharset*0="This%20is%20even%20more%20";
-\tcharset*1="%2A%2A%2Afun%2A%2A%2A%20";
+\tcharset*0*="This%20is%20even%20more%20";
+\tcharset*1*="%2A%2A%2Afun%2A%2A%2A%20";
 \tcharset*2="is it not.pdf"
 
 '''
@@ -3045,15 +3086,145 @@
         self.assertEqual(msg.get_content_charset(),
                          'this is even more ***fun*** is it not.pdf')
 
+    def test_rfc2231_bad_encoding_in_filename(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0*="bogus'xx'This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         'This is even more ***fun*** is it not.pdf')
+
+    def test_rfc2231_bad_encoding_in_charset(self):
+        m = """\
+Content-Type: text/plain; charset*=bogus''utf-8%E2%80%9D
+
+"""
+        msg = email.message_from_string(m)
+        # This should return None because non-ascii characters in the charset
+        # are not allowed.
+        self.assertEqual(msg.get_content_charset(), None)
+
+    def test_rfc2231_bad_character_in_charset(self):
+        m = """\
+Content-Type: text/plain; charset*=ascii''utf-8%E2%80%9D
+
+"""
+        msg = email.message_from_string(m)
+        # This should return None because non-ascii characters in the charset
+        # are not allowed.
+        self.assertEqual(msg.get_content_charset(), None)
+
+    def test_rfc2231_bad_character_in_filename(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0*="ascii'xx'This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2*="is it not.pdf%E2"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         u'This is even more ***fun*** is it not.pdf\ufffd')
+
     def test_rfc2231_unknown_encoding(self):
         m = """\
 Content-Transfer-Encoding: 8bit
-Content-Disposition: inline; filename*0=X-UNKNOWN''myfile.txt
+Content-Disposition: inline; filename*=X-UNKNOWN''myfile.txt
 
 """
         msg = email.message_from_string(m)
         self.assertEqual(msg.get_filename(), 'myfile.txt')
 
+    def test_rfc2231_single_tick_in_filename_extended(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0*=\"Frank's\"; name*1*=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, None)
+        eq(language, None)
+        eq(s, "Frank's Document")
+
+    def test_rfc2231_single_tick_in_filename(self):
+        m = """\
+Content-Type: application/x-foo; name*0=\"Frank's\"; name*1=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        param = msg.get_param('name')
+        self.failIf(isinstance(param, tuple))
+        self.assertEqual(param, "Frank's Document")
+
+    def test_rfc2231_tick_attack_extended(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0*=\"us-ascii'en-us'Frank's\"; name*1*=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, 'us-ascii')
+        eq(language, 'en-us')
+        eq(s, "Frank's Document")
+
+    def test_rfc2231_tick_attack(self):
+        m = """\
+Content-Type: application/x-foo;
+\tname*0=\"us-ascii'en-us'Frank's\"; name*1=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        param = msg.get_param('name')
+        self.failIf(isinstance(param, tuple))
+        self.assertEqual(param, "us-ascii'en-us'Frank's Document")
+
+    def test_rfc2231_no_extended_values(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo; name=\"Frank's Document\"
+
+"""
+        msg = email.message_from_string(m)
+        eq(msg.get_param('name'), "Frank's Document")
+
+    def test_rfc2231_encoded_then_unencoded_segments(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0*=\"us-ascii'en-us'My\";
+\tname*1=\" Document\";
+\tname*2*=\" For You\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, 'us-ascii')
+        eq(language, 'en-us')
+        eq(s, 'My Document For You')
+
+    def test_rfc2231_unencoded_then_encoded_segments(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0=\"us-ascii'en-us'My\";
+\tname*1*=\" Document\";
+\tname*2*=\" For You\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, 'us-ascii')
+        eq(language, 'en-us')
+        eq(s, 'My Document For You')
+
 
 
 def _testclasses():
diff --git a/Lib/email/test/test_email_renamed.py b/Lib/email/test/test_email_renamed.py
index 95d06cb..30f39b9 100644
--- a/Lib/email/test/test_email_renamed.py
+++ b/Lib/email/test/test_email_renamed.py
@@ -3011,26 +3011,67 @@
 
 '''
         msg = email.message_from_string(m)
-        self.assertEqual(msg.get_param('NAME'),
-                         (None, None, 'file____C__DOCUMENTS_20AND_20SETTINGS_FABIEN_LOCAL_20SETTINGS_TEMP_nsmail.htm'))
+        param = msg.get_param('NAME')
+        self.failIf(isinstance(param, tuple))
+        self.assertEqual(
+            param,
+            'file____C__DOCUMENTS_20AND_20SETTINGS_FABIEN_LOCAL_20SETTINGS_TEMP_nsmail.htm')
 
     def test_rfc2231_no_language_or_charset_in_filename(self):
         m = '''\
 Content-Disposition: inline;
+\tfilename*0*="''This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         'This is even more ***fun*** is it not.pdf')
+
+    def test_rfc2231_no_language_or_charset_in_filename_encoded(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0*="''This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         'This is even more ***fun*** is it not.pdf')
+
+    def test_rfc2231_partly_encoded(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0="''This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(
+            msg.get_filename(),
+            'This%20is%20even%20more%20***fun*** is it not.pdf')
+
+    def test_rfc2231_partly_nonencoded(self):
+        m = '''\
+Content-Disposition: inline;
 \tfilename*0="This%20is%20even%20more%20";
 \tfilename*1="%2A%2A%2Afun%2A%2A%2A%20";
 \tfilename*2="is it not.pdf"
 
 '''
         msg = email.message_from_string(m)
-        self.assertEqual(msg.get_filename(),
-                         'This is even more ***fun*** is it not.pdf')
+        self.assertEqual(
+            msg.get_filename(),
+            'This%20is%20even%20more%20%2A%2A%2Afun%2A%2A%2A%20is it not.pdf')
 
     def test_rfc2231_no_language_or_charset_in_boundary(self):
         m = '''\
 Content-Type: multipart/alternative;
-\tboundary*0="This%20is%20even%20more%20";
-\tboundary*1="%2A%2A%2Afun%2A%2A%2A%20";
+\tboundary*0*="''This%20is%20even%20more%20";
+\tboundary*1*="%2A%2A%2Afun%2A%2A%2A%20";
 \tboundary*2="is it not.pdf"
 
 '''
@@ -3042,8 +3083,8 @@
         # This is a nonsensical charset value, but tests the code anyway
         m = '''\
 Content-Type: text/plain;
-\tcharset*0="This%20is%20even%20more%20";
-\tcharset*1="%2A%2A%2Afun%2A%2A%2A%20";
+\tcharset*0*="This%20is%20even%20more%20";
+\tcharset*1*="%2A%2A%2Afun%2A%2A%2A%20";
 \tcharset*2="is it not.pdf"
 
 '''
@@ -3051,15 +3092,145 @@
         self.assertEqual(msg.get_content_charset(),
                          'this is even more ***fun*** is it not.pdf')
 
+    def test_rfc2231_bad_encoding_in_filename(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0*="bogus'xx'This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2="is it not.pdf"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         'This is even more ***fun*** is it not.pdf')
+
+    def test_rfc2231_bad_encoding_in_charset(self):
+        m = """\
+Content-Type: text/plain; charset*=bogus''utf-8%E2%80%9D
+
+"""
+        msg = email.message_from_string(m)
+        # This should return None because non-ascii characters in the charset
+        # are not allowed.
+        self.assertEqual(msg.get_content_charset(), None)
+
+    def test_rfc2231_bad_character_in_charset(self):
+        m = """\
+Content-Type: text/plain; charset*=ascii''utf-8%E2%80%9D
+
+"""
+        msg = email.message_from_string(m)
+        # This should return None because non-ascii characters in the charset
+        # are not allowed.
+        self.assertEqual(msg.get_content_charset(), None)
+
+    def test_rfc2231_bad_character_in_filename(self):
+        m = '''\
+Content-Disposition: inline;
+\tfilename*0*="ascii'xx'This%20is%20even%20more%20";
+\tfilename*1*="%2A%2A%2Afun%2A%2A%2A%20";
+\tfilename*2*="is it not.pdf%E2"
+
+'''
+        msg = email.message_from_string(m)
+        self.assertEqual(msg.get_filename(),
+                         u'This is even more ***fun*** is it not.pdf\ufffd')
+
     def test_rfc2231_unknown_encoding(self):
         m = """\
 Content-Transfer-Encoding: 8bit
-Content-Disposition: inline; filename*0=X-UNKNOWN''myfile.txt
+Content-Disposition: inline; filename*=X-UNKNOWN''myfile.txt
 
 """
         msg = email.message_from_string(m)
         self.assertEqual(msg.get_filename(), 'myfile.txt')
 
+    def test_rfc2231_single_tick_in_filename_extended(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0*=\"Frank's\"; name*1*=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, None)
+        eq(language, None)
+        eq(s, "Frank's Document")
+
+    def test_rfc2231_single_tick_in_filename(self):
+        m = """\
+Content-Type: application/x-foo; name*0=\"Frank's\"; name*1=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        param = msg.get_param('name')
+        self.failIf(isinstance(param, tuple))
+        self.assertEqual(param, "Frank's Document")
+
+    def test_rfc2231_tick_attack_extended(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0*=\"us-ascii'en-us'Frank's\"; name*1*=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, 'us-ascii')
+        eq(language, 'en-us')
+        eq(s, "Frank's Document")
+
+    def test_rfc2231_tick_attack(self):
+        m = """\
+Content-Type: application/x-foo;
+\tname*0=\"us-ascii'en-us'Frank's\"; name*1=\" Document\"
+
+"""
+        msg = email.message_from_string(m)
+        param = msg.get_param('name')
+        self.failIf(isinstance(param, tuple))
+        self.assertEqual(param, "us-ascii'en-us'Frank's Document")
+
+    def test_rfc2231_no_extended_values(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo; name=\"Frank's Document\"
+
+"""
+        msg = email.message_from_string(m)
+        eq(msg.get_param('name'), "Frank's Document")
+
+    def test_rfc2231_encoded_then_unencoded_segments(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0*=\"us-ascii'en-us'My\";
+\tname*1=\" Document\";
+\tname*2*=\" For You\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, 'us-ascii')
+        eq(language, 'en-us')
+        eq(s, 'My Document For You')
+
+    def test_rfc2231_unencoded_then_encoded_segments(self):
+        eq = self.assertEqual
+        m = """\
+Content-Type: application/x-foo;
+\tname*0=\"us-ascii'en-us'My\";
+\tname*1*=\" Document\";
+\tname*2*=\" For You\"
+
+"""
+        msg = email.message_from_string(m)
+        charset, language, s = msg.get_param('name')
+        eq(charset, 'us-ascii')
+        eq(language, 'en-us')
+        eq(s, 'My Document For You')
+
 
 
 def _testclasses():
diff --git a/Lib/email/utils.py b/Lib/email/utils.py
index 250eb19..26ebb0e 100644
--- a/Lib/email/utils.py
+++ b/Lib/email/utils.py
@@ -25,6 +25,7 @@
 import base64
 import random
 import socket
+import urllib
 import warnings
 from cStringIO import StringIO
 
@@ -45,6 +46,7 @@
 EMPTYSTRING = ''
 UEMPTYSTRING = u''
 CRLF = '\r\n'
+TICK = "'"
 
 specialsre = re.compile(r'[][\\()<>@,:;".]')
 escapesre = re.compile(r'[][\\()"]')
@@ -230,12 +232,14 @@
 # RFC2231-related functions - parameter encoding and decoding
 def decode_rfc2231(s):
     """Decode string according to RFC 2231"""
-    import urllib
-    parts = s.split("'", 2)
-    if len(parts) == 1:
-        return None, None, urllib.unquote(s)
-    charset, language, s = parts
-    return charset, language, urllib.unquote(s)
+    parts = s.split(TICK, 2)
+    if len(parts) <= 2:
+        return None, None, s
+    if len(parts) > 3:
+        charset, language = parts[:2]
+        s = TICK.join(parts[2:])
+        return charset, language, s
+    return parts
 
 
 def encode_rfc2231(s, charset=None, language=None):
@@ -259,37 +263,54 @@
 def decode_params(params):
     """Decode parameters list according to RFC 2231.
 
-    params is a sequence of 2-tuples containing (content type, string value).
+    params is a sequence of 2-tuples containing (param name, string value).
     """
+    # Copy params so we don't mess with the original
+    params = params[:]
     new_params = []
-    # maps parameter's name to a list of continuations
+    # Map parameter's name to a list of continuations.  The values are a
+    # 3-tuple of the continuation number, the string value, and a flag
+    # specifying whether a particular segment is %-encoded.
     rfc2231_params = {}
-    # params is a sequence of 2-tuples containing (content_type, string value)
-    name, value = params[0]
+    name, value = params.pop(0)
     new_params.append((name, value))
-    # Cycle through each of the rest of the parameters.
-    for name, value in params[1:]:
+    while params:
+        name, value = params.pop(0)
+        if name.endswith('*'):
+            encoded = True
+        else:
+            encoded = False
         value = unquote(value)
         mo = rfc2231_continuation.match(name)
         if mo:
             name, num = mo.group('name', 'num')
             if num is not None:
                 num = int(num)
-            rfc2231_param1 = rfc2231_params.setdefault(name, [])
-            rfc2231_param1.append((num, value))
+            rfc2231_params.setdefault(name, []).append((num, value, encoded))
         else:
             new_params.append((name, '"%s"' % quote(value)))
     if rfc2231_params:
         for name, continuations in rfc2231_params.items():
             value = []
+            extended = False
             # Sort by number
             continuations.sort()
-            # And now append all values in num order
-            for num, continuation in continuations:
-                value.append(continuation)
-            charset, language, value = decode_rfc2231(EMPTYSTRING.join(value))
-            new_params.append(
-                (name, (charset, language, '"%s"' % quote(value))))
+            # And now append all values in numerical order, converting
+            # %-encodings for the encoded segments.  If any of the
+            # continuation names ends in a *, then the entire string, after
+            # decoding segments and concatenating, must have the charset and
+            # language specifiers at the beginning of the string.
+            for num, s, encoded in continuations:
+                if encoded:
+                    s = urllib.unquote(s)
+                    extended = True
+                value.append(s)
+            value = quote(EMPTYSTRING.join(value))
+            if extended:
+                charset, language, value = decode_rfc2231(value)
+                new_params.append((name, (charset, language, '"%s"' % value)))
+            else:
+                new_params.append((name, '"%s"' % value))
     return new_params
 
 def collapse_rfc2231_value(value, errors='replace',
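
For reference, a minimal usage sketch (not part of the patch) of how the reworked decode_params surfaces through Message.get_param: an extended (starred) RFC 2231 parameter comes back as a (charset, language, value) 3-tuple, while a plain continuation comes back as an ordinary string, exactly as the tests above exercise. The header is the RFC 2231 example; expected results are noted in comments.

    import email

    m = ("Content-Type: application/x-stuff;\n"
         "\ttitle*0*=us-ascii'en'This%20is%20;\n"
         "\ttitle*1*=%2A%2A%2Afun%2A%2A%2A\n"
         "\n")
    msg = email.message_from_string(m)
    charset, language, value = msg.get_param('title')
    # charset == 'us-ascii', language == 'en', value == 'This is ***fun***'
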
diff --git a/Lib/encodings/mbcs.py b/Lib/encodings/mbcs.py
index ff77fde..baf46cb 100644
--- a/Lib/encodings/mbcs.py
+++ b/Lib/encodings/mbcs.py
@@ -7,42 +7,39 @@
 (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.
 
 """
+# Import them explicitly to cause an ImportError
+# on non-Windows systems
+from codecs import mbcs_encode, mbcs_decode
+# for IncrementalDecoder, IncrementalEncoder, ...
 import codecs
 
 ### Codec APIs
 
-class Codec(codecs.Codec):
+encode = mbcs_encode
 
-    # Note: Binding these as C functions will result in the class not
-    # converting them to methods. This is intended.
-    encode = codecs.mbcs_encode
-    decode = codecs.mbcs_decode
+def decode(input, errors='strict'):
+    return mbcs_decode(input, errors, True)
 
 class IncrementalEncoder(codecs.IncrementalEncoder):
     def encode(self, input, final=False):
-        return codecs.mbcs_encode(input,self.errors)[0]
+        return mbcs_encode(input, self.errors)[0]
 
-class IncrementalDecoder(codecs.IncrementalDecoder):
-    def decode(self, input, final=False):
-        return codecs.mbcs_decode(input,self.errors)[0]
-class StreamWriter(Codec,codecs.StreamWriter):
-    pass
+class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
+    _buffer_decode = mbcs_decode
 
-class StreamReader(Codec,codecs.StreamReader):
-    pass
+class StreamWriter(codecs.StreamWriter):
+    encode = mbcs_encode
 
-class StreamConverter(StreamWriter,StreamReader):
-
-    encode = codecs.mbcs_decode
-    decode = codecs.mbcs_encode
+class StreamReader(codecs.StreamReader):
+    decode = mbcs_decode
 
 ### encodings module API
 
 def getregentry():
     return codecs.CodecInfo(
         name='mbcs',
-        encode=Codec.encode,
-        decode=Codec.decode,
+        encode=encode,
+        decode=decode,
         incrementalencoder=IncrementalEncoder,
         incrementaldecoder=IncrementalDecoder,
         streamreader=StreamReader,
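
The new mbcs layout - plain module-level encode/decode functions plus a BufferedIncrementalDecoder whose _buffer_decode is the C codec function itself - works for any codec with stateless C-level helpers. A sketch of the same pattern applied to the always-available utf-8 helpers, for illustration only (mbcs itself exists only on Windows):

    import codecs
    from codecs import utf_8_encode, utf_8_decode

    encode = utf_8_encode

    def decode(input, errors='strict'):
        # final=True: decode the whole input in one call
        return utf_8_decode(input, errors, True)

    class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
        # a built-in function bound as a class attribute is not turned into
        # an unbound method, so it can serve as _buffer_decode directly
        _buffer_decode = utf_8_decode
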
diff --git a/Lib/encodings/punycode.py b/Lib/encodings/punycode.py
index 2cde8b9..d97200f 100644
--- a/Lib/encodings/punycode.py
+++ b/Lib/encodings/punycode.py
@@ -214,9 +214,9 @@
 
 class IncrementalDecoder(codecs.IncrementalDecoder):
     def decode(self, input, final=False):
-        if errors not in ('strict', 'replace', 'ignore'):
-            raise UnicodeError, "Unsupported error handling "+errors
-        return punycode_decode(input, errors)
+        if self.errors not in ('strict', 'replace', 'ignore'):
+            raise UnicodeError, "Unsupported error handling "+self.errors
+        return punycode_decode(input, self.errors)
 
 class StreamWriter(Codec,codecs.StreamWriter):
     pass
diff --git a/Lib/encodings/utf_8_sig.py b/Lib/encodings/utf_8_sig.py
index cd14ab0..f05f6b8 100644
--- a/Lib/encodings/utf_8_sig.py
+++ b/Lib/encodings/utf_8_sig.py
@@ -30,9 +30,9 @@
     def encode(self, input, final=False):
         if self.first:
             self.first = False
-            return codecs.BOM_UTF8 + codecs.utf_8_encode(input, errors)[0]
+            return codecs.BOM_UTF8 + codecs.utf_8_encode(input, self.errors)[0]
         else:
-            return codecs.utf_8_encode(input, errors)[0]
+            return codecs.utf_8_encode(input, self.errors)[0]
 
     def reset(self):
         codecs.IncrementalEncoder.reset(self)
diff --git a/Lib/encodings/uu_codec.py b/Lib/encodings/uu_codec.py
index 0877fe1..43fb93c 100644
--- a/Lib/encodings/uu_codec.py
+++ b/Lib/encodings/uu_codec.py
@@ -102,11 +102,11 @@
 
 class IncrementalEncoder(codecs.IncrementalEncoder):
     def encode(self, input, final=False):
-        return uu_encode(input, errors)[0]
+        return uu_encode(input, self.errors)[0]
 
 class IncrementalDecoder(codecs.IncrementalDecoder):
     def decode(self, input, final=False):
-        return uu_decode(input, errors)[0]
+        return uu_decode(input, self.errors)[0]
 
 class StreamWriter(Codec,codecs.StreamWriter):
     pass
diff --git a/Lib/gzip.py b/Lib/gzip.py
index 860accc..0bf29e8 100644
--- a/Lib/gzip.py
+++ b/Lib/gzip.py
@@ -315,7 +315,13 @@
     def close(self):
         if self.mode == WRITE:
             self.fileobj.write(self.compress.flush())
-            write32(self.fileobj, self.crc)
+            # The native zlib crc is an unsigned 32-bit integer, but
+            # the Python wrapper implicitly casts that to a signed C
+            # long.  So, on a 32-bit box self.crc may "look negative",
+            # while the same crc on a 64-bit box may "look positive".
+            # To avoid irksome warnings from the `struct` module, force
+            # it to look positive on all boxes.
+            write32u(self.fileobj, LOWU32(self.crc))
             # self.size may exceed 2GB, or even 4GB
             write32u(self.fileobj, LOWU32(self.size))
             self.fileobj = None
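
A small worked sketch (outside the patch) of the masking trick described in the comment above, mirroring gzip's LOWU32 helper used in the hunk: zlib.crc32() can come back negative, and masking with 0xffffffffL keeps the same low 32 bits but always as a non-negative value that struct.pack('<L', ...) accepts without complaint.

    import struct, zlib

    def LOWU32(i):
        # keep only the low 32 bits, as a non-negative long
        return i & 0xffffffffL

    crc = zlib.crc32("some data")             # can "look negative" on a 32-bit box
    packed = struct.pack("<L", LOWU32(crc))   # always packs cleanly as unsigned
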
diff --git a/Lib/httplib.py b/Lib/httplib.py
index 36381de..5ae5efc 100644
--- a/Lib/httplib.py
+++ b/Lib/httplib.py
@@ -3,7 +3,7 @@
 <intro stuff goes here>
 <other stuff, too>
 
-HTTPConnection go through a number of "states", which defines when a client
+HTTPConnection goes through a number of "states", which define when a client
 may legally make another request or fetch the response for a particular
 request. This diagram details these state transitions:
 
@@ -926,15 +926,15 @@
         self.__state = _CS_IDLE
 
         if response.will_close:
-            # this effectively passes the connection to the response
-            self.close()
+            # Pass the socket to the response
+            self.sock = None
         else:
             # remember this, so we can tell when it is complete
             self.__response = response
 
         return response
 
-# The next several classes are used to define FakeSocket,a socket-like
+# The next several classes are used to define FakeSocket, a socket-like
 # interface to an SSL connection.
 
 # The primary complexity comes from faking a makefile() method.  The
diff --git a/Lib/idlelib/Bindings.py b/Lib/idlelib/Bindings.py
index b5e90b0..d24be3f 100644
--- a/Lib/idlelib/Bindings.py
+++ b/Lib/idlelib/Bindings.py
@@ -80,6 +80,32 @@
    ]),
 ]
 
+import sys
+if sys.platform == 'darwin' and '.app' in sys.executable:
+    # Running as a proper MacOS application bundle. This block restructures
+    # the menus a little to make them conform better to the HIG.
+
+    quitItem = menudefs[0][1][-1]
+    closeItem = menudefs[0][1][-2]
+
+    # Remove the last 3 items of the file menu: a separator, close window and
+    # quit. Close window will be reinserted just above the save item, where
+    # it should be according to the HIG. Quit is in the application menu.
+    del menudefs[0][1][-3:]
+    menudefs[0][1].insert(6, closeItem)
+
+    # Remove the 'About' entry from the help menu, it is in the application
+    # menu
+    del menudefs[-1][1][0:2]
+
+    menudefs.insert(0,
+            ('application', [
+                ('About IDLE', '<<about-idle>>'),
+                None,
+                ('_Preferences....', '<<open-config-dialog>>'),
+            ]))
+
+
 default_keydefs = idleConf.GetCurrentKeySet()
 
 del sys
diff --git a/Lib/idlelib/CREDITS.txt b/Lib/idlelib/CREDITS.txt
index 6f4e95d..e838c03 100644
--- a/Lib/idlelib/CREDITS.txt
+++ b/Lib/idlelib/CREDITS.txt
@@ -19,17 +19,18 @@
 subprocess, and made a number of usability enhancements.
 
 Other contributors include Raymond Hettinger, Tony Lownds (Mac integration),
-Neal Norwitz (code check and clean-up), and Chui Tey (RPC integration, debugger
-integration and persistent breakpoints).
+Neal Norwitz (code check and clean-up), Ronald Oussoren (Mac integration),
+Noam Raphael (Code Context, Call Tips, many other patches), and Chui Tey (RPC
+integration, debugger integration and persistent breakpoints).
 
-Scott David Daniels, Hernan Foffani, Christos Georgiou, Martin v. Löwis, 
-Jason Orendorff, Noam Raphael, Josh Robb, Nigel Rowe, Bruce Sherwood, and
-Jeff Shute have submitted useful patches.  Thanks, guys!
+Scott David Daniels, Tal Einat, Hernan Foffani, Christos Georgiou,
+Martin v. Löwis, Jason Orendorff, Josh Robb, Nigel Rowe, Bruce Sherwood,
+and Jeff Shute have submitted useful patches.  Thanks, guys!
 
 For additional details refer to NEWS.txt and Changelog.
 
-Please contact the IDLE maintainer to have yourself included here if you
-are one of those we missed! 
+Please contact the IDLE maintainer (kbk@shore.net) to have yourself included
+here if you are one of those we missed!
 
 
 
diff --git a/Lib/idlelib/CallTipWindow.py b/Lib/idlelib/CallTipWindow.py
index afd4439..2223885 100644
--- a/Lib/idlelib/CallTipWindow.py
+++ b/Lib/idlelib/CallTipWindow.py
@@ -49,7 +49,11 @@
         """
         # truncate overly long calltip
         if len(text) >= 79:
-            text = text[:75] + ' ...'
+            textlines = text.splitlines()
+            for i, line in enumerate(textlines):
+                if len(line) > 79:
+                    textlines[i] = line[:75] + ' ...'
+            text = '\n'.join(textlines)
         self.text = text
         if self.tipwindow or not self.text:
             return
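
A standalone sketch (hypothetical helper, not part of the patch) of the new truncation rule: rather than chopping the whole tip at 75 characters, only the individual lines longer than 79 characters are shortened, so the short lines of a multi-line signature survive intact.

    def truncate_calltip(text):
        # shorten only the lines that are too long for the tip window
        lines = text.splitlines()
        for i, line in enumerate(lines):
            if len(line) > 79:
                lines[i] = line[:75] + ' ...'
        return '\n'.join(lines)
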
diff --git a/Lib/idlelib/CallTips.py b/Lib/idlelib/CallTips.py
index 47a1d55..997eb13 100644
--- a/Lib/idlelib/CallTips.py
+++ b/Lib/idlelib/CallTips.py
@@ -127,7 +127,7 @@
     argText = ""
     if ob is not None:
         argOffset = 0
-        if type(ob)==types.ClassType:
+        if type(ob) in (types.ClassType, types.TypeType):
             # Look for the highest __init__ in the class chain.
             fob = _find_constructor(ob)
             if fob is None:
diff --git a/Lib/idlelib/CodeContext.py b/Lib/idlelib/CodeContext.py
index 5d55f77..63cc82c 100644
--- a/Lib/idlelib/CodeContext.py
+++ b/Lib/idlelib/CodeContext.py
@@ -11,11 +11,10 @@
 """
 import Tkinter
 from configHandler import idleConf
-from sets import Set
 import re
 from sys import maxint as INFINITY
 
-BLOCKOPENERS = Set(["class", "def", "elif", "else", "except", "finally", "for",
+BLOCKOPENERS = set(["class", "def", "elif", "else", "except", "finally", "for",
                     "if", "try", "while"])
 UPDATEINTERVAL = 100 # millisec
 FONTUPDATEINTERVAL = 1000 # millisec
diff --git a/Lib/idlelib/ColorDelegator.py b/Lib/idlelib/ColorDelegator.py
index f258b34..e55f9e6 100644
--- a/Lib/idlelib/ColorDelegator.py
+++ b/Lib/idlelib/ColorDelegator.py
@@ -8,28 +8,29 @@
 
 DEBUG = False
 
-def any(name, list):
-    return "(?P<%s>" % name + "|".join(list) + ")"
+def any(name, alternates):
+    "Return a named group pattern matching list of alternates."
+    return "(?P<%s>" % name + "|".join(alternates) + ")"
 
 def make_pat():
     kw = r"\b" + any("KEYWORD", keyword.kwlist) + r"\b"
     builtinlist = [str(name) for name in dir(__builtin__)
                                         if not name.startswith('_')]
     # self.file = file("file") :
-    # 1st 'file' colorized normal, 2nd as builtin, 3rd as comment
-    builtin = r"([^.'\"\\]\b|^)" + any("BUILTIN", builtinlist) + r"\b"
+    # 1st 'file' colorized normal, 2nd as builtin, 3rd as string
+    builtin = r"([^.'\"\\#]\b|^)" + any("BUILTIN", builtinlist) + r"\b"
     comment = any("COMMENT", [r"#[^\n]*"])
-    sqstring = r"(\b[rR])?'[^'\\\n]*(\\.[^'\\\n]*)*'?"
-    dqstring = r'(\b[rR])?"[^"\\\n]*(\\.[^"\\\n]*)*"?'
-    sq3string = r"(\b[rR])?'''[^'\\]*((\\.|'(?!''))[^'\\]*)*(''')?"
-    dq3string = r'(\b[rR])?"""[^"\\]*((\\.|"(?!""))[^"\\]*)*(""")?'
+    sqstring = r"(\b[rRuU])?'[^'\\\n]*(\\.[^'\\\n]*)*'?"
+    dqstring = r'(\b[rRuU])?"[^"\\\n]*(\\.[^"\\\n]*)*"?'
+    sq3string = r"(\b[rRuU])?'''[^'\\]*((\\.|'(?!''))[^'\\]*)*(''')?"
+    dq3string = r'(\b[rRuU])?"""[^"\\]*((\\.|"(?!""))[^"\\]*)*(""")?'
     string = any("STRING", [sq3string, dq3string, sqstring, dqstring])
     return kw + "|" + builtin + "|" + comment + "|" + string +\
            "|" + any("SYNC", [r"\n"])
 
 prog = re.compile(make_pat(), re.S)
 idprog = re.compile(r"\s+(\w+)", re.S)
-asprog = re.compile(r".*?\b(as)\b", re.S)
+asprog = re.compile(r".*?\b(as)\b")
 
 class ColorDelegator(Delegator):
 
@@ -208,10 +209,15 @@
                                                  head + "+%dc" % a,
                                                  head + "+%dc" % b)
                             elif value == "import":
-                                # color all the "as" words on same line;
-                                # cheap approximation to the truth
+                                # color all the "as" words on same line, except
+                                # if in a comment; cheap approximation to the
+                                # truth
+                                if '#' in chars:
+                                    endpos = chars.index('#')
+                                else:
+                                    endpos = len(chars)
                                 while True:
-                                    m1 = self.asprog.match(chars, b)
+                                    m1 = self.asprog.match(chars, b, endpos)
                                     if not m1:
                                         break
                                     a, b = m1.span(1)
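
A quick sketch (illustrative only) of what the any() helper above builds and how the colorizer then tells matches apart through named groups:

    import re

    def any(name, alternates):
        "Return a named group pattern matching list of alternates."
        return "(?P<%s>" % name + "|".join(alternates) + ")"

    pat = re.compile(any("KEYWORD", [r"\bdef\b", r"\bclass\b"]) + "|" +
                     any("COMMENT", [r"#[^\n]*"]), re.S)
    m = pat.search("x = 1  # set x")
    # m.groupdict() -> {'KEYWORD': None, 'COMMENT': '# set x'}
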
diff --git a/Lib/idlelib/Debugger.py b/Lib/idlelib/Debugger.py
index 7a9d02f..f56460a 100644
--- a/Lib/idlelib/Debugger.py
+++ b/Lib/idlelib/Debugger.py
@@ -4,6 +4,7 @@
 from Tkinter import *
 from WindowList import ListedToplevel
 from ScrolledList import ScrolledList
+import macosxSupport
 
 
 class Idb(bdb.Bdb):
@@ -322,7 +323,13 @@
 class StackViewer(ScrolledList):
 
     def __init__(self, master, flist, gui):
-        ScrolledList.__init__(self, master, width=80)
+        if macosxSupport.runningAsOSXApp():
+            # At least with the stock AquaTk version on OSX 10.4 you'll
+            # get a shaking GUI that eventually kills IDLE if the width
+            # argument is specified.
+            ScrolledList.__init__(self, master)
+        else:
+            ScrolledList.__init__(self, master, width=80)
         self.flist = flist
         self.gui = gui
         self.stack = []
diff --git a/Lib/idlelib/EditorWindow.py b/Lib/idlelib/EditorWindow.py
index 59440f0..6b8ab63 100644
--- a/Lib/idlelib/EditorWindow.py
+++ b/Lib/idlelib/EditorWindow.py
@@ -17,6 +17,7 @@
 import PyParse
 from configHandler import idleConf
 import aboutDialog, textView, configDialog
+import macosxSupport
 
 # The default tab setting for a Text widget, in average-width characters.
 TK_TABWIDTH_DEFAULT = 8
@@ -66,26 +67,40 @@
                                        'Python%d%d.chm' % sys.version_info[:2])
                 if os.path.isfile(chmfile):
                     dochome = chmfile
+
+            elif macosxSupport.runningAsOSXApp():
+                # documentation is stored inside the python framework
+                dochome = os.path.join(sys.prefix,
+                        'Resources/English.lproj/Documentation/index.html')
+
             dochome = os.path.normpath(dochome)
             if os.path.isfile(dochome):
                 EditorWindow.help_url = dochome
+                if sys.platform == 'darwin':
+                    # Safari requires real file:-URLs
+                    EditorWindow.help_url = 'file://' + EditorWindow.help_url
             else:
                 EditorWindow.help_url = "http://www.python.org/doc/current"
         currentTheme=idleConf.CurrentTheme()
         self.flist = flist
         root = root or flist.root
         self.root = root
+        try:
+            sys.ps1
+        except AttributeError:
+            sys.ps1 = '>>> '
         self.menubar = Menu(root)
         self.top = top = WindowList.ListedToplevel(root, menu=self.menubar)
         if flist:
             self.tkinter_vars = flist.vars
             #self.top.instance_dict makes flist.inversedict available to
             #configDialog.py so it can access all EditorWindow instances
-            self.top.instance_dict=flist.inversedict
+            self.top.instance_dict = flist.inversedict
         else:
             self.tkinter_vars = {}  # keys: Tkinter event names
                                     # values: Tkinter variable instances
-        self.recent_files_path=os.path.join(idleConf.GetUserCfgDir(),
+            self.top.instance_dict = {}
+        self.recent_files_path = os.path.join(idleConf.GetUserCfgDir(),
                 'recent-files.lst')
         self.vbar = vbar = Scrollbar(top, name='vbar')
         self.text_frame = text_frame = Frame(top)
@@ -111,6 +126,9 @@
 
         self.top.protocol("WM_DELETE_WINDOW", self.close)
         self.top.bind("<<close-window>>", self.close_event)
+        if macosxSupport.runningAsOSXApp():
+            # Command-W on editorwindows doesn't work without this.
+            text.bind('<<close-window>>', self.close_event)
         text.bind("<<cut>>", self.cut)
         text.bind("<<copy>>", self.copy)
         text.bind("<<paste>>", self.paste)
@@ -278,6 +296,10 @@
 
     def set_status_bar(self):
         self.status_bar = self.MultiStatusBar(self.top)
+        if macosxSupport.runningAsOSXApp():
+            # Insert some padding to avoid obscuring some of the statusbar
+            # by the resize widget.
+            self.status_bar.set_label('_padding1', '    ', side=RIGHT)
         self.status_bar.set_label('column', 'Col: ?', side=RIGHT)
         self.status_bar.set_label('line', 'Ln: ?', side=RIGHT)
         self.status_bar.pack(side=BOTTOM, fill=X)
@@ -301,6 +323,11 @@
         ("help", "_Help"),
     ]
 
+    if macosxSupport.runningAsOSXApp():
+        del menu_specs[-3]
+        menu_specs[-2] = ("windows", "_Window")
+
+
     def createmenubar(self):
         mbar = self.menubar
         self.menudict = menudict = {}
@@ -308,6 +335,12 @@
             underline, label = prepstr(label)
             menudict[name] = menu = Menu(mbar, name=name)
             mbar.add_cascade(label=label, menu=menu, underline=underline)
+
+        if sys.platform == 'darwin' and '.framework' in sys.executable:
+            # Insert the application menu
+            menudict['application'] = menu = Menu(mbar, name='apple')
+            mbar.add_cascade(label='IDLE', menu=menu)
+
         self.fill_menus()
         self.base_helpmenu_length = self.menudict['help'].index(END)
         self.reset_help_menu_entries()
@@ -649,7 +682,7 @@
     def __extra_help_callback(self, helpfile):
         "Create a callback with the helpfile value frozen at definition time"
         def display_extra_help(helpfile=helpfile):
-            if not (helpfile.startswith('www') or helpfile.startswith('http')):
+            if not helpfile.startswith(('www', 'http')):
                 url = os.path.normpath(helpfile)
             if sys.platform[:3] == 'win':
                 os.startfile(helpfile)
@@ -1244,13 +1277,13 @@
               "Toggle tabs",
               "Turn tabs " + ("on", "off")[self.usetabs] +
               "?\nIndent width " +
-              ("will be", "remains at")[self.usetabs] + " 8.",
+              ("will be", "remains at")[self.usetabs] + " 8." +
+              "\n Note: a tab is always 8 columns",
               parent=self.text):
             self.usetabs = not self.usetabs
-        # Try to prevent mixed tabs/spaces.
-        # User must reset indent width manually after using tabs
-        #      if he insists on getting into trouble.
-        self.indentwidth = 8
+            # Try to prevent inconsistent indentation.
+            # User must change indent width manually after using tabs.
+            self.indentwidth = 8
         return "break"
 
     # XXX this isn't bound to anything -- see tabwidth comments
diff --git a/Lib/idlelib/NEWS.txt b/Lib/idlelib/NEWS.txt
index 25e5d40..235963e 100644
--- a/Lib/idlelib/NEWS.txt
+++ b/Lib/idlelib/NEWS.txt
@@ -1,3 +1,46 @@
+What's New in IDLE 1.2c1?
+=========================
+
+*Release date: XX-AUG-2006*
+
+- Changing tokenize (39046) to detect dedent broke tabnanny check (since 1.2a1)
+
+- ToggleTab dialog was setting indent to 8 even if cancelled (since 1.2a1).
+
+- When used w/o subprocess, all exceptions were preceded by an error
+  message claiming they were IDLE internal errors (since 1.2a1).
+
+What's New in IDLE 1.2b3?
+=========================
+
+*Release date: 03-AUG-2006*
+
+- EditorWindow.test() was failing.  Bug 1417598
+
+- EditorWindow failed when used stand-alone if sys.ps1 not set.
+  Bug 1010370 Dave Florek
+
+- Tooltips failed on new-style class __init__ args.  Bug 1027566 Loren Guthrie
+
+- Avoid occasional failure to detect closing paren properly.
+  Patch 1407280 Tal Einat
+
+- Rebinding Tab key was inserting 'tab' instead of 'Tab'.  Bug 1179168.
+
+- Colorizer now handles #<builtin> correctly, also unicode strings and
+  'as' keyword in comment directly following import command. Closes 1325071.
+  Patch 1479219 Tal Einat
+
+What's New in IDLE 1.2b2?
+=========================
+
+*Release date: 11-JUL-2006*
+
+What's New in IDLE 1.2b1?
+=========================
+
+*Release date: 20-JUN-2006*
+
 What's New in IDLE 1.2a2?
 =========================
 
diff --git a/Lib/idlelib/ParenMatch.py b/Lib/idlelib/ParenMatch.py
index 673aee2..250ae8b 100644
--- a/Lib/idlelib/ParenMatch.py
+++ b/Lib/idlelib/ParenMatch.py
@@ -8,7 +8,7 @@
 from HyperParser import HyperParser
 from configHandler import idleConf
 
-keysym_opener = {"parenright":'(', "bracketright":'[', "braceright":'{'}
+_openers = {')':'(',']':'[','}':'{'}
 CHECK_DELAY = 100 # milliseconds
 
 class ParenMatch:
@@ -100,12 +100,13 @@
 
     def paren_closed_event(self, event):
         # If it was a shortcut and not really a closing paren, quit.
-        if self.text.get("insert-1c") not in (')',']','}'):
+        closer = self.text.get("insert-1c")
+        if closer not in _openers:
             return
         hp = HyperParser(self.editwin, "insert-1c")
         if not hp.is_in_code():
             return
-        indices = hp.get_surrounding_brackets(keysym_opener[event.keysym], True)
+        indices = hp.get_surrounding_brackets(_openers[closer], True)
         if indices is None:
             self.warn_mismatched()
             return
diff --git a/Lib/idlelib/PyShell.py b/Lib/idlelib/PyShell.py
index b6abe40..25eb446 100644
--- a/Lib/idlelib/PyShell.py
+++ b/Lib/idlelib/PyShell.py
@@ -11,6 +11,7 @@
 import threading
 import traceback
 import types
+import macosxSupport
 
 import linecache
 from code import InteractiveInterpreter
@@ -721,8 +722,12 @@
                 else:
                     self.showtraceback()
             except:
-                print>>sys.stderr, "IDLE internal error in runcode()"
+                if use_subprocess:
+                    print >> self.tkconsole.stderr, \
+                             "IDLE internal error in runcode()"
                 self.showtraceback()
+                if use_subprocess:
+                    self.tkconsole.endexecuting()
         finally:
             if not use_subprocess:
                 self.tkconsole.endexecuting()
@@ -777,6 +782,11 @@
         ("help", "_Help"),
     ]
 
+    if macosxSupport.runningAsOSXApp():
+        del menu_specs[-3]
+        menu_specs[-2] = ("windows", "_Window")
+
+
     # New classes
     from IdleHistory import History
 
@@ -1300,10 +1310,6 @@
     script = None
     startup = False
     try:
-        sys.ps1
-    except AttributeError:
-        sys.ps1 = '>>> '
-    try:
         opts, args = getopt.getopt(sys.argv[1:], "c:deihnr:st:")
     except getopt.error, msg:
         sys.stderr.write("Error: %s\n" % str(msg))
@@ -1371,9 +1377,12 @@
     enable_shell = enable_shell or not edit_start
     # start editor and/or shell windows:
     root = Tk(className="Idle")
+
     fixwordbreaks(root)
     root.withdraw()
     flist = PyShellFileList(root)
+    macosxSupport.setupApp(root, flist)
+
     if enable_edit:
         if not (cmd or script):
             for filename in args:
@@ -1381,8 +1390,17 @@
             if not args:
                 flist.new()
     if enable_shell:
-        if not flist.open_shell():
+        shell = flist.open_shell()
+        if not shell:
             return # couldn't open shell
+
+        if macosxSupport.runningAsOSXApp() and flist.dict:
+            # On OSX: when the user has double-clicked on a file that causes
+            # IDLE to be launched, the shell window will open just in front of
+            # the file she wants to see. Lower the interpreter window when
+            # there are open files.
+            shell.top.lower()
+
     shell = flist.pyshell
     # handle remaining options:
     if debug:
@@ -1403,6 +1421,7 @@
         elif script:
             shell.interp.prepend_syspath(script)
             shell.interp.execfile(script)
+
     root.mainloop()
     root.destroy()
 
diff --git a/Lib/idlelib/ScriptBinding.py b/Lib/idlelib/ScriptBinding.py
index 084c607..f325ad1 100644
--- a/Lib/idlelib/ScriptBinding.py
+++ b/Lib/idlelib/ScriptBinding.py
@@ -51,7 +51,7 @@
         # Provide instance variables referenced by Debugger
         # XXX This should be done differently
         self.flist = self.editwin.flist
-        self.root = self.flist.root
+        self.root = self.editwin.root
 
     def check_module_event(self, event):
         filename = self.getfilename()
@@ -76,6 +76,9 @@
             self.editwin.gotoline(nag.get_lineno())
             self.errorbox("Tab/space error", indent_message)
             return False
+        except IndentationError:
+            # From tokenize(), let compile() in checksyntax find it again.
+            pass
         return True
 
     def checksyntax(self, filename):
diff --git a/Lib/idlelib/ZoomHeight.py b/Lib/idlelib/ZoomHeight.py
index 2ab4656..83ca3a6 100644
--- a/Lib/idlelib/ZoomHeight.py
+++ b/Lib/idlelib/ZoomHeight.py
@@ -2,6 +2,7 @@
 
 import re
 import sys
+import macosxSupport
 
 class ZoomHeight:
 
@@ -29,6 +30,14 @@
     if sys.platform == 'win32':
         newy = 0
         newheight = newheight - 72
+
+    elif macosxSupport.runningAsOSXApp():
+        # The '88' below is a magic number that avoids placing the bottom
+        # of the window below the panel on my machine. I don't know how
+        # to calculate the correct value for this with tkinter.
+        newy = 22
+        newheight = newheight - newy - 88
+
     else:
         #newy = 24
         newy = 0
diff --git a/Lib/idlelib/buildapp.py b/Lib/idlelib/buildapp.py
deleted file mode 100644
index 672eb1e..0000000
--- a/Lib/idlelib/buildapp.py
+++ /dev/null
@@ -1,17 +0,0 @@
-#
-# After running python setup.py install, run this program from the command
-# line like so:
-#
-# % python2.3 buildapp.py build
-#
-# A double-clickable IDLE application will be created in the build/ directory.
-#
-
-from bundlebuilder import buildapp
-
-buildapp(
-        name="IDLE",
-        mainprogram="idle.py",
-        argv_emulation=1,
-        iconfile="Icons/idle.icns",
-)
diff --git a/Lib/idlelib/config-keys.def b/Lib/idlelib/config-keys.def
index 0653746..fb0aaf4 100644
--- a/Lib/idlelib/config-keys.def
+++ b/Lib/idlelib/config-keys.def
@@ -159,3 +159,56 @@
 change-indentwidth=<Control-Key-u>
 del-word-left=<Control-Key-BackSpace>
 del-word-right=<Control-Key-Delete>
+
+[IDLE Classic OSX]
+toggle-tabs = <Control-Key-t>
+interrupt-execution = <Control-Key-c>
+untabify-region = <Control-Key-6>
+remove-selection = <Key-Escape>
+print-window = <Command-Key-p>
+replace = <Command-Key-r>
+goto-line = <Command-Key-j>
+plain-newline-and-indent = <Control-Key-j>
+history-previous = <Control-Key-p>
+beginning-of-line = <Control-Key-Left>
+end-of-line = <Control-Key-Right>
+comment-region = <Control-Key-3>
+redo = <Shift-Command-Key-Z>
+close-window = <Command-Key-w>
+restart-shell = <Control-Key-F6>
+save-window-as-file = <Command-Key-S>
+close-all-windows = <Command-Key-q>
+view-restart = <Key-F6>
+tabify-region = <Control-Key-5>
+find-again = <Command-Key-g> <Key-F3>
+find = <Command-Key-f>
+toggle-auto-coloring = <Control-Key-slash>
+select-all = <Command-Key-a>
+smart-backspace = <Key-BackSpace>
+change-indentwidth = <Control-Key-u>
+do-nothing = <Control-Key-F12>
+smart-indent = <Key-Tab>
+center-insert = <Control-Key-l>
+history-next = <Control-Key-n>
+del-word-right = <Option-Key-Delete>
+undo = <Command-Key-z>
+save-window = <Command-Key-s>
+uncomment-region = <Control-Key-4>
+cut = <Command-Key-x>
+find-in-files = <Command-Key-F3>
+dedent-region = <Command-Key-bracketleft>
+copy = <Command-Key-c>
+paste = <Command-Key-v>
+indent-region = <Command-Key-bracketright>
+del-word-left = <Option-Key-BackSpace> <Option-Command-Key-BackSpace>
+newline-and-indent = <Key-Return> <Key-KP_Enter>
+end-of-file = <Control-Key-d>
+open-class-browser = <Command-Key-b>
+open-new-window = <Command-Key-n>
+open-module = <Command-Key-m>
+find-selection = <Shift-Command-Key-F3>
+python-context-help = <Shift-Key-F1>
+save-copy-of-window-as-file = <Shift-Command-Key-s>
+open-window-from-file = <Command-Key-o>
+python-docs = <Key-F1>
+
diff --git a/Lib/idlelib/configHandler.py b/Lib/idlelib/configHandler.py
index 191a87c..826fb5d 100644
--- a/Lib/idlelib/configHandler.py
+++ b/Lib/idlelib/configHandler.py
@@ -20,6 +20,7 @@
 import os
 import sys
 import string
+import macosxSupport
 from ConfigParser import ConfigParser, NoOptionError, NoSectionError
 
 class InvalidConfigType(Exception): pass
@@ -406,7 +407,7 @@
         names=extnNameList
         kbNameIndicies=[]
         for name in names:
-            if name.endswith('_bindings') or name.endswith('_cfgBindings'):
+            if name.endswith(('_bindings', '_cfgBindings')):
                 kbNameIndicies.append(names.index(name))
         kbNameIndicies.sort()
         kbNameIndicies.reverse()
@@ -495,7 +496,18 @@
         return binding
 
     def GetCurrentKeySet(self):
-        return self.GetKeySet(self.CurrentKeys())
+        result = self.GetKeySet(self.CurrentKeys())
+
+        if macosxSupport.runningAsOSXApp():
+            # We're using AquaTk; replace all keybindings that use the
+            # Alt key by ones that use the Option key because the former
+            # don't work reliably.
+            for k, v in result.items():
+                v2 = [ x.replace('<Alt-', '<Option-') for x in v ]
+                if v != v2:
+                    result[k] = v2
+
+        return result
 
     def GetKeySet(self,keySetName):
         """
diff --git a/Lib/idlelib/configHelpSourceEdit.py b/Lib/idlelib/configHelpSourceEdit.py
index 8924f79..6611621 100644
--- a/Lib/idlelib/configHelpSourceEdit.py
+++ b/Lib/idlelib/configHelpSourceEdit.py
@@ -127,7 +127,7 @@
                                    parent=self)
             self.entryPath.focus_set()
             pathOk = False
-        elif path.startswith('www.') or path.startswith('http'):
+        elif path.startswith(('www.', 'http')):
             pass
         else:
             if path[:5] == 'file:':
@@ -146,8 +146,7 @@
                            self.path.get().strip())
             if sys.platform == 'darwin':
                 path = self.result[1]
-                if (path.startswith('www') or path.startswith('file:')
-                    or path.startswith('http:')):
+                if path.startswith(('www', 'file:', 'http:')):
                     pass
                 else:
                     # Mac Safari insists on using the URI form for local files
diff --git a/Lib/idlelib/idlever.py b/Lib/idlelib/idlever.py
index b7deb3f..07d3d82 100644
--- a/Lib/idlelib/idlever.py
+++ b/Lib/idlelib/idlever.py
@@ -1 +1 @@
-IDLE_VERSION = "1.2a2"
+IDLE_VERSION = "1.2b3"
diff --git a/Lib/idlelib/keybindingDialog.py b/Lib/idlelib/keybindingDialog.py
index ea57958..aff9cac 100644
--- a/Lib/idlelib/keybindingDialog.py
+++ b/Lib/idlelib/keybindingDialog.py
@@ -133,7 +133,7 @@
         config-keys.def must use the same ordering.
         """
         import sys
-        if sys.platform == 'darwin' and sys.executable.count('.app'):
+        if sys.platform == 'darwin' and sys.argv[0].count('.app'):
             self.modifiers = ['Shift', 'Control', 'Option', 'Command']
         else:
             self.modifiers = ['Control', 'Alt', 'Shift']
@@ -202,7 +202,7 @@
                 ':':'colon',',':'comma','.':'period','<':'less','>':'greater',
                 '/':'slash','?':'question','Page Up':'Prior','Page Down':'Next',
                 'Left Arrow':'Left','Right Arrow':'Right','Up Arrow':'Up',
-                'Down Arrow': 'Down', 'Tab':'tab'}
+                'Down Arrow': 'Down', 'Tab':'Tab'}
         if key in translateDict.keys():
             key = translateDict[key]
         if 'Shift' in modifiers and key in string.ascii_lowercase:
diff --git a/Lib/idlelib/macosxSupport.py b/Lib/idlelib/macosxSupport.py
new file mode 100644
index 0000000..ad61fff
--- /dev/null
+++ b/Lib/idlelib/macosxSupport.py
@@ -0,0 +1,112 @@
+"""
+A number of functions that enhance IDLE on MacOSX when it is used as a normal
+GUI application (as opposed to an X11 application).
+"""
+import sys
+
+def runningAsOSXApp():
+    """ Returns True iff running from the IDLE.app bundle on OSX """
+    return (sys.platform == 'darwin' and 'IDLE.app' in sys.argv[0])
+
+def addOpenEventSupport(root, flist):
+    """
+    This ensures that the application will respond to open AppleEvents, which
+    makes it feasible to use IDLE as the default application for Python files.
+    """
+    def doOpenFile(*args):
+        for fn in args:
+            flist.open(fn)
+
+    # The command below is a hook in aquatk that is called whenever the app
+    # receives a file open event. The callback can have multiple arguments,
+    # one for every file that should be opened.
+    root.createcommand("::tk::mac::OpenDocument", doOpenFile)
+
+def hideTkConsole(root):
+    root.tk.call('console', 'hide')
+
+def overrideRootMenu(root, flist):
+    """
+    Replace the Tk root menu by something that's more appropriate for
+    IDLE.
+    """
+    # The menu that is attached to the Tk root (".") is also used by AquaTk for
+    # all windows that don't specify a menu of their own. The default menubar
+    # contains a number of menus, none of which are appropriate for IDLE. The
+    # most annoying of those is an 'About Tcl/Tk...' menu in the application
+    # menu.
+    #
+    # This function replaces the default menubar with a mostly empty one; it
+    # should only contain the correct application menu and the window menu.
+    #
+    # Due to a (mis-)feature of TkAqua the user will also see an empty Help
+    # menu.
+    from Tkinter import Menu, Text, Text
+    from EditorWindow import prepstr, get_accelerator
+    import Bindings
+    import WindowList
+    from MultiCall import MultiCallCreator
+
+    menubar = Menu(root)
+    root.configure(menu=menubar)
+    menudict = {}
+
+    menudict['windows'] = menu = Menu(menubar, name='windows')
+    menubar.add_cascade(label='Window', menu=menu, underline=0)
+
+    def postwindowsmenu(menu=menu):
+        end = menu.index('end')
+        if end is None:
+            end = -1
+
+        if end > 0:
+            menu.delete(0, end)
+        WindowList.add_windows_to_menu(menu)
+    WindowList.register_callback(postwindowsmenu)
+
+    menudict['application'] = menu = Menu(menubar, name='apple')
+    menubar.add_cascade(label='IDLE', menu=menu)
+
+    def about_dialog(event=None):
+        import aboutDialog
+        aboutDialog.AboutDialog(root, 'About IDLE')
+
+    def config_dialog(event=None):
+        import configDialog
+        configDialog.ConfigDialog(root, 'Settings')
+
+    root.bind('<<about-idle>>', about_dialog)
+    root.bind('<<open-config-dialog>>', config_dialog)
+    if flist:
+        root.bind('<<close-all-windows>>', flist.close_all_callback)
+
+    for mname, entrylist in Bindings.menudefs:
+        menu = menudict.get(mname)
+        if not menu:
+            continue
+        for entry in entrylist:
+            if not entry:
+                menu.add_separator()
+            else:
+                label, eventname = entry
+                underline, label = prepstr(label)
+                accelerator = get_accelerator(Bindings.default_keydefs,
+                        eventname)
+                def command(text=root, eventname=eventname):
+                    text.event_generate(eventname)
+                menu.add_command(label=label, underline=underline,
+                        command=command, accelerator=accelerator)
+
+
+
+
+
+def setupApp(root, flist):
+    """
+    Perform setup for the OSX application bundle.
+    """
+    if not runningAsOSXApp(): return
+
+    hideTkConsole(root)
+    overrideRootMenu(root, flist)
+    addOpenEventSupport(root, flist)
diff --git a/Lib/inspect.py b/Lib/inspect.py
index bf7f006..0b498b5 100644
--- a/Lib/inspect.py
+++ b/Lib/inspect.py
@@ -89,6 +89,40 @@
     is not guaranteed."""
     return (hasattr(object, "__set__") and hasattr(object, "__get__"))
 
+if hasattr(types, 'MemberDescriptorType'):
+    # CPython and equivalent
+    def ismemberdescriptor(object):
+        """Return true if the object is a member descriptor.
+
+        Member descriptors are specialized descriptors defined in extension
+        modules."""
+        return isinstance(object, types.MemberDescriptorType)
+else:
+    # Other implementations
+    def ismemberdescriptor(object):
+        """Return true if the object is a member descriptor.
+
+        Member descriptors are specialized descriptors defined in extension
+        modules."""
+        return False
+
+if hasattr(types, 'GetSetDescriptorType'):
+    # CPython and equivalent
+    def isgetsetdescriptor(object):
+        """Return true if the object is a getset descriptor.
+
+        getset descriptors are specialized descriptors defined in extension
+        modules."""
+        return isinstance(object, types.GetSetDescriptorType)
+else:
+    # Other implementations
+    def isgetsetdescriptor(object):
+        """Return true if the object is a getset descriptor.
+
+        getset descriptors are specialized descriptors defined in extension
+        modules."""
+        return False
+
 def isfunction(object):
     """Return true if the object is a user-defined function.
 
@@ -355,40 +389,38 @@
             return None
     if os.path.exists(filename):
         return filename
-    # Ugly but necessary - '<stdin>' and '<string>' mean that getmodule()
-    # would infinitely recurse, because they're not real files nor loadable
-    # Note that this means that writing a PEP 302 loader that uses '<'
-    # at the start of a filename is now not a good idea.  :(
-    if filename[:1]!='<' and hasattr(getmodule(object), '__loader__'):
+    # only return a non-existent filename if the module has a PEP 302 loader
+    if hasattr(getmodule(object, filename), '__loader__'):
         return filename
 
-def getabsfile(object):
+def getabsfile(object, _filename=None):
     """Return an absolute path to the source or compiled file for an object.
 
     The idea is for each object to have a unique origin, so this routine
     normalizes the result as much as possible."""
-    return os.path.normcase(
-        os.path.abspath(getsourcefile(object) or getfile(object)))
+    if _filename is None:
+        _filename = getsourcefile(object) or getfile(object)
+    return os.path.normcase(os.path.abspath(_filename))
 
 modulesbyfile = {}
 
-def getmodule(object):
+def getmodule(object, _filename=None):
     """Return the module an object was defined in, or None if not found."""
     if ismodule(object):
         return object
     if hasattr(object, '__module__'):
         return sys.modules.get(object.__module__)
     try:
-        file = getabsfile(object)
+        file = getabsfile(object, _filename)
     except TypeError:
         return None
     if file in modulesbyfile:
         return sys.modules.get(modulesbyfile[file])
     for module in sys.modules.values():
         if ismodule(module) and hasattr(module, '__file__'):
-            modulesbyfile[
-                os.path.realpath(
-                        getabsfile(module))] = module.__name__
+            f = getabsfile(module)
+            modulesbyfile[f] = modulesbyfile[
+                os.path.realpath(f)] = module.__name__
     if file in modulesbyfile:
         return sys.modules.get(modulesbyfile[file])
     main = sys.modules['__main__']
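
A short sketch of the new inspect predicates. On CPython, member descriptors are what __slots__ creates on a new-style class; getset descriptors only come from extension types, and implementations lacking types.GetSetDescriptorType simply get the always-False fallback defined above.

    import inspect

    class Point(object):
        __slots__ = ['x', 'y']

    inspect.ismemberdescriptor(Point.x)   # True on CPython
    inspect.isgetsetdescriptor(Point.x)   # False -- slots are member descriptors
    inspect.ismemberdescriptor(Point)     # False -- a class is not a descriptor
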
diff --git a/Lib/lib-tk/Tkinter.py b/Lib/lib-tk/Tkinter.py
index 0ba954e..b248031 100644
--- a/Lib/lib-tk/Tkinter.py
+++ b/Lib/lib-tk/Tkinter.py
@@ -168,18 +168,30 @@
     Subclasses StringVar, IntVar, DoubleVar, BooleanVar are specializations
     that constrain the type of the value returned from get()."""
     _default = ""
-    def __init__(self, master=None):
-        """Construct a variable with an optional MASTER as master widget.
-        The variable is named PY_VAR_number in Tcl.
+    def __init__(self, master=None, value=None, name=None):
+        """Construct a variable
+
+        MASTER can be given as master widget.
+        VALUE is an optional value (defaults to "")
+        NAME is an optional Tcl name (defaults to PY_VARnum).
+
+        If NAME matches an existing variable and VALUE is omitted
+        then the existing value is retained.
         """
         global _varnum
         if not master:
             master = _default_root
         self._master = master
         self._tk = master.tk
-        self._name = 'PY_VAR' + repr(_varnum)
-        _varnum = _varnum + 1
-        self.set(self._default)
+        if name:
+            self._name = name
+        else:
+            self._name = 'PY_VAR' + repr(_varnum)
+            _varnum += 1
+        if value != None:
+            self.set(value)
+        elif not self._tk.call("info", "exists", self._name):
+            self.set(self._default)
     def __del__(self):
         """Unset the variable in Tcl."""
         self._tk.globalunsetvar(self._name)
@@ -217,15 +229,29 @@
         """Return all trace callback information."""
         return map(self._tk.split, self._tk.splitlist(
             self._tk.call("trace", "vinfo", self._name)))
+    def __eq__(self, other):
+        """Comparison for equality (==).
+
+        Note: if the Variable's master matters to behavior
+        also compare self._master == other._master
+        """
+        return self.__class__.__name__ == other.__class__.__name__ \
+            and self._name == other._name
 
 class StringVar(Variable):
     """Value holder for strings variables."""
     _default = ""
-    def __init__(self, master=None):
+    def __init__(self, master=None, value=None, name=None):
         """Construct a string variable.
 
-        MASTER can be given as master widget."""
-        Variable.__init__(self, master)
+        MASTER can be given as master widget.
+        VALUE is an optional value (defaults to "")
+        NAME is an optional Tcl name (defaults to PY_VARnum).
+
+        If NAME matches an existing variable and VALUE is omitted
+        then the existing value is retained.
+        """
+        Variable.__init__(self, master, value, name)
 
     def get(self):
         """Return value of variable as string."""
@@ -237,11 +263,17 @@
 class IntVar(Variable):
     """Value holder for integer variables."""
     _default = 0
-    def __init__(self, master=None):
+    def __init__(self, master=None, value=None, name=None):
         """Construct an integer variable.
 
-        MASTER can be given as master widget."""
-        Variable.__init__(self, master)
+        MASTER can be given as master widget.
+        VALUE is an optional value (defaults to 0)
+        NAME is an optional Tcl name (defaults to PY_VARnum).
+
+        If NAME matches an existing variable and VALUE is omitted
+        then the existing value is retained.
+        """
+        Variable.__init__(self, master, value, name)
 
     def set(self, value):
         """Set the variable to value, converting booleans to integers."""
@@ -256,11 +288,17 @@
 class DoubleVar(Variable):
     """Value holder for float variables."""
     _default = 0.0
-    def __init__(self, master=None):
+    def __init__(self, master=None, value=None, name=None):
         """Construct a float variable.
 
-        MASTER can be given as a master widget."""
-        Variable.__init__(self, master)
+        MASTER can be given as master widget.
+        VALUE is an optional value (defaults to 0.0)
+        NAME is an optional Tcl name (defaults to PY_VARnum).
+
+        If NAME matches an existing variable and VALUE is omitted
+        then the existing value is retained.
+        """
+        Variable.__init__(self, master, value, name)
 
     def get(self):
         """Return the value of the variable as a float."""
@@ -268,12 +306,18 @@
 
 class BooleanVar(Variable):
     """Value holder for boolean variables."""
-    _default = "false"
-    def __init__(self, master=None):
+    _default = False
+    def __init__(self, master=None, value=None, name=None):
         """Construct a boolean variable.
 
-        MASTER can be given as a master widget."""
-        Variable.__init__(self, master)
+        MASTER can be given as master widget.
+        VALUE is an optional value (defaults to False)
+        NAME is an optional Tcl name (defaults to PY_VARnum).
+
+        If NAME matches an existing variable and VALUE is omitted
+        then the existing value is retained.
+        """
+        Variable.__init__(self, master, value, name)
 
     def get(self):
         """Return the value of the variable as a bool."""
@@ -1456,10 +1500,19 @@
         the group leader of this widget if None is given."""
         return self.tk.call('wm', 'group', self._w, pathName)
     group = wm_group
-    def wm_iconbitmap(self, bitmap=None):
+    def wm_iconbitmap(self, bitmap=None, default=None):
         """Set bitmap for the iconified widget to BITMAP. Return
-        the bitmap if None is given."""
-        return self.tk.call('wm', 'iconbitmap', self._w, bitmap)
+        the bitmap if None is given.
+
+        Under Windows, the DEFAULT parameter can be used to set the icon
+        for the widget and any descendents that don't have an icon set
+        explicitly.  DEFAULT can be the relative path to a .ico file
+        (example: root.iconbitmap(default='myicon.ico') ).  See Tk
+        documentation for more information."""
+        if default:
+            return self.tk.call('wm', 'iconbitmap', self._w, '-default', default)
+        else:
+            return self.tk.call('wm', 'iconbitmap', self._w, bitmap)
     iconbitmap = wm_iconbitmap
     def wm_iconify(self):
         """Display widget as icon."""
@@ -1880,9 +1933,9 @@
     def destroy(self):
         """Destroy this and all descendants widgets."""
         for c in self.children.values(): c.destroy()
+        self.tk.call('destroy', self._w)
         if self.master.children.has_key(self._name):
             del self.master.children[self._name]
-        self.tk.call('destroy', self._w)
         Misc.destroy(self)
     def _do(self, name, args=()):
         # XXX Obsolete -- better use self.tk.call directly!
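
A usage sketch of the extended Variable constructor documented above: passing the same Tcl NAME twice, with VALUE omitted the second time, reuses the existing Tcl variable and keeps its value, and the new __eq__ compares the class together with the Tcl name.

    from Tkinter import Tk, StringVar

    root = Tk()
    a = StringVar(root, value='spam', name='MYVAR')
    b = StringVar(root, name='MYVAR')   # VALUE omitted: existing value retained
    # b.get() == 'spam'
    # a == b  -> True (same class, same Tcl name)
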
diff --git a/Lib/lib-tk/tkMessageBox.py b/Lib/lib-tk/tkMessageBox.py
index 25071fe..aff069b 100644
--- a/Lib/lib-tk/tkMessageBox.py
+++ b/Lib/lib-tk/tkMessageBox.py
@@ -63,9 +63,10 @@
 #
 # convenience stuff
 
-def _show(title=None, message=None, icon=None, type=None, **options):
-    if icon:    options["icon"] = icon
-    if type:    options["type"] = type
+# Rename _icon and _type options to allow overriding them in options
+def _show(title=None, message=None, _icon=None, _type=None, **options):
+    if _icon and "icon" not in options:    options["icon"] = _icon
+    if _type and "type" not in options:    options["type"] = _type
     if title:   options["title"] = title
     if message: options["message"] = message
     res = Message(**options).show()
diff --git a/Lib/lib-tk/turtle.py b/Lib/lib-tk/turtle.py
index d68e405..01a55b1 100644
--- a/Lib/lib-tk/turtle.py
+++ b/Lib/lib-tk/turtle.py
@@ -30,6 +30,7 @@
         self._tracing = 1
         self._arrow = 0
         self._delay = 10     # default delay for drawing
+        self._angle = 0.0
         self.degrees()
         self.reset()
 
@@ -39,6 +40,10 @@
         Example:
         >>> turtle.degrees()
         """
+        # Don't try to change _angle if it is 0, because
+        # _fullcircle might not be set, yet
+        if self._angle:
+            self._angle = (self._angle / self._fullcircle) * fullcircle
         self._fullcircle = fullcircle
         self._invradian = pi / (fullcircle * 0.5)
 
@@ -81,7 +86,6 @@
         self._color = "black"
         self._filling = 0
         self._path = []
-        self._tofill = []
         self.clear()
         canvas._root().tkraise()
 
@@ -301,19 +305,15 @@
                                             {'fill': self._color,
                                              'smooth': smooth})
                 self._items.append(item)
-                if self._tofill:
-                    for item in self._tofill:
-                        self._canvas.itemconfigure(item, fill=self._color)
-                        self._items.append(item)
         self._path = []
-        self._tofill = []
         self._filling = flag
         if flag:
             self._path.append(self._position)
-        self.forward(0)
 
     def begin_fill(self):
         """ Called just before drawing a shape to be filled.
+            Must eventually be followed by a corresponding end_fill() call.
+            Otherwise it will be ignored.
 
         Example:
         >>> turtle.begin_fill()
@@ -326,7 +326,8 @@
         >>> turtle.forward(100)
         >>> turtle.end_fill()
         """
-        self.fill(1)
+        self._path = [self._position]
+        self._filling = 1
 
     def end_fill(self):
         """ Called after drawing a shape to be filled.
@@ -344,7 +345,7 @@
         """
         self.fill(0)
 
-    def circle(self, radius, extent=None):
+    def circle(self, radius, extent = None):
         """ Draw a circle with given radius.
         The center is radius units left of the turtle; extent
         determines which part of the circle is drawn. If not given,
@@ -361,52 +362,18 @@
         """
         if extent is None:
             extent = self._fullcircle
-        x0, y0 = self._position
-        xc = x0 - radius * sin(self._angle * self._invradian)
-        yc = y0 - radius * cos(self._angle * self._invradian)
-        if radius >= 0.0:
-            start = self._angle - (self._fullcircle / 4.0)
-        else:
-            start = self._angle + (self._fullcircle / 4.0)
-            extent = -extent
-        if self._filling:
-            if abs(extent) >= self._fullcircle:
-                item = self._canvas.create_oval(xc-radius, yc-radius,
-                                                xc+radius, yc+radius,
-                                                width=self._width,
-                                                outline="")
-                self._tofill.append(item)
-            item = self._canvas.create_arc(xc-radius, yc-radius,
-                                           xc+radius, yc+radius,
-                                           style="chord",
-                                           start=start,
-                                           extent=extent,
-                                           width=self._width,
-                                           outline="")
-            self._tofill.append(item)
-        if self._drawing:
-            if abs(extent) >= self._fullcircle:
-                item = self._canvas.create_oval(xc-radius, yc-radius,
-                                                xc+radius, yc+radius,
-                                                width=self._width,
-                                                outline=self._color)
-                self._items.append(item)
-            item = self._canvas.create_arc(xc-radius, yc-radius,
-                                           xc+radius, yc+radius,
-                                           style="arc",
-                                           start=start,
-                                           extent=extent,
-                                           width=self._width,
-                                           outline=self._color)
-            self._items.append(item)
-        angle = start + extent
-        x1 = xc + abs(radius) * cos(angle * self._invradian)
-        y1 = yc - abs(radius) * sin(angle * self._invradian)
-        self._angle = (self._angle + extent) % self._fullcircle
-        self._position = x1, y1
-        if self._filling:
-            self._path.append(self._position)
-        self._draw_turtle()
+        frac = abs(extent)/self._fullcircle
+        steps = 1+int(min(11+abs(radius)/6.0, 59.0)*frac)
+        w = 1.0 * extent / steps
+        w2 = 0.5 * w
+        l = 2.0 * radius * sin(w2*self._invradian)
+        if radius < 0:
+            l, w, w2 = -l, -w, -w2
+        self.left(w2)
+        for i in range(steps):
+            self.forward(l)
+            self.left(w)
+        self.right(w2)
 
     def heading(self):
         """ Return the turtle's current heading.
@@ -634,6 +601,7 @@
 
     def _draw_turtle(self, position=[]):
         if not self._tracing:
+            self._canvas.update()
             return
         if position == []:
             position = self._position
@@ -678,7 +646,7 @@
             _canvas = Tkinter.Canvas(_root, background="white")
             _canvas.pack(expand=1, fill="both")
 
-        setup(width=_width, height= _height, startx=_startx, starty=_starty)
+            setup(width=_width, height= _height, startx=_startx, starty=_starty)
 
         RawPen.__init__(self, _canvas)
 
@@ -720,7 +688,7 @@
 def write(arg, move=0): _getpen().write(arg, move)
 def fill(flag): _getpen().fill(flag)
 def begin_fill(): _getpen().begin_fill()
-def end_fill(): _getpen.end_fill()
+def end_fill(): _getpen().end_fill()
 def circle(radius, extent=None): _getpen().circle(radius, extent)
 def goto(*args): _getpen().goto(*args)
 def heading(): return _getpen().heading()
@@ -745,7 +713,7 @@
 def setup(**geometry):
     """ Sets the size and position of the main window.
 
-    Keywords are width, height, startx and starty
+    Keywords are width, height, startx and starty:
 
     width: either a size in pixels or a fraction of the screen.
       Default is 50% of screen.
@@ -820,7 +788,7 @@
         _root.geometry("%dx%d+%d+%d" % (_width, _height, _startx, _starty))
 
 def title(title):
-    """ set the window title.
+    """Set the window title.
 
     By default this is set to 'Turtle Graphics'
 
@@ -929,15 +897,30 @@
             speed(speeds[sp])
     color(0.25,0,0.75)
     fill(0)
-    color("green")
 
-    left(130)
+    # draw and fill a concave shape
+    left(120)
     up()
-    forward(90)
+    forward(70)
+    right(30)
+    down()
     color("red")
-    speed('fastest')
+    speed("fastest")
+    fill(1)
+    for i in range(4):
+        circle(50,90)
+        right(90)
+        forward(30)
+        right(90)
+    color("yellow")
+    fill(0)
+    left(90)
+    up()
+    forward(30)
     down();
 
+    color("red")
+
     # create a second turtle and make the original pursue and catch it
     turtle=Turtle()
     turtle.reset()
diff --git a/Lib/linecache.py b/Lib/linecache.py
index f49695a..4838625 100644
--- a/Lib/linecache.py
+++ b/Lib/linecache.py
@@ -94,6 +94,10 @@
                     except (ImportError, IOError):
                         pass
                     else:
+                        if data is None:
+                            # No luck, the PEP302 loader cannot find the source
+                            # for this module.
+                            return []
                         cache[filename] = (
                             len(data), None,
                             [line+'\n' for line in data.splitlines()], fullname
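The new check covers PEP 302 loaders whose get_source() returns None for modules that have no retrievable source (extension modules, for instance). A minimal sketch of the kind of lookup involved, assuming the module has already been imported:

    import sys

    def source_lines(fullname):
        # Ask the module's PEP 302 loader for source, tolerating loaders
        # that exist but cannot provide any (get_source() returns None).
        module = sys.modules.get(fullname)
        loader = getattr(module, '__loader__', None)
        if loader is None or not hasattr(loader, 'get_source'):
            return []
        data = loader.get_source(fullname)
        if data is None:
            return []
        return [line + '\n' for line in data.splitlines()]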
diff --git a/Lib/logging/config.py b/Lib/logging/config.py
index 457ec5c..1d5f8c4 100644
--- a/Lib/logging/config.py
+++ b/Lib/logging/config.py
@@ -79,6 +79,7 @@
     logging._acquireLock()
     try:
         logging._handlers.clear()
+        logging._handlerList = []
         # Handlers add themselves to logging._handlers
         handlers = _install_handlers(cp, formatters)
         _install_loggers(cp, handlers)
diff --git a/Lib/logging/handlers.py b/Lib/logging/handlers.py
index e0da254..3552950 100644
--- a/Lib/logging/handlers.py
+++ b/Lib/logging/handlers.py
@@ -128,12 +128,7 @@
             dfn = self.baseFilename + ".1"
             if os.path.exists(dfn):
                 os.remove(dfn)
-            try:
-                os.rename(self.baseFilename, dfn)
-            except (KeyboardInterrupt, SystemExit):
-                raise
-            except:
-                self.handleError(record)
+            os.rename(self.baseFilename, dfn)
             #print "%s -> %s" % (self.baseFilename, dfn)
         if self.encoding:
             self.stream = codecs.open(self.baseFilename, 'w', self.encoding)
@@ -273,12 +268,7 @@
         dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
         if os.path.exists(dfn):
             os.remove(dfn)
-        try:
-            os.rename(self.baseFilename, dfn)
-        except (KeyboardInterrupt, SystemExit):
-            raise
-        except:
-            self.handleError(record)
+        os.rename(self.baseFilename, dfn)
         if self.backupCount > 0:
             # find the oldest log file and delete it
             s = glob.glob(self.baseFilename + ".20*")
@@ -572,6 +562,18 @@
         "local7":   LOG_LOCAL7,
         }
 
+    #The map below appears to be trivially lowercasing the key. However,
+    #there's more to it than meets the eye - in some locales, lowercasing
+    #gives unexpected results. See SF #1524081: in the Turkish locale,
+    #"INFO".lower() != "info"
+    priority_map = {
+        "DEBUG" : "debug",
+        "INFO" : "info",
+        "WARNING" : "warning",
+        "ERROR" : "error",
+        "CRITICAL" : "critical"
+    }
+
     def __init__(self, address=('localhost', SYSLOG_UDP_PORT), facility=LOG_USER):
         """
         Initialize a handler.
@@ -608,7 +610,7 @@
     #   necessary.
     log_format_string = '<%d>%s\000'
 
-    def encodePriority (self, facility, priority):
+    def encodePriority(self, facility, priority):
         """
         Encode the facility and priority. You can pass in strings or
         integers - if strings are passed, the facility_names and
@@ -629,6 +631,16 @@
             self.socket.close()
         logging.Handler.close(self)
 
+    def mapPriority(self, levelName):
+        """
+        Map a logging level name to a key in the priority_names map.
+        This is useful in two scenarios: when custom levels are being
+        used, and in the case where you can't do a straightforward
+        mapping by lowercasing the logging level name because of locale-
+        specific issues (see SF #1524081).
+        """
+        return self.priority_map.get(levelName, "warning")
+
     def emit(self, record):
         """
         Emit a record.
@@ -643,8 +655,8 @@
         """
         msg = self.log_format_string % (
             self.encodePriority(self.facility,
-                                string.lower(record.levelname)),
-            msg)
+                                self.mapPriority(record.levelname)),
+                                msg)
         try:
             if self.unixsocket:
                 try:
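The hard-coded priority_map and mapPriority() sidestep str.lower(), which is locale dependent; under a Turkish locale the dotted/dotless 'i' rules make lowercasing "INFO" produce something other than "info". A small illustration of the lookup (the locale name is illustrative and may not be installed everywhere):

    import locale

    PRIORITY_MAP = {"DEBUG": "debug", "INFO": "info", "WARNING": "warning",
                    "ERROR": "error", "CRITICAL": "critical"}

    def map_priority(levelname):
        # Unknown (custom) level names fall back to "warning", as
        # SysLogHandler.mapPriority() does.
        return PRIORITY_MAP.get(levelname, "warning")

    try:
        locale.setlocale(locale.LC_CTYPE, "tr_TR")   # may not exist; just a demo
    except locale.Error:
        pass
    print map_priority("INFO")      # 'info' regardless of the active locale
    print map_priority("TRACE")     # custom level -> 'warning'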
diff --git a/Lib/mailbox.py b/Lib/mailbox.py
index bb115e1..b72128b 100755
--- a/Lib/mailbox.py
+++ b/Lib/mailbox.py
@@ -15,7 +15,10 @@
 import rfc822
 import StringIO
 try:
-    import fnctl
+    if sys.platform == 'os2emx':
+        # OS/2 EMX fcntl() not adequate
+        raise ImportError
+    import fcntl
 except ImportError:
     fcntl = None
 
@@ -565,7 +568,8 @@
         try:
             os.rename(new_file.name, self._path)
         except OSError, e:
-            if e.errno == errno.EEXIST:
+            if e.errno == errno.EEXIST or \
+              (os.name == 'os2' and e.errno == errno.EACCES):
                 os.remove(self._path)
                 os.rename(new_file.name, self._path)
             else:
@@ -1030,6 +1034,9 @@
                         if hasattr(os, 'link'):
                             os.link(os.path.join(self._path, str(key)),
                                     os.path.join(self._path, str(prev + 1)))
+                            if sys.platform == 'os2emx':
+                                # cannot unlink an open file on OS/2
+                                f.close()
                             os.unlink(os.path.join(self._path, str(key)))
                         else:
                             f.close()
@@ -1798,26 +1805,18 @@
 
 
 def _lock_file(f, dotlock=True):
-    """Lock file f using lockf, flock, and dot locking."""
+    """Lock file f using lockf and dot locking."""
     dotlock_done = False
     try:
         if fcntl:
             try:
                 fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
             except IOError, e:
-                if e.errno == errno.EAGAIN:
+                if e.errno in (errno.EAGAIN, errno.EACCES):
                     raise ExternalClashError('lockf: lock unavailable: %s' %
                                              f.name)
                 else:
                     raise
-            try:
-                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
-            except IOError, e:
-                if e.errno == errno.EWOULDBLOCK:
-                    raise ExternalClashError('flock: lock unavailable: %s' %
-                                             f.name)
-                else:
-                    raise
         if dotlock:
             try:
                 pre_lock = _create_temporary(f.name + '.lock')
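Dot locking relies on hard links failing when the target already exists: a uniquely named temporary file is created and then link()ed to '<mailbox>.lock', and an EEXIST error means another process holds the lock. A simplified sketch of that idea, assuming a platform that provides os.link:

    import os, errno, tempfile

    def acquire_dotlock(path):
        # Create a unique temp file, then hard-link it to '<path>.lock';
        # link() fails with EEXIST if someone else already holds the lock.
        fd, tmpname = tempfile.mkstemp(prefix=os.path.basename(path) + '.',
                                       dir=os.path.dirname(path) or '.')
        os.close(fd)
        try:
            try:
                os.link(tmpname, path + '.lock')
                return True
            except OSError, e:
                if e.errno == errno.EEXIST:
                    return False            # lock is held by someone else
                raise
        finally:
            os.unlink(tmpname)

    def release_dotlock(path):
        if os.path.exists(path + '.lock'):
            os.remove(path + '.lock')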
@@ -1836,7 +1835,8 @@
                     os.rename(pre_lock.name, f.name + '.lock')
                     dotlock_done = True
             except OSError, e:
-                if e.errno == errno.EEXIST:
+                if e.errno == errno.EEXIST or \
+                  (os.name == 'os2' and e.errno == errno.EACCES):
                     os.remove(pre_lock.name)
                     raise ExternalClashError('dot lock unavailable: %s' %
                                              f.name)
@@ -1845,16 +1845,14 @@
     except:
         if fcntl:
             fcntl.lockf(f, fcntl.LOCK_UN)
-            fcntl.flock(f, fcntl.LOCK_UN)
         if dotlock_done:
             os.remove(f.name + '.lock')
         raise
 
 def _unlock_file(f):
-    """Unlock file f using lockf, flock, and dot locking."""
+    """Unlock file f using lockf and dot locking."""
     if fcntl:
         fcntl.lockf(f, fcntl.LOCK_UN)
-        fcntl.flock(f, fcntl.LOCK_UN)
     if os.path.exists(f.name + '.lock'):
         os.remove(f.name + '.lock')
 
diff --git a/Lib/mimetypes.py b/Lib/mimetypes.py
index bee2ff7..b0d2f18 100644
--- a/Lib/mimetypes.py
+++ b/Lib/mimetypes.py
@@ -33,6 +33,10 @@
 
 knownfiles = [
     "/etc/mime.types",
+    "/etc/httpd/mime.types",                    # Mac OS X
+    "/etc/httpd/conf/mime.types",               # Apache
+    "/etc/apache/mime.types",                   # Apache 1
+    "/etc/apache2/mime.types",                  # Apache 2
     "/usr/local/etc/httpd/conf/mime.types",
     "/usr/local/lib/netscape/mime.types",
     "/usr/local/etc/httpd/conf/mime.types",     # Apache 1.2
diff --git a/Lib/msilib/__init__.py b/Lib/msilib/__init__.py
index 0881409..4be82b0 100644
--- a/Lib/msilib/__init__.py
+++ b/Lib/msilib/__init__.py
@@ -187,7 +187,7 @@
         self.filenames = sets.Set()
         self.index = 0
 
-    def gen_id(self, dir, file):
+    def gen_id(self, file):
         logical = _logical = make_id(file)
         pos = 1
         while logical in self.filenames:
@@ -196,9 +196,11 @@
         self.filenames.add(logical)
         return logical
 
-    def append(self, full, logical):
+    def append(self, full, file, logical):
         if os.path.isdir(full):
             return
+        if not logical:
+            logical = self.gen_id(file)
         self.index += 1
         self.files.append((full, logical))
         return self.index, logical
@@ -328,7 +330,7 @@
             logical = self.keyfiles[file]
         else:
             logical = None
-        sequence, logical = self.cab.append(absolute, logical)
+        sequence, logical = self.cab.append(absolute, file, logical)
         assert logical not in self.ids
         self.ids.add(logical)
         short = self.make_short(file)
@@ -403,7 +405,7 @@
                  [(self.dlg.name, self.name, event, argument,
                    condition, ordering)])
 
-    def mapping(self, mapping, attribute):
+    def mapping(self, event, attribute):
         add_data(self.dlg.db, "EventMapping",
                  [(self.dlg.name, self.name, event, attribute)])
 
diff --git a/Lib/optparse.py b/Lib/optparse.py
index 6b8f5d1..62d2f7e 100644
--- a/Lib/optparse.py
+++ b/Lib/optparse.py
@@ -16,7 +16,7 @@
 # Python developers: please do not make changes to this file, since
 # it is automatically generated from the Optik source code.
 
-__version__ = "1.5.1"
+__version__ = "1.5.3"
 
 __all__ = ['Option',
            'SUPPRESS_HELP',
@@ -75,9 +75,9 @@
 
 
 # This file was generated from:
-#   Id: option_parser.py 509 2006-04-20 00:58:24Z gward
-#   Id: option.py 509 2006-04-20 00:58:24Z gward
-#   Id: help.py 509 2006-04-20 00:58:24Z gward
+#   Id: option_parser.py 527 2006-07-23 15:21:30Z greg
+#   Id: option.py 522 2006-06-11 16:22:03Z gward
+#   Id: help.py 527 2006-07-23 15:21:30Z greg
 #   Id: errors.py 509 2006-04-20 00:58:24Z gward
 
 try:
@@ -1629,6 +1629,13 @@
         result.append(self.format_epilog(formatter))
         return "".join(result)
 
+    # used by test suite
+    def _get_encoding(self, file):
+        encoding = getattr(file, "encoding", None)
+        if not encoding:
+            encoding = sys.getdefaultencoding()
+        return encoding
+
     def print_help(self, file=None):
         """print_help(file : file = stdout)
 
@@ -1637,7 +1644,8 @@
         """
         if file is None:
             file = sys.stdout
-        file.write(self.format_help())
+        encoding = self._get_encoding(file)
+        file.write(self.format_help().encode(encoding, "replace"))
 
 # class OptionParser
 
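The encoding change means help text containing non-ASCII characters is encoded to the output stream's encoding (or sys.getdefaultencoding() as a fallback) with 'replace', instead of dying with UnicodeEncodeError. Roughly, for a parser with a non-ASCII description:

    import sys
    from optparse import OptionParser

    parser = OptionParser(description=u"caf\xe9 options")
    encoding = getattr(sys.stdout, "encoding", None) or sys.getdefaultencoding()
    # Characters the stream cannot represent degrade to '?' instead of raising.
    sys.stdout.write(parser.format_help().encode(encoding, "replace"))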
diff --git a/Lib/os.py b/Lib/os.py
index 31002ac..2d1b29b 100644
--- a/Lib/os.py
+++ b/Lib/os.py
@@ -723,7 +723,7 @@
         """
         try:
             _urandomfd = open("/dev/urandom", O_RDONLY)
-        except:
+        except (OSError, IOError):
             raise NotImplementedError("/dev/urandom (or equivalent) not found")
         bytes = ""
         while len(bytes) < n:
diff --git a/Lib/pdb.py b/Lib/pdb.py
index 94f61f7..06181e7 100755
--- a/Lib/pdb.py
+++ b/Lib/pdb.py
@@ -235,7 +235,8 @@
         """Interpret the argument as though it had been typed in response
         to the prompt.
 
-        Checks wether  this line is typed in the normal prompt or in a breakpoint command list definition
+        Checks whether this line is typed at the normal prompt or in
+        a breakpoint command list definition.
         """
         if not self.commands_defining:
             return cmd.Cmd.onecmd(self, line)
diff --git a/Lib/pkgutil.py b/Lib/pkgutil.py
index 26c797f..37738e4 100644
--- a/Lib/pkgutil.py
+++ b/Lib/pkgutil.py
@@ -69,7 +69,33 @@
 
 
 def walk_packages(path=None, prefix='', onerror=None):
-    """Yield submodule names+loaders recursively, for path or sys.path"""
+    """Yields (module_loader, name, ispkg) for all modules recursively
+    on path, or, if path is None, all accessible modules.
+
+    'path' should be either None or a list of paths to look for
+    modules in.
+
+    'prefix' is a string to output on the front of every module name
+    on output.
+
+    Note that this function must import all *packages* (NOT all
+    modules!) on the given path, in order to access the __path__
+    attribute to find submodules.
+
+    'onerror' is a function which gets called with one argument (the
+    name of the package which was being imported) if any exception
+    occurs while trying to import a package.  If no onerror function is
+    supplied, ImportErrors are caught and ignored, while all other
+    exceptions are propagated, terminating the search.
+
+    Examples:
+
+    # list all modules python can access
+    walk_packages()
+
+    # list all submodules of ctypes
+    walk_packages(ctypes.__path__, ctypes.__name__+'.')
+    """
 
     def seen(p, m={}):
         if p in m:
@@ -84,19 +110,33 @@
                 __import__(name)
             except ImportError:
                 if onerror is not None:
-                    onerror()
+                    onerror(name)
+            except Exception:
+                if onerror is not None:
+                    onerror(name)
+                else:
+                    raise
             else:
                 path = getattr(sys.modules[name], '__path__', None) or []
 
                 # don't traverse path items we've seen before
                 path = [p for p in path if not seen(p)]
 
-                for item in walk_packages(path, name+'.'):
+                for item in walk_packages(path, name+'.', onerror):
                     yield item
 
 
 def iter_modules(path=None, prefix=''):
-    """Yield submodule names+loaders for path or sys.path"""
+    """Yields (module_loader, name, ispkg) for all submodules on path,
+    or, if path is None, all top-level modules on sys.path.
+
+    'path' should be either None or a list of paths to look for
+    modules in.
+
+    'prefix' is a string to output on the front of every module name
+    on output.
+    """
+
     if path is None:
         importers = iter_importers()
     else:
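With the onerror change, the callback now receives the name of the package whose import failed, and exceptions other than ImportError propagate unless a callback is supplied. A short usage sketch (the callback body is just for illustration):

    import pkgutil

    def report(name):
        # Called with the package name whenever importing it raised.
        print "could not import", name

    for loader, name, ispkg in pkgutil.walk_packages(onerror=report):
        if ispkg:
            print "package:", name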
@@ -208,6 +248,7 @@
 
     def _reopen(self):
         if self.file and self.file.closed:
+            mod_type = self.etc[2]
             if mod_type==imp.PY_SOURCE:
                 self.file = open(self.filename, 'rU')
             elif mod_type in (imp.PY_COMPILED, imp.C_EXTENSION):
@@ -340,9 +381,7 @@
             importer = None
         sys.path_importer_cache.setdefault(path_item, importer)
 
-    # The boolean values are used for caching valid and invalid
-    # file paths for the built-in import machinery
-    if importer in (None, True, False):
+    if importer is None:
         try:
             importer = ImpImporter(path_item)
         except ImportError:
diff --git a/Lib/popen2.py b/Lib/popen2.py
index b966d4c..694979e 100644
--- a/Lib/popen2.py
+++ b/Lib/popen2.py
@@ -72,14 +72,14 @@
         # In case the child hasn't been waited on, check if it's done.
         self.poll(_deadstate=sys.maxint)
         if self.sts < 0:
-            if _active:
+            if _active is not None:
                 # Child is still running, keep us alive until we can wait on it.
                 _active.append(self)
 
     def _run_child(self, cmd):
         if isinstance(cmd, basestring):
             cmd = ['/bin/sh', '-c', cmd]
-        for i in range(3, MAXFD):
+        for i in xrange(3, MAXFD):
             try:
                 os.close(i)
             except OSError:
diff --git a/Lib/pstats.py b/Lib/pstats.py
index c3a8828..4e94b0c 100644
--- a/Lib/pstats.py
+++ b/Lib/pstats.py
@@ -548,8 +548,10 @@
             self.prompt = "% "
             if profile is not None:
                 self.stats = Stats(profile)
+                self.stream = self.stats.stream
             else:
                 self.stats = None
+                self.stream = sys.stdout
 
         def generic(self, fn, line):
             args = line.split()
@@ -667,14 +669,15 @@
             return None
 
     import sys
-    print >> self.stream, "Welcome to the profile statistics browser."
     if len(sys.argv) > 1:
         initprofile = sys.argv[1]
     else:
         initprofile = None
     try:
-        ProfileBrowser(initprofile).cmdloop()
-        print >> self.stream, "Goodbye."
+        browser = ProfileBrowser(initprofile)
+        print >> browser.stream, "Welcome to the profile statistics browser."
+        browser.cmdloop()
+        print >> browser.stream, "Goodbye."
     except KeyboardInterrupt:
         pass
 
diff --git a/Lib/pydoc.py b/Lib/pydoc.py
index cf38630..29c6cc4 100755
--- a/Lib/pydoc.py
+++ b/Lib/pydoc.py
@@ -318,6 +318,8 @@
         # identifies something in a way that pydoc itself has issues handling;
         # think 'super' and how it is a descriptor (which raises the exception
         # by lacking a __name__ attribute) and an instance.
+        if inspect.isgetsetdescriptor(object): return self.docdata(*args)
+        if inspect.ismemberdescriptor(object): return self.docdata(*args)
         try:
             if inspect.ismodule(object): return self.docmodule(*args)
             if inspect.isclass(object): return self.docclass(*args)
@@ -333,7 +335,7 @@
             name and ' ' + repr(name), type(object).__name__)
         raise TypeError, message
 
-    docmodule = docclass = docroutine = docother = fail
+    docmodule = docclass = docroutine = docother = docproperty = docdata = fail
 
     def getdocloc(self, object):
         """Return the location of module docs or None"""
@@ -915,6 +917,10 @@
         lhs = name and '<strong>%s</strong> = ' % name or ''
         return lhs + self.repr(object)
 
+    def docdata(self, object, name=None, mod=None, cl=None):
+        """Produce html documentation for a data descriptor."""
+        return self._docdescriptor(name, object, mod)
+
     def index(self, dir, shadowed=None):
         """Generate an HTML index for a directory of modules."""
         modpkgs = []
@@ -1268,6 +1274,10 @@
         """Produce text documentation for a property."""
         return self._docdescriptor(name, object, mod)
 
+    def docdata(self, object, name=None, mod=None, cl=None):
+        """Produce text documentation for a data descriptor."""
+        return self._docdescriptor(name, object, mod)
+
     def docother(self, object, name=None, mod=None, parent=None, maxlen=None, doc=None):
         """Produce text documentation for a data object."""
         repr = self.repr(object)
@@ -1397,6 +1407,14 @@
             return 'module ' + thing.__name__
     if inspect.isbuiltin(thing):
         return 'built-in function ' + thing.__name__
+    if inspect.isgetsetdescriptor(thing):
+        return 'getset descriptor %s.%s.%s' % (
+            thing.__objclass__.__module__, thing.__objclass__.__name__,
+            thing.__name__)
+    if inspect.ismemberdescriptor(thing):
+        return 'member descriptor %s.%s.%s' % (
+            thing.__objclass__.__module__, thing.__objclass__.__name__,
+            thing.__name__)
     if inspect.isclass(thing):
         return 'class ' + thing.__name__
     if inspect.isfunction(thing):
@@ -1453,6 +1471,8 @@
         if not (inspect.ismodule(object) or
                 inspect.isclass(object) or
                 inspect.isroutine(object) or
+                inspect.isgetsetdescriptor(object) or
+                inspect.ismemberdescriptor(object) or
                 isinstance(object, property)):
             # If the passed object is a piece of data or an instance,
             # document its available methods instead of its value.
diff --git a/Lib/random.py b/Lib/random.py
index 465f477..ae2d434 100644
--- a/Lib/random.py
+++ b/Lib/random.py
@@ -29,13 +29,12 @@
 General notes on the underlying Mersenne Twister core generator:
 
 * The period is 2**19937-1.
-* It is one of the most extensively tested generators in existence
-* Without a direct way to compute N steps forward, the
-  semantics of jumpahead(n) are weakened to simply jump
-  to another distant state and rely on the large period
-  to avoid overlapping sequences.
-* The random() method is implemented in C, executes in
-  a single Python step, and is, therefore, threadsafe.
+* It is one of the most extensively tested generators in existence.
+* Without a direct way to compute N steps forward, the semantics of
+  jumpahead(n) are weakened to simply jump to another distant state and rely
+  on the large period to avoid overlapping sequences.
+* The random() method is implemented in C, executes in a single Python step,
+  and is, therefore, threadsafe.
 
 """
 
@@ -253,11 +252,6 @@
 
         Optional arg random is a 0-argument function returning a random
         float in [0.0, 1.0); by default, the standard random.random.
-
-        Note that for even rather small len(x), the total number of
-        permutations of x is larger than the period of most random number
-        generators; this implies that "most" permutations of a long
-        sequence can never be generated.
         """
 
         if random is None:
diff --git a/Lib/sgmllib.py b/Lib/sgmllib.py
index 3e85a91..3020d11 100644
--- a/Lib/sgmllib.py
+++ b/Lib/sgmllib.py
@@ -29,11 +29,16 @@
 shorttagopen = re.compile('<[a-zA-Z][-.a-zA-Z0-9]*/')
 shorttag = re.compile('<([a-zA-Z][-.a-zA-Z0-9]*)/([^/]*)/')
 piclose = re.compile('>')
-endbracket = re.compile('[<>]')
+starttag = re.compile(r'<[a-zA-Z][-_.:a-zA-Z0-9]*\s*('
+        r'\s*([a-zA-Z_][-:.a-zA-Z_0-9]*)(\s*=\s*'
+        r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]'
+        r'[][\-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~\'"@]*(?=[\s>/<])))?'
+    r')*\s*/?\s*(?=[<>])')
+endtag = re.compile(r'</?[a-zA-Z][-_.:a-zA-Z0-9]*\s*/?\s*(?=[<>])')
 tagfind = re.compile('[a-zA-Z][-_.a-zA-Z0-9]*')
 attrfind = re.compile(
     r'\s*([a-zA-Z_][-:.a-zA-Z_0-9]*)(\s*=\s*'
-    r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~\'"@]*))?')
+    r'(\'[^\']*\'|"[^"]*"|[][\-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~\'"@]*))?')
 
 
 class SGMLParseError(RuntimeError):
@@ -53,6 +58,10 @@
 # self.handle_entityref() with the entity reference as argument.
 
 class SGMLParser(markupbase.ParserBase):
+    # Definition of entities -- derived classes may override
+    entity_or_charref = re.compile('&(?:'
+      '([a-zA-Z][-.a-zA-Z0-9]*)|#([0-9]+)'
+      ')(;?)')
 
     def __init__(self, verbose=0):
         """Initialize and reset this instance."""
@@ -245,11 +254,10 @@
             self.finish_shorttag(tag, data)
             self.__starttag_text = rawdata[start_pos:match.end(1) + 1]
             return k
-        # XXX The following should skip matching quotes (' or ")
-        match = endbracket.search(rawdata, i+1)
+        match = starttag.match(rawdata, i)
         if not match:
             return -1
-        j = match.start(0)
+        j = match.end(0)
         # Now parse the data between i+1 and j into a tag and attrs
         attrs = []
         if rawdata[i:i+2] == '<>':
@@ -274,32 +282,8 @@
                     attrvalue[:1] == '"' == attrvalue[-1:]):
                     # strip quotes
                     attrvalue = attrvalue[1:-1]
-                l = 0
-                new_attrvalue = ''
-                while l < len(attrvalue):
-                    av_match = entityref.match(attrvalue, l)
-                    if (av_match and av_match.group(1) in self.entitydefs and
-                        attrvalue[av_match.end(1)] == ';'):
-                        # only substitute entityrefs ending in ';' since
-                        # otherwise we may break <a href='?p=x&q=y'>
-                        # which is very common
-                        new_attrvalue += self.entitydefs[av_match.group(1)]
-                        l = av_match.end(0)
-                        continue
-                    ch_match = charref.match(attrvalue, l)
-                    if ch_match:
-                        try:
-                            char = chr(int(ch_match.group(1)))
-                            new_attrvalue += char
-                            l = ch_match.end(0)
-                            continue
-                        except ValueError:
-                            # invalid character reference, don't substitute
-                            pass
-                    # all other cases
-                    new_attrvalue += attrvalue[l]
-                    l += 1
-                attrvalue = new_attrvalue
+                attrvalue = self.entity_or_charref.sub(
+                    self._convert_ref, attrvalue)
             attrs.append((attrname.lower(), attrvalue))
             k = match.end(0)
         if rawdata[j] == '>':
@@ -308,13 +292,24 @@
         self.finish_starttag(tag, attrs)
         return j
 
+    # Internal -- convert entity or character reference
+    def _convert_ref(self, match):
+        if match.group(2):
+            return self.convert_charref(match.group(2)) or \
+                '&#%s%s' % match.groups()[1:]
+        elif match.group(3):
+            return self.convert_entityref(match.group(1)) or \
+                '&%s;' % match.group(1)
+        else:
+            return '&%s' % match.group(1)
+
     # Internal -- parse endtag
     def parse_endtag(self, i):
         rawdata = self.rawdata
-        match = endbracket.search(rawdata, i+1)
+        match = endtag.match(rawdata, i)
         if not match:
             return -1
-        j = match.start(0)
+        j = match.end(0)
         tag = rawdata[i+2:j].strip().lower()
         if rawdata[j] == '>':
             j = j+1
@@ -391,35 +386,51 @@
             print '*** Unbalanced </' + tag + '>'
             print '*** Stack:', self.stack
 
-    def handle_charref(self, name):
-        """Handle character reference, no need to override."""
+    def convert_charref(self, name):
+        """Convert character reference, may be overridden."""
         try:
             n = int(name)
         except ValueError:
-            self.unknown_charref(name)
             return
         if not 0 <= n <= 255:
-            self.unknown_charref(name)
             return
-        self.handle_data(chr(n))
+        return self.convert_codepoint(n)
+
+    def convert_codepoint(self, codepoint):
+        return chr(codepoint)
+
+    def handle_charref(self, name):
+        """Handle character reference, no need to override."""
+        replacement = self.convert_charref(name)
+        if replacement is None:
+            self.unknown_charref(name)
+        else:
+            self.handle_data(replacement)
 
     # Definition of entities -- derived classes may override
     entitydefs = \
             {'lt': '<', 'gt': '>', 'amp': '&', 'quot': '"', 'apos': '\''}
 
-    def handle_entityref(self, name):
-        """Handle entity references.
+    def convert_entityref(self, name):
+        """Convert entity references.
 
-        There should be no need to override this method; it can be
-        tailored by setting up the self.entitydefs mapping appropriately.
+        As an alternative to overriding this method, one can tailor the
+        results by setting up the self.entitydefs mapping appropriately.
         """
         table = self.entitydefs
         if name in table:
-            self.handle_data(table[name])
+            return table[name]
         else:
-            self.unknown_entityref(name)
             return
 
+    def handle_entityref(self, name):
+        """Handle entity references, no need to override."""
+        replacement = self.convert_entityref(name)
+        if replacement is None:
+            self.unknown_entityref(name)
+        else:
+            self.handle_data(self.convert_entityref(name))
+
     # Example -- handle data, should be overridden
     def handle_data(self, data):
         pass
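The refactoring separates recognition from substitution: convert_entityref() and convert_charref() return a replacement string or None, and the same machinery is now used for character data and for attribute values. A short sketch of a subclass observing the result (the markup is made up):

    import sgmllib

    class Collector(sgmllib.SGMLParser):
        def __init__(self):
            sgmllib.SGMLParser.__init__(self)
            self.pieces = []
        def handle_data(self, data):
            self.pieces.append(data)
        def start_p(self, attrs):
            self.pieces.append(dict(attrs))

    p = Collector()
    # '&amp;' and '&#65;' are converted both in data and in attribute values.
    p.feed('<p title="Tom &amp; Jerry &#65;">x &amp; y</p>')
    p.close()
    print p.pieces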
diff --git a/Lib/shelve.py b/Lib/shelve.py
index 4959c26..7a75445 100644
--- a/Lib/shelve.py
+++ b/Lib/shelve.py
@@ -139,6 +139,9 @@
         self.dict = 0
 
     def __del__(self):
+        if not hasattr(self, 'writeback'):
+            # __init__ didn't succeed, so don't bother closing
+            return
         self.close()
 
     def sync(self):
diff --git a/Lib/shutil.py b/Lib/shutil.py
index c50184c..c3ff687 100644
--- a/Lib/shutil.py
+++ b/Lib/shutil.py
@@ -127,7 +127,13 @@
         # continue with other files
         except Error, err:
             errors.extend(err.args[0])
-    copystat(src, dst)
+    try:
+        copystat(src, dst)
+    except WindowsError:
+        # can't copy file access times on Windows
+        pass
+    except OSError, why:
+        errors.extend((src, dst, str(why)))
     if errors:
         raise Error, errors
 
diff --git a/Lib/site.py b/Lib/site.py
index 47eda24..01086b7 100644
--- a/Lib/site.py
+++ b/Lib/site.py
@@ -11,10 +11,11 @@
 works).
 
 This will append site-specific paths to the module search path.  On
-Unix, it starts with sys.prefix and sys.exec_prefix (if different) and
-appends lib/python<version>/site-packages as well as lib/site-python.
-On other platforms (mainly Mac and Windows), it uses just sys.prefix
-(and sys.exec_prefix, if different, but this is unlikely).  The
+Unix (including Mac OSX), it starts with sys.prefix and
+sys.exec_prefix (if different) and appends
+lib/python<version>/site-packages as well as lib/site-python.
+On other platforms (such as Windows), it tries each of the
+prefixes directly, as well as with lib/site-packages appended.  The
 resulting directories, if they exist, are appended to sys.path, and
 also inspected for path configuration files.
 
diff --git a/Lib/socket.py b/Lib/socket.py
index fa0e663..52fb8e3 100644
--- a/Lib/socket.py
+++ b/Lib/socket.py
@@ -130,35 +130,40 @@
 if sys.platform == "riscos":
     _socketmethods = _socketmethods + ('sleeptaskw',)
 
+# All the method names that must be delegated to either the real socket
+# object or the _closedsocket object.
+_delegate_methods = ("recv", "recvfrom", "recv_into", "recvfrom_into",
+                     "send", "sendto")
+
 class _closedsocket(object):
     __slots__ = []
     def _dummy(*args):
         raise error(EBADF, 'Bad file descriptor')
-    send = recv = sendto = recvfrom = __getattr__ = _dummy
+    def close(self):
+        pass
+    # All _delegate_methods must also be initialized here.
+    send = recv = recv_into = sendto = recvfrom = recvfrom_into = _dummy
+    __getattr__ = _dummy
 
 class _socketobject(object):
 
     __doc__ = _realsocket.__doc__
 
-    __slots__ = ["_sock",
-                 "recv", "recv_into", "recvfrom_into",
-                 "send", "sendto", "recvfrom",
-                 "__weakref__"]
+    __slots__ = ["_sock", "__weakref__"] + list(_delegate_methods)
 
     def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0, _sock=None):
         if _sock is None:
             _sock = _realsocket(family, type, proto)
         self._sock = _sock
-        self.send = self._sock.send
-        self.recv = self._sock.recv
-        self.recv_into = self._sock.recv_into
-        self.sendto = self._sock.sendto
-        self.recvfrom = self._sock.recvfrom
-        self.recvfrom_into = self._sock.recvfrom_into
+        for method in _delegate_methods:
+            setattr(self, method, getattr(_sock, method))
 
     def close(self):
+        self._sock.close()
         self._sock = _closedsocket()
-        self.send = self.recv = self.sendto = self.recvfrom = self._sock._dummy
+        dummy = self._sock._dummy
+        for method in _delegate_methods:
+            setattr(self, method, dummy)
     close.__doc__ = _realsocket.close.__doc__
 
     def accept(self):
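The _delegate_methods tuple drives both ends of the wrapper's life cycle: __init__() binds each listed method directly to the real socket, and close() rebinds them all to _closedsocket._dummy so later calls fail with EBADF instead of touching a dead descriptor. A minimal sketch of the pattern, with a made-up Backend class standing in for the real socket:

    _delegated = ("send", "recv")

    class Backend(object):
        def send(self, data):
            return len(data)
        def recv(self, n):
            return "x" * n

    class Closed(object):
        def _dummy(*args):
            raise IOError("operation on closed object")
        send = recv = _dummy

    class Wrapper(object):
        def __init__(self, backend):
            self._backend = backend
            for name in _delegated:
                setattr(self, name, getattr(backend, name))
        def close(self):
            self._backend = Closed()
            dummy = self._backend._dummy
            for name in _delegated:
                setattr(self, name, dummy)

    w = Wrapper(Backend())
    print w.send("abc")        # 3 -- handled by the backend directly
    w.close()
    try:
        w.send("abc")
    except IOError, e:
        print e                # operation on closed object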
diff --git a/Lib/sqlite3/test/hooks.py b/Lib/sqlite3/test/hooks.py
index b10b3ef..761bdaa 100644
--- a/Lib/sqlite3/test/hooks.py
+++ b/Lib/sqlite3/test/hooks.py
@@ -48,6 +48,8 @@
             pass
 
     def CheckCollationIsUsed(self):
+        if sqlite.version_info < (3, 2, 1):  # old SQLite versions crash on this test
+            return
         def mycoll(x, y):
             # reverse order
             return -cmp(x, y)
diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py
index 25e4b63..c8733b9 100644
--- a/Lib/sqlite3/test/regression.py
+++ b/Lib/sqlite3/test/regression.py
@@ -61,6 +61,14 @@
 
         con.rollback()
 
+    def CheckColumnNameWithSpaces(self):
+        cur = self.con.cursor()
+        cur.execute('select 1 as "foo bar [datetime]"')
+        self.failUnlessEqual(cur.description[0][0], "foo bar")
+
+        cur.execute('select 1 as "foo baz"')
+        self.failUnlessEqual(cur.description[0][0], "foo baz")
+
 def suite():
     regression_suite = unittest.makeSuite(RegressionTests, "Check")
     return unittest.TestSuite((regression_suite,))
diff --git a/Lib/sqlite3/test/types.py b/Lib/sqlite3/test/types.py
index e49f7dd..8da5722 100644
--- a/Lib/sqlite3/test/types.py
+++ b/Lib/sqlite3/test/types.py
@@ -21,7 +21,7 @@
 #    misrepresented as being the original software.
 # 3. This notice may not be removed or altered from any source distribution.
 
-import datetime
+import bz2, datetime
 import unittest
 import sqlite3 as sqlite
 
@@ -101,16 +101,16 @@
         self.cur.execute("create table test(i int, s str, f float, b bool, u unicode, foo foo, bin blob)")
 
         # override float, make them always return the same number
-        sqlite.converters["float"] = lambda x: 47.2
+        sqlite.converters["FLOAT"] = lambda x: 47.2
 
         # and implement two custom ones
-        sqlite.converters["bool"] = lambda x: bool(int(x))
-        sqlite.converters["foo"] = DeclTypesTests.Foo
+        sqlite.converters["BOOL"] = lambda x: bool(int(x))
+        sqlite.converters["FOO"] = DeclTypesTests.Foo
 
     def tearDown(self):
-        del sqlite.converters["float"]
-        del sqlite.converters["bool"]
-        del sqlite.converters["foo"]
+        del sqlite.converters["FLOAT"]
+        del sqlite.converters["BOOL"]
+        del sqlite.converters["FOO"]
         self.cur.close()
         self.con.close()
 
@@ -208,14 +208,14 @@
         self.cur = self.con.cursor()
         self.cur.execute("create table test(x foo)")
 
-        sqlite.converters["foo"] = lambda x: "[%s]" % x
-        sqlite.converters["bar"] = lambda x: "<%s>" % x
-        sqlite.converters["exc"] = lambda x: 5/0
+        sqlite.converters["FOO"] = lambda x: "[%s]" % x
+        sqlite.converters["BAR"] = lambda x: "<%s>" % x
+        sqlite.converters["EXC"] = lambda x: 5/0
 
     def tearDown(self):
-        del sqlite.converters["foo"]
-        del sqlite.converters["bar"]
-        del sqlite.converters["exc"]
+        del sqlite.converters["FOO"]
+        del sqlite.converters["BAR"]
+        del sqlite.converters["EXC"]
         self.cur.close()
         self.con.close()
 
@@ -231,12 +231,6 @@
         val = self.cur.fetchone()[0]
         self.failUnlessEqual(val, None)
 
-    def CheckExc(self):
-        # Exceptions in type converters result in returned Nones
-        self.cur.execute('select 5 as "x [exc]"')
-        val = self.cur.fetchone()[0]
-        self.failUnlessEqual(val, None)
-
     def CheckColName(self):
         self.cur.execute("insert into test(x) values (?)", ("xxx",))
         self.cur.execute('select x as "x [bar]" from test')
@@ -279,6 +273,23 @@
         val = self.cur.fetchone()[0]
         self.failUnlessEqual(type(val), float)
 
+class BinaryConverterTests(unittest.TestCase):
+    def convert(s):
+        return bz2.decompress(s)
+    convert = staticmethod(convert)
+
+    def setUp(self):
+        self.con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_COLNAMES)
+        sqlite.register_converter("bin", BinaryConverterTests.convert)
+
+    def tearDown(self):
+        self.con.close()
+
+    def CheckBinaryInputForConverter(self):
+        testdata = "abcdefg" * 10
+        result = self.con.execute('select ? as "x [bin]"', (buffer(bz2.compress(testdata)),)).fetchone()[0]
+        self.failUnlessEqual(testdata, result)
+
 class DateTimeTests(unittest.TestCase):
     def setUp(self):
         self.con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES)
@@ -328,8 +339,9 @@
     decltypes_type_suite = unittest.makeSuite(DeclTypesTests, "Check")
     colnames_type_suite = unittest.makeSuite(ColNamesTests, "Check")
     adaptation_suite = unittest.makeSuite(ObjectAdaptationTests, "Check")
+    bin_suite = unittest.makeSuite(BinaryConverterTests, "Check")
     date_suite = unittest.makeSuite(DateTimeTests, "Check")
-    return unittest.TestSuite((sqlite_type_suite, decltypes_type_suite, colnames_type_suite, adaptation_suite, date_suite))
+    return unittest.TestSuite((sqlite_type_suite, decltypes_type_suite, colnames_type_suite, adaptation_suite, bin_suite, date_suite))
 
 def test():
     runner = unittest.TextTestRunner()
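The converter dictionary is now consulted with upper-cased names, so entries written directly into sqlite.converters must use upper-case keys, while register_converter() keeps accepting any case. A small sketch with a made-up 'point' declared type:

    import sqlite3

    def convert_point(s):
        x, y = s.split(";")
        return float(x), float(y)

    # register_converter() normalizes the name itself...
    sqlite3.register_converter("point", convert_point)

    con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
    con.execute("create table t(p point)")
    con.execute("insert into t(p) values (?)", ("1.5;2.5",))
    print con.execute("select p from t").fetchone()[0]   # (1.5, 2.5)

    # ...while the dictionary itself is keyed in upper case:
    print "POINT" in sqlite3.converters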
diff --git a/Lib/sqlite3/test/userfunctions.py b/Lib/sqlite3/test/userfunctions.py
index 78656e7..31bf289 100644
--- a/Lib/sqlite3/test/userfunctions.py
+++ b/Lib/sqlite3/test/userfunctions.py
@@ -55,6 +55,9 @@
     def __init__(self):
         pass
 
+    def finalize(self):
+        return 1
+
 class AggrNoFinalize:
     def __init__(self):
         pass
@@ -144,9 +147,12 @@
     def CheckFuncRefCount(self):
         def getfunc():
             def f():
-                return val
+                return 1
             return f
-        self.con.create_function("reftest", 0, getfunc())
+        f = getfunc()
+        globals()["foo"] = f
+        # self.con.create_function("reftest", 0, getfunc())
+        self.con.create_function("reftest", 0, f)
         cur = self.con.cursor()
         cur.execute("select reftest()")
 
@@ -195,9 +201,12 @@
 
     def CheckFuncException(self):
         cur = self.con.cursor()
-        cur.execute("select raiseexception()")
-        val = cur.fetchone()[0]
-        self.failUnlessEqual(val, None)
+        try:
+            cur.execute("select raiseexception()")
+            cur.fetchone()
+            self.fail("should have raised OperationalError")
+        except sqlite.OperationalError, e:
+            self.failUnlessEqual(e.args[0], 'user-defined function raised exception')
 
     def CheckParamString(self):
         cur = self.con.cursor()
@@ -267,31 +276,47 @@
 
     def CheckAggrNoStep(self):
         cur = self.con.cursor()
-        cur.execute("select nostep(t) from test")
+        try:
+            cur.execute("select nostep(t) from test")
+            self.fail("should have raised an AttributeError")
+        except AttributeError, e:
+            self.failUnlessEqual(e.args[0], "AggrNoStep instance has no attribute 'step'")
 
     def CheckAggrNoFinalize(self):
         cur = self.con.cursor()
-        cur.execute("select nofinalize(t) from test")
-        val = cur.fetchone()[0]
-        self.failUnlessEqual(val, None)
+        try:
+            cur.execute("select nofinalize(t) from test")
+            val = cur.fetchone()[0]
+            self.fail("should have raised an OperationalError")
+        except sqlite.OperationalError, e:
+            self.failUnlessEqual(e.args[0], "user-defined aggregate's 'finalize' method raised error")
 
     def CheckAggrExceptionInInit(self):
         cur = self.con.cursor()
-        cur.execute("select excInit(t) from test")
-        val = cur.fetchone()[0]
-        self.failUnlessEqual(val, None)
+        try:
+            cur.execute("select excInit(t) from test")
+            val = cur.fetchone()[0]
+            self.fail("should have raised an OperationalError")
+        except sqlite.OperationalError, e:
+            self.failUnlessEqual(e.args[0], "user-defined aggregate's '__init__' method raised error")
 
     def CheckAggrExceptionInStep(self):
         cur = self.con.cursor()
-        cur.execute("select excStep(t) from test")
-        val = cur.fetchone()[0]
-        self.failUnlessEqual(val, 42)
+        try:
+            cur.execute("select excStep(t) from test")
+            val = cur.fetchone()[0]
+            self.fail("should have raised an OperationalError")
+        except sqlite.OperationalError, e:
+            self.failUnlessEqual(e.args[0], "user-defined aggregate's 'step' method raised error")
 
     def CheckAggrExceptionInFinalize(self):
         cur = self.con.cursor()
-        cur.execute("select excFinalize(t) from test")
-        val = cur.fetchone()[0]
-        self.failUnlessEqual(val, None)
+        try:
+            cur.execute("select excFinalize(t) from test")
+            val = cur.fetchone()[0]
+            self.fail("should have raised an OperationalError")
+        except sqlite.OperationalError, e:
+            self.failUnlessEqual(e.args[0], "user-defined aggregate's 'finalize' method raised error")
 
     def CheckAggrCheckParamStr(self):
         cur = self.con.cursor()
@@ -331,10 +356,54 @@
         val = cur.fetchone()[0]
         self.failUnlessEqual(val, 60)
 
+def authorizer_cb(action, arg1, arg2, dbname, source):
+    if action != sqlite.SQLITE_SELECT:
+        return sqlite.SQLITE_DENY
+    if arg2 == 'c2' or arg1 == 't2':
+        return sqlite.SQLITE_DENY
+    return sqlite.SQLITE_OK
+
+class AuthorizerTests(unittest.TestCase):
+    def setUp(self):
+        self.con = sqlite.connect(":memory:")
+        self.con.executescript("""
+            create table t1 (c1, c2);
+            create table t2 (c1, c2);
+            insert into t1 (c1, c2) values (1, 2);
+            insert into t2 (c1, c2) values (4, 5);
+            """)
+
+        # For our security test:
+        self.con.execute("select c2 from t2")
+
+        self.con.set_authorizer(authorizer_cb)
+
+    def tearDown(self):
+        pass
+
+    def CheckTableAccess(self):
+        try:
+            self.con.execute("select * from t2")
+        except sqlite.DatabaseError, e:
+            if not e.args[0].endswith("prohibited"):
+                self.fail("wrong exception text: %s" % e.args[0])
+            return
+        self.fail("should have raised an exception due to missing privileges")
+
+    def CheckColumnAccess(self):
+        try:
+            self.con.execute("select c2 from t1")
+        except sqlite.DatabaseError, e:
+            if not e.args[0].endswith("prohibited"):
+                self.fail("wrong exception text: %s" % e.args[0])
+            return
+        self.fail("should have raised an exception due to missing privileges")
+
 def suite():
     function_suite = unittest.makeSuite(FunctionTests, "Check")
     aggregate_suite = unittest.makeSuite(AggregateTests, "Check")
-    return unittest.TestSuite((function_suite, aggregate_suite))
+    authorizer_suite = unittest.makeSuite(AuthorizerTests, "Check")
+    return unittest.TestSuite((function_suite, aggregate_suite, authorizer_suite))
 
 def test():
     runner = unittest.TextTestRunner()
diff --git a/Lib/string.py b/Lib/string.py
index ba85a49..a5837e9 100644
--- a/Lib/string.py
+++ b/Lib/string.py
@@ -161,7 +161,7 @@
                 val = mapping[named]
                 # We use this idiom instead of str() because the latter will
                 # fail if val is a Unicode containing non-ASCII characters.
-                return '%s' % val
+                return '%s' % (val,)
             if mo.group('escaped') is not None:
                 return self.delimiter
             if mo.group('invalid') is not None:
@@ -186,13 +186,13 @@
                 try:
                     # We use this idiom instead of str() because the latter
                     # will fail if val is a Unicode containing non-ASCII
-                    return '%s' % mapping[named]
+                    return '%s' % (mapping[named],)
                 except KeyError:
                     return self.delimiter + named
             braced = mo.group('braced')
             if braced is not None:
                 try:
-                    return '%s' % mapping[braced]
+                    return '%s' % (mapping[braced],)
                 except KeyError:
                     return self.delimiter + '{' + braced + '}'
             if mo.group('escaped') is not None:
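The tuple-wrapping matters because '%s' % val misbehaves when the substitution value is itself a tuple: the tuple gets unpacked as format arguments instead of being rendered. A quick illustration:

    from string import Template

    val = (1, 2)
    # '%s' % val raises "not all arguments converted ..." here, whereas
    # '%s' % (val,) formats the tuple itself.
    print '%s' % (val,)                               # (1, 2)
    print Template('point=$p').substitute(p=(1, 2))   # point=(1, 2)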
diff --git a/Lib/struct.py b/Lib/struct.py
index 9113e71..07c21bf 100644
--- a/Lib/struct.py
+++ b/Lib/struct.py
@@ -64,7 +64,7 @@
 
 def pack_into(fmt, buf, offset, *args):
     """
-    Pack the values v2, v2, ... according to fmt, write
+    Pack the values v1, v2, ... according to fmt, write
     the packed bytes into the writable buffer buf starting at offset.
     See struct.__doc__ for more on format strings.
     """
diff --git a/Lib/subprocess.py b/Lib/subprocess.py
index a6af7e7..0d19129 100644
--- a/Lib/subprocess.py
+++ b/Lib/subprocess.py
@@ -121,7 +121,7 @@
     Run command with arguments.  Wait for command to complete.  If the
     exit code was zero then return, otherwise raise
     CalledProcessError.  The CalledProcessError object will have the
-    return code in the errno attribute.
+    return code in the returncode attribute.
 
     The arguments are the same as for the Popen constructor.  Example:
 
@@ -141,8 +141,8 @@
 
 A ValueError will be raised if Popen is called with invalid arguments.
 
-check_call() will raise CalledProcessError, which is a subclass of
-OSError, if the called process returns a non-zero return code.
+check_call() will raise CalledProcessError if the called process
+returns a non-zero return code.
 
 
 Security
@@ -234,7 +234,7 @@
 sts = os.system("mycmd" + " myarg")
 ==>
 p = Popen("mycmd" + " myarg", shell=True)
-sts = os.waitpid(p.pid, 0)
+pid, sts = os.waitpid(p.pid, 0)
 
 Note:
 
@@ -360,11 +360,16 @@
 import traceback
 
 # Exception classes used by this module.
-class CalledProcessError(OSError):
+class CalledProcessError(Exception):
     """This exception is raised when a process run by check_call() returns
     a non-zero exit status.  The exit status will be stored in the
-    errno attribute.  This exception is a subclass of
-    OSError."""
+    returncode attribute."""
+    def __init__(self, returncode, cmd):
+        self.returncode = returncode
+        self.cmd = cmd
+    def __str__(self):
+        return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode)
+
 
 if mswindows:
     import threading
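With CalledProcessError no longer derived from OSError, callers catch it explicitly and read the exit status from returncode (and the command from cmd). For example (the command is illustrative and assumes a Unix 'ls'):

    import subprocess

    try:
        subprocess.check_call(["ls", "/no/such/dir"])
    except subprocess.CalledProcessError, e:
        print "command failed:", e.cmd, "status", e.returncode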
@@ -442,7 +447,7 @@
     """Run command with arguments.  Wait for command to complete.  If
     the exit code was zero then return, otherwise raise
     CalledProcessError.  The CalledProcessError object will have the
-    return code in the errno attribute.
+    return code in the returncode attribute.
 
     The arguments are the same as for the Popen constructor.  Example:
 
@@ -453,7 +458,7 @@
     if cmd is None:
         cmd = popenargs[0]
     if retcode:
-        raise CalledProcessError(retcode, "Command %s returned non-zero exit status" % cmd)
+        raise CalledProcessError(retcode, cmd)
     return retcode
 
 
@@ -613,7 +618,7 @@
             return
         # In case the child hasn't been waited on, check if it's done.
         self.poll(_deadstate=sys.maxint)
-        if self.returncode is None:
+        if self.returncode is None and _active is not None:
             # Child is still running, keep us alive until we can wait on it.
             _active.append(self)
 
@@ -941,7 +946,7 @@
 
 
         def _close_fds(self, but):
-            for i in range(3, MAXFD):
+            for i in xrange(3, MAXFD):
                 if i == but:
                     continue
                 try:
diff --git a/Lib/tarfile.py b/Lib/tarfile.py
index 061d0f5..c185fbd 100644
--- a/Lib/tarfile.py
+++ b/Lib/tarfile.py
@@ -417,7 +417,13 @@
             self.fileobj.write(self.buf)
             self.buf = ""
             if self.comptype == "gz":
-                self.fileobj.write(struct.pack("<l", self.crc))
+                # The native zlib crc is an unsigned 32-bit integer, but
+                # the Python wrapper implicitly casts that to a signed C
+                # long.  So, on a 32-bit box self.crc may "look negative",
+                # while the same crc on a 64-bit box may "look positive".
+                # To avoid irksome warnings from the `struct` module, force
+                # it to look positive on all boxes.
+                self.fileobj.write(struct.pack("<L", self.crc & 0xffffffffL))
                 self.fileobj.write(struct.pack("<L", self.pos & 0xffffFFFFL))
 
         if not self._extfileobj:
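The comment boils down to a signedness mismatch: zlib's crc is a 32-bit unsigned quantity, but the Python wrapper hands it back as a signed C long, so the same checksum can look negative on a 32-bit build and positive on a 64-bit one. Masking to 32 bits and packing with the unsigned "<L" format yields identical bytes everywhere and keeps struct quiet:

    import struct, zlib

    crc = zlib.crc32("hello world")            # may be negative on 32-bit builds
    packed = struct.pack("<L", crc & 0xffffffffL)
    print repr(packed)
    print struct.unpack("<L", packed)[0]       # always the unsigned 32-bit value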
@@ -1750,13 +1756,6 @@
             try:
                 tarinfo = TarInfo.frombuf(buf)
 
-                # We shouldn't rely on this checksum, because some tar programs
-                # calculate it differently and it is merely validating the
-                # header block. We could just as well skip this part, which would
-                # have a slight effect on performance...
-                if tarinfo.chksum not in calc_chksums(buf):
-                    self._dbg(1, "tarfile: Bad Checksum %r" % tarinfo.name)
-
                 # Set the TarInfo object's offset to the current position of the
                 # TarFile and set self.offset to the position where the data blocks
                 # should begin.
diff --git a/Lib/telnetlib.py b/Lib/telnetlib.py
index 3523037..a13e85c 100644
--- a/Lib/telnetlib.py
+++ b/Lib/telnetlib.py
@@ -311,6 +311,8 @@
         s_args = s_reply
         if timeout is not None:
             s_args = s_args + (timeout,)
+            from time import time
+            time_start = time()
         while not self.eof and select.select(*s_args) == s_reply:
             i = max(0, len(self.cookedq)-n)
             self.fill_rawq()
@@ -321,6 +323,11 @@
                 buf = self.cookedq[:i]
                 self.cookedq = self.cookedq[i:]
                 return buf
+            if timeout is not None:
+                elapsed = time() - time_start
+                if elapsed >= timeout:
+                    break
+                s_args = s_reply + (timeout-elapsed,)
         return self.read_very_lazy()
 
     def read_all(self):
@@ -601,6 +608,9 @@
             if not hasattr(list[i], "search"):
                 if not re: import re
                 list[i] = re.compile(list[i])
+        if timeout is not None:
+            from time import time
+            time_start = time()
         while 1:
             self.process_rawq()
             for i in indices:
@@ -613,7 +623,11 @@
             if self.eof:
                 break
             if timeout is not None:
-                r, w, x = select.select([self.fileno()], [], [], timeout)
+                elapsed = time() - time_start
+                if elapsed >= timeout:
+                    break
+                s_args = ([self.fileno()], [], [], timeout-elapsed)
+                r, w, x = select.select(*s_args)
                 if not r:
                     break
             self.fill_rawq()
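Both loops now keep track of how much of the timeout has already elapsed, so the overall deadline is honoured across repeated select() calls instead of being restarted after every chunk of data. The pattern, roughly:

    import select, time

    def wait_readable(fd, timeout):
        # Keep selecting until data arrives or the total timeout is spent,
        # shrinking the per-call timeout each time around the loop.
        start = time.time()
        while True:
            remaining = timeout - (time.time() - start)
            if remaining <= 0:
                return False                     # overall deadline exceeded
            r, w, x = select.select([fd], [], [], remaining)
            if r:
                return True

    # e.g. wait_readable(sock.fileno(), 5.0)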
diff --git a/Lib/tempfile.py b/Lib/tempfile.py
index dd7e864..2e8cd6d 100644
--- a/Lib/tempfile.py
+++ b/Lib/tempfile.py
@@ -446,7 +446,7 @@
                       prefix=template, dir=None):
         """Create and return a temporary file.
         Arguments:
-        'prefix', 'suffix', 'directory' -- as for mkstemp.
+        'prefix', 'suffix', 'dir' -- as for mkstemp.
         'mode' -- the mode argument to os.fdopen (default "w+b").
         'bufsize' -- the buffer size argument to os.fdopen (default -1).
         The file is created as mkstemp() would do it.
diff --git a/Lib/test/crashers/bogus_code_obj.py b/Lib/test/crashers/bogus_code_obj.py
new file mode 100644
index 0000000..613ae51
--- /dev/null
+++ b/Lib/test/crashers/bogus_code_obj.py
@@ -0,0 +1,19 @@
+"""
+Broken bytecode objects can easily crash the interpreter.
+
+This is not going to be fixed.  It is generally agreed that there is no
+point in writing a bytecode verifier and putting it in CPython just for
+this.  Moreover, a verifier is bound to accept only a subset of all safe
+bytecodes, so it could lead to unnecessary breakage.
+
+For security purposes, "restricted" interpreters are not going to let
+the user build or load random bytecodes anyway.  Otherwise, this is a
+"won't fix" case.
+
+"""
+
+import types
+
+co = types.CodeType(0, 0, 0, 0, '\x04\x71\x00\x00', (),
+                    (), (), '', '', 1, '')
+exec co
diff --git a/Lib/test/crashers/borrowed_ref_1.py b/Lib/test/crashers/borrowed_ref_1.py
new file mode 100644
index 0000000..d16ede2
--- /dev/null
+++ b/Lib/test/crashers/borrowed_ref_1.py
@@ -0,0 +1,29 @@
+"""
+_PyType_Lookup() returns a borrowed reference.
+This attacks the call in dictobject.c.
+"""
+
+class A(object):
+    pass
+
+class B(object):
+    def __del__(self):
+        print 'hi'
+        del D.__missing__
+
+class D(dict):
+    class __missing__:
+        def __init__(self, *args):
+            pass
+
+
+d = D()
+a = A()
+a.cycle = a
+a.other = B()
+del a
+
+prev = None
+while 1:
+    d[5]
+    prev = (prev,)
diff --git a/Lib/test/crashers/borrowed_ref_2.py b/Lib/test/crashers/borrowed_ref_2.py
new file mode 100644
index 0000000..1a7b3ff
--- /dev/null
+++ b/Lib/test/crashers/borrowed_ref_2.py
@@ -0,0 +1,38 @@
+"""
+_PyType_Lookup() returns a borrowed reference.
+This attacks PyObject_GenericSetAttr().
+
+NB. on my machine this crashes in 2.5 debug but not release.
+"""
+
+class A(object):
+    pass
+
+class B(object):
+    def __del__(self):
+        print "hi"
+        del C.d
+
+class D(object):
+    def __set__(self, obj, value):
+        self.hello = 42
+
+class C(object):
+    d = D()
+
+    def g():
+        pass
+
+
+c = C()
+a = A()
+a.cycle = a
+a.other = B()
+
+lst = [None] * 1000000
+i = 0
+del a
+while 1:
+    c.d = 42         # segfaults in PyMethod_New(im_func=D.__set__, im_self=d)
+    lst[i] = c.g     # consume the free list of instancemethod objects
+    i += 1
diff --git a/Lib/test/crashers/coerce.py b/Lib/test/crashers/coerce.py
deleted file mode 100644
index 574956b..0000000
--- a/Lib/test/crashers/coerce.py
+++ /dev/null
@@ -1,9 +0,0 @@
-
-# http://python.org/sf/992017
-
-class foo:
-    def __coerce__(self, other):
-        return other, self
-
-if __name__ == '__main__':
-    foo()+1   # segfault: infinite recursion in C
diff --git a/Lib/test/crashers/gc_inspection.py b/Lib/test/crashers/gc_inspection.py
new file mode 100644
index 0000000..10caa79
--- /dev/null
+++ b/Lib/test/crashers/gc_inspection.py
@@ -0,0 +1,32 @@
+"""
+gc.get_referrers() can be used to see objects before they are fully built.
+
+Note that this is only an example.  There are many ways to crash Python
+by using gc.get_referrers(), as well as many extension modules (even
+when they are using perfectly documented patterns to build objects).
+
+Identifying and removing all places that expose to the GC a
+partially-built object is a long-term project.  A patch was proposed on
+SF specifically for this example but I consider fixing just this single
+example a bit pointless (#1517042).
+
+A fix would include a whole-scale code review, possibly with an API
+change to decouple object creation and GC registration, and according
+fixes to the documentation for extension module writers.  It's unlikely
+to happen, though.  So this is currently classified as
+"gc.get_referrers() is dangerous, use only for debugging".
+"""
+
+import gc
+
+
+def g():
+    marker = object()
+    yield marker
+    # now the marker is in the tuple being constructed
+    [tup] = [x for x in gc.get_referrers(marker) if type(x) is tuple]
+    print tup
+    print tup[1]
+
+
+tuple(g())
diff --git a/Lib/test/crashers/infinite_rec_3.py b/Lib/test/crashers/infinite_rec_3.py
deleted file mode 100644
index 0b04e4c..0000000
--- a/Lib/test/crashers/infinite_rec_3.py
+++ /dev/null
@@ -1,9 +0,0 @@
-
-# http://python.org/sf/1202533
-
-class A(object):
-    pass
-A.__call__ = A()
-
-if __name__ == '__main__':
-    A()()   # segfault: infinite recursion in C
diff --git a/Lib/test/crashers/recursion_limit_too_high.py b/Lib/test/crashers/recursion_limit_too_high.py
new file mode 100644
index 0000000..1fa4d32
--- /dev/null
+++ b/Lib/test/crashers/recursion_limit_too_high.py
@@ -0,0 +1,16 @@
+# The following example may crash or not depending on the platform.
+# E.g. on 32-bit Intel Linux in a "standard" configuration it seems to
+# crash on Python 2.5 (but not 2.4 nor 2.3).  On Windows the import
+# eventually fails to find the module, possibly because we run out of
+# file handles.
+
+# The point of this example is to show that sys.setrecursionlimit() is a
+# hack, and not a robust solution.  This example simply exercises a path
+# where it takes many C-level recursions, consuming a lot of stack
+# space, for each Python-level recursion.  So 1000 times this amount of
+# stack space may be too much for standard platforms already.
+
+import sys
+if 'recursion_limit_too_high' in sys.modules:
+    del sys.modules['recursion_limit_too_high']
+import recursion_limit_too_high
diff --git a/Lib/test/crashers/recursive_call.py b/Lib/test/crashers/recursive_call.py
index 0776479..31c8963 100644
--- a/Lib/test/crashers/recursive_call.py
+++ b/Lib/test/crashers/recursive_call.py
@@ -1,6 +1,11 @@
 #!/usr/bin/env python
 
 # No bug report AFAIK, mail on python-dev on 2006-01-10
+
+# This is a "won't fix" case.  It is known that setting a high enough
+# recursion limit crashes by overflowing the stack.  Unless this is
+# redesigned somehow, it won't go away.
+
 import sys
 
 sys.setrecursionlimit(1 << 30)
diff --git a/Lib/test/crashers/xml_parsers.py b/Lib/test/crashers/xml_parsers.py
deleted file mode 100644
index e6b5727..0000000
--- a/Lib/test/crashers/xml_parsers.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from xml.parsers import expat
-
-# http://python.org/sf/1296433
-
-def test_parse_only_xml_data():
-    #
-    xml = "<?xml version='1.0' encoding='iso8859'?><s>%s</s>" % ('a' * 1025)
-    # this one doesn't crash
-    #xml = "<?xml version='1.0'?><s>%s</s>" % ('a' * 10000)
-
-    def handler(text):
-        raise Exception
-
-    parser = expat.ParserCreate()
-    parser.CharacterDataHandler = handler
-
-    try:
-        parser.Parse(xml)
-    except:
-        pass
-
-if __name__ == '__main__':
-    test_parse_only_xml_data()
-
-# Invalid read of size 4
-#    at 0x43F936: PyObject_Free (obmalloc.c:735)
-#    by 0x45A7C7: unicode_dealloc (unicodeobject.c:246)
-#    by 0x1299021D: PyUnknownEncodingHandler (pyexpat.c:1314)
-#    by 0x12993A66: processXmlDecl (xmlparse.c:3330)
-#    by 0x12999211: doProlog (xmlparse.c:3678)
-#    by 0x1299C3F0: prologInitProcessor (xmlparse.c:3550)
-#    by 0x12991EA3: XML_ParseBuffer (xmlparse.c:1562)
-#    by 0x1298F8EC: xmlparse_Parse (pyexpat.c:895)
-#    by 0x47B3A1: PyEval_EvalFrameEx (ceval.c:3565)
-#    by 0x47CCAC: PyEval_EvalCodeEx (ceval.c:2739)
-#    by 0x47CDE1: PyEval_EvalCode (ceval.c:490)
-#    by 0x499820: PyRun_SimpleFileExFlags (pythonrun.c:1198)
-#    by 0x4117F1: Py_Main (main.c:492)
-#    by 0x12476D1F: __libc_start_main (in /lib/libc-2.3.5.so)
-#    by 0x410DC9: (within /home/neal/build/python/svn/clean/python)
-#  Address 0x12704020 is 264 bytes inside a block of size 592 free'd
-#    at 0x11B1BA8A: free (vg_replace_malloc.c:235)
-#    by 0x124B5F18: (within /lib/libc-2.3.5.so)
-#    by 0x48DE43: find_module (import.c:1320)
-#    by 0x48E997: import_submodule (import.c:2249)
-#    by 0x48EC15: load_next (import.c:2083)
-#    by 0x48F091: import_module_ex (import.c:1914)
-#    by 0x48F385: PyImport_ImportModuleEx (import.c:1955)
-#    by 0x46D070: builtin___import__ (bltinmodule.c:44)
-#    by 0x4186CF: PyObject_Call (abstract.c:1777)
-#    by 0x474E9B: PyEval_CallObjectWithKeywords (ceval.c:3432)
-#    by 0x47928E: PyEval_EvalFrameEx (ceval.c:2038)
-#    by 0x47CCAC: PyEval_EvalCodeEx (ceval.c:2739)
-#    by 0x47CDE1: PyEval_EvalCode (ceval.c:490)
-#    by 0x48D0F7: PyImport_ExecCodeModuleEx (import.c:635)
-#    by 0x48D4F4: load_source_module (import.c:913)
diff --git a/Lib/test/fork_wait.py b/Lib/test/fork_wait.py
index 5600bdb..7eb55f6 100644
--- a/Lib/test/fork_wait.py
+++ b/Lib/test/fork_wait.py
@@ -34,7 +34,14 @@
                 pass
 
     def wait_impl(self, cpid):
-        spid, status = os.waitpid(cpid, 0)
+        for i in range(10):
+            # waitpid() shouldn't hang, but some of the buildbots seem to hang
+            # in the forking tests.  This is an attempt to fix the problem.
+            spid, status = os.waitpid(cpid, os.WNOHANG)
+            if spid == cpid:
+                break
+            time.sleep(2 * SHORTSLEEP)
+
         self.assertEquals(spid, cpid)
         self.assertEquals(status, 0, "cause = %d, exit = %d" % (status&0xff, status>>8))
 
diff --git a/Lib/test/output/test_ossaudiodev b/Lib/test/output/test_ossaudiodev
index 9f55afa..f0df5d2 100644
--- a/Lib/test/output/test_ossaudiodev
+++ b/Lib/test/output/test_ossaudiodev
@@ -1,3 +1,2 @@
 test_ossaudiodev
-playing test sound file...
-elapsed time: 3.1 sec
+playing test sound file (expected running time: 2.93 sec)
diff --git a/Lib/test/output/test_thread b/Lib/test/output/test_thread
index d49651d..68c6a92 100644
--- a/Lib/test/output/test_thread
+++ b/Lib/test/output/test_thread
@@ -4,3 +4,15 @@
 
 *** Barrier Test ***
 all tasks done
+
+*** Changing thread stack size ***
+caught expected ValueError setting stack_size(4096)
+successfully set stack_size(262144)
+successfully set stack_size(1048576)
+successfully set stack_size(0)
+trying stack_size = 262144
+waiting for all tasks to complete
+all tasks done
+trying stack_size = 1048576
+waiting for all tasks to complete
+all tasks done
diff --git a/Lib/test/regrtest.py b/Lib/test/regrtest.py
index ca4a3b5..4553838 100755
--- a/Lib/test/regrtest.py
+++ b/Lib/test/regrtest.py
@@ -66,7 +66,9 @@
 
 -M runs tests that require an exorbitant amount of memory. These tests
 typically try to ascertain containers keep working when containing more than
-2 bilion objects, and only work on 64-bit systems. The passed-in memlimit,
+2 billion objects, which only works on 64-bit systems. There are also some
+tests that try to exhaust the address space of the process, which only makes
+sense on 32-bit systems with at least 2Gb of memory. The passed-in memlimit,
 which is a string in the form of '2.5Gb', determines howmuch memory the
 tests will limit themselves to (but they may go slightly over.) The number
 shouldn't be more memory than the machine has (including swap memory). You
@@ -496,14 +498,30 @@
 
 def runtest(test, generate, verbose, quiet, testdir=None, huntrleaks=False):
     """Run a single test.
+
     test -- the name of the test
     generate -- if true, generate output, instead of running the test
-    and comparing it to a previously created output file
+                and comparing it to a previously created output file
     verbose -- if true, print more messages
     quiet -- if true, don't print 'skipped' messages (probably redundant)
     testdir -- test directory
+    huntrleaks -- run multiple times to test for leaks; requires a debug
+                  build; a triple corresponding to -R's three arguments
+    Return:
+        -2  test skipped because resource denied
+        -1  test skipped for some other reason
+         0  test failed
+         1  test passed
     """
 
+    try:
+        return runtest_inner(test, generate, verbose, quiet, testdir,
+                             huntrleaks)
+    finally:
+        cleanup_test_droppings(test, verbose)
+
+def runtest_inner(test, generate, verbose, quiet,
+                     testdir=None, huntrleaks=False):
     test_support.unload(test)
     if not testdir:
         testdir = findtestdir()
@@ -595,6 +613,37 @@
         sys.stdout.flush()
         return 0
 
+def cleanup_test_droppings(testname, verbose):
+    import shutil
+
+    # Try to clean up junk commonly left behind.  While tests shouldn't leave
+    # any files or directories behind, when a test fails that can be tedious
+    # for it to arrange.  The consequences can be especially nasty on Windows,
+    # since if a test leaves a file open, it cannot be deleted by name (while
+    # there's nothing we can do about that here either, we can display the
+    # name of the offending test, which is a real help).
+    for name in (test_support.TESTFN,
+                 "db_home",
+                ):
+        if not os.path.exists(name):
+            continue
+
+        if os.path.isdir(name):
+            kind, nuker = "directory", shutil.rmtree
+        elif os.path.isfile(name):
+            kind, nuker = "file", os.unlink
+        else:
+            raise SystemError("os.path says %r exists but is neither "
+                              "directory nor file" % name)
+
+        if verbose:
+            print "%r left behind %s %r" % (testname, kind, name)
+        try:
+            nuker(name)
+        except Exception, msg:
+            print >> sys.stderr, ("%r left behind %s %r and it couldn't be "
+                "removed: %s" % (testname, kind, name, msg))
+
 def dash_R(the_module, test, indirect_test, huntrleaks):
     # This code is hackish and inelegant, but it seems to do the job.
     import copy_reg
@@ -637,7 +686,7 @@
 
 def dash_R_cleanup(fs, ps, pic):
     import gc, copy_reg
-    import _strptime, linecache, warnings, dircache
+    import _strptime, linecache, dircache
     import urlparse, urllib, urllib2, mimetypes, doctest
     import struct, filecmp
     from distutils.dir_util import _path_created
@@ -1227,6 +1276,37 @@
         test_winreg
         test_winsound
         """,
+    'netbsd3':
+        """
+        test_aepack
+        test_al
+        test_applesingle
+        test_bsddb
+        test_bsddb185
+        test_bsddb3
+        test_cd
+        test_cl
+        test_ctypes
+        test_curses
+        test_dl
+        test_gdbm
+        test_gl
+        test_imgfile
+        test_linuxaudiodev
+        test_locale
+        test_macfs
+        test_macostools
+        test_nis
+        test_ossaudiodev
+        test_pep277
+        test_sqlite
+        test_startfile
+        test_sunaudiodev
+        test_tcl
+        test_unicode_file
+        test_winreg
+        test_winsound
+        """,
 }
 _expectations['freebsd5'] = _expectations['freebsd4']
 _expectations['freebsd6'] = _expectations['freebsd4']
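
The reworked runtest() docstring above documents a small return-code protocol (-2, -1, 0, 1); a hypothetical caller written against it would dispatch roughly like this (the test name is only an example, and calling runtest() really does run that test):

    from test.regrtest import runtest

    result = runtest('test_os', generate=False, verbose=False, quiet=True)
    if result == -2:
        print 'skipped: resource denied'
    elif result == -1:
        print 'skipped'
    elif result == 0:
        print 'failed'
    elif result == 1:
        print 'passed'
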
diff --git a/Lib/test/string_tests.py b/Lib/test/string_tests.py
index aaa2dc2..73447ad 100644
--- a/Lib/test/string_tests.py
+++ b/Lib/test/string_tests.py
@@ -147,8 +147,8 @@
                 else:
                     r2, rem = len(i)+1, 0
                 if rem or r1 != r2:
-                    self.assertEqual(rem, 0)
-                    self.assertEqual(r1, r2)
+                    self.assertEqual(rem, 0, '%s != 0 for %s' % (rem, i))
+                    self.assertEqual(r1, r2, '%s != %s for %s' % (r1, r2, i))
 
     def test_find(self):
         self.checkequal(0, 'abcdefghiabc', 'find', 'abc')
@@ -636,6 +636,11 @@
         EQ("bobobXbobob", "bobobobXbobobob", "replace", "bobob", "bob")
         EQ("BOBOBOB", "BOBOBOB", "replace", "bob", "bobby")
 
+        ba = buffer('a')
+        bb = buffer('b')
+        EQ("bbc", "abc", "replace", ba, bb)
+        EQ("aac", "abc", "replace", bb, ba)
+
         #
         self.checkequal('one@two!three!', 'one!two!three!', 'replace', '!', '@', 1)
         self.checkequal('onetwothree', 'one!two!three!', 'replace', '!', '')
@@ -819,6 +824,21 @@
         self.checkraises(TypeError, 'hello', 'startswith')
         self.checkraises(TypeError, 'hello', 'startswith', 42)
 
+        # test tuple arguments
+        self.checkequal(True, 'hello', 'startswith', ('he', 'ha'))
+        self.checkequal(False, 'hello', 'startswith', ('lo', 'llo'))
+        self.checkequal(True, 'hello', 'startswith', ('hellox', 'hello'))
+        self.checkequal(False, 'hello', 'startswith', ())
+        self.checkequal(True, 'helloworld', 'startswith', ('hellowo',
+                                                           'rld', 'lowo'), 3)
+        self.checkequal(False, 'helloworld', 'startswith', ('hellowo', 'ello',
+                                                            'rld'), 3)
+        self.checkequal(True, 'hello', 'startswith', ('lo', 'he'), 0, -1)
+        self.checkequal(False, 'hello', 'startswith', ('he', 'hel'), 0, 1)
+        self.checkequal(True, 'hello', 'startswith', ('he', 'hel'), 0, 2)
+
+        self.checkraises(TypeError, 'hello', 'startswith', (42,))
+
     def test_endswith(self):
         self.checkequal(True, 'hello', 'endswith', 'lo')
         self.checkequal(False, 'hello', 'endswith', 'he')
@@ -853,6 +873,21 @@
         self.checkraises(TypeError, 'hello', 'endswith')
         self.checkraises(TypeError, 'hello', 'endswith', 42)
 
+        # test tuple arguments
+        self.checkequal(False, 'hello', 'endswith', ('he', 'ha'))
+        self.checkequal(True, 'hello', 'endswith', ('lo', 'llo'))
+        self.checkequal(True, 'hello', 'endswith', ('hellox', 'hello'))
+        self.checkequal(False, 'hello', 'endswith', ())
+        self.checkequal(True, 'helloworld', 'endswith', ('hellowo',
+                                                           'rld', 'lowo'), 3)
+        self.checkequal(False, 'helloworld', 'endswith', ('hellowo', 'ello',
+                                                            'rld'), 3, -1)
+        self.checkequal(True, 'hello', 'endswith', ('hell', 'ell'), 0, -1)
+        self.checkequal(False, 'hello', 'endswith', ('he', 'hel'), 0, 1)
+        self.checkequal(True, 'hello', 'endswith', ('he', 'hell'), 0, 4)
+
+        self.checkraises(TypeError, 'hello', 'endswith', (42,))
+
     def test___contains__(self):
         self.checkequal(True, '', '__contains__', '')         # vereq('' in '', True)
         self.checkequal(True, 'abc', '__contains__', '')      # vereq('' in 'abc', True)
@@ -872,7 +907,7 @@
         self.checkequal(u'abc', 'abc', '__getitem__', slice(0, 1000))
         self.checkequal(u'a', 'abc', '__getitem__', slice(0, 1))
         self.checkequal(u'', 'abc', '__getitem__', slice(0, 0))
-        # FIXME What about negative indizes? This is handled differently by [] and __getitem__(slice)
+        # FIXME What about negative indices? This is handled differently by [] and __getitem__(slice)
 
         self.checkraises(TypeError, 'abc', '__getitem__', 'def')
 
@@ -908,6 +943,8 @@
         # test.test_string.StringTest.test_join)
         self.checkequal('a b c d', ' ', 'join', ['a', 'b', 'c', 'd'])
         self.checkequal('abcd', '', 'join', ('a', 'b', 'c', 'd'))
+        self.checkequal('bd', '', 'join', ('', 'b', '', 'd'))
+        self.checkequal('ac', '', 'join', ('a', '', 'c', ''))
         self.checkequal('w x y z', ' ', 'join', Sequence())
         self.checkequal('abc', 'a', 'join', ('abc',))
         self.checkequal('z', 'a', 'join', UserList(['z']))
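
The new checks above exercise the (then new) support for passing a tuple of candidates to str.startswith() and str.endswith(): the call succeeds if any element matches, and the optional start/end arguments still apply. In plain usage terms:

    filename = 'photo.jpeg'
    print filename.endswith(('.jpg', '.jpeg', '.png'))    # True: one candidate matches
    print filename.startswith(('img_', 'pic_'))           # False: no candidate matches
    print 'hello'.startswith(('he', 'ha'), 0, 2)          # True: the slice 'he' is checked
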
diff --git a/Lib/test/test__locale.py b/Lib/test/test__locale.py
index 9799f89..ec59d71 100644
--- a/Lib/test/test__locale.py
+++ b/Lib/test/test__locale.py
@@ -113,6 +113,9 @@
                                 "using eval('3.14') failed for %s" % loc)
             self.assertEquals(int(float('3.14') * 100), 314,
                                 "using float('3.14') failed for %s" % loc)
+            if localeconv()['decimal_point'] != '.':
+                self.assertRaises(ValueError, float,
+                                  localeconv()['decimal_point'].join(['1', '23']))
 
 def test_main():
     run_unittest(_LocaleTests)
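
The added assertion pins down that float() stays locale-independent: even when the active LC_NUMERIC locale uses ',' as its decimal point, float('1,23') must raise ValueError. A small sketch of that behaviour (it assumes a comma-decimal locale such as de_DE is installed):

    import locale

    locale.setlocale(locale.LC_NUMERIC, 'de_DE')
    dp = locale.localeconv()['decimal_point']      # ',' under this locale
    try:
        float(dp.join(['1', '23']))                # i.e. float('1,23')
    except ValueError:
        print 'float() ignores the locale decimal point'
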
diff --git a/Lib/test/test_ast.py b/Lib/test/test_ast.py
index c64ad28..14fc010 100644
--- a/Lib/test/test_ast.py
+++ b/Lib/test/test_ast.py
@@ -160,7 +160,7 @@
 ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Return', (1, 8), ('Num', (1, 15), 1))], [])]),
 ('Module', [('Delete', (1, 0), [('Name', (1, 4), 'v', ('Del',))])]),
 ('Module', [('Assign', (1, 0), [('Name', (1, 0), 'v', ('Store',))], ('Num', (1, 4), 1))]),
-('Module', [('AugAssign', (1, 0), ('Name', (1, 0), 'v', ('Load',)), ('Add',), ('Num', (1, 5), 1))]),
+('Module', [('AugAssign', (1, 0), ('Name', (1, 0), 'v', ('Store',)), ('Add',), ('Num', (1, 5), 1))]),
 ('Module', [('Print', (1, 0), ('Name', (1, 8), 'f', ('Load',)), [('Num', (1, 11), 1)], False)]),
 ('Module', [('For', (1, 0), ('Name', (1, 4), 'v', ('Store',)), ('Name', (1, 9), 'v', ('Load',)), [('Pass', (1, 11))], [])]),
 ('Module', [('While', (1, 0), ('Name', (1, 6), 'v', ('Load',)), [('Pass', (1, 8))], [])]),
diff --git a/Lib/test/test_asynchat.py b/Lib/test/test_asynchat.py
index f93587a..9926167 100644
--- a/Lib/test/test_asynchat.py
+++ b/Lib/test/test_asynchat.py
@@ -13,7 +13,8 @@
     def run(self):
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-        sock.bind((HOST, PORT))
+        global PORT
+        PORT = test_support.bind_port(sock, HOST, PORT)
         sock.listen(1)
         conn, client = sock.accept()
         buffer = ""
diff --git a/Lib/test/test_bigaddrspace.py b/Lib/test/test_bigaddrspace.py
new file mode 100644
index 0000000..8c215fe
--- /dev/null
+++ b/Lib/test/test_bigaddrspace.py
@@ -0,0 +1,46 @@
+from test import test_support
+from test.test_support import bigaddrspacetest, MAX_Py_ssize_t
+
+import unittest
+import operator
+import sys
+
+
+class StrTest(unittest.TestCase):
+
+    @bigaddrspacetest
+    def test_concat(self):
+        s1 = 'x' * MAX_Py_ssize_t
+        self.assertRaises(OverflowError, operator.add, s1, '?')
+
+    @bigaddrspacetest
+    def test_optimized_concat(self):
+        x = 'x' * MAX_Py_ssize_t
+        try:
+            x = x + '?'     # this statement uses a fast path in ceval.c
+        except OverflowError:
+            pass
+        else:
+            self.fail("should have raised OverflowError")
+        try:
+            x += '?'        # this statement uses a fast path in ceval.c
+        except OverflowError:
+            pass
+        else:
+            self.fail("should have raised OverflowError")
+        self.assertEquals(len(x), MAX_Py_ssize_t)
+
+    ### the following test is pending a patch
+    #   (http://mail.python.org/pipermail/python-dev/2006-July/067774.html)
+    #@bigaddrspacetest
+    #def test_repeat(self):
+    #    self.assertRaises(OverflowError, operator.mul, 'x', MAX_Py_ssize_t + 1)
+
+
+def test_main():
+    test_support.run_unittest(StrTest)
+
+if __name__ == '__main__':
+    if len(sys.argv) > 1:
+        test_support.set_memlimit(sys.argv[1])
+    test_main()
diff --git a/Lib/test/test_bigmem.py b/Lib/test/test_bigmem.py
index 255428f..6d6c37c 100644
--- a/Lib/test/test_bigmem.py
+++ b/Lib/test/test_bigmem.py
@@ -28,7 +28,7 @@
 #  - While the bigmemtest decorator speaks of 'minsize', all tests will
 #    actually be called with a much smaller number too, in the normal
 #    test run (5Kb currently.) This is so the tests themselves get frequent
-#    testing Consequently, always make all large allocations based on the
+#    testing. Consequently, always make all large allocations based on the
 #    passed-in 'size', and don't rely on the size being very large. Also,
 #    memuse-per-size should remain sane (less than a few thousand); if your
 #    test uses more, adjust 'size' upward, instead.
diff --git a/Lib/test/test_bsddb.py b/Lib/test/test_bsddb.py
index 513e541..474f3da 100755
--- a/Lib/test/test_bsddb.py
+++ b/Lib/test/test_bsddb.py
@@ -8,7 +8,6 @@
 import dbhash # Just so we know it's imported
 import unittest
 from test import test_support
-from sets import Set
 
 class TestBSDDB(unittest.TestCase):
     openflag = 'c'
@@ -53,7 +52,7 @@
             self.assertEqual(self.f[k], v)
 
     def assertSetEquals(self, seqn1, seqn2):
-        self.assertEqual(Set(seqn1), Set(seqn2))
+        self.assertEqual(set(seqn1), set(seqn2))
 
     def test_mapping_iteration_methods(self):
         f = self.f
diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py
index e6e4440..c7e4394 100644
--- a/Lib/test/test_builtin.py
+++ b/Lib/test/test_builtin.py
@@ -532,13 +532,24 @@
     @run_with_locale('LC_NUMERIC', 'fr_FR', 'de_DE')
     def test_float_with_comma(self):
         # set locale to something that doesn't use '.' for the decimal point
+        # float must not accept the locale-specific decimal point but
+        # it still has to accept the normal Python syntax
         import locale
         if not locale.localeconv()['decimal_point'] == ',':
             return
 
-        self.assertEqual(float("  3,14  "), 3.14)
-        self.assertEqual(float("  +3,14  "), 3.14)
-        self.assertEqual(float("  -3,14  "), -3.14)
+        self.assertEqual(float("  3.14  "), 3.14)
+        self.assertEqual(float("+3.14  "), 3.14)
+        self.assertEqual(float("-3.14  "), -3.14)
+        self.assertEqual(float(".14  "), .14)
+        self.assertEqual(float("3.  "), 3.0)
+        self.assertEqual(float("3.e3  "), 3000.0)
+        self.assertEqual(float("3.2e3  "), 3200.0)
+        self.assertEqual(float("2.5e-1  "), 0.25)
+        self.assertEqual(float("5e-1"), 0.5)
+        self.assertRaises(ValueError, float, "  3,14  ")
+        self.assertRaises(ValueError, float, "  +3,14  ")
+        self.assertRaises(ValueError, float, "  -3,14  ")
         self.assertRaises(ValueError, float, "  0x3.1  ")
         self.assertRaises(ValueError, float, "  -0x3.p-1  ")
         self.assertEqual(float("  25.e-1  "), 2.5)
@@ -603,6 +614,19 @@
         def f(): pass
         self.assertRaises(TypeError, hash, [])
         self.assertRaises(TypeError, hash, {})
+        # Bug 1536021: Allow hash to return long objects
+        class X:
+            def __hash__(self):
+                return 2**100
+        self.assertEquals(type(hash(X())), int)
+        class Y(object):
+            def __hash__(self):
+                return 2**100
+        self.assertEquals(type(hash(Y())), int)
+        class Z(long):
+            def __hash__(self):
+                return self
+        self.assertEquals(hash(Z(42)), hash(42L))
 
     def test_hex(self):
         self.assertEqual(hex(16), '0x10')
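
The new hash() tests cover bug 1536021: a __hash__() that returns a long no longer raises, and the value handed back by hash() is still a plain int. Roughly, in user code (the class name here is illustrative):

    class BigHash(object):
        def __hash__(self):
            return 2 ** 100          # a long, far outside the int range

    print type(hash(BigHash()))      # <type 'int'>: the long is folded to an int
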
diff --git a/Lib/test/test_bz2.py b/Lib/test/test_bz2.py
index 356c2e3..f198116 100644
--- a/Lib/test/test_bz2.py
+++ b/Lib/test/test_bz2.py
@@ -250,7 +250,7 @@
         bz2f = BZ2File(self.filename)
         xlines = list(bz2f.readlines())
         bz2f.close()
-        self.assertEqual(lines, ['Test'])
+        self.assertEqual(xlines, ['Test'])
 
 
 class BZ2CompressorTest(BaseTest):
@@ -344,6 +344,7 @@
         BZ2DecompressorTest,
         FuncTest
     )
+    test_support.reap_children()
 
 if __name__ == '__main__':
     test_main()
diff --git a/Lib/test/test_cmd_line.py b/Lib/test/test_cmd_line.py
index ec860d1..5e89863 100644
--- a/Lib/test/test_cmd_line.py
+++ b/Lib/test/test_cmd_line.py
@@ -87,6 +87,7 @@
 
 def test_main():
     test.test_support.run_unittest(CmdLineTest)
+    test.test_support.reap_children()
 
 if __name__ == "__main__":
     test_main()
diff --git a/Lib/test/test_code.py b/Lib/test/test_code.py
index 52bc894..4e68638 100644
--- a/Lib/test/test_code.py
+++ b/Lib/test/test_code.py
@@ -61,6 +61,23 @@
 flags: 67
 consts: ('None',)
 
+>>> def optimize_away():
+...     'doc string'
+...     'not a docstring'
+...     53
+...     53L
+
+>>> dump(optimize_away.func_code)
+name: optimize_away
+argcount: 0
+names: ()
+varnames: ()
+cellvars: ()
+freevars: ()
+nlocals: 0
+flags: 67
+consts: ("'doc string'", 'None')
+
 """
 
 def consts(t):
diff --git a/Lib/test/test_codecs.py b/Lib/test/test_codecs.py
index 6ea49cc..8153979 100644
--- a/Lib/test/test_codecs.py
+++ b/Lib/test/test_codecs.py
@@ -1166,6 +1166,12 @@
             encoder = codecs.getencoder(encoding)
             self.assertRaises(TypeError, encoder)
 
+    def test_encoding_map_type_initialized(self):
+        from encodings import cp1140
+        # This used to crash, we are only verifying there's no crash.
+        table_type = type(cp1140.encoding_table)
+        self.assertEqual(table_type, table_type)
+
 class BasicStrTest(unittest.TestCase):
     def test_basics(self):
         s = "abc123"
diff --git a/Lib/test/test_commands.py b/Lib/test/test_commands.py
index 0f7d15f..b72a1b9 100644
--- a/Lib/test/test_commands.py
+++ b/Lib/test/test_commands.py
@@ -5,7 +5,7 @@
 import unittest
 import os, tempfile, re
 
-from test.test_support import TestSkipped, run_unittest
+from test.test_support import TestSkipped, run_unittest, reap_children
 from commands import *
 
 # The module says:
@@ -58,6 +58,7 @@
 
 def test_main():
     run_unittest(CommandTests)
+    reap_children()
 
 
 if __name__ == "__main__":
diff --git a/Lib/test/test_compile.py b/Lib/test/test_compile.py
index 72c4f7e..a3f15bf 100644
--- a/Lib/test/test_compile.py
+++ b/Lib/test/test_compile.py
@@ -166,6 +166,16 @@
         pass"""
         compile(s, "<string>", "exec")
 
+    # This test is probably specific to CPython and may not generalize
+    # to other implementations.  We are trying to ensure that when
+    # the first line of code starts after line 256, correct line numbers
+    # in tracebacks are still produced.
+    def test_leading_newlines(self):
+        s256 = "".join(["\n"] * 256 + ["spam"])
+        co = compile(s256, 'fn', 'exec')
+        self.assertEqual(co.co_firstlineno, 257)
+        self.assertEqual(co.co_lnotab, '')
+
     def test_literals_with_leading_zeroes(self):
         for arg in ["077787", "0xj", "0x.", "0e",  "090000000000000",
                     "080000000000000", "000000000000009", "000000000000008"]:
@@ -211,6 +221,25 @@
             self.assertEqual(eval("-" + all_one_bits), -18446744073709551615L)
         else:
             self.fail("How many bits *does* this machine have???")
+        # Verify treatment of constant folding on -(sys.maxint+1)
+        # i.e. -2147483648 on 32 bit platforms.  Should return int, not long.
+        self.assertTrue(isinstance(eval("%s" % (-sys.maxint - 1)), int))
+        self.assertTrue(isinstance(eval("%s" % (-sys.maxint - 2)), long))
+
+    if sys.maxint == 9223372036854775807:
+        def test_32_63_bit_values(self):
+            a = +4294967296  # 1 << 32
+            b = -4294967296  # 1 << 32
+            c = +281474976710656  # 1 << 48
+            d = -281474976710656  # 1 << 48
+            e = +4611686018427387904  # 1 << 62
+            f = -4611686018427387904  # 1 << 62
+            g = +9223372036854775807  # 1 << 63 - 1
+            h = -9223372036854775807  # 1 << 63 - 1
+
+            for variable in self.test_32_63_bit_values.func_code.co_consts:
+                if variable is not None:
+                    self.assertTrue(isinstance(variable, int))
 
     def test_sequence_unpacking_error(self):
         # Verify sequence packing/unpacking with "or".  SF bug #757818
@@ -238,6 +267,8 @@
         succeed = [
             'import sys',
             'import os, sys',
+            'import os as bar',
+            'import os.path as bar',
             'from __future__ import nested_scopes, generators',
             'from __future__ import (nested_scopes,\ngenerators)',
             'from __future__ import (nested_scopes,\ngenerators,)',
@@ -257,6 +288,10 @@
             'import (sys',
             'import sys)',
             'import (os,)',
+            'import os As bar',
+            'import os.path a bar',
+            'from sys import stdin As stdout',
+            'from sys import stdin a stdout',
             'from (sys) import stdin',
             'from __future__ import (nested_scopes',
             'from __future__ import nested_scopes)',
diff --git a/Lib/test/test_compiler.py b/Lib/test/test_compiler.py
index 48f1643..1efb6a6 100644
--- a/Lib/test/test_compiler.py
+++ b/Lib/test/test_compiler.py
@@ -56,13 +56,30 @@
     def testYieldExpr(self):
         compiler.compile("def g(): yield\n\n", "<string>", "exec")
 
+    def testTryExceptFinally(self):
+        # Test that except and finally clauses in one try stmt are recognized
+        c = compiler.compile("try:\n 1/0\nexcept:\n e = 1\nfinally:\n f = 1",
+                             "<string>", "exec")
+        dct = {}
+        exec c in dct
+        self.assertEquals(dct.get('e'), 1)
+        self.assertEquals(dct.get('f'), 1)
+
     def testDefaultArgs(self):
         self.assertRaises(SyntaxError, compiler.parse, "def foo(a=1, b): pass")
 
+    def testDocstrings(self):
+        c = compiler.compile('"doc"', '<string>', 'exec')
+        self.assert_('__doc__' in c.co_names)
+        c = compiler.compile('def f():\n "doc"', '<string>', 'exec')
+        g = {}
+        exec c in g
+        self.assertEquals(g['f'].__doc__, "doc")
+
     def testLineNo(self):
         # Test that all nodes except Module have a correct lineno attribute.
         filename = __file__
-        if filename.endswith(".pyc") or filename.endswith(".pyo"):
+        if filename.endswith((".pyc", ".pyo")):
             filename = filename[:-1]
         tree = compiler.parseFile(filename)
         self.check_lineno(tree)
@@ -87,6 +104,19 @@
         self.assertEquals(flatten([1, [2]]), [1, 2])
         self.assertEquals(flatten((1, (2,))), [1, 2])
 
+    def testNestedScope(self):
+        c = compiler.compile('def g():\n'
+                             '    a = 1\n'
+                             '    def f(): return a + 2\n'
+                             '    return f()\n'
+                             'result = g()',
+                             '<string>',
+                             'exec')
+        dct = {}
+        exec c in dct
+        self.assertEquals(dct.get('result'), 3)
+
+
 NOLINENO = (compiler.ast.Module, compiler.ast.Stmt, compiler.ast.Discard)
 
 ###############################################################################
@@ -103,6 +133,12 @@
 l = [(x, y) for x, y in zip(range(5), range(5,10))]
 l[0]
 l[3:4]
+d = {'a': 2}
+d = {}
+t = ()
+t = (1, 2)
+l = []
+l = [1, 2]
 if l:
     pass
 else:
diff --git a/Lib/test/test_curses.py b/Lib/test/test_curses.py
index dc2f20b..4022149 100644
--- a/Lib/test/test_curses.py
+++ b/Lib/test/test_curses.py
@@ -212,6 +212,13 @@
             m = curses.getmouse()
             curses.ungetmouse(*m)
 
+    if hasattr(curses, 'is_term_resized'):
+        curses.is_term_resized(*stdscr.getmaxyx())
+    if hasattr(curses, 'resizeterm'):
+        curses.resizeterm(*stdscr.getmaxyx())
+    if hasattr(curses, 'resize_term'):
+        curses.resize_term(*stdscr.getmaxyx())
+
 def unit_tests():
     from curses import ascii
     for ch, expected in [('a', 'a'), ('A', 'A'),
diff --git a/Lib/test/test_defaultdict.py b/Lib/test/test_defaultdict.py
index b5a6628..134b5a8 100644
--- a/Lib/test/test_defaultdict.py
+++ b/Lib/test/test_defaultdict.py
@@ -4,6 +4,7 @@
 import copy
 import tempfile
 import unittest
+from test import test_support
 
 from collections import defaultdict
 
@@ -131,5 +132,8 @@
         self.assertEqual(d2, d1)
 
 
+def test_main():
+    test_support.run_unittest(TestDefaultDict)
+
 if __name__ == "__main__":
-    unittest.main()
+    test_main()
diff --git a/Lib/test/test_descr.py b/Lib/test/test_descr.py
index 8ee431b..4a39be5 100644
--- a/Lib/test/test_descr.py
+++ b/Lib/test/test_descr.py
@@ -1899,6 +1899,16 @@
         prop2 = property(fset=setter)
         vereq(prop2.__doc__, None)
 
+    # this segfaulted in 2.5b2
+    try:
+        import _testcapi
+    except ImportError:
+        pass
+    else:
+        class X(object):
+            p = property(_testcapi.test_with_docstring)
+
+
 def supers():
     if verbose: print "Testing super..."
 
@@ -3046,6 +3056,21 @@
     list.__init__(a, sequence=[0, 1, 2])
     vereq(a, [0, 1, 2])
 
+def recursive__call__():
+    if verbose: print ("Testing recursive __call__() by setting to instance of "
+                        "class ...")
+    class A(object):
+        pass
+
+    A.__call__ = A()
+    try:
+        A()()
+    except RuntimeError:
+        pass
+    else:
+        raise TestFailed("Recursion limit should have been reached for "
+                         "__call__()")
+
 def delhook():
     if verbose: print "Testing __del__ hook..."
     log = []
@@ -3803,6 +3828,13 @@
     o.whatever = Provoker(o)
     del o
 
+def wrapper_segfault():
+    # SF 927248: deeply nested wrappers could cause stack overflow
+    f = lambda:None
+    for i in xrange(1000000):
+        f = f.__call__
+    f = None
+
 # Fix SF #762455, segfault when sys.stdout is changed in getattr
 def filefault():
     if verbose:
@@ -3957,6 +3989,7 @@
 
 def test_main():
     weakref_segfault() # Must be first, somehow
+    wrapper_segfault()
     do_this_first()
     class_docstrings()
     lists()
@@ -4015,6 +4048,7 @@
     buffer_inherit()
     str_of_str_subclass()
     kwdargs()
+    recursive__call__()
     delhook()
     hashinherit()
     strops()
diff --git a/Lib/test/test_dis.py b/Lib/test/test_dis.py
index 081941d..c31092c 100644
--- a/Lib/test/test_dis.py
+++ b/Lib/test/test_dis.py
@@ -81,6 +81,13 @@
      bug1333982.func_code.co_firstlineno + 2,
      bug1333982.func_code.co_firstlineno + 3)
 
+_BIG_LINENO_FORMAT = """\
+%3d           0 LOAD_GLOBAL              0 (spam)
+              3 POP_TOP
+              4 LOAD_CONST               0 (None)
+              7 RETURN_VALUE
+"""
+
 class DisTests(unittest.TestCase):
     def do_disassembly_test(self, func, expected):
         s = StringIO.StringIO()
@@ -124,6 +131,23 @@
         if __debug__:
             self.do_disassembly_test(bug1333982, dis_bug1333982)
 
+    def test_big_linenos(self):
+        def func(count):
+            namespace = {}
+            func = "def foo():\n " + "".join(["\n "] * count + ["spam\n"])
+            exec func in namespace
+            return namespace['foo']
+
+        # Test all small ranges
+        for i in xrange(1, 300):
+            expected = _BIG_LINENO_FORMAT % (i + 2)
+            self.do_disassembly_test(func(i), expected)
+
+        # Test some larger ranges too
+        for i in xrange(300, 5000, 10):
+            expected = _BIG_LINENO_FORMAT % (i + 2)
+            self.do_disassembly_test(func(i), expected)
+
 def test_main():
     run_unittest(DisTests)
 
diff --git a/Lib/test/test_doctest.py b/Lib/test/test_doctest.py
index 01f7acd..e8379c5 100644
--- a/Lib/test/test_doctest.py
+++ b/Lib/test/test_doctest.py
@@ -419,7 +419,6 @@
 
     >>> finder = doctest.DocTestFinder()
     >>> tests = finder.find(SampleClass)
-    >>> tests.sort()
     >>> for t in tests:
     ...     print '%2s  %s' % (len(t.examples), t.name)
      3  SampleClass
@@ -435,7 +434,6 @@
 New-style classes are also supported:
 
     >>> tests = finder.find(SampleNewStyleClass)
-    >>> tests.sort()
     >>> for t in tests:
     ...     print '%2s  %s' % (len(t.examples), t.name)
      1  SampleNewStyleClass
@@ -475,7 +473,6 @@
     >>> # ignoring the objects since they weren't defined in m.
     >>> import test.test_doctest
     >>> tests = finder.find(m, module=test.test_doctest)
-    >>> tests.sort()
     >>> for t in tests:
     ...     print '%2s  %s' % (len(t.examples), t.name)
      1  some_module
@@ -499,7 +496,6 @@
 
     >>> from test import doctest_aliases
     >>> tests = excl_empty_finder.find(doctest_aliases)
-    >>> tests.sort()
     >>> print len(tests)
     2
     >>> print tests[0].name
@@ -517,7 +513,6 @@
 By default, an object with no doctests doesn't create any tests:
 
     >>> tests = doctest.DocTestFinder().find(SampleClass)
-    >>> tests.sort()
     >>> for t in tests:
     ...     print '%2s  %s' % (len(t.examples), t.name)
      3  SampleClass
@@ -536,7 +531,6 @@
 displays.
 
     >>> tests = doctest.DocTestFinder(exclude_empty=False).find(SampleClass)
-    >>> tests.sort()
     >>> for t in tests:
     ...     print '%2s  %s' % (len(t.examples), t.name)
      3  SampleClass
@@ -557,7 +551,6 @@
 using the `recurse` flag:
 
     >>> tests = doctest.DocTestFinder(recurse=False).find(SampleClass)
-    >>> tests.sort()
     >>> for t in tests:
     ...     print '%2s  %s' % (len(t.examples), t.name)
      3  SampleClass
diff --git a/Lib/test/test_email_codecs.py b/Lib/test/test_email_codecs.py
index aadd537..c550a6f 100644
--- a/Lib/test/test_email_codecs.py
+++ b/Lib/test/test_email_codecs.py
@@ -1,11 +1,15 @@
 # Copyright (C) 2002 Python Software Foundation
 # email package unit tests for (optional) Asian codecs
 
-import unittest
 # The specific tests now live in Lib/email/test
-from email.test.test_email_codecs import suite
+from email.test import test_email_codecs
+from email.test import test_email_codecs_renamed
+from test import test_support
 
+def test_main():
+    suite = test_email_codecs.suite()
+    suite.addTest(test_email_codecs_renamed.suite())
+    test_support.run_suite(suite)
 
-
 if __name__ == '__main__':
-    unittest.main(defaultTest='suite')
+    test_main()
diff --git a/Lib/test/test_exceptions.py b/Lib/test/test_exceptions.py
index ebe60c1..be2cca1 100644
--- a/Lib/test/test_exceptions.py
+++ b/Lib/test/test_exceptions.py
@@ -314,6 +314,18 @@
         x = DerivedException(fancy_arg=42)
         self.assertEquals(x.fancy_arg, 42)
 
+    def testInfiniteRecursion(self):
+        def f():
+            return f()
+        self.assertRaises(RuntimeError, f)
+
+        def g():
+            try:
+                return g()
+            except ValueError:
+                return -1
+        self.assertRaises(RuntimeError, g)
+
 def test_main():
     run_unittest(ExceptionTests)
 
diff --git a/Lib/test/test_fcntl.py b/Lib/test/test_fcntl.py
index f53b13a..58a57b5 100755
--- a/Lib/test/test_fcntl.py
+++ b/Lib/test/test_fcntl.py
@@ -20,9 +20,10 @@
 if sys.platform.startswith('atheos'):
     start_len = "qq"
 
-if sys.platform in ('netbsd1', 'netbsd2', 'Darwin1.2', 'darwin',
-                    'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', 'freebsd6',
-                    'freebsd7',
+if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3',
+                    'Darwin1.2', 'darwin',
+                    'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5',
+                    'freebsd6', 'freebsd7',
                     'bsdos2', 'bsdos3', 'bsdos4',
                     'openbsd', 'openbsd2', 'openbsd3'):
     if struct.calcsize('l') == 8:
diff --git a/Lib/test/test_file.py b/Lib/test/test_file.py
index dcfa265..234920d 100644
--- a/Lib/test/test_file.py
+++ b/Lib/test/test_file.py
@@ -11,14 +11,12 @@
     # file tests for which a test file is automatically set up
 
     def setUp(self):
-        self.f = file(TESTFN, 'wb')
+        self.f = open(TESTFN, 'wb')
 
     def tearDown(self):
-        try:
-            if self.f:
-                self.f.close()
-        except IOError:
-            pass
+        if self.f:
+            self.f.close()
+        os.remove(TESTFN)
 
     def testWeakRefs(self):
         # verify weak references
@@ -80,9 +78,11 @@
 
     def testWritelinesNonString(self):
         # verify writelines with non-string object
-        class NonString: pass
+        class NonString:
+            pass
 
-        self.assertRaises(TypeError, self.f.writelines, [NonString(), NonString()])
+        self.assertRaises(TypeError, self.f.writelines,
+                          [NonString(), NonString()])
 
     def testRepr(self):
         # verify repr works
@@ -93,19 +93,21 @@
         self.assertEquals(f.name, TESTFN)
         self.assert_(not f.isatty())
         self.assert_(not f.closed)
-        
+
         self.assertRaises(TypeError, f.readinto, "")
         f.close()
         self.assert_(f.closed)
 
     def testMethods(self):
         methods = ['fileno', 'flush', 'isatty', 'next', 'read', 'readinto',
-                   'readline', 'readlines', 'seek', 'tell', 'truncate', 'write',
-                   '__iter__']
+                   'readline', 'readlines', 'seek', 'tell', 'truncate',
+                   'write', '__iter__']
         if sys.platform.startswith('atheos'):
             methods.remove('truncate')
 
-        self.f.close()
+        # __exit__ should close the file
+        self.f.__exit__(None, None, None)
+        self.assert_(self.f.closed)
 
         for methodname in methods:
             method = getattr(self.f, methodname)
@@ -113,6 +115,14 @@
             self.assertRaises(ValueError, method)
         self.assertRaises(ValueError, self.f.writelines, [])
 
+        # file is closed, __exit__ shouldn't do anything
+        self.assertEquals(self.f.__exit__(None, None, None), None)
+        # it must also return None if an exception was given
+        try:
+            1/0
+        except:
+            self.assertEquals(self.f.__exit__(*sys.exc_info()), None)
+
 
 class OtherFileTests(unittest.TestCase):
 
@@ -120,7 +130,7 @@
         # check invalid mode strings
         for mode in ("", "aU", "wU+"):
             try:
-                f = file(TESTFN, mode)
+                f = open(TESTFN, mode)
             except ValueError:
                 pass
             else:
@@ -142,6 +152,7 @@
         f = open(unicode(TESTFN), "w")
         self.assert_(repr(f).startswith("<open file u'" + TESTFN))
         f.close()
+        os.unlink(TESTFN)
 
     def testBadModeArgument(self):
         # verify that we get a sensible error message for bad mode argument
@@ -182,11 +193,11 @@
         def bug801631():
             # SF bug <http://www.python.org/sf/801631>
             # "file.truncate fault on windows"
-            f = file(TESTFN, 'wb')
+            f = open(TESTFN, 'wb')
             f.write('12345678901')   # 11 bytes
             f.close()
 
-            f = file(TESTFN,'rb+')
+            f = open(TESTFN,'rb+')
             data = f.read(5)
             if data != '12345':
                 self.fail("Read on file opened for update failed %r" % data)
@@ -208,14 +219,14 @@
             os.unlink(TESTFN)
 
     def testIteration(self):
-        # Test the complex interaction when mixing file-iteration and the various
-        # read* methods. Ostensibly, the mixture could just be tested to work
-        # when it should work according to the Python language, instead of fail
-        # when it should fail according to the current CPython implementation.
-        # People don't always program Python the way they should, though, and the
-        # implemenation might change in subtle ways, so we explicitly test for
-        # errors, too; the test will just have to be updated when the
-        # implementation changes.
+        # Test the complex interaction when mixing file-iteration and the
+        # various read* methods. Ostensibly, the mixture could just be tested
+        # to work when it should work according to the Python language,
+        # instead of fail when it should fail according to the current CPython
+        # implementation.  People don't always program Python the way they
+        # should, though, and the implementation might change in subtle ways,
+        # so we explicitly test for errors, too; the test will just have to
+        # be updated when the implementation changes.
         dataoffset = 16384
         filler = "ham\n"
         assert not dataoffset % len(filler), \
@@ -253,12 +264,13 @@
                                      (methodname, args))
                 f.close()
 
-            # Test to see if harmless (by accident) mixing of read* and iteration
-            # still works. This depends on the size of the internal iteration
-            # buffer (currently 8192,) but we can test it in a flexible manner.
-            # Each line in the bag o' ham is 4 bytes ("h", "a", "m", "\n"), so
-            # 4096 lines of that should get us exactly on the buffer boundary for
-            # any power-of-2 buffersize between 4 and 16384 (inclusive).
+            # Test to see if harmless (by accident) mixing of read* and
+            # iteration still works. This depends on the size of the internal
+            # iteration buffer (currently 8192,) but we can test it in a
+            # flexible manner.  Each line in the bag o' ham is 4 bytes
+            # ("h", "a", "m", "\n"), so 4096 lines of that should get us
+            # exactly on the buffer boundary for any power-of-2 buffersize
+            # between 4 and 16384 (inclusive).
             f = open(TESTFN, 'rb')
             for i in range(nchunks):
                 f.next()
@@ -319,7 +331,13 @@
 
 
 def test_main():
-    run_unittest(AutoFileTests, OtherFileTests)
+    # Historically, these tests have been sloppy about removing TESTFN.
+    # So get rid of it no matter what.
+    try:
+        run_unittest(AutoFileTests, OtherFileTests)
+    finally:
+        if os.path.exists(TESTFN):
+            os.unlink(TESTFN)
 
 if __name__ == '__main__':
     test_main()
diff --git a/Lib/test/test_filecmp.py b/Lib/test/test_filecmp.py
index c54119c..503562b 100644
--- a/Lib/test/test_filecmp.py
+++ b/Lib/test/test_filecmp.py
@@ -1,5 +1,5 @@
 
-import os, filecmp, shutil, tempfile
+import os, filecmp, shutil, tempfile, shutil
 import unittest
 from test import test_support
 
@@ -49,6 +49,7 @@
         self.caseinsensitive = os.path.normcase('A') == os.path.normcase('a')
         data = 'Contents of file go here.\n'
         for dir in [self.dir, self.dir_same, self.dir_diff]:
+            shutil.rmtree(dir, True)
             os.mkdir(dir)
             if self.caseinsensitive and dir is self.dir_same:
                 fn = 'FiLe'     # Verify case-insensitive comparison
diff --git a/Lib/test/test_fork1.py b/Lib/test/test_fork1.py
index cba5fc7..e64e398 100644
--- a/Lib/test/test_fork1.py
+++ b/Lib/test/test_fork1.py
@@ -2,8 +2,9 @@
 """
 
 import os
+import time
 from test.fork_wait import ForkWait
-from test.test_support import TestSkipped, run_unittest
+from test.test_support import TestSkipped, run_unittest, reap_children
 
 try:
     os.fork
@@ -12,12 +13,20 @@
 
 class ForkTest(ForkWait):
     def wait_impl(self, cpid):
-        spid, status = os.waitpid(cpid, 0)
+        for i in range(10):
+            # waitpid() shouldn't hang, but some of the buildbots seem to hang
+            # in the forking tests.  This is an attempt to fix the problem.
+            spid, status = os.waitpid(cpid, os.WNOHANG)
+            if spid == cpid:
+                break
+            time.sleep(1.0)
+
         self.assertEqual(spid, cpid)
         self.assertEqual(status, 0, "cause = %d, exit = %d" % (status&0xff, status>>8))
 
 def test_main():
     run_unittest(ForkTest)
+    reap_children()
 
 if __name__ == "__main__":
     test_main()
diff --git a/Lib/test/test_generators.py b/Lib/test/test_generators.py
index a184a8b..ee36413 100644
--- a/Lib/test/test_generators.py
+++ b/Lib/test/test_generators.py
@@ -1497,22 +1497,55 @@
 <type 'generator'>
 
 
+A yield expression with augmented assignment.
+
+>>> def coroutine(seq):
+...     count = 0
+...     while count < 200:
+...         count += yield
+...         seq.append(count)
+>>> seq = []
+>>> c = coroutine(seq)
+>>> c.next()
+>>> print seq
+[]
+>>> c.send(10)
+>>> print seq
+[10]
+>>> c.send(10)
+>>> print seq
+[10, 20]
+>>> c.send(10)
+>>> print seq
+[10, 20, 30]
+
+
 Check some syntax errors for yield expressions:
 
 >>> f=lambda: (yield 1),(yield 2)
 Traceback (most recent call last):
   ...
-SyntaxError: 'yield' outside function (<doctest test.test_generators.__test__.coroutine[10]>, line 1)
+SyntaxError: 'yield' outside function (<doctest test.test_generators.__test__.coroutine[21]>, line 1)
 
 >>> def f(): return lambda x=(yield): 1
 Traceback (most recent call last):
   ...
-SyntaxError: 'return' with argument inside generator (<doctest test.test_generators.__test__.coroutine[11]>, line 1)
+SyntaxError: 'return' with argument inside generator (<doctest test.test_generators.__test__.coroutine[22]>, line 1)
 
 >>> def f(): x = yield = y
 Traceback (most recent call last):
   ...
-SyntaxError: assignment to yield expression not possible (<doctest test.test_generators.__test__.coroutine[12]>, line 1)
+SyntaxError: assignment to yield expression not possible (<doctest test.test_generators.__test__.coroutine[23]>, line 1)
+
+>>> def f(): (yield bar) = y
+Traceback (most recent call last):
+  ...
+SyntaxError: can't assign to yield expression (<doctest test.test_generators.__test__.coroutine[24]>, line 1)
+
+>>> def f(): (yield bar) += y
+Traceback (most recent call last):
+  ...
+SyntaxError: augmented assignment to yield expression not possible (<doctest test.test_generators.__test__.coroutine[25]>, line 1)
 
 
 Now check some throw() conditions:
diff --git a/Lib/test/test_genexps.py b/Lib/test/test_genexps.py
index e414757..2598a79 100644
--- a/Lib/test/test_genexps.py
+++ b/Lib/test/test_genexps.py
@@ -109,7 +109,7 @@
     Traceback (most recent call last):
       File "<pyshell#4>", line 1, in -toplevel-
         (i for i in 6)
-    TypeError: iteration over non-sequence
+    TypeError: 'int' object is not iterable
 
 Verify late binding for the outermost if-expression
 
diff --git a/Lib/test/test_getargs2.py b/Lib/test/test_getargs2.py
index 8864e8e..c428f45 100644
--- a/Lib/test/test_getargs2.py
+++ b/Lib/test/test_getargs2.py
@@ -233,8 +233,25 @@
 
         self.failUnlessEqual(VERY_LARGE & ULLONG_MAX, getargs_K(VERY_LARGE))
 
+
+class Tuple_TestCase(unittest.TestCase):
+    def test_tuple(self):
+        from _testcapi import getargs_tuple
+
+        ret = getargs_tuple(1, (2, 3))
+        self.assertEquals(ret, (1,2,3))
+
+        # make sure invalid tuple arguments are handled correctly
+        class seq:
+            def __len__(self):
+                return 2
+            def __getitem__(self, n):
+                raise ValueError
+        self.assertRaises(TypeError, getargs_tuple, 1, seq())
+
+
 def test_main():
-    tests = [Signed_TestCase, Unsigned_TestCase]
+    tests = [Signed_TestCase, Unsigned_TestCase, Tuple_TestCase]
     try:
         from _testcapi import getargs_L, getargs_K
     except ImportError:
diff --git a/Lib/test/test_grammar.py b/Lib/test/test_grammar.py
index 4bb4e45..f160867 100644
--- a/Lib/test/test_grammar.py
+++ b/Lib/test/test_grammar.py
@@ -531,6 +531,11 @@
 for x in Squares(10): n = n+x
 if n != 285: raise TestFailed, 'for over growing sequence'
 
+result = []
+for x, in [(1,), (2,), (3,)]:
+    result.append(x)
+vereq(result, [1, 2, 3])
+
 print 'try_stmt'
 ### try_stmt: 'try' ':' suite (except_clause ':' suite)+ ['else' ':' suite]
 ###         | 'try' ':' suite 'finally' ':' suite
diff --git a/Lib/test/test_inspect.py b/Lib/test/test_inspect.py
index d9fd93d..99140d2 100644
--- a/Lib/test/test_inspect.py
+++ b/Lib/test/test_inspect.py
@@ -1,6 +1,8 @@
 import sys
+import types
 import unittest
 import inspect
+import datetime
 
 from test.test_support import TESTFN, run_unittest
 
@@ -15,7 +17,7 @@
 # isdatadescriptor
 
 modfile = mod.__file__
-if modfile.endswith('c') or modfile.endswith('o'):
+if modfile.endswith(('c', 'o')):
     modfile = modfile[:-1]
 
 import __builtin__
@@ -40,10 +42,12 @@
             self.failIf(other(obj), 'not %s(%s)' % (other.__name__, exp))
 
 class TestPredicates(IsTestBase):
-    def test_eleven(self):
-        # Doc/lib/libinspect.tex claims there are 11 such functions
+    def test_thirteen(self):
         count = len(filter(lambda x:x.startswith('is'), dir(inspect)))
-        self.assertEqual(count, 11, "There are %d (not 11) is* functions" % count)
+        # Doc/lib/libinspect.tex claims there are 13 such functions
+        expected = 13
+        err_msg = "There are %d (not %d) is* functions" % (count, expected)
+        self.assertEqual(count, expected, err_msg)
 
     def test_excluding_predicates(self):
         self.istest(inspect.isbuiltin, 'sys.exit')
@@ -58,6 +62,15 @@
         self.istest(inspect.istraceback, 'tb')
         self.istest(inspect.isdatadescriptor, '__builtin__.file.closed')
         self.istest(inspect.isdatadescriptor, '__builtin__.file.softspace')
+        if hasattr(types, 'GetSetDescriptorType'):
+            self.istest(inspect.isgetsetdescriptor,
+                        'type(tb.tb_frame).f_locals')
+        else:
+            self.failIf(inspect.isgetsetdescriptor(type(tb.tb_frame).f_locals))
+        if hasattr(types, 'MemberDescriptorType'):
+            self.istest(inspect.ismemberdescriptor, 'datetime.timedelta.days')
+        else:
+            self.failIf(inspect.ismemberdescriptor(datetime.timedelta.days))
 
     def test_isroutine(self):
         self.assert_(inspect.isroutine(mod.spam))
@@ -180,6 +193,17 @@
     def test_getfile(self):
         self.assertEqual(inspect.getfile(mod.StupidGit), mod.__file__)
 
+    def test_getmodule_recursion(self):
+        from new import module
+        name = '__inspect_dummy'
+        m = sys.modules[name] = module(name)
+        m.__file__ = "<string>" # hopefully not a real filename...
+        m.__loader__ = "dummy"  # pretend the filename is understood by a loader
+        exec "def x(): pass" in m.__dict__
+        self.assertEqual(inspect.getsourcefile(m.x.func_code), '<string>')
+        del sys.modules[name]
+        inspect.getmodule(compile('a=10','','single'))
+
 class TestDecorators(GetSourceBase):
     fodderFile = mod2
 
diff --git a/Lib/test/test_iterlen.py b/Lib/test/test_iterlen.py
index bcd0a6f..af4467e 100644
--- a/Lib/test/test_iterlen.py
+++ b/Lib/test/test_iterlen.py
@@ -235,9 +235,7 @@
         self.assertEqual(len(it), 0)
 
 
-
-if __name__ == "__main__":
-
+def test_main():
     unittests = [
         TestRepeat,
         TestXrange,
@@ -255,3 +253,6 @@
         TestSeqIterReversed,
     ]
     test_support.run_unittest(*unittests)
+
+if __name__ == "__main__":
+    test_main()
diff --git a/Lib/test/test_logging.py b/Lib/test/test_logging.py
index 73f8288..68c23c2 100644
--- a/Lib/test/test_logging.py
+++ b/Lib/test/test_logging.py
@@ -480,6 +480,8 @@
             f.close()
             try:
                 logging.config.fileConfig(fn)
+                #call again to make sure cleanup is correct
+                logging.config.fileConfig(fn)
             except:
                 t = sys.exc_info()[0]
                 message(str(t))
diff --git a/Lib/test/test_mailbox.py b/Lib/test/test_mailbox.py
index 914a20c..45dd118 100644
--- a/Lib/test/test_mailbox.py
+++ b/Lib/test/test_mailbox.py
@@ -461,7 +461,7 @@
 
     def setUp(self):
         TestMailbox.setUp(self)
-        if os.name == 'nt':
+        if os.name in ('nt', 'os2'):
             self._box.colon = '!'
 
     def test_add_MM(self):
@@ -520,7 +520,7 @@
         # Initialize an existing mailbox
         self.tearDown()
         for subdir in '', 'tmp', 'new', 'cur':
-            os.mkdir(os.path.join(self._path, subdir))
+            os.mkdir(os.path.normpath(os.path.join(self._path, subdir)))
         self._box = mailbox.Maildir(self._path)
         self._check_basics(factory=rfc822.Message)
         self._box = mailbox.Maildir(self._path, factory=None)
@@ -720,6 +720,30 @@
         self.assert_(contents == open(self._path, 'rb').read())
         self._box = self._factory(self._path)
 
+    def test_lock_conflict(self):
+        # Fork off a subprocess that will lock the file for 2 seconds,
+        # unlock it, and then exit.
+        if not hasattr(os, 'fork'):
+            return
+        pid = os.fork()
+        if pid == 0:
+            # In the child, lock the mailbox.
+            self._box.lock()
+            time.sleep(2)
+            self._box.unlock()
+            os._exit(0)
+
+        # In the parent, sleep a bit to give the child time to acquire
+        # the lock.
+        time.sleep(0.5)
+        self.assertRaises(mailbox.ExternalClashError,
+                          self._box.lock)
+
+        # Wait for child to exit.  Locking should now succeed.
+        exited_pid, status = os.waitpid(pid, 0)
+        self._box.lock()
+        self._box.unlock()
+
 
 class TestMbox(_TestMboxMMDF):
 
@@ -1761,6 +1785,7 @@
              TestMessageConversion, TestProxyFile, TestPartialFile,
              MaildirTestCase)
     test_support.run_unittest(*tests)
+    test_support.reap_children()
 
 
 if __name__ == '__main__':
diff --git a/Lib/test/test_mimetools.py b/Lib/test/test_mimetools.py
index 96bbb36..b0b5b01 100644
--- a/Lib/test/test_mimetools.py
+++ b/Lib/test/test_mimetools.py
@@ -1,7 +1,7 @@
 import unittest
 from test import test_support
 
-import string, StringIO, mimetools, sets
+import string, StringIO, mimetools
 
 msgtext1 = mimetools.Message(StringIO.StringIO(
 """Content-Type: text/plain; charset=iso-8859-1; format=flowed
@@ -25,7 +25,7 @@
             self.assertEqual(o.getvalue(), start)
 
     def test_boundary(self):
-        s = sets.Set([""])
+        s = set([""])
         for i in xrange(100):
             nb = mimetools.choose_boundary()
             self.assert_(nb not in s)
diff --git a/Lib/test/test_mimetypes.py b/Lib/test/test_mimetypes.py
index 8c584ad..0190c2f 100644
--- a/Lib/test/test_mimetypes.py
+++ b/Lib/test/test_mimetypes.py
@@ -1,7 +1,6 @@
 import mimetypes
 import StringIO
 import unittest
-from sets import Set
 
 from test import test_support
 
@@ -52,8 +51,8 @@
         # First try strict.  Use a set here for testing the results because if
         # test_urllib2 is run before test_mimetypes, global state is modified
         # such that the 'all' set will have more items in it.
-        all = Set(self.db.guess_all_extensions('text/plain', strict=True))
-        unless(all >= Set(['.bat', '.c', '.h', '.ksh', '.pl', '.txt']))
+        all = set(self.db.guess_all_extensions('text/plain', strict=True))
+        unless(all >= set(['.bat', '.c', '.h', '.ksh', '.pl', '.txt']))
         # And now non-strict
         all = self.db.guess_all_extensions('image/jpg', strict=False)
         all.sort()
diff --git a/Lib/test/test_minidom.py b/Lib/test/test_minidom.py
index b9377ae..a6d309f 100644
--- a/Lib/test/test_minidom.py
+++ b/Lib/test/test_minidom.py
@@ -1,4 +1,4 @@
-# test for xmlcore.dom.minidom
+# test for xml.dom.minidom
 
 import os
 import sys
@@ -7,12 +7,12 @@
 from StringIO import StringIO
 from test.test_support import verbose
 
-import xmlcore.dom
-import xmlcore.dom.minidom
-import xmlcore.parsers.expat
+import xml.dom
+import xml.dom.minidom
+import xml.parsers.expat
 
-from xmlcore.dom.minidom import parse, Node, Document, parseString
-from xmlcore.dom.minidom import getDOMImplementation
+from xml.dom.minidom import parse, Node, Document, parseString
+from xml.dom.minidom import getDOMImplementation
 
 
 if __name__ == "__main__":
@@ -138,29 +138,29 @@
     text = dom.createTextNode('text')
 
     try: dom.appendChild(text)
-    except xmlcore.dom.HierarchyRequestErr: pass
+    except xml.dom.HierarchyRequestErr: pass
     else:
         print "dom.appendChild didn't raise HierarchyRequestErr"
 
     dom.appendChild(elem)
     try: dom.insertBefore(text, elem)
-    except xmlcore.dom.HierarchyRequestErr: pass
+    except xml.dom.HierarchyRequestErr: pass
     else:
         print "dom.appendChild didn't raise HierarchyRequestErr"
 
     try: dom.replaceChild(text, elem)
-    except xmlcore.dom.HierarchyRequestErr: pass
+    except xml.dom.HierarchyRequestErr: pass
     else:
         print "dom.appendChild didn't raise HierarchyRequestErr"
 
     nodemap = elem.attributes
     try: nodemap.setNamedItem(text)
-    except xmlcore.dom.HierarchyRequestErr: pass
+    except xml.dom.HierarchyRequestErr: pass
     else:
         print "NamedNodeMap.setNamedItem didn't raise HierarchyRequestErr"
 
     try: nodemap.setNamedItemNS(text)
-    except xmlcore.dom.HierarchyRequestErr: pass
+    except xml.dom.HierarchyRequestErr: pass
     else:
         print "NamedNodeMap.setNamedItemNS didn't raise HierarchyRequestErr"
 
@@ -439,7 +439,7 @@
             and pi.firstChild is None
             and pi.lastChild is None
             and pi.localName is None
-            and pi.namespaceURI == xmlcore.dom.EMPTY_NAMESPACE)
+            and pi.namespaceURI == xml.dom.EMPTY_NAMESPACE)
 
 def testProcessingInstructionRepr(): pass
 
@@ -454,7 +454,7 @@
     elem = doc.createElement("extra")
     try:
         doc.appendChild(elem)
-    except xmlcore.dom.HierarchyRequestErr:
+    except xml.dom.HierarchyRequestErr:
         pass
     else:
         print "Failed to catch expected exception when" \
@@ -491,7 +491,7 @@
     confirm(a1.isSameNode(a2))
     try:
         attrs.removeNamedItem("a")
-    except xmlcore.dom.NotFoundErr:
+    except xml.dom.NotFoundErr:
         pass
 
 def testRemoveNamedItemNS():
@@ -503,7 +503,7 @@
     confirm(a1.isSameNode(a2))
     try:
         attrs.removeNamedItemNS("http://xml.python.org/", "b")
-    except xmlcore.dom.NotFoundErr:
+    except xml.dom.NotFoundErr:
         pass
 
 def testAttrListValues(): pass
@@ -682,7 +682,7 @@
     doc2 = parseString("<doc/>")
     try:
         doc1.importNode(doc2, deep)
-    except xmlcore.dom.NotSupportedErr:
+    except xml.dom.NotSupportedErr:
         pass
     else:
         raise Exception(testName +
@@ -705,14 +705,12 @@
     doctype = getDOMImplementation().createDocumentType("doc", None, None)
     doctype.entities._seq = []
     doctype.notations._seq = []
-    notation = xmlcore.dom.minidom.Notation(
-        "my-notation", None,
-        "http://xml.python.org/notations/my")
+    notation = xml.dom.minidom.Notation("my-notation", None,
+                                        "http://xml.python.org/notations/my")
     doctype.notations._seq.append(notation)
-    entity = xmlcore.dom.minidom.Entity(
-        "my-entity", None,
-        "http://xml.python.org/entities/my",
-        "my-notation")
+    entity = xml.dom.minidom.Entity("my-entity", None,
+                                    "http://xml.python.org/entities/my",
+                                    "my-notation")
     entity.version = "1.0"
     entity.encoding = "utf-8"
     entity.actualEncoding = "us-ascii"
@@ -731,7 +729,7 @@
     target = create_doc_without_doctype()
     try:
         imported = target.importNode(src.doctype, 0)
-    except xmlcore.dom.NotSupportedErr:
+    except xml.dom.NotSupportedErr:
         pass
     else:
         raise Exception(
@@ -742,7 +740,7 @@
     target = create_doc_without_doctype()
     try:
         imported = target.importNode(src.doctype, 1)
-    except xmlcore.dom.NotSupportedErr:
+    except xml.dom.NotSupportedErr:
         pass
     else:
         raise Exception(
@@ -850,7 +848,7 @@
     doc.unlink()
 
 def testSAX2DOM():
-    from xmlcore.dom import pulldom
+    from xml.dom import pulldom
 
     sax2dom = pulldom.SAX2DOM()
     sax2dom.startDocument()
@@ -940,11 +938,11 @@
     attr = elem.attributes['a']
 
     # Simple renaming
-    attr = doc.renameNode(attr, xmlcore.dom.EMPTY_NAMESPACE, "b")
+    attr = doc.renameNode(attr, xml.dom.EMPTY_NAMESPACE, "b")
     confirm(attr.name == "b"
             and attr.nodeName == "b"
             and attr.localName is None
-            and attr.namespaceURI == xmlcore.dom.EMPTY_NAMESPACE
+            and attr.namespaceURI == xml.dom.EMPTY_NAMESPACE
             and attr.prefix is None
             and attr.value == "v"
             and elem.getAttributeNode("a") is None
@@ -989,11 +987,11 @@
             and attrmap[("http://xml.python.org/ns2", "d")].isSameNode(attr))
 
     # Rename back to a simple non-NS node
-    attr = doc.renameNode(attr, xmlcore.dom.EMPTY_NAMESPACE, "e")
+    attr = doc.renameNode(attr, xml.dom.EMPTY_NAMESPACE, "e")
     confirm(attr.name == "e"
             and attr.nodeName == "e"
             and attr.localName is None
-            and attr.namespaceURI == xmlcore.dom.EMPTY_NAMESPACE
+            and attr.namespaceURI == xml.dom.EMPTY_NAMESPACE
             and attr.prefix is None
             and attr.value == "v"
             and elem.getAttributeNode("a") is None
@@ -1007,7 +1005,7 @@
 
     try:
         doc.renameNode(attr, "http://xml.python.org/ns", "xmlns")
-    except xmlcore.dom.NamespaceErr:
+    except xml.dom.NamespaceErr:
         pass
     else:
         print "expected NamespaceErr"
@@ -1020,11 +1018,11 @@
     elem = doc.documentElement
 
     # Simple renaming
-    elem = doc.renameNode(elem, xmlcore.dom.EMPTY_NAMESPACE, "a")
+    elem = doc.renameNode(elem, xml.dom.EMPTY_NAMESPACE, "a")
     confirm(elem.tagName == "a"
             and elem.nodeName == "a"
             and elem.localName is None
-            and elem.namespaceURI == xmlcore.dom.EMPTY_NAMESPACE
+            and elem.namespaceURI == xml.dom.EMPTY_NAMESPACE
             and elem.prefix is None
             and elem.ownerDocument.isSameNode(doc))
 
@@ -1047,11 +1045,11 @@
             and elem.ownerDocument.isSameNode(doc))
 
     # Rename back to a simple non-NS node
-    elem = doc.renameNode(elem, xmlcore.dom.EMPTY_NAMESPACE, "d")
+    elem = doc.renameNode(elem, xml.dom.EMPTY_NAMESPACE, "d")
     confirm(elem.tagName == "d"
             and elem.nodeName == "d"
             and elem.localName is None
-            and elem.namespaceURI == xmlcore.dom.EMPTY_NAMESPACE
+            and elem.namespaceURI == xml.dom.EMPTY_NAMESPACE
             and elem.prefix is None
             and elem.ownerDocument.isSameNode(doc))
 
@@ -1062,15 +1060,15 @@
     # Make sure illegal NS usage is detected:
     try:
         doc.renameNode(node, "http://xml.python.org/ns", "xmlns:foo")
-    except xmlcore.dom.NamespaceErr:
+    except xml.dom.NamespaceErr:
         pass
     else:
         print "expected NamespaceErr"
 
     doc2 = parseString("<doc/>")
     try:
-        doc2.renameNode(node, xmlcore.dom.EMPTY_NAMESPACE, "foo")
-    except xmlcore.dom.WrongDocumentErr:
+        doc2.renameNode(node, xml.dom.EMPTY_NAMESPACE, "foo")
+    except xml.dom.WrongDocumentErr:
         pass
     else:
         print "expected WrongDocumentErr"
@@ -1078,12 +1076,12 @@
 def testRenameOther():
     # We have to create a comment node explicitly since not all DOM
     # builders used with minidom add comments to the DOM.
-    doc = xmlcore.dom.minidom.getDOMImplementation().createDocument(
-        xmlcore.dom.EMPTY_NAMESPACE, "e", None)
+    doc = xml.dom.minidom.getDOMImplementation().createDocument(
+        xml.dom.EMPTY_NAMESPACE, "e", None)
     node = doc.createComment("comment")
     try:
-        doc.renameNode(node, xmlcore.dom.EMPTY_NAMESPACE, "foo")
-    except xmlcore.dom.NotSupportedErr:
+        doc.renameNode(node, xml.dom.EMPTY_NAMESPACE, "foo")
+    except xml.dom.NotSupportedErr:
         pass
     else:
         print "expected NotSupportedErr when renaming comment node"
@@ -1194,13 +1192,13 @@
     # since each supports a different level of DTD information.
     t = elem.schemaType
     confirm(t.name is None
-            and t.namespace == xmlcore.dom.EMPTY_NAMESPACE)
+            and t.namespace == xml.dom.EMPTY_NAMESPACE)
     names = "id notid text enum ref refs ent ents nm nms".split()
     for name in names:
         a = elem.getAttributeNode(name)
         t = a.schemaType
         confirm(hasattr(t, "name")
-                and t.namespace == xmlcore.dom.EMPTY_NAMESPACE)
+                and t.namespace == xml.dom.EMPTY_NAMESPACE)
 
 def testSetIdAttribute():
     doc = parseString("<doc a1='v' a2='w'/>")
@@ -1229,7 +1227,7 @@
             and a2.isId
             and not a3.isId)
     # renaming an attribute should not affect its ID-ness:
-    doc.renameNode(a2, xmlcore.dom.EMPTY_NAMESPACE, "an")
+    doc.renameNode(a2, xml.dom.EMPTY_NAMESPACE, "an")
     confirm(e.isSameNode(doc.getElementById("w"))
             and a2.isId)
 
@@ -1265,7 +1263,7 @@
     confirm(not a3.isId)
     confirm(doc.getElementById("v") is None)
     # renaming an attribute should not affect its ID-ness:
-    doc.renameNode(a2, xmlcore.dom.EMPTY_NAMESPACE, "an")
+    doc.renameNode(a2, xml.dom.EMPTY_NAMESPACE, "an")
     confirm(e.isSameNode(doc.getElementById("w"))
             and a2.isId)
 
@@ -1301,7 +1299,7 @@
     confirm(not a3.isId)
     confirm(doc.getElementById("v") is None)
     # renaming an attribute should not affect its ID-ness:
-    doc.renameNode(a2, xmlcore.dom.EMPTY_NAMESPACE, "an")
+    doc.renameNode(a2, xml.dom.EMPTY_NAMESPACE, "an")
     confirm(e.isSameNode(doc.getElementById("w"))
             and a2.isId)
 
diff --git a/Lib/test/test_multibytecodec.py b/Lib/test/test_multibytecodec.py
index 276b9af..397ebeb 100644
--- a/Lib/test/test_multibytecodec.py
+++ b/Lib/test/test_multibytecodec.py
@@ -6,17 +6,37 @@
 
 from test import test_support
 from test import test_multibytecodec_support
-import unittest, StringIO, codecs, sys
+from test.test_support import TESTFN
+import unittest, StringIO, codecs, sys, os
+
+ALL_CJKENCODINGS = [
+# _codecs_cn
+    'gb2312', 'gbk', 'gb18030', 'hz',
+# _codecs_hk
+    'big5hkscs',
+# _codecs_jp
+    'cp932', 'shift_jis', 'euc_jp', 'euc_jisx0213', 'shift_jisx0213',
+    'euc_jis_2004', 'shift_jis_2004',
+# _codecs_kr
+    'cp949', 'euc_kr', 'johab',
+# _codecs_tw
+    'big5', 'cp950',
+# _codecs_iso2022
+    'iso2022_jp', 'iso2022_jp_1', 'iso2022_jp_2', 'iso2022_jp_2004',
+    'iso2022_jp_3', 'iso2022_jp_ext', 'iso2022_kr',
+]
 
 class Test_MultibyteCodec(unittest.TestCase):
 
     def test_nullcoding(self):
-        self.assertEqual(''.decode('gb18030'), u'')
-        self.assertEqual(unicode('', 'gb18030'), u'')
-        self.assertEqual(u''.encode('gb18030'), '')
+        for enc in ALL_CJKENCODINGS:
+            self.assertEqual(''.decode(enc), u'')
+            self.assertEqual(unicode('', enc), u'')
+            self.assertEqual(u''.encode(enc), '')
 
     def test_str_decode(self):
-        self.assertEqual('abcd'.encode('gb18030'), 'abcd')
+        for enc in ALL_CJKENCODINGS:
+            self.assertEqual('abcd'.encode(enc), 'abcd')
 
     def test_errorcallback_longindex(self):
         dec = codecs.getdecoder('euc-kr')
@@ -25,6 +45,14 @@
         self.assertRaises(IndexError, dec,
                           'apple\x92ham\x93spam', 'test.cjktest')
 
+    def test_codingspec(self):
+        try:
+            for enc in ALL_CJKENCODINGS:
+                print >> open(TESTFN, 'w'), '# coding:', enc
+                exec open(TESTFN)
+        finally:
+            os.unlink(TESTFN)
+
 class Test_IncrementalEncoder(unittest.TestCase):
 
     def test_stateless(self):
diff --git a/Lib/test/test_optparse.py b/Lib/test/test_optparse.py
index 79df906..4582fa7 100644
--- a/Lib/test/test_optparse.py
+++ b/Lib/test/test_optparse.py
@@ -15,7 +15,7 @@
 import types
 import unittest
 
-from cStringIO import StringIO
+from StringIO import StringIO
 from pprint import pprint
 from test import test_support
 
@@ -164,15 +164,23 @@
                      expected_error=None):
         """Assert the parser prints the expected output on stdout."""
         save_stdout = sys.stdout
+        encoding = getattr(save_stdout, 'encoding', None)
         try:
             try:
                 sys.stdout = StringIO()
+                if encoding:
+                    sys.stdout.encoding = encoding
                 self.parser.parse_args(cmdline_args)
             finally:
                 output = sys.stdout.getvalue()
                 sys.stdout = save_stdout
 
         except InterceptedError, err:
+            self.assert_(
+                type(output) is types.StringType,
+                "expected output to be an ordinary string, not %r"
+                % type(output))
+
             if output != expected_output:
                 self.fail("expected: \n'''\n" + expected_output +
                           "'''\nbut got \n'''\n" + output + "'''")
@@ -1452,10 +1460,26 @@
             make_option("--foo", action="append", type="string", dest='foo',
                         help="store FOO in the foo list for later fooing"),
             ]
+
+        # We need to set COLUMNS for the OptionParser constructor, but
+        # we must restore its original value -- otherwise, this test
+        # screws things up for other tests when it's part of the Python
+        # test suite.
+        orig_columns = os.environ.get('COLUMNS')
         os.environ['COLUMNS'] = str(columns)
-        return InterceptingOptionParser(option_list=options)
+        try:
+            return InterceptingOptionParser(option_list=options)
+        finally:
+            if orig_columns is None:
+                del os.environ['COLUMNS']
+            else:
+                os.environ['COLUMNS'] = orig_columns
 
     def assertHelpEquals(self, expected_output):
+        if type(expected_output) is types.UnicodeType:
+            encoding = self.parser._get_encoding(sys.stdout)
+            expected_output = expected_output.encode(encoding, "replace")
+
         save_argv = sys.argv[:]
         try:
             # Make optparse believe bar.py is being executed.
@@ -1486,6 +1510,27 @@
         self.parser = self.make_parser(60)
         self.assertHelpEquals(_expected_help_short_lines)
 
+    def test_help_unicode(self):
+        self.parser = InterceptingOptionParser(usage=SUPPRESS_USAGE)
+        self.parser.add_option("-a", action="store_true", help=u"ol\u00E9!")
+        expect = u"""\
+Options:
+  -h, --help  show this help message and exit
+  -a          ol\u00E9!
+"""
+        self.assertHelpEquals(expect)
+
+    def test_help_unicode_description(self):
+        self.parser = InterceptingOptionParser(usage=SUPPRESS_USAGE,
+                                               description=u"ol\u00E9!")
+        expect = u"""\
+ol\u00E9!
+
+Options:
+  -h, --help  show this help message and exit
+"""
+        self.assertHelpEquals(expect)
+
     def test_help_description_groups(self):
         self.parser.set_description(
             "This is the program description for %prog.  %prog has "
diff --git a/Lib/test/test_os.py b/Lib/test/test_os.py
index ffc9420..9497777 100644
--- a/Lib/test/test_os.py
+++ b/Lib/test/test_os.py
@@ -11,6 +11,19 @@
 warnings.filterwarnings("ignore", "tempnam", RuntimeWarning, __name__)
 warnings.filterwarnings("ignore", "tmpnam", RuntimeWarning, __name__)
 
+# Tests creating TESTFN
+class FileTests(unittest.TestCase):
+    def setUp(self):
+        if os.path.exists(test_support.TESTFN):
+            os.unlink(test_support.TESTFN)
+    tearDown = setUp
+
+    def test_access(self):
+        f = os.open(test_support.TESTFN, os.O_CREAT|os.O_RDWR)
+        os.close(f)
+        self.assert_(os.access(test_support.TESTFN, os.W_OK))
+
+
 class TemporaryFileTests(unittest.TestCase):
     def setUp(self):
         self.files = []
@@ -393,6 +406,7 @@
 
 def test_main():
     test_support.run_unittest(
+        FileTests,
         TemporaryFileTests,
         StatAttributeTests,
         EnvironTests,
diff --git a/Lib/test/test_ossaudiodev.py b/Lib/test/test_ossaudiodev.py
index 8810516..5868ea7 100644
--- a/Lib/test/test_ossaudiodev.py
+++ b/Lib/test/test_ossaudiodev.py
@@ -40,6 +40,10 @@
     data = audioop.ulaw2lin(data, 2)
     return (data, rate, 16, nchannels)
 
+# version of assert that still works with -O
+def _assert(expr, message=None):
+    if not expr:
+        raise AssertionError(message or "assertion failed")
 
 def play_sound_file(data, rate, ssize, nchannels):
     try:
@@ -57,9 +61,9 @@
     dsp.fileno()
 
     # Make sure the read-only attributes work.
-    assert dsp.closed is False, "dsp.closed is not False"
-    assert dsp.name == "/dev/dsp"
-    assert dsp.mode == 'w', "bad dsp.mode: %r" % dsp.mode
+    _assert(dsp.closed is False, "dsp.closed is not False")
+    _assert(dsp.name == "/dev/dsp")
+    _assert(dsp.mode == 'w', "bad dsp.mode: %r" % dsp.mode)
 
     # And make sure they're really read-only.
     for attr in ('closed', 'name', 'mode'):
@@ -69,14 +73,23 @@
         except TypeError:
             pass
 
+    # Compute expected running time of sound sample (in seconds).
+    expected_time = float(len(data)) / (ssize/8) / nchannels / rate
+
     # set parameters based on .au file headers
     dsp.setparameters(AFMT_S16_NE, nchannels, rate)
+    print ("playing test sound file (expected running time: %.2f sec)"
+           % expected_time)
     t1 = time.time()
-    print "playing test sound file..."
     dsp.write(data)
     dsp.close()
     t2 = time.time()
-    print "elapsed time: %.1f sec" % (t2-t1)
+    elapsed_time = t2 - t1
+
+    percent_diff = (abs(elapsed_time - expected_time) / expected_time) * 100
+    _assert(percent_diff <= 10.0,
+            ("elapsed time (%.2f sec) > 10%% off of expected time (%.2f sec)"
+             % (elapsed_time, expected_time)))
 
 def test_setparameters(dsp):
     # Two configurations for testing:
@@ -101,11 +114,11 @@
     # setparameters() should be able to set this configuration in
     # either strict or non-strict mode.
     result = dsp.setparameters(fmt, channels, rate, False)
-    assert result == (fmt, channels, rate), \
-           "setparameters%r: returned %r" % (config + result)
+    _assert(result == (fmt, channels, rate),
+            "setparameters%r: returned %r" % (config, result))
     result = dsp.setparameters(fmt, channels, rate, True)
-    assert result == (fmt, channels, rate), \
-           "setparameters%r: returned %r" % (config + result)
+    _assert(result == (fmt, channels, rate),
+            "setparameters%r: returned %r" % (config, result))
 
 def test_bad_setparameters(dsp):
 
@@ -123,8 +136,8 @@
                   ]:
         (fmt, channels, rate) = config
         result = dsp.setparameters(fmt, channels, rate, False)
-        assert result != config, \
-               "setparameters: unexpectedly got requested configuration"
+        _assert(result != config,
+                "setparameters: unexpectedly got requested configuration")
 
         try:
             result = dsp.setparameters(fmt, channels, rate, True)
@@ -145,6 +158,6 @@
         #test_bad_setparameters(dsp)
     finally:
         dsp.close()
-        assert dsp.closed is True, "dsp.closed is not True"
+        _assert(dsp.closed is True, "dsp.closed is not True")
 
 test()
diff --git a/Lib/test/test_pep292.py b/Lib/test/test_pep292.py
index 2a4353a..d1100ea 100644
--- a/Lib/test/test_pep292.py
+++ b/Lib/test/test_pep292.py
@@ -58,6 +58,13 @@
         s = Template('tim has eaten ${count} bags of ham today')
         eq(s.substitute(d), 'tim has eaten 7 bags of ham today')
 
+    def test_tupleargs(self):
+        eq = self.assertEqual
+        s = Template('$who ate ${meal}')
+        d = dict(who=('tim', 'fred'), meal=('ham', 'kung pao'))
+        eq(s.substitute(d), "('tim', 'fred') ate ('ham', 'kung pao')")
+        eq(s.safe_substitute(d), "('tim', 'fred') ate ('ham', 'kung pao')")
+
     def test_SafeTemplate(self):
         eq = self.assertEqual
         s = Template('$who likes ${what} for ${meal}')
diff --git a/Lib/test/test_popen.py b/Lib/test/test_popen.py
index 2b687ad..fbf5e05 100644
--- a/Lib/test/test_popen.py
+++ b/Lib/test/test_popen.py
@@ -6,7 +6,7 @@
 
 import os
 import sys
-from test.test_support import TestSkipped
+from test.test_support import TestSkipped, reap_children
 from os import popen
 
 # Test that command-lines get down as we expect.
@@ -35,5 +35,6 @@
 def main():
     print "Test popen:"
     _test_commandline()
+    reap_children()
 
 main()
diff --git a/Lib/test/test_popen2.py b/Lib/test/test_popen2.py
index 4db3cd1..2d54eb0 100644
--- a/Lib/test/test_popen2.py
+++ b/Lib/test/test_popen2.py
@@ -5,7 +5,7 @@
 
 import os
 import sys
-from test.test_support import TestSkipped
+from test.test_support import TestSkipped, reap_children
 
 # popen2 contains its own testing routine
 # which is especially useful to see if open files
@@ -75,3 +75,4 @@
 
 main()
 _test()
+reap_children()
diff --git a/Lib/test/test_pyexpat.py b/Lib/test/test_pyexpat.py
index a9a5e8f..0698818 100644
--- a/Lib/test/test_pyexpat.py
+++ b/Lib/test/test_pyexpat.py
@@ -365,3 +365,24 @@
   <c/>
  </b>
 </a>''', 1)
+
+
+def test_parse_only_xml_data():
+    # http://python.org/sf/1296433
+    #
+    xml = "<?xml version='1.0' encoding='iso8859'?><s>%s</s>" % ('a' * 1025)
+    # this one doesn't crash
+    #xml = "<?xml version='1.0'?><s>%s</s>" % ('a' * 10000)
+
+    def handler(text):
+        raise Exception
+
+    parser = expat.ParserCreate()
+    parser.CharacterDataHandler = handler
+
+    try:
+        parser.Parse(xml)
+    except:
+        pass
+
+test_parse_only_xml_data()
diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py
index ded81fb..af4c7dd 100644
--- a/Lib/test/test_sax.py
+++ b/Lib/test/test_sax.py
@@ -1,17 +1,17 @@
 # regression test for SAX 2.0            -*- coding: iso-8859-1 -*-
 # $Id$
 
-from xmlcore.sax import make_parser, ContentHandler, \
-     SAXException, SAXReaderNotAvailable, SAXParseException
+from xml.sax import make_parser, ContentHandler, \
+                    SAXException, SAXReaderNotAvailable, SAXParseException
 try:
     make_parser()
 except SAXReaderNotAvailable:
     # don't try to test this module if we cannot create a parser
     raise ImportError("no XML parsers available")
-from xmlcore.sax.saxutils import XMLGenerator, escape, unescape, quoteattr, \
-     XMLFilterBase
-from xmlcore.sax.expatreader import create_parser
-from xmlcore.sax.xmlreader import InputSource, AttributesImpl, AttributesNSImpl
+from xml.sax.saxutils import XMLGenerator, escape, unescape, quoteattr, \
+                             XMLFilterBase
+from xml.sax.expatreader import create_parser
+from xml.sax.xmlreader import InputSource, AttributesImpl, AttributesNSImpl
 from cStringIO import StringIO
 from test.test_support import verify, verbose, TestFailed, findfile
 import os
@@ -36,17 +36,17 @@
         # Creating parsers several times in a row should succeed.
         # Testing this because there have been failures of this kind
         # before.
-        from xmlcore.sax import make_parser
+        from xml.sax import make_parser
         p = make_parser()
-        from xmlcore.sax import make_parser
+        from xml.sax import make_parser
         p = make_parser()
-        from xmlcore.sax import make_parser
+        from xml.sax import make_parser
         p = make_parser()
-        from xmlcore.sax import make_parser
+        from xml.sax import make_parser
         p = make_parser()
-        from xmlcore.sax import make_parser
+        from xml.sax import make_parser
         p = make_parser()
-        from xmlcore.sax import make_parser
+        from xml.sax import make_parser
         p = make_parser()
     except:
         return 0
@@ -108,7 +108,7 @@
     try:
         # Creating a parser should succeed - it should fall back
         # to the expatreader
-        p = make_parser(['xmlcore.parsers.no_such_parser'])
+        p = make_parser(['xml.parsers.no_such_parser'])
     except:
         return 0
     else:
@@ -175,11 +175,14 @@
     gen.endElement("e")
     gen.startElement("e", {"a": "'\""})
     gen.endElement("e")
+    gen.startElement("e", {"a": "\n\r\t"})
+    gen.endElement("e")
     gen.endElement("doc")
     gen.endDocument()
 
-    return result.getvalue() == start \
-           + "<doc a='\"'><e a=\"'\"></e><e a=\"'&quot;\"></e></doc>"
+    return result.getvalue() == start + ("<doc a='\"'><e a=\"'\"></e>"
+                                         "<e a=\"'&quot;\"></e>"
+                                         "<e a=\"&#10;&#13;&#9;\"></e></doc>")
 
 def test_xmlgen_ignorable():
     result = StringIO()
@@ -668,6 +671,55 @@
            attrs.getQNameByName((ns_uri, "attr")) == "ns:attr"
 
 
+# During the development of Python 2.5, an attempt to move the "xml"
+# package implementation to a new package ("xmlcore") proved painful.
+# The goal of this change was to allow applications to be able to
+# obtain and rely on behavior in the standard library implementation
+# of the XML support without needing to be concerned about the
+# availability of the PyXML implementation.
+#
+# While the existing import hackery in Lib/xml/__init__.py can cause
+# PyXML's _xmlpus package to supplant the "xml" package, that only
+# works because either implementation uses the "xml" package name for
+# imports.
+#
+# The move resulted in a number of problems related to the fact that
+# the import machinery's "package context" is based on the name that's
+# being imported rather than the __name__ of the actual package
+# containment; it wasn't possible for the "xml" package to be replaced
+# by a simple module that indirected imports to the "xmlcore" package.
+#
+# The following two tests exercised bugs that were introduced in that
+# attempt.  Keeping these tests around will help detect problems with
+# other attempts to provide reliable access to the standard library's
+# implementation of the XML support.
+
+def test_sf_1511497():
+    # Bug report: http://www.python.org/sf/1511497
+    import sys
+    old_modules = sys.modules.copy()
+    for modname in sys.modules.keys():
+        if modname.startswith("xml."):
+            del sys.modules[modname]
+    try:
+        import xml.sax.expatreader
+        module = xml.sax.expatreader
+        return module.__name__ == "xml.sax.expatreader"
+    finally:
+        sys.modules.update(old_modules)
+
+def test_sf_1513611():
+    # Bug report: http://www.python.org/sf/1513611
+    sio = StringIO("invalid")
+    parser = make_parser()
+    from xml.sax import SAXParseException
+    try:
+        parser.parse(sio)
+    except SAXParseException:
+        return True
+    else:
+        return False
+
 # ===== Main program
 
 def make_test_output():
diff --git a/Lib/test/test_scope.py b/Lib/test/test_scope.py
index f37254c..239745c 100644
--- a/Lib/test/test_scope.py
+++ b/Lib/test/test_scope.py
@@ -299,6 +299,17 @@
 else:
     raise TestFailed
 
+# test for bug #1501934: incorrect LOAD/STORE_GLOBAL generation
+global_x = 1
+def f():
+    global_x += 1
+try:
+    f()
+except UnboundLocalError:
+    pass
+else:
+    raise TestFailed, 'scope of global_x not correctly determined'
+
 print "14. complex definitions"
 
 def makeReturner(*lst):
diff --git a/Lib/test/test_select.py b/Lib/test/test_select.py
index eaec52b..d341324 100644
--- a/Lib/test/test_select.py
+++ b/Lib/test/test_select.py
@@ -1,5 +1,5 @@
 # Testing select module
-from test.test_support import verbose
+from test.test_support import verbose, reap_children
 import select
 import os
 
@@ -65,5 +65,6 @@
             continue
         print 'Unexpected return values from select():', rfd, wfd, xfd
     p.close()
+    reap_children()
 
 test()
diff --git a/Lib/test/test_sgmllib.py b/Lib/test/test_sgmllib.py
index 8e8b02f..28a21a4 100644
--- a/Lib/test/test_sgmllib.py
+++ b/Lib/test/test_sgmllib.py
@@ -1,4 +1,6 @@
+import htmlentitydefs
 import pprint
+import re
 import sgmllib
 import unittest
 from test import test_support
@@ -64,6 +66,37 @@
         self.setliteral()
 
 
+class HTMLEntityCollector(EventCollector):
+
+    entity_or_charref = re.compile('(?:&([a-zA-Z][-.a-zA-Z0-9]*)'
+        '|&#(x[0-9a-zA-Z]+|[0-9]+))(;?)')
+
+    def convert_charref(self, name):
+        self.append(("charref", "convert", name))
+        if name[0] != "x":
+            return EventCollector.convert_charref(self, name)
+
+    def convert_codepoint(self, codepoint):
+        self.append(("codepoint", "convert", codepoint))
+        EventCollector.convert_codepoint(self, codepoint)
+
+    def convert_entityref(self, name):
+        self.append(("entityref", "convert", name))
+        return EventCollector.convert_entityref(self, name)
+
+    # These methods record that they were called, then pass the call
+    # along to the default implementation so that its actions can be
+    # recorded.
+
+    def handle_charref(self, data):
+        self.append(("charref", data))
+        sgmllib.SGMLParser.handle_charref(self, data)
+
+    def handle_entityref(self, data):
+        self.append(("entityref", data))
+        sgmllib.SGMLParser.handle_entityref(self, data)
+
+
 class SGMLParserTestCase(unittest.TestCase):
 
     collector = EventCollector
@@ -218,7 +251,9 @@
         """Substitution of entities and charrefs in attribute values"""
         # SF bug #1452246
         self.check_events("""<a b=&lt; c=&lt;&gt; d=&lt-&gt; e='&lt; '
-                                f="&xxx;" g='&#32;&#33;' h='&#500;' i='x?a=b&c=d;'>""",
+                                f="&xxx;" g='&#32;&#33;' h='&#500;'
+                                i='x?a=b&c=d;'
+                                j='&amp;#42;' k='&#38;#42;'>""",
             [("starttag", "a", [("b", "<"),
                                 ("c", "<>"),
                                 ("d", "&lt->"),
@@ -226,13 +261,59 @@
                                 ("f", "&xxx;"),
                                 ("g", " !"),
                                 ("h", "&#500;"),
-                                ("i", "x?a=b&c=d;"), ])])
+                                ("i", "x?a=b&c=d;"),
+                                ("j", "&#42;"),
+                                ("k", "&#42;"),
+                                ])])
+
+    def test_convert_overrides(self):
+        # This checks that the character and entity reference
+        # conversion helpers are called at the documented times.  No
+        # attempt is made to really change what the parser accepts.
+        #
+        self.collector = HTMLEntityCollector
+        self.check_events(('<a title="&ldquo;test&#x201d;">foo</a>'
+                           '&foobar;&#42;'), [
+            ('entityref', 'convert', 'ldquo'),
+            ('charref', 'convert', 'x201d'),
+            ('starttag', 'a', [('title', '&ldquo;test&#x201d;')]),
+            ('data', 'foo'),
+            ('endtag', 'a'),
+            ('entityref', 'foobar'),
+            ('entityref', 'convert', 'foobar'),
+            ('charref', '42'),
+            ('charref', 'convert', '42'),
+            ('codepoint', 'convert', 42),
+            ])
+
+    def test_attr_values_quoted_markup(self):
+        """Multi-line and markup in attribute values"""
+        self.check_events("""<a title='foo\n<br>bar'>text</a>""",
+            [("starttag", "a", [("title", "foo\n<br>bar")]),
+             ("data", "text"),
+             ("endtag", "a")])
+        self.check_events("""<a title='less < than'>text</a>""",
+            [("starttag", "a", [("title", "less < than")]),
+             ("data", "text"),
+             ("endtag", "a")])
+        self.check_events("""<a title='greater > than'>text</a>""",
+            [("starttag", "a", [("title", "greater > than")]),
+             ("data", "text"),
+             ("endtag", "a")])
 
     def test_attr_funky_names(self):
         self.check_events("""<a a.b='v' c:d=v e-f=v>""", [
             ("starttag", "a", [("a.b", "v"), ("c:d", "v"), ("e-f", "v")]),
             ])
 
+    def test_attr_value_ip6_url(self):
+        # http://www.python.org/sf/853506
+        self.check_events(("<a href='http://[1080::8:800:200C:417A]/'>"
+                           "<a href=http://[1080::8:800:200C:417A]/>"), [
+            ("starttag", "a", [("href", "http://[1080::8:800:200C:417A]/")]),
+            ("starttag", "a", [("href", "http://[1080::8:800:200C:417A]/")]),
+            ])
+
     def test_illegal_declarations(self):
         s = 'abc<!spacer type="block" height="25">def'
         self.check_events(s, [
@@ -301,8 +382,8 @@
     # that needs to be carefully considered before changing it.
 
     def _test_starttag_end_boundary(self):
-        self.check_events("""<a b='<'>""", [("starttag", "a", [("b", "<")])])
-        self.check_events("""<a b='>'>""", [("starttag", "a", [("b", ">")])])
+        self.check_events("<a b='<'>", [("starttag", "a", [("b", "<")])])
+        self.check_events("<a b='>'>", [("starttag", "a", [("b", ">")])])
 
     def _test_buffer_artefacts(self):
         output = [("starttag", "a", [("b", "<")])]
@@ -322,17 +403,17 @@
         self.check_events(["<a b='>'", ">"], output)
 
         output = [("comment", "abc")]
-        self._run_check(["", "<!--abc-->"], output)
-        self._run_check(["<", "!--abc-->"], output)
-        self._run_check(["<!", "--abc-->"], output)
-        self._run_check(["<!-", "-abc-->"], output)
-        self._run_check(["<!--", "abc-->"], output)
-        self._run_check(["<!--a", "bc-->"], output)
-        self._run_check(["<!--ab", "c-->"], output)
-        self._run_check(["<!--abc", "-->"], output)
-        self._run_check(["<!--abc-", "->"], output)
-        self._run_check(["<!--abc--", ">"], output)
-        self._run_check(["<!--abc-->", ""], output)
+        self.check_events(["", "<!--abc-->"], output)
+        self.check_events(["<", "!--abc-->"], output)
+        self.check_events(["<!", "--abc-->"], output)
+        self.check_events(["<!-", "-abc-->"], output)
+        self.check_events(["<!--", "abc-->"], output)
+        self.check_events(["<!--a", "bc-->"], output)
+        self.check_events(["<!--ab", "c-->"], output)
+        self.check_events(["<!--abc", "-->"], output)
+        self.check_events(["<!--abc-", "->"], output)
+        self.check_events(["<!--abc--", ">"], output)
+        self.check_events(["<!--abc-->", ""], output)
 
     def _test_starttag_junk_chars(self):
         self.check_parse_error("<")
diff --git a/Lib/test/test_shutil.py b/Lib/test/test_shutil.py
index 6ab5a35..da71fa8 100644
--- a/Lib/test/test_shutil.py
+++ b/Lib/test/test_shutil.py
@@ -74,6 +74,53 @@
             except:
                 pass
 
+    def test_copytree_simple(self):
+        def write_data(path, data):
+            f = open(path, "w")
+            f.write(data)
+            f.close()
+
+        def read_data(path):
+            f = open(path)
+            data = f.read()
+            f.close()
+            return data
+
+        src_dir = tempfile.mkdtemp()
+        dst_dir = os.path.join(tempfile.mkdtemp(), 'destination')
+
+        write_data(os.path.join(src_dir, 'test.txt'), '123')
+
+        os.mkdir(os.path.join(src_dir, 'test_dir'))
+        write_data(os.path.join(src_dir, 'test_dir', 'test.txt'), '456')
+
+        try:
+            shutil.copytree(src_dir, dst_dir)
+            self.assertTrue(os.path.isfile(os.path.join(dst_dir, 'test.txt')))
+            self.assertTrue(os.path.isdir(os.path.join(dst_dir, 'test_dir')))
+            self.assertTrue(os.path.isfile(os.path.join(dst_dir, 'test_dir',
+                                                        'test.txt')))
+            actual = read_data(os.path.join(dst_dir, 'test.txt'))
+            self.assertEqual(actual, '123')
+            actual = read_data(os.path.join(dst_dir, 'test_dir', 'test.txt'))
+            self.assertEqual(actual, '456')
+        finally:
+            for path in (
+                    os.path.join(src_dir, 'test.txt'),
+                    os.path.join(dst_dir, 'test.txt'),
+                    os.path.join(src_dir, 'test_dir', 'test.txt'),
+                    os.path.join(dst_dir, 'test_dir', 'test.txt'),
+                ):
+                if os.path.exists(path):
+                    os.remove(path)
+            for path in (
+                    os.path.join(src_dir, 'test_dir'),
+                    os.path.join(dst_dir, 'test_dir'),
+                ):
+                if os.path.exists(path):
+                    os.removedirs(path)
+
+
     if hasattr(os, "symlink"):
         def test_dont_copy_file_onto_link_to_itself(self):
             # bug 851123.
diff --git a/Lib/test/test_signal.py b/Lib/test/test_signal.py
index f7fcb04..a6267d2 100644
--- a/Lib/test/test_signal.py
+++ b/Lib/test/test_signal.py
@@ -25,7 +25,11 @@
  ) &
 """ % vars()
 
+a_called = b_called = False
+
 def handlerA(*args):
+    global a_called
+    a_called = True
     if verbose:
         print "handlerA", args
 
@@ -33,11 +37,14 @@
     pass
 
 def handlerB(*args):
+    global b_called
+    b_called = True
     if verbose:
         print "handlerB", args
     raise HandlerBCalled, args
 
-signal.alarm(20)                        # Entire test lasts at most 20 sec.
+MAX_DURATION = 20
+signal.alarm(MAX_DURATION)   # Entire test should last at most 20 sec.
 hup = signal.signal(signal.SIGHUP, handlerA)
 usr1 = signal.signal(signal.SIGUSR1, handlerB)
 usr2 = signal.signal(signal.SIGUSR2, signal.SIG_IGN)
@@ -65,9 +72,35 @@
 except TypeError:
     pass
 
+# Set up a child to send an alarm signal to us (the parent) after waiting
+# long enough to receive the alarm.  It seems we miss the alarm for some
+# reason.  This will hopefully stop the hangs on Tru64/Alpha.
+def force_test_exit():
+    # Sigh, both imports seem necessary to avoid errors.
+    import os
+    fork_pid = os.fork()
+    if fork_pid == 0:
+        # In child
+        import os, time
+        try:
+            # Wait 5 seconds longer than the expected alarm to give enough
+            # time for the normal sequence of events to occur.  This is
+            # just a stop-gap to prevent the test from hanging.
+            time.sleep(MAX_DURATION + 5)
+            print >> sys.__stdout__, '  child should not have to kill parent'
+            for i in range(3):
+                os.kill(pid, signal.SIGALRM)
+        finally:
+            os._exit(0)
+    # In parent (or error)
+    return fork_pid
+
 try:
     os.system(script)
 
+    # Try to ensure this test exits even if there is some problem with alarm.
+    # Tru64/Alpha sometimes hangs and is ultimately killed by the buildbot.
+    fork_pid = force_test_exit()
     print "starting pause() loop..."
 
     try:
@@ -88,6 +121,22 @@
         if verbose:
             print "KeyboardInterrupt (assume the alarm() went off)"
 
+    # Forcibly kill the child we created to ping us if there was a test error.
+    try:
+        # Make sure we don't kill ourself if there was a fork error.
+        if fork_pid > 0:
+            os.kill(fork_pid, signal.SIGKILL)
+    except:
+        # If the child killed us, it has probably exited.  Killing a
+        # non-existent process will raise an error which we don't care about.
+        pass
+
+    if not a_called:
+        print 'HandlerA not called'
+
+    if not b_called:
+        print 'HandlerB not called'
+
 finally:
     signal.signal(signal.SIGHUP, hup)
     signal.signal(signal.SIGUSR1, usr1)
diff --git a/Lib/test/test_socket.py b/Lib/test/test_socket.py
index 01b9b5b..356b801 100644
--- a/Lib/test/test_socket.py
+++ b/Lib/test/test_socket.py
@@ -11,6 +11,7 @@
 import sys
 import array
 from weakref import proxy
+import signal
 
 PORT = 50007
 HOST = 'localhost'
@@ -21,7 +22,8 @@
     def setUp(self):
         self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         self.serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-        self.serv.bind((HOST, PORT))
+        global PORT
+        PORT = test_support.bind_port(self.serv, HOST, PORT)
         self.serv.listen(1)
 
     def tearDown(self):
@@ -33,7 +35,8 @@
     def setUp(self):
         self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
         self.serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-        self.serv.bind((HOST, PORT))
+        global PORT
+        PORT = test_support.bind_port(self.serv, HOST, PORT)
 
     def tearDown(self):
         self.serv.close()
@@ -447,7 +450,12 @@
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         sock.bind(("0.0.0.0", PORT+1))
         name = sock.getsockname()
-        self.assertEqual(name, ("0.0.0.0", PORT+1))
+        # XXX(nnorwitz): http://tinyurl.com/os5jz seems to indicate
+        # it reasonable to get the host's addr in addition to 0.0.0.0.
+        # At least for eCos.  This is required for the S/390 to pass.
+        my_ip_addr = socket.gethostbyname(socket.gethostname())
+        self.assert_(name[0] in ("0.0.0.0", my_ip_addr), '%s invalid' % name[0])
+        self.assertEqual(name[1], PORT+1)
 
     def testGetSockOpt(self):
         # Testing getsockopt()
@@ -575,6 +583,21 @@
     def _testRecvFrom(self):
         self.cli.sendto(MSG, 0, (HOST, PORT))
 
+class TCPCloserTest(ThreadedTCPSocketTest):
+
+    def testClose(self):
+        conn, addr = self.serv.accept()
+        conn.close()
+
+        sd = self.cli
+        read, write, err = select.select([sd], [], [], 1.0)
+        self.assertEqual(read, [sd])
+        self.assertEqual(sd.recv(1), '')
+
+    def _testClose(self):
+        self.cli.connect((HOST, PORT))
+        time.sleep(1.0)
+
 class BasicSocketPairTest(SocketPairTest):
 
     def __init__(self, methodName='runTest'):
@@ -795,6 +818,37 @@
         if not ok:
             self.fail("accept() returned success when we did not expect it")
 
+    def testInterruptedTimeout(self):
+        # XXX I don't know how to do this test on MSWindows or any other
+        # platform that doesn't support signal.alarm() or os.kill(), though
+        # the bug should have existed on all platforms.
+        if not hasattr(signal, "alarm"):
+            return                  # can only test on *nix
+        self.serv.settimeout(5.0)   # must be longer than alarm
+        class Alarm(Exception):
+            pass
+        def alarm_handler(signal, frame):
+            raise Alarm
+        old_alarm = signal.signal(signal.SIGALRM, alarm_handler)
+        try:
+            signal.alarm(2)    # POSIX allows alarm to be up to 1 second early
+            try:
+                foo = self.serv.accept()
+            except socket.timeout:
+                self.fail("caught timeout instead of Alarm")
+            except Alarm:
+                pass
+            except:
+                self.fail("caught other exception instead of Alarm")
+            else:
+                self.fail("nothing caught")
+            signal.alarm(0)         # shut off alarm
+        except Alarm:
+            self.fail("got Alarm in wrong place")
+        finally:
+            # no alarm can be pending.  Safe to restore old handler.
+            signal.signal(signal.SIGALRM, old_alarm)
+
 class UDPTimeoutTest(SocketTCPTest):
 
     def testUDPTimeout(self):
@@ -883,8 +937,8 @@
         self.serv_conn.send(buf)
 
 def test_main():
-    tests = [GeneralModuleTests, BasicTCPTest, TCPTimeoutTest, TestExceptions,
-             BufferIOTest]
+    tests = [GeneralModuleTests, BasicTCPTest, TCPCloserTest, TCPTimeoutTest,
+             TestExceptions, BufferIOTest]
     if sys.platform != 'mac':
         tests.extend([ BasicUDPTest, UDPTimeoutTest ])
 
@@ -899,7 +953,10 @@
         tests.append(BasicSocketPairTest)
     if sys.platform == 'linux2':
         tests.append(TestLinuxAbstractNamespace)
+
+    thread_info = test_support.threading_setup()
     test_support.run_unittest(*tests)
+    test_support.threading_cleanup(*thread_info)
 
 if __name__ == "__main__":
     test_main()
diff --git a/Lib/test/test_socket_ssl.py b/Lib/test/test_socket_ssl.py
index 1091383..3c9c9f0 100644
--- a/Lib/test/test_socket_ssl.py
+++ b/Lib/test/test_socket_ssl.py
@@ -3,6 +3,7 @@
 import sys
 from test import test_support
 import socket
+import errno
 
 # Optionally test SSL support.  This requires the 'network' resource as given
 # on the regrtest command line.
@@ -33,6 +34,13 @@
 def test_timeout():
     test_support.requires('network')
 
+    def error_msg(extra_msg):
+        print >> sys.stderr, """\
+    WARNING:  an attempt to connect to %r %s, in
+    test_timeout.  That may be legitimate, but is not the outcome we hoped
+    for.  If this message is seen often, test_timeout should be changed to
+    use a more reliable address.""" % (ADDR, extra_msg)
+
     if test_support.verbose:
         print "test_timeout ..."
 
@@ -48,12 +56,14 @@
     try:
         s.connect(ADDR)
     except socket.timeout:
-        print >> sys.stderr, """\
-    WARNING:  an attempt to connect to %r timed out, in
-    test_timeout.  That may be legitimate, but is not the outcome we hoped
-    for.  If this message is seen often, test_timeout should be changed to
-    use a more reliable address.""" % (ADDR,)
+        error_msg('timed out')
         return
+    except socket.error, exc:  # In case connection is refused.
+        if exc.args[0] == errno.ECONNREFUSED:
+            error_msg('was refused')
+            return
+        else:
+            raise
 
     ss = socket.ssl(s)
     # Read part of return welcome banner twice.
@@ -71,7 +81,7 @@
         return
 
     # Some random port to connect to.
-    PORT = 9934
+    PORT = [9934]
 
     listener_ready = threading.Event()
     listener_gone = threading.Event()
@@ -82,7 +92,7 @@
     # know the socket is gone.
     def listener():
         s = socket.socket()
-        s.bind(('', PORT))
+        PORT[0] = test_support.bind_port(s, '', PORT[0])
         s.listen(5)
         listener_ready.set()
         s.accept()
@@ -92,7 +102,7 @@
     def connector():
         listener_ready.wait()
         s = socket.socket()
-        s.connect(('localhost', PORT))
+        s.connect(('localhost', PORT[0]))
         listener_gone.wait()
         try:
             ssl_sock = socket.ssl(s)
diff --git a/Lib/test/test_socketserver.py b/Lib/test/test_socketserver.py
index 1245ba5..dd4532f 100644
--- a/Lib/test/test_socketserver.py
+++ b/Lib/test/test_socketserver.py
@@ -1,11 +1,13 @@
 # Test suite for SocketServer.py
 
 from test import test_support
-from test.test_support import verbose, verify, TESTFN, TestSkipped
+from test.test_support import (verbose, verify, TESTFN, TestSkipped,
+                               reap_children)
 test_support.requires('network')
 
 from SocketServer import *
 import socket
+import errno
 import select
 import time
 import threading
@@ -77,6 +79,11 @@
             pass
         if verbose: print "thread: creating server"
         svr = svrcls(self.__addr, self.__hdlrcls)
+        # pull the address out of the server in case it changed
+        # this can happen if another process is using the port
+        addr = getattr(svr, 'server_address', None)
+        if addr:
+            self.__addr = addr
         if verbose: print "thread: serving three times"
         svr.serve_a_few()
         if verbose: print "thread: done"
@@ -136,7 +143,25 @@
         t.join()
         if verbose: print "done"
 
-tcpservers = [TCPServer, ThreadingTCPServer]
+class ForgivingTCPServer(TCPServer):
+    # prevent errors if another process is using the port we want
+    def server_bind(self):
+        host, default_port = self.server_address
+        # this code shamelessly stolen from test.test_support
+        # the ports were changed to protect the innocent
+        import sys
+        for port in [default_port, 3434, 8798, 23833]:
+            try:
+                self.server_address = host, port
+                TCPServer.server_bind(self)
+                break
+            except socket.error, (err, msg):
+                if err != errno.EADDRINUSE:
+                    raise
+                print >>sys.__stderr__, \
+                    '  WARNING: failed to listen on port %d, trying another' % port
+
+tcpservers = [ForgivingTCPServer, ThreadingTCPServer]
 if hasattr(os, 'fork') and os.name not in ('os2',):
     tcpservers.append(ForkingTCPServer)
 udpservers = [UDPServer, ThreadingUDPServer]
@@ -175,6 +200,7 @@
         testall()
     finally:
         cleanup()
+    reap_children()
 
 if __name__ == "__main__":
     test_main()
diff --git a/Lib/test/test_struct.py b/Lib/test/test_struct.py
index aa458e6..66fd667 100644
--- a/Lib/test/test_struct.py
+++ b/Lib/test/test_struct.py
@@ -15,9 +15,11 @@
 except ImportError:
     PY_STRUCT_RANGE_CHECKING = 0
     PY_STRUCT_OVERFLOW_MASKING = 1
+    PY_STRUCT_FLOAT_COERCE = 2
 else:
-    PY_STRUCT_RANGE_CHECKING = _struct._PY_STRUCT_RANGE_CHECKING
-    PY_STRUCT_OVERFLOW_MASKING = _struct._PY_STRUCT_OVERFLOW_MASKING
+    PY_STRUCT_RANGE_CHECKING = getattr(_struct, '_PY_STRUCT_RANGE_CHECKING', 0)
+    PY_STRUCT_OVERFLOW_MASKING = getattr(_struct, '_PY_STRUCT_OVERFLOW_MASKING', 0)
+    PY_STRUCT_FLOAT_COERCE = getattr(_struct, '_PY_STRUCT_FLOAT_COERCE', 0)
 
 def string_reverse(s):
     return "".join(reversed(s))
@@ -46,33 +48,40 @@
         raise TestFailed, "%s%s did not raise error" % (
             func.__name__, args)
 
-def deprecated_err(func, *args):
-    # The `warnings` module doesn't have an advertised way to restore
-    # its filter list.  Cheat.
-    save_warnings_filters = warnings.filters[:]
-    # Grrr, we need this function to warn every time.  Without removing
-    # the warningregistry, running test_tarfile then test_struct would fail
-    # on 64-bit platforms.
-    globals = func.func_globals
-    if '__warningregistry__' in globals:
-        del globals['__warningregistry__']
-    warnings.filterwarnings("error", r"""^struct.*""", DeprecationWarning)
-    warnings.filterwarnings("error", r""".*format requires.*""",
-                            DeprecationWarning)
-    try:
+def with_warning_restore(func):
+    def _with_warning_restore(*args, **kw):
+        # The `warnings` module doesn't have an advertised way to restore
+        # its filter list.  Cheat.
+        save_warnings_filters = warnings.filters[:]
+        # Grrr, we need this function to warn every time.  Without removing
+        # the warningregistry, running test_tarfile then test_struct would fail
+        # on 64-bit platforms.
+        globals = func.func_globals
+        if '__warningregistry__' in globals:
+            del globals['__warningregistry__']
+        warnings.filterwarnings("error", r"""^struct.*""", DeprecationWarning)
+        warnings.filterwarnings("error", r""".*format requires.*""",
+                                DeprecationWarning)
         try:
-            func(*args)
-        except (struct.error, TypeError):
-            pass
-        except DeprecationWarning:
-            if not PY_STRUCT_OVERFLOW_MASKING:
-                raise TestFailed, "%s%s expected to raise struct.error" % (
-                    func.__name__, args)
-        else:
-            raise TestFailed, "%s%s did not raise error" % (
+            return func(*args, **kw)
+        finally:
+            warnings.filters[:] = save_warnings_filters[:]
+    return _with_warning_restore
+
+def deprecated_err(func, *args):
+    try:
+        func(*args)
+    except (struct.error, TypeError):
+        pass
+    except DeprecationWarning:
+        if not PY_STRUCT_OVERFLOW_MASKING:
+            raise TestFailed, "%s%s expected to raise struct.error" % (
                 func.__name__, args)
-    finally:
-        warnings.filters[:] = save_warnings_filters[:]
+    else:
+        raise TestFailed, "%s%s did not raise error" % (
+            func.__name__, args)
+deprecated_err = with_warning_restore(deprecated_err)
+
 
 simple_err(struct.calcsize, 'Z')
 
@@ -475,6 +484,9 @@
 
 test_705836()
 
+###########################################################################
+# SF bug 1229380. No struct.pack exception for some out of range integers
+
 def test_1229380():
     import sys
     for endian in ('', '>', '<'):
@@ -491,6 +503,37 @@
 if PY_STRUCT_RANGE_CHECKING:
     test_1229380()
 
+###########################################################################
+# SF bug 1530559. struct.pack raises TypeError where it used to convert.
+
+def check_float_coerce(format, number):
+    if PY_STRUCT_FLOAT_COERCE == 2:
+        # Test for pre-2.5 struct module
+        packed = struct.pack(format, number)
+        floored = struct.unpack(format, packed)[0]
+        if floored != int(number):
+            raise TestFailed("did not correctly coerce float to int")
+        return
+    try:
+        struct.pack(format, number)
+    except (struct.error, TypeError):
+        if PY_STRUCT_FLOAT_COERCE:
+            raise TestFailed("expected DeprecationWarning for float coerce")
+    except DeprecationWarning:
+        if not PY_STRUCT_FLOAT_COERCE:
+            raise TestFailed("expected to raise struct.error for float coerce")
+    else:
+        raise TestFailed("did not raise error for float coerce")
+
+check_float_coerce = with_warning_restore(check_float_coerce)
+
+def test_1530559():
+    for endian in ('', '>', '<'):
+        for fmt in ('B', 'H', 'I', 'L', 'b', 'h', 'i', 'l'):
+            check_float_coerce(endian + fmt, 1.0)
+            check_float_coerce(endian + fmt, 1.5)
+
+test_1530559()
 
 ###########################################################################
 # Packing and unpacking to/from buffers.
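
As an aside, the save/restore dance around warnings.filters that with_warning_restore() performs is a reusable pattern.  A minimal standalone sketch of the same idea (the names restore_warning_filters and provoke are illustrative only, not part of the patch):

    import warnings

    def restore_warning_filters(func):
        # Run func under a temporary warnings filter, then put the
        # original filter list back no matter what happened.
        def wrapper(*args, **kw):
            saved = warnings.filters[:]
            warnings.filterwarnings("error", category=DeprecationWarning)
            try:
                return func(*args, **kw)
            finally:
                warnings.filters[:] = saved
        return wrapper

    @restore_warning_filters
    def provoke():
        warnings.warn("going away", DeprecationWarning)

    try:
        provoke()
    except DeprecationWarning, e:
        print "caught as an error:", e
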
diff --git a/Lib/test/test_subprocess.py b/Lib/test/test_subprocess.py
index edf5bd0..8c8ac40 100644
--- a/Lib/test/test_subprocess.py
+++ b/Lib/test/test_subprocess.py
@@ -27,6 +27,18 @@
     return re.sub(r"\[\d+ refs\]\r?\n?$", "", stderr)
 
 class ProcessTestCase(unittest.TestCase):
+    def setUp(self):
+        # Try to minimize the number of children we have so this test
+        # doesn't crash on some buildbots (Alphas in particular).
+        if hasattr(test_support, "reap_children"):
+            test_support.reap_children()
+
+    def tearDown(self):
+        # Try to minimize the number of children we have so this test
+        # doesn't crash on some buildbots (Alphas in particular).
+        if hasattr(test_support, "reap_children"):
+            test_support.reap_children()
+
     def mkstemp(self):
         """wrapper for mkstemp, calling mktemp if mkstemp is not available"""
         if hasattr(tempfile, "mkstemp"):
@@ -56,7 +68,7 @@
             subprocess.check_call([sys.executable, "-c",
                                    "import sys; sys.exit(47)"])
         except subprocess.CalledProcessError, e:
-            self.assertEqual(e.errno, 47)
+            self.assertEqual(e.returncode, 47)
         else:
             self.fail("Expected CalledProcessError")
 
@@ -384,7 +396,8 @@
 
     def test_no_leaking(self):
         # Make sure we leak no resources
-        if test_support.is_resource_enabled("subprocess") and not mswindows:
+        if not hasattr(test_support, "is_resource_enabled") \
+               or test_support.is_resource_enabled("subprocess") and not mswindows:
             max_handles = 1026 # too much for most UNIX systems
         else:
             max_handles = 65
@@ -463,10 +476,36 @@
             else:
                 self.fail("Expected OSError")
 
+        def _suppress_core_files(self):
+            """Try to prevent core files from being created.
+            Returns previous ulimit if successful, else None.
+            """
+            try:
+                import resource
+                old_limit = resource.getrlimit(resource.RLIMIT_CORE)
+                resource.setrlimit(resource.RLIMIT_CORE, (0,0))
+                return old_limit
+            except (ImportError, ValueError, resource.error):
+                return None
+
+        def _unsuppress_core_files(self, old_limit):
+            """Return core file behavior to default."""
+            if old_limit is None:
+                return
+            try:
+                import resource
+                resource.setrlimit(resource.RLIMIT_CORE, old_limit)
+            except (ImportError, ValueError, resource.error):
+                return
+
         def test_run_abort(self):
             # returncode handles signal termination
-            p = subprocess.Popen([sys.executable,
-                                  "-c", "import os; os.abort()"])
+            old_limit = self._suppress_core_files()
+            try:
+                p = subprocess.Popen([sys.executable,
+                                      "-c", "import os; os.abort()"])
+            finally:
+                self._unsuppress_core_files(old_limit)
             p.wait()
             self.assertEqual(-p.returncode, signal.SIGABRT)
 
@@ -599,6 +638,8 @@
 
 def test_main():
     test_support.run_unittest(ProcessTestCase)
+    if hasattr(test_support, "reap_children"):
+        test_support.reap_children()
 
 if __name__ == "__main__":
     test_main()
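
For readers unfamiliar with the resource module that _suppress_core_files() relies on: on POSIX systems the core-dump size limit is a (soft, hard) pair that can be saved, zeroed and restored.  A minimal sketch of the underlying calls (illustrative only, not part of the patch):

    import resource

    # Save the current (soft, hard) limit on core file size.
    old_limit = resource.getrlimit(resource.RLIMIT_CORE)

    # Forbid core dumps for this process and its children.
    resource.setrlimit(resource.RLIMIT_CORE, (0, 0))
    try:
        pass    # run something that may call os.abort()
    finally:
        # Put the original limit back.
        resource.setrlimit(resource.RLIMIT_CORE, old_limit)
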
diff --git a/Lib/test/test_support.py b/Lib/test/test_support.py
index 2d08f4d..a9d5dab 100644
--- a/Lib/test/test_support.py
+++ b/Lib/test/test_support.py
@@ -89,6 +89,24 @@
             msg = "Use of the `%s' resource not enabled" % resource
         raise ResourceDenied(msg)
 
+def bind_port(sock, host='', preferred_port=54321):
+    """Try to bind the sock to a port.  If we are running multiple
+    tests and we don't try multiple ports, the test can fail.  This
+    makes the test more robust."""
+
+    import socket, errno
+    # some random ports that hopefully no one is listening on.
+    for port in [preferred_port, 9907, 10243, 32999]:
+        try:
+            sock.bind((host, port))
+            return port
+        except socket.error, (err, msg):
+            if err != errno.EADDRINUSE:
+                raise
+            print >>sys.__stderr__, \
+                '  WARNING: failed to listen on port %d, trying another' % port
+    raise TestFailed, 'unable to find port to listen on'
+
 FUZZ = 1e-6
 
 def fcmp(x, y): # fuzzy comparison function
@@ -296,6 +314,12 @@
 _1G = 1024 * _1M
 _2G = 2 * _1G
 
+# Hack to get at the maximum value an internal index can take.
+class _Dummy:
+    def __getslice__(self, i, j):
+        return j
+MAX_Py_ssize_t = _Dummy()[:]
+
 def set_memlimit(limit):
     import re
     global max_memuse
@@ -310,7 +334,9 @@
     if m is None:
         raise ValueError('Invalid memory limit %r' % (limit,))
     memlimit = int(float(m.group(1)) * sizes[m.group(3).lower()])
-    if memlimit < 2.5*_1G:
+    if memlimit > MAX_Py_ssize_t:
+        memlimit = MAX_Py_ssize_t
+    if memlimit < _2G - 1:
         raise ValueError('Memory limit %r too low to be useful' % (limit,))
     max_memuse = memlimit
 
@@ -353,6 +379,17 @@
         return wrapper
     return decorator
 
+def bigaddrspacetest(f):
+    """Decorator for tests that fill the address space."""
+    def wrapper(self):
+        if max_memuse < MAX_Py_ssize_t:
+            if verbose:
+                sys.stderr.write("Skipping %s because of memory "
+                                 "constraint\n" % (f.__name__,))
+        else:
+            return f(self)
+    return wrapper
+
 #=======================================================================
 # Preliminary PyUNIT integration.
 
@@ -435,3 +472,46 @@
     if verbose:
         print 'doctest (%s) ... %d tests with zero failures' % (module.__name__, t)
     return f, t
+
+#=======================================================================
+# Threading support to prevent reporting refleaks when running regrtest.py -R
+
+def threading_setup():
+    import threading
+    return len(threading._active), len(threading._limbo)
+
+def threading_cleanup(num_active, num_limbo):
+    import threading
+    import time
+
+    _MAX_COUNT = 10
+    count = 0
+    while len(threading._active) != num_active and count < _MAX_COUNT:
+        count += 1
+        time.sleep(0.1)
+
+    count = 0
+    while len(threading._limbo) != num_limbo and count < _MAX_COUNT:
+        count += 1
+        time.sleep(0.1)
+
+def reap_children():
+    """Use this function at the end of test_main() whenever sub-processes
+    are started.  This will help ensure that no extra children (zombies)
+    stick around to hog resources and create problems when looking
+    for refleaks.
+    """
+
+    # Reap all our dead child processes so we don't leave zombies around.
+    # These hog resources and might be causing some of the buildbots to die.
+    import os
+    if hasattr(os, 'waitpid'):
+        any_process = -1
+        while True:
+            try:
+                # This will raise an exception on Windows.  That's ok.
+                pid, status = os.waitpid(any_process, os.WNOHANG)
+                if pid == 0:
+                    break
+            except:
+                break
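
The new reap_children() helper boils down to a non-blocking waitpid loop.  A standalone sketch of the idiom (POSIX only; reap_zombies is a hypothetical name, not part of the patch):

    import os

    def reap_zombies():
        # Collect any already-exited children without blocking.
        if not (hasattr(os, 'waitpid') and hasattr(os, 'WNOHANG')):
            return
        while True:
            try:
                pid, status = os.waitpid(-1, os.WNOHANG)
            except OSError:
                break          # no children left to wait for
            if pid == 0:
                break          # children exist, but none have exited yet
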
diff --git a/Lib/test/test_sys.py b/Lib/test/test_sys.py
index ae2a1c8..f1f1524 100644
--- a/Lib/test/test_sys.py
+++ b/Lib/test/test_sys.py
@@ -237,6 +237,90 @@
             is sys._getframe().f_code
         )
 
+    # sys._current_frames() is a CPython-only gimmick.
+    def test_current_frames(self):
+        have_threads = True
+        try:
+            import thread
+        except ImportError:
+            have_threads = False
+
+        if have_threads:
+            self.current_frames_with_threads()
+        else:
+            self.current_frames_without_threads()
+
+    # Test sys._current_frames() in a WITH_THREADS build.
+    def current_frames_with_threads(self):
+        import threading, thread
+        import traceback
+
+        # Spawn a thread that blocks at a known place.  Then the main
+        # thread does sys._current_frames(), and verifies that the frames
+        # returned make sense.
+        entered_g = threading.Event()
+        leave_g = threading.Event()
+        thread_info = []  # the thread's id
+
+        def f123():
+            g456()
+
+        def g456():
+            thread_info.append(thread.get_ident())
+            entered_g.set()
+            leave_g.wait()
+
+        t = threading.Thread(target=f123)
+        t.start()
+        entered_g.wait()
+
+        # At this point, t has finished its entered_g.set(), although it's
+        # impossible to guess whether it's still on that line or has moved on
+        # to its leave_g.wait().
+        self.assertEqual(len(thread_info), 1)
+        thread_id = thread_info[0]
+
+        d = sys._current_frames()
+
+        main_id = thread.get_ident()
+        self.assert_(main_id in d)
+        self.assert_(thread_id in d)
+
+        # Verify that the captured main-thread frame is _this_ frame.
+        frame = d.pop(main_id)
+        self.assert_(frame is sys._getframe())
+
+        # Verify that the captured thread frame is blocked in g456, called
+        # from f123.  This is a little tricky, since various bits of
+        # threading.py are also in the thread's call stack.
+        frame = d.pop(thread_id)
+        stack = traceback.extract_stack(frame)
+        for i, (filename, lineno, funcname, sourceline) in enumerate(stack):
+            if funcname == "f123":
+                break
+        else:
+            self.fail("didn't find f123() on thread's call stack")
+
+        self.assertEqual(sourceline, "g456()")
+
+        # And the next record must be for g456().
+        filename, lineno, funcname, sourceline = stack[i+1]
+        self.assertEqual(funcname, "g456")
+        self.assert_(sourceline in ["leave_g.wait()", "entered_g.set()"])
+
+        # Reap the spawned thread.
+        leave_g.set()
+        t.join()
+
+    # Test sys._current_frames() when thread support doesn't exist.
+    def current_frames_without_threads(self):
+        # Not much happens here:  there is only one thread, with artificial
+        # "thread id" 0.
+        d = sys._current_frames()
+        self.assertEqual(len(d), 1)
+        self.assert_(0 in d)
+        self.assert_(d[0] is sys._getframe())
+
     def test_attributes(self):
         self.assert_(isinstance(sys.api_version, int))
         self.assert_(isinstance(sys.argv, list))
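
To make the shape of the new sys._current_frames() API concrete: it returns a dict mapping each thread's id to the frame currently executing in that thread.  A single-threaded sketch (CPython-specific, illustrative only):

    import sys, thread

    frames = sys._current_frames()      # {thread_id: topmost frame}
    me = thread.get_ident()
    f = frames[me]                      # the calling thread is always present
    print "thread %d is at %s:%d" % (me, f.f_code.co_filename, f.f_lineno)
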
diff --git a/Lib/test/test_tcl.py b/Lib/test/test_tcl.py
index e3fbf98..fa170ef 100644
--- a/Lib/test/test_tcl.py
+++ b/Lib/test/test_tcl.py
@@ -130,10 +130,8 @@
         import os
         old_display = None
         import sys
-        if (sys.platform.startswith('win') or
-                sys.platform.startswith('darwin') or
-                sys.platform.startswith('cygwin')):
-            return # no failure possible on windows?
+        if sys.platform.startswith(('win', 'darwin', 'cygwin')):
+            return  # no failure possible on windows?
         if 'DISPLAY' in os.environ:
             old_display = os.environ['DISPLAY']
             del os.environ['DISPLAY']
diff --git a/Lib/test/test_textwrap.py b/Lib/test/test_textwrap.py
index 68e4d6d..500eceb 100644
--- a/Lib/test/test_textwrap.py
+++ b/Lib/test/test_textwrap.py
@@ -460,38 +460,42 @@
 # of IndentTestCase!
 class DedentTestCase(unittest.TestCase):
 
+    def assertUnchanged(self, text):
+        """assert that dedent() has no effect on 'text'"""
+        self.assertEquals(text, dedent(text))
+
     def test_dedent_nomargin(self):
         # No lines indented.
         text = "Hello there.\nHow are you?\nOh good, I'm glad."
-        self.assertEquals(dedent(text), text)
+        self.assertUnchanged(text)
 
         # Similar, with a blank line.
         text = "Hello there.\n\nBoo!"
-        self.assertEquals(dedent(text), text)
+        self.assertUnchanged(text)
 
         # Some lines indented, but overall margin is still zero.
         text = "Hello there.\n  This is indented."
-        self.assertEquals(dedent(text), text)
+        self.assertUnchanged(text)
 
         # Again, add a blank line.
         text = "Hello there.\n\n  Boo!\n"
-        self.assertEquals(dedent(text), text)
+        self.assertUnchanged(text)
 
     def test_dedent_even(self):
         # All lines indented by two spaces.
         text = "  Hello there.\n  How are ya?\n  Oh good."
         expect = "Hello there.\nHow are ya?\nOh good."
-        self.assertEquals(dedent(text), expect)
+        self.assertEquals(expect, dedent(text))
 
         # Same, with blank lines.
         text = "  Hello there.\n\n  How are ya?\n  Oh good.\n"
         expect = "Hello there.\n\nHow are ya?\nOh good.\n"
-        self.assertEquals(dedent(text), expect)
+        self.assertEquals(expect, dedent(text))
 
         # Now indent one of the blank lines.
         text = "  Hello there.\n  \n  How are ya?\n  Oh good.\n"
         expect = "Hello there.\n\nHow are ya?\nOh good.\n"
-        self.assertEquals(dedent(text), expect)
+        self.assertEquals(expect, dedent(text))
 
     def test_dedent_uneven(self):
         # Lines indented unevenly.
@@ -505,18 +509,53 @@
     while 1:
         return foo
 '''
-        self.assertEquals(dedent(text), expect)
+        self.assertEquals(expect, dedent(text))
 
         # Uneven indentation with a blank line.
         text = "  Foo\n    Bar\n\n   Baz\n"
         expect = "Foo\n  Bar\n\n Baz\n"
-        self.assertEquals(dedent(text), expect)
+        self.assertEquals(expect, dedent(text))
 
         # Uneven indentation with a whitespace-only line.
         text = "  Foo\n    Bar\n \n   Baz\n"
         expect = "Foo\n  Bar\n\n Baz\n"
-        self.assertEquals(dedent(text), expect)
+        self.assertEquals(expect, dedent(text))
 
+    # dedent() should not mangle internal tabs
+    def test_dedent_preserve_internal_tabs(self):
+        text = "  hello\tthere\n  how are\tyou?"
+        expect = "hello\tthere\nhow are\tyou?"
+        self.assertEquals(expect, dedent(text))
+
+        # make sure that it preserves tabs when it's not making any
+        # changes at all
+        self.assertEquals(expect, dedent(expect))
+
+    # dedent() should not mangle tabs in the margin (i.e.
+    # tabs and spaces both count as margin, but are *not*
+    # considered equivalent)
+    def test_dedent_preserve_margin_tabs(self):
+        text = "  hello there\n\thow are you?"
+        self.assertUnchanged(text)
+
+        # same effect even if we have 8 spaces
+        text = "        hello there\n\thow are you?"
+        self.assertUnchanged(text)
+
+        # dedent() only removes whitespace that can be uniformly removed!
+        text = "\thello there\n\thow are you?"
+        expect = "hello there\nhow are you?"
+        self.assertEquals(expect, dedent(text))
+
+        text = "  \thello there\n  \thow are you?"
+        self.assertEquals(expect, dedent(text))
+
+        text = "  \t  hello there\n  \t  how are you?"
+        self.assertEquals(expect, dedent(text))
+
+        text = "  \thello there\n  \t  how are you?"
+        expect = "hello there\n  how are you?"
+        self.assertEquals(expect, dedent(text))
 
 
 def test_main():
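
The new tab tests encode the rule that textwrap.dedent() only strips whitespace common to every line, comparing tabs and spaces literally rather than expanding them.  For example, matching the assertions above:

    from textwrap import dedent

    # The common "  \t" prefix is removed; the extra two spaces on the
    # second line belong to that line alone and survive.
    print repr(dedent("  \thello there\n  \t  how are you?"))
    # -> 'hello there\n  how are you?'

    # A tab and eight spaces are *not* equivalent, so nothing can be
    # removed uniformly here and the text comes back unchanged.
    print repr(dedent("        hello there\n\thow are you?"))
    # -> '        hello there\n\thow are you?'
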
diff --git a/Lib/test/test_thread.py b/Lib/test/test_thread.py
index ea345b6..c4c21fe 100644
--- a/Lib/test/test_thread.py
+++ b/Lib/test/test_thread.py
@@ -115,3 +115,46 @@
     thread.start_new_thread(task2, (i,))
 done.acquire()
 print 'all tasks done'
+
+# not all platforms support changing thread stack size
+print '\n*** Changing thread stack size ***'
+if thread.stack_size() != 0:
+    raise ValueError, "initial stack_size not 0"
+
+thread.stack_size(0)
+if thread.stack_size() != 0:
+    raise ValueError, "stack_size not reset to default"
+
+from os import name as os_name
+if os_name in ("nt", "os2", "posix"):
+
+    tss_supported = 1
+    try:
+        thread.stack_size(4096)
+    except ValueError:
+        print 'caught expected ValueError setting stack_size(4096)'
+    except thread.error:
+        tss_supported = 0
+        print 'platform does not support changing thread stack size'
+
+    if tss_supported:
+        failed = lambda s, e: s != e
+        fail_msg = "stack_size(%d) failed - should succeed"
+        for tss in (262144, 0x100000, 0):
+            thread.stack_size(tss)
+            if failed(thread.stack_size(), tss):
+                raise ValueError, fail_msg % tss
+            print 'successfully set stack_size(%d)' % tss
+
+        for tss in (262144, 0x100000):
+            print 'trying stack_size = %d' % tss
+            next_ident = 0
+            for i in range(numtasks):
+                newtask()
+
+            print 'waiting for all tasks to complete'
+            done.acquire()
+            print 'all tasks done'
+
+        # reset stack size to default
+        thread.stack_size(0)
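
The stack-size code above follows the contract of the new thread.stack_size() API: pass 0 to restore the platform default, expect ValueError for sizes the interpreter rejects, and thread.error on platforms that cannot change stack sizes at all.  A compact sketch (try_stack_size is a hypothetical helper, not part of the patch):

    import thread

    def try_stack_size(nbytes):
        try:
            thread.stack_size(nbytes)   # affects threads created afterwards
        except ValueError:
            print "%d rejected as invalid/too small" % nbytes
        except thread.error:
            print "platform cannot change thread stack size"
        else:
            print "new threads get %d byte stacks" % thread.stack_size()
            thread.stack_size(0)        # back to the platform default

    try_stack_size(0x100000)            # 1 MB
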
diff --git a/Lib/test/test_threaded_import.py b/Lib/test/test_threaded_import.py
index 0642d25..602ad2a 100644
--- a/Lib/test/test_threaded_import.py
+++ b/Lib/test/test_threaded_import.py
@@ -30,11 +30,10 @@
     if verbose:
         print "testing import hangers ...",
 
-    from test import threaded_import_hangers
-
+    import test.threaded_import_hangers
     try:
-        if threaded_import_hangers.errors:
-            raise TestFailed(threaded_import_hangers.errors)
+        if test.threaded_import_hangers.errors:
+            raise TestFailed(test.threaded_import_hangers.errors)
         elif verbose:
             print "OK."
     finally:
diff --git a/Lib/test/test_threadedtempfile.py b/Lib/test/test_threadedtempfile.py
index 459ba3a..974333b 100644
--- a/Lib/test/test_threadedtempfile.py
+++ b/Lib/test/test_threadedtempfile.py
@@ -22,7 +22,7 @@
 
 import thread # If this fails, we can't test this module
 import threading
-from test.test_support import TestFailed
+from test.test_support import TestFailed, threading_setup, threading_cleanup
 import StringIO
 from traceback import print_exc
 import tempfile
@@ -48,6 +48,7 @@
 
 def test_main():
     threads = []
+    thread_info = threading_setup()
 
     print "Creating"
     for i in range(NUM_THREADS):
@@ -72,6 +73,7 @@
     if errors:
         raise TestFailed(msg)
 
+    threading_cleanup(*thread_info)
 
 if __name__ == "__main__":
     import sys, getopt
diff --git a/Lib/test/test_threading.py b/Lib/test/test_threading.py
index 7eb9758..79335ea 100644
--- a/Lib/test/test_threading.py
+++ b/Lib/test/test_threading.py
@@ -85,6 +85,32 @@
             print 'all tasks done'
         self.assertEqual(numrunning.get(), 0)
 
+    # run with a small(ish) thread stack size (256kB)
+    def test_various_ops_small_stack(self):
+        if verbose:
+            print 'with 256kB thread stack size...'
+        try:
+            threading.stack_size(262144)
+        except thread.error:
+            if verbose:
+                print 'platform does not support changing thread stack size'
+            return
+        self.test_various_ops()
+        threading.stack_size(0)
+
+    # run with a large thread stack size (1MB)
+    def test_various_ops_large_stack(self):
+        if verbose:
+            print 'with 1MB thread stack size...'
+        try:
+            threading.stack_size(0x100000)
+        except thread.error:
+            if verbose:
+                print 'platform does not support changing thread stack size'
+            return
+        self.test_various_ops()
+        threading.stack_size(0)
+
     def test_foreign_thread(self):
         # Check that a "foreign" thread can use the threading module.
         def f(mutex):
diff --git a/Lib/test/test_time.py b/Lib/test/test_time.py
index 768e7a0..f4be759 100644
--- a/Lib/test/test_time.py
+++ b/Lib/test/test_time.py
@@ -39,9 +39,9 @@
 
     def test_strftime_bounds_checking(self):
         # Make sure that strftime() checks the bounds of the various parts
-        #of the time tuple.
+        # of the time tuple (0 is valid for *all* values).
 
-        # Check year
+        # Check year [1900, max(int)]
         self.assertRaises(ValueError, time.strftime, '',
                             (1899, 1, 1, 0, 0, 0, 0, 1, -1))
         if time.accept2dyear:
@@ -49,27 +49,27 @@
                                 (-1, 1, 1, 0, 0, 0, 0, 1, -1))
             self.assertRaises(ValueError, time.strftime, '',
                                 (100, 1, 1, 0, 0, 0, 0, 1, -1))
-        # Check month
+        # Check month [1, 12] + zero support
         self.assertRaises(ValueError, time.strftime, '',
-                            (1900, 0, 1, 0, 0, 0, 0, 1, -1))
+                            (1900, -1, 1, 0, 0, 0, 0, 1, -1))
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 13, 1, 0, 0, 0, 0, 1, -1))
-        # Check day of month
+        # Check day of month [1, 31] + zero support
         self.assertRaises(ValueError, time.strftime, '',
-                            (1900, 1, 0, 0, 0, 0, 0, 1, -1))
+                            (1900, 1, -1, 0, 0, 0, 0, 1, -1))
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 32, 0, 0, 0, 0, 1, -1))
-        # Check hour
+        # Check hour [0, 23]
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, -1, 0, 0, 0, 1, -1))
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 24, 0, 0, 0, 1, -1))
-        # Check minute
+        # Check minute [0, 59]
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 0, -1, 0, 0, 1, -1))
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 0, 60, 0, 0, 1, -1))
-        # Check second
+        # Check second [0, 61]
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 0, 0, -1, 0, 1, -1))
         # C99 only requires allowing for one leap second, but Python's docs say
@@ -82,17 +82,25 @@
         #  modulo.
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 0, 0, 0, -2, 1, -1))
-        # Check day of the year
+        # Check day of the year [1, 366] + zero support
         self.assertRaises(ValueError, time.strftime, '',
-                            (1900, 1, 1, 0, 0, 0, 0, 0, -1))
+                            (1900, 1, 1, 0, 0, 0, 0, -1, -1))
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 0, 0, 0, 0, 367, -1))
-        # Check daylight savings flag
+        # Check daylight savings flag [-1, 1]
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 0, 0, 0, 0, 1, -2))
         self.assertRaises(ValueError, time.strftime, '',
                             (1900, 1, 1, 0, 0, 0, 0, 1, 2))
 
+    def test_default_values_for_zero(self):
+        # Make sure that using all zeros uses the proper default values.
+        # No test for daylight savings since strftime() does not change output
+        # based on its value.
+        expected = "2000 01 01 00 00 00 1 001"
+        result = time.strftime("%Y %m %d %H %M %S %w %j", (0,)*9)
+        self.assertEquals(expected, result)
+
     def test_strptime(self):
         tt = time.gmtime(self.t)
         for directive in ('a', 'A', 'b', 'B', 'c', 'd', 'H', 'I',
@@ -193,13 +201,17 @@
         time.ctime(None)
 
     def test_gmtime_without_arg(self):
-        t0 = time.mktime(time.gmtime())
-        t1 = time.mktime(time.gmtime(None))
+        gt0 = time.gmtime()
+        gt1 = time.gmtime(None)
+        t0 = time.mktime(gt0)
+        t1 = time.mktime(gt1)
         self.assert_(0 <= (t1-t0) < 0.2)
 
     def test_localtime_without_arg(self):
-        t0 = time.mktime(time.localtime())
-        t1 = time.mktime(time.localtime(None))
+        lt0 = time.localtime()
+        lt1 = time.localtime(None)
+        t0 = time.mktime(lt0)
+        t1 = time.mktime(lt1)
         self.assert_(0 <= (t1-t0) < 0.2)
 
 def test_main():
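
The new test_default_values_for_zero encodes the convention (new in this change) that an all-zero time tuple is accepted by strftime() and filled in with defaults rather than rejected:

    import time

    # Zeros for month, day and day-of-year map to their defaults.
    print time.strftime("%Y %m %d %H %M %S %w %j", (0,) * 9)
    # -> 2000 01 01 00 00 00 1 001
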
diff --git a/Lib/test/test_timeout.py b/Lib/test/test_timeout.py
index 4309e8c..2b32b92 100644
--- a/Lib/test/test_timeout.py
+++ b/Lib/test/test_timeout.py
@@ -100,7 +100,7 @@
 
     def setUp(self):
         self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-        self.addr_remote = ('www.python.org', 80)
+        self.addr_remote = ('www.python.org.', 80)
         self.addr_local  = ('127.0.0.1', 25339)
 
     def tearDown(self):
diff --git a/Lib/test/test_trace.py b/Lib/test/test_trace.py
index 4f946f7..08aec8e 100644
--- a/Lib/test/test_trace.py
+++ b/Lib/test/test_trace.py
@@ -244,8 +244,8 @@
         self.run_test(one_instr_line)
     def test_04_no_pop_blocks(self):
         self.run_test(no_pop_blocks)
-##    def test_05_no_pop_tops(self):
-##        self.run_test(no_pop_tops)
+    def test_05_no_pop_tops(self):
+        self.run_test(no_pop_tops)
     def test_06_call(self):
         self.run_test(call)
     def test_07_raise(self):
diff --git a/Lib/test/test_traceback.py b/Lib/test/test_traceback.py
index 1b59f98..b3c5a50 100644
--- a/Lib/test/test_traceback.py
+++ b/Lib/test/test_traceback.py
@@ -31,8 +31,9 @@
         err = self.get_exception_format(self.syntax_error_with_caret,
                                         SyntaxError)
         self.assert_(len(err) == 4)
-        self.assert_("^" in err[2]) # third line has caret
         self.assert_(err[1].strip() == "return x!")
+        self.assert_("^" in err[2]) # third line has caret
+        self.assert_(err[1].find("!") == err[2].find("^")) # in the right place
 
     def test_nocaret(self):
         if is_jython:
@@ -47,8 +48,9 @@
         err = self.get_exception_format(self.syntax_error_bad_indentation,
                                         IndentationError)
         self.assert_(len(err) == 4)
-        self.assert_("^" in err[2])
         self.assert_(err[1].strip() == "print 2")
+        self.assert_("^" in err[2])
+        self.assert_(err[1].find("2") == err[2].find("^"))
 
     def test_bug737473(self):
         import sys, os, tempfile, time
@@ -109,6 +111,45 @@
         lst = traceback.format_exception_only(e.__class__, e)
         self.assertEqual(lst, ['KeyboardInterrupt\n'])
 
+    # String exceptions are deprecated, but legal.  The quirky form with
+    # separate "type" and "value" tends to break things, because
+    #     not isinstance(value, type)
+    # and a string cannot be the first argument to issubclass.
+    #
+    # Note that sys.last_type and sys.last_value do not get set if an
+    # exception is caught, so we sort of cheat and just emulate them.
+    #
+    # test_string_exception1 is equivalent to
+    #
+    # >>> raise "String Exception"
+    #
+    # test_string_exception2 is equivalent to
+    #
+    # >>> raise "String Exception", "String Value"
+    #
+    def test_string_exception1(self):
+        str_type = "String Exception"
+        err = traceback.format_exception_only(str_type, None)
+        self.assertEqual(len(err), 1)
+        self.assertEqual(err[0], str_type + '\n')
+
+    def test_string_exception2(self):
+        str_type = "String Exception"
+        str_value = "String Value"
+        err = traceback.format_exception_only(str_type, str_value)
+        self.assertEqual(len(err), 1)
+        self.assertEqual(err[0], str_type + ': ' + str_value + '\n')
+
+    def test_format_exception_only_bad__str__(self):
+        class X(Exception):
+            def __str__(self):
+                1/0
+        err = traceback.format_exception_only(X, X())
+        self.assertEqual(len(err), 1)
+        str_value = '<unprintable %s object>' % X.__name__
+        self.assertEqual(err[0], X.__name__ + ': ' + str_value + '\n')
+
+
 def test_main():
     run_unittest(TracebackCases)
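
For context on the new string-exception and bad-__str__ tests: traceback.format_exception_only() returns a list of strings making up the final line(s) of a report.  With the fix being tested above, the two common cases look like this (Python 2 semantics; illustrative only):

    import traceback

    # Ordinary exception: "Type: value\n".
    print traceback.format_exception_only(ValueError, ValueError("boom"))
    # -> ['ValueError: boom\n']

    # Deprecated string exception with no value: just the string itself.
    print traceback.format_exception_only("String Exception", None)
    # -> ['String Exception\n']
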
 
diff --git a/Lib/test/test_types.py b/Lib/test/test_types.py
index c575c0c..2d299c3 100644
--- a/Lib/test/test_types.py
+++ b/Lib/test/test_types.py
@@ -233,6 +233,7 @@
 try: buffer('asdf', -1)
 except ValueError: pass
 else: raise TestFailed, "buffer('asdf', -1) should raise ValueError"
+cmp(buffer("abc"), buffer("def")) # used to raise a warning: tp_compare didn't return -1, 0, or 1
 
 try: buffer(None)
 except TypeError: pass
@@ -276,3 +277,10 @@
 try: a[0:1] = 'g'
 except TypeError: pass
 else: raise TestFailed, "buffer slice assignment should raise TypeError"
+
+# array.array() returns an object that does not implement a char buffer,
+# something which int() uses for conversion.
+import array
+try: int(buffer(array.array('c')))
+except TypeError: pass
+else: raise TestFailed, "char buffer (at C level) not working"
diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py
index 034b9d0..67218b8 100644
--- a/Lib/test/test_urllib2.py
+++ b/Lib/test/test_urllib2.py
@@ -676,11 +676,11 @@
             r = MockResponse(200, "OK", {}, "")
             newreq = h.do_request_(req)
             if data is None:  # GET
-                self.assert_("Content-length" not in req.unredirected_hdrs)
-                self.assert_("Content-type" not in req.unredirected_hdrs)
+                self.assert_("Content-Length" not in req.unredirected_hdrs)
+                self.assert_("Content-Type" not in req.unredirected_hdrs)
             else:  # POST
-                self.assertEqual(req.unredirected_hdrs["Content-length"], "0")
-                self.assertEqual(req.unredirected_hdrs["Content-type"],
+                self.assertEqual(req.unredirected_hdrs["Content-Length"], "0")
+                self.assertEqual(req.unredirected_hdrs["Content-Type"],
                              "application/x-www-form-urlencoded")
             # XXX the details of Host could be better tested
             self.assertEqual(req.unredirected_hdrs["Host"], "example.com")
@@ -692,8 +692,8 @@
             req.add_unredirected_header("Host", "baz")
             req.add_unredirected_header("Spam", "foo")
             newreq = h.do_request_(req)
-            self.assertEqual(req.unredirected_hdrs["Content-length"], "foo")
-            self.assertEqual(req.unredirected_hdrs["Content-type"], "bar")
+            self.assertEqual(req.unredirected_hdrs["Content-Length"], "foo")
+            self.assertEqual(req.unredirected_hdrs["Content-Type"], "bar")
             self.assertEqual(req.unredirected_hdrs["Host"], "baz")
             self.assertEqual(req.unredirected_hdrs["Spam"], "foo")
 
@@ -847,7 +847,7 @@
             407, 'Proxy-Authenticate: Basic realm="%s"\r\n\r\n' % realm)
         opener.add_handler(auth_handler)
         opener.add_handler(http_handler)
-        self._test_basic_auth(opener, auth_handler, "Proxy-authorization",
+        self._test_basic_auth(opener, auth_handler, "Proxy-Authorization",
                               realm, http_handler, password_manager,
                               "http://acme.example.com:3128/protected",
                               "proxy.example.com:3128",
diff --git a/Lib/test/test_urllib2net.py b/Lib/test/test_urllib2net.py
index dc3d36d..00cf202 100644
--- a/Lib/test/test_urllib2net.py
+++ b/Lib/test/test_urllib2net.py
@@ -123,7 +123,7 @@
                           # domain will be spared to serve its defined
                           # purpose.
                           # urllib2.urlopen, "http://www.sadflkjsasadf.com/")
-                          urllib2.urlopen, "http://www.python.invalid/")
+                          urllib2.urlopen, "http://www.python.invalid./")
 
 
 class OtherNetworkTests(unittest.TestCase):
@@ -160,8 +160,8 @@
                                 "urllib2$")
         urls = [
             # Thanks to Fred for finding these!
-            'gopher://gopher.lib.ncsu.edu/11/library/stacks/Alex',
-            'gopher://gopher.vt.edu:10010/10/33',
+            'gopher://gopher.lib.ncsu.edu./11/library/stacks/Alex',
+            'gopher://gopher.vt.edu.:10010/10/33',
             ]
         self._test_urls(urls, self._extra_handlers())
 
@@ -176,7 +176,7 @@
 
                 # XXX bug, should raise URLError
                 #('file://nonsensename/etc/passwd', None, urllib2.URLError)
-                ('file://nonsensename/etc/passwd', None, (OSError, socket.error))
+                ('file://nonsensename/etc/passwd', None, (EnvironmentError, socket.error))
                 ]
             self._test_urls(urls, self._extra_handlers())
         finally:
@@ -239,7 +239,9 @@
             except (IOError, socket.error, OSError), err:
                 debug(err)
                 if expected_err:
-                    self.assert_(isinstance(err, expected_err))
+                    msg = ("Didn't get expected error(s) %s for %s %s, got %s" %
+                           (expected_err, url, req, err))
+                    self.assert_(isinstance(err, expected_err), msg)
             else:
                 buf = f.read()
                 f.close()
@@ -259,7 +261,6 @@
         return handlers
 
 
-
 def test_main():
     test_support.requires("network")
     test_support.run_unittest(URLTimeoutTest, urlopenNetworkTests,
diff --git a/Lib/test/test_urllibnet.py b/Lib/test/test_urllibnet.py
index 80761df..9105afe 100644
--- a/Lib/test/test_urllibnet.py
+++ b/Lib/test/test_urllibnet.py
@@ -110,7 +110,7 @@
                           # domain will be spared to serve its defined
                           # purpose.
                           # urllib.urlopen, "http://www.sadflkjsasadf.com/")
-                          urllib.urlopen, "http://www.python.invalid/")
+                          urllib.urlopen, "http://www.python.invalid./")
 
 class urlretrieveNetworkTests(unittest.TestCase):
     """Tests urllib.urlretrieve using the network."""
diff --git a/Lib/test/test_uuid.py b/Lib/test/test_uuid.py
new file mode 100644
index 0000000..0586cfd
--- /dev/null
+++ b/Lib/test/test_uuid.py
@@ -0,0 +1,434 @@
+from unittest import TestCase
+from test import test_support
+import uuid
+
+def importable(name):
+    try:
+        __import__(name)
+        return True
+    except:
+        return False
+
+class TestUUID(TestCase):
+    last_node = None
+    source2node = {}
+
+    def test_UUID(self):
+        equal = self.assertEqual
+        ascending = []
+        for (string, curly, hex, bytes, fields, integer, urn,
+             time, clock_seq, variant, version) in [
+            ('00000000-0000-0000-0000-000000000000',
+             '{00000000-0000-0000-0000-000000000000}',
+             '00000000000000000000000000000000',
+             '\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0',
+             (0, 0, 0, 0, 0, 0),
+             0,
+             'urn:uuid:00000000-0000-0000-0000-000000000000',
+             0, 0, uuid.RESERVED_NCS, None),
+            ('00010203-0405-0607-0809-0a0b0c0d0e0f',
+             '{00010203-0405-0607-0809-0a0b0c0d0e0f}',
+             '000102030405060708090a0b0c0d0e0f',
+             '\0\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\x0d\x0e\x0f',
+             (0x00010203L, 0x0405, 0x0607, 8, 9, 0x0a0b0c0d0e0fL),
+             0x000102030405060708090a0b0c0d0e0fL,
+             'urn:uuid:00010203-0405-0607-0809-0a0b0c0d0e0f',
+             0x607040500010203L, 0x809, uuid.RESERVED_NCS, None),
+            ('02d9e6d5-9467-382e-8f9b-9300a64ac3cd',
+             '{02d9e6d5-9467-382e-8f9b-9300a64ac3cd}',
+             '02d9e6d59467382e8f9b9300a64ac3cd',
+             '\x02\xd9\xe6\xd5\x94\x67\x38\x2e\x8f\x9b\x93\x00\xa6\x4a\xc3\xcd',
+             (0x02d9e6d5L, 0x9467, 0x382e, 0x8f, 0x9b, 0x9300a64ac3cdL),
+             0x02d9e6d59467382e8f9b9300a64ac3cdL,
+             'urn:uuid:02d9e6d5-9467-382e-8f9b-9300a64ac3cd',
+             0x82e946702d9e6d5L, 0xf9b, uuid.RFC_4122, 3),
+            ('12345678-1234-5678-1234-567812345678',
+             '{12345678-1234-5678-1234-567812345678}',
+             '12345678123456781234567812345678',
+             '\x12\x34\x56\x78'*4,
+             (0x12345678, 0x1234, 0x5678, 0x12, 0x34, 0x567812345678),
+             0x12345678123456781234567812345678,
+             'urn:uuid:12345678-1234-5678-1234-567812345678',
+             0x678123412345678L, 0x1234, uuid.RESERVED_NCS, None),
+            ('6ba7b810-9dad-11d1-80b4-00c04fd430c8',
+             '{6ba7b810-9dad-11d1-80b4-00c04fd430c8}',
+             '6ba7b8109dad11d180b400c04fd430c8',
+             '\x6b\xa7\xb8\x10\x9d\xad\x11\xd1\x80\xb4\x00\xc0\x4f\xd4\x30\xc8',
+             (0x6ba7b810L, 0x9dad, 0x11d1, 0x80, 0xb4, 0x00c04fd430c8L),
+             0x6ba7b8109dad11d180b400c04fd430c8L,
+             'urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8',
+             0x1d19dad6ba7b810L, 0xb4, uuid.RFC_4122, 1),
+            ('6ba7b811-9dad-11d1-80b4-00c04fd430c8',
+             '{6ba7b811-9dad-11d1-80b4-00c04fd430c8}',
+             '6ba7b8119dad11d180b400c04fd430c8',
+             '\x6b\xa7\xb8\x11\x9d\xad\x11\xd1\x80\xb4\x00\xc0\x4f\xd4\x30\xc8',
+             (0x6ba7b811L, 0x9dad, 0x11d1, 0x80, 0xb4, 0x00c04fd430c8L),
+             0x6ba7b8119dad11d180b400c04fd430c8L,
+             'urn:uuid:6ba7b811-9dad-11d1-80b4-00c04fd430c8',
+             0x1d19dad6ba7b811L, 0xb4, uuid.RFC_4122, 1),
+            ('6ba7b812-9dad-11d1-80b4-00c04fd430c8',
+             '{6ba7b812-9dad-11d1-80b4-00c04fd430c8}',
+             '6ba7b8129dad11d180b400c04fd430c8',
+             '\x6b\xa7\xb8\x12\x9d\xad\x11\xd1\x80\xb4\x00\xc0\x4f\xd4\x30\xc8',
+             (0x6ba7b812L, 0x9dad, 0x11d1, 0x80, 0xb4, 0x00c04fd430c8L),
+             0x6ba7b8129dad11d180b400c04fd430c8L,
+             'urn:uuid:6ba7b812-9dad-11d1-80b4-00c04fd430c8',
+             0x1d19dad6ba7b812L, 0xb4, uuid.RFC_4122, 1),
+            ('6ba7b814-9dad-11d1-80b4-00c04fd430c8',
+             '{6ba7b814-9dad-11d1-80b4-00c04fd430c8}',
+             '6ba7b8149dad11d180b400c04fd430c8',
+             '\x6b\xa7\xb8\x14\x9d\xad\x11\xd1\x80\xb4\x00\xc0\x4f\xd4\x30\xc8',
+             (0x6ba7b814L, 0x9dad, 0x11d1, 0x80, 0xb4, 0x00c04fd430c8L),
+             0x6ba7b8149dad11d180b400c04fd430c8L,
+             'urn:uuid:6ba7b814-9dad-11d1-80b4-00c04fd430c8',
+             0x1d19dad6ba7b814L, 0xb4, uuid.RFC_4122, 1),
+            ('7d444840-9dc0-11d1-b245-5ffdce74fad2',
+             '{7d444840-9dc0-11d1-b245-5ffdce74fad2}',
+             '7d4448409dc011d1b2455ffdce74fad2',
+             '\x7d\x44\x48\x40\x9d\xc0\x11\xd1\xb2\x45\x5f\xfd\xce\x74\xfa\xd2',
+             (0x7d444840L, 0x9dc0, 0x11d1, 0xb2, 0x45, 0x5ffdce74fad2L),
+             0x7d4448409dc011d1b2455ffdce74fad2L,
+             'urn:uuid:7d444840-9dc0-11d1-b245-5ffdce74fad2',
+             0x1d19dc07d444840L, 0x3245, uuid.RFC_4122, 1),
+            ('e902893a-9d22-3c7e-a7b8-d6e313b71d9f',
+             '{e902893a-9d22-3c7e-a7b8-d6e313b71d9f}',
+             'e902893a9d223c7ea7b8d6e313b71d9f',
+             '\xe9\x02\x89\x3a\x9d\x22\x3c\x7e\xa7\xb8\xd6\xe3\x13\xb7\x1d\x9f',
+             (0xe902893aL, 0x9d22, 0x3c7e, 0xa7, 0xb8, 0xd6e313b71d9fL),
+             0xe902893a9d223c7ea7b8d6e313b71d9fL,
+             'urn:uuid:e902893a-9d22-3c7e-a7b8-d6e313b71d9f',
+             0xc7e9d22e902893aL, 0x27b8, uuid.RFC_4122, 3),
+            ('eb424026-6f54-4ef8-a4d0-bb658a1fc6cf',
+             '{eb424026-6f54-4ef8-a4d0-bb658a1fc6cf}',
+             'eb4240266f544ef8a4d0bb658a1fc6cf',
+             '\xeb\x42\x40\x26\x6f\x54\x4e\xf8\xa4\xd0\xbb\x65\x8a\x1f\xc6\xcf',
+             (0xeb424026L, 0x6f54, 0x4ef8, 0xa4, 0xd0, 0xbb658a1fc6cfL),
+             0xeb4240266f544ef8a4d0bb658a1fc6cfL,
+             'urn:uuid:eb424026-6f54-4ef8-a4d0-bb658a1fc6cf',
+             0xef86f54eb424026L, 0x24d0, uuid.RFC_4122, 4),
+            ('f81d4fae-7dec-11d0-a765-00a0c91e6bf6',
+             '{f81d4fae-7dec-11d0-a765-00a0c91e6bf6}',
+             'f81d4fae7dec11d0a76500a0c91e6bf6',
+             '\xf8\x1d\x4f\xae\x7d\xec\x11\xd0\xa7\x65\x00\xa0\xc9\x1e\x6b\xf6',
+             (0xf81d4faeL, 0x7dec, 0x11d0, 0xa7, 0x65, 0x00a0c91e6bf6L),
+             0xf81d4fae7dec11d0a76500a0c91e6bf6L,
+             'urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6',
+             0x1d07decf81d4faeL, 0x2765, uuid.RFC_4122, 1),
+            ('fffefdfc-fffe-fffe-fffe-fffefdfcfbfa',
+             '{fffefdfc-fffe-fffe-fffe-fffefdfcfbfa}',
+             'fffefdfcfffefffefffefffefdfcfbfa',
+             '\xff\xfe\xfd\xfc\xff\xfe\xff\xfe\xff\xfe\xff\xfe\xfd\xfc\xfb\xfa',
+             (0xfffefdfcL, 0xfffe, 0xfffe, 0xff, 0xfe, 0xfffefdfcfbfaL),
+             0xfffefdfcfffefffefffefffefdfcfbfaL,
+             'urn:uuid:fffefdfc-fffe-fffe-fffe-fffefdfcfbfa',
+             0xffefffefffefdfcL, 0x3ffe, uuid.RESERVED_FUTURE, None),
+            ('ffffffff-ffff-ffff-ffff-ffffffffffff',
+             '{ffffffff-ffff-ffff-ffff-ffffffffffff}',
+             'ffffffffffffffffffffffffffffffff',
+             '\xff'*16,
+             (0xffffffffL, 0xffffL, 0xffffL, 0xff, 0xff, 0xffffffffffffL),
+             0xffffffffffffffffffffffffffffffffL,
+             'urn:uuid:ffffffff-ffff-ffff-ffff-ffffffffffff',
+             0xfffffffffffffffL, 0x3fff, uuid.RESERVED_FUTURE, None),
+            ]:
+            equivalents = []
+            # Construct each UUID in several different ways.
+            for u in [uuid.UUID(string), uuid.UUID(curly), uuid.UUID(hex),
+                      uuid.UUID(bytes=bytes), uuid.UUID(fields=fields),
+                      uuid.UUID(int=integer), uuid.UUID(urn)]:
+                # Test all conversions and properties of the UUID object.
+                equal(str(u), string)
+                equal(int(u), integer)
+                equal(u.bytes, bytes)
+                equal(u.fields, fields)
+                equal(u.time_low, fields[0])
+                equal(u.time_mid, fields[1])
+                equal(u.time_hi_version, fields[2])
+                equal(u.clock_seq_hi_variant, fields[3])
+                equal(u.clock_seq_low, fields[4])
+                equal(u.node, fields[5])
+                equal(u.hex, hex)
+                equal(u.int, integer)
+                equal(u.urn, urn)
+                equal(u.time, time)
+                equal(u.clock_seq, clock_seq)
+                equal(u.variant, variant)
+                equal(u.version, version)
+                equivalents.append(u)
+
+            # Different construction methods should give the same UUID.
+            for u in equivalents:
+                for v in equivalents:
+                    equal(u, v)
+            ascending.append(u)
+
+        # Test comparison of UUIDs.
+        for i in range(len(ascending)):
+            for j in range(len(ascending)):
+                equal(cmp(i, j), cmp(ascending[i], ascending[j]))
+
+        # Test sorting of UUIDs (above list is in ascending order).
+        resorted = ascending[:]
+        resorted.reverse()
+        resorted.sort()
+        equal(ascending, resorted)
+
+    def test_exceptions(self):
+        badvalue = lambda f: self.assertRaises(ValueError, f)
+        badtype = lambda f: self.assertRaises(TypeError, f)
+
+        # Badly formed hex strings.
+        badvalue(lambda: uuid.UUID(''))
+        badvalue(lambda: uuid.UUID('abc'))
+        badvalue(lambda: uuid.UUID('1234567812345678123456781234567'))
+        badvalue(lambda: uuid.UUID('123456781234567812345678123456789'))
+        badvalue(lambda: uuid.UUID('123456781234567812345678z2345678'))
+
+        # Badly formed bytes.
+        badvalue(lambda: uuid.UUID(bytes='abc'))
+        badvalue(lambda: uuid.UUID(bytes='\0'*15))
+        badvalue(lambda: uuid.UUID(bytes='\0'*17))
+
+        # Badly formed fields.
+        badvalue(lambda: uuid.UUID(fields=(1,)))
+        badvalue(lambda: uuid.UUID(fields=(1, 2, 3, 4, 5)))
+        badvalue(lambda: uuid.UUID(fields=(1, 2, 3, 4, 5, 6, 7)))
+
+        # Field values out of range.
+        badvalue(lambda: uuid.UUID(fields=(-1, 0, 0, 0, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0x100000000L, 0, 0, 0, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, -1, 0, 0, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0x10000L, 0, 0, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, -1, 0, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, 0x10000L, 0, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, 0, -1, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, 0, 0x100L, 0, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, 0, 0, -1, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, 0, 0, 0x100L, 0)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, 0, 0, 0, -1)))
+        badvalue(lambda: uuid.UUID(fields=(0, 0, 0, 0, 0, 0x1000000000000L)))
+
+        # Version number out of range.
+        badvalue(lambda: uuid.UUID('00'*16, version=0))
+        badvalue(lambda: uuid.UUID('00'*16, version=6))
+
+        # Integer value out of range.
+        badvalue(lambda: uuid.UUID(int=-1))
+        badvalue(lambda: uuid.UUID(int=1<<128L))
+
+        # Must supply exactly one of hex, bytes, fields, int.
+        h, b, f, i = '00'*16, '\0'*16, (0, 0, 0, 0, 0, 0), 0
+        uuid.UUID(h)
+        uuid.UUID(hex=h)
+        uuid.UUID(bytes=b)
+        uuid.UUID(fields=f)
+        uuid.UUID(int=i)
+
+        # Wrong number of arguments (positional).
+        badtype(lambda: uuid.UUID())
+        badtype(lambda: uuid.UUID(h, b))
+        badtype(lambda: uuid.UUID(h, b, f))
+        badtype(lambda: uuid.UUID(h, b, f, i))
+
+        # Duplicate arguments (named).
+        badtype(lambda: uuid.UUID(hex=h, bytes=b))
+        badtype(lambda: uuid.UUID(hex=h, fields=f))
+        badtype(lambda: uuid.UUID(hex=h, int=i))
+        badtype(lambda: uuid.UUID(bytes=b, fields=f))
+        badtype(lambda: uuid.UUID(bytes=b, int=i))
+        badtype(lambda: uuid.UUID(fields=f, int=i))
+        badtype(lambda: uuid.UUID(hex=h, bytes=b, fields=f))
+        badtype(lambda: uuid.UUID(hex=h, bytes=b, int=i))
+        badtype(lambda: uuid.UUID(hex=h, fields=f, int=i))
+        badtype(lambda: uuid.UUID(bytes=b, int=i, fields=f))
+        badtype(lambda: uuid.UUID(hex=h, bytes=b, int=i, fields=f))
+
+        # Duplicate arguments (positional and named).
+        badtype(lambda: uuid.UUID(h, hex=h))
+        badtype(lambda: uuid.UUID(h, bytes=b))
+        badtype(lambda: uuid.UUID(h, fields=f))
+        badtype(lambda: uuid.UUID(h, int=i))
+        badtype(lambda: uuid.UUID(h, hex=h, bytes=b))
+        badtype(lambda: uuid.UUID(h, hex=h, fields=f))
+        badtype(lambda: uuid.UUID(h, hex=h, int=i))
+        badtype(lambda: uuid.UUID(h, bytes=b, fields=f))
+        badtype(lambda: uuid.UUID(h, bytes=b, int=i))
+        badtype(lambda: uuid.UUID(h, fields=f, int=i))
+        badtype(lambda: uuid.UUID(h, hex=h, bytes=b, fields=f))
+        badtype(lambda: uuid.UUID(h, hex=h, bytes=b, int=i))
+        badtype(lambda: uuid.UUID(h, hex=h, fields=f, int=i))
+        badtype(lambda: uuid.UUID(h, bytes=b, int=i, fields=f))
+        badtype(lambda: uuid.UUID(h, hex=h, bytes=b, int=i, fields=f))
+
+        # Immutability.
+        u = uuid.UUID(h)
+        badtype(lambda: setattr(u, 'hex', h))
+        badtype(lambda: setattr(u, 'bytes', b))
+        badtype(lambda: setattr(u, 'fields', f))
+        badtype(lambda: setattr(u, 'int', i))
+
+    def check_node(self, node, source):
+        individual_group_bit = (node >> 40L) & 1
+        universal_local_bit = (node >> 40L) & 2
+        message = "%012x doesn't look like a real MAC address" % node
+        self.assertEqual(individual_group_bit, 0, message)
+        self.assertEqual(universal_local_bit, 0, message)
+        self.assertNotEqual(node, 0, message)
+        self.assertNotEqual(node, 0xffffffffffffL, message)
+        self.assert_(0 <= node, message)
+        self.assert_(node < (1L << 48), message)
+
+        TestUUID.source2node[source] = node
+        if TestUUID.last_node:
+            if TestUUID.last_node != node:
+                msg = "different sources disagree on node:\n"
+                for s, n in TestUUID.source2node.iteritems():
+                    msg += "    from source %r, node was %012x\n" % (s, n)
+                # There's actually no reason to expect the MAC addresses
+                # to agree across various methods -- e.g., a box may have
+                # multiple network interfaces, and different ways of getting
+                # a MAC address may favor different HW.
+                ##self.fail(msg)
+        else:
+            TestUUID.last_node = node
+
+    def test_ifconfig_getnode(self):
+        import sys
+        print >>sys.__stdout__, \
+"""    WARNING: uuid._ifconfig_getnode is unreliable on many platforms.
+        It is disabled until the code and/or test can be fixed properly."""
+        return
+
+        import os
+        if os.name == 'posix':
+            node = uuid._ifconfig_getnode()
+            if node is not None:
+                self.check_node(node, 'ifconfig')
+
+    def test_ipconfig_getnode(self):
+        import os
+        if os.name == 'nt':
+            node = uuid._ipconfig_getnode()
+            if node is not None:
+                self.check_node(node, 'ipconfig')
+
+    def test_netbios_getnode(self):
+        if importable('win32wnet') and importable('netbios'):
+            self.check_node(uuid._netbios_getnode(), 'netbios')
+
+    def test_random_getnode(self):
+        node = uuid._random_getnode()
+        self.assert_(0 <= node)
+        self.assert_(node < (1L << 48))
+
+    def test_unixdll_getnode(self):
+        import sys
+        print >>sys.__stdout__, \
+"""    WARNING: uuid._unixdll_getnode is unreliable on many platforms.
+        It is disabled until the code and/or test can be fixed properly."""
+        return
+
+        import os
+        if importable('ctypes') and os.name == 'posix':
+            self.check_node(uuid._unixdll_getnode(), 'unixdll')
+
+    def test_windll_getnode(self):
+        import os
+        if importable('ctypes') and os.name == 'nt':
+            self.check_node(uuid._windll_getnode(), 'windll')
+
+    def test_getnode(self):
+        import sys
+        print >>sys.__stdout__, \
+"""    WARNING: uuid.getnode is unreliable on many platforms.
+        It is disabled until the code and/or test can be fixed properly."""
+        return
+
+        node1 = uuid.getnode()
+        self.check_node(node1, "getnode1")
+
+        # Test it again to ensure consistency.
+        node2 = uuid.getnode()
+        self.check_node(node2, "getnode2")
+
+        self.assertEqual(node1, node2)
+
+    def test_uuid1(self):
+        equal = self.assertEqual
+
+        # Make sure uuid1() generates UUIDs that are actually version 1.
+        for u in [uuid.uuid1() for i in range(10)]:
+            equal(u.variant, uuid.RFC_4122)
+            equal(u.version, 1)
+
+        # Make sure the supplied node ID appears in the UUID.
+        u = uuid.uuid1(0)
+        equal(u.node, 0)
+        u = uuid.uuid1(0x123456789abc)
+        equal(u.node, 0x123456789abc)
+        u = uuid.uuid1(0xffffffffffff)
+        equal(u.node, 0xffffffffffff)
+
+        # Make sure the supplied clock sequence appears in the UUID.
+        u = uuid.uuid1(0x123456789abc, 0)
+        equal(u.node, 0x123456789abc)
+        equal(((u.clock_seq_hi_variant & 0x3f) << 8) | u.clock_seq_low, 0)
+        u = uuid.uuid1(0x123456789abc, 0x1234)
+        equal(u.node, 0x123456789abc)
+        equal(((u.clock_seq_hi_variant & 0x3f) << 8) |
+                         u.clock_seq_low, 0x1234)
+        u = uuid.uuid1(0x123456789abc, 0x3fff)
+        equal(u.node, 0x123456789abc)
+        equal(((u.clock_seq_hi_variant & 0x3f) << 8) |
+                         u.clock_seq_low, 0x3fff)
+
+    def test_uuid3(self):
+        equal = self.assertEqual
+
+        # Test some known version-3 UUIDs.
+        for u, v in [(uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org'),
+                      '6fa459ea-ee8a-3ca4-894e-db77e160355e'),
+                     (uuid.uuid3(uuid.NAMESPACE_URL, 'http://python.org/'),
+                      '9fe8e8c4-aaa8-32a9-a55c-4535a88b748d'),
+                     (uuid.uuid3(uuid.NAMESPACE_OID, '1.3.6.1'),
+                      'dd1a1cef-13d5-368a-ad82-eca71acd4cd1'),
+                     (uuid.uuid3(uuid.NAMESPACE_X500, 'c=ca'),
+                      '658d3002-db6b-3040-a1d1-8ddd7d189a4d'),
+                    ]:
+            equal(u.variant, uuid.RFC_4122)
+            equal(u.version, 3)
+            equal(u, uuid.UUID(v))
+            equal(str(u), v)
+
+    def test_uuid4(self):
+        equal = self.assertEqual
+
+        # Make sure uuid4() generates UUIDs that are actually version 4.
+        for u in [uuid.uuid4() for i in range(10)]:
+            equal(u.variant, uuid.RFC_4122)
+            equal(u.version, 4)
+
+    def test_uuid5(self):
+        equal = self.assertEqual
+
+        # Test some known version-5 UUIDs.
+        for u, v in [(uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org'),
+                      '886313e1-3b8a-5372-9b90-0c9aee199e5d'),
+                     (uuid.uuid5(uuid.NAMESPACE_URL, 'http://python.org/'),
+                      '4c565f0d-3f5a-5890-b41b-20cf47701c5e'),
+                     (uuid.uuid5(uuid.NAMESPACE_OID, '1.3.6.1'),
+                      '1447fa61-5277-5fef-a9b3-fbc6e44f4af3'),
+                     (uuid.uuid5(uuid.NAMESPACE_X500, 'c=ca'),
+                      'cc957dd1-a972-5349-98cd-874190002798'),
+                    ]:
+            equal(u.variant, uuid.RFC_4122)
+            equal(u.version, 5)
+            equal(u, uuid.UUID(v))
+            equal(str(u), v)
+
+
+def test_main():
+    test_support.run_unittest(TestUUID)
+
+if __name__ == '__main__':
+    test_main()
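
As a quick orientation for the new uuid module these tests exercise (a usage sketch; uuid1/uuid4 output naturally differs per run):

    import uuid

    # One UUID, many representations.
    u = uuid.UUID('12345678-1234-5678-1234-567812345678')
    print str(u), u.hex, u.int, u.urn

    # Name-based UUIDs are deterministic (these match the known values
    # asserted in the tests above) ...
    print uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')
    # -> 6fa459ea-ee8a-3ca4-894e-db77e160355e
    print uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
    # -> 886313e1-3b8a-5372-9b90-0c9aee199e5d

    # ... while time-based and random UUIDs are not.
    print uuid.uuid1()    # version 1: timestamp + node
    print uuid.uuid4()    # version 4: random
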
diff --git a/Lib/test/test_wait3.py b/Lib/test/test_wait3.py
index f6a41a6..9de64b2 100644
--- a/Lib/test/test_wait3.py
+++ b/Lib/test/test_wait3.py
@@ -2,8 +2,9 @@
 """
 
 import os
+import time
 from test.fork_wait import ForkWait
-from test.test_support import TestSkipped, run_unittest
+from test.test_support import TestSkipped, run_unittest, reap_children
 
 try:
     os.fork
@@ -17,16 +18,21 @@
 
 class Wait3Test(ForkWait):
     def wait_impl(self, cpid):
-        while 1:
-            spid, status, rusage = os.wait3(0)
+        for i in range(10):
+            # wait3() shouldn't hang, but some of the buildbots seem to hang
+            # in the forking tests.  This is an attempt to fix the problem.
+            spid, status, rusage = os.wait3(os.WNOHANG)
             if spid == cpid:
                 break
+            time.sleep(1.0)
+
         self.assertEqual(spid, cpid)
         self.assertEqual(status, 0, "cause = %d, exit = %d" % (status&0xff, status>>8))
         self.assertTrue(rusage)
 
 def test_main():
     run_unittest(Wait3Test)
+    reap_children()
 
 if __name__ == "__main__":
     test_main()
diff --git a/Lib/test/test_wait4.py b/Lib/test/test_wait4.py
index 027e5c3..9f7fc14 100644
--- a/Lib/test/test_wait4.py
+++ b/Lib/test/test_wait4.py
@@ -2,8 +2,9 @@
 """
 
 import os
+import time
 from test.fork_wait import ForkWait
-from test.test_support import TestSkipped, run_unittest
+from test.test_support import TestSkipped, run_unittest, reap_children
 
 try:
     os.fork
@@ -17,13 +18,20 @@
 
 class Wait4Test(ForkWait):
     def wait_impl(self, cpid):
-        spid, status, rusage = os.wait4(cpid, 0)
+        for i in range(10):
+            # wait4() shouldn't hang, but some of the buildbots seem to hang
+            # in the forking tests.  This is an attempt to fix the problem.
+            spid, status, rusage = os.wait4(cpid, os.WNOHANG)
+            if spid == cpid:
+                break
+            time.sleep(1.0)
         self.assertEqual(spid, cpid)
         self.assertEqual(status, 0, "cause = %d, exit = %d" % (status&0xff, status>>8))
         self.assertTrue(rusage)
 
 def test_main():
     run_unittest(Wait4Test)
+    reap_children()
 
 if __name__ == "__main__":
     test_main()
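
Both the wait3 and wait4 changes use the same polling idiom: pass os.WNOHANG so the call returns immediately with a pid of 0 while the child is still running, and sleep between attempts instead of blocking forever.  The same idiom with plain os.waitpid (POSIX; wait_with_timeout is a hypothetical helper, not part of the patch):

    import os, time

    def wait_with_timeout(pid, attempts=10, delay=1.0):
        # Poll the child non-blockingly, giving up after attempts*delay seconds.
        for i in range(attempts):
            spid, status = os.waitpid(pid, os.WNOHANG)
            if spid == pid:         # the child has exited
                return status
            time.sleep(delay)       # pid 0 means "still running"; retry
        return None
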
diff --git a/Lib/test/test_warnings.py b/Lib/test/test_warnings.py
index 5d051a5..a7ccb6b 100644
--- a/Lib/test/test_warnings.py
+++ b/Lib/test/test_warnings.py
@@ -81,6 +81,19 @@
         self.assertEqual(msg.message, text)
         self.assertEqual(msg.category, 'UserWarning')
 
+    def test_options(self):
+        # Uses the private _setoption() function to test the parsing
+        # of command-line warning arguments
+        self.assertRaises(warnings._OptionError,
+                          warnings._setoption, '1:2:3:4:5:6')
+        self.assertRaises(warnings._OptionError,
+                          warnings._setoption, 'bogus::Warning')
+        self.assertRaises(warnings._OptionError,
+                          warnings._setoption, 'ignore:2::4:-5')
+        warnings._setoption('error::Warning::0')
+        self.assertRaises(UserWarning, warnings.warn, 'convert to error')
+
+
 def test_main(verbose=None):
     # Obscure hack so that this test passes after reloads or repeated calls
     # to test_main (regrtest -R).
diff --git a/Lib/test/test_winreg.py b/Lib/test/test_winreg.py
index a9bc962..5830fd6 100644
--- a/Lib/test/test_winreg.py
+++ b/Lib/test/test_winreg.py
@@ -151,3 +151,6 @@
 else:
     print "Remote registry calls can be tested using",
     print "'test_winreg.py --remote \\\\machine_name'"
+    # perform minimal ConnectRegistry test which just invokes it
+    h = ConnectRegistry(None, HKEY_LOCAL_MACHINE)
+    h.Close()
diff --git a/Lib/test/test_wsgiref.py b/Lib/test/test_wsgiref.py
new file mode 100755
index 0000000..1ec271b
--- /dev/null
+++ b/Lib/test/test_wsgiref.py
@@ -0,0 +1,615 @@
+from __future__ import nested_scopes    # Backward compat for 2.1
+from unittest import TestSuite, TestCase, makeSuite
+from wsgiref.util import setup_testing_defaults
+from wsgiref.headers import Headers
+from wsgiref.handlers import BaseHandler, BaseCGIHandler
+from wsgiref import util
+from wsgiref.validate import validator
+from wsgiref.simple_server import WSGIServer, WSGIRequestHandler, demo_app
+from wsgiref.simple_server import make_server
+from StringIO import StringIO
+from SocketServer import BaseServer
+import re, sys
+
+
+class MockServer(WSGIServer):
+    """Non-socket HTTP server"""
+
+    def __init__(self, server_address, RequestHandlerClass):
+        BaseServer.__init__(self, server_address, RequestHandlerClass)
+        self.server_bind()
+
+    def server_bind(self):
+        host, port = self.server_address
+        self.server_name = host
+        self.server_port = port
+        self.setup_environ()
+
+
+class MockHandler(WSGIRequestHandler):
+    """Non-socket HTTP handler"""
+    def setup(self):
+        self.connection = self.request
+        self.rfile, self.wfile = self.connection
+
+    def finish(self):
+        pass
+
+
+
+
+
+def hello_app(environ,start_response):
+    start_response("200 OK", [
+        ('Content-Type','text/plain'),
+        ('Date','Mon, 05 Jun 2006 18:49:54 GMT')
+    ])
+    return ["Hello, world!"]
+
+def run_amock(app=hello_app, data="GET / HTTP/1.0\n\n"):
+    server = make_server("", 80, app, MockServer, MockHandler)
+    inp, out, err, olderr = StringIO(data), StringIO(), StringIO(), sys.stderr
+    sys.stderr = err
+
+    try:
+        server.finish_request((inp,out), ("127.0.0.1",8888))
+    finally:
+        sys.stderr = olderr
+
+    return out.getvalue(), err.getvalue()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+def compare_generic_iter(make_it,match):
+    """Utility to compare a generic 2.1/2.2+ iterator with an iterable
+
+    If running under Python 2.2+, this tests the iterator using iter()/next(),
+    as well as __getitem__.  'make_it' must be a function returning a fresh
+    iterator to be tested (since this may test the iterator twice)."""
+
+    it = make_it()
+    n = 0
+    for item in match:
+        if not it[n]==item: raise AssertionError
+        n+=1
+    try:
+        it[n]
+    except IndexError:
+        pass
+    else:
+        raise AssertionError("Too many items from __getitem__",it)
+
+    try:
+        iter, StopIteration
+    except NameError:
+        pass
+    else:
+        # Only test iter mode under 2.2+
+        it = make_it()
+        if not iter(it) is it: raise AssertionError
+        for item in match:
+            if not it.next()==item: raise AssertionError
+        try:
+            it.next()
+        except StopIteration:
+            pass
+        else:
+            raise AssertionError("Too many items from .next()",it)
+
+
+
+
+
+
+class IntegrationTests(TestCase):
+
+    def check_hello(self, out, has_length=True):
+        self.assertEqual(out,
+            "HTTP/1.0 200 OK\r\n"
+            "Server: WSGIServer/0.1 Python/"+sys.version.split()[0]+"\r\n"
+            "Content-Type: text/plain\r\n"
+            "Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" +
+            (has_length and  "Content-Length: 13\r\n" or "") +
+            "\r\n"
+            "Hello, world!"
+        )
+
+    def test_plain_hello(self):
+        out, err = run_amock()
+        self.check_hello(out)
+
+    def test_validated_hello(self):
+        out, err = run_amock(validator(hello_app))
+        # the middleware doesn't support len(), so content-length isn't there
+        self.check_hello(out, has_length=False)
+
+    def test_simple_validation_error(self):
+        def bad_app(environ,start_response):
+            start_response("200 OK", ('Content-Type','text/plain'))
+            return ["Hello, world!"]
+        out, err = run_amock(validator(bad_app))
+        self.failUnless(out.endswith(
+            "A server error occurred.  Please contact the administrator."
+        ))
+        self.assertEqual(
+            err.splitlines()[-2],
+            "AssertionError: Headers (('Content-Type', 'text/plain')) must"
+            " be of type list: <type 'tuple'>"
+        )
+
+
+
+
+
+
+class UtilityTests(TestCase):
+
+    def checkShift(self,sn_in,pi_in,part,sn_out,pi_out):
+        env = {'SCRIPT_NAME':sn_in,'PATH_INFO':pi_in}
+        util.setup_testing_defaults(env)
+        self.assertEqual(util.shift_path_info(env),part)
+        self.assertEqual(env['PATH_INFO'],pi_out)
+        self.assertEqual(env['SCRIPT_NAME'],sn_out)
+        return env
+
+    def checkDefault(self, key, value, alt=None):
+        # Check defaulting when empty
+        env = {}
+        util.setup_testing_defaults(env)
+        if isinstance(value,StringIO):
+            self.failUnless(isinstance(env[key],StringIO))
+        else:
+            self.assertEqual(env[key],value)
+
+        # Check existing value
+        env = {key:alt}
+        util.setup_testing_defaults(env)
+        self.failUnless(env[key] is alt)
+
+    def checkCrossDefault(self,key,value,**kw):
+        util.setup_testing_defaults(kw)
+        self.assertEqual(kw[key],value)
+
+    def checkAppURI(self,uri,**kw):
+        util.setup_testing_defaults(kw)
+        self.assertEqual(util.application_uri(kw),uri)
+
+    def checkReqURI(self,uri,query=1,**kw):
+        util.setup_testing_defaults(kw)
+        self.assertEqual(util.request_uri(kw,query),uri)
+
+
+
+
+
+
+    def checkFW(self,text,size,match):
+
+        def make_it(text=text,size=size):
+            return util.FileWrapper(StringIO(text),size)
+
+        compare_generic_iter(make_it,match)
+
+        it = make_it()
+        self.failIf(it.filelike.closed)
+
+        for item in it:
+            pass
+
+        self.failIf(it.filelike.closed)
+
+        it.close()
+        self.failUnless(it.filelike.closed)
+
+
+    def testSimpleShifts(self):
+        self.checkShift('','/', '', '/', '')
+        self.checkShift('','/x', 'x', '/x', '')
+        self.checkShift('/','', None, '/', '')
+        self.checkShift('/a','/x/y', 'x', '/a/x', '/y')
+        self.checkShift('/a','/x/',  'x', '/a/x', '/')
+
+
+    def testNormalizedShifts(self):
+        self.checkShift('/a/b', '/../y', '..', '/a', '/y')
+        self.checkShift('', '/../y', '..', '', '/y')
+        self.checkShift('/a/b', '//y', 'y', '/a/b/y', '')
+        self.checkShift('/a/b', '//y/', 'y', '/a/b/y', '/')
+        self.checkShift('/a/b', '/./y', 'y', '/a/b/y', '')
+        self.checkShift('/a/b', '/./y/', 'y', '/a/b/y', '/')
+        self.checkShift('/a/b', '///./..//y/.//', '..', '/a', '/y/')
+        self.checkShift('/a/b', '///', '', '/a/b/', '')
+        self.checkShift('/a/b', '/.//', '', '/a/b/', '')
+        self.checkShift('/a/b', '/x//', 'x', '/a/b/x', '/')
+        self.checkShift('/a/b', '/.', None, '/a/b', '')
+
+
+    def testDefaults(self):
+        for key, value in [
+            ('SERVER_NAME','127.0.0.1'),
+            ('SERVER_PORT', '80'),
+            ('SERVER_PROTOCOL','HTTP/1.0'),
+            ('HTTP_HOST','127.0.0.1'),
+            ('REQUEST_METHOD','GET'),
+            ('SCRIPT_NAME',''),
+            ('PATH_INFO','/'),
+            ('wsgi.version', (1,0)),
+            ('wsgi.run_once', 0),
+            ('wsgi.multithread', 0),
+            ('wsgi.multiprocess', 0),
+            ('wsgi.input', StringIO("")),
+            ('wsgi.errors', StringIO()),
+            ('wsgi.url_scheme','http'),
+        ]:
+            self.checkDefault(key,value)
+
+
+    def testCrossDefaults(self):
+        self.checkCrossDefault('HTTP_HOST',"foo.bar",SERVER_NAME="foo.bar")
+        self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="on")
+        self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="1")
+        self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="yes")
+        self.checkCrossDefault('wsgi.url_scheme',"http",HTTPS="foo")
+        self.checkCrossDefault('SERVER_PORT',"80",HTTPS="foo")
+        self.checkCrossDefault('SERVER_PORT',"443",HTTPS="on")
+
+
+    def testGuessScheme(self):
+        self.assertEqual(util.guess_scheme({}), "http")
+        self.assertEqual(util.guess_scheme({'HTTPS':"foo"}), "http")
+        self.assertEqual(util.guess_scheme({'HTTPS':"on"}), "https")
+        self.assertEqual(util.guess_scheme({'HTTPS':"yes"}), "https")
+        self.assertEqual(util.guess_scheme({'HTTPS':"1"}), "https")
+
+
+
+
+
+    def testAppURIs(self):
+        self.checkAppURI("http://127.0.0.1/")
+        self.checkAppURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam")
+        self.checkAppURI("http://spam.example.com:2071/",
+            HTTP_HOST="spam.example.com:2071", SERVER_PORT="2071")
+        self.checkAppURI("http://spam.example.com/",
+            SERVER_NAME="spam.example.com")
+        self.checkAppURI("http://127.0.0.1/",
+            HTTP_HOST="127.0.0.1", SERVER_NAME="spam.example.com")
+        self.checkAppURI("https://127.0.0.1/", HTTPS="on")
+        self.checkAppURI("http://127.0.0.1:8000/", SERVER_PORT="8000",
+            HTTP_HOST=None)
+
+    def testReqURIs(self):
+        self.checkReqURI("http://127.0.0.1/")
+        self.checkReqURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam")
+        self.checkReqURI("http://127.0.0.1/spammity/spam",
+            SCRIPT_NAME="/spammity", PATH_INFO="/spam")
+        self.checkReqURI("http://127.0.0.1/spammity/spam?say=ni",
+            SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni")
+        self.checkReqURI("http://127.0.0.1/spammity/spam", 0,
+            SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni")
+
+    def testFileWrapper(self):
+        self.checkFW("xyz"*50, 120, ["xyz"*40,"xyz"*10])
+
+    def testHopByHop(self):
+        for hop in (
+            "Connection Keep-Alive Proxy-Authenticate Proxy-Authorization "
+            "TE Trailers Transfer-Encoding Upgrade"
+        ).split():
+            for alt in hop, hop.title(), hop.upper(), hop.lower():
+                self.failUnless(util.is_hop_by_hop(alt))
+
+        # Not comprehensive, just a few random header names
+        for hop in (
+            "Accept Cache-Control Date Pragma Trailer Via Warning"
+        ).split():
+            for alt in hop, hop.title(), hop.upper(), hop.lower():
+                self.failIf(util.is_hop_by_hop(alt))
+
+class HeaderTests(TestCase):
+
+    def testMappingInterface(self):
+        test = [('x','y')]
+        self.assertEqual(len(Headers([])),0)
+        self.assertEqual(len(Headers(test[:])),1)
+        self.assertEqual(Headers(test[:]).keys(), ['x'])
+        self.assertEqual(Headers(test[:]).values(), ['y'])
+        self.assertEqual(Headers(test[:]).items(), test)
+        self.failIf(Headers(test).items() is test)  # must be copy!
+
+        h=Headers([])
+        del h['foo']   # should not raise an error
+
+        h['Foo'] = 'bar'
+        for m in h.has_key, h.__contains__, h.get, h.get_all, h.__getitem__:
+            self.failUnless(m('foo'))
+            self.failUnless(m('Foo'))
+            self.failUnless(m('FOO'))
+            self.failIf(m('bar'))
+
+        self.assertEqual(h['foo'],'bar')
+        h['foo'] = 'baz'
+        self.assertEqual(h['FOO'],'baz')
+        self.assertEqual(h.get_all('foo'),['baz'])
+
+        self.assertEqual(h.get("foo","whee"), "baz")
+        self.assertEqual(h.get("zoo","whee"), "whee")
+        self.assertEqual(h.setdefault("foo","whee"), "baz")
+        self.assertEqual(h.setdefault("zoo","whee"), "whee")
+        self.assertEqual(h["foo"],"baz")
+        self.assertEqual(h["zoo"],"whee")
+
+    def testRequireList(self):
+        self.assertRaises(TypeError, Headers, "foo")
+
+
+    def testExtras(self):
+        h = Headers([])
+        self.assertEqual(str(h),'\r\n')
+
+        h.add_header('foo','bar',baz="spam")
+        self.assertEqual(h['foo'], 'bar; baz="spam"')
+        self.assertEqual(str(h),'foo: bar; baz="spam"\r\n\r\n')
+
+        h.add_header('Foo','bar',cheese=None)
+        self.assertEqual(h.get_all('foo'),
+            ['bar; baz="spam"', 'bar; cheese'])
+
+        self.assertEqual(str(h),
+            'foo: bar; baz="spam"\r\n'
+            'Foo: bar; cheese\r\n'
+            '\r\n'
+        )
+
+
+class ErrorHandler(BaseCGIHandler):
+    """Simple handler subclass for testing BaseHandler"""
+
+    def __init__(self,**kw):
+        setup_testing_defaults(kw)
+        BaseCGIHandler.__init__(
+            self, StringIO(''), StringIO(), StringIO(), kw,
+            multithread=True, multiprocess=True
+        )
+
+class TestHandler(ErrorHandler):
+    """Simple handler subclass for testing BaseHandler, w/error passthru"""
+
+    def handle_error(self):
+        raise   # for testing, we want to see what's happening
+
+
+
+
+
+
+
+
+
+
+
+class HandlerTests(TestCase):
+
+    def checkEnvironAttrs(self, handler):
+        env = handler.environ
+        for attr in [
+            'version','multithread','multiprocess','run_once','file_wrapper'
+        ]:
+            if attr=='file_wrapper' and handler.wsgi_file_wrapper is None:
+                continue
+            self.assertEqual(getattr(handler,'wsgi_'+attr),env['wsgi.'+attr])
+
+    def checkOSEnviron(self,handler):
+        empty = {}; setup_testing_defaults(empty)
+        env = handler.environ
+        from os import environ
+        for k,v in environ.items():
+            if not empty.has_key(k):
+                self.assertEqual(env[k],v)
+        for k,v in empty.items():
+            self.failUnless(env.has_key(k))
+
+    def testEnviron(self):
+        h = TestHandler(X="Y")
+        h.setup_environ()
+        self.checkEnvironAttrs(h)
+        self.checkOSEnviron(h)
+        self.assertEqual(h.environ["X"],"Y")
+
+    def testCGIEnviron(self):
+        h = BaseCGIHandler(None,None,None,{})
+        h.setup_environ()
+        for key in 'wsgi.url_scheme', 'wsgi.input', 'wsgi.errors':
+            self.assert_(h.environ.has_key(key))
+
+    def testScheme(self):
+        h=TestHandler(HTTPS="on"); h.setup_environ()
+        self.assertEqual(h.environ['wsgi.url_scheme'],'https')
+        h=TestHandler(); h.setup_environ()
+        self.assertEqual(h.environ['wsgi.url_scheme'],'http')
+
+
+    def testAbstractMethods(self):
+        h = BaseHandler()
+        for name in [
+            '_flush','get_stdin','get_stderr','add_cgi_vars'
+        ]:
+            self.assertRaises(NotImplementedError, getattr(h,name))
+        self.assertRaises(NotImplementedError, h._write, "test")
+
+
+    def testContentLength(self):
+        # Demo one reason iteration is better than write()...  ;)
+
+        def trivial_app1(e,s):
+            s('200 OK',[])
+            return [e['wsgi.url_scheme']]
+
+        def trivial_app2(e,s):
+            s('200 OK',[])(e['wsgi.url_scheme'])
+            return []
+
+        h = TestHandler()
+        h.run(trivial_app1)
+        self.assertEqual(h.stdout.getvalue(),
+            "Status: 200 OK\r\n"
+            "Content-Length: 4\r\n"
+            "\r\n"
+            "http")
+
+        h = TestHandler()
+        h.run(trivial_app2)
+        self.assertEqual(h.stdout.getvalue(),
+            "Status: 200 OK\r\n"
+            "\r\n"
+            "http")
+
+
+
+
+
+
+
+    def testBasicErrorOutput(self):
+
+        def non_error_app(e,s):
+            s('200 OK',[])
+            return []
+
+        def error_app(e,s):
+            raise AssertionError("This should be caught by handler")
+
+        h = ErrorHandler()
+        h.run(non_error_app)
+        self.assertEqual(h.stdout.getvalue(),
+            "Status: 200 OK\r\n"
+            "Content-Length: 0\r\n"
+            "\r\n")
+        self.assertEqual(h.stderr.getvalue(),"")
+
+        h = ErrorHandler()
+        h.run(error_app)
+        self.assertEqual(h.stdout.getvalue(),
+            "Status: %s\r\n"
+            "Content-Type: text/plain\r\n"
+            "Content-Length: %d\r\n"
+            "\r\n%s" % (h.error_status,len(h.error_body),h.error_body))
+
+        self.failUnless(h.stderr.getvalue().find("AssertionError")<>-1)
+
+    def testErrorAfterOutput(self):
+        MSG = "Some output has been sent"
+        def error_app(e,s):
+            s("200 OK",[])(MSG)
+            raise AssertionError("This should be caught by handler")
+
+        h = ErrorHandler()
+        h.run(error_app)
+        self.assertEqual(h.stdout.getvalue(),
+            "Status: 200 OK\r\n"
+            "\r\n"+MSG)
+        self.failUnless(h.stderr.getvalue().find("AssertionError")<>-1)
+
+
+    def testHeaderFormats(self):
+
+        def non_error_app(e,s):
+            s('200 OK',[])
+            return []
+
+        stdpat = (
+            r"HTTP/%s 200 OK\r\n"
+            r"Date: \w{3}, [ 0123]\d \w{3} \d{4} \d\d:\d\d:\d\d GMT\r\n"
+            r"%s" r"Content-Length: 0\r\n" r"\r\n"
+        )
+        shortpat = (
+            "Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n"
+        )
+
+        for ssw in "FooBar/1.0", None:
+            sw = ssw and "Server: %s\r\n" % ssw or ""
+
+            for version in "1.0", "1.1":
+                for proto in "HTTP/0.9", "HTTP/1.0", "HTTP/1.1":
+
+                    h = TestHandler(SERVER_PROTOCOL=proto)
+                    h.origin_server = False
+                    h.http_version = version
+                    h.server_software = ssw
+                    h.run(non_error_app)
+                    self.assertEqual(shortpat,h.stdout.getvalue())
+
+                    h = TestHandler(SERVER_PROTOCOL=proto)
+                    h.origin_server = True
+                    h.http_version = version
+                    h.server_software = ssw
+                    h.run(non_error_app)
+                    if proto=="HTTP/0.9":
+                        self.assertEqual(h.stdout.getvalue(),"")
+                    else:
+                        self.failUnless(
+                            re.match(stdpat%(version,sw), h.stdout.getvalue()),
+                            (stdpat%(version,sw), h.stdout.getvalue())
+                        )
+
+# This epilogue is needed for compatibility with the Python 2.5 regrtest module
+
+def test_main():
+    import unittest
+    from test.test_support import run_suite
+    run_suite(
+        unittest.defaultTestLoader.loadTestsFromModule(sys.modules[__name__])
+    )
+
+if __name__ == "__main__":
+    test_main()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# the above lines intentionally left blank
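
The mock classes above exercise a WSGI application without ever opening a socket.
A minimal sketch (not part of the patch) of the same idea using only
wsgiref.util.setup_testing_defaults(), which the new tests also rely on; the app,
environ values and printed output here are illustrative only:

    from wsgiref.util import setup_testing_defaults

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['path was %s' % environ['PATH_INFO']]

    environ = {'PATH_INFO': '/hello'}
    setup_testing_defaults(environ)      # fill in the remaining required WSGI keys

    collected = {}
    def start_response(status, headers):
        collected['status'] = status
        collected['headers'] = headers

    body = ''.join(app(environ, start_response))
    print collected['status'], body      # e.g. "200 OK path was /hello"
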
diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py
index 86052d7..78adb42 100644
--- a/Lib/test/test_xml_etree.py
+++ b/Lib/test/test_xml_etree.py
@@ -1,4 +1,4 @@
-# xmlcore.etree test.  This file contains enough tests to make sure that
+# xml.etree test.  This file contains enough tests to make sure that
 # all included components work as they should.  For a more extensive
 # test suite, see the selftest script in the ElementTree distribution.
 
@@ -6,8 +6,6 @@
 
 from test import test_support
 
-from xmlcore.etree import ElementTree as ET
-
 SAMPLE_XML = """
 <body>
   <tag>text</tag>
@@ -32,9 +30,9 @@
     """
     Import sanity.
 
-    >>> from xmlcore.etree import ElementTree
-    >>> from xmlcore.etree import ElementInclude
-    >>> from xmlcore.etree import ElementPath
+    >>> from xml.etree import ElementTree
+    >>> from xml.etree import ElementInclude
+    >>> from xml.etree import ElementPath
     """
 
 def check_method(method):
@@ -61,6 +59,8 @@
     """
     Test element tree interface.
 
+    >>> from xml.etree import ElementTree as ET
+
     >>> element = ET.Element("tag", key="value")
     >>> tree = ET.ElementTree(element)
 
@@ -108,6 +108,8 @@
     """
     Test find methods (including xpath syntax).
 
+    >>> from xml.etree import ElementTree as ET
+
     >>> elem = ET.XML(SAMPLE_XML)
     >>> elem.find("tag").tag
     'tag'
@@ -174,6 +176,8 @@
 def parseliteral():
     r"""
 
+    >>> from xml.etree import ElementTree as ET
+
     >>> element = ET.XML("<html><body>text</body></html>")
     >>> ET.ElementTree(element).write(sys.stdout)
     <html><body>text</body></html>
@@ -195,18 +199,20 @@
     'body'
     """
 
-def check_encoding(encoding):
+
+def check_encoding(ET, encoding):
     """
-    >>> check_encoding("ascii")
-    >>> check_encoding("us-ascii")
-    >>> check_encoding("iso-8859-1")
-    >>> check_encoding("iso-8859-15")
-    >>> check_encoding("cp437")
-    >>> check_encoding("mac-roman")
+    >>> from xml.etree import ElementTree as ET
+
+    >>> check_encoding(ET, "ascii")
+    >>> check_encoding(ET, "us-ascii")
+    >>> check_encoding(ET, "iso-8859-1")
+    >>> check_encoding(ET, "iso-8859-15")
+    >>> check_encoding(ET, "cp437")
+    >>> check_encoding(ET, "mac-roman")
     """
-    ET.XML(
-        "<?xml version='1.0' encoding='%s'?><xml />" % encoding
-        )
+    ET.XML("<?xml version='1.0' encoding='%s'?><xml />" % encoding)
+
 
 #
 # xinclude tests (samples from appendix C of the xinclude specification)
@@ -282,14 +288,16 @@
     except KeyError:
         raise IOError("resource not found")
     if parse == "xml":
-        return ET.XML(data)
+        from xml.etree.ElementTree import XML
+        return XML(data)
     return data
 
 def xinclude():
     r"""
     Basic inclusion example (XInclude C.1)
 
-    >>> from xmlcore.etree import ElementInclude
+    >>> from xml.etree import ElementTree as ET
+    >>> from xml.etree import ElementInclude
 
     >>> document = xinclude_loader("C1.xml")
     >>> ElementInclude.include(document, xinclude_loader)
diff --git a/Lib/test/test_xml_etree_c.py b/Lib/test/test_xml_etree_c.py
index 587ea99..56e7fed 100644
--- a/Lib/test/test_xml_etree_c.py
+++ b/Lib/test/test_xml_etree_c.py
@@ -1,10 +1,10 @@
-# xmlcore.etree test for cElementTree
+# xml.etree test for cElementTree
 
 import doctest, sys
 
 from test import test_support
 
-from xmlcore.etree import cElementTree as ET
+from xml.etree import cElementTree as ET
 
 SAMPLE_XML = """
 <body>
@@ -30,7 +30,7 @@
     """
     Import sanity.
 
-    >>> from xmlcore.etree import cElementTree
+    >>> from xml.etree import cElementTree
     """
 
 def check_method(method):
diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py
index 0241348..54684f3 100644
--- a/Lib/test/test_zipfile.py
+++ b/Lib/test/test_zipfile.py
@@ -4,7 +4,7 @@
 except ImportError:
     zlib = None
 
-import zipfile, os, unittest
+import zipfile, os, unittest, sys, shutil
 
 from StringIO import StringIO
 from tempfile import TemporaryFile
@@ -28,14 +28,70 @@
         zipfp = zipfile.ZipFile(f, "w", compression)
         zipfp.write(TESTFN, "another"+os.extsep+"name")
         zipfp.write(TESTFN, TESTFN)
+        zipfp.writestr("strfile", self.data)
         zipfp.close()
 
         # Read the ZIP archive
         zipfp = zipfile.ZipFile(f, "r", compression)
         self.assertEqual(zipfp.read(TESTFN), self.data)
         self.assertEqual(zipfp.read("another"+os.extsep+"name"), self.data)
+        self.assertEqual(zipfp.read("strfile"), self.data)
+
+        # Print the ZIP directory
+        fp = StringIO()
+        stdout = sys.stdout
+        try:
+            sys.stdout = fp
+
+            zipfp.printdir()
+        finally:
+            sys.stdout = stdout
+
+        directory = fp.getvalue()
+        lines = directory.splitlines()
+        self.assertEquals(len(lines), 4) # Number of files + header
+
+        self.assert_('File Name' in lines[0])
+        self.assert_('Modified' in lines[0])
+        self.assert_('Size' in lines[0])
+
+        fn, date, time, size = lines[1].split()
+        self.assertEquals(fn, 'another.name')
+        # XXX: timestamp is not tested
+        self.assertEquals(size, str(len(self.data)))
+
+        # Check the namelist
+        names = zipfp.namelist()
+        self.assertEquals(len(names), 3)
+        self.assert_(TESTFN in names)
+        self.assert_("another"+os.extsep+"name" in names)
+        self.assert_("strfile" in names)
+
+        # Check infolist
+        infos = zipfp.infolist()
+        names = [ i.filename for i in infos ]
+        self.assertEquals(len(names), 3)
+        self.assert_(TESTFN in names)
+        self.assert_("another"+os.extsep+"name" in names)
+        self.assert_("strfile" in names)
+        for i in infos:
+            self.assertEquals(i.file_size, len(self.data))
+
+        # check getinfo
+        for nm in (TESTFN, "another"+os.extsep+"name", "strfile"):
+            info = zipfp.getinfo(nm)
+            self.assertEquals(info.filename, nm)
+            self.assertEquals(info.file_size, len(self.data))
+
+        # Check that testzip doesn't raise an exception
+        zipfp.testzip()
+
+
         zipfp.close()
 
+
+
+
     def testStored(self):
         for f in (TESTFN2, TemporaryFile(), StringIO()):
             self.zipTest(f, zipfile.ZIP_STORED)
@@ -59,6 +115,197 @@
         os.remove(TESTFN)
         os.remove(TESTFN2)
 
+class TestZip64InSmallFiles(unittest.TestCase):
+    # These tests test the ZIP64 functionality without using large files,
+    # see test_zipfile64 for proper tests.
+
+    def setUp(self):
+        self._limit = zipfile.ZIP64_LIMIT
+        zipfile.ZIP64_LIMIT = 5
+
+        line_gen = ("Test of zipfile line %d." % i for i in range(0, 1000))
+        self.data = '\n'.join(line_gen)
+
+        # Make a source file with some lines
+        fp = open(TESTFN, "wb")
+        fp.write(self.data)
+        fp.close()
+
+    def largeFileExceptionTest(self, f, compression):
+        zipfp = zipfile.ZipFile(f, "w", compression)
+        self.assertRaises(zipfile.LargeZipFile,
+                zipfp.write, TESTFN, "another"+os.extsep+"name")
+        zipfp.close()
+
+    def largeFileExceptionTest2(self, f, compression):
+        zipfp = zipfile.ZipFile(f, "w", compression)
+        self.assertRaises(zipfile.LargeZipFile,
+                zipfp.writestr, "another"+os.extsep+"name", self.data)
+        zipfp.close()
+
+    def testLargeFileException(self):
+        for f in (TESTFN2, TemporaryFile(), StringIO()):
+            self.largeFileExceptionTest(f, zipfile.ZIP_STORED)
+            self.largeFileExceptionTest2(f, zipfile.ZIP_STORED)
+
+    def zipTest(self, f, compression):
+        # Create the ZIP archive
+        zipfp = zipfile.ZipFile(f, "w", compression, allowZip64=True)
+        zipfp.write(TESTFN, "another"+os.extsep+"name")
+        zipfp.write(TESTFN, TESTFN)
+        zipfp.writestr("strfile", self.data)
+        zipfp.close()
+
+        # Read the ZIP archive
+        zipfp = zipfile.ZipFile(f, "r", compression)
+        self.assertEqual(zipfp.read(TESTFN), self.data)
+        self.assertEqual(zipfp.read("another"+os.extsep+"name"), self.data)
+        self.assertEqual(zipfp.read("strfile"), self.data)
+
+        # Print the ZIP directory
+        fp = StringIO()
+        stdout = sys.stdout
+        try:
+            sys.stdout = fp
+
+            zipfp.printdir()
+        finally:
+            sys.stdout = stdout
+
+        directory = fp.getvalue()
+        lines = directory.splitlines()
+        self.assertEquals(len(lines), 4) # Number of files + header
+
+        self.assert_('File Name' in lines[0])
+        self.assert_('Modified' in lines[0])
+        self.assert_('Size' in lines[0])
+
+        fn, date, time, size = lines[1].split()
+        self.assertEquals(fn, 'another.name')
+        # XXX: timestamp is not tested
+        self.assertEquals(size, str(len(self.data)))
+
+        # Check the namelist
+        names = zipfp.namelist()
+        self.assertEquals(len(names), 3)
+        self.assert_(TESTFN in names)
+        self.assert_("another"+os.extsep+"name" in names)
+        self.assert_("strfile" in names)
+
+        # Check infolist
+        infos = zipfp.infolist()
+        names = [ i.filename for i in infos ]
+        self.assertEquals(len(names), 3)
+        self.assert_(TESTFN in names)
+        self.assert_("another"+os.extsep+"name" in names)
+        self.assert_("strfile" in names)
+        for i in infos:
+            self.assertEquals(i.file_size, len(self.data))
+
+        # check getinfo
+        for nm in (TESTFN, "another"+os.extsep+"name", "strfile"):
+            info = zipfp.getinfo(nm)
+            self.assertEquals(info.filename, nm)
+            self.assertEquals(info.file_size, len(self.data))
+
+        # Check that testzip doesn't raise an exception
+        zipfp.testzip()
+
+
+        zipfp.close()
+
+    def testStored(self):
+        for f in (TESTFN2, TemporaryFile(), StringIO()):
+            self.zipTest(f, zipfile.ZIP_STORED)
+
+
+    if zlib:
+        def testDeflated(self):
+            for f in (TESTFN2, TemporaryFile(), StringIO()):
+                self.zipTest(f, zipfile.ZIP_DEFLATED)
+
+    def testAbsoluteArcnames(self):
+        zipfp = zipfile.ZipFile(TESTFN2, "w", zipfile.ZIP_STORED, allowZip64=True)
+        zipfp.write(TESTFN, "/absolute")
+        zipfp.close()
+
+        zipfp = zipfile.ZipFile(TESTFN2, "r", zipfile.ZIP_STORED)
+        self.assertEqual(zipfp.namelist(), ["absolute"])
+        zipfp.close()
+
+
+    def tearDown(self):
+        zipfile.ZIP64_LIMIT = self._limit
+        os.remove(TESTFN)
+        os.remove(TESTFN2)
+
+class PyZipFileTests(unittest.TestCase):
+    def testWritePyfile(self):
+        zipfp  = zipfile.PyZipFile(TemporaryFile(), "w")
+        fn = __file__
+        if fn.endswith('.pyc') or fn.endswith('.pyo'):
+            fn = fn[:-1]
+
+        zipfp.writepy(fn)
+
+        bn = os.path.basename(fn)
+        self.assert_(bn not in zipfp.namelist())
+        self.assert_(bn + 'o' in zipfp.namelist() or bn + 'c' in zipfp.namelist())
+        zipfp.close()
+
+
+        zipfp  = zipfile.PyZipFile(TemporaryFile(), "w")
+        fn = __file__
+        if fn.endswith('.pyc') or fn.endswith('.pyo'):
+            fn = fn[:-1]
+
+        zipfp.writepy(fn, "testpackage")
+
+        bn = "%s/%s"%("testpackage", os.path.basename(fn))
+        self.assert_(bn not in zipfp.namelist())
+        self.assert_(bn + 'o' in zipfp.namelist() or bn + 'c' in zipfp.namelist())
+        zipfp.close()
+
+    def testWritePythonPackage(self):
+        import email
+        packagedir = os.path.dirname(email.__file__)
+
+        zipfp  = zipfile.PyZipFile(TemporaryFile(), "w")
+        zipfp.writepy(packagedir)
+
+        # Check for a couple of modules at different levels of the hierarchy
+        names = zipfp.namelist()
+        self.assert_('email/__init__.pyo' in names or 'email/__init__.pyc' in names)
+        self.assert_('email/mime/text.pyo' in names or 'email/mime/text.pyc' in names)
+
+    def testWritePythonDirectory(self):
+        os.mkdir(TESTFN2)
+        try:
+            fp = open(os.path.join(TESTFN2, "mod1.py"), "w")
+            fp.write("print 42\n")
+            fp.close()
+
+            fp = open(os.path.join(TESTFN2, "mod2.py"), "w")
+            fp.write("print 42 * 42\n")
+            fp.close()
+
+            fp = open(os.path.join(TESTFN2, "mod2.txt"), "w")
+            fp.write("bla bla bla\n")
+            fp.close()
+
+            zipfp  = zipfile.PyZipFile(TemporaryFile(), "w")
+            zipfp.writepy(TESTFN2)
+
+            names = zipfp.namelist()
+            self.assert_('mod1.pyc' in names or 'mod1.pyo' in names)
+            self.assert_('mod2.pyc' in names or 'mod2.pyo' in names)
+            self.assert_('mod2.txt' not in names)
+
+        finally:
+            shutil.rmtree(TESTFN2)
+
+
+
 class OtherTests(unittest.TestCase):
     def testCloseErroneousFile(self):
         # This test checks that the ZipFile constructor closes the file object
@@ -103,7 +350,8 @@
         self.assertRaises(RuntimeError, zipf.testzip)
 
 def test_main():
-    run_unittest(TestsWithSourceFile, OtherTests)
+    run_unittest(TestsWithSourceFile, TestZip64InSmallFiles, OtherTests, PyZipFileTests)
+    #run_unittest(TestZip64InSmallFiles)
 
 if __name__ == "__main__":
     test_main()
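
TestZip64InSmallFiles shrinks zipfile.ZIP64_LIMIT so the ZIP64 code paths can be
exercised with tiny archives.  A hedged sketch (not part of the patch) of the two
behaviours those tests pin down, with an arbitrary member name and data:

    import zipfile
    from StringIO import StringIO

    old_limit = zipfile.ZIP64_LIMIT
    zipfile.ZIP64_LIMIT = 5                  # pretend 5 bytes already counts as "large"
    try:
        zf = zipfile.ZipFile(StringIO(), 'w', zipfile.ZIP_STORED)
        try:
            zf.writestr('member', 'well over five bytes')
        except zipfile.LargeZipFile:
            print 'rejected: allowZip64 defaults to False'
        zf.close()

        zf = zipfile.ZipFile(StringIO(), 'w', zipfile.ZIP_STORED, allowZip64=True)
        zf.writestr('member', 'well over five bytes')   # accepted via ZIP64 extensions
        zf.close()
    finally:
        zipfile.ZIP64_LIMIT = old_limit
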
diff --git a/Lib/test/test_zipfile64.py b/Lib/test/test_zipfile64.py
new file mode 100644
index 0000000..449cf39
--- /dev/null
+++ b/Lib/test/test_zipfile64.py
@@ -0,0 +1,101 @@
+# Tests of the full ZIP64 functionality of zipfile
+# The test_support.requires call is the only reason for keeping this separate
+# from test_zipfile
+from test import test_support
+# XXX(nnorwitz): disable this test by looking for the extra largefile resource,
+# which doesn't exist.  This test takes over 30 minutes to run in general
+# and requires more disk space than most of the buildbots have.
+test_support.requires(
+        'extralargefile',
+        'test requires loads of disk-space bytes and a long time to run'
+    )
+
+# We can test part of the module without zlib.
+try:
+    import zlib
+except ImportError:
+    zlib = None
+
+import zipfile, os, unittest
+import time
+import sys
+
+from StringIO import StringIO
+from tempfile import TemporaryFile
+
+from test.test_support import TESTFN, run_unittest
+
+TESTFN2 = TESTFN + "2"
+
+# How much time in seconds can pass before we print a 'Still working' message.
+_PRINT_WORKING_MSG_INTERVAL = 5 * 60
+
+class TestsWithSourceFile(unittest.TestCase):
+    def setUp(self):
+        # Create test data.
+        # xrange() is important here -- don't want to create immortal space
+        # for a million ints.
+        line_gen = ("Test of zipfile line %d." % i for i in xrange(1000000))
+        self.data = '\n'.join(line_gen)
+
+        # And write it to a file.
+        fp = open(TESTFN, "wb")
+        fp.write(self.data)
+        fp.close()
+
+    def zipTest(self, f, compression):
+        # Create the ZIP archive.
+        zipfp = zipfile.ZipFile(f, "w", compression, allowZip64=True)
+
+        # It will contain enough copies of self.data to reach about 6GB of
+        # raw data to store.
+        filecount = 6*1024**3 // len(self.data)
+
+        next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL
+        for num in range(filecount):
+            zipfp.writestr("testfn%d" % num, self.data)
+            # Print still working message since this test can be really slow
+            if next_time <= time.time():
+                next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL
+                print >>sys.__stdout__, (
+                   '  zipTest still writing %d of %d, be patient...' %
+                   (num, filecount))
+                sys.__stdout__.flush()
+        zipfp.close()
+
+        # Read the ZIP archive
+        zipfp = zipfile.ZipFile(f, "r", compression)
+        for num in range(filecount):
+            self.assertEqual(zipfp.read("testfn%d" % num), self.data)
+            # Print still working message since this test can be really slow
+            if next_time <= time.time():
+                next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL
+                print >>sys.__stdout__, (
+                   '  zipTest still reading %d of %d, be patient...' %
+                   (num, filecount))
+                sys.__stdout__.flush()
+        zipfp.close()
+
+    def testStored(self):
+        # Try the temp file first.  If we do TESTFN2 first, then it hogs
+        # gigabytes of disk space for the duration of the test.
+        for f in TemporaryFile(), TESTFN2:
+            self.zipTest(f, zipfile.ZIP_STORED)
+
+    if zlib:
+        def testDeflated(self):
+            # Try the temp file first.  If we do TESTFN2 first, then it hogs
+            # gigabytes of disk space for the duration of the test.
+            for f in TemporaryFile(), TESTFN2:
+                self.zipTest(f, zipfile.ZIP_DEFLATED)
+
+    def tearDown(self):
+        for fname in TESTFN, TESTFN2:
+            if os.path.exists(fname):
+                os.remove(fname)
+
+def test_main():
+    run_unittest(TestsWithSourceFile)
+
+if __name__ == "__main__":
+    test_main()
diff --git a/Lib/test/test_zlib.py b/Lib/test/test_zlib.py
index ccbc8fd..4440942 100644
--- a/Lib/test/test_zlib.py
+++ b/Lib/test/test_zlib.py
@@ -302,63 +302,65 @@
         dco = zlib.decompressobj()
         self.assertEqual(dco.flush(), "") # Returns nothing
 
-    def test_compresscopy(self):
-        # Test copying a compression object
-        data0 = HAMLET_SCENE
-        data1 = HAMLET_SCENE.swapcase()
-        c0 = zlib.compressobj(zlib.Z_BEST_COMPRESSION)
-        bufs0 = []
-        bufs0.append(c0.compress(data0))
+    if hasattr(zlib.compressobj(), "copy"):
+        def test_compresscopy(self):
+            # Test copying a compression object
+            data0 = HAMLET_SCENE
+            data1 = HAMLET_SCENE.swapcase()
+            c0 = zlib.compressobj(zlib.Z_BEST_COMPRESSION)
+            bufs0 = []
+            bufs0.append(c0.compress(data0))
 
-        c1 = c0.copy()
-        bufs1 = bufs0[:]
+            c1 = c0.copy()
+            bufs1 = bufs0[:]
 
-        bufs0.append(c0.compress(data0))
-        bufs0.append(c0.flush())
-        s0 = ''.join(bufs0)
+            bufs0.append(c0.compress(data0))
+            bufs0.append(c0.flush())
+            s0 = ''.join(bufs0)
 
-        bufs1.append(c1.compress(data1))
-        bufs1.append(c1.flush())
-        s1 = ''.join(bufs1)
+            bufs1.append(c1.compress(data1))
+            bufs1.append(c1.flush())
+            s1 = ''.join(bufs1)
 
-        self.assertEqual(zlib.decompress(s0),data0+data0)
-        self.assertEqual(zlib.decompress(s1),data0+data1)
+            self.assertEqual(zlib.decompress(s0),data0+data0)
+            self.assertEqual(zlib.decompress(s1),data0+data1)
 
-    def test_badcompresscopy(self):
-        # Test copying a compression object in an inconsistent state
-        c = zlib.compressobj()
-        c.compress(HAMLET_SCENE)
-        c.flush()
-        self.assertRaises(ValueError, c.copy)
+        def test_badcompresscopy(self):
+            # Test copying a compression object in an inconsistent state
+            c = zlib.compressobj()
+            c.compress(HAMLET_SCENE)
+            c.flush()
+            self.assertRaises(ValueError, c.copy)
 
-    def test_decompresscopy(self):
-        # Test copying a decompression object
-        data = HAMLET_SCENE
-        comp = zlib.compress(data)
+    if hasattr(zlib.decompressobj(), "copy"):
+        def test_decompresscopy(self):
+            # Test copying a decompression object
+            data = HAMLET_SCENE
+            comp = zlib.compress(data)
 
-        d0 = zlib.decompressobj()
-        bufs0 = []
-        bufs0.append(d0.decompress(comp[:32]))
+            d0 = zlib.decompressobj()
+            bufs0 = []
+            bufs0.append(d0.decompress(comp[:32]))
 
-        d1 = d0.copy()
-        bufs1 = bufs0[:]
+            d1 = d0.copy()
+            bufs1 = bufs0[:]
 
-        bufs0.append(d0.decompress(comp[32:]))
-        s0 = ''.join(bufs0)
+            bufs0.append(d0.decompress(comp[32:]))
+            s0 = ''.join(bufs0)
 
-        bufs1.append(d1.decompress(comp[32:]))
-        s1 = ''.join(bufs1)
+            bufs1.append(d1.decompress(comp[32:]))
+            s1 = ''.join(bufs1)
 
-        self.assertEqual(s0,s1)
-        self.assertEqual(s0,data)
+            self.assertEqual(s0,s1)
+            self.assertEqual(s0,data)
 
-    def test_baddecompresscopy(self):
-        # Test copying a compression object in an inconsistent state
-        data = zlib.compress(HAMLET_SCENE)
-        d = zlib.decompressobj()
-        d.decompress(data)
-        d.flush()
-        self.assertRaises(ValueError, d.copy)
+        def test_baddecompresscopy(self):
+            # Test copying a compression object in an inconsistent state
+            data = zlib.compress(HAMLET_SCENE)
+            d = zlib.decompressobj()
+            d.decompress(data)
+            d.flush()
+            self.assertRaises(ValueError, d.copy)
 
 def genblock(seed, length, step=1024, generator=random):
     """length-byte stream of random data from a seed (in step-byte blocks)."""
diff --git a/Lib/textwrap.py b/Lib/textwrap.py
index 7c68280..ccff2ab 100644
--- a/Lib/textwrap.py
+++ b/Lib/textwrap.py
@@ -317,41 +317,58 @@
 
 # -- Loosely related functionality -------------------------------------
 
+_whitespace_only_re = re.compile('^[ \t]+$', re.MULTILINE)
+_leading_whitespace_re = re.compile('(^[ \t]*)(?:[^ \t\n])', re.MULTILINE)
+
 def dedent(text):
-    """dedent(text : string) -> string
+    """Remove any common leading whitespace from every line in `text`.
 
-    Remove any whitespace than can be uniformly removed from the left
-    of every line in `text`.
+    This can be used to make triple-quoted strings line up with the left
+    edge of the display, while still presenting them in the source code
+    in indented form.
 
-    This can be used e.g. to make triple-quoted strings line up with
-    the left edge of screen/whatever, while still presenting it in the
-    source code in indented form.
-
-    For example:
-
-        def test():
-            # end first line with \ to avoid the empty line!
-            s = '''\
-            hello
-              world
-            '''
-            print repr(s)          # prints '    hello\n      world\n    '
-            print repr(dedent(s))  # prints 'hello\n  world\n'
+    Note that tabs and spaces are both treated as whitespace, but they
+    are not equal: the lines "  hello" and "\thello" are
+    considered to have no common leading whitespace.  (This behaviour is
+    new in Python 2.5; older versions of this module incorrectly
+    expanded tabs before searching for common leading whitespace.)
     """
-    lines = text.expandtabs().split('\n')
+    # Look for the longest leading string of spaces and tabs common to
+    # all lines.
     margin = None
-    for line in lines:
-        content = line.lstrip()
-        if not content:
-            continue
-        indent = len(line) - len(content)
+    text = _whitespace_only_re.sub('', text)
+    indents = _leading_whitespace_re.findall(text)
+    for indent in indents:
         if margin is None:
             margin = indent
+
+        # Current line more deeply indented than previous winner:
+        # no change (previous winner is still on top).
+        elif indent.startswith(margin):
+            pass
+
+        # Current line consistent with and no deeper than previous winner:
+        # it's the new winner.
+        elif margin.startswith(indent):
+            margin = indent
+
+        # Current line and previous winner have no common whitespace:
+        # there is no margin.
         else:
-            margin = min(margin, indent)
+            margin = ""
+            break
 
-    if margin is not None and margin > 0:
-        for i in range(len(lines)):
-            lines[i] = lines[i][margin:]
+    # sanity check (testing/debugging only)
+    if 0 and margin:
+        for line in text.split("\n"):
+            assert not line or line.startswith(margin), \
+                   "line = %r, margin = %r" % (line, margin)
 
-    return '\n'.join(lines)
+    if margin:
+        text = re.sub(r'(?m)^' + margin, '', text)
+    return text
+
+if __name__ == "__main__":
+    #print dedent("\tfoo\n\tbar")
+    #print dedent("  \thello there\n  \t  how are you?")
+    print dedent("Hello there.\n  This is indented.")
diff --git a/Lib/threading.py b/Lib/threading.py
index c27140d..5655dde 100644
--- a/Lib/threading.py
+++ b/Lib/threading.py
@@ -15,7 +15,7 @@
 # Rename some stuff so "from threading import *" is safe
 __all__ = ['activeCount', 'Condition', 'currentThread', 'enumerate', 'Event',
            'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread',
-           'Timer', 'setprofile', 'settrace', 'local']
+           'Timer', 'setprofile', 'settrace', 'local', 'stack_size']
 
 _start_new_thread = thread.start_new_thread
 _allocate_lock = thread.allocate_lock
@@ -713,6 +713,8 @@
     _active_limbo_lock.release()
     return active
 
+from thread import stack_size
+
 # Create the main thread object
 
 _MainThread()
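
threading.stack_size is simply re-exported from the thread module.  A hedged usage
sketch; changing the size may raise an error on platforms that do not support it,
and the minimum accepted value is platform-dependent:

    import threading

    previous = threading.stack_size()      # 0 means "platform default"
    threading.stack_size(256 * 1024)       # request 256 KiB for threads started from now on
    # ... start worker threads here ...
    threading.stack_size(previous)         # restore the old setting
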
diff --git a/Lib/trace.py b/Lib/trace.py
index ca6294e..db36e1d 100644
--- a/Lib/trace.py
+++ b/Lib/trace.py
@@ -285,7 +285,7 @@
             if filename == "<string>":
                 continue
 
-            if filename.endswith(".pyc") or filename.endswith(".pyo"):
+            if filename.endswith((".pyc", ".pyo")):
                 filename = filename[:-1]
 
             if coverdir is None:
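
The one-line change above leans on str.endswith() accepting a tuple of suffixes,
which is new in Python 2.5:

    filename = "spam.pyo"
    if filename.endswith((".pyc", ".pyo")):
        filename = filename[:-1]
    print filename                          # spam.py
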
diff --git a/Lib/traceback.py b/Lib/traceback.py
index f5c4b29..efd0f75 100644
--- a/Lib/traceback.py
+++ b/Lib/traceback.py
@@ -150,50 +150,53 @@
 
     The arguments are the exception type and value such as given by
     sys.last_type and sys.last_value. The return value is a list of
-    strings, each ending in a newline.  Normally, the list contains a
-    single string; however, for SyntaxError exceptions, it contains
-    several lines that (when printed) display detailed information
-    about where the syntax error occurred.  The message indicating
-    which exception occurred is the always last string in the list.
+    strings, each ending in a newline.
+
+    Normally, the list contains a single string; however, for
+    SyntaxError exceptions, it contains several lines that (when
+    printed) display detailed information about where the syntax
+    error occurred.
+
+    The message indicating which exception occurred is always the last
+    string in the list.
+
     """
-    list = []
-    if (type(etype) == types.ClassType
-        or (isinstance(etype, type) and issubclass(etype, BaseException))):
-        stype = etype.__name__
+
+    stype = etype.__name__
+
+    if not issubclass(etype, SyntaxError):
+        return [_format_final_exc_line(stype, value)]
+
+    # It was a syntax error; show exactly where the problem was found.
+    lines = []
+    try:
+        msg, (filename, lineno, offset, badline) = value
+    except Exception:
+        pass
     else:
-        stype = etype
-    if value is None:
-        list.append(str(stype) + '\n')
+        filename = filename or "<string>"
+        lines.append('  File "%s", line %d\n' % (filename, lineno))
+        if badline is not None:
+            lines.append('    %s\n' % badline.strip())
+            if offset is not None:
+                caretspace = badline[:offset].lstrip()
+                # non-space whitespace (like tabs) must be kept for alignment
+                caretspace = ((c.isspace() and c or ' ') for c in caretspace)
+                # only three spaces to account for offset1 == pos 0
+                lines.append('   %s^\n' % ''.join(caretspace))
+            value = msg
+
+    lines.append(_format_final_exc_line(stype, value))
+    return lines
+
+def _format_final_exc_line(etype, value):
+    """Return a list of a single line -- normal case for format_exception_only"""
+    valuestr = _some_str(value)
+    if value is None or not valuestr:
+        line = "%s\n" % etype
     else:
-        if issubclass(etype, SyntaxError):
-            try:
-                msg, (filename, lineno, offset, line) = value
-            except:
-                pass
-            else:
-                if not filename: filename = "<string>"
-                list.append('  File "%s", line %d\n' %
-                            (filename, lineno))
-                if line is not None:
-                    i = 0
-                    while i < len(line) and line[i].isspace():
-                        i = i+1
-                    list.append('    %s\n' % line.strip())
-                    if offset is not None:
-                        s = '    '
-                        for c in line[i:offset-1]:
-                            if c.isspace():
-                                s = s + c
-                            else:
-                                s = s + ' '
-                        list.append('%s^\n' % s)
-                    value = msg
-        s = _some_str(value)
-        if s:
-            list.append('%s: %s\n' % (str(stype), s))
-        else:
-            list.append('%s\n' % str(stype))
-    return list
+        line = "%s: %s\n" % (etype, valuestr)
+    return line
 
 def _some_str(value):
     try:
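
A sketch of how the reworked format_exception_only() behaves for the two cases it
distinguishes; exact error messages vary between interpreter versions, so none are
asserted here:

    import traceback

    # Ordinary exception: a single "Type: message" line.
    for line in traceback.format_exception_only(ValueError, ValueError('bad value')):
        print line,

    # SyntaxError: file/line information plus a caret under the offset.
    try:
        compile("1 +", "<demo>", "exec")
    except SyntaxError, exc:
        for line in traceback.format_exception_only(SyntaxError, exc):
            print line,
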
diff --git a/Lib/types.py b/Lib/types.py
index db63c96..5a89ad1 100644
--- a/Lib/types.py
+++ b/Lib/types.py
@@ -84,4 +84,16 @@
 DictProxyType = type(TypeType.__dict__)
 NotImplementedType = type(NotImplemented)
 
-del sys, _f, _g, _C                 # Not for export
+# Extension types defined in a C helper module.  XXX There may be no
+# equivalent in implementations other than CPython, so it seems better to
+# leave them undefined than to set them to e.g. None.
+try:
+    import _types
+except ImportError:
+    pass
+else:
+    GetSetDescriptorType = type(_types.Helper.getter)
+    MemberDescriptorType = type(_types.Helper.member)
+    del _types
+
+del sys, _f, _g, _C,                              # Not for export
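
Assuming the optional _types helper module is available (exactly what the guarded
import above checks), the two new names identify CPython's descriptor types; a
quick sketch:

    import types

    class Point(object):
        __slots__ = ('x', 'y')

    # slot attributes are member descriptors
    print type(Point.x) is types.MemberDescriptorType
    # attributes defined via getset tables in C are getset descriptors
    print type(type.__dict__['__dict__']) is types.GetSetDescriptorType
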
diff --git a/Lib/urllib.py b/Lib/urllib.py
index f0ae53a..d2a4c48 100644
--- a/Lib/urllib.py
+++ b/Lib/urllib.py
@@ -118,7 +118,7 @@
         self.proxies = proxies
         self.key_file = x509.get('key_file')
         self.cert_file = x509.get('cert_file')
-        self.addheaders = [('User-agent', self.version)]
+        self.addheaders = [('User-Agent', self.version)]
         self.__tempfiles = []
         self.__unlink = os.unlink # See cleanup()
         self.tempcache = None
@@ -314,8 +314,8 @@
         h = httplib.HTTP(host)
         if data is not None:
             h.putrequest('POST', selector)
-            h.putheader('Content-type', 'application/x-www-form-urlencoded')
-            h.putheader('Content-length', '%d' % len(data))
+            h.putheader('Content-Type', 'application/x-www-form-urlencoded')
+            h.putheader('Content-Length', '%d' % len(data))
         else:
             h.putrequest('GET', selector)
         if proxy_auth: h.putheader('Proxy-Authorization', 'Basic %s' % proxy_auth)
@@ -400,9 +400,9 @@
                               cert_file=self.cert_file)
             if data is not None:
                 h.putrequest('POST', selector)
-                h.putheader('Content-type',
+                h.putheader('Content-Type',
                             'application/x-www-form-urlencoded')
-                h.putheader('Content-length', '%d' % len(data))
+                h.putheader('Content-Length', '%d' % len(data))
             else:
                 h.putrequest('GET', selector)
             if proxy_auth: h.putheader('Proxy-Authorization: Basic %s' % proxy_auth)
@@ -584,7 +584,7 @@
             data = base64.decodestring(data)
         else:
             data = unquote(data)
-        msg.append('Content-length: %d' % len(data))
+        msg.append('Content-Length: %d' % len(data))
         msg.append('')
         msg.append(data)
         msg = '\n'.join(msg)
diff --git a/Lib/urllib2.py b/Lib/urllib2.py
index 227311c..6ee9e2c 100644
--- a/Lib/urllib2.py
+++ b/Lib/urllib2.py
@@ -263,11 +263,11 @@
 
     def add_header(self, key, val):
         # useful for something like authentication
-        self.headers[key.capitalize()] = val
+        self.headers[key.title()] = val
 
     def add_unredirected_header(self, key, val):
         # will not be added to a redirected request
-        self.unredirected_hdrs[key.capitalize()] = val
+        self.unredirected_hdrs[key.title()] = val
 
     def has_header(self, header_name):
         return (header_name in self.headers or
@@ -286,7 +286,7 @@
 class OpenerDirector:
     def __init__(self):
         client_version = "Python-urllib/%s" % __version__
-        self.addheaders = [('User-agent', client_version)]
+        self.addheaders = [('User-Agent', client_version)]
         # manage the individual handlers
         self.handlers = []
         self.handle_open = {}
@@ -675,7 +675,7 @@
         if user and password:
             user_pass = '%s:%s' % (unquote(user), unquote(password))
             creds = base64.encodestring(user_pass).strip()
-            req.add_header('Proxy-authorization', 'Basic ' + creds)
+            req.add_header('Proxy-Authorization', 'Basic ' + creds)
         hostport = unquote(hostport)
         req.set_proxy(hostport, proxy_type)
         if orig_type == proxy_type:
@@ -819,7 +819,7 @@
 
 class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler):
 
-    auth_header = 'Proxy-authorization'
+    auth_header = 'Proxy-Authorization'
 
     def http_error_407(self, req, fp, code, msg, headers):
         # http_error_auth_reqed requires that there is no userinfo component in
@@ -1022,20 +1022,20 @@
 
         if request.has_data():  # POST
             data = request.get_data()
-            if not request.has_header('Content-type'):
+            if not request.has_header('Content-Type'):
                 request.add_unredirected_header(
-                    'Content-type',
+                    'Content-Type',
                     'application/x-www-form-urlencoded')
-            if not request.has_header('Content-length'):
+            if not request.has_header('Content-Length'):
                 request.add_unredirected_header(
-                    'Content-length', '%d' % len(data))
+                    'Content-Length', '%d' % len(data))
 
         scheme, sel = splittype(request.get_selector())
         sel_host, sel_path = splithost(sel)
         if not request.has_header('Host'):
             request.add_unredirected_header('Host', sel_host or host)
         for name, value in self.parent.addheaders:
-            name = name.capitalize()
+            name = name.title()
             if not request.has_header(name):
                 request.add_unredirected_header(name, value)
 
@@ -1217,7 +1217,7 @@
         modified = email.Utils.formatdate(stats.st_mtime, usegmt=True)
         mtype = mimetypes.guess_type(file)[0]
         headers = mimetools.Message(StringIO(
-            'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' %
+            'Content-Type: %s\nContent-Length: %d\nLast-Modified: %s\n' %
             (mtype or 'text/plain', size, modified)))
         if host:
             host, port = splitport(host)
@@ -1272,9 +1272,9 @@
             headers = ""
             mtype = mimetypes.guess_type(req.get_full_url())[0]
             if mtype:
-                headers += "Content-type: %s\n" % mtype
+                headers += "Content-Type: %s\n" % mtype
             if retrlen is not None and retrlen >= 0:
-                headers += "Content-length: %d\n" % retrlen
+                headers += "Content-Length: %d\n" % retrlen
             sf = StringIO(headers)
             headers = mimetools.Message(sf)
             return addinfourl(fp, headers, req.get_full_url())
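
Worked example of why the header names above move from str.capitalize() to
str.title(): capitalize() lower-cases everything after the first character, which
mangles multi-word header names.

    print 'content-type'.capitalize()    # Content-type
    print 'content-type'.title()         # Content-Type
    print 'x-forwarded-for'.title()      # X-Forwarded-For
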
diff --git a/Lib/uuid.py b/Lib/uuid.py
new file mode 100644
index 0000000..a6446a1
--- /dev/null
+++ b/Lib/uuid.py
@@ -0,0 +1,515 @@
+r"""UUID objects (universally unique identifiers) according to RFC 4122.
+
+This module provides immutable UUID objects (class UUID) and the functions
+uuid1(), uuid3(), uuid4(), uuid5() for generating version 1, 3, 4, and 5
+UUIDs as specified in RFC 4122.
+
+If all you want is a unique ID, you should probably call uuid1() or uuid4().
+Note that uuid1() may compromise privacy since it creates a UUID containing
+the computer's network address.  uuid4() creates a random UUID.
+
+Typical usage:
+
+    >>> import uuid
+
+    # make a UUID based on the host ID and current time
+    >>> uuid.uuid1()
+    UUID('a8098c1a-f86e-11da-bd1a-00112444be1e')
+
+    # make a UUID using an MD5 hash of a namespace UUID and a name
+    >>> uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')
+    UUID('6fa459ea-ee8a-3ca4-894e-db77e160355e')
+
+    # make a random UUID
+    >>> uuid.uuid4()
+    UUID('16fd2706-8baf-433b-82eb-8c7fada847da')
+
+    # make a UUID using a SHA-1 hash of a namespace UUID and a name
+    >>> uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
+    UUID('886313e1-3b8a-5372-9b90-0c9aee199e5d')
+
+    # make a UUID from a string of hex digits (braces and hyphens ignored)
+    >>> x = uuid.UUID('{00010203-0405-0607-0809-0a0b0c0d0e0f}')
+
+    # convert a UUID to a string of hex digits in standard form
+    >>> str(x)
+    '00010203-0405-0607-0809-0a0b0c0d0e0f'
+
+    # get the raw 16 bytes of the UUID
+    >>> x.bytes
+    '\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f'
+
+    # make a UUID from a 16-byte string
+    >>> uuid.UUID(bytes=x.bytes)
+    UUID('00010203-0405-0607-0809-0a0b0c0d0e0f')
+"""
+
+__author__ = 'Ka-Ping Yee <ping@zesty.ca>'
+__date__ = '$Date: 2006/06/12 23:15:40 $'.split()[1].replace('/', '-')
+__version__ = '$Revision: 1.30 $'.split()[1]
+
+RESERVED_NCS, RFC_4122, RESERVED_MICROSOFT, RESERVED_FUTURE = [
+    'reserved for NCS compatibility', 'specified in RFC 4122',
+    'reserved for Microsoft compatibility', 'reserved for future definition']
+
+class UUID(object):
+    """Instances of the UUID class represent UUIDs as specified in RFC 4122.
+    UUID objects are immutable, hashable, and usable as dictionary keys.
+    Converting a UUID to a string with str() yields something in the form
+    '12345678-1234-1234-1234-123456789abc'.  The UUID constructor accepts
+    four possible forms: a similar string of hexadecimal digits, or a
+    string of 16 raw bytes as an argument named 'bytes', or a tuple of
+    six integer fields (with 32-bit, 16-bit, 16-bit, 8-bit, 8-bit, and
+    48-bit values respectively) as an argument named 'fields', or a single
+    128-bit integer as an argument named 'int'.
+
+    UUIDs have these read-only attributes:
+
+        bytes       the UUID as a 16-byte string
+
+        fields      a tuple of the six integer fields of the UUID,
+                    which are also available as six individual attributes
+                    and two derived attributes:
+
+            time_low                the first 32 bits of the UUID
+            time_mid                the next 16 bits of the UUID
+            time_hi_version         the next 16 bits of the UUID
+            clock_seq_hi_variant    the next 8 bits of the UUID
+            clock_seq_low           the next 8 bits of the UUID
+            node                    the last 48 bits of the UUID
+
+            time                    the 60-bit timestamp
+            clock_seq               the 14-bit sequence number
+
+        hex         the UUID as a 32-character hexadecimal string
+
+        int         the UUID as a 128-bit integer
+
+        urn         the UUID as a URN as specified in RFC 4122
+
+        variant     the UUID variant (one of the constants RESERVED_NCS,
+                    RFC_4122, RESERVED_MICROSOFT, or RESERVED_FUTURE)
+
+        version     the UUID version number (1 through 5, meaningful only
+                    when the variant is RFC_4122)
+    """
+
+    def __init__(self, hex=None, bytes=None, fields=None, int=None,
+                       version=None):
+        r"""Create a UUID from either a string of 32 hexadecimal digits,
+        a string of 16 bytes as the 'bytes' argument, a tuple of six
+        integers (32-bit time_low, 16-bit time_mid, 16-bit time_hi_version,
+        8-bit clock_seq_hi_variant, 8-bit clock_seq_low, 48-bit node) as
+        the 'fields' argument, or a single 128-bit integer as the 'int'
+        argument.  When a string of hex digits is given, curly braces,
+        hyphens, and a URN prefix are all optional.  For example, these
+        expressions all yield the same UUID:
+
+        UUID('{12345678-1234-5678-1234-567812345678}')
+        UUID('12345678123456781234567812345678')
+        UUID('urn:uuid:12345678-1234-5678-1234-567812345678')
+        UUID(bytes='\x12\x34\x56\x78'*4)
+        UUID(fields=(0x12345678, 0x1234, 0x5678, 0x12, 0x34, 0x567812345678))
+        UUID(int=0x12345678123456781234567812345678)
+
+        Exactly one of 'hex', 'bytes', 'fields', or 'int' must be given.
+        The 'version' argument is optional; if given, the resulting UUID
+        will have its variant and version number set according to RFC 4122,
+        overriding bits in the given 'hex', 'bytes', 'fields', or 'int'.
+        """
+
+        if [hex, bytes, fields, int].count(None) != 3:
+            raise TypeError('need just one of hex, bytes, fields, or int')
+        if hex is not None:
+            hex = hex.replace('urn:', '').replace('uuid:', '')
+            hex = hex.strip('{}').replace('-', '')
+            if len(hex) != 32:
+                raise ValueError('badly formed hexadecimal UUID string')
+            int = long(hex, 16)
+        if bytes is not None:
+            if len(bytes) != 16:
+                raise ValueError('bytes is not a 16-char string')
+            int = long(('%02x'*16) % tuple(map(ord, bytes)), 16)
+        if fields is not None:
+            if len(fields) != 6:
+                raise ValueError('fields is not a 6-tuple')
+            (time_low, time_mid, time_hi_version,
+             clock_seq_hi_variant, clock_seq_low, node) = fields
+            if not 0 <= time_low < 1<<32L:
+                raise ValueError('field 1 out of range (need a 32-bit value)')
+            if not 0 <= time_mid < 1<<16L:
+                raise ValueError('field 2 out of range (need a 16-bit value)')
+            if not 0 <= time_hi_version < 1<<16L:
+                raise ValueError('field 3 out of range (need a 16-bit value)')
+            if not 0 <= clock_seq_hi_variant < 1<<8L:
+                raise ValueError('field 4 out of range (need an 8-bit value)')
+            if not 0 <= clock_seq_low < 1<<8L:
+                raise ValueError('field 5 out of range (need an 8-bit value)')
+            if not 0 <= node < 1<<48L:
+                raise ValueError('field 6 out of range (need a 48-bit value)')
+            clock_seq = (clock_seq_hi_variant << 8L) | clock_seq_low
+            int = ((time_low << 96L) | (time_mid << 80L) |
+                   (time_hi_version << 64L) | (clock_seq << 48L) | node)
+        if int is not None:
+            if not 0 <= int < 1<<128L:
+                raise ValueError('int is out of range (need a 128-bit value)')
+        if version is not None:
+            if not 1 <= version <= 5:
+                raise ValueError('illegal version number')
+            # Set the variant to RFC 4122.
+            int &= ~(0xc000 << 48L)
+            int |= 0x8000 << 48L
+            # Set the version number.
+            int &= ~(0xf000 << 64L)
+            int |= version << 76L
+        self.__dict__['int'] = int
+
+    def __cmp__(self, other):
+        if isinstance(other, UUID):
+            return cmp(self.int, other.int)
+        return NotImplemented
+
+    def __hash__(self):
+        return hash(self.int)
+
+    def __int__(self):
+        return self.int
+
+    def __repr__(self):
+        return 'UUID(%r)' % str(self)
+
+    def __setattr__(self, name, value):
+        raise TypeError('UUID objects are immutable')
+
+    def __str__(self):
+        hex = '%032x' % self.int
+        return '%s-%s-%s-%s-%s' % (
+            hex[:8], hex[8:12], hex[12:16], hex[16:20], hex[20:])
+
+    def get_bytes(self):
+        bytes = ''
+        for shift in range(0, 128, 8):
+            bytes = chr((self.int >> shift) & 0xff) + bytes
+        return bytes
+
+    bytes = property(get_bytes)
+
+    def get_fields(self):
+        return (self.time_low, self.time_mid, self.time_hi_version,
+                self.clock_seq_hi_variant, self.clock_seq_low, self.node)
+
+    fields = property(get_fields)
+
+    def get_time_low(self):
+        return self.int >> 96L
+
+    time_low = property(get_time_low)
+
+    def get_time_mid(self):
+        return (self.int >> 80L) & 0xffff
+
+    time_mid = property(get_time_mid)
+
+    def get_time_hi_version(self):
+        return (self.int >> 64L) & 0xffff
+
+    time_hi_version = property(get_time_hi_version)
+
+    def get_clock_seq_hi_variant(self):
+        return (self.int >> 56L) & 0xff
+
+    clock_seq_hi_variant = property(get_clock_seq_hi_variant)
+
+    def get_clock_seq_low(self):
+        return (self.int >> 48L) & 0xff
+
+    clock_seq_low = property(get_clock_seq_low)
+
+    def get_time(self):
+        return (((self.time_hi_version & 0x0fffL) << 48L) |
+                (self.time_mid << 32L) | self.time_low)
+
+    time = property(get_time)
+
+    def get_clock_seq(self):
+        return (((self.clock_seq_hi_variant & 0x3fL) << 8L) |
+                self.clock_seq_low)
+
+    clock_seq = property(get_clock_seq)
+
+    def get_node(self):
+        return self.int & 0xffffffffffff
+
+    node = property(get_node)
+
+    def get_hex(self):
+        return '%032x' % self.int
+
+    hex = property(get_hex)
+
+    def get_urn(self):
+        return 'urn:uuid:' + str(self)
+
+    urn = property(get_urn)
+
+    def get_variant(self):
+        if not self.int & (0x8000 << 48L):
+            return RESERVED_NCS
+        elif not self.int & (0x4000 << 48L):
+            return RFC_4122
+        elif not self.int & (0x2000 << 48L):
+            return RESERVED_MICROSOFT
+        else:
+            return RESERVED_FUTURE
+
+    variant = property(get_variant)
+
+    def get_version(self):
+        # The version bits are only meaningful for RFC 4122 UUIDs.
+        if self.variant == RFC_4122:
+            return int((self.int >> 76L) & 0xf)
+
+    version = property(get_version)
+
+def _find_mac(command, args, hw_identifiers, get_index):
+    import os
+    for dir in ['', '/sbin/', '/usr/sbin']:
+        executable = os.path.join(dir, command)
+        if not os.path.exists(executable):
+            continue
+
+        try:
+            # LC_ALL to get English output, 2>/dev/null to
+            # prevent output on stderr
+            cmd = 'LC_ALL=C %s %s 2>/dev/null' % (executable, args)
+            pipe = os.popen(cmd)
+        except IOError:
+            continue
+
+        for line in pipe:
+            words = line.lower().split()
+            for i in range(len(words)):
+                if words[i] in hw_identifiers:
+                    return int(words[get_index(i)].replace(':', ''), 16)
+    return None
+
+def _ifconfig_getnode():
+    """Get the hardware address on Unix by running ifconfig."""
+
+    # This works on Linux ('' or '-a'), Tru64 ('-av'), but not all Unixes.
+    for args in ('', '-a', '-av'):
+        mac = _find_mac('ifconfig', args, ['hwaddr', 'ether'], lambda i: i+1)
+        if mac:
+            return mac
+
+    import socket
+    ip_addr = socket.gethostbyname(socket.gethostname())
+
+    # Try getting the MAC addr from arp based on our IP address (Solaris).
+    mac = _find_mac('arp', '-an', [ip_addr], lambda i: -1)
+    if mac:
+        return mac
+
+    # This might work on HP-UX.
+    mac = _find_mac('lanscan', '-ai', ['lan0'], lambda i: 0)
+    if mac:
+        return mac
+
+    return None
+
+def _ipconfig_getnode():
+    """Get the hardware address on Windows by running ipconfig.exe."""
+    import os, re
+    dirs = ['', r'c:\windows\system32', r'c:\winnt\system32']
+    try:
+        import ctypes
+        buffer = ctypes.create_string_buffer(300)
+        ctypes.windll.kernel32.GetSystemDirectoryA(buffer, 300)
+        dirs.insert(0, buffer.value.decode('mbcs'))
+    except:
+        pass
+    for dir in dirs:
+        try:
+            pipe = os.popen(os.path.join(dir, 'ipconfig') + ' /all')
+        except IOError:
+            continue
+        for line in pipe:
+            value = line.split(':')[-1].strip().lower()
+            if re.match('([0-9a-f][0-9a-f]-){5}[0-9a-f][0-9a-f]', value):
+                return int(value.replace('-', ''), 16)
+
+def _netbios_getnode():
+    """Get the hardware address on Windows using NetBIOS calls.
+    See http://support.microsoft.com/kb/118623 for details."""
+    import win32wnet, netbios
+    ncb = netbios.NCB()
+    ncb.Command = netbios.NCBENUM
+    ncb.Buffer = adapters = netbios.LANA_ENUM()
+    adapters._pack()
+    if win32wnet.Netbios(ncb) != 0:
+        return
+    adapters._unpack()
+    for i in range(adapters.length):
+        ncb.Reset()
+        ncb.Command = netbios.NCBRESET
+        ncb.Lana_num = ord(adapters.lana[i])
+        if win32wnet.Netbios(ncb) != 0:
+            continue
+        ncb.Reset()
+        ncb.Command = netbios.NCBASTAT
+        ncb.Lana_num = ord(adapters.lana[i])
+        ncb.Callname = '*'.ljust(16)
+        ncb.Buffer = status = netbios.ADAPTER_STATUS()
+        if win32wnet.Netbios(ncb) != 0:
+            continue
+        status._unpack()
+        bytes = map(ord, status.adapter_address)
+        return ((bytes[0]<<40L) + (bytes[1]<<32L) + (bytes[2]<<24L) +
+                (bytes[3]<<16L) + (bytes[4]<<8L) + bytes[5])
+
+# Thanks to Thomas Heller for ctypes and for his help with its use here.
+
+# If ctypes is available, use it to find system routines for UUID generation.
+_uuid_generate_random = _uuid_generate_time = _UuidCreate = None
+try:
+    import ctypes, ctypes.util
+    _buffer = ctypes.create_string_buffer(16)
+
+    # The uuid_generate_* routines are provided by libuuid on at least
+    # Linux and FreeBSD, and provided by libc on Mac OS X.
+    for libname in ['uuid', 'c']:
+        try:
+            lib = ctypes.CDLL(ctypes.util.find_library(libname))
+        except:
+            continue
+        if hasattr(lib, 'uuid_generate_random'):
+            _uuid_generate_random = lib.uuid_generate_random
+        if hasattr(lib, 'uuid_generate_time'):
+            _uuid_generate_time = lib.uuid_generate_time
+
+    # On Windows prior to 2000, UuidCreate gives a UUID containing the
+    # hardware address.  On Windows 2000 and later, UuidCreate makes a
+    # random UUID and UuidCreateSequential gives a UUID containing the
+    # hardware address.  These routines are provided by the RPC runtime.
+    # NOTE:  at least on Tim's WinXP Pro SP2 desktop box, while the last
+    # 6 bytes returned by UuidCreateSequential are fixed, they don't appear
+    # to bear any relationship to the MAC address of any network device
+    # on the box.
+    try:
+        lib = ctypes.windll.rpcrt4
+    except:
+        lib = None
+    _UuidCreate = getattr(lib, 'UuidCreateSequential',
+                          getattr(lib, 'UuidCreate', None))
+except:
+    pass
+
+def _unixdll_getnode():
+    """Get the hardware address on Unix using ctypes."""
+    _uuid_generate_time(_buffer)
+    return UUID(bytes=_buffer.raw).node
+
+def _windll_getnode():
+    """Get the hardware address on Windows using ctypes."""
+    if _UuidCreate(_buffer) == 0:
+        return UUID(bytes=_buffer.raw).node
+
+def _random_getnode():
+    """Get a random node ID, with eighth bit set as suggested by RFC 4122."""
+    import random
+    return random.randrange(0, 1<<48L) | 0x010000000000L
+
+_node = None
+
+def getnode():
+    """Get the hardware address as a 48-bit positive integer.
+
+    The first time this runs, it may launch a separate program, which could
+    be quite slow.  If all attempts to obtain the hardware address fail, we
+    choose a random 48-bit number with its eighth bit set to 1 as recommended
+    in RFC 4122.
+    """
+
+    global _node
+    if _node is not None:
+        return _node
+
+    import sys
+    if sys.platform == 'win32':
+        getters = [_windll_getnode, _netbios_getnode, _ipconfig_getnode]
+    else:
+        getters = [_unixdll_getnode, _ifconfig_getnode]
+
+    for getter in getters + [_random_getnode]:
+        try:
+            _node = getter()
+        except:
+            continue
+        if _node is not None:
+            return _node
+
+def uuid1(node=None, clock_seq=None):
+    """Generate a UUID from a host ID, sequence number, and the current time.
+    If 'node' is not given, getnode() is used to obtain the hardware
+    address.  If 'clock_seq' is given, it is used as the sequence number;
+    otherwise a random 14-bit sequence number is chosen."""
+
+    # When the system provides a version-1 UUID generator, use it (but don't
+    # use UuidCreate here because its UUIDs don't conform to RFC 4122).
+    if _uuid_generate_time and node is clock_seq is None:
+        _uuid_generate_time(_buffer)
+        return UUID(bytes=_buffer.raw)
+
+    import time
+    nanoseconds = int(time.time() * 1e9)
+    # 0x01b21dd213814000 is the number of 100-ns intervals between the
+    # UUID epoch 1582-10-15 00:00:00 and the Unix epoch 1970-01-01 00:00:00.
+    timestamp = int(nanoseconds/100) + 0x01b21dd213814000L
+    if clock_seq is None:
+        import random
+        clock_seq = random.randrange(1<<14L) # instead of stable storage
+    time_low = timestamp & 0xffffffffL
+    time_mid = (timestamp >> 32L) & 0xffffL
+    time_hi_version = (timestamp >> 48L) & 0x0fffL
+    clock_seq_low = clock_seq & 0xffL
+    clock_seq_hi_variant = (clock_seq >> 8L) & 0x3fL
+    if node is None:
+        node = getnode()
+    return UUID(fields=(time_low, time_mid, time_hi_version,
+                        clock_seq_hi_variant, clock_seq_low, node), version=1)
+
+def uuid3(namespace, name):
+    """Generate a UUID from the MD5 hash of a namespace UUID and a name."""
+    import md5
+    hash = md5.md5(namespace.bytes + name).digest()
+    return UUID(bytes=hash[:16], version=3)
+
+def uuid4():
+    """Generate a random UUID."""
+
+    # When the system provides a version-4 UUID generator, use it.
+    if _uuid_generate_random:
+        _uuid_generate_random(_buffer)
+        return UUID(bytes=_buffer.raw)
+
+    # Otherwise, get randomness from urandom or the 'random' module.
+    try:
+        import os
+        return UUID(bytes=os.urandom(16), version=4)
+    except:
+        import random
+        bytes = [chr(random.randrange(256)) for i in range(16)]
+        return UUID(bytes=bytes, version=4)
+
+def uuid5(namespace, name):
+    """Generate a UUID from the SHA-1 hash of a namespace UUID and a name."""
+    import sha
+    hash = sha.sha(namespace.bytes + name).digest()
+    return UUID(bytes=hash[:16], version=5)
+
+# The following standard UUIDs are for use with uuid3() or uuid5().
+
+NAMESPACE_DNS = UUID('6ba7b810-9dad-11d1-80b4-00c04fd430c8')
+NAMESPACE_URL = UUID('6ba7b811-9dad-11d1-80b4-00c04fd430c8')
+NAMESPACE_OID = UUID('6ba7b812-9dad-11d1-80b4-00c04fd430c8')
+NAMESPACE_X500 = UUID('6ba7b814-9dad-11d1-80b4-00c04fd430c8')
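Beyond the constructor examples in the module docstring, the derived attributes described in the UUID class docstring (hex, urn, variant, version) can be exercised against the namespace constants defined at the end of the file; a minimal interactive sketch of what the code above should produce:

    >>> import uuid
    >>> u = uuid.NAMESPACE_DNS
    >>> u.hex
    '6ba7b8109dad11d180b400c04fd430c8'
    >>> u.urn
    'urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8'
    >>> u.variant == uuid.RFC_4122
    True
    >>> u.version
    1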
diff --git a/Lib/warnings.py b/Lib/warnings.py
index b5d75e4..b7fac69 100644
--- a/Lib/warnings.py
+++ b/Lib/warnings.py
@@ -46,7 +46,7 @@
     filename = globals.get('__file__')
     if filename:
         fnl = filename.lower()
-        if fnl.endswith(".pyc") or fnl.endswith(".pyo"):
+        if fnl.endswith((".pyc", ".pyo")):
             filename = filename[:-1]
     else:
         if module == "__main__":
@@ -254,11 +254,11 @@
             cat = getattr(m, klass)
         except AttributeError:
             raise _OptionError("unknown warning category: %r" % (category,))
-    if (not isinstance(cat, types.ClassType) or
-        not issubclass(cat, Warning)):
+    if not issubclass(cat, Warning):
         raise _OptionError("invalid warning category: %r" % (category,))
     return cat
 
 # Module initialization
 _processoptions(sys.warnoptions)
 simplefilter("ignore", category=PendingDeprecationWarning, append=1)
+simplefilter("ignore", category=ImportWarning, append=1)
diff --git a/Lib/webbrowser.py b/Lib/webbrowser.py
index 4693fe7..7a1a3b4 100644
--- a/Lib/webbrowser.py
+++ b/Lib/webbrowser.py
@@ -98,8 +98,7 @@
 if sys.platform[:3] == "win":
     def _isexecutable(cmd):
         cmd = cmd.lower()
-        if os.path.isfile(cmd) and (cmd.endswith(".exe") or
-                                    cmd.endswith(".bat")):
+        if os.path.isfile(cmd) and cmd.endswith((".exe", ".bat")):
             return True
         for ext in ".exe", ".bat":
             if os.path.isfile(cmd + ext):
@@ -435,13 +434,13 @@
     # The default Gnome browser
     if _iscommand("gconftool-2"):
         # get the web browser string from gconftool
-        gc = 'gconftool-2 -g /desktop/gnome/url-handlers/http/command'
+        gc = 'gconftool-2 -g /desktop/gnome/url-handlers/http/command 2>/dev/null'
         out = os.popen(gc)
         commd = out.read().strip()
         retncode = out.close()
 
         # if successful, register it
-        if retncode == None and len(commd) != 0:
+        if retncode is None and commd:
             register("gnome", None, BackgroundBrowser(commd))
 
     # First, the Mozilla/Netscape browsers
diff --git a/Lib/wsgiref.egg-info b/Lib/wsgiref.egg-info
new file mode 100644
index 0000000..c0b7893
--- /dev/null
+++ b/Lib/wsgiref.egg-info
@@ -0,0 +1,8 @@
+Metadata-Version: 1.0
+Name: wsgiref
+Version: 0.1.2
+Summary: WSGI (PEP 333) Reference Library
+Author: Phillip J. Eby
+Author-email: web-sig@python.org
+License: PSF or ZPL
+Platform: UNKNOWN
diff --git a/Lib/wsgiref/__init__.py b/Lib/wsgiref/__init__.py
new file mode 100644
index 0000000..46c579f
--- /dev/null
+++ b/Lib/wsgiref/__init__.py
@@ -0,0 +1,23 @@
+"""wsgiref -- a WSGI (PEP 333) Reference Library
+
+Current Contents:
+
+* util -- Miscellaneous useful functions and wrappers
+
+* headers -- Manage response headers
+
+* handlers -- base classes for server/gateway implementations
+
+* simple_server -- a simple BaseHTTPServer that supports WSGI
+
+* validate -- validation wrapper that sits between an app and a server
+  to detect errors in either
+
+To-Do:
+
+* cgi_gateway -- Run WSGI apps under CGI (pending a deployment standard)
+
+* cgi_wrapper -- Run CGI apps under WSGI
+
+* router -- a simple middleware component that handles URL traversal
+"""
diff --git a/Lib/wsgiref/handlers.py b/Lib/wsgiref/handlers.py
new file mode 100644
index 0000000..099371b
--- /dev/null
+++ b/Lib/wsgiref/handlers.py
@@ -0,0 +1,492 @@
+"""Base classes for server/gateway implementations"""
+
+from types import StringType
+from util import FileWrapper, guess_scheme, is_hop_by_hop
+from headers import Headers
+
+import sys, os, time
+
+__all__ = ['BaseHandler', 'SimpleHandler', 'BaseCGIHandler', 'CGIHandler']
+
+try:
+    dict
+except NameError:
+    def dict(items):
+        d = {}
+        for k,v in items:
+            d[k] = v
+        return d
+
+try:
+    True
+    False
+except NameError:
+    True = not None
+    False = not True
+
+
+# Weekday and month names for HTTP date/time formatting; always English!
+_weekdayname = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
+_monthname = [None, # Dummy so we can use 1-based month numbers
+              "Jan", "Feb", "Mar", "Apr", "May", "Jun",
+              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
+
+def format_date_time(timestamp):
+    year, month, day, hh, mm, ss, wd, y, z = time.gmtime(timestamp)
+    return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
+        _weekdayname[wd], day, _monthname[month], year, hh, mm, ss
+    )
+
+
+
+class BaseHandler:
+    """Manage the invocation of a WSGI application"""
+
+    # Configuration parameters; can override per-subclass or per-instance
+    wsgi_version = (1,0)
+    wsgi_multithread = True
+    wsgi_multiprocess = True
+    wsgi_run_once = False
+
+    origin_server = True    # We are transmitting direct to client
+    http_version  = "1.0"   # Version that should be used for response
+    server_software = None  # String name of server software, if any
+
+    # os_environ is used to supply configuration from the OS environment:
+    # by default it's a copy of 'os.environ' as of import time, but you can
+    # override this in e.g. your __init__ method.
+    os_environ = dict(os.environ.items())
+
+    # Collaborator classes
+    wsgi_file_wrapper = FileWrapper     # set to None to disable
+    headers_class = Headers             # must be a Headers-like class
+
+    # Error handling (also per-subclass or per-instance)
+    traceback_limit = None  # Print entire traceback to self.get_stderr()
+    error_status = "500 Dude, this is whack!"
+    error_headers = [('Content-Type','text/plain')]
+    error_body = "A server error occurred.  Please contact the administrator."
+
+    # State variables (don't mess with these)
+    status = result = None
+    headers_sent = False
+    headers = None
+    bytes_sent = 0
+
+
+
+
+
+
+
+
+    def run(self, application):
+        """Invoke the application"""
+        # Note to self: don't move the close()!  Asynchronous servers shouldn't
+        # call close() from finish_response(), so if you close() anywhere but
+        # the double-error branch here, you'll break asynchronous servers by
+        # prematurely closing.  Async servers must return from 'run()' without
+        # closing if there might still be output to iterate over.
+        try:
+            self.setup_environ()
+            self.result = application(self.environ, self.start_response)
+            self.finish_response()
+        except:
+            try:
+                self.handle_error()
+            except:
+                # If we get an error handling an error, just give up already!
+                self.close()
+                raise   # ...and let the actual server figure it out.
+
+
+    def setup_environ(self):
+        """Set up the environment for one request"""
+
+        env = self.environ = self.os_environ.copy()
+        self.add_cgi_vars()
+
+        env['wsgi.input']        = self.get_stdin()
+        env['wsgi.errors']       = self.get_stderr()
+        env['wsgi.version']      = self.wsgi_version
+        env['wsgi.run_once']     = self.wsgi_run_once
+        env['wsgi.url_scheme']   = self.get_scheme()
+        env['wsgi.multithread']  = self.wsgi_multithread
+        env['wsgi.multiprocess'] = self.wsgi_multiprocess
+
+        if self.wsgi_file_wrapper is not None:
+            env['wsgi.file_wrapper'] = self.wsgi_file_wrapper
+
+        if self.origin_server and self.server_software:
+            env.setdefault('SERVER_SOFTWARE',self.server_software)
+
+
+    def finish_response(self):
+        """Send any iterable data, then close self and the iterable
+
+        Subclasses intended for use in asynchronous servers will
+        want to redefine this method, such that it sets up callbacks
+        in the event loop to iterate over the data, and to call
+        'self.close()' once the response is finished.
+        """
+        if not self.result_is_file() or not self.sendfile():
+            for data in self.result:
+                self.write(data)
+            self.finish_content()
+        self.close()
+
+
+    def get_scheme(self):
+        """Return the URL scheme being used"""
+        return guess_scheme(self.environ)
+
+
+    def set_content_length(self):
+        """Compute Content-Length or switch to chunked encoding if possible"""
+        try:
+            blocks = len(self.result)
+        except (TypeError,AttributeError,NotImplementedError):
+            pass
+        else:
+            if blocks==1:
+                self.headers['Content-Length'] = str(self.bytes_sent)
+                return
+        # XXX Try for chunked encoding if origin server and client is 1.1
+
+
+    def cleanup_headers(self):
+        """Make any necessary header changes or defaults
+
+        Subclasses can extend this to add other defaults.
+        """
+        if not self.headers.has_key('Content-Length'):
+            self.set_content_length()
+
+    def start_response(self, status, headers,exc_info=None):
+        """'start_response()' callable as specified by PEP 333"""
+
+        if exc_info:
+            try:
+                if self.headers_sent:
+                    # Re-raise original exception if headers sent
+                    raise exc_info[0], exc_info[1], exc_info[2]
+            finally:
+                exc_info = None        # avoid dangling circular ref
+        elif self.headers is not None:
+            raise AssertionError("Headers already set!")
+
+        assert type(status) is StringType,"Status must be a string"
+        assert len(status)>=4,"Status must be at least 4 characters"
+        assert int(status[:3]),"Status message must begin w/3-digit code"
+        assert status[3]==" ", "Status message must have a space after code"
+        if __debug__:
+            for name,val in headers:
+                assert type(name) is StringType,"Header names must be strings"
+                assert type(val) is StringType,"Header values must be strings"
+                assert not is_hop_by_hop(name),"Hop-by-hop headers not allowed"
+        self.status = status
+        self.headers = self.headers_class(headers)
+        return self.write
+
+
+    def send_preamble(self):
+        """Transmit version/status/date/server, via self._write()"""
+        if self.origin_server:
+            if self.client_is_modern():
+                self._write('HTTP/%s %s\r\n' % (self.http_version,self.status))
+                if not self.headers.has_key('Date'):
+                    self._write(
+                        'Date: %s\r\n' % format_date_time(time.time())
+                    )
+                if self.server_software and not self.headers.has_key('Server'):
+                    self._write('Server: %s\r\n' % self.server_software)
+        else:
+            self._write('Status: %s\r\n' % self.status)
+
+    def write(self, data):
+        """'write()' callable as specified by PEP 333"""
+
+        assert type(data) is StringType,"write() argument must be string"
+
+        if not self.status:
+            raise AssertionError("write() before start_response()")
+
+        elif not self.headers_sent:
+            # Before the first output, send the stored headers
+            self.bytes_sent = len(data)    # make sure we know content-length
+            self.send_headers()
+        else:
+            self.bytes_sent += len(data)
+
+        # XXX check Content-Length and truncate if too many bytes written?
+        self._write(data)
+        self._flush()
+
+
+    def sendfile(self):
+        """Platform-specific file transmission
+
+        Override this method in subclasses to support platform-specific
+        file transmission.  It is only called if the application's
+        return iterable ('self.result') is an instance of
+        'self.wsgi_file_wrapper'.
+
+        This method should return a true value if it was able to actually
+        transmit the wrapped file-like object using a platform-specific
+        approach.  It should return a false value if normal iteration
+        should be used instead.  An exception can be raised to indicate
+        that transmission was attempted, but failed.
+
+        NOTE: this method should call 'self.send_headers()' if
+        'self.headers_sent' is false and it is going to attempt direct
+        transmission of the file.
+        """
+        return False   # No platform-specific transmission by default
+
+
+    def finish_content(self):
+        """Ensure headers and content have both been sent"""
+        if not self.headers_sent:
+            self.headers['Content-Length'] = "0"
+            self.send_headers()
+        else:
+            pass # XXX check if content-length was too short?
+
+    def close(self):
+        """Close the iterable (if needed) and reset all instance vars
+
+        Subclasses may want to also drop the client connection.
+        """
+        try:
+            if hasattr(self.result,'close'):
+                self.result.close()
+        finally:
+            self.result = self.headers = self.status = self.environ = None
+            self.bytes_sent = 0; self.headers_sent = False
+
+
+    def send_headers(self):
+        """Transmit headers to the client, via self._write()"""
+        self.cleanup_headers()
+        self.headers_sent = True
+        if not self.origin_server or self.client_is_modern():
+            self.send_preamble()
+            self._write(str(self.headers))
+
+
+    def result_is_file(self):
+        """True if 'self.result' is an instance of 'self.wsgi_file_wrapper'"""
+        wrapper = self.wsgi_file_wrapper
+        return wrapper is not None and isinstance(self.result,wrapper)
+
+
+    def client_is_modern(self):
+        """True if client can accept status and headers"""
+        return self.environ['SERVER_PROTOCOL'].upper() != 'HTTP/0.9'
+
+
+    def log_exception(self,exc_info):
+        """Log the 'exc_info' tuple in the server log
+
+        Subclasses may override to retarget the output or change its format.
+        """
+        try:
+            from traceback import print_exception
+            stderr = self.get_stderr()
+            print_exception(
+                exc_info[0], exc_info[1], exc_info[2],
+                self.traceback_limit, stderr
+            )
+            stderr.flush()
+        finally:
+            exc_info = None
+
+    def handle_error(self):
+        """Log current error, and send error output to client if possible"""
+        self.log_exception(sys.exc_info())
+        if not self.headers_sent:
+            self.result = self.error_output(self.environ, self.start_response)
+            self.finish_response()
+        # XXX else: attempt advanced recovery techniques for HTML or text?
+
+    def error_output(self, environ, start_response):
+        """WSGI mini-app to create error output
+
+        By default, this just uses the 'error_status', 'error_headers',
+        and 'error_body' attributes to generate an output page.  It can
+        be overridden in a subclass to dynamically generate diagnostics,
+        choose an appropriate message for the user's preferred language, etc.
+
+        Note, however, that it's not recommended from a security perspective to
+        spit out diagnostics to any old user; ideally, you should have to do
+        something special to enable diagnostic output, which is why we don't
+        include any here!
+        """
+        start_response(self.error_status,self.error_headers[:],sys.exc_info())
+        return [self.error_body]
+
+
+    # Pure abstract methods; *must* be overridden in subclasses
+
+    def _write(self,data):
+        """Override in subclass to buffer data for send to client
+
+        It's okay if this method actually transmits the data; BaseHandler
+        just separates write and flush operations for greater efficiency
+        when the underlying system actually has such a distinction.
+        """
+        raise NotImplementedError
+
+    def _flush(self):
+        """Override in subclass to force sending of recent '_write()' calls
+
+        It's okay if this method is a no-op (i.e., if '_write()' actually
+        sends the data).
+        """
+        raise NotImplementedError
+
+    def get_stdin(self):
+        """Override in subclass to return suitable 'wsgi.input'"""
+        raise NotImplementedError
+
+    def get_stderr(self):
+        """Override in subclass to return suitable 'wsgi.errors'"""
+        raise NotImplementedError
+
+    def add_cgi_vars(self):
+        """Override in subclass to insert CGI variables in 'self.environ'"""
+        raise NotImplementedError
+
+
+
+
+
+
+
+
+
+
+
+class SimpleHandler(BaseHandler):
+    """Handler that's just initialized with streams, environment, etc.
+
+    This handler subclass is intended for synchronous HTTP/1.0 origin servers,
+    and handles sending the entire response output, given the correct inputs.
+
+    Usage::
+
+        handler = SimpleHandler(
+            inp,out,err,env, multithread=False, multiprocess=True
+        )
+        handler.run(app)"""
+
+    def __init__(self,stdin,stdout,stderr,environ,
+        multithread=True, multiprocess=False
+    ):
+        self.stdin = stdin
+        self.stdout = stdout
+        self.stderr = stderr
+        self.base_env = environ
+        self.wsgi_multithread = multithread
+        self.wsgi_multiprocess = multiprocess
+
+    def get_stdin(self):
+        return self.stdin
+
+    def get_stderr(self):
+        return self.stderr
+
+    def add_cgi_vars(self):
+        self.environ.update(self.base_env)
+
+    def _write(self,data):
+        self.stdout.write(data)
+        self._write = self.stdout.write
+
+    def _flush(self):
+        self.stdout.flush()
+        self._flush = self.stdout.flush
+
+
+class BaseCGIHandler(SimpleHandler):
+
+    """CGI-like systems using input/output/error streams and environ mapping
+
+    Usage::
+
+        handler = BaseCGIHandler(inp,out,err,env)
+        handler.run(app)
+
+    This handler class is useful for gateway protocols like ReadyExec and
+    FastCGI, that have usable input/output/error streams and an environment
+    mapping.  It's also the base class for CGIHandler, which just uses
+    sys.stdin, os.environ, and so on.
+
+    The constructor also takes keyword arguments 'multithread' and
+    'multiprocess' (defaulting to 'True' and 'False' respectively) to control
+    the configuration sent to the application.  It sets 'origin_server' to
+    False (to enable CGI-like output), and assumes that 'wsgi.run_once' is
+    False.
+    """
+
+    origin_server = False
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+class CGIHandler(BaseCGIHandler):
+
+    """CGI-based invocation via sys.stdin/stdout/stderr and os.environ
+
+    Usage::
+
+        CGIHandler().run(app)
+
+    The difference between this class and BaseCGIHandler is that it always
+    uses 'wsgi.run_once' of 'True', 'wsgi.multithread' of 'False', and
+    'wsgi.multiprocess' of 'True'.  It does not take any initialization
+    parameters, but always uses 'sys.stdin', 'os.environ', and friends.
+
+    If you need to override any of these parameters, use BaseCGIHandler
+    instead.
+    """
+
+    wsgi_run_once = True
+
+    def __init__(self):
+        BaseCGIHandler.__init__(
+            self, sys.stdin, sys.stdout, sys.stderr, dict(os.environ.items()),
+            multithread=False, multiprocess=True
+        )
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+#
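BaseCGIHandler's docstring shows the stream-driven calling convention; a minimal offline sketch, assuming wsgiref.util (added later in this patch) is available, drives a handler entirely through StringIO objects, with no socket involved:

    from StringIO import StringIO
    from wsgiref.handlers import BaseCGIHandler
    from wsgiref.util import setup_testing_defaults

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello from a handler-driven app\n']

    environ = {}
    setup_testing_defaults(environ)     # fill in dummy CGI/WSGI variables
    stdin, stdout, stderr = StringIO(), StringIO(), StringIO()

    handler = BaseCGIHandler(stdin, stdout, stderr, environ)
    handler.run(app)
    # stdout now holds the 'Status:' line, the headers, a blank line,
    # and then the body.
    print stdout.getvalue()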
diff --git a/Lib/wsgiref/headers.py b/Lib/wsgiref/headers.py
new file mode 100644
index 0000000..016eb86
--- /dev/null
+++ b/Lib/wsgiref/headers.py
@@ -0,0 +1,205 @@
+"""Manage HTTP Response Headers
+
+Much of this module is red-handedly pilfered from email.Message in the stdlib,
+so portions are Copyright (C) 2001,2002 Python Software Foundation, and were
+written by Barry Warsaw.
+"""
+
+from types import ListType, TupleType
+
+# Regular expression that matches `special' characters in parameters, the
+# existence of which forces quoting of the parameter value.
+import re
+tspecials = re.compile(r'[ \(\)<>@,;:\\"/\[\]\?=]')
+
+def _formatparam(param, value=None, quote=1):
+    """Convenience function to format and return a key=value pair.
+
+    This will quote the value if needed or if quote is true.
+    """
+    if value is not None and len(value) > 0:
+        if quote or tspecials.search(value):
+            value = value.replace('\\', '\\\\').replace('"', r'\"')
+            return '%s="%s"' % (param, value)
+        else:
+            return '%s=%s' % (param, value)
+    else:
+        return param
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+class Headers:
+
+    """Manage a collection of HTTP response headers"""
+
+    def __init__(self,headers):
+        if type(headers) is not ListType:
+            raise TypeError("Headers must be a list of name/value tuples")
+        self._headers = headers
+
+    def __len__(self):
+        """Return the total number of headers, including duplicates."""
+        return len(self._headers)
+
+    def __setitem__(self, name, val):
+        """Set the value of a header."""
+        del self[name]
+        self._headers.append((name, val))
+
+    def __delitem__(self,name):
+        """Delete all occurrences of a header, if present.
+
+        Does *not* raise an exception if the header is missing.
+        """
+        name = name.lower()
+        self._headers[:] = [kv for kv in self._headers if kv[0].lower()<>name]
+
+    def __getitem__(self,name):
+        """Get the first header value for 'name'
+
+        Return None if the header is missing instead of raising an exception.
+
+        Note that if the header appears multiple times, exactly which
+        occurrence gets returned is undefined.  Use get_all() to get all
+        the values matching a header field name.
+        """
+        return self.get(name)
+
+
+
+
+
+    def has_key(self, name):
+        """Return true if the message contains the header."""
+        return self.get(name) is not None
+
+    __contains__ = has_key
+
+
+    def get_all(self, name):
+        """Return a list of all the values for the named field.
+
+        These will be sorted in the order they appeared in the original header
+        list or were added to this instance, and may contain duplicates.  Any
+        fields deleted and re-inserted are always appended to the header list.
+        If no fields exist with the given name, returns an empty list.
+        """
+        name = name.lower()
+        return [kv[1] for kv in self._headers if kv[0].lower()==name]
+
+
+    def get(self,name,default=None):
+        """Get the first header value for 'name', or return 'default'"""
+        name = name.lower()
+        for k,v in self._headers:
+            if k.lower()==name:
+                return v
+        return default
+
+
+    def keys(self):
+        """Return a list of all the header field names.
+
+        These will be sorted in the order they appeared in the original header
+        list, or were added to this instance, and may contain duplicates.
+        Any fields deleted and re-inserted are always appended to the header
+        list.
+        """
+        return [k for k, v in self._headers]
+
+
+
+
+    def values(self):
+        """Return a list of all header values.
+
+        These will be sorted in the order they appeared in the original header
+        list, or were added to this instance, and may contain duplicates.
+        Any fields deleted and re-inserted are always appended to the header
+        list.
+        """
+        return [v for k, v in self._headers]
+
+    def items(self):
+        """Get all the header fields and values.
+
+        These will be sorted in the order they were in the original header
+        list, or were added to this instance, and may contain duplicates.
+        Any fields deleted and re-inserted are always appended to the header
+        list.
+        """
+        return self._headers[:]
+
+    def __repr__(self):
+        return "Headers(%s)" % `self._headers`
+
+    def __str__(self):
+        """str() returns the formatted headers, complete with end line,
+        suitable for direct HTTP transmission."""
+        return '\r\n'.join(["%s: %s" % kv for kv in self._headers]+['',''])
+
+    def setdefault(self,name,value):
+        """Return first matching header value for 'name', or 'value'
+
+        If there is no header named 'name', add a new header with name 'name'
+        and value 'value'."""
+        result = self.get(name)
+        if result is None:
+            self._headers.append((name,value))
+            return value
+        else:
+            return result
+
+
+    def add_header(self, _name, _value, **_params):
+        """Extended header setting.
+
+        _name is the header field to add.  keyword arguments can be used to set
+        additional parameters for the header field, with underscores converted
+        to dashes.  Normally the parameter will be added as key="value" unless
+        value is None, in which case only the key will be added.
+
+        Example:
+
+        h.add_header('content-disposition', 'attachment', filename='bud.gif')
+
+        Note that unlike the corresponding 'email.Message' method, this does
+        *not* handle '(charset, language, value)' tuples: all values must be
+        strings or None.
+        """
+        parts = []
+        if _value is not None:
+            parts.append(_value)
+        for k, v in _params.items():
+            if v is None:
+                parts.append(k.replace('_', '-'))
+            else:
+                parts.append(_formatparam(k.replace('_', '-'), v))
+        self._headers.append((_name, "; ".join(parts)))
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+#
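The Headers class above is a case-insensitive, order-preserving wrapper over a plain list of (name, value) tuples; a short interactive sketch of the main operations:

    >>> from wsgiref.headers import Headers
    >>> h = Headers([])
    >>> h['Content-Type'] = 'text/plain'
    >>> h.add_header('Content-Disposition', 'attachment', filename='log.txt')
    >>> h['content-type']                  # lookups are case-insensitive
    'text/plain'
    >>> h.items()
    [('Content-Type', 'text/plain'), ('Content-Disposition', 'attachment; filename="log.txt"')]
    >>> str(h)
    'Content-Type: text/plain\r\nContent-Disposition: attachment; filename="log.txt"\r\n\r\n'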
diff --git a/Lib/wsgiref/simple_server.py b/Lib/wsgiref/simple_server.py
new file mode 100644
index 0000000..95996cc
--- /dev/null
+++ b/Lib/wsgiref/simple_server.py
@@ -0,0 +1,205 @@
+"""BaseHTTPServer that implements the Python WSGI protocol (PEP 333, rev 1.21)
+
+This is both an example of how WSGI can be implemented, and a basis for running
+simple web applications on a local machine, such as might be done when testing
+or debugging an application.  It has not been reviewed for security issues,
+however, and we strongly recommend that you use a "real" web server for
+production use.
+
+For example usage, see the 'if __name__=="__main__"' block at the end of the
+module.  See also the BaseHTTPServer module docs for other API information.
+"""
+
+from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
+import urllib, sys
+from wsgiref.handlers import SimpleHandler
+
+__version__ = "0.1"
+__all__ = ['WSGIServer', 'WSGIRequestHandler', 'demo_app', 'make_server']
+
+
+server_version = "WSGIServer/" + __version__
+sys_version = "Python/" + sys.version.split()[0]
+software_version = server_version + ' ' + sys_version
+
+
+class ServerHandler(SimpleHandler):
+
+    server_software = software_version
+
+    def close(self):
+        try:
+            self.request_handler.log_request(
+                self.status.split(' ',1)[0], self.bytes_sent
+            )
+        finally:
+            SimpleHandler.close(self)
+
+
+
+
+
+class WSGIServer(HTTPServer):
+
+    """BaseHTTPServer that implements the Python WSGI protocol"""
+
+    application = None
+
+    def server_bind(self):
+        """Override server_bind to store the server name."""
+        HTTPServer.server_bind(self)
+        self.setup_environ()
+
+    def setup_environ(self):
+        # Set up base environment
+        env = self.base_environ = {}
+        env['SERVER_NAME'] = self.server_name
+        env['GATEWAY_INTERFACE'] = 'CGI/1.1'
+        env['SERVER_PORT'] = str(self.server_port)
+        env['REMOTE_HOST']=''
+        env['CONTENT_LENGTH']=''
+        env['SCRIPT_NAME'] = ''
+
+    def get_app(self):
+        return self.application
+
+    def set_app(self,application):
+        self.application = application
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+class WSGIRequestHandler(BaseHTTPRequestHandler):
+
+    server_version = "WSGIServer/" + __version__
+
+    def get_environ(self):
+        env = self.server.base_environ.copy()
+        env['SERVER_PROTOCOL'] = self.request_version
+        env['REQUEST_METHOD'] = self.command
+        if '?' in self.path:
+            path,query = self.path.split('?',1)
+        else:
+            path,query = self.path,''
+
+        env['PATH_INFO'] = urllib.unquote(path)
+        env['QUERY_STRING'] = query
+
+        host = self.address_string()
+        if host != self.client_address[0]:
+            env['REMOTE_HOST'] = host
+        env['REMOTE_ADDR'] = self.client_address[0]
+
+        if self.headers.typeheader is None:
+            env['CONTENT_TYPE'] = self.headers.type
+        else:
+            env['CONTENT_TYPE'] = self.headers.typeheader
+
+        length = self.headers.getheader('content-length')
+        if length:
+            env['CONTENT_LENGTH'] = length
+
+        for h in self.headers.headers:
+            k,v = h.split(':',1)
+            k=k.replace('-','_').upper(); v=v.strip()
+            if k in env:
+                continue                    # skip content length, type,etc.
+            if 'HTTP_'+k in env:
+                env['HTTP_'+k] += ','+v     # comma-separate multiple headers
+            else:
+                env['HTTP_'+k] = v
+        return env
+
+    def get_stderr(self):
+        return sys.stderr
+
+    def handle(self):
+        """Handle a single HTTP request"""
+
+        self.raw_requestline = self.rfile.readline()
+        if not self.parse_request(): # An error code has been sent, just exit
+            return
+
+        handler = ServerHandler(
+            self.rfile, self.wfile, self.get_stderr(), self.get_environ()
+        )
+        handler.request_handler = self      # backpointer for logging
+        handler.run(self.server.get_app())
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+def demo_app(environ,start_response):
+    from StringIO import StringIO
+    stdout = StringIO()
+    print >>stdout, "Hello world!"
+    print >>stdout
+    h = environ.items(); h.sort()
+    for k,v in h:
+        print >>stdout, k,'=',`v`
+    start_response("200 OK", [('Content-Type','text/plain')])
+    return [stdout.getvalue()]
+
+
+def make_server(
+    host, port, app, server_class=WSGIServer, handler_class=WSGIRequestHandler
+):
+    """Create a new WSGI server listening on `host` and `port` for `app`"""
+    server = server_class((host, port), handler_class)
+    server.set_app(app)
+    return server
+
+
+if __name__ == '__main__':
+    httpd = make_server('', 8000, demo_app)
+    sa = httpd.socket.getsockname()
+    print "Serving HTTP on", sa[0], "port", sa[1], "..."
+    import webbrowser
+    webbrowser.open('http://localhost:8000/xyz?abc')
+    httpd.handle_request()  # serve one request, then exit
+
+
+
+
+
+
+
+
+
+
+
+
+#
diff --git a/Lib/wsgiref/util.py b/Lib/wsgiref/util.py
new file mode 100644
index 0000000..9009b87
--- /dev/null
+++ b/Lib/wsgiref/util.py
@@ -0,0 +1,205 @@
+"""Miscellaneous WSGI-related Utilities"""
+
+import posixpath
+
+__all__ = [
+    'FileWrapper', 'guess_scheme', 'application_uri', 'request_uri',
+    'shift_path_info', 'setup_testing_defaults',
+]
+
+
+class FileWrapper:
+    """Wrapper to convert file-like objects to iterables"""
+
+    def __init__(self, filelike, blksize=8192):
+        self.filelike = filelike
+        self.blksize = blksize
+        if hasattr(filelike,'close'):
+            self.close = filelike.close
+
+    def __getitem__(self,key):
+        data = self.filelike.read(self.blksize)
+        if data:
+            return data
+        raise IndexError
+
+    def __iter__(self):
+        return self
+
+    def next(self):
+        data = self.filelike.read(self.blksize)
+        if data:
+            return data
+        raise StopIteration
+
+
+
+
+
+
+
+
+def guess_scheme(environ):
+    """Return a guess for whether 'wsgi.url_scheme' should be 'http' or 'https'
+    """
+    if environ.get("HTTPS") in ('yes','on','1'):
+        return 'https'
+    else:
+        return 'http'
+
+def application_uri(environ):
+    """Return the application's base URI (no PATH_INFO or QUERY_STRING)"""
+    url = environ['wsgi.url_scheme']+'://'
+    from urllib import quote
+
+    if environ.get('HTTP_HOST'):
+        url += environ['HTTP_HOST']
+    else:
+        url += environ['SERVER_NAME']
+
+        if environ['wsgi.url_scheme'] == 'https':
+            if environ['SERVER_PORT'] != '443':
+                url += ':' + environ['SERVER_PORT']
+        else:
+            if environ['SERVER_PORT'] != '80':
+                url += ':' + environ['SERVER_PORT']
+
+    url += quote(environ.get('SCRIPT_NAME') or '/')
+    return url
+
+def request_uri(environ, include_query=1):
+    """Return the full request URI, optionally including the query string"""
+    url = application_uri(environ)
+    from urllib import quote
+    path_info = quote(environ.get('PATH_INFO',''))
+    if not environ.get('SCRIPT_NAME'):
+        url += path_info[1:]
+    else:
+        url += path_info
+    if include_query and environ.get('QUERY_STRING'):
+        url += '?' + environ['QUERY_STRING']
+    return url
+
+def shift_path_info(environ):
+    """Shift a name from PATH_INFO to SCRIPT_NAME, returning it
+
+    If there are no remaining path segments in PATH_INFO, return None.
+    Note: 'environ' is modified in-place; use a copy if you need to keep
+    the original PATH_INFO or SCRIPT_NAME.
+
+    Note: when PATH_INFO is just a '/', this returns '' and appends a trailing
+    '/' to SCRIPT_NAME, even though empty path segments are normally ignored,
+    and SCRIPT_NAME doesn't normally end in a '/'.  This is intentional
+    behavior, to ensure that an application can tell the difference between
+    '/x' and '/x/' when traversing to objects.
+    """
+    path_info = environ.get('PATH_INFO','')
+    if not path_info:
+        return None
+
+    path_parts = path_info.split('/')
+    path_parts[1:-1] = [p for p in path_parts[1:-1] if p and p<>'.']
+    name = path_parts[1]
+    del path_parts[1]
+
+    script_name = environ.get('SCRIPT_NAME','')
+    script_name = posixpath.normpath(script_name+'/'+name)
+    if script_name.endswith('/'):
+        script_name = script_name[:-1]
+    if not name and not script_name.endswith('/'):
+        script_name += '/'
+
+    environ['SCRIPT_NAME'] = script_name
+    environ['PATH_INFO']   = '/'.join(path_parts)
+
+    # Special case: '/.' on PATH_INFO doesn't get stripped,
+    # because we don't strip the last element of PATH_INFO
+    # if there's only one path part left.  Instead of fixing this
+    # above, we fix it here so that PATH_INFO gets normalized to
+    # an empty string in the environ.
+    if name=='.':
+        name = None
+    return name
+
+def setup_testing_defaults(environ):
+    """Update 'environ' with trivial defaults for testing purposes
+
+    This adds various parameters required for WSGI, including HTTP_HOST,
+    SERVER_NAME, SERVER_PORT, REQUEST_METHOD, SCRIPT_NAME, PATH_INFO,
+    and all of the wsgi.* variables.  It only supplies default values,
+    and does not replace any existing settings for these variables.
+
+    This routine is intended to make it easier for unit tests of WSGI
+    servers and applications to set up dummy environments.  It should *not*
+    be used by actual WSGI servers or applications, since the data is fake!
+    """
+
+    environ.setdefault('SERVER_NAME','127.0.0.1')
+    environ.setdefault('SERVER_PROTOCOL','HTTP/1.0')
+
+    environ.setdefault('HTTP_HOST',environ['SERVER_NAME'])
+    environ.setdefault('REQUEST_METHOD','GET')
+
+    if 'SCRIPT_NAME' not in environ and 'PATH_INFO' not in environ:
+        environ.setdefault('SCRIPT_NAME','')
+        environ.setdefault('PATH_INFO','/')
+
+    environ.setdefault('wsgi.version', (1,0))
+    environ.setdefault('wsgi.run_once', 0)
+    environ.setdefault('wsgi.multithread', 0)
+    environ.setdefault('wsgi.multiprocess', 0)
+
+    from StringIO import StringIO
+    environ.setdefault('wsgi.input', StringIO(""))
+    environ.setdefault('wsgi.errors', StringIO())
+    environ.setdefault('wsgi.url_scheme',guess_scheme(environ))
+
+    if environ['wsgi.url_scheme']=='http':
+        environ.setdefault('SERVER_PORT', '80')
+    elif environ['wsgi.url_scheme']=='https':
+        environ.setdefault('SERVER_PORT', '443')
+
+
+
+
+_hoppish = {
+    'connection':1, 'keep-alive':1, 'proxy-authenticate':1,
+    'proxy-authorization':1, 'te':1, 'trailers':1, 'transfer-encoding':1,
+    'upgrade':1
+}.has_key
+
+def is_hop_by_hop(header_name):
+    """Return true if 'header_name' is an HTTP/1.1 "Hop-by-Hop" header"""
+    return _hoppish(header_name.lower())
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+#
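shift_path_info() and the URI helpers above are easiest to follow with a concrete environ; a small interactive sketch using setup_testing_defaults() from the same file:

    >>> from wsgiref.util import setup_testing_defaults, shift_path_info, request_uri
    >>> env = {'PATH_INFO': '/app/detail/42', 'QUERY_STRING': 'full=1'}
    >>> setup_testing_defaults(env)
    >>> request_uri(env)
    'http://127.0.0.1/app/detail/42?full=1'
    >>> shift_path_info(env)
    'app'
    >>> env['SCRIPT_NAME'], env['PATH_INFO']
    ('/app', '/detail/42')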
diff --git a/Lib/wsgiref/validate.py b/Lib/wsgiref/validate.py
new file mode 100644
index 0000000..23ab9f8
--- /dev/null
+++ b/Lib/wsgiref/validate.py
@@ -0,0 +1,432 @@
+# (c) 2005 Ian Bicking and contributors; written for Paste (http://pythonpaste.org)
+# Licensed under the MIT license: http://www.opensource.org/licenses/mit-license.php
+# Also licenced under the Apache License, 2.0: http://opensource.org/licenses/apache2.0.php
+# Licensed to PSF under a Contributor Agreement
+"""
+Middleware to check for obedience to the WSGI specification.
+
+Some of the things this checks:
+
+* Signature of the application and start_response (including that
+  keyword arguments are not used).
+
+* Environment checks:
+
+  - Environment is a dictionary (and not a subclass).
+
+  - That all the required keys are in the environment: REQUEST_METHOD,
+    SERVER_NAME, SERVER_PORT, wsgi.version, wsgi.input, wsgi.errors,
+    wsgi.multithread, wsgi.multiprocess, wsgi.run_once
+
+  - That HTTP_CONTENT_TYPE and HTTP_CONTENT_LENGTH are not in the
+    environment (these headers should appear as CONTENT_LENGTH and
+    CONTENT_TYPE).
+
+  - Warns if QUERY_STRING is missing, as the cgi module acts
+    unpredictably in that case.
+
+  - That CGI-style variables (that don't contain a .) have
+    (non-unicode) string values
+
+  - That wsgi.version is a tuple
+
+  - That wsgi.url_scheme is 'http' or 'https' (@@: is this too
+    restrictive?)
+
+  - Warns if the REQUEST_METHOD is not known (@@: probably too
+    restrictive).
+
+  - That SCRIPT_NAME and PATH_INFO are empty or start with /
+
+  - That at least one of SCRIPT_NAME or PATH_INFO is set.
+
+  - That CONTENT_LENGTH, if given, is a non-negative integer.
+
+  - That SCRIPT_NAME is not '/' (it should be '', and PATH_INFO should
+    be '/').
+
+  - That wsgi.input has the methods read, readline, readlines, and
+    __iter__
+
+  - That wsgi.errors has the methods flush, write, writelines
+
+* The status is a string, contains a space, starts with an integer,
+  and that integer is in range (>= 100).
+
+* That the headers argument is a list (not a subclass, not another kind of
+  sequence).
+
+* That the items of the headers are tuples of strings.
+
+* That there is no 'status' header (that is used in CGI, but not in
+  WSGI).
+
+* That header names don't contain newlines or colons and don't end in _
+  or -, and that header values don't contain character codes below 037.
+
+* That Content-Type is given if there is content (CGI often has a
+  default content type, but WSGI does not).
+
+* That no Content-Type is given when there is no content (@@: is this
+  too restrictive?)
+
+* That the exc_info argument to start_response is a tuple or None.
+
+* That all calls to the writer are with strings, and no other methods
+  on the writer are accessed.
+
+* That wsgi.input is used properly:
+
+  - .read() is called with zero or one argument
+
+  - That it returns a string
+
+  - That readline, readlines, and __iter__ return strings
+
+  - That .close() is not called
+
+  - No other methods are provided
+
+* That wsgi.errors is used properly:
+
+  - .write() and .writelines() are called with strings
+
+  - That .close() is not called, and no other methods are provided.
+
+* The response iterator:
+
+  - That it is not a string (it should be a list containing a single
+    string; a string will work, but performs horribly).
+
+  - That .next() returns a string
+
+  - That the iterator is not iterated over until start_response has
+    been called (that can signal either a server or application
+    error).
+
+  - That .close() is called (doesn't raise exception, only prints to
+    sys.stderr, because we only know it isn't called when the object
+    is garbage collected).
+"""
+__all__ = ['validator']
+
+
+import re
+import sys
+from types import DictType, StringType, TupleType, ListType
+import warnings
+
+header_re = re.compile(r'^[a-zA-Z][a-zA-Z0-9\-_]*$')
+bad_header_value_re = re.compile(r'[\000-\037]')
+
+class WSGIWarning(Warning):
+    """
+    Issued in response to WSGI-spec-related warnings
+    """
+
+def assert_(cond, *args):
+    if not cond:
+        raise AssertionError(*args)
+
+def validator(application):
+
+    """
+    When applied between a WSGI server and a WSGI application, this
+    middleware will check for WSGI compliance on a number of levels.
+    This middleware does not modify the request or response in any
+    way, but will raise an AssertionError if anything seems off
+    (except for a failure to close the application iterator, which
+    is reported to stderr -- there's no way to raise an exception
+    at that point).
+    """
+
+    def lint_app(*args, **kw):
+        assert_(len(args) == 2, "Two arguments required")
+        assert_(not kw, "No keyword arguments allowed")
+        environ, start_response = args
+
+        check_environ(environ)
+
+        # We use this to check if the application returns without
+        # calling start_response:
+        start_response_started = []
+
+        def start_response_wrapper(*args, **kw):
+            assert_(len(args) == 2 or len(args) == 3, (
+                "Invalid number of arguments: %s" % (args,)))
+            assert_(not kw, "No keyword arguments allowed")
+            status = args[0]
+            headers = args[1]
+            if len(args) == 3:
+                exc_info = args[2]
+            else:
+                exc_info = None
+
+            check_status(status)
+            check_headers(headers)
+            check_content_type(status, headers)
+            check_exc_info(exc_info)
+
+            start_response_started.append(None)
+            return WriteWrapper(start_response(*args))
+
+        environ['wsgi.input'] = InputWrapper(environ['wsgi.input'])
+        environ['wsgi.errors'] = ErrorWrapper(environ['wsgi.errors'])
+
+        iterator = application(environ, start_response_wrapper)
+        assert_(iterator is not None and iterator != False,
+            "The application must return an iterator, even if it is only an empty list")
+
+        check_iterator(iterator)
+
+        return IteratorWrapper(iterator, start_response_started)
+
+    return lint_app
+
+class InputWrapper:
+
+    def __init__(self, wsgi_input):
+        self.input = wsgi_input
+
+    def read(self, *args):
+        assert_(len(args) <= 1)
+        v = self.input.read(*args)
+        assert_(type(v) is type(""))
+        return v
+
+    def readline(self):
+        v = self.input.readline()
+        assert_(type(v) is type(""))
+        return v
+
+    def readlines(self, *args):
+        assert_(len(args) <= 1)
+        lines = self.input.readlines(*args)
+        assert_(type(lines) is type([]))
+        for line in lines:
+            assert_(type(line) is type(""))
+        return lines
+
+    def __iter__(self):
+        while 1:
+            line = self.readline()
+            if not line:
+                return
+            yield line
+
+    def close(self):
+        assert_(0, "input.close() must not be called")
+
+class ErrorWrapper:
+
+    def __init__(self, wsgi_errors):
+        self.errors = wsgi_errors
+
+    def write(self, s):
+        assert_(type(s) is type(""))
+        self.errors.write(s)
+
+    def flush(self):
+        self.errors.flush()
+
+    def writelines(self, seq):
+        for line in seq:
+            self.write(line)
+
+    def close(self):
+        assert_(0, "errors.close() must not be called")
+
+class WriteWrapper:
+
+    def __init__(self, wsgi_writer):
+        self.writer = wsgi_writer
+
+    def __call__(self, s):
+        assert_(type(s) is type(""))
+        self.writer(s)
+
+class PartialIteratorWrapper:
+
+    def __init__(self, wsgi_iterator):
+        self.iterator = wsgi_iterator
+
+    def __iter__(self):
+        # We want to make sure __iter__ is called
+        return IteratorWrapper(self.iterator, None)
+
+class IteratorWrapper:
+
+    def __init__(self, wsgi_iterator, check_start_response):
+        self.original_iterator = wsgi_iterator
+        self.iterator = iter(wsgi_iterator)
+        self.closed = False
+        self.check_start_response = check_start_response
+
+    def __iter__(self):
+        return self
+
+    def next(self):
+        assert_(not self.closed,
+            "Iterator read after closed")
+        v = self.iterator.next()
+        if self.check_start_response is not None:
+            assert_(self.check_start_response,
+                "The application returned and we have started iterating over its body, but start_response has not yet been called")
+            self.check_start_response = None
+        return v
+
+    def close(self):
+        self.closed = True
+        if hasattr(self.original_iterator, 'close'):
+            self.original_iterator.close()
+
+    def __del__(self):
+        if not self.closed:
+            sys.stderr.write(
+                "Iterator garbage collected without being closed")
+        assert_(self.closed,
+            "Iterator garbage collected without being closed")
+
+def check_environ(environ):
+    assert_(type(environ) is DictType,
+        "Environment is not of the right type: %r (environment: %r)"
+        % (type(environ), environ))
+
+    for key in ['REQUEST_METHOD', 'SERVER_NAME', 'SERVER_PORT',
+                'wsgi.version', 'wsgi.input', 'wsgi.errors',
+                'wsgi.multithread', 'wsgi.multiprocess',
+                'wsgi.run_once']:
+        assert_(key in environ,
+            "Environment missing required key: %r" % (key,))
+
+    for key in ['HTTP_CONTENT_TYPE', 'HTTP_CONTENT_LENGTH']:
+        assert_(key not in environ,
+            "Environment should not have the key: %s "
+            "(use %s instead)" % (key, key[5:]))
+
+    if 'QUERY_STRING' not in environ:
+        warnings.warn(
+            'QUERY_STRING is not in the WSGI environment; the cgi '
+            'module will use sys.argv when this variable is missing, '
+            'so application errors are more likely',
+            WSGIWarning)
+
+    for key in environ.keys():
+        if '.' in key:
+            # Extension, we don't care about its type
+            continue
+        assert_(type(environ[key]) is StringType,
+            "Environment variable %s is not a string: %r (value: %r)"
+            % (key, type(environ[key]), environ[key]))
+
+    assert_(type(environ['wsgi.version']) is TupleType,
+        "wsgi.version should be a tuple (%r)" % (environ['wsgi.version'],))
+    assert_(environ['wsgi.url_scheme'] in ('http', 'https'),
+        "wsgi.url_scheme unknown: %r" % environ['wsgi.url_scheme'])
+
+    check_input(environ['wsgi.input'])
+    check_errors(environ['wsgi.errors'])
+
+    # @@: these need filling out:
+    if environ['REQUEST_METHOD'] not in (
+        'GET', 'HEAD', 'POST', 'OPTIONS','PUT','DELETE','TRACE'):
+        warnings.warn(
+            "Unknown REQUEST_METHOD: %r" % environ['REQUEST_METHOD'],
+            WSGIWarning)
+
+    assert_(not environ.get('SCRIPT_NAME')
+            or environ['SCRIPT_NAME'].startswith('/'),
+        "SCRIPT_NAME doesn't start with /: %r" % environ['SCRIPT_NAME'])
+    assert_(not environ.get('PATH_INFO')
+            or environ['PATH_INFO'].startswith('/'),
+        "PATH_INFO doesn't start with /: %r" % environ['PATH_INFO'])
+    if environ.get('CONTENT_LENGTH'):
+        assert_(int(environ['CONTENT_LENGTH']) >= 0,
+            "Invalid CONTENT_LENGTH: %r" % environ['CONTENT_LENGTH'])
+
+    if not environ.get('SCRIPT_NAME'):
+        assert_(environ.has_key('PATH_INFO'),
+            "One of SCRIPT_NAME or PATH_INFO is required (PATH_INFO "
+            "should at least be '/' if SCRIPT_NAME is empty)")
+    assert_(environ.get('SCRIPT_NAME') != '/',
+        "SCRIPT_NAME cannot be '/'; it should instead be '', and "
+        "PATH_INFO should be '/'")
+
+def check_input(wsgi_input):
+    for attr in ['read', 'readline', 'readlines', '__iter__']:
+        assert_(hasattr(wsgi_input, attr),
+            "wsgi.input (%r) doesn't have the attribute %s"
+            % (wsgi_input, attr))
+
+def check_errors(wsgi_errors):
+    for attr in ['flush', 'write', 'writelines']:
+        assert_(hasattr(wsgi_errors, attr),
+            "wsgi.errors (%r) doesn't have the attribute %s"
+            % (wsgi_errors, attr))
+
+def check_status(status):
+    assert_(type(status) is StringType,
+        "Status must be a string (not %r)" % status)
+    # Implicitly check that we can turn it into an integer:
+    status_code = status.split(None, 1)[0]
+    assert_(len(status_code) == 3,
+        "Status codes must be three characters: %r" % status_code)
+    status_int = int(status_code)
+    assert_(status_int >= 100, "Status code is invalid: %r" % status_int)
+    if len(status) < 4 or status[3] != ' ':
+        warnings.warn(
+            "The status string (%r) should be a three-digit integer "
+            "followed by a single space and a status explanation"
+            % status, WSGIWarning)
+
+def check_headers(headers):
+    assert_(type(headers) is ListType,
+        "Headers (%r) must be of type list: %r"
+        % (headers, type(headers)))
+    header_names = {}
+    for item in headers:
+        assert_(type(item) is TupleType,
+            "Individual headers (%r) must be of type tuple: %r"
+            % (item, type(item)))
+        assert_(len(item) == 2)
+        name, value = item
+        assert_(name.lower() != 'status',
+            "The Status header cannot be used; it conflicts with CGI "
+            "scripts, and HTTP status is not given through headers "
+            "(value: %r)." % value)
+        header_names[name.lower()] = None
+        assert_('\n' not in name and ':' not in name,
+            "Header names may not contain ':' or '\\n': %r" % name)
+        assert_(header_re.search(name), "Bad header name: %r" % name)
+        assert_(not name.endswith('-') and not name.endswith('_'),
+            "Names may not end in '-' or '_': %r" % name)
+        if bad_header_value_re.search(value):
+            assert_(0, "Bad header value: %r (bad char: %r)"
+            % (value, bad_header_value_re.search(value).group(0)))
+
+def check_content_type(status, headers):
+    code = int(status.split(None, 1)[0])
+    # @@: need one more person to verify this interpretation of RFC 2616
+    #     http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
+    NO_MESSAGE_BODY = (204, 304)
+    for name, value in headers:
+        if name.lower() == 'content-type':
+            if code not in NO_MESSAGE_BODY:
+                return
+            assert_(0, ("Content-Type header found in a %s response, "
+                        "which must not return content.") % code)
+    if code not in NO_MESSAGE_BODY:
+        assert_(0, "No Content-Type header found in headers (%s)" % headers)
+
+def check_exc_info(exc_info):
+    assert_(exc_info is None or type(exc_info) is type(()),
+        "exc_info (%r) is not a tuple: %r" % (exc_info, type(exc_info)))
+    # More exc_info checks?
+
+def check_iterator(iterator):
+    # Technically a string is legal, but it is a really bad idea, because
+    # it may cause the response to be returned character-by-character
+    assert_(not isinstance(iterator, str),
+        "You should not return a string as your application iterator, "
+        "instead return a single-item list containing that string.")
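A minimal sketch of how this middleware might be applied; simple_app is a hypothetical application, and only validator comes from the module above:

    from wsgiref.validate import validator

    def simple_app(environ, start_response):
        # Tiny WSGI application used purely for illustration.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello, world!\n']

    # Any spec violation observed while a request flows through the wrapped
    # application raises AssertionError (or emits a WSGIWarning).
    checked_app = validator(simple_app)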
diff --git a/Lib/xml.py b/Lib/xml/__init__.py
similarity index 89%
rename from Lib/xml.py
rename to Lib/xml/__init__.py
index 7393c66..fa5e8cd 100644
--- a/Lib/xml.py
+++ b/Lib/xml/__init__.py
@@ -16,8 +16,6 @@
 
 """
 
-import sys
-import xmlcore
 
 __all__ = ["dom", "parsers", "sax", "etree"]
 
@@ -29,10 +27,11 @@
 
 _MINIMUM_XMLPLUS_VERSION = (0, 8, 4)
 
+
 try:
     import _xmlplus
 except ImportError:
-    sys.modules[__name__] = xmlcore
+    pass
 else:
     try:
         v = _xmlplus.version_info
@@ -41,7 +40,8 @@
         pass
     else:
         if v >= _MINIMUM_XMLPLUS_VERSION:
-            _xmlplus.__path__.extend(xmlcore.__path__)
+            import sys
+            _xmlplus.__path__.extend(__path__)
             sys.modules[__name__] = _xmlplus
         else:
             del v
diff --git a/Lib/xmlcore/dom/NodeFilter.py b/Lib/xml/dom/NodeFilter.py
similarity index 100%
rename from Lib/xmlcore/dom/NodeFilter.py
rename to Lib/xml/dom/NodeFilter.py
diff --git a/Lib/xmlcore/dom/__init__.py b/Lib/xml/dom/__init__.py
similarity index 98%
rename from Lib/xmlcore/dom/__init__.py
rename to Lib/xml/dom/__init__.py
index 002cdb7..6363d00 100644
--- a/Lib/xmlcore/dom/__init__.py
+++ b/Lib/xml/dom/__init__.py
@@ -136,4 +136,4 @@
 EMPTY_NAMESPACE = None
 EMPTY_PREFIX = None
 
-from .domreg import getDOMImplementation,registerDOMImplementation
+from domreg import getDOMImplementation,registerDOMImplementation
diff --git a/Lib/xmlcore/dom/domreg.py b/Lib/xml/dom/domreg.py
similarity index 97%
rename from Lib/xmlcore/dom/domreg.py
rename to Lib/xml/dom/domreg.py
index d60ed64..684c436 100644
--- a/Lib/xmlcore/dom/domreg.py
+++ b/Lib/xml/dom/domreg.py
@@ -2,7 +2,7 @@
 directly. Instead, the functions getDOMImplementation and
 registerDOMImplementation should be imported from xml.dom."""
 
-from xmlcore.dom.minicompat import *  # isinstance, StringTypes
+from xml.dom.minicompat import *  # isinstance, StringTypes
 
 # This is a list of well-known implementations.  Well-known names
 # should be published by posting to xml-sig@python.org, and are
diff --git a/Lib/xmlcore/dom/expatbuilder.py b/Lib/xml/dom/expatbuilder.py
similarity index 98%
rename from Lib/xmlcore/dom/expatbuilder.py
rename to Lib/xml/dom/expatbuilder.py
index 32ffa41..a2f8a33 100644
--- a/Lib/xmlcore/dom/expatbuilder.py
+++ b/Lib/xml/dom/expatbuilder.py
@@ -27,13 +27,13 @@
 #      calling any methods on the node object if it exists.  (A rather
 #      nice speedup is achieved this way as well!)
 
-from xmlcore.dom import xmlbuilder, minidom, Node
-from xmlcore.dom import EMPTY_NAMESPACE, EMPTY_PREFIX, XMLNS_NAMESPACE
-from xmlcore.parsers import expat
-from xmlcore.dom.minidom import _append_child, _set_attribute_node
-from xmlcore.dom.NodeFilter import NodeFilter
+from xml.dom import xmlbuilder, minidom, Node
+from xml.dom import EMPTY_NAMESPACE, EMPTY_PREFIX, XMLNS_NAMESPACE
+from xml.parsers import expat
+from xml.dom.minidom import _append_child, _set_attribute_node
+from xml.dom.NodeFilter import NodeFilter
 
-from xmlcore.dom.minicompat import *
+from xml.dom.minicompat import *
 
 TEXT_NODE = Node.TEXT_NODE
 CDATA_SECTION_NODE = Node.CDATA_SECTION_NODE
diff --git a/Lib/xmlcore/dom/minicompat.py b/Lib/xml/dom/minicompat.py
similarity index 99%
rename from Lib/xmlcore/dom/minicompat.py
rename to Lib/xml/dom/minicompat.py
index f99b7fe..d491fb6 100644
--- a/Lib/xmlcore/dom/minicompat.py
+++ b/Lib/xml/dom/minicompat.py
@@ -38,7 +38,7 @@
 
 __all__ = ["NodeList", "EmptyNodeList", "StringTypes", "defproperty"]
 
-import xmlcore.dom
+import xml.dom
 
 try:
     unicode
@@ -71,6 +71,7 @@
     def __setstate__(self, state):
         self[:] = state
 
+
 class EmptyNodeList(tuple):
     __slots__ = ()
 
diff --git a/Lib/xmlcore/dom/minidom.py b/Lib/xml/dom/minidom.py
similarity index 93%
rename from Lib/xmlcore/dom/minidom.py
rename to Lib/xml/dom/minidom.py
index a8abd14..3a35781 100644
--- a/Lib/xmlcore/dom/minidom.py
+++ b/Lib/xml/dom/minidom.py
@@ -14,22 +14,22 @@
  * SAX 2 namespaces
 """
 
-import xmlcore.dom
+import xml.dom
 
-from xmlcore.dom import EMPTY_NAMESPACE, EMPTY_PREFIX, XMLNS_NAMESPACE, domreg
-from xmlcore.dom.minicompat import *
-from xmlcore.dom.xmlbuilder import DOMImplementationLS, DocumentLS
+from xml.dom import EMPTY_NAMESPACE, EMPTY_PREFIX, XMLNS_NAMESPACE, domreg
+from xml.dom.minicompat import *
+from xml.dom.xmlbuilder import DOMImplementationLS, DocumentLS
 
 # This is used by the ID-cache invalidation checks; the list isn't
 # actually complete, since the nodes being checked will never be the
 # DOCUMENT_NODE or DOCUMENT_FRAGMENT_NODE.  (The node being checked is
 # the node being added or removed, not the node being modified.)
 #
-_nodeTypes_with_children = (xmlcore.dom.Node.ELEMENT_NODE,
-                            xmlcore.dom.Node.ENTITY_REFERENCE_NODE)
+_nodeTypes_with_children = (xml.dom.Node.ELEMENT_NODE,
+                            xml.dom.Node.ENTITY_REFERENCE_NODE)
 
 
-class Node(xmlcore.dom.Node):
+class Node(xml.dom.Node):
     namespaceURI = None # this is non-null only for elements and attributes
     parentNode = None
     ownerDocument = None
@@ -83,7 +83,7 @@
             ### The DOM does not clearly specify what to return in this case
             return newChild
         if newChild.nodeType not in self._child_node_types:
-            raise xmlcore.dom.HierarchyRequestErr(
+            raise xml.dom.HierarchyRequestErr(
                 "%s cannot be child of %s" % (repr(newChild), repr(self)))
         if newChild.parentNode is not None:
             newChild.parentNode.removeChild(newChild)
@@ -93,7 +93,7 @@
             try:
                 index = self.childNodes.index(refChild)
             except ValueError:
-                raise xmlcore.dom.NotFoundErr()
+                raise xml.dom.NotFoundErr()
             if newChild.nodeType in _nodeTypes_with_children:
                 _clear_id_cache(self)
             self.childNodes.insert(index, newChild)
@@ -115,7 +115,7 @@
             ### The DOM does not clearly specify what to return in this case
             return node
         if node.nodeType not in self._child_node_types:
-            raise xmlcore.dom.HierarchyRequestErr(
+            raise xml.dom.HierarchyRequestErr(
                 "%s cannot be child of %s" % (repr(node), repr(self)))
         elif node.nodeType in _nodeTypes_with_children:
             _clear_id_cache(self)
@@ -131,7 +131,7 @@
             self.removeChild(oldChild)
             return self.insertBefore(newChild, refChild)
         if newChild.nodeType not in self._child_node_types:
-            raise xmlcore.dom.HierarchyRequestErr(
+            raise xml.dom.HierarchyRequestErr(
                 "%s cannot be child of %s" % (repr(newChild), repr(self)))
         if newChild is oldChild:
             return
@@ -140,7 +140,7 @@
         try:
             index = self.childNodes.index(oldChild)
         except ValueError:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         self.childNodes[index] = newChild
         newChild.parentNode = self
         oldChild.parentNode = None
@@ -161,7 +161,7 @@
         try:
             self.childNodes.remove(oldChild)
         except ValueError:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         if oldChild.nextSibling is not None:
             oldChild.nextSibling.previousSibling = oldChild.previousSibling
         if oldChild.previousSibling is not None:
@@ -386,7 +386,7 @@
         nsuri = self.namespaceURI
         if prefix == "xmlns":
             if nsuri and nsuri != XMLNS_NAMESPACE:
-                raise xmlcore.dom.NamespaceErr(
+                raise xml.dom.NamespaceErr(
                     "illegal use of 'xmlns' prefix for the wrong namespace")
         d = self.__dict__
         d['prefix'] = prefix
@@ -564,7 +564,7 @@
                 n.__dict__['ownerElement'] = None
             return n
         else:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
 
     def removeNamedItemNS(self, namespaceURI, localName):
         n = self.getNamedItemNS(namespaceURI, localName)
@@ -576,11 +576,11 @@
                 n.__dict__['ownerElement'] = None
             return n
         else:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
 
     def setNamedItem(self, node):
         if not isinstance(node, Attr):
-            raise xmlcore.dom.HierarchyRequestErr(
+            raise xml.dom.HierarchyRequestErr(
                 "%s cannot be child of %s" % (repr(node), repr(self)))
         old = self._attrs.get(node.name)
         if old:
@@ -731,7 +731,7 @@
 
     def setAttributeNode(self, attr):
         if attr.ownerElement not in (None, self):
-            raise xmlcore.dom.InuseAttributeErr("attribute node already owned")
+            raise xml.dom.InuseAttributeErr("attribute node already owned")
         old1 = self._attrs.get(attr.name, None)
         if old1 is not None:
             self.removeAttributeNode(old1)
@@ -753,23 +753,23 @@
         try:
             attr = self._attrs[name]
         except KeyError:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         self.removeAttributeNode(attr)
 
     def removeAttributeNS(self, namespaceURI, localName):
         try:
             attr = self._attrsNS[(namespaceURI, localName)]
         except KeyError:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         self.removeAttributeNode(attr)
 
     def removeAttributeNode(self, node):
         if node is None:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         try:
             self._attrs[node.name]
         except KeyError:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         _clear_id_cache(self)
         node.unlink()
         # Restore this since the node is still useful and otherwise
@@ -837,9 +837,9 @@
 
     def setIdAttributeNode(self, idAttr):
         if idAttr is None or not self.isSameNode(idAttr.ownerElement):
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         if _get_containing_entref(self) is not None:
-            raise xmlcore.dom.NoModificationAllowedErr()
+            raise xml.dom.NoModificationAllowedErr()
         if not idAttr._is_id:
             idAttr.__dict__['_is_id'] = True
             self._magic_id_nodes += 1
@@ -880,22 +880,22 @@
         return None
 
     def appendChild(self, node):
-        raise xmlcore.dom.HierarchyRequestErr(
+        raise xml.dom.HierarchyRequestErr(
             self.nodeName + " nodes cannot have children")
 
     def hasChildNodes(self):
         return False
 
     def insertBefore(self, newChild, refChild):
-        raise xmlcore.dom.HierarchyRequestErr(
+        raise xml.dom.HierarchyRequestErr(
             self.nodeName + " nodes do not have children")
 
     def removeChild(self, oldChild):
-        raise xmlcore.dom.NotFoundErr(
+        raise xml.dom.NotFoundErr(
             self.nodeName + " nodes do not have children")
 
     def replaceChild(self, newChild, oldChild):
-        raise xmlcore.dom.HierarchyRequestErr(
+        raise xml.dom.HierarchyRequestErr(
             self.nodeName + " nodes do not have children")
 
 
@@ -961,11 +961,11 @@
 
     def substringData(self, offset, count):
         if offset < 0:
-            raise xmlcore.dom.IndexSizeErr("offset cannot be negative")
+            raise xml.dom.IndexSizeErr("offset cannot be negative")
         if offset >= len(self.data):
-            raise xmlcore.dom.IndexSizeErr("offset cannot be beyond end of data")
+            raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
         if count < 0:
-            raise xmlcore.dom.IndexSizeErr("count cannot be negative")
+            raise xml.dom.IndexSizeErr("count cannot be negative")
         return self.data[offset:offset+count]
 
     def appendData(self, arg):
@@ -973,30 +973,30 @@
 
     def insertData(self, offset, arg):
         if offset < 0:
-            raise xmlcore.dom.IndexSizeErr("offset cannot be negative")
+            raise xml.dom.IndexSizeErr("offset cannot be negative")
         if offset >= len(self.data):
-            raise xmlcore.dom.IndexSizeErr("offset cannot be beyond end of data")
+            raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
         if arg:
             self.data = "%s%s%s" % (
                 self.data[:offset], arg, self.data[offset:])
 
     def deleteData(self, offset, count):
         if offset < 0:
-            raise xmlcore.dom.IndexSizeErr("offset cannot be negative")
+            raise xml.dom.IndexSizeErr("offset cannot be negative")
         if offset >= len(self.data):
-            raise xmlcore.dom.IndexSizeErr("offset cannot be beyond end of data")
+            raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
         if count < 0:
-            raise xmlcore.dom.IndexSizeErr("count cannot be negative")
+            raise xml.dom.IndexSizeErr("count cannot be negative")
         if count:
             self.data = self.data[:offset] + self.data[offset+count:]
 
     def replaceData(self, offset, count, arg):
         if offset < 0:
-            raise xmlcore.dom.IndexSizeErr("offset cannot be negative")
+            raise xml.dom.IndexSizeErr("offset cannot be negative")
         if offset >= len(self.data):
-            raise xmlcore.dom.IndexSizeErr("offset cannot be beyond end of data")
+            raise xml.dom.IndexSizeErr("offset cannot be beyond end of data")
         if count < 0:
-            raise xmlcore.dom.IndexSizeErr("count cannot be negative")
+            raise xml.dom.IndexSizeErr("count cannot be negative")
         if count:
             self.data = "%s%s%s" % (
                 self.data[:offset], arg, self.data[offset+count:])
@@ -1016,7 +1016,7 @@
 
     def splitText(self, offset):
         if offset < 0 or offset > len(self.data):
-            raise xmlcore.dom.IndexSizeErr("illegal offset value")
+            raise xml.dom.IndexSizeErr("illegal offset value")
         newText = self.__class__()
         newText.data = self.data[offset:]
         newText.ownerDocument = self.ownerDocument
@@ -1185,19 +1185,19 @@
             return None
 
     def removeNamedItem(self, name):
-        raise xmlcore.dom.NoModificationAllowedErr(
+        raise xml.dom.NoModificationAllowedErr(
             "NamedNodeMap instance is read-only")
 
     def removeNamedItemNS(self, namespaceURI, localName):
-        raise xmlcore.dom.NoModificationAllowedErr(
+        raise xml.dom.NoModificationAllowedErr(
             "NamedNodeMap instance is read-only")
 
     def setNamedItem(self, node):
-        raise xmlcore.dom.NoModificationAllowedErr(
+        raise xml.dom.NoModificationAllowedErr(
             "NamedNodeMap instance is read-only")
 
     def setNamedItemNS(self, node):
-        raise xmlcore.dom.NoModificationAllowedErr(
+        raise xml.dom.NoModificationAllowedErr(
             "NamedNodeMap instance is read-only")
 
     def __getstate__(self):
@@ -1251,7 +1251,7 @@
             clone = DocumentType(None)
             clone.name = self.name
             clone.nodeName = self.name
-            operation = xmlcore.dom.UserDataHandler.NODE_CLONED
+            operation = xml.dom.UserDataHandler.NODE_CLONED
             if deep:
                 clone.entities._seq = []
                 clone.notations._seq = []
@@ -1311,19 +1311,19 @@
         return self.version
 
     def appendChild(self, newChild):
-        raise xmlcore.dom.HierarchyRequestErr(
+        raise xml.dom.HierarchyRequestErr(
             "cannot append children to an entity node")
 
     def insertBefore(self, newChild, refChild):
-        raise xmlcore.dom.HierarchyRequestErr(
+        raise xml.dom.HierarchyRequestErr(
             "cannot insert children below an entity node")
 
     def removeChild(self, oldChild):
-        raise xmlcore.dom.HierarchyRequestErr(
+        raise xml.dom.HierarchyRequestErr(
             "cannot remove children from an entity node")
 
     def replaceChild(self, newChild, oldChild):
-        raise xmlcore.dom.HierarchyRequestErr(
+        raise xml.dom.HierarchyRequestErr(
             "cannot replace children of an entity node")
 
 class Notation(Identified, Childless, Node):
@@ -1355,7 +1355,7 @@
 
     def createDocument(self, namespaceURI, qualifiedName, doctype):
         if doctype and doctype.parentNode is not None:
-            raise xmlcore.dom.WrongDocumentErr(
+            raise xml.dom.WrongDocumentErr(
                 "doctype object owned by another DOM tree")
         doc = self._create_document()
 
@@ -1376,15 +1376,15 @@
             # Null the document is returned without a document element
             # Otherwise if doctype or namespaceURI are not None
             # Then we go back to the above problem
-            raise xmlcore.dom.InvalidCharacterErr("Element with no name")
+            raise xml.dom.InvalidCharacterErr("Element with no name")
 
         if add_root_element:
             prefix, localname = _nssplit(qualifiedName)
             if prefix == "xml" \
                and namespaceURI != "http://www.w3.org/XML/1998/namespace":
-                raise xmlcore.dom.NamespaceErr("illegal use of 'xml' prefix")
+                raise xml.dom.NamespaceErr("illegal use of 'xml' prefix")
             if prefix and not namespaceURI:
-                raise xmlcore.dom.NamespaceErr(
+                raise xml.dom.NamespaceErr(
                     "illegal use of prefix without namespaces")
             element = doc.createElementNS(namespaceURI, qualifiedName)
             if doctype:
@@ -1533,7 +1533,7 @@
 
     def appendChild(self, node):
         if node.nodeType not in self._child_node_types:
-            raise xmlcore.dom.HierarchyRequestErr(
+            raise xml.dom.HierarchyRequestErr(
                 "%s cannot be child of %s" % (repr(node), repr(self)))
         if node.parentNode is not None:
             # This needs to be done before the next test since this
@@ -1543,7 +1543,7 @@
 
         if node.nodeType == Node.ELEMENT_NODE \
            and self._get_documentElement():
-            raise xmlcore.dom.HierarchyRequestErr(
+            raise xml.dom.HierarchyRequestErr(
                 "two document elements disallowed")
         return Node.appendChild(self, node)
 
@@ -1551,7 +1551,7 @@
         try:
             self.childNodes.remove(oldChild)
         except ValueError:
-            raise xmlcore.dom.NotFoundErr()
+            raise xml.dom.NotFoundErr()
         oldChild.nextSibling = oldChild.previousSibling = None
         oldChild.parentNode = None
         if self.documentElement is oldChild:
@@ -1587,7 +1587,7 @@
                 assert clone.doctype is None
                 clone.doctype = childclone
             childclone.parentNode = clone
-        self._call_user_data_handler(xmlcore.dom.UserDataHandler.NODE_CLONED,
+        self._call_user_data_handler(xml.dom.UserDataHandler.NODE_CLONED,
                                      self, clone)
         return clone
 
@@ -1729,9 +1729,9 @@
 
     def importNode(self, node, deep):
         if node.nodeType == Node.DOCUMENT_NODE:
-            raise xmlcore.dom.NotSupportedErr("cannot import document nodes")
+            raise xml.dom.NotSupportedErr("cannot import document nodes")
         elif node.nodeType == Node.DOCUMENT_TYPE_NODE:
-            raise xmlcore.dom.NotSupportedErr("cannot import document type nodes")
+            raise xml.dom.NotSupportedErr("cannot import document type nodes")
         return _clone_node(node, deep, self)
 
     def writexml(self, writer, indent="", addindent="", newl="",
@@ -1747,24 +1747,24 @@
 
     def renameNode(self, n, namespaceURI, name):
         if n.ownerDocument is not self:
-            raise xmlcore.dom.WrongDocumentErr(
+            raise xml.dom.WrongDocumentErr(
                 "cannot rename nodes from other documents;\n"
                 "expected %s,\nfound %s" % (self, n.ownerDocument))
         if n.nodeType not in (Node.ELEMENT_NODE, Node.ATTRIBUTE_NODE):
-            raise xmlcore.dom.NotSupportedErr(
+            raise xml.dom.NotSupportedErr(
                 "renameNode() only applies to element and attribute nodes")
         if namespaceURI != EMPTY_NAMESPACE:
             if ':' in name:
                 prefix, localName = name.split(':', 1)
                 if (  prefix == "xmlns"
-                      and namespaceURI != xmlcore.dom.XMLNS_NAMESPACE):
-                    raise xmlcore.dom.NamespaceErr(
+                      and namespaceURI != xml.dom.XMLNS_NAMESPACE):
+                    raise xml.dom.NamespaceErr(
                         "illegal use of 'xmlns' prefix")
             else:
                 if (  name == "xmlns"
-                      and namespaceURI != xmlcore.dom.XMLNS_NAMESPACE
+                      and namespaceURI != xml.dom.XMLNS_NAMESPACE
                       and n.nodeType == Node.ATTRIBUTE_NODE):
-                    raise xmlcore.dom.NamespaceErr(
+                    raise xml.dom.NamespaceErr(
                         "illegal use of the 'xmlns' attribute")
                 prefix = None
                 localName = name
@@ -1810,9 +1810,9 @@
     Called by Node.cloneNode and Document.importNode
     """
     if node.ownerDocument.isSameNode(newOwnerDocument):
-        operation = xmlcore.dom.UserDataHandler.NODE_CLONED
+        operation = xml.dom.UserDataHandler.NODE_CLONED
     else:
-        operation = xmlcore.dom.UserDataHandler.NODE_IMPORTED
+        operation = xml.dom.UserDataHandler.NODE_IMPORTED
     if node.nodeType == Node.ELEMENT_NODE:
         clone = newOwnerDocument.createElementNS(node.namespaceURI,
                                                  node.nodeName)
@@ -1849,7 +1849,7 @@
         clone.value = node.value
     elif node.nodeType == Node.DOCUMENT_TYPE_NODE:
         assert node.ownerDocument is not newOwnerDocument
-        operation = xmlcore.dom.UserDataHandler.NODE_IMPORTED
+        operation = xml.dom.UserDataHandler.NODE_IMPORTED
         clone = newOwnerDocument.implementation.createDocumentType(
             node.name, node.publicId, node.systemId)
         clone.ownerDocument = newOwnerDocument
@@ -1876,7 +1876,7 @@
         # Note the cloning of Document and DocumentType nodes is
         # implemenetation specific.  minidom handles those cases
         # directly in the cloneNode() methods.
-        raise xmlcore.dom.NotSupportedErr("Cannot clone node %s" % repr(node))
+        raise xml.dom.NotSupportedErr("Cannot clone node %s" % repr(node))
 
     # Check for _call_user_data_handler() since this could conceivably
     # used with other DOM implementations (one of the FourThought
@@ -1909,20 +1909,20 @@
 def parse(file, parser=None, bufsize=None):
     """Parse a file into a DOM by filename or file object."""
     if parser is None and not bufsize:
-        from xmlcore.dom import expatbuilder
+        from xml.dom import expatbuilder
         return expatbuilder.parse(file)
     else:
-        from xmlcore.dom import pulldom
+        from xml.dom import pulldom
         return _do_pulldom_parse(pulldom.parse, (file,),
             {'parser': parser, 'bufsize': bufsize})
 
 def parseString(string, parser=None):
     """Parse a file into a DOM from a string."""
     if parser is None:
-        from xmlcore.dom import expatbuilder
+        from xml.dom import expatbuilder
         return expatbuilder.parseString(string)
     else:
-        from xmlcore.dom import pulldom
+        from xml.dom import pulldom
         return _do_pulldom_parse(pulldom.parseString, (string,),
                                  {'parser': parser})
 
diff --git a/Lib/xmlcore/dom/pulldom.py b/Lib/xml/dom/pulldom.py
similarity index 96%
rename from Lib/xmlcore/dom/pulldom.py
rename to Lib/xml/dom/pulldom.py
index dad3718..18f49b5 100644
--- a/Lib/xmlcore/dom/pulldom.py
+++ b/Lib/xml/dom/pulldom.py
@@ -1,5 +1,5 @@
-import xmlcore.sax
-import xmlcore.sax.handler
+import xml.sax
+import xml.sax.handler
 import types
 
 try:
@@ -16,12 +16,12 @@
 IGNORABLE_WHITESPACE = "IGNORABLE_WHITESPACE"
 CHARACTERS = "CHARACTERS"
 
-class PullDOM(xmlcore.sax.ContentHandler):
+class PullDOM(xml.sax.ContentHandler):
     _locator = None
     document = None
 
     def __init__(self, documentFactory=None):
-        from xmlcore.dom import XML_NAMESPACE
+        from xml.dom import XML_NAMESPACE
         self.documentFactory = documentFactory
         self.firstEvent = [None, None]
         self.lastEvent = self.firstEvent
@@ -164,8 +164,8 @@
 
     def startDocument(self):
         if self.documentFactory is None:
-            import xmlcore.dom.minidom
-            self.documentFactory = xmlcore.dom.minidom.Document.implementation
+            import xml.dom.minidom
+            self.documentFactory = xml.dom.minidom.Document.implementation
 
     def buildDocument(self, uri, tagname):
         # Can't do that in startDocument, since we need the tagname
@@ -219,7 +219,7 @@
     def reset(self):
         self.pulldom = PullDOM()
         # This content handler relies on namespace support
-        self.parser.setFeature(xmlcore.sax.handler.feature_namespaces, 1)
+        self.parser.setFeature(xml.sax.handler.feature_namespaces, 1)
         self.parser.setContentHandler(self.pulldom)
 
     def __getitem__(self, pos):
@@ -335,7 +335,7 @@
     else:
         stream = stream_or_string
     if not parser:
-        parser = xmlcore.sax.make_parser()
+        parser = xml.sax.make_parser()
     return DOMEventStream(stream, parser, bufsize)
 
 def parseString(string, parser=None):
@@ -347,5 +347,5 @@
     bufsize = len(string)
     buf = StringIO(string)
     if not parser:
-        parser = xmlcore.sax.make_parser()
+        parser = xml.sax.make_parser()
     return DOMEventStream(buf, parser, bufsize)
diff --git a/Lib/xmlcore/dom/xmlbuilder.py b/Lib/xml/dom/xmlbuilder.py
similarity index 95%
rename from Lib/xmlcore/dom/xmlbuilder.py
rename to Lib/xml/dom/xmlbuilder.py
index 6566d3c..ac1d448 100644
--- a/Lib/xmlcore/dom/xmlbuilder.py
+++ b/Lib/xml/dom/xmlbuilder.py
@@ -1,9 +1,9 @@
 """Implementation of the DOM Level 3 'LS-Load' feature."""
 
 import copy
-import xmlcore.dom
+import xml.dom
 
-from xmlcore.dom.NodeFilter import NodeFilter
+from xml.dom.NodeFilter import NodeFilter
 
 
 __all__ = ["DOMBuilder", "DOMEntityResolver", "DOMInputSource"]
@@ -78,13 +78,13 @@
             try:
                 settings = self._settings[(_name_xform(name), state)]
             except KeyError:
-                raise xmlcore.dom.NotSupportedErr(
+                raise xml.dom.NotSupportedErr(
                     "unsupported feature: %r" % (name,))
             else:
                 for name, value in settings:
                     setattr(self._options, name, value)
         else:
-            raise xmlcore.dom.NotFoundErr("unknown feature: " + repr(name))
+            raise xml.dom.NotFoundErr("unknown feature: " + repr(name))
 
     def supportsFeature(self, name):
         return hasattr(self._options, _name_xform(name))
@@ -175,7 +175,7 @@
                                  or options.create_entity_ref_nodes
                                  or options.entities
                                  or options.cdata_sections))
-            raise xmlcore.dom.NotFoundErr("feature %s not known" % repr(name))
+            raise xml.dom.NotFoundErr("feature %s not known" % repr(name))
 
     def parseURI(self, uri):
         if self.entityResolver:
@@ -200,8 +200,8 @@
         raise NotImplementedError("Haven't written this yet...")
 
     def _parse_bytestream(self, stream, options):
-        import xmlcore.dom.expatbuilder
-        builder = xmlcore.dom.expatbuilder.makeBuilder(options)
+        import xml.dom.expatbuilder
+        builder = xml.dom.expatbuilder.makeBuilder(options)
         return builder.parseFile(stream)
 
 
@@ -340,7 +340,7 @@
         return False
     def _set_async(self, async):
         if async:
-            raise xmlcore.dom.NotSupportedErr(
+            raise xml.dom.NotSupportedErr(
                 "asynchronous document loading is not supported")
 
     def abort(self):
@@ -359,7 +359,7 @@
         if snode is None:
             snode = self
         elif snode.ownerDocument is not self:
-            raise xmlcore.dom.WrongDocumentErr()
+            raise xml.dom.WrongDocumentErr()
         return snode.toxml()
 
 
@@ -369,12 +369,12 @@
 
     def createDOMBuilder(self, mode, schemaType):
         if schemaType is not None:
-            raise xmlcore.dom.NotSupportedErr(
+            raise xml.dom.NotSupportedErr(
                 "schemaType not yet supported")
         if mode == self.MODE_SYNCHRONOUS:
             return DOMBuilder()
         if mode == self.MODE_ASYNCHRONOUS:
-            raise xmlcore.dom.NotSupportedErr(
+            raise xml.dom.NotSupportedErr(
                 "asynchronous builders are not supported")
         raise ValueError("unknown value for mode")
 
diff --git a/Lib/xmlcore/etree/ElementInclude.py b/Lib/xml/etree/ElementInclude.py
similarity index 99%
rename from Lib/xmlcore/etree/ElementInclude.py
rename to Lib/xml/etree/ElementInclude.py
index d7f85b3..974cc21 100644
--- a/Lib/xmlcore/etree/ElementInclude.py
+++ b/Lib/xml/etree/ElementInclude.py
@@ -49,7 +49,7 @@
 ##
 
 import copy
-from . import ElementTree
+import ElementTree
 
 XINCLUDE = "{http://www.w3.org/2001/XInclude}"
 
diff --git a/Lib/xmlcore/etree/ElementPath.py b/Lib/xml/etree/ElementPath.py
similarity index 100%
rename from Lib/xmlcore/etree/ElementPath.py
rename to Lib/xml/etree/ElementPath.py
diff --git a/Lib/xmlcore/etree/ElementTree.py b/Lib/xml/etree/ElementTree.py
similarity index 99%
rename from Lib/xmlcore/etree/ElementTree.py
rename to Lib/xml/etree/ElementTree.py
index b39760ea..7dbc72e 100644
--- a/Lib/xmlcore/etree/ElementTree.py
+++ b/Lib/xml/etree/ElementTree.py
@@ -84,7 +84,7 @@
     "tostring",
     "TreeBuilder",
     "VERSION", "XML",
-    "XMLTreeBuilder",
+    "XMLParser", "XMLTreeBuilder",
     ]
 
 ##
@@ -1112,7 +1112,7 @@
 
     def __init__(self, html=0, target=None):
         try:
-            from xmlcore.parsers import expat
+            from xml.parsers import expat
         except ImportError:
             raise ImportError(
                 "No module named expat; use SimpleXMLTreeBuilder instead"
@@ -1194,7 +1194,7 @@
             try:
                 self._target.data(self.entity[text[1:-1]])
             except KeyError:
-                from xmlcore.parsers import expat
+                from xml.parsers import expat
                 raise expat.error(
                     "undefined entity %s: line %d, column %d" %
                     (text, self._parser.ErrorLineNumber,
@@ -1255,3 +1255,6 @@
         tree = self._target.close()
         del self._target, self._parser # get rid of circular references
         return tree
+
+# compatibility
+XMLParser = XMLTreeBuilder
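A quick sketch of the compatibility alias introduced above, assuming the renamed package imports as xml.etree.ElementTree:

    from xml.etree import ElementTree

    parser = ElementTree.XMLParser()          # same class as XMLTreeBuilder
    parser.feed("<root><item>text</item></root>")
    root = parser.close()
    assert root.find("item").text == "text"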
diff --git a/Lib/xmlcore/etree/__init__.py b/Lib/xml/etree/__init__.py
similarity index 100%
rename from Lib/xmlcore/etree/__init__.py
rename to Lib/xml/etree/__init__.py
diff --git a/Lib/xmlcore/etree/cElementTree.py b/Lib/xml/etree/cElementTree.py
similarity index 100%
rename from Lib/xmlcore/etree/cElementTree.py
rename to Lib/xml/etree/cElementTree.py
diff --git a/Lib/xmlcore/parsers/__init__.py b/Lib/xml/parsers/__init__.py
similarity index 100%
rename from Lib/xmlcore/parsers/__init__.py
rename to Lib/xml/parsers/__init__.py
diff --git a/Lib/xmlcore/parsers/expat.py b/Lib/xml/parsers/expat.py
similarity index 100%
rename from Lib/xmlcore/parsers/expat.py
rename to Lib/xml/parsers/expat.py
diff --git a/Lib/xmlcore/sax/__init__.py b/Lib/xml/sax/__init__.py
similarity index 89%
rename from Lib/xmlcore/sax/__init__.py
rename to Lib/xml/sax/__init__.py
index 8afbdb0..6b1b1ba 100644
--- a/Lib/xmlcore/sax/__init__.py
+++ b/Lib/xml/sax/__init__.py
@@ -19,11 +19,11 @@
 expatreader -- Driver that allows use of the Expat parser with SAX.
 """
 
-from .xmlreader import InputSource
-from .handler import ContentHandler, ErrorHandler
-from ._exceptions import (SAXException, SAXNotRecognizedException,
-                          SAXParseException, SAXNotSupportedException,
-                          SAXReaderNotAvailable)
+from xmlreader import InputSource
+from handler import ContentHandler, ErrorHandler
+from _exceptions import SAXException, SAXNotRecognizedException, \
+                        SAXParseException, SAXNotSupportedException, \
+                        SAXReaderNotAvailable
 
 
 def parse(source, handler, errorHandler=ErrorHandler()):
@@ -51,12 +51,12 @@
 # this is the parser list used by the make_parser function if no
 # alternatives are given as parameters to the function
 
-default_parser_list = ["xmlcore.sax.expatreader"]
+default_parser_list = ["xml.sax.expatreader"]
 
 # tell modulefinder that importing sax potentially imports expatreader
 _false = 0
 if _false:
-    import xmlcore.sax.expatreader
+    import xml.sax.expatreader
 
 import os, sys
 if os.environ.has_key("PY_SAX_PARSER"):
diff --git a/Lib/xmlcore/sax/_exceptions.py b/Lib/xml/sax/_exceptions.py
similarity index 100%
rename from Lib/xmlcore/sax/_exceptions.py
rename to Lib/xml/sax/_exceptions.py
diff --git a/Lib/xmlcore/sax/expatreader.py b/Lib/xml/sax/expatreader.py
similarity index 95%
rename from Lib/xmlcore/sax/expatreader.py
rename to Lib/xml/sax/expatreader.py
index 6fbd22e..bb9c294 100644
--- a/Lib/xmlcore/sax/expatreader.py
+++ b/Lib/xml/sax/expatreader.py
@@ -5,27 +5,27 @@
 
 version = "0.20"
 
-from xmlcore.sax._exceptions import *
-from xmlcore.sax.handler import feature_validation, feature_namespaces
-from xmlcore.sax.handler import feature_namespace_prefixes
-from xmlcore.sax.handler import feature_external_ges, feature_external_pes
-from xmlcore.sax.handler import feature_string_interning
-from xmlcore.sax.handler import property_xml_string, property_interning_dict
+from xml.sax._exceptions import *
+from xml.sax.handler import feature_validation, feature_namespaces
+from xml.sax.handler import feature_namespace_prefixes
+from xml.sax.handler import feature_external_ges, feature_external_pes
+from xml.sax.handler import feature_string_interning
+from xml.sax.handler import property_xml_string, property_interning_dict
 
-# xmlcore.parsers.expat does not raise ImportError in Jython
+# xml.parsers.expat does not raise ImportError in Jython
 import sys
 if sys.platform[:4] == "java":
     raise SAXReaderNotAvailable("expat not available in Java", None)
 del sys
 
 try:
-    from xmlcore.parsers import expat
+    from xml.parsers import expat
 except ImportError:
     raise SAXReaderNotAvailable("expat not supported", None)
 else:
     if not hasattr(expat, "ParserCreate"):
         raise SAXReaderNotAvailable("expat not supported", None)
-from xmlcore.sax import xmlreader, saxutils, handler
+from xml.sax import xmlreader, saxutils, handler
 
 AttributesImpl = xmlreader.AttributesImpl
 AttributesNSImpl = xmlreader.AttributesNSImpl
@@ -407,8 +407,8 @@
 # ---
 
 if __name__ == "__main__":
-    import xmlcore.sax
+    import xml.sax
     p = create_parser()
-    p.setContentHandler(xmlcore.sax.XMLGenerator())
-    p.setErrorHandler(xmlcore.sax.ErrorHandler())
+    p.setContentHandler(xml.sax.XMLGenerator())
+    p.setErrorHandler(xml.sax.ErrorHandler())
     p.parse("../../../hamlet.xml")
diff --git a/Lib/xmlcore/sax/handler.py b/Lib/xml/sax/handler.py
similarity index 100%
rename from Lib/xmlcore/sax/handler.py
rename to Lib/xml/sax/handler.py
diff --git a/Lib/xmlcore/sax/saxutils.py b/Lib/xml/sax/saxutils.py
similarity index 98%
rename from Lib/xmlcore/sax/saxutils.py
rename to Lib/xml/sax/saxutils.py
index 880de80..a496519 100644
--- a/Lib/xmlcore/sax/saxutils.py
+++ b/Lib/xml/sax/saxutils.py
@@ -4,8 +4,8 @@
 """
 
 import os, urlparse, urllib, types
-from . import handler
-from . import xmlreader
+import handler
+import xmlreader
 
 try:
     _StringTypes = [types.StringType, types.UnicodeType]
@@ -68,6 +68,8 @@
     the optional entities parameter.  The keys and values must all be
     strings; each key will be replaced with its corresponding value.
     """
+    entities = entities.copy()
+    entities.update({'\n': '&#10;', '\r': '&#13;', '\t':'&#9;'})
     data = escape(data, entities)
     if '"' in data:
         if "'" in data:
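For illustration, the extra escaping added to quoteattr above turns whitespace control characters into character references so they survive a reparse; the output shown is what the patched function would be expected to produce:

    from xml.sax.saxutils import quoteattr

    print quoteattr('line one\nline two\ttabbed')
    # expected: "line one&#10;line two&#9;tabbed"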
diff --git a/Lib/xmlcore/sax/xmlreader.py b/Lib/xml/sax/xmlreader.py
similarity index 98%
rename from Lib/xmlcore/sax/xmlreader.py
rename to Lib/xml/sax/xmlreader.py
index 1cade65..9a2361e 100644
--- a/Lib/xmlcore/sax/xmlreader.py
+++ b/Lib/xml/sax/xmlreader.py
@@ -1,9 +1,9 @@
 """An XML Reader is the SAX 2 name for an XML parser. XML Parsers
 should be based on this code. """
 
-from . import handler
+import handler
 
-from ._exceptions import SAXNotSupportedException, SAXNotRecognizedException
+from _exceptions import SAXNotSupportedException, SAXNotRecognizedException
 
 
 # ===== XMLREADER =====
@@ -113,7 +113,7 @@
         XMLReader.__init__(self)
 
     def parse(self, source):
-        from . import saxutils
+        import saxutils
         source = saxutils.prepare_input_source(source)
 
         self.prepareParser(source)
diff --git a/Lib/xmlcore/__init__.py b/Lib/xmlcore/__init__.py
deleted file mode 100644
index bf6d8dd..0000000
--- a/Lib/xmlcore/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-"""Core XML support for Python.
-
-This package contains four sub-packages:
-
-dom -- The W3C Document Object Model.  This supports DOM Level 1 +
-       Namespaces.
-
-parsers -- Python wrappers for XML parsers (currently only supports Expat).
-
-sax -- The Simple API for XML, developed by XML-Dev, led by David
-       Megginson and ported to Python by Lars Marius Garshol.  This
-       supports the SAX 2 API.
-
-etree -- The ElementTree XML library.  This is a subset of the full
-       ElementTree XML release.
-
-"""
-
-
-__all__ = ["dom", "parsers", "sax", "etree"]
diff --git a/Lib/zipfile.py b/Lib/zipfile.py
index 168d245..5c3fff3 100644
--- a/Lib/zipfile.py
+++ b/Lib/zipfile.py
@@ -1,7 +1,8 @@
-"Read and write ZIP files."
-
+"""
+Read and write ZIP files.
+"""
 import struct, os, time, sys
-import binascii
+import binascii, cStringIO
 
 try:
     import zlib # We may need its compression method
@@ -9,12 +10,22 @@
     zlib = None
 
 __all__ = ["BadZipfile", "error", "ZIP_STORED", "ZIP_DEFLATED", "is_zipfile",
-           "ZipInfo", "ZipFile", "PyZipFile"]
+           "ZipInfo", "ZipFile", "PyZipFile", "LargeZipFile" ]
 
 class BadZipfile(Exception):
     pass
+
+
+class LargeZipFile(Exception):
+    """
+    Raised when writing a zipfile that would require ZIP64 extensions
+    while those extensions are disabled.
+    """
+
 error = BadZipfile      # The exception raised by this module
 
+ZIP64_LIMIT = (1 << 31) - 1
+
 # constants for Zip file compression methods
 ZIP_STORED = 0
 ZIP_DEFLATED = 8
@@ -27,6 +38,11 @@
 stringCentralDir = "PK\001\002"   # magic number for central directory
 structFileHeader = "<4s2B4HlLL2H"  # 12 items, file header record, 30 bytes
 stringFileHeader = "PK\003\004"   # magic number for file header
+structEndArchive64Locator = "<4slql" # 4 items, locate Zip64 header, 20 bytes
+stringEndArchive64Locator = "PK\x06\x07" # magic token for locator header
+structEndArchive64 = "<4sqhhllqqqq" # 10 items, end of archive (Zip64), 56 bytes
+stringEndArchive64 = "PK\x06\x06" # magic token for Zip64 header
+
 
 # indexes of entries in the central directory structure
 _CD_SIGNATURE = 0
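A small sketch of the record layouts the new Zip64 constants describe; the sizes match the comments above, and the offset value is purely illustrative:

    import struct

    # Zip64 end-of-central-directory locator: signature, disk number holding
    # the Zip64 EOCD record, offset of that record, and total number of disks.
    assert struct.calcsize("<4slql") == 20
    # Zip64 end-of-central-directory record itself.
    assert struct.calcsize("<4sqhhllqqqq") == 56

    locator = struct.pack("<4slql", "PK\x06\x07", 0, 1000, 1)
    assert struct.unpack("<4slql", locator) == ("PK\x06\x07", 0, 1000, 1)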
@@ -75,6 +91,40 @@
         pass
     return False
 
+def _EndRecData64(fpin, offset, endrec):
+    """
+    Read the ZIP64 end-of-archive records and use them to update endrec
+    """
+    locatorSize = struct.calcsize(structEndArchive64Locator)
+    fpin.seek(offset - locatorSize, 2)
+    data = fpin.read(locatorSize)
+    sig, diskno, reloff, disks = struct.unpack(structEndArchive64Locator, data)
+    if sig != stringEndArchive64Locator:
+        return endrec
+
+    if diskno != 0 or disks != 1:
+        raise BadZipfile("zipfiles that span multiple disks are not supported")
+
+    # Assume no 'zip64 extensible data'
+    endArchiveSize = struct.calcsize(structEndArchive64)
+    fpin.seek(offset - locatorSize - endArchiveSize, 2)
+    data = fpin.read(endArchiveSize)
+    sig, sz, create_version, read_version, disk_num, disk_dir, \
+            dircount, dircount2, dirsize, diroffset = \
+            struct.unpack(structEndArchive64, data)
+    if sig != stringEndArchive64:
+        return endrec
+
+    # Update the original endrec using data from the ZIP64 record
+    endrec[1] = disk_num
+    endrec[2] = disk_dir
+    endrec[3] = dircount
+    endrec[4] = dircount2
+    endrec[5] = dirsize
+    endrec[6] = diroffset
+    return endrec
+
+
 def _EndRecData(fpin):
     """Return data from the "End of Central Directory" record, or None.
 
@@ -88,6 +138,8 @@
         endrec = list(endrec)
         endrec.append("")               # Append the archive comment
         endrec.append(filesize - 22)    # Append the record start offset
+        if endrec[-4] == -1 or endrec[-4] == 0xffffffff:
+            return _EndRecData64(fpin, -22, endrec)
         return endrec
     # Search the last END_BLOCK bytes of the file for the record signature.
     # The comment is appended to the ZIP file and has a 16 bit length.
@@ -106,25 +158,50 @@
             # Append the archive comment and start offset
             endrec.append(comment)
             endrec.append(filesize - END_BLOCK + start)
+            if endrec[-4] == -1 or endrec[-4] == 0xffffffff:
+                return _EndRecData64(fpin, - END_BLOCK + start, endrec)
             return endrec
     return      # Error, return None
 
 
-class ZipInfo:
+class ZipInfo (object):
     """Class with attributes describing each file in the ZIP archive."""
 
+    __slots__ = (
+            'orig_filename',
+            'filename',
+            'date_time',
+            'compress_type',
+            'comment',
+            'extra',
+            'create_system',
+            'create_version',
+            'extract_version',
+            'reserved',
+            'flag_bits',
+            'volume',
+            'internal_attr',
+            'external_attr',
+            'header_offset',
+            'CRC',
+            'compress_size',
+            'file_size',
+        )
+
     def __init__(self, filename="NoName", date_time=(1980,1,1,0,0,0)):
         self.orig_filename = filename   # Original file name in archive
-# Terminate the file name at the first null byte.  Null bytes in file
-# names are used as tricks by viruses in archives.
+
+        # Terminate the file name at the first null byte.  Null bytes in file
+        # names are used as tricks by viruses in archives.
         null_byte = filename.find(chr(0))
         if null_byte >= 0:
             filename = filename[0:null_byte]
-# This is used to ensure paths in generated ZIP files always use
-# forward slashes as the directory separator, as required by the
-# ZIP format specification.
-        if os.sep != "/":
+        # This is used to ensure paths in generated ZIP files always use
+        # forward slashes as the directory separator, as required by the
+        # ZIP format specification.
+        if os.sep != "/" and os.sep in filename:
             filename = filename.replace(os.sep, "/")
+
         self.filename = filename        # Normalized file name
         self.date_time = date_time      # year, month, day, hour, min, sec
         # Standard values:
@@ -145,7 +222,6 @@
         self.external_attr = 0          # External file attributes
         # Other attributes are set by class ZipFile:
         # header_offset         Byte offset to the file header
-        # file_offset           Byte offset to the start of the file data
         # CRC                   CRC-32 of the uncompressed file
         # compress_size         Size of the compressed file
         # file_size             Size of the uncompressed file
@@ -162,29 +238,85 @@
             CRC = self.CRC
             compress_size = self.compress_size
             file_size = self.file_size
+
+        extra = self.extra
+
+        if file_size > ZIP64_LIMIT or compress_size > ZIP64_LIMIT:
+            # File is larger than what fits into a 4 byte integer,
+            # fall back to the ZIP64 extension
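+            # Header ID 1 marks the ZIP64 extended-information extra field;
+            # the real 64-bit sizes go there while the 32-bit header fields
+            # are set to the 0xffffffff sentinel.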
+            fmt = '<hhqq'
+            extra = extra + struct.pack(fmt,
+                    1, struct.calcsize(fmt)-4, file_size, compress_size)
+            file_size = 0xffffffff # -1
+            compress_size = 0xffffffff # -1
+            self.extract_version = max(45, self.extract_version)
+            self.create_version = max(45, self.create_version)
+
         header = struct.pack(structFileHeader, stringFileHeader,
                  self.extract_version, self.reserved, self.flag_bits,
                  self.compress_type, dostime, dosdate, CRC,
                  compress_size, file_size,
-                 len(self.filename), len(self.extra))
-        return header + self.filename + self.extra
+                 len(self.filename), len(extra))
+        return header + self.filename + extra
+
+    def _decodeExtra(self):
+        # Try to decode the extra field.
+        extra = self.extra
+        unpack = struct.unpack
+        while extra:
+            tp, ln = unpack('<hh', extra[:4])
+            if tp == 1:
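+                # Header ID 1: ZIP64 extended information.  The block only
+                # carries 8-byte values for the fields that were stored as
+                # 0xffffffff in the central directory, hence the varying size.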
+                if ln >= 24:
+                    counts = unpack('<qqq', extra[4:28])
+                elif ln == 16:
+                    counts = unpack('<qq', extra[4:20])
+                elif ln == 8:
+                    counts = unpack('<q', extra[4:12])
+                elif ln == 0:
+                    counts = ()
+                else:
+                    raise RuntimeError, "Corrupt extra field %s" % (ln,)
+
+                idx = 0
+
+                # ZIP64 extension (large files and/or large archives)
+                if self.file_size == -1 or self.file_size == 0xFFFFFFFFL:
+                    self.file_size = counts[idx]
+                    idx += 1
+
+                if self.compress_size == -1 or self.compress_size == 0xFFFFFFFFL:
+                    self.compress_size = counts[idx]
+                    idx += 1
+
+                if self.header_offset == -1 or self.header_offset == 0xffffffffL:
+                    self.header_offset = counts[idx]
+                    idx += 1
+
+            extra = extra[ln+4:]
 
 
 class ZipFile:
     """ Class with methods to open, read, write, close, list zip files.
 
-    z = ZipFile(file, mode="r", compression=ZIP_STORED)
+    z = ZipFile(file, mode="r", compression=ZIP_STORED, allowZip64=False)
 
     file: Either the path to the file, or a file-like object.
           If it is a path, the file will be opened and closed by ZipFile.
     mode: The mode can be either read "r", write "w" or append "a".
     compression: ZIP_STORED (no compression) or ZIP_DEFLATED (requires zlib).
+    allowZip64: if True, ZipFile will create files with ZIP64 extensions when
+                needed; otherwise it will raise an exception when this would
+                be necessary.
+
     """
 
     fp = None                   # Set here since __del__ checks it
 
-    def __init__(self, file, mode="r", compression=ZIP_STORED):
+    def __init__(self, file, mode="r", compression=ZIP_STORED, allowZip64=False):
         """Open the ZIP file with mode read "r", write "w" or append "a"."""
+        self._allowZip64 = allowZip64
+        self._didModify = False
         if compression == ZIP_STORED:
             pass
         elif compression == ZIP_DEFLATED:
@@ -250,7 +382,10 @@
         offset_cd = endrec[6]   # offset of central directory
         self.comment = endrec[8]        # archive comment
         # endrec[9] is the offset of the "End of Central Dir" record
-        x = endrec[9] - size_cd
+        if endrec[9] > ZIP64_LIMIT:
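+            # The ZIP64 end-of-central-directory record (56 bytes) and its
+            # locator (20 bytes) sit between the central directory and the
+            # classic end record, so account for them here as well.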
+            x = endrec[9] - size_cd - 56 - 20
+        else:
+            x = endrec[9] - size_cd
         # "concat" is zero, unless zip was concatenated to another file
         concat = x - offset_cd
         if self.debug > 2:
@@ -258,6 +393,8 @@
         # self.start_dir:  Position of start of central directory
         self.start_dir = offset_cd + concat
         fp.seek(self.start_dir, 0)
+        data = fp.read(size_cd)
+        fp = cStringIO.StringIO(data)
         total = 0
         while total < size_cd:
             centdir = fp.read(46)
@@ -275,8 +412,7 @@
             total = (total + centdir[_CD_FILENAME_LENGTH]
                      + centdir[_CD_EXTRA_FIELD_LENGTH]
                      + centdir[_CD_COMMENT_LENGTH])
-            x.header_offset = centdir[_CD_LOCAL_HEADER_OFFSET] + concat
-            # file_offset must be computed below...
+            x.header_offset = centdir[_CD_LOCAL_HEADER_OFFSET]
             (x.create_version, x.create_system, x.extract_version, x.reserved,
                 x.flag_bits, x.compress_type, t, d,
                 x.CRC, x.compress_size, x.file_size) = centdir[1:12]
@@ -284,28 +420,14 @@
             # Convert date/time code to (year, month, day, hour, min, sec)
             x.date_time = ( (d>>9)+1980, (d>>5)&0xF, d&0x1F,
                                      t>>11, (t>>5)&0x3F, (t&0x1F) * 2 )
+
+            x._decodeExtra()
+            x.header_offset = x.header_offset + concat
             self.filelist.append(x)
             self.NameToInfo[x.filename] = x
             if self.debug > 2:
                 print "total", total
-        for data in self.filelist:
-            fp.seek(data.header_offset, 0)
-            fheader = fp.read(30)
-            if fheader[0:4] != stringFileHeader:
-                raise BadZipfile, "Bad magic number for file header"
-            fheader = struct.unpack(structFileHeader, fheader)
-            # file_offset is computed here, since the extra field for
-            # the central directory and for the local file header
-            # refer to different fields, and they can have different
-            # lengths
-            data.file_offset = (data.header_offset + 30
-                                + fheader[_FH_FILENAME_LENGTH]
-                                + fheader[_FH_EXTRA_FIELD_LENGTH])
-            fname = fp.read(fheader[_FH_FILENAME_LENGTH])
-            if fname != data.orig_filename:
-                raise RuntimeError, \
-                      'File name in directory "%s" and header "%s" differ.' % (
-                          data.orig_filename, fname)
+
 
     def namelist(self):
         """Return a list of file names in the archive."""
@@ -334,6 +456,7 @@
             except BadZipfile:
                 return zinfo.filename
 
+
     def getinfo(self, name):
         """Return the instance of ZipInfo given 'name'."""
         return self.NameToInfo[name]
@@ -347,7 +470,24 @@
                   "Attempt to read ZIP archive that was already closed"
         zinfo = self.getinfo(name)
         filepos = self.fp.tell()
-        self.fp.seek(zinfo.file_offset, 0)
+
+        self.fp.seek(zinfo.header_offset, 0)
+
+        # Skip the file header:
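+        # The name and extra-field lengths in the local header can differ from
+        # those in the central directory, so the start of the file data has to
+        # be found by reading the local header itself.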
+        fheader = self.fp.read(30)
+        if fheader[0:4] != stringFileHeader:
+            raise BadZipfile, "Bad magic number for file header"
+
+        fheader = struct.unpack(structFileHeader, fheader)
+        fname = self.fp.read(fheader[_FH_FILENAME_LENGTH])
+        if fheader[_FH_EXTRA_FIELD_LENGTH]:
+            self.fp.read(fheader[_FH_EXTRA_FIELD_LENGTH])
+
+        if fname != zinfo.orig_filename:
+            raise BadZipfile, \
+                      'File name in directory "%s" and header "%s" differ.' % (
+                          zinfo.orig_filename, fname)
+
         bytes = self.fp.read(zinfo.compress_size)
         self.fp.seek(filepos, 0)
         if zinfo.compress_type == ZIP_STORED:
@@ -388,6 +528,12 @@
         if zinfo.compress_type not in (ZIP_STORED, ZIP_DEFLATED):
             raise RuntimeError, \
                   "That compression method is not supported"
+        if zinfo.file_size > ZIP64_LIMIT:
+            if not self._allowZip64:
+                raise LargeZipFile("Filesize would require ZIP64 extensions")
+        if zinfo.header_offset > ZIP64_LIMIT:
+            if not self._allowZip64:
+                raise LargeZipFile("Zipfile size would require ZIP64 extensions")
 
     def write(self, filename, arcname=None, compress_type=None):
         """Put the bytes from filename into the archive under the name
@@ -407,16 +553,19 @@
             zinfo.compress_type = self.compression
         else:
             zinfo.compress_type = compress_type
-        self._writecheck(zinfo)
-        fp = open(filename, "rb")
+
+        zinfo.file_size = st.st_size
         zinfo.flag_bits = 0x00
         zinfo.header_offset = self.fp.tell()    # Start of header bytes
+
+        self._writecheck(zinfo)
+        self._didModify = True
+        fp = open(filename, "rb")
         # Must overwrite CRC and sizes with correct data later
         zinfo.CRC = CRC = 0
         zinfo.compress_size = compress_size = 0
         zinfo.file_size = file_size = 0
         self.fp.write(zinfo.FileHeader())
-        zinfo.file_offset = self.fp.tell()      # Start of file bytes
         if zinfo.compress_type == ZIP_DEFLATED:
             cmpr = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION,
                  zlib.DEFLATED, -15)
@@ -461,8 +610,10 @@
             zinfo.compress_type = self.compression
         else:
             zinfo = zinfo_or_arcname
-        self._writecheck(zinfo)
         zinfo.file_size = len(bytes)            # Uncompressed size
+        zinfo.header_offset = self.fp.tell()    # Start of header bytes
+        self._writecheck(zinfo)
+        self._didModify = True
         zinfo.CRC = binascii.crc32(bytes)       # CRC-32 checksum
         if zinfo.compress_type == ZIP_DEFLATED:
             co = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION,
@@ -473,8 +624,8 @@
             zinfo.compress_size = zinfo.file_size
         zinfo.header_offset = self.fp.tell()    # Start of header bytes
         self.fp.write(zinfo.FileHeader())
-        zinfo.file_offset = self.fp.tell()      # Start of file bytes
         self.fp.write(bytes)
+        self.fp.flush()
         if zinfo.flag_bits & 0x08:
             # Write CRC and file sizes after the file data
             self.fp.write(struct.pack("<lLL", zinfo.CRC, zinfo.compress_size,
@@ -491,7 +642,8 @@
         records."""
         if self.fp is None:
             return
-        if self.mode in ("w", "a"):             # write ending records
+
+        if self.mode in ("w", "a") and self._didModify: # write ending records
             count = 0
             pos1 = self.fp.tell()
             for zinfo in self.filelist:         # write central directory
@@ -499,23 +651,73 @@
                 dt = zinfo.date_time
                 dosdate = (dt[0] - 1980) << 9 | dt[1] << 5 | dt[2]
                 dostime = dt[3] << 11 | dt[4] << 5 | (dt[5] // 2)
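+                # Any field that does not fit in 32 bits is collected in
+                # 'extra' and written to a ZIP64 extra block; the 32-bit
+                # field itself is then set to the all-ones sentinel.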
+                extra = []
+                if zinfo.file_size > ZIP64_LIMIT \
+                        or zinfo.compress_size > ZIP64_LIMIT:
+                    extra.append(zinfo.file_size)
+                    extra.append(zinfo.compress_size)
+                    file_size = 0xffffffff #-1
+                    compress_size = 0xffffffff #-1
+                else:
+                    file_size = zinfo.file_size
+                    compress_size = zinfo.compress_size
+
+                if zinfo.header_offset > ZIP64_LIMIT:
+                    extra.append(zinfo.header_offset)
+                    header_offset = -1  # struct "l" format:  32 one bits
+                else:
+                    header_offset = zinfo.header_offset
+
+                extra_data = zinfo.extra
+                if extra:
+                    # Prepend a ZIP64 extra-information block to the extra data
+                    extra_data = struct.pack(
+                            '<hh' + 'q'*len(extra),
+                            1, 8*len(extra), *extra) + extra_data
+
+                    extract_version = max(45, zinfo.extract_version)
+                    create_version = max(45, zinfo.create_version)
+                else:
+                    extract_version = zinfo.extract_version
+                    create_version = zinfo.create_version
+
                 centdir = struct.pack(structCentralDir,
-                  stringCentralDir, zinfo.create_version,
-                  zinfo.create_system, zinfo.extract_version, zinfo.reserved,
+                  stringCentralDir, create_version,
+                  zinfo.create_system, extract_version, zinfo.reserved,
                   zinfo.flag_bits, zinfo.compress_type, dostime, dosdate,
-                  zinfo.CRC, zinfo.compress_size, zinfo.file_size,
-                  len(zinfo.filename), len(zinfo.extra), len(zinfo.comment),
+                  zinfo.CRC, compress_size, file_size,
+                  len(zinfo.filename), len(extra_data), len(zinfo.comment),
                   0, zinfo.internal_attr, zinfo.external_attr,
-                  zinfo.header_offset)
+                  header_offset)
                 self.fp.write(centdir)
                 self.fp.write(zinfo.filename)
-                self.fp.write(zinfo.extra)
+                self.fp.write(extra_data)
                 self.fp.write(zinfo.comment)
+
             pos2 = self.fp.tell()
             # Write end-of-zip-archive record
-            endrec = struct.pack(structEndArchive, stringEndArchive,
-                     0, 0, count, count, pos2 - pos1, pos1, 0)
-            self.fp.write(endrec)
+            if pos1 > ZIP64_LIMIT:
+                # Need to write the ZIP64 end-of-archive records
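+                # Layout: ZIP64 end-of-central-directory record, then its
+                # locator, then the classic end record with -1 as the
+                # central-directory offset.  44 is the size of the remainder
+                # of the ZIP64 record, and 45 is the version needed for ZIP64.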
+                zip64endrec = struct.pack(
+                        structEndArchive64, stringEndArchive64,
+                        44, 45, 45, 0, 0, count, count, pos2 - pos1, pos1)
+                self.fp.write(zip64endrec)
+
+                zip64locrec = struct.pack(
+                        structEndArchive64Locator,
+                        stringEndArchive64Locator, 0, pos2, 1)
+                self.fp.write(zip64locrec)
+
+                # XXX Why is `pos3` computed next?  It's never referenced.
+                pos3 = self.fp.tell()
+                endrec = struct.pack(structEndArchive, stringEndArchive,
+                            0, 0, count, count, pos2 - pos1, -1, 0)
+                self.fp.write(endrec)
+
+            else:
+                endrec = struct.pack(structEndArchive, stringEndArchive,
+                         0, 0, count, count, pos2 - pos1, pos1, 0)
+                self.fp.write(endrec)
             self.fp.flush()
         if not self._filePassed:
             self.fp.close()
@@ -619,3 +821,80 @@
         if basename:
             archivename = "%s/%s" % (basename, archivename)
         return (fname, archivename)
+
+
+def main(args=None):
+    import textwrap
+    USAGE = textwrap.dedent("""\
+        Usage:
+            zipfile.py -l zipfile.zip        # Show listing of a zipfile
+            zipfile.py -t zipfile.zip        # Test if a zipfile is valid
+            zipfile.py -e zipfile.zip target # Extract zipfile into target dir
+            zipfile.py -c zipfile.zip src ... # Create zipfile from sources
+        """)
+    if args is None:
+        args = sys.argv[1:]
+
+    if not args or args[0] not in ('-l', '-c', '-e', '-t'):
+        print USAGE
+        sys.exit(1)
+
+    if args[0] == '-l':
+        if len(args) != 2:
+            print USAGE
+            sys.exit(1)
+        zf = ZipFile(args[1], 'r')
+        zf.printdir()
+        zf.close()
+
+    elif args[0] == '-t':
+        if len(args) != 2:
+            print USAGE
+            sys.exit(1)
+        zf = ZipFile(args[1], 'r')
+        zf.testzip()
+        print "Done testing"
+
+    elif args[0] == '-e':
+        if len(args) != 3:
+            print USAGE
+            sys.exit(1)
+
+        zf = ZipFile(args[1], 'r')
+        out = args[2]
+        for path in zf.namelist():
+            if path.startswith('./'):
+                tgt = os.path.join(out, path[2:])
+            else:
+                tgt = os.path.join(out, path)
+
+            tgtdir = os.path.dirname(tgt)
+            if not os.path.exists(tgtdir):
+                os.makedirs(tgtdir)
+            fp = open(tgt, 'wb')
+            fp.write(zf.read(path))
+            fp.close()
+        zf.close()
+
+    elif args[0] == '-c':
+        if len(args) < 3:
+            print USAGE
+            sys.exit(1)
+
+        def addToZip(zf, path, zippath):
+            if os.path.isfile(path):
+                zf.write(path, zippath, ZIP_DEFLATED)
+            elif os.path.isdir(path):
+                for nm in os.listdir(path):
+                    addToZip(zf,
+                            os.path.join(path, nm), os.path.join(zippath, nm))
+            # else: ignore
+
+        zf = ZipFile(args[1], 'w', allowZip64=True)
+        for src in args[2:]:
+            addToZip(zf, src, os.path.basename(src))
+
+        zf.close()
+
+if __name__ == "__main__":
+    main()