Branch merge.

PyUnicode_KIND() values now equal the character size (1, 2 or 4), so the
PyUnicode_CHARACTER_SIZE and PyUnicode_KIND_SIZE macros are removed in favour
of plain kind * index arithmetic, and an rfind slice-dispatch typo in
unicodeobject.c is fixed.  The docs now state that mutating methods of
built-in collections return None, lib2to3's test_parser tests are re-enabled
with an expected failure (issue 13125), test_pkgutil uses addCleanup, and
socketmodule guards its CAN address code with AF_CAN instead of
HAVE_LINUX_CAN_H.
diff --git a/Doc/c-api/unicode.rst b/Doc/c-api/unicode.rst
index 9cc0caa..00d6007 100644
--- a/Doc/c-api/unicode.rst
+++ b/Doc/c-api/unicode.rst
@@ -99,7 +99,7 @@
 
    .. deprecated-removed:: 3.3 4.0
       Part of the old-style Unicode API, please migrate to using
-      :c:func:`PyUnicode_GET_LENGTH` or :c:func:`PyUnicode_KIND_SIZE`.
+      :c:func:`PyUnicode_GET_LENGTH`.
 
 
 .. c:function:: Py_UNICODE* PyUnicode_AS_UNICODE(PyObject *o)
@@ -149,9 +149,8 @@
    Return a pointer to the canonical representation cast to UCS1, UCS2 or UCS4
    integer types for direct character access.  No checks are performed if the
    canonical representation has the correct character size; use
-   :c:func:`PyUnicode_CHARACTER_SIZE` or :c:func:`PyUnicode_KIND` to select the
-   right macro.  Make sure :c:func:`PyUnicode_READY` has been called before
-   accessing this.
+   :c:func:`PyUnicode_KIND` to select the right macro.  Make sure
+   :c:func:`PyUnicode_READY` has been called before accessing this.
 
    .. versionadded:: 3.3
 
@@ -176,15 +175,6 @@
    .. versionadded:: 3.3
 
 
-.. c:function:: int PyUnicode_CHARACTER_SIZE(PyObject *o)
-
-   Return the number of bytes the string uses to represent single characters;
-   this can be 1, 2 or 4.  *o* has to be a Unicode object in the "canonical"
-   representation (not checked).
-
-   .. versionadded:: 3.3
-
-
 .. c:function:: void* PyUnicode_DATA(PyObject *o)
 
    Return a void pointer to the raw unicode buffer.  *o* has to be a Unicode
@@ -193,14 +183,6 @@
    .. versionadded:: 3.3
 
 
-.. c:function:: int PyUnicode_KIND_SIZE(int kind, Py_ssize_t index)
-
-   Compute ``index * char_size`` where ``char_size`` is ``2**(kind - 1)``.  The
-   index is a character index, the result is a size in bytes.
-
-   .. versionadded:: 3.3
-
-
 .. c:function:: void PyUnicode_WRITE(int kind, void *data, Py_ssize_t index, \
                                      Py_UCS4 value)
 
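The macros documented above expose the PEP 393 flexible string representation: each string stores its characters in 1, 2 or 4 bytes depending on its widest code point, and with this change that size is obtained directly from PyUnicode_KIND rather than the removed PyUnicode_CHARACTER_SIZE.  The effect is visible from pure Python too; the sketch below relies on the CPython 3.3+ implementation detail that sys.getsizeof grows by exactly the per-character size when one ASCII character is appended to a compact string:

    import sys

    # Each string's per-character storage is 1, 2 or 4 bytes, depending on
    # the widest code point it contains (CPython 3.3+ implementation detail).
    for s in ("abc", "ab\u20ac", "ab\U0001F600"):
        delta = sys.getsizeof(s + "x") - sys.getsizeof(s)
        print(ascii(s), "stores", delta, "byte(s) per character")
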
diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst
index 46f0f9f..559921a 100644
--- a/Doc/library/stdtypes.rst
+++ b/Doc/library/stdtypes.rst
@@ -15,6 +15,10 @@
 The principal built-in types are numerics, sequences, mappings, classes,
 instances and exceptions.
 
+Some collection classes are mutable.  The methods that add, subtract, or
+rearrange their members in place, and don't return a specific item, never return
+the collection instance itself but ``None``.
+
 Some operations are supported by several object types; in particular,
 practically all objects can be compared, tested for truth value, and converted
 to a string (with the :func:`repr` function or the slightly different
diff --git a/Doc/tutorial/datastructures.rst b/Doc/tutorial/datastructures.rst
index defb47c..44c09c3 100644
--- a/Doc/tutorial/datastructures.rst
+++ b/Doc/tutorial/datastructures.rst
@@ -19,13 +19,13 @@
 .. method:: list.append(x)
    :noindex:
 
-   Add an item to the end of the list; equivalent to ``a[len(a):] = [x]``.
+   Add an item to the end of the list.  Equivalent to ``a[len(a):] = [x]``.
 
 
 .. method:: list.extend(L)
    :noindex:
 
-   Extend the list by appending all the items in the given list; equivalent to
+   Extend the list by appending all the items in the given list.  Equivalent to
    ``a[len(a):] = L``.
 
 
@@ -40,8 +40,8 @@
 .. method:: list.remove(x)
    :noindex:
 
-   Remove the first item from the list whose value is *x*. It is an error if there
-   is no such item.
+   Remove the first item from the list whose value is *x*.  It is an error if
+   there is no such item.
 
 
 .. method:: list.pop([i])
@@ -70,13 +70,14 @@
 .. method:: list.sort()
    :noindex:
 
-   Sort the items of the list, in place.
+   Sort the items of the list in place.
 
 
 .. method:: list.reverse()
    :noindex:
 
-   Reverse the elements of the list, in place.
+   Reverse the elements of the list in place.
+
 
 An example that uses most of the list methods::
 
@@ -99,6 +100,10 @@
    >>> a
    [-1, 1, 66.25, 333, 333, 1234.5]
 
+You might have noticed that methods like ``insert``, ``remove`` or ``sort`` that
+modify the list have no return value printed -- they return ``None``. [1]_  This
+is a design principle for all mutable data structures in Python.
+
 
 .. _tut-lists-as-stacks:
 
@@ -438,7 +443,7 @@
 
 Performing ``list(d.keys())`` on a dictionary returns a list of all the keys
 used in the dictionary, in arbitrary order (if you want it sorted, just use
-``sorted(d.keys())`` instead). [1]_  To check whether a single key is in the
+``sorted(d.keys())`` instead). [2]_  To check whether a single key is in the
 dictionary, use the :keyword:`in` keyword.
 
 Here is a small example using a dictionary::
@@ -622,6 +627,9 @@
 
 .. rubric:: Footnotes
 
-.. [1] Calling ``d.keys()`` will return a :dfn:`dictionary view` object.  It
+.. [1] Other languages may return the mutated object, which allows method
+       chaining, such as ``d->insert("a")->remove("b")->sort();``.
+
+.. [2] Calling ``d.keys()`` will return a :dfn:`dictionary view` object.  It
        supports operations like membership test and iteration, but its contents
        are not independent of the original dictionary -- it is only a *view*.
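
The behaviour the new paragraphs and footnote describe is easy to see at the prompt: the mutating list methods return ``None``, so the chaining style mentioned for other languages does not work in Python.  A short interactive illustration:

    >>> a = [66.25, 333, 1, 1234.5]
    >>> print(a.sort())
    None
    >>> a
    [1, 66.25, 333, 1234.5]
    >>> a.sort().reverse()
    Traceback (most recent call last):
      ...
    AttributeError: 'NoneType' object has no attribute 'reverse'
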
diff --git a/Include/unicodeobject.h b/Include/unicodeobject.h
index c6dfdf7..5144234 100644
--- a/Include/unicodeobject.h
+++ b/Include/unicodeobject.h
@@ -305,12 +305,12 @@
              * character type = Py_UCS2 (16 bits, unsigned)
              * at least one character must be in range U+0100-U+FFFF
 
-           - PyUnicode_4BYTE_KIND (3):
+           - PyUnicode_4BYTE_KIND (4):
 
              * character type = Py_UCS4 (32 bits, unsigned)
              * at least one character must be in range U+10000-U+10FFFF
          */
-        unsigned int kind:2;
+        unsigned int kind:3;
         /* Compact is with respect to the allocation scheme. Compact unicode
            objects only require one memory block while non-compact objects use
            one block for the PyUnicodeObject struct and another for its data
@@ -424,29 +424,21 @@
 #define PyUnicode_IS_COMPACT_ASCII(op)                 \
     (PyUnicode_IS_ASCII(op) && PyUnicode_IS_COMPACT(op))
 
+enum PyUnicode_Kind {
 /* String contains only wstr byte characters.  This is only possible
    when the string was created with a legacy API and _PyUnicode_Ready()
    has not been called yet.  */
-#define PyUnicode_WCHAR_KIND 0
-
+    PyUnicode_WCHAR_KIND = 0,
 /* Return values of the PyUnicode_KIND() macro: */
-
-#define PyUnicode_1BYTE_KIND 1
-#define PyUnicode_2BYTE_KIND 2
-#define PyUnicode_4BYTE_KIND 3
-
-
-/* Return the number of bytes the string uses to represent single characters,
-   this can be 1, 2 or 4.
-
-   See also PyUnicode_KIND_SIZE(). */
-#define PyUnicode_CHARACTER_SIZE(op) \
-    (((Py_ssize_t)1 << (PyUnicode_KIND(op) - 1)))
+    PyUnicode_1BYTE_KIND = 1,
+    PyUnicode_2BYTE_KIND = 2,
+    PyUnicode_4BYTE_KIND = 4
+};
 
 /* Return pointers to the canonical representation cast to unsigned char,
    Py_UCS2, or Py_UCS4 for direct character access.
-   No checks are performed, use PyUnicode_CHARACTER_SIZE or
-   PyUnicode_KIND() before to ensure these will work correctly. */
+   No checks are performed, use PyUnicode_KIND() before to ensure
+   these will work correctly. */
 
 #define PyUnicode_1BYTE_DATA(op) ((Py_UCS1*)PyUnicode_DATA(op))
 #define PyUnicode_2BYTE_DATA(op) ((Py_UCS2*)PyUnicode_DATA(op))
@@ -473,13 +465,6 @@
      PyUnicode_IS_COMPACT(op) ? _PyUnicode_COMPACT_DATA(op) :   \
      _PyUnicode_NONCOMPACT_DATA(op))
 
-/* Compute (index * char_size) where char_size is 2 ** (kind - 1).
-   The index is a character index, the result is a size in bytes.
-
-   See also PyUnicode_CHARACTER_SIZE(). */
-#define PyUnicode_KIND_SIZE(kind, index) \
-    (((Py_ssize_t)(index)) << ((kind) - 1))
-
 /* In the access macros below, "kind" may be evaluated more than once.
    All other macro parameters are evaluated exactly once, so it is safe
    to put side effects into them (such as increasing the index). */
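
With the enum values above, PyUnicode_KIND() is itself the element size in bytes, which is why every PyUnicode_KIND_SIZE(kind, i) in the C changes below collapses to plain kind * i.  The indexing arithmetic (not the C API) can be sketched in Python over a flat UCS1/UCS2/UCS4 buffer; read_char and the sample buffers here are made up for the illustration:

    # Index character i in a flat buffer whose elements are 'kind' bytes wide.
    def read_char(buf, kind, i):
        start = kind * i                      # was PyUnicode_KIND_SIZE(kind, i)
        return chr(int.from_bytes(buf[start:start + kind], "little"))

    ucs1 = "h\u00e9llo".encode("latin-1")     # 1 byte per character
    ucs4 = "a\U0001F600".encode("utf-32-le")  # 4 bytes per character, little-endian
    assert read_char(ucs1, 1, 1) == "\u00e9"
    assert read_char(ucs4, 4, 1) == "\U0001F600"
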
diff --git a/Lib/lib2to3/tests/test_parser.py b/Lib/lib2to3/tests/test_parser.py
index f389795..f32404c 100644
--- a/Lib/lib2to3/tests/test_parser.py
+++ b/Lib/lib2to3/tests/test_parser.py
@@ -14,6 +14,7 @@
 
 # Python imports
 import os
+import unittest
 
 # Local imports
 from lib2to3.pgen2 import tokenize
@@ -157,19 +158,22 @@
 
     """A cut-down version of pytree_idempotency.py."""
 
+    # Issue 13125
+    @unittest.expectedFailure
     def test_all_project_files(self):
         for filepath in support.all_project_files():
             with open(filepath, "rb") as fp:
                 encoding = tokenize.detect_encoding(fp.readline)[0]
             self.assertTrue(encoding is not None,
                             "can't detect encoding for %s" % filepath)
-            with open(filepath, "r") as fp:
+            with open(filepath, "r", encoding=encoding) as fp:
                 source = fp.read()
-                source = source.decode(encoding)
-            tree = driver.parse_string(source)
+            try:
+                tree = driver.parse_string(source)
+            except ParseError as err:
+                print('ParseError on file', filepath, err)
+                continue
             new = str(tree)
-            if encoding:
-                new = new.encode(encoding)
             if diff(filepath, new):
                 self.fail("Idempotency failed: %s" % filepath)
 
@@ -212,14 +216,14 @@
         self.validate(s)
 
 
-def diff(fn, result, encoding):
-    f = open("@", "w")
+def diff(fn, result):
     try:
-        f.write(result.encode(encoding))
-    finally:
-        f.close()
-    try:
+        with open('@', 'w') as f:
+            f.write(str(result))
         fn = fn.replace('"', '\\"')
         return os.system('diff -u "%s" @' % fn)
     finally:
-        os.remove("@")
+        try:
+            os.remove("@")
+        except OSError:
+            pass
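
For reference, the unittest.expectedFailure decorator added above makes a failing test count as an expected failure instead of breaking the run, while a pass is reported as an unexpected success.  A minimal self-contained sketch:

    import unittest

    class Demo(unittest.TestCase):
        @unittest.expectedFailure
        def test_known_bug(self):
            # Fails for now (cf. the issue referenced above); the run still succeeds.
            self.assertEqual(1 + 1, 3)

    if __name__ == "__main__":
        unittest.main()   # prints "OK (expected failures=1)"
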
diff --git a/Lib/test/test_lib2to3.py b/Lib/test/test_lib2to3.py
index 0d6f9a3..df4c37b 100644
--- a/Lib/test/test_lib2to3.py
+++ b/Lib/test/test_lib2to3.py
@@ -1,6 +1,7 @@
 # Skipping test_parser and test_all_fixers
 # because of running
 from lib2to3.tests import (test_fixers, test_pytree, test_util, test_refactor,
+                           test_parser,
                            test_main as test_main_)
 import unittest
 from test.support import run_unittest
@@ -8,7 +9,7 @@
 def suite():
     tests = unittest.TestSuite()
     loader = unittest.TestLoader()
-    for m in (test_fixers, test_pytree,test_util, test_refactor,
+    for m in (test_fixers, test_pytree, test_util, test_refactor, test_parser,
               test_main_):
         tests.addTests(loader.loadTestsFromModule(m))
     return tests
diff --git a/Lib/test/test_pkgutil.py b/Lib/test/test_pkgutil.py
index f755e67..f4e0323 100644
--- a/Lib/test/test_pkgutil.py
+++ b/Lib/test/test_pkgutil.py
@@ -15,11 +15,11 @@
 
     def setUp(self):
         self.dirname = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, self.dirname)
         sys.path.insert(0, self.dirname)
 
     def tearDown(self):
         del sys.path[0]
-        shutil.rmtree(self.dirname)
 
     def test_getdata_filesys(self):
         pkg = 'test_getdata_filesys'
@@ -91,9 +91,9 @@
         # this does not appear to create an unreadable dir on Windows
         #   but the test should not fail anyway
         os.mkdir(d, 0)
+        self.addCleanup(os.rmdir, d)
         for t in pkgutil.walk_packages(path=[self.dirname]):
             self.fail("unexpected package found")
-        os.rmdir(d)
 
 class PkgutilPEP302Tests(unittest.TestCase):
 
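The tearDown-to-addCleanup moves above follow the usual pattern: register the cleanup as soon as the resource exists, and unittest runs the registered callables in LIFO order after the test, even if a later setUp step or the test body raises.  A minimal sketch of the same pattern (class and test names are illustrative):

    import shutil, tempfile, unittest

    class TempDirTest(unittest.TestCase):
        def setUp(self):
            self.dirname = tempfile.mkdtemp()
            # Registered immediately, so the directory is removed even if the
            # rest of setUp or the test itself fails.
            self.addCleanup(shutil.rmtree, self.dirname)

        def test_uses_tempdir(self):
            self.assertTrue(self.dirname)
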
diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py
index 27df862..9a5862d 100644
--- a/Lib/test/test_unicode.py
+++ b/Lib/test/test_unicode.py
@@ -170,6 +170,7 @@
         self.checkequalnofix(0, 'aaa', 'count',  'a', 0, -10)
 
     def test_find(self):
+        string_tests.CommonTest.test_find(self)
         self.checkequalnofix(0,  'abcdefghiabc', 'find', 'abc')
         self.checkequalnofix(9,  'abcdefghiabc', 'find', 'abc', 1)
         self.checkequalnofix(-1, 'abcdefghiabc', 'find', 'def', 4)
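
The added call runs the shared checks from the common string-test mixin before the unicode-specific ones; the pattern is simply an explicit call to the base class method from the override.  A schematic sketch with made-up class names:

    import unittest

    class CommonTest(unittest.TestCase):
        def test_find(self):
            self.assertEqual("abc".find("b"), 1)        # checks shared by subclasses

    class UnicodeTest(CommonTest):
        def test_find(self):
            CommonTest.test_find(self)                  # keep the common checks
            self.assertEqual("abcabc".find("c", 3), 5)  # plus type-specific ones
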
diff --git a/Misc/NEWS b/Misc/NEWS
index 953671b..1a3ec2a 100644
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -1388,6 +1388,9 @@
 Tests
 -----
 
+- Re-enable lib2to3's test_parser.py tests, though with an expected failure
+  (see issue 13125).
+
 - Issue #12656: Add tests for IPv6 and Unix sockets to test_asyncore.
 
 - Issue #6484: Add unit tests for mailcap module (patch by Gregory Nofi)
diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c
index 880a5f0..aa29ffb 100644
--- a/Modules/_io/textio.c
+++ b/Modules/_io/textio.c
@@ -291,9 +291,7 @@
         kind = PyUnicode_KIND(modified);
         out = PyUnicode_DATA(modified);
         PyUnicode_WRITE(kind, PyUnicode_DATA(modified), 0, '\r');
-        memcpy(out + PyUnicode_KIND_SIZE(kind, 1),
-               PyUnicode_DATA(output),
-               PyUnicode_KIND_SIZE(kind, output_len));
+        memcpy(out + kind, PyUnicode_DATA(output), kind * output_len);
         Py_DECREF(output);
         output = modified; /* output remains ready */
         self->pendingcr = 0;
@@ -336,7 +334,7 @@
            for the \r *byte* with the libc's optimized memchr.
            */
         if (seennl == SEEN_LF || seennl == 0) {
-            only_lf = (memchr(in_str, '\r', PyUnicode_KIND_SIZE(kind, len)) == NULL);
+            only_lf = (memchr(in_str, '\r', kind * len) == NULL);
         }
 
         if (only_lf) {
@@ -344,7 +342,7 @@
                (there's nothing else to be done, even when in translation mode)
             */
             if (seennl == 0 &&
-                memchr(in_str, '\n', PyUnicode_KIND_SIZE(kind, len)) != NULL) {
+                memchr(in_str, '\n', kind * len) != NULL) {
                 Py_ssize_t i = 0;
                 for (;;) {
                     Py_UCS4 c;
@@ -403,7 +401,7 @@
                when there is something to translate. On the other hand,
                we already know there is a \r byte, so chances are high
                that something needs to be done. */
-            translated = PyMem_Malloc(PyUnicode_KIND_SIZE(kind, len));
+            translated = PyMem_Malloc(kind * len);
             if (translated == NULL) {
                 PyErr_NoMemory();
                 goto error;
@@ -1576,15 +1574,14 @@
 static char *
 find_control_char(int kind, char *s, char *end, Py_UCS4 ch)
 {
-    int size = PyUnicode_KIND_SIZE(kind, 1);
     for (;;) {
         while (PyUnicode_READ(kind, s, 0) > ch)
-            s += size;
+            s += kind;
         if (PyUnicode_READ(kind, s, 0) == ch)
             return s;
         if (s == end)
             return NULL;
-        s += size;
+        s += kind;
     }
 }
 
@@ -1593,14 +1590,13 @@
     int translated, int universal, PyObject *readnl,
     int kind, char *start, char *end, Py_ssize_t *consumed)
 {
-    int size = PyUnicode_KIND_SIZE(kind, 1);
-    Py_ssize_t len = ((char*)end - (char*)start)/size;
+    Py_ssize_t len = ((char*)end - (char*)start)/kind;
 
     if (translated) {
         /* Newlines are already translated, only search for \n */
         char *pos = find_control_char(kind, start, end, '\n');
         if (pos != NULL)
-            return (pos - start)/size + 1;
+            return (pos - start)/kind + 1;
         else {
             *consumed = len;
             return -1;
@@ -1616,20 +1612,20 @@
             /* Fast path for non-control chars. The loop always ends
                since the Unicode string is NUL-terminated. */
             while (PyUnicode_READ(kind, s, 0) > '\r')
-                s += size;
+                s += kind;
             if (s >= end) {
                 *consumed = len;
                 return -1;
             }
             ch = PyUnicode_READ(kind, s, 0);
-            s += size;
+            s += kind;
             if (ch == '\n')
-                return (s - start)/size;
+                return (s - start)/kind;
             if (ch == '\r') {
                 if (PyUnicode_READ(kind, s, 0) == '\n')
-                    return (s - start)/size + 1;
+                    return (s - start)/kind + 1;
                 else
-                    return (s - start)/size;
+                    return (s - start)/kind;
             }
         }
     }
@@ -1642,13 +1638,13 @@
         if (readnl_len == 1) {
             char *pos = find_control_char(kind, start, end, nl[0]);
             if (pos != NULL)
-                return (pos - start)/size + 1;
+                return (pos - start)/kind + 1;
             *consumed = len;
             return -1;
         }
         else {
             char *s = start;
-            char *e = end - (readnl_len - 1)*size;
+            char *e = end - (readnl_len - 1)*kind;
             char *pos;
             if (e < s)
                 e = s;
@@ -1662,14 +1658,14 @@
                         break;
                 }
                 if (i == readnl_len)
-                    return (pos - start)/size + readnl_len;
-                s = pos + size;
+                    return (pos - start)/kind + readnl_len;
+                s = pos + kind;
             }
             pos = find_control_char(kind, e, end, nl[0]);
             if (pos == NULL)
                 *consumed = len;
             else
-                *consumed = (pos - start)/size;
+                *consumed = (pos - start)/kind;
             return -1;
         }
     }
@@ -1738,8 +1734,8 @@
         endpos = _PyIO_find_line_ending(
             self->readtranslate, self->readuniversal, self->readnl,
             kind,
-            ptr + PyUnicode_KIND_SIZE(kind, start),
-            ptr + PyUnicode_KIND_SIZE(kind, line_len),
+            ptr + kind * start,
+            ptr + kind * line_len,
             &consumed);
         if (endpos >= 0) {
             endpos += start;
diff --git a/Modules/_json.c b/Modules/_json.c
index 0f550c1..e49d1b2 100644
--- a/Modules/_json.c
+++ b/Modules/_json.c
@@ -365,7 +365,7 @@
             APPEND_OLD_CHUNK
                 chunk = PyUnicode_FromKindAndData(
                     kind,
-                    (char*)buf + PyUnicode_KIND_SIZE(kind, end),
+                    (char*)buf + kind * end,
                     next - end);
             if (chunk == NULL) {
                 goto bail;
@@ -931,7 +931,7 @@
     if (custom_func) {
         /* copy the section we determined to be a number */
         numstr = PyUnicode_FromKindAndData(kind,
-                                           (char*)str + PyUnicode_KIND_SIZE(kind, start),
+                                           (char*)str + kind * start,
                                            idx - start);
         if (numstr == NULL)
             return NULL;
diff --git a/Modules/_sre.c b/Modules/_sre.c
index c685bae..395a120 100644
--- a/Modules/_sre.c
+++ b/Modules/_sre.c
@@ -1669,7 +1669,7 @@
             return NULL;
         ptr = PyUnicode_DATA(string);
         *p_length = PyUnicode_GET_LENGTH(string);
-        *p_charsize = PyUnicode_CHARACTER_SIZE(string);
+        *p_charsize = PyUnicode_KIND(string);
         *p_logical_charsize = 4;
         return ptr;
     }
diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c
index aac1b72..e64c960 100644
--- a/Modules/socketmodule.c
+++ b/Modules/socketmodule.c
@@ -1220,7 +1220,7 @@
     }
 #endif
 
-#ifdef HAVE_LINUX_CAN_H
+#ifdef AF_CAN
     case AF_CAN:
     {
         struct sockaddr_can *a = (struct sockaddr_can *)addr;
@@ -1606,7 +1606,7 @@
     }
 #endif
 
-#ifdef HAVE_LINUX_CAN_H
+#ifdef AF_CAN
     case AF_CAN:
         switch (s->sock_proto) {
         case CAN_RAW:
@@ -1746,7 +1746,7 @@
     }
 #endif
 
-#ifdef HAVE_LINUX_CAN_H
+#ifdef AF_CAN
     case AF_CAN:
     {
         *len_ret = sizeof (struct sockaddr_can);
diff --git a/Objects/listobject.c b/Objects/listobject.c
index 28d94e7..049f2a8 100644
--- a/Objects/listobject.c
+++ b/Objects/listobject.c
@@ -2329,16 +2329,16 @@
 PyDoc_STRVAR(copy_doc,
 "L.copy() -> list -- a shallow copy of L");
 PyDoc_STRVAR(append_doc,
-"L.append(object) -- append object to end");
+"L.append(object) -> None -- append object to end");
 PyDoc_STRVAR(extend_doc,
-"L.extend(iterable) -- extend list by appending elements from the iterable");
+"L.extend(iterable) -> None -- extend list by appending elements from the iterable");
 PyDoc_STRVAR(insert_doc,
 "L.insert(index, object) -- insert object before index");
 PyDoc_STRVAR(pop_doc,
 "L.pop([index]) -> item -- remove and return item at index (default last).\n"
 "Raises IndexError if list is empty or index is out of range.");
 PyDoc_STRVAR(remove_doc,
-"L.remove(value) -- remove first occurrence of value.\n"
+"L.remove(value) -> None -- remove first occurrence of value.\n"
 "Raises ValueError if the value is not present.");
 PyDoc_STRVAR(index_doc,
 "L.index(value, [start, [stop]]) -> integer -- return first index of value.\n"
@@ -2348,7 +2348,7 @@
 PyDoc_STRVAR(reverse_doc,
 "L.reverse() -- reverse *IN PLACE*");
 PyDoc_STRVAR(sort_doc,
-"L.sort(key=None, reverse=False) -- stable sort *IN PLACE*");
+"L.sort(key=None, reverse=False) -> None -- stable sort *IN PLACE*");
 
 static PyObject *list_subscript(PyListObject*, PyObject*);
 
diff --git a/Objects/stringlib/eq.h b/Objects/stringlib/eq.h
index dd67128..8e79a43 100644
--- a/Objects/stringlib/eq.h
+++ b/Objects/stringlib/eq.h
@@ -30,5 +30,5 @@
         PyUnicode_GET_LENGTH(a) == 1)
         return 1;
     return memcmp(PyUnicode_1BYTE_DATA(a), PyUnicode_1BYTE_DATA(b),
-                  PyUnicode_GET_LENGTH(a) * PyUnicode_CHARACTER_SIZE(a)) == 0;
+                  PyUnicode_GET_LENGTH(a) * PyUnicode_KIND(a)) == 0;
 }
diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c
index 7e29a03..e904b6e 100644
--- a/Objects/unicodeobject.c
+++ b/Objects/unicodeobject.c
@@ -470,12 +470,12 @@
     if (direction == 1) {
         for(i = 0; i < size; i++)
             if (PyUnicode_READ(kind, s, i) == ch)
-                return (char*)s + PyUnicode_KIND_SIZE(kind, i);
+                return (char*)s + kind * i;
     }
     else {
         for(i = size-1; i >= 0; i--)
             if (PyUnicode_READ(kind, s, i) == ch)
-                return (char*)s + PyUnicode_KIND_SIZE(kind, i);
+                return (char*)s + kind * i;
     }
     return NULL;
 }
@@ -489,7 +489,7 @@
     int share_wstr;
 
     assert(PyUnicode_IS_READY(unicode));
-    char_size = PyUnicode_CHARACTER_SIZE(unicode);
+    char_size = PyUnicode_KIND(unicode);
     if (PyUnicode_IS_COMPACT_ASCII(unicode))
         struct_size = sizeof(PyASCIIObject);
     else
@@ -540,7 +540,7 @@
 
         data = _PyUnicode_DATA_ANY(unicode);
         assert(data != NULL);
-        char_size = PyUnicode_CHARACTER_SIZE(unicode);
+        char_size = PyUnicode_KIND(unicode);
         share_wstr = _PyUnicode_SHARE_WSTR(unicode);
         share_utf8 = _PyUnicode_SHARE_UTF8(unicode);
         if (!share_utf8 && _PyUnicode_HAS_UTF8_MEMORY(unicode))
@@ -1005,11 +1005,9 @@
     }
 
     if (fast) {
-        Py_MEMCPY((char*)to_data
-                      + PyUnicode_KIND_SIZE(to_kind, to_start),
-                  (char*)from_data
-                      + PyUnicode_KIND_SIZE(from_kind, from_start),
-                  PyUnicode_KIND_SIZE(to_kind, how_many));
+        Py_MEMCPY((char*)to_data + to_kind * to_start,
+                  (char*)from_data + from_kind * from_start,
+                  to_kind * how_many);
     }
     else if (from_kind == PyUnicode_1BYTE_KIND
              && to_kind == PyUnicode_2BYTE_KIND)
@@ -8732,7 +8730,7 @@
             );
     else
         result = any_find_slice(
-            asciilib_find_slice, ucs1lib_rfind_slice,
+            asciilib_rfind_slice, ucs1lib_rfind_slice,
             ucs2lib_rfind_slice, ucs4lib_rfind_slice,
             str, sub, start, end
             );
@@ -8760,7 +8758,7 @@
         end = PyUnicode_GET_LENGTH(str);
     kind = PyUnicode_KIND(str);
     result = findchar(PyUnicode_1BYTE_DATA(str)
-                      + PyUnicode_KIND_SIZE(kind, start),
+                      + kind*start,
                       kind,
                       end-start, ch, direction);
     if (!result)
@@ -8813,10 +8811,10 @@
         /* If both are of the same kind, memcmp is sufficient */
         if (kind_self == kind_sub) {
             return ! memcmp((char *)data_self +
-                                (offset * PyUnicode_CHARACTER_SIZE(substring)),
+                                (offset * PyUnicode_KIND(substring)),
                             data_sub,
                             PyUnicode_GET_LENGTH(substring) *
-                                PyUnicode_CHARACTER_SIZE(substring));
+                                PyUnicode_KIND(substring));
         }
         /* otherwise we have to compare each character by first accesing it */
         else {
@@ -8881,7 +8879,7 @@
         return NULL;
 
     Py_MEMCPY(PyUnicode_1BYTE_DATA(u), PyUnicode_1BYTE_DATA(self),
-              PyUnicode_GET_LENGTH(u) * PyUnicode_CHARACTER_SIZE(u));
+              PyUnicode_GET_LENGTH(u) * PyUnicode_KIND(u));
 
     /* fix functions return the new maximum character in a string,
        if the kind of the resulting unicode object does not change,
@@ -9262,8 +9260,8 @@
             if (use_memcpy) {
                 Py_MEMCPY(res_data,
                           sep_data,
-                          PyUnicode_KIND_SIZE(kind, seplen));
-                res_data += PyUnicode_KIND_SIZE(kind, seplen);
+                          kind * seplen);
+                res_data += kind * seplen;
             }
             else {
                 copy_characters(res, res_offset, sep, 0, seplen);
@@ -9275,8 +9273,8 @@
             if (use_memcpy) {
                 Py_MEMCPY(res_data,
                           PyUnicode_DATA(item),
-                          PyUnicode_KIND_SIZE(kind, itemlen));
-                res_data += PyUnicode_KIND_SIZE(kind, itemlen);
+                          kind * itemlen);
+                res_data += kind * itemlen;
             }
             else {
                 copy_characters(res, res_offset, item, 0, itemlen);
@@ -9286,7 +9284,7 @@
     }
     if (use_memcpy)
         assert(res_data == PyUnicode_1BYTE_DATA(res)
-                           + PyUnicode_KIND_SIZE(kind, PyUnicode_GET_LENGTH(res)));
+                           + kind * PyUnicode_GET_LENGTH(res));
     else
         assert(res_offset == PyUnicode_GET_LENGTH(res));
 
@@ -9735,22 +9733,22 @@
                 goto error;
             res = PyUnicode_DATA(rstr);
 
-            memcpy(res, sbuf, PyUnicode_KIND_SIZE(rkind, slen));
+            memcpy(res, sbuf, rkind * slen);
             /* change everything in-place, starting with this one */
-            memcpy(res + PyUnicode_KIND_SIZE(rkind, i),
+            memcpy(res + rkind * i,
                    buf2,
-                   PyUnicode_KIND_SIZE(rkind, len2));
+                   rkind * len2);
             i += len1;
 
             while ( --maxcount > 0) {
                 i = anylib_find(rkind, self,
-                                sbuf+PyUnicode_KIND_SIZE(rkind, i), slen-i,
+                                sbuf+rkind*i, slen-i,
                                 str1, buf1, len1, i);
                 if (i == -1)
                     break;
-                memcpy(res + PyUnicode_KIND_SIZE(rkind, i),
+                memcpy(res + rkind * i,
                        buf2,
-                       PyUnicode_KIND_SIZE(rkind, len2));
+                       rkind * len2);
                 i += len1;
             }
 
@@ -9816,49 +9814,49 @@
             while (n-- > 0) {
                 /* look for next match */
                 j = anylib_find(rkind, self,
-                                sbuf + PyUnicode_KIND_SIZE(rkind, i), slen-i,
+                                sbuf + rkind * i, slen-i,
                                 str1, buf1, len1, i);
                 if (j == -1)
                     break;
                 else if (j > i) {
                     /* copy unchanged part [i:j] */
-                    memcpy(res + PyUnicode_KIND_SIZE(rkind, ires),
-                           sbuf + PyUnicode_KIND_SIZE(rkind, i),
-                           PyUnicode_KIND_SIZE(rkind, j-i));
+                    memcpy(res + rkind * ires,
+                           sbuf + rkind * i,
+                           rkind * (j-i));
                     ires += j - i;
                 }
                 /* copy substitution string */
                 if (len2 > 0) {
-                    memcpy(res + PyUnicode_KIND_SIZE(rkind, ires),
+                    memcpy(res + rkind * ires,
                            buf2,
-                           PyUnicode_KIND_SIZE(rkind, len2));
+                           rkind * len2);
                     ires += len2;
                 }
                 i = j + len1;
             }
             if (i < slen)
                 /* copy tail [i:] */
-                memcpy(res + PyUnicode_KIND_SIZE(rkind, ires),
-                       sbuf + PyUnicode_KIND_SIZE(rkind, i),
-                       PyUnicode_KIND_SIZE(rkind, slen-i));
+                memcpy(res + rkind * ires,
+                       sbuf + rkind * i,
+                       rkind * (slen-i));
         } else {
             /* interleave */
             while (n > 0) {
-                memcpy(res + PyUnicode_KIND_SIZE(rkind, ires),
+                memcpy(res + rkind * ires,
                        buf2,
-                       PyUnicode_KIND_SIZE(rkind, len2));
+                       rkind * len2);
                 ires += len2;
                 if (--n <= 0)
                     break;
-                memcpy(res + PyUnicode_KIND_SIZE(rkind, ires),
-                       sbuf + PyUnicode_KIND_SIZE(rkind, i),
-                       PyUnicode_KIND_SIZE(rkind, 1));
+                memcpy(res + rkind * ires,
+                       sbuf + rkind * i,
+                       rkind);
                 ires++;
                 i++;
             }
-            memcpy(res + PyUnicode_KIND_SIZE(rkind, ires),
-                   sbuf + PyUnicode_KIND_SIZE(rkind, i),
-                   PyUnicode_KIND_SIZE(rkind, slen-i));
+            memcpy(res + rkind * ires,
+                   sbuf + rkind * i,
+                   rkind * (slen-i));
         }
         u = rstr;
         unicode_adjust_maxchar(&u);
@@ -11341,7 +11339,7 @@
         kind = PyUnicode_KIND(self);
         data = PyUnicode_1BYTE_DATA(self);
         return PyUnicode_FromKindAndData(kind,
-                                         data + PyUnicode_KIND_SIZE(kind, start),
+                                         data + kind * start,
                                          length);
     }
 }
@@ -11497,7 +11495,7 @@
     else {
         /* number of characters copied this far */
         Py_ssize_t done = PyUnicode_GET_LENGTH(str);
-        const Py_ssize_t char_size = PyUnicode_CHARACTER_SIZE(str);
+        const Py_ssize_t char_size = PyUnicode_KIND(str);
         char *to = (char *) PyUnicode_DATA(u);
         Py_MEMCPY(to, PyUnicode_DATA(str),
                   PyUnicode_GET_LENGTH(str) * char_size);
@@ -12488,14 +12486,14 @@
         size = sizeof(PyASCIIObject) + PyUnicode_GET_LENGTH(v) + 1;
     else if (PyUnicode_IS_COMPACT(v))
         size = sizeof(PyCompactUnicodeObject) +
-            (PyUnicode_GET_LENGTH(v) + 1) * PyUnicode_CHARACTER_SIZE(v);
+            (PyUnicode_GET_LENGTH(v) + 1) * PyUnicode_KIND(v);
     else {
         /* If it is a two-block object, account for base object, and
            for character block if present. */
         size = sizeof(PyUnicodeObject);
         if (_PyUnicode_DATA_ANY(v))
             size += (PyUnicode_GET_LENGTH(v) + 1) *
-                PyUnicode_CHARACTER_SIZE(v);
+                PyUnicode_KIND(v);
     }
     /* If the wstr pointer is present, account for it unless it is shared
        with the data pointer. Check if the data is not shared. */
@@ -13246,7 +13244,7 @@
             else {
                 const char *p = (const char *) pbuf;
                 assert(pbuf != NULL);
-                p = p + PyUnicode_KIND_SIZE(kind, pindex);
+                p += kind * pindex;
                 v = PyUnicode_FromKindAndData(kind, p, len);
             }
             if (v == NULL)
@@ -13399,7 +13397,7 @@
     }
 
     Py_MEMCPY(data, PyUnicode_DATA(unicode),
-              PyUnicode_KIND_SIZE(kind, length + 1));
+              kind * (length + 1));
     Py_DECREF(unicode);
     assert(_PyUnicode_CheckConsistency(self, 1));
 #ifdef Py_DEBUG
diff --git a/Python/formatter_unicode.c b/Python/formatter_unicode.c
index a389734..0378800 100644
--- a/Python/formatter_unicode.c
+++ b/Python/formatter_unicode.c
@@ -604,9 +604,9 @@
 #endif
             _PyUnicode_InsertThousandsGrouping(
                 out, kind,
-                (char*)data + PyUnicode_KIND_SIZE(kind, pos),
+                (char*)data + kind * pos,
                 spec->n_grouped_digits,
-                pdigits + PyUnicode_KIND_SIZE(kind, d_pos),
+                pdigits + kind * d_pos,
                 spec->n_digits, spec->n_min_width,
                 locale->grouping, locale->thousands_sep);
 #ifndef NDEBUG
diff --git a/Tools/gdb/libpython.py b/Tools/gdb/libpython.py
index 4b42c8b..43a0f20 100644
--- a/Tools/gdb/libpython.py
+++ b/Tools/gdb/libpython.py
@@ -1152,7 +1152,7 @@
                     field_str = field_str.cast(_type_unsigned_char_ptr)
                 elif repr_kind == 2:
                     field_str = field_str.cast(_type_unsigned_short_ptr)
-                elif repr_kind == 3:
+                elif repr_kind == 4:
                     field_str = field_str.cast(_type_unsigned_int_ptr)
         else:
             # Python 3.2 and earlier
diff --git a/Tools/iobench/iobench.py b/Tools/iobench/iobench.py
index b3bdd6a..5ec6f17 100644
--- a/Tools/iobench/iobench.py
+++ b/Tools/iobench/iobench.py
@@ -358,7 +358,7 @@
             with text_open(name, "r") as f:
                 return f.read()
         run_test_family(modify_tests, "b", text_files,
-            lambda fn: open(fn, "r+"), make_test_source)
+            lambda fn: text_open(fn, "r+"), make_test_source)
 
 
 def prepare_files():