.. _unicode-howto:

*****************
  Unicode HOWTO
*****************

:Release: 1.12

This HOWTO discusses Python's support for Unicode, and explains
various problems that people commonly encounter when trying to work
with Unicode.

Introduction to Unicode
=======================

History of Character Codes
--------------------------

In 1968, the American Standard Code for Information Interchange, better known by
its acronym ASCII, was standardized.  ASCII defined numeric codes for various
characters, with the numeric values running from 0 to 127.  For example, the
lowercase letter 'a' is assigned 97 as its code value.

ASCII was an American-developed standard, so it only defined unaccented
characters.  There was an 'e', but no 'é' or 'Í'.  This meant that languages
which required accented characters couldn't be faithfully represented in ASCII.
(Actually the missing accents matter for English, too, which contains words such
as 'naïve' and 'café', and some publications have house styles which require
spellings such as 'coöperate'.)

For a while people just wrote programs that didn't display accents.  I remember
looking at Apple ][ BASIC programs, published in French-language publications in
the mid-1980s, that had lines like these::

   PRINT "FICHIER EST COMPLETE."
   PRINT "CARACTERE NON ACCEPTE."

Those messages should contain accents, and they just look wrong to someone who
can read French.

In the 1980s, almost all personal computers were 8-bit, meaning that bytes could
hold values ranging from 0 to 255.  ASCII codes only went up to 127, so some
machines assigned values between 128 and 255 to accented characters.  Different
machines had different codes, however, which led to problems exchanging files.
Eventually various commonly used sets of values for the 128--255 range emerged.
Some were true standards, defined by the International Organization for
Standardization, and some were *de facto* conventions that were invented by one
company or another and managed to catch on.

255 characters aren't very many.  For example, you can't fit both the accented
characters used in Western Europe and the Cyrillic alphabet used for Russian
into the 128--255 range because there are more than 128 such characters.

You could write files using different codes (all your Russian files in a coding
system called KOI8, all your French files in a different coding system called
Latin1), but what if you wanted to write a French document that quotes some
Russian text?  In the 1980s people began to want to solve this problem, and the
Unicode standardization effort began.

Unicode started out using 16-bit characters instead of 8-bit characters.  16
bits means you have 2^16 = 65,536 distinct values available, making it possible
to represent many different characters from many different alphabets; an initial
goal was to have Unicode contain the alphabets for every single human language.
It turns out that even 16 bits isn't enough to meet that goal, and the modern
Unicode specification uses a wider range of codes, 0 through 1,114,111
(``0x10FFFF`` in base 16).
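
You can verify this upper bound from Python itself; a small illustrative
sketch (not part of the original text), using only the built-in :func:`chr`
and :func:`ord`:

```python
# The highest code point Unicode defines is U+10FFFF (1,114,111 decimal).
last = chr(0x10FFFF)      # a valid, if obscure, one-character string
print(ord(last))          # 1114111

# Anything beyond that range is rejected outright.
try:
    chr(0x110000)
except ValueError:
    print("0x110000 is out of range")
```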

There's a related ISO standard, ISO 10646.  Unicode and ISO 10646 were
originally separate efforts, but the specifications were merged with the 1.1
revision of Unicode.

(This discussion of Unicode's history is highly simplified.  I don't think the
average Python programmer needs to worry about the historical details; consult
the Unicode consortium site listed in the References for more information.)


Definitions
-----------

A **character** is the smallest possible component of a text.  'A', 'B', 'C',
etc., are all different characters.  So are 'È' and 'Í'.  Characters are
abstractions, and vary depending on the language or context you're talking
about.  For example, the symbol for ohms (Ω) is usually drawn much like the
capital letter omega (Ω) in the Greek alphabet (they may even be the same in
some fonts), but these are two different characters that have different
meanings.

The Unicode standard describes how characters are represented by **code
points**.  A code point is an integer value, usually denoted in base 16.  In the
standard, a code point is written using the notation ``U+12CA`` to mean the
character with value ``0x12ca`` (4,810 decimal).  The Unicode standard contains
a lot of tables listing characters and their corresponding code points:

.. code-block:: none

   0061    'a'; LATIN SMALL LETTER A
   0062    'b'; LATIN SMALL LETTER B
   0063    'c'; LATIN SMALL LETTER C
   ...
   007B    '{'; LEFT CURLY BRACKET

Strictly, these definitions imply that it's meaningless to say 'this is
character ``U+12CA``'.  ``U+12CA`` is a code point, which represents some
particular character; in this case, it represents the character 'ETHIOPIC
SYLLABLE WI'.  In informal contexts, this distinction between code points and
characters will sometimes be forgotten.
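
Python's :mod:`unicodedata` module can look up the character name used in
these tables, which makes the code-point/character correspondence easy to
explore; a brief sketch:

```python
import unicodedata

# U+12CA as a one-character Python string.
ch = '\u12ca'
print(unicodedata.name(ch))   # ETHIOPIC SYLLABLE WI
print(hex(ord(ch)))           # 0x12ca
```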

A character is represented on a screen or on paper by a set of graphical
elements that's called a **glyph**.  The glyph for an uppercase A, for example,
is two diagonal strokes and a horizontal stroke, though the exact details will
depend on the font being used.  Most Python code doesn't need to worry about
glyphs; figuring out the correct glyph to display is generally the job of a GUI
toolkit or a terminal's font renderer.


Encodings
---------

To summarize the previous section: a Unicode string is a sequence of code
points, which are numbers from 0 through ``0x10FFFF`` (1,114,111 decimal).  This
sequence needs to be represented as a set of bytes (meaning, values
from 0 through 255) in memory.  The rules for translating a Unicode string
into a sequence of bytes are called an **encoding**.

The first encoding you might think of is an array of 32-bit integers.  In this
representation, the string "Python" would look like this:

.. code-block:: none

      P           y           t           h           o           n
   0x50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00
      0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23

This representation is straightforward but using it presents a number of
problems.

1. It's not portable; different processors order the bytes differently.

2. It's very wasteful of space.  In most texts, the majority of the code points
   are less than 127, or less than 255, so a lot of space is occupied by
   ``0x00`` bytes.  The above string takes 24 bytes compared to the 6 bytes
   needed for an ASCII representation.  Increased RAM usage doesn't matter too
   much (desktop computers have gigabytes of RAM, and strings aren't usually
   that large), but expanding our usage of disk and network bandwidth by a
   factor of 4 is intolerable.

3. It's not compatible with existing C functions such as ``strlen()``, so a new
   family of wide string functions would need to be used.

4. Many Internet standards are defined in terms of textual data, and can't
   handle content with embedded zero bytes.

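The size penalty in point 2 is easy to check, since UTF-32 is essentially this
four-bytes-per-code-point scheme (a quick sketch; the ``-le`` variant is used
here only to avoid the byte-order mark that plain ``utf-32`` prepends):

```python
s = "Python"

# UTF-32 (little-endian, no byte-order mark): 4 bytes per code point.
print(len(s.encode('utf-32-le')))   # 24

# UTF-8: 1 byte per character for this all-ASCII string.
print(len(s.encode('utf-8')))       # 6
```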
Generally people don't use this encoding, instead choosing other
encodings that are more efficient and convenient.  UTF-8 is probably
the most commonly supported encoding; it will be discussed below.

Encodings don't have to handle every possible Unicode character, and most
encodings don't.  The rules for converting a Unicode string into the ASCII
encoding, for example, are simple; for each code point:

1. If the code point is < 128, each byte is the same as the value of the code
   point.

2. If the code point is 128 or greater, the Unicode string can't be represented
   in this encoding.  (Python raises a :exc:`UnicodeEncodeError` exception in
   this case.)

Latin-1, also known as ISO-8859-1, is a similar encoding.  Unicode code points
0--255 are identical to the Latin-1 values, so converting to this encoding
simply requires converting code points to byte values; if a code point larger
than 255 is encountered, the string can't be encoded into Latin-1.
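
Because of that one-to-one mapping, encoding to Latin-1 is just "write each
code point as a byte"; a small sketch of both the success and failure cases:

```python
# 'é' is U+00E9, within 0-255, so Latin-1 stores it as the single byte 0xE9.
print('é'.encode('latin-1'))        # b'\xe9'

# '€' is U+20AC, above 255, so Latin-1 cannot represent it.
try:
    '€'.encode('latin-1')
except UnicodeEncodeError as exc:
    print('cannot encode:', exc.reason)
```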

Encodings don't have to be simple one-to-one mappings like Latin-1.  Consider
IBM's EBCDIC, which was used on IBM mainframes.  Letter values weren't in one
block: 'a' through 'i' had values from 129 to 137, but 'j' through 'r' were 145
through 153.  If you wanted to use EBCDIC as an encoding, you'd probably use
some sort of lookup table to perform the conversion, but this is largely an
internal detail.

UTF-8 is one of the most commonly used encodings.  UTF stands for "Unicode
Transformation Format", and the '8' means that 8-bit numbers are used in the
encoding.  (There are also UTF-16 and UTF-32 encodings, but they are less
frequently used than UTF-8.)  UTF-8 uses the following rules:

1. If the code point is < 128, it's represented by the corresponding byte value.
2. If the code point is >= 128, it's turned into a sequence of two, three, or
   four bytes, where each byte of the sequence is between 128 and 255.

UTF-8 has several convenient properties:

1. It can handle any Unicode code point.
2. A Unicode string is turned into a string of bytes containing no embedded zero
   bytes.  This avoids byte-ordering issues, and means UTF-8 strings can be
   processed by C functions such as ``strcpy()`` and sent through protocols that
   can't handle zero bytes.
3. A string of ASCII text is also valid UTF-8 text.
4. UTF-8 is fairly compact; the majority of commonly used characters can be
   represented with one or two bytes.
5. If bytes are corrupted or lost, it's possible to determine the start of the
   next UTF-8-encoded code point and resynchronize.  It's also unlikely that
   random 8-bit data will look like valid UTF-8.
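
The variable-length rules above can be seen directly by encoding characters
from different code-point ranges (a quick sketch; the specific characters are
just convenient examples):

```python
# One byte for ASCII, and progressively more bytes as the code point grows.
for ch in 'a', 'é', '€', '\U00010348':
    print('U+%04X -> %d byte(s)' % (ord(ch), len(ch.encode('utf-8'))))
# U+0061 -> 1, U+00E9 -> 2, U+20AC -> 3, U+10348 -> 4
```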


References
----------

The `Unicode Consortium site <http://www.unicode.org>`_ has character charts, a
glossary, and PDF versions of the Unicode specification.  Be prepared for some
difficult reading.  `A chronology <http://www.unicode.org/history/>`_ of the
origin and development of Unicode is also available on the site.

To help understand the standard, Jukka Korpela has written `an introductory
guide <http://www.cs.tut.fi/~jkorpela/unicode/guide.html>`_ to reading the
Unicode character tables.

Another `good introductory article <http://www.joelonsoftware.com/articles/Unicode.html>`_
was written by Joel Spolsky.
If this introduction didn't make things clear to you, you should try reading
this alternate article before continuing.

.. Jason Orendorff XXX http://www.jorendorff.com/articles/unicode/ is broken

Wikipedia entries are often helpful; see the entries for "`character encoding
<http://en.wikipedia.org/wiki/Character_encoding>`_" and `UTF-8
<http://en.wikipedia.org/wiki/UTF-8>`_, for example.


Python's Unicode Support
========================

Now that you've learned the rudiments of Unicode, we can look at Python's
Unicode features.

The String Type
---------------

Since Python 3.0, the language features a :class:`str` type that contains
Unicode characters, meaning any string created using ``"unicode rocks!"``,
``'unicode rocks!'``, or the triple-quoted string syntax is stored as Unicode.

To insert a non-ASCII Unicode character, e.g., any letter with an accent, you
can use escape sequences in string literals, like so::

   >>> "\N{GREEK CAPITAL LETTER DELTA}"  # Using the character name
   '\u0394'
   >>> "\u0394"                          # Using a 16-bit hex value
   '\u0394'
   >>> "\U00000394"                      # Using a 32-bit hex value
   '\u0394'

In addition, one can create a string using the :func:`~bytes.decode` method of
:class:`bytes`.  This method takes an *encoding* argument, such as ``UTF-8``,
and optionally, an *errors* argument.

The *errors* argument specifies the response when the input string can't be
converted according to the encoding's rules.  Legal values for this argument are
``'strict'`` (raise a :exc:`UnicodeDecodeError` exception), ``'replace'`` (use
``U+FFFD``, ``REPLACEMENT CHARACTER``), or ``'ignore'`` (just leave the
character out of the Unicode result).
The following examples show the differences::

   >>> b'\x80abc'.decode("utf-8", "strict")  #doctest: +NORMALIZE_WHITESPACE
   Traceback (most recent call last):
       ...
   UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0:
     invalid start byte
   >>> b'\x80abc'.decode("utf-8", "replace")
   '\ufffdabc'
   >>> b'\x80abc'.decode("utf-8", "ignore")
   'abc'

Encodings are specified as strings containing the encoding's name.  Python 3.2
comes with roughly 100 different encodings; see the Python Library Reference at
:ref:`standard-encodings` for a list.  Some encodings have multiple names; for
example, ``'latin-1'``, ``'iso_8859_1'`` and ``'8859'`` are all synonyms for
the same encoding.

One-character Unicode strings can also be created with the :func:`chr`
built-in function, which takes integers and returns a Unicode string of length 1
that contains the corresponding code point.  The reverse operation is the
built-in :func:`ord` function that takes a one-character Unicode string and
returns the code point value::

   >>> chr(57344)
   '\ue000'
   >>> ord('\ue000')
   57344

Converting to Bytes
-------------------

The opposite method of :meth:`bytes.decode` is :meth:`str.encode`,
which returns a :class:`bytes` representation of the Unicode string, encoded in
the requested *encoding*.  The *errors* parameter is the same as the parameter
of the :meth:`~bytes.decode` method, with one additional possibility; as well as
``'strict'``, ``'ignore'``, and ``'replace'`` (which in this case inserts a
question mark instead of the unencodable character), you can also pass
``'xmlcharrefreplace'`` which uses XML's character references.
The following example shows the different results::

   >>> u = chr(40960) + 'abcd' + chr(1972)
   >>> u.encode('utf-8')
   b'\xea\x80\x80abcd\xde\xb4'
   >>> u.encode('ascii')  #doctest: +NORMALIZE_WHITESPACE
   Traceback (most recent call last):
       ...
   UnicodeEncodeError: 'ascii' codec can't encode character '\ua000' in
     position 0: ordinal not in range(128)
   >>> u.encode('ascii', 'ignore')
   b'abcd'
   >>> u.encode('ascii', 'replace')
   b'?abcd?'
   >>> u.encode('ascii', 'xmlcharrefreplace')
   b'&#40960;abcd&#1972;'

.. XXX mention the surrogate* error handlers

The low-level routines for registering and accessing the available encodings are
found in the :mod:`codecs` module.  However, the encoding and decoding functions
returned by this module are usually more low-level than is comfortable, so I'm
not going to describe the :mod:`codecs` module here.  If you need to implement a
completely new encoding, you'll need to learn about the :mod:`codecs` module
interfaces, but implementing encodings is a specialized task that also won't be
covered here.  Consult the Python documentation to learn more about this module.


Unicode Literals in Python Source Code
--------------------------------------

In Python source code, specific Unicode code points can be written using the
``\u`` escape sequence, which is followed by four hex digits giving the code
point.  The ``\U`` escape sequence is similar, but expects eight hex digits,
not four::

   >>> s = "a\xac\u1234\u20ac\U00008000"
   ... #     ^^^^ two-digit hex escape
   ... #         ^^^^^^ four-digit Unicode escape
   ... #                     ^^^^^^^^^^ eight-digit Unicode escape
   >>> [ord(c) for c in s]
   [97, 172, 4660, 8364, 32768]

Using escape sequences for code points greater than 127 is fine in small doses,
but becomes an annoyance if you're using many accented characters, as you would
in a program with messages in French or some other accent-using language.  You
can also assemble strings using the :func:`chr` built-in function, but this is
even more tedious.

Ideally, you'd want to be able to write literals in your language's natural
encoding.  You could then edit Python source code with your favorite editor
which would display the accented characters naturally, and have the right
characters used at runtime.

Python supports writing source code in UTF-8 by default, but you can use almost
any encoding if you declare the encoding being used.  This is done by including
a special comment as either the first or second line of the source file::

   #!/usr/bin/env python
   # -*- coding: latin-1 -*-

   u = 'abcdé'
   print(ord(u[-1]))

The syntax is inspired by Emacs's notation for specifying variables local to a
file.  Emacs supports many different variables, but Python only supports
'coding'.  The ``-*-`` symbols indicate to Emacs that the comment is special;
they have no significance to Python but are a convention.  Python looks for
``coding: name`` or ``coding=name`` in the comment.

If you don't include such a comment, the default encoding used will be UTF-8 as
already mentioned.  See also :pep:`263` for more information.


Unicode Properties
------------------

The Unicode specification includes a database of information about code points.
For each defined code point, the information includes the character's
name, its category, the numeric value if applicable (Unicode has characters
representing the Roman numerals and fractions such as one-third and
four-fifths).  There are also properties related to the code point's use in
bidirectional text and other display-related properties.

The following program displays some information about several characters, and
prints the numeric value of one particular character::

   import unicodedata

   u = chr(233) + chr(0x0bf2) + chr(3972) + chr(6000) + chr(13231)

   for i, c in enumerate(u):
       print(i, '%04x' % ord(c), unicodedata.category(c), end=" ")
       print(unicodedata.name(c))

   # Get numeric value of second character
   print(unicodedata.numeric(u[1]))
Ezio Melotti | 410eee5 | 2013-01-20 12:16:03 +0200 | [diff] [blame] | 401 | When run, this prints: |
| 402 | |
| 403 | .. code-block:: none |
Georg Brandl | 116aa62 | 2007-08-15 14:28:22 +0000 | [diff] [blame] | 404 | |
| 405 | 0 00e9 Ll LATIN SMALL LETTER E WITH ACUTE |
| 406 | 1 0bf2 No TAMIL NUMBER ONE THOUSAND |
| 407 | 2 0f84 Mn TIBETAN MARK HALANTA |
| 408 | 3 1770 Lo TAGBANWA LETTER SA |
| 409 | 4 33af So SQUARE RAD OVER S SQUARED |
| 410 | 1000.0 |
| 411 | |
| 412 | The category codes are abbreviations describing the nature of the character. |
| 413 | These are grouped into categories such as "Letter", "Number", "Punctuation", or |
| 414 | "Symbol", which in turn are broken up into subcategories. To take the codes |
| 415 | from the above output, ``'Ll'`` means 'Letter, lowercase', ``'No'`` means |
| 416 | "Number, other", ``'Mn'`` is "Mark, nonspacing", and ``'So'`` is "Symbol, |
| 417 | other". See |
Ezio Melotti | 4c5475d | 2010-03-22 23:16:42 +0000 | [diff] [blame] | 418 | <http://www.unicode.org/reports/tr44/#General_Category_Values> for a |
Georg Brandl | 116aa62 | 2007-08-15 14:28:22 +0000 | [diff] [blame] | 419 | list of category codes. |
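
The category codes can be put to practical use. As a small sketch (the
``strip_marks`` helper is made up for illustration, not part of the standard
library), the following removes accents by decomposing characters with
:func:`unicodedata.normalize` and then dropping every nonspacing mark
(category ``'Mn'``)::

   import unicodedata

   def strip_marks(s):
       # 'é' decomposes into 'e' followed by U+0301 COMBINING ACUTE ACCENT;
       # dropping the 'Mn' characters leaves only the base letters.
       decomposed = unicodedata.normalize('NFD', s)
       return ''.join(c for c in decomposed
                      if unicodedata.category(c) != 'Mn')

   print(strip_marks('café'))   # cafe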

References
----------

The :class:`str` type is described in the Python library reference at
:ref:`typesseq`.

The documentation for the :mod:`unicodedata` module.

The documentation for the :mod:`codecs` module.

Marc-André Lemburg gave a presentation at EuroPython 2002 titled "Python and
Unicode". A PDF version of his slides is available at
<http://downloads.egenix.com/python/Unicode-EPC2002-Talk.pdf>, and is an
excellent overview of the design of Python's Unicode features (based on Python
2, where the Unicode string type is called ``unicode`` and literals start with
``u``).


Reading and Writing Unicode Data
================================

Once you've written some code that works with Unicode data, the next problem is
input/output. How do you get Unicode strings into your program, and how do you
convert Unicode into a form suitable for storage or transmission?

It's possible that you may not need to do anything depending on your input
sources and output destinations; you should check whether the libraries used in
your application support Unicode natively. XML parsers often return Unicode
data, for example. Many relational databases also support Unicode-valued
columns and can return Unicode values from an SQL query.

Unicode data is usually converted to a particular encoding before it gets
written to disk or sent over a socket. It's possible to do all the work
yourself: open a file, read an 8-bit bytes object from it, and convert the bytes
with ``bytes.decode(encoding)``. However, the manual approach is not recommended.

One problem is the multi-byte nature of encodings; one Unicode character can be
represented by several bytes. If you want to read the file in arbitrary-sized
chunks (say, 1024 or 4096 bytes), you need to write error-handling code to
catch the case where only part of the bytes encoding a single Unicode character
are read at the end of a chunk. One solution would be to read the entire file
into memory and then perform the decoding, but that prevents you from working
with files that are extremely large; if you need to read a 2 GiB file, you need
2 GiB of RAM. (More, really, since for at least a moment you'd need to have
both the encoded bytes and their decoded Unicode version in memory.)
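
To see the partial-sequence problem concretely, here is a small sketch: the two
UTF-8 bytes of 'é' are split across two chunks, so decoding the first chunk by
itself fails, while an incremental decoder from the :mod:`codecs` module
buffers the incomplete sequence until the rest arrives::

   import codecs

   chunks = [b'caf\xc3', b'\xa9!']   # 'café!' in UTF-8, split mid-character

   try:
       chunks[0].decode('utf-8')
   except UnicodeDecodeError:
       print('chunk 1 alone is not valid UTF-8')

   decoder = codecs.getincrementaldecoder('utf-8')()
   result = ''.join(decoder.decode(chunk) for chunk in chunks)
   print(result)                     # café!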

The solution would be to use the low-level decoding interface to catch the case
of partial coding sequences. The work of implementing this has already been
done for you: the built-in :func:`open` function can return a file-like object
that assumes the file's contents are in a specified encoding and accepts Unicode
parameters for methods such as :meth:`read` and :meth:`write`. This works through
:func:`open`\'s *encoding* and *errors* parameters which are interpreted just
like those in :meth:`str.encode` and :meth:`bytes.decode`.

Reading Unicode from a file is therefore simple::

   with open('unicode.rst', encoding='utf-8') as f:
       for line in f:
           print(repr(line))

It's also possible to open files in update mode, allowing both reading and
writing::

   with open('test', encoding='utf-8', mode='w+') as f:
       f.write('\u4500 blah blah blah\n')
       f.seek(0)
       print(repr(f.readline()[:1]))

The Unicode character ``U+FEFF`` is used as a byte-order mark (BOM), and is often
written as the first character of a file in order to assist with autodetection
of the file's byte ordering. Some encodings, such as UTF-16, expect a BOM to be
present at the start of a file; when such an encoding is used, the BOM will be
automatically written as the first character and will be silently dropped when
the file is read. There are variants of these encodings, such as 'utf-16-le'
and 'utf-16-be' for little-endian and big-endian encodings, that specify one
particular byte ordering and don't skip the BOM.
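
The difference is easy to observe by encoding a short string (a sketch; the
byte order of plain 'utf-16' output depends on the machine, so only the BOM
prefix is noted here)::

   data = 'a'.encode('utf-16')
   print(data[:2])                  # b'\xff\xfe' or b'\xfe\xff' -- the BOM

   print('a'.encode('utf-16-le'))   # b'a\x00' -- no BOM written
   print('a'.encode('utf-16-be'))   # b'\x00a' -- no BOM written

   # When decoding 'utf-16', the BOM is consumed, not returned:
   print(b'\xff\xfea\x00'.decode('utf-16'))   # a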

In some areas, it is also a convention to use a "BOM" at the start of UTF-8
encoded files; the name is misleading since UTF-8 is not byte-order dependent.
The mark simply announces that the file is encoded in UTF-8. To read such
files, use the 'utf-8-sig' codec, which automatically skips the mark if it is
present.
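
A short sketch of the difference between the two codecs::

   data = b'\xef\xbb\xbfHello'        # UTF-8 data preceded by the mark

   print(data.decode('utf-8'))        # '\ufeffHello' -- the mark survives
   print(data.decode('utf-8-sig'))    # 'Hello' -- the mark is skipped

   # Encoding with 'utf-8-sig' writes the mark for you:
   print('Hi'.encode('utf-8-sig'))    # b'\xef\xbb\xbfHi'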


Unicode filenames
-----------------

Most of the operating systems in common use today support filenames that contain
arbitrary Unicode characters. Usually this is implemented by converting the
Unicode string into some encoding that varies depending on the system. For
example, Mac OS X uses UTF-8 while Windows uses a configurable encoding; on
Windows, Python uses the name "mbcs" to refer to whatever the currently
configured encoding is. On Unix systems, there will only be a filesystem
encoding if you've set the ``LANG`` or ``LC_CTYPE`` environment variables; if
you haven't, the default encoding is ASCII.

The :func:`sys.getfilesystemencoding` function returns the encoding to use on
your current system, in case you want to do the encoding manually, but there's
not much reason to bother. When opening a file for reading or writing, you can
usually just provide the Unicode string as the filename, and it will be
automatically converted to the right encoding for you::

   filename = 'filename\u4500abc'
   with open(filename, 'w') as f:
       f.write('blah\n')

Functions in the :mod:`os` module such as :func:`os.stat` will also accept Unicode
filenames.

:func:`os.listdir`, which returns filenames, raises an issue: should it return
the Unicode version of filenames, or should it return bytes containing
the encoded versions? :func:`os.listdir` will do both, depending on whether you
provided the directory path as bytes or a Unicode string. If you pass a
Unicode string as the path, filenames will be decoded using the filesystem's
encoding and a list of Unicode strings will be returned, while passing a byte
path will return the bytes versions of the filenames. For example,
assuming the default filesystem encoding is UTF-8, running the following
program::

   fn = 'filename\u4500abc'
   f = open(fn, 'w')
   f.close()

   import os
   print(os.listdir(b'.'))
   print(os.listdir('.'))

will produce the following output::

   amk:~$ python t.py
   [b'.svn', b'filename\xe4\x94\x80abc', ...]
   ['.svn', 'filename\u4500abc', ...]

The first list contains UTF-8-encoded filenames, and the second list contains
the Unicode versions.

Note that in most cases, the Unicode APIs should be used. The bytes APIs
should only be used on systems where undecodable file names can be present,
i.e. Unix systems.


Tips for Writing Unicode-aware Programs
---------------------------------------

This section provides some suggestions on writing software that deals with
Unicode.

The most important tip is:

   Software should only work with Unicode strings internally, decoding the input
   data as soon as possible and encoding the output only at the end.

If you attempt to write processing functions that accept both Unicode and byte
strings, you will find your program vulnerable to bugs wherever you combine the
two different kinds of strings. There is no automatic encoding or decoding: if
you do e.g. ``str + bytes``, a :exc:`TypeError` will be raised.
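
For example, concatenating the two types fails immediately, which makes the
missing decode easy to find and fix::

   s = 'café'
   b = b' au lait'

   try:
       s + b                        # str + bytes
   except TypeError as exc:
       print('TypeError:', exc)

   print(s + b.decode('ascii'))     # decode explicitly, then combine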

When using data coming from a web browser or some other untrusted source, a
common technique is to check for illegal characters in a string before using the
string in a generated command line or storing it in a database. If you're doing
this, be careful to check the decoded string, not the encoded bytes data;
some encodings may have interesting properties, such as not being bijective
or not being fully ASCII-compatible. This is especially true if the input
data also specifies the encoding, since the attacker can then choose a
clever way to hide malicious text in the encoded bytestream.
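
As a minimal sketch (the ``is_safe`` validator and its rules are made up for
illustration), decode first and then validate the resulting string; note that
:meth:`str.isascii` requires Python 3.7 or later::

   def is_safe(name):
       # Hypothetical policy: ASCII letters, digits, '-', '_' and '.' only.
       return name.isascii() and all(c.isalnum() or c in '-_.' for c in name)

   raw = b'report\xc2\xa0final.txt'   # contains U+00A0 NO-BREAK SPACE
   name = raw.decode('utf-8')         # decode first; invalid UTF-8 raises here

   print(is_safe(name))               # False -- the no-break space is rejected
   print(is_safe('report.txt'))       # True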


References
----------

The PDF slides for Marc-André Lemburg's presentation "Writing Unicode-aware
Applications in Python" are available at
<http://downloads.egenix.com/python/LSM2005-Developing-Unicode-aware-applications-in-Python.pdf>
and discuss questions of character encodings as well as how to internationalize
and localize an application. These slides cover Python 2.x only.

Acknowledgements
================

Thanks to the following people who have noted errors or offered suggestions on
this article: Nicholas Bastin, Marius Gedminas, Kent Johnson, Ken Krugler,
Marc-André Lemburg, Martin von Löwis, Chad Whitacre.

.. comment
   Revision History

   Version 1.0: posted August 5 2005.

   Version 1.01: posted August 7 2005. Corrects factual and markup errors; adds
   several links.

   Version 1.02: posted August 16 2005. Corrects factual errors.

   Version 1.1: Feb-Nov 2008. Updates the document with respect to Python 3 changes.

   Version 1.11: posted June 20 2010. Notes that Python 3.x is not covered,
   and that the HOWTO only covers 2.x.

.. comment Describe Python 3.x support (new section? new document?)
.. comment Additional topic: building Python w/ UCS2 or UCS4 support
.. comment Describe use of codecs.StreamRecoder and StreamReaderWriter

.. comment
   Original outline:

   - [ ] Unicode introduction
       - [ ] ASCII
       - [ ] Terms
           - [ ] Character
           - [ ] Code point
       - [ ] Encodings
           - [ ] Common encodings: ASCII, Latin-1, UTF-8
   - [ ] Unicode Python type
       - [ ] Writing unicode literals
           - [ ] Obscurity: -U switch
       - [ ] Built-ins
           - [ ] unichr()
           - [ ] ord()
           - [ ] unicode() constructor
       - [ ] Unicode type
           - [ ] encode(), decode() methods
   - [ ] Unicodedata module for character properties
   - [ ] I/O
       - [ ] Reading/writing Unicode data into files
           - [ ] Byte-order marks
       - [ ] Unicode filenames
   - [ ] Writing Unicode programs
       - [ ] Do everything in Unicode
       - [ ] Declaring source code encodings (PEP 263)
   - [ ] Other issues
       - [ ] Building Python (UCS2, UCS4)