.. _unicode-howto:

*****************
  Unicode HOWTO
*****************

:Release: 1.11

This HOWTO discusses Python 3's support for Unicode, and explains
various problems that people commonly encounter when trying to work
with Unicode.

Introduction to Unicode
=======================

History of Character Codes
--------------------------

In 1968, the American Standard Code for Information Interchange, better known by
its acronym ASCII, was standardized.  ASCII defined numeric codes for various
characters, with the numeric values running from 0 to 127.  For example, the
lowercase letter 'a' is assigned 97 as its code value.

ASCII was an American-developed standard, so it only defined unaccented
characters.  There was an 'e', but no 'é' or 'Í'.  This meant that languages
which required accented characters couldn't be faithfully represented in ASCII.
(Actually the missing accents matter for English, too, which contains words such
as 'naïve' and 'café', and some publications have house styles which require
spellings such as 'coöperate'.)

For a while people just wrote programs that didn't display accents.  I remember
looking at Apple ][ BASIC programs, published in French-language publications in
the mid-1980s, that had lines like these::

   PRINT "FICHIER EST COMPLETE."
   PRINT "CARACTERE NON ACCEPTE."

Those messages should contain accents, and they just look wrong to someone who
can read French.

In the 1980s, almost all personal computers were 8-bit, meaning that bytes could
hold values ranging from 0 to 255.  ASCII codes only went up to 127, so some
machines assigned values between 128 and 255 to accented characters.  Different
machines had different codes, however, which led to problems exchanging files.
Eventually various commonly used sets of values for the 128-255 range emerged.
Some were true standards, defined by the International Organization for
Standardization, and some were *de facto* conventions that were invented by one
company or another and managed to catch on.

256 characters aren't very many.  For example, you can't fit both the accented
characters used in Western Europe and the Cyrillic alphabet used for Russian
into the 128-255 range because there are more than 128 such characters.

You could write files using different codes (all your Russian files in a coding
system called KOI8, all your French files in a different coding system called
Latin1), but what if you wanted to write a French document that quotes some
Russian text?  In the 1980s people began to want to solve this problem, and the
Unicode standardization effort began.

Unicode started out using 16-bit characters instead of 8-bit characters.  16
bits means you have 2^16 = 65,536 distinct values available, making it possible
to represent many different characters from many different alphabets; an initial
goal was to have Unicode contain the alphabets for every single human language.
It turns out that even 16 bits isn't enough to meet that goal, and the modern
Unicode specification uses a wider range of codes, 0 to 1,114,111 (0x10FFFF in
base 16).

There's a related ISO standard, ISO 10646.  Unicode and ISO 10646 were
originally separate efforts, but the specifications were merged with the 1.1
revision of Unicode.

(This discussion of Unicode's history is highly simplified.  I don't think the
average Python programmer needs to worry about the historical details; consult
the Unicode consortium site listed in the References for more information.)


Definitions
-----------

A **character** is the smallest possible component of a text.  'A', 'B', 'C',
etc., are all different characters.  So are 'È' and 'Í'.  Characters are
abstractions, and vary depending on the language or context you're talking
about.  For example, the symbol for ohms (Ω) is usually drawn much like the
capital letter omega (Ω) in the Greek alphabet (they may even be the same in
some fonts), but these are two different characters that have different
meanings.

The Unicode standard describes how characters are represented by **code
points**.  A code point is an integer value, usually denoted in base 16.  In the
standard, a code point is written using the notation U+12CA to mean the
character with value 0x12ca (4,810 in decimal).  The Unicode standard contains a
lot of tables listing characters and their corresponding code points::

   0061    'a'; LATIN SMALL LETTER A
   0062    'b'; LATIN SMALL LETTER B
   0063    'c'; LATIN SMALL LETTER C
   ...
   007B    '{'; LEFT CURLY BRACKET

Strictly, these definitions imply that it's meaningless to say 'this is
character U+12CA'.  U+12CA is a code point, which represents some particular
character; in this case, it represents the character 'ETHIOPIC SYLLABLE WI'.  In
informal contexts, this distinction between code points and characters will
sometimes be forgotten.
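To make the distinction concrete, here is a quick check (a minimal sketch,
assuming a Python 3 interpreter with the standard :mod:`unicodedata` module):

```python
import unicodedata

# The code point U+12CA, written as a Python escape sequence:
ch = '\u12ca'
assert ord(ch) == 0x12CA          # the integer value of the code point
assert unicodedata.name(ch) == 'ETHIOPIC SYLLABLE WI'
```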

A character is represented on a screen or on paper by a set of graphical
elements that's called a **glyph**.  The glyph for an uppercase A, for example,
is two diagonal strokes and a horizontal stroke, though the exact details will
depend on the font being used.  Most Python code doesn't need to worry about
glyphs; figuring out the correct glyph to display is generally the job of a GUI
toolkit or a terminal's font renderer.


Encodings
---------

To summarize the previous section: a Unicode string is a sequence of code
points, which are numbers from 0 to 0x10FFFF.  This sequence needs to be
represented as a set of bytes (meaning, values from 0 to 255) in memory.  The
rules for translating a Unicode string into a sequence of bytes are called an
**encoding**.

The first encoding you might think of is an array of 32-bit integers.  In this
representation, the string "Python" would look like this::

       P           y           t           h           o           n
    0x50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00
       0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23

This representation is straightforward but using it presents a number of
problems.

1. It's not portable; different processors order the bytes differently.

2. It's very wasteful of space.  In most texts, the majority of the code points
   are less than 127, or less than 255, so a lot of space is occupied by zero
   bytes.  The above string takes 24 bytes compared to the 6 bytes needed for an
   ASCII representation.  Increased RAM usage doesn't matter too much (desktop
   computers have gigabytes of RAM, and strings aren't usually that large), but
   expanding our usage of disk and network bandwidth by a factor of 4 is
   intolerable.

3. It's not compatible with existing C functions such as ``strlen()``, so a new
   family of wide string functions would need to be used.

4. Many Internet standards are defined in terms of textual data, and can't
   handle content with embedded zero bytes.

Generally people don't use this encoding, instead choosing other
encodings that are more efficient and convenient.  UTF-8 is probably
the most commonly supported encoding; it will be discussed below.

Encodings don't have to handle every possible Unicode character, and most
encodings don't.  The rules for converting a Unicode string into the ASCII
encoding, for example, are simple; for each code point:

1. If the code point is < 128, each byte is the same as the value of the code
   point.

2. If the code point is 128 or greater, the Unicode string can't be represented
   in this encoding.  (Python raises a :exc:`UnicodeEncodeError` exception in
   this case.)

Latin-1, also known as ISO-8859-1, is a similar encoding.  Unicode code points
0-255 are identical to the Latin-1 values, so converting to this encoding simply
requires converting code points to byte values; if a code point larger than 255
is encountered, the string can't be encoded into Latin-1.
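This one-to-one correspondence is easy to verify in the interpreter (a small
sketch, assuming a Python 3 session):

```python
# 'é' is U+00E9, so its Latin-1 encoding is the single byte 0xE9.
assert 'é'.encode('latin-1') == b'\xe9'

# A code point above 255, such as U+20AC (the euro sign),
# has no Latin-1 representation and raises UnicodeEncodeError.
try:
    '\u20ac'.encode('latin-1')
except UnicodeEncodeError:
    print('cannot be encoded into Latin-1')
```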

Encodings don't have to be simple one-to-one mappings like Latin-1.  Consider
IBM's EBCDIC, which was used on IBM mainframes.  Letter values weren't in one
block: 'a' through 'i' had values from 129 to 137, but 'j' through 'r' were 145
through 153.  If you wanted to use EBCDIC as an encoding, you'd probably use
some sort of lookup table to perform the conversion, but this is largely an
internal detail.

UTF-8 is one of the most commonly used encodings.  UTF stands for "Unicode
Transformation Format", and the '8' means that 8-bit numbers are used in the
encoding.  (There's also a UTF-16 encoding, but it's less frequently used than
UTF-8.)  UTF-8 uses the following rules:

1. If the code point is < 128, it's represented by the corresponding byte value.
2. If the code point is between 128 and 0x7FF, it's turned into two byte values
   between 128 and 255.
3. Code points > 0x7FF are turned into three- or four-byte sequences, where each
   byte of the sequence is between 128 and 255.

UTF-8 has several convenient properties:

1. It can handle any Unicode code point.
2. A Unicode string is turned into a string of bytes containing no embedded zero
   bytes.  This avoids byte-ordering issues, and means UTF-8 strings can be
   processed by C functions such as ``strcpy()`` and sent through protocols that
   can't handle zero bytes.
3. A string of ASCII text is also valid UTF-8 text.
4. UTF-8 is fairly compact; most commonly used code points are turned into one,
   two, or three bytes, and values less than 128 occupy only a single byte.
5. If bytes are corrupted or lost, it's possible to determine the start of the
   next UTF-8-encoded code point and resynchronize.  It's also unlikely that
   random 8-bit data will look like valid UTF-8.
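The length rules above can be checked directly by encoding sample characters
(a quick sketch for a Python 3 session):

```python
# One byte for ASCII, two bytes up to U+07FF, three bytes up to U+FFFF,
# and four bytes beyond that.
assert len('a'.encode('utf-8')) == 1        # U+0061
assert len('é'.encode('utf-8')) == 2        # U+00E9
assert len('\u20ac'.encode('utf-8')) == 3   # U+20AC, EURO SIGN
assert len('\U00010000'.encode('utf-8')) == 4

# ASCII text is unchanged by UTF-8 encoding (property 3):
assert 'abc'.encode('utf-8') == b'abc'
```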


References
----------

The Unicode Consortium site at <http://www.unicode.org> has character charts, a
glossary, and PDF versions of the Unicode specification.  Be prepared for some
difficult reading.  <http://www.unicode.org/history/> is a chronology of the
origin and development of Unicode.

To help understand the standard, Jukka Korpela has written an introductory guide
to reading the Unicode character tables, available at
<http://www.cs.tut.fi/~jkorpela/unicode/guide.html>.

Another good introductory article was written by Joel Spolsky
<http://www.joelonsoftware.com/articles/Unicode.html>.
If this introduction didn't make things clear to you, you should try reading this
alternate article before continuing.

.. Jason Orendorff XXX http://www.jorendorff.com/articles/unicode/ is broken

Wikipedia entries are often helpful; see the entries for "character encoding"
<http://en.wikipedia.org/wiki/Character_encoding> and UTF-8
<http://en.wikipedia.org/wiki/UTF-8>, for example.


Python's Unicode Support
========================

Now that you've learned the rudiments of Unicode, we can look at Python's
Unicode features.

The String Type
---------------

Since Python 3.0, the language features a :class:`str` type that contains
Unicode characters, meaning any string created using ``"unicode rocks!"``,
``'unicode rocks!'``, or the triple-quoted string syntax is stored as Unicode.

To insert a Unicode character that is not part of ASCII, e.g., any letters with
accents, one can use escape sequences in string literals, like so::

   >>> "\N{GREEK CAPITAL LETTER DELTA}"  # Using the character name
   '\u0394'
   >>> "\u0394"                          # Using a 16-bit hex value
   '\u0394'
   >>> "\U00000394"                      # Using a 32-bit hex value
   '\u0394'

In addition, one can create a string using the :meth:`~bytes.decode` method of
:class:`bytes`.  This method takes an encoding, such as UTF-8, and, optionally,
an *errors* argument.

The *errors* argument specifies the response when the input string can't be
converted according to the encoding's rules.  Legal values for this argument are
'strict' (raise a :exc:`UnicodeDecodeError` exception), 'replace' (use U+FFFD,
'REPLACEMENT CHARACTER'), or 'ignore' (just leave the character out of the
Unicode result).  The following examples show the differences::

    >>> b'\x80abc'.decode("utf-8", "strict")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0:
                        unexpected code byte
    >>> b'\x80abc'.decode("utf-8", "replace")
    '?abc'
    >>> b'\x80abc'.decode("utf-8", "ignore")
    'abc'

(In this code example, the Unicode replacement character has been replaced by
a question mark because it may not be displayed on some systems.)

Encodings are specified as strings containing the encoding's name.  Python 3.2
comes with roughly 100 different encodings; see the Python Library Reference at
:ref:`standard-encodings` for a list.  Some encodings have multiple names; for
example, 'latin-1', 'iso_8859_1' and '8859' are all synonyms for the same
encoding.

One-character Unicode strings can also be created with the :func:`chr`
built-in function, which takes integers and returns a Unicode string of length 1
that contains the corresponding code point.  The reverse operation is the
built-in :func:`ord` function that takes a one-character Unicode string and
returns the code point value::

   >>> chr(40960)
   '\ua000'
   >>> ord('\ua000')
   40960

Converting to Bytes
-------------------

Another important :class:`str` method is ``.encode([encoding], [errors='strict'])``,
which returns a ``bytes`` representation of the Unicode string, encoded in the
requested encoding.  The ``errors`` parameter is the same as the parameter of
the :meth:`~bytes.decode` method, with one additional possibility; as well as
'strict', 'ignore', and 'replace' (which in this case inserts a question mark
instead of the unencodable character), you can also pass 'xmlcharrefreplace'
which uses XML's character references.  The following example shows the
different results::

    >>> u = chr(40960) + 'abcd' + chr(1972)
    >>> u.encode('utf-8')
    b'\xea\x80\x80abcd\xde\xb4'
    >>> u.encode('ascii')
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    UnicodeEncodeError: 'ascii' codec can't encode character '\ua000' in
                        position 0: ordinal not in range(128)
    >>> u.encode('ascii', 'ignore')
    b'abcd'
    >>> u.encode('ascii', 'replace')
    b'?abcd?'
    >>> u.encode('ascii', 'xmlcharrefreplace')
    b'&#40960;abcd&#1972;'

The low-level routines for registering and accessing the available encodings are
found in the :mod:`codecs` module.  However, the encoding and decoding functions
returned by this module are usually more low-level than is comfortable, so I'm
not going to describe the :mod:`codecs` module here.  If you need to implement a
completely new encoding, you'll need to learn about the :mod:`codecs` module
interfaces, but implementing encodings is a specialized task that also won't be
covered here.  Consult the Python documentation to learn more about this module.

Unicode Literals in Python Source Code
--------------------------------------

In Python source code, specific Unicode code points can be written using the
``\u`` escape sequence, which is followed by four hex digits giving the code
point.  The ``\U`` escape sequence is similar, but expects eight hex digits,
not four::

   >>> s = "a\xac\u1234\u20ac\U00008000"
   ...       ^^^^ two-digit hex escape
   ...           ^^^^^^ four-digit Unicode escape
   ...                       ^^^^^^^^^^ eight-digit Unicode escape
   >>> for c in s:  print(ord(c), end=" ")
   ...
   97 172 4660 8364 32768

Using escape sequences for code points greater than 127 is fine in small doses,
but becomes an annoyance if you're using many accented characters, as you would
in a program with messages in French or some other accent-using language.  You
can also assemble strings using the :func:`chr` built-in function, but this is
even more tedious.

Ideally, you'd want to be able to write literals in your language's natural
encoding.  You could then edit Python source code with your favorite editor
which would display the accented characters naturally, and have the right
characters used at runtime.

Python supports writing source code in UTF-8 by default, but you can use almost
any encoding if you declare the encoding being used.  This is done by including
a special comment as either the first or second line of the source file::

    #!/usr/bin/env python
    # -*- coding: latin-1 -*-

    u = 'abcdé'
    print(ord(u[-1]))

The syntax is inspired by Emacs's notation for specifying variables local to a
file.  Emacs supports many different variables, but Python only supports
'coding'.  The ``-*-`` symbols indicate to Emacs that the comment is special;
they have no significance to Python but are a convention.  Python looks for
``coding: name`` or ``coding=name`` in the comment.

If you don't include such a comment, the default encoding used will be UTF-8 as
already mentioned.


Unicode Properties
------------------

The Unicode specification includes a database of information about code points.
For each code point that's defined, the information includes the character's
name, its category, and the numeric value if applicable (Unicode has characters
representing the Roman numerals and fractions such as one-third and
four-fifths).  There are also properties related to the code point's use in
bidirectional text and other display-related properties.

The following program displays some information about several characters, and
prints the numeric value of one particular character::

   import unicodedata

   u = chr(233) + chr(0x0bf2) + chr(3972) + chr(6000) + chr(13231)

   for i, c in enumerate(u):
       print(i, '%04x' % ord(c), unicodedata.category(c), end=" ")
       print(unicodedata.name(c))

   # Get numeric value of second character
   print(unicodedata.numeric(u[1]))

When run, this prints::

   0 00e9 Ll LATIN SMALL LETTER E WITH ACUTE
   1 0bf2 No TAMIL NUMBER ONE THOUSAND
   2 0f84 Mn TIBETAN MARK HALANTA
   3 1770 Lo TAGBANWA LETTER SA
   4 33af So SQUARE RAD OVER S SQUARED
   1000.0

The category codes are abbreviations describing the nature of the character.
These are grouped into categories such as "Letter", "Number", "Punctuation", or
"Symbol", which in turn are broken up into subcategories.  To take the codes
from the above output, ``'Ll'`` means 'Letter, lowercase', ``'No'`` means
"Number, other", ``'Mn'`` is "Mark, nonspacing", and ``'So'`` is "Symbol,
other".  See
<http://unicode.org/Public/5.1.0/ucd/UCD.html#General_Category_Values> for a
list of category codes.
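Because the first letter of a category code identifies the major class, simple
character classification is easy (a small sketch for a Python 3 session):

```python
import unicodedata

assert unicodedata.category('a') == 'Ll'   # Letter, lowercase
assert unicodedata.category('9') == 'Nd'   # Number, decimal digit
assert unicodedata.category('!') == 'Po'   # Punctuation, other

# The first letter alone identifies the major class:
def is_letter(ch):
    return unicodedata.category(ch).startswith('L')

assert is_letter('Ω') and not is_letter('!')
```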

References
----------

The :class:`str` type is described in the Python library reference at
:ref:`typesseq`.

The documentation for the :mod:`unicodedata` module.

The documentation for the :mod:`codecs` module.

Marc-André Lemburg gave a presentation at EuroPython 2002 titled "Python and
Unicode".  A PDF version of his slides is available at
<http://downloads.egenix.com/python/Unicode-EPC2002-Talk.pdf>, and is an
excellent overview of the design of Python's Unicode features (based on Python
2, where the Unicode string type is called ``unicode`` and literals start with
``u``).


Reading and Writing Unicode Data
================================

Once you've written some code that works with Unicode data, the next problem is
input/output.  How do you get Unicode strings into your program, and how do you
convert Unicode into a form suitable for storage or transmission?

It's possible that you may not need to do anything depending on your input
sources and output destinations; you should check whether the libraries used in
your application support Unicode natively.  XML parsers often return Unicode
data, for example.  Many relational databases also support Unicode-valued
columns and can return Unicode values from an SQL query.

Unicode data is usually converted to a particular encoding before it gets
written to disk or sent over a socket.  It's possible to do all the work
yourself: open a file, read an 8-bit byte string from it, and convert the string
with ``str(bytes, encoding)``.  However, the manual approach is not recommended.

One problem is the multi-byte nature of encodings; one Unicode character can be
represented by several bytes.  If you want to read the file in arbitrary-sized
chunks (say, 1K or 4K), you need to write error-handling code to catch the case
where only part of the bytes encoding a single Unicode character are read at the
end of a chunk.  One solution would be to read the entire file into memory and
then perform the decoding, but that prevents you from working with files that
are extremely large; if you need to read a 2 GB file, you need 2 GB of RAM.
(More, really, since for at least a moment you'd need to have both the encoded
string and its Unicode version in memory.)
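The partial-sequence problem can be observed with an incremental decoder from
the :mod:`codecs` module, which buffers incomplete byte sequences between
chunks (a minimal sketch):

```python
import codecs

decoder = codecs.getincrementaldecoder('utf-8')()

data = 'é'.encode('utf-8')               # b'\xc3\xa9' -- a two-byte sequence
assert decoder.decode(data[:1]) == ''    # first byte alone: no output yet
assert decoder.decode(data[1:]) == 'é'   # sequence completed by the next chunk
```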

The solution would be to use the low-level decoding interface to catch the case
of partial coding sequences.  The work of implementing this has already been
done for you: the built-in :func:`open` function can return a file-like object
that assumes the file's contents are in a specified encoding and accepts Unicode
parameters for methods such as ``.read()`` and ``.write()``.  This works through
:func:`open`\'s *encoding* and *errors* parameters which are interpreted just
like those in string objects' :meth:`encode` and :meth:`decode` methods.

Reading Unicode from a file is therefore simple::

    f = open('unicode.rst', encoding='utf-8')
    for line in f:
        print(repr(line))

It's also possible to open files in update mode, allowing both reading and
writing::

    f = open('test', encoding='utf-8', mode='w+')
    f.write('\u4500 blah blah blah\n')
    f.seek(0)
    print(repr(f.readline()[:1]))
    f.close()

The Unicode character U+FEFF is used as a byte-order mark (BOM), and is often
written as the first character of a file in order to assist with autodetection
of the file's byte ordering.  Some encodings, such as UTF-16, expect a BOM to be
present at the start of a file; when such an encoding is used, the BOM will be
automatically written as the first character and will be silently dropped when
the file is read.  There are variants of these encodings, such as 'utf-16-le'
and 'utf-16-be' for little-endian and big-endian encodings, that specify one
particular byte ordering and don't skip the BOM.

In some areas, it is also convention to use a "BOM" at the start of UTF-8
encoded files; the name is misleading since UTF-8 is not byte-order dependent.
The mark simply announces that the file is encoded in UTF-8.  Use the
'utf-8-sig' codec to automatically skip the mark if present for reading such
files.
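The BOM handling is easy to observe directly on byte strings (a quick sketch
for a Python 3 session):

```python
# The 'utf-16' codec writes a BOM and consumes it when decoding:
encoded = 'abc'.encode('utf-16')
assert encoded[:2] in (b'\xff\xfe', b'\xfe\xff')  # BOM, in native byte order
assert encoded.decode('utf-16') == 'abc'          # BOM dropped on decode

# The UTF-8 "BOM" is the three bytes EF BB BF; 'utf-8-sig' skips it:
data = b'\xef\xbb\xbfhello'
assert data.decode('utf-8-sig') == 'hello'
assert data.decode('utf-8') == '\ufeffhello'      # plain utf-8 keeps U+FEFF
```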
498
Georg Brandl116aa622007-08-15 14:28:22 +0000499
500Unicode filenames
501-----------------
502
503Most of the operating systems in common use today support filenames that contain
504arbitrary Unicode characters. Usually this is implemented by converting the
505Unicode string into some encoding that varies depending on the system. For
Georg Brandlc575c902008-09-13 17:46:05 +0000506example, Mac OS X uses UTF-8 while Windows uses a configurable encoding; on
Georg Brandl116aa622007-08-15 14:28:22 +0000507Windows, Python uses the name "mbcs" to refer to whatever the currently
508configured encoding is. On Unix systems, there will only be a filesystem
509encoding if you've set the ``LANG`` or ``LC_CTYPE`` environment variables; if
510you haven't, the default encoding is ASCII.
511
512The :func:`sys.getfilesystemencoding` function returns the encoding to use on
513your current system, in case you want to do the encoding manually, but there's
514not much reason to bother. When opening a file for reading or writing, you can
515usually just provide the Unicode string as the filename, and it will be
516automatically converted to the right encoding for you::
517
Georg Brandlf6945182008-02-01 11:56:49 +0000518 filename = 'filename\u4500abc'
Georg Brandl116aa622007-08-15 14:28:22 +0000519 f = open(filename, 'w')
520 f.write('blah\n')
521 f.close()
522
523Functions in the :mod:`os` module such as :func:`os.stat` will also accept Unicode
524filenames.
525
526:func:`os.listdir`, which returns filenames, raises an issue: should it return
Georg Brandl0c074222008-11-22 10:26:59 +0000527the Unicode version of filenames, or should it return byte strings containing
Georg Brandl116aa622007-08-15 14:28:22 +0000528the encoded versions? :func:`os.listdir` will do both, depending on whether you
Georg Brandl0c074222008-11-22 10:26:59 +0000529provided the directory path as a byte string or a Unicode string. If you pass a
530Unicode string as the path, filenames will be decoded using the filesystem's
531encoding and a list of Unicode strings will be returned, while passing a byte
532path will return the byte string versions of the filenames. For example,
533assuming the default filesystem encoding is UTF-8, running the following
534program::
Georg Brandl116aa622007-08-15 14:28:22 +0000535
Georg Brandla1c6a1c2009-01-03 21:26:05 +0000536 fn = 'filename\u4500abc'
537 f = open(fn, 'w')
538 f.close()
Georg Brandl116aa622007-08-15 14:28:22 +0000539
Georg Brandla1c6a1c2009-01-03 21:26:05 +0000540 import os
541 print(os.listdir(b'.'))
542 print(os.listdir('.'))
Georg Brandl116aa622007-08-15 14:28:22 +0000543
544will produce the following output::
545
Georg Brandla1c6a1c2009-01-03 21:26:05 +0000546 amk:~$ python t.py
547 [b'.svn', b'filename\xe4\x94\x80abc', ...]
548 ['.svn', 'filename\u4500abc', ...]
Georg Brandl116aa622007-08-15 14:28:22 +0000549
550The first list contains UTF-8-encoded filenames, and the second list contains
551the Unicode versions.
552
Note that on most occasions, you should use the Unicode APIs.  The bytes APIs
should only be used on systems where undecodable file names can be present,
such as Unix systems.
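
On such systems, a sketch of working with the bytes API: the
``'surrogateescape'`` error handler (available since Python 3.1) lets
undecodable bytes survive a decode/encode round trip.  The choice of
``'utf-8'`` below is an assumption about the filesystem encoding::

```python
import os

# List filenames as raw bytes; this works even for names that are
# not valid in the filesystem encoding.
for raw in os.listdir(b'.'):
    # Decode for display; 'surrogateescape' maps each undecodable
    # byte to a lone surrogate instead of raising UnicodeDecodeError.
    name = raw.decode('utf-8', 'surrogateescape')
    # Encoding with the same handler restores the original bytes.
    assert name.encode('utf-8', 'surrogateescape') == raw
```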


Tips for Writing Unicode-aware Programs
---------------------------------------

This section provides some suggestions on writing software that deals with
Unicode.

The most important tip is:

   Software should only work with Unicode strings internally, converting to a
   particular encoding on output.
569
Georg Brandl0c074222008-11-22 10:26:59 +0000570If you attempt to write processing functions that accept both Unicode and byte
Georg Brandl116aa622007-08-15 14:28:22 +0000571strings, you will find your program vulnerable to bugs wherever you combine the
Georg Brandl0c074222008-11-22 10:26:59 +0000572two different kinds of strings. There is no automatic encoding or decoding if
573you do e.g. ``str + bytes``, a :exc:`TypeError` is raised for this expression.
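
A minimal sketch of this boundary discipline (the ``process`` function is a
hypothetical stand-in for your program's internal logic)::

```python
def process(text):
    # Internal logic works purely on str (Unicode).
    return text.upper()

# Decode bytes at the input boundary...
data = b'caf\xc3\xa9'        # UTF-8 bytes from a file or socket
text = data.decode('utf-8')  # -> 'café'

# ...work with str internally...
result = process(text)

# ...and encode again only at the output boundary.
out = result.encode('utf-8')

# Mixing the two kinds raises TypeError rather than silently guessing:
try:
    text + data
except TypeError:
    print('cannot concatenate str and bytes')
```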

It's easy to miss such problems if you only test your software with data that
doesn't contain any accents; everything will seem to work, but there's actually
a bug in your program waiting for the first user who attempts to use characters
> 127.  A second tip, therefore, is:

   Include characters > 127 and, even better, characters > 255 in your test
   data.
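
For instance, a small set of test values covering all three ranges (the
strings themselves are arbitrary examples)::

```python
# Sample test data: ASCII only, characters > 127 (accented Latin-1),
# and characters > 255 (outside Latin-1 entirely).
samples = ['abc', 'caf\u00e9 na\u00efve', 'filename\u4500abc']

for s in samples:
    # A round trip through UTF-8 should reproduce the string exactly;
    # code that mishandles non-ASCII data tends to fail here first.
    assert s.encode('utf-8').decode('utf-8') == s

# Characters > 255 cannot be encoded in Latin-1 at all:
try:
    samples[2].encode('latin-1')
except UnicodeEncodeError:
    print('\u4500 cannot be represented in Latin-1')
```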

When using data coming from a web browser or some other untrusted source, a
common technique is to check for illegal characters in a string before using the
string in a generated command line or storing it in a database.  If you're doing
this, be careful to check the string once it's in the form that will be used or
stored; it's possible for encodings to be used to disguise characters.  This is
especially true if the input data also specifies the encoding; many encodings
leave the commonly checked-for characters alone, but Python includes some
encodings such as ``'base64'`` that modify every single character.

For example, let's say you have a content management system that takes a
filename, supplied as an encoded byte string, and you want to disallow paths
with a '/' character.  You might write this code::

   def read_file(filename, encoding):
       if b'/' in filename:
           raise ValueError("'/' not allowed in filenames")
       unicode_name = filename.decode(encoding)
       f = open(unicode_name, 'r')
       # ... return contents of file ...

However, if an attacker could specify the ``'base64'`` encoding, they could pass
``'L2V0Yy9wYXNzd2Q='``, which is the base-64 encoded form of the string
``'/etc/passwd'``, to read a system file.  The above code looks for ``'/'``
characters in the encoded form and misses the dangerous character in the
resulting decoded form.
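
The fix is to perform the check after decoding, on the string in the form that
will actually be used.  A sketch using the :mod:`base64` module to demonstrate,
with the encoded value from the text above::

```python
import base64

encoded = b'L2V0Yy9wYXNzd2Q='   # base-64 form of '/etc/passwd'

# The dangerous character is invisible in the encoded form...
assert b'/' not in encoded

# ...but appears once the data is decoded.
decoded = base64.b64decode(encoded).decode('ascii')
assert decoded == '/etc/passwd'

# So run the validity check on the decoded string instead:
if '/' in decoded:
    print("'/' not allowed in filenames")
```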

References
----------

The PDF slides for Marc-André Lemburg's presentation "Writing Unicode-aware
Applications in Python" are available at
<http://downloads.egenix.com/python/LSM2005-Developing-Unicode-aware-applications-in-Python.pdf>
and discuss questions of character encodings as well as how to internationalize
and localize an application.


Revision History and Acknowledgements
=====================================

Thanks to the following people who have noted errors or offered suggestions on
this article: Nicholas Bastin, Marius Gedminas, Kent Johnson, Ken Krugler,
Marc-André Lemburg, Martin von Löwis, Chad Whitacre.

Version 1.0: posted August 5 2005.

Version 1.01: posted August 7 2005.  Corrects factual and markup errors; adds
several links.

Version 1.02: posted August 16 2005.  Corrects factual errors.

Version 1.1: Feb-Nov 2008.  Updates the document with respect to Python 3 changes.

Version 1.11: posted June 20 2010.  Notes that Python 3.x is not covered,
and that the HOWTO only covers 2.x.

.. comment Describe Python 3.x support (new section? new document?)
.. comment Additional topic: building Python w/ UCS2 or UCS4 support
.. comment Describe use of codecs.StreamRecoder and StreamReaderWriter

.. comment
   Original outline:

   - [ ] Unicode introduction
   - [ ] ASCII
   - [ ] Terms
   - [ ] Character
   - [ ] Code point
   - [ ] Encodings
   - [ ] Common encodings: ASCII, Latin-1, UTF-8
   - [ ] Unicode Python type
   - [ ] Writing unicode literals
   - [ ] Obscurity: -U switch
   - [ ] Built-ins
   - [ ] unichr()
   - [ ] ord()
   - [ ] unicode() constructor
   - [ ] Unicode type
   - [ ] encode(), decode() methods
   - [ ] Unicodedata module for character properties
   - [ ] I/O
   - [ ] Reading/writing Unicode data into files
   - [ ] Byte-order marks
   - [ ] Unicode filenames
   - [ ] Writing Unicode programs
   - [ ] Do everything in Unicode
   - [ ] Declaring source code encodings (PEP 263)
   - [ ] Other issues
   - [ ] Building Python (UCS2, UCS4)