======================
Design and History FAQ
======================

Why does Python use indentation for grouping of statements?
-------------------------------------------------------------

Guido van Rossum believes that using indentation for grouping is extremely
elegant and contributes a lot to the clarity of the average Python program.
Most people learn to love this feature after a while.

Since there are no begin/end brackets there cannot be a disagreement between
grouping perceived by the parser and the human reader. Occasionally C
programmers will encounter a fragment of code like this::

    if (x <= y)
            x++;
            y--;
    z++;

Only the ``x++`` statement is executed if the condition is true, but the
indentation leads you to believe otherwise. Even experienced C programmers will
sometimes stare at it a long time wondering why ``y`` is being decremented even
for ``x > y``.
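
For comparison, here is a rough Python sketch of the same logic (assuming ``x``,
``y`` and ``z`` are existing integer variables): the grouping you see is exactly
the grouping the parser uses, so the decrement of ``y`` is visibly outside the
``if``::

    if x <= y:
        x += 1
    y -= 1      # always runs, and the layout says so
    z += 1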

Because there are no begin/end brackets, Python is much less prone to
coding-style conflicts. In C there are many different ways to place the braces.
If you're used to reading and writing code that uses one style, you will feel at
least slightly uneasy when reading (or being required to write) another style.

Many coding styles place begin/end brackets on a line by themselves. This makes
programs considerably longer and wastes valuable screen space, making it harder
to get a good overview of a program. Ideally, a function should fit on one
screen (say, 20-30 lines). 20 lines of Python can do a lot more work than 20
lines of C. This is not solely due to the lack of begin/end brackets -- the
lack of declarations and the high-level data types are also responsible -- but
the indentation-based syntax certainly helps.


Why am I getting strange results with simple arithmetic operations?
---------------------------------------------------------------------

See the next question.


Why are floating point calculations so inaccurate?
----------------------------------------------------

People are often very surprised by results like this::

    >>> 1.2-1.0
    0.199999999999999996

and think it is a bug in Python. It's not. This has nothing to do with Python,
but with how the underlying C platform handles floating point numbers, and
ultimately with the inaccuracies introduced when writing down numbers as a
string of a fixed number of digits.

The internal representation of floating point numbers uses a fixed number of
binary digits to represent a decimal number. Some decimal numbers can't be
represented exactly in binary, resulting in small roundoff errors.

In decimal math, there are many numbers that can't be represented with a fixed
number of decimal digits, e.g. 1/3 = 0.3333333333...

In base 2, 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001, and so on. The decimal 0.2
equals 2/10, which equals 1/5, and this becomes the infinitely repeating binary
fraction 0.001100110011001...

Floating point numbers have only a limited number of bits of precision (Python's
floats are C doubles, which carry 53 significant bits), so the digits are cut
off at some point, and the resulting number is 0.199999999999999996 in decimal,
not 0.2.
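
You can peek at the stored approximation by asking for more digits than
``repr()`` shows. This is a quick interactive sketch; the exact trailing digits
may differ slightly between platforms and Python versions::

    >>> print "%.25f" % 0.2
    0.2000000000000000111022302
    >>> print "%.25f" % (1.2 - 1.0)
    0.1999999999999999555910790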

The ``repr()`` of a floating point number prints as many digits as are
necessary to make ``eval(repr(f)) == f`` true for any float f. The ``str()``
function prints fewer digits and this often results in the more sensible number
that was probably intended::

    >>> 0.2
    0.20000000000000001
    >>> print 0.2
    0.2

One of the consequences of this is that it is error-prone to compare the result
of some computation to a float with ``==``. Tiny inaccuracies may mean that
``==`` fails. Instead, you have to check that the difference between the two
numbers is less than a certain threshold::

    epsilon = 0.0000000000001  # Tiny allowed error
    expected_result = 0.4

    if expected_result-epsilon <= computation() <= expected_result+epsilon:
        ...

Please see the chapter on :ref:`floating point arithmetic <tut-fp-issues>` in
the Python tutorial for more information.


Why are Python strings immutable?
---------------------------------

There are several advantages.

One is performance: knowing that a string is immutable means we can allocate
space for it at creation time, and the storage requirements are fixed and
unchanging. This is also one of the reasons for the distinction between tuples
and lists.

Another advantage is that strings in Python are considered as "elemental" as
numbers. No amount of activity will change the value 8 to anything else, and in
Python, no amount of activity will change the string "eight" to anything else.
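
A quick interactive sketch of what this means in practice: string operations
return new strings rather than modifying the original, and item assignment is
simply not allowed (the exact wording of the error message varies between
Python versions)::

    >>> s = "eight"
    >>> s.upper()       # returns a new string ...
    'EIGHT'
    >>> s               # ... the original is untouched
    'eight'
    >>> s[0] = "E"      # in-place modification is an error
    Traceback (most recent call last):
      ...
    TypeError: 'str' object does not support item assignment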


.. _why-self:

Why must 'self' be used explicitly in method definitions and calls?
---------------------------------------------------------------------

The idea was borrowed from Modula-3. It turns out to be very useful, for a
variety of reasons.

First, it's more obvious that you are using a method or instance attribute
instead of a local variable. Reading ``self.x`` or ``self.meth()`` makes it
absolutely clear that an instance variable or method is used even if you don't
know the class definition by heart. In C++, you can sort of tell by the lack of
a local variable declaration (assuming globals are rare or easily recognizable)
-- but in Python, there are no local variable declarations, so you'd have to
look up the class definition to be sure. Some C++ and Java coding standards
call for instance attributes to have an ``m_`` prefix, so this explicitness is
still useful in those languages, too.

Second, it means that no special syntax is necessary if you want to explicitly
reference or call the method from a particular class. In C++, if you want to
use a method from a base class which is overridden in a derived class, you have
to use the ``::`` operator -- in Python you can write
``baseclass.methodname(self, <argument list>)``. This is particularly useful
for :meth:`__init__` methods, and in general in cases where a derived class
method wants to extend the base class method of the same name and thus has to
call the base class method somehow.
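
For instance, a derived class can extend a base class method simply by calling
it through the class and passing ``self`` by hand. This is a minimal sketch;
the class and method names are invented for the example::

    class Animal:
        def describe(self):
            return "an animal"

    class Dog(Animal):
        def describe(self):
            # Explicitly call the overridden base class method,
            # then extend its result.
            return Animal.describe(self) + " that barks"

    print Dog().describe()   # prints: an animal that barks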

Finally, for instance variables it solves a syntactic problem with assignment:
since local variables in Python are (by definition!) those variables to which a
value is assigned in a function body (and that aren't explicitly declared
global), there has to be some way to tell the interpreter that an assignment was
meant to assign to an instance variable instead of to a local variable, and it
should preferably be syntactic (for efficiency reasons). C++ does this through
declarations, but Python doesn't have declarations and it would be a pity having
to introduce them just for this purpose. Using the explicit ``self.var`` solves
this nicely. Similarly, for using instance variables, having to write
``self.var`` means that references to unqualified names inside a method don't
have to search the instance's dictionaries. To put it another way, local
variables and instance variables live in two different namespaces, and you need
to tell Python which namespace to use.


Why can't I use an assignment in an expression?
-----------------------------------------------

Many people used to C or Perl complain that they want to use this C idiom:

.. code-block:: c

    while (line = readline(f)) {
        // do something with line
    }

where in Python you're forced to write this::

    while True:
        line = f.readline()
        if not line:
            break
        ... # do something with line

The reason for not allowing assignment in Python expressions is a common,
hard-to-find bug in those other languages, caused by this construct:

.. code-block:: c

    if (x = 0) {
        // error handling
    }
    else {
        // code that only works for nonzero x
    }

The error is a simple typo: ``x = 0``, which assigns 0 to the variable ``x``,
was written when the comparison ``x == 0`` was certainly what was intended.

Many alternatives have been proposed. Most are hacks that save some typing but
use arbitrary or cryptic syntax or keywords, and fail the simple criterion for
language change proposals: it should intuitively suggest the proper meaning to a
human reader who has not yet been introduced to the construct.

An interesting phenomenon is that most experienced Python programmers recognize
the ``while True`` idiom and don't seem to be missing the assignment in
expression construct much; it's only newcomers who express a strong desire to
add this to the language.

There's an alternative way of spelling this that seems attractive but is
generally less robust than the "while True" solution::

    line = f.readline()
    while line:
        ... # do something with line...
        line = f.readline()

The problem with this is that if you change your mind about exactly how you get
the next line (e.g. you want to change it into ``sys.stdin.readline()``) you
have to remember to change two places in your program -- the second occurrence
is hidden at the bottom of the loop.

The best approach is to use iterators, making it possible to loop through
objects using the ``for`` statement. For example, in the current version of
Python file objects support the iterator protocol, so you can now write simply::

    for line in f:
        ... # do something with line...



Why does Python use methods for some functionality (e.g. list.index()) but functions for others (e.g. len(list))?
--------------------------------------------------------------------------------------------------------------------

The major reason is history. Functions were used for those operations that were
generic for a group of types and which were intended to work even for objects
that didn't have methods at all (e.g. tuples). It is also convenient to have a
function that can readily be applied to an amorphous collection of objects when
you use the functional features of Python (``map()``, ``apply()`` et al).

In fact, implementing ``len()``, ``max()``, ``min()`` as a built-in function is
actually less code than implementing them as methods for each type. One can
quibble about individual cases but it's a part of Python, and it's too late to
make such fundamental changes now. The functions have to remain to avoid massive
code breakage.

.. XXX talk about protocols?

Note that for string operations Python has moved from external functions (the
``string`` module) to methods. However, ``len()`` is still a function.


Why is join() a string method instead of a list or tuple method?
-----------------------------------------------------------------

Strings became much more like other standard types starting in Python 1.6, when
methods were added which give the same functionality that has always been
available using the functions of the string module. Most of these new methods
have been widely accepted, but the one which appears to make some programmers
feel uncomfortable is::

    ", ".join(['1', '2', '4', '8', '16'])

which gives the result::

    "1, 2, 4, 8, 16"

There are two common arguments against this usage.

The first runs along the lines of: "It looks really ugly using a method of a
string literal (string constant)", to which the answer is that it might, but a
string literal is just a fixed value. If the methods are to be allowed on names
bound to strings there is no logical reason to make them unavailable on
literals.

The second objection is typically cast as: "I am really telling a sequence to
join its members together with a string constant". Sadly, you aren't. For some
reason there seems to be much less difficulty with having :meth:`~str.split` as
a string method, since in that case it is easy to see that ::

    "1, 2, 4, 8, 16".split(", ")

is an instruction to a string literal to return the substrings delimited by the
given separator (or, by default, arbitrary runs of white space). In this case a
Unicode string returns a list of Unicode strings, an ASCII string returns a list
of ASCII strings, and everyone is happy.

:meth:`~str.join` is a string method because in using it you are telling the
separator string to iterate over a sequence of strings and insert itself between
adjacent elements. This method can be used with any argument which obeys the
rules for sequence objects, including any new classes you might define yourself.
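
As a rough illustration of that last point, the separator happily joins strings
produced by anything it can iterate over, including a class of your own or a
generator expression. The class name below is invented for the example::

    class Countdown:
        """A user-defined sequence of digit strings."""
        def __getitem__(self, i):
            if i > 3:
                raise IndexError
            return str(3 - i)

    print "-".join(Countdown())                  # prints: 3-2-1-0
    print ", ".join(str(n) for n in range(5))    # prints: 0, 1, 2, 3, 4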

Because this is a string method it can work for Unicode strings as well as plain
ASCII strings. If ``join()`` were a method of the sequence types then the
sequence types would have to decide which type of string to return depending on
the type of the separator.

.. XXX remove next paragraph eventually

If none of these arguments persuade you, then for the moment you can continue to
use the ``join()`` function from the string module, which allows you to write ::

    string.join(['1', '2', '4', '8', '16'], ", ")


How fast are exceptions?
------------------------

A try/except block is extremely efficient if no exceptions are raised; actually
catching an exception is expensive. In versions of Python prior to 2.0 it was
common to use this idiom::

    try:
        value = dict[key]
    except KeyError:
        dict[key] = getvalue(key)
        value = dict[key]

This only made sense when you expected the dict to have the key almost all the
time. If that wasn't the case, you coded it like this::

    if dict.has_key(key):
        value = dict[key]
    else:
        dict[key] = getvalue(key)
        value = dict[key]

(In Python 2.0 and higher, you can code this as ``value = dict.setdefault(key,
getvalue(key))``.)


Why isn't there a switch or case statement in Python?
-------------------------------------------------------

You can do this easily enough with a sequence of ``if... elif... elif... else``.
There have been some proposals for switch statement syntax, but there is no
consensus (yet) on whether and how to do range tests. See :pep:`275` for
complete details and the current status.

For cases where you need to choose from a very large number of possibilities,
you can create a dictionary mapping case values to functions to call. For
example::

    def function_1(...):
        ...

    functions = {'a': function_1,
                 'b': function_2,
                 'c': self.method_1, ...}

    func = functions[value]
    func()

For calling methods on objects, you can simplify yet further by using the
:func:`getattr` built-in to retrieve methods with a particular name::

    def visit_a(self, ...):
        ...
    ...

    def dispatch(self, value):
        method_name = 'visit_' + str(value)
        method = getattr(self, method_name)
        method()

It's suggested that you use a prefix for the method names, such as ``visit_`` in
this example. Without such a prefix, if values are coming from an untrusted
source, an attacker would be able to call any method on your object.
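
Here is a concrete, runnable sketch of the dictionary-dispatch idea; the handler
names and the use of ``dict.get`` for a default case are this example's own
choices, not part of the original recipe::

    def handle_add(x, y):
        return x + y

    def handle_sub(x, y):
        return x - y

    def handle_unknown(x, y):
        raise ValueError("unsupported operation")

    handlers = {'add': handle_add, 'sub': handle_sub}

    def dispatch(op, x, y):
        # Fall back to a default handler instead of raising KeyError.
        return handlers.get(op, handle_unknown)(x, y)

    print dispatch('add', 2, 3)   # prints 5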


Can't you emulate threads in the interpreter instead of relying on an OS-specific thread implementation?
------------------------------------------------------------------------------------------------------------

Answer 1: Unfortunately, the interpreter pushes at least one C stack frame for
each Python stack frame. Also, extensions can call back into Python at almost
random moments. Therefore, a complete threads implementation requires thread
support for C.

Answer 2: Fortunately, there is `Stackless Python <http://www.stackless.com>`_,
which has a completely redesigned interpreter loop that avoids the C stack.
It's still experimental but looks very promising. Although it is binary
compatible with standard Python, it's still unclear whether Stackless will make
it into the core -- maybe it's just too revolutionary.


Why can't lambda forms contain statements?
------------------------------------------

Python lambda forms cannot contain statements because Python's syntactic
framework can't handle statements nested inside expressions. However, in
Python, this is not a serious problem. Unlike lambda forms in other languages,
where they add functionality, Python lambdas are only a shorthand notation if
you're too lazy to define a function.

Functions are already first class objects in Python, and can be declared in a
local scope. Therefore the only advantage of using a lambda form instead of a
locally-defined function is that you don't need to invent a name for the
function -- but that's just a local variable to which the function object (which
is exactly the same type of object that a lambda form yields) is assigned!
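
In other words, the two spellings below create objects of exactly the same kind;
only the way the name gets bound differs (a small interactive sketch)::

    >>> double = lambda x: x * 2     # "anonymous", yet bound to a name anyway
    >>> def double_def(x):
    ...     return x * 2
    ...
    >>> type(double) is type(double_def)
    True
    >>> double(21), double_def(21)
    (42, 42)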


Can Python be compiled to machine code, C or some other language?
-------------------------------------------------------------------

Not easily. Python's high level data types, dynamic typing of objects and
run-time invocation of the interpreter (using :func:`eval` or :keyword:`exec`)
together mean that a "compiled" Python program would probably consist mostly of
calls into the Python run-time system, even for seemingly simple operations like
``x+1``.

Several projects described in the Python newsgroup or at past `Python
conferences <http://python.org/community/workshops/>`_ have shown that this
approach is feasible, although the speedups reached so far are only modest
(e.g. 2x). Jython uses the same strategy for compiling to Java bytecode. (Jim
Hugunin has demonstrated that in combination with whole-program analysis,
speedups of 1000x are feasible for small demo programs. See the proceedings
from the `1997 Python conference
<http://python.org/workshops/1997-10/proceedings/>`_ for more information.)

Internally, Python source code is always translated into a bytecode
representation, and this bytecode is then executed by the Python virtual
machine. In order to avoid the overhead of repeatedly parsing and translating
modules that rarely change, this bytecode is written into a file whose name
ends in ".pyc" whenever a module is parsed. When the corresponding .py file is
changed, it is parsed and translated again and the .pyc file is rewritten.

There is no performance difference once the .pyc file has been loaded, as the
bytecode read from the .pyc file is exactly the same as the bytecode created by
direct translation. The only difference is that loading code from a .pyc file
is faster than parsing and translating a .py file, so the presence of
precompiled .pyc files improves the start-up time of Python scripts. If
desired, the Lib/compileall.py module can be used to create valid .pyc files for
a given set of modules.
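
For example, :mod:`compileall` can be invoked from Python itself to precompile a
directory tree of modules (a minimal sketch; the directory name is made up)::

    import compileall

    # Recursively byte-compile every .py file under ./mylib, writing .pyc files.
    compileall.compile_dir('mylib')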

Note that the main script executed by Python, even if its filename ends in .py,
is not compiled to a .pyc file. It is compiled to bytecode, but the bytecode is
not saved to a file. Usually main scripts are quite short, so this doesn't cost
much speed.

.. XXX check which of these projects are still alive

There are also several programs which make it easier to intermingle Python and C
code in various ways to increase performance. See, for example, `Psyco
<http://psyco.sourceforge.net/>`_, `Pyrex
<http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/>`_, `PyInline
<http://pyinline.sourceforge.net/>`_, `Py2Cmod
<http://sourceforge.net/projects/py2cmod/>`_, and `Weave
<http://www.scipy.org/site_content/weave>`_.


How does Python manage memory?
------------------------------

The details of Python memory management depend on the implementation. The
standard C implementation of Python uses reference counting to detect
inaccessible objects, and another mechanism to collect reference cycles,
periodically executing a cycle detection algorithm which looks for inaccessible
cycles and deletes the objects involved. The :mod:`gc` module provides functions
to perform a garbage collection, obtain debugging statistics, and tune the
collector's parameters.
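
A few of those :mod:`gc` functions, for illustration. This is a sketch of an
interactive session; ``gc.collect()`` returns the number of unreachable objects
it found, and the exact numbers shown here will differ from run to run::

    >>> import gc
    >>> gc.get_threshold()   # the collector's current tuning parameters
    (700, 10, 10)
    >>> gc.collect()         # force a full collection
    0
    >>> gc.set_threshold(1000, 15, 15)   # change when collection is triggered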

Jython relies on the Java runtime so the JVM's garbage collector is used. This
difference can cause some subtle porting problems if your Python code depends on
the behavior of the reference counting implementation.

Sometimes objects get stuck in tracebacks temporarily and hence are not
deallocated when you might expect. Clear the tracebacks with::

    import sys
    sys.exc_clear()
    sys.exc_traceback = sys.last_traceback = None

Tracebacks are used for reporting errors, implementing debuggers and related
things. They contain a portion of the program state extracted during the
handling of an exception (usually the most recent exception).

In the absence of circularities and tracebacks, Python programs need not
explicitly manage memory.

Why doesn't Python use a more traditional garbage collection scheme? For one
thing, this is not a C standard feature and hence it's not portable. (Yes, we
know about the Boehm GC library. It has bits of assembler code for *most*
common platforms, not for all of them, and although it is mostly transparent, it
isn't completely transparent; patches are required to get Python to work with
it.)

Traditional GC also becomes a problem when Python is embedded into other
applications. While in a standalone Python it's fine to replace the standard
malloc() and free() with versions provided by the GC library, an application
embedding Python may want to have its *own* substitute for malloc() and free(),
and may not want Python's. Right now, Python works with anything that
implements malloc() and free() properly.

In Jython, the following code (which is fine in CPython) will probably run out
of file descriptors long before it runs out of memory::

    for file in <very long list of files>:
        f = open(file)
        c = f.read(1)

Using the current reference counting and destructor scheme, each new assignment
to f closes the previous file. Using GC, this is not guaranteed. If you want
to write code that will work with any Python implementation, you should
explicitly close the file; this will work regardless of GC::

    for file in <very long list of files>:
        f = open(file)
        c = f.read(1)
        f.close()


Why isn't all memory freed when Python exits?
---------------------------------------------

Objects referenced from the global namespaces of Python modules are not always
deallocated when Python exits. This may happen if there are circular
references. There are also certain bits of memory that are allocated by the C
library that are impossible to free (e.g. a tool like Purify will complain about
these). Python is, however, aggressive about cleaning up memory on exit and
does try to destroy every single object.

If you want to force Python to delete certain things on deallocation use the
:mod:`atexit` module to run a function that will force those deletions.
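
For instance (a minimal sketch; the cache object and the cleanup function are
invented for the example)::

    import atexit

    cache = {}

    def flush_cache():
        # Called automatically when the interpreter shuts down.
        cache.clear()

    atexit.register(flush_cache)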


Why are there separate tuple and list data types?
--------------------------------------------------

Lists and tuples, while similar in many respects, are generally used in
fundamentally different ways. Tuples can be thought of as being similar to
Pascal records or C structs; they're small collections of related data which may
be of different types which are operated on as a group. For example, a
Cartesian coordinate is appropriately represented as a tuple of two or three
numbers.

Lists, on the other hand, are more like arrays in other languages. They tend to
hold a varying number of objects all of which have the same type and which are
operated on one-by-one. For example, ``os.listdir('.')`` returns a list of
strings representing the files in the current directory. Functions which
operate on this output would generally not break if you added another file or
two to the directory.

Tuples are immutable, meaning that once a tuple has been created, you can't
replace any of its elements with a new value. Lists are mutable, meaning that
you can always change a list's elements. Only immutable elements can be used as
dictionary keys, and hence only tuples and not lists can be used as keys.
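
A small sketch of the two roles: a tuple as a fixed record of heterogeneous
fields, a list as a growing collection of similar items (the variable names are
made up for the example)::

    point = (12.0, 7.5)                  # a fixed pair of coordinates
    readings = [0.3, 0.5, 0.4]           # a collection that grows over time
    readings.append(0.6)                 # fine: lists are mutable

    locations = {point: "transmitter"}   # fine: tuples can be dict keys
    # locations[readings] = "sensor"     # would raise TypeError: lists are unhashable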


How are lists implemented?
--------------------------

Python's lists are really variable-length arrays, not Lisp-style linked lists.
The implementation uses a contiguous array of references to other objects, and
keeps a pointer to this array and the array's length in a list head structure.

This makes indexing a list ``a[i]`` an operation whose cost is independent of
the size of the list or the value of the index.

When items are appended or inserted, the array of references is resized. Some
cleverness is applied to improve the performance of appending items repeatedly;
when the array must be grown, some extra space is allocated so the next few
times don't require an actual resize.


How are dictionaries implemented?
---------------------------------

Python's dictionaries are implemented as resizable hash tables. Compared to
B-trees, this gives better performance for lookup (the most common operation by
far) under most circumstances, and the implementation is simpler.

Dictionaries work by computing a hash code for each key stored in the dictionary
using the :func:`hash` built-in function. The hash code varies widely depending
on the key; for example, "Python" hashes to -539294296 while "python", a string
that differs by a single bit, hashes to 1142331976. The hash code is then used
to calculate a location in an internal array where the value will be stored.
Assuming that you're storing keys that all have different hash values, this
means that dictionaries take constant time -- O(1), in computer science notation
-- to retrieve a key. It also means that no sorted order of the keys is
maintained, and traversing the array as the ``.keys()`` and ``.items()`` methods
do will output the dictionary's content in some arbitrary jumbled order.
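
A quick interactive sketch (the hash values and the iteration order shown here
are illustrative only; both vary across Python versions and platforms)::

    >>> hash("Python") == hash("python")
    False
    >>> d = {"one": 1, "two": 2, "three": 3}
    >>> d.keys()          # arbitrary order, not insertion or sorted order
    ['three', 'two', 'one']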


Why must dictionary keys be immutable?
--------------------------------------

The hash table implementation of dictionaries uses a hash value calculated from
the key value to find the key. If the key were a mutable object, its value
could change, and thus its hash could also change. But since whoever changes
the key object can't tell that it was being used as a dictionary key, it can't
move the entry around in the dictionary. Then, when you try to look up the same
object in the dictionary it won't be found because its hash value is different.
If you tried to look up the old value it wouldn't be found either, because the
value of the object found in that hash bin would be different.

If you want a dictionary indexed with a list, simply convert the list to a tuple
first; the function ``tuple(L)`` creates a tuple with the same entries as the
list ``L``. Tuples are immutable and can therefore be used as dictionary keys.

Some unacceptable solutions that have been proposed:

- Hash lists by their address (object ID). This doesn't work because if you
  construct a new list with the same value it won't be found; e.g.::

      d = {[1,2]: '12'}
      print d[[1,2]]

  would raise a KeyError exception because the id of the ``[1,2]`` used in the
  second line differs from that in the first line. In other words, dictionary
  keys should be compared using ``==``, not using :keyword:`is`.

- Make a copy when using a list as a key. This doesn't work because the list,
  being a mutable object, could contain a reference to itself, and then the
  copying code would run into an infinite loop.

- Allow lists as keys but tell the user not to modify them. This would allow a
  class of hard-to-track bugs in programs when you forgot or modified a list by
  accident. It also invalidates an important invariant of dictionaries: every
  value in ``d.keys()`` is usable as a key of the dictionary.

- Mark lists as read-only once they are used as a dictionary key. The problem
  is that it's not just the top-level object that could change its value; you
  could use a tuple containing a list as a key. Entering anything as a key into
  a dictionary would require marking all objects reachable from there as
  read-only -- and again, self-referential objects could cause an infinite loop.

There is a trick to get around this if you need to, but use it at your own risk:
You can wrap a mutable structure inside a class instance which has both a
:meth:`__cmp__` and a :meth:`__hash__` method. You must then make sure that the
hash value for all such wrapper objects that reside in a dictionary (or other
hash-based structure) remains fixed while the object is in the dictionary (or
other structure). ::

    class ListWrapper:
        def __init__(self, the_list):
            self.the_list = the_list

        def __cmp__(self, other):
            # cmp() returns 0 when the wrapped lists compare equal, which is
            # exactly what __cmp__ must return for equal objects.
            return cmp(self.the_list, other.the_list)

        def __hash__(self):
            l = self.the_list
            result = 98767 - len(l)*555
            for i in range(len(l)):
                try:
                    result = result + (hash(l[i]) % 9999999) * 1001 + i
                except TypeError:
                    # unhashable member: fall back to a position-based value
                    result = (result % 7777777) + i * 333
            return result


Note that the hash computation is complicated by the possibility that some
members of the list may be unhashable and also by the possibility of arithmetic
overflow.

Furthermore it must always be the case that if ``o1 == o2`` (i.e.
``o1.__cmp__(o2) == 0``) then ``hash(o1) == hash(o2)`` (i.e. ``o1.__hash__() ==
o2.__hash__()``), regardless of whether the object is in a dictionary or not.
If you fail to meet these restrictions, dictionaries and other hash-based
structures will misbehave.

In the case of ListWrapper, whenever the wrapper object is in a dictionary the
wrapped list must not change to avoid anomalies. Don't do this unless you are
prepared to think hard about the requirements and the consequences of not
meeting them correctly. Consider yourself warned.


Why doesn't list.sort() return the sorted list?
-----------------------------------------------

In situations where performance matters, making a copy of the list just to sort
it would be wasteful. Therefore, :meth:`list.sort` sorts the list in place. In
order to remind you of that fact, it does not return the sorted list. This way,
you won't be fooled into accidentally overwriting a list when you need a sorted
copy but also need to keep the unsorted version around.

In Python 2.4 a new builtin -- :func:`sorted` -- has been added. This function
creates a new list from a provided iterable, sorts it and returns it. For
example, here's how to iterate over the keys of a dictionary in sorted order::

    for key in sorted(dict.iterkeys()):
        ... # do whatever with dict[key]...


How do you specify and enforce an interface spec in Python?
-------------------------------------------------------------

An interface specification for a module as provided by languages such as C++ and
Java describes the prototypes for the methods and functions of the module. Many
feel that compile-time enforcement of interface specifications helps in the
construction of large programs.

Python 2.6 adds an :mod:`abc` module that lets you define Abstract Base Classes
(ABCs). You can then use :func:`isinstance` and :func:`issubclass` to check
whether an instance or a class implements a particular ABC. The
:mod:`collections` module defines a set of useful ABCs such as
:class:`Iterable`, :class:`Container`, and :class:`MutableMapping`.
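
A minimal sketch of defining and checking such an interface with :mod:`abc`
(the class names here are invented for the example)::

    import abc

    class Serializer(object):
        """An 'interface': concrete subclasses must provide dump()."""
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def dump(self, obj):
            """Return obj rendered as a string."""

    class ReprSerializer(Serializer):
        def dump(self, obj):
            return repr(obj)

    print issubclass(ReprSerializer, Serializer)    # True
    print isinstance(ReprSerializer(), Serializer)  # True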

For Python, many of the advantages of interface specifications can be obtained
by an appropriate test discipline for components. There is also a tool,
PyChecker, which can be used to find problems due to subclassing.

A good test suite for a module can both provide a regression test and serve as a
module interface specification and a set of examples. Many Python modules can
be run as a script to provide a simple "self test." Even modules which use
complex external interfaces can often be tested in isolation using trivial
"stub" emulations of the external interface. The :mod:`doctest` and
:mod:`unittest` modules or third-party test frameworks can be used to construct
exhaustive test suites that exercise every line of code in a module.

An appropriate testing discipline can help build large complex applications in
Python as well as having interface specifications would. In fact, it can be
better because an interface specification cannot test certain properties of a
program. For example, the :meth:`append` method is expected to add new elements
to the end of some internal list; an interface specification cannot test that
your :meth:`append` implementation will actually do this correctly, but it's
trivial to check this property in a test suite.

Writing test suites is very helpful, and you might want to design your code with
an eye to making it easily tested. One increasingly popular technique,
test-driven development, calls for writing parts of the test suite first, before
you write any of the actual code. Of course Python allows you to be sloppy and
not write test cases at all.


Why are default values shared between objects?
----------------------------------------------

This type of bug commonly bites neophyte programmers. Consider this function::

    def foo(D={}):  # Danger: shared reference to one dict for all calls
        ... compute something ...
        D[key] = value
        return D

The first time you call this function, ``D`` contains a single item. The second
time, ``D`` contains two items because when ``foo()`` begins executing, ``D``
starts out with an item already in it.

It is often expected that a function call creates new objects for default
values. This is not what happens. Default values are created exactly once, when
the function is defined. If that object is changed, like the dictionary in this
example, subsequent calls to the function will refer to this changed object.

By definition, immutable objects such as numbers, strings, tuples, and ``None``,
are safe from change. Changes to mutable objects such as dictionaries, lists,
and class instances can lead to confusion.

Because of this feature, it is good programming practice to not use mutable
objects as default values. Instead, use ``None`` as the default value and
inside the function, check if the parameter is ``None`` and create a new
list/dictionary/whatever if it is. For example, don't write::

    def foo(dict={}):
        ...

but::

    def foo(dict=None):
        if dict is None:
            dict = {}  # create a new dict for local namespace

This feature can be useful. When you have a function that's time-consuming to
compute, a common technique is to cache the parameters and the resulting value
of each call to the function, and return the cached value if the same value is
requested again. This is called "memoizing", and can be implemented like this::

    # Callers will never provide a third parameter for this function.
    def expensive(arg1, arg2, _cache={}):
        if _cache.has_key((arg1, arg2)):
            return _cache[(arg1, arg2)]

        # Calculate the value
        result = ... expensive computation ...
        _cache[(arg1, arg2)] = result  # Store result in the cache
        return result

You could use a global variable containing a dictionary instead of the default
value; it's a matter of taste.


Why is there no goto?
---------------------

You can use exceptions to provide a "structured goto" that even works across
function calls. Many feel that exceptions can conveniently emulate all
reasonable uses of the "go" or "goto" constructs of C, Fortran, and other
languages. For example::

    class label(Exception): pass  # declare a label

    try:
        ...
        if condition: raise label()  # goto label
        ...
    except label:  # where to goto
        pass
    ...

This doesn't allow you to jump into the middle of a loop, but that's usually
considered an abuse of goto anyway. Use sparingly.


Why can't raw strings (r-strings) end with a backslash?
--------------------------------------------------------

More precisely, they can't end with an odd number of backslashes: the unpaired
backslash at the end escapes the closing quote character, leaving an
unterminated string.

Raw strings were designed to ease creating input for processors (chiefly regular
expression engines) that want to do their own backslash escape processing. Such
processors consider an unmatched trailing backslash to be an error anyway, so
raw strings disallow that. In return, they allow you to pass on the string
quote character by escaping it with a backslash. These rules work well when
r-strings are used for their intended purpose.

If you're trying to build Windows pathnames, note that all Windows system calls
accept forward slashes too::

    f = open("/mydir/file.txt")  # works fine!

If you're trying to build a pathname for a DOS command, try e.g. one of ::

    dir = r"\this\is\my\dos\dir" "\\"
    dir = r"\this\is\my\dos\dir\ "[:-1]
    dir = "\\this\\is\\my\\dos\\dir\\"


Why doesn't Python have a "with" statement for attribute assignments?
----------------------------------------------------------------------

Python has a 'with' statement that wraps the execution of a block, calling code
on the entrance and exit from the block. Some languages have a construct that
looks like this::

    with obj:
        a = 1               # equivalent to obj.a = 1
        total = total + 1   # obj.total = obj.total + 1

In Python, such a construct would be ambiguous.

Other languages, such as Object Pascal, Delphi, and C++, use static types, so
it's possible to know, in an unambiguous way, what member is being assigned
to. This is the main point of static typing -- the compiler *always* knows the
scope of every variable at compile time.

Python uses dynamic types. It is impossible to know in advance which attribute
will be referenced at runtime. Member attributes may be added or removed from
objects on the fly. This makes it impossible to know, from a simple reading,
what attribute is being referenced: a local one, a global one, or a member
attribute?

For instance, take the following incomplete snippet::

    def foo(a):
        with a:
            print x

The snippet assumes that "a" must have a member attribute called "x". However,
there is nothing in Python that tells the interpreter this. What should happen
if "a" is, let us say, an integer? If there is a global variable named "x",
will it be used inside the with block? As you see, the dynamic nature of Python
makes such choices much harder.

The primary benefit of "with" and similar language features (reduction of code
volume) can, however, easily be achieved in Python by assignment. Instead of::

    function(args).dict[index][index].a = 21
    function(args).dict[index][index].b = 42
    function(args).dict[index][index].c = 63

write this::

    ref = function(args).dict[index][index]
    ref.a = 21
    ref.b = 42
    ref.c = 63

This also has the side-effect of increasing execution speed because name
bindings are resolved at run-time in Python, and the second version only needs
to perform the resolution once. If the referenced object does not have a, b and
c attributes, of course, the end result is still a run-time exception.


Why are colons required for the if/while/def/class statements?
----------------------------------------------------------------

The colon is required primarily to enhance readability (one of the results of
the experimental ABC language). Consider this::

    if a == b
        print a

versus ::

    if a == b:
        print a

Notice how the second one is slightly easier to read. Notice further how a
colon sets off the example in this FAQ answer; it's a standard usage in English.

Another minor reason is that the colon makes it easier for editors with syntax
highlighting; they can look for colons to decide when indentation needs to be
increased instead of having to do a more elaborate parsing of the program text.


Why does Python allow commas at the end of lists and tuples?
--------------------------------------------------------------

Python lets you add a trailing comma at the end of lists, tuples, and
dictionaries::

    [1, 2, 3,]
    ('a', 'b', 'c',)
    d = {
        "A": [1, 5],
        "B": [6, 7],  # last trailing comma is optional but good style
    }


There are several reasons to allow this.

When you have a literal value for a list, tuple, or dictionary spread across
multiple lines, it's easier to add more elements because you don't have to
remember to add a comma to the previous line. The lines can also be sorted in
your editor without creating a syntax error.

Accidentally omitting the comma can lead to errors that are hard to diagnose.
For example::

    x = [
        "fee",
        "fie"
        "foo",
        "fum"
    ]

This list looks like it has four elements, but it actually contains three:
"fee", "fiefoo" and "fum". Always adding the comma avoids this source of error.

Allowing the trailing comma may also make programmatic code generation easier.