.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules, as long as they run in the same Python interpreter process.
Additionally, application code can define and configure a parent logger in one
module and create (but not configure) a child logger in a separate module, and
all logger calls to the child will pass up to the parent. Here is a main
module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again. At that point, the only change needed is to
modify the severity level of the logger and/or handler to debug.
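
For instance (a minimal sketch; the logger name ``example`` is arbitrary), the
dormant ``logger.debug`` calls can be switched back on just by changing a
level::

    import logging

    logging.basicConfig(level=logging.WARNING)   # debug calls start out dormant
    logger = logging.getLogger('example')

    logger.debug('loop variable is %r', 42)      # produces no output

    logging.getLogger().setLevel(logging.DEBUG)  # 'switch on' the debug calls
    logger.debug('loop variable is %r', 42)      # now emitted to the console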

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/tmp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warning message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    # use sendall() so that the whole payload is transmitted
    s.sendall(struct.pack('>L', len(data_to_send)))
    s.sendall(data_to_send)
    s.close()
    print('complete')
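
The first module above reads its initial configuration from a file named
``logging.conf``. For reference, here is a minimal example of what such a file
might contain, configuring the ``simpleExample`` logger used above (a sketch;
adjust the handler, level and format to suit)::

    [loggers]
    keys=root,simpleExample

    [handlers]
    keys=consoleHandler

    [formatters]
    keys=simpleFormatter

    [logger_root]
    level=DEBUG
    handlers=consoleHandler

    [logger_simpleExample]
    level=DEBUG
    handlers=consoleHandler
    qualname=simpleExample
    propagate=0

    [handler_consoleHandler]
    class=StreamHandler
    level=DEBUG
    formatter=simpleFormatter
    args=(sys.stdout,)

    [formatter_simpleFormatter]
    format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
    datefmt=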


.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`SocketServer`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import select
    import SocketServer
    import struct


    class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = 1

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`makePickle` method and implementing your alternative there, as
well as adapting the above script to use your alternative serialization.
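
As a sketch of that approach (assuming JSON as the alternative format; the
class name ``JSONSocketHandler`` is made up for illustration), you might
override :meth:`makePickle` like this::

    import json
    import logging
    import logging.handlers
    import struct

    class JSONSocketHandler(logging.handlers.SocketHandler):
        """A SocketHandler variant which serializes each record as JSON
        rather than pickle. The receiver must be adapted to match."""

        def makePickle(self, record):
            # Copy the record's attributes, resolving the message and
            # dropping values which may not be JSON-serializable.
            d = dict(record.__dict__)
            d['msg'] = record.getMessage()
            d['args'] = None
            d['exc_info'] = None
            s = json.dumps(d).encode('utf-8')
            # Same length-prefixed framing as the pickle-based original.
            return struct.pack('>L', len(s)) + s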


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. the remote client's username or IP address). Although you
could use the *extra* parameter to achieve this, it's not always convenient to
pass the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea,
because these instances are not garbage collected. While this is not a problem
in practice, the number of :class:`Logger` instances depends on the level of
granularity you want in your logging, and it becomes hard to manage if the
number of :class:`Logger` instances is effectively unbounded.
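
For reference, here is how the *extra* parameter is used (a minimal sketch;
the ``clientip`` and ``user`` keys are arbitrary, but they must match the
names used in the format string)::

    import logging

    FORMAT = '%(asctime)-15s %(clientip)s %(user)-8s %(message)s'
    logging.basicConfig(format=FORMAT)
    logger = logging.getLogger('example')

    # The dict passed as 'extra' is merged into the LogRecord's __dict__,
    # so the format string above can refer to 'clientip' and 'user'.
    logger.warning('Protocol problem: %s', 'connection reset',
                   extra={'clientip': '192.168.0.1', 'user': 'fbloggs'})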


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`process` method of :class:`LoggerAdapter` is where the contextual
information is added to the logging output. It's passed the message and
keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you passed an 'extra' keyword
argument in the call to the adapter, it would be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override :meth:`process`
to do what you need. Here's an example script which uses this class, and
also illustrates what dict-like behaviour is needed from an arbitrary
'dict-like' object for use in the constructor::

    import logging

    class ConnInfo:
        """
        An example class which shows how an arbitrary class can be used as
        the 'extra' context information repository passed to a LoggerAdapter.
        """

        def __getitem__(self, name):
            """
            To allow this instance to look like a dict.
            """
            from random import choice
            if name == 'ip':
                result = choice(['127.0.0.1', '192.168.0.1'])
            elif name == 'user':
                result = choice(['jim', 'fred', 'sheila'])
            else:
                result = self.__dict__.get(name, '?')
            return result

        def __iter__(self):
            """
            To allow iteration over keys, which will be merged into
            the LogRecord dict before formatting and output.
            """
            keys = ['ip', 'user']
            keys.extend(self.__dict__.keys())
            return keys.__iter__()

    if __name__ == '__main__':
        from random import choice
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        a1 = logging.LoggerAdapter(logging.getLogger('a.b.c'),
                                   {'ip': '123.231.231.123', 'user': 'sheila'})
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        a2 = logging.LoggerAdapter(logging.getLogger('d.e.f'), ConnInfo())
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

When this script is run, the output should look something like this::

    2008-01-18 14:49:54,023 a.b.c DEBUG    IP: 123.231.231.123 User: sheila   A debug message
    2008-01-18 14:49:54,023 a.b.c INFO     IP: 123.231.231.123 User: sheila   An info message with some parameters
    2008-01-18 14:49:54,023 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: jim      A message at INFO level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: fred     A message at ERROR level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: jim      A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: fred     A message at INFO level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 127.0.0.1       User: jim      A message at WARNING level with 2 parameters


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least
the interesting parts of it) can be stored in a thread-local
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add information from the request - say, the remote IP address and remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`SocketHandler`, and have a separate
process which implements a socket server which reads from the socket and logs
to file. (If you prefer, you can dedicate one thread in one of the existing
processes to perform this function.) :ref:`This section <network-logging>`
documents this approach in more detail and includes a working socket receiver
which can be used as a starting point for you to adapt in your own
applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`Lock` class from this module to serialize access to the file from
your processes. The existing :class:`FileHandler` and subclasses do not make
use of :mod:`multiprocessing` at present, though they may do so in the future.
Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
http://bugs.python.org/issue3770).
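
Such a handler might look like the following sketch (the class name
``LockingFileHandler`` is made up for illustration; the handler, and hence the
lock, must be created in the parent process before any workers are spawned, so
that they all share the same lock object)::

    import logging
    import multiprocessing

    class LockingFileHandler(logging.FileHandler):
        """A FileHandler which serializes emit() across processes by
        guarding it with a multiprocessing.Lock."""

        def __init__(self, filename, mode='a'):
            logging.FileHandler.__init__(self, filename, mode)
            self.mp_lock = multiprocessing.Lock()

        def emit(self, record):
            # Only one process at a time may write to the file.
            with self.mp_lock:
                logging.FileHandler.emit(self, record)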

.. currentmodule:: logging.handlers


Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d', i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the ``.6`` file is erased.

Obviously this example sets the log length much too small, as an extreme
example. In practice, you would set *maxBytes* to an appropriate value.
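
If rotation by time rather than by size is a better fit, a
:class:`TimedRotatingFileHandler` can be used in much the same way (a sketch;
the filename, schedule and retention count are arbitrary)::

    import logging
    import logging.handlers

    logger = logging.getLogger('MyTimedLogger')
    logger.setLevel(logging.DEBUG)

    # Roll the log over at midnight, keeping the last 7 days' worth of files
    handler = logging.handlers.TimedRotatingFileHandler(
                  'timed_rotation_example.out', when='midnight', backupCount=7)
    logger.addHandler(handler)

    logger.debug("This goes to the current day's file")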

An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~logging.config.dictConfig` to put the configuration into effect::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level': 'DEBUG',
                'class': 'django.utils.log.NullHandler',
            },
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers': ['null'],
                'propagate': True,
                'level': 'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_
of the Django documentation.
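
The Django-specific handler classes above are not importable outside a Django
project. Here is a cut-down, standard-library-only variant (a sketch) which
can be run as-is to see :func:`~logging.config.dictConfig` in action::

    import logging
    import logging.config

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'simple': {
                'format': '%(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'simple',
            },
        },
        'loggers': {
            'myproject': {
                'handlers': ['console'],
                'level': 'INFO',
            },
        },
    }

    logging.config.dictConfig(LOGGING)
    logging.getLogger('myproject').info('configured via dictConfig')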

Inserting a BOM into messages sent to a SysLogHandler
-----------------------------------------------------

`RFC 5424 <http://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <http://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 2.6 and 2.7, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 2.7.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      u'ASCII section\ufeffUnicode section'

   The Unicode code point ``u'\ufeff'``, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

If the formatted message is Unicode, it *will* be encoded using UTF-8 encoding
by ``SysLogHandler``. If you follow the above rules, you should be able to
produce RFC 5424-compliant messages. If you don't, logging may not complain,
but your messages will not be RFC 5424-compliant, and your syslog daemon may
complain.
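
To see the effect of such a format string, the following sketch performs just
the formatting and encoding step (it uses a plain :class:`~logging.Formatter`
and a hand-built :class:`~logging.LogRecord`, without involving a syslog
daemon)::

    import logging

    # A format string of the shape described above: an ASCII section,
    # the BOM code point, then a section which may contain any Unicode.
    fmt = logging.Formatter(u'ASCII section\ufeff%(message)s')

    record = logging.LogRecord('app', logging.INFO, 'example.py', 1,
                               u'caf\xe9 \u2615', None, None)
    encoded = fmt.format(record).encode('utf-8')

    # The UTF-8 BOM bytes now separate the two sections.
    assert b'\xef\xbb\xbf' in encoded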