.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules, as long as they run in the same Python interpreter process.
Additionally, application code can define and configure a parent logger in one
module and create (but not configure) a child logger in a separate module, and
all logging calls on the child will pass up to the parent. Here is a main
module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
        creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
        creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
        created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
        calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
        doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
        done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
        finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
        calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
        received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
        done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :func:`addHandler` method has no minimum
or maximum quota for the number of handlers you may add. Sometimes it will be
beneficial for an application to log all messages of all severities to a text
file while simultaneously logging errors or above to the console. To set this
up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again. At that point, the only change needed is to
modify the severity level of the logger and/or handler to debug.

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/temp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warning message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')


.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`SocketServer`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import SocketServer
    import struct


    class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = 1

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`makePickle` method and implementing your alternative there, as
well as adapting the above script to use your alternative serialization.
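
As a sketch of one such alternative (the ``JSONSocketHandler`` name and design
are assumptions for illustration, not part of the standard library), the handler
below overrides ``makePickle`` to frame a JSON rendering of the record with the
same 4-byte length prefix used by the stock handler:

```python
import json
import logging
import logging.handlers
import struct

class JSONSocketHandler(logging.handlers.SocketHandler):
    """Hypothetical SocketHandler variant that serialises LogRecords
    as JSON rather than pickle."""

    def makePickle(self, record):
        # Resolve the record's arguments into the final message, and drop
        # fields that don't serialise naturally to JSON.
        d = dict(record.__dict__)
        d['msg'] = record.getMessage()
        d['args'] = None
        d['exc_info'] = None
        data = json.dumps(d, default=str).encode('utf-8')
        # Keep the stock framing: a 4-byte big-endian length prefix,
        # followed by the payload.
        return struct.pack('>L', len(data)) + data
```

The receiver's ``unPickle`` would then need a matching change, e.g.
``return json.loads(data.decode('utf-8'))``.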


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. the remote client's username or IP address). Although you
could use the *extra* parameter to achieve this, it's not always convenient to
pass the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is rarely a
problem in practice, if the number of :class:`Logger` instances depends on the
level of granularity you want in your logging (for example, one per
connection), the number of instances becomes effectively unbounded and hard to
manage.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`process` method of :class:`LoggerAdapter` is where the contextual
information is added to the logging output. It's passed the message and
keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.
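
You can observe this behaviour directly by calling :meth:`process` on a stock
:class:`LoggerAdapter`:

```python
import logging

logger = logging.getLogger('adapter_demo')
adapter = logging.LoggerAdapter(logger, {'connid': 'abc123'})

# The default process() leaves the message untouched and injects the
# adapter's dict as the 'extra' keyword argument - replacing any
# 'extra' value the caller supplied.
msg, kwargs = adapter.process('a message', {'extra': {'connid': 'caller'}})
```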

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's ``__dict__``, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override :meth:`process`
to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).
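
For instance, a minimal sketch (the ``reqid`` attribute and the counter are
invented for illustration; shown in Python 3 syntax - on Python 2 use
``StringIO.StringIO`` for the demo stream) might compute a fresh value each
time the logging machinery reads the key:

```python
import io
import logging

_counter = {'n': 0}

def _next_request_id():
    # Hypothetical stand-in for however you'd identify the current request.
    _counter['n'] += 1
    return 'req-%03d' % _counter['n']

class DynamicContext:
    """Looks like a dict to logging, but computes values at access time."""
    def __getitem__(self, name):
        if name == 'reqid':
            return _next_request_id()
        raise KeyError(name)

    def __iter__(self):
        return iter(['reqid'])

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(reqid)s %(message)s'))
logger = logging.getLogger('dynamic_demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

adapter = logging.LoggerAdapter(logger, DynamicContext())
adapter.info('first')
adapter.info('second')   # gets a different reqid than the first call
```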


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add, say, information from the request - say, the remote IP address and remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`SocketHandler`, and have a separate
process which implements a socket server which reads from the socket and logs
to file. (If you prefer, you can dedicate one thread in one of the existing
processes to perform this function.) :ref:`This section <network-logging>`
documents this approach in more detail and includes a working socket receiver
which can be used as a starting point for you to adapt in your own
applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`Lock` class from this module to serialize access to the file from
your processes. The existing :class:`FileHandler` and subclasses do not make
use of :mod:`multiprocessing` at present, though they may do so in the future.
Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
http://bugs.python.org/issue3770).
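
A minimal sketch of that idea follows. The ``LockedFileHandler`` name and
design are assumptions, not a stdlib facility; crucially, the lock must be
created *before* the worker processes are started, so every process shares the
same underlying semaphore:

```python
import logging
import multiprocessing
import os
import tempfile

class LockedFileHandler(logging.FileHandler):
    """Sketch: a FileHandler whose emit() is serialised across processes
    by a shared multiprocessing.Lock."""

    def __init__(self, filename, lock, mode='a'):
        logging.FileHandler.__init__(self, filename, mode)
        self._plock = lock   # must be the very same Lock object in every process

    def emit(self, record):
        with self._plock:
            logging.FileHandler.emit(self, record)
            self.flush()

# Create the lock before forking/spawning workers; pass both the lock and
# the log file name to each worker, which builds its own handler around them.
lock = multiprocessing.Lock()
logfile = os.path.join(tempfile.mkdtemp(), 'mp.log')
handler = LockedFileHandler(logfile, lock)
logger = logging.getLogger('locked_demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False
logger.info('serialized write from pid %d', os.getpid())
handler.close()
```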

.. currentmodule:: logging.handlers


Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
        LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d', i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest file (``.5``) is erased.

Obviously this example sets the log size much too small as an extreme
example. In practice, you would set *maxBytes* to an appropriate value.

An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~logging.config.dictConfig` to put the configuration into effect::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level': 'DEBUG',
                'class': 'django.utils.log.NullHandler',
            },
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers': ['null'],
                'propagate': True,
                'level': 'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_
of the Django documentation.

Inserting a BOM into messages sent to a SysLogHandler
-----------------------------------------------------

`RFC 5424 <http://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <http://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 2.6 and 2.7, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 2.7.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      u'ASCII section\ufeffUnicode section'

   The Unicode code point ``u'\ufeff'``, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

If the formatted message is Unicode, it *will* be encoded using UTF-8 encoding
by ``SysLogHandler``. If you follow the above rules, you should be able to
produce RFC 5424-compliant messages. If you don't, logging may not complain,
but your messages will not be RFC 5424-compliant, and your syslog daemon may
complain.
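
A quick illustration of the byte layout these rules produce (this uses
:func:`logging.makeLogRecord` only to get a record to format; no syslog daemon
is involved):

```python
import logging

# Format string per the rules above: a pure-ASCII section, then the BOM
# code point, then the Unicode section.
fmt = logging.Formatter(u'ASCII section\ufeffUnicode section: %(message)s')
record = logging.makeLogRecord({'msg': u'caf\u00e9'})
encoded = fmt.format(record).encode('utf-8')
# The UTF-8 BOM b'\xef\xbb\xbf' now separates the ASCII prefix from the
# UTF-8-encoded remainder of the message.
```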


Implementing structured logging
-------------------------------

Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
messages in a structured format which *is* capable of being parsed by a program
(without needing complex regular expressions to parse the log message). This is
straightforward to achieve using the logging package. There are a number of
ways in which this could be achieved, but the following is a simple approach
which uses JSON to serialise the event in a machine-parseable manner::

    import json
    import logging

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional, to improve readability

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints::

    message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of
Python used.

If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super(Encoder, self).default(o)

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage   # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints::

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.
836