.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules, as long as the calls occur in the same Python interpreter
process. Furthermore, application code can define and configure a parent logger
in one module and create (but not configure) a child logger in a separate
module, and all logger calls to the child will pass up to the parent. Here is
a main module::

   import logging
   import auxiliary_module

   # create logger with 'spam_application'
   logger = logging.getLogger('spam_application')
   logger.setLevel(logging.DEBUG)
   # create file handler which logs even debug messages
   fh = logging.FileHandler('spam.log')
   fh.setLevel(logging.DEBUG)
   # create console handler with a higher log level
   ch = logging.StreamHandler()
   ch.setLevel(logging.ERROR)
   # create formatter and add it to the handlers
   formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
   fh.setFormatter(formatter)
   ch.setFormatter(formatter)
   # add the handlers to the logger
   logger.addHandler(fh)
   logger.addHandler(ch)

   logger.info('creating an instance of auxiliary_module.Auxiliary')
   a = auxiliary_module.Auxiliary()
   logger.info('created an instance of auxiliary_module.Auxiliary')
   logger.info('calling auxiliary_module.Auxiliary.do_something')
   a.do_something()
   logger.info('finished auxiliary_module.Auxiliary.do_something')
   logger.info('calling auxiliary_module.some_function()')
   auxiliary_module.some_function()
   logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

   import logging

   # create logger
   module_logger = logging.getLogger('spam_application.auxiliary')

   class Auxiliary:
       def __init__(self):
           self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
           self.logger.info('creating an instance of Auxiliary')

       def do_something(self):
           self.logger.info('doing something')
           a = 1 + 1
           self.logger.info('done doing something')

   def some_function():
       module_logger.info('received a call to "some_function"')

The output looks like this::

   2005-03-23 23:47:11,663 - spam_application - INFO -
      creating an instance of auxiliary_module.Auxiliary
   2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
      creating an instance of Auxiliary
   2005-03-23 23:47:11,665 - spam_application - INFO -
      created an instance of auxiliary_module.Auxiliary
   2005-03-23 23:47:11,668 - spam_application - INFO -
      calling auxiliary_module.Auxiliary.do_something
   2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
      doing something
   2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
      done doing something
   2005-03-23 23:47:11,670 - spam_application - INFO -
      finished auxiliary_module.Auxiliary.do_something
   2005-03-23 23:47:11,671 - spam_application - INFO -
      calling auxiliary_module.some_function()
   2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
      received a call to "some_function"
   2005-03-23 23:47:11,673 - spam_application - INFO -
      done with auxiliary_module.some_function()

Logging from multiple threads
-----------------------------

Logging from multiple threads requires no special effort. The following example
shows logging from the main (initial) thread and another thread::

   import logging
   import threading
   import time

   def worker(arg):
       while not arg['stop']:
           logging.debug('Hi from myfunc')
           time.sleep(0.5)

   def main():
       logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
       info = {'stop': False}
       thread = threading.Thread(target=worker, args=(info,))
       thread.start()
       while True:
           try:
               logging.debug('Hello from main')
               time.sleep(0.75)
           except KeyboardInterrupt:
               info['stop'] = True
               break
       thread.join()

   if __name__ == '__main__':
       main()

When run, the script should print something like the following::

        0 Thread-1 Hi from myfunc
        3 MainThread Hello from main
      505 Thread-1 Hi from myfunc
      755 MainThread Hello from main
     1007 Thread-1 Hi from myfunc
     1507 MainThread Hello from main
     1508 Thread-1 Hi from myfunc
     2010 Thread-1 Hi from myfunc
     2258 MainThread Hello from main
     2512 Thread-1 Hi from myfunc
     3009 MainThread Hello from main
     3013 Thread-1 Hi from myfunc
     3515 Thread-1 Hi from myfunc
     3761 MainThread Hello from main
     4017 Thread-1 Hi from myfunc
     4513 MainThread Hello from main
     4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach
works for more threads than shown here, of course.

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

   import logging

   logger = logging.getLogger('simple_example')
   logger.setLevel(logging.DEBUG)
   # create file handler which logs even debug messages
   fh = logging.FileHandler('spam.log')
   fh.setLevel(logging.DEBUG)
   # create console handler with a higher log level
   ch = logging.StreamHandler()
   ch.setLevel(logging.ERROR)
   # create formatter and add it to the handlers
   formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
   ch.setFormatter(formatter)
   fh.setFormatter(formatter)
   # add the handlers to logger
   logger.addHandler(ch)
   logger.addHandler(fh)

   # 'application' code
   logger.debug('debug message')
   logger.info('info message')
   logger.warning('warn message')
   logger.error('error message')
   logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and remain
dormant until you need them again. At that time, the only change needed is to
modify the severity level of the logger and/or handler to debug.
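
For instance, those dormant debug statements can be switched on without touching
the application code, just by lowering the handler's level. A minimal sketch -
the ``level_demo`` logger name is invented, and an in-memory stream stands in
for a real console so the effect is easy to see:

```python
import io
import logging

logger = logging.getLogger('level_demo')
logger.setLevel(logging.DEBUG)

stream = io.StringIO()           # stands in for the console
handler = logging.StreamHandler(stream)
handler.setLevel(logging.ERROR)  # debug statements are dormant
logger.addHandler(handler)

logger.debug('not shown yet')
logger.error('first error')

handler.setLevel(logging.DEBUG)  # the only change needed
logger.debug('now shown')

print(stream.getvalue())
# first error
# now shown
```

The two ``debug`` calls are identical; only the handler's level decides whether
the second one produces output.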

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

   import logging

   # set up logging to file - see previous section for more details
   logging.basicConfig(level=logging.DEBUG,
                       format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                       datefmt='%m-%d %H:%M',
                       filename='/temp/myapp.log',
                       filemode='w')
   # define a Handler which writes INFO messages or higher to the sys.stderr
   console = logging.StreamHandler()
   console.setLevel(logging.INFO)
   # set a format which is simpler for console use
   formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
   # tell the handler to use this format
   console.setFormatter(formatter)
   # add the handler to the root logger
   logging.getLogger('').addHandler(console)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

   root        : INFO     Jackdaws love my big sphinx of quartz.
   myapp.area1 : INFO     How quickly daft jumping zebras vex.
   myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
   myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

   10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
   10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
   10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
   10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
   10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

   import logging
   import logging.config
   import time
   import os

   # read initial config file
   logging.config.fileConfig('logging.conf')

   # create and start listener on port 9999
   t = logging.config.listen(9999)
   t.start()

   logger = logging.getLogger('simpleExample')

   try:
       # loop through logging calls to see the difference
       # new configurations make, until Ctrl+C is pressed
       while True:
           logger.debug('debug message')
           logger.info('info message')
           logger.warning('warn message')
           logger.error('error message')
           logger.critical('critical message')
           time.sleep(5)
   except KeyboardInterrupt:
       # cleanup
       logging.config.stopListening()
       t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

   #!/usr/bin/env python
   import socket, sys, struct

   with open(sys.argv[1], 'rb') as f:
       data_to_send = f.read()

   HOST = 'localhost'
   PORT = 9999
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   print('connecting...')
   s.connect((HOST, PORT))
   print('sending config...')
   s.send(struct.pack('>L', len(data_to_send)))
   s.send(data_to_send)
   s.close()
   print('complete')


.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

   import logging, logging.handlers

   rootLogger = logging.getLogger('')
   rootLogger.setLevel(logging.DEBUG)
   socketHandler = logging.handlers.SocketHandler('localhost',
                       logging.handlers.DEFAULT_TCP_LOGGING_PORT)
   # don't bother with a formatter, since a socket handler sends the event as
   # an unformatted pickle
   rootLogger.addHandler(socketHandler)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`SocketServer`
module. Here is a basic working example::

   import pickle
   import logging
   import logging.handlers
   import SocketServer
   import struct


   class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
       """Handler for a streaming logging request.

       This basically logs the record using whatever logging policy is
       configured locally.
       """

       def handle(self):
           """
           Handle multiple requests - each expected to be a 4-byte length,
           followed by the LogRecord in pickle format. Logs the record
           according to whatever policy is configured locally.
           """
           while True:
               chunk = self.connection.recv(4)
               if len(chunk) < 4:
                   break
               slen = struct.unpack('>L', chunk)[0]
               chunk = self.connection.recv(slen)
               while len(chunk) < slen:
                   chunk = chunk + self.connection.recv(slen - len(chunk))
               obj = self.unPickle(chunk)
               record = logging.makeLogRecord(obj)
               self.handleLogRecord(record)

       def unPickle(self, data):
           return pickle.loads(data)

       def handleLogRecord(self, record):
           # if a name is specified, we use the named logger rather than the one
           # implied by the record.
           if self.server.logname is not None:
               name = self.server.logname
           else:
               name = record.name
           logger = logging.getLogger(name)
           # N.B. EVERY record gets logged. This is because Logger.handle
           # is normally called AFTER logger-level filtering. If you want
           # to do filtering, do it at the client end to save wasting
           # cycles and network bandwidth!
           logger.handle(record)

   class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
       """
       Simple TCP socket-based logging receiver suitable for testing.
       """

       allow_reuse_address = 1

       def __init__(self, host='localhost',
                    port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                    handler=LogRecordStreamHandler):
           SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
           self.abort = 0
           self.timeout = 1
           self.logname = None

       def serve_until_stopped(self):
           import select
           abort = 0
           while not abort:
               rd, wr, ex = select.select([self.socket.fileno()],
                                          [], [],
                                          self.timeout)
               if rd:
                   self.handle_request()
               abort = self.abort

   def main():
       logging.basicConfig(
           format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
       tcpserver = LogRecordSocketReceiver()
       print('About to start TCP server...')
       tcpserver.serve_until_stopped()

   if __name__ == '__main__':
       main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

   About to start TCP server...
      59 root            INFO     Jackdaws love my big sphinx of quartz.
      59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
      69 myapp.area1     INFO     How quickly daft jumping zebras vex.
      69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
      69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method and implementing your
alternative there, as well as adapting the above script to use your alternative
serialization.
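
As one illustration of such an alternative, the sketch below swaps pickle for
JSON. ``JSONSocketHandler`` is a hypothetical subclass invented for this
example, not part of the standard library, and the receiving end would need a
matching change (``json.loads`` in place of ``pickle.loads`` in ``unPickle``):

```python
import json
import logging
import logging.handlers
import struct

class JSONSocketHandler(logging.handlers.SocketHandler):
    """Sketch: send LogRecords serialized as JSON instead of pickle."""

    def makePickle(self, record):
        # Mirror the default framing: a 4-byte big-endian length prefix,
        # followed by the serialized record attributes.
        d = dict(record.__dict__)
        d['msg'] = record.getMessage()  # bake any args into the message
        d['args'] = None                # args are no longer needed
        d['exc_info'] = None            # tracebacks are not JSON-serializable
        data = json.dumps(d, default=str).encode('utf-8')
        return struct.pack('>L', len(data)) + data
```

On the receiving side, the record attributes recovered with ``json.loads`` can
be passed to :func:`logging.makeLogRecord` just as the pickled dict is in the
example above.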


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

.. currentmodule:: logging

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client's username, or IP address). Although you could
use the *extra* parameter to achieve this, it's not always convenient to pass
the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is not a problem
in practice, when the number of :class:`Logger` instances is dependent on the
level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

   def debug(self, msg, *args, **kwargs):
       """
       Delegate a debug call to the underlying logger, after adding
       contextual information from this adapter instance.
       """
       msg, kwargs = self.process(msg, kwargs)
       self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

   class CustomAdapter(logging.LoggerAdapter):
       """
       This example adapter expects the passed in dict-like object to have a
       'connid' key, whose value in brackets is prepended to the log message.
       """
       def process(self, msg, kwargs):
           return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

   logger = logging.getLogger(__name__)
   adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).
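
Here's a sketch of what such a class could look like. The ``DynamicContext``
name and its ``pid`` key are invented for this example; each lookup computes a
fresh value at the time of the logging call rather than using a snapshot taken
when the adapter was created:

```python
import logging
import os

class DynamicContext:
    """Dict-like object: each key lookup computes a fresh value."""

    def __getitem__(self, name):
        if name == 'pid':
            return os.getpid()   # evaluated at logging time
        raise KeyError(name)

    def __iter__(self):
        # advertise the keys that will be merged into the LogRecord
        return iter(['pid'])

logging.basicConfig(format='pid=%(pid)s %(message)s')
logger = logging.getLogger(__name__)
adapter = logging.LoggerAdapter(logger, DynamicContext())
adapter.warning('something happened')
# prints something like: pid=12345 something happened
```

Because the merge in ``Logger.makeRecord`` only iterates over the object's keys
and indexes into it, any object supporting those two operations works here.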


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add information from the request - say, the remote IP address and remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

   import logging
   from random import choice

   class ContextFilter(logging.Filter):
       """
       This is a filter which injects contextual information into the log.

       Rather than use actual contextual information, we just use random
       data in this demo.
       """

       USERS = ['jim', 'fred', 'sheila']
       IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

       def filter(self, record):
           record.ip = choice(ContextFilter.IPS)
           record.user = choice(ContextFilter.USERS)
           return True

   if __name__ == '__main__':
       levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
       logging.basicConfig(level=logging.DEBUG,
                           format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
       a1 = logging.getLogger('a.b.c')
       a2 = logging.getLogger('d.e.f')

       f = ContextFilter()
       a1.addFilter(f)
       a2.addFilter(f)
       a1.debug('A debug message')
       a1.info('An info message with %s', 'some parameters')
       for x in range(10):
           lvl = choice(levels)
           lvlname = logging.getLevelName(lvl)
           a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

   2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
   2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to the
file from your processes. The existing :class:`FileHandler` and subclasses do
not make use of :mod:`multiprocessing` at present, though they may do so in the
future. Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
https://bugs.python.org/issue3770).
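
Subject to that caveat, such a handler might look like the following sketch.
``LockedFileHandler`` is not a standard library class - it's an illustration.
Every process must be given the *same* lock object (for example, one created
before the worker processes are started), and only individual writes are
serialized; features like rollover would still be unsafe across processes:

```python
import logging
import multiprocessing

class LockedFileHandler(logging.FileHandler):
    """Sketch: a FileHandler whose emit() is guarded by a shared
    multiprocessing.Lock, so several processes can share one file."""

    def __init__(self, filename, lock, mode='a'):
        # explicit base-class call rather than super(), for 2.x compatibility
        logging.FileHandler.__init__(self, filename, mode)
        self._mp_lock = lock

    def emit(self, record):
        # only one process at a time may write and flush
        with self._mp_lock:
            logging.FileHandler.emit(self, record)
            self.flush()

# The lock must be created before the worker processes are started, so that
# each process ends up with a reference to the same underlying lock.
log_lock = multiprocessing.Lock()
```

Each worker would then construct its handler with the inherited ``log_lock``
(or receive the lock as an argument to its target function).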


Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

   import glob
   import logging
   import logging.handlers

   LOG_FILENAME = 'logging_rotatingfile_example.out'

   # Set up a specific logger with our desired output level
   my_logger = logging.getLogger('MyLogger')
   my_logger.setLevel(logging.DEBUG)

   # Add the log message handler to the logger
   handler = logging.handlers.RotatingFileHandler(
                 LOG_FILENAME, maxBytes=20, backupCount=5)

   my_logger.addHandler(handler)

   # Log some messages
   for i in range(20):
       my_logger.debug('i = %d', i)

   # See what files are created
   logfiles = glob.glob('%s*' % LOG_FILENAME)

   for filename in logfiles:
       print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

   logging_rotatingfile_example.out
   logging_rotatingfile_example.out.1
   logging_rotatingfile_example.out.2
   logging_rotatingfile_example.out.3
   logging_rotatingfile_example.out.4
   logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the ``.6`` file is erased.

Obviously this example sets the log size much too small, as an extreme
example; you would want to set *maxBytes* to an appropriate value.

Vinay Sajipda75dd22011-11-23 14:27:11 +0000705An example dictionary-based configuration
706-----------------------------------------
707
708Below is an example of a logging configuration dictionary - it's taken from
Serhiy Storchakafdcb5352016-04-11 12:18:56 +0300709the `documentation on the Django project <https://docs.djangoproject.com/en/1.9/topics/logging/#configuring-logging>`_.
Vinay Sajip10b51302013-08-17 00:38:48 +0100710This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::
Vinay Sajipda75dd22011-11-23 14:27:11 +0000711
712 LOGGING = {
713 'version': 1,
714 'disable_existing_loggers': True,
715 'formatters': {
716 'verbose': {
717 'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
718 },
719 'simple': {
720 'format': '%(levelname)s %(message)s'
721 },
722 },
723 'filters': {
724 'special': {
725 '()': 'project.logging.SpecialFilter',
726 'foo': 'bar',
727 }
728 },
729 'handlers': {
730 'null': {
731 'level':'DEBUG',
732 'class':'django.utils.log.NullHandler',
733 },
734 'console':{
735 'level':'DEBUG',
736 'class':'logging.StreamHandler',
737 'formatter': 'simple'
738 },
739 'mail_admins': {
740 'level': 'ERROR',
741 'class': 'django.utils.log.AdminEmailHandler',
742 'filters': ['special']
743 }
744 },
745 'loggers': {
746 'django': {
747 'handlers':['null'],
748 'propagate': True,
749 'level':'INFO',
750 },
751 'django.request': {
752 'handlers': ['mail_admins'],
753 'level': 'ERROR',
754 'propagate': False,
755 },
756 'myproject.custom': {
757 'handlers': ['console', 'mail_admins'],
758 'level': 'INFO',
759 'filters': ['special']
760 }
761 }
762 }
763
764For more information about this configuration, you can see the `relevant
Serhiy Storchakafdcb5352016-04-11 12:18:56 +0300765section <https://docs.djangoproject.com/en/1.9/topics/logging/#configuring-logging>`_
Vinay Sajipda75dd22011-11-23 14:27:11 +0000766of the Django documentation.
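The Django-specific classes in the dictionary above (such as ``django.utils.log.AdminEmailHandler``) won't import outside a Django project, but the mechanics of applying such a dictionary are the same everywhere. Here is a minimal, self-contained sketch; the logger name ``myproject`` and the contents of the dictionary are illustrative choices, not part of the Django example:

```python
import logging
import logging.config

# A stripped-down analogue of the configuration above: one formatter,
# one console handler, and one named logger.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
            'level': 'DEBUG',
        },
    },
    'loggers': {
        'myproject': {'handlers': ['console'], 'level': 'INFO'},
    },
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger('myproject')
logger.info('configured via dictConfig')
```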

Inserting a BOM into messages sent to a SysLogHandler
-----------------------------------------------------

`RFC 5424 <https://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <https://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 2.6 and 2.7, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 2.7.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      u'ASCII section\ufeffUnicode section'

   The Unicode code point ``u'\ufeff'``, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

If the formatted message is Unicode, it *will* be encoded using UTF-8 encoding
by ``SysLogHandler``. If you follow the above rules, you should be able to
produce RFC 5424-compliant messages. If you don't, logging may not complain,
but your messages will not be RFC 5424-compliant, and your syslog daemon may
complain.
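To see the three rules in action without needing a syslog daemon, the following sketch (Python 3 syntax) formats a record with such a format string and inspects the UTF-8 encoding of the result; the logger name and message are illustrative:

```python
import logging

# Format string per the rules above: an ASCII-only placeholder before
# the BOM code point, a Unicode-capable placeholder after it.
fmt = logging.Formatter('%(name)s: \ufeff%(message)s')
record = logging.LogRecord('myapp', logging.INFO, __file__, 0,
                           'caf\u00e9 \u2615', None, None)
encoded = fmt.format(record).encode('utf-8')

# U+FEFF encodes to the UTF-8 BOM byte sequence EF BB BF, splitting the
# message into a pure-ASCII prefix and a UTF-8 remainder.
bom = b'\xef\xbb\xbf'
ascii_part, _, utf8_part = encoded.partition(bom)
print(ascii_part.isascii(), utf8_part.decode('utf-8'))
```

The same formatter, attached to a real ``SysLogHandler``, would produce bytes with exactly this structure on the wire.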


Implementing structured logging
-------------------------------

Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
messages in a structured format which *is* capable of being parsed by a program
(without needing complex regular expressions to parse the log message). This is
straightforward to achieve using the logging package. There are a number of
ways in which this could be achieved, but the following is a simple approach
which uses JSON to serialise the event in a machine-parseable manner::

    import json
    import logging

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional, to improve readability

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints::

    message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of
Python used.
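If a stable ordering matters (for example, to make log lines easy to compare across runs), one option is to pass ``sort_keys=True`` to :func:`json.dumps` in the ``__str__`` method above. This is a small variation on the recipe, not part of the original example:

```python
import json

data = {'foo': 'bar', 'num': 123, 'fnum': 123.456}
# sort_keys=True yields the same key order on every Python version.
print(json.dumps(data, sort_keys=True))
# → {"fnum": 123.456, "foo": "bar", "num": 123}
```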

If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super(Encoder, self).default(o)

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage   # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints::

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.


.. _custom-handlers:

.. currentmodule:: logging.config

Customizing handlers with :func:`dictConfig`
--------------------------------------------

There are times when you want to customize logging handlers in particular ways,
and if you use :func:`dictConfig` you may be able to do this without
subclassing. As an example, consider that you may want to set the ownership of a
log file. On POSIX, this is easily done using :func:`os.chown`, but the file
handlers in the stdlib don't offer built-in support. You can customize handler
creation using a plain function such as::

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            import os, pwd, grp
            # convert user and group names to uid and gid
            uid = pwd.getpwnam(owner[0]).pw_uid
            gid = grp.getgrnam(owner[1]).gr_gid
            owner = (uid, gid)
            if not os.path.exists(filename):
                open(filename, 'a').close()
            os.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to :func:`dictConfig`,
that a logging handler be created by calling this function::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the ``pulse`` user and group,
just for the purposes of illustration. Putting it together into a working
script, ``chowntest.py``::

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as ``root``::

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
makes an appearance. This approach should work with any Python version that
supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
versions, you would need to implement the actual ownership change using e.g.
:func:`os.chown`.

In practice, the handler-creating function may be in a utility module somewhere
in your project. Instead of the line in the configuration::

    '()': owned_file_handler,

you could use e.g.::

    '()': 'ext://project.util.owned_file_handler',

where ``project.util`` can be replaced with the actual name of the package
where the function resides. In the above working script, using
``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
is resolved by :func:`dictConfig` from the ``ext://`` specification.

This example hopefully also points the way to how you could implement other
types of file change - e.g. setting specific POSIX permission bits - in the
same way, using :func:`os.chmod`.
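For instance, a permission-setting variant might look like the following sketch; the function name ``secure_file_handler`` and the default mode ``0o600`` are illustrative assumptions:

```python
import logging
import os

def secure_file_handler(filename, mode='a', encoding=None, permissions=0o600):
    # Create the file if it doesn't exist yet, restrict its permission
    # bits, then return an ordinary FileHandler pointing at it.
    if not os.path.exists(filename):
        open(filename, 'a').close()
    os.chmod(filename, permissions)
    return logging.FileHandler(filename, mode, encoding)
```

This can be referenced from a configuration dictionary with ``'()': secure_file_handler`` in exactly the same way as ``owned_file_handler`` above, with ``permissions`` passed as a keyword argument.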

Of course, the approach could also be extended to types of handler other than a
:class:`~logging.FileHandler` - for example, one of the rotating file handlers,
or a different type of handler altogether.


.. _filters-dictconfig:

Configuring filters with :func:`dictConfig`
-------------------------------------------

You *can* configure filters using :func:`~logging.config.dictConfig`, though it
might not be obvious at first glance how to do it (hence this recipe). Since
:class:`~logging.Filter` is the only filter class included in the standard
library, and it is unlikely to cater to many requirements (it's only there as a
base class), you will typically need to define your own :class:`~logging.Filter`
subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
specify the ``()`` key in the configuration dictionary for the filter,
specifying a callable which will be used to create the filter (a class is the
most obvious, but you can provide any callable which returns a
:class:`~logging.Filter` instance). Here is a complete example::

    import logging
    import logging.config
    import sys

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            if self.param is None:
                allow = True
            else:
                allow = self.param not in record.msg
            if allow:
                record.msg = 'changed: ' + record.msg
            return allow

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {
                '()': MyFilter,
                'param': 'noshow',
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'filters': ['myfilter']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.debug('hello')
        logging.debug('hello - noshow')

This example shows how you can pass configuration data to the callable which
constructs the instance, in the form of keyword parameters. When run, the above
script will print::

    changed: hello

which shows that the filter is working as configured.

A couple of extra points to note:

* If you can't refer to the callable directly in the configuration (e.g. if it
  lives in a different module, and you can't import it directly where the
  configuration dictionary is), you can use the form ``ext://...`` as described
  in :ref:`logging-config-dict-externalobj`. For example, you could have used
  the text ``'ext://__main__.MyFilter'`` instead of ``MyFilter`` in the above
  example.

* As well as for filters, this technique can also be used to configure custom
  handlers and formatters. See :ref:`logging-config-dict-userdef` for more
  information on how logging supports using user-defined objects in its
  configuration, and see the other cookbook recipe :ref:`custom-handlers` above.


.. _custom-format-exception:

Customized exception formatting
-------------------------------

There might be times when you want to do customized exception formatting - for
argument's sake, let's say you want exactly one line per logged event, even
when exception information is present. You can do this with a custom formatter
class, as shown in the following example::

    import logging

    class OneLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            """
            Format an exception so that it prints on a single line.
            """
            result = super(OneLineExceptionFormatter, self).formatException(exc_info)
            return repr(result)  # or format into one line however you want to

        def format(self, record):
            s = super(OneLineExceptionFormatter, self).format(record)
            if record.exc_text:
                s = s.replace('\n', '') + '|'
            return s

    def configure_logging():
        fh = logging.FileHandler('output.txt', 'w')
        f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
                                      '%d/%m/%Y %H:%M:%S')
        fh.setFormatter(f)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(fh)

    def main():
        configure_logging()
        logging.info('Sample message')
        try:
            x = 1 / 0
        except ZeroDivisionError as e:
            logging.exception('ZeroDivisionError: %s', e)

    if __name__ == '__main__':
        main()

When run, this produces a file with exactly two lines::

    28/01/2015 07:21:23|INFO|Sample message|
    28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n  File "logtest7.py", line 30, in main\n    x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|

While the above treatment is simplistic, it points the way to how exception
information can be formatted to your liking. The :mod:`traceback` module may be
helpful for more specialized needs.

.. _spoken-messages:

Speaking logging messages
-------------------------

There might be situations when it is desirable to have logging messages rendered
in an audible rather than a visible format. This is easy to do if you have
text-to-speech (TTS) functionality available in your system, even if it doesn't
have a Python binding. Most TTS systems have a command line program you can run,
and this can be invoked from a handler using :mod:`subprocess`. It's assumed
here that TTS command line programs won't expect to interact with users or take
a long time to complete, that the frequency of logged messages will not be so
high as to swamp the user with messages, and that it's acceptable to have the
messages spoken one at a time rather than concurrently. The example
implementation below waits for one message to be spoken before the next is
processed, and this might cause other handlers to be kept waiting. Here is a
short example showing the approach, which assumes that the ``espeak`` TTS
package is available::

    import logging
    import subprocess
    import sys

    class TTSHandler(logging.Handler):
        def emit(self, record):
            msg = self.format(record)
            # Speak slowly in a female English voice
            cmd = ['espeak', '-s150', '-ven+f3', msg]
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            # wait for the program to finish
            p.communicate()

    def configure_logging():
        h = TTSHandler()
        root = logging.getLogger()
        root.addHandler(h)
        # the default formatter just returns the message
        root.setLevel(logging.DEBUG)

    def main():
        logging.info('Hello')
        logging.debug('Goodbye')

    if __name__ == '__main__':
        configure_logging()
        sys.exit(main())

When run, this script should say "Hello" and then "Goodbye" in a female voice.

The above approach can, of course, be adapted to other TTS systems and even
other systems altogether which can process messages via external programs run
from a command line.

.. _buffered-logging:

Buffering logging messages and outputting them conditionally
------------------------------------------------------------

There might be situations where you want to log messages in a temporary area
and only output them if a certain condition occurs. For example, you may want to
start logging debug events in a function, and if the function completes without
errors, you don't want to clutter the log with the collected debug information,
but if there is an error, you want all the debug information to be output as well
as the error.

Here is an example which shows how you could do this using a decorator for your
functions where you want logging to behave this way. It makes use of the
:class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
until some condition occurs, at which point the buffered events are ``flushed``
- passed to another handler (the ``target`` handler) for processing. By default,
the ``MemoryHandler`` is flushed when its buffer gets filled up or an event whose
level is greater than or equal to a specified threshold is seen. You can use this
recipe with a more specialised subclass of ``MemoryHandler`` if you want custom
flushing behavior.

The example script has a simple function, ``foo``, which just cycles through
all the logging levels, writing to ``sys.stderr`` to say what level it's about
to log at, and then actually logging a message at that level. You can pass a
parameter to ``foo`` which, if true, will log at ERROR and CRITICAL levels -
otherwise, it only logs at DEBUG, INFO and WARNING levels.

The script just arranges to decorate ``foo`` with a decorator which will do the
conditional logging that's required. The decorator takes a logger as a parameter
and attaches a memory handler for the duration of the call to the decorated
function. The decorator can be additionally parameterised using a target handler,
a level at which flushing should occur, and a capacity for the buffer. These
default to a :class:`~logging.StreamHandler` which writes to ``sys.stderr``,
``logging.ERROR`` and ``100`` respectively.

Here's the script::

    import logging
    from logging.handlers import MemoryHandler
    import sys

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())

    def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
        if target_handler is None:
            target_handler = logging.StreamHandler()
        if flush_level is None:
            flush_level = logging.ERROR
        if capacity is None:
            capacity = 100
        handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

        def decorator(fn):
            def wrapper(*args, **kwargs):
                logger.addHandler(handler)
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logger.exception('call failed')
                    raise
                finally:
                    super(MemoryHandler, handler).flush()
                    logger.removeHandler(handler)
            return wrapper

        return decorator

    def write_line(s):
        sys.stderr.write('%s\n' % s)

    def foo(fail=False):
        write_line('about to log at DEBUG ...')
        logger.debug('Actually logged at DEBUG')
        write_line('about to log at INFO ...')
        logger.info('Actually logged at INFO')
        write_line('about to log at WARNING ...')
        logger.warning('Actually logged at WARNING')
        if fail:
            write_line('about to log at ERROR ...')
            logger.error('Actually logged at ERROR')
            write_line('about to log at CRITICAL ...')
            logger.critical('Actually logged at CRITICAL')
        return fail

    decorated_foo = log_if_errors(logger)(foo)

    if __name__ == '__main__':
        logger.setLevel(logging.DEBUG)
        write_line('Calling undecorated foo with False')
        assert not foo(False)
        write_line('Calling undecorated foo with True')
        assert foo(True)
        write_line('Calling decorated foo with False')
        assert not decorated_foo(False)
        write_line('Calling decorated foo with True')
        assert decorated_foo(True)

When this script is run, the following output should be observed::

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose
severity is ERROR or greater, but in that case, any previous events at lower
severities are also logged.

You can of course use the conventional means of decoration::

    @log_if_errors(logger)
    def foo(fail=False):
        ...


.. _utc-formatting:

Formatting times using UTC (GMT) via configuration
--------------------------------------------------

Sometimes you want to format times using UTC, which can be done using a class
such as ``UTCFormatter``, shown below::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and you can then use the ``UTCFormatter`` in your code instead of
:class:`~logging.Formatter`. If you want to do that via configuration, you can
use the :func:`~logging.config.dictConfig` API with an approach illustrated by
the following complete example::

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like::

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each
handler.


.. _context-manager:

Using a context manager for selective logging
---------------------------------------------

There are times when it would be useful to temporarily change the logging
configuration and revert it back after doing something. For this, a context
manager is the most obvious way of saving and restoring the logging context.
Here is a simple example of such a context manager, which allows you to
optionally change the logging level and add a logging handler purely in the
scope of the context manager::

    import logging
    import sys

    class LoggingContext(object):
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => don't swallow exceptions

If you specify a level value, the logger's level is set to that value in the
scope of the with block covered by the context manager. If you specify a
handler, it is added to the logger on entry to the block and removed on exit
from the block. You can also ask the manager to close the handler for you on
block exit - you could do this if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the
above::

    if __name__ == '__main__':
        logger = logging.getLogger('foo')
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        logger.info('1. This should appear just once on stderr.')
        logger.debug('2. This should not appear.')
        with LoggingContext(logger, level=logging.DEBUG):
            logger.debug('3. This should appear once on stderr.')
        logger.debug('4. This should not appear.')
        h = logging.StreamHandler(sys.stdout)
        with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
            logger.debug('5. This should appear twice - once on stderr and once on stdout.')
        logger.info('6. This should appear just once on stderr.')
        logger.debug('7. This should not appear.')

We initially set the logger's level to ``INFO``, so message #1 appears and
message #2 doesn't. We then change the level to ``DEBUG`` temporarily in the
following ``with`` block, and so message #3 appears. After the block exits, the
logger's level is restored to ``INFO`` and so message #4 doesn't appear. In the
next ``with`` block, we set the level to ``DEBUG`` again but also add a handler
writing to ``sys.stdout``. Thus, message #5 appears twice on the console (once
via ``stderr`` and once via ``stdout``). After the ``with`` statement's
completion, the status is as it was before so message #6 appears (like message
#1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows::

    $ python logctx.py
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

If we run it again, but pipe ``stderr`` to ``/dev/null``, we see the following,
which is the only message written to ``stdout``::

    $ python logctx.py 2>/dev/null
    5. This should appear twice - once on stderr and once on stdout.

Once again, but piping ``stdout`` to ``/dev/null``, we get::

    $ python logctx.py >/dev/null
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

In this case, the message #5 printed to ``stdout`` doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to attach
logging filters temporarily. Note that the above code works in Python 2 as well
as Python 3.