.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent. Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Logging from multiple threads
-----------------------------

Logging from multiple threads requires no special effort. The following example
shows logging from the main (initial) thread and another thread::

    import logging
    import threading
    import time

    def worker(arg):
        while not arg['stop']:
            logging.debug('Hi from myfunc')
            time.sleep(0.5)

    def main():
        logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
        info = {'stop': False}
        thread = threading.Thread(target=worker, args=(info,))
        thread.start()
        while True:
            try:
                logging.debug('Hello from main')
                time.sleep(0.75)
            except KeyboardInterrupt:
                info['stop'] = True
                break
        thread.join()

    if __name__ == '__main__':
        main()

When run, the script should print something like the following::

         0 Thread-1 Hi from myfunc
         3 MainThread Hello from main
       505 Thread-1 Hi from myfunc
       755 MainThread Hello from main
      1007 Thread-1 Hi from myfunc
      1507 MainThread Hello from main
      1508 Thread-1 Hi from myfunc
      2010 Thread-1 Hi from myfunc
      2258 MainThread Hello from main
      2512 Thread-1 Hi from myfunc
      3009 MainThread Hello from main
      3013 Thread-1 Hi from myfunc
      3515 Thread-1 Hi from myfunc
      3761 MainThread Hello from main
      4017 Thread-1 Hi from myfunc
      4513 MainThread Hello from main
      4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach
works for more threads than shown here, of course.

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the ``logger.debug``
calls can remain intact in the source code and remain dormant until you
need them again. At that time, the only change that needs to happen is to
modify the severity level of the logger and/or handler to debug.
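
To illustrate that last point, here is a minimal sketch (the logger name and
format are illustrative, not taken from the example above): the debug calls stay
in the source, and flipping the level from ``WARNING`` to ``DEBUG`` re-enables
them without touching any call sites::

    import logging

    logger = logging.getLogger('toggle_example')
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
    logger.addHandler(handler)

    logger.setLevel(logging.WARNING)  # production setting: debug calls are dormant
    logger.debug('invisible at WARNING level')

    logger.setLevel(logging.DEBUG)    # one-line change to re-enable diagnostics
    logger.debug('visible again at DEBUG level')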

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/temp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.

Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time
    import os

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warn message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')


.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`SocketServer`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import SocketServer
    import struct


    class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = 1

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method and implementing your
alternative there, as well as adapting the above script to use your alternative
serialization.
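
As a sketch of what such an override might look like (this is not part of the
recipe above; ``JsonSocketHandler`` is an illustrative name, and the matching
receiver would need its ``unPickle`` method changed to use ``json.loads``),
here is a handler that serializes records as length-prefixed JSON instead of
pickle::

    import json
    import logging
    import logging.handlers
    import struct

    class JsonSocketHandler(logging.handlers.SocketHandler):
        """Sketch: send records as length-prefixed JSON instead of pickle."""

        def makePickle(self, record):
            # Mirror the base class contract: return the serialized record
            # preceded by its 4-byte big-endian length.
            d = dict(record.__dict__)
            d['msg'] = record.getMessage()  # resolve args on the sending side
            d['args'] = None
            d['exc_info'] = None            # tracebacks are not JSON-serializable
            s = json.dumps(d).encode('utf-8')
            return struct.pack('>L', len(s)) + s

Only the wire format changes; the framing (4-byte length, then payload) stays
the same, so the receiver loop above needs no structural changes.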


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

.. currentmodule:: logging

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client's username, or IP address). Although you could
use the *extra* parameter to achieve this, it's not always convenient to pass
the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is not a problem
in practice, when the number of :class:`Logger` instances is dependent on the
level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you had passed an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).
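
As a sketch of this idea (not part of the original text; the class and key
names are illustrative), here is a dict-like object that computes a value each
time logging asks for it, rather than fixing it when the adapter is created::

    import logging
    import threading

    class DynamicContext(object):
        """Looks like a dict to LoggerAdapter, but computes values on demand."""

        def __getitem__(self, name):
            if name == 'thread_name':
                # evaluated at each logging call, not at adapter creation
                return threading.current_thread().name
            raise KeyError(name)

        def __iter__(self):
            return iter(['thread_name'])

    logging.basicConfig(format='%(thread_name)s %(message)s')
    logger = logging.LoggerAdapter(logging.getLogger('dyn_example'),
                                   DynamicContext())
    logger.warning('Hello')  # the calling thread's name is filled in dynamically

Because the merge of 'extra' values into the :class:`LogRecord` iterates over
the object and indexes into it, only ``__getitem__`` and ``__iter__`` are
needed.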


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add, say, information from the request - say, the remote IP address and remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to the
file from your processes. The existing :class:`FileHandler` and subclasses do
not make use of :mod:`multiprocessing` at present, though they may do so in the
future. Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
https://bugs.python.org/issue3770).
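
A minimal sketch of that idea follows, assuming :class:`multiprocessing.Lock`
works on your platform. ``LockingFileHandler`` is an illustrative name, not a
stdlib class, and this is a starting point rather than a production-ready
implementation::

    import logging
    import multiprocessing

    class LockingFileHandler(logging.FileHandler):
        """Sketch: serialize emit() across processes with a shared lock.

        The lock must be created in the parent before child processes are
        started, so that they all share the same underlying semaphore.
        """

        def __init__(self, filename, lock, mode='a'):
            logging.FileHandler.__init__(self, filename, mode)
            self.mp_lock = lock

        def emit(self, record):
            # only one process at a time may append to the file
            with self.mp_lock:
                logging.FileHandler.emit(self, record)

    # Typical usage: create the lock in the parent and pass it to each process:
    #     lock = multiprocessing.Lock()
    #     handler = LockingFileHandler('app.log', lock)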


Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
        LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest file (``.5``) is erased.

Obviously this example sets the log length much too small, as an extreme
demonstration. In practice, you would set *maxBytes* to an appropriate value.

An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level': 'DEBUG',
                'class': 'django.utils.log.NullHandler',
            },
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers': ['null'],
                'propagate': True,
                'level': 'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.6/topics/logging/#configuring-logging>`_
of the Django documentation.
Vinay Sajip4e646aa2012-04-16 15:27:55 +0100766
Inserting a BOM into messages sent to a SysLogHandler
-----------------------------------------------------

`RFC 5424 <http://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <http://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 2.6 and 2.7, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 2.7.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      u'ASCII section\ufeffUnicode section'

   The Unicode code point ``u'\ufeff'``, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

If the formatted message is Unicode, it *will* be encoded using UTF-8 encoding
by ``SysLogHandler``. If you follow the above rules, you should be able to
produce RFC 5424-compliant messages. If you don't, logging may not complain,
but your messages will not be RFC 5424-compliant, and your syslog daemon may
complain.
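Putting the steps above together, a minimal sketch might look like the following. The logger name, the placeholders used in the format string and the daemon address are all illustrative assumptions - on your system the handler might need ``address='/dev/log'`` or a different host/port tuple:

```python
import logging
import logging.handlers

# ASCII section, then the BOM code point, then the Unicode section.
# Whatever expands before u'\ufeff' must always be pure ASCII.
fmt = logging.Formatter(u'%(name)s: \ufeffUnicode %(message)s')

handler = logging.handlers.SysLogHandler(address=('localhost', 514))
handler.setFormatter(fmt)
logger = logging.getLogger('bom_demo')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info(u'caf\xe9')  # non-ASCII after the BOM is encoded as UTF-8
```

Here ``%(name)s: `` is the pure-ASCII section, and everything after the BOM code point may contain arbitrary Unicode.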
Vinay Sajip4e646aa2012-04-16 15:27:55 +0100810
Vinay Sajipa0c9a4e2013-01-22 15:57:12 +0000811
812Implementing structured logging
813-------------------------------
814
815Although most logging messages are intended for reading by humans, and thus not
816readily machine-parseable, there might be cirumstances where you want to output
817messages in a structured format which *is* capable of being parsed by a program
Vinay Sajipeb019492013-01-23 09:30:34 +0000818(without needing complex regular expressions to parse the log message). This is
Vinay Sajipa0c9a4e2013-01-22 15:57:12 +0000819straightforward to achieve using the logging package. There are a number of
820ways in which this could be achieved, but the following is a simple approach
821which uses JSON to serialise the event in a machine-parseable manner::
822
823 import json
824 import logging
825
826 class StructuredMessage(object):
827 def __init__(self, message, **kwargs):
828 self.message = message
829 self.kwargs = kwargs
830
831 def __str__(self):
832 return '%s >>> %s' % (self.message, json.dumps(self.kwargs))
833
834 _ = StructuredMessage # optional, to improve readability
835
836 logging.basicConfig(level=logging.INFO, format='%(message)s')
837 logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))
838
839If the above script is run, it prints::
840
841 message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}
842
Vinay Sajipeb019492013-01-23 09:30:34 +0000843Note that the order of items might be different according to the version of
844Python used.
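If you want a deterministic ordering - for example, so that log lines can be compared in tests or diffed between runs - you can pass ``sort_keys=True`` to :func:`json.dumps` in the ``__str__`` method above. The following stripped-down sketch shows the effect:

```python
import json

# The same event data as above, serialised with a stable key order.
kwargs = {'foo': 'bar', 'bar': 'baz', 'num': 123, 'fnum': 123.456}
print('message 1 >>> %s' % json.dumps(kwargs, sort_keys=True))
```

This always prints ``message 1 >>> {"bar": "baz", "fnum": 123.456, "foo": "bar", "num": 123}``, regardless of the Python version used.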

If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super(Encoder, self).default(o)

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage   # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints::

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.


.. _custom-handlers:

.. currentmodule:: logging.config

Customizing handlers with :func:`dictConfig`
--------------------------------------------

There are times when you want to customize logging handlers in particular ways,
and if you use :func:`dictConfig` you may be able to do this without
subclassing. As an example, consider that you may want to set the ownership of a
log file. On POSIX, this is easily done using :func:`os.chown`, but the file
handlers in the stdlib don't offer built-in support. You can customize handler
creation using a plain function such as::

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            import os, pwd, grp
            # convert user and group names to uid and gid
            uid = pwd.getpwnam(owner[0]).pw_uid
            gid = grp.getgrnam(owner[1]).gr_gid
            owner = (uid, gid)
            if not os.path.exists(filename):
                open(filename, 'a').close()
            os.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to :func:`dictConfig`,
that a logging handler be created by calling this function::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the ``pulse`` user and group,
just for the purposes of illustration. Putting it together into a working
script, ``chowntest.py``::

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as ``root``::

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
makes an appearance. This approach should work with any Python version that
supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
versions, you would need to implement the actual ownership change using e.g.
:func:`os.chown`.

In practice, the handler-creating function may be in a utility module somewhere
in your project. Instead of the line in the configuration::

    '()': owned_file_handler,

you could use e.g.::

    '()': 'ext://project.util.owned_file_handler',

where ``project.util`` can be replaced with the actual name of the package
where the function resides. In the above working script, using
``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
is resolved by :func:`dictConfig` from the ``ext://`` specification.

This example hopefully also points the way to how you could implement other
types of file change - e.g. setting specific POSIX permission bits - in the
same way, using :func:`os.chmod`.

Of course, the approach could also be extended to types of handler other than a
:class:`~logging.FileHandler` - for example, one of the rotating file handlers,
or a different type of handler altogether.
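As an illustration of both ideas, here is a sketch of such a factory. The function name, the ``perms`` parameter and the default mode value are hypothetical choices for this example; it creates a :class:`~logging.handlers.RotatingFileHandler` after using :func:`os.chmod` to restrict the file's permission bits:

```python
import logging.handlers
import os

def secure_rotating_handler(filename, perms=0o640, maxBytes=0, backupCount=0):
    # Create the file if needed, then restrict its permission bits
    # before the handler opens it.
    if not os.path.exists(filename):
        open(filename, 'a').close()
    os.chmod(filename, perms)
    return logging.handlers.RotatingFileHandler(filename,
                                                maxBytes=maxBytes,
                                                backupCount=backupCount)
```

As before, you could refer to such a factory from a configuration dictionary with ``'()': secure_rotating_handler`` (or an ``ext://`` reference), passing ``perms``, ``maxBytes`` and ``backupCount`` as keyword arguments.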


.. _filters-dictconfig:

Configuring filters with :func:`dictConfig`
-------------------------------------------

You *can* configure filters using :func:`~logging.config.dictConfig`, though it
might not be obvious at first glance how to do it (hence this recipe). Since
:class:`~logging.Filter` is the only filter class included in the standard
library, and it is unlikely to cater to many requirements (it's only there as a
base class), you will typically need to define your own :class:`~logging.Filter`
subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
specify the ``()`` key in the configuration dictionary for the filter,
specifying a callable which will be used to create the filter (a class is the
most obvious, but you can provide any callable which returns a
:class:`~logging.Filter` instance). Here is a complete example::

    import logging
    import logging.config
    import sys

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            if self.param is None:
                allow = True
            else:
                allow = self.param not in record.msg
            if allow:
                record.msg = 'changed: ' + record.msg
            return allow

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {
                '()': MyFilter,
                'param': 'noshow',
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'filters': ['myfilter']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.debug('hello')
        logging.debug('hello - noshow')

This example shows how you can pass configuration data to the callable which
constructs the instance, in the form of keyword parameters. When run, the above
script will print::

    changed: hello

which shows that the filter is working as configured.

A couple of extra points to note:

* If you can't refer to the callable directly in the configuration (e.g. if it
  lives in a different module, and you can't import it directly where the
  configuration dictionary is), you can use the form ``ext://...`` as described
  in :ref:`logging-config-dict-externalobj`. For example, you could have used
  the text ``'ext://__main__.MyFilter'`` instead of ``MyFilter`` in the above
  example.

* As well as for filters, this technique can also be used to configure custom
  handlers and formatters. See :ref:`logging-config-dict-userdef` for more
  information on how logging supports using user-defined objects in its
  configuration, and see the other cookbook recipe :ref:`custom-handlers` above.


.. _custom-format-exception:

Customized exception formatting
-------------------------------

There might be times when you want to do customized exception formatting - for
argument's sake, let's say you want exactly one line per logged event, even
when exception information is present. You can do this with a custom formatter
class, as shown in the following example::

    import logging

    class OneLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            """
            Format an exception so that it prints on a single line.
            """
            result = super(OneLineExceptionFormatter, self).formatException(exc_info)
            return repr(result)  # or format into one line however you want to

        def format(self, record):
            s = super(OneLineExceptionFormatter, self).format(record)
            if record.exc_text:
                s = s.replace('\n', '') + '|'
            return s

    def configure_logging():
        fh = logging.FileHandler('output.txt', 'w')
        f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
                                      '%d/%m/%Y %H:%M:%S')
        fh.setFormatter(f)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(fh)

    def main():
        configure_logging()
        logging.info('Sample message')
        try:
            x = 1 / 0
        except ZeroDivisionError as e:
            logging.exception('ZeroDivisionError: %s', e)

    if __name__ == '__main__':
        main()

When run, this produces a file with exactly two lines::

    28/01/2015 07:21:23|INFO|Sample message|
    28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n  File "logtest7.py", line 30, in main\n    x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|

While the above treatment is simplistic, it points the way to how exception
information can be formatted to your liking. The :mod:`traceback` module may be
helpful for more specialized needs.
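For instance, here is a sketch of a variant (the ``||`` separator is an arbitrary choice for this illustration) which uses :mod:`traceback` to join the individual traceback lines with a visible separator, rather than relying on :func:`repr`:

```python
import logging
import traceback

class OneLineExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        # Flatten the traceback into a single line, separating the
        # original lines with '||' instead of using repr().
        lines = traceback.format_exception(*exc_info)
        return '||'.join(line.rstrip('\n').replace('\n', '||')
                         for line in lines)

    def format(self, record):
        s = super(OneLineExceptionFormatter, self).format(record)
        if record.exc_text:
            # Also replace the newline the base class inserts between
            # the message and the exception text.
            s = s.replace('\n', '||')
        return s
```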

.. _spoken-messages:

Speaking logging messages
-------------------------

There might be situations when it is desirable to have logging messages rendered
in an audible rather than a visible format. This is easy to do if you have
text-to-speech (TTS) functionality available in your system, even if it doesn't
have a Python binding. Most TTS systems have a command line program you can run,
and this can be invoked from a handler using :mod:`subprocess`. It's assumed
here that TTS command line programs won't expect to interact with users or take
a long time to complete, that the frequency of logged messages will not be so
high as to swamp the user with messages, and that it's acceptable to have the
messages spoken one at a time rather than concurrently. The example
implementation below waits for one message to be spoken before the next is
processed, and this might cause other handlers to be kept waiting. Here is a
short example showing the approach, which assumes that the ``espeak`` TTS
package is available::

    import logging
    import subprocess
    import sys

    class TTSHandler(logging.Handler):
        def emit(self, record):
            msg = self.format(record)
            # Speak slowly in a female English voice
            cmd = ['espeak', '-s150', '-ven+f3', msg]
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            # wait for the program to finish
            p.communicate()

    def configure_logging():
        h = TTSHandler()
        root = logging.getLogger()
        root.addHandler(h)
        # the default formatter just returns the message
        root.setLevel(logging.DEBUG)

    def main():
        logging.info('Hello')
        logging.debug('Goodbye')

    if __name__ == '__main__':
        configure_logging()
        sys.exit(main())

When run, this script should say "Hello" and then "Goodbye" in a female voice.

The above approach can, of course, be adapted to other TTS systems and even
other systems altogether which can process messages via external programs run
from a command line.

.. _buffered-logging:

Buffering logging messages and outputting them conditionally
------------------------------------------------------------

There might be situations where you want to log messages in a temporary area
and only output them if a certain condition occurs. For example, you may want to
start logging debug events in a function, and if the function completes without
errors, you don't want to clutter the log with the collected debug information,
but if there is an error, you want all the debug information to be output as well
as the error.

Here is an example which shows how you could do this using a decorator for your
functions where you want logging to behave this way. It makes use of the
:class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
until some condition occurs, at which point the buffered events are ``flushed``
- passed to another handler (the ``target`` handler) for processing. By default,
the ``MemoryHandler`` is flushed when its buffer gets filled up or an event whose
level is greater than or equal to a specified threshold is seen. You can use this
recipe with a more specialised subclass of ``MemoryHandler`` if you want custom
flushing behavior.

The example script has a simple function, ``foo``, which just cycles through
all the logging levels, writing to ``sys.stderr`` to say what level it's about
to log at, and then actually logging a message at that level. You can pass a
parameter to ``foo`` which, if true, will log at ERROR and CRITICAL levels -
otherwise, it only logs at DEBUG, INFO and WARNING levels.

The script just arranges to decorate ``foo`` with a decorator which will do the
conditional logging that's required. The decorator takes a logger as a parameter
and attaches a memory handler for the duration of the call to the decorated
function. The decorator can be additionally parameterised using a target handler,
a level at which flushing should occur, and a capacity for the buffer. These
default to a :class:`~logging.StreamHandler` which writes to ``sys.stderr``,
``logging.ERROR`` and ``100`` respectively.

Here's the script::

    import logging
    from logging.handlers import MemoryHandler
    import sys

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())

    def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
        if target_handler is None:
            target_handler = logging.StreamHandler()
        if flush_level is None:
            flush_level = logging.ERROR
        if capacity is None:
            capacity = 100
        handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

        def decorator(fn):
            def wrapper(*args, **kwargs):
                logger.addHandler(handler)
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logger.exception('call failed')
                    raise
                finally:
                    super(MemoryHandler, handler).flush()
                    logger.removeHandler(handler)
            return wrapper

        return decorator

    def write_line(s):
        sys.stderr.write('%s\n' % s)

    def foo(fail=False):
        write_line('about to log at DEBUG ...')
        logger.debug('Actually logged at DEBUG')
        write_line('about to log at INFO ...')
        logger.info('Actually logged at INFO')
        write_line('about to log at WARNING ...')
        logger.warning('Actually logged at WARNING')
        if fail:
            write_line('about to log at ERROR ...')
            logger.error('Actually logged at ERROR')
            write_line('about to log at CRITICAL ...')
            logger.critical('Actually logged at CRITICAL')
        return fail

    decorated_foo = log_if_errors(logger)(foo)

    if __name__ == '__main__':
        logger.setLevel(logging.DEBUG)
        write_line('Calling undecorated foo with False')
        assert not foo(False)
        write_line('Calling undecorated foo with True')
        assert foo(True)
        write_line('Calling decorated foo with False')
        assert not decorated_foo(False)
        write_line('Calling decorated foo with True')
        assert decorated_foo(True)

When this script is run, the following output should be observed::

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose
severity is ERROR or greater, but in that case, any previous events at lower
severities are also logged.

You can of course use the conventional means of decoration::

    @log_if_errors(logger)
    def foo(fail=False):
        ...


.. _utc-formatting:

Formatting times using UTC (GMT) via configuration
--------------------------------------------------

Sometimes you want to format times using UTC, which can be done using a class
such as ``UTCFormatter``, shown below::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and you can then use the ``UTCFormatter`` in your code instead of
:class:`~logging.Formatter`. If you want to do that via configuration, you can
use the :func:`~logging.config.dictConfig` API with an approach illustrated by
the following complete example::

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like::

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each
handler.


.. _context-manager:

Using a context manager for selective logging
---------------------------------------------

There are times when it would be useful to temporarily change the logging
configuration and revert it back after doing something. For this, a context
manager is the most obvious way of saving and restoring the logging context.
Here is a simple example of such a context manager, which allows you to
optionally change the logging level and add a logging handler purely in the
scope of the context manager::

    import logging
    import sys

    class LoggingContext(object):
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => don't swallow exceptions

If you specify a level value, the logger's level is set to that value in the
scope of the with block covered by the context manager. If you specify a
handler, it is added to the logger on entry to the block and removed on exit
from the block. You can also ask the manager to close the handler for you on
block exit - you could do this if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the
above::

    if __name__ == '__main__':
        logger = logging.getLogger('foo')
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        logger.info('1. This should appear just once on stderr.')
        logger.debug('2. This should not appear.')
        with LoggingContext(logger, level=logging.DEBUG):
            logger.debug('3. This should appear once on stderr.')
        logger.debug('4. This should not appear.')
        h = logging.StreamHandler(sys.stdout)
        with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
            logger.debug('5. This should appear twice - once on stderr and once on stdout.')
        logger.info('6. This should appear just once on stderr.')
        logger.debug('7. This should not appear.')

We initially set the logger's level to ``INFO``, so message #1 appears and
message #2 doesn't. We then change the level to ``DEBUG`` temporarily in the
following ``with`` block, and so message #3 appears. After the block exits, the
logger's level is restored to ``INFO`` and so message #4 doesn't appear. In the
next ``with`` block, we set the level to ``DEBUG`` again but also add a handler
writing to ``sys.stdout``. Thus, message #5 appears twice on the console (once
via ``stderr`` and once via ``stdout``). After the ``with`` statement's
completion, the status is as it was before so message #6 appears (like message
#1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows::

    $ python logctx.py
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

If we run it again, but pipe ``stderr`` to ``/dev/null``, we see the following,
which is the only message written to ``stdout``::

    $ python logctx.py 2>/dev/null
    5. This should appear twice - once on stderr and once on stdout.

Once again, but piping ``stdout`` to ``/dev/null``, we get::

    $ python logctx.py >/dev/null
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

In this case, the message #5 printed to ``stdout`` doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to attach
logging filters temporarily. Note that the above code works in Python 2 as well
as Python 3.
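For example, here is a sketch of a companion context manager (the class name is a hypothetical choice for this illustration) which attaches a filter to a logger for the duration of a block, following the same pattern as ``LoggingContext``:

```python
import logging

class FilterContext(object):
    def __init__(self, logger, filt):
        self.logger = logger
        self.filt = filt

    def __enter__(self):
        # attach the filter on entry to the block
        self.logger.addFilter(self.filt)

    def __exit__(self, et, ev, tb):
        # detach it again on exit
        self.logger.removeFilter(self.filt)
        # implicit return of None => don't swallow exceptions
```

With this in place, ``with FilterContext(logger, myfilter): ...`` restricts which records ``logger`` processes only while the block is active, and the logger's filter list is restored afterwards even if the block raises an exception.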