.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules, as long as the calls occur in the same Python interpreter
process. Moreover, application code can define and configure a parent logger in
one module and create (but not configure) a child logger in a separate module;
all logger calls to the child will pass up to the parent. Here is a main
module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Logging from multiple threads
-----------------------------

Logging from multiple threads requires no special effort. The following example
shows logging from the main (initial) thread and another thread::

    import logging
    import threading
    import time

    def worker(arg):
        while not arg['stop']:
            logging.debug('Hi from myfunc')
            time.sleep(0.5)

    def main():
        logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
        info = {'stop': False}
        thread = threading.Thread(target=worker, args=(info,))
        thread.start()
        while True:
            try:
                logging.debug('Hello from main')
                time.sleep(0.75)
            except KeyboardInterrupt:
                info['stop'] = True
                break
        thread.join()

    if __name__ == '__main__':
        main()

When run, the script should print something like the following::

         0 Thread-1 Hi from myfunc
         3 MainThread Hello from main
       505 Thread-1 Hi from myfunc
       755 MainThread Hello from main
      1007 Thread-1 Hi from myfunc
      1507 MainThread Hello from main
      1508 Thread-1 Hi from myfunc
      2010 Thread-1 Hi from myfunc
      2258 MainThread Hello from main
      2512 Thread-1 Hi from myfunc
      3009 MainThread Hello from main
      3013 Thread-1 Hi from myfunc
      3515 Thread-1 Hi from myfunc
      3761 MainThread Hello from main
      4017 Thread-1 Hi from myfunc
      4513 MainThread Hello from main
      4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach
works for more threads than shown here, of course.

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again. At that point, the only change needed is to
modify the severity level of the logger and/or handler to debug.
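
For example, re-enabling the dormant debug output is a one-line change to the
handler's threshold. A minimal sketch (the logger and handler names follow the
example above):

```python
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)   # debug/info output is dormant for now
logger.addHandler(ch)

# When you need the detail again, just lower the handler's threshold:
ch.setLevel(logging.DEBUG)
```

No application code needs to change; the dormant ``logger.debug`` calls simply
start reaching the console.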

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/temp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time
    import os

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warning message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')


Dealing with handlers that block
--------------------------------

.. currentmodule:: logging.handlers

Sometimes you have to get your logging handlers to do their work without
blocking the thread you're logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
:class:`SMTPHandler`: sending emails can take a long time, for a
number of reasons outside the developer's control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: even a :class:`SocketHandler` operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
:class:`QueueHandler` to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to a large enough capacity or initialized with no upper bound on its
size. The write to the queue will typically be accepted quickly, though you
will probably need to catch the :exc:`queue.Full` exception as a precaution
in your code. If you are a library developer with performance-critical
threads in your code, be sure to document this (together with a suggestion to
attach only ``QueueHandlers`` to your loggers) for the benefit of other
developers who will use your code.
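
As a sketch of that precaution, a :class:`QueueHandler` subclass (the class
name here is hypothetical, not part of the stdlib) can catch :exc:`queue.Full`
and drop the event rather than ever block the logging thread:

```python
import logging
import queue
from logging.handlers import QueueHandler

class DroppingQueueHandler(QueueHandler):
    """Sketch: a QueueHandler that drops events instead of raising
    (or blocking) when its bounded queue is full."""
    def enqueue(self, record):
        try:
            self.queue.put_nowait(record)
        except queue.Full:
            pass  # drop the event; never block the caller

# A bounded queue of size 1 makes the dropping easy to see:
q = queue.Queue(1)
handler = DroppingQueueHandler(q)
handler.emit(logging.LogRecord('demo', logging.INFO, 'path', 1, 'one', None, None))
handler.emit(logging.LogRecord('demo', logging.INFO, 'path', 1, 'two', None, None))
# The second record was silently dropped; only 'one' is queued.
```

Whether dropping is acceptable is an application decision; the alternative is a
larger (or unbounded) queue, as in the example below.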

The second part of the solution is :class:`QueueListener`, which has been
designed as the counterpart to :class:`QueueHandler`. A
:class:`QueueListener` is very simple: it's passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from ``QueueHandlers`` (or any other source of ``LogRecords``, for that
matter). The ``LogRecords`` are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate :class:`QueueListener` class is that you
can use the same instance to service multiple ``QueueHandlers``. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted)::

    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

which, when run, will produce::

    MainThread: Look out!

.. versionchanged:: 3.5
   Prior to Python 3.5, the :class:`QueueListener` always passed every message
   received from the queue to every handler it was initialized with. (This was
   because it was assumed that level filtering was all done on the other side,
   where the queue is filled.) From 3.5 onwards, this behaviour can be changed
   by passing a keyword argument ``respect_handler_level=True`` to the
   listener's constructor. When this is done, the listener compares the level
   of each message with the handler's level, and only passes a message to a
   handler if it's appropriate to do so.
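
A minimal sketch of ``respect_handler_level`` in action (Python 3.5+ assumed;
the record-collecting handler is purely for demonstration):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

class CollectingHandler(logging.Handler):
    """Demo-only handler that stores the records it is given."""
    def __init__(self, level=logging.NOTSET):
        super().__init__(level)
        self.records = []
    def emit(self, record):
        self.records.append(record)

que = queue.Queue(-1)
logger = logging.getLogger('respect_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(QueueHandler(que))

collector = CollectingHandler(level=logging.WARNING)  # wants WARNING and above
listener = QueueListener(que, collector, respect_handler_level=True)
listener.start()
logger.debug('not forwarded - below the handler level')
logger.warning('forwarded to the handler')
listener.stop()   # drains the queue before the internal thread exits
```

Both records reach the queue; only the warning survives the listener's
per-handler level check.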

.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`socketserver`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import socketserver
    import struct


    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = True

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method and implementing your
alternative there, as well as adapting the above script to use your alternative
serialization.
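
For instance, here is a sketch of a JSON-based alternative (the class name is
hypothetical; the receiving end would need a matching ``json.loads`` in place
of ``unPickle``, and attributes that JSON cannot represent are dropped or
stringified):

```python
import json
import logging
import logging.handlers
import struct

class JSONSocketHandler(logging.handlers.SocketHandler):
    """Sketch of a SocketHandler variant that serializes each record as
    JSON instead of pickle, keeping the same length-prefix framing."""
    def makePickle(self, record):
        d = dict(record.__dict__)
        d['msg'] = record.getMessage()   # merge args into the message now
        d['args'] = None
        d['exc_info'] = None             # tracebacks aren't JSON-serializable
        payload = json.dumps(d, default=str).encode('utf-8')
        # Same framing as the pickle version: 4-byte big-endian length prefix.
        return struct.pack('>L', len(payload)) + payload
```

The framing matches what the receiver above already expects, so only its
deserialization step needs changing.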


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client's username, or IP address). Although you could
use the *extra* parameter to achieve this, it's not always convenient to pass
the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. Although this is not a
problem in practice, when the number of :class:`Logger` instances depends on
the level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you had passed an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).
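
A minimal sketch of such an object (the class name is hypothetical; the
counter stands in for whatever dynamic lookup you actually need, such as
reading a thread-local):

```python
import logging

class DynamicContext:
    """Dict-like object whose 'connid' value is computed on each access."""
    def __init__(self):
        self.count = 0

    def __getitem__(self, name):
        if name == 'connid':
            self.count += 1              # stand-in for a dynamic lookup
            return 'conn-%d' % self.count
        raise KeyError(name)

    def __iter__(self):
        return iter(['connid'])

logger = logging.getLogger(__name__)
adapter = logging.LoggerAdapter(logger, DynamicContext())

# The default process() passes the object through under the 'extra' keyword,
# so each logging call sees a freshly computed 'connid' value.
msg, kwargs = adapter.process('a message', {})
```

With ``%(connid)s`` in your format string, each record then picks up the value
current at the moment the record is created.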


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add, say, the remote IP address and remote user's username to the
``LogRecord``, using the attribute names 'ip' and 'user' as in the
``LoggerAdapter`` example above. In that case, the same format string can be
used to get similar output to that shown above. Here's an example script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to the
file from your processes. The existing :class:`FileHandler` and subclasses do
not make use of :mod:`multiprocessing` at present, though they may do so in the
future. Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
https://bugs.python.org/issue3770).
Vinay Sajipc63619b2010-12-19 12:56:57 +0000717
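Such a handler might look like the following sketch. The class name is
hypothetical, and it assumes the handler (and therefore its lock) is created in
the parent *before* the worker processes are started, so that all processes
share the same lock; as noted above, it also relies on :mod:`multiprocessing`
locks working on your platform:

```python
import io
import logging
import multiprocessing

class LockingStreamHandler(logging.StreamHandler):
    # Hypothetical sketch: serialize emit() across processes with a
    # multiprocessing.Lock. The lock must be created before the workers
    # are forked/spawned so that the child processes inherit it.
    def __init__(self, stream=None, lock=None):
        super().__init__(stream)
        self.mp_lock = lock or multiprocessing.Lock()

    def emit(self, record):
        with self.mp_lock:  # only one process writes at a time
            super().emit(record)

# Single-process demonstration, using a StringIO as the stream
stream = io.StringIO()
logger = logging.getLogger('mp-lock-demo')
logger.addHandler(LockingStreamHandler(stream))
logger.warning('serialized write')
```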
.. currentmodule:: logging.handlers

Alternatively, you can use a ``Queue`` and a :class:`QueueHandler` to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process -- the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements::

    # You'll need these imports in your own code
    import logging
    import logging.handlers
    import multiprocessing

    # Next two import lines for this demo only
    from random import choice, random
    import time

    #
    # Because you'll want to define the logging configurations for listener and workers, the
    # listener and worker process functions take a configurer parameter which is a callable
    # for configuring logging for that process. These functions are also passed the queue,
    # which they use for communication.
    #
    # In practice, you can configure the listener however you want, but note that in this
    # simple example, the listener does not apply level or filter logic to received records.
    # In practice, you would probably want to do this logic in the worker processes, to avoid
    # sending events which would be filtered out between processes.
    #
    # The size of the rotated files is made small so you can see the results easily.
    def listener_configurer():
        root = logging.getLogger()
        h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
        f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
        h.setFormatter(f)
        root.addHandler(h)

    # This is the listener process top-level loop: wait for logging events
    # (LogRecords) on the queue and handle them, quit when you get a None for a
    # LogRecord.
    def listener_process(queue, configurer):
        configurer()
        while True:
            try:
                record = queue.get()
                if record is None:  # We send this as a sentinel to tell the listener to quit.
                    break
                logger = logging.getLogger(record.name)
                logger.handle(record)  # No level or filter logic applied - just do it!
            except Exception:
                import sys, traceback
                print('Whoops! Problem:', file=sys.stderr)
                traceback.print_exc(file=sys.stderr)

    # Arrays used for random selections in this demo

    LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL]

    LOGGERS = ['a.b.c', 'd.e.f']

    MESSAGES = [
        'Random message #1',
        'Random message #2',
        'Random message #3',
    ]

    # The worker configuration is done at the start of the worker process run.
    # Note that on Windows you can't rely on fork semantics, so each process
    # will run the logging configuration code when it starts.
    def worker_configurer(queue):
        h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
        root = logging.getLogger()
        root.addHandler(h)
        root.setLevel(logging.DEBUG)  # send all messages, for demo; no other level or filter logic applied.

    # This is the worker process top-level loop, which just logs ten events with
    # random intervening delays before terminating.
    # The print messages are just so you know it's doing something!
    def worker_process(queue, configurer):
        configurer(queue)
        name = multiprocessing.current_process().name
        print('Worker started: %s' % name)
        for i in range(10):
            time.sleep(random())
            logger = logging.getLogger(choice(LOGGERS))
            level = choice(LEVELS)
            message = choice(MESSAGES)
            logger.log(level, message)
        print('Worker finished: %s' % name)

    # Here's where the demo gets orchestrated. Create the queue, create and start
    # the listener, create ten workers and start them, wait for them to finish,
    # then send a None to the queue to tell the listener to finish.
    def main():
        queue = multiprocessing.Queue(-1)
        listener = multiprocessing.Process(target=listener_process,
                                           args=(queue, listener_configurer))
        listener.start()
        workers = []
        for i in range(10):
            worker = multiprocessing.Process(target=worker_process,
                                             args=(queue, worker_configurer))
            workers.append(worker)
            worker.start()
        for w in workers:
            w.join()
        queue.put_nowait(None)
        listener.join()

    if __name__ == '__main__':
        main()

A variant of the above script keeps the logging in the main process, in a
separate thread::

    import logging
    import logging.config
    import logging.handlers
    from multiprocessing import Process, Queue
    import random
    import threading
    import time

    def logger_thread(q):
        while True:
            record = q.get()
            if record is None:
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)


    def worker_process(q):
        qh = logging.handlers.QueueHandler(q)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(qh)
        levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                  logging.CRITICAL]
        loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                   'spam', 'spam.ham', 'spam.ham.eggs']
        for i in range(100):
            lvl = random.choice(levels)
            logger = logging.getLogger(random.choice(loggers))
            logger.log(lvl, 'Message no. %d', i)

    if __name__ == '__main__':
        q = Queue()
        d = {
            'version': 1,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO',
                },
                'file': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'foofile': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-foo.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'errors': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-errors.log',
                    'mode': 'w',
                    'level': 'ERROR',
                    'formatter': 'detailed',
                },
            },
            'loggers': {
                'foo': {
                    'handlers': ['foofile']
                }
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['console', 'file', 'errors']
            },
        }
        workers = []
        for i in range(5):
            wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
            workers.append(wp)
            wp.start()
        logging.config.dictConfig(d)
        lp = threading.Thread(target=logger_thread, args=(q,))
        lp.start()
        # At this point, the main process could do some useful work of its own
        # Once it's done that, it can wait for the workers to terminate...
        for wp in workers:
            wp.join()
        # And now tell the logging thread to finish up, too
        q.put(None)
        lp.join()

This variant shows how you can apply configuration for particular loggers
- e.g. the ``foo`` logger has a special handler which stores all events in the
``foo`` subsystem in a file ``mplog-foo.log``. This will be used by the logging
machinery in the main process (even though the logging events are generated in
the worker processes) to direct the messages to the appropriate destinations.

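The listener-thread alternative mentioned earlier can also be delegated to
:class:`QueueListener`, which starts and manages the internal thread for you.
A minimal single-process sketch, in which a ``StringIO`` stream and the logger
name ``qdemo`` stand in for a real target and your own logger names:

```python
import io
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)   # no limit on size
stream = io.StringIO()        # stand-in for a real target (file, console, ...)

# Producers see only the cheap, non-blocking QueueHandler ...
logger = logging.getLogger('qdemo')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

# ... while the QueueListener owns the real handler and its own thread.
target = logging.StreamHandler(stream)
target.setFormatter(logging.Formatter('%(name)s %(levelname)s %(message)s'))
listener = logging.handlers.QueueListener(log_queue, target)
listener.start()

logger.debug('Hello from the producer side')
listener.stop()               # drains the queue before returning
```

This keeps slow handlers (files, sockets, email) off the producing threads'
critical path, which is the same design motivation as the listener process
above.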
Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest file (``.5``) is erased.

Obviously this example sets the log length much too small as an extreme
example. You would want to set *maxBytes* to an appropriate value.

.. _format-styles:

Use of alternative formatting styles
------------------------------------

When logging was added to the Python standard library, the only way of
formatting messages with variable content was to use the %-formatting
method. Since then, Python has gained two new formatting approaches:
:class:`string.Template` (added in Python 2.4) and :meth:`str.format`
(added in Python 2.6).

Logging (as of 3.2) provides improved support for these two additional
formatting styles. The :class:`Formatter` class has been enhanced to take an
additional, optional keyword parameter named ``style``. This defaults to
``'%'``, but other possible values are ``'{'`` and ``'$'``, which correspond
to the other two formatting styles. Backwards compatibility is maintained by
default (as you would expect), but by explicitly specifying a style parameter,
you get the ability to specify format strings which work with
:meth:`str.format` or :class:`string.Template`. Here's an example console
session to show the possibilities:

.. code-block:: pycon

   >>> import logging
   >>> root = logging.getLogger()
   >>> root.setLevel(logging.DEBUG)
   >>> handler = logging.StreamHandler()
   >>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
   ...                        style='{')
   >>> handler.setFormatter(bf)
   >>> root.addHandler(handler)
   >>> logger = logging.getLogger('foo.bar')
   >>> logger.debug('This is a DEBUG message')
   2010-10-28 15:11:55,341 foo.bar DEBUG This is a DEBUG message
   >>> logger.critical('This is a CRITICAL message')
   2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message
   >>> df = logging.Formatter('$asctime $name ${levelname} $message',
   ...                        style='$')
   >>> handler.setFormatter(df)
   >>> logger.debug('This is a DEBUG message')
   2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message
   >>> logger.critical('This is a CRITICAL message')
   2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message
   >>>

Note that the formatting of logging messages for final output to logs is
completely independent of how an individual logging message is constructed.
That can still use %-formatting, as shown here::

    >>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')
    2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message
    >>>

Logging calls (``logger.debug()``, ``logger.info()`` etc.) only take
positional parameters for the actual logging message itself, with keyword
parameters used only for determining options for how to handle the actual
logging call (e.g. the ``exc_info`` keyword parameter to indicate that
traceback information should be logged, or the ``extra`` keyword parameter
to indicate additional contextual information to be added to the log). So
you cannot directly make logging calls using :meth:`str.format` or
:class:`string.Template` syntax, because internally the logging package
uses %-formatting to merge the format string and the variable arguments.
There would be no way of changing this while preserving backward
compatibility, since all logging calls which are out there in existing code
will be using %-format strings.

There is, however, a way that you can use {}- and $- formatting to construct
your individual log messages. Recall that for a message you can use an
arbitrary object as a message format string, and that the logging package will
call ``str()`` on that object to get the actual format string. Consider the
following two classes::

    class BraceMessage:
        def __init__(self, fmt, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            return self.fmt.format(*self.args, **self.kwargs)

    class DollarMessage:
        def __init__(self, fmt, **kwargs):
            self.fmt = fmt
            self.kwargs = kwargs

        def __str__(self):
            from string import Template
            return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or
$-formatting to be used to build the actual "message" part which appears in the
formatted log output in place of "%(message)s" or "{message}" or "$message".
It's a little unwieldy to use the class names whenever you want to log
something, but it's quite palatable if you use an alias such as __ (double
underscore – not to be confused with _, the single underscore used as a
synonym/alias for :func:`gettext.gettext` or its brethren).

The above classes are not included in Python, though they're easy enough to
copy and paste into your own code. They can be used as follows (assuming that
they're declared in a module called ``wherever``):

.. code-block:: pycon

   >>> from wherever import BraceMessage as __
   >>> print(__('Message with {0} {name}', 2, name='placeholders'))
   Message with 2 placeholders
   >>> class Point: pass
   ...
   >>> p = Point()
   >>> p.x = 0.5
   >>> p.y = 0.5
   >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})',
   ...          point=p))
   Message with coordinates: (0.50, 0.50)
   >>> from wherever import DollarMessage as __
   >>> print(__('Message with $num $what', num=2, what='placeholders'))
   Message with 2 placeholders
   >>>

While the above examples use ``print()`` to show how the formatting works, you
would of course use ``logger.debug()`` or similar to actually log using this
approach.

One thing to note is that you pay no significant performance penalty with this
approach: the actual formatting happens not when you make the logging call, but
when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the
parentheses go around the format string and the arguments, not just the format
string. That's because the __ notation is just syntax sugar for a constructor
call to one of the ``XXXMessage`` classes.

If you prefer, you can use a :class:`LoggerAdapter` to achieve a similar effect
to the above, as in the following example::

    import logging

    class Message(object):
        def __init__(self, fmt, args):
            self.fmt = fmt
            self.args = args

        def __str__(self):
            return self.fmt.format(*self.args)

    class StyleAdapter(logging.LoggerAdapter):
        def __init__(self, logger, extra=None):
            super(StyleAdapter, self).__init__(logger, extra or {})

        def log(self, level, msg, *args, **kwargs):
            if self.isEnabledFor(level):
                msg, kwargs = self.process(msg, kwargs)
                self.logger._log(level, Message(msg, args), (), **kwargs)

    logger = StyleAdapter(logging.getLogger(__name__))

    def main():
        logger.debug('Hello, {}', 'world!')

    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        main()

The above script should log the message ``Hello, world!`` when run with
Python 3.2 or later.

.. currentmodule:: logging

.. _custom-logrecord:

Customizing ``LogRecord``
-------------------------

Every logging event is represented by a :class:`LogRecord` instance.
When an event is logged and not filtered out by a logger's level, a
:class:`LogRecord` is created, populated with information about the event and
then passed to the handlers for that logger (and its ancestors, up to and
including the logger where further propagation up the hierarchy is disabled).
Before Python 3.2, there were only two places where this creation was done:

* :meth:`Logger.makeRecord`, which is called in the normal process of
  logging an event. This invoked :class:`LogRecord` directly to create an
  instance.
* :func:`makeLogRecord`, which is called with a dictionary containing
  attributes to be added to the LogRecord. This is typically invoked when a
  suitable dictionary has been received over the network (e.g. in pickle form
  via a :class:`~handlers.SocketHandler`, or in JSON form via an
  :class:`~handlers.HTTPHandler`).

This has usually meant that if you need to do anything special with a
:class:`LogRecord`, you've had to do one of the following.

* Create your own :class:`Logger` subclass, which overrides
  :meth:`Logger.makeRecord`, and set it using :func:`~logging.setLoggerClass`
  before any loggers that you care about are instantiated.
* Add a :class:`Filter` to a logger or handler, which does the
  necessary special manipulation you need when its
  :meth:`~Filter.filter` method is called.

The first approach would be a little unwieldy in the scenario where (say)
several different libraries wanted to do different things. Each would attempt
to set its own :class:`Logger` subclass, and the one which did this last would
win.

The second approach works reasonably well for many cases, but does not allow
you to e.g. use a specialized subclass of :class:`LogRecord`. Library
developers can set a suitable filter on their loggers, but they would have to
remember to do this every time they introduced a new logger (which they would
do simply by adding new packages or modules and doing ::

    logger = logging.getLogger(__name__)

at module level). It's probably one too many things to think about. Developers
could also add the filter to a :class:`~logging.NullHandler` attached to their
top-level logger, but this would not be invoked if an application developer
attached a handler to a lower-level library logger – so output from that
handler would not reflect the intentions of the library developer.

In Python 3.2 and later, :class:`~logging.LogRecord` creation is done through a
factory, which you can specify. The factory is just a callable you can set with
:func:`~logging.setLogRecordFactory`, and interrogate with
:func:`~logging.getLogRecordFactory`. The factory is invoked with the same
signature as the :class:`~logging.LogRecord` constructor, as :class:`LogRecord`
is the default setting for the factory.

This approach allows a custom factory to control all aspects of LogRecord
creation. For example, you could return a subclass, or just add some additional
attributes to the record once created, using a pattern similar to this::

    old_factory = logging.getLogRecordFactory()

    def record_factory(*args, **kwargs):
        record = old_factory(*args, **kwargs)
        record.custom_attribute = 0xdecafbad
        return record

    logging.setLogRecordFactory(record_factory)

This pattern allows different libraries to chain factories together, and as
long as they don't overwrite each other's attributes or unintentionally
overwrite the attributes provided as standard, there should be no surprises.
However, it should be borne in mind that each link in the chain adds run-time
overhead to all logging operations, and the technique should only be used when
the use of a :class:`Filter` does not provide the desired result.


.. _zeromq-handlers:

Subclassing QueueHandler - a ZeroMQ example
-------------------------------------------

You can use a :class:`QueueHandler` subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
socket is created separately and passed to the handler (as its 'queue')::

    import zmq   # using pyzmq, the Python binding for ZeroMQ
    import json  # for serializing records portably

    ctx = zmq.Context()
    sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
    sock.bind('tcp://*:5556')        # or wherever

    class ZeroMQSocketHandler(QueueHandler):
        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send_string(data)

    handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the
data needed by the handler to create the socket::

    class ZeroMQSocketHandler(QueueHandler):
        def __init__(self, uri, socktype=zmq.PUB, ctx=None):
            self.ctx = ctx or zmq.Context()
            socket = zmq.Socket(self.ctx, socktype)
            socket.bind(uri)
            QueueHandler.__init__(self, socket)

        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send_string(data)

        def close(self):
            self.queue.close()

Subclassing QueueListener - a ZeroMQ example
--------------------------------------------

You can also subclass :class:`QueueListener` to get messages from other kinds
of queues, for example a ZeroMQ 'subscribe' socket. Here's an example::

    class ZeroMQSocketListener(QueueListener):
        def __init__(self, uri, *handlers, **kwargs):
            self.ctx = kwargs.pop('ctx', None) or zmq.Context()
            socket = zmq.Socket(self.ctx, zmq.SUB)
            socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything
            socket.connect(uri)
            QueueListener.__init__(self, socket, *handlers, **kwargs)

        def dequeue(self):
            msg = self.queue.recv_string()
            return logging.makeLogRecord(json.loads(msg))

.. seealso::

   Module :mod:`logging`
      API reference for the logging module.

   Module :mod:`logging.config`
      Configuration API for the logging module.

   Module :mod:`logging.handlers`
      Useful handlers included with the logging module.

   :ref:`A basic logging tutorial <logging-basic-tutorial>`

   :ref:`A more advanced logging tutorial <logging-advanced-tutorial>`


An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.4/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level': 'DEBUG',
                'class': 'django.utils.log.NullHandler',
            },
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers': ['null'],
                'propagate': True,
                'level': 'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.6/topics/logging/#configuring-logging>`_
of the Django documentation.

.. _cookbook-rotator-namer:

Using a rotator and namer to customize log rotation processing
--------------------------------------------------------------

An example of how you can define a namer and rotator is given in the following
snippet, which shows zlib-based compression of the log file::

    def namer(name):
        return name + ".gz"

    def rotator(source, dest):
        with open(source, "rb") as sf:
            data = sf.read()
            compressed = zlib.compress(data, 9)
            with open(dest, "wb") as df:
                df.write(compressed)
        os.remove(source)

    rh = logging.handlers.RotatingFileHandler(...)
    rh.rotator = rotator
    rh.namer = namer

These are not "true" .gz files, as they are bare compressed data, with no
"container" such as you'd find in an actual gzip file. This snippet is just
for illustration purposes.

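If you want rotated files that standard tools such as ``gunzip`` can read, the
:mod:`gzip` module can write a proper gzip container instead; here is a sketch
along the same lines (the filename ``app.log`` and the size limits are just
for illustration):

```python
import gzip
import logging.handlers
import os

def namer(name):
    return name + ".gz"

def rotator(source, dest):
    # gzip.open writes a real gzip container, unlike bare zlib data,
    # so the rotated files can be read with gunzip/zcat.
    with open(source, "rb") as sf, gzip.open(dest, "wb") as df:
        df.write(sf.read())
    os.remove(source)

rh = logging.handlers.RotatingFileHandler('app.log', maxBytes=1000,
                                          backupCount=5)
rh.rotator = rotator
rh.namer = namer
```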
A more elaborate multiprocessing example
----------------------------------------

The following working example shows how logging can be used with multiprocessing
using configuration files. The configurations are fairly simple, but serve to
illustrate how more complex ones could be implemented in a real multiprocessing
scenario.

In the example, the main process spawns a listener process and some worker
processes. Each of the main process, the listener and the workers have three
separate configurations (the workers all share the same configuration). We can
see logging in the main process, how the workers log to a QueueHandler and how
the listener implements a QueueListener and a more complex logging
configuration, and arranges to dispatch events received via the queue to the
handlers specified in the configuration. Note that these configurations are
purely illustrative, but you should be able to adapt this example to your own
scenario.

Here's the script - the docstrings and the comments hopefully explain how it
works::

    import logging
    import logging.config
    import logging.handlers
    from multiprocessing import Process, Queue, Event, current_process
    import os
    import random
    import time

    class MyHandler:
        """
        A simple handler for logging events. It runs in the listener process and
        dispatches events to loggers based on the name in the received record,
        which then get dispatched, by the logging system, to the handlers
        configured for those loggers.
        """
        def handle(self, record):
            logger = logging.getLogger(record.name)
            # The process name is transformed just to show that it's the listener
            # doing the logging to files and console
            record.processName = '%s (for %s)' % (current_process().name, record.processName)
            logger.handle(record)

    def listener_process(q, stop_event, config):
        """
        This could be done in the main process, but is just done in a separate
        process for illustrative purposes.

        This initialises logging according to the specified configuration,
        starts the listener and waits for the main process to signal completion
        via the event. The listener is then stopped, and the process exits.
        """
        logging.config.dictConfig(config)
        listener = logging.handlers.QueueListener(q, MyHandler())
        listener.start()
        if os.name == 'posix':
            # On POSIX, the setup logger will have been configured in the
            # parent process, but should have been disabled following the
            # dictConfig call.
            # On Windows, since fork isn't used, the setup logger won't
            # exist in the child, so it would be created and the message
            # would appear - hence the "if posix" clause.
            logger = logging.getLogger('setup')
            logger.critical('Should not appear, because of disabled logger ...')
        stop_event.wait()
        listener.stop()

    def worker_process(config):
        """
        A number of these are spawned for the purpose of illustration. In
        practice, they could be a heterogeneous bunch of processes rather than
        ones which are identical to each other.

        This initialises logging according to the specified configuration,
        and logs a hundred messages with random levels to randomly selected
        loggers.

        A small sleep is added to allow other processes a chance to run. This
        is not strictly needed, but it mixes the output from the different
        processes a bit more than if it's left out.
        """
        logging.config.dictConfig(config)
        levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                  logging.CRITICAL]
        loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                   'spam', 'spam.ham', 'spam.ham.eggs']
        if os.name == 'posix':
            # On POSIX, the setup logger will have been configured in the
            # parent process, but should have been disabled following the
            # dictConfig call.
            # On Windows, since fork isn't used, the setup logger won't
            # exist in the child, so it would be created and the message
            # would appear - hence the "if posix" clause.
            logger = logging.getLogger('setup')
            logger.critical('Should not appear, because of disabled logger ...')
1500 for i in range(100):
1501 lvl = random.choice(levels)
1502 logger = logging.getLogger(random.choice(loggers))
1503 logger.log(lvl, 'Message no. %d', i)
1504 time.sleep(0.01)
1505
1506 def main():
1507 q = Queue()
1508 # The main process gets a simple configuration which prints to the console.
1509 config_initial = {
1510 'version': 1,
1511 'formatters': {
1512 'detailed': {
1513 'class': 'logging.Formatter',
1514 'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
1515 }
1516 },
1517 'handlers': {
1518 'console': {
1519 'class': 'logging.StreamHandler',
1520 'level': 'INFO',
1521 },
1522 },
1523 'root': {
1524 'level': 'DEBUG',
1525 'handlers': ['console']
1526 },
1527 }
1528 # The worker process configuration is just a QueueHandler attached to the
1529 # root logger, which allows all messages to be sent to the queue.
1530 # We disable existing loggers to disable the "setup" logger used in the
1531 # parent process. This is needed on POSIX because the logger will
1532 # be there in the child following a fork().
1533 config_worker = {
1534 'version': 1,
1535 'disable_existing_loggers': True,
1536 'handlers': {
1537 'queue': {
1538 'class': 'logging.handlers.QueueHandler',
1539 'queue': q,
1540 },
1541 },
1542 'root': {
1543 'level': 'DEBUG',
1544 'handlers': ['queue']
1545 },
1546 }
1547 # The listener process configuration shows that the full flexibility of
1548 # logging configuration is available to dispatch events to handlers however
1549 # you want.
1550 # We disable existing loggers to disable the "setup" logger used in the
1551 # parent process. This is needed on POSIX because the logger will
1552 # be there in the child following a fork().
1553 config_listener = {
1554 'version': 1,
1555 'disable_existing_loggers': True,
1556 'formatters': {
1557 'detailed': {
1558 'class': 'logging.Formatter',
1559 'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
1560 },
1561 'simple': {
1562 'class': 'logging.Formatter',
1563 'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
1564 }
1565 },
1566 'handlers': {
1567 'console': {
1568 'class': 'logging.StreamHandler',
1569 'level': 'INFO',
1570 'formatter': 'simple',
1571 },
1572 'file': {
1573 'class': 'logging.FileHandler',
1574 'filename': 'mplog.log',
1575 'mode': 'w',
1576 'formatter': 'detailed',
1577 },
1578 'foofile': {
1579 'class': 'logging.FileHandler',
1580 'filename': 'mplog-foo.log',
1581 'mode': 'w',
1582 'formatter': 'detailed',
1583 },
1584 'errors': {
1585 'class': 'logging.FileHandler',
1586 'filename': 'mplog-errors.log',
1587 'mode': 'w',
1588 'level': 'ERROR',
1589 'formatter': 'detailed',
1590 },
1591 },
1592 'loggers': {
1593 'foo': {
Serhiy Storchakaf47036c2013-12-24 11:04:36 +02001594 'handlers': ['foofile']
Vinay Sajip0292fa92012-04-08 01:49:12 +01001595 }
1596 },
1597 'root': {
1598 'level': 'DEBUG',
1599 'handlers': ['console', 'file', 'errors']
1600 },
1601 }
1602 # Log some initial events, just to show that logging in the parent works
1603 # normally.
1604 logging.config.dictConfig(config_initial)
1605 logger = logging.getLogger('setup')
1606 logger.info('About to create workers ...')
1607 workers = []
1608 for i in range(5):
1609 wp = Process(target=worker_process, name='worker %d' % (i + 1),
1610 args=(config_worker,))
1611 workers.append(wp)
1612 wp.start()
1613 logger.info('Started worker: %s', wp.name)
1614 logger.info('About to create listener ...')
1615 stop_event = Event()
1616 lp = Process(target=listener_process, name='listener',
1617 args=(q, stop_event, config_listener))
1618 lp.start()
1619 logger.info('Started listener')
1620 # We now hang around for the workers to finish their work.
1621 for wp in workers:
1622 wp.join()
1623 # Workers all done, listening can now stop.
1624 # Logging in the parent still works normally.
1625 logger.info('Telling listener to stop ...')
1626 stop_event.set()
1627 lp.join()
1628 logger.info('All done.')
1629
1630 if __name__ == '__main__':
1631 main()
1632
Vinay Sajipb00e8f12012-04-16 15:28:50 +01001633
1634Inserting a BOM into messages sent to a SysLogHandler
1635-----------------------------------------------------
1636
1637`RFC 5424 <http://tools.ietf.org/html/rfc5424>`_ requires that a
1638Unicode message be sent to a syslog daemon as a set of bytes which have the
1639following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
1640Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
1641section of the specification <http://tools.ietf.org/html/rfc5424#section-6>`_.)
1642
Vinay Sajip62930e12012-04-17 00:40:48 +01001643In Python 3.1, code was added to
Vinay Sajipb00e8f12012-04-16 15:28:50 +01001644:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
1645unfortunately, it was implemented incorrectly, with the BOM appearing at the
1646beginning of the message and hence not allowing any pure-ASCII component to
1647appear before it.
1648
1649As this behaviour is broken, the incorrect BOM insertion code is being removed
Vinay Sajip62930e12012-04-17 00:40:48 +01001650from Python 3.2.4 and later. However, it is not being replaced, and if you
Vinay Sajipa58d6682012-07-27 10:54:10 +01001651want to produce RFC 5424-compliant messages which include a BOM, an optional
Vinay Sajipb00e8f12012-04-16 15:28:50 +01001652pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
1653UTF-8, then you need to do the following:
1654
1655#. Attach a :class:`~logging.Formatter` instance to your
1656 :class:`~logging.handlers.SysLogHandler` instance, with a format string
1657 such as::
1658
Vinay Sajip59b9a792012-04-16 15:46:18 +01001659 'ASCII section\ufeffUnicode section'
Vinay Sajipb00e8f12012-04-16 15:28:50 +01001660
Georg Brandld50fe722013-03-23 16:00:41 +01001661 The Unicode code point U+FEFF, when encoded using UTF-8, will be
Vinay Sajip59b9a792012-04-16 15:46:18 +01001662 encoded as a UTF-8 BOM -- the byte-string ``b'\xef\xbb\xbf'``.
Vinay Sajipb00e8f12012-04-16 15:28:50 +01001663
1664#. Replace the ASCII section with whatever placeholders you like, but make sure
1665 that the data that appears in there after substitution is always ASCII (that
1666 way, it will remain unchanged after UTF-8 encoding).
1667
1668#. Replace the Unicode section with whatever placeholders you like; if the data
Vinay Sajipa58d6682012-07-27 10:54:10 +01001669 which appears there after substitution contains characters outside the ASCII
1670 range, that's fine -- it will be encoded using UTF-8.
Vinay Sajipb00e8f12012-04-16 15:28:50 +01001671
Vinay Sajip59b9a792012-04-16 15:46:18 +01001672The formatted message *will* be encoded using UTF-8 encoding by
1673``SysLogHandler``. If you follow the above rules, you should be able to produce
Vinay Sajipb00e8f12012-04-16 15:28:50 +01001674RFC 5424-compliant messages. If you don't, logging may not complain, but your
1675messages will not be RFC 5424-compliant, and your syslog daemon may complain.
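
To illustrate these rules concretely, here is a minimal sketch (the logger name
and message text are purely illustrative) which formats a record using such a
format string and shows that, after UTF-8 encoding, the BOM bytes sit between
the pure-ASCII section and the Unicode section:

```python
import logging

# Sketch: an ASCII section (%(levelname)s), then U+FEFF, then the
# message section, which may contain non-ASCII characters.
fmt = logging.Formatter('%(levelname)s\ufeff%(message)s')
record = logging.LogRecord('demo', logging.INFO, __file__, 0,
                           'détail en Unicode', None, None)
encoded = fmt.format(record).encode('utf-8')
# U+FEFF encodes to the UTF-8 BOM b'\xef\xbb\xbf', which here appears
# directly after the pure-ASCII 'INFO' prefix.
assert encoded.startswith(b'INFO\xef\xbb\xbf')
```

In real use, the UTF-8 encoding step is performed by ``SysLogHandler`` itself;
the sketch just verifies the byte layout that results.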
1676
Vinay Sajip4b88d6c2013-01-22 15:57:39 +00001677
1678Implementing structured logging
1679-------------------------------
1680
1681Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
1683messages in a structured format which *is* capable of being parsed by a program
Vinay Sajip3d9e9722013-01-23 09:31:19 +00001684(without needing complex regular expressions to parse the log message). This is
Vinay Sajip4b88d6c2013-01-22 15:57:39 +00001685straightforward to achieve using the logging package. There are a number of
1686ways in which this could be achieved, but the following is a simple approach
1687which uses JSON to serialise the event in a machine-parseable manner::
1688
1689 import json
1690 import logging
1691
1692 class StructuredMessage(object):
1693 def __init__(self, message, **kwargs):
1694 self.message = message
1695 self.kwargs = kwargs
1696
1697 def __str__(self):
1698 return '%s >>> %s' % (self.message, json.dumps(self.kwargs))
1699
1700 _ = StructuredMessage # optional, to improve readability
1701
1702 logging.basicConfig(level=logging.INFO, format='%(message)s')
1703 logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))
1704
1705If the above script is run, it prints::
1706
1707 message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}
1708
Vinay Sajip3d9e9722013-01-23 09:31:19 +00001709Note that the order of items might be different according to the version of
1710Python used.
1711
Vinay Sajip4b88d6c2013-01-22 15:57:39 +00001712If you need more specialised processing, you can use a custom JSON encoder,
1713as in the following complete example::
1714
1715 from __future__ import unicode_literals
1716
1717 import json
1718 import logging
1719
Vinay Sajip3d9e9722013-01-23 09:31:19 +00001720 # This next bit is to ensure the script runs unchanged on 2.x and 3.x
Vinay Sajip4b88d6c2013-01-22 15:57:39 +00001721 try:
1722 unicode
1723 except NameError:
1724 unicode = str
1725
1726 class Encoder(json.JSONEncoder):
1727 def default(self, o):
1728 if isinstance(o, set):
1729 return tuple(o)
1730 elif isinstance(o, unicode):
1731 return o.encode('unicode_escape').decode('ascii')
1732 return super(Encoder, self).default(o)
1733
1734 class StructuredMessage(object):
1735 def __init__(self, message, **kwargs):
1736 self.message = message
1737 self.kwargs = kwargs
1738
1739 def __str__(self):
1740 s = Encoder().encode(self.kwargs)
1741 return '%s >>> %s' % (self.message, s)
1742
Vinay Sajip3d9e9722013-01-23 09:31:19 +00001743 _ = StructuredMessage # optional, to improve readability
Vinay Sajip4b88d6c2013-01-22 15:57:39 +00001744
1745 def main():
1746 logging.basicConfig(level=logging.INFO, format='%(message)s')
Raymond Hettingerdf1b6992014-11-09 15:56:33 -08001747 logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))
Vinay Sajip4b88d6c2013-01-22 15:57:39 +00001748
1749 if __name__ == '__main__':
1750 main()
1751
1752When the above script is run, it prints::
1753
1754 message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}
1755
Vinay Sajip3d9e9722013-01-23 09:31:19 +00001756Note that the order of items might be different according to the version of
1757Python used.
1758
Vinay Sajip554f22f2014-02-03 11:51:45 +00001759
1760.. _custom-handlers:
1761
Vinay Sajip2c1adcb2013-11-05 10:02:21 +00001762.. currentmodule:: logging.config
1763
Vinay Sajip9c10d6b2013-11-15 20:58:13 +00001764Customizing handlers with :func:`dictConfig`
Vinay Sajip2c1adcb2013-11-05 10:02:21 +00001765--------------------------------------------
1766
Vinay Sajip9c10d6b2013-11-15 20:58:13 +00001767There are times when you want to customize logging handlers in particular ways,
Vinay Sajip2c1adcb2013-11-05 10:02:21 +00001768and if you use :func:`dictConfig` you may be able to do this without
1769subclassing. As an example, consider that you may want to set the ownership of a
1770log file. On POSIX, this is easily done using :func:`shutil.chown`, but the file
Vinay Sajip9c10d6b2013-11-15 20:58:13 +00001771handlers in the stdlib don't offer built-in support. You can customize handler
Vinay Sajip2c1adcb2013-11-05 10:02:21 +00001772creation using a plain function such as::
1773
1774 def owned_file_handler(filename, mode='a', encoding=None, owner=None):
1775 if owner:
1776 if not os.path.exists(filename):
1777 open(filename, 'a').close()
1778 shutil.chown(filename, *owner)
1779 return logging.FileHandler(filename, mode, encoding)
1780
1781You can then specify, in a logging configuration passed to :func:`dictConfig`,
1782that a logging handler be created by calling this function::
1783
1784 LOGGING = {
1785 'version': 1,
1786 'disable_existing_loggers': False,
1787 'formatters': {
1788 'default': {
1789 'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
1790 },
1791 },
1792 'handlers': {
1793 'file':{
1794 # The values below are popped from this dictionary and
1795 # used to create the handler, set the handler's level and
1796 # its formatter.
1797 '()': owned_file_handler,
1798 'level':'DEBUG',
1799 'formatter': 'default',
1800 # The values below are passed to the handler creator callable
1801 # as keyword arguments.
1802 'owner': ['pulse', 'pulse'],
1803 'filename': 'chowntest.log',
1804 'mode': 'w',
1805 'encoding': 'utf-8',
1806 },
1807 },
1808 'root': {
1809 'handlers': ['file'],
1810 'level': 'DEBUG',
1811 },
1812 }
1813
1814In this example I am setting the ownership using the ``pulse`` user and group,
1815just for the purposes of illustration. Putting it together into a working
1816script, ``chowntest.py``::
1817
1818 import logging, logging.config, os, shutil
1819
1820 def owned_file_handler(filename, mode='a', encoding=None, owner=None):
1821 if owner:
1822 if not os.path.exists(filename):
1823 open(filename, 'a').close()
1824 shutil.chown(filename, *owner)
1825 return logging.FileHandler(filename, mode, encoding)
1826
1827 LOGGING = {
1828 'version': 1,
1829 'disable_existing_loggers': False,
1830 'formatters': {
1831 'default': {
1832 'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
1833 },
1834 },
1835 'handlers': {
1836 'file':{
1837 # The values below are popped from this dictionary and
1838 # used to create the handler, set the handler's level and
1839 # its formatter.
1840 '()': owned_file_handler,
1841 'level':'DEBUG',
1842 'formatter': 'default',
1843 # The values below are passed to the handler creator callable
1844 # as keyword arguments.
1845 'owner': ['pulse', 'pulse'],
1846 'filename': 'chowntest.log',
1847 'mode': 'w',
1848 'encoding': 'utf-8',
1849 },
1850 },
1851 'root': {
1852 'handlers': ['file'],
1853 'level': 'DEBUG',
1854 },
1855 }
1856
1857 logging.config.dictConfig(LOGGING)
1858 logger = logging.getLogger('mylogger')
1859 logger.debug('A debug message')
1860
1861To run this, you will probably need to run as ``root``::
1862
1863 $ sudo python3.3 chowntest.py
1864 $ cat chowntest.log
1865 2013-11-05 09:34:51,128 DEBUG mylogger A debug message
1866 $ ls -l chowntest.log
1867 -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log
1868
1869Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
1870makes an appearance. This approach should work with any Python version that
1871supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
1872versions, you would need to implement the actual ownership change using e.g.
1873:func:`os.chown`.
1874
1875In practice, the handler-creating function may be in a utility module somewhere
1876in your project. Instead of the line in the configuration::
1877
1878 '()': owned_file_handler,
1879
1880you could use e.g.::
1881
1882 '()': 'ext://project.util.owned_file_handler',
1883
1884where ``project.util`` can be replaced with the actual name of the package
1885where the function resides. In the above working script, using
1886``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
1887is resolved by :func:`dictConfig` from the ``ext://`` specification.
1888
1889This example hopefully also points the way to how you could implement other
1890types of file change - e.g. setting specific POSIX permission bits - in the
1891same way, using :func:`os.chmod`.
1892
1893Of course, the approach could also be extended to types of handler other than a
1894:class:`~logging.FileHandler` - for example, one of the rotating file handlers,
1895or a different type of handler altogether.
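
For instance, here is a hedged sketch of the :func:`os.chmod` idea just
mentioned (the function name and the ``0o640`` default are assumptions for
illustration, not part of the standard library):

```python
import logging
import os

def permissioned_file_handler(filename, mode='a', encoding=None, perms=0o640):
    # Mirror the chown recipe above: ensure the file exists, set the
    # POSIX permission bits, then return an ordinary FileHandler for it.
    if not os.path.exists(filename):
        open(filename, 'a').close()
    os.chmod(filename, perms)
    return logging.FileHandler(filename, mode, encoding)
```

Such a function would be referenced from a configuration dictionary via the
``'()'`` key, exactly as ``owned_file_handler`` is in the working script above.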
1896
Vinay Sajipcbefe3b2014-01-15 15:09:05 +00001897
1898.. currentmodule:: logging
1899
1900.. _formatting-styles:
1901
1902Using particular formatting styles throughout your application
1903--------------------------------------------------------------
1904
1905In Python 3.2, the :class:`~logging.Formatter` gained a ``style`` keyword
1906parameter which, while defaulting to ``%`` for backward compatibility, allowed
1907the specification of ``{`` or ``$`` to support the formatting approaches
1908supported by :meth:`str.format` and :class:`string.Template`. Note that this
1909governs the formatting of logging messages for final output to logs, and is
1910completely orthogonal to how an individual logging message is constructed.
1911
1912Logging calls (:meth:`~Logger.debug`, :meth:`~Logger.info` etc.) only take
1913positional parameters for the actual logging message itself, with keyword
1914parameters used only for determining options for how to handle the logging call
1915(e.g. the ``exc_info`` keyword parameter to indicate that traceback information
1916should be logged, or the ``extra`` keyword parameter to indicate additional
1917contextual information to be added to the log). So you cannot directly make
1918logging calls using :meth:`str.format` or :class:`string.Template` syntax,
1919because internally the logging package uses %-formatting to merge the format
string and the variable arguments. This cannot be changed while preserving
1921backward compatibility, since all logging calls which are out there in existing
1922code will be using %-format strings.
1923
1924There have been suggestions to associate format styles with specific loggers,
1925but that approach also runs into backward compatibility problems because any
1926existing code could be using a given logger name and using %-formatting.
1927
1928For logging to work interoperably between any third-party libraries and your
1929code, decisions about formatting need to be made at the level of the
1930individual logging call. This opens up a couple of ways in which alternative
1931formatting styles can be accommodated.
1932
1933
1934Using LogRecord factories
1935^^^^^^^^^^^^^^^^^^^^^^^^^
1936
1937In Python 3.2, along with the :class:`~logging.Formatter` changes mentioned
1938above, the logging package gained the ability to allow users to set their own
1939:class:`LogRecord` subclasses, using the :func:`setLogRecordFactory` function.
1940You can use this to set your own subclass of :class:`LogRecord`, which does the
1941Right Thing by overriding the :meth:`~LogRecord.getMessage` method. The base
1942class implementation of this method is where the ``msg % args`` formatting
1943happens, and where you can substitute your alternate formatting; however, you
1944should be careful to support all formatting styles and allow %-formatting as
1945the default, to ensure interoperability with other code. Care should also be
1946taken to call ``str(self.msg)``, just as the base implementation does.
1947
1948Refer to the reference documentation on :func:`setLogRecordFactory` and
1949:class:`LogRecord` for more information.
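
As a sketch of this approach (the class name and the simple '%'-detection
heuristic are assumptions for illustration; a real implementation would need a
more robust way of distinguishing the styles), a record factory which applies
:meth:`str.format` when no %-style placeholders are present might look like
this:

```python
import logging

class BraceRecord(logging.LogRecord):
    def getMessage(self):
        # Call str() on the message, just as the base implementation does.
        msg = str(self.msg)
        if self.args:
            if '%' in msg:
                # Keep %-formatting as the default, for interoperability
                # with other code.
                msg = msg % self.args
            else:
                msg = msg.format(*self.args)
        return msg

logging.setLogRecordFactory(BraceRecord)
```

After this, a call such as ``logging.getLogger(__name__).warning('Hello, {}!',
'world')`` would render the message as ``Hello, world!``, while existing
%-style calls continue to work unchanged.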
1950
1951
1952Using custom message objects
1953^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1954
1955There is another, perhaps simpler way that you can use {}- and $- formatting to
1956construct your individual log messages. You may recall (from
1957:ref:`arbitrary-object-messages`) that when logging you can use an arbitrary
1958object as a message format string, and that the logging package will call
1959:func:`str` on that object to get the actual format string. Consider the
1960following two classes::
1961
1962 class BraceMessage(object):
1963 def __init__(self, fmt, *args, **kwargs):
1964 self.fmt = fmt
1965 self.args = args
1966 self.kwargs = kwargs
1967
1968 def __str__(self):
1969 return self.fmt.format(*self.args, **self.kwargs)
1970
1971 class DollarMessage(object):
1972 def __init__(self, fmt, **kwargs):
1973 self.fmt = fmt
1974 self.kwargs = kwargs
1975
1976 def __str__(self):
1977 from string import Template
1978 return Template(self.fmt).substitute(**self.kwargs)
1979
1980Either of these can be used in place of a format string, to allow {}- or
1981$-formatting to be used to build the actual "message" part which appears in the
formatted log output in place of “%(message)s” or “{message}” or “$message”.
1983If you find it a little unwieldy to use the class names whenever you want to log
1984something, you can make it more palatable if you use an alias such as ``M`` or
1985``_`` for the message (or perhaps ``__``, if you are using ``_`` for
1986localization).
1987
Vinay Sajipeb14dec2014-01-17 18:36:02 +00001988Examples of this approach are given below. Firstly, formatting with
1989:meth:`str.format`::
1990
1991 >>> __ = BraceMessage
1992 >>> print(__('Message with {0} {1}', 2, 'placeholders'))
1993 Message with 2 placeholders
1994 >>> class Point: pass
1995 ...
1996 >>> p = Point()
1997 >>> p.x = 0.5
1998 >>> p.y = 0.5
1999 >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p))
2000 Message with coordinates: (0.50, 0.50)
2001
2002Secondly, formatting with :class:`string.Template`::
2003
2004 >>> __ = DollarMessage
2005 >>> print(__('Message with $num $what', num=2, what='placeholders'))
2006 Message with 2 placeholders
2007 >>>
2008
2009One thing to note is that you pay no significant performance penalty with this
2010approach: the actual formatting happens not when you make the logging call, but
2011when (and if) the logged message is actually about to be output to a log by a
2012handler. So the only slightly unusual thing which might trip you up is that the
2013parentheses go around the format string and the arguments, not just the format
string. That's because the ``__`` notation is just syntax sugar for a constructor
2015call to one of the ``XXXMessage`` classes shown above.
Vinay Sajip554f22f2014-02-03 11:51:45 +00002016
2017
2018.. _filters-dictconfig:
2019
2020.. currentmodule:: logging.config
2021
2022Configuring filters with :func:`dictConfig`
2023-------------------------------------------
2024
2025You *can* configure filters using :func:`~logging.config.dictConfig`, though it
2026might not be obvious at first glance how to do it (hence this recipe). Since
2027:class:`~logging.Filter` is the only filter class included in the standard
2028library, and it is unlikely to cater to many requirements (it's only there as a
2029base class), you will typically need to define your own :class:`~logging.Filter`
2030subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
2031specify the ``()`` key in the configuration dictionary for the filter,
2032specifying a callable which will be used to create the filter (a class is the
2033most obvious, but you can provide any callable which returns a
2034:class:`~logging.Filter` instance). Here is a complete example::
2035
2036 import logging
2037 import logging.config
2038 import sys
2039
2040 class MyFilter(logging.Filter):
2041 def __init__(self, param=None):
2042 self.param = param
2043
2044 def filter(self, record):
2045 if self.param is None:
2046 allow = True
2047 else:
2048 allow = self.param not in record.msg
2049 if allow:
2050 record.msg = 'changed: ' + record.msg
2051 return allow
2052
2053 LOGGING = {
2054 'version': 1,
2055 'filters': {
2056 'myfilter': {
2057 '()': MyFilter,
2058 'param': 'noshow',
2059 }
2060 },
2061 'handlers': {
2062 'console': {
2063 'class': 'logging.StreamHandler',
2064 'filters': ['myfilter']
2065 }
2066 },
2067 'root': {
2068 'level': 'DEBUG',
2069 'handlers': ['console']
2070 },
2071 }
2072
2073 if __name__ == '__main__':
2074 logging.config.dictConfig(LOGGING)
2075 logging.debug('hello')
2076 logging.debug('hello - noshow')
2077
2078This example shows how you can pass configuration data to the callable which
2079constructs the instance, in the form of keyword parameters. When run, the above
2080script will print::
2081
2082 changed: hello
2083
2084which shows that the filter is working as configured.
2085
2086A couple of extra points to note:
2087
2088* If you can't refer to the callable directly in the configuration (e.g. if it
2089 lives in a different module, and you can't import it directly where the
2090 configuration dictionary is), you can use the form ``ext://...`` as described
2091 in :ref:`logging-config-dict-externalobj`. For example, you could have used
2092 the text ``'ext://__main__.MyFilter'`` instead of ``MyFilter`` in the above
2093 example.
2094
2095* As well as for filters, this technique can also be used to configure custom
2096 handlers and formatters. See :ref:`logging-config-dict-userdef` for more
2097 information on how logging supports using user-defined objects in its
2098 configuration, and see the other cookbook recipe :ref:`custom-handlers` above.
2099
Vinay Sajipdb071642015-01-28 07:32:38 +00002100
2101.. _custom-format-exception:
2102
2103Customized exception formatting
2104-------------------------------
2105
2106There might be times when you want to do customized exception formatting - for
2107argument's sake, let's say you want exactly one line per logged event, even
2108when exception information is present. You can do this with a custom formatter
2109class, as shown in the following example::
2110
2111 import logging
2112
2113 class OneLineExceptionFormatter(logging.Formatter):
2114 def formatException(self, exc_info):
2115 """
2116 Format an exception so that it prints on a single line.
2117 """
2118 result = super(OneLineExceptionFormatter, self).formatException(exc_info)
2119 return repr(result) # or format into one line however you want to
2120
2121 def format(self, record):
2122 s = super(OneLineExceptionFormatter, self).format(record)
2123 if record.exc_text:
2124 s = s.replace('\n', '') + '|'
2125 return s
2126
2127 def configure_logging():
2128 fh = logging.FileHandler('output.txt', 'w')
2129 f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
2130 '%d/%m/%Y %H:%M:%S')
2131 fh.setFormatter(f)
2132 root = logging.getLogger()
2133 root.setLevel(logging.DEBUG)
2134 root.addHandler(fh)
2135
2136 def main():
2137 configure_logging()
2138 logging.info('Sample message')
2139 try:
2140 x = 1 / 0
2141 except ZeroDivisionError as e:
2142 logging.exception('ZeroDivisionError: %s', e)
2143
2144 if __name__ == '__main__':
2145 main()
2146
2147When run, this produces a file with exactly two lines::
2148
2149 28/01/2015 07:21:23|INFO|Sample message|
2150 28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n File "logtest7.py", line 30, in main\n x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|
2151
2152While the above treatment is simplistic, it points the way to how exception
2153information can be formatted to your liking. The :mod:`traceback` module may be
2154helpful for more specialized needs.
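
For instance, a sketch (the class name is an illustrative assumption) which
uses :func:`traceback.format_exception_only` to reduce the exception
information to just its final ``ExceptionType: message`` summary:

```python
import logging
import traceback

class BriefExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        # Keep only the final summary line(s), discarding the stack trace.
        lines = traceback.format_exception_only(exc_info[0], exc_info[1])
        return ''.join(lines).rstrip()
```

Attached to a handler in place of the default formatter, this would replace
multi-line tracebacks in the log with a single summary line, at the cost of
losing the stack information.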
Vinay Sajipf046dfe2015-02-01 15:17:34 +00002155
2156.. _spoken-messages:
2157
2158Speaking logging messages
2159-------------------------
2160
There might be situations when it is desirable to have logging messages rendered
in an audible rather than a visible format. This is easy to do if you have
text-to-speech (TTS) functionality available in your system, even if it doesn't
have a Python binding. Most TTS systems have a command line program you can run,
and this can be invoked from a handler using :mod:`subprocess`. It's assumed here
that TTS command line programs won't expect to interact with users or take a
long time to complete, that the frequency of logged messages will be not so
high as to swamp the user with messages, and that it's acceptable to have the
messages spoken one at a time rather than concurrently. The example
implementation below waits for one message to be spoken before the next is
processed, and this might cause other handlers to be kept waiting. Here is a short example showing
2172the approach, which assumes that the ``espeak`` TTS package is available::
2173
2174 import logging
2175 import subprocess
2176 import sys
2177
2178 class TTSHandler(logging.Handler):
2179 def emit(self, record):
2180 msg = self.format(record)
2181 # Speak slowly in a female English voice
2182 cmd = ['espeak', '-s150', '-ven+f3', msg]
2183 p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
2184 stderr=subprocess.STDOUT)
2185 # wait for the program to finish
2186 p.communicate()
2187
2188 def configure_logging():
2189 h = TTSHandler()
2190 root = logging.getLogger()
2191 root.addHandler(h)
2192 # the default formatter just returns the message
2193 root.setLevel(logging.DEBUG)
2194
2195 def main():
2196 logging.info('Hello')
2197 logging.debug('Goodbye')
2198
2199 if __name__ == '__main__':
2200 configure_logging()
2201 sys.exit(main())
2202
2203When run, this script should say "Hello" and then "Goodbye" in a female voice.
2204
2205The above approach can, of course, be adapted to other TTS systems and even
2206other systems altogether which can process messages via external programs run
2207from a command line.
2208
Vinay Sajipff1f3d92015-10-10 00:52:35 +01002209
2210.. _buffered-logging:
2211
2212Buffering logging messages and outputting them conditionally
2213------------------------------------------------------------
2214
2215There might be situations where you want to log messages in a temporary area
2216and only output them if a certain condition occurs. For example, you may want to
2217start logging debug events in a function, and if the function completes without
2218errors, you don't want to clutter the log with the collected debug information,
2219but if there is an error, you want all the debug information to be output as well
2220as the error.
2221
2222Here is an example which shows how you could do this using a decorator for your
2223functions where you want logging to behave this way. It makes use of the
2224:class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
2225until some condition occurs, at which point the buffered events are ``flushed``
2226- passed to another handler (the ``target`` handler) for processing. By default,
the ``MemoryHandler`` is flushed when its buffer gets filled up or an event whose
2228level is greater than or equal to a specified threshold is seen. You can use this
2229recipe with a more specialised subclass of ``MemoryHandler`` if you want custom
2230flushing behavior.
2231
2232The example script has a simple function, ``foo``, which just cycles through
2233all the logging levels, writing to ``sys.stderr`` to say what level it's about
2234to log at, and then actually logging a message that that level. You can pass a
2235parameter to ``foo`` which, if true, will log at ERROR and CRITICAL levels -
2236otherwise, it only logs at DEBUG, INFO and WARNING levels.
2237
2238The script just arranges to decorate ``foo`` with a decorator which will do the
2239conditional logging that's required. The decorator takes a logger as a parameter
2240and attaches a memory handler for the duration of the call to the decorated
2241function. The decorator can be additionally parameterised using a target handler,
2242a level at which flushing should occur, and a capacity for the buffer. These
2243default to a :class:`~logging.StreamHandler` which writes to ``sys.stderr``,
2244``logging.ERROR`` and ``100`` respectively.
2245
2246Here's the script::

    import logging
    from logging.handlers import MemoryHandler
    import sys

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())

    def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
        if target_handler is None:
            target_handler = logging.StreamHandler()
        if flush_level is None:
            flush_level = logging.ERROR
        if capacity is None:
            capacity = 100
        handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

        def decorator(fn):
            def wrapper(*args, **kwargs):
                logger.addHandler(handler)
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logger.exception('call failed')
                    raise
                finally:
                    super(MemoryHandler, handler).flush()
                    logger.removeHandler(handler)
            return wrapper

        return decorator

    def write_line(s):
        sys.stderr.write('%s\n' % s)

    def foo(fail=False):
        write_line('about to log at DEBUG ...')
        logger.debug('Actually logged at DEBUG')
        write_line('about to log at INFO ...')
        logger.info('Actually logged at INFO')
        write_line('about to log at WARNING ...')
        logger.warning('Actually logged at WARNING')
        if fail:
            write_line('about to log at ERROR ...')
            logger.error('Actually logged at ERROR')
            write_line('about to log at CRITICAL ...')
            logger.critical('Actually logged at CRITICAL')
        return fail

    decorated_foo = log_if_errors(logger)(foo)

    if __name__ == '__main__':
        logger.setLevel(logging.DEBUG)
        write_line('Calling undecorated foo with False')
        assert not foo(False)
        write_line('Calling undecorated foo with True')
        assert foo(True)
        write_line('Calling decorated foo with False')
        assert not decorated_foo(False)
        write_line('Calling decorated foo with True')
        assert decorated_foo(True)
When this script is run, the following output should be observed::

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose
severity is ERROR or greater, but in that case, any previous events at lower
severities are also logged.

You can of course use the conventional means of decoration::

    @log_if_errors(logger)
    def foo(fail=False):
        ...

.. _utc-formatting:

Formatting times using UTC (GMT) via configuration
--------------------------------------------------

Sometimes you want to format times using UTC, which can be done using a class
such as ``UTCFormatter``, shown below::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and you can then use the ``UTCFormatter`` in your code instead of
:class:`~logging.Formatter`. If you want to do that via configuration, you can
use the :func:`~logging.config.dictConfig` API with an approach illustrated by
the following complete example::

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like::

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each
handler.
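If you don't need per-handler control, it's worth noting that ``converter`` is
just a class attribute, so (as an alternative sketch) a one-line assignment
switches every formatter which hasn't set its own converter to UTC:

```python
import logging
import time

# make %(asctime)s render in UTC for all formatters by default
logging.Formatter.converter = time.gmtime

logging.basicConfig(format='%(asctime)s %(message)s')
logging.warning('This timestamp is in UTC')
```

This is a process-wide change, so it is less flexible than the
configuration-based approach above, but convenient for simple scripts.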

.. _context-manager:

Using a context manager for selective logging
---------------------------------------------

There are times when it would be useful to temporarily change the logging
configuration and revert it after doing something. For this, a context
manager is the most obvious way of saving and restoring the logging context.
Here is a simple example of such a context manager, which allows you to
optionally change the logging level and add a logging handler purely in the
scope of the context manager::

    import logging
    import sys

    class LoggingContext(object):
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => don't swallow exceptions
If you specify a level value, the logger's level is set to that value in the
scope of the ``with`` block covered by the context manager. If you specify a
handler, it is added to the logger on entry to the block and removed on exit
from the block. You can also ask the manager to close the handler for you on
block exit - you could do this if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the
above::

    if __name__ == '__main__':
        logger = logging.getLogger('foo')
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        logger.info('1. This should appear just once on stderr.')
        logger.debug('2. This should not appear.')
        with LoggingContext(logger, level=logging.DEBUG):
            logger.debug('3. This should appear once on stderr.')
        logger.debug('4. This should not appear.')
        h = logging.StreamHandler(sys.stdout)
        with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
            logger.debug('5. This should appear twice - once on stderr and once on stdout.')
        logger.info('6. This should appear just once on stderr.')
        logger.debug('7. This should not appear.')

We initially set the logger's level to ``INFO``, so message #1 appears and
message #2 doesn't. We then change the level to ``DEBUG`` temporarily in the
following ``with`` block, and so message #3 appears. After the block exits, the
logger's level is restored to ``INFO`` and so message #4 doesn't appear. In the
next ``with`` block, we set the level to ``DEBUG`` again but also add a handler
writing to ``sys.stdout``. Thus, message #5 appears twice on the console (once
via ``stderr`` and once via ``stdout``). After the ``with`` statement's
completion, the status is as it was before so message #6 appears (like message
#1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows::

    $ python logctx.py
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

If we run it again, but pipe ``stderr`` to ``/dev/null``, we see the following,
which is the only message written to ``stdout``::

    $ python logctx.py 2>/dev/null
    5. This should appear twice - once on stderr and once on stdout.

Once again, but piping ``stdout`` to ``/dev/null``, we get::

    $ python logctx.py >/dev/null
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

In this case, the message #5 printed to ``stdout`` doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to attach
logging filters temporarily. Note that the above code works in Python 2 as well
as Python 3.
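For instance, a sketch of a variant which attaches a filter just for the
duration of the block (the ``FilteringContext`` and ``NoSecrets`` names are
invented here for illustration) might look like this:

```python
import logging

class FilteringContext(object):
    """Attach a filter to a logger just for the duration of a with block."""
    def __init__(self, logger, filt):
        self.logger = logger
        self.filt = filt

    def __enter__(self):
        self.logger.addFilter(self.filt)

    def __exit__(self, et, ev, tb):
        self.logger.removeFilter(self.filt)
        # implicit return of None => don't swallow exceptions

class NoSecrets(logging.Filter):
    """Example filter which drops any record mentioning 'secret'."""
    def filter(self, record):
        return 'secret' not in record.getMessage()

if __name__ == '__main__':
    logger = logging.getLogger('foo')
    logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.INFO)
    with FilteringContext(logger, NoSecrets()):
        logger.info('This mentions a secret and is dropped.')
        logger.info('This appears.')
    logger.info('Outside the block, secret messages appear again.')
```

The same enter/exit pattern as ``LoggingContext`` applies: whatever is attached
on entry is detached on exit, even if the block raises an exception.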