.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent. Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again. At that point, the only change needed is to
modify the severity level of the logger and/or handler to debug.
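
A minimal sketch of this pattern (the logger and handler names here are just
illustrative) — the ``logger.debug`` call never changes; only the handler's
level does:

```python
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)  # debug statements stay dormant
logger.addHandler(ch)

logger.debug('not shown: the handler filters it out')

# When you need the diagnostics again, lower the handler's level -
# no logging calls in the application code have to change.
ch.setLevel(logging.DEBUG)
logger.debug('now shown')
```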

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/temp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warning message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.sendall(struct.pack('>L', len(data_to_send)))
    s.sendall(data_to_send)
    s.close()
    print('complete')


Dealing with handlers that block
--------------------------------

.. currentmodule:: logging.handlers

Sometimes you have to get your logging handlers to do their work without
blocking the thread you're logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
:class:`SMTPHandler`: sending emails can take a long time, for a
number of reasons outside the developer's control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: even a :class:`SocketHandler` operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
:class:`QueueHandler` to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to a large enough capacity or initialized with no upper bound to its
size. The write to the queue will typically be accepted quickly, though you
will probably need to catch the :exc:`queue.Full` exception as a precaution
in your code. If you are a library developer who has performance-critical
threads in your code, be sure to document this (together with a suggestion to
attach only ``QueueHandlers`` to your loggers) for the benefit of other
developers who will use your code.

The second part of the solution is :class:`QueueListener`, which has been
designed as the counterpart to :class:`QueueHandler`. A
:class:`QueueListener` is very simple: it's passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from ``QueueHandlers`` (or any other source of ``LogRecords``, for that
matter). The ``LogRecords`` are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate :class:`QueueListener` class is that you
can use the same instance to service multiple ``QueueHandlers``. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted)::

    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

which, when run, will produce::

    MainThread: Look out!

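
On Python 3.5 and later, the listener can also be asked to honour each
handler's own level. A self-contained sketch (handler choices here are just
for illustration):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

que = queue.Queue(-1)
console = logging.StreamHandler()
console.setLevel(logging.WARNING)  # this handler only wants WARNING and above
# respect_handler_level=True (Python 3.5+) makes the listener honour
# each handler's own level instead of passing every record through.
listener = QueueListener(que, console, respect_handler_level=True)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(QueueHandler(que))
listener.start()
root.debug('dropped: below the console handler level')
root.warning('delivered to the console handler')
listener.stop()
```
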
.. versionchanged:: 3.5
   Prior to Python 3.5, the :class:`QueueListener` always passed every message
   received from the queue to every handler it was initialized with. (This was
   because it was assumed that level filtering was all done on the other side,
   where the queue is filled.) From 3.5 onwards, this behaviour can be changed
   by passing a keyword argument ``respect_handler_level=True`` to the
   listener's constructor. When this is done, the listener compares the level
   of each message with the handler's level, and only passes a message to a
   handler if it's appropriate to do so.

.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`socketserver`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import socketserver
    import struct


    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = True

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method and implementing your
alternative there, as well as adapting the above script to use your alternative
serialization.

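
As an illustration, here is a sketch of such an override using JSON. The class
name ``JsonSocketHandler`` is made up for this example, and the receiver above
would need a matching change: decode the length-prefixed payload with
``json.loads()`` instead of ``pickle.loads()``:

```python
import json
import logging
import logging.handlers
import struct

class JsonSocketHandler(logging.handlers.SocketHandler):
    """Illustrative handler which serializes records as JSON, not pickle."""

    def makePickle(self, record):
        # Mirror what SocketHandler does (length-prefixed payload),
        # but use JSON as the wire format.
        d = dict(record.__dict__)
        # Format the message now; args may not survive JSON serialization.
        d['msg'] = record.getMessage()
        d['args'] = None
        d['exc_info'] = None
        data = json.dumps(d, default=str).encode('utf-8')
        return struct.pack('>L', len(data)) + data
```
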
.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client's username, or IP address). Although you could
use the *extra* parameter to achieve this, it's not always convenient to pass
the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is not a problem
in practice, when the number of :class:`Logger` instances is dependent on the
level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you had passed an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).

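
For instance, a sketch of such a class (the class name and the ``pid`` key are
made up for this example) whose values are computed each time an event is
logged:

```python
import logging
import os

class DynamicContext:
    """Dict-like object whose values are computed when the record is made."""

    names = ['pid']

    def __getitem__(self, name):
        if name == 'pid':
            return os.getpid()  # evaluated at logging time, not at creation
        raise KeyError(name)

    def __iter__(self):
        return iter(self.names)

logger = logging.getLogger(__name__)
adapter = logging.LoggerAdapter(logger, DynamicContext())
# A formatter using '%(pid)s' can now include the value in every message.
```
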
.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add, say, information from the request - say, the remote IP address and remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to the
file from your processes. The existing :class:`FileHandler` and subclasses do
not make use of :mod:`multiprocessing` at present, though they may do so in the
future. Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
https://bugs.python.org/issue3770).

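
A sketch of what such a handler might look like — this is illustrative only,
not a stdlib class, and it assumes the handler is created in the parent
process *before* the workers are started, so that they all share the same
lock:

```python
import logging
import multiprocessing

class LockingFileHandler(logging.FileHandler):
    """Illustrative handler serializing writes with a multiprocessing.Lock.

    Create one instance before starting worker processes so that every
    process inherits the same lock object.
    """

    def __init__(self, filename, mode='a', encoding=None, delay=False):
        super().__init__(filename, mode, encoding, delay)
        self.mp_lock = multiprocessing.Lock()

    def emit(self, record):
        # Hold the cross-process lock for the duration of the write.
        with self.mp_lock:
            super().emit(record)
```
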
.. currentmodule:: logging.handlers

Alternatively, you can use a ``Queue`` and a :class:`QueueHandler` to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process -- the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements::

   # You'll need these imports in your own code
   import logging
   import logging.handlers
   import multiprocessing

   # Next two import lines for this demo only
   from random import choice, random
   import time

   #
   # Because you'll want to define the logging configurations for listener and workers, the
   # listener and worker process functions take a configurer parameter which is a callable
   # for configuring logging for that process. These functions are also passed the queue,
   # which they use for communication.
   #
   # In practice, you can configure the listener however you want, but note that in this
   # simple example, the listener does not apply level or filter logic to received records.
   # In practice, you would probably want to do this logic in the worker processes, to avoid
   # sending events which would be filtered out between processes.
   #
   # The size of the rotated files is made small so you can see the results easily.
   def listener_configurer():
       root = logging.getLogger()
       h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
       f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
       h.setFormatter(f)
       root.addHandler(h)

   # This is the listener process top-level loop: wait for logging events
   # (LogRecords) on the queue and handle them, quit when you get a None for a
   # LogRecord.
   def listener_process(queue, configurer):
       configurer()
       while True:
           try:
               record = queue.get()
               if record is None: # We send this as a sentinel to tell the listener to quit.
                   break
               logger = logging.getLogger(record.name)
               logger.handle(record) # No level or filter logic applied - just do it!
           except Exception:
               import sys, traceback
               print('Whoops! Problem:', file=sys.stderr)
               traceback.print_exc(file=sys.stderr)

   # Arrays used for random selections in this demo

   LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
             logging.ERROR, logging.CRITICAL]

   LOGGERS = ['a.b.c', 'd.e.f']

   MESSAGES = [
       'Random message #1',
       'Random message #2',
       'Random message #3',
   ]

   # The worker configuration is done at the start of the worker process run.
   # Note that on Windows you can't rely on fork semantics, so each process
   # will run the logging configuration code when it starts.
   def worker_configurer(queue):
       h = logging.handlers.QueueHandler(queue) # Just the one handler needed
       root = logging.getLogger()
       root.addHandler(h)
       root.setLevel(logging.DEBUG) # send all messages, for demo; no other level or filter logic applied.

   # This is the worker process top-level loop, which just logs ten events with
   # random intervening delays before terminating.
   # The print messages are just so you know it's doing something!
   def worker_process(queue, configurer):
       configurer(queue)
       name = multiprocessing.current_process().name
       print('Worker started: %s' % name)
       for i in range(10):
           time.sleep(random())
           logger = logging.getLogger(choice(LOGGERS))
           level = choice(LEVELS)
           message = choice(MESSAGES)
           logger.log(level, message)
       print('Worker finished: %s' % name)

   # Here's where the demo gets orchestrated. Create the queue, create and start
   # the listener, create ten workers and start them, wait for them to finish,
   # then send a None to the queue to tell the listener to finish.
   def main():
       queue = multiprocessing.Queue(-1)
       listener = multiprocessing.Process(target=listener_process,
                                          args=(queue, listener_configurer))
       listener.start()
       workers = []
       for i in range(10):
           worker = multiprocessing.Process(target=worker_process,
                                            args=(queue, worker_configurer))
           workers.append(worker)
           worker.start()
       for w in workers:
           w.join()
       queue.put_nowait(None)
       listener.join()

   if __name__ == '__main__':
       main()

A variant of the above script keeps the logging in the main process, in a
separate thread::

   import logging
   import logging.config
   import logging.handlers
   from multiprocessing import Process, Queue
   import random
   import threading
   import time

   def logger_thread(q):
       while True:
           record = q.get()
           if record is None:
               break
           logger = logging.getLogger(record.name)
           logger.handle(record)


   def worker_process(q):
       qh = logging.handlers.QueueHandler(q)
       root = logging.getLogger()
       root.setLevel(logging.DEBUG)
       root.addHandler(qh)
       levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                 logging.CRITICAL]
       loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                  'spam', 'spam.ham', 'spam.ham.eggs']
       for i in range(100):
           lvl = random.choice(levels)
           logger = logging.getLogger(random.choice(loggers))
           logger.log(lvl, 'Message no. %d', i)

   if __name__ == '__main__':
       q = Queue()
       d = {
           'version': 1,
           'formatters': {
               'detailed': {
                   'class': 'logging.Formatter',
                   'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
               }
           },
           'handlers': {
               'console': {
                   'class': 'logging.StreamHandler',
                   'level': 'INFO',
               },
               'file': {
                   'class': 'logging.FileHandler',
                   'filename': 'mplog.log',
                   'mode': 'w',
                   'formatter': 'detailed',
               },
               'foofile': {
                   'class': 'logging.FileHandler',
                   'filename': 'mplog-foo.log',
                   'mode': 'w',
                   'formatter': 'detailed',
               },
               'errors': {
                   'class': 'logging.FileHandler',
                   'filename': 'mplog-errors.log',
                   'mode': 'w',
                   'level': 'ERROR',
                   'formatter': 'detailed',
               },
           },
           'loggers': {
               'foo': {
                   'handlers': ['foofile']
               }
           },
           'root': {
               'level': 'DEBUG',
               'handlers': ['console', 'file', 'errors']
           },
       }
       workers = []
       for i in range(5):
           wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
           workers.append(wp)
           wp.start()
       logging.config.dictConfig(d)
       lp = threading.Thread(target=logger_thread, args=(q,))
       lp.start()
       # At this point, the main process could do some useful work of its own
       # Once it's done that, it can wait for the workers to terminate...
       for wp in workers:
           wp.join()
       # And now tell the logging thread to finish up, too
       q.put(None)
       lp.join()

This variant shows how you can apply configuration for particular loggers -
e.g. the ``foo`` logger has a special handler which stores all events in the
``foo`` subsystem in a file ``mplog-foo.log``. This will be used by the logging
machinery in the main process (even though the logging events are generated in
the worker processes) to direct the messages to the appropriate destinations.

Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

   import glob
   import logging
   import logging.handlers

   LOG_FILENAME = 'logging_rotatingfile_example.out'

   # Set up a specific logger with our desired output level
   my_logger = logging.getLogger('MyLogger')
   my_logger.setLevel(logging.DEBUG)

   # Add the log message handler to the logger
   handler = logging.handlers.RotatingFileHandler(
       LOG_FILENAME, maxBytes=20, backupCount=5)

   my_logger.addHandler(handler)

   # Log some messages
   for i in range(20):
       my_logger.debug('i = %d' % i)

   # See what files are created
   logfiles = glob.glob('%s*' % LOG_FILENAME)

   for filename in logfiles:
       print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

   logging_rotatingfile_example.out
   logging_rotatingfile_example.out.1
   logging_rotatingfile_example.out.2
   logging_rotatingfile_example.out.3
   logging_rotatingfile_example.out.4
   logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the ``.6`` file is erased.

Obviously this example sets the log length much too small as an extreme
example. You would want to set *maxBytes* to an appropriate value.

.. _format-styles:

Use of alternative formatting styles
------------------------------------

When logging was added to the Python standard library, the only way of
formatting messages with variable content was to use the %-formatting
method. Since then, Python has gained two new formatting approaches:
:class:`string.Template` (added in Python 2.4) and :meth:`str.format`
(added in Python 2.6).

Logging (as of 3.2) provides improved support for these two additional
formatting styles. The :class:`Formatter` class has been enhanced to take an
additional, optional keyword parameter named ``style``. This defaults to
``'%'``, but other possible values are ``'{'`` and ``'$'``, which correspond
to the other two formatting styles. Backwards compatibility is maintained by
default (as you would expect), but by explicitly specifying a style parameter,
you get the ability to specify format strings which work with
:meth:`str.format` or :class:`string.Template`. Here's an example console
session to show the possibilities:

.. code-block:: pycon

   >>> import logging
   >>> root = logging.getLogger()
   >>> root.setLevel(logging.DEBUG)
   >>> handler = logging.StreamHandler()
   >>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
   ...                        style='{')
   >>> handler.setFormatter(bf)
   >>> root.addHandler(handler)
   >>> logger = logging.getLogger('foo.bar')
   >>> logger.debug('This is a DEBUG message')
   2010-10-28 15:11:55,341 foo.bar DEBUG This is a DEBUG message
   >>> logger.critical('This is a CRITICAL message')
   2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message
   >>> df = logging.Formatter('$asctime $name ${levelname} $message',
   ...                        style='$')
   >>> handler.setFormatter(df)
   >>> logger.debug('This is a DEBUG message')
   2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message
   >>> logger.critical('This is a CRITICAL message')
   2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message
   >>>

Note that the formatting of logging messages for final output to logs is
completely independent of how an individual logging message is constructed.
That can still use %-formatting, as shown here::

   >>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')
   2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message
   >>>

Logging calls (``logger.debug()``, ``logger.info()`` etc.) only take
positional parameters for the actual logging message itself, with keyword
parameters used only for determining options for how to handle the actual
logging call (e.g. the ``exc_info`` keyword parameter to indicate that
traceback information should be logged, or the ``extra`` keyword parameter
to indicate additional contextual information to be added to the log). So
you cannot directly make logging calls using :meth:`str.format` or
:class:`string.Template` syntax, because internally the logging package
uses %-formatting to merge the format string and the variable arguments.
There is no way of changing this while preserving backward compatibility, since
all logging calls which are out there in existing code will be using %-format
strings.

There is, however, a way that you can use {}- and $- formatting to construct
your individual log messages. Recall that for a message you can use an
arbitrary object as a message format string, and that the logging package will
call ``str()`` on that object to get the actual format string. Consider the
following two classes::

   class BraceMessage:
       def __init__(self, fmt, *args, **kwargs):
           self.fmt = fmt
           self.args = args
           self.kwargs = kwargs

       def __str__(self):
           return self.fmt.format(*self.args, **self.kwargs)

   class DollarMessage:
       def __init__(self, fmt, **kwargs):
           self.fmt = fmt
           self.kwargs = kwargs

       def __str__(self):
           from string import Template
           return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or
$-formatting to be used to build the actual "message" part which appears in the
formatted log output in place of "%(message)s" or "{message}" or "$message".
It's a little unwieldy to use the class names whenever you want to log
something, but it's quite palatable if you use an alias such as __ (double
underscore – not to be confused with _, the single underscore used as a
synonym/alias for :func:`gettext.gettext` or its brethren).

The above classes are not included in Python, though they're easy enough to
copy and paste into your own code. They can be used as follows (assuming that
they're declared in a module called ``wherever``):

.. code-block:: pycon

   >>> from wherever import BraceMessage as __
   >>> print(__('Message with {0} {name}', 2, name='placeholders'))
   Message with 2 placeholders
   >>> class Point: pass
   ...
   >>> p = Point()
   >>> p.x = 0.5
   >>> p.y = 0.5
   >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})',
   ...          point=p))
   Message with coordinates: (0.50, 0.50)
   >>> from wherever import DollarMessage as __
   >>> print(__('Message with $num $what', num=2, what='placeholders'))
   Message with 2 placeholders
   >>>

While the above examples use ``print()`` to show how the formatting works, you
would of course use ``logger.debug()`` or similar to actually log using this
approach.

One thing to note is that you pay no significant performance penalty with this
approach: the actual formatting happens not when you make the logging call, but
when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the
parentheses go around the format string and the arguments, not just the format
string. That's because the __ notation is just syntax sugar for a constructor
call to one of the XXXMessage classes.

If you prefer, you can use a :class:`LoggerAdapter` to achieve a similar effect
to the above, as in the following example::

   import logging

   class Message(object):
       def __init__(self, fmt, args):
           self.fmt = fmt
           self.args = args

       def __str__(self):
           return self.fmt.format(*self.args)

   class StyleAdapter(logging.LoggerAdapter):
       def __init__(self, logger, extra=None):
           super(StyleAdapter, self).__init__(logger, extra or {})

       def log(self, level, msg, *args, **kwargs):
           if self.isEnabledFor(level):
               msg, kwargs = self.process(msg, kwargs)
               self.logger._log(level, Message(msg, args), (), **kwargs)

   logger = StyleAdapter(logging.getLogger(__name__))

   def main():
       logger.debug('Hello, {}', 'world!')

   if __name__ == '__main__':
       logging.basicConfig(level=logging.DEBUG)
       main()

The above script should log the message ``Hello, world!`` when run with
Python 3.2 or later.


.. currentmodule:: logging

.. _custom-logrecord:

Customizing ``LogRecord``
-------------------------

Every logging event is represented by a :class:`LogRecord` instance.
When an event is logged and not filtered out by a logger's level, a
:class:`LogRecord` is created, populated with information about the event and
then passed to the handlers for that logger (and its ancestors, up to and
including the logger where further propagation up the hierarchy is disabled).
Before Python 3.2, there were only two places where this creation was done:

* :meth:`Logger.makeRecord`, which is called in the normal process of
  logging an event. This invoked :class:`LogRecord` directly to create an
  instance.
* :func:`makeLogRecord`, which is called with a dictionary containing
  attributes to be added to the LogRecord. This is typically invoked when a
  suitable dictionary has been received over the network (e.g. in pickle form
  via a :class:`~handlers.SocketHandler`, or in JSON form via an
  :class:`~handlers.HTTPHandler`).

This has usually meant that if you need to do anything special with a
:class:`LogRecord`, you've had to do one of the following.

* Create your own :class:`Logger` subclass, which overrides
  :meth:`Logger.makeRecord`, and set it using :func:`~logging.setLoggerClass`
  before any loggers that you care about are instantiated.
* Add a :class:`Filter` to a logger or handler, which does the
  necessary special manipulation you need when its
  :meth:`~Filter.filter` method is called.

The first approach would be a little unwieldy in the scenario where (say)
several different libraries wanted to do different things. Each would attempt
to set its own :class:`Logger` subclass, and the one which did this last would
win.

The second approach works reasonably well for many cases, but does not allow
you to e.g. use a specialized subclass of :class:`LogRecord`. Library
developers can set a suitable filter on their loggers, but they would have to
remember to do this every time they introduced a new logger (which they would
do simply by adding new packages or modules and doing ::

   logger = logging.getLogger(__name__)

at module level). It's probably one too many things to think about. Developers
could also add the filter to a :class:`~logging.NullHandler` attached to their
top-level logger, but this would not be invoked if an application developer
attached a handler to a lower-level library logger – so output from that
handler would not reflect the intentions of the library developer.

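As a concrete illustration of the second approach, here's a minimal sketch of
a filter which attaches contextual information to each record passing through
it (the attribute name and value here are arbitrary choices for this sketch)::

   import logging

   class ContextFilter(logging.Filter):
       """A filter which doesn't filter anything out, but adds a custom
       attribute to every record it sees."""
       def filter(self, record):
           record.hostname = 'myhost' # any contextual value you need
           return True # always allow the record through

   logger = logging.getLogger('some.library')
   logger.addFilter(ContextFilter())

With this in place, a formatter used for this logger's output can refer to
``%(hostname)s`` in its format string.
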
In Python 3.2 and later, :class:`~logging.LogRecord` creation is done through a
factory, which you can specify. The factory is just a callable you can set with
:func:`~logging.setLogRecordFactory`, and interrogate with
:func:`~logging.getLogRecordFactory`. The factory is invoked with the same
signature as the :class:`~logging.LogRecord` constructor, as :class:`LogRecord`
is the default setting for the factory.

This approach allows a custom factory to control all aspects of LogRecord
creation. For example, you could return a subclass, or just add some additional
attributes to the record once created, using a pattern similar to this::

   old_factory = logging.getLogRecordFactory()

   def record_factory(*args, **kwargs):
       record = old_factory(*args, **kwargs)
       record.custom_attribute = 0xdecafbad
       return record

   logging.setLogRecordFactory(record_factory)

This pattern allows different libraries to chain factories together, and as
long as they don't overwrite each other's attributes or unintentionally
overwrite the attributes provided as standard, there should be no surprises.
However, it should be borne in mind that each link in the chain adds run-time
overhead to all logging operations, and the technique should only be used when
the use of a :class:`Filter` does not provide the desired result.

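For instance, two separate libraries could each wrap whatever factory is
current when they initialise (a sketch - the factory-wrapping helper and the
attribute names are invented purely for illustration)::

   import logging

   def wrap_factory(attr_name, value):
       # Return a new factory which delegates to the currently-set factory
       # and then adds a single attribute to the created record.
       inner = logging.getLogRecordFactory()
       def record_factory(*args, **kwargs):
           record = inner(*args, **kwargs)
           setattr(record, attr_name, value)
           return record
       return record_factory

   # Each "library" chains onto whatever factory was set before it.
   logging.setLogRecordFactory(wrap_factory('lib_one_flag', True))
   logging.setLogRecordFactory(wrap_factory('lib_two_flag', True))

After this, every record created carries both attributes, because each factory
in the chain calls the one it wrapped.
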

.. _zeromq-handlers:

Subclassing QueueHandler - a ZeroMQ example
-------------------------------------------

You can use a :class:`QueueHandler` subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
socket is created separately and passed to the handler (as its 'queue')::

   import zmq # using pyzmq, the Python binding for ZeroMQ
   import json # for serializing records portably

   ctx = zmq.Context()
   sock = zmq.Socket(ctx, zmq.PUB) # or zmq.PUSH, or other suitable value
   sock.bind('tcp://*:5556') # or wherever

   class ZeroMQSocketHandler(QueueHandler):
       def enqueue(self, record):
           data = json.dumps(record.__dict__)
           self.queue.send_string(data) # sockets expect bytes, so use the text-sending API

   handler = ZeroMQSocketHandler(sock)


Of course there are other ways of organizing this, for example passing in the
data needed by the handler to create the socket::

   class ZeroMQSocketHandler(QueueHandler):
       def __init__(self, uri, socktype=zmq.PUB, ctx=None):
           self.ctx = ctx or zmq.Context()
           socket = zmq.Socket(self.ctx, socktype)
           socket.bind(uri)
           QueueHandler.__init__(self, socket)

       def enqueue(self, record):
           data = json.dumps(record.__dict__)
           self.queue.send_string(data)

       def close(self):
           self.queue.close()


Subclassing QueueListener - a ZeroMQ example
--------------------------------------------

You can also subclass :class:`QueueListener` to get messages from other kinds
of queues, for example a ZeroMQ 'subscribe' socket. Here's an example::

   class ZeroMQSocketListener(QueueListener):
       def __init__(self, uri, *handlers, **kwargs):
           self.ctx = kwargs.pop('ctx', None) or zmq.Context()
           socket = zmq.Socket(self.ctx, zmq.SUB)
           socket.setsockopt_string(zmq.SUBSCRIBE, '') # subscribe to everything
           socket.connect(uri)
           QueueListener.__init__(self, socket, *handlers, **kwargs)

       def dequeue(self):
           msg = self.queue.recv_string()
           return logging.makeLogRecord(json.loads(msg))


.. seealso::

   Module :mod:`logging`
      API reference for the logging module.

   Module :mod:`logging.config`
      Configuration API for the logging module.

   Module :mod:`logging.handlers`
      Useful handlers included with the logging module.

   :ref:`A basic logging tutorial <logging-basic-tutorial>`

   :ref:`A more advanced logging tutorial <logging-advanced-tutorial>`


An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::

   LOGGING = {
       'version': 1,
       'disable_existing_loggers': True,
       'formatters': {
           'verbose': {
               'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
           },
           'simple': {
               'format': '%(levelname)s %(message)s'
           },
       },
       'filters': {
           'special': {
               '()': 'project.logging.SpecialFilter',
               'foo': 'bar',
           }
       },
       'handlers': {
           'null': {
               'level':'DEBUG',
               'class':'django.utils.log.NullHandler',
           },
           'console':{
               'level':'DEBUG',
               'class':'logging.StreamHandler',
               'formatter': 'simple'
           },
           'mail_admins': {
               'level': 'ERROR',
               'class': 'django.utils.log.AdminEmailHandler',
               'filters': ['special']
           }
       },
       'loggers': {
           'django': {
               'handlers':['null'],
               'propagate': True,
               'level':'INFO',
           },
           'django.request': {
               'handlers': ['mail_admins'],
               'level': 'ERROR',
               'propagate': False,
           },
           'myproject.custom': {
               'handlers': ['console', 'mail_admins'],
               'level': 'INFO',
               'filters': ['special']
           }
       }
   }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.6/topics/logging/#configuring-logging>`_
of the Django documentation.

.. _cookbook-rotator-namer:

Using a rotator and namer to customize log rotation processing
--------------------------------------------------------------

An example of how you can define a namer and rotator is given in the following
snippet, which shows zlib-based compression of the log file::

   import os
   import zlib

   def namer(name):
       return name + ".gz"

   def rotator(source, dest):
       with open(source, "rb") as sf:
           data = sf.read()
           compressed = zlib.compress(data, 9)
           with open(dest, "wb") as df:
               df.write(compressed)
       os.remove(source)

   rh = logging.handlers.RotatingFileHandler(...)
   rh.rotator = rotator
   rh.namer = namer

These are not "true" .gz files, as they are bare compressed data, with no
"container" such as you'd find in an actual gzip file. This snippet is just
for illustration purposes.

A more elaborate multiprocessing example
----------------------------------------

The following working example shows how logging can be used with multiprocessing
using configuration files. The configurations are fairly simple, but serve to
illustrate how more complex ones could be implemented in a real multiprocessing
scenario.

In the example, the main process spawns a listener process and some worker
processes. Each of the main process, the listener and the workers have three
separate configurations (the workers all share the same configuration). We can
see logging in the main process, how the workers log to a QueueHandler and how
the listener implements a QueueListener and a more complex logging
configuration, and arranges to dispatch events received via the queue to the
handlers specified in the configuration. Note that these configurations are
purely illustrative, but you should be able to adapt this example to your own
scenario.

Here's the script - the docstrings and the comments hopefully explain how it
works::

   import logging
   import logging.config
   import logging.handlers
   from multiprocessing import Process, Queue, Event, current_process
   import os
   import random
   import time

   class MyHandler:
       """
       A simple handler for logging events. It runs in the listener process and
       dispatches events to loggers based on the name in the received record,
       which then get dispatched, by the logging system, to the handlers
       configured for those loggers.
       """
       def handle(self, record):
           logger = logging.getLogger(record.name)
           # The process name is transformed just to show that it's the listener
           # doing the logging to files and console
           record.processName = '%s (for %s)' % (current_process().name, record.processName)
           logger.handle(record)

   def listener_process(q, stop_event, config):
       """
       This could be done in the main process, but is just done in a separate
       process for illustrative purposes.

       This initialises logging according to the specified configuration,
       starts the listener and waits for the main process to signal completion
       via the event. The listener is then stopped, and the process exits.
       """
       logging.config.dictConfig(config)
       listener = logging.handlers.QueueListener(q, MyHandler())
       listener.start()
       if os.name == 'posix':
           # On POSIX, the setup logger will have been configured in the
           # parent process, but should have been disabled following the
           # dictConfig call.
           # On Windows, since fork isn't used, the setup logger won't
           # exist in the child, so it would be created and the message
           # would appear - hence the "if posix" clause.
           logger = logging.getLogger('setup')
           logger.critical('Should not appear, because of disabled logger ...')
       stop_event.wait()
       listener.stop()

   def worker_process(config):
       """
       A number of these are spawned for the purpose of illustration. In
       practice, they could be a heterogeneous bunch of processes rather than
       ones which are identical to each other.

       This initialises logging according to the specified configuration,
       and logs a hundred messages with random levels to randomly selected
       loggers.

       A small sleep is added to allow other processes a chance to run. This
       is not strictly needed, but it mixes the output from the different
       processes a bit more than if it's left out.
       """
       logging.config.dictConfig(config)
       levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                 logging.CRITICAL]
       loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                  'spam', 'spam.ham', 'spam.ham.eggs']
       if os.name == 'posix':
           # On POSIX, the setup logger will have been configured in the
           # parent process, but should have been disabled following the
           # dictConfig call.
           # On Windows, since fork isn't used, the setup logger won't
           # exist in the child, so it would be created and the message
           # would appear - hence the "if posix" clause.
           logger = logging.getLogger('setup')
           logger.critical('Should not appear, because of disabled logger ...')
       for i in range(100):
           lvl = random.choice(levels)
           logger = logging.getLogger(random.choice(loggers))
           logger.log(lvl, 'Message no. %d', i)
           time.sleep(0.01)

   def main():
       q = Queue()
       # The main process gets a simple configuration which prints to the console.
       config_initial = {
           'version': 1,
           'formatters': {
               'detailed': {
                   'class': 'logging.Formatter',
                   'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
               }
           },
           'handlers': {
               'console': {
                   'class': 'logging.StreamHandler',
                   'level': 'INFO',
               },
           },
           'root': {
               'level': 'DEBUG',
               'handlers': ['console']
           },
       }
       # The worker process configuration is just a QueueHandler attached to the
       # root logger, which allows all messages to be sent to the queue.
       # We disable existing loggers to disable the "setup" logger used in the
       # parent process. This is needed on POSIX because the logger will
       # be there in the child following a fork().
       config_worker = {
           'version': 1,
           'disable_existing_loggers': True,
           'handlers': {
               'queue': {
                   'class': 'logging.handlers.QueueHandler',
                   'queue': q,
               },
           },
           'root': {
               'level': 'DEBUG',
               'handlers': ['queue']
           },
       }
       # The listener process configuration shows that the full flexibility of
       # logging configuration is available to dispatch events to handlers however
       # you want.
       # We disable existing loggers to disable the "setup" logger used in the
       # parent process. This is needed on POSIX because the logger will
       # be there in the child following a fork().
       config_listener = {
           'version': 1,
           'disable_existing_loggers': True,
           'formatters': {
               'detailed': {
                   'class': 'logging.Formatter',
                   'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
1505 },
1506 'simple': {
1507 'class': 'logging.Formatter',
1508 'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
1509 }
1510 },
1511 'handlers': {
1512 'console': {
1513 'class': 'logging.StreamHandler',
1514 'level': 'INFO',
1515 'formatter': 'simple',
1516 },
1517 'file': {
1518 'class': 'logging.FileHandler',
1519 'filename': 'mplog.log',
1520 'mode': 'w',
1521 'formatter': 'detailed',
1522 },
1523 'foofile': {
1524 'class': 'logging.FileHandler',
1525 'filename': 'mplog-foo.log',
1526 'mode': 'w',
1527 'formatter': 'detailed',
1528 },
1529 'errors': {
1530 'class': 'logging.FileHandler',
1531 'filename': 'mplog-errors.log',
1532 'mode': 'w',
1533 'level': 'ERROR',
1534 'formatter': 'detailed',
1535 },
1536 },
1537 'loggers': {
1538 'foo': {
Serhiy Storchakaf47036c2013-12-24 11:04:36 +02001539 'handlers': ['foofile']
Vinay Sajip0292fa92012-04-08 01:49:12 +01001540 }
1541 },
1542 'root': {
1543 'level': 'DEBUG',
1544 'handlers': ['console', 'file', 'errors']
1545 },
1546 }
1547 # Log some initial events, just to show that logging in the parent works
1548 # normally.
1549 logging.config.dictConfig(config_initial)
1550 logger = logging.getLogger('setup')
1551 logger.info('About to create workers ...')
1552 workers = []
1553 for i in range(5):
1554 wp = Process(target=worker_process, name='worker %d' % (i + 1),
1555 args=(config_worker,))
1556 workers.append(wp)
1557 wp.start()
1558 logger.info('Started worker: %s', wp.name)
1559 logger.info('About to create listener ...')
1560 stop_event = Event()
1561 lp = Process(target=listener_process, name='listener',
1562 args=(q, stop_event, config_listener))
1563 lp.start()
1564 logger.info('Started listener')
1565 # We now hang around for the workers to finish their work.
1566 for wp in workers:
1567 wp.join()
1568 # Workers all done, listening can now stop.
1569 # Logging in the parent still works normally.
1570 logger.info('Telling listener to stop ...')
1571 stop_event.set()
1572 lp.join()
1573 logger.info('All done.')
1574
1575 if __name__ == '__main__':
1576 main()
1577

Inserting a BOM into messages sent to a SysLogHandler
-----------------------------------------------------

`RFC 5424 <http://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <http://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 3.1, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 3.2.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      'ASCII section\ufeffUnicode section'

   The Unicode code point U+FEFF, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``b'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

The formatted message *will* be encoded using UTF-8 encoding by
``SysLogHandler``. If you follow the above rules, you should be able to produce
RFC 5424-compliant messages. If you don't, logging may not complain, but your
messages will not be RFC 5424-compliant, and your syslog daemon may complain.
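
As a concrete sketch of the steps above, the following sets up such a handler
and formatter. The daemon address and the choice of placeholders are
assumptions -- adapt them to your own syslog setup (e.g. ``'/dev/log'`` on
many Linux systems).

```python
import logging
import logging.handlers

# Hypothetical address -- point this at your actual syslog daemon.
handler = logging.handlers.SysLogHandler(address=('localhost', 514))

# A pure-ASCII placeholder before U+FEFF, arbitrary Unicode after it.
formatter = logging.Formatter('%(levelname)s\ufeff%(message)s')
handler.setFormatter(formatter)

logger = logging.getLogger('syslog-demo')
logger.addHandler(handler)
```

When the handler emits a record, the formatted string is encoded as UTF-8,
which turns the single U+FEFF code point into the three-byte BOM required by
the RFC.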


Implementing structured logging
-------------------------------

Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
messages in a structured format which *is* capable of being parsed by a program
(without needing complex regular expressions to parse the log message). This is
straightforward to achieve using the logging package. There are a number of
ways in which this could be achieved, but the following is a simple approach
which uses JSON to serialise the event in a machine-parseable manner::

    import json
    import logging

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage  # optional, to improve readability

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints::

    message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of
Python used.

If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super(Encoder, self).default(o)

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage  # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints::

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.
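
Since the payload after the ``>>>`` marker is plain JSON, it can be recovered
programmatically. Here is a minimal sketch of a consumer; the sample line and
the ``parse_structured`` name are purely illustrative.

```python
import json

def parse_structured(line):
    # Split the human-readable message from the JSON payload.
    message, sep, payload = line.partition(' >>> ')
    if not sep:
        # No structured payload present on this line.
        return message, {}
    return message, json.loads(payload)

msg, data = parse_structured('message 1 >>> {"foo": "bar", "num": 123}')
```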


.. _custom-handlers:

.. currentmodule:: logging.config

Customizing handlers with :func:`dictConfig`
--------------------------------------------

There are times when you want to customize logging handlers in particular ways,
and if you use :func:`dictConfig` you may be able to do this without
subclassing. As an example, consider that you may want to set the ownership of a
log file. On POSIX, this is easily done using :func:`shutil.chown`, but the file
handlers in the stdlib don't offer built-in support. You can customize handler
creation using a plain function such as::

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to :func:`dictConfig`,
that a logging handler be created by calling this function::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the ``pulse`` user and group,
just for the purposes of illustration. Putting it together into a working
script, ``chowntest.py``::

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file': {
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level': 'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as ``root``::

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
makes an appearance. This approach should work with any Python version that
supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
versions, you would need to implement the actual ownership change using e.g.
:func:`os.chown`.

In practice, the handler-creating function may be in a utility module somewhere
in your project. Instead of the line in the configuration::

    '()': owned_file_handler,

you could use e.g.::

    '()': 'ext://project.util.owned_file_handler',

where ``project.util`` can be replaced with the actual name of the package
where the function resides. In the above working script, using
``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
is resolved by :func:`dictConfig` from the ``ext://`` specification.

This example hopefully also points the way to how you could implement other
types of file change - e.g. setting specific POSIX permission bits - in the
same way, using :func:`os.chmod`.

Of course, the approach could also be extended to types of handler other than a
:class:`~logging.FileHandler` - for example, one of the rotating file handlers,
or a different type of handler altogether.
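
For instance, here is a sketch of a creator function which restricts the
permission bits of the log file via :func:`os.chmod` before handing it to a
regular :class:`~logging.FileHandler`. The function name and the ``0o600``
mode are illustrative choices, not part of the stdlib.

```python
import logging
import os
import stat
import tempfile

def secure_file_handler(filename, mode='a', encoding=None, permissions=0o600):
    # Ensure the file exists, then restrict its permission bits before
    # handing it to a regular FileHandler.
    if not os.path.exists(filename):
        open(filename, 'a').close()
    os.chmod(filename, permissions)
    return logging.FileHandler(filename, mode, encoding)

# Demonstration in a temporary directory.
tmpdir = tempfile.mkdtemp()
logfile = os.path.join(tmpdir, 'chmodtest.log')
handler = secure_file_handler(logfile, 'w')
mode_bits = stat.S_IMODE(os.stat(logfile).st_mode)
handler.close()
```

Such a function can be wired into a :func:`dictConfig` configuration through
the ``'()'`` key, exactly as ``owned_file_handler`` is above.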


.. currentmodule:: logging

.. _formatting-styles:

Using particular formatting styles throughout your application
--------------------------------------------------------------

In Python 3.2, the :class:`~logging.Formatter` gained a ``style`` keyword
parameter which, while defaulting to ``%`` for backward compatibility, allowed
the specification of ``{`` or ``$`` to support the formatting approaches
supported by :meth:`str.format` and :class:`string.Template`. Note that this
governs the formatting of logging messages for final output to logs, and is
completely orthogonal to how an individual logging message is constructed.

Logging calls (:meth:`~Logger.debug`, :meth:`~Logger.info` etc.) only take
positional parameters for the actual logging message itself, with keyword
parameters used only for determining options for how to handle the logging call
(e.g. the ``exc_info`` keyword parameter to indicate that traceback information
should be logged, or the ``extra`` keyword parameter to indicate additional
contextual information to be added to the log). So you cannot directly make
logging calls using :meth:`str.format` or :class:`string.Template` syntax,
because internally the logging package uses %-formatting to merge the format
string and the variable arguments. There would be no changing this while
preserving backward compatibility, since all logging calls which are out there
in existing code will be using %-format strings.

There have been suggestions to associate format styles with specific loggers,
but that approach also runs into backward compatibility problems because any
existing code could be using a given logger name and using %-formatting.

For logging to work interoperably between any third-party libraries and your
code, decisions about formatting need to be made at the level of the
individual logging call. This opens up a couple of ways in which alternative
formatting styles can be accommodated.


Using LogRecord factories
^^^^^^^^^^^^^^^^^^^^^^^^^

In Python 3.2, along with the :class:`~logging.Formatter` changes mentioned
above, the logging package gained the ability to allow users to set their own
:class:`LogRecord` subclasses, using the :func:`setLogRecordFactory` function.
You can use this to set your own subclass of :class:`LogRecord`, which does the
Right Thing by overriding the :meth:`~LogRecord.getMessage` method. The base
class implementation of this method is where the ``msg % args`` formatting
happens, and where you can substitute your alternate formatting; however, you
should be careful to support all formatting styles and allow %-formatting as
the default, to ensure interoperability with other code. Care should also be
taken to call ``str(self.msg)``, just as the base implementation does.

Refer to the reference documentation on :func:`setLogRecordFactory` and
:class:`LogRecord` for more information.
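
By way of illustration, here is a deliberately simplified sketch of such a
factory. A production version should fall back to %-formatting for messages
coming from third-party code, as cautioned above; this one applies
{}-formatting unconditionally.

```python
import logging

class BraceLogRecord(logging.LogRecord):
    def getMessage(self):
        # Call str() on the message, just as the base implementation does,
        # but merge the arguments with str.format() instead of '%'.
        msg = str(self.msg)
        if self.args:
            msg = msg.format(*self.args)
        return msg

old_factory = logging.getLogRecordFactory()
logging.setLogRecordFactory(BraceLogRecord)

# Records created by loggers now use {}-formatting for their messages.
record = logging.getLogRecordFactory()('demo', logging.INFO, __file__, 1,
                                       'Hello, {}!', ('world',), None)
message = record.getMessage()

logging.setLogRecordFactory(old_factory)  # restore the default
```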


Using custom message objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is another, perhaps simpler way that you can use {}- and $- formatting to
construct your individual log messages. You may recall (from
:ref:`arbitrary-object-messages`) that when logging you can use an arbitrary
object as a message format string, and that the logging package will call
:func:`str` on that object to get the actual format string. Consider the
following two classes::

    class BraceMessage(object):
        def __init__(self, fmt, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            return self.fmt.format(*self.args, **self.kwargs)

    class DollarMessage(object):
        def __init__(self, fmt, **kwargs):
            self.fmt = fmt
            self.kwargs = kwargs

        def __str__(self):
            from string import Template
            return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or
$-formatting to be used to build the actual "message" part which appears in the
formatted log output in place of "%(message)s" or "{message}" or "$message".
If you find it a little unwieldy to use the class names whenever you want to log
something, you can make it more palatable if you use an alias such as ``M`` or
``_`` for the message (or perhaps ``__``, if you are using ``_`` for
localization).

Examples of this approach are given below. Firstly, formatting with
:meth:`str.format`::

    >>> __ = BraceMessage
    >>> print(__('Message with {0} {1}', 2, 'placeholders'))
    Message with 2 placeholders
    >>> class Point: pass
    ...
    >>> p = Point()
    >>> p.x = 0.5
    >>> p.y = 0.5
    >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p))
    Message with coordinates: (0.50, 0.50)

Secondly, formatting with :class:`string.Template`::

    >>> __ = DollarMessage
    >>> print(__('Message with $num $what', num=2, what='placeholders'))
    Message with 2 placeholders
    >>>

One thing to note is that you pay no significant performance penalty with this
approach: the actual formatting happens not when you make the logging call, but
when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the
parentheses go around the format string and the arguments, not just the format
string. That's because the ``__`` notation is just syntax sugar for a
constructor call to one of the ``XXXMessage`` classes shown above.


.. _filters-dictconfig:

.. currentmodule:: logging.config

Configuring filters with :func:`dictConfig`
-------------------------------------------

You *can* configure filters using :func:`~logging.config.dictConfig`, though it
might not be obvious at first glance how to do it (hence this recipe). Since
:class:`~logging.Filter` is the only filter class included in the standard
library, and it is unlikely to cater to many requirements (it's only there as a
base class), you will typically need to define your own :class:`~logging.Filter`
subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
specify the ``()`` key in the configuration dictionary for the filter,
specifying a callable which will be used to create the filter (a class is the
most obvious, but you can provide any callable which returns a
:class:`~logging.Filter` instance). Here is a complete example::

    import logging
    import logging.config
    import sys

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            if self.param is None:
                allow = True
            else:
                allow = self.param not in record.msg
            if allow:
                record.msg = 'changed: ' + record.msg
            return allow

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {
                '()': MyFilter,
                'param': 'noshow',
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'filters': ['myfilter']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.debug('hello')
        logging.debug('hello - noshow')

This example shows how you can pass configuration data to the callable which
constructs the instance, in the form of keyword parameters. When run, the above
script will print::

    changed: hello

which shows that the filter is working as configured.

A couple of extra points to note:

* If you can't refer to the callable directly in the configuration (e.g. if it
  lives in a different module, and you can't import it directly where the
  configuration dictionary is), you can use the form ``ext://...`` as described
  in :ref:`logging-config-dict-externalobj`. For example, you could have used
  the text ``'ext://__main__.MyFilter'`` instead of ``MyFilter`` in the above
  example.

* As well as for filters, this technique can also be used to configure custom
  handlers and formatters. See :ref:`logging-config-dict-userdef` for more
  information on how logging supports using user-defined objects in its
  configuration, and see the other cookbook recipe :ref:`custom-handlers` above.
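
As noted above, the ``'()'`` key does not have to name a class -- any callable
returning a filter will do. Here is a minimal sketch of a factory-function
variant; the ``SubstringFilter`` and ``create_filter`` names are hypothetical.

```python
import logging
import logging.config

class SubstringFilter(logging.Filter):
    def __init__(self, param=None):
        super(SubstringFilter, self).__init__()
        self.param = param

    def filter(self, record):
        # Allow the record unless the configured substring appears in it.
        return self.param is None or self.param not in record.getMessage()

def create_filter(param=None):
    # A plain function serving as the '()' callable in a dictConfig filter.
    return SubstringFilter(param)

LOGGING = {
    'version': 1,
    'filters': {
        'myfilter': {
            '()': create_filter,
            'param': 'noshow',
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'filters': ['myfilter']
        }
    },
    'root': {'level': 'DEBUG', 'handlers': ['console']},
}
```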


.. _custom-format-exception:

Customized exception formatting
-------------------------------

There might be times when you want to do customized exception formatting - for
argument's sake, let's say you want exactly one line per logged event, even
when exception information is present. You can do this with a custom formatter
class, as shown in the following example::

    import logging

    class OneLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            """
            Format an exception so that it prints on a single line.
            """
            result = super(OneLineExceptionFormatter, self).formatException(exc_info)
            return repr(result)  # or format into one line however you want to

        def format(self, record):
            s = super(OneLineExceptionFormatter, self).format(record)
            if record.exc_text:
                s = s.replace('\n', '') + '|'
            return s

    def configure_logging():
        fh = logging.FileHandler('output.txt', 'w')
        f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
                                      '%d/%m/%Y %H:%M:%S')
        fh.setFormatter(f)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(fh)

    def main():
        configure_logging()
        logging.info('Sample message')
        try:
            x = 1 / 0
        except ZeroDivisionError as e:
            logging.exception('ZeroDivisionError: %s', e)

    if __name__ == '__main__':
        main()

When run, this produces a file with exactly two lines::

    28/01/2015 07:21:23|INFO|Sample message|
    28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n  File "logtest7.py", line 30, in main\n    x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|

While the above treatment is simplistic, it points the way to how exception
information can be formatted to your liking. The :mod:`traceback` module may be
helpful for more specialized needs.
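
For instance, :func:`traceback.format_exception` gives you the traceback as a
list of strings which you can flatten however you like. A small sketch follows;
the ``' | '`` separator and the helper name are arbitrary choices.

```python
import sys
import traceback

def one_line_exception(exc_info):
    # format_exception returns a list of strings which may contain
    # embedded newlines; flatten everything onto a single line.
    text = ''.join(traceback.format_exception(*exc_info))
    return ' | '.join(line.strip() for line in text.splitlines())

try:
    1 / 0
except ZeroDivisionError:
    flattened = one_line_exception(sys.exc_info())
```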

.. _spoken-messages:

Speaking logging messages
-------------------------

There might be situations when it is desirable to have logging messages rendered
in an audible rather than a visible format. This is easy to do if you have
text-to-speech (TTS) functionality available in your system, even if it doesn't
have a Python binding. Most TTS systems have a command line program you can run,
and this can be invoked from a handler using :mod:`subprocess`. It's assumed
here that TTS command line programs won't expect to interact with users or take
a long time to complete, and that the frequency of logged messages will be not
so high as to swamp the user with messages, and that it's acceptable to have the
messages spoken one at a time rather than concurrently. The example
implementation below waits for one message to be spoken before the next is
processed, and this might cause other handlers to be kept waiting. Here is a
short example showing the approach, which assumes that the ``espeak`` TTS
package is available::

    import logging
    import subprocess
    import sys

    class TTSHandler(logging.Handler):
        def emit(self, record):
            msg = self.format(record)
            # Speak slowly in a female English voice
            cmd = ['espeak', '-s150', '-ven+f3', msg]
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            # wait for the program to finish
            p.communicate()

    def configure_logging():
        h = TTSHandler()
        root = logging.getLogger()
        root.addHandler(h)
        # the default formatter just returns the message
        root.setLevel(logging.DEBUG)

    def main():
        logging.info('Hello')
        logging.debug('Goodbye')

    if __name__ == '__main__':
        configure_logging()
        sys.exit(main())

When run, this script should say "Hello" and then "Goodbye" in a female voice.

The above approach can, of course, be adapted to other TTS systems and even
other systems altogether which can process messages via external programs run
from a command line.

Vinay Sajipff1f3d92015-10-10 00:52:35 +01002154
2155.. _buffered-logging:
2156
2157Buffering logging messages and outputting them conditionally
2158------------------------------------------------------------
2159
2160There might be situations where you want to log messages in a temporary area
2161and only output them if a certain condition occurs. For example, you may want to
2162start logging debug events in a function, and if the function completes without
2163errors, you don't want to clutter the log with the collected debug information,
2164but if there is an error, you want all the debug information to be output as well
2165as the error.
2166
2167Here is an example which shows how you could do this using a decorator for your
2168functions where you want logging to behave this way. It makes use of the
2169:class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
2170until some condition occurs, at which point the buffered events are ``flushed``
2171- passed to another handler (the ``target`` handler) for processing. By default,
2172the ``MemoryHandler`` flushed when its buffer gets filled up or an event whose
2173level is greater than or equal to a specified threshold is seen. You can use this
2174recipe with a more specialised subclass of ``MemoryHandler`` if you want custom
2175flushing behavior.
2176
2177The example script has a simple function, ``foo``, which just cycles through
2178all the logging levels, writing to ``sys.stderr`` to say what level it's about
2179to log at, and then actually logging a message that that level. You can pass a
2180parameter to ``foo`` which, if true, will log at ERROR and CRITICAL levels -
2181otherwise, it only logs at DEBUG, INFO and WARNING levels.
2182
2183The script just arranges to decorate ``foo`` with a decorator which will do the
2184conditional logging that's required. The decorator takes a logger as a parameter
2185and attaches a memory handler for the duration of the call to the decorated
2186function. The decorator can be additionally parameterised using a target handler,
2187a level at which flushing should occur, and a capacity for the buffer. These
2188default to a :class:`~logging.StreamHandler` which writes to ``sys.stderr``,
2189``logging.ERROR`` and ``100`` respectively.
2190
2191Here's the script::
2192
    import logging
    from logging.handlers import MemoryHandler
    import sys

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())

    def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
        if target_handler is None:
            target_handler = logging.StreamHandler()
        if flush_level is None:
            flush_level = logging.ERROR
        if capacity is None:
            capacity = 100
        handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

        def decorator(fn):
            def wrapper(*args, **kwargs):
                logger.addHandler(handler)
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logger.exception('call failed')
                    raise
                finally:
                    super(MemoryHandler, handler).flush()
                    logger.removeHandler(handler)
            return wrapper

        return decorator

    def write_line(s):
        sys.stderr.write('%s\n' % s)

    def foo(fail=False):
        write_line('about to log at DEBUG ...')
        logger.debug('Actually logged at DEBUG')
        write_line('about to log at INFO ...')
        logger.info('Actually logged at INFO')
        write_line('about to log at WARNING ...')
        logger.warning('Actually logged at WARNING')
        if fail:
            write_line('about to log at ERROR ...')
            logger.error('Actually logged at ERROR')
            write_line('about to log at CRITICAL ...')
            logger.critical('Actually logged at CRITICAL')
        return fail

    decorated_foo = log_if_errors(logger)(foo)

    if __name__ == '__main__':
        logger.setLevel(logging.DEBUG)
        write_line('Calling undecorated foo with False')
        assert not foo(False)
        write_line('Calling undecorated foo with True')
        assert foo(True)
        write_line('Calling decorated foo with False')
        assert not decorated_foo(False)
        write_line('Calling decorated foo with True')
        assert decorated_foo(True)

When this script is run, the following output should be observed::

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose
severity is ERROR or greater, but in that case, any previous events at lower
severities are also logged.

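The buffering behaviour that makes this work can also be observed in isolation,
without the decorator. The following sketch (the logger name ``memory_demo`` and
the ``StringIO`` target are illustrative, not part of the example above) buffers
records in a ``MemoryHandler`` and only passes them to the target handler once a
record of ERROR severity arrives:

```python
import io
import logging
from logging.handlers import MemoryHandler

stream = io.StringIO()                      # capture the target's output for inspection
target = logging.StreamHandler(stream)
target.setFormatter(logging.Formatter('%(levelname)s:%(message)s'))

# Buffer up to 100 records; hand them to the target only when a record of
# severity ERROR or above is seen.
memory = MemoryHandler(100, flushLevel=logging.ERROR, target=target)

logger = logging.getLogger('memory_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(memory)

logger.debug('buffered, not yet written')
logger.info('also buffered')
assert stream.getvalue() == ''              # nothing has reached the target yet
logger.error('boom')                        # ERROR triggers the flush
print(stream.getvalue())
```

The lower-severity records only appear in the output because the ERROR event
arrived; if it never did, the buffer would simply be discarded when the handler
is closed.
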
You can of course use the conventional means of decoration::

    @log_if_errors(logger)
    def foo(fail=False):
        ...

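Because the decorator is parameterised, the flushing condition can also be
tightened. The sketch below uses a trimmed-down copy of the ``log_if_errors``
decorator (repeated so the snippet is self-contained, and without the
``except`` clause) with a ``flush_level`` of CRITICAL, so ERROR events alone
are not enough to trigger output; the ``StringIO`` target and the logger name
``critical_demo`` are illustrative:

```python
import io
import logging
from logging.handlers import MemoryHandler

def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
    # A trimmed-down version of the decorator from the script above.
    if target_handler is None:
        target_handler = logging.StreamHandler()
    if flush_level is None:
        flush_level = logging.ERROR
    if capacity is None:
        capacity = 100
    handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

    def decorator(fn):
        def wrapper(*args, **kwargs):
            logger.addHandler(handler)
            try:
                return fn(*args, **kwargs)
            finally:
                super(MemoryHandler, handler).flush()  # discard anything unflushed
                logger.removeHandler(handler)
        return wrapper

    return decorator

stream = io.StringIO()
target = logging.StreamHandler(stream)
target.setFormatter(logging.Formatter('%(levelname)s %(message)s'))

logger = logging.getLogger('critical_demo')
logger.setLevel(logging.DEBUG)

@log_if_errors(logger, target_handler=target, flush_level=logging.CRITICAL)
def risky(severity):
    logger.debug('buffered detail')
    logger.log(severity, 'the interesting event')

risky(logging.ERROR)     # ERROR < CRITICAL: buffer is discarded, nothing written
risky(logging.CRITICAL)  # CRITICAL triggers the flush; the earlier DEBUG comes too
print(stream.getvalue())
```

Only the second call produces output, and it includes the DEBUG record that was
buffered during that call.
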
.. _utc-formatting:

Formatting times using UTC (GMT) via configuration
--------------------------------------------------

Sometimes you want to format times using UTC, which can be done using a class
such as ``UTCFormatter``, shown below::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

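For instance, the class can be attached to a handler directly, with no
configuration machinery involved. In this sketch (the logger name ``utc_demo``
is illustrative), setting ``converter`` to :func:`time.gmtime` makes
``%(asctime)s`` render in UTC rather than local time:

```python
import logging
import time

class UTCFormatter(logging.Formatter):
    converter = time.gmtime  # format %(asctime)s in UTC instead of local time

handler = logging.StreamHandler()
handler.setFormatter(UTCFormatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('utc_demo')
logger.addHandler(handler)
logger.warning('this timestamp is in UTC')
```
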
You can then use the ``UTCFormatter`` in your code instead of
:class:`~logging.Formatter`. If you want to do that via configuration, you can
use the :func:`~logging.config.dictConfig` API with an approach illustrated by
the following complete example::

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like::

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each
handler.