.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent. Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')
        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
        creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
        creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
        created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
        calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
        doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
        done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
        finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
        calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
        received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
        done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again. At that point, the only change needed is to
modify the severity level of the logger and/or handler to debug.
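
For example, re-enabling the dormant debug output can be a one-line change.
Here is a minimal sketch, reusing the ``ch`` console handler from the example
above (which handler or logger you adjust is, of course, up to you)::

    # Lower the console handler's threshold so the previously dormant
    # logger.debug() calls become visible again.
    ch.setLevel(logging.DEBUG)

    # Raising it again silences them without touching the logging calls:
    ch.setLevel(logging.ERROR)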

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/temp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time
    import os

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warn message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')


.. _network-logging:

Sending and receiving logging events across a network
------------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`SocketServer`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import SocketServer
    import struct


    class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = 1

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method and implementing your
alternative there, as well as adapting the above script to use your alternative
serialization.
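
For example, here is a minimal sketch of such an override which uses JSON
instead of pickle (the ``JSONSocketHandler`` name and the choice of attributes
to drop are illustrative assumptions; the receiver shown above would also need
adapting to decode JSON rather than calling ``pickle.loads``)::

    import json
    import struct
    import logging.handlers

    class JSONSocketHandler(logging.handlers.SocketHandler):
        def makePickle(self, record):
            # Mirror the standard implementation: merge the message into the
            # record dict, drop values which don't serialize cleanly, then
            # length-prefix the payload as the receiver expects.
            d = dict(record.__dict__)
            d['msg'] = record.getMessage()
            d['args'] = None
            d['exc_info'] = None
            s = json.dumps(d)
            return struct.pack('>L', len(s)) + s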


.. _context-info:

Adding contextual information to your logging output
-----------------------------------------------------

.. currentmodule:: logging

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client's username, or IP address). Although you could
use the *extra* parameter to achieve this, it's not always convenient to pass
the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is not a problem
in practice, when the number of :class:`Logger` instances is dependent on the
level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).
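
Here is a minimal sketch of such a class (the ``DynamicContext`` name and the
use of random values are illustrative assumptions - a real implementation would
look up the actual connection id or similar on each access)::

    import logging
    from random import choice

    class DynamicContext(object):
        """Looks enough like a dict for the logging machinery's purposes."""

        def __getitem__(self, name):
            # Values are computed afresh each time they are asked for.
            if name == 'connid':
                return choice(['conn-1', 'conn-2', 'conn-3'])
            return self.__dict__.get(name, '?')

        def __iter__(self):
            # The keys which will be merged into each LogRecord's __dict__.
            return iter(['connid'])

    logging.basicConfig(level=logging.INFO, format='%(connid)s %(message)s')
    logger = logging.getLogger(__name__)
    adapter = logging.LoggerAdapter(logger, DynamicContext())
    adapter.info('the connection id is generated on each call')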


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add information from the request - say, the remote IP address and the remote
user's username - to the ``LogRecord``, using attribute names such as 'ip' and
'user'. A format string which refers to those attributes can then be used to
include them in the output. Here's an example script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):

            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
-------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to the
file from your processes. The existing :class:`FileHandler` and subclasses do
not make use of :mod:`multiprocessing` at present, though they may do so in the
future. Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
http://bugs.python.org/issue3770).
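
As a rough illustration of that idea, here is a minimal sketch of such a
handler (the ``MultiProcessFileHandler`` name is an assumption of this recipe,
not a stdlib class; for the lock to be shared, the handler must be created
before the worker processes are started, so that they inherit it)::

    import logging
    import multiprocessing

    class MultiProcessFileHandler(logging.FileHandler):
        def __init__(self, filename, mode='a', encoding=None, delay=False):
            logging.FileHandler.__init__(self, filename, mode, encoding, delay)
            # A cross-process lock, in addition to the per-instance threading
            # lock which logging.Handler already provides.
            self.mp_lock = multiprocessing.Lock()

        def emit(self, record):
            with self.mp_lock:
                logging.FileHandler.emit(self, record)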


Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
        LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest file, which would otherwise
become ``.6``, is erased, so at most *backupCount* backups are kept.

Obviously this example sets the size limit much too small, as an extreme
example. In practice you would want to set *maxBytes* to an appropriate value.
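
For instance, you might allow each file to grow to about a megabyte before
rotating (the exact figure here is just an illustrative assumption)::

    handler = logging.handlers.RotatingFileHandler(
        LOG_FILENAME, maxBytes=1048576, backupCount=5)   # roughly 1 MB per file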

An example dictionary-based configuration
-----------------------------------------

Below is an example of a logging configuration dictionary - it's taken from
the `documentation on the Django project <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_.
This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
            },
            'simple': {
                'format': '%(levelname)s %(message)s'
            },
        },
        'filters': {
            'special': {
                '()': 'project.logging.SpecialFilter',
                'foo': 'bar',
            }
        },
        'handlers': {
            'null': {
                'level':'DEBUG',
                'class':'django.utils.log.NullHandler',
            },
            'console':{
                'level':'DEBUG',
                'class':'logging.StreamHandler',
                'formatter': 'simple'
            },
            'mail_admins': {
                'level': 'ERROR',
                'class': 'django.utils.log.AdminEmailHandler',
                'filters': ['special']
            }
        },
        'loggers': {
            'django': {
                'handlers':['null'],
                'propagate': True,
                'level':'INFO',
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'myproject.custom': {
                'handlers': ['console', 'mail_admins'],
                'level': 'INFO',
                'filters': ['special']
            }
        }
    }

For more information about this configuration, you can see the `relevant
section <https://docs.djangoproject.com/en/1.3/topics/logging/#configuring-logging>`_
of the Django documentation.

Inserting a BOM into messages sent to a SysLogHandler
------------------------------------------------------

`RFC 5424 <http://tools.ietf.org/html/rfc5424>`_ requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the `relevant
section of the specification <http://tools.ietf.org/html/rfc5424#section-6>`_.)

In Python 2.6 and 2.7, code was added to
:class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 2.7.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

#. Attach a :class:`~logging.Formatter` instance to your
   :class:`~logging.handlers.SysLogHandler` instance, with a format string
   such as::

      u'ASCII section\ufeffUnicode section'

   The Unicode code point ``u'\ufeff'``, when encoded using UTF-8, will be
   encoded as a UTF-8 BOM -- the byte-string ``'\xef\xbb\xbf'``.

#. Replace the ASCII section with whatever placeholders you like, but make sure
   that the data that appears in there after substitution is always ASCII (that
   way, it will remain unchanged after UTF-8 encoding).

#. Replace the Unicode section with whatever placeholders you like; if the data
   which appears there after substitution contains characters outside the ASCII
   range, that's fine -- it will be encoded using UTF-8.

If the formatted message is Unicode, it *will* be encoded using UTF-8 encoding
by ``SysLogHandler``. If you follow the above rules, you should be able to
produce RFC 5424-compliant messages. If you don't, logging may not complain,
but your messages will not be RFC 5424-compliant, and your syslog daemon may
complain.
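
Putting these rules together, a minimal sketch might look like the following
(the ``/dev/log`` socket address and the particular placeholders are
assumptions made for illustration; adjust them for your syslog daemon and
message layout)::

    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(address='/dev/log')
    # ASCII-only placeholders before the BOM, Unicode-capable ones after it.
    handler.setFormatter(logging.Formatter(u'%(levelname)s \ufeff%(message)s'))

    logger = logging.getLogger('syslog_example')
    logger.addHandler(handler)
    logger.warning(u'caf\xe9')   # the part after the BOM is encoded using UTF-8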


Implementing structured logging
-------------------------------

Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
messages in a structured format which *is* capable of being parsed by a program
(without needing complex regular expressions to parse the log message). This is
straightforward to achieve using the logging package. There are a number of
ways in which this could be achieved, but the following is a simple approach
which uses JSON to serialise the event in a machine-parseable manner::

    import json
    import logging

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

    _ = StructuredMessage   # optional, to improve readability

    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints::

    message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of
Python used.
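
If a stable ordering matters to you (for example, to make output easier to
compare across runs), one option - a suggestion of this recipe rather than a
requirement of the logging package - is to have the JSON encoder sort the
keys::

    # in StructuredMessage.__str__:
    return '%s >>> %s' % (self.message, json.dumps(self.kwargs, sort_keys=True))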

If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::

    from __future__ import unicode_literals

    import json
    import logging

    # This next bit is to ensure the script runs unchanged on 2.x and 3.x
    try:
        unicode
    except NameError:
        unicode = str

    class Encoder(json.JSONEncoder):
        def default(self, o):
            if isinstance(o, set):
                return tuple(o)
            elif isinstance(o, unicode):
                return o.encode('unicode_escape').decode('ascii')
            return super(Encoder, self).default(o)

    class StructuredMessage(object):
        def __init__(self, message, **kwargs):
            self.message = message
            self.kwargs = kwargs

        def __str__(self):
            s = Encoder().encode(self.kwargs)
            return '%s >>> %s' % (self.message, s)

    _ = StructuredMessage   # optional, to improve readability

    def main():
        logging.basicConfig(level=logging.INFO, format='%(message)s')
        logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

    if __name__ == '__main__':
        main()

When the above script is run, it prints::

    message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.


.. _custom-handlers:

.. currentmodule:: logging.config

Customizing handlers with :func:`dictConfig`
--------------------------------------------

There are times when you want to customize logging handlers in particular ways,
and if you use :func:`dictConfig` you may be able to do this without
subclassing. As an example, consider that you may want to set the ownership of a
log file. On POSIX, this is easily done using :func:`shutil.chown`, but the file
handlers in the stdlib don't offer built-in support. You can customize handler
creation using a plain function such as::

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to :func:`dictConfig`,
that a logging handler be created by calling this function::

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file':{
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level':'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

In this example I am setting the ownership using the ``pulse`` user and group,
just for the purposes of illustration. Putting it together into a working
script, ``chowntest.py``::

    import logging, logging.config, os, shutil

    def owned_file_handler(filename, mode='a', encoding=None, owner=None):
        if owner:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            shutil.chown(filename, *owner)
        return logging.FileHandler(filename, mode, encoding)

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
            },
        },
        'handlers': {
            'file':{
                # The values below are popped from this dictionary and
                # used to create the handler, set the handler's level and
                # its formatter.
                '()': owned_file_handler,
                'level':'DEBUG',
                'formatter': 'default',
                # The values below are passed to the handler creator callable
                # as keyword arguments.
                'owner': ['pulse', 'pulse'],
                'filename': 'chowntest.log',
                'mode': 'w',
                'encoding': 'utf-8',
            },
        },
        'root': {
            'handlers': ['file'],
            'level': 'DEBUG',
        },
    }

    logging.config.dictConfig(LOGGING)
    logger = logging.getLogger('mylogger')
    logger.debug('A debug message')

To run this, you will probably need to run as ``root``::

    $ sudo python3.3 chowntest.py
    $ cat chowntest.log
    2013-11-05 09:34:51,128 DEBUG mylogger A debug message
    $ ls -l chowntest.log
    -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
makes an appearance. This approach should work with any Python version that
supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
versions, you would need to implement the actual ownership change using e.g.
:func:`os.chown`.

In practice, the handler-creating function may be in a utility module somewhere
in your project. Instead of the line in the configuration::

    '()': owned_file_handler,

you could use e.g.::

    '()': 'ext://project.util.owned_file_handler',

where ``project.util`` can be replaced with the actual name of the package
where the function resides. In the above working script, using
``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
is resolved by :func:`dictConfig` from the ``ext://`` specification.

This example hopefully also points the way to how you could implement other
types of file change - e.g. setting specific POSIX permission bits - in the
same way, using :func:`os.chmod`.
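
For example, a minimal sketch of that variation (the
``permissioned_file_handler`` name and the ``0o640`` mode are illustrative
assumptions) might look like this, and would be referenced from the
configuration in exactly the same way as ``owned_file_handler`` above::

    import logging, os

    def permissioned_file_handler(filename, mode='a', encoding=None, permissions=None):
        if permissions is not None:
            if not os.path.exists(filename):
                open(filename, 'a').close()
            os.chmod(filename, permissions)   # e.g. permissions=0o640
        return logging.FileHandler(filename, mode, encoding)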

Of course, the approach could also be extended to types of handler other than a
:class:`~logging.FileHandler` - for example, one of the rotating file handlers,
or a different type of handler altogether.


.. _filters-dictconfig:

Configuring filters with :func:`dictConfig`
-------------------------------------------

You *can* configure filters using :func:`~logging.config.dictConfig`, though it
might not be obvious at first glance how to do it (hence this recipe). Since
:class:`~logging.Filter` is the only filter class included in the standard
library, and it is unlikely to cater to many requirements (it's only there as a
base class), you will typically need to define your own :class:`~logging.Filter`
subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
specify the ``()`` key in the configuration dictionary for the filter,
specifying a callable which will be used to create the filter (a class is the
most obvious, but you can provide any callable which returns a
:class:`~logging.Filter` instance). Here is a complete example::

    import logging
    import logging.config
    import sys

    class MyFilter(logging.Filter):
        def __init__(self, param=None):
            self.param = param

        def filter(self, record):
            if self.param is None:
                allow = True
            else:
                allow = self.param not in record.msg
            if allow:
                record.msg = 'changed: ' + record.msg
            return allow

    LOGGING = {
        'version': 1,
        'filters': {
            'myfilter': {
                '()': MyFilter,
                'param': 'noshow',
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'filters': ['myfilter']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.debug('hello')
        logging.debug('hello - noshow')

This example shows how you can pass configuration data to the callable which
constructs the instance, in the form of keyword parameters. When run, the above
script will print::

    changed: hello

which shows that the filter is working as configured.

A couple of extra points to note:

* If you can't refer to the callable directly in the configuration (e.g. if it
  lives in a different module, and you can't import it directly where the
  configuration dictionary is), you can use the form ``ext://...`` as described
  in :ref:`logging-config-dict-externalobj`. For example, you could have used
  the text ``'ext://__main__.MyFilter'`` instead of ``MyFilter`` in the above
  example.

* As well as for filters, this technique can also be used to configure custom
  handlers and formatters. See :ref:`logging-config-dict-userdef` for more
  information on how logging supports using user-defined objects in its
  configuration, and see the other cookbook recipe :ref:`custom-handlers` above.
1056