.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been
found useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent. Here is a main module::

   import logging
   import auxiliary_module

   # create logger with 'spam_application'
   logger = logging.getLogger('spam_application')
   logger.setLevel(logging.DEBUG)
   # create file handler which logs even debug messages
   fh = logging.FileHandler('spam.log')
   fh.setLevel(logging.DEBUG)
   # create console handler with a higher log level
   ch = logging.StreamHandler()
   ch.setLevel(logging.ERROR)
   # create formatter and add it to the handlers
   formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
   fh.setFormatter(formatter)
   ch.setFormatter(formatter)
   # add the handlers to the logger
   logger.addHandler(fh)
   logger.addHandler(ch)

   logger.info('creating an instance of auxiliary_module.Auxiliary')
   a = auxiliary_module.Auxiliary()
   logger.info('created an instance of auxiliary_module.Auxiliary')
   logger.info('calling auxiliary_module.Auxiliary.do_something')
   a.do_something()
   logger.info('finished auxiliary_module.Auxiliary.do_something')
   logger.info('calling auxiliary_module.some_function()')
   auxiliary_module.some_function()
   logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

   import logging

   # create logger
   module_logger = logging.getLogger('spam_application.auxiliary')

   class Auxiliary:
       def __init__(self):
           self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
           self.logger.info('creating an instance of Auxiliary')

       def do_something(self):
           self.logger.info('doing something')
           a = 1 + 1
           self.logger.info('done doing something')

   def some_function():
       module_logger.info('received a call to "some_function"')

The output looks like this::

   2005-03-23 23:47:11,663 - spam_application - INFO -
      creating an instance of auxiliary_module.Auxiliary
   2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
      creating an instance of Auxiliary
   2005-03-23 23:47:11,665 - spam_application - INFO -
      created an instance of auxiliary_module.Auxiliary
   2005-03-23 23:47:11,668 - spam_application - INFO -
      calling auxiliary_module.Auxiliary.do_something
   2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
      doing something
   2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
      done doing something
   2005-03-23 23:47:11,670 - spam_application - INFO -
      finished auxiliary_module.Auxiliary.do_something
   2005-03-23 23:47:11,671 - spam_application - INFO -
      calling auxiliary_module.some_function()
   2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
      received a call to "some_function"
   2005-03-23 23:47:11,673 - spam_application - INFO -
      done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to
a text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

   import logging

   logger = logging.getLogger('simple_example')
   logger.setLevel(logging.DEBUG)
   # create file handler which logs even debug messages
   fh = logging.FileHandler('spam.log')
   fh.setLevel(logging.DEBUG)
   # create console handler with a higher log level
   ch = logging.StreamHandler()
   ch.setLevel(logging.ERROR)
   # create formatter and add it to the handlers
   formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
   ch.setFormatter(formatter)
   fh.setFormatter(formatter)
   # add the handlers to the logger
   logger.addHandler(ch)
   logger.addHandler(fh)

   # 'application' code
   logger.debug('debug message')
   logger.info('info message')
   logger.warning('warn message')
   logger.error('error message')
   logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can
be very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and stay
dormant until you need them again. At that point, the only change needed is to
set the severity level of the logger and/or handler to ``DEBUG``.
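
As a minimal sketch of this pattern (the logger name ``'myapp'`` is purely
illustrative), the diagnostic calls stay in the code and are switched on by
changing a level in one place:

```python
import logging

# Dormant by default: only WARNING and above are emitted.
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger('myapp')

logger.debug('intermediate value: %r', 42)   # suppressed at WARNING level

# When you need the diagnostics again, raise verbosity in one place.
logging.getLogger().setLevel(logging.DEBUG)
logger.debug('intermediate value: %r', 42)   # now emitted
```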

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats
and in differing circumstances. Say you want to log messages with levels of
DEBUG and higher to file, and those messages at level INFO and higher to the
console. Let's also assume that the file should contain timestamps, but the
console messages should not. Here's how you can achieve this::

   import logging

   # set up logging to file - see previous section for more details
   logging.basicConfig(level=logging.DEBUG,
                       format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                       datefmt='%m-%d %H:%M',
                       filename='/tmp/myapp.log',
                       filemode='w')
   # define a Handler which writes INFO messages or higher to the sys.stderr
   console = logging.StreamHandler()
   console.setLevel(logging.INFO)
   # set a format which is simpler for console use
   formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
   # tell the handler to use this format
   console.setFormatter(formatter)
   # add the handler to the root logger
   logging.getLogger('').addHandler(console)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

   root        : INFO     Jackdaws love my big sphinx of quartz.
   myapp.area1 : INFO     How quickly daft jumping zebras vex.
   myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
   myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

   10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
   10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
   10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
   10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
   10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

   import logging
   import logging.config
   import time

   # read initial config file
   logging.config.fileConfig('logging.conf')

   # create and start listener on port 9999
   t = logging.config.listen(9999)
   t.start()

   logger = logging.getLogger('simpleExample')

   try:
       # loop through logging calls to see the difference
       # new configurations make, until Ctrl+C is pressed
       while True:
           logger.debug('debug message')
           logger.info('info message')
           logger.warning('warn message')
           logger.error('error message')
           logger.critical('critical message')
           time.sleep(5)
   except KeyboardInterrupt:
       # cleanup
       logging.config.stopListening()
       t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

   #!/usr/bin/env python
   import socket, sys, struct

   with open(sys.argv[1], 'rb') as f:
       data_to_send = f.read()

   HOST = 'localhost'
   PORT = 9999
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   print('connecting...')
   s.connect((HOST, PORT))
   print('sending config...')
   s.send(struct.pack('>L', len(data_to_send)))
   s.send(data_to_send)
   s.close()
   print('complete')


Dealing with handlers that block
--------------------------------

.. currentmodule:: logging.handlers

Sometimes you have to get your logging handlers to do their work without
blocking the thread you're logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
:class:`SMTPHandler`: sending emails can take a long time, for a
number of reasons outside the developer's control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: even a :class:`SocketHandler` operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
:class:`QueueHandler` to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to a large enough capacity, or initialized with no upper bound to its
size. The write to the queue will typically be accepted quickly, though you
will probably need to catch the ``queue.Full`` exception as a precaution
in your code. If you are a library developer whose code has
performance-critical threads, be sure to document this (together with a
suggestion to attach only ``QueueHandlers`` to your loggers) for the benefit
of other developers who will use your code.
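
One way of taking that precaution is a small :class:`QueueHandler` subclass
which drops events rather than blocking when a bounded queue fills up; this
is just a sketch, not part of the standard library:

```python
import logging
import queue
from logging.handlers import QueueHandler

class DroppingQueueHandler(QueueHandler):
    """Sketch: never block the logging thread; discard events on overflow."""
    def enqueue(self, record):
        try:
            self.queue.put_nowait(record)
        except queue.Full:
            pass  # deliberately drop the record to keep the caller responsive
```

Whether dropping events, blocking briefly, or counting what was dropped is the
right policy depends on your application.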

The second part of the solution is :class:`QueueListener`, which has been
designed as the counterpart to :class:`QueueHandler`. A
:class:`QueueListener` is very simple: it's passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from ``QueueHandlers`` (or any other source of ``LogRecords``, for that
matter). The ``LogRecords`` are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate :class:`QueueListener` class is that you
can use the same instance to service multiple ``QueueHandlers``. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted)::

   que = queue.Queue(-1)  # no limit on size
   queue_handler = QueueHandler(que)
   handler = logging.StreamHandler()
   listener = QueueListener(que, handler)
   root = logging.getLogger()
   root.addHandler(queue_handler)
   formatter = logging.Formatter('%(threadName)s: %(message)s')
   handler.setFormatter(formatter)
   listener.start()
   # The log output will display the thread which generated
   # the event (the main thread) rather than the internal
   # thread which monitors the internal queue. This is what
   # you want to happen.
   root.warning('Look out!')
   listener.stop()

which, when run, will produce::

   MainThread: Look out!


.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

   import logging, logging.handlers

   rootLogger = logging.getLogger('')
   rootLogger.setLevel(logging.DEBUG)
   socketHandler = logging.handlers.SocketHandler('localhost',
                       logging.handlers.DEFAULT_TCP_LOGGING_PORT)
   # don't bother with a formatter, since a socket handler sends the event as
   # an unformatted pickle
   rootLogger.addHandler(socketHandler)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`socketserver`
module. Here is a basic working example::

   import pickle
   import logging
   import logging.handlers
   import socketserver
   import struct


   class LogRecordStreamHandler(socketserver.StreamRequestHandler):
       """Handler for a streaming logging request.

       This basically logs the record using whatever logging policy is
       configured locally.
       """

       def handle(self):
           """
           Handle multiple requests - each expected to be a 4-byte length,
           followed by the LogRecord in pickle format. Logs the record
           according to whatever policy is configured locally.
           """
           while True:
               chunk = self.connection.recv(4)
               if len(chunk) < 4:
                   break
               slen = struct.unpack('>L', chunk)[0]
               chunk = self.connection.recv(slen)
               while len(chunk) < slen:
                   chunk = chunk + self.connection.recv(slen - len(chunk))
               obj = self.unPickle(chunk)
               record = logging.makeLogRecord(obj)
               self.handleLogRecord(record)

       def unPickle(self, data):
           return pickle.loads(data)

       def handleLogRecord(self, record):
           # if a name is specified, we use the named logger rather than the one
           # implied by the record.
           if self.server.logname is not None:
               name = self.server.logname
           else:
               name = record.name
           logger = logging.getLogger(name)
           # N.B. EVERY record gets logged. This is because Logger.handle
           # is normally called AFTER logger-level filtering. If you want
           # to do filtering, do it at the client end to save wasting
           # cycles and network bandwidth!
           logger.handle(record)

   class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
       """
       Simple TCP socket-based logging receiver suitable for testing.
       """

       allow_reuse_address = True

       def __init__(self, host='localhost',
                    port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                    handler=LogRecordStreamHandler):
           socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
           self.abort = 0
           self.timeout = 1
           self.logname = None

       def serve_until_stopped(self):
           import select
           abort = 0
           while not abort:
               rd, wr, ex = select.select([self.socket.fileno()],
                                          [], [],
                                          self.timeout)
               if rd:
                   self.handle_request()
               abort = self.abort

   def main():
       logging.basicConfig(
           format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
       tcpserver = LogRecordSocketReceiver()
       print('About to start TCP server...')
       tcpserver.serve_until_stopped()

   if __name__ == '__main__':
       main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

   About to start TCP server...
      59 root            INFO     Jackdaws love my big sphinx of quartz.
      59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
      69 myapp.area1     INFO     How quickly daft jumping zebras vex.
      69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
      69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`makePickle` method and implementing your alternative there, as
well as adapting the above script to use your alternative serialization.
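
As an illustration of that idea (a sketch, not a stdlib class): a
:class:`SocketHandler` subclass could serialize the record dict as JSON
instead of a pickle, keeping the same 4-byte length prefix. The matching
receiver would then need to decode JSON rather than call ``pickle.loads``:

```python
import json
import struct
import logging
import logging.handlers

class JSONSocketHandler(logging.handlers.SocketHandler):
    """Sketch: send log events as length-prefixed JSON instead of pickles."""
    def makePickle(self, record):
        d = dict(record.__dict__)
        d['msg'] = record.getMessage()  # merge args into the message text
        d['args'] = None
        d['exc_info'] = None            # not JSON-serializable
        payload = json.dumps(d, default=str).encode('utf-8')
        return struct.pack('>L', len(payload)) + payload
```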


.. currentmodule:: logging

.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client's username, or IP address). Although you could
use the *extra* parameter to achieve this, it's not always convenient to pass
the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because :class:`Logger` instances are never garbage collected. Although this
is not a problem in practice, if the number of :class:`Logger` instances
depends on the level of granularity you want to use in logging an application,
the number of instances can become effectively unbounded and hard to manage.


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

   def debug(self, msg, *args, **kwargs):
       """
       Delegate a debug call to the underlying logger, after adding
       contextual information from this adapter instance.
       """
       msg, kwargs = self.process(msg, kwargs)
       self.logger.debug(msg, *args, **kwargs)

The :meth:`process` method of :class:`LoggerAdapter` is where the contextual
information is added to the logging output. It's passed the message and
keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override :meth:`process`
to do what you need. Here's an example script which uses this class, which
also illustrates what dict-like behaviour is needed from an arbitrary
'dict-like' object for use in the constructor::

   import logging

   class ConnInfo:
       """
       An example class which shows how an arbitrary class can be used as
       the 'extra' context information repository passed to a LoggerAdapter.
       """

       def __getitem__(self, name):
           """
           To allow this instance to look like a dict.
           """
           from random import choice
           if name == 'ip':
               result = choice(['127.0.0.1', '192.168.0.1'])
           elif name == 'user':
               result = choice(['jim', 'fred', 'sheila'])
           else:
               result = self.__dict__.get(name, '?')
           return result

       def __iter__(self):
           """
           To allow iteration over keys, which will be merged into
           the LogRecord dict before formatting and output.
           """
           keys = ['ip', 'user']
           keys.extend(self.__dict__.keys())
           return keys.__iter__()

   if __name__ == '__main__':
       from random import choice
       levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
       a1 = logging.LoggerAdapter(logging.getLogger('a.b.c'),
                                  {'ip': '123.231.231.123', 'user': 'sheila'})
       logging.basicConfig(level=logging.DEBUG,
                           format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
       a1.debug('A debug message')
       a1.info('An info message with %s', 'some parameters')
       a2 = logging.LoggerAdapter(logging.getLogger('d.e.f'), ConnInfo())
       for x in range(10):
           lvl = choice(levels)
           lvlname = logging.getLevelName(lvl)
           a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

When this script is run, the output should look something like this::

   2008-01-18 14:49:54,023 a.b.c DEBUG    IP: 123.231.231.123 User: sheila   A debug message
   2008-01-18 14:49:54,023 a.b.c INFO     IP: 123.231.231.123 User: sheila   An info message with some parameters
   2008-01-18 14:49:54,023 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: jim      A message at INFO level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: fred     A message at ERROR level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: jim      A message at WARNING level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: fred     A message at INFO level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
   2008-01-18 14:49:54,033 d.e.f WARNING  IP: 127.0.0.1       User: jim      A message at WARNING level with 2 parameters
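
The subclassing approach mentioned above - overriding :meth:`process` to put
the context into the message itself rather than into 'extra' - can be
sketched like this (the ``connid`` key and logger name are illustrative):

```python
import logging

class PrefixAdapter(logging.LoggerAdapter):
    """Sketch: prepend a connection id to every message, instead of relying
    on 'extra' and a custom format string."""
    def process(self, msg, kwargs):
        # self.extra is the dict-like object passed to the constructor.
        return '[%s] %s' % (self.extra['connid'], msg), kwargs

adapter = PrefixAdapter(logging.getLogger('myapp.net'), {'connid': 'conn-42'})
adapter.info('client connected')   # message becomes "[conn-42] client connected"
```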


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add, say, information from the request - say, the remote IP address and remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

   import logging
   from random import choice

   class ContextFilter(logging.Filter):
       """
       This is a filter which injects contextual information into the log.

       Rather than use actual contextual information, we just use random
       data in this demo.
       """

       USERS = ['jim', 'fred', 'sheila']
       IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

       def filter(self, record):
           record.ip = choice(ContextFilter.IPS)
           record.user = choice(ContextFilter.USERS)
           return True

   if __name__ == '__main__':
       levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
       logging.basicConfig(level=logging.DEBUG,
                           format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
       a1 = logging.getLogger('a.b.c')
       a2 = logging.getLogger('d.e.f')

       f = ContextFilter()
       a1.addFilter(f)
       a2.addFilter(f)
       a1.debug('A debug message')
       a1.info('An info message with %s', 'some parameters')
       for x in range(10):
           lvl = choice(levels)
           lvlname = logging.getLevelName(lvl)
           a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

   2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
   2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
   2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
   2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`SocketHandler`, and have a separate
process which implements a socket server which reads from the socket and logs
to file. (If you prefer, you can dedicate one thread in one of the existing
processes to perform this function.) The section on
:ref:`network logging <network-logging>` documents this approach in more
detail and includes a working socket receiver which can be used as a starting
point for you to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`~multiprocessing.Lock` class from this module to serialize access to
the file from your processes. The existing :class:`FileHandler` and subclasses
do not make use of :mod:`multiprocessing` at present, though they may do so in
the future. Note that at present, the :mod:`multiprocessing` module does not
provide working lock functionality on all platforms (see
http://bugs.python.org/issue3770).
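
A hedged sketch of that idea follows (the class name is made up, and it
assumes the handler - and hence its lock - is created *before* the worker
processes are started, so that they all share the same lock; appending from
several processes also relies on each write being completed under the lock):

```python
import logging
import multiprocessing

class LockingFileHandler(logging.FileHandler):
    """Sketch: serialize writes to one file across processes using a
    multiprocessing.Lock inherited by child processes at start-up."""
    def __init__(self, filename, mode='a'):
        # delay=True defers opening the file until the first emit, so each
        # process opens its own file object in append mode.
        super().__init__(filename, mode=mode, delay=True)
        self.mp_lock = multiprocessing.Lock()

    def emit(self, record):
        with self.mp_lock:  # only one process writes at a time
            super().emit(record)
            self.flush()
```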

.. currentmodule:: logging.handlers

Alternatively, you can use a ``Queue`` and a :class:`QueueHandler` to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process - the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements::

    # You'll need these imports in your own code
    import logging
    import logging.handlers
    import multiprocessing

    # Next two import lines for this demo only
    from random import choice, random
    import time

    #
    # Because you'll want to define the logging configurations for listener and workers, the
    # listener and worker process functions take a configurer parameter which is a callable
    # for configuring logging for that process. These functions are also passed the queue,
    # which they use for communication.
    #
    # In practice, you can configure the listener however you want, but note that in this
    # simple example, the listener does not apply level or filter logic to received records.
    # In practice, you would probably want to do this logic in the worker processes, to avoid
    # sending events which would be filtered out between processes.
    #
    # The size of the rotated files is made small so you can see the results easily.
    def listener_configurer():
        root = logging.getLogger()
        h = logging.handlers.RotatingFileHandler('/tmp/mptest.log', 'a', 300, 10)
        f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
        h.setFormatter(f)
        root.addHandler(h)

    # This is the listener process top-level loop: wait for logging events
    # (LogRecords) on the queue and handle them, quit when you get a None for a
    # LogRecord.
    def listener_process(queue, configurer):
        configurer()
        while True:
            try:
                record = queue.get()
                if record is None: # We send this as a sentinel to tell the listener to quit.
                    break
                logger = logging.getLogger(record.name)
                logger.handle(record) # No level or filter logic applied - just do it!
            except (KeyboardInterrupt, SystemExit):
                raise
            except:
                import sys, traceback
                print('Whoops! Problem:', file=sys.stderr)
                traceback.print_exc(file=sys.stderr)

    # Arrays used for random selections in this demo

    LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL]

    LOGGERS = ['a.b.c', 'd.e.f']

    MESSAGES = [
        'Random message #1',
        'Random message #2',
        'Random message #3',
    ]

    # The worker configuration is done at the start of the worker process run.
    # Note that on Windows you can't rely on fork semantics, so each process
    # will run the logging configuration code when it starts.
    def worker_configurer(queue):
        h = logging.handlers.QueueHandler(queue) # Just the one handler needed
        root = logging.getLogger()
        root.addHandler(h)
        root.setLevel(logging.DEBUG) # send all messages, for demo; no other level or filter logic applied.

    # This is the worker process top-level loop, which just logs ten events with
    # random intervening delays before terminating.
    # The print messages are just so you know it's doing something!
    def worker_process(queue, configurer):
        configurer(queue)
        name = multiprocessing.current_process().name
        print('Worker started: %s' % name)
        for i in range(10):
            time.sleep(random())
            logger = logging.getLogger(choice(LOGGERS))
            level = choice(LEVELS)
            message = choice(MESSAGES)
            logger.log(level, message)
        print('Worker finished: %s' % name)

    # Here's where the demo gets orchestrated. Create the queue, create and start
    # the listener, create ten workers and start them, wait for them to finish,
    # then send a None to the queue to tell the listener to finish.
    def main():
        queue = multiprocessing.Queue(-1)
        listener = multiprocessing.Process(target=listener_process,
                                           args=(queue, listener_configurer))
        listener.start()
        workers = []
        for i in range(10):
            worker = multiprocessing.Process(target=worker_process,
                                             args=(queue, worker_configurer))
            workers.append(worker)
            worker.start()
        for w in workers:
            w.join()
        queue.put_nowait(None)
        listener.join()

    if __name__ == '__main__':
        main()

A variant of the above script keeps the logging in the main process, in a
separate thread::

    import logging
    import logging.config
    import logging.handlers
    from multiprocessing import Process, Queue
    import random
    import threading
    import time

    def logger_thread(q):
        while True:
            record = q.get()
            if record is None:
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)


    def worker_process(q):
        qh = logging.handlers.QueueHandler(q)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(qh)
        levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                  logging.CRITICAL]
        loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                   'spam', 'spam.ham', 'spam.ham.eggs']
        for i in range(100):
            lvl = random.choice(levels)
            logger = logging.getLogger(random.choice(loggers))
            logger.log(lvl, 'Message no. %d', i)

    if __name__ == '__main__':
        q = Queue()
        d = {
            'version': 1,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO',
                },
                'file': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'foofile': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-foo.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'errors': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-errors.log',
                    'mode': 'w',
                    'level': 'ERROR',
                    'formatter': 'detailed',
                },
            },
            'loggers': {
                'foo': {
                    'handlers': ['foofile']
                }
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['console', 'file', 'errors']
            },
        }
        workers = []
        for i in range(5):
            wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
            workers.append(wp)
            wp.start()
        logging.config.dictConfig(d)
        lp = threading.Thread(target=logger_thread, args=(q,))
        lp.start()
        # At this point, the main process could do some useful work of its own
        # Once it's done that, it can wait for the workers to terminate...
        for wp in workers:
            wp.join()
        # And now tell the logging thread to finish up, too
        q.put(None)
        lp.join()

This variant shows how you can apply configuration for particular loggers -
for example, the ``foo`` logger has a special handler which stores all events
in the ``foo`` subsystem in a file ``mplog-foo.log``. This will be used by the
logging machinery in the main process (even though the logging events are
generated in the worker processes) to direct the messages to the appropriate
destinations.
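Incidentally, the hand-rolled ``logger_thread`` above does essentially the same
job as :class:`QueueListener` from :mod:`logging.handlers`. A minimal sketch of
the same pattern using :class:`QueueListener` instead (shown single-process
with a plain ``queue.Queue`` for brevity; the logger name and formatter are
just illustrative)::

```python
import io
import logging
import logging.handlers
import queue

q = queue.Queue()

# The listener owns the "real" handler; here it just writes to a string
# buffer so the result is easy to inspect.
stream = io.StringIO()
stream_handler = logging.StreamHandler(stream)
stream_handler.setFormatter(logging.Formatter('%(name)s %(levelname)s %(message)s'))
listener = logging.handlers.QueueListener(q, stream_handler)
listener.start()

# Producers attach only a QueueHandler, as the worker processes do above.
logger = logging.getLogger('qdemo')  # arbitrary name for this sketch
logger.addHandler(logging.handlers.QueueHandler(q))
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.warning('something happened')

listener.stop()  # processes any queued records before returning
print(stream.getvalue())
```

In a multi-process setup you would pass a ``multiprocessing.Queue`` instead,
start the listener in the main process, and give each worker a
``QueueHandler`` wrapping the shared queue.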

Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the ``.6`` file is erased.

Obviously this example sets the log length much too small as an extreme
example. You would want to set *maxBytes* to an appropriate value.
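Rotation can also be driven by time rather than size, using
:class:`TimedRotatingFileHandler`. Here is a brief sketch (the logger name,
file location and schedule are just illustrative)::

```python
import logging
import logging.handlers
import os
import tempfile

# Hypothetical log location for this sketch
logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, 'timed_example.log')

logger = logging.getLogger('TimedLogger')
logger.setLevel(logging.DEBUG)
logger.propagate = False

# Roll the file over at midnight, keeping at most 7 old files.
handler = logging.handlers.TimedRotatingFileHandler(
    logfile, when='midnight', backupCount=7)
logger.addHandler(handler)

logger.debug('timed rotation works')
handler.close()

with open(logfile) as f:
    print(f.read())
```

Other values for *when* include ``'S'``, ``'M'``, ``'H'``, ``'D'`` and
``'W0'``-``'W6'``; see the :mod:`logging.handlers` reference for details.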

.. _zeromq-handlers:

Subclassing QueueHandler - a ZeroMQ example
-------------------------------------------

You can use a :class:`QueueHandler` subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
socket is created separately and passed to the handler (as its 'queue')::

    import zmq # using pyzmq, the Python binding for ZeroMQ
    import json # for serializing records portably

    ctx = zmq.Context()
    sock = zmq.Socket(ctx, zmq.PUB) # or zmq.PUSH, or other suitable value
    sock.bind('tcp://*:5556') # or wherever

    class ZeroMQSocketHandler(QueueHandler):
        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send(data)

    handler = ZeroMQSocketHandler(sock)


Of course there are other ways of organizing this, for example passing in the
data needed by the handler to create the socket::

    class ZeroMQSocketHandler(QueueHandler):
        def __init__(self, uri, socktype=zmq.PUB, ctx=None):
            self.ctx = ctx or zmq.Context()
            socket = zmq.Socket(self.ctx, socktype)
            socket.bind(uri)
            QueueHandler.__init__(self, socket)

        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send(data)

        def close(self):
            self.queue.close()

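Note that serializing ``record.__dict__`` as JSON only works if everything in
the record's dictionary is JSON-serializable; in particular, ``args`` and
``exc_info`` often are not. One possible approach (an illustrative sketch, not
part of the example above, and independent of ZeroMQ) is to merge any arguments
into the message and drop the problematic fields before sending::

```python
import json
import logging

def record_to_json(record):
    # Merge any args into the message, then drop fields which may not
    # be JSON-serializable.
    d = dict(record.__dict__)
    d['msg'] = record.getMessage()
    d['args'] = None
    d['exc_info'] = None
    return json.dumps(d)

# Build a record by hand to exercise the round trip.
rec = logging.LogRecord('demo', logging.INFO, __file__, 42,
                        'Hello, %s', ('world',), None)
data = record_to_json(rec)

# The receiving end can rebuild a LogRecord from the dictionary:
rebuilt = logging.makeLogRecord(json.loads(data))
print(rebuilt.getMessage())
```

An alternative is to send pickled records, as :class:`SocketHandler` does, at
the cost of requiring trust between sender and receiver.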

Subclassing QueueListener - a ZeroMQ example
--------------------------------------------

You can also subclass :class:`QueueListener` to get messages from other kinds
of queues, for example a ZeroMQ 'subscribe' socket. Here's an example::

    class ZeroMQSocketListener(QueueListener):
        def __init__(self, uri, *handlers, **kwargs):
            self.ctx = kwargs.get('ctx') or zmq.Context()
            socket = zmq.Socket(self.ctx, zmq.SUB)
            socket.setsockopt(zmq.SUBSCRIBE, '') # subscribe to everything
            socket.connect(uri)
            QueueListener.__init__(self, socket, *handlers)

        def dequeue(self):
            msg = self.queue.recv()
            return logging.makeLogRecord(json.loads(msg))


.. seealso::

   Module :mod:`logging`
      API reference for the logging module.

   Module :mod:`logging.config`
      Configuration API for the logging module.

   Module :mod:`logging.handlers`
      Useful handlers included with the logging module.

   :ref:`A basic logging tutorial <logging-basic-tutorial>`

   :ref:`A more advanced logging tutorial <logging-advanced-tutorial>`