.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been
found useful in the past.

.. Contents::

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object. This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent. Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this::

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects. The :meth:`Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add. Sometimes it
will be beneficial for an application to log all messages of all severities to
a text file while simultaneously logging errors or above to the console. To set
this up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named *fh*.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and remain
dormant until you need them again. At that time, the only change that needs to
happen is to modify the severity level of the logger and/or handler to debug.
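
For example, with the configuration above, switching the dormant debug output
back on needs no change to the logging calls themselves - just lower the
relevant levels (a minimal sketch, reusing the ``logger`` and ``ch`` names from
the example)::

    # Re-enable the dormant debug output by lowering the levels; the
    # logger.debug calls in the application code stay untouched.
    logger.setLevel(logging.DEBUG)
    ch.setLevel(logging.DEBUG)  # the console handler now passes DEBUG too
    # fh was already set to DEBUG, so the file output needs no change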

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/tmp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see ::

    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like ::

    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warning message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

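The script assumes a :file:`logging.conf` which defines a ``simpleExample``
logger. A minimal sketch of such a file (the handler and formatter names are
illustrative) might look like this::

    [loggers]
    keys=root,simpleExample

    [handlers]
    keys=consoleHandler

    [formatters]
    keys=simpleFormatter

    [logger_root]
    level=DEBUG
    handlers=consoleHandler

    [logger_simpleExample]
    level=DEBUG
    handlers=consoleHandler
    qualname=simpleExample
    propagate=0

    [handler_consoleHandler]
    class=StreamHandler
    level=DEBUG
    formatter=simpleFormatter
    args=(sys.stdout,)

    [formatter_simpleFormatter]
    format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
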
And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')


Dealing with handlers that block
--------------------------------

.. currentmodule:: logging.handlers

Sometimes you have to get your logging handlers to do their work without
blocking the thread you're logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
:class:`SMTPHandler`: sending emails can take a long time, for a
number of reasons outside the developer's control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: even a :class:`SocketHandler` operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
:class:`QueueHandler` to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to a large enough capacity or initialized with no upper bound to their
size. The write to the queue will typically be accepted quickly, though you
will probably need to catch the :exc:`queue.Full` exception as a precaution
in your code. If you are a library developer who has performance-critical
threads in your code, be sure to document this (together with a suggestion to
attach only ``QueueHandlers`` to your loggers) for the benefit of other
developers who will use your code.
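
For example, here is a minimal sketch of a :class:`QueueHandler` subclass which
guards against a bounded queue filling up (the class name and the drop-on-full
policy are illustrative, not part of the stdlib)::

    import logging.handlers
    import queue

    class DropOnFullQueueHandler(logging.handlers.QueueHandler):
        def enqueue(self, record):
            try:
                self.queue.put_nowait(record)  # what the base class does
            except queue.Full:
                # Dropping the event is a policy choice; you could count
                # drops or fall back to another handler instead.
                pass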

The second part of the solution is :class:`QueueListener`, which has been
designed as the counterpart to :class:`QueueHandler`. A
:class:`QueueListener` is very simple: it's passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from ``QueueHandlers`` (or any other source of ``LogRecords``, for that
matter). The ``LogRecords`` are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate :class:`QueueListener` class is that you
can use the same instance to service multiple ``QueueHandlers``. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted)::

    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

which, when run, will produce::

    MainThread: Look out!

.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`socketserver`
module. Here is a basic working example::

    import pickle
    import logging
    import logging.handlers
    import socketserver
    import struct


    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)

    class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = 1

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort

    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()

    if __name__ == '__main__':
        main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like::

    About to start TCP server...
       59 root            INFO     Jackdaws love my big sphinx of quartz.
       59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
       69 myapp.area1     INFO     How quickly daft jumping zebras vex.
       69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
       69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`makePickle` method and implementing your alternative there, as
well as adapting the above script to use your alternative serialization.
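
For example, a minimal sketch of such an override might serialize the record
attributes as JSON (the class name is illustrative, and the receiver above
would need a matching change from pickle to JSON)::

    import json
    import struct
    import logging.handlers

    class JSONSocketHandler(logging.handlers.SocketHandler):
        def makePickle(self, record):
            # Mirror the wire format used above: a 4-byte big-endian length
            # prefix followed by the serialized payload. default=str is a
            # crude guard against attributes (e.g. exc_info) which are not
            # JSON-serializable.
            data = json.dumps(record.__dict__, default=str).encode('utf-8')
            return struct.pack('>L', len(data)) + data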


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client's username, or IP address). Although you could
use the *extra* parameter to achieve this, it's not always convenient to pass
the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. Although this is not a
problem in practice, when the number of :class:`Logger` instances depends on
the level of granularity you want to use in logging an application, it could
be hard to manage if the number of :class:`Logger` instances becomes
effectively unbounded.

Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`process` method of :class:`LoggerAdapter` is where the contextual
information is added to the logging output. It's passed the message and
keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments, whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's __dict__, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override :meth:`process`
to do what you need (a sketch follows the example output below). Here's an
example script which uses this class, which also illustrates what dict-like
behaviour is needed from an arbitrary 'dict-like' object for use in the
constructor::

    import logging

    class ConnInfo:
        """
        An example class which shows how an arbitrary class can be used as
        the 'extra' context information repository passed to a LoggerAdapter.
        """

        def __getitem__(self, name):
            """
            To allow this instance to look like a dict.
            """
            from random import choice
            if name == 'ip':
                result = choice(['127.0.0.1', '192.168.0.1'])
            elif name == 'user':
                result = choice(['jim', 'fred', 'sheila'])
            else:
                result = self.__dict__.get(name, '?')
            return result

        def __iter__(self):
            """
            To allow iteration over keys, which will be merged into
            the LogRecord dict before formatting and output.
            """
            keys = ['ip', 'user']
            keys.extend(self.__dict__.keys())
            return keys.__iter__()

    if __name__ == '__main__':
        from random import choice
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        a1 = logging.LoggerAdapter(logging.getLogger('a.b.c'),
                                   {'ip': '123.231.231.123', 'user': 'sheila'})
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        a2 = logging.LoggerAdapter(logging.getLogger('d.e.f'), ConnInfo())
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

When this script is run, the output should look something like this::

    2008-01-18 14:49:54,023 a.b.c DEBUG    IP: 123.231.231.123 User: sheila   A debug message
    2008-01-18 14:49:54,023 a.b.c INFO     IP: 123.231.231.123 User: sheila   An info message with some parameters
    2008-01-18 14:49:54,023 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: jim      A message at INFO level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: fred     A message at ERROR level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: jim      A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: fred     A message at INFO level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
    2008-01-18 14:49:54,033 d.e.f WARNING  IP: 127.0.0.1       User: jim      A message at WARNING level with 2 parameters

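As mentioned above, if you want to prepend or append contextual information to
the message string instead, a minimal :meth:`process` override along these
lines would do it (the ``connid`` key is purely illustrative)::

    import logging

    class CustomAdapter(logging.LoggerAdapter):
        def process(self, msg, kwargs):
            # Prepend the connection id to every message; kwargs pass through
            # unchanged, so no custom format string is needed.
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

    logger = CustomAdapter(logging.getLogger(__name__), {'connid': 'abc123'})
    logger.info('A message with connection context')
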

.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add information from the request - say, the remote IP address and remote
user's username - to the ``LogRecord``, using the attribute names 'ip' and
'user' as in the ``LoggerAdapter`` example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like::

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`SocketHandler`, and have a separate
process which implements a socket server which reads from the socket and logs
to file. (If you prefer, you can dedicate one thread in one of the existing
processes to perform this function.) The section on :ref:`sending and
receiving logging events across a network <network-logging>` documents this
approach in more detail and includes a working socket receiver which can be
used as a starting point for you to adapt in your own applications.

If you are using a recent version of Python which includes the
:mod:`multiprocessing` module, you could write your own handler which uses the
:class:`multiprocessing.Lock` class to serialize access to the file from
your processes. The existing :class:`FileHandler` and subclasses do not make
use of :mod:`multiprocessing` at present, though they may do so in the future.
Note that at present, the :mod:`multiprocessing` module does not provide
working lock functionality on all platforms (see
http://bugs.python.org/issue3770).
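
A minimal sketch of that approach might look like the following (the class name
is illustrative, and it assumes the handler is created before the child
processes are spawned, so that they all inherit the same lock)::

    import logging
    import multiprocessing

    class LockingFileHandler(logging.FileHandler):
        def __init__(self, filename, mode='a', encoding=None, delay=False):
            logging.FileHandler.__init__(self, filename, mode, encoding, delay)
            # Created once, before forking, so every process shares this lock
            self.mp_lock = multiprocessing.Lock()

        def emit(self, record):
            with self.mp_lock:
                logging.FileHandler.emit(self, record)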

.. currentmodule:: logging.handlers

Alternatively, you can use a ``Queue`` and a :class:`QueueHandler` to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process - the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements::

    # You'll need these imports in your own code
    import logging
    import logging.handlers
    import multiprocessing

    # Next two import lines for this demo only
    from random import choice, random
    import time

    #
    # Because you'll want to define the logging configurations for listener
    # and workers, the listener and worker process functions take a
    # configurer parameter which is a callable for configuring logging for
    # that process. These functions are also passed the queue, which they use
    # for communication.
    #
    # In practice, you can configure the listener however you want, but note
    # that in this simple example, the listener does not apply level or filter
    # logic to received records. In practice, you would probably want to do
    # this logic in the worker processes, to avoid sending events which would
    # be filtered out between processes.
    #
    # The size of the rotated files is made small so you can see the results
    # easily.
    def listener_configurer():
        root = logging.getLogger()
        h = logging.handlers.RotatingFileHandler('/tmp/mptest.log', 'a', 300, 10)
        f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
        h.setFormatter(f)
        root.addHandler(h)

    # This is the listener process top-level loop: wait for logging events
    # (LogRecords) on the queue and handle them, quit when you get a None for a
    # LogRecord.
    def listener_process(queue, configurer):
        configurer()
        while True:
            try:
                record = queue.get()
                if record is None:  # We send this as a sentinel to tell the listener to quit.
                    break
                logger = logging.getLogger(record.name)
                logger.handle(record)  # No level or filter logic applied - just do it!
            except (KeyboardInterrupt, SystemExit):
                raise
            except Exception:
                import sys, traceback
                print('Whoops! Problem:', file=sys.stderr)
                traceback.print_exc(file=sys.stderr)

    # Arrays used for random selections in this demo

    LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL]

    LOGGERS = ['a.b.c', 'd.e.f']

    MESSAGES = [
        'Random message #1',
        'Random message #2',
        'Random message #3',
    ]

    # The worker configuration is done at the start of the worker process run.
    # Note that on Windows you can't rely on fork semantics, so each process
    # will run the logging configuration code when it starts.
    def worker_configurer(queue):
        h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
        root = logging.getLogger()
        root.addHandler(h)
        root.setLevel(logging.DEBUG)  # send all messages, for demo; no other level or filter logic applied.

    # This is the worker process top-level loop, which just logs ten events with
    # random intervening delays before terminating.
    # The print messages are just so you know it's doing something!
    def worker_process(queue, configurer):
        configurer(queue)
        name = multiprocessing.current_process().name
        print('Worker started: %s' % name)
        for i in range(10):
            time.sleep(random())
            logger = logging.getLogger(choice(LOGGERS))
            level = choice(LEVELS)
            message = choice(MESSAGES)
            logger.log(level, message)
        print('Worker finished: %s' % name)

    # Here's where the demo gets orchestrated. Create the queue, create and start
    # the listener, create ten workers and start them, wait for them to finish,
    # then send a None to the queue to tell the listener to finish.
    def main():
        queue = multiprocessing.Queue(-1)
        listener = multiprocessing.Process(target=listener_process,
                                           args=(queue, listener_configurer))
        listener.start()
        workers = []
        for i in range(10):
            worker = multiprocessing.Process(target=worker_process,
                                             args=(queue, worker_configurer))
            workers.append(worker)
            worker.start()
        for w in workers:
            w.join()
        queue.put_nowait(None)
        listener.join()

    if __name__ == '__main__':
        main()


Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <http://blog.doughellmann.com/2007/05/pymotw-logging.html>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`RotatingFileHandler`::

    import glob
    import logging
    import logging.handlers

    LOG_FILENAME = 'logging_rotatingfile_example.out'

    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)

    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=20, backupCount=5)

    my_logger.addHandler(handler)

    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)

    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)

    for filename in logfiles:
        print(filename)

The result should be 6 separate files, each with part of the log history for the
application::

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest file (``.5``) is erased.

Obviously this example sets the log length much too small as an extreme
example. In practice, you would set *maxBytes* to an appropriate value.
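
In a real application you would typically use values more like the following
(the figures are illustrative only)::

    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=10*1024*1024, backupCount=5)  # ~10 MB per file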

.. _zeromq-handlers:

Subclassing QueueHandler
------------------------

You can use a :class:`QueueHandler` subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
socket is created separately and passed to the handler (as its 'queue')::

    import zmq   # using pyzmq, the Python binding for ZeroMQ
    import json  # for serializing records portably

    from logging.handlers import QueueHandler

    ctx = zmq.Context()
    sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
    sock.bind('tcp://*:5556')        # or wherever

    class ZeroMQSocketHandler(QueueHandler):
        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send(data.encode('utf-8'))  # ZeroMQ sockets expect bytes

    handler = ZeroMQSocketHandler(sock)


Of course there are other ways of organizing this, for example passing in the
data needed by the handler to create the socket::

    class ZeroMQSocketHandler(QueueHandler):
        def __init__(self, uri, socktype=zmq.PUB, ctx=None):
            self.ctx = ctx or zmq.Context()
            socket = zmq.Socket(self.ctx, socktype)
            socket.bind(uri)
            QueueHandler.__init__(self, socket)

        def enqueue(self, record):
            data = json.dumps(record.__dict__)
            self.queue.send(data.encode('utf-8'))

        def close(self):
            self.queue.close()


Subclassing QueueListener
-------------------------

You can also subclass :class:`QueueListener` to get messages from other kinds
of queues, for example a ZeroMQ 'subscribe' socket. Here's an example::

    import json
    import logging
    import zmq

    from logging.handlers import QueueListener

    class ZeroMQSocketListener(QueueListener):
        def __init__(self, uri, *handlers, **kwargs):
            self.ctx = kwargs.pop('ctx', None) or zmq.Context()
            socket = zmq.Socket(self.ctx, zmq.SUB)
            socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything
            socket.connect(uri)
            QueueListener.__init__(self, socket, *handlers, **kwargs)

        def dequeue(self, block):
            # 'block' is ignored: recv always blocks until a message arrives
            msg = self.queue.recv()
            return logging.makeLogRecord(json.loads(msg))
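
Wiring the listener up might then look like this (a sketch; the endpoint
matches the ``bind`` call in the handler example above)::

    handler = logging.StreamHandler()
    listener = ZeroMQSocketListener('tcp://localhost:5556', handler)
    listener.start()
    # ... the application runs; events published remotely are logged locally ...
    listener.stop()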