Fix argdist, trace, tplist to use the libbcc USDT support (#698)

* Allow argdist to enable USDT probes without a pid

The current code passed only the pid to the USDT
class, so USDT probes could not be enabled from the
binary path alone. If a probe doesn't have a
semaphore, it can in fact be enabled for all
processes in a uniform fashion -- which is now
supported.
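
For illustration, a minimal sketch of the path-only
usage; the binary name, probe name, and the
usdt_contexts keyword are assumptions, not part of
this change:

```python
from bcc import BPF, USDT

# Path-only: no pid, so the (semaphore-less) probe is enabled for every
# process that maps this binary. Binary and probe names are hypothetical.
u = USDT(path="/usr/local/bin/myapp")
u.enable_probe(probe="myprobe", fn_name="do_trace")

b = BPF(text="int do_trace(void *ctx) { return 0; }", usdt_contexts=[u])
```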

* Reintroduce USDT support into tplist

To print USDT probe information, tplist needs an API
to return the probe data, including the number of
arguments and locations for each probe. This commit
introduces this API, called bcc_usdt_foreach, and
invokes it from the revised tplist implementation.

The result is not 100% identical to the original
tplist, which could also print the probe argument
information; because that information is not strictly
required by users of the argdist and trace tools, it
was omitted for now.
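
A sketch of how a tplist-style listing might consume
the new API from Python, assuming the wrapper surfaces
bcc_usdt_foreach as an enumerate_probes() generator;
the field names shown are illustrative:

```python
from bcc import USDT

# Library path is hypothetical; any USDT-instrumented binary works.
u = USDT(path="/usr/lib/libpython2.7.so")
for probe in u.enumerate_probes():
    # Print provider:name plus the per-probe counts the new API returns.
    print("%s:%s [%d locations, %d arguments]" %
          (probe.provider, probe.name,
           probe.num_locations, probe.num_arguments))
```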

* Fix trace.py tracepoint support

Somehow, the import of the Perf class was omitted
from tracepoint.py, which would cause failures when
trace enables kernel tracepoints.
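
The fix is essentially one missing line, restoring the
import so the tracepoint code can reach the Perf
helpers (module layout assumed):

```python
# In src/python/bcc/tracepoint.py -- restore the missing import so the
# Tracepoint support can call into the Perf helper when attaching:
from .perf import Perf
```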

* trace: Native bcc USDT support

trace now works again by using the new bcc USDT support
instead of the home-grown Python USDT parser. This
required an additional change in the BPF Python API
to allow multiple USDT context objects to be passed to
the constructor in order to support multiple USDT
probes in a single invocation of trace. Otherwise, the
USDT-related code in trace was greatly simplified, and
uses the `bpf_usdt_readarg` macros to obtain probe
argument values.
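
A condensed sketch of the revised flow; the probe
name and pid are hypothetical, and the usdt_contexts
keyword is assumed to be the constructor parameter
that accepts the list:

```python
from bcc import BPF, USDT

text = """
#include <uapi/linux/ptrace.h>
int trace_alloc(struct pt_regs *ctx) {
    u64 size = 0;
    bpf_usdt_readarg(1, ctx, &size);   // read the first USDT argument
    bpf_trace_printk("alloc size=%llu\\n", size);
    return 0;
}
"""

u1 = USDT(pid=1234)                    # hypothetical target process
u1.enable_probe(probe="object__alloc", fn_name="trace_alloc")

# The list form allows several USDT contexts in one trace invocation:
b = BPF(text=text, usdt_contexts=[u1])
b.trace_print()
```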

One minor inconvenience introduced by the bcc USDT
API is that USDT probes with multiple locations that
reside in a shared object *must* have a pid specified
in order to be enabled, even if they don't have an
associated semaphore. The reason is that the bcc USDT
code figures out which location invoked the probe by
inspecting `ctx->ip`, which, for shared objects, can
only be determined when the specific process context is
available to figure out where the shared object was
loaded. This limitation did not previously exist,
because instead of looking at `ctx->ip`, the Python
USDT reader generated separate code for each probe
location with an incrementing identifier. It's not a
very big deal because it only means that some probes
can't be enabled without specifying a process id, which
is almost always desired anyway for USDT probes.
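
To illustrate the constraint (paths and probe names
hypothetical):

```python
from bcc import USDT

# A probe with multiple locations inside a shared object:
try:
    u = USDT(path="/usr/lib/libfoo.so")
    u.enable_probe(probe="multi_loc_probe", fn_name="trace_it")
except Exception as e:
    # Fails: ctx->ip cannot be resolved without a process context.
    print("path-only enable failed: %s" % e)

# Supplying a pid gives bcc the process context it needs:
u = USDT(pid=4321)
u.enable_probe(probe="multi_loc_probe", fn_name="trace_it")
```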

argdist has not yet been retrofitted with support for
multiple USDT probes, and needs to be updated in a
separate commit.

* argdist: Support multiple USDT probes

argdist now supports multiple USDT probes, as it did
before the transition to the native bcc USDT support.
This requires aggregating the USDT objects from each
probe and passing them together to the BPF constructor
when the probes are initialized and attached.

Also add a more descriptive exception message to the
USDT class when it fails to enable a probe.
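
A sketch of the aggregation, with hypothetical probe
specifications standing in for argdist's parsed
command line:

```python
from collections import namedtuple
from bcc import BPF, USDT

Spec = namedtuple("Spec", "pid probe fn_name")
specs = [Spec(1234, "probe_a", "trace_a"),   # hypothetical probe specs
         Spec(1234, "probe_b", "trace_b")]

program = """
int trace_a(void *ctx) { return 0; }
int trace_b(void *ctx) { return 0; }
"""

# One USDT context per probe, collected as the probes are initialized:
usdt_contexts = []
for s in specs:
    u = USDT(pid=s.pid)
    u.enable_probe(probe=s.probe, fn_name=s.fn_name)
    usdt_contexts.append(u)

# All contexts are passed together when the probes are attached:
b = BPF(text=program, usdt_contexts=usdt_contexts)
```
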
README.md

BPF Compiler Collection (BCC)

BCC is a toolkit for creating efficient kernel tracing and manipulation programs, and includes several useful tools and examples. It makes use of extended BPF (Berkeley Packet Filters), formally known as eBPF, a new feature that was first added to Linux 3.15. Much of what BCC uses requires Linux 4.1 and above.

eBPF was described by Ingo Molnár as:

One of the more interesting features in this cycle is the ability to attach eBPF programs (user-defined, sandboxed bytecode executed by the kernel) to kprobes. This allows user-defined instrumentation on a live kernel image that can never crash, hang or interfere with the kernel negatively.

BCC makes BPF programs easier to write, with kernel instrumentation in C (and includes a C wrapper around LLVM), and front-ends in Python and Lua. It is suited for many tasks, including performance analysis and network traffic control.
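
For example, a minimal tool, close in spirit to bcc's hello_world example, fits in a few lines of Python:

```python
from bcc import BPF

# Instrument the clone syscall and print a line on each call; the
# kprobe__ prefix auto-attaches the function as a kprobe.
BPF(text="""
int kprobe__sys_clone(void *ctx) {
    bpf_trace_printk("Hello, World!\\n");
    return 0;
}
""").trace_print()
```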

Screenshot

This example traces a disk I/O kernel function, and populates an in-kernel power-of-2 histogram of the I/O size. For efficiency, only the histogram summary is returned to user-level.

# ./bitehist.py 
Tracing... Hit Ctrl-C to end.
^C
     kbytes          : count     distribution
       0 -> 1        : 3        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 211      |**********                            |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 1        |                                      |
     128 -> 255      : 800      |**************************************|

The above output shows a bimodal distribution, where the largest mode of 800 I/Os was between 128 and 255 Kbytes in size.

See the source: bitehist.py. What this traces, what this stores, and how the data is presented, can be entirely customized. This shows only some of many possible capabilities.
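
For a flavor of the approach, here is a condensed sketch in the spirit of bitehist.py; the traced kernel symbol varies across kernel versions and is an assumption here:

```python
from time import sleep
from bcc import BPF

b = BPF(text="""
#include <uapi/linux/ptrace.h>
#include <linux/blkdev.h>

BPF_HISTOGRAM(dist);

int do_count(struct pt_regs *ctx, struct request *req) {
    // Bucket by log2 of the I/O size in kbytes, entirely in-kernel.
    dist.increment(bpf_log2l(req->__data_len / 1024));
    return 0;
}
""")
b.attach_kprobe(event="blk_account_io_completion", fn_name="do_count")

print("Tracing... Hit Ctrl-C to end.")
try:
    sleep(99999999)
except KeyboardInterrupt:
    pass
# Only the aggregated histogram is read from user level:
b["dist"].print_log2_hist("kbytes")
```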

Installing

See INSTALL.md for installation steps on your platform.

FAQ

See FAQ.txt for the most common troubleshooting questions.

Contents

Some of these are single files that contain both C and Python, others have a pair of .c and .py files, and some are directories of files.

Tracing

Examples:

Tools:

Networking

Examples:

Motivation

BPF guarantees that the programs loaded into the kernel cannot crash and cannot run forever, yet BPF is general-purpose enough to perform many arbitrary types of computation. Currently, it is possible to write a program in C that will compile into a valid BPF program, yet it is vastly easier to write a C program that will compile into invalid BPF (C is like that). The user won't know until trying to run the program whether it was valid or not.

With a BPF-specific frontend, one should be able to write in a language and receive feedback from the compiler on the validity as it pertains to a BPF backend. This toolkit aims to provide a frontend that can only create valid BPF programs while still harnessing its full flexibility.

Furthermore, current integrations with BPF have a kludgy workflow, sometimes involving compiling directly in a Linux kernel source tree. This toolchain aims to minimize the time that a developer spends getting BPF compiled, and instead focus on the applications that can be written and the problems that can be solved with BPF.

The features of this toolkit include:

  • End-to-end BPF workflow in a shared library
    • A modified C language for BPF backends
    • Integration with llvm-bpf backend for JIT
    • Dynamic (un)loading of JITed programs
    • Support for BPF kernel hooks: socket filters, tc classifiers, tc actions, and kprobes
  • Bindings for Python
  • Examples for socket filters, tc classifiers, and kprobes
  • Self-contained tools for tracing a running system

In the future, more bindings besides Python will likely be supported. Feel free to add support for the language of your choice and send a pull request!

Tutorials

Networking

At Red Hat Summit 2015, BCC was presented as part of a session on BPF. A multi-host vxlan environment is simulated and a BPF program used to monitor one of the physical interfaces. The BPF program keeps statistics on the inner and outer IP addresses traversing the interface, and the userspace component turns those statistics into a graph showing the traffic distribution at multiple granularities. See the code here.

Contributing

Already pumped up to commit some code? Here are some resources to join the discussions in the IOVisor community and see what you want to work on.