Wcohen/efficiency (#2063)

* Reduce instrumentation overhead with the sys_enter and sys_exit tracepoints

The ucalls script initially used kprobes and kretprobes on each of the
hundreds of syscall functions in the system.  This approach set up a
large number of probes at the start of the script's execution and
removed them at its conclusion, resulting in slow startup and shutdown.

Like the syscount.py script, the ucalls syscall instrumentation has
been modified to use the sys_enter and sys_exit tracepoints.  Only one
or two tracepoints need to be installed and removed, so the ucalls
script starts and stops much more quickly.
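
For illustration, here is a minimal sketch of the technique (not the
ucalls code itself, which does considerably more bookkeeping): a single
raw_syscalls:sys_enter tracepoint replaces hundreds of per-syscall
kprobes.

    # Minimal sketch: count syscalls system-wide via one tracepoint.
    from time import sleep
    from bcc import BPF

    b = BPF(text="""
    BPF_HASH(counts, u64, u64);

    TRACEPOINT_PROBE(raw_syscalls, sys_enter) {
        u64 id = args->id;     // syscall number
        counts.increment(id);  // one shared map, no per-syscall probes
        return 0;
    }
    """)

    sleep(5)
    for k, v in sorted(b["counts"].items(), key=lambda kv: -kv[1].value)[:10]:
        print("syscall %d: %d calls" % (k.value, v.value))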

Another benefit of this change is that syscalls on newer kernels can
now be monitored with the "-S" option.  The regular expression
previously used to locate the kprobe and kretprobe attachment points
for all the possible syscall functions did not match the syscall
function naming convention used in newer kernels (for example,
arch-prefixed entry points such as __x64_sys_read rather than
sys_read).

* Update ucalls_examples.txt to match current "-S" option output

* Add required "import subprocess" and remove unneeded "global syscalls"

* Factor out the syscall_name code into a separate python module syscall.py

Multiple scripts will find the syscall_name() function useful when
using the syscall tracepoints.  Factoring this code out into a separate
Python module avoids replicating it in multiple scripts.

* Use the syscall_name() function in syscount.py to make it more compact.
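
A sketch of the shape such a module might take (abridged; the real
syscall.py carries the full x86_64 table described below):

    # syscall.py -- abridged sketch of the shared helper module
    syscalls = {
        0: b"read",
        1: b"write",
        2: b"open",
        # ... remaining x86_64 entries elided ...
    }

    def syscall_name(syscall_num):
        """Map a syscall number to a name, e.g. syscall_name(1) -> b"write"."""
        return syscalls.get(syscall_num, b"[unknown: %d]" % syscall_num)

With this in place, a tool only needs to import syscall_name() from the
new module instead of carrying its own copy of the table.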

* Update the default syscall mappings and the way that they were generated

The default table was missing some newer syscall mappings.  The table
was regenerated using the syscallent.h file from the Fedora 30
strace-4.25-1.fc30.src.rpm, and the comment was updated with the
command actually used to generate the mappings.
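
For illustration, a regeneration step along these lines could be
scripted (hypothetical sketch; it assumes strace's syscallent.h entries
look like: [ 0] = { 3, TD, SEN(read), "read" }, and the comment in
syscall.py records the command actually used):

    # Hypothetical sketch: extract 'number: b"name",' lines for the
    # Python dictionary from strace's linux/x86_64/syscallent.h.
    import re

    entry = re.compile(r'\[\s*(\d+)\]\s*=\s*\{[^"]*"([^"]+)"')
    with open("syscallent.h") as f:
        for line in f:
            m = entry.search(line)
            if m:
                print('    %s: b"%s",' % (m.group(1), m.group(2)))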

* Add license information and update the syscalls

The default x86_64 syscall dictionary mapping syscall numbers to names
has been updated. The following x86_64 syscall names have been
updated:

    18: b"pwrite64",
    60: b"exit",
    166: b"umount2",

The following x86_64 syscalls have been added:

    313: b"finit_module",
    314: b"sched_setattr",
    315: b"sched_getattr",
    316: b"renameat2",
    317: b"seccomp",
    318: b"getrandom",
    319: b"memfd_create",
    320: b"kexec_file_load",
    321: b"bpf",
    322: b"execveat",
    323: b"userfaultfd",
    324: b"membarrier",
    325: b"mlock2",
    326: b"copy_file_range",
    327: b"preadv2",
    328: b"pwritev2",
    329: b"pkey_mprotect",
    330: b"pkey_alloc",
    331: b"pkey_free",
    332: b"statx",
    333: b"io_pgetevents",
    334: b"rseq",

* Eliminate stderr output and use of shell features

Redirect all stderr output so it isn't seen.  Also avoid using a shell
pipeline with the tail command; just strip off the first line in the
Python code instead.
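
A sketch of the pattern (assuming the ausyscall-based fallback used to
populate the table; details in the real module may differ):

    # Run ausyscall without a shell, hide its stderr, and drop the
    # header line in Python rather than piping through tail.
    import subprocess

    out = subprocess.check_output(["ausyscall", "--dump"],
                                  stderr=subprocess.DEVNULL)
    body = out.split(b"\n", 1)[1]  # strip the first (header) line
    syscalls = {}
    for line in body.splitlines():
        parts = line.split()
        if len(parts) == 2:
            syscalls[int(parts[0])] = parts[1]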

* Update lib/ucalls.py smoke test to require linux-4.7

The use of tracepoints in ucalls.py requires linux-4.7, so the test
was changed to run only on a suitable kernel.  The lib/ucalls.py
script no longer inserts hundreds of kprobes and is much faster as a
result, so the timeout adjustment and the comment about the script
being slow were removed.
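
The gating can be pictured along these lines (hypothetical helper; the
bcc test suite has its own kernel-version utilities):

    # Hypothetical kernel-version gate for the smoke test.
    import platform
    import re
    from unittest import TestCase, skipUnless

    def kernel_at_least(major, minor):
        m = re.match(r"(\d+)\.(\d+)", platform.release())
        return bool(m) and (int(m.group(1)), int(m.group(2))) >= (major, minor)

    class SmokeTests(TestCase):
        @skipUnless(kernel_at_least(4, 7), "ucalls tracepoints need Linux 4.7+")
        def test_ucalls(self):
            pass  # invoke lib/ucalls.py briefly and check that it runs
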
README.md


BPF Compiler Collection (BCC)

BCC is a toolkit for creating efficient kernel tracing and manipulation programs, and includes several useful tools and examples. It makes use of extended BPF (Berkeley Packet Filters), formally known as eBPF, a new feature that was first added to Linux 3.15. Much of what BCC uses requires Linux 4.1 and above.

eBPF was described by Ingo Molnár as:

One of the more interesting features in this cycle is the ability to attach eBPF programs (user-defined, sandboxed bytecode executed by the kernel) to kprobes. This allows user-defined instrumentation on a live kernel image that can never crash, hang or interfere with the kernel negatively.

BCC makes BPF programs easier to write, with kernel instrumentation in C (and includes a C wrapper around LLVM), and front-ends in Python and lua. It is suited for many tasks, including performance analysis and network traffic control.

Screenshot

This example traces a disk I/O kernel function, and populates an in-kernel power-of-2 histogram of the I/O size. For efficiency, only the histogram summary is returned to user-level.

# ./bitehist.py
Tracing... Hit Ctrl-C to end.
^C
     kbytes          : count     distribution
       0 -> 1        : 3        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 211      |**********                            |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 1        |                                      |
     128 -> 255      : 800      |**************************************|

The above output shows a bimodal distribution, where the largest mode of 800 I/O was between 128 and 255 Kbytes in size.

See the source: bitehist.py. What this traces, what this stores, and how the data is presented, can be entirely customized. This shows only some of many possible capabilities.
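
For flavor, here is a condensed sketch in the spirit of bitehist.py
(the kprobe attachment point is an assumption and varies across kernel
versions; see the real source for the exact probe):

    # Condensed sketch: in-kernel power-of-2 histogram of I/O sizes.
    from time import sleep
    from bcc import BPF

    b = BPF(text="""
    #include <uapi/linux/ptrace.h>
    #include <linux/blkdev.h>

    BPF_HISTOGRAM(dist);

    int trace_req_done(struct pt_regs *ctx, struct request *req)
    {
        // bucket the I/O size (in Kbytes) into a log2 histogram, in kernel
        dist.increment(bpf_log2l(req->__data_len / 1024));
        return 0;
    }
    """)
    b.attach_kprobe(event="blk_account_io_done", fn_name="trace_req_done")

    print("Tracing... Hit Ctrl-C to end.")
    try:
        sleep(99999999)
    except KeyboardInterrupt:
        pass
    b["dist"].print_log2_hist("kbytes")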

Installing

See INSTALL.md for installation steps on your platform.

FAQ

See FAQ.txt for the most common troubleshooting questions.

Reference guide

See docs/reference_guide.md for the reference guide to the bcc and bcc/BPF APIs.

Contents

Some of these are single files that contain both C and Python, others have a pair of .c and .py files, and some are directories of files.

Tracing

Examples:

Tools:

Networking

Examples:

BPF Introspection:

Tools that help to introspect BPF programs.

  • introspection/bps.c: List all BPF programs loaded into the kernel. 'ps' for BPF programs. Examples.

Motivation

BPF guarantees that the programs loaded into the kernel cannot crash, and cannot run forever, yet BPF is general-purpose enough to perform many arbitrary types of computation. Currently, it is possible to write a program in C that will compile into a valid BPF program, yet it is vastly easier to write a C program that will compile into invalid BPF (C is like that). The user won't know until trying to run the program whether it was valid or not.

With a BPF-specific frontend, one should be able to write in a language and receive feedback from the compiler on the validity as it pertains to a BPF backend. This toolkit aims to provide a frontend that can only create valid BPF programs while still harnessing its full flexibility.

Furthermore, current integrations with BPF have a kludgy workflow, sometimes involving compiling directly in a linux kernel source tree. This toolchain aims to minimize the time that a developer spends getting BPF compiled, and instead focus on the applications that can be written and the problems that can be solved with BPF.

The features of this toolkit include:

  • End-to-end BPF workflow in a shared library
    • A modified C language for BPF backends
    • Integration with llvm-bpf backend for JIT
    • Dynamic (un)loading of JITed programs
    • Support for BPF kernel hooks: socket filters, tc classifiers, tc actions, and kprobes
  • Bindings for Python
  • Examples for socket filters, tc classifiers, and kprobes
  • Self-contained tools for tracing a running system

In the future, more bindings besides python will likely be supported. Feel free to add support for the language of your choice and send a pull request!

Tutorials

Networking

At Red Hat Summit 2015, BCC was presented as part of a session on BPF. A multi-host vxlan environment is simulated and a BPF program used to monitor one of the physical interfaces. The BPF program keeps statistics on the inner and outer IP addresses traversing the interface, and the userspace component turns those statistics into a graph showing the traffic distribution at multiple granularities. See the code here.


Contributing

Already pumped up to commit some code? Here are some resources to join the discussions in the IOVisor community and see what you want to work on.

External links

Looking for more information on BCC and how it's being used? You can find links to other BCC content on the web in LINKS.md.