Upgrade to valgrind 3.12.0.
Release 3.12.0 (20 October 2016)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.12.0 is a feature release with many improvements and the usual
collection of bug fixes.
This release supports X86/Linux, AMD64/Linux, ARM32/Linux,
ARM64/Linux, PPC32/Linux, PPC64BE/Linux, PPC64LE/Linux, S390X/Linux,
MIPS32/Linux, MIPS64/Linux, ARM/Android, ARM64/Android,
MIPS32/Android, X86/Android, X86/Solaris, AMD64/Solaris, X86/MacOSX
10.10 and AMD64/MacOSX 10.10. There is also preliminary support for
X86/MacOSX 10.11/12, AMD64/MacOSX 10.11/12 and TILEGX/Linux.
* ================== PLATFORM CHANGES =================
* POWER: Support for ISA 3.0 has been added.
* mips: Support for the O32 FPXX ABI has been added.
* mips: Improved recognition of different processors.
* mips: Determination of the page size is now done at run time.
* amd64: Partial support for AMD FMA4 instructions.
* arm, arm64: Support for v8 crypto and CRC instructions.
* Improvements and robustification of the Solaris port.
* Preliminary support for MacOS 10.12 (Sierra) has been added.
Whilst 3.12.0 continues to support the 32-bit x86 instruction set, we
would prefer users to migrate to 64-bit x86 (a.k.a. amd64 or x86_64)
where possible. Valgrind's support for 32-bit x86 has stagnated in
recent years and has fallen far behind that for 64-bit x86
instructions. By contrast 64-bit x86 is well supported, up to and
including AVX2.
* ==================== TOOL CHANGES ====================
* Memcheck:
- Added meta mempool support for describing a custom allocator which:
  - auto-frees all chunks, assuming that destroying a pool destroys all
    objects in the pool
  - uses itself to allocate other memory blocks
  (An illustrative sketch of this interface follows this list of tool
  changes.)
- New flag --ignore-range-below-sp to ignore memory accesses below
the stack pointer, if you really have to. The related flag
--workaround-gcc296-bugs=yes is now deprecated. Use
--ignore-range-below-sp=1024-1 as a replacement.
* DRD:
- Improved thread startup time significantly on non-Linux platforms.
* DHAT:
- Added collection of the metric "tot-blocks-allocd".
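
Illustrative sketch of the meta mempool interface mentioned under Memcheck
above. This is a minimal, hypothetical arena allocator; it assumes the
VALGRIND_CREATE_MEMPOOL_EXT client request and the VALGRIND_MEMPOOL_AUTO_FREE
and VALGRIND_MEMPOOL_METAPOOL flags added alongside this feature, and the
allocator itself (names and layout) is made up purely for illustration:

    #include <stdlib.h>
    #include <valgrind/valgrind.h>

    /* Hypothetical arena allocator: destroying the arena frees every
       object carved out of it, and the arena is also used to carve out
       further memory blocks -- the two properties described above.    */
    typedef struct { char *base, *next; size_t size; } Arena;

    Arena *arena_new(size_t size)
    {
        Arena *a = malloc(sizeof *a);
        a->base = a->next = malloc(size);
        a->size = size;
        VALGRIND_CREATE_MEMPOOL_EXT(a->base, /*rzB*/0, /*is_zeroed*/0,
            VALGRIND_MEMPOOL_AUTO_FREE | VALGRIND_MEMPOOL_METAPOOL);
        return a;
    }

    void *arena_alloc(Arena *a, size_t n)
    {
        size_t rounded = (n + 15) & ~(size_t)15;      /* 16-byte align */
        if ((size_t)(a->base + a->size - a->next) < rounded)
            return NULL;                              /* arena exhausted */
        void *p = a->next;
        a->next += rounded;
        VALGRIND_MEMPOOL_ALLOC(a->base, p, n);  /* chunk: addressable, undefined */
        return p;
    }

    void arena_destroy(Arena *a)
    {
        /* With AUTO_FREE there is no need to issue VALGRIND_MEMPOOL_FREE
           for each outstanding chunk before destroying the pool.       */
        VALGRIND_DESTROY_MEMPOOL(a->base);
        free(a->base);
        free(a);
    }
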
* ==================== OTHER CHANGES ====================
* Replacement/wrapping of malloc/new related functions is now done by default
not just for system libraries, but for any globally defined malloc/new
related function (both in shared libraries and in statically linked
alternative malloc implementations). The dynamic (runtime) linker is
excluded, though. To intercept malloc/new related functions only in
system libraries, use --soname-synonyms=somalloc=nouserintercepts (where
"nouserintercepts" can be any non-existing library name).
This new functionality is not implemented for MacOS X.
* The maximum number of callers in a suppression entry is now equal to
the maximum value for --num-callers (500).
Note that --gen-suppressions=yes|all similarly generates suppressions
containing up to --num-callers frames. (An example suppression entry is
sketched at the end of this section, before the list of fixed bugs.)
* New and modified GDB server monitor features:
- Valgrind's gdbserver now accepts the command 'catch syscall'.
Note that you must have GDB >= 7.11 to use 'catch syscall' with
gdbserver.
* New option --run-cxx-freeres=<yes|no> can be used to control whether the
__gnu_cxx::__freeres() cleanup function is called. The default is 'yes'.
* Valgrind is able to read compressed debuginfo sections in two formats:
- zlib ELF gABI format with SHF_COMPRESSED flag (gcc option -gz=zlib)
- zlib GNU format with .zdebug sections (gcc option -gz=zlib-gnu)
* Modest JIT-cost improvements: the cost of instrumenting code blocks
for the most common use case (x86_64-linux, Memcheck) has been
reduced by 10%-15%.
* Improved performance for programs that do a lot of discarding of
instruction address ranges of 8KB or less.
* The C++ symbol demangler has been updated.
* More robustness against invalid syscall parameters on Linux.
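
For reference, a Memcheck suppression entry has the shape sketched below (the
function names here are invented purely for illustration); with the
suppression-size change described above, such an entry may now list up to 500
fun:/obj: frames, matching the --num-callers maximum:

    {
       deep-call-chain-example
       Memcheck:Leak
       match-leak-kinds: definite
       fun:malloc
       fun:helper_level_3
       fun:helper_level_2
       fun:helper_level_1
       fun:main
    }
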
* ==================== FIXED BUGS ====================
The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather
than mailing the developers (or mailing lists) directly -- bugs that
are not entered into bugzilla tend to get forgotten about or ignored.
To see details of a given bug, visit
https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.
191069 Exiting due to signal not reported in XML output
199468 Suppressions: stack size limited to 25
while --num-callers allows more frames
212352 vex amd64 unhandled opc_aux = 0x 2, first_opcode == 0xDC (FCOM)
278744 cvtps2pd with redundant RexW
303877 valgrind doesn't support compressed debuginfo sections.
345307 Warning about "still reachable" memory when using libstdc++ from gcc 5
348345 Assertion fails for negative lineno
351282 V 3.10.1 MIPS softfloat build broken with GCC 4.9.3 / binutils 2.25.1
351692 Dumps created by valgrind are not readable by gdb (mips32 specific)
351804 Crash on generating suppressions for "printf" call on OS X 10.10
352197 mips: mmap2() not wrapped correctly for page size > 4096
353083 arm64 doesn't implement various xattr system calls
353084 arm64 doesn't support sigpending system call
353137 www: update info for Supported Platforms
353138 www: update "The Valgrind Developers" page
353370 don't advertise RDRAND in cpuid for Core-i7-4910-like avx2 machine
== 365325
== 357873
353384 amd64->IR: 0x66 0xF 0x3A 0x62 0xD1 0x62 (pcmpXstrX $0x62)
353398 WARNING: unhandled amd64-solaris syscall: 207
353660 XML in auxwhat tag not escaping reserved symbols properly
353680 s390x: Crash with certain glibc versions due to non-implemented TBEGIN
353727 amd64->IR: 0x66 0xF 0x3A 0x62 0xD1 0x72 (pcmpXstrX $0x72)
353802 ELF debug info reader confused with multiple .rodata sections
353891 Assert 'bad_scanned_addr < VG_ROUNDDN(start+len, sizeof(Addr))' failed
353917 unhandled amd64-solaris syscall fchdir(120)
353920 unhandled amd64-solaris syscall: 170
354274 arm: unhandled instruction: 0xEBAD 0x0AC1 (sub.w sl, sp, r1, lsl #3)
354392 unhandled amd64-solaris syscall: 171
354797 Vbit test does not include Iops for Power 8 instruction support
354883 tst->os_state.pthread - magic_delta assertion failure on OSX 10.11
== 361351
== 362920
== 366222
354933 Fix documentation of --kernel-variant=android-no-hw-tls option
355188 valgrind should intercept all malloc related global functions
355454 do not intercept malloc related symbols from the runtime linker
355455 stderr.exp of test cases wrapmalloc and wrapmallocstatic overconstrained
356044 Dwarf line info reader misinterprets is_stmt register
356112 mips: replace addi with addiu
356393 valgrind (vex) crashes because isZeroU happened
== 363497
== 364497
356676 arm64-linux: unhandled syscalls 125, 126 (sched_get_priority_max/min)
356678 arm64-linux: unhandled syscall 232 (mincore)
356817 valgrind.h triggers compiler errors on MSVC when defining NVALGRIND
356823 Unsupported ARM instruction: stlex
357059 x86/amd64: SSE cvtpi2ps with memory source does transition to MMX state
357338 Unhandled instruction for SHA instructions libcrypto Boring SSL
357673 crash if I try to run valgrind with a binary link with libcurl
357833 Setting RLIMIT_DATA to zero breaks with linux 4.5+
357871 pthread_spin_destroy not properly wrapped
357887 Calls to VG_(fclose) do not close the file descriptor
357932 amd64->IR: accept redundant REX prefixes for {minsd,maxsd} m128, xmm.
358030 support direct socket calls on x86 32bit (new in linux 4.3)
358478 drd/tests/std_thread.cpp doesn't build with GCC6
359133 Assertion 'eltSzB <= ddpa->poolSzB' failed
359181 Buffer Overflow during Demangling
359201 futex syscall "skips" argument 5 if op is FUTEX_WAIT_BITSET
359289 s390x: popcnt (B9E1) not implemented
359472 The Power PC vsubuqm instruction doesn't always give the correct result
359503 Add missing syscalls for aarch64 (arm64)
359645 "You need libc6-dbg" help message could be more helpful
359703 s390: wire up separate socketcalls system calls
359724 getsockname might crash - deref_UInt should call safe_to_deref
359733 amd64 implement ld.so strchr/index override like x86
359767 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 1/5
359829 Power PC test suite none/tests/ppc64/test_isa_2_07.c uses
uninitialized data
359838 arm64: Unhandled instruction 0xD5033F5F (clrex)
359871 Incorrect mask handling in ppoll
359952 Unrecognised PCMPESTRM variants (0x70, 0x19)
360008 Contents of Power vr registers contents is not printed correctly when
the --vgdb-shadow-registers=yes option is used
360035 POWER PC instruction bcdadd and bcdsubtract generate result with
non-zero shadow bits
360378 arm64: Unhandled instruction 0x5E280844 (sha1h s4, s2)
360425 arm64 unsupported instruction ldpsw
== 364435
360519 none/tests/arm64/memory.vgtest might fail with newer gcc
360571 Error about the Android Runtime reading below the stack pointer on ARM
360574 Wrong parameter type for an ashmem ioctl() call on Android and ARM64
360749 kludge for multiple .rodata sections on Solaris no longer needed
360752 raise the number of reserved fds in m_main.c from 10 to 12
361207 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 2/5
361226 s390x: risbgn (EC59) not implemented
361253 [s390x] ex_clone.c:42: undefined reference to `pthread_create'
361354 ppc64[le]: wire up separate socketcalls system calls
361615 Inconsistent termination for multithreaded process terminated by signal
361926 Unhandled Solaris syscall: sysfs(84)
362009 V dumps core on unimplemented functionality before threads are created
362329 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 3/5
362894 missing (broken) support for wbit field on mtfsfi instruction (ppc64)
362935 [AsusWRT] Assertion 'sizeof(TTEntryC) <= 88' failed
362953 Request for an update to the Valgrind Developers page
363680 add renameat2() support
363705 arm64 missing syscall name_to_handle_at and open_by_handle_at
363714 ppc64 missing syscalls sync, waitid and name_to/open_by_handle_at
363858 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 4/5
364058 clarify in manual limitations of array overruns detections
364413 pselect sycallwrapper mishandles NULL sigmask
364728 Power PC, missing support for several HW registers in
get_otrack_shadow_offset_wrk()
364948 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 5/5
365273 Invalid write to stack location reported after signal handler runs
365912 ppc64BE segfault during jm-insns test (RELRO)
366079 FPXX Support for MIPS32 Valgrind
366138 Fix configure errors out when using Xcode 8 (clang 8.0.0)
366344 Multiple unhandled instruction for Aarch64
(0x0EE0E020, 0x1AC15800, 0x4E284801, 0x5E040023, 0x5E056060)
367995 Integration of memcheck with custom memory allocator
368120 x86_linux asm _start functions do not keep 16-byte aligned stack pointer
368412 False positive result for altivec capability check
368416 Add tc06_two_races_xml.exp output for ppc64
368419 Perf Events ioctls not implemented
368461 mmapunmap test fails on ppc64
368823 run_a_thread_NORETURN assembly code typo for VGP_arm64_linux target
369000 AMD64 fma4 instructions unsupported.
369169 ppc64 fails jm_int_isa_2_07 test
369175 jm_vec_isa_2_07 test crashes on ppc64
369209 valgrind loops and eats up all memory if cwd doesn't exist.
369356 pre_mem_read_sockaddr syscall wrapper can crash with bad sockaddr
369359 msghdr_foreachfield can crash when handling bad iovec
369360 Bad sigprocmask old or new sets can crash valgrind
369361 vmsplice syscall wrapper crashes on bad iovec
369362 Bad sigaction arguments crash valgrind
369383 x86 sys_modify_ldt wrapper crashes on bad ptr
369402 Bad set/get_thread_area pointer crashes valgrind
369441 bad lvec argument crashes process_vm_readv/writev syscall wrappers
369446 valgrind crashes on unknown fcntl command
369439 S390x: Unhandled insns RISBLG/RISBHG and LDE/LDER
369468 Remove quadratic metapool algorithm using VG_(HT_remove_at_Iter)
370265 ISA 3.0 HW cap stuff needs updating
371128 BCD add and subtract instructions on Power BE in 32-bit mode do not work
n-i-bz Fix incorrect (or infinite loop) unwind on RHEL7 x86 and amd64
n-i-bz massif --pages-as-heap=yes does not report peak caused by mmap+munmap
n-i-bz false positive leaks due to aspacemgr merging heap & non heap segments
n-i-bz Fix ppoll_alarm exclusion on OS X
n-i-bz Document brk segment limitation, reference manual in limit reached msg.
n-i-bz Fix clobber list in none/tests/amd64/xacq_xrel.c [valgrind r15737]
n-i-bz Bump allowed shift value for "add.w reg, sp, reg, lsl #N" [vex r3206]
n-i-bz amd64: memcheck false positive with shr %edx
n-i-bz arm3: Allow early writeback of SP base register in "strd rD, [sp, #-16]"
n-i-bz ppc: Fix two cases of PPCAvFpOp vs PPCFpOp enum confusion
n-i-bz arm: Fix incorrect register-number constraint check for LDAEX{,B,H,D}
n-i-bz DHAT: added collection of the metric "tot-blocks-allocd"
(3.12.0.RC1: 20 October 2016, vex r3282, valgrind r16094)
(3.12.0.RC2: 20 October 2016, vex r3282, valgrind r16096)
(3.12.0: 21 October 2016, vex r3282, valgrind r16098)
Bug: http://b/37470713
Bug: http://b/29251682
Test: ran runtests-arm(64)?.sh and the bug reporter's specific binary (32- and 64-bit)
Change-Id: I43ccbea946d89fc4ae9f355181ac5061d6ce4453
diff --git a/docs/html/FAQ.html b/docs/html/FAQ.html
new file mode 100644
index 0000000..17c6491
--- /dev/null
+++ b/docs/html/FAQ.html
@@ -0,0 +1,51 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>Valgrind FAQ</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="index.html" title="Valgrind Documentation">
+<link rel="prev" href="nl-manual.html" title="14. Nulgrind: the minimal Valgrind tool">
+<link rel="next" href="faq.html" title="Valgrind Frequently Asked Questions">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="nl-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="index.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="faq.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="book">
+<div class="titlepage">
+<div>
+<div><h1 class="title">
+<a name="FAQ"></a>Valgrind FAQ</h1></div>
+<div><p class="releaseinfo">Release 3.12.0 20 October 2016</p></div>
+<div><p class="copyright">Copyright © 2000-2016 <a class="ulink" href="http://www.valgrind.org/info/developers.html" target="_top">Valgrind Developers</a></p></div>
+<div><div class="legalnotice">
+<a name="idm140639119221280"></a><p>Email: <a class="ulink" href="mailto:valgrind@valgrind.org" target="_top">valgrind@valgrind.org</a></p>
+</div></div>
+</div>
+<hr>
+</div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc"><dt><span class="article"><a href="faq.html">Valgrind Frequently Asked Questions</a></span></dt></dl>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="nl-manual.html"><< 14. Nulgrind: the minimal Valgrind tool</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="index.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="faq.html">Valgrind Frequently Asked Questions >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/QuickStart.html b/docs/html/QuickStart.html
new file mode 100644
index 0000000..a304edb
--- /dev/null
+++ b/docs/html/QuickStart.html
@@ -0,0 +1,61 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>The Valgrind Quick Start Guide</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="index.html" title="Valgrind Documentation">
+<link rel="prev" href="index.html" title="Valgrind Documentation">
+<link rel="next" href="quick-start.html" title="The Valgrind Quick Start Guide">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="index.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="index.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="quick-start.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="book">
+<div class="titlepage">
+<div>
+<div><h1 class="title">
+<a name="QuickStart"></a>The Valgrind Quick Start Guide</h1></div>
+<div><p class="releaseinfo">Release 3.12.0 20 October 2016</p></div>
+<div><p class="copyright">Copyright © 2000-2016 <a class="ulink" href="http://www.valgrind.org/info/developers.html" target="_top">Valgrind Developers</a></p></div>
+<div><div class="legalnotice">
+<a name="idm140639120054432"></a><p>Email: <a class="ulink" href="mailto:valgrind@valgrind.org" target="_top">valgrind@valgrind.org</a></p>
+</div></div>
+</div>
+<hr>
+</div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="article"><a href="quick-start.html">The Valgrind Quick Start Guide</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="quick-start.html#quick-start.intro">1. Introduction</a></span></dt>
+<dt><span class="sect1"><a href="quick-start.html#quick-start.prepare">2. Preparing your program</a></span></dt>
+<dt><span class="sect1"><a href="quick-start.html#quick-start.mcrun">3. Running your program under Memcheck</a></span></dt>
+<dt><span class="sect1"><a href="quick-start.html#quick-start.interpret">4. Interpreting Memcheck's output</a></span></dt>
+<dt><span class="sect1"><a href="quick-start.html#quick-start.caveats">5. Caveats</a></span></dt>
+<dt><span class="sect1"><a href="quick-start.html#quick-start.info">6. More information</a></span></dt>
+</dl></dd>
+</dl>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="index.html"><< Valgrind Documentation</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="index.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="quick-start.html">The Valgrind Quick Start Guide >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/bbv-manual.html b/docs/html/bbv-manual.html
new file mode 100644
index 0000000..2039193
--- /dev/null
+++ b/docs/html/bbv-manual.html
@@ -0,0 +1,366 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>12. BBV: an experimental basic block vector generation tool</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="sg-manual.html" title="11. SGCheck: an experimental stack and global array overrun detector">
+<link rel="next" href="lk-manual.html" title="13. Lackey: an example tool">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="sg-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="lk-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="bbv-manual"></a>12. BBV: an experimental basic block vector generation tool</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.overview">12.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.quickstart">12.2. Using Basic Block Vectors to create SimPoints</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.usage">12.3. BBV Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.fileformat">12.4. Basic Block Vector File Format</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.implementation">12.5. Implementation</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.threadsupport">12.6. Threaded Executable Support</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.validation">12.7. Validation</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.performance">12.8. Performance</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=exp-bbv</code> on the Valgrind
+command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.overview"></a>12.1. Overview</h2></div></div></div>
+<p>
+ A basic block is a linear section of code with one entry point and one exit
+ point. A <span class="emphasis"><em>basic block vector</em></span> (BBV) is a list of all
+ basic blocks entered during program execution, and a count of how many
+ times each basic block was run.
+</p>
+<p>
+ BBV is a tool that generates basic block vectors for use with the
+ <a class="ulink" href="http://www.cse.ucsd.edu/~calder/simpoint/" target="_top">SimPoint</a>
+ analysis tool.
+ The SimPoint methodology enables speeding up architectural
+ simulations by only running a small portion of a program
+ and then extrapolating total behavior from this
+ small portion. Most programs exhibit phase-based behavior, which
+ means that at various times during execution a program will encounter
+ intervals of time where the code behaves similarly to a previous
+ interval. If you can detect these intervals and group them together,
+ an approximation of the total program behavior can be obtained
+ by only simulating a bare minimum number of intervals, and then scaling
+ the results.
+</p>
+<p>
+ In computer architecture research, running a
+ benchmark on a cycle-accurate simulator can cause slowdowns on the order
+ of 1000 times, making it take days, weeks, or even longer to run full
+ benchmarks. By utilizing SimPoint this can be reduced significantly,
+ usually by 90-95%, while still retaining reasonable accuracy.
+</p>
+<p>
+ A more complete introduction to how SimPoint works can be
+ found in the paper "Automatically Characterizing Large Scale
+ Program Behavior" by T. Sherwood, E. Perelman, G. Hamerly, and
+ B. Calder.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.quickstart"></a>12.2. Using Basic Block Vectors to create SimPoints</h2></div></div></div>
+<p>
+ To quickly create a basic block vector file, you will call Valgrind
+ like this:
+
+ </p>
+<pre class="programlisting">valgrind --tool=exp-bbv /bin/ls</pre>
+<p>
+
+ In this case we are running on <code class="filename">/bin/ls</code>,
+ but this can be any program. By default a file called
+ <code class="computeroutput">bb.out.PID</code> will be created,
+ where PID is replaced by the process ID of the running process.
+ This file contains the basic block vector. For long-running programs
+ this file can be quite large, so it might be wise to compress
+ it with gzip or some other compression program.
+</p>
+<p>
+ To create actual SimPoint results, you will need the SimPoint utility,
+ available from the
+ <a class="ulink" href="http://www.cse.ucsd.edu/~calder/simpoint/" target="_top">SimPoint webpage</a>.
+ Assuming you have downloaded SimPoint 3.2 and compiled it,
+ create SimPoint results with a command like the following:
+
+ </p>
+<pre class="programlisting">
+./SimPoint.3.2/bin/simpoint -inputVectorsGzipped \
+ -loadFVFile bb.out.1234.gz \
+ -k 5 -saveSimpoints results.simpts \
+ -saveSimpointWeights results.weights</pre>
+<p>
+
+ where bb.out.1234.gz is your compressed basic block vector file
+ generated by BBV.
+</p>
+<p>
+ The SimPoint utility does random linear projection using 15 dimensions,
+ then does k-means clustering to calculate which intervals are
+ of interest. In this example we specify 5 intervals with the
+ -k 5 option.
+</p>
+<p>
+ The outputs from the SimPoint run are the
+ <code class="computeroutput">results.simpts</code>
+ and <code class="computeroutput">results.weights</code> files.
+ The first holds the 5 most relevant intervals of the program.
+ The second holds the weight to scale each interval by when
+ extrapolating full-program behavior. The intervals and the weights
+ can be used in conjunction with a simulator that supports
+ fast-forwarding; you fast-forward to the interval of interest,
+ collect stats for the desired interval length, then use
+ statistics gathered in conjunction with the weights to
+ calculate your results.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.usage"></a>12.3. BBV Command-line Options</h2></div></div></div>
+<p> BBV-specific command-line options are:</p>
+<div class="variablelist">
+<a name="bbv.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.bb-out-file"></a><span class="term">
+ <code class="option">--bb-out-file=<name> [default: bb.out.%p] </code>
+ </span>
+</dt>
+<dd><p>
+ This option selects the name of the basic block vector file. The
+ <code class="option">%p</code> and <code class="option">%q</code> format specifiers can be
+ used to embed the process ID and/or the contents of an environment
+ variable in the name, as is the case for the core option
+ <code class="option"><a class="xref" href="manual-core.html#opt.log-file">--log-file</a></code>.
+ </p></dd>
+<dt>
+<a name="opt.pc-out-file"></a><span class="term">
+ <code class="option">--pc-out-file=<name> [default: pc.out.%p] </code>
+ </span>
+</dt>
+<dd><p>
+ This option selects the name of the PC file.
+ This file holds program counter addresses
+ and function name info for the various basic blocks.
+ This can be used in conjunction
+ with the basic block vector file to fast-forward via function names
+ instead of just instruction counts. The
+ <code class="option">%p</code> and <code class="option">%q</code> format specifiers can be
+ used to embed the process ID and/or the contents of an environment
+ variable in the name, as is the case for the core option
+ <code class="option"><a class="xref" href="manual-core.html#opt.log-file">--log-file</a></code>.
+ </p></dd>
+<dt>
+<a name="opt.interval-size"></a><span class="term">
+ <code class="option">--interval-size=<number> [default: 100000000] </code>
+ </span>
+</dt>
+<dd><p>
+ This option selects the size of the interval to use.
+ The default is 100
+ million instructions, which is a commonly used value.
+ Other sizes can be used; smaller intervals can help programs
+ with finer-grained phases. However, smaller interval sizes
+ can lead to accuracy issues due to warm-up effects
+ (when fast-forwarding, the various architectural features
+ will be un-initialized, and it will take some number
+ of instructions before they "warm up" to the state a
+ full simulation would have reached without the fast-forwarding;
+ large interval sizes tend to mitigate this).
+ </p></dd>
+<dt>
+<a name="opt.instr-count-only"></a><span class="term">
+ <code class="option">--instr-count-only [default: no] </code>
+ </span>
+</dt>
+<dd><p>
+ This option tells the tool to only display instruction count
+ totals, and to not generate the actual basic block vector file.
+ This is useful for debugging, and for gathering instruction count
+ info without generating the large basic block vector files.
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.fileformat"></a>12.4. Basic Block Vector File Format</h2></div></div></div>
+<p>
+ The Basic Block Vector is dumped at fixed intervals. This
+ is commonly done every 100 million instructions; the
+ <code class="option">--interval-size</code> option can be
+ used to change this.
+</p>
+<p>
+ The output file looks like this:
+</p>
+<pre class="programlisting">
+T:45:1024 :189:99343
+T:11:78573 :15:1353 :56:1
+T:18:45 :12:135353 :56:78 :314:4324263</pre>
+<p>
+ Each new interval starts with a T. This is followed on the same line
+ by a series of basic block and frequency pairs, one for each
+ basic block that was entered during the interval. The format for
+ each block/frequency pair is a colon, followed by a number that
+ uniquely identifies the basic block, another colon, and then
+ the frequency (which is the number of times the block was entered,
+ multiplied by the number of instructions in the block). The
+ pairs are separated from each other by a space.
+</p>
+<p>
+ The frequency count is multiplied by the number of instructions that are
+ in the basic block, in order to weigh the count so that instructions in
+ small basic blocks aren't counted as more important than instructions
+ in large basic blocks.
+</p>
+<p>
+ The SimPoint program only processes lines that start with a "T". All
+ other lines are ignored. Traditionally comments are indicated by
+ starting a line with a "#" character. Some other BBV generation tools,
+ such as PinPoints, generate lines beginning with letters other than "T"
+ to indicate more information about the program being run. We do
+ not generate these, as the SimPoint utility ignores them.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.implementation"></a>12.5. Implementation</h2></div></div></div>
+<p>
+ Valgrind provides all of the information necessary to create
+ BBV files. In the current implementation, all instructions
+ are instrumented. This is slower (by approximately a factor
+ of two) than a method that instruments at the basic block level,
+ but there are some complications (especially with rep prefix
+ detection) that make that method more difficult.
+</p>
+<p>
+ Valgrind actually provides instrumentation at a superblock level.
+ A superblock has one entry point but unlike basic blocks can
+ have multiple exit points. Once a branch occurs into the middle
+ of a block, it is split into a new basic block. Because
+ Valgrind cannot produce "true" basic blocks, the generated
+ BBV vectors will be different than those generated by other tools.
+ In practice this does not seem to affect the accuracy of the
+ SimPoint results. We do internally force the
+ <code class="option">--vex-guest-chase-thresh=0</code>
+ option to Valgrind which forces a more basic-block-like
+ behavior.
+</p>
+<p>
+ When a superblock is run for the first time, it is instrumented
+ with our BBV routine. A block info (bbInfo) structure is allocated
+ which holds the various information and statistics for the block.
+ A unique block ID is assigned to the block, and then the
+ structure is placed into an ordered set.
+ Then each native instruction in the block is instrumented to
+ call an instruction counting routine with a pointer to the block
+ info structure as an argument.
+</p>
+<p>
+ At run-time, our instruction counting routines are called once
+ per native instruction. The relevant block info structure is accessed
+ and the block count and total instruction count are updated.
+ If the total instruction count overflows the interval size
+ then we walk the ordered set, writing out the statistics for
+ any block that was accessed in the interval, then resetting the
+ block counters to zero.
+</p>
+<p>
+ On the x86 and amd64 architectures the counting code has extra
+ code to handle rep-prefixed string instructions. This is because
+ actual hardware counts a rep-prefixed instruction
+ as one instruction, while a naive Valgrind implementation
+ would count it as many (possibly hundreds, thousands or even millions)
+ of instructions. We handle rep-prefixed instructions specially,
+ in order to make the results match those obtained with hardware performance
+ counters.
+</p>
+<p>
+ BBV also counts the fldcw instruction. This instruction is used on
+ x86 machines in various ways; it is most commonly found when converting
+ floating point values into integers.
+ On Pentium 4 systems the retired instruction performance
+ counter counts this instruction as two instructions (all other
+ known processors only count it as one).
+ This can affect results when using SimPoint on Pentium 4 systems.
+ We provide the fldcw count so that users can evaluate whether it
+ will impact their results enough to avoid using Pentium 4 machines
+ for their experiments. It would be possible to add an option to
+ this tool that mimics the double-counting so that the generated BBV
+ files would be usable for experiments using hardware performance
+ counters on Pentium 4 systems.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.threadsupport"></a>12.6. Threaded Executable Support</h2></div></div></div>
+<p>
+ BBV supports threaded programs. When a program has multiple threads,
+ an additional basic block vector file is created for each thread (each
+ additional file is the specified filename with the thread number
+ appended at the end).
+</p>
+<p>
+ There is no official method of using SimPoint with
+ threaded workloads. The most common method is to run
+ SimPoint on each thread's results independently, and use
+ some method of deterministic execution to try to match the
+ original workload. This should be possible with the current
+ BBV.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.validation"></a>12.7. Validation</h2></div></div></div>
+<p>
+ BBV has been tested on x86, amd64, and ppc32 platforms.
+ An earlier version of BBV was tested in detail using
+ hardware performance counters; this work is described in a paper
+ from the HiPEAC'08 conference, "Using Dynamic Binary Instrumentation
+ to Generate Multi-Platform SimPoints: Methodology and Accuracy" by
+ V.M. Weaver and S.A. McKee.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="bbv-manual.performance"></a>12.8. Performance</h2></div></div></div>
+<p>
+ Using this program slows down execution by roughly a factor of 40
+ over native execution. This varies depending on the machine
+ used and the benchmark being run.
+ On the SPEC CPU 2000 benchmarks running on a 3.4GHz Pentium D
+ processor, the slowdown ranges from 24x (mcf) to 340x (vortex.2).
+</p>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="sg-manual.html"><< 11. SGCheck: an experimental stack and global array overrun detector</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="lk-manual.html">13. Lackey: an example tool >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/cg-manual.html b/docs/html/cg-manual.html
new file mode 100644
index 0000000..fabf6fa
--- /dev/null
+++ b/docs/html/cg-manual.html
@@ -0,0 +1,1176 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>5. Cachegrind: a cache and branch-prediction profiler</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="mc-manual.html" title="4. Memcheck: a memory error detector">
+<link rel="next" href="cl-manual.html" title="6. Callgrind: a call-graph generating cache and branch prediction profiler">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="mc-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="cl-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="cg-manual"></a>5. Cachegrind: a cache and branch-prediction profiler</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.overview">5.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.profile">5.2. Using Cachegrind, cg_annotate and cg_merge</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.running-cachegrind">5.2.1. Running Cachegrind</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.outputfile">5.2.2. Output File</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.running-cg_annotate">5.2.3. Running cg_annotate</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.the-output-preamble">5.2.4. The Output Preamble</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.the-global">5.2.5. The Global and Function-level Counts</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.line-by-line">5.2.6. Line-by-line Counts</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.assembler">5.2.7. Annotating Assembly Code Programs</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#ms-manual.forkingprograms">5.2.8. Forking Programs</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.annopts.warnings">5.2.9. cg_annotate Warnings</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.annopts.things-to-watch-out-for">5.2.10. Unusual Annotation Cases</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.cg_merge">5.2.11. Merging Profiles with cg_merge</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.cg_diff">5.2.12. Differencing Profiles with cg_diff</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.cgopts">5.3. Cachegrind Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.annopts">5.4. cg_annotate Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.mergeopts">5.5. cg_merge Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.diffopts">5.6. cg_diff Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.acting-on">5.7. Acting on Cachegrind's Information</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.sim-details">5.8. Simulation Details</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cg-manual.html#cache-sim">5.8.1. Cache Simulation Specifics</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#branch-sim">5.8.2. Branch Simulation Specifics</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.annopts.accuracy">5.8.3. Accuracy</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.impl-details">5.9. Implementation Details</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.impl-details.how-cg-works">5.9.1. How Cachegrind Works</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.impl-details.file-format">5.9.2. Cachegrind Output File Format</a></span></dt>
+</dl></dd>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=cachegrind</code> on the
+Valgrind command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.overview"></a>5.1. Overview</h2></div></div></div>
+<p>Cachegrind simulates how your program interacts with a machine's cache
+hierarchy and (optionally) branch predictor. It simulates a machine with
+independent first-level instruction and data caches (I1 and D1), backed by a
+unified second-level cache (L2). This exactly matches the configuration of
+many modern machines.</p>
+<p>However, some modern machines have three or four levels of cache. For these
+machines (in the cases where Cachegrind can auto-detect the cache
+configuration) Cachegrind simulates the first-level and last-level caches.
+The reason for this choice is that the last-level cache has the most influence on
+runtime, as it masks accesses to main memory. Furthermore, the L1 caches
+often have low associativity, so simulating them can detect cases where the
+code interacts badly with this cache (eg. traversing a matrix column-wise
+with the row length being a power of 2).</p>
+<p>Therefore, Cachegrind always refers to the I1, D1 and LL (last-level)
+caches.</p>
+<p>
+Cachegrind gathers the following statistics (abbreviations used for each statistic
+are given in parentheses):</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>I cache reads (<code class="computeroutput">Ir</code>,
+ which equals the number of instructions executed),
+ I1 cache read misses (<code class="computeroutput">I1mr</code>) and
+ LL cache instruction read misses (<code class="computeroutput">ILmr</code>).
+ </p></li>
+<li class="listitem"><p>D cache reads (<code class="computeroutput">Dr</code>, which
+ equals the number of memory reads),
+ D1 cache read misses (<code class="computeroutput">D1mr</code>), and
+ LL cache data read misses (<code class="computeroutput">DLmr</code>).
+ </p></li>
+<li class="listitem"><p>D cache writes (<code class="computeroutput">Dw</code>, which equals
+ the number of memory writes),
+ D1 cache write misses (<code class="computeroutput">D1mw</code>), and
+ LL cache data write misses (<code class="computeroutput">DLmw</code>).
+ </p></li>
+<li class="listitem"><p>Conditional branches executed (<code class="computeroutput">Bc</code>) and
+ conditional branches mispredicted (<code class="computeroutput">Bcm</code>).
+ </p></li>
+<li class="listitem"><p>Indirect branches executed (<code class="computeroutput">Bi</code>) and
+ indirect branches mispredicted (<code class="computeroutput">Bim</code>).
+ </p></li>
+</ul></div>
+<p>Note that D1 total accesses is given by
+<code class="computeroutput">D1mr</code> +
+<code class="computeroutput">D1mw</code>, and that LL total
+accesses is given by <code class="computeroutput">ILmr</code> +
+<code class="computeroutput">DLmr</code> +
+<code class="computeroutput">DLmw</code>.
+</p>
+<p>These statistics are presented for the entire program and for each
+function in the program. You can also annotate each line of source code in
+the program with the counts that were caused directly by it.</p>
+<p>On a modern machine, an L1 miss will typically cost
+around 10 cycles, an LL miss can cost as much as 200
+cycles, and a mispredicted branch costs in the region of 10
+to 30 cycles. Detailed cache and branch profiling can be very useful
+for understanding how your program interacts with the machine and thus how
+to make it faster.</p>
+<p>Also, since one instruction cache read is performed per
+instruction executed, you can find out how many instructions are
+executed per line, which can be useful for traditional profiling.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.profile"></a>5.2. Using Cachegrind, cg_annotate and cg_merge</h2></div></div></div>
+<p>First off, as for normal Valgrind use, you probably want to
+compile with debugging info (the
+<code class="option">-g</code> option). But by contrast with
+normal Valgrind use, you probably do want to turn
+optimisation on, since you should profile your program as it will
+be normally run.</p>
+<p>Then, you need to run Cachegrind itself to gather the profiling
+information, and then run cg_annotate to get a detailed presentation of that
+information. As an optional intermediate step, you can use cg_merge to sum
+together the outputs of multiple Cachegrind runs into a single file which
+you then use as the input for cg_annotate. Alternatively, you can use
+cg_diff to difference the outputs of two Cachegrind runs into a single file
+which you then use as the input for cg_annotate.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.running-cachegrind"></a>5.2.1. Running Cachegrind</h3></div></div></div>
+<p>To run Cachegrind on a program <code class="filename">prog</code>, run:</p>
+<pre class="screen">
+valgrind --tool=cachegrind prog
+</pre>
+<p>The program will execute (slowly). Upon completion,
+summary statistics that look like this will be printed:</p>
+<pre class="programlisting">
+==31751== I refs: 27,742,716
+==31751== I1 misses: 276
+==31751== LLi misses: 275
+==31751== I1 miss rate: 0.0%
+==31751== LLi miss rate: 0.0%
+==31751==
+==31751== D refs: 15,430,290 (10,955,517 rd + 4,474,773 wr)
+==31751== D1 misses: 41,185 ( 21,905 rd + 19,280 wr)
+==31751== LLd misses: 23,085 ( 3,987 rd + 19,098 wr)
+==31751== D1 miss rate: 0.2% ( 0.1% + 0.4%)
+==31751== LLd miss rate: 0.1% ( 0.0% + 0.4%)
+==31751==
+==31751== LL misses: 23,360 ( 4,262 rd + 19,098 wr)
+==31751== LL miss rate: 0.0% ( 0.0% + 0.4%)</pre>
+<p>Cache accesses for instruction fetches are summarised
+first, giving the number of fetches made (this is the number of
+instructions executed, which can be useful to know in its own
+right), the number of I1 misses, and the number of LL instruction
+(<code class="computeroutput">LLi</code>) misses.</p>
+<p>Cache accesses for data follow. The information is similar
+to that of the instruction fetches, except that the values are
+also shown split between reads and writes (note each row's
+<code class="computeroutput">rd</code> and
+<code class="computeroutput">wr</code> values add up to the row's
+total).</p>
+<p>Combined instruction and data figures for the LL cache
+follow that. Note that the LL miss rate is computed relative to the total
+number of memory accesses, not the number of L1 misses. I.e. it is
+<code class="computeroutput">(ILmr + DLmr + DLmw) / (Ir + Dr + Dw)</code>
+not
+<code class="computeroutput">(ILmr + DLmr + DLmw) / (I1mr + D1mr + D1mw)</code>
+</p>
+<p>Branch prediction statistics are not collected by default.
+To do so, add the option <code class="option">--branch-sim=yes</code>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.outputfile"></a>5.2.2. Output File</h3></div></div></div>
+<p>As well as printing summary information, Cachegrind also writes
+more detailed profiling information to a file. By default this file is named
+<code class="filename">cachegrind.out.<pid></code> (where
+<code class="filename"><pid></code> is the program's process ID), but its name
+can be changed with the <code class="option">--cachegrind-out-file</code> option. This
+file is human-readable, but is intended to be interpreted by the
+accompanying program cg_annotate, described in the next section.</p>
+<p>The default <code class="computeroutput">.<pid></code> suffix
+on the output file name serves two purposes. Firstly, it means you
+don't have to rename old log files that you don't want to overwrite.
+Secondly, and more importantly, it allows correct profiling with the
+<code class="option">--trace-children=yes</code> option of
+programs that spawn child processes.</p>
+<p>The output file can be big, many megabytes for large applications
+built with full debugging information.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.running-cg_annotate"></a>5.2.3. Running cg_annotate</h3></div></div></div>
+<p>Before using cg_annotate,
+it is worth widening your window to be at least 120 characters
+wide if possible, as the output lines can be quite long.</p>
+<p>To get a function-by-function summary, run:</p>
+<pre class="screen">cg_annotate <filename></pre>
+<p>on a Cachegrind output file.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.the-output-preamble"></a>5.2.4. The Output Preamble</h3></div></div></div>
+<p>The first part of the output looks like this:</p>
+<pre class="programlisting">
+--------------------------------------------------------------------------------
+I1 cache: 65536 B, 64 B, 2-way associative
+D1 cache: 65536 B, 64 B, 2-way associative
+LL cache: 262144 B, 64 B, 8-way associative
+Command: concord vg_to_ucode.c
+Events recorded: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
+Events shown: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
+Event sort order: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
+Threshold: 99%
+Chosen for annotation:
+Auto-annotation: off
+</pre>
+<p>This is a summary of the annotation options:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>I1 cache, D1 cache, LL cache: cache configuration. So
+ you know the configuration with which these results were
+ obtained.</p></li>
+<li class="listitem"><p>Command: the command line invocation of the program
+ under examination.</p></li>
+<li class="listitem"><p>Events recorded: which events were recorded.</p></li>
+<li class="listitem"><p>Events shown: the events shown, which is a subset of the events
+ gathered. This can be adjusted with the
+ <code class="option">--show</code> option.</p></li>
+<li class="listitem">
+<p>Event sort order: the sort order in which functions are
+ shown. For example, in this case the functions are sorted
+ from highest <code class="computeroutput">Ir</code> counts to
+ lowest. If two functions have identical
+ <code class="computeroutput">Ir</code> counts, they will then be
+ sorted by <code class="computeroutput">I1mr</code> counts, and
+ so on. This order can be adjusted with the
+ <code class="option">--sort</code> option.</p>
+<p>Note that this dictates the order the functions appear.
+ It is <span class="emphasis"><em>not</em></span> the order in which the columns
+ appear; that is dictated by the "events shown" line (and can
+ be changed with the <code class="option">--show</code>
+ option).</p>
+</li>
+<li class="listitem"><p>Threshold: cg_annotate
+ by default omits functions that cause very low counts
+ to avoid drowning you in information. In this case,
+ cg_annotate shows summaries for the functions that account for
+ 99% of the <code class="computeroutput">Ir</code> counts;
+ <code class="computeroutput">Ir</code> is chosen as the
+ threshold event since it is the primary sort event. The
+ threshold can be adjusted with the
+ <code class="option">--threshold</code>
+ option.</p></li>
+<li class="listitem"><p>Chosen for annotation: names of files specified
+ manually for annotation; in this case none.</p></li>
+<li class="listitem"><p>Auto-annotation: whether auto-annotation was requested
+ via the <code class="option">--auto=yes</code>
+ option. In this case no.</p></li>
+</ul></div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.the-global"></a>5.2.5. The Global and Function-level Counts</h3></div></div></div>
+<p>Then follows summary statistics for the whole
+program:</p>
+<pre class="programlisting">
+--------------------------------------------------------------------------------
+Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
+--------------------------------------------------------------------------------
+27,742,716 276 275 10,955,517 21,905 3,987 4,474,773 19,280 19,098 PROGRAM TOTALS</pre>
+<p>
+These are similar to the summary provided when Cachegrind finishes running.
+</p>
+<p>Then comes function-by-function statistics:</p>
+<pre class="programlisting">
+--------------------------------------------------------------------------------
+Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw file:function
+--------------------------------------------------------------------------------
+8,821,482 5 5 2,242,702 1,621 73 1,794,230 0 0 getc.c:_IO_getc
+5,222,023 4 4 2,276,334 16 12 875,959 1 1 concord.c:get_word
+2,649,248 2 2 1,344,810 7,326 1,385 . . . vg_main.c:strcmp
+2,521,927 2 2 591,215 0 0 179,398 0 0 concord.c:hash
+2,242,740 2 2 1,046,612 568 22 448,548 0 0 ctype.c:tolower
+1,496,937 4 4 630,874 9,000 1,400 279,388 0 0 concord.c:insert
+ 897,991 51 51 897,831 95 30 62 1 1 ???:???
+ 598,068 1 1 299,034 0 0 149,517 0 0 ../sysdeps/generic/lockfile.c:__flockfile
+ 598,068 0 0 299,034 0 0 149,517 0 0 ../sysdeps/generic/lockfile.c:__funlockfile
+ 598,024 4 4 213,580 35 16 149,506 0 0 vg_clientmalloc.c:malloc
+ 446,587 1 1 215,973 2,167 430 129,948 14,057 13,957 concord.c:add_existing
+ 341,760 2 2 128,160 0 0 128,160 0 0 vg_clientmalloc.c:vg_trap_here_WRAPPER
+ 320,782 4 4 150,711 276 0 56,027 53 53 concord.c:init_hash_table
+ 298,998 1 1 106,785 0 0 64,071 1 1 concord.c:create
+ 149,518 0 0 149,516 0 0 1 0 0 ???:tolower@@GLIBC_2.0
+ 149,518 0 0 149,516 0 0 1 0 0 ???:fgetc@@GLIBC_2.0
+ 95,983 4 4 38,031 0 0 34,409 3,152 3,150 concord.c:new_word_node
+ 85,440 0 0 42,720 0 0 21,360 0 0 vg_clientmalloc.c:vg_bogus_epilogue</pre>
+<p>Each function
+is identified by a
+<code class="computeroutput">file_name:function_name</code> pair. If
+a column contains only a dot it means the function never performs
+that event (e.g. the third row shows that
+<code class="computeroutput">strcmp()</code> contains no
+instructions that write to memory). The name
+<code class="computeroutput">???</code> is used if the file name
+and/or function name could not be determined from debugging
+information. If most of the entries have the form
+<code class="computeroutput">???:???</code> the program probably
+wasn't compiled with <code class="option">-g</code>.</p>
+<p>It is worth noting that functions will come both from
+the profiled program (e.g. <code class="filename">concord.c</code>)
+and from libraries (e.g. <code class="filename">getc.c</code>).</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.line-by-line"></a>5.2.6. Line-by-line Counts</h3></div></div></div>
+<p>There are two ways to annotate source files -- by specifying them
+manually as arguments to cg_annotate, or with the
+<code class="option">--auto=yes</code> option. For example, the output from running
+<code class="filename">cg_annotate <filename> concord.c</code> for our example
+produces the same output as above followed by an annotated version of
+<code class="filename">concord.c</code>, a section of which looks like:</p>
+<pre class="programlisting">
+--------------------------------------------------------------------------------
+-- User-annotated source: concord.c
+--------------------------------------------------------------------------------
+Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
+
+ . . . . . . . . . void init_hash_table(char *file_name, Word_Node *table[])
+ 3 1 1 . . . 1 0 0 {
+ . . . . . . . . . FILE *file_ptr;
+ . . . . . . . . . Word_Info *data;
+ 1 0 0 . . . 1 1 1 int line = 1, i;
+ . . . . . . . . .
+ 5 0 0 . . . 3 0 0 data = (Word_Info *) create(sizeof(Word_Info));
+ . . . . . . . . .
+ 4,991 0 0 1,995 0 0 998 0 0 for (i = 0; i < TABLE_SIZE; i++)
+ 3,988 1 1 1,994 0 0 997 53 52 table[i] = NULL;
+ . . . . . . . . .
+ . . . . . . . . . /* Open file, check it. */
+ 6 0 0 1 0 0 4 0 0 file_ptr = fopen(file_name, "r");
+ 2 0 0 1 0 0 . . . if (!(file_ptr)) {
+ . . . . . . . . . fprintf(stderr, "Couldn't open '%s'.\n", file_name);
+ 1 1 1 . . . . . . exit(EXIT_FAILURE);
+ . . . . . . . . . }
+ . . . . . . . . .
+ 165,062 1 1 73,360 0 0 91,700 0 0 while ((line = get_word(data, line, file_ptr)) != EOF)
+ 146,712  0   0  73,356     0    0  73,356    0    0           insert(data->word, data->line, table);
+ . . . . . . . . .
+ 4 0 0 1 0 0 2 0 0 free(data);
+ 4 0 0 1 0 0 2 0 0 fclose(file_ptr);
+ 3 0 0 2 0 0 . . . }</pre>
+<p>(Although column widths are automatically minimised, a wide
+terminal is clearly useful.)</p>
+<p>Each source file is clearly marked
+(<code class="computeroutput">User-annotated source</code>) as
+having been chosen manually for annotation. If the file was
+found in one of the directories specified with the
+<code class="option">-I</code>/<code class="option">--include</code> option, the directory
+and file are both given.</p>
+<p>Each line is annotated with its event counts. Events not
+applicable for a line are represented by a dot. This is useful
+for distinguishing between an event which cannot happen, and one
+which can but did not.</p>
+<p>Sometimes only a small section of a source file is
+executed. To minimise uninteresting output, Cachegrind only shows
+annotated lines and lines within a small distance of annotated
+lines. Gaps are marked with the line numbers so you know which
+part of a file the shown code comes from, eg:</p>
+<pre class="programlisting">
+(figures and code for line 704)
+-- line 704 ----------------------------------------
+-- line 878 ----------------------------------------
+(figures and code for line 878)</pre>
+<p>The amount of context to show around annotated lines is
+controlled by the <code class="option">--context</code>
+option.</p>
+<p>To get automatic annotation, use the <code class="option">--auto=yes</code> option.
+cg_annotate will automatically annotate every source file it can
+find that is mentioned in the function-by-function summary.
+Therefore, the files chosen for auto-annotation are affected by
+the <code class="option">--sort</code> and
+<code class="option">--threshold</code> options. Each
+source file is clearly marked (<code class="computeroutput">Auto-annotated
+source</code>) as being chosen automatically. Any
+files that could not be found are mentioned at the end of the
+output, eg:</p>
+<pre class="programlisting">
+------------------------------------------------------------------
+The following files chosen for auto-annotation could not be found:
+------------------------------------------------------------------
+ getc.c
+ ctype.c
+ ../sysdeps/generic/lockfile.c</pre>
+<p>This is quite common for library files, since libraries are
+usually compiled with debugging information, but the source files
+are often not present on a system. If a file is chosen for
+annotation both manually and automatically, it
+is marked as <code class="computeroutput">User-annotated
+source</code>. Use the
+<code class="option">-I</code>/<code class="option">--include</code> option to tell Valgrind where
+to look for source files if the filenames found from the debugging
+information aren't specific enough.</p>
+<p>Beware that cg_annotate can take some time to digest large
+<code class="filename">cachegrind.out.<pid></code> files,
+e.g. 30 seconds or more. Also beware that auto-annotation can
+produce a lot of output if your program is large!</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.assembler"></a>5.2.7. Annotating Assembly Code Programs</h3></div></div></div>
+<p>Valgrind can annotate assembly code programs too, or annotate
+the assembly code generated for your C program. Sometimes this is
+useful for understanding what is really happening when an
+interesting line of C code is translated into multiple
+instructions.</p>
+<p>To do this, you just need to assemble your
+<code class="computeroutput">.s</code> files with assembly-level debug
+information. You can compile C/C++ programs to assembly code with the
+<code class="option">-S</code> option, and then assemble the resulting assembly files with
+<code class="option">-g</code> to achieve this. You can then profile and annotate the
+assembly code source files in the same way as C/C++ source files.</p>
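+<p>For example, one possible sequence using GCC might look like the
+following (the file names and the process ID in the output file name are
+purely illustrative, and the exact flags may vary between toolchains):</p>
+<pre class="programlisting">
+gcc -S -o myprog.s myprog.c
+gcc -g -c -o myprog.o myprog.s
+gcc -o myprog myprog.o
+valgrind --tool=cachegrind ./myprog
+cg_annotate cachegrind.out.12345 myprog.s</pre>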
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.forkingprograms"></a>5.2.8. Forking Programs</h3></div></div></div>
+<p>If your program forks, the child will inherit all the profiling data that
+has been gathered for the parent.</p>
+<p>If the output file format string (controlled by
+<code class="option">--cachegrind-out-file</code>) does not contain <code class="option">%p</code>,
+then the outputs from the parent and child will be intermingled in a single
+output file, which will almost certainly make it unreadable by
+cg_annotate.</p>
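+<p>For example, a run such as the following (the program name is
+illustrative) writes a separate output file for each process, because
+<code class="option">%p</code> expands to the process ID:</p>
+<pre class="programlisting">
+valgrind --tool=cachegrind --cachegrind-out-file=cachegrind.out.%p ./myforkingprog</pre>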
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.annopts.warnings"></a>5.2.9. cg_annotate Warnings</h3></div></div></div>
+<p>There are a couple of situations in which
+cg_annotate issues warnings.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>If a source file is more recent than the
+ <code class="filename">cachegrind.out.<pid></code> file.
+ This is because the information in
+ <code class="filename">cachegrind.out.<pid></code> is only
+ recorded with line numbers, so if the line numbers change at
+ all in the source (e.g. lines added, deleted, swapped), any
+ annotations will be incorrect.</p></li>
+<li class="listitem"><p>If information is recorded about line numbers past the
+ end of a file. This can be caused by the above problem,
+ i.e. shortening the source file while using an old
+ <code class="filename">cachegrind.out.<pid></code> file. If
+ this happens, the figures for the bogus lines are printed
+ anyway (clearly marked as bogus) in case they are
+ important.</p></li>
+</ul></div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.annopts.things-to-watch-out-for"></a>5.2.10. Unusual Annotation Cases</h3></div></div></div>
+<p>Some odd things that can occur during annotation:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>If annotating at the assembler level, you might see
+ something like this:</p>
+<pre class="programlisting">
+ 1 0 0 . . . . . . leal -12(%ebp),%eax
+ 1 0 0 . . . 1 0 0 movl %eax,84(%ebx)
+ 2 0 0 0 0 0 1 0 0 movl $1,-20(%ebp)
+ . . . . . . . . . .align 4,0x90
+ 1 0 0 . . . . . . movl $.LnrB,%eax
+ 1 0 0 . . . 1 0 0 movl %eax,-16(%ebp)</pre>
+<p>How can the third instruction be executed twice when
+ the others are executed only once? As it turns out, it
+ isn't. Here's a dump of the executable, using
+ <code class="computeroutput">objdump -d</code>:</p>
+<pre class="programlisting">
+ 8048f25: 8d 45 f4 lea 0xfffffff4(%ebp),%eax
+ 8048f28: 89 43 54 mov %eax,0x54(%ebx)
+ 8048f2b: c7 45 ec 01 00 00 00 movl $0x1,0xffffffec(%ebp)
+ 8048f32: 89 f6 mov %esi,%esi
+ 8048f34: b8 08 8b 07 08 mov $0x8078b08,%eax
+ 8048f39: 89 45 f0 mov %eax,0xfffffff0(%ebp)</pre>
+<p>Notice the extra <code class="computeroutput">mov
+ %esi,%esi</code> instruction. Where did this come
+ from? The GNU assembler inserted it to serve as the two
+ bytes of padding needed to align the <code class="computeroutput">movl
+ $.LnrB,%eax</code> instruction on a four-byte
+ boundary, but pretended it didn't exist when adding debug
+ information. Thus when Valgrind reads the debug info it
+ thinks that the <code class="computeroutput">movl
+ $0x1,0xffffffec(%ebp)</code> instruction covers the
+    address range 0x8048f2b--0x8048f33 by itself, and attributes
+ the counts for the <code class="computeroutput">mov
+ %esi,%esi</code> to it.</p>
+</li>
+<li class="listitem"><p>Sometimes, the same filename might be represented with
+ a relative name and with an absolute name in different parts
+ of the debug info, eg:
+ <code class="filename">/home/user/proj/proj.h</code> and
+ <code class="filename">../proj.h</code>. In this case, if you use
+ auto-annotation, the file will be annotated twice with the
+ counts split between the two.</p></li>
+<li class="listitem"><p>If you compile some files with
+ <code class="option">-g</code> and some without, some
+ events that take place in a file without debug info could be
+ attributed to the last line of a file with debug info
+ (whichever one gets placed before the non-debug-info file in
+ the executable).</p></li>
+</ul></div>
+<p>This list looks long, but these cases should be fairly
+rare.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.cg_merge"></a>5.2.11. Merging Profiles with cg_merge</h3></div></div></div>
+<p>
+cg_merge is a simple program which
+reads multiple profile files, as created by Cachegrind, merges them
+together, and writes the results into another file in the same format.
+You can then examine the merged results using
+<code class="computeroutput">cg_annotate <filename></code>, as
+described above. The merging functionality might be useful if you
+want to aggregate costs over multiple runs of the same program, or
+from a single parallel run with multiple instances of the same
+program.</p>
+<p>
+cg_merge is invoked as follows:
+</p>
+<pre class="programlisting">
+cg_merge -o outputfile file1 file2 file3 ...</pre>
+<p>
+It reads and checks <code class="computeroutput">file1</code>, then reads
+and checks <code class="computeroutput">file2</code> and merges it into
+the running totals, then the same with
+<code class="computeroutput">file3</code>, etc. The final results are
+written to <code class="computeroutput">outputfile</code>, or to standard
+out if no output file is specified.</p>
+<p>
+Costs are summed on a per-function, per-line and per-instruction
+basis. Because of this, the order in which the input files are given does not
+matter, although you should take care to only mention each file once,
+since any file mentioned twice will be added in twice.</p>
+<p>
+cg_merge does not attempt to check
+that the input files come from runs of the same executable. It will
+happily merge together profile files from completely unrelated
+programs. It does however check that the
+<code class="computeroutput">Events:</code> lines of all the inputs are
+identical, so as to ensure that the addition of costs makes sense.
+For example, it would be nonsensical for it to add a number indicating
+D1 read references to a number from a different file indicating LL
+write misses.</p>
+<p>
+A number of other syntax and sanity checks are done whilst reading the
+inputs. cg_merge will stop and
+attempt to print a helpful error message if any of the input files
+fail these checks.</p>
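+<p>As a sketch of a typical workflow (the program name, arguments and
+process IDs are illustrative), two runs can be merged and the combined
+profile annotated as one:</p>
+<pre class="programlisting">
+valgrind --tool=cachegrind ./myprog inputs-a
+valgrind --tool=cachegrind ./myprog inputs-b
+cg_merge -o cachegrind.out.merged cachegrind.out.1234 cachegrind.out.5678
+cg_annotate cachegrind.out.merged</pre>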
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.cg_diff"></a>5.2.12. Differencing Profiles with cg_diff</h3></div></div></div>
+<p>
+cg_diff is a simple program which
+reads two profile files, as created by Cachegrind, finds the difference
+between them, and writes the results into another file in the same format.
+You can then examine the differenced results using
+<code class="computeroutput">cg_annotate <filename></code>, as
+described above. This is very useful if you want to measure how a change to
+a program affected its performance.
+</p>
+<p>
+cg_diff is invoked as follows:
+</p>
+<pre class="programlisting">
+cg_diff file1 file2</pre>
+<p>
+It reads and checks <code class="computeroutput">file1</code>, then reads
+and checks <code class="computeroutput">file2</code>, then computes the
+difference (effectively <code class="computeroutput">file2</code> -
+<code class="computeroutput">file1</code>). The final results are written to
+standard output.</p>
+<p>
+Costs are summed on a per-function basis. Per-line costs are not summed,
+because doing so is too difficult. For example, consider differencing two
+profiles, one from a single-file program A, and one from the same program A
+where a single blank line was inserted at the top of the file. Every single
+per-line count has changed. In comparison, the per-function counts have not
+changed. The per-function count differences are still very useful for
+determining differences between programs. Note that because the result is
+the difference of two profiles, many of the counts will be negative; this
+indicates that the counts for the relevant function are fewer in the second
+version than those in the first version.</p>
+<p>
+cg_diff does not attempt to check
+that the input files come from runs of the same executable. It will
+happily compute the difference between profile files from completely unrelated
+programs. It does however check that the
+<code class="computeroutput">Events:</code> lines of the two inputs are
+identical, so as to ensure that the subtraction of costs makes sense.
+For example, it would be nonsensical for it to subtract a number indicating
+D1 read references from a number from a different file indicating LL
+write misses.</p>
+<p>
+A number of other syntax and sanity checks are done whilst reading the
+inputs. cg_diff will stop and
+attempt to print a helpful error message if any of the input files
+fail these checks.</p>
+<p>
+Sometimes you will want to compare Cachegrind profiles of two versions of a
+program that you have sitting side-by-side. For example, you might have
+<code class="computeroutput">version1/prog.c</code> and
+<code class="computeroutput">version2/prog.c</code>, where the second is
+slightly different to the first. A straight comparison of the two will not
+be useful -- because functions are qualified with filenames, a function
+<code class="function">f</code> will be listed as
+<code class="computeroutput">version1/prog.c:f</code> for the first version but
+<code class="computeroutput">version2/prog.c:f</code> for the second
+version.</p>
+<p>
+When this happens, you can use the <code class="option">--mod-filename</code> option.
+Its argument is a Perl search-and-replace expression that will be applied
+to all the filenames in both Cachegrind output files. It can be used to
+remove minor differences in filenames. For example, the option
+<code class="option">--mod-filename='s/version[0-9]/versionN/'</code> will suffice for
+this case.</p>
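+<p>For example (the process IDs and output file name are illustrative),
+the difference can be computed and then examined with cg_annotate:</p>
+<pre class="programlisting">
+cg_diff --mod-filename='s/version[0-9]/versionN/' cachegrind.out.1234 cachegrind.out.5678 > cachegrind.out.diff
+cg_annotate cachegrind.out.diff</pre>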
+<p>
+Similarly, sometimes compilers auto-generate certain functions and give them
+randomized names. For example, GCC sometimes auto-generates functions with
+names like <code class="function">T.1234</code>, and the suffixes vary from build to
+build. You can use the <code class="option">--mod-funcname</code> option to remove
+small differences like these; it works in the same way as
+<code class="option">--mod-filename</code>.</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.cgopts"></a>5.3. Cachegrind Command-line Options</h2></div></div></div>
+<p>Cachegrind-specific options are:</p>
+<div class="variablelist">
+<a name="cg.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.I1"></a><span class="term">
+ <code class="option">--I1=<size>,<associativity>,<line size> </code>
+ </span>
+</dt>
+<dd><p>Specify the size, associativity and line size of the level 1
+ instruction cache. </p></dd>
+<dt>
+<a name="opt.D1"></a><span class="term">
+ <code class="option">--D1=<size>,<associativity>,<line size> </code>
+ </span>
+</dt>
+<dd><p>Specify the size, associativity and line size of the level 1
+ data cache.</p></dd>
+<dt>
+<a name="opt.LL"></a><span class="term">
+ <code class="option">--LL=<size>,<associativity>,<line size> </code>
+ </span>
+</dt>
+<dd><p>Specify the size, associativity and line size of the last-level
+ cache.</p></dd>
+<dt>
+<a name="opt.cache-sim"></a><span class="term">
+ <code class="option">--cache-sim=no|yes [yes] </code>
+ </span>
+</dt>
+<dd><p>Enables or disables collection of cache access and miss
+ counts.</p></dd>
+<dt>
+<a name="opt.branch-sim"></a><span class="term">
+ <code class="option">--branch-sim=no|yes [no] </code>
+ </span>
+</dt>
+<dd><p>Enables or disables collection of branch instruction and
+ misprediction counts. By default this is disabled as it
+ slows Cachegrind down by approximately 25%. Note that you
+ cannot specify <code class="option">--cache-sim=no</code>
+ and <code class="option">--branch-sim=no</code>
+ together, as that would leave Cachegrind with no
+ information to collect.</p></dd>
+<dt>
+<a name="opt.cachegrind-out-file"></a><span class="term">
+ <code class="option">--cachegrind-out-file=<file> </code>
+ </span>
+</dt>
+<dd><p>Write the profile data to
+ <code class="computeroutput">file</code> rather than to the default
+ output file,
+ <code class="filename">cachegrind.out.<pid></code>. The
+ <code class="option">%p</code> and <code class="option">%q</code> format specifiers
+ can be used to embed the process ID and/or the contents of an
+ environment variable in the name, as is the case for the core
+ option <code class="option"><a class="xref" href="manual-core.html#opt.log-file">--log-file</a></code>.
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.annopts"></a>5.4. cg_annotate Command-line Options</h2></div></div></div>
+<div class="variablelist">
+<a name="cg_annotate.opts.list"></a><dl class="variablelist">
+<dt><span class="term">
+ <code class="option">-h --help </code>
+ </span></dt>
+<dd><p>Show the help message.</p></dd>
+<dt><span class="term">
+ <code class="option">--version </code>
+ </span></dt>
+<dd><p>Show the version number.</p></dd>
+<dt><span class="term">
+ <code class="option">--show=A,B,C [default: all, using order in
+ cachegrind.out.<pid>] </code>
+ </span></dt>
+<dd><p>Specifies which events to show (and the column
+ order). Default is to use all present in the
+ <code class="filename">cachegrind.out.<pid></code> file (and
+ use the order in the file). Useful if you want to concentrate on, for
+ example, I cache misses (<code class="option">--show=I1mr,ILmr</code>), or data
+ read misses (<code class="option">--show=D1mr,DLmr</code>), or LL data misses
+ (<code class="option">--show=DLmr,DLmw</code>). Best used in conjunction with
+ <code class="option">--sort</code>.</p></dd>
+<dt><span class="term">
+ <code class="option">--sort=A,B,C [default: order in
+ cachegrind.out.<pid>] </code>
+ </span></dt>
+<dd><p>Specifies the events upon which the sorting of the
+ function-by-function entries will be based.</p></dd>
+<dt><span class="term">
+ <code class="option">--threshold=X [default: 0.1%] </code>
+ </span></dt>
+<dd>
+<p>Sets the threshold for the function-by-function
+ summary. A function is shown if it accounts for more than X%
+ of the counts for the primary sort event. If auto-annotating, also
+ affects which files are annotated.</p>
+<p>Note: thresholds can be set for more than one of the
+        events by appending a colon and a number to any event given to the
+        <code class="option">--sort</code> option
+        (no spaces, though). E.g. if you want to see
+ each function that covers more than 1% of LL read misses or 1% of LL
+ write misses, use this option:</p>
+<p><code class="option">--sort=DLmr:1,DLmw:1</code></p>
+</dd>
+<dt><span class="term">
+ <code class="option">--auto=<no|yes> [default: no] </code>
+ </span></dt>
+<dd><p>When enabled, automatically annotates every file that
+ is mentioned in the function-by-function summary that can be
+ found. Also gives a list of those that couldn't be found.</p></dd>
+<dt><span class="term">
+ <code class="option">--context=N [default: 8] </code>
+ </span></dt>
+<dd><p>Print N lines of context before and after each
+ annotated line. Avoids printing large sections of source
+ files that were not executed. Use a large number
+ (e.g. 100000) to show all source lines.</p></dd>
+<dt><span class="term">
+ <code class="option">-I<dir> --include=<dir> [default: none] </code>
+ </span></dt>
+<dd><p>Adds a directory to the list in which to search for
+ files. Multiple <code class="option">-I</code>/<code class="option">--include</code>
+ options can be given to add multiple directories.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.mergeopts"></a>5.5. cg_merge Command-line Options</h2></div></div></div>
+<div class="variablelist">
+<a name="cg_merge.opts.list"></a><dl class="variablelist">
+<dt><span class="term">
+ <code class="option">-o outfile</code>
+ </span></dt>
+<dd><p>Write the profile data to <code class="computeroutput">outfile</code>
+ rather than to standard output.
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.diffopts"></a>5.6. cg_diff Command-line Options</h2></div></div></div>
+<div class="variablelist">
+<a name="cg_diff.opts.list"></a><dl class="variablelist">
+<dt><span class="term">
+ <code class="option">-h --help </code>
+ </span></dt>
+<dd><p>Show the help message.</p></dd>
+<dt><span class="term">
+ <code class="option">--version </code>
+ </span></dt>
+<dd><p>Show the version number.</p></dd>
+<dt><span class="term">
+ <code class="option">--mod-filename=<expr> [default: none]</code>
+ </span></dt>
+<dd><p>Specifies a Perl search-and-replace expression that is applied
+ to all filenames. Useful for removing minor differences in paths
+ between two different versions of a program that are sitting in
+ different directories.</p></dd>
+<dt><span class="term">
+ <code class="option">--mod-funcname=<expr> [default: none]</code>
+ </span></dt>
+<dd><p>Like <code class="option">--mod-filename</code>, but for function names.
+ Useful for removing minor differences in randomized names of
+ auto-generated functions generated by some compilers.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.acting-on"></a>5.7. Acting on Cachegrind's Information</h2></div></div></div>
+<p>
+Cachegrind gives you lots of information, but acting on that information
+isn't always easy. Here are some rules of thumb that we have found to be
+useful.</p>
+<p>
+First of all, the global hit/miss counts and miss rates are not that useful.
+If you have multiple programs or multiple runs of a program, comparing the
+numbers might identify if any are outliers and worthy of closer
+investigation. Otherwise, they're not enough to act on.</p>
+<p>
+The function-by-function counts are more useful to look at, as they pinpoint
+which functions are causing large numbers of counts. However, beware that
+inlining can make these counts misleading. If a function
+<code class="function">f</code> is always inlined, counts will be attributed to the
+functions it is inlined into, rather than itself. However, if you look at
+the line-by-line annotations for <code class="function">f</code> you'll see the
+counts that belong to <code class="function">f</code>. (This is hard to avoid, it's
+how the debug info is structured.) So it's worth looking for large numbers
+in the line-by-line annotations.</p>
+<p>
+The line-by-line source code annotations are much more useful. In our
+experience, the best place to start is by looking at the
+<code class="computeroutput">Ir</code> numbers. They simply measure how many
+instructions were executed for each line, and don't include any cache
+information, but they can still be very useful for identifying
+bottlenecks.</p>
+<p>
+After that, we have found that LL misses are typically a much bigger source
+of slow-downs than L1 misses. So it's worth looking for any snippets of
+code with high <code class="computeroutput">DLmr</code> or
+<code class="computeroutput">DLmw</code> counts. (You can use
+<code class="option">--show=DLmr
+--sort=DLmr</code> with cg_annotate to focus just on
+<code class="literal">DLmr</code> counts, for example.) If you find any, it's still
+not always easy to work out how to improve things. You need to have a
+reasonable understanding of how caches work, the principles of locality, and
+your program's data access patterns. Improving things may require
+redesigning a data structure, for example.</p>
+<p>
+Looking at the <code class="computeroutput">Bcm</code> and
+<code class="computeroutput">Bim</code> misses can also be helpful.
+In particular, <code class="computeroutput">Bim</code> misses are often caused
+by <code class="literal">switch</code> statements, and in some cases these
+<code class="literal">switch</code> statements can be replaced with table-driven code.
+For example, you might replace code like this:</p>
+<pre class="programlisting">
+enum E { A, B, C };
+enum E e;
+int i;
+...
+switch (e)
+{
+ case A: i += 1; break;
+ case B: i += 2; break;
+ case C: i += 3; break;
+}
+</pre>
+<p>with code like this:</p>
+<pre class="programlisting">
+enum E { A, B, C };
+enum E e;
+int table[] = { 1, 2, 3 };
+int i;
+...
+i += table[e];
+</pre>
+<p>
+This is obviously a contrived example, but the basic principle applies in a
+wide variety of situations.</p>
+<p>
+In short, Cachegrind can tell you where some of the bottlenecks in your code
+are, but it can't tell you how to fix them. You have to work that out for
+yourself. But at least you have the information!
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.sim-details"></a>5.8. Simulation Details</h2></div></div></div>
+<p>
+This section talks about details you don't need to know about in order to
+use Cachegrind, but may be of interest to some people.
+</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cache-sim"></a>5.8.1. Cache Simulation Specifics</h3></div></div></div>
+<p>Specific characteristics of the cache simulation are as
+follows:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Write-allocate: when a write miss occurs, the block
+ written to is brought into the D1 cache. Most modern caches
+ have this property.</p></li>
+<li class="listitem">
+<p>Bit-selection hash function: the set of line(s) in the cache
+ to which a memory block maps is chosen by the middle bits
+ M--(M+N-1) of the byte address, where:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: circle; ">
+<li class="listitem"><p>line size = 2^M bytes</p></li>
+<li class="listitem"><p>(cache size / line size / associativity) = 2^N bytes</p></li>
+</ul></div>
+</li>
+<li class="listitem"><p>Inclusive LL cache: the LL cache typically replicates all
+ the entries of the L1 caches, because fetching into L1 involves
+ fetching into LL first (this does not guarantee strict inclusiveness,
+ as lines evicted from LL still could reside in L1). This is
+ standard on Pentium chips, but AMD Opterons, Athlons and Durons
+ use an exclusive LL cache that only holds
+ blocks evicted from L1. Ditto most modern VIA CPUs.</p></li>
+</ul></div>
+<p>The cache configuration simulated (cache size,
+associativity and line size) is determined automatically using
+the x86 CPUID instruction. If you have a machine that (a)
+doesn't support the CPUID instruction, or (b) supports it in an
+early incarnation that doesn't give any cache information, then
+Cachegrind will fall back to using a default configuration (that
+of a model 3/4 Athlon). Cachegrind will tell you if this
+happens. You can manually specify one, two or all three levels
+(I1/D1/LL) of the cache from the command line using the
+<code class="option">--I1</code>,
+<code class="option">--D1</code> and
+<code class="option">--LL</code> options.
+For cache parameters to be valid for simulation, the number
+of sets (with associativity being the number of cache lines in
+each set) has to be a power of two.</p>
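+<p>As an illustration of the bit-selection mapping described above, the
+following minimal C sketch shows how a byte address maps to a set. The
+parameters (64-byte lines, 128 sets) are assumptions chosen for the
+example, not Cachegrind's defaults:</p>
+<pre class="programlisting">
+#include &lt;stdio.h&gt;
+
+#define LINE_SIZE 64    /* 2^M bytes, M = 6 (assumed) */
+#define NUM_SETS  128   /* 2^N sets,  N = 7 (assumed) */
+
+/* Equivalent to selecting address bits M..(M+N-1). */
+static unsigned set_index(unsigned long addr)
+{
+    return (unsigned)((addr / LINE_SIZE) % NUM_SETS);
+}
+
+int main(void)
+{
+    printf("0x8048f2b maps to set %u\n", set_index(0x8048f2bUL));
+    return 0;
+}</pre>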
+<p>On PowerPC platforms
+Cachegrind cannot automatically
+determine the cache configuration, so you will
+need to specify it with the
+<code class="option">--I1</code>,
+<code class="option">--D1</code> and
+<code class="option">--LL</code> options.</p>
+<p>Other noteworthy behaviour:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>References that straddle two cache lines are treated as
+ follows:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: circle; ">
+<li class="listitem"><p>If both blocks hit --> counted as one hit</p></li>
+<li class="listitem"><p>If one block hits, the other misses --> counted
+ as one miss.</p></li>
+<li class="listitem"><p>If both blocks miss --> counted as one miss (not
+ two)</p></li>
+</ul></div>
+</li>
+<li class="listitem">
+<p>Instructions that modify a memory location
+ (e.g. <code class="computeroutput">inc</code> and
+ <code class="computeroutput">dec</code>) are counted as doing
+ just a read, i.e. a single data reference. This may seem
+ strange, but since the write can never cause a miss (the read
+ guarantees the block is in the cache) it's not very
+ interesting.</p>
+<p>Thus it measures not the number of times the data cache
+ is accessed, but the number of times a data cache miss could
+ occur.</p>
+</li>
+</ul></div>
+<p>If you are interested in simulating a cache with different
+properties, it is not particularly hard to write your own cache
+simulator, or to modify the existing ones in
+<code class="computeroutput">cg_sim.c</code>. We'd be
+interested to hear from anyone who does.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="branch-sim"></a>5.8.2. Branch Simulation Specifics</h3></div></div></div>
+<p>Cachegrind simulates branch predictors intended to be
+typical of mainstream desktop/server processors of around 2004.</p>
+<p>Conditional branches are predicted using an array of 16384 2-bit
+saturating counters. The array index used for a branch instruction is
+computed partly from the low-order bits of the branch instruction's
+address and partly using the taken/not-taken behaviour of the last few
+conditional branches. As a result the predictions for any specific
+branch depend both on its own history and the behaviour of previous
+branches. This is a standard technique for improving prediction
+accuracy.</p>
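+<p>To make the scheme concrete, here is a minimal C sketch of a table of
+2-bit saturating counters indexed by branch address mixed with recent
+history. The index mixing and update policy shown are assumptions for
+illustration, not Cachegrind's exact implementation:</p>
+<pre class="programlisting">
+#include &lt;stdio.h&gt;
+
+#define TABLE_ENTRIES 16384                  /* as described above  */
+
+static unsigned char counter[TABLE_ENTRIES]; /* 2-bit values 0..3   */
+static unsigned      history;                /* recent outcome bits */
+
+static unsigned index_of(unsigned long addr)
+{
+    return (unsigned)((addr ^ history) % TABLE_ENTRIES);
+}
+
+static int predict_taken(unsigned long addr)
+{
+    return counter[index_of(addr)] >= 2;     /* 2 or 3 mean "taken" */
+}
+
+static void record_outcome(unsigned long addr, int taken)
+{
+    unsigned i = index_of(addr);
+    if (taken) {
+        if (counter[i] != 3) counter[i]++;   /* saturate at 3 */
+    } else {
+        if (counter[i] != 0) counter[i]--;   /* saturate at 0 */
+    }
+    history = (history * 2 + (taken ? 1 : 0)) % TABLE_ENTRIES;
+}
+
+int main(void)
+{
+    int i;
+    for (i = 0; i &lt; 30; i++)                 /* warm up on an always-taken branch */
+        record_outcome(0x8048f2bUL, 1);
+    printf("predict taken: %d\n", predict_taken(0x8048f2bUL));
+    return 0;
+}</pre>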
+<p>For indirect branches (that is, jumps to unknown destinations)
+Cachegrind uses a simple branch target address predictor. Targets are
+predicted using an array of 512 entries indexed by the low order 9
+bits of the branch instruction's address. Each branch is predicted to
+jump to the same address it did last time. Any other behaviour causes
+a mispredict.</p>
+<p>More recent processors have better branch predictors, in
+particular better indirect branch predictors. Cachegrind's predictor
+design is deliberately conservative so as to be representative of the
+large installed base of processors which pre-date widespread
+deployment of more sophisticated indirect branch predictors. In
+particular, late model Pentium 4s (Prescott), Pentium M, Core and Core
+2 have more sophisticated indirect branch predictors than modelled by
+Cachegrind. </p>
+<p>Cachegrind does not simulate a return stack predictor. It
+assumes that processors perfectly predict function return addresses,
+an assumption which is probably close to being true.</p>
+<p>See Hennessy and Patterson's classic text "Computer
+Architecture: A Quantitative Approach", 4th edition (2007), Section
+2.3 (pages 80-89) for background on modern branch predictors.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.annopts.accuracy"></a>5.8.3. Accuracy</h3></div></div></div>
+<p>Valgrind's cache profiling has a number of
+shortcomings:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>It doesn't account for kernel activity -- the effect of system
+ calls on the cache and branch predictor contents is ignored.</p></li>
+<li class="listitem"><p>It doesn't account for other process activity.
+ This is probably desirable when considering a single
+ program.</p></li>
+<li class="listitem"><p>It doesn't account for virtual-to-physical address
+ mappings. Hence the simulation is not a true
+ representation of what's happening in the
+ cache. Most caches and branch predictors are physically indexed, but
+ Cachegrind simulates caches using virtual addresses.</p></li>
+<li class="listitem"><p>It doesn't account for cache misses not visible at the
+ instruction level, e.g. those arising from TLB misses, or
+ speculative execution.</p></li>
+<li class="listitem"><p>Valgrind will schedule
+ threads differently from how they would be when running natively.
+ This could warp the results for threaded programs.</p></li>
+<li class="listitem">
+<p>The x86/amd64 instructions <code class="computeroutput">bts</code>,
+ <code class="computeroutput">btr</code> and
+ <code class="computeroutput">btc</code> will incorrectly be
+ counted as doing a data read if both the arguments are
+ registers, eg:</p>
+<pre class="programlisting">
+ btsl %eax, %edx</pre>
+<p>This should only happen rarely.</p>
+</li>
+<li class="listitem"><p>x86/amd64 FPU instructions with data sizes of 28 and 108 bytes
+ (e.g. <code class="computeroutput">fsave</code>) are treated as
+ though they only access 16 bytes. These instructions seem to
+ be rare so hopefully this won't affect accuracy much.</p></li>
+</ul></div>
+<p>Another thing worth noting is that results are very sensitive.
+Changing the size of the executable being profiled, or the sizes
+of any of the shared libraries it uses, or even the length of their
+file names, can perturb the results. Variations will be small, but
+don't expect perfectly repeatable results if your program changes at
+all.</p>
+<p>More recent GNU/Linux distributions do address space
+randomisation, in which identical runs of the same program have their
+shared libraries loaded at different locations, as a security measure.
+This also perturbs the results.</p>
+<p>While these factors mean you shouldn't trust the results to
+be super-accurate, they should be close enough to be useful.</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cg-manual.impl-details"></a>5.9. Implementation Details</h2></div></div></div>
+<p>
+This section talks about details you don't need to know about in order to
+use Cachegrind, but may be of interest to some people.
+</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.impl-details.how-cg-works"></a>5.9.1. How Cachegrind Works</h3></div></div></div>
+<p>The best reference for understanding how Cachegrind works is chapter 3 of
+"Dynamic Binary Analysis and Instrumentation", by Nicholas Nethercote. It
+is available on the <a class="ulink" href="http://www.valgrind.org/docs/pubs.html" target="_top">Valgrind publications
+page</a>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cg-manual.impl-details.file-format"></a>5.9.2. Cachegrind Output File Format</h3></div></div></div>
+<p>The file format is fairly straightforward, basically giving the
+cost centre for every line, grouped by files and
+functions. It's also totally generic and self-describing, in the sense that
+it can be used for any events that can be counted on a line-by-line basis,
+not just cache and branch predictor events. For example, earlier versions
+of Cachegrind didn't have a branch predictor simulation. When this was
+added, the file format didn't need to change at all. So the format (and
+consequently, cg_annotate) could be used by other tools.</p>
+<p>The file format:</p>
+<pre class="programlisting">
+file ::= desc_line* cmd_line events_line data_line+ summary_line
+desc_line ::= "desc:" ws? non_nl_string
+cmd_line ::= "cmd:" ws? cmd
+events_line ::= "events:" ws? (event ws)+
+data_line ::= file_line | fn_line | count_line
+file_line ::= "fl=" filename
+fn_line ::= "fn=" fn_name
+count_line ::= line_num ws? (count ws)+
+summary_line ::= "summary:" ws? (count ws)+
+count ::= num | "."</pre>
+<p>Where:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="computeroutput">non_nl_string</code> is any
+ string not containing a newline.</p></li>
+<li class="listitem"><p><code class="computeroutput">cmd</code> is a string holding the
+ command line of the profiled program.</p></li>
+<li class="listitem"><p><code class="computeroutput">event</code> is a string containing
+ no whitespace.</p></li>
+<li class="listitem"><p><code class="computeroutput">filename</code> and
+ <code class="computeroutput">fn_name</code> are strings.</p></li>
+<li class="listitem"><p><code class="computeroutput">num</code> and
+ <code class="computeroutput">line_num</code> are decimal
+ numbers.</p></li>
+<li class="listitem"><p><code class="computeroutput">ws</code> is whitespace.</p></li>
+</ul></div>
+<p>The contents of the "desc:" lines are printed out at the top
+of the summary. This is a generic way of providing simulation
+specific information, e.g. for giving the cache configuration for
+cache simulation.</p>
+<p>More than one line of info can be presented for each file/fn/line number.
+In such cases, the counts for the named events will be accumulated.</p>
+<p>Counts can be "." to represent zero. This makes the files easier for
+humans to read.</p>
+<p>The number of counts in each
+<code class="computeroutput">line</code> and the
+<code class="computeroutput">summary_line</code> should not exceed
+the number of events in the
+<code class="computeroutput">event_line</code>. If the number in
+each <code class="computeroutput">line</code> is less, cg_annotate
+treats those missing as though they were a "." entry. This saves space.
+</p>
+<p>A <code class="computeroutput">file_line</code> changes the
+current file name. A <code class="computeroutput">fn_line</code>
+changes the current function name. A
+<code class="computeroutput">count_line</code> contains counts that
+pertain to the current filename/fn_name. A
+<code class="computeroutput">file_line</code> and a
+<code class="computeroutput">fn_line</code> must appear before any
+<code class="computeroutput">count_line</code>s to give the context
+of the first <code class="computeroutput">count_line</code>s.</p>
+<p>Each <code class="computeroutput">file_line</code> will normally be
+immediately followed by a <code class="computeroutput">fn_line</code>. But it
+doesn't have to be.</p>
+<p>The summary line is redundant, because it just holds the total counts
+for each event. But this serves as a useful sanity check of the data; if
+the totals for each event don't match the summary line, something has gone
+wrong.</p>
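+<p>To make the grammar concrete, a small hand-written example file (all
+names and numbers here are invented for illustration) might look like
+this:</p>
+<pre class="programlisting">
+desc: I1 cache: 65536 B, 64 B, 2-way associative
+cmd: ./concord concord.in
+events: Ir Dr Dw
+fl=concord.c
+fn=hash
+120 7 2 1
+121 3 1
+fn=insert
+130 5 . 2
+summary: 15 3 3</pre>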
+</div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="mc-manual.html"><< 4. Memcheck: a memory error detector</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="cl-manual.html">6. Callgrind: a call-graph generating cache and branch prediction profiler >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/cl-format.html b/docs/html/cl-format.html
new file mode 100644
index 0000000..7cf30de
--- /dev/null
+++ b/docs/html/cl-format.html
@@ -0,0 +1,651 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>3. Callgrind Format Specification</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="tech-docs.html" title="Valgrind Technical Documentation">
+<link rel="prev" href="manual-writing-tools.html" title="2. Writing a New Valgrind Tool">
+<link rel="next" href="dist.html" title="Valgrind Distribution Documents">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="manual-writing-tools.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="tech-docs.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Technical Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="cl-format"></a>3. Callgrind Format Specification</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="cl-format.html#cl-format.overview">3.1. Overview</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.basics">3.1.1. Basic Structure</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.example1">3.1.2. Simple Example</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.associations">3.1.3. Associations</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.example2">3.1.4. Extended Example</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.compression1">3.1.5. Name Compression</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.compression2">3.1.6. Subposition Compression</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.misc">3.1.7. Miscellaneous</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-format.html#cl-format.reference">3.2. Reference</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.reference.grammar">3.2.1. Grammar</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.reference.header">3.2.2. Description of Header Lines</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.reference.body">3.2.3. Description of Body Lines</a></span></dt>
+</dl></dd>
+</dl>
+</div>
+<p>This chapter describes the Callgrind Profile Format, Version 1.</p>
+<p>A synonymous name is "Calltree Profile Format". The two names mean
+the same thing, since Callgrind was previously named Calltree.</p>
+<p>The format description is meant for the user to be able to understand the
+file contents; but more importantly, it is given for authors of measurement or
+visualization tools to be able to write and read this format.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-format.overview"></a>3.1. Overview</h2></div></div></div>
+<p>The profile data format is ASCII based.
+It is written by Callgrind, and it is upwards compatible
+with the format used by Cachegrind (i.e. Cachegrind uses a subset). It can
+be read by callgrind_annotate and KCachegrind.</p>
+<p>This chapter gives an overview of format features and examples.
+For detailed syntax, look at the format reference.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.overview.basics"></a>3.1.1. Basic Structure</h3></div></div></div>
+<p>Each file has a header part of an arbitrary number of lines of the
+format "key: value". After the header, lines specifying profile costs
+follow. Comments on their own lines, starting with '#', are allowed anywhere.
+The header lines with keys "positions" and "events" define
+the meaning of cost lines in the second part of the file: the value of
+"positions" is a list of subpositions, and the value of "events" is a list
+of event type names. Cost lines consist of subpositions followed by 64-bit
+counters for the events, in the order specified by the "positions" and "events"
+header lines.</p>
+<p>The "events" header line is always required in contrast to the optional
+line for "positions", which defaults to "line", i.e. a line number of some
+source file. In addition, the second part of the file contains position
+specifications of the form "spec=name". "spec" can be e.g. "fn" for a
+function name or "fl" for a file name. Cost lines are always related to
+the function/file specifications given directly before.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.overview.example1"></a>3.1.2. Simple Example</h3></div></div></div>
+<p>The event names in the following example are quite arbitrary, and are not
+related to event names used by Callgrind. In particular, cycle counts matching
+real processors will probably never be generated by any Valgrind tool, as these
+are bound to simulations of simple machine models to keep the slowdown acceptable.
+However, any profiling tool could use the format described in this chapter.</p>
+<p>
+</p>
+<pre class="screen">events: Cycles Instructions Flops
+fl=file.f
+fn=main
+15 90 14 2
+16 20 12</pre>
+<p>The above example gives profile information for event types "Cycles",
+"Instructions", and "Flops". Thus, cost lines give the number of CPU cycles
+passed by, number of executed instructions, and number of floating point
+operations executed while running code corresponding to some source
+position. As there is no line specifying the value of "positions", it defaults
+to "line", which means that the first number of a cost line is always a line
+number.</p>
+<p>Thus, the first cost line specifies that in line 15 of source file
+<code class="filename">file.f</code> there is code belonging to function
+<code class="function">main</code>. While running, 90 CPU cycles passed by, and 2 of
+the 14 instructions executed were floating point operations. Similarly, the
+next line specifies that there were 12 instructions executed in the context
+of function <code class="function">main</code> which can be related to line 16 in
+file <code class="filename">file.f</code>, taking 20 CPU cycles. If a cost line
+specifies less event counts than given in the "events" line, the rest is
+assumed to be zero. I.e. there was no floating point instruction executed
+relating to line 16.</p>
+<p>Note that regular cost lines always give self (also called exclusive)
+cost of code at a given position. If you specify multiple cost lines for the
+same position, these will be summed up. On the other hand, in the example above
+there is no specification of how many times function
+<code class="function">main</code> actually was
+called: profile data only contains sums.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.overview.associations"></a>3.1.3. Associations</h3></div></div></div>
+<p>The most important extension to the original format of Cachegrind is the
+ability to specify call relationships among functions. More generally, you
+specify associations among positions. For this, the second part of the
+file also can contain association specifications. These look similar to
+position specifications, but consist of two lines. For calls, the format
+looks like
+</p>
+<pre class="screen">
+ calls=(Call Count) (Target position)
+ (Source position) (Inclusive cost of call)
+</pre>
+<p>The destination only specifies subpositions like line number. Therefore,
+to be able to specify a call to another function in another source file, you
+have to precede the above lines with a "cfn=" specification for the name of the
+called function, and optionally a "cfi=" specification if the function is in
+another source file ("cfl=" is an alternative specification for "cfi=" because
+of historical reasons, and both should be supported by format readers).
+The second line looks like a regular cost line with the difference
+that inclusive cost spent inside of the function call has to be specified.</p>
+<p>Other associations are for example (conditional) jumps. See the
+reference below for details.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.overview.example2"></a>3.1.4. Extended Example</h3></div></div></div>
+<p>The following example shows 3 functions, <code class="function">main</code>,
+<code class="function">func1</code>, and <code class="function">func2</code>. Function
+<code class="function">main</code> calls <code class="function">func1</code> once and
+<code class="function">func2</code> 3 times. <code class="function">func1</code> calls
+<code class="function">func2</code> 2 times.
+</p>
+<pre class="screen">events: Instructions
+
+fl=file1.c
+fn=main
+16 20
+cfn=func1
+calls=1 50
+16 400
+cfi=file2.c
+cfn=func2
+calls=3 20
+16 400
+
+fn=func1
+51 100
+cfi=file2.c
+cfn=func2
+calls=2 20
+51 300
+
+fl=file2.c
+fn=func2
+20 700</pre>
+<p>One can see that in <code class="function">main</code> only code from line 16
+is executed where also the other functions are called. Inclusive cost of
+<code class="function">main</code> is 820, which is the sum of self cost 20 and costs
+spent in the calls: 400 for the single call to <code class="function">func1</code>
+and 400 as sum for the three calls to <code class="function">func2</code>.</p>
+<p>Function <code class="function">func1</code> is located in
+<code class="filename">file1.c</code>, the same as <code class="function">main</code>.
+Therefore, a "cfi=" specification for the call to <code class="function">func1</code>
+is not needed. The function <code class="function">func1</code> only consists of code
+at line 51 of <code class="filename">file1.c</code>, where <code class="function">func2</code>
+is called.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.overview.compression1"></a>3.1.5. Name Compression</h3></div></div></div>
+<p>With the introduction of association specifications like calls, it becomes
+necessary to specify the same function or file name multiple times. As
+absolute filenames or symbol names in C++ can be quite long, it is advantageous
+to be able to specify integer IDs for position specifications.
+Here, the term "position" corresponds to a file name (source or object file)
+or function name.</p>
+<p>To support name compression, a position specification can be not only of
+the format "spec=name", but also "spec=(ID) name" to specify a mapping of an
+integer ID to a name, and "spec=(ID)" to reference a previously defined ID
+mapping. There is a separate ID mapping for each position specification,
+i.e. you can use ID 1 for both a file name and a symbol name.</p>
+<p>With name compression, the example from the previous section looks like this:
+</p>
+<pre class="screen">events: Instructions
+
+fl=(1) file1.c
+fn=(1) main
+16 20
+cfn=(2) func1
+calls=1 50
+16 400
+cfi=(2) file2.c
+cfn=(3) func2
+calls=3 20
+16 400
+
+fn=(2)
+51 100
+cfi=(2)
+cfn=(3)
+calls=2 20
+51 300
+
+fl=(2)
+fn=(3)
+20 700</pre>
+<p>As position specifications carry no information themselves, but only change
+the meaning of subsequent cost lines or associations, they can appear
+everywhere in the file without any negative consequence. Especially, you can
+define name compression mappings directly after the header, and before any cost
+lines. Thus, the above example can also be written as
+</p>
+<pre class="screen">events: Instructions
+
+# define file ID mapping
+fl=(1) file1.c
+fl=(2) file2.c
+# define function ID mapping
+fn=(1) main
+fn=(2) func1
+fn=(3) func2
+
+fl=(1)
+fn=(1)
+16 20
+...</pre>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.overview.compression2"></a>3.1.6. Subposition Compression</h3></div></div></div>
+<p>If a Callgrind data file is to hold costs for each assembler instruction
+of a program, you specify the subposition "instr" in the "positions:" header line,
+and each cost line has to include the address of some instruction. Addresses
+are allowed to have a size of 64 bits to support 64-bit architectures. Thus,
+repeating similar, long addresses for almost every line in the data file can
+enlarge the file size quite significantly. This
+motivates subposition compression: instead of every cost line starting with
+a 16 character long address, one is allowed to specify relative addresses.
+This relative specification is not only allowed for instruction addresses, but
+also for line numbers; both addresses and line numbers are called "subpositions".</p>
+<p>A relative subposition is always based on the corresponding subposition
+of the last cost line, and starts with a "+" to specify a positive difference,
+a "-" to specify a negative difference, or consists of "*" to specify the same
+subposition. Because absolute subpositions are always positive (i.e. never
+prefixed by "-"), any relative specification is unambiguous; additionally,
+absolute and relative subposition specifications can be mixed freely.
+Assume the following example (subpositions can always be specified
+as hexadecimal numbers, beginning with "0x"):
+</p>
+<pre class="screen">positions: instr line
+events: ticks
+
+fn=func
+0x80001234 90 1
+0x80001237 90 5
+0x80001238 91 6</pre>
+<p>With subposition compression, this looks like
+</p>
+<pre class="screen">positions: instr line
+events: ticks
+
+fn=func
+0x80001234 90 1
++3 * 5
++1 +1 6</pre>
+<p>Remark: For assembler annotation to work, instruction addresses have to
+be corrected to correspond to addresses found in the original binary. I.e. for
+relocatable shared objects, often a load offset has to be subtracted.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.overview.misc"></a>3.1.7. Miscellaneous</h3></div></div></div>
+<div class="sect3">
+<div class="titlepage"><div><div><h4 class="title">
+<a name="cl-format.overview.misc.summary"></a>3.1.7.1. Cost Summary Information</h4></div></div></div>
+<p>For the visualization to be able to show cost percentage, a sum of the
+cost of the full run has to be known. Usually, it is assumed that this is the
+sum of all cost lines in a file. But sometimes, this is not correct. Thus, you
+can specify a "summary:" line in the header giving the full cost for the
+profile run. An import filter may use this to show a progress bar
+while loading a large data file.</p>
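+<p>As a sketch, with a single event type such a header line is just the overall
+count for the whole run (the value below is illustrative):
+</p>
+<pre class="screen">summary: 3000000</pre>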
+</div>
+<div class="sect3">
+<div class="titlepage"><div><div><h4 class="title">
+<a name="cl-format.overview.misc.events"></a>3.1.7.2. Long Names for Event Types and inherited Types</h4></div></div></div>
+<p>Event types for cost lines are specified in the "events:" line with an
+abbreviated name. For visualization, it makes sense to be able to specify some
+longer, more descriptive name. For an event type "Ir" which means "Instruction
+Fetches", this can be specified in the header line
+</p>
+<pre class="screen">event: Ir : Instruction Fetches
+events: Ir Dr</pre>
+<p>In this example, "Dr" itself has no long name associated. The order of
+"event:" lines and the "events:" line is of no importance. Additionally,
+inherited event types can be introduced for which no raw data is available, but
+which are calculated from given types. Continuing the last example, you could add
+</p>
+<pre class="screen">event: Sum = Ir + Dr</pre>
+<p>
+to specify an additional event type "Sum", which is calculated by adding costs
+for "Ir" and "Dr".</p>
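+<p>Long names and inherited types can be combined freely in the header. A
+sketch (the long name given for "Dr" is illustrative):
+</p>
+<pre class="screen">event: Ir : Instruction Fetches
+event: Dr : Data Reads
+event: Sum = Ir + Dr
+events: Ir Dr</pre>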
+</div>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-format.reference"></a>3.2. Reference</h2></div></div></div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.reference.grammar"></a>3.2.1. Grammar</h3></div></div></div>
+<p>
+</p>
+<pre class="screen">ProfileDataFile := FormatVersion? Creator? PartData*</pre>
+<p>
+</p>
+<pre class="screen">FormatVersion := "version: 1\n"</pre>
+<p>
+</p>
+<pre class="screen">Creator := "creator:" NoNewLineChar* "\n"</pre>
+<p>
+</p>
+<pre class="screen">PartData := (HeaderLine "\n")+ (BodyLine "\n")+</pre>
+<p>
+</p>
+<pre class="screen">HeaderLine := (empty line)
+ | ('#' NoNewLineChar*)
+ | PartDetail
+ | Description
+ | EventSpecification
+ | CostLineDef</pre>
+<p>
+</p>
+<pre class="screen">PartDetail := TargetCommand | TargetID</pre>
+<p>
+</p>
+<pre class="screen">TargetCommand := "cmd:" Space* NoNewLineChar*</pre>
+<p>
+</p>
+<pre class="screen">TargetID := ("pid"|"thread"|"part") ":" Space* Number</pre>
+<p>
+</p>
+<pre class="screen">Description := "desc:" Space* Name Space* ":" NoNewLineChar*</pre>
+<p>
+</p>
+<pre class="screen">EventSpecification := "event:" Space* Name InheritedDef? LongNameDef?</pre>
+<p>
+</p>
+<pre class="screen">InheritedDef := "=" InheritedExpr</pre>
+<p>
+</p>
+<pre class="screen">InheritedExpr := Name
+ | Number Space* ("*" Space*)? Name
+ | InheritedExpr Space* "+" Space* InheritedExpr</pre>
+<p>
+</p>
+<pre class="screen">LongNameDef := ":" NoNewLineChar*</pre>
+<p>
+</p>
+<pre class="screen">CostLineDef := "events:" Space* Name (Space+ Name)*
+ | "positions:" "instr"? (Space+ "line")?</pre>
+<p>
+</p>
+<pre class="screen">BodyLine := (empty line)
+ | ('#' NoNewLineChar*)
+ | CostLine
+ | PositionSpec
+ | CallSpec
+ | UncondJumpSpec
+ | CondJumpSpec</pre>
+<p>
+</p>
+<pre class="screen">CostLine := SubPositionList Costs?</pre>
+<p>
+</p>
+<pre class="screen">SubPositionList := (SubPosition+ Space+)+</pre>
+<p>
+</p>
+<pre class="screen">SubPosition := Number | "+" Number | "-" Number | "*"</pre>
+<p>
+</p>
+<pre class="screen">Costs := (Number Space+)+</pre>
+<p>
+</p>
+<pre class="screen">PositionSpec := Position "=" Space* PositionName</pre>
+<p>
+</p>
+<pre class="screen">Position := CostPosition | CalledPosition</pre>
+<p>
+</p>
+<pre class="screen">CostPosition := "ob" | "fl" | "fi" | "fe" | "fn"</pre>
+<p>
+</p>
+<pre class="screen">CalledPosition := "cob" | "cfi" | "cfl" | "cfn"</pre>
+<p>
+</p>
+<pre class="screen">PositionName := ( "(" Number ")" )? (Space* NoNewLineChar* )?</pre>
+<p>
+</p>
+<pre class="screen">CallSpec := CallLine "\n" CostLine</pre>
+<p>
+</p>
+<pre class="screen">CallLine := "calls=" Space* Number Space+ SubPositionList</pre>
+<p>
+</p>
+<pre class="screen">UncondJumpSpec := "jump=" Space* Number Space+ SubPositionList</pre>
+<p>
+</p>
+<pre class="screen">CondJumpSpec := "jcnd=" Space* Number Space+ Number Space+ SubPositionList</pre>
+<p>
+</p>
+<pre class="screen">Space := " " | "\t"</pre>
+<p>
+</p>
+<pre class="screen">Number := HexNumber | (Digit)+</pre>
+<p>
+</p>
+<pre class="screen">Digit := "0" | ... | "9"</pre>
+<p>
+</p>
+<pre class="screen">HexNumber := "0x" (Digit | HexChar)+</pre>
+<p>
+</p>
+<pre class="screen">HexChar := "a" | ... | "f" | "A" | ... | "F"</pre>
+<p>
+</p>
+<pre class="screen">Name := Alpha (Digit | Alpha)*</pre>
+<p>
+</p>
+<pre class="screen">Alpha := "a" | ... | "z" | "A" | ... | "Z"</pre>
+<p>
+</p>
+<pre class="screen">NoNewLineChar := all characters without "\n"</pre>
+<p>
+</p>
+<p>A profile data file ("ProfileDataFile") starts with basic information
+ such as the version and creator information, and then has a list of parts, where
+ each part has its own header and body. Parts typically are different threads
+ and/or time spans/phases within a profiled application run.</p>
+<p>Note that callgrind_annotate currently only supports profile data files with
+ one part. Callgrind may produce multiple parts for one profile run, but defaults
+ to one output file for each part.</p>
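+<p>Putting the grammar together, a minimal single-part file might look as
+ follows (command, file and function names as well as all counts are
+ purely illustrative):
+</p>
+<pre class="screen">version: 1
+creator: example-tool
+cmd: ./myprog
+events: Ir
+
+fl=myprog.c
+fn=main
+1 10
+2 5</pre>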
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.reference.header"></a>3.2.2. Description of Header Lines</h3></div></div></div>
+<p>Basic information in the first lines of a profile data file:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="computeroutput">version: number</code> [Callgrind]</p>
+<p>This is used to distinguish future profile data formats. A
+ major version of 0 or 1 is supposed to be upwards compatible with
+ Cachegrind's format. It is optional; if not appearing, version 1
+ is assumed. Otherwise, this has to be the first header line.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">creator: string</code> [Callgrind]</p>
+<p>This is an arbitrary string to denote the creator of this file.
+ Optional.</p>
+</li>
+</ul></div>
+<p>The header for each part has an arbitrary number of lines of the format
+"key: value". Possible <span class="emphasis"><em>key</em></span> values for the header are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="computeroutput">pid: process id</code> [Callgrind]</p>
+<p>Optional. This specifies the process ID of the supervised application
+ for which this profile was generated.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">cmd: program name + args</code> [Cachegrind]</p>
+<p>Optional. This specifies the full command line of the supervised
+ application for which this profile was generated.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">part: number</code> [Callgrind]</p>
+<p>Optional. This specifies a sequentially incremented number for each dump
+ generated, starting at 1.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">desc: type: value</code> [Cachegrind]</p>
+<p>This specifies various information for this dump. For some
+ types, the semantics are defined, but any description type is allowed.
+ Unknown types should be ignored.</p>
+<p>There are the types "I1 cache", "D1 cache", "LL cache", which
+ specify parameters used for the cache simulator. These are the only
+ types originally used by Cachegrind. Additionally, Callgrind uses
+ the following types: "Timerange" gives a rough range of the basic
+ block counter, for which the cost of this dump was collected.
+ Type "Trigger" states the reason why this trace was generated,
+ e.g. program termination or a forced interactive dump.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">positions: [instr] [line]</code> [Callgrind]</p>
+<p>For cost lines, this defines the semantics of the first numbers.
+ Any combination of "instr", "bb" and "line" is allowed, but it has to be
+ in this order, which corresponds to the position numbers at the start of
+ the cost lines later in the file.</p>
+<p>If "instr" is specified, the position is the address of an
+ instruction whose execution raised the events given later on the
+ line. This address is relative to the load offset of the binary/shared
+ library file, so that relocation info does not have to be specified.
+ For "line", the position is the line number of a source file, which is
+ responsible for the events raised. Note that the mapping of "instr"
+ and "line" positions is given by the debugging line information
+ produced by the compiler.</p>
+<p>This header line is optional, defaulting to "positions:
+ line" if not specified.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">events: event type abbreviations</code> [Cachegrind]</p>
+<p>A list of short names of the event types logged in cost
+ lines in this part of the profile data file. Arbitrary short
+ names are allowed. The order given specifies the required order
+ in cost lines. Thus, the first event type is the second or third
+ number in a cost line, depending on the value of "positions".
+ Required to appear exactly once in each part header.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">summary: costs</code> [Callgrind]</p>
+<p>Optional. This header line specifies a summary cost, which should be
+ equal to or larger than the total over all self costs. It may be larger, as
+ the cost lines may not represent the entire cost of the program run.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">totals: costs</code> [Cachegrind]</p>
+<p>Optional. Should appear at the end of the file (although
+ looking like a header line). Must give the total of all cost lines,
+ to allow for a consistency check.</p>
+</li>
+</ul></div>
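+<p>Taken together, the header of a part might look like this (all values
+are illustrative):
+</p>
+<pre class="screen">pid: 12345
+cmd: ./myprog --size 100
+part: 1
+desc: Trigger: Program termination
+positions: line
+events: Ir
+summary: 500000</pre>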
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-format.reference.body"></a>3.2.3. Description of Body Lines</h3></div></div></div>
+<p>The regular body line is a cost line consisting of one or two
+position numbers (depending on "positions:" header line, see above)
+and an array of cost numbers. A position number is either a
+line number in a source file or an instruction address within binary
+code, with source/binary file names specified as position names (see
+below). The cost numbers are mapped to event types in the same order
+as specified in the "events:" header line. If fewer numbers than event
+types are given, the costs default to zero for the remaining event
+types.</p>
+<p>Further, there exist lines
+<code class="computeroutput">spec=position name</code>. A position name
+is an arbitrary string. If it starts with "(" and a
+digit, it's a string in compressed format. Otherwise it's the real
+position string. This allows for file and symbol names as position
+strings, as these never start with "(" + <span class="emphasis"><em>digit</em></span>.
+The compressed format is either "(" <span class="emphasis"><em>number</em></span> ")"
+<span class="emphasis"><em>space</em></span> <span class="emphasis"><em>position</em></span> or only
+"(" <span class="emphasis"><em>number</em></span> ")". The first relates
+<span class="emphasis"><em>position</em></span> to <span class="emphasis"><em>number</em></span> in the
+context of the given format specification from this line to the end of
+the file; it makes the (<span class="emphasis"><em>number</em></span>) an alias for
+<span class="emphasis"><em>position</em></span>. Compressed format is always
+optional.</p>
+<p>Position specifications allowed:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="computeroutput">ob=</code> [Callgrind]</p>
+<p>The ELF object where the cost of the following cost lines happens.</p>
+</li>
+<li class="listitem"><p><code class="computeroutput">fl=</code> [Cachegrind]</p></li>
+<li class="listitem"><p><code class="computeroutput">fi=</code> [Cachegrind]</p></li>
+<li class="listitem">
+<p><code class="computeroutput">fe=</code> [Cachegrind]</p>
+<p>The source file including the code which is responsible for
+ the cost of the following cost lines. "fi="/"fe=" is used when the source
+ file changes inside of a function, i.e. for inlined code.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">fn=</code> [Cachegrind]</p>
+<p>The name of the function where the cost of the following cost lines
+ happens.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">cob=</code> [Callgrind]</p>
+<p>The ELF object of the target of the next call cost lines.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">cfi=</code> [Callgrind]</p>
+<p>The source file including the code of the target of the
+ next call cost lines.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">cfl=</code> [Callgrind]</p>
+<p>Alternative spelling for <code class="computeroutput">cfi=</code>
+ specification (because of historical reasons).</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">cfn=</code> [Callgrind]</p>
+<p>The name of the target function of the next call cost
+ lines.</p>
+</li>
+</ul></div>
+<p>The last type of body line provides specific costs that are not just
+related to one position, as regular cost lines are. Such lines start with specific
+strings similar to position name specifications.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="computeroutput">calls=count target-position</code> [Callgrind]</p>
+<p>Call executed "count" times to "target-position".
+ After a "calls=" line there MUST be a cost line, which provides the source position
+ of the call and the total cost spent in the called function.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">jump=count target-position</code> [Callgrind]</p>
+<p>Unconditional jump, executed "count" times, to "target-position".</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">jcnd=exe-count jump-count target-position</code> [Callgrind]</p>
+<p>Conditional jump, executed "exe-count" times with "jump-count" jumps
+ happening (rest is fall-through) to "target-position".</p>
+</li>
+</ul></div>
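+<p>A sketch of a body fragment using these line types, constructed from the
+descriptions and the grammar above (all positions and counts are illustrative):
+</p>
+<pre class="screen">fn=main
+16 20
+jump=10 27
+jcnd=100 40 25
+cfn=func1
+calls=2 50
+16 400</pre>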
+</div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="manual-writing-tools.html"><< 2. Writing a New Valgrind Tool</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="tech-docs.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.html">Valgrind Distribution Documents >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/cl-manual.html b/docs/html/cl-manual.html
new file mode 100644
index 0000000..35f29cf
--- /dev/null
+++ b/docs/html/cl-manual.html
@@ -0,0 +1,1147 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>6. Callgrind: a call-graph generating cache and branch prediction profiler</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="cg-manual.html" title="5. Cachegrind: a cache and branch-prediction profiler">
+<link rel="next" href="hg-manual.html" title="7. Helgrind: a thread error detector">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="cg-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="hg-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="cl-manual"></a>6. Callgrind: a call-graph generating cache and branch prediction profiler</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.use">6.1. Overview</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.functionality">6.1.1. Functionality</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.basics">6.1.2. Basic Usage</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.usage">6.2. Advanced Usage</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.dumps">6.2.1. Multiple profiling dumps from one program run</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.limits">6.2.2. Limiting the range of collected events</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.busevents">6.2.3. Counting global bus events</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.cycles">6.2.4. Avoiding cycles</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.forkingprograms">6.2.5. Forking Programs</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.options">6.3. Callgrind Command-line Options</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.creation">6.3.1. Dump creation options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.activity">6.3.2. Activity options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.collection">6.3.3. Data collection options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.separation">6.3.4. Cost entity separation options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.simulation">6.3.5. Simulation options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.cachesimulation">6.3.6. Cache simulation options</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.monitor-commands">6.4. Callgrind Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.clientrequests">6.5. Callgrind specific client requests</a></span></dt>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.callgrind_annotate-options">6.6. callgrind_annotate Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.callgrind_control-options">6.7. callgrind_control Command-line Options</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=callgrind</code> on the
+Valgrind command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-manual.use"></a>6.1. Overview</h2></div></div></div>
+<p>Callgrind is a profiling tool that records the call history among
+functions in a program's run as a call-graph.
+By default, the collected data consists of
+the number of instructions executed, their relationship
+to source lines, the caller/callee relationship between functions,
+and the numbers of such calls.
+Optionally, cache simulation and/or branch prediction (similar to Cachegrind)
+can produce further information about the runtime behavior of an application.
+</p>
+<p>The profile data is written out to a file at program
+termination. For presentation of the data, and interactive control
+of the profiling, two command line tools are provided:</p>
+<div class="variablelist"><dl class="variablelist">
+<dt><span class="term"><span class="command"><strong>callgrind_annotate</strong></span></span></dt>
+<dd>
+<p>This command reads in the profile data, and prints a
+ sorted list of functions, optionally with source annotation.</p>
+<p>For graphical visualization of the data, try
+ <a class="ulink" href="http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindIndex" target="_top">KCachegrind</a>, which is a KDE/Qt based
+ GUI that makes it easy to navigate the large amount of data that
+ Callgrind produces.</p>
+</dd>
+<dt><span class="term"><span class="command"><strong>callgrind_control</strong></span></span></dt>
+<dd><p>This command enables you to interactively observe and control
+ the status of a program currently running under Callgrind's control,
+ without stopping the program. You can get statistics information as
+ well as the current stack trace, and you can request zeroing of counters
+ or dumping of profile data.</p></dd>
+</dl></div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.functionality"></a>6.1.1. Functionality</h3></div></div></div>
+<p>Cachegrind collects flat profile data: event counts (data reads,
+cache misses, etc.) are attributed directly to the function they
+occurred in. This cost attribution mechanism is
+called <span class="emphasis"><em>self</em></span> or <span class="emphasis"><em>exclusive</em></span>
+attribution.</p>
+<p>Callgrind extends this functionality by propagating costs
+across function call boundaries. If function <code class="function">foo</code> calls
+<code class="function">bar</code>, the costs from <code class="function">bar</code> are added into
+<code class="function">foo</code>'s costs. When applied to the program as a whole,
+this builds up a picture of so called <span class="emphasis"><em>inclusive</em></span>
+costs, that is, where the cost of each function includes the costs of
+all functions it called, directly or indirectly.</p>
+<p>As an example, the inclusive cost of
+<code class="function">main</code> should be almost 100 percent
+of the total program cost. Because of costs arising before
+<code class="function">main</code> is run, such as
+initialization of the run time linker and construction of global C++
+objects, the inclusive cost of <code class="function">main</code>
+is not exactly 100 percent of the total program cost.</p>
+<p>Together with the call graph, this allows you to find the
+specific call chains starting from
+<code class="function">main</code> in which the majority of the
+program's costs occur. Caller/callee cost attribution is also useful
+for profiling functions called from multiple call sites, and where
+optimization opportunities depend on changing code in the callers, in
+particular by reducing the call count.</p>
+<p>Callgrind's cache simulation is based on that of Cachegrind.
+Read the documentation for <a class="xref" href="cg-manual.html" title="5. Cachegrind: a cache and branch-prediction profiler">Cachegrind: a cache and branch-prediction profiler</a> first. The material
+below describes the features supported in addition to Cachegrind's
+features.</p>
+<p>Callgrind's ability to detect function calls and returns depends
+on the instruction set of the platform it is run on. It works best on
+x86 and amd64, and unfortunately currently does not work so well on
+PowerPC, ARM, Thumb or MIPS code. This is because there are no explicit
+call or return instructions in these instruction sets, so Callgrind
+has to rely on heuristics to detect calls and returns.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.basics"></a>6.1.2. Basic Usage</h3></div></div></div>
+<p>As with Cachegrind, you probably want to compile with debugging info
+ (the <code class="option">-g</code> option) and with optimization turned on.</p>
+<p>To start a profile run for a program, execute:
+ </p>
+<pre class="screen">valgrind --tool=callgrind [callgrind options] your-program [program options]</pre>
+<p>
+ </p>
+<p>While the simulation is running, you can observe execution with:
+ </p>
+<pre class="screen">callgrind_control -b</pre>
+<p>
+ This will print out the current backtrace. To annotate the backtrace with
+ event counts, run
+ </p>
+<pre class="screen">callgrind_control -e -b</pre>
+<p>
+ </p>
+<p>After program termination, a profile data file named
+ <code class="computeroutput">callgrind.out.<pid></code>
+ is generated, where <span class="emphasis"><em>pid</em></span> is the process ID
+ of the program being profiled.
+ The data file contains information about the calls made in the
+ program among the functions executed, together with
+ <span class="command"><strong>Instruction Read</strong></span> (Ir) event counts.</p>
+<p>To generate a function-by-function summary from the profile
+ data file, use
+ </p>
+<pre class="screen">callgrind_annotate [options] callgrind.out.<pid></pre>
+<p>
+ This summary is similar to the output you get from a Cachegrind
+ run with cg_annotate: the list
+ of functions is ordered by the exclusive cost of the functions,
+ which is also the cost that is shown.
+ The following two options are important for the additional
+ features of Callgrind:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="option">--inclusive=yes</code>: Instead of using
+ exclusive cost of functions as sorting order, use and show
+ inclusive cost.</p></li>
+<li class="listitem"><p><code class="option">--tree=both</code>: Interleave into the
+ top level list of functions, information on the callers and the callees
+ of each function. In these lines, which represent executed
+ calls, the cost gives the number of events spent in the call.
+ Indented, above each function, there is the list of callers,
+ and below, the list of callees. The sum of events in calls to
+ a given function (caller lines), as well as the sum of events in
+ calls from the function (callee lines) together with the self
+ cost, gives the total inclusive cost of the function.</p></li>
+</ul></div>
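+<p>Both options can be combined in a single invocation, for example:
+ </p>
+<pre class="screen">callgrind_annotate --inclusive=yes --tree=both callgrind.out.&lt;pid&gt;</pre>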
+<p>Use <code class="option">--auto=yes</code> to get annotated source code
+ for all relevant functions for which the source can be found. In
+ addition to source annotation as produced by
+ <code class="computeroutput">cg_annotate</code>, you will see the
+ annotated call sites with call counts. For all other options,
+ consult the (Cachegrind) documentation for
+ <code class="computeroutput">cg_annotate</code>.
+ </p>
+<p>For a better call graph browsing experience, it is highly recommended
+ to use <a class="ulink" href="http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindIndex" target="_top">KCachegrind</a>.
+ If your code
+ has a significant fraction of its cost in <span class="emphasis"><em>cycles</em></span> (sets
+ of functions calling each other in a recursive manner), you have to
+ use KCachegrind, as <code class="computeroutput">callgrind_annotate</code>
+ currently does not do any cycle detection, which is important to get correct
+ results in this case.</p>
+<p>If you are additionally interested in measuring the
+ cache behavior of your program, use Callgrind with the option
+ <code class="option"><a class="xref" href="cl-manual.html#clopt.cache-sim">--cache-sim</a>=yes</code>. For
+ branch prediction simulation, use <code class="option"><a class="xref" href="cl-manual.html#clopt.branch-sim">--branch-sim</a>=yes</code>.
+ Expect a further slowdown of approximately a factor of 2.</p>
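+<p>A combined invocation might look like this:
+ </p>
+<pre class="screen">valgrind --tool=callgrind --cache-sim=yes --branch-sim=yes your-program</pre>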
+<p>If the program section you want to profile is somewhere in the
+ middle of the run, it is beneficial to
+ <span class="emphasis"><em>fast forward</em></span> to this section without any
+ profiling, and then enable profiling. This is achieved by using
+ the command line option
+ <code class="option"><a class="xref" href="cl-manual.html#opt.instr-atstart">--instr-atstart</a>=no</code>
+ and running, in a shell:
+ <code class="computeroutput">callgrind_control -i on</code> just before the
+ interesting code section is executed. To exactly specify
+ the code position where profiling should start, use the client request
+ <code class="computeroutput"><a class="xref" href="cl-manual.html#cr.start-instr">CALLGRIND_START_INSTRUMENTATION</a></code>.</p>
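+<p>A minimal source sketch of this approach; the functions surrounding the
+ client requests are placeholders:
+ </p>
+<pre class="screen">#include &lt;valgrind/callgrind.h&gt;
+
+void initialize(void);        /* placeholder: setup code not worth profiling */
+void interesting_work(void);  /* placeholder: the code to be profiled        */
+
+int main(void)
+{
+    initialize();                      /* fast-forwarded, no instrumentation yet */
+    CALLGRIND_START_INSTRUMENTATION;   /* profiling starts here                  */
+    interesting_work();
+    CALLGRIND_STOP_INSTRUMENTATION;    /* and stops again here                   */
+    return 0;
+}</pre>
+<p>Run the program with <code class="computeroutput">--instr-atstart=no</code>
+ so that no instrumentation happens before the first client request.</p>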
+<p>If you want to be able to see assembly code level annotation, specify
+ <code class="option"><a class="xref" href="cl-manual.html#opt.dump-instr">--dump-instr</a>=yes</code>. This will produce
+ profile data at instruction granularity. Note that the resulting profile
+ data
+ can only be viewed with KCachegrind. For assembly annotation, it also is
+ interesting to see more details of the control flow inside of functions,
+ i.e. (conditional) jumps. This will be collected by further specifying
+ <code class="option"><a class="xref" href="cl-manual.html#opt.collect-jumps">--collect-jumps</a>=yes</code>.</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-manual.usage"></a>6.2. Advanced Usage</h2></div></div></div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.dumps"></a>6.2.1. Multiple profiling dumps from one program run</h3></div></div></div>
+<p>Sometimes you are not interested in characteristics of a full
+ program run, but only of a small part of it, for example execution of one
+ algorithm. If there are multiple algorithms, or one algorithm
+ running with different input data, it may even be useful to get different
+ profile information for different parts of a single program run.</p>
+<p>Profile data files have names of the form
+</p>
+<pre class="screen">
+callgrind.out.<span class="emphasis"><em>pid</em></span>.<span class="emphasis"><em>part</em></span>-<span class="emphasis"><em>threadID</em></span>
+</pre>
+<p>
+ </p>
+<p>where <span class="emphasis"><em>pid</em></span> is the PID of the running
+ program, <span class="emphasis"><em>part</em></span> is a number incremented on each
+ dump (".part" is skipped for the dump at program termination), and
+ <span class="emphasis"><em>threadID</em></span> is a thread identification
+ ("-threadID" is only used if you request dumps of individual
+ threads with <code class="option"><a class="xref" href="cl-manual.html#opt.separate-threads">--separate-threads</a>=yes</code>).</p>
+<p>There are different ways to generate multiple profile dumps
+ while a program is running under Callgrind's supervision. Nevertheless,
+ all methods trigger the same action, which is "dump all profile
+ information since the last dump or program start, and zero cost
+ counters afterwards". To allow for zeroing cost counters without
+ dumping, there is a second action "zero all cost counters now".
+ The different methods are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><span class="command"><strong>Dump on program termination.</strong></span>
+ This method is the standard way and doesn't need any special
+ action on your part.</p></li>
+<li class="listitem">
+<p><span class="command"><strong>Spontaneous, interactive dumping.</strong></span> Use
+ </p>
+<pre class="screen">callgrind_control -d [hint [PID/Name]]</pre>
+<p> to
+ request the dumping of profile information of the supervised
+ application with PID or Name. <span class="emphasis"><em>hint</em></span> is an
+ arbitrary string you can optionally specify to later be able to
+ distinguish profile dumps. The control program will not terminate
+ before the dump is completely written. Note that the application
+ must be actively running for detection of the dump command. So,
+ for a GUI application, resize the window, or for a server, send a
+ request.</p>
+<p>If you are using <a class="ulink" href="http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindIndex" target="_top">KCachegrind</a>
+ for browsing of profile information, you can use the toolbar
+ button <span class="command"><strong>Force dump</strong></span>. This will request a dump
+ and trigger a reload after the dump is written.</p>
+</li>
+<li class="listitem"><p><span class="command"><strong>Periodic dumping after execution of a specified
+ number of basic blocks</strong></span>. For this, use the command line
+ option <code class="option"><a class="xref" href="cl-manual.html#opt.dump-every-bb">--dump-every-bb</a>=count</code>.
+ </p></li>
+<li class="listitem">
+<p><span class="command"><strong>Dumping at enter/leave of specified functions.</strong></span>
+ Use the
+ option <code class="option"><a class="xref" href="cl-manual.html#opt.dump-before">--dump-before</a>=function</code>
+ and <code class="option"><a class="xref" href="cl-manual.html#opt.dump-after">--dump-after</a>=function</code>.
+ To zero cost counters before entering a function, use
+ <code class="option"><a class="xref" href="cl-manual.html#opt.zero-before">--zero-before</a>=function</code>.</p>
+<p>You can specify these options multiple times for different
+ functions. Function specifications support wildcards: e.g. use
+ <code class="option"><a class="xref" href="cl-manual.html#opt.dump-before">--dump-before</a>='foo*'</code> to
+ generate dumps before entering any function starting with
+ <span class="emphasis"><em>foo</em></span>.</p>
+</li>
+<li class="listitem"><p><span class="command"><strong>Program controlled dumping.</strong></span>
+ Insert
+ <code class="computeroutput"><a class="xref" href="cl-manual.html#cr.dump-stats">CALLGRIND_DUMP_STATS</a>;</code>
+ at the position in your code where you want a profile dump to happen. Use
+ <code class="computeroutput"><a class="xref" href="cl-manual.html#cr.zero-stats">CALLGRIND_ZERO_STATS</a>;</code> to only
+ zero profile counters.
+ See <a class="xref" href="cl-manual.html#cl-manual.clientrequests" title="6.5. Callgrind specific client requests">Client request reference</a> for more information on
+ Callgrind specific client requests.</p></li>
+</ul></div>
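+<p>A sketch of the last method, program controlled dumping; the functions
+ around the client requests are placeholders:
+ </p>
+<pre class="screen">#include &lt;valgrind/callgrind.h&gt;
+
+void algorithm(void);        /* placeholder for the code of interest */
+
+void profile_one_phase(void)
+{
+    CALLGRIND_ZERO_STATS;    /* discard everything gathered so far     */
+    algorithm();
+    CALLGRIND_DUMP_STATS;    /* write a dump covering only algorithm() */
+}</pre>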
+<p>If you are running a multi-threaded application and specify the
+ command line option <code class="option"><a class="xref" href="cl-manual.html#opt.separate-threads">--separate-threads</a>=yes</code>,
+ every thread will be profiled on its own and will create its own
+ profile dump. Thus, the last two methods will only generate one dump
+ of the currently running thread. With the other methods, you will get
+ multiple dumps (one for each thread) on a dump request.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.limits"></a>6.2.2. Limiting the range of collected events</h3></div></div></div>
+<p>By default, whenever events are happening (such as an
+ instruction execution or cache hit/miss), Callgrind is aggregating
+ them into event counters. However, you may be interested only in
+ what is happening within a given function or starting from a given
+ program phase. To this end, you can disable event aggregation for
+ uninteresting program parts. While attribution of events to
+ functions as well as producing separate output per program phase
+ can be done by other means (see previous section), there are two
+ benefits to disabling aggregation. First, this is very
+ fine-grained (e.g. just for a loop within a function). Second,
+ disabling event aggregation for complete program phases makes it
+ possible to switch off the time-consuming cache simulation, allowing
+ Callgrind to progress at much higher speed with a slowdown of around factor 2
+ (identical to <code class="computeroutput">valgrind
+ --tool=none</code>).
+ </p>
+<p>There are two aspects which influence whether Callgrind is
+ aggregating events at some point in time of program execution.
+ First, there is the <span class="emphasis"><em>collection state</em></span>. If this
+ is off, no aggregation will be done. By changing the collection
+ state, you can control event aggregation at a very fine
+ granularity. However, it makes little difference to Callgrind's
+ execution speed. By default, collection is switched
+ on, but can be disabled by different means (see below). Second,
+ there is the <span class="emphasis"><em>instrumentation mode</em></span> in which
+ Callgrind is running. This mode either can be on or off. If
+ instrumentation is off, no observation of actions in the program
+ will be done and thus, no actions will be forwarded to the
+ simulator which could trigger events. In the end, no events will
+ be aggregated. The huge benefit is the much higher speed with
+ instrumentation switched off. However, this should only be used
+ with care and in a coarse fashion: every mode change resets the
+ simulator state (i.e. whether a memory block is cached or not) and
+ flushes Valgrind's internal cache of instrumented code blocks,
+ resulting in a latency penalty at switching time. Also, cache
+ simulator results directly after switching on instrumentation will
+ be skewed due to reported cache misses which would not happen in
+ reality (if you care about this warm-up effect, you should make
+ sure to temporarily switch the collection state off directly
+ after turning instrumentation mode on). However, switching
+ instrumentation state is very useful to skip larger program phases
+ such as an initialization phase. By default, instrumentation is
+ switched on, but as with the collection state, can be changed by
+ various means.
+ </p>
+<p>Callgrind can start with instrumentation mode switched off by
+ specifying
+ option <code class="option"><a class="xref" href="cl-manual.html#opt.instr-atstart">--instr-atstart</a>=no</code>.
+ Afterwards, instrumentation can be controlled in two ways: first,
+ interactively with: </p>
+<pre class="screen">callgrind_control -i on</pre>
+<p> (and
+ switching off again by specifying "off" instead of "on"). Second,
+ instrumentation state can be programmatically changed with the
+ macros <code class="computeroutput"><a class="xref" href="cl-manual.html#cr.start-instr">CALLGRIND_START_INSTRUMENTATION</a>;</code>
+ and <code class="computeroutput"><a class="xref" href="cl-manual.html#cr.stop-instr">CALLGRIND_STOP_INSTRUMENTATION</a>;</code>.
+ </p>
+<p>Similarly, the collection state at program start can be
+ switched off
+ by <code class="option"><a class="xref" href="cl-manual.html#opt.collect-atstart">--collect-atstart</a>=no</code>. During
+ execution, it can be controlled programmatically with the
+ macro <code class="computeroutput">CALLGRIND_TOGGLE_COLLECT;</code>.
+ Further, you can limit event collection to a specific function by
+ using <code class="option"><a class="xref" href="cl-manual.html#opt.toggle-collect">--toggle-collect</a>=function</code>.
+ This will toggle the collection state on entering and leaving the
+ specified function. When this option is in effect, the default
+ collection state at program start is "off". Only events happening
+ while running inside of the given function will be
+ collected. Recursive calls of the given function do not trigger
+ any action. This option can be given multiple times to specify
+ different functions of interest.</p>
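+<p>For example, to collect events only while inside a single function
+ (the function name is illustrative):
+ </p>
+<pre class="screen">valgrind --tool=callgrind --toggle-collect=computeKernel your-program</pre>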
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.busevents"></a>6.2.3. Counting global bus events</h3></div></div></div>
+<p>For access to shared data among threads in multithreaded
+ code, synchronization is required to avoid race conditions.
+ Synchronization primitives are usually implemented via atomic instructions.
+ However, excessive use of such instructions can lead to performance
+ issues.</p>
+<p>To enable analysis of this problem, Callgrind optionally can count
+ the number of atomic instructions executed. More precisely, for x86/x86_64,
+ these are instructions using a lock prefix. For architectures supporting
+ LL/SC, these are the number of SC instructions executed. For both, the term
+ "global bus events" is used.</p>
+<p>The short name of the event type used for global bus events is "Ge".
+ To count global bus events, use <code class="option"><a class="xref" href="cl-manual.html#clopt.collect-bus">--collect-bus</a>=yes</code>.
+ </p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.cycles"></a>6.2.4. Avoiding cycles</h3></div></div></div>
+<p>Informally speaking, a cycle is a group of functions which
+ call each other in a recursive way.</p>
+<p>Formally speaking, a cycle is a nonempty set S of functions,
+ such that for every pair of functions F and G in S, it is possible
+ to call from F to G (possibly via intermediate functions) and also
+ from G to F. Furthermore, S must be maximal -- that is, be the
+ largest set of functions satisfying this property. For example, if
+ a third function H is called from inside S and calls back into S,
+ then H is also part of the cycle and should be included in S.</p>
+<p>Recursion is quite usual in programs, and therefore, cycles
+ sometimes appear in the call graph output of Callgrind. However,
+ the title of this chapter should raise two questions: What is bad
+ about cycles which makes you want to avoid them? And: How can
+ cycles be avoided without changing program code?</p>
+<p>Cycles are not bad in themselves, but tend to make performance
+ analysis of your code harder. This is because inclusive costs
+ for calls inside of a cycle are meaningless. The definition of
+ inclusive cost, i.e. self cost of a function plus inclusive cost
+ of its callees, needs a topological order among functions. For
+ cycles, this does not hold true: callees of a function in a cycle include
+ the function itself. Therefore, KCachegrind does cycle detection
+ and skips visualization of any inclusive cost for calls inside
+ of cycles. Further, all functions in a cycle are collapsed into artificial
+ functions named like <code class="computeroutput">Cycle 1</code>.</p>
+<p>Now, when a program exposes really big cycles (as is
+ true for some GUI code, or in general code using event or callback based
+ programming style), you lose the ability to pinpoint
+ the bottlenecks by following call chains from
+ <code class="function">main</code>, guided via
+ inclusive cost. In addition, KCachegrind loses its ability to show
+ interesting parts of the call graph, as it uses inclusive costs to
+ cut off uninteresting areas.</p>
+<p>Despite the meaninglessness of inclusive costs in cycles, the big
+ drawback for visualization motivates the possibility to temporarily
+ switch off cycle detection in KCachegrind, even though this can lead to
+ misleading visualization. However, cycles often appear because of an
+ unlucky superposition of independent call chains in such a way that
+ the profile result appears to contain a cycle. Neglecting uninteresting
+ calls with very small measured inclusive cost would break these
+ cycles. In such cases, incorrect handling of cycles by not detecting
+ them still gives meaningful profiling visualization.</p>
+<p>It has to be noted that currently, <span class="command"><strong>callgrind_annotate</strong></span>
+ does not do any cycle detection at all. For program executions with function
+ recursion, it can, for example, print nonsensical inclusive costs well above 100%.</p>
+<p>After describing why cycles are bad for profiling, it is worth
+ talking about cycle avoidance. The key insight here is that symbols in
+ the profile data do not have to exactly match the symbols found in the
+ program. Instead, the symbol name could encode additional information
+ from the current execution context such as recursion level of the
+ current function, or even some part of the call chain leading to the
+ function. While encoding of additional information into symbols is
+ quite capable of avoiding cycles, it has to be used carefully to not cause
+ symbol explosion. The latter imposes large memory requirements on Callgrind,
+ with possible out-of-memory conditions, and big profile data files.</p>
+<p>A further possibility to avoid cycles in Callgrind's profile data
+ output is to simply leave out given functions in the call graph. Of course, this
+ also skips any call information from and to an ignored function, and thus can
+ break a cycle. Candidates for this typically are dispatcher functions in event
+ driven code. The option to ignore calls to a function is
+ <code class="option"><a class="xref" href="cl-manual.html#opt.fn-skip">--fn-skip</a>=function</code>. Aside from
+ possibly breaking cycles, this is used in Callgrind to skip
+ trampoline functions in the PLT sections
+ for calls to functions in shared libraries. You can see the difference
+ if you profile with <code class="option"><a class="xref" href="cl-manual.html#opt.skip-plt">--skip-plt</a>=no</code>.
+ If a call is ignored, its cost events will be propagated to the
+ enclosing function.</p>
+<p>If you have a recursive function, you can distinguish the first
+ 10 recursion levels by specifying
+ <code class="option"><a class="xref" href="cl-manual.html#opt.separate-recs-num">--separate-recs10</a>=function</code>.
+ Or for all functions with
+ <code class="option"><a class="xref" href="cl-manual.html#opt.separate-recs">--separate-recs</a>=10</code>, but this will
+ give you much bigger profile data files. In the profile data, you will see
+ the recursion levels of "func" as the different functions with names
+ "func", "func'2", "func'3" and so on.</p>
+<p>If you have call chains "A > B > C" and "A > C > B"
+ in your program, you usually get a "false" cycle "B <> C". Use
+ <code class="option"><a class="xref" href="cl-manual.html#opt.separate-callers-num">--separate-callers2</a>=B</code>
+ <code class="option"><a class="xref" href="cl-manual.html#opt.separate-callers-num">--separate-callers2</a>=C</code>,
+ and functions "B" and "C" will be treated as different functions
+ depending on the direct caller. Using the apostrophe for appending
+ this "context" to the function name, you get "A > B'A > C'B"
+ and "A > C'A > B'C", and there will be no cycle. Use
+ <code class="option"><a class="xref" href="cl-manual.html#opt.separate-callers">--separate-callers</a>=2</code> to get a 2-caller
+ dependency for all functions. Note that doing this will increase
+ the size of profile data files.</p>
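+<p>For the "B &lt;&gt; C" example above, the corresponding invocation would be:
+ </p>
+<pre class="screen">valgrind --tool=callgrind --separate-callers2=B --separate-callers2=C your-program</pre>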
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.forkingprograms"></a>6.2.5. Forking Programs</h3></div></div></div>
+<p>If your program forks, the child will inherit all the profiling
+ data that has been gathered for the parent. To start with empty profile
+ counter values in the child, the client request
+ <code class="computeroutput"><a class="xref" href="cl-manual.html#cr.zero-stats">CALLGRIND_ZERO_STATS</a>;</code>
+ can be inserted into code to be executed by the child, directly after
+ <code class="computeroutput">fork</code>.</p>
+<p>However, you will have to make sure that the output file format string
+ (controlled by <code class="option">--callgrind-out-file</code>) does contain
+ <code class="option">%p</code> (which is true by default). Otherwise, the
+ outputs from the parent and child will overwrite each other or will be
+ intermingled, which almost certainly is not what you want.</p>
+<p>You will be able to control the new child independently from
+ the parent via callgrind_control.</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-manual.options"></a>6.3. Callgrind Command-line Options</h2></div></div></div>
+<p>
+In the following, options are grouped into classes.
+</p>
+<p>
+Some options allow the specification of a function/symbol name, such as
+<code class="option"><a class="xref" href="cl-manual.html#opt.dump-before">--dump-before</a>=function</code>, or
+<code class="option"><a class="xref" href="cl-manual.html#opt.fn-skip">--fn-skip</a>=function</code>. All these options
+can be specified multiple times for different functions.
+In addition, the function specifications actually are patterns, supporting
+the use of wildcards '*' (zero or more arbitrary characters) and '?'
+(exactly one arbitrary character), similar to file name globbing in the
+shell. This feature is especially important for C++, as without wildcard
+usage, the function would have to be specified in full, including its
+parameter signature. </p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.options.creation"></a>6.3.1. Dump creation options</h3></div></div></div>
+<p>
+These options influence the name and format of the profile data files.
+</p>
+<div class="variablelist">
+<a name="cl.opts.list.creation"></a><dl class="variablelist">
+<dt>
+<a name="opt.callgrind-out-file"></a><span class="term">
+ <code class="option">--callgrind-out-file=<file> </code>
+ </span>
+</dt>
+<dd><p>Write the profile data to
+ <code class="computeroutput">file</code> rather than to the default
+ output file,
+ <code class="computeroutput">callgrind.out.<pid></code>. The
+ <code class="option">%p</code> and <code class="option">%q</code> format specifiers
+ can be used to embed the process ID and/or the contents of an
+ environment variable in the name, as is the case for the core
+ option <code class="option"><a class="xref" href="manual-core.html#opt.log-file">--log-file</a></code>.
+ When multiple dumps are made, the file name
+ is modified further; see below.</p></dd>
+<dt>
+<a name="opt.dump-line"></a><span class="term">
+ <code class="option">--dump-line=<no|yes> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>This specifies that event counting should be performed at
+ source line granularity. This allows source annotation for sources
+ which are compiled with debug information
+ (<code class="option">-g</code>).</p></dd>
+<dt>
+<a name="opt.dump-instr"></a><span class="term">
+ <code class="option">--dump-instr=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>This specifies that event counting should be performed at
+ per-instruction granularity.
+ This allows for assembly code
+ annotation. Currently the results can only be
+ displayed by KCachegrind.</p></dd>
+<dt>
+<a name="opt.compress-strings"></a><span class="term">
+ <code class="option">--compress-strings=<no|yes> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>This option influences the output format of the profile data.
+ It specifies whether strings (file and function names) should be
+ identified by numbers. This shrinks the file,
+ but makes it more difficult
+ for humans to read (which is not recommended in any case).</p></dd>
+<dt>
+<a name="opt.compress-pos"></a><span class="term">
+ <code class="option">--compress-pos=<no|yes> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>This option influences the output format of the profile data.
+ It specifies whether numerical positions are always specified as absolute
+ values or are allowed to be relative to previous numbers.
+ This shrinks the file size.</p></dd>
+<dt>
+<a name="opt.combine-dumps"></a><span class="term">
+ <code class="option">--combine-dumps=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled, if multiple profile data parts are to be
+ generated, these parts are appended to the same output file.
+ Not recommended.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.options.activity"></a>6.3.2. Activity options</h3></div></div></div>
+<p>
+These options specify when actions relating to event counts are to
+be executed. For interactive control use callgrind_control.
+</p>
+<div class="variablelist">
+<a name="cl.opts.list.activity"></a><dl class="variablelist">
+<dt>
+<a name="opt.dump-every-bb"></a><span class="term">
+ <code class="option">--dump-every-bb=<count> [default: 0, never] </code>
+ </span>
+</dt>
+<dd><p>Dump profile data every <code class="option">count</code> basic blocks.
+ Whether a dump is needed is only checked when Valgrind's internal
+ scheduler is run. Therefore, the minimum useful setting is about 100000.
+ The count is a 64-bit value to make long dump periods possible.
+ </p></dd>
+<dt>
+<a name="opt.dump-before"></a><span class="term">
+ <code class="option">--dump-before=<function> </code>
+ </span>
+</dt>
+<dd><p>Dump when entering <code class="option">function</code>.</p></dd>
+<dt>
+<a name="opt.zero-before"></a><span class="term">
+ <code class="option">--zero-before=<function> </code>
+ </span>
+</dt>
+<dd><p>Zero all costs when entering <code class="option">function</code>.</p></dd>
+<dt>
+<a name="opt.dump-after"></a><span class="term">
+ <code class="option">--dump-after=<function> </code>
+ </span>
+</dt>
+<dd><p>Dump when leaving <code class="option">function</code>.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.options.collection"></a>6.3.3. Data collection options</h3></div></div></div>
+<p>
+These options specify when events are to be aggregated into event counts.
+Also see <a class="xref" href="cl-manual.html#cl-manual.limits" title="6.2.2. Limiting the range of collected events">Limiting range of event collection</a>.</p>
+<div class="variablelist">
+<a name="cl.opts.list.collection"></a><dl class="variablelist">
+<dt>
+<a name="opt.instr-atstart"></a><span class="term">
+ <code class="option">--instr-atstart=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>Specify if you want Callgrind to start simulation and
+ profiling from the beginning of the program.
+ When set to <code class="computeroutput">no</code>,
+ Callgrind will not be able
+ to collect any information, including calls, but it will have at
+ most a slowdown of around 4, which is the minimum Valgrind
+ overhead. Instrumentation can be interactively enabled via
+ <code class="computeroutput">callgrind_control -i on</code>.</p>
+<p>Note that the resulting call graph will most probably not
+ contain <code class="function">main</code>, but will contain all the
+ functions executed after instrumentation was enabled.
+ Instrumentation can also be programmatically enabled/disabled. See the
+ Callgrind include file
+ <code class="computeroutput">callgrind.h</code> for the macro
+ you have to use in your source code.</p>
+<p>For cache
+ simulation, results will be less accurate when switching on
+ instrumentation later in the program run, as the simulator starts
+ with an empty cache at that moment. Switch on event collection
+ later to cope with this error.</p>
+</dd>
+<dt>
+<a name="opt.collect-atstart"></a><span class="term">
+ <code class="option">--collect-atstart=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>Specify whether event collection is enabled at beginning
+ of the profile run.</p>
+<p>To only look at parts of your program, you have two
+ possibilities:</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>Zero event counters before entering the program part you
+ want to profile, and dump the event counters to a file after
+ leaving that program part.</p></li>
+<li class="listitem"><p>Switch on/off collection state as needed to only see
+ event counters happening while inside of the program part you
+ want to profile.</p></li>
+</ol></div>
+<p>The second option can be used if the program part you want to
+ profile is called many times. Option 1, i.e. creating a lot of
+ dumps, is not practical here.</p>
+<p>Collection state can be
+ toggled at entry and exit of a given function with the
+ option <code class="option"><a class="xref" href="cl-manual.html#opt.toggle-collect">--toggle-collect</a></code>. If you
+ use this option, collection
+ state should be disabled at the beginning. Note that the
+ specification of <code class="option">--toggle-collect</code>
+ implicitly sets
+ <code class="option">--collect-atstart=no</code>.</p>
+<p>Collection state can be toggled also by inserting the client request
+ <code class="computeroutput">
+
+ CALLGRIND_TOGGLE_COLLECT
+ ;</code>
+ at the needed code positions.</p>
+</dd>
+<dt>
+<a name="opt.toggle-collect"></a><span class="term">
+ <code class="option">--toggle-collect=<function> </code>
+ </span>
+</dt>
+<dd><p>Toggle collection on entry/exit of <code class="option">function</code>.</p></dd>
+<dt>
+<a name="opt.collect-jumps"></a><span class="term">
+ <code class="option">--collect-jumps=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>This specifies whether information for (conditional) jumps
+ should be collected. As above, callgrind_annotate currently is not
+ able to show you the data. You have to use KCachegrind to get jump
+ arrows in the annotated code.</p></dd>
+<dt>
+<a name="opt.collect-systime"></a><span class="term">
+ <code class="option">--collect-systime=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>This specifies whether information for system call times
+ should be collected.</p></dd>
+<dt>
+<a name="clopt.collect-bus"></a><span class="term">
+ <code class="option">--collect-bus=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>This specifies whether the number of global bus events executed
+ should be collected. The event type "Ge" is used for these events.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.options.separation"></a>6.3.4. Cost entity separation options</h3></div></div></div>
+<p>
+These options specify how event counts should be attributed to execution
+contexts.
+For example, they specify whether the recursion level or the
+call chain leading to a function should be taken into account,
+and whether the thread ID should be considered.
+Also see <a class="xref" href="cl-manual.html#cl-manual.cycles" title="6.2.4. Avoiding cycles">Avoiding cycles</a>.</p>
+<div class="variablelist">
+<a name="cmd-options.separation"></a><dl class="variablelist">
+<dt>
+<a name="opt.separate-threads"></a><span class="term">
+ <code class="option">--separate-threads=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>This option specifies whether profile data should be generated
+ separately for every thread. If yes, the file names get "-threadID"
+ appended.</p></dd>
+<dt>
+<a name="opt.separate-callers"></a><span class="term">
+ <code class="option">--separate-callers=<callers> [default: 0] </code>
+ </span>
+</dt>
+<dd><p>Separate contexts by at most <callers> functions in the
+ call chain. See <a class="xref" href="cl-manual.html#cl-manual.cycles" title="6.2.4. Avoiding cycles">Avoiding cycles</a>.</p></dd>
+<dt>
+<a name="opt.separate-callers-num"></a><span class="term">
+ <code class="option">--separate-callers<number>=<function> </code>
+ </span>
+</dt>
+<dd><p>Separate <code class="option">number</code> callers for <code class="option">function</code>.
+ See <a class="xref" href="cl-manual.html#cl-manual.cycles" title="6.2.4. Avoiding cycles">Avoiding cycles</a>.</p></dd>
+<dt>
+<a name="opt.separate-recs"></a><span class="term">
+ <code class="option">--separate-recs=<level> [default: 2] </code>
+ </span>
+</dt>
+<dd><p>Separate function recursions by at most <code class="option">level</code> levels.
+ See <a class="xref" href="cl-manual.html#cl-manual.cycles" title="6.2.4. Avoiding cycles">Avoiding cycles</a>.</p></dd>
+<dt>
+<a name="opt.separate-recs-num"></a><span class="term">
+ <code class="option">--separate-recs<number>=<function> </code>
+ </span>
+</dt>
+<dd><p>Separate <code class="option">number</code> recursions for <code class="option">function</code>.
+ See <a class="xref" href="cl-manual.html#cl-manual.cycles" title="6.2.4. Avoiding cycles">Avoiding cycles</a>.</p></dd>
+<dt>
+<a name="opt.skip-plt"></a><span class="term">
+ <code class="option">--skip-plt=<no|yes> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>Ignore calls to/from PLT sections.</p></dd>
+<dt>
+<a name="opt.skip-direct-rec"></a><span class="term">
+ <code class="option">--skip-direct-rec=<no|yes> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>Ignore direct recursions.</p></dd>
+<dt>
+<a name="opt.fn-skip"></a><span class="term">
+ <code class="option">--fn-skip=<function> </code>
+ </span>
+</dt>
+<dd>
+<p>Ignore calls to/from a given function. E.g. if you have a
+ call chain A > B > C, and you specify function B to be
+ ignored, you will only see A > C.</p>
+<p>This is very convenient for skipping functions that handle callback
+   behaviour. For example, with the signal/slot mechanism in the
+   Qt library, you typically only want
+   to see the function emitting a signal calling the slots connected
+   to that signal. First, determine the real call chain to see which
+   functions need to be skipped, then use this option.</p>
+</dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.options.simulation"></a>6.3.5. Simulation options</h3></div></div></div>
+<div class="variablelist">
+<a name="cl.opts.list.simulation"></a><dl class="variablelist">
+<dt>
+<a name="clopt.cache-sim"></a><span class="term">
+ <code class="option">--cache-sim=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>Specify if you want to do full cache simulation. By default,
+ only instruction read accesses will be counted ("Ir").
+ With cache simulation, further event counters are enabled:
+ Cache misses on instruction reads ("I1mr"/"ILmr"),
+ data read accesses ("Dr") and related cache misses ("D1mr"/"DLmr"),
+ data write accesses ("Dw") and related cache misses ("D1mw"/"DLmw").
+ For more information, see <a class="xref" href="cg-manual.html" title="5. Cachegrind: a cache and branch-prediction profiler">Cachegrind: a cache and branch-prediction profiler</a>.
+ </p></dd>
+<dt>
+<a name="clopt.branch-sim"></a><span class="term">
+ <code class="option">--branch-sim=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>Specify if you want to do branch prediction simulation.
+ Further event counters are enabled: Number of executed conditional
+ branches and related predictor misses ("Bc"/"Bcm"), executed indirect
+ jumps and related misses of the jump address predictor ("Bi"/"Bim").
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="cl-manual.options.cachesimulation"></a>6.3.6. Cache simulation options</h3></div></div></div>
+<div class="variablelist">
+<a name="cl.opts.list.cachesimulation"></a><dl class="variablelist">
+<dt>
+<a name="opt.simulate-wb"></a><span class="term">
+ <code class="option">--simulate-wb=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>Specify whether write-back behavior should be simulated, allowing
+   LL cache misses with and without write-backs to be distinguished.
+   The cache model of Cachegrind/Callgrind does not specify write-through
+   vs. write-back behavior, and this is also not relevant for the number
+   of generated miss counts. However, with explicit write-back simulation
+   it can be determined whether a miss triggers not only the loading of a new
+   cache line, but also a write-back of a dirty cache line beforehand.
+   The new dirty miss events are ILdmr, DLdmr, and DLdmw,
+   for misses because of instruction read, data read, and data write,
+   respectively. As these misses produce two memory transactions, they should
+   be accounted for with roughly twice the time estimate of a normal miss.
+ </p></dd>
+<dt>
+<a name="opt.simulate-hwpref"></a><span class="term">
+ <code class="option">--simulate-hwpref=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>Specify whether simulation of a hardware prefetcher should be
+   added which is able to detect stream accesses in the second-level cache
+   by tracking accesses separately for each page.
+   As the simulation cannot model the timing of prefetching,
+   it is assumed that any hardware prefetch triggered succeeds before the
+   real access is done. Thus, this gives a best-case scenario by covering
+   all possible stream accesses.</p></dd>
+<dt>
+<a name="opt.cacheuse"></a><span class="term">
+ <code class="option">--cacheuse=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>Specify whether cache line use should be collected. For every
+   cache line, from being loaded until it is evicted, the number of accesses
+   as well as the number of actually used bytes is determined. This
+   behavior is attributed to the code which triggered loading of the cache
+   line. In contrast to miss counters, which show the position where
+   the symptoms of bad cache behavior (i.e. latencies) appear, the
+   use counters try to pinpoint the reason (i.e. the code with the
+   bad access behavior). The new counters are defined in a way such
+   that worse behavior results in higher cost.</p>
+<p>AcCost1 and AcCost2 are counters showing bad temporal locality
+   for the L1 and LL caches, respectively. They are computed by summing up
+   the reciprocals of the number of accesses to each cache line,
+   multiplied by 1000 (as only integer costs are allowed). E.g. for
+   a given source line with 5 read accesses, an AcCost value of 5000
+   means that for every access, a new cache line was loaded and directly
+   evicted afterwards without further accesses (see the worked example
+   after this option list).</p>
+<p>Similarly, SpLoss1/2
+   show bad spatial locality for the L1 and LL caches, respectively. They
+   give the <span class="emphasis"><em>spatial loss</em></span> count of bytes which
+   were loaded into the cache but never accessed. This points at code
+   accessing data in a way such that cache space is wasted, and hints
+   at bad layout of data structures in memory. Assuming a cache line
+   size of 64 bytes and 100 L1 misses for a given source line, the
+   loading of 6400 bytes into L1 was triggered. If SpLoss1 shows a
+   value of 3200 for this line, this means that half of the loaded data was
+   never used; with a better data layout, only half of the cache
+   space would have been needed.</p>
+<p>Please note that for cache line use counters, it currently is
+   not possible to provide meaningful inclusive costs. Therefore,
+   inclusive costs of these counters should be ignored.
+   </p></dd>
+<dt>
+<a name="opt.I1"></a><span class="term">
+ <code class="option">--I1=<size>,<associativity>,<line size> </code>
+ </span>
+</dt>
+<dd><p>Specify the size, associativity and line size of the level 1
+ instruction cache. </p></dd>
+<dt>
+<a name="opt.D1"></a><span class="term">
+ <code class="option">--D1=<size>,<associativity>,<line size> </code>
+ </span>
+</dt>
+<dd><p>Specify the size, associativity and line size of the level 1
+ data cache.</p></dd>
+<dt>
+<a name="opt.LL"></a><span class="term">
+ <code class="option">--LL=<size>,<associativity>,<line size> </code>
+ </span>
+</dt>
+<dd><p>Specify the size, associativity and line size of the last-level
+ cache.</p></dd>
+</dl>
+</div>
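+<p>As a worked example for the AcCost formula described under
+   <code class="option">--cacheuse</code> above (the numbers are invented for
+   illustration): suppose a source line performs 5 read accesses and each of
+   them touches a cache line that is loaded and evicted again without any
+   further access, i.e. each line is accessed exactly once.</p>
+<pre class="screen">
+AcCost1 = 1000 * sum over the loaded cache lines of (1 / accesses to that line)
+        = 1000 * (1/1 + 1/1 + 1/1 + 1/1 + 1/1)
+        = 5000
+</pre>
+<p>Lines whose cache lines are accessed more often before eviction contribute
+   smaller reciprocals, so better temporal locality shows up as a lower
+   AcCost value.</p>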
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-manual.monitor-commands"></a>6.4. Callgrind Monitor Commands</h2></div></div></div>
+<p>The Callgrind tool provides monitor commands handled by the Valgrind
+gdbserver (see <a class="xref" href="manual-core-adv.html#manual-core-adv.gdbserver-commandhandling" title="3.2.5. Monitor command handling by the Valgrind gdbserver">Monitor command handling by the Valgrind gdbserver</a>).
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="varname">dump [<dump_hint>]</code> requests to dump the
+ profile data. </p></li>
+<li class="listitem"><p><code class="varname">zero</code> requests to zero the profile data
+ counters. </p></li>
+<li class="listitem"><p><code class="varname">instrumentation [on|off]</code> requests to set
+ (if parameter on/off is given) or get the current instrumentation state.
+ </p></li>
+<li class="listitem"><p><code class="varname">status</code> requests to print out some status
+ information.</p></li>
+</ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-manual.clientrequests"></a>6.5. Callgrind specific client requests</h2></div></div></div>
+<p>Callgrind provides the following specific client requests in
+<code class="filename">callgrind.h</code>. See that file for the exact details of
+their arguments.</p>
+<div class="variablelist">
+<a name="cl.clientrequests.list"></a><dl class="variablelist">
+<dt>
+<a name="cr.dump-stats"></a><span class="term">
+ <code class="computeroutput">CALLGRIND_DUMP_STATS</code>
+ </span>
+</dt>
+<dd><p>Force generation of a profile dump at specified position
+ in code, for the current thread only. Written counters will be reset
+ to zero.</p></dd>
+<dt>
+<a name="cr.dump-stats-at"></a><span class="term">
+ <code class="computeroutput">CALLGRIND_DUMP_STATS_AT(string)</code>
+ </span>
+</dt>
+<dd><p>Same as <code class="computeroutput">CALLGRIND_DUMP_STATS</code>,
+   but allows a string to be specified so that profile
+   dumps can be distinguished; see the usage sketch after this list.</p></dd>
+<dt>
+<a name="cr.zero-stats"></a><span class="term">
+ <code class="computeroutput">CALLGRIND_ZERO_STATS</code>
+ </span>
+</dt>
+<dd><p>Reset the profile counters for the current thread to zero.</p></dd>
+<dt>
+<a name="cr.toggle-collect"></a><span class="term">
+ <code class="computeroutput">CALLGRIND_TOGGLE_COLLECT</code>
+ </span>
+</dt>
+<dd><p>Toggle the collection state. This allows events to be ignored
+   with regard to profile counters. See also options
+   <code class="option"><a class="xref" href="cl-manual.html#opt.collect-atstart">--collect-atstart</a></code> and
+   <code class="option"><a class="xref" href="cl-manual.html#opt.toggle-collect">--toggle-collect</a></code>.</p></dd>
+<dt>
+<a name="cr.start-instr"></a><span class="term">
+ <code class="computeroutput">CALLGRIND_START_INSTRUMENTATION</code>
+ </span>
+</dt>
+<dd><p>Start full Callgrind instrumentation if not already enabled.
+   When cache simulation is enabled, this will flush the simulated cache
+   and lead to an artificial cache warm-up phase afterwards, with
+   cache misses which would not have happened in reality. See also
+   option <code class="option"><a class="xref" href="cl-manual.html#opt.instr-atstart">--instr-atstart</a></code>.</p></dd>
+<dt>
+<a name="cr.stop-instr"></a><span class="term">
+ <code class="computeroutput">CALLGRIND_STOP_INSTRUMENTATION</code>
+ </span>
+</dt>
+<dd><p>Stop full Callgrind instrumentation if not already disabled.
+   This flushes Valgrind's translation cache, and does no additional
+   instrumentation afterwards: the program effectively runs at the same
+   speed as under Nulgrind, i.e. at minimal slowdown. Use this to
+   speed up the Callgrind run for uninteresting code parts. Use
+ <code class="computeroutput"><a class="xref" href="cl-manual.html#cr.start-instr">CALLGRIND_START_INSTRUMENTATION</a></code> to
+ enable instrumentation again. See also option
+ <code class="option"><a class="xref" href="cl-manual.html#opt.instr-atstart">--instr-atstart</a></code>.</p></dd>
+</dl>
+</div>
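+<p>A minimal sketch combining some of these requests (the phase function and
+   its workload are invented for illustration; the macros are the ones listed
+   above). Counts gathered during startup are discarded, and the interesting
+   phase is written out as a separately labelled dump:</p>
+<pre class="screen">
+#include <valgrind/callgrind.h>
+
+static volatile long sink;
+
+static void phase(long iterations)      /* hypothetical program phase */
+{
+    for (long i = 0; i < iterations; i++)
+        sink += i;
+}
+
+int main(void)
+{
+    phase(100000);                       /* uninteresting startup work */
+
+    CALLGRIND_ZERO_STATS;                /* discard everything counted so far */
+    phase(500000);                       /* the part we want to measure */
+    CALLGRIND_DUMP_STATS_AT("phase-of-interest");
+                                         /* dump labelled "phase-of-interest",
+                                            counters are reset afterwards */
+    phase(200000);                       /* counted again; appears in the dump
+                                            written at program exit */
+    return 0;
+}
+</pre>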
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-manual.callgrind_annotate-options"></a>6.6. callgrind_annotate Command-line Options</h2></div></div></div>
+<div class="variablelist">
+<a name="callgrind_annotate.opts.list"></a><dl class="variablelist">
+<dt><span class="term"><code class="option">-h --help</code></span></dt>
+<dd><p>Show summary of options.</p></dd>
+<dt><span class="term"><code class="option">--version</code></span></dt>
+<dd><p>Show version of callgrind_annotate.</p></dd>
+<dt><span class="term">
+ <code class="option">--show=A,B,C [default: all]</code>
+ </span></dt>
+<dd><p>Only show figures for events A,B,C.</p></dd>
+<dt><span class="term">
+ <code class="option">--sort=A,B,C</code>
+ </span></dt>
+<dd><p>Sort columns by events A,B,C [event column order].</p></dd>
+<dt><span class="term">
+ <code class="option">--threshold=<0--100> [default: 99%] </code>
+ </span></dt>
+<dd><p>Percentage of counts (of primary sort event) we are
+ interested in.</p></dd>
+<dt><span class="term">
+ <code class="option">--auto=<yes|no> [default: no] </code>
+ </span></dt>
+<dd><p>Annotate all source files containing functions that helped
+ reach the event count threshold.</p></dd>
+<dt><span class="term">
+ <code class="option">--context=N [default: 8] </code>
+ </span></dt>
+<dd><p>Print N lines of context before and after annotated
+ lines.</p></dd>
+<dt><span class="term">
+ <code class="option">--inclusive=<yes|no> [default: no] </code>
+ </span></dt>
+<dd><p>Add subroutine costs to function calls.</p></dd>
+<dt><span class="term">
+ <code class="option">--tree=<none|caller|calling|both> [default: none] </code>
+ </span></dt>
+<dd><p>For each function, print its callers, the functions it calls,
+   or both.</p></dd>
+<dt><span class="term">
+ <code class="option">-I, --include=<dir> </code>
+ </span></dt>
+<dd><p>Add <code class="option">dir</code> to the list of directories to search
+ for source files.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="cl-manual.callgrind_control-options"></a>6.7. callgrind_control Command-line Options</h2></div></div></div>
+<p>By default, callgrind_control acts on all programs run by the
+ current user under Callgrind. It is possible to limit the actions to
+ specified Callgrind runs by providing a list of pids or program names as
+ argument. The default action is to give some brief information about the
+ applications being run under Callgrind.</p>
+<div class="variablelist">
+<a name="callgrind_control.opts.list"></a><dl class="variablelist">
+<dt><span class="term"><code class="option">-h --help</code></span></dt>
+<dd><p>Show a short description, usage, and summary of options.</p></dd>
+<dt><span class="term"><code class="option">--version</code></span></dt>
+<dd><p>Show version of callgrind_control.</p></dd>
+<dt><span class="term"><code class="option">-l --long</code></span></dt>
+<dd><p>Also show the working directory, in addition to the brief
+   information given by default.
+   </p></dd>
+<dt><span class="term"><code class="option">-s --stat</code></span></dt>
+<dd><p>Show statistics information about active Callgrind runs.</p></dd>
+<dt><span class="term"><code class="option">-b --back</code></span></dt>
+<dd><p>Show stack/back traces of each thread in active Callgrind runs. For
+   each active function in the stack trace, the number of invocations
+   since program start (or since the last dump) is also shown. This option can be
+   combined with -e to show the inclusive cost of active functions.</p></dd>
+<dt><span class="term"><code class="option">-e [A,B,...] </code> (default: all)</span></dt>
+<dd><p>Show the current per-thread, exclusive cost values of event
+ counters. If no explicit event names are given, figures for all event
+ types which are collected in the given Callgrind run are
+ shown. Otherwise, only figures for event types A, B, ... are shown. If
+ this option is combined with -b, inclusive cost for the functions of
+ each active stack frame is provided, too.
+ </p></dd>
+<dt><span class="term"><code class="option">--dump[=<desc>] </code> (default: no description)</span></dt>
+<dd><p>Request the dumping of profile information. Optionally, a
+   description can be specified; it is written into the dump as part of
+   the information on what triggered the dump action, and
+   can be used to distinguish multiple dumps.</p></dd>
+<dt><span class="term"><code class="option">-z --zero</code></span></dt>
+<dd><p>Zero all event counters.</p></dd>
+<dt><span class="term"><code class="option">-k --kill</code></span></dt>
+<dd><p>Force a Callgrind run to be terminated.</p></dd>
+<dt><span class="term"><code class="option">--instr=<on|off></code></span></dt>
+<dd><p>Switch instrumentation mode on or off. If a Callgrind run has
+   instrumentation disabled, no simulation is done and no events are
+   counted. This is useful for skipping uninteresting program parts, as the
+   slowdown is then much smaller (the same as with the Valgrind tool "none"). See also
+   the Callgrind option <code class="option">--instr-atstart</code>.</p></dd>
+<dt><span class="term"><code class="option">--vgdb-prefix=<prefix></code></span></dt>
+<dd><p>Specify the vgdb prefix for callgrind_control to use.
+   callgrind_control internally uses vgdb to find and control the active
+   Callgrind runs. If the <code class="option">--vgdb-prefix</code> option was used
+   for launching valgrind, then the same option must be given to
+   callgrind_control.</p></dd>
+</dl>
+</div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="cg-manual.html"><< 5. Cachegrind: a cache and branch-prediction profiler</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="hg-manual.html">7. Helgrind: a thread error detector >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/design-impl.html b/docs/html/design-impl.html
new file mode 100644
index 0000000..22f8c6e
--- /dev/null
+++ b/docs/html/design-impl.html
@@ -0,0 +1,84 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>1. The Design and Implementation of Valgrind</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="tech-docs.html" title="Valgrind Technical Documentation">
+<link rel="prev" href="tech-docs.html" title="Valgrind Technical Documentation">
+<link rel="next" href="manual-writing-tools.html" title="2. Writing a New Valgrind Tool">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="tech-docs.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="tech-docs.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Technical Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="manual-writing-tools.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="design-impl"></a>1. The Design and Implementation of Valgrind</h1></div></div></div>
+<p>A number of academic publications nicely describe many aspects
+of Valgrind's design and implementation. Online copies of all of
+them, and others, are available on the <a class="ulink" href="http://www.valgrind.org/docs/pubs.html" target="_top">Valgrind
+publications page</a>.</p>
+<p>The following paper gives a good overview of Valgrind, and explains
+how it differs from other dynamic binary instrumentation frameworks such as
+Pin and DynamoRIO.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>
+ <span class="command"><strong>Valgrind: A Framework for Heavyweight Dynamic Binary
+ Instrumentation. Nicholas Nethercote and Julian Seward. Proceedings
+ of ACM SIGPLAN 2007 Conference on Programming Language Design and
+ Implementation (PLDI 2007), San Diego, California, USA, June
+ 2007.</strong></span>
+ </p></li></ul></div>
+<p>The following two papers together give a comprehensive description of
+how most of Memcheck works. The first paper describes in detail how
+Memcheck's undefined value error detection (a.k.a. V bits) works. The
+second paper describes in detail how Memcheck's shadow memory is
+implemented, and compares it to other alternative approaches.
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem">
+<p><span class="command"><strong>Using Valgrind to detect undefined value errors with
+ bit-precision. Julian Seward and Nicholas Nethercote. Proceedings
+ of the USENIX'05 Annual Technical Conference, Anaheim, California,
+ USA, April 2005.</strong></span>
+ </p>
+<p><span class="command"><strong>How to Shadow Every Byte of Memory Used by a Program.
+ Nicholas Nethercote and Julian Seward. Proceedings of the Third
+ International ACM SIGPLAN/SIGOPS Conference on Virtual Execution
+ Environments (VEE 2007), San Diego, California, USA, June
+ 2007.</strong></span>
+ </p>
+</li></ul></div>
+<p>The following paper describes Callgrind.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p><span class="command"><strong>A Tool Suite for Simulation Based Analysis of Memory Access
+ Behavior. Josef Weidendorfer, Markus Kowarschik and Carsten
+ Trinitis. Proceedings of the 4th International Conference on
+ Computational Science (ICCS 2004), Krakow, Poland, June 2004.</strong></span>
+ </p></li></ul></div>
+<p>The following dissertation describes Valgrind in some detail
+(many of these details are now out-of-date) as well as Cachegrind,
+Annelid and Redux. It also covers some underlying theory about
+dynamic binary analysis in general and what all these tools have in
+common.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p><span class="command"><strong>Dynamic Binary Analysis and Instrumentation. Nicholas
+ Nethercote.</strong></span> PhD Dissertation, University of Cambridge, November
+ 2004.</p></li></ul></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="tech-docs.html"><< Valgrind Technical Documentation</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="tech-docs.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="manual-writing-tools.html">2. Writing a New Valgrind Tool >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dh-manual.html b/docs/html/dh-manual.html
new file mode 100644
index 0000000..4557e7d
--- /dev/null
+++ b/docs/html/dh-manual.html
@@ -0,0 +1,363 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>10. DHAT: a dynamic heap analysis tool</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="ms-manual.html" title="9. Massif: a heap profiler">
+<link rel="next" href="sg-manual.html" title="11. SGCheck: an experimental stack and global array overrun detector">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="ms-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="sg-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dh-manual"></a>10. DHAT: a dynamic heap analysis tool</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="dh-manual.html#dh-manual.overview">10.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="dh-manual.html#dh-manual.understanding">10.2. Understanding DHAT's output</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="dh-manual.html#idm140639117126160">10.2.1. Interpreting the max-live, tot-alloc and deaths fields</a></span></dt>
+<dt><span class="sect2"><a href="dh-manual.html#idm140639113841488">10.2.2. Interpreting the acc-ratios fields</a></span></dt>
+<dt><span class="sect2"><a href="dh-manual.html#idm140639116741152">10.2.3. Interpreting "Aggregated access counts by offset" data</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="dh-manual.html#dh-manual.options">10.3. DHAT Command-line Options</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=exp-dhat</code> on the Valgrind
+command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="dh-manual.overview"></a>10.1. Overview</h2></div></div></div>
+<p>DHAT is a tool for examining how programs use their heap
+allocations.</p>
+<p>It tracks the allocated blocks, and inspects every memory access
+to find which block, if any, it is to. The following data is
+collected and presented per allocation point (allocation
+stack):</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Total allocation (number of bytes and
+ blocks)</p></li>
+<li class="listitem"><p>maximum live volume (number of bytes and
+ blocks)</p></li>
+<li class="listitem"><p>average block lifetime (number of instructions
+ between allocation and freeing)</p></li>
+<li class="listitem"><p>average number of reads and writes to each byte in
+ the block ("access ratios")</p></li>
+<li class="listitem"><p>for allocation points which always allocate blocks
+ only of one size, and that size is 4096 bytes or less: counts
+ showing how often each byte offset inside the block is
+ accessed.</p></li>
+</ul></div>
+<p>Using these statistics it is possible to identify allocation
+points with the following characteristics:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>potential process-lifetime leaks: blocks allocated
+ by the point just accumulate, and are freed only at the end of the
+ run.</p></li>
+<li class="listitem"><p>excessive turnover: points which chew through a lot
+ of heap, even if it is not held onto for very long</p></li>
+<li class="listitem"><p>excessively transient: points which allocate very
+ short lived blocks</p></li>
+<li class="listitem"><p>useless or underused allocations: blocks which are
+ allocated but not completely filled in, or are filled in but not
+ subsequently read.</p></li>
+<li class="listitem"><p>blocks with inefficient layout -- areas never
+ accessed, or with hot fields scattered throughout the
+ block.</p></li>
+</ul></div>
+<p>As with the Massif heap profiler, DHAT measures program progress
+by counting instructions, and so presents all age/time related figures
+as instruction counts. This sounds a little odd at first, but it
+makes runs repeatable in a way which is not possible if CPU time is
+used.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="dh-manual.understanding"></a>10.2. Understanding DHAT's output</h2></div></div></div>
+<p>DHAT provides a lot of useful information on dynamic heap usage.
+Most of the art of using it is in interpretation of the resulting
+numbers. That is best illustrated via a set of examples.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="idm140639117126160"></a>10.2.1. Interpreting the max-live, tot-alloc and deaths fields</h3></div></div></div>
+<div class="sect3"><div class="titlepage"><div><div><h4 class="title">
+<a name="idm140639117125456"></a>10.2.1.1. A simple example</h4></div></div></div></div>
+<pre class="screen">
+ ======== SUMMARY STATISTICS ========
+
+ guest_insns: 1,045,339,534
+ [...]
+ max-live: 63,490 in 984 blocks
+ tot-alloc: 1,904,700 in 29,520 blocks (avg size 64.52)
+ deaths: 29,520, at avg age 22,227,424
+ acc-ratios: 6.37 rd, 1.14 wr (12,141,526 b-read, 2,174,460 b-written)
+ at 0x4C275B8: malloc (vg_replace_malloc.c:236)
+ by 0x40350E: tcc_malloc (tinycc.c:6712)
+ by 0x404580: tok_alloc_new (tinycc.c:7151)
+ by 0x40870A: next_nomacro1 (tinycc.c:9305)
+</pre>
+<p>Over the entire run of the program, this stack (allocation
+point) allocated 29,520 blocks in total, containing 1,904,700 bytes in
+total. By looking at the max-live data, we see that not many blocks
+were simultaneously live, though: at the peak, there were 63,490
+allocated bytes in 984 blocks. This tells us that the program is
+steadily freeing such blocks as it runs, rather than hanging on to all
+of them until the end and freeing them all.</p>
+<p>The deaths entry tells us that 29,520 blocks allocated by this stack
+died (were freed) during the run of the program. Since 29,520 is
+also the number of blocks allocated in total, that tells us that
+all allocated blocks were freed by the end of the program.</p>
+<p>It also tells us that the average age at death was 22,227,424
+instructions. From the summary statistics we see that the program ran
+for 1,045,339,534 instructions, and so the average age at death is
+about 2% of the program's total run time.</p>
+<div class="sect3"><div class="titlepage"><div><div><h4 class="title">
+<a name="idm140639113980544"></a>10.2.1.2. Example of a potential process-lifetime leak</h4></div></div></div></div>
+<p>This next example (from a different program than the above)
+shows a potential process lifetime leak. A process lifetime leak
+occurs when a program keeps allocating data, but only frees the
+data just before it exits. Hence the program's heap grows constantly
+in size, yet Memcheck reports no leak, because the program has
+freed up everything at exit. This is particularly a hazard for
+long running programs.</p>
+<pre class="screen">
+ ======== SUMMARY STATISTICS ========
+
+ guest_insns: 418,901,537
+ [...]
+ max-live: 32,512 in 254 blocks
+ tot-alloc: 32,512 in 254 blocks (avg size 128.00)
+ deaths: 254, at avg age 300,467,389
+ acc-ratios: 0.26 rd, 0.20 wr (8,756 b-read, 6,604 b-written)
+ at 0x4C275B8: malloc (vg_replace_malloc.c:236)
+ by 0x4C27632: realloc (vg_replace_malloc.c:525)
+ by 0x56FF41D: QtFontStyle::pixelSize(unsigned short, bool) (qfontdatabase.cpp:269)
+ by 0x5700D69: loadFontConfig() (qfontdatabase_x11.cpp:1146)
+</pre>
+<p>There are two tell-tale signs that this might be a
+process-lifetime leak. Firstly, the max-live and tot-alloc numbers
+are identical. The only way that can happen is if these blocks are
+all allocated and then all deallocated.</p>
+<p>Secondly, the average age at death (300 million insns) is 71% of
+the total program lifetime (419 million insns), hence this is not a
+transient allocation-free spike -- rather, it is spread out over a
+large part of the entire run. One interpretation is, roughly, that
+all 254 blocks were allocated in the first half of the run, held onto
+for the second half, and then freed just before exit.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="idm140639113841488"></a>10.2.2. Interpreting the acc-ratios fields</h3></div></div></div>
+<div class="sect3"><div class="titlepage"><div><div><h4 class="title">
+<a name="idm140639113840736"></a>10.2.2.1. A fairly harmless allocation point record</h4></div></div></div></div>
+<pre class="screen">
+ max-live: 49,398 in 808 blocks
+ tot-alloc: 1,481,940 in 24,240 blocks (avg size 61.13)
+ deaths: 24,240, at avg age 34,611,026
+ acc-ratios: 2.13 rd, 0.91 wr (3,166,650 b-read, 1,358,820 b-written)
+ at 0x4C275B8: malloc (vg_replace_malloc.c:236)
+ by 0x40350E: tcc_malloc (tinycc.c:6712)
+ by 0x404580: tok_alloc_new (tinycc.c:7151)
+ by 0x4046C4: tok_alloc (tinycc.c:7190)
+</pre>
+<p>The acc-ratios field tells us that each byte in the blocks
+allocated here is read an average of 2.13 times before the block is
+deallocated. Given that the blocks have an average age at death of
+34,611,026 instructions, that's roughly one read of each byte every 15
+million instructions. So from that standpoint the blocks aren't
+"working" very hard.</p>
+<p>More interesting is the write ratio: each byte is written an
+average of 0.91 times. This tells us that some parts of the allocated
+blocks are never written, at least 9% on average. To completely
+initialise the block would require writing each byte at least once,
+and that would give a write ratio of 1.0. The fact that some block
+areas are evidently unused might point to data alignment holes or
+other layout inefficiencies.</p>
+<p>Well, at least all the blocks are freed (24,240 allocations,
+24,240 deaths).</p>
+<p>If all the blocks had been the same size, DHAT would also show
+the access counts by block offset, so we could see where exactly these
+unused areas are. However, that isn't the case: the blocks have
+varying sizes, so DHAT can't perform such an analysis. We can see
+that they must have varying sizes since the average block size, 61.13,
+isn't a whole number.</p>
+<div class="sect3"><div class="titlepage"><div><div><h4 class="title">
+<a name="idm140639118134560"></a>10.2.2.2. A more suspicious looking example</h4></div></div></div></div>
+<pre class="screen">
+ max-live: 180,224 in 22 blocks
+ tot-alloc: 180,224 in 22 blocks (avg size 8192.00)
+ deaths: none (none of these blocks were freed)
+ acc-ratios: 0.00 rd, 0.00 wr (0 b-read, 0 b-written)
+ at 0x4C275B8: malloc (vg_replace_malloc.c:236)
+ by 0x40350E: tcc_malloc (tinycc.c:6712)
+ by 0x40369C: __sym_malloc (tinycc.c:6787)
+ by 0x403711: sym_malloc (tinycc.c:6805)
+</pre>
+<p>Here, both the read and write access ratios are zero. Hence
+this point is allocating blocks which are never used, neither read nor
+written. Indeed, they are also not freed ("deaths: none") and are
+simply leaked. So, here is 180k of completely useless allocation that
+could be removed.</p>
+<p>Re-running with Memcheck does indeed report the same leak. What
+DHAT can tell us, that Memcheck can't, is that not only are the blocks
+leaked, they are also never used.</p>
+<div class="sect3"><div class="titlepage"><div><div><h4 class="title">
+<a name="idm140639111498432"></a>10.2.2.3. Another suspicious example</h4></div></div></div></div>
+<p>Here's one where blocks are allocated, written to,
+but never read from. We see this immediately from the zero read
+access ratio. They do get freed, though:</p>
+<pre class="screen">
+ max-live: 54 in 3 blocks
+ tot-alloc: 1,620 in 90 blocks (avg size 18.00)
+ deaths: 90, at avg age 34,558,236
+ acc-ratios: 0.00 rd, 1.11 wr (0 b-read, 1,800 b-written)
+ at 0x4C275B8: malloc (vg_replace_malloc.c:236)
+ by 0x40350E: tcc_malloc (tinycc.c:6712)
+ by 0x4035BD: tcc_strdup (tinycc.c:6750)
+ by 0x41FEBB: tcc_add_sysinclude_path (tinycc.c:20931)
+</pre>
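+<p>As a hypothetical C sketch of this pattern (not the code from the output
+above), a block that is written but never read might arise like this:</p>
+<pre class="screen">
+#include <stdlib.h>
+#include <string.h>
+
+void remember(const char *s)
+{
+    char *copy = strdup(s);   /* allocates the block and writes it (the copy) */
+    /* ... nothing ever reads the copy ... */
+    free(copy);               /* freed without a single read: rd ratio 0.00 */
+}
+</pre>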
+<p>In the previous two examples, it is easy to see blocks that are
+never written to, or never read from, or some combination of both.
+Unfortunately, in C++ code, the situation is less clear. That's
+because an object's constructor will write to the underlying block,
+and its destructor will read from it. So the block's read and write
+ratios will be non-zero even if the object, once constructed, is never
+used, but only eventually destructed.</p>
+<p>Really, what we want is to measure only memory accesses in
+between the end of an object's construction and the start of its
+destruction. Unfortunately I do not know of a reliable way to
+determine when those transitions are made.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="idm140639116741152"></a>10.2.3. Interpreting "Aggregated access counts by offset" data</h3></div></div></div>
+<p>For allocation points that always allocate blocks of the same
+size, and which are 4096 bytes or smaller, DHAT counts accesses
+per offset, for example:</p>
+<pre class="screen">
+ max-live: 317,408 in 5,668 blocks
+ tot-alloc: 317,408 in 5,668 blocks (avg size 56.00)
+ deaths: 5,668, at avg age 622,890,597
+ acc-ratios: 1.03 rd, 1.28 wr (327,642 b-read, 408,172 b-written)
+ at 0x4C275B8: malloc (vg_replace_malloc.c:236)
+ by 0x5440C16: QDesignerPropertySheetPrivate::ensureInfo (qhash.h:515)
+ by 0x544350B: QDesignerPropertySheet::setVisible (qdesigner_propertysh...)
+ by 0x5446232: QDesignerPropertySheet::QDesignerPropertySheet (qdesigne...)
+
+ Aggregated access counts by offset:
+
+ [ 0] 28782 28782 28782 28782 28782 28782 28782 28782
+ [ 8] 20638 20638 20638 20638 0 0 0 0
+ [ 16] 22738 22738 22738 22738 22738 22738 22738 22738
+ [ 24] 6013 6013 6013 6013 6013 6013 6013 6013
+ [ 32] 18883 18883 18883 37422 0 0 0 0
+ [ 36] 5668 11915 5668 5668 11336 11336 11336 11336
+ [ 48] 6166 6166 6166 6166 0 0 0 0
+</pre>
+<p>This is fairly typical, for C++ code running on a 64-bit
+platform. Here, we have aggregated access statistics for 5668 blocks,
+all of size 56 bytes. Each byte has been accessed at least 5668
+times, except for offsets 12--15, 36--39 and 52--55. These are likely
+to be alignment holes.</p>
+<p>Careful interpretation of the numbers reveals useful information.
+Groups of N consecutive identical numbers that begin at an N-aligned
+offset, for N being 2, 4 or 8, are likely to indicate an N-byte object
+in the structure at that point. For example, the first 32 bytes of
+this object are likely to have the layout</p>
+<pre class="screen">
+ [0 ] 64-bit type
+ [8 ] 32-bit type
+ [12] 32-bit alignment hole
+ [16] 64-bit type
+ [24] 64-bit type
+</pre>
+<p>As a counterexample, it's also clear that, whatever is at offset 32,
+it is not a 32-bit value. That's because the last number of the group
+(37422) is not the same as the first three (18883 18883 18883).</p>
+<p>This example leads one to enquire (by reading the source code)
+whether the zeroes at 12--15 and 52--55 are alignment holes, and
+whether 48--51 is indeed a 32-bit type. If so, it might be possible
+to place what's at 48--51 at 12--15 instead, which would reduce
+the object size from 56 to 48 bytes.</p>
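+<p>In C terms, and purely as a hypothetical illustration (the field names are
+invented; only the offsets and sizes come from the output above), the inferred
+layout and the suggested reordering might look like this:</p>
+<pre class="screen">
+#include <stdint.h>
+
+/* 56 bytes: matches the inferred offsets, with two 4-byte holes. */
+struct inferred {
+    uint64_t f0;           /* offsets  0..7  : 64-bit type              */
+    uint32_t f8;           /* offsets  8..11 : 32-bit type              */
+                           /* offsets 12..15 : alignment hole           */
+    uint64_t f16;          /* offsets 16..23 : 64-bit type              */
+    uint64_t f24;          /* offsets 24..31 : 64-bit type              */
+    unsigned char mid[16]; /* offsets 32..47 : layout not inferred here */
+    uint32_t f48;          /* offsets 48..51 : 32-bit type              */
+                           /* offsets 52..55 : alignment hole           */
+};
+
+/* 48 bytes: the 32-bit field formerly at 48..51 now fills the hole at
+   12..15, and the trailing hole disappears as well. */
+struct reordered {
+    uint64_t f0;
+    uint32_t f8;
+    uint32_t f48;
+    uint64_t f16;
+    uint64_t f24;
+    unsigned char mid[16];
+};
+</pre>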
+<p>Bear in mind that the above inferences are all only "maybes". That's
+because they are based on dynamic data, not static analysis of the
+object layout. For example, the zeroes might not be alignment
+holes, but rather just parts of the structure which were not used
+at all for this particular run. Experience shows that's unlikely
+to be the case, but it could happen.</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="dh-manual.options"></a>10.3. DHAT Command-line Options</h2></div></div></div>
+<p>DHAT-specific command-line options are:</p>
+<div class="variablelist">
+<a name="dh.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.show-top-n"></a><span class="term">
+ <code class="option">--show-top-n=<number>
+ [default: 10] </code>
+ </span>
+</dt>
+<dd><p>At the end of the run, DHAT sorts the accumulated
+ allocation points according to some metric, and shows the
+ highest scoring entries. <code class="varname">--show-top-n</code>
+ controls how many entries are shown. The default of 10 is
+ quite small. For realistic applications you will probably need
+ to set it much higher, at least several hundred.</p></dd>
+<dt>
+<a name="opt.sort-by"></a><span class="term">
+ <code class="option">--sort-by=<string> [default: max-bytes-live] </code>
+ </span>
+</dt>
+<dd>
+<p>At the end of the run, DHAT sorts the accumulated
+ allocation points according to some metric, and shows the
+ highest scoring entries. <code class="varname">--sort-by</code>
+ selects the metric used for sorting:</p>
+<p><code class="varname">max-bytes-live </code> maximum live bytes [default]</p>
+<p><code class="varname">tot-bytes-allocd </code> bytes allocated in total (turnover)</p>
+<p><code class="varname">max-blocks-live </code> maximum live blocks</p>
+<p><code class="varname">tot-blocks-allocd </code> blocks allocated in total (turnover)</p>
+<p>This controls the order in which allocation points are
+ displayed. You can choose to look at allocation points with
+ the highest number of live bytes, or the highest total byte turnover, or
+ by the highest number of live blocks, or the highest total block
+ turnover. These give usefully different pictures of program behaviour.
+ For example, sorting by maximum live blocks tends to show up allocation
+ points creating large numbers of small objects.</p>
+</dd>
+</dl>
+</div>
+<p>One important point to note is that each allocation stack counts
+as a separate allocation point. Because stacks by default have 12
+frames, this tends to spread data out over multiple allocation points.
+You may want to use the flag --num-callers=4 or some such small
+number, to reduce the spreading.</p>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="ms-manual.html"><< 9. Massif: a heap profiler</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="sg-manual.html">11. SGCheck: an experimental stack and global array overrun detector >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.authors.html b/docs/html/dist.authors.html
new file mode 100644
index 0000000..b69494e
--- /dev/null
+++ b/docs/html/dist.authors.html
@@ -0,0 +1,130 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>1. AUTHORS</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="next" href="dist.news.html" title="2. NEWS">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.news.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.authors"></a>1. AUTHORS</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Julian Seward was the original founder, designer and author of<br>
+Valgrind, created the dynamic translation frameworks, wrote Memcheck,<br>
+the 3.X versions of Helgrind, SGCheck, DHAT, and did lots of other<br>
+things.<br>
+<br>
+Nicholas Nethercote did the core/tool generalisation, wrote<br>
+Cachegrind and Massif, and tons of other stuff.<br>
+<br>
+Tom Hughes did a vast number of bug fixes, helped out with support for<br>
+more recent Linux/glibc versions, set up the present build system, and has<br>
+helped out with test and build machines.<br>
+<br>
+Jeremy Fitzhardinge wrote Helgrind (in the 2.X line) and totally<br>
+overhauled low-level syscall/signal and address space layout stuff,<br>
+among many other things.<br>
+<br>
+Josef Weidendorfer wrote and maintains Callgrind and the associated<br>
+KCachegrind GUI.<br>
+<br>
+Paul Mackerras did a lot of the initial per-architecture factoring<br>
+that forms the basis of the 3.0 line and was also seen in 2.4.0.<br>
+He also did UCode-based dynamic translation support for PowerPC, and<br>
+created a set of ppc-linux derivatives of the 2.X release line.<br>
+<br>
+Greg Parker wrote the Mac OS X port.<br>
+<br>
+Dirk Mueller contributed the malloc/free mismatch checking<br>
+and other bits and pieces, and acts as our KDE liaison.<br>
+<br>
+Robert Walsh added file descriptor leakage checking, new library<br>
+interception machinery, support for client allocation pools, and minor<br>
+other tweakage.<br>
+<br>
+Bart Van Assche wrote and maintains DRD.<br>
+<br>
+Cerion Armour-Brown worked on PowerPC instruction set support in the<br>
+Vex dynamic-translation framework. Maynard Johnson improved the<br>
+Power6 support.<br>
+<br>
+Kirill Batuzov and Dmitry Zhurikhin did the NEON instruction set<br>
+support for ARM. Donna Robinson did the v6 media instruction support.<br>
+<br>
+Donna Robinson created and maintains the very excellent<br>
+http://www.valgrind.org.<br>
+<br>
+Vince Weaver wrote and maintains BBV.<br>
+<br>
+Frederic Gobry helped with autoconf and automake.<br>
+<br>
+Daniel Berlin modified readelf's dwarf2 source line reader, written by Nick<br>
+Clifton, for use in Valgrind.<br>
+<br>
+Michael Matz and Simon Hausmann modified the GNU binutils demangler(s) for<br>
+use in Valgrind.<br>
+<br>
+David Woodhouse has helped out with test and build machines over the course<br>
+of many releases.<br>
+<br>
+Florian Krohm and Christian Borntraeger wrote and maintain the<br>
+S390X/Linux port. Florian improved and ruggedised the regression test<br>
+system during 2011.<br>
+<br>
+Philippe Waroquiers wrote and maintains the embedded GDB server. He<br>
+also made a bunch of performance and memory-reduction fixes across<br>
+diverse parts of the system.<br>
+<br>
+Carl Love and Maynard Johnson contributed IBM Power6 and Power7<br>
+support, and generally deal with ppc{32,64}-linux issues.<br>
+<br>
+Petar Jovanovic and Dejan Jevtic wrote and maintain the mips32-linux<br>
+port.<br>
+<br>
+Dragos Tatulea modified the arm-android port so it also works on<br>
+x86-android.<br>
+<br>
+Jakub Jelinek helped out extensively with the AVX and AVX2 support.<br>
+<br>
+Mark Wielaard fixed a bunch of bugs and acts as our Fedora/RHEL<br>
+liaison.<br>
+<br>
+Maran Pakkirisamy implemented support for decimal floating point on<br>
+s390.<br>
+<br>
+Many, many people sent bug reports, patches, and helpful feedback.<br>
+<br>
+Development of Valgrind was supported in part by the Tri-Lab Partners<br>
+(Lawrence Livermore National Laboratory, Los Alamos National<br>
+Laboratory, and Sandia National Laboratories) of the U.S. Department<br>
+of Energy's Advanced Simulation & Computing (ASC) Program.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.html"><< Valgrind Distribution Documents</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.news.html">2. NEWS >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.html b/docs/html/dist.html
new file mode 100644
index 0000000..b50087b
--- /dev/null
+++ b/docs/html/dist.html
@@ -0,0 +1,64 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>Valgrind Distribution Documents</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="index.html" title="Valgrind Documentation">
+<link rel="prev" href="cl-format.html" title="3. Callgrind Format Specification">
+<link rel="next" href="dist.authors.html" title="1. AUTHORS">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="cl-format.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="index.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.authors.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="book">
+<div class="titlepage">
+<div>
+<div><h1 class="title">
+<a name="dist"></a>Valgrind Distribution Documents</h1></div>
+<div><p class="releaseinfo">Release 3.12.0 20 October 2016</p></div>
+<div><p class="copyright">Copyright © 2000-2016 <a class="ulink" href="http://www.valgrind.org/info/developers.html" target="_top">Valgrind Developers</a></p></div>
+<div><div class="legalnotice">
+<a name="idm140639116581280"></a><p>Email: <a class="ulink" href="mailto:valgrind@valgrind.org" target="_top">valgrind@valgrind.org</a></p>
+</div></div>
+</div>
+<hr>
+</div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="chapter"><a href="dist.authors.html">1. AUTHORS</a></span></dt>
+<dt><span class="chapter"><a href="dist.news.html">2. NEWS</a></span></dt>
+<dt><span class="chapter"><a href="dist.news.old.html">3. OLDER NEWS</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme.html">4. README</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-missing.html">5. README_MISSING_SYSCALL_OR_IOCTL</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-developers.html">6. README_DEVELOPERS</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-packagers.html">7. README_PACKAGERS</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-s390.html">8. README.S390</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-android.html">9. README.android</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-android_emulator.html">10. README.android_emulator</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-mips.html">11. README.mips</a></span></dt>
+<dt><span class="chapter"><a href="dist.readme-solaris.html">12. README.solaris</a></span></dt>
+</dl>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="cl-format.html"><< 3. Callgrind Format Specification</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="index.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.authors.html">1. AUTHORS >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.news.html b/docs/html/dist.news.html
new file mode 100644
index 0000000..cb91bf8
--- /dev/null
+++ b/docs/html/dist.news.html
@@ -0,0 +1,3333 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>2. NEWS</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.authors.html" title="1. AUTHORS">
+<link rel="next" href="dist.news.old.html" title="3. OLDER NEWS">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.authors.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.news.old.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.news"></a>2. NEWS</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Release 3.12.0 (20 October 2016)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+3.12.0 is a feature release with many improvements and the usual<br>
+collection of bug fixes.<br>
+<br>
+This release supports X86/Linux, AMD64/Linux, ARM32/Linux,<br>
+ARM64/Linux, PPC32/Linux, PPC64BE/Linux, PPC64LE/Linux, S390X/Linux,<br>
+MIPS32/Linux, MIPS64/Linux, ARM/Android, ARM64/Android,<br>
+MIPS32/Android, X86/Android, X86/Solaris, AMD64/Solaris, X86/MacOSX<br>
+10.10 and AMD64/MacOSX 10.10. There is also preliminary support for<br>
+X86/MacOSX 10.11/12, AMD64/MacOSX 10.11/12 and TILEGX/Linux.<br>
+<br>
+* ================== PLATFORM CHANGES =================<br>
+<br>
+* POWER: Support for ISA 3.0 has been added<br>
+<br>
+* mips: support for O32 FPXX ABI has been added.<br>
+* mips: improved recognition of different processors<br>
+* mips: determination of page size now done at run time<br>
+<br>
+* amd64: Partial support for AMD FMA4 instructions.<br>
+<br>
+* arm, arm64: Support for v8 crypto and CRC instructions.<br>
+<br>
+* Improvements and robustification of the Solaris port.<br>
+<br>
+* Preliminary support for MacOS 10.12 (Sierra) has been added.<br>
+<br>
+Whilst 3.12.0 continues to support the 32-bit x86 instruction set, we<br>
+would prefer users to migrate to 64-bit x86 (a.k.a amd64 or x86_64)<br>
+where possible. Valgrind's support for 32-bit x86 has stagnated in<br>
+recent years and has fallen far behind that for 64-bit x86<br>
+instructions. By contrast 64-bit x86 is well supported, up to and<br>
+including AVX2.<br>
+<br>
+* ==================== TOOL CHANGES ====================<br>
+<br>
+* Memcheck:<br>
+<br>
+ - Added meta mempool support for describing a custom allocator which:<br>
+ - Auto-frees all chunks assuming that destroying a pool destroys all<br>
+ objects in the pool<br>
+ - Uses itself to allocate other memory blocks<br>
+<br>
+ - New flag --ignore-range-below-sp to ignore memory accesses below<br>
+ the stack pointer, if you really have to. The related flag<br>
+ --workaround-gcc296-bugs=yes is now deprecated. Use<br>
+ --ignore-range-below-sp=1024-1 as a replacement.<br>
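+<br>
+  For example, an illustrative invocation (the program name is hypothetical):<br>
+<br>
+    valgrind --tool=memcheck --ignore-range-below-sp=1024-1 ./myprog<br>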
+<br>
+* DRD:<br>
+<br>
+ - Improved thread startup time significantly on non-Linux platforms.<br>
+<br>
+* DHAT<br>
+<br>
+ - Added collection of the metric "tot-blocks-allocd"<br>
+<br>
+* ==================== OTHER CHANGES ====================<br>
+<br>
+* Replacement/wrapping of malloc/new related functions is now done not just<br>
+ for system libraries by default, but for any globally defined malloc/new<br>
+ related function (both in shared libraries and statically linked alternative<br>
+ malloc implementations). The dynamic (runtime) linker is excluded, though.<br>
+ To only intercept malloc/new related functions in<br>
+ system libraries use --soname-synonyms=somalloc=nouserintercepts (where<br>
+ "nouserintercepts" can be any non-existing library name).<br>
+ This new functionality is not implemented for MacOS X.<br>
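+<br>
+  For example, to keep the old behaviour of intercepting only system<br>
+  libraries (the program name is hypothetical):<br>
+<br>
+    valgrind --soname-synonyms=somalloc=nouserintercepts ./myprog<br>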
+<br>
+* The maximum number of callers in a suppression entry is now equal to<br>
+ the maximum size for --num-callers (500).<br>
+ Note that --gen-suppressions=yes|all similarly generates suppressions<br>
+ containing up to --num-callers frames.<br>
+<br>
+* New and modified GDB server monitor features:<br>
+<br>
+ - Valgrind's gdbserver now accepts the command 'catch syscall'.<br>
+ Note that you must have GDB >= 7.11 to use 'catch syscall' with<br>
+ gdbserver.<br>
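+<br>
+  A sketch of such a session (the program name is hypothetical; vgdb is<br>
+  Valgrind's usual GDB relay):<br>
+<br>
+    valgrind --vgdb=yes --vgdb-error=0 ./myprog<br>
+    (gdb) target remote | vgdb<br>
+    (gdb) catch syscall<br>
+    (gdb) continue<br>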
+<br>
+* New option --run-cxx-freeres=<yes|no> can be used to change whether<br>
+ __gnu_cxx::__freeres() cleanup function is called or not. Default is<br>
+ 'yes'.<br>
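+<br>
+  For example, to skip that cleanup call (the program name is hypothetical):<br>
+<br>
+    valgrind --run-cxx-freeres=no ./myprog<br>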
+<br>
+* Valgrind is able to read compressed debuginfo sections in two formats:<br>
+ - zlib ELF gABI format with SHF_COMPRESSED flag (gcc option -gz=zlib)<br>
+ - zlib GNU format with .zdebug sections (gcc option -gz=zlib-gnu)<br>
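+<br>
+  For example, an illustrative build whose debuginfo can now be read:<br>
+<br>
+    gcc -g -gz=zlib -o myprog myprog.c<br>
+    valgrind ./myprog<br>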
+<br>
+* Modest JIT-cost improvements: the cost of instrumenting code blocks<br>
+ for the most common use case (x86_64-linux, Memcheck) has been<br>
+ reduced by 10%-15%.<br>
+<br>
+* Improved performance for programs that do a lot of discarding of<br>
+ instruction address ranges of 8KB or less.<br>
+<br>
+* The C++ symbol demangler has been updated.<br>
+<br>
+* More robustness against invalid syscall parameters on Linux.<br>
+<br>
+* ==================== FIXED BUGS ====================<br>
+<br>
+The following bugs have been fixed or resolved. Note that "n-i-bz"<br>
+stands for "not in bugzilla" -- that is, a bug that was reported to us<br>
+but never got a bugzilla entry. We encourage you to file bugs in<br>
+bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather<br>
+than mailing the developers (or mailing lists) directly -- bugs that<br>
+are not entered into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+To see details of a given bug, visit<br>
+ https://bugs.kde.org/show_bug.cgi?id=XXXXXX<br>
+where XXXXXX is the bug number as listed below.<br>
+<br>
+191069 Exiting due to signal not reported in XML output<br>
+199468 Suppressions: stack size limited to 25<br>
+ while --num-callers allows more frames<br>
+212352 vex amd64 unhandled opc_aux = 0x 2, first_opcode == 0xDC (FCOM)<br>
+278744 cvtps2pd with redundant RexW<br>
+303877 valgrind doesn't support compressed debuginfo sections.<br>
+345307 Warning about "still reachable" memory when using libstdc++ from gcc 5<br>
+348345 Assertion fails for negative lineno<br>
+351282 V 3.10.1 MIPS softfloat build broken with GCC 4.9.3 / binutils 2.25.1<br>
+351692 Dumps created by valgrind are not readable by gdb (mips32 specific)<br>
+351804 Crash on generating suppressions for "printf" call on OS X 10.10<br>
+352197 mips: mmap2() not wrapped correctly for page size > 4096<br>
+353083 arm64 doesn't implement various xattr system calls<br>
+353084 arm64 doesn't support sigpending system call<br>
+353137 www: update info for Supported Platforms<br>
+353138 www: update "The Valgrind Developers" page<br>
+353370 don't advertise RDRAND in cpuid for Core-i7-4910-like avx2 machine<br>
+ == 365325<br>
+ == 357873<br>
+353384 amd64->IR: 0x66 0xF 0x3A 0x62 0xD1 0x62 (pcmpXstrX $0x62)<br>
+353398 WARNING: unhandled amd64-solaris syscall: 207<br>
+353660 XML in auxwhat tag not escaping reserved symbols properly<br>
+353680 s390x: Crash with certain glibc versions due to non-implemented TBEGIN<br>
+353727 amd64->IR: 0x66 0xF 0x3A 0x62 0xD1 0x72 (pcmpXstrX $0x72)<br>
+353802 ELF debug info reader confused with multiple .rodata sections<br>
+353891 Assert 'bad_scanned_addr < VG_ROUNDDN(start+len, sizeof(Addr))' failed<br>
+353917 unhandled amd64-solaris syscall fchdir(120)<br>
+353920 unhandled amd64-solaris syscall: 170<br>
+354274 arm: unhandled instruction: 0xEBAD 0x0AC1 (sub.w sl, sp, r1, lsl #3)<br>
+354392 unhandled amd64-solaris syscall: 171<br>
+354797 Vbit test does not include Iops for Power 8 instruction support<br>
+354883 tst->os_state.pthread - magic_delta assertion failure on OSX 10.11<br>
+ == 361351<br>
+ == 362920<br>
+ == 366222<br>
+354933 Fix documentation of --kernel-variant=android-no-hw-tls option<br>
+355188 valgrind should intercept all malloc related global functions<br>
+355454 do not intercept malloc related symbols from the runtime linker<br>
+355455 stderr.exp of test cases wrapmalloc and wrapmallocstatic overconstrained<br>
+356044 Dwarf line info reader misinterprets is_stmt register<br>
+356112 mips: replace addi with addiu<br>
+356393 valgrind (vex) crashes because isZeroU happened<br>
+ == 363497<br>
+ == 364497<br>
+356676 arm64-linux: unhandled syscalls 125, 126 (sched_get_priority_max/min)<br>
+356678 arm64-linux: unhandled syscall 232 (mincore)<br>
+356817 valgrind.h triggers compiler errors on MSVC when defining NVALGRIND<br>
+356823 Unsupported ARM instruction: stlex<br>
+357059 x86/amd64: SSE cvtpi2ps with memory source does transition to MMX state<br>
+357338 Unhandled instruction for SHA instructions libcrypto Boring SSL<br>
+357673 crash if I try to run valgrind with a binary link with libcurl<br>
+357833 Setting RLIMIT_DATA to zero breaks with linux 4.5+<br>
+357871 pthread_spin_destroy not properly wrapped<br>
+357887 Calls to VG_(fclose) do not close the file descriptor<br>
+357932 amd64->IR: accept redundant REX prefixes for {minsd,maxsd} m128, xmm.<br>
+358030 support direct socket calls on x86 32bit (new in linux 4.3)<br>
+358478 drd/tests/std_thread.cpp doesn't build with GCC6<br>
+359133 Assertion 'eltSzB <= ddpa->poolSzB' failed<br>
+359181 Buffer Overflow during Demangling<br>
+359201 futex syscall "skips" argument 5 if op is FUTEX_WAIT_BITSET<br>
+359289 s390x: popcnt (B9E1) not implemented<br>
+359472 The Power PC vsubuqm instruction doesn't always give the correct result<br>
+359503 Add missing syscalls for aarch64 (arm64)<br>
+359645 "You need libc6-dbg" help message could be more helpful<br>
+359703 s390: wire up separate socketcalls system calls<br>
+359724 getsockname might crash - deref_UInt should call safe_to_deref<br>
+359733 amd64 implement ld.so strchr/index override like x86<br>
+359767 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 1/5<br>
+359829 Power PC test suite none/tests/ppc64/test_isa_2_07.c uses<br>
+ uninitialized data<br>
+359838 arm64: Unhandled instruction 0xD5033F5F (clrex)<br>
+359871 Incorrect mask handling in ppoll<br>
+359952 Unrecognised PCMPESTRM variants (0x70, 0x19)<br>
+360008 Contents of Power vr registers contents is not printed correctly when<br>
+ the --vgdb-shadow-registers=yes option is used<br>
+360035 POWER PC instruction bcdadd and bcdsubtract generate result with<br>
+ non-zero shadow bits<br>
+360378 arm64: Unhandled instruction 0x5E280844 (sha1h s4, s2)<br>
+360425 arm64 unsupported instruction ldpsw<br>
+ == 364435<br>
+360519 none/tests/arm64/memory.vgtest might fail with newer gcc<br>
+360571 Error about the Android Runtime reading below the stack pointer on ARM<br>
+360574 Wrong parameter type for an ashmem ioctl() call on Android and ARM64<br>
+360749 kludge for multiple .rodata sections on Solaris no longer needed<br>
+360752 raise the number of reserved fds in m_main.c from 10 to 12<br>
+361207 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 2/5<br>
+361226 s390x: risbgn (EC59) not implemented<br>
+361253 [s390x] ex_clone.c:42: undefined reference to `pthread_create'<br>
+361354 ppc64[le]: wire up separate socketcalls system calls<br>
+361615 Inconsistent termination for multithreaded process terminated by signal<br>
+361926 Unhandled Solaris syscall: sysfs(84)<br>
+362009 V dumps core on unimplemented functionality before threads are created<br>
+362329 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 3/5<br>
+362894 missing (broken) support for wbit field on mtfsfi instruction (ppc64)<br>
+362935 [AsusWRT] Assertion 'sizeof(TTEntryC) <= 88' failed<br>
+362953 Request for an update to the Valgrind Developers page<br>
+363680 add renameat2() support<br>
+363705 arm64 missing syscall name_to_handle_at and open_by_handle_at<br>
+363714 ppc64 missing syscalls sync, waitid and name_to/open_by_handle_at<br>
+363858 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 4/5<br>
+364058 clarify in manual limitations of array overruns detections<br>
+364413 pselect sycallwrapper mishandles NULL sigmask<br>
+364728 Power PC, missing support for several HW registers in<br>
+ get_otrack_shadow_offset_wrk()<br>
+364948 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 5/5<br>
+365273 Invalid write to stack location reported after signal handler runs<br>
+365912 ppc64BE segfault during jm-insns test (RELRO)<br>
+366079 FPXX Support for MIPS32 Valgrind<br>
+366138 Fix configure errors out when using Xcode 8 (clang 8.0.0)<br>
+366344 Multiple unhandled instruction for Aarch64<br>
+ (0x0EE0E020, 0x1AC15800, 0x4E284801, 0x5E040023, 0x5E056060)<br>
+367995 Integration of memcheck with custom memory allocator<br>
+368120 x86_linux asm _start functions do not keep 16-byte aligned stack pointer<br>
+368412 False positive result for altivec capability check<br>
+368416 Add tc06_two_races_xml.exp output for ppc64<br>
+368419 Perf Events ioctls not implemented<br>
+368461 mmapunmap test fails on ppc64<br>
+368823 run_a_thread_NORETURN assembly code typo for VGP_arm64_linux target<br>
+369000 AMD64 fma4 instructions unsupported.<br>
+369169 ppc64 fails jm_int_isa_2_07 test<br>
+369175 jm_vec_isa_2_07 test crashes on ppc64<br>
+369209 valgrind loops and eats up all memory if cwd doesn't exist.<br>
+369356 pre_mem_read_sockaddr syscall wrapper can crash with bad sockaddr<br>
+369359 msghdr_foreachfield can crash when handling bad iovec<br>
+369360 Bad sigprocmask old or new sets can crash valgrind<br>
+369361 vmsplice syscall wrapper crashes on bad iovec<br>
+369362 Bad sigaction arguments crash valgrind<br>
+369383 x86 sys_modify_ldt wrapper crashes on bad ptr<br>
+369402 Bad set/get_thread_area pointer crashes valgrind<br>
+369441 bad lvec argument crashes process_vm_readv/writev syscall wrappers<br>
+369446 valgrind crashes on unknown fcntl command<br>
+369439 S390x: Unhandled insns RISBLG/RISBHG and LDE/LDER <br>
+369468 Remove quadratic metapool algorithm using VG_(HT_remove_at_Iter)<br>
+370265 ISA 3.0 HW cap stuff needs updating<br>
+371128 BCD add and subtract instructions on Power BE in 32-bit mode do not work<br>
+n-i-bz Fix incorrect (or infinite loop) unwind on RHEL7 x86 and amd64<br>
+n-i-bz massif --pages-as-heap=yes does not report peak caused by mmap+munmap<br>
+n-i-bz false positive leaks due to aspacemgr merging heap & non heap segments<br>
+n-i-bz Fix ppoll_alarm exclusion on OS X<br>
+n-i-bz Document brk segment limitation, reference manual in limit reached msg.<br>
+n-i-bz Fix clobber list in none/tests/amd64/xacq_xrel.c [valgrind r15737]<br>
+n-i-bz Bump allowed shift value for "add.w reg, sp, reg, lsl #N" [vex r3206]<br>
+n-i-bz amd64: memcheck false positive with shr %edx<br>
+n-i-bz arm3: Allow early writeback of SP base register in "strd rD, [sp, #-16]"<br>
+n-i-bz ppc: Fix two cases of PPCAvFpOp vs PPCFpOp enum confusion<br>
+n-i-bz arm: Fix incorrect register-number constraint check for LDAEX{,B,H,D}<br>
+n-i-bz DHAT: added collection of the metric "tot-blocks-allocd" <br>
+<br>
+(3.12.0.RC1: 20 October 2016, vex r3282, valgrind r16094)<br>
+(3.12.0.RC2: 20 October 2016, vex r3282, valgrind r16096)<br>
+(3.12.0: 21 October 2016, vex r3282, valgrind r16098)<br>
+<br>
+<br>
+<br>
+Release 3.11.0 (22 September 2015)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+3.11.0 is a feature release with many improvements and the usual<br>
+collection of bug fixes.<br>
+<br>
+This release supports X86/Linux, AMD64/Linux, ARM32/Linux,<br>
+ARM64/Linux, PPC32/Linux, PPC64BE/Linux, PPC64LE/Linux, S390X/Linux,<br>
+MIPS32/Linux, MIPS64/Linux, ARM/Android, ARM64/Android,<br>
+MIPS32/Android, X86/Android, X86/Solaris, AMD64/Solaris, X86/MacOSX<br>
+10.10 and AMD64/MacOSX 10.10. There is also preliminary support for<br>
+X86/MacOSX 10.11, AMD64/MacOSX 10.11 and TILEGX/Linux.<br>
+<br>
+* ================== PLATFORM CHANGES =================<br>
+<br>
+* Support for Solaris/x86 and Solaris/amd64 has been added.<br>
+<br>
+* Preliminary support for Mac OS X 10.11 (El Capitan) has been added.<br>
+<br>
+* Preliminary support for the Tilera TileGX architecture has been added.<br>
+<br>
+* s390x: It is now required for the host to have the "long displacement"<br>
+ facility. The oldest supported machine model is z990.<br>
+<br>
+* x86: on an SSE2 only host, Valgrind in 32 bit mode now claims to be a<br>
+ Pentium 4. 3.10.1 wrongly claimed to be a Core 2, which is SSSE3.<br>
+<br>
+* The JIT's register allocator is significantly faster, making the JIT<br>
+ as a whole somewhat faster, so JIT-intensive activities, for example<br>
+ program startup, are modestly faster, around 5%.<br>
+<br>
+* There have been changes to the default settings of several command<br>
+ line flags, as detailed below.<br>
+<br>
+* Intel AVX2 support is more complete (64 bit targets only). On AVX2<br>
+ capable hosts, the simulated CPUID will now indicate AVX2 support.<br>
+<br>
+* ==================== TOOL CHANGES ====================<br>
+<br>
+* Memcheck:<br>
+<br>
+ - The default value for --leak-check-heuristics has been changed from<br>
+ "none" to "all". This helps to reduce the number of possibly<br>
+ lost blocks, in particular for C++ applications.<br>
+<br>
+ - The default value for --keep-stacktraces has been changed from<br>
+ "malloc-then-free" to "malloc-and-free". This has a small cost in<br>
+ memory (one word per malloc-ed block) but allows Memcheck to show the<br>
+ 3 stacktraces of a dangling reference: where the block was allocated,<br>
+ where it was freed, and where it is accessed after being freed.<br>
+<br>
+ - The default value for --partial-loads-ok has been changed from "no" to <br>
+ "yes", so as to avoid false positive errors resulting from some kinds<br>
+ of vectorised loops.<br>
+<br>
+ - A new monitor command 'xb <addr> <len>' shows the validity bits of<br>
+ <len> bytes at <addr>. The monitor command 'xb' is easier to use<br>
+ than get_vbits when you need to associate byte data values with<br>
+ their corresponding validity bits.<br>
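+<br>
+   For example, from GDB (the address and length are hypothetical):<br>
+<br>
+     (gdb) monitor xb 0x8049a80 8<br>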
+<br>
+ - The 'block_list' monitor command has been enhanced:<br>
+ o it can print a range of loss records<br>
+ o it now accepts an optional argument 'limited <max_blocks>'<br>
+ to control the number of blocks printed.<br>
+ o if a block has been found using a heuristic, then<br>
+ 'block_list' now shows the heuristic after the block size.<br>
+ o the loss records/blocks to print can be limited to the blocks<br>
+ found via specified heuristics.<br>
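+<br>
+   For example (the loss record number and limit are hypothetical):<br>
+<br>
+     (gdb) monitor block_list 12 limited 5<br>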
+<br>
+ - The C helper functions used to instrument loads on<br>
+ x86-{linux,solaris} and arm-linux (both 32-bit only) have been<br>
+ replaced by handwritten assembly sequences. This gives speedups<br>
+ in the region of 0% to 7% for those targets only.<br>
+<br>
+ - A new command line option, --expensive-definedness-checks=yes|no,<br>
+ has been added. This is useful for avoiding occasional invalid<br>
+ uninitialised-value errors in optimised code. Watch out for<br>
+ runtime degradation, as this can be up to 25%. As always, though,<br>
+ the slowdown is highly application specific. The default setting<br>
+ is "no".<br>
+<br>
+* Massif:<br>
+<br>
+ - A new monitor command 'all_snapshots <filename>' dumps all<br>
+ snapshots taken so far.<br>
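+<br>
+   For example (the output file name is hypothetical):<br>
+<br>
+     (gdb) monitor all_snapshots /tmp/massif.all<br>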
+<br>
+* Helgrind:<br>
+<br>
+ - Significant memory reduction and moderate speedups for<br>
+ --history-level=full for applications accessing a lot of memory<br>
+ with many different stacktraces.<br>
+<br>
+ - The default value for --conflict-cache-size=N has been doubled to<br>
+ 2000000. Users that were not using the default value should<br>
+ preferably also double the value they give.<br>
+<br>
+ The default was changed due to the changes in the "full history"<br>
+ implementation. Doubling the value gives on average a slightly more<br>
+ complete history and uses a similar amount of memory to the previous<br>
+ implementation (or significantly less memory in the worst case).<br>
+ <br>
+ - The Helgrind monitor command 'info locks' now accepts an optional<br>
+ argument 'lock_addr', which shows information about the lock at the<br>
+ given address only.<br>
+<br>
+ - When using --history-level=full, the new Helgrind monitor command<br>
+ 'accesshistory <addr> [<len>]' will show the recorded accesses for<br>
+ <len> (or 1) bytes at <addr>.<br>
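+<br>
+   For example, from GDB (the lock address is hypothetical):<br>
+<br>
+     (gdb) monitor info locks 0x804b0a0<br>
+     (gdb) monitor accesshistory 0x804b0a0 4<br>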
+<br>
+* ==================== OTHER CHANGES ====================<br>
+<br>
+* The default value for the --smc-check option has been changed from<br>
+ "stack" to "all-non-file" on targets that provide automatic D-I<br>
+ cache coherence (x86, amd64 and s390x). The result is to provide,<br>
+ by default, transparent support for JIT generated and self-modifying<br>
+ code on all targets.<br>
+<br>
+* Mac OS X only: the default value for the --dsymutil option has been<br>
+ changed from "no" to "yes", since any serious usage on Mac OS X<br>
+ always required it to be "yes".<br>
+<br>
+* The command line options --db-attach and --db-command have been removed.<br>
+ They were deprecated in 3.10.0.<br>
+<br>
+* When a process dies due to a signal, Valgrind now shows the signal<br>
+ and the stacktrace at default verbosity (i.e. verbosity 1).<br>
+<br>
+* The address description logic used by Memcheck and Helgrind now<br>
+ describes addresses in anonymous segments, file mmap-ed segments,<br>
+ shared memory segments and the brk data segment.<br>
+<br>
+* The new option --error-markers=<begin>,<end> can be used to mark the<br>
+ begin/end of errors in textual output mode, to facilitate<br>
+ searching/extracting errors in output files that mix valgrind errors<br>
+ with program output.<br>
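+<br>
+  For example (the marker strings are arbitrary; the program name is<br>
+  hypothetical):<br>
+<br>
+    valgrind --error-markers=BEGIN_ERROR,END_ERROR ./myprog<br>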
+<br>
+* The new option --max-threads=<number> can be used to change the number<br>
+ of threads valgrind can handle. The default is 500 threads which<br>
+ should be more than enough for most applications.<br>
+<br>
+* The new option --valgrind-stacksize=<number> can be used to change the<br>
+ size of the private thread stacks used by Valgrind. This is useful<br>
+ for reducing memory use or increasing the stack size if Valgrind<br>
+ segfaults due to stack overflow.<br>
+<br>
+* The new option --avg-transtab-entry-size=<number> can be used to specify<br>
+ the expected instrumented block size, either to reduce memory use or<br>
+ to avoid excessive retranslation.<br>
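+<br>
+  An illustrative combination of these tuning options (the values are<br>
+  arbitrary examples, not recommendations):<br>
+<br>
+    valgrind --max-threads=1000 --valgrind-stacksize=2097152 \<br>
+             --avg-transtab-entry-size=800 ./myprog<br>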
+<br>
+* Valgrind can be built with Intel's ICC compiler, version 14.0 or later.<br>
+<br>
+* New and modified GDB server monitor features:<br>
+<br>
+ - When a signal is reported in GDB, you can now use the GDB convenience<br>
+ variable $_siginfo to examine detailed signal information.<br>
+ <br>
+ - Valgrind's gdbserver now allows the user to change the signal<br>
+ to deliver to the process. So, use 'signal SIGNAL' to continue execution<br>
+ with SIGNAL instead of the signal reported to GDB. Use 'signal 0' to<br>
+ continue without passing the signal to the process.<br>
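+<br>
+   For example, in a GDB session attached to the Valgrind gdbserver<br>
+   (the signal name is illustrative):<br>
+<br>
+     (gdb) signal SIGUSR1<br>
+     (gdb) signal 0<br>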
+<br>
+ - With GDB >= 7.10, the command 'target remote'<br>
+ will automatically load the executable file of the process running<br>
+ under Valgrind. This means you do not need to specify the executable<br>
+ file yourself; GDB will discover it itself. See the GDB documentation about<br>
+ 'qXfer:exec-file:read' packet for more info.<br>
+<br>
+* ==================== FIXED BUGS ====================<br>
+<br>
+The following bugs have been fixed or resolved. Note that "n-i-bz"<br>
+stands for "not in bugzilla" -- that is, a bug that was reported to us<br>
+but never got a bugzilla entry. We encourage you to file bugs in<br>
+bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather<br>
+than mailing the developers (or mailing lists) directly -- bugs that<br>
+are not entered into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+To see details of a given bug, visit<br>
+ https://bugs.kde.org/show_bug.cgi?id=XXXXXX<br>
+where XXXXXX is the bug number as listed below.<br>
+<br>
+116002 VG_(printf): Problems with justification of strings and integers<br>
+155125 avoid cutting away file:lineno after long function name<br>
+197259 Unsupported arch_prtctl PR_SET_GS option<br>
+201152 ppc64: Assertion in ppc32g_dirtyhelper_MFSPR_268_269<br>
+201216 Fix Valgrind does not support pthread_sigmask() on OS X<br>
+201435 Fix Darwin: -v does not show kernel version<br>
+208217 "Warning: noted but unhandled ioctl 0x2000747b" on Mac OS X<br>
+211256 Fixed an outdated comment regarding the default platform.<br>
+211529 Incomplete call stacks for code compiled by newer versions of MSVC<br>
+211926 Avoid compilation warnings in valgrind.h with -pedantic<br>
+212291 Fix unhandled syscall: unix:132 (mkfifo) on OS X<br>
+ == 263119<br>
+226609 Crediting upstream authors in man page<br>
+231257 Valgrind omits path when executing script from shebang line<br>
+254164 OS X task_info: UNKNOWN task message [id 3405, to mach_task_self() [..]<br>
+294065 Improve the pdb file reader by avoiding hardwired absolute pathnames<br>
+269360 s390x: Fix addressing mode selection for compare-and-swap<br>
+302630 Memcheck: Assertion failed: 'sizeof(UWord) == sizeof(UInt)'<br>
+ == 326797<br>
+312989 ioctl handling needs to do POST handling on generic ioctls and [..]<br>
+319274 Fix unhandled syscall: unix:410 (sigsuspend_nocancel) on OS X<br>
+324181 mmap does not handle MAP_32BIT (handle it now, rather than fail it)<br>
+327745 Fix valgrind 3.9.0 build fails on Mac OS X 10.6.8<br>
+330147 libmpiwrap PMPI_Get_count returns undefined value<br>
+333051 mmap of huge pages fails due to incorrect alignment<br>
+ == 339163<br>
+334802 valgrind does not always explain why a given option is bad<br>
+335618 mov.w rN, pc/sp (ARM32)<br>
+335785 amd64->IR 0xC4 0xE2 0x75 0x2F (vmaskmovpd)<br>
+ == 307399<br>
+ == 343175<br>
+ == 342740<br>
+ == 346912<br>
+335907 segfault when running wine's ddrawex/tests/surface.c under valgrind<br>
+338602 AVX2 bit in CPUID missing<br>
+338606 Strange message for scripts with invalid interpreter<br>
+338731 ppc: Fix testuite build for toolchains not supporting -maltivec<br>
+338995 shmat with hugepages (SHM_HUGETLB) fails with EINVAL<br>
+339045 Getting valgrind to compile and run on OS X Yosemite (10.10)<br>
+ == 340252<br>
+339156 gdbsrv not called for fatal signal<br>
+339215 Valgrind 3.10.0 contain 2013 in copyrights notice<br>
+339288 support Cavium Octeon MIPS specific BBIT*32 instructions<br>
+339636 Use fxsave64 and fxrstor64 mnemonics instead of old-school rex64 prefix<br>
+339442 Fix testsuite build failure on OS X 10.9<br>
+339542 Enable compilation with Intel's ICC compiler<br>
+339563 The DVB demux DMX_STOP ioctl doesn't have a wrapper<br>
+339688 Mac-specific ASM does not support .version directive (cpuid,<br>
+ tronical and pushfpopf tests)<br>
+339745 Valgrind crash when check Marmalade app (partial fix)<br>
+339755 Fix known deliberate memory leak in setenv() on Mac OS X 10.9<br>
+339778 Linux/TileGx platform support for Valgrind<br>
+339780 Fix known uninitialised read in pthread_rwlock_init() on Mac OS X 10.9 <br>
+339789 Fix none/tests/execve test on Mac OS X 10.9<br>
+339808 Fix none/tests/rlimit64_nofile test on Mac OS X 10.9<br>
+339820 vex amd64->IR: 0x66 0xF 0x3A 0x63 0xA 0x42 0x74 0x9 (pcmpistri $0x42)<br>
+340115 Fix none/tests/cmdline[1|2] tests on systems which define TMPDIR<br>
+340392 Allow user to select more accurate definedness checking in memcheck<br>
+ to avoid invalid complaints on optimised code<br>
+340430 Fix some grammatical weirdness in the manual.<br>
+341238 Recognize GCC5/DWARFv5 DW_LANG constants (Go, C11, C++11, C++14)<br>
+341419 Signal handler ucontext_t not filled out correctly on OS X<br>
+341539 VG_(describe_addr) should not describe address as belonging to client<br>
+ segment if it is past the heap end<br>
+341613 Enable building of manythreads and thread-exits tests on Mac OS X<br>
+341615 Fix none/tests/darwin/access_extended test on Mac OS X<br>
+341698 Valgrind's AESKEYGENASSIST gives wrong result in words 0 and 2 [..]<br>
+341789 aarch64: shmat fails with valgrind on ARMv8<br>
+341997 MIPS64: Cavium OCTEON insns - immediate operand handled incorrectly<br>
+342008 valgrind.h needs type cast [..] for clang/llvm in 64-bit mode<br>
+342038 Unhandled syscalls on aarch64 (mbind/get/set_mempolicy)<br>
+342063 wrong format specifier for test mcblocklistsearch in gdbserver_tests<br>
+342117 Hang when loading PDB file for MSVC compiled Firefox under Wine<br>
+342221 socket connect false positive uninit memory for unknown af family<br>
+342353 Allow dumping full massif output while valgrind is still running<br>
+342571 Valgrind chokes on AVX compare intrinsic with _CMP_GE_QS<br>
+ == 346476<br>
+ == 348387<br>
+ == 350593<br>
+342603 Add I2C_SMBUS ioctl support<br>
+342635 OS X 10.10 (Yosemite) - missing system calls and fcntl code<br>
+342683 Mark memory past the initial brk limit as unaddressable<br>
+342783 arm: unhandled instruction 0xEEFE1ACA = "vcvt.s32.f32 s3, s3, #12"<br>
+342795 Internal glibc __GI_mempcpy call should be intercepted<br>
+342841 s390x: Support instructions fiebr(a) and fidbr(a)<br>
+343012 Unhandled syscall 319 (memfd_create)<br>
+343069 Patch updating v4l2 API support<br>
+343173 helgrind crash during stack unwind<br>
+343219 fix GET_STARTREGS for arm<br>
+343303 Fix known deliberate memory leak in setenv() on Mac OS X 10.10<br>
+343306 OS X 10.10: UNKNOWN mach_msg unhandled MACH_SEND_TRAILER option<br>
+343332 Unhandled instruction 0x9E310021 (fcvtmu) on aarch64<br>
+343335 unhandled instruction 0x1E638400 (fccmp) aarch64<br>
+343523 OS X mach_ports_register: UNKNOWN task message [id 3403, to [..]<br>
+343525 OS X host_get_special_port: UNKNOWN host message [id 412, to [..]<br>
+343597 ppc64le: incorrect use of offseof macro<br>
+343649 OS X host_create_mach_voucher: UNKNOWN host message [id 222, to [..]<br>
+343663 OS X 10.10 Memchecj always reports a leak regardless of [..]<br>
+343732 Unhandled syscall 144 (setgid) on aarch64<br>
+343733 Unhandled syscall 187 (msgctl and related) on aarch64<br>
+343802 s390x: False positive "conditional jump or move depends on [..]<br>
+343902 --vgdb=yes doesn't break when --xml=yes is used<br>
+343967 Don't warn about setuid/setgid/setcap executable for directories<br>
+343978 Recognize DWARF5/GCC5 DW_LANG_Fortran 2003 and 2008 constants<br>
+344007 accept4 syscall unhandled on arm64 (242) and ppc64 (344)<br>
+344033 Helgrind on ARM32 loses track of mutex state in pthread_cond_wait<br>
+344054 www - update info for Solaris/illumos<br>
+344416 'make regtest' does not work cleanly on OS X<br>
+344235 Remove duplicate include of pub_core_aspacemgr.h<br>
+344279 syscall sendmmsg on arm64 (269) and ppc32/64 (349) unhandled<br>
+344295 syscall recvmmsg on arm64 (243) and ppc32/64 (343) unhandled<br>
+344307 2 unhandled syscalls on aarch64/arm64: umount2(39), mount (40)<br>
+344314 callgrind_annotate ... warnings about commands containing newlines<br>
+344318 socketcall should wrap recvmmsg and sendmmsg<br>
+344337 Fix unhandled syscall: mach:41 (_kernelrpc_mach_port_guard_trap)<br>
+344416 Fix 'make regtest' does not work cleanly on OS X<br>
+344499 Fix compilation for Linux kernel >= 4.0.0<br>
+344512 OS X: unhandled syscall: unix:348 (__pthread_chdir), <br>
+ unix:349 (__pthread_fchdir)<br>
+344559 Garbage collection of unused segment names in address space manager<br>
+344560 Fix stack traces missing penultimate frame on OS X<br>
+344621 Fix memcheck/tests/err_disable4 test on OS X<br>
+344686 Fix suppression for pthread_rwlock_init on OS X 10.10<br>
+344702 Fix missing libobjc suppressions on OS X 10.10<br>
+ == 344543<br>
+344936 Fix unhandled syscall: unix:473 (readlinkat) on OS X 10.10<br>
+344939 Fix memcheck/tests/xml1 on OS X 10.10<br>
+345016 helgrind/tests/locked_vs_unlocked2 is failing sometimes<br>
+345079 Fix build problems in VEX/useful/test_main.c<br>
+345126 Incorrect handling of VIDIOC_G_AUDIO and G_AUDOUT<br>
+345177 arm64: prfm (reg) not implemented<br>
+345215 Performance improvements for the register allocator<br>
+345248 add support for Solaris OS in valgrind<br>
+345338 TIOCGSERIAL and TIOCSSERIAL ioctl support on Linux<br>
+345394 Fix memcheck/tests/strchr on OS X<br>
+345637 Fix memcheck/tests/sendmsg on OS X<br>
+345695 Add POWERPC support for AT_DCACHESIZE and HWCAP2<br>
+345824 Fix aspacem segment mismatch: seen with none/tests/bigcode<br>
+345887 Fix an assertion in the address space manager<br>
+345928 amd64: callstack only contains current function for small stacks<br>
+345984 disInstr(arm): unhandled instruction: 0xEE193F1E<br>
+345987 MIPS64: Implement cavium LHX instruction<br>
+346031 MIPS: Implement support for the CvmCount register (rhwr %0, 31)<br>
+346185 Fix typo saving altivec register v24<br>
+346267 Compiler warnings for PPC64 code on call to LibVEX_GuestPPC64_get_XER()<br>
+ and LibVEX_GuestPPC64_get_CR()<br>
+346270 Regression tests none/tests/jm_vec/isa_2_07 and<br>
+ none/tests/test_isa_2_07_part2 have failures on PPC64 little endian<br>
+346307 fuse filesystem syscall deadlocks<br>
+346324 PPC64 missing support for lbarx, lharx, stbcx and sthcx instructions<br>
+346411 MIPS: SysRes::_valEx handling is incorrect<br>
+346416 Add support for LL_IOC_PATH2FID and LL_IOC_GETPARENT Lustre ioctls<br>
+346474 PPC64 Power 8, spr TEXASRU register not supported<br>
+346487 Compiler generates "note" about a future ABI change for PPC64<br>
+346562 MIPS64: lwl/lwr instructions are performing 64bit loads<br>
+ and causing spurious "invalid read of size 8" warnings<br>
+346801 Fix link error on OS X: _vgModuleLocal_sf_maybe_extend_stack<br>
+347151 Fix suppression for pthread_rwlock_init on OS X 10.8<br>
+347233 Fix memcheck/tests/strchr on OS X 10.10 (Haswell) <br>
+347322 Power PC regression test cleanup<br>
+347379 valgrind --leak-check=full leak errors from system libs on OS X 10.8<br>
+ == 217236<br>
+347389 unhandled syscall: 373 (Linux ARM syncfs)<br>
+347686 Patch set to cleanup PPC64 regtests<br>
+347978 Remove bash dependencies where not needed<br>
+347982 OS X: undefined symbols for architecture x86_64: "_global" [..]<br>
+347988 Memcheck: the 'impossible' happened: unexpected size for Addr (OSX/wine)<br>
+ == 345929<br>
+348102 Patch updating v4l2 API support<br>
+348247 amd64 front end: jno jumps wrongly when overflow is not set<br>
+348269 Improve mmap MAP_HUGETLB support.<br>
+348334 (ppc) valgrind does not simulate dcbfl - then my program terminates<br>
+348345 Assertion fails for negative lineno<br>
+348377 Unsupported ARM instruction: yield<br>
+348565 Fix detection of command line option availability for clang<br>
+348574 vex amd64->IR pcmpistri SSE4.2 unsupported (pcmpistri $0x18)<br>
+348728 Fix broken check for VIDIOC_G_ENC_INDEX<br>
+348748 Fix redundant condition<br>
+348890 Fix clang warning about unsupported --param inline-unit-growth=900<br>
+348949 Bogus "ERROR: --ignore-ranges: suspiciously large range"<br>
+349034 Add Lustre ioctls LL_IOC_GROUP_LOCK and LL_IOC_GROUP_UNLOCK<br>
+349086 Fix UNKNOWN task message [id 3406, to mach_task_self(), [..]<br>
+349087 Fix UNKNOWN task message [id 3410, to mach_task_self(), [..]<br>
+349626 Implemented additional Xen hypercalls<br>
+349769 Clang/osx: ld: warning: -read_only_relocs cannot be used with x86_64<br>
+349790 Clean up of the hardware capability checking utilities.<br>
+349828 memcpy intercepts memmove causing src/dst overlap error (ppc64 ld.so)<br>
+349874 Fix typos in source code<br>
+349879 memcheck: add handwritten assembly for helperc_LOADV*<br>
+349941 di_notify_mmap might create wrong start/size DebugInfoMapping<br>
+350062 vex x86->IR: 0x66 0xF 0x3A 0xB (ROUNDSD) on OS X<br>
+350202 Add limited param to 'monitor block_list'<br>
+350290 s390x: Support instructions fixbr(a)<br>
+350359 memcheck/tests/x86/fxsave hangs indefinetely on OS X<br>
+350809 Fix none/tests/async-sigs for Solaris<br>
+350811 Remove reference to --db-attach which has been removed.<br>
+350813 Memcheck/x86: enable handwritten assembly helpers for x86/Solaris too<br>
+350854 hard-to-understand code in VG_(load_ELF)()<br>
+351140 arm64 syscalls setuid (146) and setresgid (149) not implemented<br>
+351386 Solaris: Cannot run ld.so.1 under Valgrind<br>
+351474 Fix VG_(iseqsigset) as obvious<br>
+351531 Typo in /include/vki/vki-xen-physdev.h header guard<br>
+351756 Intercept platform_memchr$VARIANT$Haswell on OS X<br>
+351858 ldsoexec support on Solaris<br>
+351873 Newer gcc doesn't allow __builtin_tabortdc[i] in ppc32 mode<br>
+352130 helgrind reports false races for printfs using mempcpy on FILE* state<br>
+352284 s390: Conditional jump depends on uninitialised value(s) in vfprintf <br>
+352320 arm64 crash on none/tests/nestedfs<br>
+352765 Vbit test fails on Power 6<br>
+352768 The mbar instruction is missing from the Power PC support<br>
+352769 Power PC program priority register (PPR) is not supported<br>
+n-i-bz Provide implementations of certain compiler builtins to support<br>
+ compilers that may not provide those<br>
+n-i-bz Old STABS code is still being compiled, but never used. Remove it.<br>
+n-i-bz Fix compilation on distros with glibc < 2.5<br>
+n-i-bz (vex 3098) Avoid generation of Neon insns on non-Neon hosts<br>
+n-i-bz Enable rt_sigpending syscall on ppc64 linux.<br>
+n-i-bz mremap did not work properly on shared memory<br>
+n-i-bz Fix incorrect sizeof expression in syswrap-xen.c reported by Coverity<br>
+n-i-bz In VALGRIND_PRINTF write out thread name, if any, to xml<br>
+<br>
+(3.11.0.TEST1: 8 September 2015, vex r3187, valgrind r15646)<br>
+(3.11.0.TEST2: 21 September 2015, vex r3193, valgrind r15667)<br>
+(3.11.0: 22 September 2015, vex r3195, valgrind r15674)<br>
+<br>
+<br>
+<br>
+Release 3.10.1 (25 November 2014)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.10.1 is a bug fix release. It fixes various bugs reported in 3.10.0<br>
+and backports fixes for all reported missing AArch64 ARMv8 instructions<br>
+and syscalls from the trunk. If you package or deliver 3.10.0 for others<br>
+to use, you might want to consider upgrading to 3.10.1 instead.<br>
+<br>
+The following bugs have been fixed or resolved. Note that "n-i-bz"<br>
+stands for "not in bugzilla" -- that is, a bug that was reported to us<br>
+but never got a bugzilla entry. We encourage you to file bugs in<br>
+bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather<br>
+than mailing the developers (or mailing lists) directly -- bugs that<br>
+are not entered into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+To see details of a given bug, visit<br>
+ https://bugs.kde.org/show_bug.cgi?id=XXXXXX<br>
+where XXXXXX is the bug number as listed below.<br>
+<br>
+335440 arm64: ld1 (single structure) is not implemented<br>
+335713 arm64: unhanded instruction: prfm (immediate)<br>
+339020 ppc64: memcheck/tests/ppc64/power_ISA2_05 failing in nightly build<br>
+339182 ppc64: AvSplat ought to load destination vector register with [..]<br>
+339336 PPC64 store quad instruction (stq) is not supposed to change [..]<br>
+339433 ppc64 lxvw4x instruction uses four 32-byte loads<br>
+339645 Use correct tag names in sys_getdents/64 wrappers<br>
+339706 Fix false positive for ioctl(TIOCSIG) on linux<br>
+339721 assertion 'check_sibling == sibling' failed in readdwarf3.c ...<br>
+339853 arm64 times syscall unknown<br>
+339855 arm64 unhandled getsid/setsid syscalls<br>
+339858 arm64 dmb sy not implemented<br>
+339926 Unhandled instruction 0x1E674001 (frintx) on aarm64<br>
+339927 Unhandled instruction 0x9E7100C6 (fcvtmu) on aarch64<br>
+339938 disInstr(arm64): unhandled instruction 0x4F8010A4 (fmla)<br>
+ == 339950<br>
+339940 arm64: unhandled syscall: 83 (sys_fdatasync) + patch<br>
+340033 arm64: unhandled insn dmb ishld and some other isb-dmb-dsb variants<br>
+340028 unhandled syscalls for arm64 (msync, pread64, setreuid and setregid)<br>
+340036 arm64: Unhandled instruction ld4 (multiple structures, no offset)<br>
+340236 arm64: unhandled syscalls: mknodat, fchdir, chroot, fchownat<br>
+340509 arm64: unhandled instruction fcvtas<br>
+340630 arm64: fchmod (52) and fchown (55) syscalls not recognized<br>
+340632 arm64: unhandled instruction fcvtas<br>
+340722 Resolve "UNKNOWN attrlist flags 0:0x10000000"<br>
+340725 AVX2: Incorrect decoding of vpbroadcast{b,w} reg,reg forms<br>
+340788 warning: unhandled syscall: 318 (getrandom)<br>
+340807 disInstr(arm): unhandled instruction: 0xEE989B20<br>
+340856 disInstr(arm64): unhandled instruction 0x1E634C45 (fcsel)<br>
+340922 arm64: unhandled getgroups/setgroups syscalls<br>
+350251 Fix typo in VEX utility program (test_main.c).<br>
+350407 arm64: unhandled instruction ucvtf (vector, integer)<br>
+350809 none/tests/async-sigs breaks when run under cron on Solaris<br>
+350811 update README.solaris after r15445<br>
+350813 Use handwritten memcheck assembly helpers on x86/Solaris [..]<br>
+350854 strange code in VG_(load_ELF)()<br>
+351140 arm64 syscalls setuid (146) and setresgid (149) not implemented<br>
+n-i-bz DRD and Helgrind: Handle Imbe_CancelReservation (clrex on ARM)<br>
+n-i-bz Add missing ]] to terminate CDATA.<br>
+n-i-bz Glibc versions prior to 2.5 do not define PTRACE_GETSIGINFO<br>
+n-i-bz Enable sys_fadvise64_64 on arm32.<br>
+n-i-bz Add test cases for all remaining AArch64 SIMD, FP and memory insns.<br>
+n-i-bz Add test cases for all known arm64 load/store instructions.<br>
+n-i-bz PRE(sys_openat): when checking whether ARG1 == VKI_AT_FDCWD [..]<br>
+n-i-bz Add detection of old ppc32 magic instructions from bug 278808.<br>
+n-i-bz exp-dhat: Implement missing function "dh_malloc_usable_size".<br>
+n-i-bz arm64: Implement "fcvtpu w, s".<br>
+n-i-bz arm64: implement ADDP and various others<br>
+n-i-bz arm64: Implement {S,U}CVTF (scalar, fixedpt).<br>
+n-i-bz arm64: enable FCVT{A,N}S X,S.<br>
+<br>
+(3.10.1: 25 November 2014, vex r3026, valgrind r14785)<br>
+<br>
+<br>
+<br>
+Release 3.10.0 (10 September 2014)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+3.10.0 is a feature release with many improvements and the usual<br>
+collection of bug fixes.<br>
+<br>
+This release supports X86/Linux, AMD64/Linux, ARM32/Linux, ARM64/Linux,<br>
+PPC32/Linux, PPC64BE/Linux, PPC64LE/Linux, S390X/Linux, MIPS32/Linux,<br>
+MIPS64/Linux, ARM/Android, MIPS32/Android, X86/Android, X86/MacOSX 10.9<br>
+and AMD64/MacOSX 10.9. Support for MacOSX 10.8 and 10.9 is<br>
+significantly improved relative to the 3.9.0 release.<br>
+<br>
+* ================== PLATFORM CHANGES =================<br>
+<br>
+* Support for the 64-bit ARM Architecture (AArch64 ARMv8). This port<br>
+ is mostly complete, and is usable, but some SIMD instructions are as<br>
+ yet unsupported.<br>
+<br>
+* Support for little-endian variant of the 64-bit POWER architecture.<br>
+<br>
+* Support for Android on MIPS32.<br>
+<br>
+* Support for 64bit FPU on MIPS32 platforms.<br>
+<br>
+* Both 32- and 64-bit executables are supported on MacOSX 10.8 and 10.9.<br>
+<br>
+* Configuration for and running on Android targets has changed.<br>
+ See README.android in the source tree for details.<br>
+<br>
+* ================== DEPRECATED FEATURES =================<br>
+<br>
+* --db-attach is now deprecated and will be removed in the next<br>
+ valgrind feature release. The built-in GDB server capabilities are<br>
+ superior and should be used instead. Learn more here:<br>
+ http://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver<br>
+<br>
+* ==================== TOOL CHANGES ====================<br>
+<br>
+* Memcheck:<br>
+<br>
+ - Client code can now selectively disable and re-enable reporting of<br>
+ invalid address errors in specific ranges using the new client<br>
+ requests VALGRIND_DISABLE_ADDR_ERROR_REPORTING_IN_RANGE and<br>
+ VALGRIND_ENABLE_ADDR_ERROR_REPORTING_IN_RANGE.<br>
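+<br>
+   A minimal C sketch, assuming the requests come from the memcheck.h<br>
+   header shipped with Valgrind (the function and range are hypothetical):<br>
+<br>
+     #include <valgrind/memcheck.h><br>
+<br>
+     void touch_quietly(char *p, unsigned long len)<br>
+     {<br>
+         /* Invalid-address errors in [p, p+len) are not reported here. */<br>
+         VALGRIND_DISABLE_ADDR_ERROR_REPORTING_IN_RANGE(p, len);<br>
+         /* ... accesses that would normally be flagged ... */<br>
+         VALGRIND_ENABLE_ADDR_ERROR_REPORTING_IN_RANGE(p, len);<br>
+     }<br>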
+<br>
+ - Leak checker: there is a new leak check heuristic called<br>
+ "length64". This is used to detect interior pointers pointing 8<br>
+ bytes inside a block, on the assumption that the first 8 bytes<br>
+ holds the value "block size - 8". This is used by<br>
+ sqlite3MemMalloc, for example.<br>
+<br>
+ - Checking of system call parameters: if a syscall parameter<br>
+ (e.g. bind struct sockaddr, sendmsg struct msghdr, ...) has<br>
+ several fields not initialised, an error is now reported for each<br>
+ field. Previously, an error was reported only for the first<br>
+ uninitialised field.<br>
+<br>
+ - Mismatched alloc/free checking: a new flag<br>
+ --show-mismatched-frees=no|yes [yes] makes it possible to turn off<br>
+ such checks if necessary.<br>
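+<br>
+   For example, to silence such reports (the program name is hypothetical):<br>
+<br>
+     valgrind --show-mismatched-frees=no ./myprog<br>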
+<br>
+* Helgrind:<br>
+<br>
+ - Improvements to error messages:<br>
+<br>
+ o Race condition error messages involving heap-allocated blocks also<br>
+ show the thread number that allocated the raced-on block.<br>
+<br>
+ o All locks referenced by an error message are now announced.<br>
+ Previously, some error messages only showed the lock addresses.<br>
+<br>
+ o The message indicating where a lock was first observed now also<br>
+ describes the address/location of the lock.<br>
+<br>
+ - Helgrind now understands the Ada task termination rules and<br>
+ creates a happens-before relationship between a terminated task<br>
+ and its master. This avoids some false positives and avoids a big<br>
+ memory leak when a lot of Ada tasks are created and terminated.<br>
+ The interceptions are only activated with forthcoming releases of<br>
+ gnatpro >= 7.3.0w-20140611 and gcc >= 5.0.<br>
+<br>
+ - A new GDB server monitor command "info locks" giving the list of<br>
+ locks, their location, and their status.<br>
+<br>
+* Callgrind:<br>
+<br>
+ - callgrind_control now supports the --vgdb-prefix argument,<br>
+ which is needed if valgrind was started with this same argument.<br>
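+<br>
+   A sketch, assuming valgrind was started with a matching prefix (the<br>
+   paths and program name are hypothetical):<br>
+<br>
+     valgrind --tool=callgrind --vgdb-prefix=/tmp/my-vgdb ./myprog<br>
+     callgrind_control --vgdb-prefix=/tmp/my-vgdb -d<br>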
+<br>
+* ==================== OTHER CHANGES ====================<br>
+<br>
+* Unwinding through inlined function calls. Stack unwinding can now<br>
+ make use of Dwarf3 inlined-unwind information if it is available.<br>
+ The practical effect is that inlined calls become visible in stack<br>
+ traces. The suppression matching machinery has been adjusted<br>
+ accordingly. This is controlled by the new option<br>
+ --read-inline-info=yes|no. Currently this is enabled by default<br>
+ only on Linux and Android targets and only for the tools Memcheck,<br>
+ Helgrind and DRD.<br>
+<br>
+* Valgrind can now read EXIDX unwind information on 32-bit ARM<br>
+ targets. If an object contains both CFI and EXIDX unwind<br>
+ information, Valgrind will prefer the CFI over the EXIDX. This<br>
+ facilitates unwinding through system libraries on arm-android<br>
+ targets.<br>
+<br>
+* Address description logic has been improved and is now common<br>
+ between Memcheck and Helgrind, resulting in better address<br>
+ descriptions for some kinds of error messages.<br>
+<br>
+* Error messages about dubious arguments (eg, to malloc or calloc) are<br>
+ output like other errors. This means that they can be suppressed<br>
+ and they have a stack trace.<br>
+<br>
+* The C++ demangler has been updated for better C++11 support.<br>
+<br>
+* New and modified GDB server monitor features:<br>
+<br>
+ - Thread local variables/storage (__thread) can now be displayed.<br>
+<br>
+ - The GDB server monitor command "v.info location <address>"<br>
+ displays information about an address. The information produced<br>
+ depends on the tool and on the options given to valgrind.<br>
+ Possibly, the following are described: global variables, local<br>
+ (stack) variables, allocated or freed blocks, ...<br>
+<br>
+ - The option "--vgdb-stop-at=event1,event2,..." allows the user to<br>
+ ask the GDB server to stop at the start of program execution, at<br>
+ the end of the program execution and on Valgrind internal errors.<br>
+<br>
+ - A new monitor command "v.info stats" shows various Valgrind core<br>
+ and tool statistics.<br>
+<br>
+ - A new monitor command "v.set hostvisibility" allows the GDB server<br>
+ to provide access to Valgrind internal host status/memory.<br>
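+<br>
+   For example, some of the monitor commands described above, issued from<br>
+   GDB (the address is hypothetical):<br>
+<br>
+     (gdb) monitor v.info location 0x804a2c0<br>
+     (gdb) monitor v.info stats<br>
+     (gdb) monitor v.set hostvisibility<br>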
+<br>
+* A new option "--aspace-minaddr=<address>" can in some situations<br>
+ allow the use of more memory by decreasing the address above which<br>
+ Valgrind maps memory. It can also be used to solve address<br>
+ conflicts with system libraries by increasing the default value.<br>
+ See user manual for details.<br>
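+<br>
+  For example (the address value is purely illustrative; see the user<br>
+  manual for how to choose one):<br>
+<br>
+    valgrind --aspace-minaddr=0x50000000 ./myprog<br>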
+<br>
+* The amount of memory used by Valgrind to store debug info (unwind<br>
+ info, line number information and symbol data) has been<br>
+ significantly reduced, even though Valgrind now reads more<br>
+ information in order to support unwinding of inlined function calls.<br>
+<br>
+* Dwarf3 handling with --read-var-info=yes has been improved:<br>
+<br>
+ - Ada and C structs containing VLAs no longer cause a "bad DIE" error<br>
+<br>
+ - Code compiled with<br>
+ -ffunction-sections -fdata-sections -Wl,--gc-sections<br>
+ no longer causes assertion failures.<br>
+<br>
+* Improved checking for the --sim-hints= and --kernel-variant=<br>
+ options. Unknown strings are now detected and reported to the user<br>
+ as a usage error.<br>
+<br>
+* The semantics of stack start/end boundaries in the valgrind.h<br>
+ VALGRIND_STACK_REGISTER client request has been clarified and<br>
+ documented. The convention is that start and end are respectively<br>
+ the lowest and highest addressable bytes of the stack.<br>
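+<br>
+  A minimal C sketch of this convention (the stack size and usage are<br>
+  illustrative):<br>
+<br>
+    #include <stdlib.h><br>
+    #include <valgrind/valgrind.h><br>
+<br>
+    #define STACK_SIZE (64 * 1024)<br>
+<br>
+    void register_coroutine_stack(void)<br>
+    {<br>
+        char *base = malloc(STACK_SIZE);<br>
+        /* start = lowest addressable byte, end = highest addressable byte */<br>
+        unsigned int id = VALGRIND_STACK_REGISTER(base, base + STACK_SIZE - 1);<br>
+        /* ... later: VALGRIND_STACK_DEREGISTER(id); free(base); ... */<br>
+        (void)id;<br>
+    }<br>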
+<br>
+* ==================== FIXED BUGS ====================<br>
+<br>
+The following bugs have been fixed or resolved. Note that "n-i-bz"<br>
+stands for "not in bugzilla" -- that is, a bug that was reported to us<br>
+but never got a bugzilla entry. We encourage you to file bugs in<br>
+bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather<br>
+than mailing the developers (or mailing lists) directly -- bugs that<br>
+are not entered into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+To see details of a given bug, visit<br>
+ https://bugs.kde.org/show_bug.cgi?id=XXXXXX<br>
+where XXXXXX is the bug number as listed below.<br>
+<br>
+175819 Support for ipv6 socket reporting with --track-fds<br>
+232510 make distcheck fails<br>
+249435 Analyzing wine programs with callgrind triggers a crash<br>
+278972 support for inlined function calls in stacktraces and suppression<br>
+ == 199144<br>
+291310 FXSAVE instruction marks memory as undefined on amd64<br>
+303536 ioctl for SIOCETHTOOL (ethtool(8)) isn't wrapped<br>
+308729 vex x86->IR: unhandled instruction bytes 0xf 0x5 (syscall) <br>
+315199 vgcore file for threaded app does not show which thread crashed<br>
+315952 tun/tap ioctls are not supported<br>
+323178 Unhandled instruction: PLDW register (ARM) <br>
+323179 Unhandled instruction: PLDW immediate (ARM)<br>
+324050 Helgrind: SEGV because of unaligned stack when using movdqa<br>
+325110 Add test-cases for Power ISA 2.06 insns: divdo/divdo. and divduo/divduo.<br>
+325124 [MIPSEL] Compilation error<br>
+325477 Phase 4 support for IBM Power ISA 2.07<br>
+325538 cavium octeon mips64, valgrind reported "dumping core" [...]<br>
+325628 Phase 5 support for IBM Power ISA 2.07<br>
+325714 Empty vgcore but RLIMIT_CORE is big enough (too big) <br>
+325751 Missing the two privileged Power PC Transactional Memory Instructions<br>
+325816 Phase 6 support for IBM Power ISA 2.07<br>
+325856 Make SGCheck fail gracefully on unsupported platforms<br>
+326026 Iop names for count leading zeros/sign bits incorrectly imply [..]<br>
+326436 DRD: False positive in libstdc++ std::list::push_back<br>
+326444 Cavium MIPS Octeon Specific Load Indexed Instructions<br>
+326462 Refactor vgdb to isolate invoker stuff into separate module<br>
+326469 amd64->IR: 0x66 0xF 0x3A 0x63 0xC1 0xE (pcmpistri 0x0E)<br>
+326623 DRD: false positive conflict report in a field assignment<br>
+326724 Valgrind does not compile on OSX 1.9 Mavericks<br>
+326816 Intercept for __strncpy_sse2_unaligned missing?<br>
+326921 coregrind fails to compile m_trampoline.S with MIPS/Linux port of V<br>
+326983 Clear direction flag after tests on amd64.<br>
+327212 Do not prepend the current directory to absolute path names.<br>
+327223 Support for Cavium MIPS Octeon Atomic and Count Instructions<br>
+327238 Callgrind Assertion 'passed <= last_bb->cjmp_count' failed<br>
+327284 s390x: Fix translation of the risbg instruction<br>
+327639 vex amd64->IR pcmpestri SSE4.2 instruction is unsupported 0x34<br>
+327837 dwz compressed alternate .debug_info and .debug_str not read correctly<br>
+327916 DW_TAG_typedef may have no name<br>
+327943 s390x: add a redirection for the 'index' function<br>
+328100 XABORT not implemented<br>
+328205 Implement additional Xen hypercalls<br>
+328454 add support Backtraces with ARM unwind tables (EXIDX)<br>
+328455 s390x: SIGILL after emitting wrong register pair for ldxbr<br>
+328711 valgrind.1 manpage "memcheck options" section is badly generated<br>
328878 vex amd64->IR pcmpestri SSE4.2 instruction is unsupported 0x14
329612 Incorrect handling of AT_BASE for image execution
329694 clang warns about using uninitialized variable
329956 valgrind crashes when lmw/stmw instructions are used on ppc64
330228 mmap must align to VKI_SHMLBA on mips32
330257 LLVM does not support `-mno-dynamic-no-pic` option
330319 amd64->IR: unhandled instruction bytes: 0xF 0x1 0xD5 (xend)
330459 --track-fds=yes doesn't track eventfds
330469 Add clock_adjtime syscall support
330594 Missing sysalls on PowerPC / uClibc
330622 Add test to regression suite for POWER instruction: dcbzl
330939 Support for AMD's syscall instruction on x86
 == 308729
330941 Typo in PRE(poll) syscall wrapper
331057 unhandled instruction: 0xEEE01B20 (vfma.f64) (has patch)
331254 Fix expected output for memcheck/tests/dw4
331255 Fix race condition in test none/tests/coolo_sigaction
331257 Fix type of jump buffer in test none/tests/faultstatus
331305 configure uses bash specific syntax
331337 s390x WARNING: unhandled syscall: 326 (dup3)
331380 Syscall param timer_create(evp) points to uninitialised byte(s)
331476 Patch to handle ioctl 0x5422 on Linux (x86 and amd64)
331829 Unexpected ioctl opcode sign extension
331830 ppc64: WARNING: unhandled syscall: 96/97
331839 drd/tests/sem_open specifies invalid semaphore name
331847 outcome of drd/tests/thread_name is nondeterministic
332037 Valgrind cannot handle Thumb "add pc, reg"
332055 drd asserts on platforms with VG_STACK_REDZONE_SZB == 0 and
 consistency checks enabled
332263 intercepts for pthread_rwlock_timedrdlock and
 pthread_rwlock_timedwrlock are incorrect
332265 drd could do with post-rwlock_init and pre-rwlock_destroy
 client requests
332276 Implement additional Xen hypercalls
332658 ldrd.w r1, r2, [PC, #imm] does not adjust for 32bit alignment
332765 Fix ms_print to create temporary files in a proper directory
333072 drd: Add semaphore annotations
333145 Tests for missaligned PC+#imm access for arm
333228 AAarch64 Missing instruction encoding: mrs %[reg], ctr_el0
333230 AAarch64 missing instruction encodings: dc, ic, dsb.
333248 WARNING: unhandled syscall: unix:443
333428 ldr.w pc [rD, #imm] instruction leads to assertion
333501 cachegrind: assertion: Cache set count is not a power of two.
 == 336577
 == 292281
333666 Recognize MPX instructions and bnd prefix.
333788 Valgrind does not support the CDROM_DISC_STATUS ioctl (has patch)
333817 Valgrind reports the memory areas written to by the SG_IO
 ioctl as untouched
334049 lzcnt fails silently (x86_32)
334384 Valgrind does not have support Little Endian support for
 IBM POWER PPC 64
334585 recvmmsg unhandled (+patch) (arm)
334705 sendmsg and recvmsg should guard against bogus msghdr fields.
334727 Build fails with -Werror=format-security
334788 clarify doc about --log-file initial program directory
334834 PPC64 Little Endian support, patch 2
334836 PPC64 Little Endian support, patch 3 testcase fixes
334936 patch to fix false positives on alsa SNDRV_CTL_* ioctls
335034 Unhandled ioctl: HCIGETDEVLIST
335155 vgdb, fix error print statement.
335262 arm64: movi 8bit version is not supported
335263 arm64: dmb instruction is not implemented
335441 unhandled ioctl 0x8905 (SIOCATMARK) when running wine under valgrind
335496 arm64: sbc/abc instructions are not implemented
335554 arm64: unhandled instruction: abs
335564 arm64: unhandled instruction: fcvtpu Xn, Sn
335735 arm64: unhandled instruction: cnt
335736 arm64: unhandled instruction: uaddlv
335848 arm64: unhandled instruction: {s,u}cvtf
335902 arm64: unhandled instruction: sli
335903 arm64: unhandled instruction: umull (vector)
336055 arm64: unhandled instruction: mov (element)
336062 arm64: unhandled instruction: shrn{,2}
336139 mip64: [...] valgrind hangs and spins on a single core [...]
336189 arm64: unhandled Instruction: mvn
336435 Valgrind hangs in pthread_spin_lock consuming 100% CPU
336619 valgrind --read-var-info=yes doesn't handle DW_TAG_restrict_type
336772 Make moans about unknown ioctls more informative
336957 Add a section about the Solaris/illumos port on the webpage
337094 ifunc wrapper is broken on ppc64
337285 fcntl commands F_OFD_SETLK, F_OFD_SETLKW, and F_OFD_GETLK not supported
337528 leak check heuristic for block prefixed by length as 64bit number
337740 Implement additional Xen hypercalls
337762 guest_arm64_toIR.c:4166 (dis_ARM64_load_store): Assertion `0' failed.
337766 arm64-linux: unhandled syscalls mlock (228) and mlockall (230)
337871 deprecate --db-attach
338023 Add support for all V4L2/media ioctls
338024 inlined functions are not shown if DW_AT_ranges is used
338106 Add support for 'kcmp' syscall
338115 DRD: computed conflict set differs from actual after fork
338160 implement display of thread local storage in gdbsrv
338205 configure.ac and check for -Wno-tautological-compare
338300 coredumps are missing one byte of every segment
338445 amd64 vbit-test fails with unknown opcodes used by arm64 VEX
338499 --sim-hints parsing broken due to wrong order in tokens
338615 suppress glibc 2.20 optimized strcmp implementation for ARMv7
338681 Unable to unwind through clone thread created on i386-linux
338698 race condition between gdbsrv and vgdb on startup
338703 helgrind on arm-linux gets false positives in dynamic loader
338791 alt dwz files can be relative of debug/main file
338878 on MacOS: assertion 'VG_IS_PAGE_ALIGNED(clstack_end+1)' failed
338932 build V-trunk with gcc-trunk
338974 glibc 2.20 changed size of struct sigaction sa_flags field on s390
345079 Fix build problems in VEX/useful/test_main.c
n-i-bz Fix KVM_CREATE_IRQCHIP ioctl handling
n-i-bz s390x: Fix memory corruption for multithreaded applications
n-i-bz vex arm->IR: allow PC as basereg in some LDRD cases
n-i-bz internal error in Valgrind if vgdb transmit signals when ptrace invoked
n-i-bz Fix mingw64 support in valgrind.h (dev@, 9 May 2014)
n-i-bz drd manual: Document how to C++11 programs that use class "std::thread"
n-i-bz Add command-line option --default-suppressions
n-i-bz Add support for BLKDISCARDZEROES ioctl
n-i-bz ppc32/64: fix a regression with the mtfsb0/mtfsb1 instructions
n-i-bz Add support for sys_pivot_root and sys_unshare

(3.10.0.BETA1: 2 September 2014, vex r2940, valgrind r14428)
(3.10.0.BETA2: 8 September 2014, vex r2950, valgrind r14503)
(3.10.0: 10 September 2014, vex r2950, valgrind r14514)



Release 3.9.0 (31 October 2013)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.9.0 is a feature release with many improvements and the usual
collection of bug fixes.

This release supports X86/Linux, AMD64/Linux, ARM/Linux, PPC32/Linux,
PPC64/Linux, S390X/Linux, MIPS32/Linux, MIPS64/Linux, ARM/Android,
X86/Android, X86/MacOSX 10.7 and AMD64/MacOSX 10.7. Support for
MacOSX 10.8 is significantly improved relative to the 3.8.0 release.

* ================== PLATFORM CHANGES =================

* Support for MIPS64 LE and BE running Linux. Valgrind has been
 tested on MIPS64 Debian Squeeze and Debian Wheezy distributions.

* Support for MIPS DSP ASE on MIPS32 platforms.

* Support for s390x Decimal Floating Point instructions on hosts that
 have the DFP facility installed.

* Support for POWER8 (Power ISA 2.07) instructions

* Support for Intel AVX2 instructions. This is available only on 64
 bit code.

* Initial support for Intel Transactional Synchronization Extensions,
 both RTM and HLE.

* Initial support for Hardware Transactional Memory on POWER.

* Improved support for MacOSX 10.8 (64-bit only). Memcheck can now
 run large GUI apps tolerably well.

* ==================== TOOL CHANGES ====================

* Memcheck:

 - Improvements in handling of vectorised code, leading to
 significantly fewer false error reports. You need to use the flag
 --partial-loads-ok=yes to get the benefits of these changes.

 - Better control over the leak checker. It is now possible to
 specify which leak kinds (definite/indirect/possible/reachable)
 should be displayed, which should be regarded as errors, and which
 should be suppressed by a given leak suppression. This is done
 using the options --show-leak-kinds=kind1,kind2,..,
 --errors-for-leak-kinds=kind1,kind2,.. and an optional
 "match-leak-kinds:" line in suppression entries, respectively.

 Note that generated leak suppressions contain this new line and
 are therefore more specific than in previous releases. To get the
 same behaviour as previous releases, remove the "match-leak-kinds:"
 line from generated suppressions before using them.
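
 As an illustration (this example is not part of the original release
 notes; the function name is made up), a generated suppression that
 should only match definite leaks could look like:

   {
      leak-in-hypothetical-library
      Memcheck:Leak
      match-leak-kinds: definite
      fun:malloc
      fun:hypothetical_leaky_function
   }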

 - Reduced "possible leak" reports from the leak checker by the use
 of better heuristics. The available heuristics provide detection
 of valid interior pointers to std::string, to new[] allocated
 arrays with elements having destructors and to interior pointers
 pointing to an inner part of a C++ object using multiple
 inheritance. They can be selected individually using the
 option --leak-check-heuristics=heur1,heur2,...

 - Better control of stacktrace acquisition for heap-allocated
 blocks. Using the --keep-stacktraces option, it is possible to
 control independently whether a stack trace is acquired for each
 allocation and deallocation. This can be used to create better
 "use after free" errors or to decrease Valgrind's resource
 consumption by recording less information.
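
 For example (an illustrative command line; "./myprog" is a
 placeholder), recording stack traces for allocations only:

   valgrind --keep-stacktraces=alloc ./myprog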

 - Better reporting of leak suppression usage. The list of used
 suppressions (shown when the -v option is given) now shows, for
 each leak suppression, how many blocks and bytes it suppressed
 during the last leak search.

* Helgrind:

 - False errors resulting from the use of statically initialised
 mutexes and condition variables (PTHREAD_MUTEX_INITIALISER, etc)
 have been removed.

 - False errors resulting from the use of pthread_cond_waits that
 time out have been removed.

* ==================== OTHER CHANGES ====================

* Some attempt to tune Valgrind's space requirements to the expected
 capabilities of the target:

 - The default size of the translation cache has been reduced from 8
 sectors to 6 on Android platforms, since each sector occupies
 about 40MB when using Memcheck.

 - The default size of the translation cache has been increased to 16
 sectors on all other platforms, reflecting the fact that large
 applications require instrumentation and storage of huge amounts
 of code. For similar reasons, the number of memory mapped
 segments that can be tracked has been increased by a factor of 6.

 - In all cases, the maximum number of sectors in the translation
 cache can be controlled by the new flag --num-transtab-sectors.

* Changes in how debug info (line numbers, etc) is read:

 - Valgrind no longer temporarily mmaps the entire object to read
 from it. Instead, reading is done through a small fixed sized
 buffer. This avoids virtual memory usage spikes when Valgrind
 reads debuginfo from large shared objects.

 - A new experimental remote debug info server. Valgrind can read
 debug info from a different machine (typically, a build host)
 where debuginfo objects are stored. This can save a lot of time
 and hassle when running Valgrind on resource-constrained targets
 (phones, tablets) when the full debuginfo objects are stored
 somewhere else. This is enabled by the --debuginfo-server=
 option.

 - Consistency checking between main and debug objects can be
 disabled using the --allow-mismatched-debuginfo option.

* Stack unwinding by stack scanning, on ARM. Unwinding by stack
 scanning can recover stack traces in some cases when the normal
 unwind mechanisms fail. Stack scanning is best described as "a
 nasty, dangerous and misleading hack" and so is disabled by default.
 Use --unw-stack-scan-thresh and --unw-stack-scan-frames to enable
 and control it.

* Detection and merging of recursive stack frame cycles. When your
 program has recursive algorithms, this limits the memory used by
 Valgrind for recorded stack traces and avoids recording
 uninteresting repeated calls. This is controlled by the command
 line option --merge-recursive-frames and by the monitor command
 "v.set merge-recursive-frames".

* File name and line numbers for used suppressions. The list of used
 suppressions (shown when the -v option is given) now shows, for each
 used suppression, the file name and line number where the suppression
 is defined.

* New and modified GDB server monitor features:

 - valgrind.h has a new client request, VALGRIND_MONITOR_COMMAND,
 that can be used to execute gdbserver monitor commands from the
 client program.
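
 A minimal sketch of how a program might use this request (this code
 is illustrative, not from the release notes; compile it normally and
 run it under Memcheck):

   /* monitor_demo.c -- hypothetical example. Client requests are
      no-ops when the program is not running under Valgrind. */
   #include <stdlib.h>
   #include <valgrind/valgrind.h>

   int main(void)
   {
       void *p = malloc(64);   /* deliberately leaked for the demo */
       (void)p;
       /* Run a gdbserver monitor command from inside the client; the
          output goes to the Valgrind log, or to GDB if vgdb is
          connected. */
       VALGRIND_MONITOR_COMMAND("leak_check full");
       return 0;
   }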

 - A new monitor command, "v.info open_fds", that gives the list of
 open file descriptors and additional details.

 - An optional message in the "v.info n_errs_found" monitor command,
 for example "v.info n_errs_found test 1234 finished", allowing a
 comment string to be added to the process output, perhaps for the
 purpose of separating errors of different tests or test phases.

 - A new monitor command "v.info execontext" that shows information
 about the stack traces recorded by Valgrind.

 - A new monitor command "v.do expensive_sanity_check_general" to run
 some internal consistency checks.

* New flag --sigill-diagnostics to control whether a diagnostic
 message is printed when the JIT encounters an instruction it can't
 translate. The actual behavior -- delivery of SIGILL to the
 application -- is unchanged.

* The maximum amount of memory that Valgrind can use on 64 bit targets
 has been increased from 32GB to 64GB. This should make it possible
 to run applications on Memcheck that natively require up to about 35GB.

* ==================== FIXED BUGS ====================

The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather
than mailing the developers (or mailing lists) directly -- bugs that
are not entered into bugzilla tend to get forgotten about or ignored.

To see details of a given bug, visit
 https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.

123837 system call: 4th argument is optional, depending on cmd
135425 memcheck should tell you where Freed blocks were Mallocd
164485 VG_N_SEGNAMES and VG_N_SEGMENTS are (still) too small
207815 Adds some of the drm ioctls to syswrap-linux.c
251569 vex amd64->IR: 0xF 0x1 0xF9 0xBF 0x90 0xD0 0x3 0x0 (RDTSCP)
252955 Impossible to compile with ccache
253519 Memcheck reports auxv pointer accesses as invalid reads.
263034 Crash when loading some PPC64 binaries
269599 Increase deepest backtrace
274695 s390x: Support "compare to/from logical" instructions (z196)
275800 s390x: Autodetect cache info (part 2)
280271 Valgrind reports possible memory leaks on still-reachable std::string
284540 Memcheck shouldn't count suppressions matching still-reachable [..]
289578 Backtraces with ARM unwind tables (stack scan flags)
296311 Wrong stack traces due to -fomit-frame-pointer (x86)
304832 ppc32: build failure
305431 Use find_buildid shdr fallback for separate .debug files
305728 Add support for AVX2 instructions
305948 ppc64: code generation for ShlD64 / ShrD64 asserts
306035 s390x: Fix IR generation for LAAG and friends
306054 s390x: Condition code computation for convert-to-int/logical
306098 s390x: alternate opcode form for convert to/from fixed
306587 Fix cache line detection from auxiliary vector for PPC.
306783 Mips unhandled syscall : 4025 / 4079 / 4182
307038 DWARF2 CFI reader: unhandled DW_OP_ opcode 0x8 (DW_OP_const1u et al)
307082 HG false positive: pthread_cond_destroy: destruction of unknown CV
307101 sys_capget second argument can be NULL
307103 sys_openat: If pathname is absolute, then dirfd is ignored.
307106 amd64->IR: f0 0f c0 02 (lock xadd byte)
307113 s390x: DFP support
307141 valgrind does't work in mips-linux system
307155 filter_gdb should filter out syscall-template.S T_PSEUDO
307285 x86_amd64 feature test for avx in test suite is wrong
307290 memcheck overlap testcase needs memcpy version filter
307463 Please add "&limit=0" to the "all open bugs" link
307465 --show-possibly-lost=no should reduce the error count / exit code
307557 Leaks on Mac OS X 10.7.5 libraries at ImageLoader::recursiveInit[..]
307729 pkgconfig support broken valgrind.pc
307828 Memcheck false errors SSE optimized wcscpy, wcscmp, wcsrchr, wcschr
307955 Building valgrind 3.7.0-r4 fails in Gentoo AMD64 when using clang
308089 Unhandled syscall on ppc64: prctl
308135 PPC32 MPC8xx has 16 bytes cache size
308321 testsuite memcheck filter interferes with gdb_filter
308333 == 307106
308341 vgdb should report process exit (or fatal signal)
308427 s390 memcheck reports tsearch cjump/cmove depends on uninit
308495 Remove build dependency on installed Xen headers
308573 Internal error on 64-bit instruction executed in 32-bit mode
308626 == 308627
308627 pmovmskb validity bit propagation is imprecise
308644 vgdb command for having the info for the track-fds option
308711 give more info about aspacemgr and arenas in out_of_memory
308717 ARM: implement fixed-point VCVT.F64.[SU]32
308718 ARM implement SMLALBB family of instructions
308886 Missing support for PTRACE_SET/GETREGSET
308930 syscall name_to_handle_at (303 on amd64) not handled
309229 V-bit tester does not report number of tests generated
309323 print unrecognized instuction on MIPS
309425 Provide a --sigill-diagnostics flag to suppress illegal [..]
309427 SSE optimized stpncpy trigger uninitialised value [..] errors
309430 Self hosting ppc64 encounters a vassert error on operand type
309600 valgrind is a bit confused about 0-sized sections
309823 Generate errors for still reachable blocks
309921 PCMPISTRI validity bit propagation is imprecise
309922 none/tests/ppc64/test_dfp5 sometimes fails
310169 The Iop_CmpORD class of Iops is not supported by the vbit checker.
310424 --read-var-info does not properly describe static variables
310792 search additional path for debug symbols
310931 s390x: Message-security assist (MSA) instruction extension [..]
311100 PPC DFP implementation of the integer operands is inconsistent [..]
311318 ARM: "128-bit constant is not implemented" error message
311407 ssse3 bcopy (actually converted memcpy) causes invalid read [..]
311690 V crashes because it redirects branches inside of a redirected function
311880 x86_64: make regtest hangs at shell_valid1
311922 WARNING: unhandled syscall: 170
311933 == 251569
312171 ppc: insn selection for DFP
312571 Rounding mode call wrong for the DFP Iops [..]
312620 Change to Iop_D32toD64 [..] for s390 DFP support broke ppc [..]
312913 Dangling pointers error should also report the alloc stack trace
312980 Building on Mountain Lion generates some compiler warnings
313267 Adding MIPS64/Linux port to Valgrind
313348 == 251569
313354 == 251569
313811 Buffer overflow in assert_fail
314099 coverity pointed out error in VEX guest_ppc_toIR.c insn_suffix
314269 ppc: dead code in insn selection
314718 ARM: implement integer divide instruction (sdiv and udiv)
315345 cl-format.xml and callgrind/dump.c don't agree on using cfl= or cfi=
315441 sendmsg syscall should ignore unset msghdr msg_flags
315534 msgrcv inside a thread causes valgrind to hang (block)
315545 Assertion '(UChar*)sec->tt[tteNo].tcptr <= (UChar*)hcode' failed
315689 disInstr(thumb): unhandled instruction: 0xF852 0x0E10 (LDRT)
315738 disInstr(arm): unhandled instruction: 0xEEBE0BEE (vcvt.s32.f64)
315959 valgrind man page has bogus SGCHECK (and no BBV) OPTIONS section
316144 valgrind.1 manpage contains unknown ??? strings [..]
316145 callgrind command line options in manpage reference (unknown) [..]
316181 drd: Fixed a 4x slowdown for certain applications
316503 Valgrind does not support SSE4 "movntdqa" instruction
316535 Use of |signed int| instead of |size_t| in valgrind messages
316696 fluidanimate program of parsec 2.1 stuck
316761 syscall open_by_handle_at (304 on amd64, 342 on x86) not handled
317091 Use -Wl,-Ttext-segment when static linking if possible [..]
317186 "Impossible happens" when occurs VCVT instruction on ARM
317318 Support for Threading Building Blocks "scalable_malloc"
317444 amd64->IR: 0xC4 0x41 0x2C 0xC2 0xD2 0x8 (vcmpeq_uqps)
317461 Fix BMI assembler configure check and avx2/bmi/fma vgtest prereqs
317463 bmi testcase IR SANITY CHECK FAILURE
317506 memcheck/tests/vbit-test fails with unknown opcode after [..]
318050 libmpiwrap fails to compile with out-of-source build
318203 setsockopt handling needs to handle SOL_SOCKET/SO_ATTACH_FILTER
318643 annotate_trace_memory tests infinite loop on arm and ppc [..]
318773 amd64->IR: 0xF3 0x48 0x0F 0xBC 0xC2 0xC3 0x66 0x0F
318929 Crash with: disInstr(thumb): 0xF321 0x0001 (ssat16)
318932 Add missing PPC64 and PPC32 system call support
319235 --db-attach=yes is broken with Yama (ptrace scoping) enabled
319395 Crash with unhandled instruction on STRT (Thumb) instructions
319494 VEX Makefile-gcc standalone build update after r2702
319505 [MIPSEL] Crash: unhandled UNRAY operator.
319858 disInstr(thumb): unhandled instruction on instruction STRBT
319932 disInstr(thumb): unhandled instruction on instruction STRHT
320057 Problems when we try to mmap more than 12 memory pages on MIPS32
320063 Memory from PTRACE_GET_THREAD_AREA is reported uninitialised
320083 disInstr(thumb): unhandled instruction on instruction LDRBT
320116 bind on AF_BLUETOOTH produces warnings because of sockaddr_rc padding
320131 WARNING: unhandled syscall: 369 on ARM (prlimit64)
320211 Stack buffer overflow in ./coregrind/m_main.c with huge TMPDIR
320661 vgModuleLocal_read_elf_debug_info(): "Assertion '!di->soname'
320895 add fanotify support (patch included)
320998 vex amd64->IR pcmpestri and pcmpestrm SSE4.2 instruction
321065 Valgrind updates for Xen 4.3
321148 Unhandled instruction: PLI (Thumb 1, 2, 3)
321363 Unhandled instruction: SSAX (ARM + Thumb)
321364 Unhandled instruction: SXTAB16 (ARM + Thumb)
321466 Unhandled instruction: SHASX (ARM + Thumb)
321467 Unhandled instruction: SHSAX (ARM + Thumb)
321468 Unhandled instruction: SHSUB16 (ARM + Thumb)
321619 Unhandled instruction: SHSUB8 (ARM + Thumb)
321620 Unhandled instruction: UASX (ARM + Thumb)
321621 Unhandled instruction: USAX (ARM + Thumb)
321692 Unhandled instruction: UQADD16 (ARM + Thumb)
321693 Unhandled instruction: LDRSBT (Thumb)
321694 Unhandled instruction: UQASX (ARM + Thumb)
321696 Unhandled instruction: UQSAX (Thumb + ARM)
321697 Unhandled instruction: UHASX (ARM + Thumb)
321703 Unhandled instruction: UHSAX (ARM + Thumb)
321704 Unhandled instruction: REVSH (ARM + Thumb)
321730 Add cg_diff and cg_merge man pages
321738 Add vgdb and valgrind-listener man pages
321814 == 315545
321891 Unhandled instruction: LDRHT (Thumb)
321960 pthread_create() then alloca() causing invalid stack write errors
321969 ppc32 and ppc64 don't support [lf]setxattr
322254 Show threadname together with tid if set by application
322294 Add initial support for IBM Power ISA 2.07
322368 Assertion failure in wqthread_hijack under OS X 10.8
322563 vex mips->IR: 0x70 0x83 0xF0 0x3A
322807 VALGRIND_PRINTF_BACKTRACE writes callstack to xml and text to stderr
322851 0bXXX binary literal syntax is not standard
323035 Unhandled instruction: LDRSHT(Thumb)
323036 Unhandled instruction: SMMLS (ARM and Thumb)
323116 The memcheck/tests/ppc64/power_ISA2_05.c fails to build [..]
323175 Unhandled instruction: SMLALD (ARM + Thumb)
323177 Unhandled instruction: SMLSLD (ARM + Thumb)
323432 Calling pthread_cond_destroy() or pthread_mutex_destroy() [..]
323437 Phase 2 support for IBM Power ISA 2.07
323713 Support mmxext (integer sse) subset on i386 (athlon)
323803 Transactional memory instructions are not supported for Power
323893 SSE3 not available on amd cpus in valgrind
323905 Probable false positive from Valgrind/drd on close()
323912 valgrind.h header isn't compatible for mingw64
324047 Valgrind doesn't support [LDR,ST]{S}[B,H]T ARM instructions
324149 helgrind: When pthread_cond_timedwait returns ETIMEDOUT [..]
324181 mmap does not handle MAP_32BIT
324227 memcheck false positive leak when a thread calls exit+block [..]
324421 Support for fanotify API on ARM architecture
324514 gdbserver monitor cmd output behaviour consistency [..]
324518 ppc64: Emulation of dcbt instructions does not handle [..]
324546 none/tests/ppc32 test_isa_2_07_part2 requests -m64
324582 When access is made to freed memory, report both allocation [..]
324594 Fix overflow computation for Power ISA 2.06 insns: mulldo/mulldo.
324765 ppc64: illegal instruction when executing none/tests/ppc64/jm-misc
324816 Incorrect VEX implementation for xscvspdp/xvcvspdp for SNaN inputs
324834 Unhandled instructions in Microsoft C run-time for x86_64
324894 Phase 3 support for IBM Power ISA 2.07
326091 drd: Avoid false race reports from optimized strlen() impls
326113 valgrind libvex hwcaps error on AMD64
n-i-bz Some wrong command line options could be ignored
n-i-bz patch to allow fair-sched on android
n-i-bz report error for vgdb snapshot requested before execution
n-i-bz same as 303624 (fixed in 3.8.0), but for x86 android

(3.9.0: 31 October 2013, vex r2796, valgrind r13708)



Release 3.8.1 (19 September 2012)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.8.1 is a bug fix release. It fixes some assertion failures in 3.8.0
that occur moderately frequently in real use cases, adds support for
some missing instructions on ARM, and fixes a deadlock condition on
MacOSX. If you package or deliver 3.8.0 for others to use, you might
want to consider upgrading to 3.8.1 instead.

The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather
than mailing the developers (or mailing lists) directly -- bugs that
are not entered into bugzilla tend to get forgotten about or ignored.

To see details of a given bug, visit
 https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.

284004 == 301281
289584 Unhandled instruction: 0xF 0x29 0xE5 (MOVAPS)
295808 amd64->IR: 0xF3 0xF 0xBC 0xC0 (TZCNT)
298281 wcslen causes false(?) uninitialised value warnings
301281 valgrind hangs on OS X when the process calls system()
304035 disInstr(arm): unhandled instruction 0xE1023053
304867 implement MOVBE instruction in x86 mode
304980 Assertion 'lo <= hi' failed in vgModuleLocal_find_rx_mapping
305042 amd64: implement 0F 7F encoding of movq between two registers
305199 ARM: implement QDADD and QDSUB
305321 amd64->IR: 0xF 0xD 0xC (prefetchw)
305513 killed by fatal signal: SIGSEGV
305690 DRD reporting invalid semaphore when sem_trywait fails
305926 Invalid alignment checks for some AVX instructions
306297 disInstr(thumb): unhandled instruction 0xE883 0x000C
306310 3.8.0 release tarball missing some files
306612 RHEL 6 glibc-2.X default suppressions need /lib*/libc-*patterns
306664 vex amd64->IR: 0x66 0xF 0x3A 0x62 0xD1 0x46 0x66 0xF
n-i-bz shmat of a segment > 4Gb does not work
n-i-bz simulate_control_c script wrong USR1 signal number on mips
n-i-bz vgdb ptrace calls wrong on mips [...]
n-i-bz Fixes for more MPI false positives
n-i-bz exp-sgcheck's memcpy causes programs to segfault
n-i-bz OSX build w/ clang: asserts at startup
n-i-bz Incorrect undef'dness prop for Iop_DPBtoBCD and Iop_BCDtoDPB
n-i-bz fix a couple of union tag-vs-field mixups
n-i-bz OSX: use __NR_poll_nocancel rather than __NR_poll

The following bugs were fixed in 3.8.0 but not listed in this NEWS
file at the time:

254088 Valgrind should know about UD2 instruction
301280 == 254088
301902 == 254088
304754 NEWS blows TeX's little mind

(3.8.1: 19 September 2012, vex r2537, valgrind r12996)



Release 3.8.0 (10 August 2012)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.8.0 is a feature release with many improvements and the usual
collection of bug fixes.

This release supports X86/Linux, AMD64/Linux, ARM/Linux, PPC32/Linux,
PPC64/Linux, S390X/Linux, MIPS/Linux, ARM/Android, X86/Android,
X86/MacOSX 10.6/10.7 and AMD64/MacOSX 10.6/10.7. Support for recent
distros and toolchain components (glibc 2.16, gcc 4.7) has been added.
There is initial support for MacOSX 10.8, but it is not usable for
serious work at present.

* ================== PLATFORM CHANGES =================

* Support for MIPS32 platforms running Linux. Valgrind has been
 tested on MIPS32 and MIPS32r2 platforms running different Debian
 Squeeze and MeeGo distributions. Both little-endian and big-endian
 cores are supported. The tools Memcheck, Massif and Lackey have
 been tested and are known to work. See README.mips for more details.

* Preliminary support for Android running on x86.

* Preliminary (as-yet largely unusable) support for MacOSX 10.8.

* Support for Intel AVX instructions and for AES instructions. This
 support is available only for 64 bit code.

* Support for POWER Decimal Floating Point instructions.

* ==================== TOOL CHANGES ====================

* Non-libc malloc implementations are now supported. This is useful
 for tools that replace malloc (Memcheck, Massif, DRD, Helgrind).
 Using the new option --soname-synonyms, such tools can be informed
 that the malloc implementation is either linked statically into the
 executable, or is present in some other shared library different
 from libc.so. This makes it possible to process statically linked
 programs, and programs using other malloc libraries, for example
 TCMalloc or JEMalloc.
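
 For instance (illustrative command lines; "./myprog" is a
 placeholder), a program using a dynamically linked tcmalloc can be
 run with

   valgrind --soname-synonyms=somalloc=libtcmalloc*.so* ./myprog

 and a program whose allocator is statically linked into the
 executable can use the special soname NONE:

   valgrind --soname-synonyms=somalloc=NONE ./myprog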

* For tools that provide their own replacement for malloc et al, the
 option --redzone-size=<number> allows users to specify the size of
 the padding blocks (redzones) added before and after each client
 allocated block. Smaller redzones decrease the memory needed by
 Valgrind. Bigger redzones increase the chance of detecting block
 overruns or underruns. Prior to this change, the redzone size was
 hardwired to 16 bytes in Memcheck.

* Memcheck:

 - The leak_check GDB server monitor command can now control the
 maximum number of loss records to output.

 - Reduction of memory use for applications allocating
 many blocks and/or having many partially defined bytes.

 - Addition of GDB server monitor command 'block_list' that lists
 the addresses/sizes of the blocks of a leak search loss record.

 - Addition of GDB server monitor command 'who_points_at' that lists
 the locations pointing at a block.
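
 As an illustration (the loss record number and address are made up),
 these commands can be issued from GDB once connected through vgdb:

   (gdb) monitor leak_check full
   (gdb) monitor block_list 7
   (gdb) monitor who_points_at 0x4c2f0a0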

 - If a redzone size > 0 is given, VALGRIND_MALLOCLIKE_BLOCK now will
 detect an invalid access of these redzones, by marking them
 noaccess. Similarly, if a redzone size is given for a memory
 pool, VALGRIND_MEMPOOL_ALLOC will mark the redzones no access.
 This still makes it possible to find some bugs if the user has
 forgotten to mark the pool superblock noaccess.
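
 A small sketch of the idea (hypothetical allocator code, not from the
 release notes), asking Memcheck for 16-byte redzones around a block
 carved out of a private arena:

   #include <stddef.h>
   #include <valgrind/valgrind.h>

   #define RZ 16                    /* redzone size passed to Valgrind */

   static char arena[1024 * 1024];  /* stand-in for a custom pool */
   static size_t arena_used;

   /* Hypothetical allocator: hand out a block from the arena and tell
      the core about it. With rzB == RZ the redzones on both sides are
      marked noaccess, so overruns into them are reported. (No bounds
      checking; this is only a sketch.) */
   void *my_alloc(size_t n)
   {
       char *user = arena + arena_used + RZ;
       arena_used += n + 2 * RZ;
       VALGRIND_MALLOCLIKE_BLOCK(user, n, RZ, /*is_zeroed=*/0);
       return user;
   }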

 - Performance of memory leak check has been improved, especially in
 cases where there are many leaked blocks and/or many suppression
 rules used to suppress leak reports.

 - Reduced noise (false positive) level on MacOSX 10.6/10.7, due to
 more precise analysis, which is important for LLVM/Clang
 generated code. This is at the cost of somewhat reduced
 performance. Note there is no change to analysis precision or
 costs on Linux targets.

* DRD:

 - Added even more facilities that can help finding the cause of a data
 race, namely the command-line option --ptrace-addr and the macro
 DRD_STOP_TRACING_VAR(x). More information can be found in the manual.

 - Fixed a subtle bug that could cause false positive data race reports.

* ==================== OTHER CHANGES ====================

* The C++ demangler has been updated so as to work well with C++
 compiled by up to at least g++ 4.6.

* Tool developers can make replacement/wrapping more flexible thanks
 to the new option --soname-synonyms. This was reported above, but
 in fact is very general and applies to all function
 replacement/wrapping, not just to malloc-family functions.

* Round-robin scheduling of threads can be selected, using the new
 option --fair-sched=yes. Prior to this change, the pipe-based
 thread serialisation mechanism (which is still the default) could
 give very unfair scheduling. --fair-sched=yes improves
 responsiveness of interactive multithreaded applications, and
 improves repeatability of results from the thread checkers Helgrind
 and DRD.

* For tool developers: support to run Valgrind on Valgrind has been
 improved. We can now routinely run Valgrind on Helgrind or Memcheck.

* gdbserver now shows the float shadow registers as integer
 rather than float values, as the shadow values are mostly
 used as bit patterns.

* Increased limit for the --num-callers command line flag to 500.

* Performance improvements for error matching when there are many
 suppression records in use.

* Improved support for DWARF4 debugging information (bug 284184).

* Initial support for DWZ compressed Dwarf debug info.

* Improved control over the IR optimiser's handling of the tradeoff
 between performance and precision of exceptions. Specifically,
 --vex-iropt-precise-memory-exns has been removed and replaced by
 --vex-iropt-register-updates, with extended functionality. This
 allows the Valgrind gdbserver to always show up to date register
 values to GDB.

* Modest performance gains through the use of translation chaining for
 JIT-generated code.

* ==================== FIXED BUGS ====================

The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather
than mailing the developers (or mailing lists) directly -- bugs that
are not entered into bugzilla tend to get forgotten about or ignored.

To see details of a given bug, visit
 https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.

197914 Building valgrind from svn now requires automake-1.10
203877 increase to 16Mb maximum allowed alignment for memalign et al
219156 Handle statically linked malloc or other malloc lib (e.g. tcmalloc)
247386 make perf does not run all performance tests
270006 Valgrind scheduler unfair
270777 Adding MIPS/Linux port to Valgrind
270796 s390x: Removed broken support for the TS insn
271438 Fix configure for proper SSE4.2 detection
273114 s390x: Support TR, TRE, TROO, TROT, TRTO, and TRTT instructions
273475 Add support for AVX instructions
274078 improved configure logic for mpicc
276993 fix mremap 'no thrash checks'
278313 Fedora 15/x64: err read debug info with --read-var-info=yes flag
281482 memcheck incorrect byte allocation count in realloc() for silly argument
282230 group allocator for small fixed size, use it for MC_Chunk/SEc vbit
283413 Fix wrong sanity check
283671 Robustize alignment computation in LibVEX_Alloc
283961 Adding support for some HCI IOCTLs
284124 parse_type_DIE: confused by: DWARF 4
284864 == 273475 (Add support for AVX instructions)
285219 Too-restrictive constraints for Thumb2 "SP plus/minus register"
285662 (MacOSX): Memcheck needs to replace memcpy/memmove
285725 == 273475 (Add support for AVX instructions)
286261 add wrapper for linux I2C_RDWR ioctl
286270 vgpreload is not friendly to 64->32 bit execs, gives ld.so warnings
286374 Running cachegrind with --branch-sim=yes on 64-bit PowerPC program fails
286384 configure fails "checking for a supported version of gcc"
286497 == 273475 (Add support for AVX instructions)
286596 == 273475 (Add support for AVX instructions)
286917 disInstr(arm): unhandled instruction: QADD (also QSUB)
287175 ARM: scalar VFP fixed-point VCVT instructions not handled
287260 Incorrect conditional jump or move depends on uninitialised value(s)
287301 vex amd64->IR: 0x66 0xF 0x38 0x41 0xC0 0xB8 0x0 0x0 (PHMINPOSUW)
287307 == 273475 (Add support for AVX instructions)
287858 VG_(strerror): unknown error
288298 (MacOSX) unhandled syscall shm_unlink
288995 == 273475 (Add support for AVX instructions)
289470 Loading of large Mach-O thin binaries fails.
289656 == 273475 (Add support for AVX instructions)
289699 vgdb connection in relay mode erroneously closed due to buffer overrun
289823 == 293754 (PCMPxSTRx not implemented for 16-bit characters)
289839 s390x: Provide support for unicode conversion instructions
289939 monitor cmd 'leak_check' with details about leaked or reachable blocks
290006 memcheck doesn't mark %xmm as initialized after "pcmpeqw %xmm %xmm"
290655 Add support for AESKEYGENASSIST instruction
290719 valgrind-3.7.0 fails with automake-1.11.2 due to"pkglibdir" usage
290974 vgdb must align pages to VKI_SHMLBA (16KB) on ARM
291253 ES register not initialised in valgrind simulation
291568 Fix 3DNOW-related crashes with baseline x86_64 CPU (w patch)
291865 s390x: Support the "Compare Double and Swap" family of instructions
292300 == 273475 (Add support for AVX instructions)
292430 unrecognized instruction in __intel_get_new_mem_ops_cpuid
292493 == 273475 (Add support for AVX instructions)
292626 Missing fcntl F_SETOWN_EX and F_GETOWN_EX support
292627 Missing support for some SCSI ioctls
292628 none/tests/x86/bug125959-x86.c triggers undefined behavior
292841 == 273475 (Add support for AVX instructions)
292993 implement the getcpu syscall on amd64-linux
292995 Implement the “cross memory attach” syscalls introduced in Linux 3.2
293088 Add some VEX sanity checks for ppc64 unhandled instructions
293751 == 290655 (Add support for AESKEYGENASSIST instruction)
293754 PCMPxSTRx not implemented for 16-bit characters
293755 == 293754 (No tests for PCMPxSTRx on 16-bit characters)
293808 CLFLUSH not supported by latest VEX for amd64
294047 valgrind does not correctly emulate prlimit64(..., RLIMIT_NOFILE, ...)
294048 MPSADBW instruction not implemented
294055 regtest none/tests/shell fails when locale is not set to C
294185 INT 0x44 (and others) not supported on x86 guest, but used by Jikes RVM
294190 --vgdb-error=xxx can be out of sync with errors shown to the user
294191 amd64: fnsave/frstor and 0x66 size prefixes on FP instructions
294260 disInstr_AMD64: disInstr miscalculated next %rip
294523 --partial-loads-ok=yes causes false negatives
294617 vex amd64->IR: 0x66 0xF 0x3A 0xDF 0xD1 0x1 0xE8 0x6A
294736 vex amd64->IR: 0x48 0xF 0xD7 0xD6 0x48 0x83
294812 patch allowing to run (on x86 at least) helgrind/drd on tool.
295089 can not annotate source for both helgrind and drd
295221 POWER Processor decimal floating point instruction support missing
295427 building for i386 with clang on darwin11 requires "-new_linker linker"
295428 coregrind/m_main.c has incorrect x86 assembly for darwin
295590 Helgrind: Assertion 'cvi->nWaiters > 0' failed
295617 ARM - Add some missing syscalls
295799 Missing \n with get_vbits in gdbserver when line is % 80 [...]
296229 Linux user input device ioctls missing wrappers
296318 ELF Debug info improvements (more than one rx/rw mapping)
296422 Add translation chaining support
296457 vex amd64->IR: 0x66 0xF 0x3A 0xDF 0xD1 0x1 0xE8 0x6A (dup of AES)
296792 valgrind 3.7.0: add SIOCSHWTSTAMP (0x89B0) ioctl wrapper
296983 Fix build issues on x86_64/ppc64 without 32-bit toolchains
297078 gdbserver signal handling problems [..]
297147 drd false positives on newly allocated memory
297329 disallow decoding of IBM Power DFP insns on some machines
297497 POWER Processor decimal floating point instruction support missing
297701 Another alias for strncasecmp_l in libc-2.13.so
297911 'invalid write' not reported when using APIs for custom mem allocators.
297976 s390x: revisit EX implementation
297991 Valgrind interferes with mmap()+ftell()
297992 Support systems missing WIFCONTINUED (e.g. pre-2.6.10 Linux)
297993 Fix compilation of valgrind with gcc -g3.
298080 POWER Processor DFP support missing, part 3
298227 == 273475 (Add support for AVX instructions)
298335 == 273475 (Add support for AVX instructions)
298354 Unhandled ARM Thumb instruction 0xEB0D 0x0585 (streq)
298394 s390x: Don't bail out on an unknown machine model. [..]
298421 accept4() syscall (366) support is missing for ARM
298718 vex amd64->IR: 0xF 0xB1 0xCB 0x9C 0x8F 0x45
298732 valgrind installation problem in ubuntu with kernel version 3.x
298862 POWER Processor DFP instruction support missing, part 4
298864 DWARF reader mis-parses DW_FORM_ref_addr
298943 massif asserts with --pages-as-heap=yes when brk is changing [..]
299053 Support DWARF4 DW_AT_high_pc constant form
299104 == 273475 (Add support for AVX instructions)
299316 Helgrind: hg_main.c:628 (map_threads_lookup): Assertion 'thr' failed.
299629 dup3() syscall (358) support is missing for ARM
299694 POWER Processor DFP instruction support missing, part 5
299756 Ignore --free-fill for MEMPOOL_FREE and FREELIKE client requests
299803 == 273475 (Add support for AVX instructions)
299804 == 273475 (Add support for AVX instructions)
299805 == 273475 (Add support for AVX instructions)
300140 ARM - Missing (T1) SMMUL
300195 == 296318 (ELF Debug info improvements (more than one rx/rw mapping))
300389 Assertion `are_valid_hwcaps(VexArchAMD64, [..])' failed.
300414 FCOM and FCOMP unimplemented for amd64 guest
301204 infinite loop in canonicaliseSymtab with ifunc symbol
301229 == 203877 (increase to 16Mb maximum allowed alignment for memalign etc)
301265 add x86 support to Android build
301984 configure script doesn't detect certain versions of clang
302205 Fix compiler warnings for POWER VEX code and POWER test cases
302287 Unhandled movbe instruction on Atom processors
302370 PPC: fnmadd, fnmsub, fnmadds, fnmsubs insns always negate the result
302536 Fix for the POWER Valgrind regression test: memcheck-ISA2.0.
302578 Unrecognized isntruction 0xc5 0x32 0xc2 0xca 0x09 vcmpngess
302656 == 273475 (Add support for AVX instructions)
302709 valgrind for ARM needs extra tls support for android emulator [..]
302827 add wrapper for CDROM_GET_CAPABILITY
302901 Valgrind crashes with dwz optimized debuginfo
302918 Enable testing of the vmaddfp and vnsubfp instructions in the testsuite
303116 Add support for the POWER instruction popcntb
303127 Power test suite fixes for frsqrte, vrefp, and vrsqrtefp instructions.
303250 Assertion `instrs_in->arr_used <= 10000' failed w/ OpenSSL code
303466 == 273475 (Add support for AVX instructions)
303624 segmentation fault on Android 4.1 (e.g. on Galaxy Nexus OMAP)
303963 strstr() function produces wrong results under valgrind callgrind
304054 CALL_FN_xx macros need to enforce stack alignment
304561 tee system call not supported
715750 (MacOSX): Incorrect invalid-address errors near 0xFFFFxxxx (mozbug#)
n-i-bz Add missing gdbserver xml files for shadow registers for ppc32
n-i-bz Bypass gcc4.4/4.5 code gen bugs causing out of memory or asserts
n-i-bz Fix assert in gdbserver for watchpoints watching the same address
n-i-bz Fix false positive in sys_clone on amd64 when optional args [..]
n-i-bz s390x: Shadow registers can now be examined using vgdb

(3.8.0-TEST3: 9 August 2012, vex r2465, valgrind r12865)
(3.8.0: 10 August 2012, vex r2465, valgrind r12866)



Release 3.7.0 (5 November 2011)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.7.0 is a feature release with many significant improvements and the
usual collection of bug fixes.

This release supports X86/Linux, AMD64/Linux, ARM/Linux, PPC32/Linux,
PPC64/Linux, S390X/Linux, ARM/Android, X86/Darwin and AMD64/Darwin.
Support for recent distros and toolchain components (glibc 2.14, gcc
4.6, MacOSX 10.7) has been added.

* ================== PLATFORM CHANGES =================

* Support for IBM z/Architecture (s390x) running Linux. Valgrind can
 analyse 64-bit programs running on z/Architecture. Most user space
 instructions up to and including z10 are supported. Valgrind has
 been tested extensively on z9, z10, and z196 machines running SLES
 10/11, RedHat 5/6, and Fedora. The Memcheck and Massif tools are
 known to work well. Callgrind, Helgrind, and DRD work reasonably
 well on z9 and later models. See README.s390 for more details.

* Preliminary support for MacOSX 10.7 and XCode 4. Both 32- and
 64-bit processes are supported. Some complex threaded applications
 (Firefox) are observed to hang when run as 32 bit applications,
 whereas 64-bit versions run OK. The cause is unknown. Memcheck
 will likely report some false errors. In general, expect some rough
 spots. This release also supports MacOSX 10.6, but drops support
 for 10.5.

* Preliminary support for Android (on ARM). Valgrind can now run
 large applications (eg, Firefox) on (eg) a Samsung Nexus S. See
 README.android for more details, plus instructions on how to get
 started.

* Support for the IBM Power ISA 2.06 (Power7 instructions)

* General correctness and performance improvements for ARM/Linux, and,
 by extension, ARM/Android.

* Further solidification of support for SSE 4.2 in 64-bit mode. AVX
 instruction set support is under development but is not available in
 this release.

* Support for AIX5 has been removed.

* ==================== TOOL CHANGES ====================

* Memcheck: some incremental changes:

 - reduction of memory use in some circumstances

 - improved handling of freed memory, which in some circumstances
 can cause detection of use-after-free that would previously have
 been missed

 - fix of a longstanding bug that could cause false negatives (missed
 errors) in programs doing vector saturated narrowing instructions.

* Helgrind: performance improvements and major memory use reductions,
 particularly for large, long running applications which perform many
 synchronisation (lock, unlock, etc) events. Plus many smaller
 changes:

 - display of locksets for both threads involved in a race

 - general improvements in formatting/clarity of error messages

 - addition of facilities and documentation regarding annotation
 of thread safe reference counted C++ classes

 - new flag --check-stack-refs=no|yes [yes], to disable race checking
 on thread stacks (a performance hack)

 - new flag --free-is-write=no|yes [no], to enable detection of races
 where one thread accesses heap memory but another one frees it,
 without any coordinating synchronisation event

* DRD: enabled XML output; added support for delayed thread deletion
 in order to detect races that occur close to the end of a thread
 (--join-list-vol); fixed a memory leak triggered by repeated client
 memory allocation and deallocation; improved Darwin support.

* exp-ptrcheck: this tool has been renamed to exp-sgcheck

* exp-sgcheck: this tool has been reduced in scope so as to improve
 performance and remove checking that Memcheck does better.
 Specifically, the ability to check for overruns for stack and global
 arrays is unchanged, but the ability to check for overruns of heap
 blocks has been removed. The tool has accordingly been renamed to
 exp-sgcheck ("Stack and Global Array Checking").

* ==================== OTHER CHANGES ====================

* GDB server: Valgrind now has an embedded GDB server. That means it
 is possible to control a Valgrind run from GDB, doing all the usual
 things that GDB can do (single stepping, breakpoints, examining
 data, etc). Tool-specific functionality is also available. For
 example, it is possible to query the definedness state of variables
 or memory from within GDB when running Memcheck; arbitrarily large
 memory watchpoints are supported, etc. To use the GDB server, start
 Valgrind with the flag --vgdb-error=0 and follow the on-screen
 instructions.
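
 An illustrative session ("./myprog" and the monitor command are
 placeholders, not from the original text):

   $ valgrind --vgdb-error=0 ./myprog
   # ... then, in another terminal:
   $ gdb ./myprog
   (gdb) target remote | vgdb
   (gdb) monitor leak_check summary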

* Improved support for unfriendly self-modifying code: a new option
 --smc-check=all-non-file is available. This adds the relevant
 consistency checks only to code that originates in non-file-backed
 mappings. In effect this confines the consistency checking only to
 code that is or might be JIT generated, and avoids checks on code
 that must have been compiled ahead of time. This significantly
 improves performance on applications that generate code at run time.

* It is now possible to build a working Valgrind using Clang-2.9 on
 Linux.

* new client requests VALGRIND_{DISABLE,ENABLE}_ERROR_REPORTING.
 These enable and disable error reporting on a per-thread, and
 nestable, basis. This is useful for hiding errors in particularly
 troublesome pieces of code. The MPI wrapper library (libmpiwrap.c)
 now uses this facility.
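
 A minimal sketch of the intended usage pattern (hypothetical code,
 not from the release notes):

   #include <valgrind/valgrind.h>

   extern void noisy_third_party_function(void);   /* hypothetical */

   void call_noisy_code(void)
   {
       /* Turn off error reporting for this thread while inside code we
          cannot fix; the requests nest, so re-enabling restores the
          previous state. */
       VALGRIND_DISABLE_ERROR_REPORTING;
       noisy_third_party_function();
       VALGRIND_ENABLE_ERROR_REPORTING;
   }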

* Added the --mod-funcname option to cg_diff.

* ==================== FIXED BUGS ====================

The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than
mailing the developers (or mailing lists) directly -- bugs that are
not entered into bugzilla tend to get forgotten about or ignored.

To see details of a given bug, visit
https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.

 79311 malloc silly arg warning does not give stack trace
210935 port valgrind.h (not valgrind) to win32 to support client requests
214223 valgrind SIGSEGV on startup gcc 4.4.1 ppc32 (G4) Ubuntu 9.10
243404 Port to zSeries
243935 Helgrind: incorrect handling of ANNOTATE_HAPPENS_BEFORE()/AFTER()
247223 non-x86: Suppress warning: 'regparm' attribute directive ignored
250101 huge "free" memory usage due to m_mallocfree.c fragmentation
253206 Some fixes for the faultstatus testcase
255223 capget testcase fails when running as root
256703 xlc_dbl_u32.c testcase broken
256726 Helgrind tests have broken inline asm
259977 == 214223 (Valgrind segfaults doing __builtin_longjmp)
264800 testcase compile failure on zseries
265762 make public VEX headers compilable by G++ 3.x
265771 assertion in jumps.c (r11523) fails with glibc-2.3
266753 configure script does not give the user the option to not use QtCore
266931 gen_insn_test.pl is broken
266961 ld-linux.so.2 i?86-linux strlen issues
266990 setns instruction causes false positive
267020 Make directory for temporary files configurable at run-time.
267342 == 267997 (segmentation fault on Mac OS 10.6)
267383 Assertion 'vgPlain_strlen(dir) + vgPlain_strlen(file) + 1 < 256' failed
267413 Assertion 'DRD_(g_threadinfo)[tid].synchr_nesting >= 1' failed.
267488 regtest: darwin support for 64-bit build
267552 SIGSEGV (misaligned_stack_error) with DRD, but not with other tools
267630 Add support for IBM Power ISA 2.06 -- stage 1
267769 == 267997 (Darwin: memcheck triggers segmentation fault)
267819 Add client request for informing the core about reallocation
267925 laog data structure quadratic for a single sequence of lock
267968 drd: (vgDrd_thread_set_joinable): Assertion '0 <= (int)tid ..' failed
267997 MacOSX: 64-bit V segfaults on launch when built with Xcode 4.0.1
268513 missed optimizations in fold_Expr
268619 s390x: fpr - gpr transfer facility
268620 s390x: reconsider "long displacement" requirement
268621 s390x: improve IR generation for XC
268715 s390x: FLOGR is not universally available
268792 == 267997 (valgrind seg faults on startup when compiled with Xcode 4)
268930 s390x: MHY is not universally available
269078 arm->IR: unhandled instruction SUB (SP minus immediate/register)
269079 Support ptrace system call on ARM
269144 missing "Bad option" error message
269209 conditional load and store facility (z196)
269354 Shift by zero on x86 can incorrectly clobber CC_NDEP
269641 == 267997 (valgrind segfaults immediately (segmentation fault))
269736 s390x: minor code generation tweaks
269778 == 272986 (valgrind.h: swap roles of VALGRIND_DO_CLIENT_REQUEST() ..)
269863 s390x: remove unused function parameters
269864 s390x: tweak s390_emit_load_cc
269884 == 250101 (overhead for huge blocks exhausts space too soon)
270082 s390x: Make sure to point the PSW address to the next address on SIGILL
270115 s390x: rewrite some testcases
270309 == 267997 (valgrind crash on startup)
270320 add support for Linux FIOQSIZE ioctl() call
270326 segfault while trying to sanitize the environment passed to execle
270794 IBM POWER7 support patch causes regression in none/tests
270851 IBM POWER7 fcfidus instruction causes memcheck to fail
270856 IBM POWER7 xsnmaddadp instruction causes memcheck to fail on 32bit app
270925 hyper-optimized strspn() in /lib64/libc-2.13.so needs fix
270959 s390x: invalid use of R0 as base register
271042 VSX configure check fails when it should not
271043 Valgrind build fails with assembler error on ppc64 with binutils 2.21
271259 s390x: fix code confusion
271337 == 267997 (Valgrind segfaults on MacOS X)
271385 s390x: Implement Ist_MBE
271501 s390x: misc cleanups
271504 s390x: promote likely and unlikely
271579 ppc: using wrong enum type
271615 unhandled instruction "popcnt" (arch=amd10h)
271730 Fix bug when checking ioctls: duplicate check
271776 s390x: provide STFLE instruction support
271779 s390x: provide clock instructions like STCK
271799 Darwin: ioctls without an arg report a memory error
271820 arm: fix type confusion
271917 pthread_cond_timedwait failure leads to not-locked false positive
272067 s390x: fix DISP20 macro
272615 A typo in debug output in mc_leakcheck.c
272661 callgrind_annotate chokes when run from paths containing regex chars
272893 amd64->IR: 0x66 0xF 0x38 0x2B 0xC1 0x66 0xF 0x7F == (closed as dup)
272955 Unhandled syscall error for pwrite64 on ppc64 arch
272967 make documentation build-system more robust
272986 Fix gcc-4.6 warnings with valgrind.h
273318 amd64->IR: 0x66 0xF 0x3A 0x61 0xC1 0x38 (missing PCMPxSTRx case)
273318 unhandled PCMPxSTRx case: vex amd64->IR: 0x66 0xF 0x3A 0x61 0xC1 0x38
273431 valgrind segfaults in evalCfiExpr (debuginfo.c:2039)
273465 Callgrind: jumps.c:164 (new_jcc): Assertion '(0 <= jmp) && ...'
273536 Build error: multiple definition of `vgDrd_pthread_cond_initializer'
273640 ppc64-linux: unhandled syscalls setresuid(164) and setresgid(169)
273729 == 283000 (Illegal opcode for SSE2 "roundsd" instruction)
273778 exp-ptrcheck: unhandled sysno == 259
274089 exp-ptrcheck: unhandled sysno == 208
274378 s390x: Various dispatcher tweaks
274447 WARNING: unhandled syscall: 340
274776 amd64->IR: 0x66 0xF 0x38 0x2B 0xC5 0x66
274784 == 267997 (valgrind ls -l results in Segmentation Fault)
274926 valgrind does not build against linux-3
275148 configure FAIL with glibc-2.14
275151 Fedora 15 / glibc-2.14 'make regtest' FAIL
275168 Make Valgrind work for MacOSX 10.7 Lion
275212 == 275284 (lots of false positives from __memcpy_ssse3_back et al)
275278 valgrind does not build on Linux kernel 3.0.* due to silly
275284 Valgrind memcpy/memmove redirection stopped working in glibc 2.14/x86_64
275308 Fix implementation for ppc64 fres instruc
275339 s390x: fix testcase compile warnings
275517 s390x: Provide support for CKSM instruction
275710 s390x: get rid of redundant address mode calculation
275815 == 247894 (Valgrind doesn't know about Linux readahead(2) syscall)
275852 == 250101 (valgrind uses all swap space and is killed)
276784 Add support for IBM Power ISA 2.06 -- stage 3
276987 gdbsrv: fix tests following recent commits
277045 Valgrind crashes with unhandled DW_OP_ opcode 0x2a
277199 The test_isa_2_06_part1.c in none/tests/ppc64 should be a symlink
277471 Unhandled syscall: 340
277610 valgrind crashes in VG_(lseek)(core_fd, phdrs[idx].p_offset, ...)
277653 ARM: support Thumb2 PLD instruction
277663 ARM: NEON float VMUL by scalar incorrect
277689 ARM: tests for VSTn with register post-index are broken
277694 ARM: BLX LR instruction broken in ARM mode
277780 ARM: VMOV.F32 (immediate) instruction is broken
278057 fuse filesystem syscall deadlocks
278078 Unimplemented syscall 280 on ppc32
278349 F_GETPIPE_SZ and F_SETPIPE_SZ Linux fcntl commands
278454 VALGRIND_STACK_DEREGISTER has wrong output type
278502 == 275284 (Valgrind confuses memcpy() and memmove())
278892 gdbsrv: factorize gdb version handling, fix doc and typos
279027 Support for MVCL and CLCL instruction
279027 s390x: Provide support for CLCL and MVCL instructions
279062 Remove a redundant check in the insn selector for ppc.
279071 JDK creates PTEST with redundant REX.W prefix
279212 gdbsrv: add monitor cmd v.info scheduler.
279378 exp-ptrcheck: the 'impossible' happened on mkfifo call
279698 memcheck discards valid-bits for packuswb
279795 memcheck reports uninitialised values for mincore on amd64
279994 Add support for IBM Power ISA 2.06 -- stage 3
280083 mempolicy syscall check errors
280290 vex amd64->IR: 0x66 0xF 0x38 0x28 0xC1 0x66 0xF 0x6F
280710 s390x: config files for nightly builds
280757 /tmp dir still used by valgrind even if TMPDIR is specified
280965 Valgrind breaks fcntl locks when program does mmap
281138 WARNING: unhandled syscall: 340
281241 == 275168 (valgrind useless on Macos 10.7.1 Lion)
281304 == 275168 (Darwin: dyld "cannot load inserted library")
281305 == 275168 (unhandled syscall: unix:357 on Darwin 11.1)
281468 s390x: handle do_clone and gcc clones in call traces
281488 ARM: VFP register corruption
281828 == 275284 (false memmove warning: "Source and destination overlap")
281883 s390x: Fix system call wrapper for "clone".
282105 generalise 'reclaimSuperBlock' to also reclaim splittable superblock
282112 Unhandled instruction bytes: 0xDE 0xD9 0x9B 0xDF (fcompp)
282238 SLES10: make check fails
282979 strcasestr needs replacement with recent(>=2.12) glibc
283000 vex amd64->IR: 0x66 0xF 0x3A 0xA 0xC0 0x9 0xF3 0xF
283243 Regression in ppc64 memcheck tests
283325 == 267997 (Darwin: V segfaults on startup when built with Xcode 4.0)
283427 re-connect epoll_pwait syscall on ARM linux
283600 gdbsrv: android: port vgdb.c
283709 none/tests/faultstatus needs to account for page size
284305 filter_gdb needs enhancement to work on ppc64
284384 clang 3.1 -Wunused-value warnings in valgrind.h, memcheck.h
284472 Thumb2 ROR.W encoding T2 not implemented
284621 XML-escape process command line in XML output
n-i-bz cachegrind/callgrind: handle CPUID information for Core iX Intel CPUs
 that have non-power-of-2 sizes (also AMDs)
n-i-bz don't be spooked by libraries mashed by elfhack
n-i-bz don't be spooked by libxul.so linked with gold
n-i-bz improved checking for VALGRIND_CHECK_MEM_IS_DEFINED
+<br>
+(3.7.0-TEST1: 27 October 2011, vex r2228, valgrind r12245)<br>
+(3.7.0.RC1: 1 November 2011, vex r2231, valgrind r12257)<br>
+(3.7.0: 5 November 2011, vex r2231, valgrind r12258)<br>
+<br>
+<br>
+<br>
+Release 3.6.1 (16 February 2011)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.6.1 is a bug fix release. It adds support for some SSE4<br>
+instructions that were omitted in 3.6.0 due to lack of time. Initial<br>
+support for glibc-2.13 has been added. A number of bugs causing<br>
+crashing or assertion failures have been fixed.<br>
+<br>
+The following bugs have been fixed or resolved. Note that "n-i-bz"<br>
+stands for "not in bugzilla" -- that is, a bug that was reported to us<br>
+but never got a bugzilla entry. We encourage you to file bugs in<br>
+bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than<br>
+mailing the developers (or mailing lists) directly -- bugs that are<br>
+not entered into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+To see details of a given bug, visit<br>
+https://bugs.kde.org/show_bug.cgi?id=XXXXXX<br>
+where XXXXXX is the bug number as listed below.<br>
+<br>
+188572 Valgrind on Mac should suppress setenv() mem leak<br>
+194402 vex amd64->IR: 0x48 0xF 0xAE 0x4 (proper FX{SAVE,RSTOR} support)<br>
+210481 vex amd64->IR: Assertion `sz == 2 || sz == 4' failed (REX.W POPQ)<br>
+246152 callgrind internal error after pthread_cancel on 32 Bit Linux<br>
+250038 ppc64: Altivec LVSR and LVSL instructions fail their regtest<br>
+254420 memory pool tracking broken <br>
+254957 Test code failing to compile due to changes in memcheck.h<br>
+255009 helgrind/drd: crash on chmod with invalid parameter<br>
+255130 readdwarf3.c parse_type_DIE confused by GNAT Ada types<br>
+255355 helgrind/drd: crash on threaded programs doing fork<br>
+255358 == 255355<br>
+255418 (SSE4.x) rint call compiled with ICC<br>
+255822 --gen-suppressions can create invalid files: "too many callers [...]"<br>
+255888 closing valgrindoutput tag outputted to log-stream on error<br>
+255963 (SSE4.x) vex amd64->IR: 0x66 0xF 0x3A 0x9 0xDB 0x0 (ROUNDPD)<br>
+255966 Slowness when using mempool annotations<br>
+256387 vex x86->IR: 0xD4 0xA 0x2 0x7 (AAD and AAM)<br>
+256600 super-optimized strcasecmp() false positive<br>
+256669 vex amd64->IR: Unhandled LOOPNEL insn on amd64<br>
+256968 (SSE4.x) vex amd64->IR: 0x66 0xF 0x38 0x10 0xD3 0x66 (BLENDVPx)<br>
+257011 (SSE4.x) vex amd64->IR: 0x66 0xF 0x3A 0xE 0xFD 0xA0 (PBLENDW)<br>
+257063 (SSE4.x) vex amd64->IR: 0x66 0xF 0x3A 0x8 0xC0 0x0 (ROUNDPS)<br>
+257276 Missing case in memcheck --track-origins=yes<br>
+258870 (SSE4.x) Add support for EXTRACTPS SSE 4.1 instruction<br>
+261966 (SSE4.x) support for CRC32B and CRC32Q is lacking (also CRC32{W,L})<br>
+262985 VEX regression in valgrind 3.6.0 in handling PowerPC VMX<br>
+262995 (SSE4.x) crash when trying to valgrind gcc-snapshot (PCMPxSTRx $0)<br>
+263099 callgrind_annotate counts Ir improperly [...]<br>
+263877 undefined coprocessor instruction on ARMv7<br>
+265964 configure FAIL with glibc-2.13<br>
+n-i-bz Fix compile error w/ icc-12.x in guest_arm_toIR.c<br>
+n-i-bz Docs: fix bogus descriptions for VALGRIND_CREATE_BLOCK et al<br>
+n-i-bz Massif: don't assert on shmat() with --pages-as-heap=yes<br>
+n-i-bz Bug fixes and major speedups for the exp-DHAT space profiler<br>
+n-i-bz DRD: disable --free-is-write due to implementation difficulties<br>
+<br>
+(3.6.1: 16 February 2011, vex r2103, valgrind r11561).<br>
+<br>
+<br>
+<br>
+Release 3.6.0 (21 October 2010)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.6.0 is a feature release with many significant improvements and the<br>
+usual collection of bug fixes.<br>
+<br>
+This release supports X86/Linux, AMD64/Linux, ARM/Linux, PPC32/Linux,<br>
+PPC64/Linux, X86/Darwin and AMD64/Darwin. Support for recent distros<br>
+and toolchain components (glibc 2.12, gcc 4.5, OSX 10.6) has been added.<br>
+<br>
+ -------------------------<br>
+<br>
+Here are some highlights. Details are shown further down:<br>
+<br>
+* Support for ARM/Linux.<br>
+<br>
+* Support for recent Linux distros: Ubuntu 10.10 and Fedora 14.<br>
+<br>
+* Support for Mac OS X 10.6, both 32- and 64-bit executables.<br>
+<br>
+* Support for the SSE4.2 instruction set.<br>
+<br>
+* Enhancements to the Callgrind profiler, including the ability to<br>
+ handle CPUs with three levels of cache.<br>
+<br>
+* A new experimental heap profiler, DHAT.<br>
+<br>
+* A huge number of bug fixes and small enhancements.<br>
+<br>
+ -------------------------<br>
+<br>
+Here are details of the above changes, together with descriptions of<br>
+many other changes, and a list of fixed bugs.<br>
+<br>
+* ================== PLATFORM CHANGES =================<br>
+<br>
+* Support for ARM/Linux. Valgrind now runs on ARMv7 capable CPUs<br>
+ running Linux. It is known to work on Ubuntu 10.04, Ubuntu 10.10,<br>
+ and Maemo 5, so you can run Valgrind on your Nokia N900 if you want.<br>
+<br>
+ This requires a CPU capable of running the ARMv7-A instruction set<br>
+ (Cortex A5, A8 and A9). Valgrind provides fairly complete coverage<br>
+ of the user space instruction set, including ARM and Thumb integer<br>
+ code, VFPv3, NEON and V6 media instructions. The Memcheck,<br>
+ Cachegrind and Massif tools work properly; other tools work to<br>
+ varying degrees.<br>
+<br>
+* Support for recent Linux distros (Ubuntu 10.10 and Fedora 14), along<br>
+ with support for recent releases of the underlying toolchain<br>
+ components, notably gcc-4.5 and glibc-2.12.<br>
+<br>
+* Support for Mac OS X 10.6, both 32- and 64-bit executables. 64-bit<br>
+ support also works much better on OS X 10.5, and is as solid as<br>
+ 32-bit support now.<br>
+<br>
+* Support for the SSE4.2 instruction set. SSE4.2 is supported in<br>
+ 64-bit mode. In 32-bit mode, support is only available up to and<br>
+ including SSSE3. Some exceptions: SSE4.2 AES instructions are not<br>
+ supported in 64-bit mode, and 32-bit mode does in fact support the<br>
+ bare minimum SSE4 instructions needed to run programs on Mac OS X<br>
+ 10.6 on 32-bit targets.<br>
+<br>
+* Support for IBM POWER6 cpus has been improved. The Power ISA up to<br>
+ and including version 2.05 is supported.<br>
+<br>
+* ==================== TOOL CHANGES ====================<br>
+<br>
+* Cachegrind has a new processing script, cg_diff, which finds the<br>
+ difference between two profiles. It's very useful for evaluating<br>
+ the performance effects of a change in a program.<br>
+ <br>
+ Related to this change, the meaning of cg_annotate's (rarely-used)<br>
+ --threshold option has changed; this is unlikely to affect many<br>
+ people, if you do use it please see the user manual for details.<br>
+<br>
+* Callgrind now can do branch prediction simulation, similar to<br>
+ Cachegrind. In addition, it optionally can count the number of<br>
+ executed global bus events. Both can be used for a better<br>
+ approximation of a "Cycle Estimation" as derived event (you need to<br>
+ update the event formula in KCachegrind yourself).<br>
+<br>
+* Cachegrind and Callgrind now refer to the LL (last-level) cache<br>
+ rather than the L2 cache. This is to accommodate machines with<br>
+ three levels of caches -- if Cachegrind/Callgrind auto-detects the<br>
+ cache configuration of such a machine it will run the simulation as<br>
+ if the L2 cache isn't present. This means the results are less<br>
+ likely to match the true result for the machine, but<br>
+ Cachegrind/Callgrind's results are already only approximate, and<br>
+ should not be considered authoritative. The results are still<br>
+ useful for giving a general idea about a program's locality.<br>
+<br>
+* Massif has a new option, --pages-as-heap, which is disabled by<br>
+ default. When enabled, instead of tracking allocations at the level<br>
+ of heap blocks (as allocated with malloc/new/new[]), it instead<br>
+ tracks memory allocations at the level of memory pages (as mapped by<br>
+ mmap, brk, etc). Each mapped page is treated as its own block.<br>
+ Interpreting the page-level output is harder than the heap-level<br>
+ output, but this option is useful if you want to account for every<br>
+ byte of memory used by a program.<br>
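+<br>
+ As a small illustration (not from the original release notes; the<br>
+ program and command line are only an assumed, typical usage), the<br>
+ following allocation is invisible to the default heap-level profiling<br>
+ but is tracked when --pages-as-heap=yes is given:<br>
+<br>
+ #include <sys/mman.h><br>
+<br>
+ int main(void)<br>
+ {<br>
+     /* 64 pages obtained directly from the kernel (Linux-specific<br>
+        flags); malloc is never involved. */<br>
+     void *p = mmap(NULL, 64 * 4096, PROT_READ | PROT_WRITE,<br>
+                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);<br>
+     if (p == MAP_FAILED)<br>
+         return 1;<br>
+     munmap(p, 64 * 4096);<br>
+     return 0;<br>
+ }<br>
+<br>
+ /* Assumed invocation: valgrind --tool=massif --pages-as-heap=yes ./a.out */<br>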
+<br>
+* DRD has two new command-line options: --free-is-write and<br>
+ --trace-alloc. The former makes it possible to detect reads from<br>
+ already freed memory, and the latter traces all memory allocations and<br>
+ deallocations.<br>
+<br>
+* DRD has several new annotations. Custom barrier implementations can<br>
+ now be annotated, as well as benign races on static variables.<br>
+<br>
+* DRD's happens before / happens after annotations have been made more<br>
+ powerful, so that they can now also be used to annotate e.g. a smart<br>
+ pointer implementation.<br>
+<br>
+* Helgrind's annotation set has also been drastically improved, so as<br>
+ to provide to users a general set of annotations to describe locks,<br>
+ semaphores, barriers and condition variables. Annotations to<br>
+ describe thread-safe reference counted heap objects have also been<br>
+ added.<br>
+<br>
+* Memcheck has a new command-line option, --show-possibly-lost, which<br>
+ is enabled by default. When disabled, the leak detector will not<br>
+ show possibly-lost blocks.<br>
+<br>
+* A new experimental heap profiler, DHAT (Dynamic Heap Analysis Tool),<br>
+ has been added. DHAT keeps track of allocated heap blocks, and also<br>
+ inspects every memory reference to see which block (if any) is being<br>
+ accessed. This gives a lot of insight into block lifetimes,<br>
+ utilisation, turnover, liveness, and the location of hot and cold<br>
+ fields. You can use DHAT to do hot-field profiling.<br>
+<br>
+* ==================== OTHER CHANGES ====================<br>
+<br>
+* Improved support for unfriendly self-modifying code: the extra<br>
+ overhead incurred by --smc-check=all has been reduced by<br>
+ approximately a factor of 5 as compared with 3.5.0.<br>
+<br>
+* Ability to show directory names for source files in error messages.<br>
+ This is combined with a flexible mechanism for specifying which<br>
+ parts of the paths should be shown. This is enabled by the new flag<br>
+ --fullpath-after.<br>
+<br>
+* A new flag, --require-text-symbol, which will stop the run if a<br>
+ specified symbol is not found in a given shared object when it is<br>
+ loaded into the process. This makes advanced working with function<br>
+ intercepting and wrapping safer and more reliable.<br>
+<br>
+* Improved support for the Valkyrie GUI, version 2.0.0. GUI output<br>
+ and control of Valgrind is now available for the tools Memcheck and<br>
+ Helgrind. XML output from Valgrind is available for Memcheck,<br>
+ Helgrind and exp-Ptrcheck.<br>
+<br>
+* More reliable stack unwinding on amd64-linux, particularly in the<br>
+ presence of function wrappers, and with gcc-4.5 compiled code.<br>
+<br>
+* Modest scalability (performance improvements) for massive<br>
+ long-running applications, particularly for those with huge amounts<br>
+ of code.<br>
+<br>
+* Support for analyzing programs running under Wine has been<br>
+ improved. The header files <valgrind/valgrind.h>,<br>
+ <valgrind/memcheck.h> and <valgrind/drd.h> can now be used in<br>
+ Windows-programs compiled with MinGW or one of the Microsoft Visual<br>
+ Studio compilers.<br>
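+<br>
+ A minimal sketch (not from the original notes) of using the header in<br>
+ a program that may or may not run under Valgrind; RUNNING_ON_VALGRIND<br>
+ is the standard macro from <valgrind/valgrind.h>, the rest is<br>
+ illustrative:<br>
+<br>
+ #include <stdio.h><br>
+ #include <valgrind/valgrind.h><br>
+<br>
+ int main(void)<br>
+ {<br>
+     /* The client-request macros compile to a few harmless no-op-like<br>
+        instructions when the program is not run under Valgrind. */<br>
+     if (RUNNING_ON_VALGRIND)<br>
+         printf("running under Valgrind\n");<br>
+     else<br>
+         printf("running natively\n");<br>
+     return 0;<br>
+ }<br>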
+<br>
+* A rare but serious error in the 64-bit x86 CPU simulation was fixed.<br>
+ The 32-bit simulator was not affected. This did not occur often,<br>
+ but when it did would usually crash the program under test.<br>
+ Bug 245925.<br>
+<br>
+* A large number of bugs were fixed. These are shown below.<br>
+<br>
+* A number of bugs were investigated, and were candidates for fixing,<br>
+ but are not fixed in 3.6.0, due to lack of developer time. They may<br>
+ get fixed in later releases. They are:<br>
+<br>
+ 194402 vex amd64->IR: 0x48 0xF 0xAE 0x4 0x24 0x49 (FXSAVE64)<br>
+ 212419 false positive "lock order violated" (A+B vs A) <br>
+ 213685 Undefined value propagates past dependency breaking instruction<br>
+ 216837 Incorrect instrumentation of NSOperationQueue on Darwin <br>
+ 237920 valgrind segfault on fork failure <br>
+ 242137 support for code compiled by LLVM-2.8<br>
+ 242423 Another unknown Intel cache config value <br>
+ 243232 Inconsistent Lock Orderings report with trylock <br>
+ 243483 ppc: callgrind triggers VEX assertion failure <br>
+ 243935 Helgrind: implementation of ANNOTATE_HAPPENS_BEFORE() is wrong<br>
+ 244677 Helgrind crash hg_main.c:616 (map_threads_lookup): Assertion<br>
+ 'thr' failed. <br>
+ 246152 callgrind internal error after pthread_cancel on 32 Bit Linux <br>
+ 249435 Analyzing wine programs with callgrind triggers a crash <br>
+ 250038 ppc64: Altivec lvsr and lvsl instructions fail their regtest<br>
+ 250065 Handling large allocations <br>
+ 250101 huge "free" memory usage due to m_mallocfree.c<br>
+ "superblocks fragmentation"<br>
+ 251569 vex amd64->IR: 0xF 0x1 0xF9 0x8B 0x4C 0x24 (RDTSCP)<br>
+ 252091 Callgrind on ARM does not detect function returns correctly<br>
+ 252600 [PATCH] Allow lhs to be a pointer for shl/shr<br>
+ 254420 memory pool tracking broken<br>
+ n-i-bz support for adding symbols for JIT generated code<br>
+<br>
+<br>
+The following bugs have been fixed or resolved. Note that "n-i-bz"<br>
+stands for "not in bugzilla" -- that is, a bug that was reported to us<br>
+but never got a bugzilla entry. We encourage you to file bugs in<br>
+bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than<br>
+mailing the developers (or mailing lists) directly -- bugs that are<br>
+not entered into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+To see details of a given bug, visit<br>
+https://bugs.kde.org/show_bug.cgi?id=XXXXXX<br>
+where XXXXXX is the bug number as listed below.<br>
+<br>
+135264 dcbzl instruction missing<br>
+142688 == 250799<br>
+153699 Valgrind should report unaligned reads with movdqa<br>
+180217 == 212335<br>
+190429 Valgrind reports lost of errors in ld.so<br>
+ with x86_64 2.9.90 glibc <br>
+197266 valgrind appears to choke on the xmms instruction<br>
+ "roundsd" on x86_64 <br>
+197988 Crash when demangling very large symbol names<br>
+202315 unhandled syscall: 332 (inotify_init1)<br>
+203256 Add page-level profiling to Massif<br>
+205093 dsymutil=yes needs quotes, locking (partial fix)<br>
+205241 Snow Leopard 10.6 support (partial fix)<br>
+206600 Leak checker fails to upgrade indirect blocks when their<br>
+ parent becomes reachable <br>
+210935 port valgrind.h (not valgrind) to win32 so apps run under<br>
+ wine can make client requests<br>
+211410 vex amd64->IR: 0x15 0xFF 0xFF 0x0 0x0 0x89<br>
+ within Linux ip-stack checksum functions <br>
+212335 unhandled instruction bytes: 0xF3 0xF 0xBD 0xC0<br>
+ (lzcnt %eax,%eax) <br>
+213685 Undefined value propagates past dependency breaking instruction<br>
+ (partial fix)<br>
+215914 Valgrind inserts bogus empty environment variable <br>
+217863 == 197988<br>
+219538 adjtimex syscall wrapper wrong in readonly adjtime mode <br>
+222545 shmat fails under valgind on some arm targets <br>
+222560 ARM NEON support <br>
+230407 == 202315<br>
+231076 == 202315<br>
+232509 Docs build fails with formatting inside <title></title> elements <br>
+232793 == 202315<br>
+235642 [PATCH] syswrap-linux.c: support evdev EVIOCG* ioctls <br>
+236546 vex x86->IR: 0x66 0xF 0x3A 0xA<br>
+237202 vex amd64->IR: 0xF3 0xF 0xB8 0xC0 0x49 0x3B <br>
+237371 better support for VALGRIND_MALLOCLIKE_BLOCK <br>
+237485 symlink (syscall 57) is not supported on Mac OS <br>
+237723 sysno == 101 exp-ptrcheck: the 'impossible' happened:<br>
+ unhandled syscall <br>
+238208 is_just_below_ESP doesn't take into account red-zone <br>
+238345 valgrind passes wrong $0 when executing a shell script <br>
+238679 mq_timedreceive syscall doesn't flag the reception buffer<br>
+ as "defined"<br>
+238696 fcntl command F_DUPFD_CLOEXEC not supported <br>
+238713 unhandled instruction bytes: 0x66 0xF 0x29 0xC6 <br>
+238745 3.5.0 Make fails on PPC Altivec opcodes, though configure<br>
+ says "Altivec off"<br>
+239992 vex amd64->IR: 0x48 0xF 0xC4 0xC1 0x0 0x48 <br>
+240488 == 197988<br>
+240639 == 212335<br>
+241377 == 236546<br>
+241903 == 202315<br>
+241920 == 212335<br>
+242606 unhandled syscall: setegid (in Ptrcheck)<br>
+242814 Helgrind "Impossible has happened" during<br>
+ QApplication::initInstance(); <br>
+243064 Valgrind attempting to read debug information from iso <br>
+243270 Make stack unwinding in Valgrind wrappers more reliable<br>
+243884 exp-ptrcheck: the 'impossible happened: unhandled syscall <br>
+ sysno = 277 (mq_open)<br>
+244009 exp-ptrcheck unknown syscalls in analyzing lighttpd<br>
+244493 ARM VFP d16-d31 registers support <br>
+244670 add support for audit_session_self syscall on Mac OS 10.6<br>
+244921 The xml report of helgrind tool is not well format<br>
+244923 In the xml report file, the <preamble> not escape the <br>
+ xml char, eg '<','&','>'<br>
+245535 print full path names in plain text reports <br>
+245925 x86-64 red zone handling problem <br>
+246258 Valgrind not catching integer underruns + new [] s<br>
+246311 reg/reg cmpxchg doesn't work on amd64<br>
+246549 unhandled syscall unix:277 while testing 32-bit Darwin app <br>
+246888 Improve Makefile.vex.am <br>
+247510 [OS X 10.6] Memcheck reports unaddressable bytes passed <br>
+ to [f]chmod_extended<br>
+247526 IBM POWER6 (ISA 2.05) support is incomplete<br>
+247561 Some leak testcases fails due to reachable addresses in<br>
+ caller save regs<br>
+247875 sizeofIRType to handle Ity_I128 <br>
+247894 [PATCH] unhandled syscall sys_readahead <br>
+247980 Doesn't honor CFLAGS passed to configure <br>
+248373 darwin10.supp is empty in the trunk <br>
+248822 Linux FIBMAP ioctl has int parameter instead of long<br>
+248893 [PATCH] make readdwarf.c big endianess safe to enable<br>
+ unwinding on big endian systems<br>
+249224 Syscall 336 not supported (SYS_proc_info) <br>
+249359 == 245535<br>
+249775 Incorrect scheme for detecting NEON capabilities of host CPU<br>
+249943 jni JVM init fails when using valgrind<br>
+249991 Valgrind incorrectly declares AESKEYGENASSIST support<br>
+ since VEX r2011<br>
+249996 linux/arm: unhandled syscall: 181 (__NR_pwrite64)<br>
+250799 frexp$fenv_access_off function generates SIGILL <br>
+250998 vex x86->IR: unhandled instruction bytes: 0x66 0x66 0x66 0x2E <br>
+251251 support pclmulqdq insn <br>
+251362 valgrind: ARM: attach to debugger either fails or provokes<br>
+ kernel oops <br>
+251674 Unhandled syscall 294<br>
+251818 == 254550<br>
+<br>
+254257 Add support for debugfiles found by build-id<br>
+254550 [PATCH] Implement DW_ATE_UTF (DWARF4)<br>
+254646 Wrapped functions cause stack misalignment on OS X<br>
+ (and possibly Linux)<br>
+254556 ARM: valgrinding anything fails with SIGSEGV for 0xFFFF0FA0<br>
+<br>
+(3.6.0: 21 October 2010, vex r2068, valgrind r11471).<br>
+<br>
+<br>
+<br>
+Release 3.5.0 (19 August 2009)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.5.0 is a feature release with many significant improvements and the<br>
+usual collection of bug fixes. The main improvement is that Valgrind<br>
+now works on Mac OS X.<br>
+<br>
+This release supports X86/Linux, AMD64/Linux, PPC32/Linux, PPC64/Linux<br>
+and X86/Darwin. Support for recent distros and toolchain components<br>
+(glibc 2.10, gcc 4.5) has been added.<br>
+<br>
+ -------------------------<br>
+<br>
+Here is a short summary of the changes. Details are shown further<br>
+down:<br>
+<br>
+* Support for Mac OS X (10.5.x).<br>
+<br>
+* Improvements and simplifications to Memcheck's leak checker.<br>
+<br>
+* Clarification and simplifications in various aspects of Valgrind's<br>
+ text output.<br>
+<br>
+* XML output for Helgrind and Ptrcheck.<br>
+<br>
+* Performance and stability improvements for Helgrind and DRD.<br>
+<br>
+* Genuinely atomic support for x86/amd64/ppc atomic instructions.<br>
+<br>
+* A new experimental tool, BBV, useful for computer architecture<br>
+ research.<br>
+<br>
+* Improved Wine support, including ability to read Windows PDB<br>
+ debuginfo.<br>
+<br>
+ -------------------------<br>
+<br>
+Here are details of the above changes, followed by descriptions of<br>
+many other minor changes, and a list of fixed bugs.<br>
+<br>
+<br>
+* Valgrind now runs on Mac OS X. (Note that Mac OS X is sometimes<br>
+ called "Darwin" because that is the name of the OS core, which is the<br>
+ level that Valgrind works at.)<br>
+<br>
+ Supported systems:<br>
+<br>
+ - It requires OS 10.5.x (Leopard). Porting to 10.4.x is not planned<br>
+ because it would require work and 10.4 is only becoming less common.<br>
+<br>
+ - 32-bit programs on x86 and AMD64 (a.k.a x86-64) machines are supported<br>
+ fairly well. For 10.5.x, 32-bit programs are the default even on<br>
+ 64-bit machines, so it handles most current programs.<br>
+ <br>
+ - 64-bit programs on x86 and AMD64 (a.k.a x86-64) machines are not<br>
+ officially supported, but simple programs at least will probably work.<br>
+ However, start-up is slow.<br>
+<br>
+ - PowerPC machines are not supported.<br>
+<br>
+ Things that don't work:<br>
+<br>
+ - The Ptrcheck tool.<br>
+<br>
+ - Objective-C garbage collection.<br>
+<br>
+ - --db-attach=yes.<br>
+<br>
+ - If you have Rogue Amoeba's "Instant Hijack" program installed,<br>
+ Valgrind will fail with a SIGTRAP at start-up. See<br>
+ https://bugs.kde.org/show_bug.cgi?id=193917 for details and a<br>
+ simple work-around.<br>
+<br>
+ Usage notes:<br>
+<br>
+ - You will likely find --dsymutil=yes a useful option, as error<br>
+ messages may be imprecise without it.<br>
+<br>
+ - Mac OS X support is new and therefore will be less robust than the<br>
+ Linux support. Please report any bugs you find.<br>
+<br>
+ - Threaded programs may run more slowly than on Linux.<br>
+<br>
+ Many thanks to Greg Parker for developing this port over several years.<br>
+<br>
+<br>
+* Memcheck's leak checker has been improved. <br>
+<br>
+ - The results for --leak-check=summary now match the summary results<br>
+ for --leak-check=full. Previously they could differ because<br>
+ --leak-check=summary counted "indirectly lost" blocks and<br>
+ "suppressed" blocks as "definitely lost".<br>
+<br>
+ - Blocks that are only reachable via at least one interior-pointer,<br>
+ but are directly pointed to by a start-pointer, were previously<br>
+ marked as "still reachable". They are now correctly marked as<br>
+ "possibly lost".<br>
+<br>
+ - The default value for the --leak-resolution option has been<br>
+ changed from "low" to "high". In general, this means that more<br>
+ leak reports will be produced, but each leak report will describe<br>
+ fewer leaked blocks.<br>
+<br>
+ - With --leak-check=full, "definitely lost" and "possibly lost"<br>
+ leaks are now considered as proper errors, ie. they are counted<br>
+ for the "ERROR SUMMARY" and affect the behaviour of<br>
+ --error-exitcode. These leaks are not counted as errors if<br>
+ --leak-check=summary is specified, however.<br>
+<br>
+ - Documentation for the leak checker has been improved.<br>
+<br>
+<br>
+* Various aspects of Valgrind's text output have changed.<br>
+<br>
+ - Valgrind's start-up message has changed. It is shorter but also<br>
+ includes the command being run, which makes it easier to use<br>
+ --trace-children=yes.<br>
+<br>
+ - Valgrind's shut-down messages have also changed. This is most<br>
+ noticeable with Memcheck, where the leak summary now occurs before<br>
+ the error summary. This change was necessary to allow leaks to be<br>
+ counted as proper errors (see the description of the leak checker<br>
+ changes above for more details). This was also necessary to fix a<br>
+ longstanding bug in which uses of suppressions against leaks were<br>
+ not "counted", leading to difficulties in maintaining suppression<br>
+ files (see https://bugs.kde.org/show_bug.cgi?id=186790).<br>
+<br>
+ - Behavior of -v has changed. In previous versions, -v printed out<br>
+ a mixture of marginally-user-useful information, and tool/core<br>
+ statistics. The statistics printing has now been moved to its own<br>
+ flag, --stats=yes. This means -v is less verbose and more likely<br>
+ to convey useful end-user information.<br>
+<br>
+ - The format of some (non-XML) stack trace entries has changed a<br>
+ little. Previously there were six possible forms:<br>
+<br>
+ 0x80483BF: really (a.c:20)<br>
+ 0x80483BF: really (in /foo/a.out)<br>
+ 0x80483BF: really<br>
+ 0x80483BF: (within /foo/a.out)<br>
+ 0x80483BF: ??? (a.c:20)<br>
+ 0x80483BF: ???<br>
+<br>
+ The third and fourth of these forms have been made more consistent<br>
+ with the others. The six possible forms are now:<br>
+ <br>
+ 0x80483BF: really (a.c:20)<br>
+ 0x80483BF: really (in /foo/a.out)<br>
+ 0x80483BF: really (in ???)<br>
+ 0x80483BF: ??? (in /foo/a.out)<br>
+ 0x80483BF: ??? (a.c:20)<br>
+ 0x80483BF: ???<br>
+<br>
+ Stack traces produced when --xml=yes is specified are different<br>
+ and unchanged.<br>
+<br>
+<br>
+* Helgrind and Ptrcheck now support XML output, so they can be used<br>
+ from GUI tools. Also, the XML output mechanism has been<br>
+ overhauled.<br>
+<br>
+ - The XML format has been overhauled and generalised, so it is more<br>
+ suitable for error reporting tools in general. The Memcheck<br>
+ specific aspects of it have been removed. The new format, which<br>
+ is an evolution of the old format, is described in<br>
+ docs/internals/xml-output-protocol4.txt.<br>
+<br>
+ - Memcheck has been updated to use the new format.<br>
+<br>
+ - Helgrind and Ptrcheck are now able to emit output in this format.<br>
+<br>
+ - The XML output mechanism has been overhauled. XML is now output<br>
+ to its own file descriptor, which means that:<br>
+<br>
+ * Valgrind can output text and XML independently.<br>
+<br>
+ * The longstanding problem of XML output being corrupted by <br>
+ unexpected un-tagged text messages is solved.<br>
+<br>
+ As before, the destination for text output is specified using<br>
+ --log-file=, --log-fd= or --log-socket=.<br>
+<br>
+ As before, XML output for a tool is enabled using --xml=yes.<br>
+<br>
+ Because there's a new XML output channel, the XML output<br>
+ destination is now specified by --xml-file=, --xml-fd= or<br>
+ --xml-socket=.<br>
+<br>
+ Initial feedback has shown this causes some confusion. To<br>
+ clarify, the two envisaged usage scenarios are:<br>
+<br>
+ (1) Normal text output. In this case, do not specify --xml=yes<br>
+ nor any of --xml-file=, --xml-fd= or --xml-socket=.<br>
+<br>
+ (2) XML output. In this case, specify --xml=yes, and one of<br>
+ --xml-file=, --xml-fd= or --xml-socket= to select the XML<br>
+ destination, one of --log-file=, --log-fd= or --log-socket=<br>
+ to select the destination for any remaining text messages,<br>
+ and, importantly, -q.<br>
+<br>
+ -q makes Valgrind completely silent on the text channel,<br>
+ except in the case of critical failures, such as Valgrind<br>
+ itself segfaulting, or failing to read debugging information.<br>
+ Hence, in this scenario, it suffices to check whether or not<br>
+ any output appeared on the text channel. If yes, then it is<br>
+ likely to be a critical error which should be brought to the<br>
+ attention of the user. If no (the text channel produced no<br>
+ output) then it can be assumed that the run was successful.<br>
+<br>
+ This allows GUIs to make the critical distinction they need to<br>
+ make (did the run fail or not?) without having to search or<br>
+ filter the text output channel in any way.<br>
+<br>
+ It is also recommended to use --child-silent-after-fork=yes in<br>
+ scenario (2).<br>
+<br>
+<br>
+* Improvements and changes in Helgrind:<br>
+<br>
+ - XML output, as described above<br>
+<br>
+ - Checks for consistent association between pthread condition<br>
+ variables and their associated mutexes are now performed.<br>
+<br>
+ - pthread_spinlock functions are supported.<br>
+<br>
+ - Modest performance improvements.<br>
+<br>
+ - Initial (skeletal) support for describing the behaviour of<br>
+ non-POSIX synchronisation objects through ThreadSanitizer<br>
+ compatible ANNOTATE_* macros.<br>
+<br>
+ - More controllable tradeoffs between performance and the level of<br>
+ detail of "previous" accesses in a race. There are now three<br>
+ settings:<br>
+<br>
+ * --history-level=full. This is the default, and was also the<br>
+ default in 3.4.x. It shows both stacks involved in a race, but<br>
+ requires a lot of memory and can be very slow in programs that<br>
+ do many inter-thread synchronisation events.<br>
+<br>
+ * --history-level=none. This only shows the later stack involved<br>
+ in a race. This can be much faster than --history-level=full,<br>
+ but makes it much more difficult to find the other access<br>
+ involved in the race.<br>
+<br>
+ The new intermediate setting is<br>
+<br>
+ * --history-level=approx<br>
+<br>
+ For the earlier (other) access, two stacks are presented. The<br>
+ earlier access is guaranteed to be somewhere in between the two<br>
+ program points denoted by those stacks. This is not as useful<br>
+ as showing the exact stack for the previous access (as per<br>
+ --history-level=full), but it is better than nothing, and it's<br>
+ almost as fast as --history-level=none.<br>
+<br>
+<br>
+* New features and improvements in DRD:<br>
+<br>
+ - The error messages printed by DRD are now easier to interpret.<br>
+ Instead of using two different numbers to identify each thread<br>
+ (Valgrind thread ID and DRD thread ID), DRD now identifies<br>
+ threads via a single number (the DRD thread ID). Furthermore<br>
+ "first observed at" information is now printed for all error<br>
+ messages related to synchronization objects.<br>
+<br>
+ - Added support for named semaphores (sem_open() and sem_close()).<br>
+<br>
+ - Race conditions between pthread_barrier_wait() and<br>
+ pthread_barrier_destroy() calls are now reported.<br>
+<br>
+ - Added support for custom allocators through the macros<br>
+ VALGRIND_MALLOCLIKE_BLOCK() and VALGRIND_FREELIKE_BLOCK() (defined<br>
+ in <valgrind/valgrind.h>). An alternative to these two macros is<br>
+ the new client request VG_USERREQ__DRD_CLEAN_MEMORY (defined in<br>
+ <valgrind/drd.h>). A short sketch of the two macros follows this<br>
+ list.<br>
+<br>
+ - Added support for annotating non-POSIX synchronization objects<br>
+ through several new ANNOTATE_*() macros.<br>
+<br>
+ - OpenMP: added support for the OpenMP runtime (libgomp) included<br>
+ with gcc versions 4.3.0 and 4.4.0.<br>
+<br>
+ - Faster operation.<br>
+<br>
+ - Added two new command-line options (--first-race-only and<br>
+ --segment-merging-interval).<br>
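+<br>
+ The following is a rough sketch of the two macros mentioned above,<br>
+ applied to a hypothetical bump allocator (only VALGRIND_MALLOCLIKE_BLOCK<br>
+ and VALGRIND_FREELIKE_BLOCK come from <valgrind/valgrind.h>; the<br>
+ allocator itself is made up):<br>
+<br>
+ #include <stddef.h><br>
+ #include <valgrind/valgrind.h><br>
+<br>
+ static char   pool[1 << 20];   /* hypothetical backing store */<br>
+ static size_t pool_used;<br>
+<br>
+ void *pool_alloc(size_t n)<br>
+ {<br>
+     void *p = pool + pool_used;<br>
+     pool_used += n;<br>
+     /* Describe p as a heap-like block of n bytes, no red zone,<br>
+        not zero-initialised, so DRD/Memcheck can track it. */<br>
+     VALGRIND_MALLOCLIKE_BLOCK(p, n, 0, 0);<br>
+     return p;<br>
+ }<br>
+<br>
+ void pool_free(void *p)<br>
+ {<br>
+     /* Mark the block as freed again (red zone size 0). */<br>
+     VALGRIND_FREELIKE_BLOCK(p, 0);<br>
+ }<br>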
+<br>
+<br>
+* Genuinely atomic support for x86/amd64/ppc atomic instructions<br>
+<br>
+ Valgrind will now preserve (memory-access) atomicity of LOCK-<br>
+ prefixed x86/amd64 instructions, and any others implying a global<br>
+ bus lock. Ditto for PowerPC l{w,d}arx/st{w,d}cx. instructions.<br>
+<br>
+ This means that Valgrinded processes will "play nicely" in<br>
+ situations where communication with other processes, or the kernel,<br>
+ is done through shared memory and coordinated with such atomic<br>
+ instructions. Prior to this change, such arrangements usually<br>
+ resulted in hangs, races or other synchronisation failures, because<br>
+ Valgrind did not honour atomicity of such instructions.<br>
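+<br>
+ A minimal sketch of the kind of arrangement this fixes (the program is<br>
+ only illustrative; __sync_add_and_fetch is the GCC builtin that emits<br>
+ a LOCK-prefixed add on x86/amd64):<br>
+<br>
+ #include <stdio.h><br>
+ #include <sys/mman.h><br>
+ #include <sys/wait.h><br>
+ #include <unistd.h><br>
+<br>
+ int main(void)<br>
+ {<br>
+     /* Counter in shared memory, incremented by two processes. */<br>
+     long *ctr = mmap(NULL, sizeof *ctr, PROT_READ | PROT_WRITE,<br>
+                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);<br>
+     *ctr = 0;<br>
+     pid_t pid = fork();<br>
+     for (int i = 0; i < 100000; i++)<br>
+         __sync_add_and_fetch(ctr, 1);<br>
+     if (pid == 0)<br>
+         return 0;<br>
+     waitpid(pid, NULL, 0);<br>
+     /* Prints 200000 only if the increments really are atomic. */<br>
+     printf("%ld\n", *ctr);<br>
+     return 0;<br>
+ }<br>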
+<br>
+<br>
+* A new experimental tool, BBV, has been added. BBV generates basic<br>
+ block vectors for use with the SimPoint analysis tool, which allows<br>
+ a program's overall behaviour to be approximated by running only a<br>
+ fraction of it. This is useful for computer architecture<br>
+ researchers. You can run BBV by specifying --tool=exp-bbv (the<br>
+ "exp-" prefix is short for "experimental"). BBV was written by<br>
+ Vince Weaver.<br>
+<br>
+<br>
+* Modestly improved support for running Windows applications under<br>
+ Wine. In particular, initial support for reading Windows .PDB debug<br>
+ information has been added.<br>
+<br>
+<br>
+* A new Memcheck client request VALGRIND_COUNT_LEAK_BLOCKS has been<br>
+ added. It is similar to VALGRIND_COUNT_LEAKS but counts blocks<br>
+ instead of bytes.<br>
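+<br>
+ A minimal sketch of its use (the request and VALGRIND_DO_LEAK_CHECK<br>
+ come from <valgrind/memcheck.h>; the surrounding program is only<br>
+ illustrative):<br>
+<br>
+ #include <stdio.h><br>
+ #include <stdlib.h><br>
+ #include <valgrind/memcheck.h><br>
+<br>
+ int main(void)<br>
+ {<br>
+     malloc(100);                  /* deliberately leaked */<br>
+<br>
+     unsigned long leaked, dubious, reachable, suppressed;<br>
+     VALGRIND_DO_LEAK_CHECK;       /* run a leak check first */<br>
+     VALGRIND_COUNT_LEAK_BLOCKS(leaked, dubious, reachable, suppressed);<br>
+     printf("definitely lost blocks: %lu\n", leaked);<br>
+     return 0;<br>
+ }<br>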
+<br>
+<br>
+* The Valgrind client requests VALGRIND_PRINTF and<br>
+ VALGRIND_PRINTF_BACKTRACE have been changed slightly. Previously,<br>
+ the string was always printed immediately on its own line. Now, the<br>
+ string will be added to a buffer but not printed until a newline is<br>
+ encountered, or other Valgrind output is printed (note that for<br>
+ VALGRIND_PRINTF_BACKTRACE, the back-trace itself is considered<br>
+ "other Valgrind output"). This allows you to use multiple<br>
+ VALGRIND_PRINTF calls to build up a single output line, and also to<br>
+ print multiple output lines with a single request (by embedding<br>
+ multiple newlines in the string).<br>
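+<br>
+ For example (a sketch, not from the original notes), these three calls<br>
+ now produce a single line of output, emitted when the newline is seen:<br>
+<br>
+ #include <valgrind/valgrind.h><br>
+<br>
+ int main(void)<br>
+ {<br>
+     VALGRIND_PRINTF("progress: ");<br>
+     VALGRIND_PRINTF("%d of ", 3);<br>
+     VALGRIND_PRINTF("%d done\n", 10);<br>
+     return 0;<br>
+ }<br>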
+<br>
+<br>
+* The graphs drawn by Massif's ms_print program have changed slightly:<br>
+<br>
+ - The half-height chars '.' and ',' are no longer drawn, because<br>
+ they are confusing. The --y option can be used if the default<br>
+ y-resolution is not high enough.<br>
+<br>
+ - Horizontal lines are now drawn after the top of a snapshot if<br>
+ there is a gap until the next snapshot. This makes it clear that<br>
+ the memory usage has not dropped to zero between snapshots.<br>
+<br>
+<br>
+* Something that happened in 3.4.0, but wasn't clearly announced: the<br>
+ option --read-var-info=yes can be used by some tools (Memcheck,<br>
+ Helgrind and DRD). When enabled, it causes Valgrind to read DWARF3<br>
+ variable type and location information. This makes those tools<br>
+ start up more slowly and increases memory consumption, but<br>
+ descriptions of data addresses in error messages become more<br>
+ detailed.<br>
+<br>
+<br>
+* exp-Omega, an experimental instantaneous leak-detecting tool, was<br>
+ disabled in 3.4.0 due to a lack of interest and maintenance,<br>
+ although the source code was still in the distribution. The source<br>
+ code has now been removed from the distribution. For anyone<br>
+ interested, the removal occurred in SVN revision r10247.<br>
+<br>
+<br>
+* Some changes have been made to the build system.<br>
+<br>
+ - VEX/ is now integrated properly into the build system. This means<br>
+ that dependency tracking within VEX/ now works properly, "make<br>
+ install" will work without requiring "make" before it, and<br>
+ parallel builds (ie. 'make -j') now work (previously a<br>
+ .NOTPARALLEL directive was used to serialize builds, ie. 'make -j'<br>
+ was effectively ignored).<br>
+<br>
+ - The --with-vex configure option has been removed. It was of<br>
+ little use and removing it simplified the build system.<br>
+<br>
+ - The location of some install files has changed. This should not<br>
+ affect most users. Those who might be affected:<br>
+<br>
+ * For people who use Valgrind with MPI programs, the installed<br>
+ libmpiwrap.so library has moved from<br>
+ $(INSTALL)/<platform>/libmpiwrap.so to<br>
+ $(INSTALL)/libmpiwrap-<platform>.so.<br>
+<br>
+ * For people who distribute standalone Valgrind tools, the<br>
+ installed libraries such as $(INSTALL)/<platform>/libcoregrind.a<br>
+ have moved to $(INSTALL)/libcoregrind-<platform>.a.<br>
+<br>
+ These changes simplify the build system.<br>
+<br>
+ - Previously, all the distributed suppression (*.supp) files were<br>
+ installed. Now, only default.supp is installed. This should not<br>
+ affect users as the other installed suppression files were not<br>
+ read; the fact that they were installed was a mistake.<br>
+<br>
+<br>
+* KNOWN LIMITATIONS:<br>
+<br>
+ - Memcheck is unusable with the Intel compiler suite version 11.1,<br>
+ when it generates code for SSE2-and-above capable targets. This<br>
+ is because of icc's use of highly optimised inlined strlen<br>
+ implementations. It causes Memcheck to report huge numbers of<br>
+ false errors even in simple programs. Helgrind and DRD may also<br>
+ have problems.<br>
+<br>
+ Versions 11.0 and earlier may be OK, but this has not been<br>
+ properly tested.<br>
+<br>
+<br>
+The following bugs have been fixed or resolved. Note that "n-i-bz"<br>
+stands for "not in bugzilla" -- that is, a bug that was reported to us<br>
+but never got a bugzilla entry. We encourage you to file bugs in<br>
+bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than<br>
+mailing the developers (or mailing lists) directly -- bugs that are<br>
+not entered into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+To see details of a given bug, visit<br>
+https://bugs.kde.org/show_bug.cgi?id=XXXXXX<br>
+where XXXXXX is the bug number as listed below.<br>
+<br>
+84303 How about a LockCheck tool? <br>
+91633 dereference of null ptr in vgPlain_st_basetype <br>
+97452 Valgrind doesn't report any pthreads problems <br>
+100628 leak-check gets assertion failure when using <br>
+ VALGRIND_MALLOCLIKE_BLOCK on malloc()ed memory <br>
+108528 NPTL pthread cleanup handlers not called <br>
+110126 Valgrind 2.4.1 configure.in tramples CFLAGS <br>
+110128 mallinfo is not implemented... <br>
+110770 VEX: Generated files not always updated when making valgrind<br>
+111102 Memcheck: problems with large (memory footprint) applications <br>
+115673 Vex's decoder should never assert <br>
+117564 False positive: Syscall param clone(child_tidptr) contains<br>
+ uninitialised byte(s) <br>
+119404 executing ssh from inside valgrind fails <br>
+133679 Callgrind does not write path names to sources with dwarf debug<br>
+ info<br>
+135847 configure.in problem with non gnu compilers (and possible fix) <br>
+136154 threads.c:273 (vgCallgrind_post_signal): Assertion<br>
+ '*(vgCallgrind_current_fn_stack.top) == 0' failed. <br>
+136230 memcheck reports "possibly lost", should be "still reachable" <br>
+137073 NULL arg to MALLOCLIKE_BLOCK causes crash <br>
+137904 Valgrind reports a memory leak when using POSIX threads,<br>
+ while it shouldn't <br>
+139076 valgrind VT_GETSTATE error <br>
+142228 complaint of elf_dynamic_do_rela in trivial usage <br>
+145347 spurious warning with USBDEVFS_REAPURB <br>
+148441 (wine) can't find memory leak in Wine, win32 binary <br>
+ executable file.<br>
+148742 Leak-check fails assert on exit <br>
+149878 add (proper) check for calloc integer overflow <br>
+150606 Call graph is broken when using callgrind control <br>
+152393 leak errors produce an exit code of 0. I need some way to <br>
+ cause leak errors to result in a nonzero exit code. <br>
+157154 documentation (leak-resolution doc speaks about num-callers<br>
+ def=4) + what is a loss record<br>
+159501 incorrect handling of ALSA ioctls <br>
+162020 Valgrinding an empty/zero-byte file crashes valgrind <br>
+162482 ppc: Valgrind crashes while reading stabs information <br>
+162718 x86: avoid segment selector 0 in sys_set_thread_area() <br>
+163253 (wine) canonicaliseSymtab forgot some fields in DiSym <br>
+163560 VEX/test_main.c is missing from valgrind-3.3.1 <br>
+164353 malloc_usable_size() doesn't return a usable size <br>
+165468 Inconsistent formatting in memcheck manual -- please fix <br>
+169505 main.c:286 (endOfInstr):<br>
+ Assertion 'ii->cost_offset == *cost_offset' failed <br>
+177206 Generate default.supp during compile instead of configure<br>
+177209 Configure valt_load_address based on arch+os <br>
+177305 eventfd / syscall 323 patch lost<br>
+179731 Tests fail to build because of inlining of non-local asm labels<br>
+181394 helgrind: libhb_core.c:3762 (msm_write): Assertion <br>
+ 'ordxx == POrd_EQ || ordxx == POrd_LT' failed. <br>
+181594 Bogus warning for empty text segment <br>
+181707 dwarf doesn't require enumerations to have name <br>
+185038 exp-ptrcheck: "unhandled syscall: 285" (fallocate) on x86_64 <br>
+185050 exp-ptrcheck: sg_main.c:727 (add_block_to_GlobalTree):<br>
+ Assertion '!already_present' failed.<br>
+185359 exp-ptrcheck: unhandled syscall getresuid()<br>
+185794 "WARNING: unhandled syscall: 285" (fallocate) on x86_64<br>
+185816 Valgrind is unable to handle debug info for files with split<br>
+ debug info that are prelinked afterwards <br>
+185980 [darwin] unhandled syscall: sem_open <br>
+186238 bbToIR_AMD64: disInstr miscalculated next %rip<br>
+186507 exp-ptrcheck unhandled syscalls prctl, etc. <br>
+186790 Suppression pattern used for leaks are not reported <br>
+186796 Symbols with length>200 in suppression files are ignored <br>
+187048 drd: mutex PTHREAD_PROCESS_SHARED attribute missinterpretation<br>
+187416 exp-ptrcheck: support for __NR_{setregid,setreuid,setresuid}<br>
+188038 helgrind: hg_main.c:926: mk_SHVAL_fail: the 'impossible' happened<br>
+188046 bashisms in the configure script<br>
+188127 amd64->IR: unhandled instruction bytes: 0xF0 0xF 0xB0 0xA<br>
+188161 memcheck: --track-origins=yes asserts "mc_machine.c:672<br>
+ (get_otrack_shadow_offset_wrk): the 'impossible' happened."<br>
+188248 helgrind: pthread_cleanup_push, pthread_rwlock_unlock, <br>
+ assertion fail "!lock->heldBy" <br>
+188427 Add support for epoll_create1 (with patch) <br>
+188530 Support for SIOCGSTAMPNS<br>
+188560 Include valgrind.spec in the tarball<br>
+188572 Valgrind on Mac should suppress setenv() mem leak <br>
+189054 Valgrind fails to build because of duplicate non-local asm labels <br>
+189737 vex amd64->IR: unhandled instruction bytes: 0xAC<br>
+189762 epoll_create syscall not handled (--tool=exp-ptrcheck)<br>
+189763 drd assertion failure: s_threadinfo[tid].is_recording <br>
+190219 unhandled syscall: 328 (x86-linux)<br>
+190391 dup of 181394; see above<br>
+190429 Valgrind reports lots of errors in ld.so with x86_64 2.9.90 glibc <br>
+190820 No debug information on powerpc-linux<br>
+191095 PATCH: Improve usbdevfs ioctl handling <br>
+191182 memcheck: VALGRIND_LEAK_CHECK quadratic when big nr of chunks<br>
+ or big nr of errors<br>
+191189 --xml=yes should obey --gen-suppressions=all <br>
+191192 syslog() needs a suppression on macosx <br>
+191271 DARWIN: WARNING: unhandled syscall: 33554697 a.k.a.: 265 <br>
+191761 getrlimit on MacOSX <br>
+191992 multiple --fn-skip only works sometimes; dependent on order <br>
+192634 V. reports "aspacem sync_check_mapping_callback: <br>
+ segment mismatch" on Darwin<br>
+192954 __extension__ missing on 2 client requests <br>
+194429 Crash at start-up with glibc-2.10.1 and linux-2.6.29 <br>
+194474 "INSTALL" file has different build instructions than "README"<br>
+194671 Unhandled syscall (sem_wait?) from mac valgrind <br>
+195069 memcheck: reports leak (memory still reachable) for <br>
+ printf("%d', x) <br>
+195169 drd: (vgDrd_barrier_post_wait):<br>
+ Assertion 'r->sg[p->post_iteration]' failed. <br>
+195268 valgrind --log-file doesn't accept ~/...<br>
+195838 VEX abort: LibVEX_N_SPILL_BYTES too small for CPUID boilerplate <br>
+195860 WARNING: unhandled syscall: unix:223 <br>
+196528 need a error suppression for pthread_rwlock_init under os x? <br>
+197227 Support aio_* syscalls on Darwin<br>
+197456 valgrind should reject --suppressions=(directory) <br>
+197512 DWARF2 CFI reader: unhandled CFI instruction 0:10 <br>
+197591 unhandled syscall 27 (mincore) <br>
+197793 Merge DCAS branch to the trunk == 85756, 142103<br>
+197794 Avoid duplicate filenames in Vex <br>
+197898 make check fails on current SVN <br>
+197901 make check fails also under exp-ptrcheck in current SVN <br>
+197929 Make --leak-resolution=high the default <br>
+197930 Reduce spacing between leak reports <br>
+197933 Print command line of client at start-up, and shorten preamble <br>
+197966 unhandled syscall 205 (x86-linux, --tool=exp-ptrcheck)<br>
+198395 add BBV to the distribution as an experimental tool <br>
+198624 Missing syscalls on Darwin: 82, 167, 281, 347 <br>
+198649 callgrind_annotate doesn't cumulate counters <br>
+199338 callgrind_annotate sorting/thresholds are broken for all but Ir <br>
+199977 Valgrind complains about an unrecognized instruction in the<br>
+ atomic_incs test program<br>
+200029 valgrind isn't able to read Fedora 12 debuginfo <br>
+200760 darwin unhandled syscall: unix:284 <br>
+200827 DRD doesn't work on Mac OS X <br>
+200990 VG_(read_millisecond_timer)() does not work correctly <br>
+201016 Valgrind does not support pthread_kill() on Mac OS <br>
+201169 Document --read-var-info<br>
+201323 Pre-3.5.0 performance sanity checking <br>
+201384 Review user manual for the 3.5.0 release <br>
+201585 mfpvr not implemented on ppc <br>
+201708 tests failing because x86 direction flag is left set <br>
+201757 Valgrind doesn't handle any recent sys_futex additions <br>
+204377 64-bit valgrind can not start a shell script<br>
+ (with #!/path/to/shell) if the shell is a 32-bit executable<br>
+n-i-bz drd: fixed assertion failure triggered by mutex reinitialization.<br>
+n-i-bz drd: fixed a bug that caused incorrect messages to be printed<br>
+ about memory allocation events with memory access tracing enabled<br>
+n-i-bz drd: fixed a memory leak triggered by vector clock deallocation<br>
+<br>
+(3.5.0: 19 Aug 2009, vex r1913, valgrind r10846).<br>
+<br>
+<br>
+<br>
+Release 3.4.1 (28 February 2009)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.4.1 is a bug-fix release that fixes some regressions and assertion<br>
+failures in debug info reading in 3.4.0, most notably incorrect stack<br>
+traces on amd64-linux on older (glibc-2.3 based) systems. Various<br>
+other debug info problems are also fixed. A number of bugs in the<br>
+exp-ptrcheck tool introduced in 3.4.0 have been fixed.<br>
+<br>
+In view of the fact that 3.4.0 contains user-visible regressions<br>
+relative to 3.3.x, upgrading to 3.4.1 is recommended. Packagers are<br>
+encouraged to ship 3.4.1 in preference to 3.4.0.<br>
+<br>
+The fixed bugs are as follows. Note that "n-i-bz" stands for "not in<br>
+bugzilla" -- that is, a bug that was reported to us but never got a<br>
+bugzilla entry. We encourage you to file bugs in bugzilla<br>
+(http://bugs.kde.org/enter_valgrind_bug.cgi) rather than mailing the<br>
+developers (or mailing lists) directly -- bugs that are not entered<br>
+into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+n-i-bz Fix various bugs reading icc-11 generated debug info<br>
+n-i-bz Fix various bugs reading gcc-4.4 generated debug info<br>
+n-i-bz Preliminary support for glibc-2.10 / Fedora 11<br>
+n-i-bz Cachegrind and Callgrind: handle non-power-of-two cache sizes,<br>
+ so as to support (eg) 24k Atom D1 and Core2 with 3/6/12MB L2.<br>
+179618 exp-ptrcheck crashed / exit prematurely<br>
+179624 helgrind: false positive races with pthread_create and<br>
+ recv/open/close/read<br>
+134207 pkg-config output contains @VG_PLATFORM@<br>
+176926 floating point exception at valgrind startup with PPC 440EPX<br>
+181594 Bogus warning for empty text segment<br>
+173751 amd64->IR: 0x48 0xF 0x6F 0x45 (even more redundant rex prefixes)<br>
+181707 Dwarf3 doesn't require enumerations to have name<br>
+185038 exp-ptrcheck: "unhandled syscall: 285" (fallocate) on x86_64<br>
+185050 exp-ptrcheck: sg_main.c:727 (add_block_to_GlobalTree):<br>
+ Assertion '!already_present' failed.<br>
+185359 exp-ptrcheck unhandled syscall getresuid()<br>
+<br>
+(3.4.1.RC1: 24 Feb 2009, vex r1884, valgrind r9253).<br>
+(3.4.1: 28 Feb 2009, vex r1884, valgrind r9293).<br>
+<br>
+<br>
+<br>
+Release 3.4.0 (2 January 2009)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.4.0 is a feature release with many significant improvements and the<br>
+usual collection of bug fixes. This release supports X86/Linux,<br>
+AMD64/Linux, PPC32/Linux and PPC64/Linux. Support for recent distros<br>
+(using gcc 4.4, glibc 2.8 and 2.9) has been added.<br>
+<br>
+3.4.0 brings some significant tool improvements. Memcheck can now<br>
+report the origin of uninitialised values, the thread checkers<br>
+Helgrind and DRD are much improved, and we have a new experimental<br>
+tool, exp-Ptrcheck, which is able to detect overruns of stack and<br>
+global arrays. In detail:<br>
+<br>
+* Memcheck is now able to track the origin of uninitialised values.<br>
+ When it reports an uninitialised value error, it will try to show<br>
+ the origin of the value, as either a heap or stack allocation.<br>
+ Origin tracking is expensive and so is not enabled by default. To<br>
+ use it, specify --track-origins=yes. Memcheck's speed will be<br>
+ essentially halved, and memory usage will be significantly<br>
+ increased. Nevertheless it can drastically reduce the effort<br>
+ required to identify the root cause of uninitialised value errors,<br>
+ and so is often a programmer productivity win, despite running more<br>
+ slowly.<br>
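+<br>
+ A small sketch of the kind of error this helps with (program and<br>
+ command line are only illustrative):<br>
+<br>
+ #include <stdio.h><br>
+ #include <stdlib.h><br>
+<br>
+ int main(void)<br>
+ {<br>
+     int *p = malloc(10 * sizeof *p);   /* never initialised */<br>
+     if (p[3] > 0)                      /* depends on uninitialised value */<br>
+         printf("positive\n");<br>
+     free(p);<br>
+     return 0;<br>
+ }<br>
+<br>
+ /* Run as, for example, "valgrind --track-origins=yes ./a.out"; the<br>
+    malloc call above is then reported as the origin of the value. */<br>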
+<br>
+* A version (1.4.0) of the Valkyrie GUI, that works with Memcheck in<br>
+ 3.4.0, will be released shortly.<br>
+<br>
+* Helgrind's race detection algorithm has been completely redesigned<br>
+ and reimplemented, to address usability and scalability concerns:<br>
+<br>
+ - The new algorithm has a lower false-error rate: it is much less<br>
+ likely to report races that do not really exist.<br>
+<br>
+ - Helgrind will display full call stacks for both accesses involved<br>
+ in a race. This makes it easier to identify the root causes of<br>
+ races.<br>
+<br>
+ - Limitations on the size of program that can run have been removed.<br>
+<br>
+ - Performance has been modestly improved, although that is very<br>
+ workload-dependent.<br>
+<br>
+ - Direct support for Qt4 threading has been added.<br>
+<br>
+ - pthread_barriers are now directly supported.<br>
+<br>
+ - Helgrind works well on all supported Linux targets.<br>
+<br>
+* The DRD thread debugging tool has seen major improvements:<br>
+<br>
+ - Greatly improved performance and significantly reduced memory<br>
+ usage.<br>
+<br>
+ - Support for several major threading libraries (Boost.Thread, Qt4,<br>
+ glib, OpenMP) has been added.<br>
+<br>
+ - Support for atomic instructions, POSIX semaphores, barriers and<br>
+ reader-writer locks has been added.<br>
+<br>
+ - Works now on PowerPC CPUs too.<br>
+<br>
+ - Added support for printing thread stack usage at thread exit time.<br>
+<br>
+ - Added support for debugging lock contention.<br>
+<br>
+ - Added a manual for Drd.<br>
+<br>
+* A new experimental tool, exp-Ptrcheck, has been added. Ptrcheck<br>
+ checks for misuses of pointers. In that sense it is a bit like<br>
+ Memcheck. However, Ptrcheck can do things Memcheck can't: it can<br>
+ detect overruns of stack and global arrays, it can detect<br>
+ arbitrarily far out-of-bounds accesses to heap blocks, and it can<br>
+ detect accesses to heap blocks that have been freed a very long time<br>
+ ago (millions of blocks in the past).<br>
+<br>
+ Ptrcheck currently works only on x86-linux and amd64-linux. To use<br>
+ it, use --tool=exp-ptrcheck. A simple manual is provided, as part<br>
+ of the main Valgrind documentation. As this is an experimental<br>
+ tool, we would be particularly interested in hearing about your<br>
+ experiences with it.<br>
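+<br>
+ A minimal sketch of the sort of defect described above (only<br>
+ illustrative; run with --tool=exp-ptrcheck as stated):<br>
+<br>
+ int buf[10];<br>
+<br>
+ int main(void)<br>
+ {<br>
+     /* Off-by-one overrun of a global array: the write to buf[10] stays<br>
+        inside mapped memory, so Memcheck does not flag it, but this is<br>
+        the class of error Ptrcheck aims to catch. */<br>
+     for (int i = 0; i <= 10; i++)<br>
+         buf[i] = i;<br>
+     return buf[0];<br>
+ }<br>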
+<br>
+* exp-Omega, an experimental instantaneous leak-detecting tool, is no<br>
+ longer built by default, although the code remains in the repository<br>
+ and the tarball. This is due to three factors: a perceived lack of<br>
+ users, a lack of maintenance, and concerns that it may not be<br>
+ possible to achieve reliable operation using the existing design.<br>
+<br>
+* As usual, support for the latest Linux distros and toolchain<br>
+ components has been added. It should work well on Fedora Core 10,<br>
+ OpenSUSE 11.1 and Ubuntu 8.10. gcc-4.4 (in its current pre-release<br>
+ state) is supported, as is glibc-2.9. The C++ demangler has been<br>
+ updated so as to work well with C++ compiled by even the most recent<br>
+ g++'s.<br>
+<br>
+* You can now use frame-level wildcards in suppressions. This was a<br>
+ frequently-requested enhancement. A line "..." in a suppression now<br>
+ matches zero or more frames. This makes it easier to write<br>
+ suppressions which are precise yet insensitive to changes in<br>
+ inlining behaviour.<br>
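+<br>
+ For instance, a hypothetical entry like the following (the name and<br>
+ the library pattern are made up) now matches a leak from malloc no<br>
+ matter how many frames lie between it and libfoo:<br>
+<br>
+ {<br>
+    hypothetical-libfoo-leak<br>
+    Memcheck:Leak<br>
+    fun:malloc<br>
+    ...<br>
+    obj:*/libfoo.so*<br>
+ }<br>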
+<br>
+* 3.4.0 adds support on x86/amd64 for the SSSE3 instruction set.<br>
+<br>
+* Very basic support for IBM Power6 has been added (64-bit processes only).<br>
+<br>
+* Valgrind is now cross-compilable. For example, it is possible to<br>
+ cross compile Valgrind on an x86/amd64-linux host, so that it runs<br>
+ on a ppc32/64-linux target.<br>
+<br>
+* You can set the main thread's stack size at startup using the<br>
+ new --main-stacksize= flag (subject of course to ulimit settings).<br>
+ This is useful for running apps that need a lot of stack space.<br>
+<br>
+* The limitation that you can't use --trace-children=yes together<br>
+ with --db-attach=yes has been removed.<br>
+<br>
+* The following bugs have been fixed. Note that "n-i-bz" stands for<br>
+ "not in bugzilla" -- that is, a bug that was reported to us but<br>
+ never got a bugzilla entry. We encourage you to file bugs in<br>
+ bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than<br>
+ mailing the developers (or mailing lists) directly.<br>
+<br>
+ n-i-bz Make return types for some client requests 64-bit clean<br>
+ n-i-bz glibc 2.9 support<br>
+ n-i-bz ignore unsafe .valgrindrc's (CVE-2008-4865)<br>
+ n-i-bz MPI_Init(0,0) is valid but libmpiwrap.c segfaults<br>
+ n-i-bz Building in an env without gdb gives bogus gdb attach<br>
+ 92456 Tracing the origin of uninitialised memory<br>
+ 106497 Valgrind does not demangle some C++ template symbols<br>
+ 162222 ==106497<br>
+ 151612 Suppression with "..." (frame-level wildcards in .supp files)<br>
+ 156404 Unable to start oocalc under memcheck on openSUSE 10.3 (64-bit)<br>
+ 159285 unhandled syscall:25 (stime, on x86-linux)<br>
+ 159452 unhandled ioctl 0x8B01 on "valgrind iwconfig"<br>
+ 160954 ppc build of valgrind crashes with illegal instruction (isel)<br>
+ 160956 mallinfo implementation, w/ patch<br>
+ 162092 Valgrind fails to start gnome-system-monitor<br>
+ 162819 malloc_free_fill test doesn't pass on glibc2.8 x86<br>
+ 163794 assertion failure with "--track-origins=yes"<br>
+ 163933 sigcontext.err and .trapno must be set together<br>
+ 163955 remove constraint !(--db-attach=yes && --trace-children=yes)<br>
+ 164476 Missing kernel module loading system calls<br>
+ 164669 SVN regression: mmap() drops posix file locks<br>
+ 166581 Callgrind output corruption when program forks<br>
+ 167288 Patch file for missing system calls on Cell BE<br>
+ 168943 unsupported scas instruction pentium<br>
+ 171645 Unrecognised instruction (MOVSD, non-binutils encoding)<br>
+ 172417 x86->IR: 0x82 ...<br>
+ 172563 amd64->IR: 0xD9 0xF5 - fprem1<br>
+ 173099 .lds linker script generation error<br>
+ 173177 [x86_64] syscalls: 125/126/179 (capget/capset/quotactl)<br>
+ 173751 amd64->IR: 0x48 0xF 0x6F 0x45 (even more redundant prefixes)<br>
+ 174532 == 173751<br>
+ 174908 --log-file value not expanded correctly for core file<br>
+ 175044 Add lookup_dcookie for amd64<br>
+ 175150 x86->IR: 0xF2 0xF 0x11 0xC1 (movss non-binutils encoding)<br>
+<br>
+Developer-visible changes:<br>
+<br>
+* Valgrind's debug-info reading machinery has been majorly overhauled.<br>
+ It can now correctly establish the addresses for ELF data symbols,<br>
+ which is something that has never worked properly before now.<br>
+<br>
+ Also, Valgrind can now read DWARF3 type and location information for<br>
+ stack and global variables. This makes it possible to use the<br>
+ framework to build tools that rely on knowing the type and locations<br>
+ of stack and global variables, for example exp-Ptrcheck.<br>
+<br>
+ Reading of such information is disabled by default, because most<br>
+ tools don't need it, and because it is expensive in space and time.<br>
+ However, you can force Valgrind to read it, using the<br>
+ --read-var-info=yes flag. Memcheck, Helgrind and DRD are able to<br>
+ make use of such information, if present, to provide source-level<br>
+ descriptions of data addresses in the error messages they create.<br>
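+<br>
+ For example, to get such descriptions from Memcheck (the program name<br>
+ is a placeholder):<br>
+<br>
+    valgrind --tool=memcheck --read-var-info=yes ./myprog<br>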
+<br>
+(3.4.0.RC1: 24 Dec 2008, vex r1878, valgrind r8882).<br>
+(3.4.0: 3 Jan 2009, vex r1878, valgrind r8899).<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.authors.html"><< 1. AUTHORS</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.news.old.html">3. OLDER NEWS >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.news.old.html b/docs/html/dist.news.old.html
new file mode 100644
index 0000000..d5747d5
--- /dev/null
+++ b/docs/html/dist.news.old.html
@@ -0,0 +1,2043 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>3. OLDER NEWS</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.news.html" title="2. NEWS">
+<link rel="next" href="dist.readme.html" title="4. README">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.news.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.news.old"></a>3. OLDER NEWS</h1></div></div></div>
+<div class="literallayout"><p><br>
+ Release 3.3.1 (4 June 2008)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.3.1 fixes a bunch of bugs in 3.3.0, adds support for glibc-2.8 based<br>
+systems (openSUSE 11, Fedora Core 9), improves the existing glibc-2.7<br>
+support, and adds support for the SSSE3 (Core 2) instruction set.<br>
+<br>
+3.3.1 will likely be the last release that supports some very old<br>
+systems. In particular, the next major release, 3.4.0, will drop<br>
+support for the old LinuxThreads threading library, and for gcc<br>
+versions prior to 3.0.<br>
+<br>
+The fixed bugs are as follows. Note that "n-i-bz" stands for "not in<br>
+bugzilla" -- that is, a bug that was reported to us but never got a<br>
+bugzilla entry. We encourage you to file bugs in bugzilla<br>
+(http://bugs.kde.org/enter_valgrind_bug.cgi) rather than mailing the<br>
+developers (or mailing lists) directly -- bugs that are not entered<br>
+into bugzilla tend to get forgotten about or ignored.<br>
+<br>
+n-i-bz Massif segfaults at exit<br>
+n-i-bz Memcheck asserts on Altivec code<br>
+n-i-bz fix sizeof bug in Helgrind<br>
+n-i-bz check fd on sys_llseek<br>
+n-i-bz update syscall lists to kernel 2.6.23.1<br>
+n-i-bz support sys_sync_file_range<br>
+n-i-bz handle sys_sysinfo, sys_getresuid, sys_getresgid on ppc64-linux<br>
+n-i-bz intercept memcpy in 64-bit ld.so's<br>
+n-i-bz Fix wrappers for sys_{futimesat,utimensat}<br>
+n-i-bz Minor false-error avoidance fixes for Memcheck<br>
+n-i-bz libmpiwrap.c: add a wrapper for MPI_Waitany<br>
+n-i-bz helgrind support for glibc-2.8<br>
+n-i-bz partial fix for mc_leakcheck.c:698 assert:<br>
+ 'lc_shadows[i]->data + lc_shadows[i] ...<br>
+n-i-bz Massif/Cachegrind output corruption when programs fork<br>
+n-i-bz register allocator fix: handle spill stores correctly<br>
+n-i-bz add support for PA6T PowerPC CPUs<br>
+126389 vex x86->IR: 0xF 0xAE (FXRSTOR)<br>
+158525 ==126389<br>
+152818 vex x86->IR: 0xF3 0xAC (repz lodsb) <br>
+153196 vex x86->IR: 0xF2 0xA6 (repnz cmpsb) <br>
+155011 vex x86->IR: 0xCF (iret)<br>
+155091 Warning [...] unhandled DW_OP_ opcode 0x23<br>
+156960 ==155901<br>
+155528 support Core2/SSSE3 insns on x86/amd64<br>
+155929 ms_print fails on massif outputs containing long lines<br>
+157665 valgrind fails on shmdt(0) after shmat to 0<br>
+157748 support x86 PUSHFW/POPFW<br>
+158212 helgrind: handle pthread_rwlock_try{rd,wr}lock.<br>
+158425 sys_poll incorrectly emulated when RES==0<br>
+158744 vex amd64->IR: 0xF0 0x41 0xF 0xC0 (xaddb)<br>
+160907 Support for a couple of recent Linux syscalls<br>
+161285 Patch -- support for eventfd() syscall<br>
+161378 illegal opcode in debug libm (FUCOMPP)<br>
+160136 ==161378<br>
+161487 number of suppressions files is limited to 10<br>
+162386 ms_print typo in milliseconds time unit for massif<br>
+161036 exp-drd: client allocated memory was never freed<br>
+162663 signalfd_wrapper fails on 64bit linux<br>
+<br>
+(3.3.1.RC1: 2 June 2008, vex r1854, valgrind r8169).<br>
+(3.3.1: 4 June 2008, vex r1854, valgrind r8180).<br>
+<br>
+<br>
+<br>
+Release 3.3.0 (7 December 2007)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.3.0 is a feature release with many significant improvements and the<br>
+usual collection of bug fixes. This release supports X86/Linux,<br>
+AMD64/Linux, PPC32/Linux and PPC64/Linux. Support for recent distros<br>
+(using gcc 4.3, glibc 2.6 and 2.7) has been added.<br>
+<br>
+The main excitement in 3.3.0 is new and improved tools. Helgrind<br>
+works again, Massif has been completely overhauled and much improved,<br>
+Cachegrind now does branch-misprediction profiling, and a new category<br>
+of experimental tools has been created, containing two new tools:<br>
+Omega and DRD. There are many other smaller improvements. In detail:<br>
+<br>
+- Helgrind has been completely overhauled and works for the first time<br>
+ since Valgrind 2.2.0. Supported functionality is: detection of<br>
+ misuses of the POSIX PThreads API, detection of potential deadlocks<br>
+ resulting from cyclic lock dependencies, and detection of data<br>
+ races. Compared to the 2.2.0 Helgrind, the race detection algorithm<br>
+ has some significant improvements aimed at reducing the false error<br>
+ rate. Handling of various kinds of corner cases has been improved.<br>
+ Efforts have been made to make the error messages easier to<br>
+ understand. Extensive documentation is provided.<br>
+<br>
+- Massif has been completely overhauled. Instead of measuring<br>
+ space-time usage -- which wasn't always useful and many people found<br>
+ confusing -- it now measures space usage at various points in the<br>
+ execution, including the point of peak memory allocation. Its<br>
+ output format has also changed: instead of producing PostScript<br>
+ graphs and HTML text, it produces a single text output (via the new<br>
+ 'ms_print' script) that contains both a graph and the old textual<br>
+ information, but in a more compact and readable form. Finally, the<br>
+ new version should be more reliable than the old one, as it has been<br>
+ tested more thoroughly.<br>
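+<br>
+ A typical run now looks like this (the program name and the pid in<br>
+ the output file name are only illustrative):<br>
+<br>
+    valgrind --tool=massif ./myprog<br>
+    ms_print massif.out.12345<br>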
+<br>
+- Cachegrind has been extended to do branch-misprediction profiling.<br>
+ Both conditional and indirect branches are profiled. The default<br>
+ behaviour of Cachegrind is unchanged. To use the new functionality,<br>
+ give the option --branch-sim=yes.<br>
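+<br>
+ For example (the program name is a placeholder):<br>
+<br>
+    valgrind --tool=cachegrind --branch-sim=yes ./myprog<br>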
+<br>
+- A new category of "experimental tools" has been created. Such tools<br>
+ may not work as well as the standard tools, but are included because<br>
+ some people will find them useful, and because exposure to a wider<br>
+ user group provides tool authors with more end-user feedback. These<br>
+ tools have a "exp-" prefix attached to their names to indicate their<br>
+ experimental nature. Currently there are two experimental tools:<br>
+<br>
+ * exp-Omega: an instantaneous leak detector. See<br>
+ exp-omega/docs/omega_introduction.txt.<br>
+<br>
+ * exp-DRD: a data race detector based on the happens-before<br>
+ relation. See exp-drd/docs/README.txt.<br>
+<br>
+- Scalability improvements for very large programs, particularly those<br>
+ which have a million or more malloc'd blocks in use at once. These<br>
+ improvements mostly affect Memcheck. Memcheck is also up to 10%<br>
+ faster for all programs, with x86-linux seeing the largest<br>
+ improvement.<br>
+<br>
+- Works well on the latest Linux distros. Has been tested on Fedora<br>
+ Core 8 (x86, amd64, ppc32, ppc64) and openSUSE 10.3. glibc 2.6 and<br>
+ 2.7 are supported. gcc-4.3 (in its current pre-release state) is<br>
+ supported. At the same time, 3.3.0 retains support for older<br>
+ distros.<br>
+<br>
+- The documentation has been modestly reorganised with the aim of<br>
+ making it easier to find information on common-usage scenarios.<br>
+ Some advanced material has been moved into a new chapter in the main<br>
+ manual, so as to unclutter the main flow, and other tidying up has<br>
+ been done.<br>
+<br>
+- There is experimental support for AIX 5.3, both 32-bit and 64-bit<br>
+ processes. You need to be running a 64-bit kernel to use Valgrind<br>
+ on a 64-bit executable.<br>
+<br>
+- There have been some changes to command line options, which may<br>
+ affect you:<br>
+<br>
+ * The --log-file-exactly and --log-file-qualifier options<br>
+ have been removed.<br>
+<br>
+ To make up for this, the --log-file option has been made more<br>
+ powerful (see the example after this list of option changes).<br>
+ It now accepts a %p format specifier, which is replaced with the<br>
+ process ID, and a %q{FOO} format specifier, which is replaced with<br>
+ the contents of the environment variable FOO.<br>
+<br>
+ * --child-silent-after-fork=yes|no [no]<br>
+<br>
+ Causes Valgrind to not show any debugging or logging output for<br>
+ the child process resulting from a fork() call. This can make the<br>
+ output less confusing (although more misleading) when dealing with<br>
+ processes that create children.<br>
+<br>
+ * --cachegrind-out-file, --callgrind-out-file and --massif-out-file<br>
+<br>
+ These control the names of the output files produced by<br>
+ Cachegrind, Callgrind and Massif. They accept the same %p and %q<br>
+ format specifiers that --log-file accepts. --callgrind-out-file<br>
+ replaces Callgrind's old --base option.<br>
+<br>
+ * Cachegrind's 'cg_annotate' script no longer uses the --<pid><br>
+ option to specify the output file. Instead, the first non-option<br>
+ argument is taken to be the name of the output file, and any<br>
+ subsequent non-option arguments are taken to be the names of<br>
+ source files to be annotated.<br>
+<br>
+ * Cachegrind and Callgrind now use directory names where possible in<br>
+ their output files. This means that the -I option to<br>
+ 'cg_annotate' and 'callgrind_annotate' should not be needed in<br>
+ most cases. It also means they can correctly handle the case<br>
+ where two source files in different directories have the same<br>
+ name.<br>
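+<br>
+ As an example of the new --log-file format specifiers (the program<br>
+ name is a placeholder), the following writes one log file per<br>
+ process, named after the process ID and the USER environment<br>
+ variable:<br>
+<br>
+    valgrind --log-file=vg.%p.%q{USER}.log ./myprog<br>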
+<br>
+- Memcheck offers a new suppression kind: "Jump". This is for<br>
+ suppressing jump-to-invalid-address errors. Previously you had to<br>
+ use an "Addr1" suppression, which didn't make much sense.<br>
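+<br>
+ An illustrative "Jump" suppression ("libfoo" is a placeholder for<br>
+ the object you want to suppress such errors from):<br>
+<br>
+    {<br>
+       ignore-bad-jump-in-libfoo<br>
+       Memcheck:Jump<br>
+       obj:*/libfoo.so*<br>
+    }<br>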
+<br>
+- Memcheck has new flags --malloc-fill=<hexnum> and<br>
+ --free-fill=<hexnum> which fill malloc'd / free'd areas with the<br>
+ specified byte. This can help shake out obscure memory corruption<br>
+ problems. The definedness and addressability of these areas is<br>
+ unchanged -- only the contents are affected.<br>
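+<br>
+ For example (the byte values and the program name are only<br>
+ illustrative):<br>
+<br>
+    valgrind --malloc-fill=0xAB --free-fill=0xCD ./myprog<br>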
+<br>
+- The behaviour of Memcheck's client requests VALGRIND_GET_VBITS and<br>
+ VALGRIND_SET_VBITS has changed slightly. They no longer issue<br>
+ addressability errors -- if either array is partially unaddressable,<br>
+ they just return 3 (as before). Also, SET_VBITS doesn't report<br>
+ definedness errors if any of the V bits are undefined.<br>
+<br>
+- The following Memcheck client requests have been removed:<br>
+ VALGRIND_MAKE_NOACCESS<br>
+ VALGRIND_MAKE_WRITABLE<br>
+ VALGRIND_MAKE_READABLE<br>
+ VALGRIND_CHECK_WRITABLE<br>
+ VALGRIND_CHECK_READABLE<br>
+ VALGRIND_CHECK_DEFINED<br>
+ They were deprecated in 3.2.0, when equivalent but better-named client<br>
+ requests were added. See the 3.2.0 release notes for more details.<br>
+<br>
+- The behaviour of the tool Lackey has changed slightly. First, the output<br>
+ from --trace-mem has been made more compact, to reduce the size of the<br>
+ traces. Second, a new option --trace-superblocks has been added, which<br>
+ shows the addresses of superblocks (code blocks) as they are executed.<br>
+<br>
+- The following bugs have been fixed. Note that "n-i-bz" stands for<br>
+ "not in bugzilla" -- that is, a bug that was reported to us but<br>
+ never got a bugzilla entry. We encourage you to file bugs in<br>
+ bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than<br>
+ mailing the developers (or mailing lists) directly.<br>
+<br>
+ n-i-bz x86_linux_REDIR_FOR_index() broken<br>
+ n-i-bz guest-amd64/toIR.c:2512 (dis_op2_E_G): Assertion `0' failed.<br>
+ n-i-bz Support x86 INT insn (INT (0xCD) 0x40 - 0x43)<br>
+ n-i-bz Add sys_utimensat system call for Linux x86 platform<br>
+ 79844 Helgrind complains about race condition which does not exist<br>
+ 82871 Massif output function names too short<br>
+ 89061 Massif: ms_main.c:485 (get_XCon): Assertion `xpt->max_chi...'<br>
+ 92615 Write output from Massif at crash<br>
+ 95483 massif feature request: include peak allocation in report<br>
+ 112163 MASSIF crashed with signal 7 (SIGBUS) after running 2 days<br>
+ 119404 problems running setuid executables (partial fix)<br>
+ 121629 add instruction-counting mode for timing<br>
+ 127371 java vm giving unhandled instruction bytes: 0x26 0x2E 0x64 0x65<br>
+ 129937 ==150380<br>
+ 129576 Massif loses track of memory, incorrect graphs<br>
+ 132132 massif --format=html output does not do html entity escaping<br>
+ 132950 Heap alloc/usage summary<br>
+ 133962 unhandled instruction bytes: 0xF2 0x4C 0xF 0x10<br>
+ 134990 use -fno-stack-protector if possible<br>
+ 136382 ==134990<br>
+ 137396 I would really like helgrind to work again...<br>
+ 137714 x86/amd64->IR: 0x66 0xF 0xF7 0xC6 (maskmovq, maskmovdq)<br>
+ 141631 Massif: percentages don't add up correctly<br>
+ 142706 massif numbers don't seem to add up<br>
+ 143062 massif crashes on app exit with signal 8 SIGFPE<br>
+ 144453 (get_XCon): Assertion 'xpt->max_children != 0' failed.<br>
+ 145559 valgrind aborts when malloc_stats is called<br>
+ 145609 valgrind aborts all runs with 'repeated section!'<br>
+ 145622 --db-attach broken again on x86-64<br>
+ 145837 ==149519<br>
+ 145887 PPC32: getitimer() system call is not supported<br>
+ 146252 ==150678<br>
+ 146456 (update_XCon): Assertion 'xpt->curr_space >= -space_delta'...<br>
+ 146701 ==134990<br>
+ 146781 Adding support for private futexes<br>
+ 147325 valgrind internal error on syscall (SYS_io_destroy, 0)<br>
+ 147498 amd64->IR: 0xF0 0xF 0xB0 0xF (lock cmpxchg %cl,(%rdi))<br>
+ 147545 Memcheck: mc_main.c:817 (get_sec_vbits8): Assertion 'n' failed.<br>
+ 147628 SALC opcode 0xd6 unimplemented<br>
+ 147825 crash on amd64-linux with gcc 4.2 and glibc 2.6 (CFI)<br>
+ 148174 Incorrect type of freed_list_volume causes assertion [...]<br>
+ 148447 x86_64 : new NOP codes: 66 66 66 66 2e 0f 1f<br>
+ 149182 PPC Trap instructions not implemented in valgrind<br>
+ 149504 Assertion hit on alloc_xpt->curr_space >= -space_delta<br>
+ 149519 ppc32: V aborts with SIGSEGV on execution of a signal handler<br>
+ 149892 ==137714<br>
+ 150044 SEGV during stack deregister<br>
+ 150380 dwarf/gcc interoperation (dwarf3 read problems)<br>
+ 150408 ==148447<br>
+ 150678 guest-amd64/toIR.c:3741 (dis_Grp5): Assertion `sz == 4' failed<br>
+ 151209 V unable to execute programs for users with UID > 2^16<br>
+ 151938 help on --db-command= misleading<br>
+ 152022 subw $0x28, %%sp causes assertion failure in memcheck<br>
+ 152357 inb and outb not recognized in 64-bit mode<br>
+ 152501 vex x86->IR: 0x27 0x66 0x89 0x45 (daa) <br>
+ 152818 vex x86->IR: 0xF3 0xAC 0xFC 0x9C (rep lodsb)<br>
+<br>
+Developer-visible changes:<br>
+<br>
+- The names of some functions and types within the Vex IR have<br>
+ changed. Run 'svn log -r1689 VEX/pub/libvex_ir.h' for full details.<br>
+ Any existing standalone tools will have to be updated to reflect<br>
+ these changes. The new names should be clearer. The file<br>
+ VEX/pub/libvex_ir.h is also much better commented.<br>
+<br>
+- A number of new debugging command line options have been added.<br>
+ These are mostly of use for debugging the symbol table and line<br>
+ number readers:<br>
+<br>
+ --trace-symtab-patt=<patt> limit debuginfo tracing to obj name <patt><br>
+ --trace-cfi=no|yes show call-frame-info details? [no]<br>
+ --debug-dump=syms mimic /usr/bin/readelf --syms<br>
+ --debug-dump=line mimic /usr/bin/readelf --debug-dump=line<br>
+ --debug-dump=frames mimic /usr/bin/readelf --debug-dump=frames<br>
+ --sym-offsets=yes|no show syms in form 'name+offset' ? [no]<br>
+<br>
+- Internally, the code base has been further factorised and<br>
+ abstractified, particularly with respect to support for non-Linux<br>
+ OSs.<br>
+<br>
+(3.3.0.RC1: 2 Dec 2007, vex r1803, valgrind r7268).<br>
+(3.3.0.RC2: 5 Dec 2007, vex r1804, valgrind r7282).<br>
+(3.3.0.RC3: 9 Dec 2007, vex r1804, valgrind r7288).<br>
+(3.3.0: 10 Dec 2007, vex r1804, valgrind r7290).<br>
+<br>
+<br>
+<br>
+Release 3.2.3 (29 Jan 2007)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+Unfortunately 3.2.2 introduced a regression which can cause an<br>
+assertion failure ("vex: the `impossible' happened: eqIRConst") when<br>
+running obscure pieces of SSE code. 3.2.3 fixes this and adds one<br>
+more glibc-2.5 intercept. In all other respects it is identical to<br>
+3.2.2. Please do not use (or package) 3.2.2; instead use 3.2.3.<br>
+<br>
+n-i-bz vex: the `impossible' happened: eqIRConst<br>
+n-i-bz Add an intercept for glibc-2.5 __stpcpy_chk<br>
+<br>
+(3.2.3: 29 Jan 2007, vex r1732, valgrind r6560).<br>
+<br>
+<br>
+Release 3.2.2 (22 Jan 2007)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.2.2 fixes a bunch of bugs in 3.2.1, adds support for glibc-2.5 based<br>
+systems (openSUSE 10.2, Fedora Core 6), improves support for icc-9.X<br>
+compiled code, and brings modest performance improvements in some<br>
+areas, including amd64 floating point, powerpc support, and startup<br>
+responsiveness on all targets.<br>
+<br>
+The fixed bugs are as follows. Note that "n-i-bz" stands for "not in<br>
+bugzilla" -- that is, a bug that was reported to us but never got a<br>
+bugzilla entry. We encourage you to file bugs in bugzilla<br>
+(http://bugs.kde.org/enter_valgrind_bug.cgi) rather than mailing the<br>
+developers (or mailing lists) directly.<br>
+<br>
+129390 ppc?->IR: some kind of VMX prefetch (dstt)<br>
+129968 amd64->IR: 0xF 0xAE 0x0 (fxsave)<br>
+134319 ==129968<br>
+133054 'make install' fails with syntax errors<br>
+118903 ==133054<br>
+132998 startup fails in when running on UML<br>
+134207 pkg-config output contains @VG_PLATFORM@<br>
+134727 valgrind exits with "Value too large for defined data type"<br>
+n-i-bz ppc32/64: support mcrfs<br>
+n-i-bz Cachegrind/Callgrind: Update cache parameter detection<br>
+135012 x86->IR: 0xD7 0x8A 0xE0 0xD0 (xlat)<br>
+125959 ==135012<br>
+126147 x86->IR: 0xF2 0xA5 0xF 0x77 (repne movsw)<br>
+136650 amd64->IR: 0xC2 0x8 0x0<br>
+135421 x86->IR: unhandled Grp5(R) case 6<br>
+n-i-bz Improved documentation of the IR intermediate representation<br>
+n-i-bz jcxz (x86) (users list, 8 Nov)<br>
+n-i-bz ExeContext hashing fix<br>
+n-i-bz fix CFI reading failures ("Dwarf CFI 0:24 0:32 0:48 0:7")<br>
+n-i-bz fix Cachegrind/Callgrind simulation bug<br>
+n-i-bz libmpiwrap.c: fix handling of MPI_LONG_DOUBLE<br>
+n-i-bz make User errors suppressible<br>
+136844 corrupted malloc line when using --gen-suppressions=yes<br>
+138507 ==136844<br>
+n-i-bz Speed up the JIT's register allocator<br>
+n-i-bz Fix confusing leak-checker flag hints<br>
+n-i-bz Support recent autoswamp versions<br>
+n-i-bz ppc32/64 dispatcher speedups<br>
+n-i-bz ppc64 front end rld/rlw improvements<br>
+n-i-bz ppc64 back end imm64 improvements<br>
+136300 support 64K pages on ppc64-linux<br>
+139124 == 136300<br>
+n-i-bz fix ppc insn set tests for gcc >= 4.1<br>
+137493 x86->IR: recent binutils no-ops<br>
+137714 x86->IR: 0x66 0xF 0xF7 0xC6 (maskmovdqu)<br>
+138424 "failed in UME with error 22" (produce a better error msg)<br>
+138856 ==138424<br>
+138627 Enhancement support for prctl ioctls<br>
+138896 Add support for usb ioctls<br>
+136059 ==138896<br>
+139050 ppc32->IR: mfspr 268/269 instructions not handled<br>
+n-i-bz ppc32->IR: lvxl/stvxl<br>
+n-i-bz glibc-2.5 support<br>
+n-i-bz memcheck: provide replacement for mempcpy<br>
+n-i-bz memcheck: replace bcmp in ld.so<br>
+n-i-bz Use 'ifndef' in VEX's Makefile correctly<br>
+n-i-bz Suppressions for MVL 4.0.1 on ppc32-linux<br>
+n-i-bz libmpiwrap.c: Fixes for MPICH<br>
+n-i-bz More robust handling of hinted client mmaps<br>
+139776 Invalid read in unaligned memcpy with Intel compiler v9<br>
+n-i-bz Generate valid XML even for very long fn names<br>
+n-i-bz Don't prompt about suppressions for unshown reachable leaks<br>
+139910 amd64 rcl is not supported<br>
+n-i-bz DWARF CFI reader: handle DW_CFA_undefined<br>
+n-i-bz DWARF CFI reader: handle icc9 generated CFI info better<br>
+n-i-bz fix false uninit-value errs in icc9 generated FP code<br>
+n-i-bz reduce extraneous frames in libmpiwrap.c<br>
+n-i-bz support pselect6 on amd64-linux<br>
+<br>
+(3.2.2: 22 Jan 2007, vex r1729, valgrind r6545).<br>
+<br>
+<br>
+Release 3.2.1 (16 Sept 2006)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.2.1 adds x86/amd64 support for all SSE3 instructions except monitor<br>
+and mwait, further reduces memcheck's false error rate on all<br>
+platforms, adds support for recent binutils (in OpenSUSE 10.2 and<br>
+Fedora Rawhide) and fixes a bunch of bugs in 3.2.0. Some of the fixed<br>
+bugs were causing large programs to segfault with --tool=callgrind and<br>
+--tool=cachegrind, so an upgrade is recommended.<br>
+<br>
+In view of the fact that any 3.3.0 release is unlikely to happen until<br>
+well into 1Q07, we intend to keep the 3.2.X line alive for a while<br>
+yet, and so we tentatively plan a 3.2.2 release sometime in December<br>
+06.<br>
+<br>
+The fixed bugs are as follows. Note that "n-i-bz" stands for "not in<br>
+bugzilla" -- that is, a bug that was reported to us but never got a<br>
+bugzilla entry.<br>
+<br>
+n-i-bz Expanding brk() into last available page asserts<br>
+n-i-bz ppc64-linux stack RZ fast-case snafu<br>
+n-i-bz 'c' in --gen-supps=yes doesn't work<br>
+n-i-bz VG_N_SEGMENTS too low (users, 28 June)<br>
+n-i-bz VG_N_SEGNAMES too low (Stu Robinson)<br>
+106852 x86->IR: fisttp (SSE3)<br>
+117172 FUTEX_WAKE does not use uaddr2<br>
+124039 Lacks support for VKI_[GP]IO_UNIMAP*<br>
+127521 amd64->IR: 0xF0 0x48 0xF 0xC7 (cmpxchg8b)<br>
+128917 amd64->IR: 0x66 0xF 0xF6 0xC4 (psadbw,SSE2)<br>
+129246 JJ: ppc32/ppc64 syscalls, w/ patch<br>
+129358 x86->IR: fisttpl (SSE3)<br>
+129866 cachegrind/callgrind causes executable to die<br>
+130020 Can't stat .so/.exe error while reading symbols<br>
+130388 Valgrind aborts when process calls malloc_trim()<br>
+130638 PATCH: ppc32 missing system calls<br>
+130785 amd64->IR: unhandled instruction "pushfq"<br>
+131481: (HINT_NOP) vex x86->IR: 0xF 0x1F 0x0 0xF<br>
+131298 ==131481<br>
+132146 Programs with long sequences of bswap[l,q]s<br>
+132918 vex amd64->IR: 0xD9 0xF8 (fprem)<br>
+132813 Assertion at priv/guest-x86/toIR.c:652 fails<br>
+133051 'cfsi->len > 0 && cfsi->len < 2000000' failed<br>
+132722 valgrind header files are not standard C<br>
+n-i-bz Livelocks entire machine (users list, Timothy Terriberry)<br>
+n-i-bz Alex Bennee mmap problem (9 Aug)<br>
+n-i-bz BartV: Don't print more lines of a stack-trace than were obtained.<br>
+n-i-bz ppc32 SuSE 10.1 redir<br>
+n-i-bz amd64 padding suppressions<br>
+n-i-bz amd64 insn printing fix.<br>
+n-i-bz ppc cmp reg,reg fix<br>
+n-i-bz x86/amd64 iropt e/rflag reduction rules<br>
+n-i-bz SuSE 10.1 (ppc32) minor fixes<br>
+133678 amd64->IR: 0x48 0xF 0xC5 0xC0 (pextrw?)<br>
+133694 aspacem assertion: aspacem_minAddr <= holeStart<br>
+n-i-bz callgrind: fix warning about malformed creator line <br>
+n-i-bz callgrind: fix annotate script for data produced with <br>
+ --dump-instr=yes<br>
+n-i-bz callgrind: fix failed assertion when toggling <br>
+ instrumentation mode<br>
+n-i-bz callgrind: fix annotate script fix warnings with<br>
+ --collect-jumps=yes<br>
+n-i-bz docs path hardwired (Dennis Lubert)<br>
+<br>
+The following bugs were not fixed, due primarily to lack of developer<br>
+time, and also because bug reporters did not answer requests for<br>
+feedback in time for the release:<br>
+<br>
+129390 ppc?->IR: some kind of VMX prefetch (dstt)<br>
+129968 amd64->IR: 0xF 0xAE 0x0 (fxsave)<br>
+133054 'make install' fails with syntax errors<br>
+n-i-bz Signal race condition (users list, 13 June, Johannes Berg)<br>
+n-i-bz Unrecognised instruction at address 0x70198EC2 (users list,<br>
+ 19 July, Bennee)<br>
+132998 startup fails in when running on UML<br>
+<br>
+The following bug was tentatively fixed on the mainline but the fix<br>
+was considered too risky to push into 3.2.X:<br>
+<br>
+133154 crash when using client requests to register/deregister stack<br>
+<br>
+(3.2.1: 16 Sept 2006, vex r1658, valgrind r6070).<br>
+<br>
+<br>
+Release 3.2.0 (7 June 2006)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.2.0 is a feature release with many significant improvements and the<br>
+usual collection of bug fixes. This release supports X86/Linux,<br>
+AMD64/Linux, PPC32/Linux and PPC64/Linux.<br>
+<br>
+Performance, especially of Memcheck, is improved, Addrcheck has been<br>
+removed, Callgrind has been added, PPC64/Linux support has been added,<br>
+Lackey has been improved, and MPI support has been added. In detail:<br>
+<br>
+- Memcheck has improved speed and reduced memory use. Run times are<br>
+ typically reduced by 15-30%, averaging about 24% for SPEC CPU2000.<br>
+ The other tools have smaller but noticeable speed improvements. We<br>
+ are interested to hear what improvements users get.<br>
+<br>
+ Memcheck uses less memory due to the introduction of a compressed<br>
+ representation for shadow memory. The space overhead has been<br>
+ reduced by a factor of up to four, depending on program behaviour.<br>
+ This means you should be able to run programs that use more memory<br>
+ than before without hitting problems.<br>
+<br>
+- Addrcheck has been removed. It has not worked since version 2.4.0,<br>
+ and the speed and memory improvements to Memcheck make it redundant.<br>
+ If you liked using Addrcheck because it didn't give undefined value<br>
+ errors, you can use the new Memcheck option --undef-value-errors=no<br>
+ to get the same behaviour.<br>
+<br>
+- The number of undefined-value errors incorrectly reported by<br>
+ Memcheck has been reduced (such false reports were already very<br>
+ rare). In particular, efforts have been made to ensure Memcheck<br>
+ works really well with gcc 4.0/4.1-generated code on X86/Linux and<br>
+ AMD64/Linux.<br>
+<br>
+- Josef Weidendorfer's popular Callgrind tool has been added. Folding<br>
+ it in was a logical step given its popularity and usefulness, and<br>
+ makes it easier for us to ensure it works "out of the box" on all<br>
+ supported targets. The associated KDE KCachegrind GUI remains a<br>
+ separate project.<br>
+<br>
+- A new release of the Valkyrie GUI for Memcheck, version 1.2.0,<br>
+ accompanies this release. Improvements over previous releases<br>
+ include improved robustness, many refinements to the user interface,<br>
+ and use of a standard autoconf/automake build system. You can get<br>
+ it from http://www.valgrind.org/downloads/guis.html.<br>
+<br>
+- Valgrind now works on PPC64/Linux. As with the AMD64/Linux port,<br>
+ this supports programs using up to 32G of address space. On 64-bit<br>
+ capable PPC64/Linux setups, you get a dual architecture build so<br>
+ that both 32-bit and 64-bit executables can be run. Linux on POWER5<br>
+ is supported, and POWER4 is also believed to work. Both 32-bit and<br>
+ 64-bit DWARF2 is supported. This port is known to work well with<br>
+ both gcc-compiled and xlc/xlf-compiled code.<br>
+<br>
+- Floating point accuracy has been improved for PPC32/Linux.<br>
+ Specifically, the floating point rounding mode is observed on all FP<br>
+ arithmetic operations, and multiply-accumulate instructions are<br>
+ preserved by the compilation pipeline. This means you should get FP<br>
+ results which are bit-for-bit identical to a native run. These<br>
+ improvements are also present in the PPC64/Linux port.<br>
+<br>
+- Lackey, the example tool, has been improved:<br>
+<br>
+ * It has a new option --detailed-counts (off by default) which<br>
+ causes it to print out a count of loads, stores and ALU operations<br>
+ done, and their sizes.<br>
+<br>
+ * It has a new option --trace-mem (off by default) which causes it<br>
+ to print out a trace of all memory accesses performed by a<br>
+ program. It's a good starting point for building Valgrind tools<br>
+ that need to track memory accesses. Read the comments at the top<br>
+ of the file lackey/lk_main.c for details.<br>
+<br>
+ * The original instrumentation (counting numbers of instructions,<br>
+ jumps, etc) is now controlled by a new option --basic-counts. It<br>
+ is on by default.<br>
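+<br>
+ For example, to trace all memory accesses of a (placeholder)<br>
+ program:<br>
+<br>
+    valgrind --tool=lackey --trace-mem=yes ./myprog<br>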
+<br>
+- MPI support: partial support for debugging distributed applications<br>
+ using the MPI library specification has been added. Valgrind is<br>
+ aware of the memory state changes caused by a subset of the MPI<br>
+ functions, and will carefully check data passed to the (P)MPI_<br>
+ interface.<br>
+<br>
+- A new flag, --error-exitcode=, has been added. This allows changing<br>
+ the exit code in runs where Valgrind reported errors, which is<br>
+ useful when using Valgrind as part of an automated test suite.<br>
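+<br>
+ For example (the exit code value is only illustrative), the<br>
+ following makes the run exit with status 1 if any errors were<br>
+ reported:<br>
+<br>
+    valgrind --error-exitcode=1 ./myprog<br>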
+<br>
+- Various segfaults when reading old-style "stabs" debug information<br>
+ have been fixed.<br>
+<br>
+- A simple performance evaluation suite has been added. See<br>
+ perf/README and README_DEVELOPERS for details. There are<br>
+ various bells and whistles.<br>
+<br>
+- New configuration flags:<br>
+ --enable-only32bit<br>
+ --enable-only64bit<br>
+ By default, on 64 bit platforms (ppc64-linux, amd64-linux) the build<br>
+ system will attempt to build a Valgrind which supports both 32-bit<br>
+ and 64-bit executables. This may not be what you want, and you can<br>
+ override the default behaviour using these flags.<br>
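+<br>
+ For example, to build only the 64-bit version:<br>
+<br>
+    ./configure --enable-only64bit<br>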
+<br>
+Please note that Helgrind is still not working. We have made an<br>
+important step towards making it work again, however, with the<br>
+addition of function wrapping (see below).<br>
+<br>
+Other user-visible changes:<br>
+<br>
+- Valgrind now has the ability to intercept and wrap arbitrary<br>
+ functions. This is a preliminary step towards making Helgrind work<br>
+ again, and was required for MPI support.<br>
+<br>
+- There are some changes to Memcheck's client requests. Some of them<br>
+ have changed names:<br>
+<br>
+ MAKE_NOACCESS --> MAKE_MEM_NOACCESS<br>
+ MAKE_WRITABLE --> MAKE_MEM_UNDEFINED<br>
+ MAKE_READABLE --> MAKE_MEM_DEFINED<br>
+<br>
+ CHECK_WRITABLE --> CHECK_MEM_IS_ADDRESSABLE<br>
+ CHECK_READABLE --> CHECK_MEM_IS_DEFINED<br>
+ CHECK_DEFINED --> CHECK_VALUE_IS_DEFINED<br>
+<br>
+ The reason for the change is that the old names are subtly<br>
+ misleading. The old names will still work, but they are deprecated<br>
+ and may be removed in a future release.<br>
+<br>
+ We also added a new client request:<br>
+ <br>
+ MAKE_MEM_DEFINED_IF_ADDRESSABLE(a, len)<br>
+ <br>
+ which is like MAKE_MEM_DEFINED but only affects a byte if the byte is<br>
+ already addressable.<br>
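+<br>
+ A minimal sketch of the renamed requests in use (note that the<br>
+ macros in memcheck.h carry a VALGRIND_ prefix; the buffer and the<br>
+ sizes here are arbitrary):<br>
+<br>
+    #include <stdlib.h><br>
+    #include <valgrind/memcheck.h><br>
+<br>
+    int main(void)<br>
+    {<br>
+       char *buf = malloc(64);<br>
+       /* formerly MAKE_NOACCESS */<br>
+       VALGRIND_MAKE_MEM_NOACCESS(buf, 64);<br>
+       /* formerly MAKE_WRITABLE */<br>
+       VALGRIND_MAKE_MEM_UNDEFINED(buf, 64);<br>
+       /* new: marks bytes defined only where already addressable */<br>
+       VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE(buf, 64);<br>
+       free(buf);<br>
+       return 0;<br>
+    }<br>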
+<br>
+- The way client requests are encoded in the instruction stream has<br>
+ changed. Unfortunately, this means 3.2.0 will not honour client<br>
+ requests compiled into binaries using headers from earlier versions<br>
+ of Valgrind. We will try to keep the client request encodings more <br>
+ stable in future.<br>
+<br>
+BUGS FIXED:<br>
+<br>
+108258 NPTL pthread cleanup handlers not called <br>
+117290 valgrind is sigKILL'd on startup<br>
+117295 == 117290<br>
+118703 m_signals.c:1427 Assertion 'tst->status == VgTs_WaitSys'<br>
+118466 add %reg, %reg generates incorrect validity for bit 0<br>
+123210 New: strlen from ld-linux on amd64<br>
+123244 DWARF2 CFI reader: unhandled CFI instruction 0:18<br>
+123248 syscalls in glibc-2.4: openat, fstatat, symlinkat<br>
+123258 socketcall.recvmsg(msg.msg_iov[i] points to uninit<br>
+123535 mremap(new_addr) requires MREMAP_FIXED in 4th arg<br>
+123836 small typo in the doc<br>
+124029 ppc compile failed: `vor' gcc 3.3.5<br>
+124222 Segfault: @@don't know what type ':' is<br>
+124475 ppc32: crash (syscall?) timer_settime()<br>
+124499 amd64->IR: 0xF 0xE 0x48 0x85 (femms)<br>
+124528 FATAL: aspacem assertion failed: segment_is_sane<br>
+124697 vex x86->IR: 0xF 0x70 0xC9 0x0 (pshufw)<br>
+124892 vex x86->IR: 0xF3 0xAE (REPx SCASB)<br>
+126216 == 124892<br>
+124808 ppc32: sys_sched_getaffinity() not handled<br>
+n-i-bz Very long stabs strings crash m_debuginfo<br>
+n-i-bz amd64->IR: 0x66 0xF 0xF5 (pmaddwd)<br>
+125492 ppc32: support a bunch more syscalls<br>
+121617 ppc32/64: coredumping gives assertion failure<br>
+121814 Coregrind return error as exitcode patch<br>
+126517 == 121814<br>
+125607 amd64->IR: 0x66 0xF 0xA3 0x2 (btw etc)<br>
+125651 amd64->IR: 0xF8 0x49 0xFF 0xE3 (clc?)<br>
+126253 x86 movx is wrong<br>
+126451 3.2 SVN doesn't work on ppc32 CPU's without FPU<br>
+126217 increase # threads<br>
+126243 vex x86->IR: popw mem<br>
+126583 amd64->IR: 0x48 0xF 0xA4 0xC2 (shld $1,%rax,%rdx)<br>
+126668 amd64->IR: 0x1C 0xFF (sbb $0xff,%al)<br>
+126696 support for CDROMREADRAW ioctl and CDROMREADTOCENTRY fix<br>
+126722 assertion: segment_is_sane at m_aspacemgr/aspacemgr.c:1624<br>
+126938 bad checking for syscalls linkat, renameat, symlinkat<br>
+<br>
+(3.2.0RC1: 27 May 2006, vex r1626, valgrind r5947).<br>
+(3.2.0: 7 June 2006, vex r1628, valgrind r5957).<br>
+<br>
+<br>
+Release 3.1.1 (15 March 2006)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.1.1 fixes a bunch of bugs reported in 3.1.0. There is no new<br>
+functionality. The fixed bugs are:<br>
+<br>
+(note: "n-i-bz" means "not in bugzilla" -- this bug does not have<br>
+ a bugzilla entry).<br>
+<br>
+n-i-bz ppc32: fsub 3,3,3 in dispatcher doesn't clear NaNs<br>
+n-i-bz ppc32: __NR_{set,get}priority<br>
+117332 x86: missing line info with icc 8.1<br>
+117366 amd64: 0xDD 0x7C fnstsw<br>
+118274 == 117366<br>
+117367 amd64: 0xD9 0xF4 fxtract<br>
+117369 amd64: __NR_getpriority (140)<br>
+117419 ppc32: lfsu f5, -4(r11)<br>
+117419 ppc32: fsqrt<br>
+117936 more stabs problems (segfaults while reading debug info)<br>
+119914 == 117936<br>
+120345 == 117936<br>
+118239 amd64: 0xF 0xAE 0x3F (clflush)<br>
+118939 vm86old system call<br>
+n-i-bz memcheck/tests/mempool reads freed memory<br>
+n-i-bz AshleyP's custom-allocator assertion<br>
+n-i-bz Dirk strict-aliasing stuff<br>
+n-i-bz More space for debugger cmd line (Dan Thaler)<br>
+n-i-bz Clarified leak checker output message<br>
+n-i-bz AshleyP's --gen-suppressions output fix<br>
+n-i-bz cg_annotate's --sort option broken<br>
+n-i-bz OSet 64-bit fastcmp bug<br>
+n-i-bz VG_(getgroups) fix (Shinichi Noda)<br>
+n-i-bz ppc32: allocate from callee-saved FP/VMX regs<br>
+n-i-bz misaligned path word-size bug in mc_main.c<br>
+119297 Incorrect error message for sse code<br>
+120410 x86: prefetchw (0xF 0xD 0x48 0x4)<br>
+120728 TIOCSERGETLSR, TIOCGICOUNT, HDIO_GET_DMA ioctls<br>
+120658 Build fixes for gcc 2.96<br>
+120734 x86: Support for changing EIP in signal handler<br>
+n-i-bz memcheck/tests/zeropage de-looping fix<br>
+n-i-bz x86: fxtract doesn't work reliably<br>
+121662 x86: lock xadd (0xF0 0xF 0xC0 0x2)<br>
+121893 calloc does not always return zeroed memory<br>
+121901 no support for syscall tkill<br>
+n-i-bz Suppression update for Debian unstable<br>
+122067 amd64: fcmovnu (0xDB 0xD9)<br>
+n-i-bz ppc32: broken signal handling in cpu feature detection<br>
+n-i-bz ppc32: rounding mode problems (improved, partial fix only)<br>
+119482 ppc32: mtfsb1<br>
+n-i-bz ppc32: mtocrf/mfocrf<br>
+<br>
+(3.1.1: 15 March 2006, vex r1597, valgrind r5771).<br>
+<br>
+<br>
+Release 3.1.0 (25 November 2005)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.1.0 is a feature release with a number of significant improvements:<br>
+AMD64 support is much improved, PPC32 support is good enough to be<br>
+usable, and the handling of memory management and address space is<br>
+much more robust. In detail:<br>
+<br>
+- AMD64 support is much improved. The 64-bit vs. 32-bit issues in<br>
+ 3.0.X have been resolved, and it should "just work" now in all<br>
+ cases. On AMD64 machines both 64-bit and 32-bit versions of<br>
+ Valgrind are built. The right version will be invoked<br>
+ automatically, even when using --trace-children and mixing execution<br>
+ between 64-bit and 32-bit executables. Also, many more instructions<br>
+ are supported.<br>
+<br>
+- PPC32 support is now good enough to be usable. It should work with<br>
+ all tools, but please let us know if you have problems. Three<br>
+ classes of CPUs are supported: integer only (no FP, no Altivec),<br>
+ which covers embedded PPC uses, integer and FP but no Altivec<br>
+ (G3-ish), and CPUs capable of Altivec too (G4, G5).<br>
+<br>
+- Valgrind's address space management has been overhauled. As a<br>
+ result, Valgrind should be much more robust with programs that use<br>
+ large amounts of memory. There should be many fewer "memory<br>
+ exhausted" messages, and debug symbols should be read correctly on<br>
+ large (eg. 300MB+) executables. On 32-bit machines the full address<br>
+ space available to user programs (usually 3GB or 4GB) can be fully<br>
+ utilised. On 64-bit machines up to 32GB of space is usable; when<br>
+ using Memcheck that means your program can use up to about 14GB.<br>
+<br>
+ A side effect of this change is that Valgrind is no longer protected<br>
+ against wild writes by the client. This feature was nice but relied<br>
+ on the x86 segment registers and so wasn't portable.<br>
+<br>
+- Most users should not notice, but as part of the address space<br>
+ manager change, the way Valgrind is built has been changed. Each<br>
+ tool is now built as a statically linked stand-alone executable,<br>
+ rather than as a shared object that is dynamically linked with the<br>
+ core. The "valgrind" program invokes the appropriate tool depending<br>
+ on the --tool option. This slightly increases the amount of disk<br>
+ space used by Valgrind, but it greatly simplified many things and<br>
+ removed Valgrind's dependence on glibc.<br>
+<br>
+Please note that Addrcheck and Helgrind are still not working. Work<br>
+is underway to reinstate them (or equivalents). We apologise for the<br>
+inconvenience.<br>
+<br>
+Other user-visible changes:<br>
+<br>
+- The --weird-hacks option has been renamed --sim-hints.<br>
+<br>
+- The --time-stamp option no longer gives an absolute date and time.<br>
+ It now prints the time elapsed since the program began.<br>
+<br>
+- It should build with gcc-2.96.<br>
+<br>
+- Valgrind can now run itself (see README_DEVELOPERS for how).<br>
+ This is not much use to you, but it means the developers can now<br>
+ profile Valgrind using Cachegrind. As a result a couple of<br>
+ performance bad cases have been fixed.<br>
+<br>
+- The XML output format has changed slightly. See<br>
+ docs/internals/xml-output.txt.<br>
+<br>
+- Core dumping has been reinstated (it was disabled in 3.0.0 and 3.0.1).<br>
+ If your program crashes while running under Valgrind, a core file with<br>
+ the name "vgcore.<pid>" will be created (if your settings allow core<br>
+ file creation). Note that the floating point information is not all<br>
+ there. If Valgrind itself crashes, the OS will create a normal core<br>
+ file.<br>
+<br>
+The following are some user-visible changes that occurred in earlier<br>
+versions that may not have been announced, or were announced but not<br>
+widely noticed. So we're mentioning them now.<br>
+<br>
+- The --tool flag is optional once again; if you omit it, Memcheck<br>
+ is run by default.<br>
+<br>
+- The --num-callers flag now has a default value of 12. It was<br>
+ previously 4.<br>
+<br>
+- The --xml=yes flag causes Valgrind's output to be produced in XML<br>
+ format. This is designed to make it easy for other programs to<br>
+ consume Valgrind's output. The format is described in the file<br>
+ docs/internals/xml-format.txt.<br>
+<br>
+- The --gen-suppressions flag supports an "all" value that causes every<br>
+ suppression to be printed without asking.<br>
+<br>
+- The --log-file option no longer puts "pid" in the filename, eg. the<br>
+ old name "foo.pid12345" is now "foo.12345".<br>
+<br>
+- There are several graphical front-ends for Valgrind, such as Valkyrie,<br>
+ Alleyoop and Valgui. See http://www.valgrind.org/downloads/guis.html<br>
+ for a list.<br>
+<br>
+BUGS FIXED:<br>
+<br>
+109861 amd64 hangs at startup<br>
+110301 ditto<br>
+111554 valgrind crashes with Cannot allocate memory<br>
+111809 Memcheck tool doesn't start java<br>
+111901 cross-platform run of cachegrind fails on opteron<br>
+113468 (vgPlain_mprotect_range): Assertion 'r != -1' failed.<br>
+ 92071 Reading debugging info uses too much memory<br>
+109744 memcheck loses track of mmap from direct ld-linux.so.2<br>
+110183 tail of page with _end<br>
+ 82301 FV memory layout too rigid<br>
+ 98278 Infinite recursion possible when allocating memory<br>
+108994 Valgrind runs out of memory due to 133x overhead<br>
+115643 valgrind cannot allocate memory<br>
+105974 vg_hashtable.c static hash table<br>
+109323 ppc32: dispatch.S uses Altivec insn, which doesn't work on POWER. <br>
+109345 ptrace_setregs not yet implemented for ppc<br>
+110831 Would like to be able to run against both 32 and 64 bit <br>
+ binaries on AMD64<br>
+110829 == 110831<br>
+111781 compile of valgrind-3.0.0 fails on my linux (gcc 2.X prob)<br>
+112670 Cachegrind: cg_main.c:486 (handleOneStatement ...<br>
+112941 vex x86: 0xD9 0xF4 (fxtract)<br>
+110201 == 112941<br>
+113015 vex amd64->IR: 0xE3 0x14 0x48 0x83 (jrcxz)<br>
+113126 Crash with binaries built with -gstabs+/-ggdb<br>
+104065 == 113126<br>
+115741 == 113126<br>
+113403 Partial SSE3 support on x86<br>
+113541 vex: Grp5(x86) (alt encoding inc/dec) case 1<br>
+113642 valgrind crashes when trying to read debug information<br>
+113810 vex x86->IR: 66 0F F6 (66 + PSADBW == SSE PSADBW)<br>
+113796 read() and write() do not work if buffer is in shared memory<br>
+113851 vex x86->IR: (pmaddwd): 0x66 0xF 0xF5 0xC7<br>
+114366 vex amd64 cannnot handle __asm__( "fninit" )<br>
+114412 vex amd64->IR: 0xF 0xAD 0xC2 0xD3 (128-bit shift, shrdq?)<br>
+114455 vex amd64->IR: 0xF 0xAC 0xD0 0x1 (also shrdq)<br>
+115590: amd64->IR: 0x67 0xE3 0x9 0xEB (address size override)<br>
+115953 valgrind svn r5042 does not build with parallel make (-j3)<br>
+116057 maximum instruction size - VG_MAX_INSTR_SZB too small?<br>
+116483 shmat failes with invalid argument<br>
+102202 valgrind crashes when realloc'ing until out of memory<br>
+109487 == 102202<br>
+110536 == 102202<br>
+112687 == 102202<br>
+111724 vex amd64->IR: 0x41 0xF 0xAB (more BT{,S,R,C} fun n games)<br>
+111748 vex amd64->IR: 0xDD 0xE2 (fucom)<br>
+111785 make fails if CC contains spaces<br>
+111829 vex x86->IR: sbb AL, Ib<br>
+111851 vex x86->IR: 0x9F 0x89 (lahf/sahf)<br>
+112031 iopl on AMD64 and README_MISSING_SYSCALL_OR_IOCTL update<br>
+112152 code generation for Xin_MFence on x86 with SSE0 subarch<br>
+112167 == 112152<br>
+112789 == 112152<br>
+112199 naked ar tool is used in vex makefile<br>
+112501 vex x86->IR: movq (0xF 0x7F 0xC1 0xF) (mmx MOVQ)<br>
+113583 == 112501<br>
+112538 memalign crash<br>
+113190 Broken links in docs/html/<br>
+113230 Valgrind sys_pipe on x86-64 wrongly thinks file descriptors<br>
+ should be 64bit<br>
+113996 vex amd64->IR: fucomp (0xDD 0xE9)<br>
+114196 vex x86->IR: out %eax,(%dx) (0xEF 0xC9 0xC3 0x90)<br>
+114289 Memcheck fails to intercept malloc when used in an uclibc environment<br>
+114756 mbind syscall support<br>
+114757 Valgrind dies with assertion: Assertion 'noLargerThan > 0' failed<br>
+114563 stack tracking module not informed when valgrind switches threads<br>
+114564 clone() and stacks<br>
+114565 == 114564<br>
+115496 glibc crashes trying to use sysinfo page<br>
+116200 enable fsetxattr, fgetxattr, and fremovexattr for amd64<br>
+<br>
+(3.1.0RC1: 20 November 2005, vex r1466, valgrind r5224).<br>
+(3.1.0: 26 November 2005, vex r1471, valgrind r5235).<br>
+<br>
+<br>
+Release 3.0.1 (29 August 2005)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.0.1 fixes a bunch of bugs reported in 3.0.0. There is no new<br>
+functionality. Some of the fixed bugs are critical, so if you<br>
+use/distribute 3.0.0, an upgrade to 3.0.1 is recommended. The fixed<br>
+bugs are:<br>
+<br>
+(note: "n-i-bz" means "not in bugzilla" -- this bug does not have<br>
+ a bugzilla entry).<br>
+<br>
+109313 (== 110505) x86 cmpxchg8b<br>
+n-i-bz x86: track but ignore changes to %eflags.AC (alignment check)<br>
+110102 dis_op2_E_G(amd64)<br>
+110202 x86 sys_waitpid(#286)<br>
+110203 clock_getres(,0)<br>
+110208 execve fail wrong retval<br>
+110274 SSE1 now mandatory for x86<br>
+110388 amd64 0xDD 0xD1<br>
+110464 amd64 0xDC 0x1D FCOMP<br>
+110478 amd64 0xF 0xD PREFETCH<br>
+n-i-bz XML <unique> printing wrong<br>
+n-i-bz Dirk r4359 (amd64 syscalls from trunk)<br>
+110591 amd64 and x86: rdtsc not implemented properly<br>
+n-i-bz Nick r4384 (stub implementations of Addrcheck and Helgrind)<br>
+110652 AMD64 valgrind crashes on cwtd instruction<br>
+110653 AMD64 valgrind crashes on sarb $0x4,foo(%rip) instruction<br>
+110656 PATH=/usr/bin::/bin valgrind foobar stats ./fooba<br>
+110657 Small test fixes<br>
+110671 vex x86->IR: unhandled instruction bytes: 0xF3 0xC3 (rep ret)<br>
+n-i-bz Nick (Cachegrind should not assert when it encounters a client<br>
+ request.)<br>
+110685 amd64->IR: unhandled instruction bytes: 0xE1 0x56 (loope Jb)<br>
+110830 configuring with --host fails to build 32 bit on 64 bit target<br>
+110875 Assertion when execve fails<br>
+n-i-bz Updates to Memcheck manual<br>
+n-i-bz Fixed broken malloc_usable_size()<br>
+110898 opteron instructions missing: btq btsq btrq bsfq<br>
+110954 x86->IR: unhandled instruction bytes: 0xE2 0xF6 (loop Jb)<br>
+n-i-bz Make suppressions work for "???" lines in stacktraces.<br>
+111006 bogus warnings from linuxthreads<br>
+111092 x86: dis_Grp2(Reg): unhandled case(x86) <br>
+111231 sctp_getladdrs() and sctp_getpaddrs() returns uninitialized<br>
+ memory<br>
+111102 (comment #4) Fixed 64-bit unclean "silly arg" message<br>
+n-i-bz vex x86->IR: unhandled instruction bytes: 0x14 0x0<br>
+n-i-bz minor umount/fcntl wrapper fixes<br>
+111090 Internal Error running Massif<br>
+101204 noisy warning<br>
+111513 Illegal opcode for SSE instruction (x86 movups)<br>
+111555 VEX/Makefile: CC is set to gcc<br>
+n-i-bz Fix XML bugs in FAQ<br>
+<br>
+(3.0.1: 29 August 05,<br>
+ vex/branches/VEX_3_0_BRANCH r1367,<br>
+ valgrind/branches/VALGRIND_3_0_BRANCH r4574).<br>
+<br>
+<br>
+<br>
+Release 3.0.0 (3 August 2005)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+3.0.0 is a major overhaul of Valgrind. The most significant user<br>
+visible change is that Valgrind now supports architectures other than<br>
+x86. The new architectures it supports are AMD64 and PPC32, and the<br>
+infrastructure is present for other architectures to be added later.<br>
+<br>
+AMD64 support works well, but has some shortcomings:<br>
+<br>
+- It generally won't be as solid as the x86 version. For example,<br>
+ support for more obscure instructions and system calls may be missing.<br>
+ We will fix these as they arise.<br>
+<br>
+- Address space may be limited; see the point about<br>
+ position-independent executables below.<br>
+<br>
+- If Valgrind is built on an AMD64 machine, it will only run 64-bit<br>
+ executables. If you want to run 32-bit x86 executables under Valgrind<br>
+ on an AMD64, you will need to build Valgrind on an x86 machine and<br>
+ copy it to the AMD64 machine. And it probably won't work if you do<br>
+ something tricky like exec'ing a 32-bit program from a 64-bit program<br>
+ while using --trace-children=yes. We hope to improve this situation<br>
+ in the future.<br>
+<br>
+The PPC32 support is very basic. It may not work reliably even for<br>
+small programs, but it's a start. Many thanks to Paul Mackerras for<br>
+his great work that enabled this support. We are working to make<br>
+PPC32 usable as soon as possible.<br>
+<br>
+Other user-visible changes:<br>
+<br>
+- Valgrind is no longer built by default as a position-independent<br>
+ executable (PIE), as this caused too many problems.<br>
+<br>
+ Without PIE enabled, AMD64 programs will only be able to access 2GB of<br>
+ address space. We will fix this eventually, but not for the moment.<br>
+ <br>
+ Use --enable-pie at configure-time to turn this on.<br>
+<br>
+- Support for programs that use stack-switching has been improved. Use<br>
+ the --max-stackframe flag for simple cases, and the<br>
+ VALGRIND_STACK_REGISTER, VALGRIND_STACK_DEREGISTER and<br>
+ VALGRIND_STACK_CHANGE client requests for trickier cases.<br>
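+<br>
+ A minimal sketch of the stack-registration requests (the stack size<br>
+ and the malloc'd stack are arbitrary; the actual stack-switching<br>
+ code is elided):<br>
+<br>
+    #include <stdlib.h><br>
+    #include <valgrind/valgrind.h><br>
+<br>
+    int main(void)<br>
+    {<br>
+       size_t sz = 64 * 1024;<br>
+       char *stk = malloc(sz);<br>
+       /* tell Valgrind about the new stack before switching to it */<br>
+       unsigned id = VALGRIND_STACK_REGISTER(stk, stk + sz);<br>
+       /* ... switch to the new stack, do work, switch back ... */<br>
+       VALGRIND_STACK_DEREGISTER(id);<br>
+       free(stk);<br>
+       return 0;<br>
+    }<br>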
+<br>
+- Support for programs that use self-modifying code has been improved,<br>
+ in particular programs that put temporary code fragments on the stack.<br>
+ This helps for C programs compiled with GCC that use nested functions,<br>
+ and also Ada programs. This is controlled with the --smc-check<br>
+ flag, although the default setting should work in most cases.<br>
+<br>
+- Output can now be printed in XML format. This should make it easier<br>
+ for tools such as GUI front-ends and automated error-processing<br>
+ schemes to use Valgrind output as input. The --xml flag controls this.<br>
+ As part of this change, ELF directory information is read from executables,<br>
+ so absolute source file paths are available if needed.<br>
+<br>
+- Programs that allocate many heap blocks may run faster, due to<br>
+ improvements in certain data structures.<br>
+<br>
+- Addrcheck is currently not working. We hope to get it working again<br>
+ soon. Helgrind is still not working, as was the case for the 2.4.0<br>
+ release.<br>
+<br>
+- The JITter has been completely rewritten, and is now in a separate<br>
+ library, called Vex. This enabled a lot of the user-visible changes,<br>
+ such as new architecture support. The new JIT unfortunately translates<br>
+ more slowly than the old one, so programs may take longer to start.<br>
+ We believe the code quality it produces is about the same, so once<br>
+ started, programs should run at about the same speed. Feedback about<br>
+ this would be useful.<br>
+<br>
+ On the plus side, Vex and hence Memcheck tracks value flow properly<br>
+ through floating point and vector registers, something the 2.X line<br>
+ could not do. That means that Memcheck is much more likely to be<br>
+ usably accurate on vectorised code.<br>
+<br>
+- There is a subtle change to the way exiting of threaded programs<br>
+ is handled. In 3.0, Valgrind's final diagnostic output (leak check,<br>
+ etc) is not printed until the last thread exits. If the last thread<br>
+ to exit was not the original thread which started the program, any<br>
+ other process wait()-ing on this one to exit may conclude it has<br>
+ finished before the diagnostic output is printed. This may not be<br>
+ what you expect. 2.X had a different scheme which avoided this<br>
+ problem, but caused deadlocks under obscure circumstances, so we<br>
+ are trying something different for 3.0.<br>
+<br>
+- Small changes in control log file naming which make it easier to<br>
+ use valgrind for debugging MPI-based programs. The relevant<br>
+ new flags are --log-file-exactly= and --log-file-qualifier=.<br>
+<br>
+- As part of adding AMD64 support, DWARF2 CFI-based stack unwinding<br>
+ support was added. In principle this means Valgrind can produce<br>
+ meaningful backtraces on x86 code compiled with -fomit-frame-pointer<br>
+ providing you also compile your code with -fasynchronous-unwind-tables.<br>
+<br>
+- The documentation build system has been completely redone.<br>
+ The documentation masters are now in XML format, and from that<br>
+ HTML, PostScript and PDF documentation is generated. As a result<br>
+ the manual is now available in book form. Note that the<br>
+ documentation in the source tarballs is pre-built, so you don't need<br>
+ any XML processing tools to build Valgrind from a tarball.<br>
+<br>
+Changes that are not user-visible:<br>
+<br>
+- The code has been massively overhauled in order to modularise it.<br>
+ As a result we hope it is easier to navigate and understand.<br>
+<br>
+- Lots of code has been rewritten.<br>
+<br>
+BUGS FIXED:<br>
+<br>
+110046 sz == 4 assertion failed <br>
+109810 vex amd64->IR: unhandled instruction bytes: 0xA3 0x4C 0x70 0xD7<br>
+109802 Add a plausible_stack_size command-line parameter ?<br>
+109783 unhandled ioctl TIOCMGET (running hw detection tool discover) <br>
+109780 unhandled ioctl BLKSSZGET (running fdisk -l /dev/hda)<br>
+109718 vex x86->IR: unhandled instruction: ffreep <br>
+109429 AMD64 unhandled syscall: 127 (sigpending)<br>
+109401 false positive uninit in strchr from ld-linux.so.2<br>
+109385 "stabs" parse failure <br>
+109378 amd64: unhandled instruction REP NOP<br>
+109376 amd64: unhandled instruction LOOP Jb <br>
+109363 AMD64 unhandled instruction bytes <br>
+109362 AMD64 unhandled syscall: 24 (sched_yield)<br>
+109358 fork() won't work with valgrind-3.0 SVN<br>
+109332 amd64 unhandled instruction: ADC Ev, Gv<br>
+109314 Bogus memcheck report on amd64<br>
+108883 Crash; vg_memory.c:905 (vgPlain_init_shadow_range):<br>
+ Assertion `vgPlain_defined_init_shadow_page()' failed.<br>
+108349 mincore syscall parameter checked incorrectly <br>
+108059 build infrastructure: small update<br>
+107524 epoll_ctl event parameter checked on EPOLL_CTL_DEL<br>
+107123 Vex dies with unhandled instructions: 0xD9 0x31 0xF 0xAE<br>
+106841 auxmap & openGL problems<br>
+106713 SDL_Init causes valgrind to exit<br>
+106352 setcontext and makecontext not handled correctly <br>
+106293 addresses beyond initial client stack allocation <br>
+ not checked in VALGRIND_DO_LEAK_CHECK<br>
+106283 PIE client programs are loaded at address 0<br>
+105831 Assertion `vgPlain_defined_init_shadow_page()' failed.<br>
+105039 long run-times probably due to memory manager <br>
+104797 valgrind needs to be aware of BLKGETSIZE64<br>
+103594 unhandled instruction: FICOM<br>
+103320 Valgrind 2.4.0 fails to compile with gcc 3.4.3 and -O0<br>
+103168 potentially memory leak in coregrind/ume.c <br>
+102039 bad permissions for mapped region at address 0xB7C73680<br>
+101881 weird assertion problem<br>
+101543 Support fadvise64 syscalls<br>
+75247 x86_64/amd64 support (the biggest "bug" we have ever fixed)<br>
+<br>
+(3.0RC1: 27 July 05, vex r1303, valgrind r4283).<br>
+(3.0.0: 3 August 05, vex r1313, valgrind r4316).<br>
+<br>
+<br>
+<br>
+Stable release 2.4.1 (1 August 2005)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+(The notes for this release have been lost. Sorry! It would have<br>
+contained various bug fixes but no new features.)<br>
+<br>
+<br>
+<br>
+Stable release 2.4.0 (March 2005) -- CHANGES RELATIVE TO 2.2.0<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+2.4.0 brings many significant changes and bug fixes. The most<br>
+significant user-visible change is that we no longer supply our own<br>
+pthread implementation. Instead, Valgrind is finally capable of<br>
+running the native thread library, either LinuxThreads or NPTL.<br>
+<br>
+This means our libpthread has gone, along with the bugs associated<br>
+with it. Valgrind now supports the kernel's threading syscalls, and<br>
+lets you use your standard system libpthread. As a result:<br>
+<br>
+* There are many fewer system dependencies and strange library-related<br>
+ bugs. There is a small performance improvement, and a large<br>
+ stability improvement.<br>
+<br>
+* On the downside, Valgrind can no longer report misuses of the POSIX<br>
+ PThreads API. It also means that Helgrind currently does not work.<br>
+ We hope to fix these problems in a future release.<br>
+<br>
+Note that running the native thread libraries does not mean Valgrind<br>
+is able to provide genuine concurrent execution on SMPs. We still<br>
+impose the restriction that only one thread is running at any given<br>
+time.<br>
+<br>
+There are many other significant changes too:<br>
+<br>
+* Memcheck is (once again) the default tool.<br>
+<br>
+* The default stack backtrace is now 12 call frames, rather than 4.<br>
+<br>
+* Suppressions can have up to 25 call frame matches, rather than 4.<br>
+<br>
+* Memcheck and Addrcheck use less memory. Under some circumstances,<br>
+ they no longer allocate shadow memory if there are large regions of<br>
+ memory with the same A/V states - such as an mmaped file.<br>
+<br>
+* The memory-leak detector in Memcheck and Addrcheck has been<br>
+ improved. It now reports more types of memory leak, including<br>
+ leaked cycles. When reporting leaked memory, it can distinguish<br>
+ between directly leaked memory (memory with no references), and<br>
+ indirectly leaked memory (memory only referred to by other leaked<br>
+ memory).<br>
+<br>
+* Memcheck's confusion over the effect of mprotect() has been fixed:<br>
+ previously mprotect could erroneously mark undefined data as<br>
+ defined.<br>
+<br>
+* Signal handling is much improved and should be very close to what<br>
+ you get when running natively. <br>
+<br>
+ One result of this is that Valgrind observes changes to sigcontexts<br>
+ passed to signal handlers. Such modifications will take effect when<br>
+ the signal handler returns. You will need to run with --single-step=yes to<br>
+ make this useful.<br>
+<br>
+* Valgrind is built in Position Independent Executable (PIE) format if<br>
+ your toolchain supports it. This allows it to take advantage of all<br>
+ the available address space on systems with 4Gbyte user address<br>
+ spaces.<br>
+<br>
+* Valgrind can now run itself (requires PIE support).<br>
+<br>
+* Syscall arguments are now checked for validity. Previously all<br>
+ memory used by syscalls was checked, but now the actual values<br>
+ passed are also checked.<br>
+<br>
+* Syscall wrappers are more robust against bad addresses being passed<br>
+ to syscalls: they will fail with EFAULT rather than killing Valgrind<br>
+ with SIGSEGV.<br>
+<br>
+* Because clone() is directly supported, some non-pthread uses of it<br>
+ will work. Partial sharing (where some resources are shared, and<br>
+ some are not) is not supported.<br>
+<br>
+* open() and readlink() on /proc/self/exe are supported.<br>
+<br>
+BUGS FIXED:<br>
+<br>
+88520 pipe+fork+dup2 kills the main program<br>
+88604 Valgrind Aborts when using $VALGRIND_OPTS and user progra...<br>
+88614 valgrind: vg_libpthread.c:2323 (read): Assertion `read_pt...<br>
+88703 Stabs parser fails to handle ";"<br>
+88886 ioctl wrappers for TIOCMBIS and TIOCMBIC<br>
+89032 valgrind pthread_cond_timedwait fails<br>
+89106 the 'impossible' happened<br>
+89139 Missing sched_setaffinity & sched_getaffinity<br>
+89198 valgrind lacks support for SIOCSPGRP and SIOCGPGRP<br>
+89263 Missing ioctl translations for scsi-generic and CD playing<br>
+89440 tests/deadlock.c line endings<br>
+89481 `impossible' happened: EXEC FAILED<br>
+89663 valgrind 2.2.0 crash on Redhat 7.2<br>
+89792 Report pthread_mutex_lock() deadlocks instead of returnin...<br>
+90111 statvfs64 gives invalid error/warning<br>
+90128 crash+memory fault with stabs generated by gnat for a run...<br>
+90778 VALGRIND_CHECK_DEFINED() not as documented in memcheck.h<br>
+90834 cachegrind crashes at end of program without reporting re...<br>
+91028 valgrind: vg_memory.c:229 (vgPlain_unmap_range): Assertio...<br>
+91162 valgrind crash while debugging drivel 1.2.1<br>
+91199 Unimplemented function<br>
+91325 Signal routing does not propagate the siginfo structure<br>
+91599 Assertion `cv == ((void *)0)'<br>
+91604 rw_lookup clears orig and sends the NULL value to rw_new<br>
+91821 Small problems building valgrind with $top_builddir ne $t...<br>
+91844 signal 11 (SIGSEGV) at get_tcb (libpthread.c:86) in corec...<br>
+92264 UNIMPLEMENTED FUNCTION: pthread_condattr_setpshared<br>
+92331 per-target flags necessitate AM_PROG_CC_C_O<br>
+92420 valgrind doesn't compile with linux 2.6.8.1/9<br>
+92513 Valgrind 2.2.0 generates some warning messages<br>
+92528 vg_symtab2.c:170 (addLoc): Assertion `loc->size > 0' failed.<br>
+93096 unhandled ioctl 0x4B3A and 0x5601<br>
+93117 Tool and core interface versions do not match<br>
+93128 Can't run valgrind --tool=memcheck because of unimplement...<br>
+93174 Valgrind can crash if passed bad args to certain syscalls<br>
+93309 Stack frame in new thread is badly aligned<br>
+93328 Wrong types used with sys_sigprocmask()<br>
+93763 /usr/include/asm/msr.h is missing<br>
+93776 valgrind: vg_memory.c:508 (vgPlain_find_map_space): Asser...<br>
+93810 fcntl() argument checking a bit too strict<br>
+94378 Assertion `tst->sigqueue_head != tst->sigqueue_tail' failed.<br>
+94429 valgrind 2.2.0 segfault with mmap64 in glibc 2.3.3<br>
+94645 Impossible happened: PINSRW mem<br>
+94953 valgrind: the `impossible' happened: SIGSEGV<br>
+95667 Valgrind does not work with any KDE app<br>
+96243 Assertion 'res==0' failed<br>
+96252 stage2 loader of valgrind fails to allocate memory<br>
+96520 All programs crashing at _dl_start (in /lib/ld-2.3.3.so) ...<br>
+96660 ioctl CDROMREADTOCENTRY causes bogus warnings<br>
+96747 After looping in a segfault handler, the impossible happens<br>
+96923 Zero sized arrays crash valgrind trace back with SIGFPE<br>
+96948 valgrind stops with assertion failure regarding mmap2<br>
+96966 valgrind fails when application opens more than 16 sockets<br>
+97398 valgrind: vg_libpthread.c:2667 Assertion failed<br>
+97407 valgrind: vg_mylibc.c:1226 (vgPlain_safe_fd): Assertion `...<br>
+97427 "Warning: invalid file descriptor -1 in syscall close()" ...<br>
+97785 missing backtrace<br>
+97792 build in obj dir fails - autoconf / makefile cleanup<br>
+97880 pthread_mutex_lock fails from shared library (special ker...<br>
+97975 program aborts without ang VG messages<br>
+98129 Failed when open and close file 230000 times using stdio<br>
+98175 Crashes when using valgrind-2.2.0 with a program using al...<br>
+98288 Massif broken<br>
+98303 UNIMPLEMENTED FUNCTION pthread_condattr_setpshared<br>
+98630 failed--compilation missing warnings.pm, fails to make he...<br>
+98756 Cannot valgrind signal-heavy kdrive X server<br>
+98966 valgrinding the JVM fails with a sanity check assertion<br>
+99035 Valgrind crashes while profiling<br>
+99142 loops with message "Signal 11 being dropped from thread 0...<br>
+99195 threaded apps crash on thread start (using QThread::start...<br>
+99348 Assertion `vgPlain_lseek(core_fd, 0, 1) == phdrs[i].p_off...<br>
+99568 False negative due to mishandling of mprotect<br>
+99738 valgrind memcheck crashes on program that uses sigitimer<br>
+99923 0-sized allocations are reported as leaks<br>
+99949 program seg faults after exit()<br>
+100036 "newSuperblock's request for 1048576 bytes failed"<br>
+100116 valgrind: (pthread_cond_init): Assertion `sizeof(* cond) ...<br>
+100486 memcheck reports "valgrind: the `impossible' happened: V...<br>
+100833 second call to "mremap" fails with EINVAL<br>
+101156 (vgPlain_find_map_space): Assertion `(addr & ((1 << 12)-1...<br>
+101173 Assertion `recDepth >= 0 && recDepth < 500' failed<br>
+101291 creating threads in a forked process fails<br>
+101313 valgrind causes different behavior when resizing a window...<br>
+101423 segfault for c++ array of floats<br>
+101562 valgrind massif dies on SIGINT even with signal handler r...<br>
+<br>
+<br>
+Stable release 2.2.0 (31 August 2004) -- CHANGES RELATIVE TO 2.0.0<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+2.2.0 brings nine months worth of improvements and bug fixes. We<br>
+believe it to be a worthy successor to 2.0.0. There are literally<br>
+hundreds of bug fixes and minor improvements. There are also some<br>
+fairly major user-visible changes:<br>
+<br>
+* A complete overhaul of handling of system calls and signals, and <br>
+ their interaction with threads. In general, the accuracy of the <br>
+ system call, thread and signal simulations is much improved:<br>
+<br>
+ - Blocking system calls behave exactly as they do when running<br>
+ natively (not on valgrind). That is, if a syscall blocks only the<br>
+ calling thread when running natively, then it behaves the same on<br>
+ valgrind. No more mysterious hangs because V doesn't know that some<br>
+ syscall or other should block only the calling thread.<br>
+<br>
+ - Interrupted syscalls should now give more faithful results.<br>
+<br>
+ - Signal contexts in signal handlers are supported.<br>
+<br>
+* Improvements to NPTL support to the extent that V now works <br>
+ properly on NPTL-only setups.<br>
+<br>
+* Greater isolation between Valgrind and the program being run, so<br>
+ the program is less likely to inadvertently kill Valgrind by<br>
+ doing wild writes.<br>
+<br>
+* Massif: a new space profiling tool. Try it! It's cool, and it'll<br>
+ tell you in detail where and when your C/C++ code is allocating heap.<br>
+ Draws pretty .ps pictures of memory use against time. A potentially<br>
+ powerful tool for making sense of your program's space use.<br>
+<br>
+* File descriptor leakage checks. When enabled, Valgrind will print out<br>
+ a list of open file descriptors on exit.<br>
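+<br>
+  To enable it, use the flag given in the 2.1.0 notes further down:<br>
+<br>
+    # list open file descriptors (and where they were opened) on exit<br>
+    valgrind --track-fds=yes ./myprog<br>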
+<br>
+* Improved SSE2/SSE3 support.<br>
+<br>
+* Time-stamped output; use --time-stamp=yes<br>
+<br>
+<br>
+<br>
+Stable release 2.2.0 (31 August 2004) -- CHANGES RELATIVE TO 2.1.2<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+2.2.0 is not much different from 2.1.2, released seven weeks ago.<br>
+A number of bugs have been fixed, most notably #85658, which gave<br>
+problems for quite a few people. There have been many internal<br>
+cleanups, but those are not user visible.<br>
+<br>
+The following bugs have been fixed since 2.1.2:<br>
+<br>
+85658 Assert in coregrind/vg_libpthread.c:2326 (open64) !=<br>
+ (void*)0 failed<br>
+ This bug was reported multiple times, and so the following<br>
+ duplicates of it are also fixed: 87620, 85796, 85935, 86065, <br>
+ 86919, 86988, 87917, 88156<br>
+<br>
+80716 Semaphore mapping bug caused by unmap (sem_destroy)<br>
+ (Was fixed prior to 2.1.2)<br>
+<br>
+86987 semctl and shmctl syscalls family is not handled properly<br>
+<br>
+86696 valgrind 2.1.2 + RH AS2.1 + librt<br>
+<br>
+86730 valgrind locks up at end of run with assertion failure <br>
+ in __pthread_unwind<br>
+<br>
+86641 memcheck doesn't work with Mesa OpenGL/ATI on Suse 9.1<br>
+ (also fixes 74298, a duplicate of this)<br>
+<br>
+85947 MMX/SSE unhandled instruction 'sfence'<br>
+<br>
+84978 Wrong error "Conditional jump or move depends on<br>
+ uninitialised value" resulting from "sbbl %reg, %reg"<br>
+<br>
+86254 ssort() fails when signed int return type from comparison is <br>
+ too small to handle result of unsigned int subtraction<br>
+<br>
+87089 memalign( 4, xxx) makes valgrind assert<br>
+<br>
+86407 Add support for low-level parallel port driver ioctls.<br>
+<br>
+70587 Add timestamps to Valgrind output? (wishlist)<br>
+<br>
+84937 vg_libpthread.c:2505 (se_remap): Assertion `res == 0'<br>
+ (fixed prior to 2.1.2)<br>
+<br>
+86317 cannot load libSDL-1.2.so.0 using valgrind<br>
+<br>
+86989 memcpy from mac_replace_strmem.c complains about<br>
+ uninitialized pointers passed when length to copy is zero<br>
+<br>
+85811 gnu pascal symbol causes segmentation fault; ok in 2.0.0<br>
+<br>
+79138 writing to sbrk()'d memory causes segfault<br>
+<br>
+77369 sched deadlock while signal received during pthread_join<br>
+ and the joined thread exited<br>
+<br>
+88115 In signal handler for SIGFPE, siginfo->si_addr is wrong <br>
+ under Valgrind<br>
+<br>
+78765 Massif crashes on app exit if FP exceptions are enabled<br>
+<br>
+Additionally there are the following changes, which are not <br>
+connected to any bug report numbers, AFAICS:<br>
+<br>
+* Fix scary bug causing mis-identification of SSE stores vs<br>
+ loads and so causing memcheck to sometimes give nonsense results<br>
+ on SSE code.<br>
+<br>
+* Add support for the POSIX message queue system calls.<br>
+<br>
+* Fix to allow 32-bit Valgrind to run on AMD64 boxes. Note: this does<br>
+ NOT allow Valgrind to work with 64-bit executables - only with 32-bit<br>
+ executables on an AMD64 box.<br>
+<br>
+* At configure time, only check whether linux/mii.h can be processed <br>
+ so that we don't generate ugly warnings by trying to compile it.<br>
+<br>
+* Add support for POSIX clocks and timers.<br>
+<br>
+<br>
+<br>
+Developer (cvs head) release 2.1.2 (18 July 2004)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+2.1.2 contains four months worth of bug fixes and refinements.<br>
+Although officially a developer release, we believe it to be stable<br>
+enough for widespread day-to-day use. 2.1.2 is pretty good, so try it<br>
+first, although there is a chance it won't work. If so then try 2.0.0<br>
+and tell us what went wrong. 2.1.2 fixes a lot of problems present<br>
+in 2.0.0 and is generally a much better product.<br>
+<br>
+Relative to 2.1.1, a large number of minor problems with 2.1.1 have<br>
+been fixed, and so if you use 2.1.1 you should try 2.1.2. Users of<br>
+the last stable release, 2.0.0, might also want to try this release.<br>
+<br>
+The following bugs, and probably many more, have been fixed. These<br>
+are listed at http://bugs.kde.org. Reporting a bug for valgrind at<br>
+http://bugs.kde.org is much more likely to get you a fix than<br>
+mailing developers directly, so please keep sending bugs<br>
+there.<br>
+<br>
+76869 Crashes when running any tool under Fedora Core 2 test1<br>
+ This fixes the problem with returning from a signal handler <br>
+ when VDSOs are turned off in FC2.<br>
+<br>
+69508 java 1.4.2 client fails with erroneous "stack size too small".<br>
+ This fix makes more of the pthread stack attribute related <br>
+ functions work properly. Java still doesn't work though.<br>
+<br>
+71906 malloc alignment should be 8, not 4<br>
+ All memory returned by malloc/new etc is now at least<br>
+ 8-byte aligned.<br>
+<br>
+81970 vg_alloc_ThreadState: no free slots available<br>
+ (closed because the workaround is simple: increase<br>
+ VG_N_THREADS, rebuild and try again.)<br>
+<br>
+78514 Conditional jump or move depends on uninitialized value(s)<br>
+ (a slight mishandling of FP code in memcheck)<br>
+<br>
+77952 pThread Support (crash) (due to initialisation-ordering probs)<br>
+ (also 85118)<br>
+<br>
+80942 Addrcheck wasn't doing overlap checking as it should.<br>
+78048 return NULL on malloc/new etc failure, instead of asserting<br>
+73655 operator new() override in user .so files often doesn't get picked up<br>
+83060 Valgrind does not handle native kernel AIO<br>
+69872 Create proper coredumps after fatal signals<br>
+82026 failure with new glibc versions: __libc_* functions are not exported<br>
+70344 UNIMPLEMENTED FUNCTION: tcdrain <br>
+81297 Cancellation of pthread_cond_wait does not require mutex<br>
+82872 Using debug info from additional packages (wishlist)<br>
+83025 Support for ioctls FIGETBSZ and FIBMAP<br>
+83340 Support for ioctl HDIO_GET_IDENTITY<br>
+79714 Support for the semtimedop system call.<br>
+77022 Support for ioctls FBIOGET_VSCREENINFO and FBIOGET_FSCREENINFO<br>
+82098 hp2ps ansification (wishlist)<br>
+83573 Valgrind SIGSEGV on execve<br>
+82999 show which cmdline option was erroneous (wishlist)<br>
+83040 make valgrind VPATH and distcheck-clean (wishlist)<br>
+83998 Assertion `newfd > vgPlain_max_fd' failed (see below)<br>
+82722 Unchecked mmap in as_pad leads to mysterious failures later<br>
+78958 memcheck seg faults while running Mozilla <br>
+85416 Arguments with colon (e.g. --logsocket) ignored<br>
+<br>
+<br>
+Additionally there are the following changes, which are not <br>
+connected to any bug report numbers, AFAICS:<br>
+<br>
+* Rearranged address space layout relative to 2.1.1, so that<br>
+ Valgrind/tools will run out of memory later than currently in many<br>
+ circumstances. This is good news esp. for Calltree. It should<br>
+ be possible for client programs to allocate over 800MB of<br>
+ memory when using memcheck now.<br>
+<br>
+* Improved checking when laying out memory. Should hopefully avoid<br>
+ the random segmentation faults that 2.1.1 sometimes caused.<br>
+<br>
+* Support for Fedora Core 2 and SuSE 9.1. Improvements to NPTL<br>
+ support to the extent that V now works properly on NPTL-only setups.<br>
+<br>
+* Renamed the following options:<br>
+ --logfile-fd --> --log-fd<br>
+ --logfile --> --log-file<br>
+ --logsocket --> --log-socket<br>
+ to be consistent with each other and other options (esp. --input-fd).<br>
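+<br>
+  For example, a run that previously used --logfile=vg.log would now be:<br>
+<br>
+    # file name is a placeholder<br>
+    valgrind --log-file=vg.log ./myprog<br>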
+<br>
+* Add support for SIOCGMIIPHY, SIOCGMIIREG and SIOCSMIIREG ioctls and<br>
+ improve the checking of other interface related ioctls.<br>
+<br>
+* Fix building with gcc-3.4.1.<br>
+<br>
+* Remove limit on number of semaphores supported.<br>
+<br>
+* Add support for syscalls: set_tid_address (258), acct (51).<br>
+<br>
+* Support instruction "repne movs" -- not official but seems to occur.<br>
+<br>
+* Implement an emulated soft limit for file descriptors in addition to<br>
+ the current reserved area, which effectively acts as a hard limit. The<br>
+ setrlimit system call now simply updates the emulated limits as best<br>
+ as possible - the hard limit is not allowed to move at all and just<br>
+ returns EPERM if you try and change it. This should stop reductions<br>
+ in the soft limit causing assertions when valgrind tries to allocate<br>
+ descriptors from the reserved area.<br>
+ (This actually came from bug #83998).<br>
+<br>
+* Major overhaul of Cachegrind implementation. First user-visible change<br>
+ is that cachegrind.out files are now typically 90% smaller than they<br>
+ used to be; code annotation times are correspondingly much smaller.<br>
+ Second user-visible change is that hit/miss counts for code that is<br>
+ unloaded at run-time is no longer dumped into a single "discard" pile,<br>
+ but accurately preserved.<br>
+<br>
+* Client requests for telling valgrind about memory pools.<br>
+<br>
+<br>
+<br>
+Developer (cvs head) release 2.1.1 (12 March 2004)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+2.1.1 contains some internal structural changes needed for V's<br>
+long-term future. These don't affect end-users. Most notable<br>
+user-visible changes are:<br>
+<br>
+* Greater isolation between Valgrind and the program being run, so<br>
+ the program is less likely to inadvertently kill Valgrind by<br>
+ doing wild writes.<br>
+<br>
+* Massif: a new space profiling tool. Try it! It's cool, and it'll<br>
+ tell you in detail where and when your C/C++ code is allocating heap.<br>
+ Draws pretty .ps pictures of memory use against time. A potentially<br>
+ powerful tool for making sense of your program's space use.<br>
+<br>
+* Fixes for many bugs, including support for more SSE2/SSE3 instructions,<br>
+ various signal/syscall things, and various problems with debug<br>
+ info readers.<br>
+<br>
+* Support for glibc-2.3.3 based systems.<br>
+<br>
+We are now doing automatic overnight build-and-test runs on a variety<br>
+of distros. As a result, we believe 2.1.1 builds and runs on:<br>
+Red Hat 7.2, 7.3, 8.0, 9, Fedora Core 1, SuSE 8.2, SuSE 9.<br>
+<br>
+<br>
+The following bugs, and probably many more, have been fixed. These<br>
+are listed at http://bugs.kde.org. Reporting a bug for valgrind at<br>
+http://bugs.kde.org is much more likely to get you a fix than<br>
+mailing developers directly, so please keep sending bugs<br>
+there.<br>
+<br>
+69616 glibc 2.3.2 w/NPTL is massively different than what valgrind expects <br>
+69856 I don't know how to instrument MMXish stuff (Helgrind)<br>
+73892 valgrind segfaults starting with Objective-C debug info <br>
+ (fix for S-type stabs)<br>
+73145 Valgrind complains too much about close(<reserved fd>) <br>
+73902 Shadow memory allocation seems to fail on RedHat 8.0 <br>
+68633 VG_N_SEMAPHORES too low (V itself was leaking semaphores)<br>
+75099 impossible to trace multiprocess programs <br>
+76839 the `impossible' happened: disInstr: INT but not 0x80 ! <br>
+76762 vg_to_ucode.c:3748 (dis_push_segreg): Assertion `sz == 4' failed. <br>
+76747 cannot include valgrind.h in c++ program <br>
+76223 parsing B(3,10) gave NULL type => impossible happens <br>
+75604 shmdt handling problem <br>
+76416 Problems with gcc 3.4 snap 20040225 <br>
+75614 using -gstabs when building your programs the `impossible' happened<br>
+75787 Patch for some CDROM ioctls CDORM_GET_MCN, CDROM_SEND_PACKET,<br>
+75294 gcc 3.4 snapshot's libstdc++ have unsupported instructions. <br>
+ (REP RET)<br>
+73326 vg_symtab2.c:272 (addScopeRange): Assertion `range->size > 0' failed. <br>
+72596 not recognizing __libc_malloc <br>
+69489 Would like to attach ddd to running program <br>
+72781 Cachegrind crashes with kde programs <br>
+73055 Illegal operand at DXTCV11CompressBlockSSE2 (more SSE opcodes)<br>
+73026 Descriptor leak check reports port numbers wrongly <br>
+71705 README_MISSING_SYSCALL_OR_IOCTL out of date <br>
+72643 Improve support for SSE/SSE2 instructions <br>
+72484 valgrind leaves it's own signal mask in place when execing <br>
+72650 Signal Handling always seems to restart system calls <br>
+72006 The mmap system call turns all errors in ENOMEM <br>
+71781 gdb attach is pretty useless <br>
+71180 unhandled instruction bytes: 0xF 0xAE 0x85 0xE8 <br>
+69886 writes to zero page cause valgrind to assert on exit <br>
+71791 crash when valgrinding gimp 1.3 (stabs reader problem)<br>
+69783 unhandled syscall: 218 <br>
+69782 unhandled instruction bytes: 0x66 0xF 0x2B 0x80 <br>
+70385 valgrind fails if the soft file descriptor limit is less <br>
+ than about 828<br>
+69529 "rep; nop" should do a yield <br>
+70827 programs with lots of shared libraries report "mmap failed" <br>
+ for some of them when reading symbols <br>
+71028 glibc's strnlen is optimised enough to confuse valgrind <br>
+<br>
+<br>
+<br>
+<br>
+Unstable (cvs head) release 2.1.0 (15 December 2003)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+For whatever it's worth, 2.1.0 actually seems pretty darn stable to me<br>
+(Julian). It looks eminently usable, and given that it fixes some<br>
+significant bugs, may well be worth using on a day-to-day basis.<br>
+2.1.0 is known to build and pass regression tests on: SuSE 9, SuSE<br>
+8.2, RedHat 8.<br>
+<br>
+2.1.0 most notably includes Jeremy Fitzhardinge's complete overhaul of<br>
+handling of system calls and signals, and their interaction with<br>
+threads. In general, the accuracy of the system call, thread and<br>
+signal simulations is much improved. Specifically:<br>
+<br>
+- Blocking system calls behave exactly as they do when running<br>
+ natively (not on valgrind). That is, if a syscall blocks only the<br>
+ calling thread when running natively, then it behaves the same on<br>
+ valgrind. No more mysterious hangs because V doesn't know that some<br>
+ syscall or other should block only the calling thread.<br>
+<br>
+- Interrupted syscalls should now give more faithful results.<br>
+<br>
+- Finally, signal contexts in signal handlers are supported. As a<br>
+ result, konqueror on SuSE 9 no longer segfaults when notified of<br>
+ file changes in directories it is watching.<br>
+<br>
+Other changes:<br>
+<br>
+- Robert Walsh's file descriptor leakage checks. When enabled,<br>
+ Valgrind will print out a list of open file descriptors on<br>
+ exit. Along with each file descriptor, Valgrind prints out a stack<br>
+ backtrace of where the file was opened and any details relating to the<br>
+ file descriptor such as the file name or socket details.<br>
+ To use, give: --track-fds=yes<br>
+<br>
+- Implemented a few more SSE/SSE2 instructions.<br>
+<br>
+- Less crud on the stack when you do 'where' inside a GDB attach.<br>
+<br>
+- Fixed the following bugs:<br>
+ 68360: Valgrind does not compile against 2.6.0-testX kernels<br>
+ 68525: CVS head doesn't compile on C90 compilers<br>
+ 68566: pkgconfig support (wishlist)<br>
+ 68588: Assertion `sz == 4' failed in vg_to_ucode.c (disInstr)<br>
+ 69140: valgrind not able to explicitly specify a path to a binary. <br>
+ 69432: helgrind asserts encountering a MutexErr when there are <br>
+ EraserErr suppressions<br>
+<br>
+- Increase the max size of the translation cache from 200k average bbs<br>
+ to 300k average bbs. Programs of the size of OOo (680m17) are<br>
+ thrashing the cache at the smaller size, creating large numbers of<br>
+ retranslations and wasting significant time as a result.<br>
+<br>
+<br>
+<br>
+Stable release 2.0.0 (5 Nov 2003)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+2.0.0 improves SSE/SSE2 support, fixes some minor bugs, and<br>
+improves support for SuSE 9 and the Red Hat "Severn" beta.<br>
+<br>
+- Further improvements to SSE/SSE2 support. The entire test suite of<br>
+ the GNU Scientific Library (gsl-1.4) compiled with Intel Icc 7.1<br>
+ 20030307Z '-g -O -xW' now works. I think this gives pretty good<br>
+ coverage of SSE/SSE2 floating point instructions, or at least the<br>
+ subset emitted by Icc.<br>
+<br>
+- Also added support for the following instructions:<br>
+ MOVNTDQ UCOMISD UNPCKLPS UNPCKHPS SQRTSS<br>
+ PUSH/POP %{FS,GS}, and PUSH %CS (Nb: there is no POP %CS).<br>
+<br>
+- CFI support for GDB version 6. Needed to enable newer GDBs<br>
+ to figure out where they are when using --gdb-attach=yes.<br>
+<br>
+- Fix this:<br>
+ mc_translate.c:1091 (memcheck_instrument): Assertion<br>
+ `u_in->size == 4 || u_in->size == 16' failed.<br>
+<br>
+- Return an error rather than panicking when given a bad socketcall.<br>
+<br>
+- Fix checking of syscall rt_sigtimedwait().<br>
+<br>
+- Implement __NR_clock_gettime (syscall 265). Needed on Red Hat Severn.<br>
+<br>
+- Fixed bug in overlap check in strncpy() -- it was assuming the src was 'n'<br>
+ bytes long, when it could be shorter, which could cause false<br>
+ positives.<br>
+<br>
+- Support use of select() for very large numbers of file descriptors.<br>
+<br>
+- Don't fail silently if the executable is statically linked, or is<br>
+ setuid/setgid. Print an error message instead.<br>
+<br>
+- Support for old DWARF-1 format line number info.<br>
+<br>
+<br>
+<br>
+Snapshot 20031012 (12 October 2003)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+Three months worth of bug fixes, roughly. Most significant single<br>
+change is improved SSE/SSE2 support, mostly thanks to Dirk Mueller.<br>
+<br>
+20031012 builds on Red Hat Fedora ("Severn") but doesn't really work<br>
+(curiously, mozilla runs OK, but a modest "ls -l" bombs). I hope to<br>
+get a working version out soon. It may or may not work ok on the<br>
+forthcoming SuSE 9; I hear positive noises about it but haven't been<br>
+able to verify this myself (not until I get hold of a copy of 9).<br>
+<br>
+A detailed list of changes, in no particular order:<br>
+<br>
+- Describe --gen-suppressions in the FAQ.<br>
+<br>
+- Syscall __NR_waitpid supported.<br>
+<br>
+- Minor MMX bug fix.<br>
+<br>
+- -v prints program's argv[] at startup.<br>
+<br>
+- More glibc-2.3 suppressions.<br>
+<br>
+- Suppressions for stack underrun bug(s) in the c++ support library<br>
+ distributed with Intel Icc 7.0.<br>
+<br>
+- Fix problems reading /proc/self/maps.<br>
+<br>
+- Fix a couple of messages that should have been suppressed by -q, <br>
+ but weren't.<br>
+<br>
+- Make Addrcheck understand "Overlap" suppressions.<br>
+<br>
+- At startup, check if program is statically linked and bail out if so.<br>
+<br>
+- Cachegrind: Auto-detect Intel Pentium-M, also VIA Nehemiah<br>
+<br>
+- Memcheck/addrcheck: minor speed optimisations<br>
+<br>
+- Handle syscall __NR_brk more correctly than before.<br>
+<br>
+- Fixed incorrect allocate/free mismatch errors when using<br>
+ operator new(unsigned, std::nothrow_t const&)<br>
+ operator new[](unsigned, std::nothrow_t const&)<br>
+<br>
+- Support POSIX pthread spinlocks.<br>
+<br>
+- Fixups for clean compilation with gcc-3.3.1.<br>
+<br>
+- Implemented more opcodes: <br>
+ - push %es<br>
+ - push %ds<br>
+ - pop %es<br>
+ - pop %ds<br>
+ - movntq<br>
+ - sfence<br>
+ - pshufw<br>
+ - pavgb<br>
+ - ucomiss<br>
+ - enter<br>
+ - mov imm32, %esp<br>
+ - all "in" and "out" opcodes<br>
+ - inc/dec %esp<br>
+ - A whole bunch of SSE/SSE2 instructions<br>
+<br>
+- Memcheck: don't bomb on SSE/SSE2 code.<br>
+<br>
+<br>
+Snapshot 20030725 (25 July 2003)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+Fixes some minor problems in 20030716.<br>
+<br>
+- Fix bugs in overlap checking for strcpy/memcpy etc.<br>
+<br>
+- Do overlap checking with Addrcheck as well as Memcheck.<br>
+<br>
+- Fix this:<br>
+ Memcheck: the `impossible' happened:<br>
+ get_error_name: unexpected type<br>
+<br>
+- Install headers needed to compile new skins.<br>
+<br>
+- Remove leading spaces and colon in the LD_LIBRARY_PATH / LD_PRELOAD<br>
+ passed to non-traced children.<br>
+<br>
+- Fix file descriptor leak in valgrind-listener.<br>
+<br>
+- Fix longstanding bug in which the allocation point of a <br>
+ block resized by realloc was not correctly set. This may<br>
+ have caused confusing error messages.<br>
+<br>
+<br>
+Snapshot 20030716 (16 July 2003)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+20030716 is a snapshot of our current CVS head (development) branch.<br>
+This is the branch which will become valgrind-2.0. It contains<br>
+significant enhancements over the 1.9.X branch.<br>
+<br>
+Despite this being a snapshot of the CVS head, it is believed to be<br>
+quite stable -- at least as stable as 1.9.6 or 1.0.4, if not more so<br>
+-- and therefore suitable for widespread use. Please let us know asap<br>
+if it causes problems for you.<br>
+<br>
+Two reasons for releasing a snapshot now are:<br>
+<br>
+- It's been a while since 1.9.6, and this snapshot fixes<br>
+ various problems that 1.9.6 has with threaded programs <br>
+ on glibc-2.3.X based systems.<br>
+<br>
+- So as to make available improvements in the 2.0 line.<br>
+<br>
+Major changes in 20030716, as compared to 1.9.6:<br>
+<br>
+- More fixes to threading support on glibc-2.3.1 and 2.3.2-based<br>
+ systems (SuSE 8.2, Red Hat 9). If you have had problems<br>
+ with inconsistent/illogical behaviour of errno, h_errno or the DNS<br>
+ resolver functions in threaded programs, 20030716 should improve<br>
+ matters. This snapshot seems stable enough to run OpenOffice.org<br>
+ 1.1rc on Red Hat 7.3, SuSE 8.2 and Red Hat 9, and that's a big<br>
+ threaded app if ever I saw one.<br>
+<br>
+- Automatic generation of suppression records; you no longer<br>
+ need to write them by hand. Use --gen-suppressions=yes.<br>
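+<br>
+  A sketch of the intended workflow (file names are placeholders):<br>
+<br>
+    # print a ready-made suppression record after each error<br>
+    valgrind --gen-suppressions=yes ./myprog<br>
+    # paste the records you want into myprog.supp, then reuse them:<br>
+    valgrind --suppressions=myprog.supp ./myprog<br>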
+<br>
+- strcpy/memcpy/etc check their arguments for overlaps, when<br>
+ running with the Memcheck or Addrcheck skins.<br>
+<br>
+- malloc_usable_size() is now supported.<br>
+<br>
+- new client requests:<br>
+ - VALGRIND_COUNT_ERRORS, VALGRIND_COUNT_LEAKS: <br>
+ useful with regression testing<br>
+ - VALGRIND_NON_SIMD_CALL[0123]: for running arbitrary functions <br>
+ on real CPU (use with caution!)<br>
+<br>
+- The GDB attach mechanism is more flexible. Allow the GDB to<br>
+ be run to be specified by --gdb-path=/path/to/gdb, and specify<br>
+ which file descriptor V will read its input from with<br>
+ --input-fd=<number>.<br>
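+<br>
+  A hypothetical invocation, combining this with --gdb-attach=yes from<br>
+  the 2.0.0 notes above (the GDB path is a placeholder):<br>
+<br>
+    valgrind --gdb-attach=yes --gdb-path=/opt/gdb-6/bin/gdb ./myprog<br>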
+<br>
+- Cachegrind gives more accurate results (wasn't tracking instructions in<br>
+ malloc() and friends previously, is now).<br>
+<br>
+- Complete support for the MMX instruction set.<br>
+<br>
+- Partial support for the SSE and SSE2 instruction sets. Work for this<br>
+ is ongoing. About half the SSE/SSE2 instructions are done, so<br>
+ some SSE based programs may work. Currently you need to specify<br>
+ --skin=addrcheck. Basically not suitable for real use yet.<br>
+<br>
+- Significant speedups (10%-20%) for standard memory checking.<br>
+<br>
+- Fix assertion failure in pthread_once().<br>
+<br>
+- Fix this:<br>
+ valgrind: vg_intercept.c:598 (vgAllRoadsLeadToRome_select): <br>
+ Assertion `ms_end >= ms_now' failed.<br>
+<br>
+- Implement pthread_mutexattr_setpshared.<br>
+<br>
+- Understand Pentium 4 branch hints. Also implemented a couple more<br>
+ obscure x86 instructions.<br>
+<br>
+- Lots of other minor bug fixes.<br>
+<br>
+- We have a decent regression test system, for the first time.<br>
+ This doesn't help you directly, but it does make it a lot easier<br>
+ for us to track the quality of the system, especially across<br>
+ multiple linux distributions. <br>
+<br>
+ You can run the regression tests with 'make regtest' after 'make<br>
+ install' completes. On SuSE 8.2 and Red Hat 9 I get this:<br>
+ <br>
+ == 84 tests, 0 stderr failures, 0 stdout failures ==<br>
+<br>
+ On Red Hat 8, I get this:<br>
+<br>
+ == 84 tests, 2 stderr failures, 1 stdout failure ==<br>
+ corecheck/tests/res_search (stdout)<br>
+ memcheck/tests/sigaltstack (stderr)<br>
+<br>
+ sigaltstack is probably harmless. res_search doesn't work<br>
+ on Red Hat 8 even running natively, so I'm not too worried.<br>
+<br>
+ On Red Hat 7.3, a glibc-2.2.5 system, I get these harmless failures:<br>
+<br>
+ == 84 tests, 2 stderr failures, 1 stdout failure ==<br>
+ corecheck/tests/pth_atfork1 (stdout)<br>
+ corecheck/tests/pth_atfork1 (stderr)<br>
+ memcheck/tests/sigaltstack (stderr)<br>
+<br>
+ You need to run on a PII system, at least, since some tests<br>
+ contain P6-specific instructions, and the test machine needs<br>
+ access to the internet so that corecheck/tests/res_search<br>
+ (a test that the DNS resolver works) can function.<br>
+<br>
+As ever, thanks for the vast amount of feedback :) and bug reports :(<br>
+We may not answer all messages, but we do at least look at all of<br>
+them, and tend to fix the most frequently reported bugs.<br>
+<br>
+<br>
+<br>
+Version 1.9.6 (7 May 2003 or thereabouts)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+Major changes in 1.9.6:<br>
+<br>
+- Improved threading support for glibc >= 2.3.2 (SuSE 8.2,<br>
+ RedHat 9, to name but two ...) It turned out that 1.9.5<br>
+ had problems with threading support on glibc >= 2.3.2,<br>
+ usually manifested by threaded programs deadlocking in system calls,<br>
+ or running unbelievably slowly. Hopefully these are fixed now. 1.9.6<br>
+ is the first valgrind which gives reasonable support for<br>
+ glibc-2.3.2. Also fixed a 2.3.2 problem with pthread_atfork().<br>
+<br>
+- Majorly expanded FAQ.txt. We've added workarounds for all<br>
+ common problems for which a workaround is known.<br>
+<br>
+Minor changes in 1.9.6:<br>
+<br>
+- Fix identification of the main thread's stack. Incorrect<br>
+ identification of it was causing some on-stack addresses to not get<br>
+ identified as such. This only affected the usefulness of some error<br>
+ messages; the correctness of the checks made is unchanged.<br>
+<br>
+- Support for kernels >= 2.5.68.<br>
+<br>
+- Dummy implementations of __libc_current_sigrtmin, <br>
+ __libc_current_sigrtmax and __libc_allocate_rtsig, hopefully<br>
+ good enough to keep alive programs which previously died for lack of<br>
+ them.<br>
+<br>
+- Fix bug in the VALGRIND_DISCARD_TRANSLATIONS client request.<br>
+<br>
+- Fix bug in the DWARF2 debug line info loader, when instructions <br>
+ following each other have source lines far from each other <br>
+ (e.g. with inlined functions).<br>
+<br>
+- Debug info reading: read symbols from both "symtab" and "dynsym"<br>
+ sections, rather than merely from the one that comes last in the<br>
+ file.<br>
+<br>
+- New syscall support: prctl(), creat(), lookup_dcookie().<br>
+<br>
+- When checking calls to accept(), recvfrom(), getsockopt(),<br>
+ don't complain if buffer values are NULL.<br>
+<br>
+- Try and avoid assertion failures in<br>
+ mash_LD_PRELOAD_and_LD_LIBRARY_PATH.<br>
+<br>
+- Minor bug fixes in cg_annotate.<br>
+<br>
+<br>
+<br>
+Version 1.9.5 (7 April 2003)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+It occurs to me that it would be helpful for valgrind users to record<br>
+in the source distribution the changes in each release. So I now<br>
+attempt to mend my errant ways :-) Changes in this and future releases<br>
+will be documented in the NEWS file in the source distribution.<br>
+<br>
+Major changes in 1.9.5:<br>
+<br>
+- (Critical bug fix): Fix a bug in the FPU simulation. This was<br>
+ causing some floating point conditional tests not to work right.<br>
+ Several people reported this. If you had floating point code which<br>
+ didn't work right on 1.9.1 to 1.9.4, it's worth trying 1.9.5.<br>
+<br>
+- Partial support for Red Hat 9. RH9 uses the new Native Posix <br>
+ Threads Library (NPTL), instead of the older LinuxThreads. <br>
+ This potentially causes problems with V which will take some<br>
+ time to correct. In the meantime we have partially worked around<br>
+ this, and so 1.9.5 works on RH9. Threaded programs still work,<br>
+ but they may deadlock, because some system calls (accept, read,<br>
+ write, etc) which should be nonblocking, in fact do block. This<br>
+ is a known bug which we are looking into.<br>
+<br>
+ If you can, your best bet (unfortunately) is to avoid using <br>
+ 1.9.5 on a Red Hat 9 system, or on any NPTL-based distribution.<br>
+ If your glibc is 2.3.1 or earlier, you're almost certainly OK.<br>
+<br>
+Minor changes in 1.9.5:<br>
+<br>
+- Added some #errors to valgrind.h to ensure people don't include<br>
+ it accidentally in their sources. This is a change from 1.0.X<br>
+ which was never properly documented. The right thing to include<br>
+ is now memcheck.h. Some people reported problems and strange<br>
+ behaviour when (incorrectly) including valgrind.h in code with <br>
+ 1.9.1 -- 1.9.4. This is no longer possible.<br>
+<br>
+- Add some __extension__ bits and pieces so that gcc configured<br>
+ for valgrind-checking compiles even with -Werror. If you<br>
+ don't understand this, ignore it. Of interest to gcc developers<br>
+ only.<br>
+<br>
+- Removed a pointless check which caused problems interworking <br>
+ with Clearcase. V would complain about shared objects whose<br>
+ names did not end ".so", and refuse to run. This is now fixed.<br>
+ In fact it was fixed in 1.9.4 but not documented.<br>
+<br>
+- Fixed a bug causing an assertion failure of "waiters == 1"<br>
+ somewhere in vg_scheduler.c, when running large threaded apps,<br>
+ notably MySQL.<br>
+<br>
+- Add support for the munlock system call (124).<br>
+<br>
+Some comments about future releases:<br>
+<br>
+1.9.5 is, we hope, the most stable Valgrind so far. It pretty much<br>
+supersedes the 1.0.X branch. If you are a valgrind packager, please<br>
+consider making 1.9.5 available to your users. You can regard the<br>
+1.0.X branch as obsolete: 1.9.5 is stable and vastly superior. There<br>
+are no plans at all for further releases of the 1.0.X branch.<br>
+<br>
+If you want a leading-edge valgrind, consider building the cvs head<br>
+(from SourceForge), or getting a snapshot of it. Current cool stuff<br>
+going in includes MMX support (done); SSE/SSE2 support (in progress),<br>
+a significant (10-20%) performance improvement (done), and the usual<br>
+large collection of minor changes. Hopefully we will be able to<br>
+improve our NPTL support, but no promises.<br>
+<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.news.html"><< 2. NEWS</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme.html">4. README >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-android.html b/docs/html/dist.readme-android.html
new file mode 100644
index 0000000..4fcbfb7
--- /dev/null
+++ b/docs/html/dist.readme-android.html
@@ -0,0 +1,250 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>9. README.android</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme-s390.html" title="8. README.S390">
+<link rel="next" href="dist.readme-android_emulator.html" title="10. README.android_emulator">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-s390.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-android_emulator.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-android"></a>9. README.android</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+How to cross-compile and run on Android. Please read to the end,<br>
+since there are important details further down regarding crash<br>
+avoidance and GPU support.<br>
+<br>
+These notes were last updated on 4 Nov 2014, for Valgrind SVN<br>
+revision 14689/2987.<br>
+<br>
+These instructions are known to work, or have worked at some time in<br>
+the past, for:<br>
+<br>
+arm:<br>
+ Android 4.0.3 running on a (rooted, AOSP build) Nexus S.<br>
+ Android 4.0.3 running on Motorola Xoom.<br>
+ Android 4.0.3 running on android arm emulator.<br>
+ Android 4.1 running on android emulator.<br>
+ Android 2.3.4 on Nexus S worked at some time in the past.<br>
+<br>
+x86:<br>
+ Android 4.0.3 running on android x86 emulator.<br>
+<br>
+mips32:<br>
+ Android 4.1.2 running on android mips emulator.<br>
+ Android 4.2.2 running on android mips emulator.<br>
+ Android 4.3 running on android mips emulator.<br>
+ Android 4.0.4 running on BROADCOM bcm7425<br>
+<br>
+arm64:<br>
+ Android 4.5 (?) running on ARM Juno<br>
+<br>
+On android-arm, GDBserver might insert breaks at wrong addresses.<br>
+Feedback on this welcome.<br>
+<br>
+Other configurations and toolchains might work, but haven't been tested.<br>
+Feedback is welcome.<br>
+<br>
+Toolchain:<br>
+<br>
+ For arm32, x86 and mips32 you need the android-ndk-r6 native<br>
+ development kit. r6b and r7 produce a build that does not completely work;<br>
+ see http://code.google.com/p/android/issues/detail?id=23203<br>
+ For the android emulator, the versions needed and how to install<br>
+ them are described in README.android_emulator.<br>
+<br>
+ You can get android-ndk-r6 from<br>
+ http://dl.google.com/android/ndk/android-ndk-r6-linux-x86.tar.bz2<br>
+<br>
+ For arm64 (aarch64) you need the android-ndk-r10c NDK, from<br>
+ http://dl.google.com/android/ndk/android-ndk-r10c-linux-x86_64.bin<br>
+<br>
+Install the NDK somewhere. Doesn't matter where. Then:<br>
+<br>
+<br>
+# Modify this (obviously). Note, this "export" command is only done<br>
+# so as to reduce the amount of typing required. None of the commands<br>
+# below read it as part of their operation.<br>
+#<br>
+export NDKROOT=/path/to/android-ndk-r<version><br>
+<br>
+<br>
+# Then cd to the root of your Valgrind source tree.<br>
+#<br>
+cd /path/to/valgrind/source/tree<br>
+<br>
+<br>
+# After this point, you don't need to modify anything. Just copy and<br>
+# paste the commands below.<br>
+<br>
+<br>
+# Set up toolchain paths.<br>
+#<br>
+# For ARM<br>
+export AR=$NDKROOT/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-ar<br>
+export LD=$NDKROOT/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-ld<br>
+export CC=$NDKROOT/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-gcc<br>
+<br>
+# For x86<br>
+export AR=$NDKROOT/toolchains/x86-4.4.3/prebuilt/linux-x86/bin/i686-android-linux-ar<br>
+export LD=$NDKROOT/toolchains/x86-4.4.3/prebuilt/linux-x86/bin/i686-android-linux-ld<br>
+export CC=$NDKROOT/toolchains/x86-4.4.3/prebuilt/linux-x86/bin/i686-android-linux-gcc<br>
+<br>
+# For MIPS32<br>
+export AR=$NDKROOT/toolchains/mipsel-linux-android-4.8/prebuilt/linux-x86_64/bin/mipsel-linux-android-ar<br>
+export LD=$NDKROOT/toolchains/mipsel-linux-android-4.8/prebuilt/linux-x86_64/bin/mipsel-linux-android-ld<br>
+export CC=$NDKROOT/toolchains/mipsel-linux-android-4.8/prebuilt/linux-x86_64/bin/mipsel-linux-android-gcc<br>
+<br>
+# For ARM64 (AArch64)<br>
+export AR=$NDKROOT/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/bin/aarch64-linux-android-ar <br>
+export LD=$NDKROOT/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/bin/aarch64-linux-android-ld<br>
+export CC=$NDKROOT/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/bin/aarch64-linux-android-gcc<br>
+<br>
+<br>
+# Do configuration stuff. Don't mess with the --prefix in the<br>
+# configure command below, even if you think it's wrong.<br>
+# You may need to set the --with-tmpdir path to something<br>
+# different if /sdcard doesn't work on the device -- this is<br>
+# a known cause of difficulties.<br>
+<br>
+# The below re-generates configure, Makefiles, ...<br>
+# This is not needed if you start from a release tarball.<br>
+./autogen.sh<br>
+<br>
+# for ARM<br>
+CPPFLAGS="--sysroot=$NDKROOT/platforms/android-3/arch-arm" \<br>
+ CFLAGS="--sysroot=$NDKROOT/platforms/android-3/arch-arm" \<br>
+ ./configure --prefix=/data/local/Inst \<br>
+ --host=armv7-unknown-linux --target=armv7-unknown-linux \<br>
+ --with-tmpdir=/sdcard<br>
+# note: on android emulator, android-14 platform was also tested and works.<br>
+# It is not clear what this platform number really is.<br>
+<br>
+# for x86<br>
+CPPFLAGS="--sysroot=$NDKROOT/platforms/android-9/arch-x86" \<br>
+ CFLAGS="--sysroot=$NDKROOT/platforms/android-9/arch-x86 -fno-pic" \<br>
+ ./configure --prefix=/data/local/Inst \<br>
+ --host=i686-android-linux --target=i686-android-linux \<br>
+ --with-tmpdir=/sdcard<br>
+<br>
+# for MIPS32<br>
+CPPFLAGS="--sysroot=$NDKROOT/platforms/android-18/arch-mips" \<br>
+ CFLAGS="--sysroot=$NDKROOT/platforms/android-18/arch-mips" \<br>
+ ./configure --prefix=/data/local/Inst \<br>
+ --host=mipsel-linux-android --target=mipsel-linux-android \<br>
+ --with-tmpdir=/sdcard<br>
+<br>
+# for ARM64 (AArch64)<br>
+CPPFLAGS="--sysroot=$NDKROOT/platforms/android-21/arch-arm64" \<br>
+ CFLAGS="--sysroot=$NDKROOT/platforms/android-21/arch-arm64" \<br>
+ ./configure --prefix=/data/local/Inst \<br>
+ --host=aarch64-unknown-linux --target=aarch64-unknown-linux \<br>
+ --with-tmpdir=/sdcard<br>
+<br>
+<br>
+# At the end of the configure run, a few lines of details<br>
+# are printed. Make sure that you see these two lines:<br>
+#<br>
+# For ARM:<br>
+# Platform variant: android<br>
+# Primary -DVGPV string: -DVGPV_arm_linux_android=1<br>
+#<br>
+# For x86:<br>
+# Platform variant: android<br>
+# Primary -DVGPV string: -DVGPV_x86_linux_android=1<br>
+#<br>
+# For mips32:<br>
+# Platform variant: android<br>
+# Primary -DVGPV string: -DVGPV_mips32_linux_android=1<br>
+#<br>
+# For ARM64 (AArch64):<br>
+# Platform variant: android<br>
+# Primary -DVGPV string: -DVGPV_arm64_linux_android=1<br>
+#<br>
+# If you see anything else at this point, something is wrong, and<br>
+# either the build will fail, or will succeed but you'll get something<br>
+# which won't work.<br>
+<br>
+<br>
+# Build, and park the install tree in `pwd`/Inst<br>
+#<br>
+make -j4<br>
+make -j4 install DESTDIR=`pwd`/Inst<br>
+<br>
+<br>
+# To get the install tree onto the device:<br>
+# (I don't know why it's not "adb push Inst /data/local", but this<br>
+# formulation does appear to put the result in /data/local/Inst.)<br>
+#<br>
+adb push Inst /<br>
+<br>
+<br>
+# To run (on the device). There are two things you need to consider:<br>
+#<br>
+# (1) if you are running on the Android emulator, Valgrind may crash<br>
+# at startup. This is because the emulator (for ARM) may not be<br>
+# simulating a hardware TLS register. To get around this, run<br>
+# Valgrind with:<br>
+# --kernel-variant=android-no-hw-tls<br>
+# <br>
+# (2) if you are running a real device, you need to tell Valgrind<br>
+# what GPU it has, so Valgrind knows how to handle custom GPU<br>
+# ioctls. You can choose one of the following:<br>
+# --kernel-variant=android-gpu-sgx5xx # PowerVR SGX 5XX series<br>
+# --kernel-variant=android-gpu-adreno3xx # Qualcomm Adreno 3XX series<br>
+# If you don't choose one, the program will still run, but Memcheck<br>
+# may report false errors after the program performs GPU-specific ioctls.<br>
+#<br>
+# Anyway: to run on the device:<br>
+#<br>
+/data/local/Inst/bin/valgrind [kernel variant args] [the usual args etc]<br>
+<br>
+<br>
+# Once you're up and running, a handy modify-V-rebuild-reinstall<br>
+# command line (on the host, of course) is<br>
+#<br>
+mq -j2 && mq -j2 install DESTDIR=`pwd`/Inst && adb push Inst /<br>
+#<br>
+# where 'mq' is an alias for 'make --quiet'.<br>
+<br>
+<br>
+# One common cause of runs failing at startup is the inability of<br>
+# Valgrind to find a suitable temporary directory. On the device,<br>
+# there doesn't seem to be any one location which we always have<br>
+# permission to write to. The instructions above use /sdcard. If<br>
+# that doesn't work for you, and you're Valgrinding one specific<br>
+# application which is already installed, you could try using its<br>
+# temporary directory, in /data/data, for example<br>
+# /data/data/org.mozilla.firefox_beta.<br>
+#<br>
+# Using /system/bin/logcat on the device is helpful for diagnosing<br>
+# these kinds of problems.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-s390.html"><< 8. README.S390</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-android_emulator.html">10. README.android_emulator >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-android_emulator.html b/docs/html/dist.readme-android_emulator.html
new file mode 100644
index 0000000..6c788fd
--- /dev/null
+++ b/docs/html/dist.readme-android_emulator.html
@@ -0,0 +1,121 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>10. README.android_emulator</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme-android.html" title="9. README.android">
+<link rel="next" href="dist.readme-mips.html" title="11. README.mips">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-android.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-mips.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-android_emulator"></a>10. README.android_emulator</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+How to install and run an android emulator.<br>
+<br>
+mkdir android # or any other place you prefer<br>
+cd android<br>
+<br>
+# download java JDK<br>
+# http://www.oracle.com/technetwork/java/javase/downloads/index.html<br>
+# download android SDK<br>
+# http://developer.android.com/sdk/index.html<br>
+# download android NDK<br>
+# http://developer.android.com/sdk/ndk/index.html<br>
+<br>
+# versions I used:<br>
+# jdk-7u4-linux-i586.tar.gz<br>
+# android-ndk-r8-linux-x86.tar.bz2<br>
+# android-sdk_r18-linux.tgz<br>
+<br>
+# install jdk<br>
+tar xzf jdk-7u4-linux-i586.tar.gz<br>
+<br>
+# install sdk<br>
+tar xzf android-sdk_r18-linux.tgz<br>
+<br>
+# install ndk<br>
+tar xjf android-ndk-r8-linux-x86.tar.bz2<br>
+<br>
+<br>
+# setup PATH to use the installed software:<br>
+export SDKROOT=$HOME/android/android-sdk-linux<br>
+export PATH=$PATH:$SDKROOT/tools:$SDKROOT/platform-tools<br>
+export NDKROOT=$HOME/android/android-ndk-r8<br>
+<br>
+# install the Android platforms you want by starting:<br>
+android <br>
+# (from $SDKROOT/tools)<br>
+<br>
+# select the platforms you need<br>
+# I selected and installed:<br>
+# Android 4.0.3 (API 15)<br>
+# Then upgraded to the newer versions available:<br>
+# Android sdk 20<br>
+# Android platform tools 12<br>
+<br>
+# then define a virtual device:<br>
+Tools -> Manage AVDs...<br>
+# I defined an AVD with a 64 MB SD card (4.0.3, API 15);<br>
+# the rest is default.<br>
+<br>
+<br>
+# compile and make install Valgrind, following README.android<br>
+<br>
+<br>
+# Start your Android emulator (it takes some time).<br>
+# You can use adb shell to get a shell on the device<br>
+# and see it is working. Note that I usually get<br>
+# one or two timeouts from adb shell before it works.<br>
+adb shell<br>
+<br>
+# Once the emulator is ready, push your Valgrind to the emulator:<br>
+adb push Inst /<br>
+<br>
+<br>
+# IMPORTANT: when running Valgrind, you may need to give it the flag<br>
+#<br>
+# --kernel-variant=android-no-hw-tls<br>
+#<br>
+# since otherwise it may crash at startup.<br>
+# See README.android for details.<br>
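+#<br>
+# For example (an illustration only, assuming Valgrind was installed<br>
+# under /data/local/Inst as described in README.android):<br>
+#<br>
+#   /data/local/Inst/bin/valgrind --kernel-variant=android-no-hw-tls ls<br>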
+<br>
+<br>
+# If you need to debug: a gdbserver is available on the Android side.<br>
+# On the device side:<br>
+gdbserver :1234 your_exe<br>
+<br>
+# on the host side:<br>
+adb forward tcp:1234 tcp:1234<br>
+$HOME/android/android-ndk-r8/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-gdb your_exe<br>
+target remote :1234<br>
+<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-android.html"><< 9. README.android</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-mips.html">11. README.mips >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-developers.html b/docs/html/dist.readme-developers.html
new file mode 100644
index 0000000..29ad4d5
--- /dev/null
+++ b/docs/html/dist.readme-developers.html
@@ -0,0 +1,334 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>6. README_DEVELOPERS</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme-missing.html" title="5. README_MISSING_SYSCALL_OR_IOCTL">
+<link rel="next" href="dist.readme-packagers.html" title="7. README_PACKAGERS">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-missing.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-packagers.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-developers"></a>6. README_DEVELOPERS</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Building and not installing it<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+To run Valgrind without having to install it, run coregrind/valgrind<br>
+with the VALGRIND_LIB environment variable set to <dir>/.in_place, where<br>
+<dir> is the root of the source tree (and must be an absolute path). Eg:<br>
+<br>
+ VALGRIND_LIB=~/grind/head4/.in_place ~/grind/head4/coregrind/valgrind <br>
+<br>
+This allows you to compile and run with "make" instead of "make install",<br>
+saving you time.<br>
+<br>
+Or, you can use the 'vg-in-place' script which does that for you.<br>
+<br>
+I recommend compiling with "make --quiet" to further reduce the amount of<br>
+output spewed out during compilation, letting you actually see any errors,<br>
+warnings, etc.<br>
+<br>
+<br>
+Building a distribution tarball<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+To build a distribution tarball from the valgrind sources:<br>
+<br>
+ make dist<br>
+<br>
+In addition to compiling, linking and packaging everything up, the command<br>
+will also attempt to build the documentation.<br>
+<br>
+If you only want to test whether the generated tarball is complete and runs<br>
+regression tests successfully, building documentation is not needed.<br>
+<br>
+ make dist BUILD_ALL_DOCS=no<br>
+<br>
+If you insist on building the documentation, some embarrassing instructions<br>
+can be found in docs/README.<br>
+<br>
+<br>
+Running the regression tests<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+To build and run all the regression tests, run "make [--quiet] regtest".<br>
+<br>
+To run a subset of the regression tests, execute:<br>
+<br>
+ perl tests/vg_regtest <name><br>
+<br>
+where <name> is a directory (all tests within will be run) or a single<br>
+.vgtest test file, or the name of a program which has a like-named .vgtest<br>
+file. Eg:<br>
+<br>
+ perl tests/vg_regtest memcheck<br>
+ perl tests/vg_regtest memcheck/tests/badfree.vgtest<br>
+ perl tests/vg_regtest memcheck/tests/badfree<br>
+<br>
+<br>
+Running the performance tests<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+To build and run all the performance tests, run "make [--quiet] perf".<br>
+<br>
+To run a subset of the performance suite, execute:<br>
+<br>
+ perl perf/vg_perf <name><br>
+<br>
+where <name> is a directory (all tests within will be run) or a single<br>
+.vgperf test file, or the name of a program which has a like-named .vgperf<br>
+file. Eg:<br>
+<br>
+ perl perf/vg_perf perf/<br>
+ perl perf/vg_perf perf/bz2.vgperf<br>
+ perl perf/vg_perf perf/bz2<br>
+<br>
+To compare multiple versions of Valgrind, use the --vg= option multiple<br>
+times. For example, if you have two Valgrinds next to each other, one in<br>
+trunk1/ and one in trunk2/, from within either trunk1/ or trunk2/ do this to<br>
+compare them on all the performance tests:<br>
+<br>
+ perl perf/vg_perf --vg=../trunk1 --vg=../trunk2 perf/<br>
+<br>
+<br>
+Debugging Valgrind with GDB<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+To debug the valgrind launcher program (<prefix>/bin/valgrind) just<br>
+run it under gdb in the normal way.<br>
+<br>
+Debugging the main body of the valgrind code (and/or the code for<br>
+a particular tool) requires a bit more trickery but can be achieved<br>
+without too much problem by following these steps:<br>
+<br>
+(1) Set VALGRIND_LAUNCHER to point to the valgrind executable. Eg:<br>
+<br>
+ export VALGRIND_LAUNCHER=/usr/local/bin/valgrind<br>
+<br>
+ or for an uninstalled version in a source directory $DIR:<br>
+<br>
+ export VALGRIND_LAUNCHER=$DIR/coregrind/valgrind<br>
+<br>
+(2) Run gdb on the tool executable. Eg:<br>
+<br>
+ gdb /usr/local/lib/valgrind/ppc32-linux/lackey<br>
+<br>
+ or<br>
+<br>
+ gdb $DIR/.in_place/x86-linux/memcheck<br>
+<br>
+(3) Do "handle SIGSEGV SIGILL nostop noprint" in GDB to prevent GDB from<br>
+ stopping on a SIGSEGV or SIGILL:<br>
+<br>
+ (gdb) handle SIGILL SIGSEGV nostop noprint<br>
+<br>
+(4) Set any breakpoints you want and proceed as normal for gdb. The<br>
+    macro VG_(FUNC) is expanded to vgPlain_FUNC, so if you want to set<br>
+    a breakpoint at VG_(do_exec), you could do it like this in GDB:<br>
+<br>
+ (gdb) b vgPlain_do_exec<br>
+<br>
+(5) Run the tool with required options (the --tool option is required<br>
+ for correct setup), e.g.<br>
+<br>
+ (gdb) run --tool=lackey pwd<br>
+<br>
+Steps (1)--(3) can be put in a .gdbinit file, but any directory names must<br>
+be fully expanded (ie. not an environment variable).<br>
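+<br>
+For instance, a .gdbinit along these lines (an illustration only;<br>
+substitute your real paths) covers steps (1)--(3):<br>
+<br>
+    set environment VALGRIND_LAUNCHER /usr/local/bin/valgrind<br>
+    file /usr/local/lib/valgrind/ppc32-linux/lackey<br>
+    handle SIGILL SIGSEGV nostop noprint<br>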
+<br>
+A different and possibly easier way is as follows:<br>
+<br>
+(1) Run Valgrind as normal, but add the flag --wait-for-gdb=yes. This<br>
+ puts the tool executable into a wait loop soon after it gains<br>
+ control. This delays startup for a few seconds.<br>
+<br>
+(2) In a different shell, do "gdb /proc/<pid>/exe <pid>", where<br>
+    <pid> is read from the output printed by (1). This attaches<br>
+ GDB to the tool executable, which should be in the abovementioned<br>
+ wait loop.<br>
+<br>
+(3) Do "cont" to continue. After the loop finishes spinning, startup<br>
+ will continue as normal. Note that comment (3) above re passing<br>
+ signals applies here too.<br>
+<br>
+<br>
+Self-hosting<br>
+~~~~~~~~~~~~<br>
+This section explains:<br>
+  (A) How to configure Valgrind to run under Valgrind.<br>
+      Such a setup is called self-hosting, or an outer/inner setup.<br>
+ (B) How to run Valgrind regression tests in a 'self-hosting' mode,<br>
+ e.g. to verify Valgrind has no bugs such as memory leaks.<br>
+ (C) How to run Valgrind performance tests in a 'self-hosting' mode,<br>
+ to analyse and optimise the performance of Valgrind and its tools.<br>
+<br>
+(A) How to configure Valgrind to run under Valgrind:<br>
+<br>
+(1) Check out 2 trees, "Inner" and "Outer". Inner runs the app<br>
+ directly. Outer runs Inner.<br>
+<br>
+(2) Configure Inner with --enable-inner and build/install as usual.<br>
+<br>
+(3) Configure Outer normally and build/install as usual.<br>
+<br>
+(4) Choose a very simple program (date) and try<br>
+<br>
+ outer/.../bin/valgrind --sim-hints=enable-outer --trace-children=yes \<br>
+ --smc-check=all-non-file \<br>
+ --run-libc-freeres=no --tool=cachegrind -v \<br>
+ inner/.../bin/valgrind --vgdb-prefix=./inner --tool=none -v prog<br>
+<br>
+Note: You must use a "make install"-ed valgrind.<br>
+Do *not* use vg-in-place for the outer valgrind.<br>
+<br>
+If you omit the --trace-children=yes, you'll only monitor Inner's launcher<br>
+program, not its stage2. Outer needs --run-libc-freeres=no, as otherwise<br>
+it will try to find and run __libc_freeres in the inner, while libc is not<br>
+used by the inner. Inner needs --vgdb-prefix=./inner to avoid inner<br>
+gdbserver colliding with outer gdbserver.<br>
+Currently, inner does *not* use the client request <br>
+VALGRIND_DISCARD_TRANSLATIONS for the JITted code or the code patched for<br>
+translation chaining. So the outer needs --smc-check=all-non-file to<br>
+detect the modified code.<br>
+<br>
+Debugging the whole thing might require using up to 3 GDBs:<br>
+ * a GDB attached to the Outer valgrind, allowing you<br>
+   to examine the state of Outer.<br>
+ * a GDB using the Outer gdbserver, allowing you to<br>
+   examine the state of Inner.<br>
+ * a GDB using the Inner gdbserver, allowing you to<br>
+   examine the state of prog.<br>
+<br>
+The whole thing is fragile, confusing and slow, but it does work well enough<br>
+for you to get some useful performance data. Inner has most of<br>
+its output (ie. those lines beginning with "==<pid>==") prefixed with a '>',<br>
+which helps a lot. However, when running regression tests in an Outer/Inner<br>
+setup, this prefix causes the reg test diff to fail. Give <br>
+--sim-hints=no-inner-prefix to the Inner to disable the production<br>
+of the prefix in the stdout/stderr output of Inner.<br>
+<br>
+The allocator (coregrind/m_mallocfree.c) is annotated with client requests<br>
+so Memcheck can be used to find leaks and use-after-free errors in an Inner<br>
+Valgrind.<br>
+<br>
+The Valgrind "big lock" is annotated with helgrind client requests<br>
+so helgrind and drd can be used to find race conditions in an Inner<br>
+Valgrind.<br>
+<br>
+All this has not been tested much, so don't be surprised if you hit problems.<br>
+<br>
+When using self-hosting with an outer Callgrind tool, use '--pop-on-jump'<br>
+(on the outer). Otherwise, Callgrind has much higher memory requirements. <br>
+<br>
+(B) Regression tests in an outer/inner setup:<br>
+<br>
+ To run all the regression tests with an outer memcheck, do:<br>
+ perl tests/vg_regtest --outer-valgrind=../outer/.../bin/valgrind \<br>
+ --all<br>
+<br>
+ To run a specific regression test with an outer memcheck, do:<br>
+ perl tests/vg_regtest --outer-valgrind=../outer/.../bin/valgrind \<br>
+ none/tests/args.vgtest<br>
+<br>
+ To run regression tests with another outer tool:<br>
+ perl tests/vg_regtest --outer-valgrind=../outer/.../bin/valgrind \<br>
+ --outer-tool=helgrind --all<br>
+<br>
+ --outer-args allows you to give specific arguments to the outer tool,<br>
+ replacing the defaults provided by vg_regtest.<br>
+<br>
+Note: --outer-valgrind must be a "make install"-ed valgrind.<br>
+Do *not* use vg-in-place.<br>
+<br>
+When an outer valgrind runs an inner valgrind, a regression test<br>
+produces one additional file <testname>.outer.log which contains the<br>
+errors detected by the outer valgrind. E.g. for an outer memcheck, it<br>
+contains the leaks found in the inner, for an outer helgrind or drd,<br>
+it contains the detected race conditions.<br>
+<br>
+The file tests/outer_inner.supp contains suppressions for <br>
+the irrelevant or benign errors found in the inner.<br>
+<br>
+(C) Performance tests in an outer/inner setup:<br>
+<br>
+ To run all the performance tests with an outer cachegrind, do:<br>
+ perl perf/vg_perf --outer-valgrind=../outer/.../bin/valgrind perf<br>
+<br>
+ To run a specific perf test (e.g. bz2) in this setup, do:<br>
+ perl perf/vg_perf --outer-valgrind=../outer/.../bin/valgrind perf/bz2<br>
+<br>
+ To run all the performance tests with an outer callgrind, do:<br>
+ perl perf/vg_perf --outer-valgrind=../outer/.../bin/valgrind \<br>
+ --outer-tool=callgrind perf<br>
+<br>
+Note: --outer-valgrind must be a "make install"-ed valgrind.<br>
+Do *not* use vg-in-place.<br>
+<br>
+ To compare the performance of multiple Valgrind versions, do:<br>
+ perl perf/vg_perf --outer-valgrind=../outer/.../bin/valgrind \<br>
+ --outer-tool=callgrind \<br>
+ --vg=../inner_xxxx --vg=../inner_yyyy perf<br>
+ (where inner_xxxx and inner_yyyy are the toplevel directories of<br>
+ the versions to compare).<br>
+ Cachegrind and cg_diff are particularly handy to obtain a delta<br>
+ between the two versions.<br>
+<br>
+When the outer tool is callgrind or cachegrind, the following<br>
+output files will be created for each test:<br>
+ <outertoolname>.out.<inner_valgrind_dir>.<tt>.<perftestname>.<pid><br>
+ <outertoolname>.outer.log.<inner_valgrind_dir>.<tt>.<perftestname>.<pid><br>
+ (where tt is the two-letter abbreviation for the inner tool(s) run).<br>
+<br>
+For example, the command<br>
+ perl perf/vg_perf \<br>
+ --outer-valgrind=../outer_trunk/install/bin/valgrind \<br>
+ --outer-tool=callgrind \<br>
+ --vg=../inner_tchain --vg=../inner_trunk perf/many-loss-records<br>
+<br>
+produces the files<br>
+ callgrind.out.inner_tchain.no.many-loss-records.18465<br>
+ callgrind.outer.log.inner_tchain.no.many-loss-records.18465<br>
+ callgrind.out.inner_tchain.me.many-loss-records.21899<br>
+ callgrind.outer.log.inner_tchain.me.many-loss-records.21899<br>
+ callgrind.out.inner_trunk.no.many-loss-records.21224<br>
+ callgrind.outer.log.inner_trunk.no.many-loss-records.21224<br>
+ callgrind.out.inner_trunk.me.many-loss-records.22916<br>
+ callgrind.outer.log.inner_trunk.me.many-loss-records.22916<br>
+<br>
+<br>
+Printing out problematic blocks<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+If you want to print out a disassembly of a particular block that<br>
+causes a crash, do the following.<br>
+<br>
+Try running with "--vex-guest-chase-thresh=0 --trace-flags=10000000<br>
+--trace-notbelow=999999". This should print one line for each block<br>
+translated, and that includes the address.<br>
+<br>
+Then re-run with 999999 changed to the highest bb number shown.<br>
+This will print one line per block, and will also print a<br>
+disassembly of the block in which the fault occurred.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-missing.html"><< 5. README_MISSING_SYSCALL_OR_IOCTL</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-packagers.html">7. README_PACKAGERS >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-mips.html b/docs/html/dist.readme-mips.html
new file mode 100644
index 0000000..bd7c477
--- /dev/null
+++ b/docs/html/dist.readme-mips.html
@@ -0,0 +1,89 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>11. README.mips</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme-android_emulator.html" title="10. README.android_emulator">
+<link rel="next" href="dist.readme-solaris.html" title="12. README.solaris">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-android_emulator.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-solaris.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-mips"></a>11. README.mips</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Supported platforms<br>
+-------------------<br>
+- MIPS32 and MIPS64 platforms are currently supported.<br>
+- Both little-endian and big-endian cores are supported.<br>
+- MIPS DSP ASE on MIPS32 platforms is supported.<br>
+<br>
+<br>
+Building V for MIPS<br>
+-------------------<br>
+- Native build is available for all supported platforms. The build system<br>
+expects that native GCC is configured correctly and optimized for the platform.<br>
+Yet, this may not be the case with some Debian distributions which configure<br>
+GCC to compile to "mips1" by default. Depending on the target platform, using<br>
+CFLAGS="-mips32r2", CFLAGS="-mips32" or CFLAGS="-mips64" or<br>
+CFLAGS="-mips64 -mabi=64" will do the trick and compile Valgrind correctly.<br>
+<br>
+- Use of cross-toolchain is supported as well.<br>
+- Example of configure line and additional configure options:<br>
+<br>
+ $ ./configure --host=mipsel-linux-gnu --prefix=<path_to_install_directory><br>
+<br>
+ * --host=mips-linux-gnu is necessary only if Valgrind is built on a platform<br>
+   other than MIPS; tools for building MIPS applications have to be in PATH.<br>
+<br>
+ * --host=mips-linux-gnu is necessary if you compile with a cross-toolchain<br>
+   compiler for a big-endian platform.<br>
+<br>
+ * --host=mipsel-linux-gnu is necessary if you compile with a cross-toolchain<br>
+   compiler for a little-endian platform.<br>
+<br>
+ * --build=mips-linux is needed if you want to build it for MIPS32 on a 64-bit<br>
+ MIPS system.<br>
+<br>
+ * If you are compiling Valgrind for mips32 with a gcc version older than<br>
+ gcc (GCC) 4.5.1, you must specify CFLAGS="-mips32r2 -mplt", e.g.<br>
+<br>
+ ./configure --prefix=<path_to_install_directory><br>
+ CFLAGS="-mips32r2 -mplt"<br>
+<br>
+<br>
+Limitations<br>
+-----------<br>
+- Some gdb tests will fail when gdb (GDB) older than 7.5 is used and gdb is<br>
+ not compiled with '--with-expat=yes'.<br>
+- You cannot compile tests for DSP ASE if you are using gcc (GCC) older<br>
+  than 4.6.1 due to a bug in the toolchain.<br>
+- Older GCC may have issues with some inline assembly blocks. Get a toolchain<br>
+ based on newer GCC versions, if possible.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-android_emulator.html"><< 10. README.android_emulator</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-solaris.html">12. README.solaris >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-missing.html b/docs/html/dist.readme-missing.html
new file mode 100644
index 0000000..7273b51
--- /dev/null
+++ b/docs/html/dist.readme-missing.html
@@ -0,0 +1,275 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>5. README_MISSING_SYSCALL_OR_IOCTL</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme.html" title="4. README">
+<link rel="next" href="dist.readme-developers.html" title="6. README_DEVELOPERS">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-developers.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-missing"></a>5. README_MISSING_SYSCALL_OR_IOCTL</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Dealing with missing system call or ioctl wrappers in Valgrind<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+You're probably reading this because Valgrind bombed out whilst<br>
+running your program, and advised you to read this file. The good<br>
+news is that, in general, it's easy to write the missing syscall or<br>
+ioctl wrappers you need, so that you can continue your debugging. If<br>
+you send the resulting patches to me, then you'll be doing a favour to<br>
+all future Valgrind users too.<br>
+<br>
+Note that an "ioctl" is just a special kind of system call, really; so<br>
+there's not a lot of need to distinguish them (at least conceptually)<br>
+in the discussion that follows.<br>
+<br>
+All this machinery is in coregrind/m_syswrap.<br>
+<br>
+<br>
+What are syscall/ioctl wrappers? What do they do?<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+Valgrind does what it does, in part, by keeping track of everything your<br>
+program does. When a system call happens, for example a request to read<br>
+part of a file, control passes to the Linux kernel, which fulfills the<br>
+request, and returns control to your program. The problem is that the<br>
+kernel will often change the status of some part of your program's memory<br>
+as a result, and tools (instrumentation plug-ins) may need to know about<br>
+this.<br>
+<br>
+Syscall and ioctl wrappers have two jobs: <br>
+<br>
+1. Tell a tool what's about to happen, before the syscall takes place. A<br>
+ tool could perform checks beforehand, eg. if memory about to be written<br>
+ is actually writeable. This part is useful, but not strictly<br>
+ essential.<br>
+<br>
+2. Tell a tool what just happened, after a syscall takes place. This is<br>
+ so it can update its view of the program's state, eg. that memory has<br>
+ just been written to. This step is essential.<br>
+<br>
+The "happenings" mostly involve reading/writing of memory.<br>
+<br>
+So, let's look at an example of a wrapper for a system call which<br>
+should be familiar to many Unix programmers.<br>
+<br>
+<br>
+The syscall wrapper for time()<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+The wrapper for the time system call looks like this:<br>
+<br>
+ PRE(sys_time)<br>
+ {<br>
+ /* time_t time(time_t *t); */<br>
+ PRINT("sys_time ( %p )",ARG1);<br>
+ PRE_REG_READ1(long, "time", int *, t);<br>
+ if (ARG1 != 0) {<br>
+ PRE_MEM_WRITE( "time(t)", ARG1, sizeof(vki_time_t) );<br>
+ }<br>
+ }<br>
+<br>
+ POST(sys_time)<br>
+ { <br>
+ if (ARG1 != 0) {<br>
+ POST_MEM_WRITE( ARG1, sizeof(vki_time_t) );<br>
+ }<br>
+ }<br>
+<br>
+The first thing we do happens before the syscall occurs, in the PRE() function.<br>
+The PRE() function typically starts by invoking the PRINT() macro. This<br>
+PRINT() macro implements support for the --trace-syscalls command line option.<br>
+Next, the tool is told the return type of the syscall, that the syscall has<br>
+one argument, the type of the syscall argument and that the argument is being<br>
+read from a register:<br>
+<br>
+ PRE_REG_READ1(long, "time", int *, t);<br>
+<br>
+Next, if a non-NULL buffer is passed in as the argument, tell the tool that the<br>
+buffer is about to be written to:<br>
+<br>
+ if (ARG1 != 0) {<br>
+     PRE_MEM_WRITE( "time(t)", ARG1, sizeof(vki_time_t) );<br>
+ }<br>
+<br>
+Finally, the really important bit, after the syscall occurs, in the POST()<br>
+function: if, and only if, the system call was successful, tell the tool that<br>
+the memory was written:<br>
+<br>
+ if (ARG1 != 0) {<br>
+ POST_MEM_WRITE( ARG1, sizeof(vki_time_t) );<br>
+ }<br>
+<br>
+The POST() function won't be called if the syscall failed, so you<br>
+don't need to worry about checking that in the POST() function.<br>
+(Note: this is sometimes a bug; some syscalls do return results when<br>
+they "fail" - for example, nanosleep returns the amount of unslept<br>
+time if interrupted. TODO: add another per-syscall flag for this<br>
+case.)<br>
+<br>
+Note that we use the type 'vki_time_t'. This is a copy of the kernel<br>
+type, with 'vki_' prefixed. Our copies of such types are kept in the<br>
+appropriate vki*.h file(s). We don't include kernel headers or glibc headers<br>
+directly.<br>
+<br>
+<br>
+Writing your own syscall wrappers (see below for ioctl wrappers)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+If Valgrind tells you that system call NNN is unimplemented, do the <br>
+following:<br>
+<br>
+1. Find out the name of the system call:<br>
+<br>
+ grep NNN /usr/include/asm/unistd*.h<br>
+<br>
+ This should tell you something like __NR_mysyscallname.<br>
+ Copy this entry to include/vki/vki-scnums-$(VG_PLATFORM).h.<br>
+<br>
+<br>
+2. Do 'man 2 mysyscallname' to get some idea of what the syscall<br>
+ does. Note that the actual kernel interface can differ from this,<br>
+ so you might also want to check a version of the Linux kernel<br>
+ source.<br>
+<br>
+ NOTE: any syscall which has something to do with signals or<br>
+ threads is probably "special", and needs more careful handling.<br>
+ Post something to valgrind-developers if you aren't sure.<br>
+<br>
+<br>
+3. Add a case to the already-huge collection of wrappers in <br>
+ the coregrind/m_syswrap/syswrap-*.c files. <br>
+ For each in-memory parameter which is read or written by<br>
+ the syscall, do one of<br>
+ <br>
+ PRE_MEM_READ( ... )<br>
+ PRE_MEM_RASCIIZ( ... ) <br>
+ PRE_MEM_WRITE( ... ) <br>
+ <br>
+ for that parameter. Then do the syscall. Then, if the syscall<br>
+ succeeds, issue suitable POST_MEM_WRITE( ... ) calls.<br>
+ (There's no need for POST_MEM_READ calls.)<br>
+<br>
+   Also, add it to the syscall_table[] array; use one of GENX_, GENXY,<br>
+   LINX_, LINXY, PLAX_, PLAXY.<br>
+   GEN* is for generic syscalls (in syswrap-generic.c), LIN* for<br>
+   Linux-specific ones (in syswrap-linux.c) and PLA* for the<br>
+   platform-dependent ones (in syswrap-$(PLATFORM)-linux.c).<br>
+   Use the *XY variant if the syscall requires both a PRE() and a POST()<br>
+   function, and the *X_ variant if it only requires a PRE() function.<br>
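+   <br>
+   For instance, a made-up entry for the LINXY case might look like<br>
+   this (the real name and number come from step 1):<br>
+   <br>
+      LINXY(__NR_mysyscallname, sys_mysyscallname),<br>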
+ <br>
+ If you find this difficult, read the wrappers for other syscalls<br>
+ for ideas. A good tip is to look for the wrapper for a syscall<br>
+ which has a similar behaviour to yours, and use it as a <br>
+ starting point.<br>
+<br>
+ If you need structure definitions and/or constants for your syscall,<br>
+ copy them from the kernel headers into include/vki.h and co., with<br>
+ the appropriate vki_*/VKI_* name mangling. Don't #include any<br>
+ kernel headers. And certainly don't #include any glibc headers.<br>
+<br>
+ Test it.<br>
+<br>
+ Note that a common error is to call POST_MEM_WRITE( ... )<br>
+ with 0 (NULL) as the first (address) argument. This usually means<br>
+ your logic is slightly inadequate. It's a sufficiently common bug<br>
+ that there's a built-in check for it, and you'll get a "probably<br>
+ sanity check failure" for the syscall wrapper you just made, if this<br>
+ is the case.<br>
+<br>
+<br>
+4. Once happy, send us the patch. Pretty please.<br>
+<br>
+<br>
+<br>
+<br>
+Writing your own ioctl wrappers<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+This is pretty much the same as writing syscall wrappers, except that all<br>
+the action happens within PRE(ioctl) and POST(ioctl).<br>
+<br>
+There's a default case; sometimes it isn't correct and you have to write a<br>
+more specific case to get the right behaviour.<br>
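+<br>
+As a purely hypothetical illustration (the ioctl name and structure are<br>
+made up), a specific case in PRE(ioctl) typically follows this pattern:<br>
+<br>
+   case VKI_MYIOCTL:<br>
+      PRE_MEM_WRITE("ioctl(MYIOCTL)", ARG3, sizeof(struct vki_mystruct));<br>
+      break;<br>
+<br>
+with a matching POST_MEM_WRITE(ARG3, sizeof(struct vki_mystruct)) in the<br>
+POST(ioctl) handler once the call has succeeded.<br>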
+<br>
+As above, please create a bug report and attach the patch as described<br>
+on http://www.valgrind.org.<br>
+<br>
+<br>
+Writing your own door call wrappers (Solaris only)<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+<br>
+Unlike syscalls or ioctls, door calls transfer data between two userspace<br>
+programs, albeit through a kernel interface. Programs may use completely<br>
+proprietary semantics in the data buffers passed between them.<br>
+Therefore it may not be possible to capture these semantics within<br>
+a Valgrind door call or door return wrapper.<br>
+<br>
+Nevertheless, for system or well-known door services it would be beneficial<br>
+to have a door call and a door return wrapper. Writing such a wrapper is pretty<br>
+much the same as writing ioctl wrappers. Please take a few moments to study<br>
+the following picture depicting how a door client and a door server interact<br>
+through the kernel interface in a typical scenario:<br>
+<br>
+<br>
+door client thread kernel door server thread<br>
+invokes door_call() invokes door_return()<br>
+-------------------------------------------------------------------<br>
+ <---- PRE(sys_door, DOOR_RETURN)<br>
+PRE(sys_door, DOOR_CALL) ---><br>
+ ----> POST(sys_door, DOOR_RETURN)<br>
+ ----> server_procedure()<br>
+ <----<br>
+ <---- PRE(sys_door, DOOR_RETURN)<br>
+POST(sys_door, DOOR_CALL) <---<br>
+<br>
+The first PRE(sys_door, DOOR_RETURN) is invoked with data_ptr=NULL<br>
+and data_size=0. That's because it has not received any data from<br>
+a door call, yet.<br>
+<br>
+Semantics are described by the following functions<br>
+in the coregrind/m_syswrap/syswrap-solaris.c module:<br>
+o For a door call wrapper, the following attributes of the 'params' argument:<br>
+ - data_ptr (and associated data_size) as input buffer (request);<br>
+ described in door_call_pre_mem_params_data()<br>
+ - rbuf (and associated rsize) as output buffer (response);<br>
+ described in door_call_post_mem_params_rbuf()<br>
+o For a door return wrapper, the following parameters:<br>
+ - data_ptr (and associated data_size) as input buffer (request);<br>
+ described in door_return_post_mem_data()<br>
+ - data_ptr (and associated data_size) as output buffer (response);<br>
+ described in door_return_pre_mem_data()<br>
+<br>
+There's a default case which may not be correct and you have to write a<br>
+more specific case to get the right behaviour. Unless Valgrind's option<br>
+'--sim-hints=lax-doors' is specified, the default case also spits a warning.<br>
+<br>
+As above, please create a bug report and attach the patch as described<br>
+on http://www.valgrind.org.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme.html"><< 4. README</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-developers.html">6. README_DEVELOPERS >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-packagers.html b/docs/html/dist.readme-packagers.html
new file mode 100644
index 0000000..12b6685
--- /dev/null
+++ b/docs/html/dist.readme-packagers.html
@@ -0,0 +1,135 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>7. README_PACKAGERS</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme-developers.html" title="6. README_DEVELOPERS">
+<link rel="next" href="dist.readme-s390.html" title="8. README.S390">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-developers.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-s390.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-packagers"></a>7. README_PACKAGERS</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Greetings, packaging person! This information is aimed at people<br>
+building binary distributions of Valgrind.<br>
+<br>
+Thanks for taking the time and effort to make a binary distribution of<br>
+Valgrind. The following notes may save you some trouble.<br>
+<br>
+<br>
+-- Do not ship your Linux distro with a completely stripped<br>
+ /lib/ld.so. At least leave the debugging symbol names on -- line<br>
+ number info isn't necessary. If you don't want to leave symbols on<br>
+ ld.so, alternatively you can have your distro install ld.so's<br>
+ debuginfo package by default, or make ld.so.debuginfo be a<br>
+ requirement of your Valgrind RPM/DEB/whatever.<br>
+<br>
+ Reason for this is that Valgrind's Memcheck tool needs to intercept<br>
+ calls to, and provide replacements for, some symbols in ld.so at<br>
+ startup (most importantly strlen). If it cannot do that, Memcheck<br>
+ shows a large number of false positives due to the highly optimised<br>
+ strlen (etc) routines in ld.so. This has caused some trouble in<br>
+ the past. As of version 3.3.0, on some targets (ppc32-linux,<br>
+ ppc64-linux), Memcheck will simply stop at startup (and print an<br>
+ error message) if such symbols are not present, because it is<br>
+ infeasible to continue.<br>
+<br>
+ It's not like this is going to cost you much space. We only need<br>
+ the symbols for ld.so (a few K at most). Not the debug info and<br>
+ not any debuginfo or extra symbols for any other libraries.<br>
+<br>
+<br>
+-- (Unfortunate but true) When you configure to build with the <br>
+ --prefix=/foo/bar/xyzzy option, the prefix /foo/bar/xyzzy gets<br>
+ baked into valgrind. The consequence is that you _must_ install<br>
+ valgrind at the location specified in the prefix. If you don't,<br>
+   it may appear to work, but will break when doing some obscure things,<br>
+ particularly doing fork() and exec().<br>
+<br>
+ So you can't build a relocatable RPM / whatever from Valgrind.<br>
+<br>
+<br>
+-- Don't strip the debug info off lib/valgrind/$platform/vgpreload*.so<br>
+   in the installation tree. If you do, either Valgrind won't work at<br>
+   all, or it will still work but will generate less helpful error<br>
+   messages. Here's an example:<br>
+<br>
+ Mismatched free() / delete / delete []<br>
+ at 0x40043249: free (vg_clientfuncs.c:171)<br>
+ by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)<br>
+ by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)<br>
+ by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)<br>
+ Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd<br>
+ at 0x4004318C: __builtin_vec_new (vg_clientfuncs.c:152)<br>
+ by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)<br>
+ by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)<br>
+ by 0x4C21788F: OLEFilter::convert(QCString const &) (olefilter.cc:272)<br>
+<br>
+ This tells you that some memory allocated with new[] was freed with<br>
+ free().<br>
+<br>
+ Mismatched free() / delete / delete []<br>
+ at 0x40043249: (inside vgpreload_memcheck.so)<br>
+ by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)<br>
+ by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)<br>
+ by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)<br>
+ Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd<br>
+ at 0x4004318C: (inside vgpreload_memcheck.so)<br>
+ by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)<br>
+ by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)<br>
+ by 0x4C21788F: OLEFilter::convert(QCString const &) (olefilter.cc:272)<br>
+<br>
+ This isn't so helpful. Although you can tell there is a mismatch, <br>
+ the names of the allocating and deallocating functions are no longer<br>
+ visible. The same kind of thing occurs in various other messages <br>
+ from valgrind.<br>
+<br>
+<br>
+-- Don't strip symbols from lib/valgrind/* in the installation tree.<br>
+ Doing so will likely cause problems. Removing the line number info is<br>
+ probably OK (at least for some of the files in that directory), although<br>
+ that has not been tested by the Valgrind developers.<br>
+<br>
+<br>
+-- Please test that the final installation works by running it on something<br>
+   huge. I suggest checking that it can successfully start and exit<br>
+   both Firefox and OpenOffice.org. I use these as test programs, and I<br>
+ know they fairly thoroughly exercise Valgrind. The command lines to use<br>
+ are:<br>
+<br>
+ valgrind -v --trace-children=yes firefox<br>
+<br>
+ valgrind -v --trace-children=yes soffice<br>
+<br>
+<br>
+If you find any more hints/tips for packaging, please report<br>
+them in a bug report. See http://www.valgrind.org for details.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-developers.html"><< 6. README_DEVELOPERS</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-s390.html">8. README.S390 >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-s390.html b/docs/html/dist.readme-s390.html
new file mode 100644
index 0000000..003d5c8
--- /dev/null
+++ b/docs/html/dist.readme-s390.html
@@ -0,0 +1,95 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>8. README.S390</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme-packagers.html" title="7. README_PACKAGERS">
+<link rel="next" href="dist.readme-android.html" title="9. README.android">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-packagers.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-android.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-s390"></a>8. README.S390</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Requirements<br>
+------------<br>
+- You need GCC 3.4 or later to compile the s390 port.<br>
+- To run Valgrind, a z10 machine or any later model is recommended.<br>
+ Older machine models down to and including z990 may work but have<br>
+ not been tested extensively.<br>
+<br>
+<br>
+Limitations<br>
+-----------<br>
+- 31-bit client programs are not supported.<br>
+- Hexadecimal floating point is not supported.<br>
+- Transactional memory is not supported.<br>
+- Instructions operating on vector registers are not supported.<br>
+- memcheck, cachegrind, drd, helgrind, massif, lackey, and none are<br>
+ supported. <br>
+- On machine models predating z10, cachegrind will assume a z10 cache<br>
+  architecture. Otherwise, cachegrind will query the host's cache system<br>
+ and use those parameters.<br>
+- callgrind and all experimental tools are currently not supported.<br>
+- Some gcc versions use mvc to copy 4/8 byte values. This will affect<br>
+ certain debug messages. For example, memcheck will complain about<br>
+ 4 one-byte reads/writes instead of just a single read/write.<br>
+- The transactional-execution facility is not supported; it is masked<br>
+ off from HWCAP.<br>
+- The vector facility is not supported; it is masked off from HWCAP.<br>
+<br>
+<br>
+Hardware facilities<br>
+-------------------<br>
+Valgrind does not require that the host machine has the same hardware<br>
+facilities as the machine for which the client program was compiled.<br>
+This is convenient. If possible, the JIT compiler will translate the<br>
+client instructions according to the facilities available on the host.<br>
+This means, though, that probing for hardware facilities by issuing<br>
+instructions from that facility and observing whether SIGILL is thrown<br>
+may not work. As a consequence, programs that attempt to do so may<br>
+behave differently. It is believed that this is a rare use case.<br>
+<br>
+<br>
+Recommendations<br>
+---------------<br>
+Applications should be compiled with -fno-builtin to avoid<br>
+false positives due to builtin string operations when running memcheck.<br>
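+<br>
+For example (a hypothetical compile line, shown only as an illustration):<br>
+<br>
+   gcc -g -fno-builtin -o myprog myprog.c<br>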
+<br>
+<br>
+Reading Material<br>
+----------------<br>
+(1) Linux for zSeries ELF ABI Supplement<br>
+ http://refspecs.linuxfoundation.org/ELF/zSeries/index.html<br>
+(2) z/Architecture Principles of Operation<br>
+ http://publibfi.boulder.ibm.com/epubs/pdf/dz9zr010.pdf<br>
+(3) z/Architecture Reference Summary<br>
+ http://publibfi.boulder.ibm.com/epubs/pdf/dz9zs008.pdf<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-packagers.html"><< 7. README_PACKAGERS</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-android.html">9. README.android >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme-solaris.html b/docs/html/dist.readme-solaris.html
new file mode 100644
index 0000000..fa10fe3
--- /dev/null
+++ b/docs/html/dist.readme-solaris.html
@@ -0,0 +1,186 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>12. README.solaris</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.readme-mips.html" title="11. README.mips">
+<link rel="next" href="licenses.html" title="GNU Licenses">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-mips.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="licenses.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme-solaris"></a>12. README.solaris</h1></div></div></div>
+<div class="literallayout"><p><br>
+ Requirements<br>
+------------<br>
+- You need a recent Solaris-like OS to compile this port. Solaris 11 or<br>
+  any illumos-based distribution should work; Solaris 10 is not supported.<br>
+ Running `uname -r` has to print '5.11'.<br>
+- Recent GCC tools are required; GCC 3 will probably not work. GCC version<br>
+ 4.5 (or higher) is recommended.<br>
+- Solaris ld has to be the first linker in the PATH. GNU ld cannot be used.<br>
+ There is currently no linker check in the configure script but the linking<br>
+ phase fails if GNU ld is used. Recent Solaris/illumos distributions are ok.<br>
+- A working combination of autotools is required: aclocal, autoheader,<br>
+ automake and autoconf have to be found in the PATH. You should be able to<br>
+ install pkg:/developer/build/automake and pkg:/developer/build/autoconf<br>
+  packages to fulfil this requirement.<br>
+- System header files are required. On Solaris, these can be installed with:<br>
+ # pkg install system/header<br>
+- GNU make is also required. On Solaris, this can be quickly achieved with:<br>
+ $ PATH=/usr/gnu/bin:$PATH; export PATH<br>
+- For remote debugging support, working GDB is required (see below).<br>
+<br>
+<br>
+Compilation<br>
+-----------<br>
+Please follow the generic instructions in the README file.<br>
+<br>
+The configure script detects a canonical host to determine which version of<br>
+Valgrind should be built. If the system compiler by default produces 32-bit<br>
+binaries then only a 32-bit version of Valgrind will be built. To enable<br>
+compilation of both 64-bit and 32-bit versions on such a system, issue the<br>
+configure script as follows:<br>
+./configure CC='gcc -m64' CXX='g++ -m64'<br>
+<br>
+<br>
+Oracle Solaris and illumos support<br>
+----------------------------------<br>
+One of the main goals of this port is to support both Oracle Solaris and<br>
+illumos kernels. This is a very hard task because the Solaris kernel traditionally<br>
+does not provide a stable syscall interface and because Valgrind contains<br>
+several parts that are closely tied to the underlying kernel. For these<br>
+reasons, the port needs to detect which syscall interfaces are present. This<br>
+detection cannot be done easily at run time and is currently implemented as<br>
+a set of configure tests. This means that a binary version of this port can be<br>
+executed only on a kernel that is compatible with the kernel that was used<br>
+at configure and compilation time.<br>
+<br>
+Main currently-known incompatibilities:<br>
+- Solaris 11 (released in November 2011) removed a large set of syscalls where<br>
+ *at variant of the syscall was also present, for example, open() versus<br>
+ openat(AT_FDCWD) [1]<br>
+- syscall number for unlinkat() is 76 on Solaris 11, but 65 on illumos [2]<br>
+- illumos (in April 2013) changed interface of the accept() and pipe()<br>
+ syscalls [3]<br>
+- posix_spawn() functionality is backed by a true spawn() syscall on Solaris 12<br>
+ whereas illumos and Solaris 11 leverage vfork()<br>
+- illumos and older Solaris use utimesys() syscall whereas newer Solaris<br>
+ uses utimensat()<br>
+<br>
+[1] http://docs.oracle.com/cd/E26502_01/html/E28556/gkzlf.html#gkzip<br>
+[2] https://www.illumos.org/issues/521<br>
+[3] https://github.com/illumos/illumos-gate/commit/5dbfd19ad5fcc2b779f40f80fa05c1bd28fd0b4e<br>
+<br>
+<br>
+Limitations<br>
+-----------<br>
+- The port is work in progress; many things may not work or they can be subtly<br>
+ broken.<br>
+- Coredumps produced by Valgrind do not contain all information available,<br>
+ especially microstate accounting and processor bindings.<br>
+- Accessing contents of /proc/self/psinfo is not thread-safe. That is because<br>
+  Valgrind emulates this file on behalf of the client programs. The entire<br>
+ open() - read() - close() sequence on this file needs to be performed<br>
+ atomically.<br>
+- Fork limitations: vfork() is translated to fork(), forkall() is not<br>
+ supported.<br>
+- Valgrind does not track definedness of some eflags (OF, SF, ZF, AF, CF, PF)<br>
+ individually for each flag. After a syscall is finished, when a carry flag<br>
+  is set and defined, all other mentioned flags will also be defined even<br>
+ though they might be undefined before making the syscall.<br>
+- System call "execve" with a file descriptor which points to a hardlink<br>
+  is currently not supported. That is because it is not possible to reverse-map<br>
+  the intended pathname from the opened file descriptor itself.<br>
+ Examples are fexecve(3C) and isaexec(3C).<br>
+- Program headers PT_SUNW_SYSSTAT and PT_SUNW_SYSSTAT_ZONE are not supported.<br>
+ That is, programs linked with mapfile directive RESERVE_SEGMENT and attribute<br>
+  TYPE equal to SYSSTAT or SYSSTAT_ZONE will cause Valgrind to exit. It is not<br>
+ possible for Valgrind to arrange mapping of a kernel shared page at the<br>
+ address specified in the mapfile for the guest application. There is currently<br>
+ no such mechanism in Solaris. Hacky workarounds are possible, though.<br>
+- When a thread has no stack, all system calls will result in a Valgrind<br>
+  crash, even though such system calls use just parameters passed in registers.<br>
+ This should happen only in pathological situations when a thread is created<br>
+ with custom mmap'ed stack and this stack is then unmap'ed during thread<br>
+ execution.<br>
+<br>
+<br>
+Remote debugging support<br>
+------------------------<br>
+The Solaris port of GDB has a major flaw which prevents remote debugging from<br>
+working correctly. Fortunately this flaw has an easy fix [4]. Unfortunately the<br>
+fix is not present in the current GDB 7.6.2. This boils down to several<br>
+options:<br>
+- Use GDB shipped with Solaris 11.2 which has this flaw fixed.<br>
+- Wait until GDB 7.7 becomes available (there won't be other 7.6.x releases).<br>
+- Build GDB 7.6.2 with the fix by yourself using the following steps:<br>
+ # pkg install developer/gnu-binutils<br>
+ $ wget http://ftp.gnu.org/gnu/gdb/gdb-7.6.2.tar.gz<br>
+ $ gzip -dc gdb-7.6.2.tar.gz | tar xf -<br>
+ $ cd gdb-7.6.2<br>
+ $ patch -p1 -i /path/to/valgrind-solaris/solaris/gdb-sol-thread.patch<br>
+ $ export LIBS="-lncurses"<br>
+ $ export CC="gcc -m64"<br>
+ $ ./configure --with-x=no --with-curses --with-libexpat-prefix=/usr/lib<br>
+ $ gmake && gmake install<br>
+<br>
+[4] https://sourceware.org/ml/gdb-patches/2013-12/msg00573.html<br>
+<br>
+<br>
+TODO list<br>
+---------<br>
+- Fix the few remaining failing tests.<br>
+- Add more Solaris-specific tests (especially for the door and spawn<br>
+ syscalls).<br>
+- Provide better error reporting for various subsyscalls.<br>
+- Implement storing of extra register state in signal frame.<br>
+- Performance comparison against other platforms.<br>
+<br>
+- Prevent SIGPIPE when writing to a socket (coregrind/m_libcfile.c).<br>
+- Implement ticket locking for fair scheduling (--fair-sched=yes).<br>
+- Implement support in DRD and Helgrind tools for thr_join() with thread == 0.<br>
+- Add support for accessing thread-local variables via gdb (auxprogs/getoff.c).<br>
+ Requires research on internal libc TLS representation.<br>
+- VEX supports AVX, BMI and AVX2. Investigate if they can be enabled on<br>
+ Solaris/illumos.<br>
+- Investigate support for more flags in AT_SUN_AUXFLAGS.<br>
+- Fix Valgrind crash when a thread has no stack and syswrap-main.c accesses<br>
+ all possible syscall parameters. Enable helgrind/tests/stackteardown.c<br>
+ to see this in effect. Would require awareness of syscall parameter semantics.<br>
+- Correctly print arguments of DW_CFA_ORCL_arg_loc in show_CF_instruction() when<br>
+ it is implemented in libdwarf.<br>
+<br>
+<br>
+Contacts<br>
+--------<br>
+Please send bug reports and any questions about the port to:<br>
+Ivo Raisr <ivosh@ivosh.net><br>
+Petr Pavlu <setup@dagobah.cz><br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-mips.html"><< 11. README.mips</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="licenses.html">GNU Licenses >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/dist.readme.html b/docs/html/dist.readme.html
new file mode 100644
index 0000000..2904de3
--- /dev/null
+++ b/docs/html/dist.readme.html
@@ -0,0 +1,140 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>4. README</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="dist.html" title="Valgrind Distribution Documents">
+<link rel="prev" href="dist.news.old.html" title="3. OLDER NEWS">
+<link rel="next" href="dist.readme-missing.html" title="5. README_MISSING_SYSCALL_OR_IOCTL">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.news.old.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="dist.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Distribution Documents</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dist.readme-missing.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="dist.readme"></a>4. README</h1></div></div></div>
+<div class="literallayout"><p><br>
+ <br>
+Release notes for Valgrind<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+If you are building a binary package of Valgrind for distribution,<br>
+please read README_PACKAGERS. It contains some important information.<br>
+<br>
+If you are developing Valgrind, please read README_DEVELOPERS. It contains<br>
+some useful information.<br>
+<br>
+For instructions on how to build/install, see the end of this file.<br>
+<br>
+If you have problems, consult the FAQ to see if there are workarounds.<br>
+<br>
+<br>
+Executive Summary<br>
+~~~~~~~~~~~~~~~~~<br>
+Valgrind is a framework for building dynamic analysis tools. There are<br>
+Valgrind tools that can automatically detect many memory management<br>
+and threading bugs, and profile your programs in detail. You can also<br>
+use Valgrind to build new tools.<br>
+<br>
+The Valgrind distribution currently includes six production-quality<br>
+tools: a memory error detector, two thread error detectors, a cache<br>
+and branch-prediction profiler, a call-graph generating cache and<br>
+branch-prediction profiler, and a heap profiler. It also includes<br>
+three experimental tools: a heap/stack/global array overrun detector,<br>
+a different kind of heap profiler, and a SimPoint basic block vector<br>
+generator.<br>
+<br>
+Valgrind is closely tied to details of the CPU and operating system, and to<br>
+a lesser extent, the compiler and basic C libraries. This makes it difficult<br>
+to port. Nonetheless, it is available for the following<br>
+platforms: <br>
+<br>
+- X86/Linux<br>
+- AMD64/Linux<br>
+- PPC32/Linux<br>
+- PPC64/Linux<br>
+- ARM/Linux<br>
+- X86/MacOSX<br>
+- AMD64/MacOSX<br>
+- S390X/Linux<br>
+- MIPS32/Linux<br>
+- MIPS64/Linux<br>
+- X86/Solaris<br>
+- AMD64/Solaris<br>
+<br>
+Note that AMD64 is just another name for x86_64, and Valgrind runs fine<br>
+on Intel processors. Also note that the core of MacOSX is called<br>
+"Darwin" and this name is used sometimes.<br>
+<br>
+Valgrind is licensed under the GNU General Public License, version 2. <br>
+Read the file COPYING in the source distribution for details.<br>
+<br>
+However: if you contribute code, you need to make it available as GPL<br>
+version 2 or later, and not 2-only.<br>
+<br>
+<br>
+Documentation<br>
+~~~~~~~~~~~~~<br>
+A comprehensive user guide is supplied. Point your browser at<br>
+$PREFIX/share/doc/valgrind/manual.html, where $PREFIX is whatever you<br>
+specified with --prefix= when building.<br>
+<br>
+<br>
+Building and installing it<br>
+~~~~~~~~~~~~~~~~~~~~~~~~~~<br>
+To install from the Subversion repository:<br>
+<br>
+ 0. Check out the code from SVN, following the instructions at<br>
+ http://www.valgrind.org/downloads/repository.html.<br>
+<br>
+ 1. cd into the source directory.<br>
+<br>
+ 2. Run ./autogen.sh to set up the environment (you need the standard<br>
+ autoconf tools to do so).<br>
+<br>
+ 3. Continue with the following instructions...<br>
+<br>
+To install from a tar.bz2 distribution:<br>
+<br>
+ 4. Run ./configure, with some options if you wish. The only interesting<br>
+ one is the usual --prefix=/where/you/want/it/installed.<br>
+<br>
+ 5. Run "make".<br>
+<br>
+ 6. Run "make install", possibly as root if the destination permissions<br>
+ require that.<br>
+<br>
+ 7. See if it works. Try "valgrind ls -l". Either this works, or it<br>
+ bombs out with some complaint. In that case, please let us know<br>
+ (see www.valgrind.org).<br>
+<br>
+Important! Do not move the valgrind installation into a place<br>
+different from that specified by --prefix at build time. This will<br>
+cause things to break in subtle ways, mostly when Valgrind handles<br>
+fork/exec calls.<br>
+<br>
+<br>
+The Valgrind Developers<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.news.old.html"><< 3. OLDER NEWS</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="dist.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dist.readme-missing.html">5. README_MISSING_SYSCALL_OR_IOCTL >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/drd-manual.html b/docs/html/drd-manual.html
new file mode 100644
index 0000000..0de0955
--- /dev/null
+++ b/docs/html/drd-manual.html
@@ -0,0 +1,1530 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>8. DRD: a thread error detector</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="hg-manual.html" title="7. Helgrind: a thread error detector">
+<link rel="next" href="ms-manual.html" title="9. Massif: a heap profiler">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="hg-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="ms-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="drd-manual"></a>8. DRD: a thread error detector</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.overview">8.1. Overview</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.mt-progr-models">8.1.1. Multithreaded Programming Paradigms</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.pthreads-model">8.1.2. POSIX Threads Programming Model</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.mt-problems">8.1.3. Multithreaded Programming Problems</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.data-race-detection">8.1.4. Data Race Detection</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.using-drd">8.2. Using DRD</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.options">8.2.1. DRD Command-line Options</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.data-races">8.2.2. Detected Errors: Data Races</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.lock-contention">8.2.3. Detected Errors: Lock Contention</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.api-checks">8.2.4. Detected Errors: Misuse of the POSIX threads API</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.clientreqs">8.2.5. Client Requests</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.C++11">8.2.6. Debugging C++11 Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.gnome">8.2.7. Debugging GNOME Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.boost.thread">8.2.8. Debugging Boost.Thread Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.openmp">8.2.9. Debugging OpenMP Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.cust-mem-alloc">8.2.10. DRD and Custom Memory Allocators</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.drd-versus-memcheck">8.2.11. DRD Versus Memcheck</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.resource-requirements">8.2.12. Resource Requirements</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.effective-use">8.2.13. Hints and Tips for Effective Use of DRD</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.Pthreads">8.3. Using the POSIX Threads API Effectively</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.mutex-types">8.3.1. Mutex types</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.condvar">8.3.2. Condition variables</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.pctw">8.3.3. pthread_cond_timedwait and timeouts</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.limitations">8.4. Limitations</a></span></dt>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.feedback">8.5. Feedback</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=drd</code>
+on the Valgrind command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="drd-manual.overview"></a>8.1. Overview</h2></div></div></div>
+<p>
+DRD is a Valgrind tool for detecting errors in multithreaded C and C++
+programs. The tool works for any program that uses the POSIX threading
+primitives or that uses threading concepts built on top of the POSIX threading
+primitives.
+</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.mt-progr-models"></a>8.1.1. Multithreaded Programming Paradigms</h3></div></div></div>
+<p>
+There are two possible reasons for using multithreading in a program:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ To model concurrent activities. Assigning one thread to each activity
+ can be a great simplification compared to multiplexing the states of
+ multiple activities in a single thread. This is why most server software
+ and embedded software is multithreaded.
+ </p></li>
+<li class="listitem"><p>
+ To use multiple CPU cores simultaneously for speeding up
+ computations. This is why many High Performance Computing (HPC)
+ applications are multithreaded.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+Multithreaded programs can use one or more of the following programming
+paradigms. Which paradigm is appropriate depends e.g. on the application type.
+Some examples of multithreaded programming paradigms are:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ Locking. Data that is shared over threads is protected from concurrent
+ accesses via locking. E.g. the POSIX threads library, the Qt library
+ and the Boost.Thread library support this paradigm directly.
+ </p></li>
+<li class="listitem"><p>
+ Message passing. No data is shared between threads, but threads exchange
+ data by passing messages to each other. Examples of implementations of
+ the message passing paradigm are MPI and CORBA.
+ </p></li>
+<li class="listitem"><p>
+ Automatic parallelization. A compiler converts a sequential program into
+ a multithreaded program. The original program may or may not contain
+ parallelization hints. One example of such parallelization hints is the
+ OpenMP standard. In this standard a set of directives are defined which
+ tell a compiler how to parallelize a C, C++ or Fortran program. OpenMP
+ is well suited for computationally intensive applications. As an example,
+ an open source image processing software package uses OpenMP to
+ maximize performance on systems with multiple CPU
+ cores. GCC supports the
+ OpenMP standard from version 4.2.0 on.
+ </p></li>
+<li class="listitem"><p>
+ Software Transactional Memory (STM). Any data that is shared between
+ threads is updated via transactions. After each transaction it is
+ verified whether there were any conflicting transactions. If there were
+ conflicts, the transaction is aborted, otherwise it is committed. This
+ is a so-called optimistic approach. There is a prototype of the Intel C++
+ Compiler available that supports STM. Research about the addition of
+ STM support to GCC is ongoing.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+DRD supports any combination of multithreaded programming paradigms as
+long as the implementation of these paradigms is based on the POSIX
+threads primitives. DRD however does not support programs that use
+e.g. Linux' futexes directly. Attempts to analyze such programs with
+DRD will cause DRD to report many false positives.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.pthreads-model"></a>8.1.2. POSIX Threads Programming Model</h3></div></div></div>
+<p>
+POSIX threads, also known as Pthreads, is the most widely available
+threading library on Unix systems.
+</p>
+<p>
+The POSIX threads programming model is based on the following abstractions:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ A shared address space. All threads running within the same
+ process share the same address space. All data, whether shared or
+ not, is identified by its address.
+ </p></li>
+<li class="listitem"><p>
+ Regular load and store operations, which allow a thread to read values
+ from or to write values to the memory shared by all threads
+ running in the same process.
+ </p></li>
+<li class="listitem"><p>
+ Atomic store and load-modify-store operations. While these are
+ not mentioned in the POSIX threads standard, most
+ microprocessors support atomic memory operations.
+ </p></li>
+<li class="listitem"><p>
+ Threads. Each thread represents a concurrent activity.
+ </p></li>
+<li class="listitem"><p>
+ Synchronization objects and operations on these synchronization
+ objects. The following types of synchronization objects have been
+ defined in the POSIX threads standard: mutexes, condition variables,
+ semaphores, reader-writer synchronization objects, barriers and
+ spinlocks.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+Which source code statements generate which memory accesses depends on
+the <span class="emphasis"><em>memory model</em></span> of the programming language being
+used. There is not yet a definitive memory model for the C and C++
+languages. For a draft memory model, see also the document
+<a class="ulink" href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2338.html" target="_top">
+WG21/N2338: Concurrency memory model compiler consequences</a>.
+</p>
+<p>
+For more information about POSIX threads, see also the Single UNIX
+Specification version 3, also known as
+<a class="ulink" href="http://www.opengroup.org/onlinepubs/000095399/idx/threads.html" target="_top">
+IEEE Std 1003.1</a>.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.mt-problems"></a>8.1.3. Multithreaded Programming Problems</h3></div></div></div>
+<p>
+Depending on which multithreading paradigm is being used in a program,
+one or more of the following problems can occur:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ Data races. One or more threads access the same memory location without
+ sufficient locking. Most but not all data races are programming errors
+ and are the cause of subtle and hard-to-find bugs.
+ </p></li>
+<li class="listitem"><p>
+ Lock contention. One thread blocks the progress of one or more other
+ threads by holding a lock too long.
+ </p></li>
+<li class="listitem"><p>
+ Improper use of the POSIX threads API. Most implementations of the POSIX
+ threads API have been optimized for runtime speed. Such implementations
+ will not complain about certain errors, e.g. when a mutex is unlocked
+ by a thread other than the thread that obtained a lock on the mutex.
+ </p></li>
+<li class="listitem"><p>
+ Deadlock. A deadlock occurs when two or more threads wait for
+ each other indefinitely.
+ </p></li>
+<li class="listitem"><p>
+ False sharing. If threads that run on different processor cores
+ access different variables located in the same cache line
+ frequently, this will slow down the involved threads a lot due
+ to frequent exchange of cache lines.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+Although the likelihood of the occurrence of data races can be reduced
+through a disciplined programming style, a tool for automatic
+detection of data races is a necessity when developing multithreaded
+software. DRD can detect these, as well as lock contention and
+improper use of the POSIX threads API.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.data-race-detection"></a>8.1.4. Data Race Detection</h3></div></div></div>
+<p>
+The result of load and store operations performed by a multithreaded program
+depends on the order in which memory operations are performed. This order is
+determined by:
+</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>
+ All memory operations performed by the same thread are performed in
+ <span class="emphasis"><em>program order</em></span>, that is, the order determined by the
+ program source code and the results of previous load operations.
+ </p></li>
+<li class="listitem"><p>
+ Synchronization operations determine certain ordering constraints on
+ memory operations performed by different threads. These ordering
+ constraints are called the <span class="emphasis"><em>synchronization order</em></span>.
+ </p></li>
+</ol></div>
+<p>
+The combination of program order and synchronization order is called the
+<span class="emphasis"><em>happens-before relationship</em></span>. This concept was first
+defined by S. Adve et al in the paper <span class="emphasis"><em>Detecting data races on weak
+memory systems</em></span>, ACM SIGARCH Computer Architecture News, v.19 n.3,
+p.234-243, May 1991.
+</p>
+<p>
+Two memory operations <span class="emphasis"><em>conflict</em></span> if both operations are
+performed by different threads, refer to the same memory location and at least
+one of them is a store operation.
+</p>
+<p>
+A multithreaded program is <span class="emphasis"><em>data-race free</em></span> if all
+conflicting memory accesses are ordered by synchronization
+operations.
+</p>
+<p>
+A well known way to ensure that a multithreaded program is data-race
+free is to ensure that a locking discipline is followed. It is e.g.
+possible to associate a mutex with each shared data item, and to hold
+a lock on the associated mutex while the shared data is accessed.
+</p>
+<p>
+All programs that follow a locking discipline are data-race free, but not all
+data-race free programs follow a locking discipline. There exist multithreaded
+programs where access to shared data is arbitrated via condition variables,
+semaphores or barriers. As an example, a certain class of HPC applications
+consists of a sequence of computation steps separated in time by barriers, and
+where these barriers are the only means of synchronization. Although there are
+many conflicting memory accesses in such applications and although such
+applications do not make use of mutexes, most of these applications do not
+contain data races.
+</p>
+<p>
+There exist two different approaches for verifying the correctness of
+multithreaded programs at runtime. The approach of the so-called Eraser
+algorithm is to verify whether all shared memory accesses follow a consistent
+locking strategy. Happens-before data race detectors, by contrast, verify directly
+whether all interthread memory accesses are ordered by synchronization
+operations. While the latter approach is more complex to implement, and while it
+is more sensitive to OS scheduling, it is a general approach that works for
+all classes of multithreaded programs. An important advantage of
+happens-before data race detectors is that these do not report any false
+positives.
+</p>
+<p>
+DRD is based on the happens-before algorithm.
+</p>
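+<p>
+As an illustration of the concepts above, consider the following minimal sketch
+(illustrative code, not part of the Valgrind distribution; all names are made
+up). The two increments of <code class="literal">s_counter</code> are conflicting
+accesses that are not ordered by any synchronization operation, so DRD will
+report a data race on them. Uncommenting the mutex lock/unlock calls makes the
+program data-race free:
+</p>
+<pre class="programlisting">
+#include <pthread.h>
+
+static int s_counter;                     /* shared, not protected by a lock */
+static pthread_mutex_t s_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+static void *thread_func(void *arg)
+{
+    /* pthread_mutex_lock(&s_mutex); */
+    s_counter++;                          /* conflicts with the access in main() */
+    /* pthread_mutex_unlock(&s_mutex); */
+    return NULL;
+}
+
+int main(void)
+{
+    pthread_t tid;
+    pthread_create(&tid, NULL, thread_func, NULL);
+    /* pthread_mutex_lock(&s_mutex); */
+    s_counter++;                          /* unsynchronized conflicting access */
+    /* pthread_mutex_unlock(&s_mutex); */
+    pthread_join(tid, NULL);
+    return 0;
+}
+</pre>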
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="drd-manual.using-drd"></a>8.2. Using DRD</h2></div></div></div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.options"></a>8.2.1. DRD Command-line Options</h3></div></div></div>
+<p>The following command-line options are available for controlling the
+behavior of the DRD tool itself:</p>
+<div class="variablelist">
+<a name="drd.opts.list"></a><dl class="variablelist">
+<dt><span class="term">
+ <code class="option">--check-stack-var=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Controls whether DRD detects data races on stack
+ variables. Verifying stack variables is disabled by default because
+ most programs do not share stack variables over threads.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--exclusive-threshold=<n> [default: off]</code>
+ </span></dt>
+<dd><p>
+ Print an error message if any mutex or writer lock has been
+ held longer than the time specified in milliseconds. This
+ option enables the detection of lock contention.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--join-list-vol=<n> [default: 10]</code>
+ </span></dt>
+<dd><p>
+ Data races that occur between a statement at the end of one thread
+ and another thread can be missed if memory access information is
+ discarded immediately after a thread has been joined. This option
+ allows you to specify for how many joined threads memory access information
+ should be retained.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">
+ --first-race-only=<yes|no> [default: no]
+ </code>
+ </span></dt>
+<dd><p>
+ Whether to report only the first data race that has been detected on a
+ memory location or all data races that have been detected on a memory
+ location.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">
+ --free-is-write=<yes|no> [default: no]
+ </code>
+ </span></dt>
+<dd>
+<p>
+ Whether to report races between accessing memory and freeing
+ memory. Enabling this option may cause DRD to run slightly
+ slower. Notes:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ Don't enable this option when using custom memory allocators
+ that use
+ the <code class="computeroutput">VG_USERREQ__MALLOCLIKE_BLOCK</code>
+ and <code class="computeroutput">VG_USERREQ__FREELIKE_BLOCK</code>
+ because that would result in false positives.
+ </p></li>
+<li class="listitem"><p>Don't enable this option when using reference-counted
+ objects because that will result in false positives, even when
+ that code has been annotated properly with
+ <code class="computeroutput">ANNOTATE_HAPPENS_BEFORE</code>
+ and <code class="computeroutput">ANNOTATE_HAPPENS_AFTER</code>. See
+ e.g. the output of the following command for an example:
+ <code class="computeroutput">valgrind --tool=drd --free-is-write=yes
+ drd/tests/annotate_smart_pointer</code>.
+ </p></li>
+</ul></div>
+</dd>
+<dt><span class="term">
+ <code class="option">
+ --report-signal-unlocked=<yes|no> [default: yes]
+ </code>
+ </span></dt>
+<dd><p>
+ Whether to report calls to
+ <code class="function">pthread_cond_signal</code> and
+ <code class="function">pthread_cond_broadcast</code> where the mutex
+ associated with the signal through
+ <code class="function">pthread_cond_wait</code> or
+ <code class="function">pthread_cond_timed_wait</code>is not locked at
+ the time the signal is sent. Sending a signal without holding
+ a lock on the associated mutex is a common programming error
+ which can cause subtle race conditions and unpredictable
+ behavior. There exist some uncommon synchronization patterns
+ however where it is safe to send a signal without holding a
+ lock on the associated mutex.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--segment-merging=<yes|no> [default: yes]</code>
+ </span></dt>
+<dd><p>
+ Controls segment merging. Segment merging is an algorithm to
+ limit memory usage of the data race detection
+ algorithm. Disabling segment merging may improve the accuracy
+ of the so-called 'other segments' displayed in race reports
+ but can also trigger an out-of-memory error.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--segment-merging-interval=<n> [default: 10]</code>
+ </span></dt>
+<dd><p>
+ Perform segment merging only after the specified number of new
+ segments have been created. This is an advanced configuration option
+ that allows you to choose whether to minimize DRD's memory usage by
+ choosing a low value or to let DRD run faster by choosing a slightly
+ higher value. The optimal value for this parameter depends on the
+ program being analyzed. The default value works well for most programs.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--shared-threshold=<n> [default: off]</code>
+ </span></dt>
+<dd><p>
+ Print an error message if a reader lock has been held longer
+ than the specified time (in milliseconds). This option enables
+ the detection of lock contention.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--show-confl-seg=<yes|no> [default: yes]</code>
+ </span></dt>
+<dd><p>
+ Show conflicting segments in race reports. Since this
+ information can help to find the cause of a data race, this
+ option is enabled by default. Disabling this option makes the
+ output of DRD more compact.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--show-stack-usage=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Print stack usage at thread exit time. When a program creates a large
+ number of threads it becomes important to limit the amount of virtual
+ memory allocated for thread stacks. This option makes it possible to
+ observe how much stack memory has been used by each thread of the
+ client program. Note: the DRD tool itself allocates some temporary
+ data on the client thread stack. The space necessary for this
+ temporary data must be allocated by the client program when it
+ allocates stack memory, but is not included in stack usage reported by
+ DRD.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--ignore-thread-creation=<yes|no> [default: no]</code>
+ </span></dt>
+<dd>
+<p>
+ Controls whether all activities during thread creation should be
+ ignored. By default enabled only on Solaris.
+ Solaris provides higher throughput, parallelism and scalability than
+ other operating systems, at the cost of more fine-grained locking
+ activity. This means for example that when a thread is created under
+ glibc, just one big lock is used for all thread setup. Solaris libc
+ uses several fine-grained locks and the creator thread resumes its
+ activities as soon as possible, leaving for example stack and TLS setup
+ sequence to the created thread.
+ This situation confuses DRD as it assumes there is some false ordering
+ in place between the creator and the created thread, and therefore many types
+ of race conditions in the application would not be reported. To prevent
+ such false ordering, this command line option is set to
+ <code class="computeroutput">yes</code> by default on Solaris.
+ All activity (loads, stores, client requests) is therefore ignored
+ during:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ pthread_create() call in the creator thread
+ </p></li>
+<li class="listitem"><p>
+ thread creation phase (stack and TLS setup) in the created thread
+ </p></li>
+</ul></div>
+</dd>
+</dl>
+</div>
+<p>
+The following options are available for monitoring the behavior of the
+client program:
+</p>
+<div class="variablelist">
+<a name="drd.debugopts.list"></a><dl class="variablelist">
+<dt><span class="term">
+ <code class="option">--trace-addr=<address> [default: none]</code>
+ </span></dt>
+<dd><p>
+ Trace all load and store activity for the specified
+ address. This option may be specified more than once.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--ptrace-addr=<address> [default: none]</code>
+ </span></dt>
+<dd><p>
+ Trace all load and store activity for the specified address and keep
+ doing that even after the memory at that address has been freed and
+ reallocated.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-alloc=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace all memory allocations and deallocations. May produce a huge
+ amount of output.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-barrier=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace all barrier activity.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-cond=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace all condition variable activity.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-fork-join=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace all thread creation and all thread termination events.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-hb=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace execution of the <code class="literal">ANNOTATE_HAPPENS_BEFORE()</code>,
+ <code class="literal">ANNOTATE_HAPPENS_AFTER()</code> and
+ <code class="literal">ANNOTATE_HAPPENS_DONE()</code> client requests.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-mutex=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace all mutex activity.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-rwlock=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace all reader-writer lock activity.
+ </p></dd>
+<dt><span class="term">
+ <code class="option">--trace-semaphore=<yes|no> [default: no]</code>
+ </span></dt>
+<dd><p>
+ Trace all semaphore activity.
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.data-races"></a>8.2.2. Detected Errors: Data Races</h3></div></div></div>
+<p>
+DRD prints a message every time it detects a data race. Please keep
+the following in mind when interpreting DRD's output:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ Every thread is assigned a <span class="emphasis"><em>thread ID</em></span> by the DRD
+ tool. A thread ID is a number. Thread ID's start at one and are never
+ recycled.
+ </p></li>
+<li class="listitem"><p>
+ The term <span class="emphasis"><em>segment</em></span> refers to a consecutive
+ sequence of load, store and synchronization operations, all
+ issued by the same thread. A segment always starts and ends at a
+ synchronization operation. Data race analysis is performed
+ between segments instead of between individual load and store
+ operations for performance reasons.
+ </p></li>
+<li class="listitem"><p>
+ There are always at least two memory accesses involved in a data
+ race. Memory accesses involved in a data race are called
+ <span class="emphasis"><em>conflicting memory accesses</em></span>. DRD prints a
+ report for each memory access that conflicts with a past memory
+ access.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+Below you can find an example of a message printed by DRD when it
+detects a data race:
+</p>
+<pre class="programlisting">
+$ valgrind --tool=drd --read-var-info=yes drd/tests/rwlock_race
+...
+==9466== Thread 3:
+==9466== Conflicting load by thread 3 at 0x006020b8 size 4
+==9466== at 0x400B6C: thread_func (rwlock_race.c:29)
+==9466== by 0x4C291DF: vg_thread_wrapper (drd_pthread_intercepts.c:186)
+==9466== by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
+==9466== by 0x53250CC: clone (in /lib64/libc-2.8.so)
+==9466== Location 0x6020b8 is 0 bytes inside local var "s_racy"
+==9466== declared at rwlock_race.c:18, in frame #0 of thread 3
+==9466== Other segment start (thread 2)
+==9466== at 0x4C2847D: pthread_rwlock_rdlock* (drd_pthread_intercepts.c:813)
+==9466== by 0x400B6B: thread_func (rwlock_race.c:28)
+==9466== by 0x4C291DF: vg_thread_wrapper (drd_pthread_intercepts.c:186)
+==9466== by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
+==9466== by 0x53250CC: clone (in /lib64/libc-2.8.so)
+==9466== Other segment end (thread 2)
+==9466== at 0x4C28B54: pthread_rwlock_unlock* (drd_pthread_intercepts.c:912)
+==9466== by 0x400B84: thread_func (rwlock_race.c:30)
+==9466== by 0x4C291DF: vg_thread_wrapper (drd_pthread_intercepts.c:186)
+==9466== by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
+==9466== by 0x53250CC: clone (in /lib64/libc-2.8.so)
+...
+</pre>
+<p>
+The above report has the following meaning:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ The number in the column on the left is the process ID of the
+ process being analyzed by DRD.
+ </p></li>
+<li class="listitem"><p>
+ The first line ("Thread 3") tells you the thread ID for
+ the thread in which context the data race has been detected.
+ </p></li>
+<li class="listitem"><p>
+ The next line tells you which kind of operation was performed (load or
+ store) and by which thread. On the same line the start address and the
+ number of bytes involved in the conflicting access are also displayed.
+ </p></li>
+<li class="listitem"><p>
+ Next, the call stack of the conflicting access is displayed. If
+ your program has been compiled with debug information
+ (<code class="option">-g</code>), this call stack will include file names and
+ line numbers. The two
+ bottommost frames in this call stack (<code class="function">clone</code>
+ and <code class="function">start_thread</code>) show how the NPTL starts
+ a thread. The third frame
+ (<code class="function">vg_thread_wrapper</code>) is added by DRD. The
+ fourth frame (<code class="function">thread_func</code>) is the first
+ interesting line because it shows the thread entry point, that
+ is the function that has been passed as the third argument to
+ <code class="function">pthread_create</code>.
+ </p></li>
+<li class="listitem"><p>
+ Next, the allocation context for the conflicting address is
+ displayed. For dynamically allocated data the allocation call
+ stack is shown. For static variables and stack variables the
+ allocation context is only shown when the option
+ <code class="option">--read-var-info=yes</code> has been
+ specified. Otherwise DRD will print <code class="computeroutput">Allocation
+ context: unknown</code>.
+ </p></li>
+<li class="listitem">
+<p>
+ A conflicting access involves at least two memory accesses. For
+ one of these accesses an exact call stack is displayed, and for
+ the other accesses an approximate call stack is displayed,
+ namely the start and the end of the segments of the other
+ accesses. This information can be interpreted as follows:
+ </p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>
+ Start at the bottom of both call stacks, and count the
+ number of stack frames with identical function name, file
+ name and line number. In the above example the three
+ bottommost frames are identical
+ (<code class="function">clone</code>,
+ <code class="function">start_thread</code> and
+ <code class="function">vg_thread_wrapper</code>).
+ </p></li>
+<li class="listitem"><p>
+ The next higher stack frame in both call stacks now tells
+ you in which source code region the other memory
+ access happened. The above output tells you that the other
+ memory access involved in the data race happened between
+ source code lines 28 and 30 in file
+ <code class="computeroutput">rwlock_race.c</code>.
+ </p></li>
+</ol></div>
+<p>
+ </p>
+</li>
+</ul></div>
+<p>
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.lock-contention"></a>8.2.3. Detected Errors: Lock Contention</h3></div></div></div>
+<p>
+Threads must be able to make progress without being blocked for too long by
+other threads. Sometimes a thread has to wait until a mutex or reader-writer
+synchronization object is unlocked by another thread. This is called
+<span class="emphasis"><em>lock contention</em></span>.
+</p>
+<p>
+Lock contention causes delays. Such delays should be as short as
+possible. The two command line options
+<code class="literal">--exclusive-threshold=<n></code> and
+<code class="literal">--shared-threshold=<n></code> make it possible to
+detect excessive lock contention by making DRD report any lock that
+has been held longer than the specified threshold. An example:
+</p>
+<pre class="programlisting">
+$ valgrind --tool=drd --exclusive-threshold=10 drd/tests/hold_lock -i 500
+...
+==10668== Acquired at:
+==10668== at 0x4C267C8: pthread_mutex_lock (drd_pthread_intercepts.c:395)
+==10668== by 0x400D92: main (hold_lock.c:51)
+==10668== Lock on mutex 0x7fefffd50 was held during 503 ms (threshold: 10 ms).
+==10668== at 0x4C26ADA: pthread_mutex_unlock (drd_pthread_intercepts.c:441)
+==10668== by 0x400DB5: main (hold_lock.c:55)
+...
+</pre>
+<p>
+The <code class="literal">hold_lock</code> test program holds a lock as long as
+specified by the <code class="literal">-i</code> (interval) argument. The DRD
+output reports that the lock acquired at line 51 in source file
+<code class="literal">hold_lock.c</code> and released at line 55 was held during
+503 ms, while a threshold of 10 ms was specified to DRD.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.api-checks"></a>8.2.4. Detected Errors: Misuse of the POSIX threads API</h3></div></div></div>
+<p>
+ DRD is able to detect and report the following misuses of the POSIX
+ threads API:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ Passing the address of one type of synchronization object
+ (e.g. a mutex) to a POSIX API call that expects a pointer to
+ another type of synchronization object (e.g. a condition
+ variable).
+ </p></li>
+<li class="listitem"><p>
+ Attempts to unlock a mutex that has not been locked.
+ </p></li>
+<li class="listitem"><p>
+ Attempts to unlock a mutex that was locked by another thread.
+ </p></li>
+<li class="listitem"><p>
+ Attempts to lock a mutex of type
+ <code class="literal">PTHREAD_MUTEX_NORMAL</code> or a spinlock
+ recursively.
+ </p></li>
+<li class="listitem"><p>
+ Destruction or deallocation of a locked mutex.
+ </p></li>
+<li class="listitem"><p>
+ Sending a signal to a condition variable while no lock is held
+ on the mutex associated with the condition variable.
+ </p></li>
+<li class="listitem"><p>
+ Calling <code class="function">pthread_cond_wait</code> on a mutex
+ that is not locked, that is locked by another thread or that
+ has been locked recursively.
+ </p></li>
+<li class="listitem"><p>
+ Associating two different mutexes with a condition variable
+ through <code class="function">pthread_cond_wait</code>.
+ </p></li>
+<li class="listitem"><p>
+ Destruction or deallocation of a condition variable that is
+ being waited upon.
+ </p></li>
+<li class="listitem"><p>
+ Destruction or deallocation of a locked reader-writer synchronization
+ object.
+ </p></li>
+<li class="listitem"><p>
+ Attempts to unlock a reader-writer synchronization object that was not
+ locked by the calling thread.
+ </p></li>
+<li class="listitem"><p>
+ Attempts to recursively lock a reader-writer synchronization object
+ exclusively.
+ </p></li>
+<li class="listitem"><p>
+ Attempts to pass the address of a user-defined reader-writer
+ synchronization object to a POSIX threads function.
+ </p></li>
+<li class="listitem"><p>
+ Attempts to pass the address of a POSIX reader-writer synchronization
+ object to one of the annotations for user-defined reader-writer
+ synchronization objects.
+ </p></li>
+<li class="listitem"><p>
+ Reinitialization of a mutex, condition variable, reader-writer
+ lock, semaphore or barrier.
+ </p></li>
+<li class="listitem"><p>
+ Destruction or deallocation of a semaphore or barrier that is
+ being waited upon.
+ </p></li>
+<li class="listitem"><p>
+ Missing synchronization between barrier wait and barrier destruction.
+ </p></li>
+<li class="listitem"><p>
+ Exiting a thread without first unlocking the spinlocks, mutexes or
+ reader-writer synchronization objects that were locked by that thread.
+ </p></li>
+<li class="listitem"><p>
+ Passing an invalid thread ID to <code class="function">pthread_join</code>
+ or <code class="function">pthread_cancel</code>.
+ </p></li>
+</ul></div>
+<p>
+</p>
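+<p>
+As a concrete sketch of one of the misuses listed above (illustrative code, not
+part of the Valgrind test suite), the following program locks a mutex in the
+main thread and unlocks it from another thread. DRD reports an error for the
+<code class="function">pthread_mutex_unlock</code> call; the exact message
+wording may vary between Valgrind versions:
+</p>
+<pre class="programlisting">
+#include <pthread.h>
+
+static pthread_mutex_t s_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+static void *thread_func(void *arg)
+{
+    /* Unlocks a mutex that was locked by the main thread: DRD flags this. */
+    pthread_mutex_unlock(&s_mutex);
+    return NULL;
+}
+
+int main(void)
+{
+    pthread_t tid;
+    pthread_mutex_lock(&s_mutex);         /* locked by the main thread */
+    pthread_create(&tid, NULL, thread_func, NULL);
+    pthread_join(tid, NULL);
+    return 0;
+}
+</pre>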
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.clientreqs"></a>8.2.5. Client Requests</h3></div></div></div>
+<p>
+Just as for other Valgrind tools it is possible to let a client program
+interact with the DRD tool through client requests. In addition to the
+client requests several macros have been defined that allow to use the
+client requests in a convenient way.
+</p>
+<p>
+The interface between client programs and the DRD tool is defined in
+the header file <code class="literal"><valgrind/drd.h></code>. The
+available macros and client requests are:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ The macro <code class="literal">DRD_GET_VALGRIND_THREADID</code> and the
+ corresponding client
+ request <code class="varname">VG_USERREQ__DRD_GET_VALGRIND_THREAD_ID</code>.
+ Query the thread ID that has been assigned by the Valgrind core to the
+ thread executing this client request. Valgrind's thread ID's start at
+ one and are recycled when a thread stops.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">DRD_GET_DRD_THREADID</code> and the corresponding
+ client request <code class="varname">VG_USERREQ__DRD_GET_DRD_THREAD_ID</code>.
+ Query the thread ID that has been assigned by DRD to the thread
+ executing this client request. These are the thread ID's reported by DRD
+ in data race reports and in trace messages. DRD's thread ID's start at
+ one and are never recycled.
+ </p></li>
+<li class="listitem"><p>
+ The macros <code class="literal">DRD_IGNORE_VAR(x)</code>,
+ <code class="literal">ANNOTATE_TRACE_MEMORY(&x)</code> and the corresponding
+ client request <code class="varname">VG_USERREQ__DRD_START_SUPPRESSION</code>. Some
+ applications contain intentional races. There exist e.g. applications
+ where the same value is assigned to a shared variable from two different
+ threads. It may be more convenient to suppress such races than to solve
+ these. This client request makes it possible to suppress such races.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">DRD_STOP_IGNORING_VAR(x)</code> and the
+ corresponding client request
+ <code class="varname">VG_USERREQ__DRD_FINISH_SUPPRESSION</code>. Tell DRD
+ to no longer ignore data races for the address range that was suppressed
+ either via the macro <code class="literal">DRD_IGNORE_VAR(x)</code> or via the
+ client request <code class="varname">VG_USERREQ__DRD_START_SUPPRESSION</code>.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">DRD_TRACE_VAR(x)</code>. Trace all load and store
+ activity for the address range starting at <code class="literal">&x</code> and
+ occupying <code class="literal">sizeof(x)</code> bytes. When DRD reports a data
+ race on a specified variable, and it's not immediately clear which
+ source code statements triggered the conflicting accesses, it can be
+ very helpful to trace all activity on the offending memory location.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">DRD_STOP_TRACING_VAR(x)</code>. Stop tracing load
+ and store activity for the address range starting
+ at <code class="literal">&x</code> and occupying <code class="literal">sizeof(x)</code>
+ bytes.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_TRACE_MEMORY(&x)</code>. Trace all
+ load and store activity that touches at least the single byte at the
+ address <code class="literal">&x</code>.
+ </p></li>
+<li class="listitem"><p>
+ The client request <code class="varname">VG_USERREQ__DRD_START_TRACE_ADDR</code>,
+ which makes it possible to trace all load and store activity for the specified
+ address range.
+ </p></li>
+<li class="listitem"><p>
+ The client
+ request <code class="varname">VG_USERREQ__DRD_STOP_TRACE_ADDR</code>. Do no longer
+ trace load and store activity for the specified address range.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_HAPPENS_BEFORE(addr)</code> tells DRD to
+ insert a mark. Insert this macro just after an access to the variable at
+ the specified address has been performed.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_HAPPENS_AFTER(addr)</code> tells DRD that
+ the next access to the variable at the specified address should be
+ considered to have happened after the access just before the latest
+ <code class="literal">ANNOTATE_HAPPENS_BEFORE(addr)</code> annotation that
+ references the same variable. The purpose of these two macros is to tell
+ DRD about the order of inter-thread memory accesses implemented via
+ atomic memory operations. See
+ also <code class="literal">drd/tests/annotate_smart_pointer.cpp</code> for an
+ example.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_RWLOCK_CREATE(rwlock)</code> tells DRD
+ that the object at address <code class="literal">rwlock</code> is a
+ reader-writer synchronization object that is not a
+ <code class="literal">pthread_rwlock_t</code> synchronization object. See
+ also <code class="literal">drd/tests/annotate_rwlock.c</code> for an example.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_RWLOCK_DESTROY(rwlock)</code> tells DRD
+ that the reader-writer synchronization object at
+ address <code class="literal">rwlock</code> has been destroyed.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_WRITERLOCK_ACQUIRED(rwlock)</code> tells
+ DRD that a writer lock has been acquired on the reader-writer
+ synchronization object at address <code class="literal">rwlock</code>.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_READERLOCK_ACQUIRED(rwlock)</code> tells
+ DRD that a reader lock has been acquired on the reader-writer
+ synchronization object at address <code class="literal">rwlock</code>.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_RWLOCK_ACQUIRED(rwlock, is_w)</code>
+ tells DRD that a writer lock (when <code class="literal">is_w != 0</code>) or that
+ a reader lock (when <code class="literal">is_w == 0</code>) has been acquired on
+ the reader-writer synchronization object at
+ address <code class="literal">rwlock</code>.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_WRITERLOCK_RELEASED(rwlock)</code> tells
+ DRD that a writer lock has been released on the reader-writer
+ synchronization object at address <code class="literal">rwlock</code>.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_READERLOCK_RELEASED(rwlock)</code> tells
+ DRD that a reader lock has been released on the reader-writer
+ synchronization object at address <code class="literal">rwlock</code>.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_RWLOCK_RELEASED(rwlock, is_w)</code>
+ tells DRD that a writer lock (when <code class="literal">is_w != 0</code>) or that
+ a reader lock (when <code class="literal">is_w == 0</code>) has been released on
+ the reader-writer synchronization object at
+ address <code class="literal">rwlock</code>.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_BARRIER_INIT(barrier, count,
+ reinitialization_allowed)</code> tells DRD that a new barrier object
+ at the address <code class="literal">barrier</code> has been initialized,
+ that <code class="literal">count</code> threads participate in each barrier and
+ also whether or not barrier reinitialization without intervening
+ destruction should be reported as an error. See
+ also <code class="literal">drd/tests/annotate_barrier.c</code> for an example.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_BARRIER_DESTROY(barrier)</code>
+ tells DRD that a barrier object is about to be destroyed.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_BARRIER_WAIT_BEFORE(barrier)</code>
+ tells DRD that waiting for a barrier will start.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_BARRIER_WAIT_AFTER(barrier)</code>
+ tells DRD that waiting for a barrier has finished.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_BENIGN_RACE_SIZED(addr, size,
+ descr)</code> tells DRD that any races detected on the specified
+ address are benign and hence should not be
+ reported. The <code class="literal">descr</code> argument is ignored but can be
+ used to document why data races on <code class="literal">addr</code> are benign.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_BENIGN_RACE_STATIC(var, descr)</code>
+ tells DRD that any races detected on the specified static variable are
+ benign and hence should not be reported. The <code class="literal">descr</code>
+ argument is ignored but can be used to document why data races
+ on <code class="literal">var</code> are benign. Note: this macro can only be
+ used in C++ programs and not in C programs.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_IGNORE_READS_BEGIN</code> tells
+ DRD to ignore all memory loads performed by the current thread.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_IGNORE_READS_END</code> tells
+ DRD to stop ignoring the memory loads performed by the current thread.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_IGNORE_WRITES_BEGIN</code> tells
+ DRD to ignore all memory stores performed by the current thread.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_IGNORE_WRITES_END</code> tells
+ DRD to stop ignoring the memory stores performed by the current thread.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_IGNORE_READS_AND_WRITES_BEGIN</code> tells
+ DRD to ignore all memory accesses performed by the current thread.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_IGNORE_READS_AND_WRITES_END</code> tells
+ DRD to stop ignoring the memory accesses performed by the current thread.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_NEW_MEMORY(addr, size)</code> tells
+ DRD that the specified memory range has been allocated by a custom
+ memory allocator in the client program and that the client program
+ will start using this memory range.
+ </p></li>
+<li class="listitem"><p>
+ The macro <code class="literal">ANNOTATE_THREAD_NAME(name)</code> tells DRD to
+ associate the specified name with the current thread and to include this
+ name in the error messages printed by DRD.
+ </p></li>
+<li class="listitem"><p>
+ The macros <code class="literal">VALGRIND_MALLOCLIKE_BLOCK</code> and
+ <code class="literal">VALGRIND_FREELIKE_BLOCK</code> from the Valgrind core are
+ implemented; they are described in
+ <a class="xref" href="manual-core-adv.html#manual-core-adv.clientreq" title="3.1. The Client Request mechanism">The Client Request mechanism</a>.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+Note: if you compiled Valgrind yourself, the header file
+<code class="literal"><valgrind/drd.h></code> will have been installed in
+the directory <code class="literal">/usr/include</code> by the command
+<code class="literal">make install</code>. If you obtained Valgrind by
+installing it as a package however, you will probably have to install
+another package with a name like <code class="literal">valgrind-devel</code>
+before Valgrind's header files are available.
+</p>
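+<p>
+The following sketch (illustrative code with made-up names, not part of the
+Valgrind distribution) shows how <code class="literal">DRD_IGNORE_VAR</code> and
+<code class="literal">DRD_STOP_IGNORING_VAR</code> can be used to suppress
+reports about an intentional race on a progress counter. The busy-wait loop is
+used here only to keep the example short:
+</p>
+<pre class="programlisting">
+#include <pthread.h>
+#include <valgrind/drd.h>
+
+static volatile int s_progress;           /* intentionally racy progress counter */
+
+static void *worker(void *arg)
+{
+    int i;
+    for (i = 1; i <= 100; i++)
+        s_progress = i;                   /* unsynchronized store, intentional */
+    return NULL;
+}
+
+int main(void)
+{
+    pthread_t tid;
+
+    DRD_IGNORE_VAR(s_progress);           /* suppress data race reports on it */
+    pthread_create(&tid, NULL, worker, NULL);
+    while (s_progress < 100)
+        ;                                 /* busy-wait, for illustration only */
+    pthread_join(tid, NULL);
+    DRD_STOP_IGNORING_VAR(s_progress);    /* re-enable race detection */
+    return 0;
+}
+</pre>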
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.C++11"></a>8.2.6. Debugging C++11 Programs</h3></div></div></div>
+<p>If you want to use the C++11 class std::thread you will need to do the
+ following to annotate the std::shared_ptr<> objects used in the
+ implementation of that class:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>Add the following code at the start of a common header or at the
+ start of each source file, before any C++ header files are included:</p>
+<pre class="programlisting">
+#include <valgrind/drd.h>
+#define _GLIBCXX_SYNCHRONIZATION_HAPPENS_BEFORE(addr) ANNOTATE_HAPPENS_BEFORE(addr)
+#define _GLIBCXX_SYNCHRONIZATION_HAPPENS_AFTER(addr) ANNOTATE_HAPPENS_AFTER(addr)
+</pre>
+</li>
+<li class="listitem"><p>Download the gcc source code and from source file
+ libstdc++-v3/src/c++11/thread.cc copy the implementation of the
+ <code class="computeroutput">execute_native_thread_routine()</code>
+ and <code class="computeroutput">std::thread::_M_start_thread()</code>
+ functions into a source file that is linked with your application. Make
+ sure that also in this source file the
+ _GLIBCXX_SYNCHRONIZATION_HAPPENS_*() macros are defined properly.</p></li>
+</ul></div>
+<p>
+</p>
+<p>For more information, see also <span class="emphasis"><em>The
+GNU C++ Library Manual, Debugging Support</em></span>
+(<a class="ulink" href="http://gcc.gnu.org/onlinedocs/libstdc++/manual/debug.html" target="_top">http://gcc.gnu.org/onlinedocs/libstdc++/manual/debug.html</a>).</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.gnome"></a>8.2.7. Debugging GNOME Programs</h3></div></div></div>
+<p>
+GNOME applications use the threading primitives provided by the
+<code class="computeroutput">glib</code> and
+<code class="computeroutput">gthread</code> libraries. These libraries
+are built on top of POSIX threads, and hence are directly supported by
+DRD. Please keep in mind that you have to call
+<code class="function">g_thread_init</code> before creating any threads, or
+DRD will report several data races on glib functions. See also the
+<a class="ulink" href="http://library.gnome.org/devel/glib/stable/glib-Threads.html" target="_top">GLib
+Reference Manual</a> for more information about
+<code class="function">g_thread_init</code>.
+</p>
+<p>
+One of the many facilities provided by the <code class="literal">glib</code>
+library is a block allocator, called <code class="literal">g_slice</code>. You
+have to disable this block allocator when using DRD by setting the
+following environment variable before starting your program:
+<code class="literal">G_SLICE=always-malloc</code>. See also the <a class="ulink" href="http://library.gnome.org/devel/glib/stable/glib-Memory-Slices.html" target="_top">GLib
+Reference Manual</a> for more information.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.boost.thread"></a>8.2.8. Debugging Boost.Thread Programs</h3></div></div></div>
+<p>
+The Boost.Thread library is the threading library included with the
+cross-platform Boost Libraries. This threading library is an early
+implementation of what later became the C++11 threading library.
+</p>
+<p>
+Applications that use the Boost.Thread library should run fine under DRD.
+</p>
+<p>
+More information about Boost.Thread can be found here:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ Anthony Williams, <a class="ulink" href="http://www.boost.org/doc/libs/1_37_0/doc/html/thread.html" target="_top">Boost.Thread</a>
+ Library Documentation, Boost website, 2007.
+ </p></li>
+<li class="listitem"><p>
+ Anthony Williams, <a class="ulink" href="http://www.ddj.com/cpp/211600441" target="_top">What's New in Boost
+ Threads?</a>, Recent changes to the Boost Thread library,
+ Dr. Dobbs Magazine, October 2008.
+ </p></li>
+</ul></div>
+<p>
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.openmp"></a>8.2.9. Debugging OpenMP Programs</h3></div></div></div>
+<p>
+OpenMP stands for <span class="emphasis"><em>Open Multi-Processing</em></span>. The OpenMP
+standard consists of a set of compiler directives for C, C++ and Fortran
+programs that allows a compiler to transform a sequential program into a
+parallel program. OpenMP is well suited for HPC applications and allows one to
+work at a higher level compared to direct use of the POSIX threads API. While
+OpenMP ensures that the POSIX API is used correctly, OpenMP programs can still
+contain data races. So it definitely makes sense to verify OpenMP programs
+with a thread checking tool.
+</p>
+<p>
+DRD supports OpenMP shared-memory programs generated by GCC. GCC
+has supported OpenMP since version 4.2.0. GCC's runtime support
+for OpenMP programs is provided by a library called
+<code class="literal">libgomp</code>. The synchronization primitives implemented
+in this library use Linux' futex system call directly, unless the
+library has been configured with the
+<code class="literal">--disable-linux-futex</code> option. DRD only supports
+libgomp libraries that have been configured with this option and in
+which symbol information is present. For most Linux distributions this
+means that you will have to recompile GCC. See also the script
+<code class="literal">drd/scripts/download-and-build-gcc</code> in the
+Valgrind source tree for an example of how to compile GCC. You will
+also have to make sure that the newly compiled
+<code class="literal">libgomp.so</code> library is loaded when OpenMP programs
+are started. This is possible by adding a line similar to the
+following to your shell startup script:
+</p>
+<pre class="programlisting">
+export LD_LIBRARY_PATH=~/gcc-4.4.0/lib64:~/gcc-4.4.0/lib:
+</pre>
+<p>
+As an example, the OpenMP test program
+<code class="literal">drd/tests/omp_matinv</code> triggers a data race
+when the option -r is specified on the command line. The data
+race is triggered by the following code:
+</p>
+<pre class="programlisting">
+#pragma omp parallel for private(j)
+for (j = 0; j < rows; j++)
+{
+ if (i != j)
+ {
+ const elem_t factor = a[j * cols + i];
+ for (k = 0; k < cols; k++)
+ {
+ a[j * cols + k] -= a[i * cols + k] * factor;
+ }
+ }
+}
+</pre>
+<p>
+The above code is racy because the variable <code class="literal">k</code> has
+not been declared private. DRD will print the following error message
+for the above code:
+</p>
+<pre class="programlisting">
+$ valgrind --tool=drd --check-stack-var=yes --read-var-info=yes drd/tests/omp_matinv 3 -t 2 -r
+...
+Conflicting store by thread 1/1 at 0x7fefffbc4 size 4
+ at 0x4014A0: gj.omp_fn.0 (omp_matinv.c:203)
+ by 0x401211: gj (omp_matinv.c:159)
+ by 0x40166A: invert_matrix (omp_matinv.c:238)
+ by 0x4019B4: main (omp_matinv.c:316)
+Location 0x7fefffbc4 is 0 bytes inside local var "k"
+declared at omp_matinv.c:160, in frame #0 of thread 1
+...
+</pre>
+<p>
+In the above output the function name <code class="function">gj.omp_fn.0</code>
+has been generated by GCC from the function name
+<code class="function">gj</code>. The allocation context information shows that the
+data race has been caused by modifying the variable <code class="literal">k</code>.
+</p>
+<p>
+Note: for GCC versions before 4.4.0, no allocation context information is
+shown. With these GCC versions the most usable information in the above output
+is the source file name and the line number where the data race has been
+detected (<code class="literal">omp_matinv.c:203</code>).
+</p>
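+<p>
+The race in the above fragment disappears once the inner loop variable
+<code class="literal">k</code> is also declared private, for example:
+</p>
+<pre class="programlisting">
+#pragma omp parallel for private(j, k)
+for (j = 0; j < rows; j++)
+{
+  if (i != j)
+  {
+    const elem_t factor = a[j * cols + i];
+    for (k = 0; k < cols; k++)
+    {
+      a[j * cols + k] -= a[i * cols + k] * factor;
+    }
+  }
+}
+</pre>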
+<p>
+For more information about OpenMP, see also
+<a class="ulink" href="http://openmp.org/" target="_top">openmp.org</a>.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.cust-mem-alloc"></a>8.2.10. DRD and Custom Memory Allocators</h3></div></div></div>
+<p>
+DRD tracks all memory allocation events that happen via the
+standard memory allocation and deallocation functions
+(<code class="function">malloc</code>, <code class="function">free</code>,
+<code class="function">new</code> and <code class="function">delete</code>), via entry
+and exit of stack frames or that have been annotated with Valgrind's
+memory pool client requests. DRD uses memory allocation and deallocation
+information for two purposes:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ To know where the scope ends of POSIX objects that have not been
+ destroyed explicitly. It is e.g. not required by the POSIX
+ threads standard to call
+ <code class="function">pthread_mutex_destroy</code> before freeing the
+ memory in which a mutex object resides.
+ </p></li>
+<li class="listitem"><p>
+ To know where the scope of variables ends. If e.g. heap memory
+ has been used by one thread, that thread frees that memory, and
+   another thread allocates and starts using that memory, no data
+   races should be reported for that memory.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+It is essential for correct operation of DRD that the tool knows about
+memory allocation and deallocation events. When analyzing a client program
+with DRD that uses a custom memory allocator, either instrument the custom
+memory allocator with the <code class="literal">VALGRIND_MALLOCLIKE_BLOCK</code>
+and <code class="literal">VALGRIND_FREELIKE_BLOCK</code> macros or disable the
+custom memory allocator.
+</p>
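+<p>
+As a minimal sketch (the trivial bump allocator below is made up purely for
+illustration), instrumenting a custom allocator could look as follows:
+</p>
+<pre class="programlisting">
+#include <stddef.h>
+#include <valgrind/valgrind.h>
+
+/* A deliberately trivial bump allocator, used only to show where the
+   client requests go; a real allocator would be more elaborate. */
+static char   arena[1024 * 1024];
+static size_t arena_used;
+
+void *my_alloc(size_t size)
+{
+  void *p = &arena[arena_used];
+  arena_used += (size + 15) & ~(size_t)15;   /* keep 16-byte alignment */
+  /* Tell DRD (and Memcheck) that a malloc-like block starts at p. */
+  VALGRIND_MALLOCLIKE_BLOCK(p, size, /*rzB*/0, /*is_zeroed*/0);
+  return p;
+}
+
+void my_free(void *p)
+{
+  /* Tell the tools that the block at p has been deallocated. */
+  VALGRIND_FREELIKE_BLOCK(p, /*rzB*/0);
+}
+</pre>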
+<p>
+As an example, the GNU libstdc++ library can be configured
+to use standard memory allocation functions instead of memory pools by
+setting the environment variable
+<code class="literal">GLIBCXX_FORCE_NEW</code>. For more information, see also
+the <a class="ulink" href="http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt04ch11.html" target="_top">libstdc++
+manual</a>.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.drd-versus-memcheck"></a>8.2.11. DRD Versus Memcheck</h3></div></div></div>
+<p>
+It is essential for correct operation of DRD that there are no memory
+errors such as dangling pointers in the client program. This means that
+it is a good idea to make sure that your program is Memcheck-clean
+before you analyze it with DRD. It is possible however that some of
+the Memcheck reports are caused by data races. In this case it makes
+sense to run DRD before Memcheck.
+</p>
+<p>
+So which tool should be run first? In case both DRD and Memcheck
+complain about a program, a possible approach is to run both tools
+alternately and to fix as many errors as possible after each run of
+each tool, until neither tool prints any more error messages.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.resource-requirements"></a>8.2.12. Resource Requirements</h3></div></div></div>
+<p>
+The requirements of DRD with regard to heap and stack memory and the
+effect on the execution time of client programs are as follows:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ When running a program under DRD with default DRD options,
+ between 1.1 and 3.6 times more memory will be needed compared to
+ a native run of the client program. More memory will be needed
+ if loading debug information has been enabled
+ (<code class="literal">--read-var-info=yes</code>).
+ </p></li>
+<li class="listitem"><p>
+ DRD allocates some of its temporary data structures on the stack
+ of the client program threads. This amount of data is limited to
+ 1 - 2 KB. Make sure that thread stacks are sufficiently large.
+ </p></li>
+<li class="listitem"><p>
+ Most applications will run between 20 and 50 times slower under
+ DRD than a native single-threaded run. The slowdown will be most
+ noticeable for applications which perform frequent mutex lock /
+ unlock operations.
+ </p></li>
+</ul></div>
+<p>
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.effective-use"></a>8.2.13. Hints and Tips for Effective Use of DRD</h3></div></div></div>
+<p>
+The following information may be helpful when using DRD:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ Make sure that debug information is present in the executable
+ being analyzed, such that DRD can print function name and line
+ number information in stack traces. Most compilers can be told
+ to include debug information via compiler option
+ <code class="option">-g</code>.
+ </p></li>
+<li class="listitem"><p>
+ Compile with option <code class="option">-O1</code> instead of
+ <code class="option">-O0</code>. This will reduce the amount of generated
+ code, may reduce the amount of debug info and will speed up
+ DRD's processing of the client program. For more information,
+ see also <a class="xref" href="manual-core.html#manual-core.started" title="2.2. Getting started">Getting started</a>.
+ </p></li>
+<li class="listitem"><p>
+   If DRD reports any errors on libraries that are part of your
+   Linux distribution, such as <code class="literal">libc.so</code> or
+ <code class="literal">libstdc++.so</code>, installing the debug packages
+ for these libraries will make the output of DRD a lot more
+ detailed.
+ </p></li>
+<li class="listitem">
+<p>
+ When using C++, do not send output from more than one thread to
+   <code class="literal">std::cout</code>. Doing so not only
+   generates multiple data race reports but can also result in
+   output from several threads getting mixed up. Either use
+   <code class="function">printf</code> or do the following (see also
+   the sketch after this list):
+ </p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>Derive a class from <code class="literal">std::ostreambuf</code>
+ and let that class send output line by line to
+ <code class="literal">stdout</code>. This will avoid that individual
+ lines of text produced by different threads get mixed
+ up.</p></li>
+<li class="listitem"><p>Create one instance of <code class="literal">std::ostream</code>
+ for each thread. This makes stream formatting settings
+ thread-local. Pass a per-thread instance of the class
+      derived from <code class="literal">std::streambuf</code> to the
+ constructor of each instance. </p></li>
+<li class="listitem"><p>Let each thread send its output to its own instance of
+ <code class="literal">std::ostream</code> instead of
+ <code class="literal">std::cout</code>.</p></li>
+</ol></div>
+<p>
+ </p>
+</li>
+</ul></div>
+<p>
+</p>
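+<p>
+A minimal sketch of this approach (the class and function names are chosen
+for illustration only):
+</p>
+<pre class="programlisting">
+#include <cstdio>
+#include <iostream>
+#include <string>
+
+// Forwards complete lines to stdout with a single printf() call, so
+// lines written by different threads are emitted one at a time.
+class LineBuf : public std::streambuf
+{
+  std::string m_line;
+protected:
+  int_type overflow(int_type c)
+  {
+    if (c == '\n') { std::printf("%s\n", m_line.c_str()); m_line.clear(); }
+    else if (c != traits_type::eof()) m_line += traits_type::to_char_type(c);
+    return traits_type::not_eof(c);
+  }
+};
+
+// Each thread uses its own buffer and stream instead of std::cout.
+void thread_body()
+{
+  LineBuf buf;
+  std::ostream out(&buf);
+  out << "message from one thread\n";
+}
+</pre>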
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="drd-manual.Pthreads"></a>8.3. Using the POSIX Threads API Effectively</h2></div></div></div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.mutex-types"></a>8.3.1. Mutex types</h3></div></div></div>
+<p>
+The Single UNIX Specification version two defines the following four
+mutex types (see also the documentation of <a class="ulink" href="http://www.opengroup.org/onlinepubs/007908799/xsh/pthread_mutexattr_settype.html" target="_top"><code class="function">pthread_mutexattr_settype</code></a>):
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ <span class="emphasis"><em>normal</em></span>, which means that no error checking
+ is performed, and that the mutex is non-recursive.
+ </p></li>
+<li class="listitem"><p>
+ <span class="emphasis"><em>error checking</em></span>, which means that the mutex
+ is non-recursive and that error checking is performed.
+ </p></li>
+<li class="listitem"><p>
+ <span class="emphasis"><em>recursive</em></span>, which means that a mutex may be
+ locked recursively.
+ </p></li>
+<li class="listitem"><p>
+ <span class="emphasis"><em>default</em></span>, which means that error checking
+ behavior is undefined, and that the behavior for recursive
+ locking is also undefined. Or: portable code must neither
+ trigger error conditions through the Pthreads API nor attempt to
+ lock a mutex of default type recursively.
+ </p></li>
+</ul></div>
+<p>
+</p>
+<p>
+In complex applications it is not always clear beforehand which
+mutex will be locked recursively and which mutex will not be locked
+recursively. Attempts to lock a non-recursive mutex recursively will
+result in race conditions that are very hard to find without a thread
+checking tool. So either use the error checking mutex type and
+consistently check the return value of Pthread API mutex calls, or use
+the recursive mutex type.
+</p>
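+<p>
+A sketch of the first approach (the function names are examples only):
+</p>
+<pre class="programlisting">
+#include <assert.h>
+#include <pthread.h>
+
+static pthread_mutex_t s_mutex;
+
+void init_locking(void)
+{
+  pthread_mutexattr_t attr;
+  pthread_mutexattr_init(&attr);
+  /* Ask for an error checking mutex instead of the default type. */
+  pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
+  int res = pthread_mutex_init(&s_mutex, &attr);
+  assert(res == 0);
+  pthread_mutexattr_destroy(&attr);
+}
+
+void with_lock_held(void)
+{
+  int res = pthread_mutex_lock(&s_mutex);
+  assert(res == 0);   /* EDEADLK here means recursive locking.           */
+  /* ... access the shared data ... */
+  res = pthread_mutex_unlock(&s_mutex);
+  assert(res == 0);   /* EPERM here means unlocking a mutex not owned.   */
+}
+</pre>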
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.condvar"></a>8.3.2. Condition variables</h3></div></div></div>
+<p>
+A condition variable allows one thread to wake up one or more other
+threads. Condition variables are often used to notify one or more
+threads about state changes of shared data. Unfortunately it is very
+easy to introduce race conditions by using condition variables as the
+only means of state information propagation. A better approach is to
+let threads poll for changes of a state variable that is protected by
+a mutex, and to use condition variables only as a thread wakeup
+mechanism. See also the source file
+<code class="computeroutput">drd/tests/monitor_example.cpp</code> for an
+example of how to implement this concept in C++. The monitor concept
+used in this example is a well known and very useful concept -- see
+also Wikipedia for more information about the <a class="ulink" href="http://en.wikipedia.org/wiki/Monitor_(synchronization)" target="_top">monitor</a>
+concept.
+</p>
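+<p>
+In outline (a minimal sketch, not the actual
+<code class="computeroutput">monitor_example.cpp</code> code):
+</p>
+<pre class="programlisting">
+#include <pthread.h>
+
+static pthread_mutex_t s_mutex = PTHREAD_MUTEX_INITIALIZER;
+static pthread_cond_t  s_cond  = PTHREAD_COND_INITIALIZER;
+static int             s_ready;   /* state variable, protected by s_mutex */
+
+void wait_until_ready(void)
+{
+  pthread_mutex_lock(&s_mutex);
+  while (!s_ready)                          /* poll the state variable ... */
+    pthread_cond_wait(&s_cond, &s_mutex);   /* ... wake-ups only           */
+  pthread_mutex_unlock(&s_mutex);
+}
+
+void set_ready(void)
+{
+  pthread_mutex_lock(&s_mutex);
+  s_ready = 1;                              /* change state under the mutex */
+  pthread_cond_signal(&s_cond);
+  pthread_mutex_unlock(&s_mutex);
+}
+</pre>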
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="drd-manual.pctw"></a>8.3.3. pthread_cond_timedwait and timeouts</h3></div></div></div>
+<p>
+Historically the function
+<code class="function">pthread_cond_timedwait</code> only allowed the
+specification of an absolute timeout, that is a timeout independent of
+the time when this function was called. However, almost every call to
+this function expresses a relative timeout. This typically happens by
+passing the sum of
+<code class="computeroutput">clock_gettime(CLOCK_REALTIME)</code> and a
+relative timeout as the third argument. This approach is incorrect
+since forward or backward clock adjustments by e.g. ntpd will affect
+the timeout. A more reliable approach is as follows:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ When initializing a condition variable through
+ <code class="function">pthread_cond_init</code>, specify that the timeout of
+ <code class="function">pthread_cond_timedwait</code> will use the clock
+ <code class="literal">CLOCK_MONOTONIC</code> instead of
+ <code class="literal">CLOCK_REALTIME</code>. You can do this via
+ <code class="computeroutput">pthread_condattr_setclock(...,
+ CLOCK_MONOTONIC)</code>.
+ </p></li>
+<li class="listitem"><p>
+ When calling <code class="function">pthread_cond_timedwait</code>, pass
+ the sum of
+ <code class="computeroutput">clock_gettime(CLOCK_MONOTONIC)</code>
+ and a relative timeout as the third argument.
+ </p></li>
+</ul></div>
+<p>
+See also
+<code class="computeroutput">drd/tests/monitor_example.cpp</code> for an
+example.
+</p>
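+<p>
+A minimal sketch of this approach (the function names are chosen for
+illustration):
+</p>
+<pre class="programlisting">
+#include <pthread.h>
+#include <time.h>
+
+static pthread_mutex_t s_mutex = PTHREAD_MUTEX_INITIALIZER;
+static pthread_cond_t  s_cond;
+static int             s_ready;
+
+void init_cond(void)
+{
+  pthread_condattr_t attr;
+  pthread_condattr_init(&attr);
+  /* Make pthread_cond_timedwait() measure its timeout against
+     CLOCK_MONOTONIC instead of CLOCK_REALTIME. */
+  pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
+  pthread_cond_init(&s_cond, &attr);
+  pthread_condattr_destroy(&attr);
+}
+
+/* Wait for s_ready, but at most rel_sec seconds from now.
+   Returns 0 on success or ETIMEDOUT if the timeout expired. */
+int wait_with_timeout(int rel_sec)
+{
+  struct timespec deadline;
+  clock_gettime(CLOCK_MONOTONIC, &deadline);
+  deadline.tv_sec += rel_sec;
+
+  int res = 0;
+  pthread_mutex_lock(&s_mutex);
+  while (!s_ready && res == 0)
+    res = pthread_cond_timedwait(&s_cond, &s_mutex, &deadline);
+  pthread_mutex_unlock(&s_mutex);
+  return res;
+}
+</pre>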
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="drd-manual.limitations"></a>8.4. Limitations</h2></div></div></div>
+<p>DRD currently has the following limitations:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ DRD, just like Memcheck, will refuse to start on Linux
+ distributions where all symbol information has been removed from
+ <code class="filename">ld.so</code>. This is e.g. the case for the PPC editions
+ of openSUSE and Gentoo. You will have to install the glibc debuginfo
+ package on these platforms before you can use DRD. See also openSUSE
+ bug <a class="ulink" href="http://bugzilla.novell.com/show_bug.cgi?id=396197" target="_top">
+ 396197</a> and Gentoo bug <a class="ulink" href="http://bugs.gentoo.org/214065" target="_top">214065</a>.
+ </p></li>
+<li class="listitem"><p>
+ With gcc 4.4.3 and before, DRD may report data races on the C++
+ class <code class="literal">std::string</code> in a multithreaded program. This is
+   a known <code class="literal">libstdc++</code> issue -- see also GCC bug
+ <a class="ulink" href="http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40518" target="_top">40518</a>
+ for more information.
+ </p></li>
+<li class="listitem"><p>
+ If you compile the DRD source code yourself, you need GCC 3.0 or
+ later. GCC 2.95 is not supported.
+ </p></li>
+<li class="listitem"><p>
+ Of the two POSIX threads implementations for Linux, only the
+ NPTL (Native POSIX Thread Library) is supported. The older
+ LinuxThreads library is not supported.
+ </p></li>
+</ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="drd-manual.feedback"></a>8.5. Feedback</h2></div></div></div>
+<p>
+If you have any comments, suggestions, feedback or bug reports about
+DRD, feel free to either post a message on the Valgrind users mailing
+list or to file a bug report. See also <a class="ulink" href="http://www.valgrind.org/" target="_top">http://www.valgrind.org/</a> for more information.
+</p>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="hg-manual.html"><< 7. Helgrind: a thread error detector</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="ms-manual.html">9. Massif: a heap profiler >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/faq.html b/docs/html/faq.html
new file mode 100644
index 0000000..b17361e
--- /dev/null
+++ b/docs/html/faq.html
@@ -0,0 +1,776 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>Valgrind Frequently Asked Questions</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="FAQ.html" title="Valgrind FAQ">
+<link rel="prev" href="FAQ.html" title="Valgrind FAQ">
+<link rel="next" href="tech-docs.html" title="Valgrind Technical Documentation">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="FAQ.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="FAQ.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind FAQ</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="tech-docs.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="article">
+<div class="titlepage">
+<div><div><h1 class="title">
+<a name="faq"></a>Valgrind Frequently Asked Questions</h1></div></div>
+<hr>
+</div>
+<div class="qandaset">
+<dl>
+<dt>1. <a href="faq.html#faq.background">Background</a>
+</dt>
+<dd>1.1. <a href="faq.html#faq.pronounce">How do you pronounce "Valgrind"?</a>
+</dd>
+<dd>1.2. <a href="faq.html#faq.whence">Where does the name "Valgrind" come from?</a>
+</dd>
+</dl>
+<dl>
+<dt>2. <a href="faq.html#faq.installing">Compiling, installing and configuring</a>
+</dt>
+<dd>2.1. <a href="faq.html#faq.make_dies">When building Valgrind, 'make' dies partway with
+ an assertion failure, something like this:</a>
+</dd>
+<dd>2.2. <a href="faq.html#faq.glibc_devel">When building Valgrind, 'make' fails with this:</a>
+</dd>
+</dl>
+<dl>
+<dt>3. <a href="faq.html#faq.abort">Valgrind aborts unexpectedly</a>
+</dt>
+<dd>3.1. <a href="faq.html#faq.exit_errors">Programs run OK on Valgrind, but at exit produce a bunch of
+ errors involving __libc_freeres and then die
+ with a segmentation fault.</a>
+</dd>
+<dd>3.2. <a href="faq.html#faq.bugdeath">My (buggy) program dies like this:</a>
+</dd>
+<dd>3.3. <a href="faq.html#faq.msgdeath">My program dies, printing a message like this along the
+ way:</a>
+</dd>
+<dd>3.4. <a href="faq.html#faq.java">I tried running a Java program (or another program that uses a
+ just-in-time compiler) under Valgrind but something went wrong.
+ Does Valgrind handle such programs?</a>
+</dd>
+</dl>
+<dl>
+<dt>4. <a href="faq.html#faq.unexpected">Valgrind behaves unexpectedly</a>
+</dt>
+<dd>4.1. <a href="faq.html#faq.reports">My program uses the C++ STL and string classes. Valgrind
+ reports 'still reachable' memory leaks involving these classes at
+ the exit of the program, but there should be none.</a>
+</dd>
+<dd>4.2. <a href="faq.html#faq.unhelpful">The stack traces given by Memcheck (or another tool) aren't
+ helpful. How can I improve them?</a>
+</dd>
+<dd>4.3. <a href="faq.html#faq.aliases">The stack traces given by Memcheck (or another tool) seem to
+ have the wrong function name in them. What's happening?</a>
+</dd>
+<dd>4.4. <a href="faq.html#faq.crashes">My program crashes normally, but doesn't under Valgrind, or vice
+ versa. What's happening?</a>
+</dd>
+<dd>4.5. <a href="faq.html#faq.hiddenbug"> Memcheck doesn't report any errors and I know my program has
+ errors.</a>
+</dd>
+<dd>4.6. <a href="faq.html#faq.overruns">Why doesn't Memcheck find the array overruns in this
+ program?</a>
+</dd>
+</dl>
+<dl>
+<dt>5. <a href="faq.html#faq.misc">Miscellaneous</a>
+</dt>
+<dd>5.1. <a href="faq.html#faq.writesupp">I tried writing a suppression but it didn't work. Can you
+ write my suppression for me?</a>
+</dd>
+<dd>5.2. <a href="faq.html#faq.deflost">With Memcheck's memory leak detector, what's the
+ difference between "definitely lost", "indirectly lost", "possibly
+ lost", "still reachable", and "suppressed"?</a>
+</dd>
+<dd>5.3. <a href="faq.html#faq.undeferrors">Memcheck's uninitialised value errors are hard to track down,
+ because they are often reported some time after they are caused. Could
+ Memcheck record a trail of operations to better link the cause to the
+ effect? Or maybe just eagerly report any copies of uninitialised
+ memory values?</a>
+</dd>
+<dd>5.4. <a href="faq.html#faq.attach">Is it possible to attach Valgrind to a program that is already
+ running?</a>
+</dd>
+</dl>
+<dl><dt>6. <a href="faq.html#faq.help">How To Get Further Assistance</a>
+</dt></dl>
+<br><table width="100%" summary="Q and A Div" cellpadding="2" cellspacing="2" border="0">
+<tr class="qandadiv"><td align="left" valign="top" colspan="2">
+<a name="faq.background"></a><h3 class="title">
+<a name="faq.background"></a>1. Background</h3>
+</td></tr>
+<tr class="toc" colspan="2"><td align="left" valign="top" colspan="2">1.1. <a href="faq.html#faq.pronounce">How do you pronounce "Valgrind"?</a><br>1.2. <a href="faq.html#faq.whence">Where does the name "Valgrind" come from?</a><br>
+</td></tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.pronounce"></a><a name="q-pronounce"></a><b>1.1.</b>
+</td>
+<td align="left" valign="top"><b>How do you pronounce "Valgrind"?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-pronounce"></a></td>
+<td align="left" valign="top">
+<p>The "Val" as in the word "value". The "grind" is pronounced
+ with a short 'i' -- ie. "grinned" (rhymes with "tinned") rather than
+ "grined" (rhymes with "find").</p>
+<p>Don't feel bad: almost
+ everyone gets it wrong at first.</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.whence"></a><a name="q-whence"></a><b>1.2.</b>
+</td>
+<td align="left" valign="top"><b>Where does the name "Valgrind" come from?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-whence"></a></td>
+<td align="left" valign="top">
+<p>From Nordic mythology. Originally (before release) the project
+ was named Heimdall, after the watchman of the Nordic gods. He could
+ "see a hundred miles by day or night, hear the grass growing, see the
+ wool growing on a sheep's back", etc. This would have been a great
+ name, but it was already taken by a security package "Heimdal".</p>
+<p>Keeping with the Nordic theme, Valgrind was chosen. Valgrind is
+ the name of the main entrance to Valhalla (the Hall of the Chosen
+ Slain in Asgard). Over this entrance there resides a wolf and over it
+ there is the head of a boar and on it perches a huge eagle, whose eyes
+ can see to the far regions of the nine worlds. Only those judged
+ worthy by the guardians are allowed to pass through Valgrind. All
+ others are refused entrance.</p>
+<p>It's not short for "value grinder", although that's not a bad
+ guess.</p>
+</td>
+</tr>
+</table>
+<br><table width="100%" summary="Q and A Div" cellpadding="2" cellspacing="2" border="0">
+<tr class="qandadiv"><td align="left" valign="top" colspan="2">
+<a name="faq.installing"></a><h3 class="title">
+<a name="faq.installing"></a>2. Compiling, installing and configuring</h3>
+</td></tr>
+<tr class="toc" colspan="2"><td align="left" valign="top" colspan="2">2.1. <a href="faq.html#faq.make_dies">When building Valgrind, 'make' dies partway with
+ an assertion failure, something like this:</a><br>2.2. <a href="faq.html#faq.glibc_devel">When building Valgrind, 'make' fails with this:</a><br>
+</td></tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.make_dies"></a><a name="q-make_dies"></a><b>2.1.</b>
+</td>
+<td align="left" valign="top">
+<b>When building Valgrind, 'make' dies partway with
+ an assertion failure, something like this:</b><pre class="screen">
+% make: expand.c:489: allocated_variable_append:
+ Assertion 'current_variable_set_list->next != 0' failed.
+</pre>
+</td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-make_dies"></a></td>
+<td align="left" valign="top"><p>It's probably a bug in 'make'. Some, but not all, instances of
+ version 3.79.1 have this bug, see
+ <a class="ulink" href="http://www.mail-archive.com/bug-make@gnu.org/msg01658.html" target="_top">this</a>.
+ Try upgrading to a more recent version of 'make'. Alternatively, we have
+ heard that unsetting the CFLAGS environment variable avoids the
+ problem.</p></td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.glibc_devel"></a><a name="idm140639118203536"></a><b>2.2.</b>
+</td>
+<td align="left" valign="top">
+<b>When building Valgrind, 'make' fails with this:</b><pre class="screen">
+/usr/bin/ld: cannot find -lc
+collect2: ld returned 1 exit status
+</pre>
+</td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"></td>
+<td align="left" valign="top"><p>You need to install the glibc-static-devel package.</p></td>
+</tr>
+</table>
+<br><table width="100%" summary="Q and A Div" cellpadding="2" cellspacing="2" border="0">
+<tr class="qandadiv"><td align="left" valign="top" colspan="2">
+<a name="faq.abort"></a><h3 class="title">
+<a name="faq.abort"></a>3. Valgrind aborts unexpectedly</h3>
+</td></tr>
+<tr class="toc" colspan="2"><td align="left" valign="top" colspan="2">3.1. <a href="faq.html#faq.exit_errors">Programs run OK on Valgrind, but at exit produce a bunch of
+ errors involving __libc_freeres and then die
+ with a segmentation fault.</a><br>3.2. <a href="faq.html#faq.bugdeath">My (buggy) program dies like this:</a><br>3.3. <a href="faq.html#faq.msgdeath">My program dies, printing a message like this along the
+ way:</a><br>3.4. <a href="faq.html#faq.java">I tried running a Java program (or another program that uses a
+ just-in-time compiler) under Valgrind but something went wrong.
+ Does Valgrind handle such programs?</a><br>
+</td></tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.exit_errors"></a><a name="q-exit_errors"></a><b>3.1.</b>
+</td>
+<td align="left" valign="top"><b>Programs run OK on Valgrind, but at exit produce a bunch of
+ errors involving <code class="literal">__libc_freeres</code> and then die
+ with a segmentation fault.</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-exit_errors"></a></td>
+<td align="left" valign="top">
+<p>When the program exits, Valgrind runs the procedure
+ <code class="function">__libc_freeres</code> in glibc. This is a hook for
+ memory debuggers, so they can ask glibc to free up any memory it has
+ used. Doing that is needed to ensure that Valgrind doesn't
+ incorrectly report space leaks in glibc.</p>
+<p>The problem is that running <code class="literal">__libc_freeres</code> in
+ older glibc versions causes this crash.</p>
+<p>Workaround for 1.1.X and later versions of Valgrind: use the
+ <code class="option">--run-libc-freeres=no</code> option. You may then get space
+ leak reports for glibc allocations (please don't report these to
+ the glibc people, since they are not real leaks), but at least the
+ program runs.</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.bugdeath"></a><a name="q-bugdeath"></a><b>3.2.</b>
+</td>
+<td align="left" valign="top">
+<b>My (buggy) program dies like this:</b><pre class="screen">valgrind: m_mallocfree.c:248 (get_bszB_as_is): Assertion 'bszB_lo == bszB_hi' failed.</pre>
+<b>or like this:</b><pre class="screen">valgrind: m_mallocfree.c:442 (mk_inuse_bszB): Assertion 'bszB != 0' failed.</pre>
+<b>or otherwise aborts or crashes in m_mallocfree.c.</b>
+</td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-bugdeath"></a></td>
+<td align="left" valign="top"><p>If Memcheck (the memory checker) shows any invalid reads,
+ invalid writes or invalid frees in your program, the above may
+    happen. The reason is that your program may trash Valgrind's low-level
+ memory manager, which then dies with the above assertion, or
+ something similar. The cure is to fix your program so that it
+ doesn't do any illegal memory accesses. The above failure will
+ hopefully go away after that.</p></td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.msgdeath"></a><a name="q-msgdeath"></a><b>3.3.</b>
+</td>
+<td align="left" valign="top">
+<b>My program dies, printing a message like this along the
+ way:</b><pre class="screen">vex x86->IR: unhandled instruction bytes: 0x66 0xF 0x2E 0x5</pre>
+</td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-msgdeath"></a></td>
+<td align="left" valign="top">
+<p>One possibility is that your program has a bug and erroneously
+ jumps to a non-code address, in which case you'll get a SIGILL signal.
+ Memcheck may issue a warning just before this happens, but it might not
+ if the jump happens to land in addressable memory.</p>
+<p>Another possibility is that Valgrind does not handle the
+ instruction. If you are using an older Valgrind, a newer version might
+ handle the instruction. However, all instruction sets have some
+ obscure, rarely used instructions. Also, on amd64 there are an almost
+ limitless number of combinations of redundant instruction prefixes, many
+ of them undocumented but accepted by CPUs. So Valgrind will still have
+ decoding failures from time to time. If this happens, please file a bug
+ report.</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.java"></a><a name="q-java"></a><b>3.4.</b>
+</td>
+<td align="left" valign="top"><b>I tried running a Java program (or another program that uses a
+ just-in-time compiler) under Valgrind but something went wrong.
+ Does Valgrind handle such programs?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-java"></a></td>
+<td align="left" valign="top">
+<p>Valgrind can handle dynamically generated code, so long as
+ none of the generated code is later overwritten by other generated
+ code. If this happens, though, things will go wrong as Valgrind
+ will continue running its translations of the old code (this is true
+ on x86 and amd64, on PowerPC there are explicit cache flush
+ instructions which Valgrind detects and honours).
+ You should try running with
+ <code class="option">--smc-check=all</code> in this case. Valgrind will run
+ much more slowly, but should detect the use of the out-of-date
+ code.</p>
+<p>Alternatively, if you have the source code to the JIT compiler
+ you can insert calls to the
+ <code class="computeroutput">VALGRIND_DISCARD_TRANSLATIONS</code>
+ client request to mark out-of-date code, saving you from using
+ <code class="option">--smc-check=all</code>.</p>
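+<p>A minimal sketch of such a call (the surrounding JIT hook is
+  hypothetical; only the client request itself is real):</p>
+<pre class="programlisting">
+#include <valgrind/valgrind.h>
+
+/* Hypothetical hook, called just before 'size' bytes of generated code
+   starting at 'code' are overwritten with new code. */
+void jit_about_to_overwrite(void *code, unsigned long size)
+{
+  /* Tell Valgrind to drop any translations it made of the old code. */
+  VALGRIND_DISCARD_TRANSLATIONS(code, size);
+}
+</pre>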
+<p>Apart from this, in theory Valgrind can run any Java program
+ just fine, even those that use JNI and are partially implemented in
+ other languages like C and C++. In practice, Java implementations
+ tend to do nasty things that most programs do not, and Valgrind
+ sometimes falls over these corner cases.</p>
+<p>If your Java programs do not run under Valgrind, even with
+ <code class="option">--smc-check=all</code>, please file a bug report and
+ hopefully we'll be able to fix the problem.</p>
+</td>
+</tr>
+</table>
+<br><table width="100%" summary="Q and A Div" cellpadding="2" cellspacing="2" border="0">
+<tr class="qandadiv"><td align="left" valign="top" colspan="2">
+<a name="faq.unexpected"></a><h3 class="title">
+<a name="faq.unexpected"></a>4. Valgrind behaves unexpectedly</h3>
+</td></tr>
+<tr class="toc" colspan="2"><td align="left" valign="top" colspan="2">4.1. <a href="faq.html#faq.reports">My program uses the C++ STL and string classes. Valgrind
+ reports 'still reachable' memory leaks involving these classes at
+ the exit of the program, but there should be none.</a><br>4.2. <a href="faq.html#faq.unhelpful">The stack traces given by Memcheck (or another tool) aren't
+ helpful. How can I improve them?</a><br>4.3. <a href="faq.html#faq.aliases">The stack traces given by Memcheck (or another tool) seem to
+ have the wrong function name in them. What's happening?</a><br>4.4. <a href="faq.html#faq.crashes">My program crashes normally, but doesn't under Valgrind, or vice
+ versa. What's happening?</a><br>4.5. <a href="faq.html#faq.hiddenbug"> Memcheck doesn't report any errors and I know my program has
+ errors.</a><br>4.6. <a href="faq.html#faq.overruns">Why doesn't Memcheck find the array overruns in this
+ program?</a><br>
+</td></tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.reports"></a><a name="q-reports"></a><b>4.1.</b>
+</td>
+<td align="left" valign="top"><b>My program uses the C++ STL and string classes. Valgrind
+ reports 'still reachable' memory leaks involving these classes at
+ the exit of the program, but there should be none.</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-reports"></a></td>
+<td align="left" valign="top">
+<p>First of all: relax, it's probably not a bug, but a feature.
+ Many implementations of the C++ standard libraries use their own
+ memory pool allocators. Memory for quite a number of destructed
+ objects is not immediately freed and given back to the OS, but kept
+ in the pool(s) for later re-use. The fact that the pools are not
+   freed at the exit of the program causes Valgrind to report this
+   memory as still reachable. The behaviour of not freeing the pools at
+   exit could, however, be called a bug in the library.</p>
+<p>Using GCC, you can force the STL to use malloc and to free
+ memory as soon as possible by globally disabling memory caching.
+ Beware! Doing so will probably slow down your program, sometimes
+ drastically.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>With GCC 2.91, 2.95, 3.0 and 3.1, compile all source using
+ the STL with <code class="literal">-D__USE_MALLOC</code>. Beware! This was
+ removed from GCC starting with version 3.3.</p></li>
+<li class="listitem"><p>With GCC 3.2.2 and later, you should export the
+ environment variable <code class="literal">GLIBCPP_FORCE_NEW</code> before
+ running your program.</p></li>
+<li class="listitem"><p>With GCC 3.4 and later, that variable has changed name to
+ <code class="literal">GLIBCXX_FORCE_NEW</code>.</p></li>
+</ul></div>
+<p>There are other ways to disable memory pooling: using the
+ <code class="literal">malloc_alloc</code> template with your objects (not
+ portable, but should work for GCC) or even writing your own memory
+ allocators. But all this goes beyond the scope of this FAQ. Start
+ by reading
+ <a class="ulink" href="http://gcc.gnu.org/onlinedocs/libstdc++/faq/index.html#4_4_leak" target="_top">
+ http://gcc.gnu.org/onlinedocs/libstdc++/faq/index.html#4_4_leak</a>
+ if you absolutely want to do that. But beware:
+ allocators belong to the more messy parts of the STL and
+ people went to great lengths to make the STL portable across
+ platforms. Chances are good that your solution will work on your
+ platform, but not on others.</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.unhelpful"></a><a name="q-unhelpful"></a><b>4.2.</b>
+</td>
+<td align="left" valign="top"><b>The stack traces given by Memcheck (or another tool) aren't
+ helpful. How can I improve them?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-unhelpful"></a></td>
+<td align="left" valign="top">
+<p>If they're not long enough, use <code class="option">--num-callers</code>
+ to make them longer.</p>
+<p>If they're not detailed enough, make sure you are compiling
+ with <code class="option">-g</code> to add debug information. And don't strip
+ symbol tables (programs should be unstripped unless you run 'strip'
+ on them; some libraries ship stripped).</p>
+<p>Also, for leak reports involving shared objects, if the shared
+ object is unloaded before the program terminates, Valgrind will
+ discard the debug information and the error message will be full of
+ <code class="literal">???</code> entries. The workaround here is to avoid
+ calling <code class="function">dlclose</code> on these shared objects.</p>
+<p>Also, <code class="option">-fomit-frame-pointer</code> and
+ <code class="option">-fstack-check</code> can make stack traces worse.</p>
+<p>Some example sub-traces:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>With debug information and unstripped (best):</p>
+<pre class="programlisting">
+Invalid write of size 1
+ at 0x80483BF: really (malloc1.c:20)
+ by 0x8048370: main (malloc1.c:9)
+</pre>
+</li>
+<li class="listitem">
+<p>With no debug information, unstripped:</p>
+<pre class="programlisting">
+Invalid write of size 1
+ at 0x80483BF: really (in /auto/homes/njn25/grind/head5/a.out)
+ by 0x8048370: main (in /auto/homes/njn25/grind/head5/a.out)
+</pre>
+</li>
+<li class="listitem">
+<p>With no debug information, stripped:</p>
+<pre class="programlisting">
+Invalid write of size 1
+ at 0x80483BF: (within /auto/homes/njn25/grind/head5/a.out)
+ by 0x8048370: (within /auto/homes/njn25/grind/head5/a.out)
+ by 0x42015703: __libc_start_main (in /lib/tls/libc-2.3.2.so)
+ by 0x80482CC: (within /auto/homes/njn25/grind/head5/a.out)
+</pre>
+</li>
+<li class="listitem">
+<p>With debug information and -fomit-frame-pointer:</p>
+<pre class="programlisting">
+Invalid write of size 1
+ at 0x80483C4: really (malloc1.c:20)
+ by 0x42015703: __libc_start_main (in /lib/tls/libc-2.3.2.so)
+ by 0x80482CC: ??? (start.S:81)
+</pre>
+</li>
+<li class="listitem">
+<p>A leak error message involving an unloaded shared object:</p>
+<pre class="programlisting">
+84 bytes in 1 blocks are possibly lost in loss record 488 of 713
+ at 0x1B9036DA: operator new(unsigned) (vg_replace_malloc.c:132)
+ by 0x1DB63EEB: ???
+ by 0x1DB4B800: ???
+ by 0x1D65E007: ???
+ by 0x8049EE6: main (main.cpp:24)
+</pre>
+</li>
+</ul></div>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.aliases"></a><a name="q-aliases"></a><b>4.3.</b>
+</td>
+<td align="left" valign="top"><b>The stack traces given by Memcheck (or another tool) seem to
+ have the wrong function name in them. What's happening?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-aliases"></a></td>
+<td align="left" valign="top"><p>Occasionally Valgrind stack traces get the wrong function
+ names. This is caused by glibc using aliases to effectively give
+ one function two names. Most of the time Valgrind chooses a
+ suitable name, but very occasionally it gets it wrong. Examples we know
+ of are printing <code class="function">bcmp</code> instead of
+ <code class="function">memcmp</code>, <code class="function">index</code> instead of
+ <code class="function">strchr</code>, and <code class="function">rindex</code> instead of
+ <code class="function">strrchr</code>.</p></td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.crashes"></a><a name="q-crashes"></a><b>4.4.</b>
+</td>
+<td align="left" valign="top"><b>My program crashes normally, but doesn't under Valgrind, or vice
+ versa. What's happening?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-crashes"></a></td>
+<td align="left" valign="top">
+<p>When a program runs under Valgrind, its environment is slightly
+ different to when it runs natively. For example, the memory layout is
+ different, and the way that threads are scheduled is different.</p>
+<p>Most of the time this doesn't make any difference, but it can,
+ particularly if your program is buggy. For example, if your program
+ crashes because it erroneously accesses memory that is unaddressable,
+ it's possible that this memory will not be unaddressable when run under
+ Valgrind. Alternatively, if your program has data races, these may not
+ manifest under Valgrind.</p>
+<p>There isn't anything you can do to change this; it's just the
+ nature of the way Valgrind works that it cannot exactly replicate a
+ native execution environment. In the case where your program crashes
+ due to a memory error when run natively but not when run under Valgrind,
+ in most cases Memcheck should identify the bad memory operation.</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.hiddenbug"></a><a name="q-hiddenbug"></a><b>4.5.</b>
+</td>
+<td align="left" valign="top"><b> Memcheck doesn't report any errors and I know my program has
+ errors.</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-hiddenbug"></a></td>
+<td align="left" valign="top">
+<p>There are two possible causes of this.</p>
+<p>First, by default, Valgrind only traces the top-level process.
+ So if your program spawns children, they won't be traced by Valgrind
+ by default. Also, if your program is started by a shell script,
+ Perl script, or something similar, Valgrind will trace the shell, or
+ the Perl interpreter, or equivalent.</p>
+<p>To trace child processes, use the
+ <code class="option">--trace-children=yes</code> option.</p>
+<p>If you are tracing large trees of processes, it can be less
+ disruptive to have the output sent over the network. Give Valgrind
+ the option <code class="option">--log-socket=127.0.0.1:12345</code> (if you want
+ logging output sent to port <code class="literal">12345</code> on
+ <code class="literal">localhost</code>). You can use the valgrind-listener
+ program to listen on that port:</p>
+<pre class="programlisting">
+valgrind-listener 12345
+</pre>
+<p>Obviously you have to start the listener process first. See
+ the manual for more details.</p>
+<p>Second, if your program is statically linked, most Valgrind
+ tools will only work well if they are able to replace certain
+ functions, such as <code class="function">malloc</code>, with their own
+    versions. By default, statically linked <code class="function">malloc</code>
+    functions are not replaced. A key indicator of this is
+ if Memcheck says:
+</p>
+<pre class="programlisting">
+All heap blocks were freed -- no leaks are possible
+</pre>
+<p>
+ when you know your program calls <code class="function">malloc</code>. The
+ workaround is to use the option
+ <code class="option">--soname-synonyms=somalloc=NONE</code>
+ or to avoid statically linking your program.</p>
+<p>There will also be no replacement if you use an alternative
+    <code class="function">malloc</code> library such as tcmalloc, jemalloc,
+ ... In such a case, the
+ option <code class="option">--soname-synonyms=somalloc=zzzz</code> (where
+ zzzz is the soname of the alternative malloc library) will allow
+ Valgrind to replace the functions.</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.overruns"></a><a name="q-overruns"></a><b>4.6.</b>
+</td>
+<td align="left" valign="top">
+<b>Why doesn't Memcheck find the array overruns in this
+ program?</b><pre class="programlisting">
+int global[5];
+
+int main(void)
+{
+ int stack[5];
+
+  global[5] = 0;
+ stack [5] = 0;
+
+ return 0;
+}
+</pre>
+</td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-overruns"></a></td>
+<td align="left" valign="top">
+<p>Unfortunately, Memcheck doesn't do bounds checking on global
+ or stack arrays. We'd like to, but it's just not possible to do in
+ a reasonable way that fits with how Memcheck works. Sorry.</p>
+<p>However, the experimental tool SGcheck can detect errors like
+ this. Run Valgrind with the <code class="option">--tool=exp-sgcheck</code> option
+ to try it, but be aware that it is not as robust as Memcheck.</p>
+</td>
+</tr>
+</table>
+<br><table width="100%" summary="Q and A Div" cellpadding="2" cellspacing="2" border="0">
+<tr class="qandadiv"><td align="left" valign="top" colspan="2">
+<a name="faq.misc"></a><h3 class="title">
+<a name="faq.misc"></a>5. Miscellaneous</h3>
+</td></tr>
+<tr class="toc" colspan="2"><td align="left" valign="top" colspan="2">5.1. <a href="faq.html#faq.writesupp">I tried writing a suppression but it didn't work. Can you
+ write my suppression for me?</a><br>5.2. <a href="faq.html#faq.deflost">With Memcheck's memory leak detector, what's the
+ difference between "definitely lost", "indirectly lost", "possibly
+ lost", "still reachable", and "suppressed"?</a><br>5.3. <a href="faq.html#faq.undeferrors">Memcheck's uninitialised value errors are hard to track down,
+ because they are often reported some time after they are caused. Could
+ Memcheck record a trail of operations to better link the cause to the
+ effect? Or maybe just eagerly report any copies of uninitialised
+ memory values?</a><br>5.4. <a href="faq.html#faq.attach">Is it possible to attach Valgrind to a program that is already
+ running?</a><br>
+</td></tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.writesupp"></a><a name="q-writesupp"></a><b>5.1.</b>
+</td>
+<td align="left" valign="top"><b>I tried writing a suppression but it didn't work. Can you
+ write my suppression for me?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-writesupp"></a></td>
+<td align="left" valign="top">
+<p>Yes! Use the <code class="option">--gen-suppressions=yes</code> feature
+ to spit out suppressions automatically for you. You can then edit
+ them if you like, eg. combining similar automatically generated
+ suppressions using wildcards like <code class="literal">'*'</code>.</p>
+<p>If you really want to write suppressions by hand, read the
+ manual carefully. Note particularly that C++ function names must be
+ mangled (that is, not demangled).</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.deflost"></a><a name="q-deflost"></a><b>5.2.</b>
+</td>
+<td align="left" valign="top"><b>With Memcheck's memory leak detector, what's the
+ difference between "definitely lost", "indirectly lost", "possibly
+ lost", "still reachable", and "suppressed"?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-deflost"></a></td>
+<td align="left" valign="top">
+<p>The details are in the Memcheck section of the user manual.</p>
+<p>In short:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>"definitely lost" means your program is leaking memory --
+ fix those leaks!</p></li>
+<li class="listitem"><p>"indirectly lost" means your program is leaking memory in
+ a pointer-based structure. (E.g. if the root node of a binary tree
+ is "definitely lost", all the children will be "indirectly lost".)
+ If you fix the "definitely lost" leaks, the "indirectly lost" leaks
+ should go away.
+ </p></li>
+<li class="listitem"><p>"possibly lost" means your program is leaking
+ memory, unless you're doing unusual things with pointers that could
+ cause them to point into the middle of an allocated block; see the
+ user manual for some possible causes. Use
+ <code class="option">--show-possibly-lost=no</code> if you don't want to see
+ these reports.</p></li>
+<li class="listitem"><p>"still reachable" means your program is probably ok -- it
+ didn't free some memory it could have. This is quite common and
+ often reasonable. Don't use
+ <code class="option">--show-reachable=yes</code> if you don't want to see
+ these reports.</p></li>
+<li class="listitem"><p>"suppressed" means that a leak error has been suppressed.
+ There are some suppressions in the default suppression files.
+ You can ignore suppressed errors.</p></li>
+</ul></div>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.undeferrors"></a><a name="q-undeferrors"></a><b>5.3.</b>
+</td>
+<td align="left" valign="top"><b>Memcheck's uninitialised value errors are hard to track down,
+ because they are often reported some time after they are caused. Could
+ Memcheck record a trail of operations to better link the cause to the
+ effect? Or maybe just eagerly report any copies of uninitialised
+ memory values?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-undeferrors"></a></td>
+<td align="left" valign="top">
+<p>Prior to version 3.4.0, the answer was "we don't know how to do it
+ without huge performance penalties". As of 3.4.0, try using the
+ <code class="option">--track-origins=yes</code> option. It will run slower than
+ usual, but will give you extra information about the origin of
+ uninitialised values.</p>
+<p>Or if you want to do it the old fashioned way, you can use the
+ client request
+ <code class="computeroutput">VALGRIND_CHECK_VALUE_IS_DEFINED</code> to help
+ track these errors down -- work backwards from the point where the
+ uninitialised error occurs, checking suspect values until you find the
+ cause. This requires editing, compiling and re-running your program
+ multiple times, which is a pain, but still easier than debugging the
+ problem without Memcheck's help.</p>
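+<p>For example (the function below is made up; only the client request
+  is part of Memcheck's API):</p>
+<pre class="programlisting">
+#include <valgrind/memcheck.h>
+
+int compute(int a, int b)
+{
+  int result = a + b;
+  /* Report an error here and now if 'result' contains uninitialised
+     bits, instead of waiting until the value affects control flow or
+     is passed to a system call. */
+  VALGRIND_CHECK_VALUE_IS_DEFINED(result);
+  return result;
+}
+</pre>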
+<p>As for eager reporting of copies of uninitialised memory values,
+ this has been suggested multiple times. Unfortunately, almost all
+ programs legitimately copy uninitialised memory values around (because
+ compilers pad structs to preserve alignment) and eager checking leads to
+ hundreds of false positives. Therefore Memcheck does not support eager
+ checking at this time.</p>
+</td>
+</tr>
+<tr><td colspan="2"> </td></tr>
+<tr class="question">
+<td align="left" valign="top">
+<a name="faq.attach"></a><a name="q-attach"></a><b>5.4.</b>
+</td>
+<td align="left" valign="top"><b>Is it possible to attach Valgrind to a program that is already
+ running?</b></td>
+</tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-attach"></a></td>
+<td align="left" valign="top">
+<p>No. The environment that Valgrind provides for running programs
+ is significantly different to that for normal programs, e.g. due to
+ different layout of memory. Therefore Valgrind has to have full control
+ from the very start.</p>
+<p>It is possible to achieve something like this by running your
+ program without any instrumentation (which involves a slow-down of about
+ 5x, less than that of most tools), and then adding instrumentation once
+ you get to a point of interest. Support for this must be provided by
+ the tool, however, and Callgrind is the only tool that currently has
+ such support. See the instructions on the
+ <code class="computeroutput">callgrind_control</code> program for details.
+ </p>
+</td>
+</tr>
+</table>
+<br><table width="100%" summary="Q and A Div" cellpadding="2" cellspacing="2" border="0">
+<tr class="qandadiv"><td align="left" valign="top" colspan="2">
+<a name="faq.help"></a><h3 class="title">
+<a name="faq.help"></a>6. How To Get Further Assistance</h3>
+</td></tr>
+<tr class="toc" colspan="2"><td align="left" valign="top" colspan="2"></td></tr>
+<tr class="answer">
+<td align="left" valign="top"><a name="a-help"></a></td>
+<td align="left" valign="top">
+<p>Read the appropriate section(s) of the
+ <a class="ulink" href="http://www.valgrind.org/docs/manual/index.html" target="_top">Valgrind Documentation</a>.</p>
+<p><a class="ulink" href="http://search.gmane.org" target="_top">Search</a> the
+ <a class="ulink" href="http://news.gmane.org/gmane.comp.debugging.valgrind" target="_top">valgrind-users</a> mailing list archives, using the group name
+ <code class="computeroutput">gmane.comp.debugging.valgrind</code>.</p>
+<p>If you think an answer in this FAQ is incomplete or inaccurate, please
+ e-mail <a class="ulink" href="mailto:valgrind@valgrind.org" target="_top">valgrind@valgrind.org</a>.</p>
+<p>If you have tried all of these things and are still
+ stuck, you can try mailing the
+ <a class="ulink" href="http://www.valgrind.org/support/mailing_lists.html" target="_top">valgrind-users mailing list</a>.
+      Note that an email has a better chance of being answered usefully if it is
+ clearly written. Also remember that, despite the fact that most of the
+ community are very helpful and responsive to emailed questions, you are
+ probably requesting help from unpaid volunteers, so you have no guarantee
+ of receiving an answer.</p>
+</td>
+</tr>
+</table>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="FAQ.html"><< Valgrind FAQ</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="FAQ.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="tech-docs.html">Valgrind Technical Documentation >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/hg-manual.html b/docs/html/hg-manual.html
new file mode 100644
index 0000000..4156c05
--- /dev/null
+++ b/docs/html/hg-manual.html
@@ -0,0 +1,1248 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>7. Helgrind: a thread error detector</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="cl-manual.html" title="6. Callgrind: a call-graph generating cache and branch prediction profiler">
+<link rel="next" href="drd-manual.html" title="8. DRD: a thread error detector">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="cl-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="drd-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="hg-manual"></a>7. Helgrind: a thread error detector</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.overview">7.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.api-checks">7.2. Detected errors: Misuses of the POSIX pthreads API</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.lock-orders">7.3. Detected errors: Inconsistent Lock Orderings</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.data-races">7.4. Detected errors: Data Races</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="hg-manual.html#hg-manual.data-races.example">7.4.1. A Simple Data Race</a></span></dt>
+<dt><span class="sect2"><a href="hg-manual.html#hg-manual.data-races.algorithm">7.4.2. Helgrind's Race Detection Algorithm</a></span></dt>
+<dt><span class="sect2"><a href="hg-manual.html#hg-manual.data-races.errmsgs">7.4.3. Interpreting Race Error Messages</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.effective-use">7.5. Hints and Tips for Effective Use of Helgrind</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.options">7.6. Helgrind Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.monitor-commands">7.7. Helgrind Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.client-requests">7.8. Helgrind Client Requests</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.todolist">7.9. A To-Do List for Helgrind</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=helgrind</code> on the Valgrind
+command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.overview"></a>7.1. Overview</h2></div></div></div>
+<p>Helgrind is a Valgrind tool for detecting synchronisation errors
+in C, C++ and Fortran programs that use the POSIX pthreads
+threading primitives.</p>
+<p>The main abstractions in POSIX pthreads are: a set of threads
+sharing a common address space, thread creation, thread joining,
+thread exit, mutexes (locks), condition variables (inter-thread event
+notifications), reader-writer locks, spinlocks, semaphores and
+barriers.</p>
+<p>Helgrind can detect three classes of errors, which are discussed
+in detail in the next three sections:</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p><a class="link" href="hg-manual.html#hg-manual.api-checks" title="7.2. Detected errors: Misuses of the POSIX pthreads API">
+ Misuses of the POSIX pthreads API.</a></p></li>
+<li class="listitem"><p><a class="link" href="hg-manual.html#hg-manual.lock-orders" title="7.3. Detected errors: Inconsistent Lock Orderings">
+ Potential deadlocks arising from lock
+ ordering problems.</a></p></li>
+<li class="listitem"><p><a class="link" href="hg-manual.html#hg-manual.data-races" title="7.4. Detected errors: Data Races">
+ Data races -- accessing memory without adequate locking
+ or synchronisation</a>.
+ </p></li>
+</ol></div>
+<p>Problems like these often result in unreproducible,
+timing-dependent crashes, deadlocks and other misbehaviour, and
+can be difficult to find by other means.</p>
+<p>Helgrind is aware of all the pthread abstractions and tracks
+their effects as accurately as it can. On x86 and amd64 platforms, it
+understands and partially handles implicit locking arising from the
+use of the LOCK instruction prefix. On PowerPC/POWER and ARM
+platforms, it partially handles implicit locking arising from
+load-linked and store-conditional instruction pairs.
+</p>
+<p>Helgrind works best when your application uses only the POSIX
+pthreads API. However, if you want to use custom threading
+primitives, you can describe their behaviour to Helgrind using the
+<code class="varname">ANNOTATE_*</code> macros defined
+in <code class="varname">helgrind.h</code>.</p>
+<p>Following those is a section containing
+<a class="link" href="hg-manual.html#hg-manual.effective-use" title="7.5. Hints and Tips for Effective Use of Helgrind">
+hints and tips on how to get the best out of Helgrind.</a>
+</p>
+<p>Then there is a
+<a class="link" href="hg-manual.html#hg-manual.options" title="7.6. Helgrind Command-line Options">summary of command-line
+options.</a>
+</p>
+<p>Finally, there is
+<a class="link" href="hg-manual.html#hg-manual.todolist" title="7.9. A To-Do List for Helgrind">a brief summary of areas in which Helgrind
+could be improved.</a>
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.api-checks"></a>7.2. Detected errors: Misuses of the POSIX pthreads API</h2></div></div></div>
+<p>Helgrind intercepts calls to many POSIX pthreads functions, and
+is therefore able to report on various common problems. Although
+these are unglamourous errors, their presence can lead to undefined
+program behaviour and hard-to-find bugs later on. The detected errors
+are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>unlocking an invalid mutex</p></li>
+<li class="listitem"><p>unlocking a not-locked mutex</p></li>
+<li class="listitem"><p>unlocking a mutex held by a different
+ thread</p></li>
+<li class="listitem"><p>destroying an invalid or a locked mutex</p></li>
+<li class="listitem"><p>recursively locking a non-recursive mutex</p></li>
+<li class="listitem"><p>deallocation of memory that contains a
+ locked mutex</p></li>
+<li class="listitem"><p>passing mutex arguments to functions expecting
+ reader-writer lock arguments, and vice
+ versa</p></li>
+<li class="listitem"><p>when a POSIX pthread function fails with an
+ error code that must be handled</p></li>
+<li class="listitem"><p>when a thread exits whilst still holding locked
+ locks</p></li>
+<li class="listitem"><p>calling <code class="function">pthread_cond_wait</code>
+ with a not-locked mutex, an invalid mutex,
+ or one locked by a different
+ thread</p></li>
+<li class="listitem"><p>inconsistent bindings between condition
+ variables and their associated mutexes</p></li>
+<li class="listitem"><p>invalid or duplicate initialisation of a pthread
+ barrier</p></li>
+<li class="listitem"><p>initialisation of a pthread barrier on which threads
+ are still waiting</p></li>
+<li class="listitem"><p>destruction of a pthread barrier object which was
+ never initialised, or on which threads are still
+ waiting</p></li>
+<li class="listitem"><p>waiting on an uninitialised pthread
+ barrier</p></li>
+<li class="listitem"><p>for all of the pthreads functions that Helgrind
+ intercepts, an error is reported, along with a stack
+ trace, if the system threading library routine returns
+ an error code, even if Helgrind itself detected no
+ error</p></li>
+</ul></div>
+<p>Checks pertaining to the validity of mutexes are generally also
+performed for reader-writer locks.</p>
+<p>Various kinds of this-can't-possibly-happen events are also
+reported. These usually indicate bugs in the system threading
+library.</p>
+<p>Reported errors always contain a primary stack trace indicating
+where the error was detected. They may also contain auxiliary stack
+traces giving additional information. In particular, most errors
+relating to mutexes will also tell you where that mutex first came to
+Helgrind's attention (the "<code class="computeroutput">was first observed
+at</code>" part), so you have a chance of figuring out which
+mutex it is referring to. For example:</p>
+<pre class="programlisting">
+Thread #1 unlocked a not-locked lock at 0x7FEFFFA90
+ at 0x4C2408D: pthread_mutex_unlock (hg_intercepts.c:492)
+ by 0x40073A: nearly_main (tc09_bad_unlock.c:27)
+ by 0x40079B: main (tc09_bad_unlock.c:50)
+ Lock at 0x7FEFFFA90 was first observed
+ at 0x4C25D01: pthread_mutex_init (hg_intercepts.c:326)
+ by 0x40071F: nearly_main (tc09_bad_unlock.c:23)
+ by 0x40079B: main (tc09_bad_unlock.c:50)
+</pre>
+<p>Helgrind has a way of summarising thread identities, as
+you see here with the text "<code class="computeroutput">Thread
+#1</code>". This is so that it can speak about threads and
+sets of threads without overwhelming you with details. See
+<a class="link" href="hg-manual.html#hg-manual.data-races.errmsgs" title="7.4.3. Interpreting Race Error Messages">below</a>
+for more information on interpreting error messages.</p>
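+<p>For reference, a minimal sketch of the kind of mistake that produces the
+report shown above is given below. The function and file names in the report
+come from Valgrind's own test suite; this sketch only illustrates the error
+class.</p>
+<pre class="programlisting">
+#include <pthread.h>
+
+pthread_mutex_t mx;
+
+int main ( void ) {
+   pthread_mutex_init(&mx, NULL);
+   /* Error: mx has never been locked, so unlocking it is invalid. */
+   pthread_mutex_unlock(&mx);
+   return 0;
+}
+</pre>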
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.lock-orders"></a>7.3. Detected errors: Inconsistent Lock Orderings</h2></div></div></div>
+<p>In this section, and in general, to "acquire" a lock simply
+means to lock that lock, and to "release" a lock means to unlock
+it.</p>
+<p>Helgrind monitors the order in which threads acquire locks.
+This allows it to detect potential deadlocks which could arise from
+the formation of cycles of locks. Detecting such inconsistencies is
+useful because, whilst actual deadlocks are fairly obvious, potential
+deadlocks may never be discovered during testing and could later lead
+to hard-to-diagnose in-service failures.</p>
+<p>The simplest example of such a problem is as
+follows.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Imagine some shared resource R, which, for whatever
+ reason, is guarded by two locks, L1 and L2, which must both be held
+ when R is accessed.</p></li>
+<li class="listitem"><p>Suppose a thread acquires L1, then L2, and proceeds
+ to access R. The implication of this is that all threads in the
+ program must acquire the two locks in the order first L1 then L2.
+ Not doing so risks deadlock.</p></li>
+<li class="listitem"><p>The deadlock could happen if two threads -- call them
+ T1 and T2 -- both want to access R. Suppose T1 acquires L1 first,
+ and T2 acquires L2 first. Then T1 tries to acquire L2, and T2 tries
+ to acquire L1, but those locks are both already held. So T1 and T2
+ become deadlocked.</p></li>
+</ul></div>
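+<p>In code, the scenario above looks schematically like this (a sketch; L1
+and L2 are ordinary mutexes, and the two columns show the two
+threads):</p>
+<pre class="programlisting">
+Thread T1:                      Thread T2:
+
+pthread_mutex_lock(&L1);        pthread_mutex_lock(&L2);
+pthread_mutex_lock(&L2);        pthread_mutex_lock(&L1);
+/* ... access R ... */          /* ... access R ... */
+pthread_mutex_unlock(&L2);      pthread_mutex_unlock(&L1);
+pthread_mutex_unlock(&L1);      pthread_mutex_unlock(&L2);
+</pre>
+<p>If T1 has acquired L1 and T2 has acquired L2, neither can make further
+progress, and the program deadlocks.</p>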
+<p>Helgrind builds a directed graph indicating the order in which
+locks have been acquired in the past. When a thread acquires a new
+lock, the graph is updated, and then checked to see if it now contains
+a cycle. The presence of a cycle indicates a potential deadlock involving
+the locks in the cycle.</p>
+<p>In general, Helgrind will choose two locks involved in the cycle
+and show you how their acquisition ordering has become inconsistent.
+It does this by showing the program points that first defined the
+ordering, and the program points which later violated it. Here is a
+simple example involving just two locks:</p>
+<pre class="programlisting">
+Thread #1: lock order "0x7FF0006D0 before 0x7FF0006A0" violated
+
+Observed (incorrect) order is: acquisition of lock at 0x7FF0006A0
+ at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
+ by 0x400825: main (tc13_laog1.c:23)
+
+ followed by a later acquisition of lock at 0x7FF0006D0
+ at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
+ by 0x400853: main (tc13_laog1.c:24)
+
+Required order was established by acquisition of lock at 0x7FF0006D0
+ at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
+ by 0x40076D: main (tc13_laog1.c:17)
+
+ followed by a later acquisition of lock at 0x7FF0006A0
+ at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
+ by 0x40079B: main (tc13_laog1.c:18)
+</pre>
+<p>When there are more than two locks in the cycle, the error is
+equally serious. However, at present Helgrind does not show the locks
+involved, sometimes because that information is not available, but
+also so as to avoid flooding you with information. For example, a
+naive implementation of the famous Dining Philosophers problem
+involves a cycle of five locks
+(see <code class="computeroutput">helgrind/tests/tc14_laog_dinphils.c</code>).
+In this case Helgrind has detected that all 5 philosophers could
+simultaneously pick up their left fork and then deadlock whilst
+waiting to pick up their right forks.</p>
+<pre class="programlisting">
+Thread #6: lock order "0x80499A0 before 0x8049A00" violated
+
+Observed (incorrect) order is: acquisition of lock at 0x8049A00
+ at 0x40085BC: pthread_mutex_lock (hg_intercepts.c:495)
+ by 0x80485B4: dine (tc14_laog_dinphils.c:18)
+ by 0x400BDA4: mythread_wrapper (hg_intercepts.c:219)
+ by 0x39B924: start_thread (pthread_create.c:297)
+ by 0x2F107D: clone (clone.S:130)
+
+ followed by a later acquisition of lock at 0x80499A0
+ at 0x40085BC: pthread_mutex_lock (hg_intercepts.c:495)
+ by 0x80485CD: dine (tc14_laog_dinphils.c:19)
+ by 0x400BDA4: mythread_wrapper (hg_intercepts.c:219)
+ by 0x39B924: start_thread (pthread_create.c:297)
+ by 0x2F107D: clone (clone.S:130)
+</pre>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.data-races"></a>7.4. Detected errors: Data Races</h2></div></div></div>
+<p>A data race happens, or could happen, when two threads access a
+shared memory location without using suitable locks or other
+synchronisation to ensure single-threaded access. Such missing
+locking can cause obscure timing dependent bugs. Ensuring programs
+are race-free is one of the central difficulties of threaded
+programming.</p>
+<p>Reliably detecting races is a difficult problem, and most
+of Helgrind's internals are devoted to dealing with it.
+We begin with a simple example.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="hg-manual.data-races.example"></a>7.4.1. A Simple Data Race</h3></div></div></div>
+<p>About the simplest possible example of a race is as follows. In
+this program, it is impossible to know what the value
+of <code class="computeroutput">var</code> is at the end of the program.
+Is it 2?  Or 1?</p>
+<pre class="programlisting">
+#include <pthread.h>
+
+int var = 0;
+
+void* child_fn ( void* arg ) {
+ var++; /* Unprotected relative to parent */ /* this is line 6 */
+ return NULL;
+}
+
+int main ( void ) {
+ pthread_t child;
+ pthread_create(&child, NULL, child_fn, NULL);
+ var++; /* Unprotected relative to child */ /* this is line 13 */
+ pthread_join(child, NULL);
+ return 0;
+}
+</pre>
+<p>The problem is there is nothing to
+stop <code class="varname">var</code> being updated simultaneously
+by both threads. A correct program would
+protect <code class="varname">var</code> with a lock of type
+<code class="function">pthread_mutex_t</code>, which is acquired
+before each access and released afterwards. Helgrind's output for
+this program is:</p>
+<pre class="programlisting">
+Thread #1 is the program's root thread
+
+Thread #2 was created
+ at 0x511C08E: clone (in /lib64/libc-2.8.so)
+ by 0x4E333A4: do_clone (in /lib64/libpthread-2.8.so)
+ by 0x4E33A30: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.8.so)
+ by 0x4C299D4: pthread_create@* (hg_intercepts.c:214)
+ by 0x400605: main (simple_race.c:12)
+
+Possible data race during read of size 4 at 0x601038 by thread #1
+Locks held: none
+ at 0x400606: main (simple_race.c:13)
+
+This conflicts with a previous write of size 4 by thread #2
+Locks held: none
+ at 0x4005DC: child_fn (simple_race.c:6)
+ by 0x4C29AFF: mythread_wrapper (hg_intercepts.c:194)
+ by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
+ by 0x511C0CC: clone (in /lib64/libc-2.8.so)
+
+Location 0x601038 is 0 bytes inside global var "var"
+declared at simple_race.c:3
+</pre>
+<p>This is quite a lot of detail for an apparently simple error.
+The last clause is the main error message. It says there is a race as
+a result of a read of size 4 (bytes), at 0x601038, which is the
+address of <code class="computeroutput">var</code>, happening in
+function <code class="computeroutput">main</code> at line 13 in the
+program.</p>
+<p>Two important parts of the message are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>Helgrind shows two stack traces for the error, not one. By
+ definition, a race involves two different threads accessing the
+ same location in such a way that the result depends on the relative
+ speeds of the two threads.</p>
+<p>
+ The first stack trace follows the text "<code class="computeroutput">Possible
+ data race during read of size 4 ...</code>" and the
+ second trace follows the text "<code class="computeroutput">This conflicts with
+ a previous write of size 4 ...</code>". Helgrind is
+ usually able to show both accesses involved in a race. At least
+ one of these will be a write (since two concurrent, unsynchronised
+ reads are harmless), and they will of course be from different
+ threads.</p>
+<p>By examining your program at the two locations, you should be
+ able to get at least some idea of what the root cause of the
+ problem is. For each location, Helgrind shows the set of locks
+ held at the time of the access. This often makes it clear which
+ thread, if any, failed to take a required lock. In this example
+ neither thread holds a lock during the access.</p>
+</li>
+<li class="listitem">
+<p>For races which occur on global or stack variables, Helgrind
+ tries to identify the name and defining point of the variable.
+ Hence the text "<code class="computeroutput">Location 0x601038 is 0 bytes inside
+ global var "var" declared at simple_race.c:3</code>".</p>
+<p>Showing names of stack and global variables carries no
+ run-time overhead once Helgrind has your program up and running.
+ However, it does require Helgrind to spend considerable extra time
+ and memory at program startup to read the relevant debug info.
+ Hence this facility is disabled by default. To enable it, you need
+ to give the <code class="varname">--read-var-info=yes</code> option to
+ Helgrind.</p>
+</li>
+</ul></div>
+<p>The following section explains Helgrind's race detection
+algorithm in more detail.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="hg-manual.data-races.algorithm"></a>7.4.2. Helgrind's Race Detection Algorithm</h3></div></div></div>
+<p>Most programmers think about threaded programming in terms of
+the basic functionality provided by the threading library (POSIX
+Pthreads): thread creation, thread joining, locks, condition
+variables, semaphores and barriers.</p>
+<p>The effect of using these functions is to impose
+constraints upon the order in which memory accesses can
+happen. This implied ordering is generally known as the
+"happens-before relation". Once you understand the happens-before
+relation, it is easy to see how Helgrind finds races in your code.
+Fortunately, the happens-before relation is itself easy to understand,
+and is by itself a useful tool for reasoning about the behaviour of
+parallel programs. We now introduce it using a simple example.</p>
+<p>Consider first the following buggy program:</p>
+<pre class="programlisting">
+Parent thread: Child thread:
+
+int var;
+
+// create child thread
+pthread_create(...)
+var = 20; var = 10;
+ exit
+
+// wait for child
+pthread_join(...)
+printf("%d\n", var);
+</pre>
+<p>The parent thread creates a child. Both then write different
+values to some variable <code class="computeroutput">var</code>, and the
+parent then waits for the child to exit.</p>
+<p>What is the value of <code class="computeroutput">var</code> at the
+end of the program, 10 or 20? We don't know. The program is
+considered buggy (it has a race) because the final value
+of <code class="computeroutput">var</code> depends on the relative rates
+of progress of the parent and child threads. If the parent is fast
+and the child is slow, then the child's assignment may happen later,
+so the final value will be 10; and vice versa if the child is faster
+than the parent.</p>
+<p>The relative rates of progress of parent vs child are not something
+the programmer can control, and will often change from run to run.
+It depends on factors such as the load on the machine, what else is
+running, the kernel's scheduling strategy, and many other factors.</p>
+<p>The obvious fix is to use a lock to
+protect <code class="computeroutput">var</code>. It is however
+instructive to consider a somewhat more abstract solution, which is to
+send a message from one thread to the other:</p>
+<pre class="programlisting">
+Parent thread: Child thread:
+
+int var;
+
+// create child thread
+pthread_create(...)
+var = 20;
+// send message to child
+ // wait for message to arrive
+ var = 10;
+ exit
+
+// wait for child
+pthread_join(...)
+printf("%d\n", var);
+</pre>
+<p>Now the program reliably prints "10", regardless of the speed of
+the threads. Why? Because the child's assignment cannot happen until
+after it receives the message. And the message is not sent until
+after the parent's assignment is done.</p>
+<p>The message transmission creates a "happens-before" dependency
+between the two assignments: <code class="computeroutput">var = 20;</code>
+must now happen-before <code class="computeroutput">var = 10;</code>.
+And so there is no longer a race
+on <code class="computeroutput">var</code>.
+</p>
+<p>Note that it's not significant that the parent sends a message
+to the child. Sending a message from the child (after its assignment)
+to the parent (before its assignment) would also fix the problem, causing
+the program to reliably print "20".</p>
+<p>Helgrind's algorithm is (conceptually) very simple. It monitors all
+accesses to memory locations. If a location -- in this example,
+<code class="computeroutput">var</code> --
+is accessed by two different threads, Helgrind checks to see if the
+two accesses are ordered by the happens-before relation. If so,
+that's fine; if not, it reports a race.</p>
+<p>It is important to understand that the happens-before relation
+creates only a partial ordering, not a total ordering. An example of
+a total ordering is comparison of numbers: for any two numbers
+<code class="computeroutput">x</code> and
+<code class="computeroutput">y</code>, either
+<code class="computeroutput">x</code> is less than, equal to, or greater
+than
+<code class="computeroutput">y</code>. A partial ordering is like a
+total ordering, but it can also express the concept that two elements
+are neither equal, less, nor greater, but merely unordered with respect
+to each other.</p>
+<p>In the fixed example above, we say that
+<code class="computeroutput">var = 20;</code> "happens-before"
+<code class="computeroutput">var = 10;</code>. But in the original
+version, they are unordered: we cannot say that either happens-before
+the other.</p>
+<p>What does it mean to say that two accesses from different
+threads are ordered by the happens-before relation? It means that
+there is some chain of inter-thread synchronisation operations which
+cause those accesses to happen in a particular order, irrespective of
+the actual rates of progress of the individual threads. This is a
+required property for a reliable threaded program, which is why
+Helgrind checks for it.</p>
+<p>The happens-before relations created by standard threading
+primitives are as follows:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>When a mutex is unlocked by thread T1 and later (or
+ immediately) locked by thread T2, then the memory accesses in T1
+ prior to the unlock must happen-before those in T2 after it acquires
+ the lock.</p></li>
+<li class="listitem"><p>The same idea applies to reader-writer locks,
+ although with some complication so as to allow correct handling of
+ reads vs writes.</p></li>
+<li class="listitem"><p>When a condition variable (CV) is signalled on by
+ thread T1 and some other thread T2 is thereby released from a wait
+ on the same CV, then the memory accesses in T1 prior to the
+ signalling must happen-before those in T2 after it returns from the
+ wait. If no thread was waiting on the CV then there is no
+ effect.</p></li>
+<li class="listitem"><p>If instead T1 broadcasts on a CV, then all of the
+ waiting threads, rather than just one of them, acquire a
+ happens-before dependency on the broadcasting thread at the point it
+ did the broadcast.</p></li>
+<li class="listitem"><p>A thread T2 that continues after completing sem_wait
+ on a semaphore that thread T1 posts on, acquires a happens-before
+    dependence on the posting thread, a bit like the dependencies caused by
+ mutex unlock-lock pairs. However, since a semaphore can be posted
+ on many times, it is unspecified from which of the post calls the
+ wait call gets its happens-before dependency.</p></li>
+<li class="listitem"><p>For a group of threads T1 .. Tn which arrive at a
+    barrier and then move on, each thread's accesses after the call
+    acquire a happens-after dependency on the accesses made by all
+    threads before the barrier.</p></li>
+<li class="listitem"><p>A newly-created child thread acquires an initial
+ happens-after dependency on the point where its parent created it.
+ That is, all memory accesses performed by the parent prior to
+ creating the child are regarded as happening-before all the accesses
+ of the child.</p></li>
+<li class="listitem"><p>Similarly, when an exiting thread is reaped via a
+ call to <code class="function">pthread_join</code>, once the call returns, the
+ reaping thread acquires a happens-after dependency relative to all memory
+ accesses made by the exiting thread.</p></li>
+</ul></div>
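+<p>As a concrete illustration of the first rule above, the following
+schematic pattern is race-free. Whichever thread gets the mutex first, the
+unlock-then-lock sequence creates a happens-before edge, so the two accesses
+to <code class="computeroutput">x</code> are always ordered and Helgrind
+reports no race:</p>
+<pre class="programlisting">
+Thread T1:                      Thread T2:
+
+pthread_mutex_lock(&mx);
+x = 42;
+pthread_mutex_unlock(&mx);
+                                pthread_mutex_lock(&mx);
+                                printf("%d\n", x);
+                                pthread_mutex_unlock(&mx);
+</pre>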
+<p>In summary: Helgrind intercepts the above listed events, and builds a
+directed acyclic graph representing the collective happens-before
+dependencies. It also monitors all memory accesses.</p>
+<p>If a location is accessed by two different threads, but Helgrind
+cannot find any path through the happens-before graph from one access
+to the other, then it reports a race.</p>
+<p>There are a couple of caveats:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Helgrind doesn't check for a race in the case where
+ both accesses are reads. That would be silly, since concurrent
+ reads are harmless.</p></li>
+<li class="listitem"><p>Two accesses are considered to be ordered by the
+ happens-before dependency even through arbitrarily long chains of
+ synchronisation events. For example, if T1 accesses some location
+ L, and then <code class="function">pthread_cond_signals</code> T2, which later
+ <code class="function">pthread_cond_signals</code> T3, which then accesses L, then
+ a suitable happens-before dependency exists between the first and second
+ accesses, even though it involves two different inter-thread
+ synchronisation events.</p></li>
+</ul></div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="hg-manual.data-races.errmsgs"></a>7.4.3. Interpreting Race Error Messages</h3></div></div></div>
+<p>Helgrind's race detection algorithm collects a lot of
+information, and tries to present it in a helpful way when a race is
+detected. Here's an example:</p>
+<pre class="programlisting">
+Thread #2 was created
+ at 0x511C08E: clone (in /lib64/libc-2.8.so)
+ by 0x4E333A4: do_clone (in /lib64/libpthread-2.8.so)
+ by 0x4E33A30: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.8.so)
+ by 0x4C299D4: pthread_create@* (hg_intercepts.c:214)
+ by 0x4008F2: main (tc21_pthonce.c:86)
+
+Thread #3 was created
+ at 0x511C08E: clone (in /lib64/libc-2.8.so)
+ by 0x4E333A4: do_clone (in /lib64/libpthread-2.8.so)
+ by 0x4E33A30: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.8.so)
+ by 0x4C299D4: pthread_create@* (hg_intercepts.c:214)
+ by 0x4008F2: main (tc21_pthonce.c:86)
+
+Possible data race during read of size 4 at 0x601070 by thread #3
+Locks held: none
+ at 0x40087A: child (tc21_pthonce.c:74)
+ by 0x4C29AFF: mythread_wrapper (hg_intercepts.c:194)
+ by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
+ by 0x511C0CC: clone (in /lib64/libc-2.8.so)
+
+This conflicts with a previous write of size 4 by thread #2
+Locks held: none
+ at 0x400883: child (tc21_pthonce.c:74)
+ by 0x4C29AFF: mythread_wrapper (hg_intercepts.c:194)
+ by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
+ by 0x511C0CC: clone (in /lib64/libc-2.8.so)
+
+Location 0x601070 is 0 bytes inside local var "unprotected2"
+declared at tc21_pthonce.c:51, in frame #0 of thread 3
+</pre>
+<p>Helgrind first announces the creation points of any threads
+referenced in the error message. This is so it can speak concisely
+about threads without repeatedly printing their creation point call
+stacks. Each thread is only ever announced once, the first time it
+appears in any Helgrind error message.</p>
+<p>The main error message begins at the text
+"<code class="computeroutput">Possible data race during read</code>". At
+the start is information you would expect to see -- address and size
+of the racing access, whether a read or a write, and the call stack at
+the point it was detected.</p>
+<p>A second call stack is presented starting at the text
+"<code class="computeroutput">This conflicts with a previous
+write</code>". This shows a previous access which also
+accessed the stated address, and which is believed to be racing
+against the access in the first call stack. Note that this second
+call stack is limited to a maximum of 8 entries to limit the
+memory usage.</p>
+<p>Finally, Helgrind may attempt to give a description of the
+raced-on address in source level terms. In this example, it
+identifies it as a local variable, shows its name, declaration point,
+and in which frame (of the first call stack) it lives. Note that this
+information is only shown when <code class="varname">--read-var-info=yes</code>
+is specified on the command line. That's because reading the DWARF3
+debug information in enough detail to capture variable type and
+location information makes Helgrind much slower at startup, and also
+requires considerable amounts of memory, for large programs.
+</p>
+<p>Once you have your two call stacks, how do you find the root
+cause of the race?</p>
+<p>The first thing to do is examine the source locations referred
+to by each call stack. They should both show an access to the same
+location, or variable.</p>
+<p>Now figure out how that location should have been made
+thread-safe:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Perhaps the location was intended to be protected by
+ a mutex? If so, you need to lock and unlock the mutex at both
+ access points, even if one of the accesses is reported to be a read.
+ Did you perhaps forget the locking at one or other of the accesses?
+ To help you do this, Helgrind shows the set of locks held by each
+    thread at the time it accessed the raced-on location.</p></li>
+<li class="listitem">
+<p>Alternatively, perhaps you intended to use some
+ other scheme to make it safe, such as signalling on a condition
+ variable. In all such cases, try to find a synchronisation event
+ (or a chain thereof) which separates the earlier-observed access (as
+ shown in the second call stack) from the later-observed access (as
+ shown in the first call stack). In other words, try to find
+ evidence that the earlier access "happens-before" the later access.
+ See the previous subsection for an explanation of the happens-before
+ relation.</p>
+<p>
+ The fact that Helgrind is reporting a race means it did not observe
+ any happens-before relation between the two accesses. If
+ Helgrind is working correctly, it should also be the case that you
+  cannot find any such relation, even on detailed inspection
+ of the source code. Hopefully, though, your inspection of the code
+ will show where the missing synchronisation operation(s) should have
+ been.</p>
+</li>
+</ul></div>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.effective-use"></a>7.5. Hints and Tips for Effective Use of Helgrind</h2></div></div></div>
+<p>Helgrind can be very helpful in finding and resolving
+threading-related problems. Like all sophisticated tools, it is most
+effective when you understand how to play to its strengths.</p>
+<p>Helgrind will be less effective when you merely throw an
+existing threaded program at it and try to make sense of any reported
+errors. It will be more effective if you design threaded programs
+from the start in a way that helps Helgrind verify correctness. The
+same is true for finding memory errors with Memcheck, but applies more
+here, because thread checking is a harder problem. Consequently it is
+much easier to write a correct program for which Helgrind falsely
+reports (threading) errors than it is to write a correct program for
+which Memcheck falsely reports (memory) errors.</p>
+<p>With that in mind, here are some tips, listed most important first,
+for getting reliable results and avoiding false errors. The first two
+are critical. Any violations of them will swamp you with huge numbers
+of false data-race errors.</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem">
+<p>Make sure your application, and all the libraries it uses,
+ use the POSIX threading primitives. Helgrind needs to be able to
+ see all events pertaining to thread creation, exit, locking and
+ other synchronisation events. To do so it intercepts many POSIX
+ pthreads functions.</p>
+<p>Do not roll your own threading primitives (mutexes, etc)
+ from combinations of the Linux futex syscall, atomic counters, etc.
+ These throw Helgrind's internal what's-going-on models
+ way off course and will give bogus results.</p>
+<p>Also, do not reimplement existing POSIX abstractions using
+ other POSIX abstractions. For example, don't build your own
+ semaphore routines or reader-writer locks from POSIX mutexes and
+ condition variables. Instead use POSIX reader-writer locks and
+ semaphores directly, since Helgrind supports them directly.</p>
+<p>Helgrind directly supports the following POSIX threading
+ abstractions: mutexes, reader-writer locks, condition variables
+ (but see below), semaphores and barriers. Currently spinlocks
+ are not supported, although they could be in future.</p>
+<p>At the time of writing, the following popular Linux packages
+ are known to implement their own threading primitives:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Qt version 4.X. Qt 3.X is harmless in that it
+ only uses POSIX pthreads primitives. Unfortunately Qt 4.X
+ has its own implementation of mutexes (QMutex) and thread reaping.
+ Helgrind 3.4.x contains direct support
+ for Qt 4.X threading, which is experimental but is believed to
+ work fairly well. A side effect of supporting Qt 4 directly is
+ that Helgrind can be used to debug KDE4 applications. As this
+ is an experimental feature, we would particularly appreciate
+ feedback from folks who have used Helgrind to successfully debug
+ Qt 4 and/or KDE4 applications.</p></li>
+<li class="listitem">
+<p>Runtime support library for GNU OpenMP (part of
+ GCC), at least for GCC versions 4.2 and 4.3. The GNU OpenMP runtime
+ library (<code class="filename">libgomp.so</code>) constructs its own
+ synchronisation primitives using combinations of atomic memory
+    instructions and the futex syscall, which causes total chaos in
+    Helgrind since it cannot "see" those.</p>
+<p>Fortunately, this can be solved using a configuration-time
+ option (for GCC). Rebuild GCC from source, and configure using
+ <code class="varname">--disable-linux-futex</code>.
+ This makes libgomp.so use the standard
+ POSIX threading primitives instead. Note that this was tested
+ using GCC 4.2.3 and has not been re-tested using more recent GCC
+ versions. We would appreciate hearing about any successes or
+ failures with more recent versions.</p>
+</li>
+</ul></div>
+<p>If you must implement your own threading primitives, there
+ are a set of client request macros
+ in <code class="computeroutput">helgrind.h</code> to help you
+ describe your primitives to Helgrind. You should be able to
+ mark up mutexes, condition variables, etc, without difficulty.
+ </p>
+<p>
+ It is also possible to mark up the effects of thread-safe
+ reference counting using the
+ <code class="computeroutput">ANNOTATE_HAPPENS_BEFORE</code>,
+ <code class="computeroutput">ANNOTATE_HAPPENS_AFTER</code> and
+ <code class="computeroutput">ANNOTATE_HAPPENS_BEFORE_FORGET_ALL</code>,
+ macros. Thread-safe reference counting using an atomically
+ incremented/decremented refcount variable causes Helgrind
+ problems because a one-to-zero transition of the reference count
+ means the accessing thread has exclusive ownership of the
+ associated resource (normally, a C++ object) and can therefore
+ access it (normally, to run its destructor) without locking.
+ Helgrind doesn't understand this, and markup is essential to
+ avoid false positives.
+ </p>
+<p>
+ Here are recommended guidelines for marking up thread safe
+ reference counting in C++. You only need to mark up your
+ release methods -- the ones which decrement the reference count.
+ Given a class like this:
+ </p>
+<pre class="programlisting">
+class MyClass {
+ unsigned int mRefCount;
+
+ void Release ( void ) {
+ unsigned int newCount = atomic_decrement(&mRefCount);
+ if (newCount == 0) {
+ delete this;
+ }
+ }
+}
+</pre>
+<p>
+ the release method should be marked up as follows:
+ </p>
+<pre class="programlisting">
+ void Release ( void ) {
+ unsigned int newCount = atomic_decrement(&mRefCount);
+ if (newCount == 0) {
+ ANNOTATE_HAPPENS_AFTER(&mRefCount);
+ ANNOTATE_HAPPENS_BEFORE_FORGET_ALL(&mRefCount);
+ delete this;
+ } else {
+ ANNOTATE_HAPPENS_BEFORE(&mRefCount);
+ }
+ }
+</pre>
+<p>
+ There are a number of complex, mostly-theoretical objections to
+ this scheme. From a theoretical standpoint it appears to be
+ impossible to devise a markup scheme which is completely correct
+ in the sense of guaranteeing to remove all false races. The
+ proposed scheme however works well in practice.
+ </p>
+</li>
+<li class="listitem">
+<p>Avoid memory recycling.  If you can't avoid it, you must
+ tell Helgrind what is going on via the
+ <code class="function">VALGRIND_HG_CLEAN_MEMORY</code> client request (in
+ <code class="computeroutput">helgrind.h</code>).</p>
+<p>Helgrind is aware of standard heap memory allocation and
+ deallocation that occurs via
+ <code class="function">malloc</code>/<code class="function">free</code>/<code class="function">new</code>/<code class="function">delete</code>
+ and from entry and exit of stack frames. In particular, when memory is
+ deallocated via <code class="function">free</code>, <code class="function">delete</code>,
+ or function exit, Helgrind considers that memory clean, so when it is
+ eventually reallocated, its history is irrelevant.</p>
+<p>However, it is common practice to implement memory recycling
+ schemes. In these, memory to be freed is not handed to
+ <code class="function">free</code>/<code class="function">delete</code>, but instead put
+ into a pool of free buffers to be handed out again as required. The
+ problem is that Helgrind has no
+ way to know that such memory is logically no longer in use, and
+ its history is irrelevant. Hence you must make that explicit,
+ using the <code class="function">VALGRIND_HG_CLEAN_MEMORY</code> client request
+ to specify the relevant address ranges. It's easiest to put these
+ requests into the pool manager code, and use them either when memory is
+ returned to the pool, or is allocated from it.</p>
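+<p>For example, a pool manager's release function might be annotated
+   roughly as follows (a sketch: the <code class="computeroutput">Pool</code>
+   type and the helper function are illustrative, not part of any real
+   API):</p>
+<pre class="programlisting">
+#include "helgrind.h"   /* header location may vary with your install */
+
+void pool_release ( Pool *pool, void *buf, size_t nbytes )
+{
+   /* Tell Helgrind that the previous access history of this buffer is
+      no longer relevant, so its next user starts with a clean slate. */
+   VALGRIND_HG_CLEAN_MEMORY(buf, nbytes);
+   pool_add_to_free_list(pool, buf);
+}
+</pre>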
+</li>
+<li class="listitem">
+<p>Avoid POSIX condition variables. If you can, use POSIX
+ semaphores (<code class="function">sem_t</code>, <code class="function">sem_post</code>,
+ <code class="function">sem_wait</code>) to do inter-thread event signalling.
+ Semaphores with an initial value of zero are particularly useful for
+ this.</p>
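+<p>For instance, a simple one-shot "data is ready" event can be signalled
+   like this (a sketch; the helper functions are illustrative, the semaphore
+   starts at zero, and Helgrind tracks the
+   <code class="function">sem_post</code>/<code class="function">sem_wait</code>
+   pair exactly):</p>
+<pre class="programlisting">
+Signaller:                      Waiter:
+
+/* once, before the threads start: sem_init(&sem, 0, 0); */
+
+prepare_data();
+sem_post(&sem);                 sem_wait(&sem);
+                                consume_data();
+</pre>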
+<p>Helgrind only partially correctly handles POSIX condition
+ variables. This is because Helgrind can see inter-thread
+ dependencies between a <code class="function">pthread_cond_wait</code> call and a
+ <code class="function">pthread_cond_signal</code>/<code class="function">pthread_cond_broadcast</code>
+ call only if the waiting thread actually gets to the rendezvous first
+ (so that it actually calls
+ <code class="function">pthread_cond_wait</code>). It can't see dependencies
+ between the threads if the signaller arrives first. In the latter case,
+ POSIX guidelines imply that the associated boolean condition still
+ provides an inter-thread synchronisation event, but one which is
+ invisible to Helgrind.</p>
+<p>The result of Helgrind missing some inter-thread
+ synchronisation events is to cause it to report false positives.
+ </p>
+<p>The root cause of this synchronisation lossage is
+ particularly hard to understand, so an example is helpful. It was
+ discussed at length by Arndt Muehlenfeld ("Runtime Race Detection
+ in Multi-Threaded Programs", Dissertation, TU Graz, Austria). The
+ canonical POSIX-recommended usage scheme for condition variables
+ is as follows:</p>
+<pre class="programlisting">
+b is a Boolean condition, which is False most of the time
+cv is a condition variable
+mx is its associated mutex
+
+Signaller: Waiter:
+
+lock(mx) lock(mx)
+b = True while (b == False)
+signal(cv) wait(cv,mx)
+unlock(mx) unlock(mx)
+</pre>
+<p>Assume <code class="computeroutput">b</code> is False most of
+ the time. If the waiter arrives at the rendezvous first, it
+ enters its while-loop, waits for the signaller to signal, and
+ eventually proceeds. Helgrind sees the signal, notes the
+ dependency, and all is well.</p>
+<p>If the signaller arrives
+ first, <code class="computeroutput">b</code> is set to true, and the
+ signal disappears into nowhere. When the waiter later arrives, it
+ does not enter its while-loop and simply carries on. But even in
+ this case, the waiter code following the while-loop cannot execute
+ until the signaller sets <code class="computeroutput">b</code> to
+ True. Hence there is still the same inter-thread dependency, but
+ this time it is through an arbitrary in-memory condition, and
+ Helgrind cannot see it.</p>
+<p>By comparison, Helgrind's detection of inter-thread
+ dependencies caused by semaphore operations is believed to be
+ exactly correct.</p>
+<p>As far as I know, a solution to this problem that does not
+ require source-level annotation of condition-variable wait loops
+ is beyond the current state of the art.</p>
+</li>
+<li class="listitem"><p>Make sure you are using a supported Linux distribution. At
+ present, Helgrind only properly supports glibc-2.3 or later. This
+ in turn means we only support glibc's NPTL threading
+ implementation. The old LinuxThreads implementation is not
+ supported.</p></li>
+<li class="listitem"><p>If your application is using thread-local variables,
+    Helgrind might report false positive race conditions on these
+    variables, even though they are very probably race-free.  On Linux, you can
+ use <code class="option">--sim-hints=deactivate-pthread-stack-cache-via-hack</code>
+ to avoid such false positive error messages
+ (see <a class="xref" href="manual-core.html#opt.sim-hints">--sim-hints</a>).
+ </p></li>
+<li class="listitem">
+<p>Round up all finished threads using
+ <code class="function">pthread_join</code>. Avoid
+ detaching threads: don't create threads in the detached state, and
+ don't call <code class="function">pthread_detach</code> on existing threads.</p>
+<p>Using <code class="function">pthread_join</code> to round up finished
+ threads provides a clear synchronisation point that both Helgrind and
+ programmers can see. If you don't call
+ <code class="function">pthread_join</code> on a thread, Helgrind has no way to
+ know when it finishes, relative to any
+ significant synchronisation points for other threads in the program. So
+ it assumes that the thread lingers indefinitely and can potentially
+ interfere indefinitely with the memory state of the program. It
+ has every right to assume that -- after all, it might really be
+ the case that, for scheduling reasons, the exiting thread did run
+ very slowly in the last stages of its life.</p>
+</li>
+<li class="listitem">
+<p>Perform thread debugging (with Helgrind) and memory
+ debugging (with Memcheck) together.</p>
+<p>Helgrind tracks the state of memory in detail, and memory
+ management bugs in the application are liable to cause confusion.
+ In extreme cases, applications which do many invalid reads and
+ writes (particularly to freed memory) have been known to crash
+ Helgrind. So, ideally, you should make your application
+ Memcheck-clean before using Helgrind.</p>
+<p>It may be impossible to make your application Memcheck-clean
+ unless you first remove threading bugs. In particular, it may be
+ difficult to remove all reads and writes to freed memory in
+ multithreaded C++ destructor sequences at program termination.
+ So, ideally, you should make your application Helgrind-clean
+ before using Memcheck.</p>
+<p>Since this circularity is obviously unresolvable, at least
+ bear in mind that Memcheck and Helgrind are to some extent
+ complementary, and you may need to use them together.</p>
+</li>
+<li class="listitem">
+<p>POSIX requires that implementations of standard I/O
+ (<code class="function">printf</code>, <code class="function">fprintf</code>,
+ <code class="function">fwrite</code>, <code class="function">fread</code>, etc) are thread
+ safe. Unfortunately GNU libc implements this by using internal locking
+ primitives that Helgrind is unable to intercept. Consequently Helgrind
+ generates many false race reports when you use these functions.</p>
+<p>Helgrind attempts to hide these errors using the standard
+ Valgrind error-suppression mechanism. So, at least for simple
+ test cases, you don't see any. Nevertheless, some may slip
+ through. Just something to be aware of.</p>
+</li>
+<li class="listitem">
+<p>Helgrind's error checks do not work properly inside the
+ system threading library itself
+ (<code class="computeroutput">libpthread.so</code>), and it usually
+ observes large numbers of (false) errors in there. Valgrind's
+ suppression system then filters these out, so you should not see
+ them.</p>
+<p>If you see any race errors reported
+ where <code class="computeroutput">libpthread.so</code> or
+ <code class="computeroutput">ld.so</code> is the object associated
+ with the innermost stack frame, please file a bug report at
+ <a class="ulink" href="http://www.valgrind.org/" target="_top">http://www.valgrind.org/</a>.
+ </p>
+</li>
+</ol></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.options"></a>7.6. Helgrind Command-line Options</h2></div></div></div>
+<p>The following end-user options are available:</p>
+<div class="variablelist">
+<a name="hg.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.free-is-write"></a><span class="term">
+ <code class="option">--free-is-write=no|yes
+ [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled (not the default), Helgrind treats freeing of
+ heap memory as if the memory was written immediately before
+ the free. This exposes races where memory is referenced by
+ one thread, and freed by another, but there is no observable
+ synchronisation event to ensure that the reference happens
+ before the free.
+ </p>
+<p>This functionality is new in Valgrind 3.7.0, and is
+ regarded as experimental. It is not enabled by default
+ because its interaction with custom memory allocators is not
+ well understood at present. User feedback is welcomed.
+ </p>
+</dd>
+<dt>
+<a name="opt.track-lockorders"></a><span class="term">
+ <code class="option">--track-lockorders=no|yes
+ [default: yes] </code>
+ </span>
+</dt>
+<dd><p>When enabled (the default), Helgrind performs lock order
+ consistency checking. For some buggy programs, the large number
+ of lock order errors reported can become annoying, particularly
+ if you're only interested in race errors. You may therefore find
+ it helpful to disable lock order checking.</p></dd>
+<dt>
+<a name="opt.history-level"></a><span class="term">
+ <code class="option">--history-level=none|approx|full
+ [default: full] </code>
+ </span>
+</dt>
+<dd>
+<p><code class="option">--history-level=full</code> (the default) causes
+    Helgrind to collect enough information about "old" accesses that
+    it can produce two stack traces in a race report -- both the
+    stack trace for the current access, and the trace for the
+    older, conflicting access. To limit memory usage, stack traces
+    for "old" accesses are limited to a maximum of 8 entries, even if
+    the <code class="option">--num-callers</code> value is bigger.</p>
+<p>Collecting such information is expensive in both speed and
+ memory, particularly for programs that do many inter-thread
+ synchronisation events (locks, unlocks, etc). Without such
+ information, it is more difficult to track down the root
+ causes of races. Nonetheless, you may not need it in
+ situations where you just want to check for the presence or
+ absence of races, for example, when doing regression testing
+ of a previously race-free program.</p>
+<p><code class="option">--history-level=none</code> is the opposite
+ extreme. It causes Helgrind not to collect any information
+ about previous accesses. This can be dramatically faster
+ than <code class="option">--history-level=full</code>.</p>
+<p><code class="option">--history-level=approx</code> provides a
+ compromise between these two extremes. It causes Helgrind to
+ show a full trace for the later access, and approximate
+ information regarding the earlier access. This approximate
+ information consists of two stacks, and the earlier access is
+ guaranteed to have occurred somewhere between program points
+ denoted by the two stacks. This is not as useful as showing
+ the exact stack for the previous access
+ (as <code class="option">--history-level=full</code> does), but it is
+ better than nothing, and it is almost as fast as
+ <code class="option">--history-level=none</code>.</p>
+</dd>
+<dt>
+<a name="opt.conflict-cache-size"></a><span class="term">
+ <code class="option">--conflict-cache-size=N
+ [default: 1000000] </code>
+ </span>
+</dt>
+<dd>
+<p>This flag only has any effect
+ at <code class="option">--history-level=full</code>.</p>
+<p>Information about "old" conflicting accesses is stored in
+ a cache of limited size, with LRU-style management. This is
+ necessary because it isn't practical to store a stack trace
+ for every single memory access made by the program.
+ Historical information on not recently accessed locations is
+ periodically discarded, to free up space in the cache.</p>
+<p>This option controls the size of the cache, in terms of the
+ number of different memory addresses for which
+ conflicting access information is stored. If you find that
+ Helgrind is showing race errors with only one stack instead of
+ the expected two stacks, try increasing this value.</p>
+<p>The minimum value is 10,000 and the maximum is 30,000,000
+ (thirty times the default value). Increasing the value by 1
+ increases Helgrind's memory requirement by very roughly 100
+ bytes, so the maximum value will easily eat up three extra
+ gigabytes or so of memory.</p>
+</dd>
+<dt>
+<a name="opt.check-stack-refs"></a><span class="term">
+ <code class="option">--check-stack-refs=no|yes
+ [default: yes] </code>
+ </span>
+</dt>
+<dd><p>
+ By default Helgrind checks all data memory accesses made by your
+ program. This flag enables you to skip checking for accesses
+ to thread stacks (local variables). This can improve
+ performance, but comes at the cost of missing races on
+ stack-allocated data.
+ </p></dd>
+<dt>
+<a name="opt.ignore-thread-creation"></a><span class="term">
+ <code class="option">--ignore-thread-creation=<yes|no>
+ [default: no]</code>
+ </span>
+</dt>
+<dd>
+<p>
+ Controls whether all activities during thread creation should be
+    ignored. It is enabled by default only on Solaris.
+ Solaris provides higher throughput, parallelism and scalability than
+ other operating systems, at the cost of more fine-grained locking
+ activity. This means for example that when a thread is created under
+ glibc, just one big lock is used for all thread setup. Solaris libc
+ uses several fine-grained locks and the creator thread resumes its
+ activities as soon as possible, leaving for example stack and TLS setup
+ sequence to the created thread.
+ This situation confuses Helgrind as it assumes there is some false
+ ordering in place between creator and created thread; and therefore many
+ types of race conditions in the application would not be reported.
+ To prevent such false ordering, this command line option is set to
+ <code class="computeroutput">yes</code> by default on Solaris.
+ All activity (loads, stores, client requests) is therefore ignored
+ during:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ pthread_create() call in the creator thread
+ </p></li>
+<li class="listitem"><p>
+ thread creation phase (stack and TLS setup) in the created thread
+ </p></li>
+</ul></div>
+<p>
+  Also, new memory allocated during thread creation is untracked,
+  that is, race reporting is suppressed there. DRD does the same thing
+  implicitly. This is necessary because the Solaris libc caches many
+  objects and reuses them for different threads, and that confuses
+  Helgrind.</p>
+</dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.monitor-commands"></a>7.7. Helgrind Monitor Commands</h2></div></div></div>
+<p>The Helgrind tool provides monitor commands handled by Valgrind's
+built-in gdbserver (see <a class="xref" href="manual-core-adv.html#manual-core-adv.gdbserver-commandhandling" title="3.2.5. Monitor command handling by the Valgrind gdbserver">Monitor command handling by the Valgrind gdbserver</a>).
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="varname">info locks [lock_addr]</code> shows the list of locks
+ and their status. If <code class="varname">lock_addr</code> is given, only shows
+ the lock located at this address. </p>
+<p>
+ In the following example, helgrind knows about one lock. This
+ lock is located at the guest address <code class="varname">ga
+ 0x8049a20</code>. The lock kind is <code class="varname">rdwr</code>
+ indicating a reader-writer lock. Other possible lock kinds
+ are <code class="varname">nonRec</code> (simple mutex, non recursive)
+ and <code class="varname">mbRec</code> (simple mutex, possibly recursive).
+  The lock kind is then followed by the list of threads holding the
+  lock. In the example below, <code class="varname">R1:thread #6 tid 3</code>
+  indicates that the helgrind thread #6 has acquired the lock in read
+  mode (once, as the counter following the letter R is one). The
+  helgrind thread number is incremented for each started thread. The
+  presence of 'tid 3' indicates that thread #6 has not exited yet and
+  corresponds to valgrind tid 3. If a thread has terminated, then
+  this is indicated with 'tid (exited)'.
+ </p>
+<pre class="programlisting">
+(gdb) monitor info locks
+Lock ga 0x8049a20 {
+ kind rdwr
+ { R1:thread #6 tid 3 }
+}
+(gdb)
+</pre>
+<p> If you give the option <code class="varname">--read-var-info=yes</code>,
+ then more information will be provided about the lock location, such as
+ the global variable or the heap block that contains the lock:
+ </p>
+<pre class="programlisting">
+Lock ga 0x8049a20 {
+ Location 0x8049a20 is 0 bytes inside global var "s_rwlock"
+ declared at rwlock_race.c:17
+ kind rdwr
+ { R1:thread #3 tid 3 }
+}
+</pre>
+</li>
+<li class="listitem">
+<p><code class="varname">accesshistory <addr> [<len>]</code>
+ shows the access history recorded for <len> (default 1) bytes
+ starting at <addr>. For each recorded access that overlaps
+ with the given range, <code class="varname">accesshistory</code> shows the operation
+ type (read or write), the address and size read or written, the helgrind
+ thread nr/valgrind tid number that did the operation and the locks held
+ by the thread at the time of the operation.
+ The oldest access is shown first, the most recent access is shown last.
+ </p>
+<p>
+  In the following example, we first see a recorded write of 4 bytes by
+  thread #7 that modified the given 2-byte range.
+  The second recorded write is the most recent one: thread #9
+  modified the same 2 bytes as part of a 4-byte write operation.
+  The list of locks held by each thread at the time of the write
+  operation is also shown.
+ </p>
+<pre class="programlisting">
+(gdb) monitor accesshistory 0x8049D8A 2
+write of size 4 at 0x8049D88 by thread #7 tid 3
+==6319== Locks held: 2, at address 0x8049D8C (and 1 that can't be shown)
+==6319== at 0x804865F: child_fn1 (locked_vs_unlocked2.c:29)
+==6319== by 0x400AE61: mythread_wrapper (hg_intercepts.c:234)
+==6319== by 0x39B924: start_thread (pthread_create.c:297)
+==6319== by 0x2F107D: clone (clone.S:130)
+
+write of size 4 at 0x8049D88 by thread #9 tid 2
+==6319== Locks held: 2, at addresses 0x8049DA4 0x8049DD4
+==6319== at 0x804877B: child_fn2 (locked_vs_unlocked2.c:45)
+==6319== by 0x400AE61: mythread_wrapper (hg_intercepts.c:234)
+==6319== by 0x39B924: start_thread (pthread_create.c:297)
+==6319== by 0x2F107D: clone (clone.S:130)
+
+</pre>
+</li>
+</ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.client-requests"></a>7.8. Helgrind Client Requests</h2></div></div></div>
+<p>The following client requests are defined in
+<code class="filename">helgrind.h</code>. See that file for exact details of their
+arguments.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="function">VALGRIND_HG_CLEAN_MEMORY</code></p>
+<p>This makes Helgrind forget everything it knows about a
+ specified memory range. This is particularly useful for memory
+ allocators that wish to recycle memory.</p>
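+<p>A minimal sketch of how a recycling allocator might use this request
+   (the pool structure and function names are hypothetical, not part of
+   any real API):</p>
+<pre class="programlisting">
+#include &lt;stddef.h&gt;
+#include &lt;stdlib.h&gt;
+#include &lt;valgrind/helgrind.h&gt;
+
+#define POOL_BLOCK_SIZE 256           /* fixed payload size, illustrative */
+
+struct pool_block {
+   struct pool_block *next;
+   char               payload[POOL_BLOCK_SIZE];
+};
+
+static struct pool_block *free_list;  /* assumed protected by the caller */
+
+void *pool_get(void)
+{
+   struct pool_block *b = free_list;
+   if (b != NULL)
+      free_list = b->next;            /* reuse a recycled block */
+   else
+      b = malloc(sizeof *b);
+   return b ? b->payload : NULL;
+}
+
+void pool_put(void *p)
+{
+   struct pool_block *b = (struct pool_block *)
+      ((char *)p - offsetof(struct pool_block, payload));
+   /* Forget the block's access history before recycling it, so that
+      accesses by the previous owner are not reported as racing with
+      accesses by the next owner. */
+   VALGRIND_HG_CLEAN_MEMORY(b->payload, POOL_BLOCK_SIZE);
+   b->next   = free_list;
+   free_list = b;
+}
+</pre>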
+</li>
+<li class="listitem"><p><code class="function">ANNOTATE_HAPPENS_BEFORE</code></p></li>
+<li class="listitem"><p><code class="function">ANNOTATE_HAPPENS_AFTER</code></p></li>
+<li class="listitem"><p><code class="function">ANNOTATE_NEW_MEMORY</code></p></li>
+<li class="listitem"><p><code class="function">ANNOTATE_RWLOCK_CREATE</code></p></li>
+<li class="listitem"><p><code class="function">ANNOTATE_RWLOCK_DESTROY</code></p></li>
+<li class="listitem"><p><code class="function">ANNOTATE_RWLOCK_ACQUIRED</code></p></li>
+<li class="listitem">
+<p><code class="function">ANNOTATE_RWLOCK_RELEASED</code></p>
+<p>These are used to describe to Helgrind the behaviour of
+ custom (non-POSIX) synchronisation primitives, which it otherwise
+ has no way to understand. See comments
+ in <code class="filename">helgrind.h</code> for further
+ documentation.</p>
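+<p>As an illustration, a home-made exclusive spinlock (all names
+   hypothetical) might be annotated as follows. Helgrind cannot see
+   through the raw atomics, so each lifecycle event is described
+   explicitly, with the second argument set to 1 because the lock is
+   only ever held in writer (exclusive) mode:</p>
+<pre class="programlisting">
+#include &lt;stdatomic.h&gt;
+#include &lt;valgrind/helgrind.h&gt;
+
+typedef struct { atomic_int held; } myspin_t;
+
+void myspin_init(myspin_t *s)
+{
+   atomic_store(&amp;s->held, 0);
+   ANNOTATE_RWLOCK_CREATE(s);
+}
+
+void myspin_lock(myspin_t *s)
+{
+   int expected = 0;
+   while (!atomic_compare_exchange_weak(&amp;s->held, &amp;expected, 1))
+      expected = 0;                    /* spin until the CAS succeeds */
+   ANNOTATE_RWLOCK_ACQUIRED(s, 1);     /* annotate after acquiring; 1 = writer */
+}
+
+void myspin_unlock(myspin_t *s)
+{
+   ANNOTATE_RWLOCK_RELEASED(s, 1);     /* annotate before releasing */
+   atomic_store(&amp;s->held, 0);
+}
+
+void myspin_destroy(myspin_t *s)
+{
+   ANNOTATE_RWLOCK_DESTROY(s);
+}
+</pre>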
+</li>
+</ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="hg-manual.todolist"></a>7.9. A To-Do List for Helgrind</h2></div></div></div>
+<p>The following is a list of loose ends which should be tidied up
+some time.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>For lock order errors, print the complete lock
+    cycle, rather than only doing so for size-2 cycles, as at
+ present.</p></li>
+<li class="listitem"><p>The conflicting access mechanism sometimes
+    mysteriously fails to show the conflicting access's stack, even
+ when provided with unbounded storage for conflicting access info.
+ This should be investigated.</p></li>
+<li class="listitem"><p>Document races caused by GCC's thread-unsafe code
+ generation for speculative stores. In the interim see
+ <code class="computeroutput">http://gcc.gnu.org/ml/gcc/2007-10/msg00266.html
+ </code>
+ and <code class="computeroutput">http://lkml.org/lkml/2007/10/24/673</code>.
+ </p></li>
+<li class="listitem"><p>Don't update the lock-order graph, and don't check
+ for errors, when a "try"-style lock operation happens (e.g.
+ <code class="function">pthread_mutex_trylock</code>). Such calls do not add any real
+ restrictions to the locking order, since they can always fail to
+ acquire the lock, resulting in the caller going off and doing Plan
+ B (presumably it will have a Plan B). Doing such checks could
+ generate false lock-order errors and confuse users.</p></li>
+<li class="listitem"><p> Performance can be very poor. Slowdowns on the
+ order of 100:1 are not unusual. There is limited scope for
+ performance improvements.
+ </p></li>
+</ul></div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="cl-manual.html"><< 6. Callgrind: a call-graph generating cache and branch prediction profiler</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="drd-manual.html">8. DRD: a thread error detector >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/images/home.png b/docs/html/images/home.png
new file mode 100644
index 0000000..1ccfb7b
--- /dev/null
+++ b/docs/html/images/home.png
Binary files differ
diff --git a/docs/html/images/next.png b/docs/html/images/next.png
new file mode 100644
index 0000000..6d0c11a
--- /dev/null
+++ b/docs/html/images/next.png
Binary files differ
diff --git a/docs/html/images/prev.png b/docs/html/images/prev.png
new file mode 100644
index 0000000..9fdf29e
--- /dev/null
+++ b/docs/html/images/prev.png
Binary files differ
diff --git a/docs/html/images/up.png b/docs/html/images/up.png
new file mode 100644
index 0000000..a75f0b3
--- /dev/null
+++ b/docs/html/images/up.png
Binary files differ
diff --git a/docs/html/index.html b/docs/html/index.html
new file mode 100644
index 0000000..8e27063
--- /dev/null
+++ b/docs/html/index.html
@@ -0,0 +1,64 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>Valgrind Documentation</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="next" href="QuickStart.html" title="The Valgrind Quick Start Guide">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div></div>
+<div lang="en" class="set">
+<div class="titlepage">
+<div>
+<div align="center"><h1 class="title">
+<a name="set-index"></a>Valgrind Documentation</h1></div>
+<div align="center"><p class="releaseinfo">Release 3.12.0 20 October 2016</p></div>
+<div align="center"><p class="copyright">Copyright © 2000-2016
+ <a class="link" href="dist.authors.html" title="1. AUTHORS">AUTHORS</a>
+ </p></div>
+<div align="center"><div class="legalnotice">
+<a name="idm140639127546768"></a><p>Permission is granted to copy, distribute and/or modify
+ this document under the terms of the GNU Free Documentation
+ License, Version 1.2 or any later version published by the
+ Free Software Foundation; with no Invariant Sections, with no
+ Front-Cover Texts, and with no Back-Cover Texts. A copy of
+ the license is included in the section entitled
+ <a class="xref" href="license.gfdl.html" title="2. The GNU Free Documentation License">The GNU Free Documentation License</a>.
+ </p>
+<p>This is the top level of Valgrind's documentation tree.
+ The documentation is contained in six logically separate
+ documents, as listed in the following Table of Contents. To
+ get started quickly, read the Valgrind Quick Start Guide. For
+ full documentation on Valgrind, read the Valgrind User Manual.
+ </p>
+</div></div>
+</div>
+<hr>
+</div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="book"><a href="QuickStart.html">The Valgrind Quick Start Guide</a></span></dt>
+<dt><span class="book"><a href="manual.html">Valgrind User Manual</a></span></dt>
+<dt><span class="book"><a href="FAQ.html">Valgrind FAQ</a></span></dt>
+<dt><span class="book"><a href="tech-docs.html">Valgrind Technical Documentation</a></span></dt>
+<dt><span class="book"><a href="dist.html">Valgrind Distribution Documents</a></span></dt>
+<dt><span class="book"><a href="licenses.html">GNU Licenses</a></span></dt>
+</dl>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left"> </td>
+<td width="20%" align="center"> </td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="QuickStart.html">The Valgrind Quick Start Guide >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"> </td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/license.gfdl.html b/docs/html/license.gfdl.html
new file mode 100644
index 0000000..b4f9e11
--- /dev/null
+++ b/docs/html/license.gfdl.html
@@ -0,0 +1,435 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>2. The GNU Free Documentation License</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="licenses.html" title="GNU Licenses">
+<link rel="prev" href="license.gpl.html" title="1. The GNU General Public License">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="license.gpl.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="licenses.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">GNU Licenses</th>
+<td width="22px" align="center" valign="middle"></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="license.gfdl"></a>2. The GNU Free Documentation License</h1></div></div></div>
+<div class="literallayout"><p><br>
+ GNU Free Documentation License<br>
+ Version 1.2, November 2002<br>
+<br>
+<br>
+ Copyright (C) 2000,2001,2002 Free Software Foundation, Inc.<br>
+ 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA<br>
+ Everyone is permitted to copy and distribute verbatim copies<br>
+ of this license document, but changing it is not allowed.<br>
+<br>
+<br>
+0. PREAMBLE<br>
+<br>
+The purpose of this License is to make a manual, textbook, or other<br>
+functional and useful document "free" in the sense of freedom: to<br>
+assure everyone the effective freedom to copy and redistribute it,<br>
+with or without modifying it, either commercially or noncommercially.<br>
+Secondarily, this License preserves for the author and publisher a way<br>
+to get credit for their work, while not being considered responsible<br>
+for modifications made by others.<br>
+<br>
+This License is a kind of "copyleft", which means that derivative<br>
+works of the document must themselves be free in the same sense. It<br>
+complements the GNU General Public License, which is a copyleft<br>
+license designed for free software.<br>
+<br>
+We have designed this License in order to use it for manuals for free<br>
+software, because free software needs free documentation: a free<br>
+program should come with manuals providing the same freedoms that the<br>
+software does. But this License is not limited to software manuals;<br>
+it can be used for any textual work, regardless of subject matter or<br>
+whether it is published as a printed book. We recommend this License<br>
+principally for works whose purpose is instruction or reference.<br>
+<br>
+<br>
+1. APPLICABILITY AND DEFINITIONS<br>
+<br>
+This License applies to any manual or other work, in any medium, that<br>
+contains a notice placed by the copyright holder saying it can be<br>
+distributed under the terms of this License. Such a notice grants a<br>
+world-wide, royalty-free license, unlimited in duration, to use that<br>
+work under the conditions stated herein. The "Document", below,<br>
+refers to any such manual or work. Any member of the public is a<br>
+licensee, and is addressed as "you". You accept the license if you<br>
+copy, modify or distribute the work in a way requiring permission<br>
+under copyright law.<br>
+<br>
+A "Modified Version" of the Document means any work containing the<br>
+Document or a portion of it, either copied verbatim, or with<br>
+modifications and/or translated into another language.<br>
+<br>
+A "Secondary Section" is a named appendix or a front-matter section of<br>
+the Document that deals exclusively with the relationship of the<br>
+publishers or authors of the Document to the Document's overall subject<br>
+(or to related matters) and contains nothing that could fall directly<br>
+within that overall subject. (Thus, if the Document is in part a<br>
+textbook of mathematics, a Secondary Section may not explain any<br>
+mathematics.) The relationship could be a matter of historical<br>
+connection with the subject or with related matters, or of legal,<br>
+commercial, philosophical, ethical or political position regarding<br>
+them.<br>
+<br>
+The "Invariant Sections" are certain Secondary Sections whose titles<br>
+are designated, as being those of Invariant Sections, in the notice<br>
+that says that the Document is released under this License. If a<br>
+section does not fit the above definition of Secondary then it is not<br>
+allowed to be designated as Invariant. The Document may contain zero<br>
+Invariant Sections. If the Document does not identify any Invariant<br>
+Sections then there are none.<br>
+<br>
+The "Cover Texts" are certain short passages of text that are listed,<br>
+as Front-Cover Texts or Back-Cover Texts, in the notice that says that<br>
+the Document is released under this License. A Front-Cover Text may<br>
+be at most 5 words, and a Back-Cover Text may be at most 25 words.<br>
+<br>
+A "Transparent" copy of the Document means a machine-readable copy,<br>
+represented in a format whose specification is available to the<br>
+general public, that is suitable for revising the document<br>
+straightforwardly with generic text editors or (for images composed of<br>
+pixels) generic paint programs or (for drawings) some widely available<br>
+drawing editor, and that is suitable for input to text formatters or<br>
+for automatic translation to a variety of formats suitable for input<br>
+to text formatters. A copy made in an otherwise Transparent file<br>
+format whose markup, or absence of markup, has been arranged to thwart<br>
+or discourage subsequent modification by readers is not Transparent.<br>
+An image format is not Transparent if used for any substantial amount<br>
+of text. A copy that is not "Transparent" is called "Opaque".<br>
+<br>
+Examples of suitable formats for Transparent copies include plain<br>
+ASCII without markup, Texinfo input format, LaTeX input format, SGML<br>
+or XML using a publicly available DTD, and standard-conforming simple<br>
+HTML, PostScript or PDF designed for human modification. Examples of<br>
+transparent image formats include PNG, XCF and JPG. Opaque formats<br>
+include proprietary formats that can be read and edited only by<br>
+proprietary word processors, SGML or XML for which the DTD and/or<br>
+processing tools are not generally available, and the<br>
+machine-generated HTML, PostScript or PDF produced by some word<br>
+processors for output purposes only.<br>
+<br>
+The "Title Page" means, for a printed book, the title page itself,<br>
+plus such following pages as are needed to hold, legibly, the material<br>
+this License requires to appear in the title page. For works in<br>
+formats which do not have any title page as such, "Title Page" means<br>
+the text near the most prominent appearance of the work's title,<br>
+preceding the beginning of the body of the text.<br>
+<br>
+A section "Entitled XYZ" means a named subunit of the Document whose<br>
+title either is precisely XYZ or contains XYZ in parentheses following<br>
+text that translates XYZ in another language. (Here XYZ stands for a<br>
+specific section name mentioned below, such as "Acknowledgements",<br>
+"Dedications", "Endorsements", or "History".) To "Preserve the Title"<br>
+of such a section when you modify the Document means that it remains a<br>
+section "Entitled XYZ" according to this definition.<br>
+<br>
+The Document may include Warranty Disclaimers next to the notice which<br>
+states that this License applies to the Document. These Warranty<br>
+Disclaimers are considered to be included by reference in this<br>
+License, but only as regards disclaiming warranties: any other<br>
+implication that these Warranty Disclaimers may have is void and has<br>
+no effect on the meaning of this License.<br>
+<br>
+<br>
+2. VERBATIM COPYING<br>
+<br>
+You may copy and distribute the Document in any medium, either<br>
+commercially or noncommercially, provided that this License, the<br>
+copyright notices, and the license notice saying this License applies<br>
+to the Document are reproduced in all copies, and that you add no other<br>
+conditions whatsoever to those of this License. You may not use<br>
+technical measures to obstruct or control the reading or further<br>
+copying of the copies you make or distribute. However, you may accept<br>
+compensation in exchange for copies. If you distribute a large enough<br>
+number of copies you must also follow the conditions in section 3.<br>
+<br>
+You may also lend copies, under the same conditions stated above, and<br>
+you may publicly display copies.<br>
+<br>
+<br>
+3. COPYING IN QUANTITY<br>
+<br>
+If you publish printed copies (or copies in media that commonly have<br>
+printed covers) of the Document, numbering more than 100, and the<br>
+Document's license notice requires Cover Texts, you must enclose the<br>
+copies in covers that carry, clearly and legibly, all these Cover<br>
+Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on<br>
+the back cover. Both covers must also clearly and legibly identify<br>
+you as the publisher of these copies. The front cover must present<br>
+the full title with all words of the title equally prominent and<br>
+visible. You may add other material on the covers in addition.<br>
+Copying with changes limited to the covers, as long as they preserve<br>
+the title of the Document and satisfy these conditions, can be treated<br>
+as verbatim copying in other respects.<br>
+<br>
+If the required texts for either cover are too voluminous to fit<br>
+legibly, you should put the first ones listed (as many as fit<br>
+reasonably) on the actual cover, and continue the rest onto adjacent<br>
+pages.<br>
+<br>
+If you publish or distribute Opaque copies of the Document numbering<br>
+more than 100, you must either include a machine-readable Transparent<br>
+copy along with each Opaque copy, or state in or with each Opaque copy<br>
+a computer-network location from which the general network-using<br>
+public has access to download using public-standard network protocols<br>
+a complete Transparent copy of the Document, free of added material.<br>
+If you use the latter option, you must take reasonably prudent steps,<br>
+when you begin distribution of Opaque copies in quantity, to ensure<br>
+that this Transparent copy will remain thus accessible at the stated<br>
+location until at least one year after the last time you distribute an<br>
+Opaque copy (directly or through your agents or retailers) of that<br>
+edition to the public.<br>
+<br>
+It is requested, but not required, that you contact the authors of the<br>
+Document well before redistributing any large number of copies, to give<br>
+them a chance to provide you with an updated version of the Document.<br>
+<br>
+<br>
+4. MODIFICATIONS<br>
+<br>
+You may copy and distribute a Modified Version of the Document under<br>
+the conditions of sections 2 and 3 above, provided that you release<br>
+the Modified Version under precisely this License, with the Modified<br>
+Version filling the role of the Document, thus licensing distribution<br>
+and modification of the Modified Version to whoever possesses a copy<br>
+of it. In addition, you must do these things in the Modified Version:<br>
+<br>
+A. Use in the Title Page (and on the covers, if any) a title distinct<br>
+ from that of the Document, and from those of previous versions<br>
+ (which should, if there were any, be listed in the History section<br>
+ of the Document). You may use the same title as a previous version<br>
+ if the original publisher of that version gives permission.<br>
+B. List on the Title Page, as authors, one or more persons or entities<br>
+ responsible for authorship of the modifications in the Modified<br>
+ Version, together with at least five of the principal authors of the<br>
+ Document (all of its principal authors, if it has fewer than five),<br>
+ unless they release you from this requirement.<br>
+C. State on the Title page the name of the publisher of the<br>
+ Modified Version, as the publisher.<br>
+D. Preserve all the copyright notices of the Document.<br>
+E. Add an appropriate copyright notice for your modifications<br>
+ adjacent to the other copyright notices.<br>
+F. Include, immediately after the copyright notices, a license notice<br>
+ giving the public permission to use the Modified Version under the<br>
+ terms of this License, in the form shown in the Addendum below.<br>
+G. Preserve in that license notice the full lists of Invariant Sections<br>
+ and required Cover Texts given in the Document's license notice.<br>
+H. Include an unaltered copy of this License.<br>
+I. Preserve the section Entitled "History", Preserve its Title, and add<br>
+ to it an item stating at least the title, year, new authors, and<br>
+ publisher of the Modified Version as given on the Title Page. If<br>
+ there is no section Entitled "History" in the Document, create one<br>
+ stating the title, year, authors, and publisher of the Document as<br>
+ given on its Title Page, then add an item describing the Modified<br>
+ Version as stated in the previous sentence.<br>
+J. Preserve the network location, if any, given in the Document for<br>
+ public access to a Transparent copy of the Document, and likewise<br>
+ the network locations given in the Document for previous versions<br>
+ it was based on. These may be placed in the "History" section.<br>
+ You may omit a network location for a work that was published at<br>
+ least four years before the Document itself, or if the original<br>
+ publisher of the version it refers to gives permission.<br>
+K. For any section Entitled "Acknowledgements" or "Dedications",<br>
+ Preserve the Title of the section, and preserve in the section all<br>
+ the substance and tone of each of the contributor acknowledgements<br>
+ and/or dedications given therein.<br>
+L. Preserve all the Invariant Sections of the Document,<br>
+ unaltered in their text and in their titles. Section numbers<br>
+ or the equivalent are not considered part of the section titles.<br>
+M. Delete any section Entitled "Endorsements". Such a section<br>
+ may not be included in the Modified Version.<br>
+N. Do not retitle any existing section to be Entitled "Endorsements"<br>
+ or to conflict in title with any Invariant Section.<br>
+O. Preserve any Warranty Disclaimers.<br>
+<br>
+If the Modified Version includes new front-matter sections or<br>
+appendices that qualify as Secondary Sections and contain no material<br>
+copied from the Document, you may at your option designate some or all<br>
+of these sections as invariant. To do this, add their titles to the<br>
+list of Invariant Sections in the Modified Version's license notice.<br>
+These titles must be distinct from any other section titles.<br>
+<br>
+You may add a section Entitled "Endorsements", provided it contains<br>
+nothing but endorsements of your Modified Version by various<br>
+parties--for example, statements of peer review or that the text has<br>
+been approved by an organization as the authoritative definition of a<br>
+standard.<br>
+<br>
+You may add a passage of up to five words as a Front-Cover Text, and a<br>
+passage of up to 25 words as a Back-Cover Text, to the end of the list<br>
+of Cover Texts in the Modified Version. Only one passage of<br>
+Front-Cover Text and one of Back-Cover Text may be added by (or<br>
+through arrangements made by) any one entity. If the Document already<br>
+includes a cover text for the same cover, previously added by you or<br>
+by arrangement made by the same entity you are acting on behalf of,<br>
+you may not add another; but you may replace the old one, on explicit<br>
+permission from the previous publisher that added the old one.<br>
+<br>
+The author(s) and publisher(s) of the Document do not by this License<br>
+give permission to use their names for publicity for or to assert or<br>
+imply endorsement of any Modified Version.<br>
+<br>
+<br>
+5. COMBINING DOCUMENTS<br>
+<br>
+You may combine the Document with other documents released under this<br>
+License, under the terms defined in section 4 above for modified<br>
+versions, provided that you include in the combination all of the<br>
+Invariant Sections of all of the original documents, unmodified, and<br>
+list them all as Invariant Sections of your combined work in its<br>
+license notice, and that you preserve all their Warranty Disclaimers.<br>
+<br>
+The combined work need only contain one copy of this License, and<br>
+multiple identical Invariant Sections may be replaced with a single<br>
+copy. If there are multiple Invariant Sections with the same name but<br>
+different contents, make the title of each such section unique by<br>
+adding at the end of it, in parentheses, the name of the original<br>
+author or publisher of that section if known, or else a unique number.<br>
+Make the same adjustment to the section titles in the list of<br>
+Invariant Sections in the license notice of the combined work.<br>
+<br>
+In the combination, you must combine any sections Entitled "History"<br>
+in the various original documents, forming one section Entitled<br>
+"History"; likewise combine any sections Entitled "Acknowledgements",<br>
+and any sections Entitled "Dedications". You must delete all sections<br>
+Entitled "Endorsements".<br>
+<br>
+<br>
+6. COLLECTIONS OF DOCUMENTS<br>
+<br>
+You may make a collection consisting of the Document and other documents<br>
+released under this License, and replace the individual copies of this<br>
+License in the various documents with a single copy that is included in<br>
+the collection, provided that you follow the rules of this License for<br>
+verbatim copying of each of the documents in all other respects.<br>
+<br>
+You may extract a single document from such a collection, and distribute<br>
+it individually under this License, provided you insert a copy of this<br>
+License into the extracted document, and follow this License in all<br>
+other respects regarding verbatim copying of that document.<br>
+<br>
+<br>
+7. AGGREGATION WITH INDEPENDENT WORKS<br>
+<br>
+A compilation of the Document or its derivatives with other separate<br>
+and independent documents or works, in or on a volume of a storage or<br>
+distribution medium, is called an "aggregate" if the copyright<br>
+resulting from the compilation is not used to limit the legal rights<br>
+of the compilation's users beyond what the individual works permit.<br>
+When the Document is included in an aggregate, this License does not<br>
+apply to the other works in the aggregate which are not themselves<br>
+derivative works of the Document.<br>
+<br>
+If the Cover Text requirement of section 3 is applicable to these<br>
+copies of the Document, then if the Document is less than one half of<br>
+the entire aggregate, the Document's Cover Texts may be placed on<br>
+covers that bracket the Document within the aggregate, or the<br>
+electronic equivalent of covers if the Document is in electronic form.<br>
+Otherwise they must appear on printed covers that bracket the whole<br>
+aggregate.<br>
+<br>
+<br>
+8. TRANSLATION<br>
+<br>
+Translation is considered a kind of modification, so you may<br>
+distribute translations of the Document under the terms of section 4.<br>
+Replacing Invariant Sections with translations requires special<br>
+permission from their copyright holders, but you may include<br>
+translations of some or all Invariant Sections in addition to the<br>
+original versions of these Invariant Sections. You may include a<br>
+translation of this License, and all the license notices in the<br>
+Document, and any Warranty Disclaimers, provided that you also include<br>
+the original English version of this License and the original versions<br>
+of those notices and disclaimers. In case of a disagreement between<br>
+the translation and the original version of this License or a notice<br>
+or disclaimer, the original version will prevail.<br>
+<br>
+If a section in the Document is Entitled "Acknowledgements",<br>
+"Dedications", or "History", the requirement (section 4) to Preserve<br>
+its Title (section 1) will typically require changing the actual<br>
+title.<br>
+<br>
+<br>
+9. TERMINATION<br>
+<br>
+You may not copy, modify, sublicense, or distribute the Document except<br>
+as expressly provided for under this License. Any other attempt to<br>
+copy, modify, sublicense or distribute the Document is void, and will<br>
+automatically terminate your rights under this License. However,<br>
+parties who have received copies, or rights, from you under this<br>
+License will not have their licenses terminated so long as such<br>
+parties remain in full compliance.<br>
+<br>
+<br>
+10. FUTURE REVISIONS OF THIS LICENSE<br>
+<br>
+The Free Software Foundation may publish new, revised versions<br>
+of the GNU Free Documentation License from time to time. Such new<br>
+versions will be similar in spirit to the present version, but may<br>
+differ in detail to address new problems or concerns. See<br>
+http://www.gnu.org/copyleft/.<br>
+<br>
+Each version of the License is given a distinguishing version number.<br>
+If the Document specifies that a particular numbered version of this<br>
+License "or any later version" applies to it, you have the option of<br>
+following the terms and conditions either of that specified version or<br>
+of any later version that has been published (not as a draft) by the<br>
+Free Software Foundation. If the Document does not specify a version<br>
+number of this License, you may choose any version ever published (not<br>
+as a draft) by the Free Software Foundation.<br>
+<br>
+<br>
+ADDENDUM: How to use this License for your documents<br>
+<br>
+To use this License in a document you have written, include a copy of<br>
+the License in the document and put the following copyright and<br>
+license notices just after the title page:<br>
+<br>
+ Copyright (c) YEAR YOUR NAME.<br>
+ Permission is granted to copy, distribute and/or modify this document<br>
+ under the terms of the GNU Free Documentation License, Version 1.2<br>
+ or any later version published by the Free Software Foundation;<br>
+ with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.<br>
+ A copy of the license is included in the section entitled "GNU<br>
+ Free Documentation License".<br>
+<br>
+If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,<br>
+replace the "with...Texts." line with this:<br>
+<br>
+ with the Invariant Sections being LIST THEIR TITLES, with the<br>
+ Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.<br>
+<br>
+If you have Invariant Sections without Cover Texts, or some other<br>
+combination of the three, merge those two alternatives to suit the<br>
+situation.<br>
+<br>
+If your document contains nontrivial examples of program code, we<br>
+recommend releasing these examples in parallel under your choice of<br>
+free software license, such as the GNU General Public License,<br>
+to permit their use in free software.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="license.gpl.html"><< 1. The GNU General Public License</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="licenses.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> </td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/license.gpl.html b/docs/html/license.gpl.html
new file mode 100644
index 0000000..e48a6c4
--- /dev/null
+++ b/docs/html/license.gpl.html
@@ -0,0 +1,379 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>1. The GNU General Public License</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="licenses.html" title="GNU Licenses">
+<link rel="prev" href="licenses.html" title="GNU Licenses">
+<link rel="next" href="license.gfdl.html" title="2. The GNU Free Documentation License">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="licenses.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="licenses.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">GNU Licenses</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="license.gfdl.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="license.gpl"></a>1. The GNU General Public License</h1></div></div></div>
+<div class="literallayout"><p><br>
+ GNU GENERAL PUBLIC LICENSE<br>
+ Version 2, June 1991<br>
+<br>
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,<br>
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA<br>
+ Everyone is permitted to copy and distribute verbatim copies<br>
+ of this license document, but changing it is not allowed.<br>
+<br>
+ Preamble<br>
+<br>
+ The licenses for most software are designed to take away your<br>
+freedom to share and change it. By contrast, the GNU General Public<br>
+License is intended to guarantee your freedom to share and change free<br>
+software--to make sure the software is free for all its users. This<br>
+General Public License applies to most of the Free Software<br>
+Foundation's software and to any other program whose authors commit to<br>
+using it. (Some other Free Software Foundation software is covered by<br>
+the GNU Lesser General Public License instead.) You can apply it to<br>
+your programs, too.<br>
+<br>
+ When we speak of free software, we are referring to freedom, not<br>
+price. Our General Public Licenses are designed to make sure that you<br>
+have the freedom to distribute copies of free software (and charge for<br>
+this service if you wish), that you receive source code or can get it<br>
+if you want it, that you can change the software or use pieces of it<br>
+in new free programs; and that you know you can do these things.<br>
+<br>
+ To protect your rights, we need to make restrictions that forbid<br>
+anyone to deny you these rights or to ask you to surrender the rights.<br>
+These restrictions translate to certain responsibilities for you if you<br>
+distribute copies of the software, or if you modify it.<br>
+<br>
+ For example, if you distribute copies of such a program, whether<br>
+gratis or for a fee, you must give the recipients all the rights that<br>
+you have. You must make sure that they, too, receive or can get the<br>
+source code. And you must show them these terms so they know their<br>
+rights.<br>
+<br>
+ We protect your rights with two steps: (1) copyright the software, and<br>
+(2) offer you this license which gives you legal permission to copy,<br>
+distribute and/or modify the software.<br>
+<br>
+ Also, for each author's protection and ours, we want to make certain<br>
+that everyone understands that there is no warranty for this free<br>
+software. If the software is modified by someone else and passed on, we<br>
+want its recipients to know that what they have is not the original, so<br>
+that any problems introduced by others will not reflect on the original<br>
+authors' reputations.<br>
+<br>
+ Finally, any free program is threatened constantly by software<br>
+patents. We wish to avoid the danger that redistributors of a free<br>
+program will individually obtain patent licenses, in effect making the<br>
+program proprietary. To prevent this, we have made it clear that any<br>
+patent must be licensed for everyone's free use or not licensed at all.<br>
+<br>
+ The precise terms and conditions for copying, distribution and<br>
+modification follow.<br>
+<br>
+ GNU GENERAL PUBLIC LICENSE<br>
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION<br>
+<br>
+ 0. This License applies to any program or other work which contains<br>
+a notice placed by the copyright holder saying it may be distributed<br>
+under the terms of this General Public License. The "Program", below,<br>
+refers to any such program or work, and a "work based on the Program"<br>
+means either the Program or any derivative work under copyright law:<br>
+that is to say, a work containing the Program or a portion of it,<br>
+either verbatim or with modifications and/or translated into another<br>
+language. (Hereinafter, translation is included without limitation in<br>
+the term "modification".) Each licensee is addressed as "you".<br>
+<br>
+Activities other than copying, distribution and modification are not<br>
+covered by this License; they are outside its scope. The act of<br>
+running the Program is not restricted, and the output from the Program<br>
+is covered only if its contents constitute a work based on the<br>
+Program (independent of having been made by running the Program).<br>
+Whether that is true depends on what the Program does.<br>
+<br>
+ 1. You may copy and distribute verbatim copies of the Program's<br>
+source code as you receive it, in any medium, provided that you<br>
+conspicuously and appropriately publish on each copy an appropriate<br>
+copyright notice and disclaimer of warranty; keep intact all the<br>
+notices that refer to this License and to the absence of any warranty;<br>
+and give any other recipients of the Program a copy of this License<br>
+along with the Program.<br>
+<br>
+You may charge a fee for the physical act of transferring a copy, and<br>
+you may at your option offer warranty protection in exchange for a fee.<br>
+<br>
+ 2. You may modify your copy or copies of the Program or any portion<br>
+of it, thus forming a work based on the Program, and copy and<br>
+distribute such modifications or work under the terms of Section 1<br>
+above, provided that you also meet all of these conditions:<br>
+<br>
+ a) You must cause the modified files to carry prominent notices<br>
+ stating that you changed the files and the date of any change.<br>
+<br>
+ b) You must cause any work that you distribute or publish, that in<br>
+ whole or in part contains or is derived from the Program or any<br>
+ part thereof, to be licensed as a whole at no charge to all third<br>
+ parties under the terms of this License.<br>
+<br>
+ c) If the modified program normally reads commands interactively<br>
+ when run, you must cause it, when started running for such<br>
+ interactive use in the most ordinary way, to print or display an<br>
+ announcement including an appropriate copyright notice and a<br>
+ notice that there is no warranty (or else, saying that you provide<br>
+ a warranty) and that users may redistribute the program under<br>
+ these conditions, and telling the user how to view a copy of this<br>
+ License. (Exception: if the Program itself is interactive but<br>
+ does not normally print such an announcement, your work based on<br>
+ the Program is not required to print an announcement.)<br>
+<br>
+These requirements apply to the modified work as a whole. If<br>
+identifiable sections of that work are not derived from the Program,<br>
+and can be reasonably considered independent and separate works in<br>
+themselves, then this License, and its terms, do not apply to those<br>
+sections when you distribute them as separate works. But when you<br>
+distribute the same sections as part of a whole which is a work based<br>
+on the Program, the distribution of the whole must be on the terms of<br>
+this License, whose permissions for other licensees extend to the<br>
+entire whole, and thus to each and every part regardless of who wrote it.<br>
+<br>
+Thus, it is not the intent of this section to claim rights or contest<br>
+your rights to work written entirely by you; rather, the intent is to<br>
+exercise the right to control the distribution of derivative or<br>
+collective works based on the Program.<br>
+<br>
+In addition, mere aggregation of another work not based on the Program<br>
+with the Program (or with a work based on the Program) on a volume of<br>
+a storage or distribution medium does not bring the other work under<br>
+the scope of this License.<br>
+<br>
+ 3. You may copy and distribute the Program (or a work based on it,<br>
+under Section 2) in object code or executable form under the terms of<br>
+Sections 1 and 2 above provided that you also do one of the following:<br>
+<br>
+ a) Accompany it with the complete corresponding machine-readable<br>
+ source code, which must be distributed under the terms of Sections<br>
+ 1 and 2 above on a medium customarily used for software interchange; or,<br>
+<br>
+ b) Accompany it with a written offer, valid for at least three<br>
+ years, to give any third party, for a charge no more than your<br>
+ cost of physically performing source distribution, a complete<br>
+ machine-readable copy of the corresponding source code, to be<br>
+ distributed under the terms of Sections 1 and 2 above on a medium<br>
+ customarily used for software interchange; or,<br>
+<br>
+ c) Accompany it with the information you received as to the offer<br>
+ to distribute corresponding source code. (This alternative is<br>
+ allowed only for noncommercial distribution and only if you<br>
+ received the program in object code or executable form with such<br>
+ an offer, in accord with Subsection b above.)<br>
+<br>
+The source code for a work means the preferred form of the work for<br>
+making modifications to it. For an executable work, complete source<br>
+code means all the source code for all modules it contains, plus any<br>
+associated interface definition files, plus the scripts used to<br>
+control compilation and installation of the executable. However, as a<br>
+special exception, the source code distributed need not include<br>
+anything that is normally distributed (in either source or binary<br>
+form) with the major components (compiler, kernel, and so on) of the<br>
+operating system on which the executable runs, unless that component<br>
+itself accompanies the executable.<br>
+<br>
+If distribution of executable or object code is made by offering<br>
+access to copy from a designated place, then offering equivalent<br>
+access to copy the source code from the same place counts as<br>
+distribution of the source code, even though third parties are not<br>
+compelled to copy the source along with the object code.<br>
+<br>
+ 4. You may not copy, modify, sublicense, or distribute the Program<br>
+except as expressly provided under this License. Any attempt<br>
+otherwise to copy, modify, sublicense or distribute the Program is<br>
+void, and will automatically terminate your rights under this License.<br>
+However, parties who have received copies, or rights, from you under<br>
+this License will not have their licenses terminated so long as such<br>
+parties remain in full compliance.<br>
+<br>
+ 5. You are not required to accept this License, since you have not<br>
+signed it. However, nothing else grants you permission to modify or<br>
+distribute the Program or its derivative works. These actions are<br>
+prohibited by law if you do not accept this License. Therefore, by<br>
+modifying or distributing the Program (or any work based on the<br>
+Program), you indicate your acceptance of this License to do so, and<br>
+all its terms and conditions for copying, distributing or modifying<br>
+the Program or works based on it.<br>
+<br>
+ 6. Each time you redistribute the Program (or any work based on the<br>
+Program), the recipient automatically receives a license from the<br>
+original licensor to copy, distribute or modify the Program subject to<br>
+these terms and conditions. You may not impose any further<br>
+restrictions on the recipients' exercise of the rights granted herein.<br>
+You are not responsible for enforcing compliance by third parties to<br>
+this License.<br>
+<br>
+ 7. If, as a consequence of a court judgment or allegation of patent<br>
+infringement or for any other reason (not limited to patent issues),<br>
+conditions are imposed on you (whether by court order, agreement or<br>
+otherwise) that contradict the conditions of this License, they do not<br>
+excuse you from the conditions of this License. If you cannot<br>
+distribute so as to satisfy simultaneously your obligations under this<br>
+License and any other pertinent obligations, then as a consequence you<br>
+may not distribute the Program at all. For example, if a patent<br>
+license would not permit royalty-free redistribution of the Program by<br>
+all those who receive copies directly or indirectly through you, then<br>
+the only way you could satisfy both it and this License would be to<br>
+refrain entirely from distribution of the Program.<br>
+<br>
+If any portion of this section is held invalid or unenforceable under<br>
+any particular circumstance, the balance of the section is intended to<br>
+apply and the section as a whole is intended to apply in other<br>
+circumstances.<br>
+<br>
+It is not the purpose of this section to induce you to infringe any<br>
+patents or other property right claims or to contest validity of any<br>
+such claims; this section has the sole purpose of protecting the<br>
+integrity of the free software distribution system, which is<br>
+implemented by public license practices. Many people have made<br>
+generous contributions to the wide range of software distributed<br>
+through that system in reliance on consistent application of that<br>
+system; it is up to the author/donor to decide if he or she is willing<br>
+to distribute software through any other system and a licensee cannot<br>
+impose that choice.<br>
+<br>
+This section is intended to make thoroughly clear what is believed to<br>
+be a consequence of the rest of this License.<br>
+<br>
+ 8. If the distribution and/or use of the Program is restricted in<br>
+certain countries either by patents or by copyrighted interfaces, the<br>
+original copyright holder who places the Program under this License<br>
+may add an explicit geographical distribution limitation excluding<br>
+those countries, so that distribution is permitted only in or among<br>
+countries not thus excluded. In such case, this License incorporates<br>
+the limitation as if written in the body of this License.<br>
+<br>
+ 9. The Free Software Foundation may publish revised and/or new versions<br>
+of the General Public License from time to time. Such new versions will<br>
+be similar in spirit to the present version, but may differ in detail to<br>
+address new problems or concerns.<br>
+<br>
+Each version is given a distinguishing version number. If the Program<br>
+specifies a version number of this License which applies to it and "any<br>
+later version", you have the option of following the terms and conditions<br>
+either of that version or of any later version published by the Free<br>
+Software Foundation. If the Program does not specify a version number of<br>
+this License, you may choose any version ever published by the Free Software<br>
+Foundation.<br>
+<br>
+ 10. If you wish to incorporate parts of the Program into other free<br>
+programs whose distribution conditions are different, write to the author<br>
+to ask for permission. For software which is copyrighted by the Free<br>
+Software Foundation, write to the Free Software Foundation; we sometimes<br>
+make exceptions for this. Our decision will be guided by the two goals<br>
+of preserving the free status of all derivatives of our free software and<br>
+of promoting the sharing and reuse of software generally.<br>
+<br>
+ NO WARRANTY<br>
+<br>
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY<br>
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN<br>
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES<br>
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED<br>
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF<br>
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS<br>
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE<br>
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,<br>
+REPAIR OR CORRECTION.<br>
+<br>
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING<br>
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR<br>
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,<br>
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING<br>
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED<br>
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY<br>
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER<br>
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE<br>
+POSSIBILITY OF SUCH DAMAGES.<br>
+<br>
+ END OF TERMS AND CONDITIONS<br>
+<br>
+ How to Apply These Terms to Your New Programs<br>
+<br>
+ If you develop a new program, and you want it to be of the greatest<br>
+possible use to the public, the best way to achieve this is to make it<br>
+free software which everyone can redistribute and change under these terms.<br>
+<br>
+ To do so, attach the following notices to the program. It is safest<br>
+to attach them to the start of each source file to most effectively<br>
+convey the exclusion of warranty; and each file should have at least<br>
+the "copyright" line and a pointer to where the full notice is found.<br>
+<br>
+ <one line to give the program's name and a brief idea of what it does.><br>
+ Copyright (C) <year> <name of author><br>
+<br>
+ This program is free software; you can redistribute it and/or modify<br>
+ it under the terms of the GNU General Public License as published by<br>
+ the Free Software Foundation; either version 2 of the License, or<br>
+ (at your option) any later version.<br>
+<br>
+ This program is distributed in the hope that it will be useful,<br>
+ but WITHOUT ANY WARRANTY; without even the implied warranty of<br>
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the<br>
+ GNU General Public License for more details.<br>
+<br>
+ You should have received a copy of the GNU General Public License along<br>
+ with this program; if not, write to the Free Software Foundation, Inc.,<br>
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.<br>
+<br>
+Also add information on how to contact you by electronic and paper mail.<br>
+<br>
+If the program is interactive, make it output a short notice like this<br>
+when it starts in an interactive mode:<br>
+<br>
+ Gnomovision version 69, Copyright (C) year name of author<br>
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.<br>
+ This is free software, and you are welcome to redistribute it<br>
+ under certain conditions; type `show c' for details.<br>
+<br>
+The hypothetical commands `show w' and `show c' should show the appropriate<br>
+parts of the General Public License. Of course, the commands you use may<br>
+be called something other than `show w' and `show c'; they could even be<br>
+mouse-clicks or menu items--whatever suits your program.<br>
+<br>
+You should also get your employer (if you work as a programmer) or your<br>
+school, if any, to sign a "copyright disclaimer" for the program, if<br>
+necessary. Here is a sample; alter the names:<br>
+<br>
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program<br>
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.<br>
+<br>
+ <signature of Ty Coon>, 1 April 1989<br>
+ Ty Coon, President of Vice<br>
+<br>
+This General Public License does not permit incorporating your program into<br>
+proprietary programs. If your program is a subroutine library, you may<br>
+consider it more useful to permit linking proprietary applications with the<br>
+library. If this is what you want to do, use the GNU Lesser General<br>
+Public License instead of this License.<br>
+<br>
+ </p></div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="licenses.html"><< GNU Licenses</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="licenses.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="license.gfdl.html">2. The GNU Free Documentation License >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/licenses.html b/docs/html/licenses.html
new file mode 100644
index 0000000..924a130
--- /dev/null
+++ b/docs/html/licenses.html
@@ -0,0 +1,47 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>GNU Licenses</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="index.html" title="Valgrind Documentation">
+<link rel="prev" href="dist.readme-solaris.html" title="12. README.solaris">
+<link rel="next" href="license.gpl.html" title="1. The GNU General Public License">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dist.readme-solaris.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="index.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="license.gpl.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="book">
+<div class="titlepage">
+<div><div><h1 class="title">
+<a name="licenses"></a>GNU Licenses</h1></div></div>
+<hr>
+</div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="chapter"><a href="license.gpl.html">1. The GNU General Public License</a></span></dt>
+<dt><span class="chapter"><a href="license.gfdl.html">2. The GNU Free Documentation License</a></span></dt>
+</dl>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dist.readme-solaris.html"><< 12. README.solaris</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="index.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="license.gpl.html">1. The GNU General Public License >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/lk-manual.html b/docs/html/lk-manual.html
new file mode 100644
index 0000000..0713143
--- /dev/null
+++ b/docs/html/lk-manual.html
@@ -0,0 +1,131 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>13. Lackey: an example tool</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="bbv-manual.html" title="12. BBV: an experimental basic block vector generation tool">
+<link rel="next" href="nl-manual.html" title="14. Nulgrind: the minimal Valgrind tool">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="bbv-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="nl-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="lk-manual"></a>13. Lackey: an example tool</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="lk-manual.html#lk-manual.overview">13.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="lk-manual.html#lk-manual.options">13.2. Lackey Command-line Options</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=lackey</code> on the Valgrind
+command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="lk-manual.overview"></a>13.1. Overview</h2></div></div></div>
+<p>Lackey is a simple Valgrind tool that does various kinds of basic
+program measurement. It adds quite a lot of simple instrumentation to the
+program's code. It is primarily intended to be of use as an example tool,
+and consequently emphasises clarity of implementation over
+performance.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="lk-manual.options"></a>13.2. Lackey Command-line Options</h2></div></div></div>
+<p>Lackey-specific command-line options are:</p>
+<div class="variablelist">
+<a name="lk.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.basic-counts"></a><span class="term">
+ <code class="option">--basic-counts=<no|yes> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled, Lackey prints the following statistics and
+ information about the execution of the client program:</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>The number of calls to the function specified by the
+ <code class="option">--fnname</code> option (the default is
+ <code class="computeroutput">main</code>).
+ If the program has had its symbols stripped, the count will always
+ be zero.</p></li>
+<li class="listitem"><p>The number of conditional branches encountered and the
+ number and proportion of those taken.</p></li>
+<li class="listitem"><p>The number of superblocks entered and completed by the
+ program. Note that due to optimisations done by the JIT, this
+ is not at all an accurate value.</p></li>
+<li class="listitem"><p>The number of guest (x86, amd64, ppc, etc.) instructions and IR
+ statements executed. IR is Valgrind's RISC-like intermediate
+ representation via which all instrumentation is done.
+ </p></li>
+<li class="listitem"><p>Ratios between some of these counts.</p></li>
+<li class="listitem"><p>The exit code of the client program.</p></li>
+</ol></div>
+</dd>
+<dt>
+<a name="opt.detailed-counts"></a><span class="term">
+ <code class="option">--detailed-counts=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled, Lackey prints a table containing counts of loads,
+ stores and ALU operations, differentiated by their IR types.
+ The IR types are identified by their IR name ("I1", "I8", ... "I128",
+ "F32", "F64", and "V128").</p></dd>
+<dt>
+<a name="opt.trace-mem"></a><span class="term">
+ <code class="option">--trace-mem=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled, Lackey prints the size and address of almost every
+ memory access made by the program. See the comments at the top of
+ the file <code class="computeroutput">lackey/lk_main.c</code> for details
+ about the output format, how it works, and inaccuracies in the address
+ trace. Note that this option produces immense amounts of output.</p></dd>
+<dt>
+<a name="opt.trace-superblocks"></a><span class="term">
+ <code class="option">--trace-superblocks=<no|yes> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled,
+ Lackey prints out the address of every superblock
+ (a single entry, multiple exit, linear chunk of code) executed by the
+ program. This is primarily of interest to Valgrind developers. See
+ the comments at the top of the file
+ <code class="computeroutput">lackey/lk_main.c</code> for details about
+ the output format. Note that this option produces large amounts of
+ output.</p></dd>
+<dt>
+<a name="opt.fnname"></a><span class="term">
+ <code class="option">--fnname=<name> [default: main] </code>
+ </span>
+</dt>
+<dd><p>Changes the function for which calls are counted when
+ <code class="option">--basic-counts=yes</code> is specified.</p></dd>
+</dl>
+</div>
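+<p>As a rough illustration of how these options combine (the program and
+function names here are hypothetical), one might count calls to a function
+other than <code class="computeroutput">main</code> and also request the
+detailed per-IR-type counts like this:</p>
+<pre class="screen">
+valgrind --tool=lackey --basic-counts=yes --fnname=render --detailed-counts=yes ./prog
+</pre>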
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="bbv-manual.html"><< 12. BBV: an experimental basic block vector generation tool</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="nl-manual.html">14. Nulgrind: the minimal Valgrind tool >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/manual-core-adv.html b/docs/html/manual-core-adv.html
new file mode 100644
index 0000000..0e6738c
--- /dev/null
+++ b/docs/html/manual-core-adv.html
@@ -0,0 +1,1690 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>3. Using and understanding the Valgrind core: Advanced Topics</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="manual-core.html" title="2. Using and understanding the Valgrind core">
+<link rel="next" href="mc-manual.html" title="4. Memcheck: a memory error detector">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="manual-core.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="mc-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="manual-core-adv"></a>3. Using and understanding the Valgrind core: Advanced Topics</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="manual-core-adv.html#manual-core-adv.clientreq">3.1. The Client Request mechanism</a></span></dt>
+<dt><span class="sect1"><a href="manual-core-adv.html#manual-core-adv.gdbserver">3.2. Debugging your program using Valgrind gdbserver and GDB</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-simple">3.2.1. Quick Start: debugging in 3 steps</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-concept">3.2.2. Valgrind gdbserver overall organisation</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-gdb">3.2.3. Connecting GDB to a Valgrind gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-gdb-android">3.2.4. Connecting to an Android gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-commandhandling">3.2.5. Monitor command handling by the Valgrind gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-threads">3.2.6. Valgrind gdbserver thread information</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-shadowregisters">3.2.7. Examining and modifying Valgrind shadow registers</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-limitations">3.2.8. Limitations of the Valgrind gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.vgdb">3.2.9. vgdb command line options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.valgrind-monitor-commands">3.2.10. Valgrind monitor commands</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-core-adv.html#manual-core-adv.wrapping">3.3. Function wrapping</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.example">3.3.1. A Simple Example</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.specs">3.3.2. Wrapping Specifications</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.semantics">3.3.3. Wrapping Semantics</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.debugging">3.3.4. Debugging</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.limitations-cf">3.3.5. Limitations - control flow</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.limitations-sigs">3.3.6. Limitations - original function signatures</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.examples">3.3.7. Examples</a></span></dt>
+</dl></dd>
+</dl>
+</div>
+<p>This chapter describes advanced aspects of the Valgrind core
+services, which are mostly of interest to power users who wish to
+customise and modify Valgrind's default behaviours in certain useful
+ways. The subjects covered are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>The "Client Request" mechanism</p></li>
+<li class="listitem"><p>Debugging your program using Valgrind's gdbserver
+ and GDB</p></li>
+<li class="listitem"><p>Function Wrapping</p></li>
+</ul></div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core-adv.clientreq"></a>3.1. The Client Request mechanism</h2></div></div></div>
+<p>Valgrind has a trapdoor mechanism via which the client
+program can pass all manner of requests and queries to Valgrind
+and the current tool. Internally, this is used extensively
+to make various things work, although that's not visible from the
+outside.</p>
+<p>For your convenience, a subset of these so-called client
+requests is provided to allow you to tell Valgrind facts about
+the behaviour of your program, and also to make queries.
+In particular, your program can tell Valgrind about things that it
+otherwise would not know, leading to better results.
+</p>
+<p>Clients need to include a header file to make this work.
+Which header file depends on which client requests you use. Some
+client requests are handled by the core, and are defined in the
+header file <code class="filename">valgrind/valgrind.h</code>. Tool-specific
+header files are named after the tool, e.g.
+<code class="filename">valgrind/memcheck.h</code>. Each tool-specific header file
+includes <code class="filename">valgrind/valgrind.h</code> so you don't need to
+include it in your client if you include a tool-specific header. All header
+files can be found in the <code class="literal">include/valgrind</code> directory of
+wherever Valgrind was installed.</p>
+<p>The macros in these header files have the magical property
+that they generate code in-line which Valgrind can spot.
+However, the code does nothing when not run on Valgrind, so you
+are not forced to run your program under Valgrind just because you
+use the macros in this file. Also, you are not required to link your
+program with any extra supporting libraries.</p>
+<p>The code added to your binary has negligible performance impact:
+on x86, amd64, ppc32, ppc64 and ARM, the overhead is 6 simple integer
+instructions and is probably undetectable except in tight loops.
+However, if you really wish to compile out the client requests, you
+can compile with <code class="option">-DNVALGRIND</code> (analogous to
+<code class="option">-DNDEBUG</code>'s effect on
+<code class="function">assert</code>).
+</p>
+<p>You are encouraged to copy the <code class="filename">valgrind/*.h</code> headers
+into your project's include directory, so your program doesn't have a
+compile-time dependency on Valgrind being installed. The Valgrind headers,
+unlike most of the rest of the code, are under a BSD-style license so you may
+include them without worrying about license incompatibility.</p>
+<p>Here is a brief description of the macros available in
+<code class="filename">valgrind.h</code>, which work with more than one
+tool (see the tool-specific documentation for explanations of the
+tool-specific macros).</p>
+<div class="variablelist"><dl class="variablelist">
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">RUNNING_ON_VALGRIND</code></strong></span>:</span></dt>
+<dd><p>Returns 1 if running on Valgrind, 0 if running on the
+ real CPU. If you are running Valgrind on itself, returns the
+ number of layers of Valgrind emulation you're running on.
+ </p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_DISCARD_TRANSLATIONS</code>:</strong></span></span></dt>
+<dd>
+<p>Discards translations of code in the specified address
+ range. Useful if you are debugging a JIT compiler or some other
+ dynamic code generation system. After this call, attempts to
+ execute code in the invalidated address range will cause
+ Valgrind to make new translations of that code, which is
+ probably the semantics you want. Note that code invalidations
+ are expensive because finding all the relevant translations
+ quickly is very difficult, so try not to call it often.
+ Note that you can be clever about
+ this: you only need to call it when an area which previously
+ contained code is overwritten with new code. You can choose
+ to write code into fresh memory, and just call this
+ occasionally to discard large chunks of old code all at
+ once.</p>
+<p>
+ Alternatively, for transparent self-modifying-code support,
+ use <code class="option">--smc-check=all</code>, or run
+ on ppc32/Linux, ppc64/Linux or ARM/Linux.
+ </p>
+</dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_COUNT_ERRORS</code>:</strong></span></span></dt>
+<dd><p>Returns the number of errors found so far by Valgrind. Can be
+ useful in test harness code when combined with the
+ <code class="option">--log-fd=-1</code> option; this runs Valgrind silently,
+ but the client program can detect when errors occur. Only useful
+ for tools that report errors, e.g. it's useful for Memcheck, but for
+ Cachegrind it will always return zero because Cachegrind doesn't
+ report errors.</p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_MALLOCLIKE_BLOCK</code>:</strong></span></span></dt>
+<dd><p>If your program manages its own memory instead of using
+ the standard <code class="function">malloc</code> /
+ <code class="function">new</code> /
+ <code class="function">new[]</code>, tools that track
+ information about heap blocks will not do nearly as good a
+ job. For example, Memcheck won't detect nearly as many
+ errors, and the error messages won't be as informative. To
+ improve this situation, use this macro just after your custom
+ allocator allocates some new memory. See the comments in
+ <code class="filename">valgrind.h</code> for information on how to use
+ it.</p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_FREELIKE_BLOCK</code>:</strong></span></span></dt>
+<dd><p>This should be used in conjunction with
+ <code class="computeroutput">VALGRIND_MALLOCLIKE_BLOCK</code>.
+ Again, see <code class="filename">valgrind.h</code> for
+ information on how to use it.</p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_RESIZEINPLACE_BLOCK</code>:</strong></span></span></dt>
+<dd><p>Informs a Valgrind tool that the size of an allocated block has been
+ modified but not its address. See <code class="filename">valgrind.h</code> for
+ more information on how to use it.</p></dd>
+<dt><span class="term">
+ <span class="command"><strong><code class="computeroutput">VALGRIND_CREATE_MEMPOOL</code></strong></span>,
+ <span class="command"><strong><code class="computeroutput">VALGRIND_DESTROY_MEMPOOL</code></strong></span>,
+ <span class="command"><strong><code class="computeroutput">VALGRIND_MEMPOOL_ALLOC</code></strong></span>,
+ <span class="command"><strong><code class="computeroutput">VALGRIND_MEMPOOL_FREE</code></strong></span>,
+ <span class="command"><strong><code class="computeroutput">VALGRIND_MOVE_MEMPOOL</code></strong></span>,
+ <span class="command"><strong><code class="computeroutput">VALGRIND_MEMPOOL_CHANGE</code></strong></span>,
+ <span class="command"><strong><code class="computeroutput">VALGRIND_MEMPOOL_EXISTS</code></strong></span>:
+ </span></dt>
+<dd><p>These are similar to
+ <code class="computeroutput">VALGRIND_MALLOCLIKE_BLOCK</code> and
+ <code class="computeroutput">VALGRIND_FREELIKE_BLOCK</code>
+ but are tailored towards code that uses memory pools. See
+ <a class="xref" href="mc-manual.html#mc-manual.mempools" title="4.8. Memory Pools: describing and working with custom allocators">Memory Pools</a> for a detailed description.</p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_NON_SIMD_CALL[0123]</code>:</strong></span></span></dt>
+<dd>
+<p>Executes a function in the client program on the
+ <span class="emphasis"><em>real</em></span> CPU, not the virtual CPU that Valgrind
+ normally runs code on. The function must take an integer (holding a
+ thread ID) as the first argument and then 0, 1, 2 or 3 more arguments
+ (depending on which client request is used). These are used in various
+ ways internally to Valgrind. They might be useful to client
+ programs.</p>
+<p><span class="command"><strong>Warning:</strong></span> Only use these if you
+ <span class="emphasis"><em>really</em></span> know what you are doing. They aren't
+ entirely reliable, and can cause Valgrind to crash. See
+ <code class="filename">valgrind.h</code> for more details.
+ </p>
+</dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_PRINTF(format, ...)</code>:</strong></span></span></dt>
+<dd><p>Print a printf-style message to the Valgrind log file. The
+ message is prefixed with the PID between a pair of
+ <code class="computeroutput">**</code> markers. (Like all client requests,
+ nothing is output if the client program is not running under Valgrind.)
+ Output is not produced until a newline is encountered, or subsequent
+ Valgrind output is printed; this allows you to build up a single line of
+ output over multiple calls. Returns the number of characters output,
+ excluding the PID prefix.</p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_PRINTF_BACKTRACE(format, ...)</code>:</strong></span></span></dt>
+<dd><p>Like <code class="computeroutput">VALGRIND_PRINTF</code> (in
+ particular, the return value is identical), but prints a stack backtrace
+ immediately afterwards.</p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_MONITOR_COMMAND(command)</code>:</strong></span></span></dt>
+<dd><p>Execute the given monitor command (a string).
+ Returns 0 if command is recognised. Returns 1 if command is not recognised.
+ Note that some monitor commands provide access to a functionality
+ also accessible via a specific client request. For example,
+ memcheck leak search can be requested from the client program
+ using VALGRIND_DO_LEAK_CHECK or via the monitor command "leak_search".
+ Note that the syntax of the command string is only verified at
+ run-time. So, if one exists, it is preferable to use a specific
+ client request to get better compile-time verification of the
+ arguments.
+ </p></dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_STACK_REGISTER(start, end)</code>:</strong></span></span></dt>
+<dd>
+<p>Registers a new stack. Informs Valgrind that the memory range
+ between start and end is a unique stack. Returns a stack identifier
+ that can be used with other
+ <code class="computeroutput">VALGRIND_STACK_*</code> calls.</p>
+<p>Valgrind will use this information to determine if a change
+ to the stack pointer is an item pushed onto the stack or a change
+ over to a new stack. Use this if you're using a user-level thread
+ package and are noticing crashes in stack trace recording or
+ spurious errors from Valgrind about uninitialized memory
+ reads.</p>
+<p><span class="command"><strong>Warning:</strong></span> Unfortunately, this client request is
+ unreliable and best avoided.</p>
+</dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_STACK_DEREGISTER(id)</code>:</strong></span></span></dt>
+<dd>
+<p>Deregisters a previously registered stack. Informs
+ Valgrind that the previously registered memory range with stack id
+ <code class="computeroutput">id</code> is no longer a stack.</p>
+<p><span class="command"><strong>Warning:</strong></span> Unfortunately, this client request is
+ unreliable and best avoided.</p>
+</dd>
+<dt><span class="term"><span class="command"><strong><code class="computeroutput">VALGRIND_STACK_CHANGE(id, start, end)</code>:</strong></span></span></dt>
+<dd>
+<p>Changes a previously registered stack. Informs
+ Valgrind that the previously registered stack with stack id
+ <code class="computeroutput">id</code> has changed its start and end
+ values. Use this if your user-level thread package implements
+ stack growth.</p>
+<p><span class="command"><strong>Warning:</strong></span> Unfortunately, this client request is
+ unreliable and best avoided.</p>
+</dd>
+</dl></div>
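+<p>As a minimal sketch of how these generic requests are used (the program
+below is purely illustrative and not part of any tool), a client might
+include <code class="filename">valgrind/valgrind.h</code>, query
+<code class="computeroutput">RUNNING_ON_VALGRIND</code>, and emit a message
+into the Valgrind log with
+<code class="computeroutput">VALGRIND_PRINTF</code>. Compiling the same file
+with <code class="option">-DNVALGRIND</code> turns the requests into no-ops.</p>
+<pre class="programlisting">
+#include <stdio.h>
+#include <valgrind/valgrind.h>
+
+int main(void)
+{
+   if (RUNNING_ON_VALGRIND) {
+      /* Written to the Valgrind log, prefixed with the PID. */
+      VALGRIND_PRINTF("started under Valgrind (nesting level %u)\n",
+                      (unsigned)RUNNING_ON_VALGRIND);
+   } else {
+      printf("started natively\n");
+   }
+   return 0;
+}
+</pre>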
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core-adv.gdbserver"></a>3.2. Debugging your program using Valgrind gdbserver and GDB</h2></div></div></div>
+<p>A program running under Valgrind is not executed directly by the
+CPU. Instead it runs on a synthetic CPU provided by Valgrind. This is
+why a debugger cannot debug your program when it runs on Valgrind.
+</p>
+<p>
+This section describes how GDB can interact with the
+Valgrind gdbserver to provide a fully debuggable program under
+Valgrind. Used in this way, GDB also provides an interactive usage of
+Valgrind core or tool functionalities, including incremental leak search
+under Memcheck and on-demand Massif snapshot production.
+</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-simple"></a>3.2.1. Quick Start: debugging in 3 steps</h3></div></div></div>
+<p>The simplest way to get started is to run Valgrind with the
+flag <code class="option">--vgdb-error=0</code>. Then follow the on-screen
+directions, which give you the precise commands needed to start GDB
+and connect it to your program.</p>
+<p>Otherwise, here's a slightly more verbose overview.</p>
+<p>If you want to debug a program with GDB when using the Memcheck
+tool, start Valgrind like this:
+</p>
+<pre class="screen">
+valgrind --vgdb=yes --vgdb-error=0 prog
+</pre>
+<p>In another shell, start GDB:
+</p>
+<pre class="screen">
+gdb prog
+</pre>
+<p>Then give the following command to GDB:
+</p>
+<pre class="screen">
+(gdb) target remote | vgdb
+</pre>
+<p>You can now debug your program e.g. by inserting a breakpoint
+and then using the GDB <code class="computeroutput">continue</code>
+command.</p>
+<p>This quick start information is enough for basic usage of the
+Valgrind gdbserver. The sections below describe more advanced
+functionality provided by the combination of Valgrind and GDB. Note
+that the command line flag <code class="option">--vgdb=yes</code> can be omitted,
+as this is the default value.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-concept"></a>3.2.2. Valgrind gdbserver overall organisation</h3></div></div></div>
+<p>The GNU GDB debugger is typically used to debug a process
+running on the same machine. In this mode, GDB uses system calls to
+control and query the program being debugged. This works well, but
+only allows GDB to debug a program running on the same computer.
+</p>
+<p>GDB can also debug processes running on a different computer.
+To achieve this, GDB defines a protocol (that is, a set of query and
+reply packets) that facilitates fetching the value of memory or
+registers, setting breakpoints, etc. A gdbserver is an implementation
+of this "GDB remote debugging" protocol. To debug a process running
+on a remote computer, a gdbserver (sometimes called a GDB stub)
+must run at the remote computer side.
+</p>
+<p>The Valgrind core provides a built-in gdbserver implementation,
+which is activated using <code class="option">--vgdb=yes</code>
+or <code class="option">--vgdb=full</code>. This gdbserver allows the process
+running on Valgrind's synthetic CPU to be debugged remotely.
+GDB sends protocol query packets (such as "get register contents") to
+the Valgrind embedded gdbserver. The gdbserver executes the queries
+(for example, it will get the register values of the synthetic CPU)
+and gives the results back to GDB.
+</p>
+<p>GDB can use various kinds of channels (TCP/IP, serial line, etc)
+to communicate with the gdbserver. In the case of Valgrind's
+gdbserver, communication is done via a pipe and a small helper program
+called <a class="xref" href="manual-core-adv.html#manual-core-adv.vgdb" title="3.2.9. vgdb command line options">vgdb</a>, which acts as an
+intermediary. If no GDB is in use, vgdb can also be
+used to send monitor commands to the Valgrind gdbserver from a shell
+command line.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-gdb"></a>3.2.3. Connecting GDB to a Valgrind gdbserver</h3></div></div></div>
+<p>To debug a program "<code class="filename">prog</code>" running under
+Valgrind, you must ensure that the Valgrind gdbserver is activated by
+specifying either <code class="option">--vgdb=yes</code>
+or <code class="option">--vgdb=full</code>. A secondary command line option,
+<code class="option">--vgdb-error=number</code>, can be used to tell the gdbserver
+only to become active once the specified number of errors have been
+shown. A value of zero will therefore cause
+the gdbserver to become active at startup, which allows you to
+insert breakpoints before starting the run. For example:
+</p>
+<pre class="screen">
+valgrind --tool=memcheck --vgdb=yes --vgdb-error=0 ./prog
+</pre>
+<p>The Valgrind gdbserver is invoked at startup
+and indicates it is waiting for a connection from a GDB:</p>
+<pre class="programlisting">
+==2418== Memcheck, a memory error detector
+==2418== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
+==2418== Using Valgrind-3.7.0.SVN and LibVEX; rerun with -h for copyright info
+==2418== Command: ./prog
+==2418==
+==2418== (action at startup) vgdb me ...
+</pre>
+<p>GDB (in another shell) can then be connected to the Valgrind gdbserver.
+For this, GDB must be started on the program <code class="filename">prog</code>:
+</p>
+<pre class="screen">
+gdb ./prog
+</pre>
+<p>You then indicate to GDB that you want to debug a remote target:
+</p>
+<pre class="screen">
+(gdb) target remote | vgdb
+</pre>
+<p>
+GDB then starts a vgdb relay application to communicate with the
+Valgrind embedded gdbserver:</p>
+<pre class="programlisting">
+(gdb) target remote | vgdb
+Remote debugging using | vgdb
+relaying data between gdb and process 2418
+Reading symbols from /lib/ld-linux.so.2...done.
+Reading symbols from /usr/lib/debug/lib/ld-2.11.2.so.debug...done.
+Loaded symbols for /lib/ld-linux.so.2
+[Switching to Thread 2418]
+0x001f2850 in _start () from /lib/ld-linux.so.2
+(gdb)
+</pre>
+<p>Note that vgdb is provided as part of the Valgrind
+distribution. You do not need to install it separately.</p>
+<p>If vgdb detects that there are multiple Valgrind gdbservers that
+can be connected to, it will list all such servers and their PIDs, and
+then exit. You can then reissue the GDB "target" command, but
+specifying the PID of the process you want to debug:
+</p>
+<pre class="programlisting">
+(gdb) target remote | vgdb
+Remote debugging using | vgdb
+no --pid= arg given and multiple valgrind pids found:
+use --pid=2479 for valgrind --tool=memcheck --vgdb=yes --vgdb-error=0 ./prog
+use --pid=2481 for valgrind --tool=memcheck --vgdb=yes --vgdb-error=0 ./prog
+use --pid=2483 for valgrind --vgdb=yes --vgdb-error=0 ./another_prog
+Remote communication error: Resource temporarily unavailable.
+(gdb) target remote | vgdb --pid=2479
+Remote debugging using | vgdb --pid=2479
+relaying data between gdb and process 2479
+Reading symbols from /lib/ld-linux.so.2...done.
+Reading symbols from /usr/lib/debug/lib/ld-2.11.2.so.debug...done.
+Loaded symbols for /lib/ld-linux.so.2
+[Switching to Thread 2479]
+0x001f2850 in _start () from /lib/ld-linux.so.2
+(gdb)
+</pre>
+<p>Once GDB is connected to the Valgrind gdbserver, it can be used
+in the same way as if you were debugging the program natively:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Breakpoints can be inserted or deleted.</p></li>
+<li class="listitem"><p>Variables and register values can be examined or modified.
+ </p></li>
+<li class="listitem"><p>Signal handling can be configured (printing, ignoring).
+ </p></li>
+<li class="listitem"><p>Execution can be controlled (continue, step, next, stepi, etc).
+ </p></li>
+<li class="listitem"><p>Program execution can be interrupted using Control-C.</p></li>
+</ul></div>
+<p>And so on. Refer to the GDB user manual for a complete
+description of GDB's functionality.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-gdb-android"></a>3.2.4. Connecting to an Android gdbserver</h3></div></div></div>
+<p> When developing applications for Android, you will typically use
+a development system (on which the Android NDK is installed) to compile your
+application. An Android target system or emulator will be used to run
+the application.
+In this setup, Valgrind and vgdb will run on the Android system,
+while GDB will run on the development system. GDB will connect
+to the vgdb running on the Android system using the Android NDK
+'adb forward' application.
+</p>
+<p> Example: on the Android system, execute the following:
+ </p>
+<pre class="screen">
+valgrind --vgdb-error=0 --vgdb=yes prog
+# and then in another shell, run:
+vgdb --port=1234
+</pre>
+<p>
+</p>
+<p> On the development system, execute the following commands:
+</p>
+<pre class="screen">
+adb forward tcp:1234 tcp:1234
+gdb prog
+(gdb) target remote :1234
+</pre>
+<p>
+GDB will use a local TCP/IP connection to connect to the Android adb forwarder.
+Adb will establish a relay connection between the host system and the Android
+target system. Be sure to use the GDB delivered with the
+Android NDK (typically, arm-linux-androideabi-gdb), as the host
+GDB is probably not able to debug Android ARM applications.
+Note that the local port number (used by GDB) need not be equal
+to the port number used by vgdb: adb can forward TCP/IP connections between
+different port numbers.
+</p>
+<p>In the current release, the GDB server is not enabled by default
+for Android, due to problems in establishing a suitable directory in
+which Valgrind can create the necessary FIFOs (named pipes) for
+communication purposes. You can still try to use the GDB server, but
+you will need to explicitly enable it using the flag
+<code class="computeroutput">--vgdb=yes</code> or
+<code class="computeroutput">--vgdb=full</code>.
+</p>
+<p>Additionally, you
+will need to select a temporary directory which is (a) writable
+by Valgrind, and (b) supports FIFOs. This is the main difficulty.
+Often, <code class="computeroutput">/sdcard</code> satisfies
+requirement (a), but fails for (b) because it is a VFAT file system
+and VFAT does not support pipes. Possibilities you could try are
+<code class="computeroutput">/data/local</code>,
+<code class="computeroutput">/data/local/Inst</code> (if you
+installed Valgrind there), or
+<code class="computeroutput">/data/data/name.of.my.app</code>, if you
+are running a specific application and it has its own directory of
+that form. This last possibility may have the highest probability
+of success.</p>
+<p>You can specify the temporary directory to use either via
+the <code class="computeroutput">--with-tmpdir=</code> configure time
+flag, or by setting environment variable TMPDIR when running Valgrind
+(on the Android device, not on the Android NDK development host).
+Another alternative is to specify the directory for the FIFOs using
+the <code class="computeroutput">--vgdb-prefix=</code> Valgrind command
+line option.
+</p>
+<p>We hope to have a better story for temporary directory handling
+on Android in the future. The difficulty is that, unlike in standard
+Unixes, there is no single temporary file directory that reliably
+works across all devices and scenarios.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-commandhandling"></a>3.2.5. Monitor command handling by the Valgrind gdbserver</h3></div></div></div>
+<p> The Valgrind gdbserver provides additional Valgrind-specific
+functionality via "monitor commands". Such monitor commands can be
+sent from the GDB command line or from the shell command line or
+requested by the client program using the VALGRIND_MONITOR_COMMAND
+client request. See
+<a class="xref" href="manual-core-adv.html#manual-core-adv.valgrind-monitor-commands" title="3.2.10. Valgrind monitor commands">Valgrind monitor commands</a> for the
+list of the Valgrind core monitor commands available regardless of the
+Valgrind tool selected.
+</p>
+<p>The following tools provide tool-specific monitor commands:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><a class="xref" href="mc-manual.html#mc-manual.monitor-commands" title="4.6. Memcheck Monitor Commands">Memcheck Monitor Commands</a></p></li>
+<li class="listitem"><p><a class="xref" href="cl-manual.html#cl-manual.monitor-commands" title="6.4. Callgrind Monitor Commands">Callgrind Monitor Commands</a></p></li>
+<li class="listitem"><p><a class="xref" href="ms-manual.html#ms-manual.monitor-commands" title="9.4. Massif Monitor Commands">Massif Monitor Commands</a></p></li>
+<li class="listitem"><p><a class="xref" href="hg-manual.html#hg-manual.monitor-commands" title="7.7. Helgrind Monitor Commands">Helgrind Monitor Commands</a></p></li>
+</ul></div>
+<p>
+</p>
+<p>An example of a tool specific monitor command is the Memcheck monitor
+command <code class="computeroutput">leak_check full
+reachable any</code>. This requests a full reporting of the
+allocated memory blocks. To have this leak check executed, use the GDB
+command:
+</p>
+<pre class="screen">
+(gdb) monitor leak_check full reachable any
+</pre>
+<p>
+</p>
+<p>GDB will send the <code class="computeroutput">leak_check</code>
+command to the Valgrind gdbserver. The Valgrind gdbserver will
+execute the monitor command itself, if it recognises it to be a Valgrind core
+monitor command. If it is not recognised as such, it is assumed to
+be tool-specific and is handed to the tool for execution. For example:
+</p>
+<pre class="programlisting">
+(gdb) monitor leak_check full reachable any
+==2418== 100 bytes in 1 blocks are still reachable in loss record 1 of 1
+==2418== at 0x4006E9E: malloc (vg_replace_malloc.c:236)
+==2418== by 0x804884F: main (prog.c:88)
+==2418==
+==2418== LEAK SUMMARY:
+==2418== definitely lost: 0 bytes in 0 blocks
+==2418== indirectly lost: 0 bytes in 0 blocks
+==2418== possibly lost: 0 bytes in 0 blocks
+==2418== still reachable: 100 bytes in 1 blocks
+==2418== suppressed: 0 bytes in 0 blocks
+==2418==
+(gdb)
+</pre>
+<p>As with other GDB commands, the Valgrind gdbserver will accept
+abbreviated monitor command names and arguments, as long as the given
+abbreviation is unambiguous. For example, the above
+<code class="computeroutput">leak_check</code>
+command can also be typed as:
+</p>
+<pre class="screen">
+(gdb) mo l f r a
+</pre>
+<p>
+
+The letters <code class="computeroutput">mo</code> are recognised by GDB as being
+an abbreviation for <code class="computeroutput">monitor</code>. So GDB sends the
+string <code class="computeroutput">l f r a</code> to the Valgrind
+gdbserver. The letters provided in this string are unambiguous for the
+Valgrind gdbserver. This therefore gives the same output as the
+unabbreviated command and arguments. If the provided abbreviation is
+ambiguous, the Valgrind gdbserver will report the list of commands (or
+argument values) that can match:
+</p>
+<pre class="programlisting">
+(gdb) mo v. n
+v. can match v.set v.info v.wait v.kill v.translate v.do
+(gdb) mo v.i n
+n_errs_found 0 n_errs_shown 0 (vgdb-error 0)
+(gdb)
+</pre>
+<p>
+</p>
+<p>Instead of sending a monitor command from GDB, you can also send
+these from a shell command line. For example, the following command
+lines, when given in a shell, will cause the same leak search to be executed
+by the process 3145:
+</p>
+<pre class="screen">
+vgdb --pid=3145 leak_check full reachable any
+vgdb --pid=3145 l f r a
+</pre>
+<p>Note that the Valgrind gdbserver automatically continues the
+execution of the program after a standalone invocation of
+vgdb. Monitor commands sent from GDB do not cause the program to
+continue: the program execution is controlled explicitly using GDB
+commands such as "continue" or "next".</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-threads"></a>3.2.6. Valgrind gdbserver thread information</h3></div></div></div>
+<p>Valgrind's gdbserver enriches the output of the
+GDB <code class="computeroutput">info threads</code> command
+with Valgrind-specific information.
+The operating system's thread number is followed
+by Valgrind's internal index for that thread ("tid") and by
+the Valgrind scheduler thread state:</p>
+<pre class="programlisting">
+(gdb) info threads
+ 4 Thread 6239 (tid 4 VgTs_Yielding) 0x001f2832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
+* 3 Thread 6238 (tid 3 VgTs_Runnable) make_error (s=0x8048b76 "called from London") at prog.c:20
+ 2 Thread 6237 (tid 2 VgTs_WaitSys) 0x001f2832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
+ 1 Thread 6234 (tid 1 VgTs_Yielding) main (argc=1, argv=0xbedcc274) at prog.c:105
+(gdb)
+</pre>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-shadowregisters"></a>3.2.7. Examining and modifying Valgrind shadow registers</h3></div></div></div>
+<p> When the option <code class="option">--vgdb-shadow-registers=yes</code> is
+given, the Valgrind gdbserver will let GDB examine and/or modify
+Valgrind's shadow registers. GDB version 7.1 or later is needed for this
+to work. For x86 and amd64, GDB version 7.2 or later is needed.</p>
+<p>For each CPU register, the Valgrind core maintains two
+shadow register sets. These shadow registers can be accessed from
+GDB by giving a postfix <code class="computeroutput">s1</code>
+or <code class="computeroutput">s2</code> for respectively the first
+and second shadow register. For example, the x86 register
+<code class="computeroutput">eax</code> and its two shadows
+can be examined using the following commands:</p>
+<pre class="programlisting">
+(gdb) p $eax
+$1 = 0
+(gdb) p $eaxs1
+$2 = 0
+(gdb) p $eaxs2
+$3 = 0
+(gdb)
+</pre>
+<p>Float shadow registers are shown by GDB as unsigned integer
+values instead of float values, as it is expected that these
+shadow values are mostly used for memcheck validity bits. </p>
+<p>Intel/amd64 AVX registers <code class="computeroutput">ymm0</code>
+to <code class="computeroutput">ymm15</code> also have shadow
+registers. However, GDB presents the shadow values using two
+"half" registers. For example, the half shadow registers for
+<code class="computeroutput">ymm9</code> are
+<code class="computeroutput">xmm9s1</code> (lower half for set 1),
+<code class="computeroutput">ymm9hs1</code> (upper half for set 1),
+<code class="computeroutput">xmm9s2</code> (lower half for set 2),
+<code class="computeroutput">ymm9hs2</code> (upper half for set 2).
+Note the inconsistent notation for the names of the half registers:
+the lower part starts with an <code class="computeroutput">x</code>,
+the upper part starts with an <code class="computeroutput">y</code>
+and has an <code class="computeroutput">h</code> before the shadow postfix.
+</p>
+<p>The special presentation of the AVX shadow registers is due to
+the fact that GDB independently retrieves the lower and upper half of
+the <code class="computeroutput">ymm</code> registers. GDB does not
+however know that the shadow half registers have to be shown combined.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.gdbserver-limitations"></a>3.2.8. Limitations of the Valgrind gdbserver</h3></div></div></div>
+<p>Debugging with the Valgrind gdbserver is very similar to native
+debugging. Valgrind's gdbserver implementation is quite
+complete, and so provides most of the GDB debugging functionality. There
+are however some limitations and peculiarities:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>Precision of "stop-at" commands.</p>
+<p>
+ GDB commands such as "step", "next", "stepi", breakpoints
+ and watchpoints, will stop the execution of the process. With
+ the option <code class="option">--vgdb=yes</code>, the process might not
+ stop at the exact requested instruction. Instead, it might
+ continue execution of the current basic block and stop at one
+ of the following basic blocks. This is linked to the fact that
+ Valgrind gdbserver has to instrument a block to allow stopping
+ at the exact instruction requested. Re-instrumentation of the block
+ currently being executed is not yet
+ supported. So, if the action requested by GDB (e.g. single
+ stepping or inserting a breakpoint) implies re-instrumentation
+ of the current block, the GDB action may not be executed
+ precisely.
+ </p>
+<p>
+ This limitation applies when the basic block
+ currently being executed has not yet been instrumented for debugging.
+ This typically happens when the gdbserver is activated due to the
+ tool reporting an error or to a watchpoint. If the gdbserver
+ block has been activated following a breakpoint, or if a
+ breakpoint has been inserted in the block before its execution,
+ then the block has already been instrumented for debugging.
+ </p>
+<p>
+ If you use the option <code class="option">--vgdb=full</code>, then GDB
+ "stop-at" commands will be obeyed precisely. The
+ downside is that this requires each instruction to be
+ instrumented with an additional call to a gdbserver helper
+ function, which gives considerable overhead (+500% for memcheck)
+ compared to <code class="option">--vgdb=no</code>.
+ Option <code class="option">--vgdb=yes</code> has neglectible overhead compared
+ to <code class="option">--vgdb=no</code>.
+ </p>
+</li>
+<li class="listitem">
+<p>Processor registers and flags values.</p>
+<p>When Valgrind gdbserver stops on an error, on a breakpoint
+ or when single stepping, registers and flags values might not be always
+ up to date due to the optimisations done by the Valgrind core.
+ The default value
+ <code class="option">--vex-iropt-register-updates=unwindregs-at-mem-access</code>
+ ensures that the registers needed to make a stack trace (typically
+ PC/SP/FP) are up to date at each memory access (i.e. memory exception
+ points).
+ Disabling some optimisations using the following values will increase
+ the precision of registers and flags values (a typical performance
+ impact for memcheck is given for each option).
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: circle; ">
+<li class="listitem">
+<code class="option">--vex-iropt-register-updates=allregs-at-mem-access</code> (+10%)
+ ensures that all registers and flags are up to date at each memory
+ access.
+ </li>
+<li class="listitem">
+<code class="option">--vex-iropt-register-updates=allregs-at-each-insn</code> (+25%)
+ ensures that all registers and flags are up to date at each instruction.
+ </li>
+</ul></div>
+<p>
+ Note that <code class="option">--vgdb=full</code> (+500%, see above
+ Precision of "stop-at" commands) automatically
+ activates <code class="option">--vex-iropt-register-updates=allregs-at-each-insn</code>.
+ </p>
+</li>
+<li class="listitem">
+<p>Hardware watchpoint support by the Valgrind
+ gdbserver.</p>
+<p> The Valgrind gdbserver can simulate hardware watchpoints
+ if the selected tool provides support for it. Currently,
+ only Memcheck provides hardware watchpoint simulation. The
+ hardware watchpoint simulation provided by Memcheck is much
+ faster than GDB software watchpoints, which are implemented by
+ GDB checking the value of the watched zone(s) after each
+ instruction. Hardware watchpoint simulation also provides read
+ watchpoints. The hardware watchpoint simulation by Memcheck has
+ some limitations compared to real hardware
+ watchpoints. However, the number and length of simulated
+ watchpoints are not limited.
+ </p>
+<p>Typically, the number of (real) hardware watchpoints is
+ limited. For example, the x86 architecture supports a maximum of
+ 4 hardware watchpoints, each watchpoint watching 1, 2, 4 or 8
+ bytes. The Valgrind gdbserver does not have any limitation on the
+ number of simulated hardware watchpoints. It also has no
+ limitation on the length of the memory zone being
+ watched. Using GDB version 7.4 or later allows full use of the
+ flexibility of the Valgrind gdbserver's simulated hardware watchpoints.
+ Previous GDB versions do not understand that Valgrind gdbserver
+ watchpoints have no length limit.
+ </p>
+<p>Memcheck implements hardware watchpoint simulation by
+ marking the watched address ranges as being unaddressable. When
+ a hardware watchpoint is removed, the range is marked as
+ addressable and defined. Hardware watchpoint simulation of
+ addressable-but-undefined memory zones works properly, but has
+ the undesirable side effect of marking the zone as defined when
+ the watchpoint is removed.
+ </p>
+<p>Write watchpoints might not be reported at the
+ exact instruction that writes the monitored area,
+ unless option <code class="option">--vgdb=full</code> is given. Read watchpoints
+ will always be reported at the exact instruction reading the
+ watched memory.
+ </p>
+<p>It is better to avoid using hardware watchpoints on memory that is
+ not (yet) addressable: in such a case, GDB will fall back to
+ extremely slow software watchpoints. Also, if you do not quit GDB
+ between two debugging sessions, the hardware watchpoints of the
+ previous sessions will be re-inserted as software watchpoints if
+ the watched memory zone is not addressable at program startup.
+ </p>
+</li>
+<li class="listitem">
+<p>Stepping inside shared libraries on ARM.</p>
+<p>For unknown reasons, stepping inside shared
+ libraries on ARM may fail. A workaround is to use the
+ <code class="computeroutput">ldd</code> command
+ to find the list of shared libraries and their loading address
+ and inform GDB of the loading address using the GDB command
+ "add-symbol-file". Example:
+ </p>
+<pre class="programlisting">
+(gdb) shell ldd ./prog
+ libc.so.6 => /lib/libc.so.6 (0x4002c000)
+ /lib/ld-linux.so.3 (0x40000000)
+(gdb) add-symbol-file /lib/libc.so.6 0x4002c000
+add symbol table from file "/lib/libc.so.6" at
+ .text_addr = 0x4002c000
+(y or n) y
+Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
+(gdb)
+</pre>
+<p>
+ </p>
+</li>
+<li class="listitem">
+<p>GDB version needed for ARM and PPC32/64.</p>
+<p>You must use a GDB version which is able to read XML
+ target description sent by a gdbserver. This is the standard setup
+ if GDB was configured and built with the "expat"
+ library. If your GDB was not configured with XML support, it
+ will report an error message when using the "target"
+ command. Debugging will not work because GDB will then not be
+ able to fetch the registers from the Valgrind gdbserver.
+ For ARM programs using the Thumb instruction set, you must use
+ a GDB version of 7.1 or later, as earlier versions have problems
+ with next/step/breakpoints in Thumb code.
+ </p>
+</li>
+<li class="listitem">
+<p>Stack unwinding on PPC32/PPC64. </p>
+<p>On PPC32/PPC64, stack unwinding for leaf functions
+ (functions that do not call any other functions) works properly
+ only when you give the option
+ <code class="option">--vex-iropt-register-updates=allregs-at-mem-access</code>
+ or <code class="option">--vex-iropt-register-updates=allregs-at-each-insn</code>.
+ You must also pass this option in order to get a precise stack when
+ a signal is trapped by GDB.
+ </p>
+</li>
+<li class="listitem">
+<p>Breakpoints encountered multiple times.</p>
+<p>Some instructions (e.g. x86 "rep movsb")
+ are translated by Valgrind using a loop. If a breakpoint is placed
+ on such an instruction, the breakpoint will be encountered
+ multiple times -- once for each step of the "implicit" loop
+ implementing the instruction.
+ </p>
+</li>
+<li class="listitem">
+<p>Execution of Inferior function calls by the Valgrind
+ gdbserver.</p>
+<p>GDB allows the user to "call" functions inside the process
+ being debugged. Such calls are named "inferior calls" in the GDB
+ terminology. A typical use of an inferior call is to execute
+ a function that prints a human-readable version of a complex data
+ structure. To make an inferior call, use the GDB "print" command
+ followed by the function to call and its arguments. As an
+ example, the following GDB command causes an inferior call to the
+ libc "printf" function to be executed by the process
+ being debugged:
+ </p>
+<pre class="programlisting">
+(gdb) p printf("process being debugged has pid %d\n", getpid())
+$5 = 36
+(gdb)
+</pre>
+<p>The Valgrind gdbserver supports inferior function calls.
+ Whilst an inferior call is running, the Valgrind tool will report
+ errors as usual. If you do not want to have such errors stop the
+ execution of the inferior call, you can
+ use <code class="computeroutput">v.set vgdb-error</code> to set a
+ large value before the call, then manually reset it to its original
+ value when the call is complete, as in the sketch below.</p>
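+<p>For example (a sketch; the function called and the values used
+   are purely illustrative):
+   </p>
+<pre class="programlisting">
+(gdb) monitor v.set vgdb-error 999999
+(gdb) print dump_state(&state)
+(gdb) monitor v.set vgdb-error 0
+</pre>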
+<p>To execute inferior calls, GDB changes registers such as
+ the program counter, and then continues the execution of the
+ program. In a multithreaded program, all threads are continued,
+ not just the thread instructed to make the inferior call. If
+ another thread reports an error or encounters a breakpoint, the
+ evaluation of the inferior call is abandoned.</p>
+<p>Note that inferior function calls are a powerful GDB
+ feature, but should be used with caution. For example, if
+ the program being debugged is stopped inside the function "printf",
+ forcing a recursive call to printf via an inferior call will
+ very probably create problems. The Valgrind tool might also add
+ another level of complexity to inferior calls, e.g. by reporting
+ tool errors during the inferior call or because of the
+ instrumentation it performs.
+ </p>
+</li>
+<li class="listitem">
+<p>Connecting to or interrupting a Valgrind process blocked in
+ a system call.</p>
+<p>Connecting to or interrupting a Valgrind process blocked in
+ a system call requires the "ptrace" system call to be usable.
+ This may be disabled in your kernel for security reasons.
+ </p>
+<p>When running your program, Valgrind's scheduler
+ periodically checks whether there is any work to be handled by
+ the gdbserver. Unfortunately this check is only done if at least
+ one thread of the process is runnable. If all the threads of the
+ process are blocked in a system call, then the checks do not
+ happen, and the Valgrind scheduler will not invoke the gdbserver.
+ In such a case, the vgdb relay application will "force" the
+ gdbserver to be invoked, without the intervention of the Valgrind
+ scheduler.
+ </p>
+<p>Such forced invocation of the Valgrind gdbserver is
+ implemented by vgdb using ptrace system calls. On a properly
+ implemented kernel, the ptrace calls done by vgdb will not
+ influence the behaviour of the program running under Valgrind.
+ If however they do, giving the
+ option <code class="option">--max-invoke-ms=0</code> to the vgdb relay
+ application will disable the usage of ptrace calls. The
+ consequence of disabling ptrace usage in vgdb is that a Valgrind
+ process blocked in a system call cannot be woken up or
+ interrupted from GDB until it executes enough basic blocks to let
+ the Valgrind scheduler's normal checking take effect.
+ </p>
+<p>When ptrace is disabled in vgdb, you can increase the
+ responsiveness of the Valgrind gdbserver to commands or
+ interrupts by giving a lower value to the
+ option <code class="option">--vgdb-poll</code>. If your application is
+ blocked in system calls most of the time, using a very low value
+ for <code class="option">--vgdb-poll</code> will cause a the gdbserver to be
+ invoked sooner. The gdbserver polling done by Valgrind's
+ scheduler is very efficient, so the increased polling frequency
+ should not cause significant performance degradation.
+ </p>
+<p>When ptrace is disabled in vgdb, a query packet sent by GDB
+ may take significant time to be handled by the Valgrind
+ gdbserver. In such cases, GDB might encounter a protocol
+ timeout. To avoid this,
+ you can increase the value of the timeout by using the GDB
+ command "set remotetimeout".
+ </p>
+<p>Ubuntu versions 10.10 and later may restrict the scope of
+ ptrace to the children of the process calling ptrace. As the
+ Valgrind process is not a child of vgdb, such restricted scoping
+ causes the ptrace calls to fail. To avoid that, Valgrind will
+ automatically allow all processes belonging to the same userid to
+ "ptrace" a Valgrind process, by using PR_SET_PTRACER.</p>
+<p>Unblocking processes blocked in system calls is not
+ currently implemented on Mac OS X and Android. So you cannot
+ connect to or interrupt a process blocked in a system call on Mac
+ OS X or Android.
+ </p>
+</li>
+<li class="listitem">
+<p>Changing register values.</p>
+<p>The Valgrind gdbserver will only modify the values of the
+ thread's registers when the thread is in status Runnable or
+ Yielding. In other states (typically, WaitSys), attempts to
+ change register values will fail. Amongst other things, this
+ means that inferior calls are not executed for a thread which is
+ in a system call, since the Valgrind gdbserver does not implement
+ system call restart.
+ </p>
+</li>
+<li class="listitem">
+<p>Unsupported GDB functionality.</p>
+<p>GDB provides a lot of debugging functionality and not all
+ of it is supported. Specifically, the following are not
+ supported: reversible debugging and tracepoints.
+ </p>
+</li>
+<li class="listitem">
+<p>Unknown limitations or problems.</p>
+<p>The combination of GDB, Valgrind and the Valgrind gdbserver
+ probably has unknown other limitations and problems. If you
+ encounter strange or unexpected behaviour, feel free to report a
+ bug. But first please verify that the limitation or problem is
+ not inherent to GDB or the GDB remote protocol. You may be able
+ to do so by checking the behaviour when using the standard gdbserver
+ that is part of the GDB package.
+ </p>
+</li>
+</ul></div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.vgdb"></a>3.2.9. vgdb command line options</h3></div></div></div>
+<p> Usage: <code class="computeroutput">vgdb [OPTION]... [[-c] COMMAND]...</code></p>
+<p> vgdb ("Valgrind to GDB") is a small program that is used as an
+intermediary between Valgrind and GDB or a shell.
+Therefore, it has two usage modes:
+</p>
+<div class="orderedlist">
+<a name="vgdb.desc.modes"></a><ol class="orderedlist" type="1">
+<li class="listitem"><p><a name="manual-core-adv.vgdb-standalone"></a>As a standalone utility, it is used from a shell command
+ line to send monitor commands to a process running under
+ Valgrind. For this usage, the vgdb OPTION(s) must be followed by
+ the monitor command to send. To send more than one command,
+ separate them with the <code class="option">-c</code> option.
+ </p></li>
+<li class="listitem"><p><a name="manual-core-adv.vgdb-relay"></a>In combination with GDB "target remote |" command, it is
+ used as the relay application between GDB and the Valgrind
+ gdbserver. For this usage, only OPTION(s) can be given, but no
+ COMMAND can be given.
+ </p></li>
+</ol>
+</div>
+<p><code class="computeroutput">vgdb</code> accepts the following
+options:</p>
+<div class="variablelist">
+<a name="vgdb.opts.list"></a><dl class="variablelist">
+<dt><span class="term"><code class="option">--pid=<number></code></span></dt>
+<dd><p>Specifies the PID of
+ the process to which vgdb must connect. This option is useful
+ when more than one Valgrind gdbserver can be connected to. If
+ the <code class="option">--pid</code> argument is not given and multiple
+ Valgrind gdbserver processes are running, vgdb will report the
+ list of such processes and then exit.</p></dd>
+<dt><span class="term"><code class="option">--vgdb-prefix</code></span></dt>
+<dd><p>Must be given to both
+ Valgrind and vgdb if you want to change the default prefix for the
+ FIFOs (named pipes) used for communication between the Valgrind
+ gdbserver and vgdb.</p></dd>
+<dt><span class="term"><code class="option">--wait=<number></code></span></dt>
+<dd><p>Instructs vgdb to
+ search for available Valgrind gdbservers for the specified number
+ of seconds. This makes it possible to start a vgdb process
+ before starting the Valgrind gdbserver with which you intend the
+ vgdb to communicate. This option is useful when used in
+ conjunction with a <code class="option">--vgdb-prefix</code> that is
+ unique to the process you want to wait for.
+ Also, if you use the <code class="option">--wait</code> argument in the GDB
+ "target remote" command, you must set the GDB remotetimeout to a
+ value bigger than the --wait argument value. See option
+ <code class="option">--max-invoke-ms</code> (just below)
+ for an example of setting the remotetimeout value.</p></dd>
+<dt><span class="term"><code class="option">--max-invoke-ms=<number></code></span></dt>
+<dd>
+<p>Gives the
+ number of milliseconds after which vgdb will force the invocation
+ of gdbserver embedded in Valgrind. The default value is 100
+ milliseconds. A value of 0 disables forced invocation. The forced
+ invocation is used when vgdb is connected to a Valgrind gdbserver,
+ and the Valgrind process has all its threads blocked in a system
+ call.
+ </p>
+<p>If you specify a large value, you might need to increase the
+ GDB "remotetimeout" value from its default value of 2 seconds.
+ You should ensure that the timeout (in seconds) is
+ bigger than the <code class="option">--max-invoke-ms</code> value. For
+ example, for <code class="option">--max-invoke-ms=5000</code>, the following
+ GDB command is suitable:
+ </p>
+<pre class="screen">
+ (gdb) set remotetimeout 6
+ </pre>
+<p>
+ </p>
+</dd>
+<dt><span class="term"><code class="option">--cmd-time-out=<number></code></span></dt>
+<dd><p>Instructs a
+ standalone vgdb to exit if the Valgrind gdbserver it is connected
+ to does not process a command in the specified number of seconds.
+ The default value is to never time out.</p></dd>
+<dt><span class="term"><code class="option">--port=<portnr></code></span></dt>
+<dd>
+<p>Instructs vgdb to
+ use tcp/ip and listen for GDB on the specified port number rather than
+ using a pipe to communicate with GDB. Using tcp/ip makes it possible to have
+ GDB running on one computer while debugging a Valgrind process
+ running on another target computer.
+ Example:
+ </p>
+<pre class="screen">
+# On the target computer, start your program under valgrind using
+valgrind --vgdb-error=0 prog
+# and then in another shell, run:
+vgdb --port=1234
+</pre>
+<p>On the computer which hosts GDB, execute the command:
+ </p>
+<pre class="screen">
+gdb prog
+(gdb) target remote targetip:1234
+</pre>
+<p>
+ where targetip is the ip address or hostname of the target computer.
+ </p>
+</dd>
+<dt><span class="term"><code class="option">-c</code></span></dt>
+<dd>
+<p>To give more than one command to a
+ standalone vgdb, separate the commands by an
+ option <code class="option">-c</code>. Example:
+ </p>
+<pre class="screen">
+vgdb v.set log_output -c leak_check any
+</pre>
+</dd>
+<dt><span class="term"><code class="option">-l</code></span></dt>
+<dd><p>Instructs a standalone vgdb to report
+ the list of the Valgrind gdbserver processes running and then
+ exit.</p></dd>
+<dt><span class="term"><code class="option">-D</code></span></dt>
+<dd><p>Instructs a standalone vgdb to show the
+ state of the shared memory used by the Valgrind gdbserver. vgdb
+ will exit after having shown the Valgrind gdbserver shared memory
+ state.</p></dd>
+<dt><span class="term"><code class="option">-d</code></span></dt>
+<dd><p>Instructs vgdb to produce debugging
+ output. Give multiple <code class="option">-d</code> args to increase the
+ verbosity. When giving <code class="option">-d</code> to a relay vgdb, you should
+ redirect the standard error (stderr) of vgdb to a file to avoid
+ interference between GDB and vgdb debugging output.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.valgrind-monitor-commands"></a>3.2.10. Valgrind monitor commands</h3></div></div></div>
+<p>This section describes the Valgrind monitor commands, available
+regardless of the Valgrind tool selected. For the tool specific
+commands, refer to <a class="xref" href="mc-manual.html#mc-manual.monitor-commands" title="4.6. Memcheck Monitor Commands">Memcheck Monitor Commands</a>,
+<a class="xref" href="hg-manual.html#hg-manual.monitor-commands" title="7.7. Helgrind Monitor Commands">Helgrind Monitor Commands</a>,
+<a class="xref" href="cl-manual.html#cl-manual.monitor-commands" title="6.4. Callgrind Monitor Commands">Callgrind Monitor Commands</a> and
+<a class="xref" href="ms-manual.html#ms-manual.monitor-commands" title="9.4. Massif Monitor Commands">Massif Monitor Commands</a>. </p>
+<p> The monitor commands can be sent either from a shell command line, by using a
+standalone vgdb, or from GDB, by using GDB's "monitor"
+command (see <a class="xref" href="manual-core-adv.html#manual-core-adv.gdbserver-commandhandling" title="3.2.5. Monitor command handling by the Valgrind gdbserver">Monitor command handling by the Valgrind gdbserver</a>).
+They can also be launched by the client program, using the VALGRIND_MONITOR_COMMAND
+client request.
+</p>
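+<p>As an illustration, here is a minimal sketch of a client program
+issuing a monitor command itself via the client request (it assumes
+<code class="filename">valgrind.h</code> is on the include path; the marker
+text is illustrative):
+</p>
+<pre class="programlisting">
+#include "valgrind.h"
+
+int main ( void )
+{
+   /* Ask the Valgrind core to report the error counts so far,
+      appending a marker to the output.  When the program is not
+      running under Valgrind, the request is a no-op. */
+   VALGRIND_MONITOR_COMMAND("v.info n_errs_found phase-1 done");
+   /* ... rest of the program ... */
+   return 0;
+}
+</pre>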
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="varname">help [debug]</code> instructs Valgrind's gdbserver
+ to give the list of all monitor commands of the Valgrind core and
+ of the tool. The optional "debug" argument tells it to also give help
+ for the monitor commands aimed at debugging Valgrind internals.
+ </p></li>
+<li class="listitem"><p><code class="varname">v.info all_errors</code> shows all errors found
+ so far.</p></li>
+<li class="listitem"><p><code class="varname">v.info last_error</code> shows the last error
+ found.</p></li>
+<li class="listitem">
+<p><code class="varname">v.info location <addr></code> outputs
+ information about the location <addr>. Depending on the location, the
+ following may be described: global variables, local (stack)
+ variables, allocated or freed blocks, ... The information
+ produced depends on the tool and on the options given to valgrind.
+ Some tools (e.g. memcheck and helgrind) produce more detailed
+ information for client heap blocks. For example, these tools show
+ the stacktrace where the heap block was allocated. If a tool does
+ not replace the malloc/free/... functions, then client heap blocks
+ will not be described. Use the
+ option <code class="varname">--read-var-info=yes</code> to obtain more
+ detailed information about global or local (stack) variables.
+ </p>
+<pre class="programlisting">
+(gdb) monitor v.info location 0x8050b20
+ Location 0x8050b20 is 0 bytes inside global var "mx"
+ declared at tc19_shadowmem.c:19
+
+(gdb) mo v.in loc 0x582f33c
+ Location 0x582f33c is 0 bytes inside local var "info"
+ declared at tc19_shadowmem.c:282, in frame #1 of thread 3
+(gdb)
+</pre>
+</li>
+<li class="listitem"><p><code class="varname">v.info n_errs_found [msg]</code> shows the number of
+ errors found so far, the number of errors shown so far and the current
+ value of the <code class="option">--vgdb-error</code> argument. The optional
+ <code class="computeroutput">msg</code> (one or more words) is appended.
+ Typically, this can be used to insert markers in a process output
+ file between several tests executed in sequence by a process
+ started only once. This makes it possible to associate the errors reported
+ by Valgrind with the specific test that produced them (see the
+ example session after this list).
+ </p></li>
+<li class="listitem"><p><code class="varname">v.info open_fds</code> shows the list of open file
+ descriptors and details related to each file descriptor.
+ This only works if <code class="option">--track-fds=yes</code>
+ was given at Valgrind startup.</p></li>
+<li class="listitem">
+<p><code class="varname">v.set {gdb_output | log_output |
+ mixed_output}</code> allows redirection of the Valgrind output
+ (e.g. the errors detected by the tool). The default setting is
+ <code class="computeroutput">mixed_output</code>.</p>
+<p>With <code class="computeroutput">mixed_output</code>, the
+ Valgrind output goes to the Valgrind log (typically stderr) while
+ the output of the interactive GDB monitor commands (e.g.
+ <code class="computeroutput">v.info last_error</code>)
+ is displayed by GDB.</p>
+<p>With <code class="computeroutput">gdb_output</code>, both the
+ Valgrind output and the interactive GDB monitor commands output are
+ displayed by GDB.</p>
+<p>With <code class="computeroutput">log_output</code>, both the
+ Valgrind output and the interactive GDB monitor commands output go
+ to the Valgrind log.</p>
+</li>
+<li class="listitem"><p><code class="varname">v.wait [ms (default 0)]</code> instructs
+ the Valgrind gdbserver to sleep "ms" milliseconds and then
+ continue. When sent from a standalone vgdb, if this is the last
+ command, the Valgrind process will continue the execution of the
+ guest process. The typical usage of this is to use vgdb to send a
+ "no-op" command to a Valgrind gdbserver so as to continue the
+ execution of the guest process.
+ </p></li>
+<li class="listitem"><p><code class="varname">v.kill</code> requests the gdbserver to kill
+ the process. This can be used from a standalone vgdb to properly
+ kill a Valgrind process which is currently expecting a vgdb
+ connection.</p></li>
+<li class="listitem"><p><code class="varname">v.set vgdb-error <errornr></code>
+ dynamically changes the value of the
+ <code class="option">--vgdb-error</code> argument. A
+ typical usage of this is to start with
+ <code class="option">--vgdb-error=0</code> on the
+ command line, then set a few breakpoints, set the vgdb-error value
+ to a huge value and continue execution.</p></li>
+</ul></div>
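+<p>For example, a driver script could use a standalone vgdb to insert
+such markers around each test while the process keeps running under
+Valgrind (the test names and commands are illustrative):</p>
+<pre class="screen">
+vgdb v.info n_errs_found starting test A
+# ... make the process under Valgrind execute test A ...
+vgdb v.info n_errs_found finished test A
+</pre>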
+<p>The following Valgrind monitor commands are useful for
+investigating the behaviour of Valgrind or its gdbserver in case of
+problems or bugs.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="varname">v.do expensive_sanity_check_general</code>
+ executes various sanity checks. In particular, the sanity of the
+ Valgrind heap is verified. This can be useful if you suspect that
+ your program and/or Valgrind has a bug corrupting Valgrind's data
+ structures. It can also be used when a Valgrind tool
+ reports a client error to the connected GDB, in order to verify
+ the sanity of Valgrind before continuing the execution.
+ </p></li>
+<li class="listitem"><p><code class="varname">v.info gdbserver_status</code> shows the
+ gdbserver status. In case of problems (e.g. of communications),
+ this shows the values of some relevant Valgrind gdbserver internal
+ variables. Note that the variables related to breakpoints and
+ watchpoints (e.g. the number of breakpoint addresses and the number of
+ watchpoints) will be zero, as GDB by default removes all
+ watchpoints and breakpoints when execution stops, and re-inserts
+ them when resuming the execution of the debugged process. You can
+ change this GDB behaviour by using the GDB command
+ <code class="computeroutput">set breakpoint always-inserted on</code>.
+ </p></li>
+<li class="listitem"><p><code class="varname">v.info memory [aspacemgr]</code> shows the statistics of
+ Valgrind's internal heap management. If
+ option <code class="option">--profile-heap=yes</code> was given, detailed
+ statistics will be output. With the optional argument
+ <code class="computeroutput">aspacemgr</code>. the segment list maintained
+ by valgrind address space manager will be output. Note that
+ this list of segments is always output on the Valgrind log.
+ </p></li>
+<li class="listitem"><p><code class="varname">v.info exectxt</code> shows informations about
+ the "executable contexts" (i.e. the stack traces) recorded by
+ Valgrind. For some programs, Valgrind can record a very high
+ number of such stack traces, causing a high memory usage. This
+ monitor command shows all the recorded stack traces, followed by
+ some statistics. This can be used to analyse the reason for having
+ a large number of stack traces. Typically, you will use this command
+ if <code class="varname">v.info memory</code> has shown significant memory
+ usage by the "exectxt" arena.
+ </p></li>
+<li class="listitem">
+<p><code class="varname">v.info scheduler</code> shows various
+ information about threads. First, it outputs the host stack trace,
+ i.e. the Valgrind code being executed. Then, for each thread, it
+ outputs the thread state. For non-terminated threads, the state is
+ followed by the guest (client) stack trace. Finally, for each
+ active thread or for each terminated thread slot not yet re-used,
+ it shows the maximum usage of the Valgrind stack.</p>
+<p>Showing the client stack traces makes it possible to compare the stack
+ traces produced by the Valgrind unwinder with the stack traces
+ produced by GDB+Valgrind gdbserver. Note that GDB and the
+ Valgrind scheduler each have their own thread numbering
+ scheme. To make the link between the GDB thread number and the
+ corresponding Valgrind scheduler thread number, use the GDB
+ command <code class="computeroutput">info threads</code>. The output
+ of this command shows the GDB thread number and the valgrind
+ 'tid'. The 'tid' is the thread number output
+ by <code class="computeroutput">v.info scheduler</code>. When using
+ the callgrind tool, the callgrind monitor command
+ <code class="computeroutput">status</code> outputs internal callgrind
+ information about the stack/call graph it maintains.
+ </p>
+</li>
+<li class="listitem"><p><code class="varname">v.info stats</code> shows various valgrind core and
+ tool statistics. With this, Valgrind and tool statistics can
+ be examined while running, even without option <code class="option">--stats=yes</code>.
+ </p></li>
+<li class="listitem"><p><code class="varname">v.info unwind <addr> [<len>]</code> shows
+ the CFI unwind debug info for the address range [addr, addr+len-1].
+ The default value of <len> is 1, giving the unwind information
+ for the instruction at <addr>.
+ </p></li>
+<li class="listitem"><p><code class="varname">v.set debuglog <intvalue></code> sets the
+ Valgrind debug log level to <intvalue>. This makes it possible to
+ dynamically change the log level of Valgrind, e.g. when a problem
+ is detected.</p></li>
+<li class="listitem">
+<p><code class="varname">v.set hostvisibility [yes*|no]</code> The value
+ "yes" indicates to gdbserver that GDB can look at the Valgrind
+ 'host' (internal) status/memory. "no" disables this access.
+ When hostvisibility is activated, GDB can e.g. look at Valgrind
+ global variables. As an example, to examine a Valgrind global
+ variable of the memcheck tool on an x86, do the following setup:</p>
+<pre class="screen">
+(gdb) monitor v.set hostvisibility yes
+(gdb) add-symbol-file /path/to/tool/executable/file/memcheck-x86-linux 0x38000000
+add symbol table from file "/path/to/tool/executable/file/memcheck-x86-linux" at
+ .text_addr = 0x38000000
+(y or n) y
+Reading symbols from /path/to/tool/executable/file/memcheck-x86-linux...done.
+(gdb)
+</pre>
+<p>After that, variables defined in memcheck-x86-linux can be accessed, e.g.</p>
+<pre class="screen">
+(gdb) p /x vgPlain_threads[1].os_state
+$3 = {lwpid = 0x4688, threadgroup = 0x4688, parent = 0x0,
+ valgrind_stack_base = 0x62e78000, valgrind_stack_init_SP = 0x62f79fe0,
+ exitcode = 0x0, fatalsig = 0x0}
+(gdb) p vex_control
+$5 = {iropt_verbosity = 0, iropt_level = 2,
+ iropt_register_updates = VexRegUpdUnwindregsAtMemAccess,
+ iropt_unroll_thresh = 120, guest_max_insns = 60, guest_chase_thresh = 10,
+ guest_chase_cond = 0 '\000'}
+(gdb)
+</pre>
+</li>
+<li class="listitem">
+<p><code class="varname">v.translate <address>
+ [<traceflags>]</code> shows the translation of the block
+ containing <code class="computeroutput">address</code> with the given
+ trace flags. The <code class="computeroutput">traceflags</code> value
+ bit patterns have similar meaning to Valgrind's
+ <code class="option">--trace-flags</code> option. It can be given
+ in hexadecimal (e.g. 0x20), decimal (e.g. 32) or binary
+ (e.g. 0b00100000). The default value of the traceflags
+ is 0b00100000, corresponding to "show after instrumentation".
+ The output of this command always goes to the Valgrind
+ log.</p>
+<p>The additional bit flag 0b100000000 (bit 8)
+ has no equivalent in the <code class="option">--trace-flags</code> option.
+ It enables tracing of the gdbserver specific instrumentation. Note
+ that this bit 8 can only enable the addition of gdbserver
+ instrumentation in the trace. Setting it to 0 will not
+ disable the tracing of the gdbserver instrumentation if it is
+ active for some other reason, for example because there is a breakpoint at
+ this address or because gdbserver is in single stepping
+ mode.</p>
+</li>
+</ul></div>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core-adv.wrapping"></a>3.3. Function wrapping</h2></div></div></div>
+<p>
+Valgrind allows calls to some specified functions to be intercepted and
+rerouted to a different, user-supplied function. This can do whatever it
+likes, typically examining the arguments, calling onwards to the original,
+and possibly examining the result. Any number of functions may be
+wrapped.</p>
+<p>
+Function wrapping is useful for instrumenting an API in some way. For
+example, Helgrind wraps functions in the POSIX pthreads API so it can know
+about thread status changes, and the core is able to wrap
+functions in the MPI (message-passing) API so it can know
+of memory status changes associated with message arrival/departure.
+Such information is usually passed to Valgrind by using client
+requests in the wrapper functions, although the exact mechanism may vary.
+</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.wrapping.example"></a>3.3.1. A Simple Example</h3></div></div></div>
+<p>Supposing we want to wrap some function</p>
+<pre class="programlisting">
+int foo ( int x, int y ) { return x + y; }</pre>
+<p>A wrapper is a function of identical type, but with a special name
+which identifies it as the wrapper for <code class="computeroutput">foo</code>.
+Wrappers need to include
+supporting macros from <code class="filename">valgrind.h</code>.
+Here is a simple wrapper which prints the arguments and return value:</p>
+<pre class="programlisting">
+#include <stdio.h>
+#include "valgrind.h"
+int I_WRAP_SONAME_FNNAME_ZU(NONE,foo)( int x, int y )
+{
+ int result;
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ printf("foo's wrapper: args %d %d\n", x, y);
+ CALL_FN_W_WW(result, fn, x,y);
+ printf("foo's wrapper: result %d\n", result);
+ return result;
+}
+</pre>
+<p>To become active, the wrapper merely needs to be present in a text
+section somewhere in the same process' address space as the function
+it wraps, and for its ELF symbol name to be visible to Valgrind. In
+practice, this means either compiling to a
+<code class="computeroutput">.o</code> and linking it in, or
+compiling to a <code class="computeroutput">.so</code> and
+<code class="computeroutput">LD_PRELOAD</code>ing it in. The latter is more
+convenient in that it doesn't require relinking.</p>
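+<p>As a concrete sketch, the shared-object route could look like this
+(the file names are hypothetical, and the include path for
+<code class="filename">valgrind.h</code> may differ on your system):</p>
+<pre class="screen">
+gcc -g -fPIC -shared -I/usr/include/valgrind -o wrapfoo.so wrapfoo.c
+LD_PRELOAD=./wrapfoo.so valgrind ./prog
+</pre>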
+<p>All wrappers have approximately the above form. There are three
+crucial macros:</p>
+<p><code class="computeroutput">I_WRAP_SONAME_FNNAME_ZU</code>:
+this generates the real name of the wrapper.
+This is an encoded name which Valgrind notices when reading symbol
+table information. What it says is: I am the wrapper for any function
+named <code class="computeroutput">foo</code> which is found in
+an ELF shared object with an empty
+("<code class="computeroutput">NONE</code>") soname field. The specification
+mechanism is powerful in
+that wildcards are allowed for both sonames and function names.
+The details are discussed below.</p>
+<p><code class="computeroutput">VALGRIND_GET_ORIG_FN</code>:
+once in the wrapper, the first priority is
+to get hold of the address of the original (and any other supporting
+information needed). This is stored in a value of opaque
+type <code class="computeroutput">OrigFn</code>.
+The information is acquired using
+<code class="computeroutput">VALGRIND_GET_ORIG_FN</code>. It is crucial
+to make this macro call before calling any other wrapped function
+in the same thread.</p>
+<p><code class="computeroutput">CALL_FN_W_WW</code>: eventually we will
+want to call the function being
+wrapped. Calling it directly does not work, since that just gets us
+back to the wrapper and leads to an infinite loop. Instead, the result
+lvalue,
+<code class="computeroutput">OrigFn</code> and arguments are
+handed to one of a family of macros of the form
+<code class="computeroutput">CALL_FN_*</code>. These
+cause Valgrind to call the original and avoid recursion back to the
+wrapper.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.wrapping.specs"></a>3.3.2. Wrapping Specifications</h3></div></div></div>
+<p>This scheme has the advantage of being self-contained. A library of
+wrappers can be compiled to object code in the normal way, and does
+not rely on an external script telling Valgrind which wrappers pertain
+to which originals.</p>
+<p>Each wrapper has a name which, in the most general case says: I am the
+wrapper for any function whose name matches FNPATT and whose ELF
+"soname" matches SOPATT. Both FNPATT and SOPATT may contain wildcards
+(asterisks) and other characters (spaces, dots, @, etc) which are not
+generally regarded as valid C identifier names.</p>
+<p>This flexibility is needed to write robust wrappers for POSIX pthread
+functions, where typically we are not completely sure of either the
+function name or the soname, or alternatively we want to wrap a whole
+set of functions at once.</p>
+<p>For example, <code class="computeroutput">pthread_create</code>
+in GNU libpthread is usually a
+versioned symbol - one whose name ends in, e.g.,
+<code class="computeroutput">@GLIBC_2.3</code>. Hence we
+are not sure what its real name is. We also want to cover any soname
+of the form <code class="computeroutput">libpthread.so*</code>.
+So the header of the wrapper will be</p>
+<pre class="programlisting">
+int I_WRAP_SONAME_FNNAME_ZZ(libpthreadZdsoZd0,pthreadZucreateZAZa)
+ ( ... formals ... )
+ { ... body ... }
+</pre>
+<p>In order to write unusual characters as valid C function names, a
+Z-encoding scheme is used. Names are written literally, except that
+a capital Z acts as an escape character, with the following encoding:</p>
+<pre class="programlisting">
+ Za encodes *
+ Zp +
+ Zc :
+ Zd .
+ Zu _
+ Zh -
+ Zs (space)
+ ZA @
+ ZZ Z
+ ZL ( # only in valgrind 3.3.0 and later
+ ZR ) # only in valgrind 3.3.0 and later
+</pre>
+<p>Hence <code class="computeroutput">libpthreadZdsoZd0</code> is an
+encoding of the soname <code class="computeroutput">libpthread.so.0</code>
+and <code class="computeroutput">pthreadZucreateZAZa</code> is an encoding
+of the function name <code class="computeroutput">pthread_create@*</code>.
+</p>
+<p>The macro <code class="computeroutput">I_WRAP_SONAME_FNNAME_ZZ</code>
+constructs a wrapper name in which
+both the soname (first component) and function name (second component)
+are Z-encoded. Encoding the function name can be tiresome and is
+often unnecessary, so a second macro,
+<code class="computeroutput">I_WRAP_SONAME_FNNAME_ZU</code>, can be
+used instead. The <code class="computeroutput">_ZU</code> variant is
+also useful for writing wrappers for
+C++ functions, in which the function name is usually already mangled
+using some other convention in which Z plays an important role. Having
+to encode a second time quickly becomes confusing.</p>
+<p>Since the function name field may contain wildcards, it can be
+anything, including just <code class="computeroutput">*</code>.
+The same is true for the soname.
+However, some ELF objects - specifically, main executables - do not
+have sonames. Any object lacking a soname is treated as if its soname
+was <code class="computeroutput">NONE</code>, which is why the original
+example above had a name
+<code class="computeroutput">I_WRAP_SONAME_FNNAME_ZU(NONE,foo)</code>.</p>
+<p>Note that the soname of an ELF object is not the same as its
+file name, although it is often similar. You can find the soname of
+an object <code class="computeroutput">libfoo.so</code> using the command
+<code class="computeroutput">readelf -a libfoo.so | grep soname</code>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.wrapping.semantics"></a>3.3.3. Wrapping Semantics</h3></div></div></div>
+<p>The ability for a wrapper to replace an infinite family of functions
+is powerful but brings complications in situations where ELF objects
+appear and disappear (are dlopen'd and dlclose'd) on the fly.
+Valgrind tries to maintain sensible behaviour in such situations.</p>
+<p>For example, suppose a process has dlopened (an ELF object with
+soname) <code class="filename">object1.so</code>, which contains
+<code class="computeroutput">function1</code>. It starts to use
+<code class="computeroutput">function1</code> immediately.</p>
+<p>After a while it dlopens <code class="filename">wrappers.so</code>,
+which contains a wrapper
+for <code class="computeroutput">function1</code> in (soname)
+<code class="filename">object1.so</code>. All subsequent calls to
+<code class="computeroutput">function1</code> are rerouted to the wrapper.</p>
+<p>If <code class="filename">wrappers.so</code> is
+later dlclose'd, calls to <code class="computeroutput">function1</code> are
+naturally routed back to the original.</p>
+<p>Alternatively, if <code class="filename">object1.so</code>
+is dlclose'd but <code class="filename">wrappers.so</code> remains,
+then the wrapper exported by <code class="filename">wrappers.so</code>
+becomes inactive, since there
+is no way to get to it - there is no original to call any more. However,
+Valgrind remembers that the wrapper is still present. If
+<code class="filename">object1.so</code> is
+eventually dlopen'd again, the wrapper will become active again.</p>
+<p>In short, Valgrind inspects all code loading/unloading events to
+ensure that the set of currently active wrappers remains consistent.</p>
+<p>A second possible problem is that of conflicting wrappers. It is
+easily possible to load two or more wrappers, each of which claims
+to be a wrapper for some third function. In such cases Valgrind will
+complain about conflicting wrappers when the second one appears, and
+will honour only the first one.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.wrapping.debugging"></a>3.3.4. Debugging</h3></div></div></div>
+<p>Figuring out what's going on given the dynamic nature of wrapping
+can be difficult. The
+<code class="option">--trace-redir=yes</code> option makes
+this possible
+by showing the complete state of the redirection subsystem after
+every
+<code class="function">mmap</code>/<code class="function">munmap</code>
+event affecting code (text).</p>
+<p>There are two central concepts:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>A "redirection specification" is a binding of
+ a (soname pattern, fnname pattern) pair to a code address.
+ These bindings are created by writing functions with names
+ made with the
+ <code class="computeroutput">I_WRAP_SONAME_FNNAME_{ZZ,_ZU}</code>
+ macros.</p></li>
+<li class="listitem"><p>An "active redirection" is a code-address to
+ code-address binding currently in effect.</p></li>
+</ul></div>
+<p>The state of the wrapping-and-redirection subsystem comprises a set of
+specifications and a set of active bindings. The specifications are
+acquired/discarded by watching all
+<code class="function">mmap</code>/<code class="function">munmap</code>
+events on code (text)
+sections. The active binding set is (conceptually) recomputed from
+the specifications, and all known symbol names, following any change
+to the specification set.</p>
+<p><code class="option">--trace-redir=yes</code> shows the contents
+of both sets following any such event.</p>
+<p><code class="option">-v</code> prints a line of text each
+time an active specification is used for the first time.</p>
+<p>Hence for maximum debugging effectiveness you will need to use both
+options.</p>
+<p>One final comment. The function-wrapping facility is closely
+tied to Valgrind's ability to replace (redirect) specified
+functions, for example to redirect calls to
+<code class="function">malloc</code> to its
+own implementation. Indeed, a replacement function can be
+regarded as a wrapper function which does not call the original.
+However, to make the implementation more robust, the two kinds
+of interception (wrapping vs replacement) are treated differently.
+</p>
+<p><code class="option">--trace-redir=yes</code> shows
+specifications and bindings for both
+replacement and wrapper functions. To differentiate the
+two, replacement bindings are printed using
+<code class="computeroutput">R-></code> whereas
+wraps are printed using <code class="computeroutput">W-></code>.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.wrapping.limitations-cf"></a>3.3.5. Limitations - control flow</h3></div></div></div>
+<p>For the most part, the function wrapping implementation is robust.
+The only important caveat is: in a wrapper, get hold of
+the <code class="computeroutput">OrigFn</code> information using
+<code class="computeroutput">VALGRIND_GET_ORIG_FN</code> before calling any
+other wrapped function. Once you have the
+<code class="computeroutput">OrigFn</code>, arbitrary
+calls between, recursion between, and longjumps out of wrappers
+should work correctly. There is never any interaction between wrapped
+functions and merely replaced functions
+(eg <code class="function">malloc</code>), so you can call
+<code class="function">malloc</code> etc safely from within wrappers.
+</p>
+<p>The above comments are true for {x86,amd64,ppc32,arm,mips32,s390}-linux.
+On
+ppc64-linux function wrapping is more fragile due to the (arguably
+poorly designed) ppc64-linux ABI. This mandates the use of a shadow
+stack which tracks entries/exits of both wrapper and replacement
+functions. This gives two limitations: firstly, longjumping out of
+wrappers will rapidly lead to disaster, since the shadow stack will
+not get correctly cleared. Secondly, since the shadow stack has
+finite size, recursion between wrapper/replacement functions is only
+possible to a limited depth, beyond which Valgrind has to abort the
+run. This depth is currently 16 calls.</p>
+<p>For all platforms ({x86,amd64,ppc32,ppc64,arm,mips32,s390}-linux)
+all the above
+comments apply on a per-thread basis. In other words, wrapping is
+thread-safe: each thread must individually observe the above
+restrictions, but there is no need for any kind of inter-thread
+cooperation.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.wrapping.limitations-sigs"></a>3.3.6. Limitations - original function signatures</h3></div></div></div>
+<p>As shown in the above example, to call the original you must use a
+macro of the form <code class="computeroutput">CALL_FN_*</code>.
+For technical reasons it is impossible
+to create a single macro to deal with all argument types and numbers,
+so a family of macros covering the most common cases is supplied. In
+what follows, 'W' denotes a machine-word-typed value (a pointer or a
+C <code class="computeroutput">long</code>),
+and 'v' denotes C's <code class="computeroutput">void</code> type.
+The currently available macros are:</p>
+<pre class="programlisting">
+CALL_FN_v_v -- call an original of type void fn ( void )
+CALL_FN_W_v -- call an original of type long fn ( void )
+
+CALL_FN_v_W -- call an original of type void fn ( long )
+CALL_FN_W_W -- call an original of type long fn ( long )
+
+CALL_FN_v_WW -- call an original of type void fn ( long, long )
+CALL_FN_W_WW -- call an original of type long fn ( long, long )
+
+CALL_FN_v_WWW -- call an original of type void fn ( long, long, long )
+CALL_FN_W_WWW -- call an original of type long fn ( long, long, long )
+
+CALL_FN_W_WWWW -- call an original of type long fn ( long, long, long, long )
+CALL_FN_W_5W -- call an original of type long fn ( long, long, long, long, long )
+CALL_FN_W_6W -- call an original of type long fn ( long, long, long, long, long, long )
+and so on, up to
+CALL_FN_W_12W
+</pre>
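+<p>For instance, here is a sketch of a wrapper for a hypothetical
+three-argument function <code class="computeroutput">int bar ( int, int, int )</code>
+in an object with an empty soname, using
+<code class="computeroutput">CALL_FN_W_WWW</code>:</p>
+<pre class="programlisting">
+#include <stdio.h>
+#include "valgrind.h"
+int I_WRAP_SONAME_FNNAME_ZU(NONE,bar)( int a, int b, int c )
+{
+   int    result;
+   OrigFn fn;
+   VALGRIND_GET_ORIG_FN(fn);
+   CALL_FN_W_WWW(result, fn, a, b, c);
+   printf("bar's wrapper: %d %d %d -> %d\n", a, b, c, result);
+   return result;
+}
+</pre>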
+<p>The set of supported types can be expanded as needed. It is
+regrettable that this limitation exists. Function wrapping has proven
+difficult to implement, with a certain apparently unavoidable level of
+ickiness. After several implementation attempts, the present
+arrangement appears to be the least-worst tradeoff. At least it works
+reliably in the presence of dynamic linking and dynamic code
+loading/unloading.</p>
+<p>You should not attempt to wrap a function of one type signature with a
+wrapper of a different type signature. Such trickery will surely lead
+to crashes or strange behaviour. This is not a limitation
+of the function wrapping implementation, merely a reflection of the
+fact that it gives you sweeping powers to shoot yourself in the foot
+if you are not careful. Imagine the instant havoc you could wreak by
+writing a wrapper which matched any function name in any soname - in
+effect, one which claimed to be a wrapper for all functions in the
+process.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core-adv.wrapping.examples"></a>3.3.7. Examples</h3></div></div></div>
+<p>In the source tree,
+<code class="filename">memcheck/tests/wrap[1-8].c</code> provide a series of
+examples, ranging from very simple to quite advanced.</p>
+<p><code class="filename">mpi/libmpiwrap.c</code> is an example
+of wrapping a big, complex API (the MPI-2 interface). This file defines
+almost 300 different wrappers.</p>
+</div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="manual-core.html"><< 2. Using and understanding the Valgrind core</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="mc-manual.html">4. Memcheck: a memory error detector >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/manual-core.html b/docs/html/manual-core.html
new file mode 100644
index 0000000..480d038
--- /dev/null
+++ b/docs/html/manual-core.html
@@ -0,0 +1,2658 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>2. Using and understanding the Valgrind core</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="manual-intro.html" title="1. Introduction">
+<link rel="next" href="manual-core-adv.html" title="3. Using and understanding the Valgrind core: Advanced Topics">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="manual-intro.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="manual-core-adv.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="manual-core"></a>2. Using and understanding the Valgrind core</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="manual-core.html#manual-core.whatdoes">2.1. What Valgrind does with your program</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.started">2.2. Getting started</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.comment">2.3. The Commentary</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.report">2.4. Reporting of errors</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.suppress">2.5. Suppressing errors</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.options">2.6. Core Command-line Options</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.toolopts">2.6.1. Tool-selection Option</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.basicopts">2.6.2. Basic Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.erropts">2.6.3. Error-related Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.mallocopts">2.6.4. malloc-related Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.rareopts">2.6.5. Uncommon Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.debugopts">2.6.6. Debugging Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.defopts">2.6.7. Setting Default Options</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.pthreads">2.7. Support for Threads</a></span></dt>
+<dd><dl><dt><span class="sect2"><a href="manual-core.html#manual-core.pthreads_perf_sched">2.7.1. Scheduling and Multi-Thread Performance</a></span></dt></dl></dd>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.signals">2.8. Handling of Signals</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.install">2.9. Building and Installing Valgrind</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.problems">2.10. If You Have Problems</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.limits">2.11. Limitations</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.example">2.12. An Example Run</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.warnings">2.13. Warning Messages You Might See</a></span></dt>
+</dl>
+</div>
+<p>This chapter describes the Valgrind core services, command-line
+options and behaviours. That means it is relevant regardless of what
+particular tool you are using. The information should be sufficient for you
+to make effective day-to-day use of Valgrind. Advanced topics related to
+the Valgrind core are described in <a class="xref" href="manual-core-adv.html" title="3. Using and understanding the Valgrind core: Advanced Topics">Valgrind's core: advanced topics</a>.
+</p>
+<p>
+A point of terminology: most references to "Valgrind" in this chapter
+refer to the Valgrind core services. </p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.whatdoes"></a>2.1. What Valgrind does with your program</h2></div></div></div>
+<p>Valgrind is designed to be as non-intrusive as possible. It works
+directly with existing executables. You don't need to recompile, relink,
+or otherwise modify the program to be checked.</p>
+<p>You invoke Valgrind like this:</p>
+<pre class="programlisting">
+valgrind [valgrind-options] your-prog [your-prog-options]</pre>
+<p>The most important option is <code class="option">--tool</code> which dictates
+which Valgrind tool to run. For example, if you want to run the command
+<code class="computeroutput">ls -l</code> using the memory-checking tool
+Memcheck, issue this command:</p>
+<pre class="programlisting">
+valgrind --tool=memcheck ls -l</pre>
+<p>However, Memcheck is the default, so if you want to use it you can
+omit the <code class="option">--tool</code> option.</p>
+<p>Regardless of which tool is in use, Valgrind takes control of your
+program before it starts. Debugging information is read from the
+executable and associated libraries, so that error messages and other
+outputs can be phrased in terms of source code locations, when
+appropriate.</p>
+<p>Your program is then run on a synthetic CPU provided by the
+Valgrind core. As new code is executed for the first time, the core
+hands the code to the selected tool. The tool adds its own
+instrumentation code to this and hands the result back to the core,
+which coordinates the continued execution of this instrumented
+code.</p>
+<p>The amount of instrumentation code added varies widely between
+tools. At one end of the scale, Memcheck adds code to check every
+memory access and every value computed,
+making it run 10-50 times slower than natively.
+At the other end of the spectrum, the minimal tool, called Nulgrind,
+adds no instrumentation at all and causes in total "only" about a 4 times
+slowdown.</p>
+<p>Valgrind simulates every single instruction your program executes.
+Because of this, the active tool checks, or profiles, not only the code
+in your application but also in all supporting dynamically-linked libraries,
+including the C library, graphical libraries, and so on.</p>
+<p>If you're using an error-detection tool, Valgrind may
+detect errors in system libraries, for example the GNU C or X11
+libraries, which you have to use. You might not be interested in these
+errors, since you probably have no control over that code. Therefore,
+Valgrind allows you to selectively suppress errors, by recording them in
+a suppressions file which is read when Valgrind starts up. The build
+mechanism selects default suppressions which give reasonable
+behaviour for the OS and libraries detected on your machine.
+To make it easier to write suppressions, you can use the
+<code class="option">--gen-suppressions=yes</code> option. This tells Valgrind to
+print out a suppression for each reported error, which you can then
+copy into a suppressions file.</p>
+<p>Different error-checking tools report different kinds of errors.
+The suppression mechanism therefore allows you to say which tool or
+tool(s) each suppression applies to.</p>
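+<p>For instance, a generated suppression entry looks roughly like the
+following sketch (the names shown are purely illustrative):</p>
+<pre class="programlisting">
+{
+   name-of-this-suppression
+   Memcheck:Cond
+   fun:some_library_function
+   obj:/usr/lib/libfoo.so.*
+}
+</pre>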
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.started"></a>2.2. Getting started</h2></div></div></div>
+<p>First off, consider whether it might be beneficial to recompile
+your application and supporting libraries with debugging info enabled
+(the <code class="option">-g</code> option). Without debugging info, the best
+Valgrind tools will be able to do is guess which function a particular
+piece of code belongs to, which makes both error messages and profiling
+output nearly useless. With <code class="option">-g</code>, you'll get
+messages which point directly to the relevant source code lines.</p>
+<p>Another option you might like to consider, if you are working with
+C++, is <code class="option">-fno-inline</code>. That makes it easier to see the
+function-call chain, which can help reduce confusion when navigating
+around large C++ apps. For example, debugging
+OpenOffice.org with Memcheck is a bit easier when using this option. You
+don't have to do this, but doing so helps Valgrind produce more accurate
+and less confusing error reports. Chances are you're set up like this
+already, if you intended to debug your program with GNU GDB, or some
+other debugger. Alternatively, the Valgrind option
+<code class="option">--read-inline-info=yes</code> instructs Valgrind to read
+the debug information describing inlined function calls. With this,
+the function call chain will be shown properly, even when your application
+is compiled with inlining. </p>
+<p>If you are planning to use Memcheck: On rare
+occasions, compiler optimisations (at <code class="option">-O2</code>
+and above, and sometimes <code class="option">-O1</code>) have been
+observed to generate code which fools Memcheck into wrongly reporting
+uninitialised value errors, or missing uninitialised value errors. We have
+looked in detail into fixing this, and unfortunately the result is that
+doing so would give a further significant slowdown in what is already a slow
+tool. So the best solution is to turn off optimisation altogether. Since
+this often makes things unmanageably slow, a reasonable compromise is to use
+<code class="option">-O</code>. This gets you the majority of the
+benefits of higher optimisation levels whilst keeping relatively small the
+chances of false positives or false negatives from Memcheck. Also, you
+should compile your code with <code class="option">-Wall</code> because
+it can identify some or all of the problems that Valgrind can miss at the
+higher optimisation levels. (Using <code class="option">-Wall</code>
+is also a good idea in general.) All other tools (as far as we know) are
+unaffected by optimisation level, and for profiling tools like Cachegrind it
+is better to compile your program at its normal optimisation level.</p>
+<p>Valgrind understands the DWARF2/3/4 formats used by GCC 3.1 and
+later. The reader for "stabs" debugging format (used by GCC versions
+prior to 3.1) has been disabled in Valgrind 3.9.0.</p>
+<p>When you're ready to roll, run Valgrind as described above.
+Note that you should run the real
+(machine-code) executable here. If your application is started by, for
+example, a shell or Perl script, you'll need to modify it to invoke
+Valgrind on the real executables. Running such scripts directly under
+Valgrind will result in you getting error reports pertaining to
+<code class="filename">/bin/sh</code>,
+<code class="filename">/usr/bin/perl</code>, or whatever interpreter
+you're using. This may not be what you want and can be confusing. You
+can force the issue by giving the option
+<code class="option">--trace-children=yes</code>, but confusion is still
+likely.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.comment"></a>2.3. The Commentary</h2></div></div></div>
+<p>Valgrind tools write a commentary, a stream of text, detailing
+error reports and other significant events. All lines in the commentary
+have the following form:
+
+</p>
+<pre class="programlisting">
+==12345== some-message-from-Valgrind</pre>
+<p>
+</p>
+<p>The <code class="computeroutput">12345</code> is the process ID.
+This scheme makes it easy to distinguish program output from Valgrind
+commentary, and also easy to differentiate commentaries from different
+processes which have become merged together, for whatever reason.</p>
+<p>By default, Valgrind tools write only essential messages to the
+commentary, so as to avoid flooding you with information of secondary
+importance. If you want more information about what is happening,
+re-run, passing the <code class="option">-v</code> option to Valgrind. A second
+<code class="option">-v</code> gives yet more detail.
+</p>
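+<p>For instance, assuming a hypothetical executable
+<code class="computeroutput">./myprog</code>, a more verbose Memcheck run
+could be requested like this (add a second <code class="option">-v</code>
+for still more detail):</p>
+<pre class="programlisting">
+valgrind -v --tool=memcheck ./myprog</pre>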
+<p>You can direct the commentary to three different places:</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem">
+<p><a name="manual-core.out2fd"></a>The default: send it to a file descriptor, which is by default
+ 2 (stderr). So, if you give the core no options, it will write
+ commentary to the standard error stream. If you want to send it to
+ some other file descriptor, for example number 9, you can specify
+ <code class="option">--log-fd=9</code>.</p>
+<p>This is the simplest and most common arrangement, but can
+ cause problems when Valgrinding entire trees of processes which
+ expect specific file descriptors, particularly stdin/stdout/stderr,
+ to be available for their own use.</p>
+</li>
+<li class="listitem"><p><a name="manual-core.out2file"></a>A less intrusive
+ option is to write the commentary to a file, which you specify by
+ <code class="option">--log-file=filename</code>. There are special format
+   specifiers that can be used to embed the process ID or the value of
+   an environment variable in the log file name. These are useful/necessary if your
+ program invokes multiple processes (especially for MPI programs).
+ See the <a class="link" href="manual-core.html#manual-core.basicopts" title="2.6.2. Basic Options">basic options section</a>
+ for more details.</p></li>
+<li class="listitem">
+<p><a name="manual-core.out2socket"></a>The
+ least intrusive option is to send the commentary to a network
+ socket. The socket is specified as an IP address and port number
+ pair, like this: <code class="option">--log-socket=192.168.0.1:12345</code> if
+ you want to send the output to host IP 192.168.0.1 port 12345
+ (note: we
+ have no idea if 12345 is a port of pre-existing significance). You
+ can also omit the port number:
+ <code class="option">--log-socket=192.168.0.1</code>, in which case a default
+ port of 1500 is used. This default is defined by the constant
+ <code class="computeroutput">VG_CLO_DEFAULT_LOGPORT</code> in the
+ sources.</p>
+<p>Note, unfortunately, that you have to use an IP address here,
+ rather than a hostname.</p>
+<p>Writing to a network socket is pointless if you don't
+ have something listening at the other end. We provide a simple
+ listener program,
+ <code class="computeroutput">valgrind-listener</code>, which accepts
+ connections on the specified port and copies whatever it is sent to
+ stdout. Probably someone will tell us this is a horrible security
+ risk. It seems likely that people will write more sophisticated
+ listeners in the fullness of time.</p>
+<p><code class="computeroutput">valgrind-listener</code> can accept
+ simultaneous connections from up to 50 Valgrinded processes. In front
+ of each line of output it prints the current number of active
+ connections in round brackets.</p>
+<p><code class="computeroutput">valgrind-listener</code> accepts three
+ command-line options:</p>
+<div class="variablelist">
+<a name="listener.opts.list"></a><dl class="variablelist">
+<dt><span class="term"><code class="option">-e --exit-at-zero</code></span></dt>
+<dd><p>When the number of connected processes falls back to zero,
+ exit. Without this, it will run forever, that is, until you
+ send it Control-C.</p></dd>
+<dt><span class="term"><code class="option">--max-connect=INTEGER</code></span></dt>
+<dd><p>By default, the listener can accept connections from up to 50 processes.
+ Occasionally, that number is too small. Use this option to
+ provide a different limit. E.g.
+ <code class="computeroutput">--max-connect=100</code>.
+ </p></dd>
+<dt><span class="term"><code class="option">portnumber</code></span></dt>
+<dd><p>Changes the port it listens on from the default (1500).
+ The specified port must be in the range 1024 to 65535.
+ The same restriction applies to port numbers specified by a
+   <code class="option">--log-socket</code> option to Valgrind itself.</p></dd>
+</dl>
+</div>
+<p>If a Valgrinded process fails to connect to a listener, for
+ whatever reason (the listener isn't running, invalid or unreachable
+ host or port, etc), Valgrind switches back to writing the commentary
+ to stderr. The same goes for any process which loses an established
+ connection to a listener. In other words, killing the listener
+ doesn't kill the processes sending data to it.</p>
+</li>
+</ol></div>
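+<p>As an illustration only (the program name
+<code class="computeroutput">./myprog</code>, the log file names and the IP
+address below are hypothetical), the three destinations described above
+could be selected like this:</p>
+<pre class="programlisting">
+valgrind --log-fd=9 ./myprog 9>commentary.txt
+valgrind --log-file=valgrind-%p.log ./myprog
+valgrind --log-socket=192.168.0.1:1500 ./myprog</pre>
+<p>The first form relies on the shell to open file descriptor 9 for
+writing; the second uses the <code class="option">%p</code> format specifier
+described in the basic options section; the third assumes a
+<code class="computeroutput">valgrind-listener</code> is already running on the
+named host and port.</p>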
+<p>Here is an important point about the relationship between the
+commentary and profiling output from tools. The commentary contains a
+mix of messages from the Valgrind core and the selected tool. If the
+tool reports errors, it will report them to the commentary. However, if
+the tool does profiling, the profile data will be written to a file of
+some kind, depending on the tool, and independent of what
+<code class="option">--log-*</code> options are in force. The commentary is
+intended to be a low-bandwidth, human-readable channel. Profiling data,
+on the other hand, is usually voluminous and not meaningful without
+further processing, which is why we have chosen this arrangement.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.report"></a>2.4. Reporting of errors</h2></div></div></div>
+<p>When an error-checking tool
+detects something bad happening in the program, an error
+message is written to the commentary. Here's an example from Memcheck:</p>
+<pre class="programlisting">
+==25832== Invalid read of size 4
+==25832== at 0x8048724: BandMatrix::ReSize(int, int, int) (bogon.cpp:45)
+==25832== by 0x80487AF: main (bogon.cpp:66)
+==25832== Address 0xBFFFF74C is not stack'd, malloc'd or free'd</pre>
+<p>This message says that the program did an illegal 4-byte read of
+address 0xBFFFF74C, which, as far as Memcheck can tell, is not a valid
+stack address, nor corresponds to any current heap blocks or recently freed
+heap blocks. The read is happening at line 45 of
+<code class="filename">bogon.cpp</code>, called from line 66 of the same file,
+etc. For errors associated with an identified (current or freed) heap block,
+for example reading freed memory, Valgrind reports not only the
+location where the error happened, but also where the associated heap block
+was allocated/freed.</p>
+<p>Valgrind remembers all error reports. When an error is detected,
+it is compared against old reports, to see if it is a duplicate. If so,
+the error is noted, but no further commentary is emitted. This avoids
+you being swamped with bazillions of duplicate error reports.</p>
+<p>If you want to know how many times each error occurred, run with
+the <code class="option">-v</code> option. When execution finishes, all the
+reports are printed out, along with, and sorted by, their occurrence
+counts. This makes it easy to see which errors have occurred most
+frequently.</p>
+<p>Errors are reported before the associated operation actually
+happens. For example, if you're using Memcheck and your program attempts to
+read from address zero, Memcheck will emit a message to this effect, and
+your program will then likely die with a segmentation fault.</p>
+<p>In general, you should try and fix errors in the order that they
+are reported. Not doing so can be confusing. For example, a program
+which copies uninitialised values to several memory locations, and later
+uses them, will generate several error messages, when run on Memcheck.
+The first such error message may well give the most direct clue to the
+root cause of the problem.</p>
+<p>The process of detecting duplicate errors is quite an
+expensive one and can become a significant performance overhead
+if your program generates huge quantities of errors. To avoid
+serious problems, Valgrind will simply stop collecting
+errors after 1,000 different errors have been seen, or 10,000,000 errors
+in total have been seen. In this situation you might as well
+stop your program and fix it, because Valgrind won't tell you
+anything else useful after this. Note that the 1,000/10,000,000 limits
+apply after suppressed errors are removed. These limits are
+defined in <code class="filename">m_errormgr.c</code> and can be increased
+if necessary.</p>
+<p>To avoid this cutoff you can use the
+<code class="option">--error-limit=no</code> option. Then Valgrind will always show
+errors, regardless of how many there are. Use this option carefully,
+since it may have a bad effect on performance.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.suppress"></a>2.5. Suppressing errors</h2></div></div></div>
+<p>The error-checking tools detect numerous problems in the system
+libraries, such as the C library,
+which come pre-installed with your OS. You can't easily fix
+these, but you don't want to see these errors (and yes, there are many!)
+So Valgrind reads a list of errors to suppress at startup. A default
+suppression file is created by the
+<code class="computeroutput">./configure</code> script when the system is
+built.</p>
+<p>You can modify and add to the suppressions file at your leisure,
+or, better, write your own. Multiple suppression files are allowed.
+This is useful if part of your project contains errors you can't or
+don't want to fix, yet you don't want to continuously be reminded of
+them.</p>
+<p><b>Note: </b>By far the easiest way to add
+suppressions is to use the <code class="option">--gen-suppressions=yes</code> option
+described in <a class="xref" href="manual-core.html#manual-core.options" title="2.6. Core Command-line Options">Core Command-line Options</a>. This generates
+suppressions automatically. For best results,
+though, you may want to edit the output
+ of <code class="option">--gen-suppressions=yes</code> by hand, in which
+case it would be advisable to read through this section.
+</p>
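+<p>For example, with a hypothetical program
+<code class="computeroutput">./myprog</code>, the following run prints a
+ready-made suppression after every error, which you can then paste into
+a file and tidy up by hand:</p>
+<pre class="programlisting">
+valgrind --tool=memcheck --gen-suppressions=all ./myprog</pre>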
+<p>Each error to be suppressed is described very specifically, to
+minimise the possibility that a suppression-directive inadvertently
+suppresses a bunch of similar errors which you did want to see. The
+suppression mechanism is designed to allow precise yet flexible
+specification of errors to suppress.</p>
+<p>If you use the <code class="option">-v</code> option, at the end of execution,
+Valgrind prints out one line for each used suppression, giving the number of times
+it got used, its name and the filename and line number where the suppression is
+defined. Depending on the suppression kind, the filename and line number are optionally
+followed by additional information (such as the number of blocks and bytes suppressed
+by a memcheck leak suppression). Here are the suppressions used by a
+run of <code class="computeroutput">valgrind -v --tool=memcheck ls -l</code>:</p>
+<pre class="programlisting">
+--1610-- used_suppression: 2 dl-hack3-cond-1 /usr/lib/valgrind/default.supp:1234
+--1610-- used_suppression: 2 glibc-2.5.x-on-SUSE-10.2-(PPC)-2a /usr/lib/valgrind/default.supp:1234
+</pre>
+<p>Multiple suppression files are allowed. Valgrind loads suppression
+patterns from <code class="filename">$PREFIX/lib/valgrind/default.supp</code> unless
+<code class="option">--default-suppressions=no</code> has been specified. You can
+ask to add suppressions from additional files by specifying
+<code class="option">--suppressions=/path/to/file.supp</code> one or more times.
+</p>
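+<p>For instance, a sketch of a run using two extra, hypothetical
+suppression files alongside the default one:</p>
+<pre class="programlisting">
+valgrind --suppressions=/path/to/project.supp \
+         --suppressions=/path/to/thirdparty.supp ./myprog</pre>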
+<p>If you want to understand more about suppressions, look at an
+existing suppressions file whilst reading the following documentation.
+The file <code class="filename">glibc-2.3.supp</code>, in the source
+distribution, provides some good examples.</p>
+<p>Each suppression has the following components:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>First line: its name. This merely gives a handy name to the
+ suppression, by which it is referred to in the summary of used
+ suppressions printed out when a program finishes. It's not
+ important what the name is; any identifying string will do.</p></li>
+<li class="listitem">
+<p>Second line: name of the tool(s) that the suppression is for
+ (if more than one, comma-separated), and the name of the suppression
+ itself, separated by a colon (n.b.: no spaces are allowed), eg:</p>
+<pre class="programlisting">
+tool_name1,tool_name2:suppression_name</pre>
+<p>Recall that Valgrind is a modular system, in which
+ different instrumentation tools can observe your program whilst it
+ is running. Since different tools detect different kinds of errors,
+ it is necessary to say which tool(s) the suppression is meaningful
+ to.</p>
+<p>A tool will complain, at startup, if it does not understand
+   any suppression directed to it. Tools ignore suppressions which are
+ not directed to them. As a result, it is quite practical to put
+ suppressions for all tools into the same suppression file.</p>
+</li>
+<li class="listitem"><p>Next line: a small number of suppression types have extra
+ information after the second line (eg. the <code class="varname">Param</code>
+ suppression for Memcheck)</p></li>
+<li class="listitem">
+<p>Remaining lines: This is the calling context for the error --
+ the chain of function calls that led to it. There can be up to 24
+ of these lines.</p>
+<p>Locations may be names of either shared objects or
+ functions. They begin
+ <code class="computeroutput">obj:</code> and
+ <code class="computeroutput">fun:</code> respectively. Function and
+ object names to match against may use the wildcard characters
+ <code class="computeroutput">*</code> and
+ <code class="computeroutput">?</code>.</p>
+<p><span class="command"><strong>Important note: </strong></span> C++ function names must be
+ <span class="command"><strong>mangled</strong></span>. If you are writing suppressions by
+ hand, use the <code class="option">--demangle=no</code> option to get the
+ mangled names in your error messages. An example of a mangled
+ C++ name is <code class="computeroutput">_ZN9QListView4showEv</code>.
+ This is the form that the GNU C++ compiler uses internally, and
+ the form that must be used in suppression files. The equivalent
+ demangled name, <code class="computeroutput">QListView::show()</code>,
+ is what you see at the C++ source code level.
+ </p>
+<p>A location line may also be
+ simply "<code class="computeroutput">...</code>" (three dots). This is
+ a frame-level wildcard, which matches zero or more frames. Frame
+ level wildcards are useful because they make it easy to ignore
+ varying numbers of uninteresting frames in between frames of
+ interest. That is often important when writing suppressions which
+ are intended to be robust against variations in the amount of
+ function inlining done by compilers.</p>
+</li>
+<li class="listitem"><p>Finally, the entire suppression must be between curly
+ braces. Each brace must be the first character on its own
+ line.</p></li>
+</ul></div>
+<p>A suppression only suppresses an error when the error matches all
+the details in the suppression. Here's an example:</p>
+<pre class="programlisting">
+{
+ __gconv_transform_ascii_internal/__mbrtowc/mbtowc
+ Memcheck:Value4
+ fun:__gconv_transform_ascii_internal
+ fun:__mbr*toc
+ fun:mbtowc
+}</pre>
+<p>What it means is: for Memcheck only, suppress a
+use-of-uninitialised-value error, when the data size is 4, when it
+occurs in the function
+<code class="computeroutput">__gconv_transform_ascii_internal</code>, when
+that is called from any function of name matching
+<code class="computeroutput">__mbr*toc</code>, when that is called from
+<code class="computeroutput">mbtowc</code>. It doesn't apply under any
+other circumstances. The string by which this suppression is identified
+to the user is
+<code class="computeroutput">__gconv_transform_ascii_internal/__mbrtowc/mbtowc</code>.</p>
+<p>(See <a class="xref" href="mc-manual.html#mc-manual.suppfiles" title="4.4. Writing suppression files">Writing suppression files</a> for more details
+on the specifics of Memcheck's suppression kinds.)</p>
+<p>Another example, again for the Memcheck tool:</p>
+<pre class="programlisting">
+{
+ libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
+ Memcheck:Value4
+ obj:/usr/X11R6/lib/libX11.so.6.2
+ obj:/usr/X11R6/lib/libX11.so.6.2
+ obj:/usr/X11R6/lib/libXaw.so.7.0
+}</pre>
+<p>This suppresses any size 4 uninitialised-value error which occurs
+anywhere in <code class="filename">libX11.so.6.2</code>, when called from
+anywhere in the same library, when called from anywhere in
+<code class="filename">libXaw.so.7.0</code>. The inexact specification of
+locations is regrettable, but is about all you can hope for, given that
+the X11 libraries shipped on the Linux distro on which this example
+was made have had their symbol tables removed.</p>
+<p>Although the above two examples do not make this clear, you can
+freely mix <code class="computeroutput">obj:</code> and
+<code class="computeroutput">fun:</code> lines in a suppression.</p>
+<p>Finally, here's an example using three frame-level wildcards:</p>
+<pre class="programlisting">
+{
+ a-contrived-example
+ Memcheck:Leak
+ fun:malloc
+ ...
+ fun:ddd
+ ...
+ fun:ccc
+ ...
+ fun:main
+}
+</pre>
+This suppresses Memcheck memory-leak errors, in the case where
+the allocation was done by <code class="computeroutput">main</code>
+calling (through any number of intermediaries, including zero)
+<code class="computeroutput">ccc</code>,
+calling onwards via
+<code class="computeroutput">ddd</code> and eventually
+to <code class="computeroutput">malloc</code>.
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.options"></a>2.6. Core Command-line Options</h2></div></div></div>
+<p>As mentioned above, Valgrind's core accepts a common set of options.
+The tools also accept tool-specific options, which are documented
+separately for each tool.</p>
+<p>Valgrind's default settings succeed in giving reasonable behaviour
+in most cases. We group the available options by rough categories.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.toolopts"></a>2.6.1. Tool-selection Option</h3></div></div></div>
+<p><a name="tool.opts.para"></a>The single most important option.</p>
+<div class="variablelist">
+<a name="tool.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="tool_name"></a><span class="term">
+ <code class="option">--tool=<toolname> [default: memcheck] </code>
+ </span>
+</dt>
+<dd><p>Run the Valgrind tool called <code class="varname">toolname</code>,
+ e.g. memcheck, cachegrind, callgrind, helgrind, drd, massif,
+ lackey, none, exp-sgcheck, exp-bbv, exp-dhat, etc.</p></dd>
+</dl>
+</div>
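+<p>For example, to run a hypothetical program
+<code class="computeroutput">./myprog</code> under Cachegrind instead of the
+default Memcheck:</p>
+<pre class="programlisting">
+valgrind --tool=cachegrind ./myprog</pre>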
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.basicopts"></a>2.6.2. Basic Options</h3></div></div></div>
+<p><a name="basic.opts.para"></a>These options work with all tools.</p>
+<div class="variablelist">
+<a name="basic.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.help"></a><span class="term"><code class="option">-h --help</code></span>
+</dt>
+<dd><p>Show help for all options, both for the core and for the
+ selected tool. If the option is repeated it is equivalent to giving
+ <code class="option">--help-debug</code>.</p></dd>
+<dt>
+<a name="opt.help-debug"></a><span class="term"><code class="option">--help-debug</code></span>
+</dt>
+<dd><p>Same as <code class="option">--help</code>, but also lists debugging
+ options which usually are only of use to Valgrind's
+ developers.</p></dd>
+<dt>
+<a name="opt.version"></a><span class="term"><code class="option">--version</code></span>
+</dt>
+<dd><p>Show the version number of the Valgrind core. Tools can have
+ their own version numbers. There is a scheme in place to ensure
+ that tools only execute when the core version is one they are
+ known to work with. This was done to minimise the chances of
+ strange problems arising from tool-vs-core version
+ incompatibilities.</p></dd>
+<dt>
+<a name="opt.quiet"></a><span class="term"><code class="option">-q</code>, <code class="option">--quiet</code></span>
+</dt>
+<dd><p>Run silently, and only print error messages. Useful if you
+ are running regression tests or have some other automated test
+ machinery.</p></dd>
+<dt>
+<a name="opt.verbose"></a><span class="term"><code class="option">-v</code>, <code class="option">--verbose</code></span>
+</dt>
+<dd><p>Be more verbose. Gives extra information on various aspects
+ of your program, such as: the shared objects loaded, the
+ suppressions used, the progress of the instrumentation and
+ execution engines, and warnings about unusual behaviour. Repeating
+ the option increases the verbosity level.</p></dd>
+<dt>
+<a name="opt.trace-children"></a><span class="term">
+ <code class="option">--trace-children=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled, Valgrind will trace into sub-processes
+ initiated via the <code class="varname">exec</code> system call. This is
+ necessary for multi-process programs.
+ </p>
+<p>Note that Valgrind does trace into the child of a
+ <code class="varname">fork</code> (it would be difficult not to, since
+ <code class="varname">fork</code> makes an identical copy of a process), so this
+ option is arguably badly named. However, most children of
+ <code class="varname">fork</code> calls immediately call <code class="varname">exec</code>
+ anyway.
+ </p>
+</dd>
+<dt>
+<a name="opt.trace-children-skip"></a><span class="term">
+ <code class="option">--trace-children-skip=patt1,patt2,... </code>
+ </span>
+</dt>
+<dd>
+<p>This option only has an effect when
+ <code class="option">--trace-children=yes</code> is specified. It allows
+ for some children to be skipped. The option takes a comma
+ separated list of patterns for the names of child executables
+ that Valgrind should not trace into. Patterns may include the
+ metacharacters <code class="computeroutput">?</code>
+ and <code class="computeroutput">*</code>, which have the usual
+ meaning.</p>
+<p>
+ This can be useful for pruning uninteresting branches from a
+ tree of processes being run on Valgrind. But you should be
+ careful when using it. When Valgrind skips tracing into an
+ executable, it doesn't just skip tracing that executable, it
+ also skips tracing any of that executable's child processes.
+ In other words, the flag doesn't merely cause tracing to stop
+ at the specified executables -- it skips tracing of entire
+ process subtrees rooted at any of the specified
+ executables.</p>
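+<p>As an illustrative sketch (the program name and the patterns are
+   hypothetical), the following traces into children but leaves shell
+   sub-processes, and everything they spawn, untraced:</p>
+<pre class="programlisting">
+valgrind --trace-children=yes --trace-children-skip=*/sh,*/bash ./myprog</pre>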
+</dd>
+<dt>
+<a name="opt.trace-children-skip-by-arg"></a><span class="term">
+ <code class="option">--trace-children-skip-by-arg=patt1,patt2,... </code>
+ </span>
+</dt>
+<dd><p>This is the same as
+ <code class="option">--trace-children-skip</code>, with one difference:
+ the decision as to whether to trace into a child process is
+ made by examining the arguments to the child process, rather
+ than the name of its executable.</p></dd>
+<dt>
+<a name="opt.child-silent-after-fork"></a><span class="term">
+ <code class="option">--child-silent-after-fork=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled, Valgrind will not show any debugging or
+ logging output for the child process resulting from
+ a <code class="varname">fork</code> call. This can make the output less
+ confusing (although more misleading) when dealing with processes
+ that create children. It is particularly useful in conjunction
+ with <code class="varname">--trace-children=</code>. Use of this option is also
+ strongly recommended if you are requesting XML output
+ (<code class="varname">--xml=yes</code>), since otherwise the XML from child and
+ parent may become mixed up, which usually makes it useless.
+ </p></dd>
+<dt>
+<a name="opt.vgdb"></a><span class="term">
+ <code class="option">--vgdb=<no|yes|full> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>Valgrind will provide "gdbserver" functionality when
+ <code class="option">--vgdb=yes</code> or <code class="option">--vgdb=full</code> is
+ specified. This allows an external GNU GDB debugger to control
+ and debug your program when it runs on Valgrind.
+ <code class="option">--vgdb=full</code> incurs significant performance
+ overheads, but provides more precise breakpoints and
+ watchpoints. See <a class="xref" href="manual-core-adv.html#manual-core-adv.gdbserver" title="3.2. Debugging your program using Valgrind gdbserver and GDB">Debugging your program using Valgrind's gdbserver and GDB</a> for
+ a detailed description.
+ </p>
+<p> If the embedded gdbserver is enabled but no gdb is
+ currently being used, the <a class="xref" href="manual-core-adv.html#manual-core-adv.vgdb" title="3.2.9. vgdb command line options">vgdb</a>
+ command line utility can send "monitor commands" to Valgrind
+ from a shell. The Valgrind core provides a set of
+ <a class="xref" href="manual-core-adv.html#manual-core-adv.valgrind-monitor-commands" title="3.2.10. Valgrind monitor commands">Valgrind monitor commands</a>. A tool
+ can optionally provide tool specific monitor commands, which are
+ documented in the tool specific chapter.
+ </p>
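+<p>A typical sketch, assuming a hypothetical program
+   <code class="computeroutput">./myprog</code>: start it under Valgrind with
+   the gdbserver active and stopping at the first error, then attach from
+   GDB in another shell using the
+   <code class="computeroutput">vgdb</code> utility as a relay:</p>
+<pre class="programlisting">
+valgrind --vgdb=yes --vgdb-error=0 ./myprog</pre>
+<p>and in a second shell:</p>
+<pre class="programlisting">
+gdb ./myprog
+(gdb) target remote | vgdb</pre>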
+</dd>
+<dt>
+<a name="opt.vgdb-error"></a><span class="term">
+ <code class="option">--vgdb-error=<number> [default: 999999999] </code>
+ </span>
+</dt>
+<dd><p> Use this option when the Valgrind gdbserver is enabled with
+ <code class="option">--vgdb=yes</code> or <code class="option">--vgdb=full</code>.
+ Tools that report errors will wait
+ for "<code class="computeroutput">number</code>" errors to be
+ reported before freezing the program and waiting for you to
+ connect with GDB. It follows that a value of zero will cause
+ the gdbserver to be started before your program is executed.
+ This is typically used to insert GDB breakpoints before
+ execution, and also works with tools that do not report
+ errors, such as Massif.
+ </p></dd>
+<dt>
+<a name="opt.vgdb-stop-at"></a><span class="term">
+ <code class="option">--vgdb-stop-at=<set> [default: none] </code>
+ </span>
+</dt>
+<dd>
+<p> Use this option when the Valgrind gdbserver is enabled with
+ <code class="option">--vgdb=yes</code> or <code class="option">--vgdb=full</code>.
+ The Valgrind gdbserver will be invoked for each error after
+   <code class="option">--vgdb-error</code> errors have been reported.
+ You can additionally ask the Valgrind gdbserver to be invoked
+ for other events, specified in one of the following ways: </p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>a comma separated list of one or more of
+ <code class="option">startup exit valgrindabexit</code>.</p>
+<p>The values <code class="option">startup</code>, <code class="option">exit</code>
+   and <code class="option">valgrindabexit</code> respectively indicate to
+ invoke gdbserver before your program is executed, after the
+ last instruction of your program, on Valgrind abnormal exit
+ (e.g. internal error, out of memory, ...).</p>
+<p>Note: <code class="option">startup</code> and
+ <code class="option">--vgdb-error=0</code> will both cause Valgrind
+ gdbserver to be invoked before your program is executed. The
+ <code class="option">--vgdb-error=0</code> will in addition cause your
+ program to stop on all subsequent errors.</p>
+</li>
+<li class="listitem"><p><code class="option">all</code> to specify the complete set.
+ It is equivalent to
+ <code class="option">--vgdb-stop-at=startup,exit,valgrindabexit</code>.</p></li>
+<li class="listitem"><p><code class="option">none</code> for the empty set.</p></li>
+</ul></div>
+</dd>
+<dt>
+<a name="opt.track-fds"></a><span class="term">
+ <code class="option">--track-fds=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled, Valgrind will print out a list of open file
+ descriptors on exit or on request, via the gdbserver monitor
+ command <code class="varname">v.info open_fds</code>. Along with each
+ file descriptor is printed a stack backtrace of where the file
+ was opened and any details relating to the file descriptor such
+ as the file name or socket details.</p></dd>
+<dt>
+<a name="opt.time-stamp"></a><span class="term">
+ <code class="option">--time-stamp=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled, each message is preceded with an indication of
+ the elapsed wallclock time since startup, expressed as days,
+ hours, minutes, seconds and milliseconds.</p></dd>
+<dt>
+<a name="opt.log-fd"></a><span class="term">
+ <code class="option">--log-fd=<number> [default: 2, stderr] </code>
+ </span>
+</dt>
+<dd><p>Specifies that Valgrind should send all of its messages to
+ the specified file descriptor. The default, 2, is the standard
+ error channel (stderr). Note that this may interfere with the
+ client's own use of stderr, as Valgrind's output will be
+ interleaved with any output that the client sends to
+ stderr.</p></dd>
+<dt>
+<a name="opt.log-file"></a><span class="term">
+ <code class="option">--log-file=<filename> </code>
+ </span>
+</dt>
+<dd>
+<p>Specifies that Valgrind should send all of its messages to
+ the specified file. If the file name is empty, it causes an abort.
+ There are three special format specifiers that can be used in the file
+ name.</p>
+<p><code class="option">%p</code> is replaced with the current process ID.
+   This is very useful for programs that invoke multiple processes.
+ WARNING: If you use <code class="option">--trace-children=yes</code> and your
+ program invokes multiple processes OR your program forks without
+ calling exec afterwards, and you don't use this specifier
+ (or the <code class="option">%q</code> specifier below), the Valgrind output from
+ all those processes will go into one file, possibly jumbled up, and
+ possibly incomplete.</p>
+<p><code class="option">%q{FOO}</code> is replaced with the contents of the
+ environment variable <code class="varname">FOO</code>. If the
+ <code class="option">{FOO}</code> part is malformed, it causes an abort. This
+ specifier is rarely needed, but very useful in certain circumstances
+ (eg. when running MPI programs). The idea is that you specify a
+ variable which will be set differently for each process in the job,
+ for example <code class="computeroutput">BPROC_RANK</code> or whatever is
+ applicable in your MPI setup. If the named environment variable is not
+ set, it causes an abort. Note that in some shells, the
+ <code class="option">{</code> and <code class="option">}</code> characters may need to be
+ escaped with a backslash.</p>
+<p><code class="option">%%</code> is replaced with <code class="option">%</code>.</p>
+<p>If a <code class="option">%</code> is followed by any other character, it
+ causes an abort.</p>
+<p>If the file name specifies a relative file name, it is put
+   in the program's initial working directory: this is the current
+ directory when the program started its execution after the fork
+ or after the exec. If it specifies an absolute file name (ie.
+ starts with '/') then it is put there.
+ </p>
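+<p>Two illustrative invocations (the program name and the environment
+   variable are hypothetical): the first names the log after the process
+   ID, the second after an environment variable that an MPI launcher is
+   assumed to set differently for each rank:</p>
+<pre class="programlisting">
+valgrind --log-file=valgrind-%p.log ./myprog
+valgrind --log-file=valgrind-%q{MPI_RANK}.log ./myprog</pre>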
+</dd>
+<dt>
+<a name="opt.log-socket"></a><span class="term">
+ <code class="option">--log-socket=<ip-address:port-number> </code>
+ </span>
+</dt>
+<dd><p>Specifies that Valgrind should send all of its messages to
+ the specified port at the specified IP address. The port may be
+ omitted, in which case port 1500 is used. If a connection cannot
+ be made to the specified socket, Valgrind falls back to writing
+ output to the standard error (stderr). This option is intended to
+ be used in conjunction with the
+ <code class="computeroutput">valgrind-listener</code> program. For
+ further details, see
+ <a class="link" href="manual-core.html#manual-core.comment" title="2.3. The Commentary">the commentary</a>
+ in the manual.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.erropts"></a>2.6.3. Error-related Options</h3></div></div></div>
+<p><a name="error-related.opts.para"></a>These options are used by all tools
+that can report errors, e.g. Memcheck, but not Cachegrind.</p>
+<div class="variablelist">
+<a name="error-related.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.xml"></a><span class="term">
+ <code class="option">--xml=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled, the important parts of the output (e.g. tool error
+ messages) will be in XML format rather than plain text. Furthermore,
+ the XML output will be sent to a different output channel than the
+ plain text output. Therefore, you also must use one of
+ <code class="option">--xml-fd</code>, <code class="option">--xml-file</code> or
+ <code class="option">--xml-socket</code> to specify where the XML is to be sent.
+ </p>
+<p>Less important messages will still be printed in plain text, but
+ because the XML output and plain text output are sent to different
+ output channels (the destination of the plain text output is still
+ controlled by <code class="option">--log-fd</code>, <code class="option">--log-file</code>
+ and <code class="option">--log-socket</code>) this should not cause problems.
+ </p>
+<p>This option is aimed at making life easier for tools that consume
+ Valgrind's output as input, such as GUI front ends. Currently this
+ option works with Memcheck, Helgrind, DRD and SGcheck. The output
+ format is specified in the file
+ <code class="computeroutput">docs/internals/xml-output-protocol4.txt</code>
+ in the source tree for Valgrind 3.5.0 or later.</p>
+<p>The recommended options for a GUI to pass, when requesting
+ XML output, are: <code class="option">--xml=yes</code> to enable XML output,
+ <code class="option">--xml-file</code> to send the XML output to a (presumably
+ GUI-selected) file, <code class="option">--log-file</code> to send the plain
+ text output to a second GUI-selected file,
+ <code class="option">--child-silent-after-fork=yes</code>, and
+ <code class="option">-q</code> to restrict the plain text output to critical
+ error messages created by Valgrind itself. For example, failure to
+ read a specified suppressions file counts as a critical error message.
+ In this way, for a successful run the text output file will be empty.
+ But if it isn't empty, then it will contain important information
+ which the GUI user should be made aware
+ of.</p>
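+<p>Put together, a sketch of the recommended combination (the file names
+   and the program <code class="computeroutput">./myprog</code> are
+   hypothetical) looks like this:</p>
+<pre class="programlisting">
+valgrind --xml=yes --xml-file=memcheck-%p.xml --log-file=memcheck-%p.txt \
+         --child-silent-after-fork=yes -q ./myprog</pre>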
+</dd>
+<dt>
+<a name="opt.xml-fd"></a><span class="term">
+ <code class="option">--xml-fd=<number> [default: -1, disabled] </code>
+ </span>
+</dt>
+<dd><p>Specifies that Valgrind should send its XML output to the
+ specified file descriptor. It must be used in conjunction with
+ <code class="option">--xml=yes</code>.</p></dd>
+<dt>
+<a name="opt.xml-file"></a><span class="term">
+ <code class="option">--xml-file=<filename> </code>
+ </span>
+</dt>
+<dd><p>Specifies that Valgrind should send its XML output
+ to the specified file. It must be used in conjunction with
+ <code class="option">--xml=yes</code>. Any <code class="option">%p</code> or
+ <code class="option">%q</code> sequences appearing in the filename are expanded
+ in exactly the same way as they are for <code class="option">--log-file</code>.
+ See the description of <code class="option">--log-file</code> for details.
+ </p></dd>
+<dt>
+<a name="opt.xml-socket"></a><span class="term">
+ <code class="option">--xml-socket=<ip-address:port-number> </code>
+ </span>
+</dt>
+<dd><p>Specifies that Valgrind should send its XML output to the
+ specified port at the specified IP address. It must be used in
+ conjunction with <code class="option">--xml=yes</code>. The form of the argument
+ is the same as that used by <code class="option">--log-socket</code>.
+ See the description of <code class="option">--log-socket</code>
+ for further details.</p></dd>
+<dt>
+<a name="opt.xml-user-comment"></a><span class="term">
+ <code class="option">--xml-user-comment=<string> </code>
+ </span>
+</dt>
+<dd><p>Embeds an extra user comment string at the start of the XML
+ output. Only works when <code class="option">--xml=yes</code> is specified;
+ ignored otherwise.</p></dd>
+<dt>
+<a name="opt.demangle"></a><span class="term">
+ <code class="option">--demangle=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>Enable/disable automatic demangling (decoding) of C++ names.
+ Enabled by default. When enabled, Valgrind will attempt to
+ translate encoded C++ names back to something approaching the
+ original. The demangler handles symbols mangled by g++ versions
+ 2.X, 3.X and 4.X.</p>
+<p>An important fact about demangling is that function names
+ mentioned in suppressions files should be in their mangled form.
+ Valgrind does not demangle function names when searching for
+ applicable suppressions, because to do otherwise would make
+ suppression file contents dependent on the state of Valgrind's
+ demangling machinery, and also slow down suppression matching.</p>
+</dd>
+<dt>
+<a name="opt.num-callers"></a><span class="term">
+ <code class="option">--num-callers=<number> [default: 12] </code>
+ </span>
+</dt>
+<dd>
+<p>Specifies the maximum number of entries shown in stack traces
+ that identify program locations. Note that errors are commoned up
+ using only the top four function locations (the place in the current
+ function, and that of its three immediate callers). So this doesn't
+ affect the total number of errors reported.</p>
+<p>The maximum value for this is 500. Note that higher settings
+ will make Valgrind run a bit more slowly and take a bit more
+ memory, but can be useful when working with programs with
+ deeply-nested call chains.</p>
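+<p>For example, to show up to 30 frames per stack trace for a
+   hypothetical program with deep call chains:</p>
+<pre class="programlisting">
+valgrind --num-callers=30 ./myprog</pre>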
+</dd>
+<dt>
+<a name="opt.unw-stack-scan-thresh"></a><span class="term">
+ <code class="option">--unw-stack-scan-thresh=<number> [default: 0] </code>
+ , </span><span class="term">
+ <code class="option">--unw-stack-scan-frames=<number> [default: 5] </code>
+ </span>
+</dt>
+<dd>
+<p>Stack-scanning support is available only on ARM
+ targets.</p>
+<p>These flags enable and control stack unwinding by stack
+ scanning. When the normal stack unwinding mechanisms -- usage
+ of Dwarf CFI records, and frame-pointer following -- fail, stack
+ scanning may be able to recover a stack trace.</p>
+<p>Note that stack scanning is an imprecise, heuristic
+ mechanism that may give very misleading results, or none at all.
+ It should be used only in emergencies, when normal unwinding
+ fails, and it is important to nevertheless have stack
+ traces.</p>
+<p>Stack scanning is a simple technique: the unwinder reads
+ words from the stack, and tries to guess which of them might be
+ return addresses, by checking to see if they point just after
+ ARM or Thumb call instructions. If so, the word is added to the
+ backtrace.</p>
+<p>The main danger occurs when a function call returns,
+ leaving its return address exposed, and a new function is
+ called, but the new function does not overwrite the old address.
+ The result of this is that the backtrace may contain entries for
+ functions which have already returned, and so be very
+ confusing.</p>
+<p>A second limitation of this implementation is that it will
+ scan only the page (4KB, normally) containing the starting stack
+ pointer. If the stack frames are large, this may result in only
+ a few (or not even any) being present in the trace. Also, if
+ you are unlucky and have an initial stack pointer near the end
+ of its containing page, the scan may miss all interesting
+ frames.</p>
+<p>By default stack scanning is disabled. The normal use
+ case is to ask for it when a stack trace would otherwise be very
+ short. So, to enable it,
+ use <code class="computeroutput">--unw-stack-scan-thresh=number</code>.
+ This requests Valgrind to try using stack scanning to "extend"
+ stack traces which contain fewer
+ than <code class="computeroutput">number</code> frames.</p>
+<p>If stack scanning does take place, it will only generate
+ at most the number of frames specified
+ by <code class="computeroutput">--unw-stack-scan-frames</code>.
+ Typically, stack scanning generates so many garbage entries that
+ this value is set to a low value (5) by default. In no case
+ will a stack trace larger than the value specified
+ by <code class="computeroutput">--num-callers</code> be
+ created.</p>
+</dd>
+<dt>
+<a name="opt.error-limit"></a><span class="term">
+ <code class="option">--error-limit=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>When enabled, Valgrind stops reporting errors after 10,000,000
+ in total, or 1,000 different ones, have been seen. This is to
+ stop the error tracking machinery from becoming a huge performance
+ overhead in programs with many errors.</p></dd>
+<dt>
+<a name="opt.error-exitcode"></a><span class="term">
+ <code class="option">--error-exitcode=<number> [default: 0] </code>
+ </span>
+</dt>
+<dd><p>Specifies an alternative exit code to return if Valgrind
+ reported any errors in the run. When set to the default value
+ (zero), the return value from Valgrind will always be the return
+ value of the process being simulated. When set to a nonzero value,
+ that value is returned instead, if Valgrind detects any errors.
+ This is useful for using Valgrind as part of an automated test
+ suite, since it makes it easy to detect test cases for which
+ Valgrind has reported errors, just by inspecting return codes.</p></dd>
+<dt>
+<a name="opt.error-markers"></a><span class="term">
+ <code class="option">--error-markers=<begin>,<end> [default: none]</code>
+ </span>
+</dt>
+<dd>
+<p>When errors are output as plain text (i.e. XML not used),
+   <code class="option">--error-markers</code> instructs Valgrind to output a
+   line containing the <code class="option">begin</code> (<code class="option">end</code>)
+   string before (after) each error. </p>
+<p> Such marker lines make it easier to search for, or extract, errors
+   in an output file that contains valgrind errors mixed
+   with the program output. </p>
+<p> Note that empty markers are accepted, so it is possible to use only
+   a begin (or only an end) marker.</p>
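+<p>An illustrative sketch, using arbitrary marker strings and a
+   hypothetical program:</p>
+<pre class="programlisting">
+valgrind --error-markers=BEGIN_VG_ERROR,END_VG_ERROR ./myprog</pre>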
+</dd>
+<dt>
+<a name="opt.sigill-diagnostics"></a><span class="term">
+ <code class="option">--sigill-diagnostics=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>Enable/disable printing of illegal instruction diagnostics.
+ Enabled by default, but defaults to disabled when
+ <code class="option">--quiet</code> is given. The default can always be explicitly
+ overridden by giving this option.</p>
+<p>When enabled, a warning message will be printed, along with some
+ diagnostics, whenever an instruction is encountered that Valgrind
+ cannot decode or translate, before the program is given a SIGILL signal.
+ Often an illegal instruction indicates a bug in the program or missing
+ support for the particular instruction in Valgrind. But some programs
+ do deliberately try to execute an instruction that might be missing
+ and trap the SIGILL signal to detect processor features. Using
+ this flag makes it possible to avoid the diagnostic output
+ that you would otherwise get in such cases.</p>
+</dd>
+<dt>
+<a name="opt.show-below-main"></a><span class="term">
+ <code class="option">--show-below-main=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>By default, stack traces for errors do not show any
+ functions that appear beneath <code class="function">main</code> because
+ most of the time it's uninteresting C library stuff and/or
+ gobbledygook. Alternatively, if <code class="function">main</code> is not
+ present in the stack trace, stack traces will not show any functions
+ below <code class="function">main</code>-like functions such as glibc's
+ <code class="function">__libc_start_main</code>. Furthermore, if
+ <code class="function">main</code>-like functions are present in the trace,
+ they are normalised as <code class="function">(below main)</code>, in order to
+ make the output more deterministic.</p>
+<p>If this option is enabled, all stack trace entries will be
+ shown and <code class="function">main</code>-like functions will not be
+ normalised.</p>
+</dd>
+<dt>
+<a name="opt.fullpath-after"></a><span class="term">
+ <code class="option">--fullpath-after=<string>
+ [default: don't show source paths] </code>
+ </span>
+</dt>
+<dd>
+<p>By default Valgrind only shows the filenames in stack
+ traces, but not full paths to source files. When using Valgrind
+ in large projects where the sources reside in multiple different
+ directories, this can be inconvenient.
+ <code class="option">--fullpath-after</code> provides a flexible solution
+ to this problem. When this option is present, the path to each
+ source file is shown, with the following all-important caveat:
+ if <code class="option">string</code> is found in the path, then the path
+ up to and including <code class="option">string</code> is omitted, else the
+ path is shown unmodified. Note that <code class="option">string</code> is
+ not required to be a prefix of the path.</p>
+<p>For example, consider a file named
+ <code class="computeroutput">/home/janedoe/blah/src/foo/bar/xyzzy.c</code>.
+ Specifying <code class="option">--fullpath-after=/home/janedoe/blah/src/</code>
+ will cause Valgrind to show the name
+ as <code class="computeroutput">foo/bar/xyzzy.c</code>.</p>
+<p>Because the string is not required to be a prefix,
+ <code class="option">--fullpath-after=src/</code> will produce the same
+ output. This is useful when the path contains arbitrary
+ machine-generated characters. For example, the
+ path
+ <code class="computeroutput">/my/build/dir/C32A1B47/blah/src/foo/xyzzy</code>
+ can be pruned to <code class="computeroutput">foo/xyzzy</code>
+ using
+ <code class="option">--fullpath-after=/blah/src/</code>.</p>
+<p>If you simply want to see the full path, just specify an
+ empty string: <code class="option">--fullpath-after=</code>. This isn't a
+ special case, merely a logical consequence of the above rules.</p>
+<p>Finally, you can use <code class="option">--fullpath-after</code>
+ multiple times. Any appearance of it causes Valgrind to switch
+ to producing full paths and applying the above filtering rule.
+ Each produced path is compared against all
+ the <code class="option">--fullpath-after</code>-specified strings, in the
+ order specified. The first string to match causes the path to
+ be truncated as described above. If none match, the full path
+ is shown. This facilitates chopping off prefixes when the
+ sources are drawn from a number of unrelated directories.
+ </p>
+</dd>
+<dt>
+<a name="opt.extra-debuginfo-path"></a><span class="term">
+ <code class="option">--extra-debuginfo-path=<path> [default: undefined and unused] </code>
+ </span>
+</dt>
+<dd>
+<p>By default Valgrind searches in several well-known paths
+ for debug objects, such
+ as <code class="computeroutput">/usr/lib/debug/</code>.</p>
+<p>However, there may be scenarios where you may wish to put
+ debug objects at an arbitrary location, such as external storage
+ when running Valgrind on a mobile device with limited local
+ storage. Another example might be a situation where you do not
+ have permission to install debug object packages on the system
+ where you are running Valgrind.</p>
+<p>In these scenarios, you may provide an absolute path as an extra,
+ final place for Valgrind to search for debug objects by specifying
+ <code class="option">--extra-debuginfo-path=/path/to/debug/objects</code>.
+ The given path will be prepended to the absolute path name of
+ the searched-for object. For example, if Valgrind is looking
+ for the debuginfo
+ for <code class="computeroutput">/w/x/y/zz.so</code>
+ and <code class="option">--extra-debuginfo-path=/a/b/c</code> is specified,
+ it will look for a debug object at
+ <code class="computeroutput">/a/b/c/w/x/y/zz.so</code>.</p>
+<p>This flag should only be specified once. If it is
+ specified multiple times, only the last instance is
+ honoured.</p>
+</dd>
+<dt>
+<a name="opt.debuginfo-server"></a><span class="term">
+ <code class="option">--debuginfo-server=ipaddr:port [default: undefined and unused]</code>
+ </span>
+</dt>
+<dd>
+<p>This is a new, experimental, feature introduced in version
+ 3.9.0.</p>
+<p>In some scenarios it may be convenient to read debuginfo
+ from objects stored on a different machine. With this flag,
+ Valgrind will query a debuginfo server running
+ on <code class="computeroutput">ipaddr</code> and listening on
+ port <code class="computeroutput">port</code>, if it cannot find
+ the debuginfo object in the local filesystem.</p>
+<p>The debuginfo server must accept TCP connections on
+ port <code class="computeroutput">port</code>. The debuginfo
+ server is contained in the source
+ file <code class="computeroutput">auxprogs/valgrind-di-server.c</code>.
+ It will only serve from the directory it is started
+ in. <code class="computeroutput">port</code> defaults to 1500 in
+ both client and server if not specified.</p>
+<p>If Valgrind looks for the debuginfo for
+ <code class="computeroutput">/w/x/y/zz.so</code> by using the
+ debuginfo server, it will strip the pathname components and
+ merely request <code class="computeroutput">zz.so</code> on the
+ server. That in turn will look only in its current working
+ directory for a matching debuginfo object.</p>
+<p>The debuginfo data is transmitted in small fragments (8
+ KB) as requested by Valgrind. Each block is compressed using
+ LZO to reduce transmission time. The implementation has been
+ tuned for best performance over a single-stage 802.11g (WiFi)
+ network link.</p>
+<p>Note that checks for matching primary vs debug objects,
+   using the GNU debuglink CRC scheme, are performed even when using
+ the debuginfo server. To disable such checking, you need to
+ also specify
+ <code class="computeroutput">--allow-mismatched-debuginfo=yes</code>.
+ </p>
+<p>By default the Valgrind build system will
+ build <code class="computeroutput">valgrind-di-server</code> for
+ the target platform, which is almost certainly not what you
+ want. So far we have been unable to find out how to get
+ automake/autoconf to build it for the build platform. If
+ you want to use it, you will have to recompile it by hand using
+ the command shown at the top
+ of <code class="computeroutput">auxprogs/valgrind-di-server.c</code>.</p>
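+<p>A sketch of the setup (the IP address and the program
+   <code class="computeroutput">./myprog</code> are hypothetical): on the
+   machine holding the debug objects, start the server from the directory
+   containing them (by default it listens on port 1500, as noted
+   above):</p>
+<pre class="programlisting">
+./valgrind-di-server</pre>
+<p>and on the machine running the program:</p>
+<pre class="programlisting">
+valgrind --debuginfo-server=192.168.0.2:1500 ./myprog</pre>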
+</dd>
+<dt>
+<a name="opt.allow-mismatched-debuginfo"></a><span class="term">
+ <code class="option">--allow-mismatched-debuginfo=no|yes [no] </code>
+ </span>
+</dt>
+<dd>
+<p>When reading debuginfo from separate debuginfo objects,
+ Valgrind will by default check that the main and debuginfo
+ objects match, using the GNU debuglink mechanism. This
+ guarantees that it does not read debuginfo from out of date
+ debuginfo objects, and also ensures that Valgrind can't crash as
+ a result of mismatches.</p>
+<p>This check can be overridden using
+ <code class="computeroutput">--allow-mismatched-debuginfo=yes</code>.
+ This may be useful when the debuginfo and main objects have not
+ been split in the proper way. Be careful when using this,
+ though: it disables all consistency checking, and Valgrind has
+ been observed to crash when the main and debuginfo objects don't
+ match.</p>
+</dd>
+<dt>
+<a name="opt.suppressions"></a><span class="term">
+ <code class="option">--suppressions=<filename> [default: $PREFIX/lib/valgrind/default.supp] </code>
+ </span>
+</dt>
+<dd><p>Specifies an extra file from which to read descriptions of
+ errors to suppress. You may use up to 100 extra suppression
+ files.</p></dd>
+<dt>
+<a name="opt.gen-suppressions"></a><span class="term">
+ <code class="option">--gen-suppressions=<yes|no|all> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>When set to <code class="varname">yes</code>, Valgrind will pause
+ after every error shown and print the line:
+ </p>
+<div class="literallayout"><p><code class="computeroutput"> ---- Print suppression ? --- [Return/N/n/Y/y/C/c] ----</code></p></div>
+<p>
+
+ Pressing <code class="varname">Ret</code>, or <code class="varname">N Ret</code> or
+   <code class="varname">n Ret</code>, causes Valgrind to continue execution without
+ printing a suppression for this error.</p>
+<p>Pressing <code class="varname">Y Ret</code> or
+ <code class="varname">y Ret</code> causes Valgrind to write a suppression
+ for this error. You can then cut and paste it into a suppression file
+ if you don't want to hear about the error in the future.</p>
+<p>When set to <code class="varname">all</code>, Valgrind will print a
+ suppression for every reported error, without querying the
+ user.</p>
+<p>This option is particularly useful with C++ programs, as it
+ prints out the suppressions with mangled names, as
+ required.</p>
+<p>Note that the suppressions printed are as specific as
+ possible. You may want to common up similar ones, by adding
+ wildcards to function names, and by using frame-level wildcards.
+ The wildcarding facilities are powerful yet flexible, and with a
+ bit of careful editing, you may be able to suppress a whole
+ family of related errors with only a few suppressions.
+
+ </p>
+<p>Sometimes two different errors
+ are suppressed by the same suppression, in which case Valgrind
+ will output the suppression more than once, but you only need to
+ have one copy in your suppression file (but having more than one
+ won't cause problems). Also, the suppression name is given as
+ <code class="computeroutput"><insert a suppression name
+ here></code>; the name doesn't really matter, it's
+ only used with the <code class="option">-v</code> option which prints out all
+ used suppression records.</p>
+</dd>
+<dt>
+<a name="opt.input-fd"></a><span class="term">
+ <code class="option">--input-fd=<number> [default: 0, stdin] </code>
+ </span>
+</dt>
+<dd><p>When using
+ <code class="option">--gen-suppressions=yes</code>, Valgrind will stop so as
+ to read keyboard input from you when each error occurs. By
+ default it reads from the standard input (stdin), which is
+ problematic for programs which close stdin. This option allows
+ you to specify an alternative file descriptor from which to read
+ input.</p></dd>
+<dt>
+<a name="opt.dsymutil"></a><span class="term">
+ <code class="option">--dsymutil=no|yes [yes] </code>
+ </span>
+</dt>
+<dd>
+<p>This option is only relevant when running Valgrind on
+ Mac OS X.</p>
+<p>Mac OS X uses a deferred debug information (debuginfo)
+ linking scheme. When object files containing debuginfo are
+ linked into a <code class="computeroutput">.dylib</code> or an
+ executable, the debuginfo is not copied into the final file.
+ Instead, the debuginfo must be linked manually by
+ running <code class="computeroutput">dsymutil</code>, a
+ system-provided utility, on the executable
+ or <code class="computeroutput">.dylib</code>. The resulting
+ combined debuginfo is placed in a directory alongside the
+ executable or <code class="computeroutput">.dylib</code>, but with
+ the extension <code class="computeroutput">.dSYM</code>.</p>
+<p>With <code class="option">--dsymutil=no</code>, Valgrind
+ will detect cases where the
+ <code class="computeroutput">.dSYM</code> directory is either
+ missing, or is present but does not appear to match the
+ associated executable or <code class="computeroutput">.dylib</code>,
+ most likely because it is out of date. In these cases, Valgrind
+ will print a warning message but take no further action.</p>
+<p>With <code class="option">--dsymutil=yes</code>, Valgrind
+ will, in such cases, automatically
+ run <code class="computeroutput">dsymutil</code> as necessary to
+ bring the debuginfo up to date. For all practical purposes, if
+ you always use <code class="option">--dsymutil=yes</code>, then
+ there is never any need to
+ run <code class="computeroutput">dsymutil</code> manually or as part
+   of your application's build system, since Valgrind will run it
+ as necessary.</p>
+<p>Valgrind will not attempt to
+ run <code class="computeroutput">dsymutil</code> on any
+ executable or library in
+ <code class="computeroutput">/usr/</code>,
+ <code class="computeroutput">/bin/</code>,
+ <code class="computeroutput">/sbin/</code>,
+ <code class="computeroutput">/opt/</code>,
+ <code class="computeroutput">/sw/</code>,
+ <code class="computeroutput">/System/</code>,
+ <code class="computeroutput">/Library/</code> or
+ <code class="computeroutput">/Applications/</code>
+ since <code class="computeroutput">dsymutil</code> will always fail
+ in such situations. It fails both because the debuginfo for
+ such pre-installed system components is not available anywhere,
+ and also because it would require write privileges in those
+ directories.</p>
+<p>Be careful when
+ using <code class="option">--dsymutil=yes</code>, since it will
+ cause pre-existing <code class="computeroutput">.dSYM</code>
+ directories to be silently deleted and re-created. Also note that
+ <code class="computeroutput">dsymutil</code> is quite slow, sometimes
+ excessively so.</p>
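+<p>As a brief sketch (the program name here is only a placeholder),
+   you can either let Valgrind keep the debuginfo up to date:</p>
+<pre class="programlisting">
+valgrind --dsymutil=yes ./myprog</pre>
+<p>or run the system utility manually after each build, which creates
+   a <code class="computeroutput">myprog.dSYM</code> directory alongside
+   the executable:</p>
+<pre class="programlisting">
+dsymutil ./myprog</pre>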
+</dd>
+<dt>
+<a name="opt.max-stackframe"></a><span class="term">
+ <code class="option">--max-stackframe=<number> [default: 2000000] </code>
+ </span>
+</dt>
+<dd>
+<p>The maximum size of a stack frame. If the stack pointer moves by
+ more than this amount then Valgrind will assume that
+ the program is switching to a different stack.</p>
+<p>You may need to use this option if your program has large
+ stack-allocated arrays. Valgrind keeps track of your program's
+ stack pointer. If it changes by more than the threshold amount,
+ Valgrind assumes your program is switching to a different stack,
+ and Memcheck behaves differently than it would for a stack pointer
+ change smaller than the threshold. Usually this heuristic works
+ well. However, if your program allocates large structures on the
+ stack, this heuristic will be fooled, and Memcheck will
+ subsequently report large numbers of invalid stack accesses. This
+ option allows you to change the threshold to a different
+ value.</p>
+<p>You should only consider use of this option if Valgrind's
+ debug output directs you to do so. In that case it will tell you
+ the new threshold you should specify.</p>
+<p>In general, allocating large structures on the stack is a
+ bad idea, because you can easily run out of stack space,
+ especially on systems with limited memory or which expect to
+ support large numbers of threads each with a small stack, and also
+ because the error checking performed by Memcheck is more effective
+ for heap-allocated data than for stack-allocated data. If you
+ have to use this option, you may wish to consider rewriting your
+ code to allocate on the heap rather than on the stack.</p>
+</dd>
+<dt>
+<a name="opt.main-stacksize"></a><span class="term">
+ <code class="option">--main-stacksize=<number>
+ [default: use current 'ulimit' value] </code>
+ </span>
+</dt>
+<dd>
+<p>Specifies the size of the main thread's stack.</p>
+<p>To simplify its memory management, Valgrind reserves all
+ required space for the main thread's stack at startup. That
+ means it needs to know the required stack size at
+ startup.</p>
+<p>By default, Valgrind uses the current "ulimit" value for
+ the stack size, or 16 MB, whichever is lower. In many cases
+ this gives a stack size in the range 8 to 16 MB, which almost
+ never overflows for most applications.</p>
+<p>If you need a larger total stack size,
+ use <code class="option">--main-stacksize</code> to specify it. Only set
+ it as high as you need, since reserving far more space than you
+ need (that is, hundreds of megabytes more than you need)
+ constrains Valgrind's memory allocators and may reduce the total
+ amount of memory that Valgrind can use. This is only really of
+ significance on 32-bit machines.</p>
+<p>On Linux, you may request a stack of size up to 2GB.
+ Valgrind will stop with a diagnostic message if the stack cannot
+ be allocated.</p>
+<p><code class="option">--main-stacksize</code> only affects the stack
+ size for the program's initial thread. It has no bearing on the
+ size of thread stacks, as Valgrind does not allocate
+ those.</p>
+<p>You may need to use both <code class="option">--main-stacksize</code>
+ and <code class="option">--max-stackframe</code> together. It is important
+ to understand that <code class="option">--main-stacksize</code> sets the
+ maximum total stack size,
+ whilst <code class="option">--max-stackframe</code> specifies the largest
+ size of any one stack frame. You will have to work out
+ the <code class="option">--main-stacksize</code> value for yourself
+   (usually, if your application segfaults). But Valgrind will
+ tell you the needed <code class="option">--max-stackframe</code> size, if
+ necessary.</p>
+<p>As discussed further in the description
+ of <code class="option">--max-stackframe</code>, a requirement for a large
+ stack is a sign of potential portability problems. You are best
+ advised to place all large data in heap-allocated memory.</p>
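+<p>As an illustration (the program name and the sizes are only
+   examples), a program with a roughly 48 MB main stack and
+   individual stack frames of up to 4 MB might be run as:</p>
+<pre class="programlisting">
+valgrind --main-stacksize=50000000 --max-stackframe=4000000 ./myprog</pre>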
+</dd>
+<dt>
+<a name="opt.max-threads"></a><span class="term">
+ <code class="option">--max-threads=<number> [default: 500] </code>
+ </span>
+</dt>
+<dd><p>By default, Valgrind can handle up to 500 threads.
+ Occasionally, that number is too small. Use this option to
+ provide a different limit. E.g.
+ <code class="computeroutput">--max-threads=3000</code>.
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.mallocopts"></a>2.6.4. malloc-related Options</h3></div></div></div>
+<p><a name="malloc-related.opts.para"></a>For tools that use their own version of
+<code class="computeroutput">malloc</code> (e.g. Memcheck,
+Massif, Helgrind, DRD), the following options apply.</p>
+<div class="variablelist">
+<a name="malloc-related.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.alignment"></a><span class="term">
+ <code class="option">--alignment=<number> [default: 8 or 16, depending on the platform] </code>
+ </span>
+</dt>
+<dd><p>By default Valgrind's <code class="function">malloc</code>,
+ <code class="function">realloc</code>, etc, return a block whose starting
+ address is 8-byte aligned or 16-byte aligned (the value depends on the
+ platform and matches the platform default). This option allows you to
+ specify a different alignment. The supplied value must be greater
+ than or equal to the default, less than or equal to 4096, and must be
+ a power of two.</p></dd>
+<dt>
+<a name="opt.redzone-size"></a><span class="term">
+ <code class="option">--redzone-size=<number> [default: depends on the tool] </code>
+ </span>
+</dt>
+<dd>
+<p> Valgrind's <code class="function">malloc, realloc,</code> etc, add
+ padding blocks before and after each heap block allocated by the
+ program being run. Such padding blocks are called redzones. The
+ default value for the redzone size depends on the tool. For
+ example, Memcheck adds and protects a minimum of 16 bytes before
+ and after each block allocated by the client. This allows it to
+ detect block underruns or overruns of up to 16 bytes.
+ </p>
+<p>Increasing the redzone size makes it possible to detect
+ overruns of larger distances, but increases the amount of memory
+ used by Valgrind. Decreasing the redzone size will reduce the
+ memory needed by Valgrind but also reduces the chances of
+ detecting over/underruns, so is not recommended.</p>
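+<p>For example, to let Memcheck detect over/underruns of up to 128
+   bytes around each heap block (the program name is only a
+   placeholder), you might run:</p>
+<pre class="programlisting">
+valgrind --redzone-size=128 ./myprog</pre>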
+</dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.rareopts"></a>2.6.5. Uncommon Options</h3></div></div></div>
+<p><a name="uncommon.opts.para"></a>These options apply to all tools, as they
+affect certain obscure workings of the Valgrind core. Most people won't
+need to use them.</p>
+<div class="variablelist">
+<a name="uncommon.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.smc-check"></a><span class="term">
+ <code class="option">--smc-check=<none|stack|all|all-non-file>
+ [default: all-non-file for x86/amd64/s390x, stack for other archs] </code>
+ </span>
+</dt>
+<dd>
+<p>This option controls Valgrind's detection of self-modifying
+ code. If no checking is done, when a program executes some code, then
+ overwrites it with new code, and executes the new code, Valgrind will
+ continue to execute the translations it made for the old code. This
+ will likely lead to incorrect behaviour and/or crashes.</p>
+<p>For "modern" architectures -- anything that's not x86,
+ amd64 or s390x -- the default is <code class="varname">stack</code>.
+ This is because a correct program must take explicit action
+ to reestablish D-I cache coherence following code
+ modification. Valgrind observes and honours such actions,
+ with the result that self-modifying code is transparently
+ handled with zero extra cost.</p>
+<p>For x86, amd64 and s390x, the program is not required to
+ notify the hardware of required D-I coherence syncing. Hence
+ the default is <code class="varname">all-non-file</code>, which covers
+ the normal case of generating code into an anonymous
+ (non-file-backed) mmap'd area.</p>
+<p>The meanings of the four available settings are as
+ follows. No detection (<code class="varname">none</code>),
+ detect self-modifying code
+ on the stack (which is used by GCC to implement nested
+ functions) (<code class="varname">stack</code>), detect self-modifying code
+ everywhere (<code class="varname">all</code>), and detect
+ self-modifying code everywhere except in file-backed
+ mappings (<code class="varname">all-non-file</code>).</p>
+<p>Running with <code class="varname">all</code> will slow Valgrind
+ down noticeably. Running with <code class="varname">none</code> will
+ rarely speed things up, since very little code gets
+ dynamically generated in most programs. The
+ <code class="function">VALGRIND_DISCARD_TRANSLATIONS</code> client
+ request is an alternative to <code class="option">--smc-check=all</code>
+ and <code class="option">--smc-check=all-non-file</code>
+ that requires more programmer effort but allows Valgrind to run
+ your program faster, by telling it precisely when translations
+ need to be re-made.
+
+ </p>
+<p><code class="option">--smc-check=all-non-file</code> provides a
+ cheaper but more limited version
+ of <code class="option">--smc-check=all</code>. It adds checks to any
+ translations that do not originate from file-backed memory
+ mappings. Typical applications that generate code, for example
+ JITs in web browsers, generate code into anonymous mmaped areas,
+ whereas the "fixed" code of the browser always lives in
+ file-backed mappings. <code class="option">--smc-check=all-non-file</code>
+ takes advantage of this observation, limiting the overhead of
+ checking to code which is likely to be JIT generated.</p>
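+<p>For instance, a program that JIT-compiles code into anonymous
+   mmap'd memory on an architecture where the default is
+   <code class="varname">stack</code> might be run as follows; the
+   program name is only a placeholder:</p>
+<pre class="programlisting">
+valgrind --smc-check=all-non-file ./my_jit_program</pre>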
+</dd>
+<dt>
+<a name="opt.read-inline-info"></a><span class="term">
+ <code class="option">--read-inline-info=<yes|no> [default: see below] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled, Valgrind will read information about inlined
+ function calls from DWARF3 debug info. This slows Valgrind
+ startup and makes it use more memory (typically for each inlined
+ piece of code, 6 words and space for the function name), but it
+ results in more descriptive stacktraces. For the 3.10.0
+ release, this functionality is enabled by default only for Linux,
+ Android and Solaris targets and only for the tools Memcheck, Helgrind
+ and DRD. Here is an example of some stacktraces with
+ <code class="option">--read-inline-info=no</code>:
+</p>
+<pre class="programlisting">
+==15380== Conditional jump or move depends on uninitialised value(s)
+==15380== at 0x80484EA: main (inlinfo.c:6)
+==15380==
+==15380== Conditional jump or move depends on uninitialised value(s)
+==15380== at 0x8048550: fun_noninline (inlinfo.c:6)
+==15380== by 0x804850E: main (inlinfo.c:34)
+==15380==
+==15380== Conditional jump or move depends on uninitialised value(s)
+==15380== at 0x8048520: main (inlinfo.c:6)
+</pre>
+<p>And here are the same errors with
+ <code class="option">--read-inline-info=yes</code>:</p>
+<pre class="programlisting">
+==15377== Conditional jump or move depends on uninitialised value(s)
+==15377== at 0x80484EA: fun_d (inlinfo.c:6)
+==15377== by 0x80484EA: fun_c (inlinfo.c:14)
+==15377== by 0x80484EA: fun_b (inlinfo.c:20)
+==15377== by 0x80484EA: fun_a (inlinfo.c:26)
+==15377== by 0x80484EA: main (inlinfo.c:33)
+==15377==
+==15377== Conditional jump or move depends on uninitialised value(s)
+==15377== at 0x8048550: fun_d (inlinfo.c:6)
+==15377== by 0x8048550: fun_noninline (inlinfo.c:41)
+==15377== by 0x804850E: main (inlinfo.c:34)
+==15377==
+==15377== Conditional jump or move depends on uninitialised value(s)
+==15377== at 0x8048520: fun_d (inlinfo.c:6)
+==15377== by 0x8048520: main (inlinfo.c:35)
+</pre>
+</dd>
+<dt>
+<a name="opt.read-var-info"></a><span class="term">
+ <code class="option">--read-var-info=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled, Valgrind will read information about
+ variable types and locations from DWARF3 debug info.
+ This slows Valgrind startup significantly and makes it use significantly
+ more memory, but for the tools that can take advantage of it (Memcheck,
+ Helgrind, DRD) it can result in more precise error messages. For example,
+ here are some standard errors issued by Memcheck:</p>
+<pre class="programlisting">
+==15363== Uninitialised byte(s) found during client check request
+==15363== at 0x80484A9: croak (varinfo1.c:28)
+==15363== by 0x8048544: main (varinfo1.c:55)
+==15363== Address 0x80497f7 is 7 bytes inside data symbol "global_i2"
+==15363==
+==15363== Uninitialised byte(s) found during client check request
+==15363== at 0x80484A9: croak (varinfo1.c:28)
+==15363== by 0x8048550: main (varinfo1.c:56)
+==15363== Address 0xbea0d0cc is on thread 1's stack
+==15363== in frame #1, created by main (varinfo1.c:45)
+</pre>
+<p>And here are the same errors with
+ <code class="option">--read-var-info=yes</code>:</p>
+<pre class="programlisting">
+==15370== Uninitialised byte(s) found during client check request
+==15370== at 0x80484A9: croak (varinfo1.c:28)
+==15370== by 0x8048544: main (varinfo1.c:55)
+==15370== Location 0x80497f7 is 0 bytes inside global_i2[7],
+==15370== a global variable declared at varinfo1.c:41
+==15370==
+==15370== Uninitialised byte(s) found during client check request
+==15370== at 0x80484A9: croak (varinfo1.c:28)
+==15370== by 0x8048550: main (varinfo1.c:56)
+==15370== Location 0xbeb4a0cc is 0 bytes inside local var "local"
+==15370== declared at varinfo1.c:46, in frame #1 of thread 1
+</pre>
+</dd>
+<dt>
+<a name="opt.vgdb-poll"></a><span class="term">
+ <code class="option">--vgdb-poll=<number> [default: 5000] </code>
+ </span>
+</dt>
+<dd><p> As part of its main loop, the Valgrind scheduler will
+   poll to check whether some activity (such as an external command or
+   some input from GDB) has to be handled by gdbserver. This
+   activity poll will be done after running the given number of
+   basic blocks (or slightly more than the given number of basic
+   blocks). This poll is quite cheap, so the default value is set
+   relatively low. You might further decrease this value if vgdb
+   cannot use the ptrace system call to interrupt Valgrind when all
+   threads are (most of the time) blocked in a system call.
+ </p></dd>
+<dt>
+<a name="opt.vgdb-shadow-registers"></a><span class="term">
+ <code class="option">--vgdb-shadow-registers=no|yes [default: no] </code>
+ </span>
+</dt>
+<dd><p> When activated, gdbserver will expose the Valgrind shadow registers
+ to GDB. With this, the value of the Valgrind shadow registers can be examined
+ or changed using GDB. Exposing shadow registers only works with GDB version
+ 7.1 or later.
+ </p></dd>
+<dt>
+<a name="opt.vgdb-prefix"></a><span class="term">
+ <code class="option">--vgdb-prefix=<prefix> [default: /tmp/vgdb-pipe] </code>
+ </span>
+</dt>
+<dd><p> To communicate with gdb/vgdb, the Valgrind gdbserver
+ creates 3 files (2 named FIFOs and a mmap shared memory
+ file). The prefix option controls the directory and prefix for
+ the creation of these files.
+ </p></dd>
+<dt>
+<a name="opt.run-libc-freeres"></a><span class="term">
+ <code class="option">--run-libc-freeres=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>This option is only relevant when running Valgrind on Linux.</p>
+<p>The GNU C library (<code class="function">libc.so</code>), which is
+ used by all programs, may allocate memory for its own uses.
+ Usually it doesn't bother to free that memory when the program
+ ends—there would be no point, since the Linux kernel reclaims
+ all process resources when a process exits anyway, so it would
+ just slow things down.</p>
+<p>The glibc authors realised that this behaviour causes leak
+ checkers, such as Valgrind, to falsely report leaks in glibc, when
+ a leak check is done at exit. In order to avoid this, they
+ provided a routine called <code class="function">__libc_freeres</code>
+ specifically to make glibc release all memory it has allocated.
+ Memcheck therefore tries to run
+ <code class="function">__libc_freeres</code> at exit.</p>
+<p>Unfortunately, in some very old versions of glibc,
+ <code class="function">__libc_freeres</code> is sufficiently buggy to cause
+ segmentation faults. This was particularly noticeable on Red Hat
+ 7.1. So this option is provided in order to inhibit the run of
+ <code class="function">__libc_freeres</code>. If your program seems to run
+ fine on Valgrind, but segfaults at exit, you may find that
+ <code class="option">--run-libc-freeres=no</code> fixes that, although at the
+ cost of possibly falsely reporting space leaks in
+ <code class="filename">libc.so</code>.</p>
+</dd>
+<dt>
+<a name="opt.run-cxx-freeres"></a><span class="term">
+ <code class="option">--run-cxx-freeres=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>This option is only relevant when running Valgrind on Linux
+ or Solaris C++ programs.</p>
+<p>The GNU Standard C++ library (<code class="function">libstdc++.so</code>),
+ which is used by all C++ programs compiled with g++, may allocate memory
+ for its own uses. Usually it doesn't bother to free that memory when
+ the program ends—there would be no point, since the kernel reclaims
+ all process resources when a process exits anyway, so it would
+ just slow things down.</p>
+<p>The gcc authors realised that this behaviour causes leak
+ checkers, such as Valgrind, to falsely report leaks in libstdc++, when
+ a leak check is done at exit. In order to avoid this, they
+ provided a routine called <code class="function">__gnu_cxx::__freeres</code>
+ specifically to make libstdc++ release all memory it has allocated.
+ Memcheck therefore tries to run
+ <code class="function">__gnu_cxx::__freeres</code> at exit.</p>
+<p>For the sake of flexibility and unforeseen problems with
+ <code class="function">__gnu_cxx::__freeres</code>, option
+ <code class="option">--run-cxx-freeres=no</code> exists,
+ although at the cost of possibly falsely reporting space leaks in
+ <code class="filename">libstdc++.so</code>.</p>
+</dd>
+<dt>
+<a name="opt.sim-hints"></a><span class="term">
+ <code class="option">--sim-hints=hint1,hint2,... </code>
+ </span>
+</dt>
+<dd>
+<p>Pass miscellaneous hints to Valgrind which slightly modify
+ the simulated behaviour in nonstandard or dangerous ways, possibly
+ to help the simulation of strange features. By default no hints
+ are enabled. Use with caution! Currently known hints are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="option">lax-ioctls: </code> Be very lax about ioctl
+ handling; the only assumption is that the size is
+ correct. Doesn't require the full buffer to be initialised
+ when writing. Without this, using some device drivers with a
+ large number of strange ioctl commands becomes very
+ tiresome.</p></li>
+<li class="listitem"><p><code class="option">fuse-compatible: </code> Enable special
+ handling for certain system calls that may block in a FUSE
+ file-system. This may be necessary when running Valgrind
+ on a multi-threaded program that uses one thread to manage
+ a FUSE file-system and another thread to access that
+ file-system.
+ </p></li>
+<li class="listitem"><p><code class="option">enable-outer: </code> Enable some special
+ magic needed when the program being run is itself
+ Valgrind.</p></li>
+<li class="listitem"><p><code class="option">no-inner-prefix: </code> Disable printing
+ a prefix <code class="option">></code> in front of each stdout or
+ stderr output line in an inner Valgrind being run by an
+ outer Valgrind. This is useful when running Valgrind
+ regression tests in an outer/inner setup. Note that the
+ prefix <code class="option">></code> will always be printed in
+ front of the inner debug logging lines.</p></li>
+<li class="listitem">
+<p><code class="option">no-nptl-pthread-stackcache: </code>
+ This hint is only relevant when running Valgrind on Linux.</p>
+<p>The GNU glibc pthread library
+ (<code class="function">libpthread.so</code>), which is used by
+ pthread programs, maintains a cache of pthread stacks.
+ When a pthread terminates, the memory used for the pthread
+ stack and some thread local storage related data structure
+ are not always directly released. This memory is kept in
+ a cache (up to a certain size), and is re-used if a new
+ thread is started.</p>
+<p>This cache causes the helgrind tool to report some
+ false positive race condition errors on this cached
+ memory, as helgrind does not understand the internal glibc
+ cache synchronisation primitives. So, when using helgrind,
+ disabling the cache helps to avoid false positive race
+ conditions, in particular when using thread local storage
+ variables (e.g. variables using the
+   <code class="function">__thread</code> qualifier); see the usage
+   sketch after this list.</p>
+<p>When using the memcheck tool, disabling the cache
+ ensures the memory used by glibc to handle __thread
+ variables is directly released when a thread
+ terminates.</p>
+<p>Note: Valgrind disables the cache using some internal
+ knowledge of the glibc stack cache implementation and by
+ examining the debug information of the pthread
+ library. This technique is thus somewhat fragile and might
+   not work for all glibc versions. This has been successfully
+ tested with various glibc versions (e.g. 2.11, 2.16, 2.18)
+ on various platforms.</p>
+</li>
+<li class="listitem"><p><code class="option">lax-doors: </code> (Solaris only) Be very lax
+ about door syscall handling over unrecognised door file
+   descriptors. Does not require that the full buffer is initialised
+   when writing. Without this, programs using libdoor(3LIB)
+   functionality with completely proprietary semantics may report
+   a large number of false positives.</p></li>
+</ul></div>
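+<p>As a usage sketch (the tool choice and program name are only
+   illustrative), disabling the glibc stack cache while running
+   Helgrind on a program that makes heavy use of thread-local
+   storage might look like:</p>
+<pre class="programlisting">
+valgrind --tool=helgrind --sim-hints=no-nptl-pthread-stackcache ./myprog</pre>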
+</dd>
+<dt>
+<a name="opt.fair-sched"></a><span class="term">
+ <code class="option">--fair-sched=<no|yes|try> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>The <code class="option">--fair-sched</code> option controls
+ the locking mechanism used by Valgrind to serialise thread
+ execution. The locking mechanism controls the way the threads
+ are scheduled, and different settings give different trade-offs
+ between fairness and performance. For more details about the
+ Valgrind thread serialisation scheme and its impact on
+ performance and thread scheduling, see
+ <a class="xref" href="manual-core.html#manual-core.pthreads_perf_sched" title="2.7.1. Scheduling and Multi-Thread Performance">Scheduling and Multi-Thread Performance</a>.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>The value <code class="option">--fair-sched=yes</code>
+ activates a fair scheduler. In short, if multiple threads are
+ ready to run, the threads will be scheduled in a round robin
+ fashion. This mechanism is not available on all platforms or
+ Linux versions. If not available,
+ using <code class="option">--fair-sched=yes</code> will cause Valgrind to
+ terminate with an error.</p>
+<p>You may find this setting improves overall
+ responsiveness if you are running an interactive
+ multithreaded program, for example a web browser, on
+ Valgrind.</p>
+</li>
+<li class="listitem"><p>The value <code class="option">--fair-sched=try</code>
+ activates fair scheduling if available on the
+ platform. Otherwise, it will automatically fall back
+ to <code class="option">--fair-sched=no</code>.</p></li>
+<li class="listitem"><p>The value <code class="option">--fair-sched=no</code> activates
+ a scheduler which does not guarantee fairness
+ between threads ready to run, but which in general gives the
+ highest performance.</p></li>
+</ul></div>
+</dd>
+<dt>
+<a name="opt.kernel-variant"></a><span class="term">
+ <code class="option">--kernel-variant=variant1,variant2,...</code>
+ </span>
+</dt>
+<dd>
+<p>Handle system calls and ioctls arising from minor variants
+ of the default kernel for this platform. This is useful for
+ running on hacked kernels or with kernel modules which support
+ nonstandard ioctls, for example. Use with caution. If you don't
+ understand what this option does then you almost certainly don't
+ need it. Currently known variants are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="option">bproc</code>: support the
+ <code class="function">sys_broc</code> system call on x86. This is for
+ running on BProc, which is a minor variant of standard Linux which
+ is sometimes used for building clusters.
+ </p></li>
+<li class="listitem"><p><code class="option">android-no-hw-tls</code>: some
+ versions of the Android emulator for ARM do not provide a
+   hardware TLS (thread-local storage) register, and Valgrind
+ crashes at startup. Use this variant to select software
+ support for TLS.
+ </p></li>
+<li class="listitem"><p><code class="option">android-gpu-sgx5xx</code>: use this to
+ support handling of proprietary ioctls for the PowerVR SGX
+ 5XX series of GPUs on Android devices. Failure to select
+ this does not cause stability problems, but may cause
+ Memcheck to report false errors after the program performs
+ GPU-specific ioctls.
+ </p></li>
+<li class="listitem"><p><code class="option">android-gpu-adreno3xx</code>: similarly, use
+ this to support handling of proprietary ioctls for the
+ Qualcomm Adreno 3XX series of GPUs on Android devices.
+ </p></li>
+</ul></div>
+</dd>
+<dt>
+<a name="opt.merge-recursive-frames"></a><span class="term">
+ <code class="option">--merge-recursive-frames=<number> [default: 0] </code>
+ </span>
+</dt>
+<dd>
+<p>Some recursive algorithms, for example balanced binary
+ tree implementations, create many different stack traces, each
+ containing cycles of calls. A cycle is defined as two identical
+ program counter values separated by zero or more other program
+ counter values. Valgrind may then use a lot of memory to store
+ all these stack traces. This is a poor use of memory
+ considering that such stack traces contain repeated
+ uninteresting recursive calls instead of more interesting
+ information such as the function that has initiated the
+ recursive call.
+ </p>
+<p>The option <code class="option">--merge-recursive-frames=<number></code>
+ instructs Valgrind to detect and merge recursive call cycles
+ having a size of up to <code class="option"><number></code>
+ frames. When such a cycle is detected, Valgrind records the
+ cycle in the stack trace as a unique program counter.
+ </p>
+<p>
+ The value 0 (the default) causes no recursive call merging.
+ A value of 1 will cause stack traces of simple recursive algorithms
+ (for example, a factorial implementation) to be collapsed.
+ A value of 2 will usually be needed to collapse stack traces produced
+ by recursive algorithms such as binary trees, quick sort, etc.
+ Higher values might be needed for more complex recursive algorithms.
+ </p>
+<p>Note: recursive calls are detected by analysis of program
+ counter values. They are not detected by looking at function
+ names.</p>
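+<p>For example (the tool and the cycle depth are only illustrative),
+   to collapse the recursive call cycles produced by a binary tree
+   implementation while profiling with Massif, you might use:</p>
+<pre class="programlisting">
+valgrind --tool=massif --merge-recursive-frames=2 ./myprog</pre>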
+</dd>
+<dt>
+<a name="opt.num-transtab-sectors"></a><span class="term">
+ <code class="option">--num-transtab-sectors=<number> [default: 6
+ for Android platforms, 16 for all others] </code>
+ </span>
+</dt>
+<dd><p>Valgrind translates and instruments your program's machine
+ code in small fragments (basic blocks). The translations are stored in a
+ translation cache that is divided into a number of sections
+ (sectors). If the cache is full, the sector containing the
+ oldest translations is emptied and reused. If these old
+ translations are needed again, Valgrind must re-translate and
+ re-instrument the corresponding machine code, which is
+ expensive. If the "executed instructions" working set of a
+ program is big, increasing the number of sectors may improve
+ performance by reducing the number of re-translations needed.
+ Sectors are allocated on demand. Once allocated, a sector can
+ never be freed, and occupies considerable space, depending on the tool
+ and the value of <code class="option">--avg-transtab-entry-size</code>
+ (about 40 MB per sector for Memcheck). Use the
+ option <code class="option">--stats=yes</code> to obtain precise
+ information about the memory used by a sector and the allocation
+ and recycling of sectors.</p></dd>
+<dt>
+<a name="opt.avg-transtab-entry-size"></a><span class="term">
+ <code class="option">--avg-transtab-entry-size=<number> [default: 0,
+ meaning use tool provided default] </code>
+ </span>
+</dt>
+<dd><p>Average size of translated basic block. This average size
+ is used to dimension the size of a sector.
+ Each tool provides a default value to be used.
+ If this default value is too small, the translation sectors
+ will become full too quickly. If this default value is too big,
+ a significant part of the translation sector memory will be unused.
+ Note that the average size of a basic block translation depends
+ on the tool, and might depend on tool options. For example,
+ the memcheck option <code class="option">--track-origins=yes</code>
+ increases the size of the basic block translations.
+ Use <code class="option">--avg-transtab-entry-size</code> to tune the size of the
+ sectors, either to gain memory or to avoid too many retranslations.
+ </p></dd>
+<dt>
+<a name="opt.aspace-minaddr"></a><span class="term">
+ <code class="option">--aspace-minaddr=<address> [default: depends
+ on the platform] </code>
+ </span>
+</dt>
+<dd><p>To avoid potential conflicts with some system libraries,
+   Valgrind does not use the address space below the
+   <code class="option">--aspace-minaddr</code> value, keeping it
+   reserved in case a library specifically requests memory in this
+   region. So, a "pessimistic" value is guessed by Valgrind
+   depending on the platform. On Linux, by default, Valgrind avoids
+   using the first 64MB even though there is typically no conflict in
+   this entire zone. You can use the
+   option <code class="option">--aspace-minaddr</code> to let your
+   memory-hungry application benefit from more of this lower memory.
+   On the other hand, if you encounter a conflict, increasing the
+   aspace-minaddr value might solve it. Conflicts will typically
+   manifest themselves as mmap failures in the low range of the
+   address space. The
+   provided <code class="computeroutput">address</code> must be page
+   aligned and must be equal to or greater than 0x1000 (4KB). To find the
+ default value on your platform, do something such as
+ <code class="computeroutput">valgrind -d -d date 2>&1 | grep -i minaddr</code>.
+ Values lower than 0x10000 (64KB) are known to create problems
+ on some distributions.
+ </p></dd>
+<dt>
+<a name="opt.valgrind-stacksize"></a><span class="term">
+ <code class="option">--valgrind-stacksize=<number> [default: 1MB] </code>
+ </span>
+</dt>
+<dd>
+<p>For each thread, Valgrind needs its own 'private' stack.
+   The default size for these stacks is generously dimensioned and so
+   should be sufficient in most cases. If the size is too small,
+   Valgrind will segfault. Before segfaulting, Valgrind might produce
+   a warning when approaching the limit.
+</p>
+<p>
+   Use the option <code class="option">--valgrind-stacksize</code> if such an (unlikely)
+   warning is produced, or if Valgrind dies due to a segmentation violation.
+ Such segmentation violations have been seen when demangling huge C++
+ symbols.
+ </p>
+<p>If your application uses many threads and needs a lot of memory, you can
+ gain some memory by reducing the size of these Valgrind stacks using
+ the option <code class="option">--valgrind-stacksize</code>.
+ </p>
+</dd>
+<dt>
+<a name="opt.show-emwarns"></a><span class="term">
+ <code class="option">--show-emwarns=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>When enabled, Valgrind will emit warnings about its CPU
+ emulation in certain cases. These are usually not
+ interesting.</p></dd>
+<dt>
+<a name="opt.require-text-symbol"></a><span class="term">
+ <code class="option">--require-text-symbol=:sonamepatt:fnnamepatt</code>
+ </span>
+</dt>
+<dd>
+<p>When a shared object whose soname
+ matches <code class="varname">sonamepatt</code> is loaded into the
+ process, examine all the text symbols it exports. If none of
+ those match <code class="varname">fnnamepatt</code>, print an error
+ message and abandon the run. This makes it possible to ensure
+ that the run does not continue unless a given shared object
+ contains a particular function name.
+ </p>
+<p>
+ Both <code class="varname">sonamepatt</code> and
+ <code class="varname">fnnamepatt</code> can be written using the usual
+ <code class="varname">?</code> and <code class="varname">*</code> wildcards. For
+ example: <code class="varname">":*libc.so*:foo?bar"</code>. You may use
+ characters other than a colon to separate the two patterns. It
+ is only important that the first character and the separator
+ character are the same. For example, the above example could
+ also be written <code class="varname">"Q*libc.so*Qfoo?bar"</code>.
+ Multiple <code class="varname"> --require-text-symbol</code> flags are
+ allowed, in which case shared objects that are loaded into
+ the process will be checked against all of them.
+ </p>
+<p>
+ The purpose of this is to support reliable usage of marked-up
+ libraries. For example, suppose we have a version of GCC's
+ <code class="varname">libgomp.so</code> which has been marked up with
+ annotations to support Helgrind. It is only too easy and
+ confusing to load the wrong, un-annotated
+ <code class="varname">libgomp.so</code> into the application. So the idea
+ is: add a text symbol in the marked-up library, for
+ example <code class="varname">annotated_for_helgrind_3_6</code>, and then
+ give the flag
+ <code class="varname">--require-text-symbol=:*libgomp*so*:annotated_for_helgrind_3_6</code>
+ so that when <code class="varname">libgomp.so</code> is loaded, Valgrind
+ scans its symbol table, and if the symbol isn't present the run
+ is aborted, rather than continuing silently with the
+ un-marked-up library. Note that you should put the entire flag
+   in quotes to stop shells expanding the <code class="varname">*</code>
+ and <code class="varname">?</code> wildcards.
+ </p>
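+<p>Putting the pieces together, the <code class="varname">libgomp.so</code>
+   example above might be invoked as follows (the tool and program
+   name are only placeholders); note the quotes around the flag:</p>
+<pre class="programlisting">
+valgrind --tool=helgrind \
+   "--require-text-symbol=:*libgomp*so*:annotated_for_helgrind_3_6" ./myprog</pre>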
+</dd>
+<dt>
+<a name="opt.soname-synonyms"></a><span class="term">
+ <code class="option">--soname-synonyms=syn1=pattern1,syn2=pattern2,...</code>
+ </span>
+</dt>
+<dd>
+<p>When a shared library is loaded, Valgrind checks for
+ functions in the library that must be replaced or wrapped. For
+ example, Memcheck replaces some string and memory functions
+ (strchr, strlen, strcpy, memchr, memcpy, memmove, etc.) with its
+ own versions. Such replacements are normally done only in shared
+ libraries whose soname matches a predefined soname pattern (e.g.
+   <code class="varname">libc.so*</code> on Linux). By default, no
+ replacement is done for a statically linked binary or for
+ alternative libraries, except for the allocation functions
+ (malloc, free, calloc, memalign, realloc, operator new, operator
+ delete, etc.) Such allocation functions are intercepted by
+ default in any shared library or in the executable if they are
+ exported as global symbols. This means that if a replacement
+ allocation library such as tcmalloc is found, its functions are
+ also intercepted by default.
+
+   In some cases, <code class="option">--soname-synonyms</code> can be
+   used to specify one additional synonym pattern, giving flexibility
+   in the replacement, or to prevent interception of all public
+   allocation symbols.</p>
+<p>Currently, this flexibility is only allowed for the
+ malloc related functions, using the
+ synonym <code class="varname">somalloc</code>. This synonym is usable for
+ all tools doing standard replacement of malloc related functions
+ (e.g. memcheck, massif, drd, helgrind, exp-dhat, exp-sgcheck).
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>Alternate malloc library: to replace the malloc
+ related functions in a specific alternate library with
+ soname <code class="varname">mymalloclib.so</code> (and not in any
+ others), give the
+ option <code class="option">--soname-synonyms=somalloc=mymalloclib.so</code>.
+   A pattern can be used to match multiple library sonames.
+ For
+ example, <code class="option">--soname-synonyms=somalloc=*tcmalloc*</code>
+ will match the soname of all variants of the tcmalloc
+ library (native, debug, profiled, ... tcmalloc
+ variants). </p>
+<p>Note: the soname of an ELF shared library can be
+   retrieved using the readelf utility, as illustrated in the
+   sketch after this list. </p>
+</li>
+<li class="listitem"><p>Replacements in a statically linked library are done
+ by using the <code class="varname">NONE</code> pattern. For example,
+ if you link with <code class="varname">libtcmalloc.a</code>, and only
+ want to intercept the malloc related functions in the
+ executable (and standard libraries) themselves, but not any
+ other shared libraries, you can give the
+ option <code class="option">--soname-synonyms=somalloc=NONE</code>.
+ Note that a NONE pattern will match the main executable and
+ any shared library having no soname. </p></li>
+<li class="listitem"><p>To run a "default" Firefox build for Linux, in which
+ JEMalloc is linked in to the main executable,
+ use <code class="option">--soname-synonyms=somalloc=NONE</code>.
+ </p></li>
+<li class="listitem"><p>To only intercept allocation symbols in the default
+ system libraries, but not in any other shared library or the
+ executable defining public malloc or operator new related
+ functions use a non-existing library name
+ like <code class="option">--soname-synonyms=somalloc=nouserintercepts</code>
+ (where <code class="varname">nouserintercepts</code> can be any
+ non-existing library name).
+ </p></li>
+<li class="listitem"><p>The shared library of the dynamic (runtime) linker is excluded from
+   the search for global public symbols, such as those for the malloc
+   related functions (identified by the <code class="varname">somalloc</code> synonym).
+ </p></li>
+</ul></div>
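+<p>As a short sketch (the library and program names are only
+   examples), you could first check the soname of a candidate malloc
+   library with readelf, then tell Valgrind to intercept it:</p>
+<pre class="programlisting">
+readelf -d libtcmalloc_minimal.so | grep SONAME
+valgrind "--soname-synonyms=somalloc=*tcmalloc*" ./myprog</pre>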
+</dd>
+</dl>
+</div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.debugopts"></a>2.6.6. Debugging Options</h3></div></div></div>
+<p><a name="debug.opts.para"></a>There are also some options for debugging
+Valgrind itself. You shouldn't need to use them in the normal run of
+things. If you wish to see the list, use the
+<code class="option">--help-debug</code> option.</p>
+<p>If you wish to debug your program rather than debugging
+Valgrind itself, then you should use the options
+<code class="option">--vgdb=yes</code> or <code class="option">--vgdb=full</code>.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.defopts"></a>2.6.7. Setting Default Options</h3></div></div></div>
+<p>Note that Valgrind also reads options from three places:</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>The file <code class="computeroutput">~/.valgrindrc</code></p></li>
+<li class="listitem"><p>The environment variable
+ <code class="computeroutput">$VALGRIND_OPTS</code></p></li>
+<li class="listitem"><p>The file <code class="computeroutput">./.valgrindrc</code></p></li>
+</ol></div>
+<p>These are processed in the given order, before the
+command-line options. Options processed later override those
+processed earlier; for example, options in
+<code class="computeroutput">./.valgrindrc</code> will take
+precedence over those in
+<code class="computeroutput">~/.valgrindrc</code>.
+</p>
+<p>Please note that the <code class="computeroutput">./.valgrindrc</code>
+file is ignored if it is marked as world writeable or not owned
+by the current user. This is because the
+<code class="computeroutput">./.valgrindrc</code> can contain options that are
+potentially harmful or can be used by a local attacker to execute code under
+your user account.
+</p>
+<p>Any tool-specific options put in
+<code class="computeroutput">$VALGRIND_OPTS</code> or the
+<code class="computeroutput">.valgrindrc</code> files should be
+prefixed with the tool name and a colon. For example, if you
+want Memcheck to always do leak checking, you can put the
+following entry in <code class="literal">~/.valgrindrc</code>:</p>
+<pre class="programlisting">
+--memcheck:leak-check=yes</pre>
+<p>This will be ignored if any tool other than Memcheck is
+run. Without the <code class="computeroutput">memcheck:</code>
+part, this will cause problems if you select other tools that
+don't understand
+<code class="option">--leak-check=yes</code>.</p>
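+<p>As an illustrative sketch (the particular options chosen are only
+   examples), a <code class="literal">~/.valgrindrc</code> combining a
+   core option with a tool-specific one could contain:</p>
+<pre class="programlisting">
+--num-callers=30
+--memcheck:leak-check=yes</pre>
+<p>and the same effect for a single shell session could be had with,
+   for example:</p>
+<pre class="programlisting">
+export VALGRIND_OPTS="--num-callers=30 --memcheck:leak-check=yes"</pre>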
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.pthreads"></a>2.7. Support for Threads</h2></div></div></div>
+<p>Threaded programs are fully supported.</p>
+<p>The main thing to point out with respect to threaded programs is
+that your program will use the native threading library, but Valgrind
+serialises execution so that only one (kernel) thread is running at a
+time. This approach avoids the horrible problems of implementing a
+truly multithreaded version of Valgrind, but it does
+mean that threaded apps never use more than one CPU simultaneously,
+even if you have a multiprocessor or multicore machine.</p>
+<p>Valgrind doesn't schedule the threads itself. It merely ensures
+that only one thread runs at once, using a simple locking scheme. The
+actual thread scheduling remains under control of the OS kernel. What
+this does mean, though, is that your program will see very different
+scheduling when run on Valgrind than it does when running normally.
+This is both because Valgrind is serialising the threads, and because
+the code runs so much slower than normal.</p>
+<p>This difference in scheduling may cause your program to behave
+differently, if you have some kind of concurrency, critical race,
+locking, or similar bugs. In that case you might consider using the
+tools Helgrind and/or DRD to track them down.</p>
+<p>On Linux, Valgrind also supports direct use of the
+<code class="computeroutput">clone</code> system call,
+<code class="computeroutput">futex</code> and so on.
+<code class="computeroutput">clone</code> is supported where either
+everything is shared (a thread) or nothing is shared (fork-like); partial
+sharing will fail.
+</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-core.pthreads_perf_sched"></a>2.7.1. Scheduling and Multi-Thread Performance</h3></div></div></div>
+<p>A thread executes code only when it holds the abovementioned
+lock. After executing some number of instructions, the running thread
+will release the lock. All threads ready to run will then compete to
+acquire the lock.</p>
+<p>The <code class="option">--fair-sched</code> option controls the locking mechanism
+used to serialise thread execution.</p>
+<p>The default pipe based locking mechanism
+(<code class="option">--fair-sched=no</code>) is available on all
+platforms. Pipe based locking does not guarantee fairness between
+threads: it is quite likely that a thread that has just released the
+lock reacquires it immediately, even though other threads are ready to
+run. When using pipe based locking, different runs of the same
+multithreaded application might give very different thread
+scheduling.</p>
+<p>An alternative locking mechanism, based on futexes, is available
+on some platforms. If available, it is activated
+by <code class="option">--fair-sched=yes</code> or
+<code class="option">--fair-sched=try</code>. Futex based locking ensures
+fairness (round-robin scheduling) between threads: if multiple threads
+are ready to run, the lock will be given to the thread which first
+requested the lock. Note that a thread which is blocked in a system
+call (e.g. in a blocking read system call) has not (yet) requested the
+lock: such a thread requests the lock only after the system call is
+finished.</p>
+<p> The fairness of the futex based locking produces better
+reproducibility of thread scheduling for different executions of a
+multithreaded application. This better reproducibility is particularly
+helpful when using Helgrind or DRD.</p>
+<p>Valgrind's use of thread serialisation implies that only one
+thread at a time may run. On a multiprocessor/multicore system, the
+running thread is assigned to one of the CPUs by the OS kernel
+scheduler. When a thread acquires the lock, sometimes the thread will
+be assigned to the same CPU as the thread that just released the
+lock. Sometimes, the thread will be assigned to another CPU. When
+using pipe based locking, the thread that just acquired the lock
+will usually be scheduled on the same CPU as the thread that just
+released the lock. With the futex based mechanism, the thread that
+just acquired the lock will more often be scheduled on another
+CPU.</p>
+<p>Valgrind's thread serialisation and CPU assignment by the OS
+kernel scheduler can interact badly with the CPU frequency scaling
+available on many modern CPUs. To decrease power consumption, the
+frequency of a CPU or core is automatically decreased if the CPU/core
+has not been used recently. If the OS kernel often assigns the thread
+which just acquired the lock to another CPU/core, it is quite likely
+that this CPU/core is currently at a low frequency. The frequency of
+this CPU will be increased after some time. However, during this
+time, the (only) running thread will have run at the low frequency.
+Once this thread has run for some time, it will release the lock.
+Another thread will acquire this lock, and might be scheduled again on
+another CPU whose clock frequency was decreased in the
+meantime.</p>
+<p>The futex based locking causes threads to change CPUs/cores more
+often. So, if CPU frequency scaling is activated, the futex based
+locking might decrease significantly the performance of a
+multithreaded app running under Valgrind. Performance losses of up to
+50% have been observed, compared to running on a machine for which CPU
+frequency scaling has been disabled. The pipe based locking scheme
+also interacts badly with CPU frequency scaling, with performance
+losses in the range of 10 to 20% having been observed.
+<p>To avoid such performance degradation, you should indicate to
+the kernel that all CPUs/cores should always run at maximum clock
+speed. Depending on your Linux distribution, CPU frequency scaling
+may be controlled using a graphical interface or using command line
+such as
+<code class="computeroutput">cpufreq-selector</code> or
+<code class="computeroutput">cpufreq-set</code>.
+</p>
+<p>An alternative way to avoid these problems is to tell the
+OS scheduler to tie a Valgrind process to a specific (fixed) CPU using the
+<code class="computeroutput">taskset</code> command. This should ensure
+that the selected CPU does not fall below its maximum frequency
+setting so long as any thread of the program has work to do.
+</p>
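+<p>As a concrete sketch (the CPU number, governor tool and program
+   name are only examples, and setting the governor typically
+   requires root privileges), you might pin the frequency of one core
+   and tie the Valgrind process to it as follows:</p>
+<pre class="programlisting">
+cpufreq-set -c 0 -g performance
+taskset -c 0 valgrind --tool=helgrind ./myprog</pre>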
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.signals"></a>2.8. Handling of Signals</h2></div></div></div>
+<p>Valgrind has a fairly complete signal implementation. It should be
+able to cope with any POSIX-compliant use of signals.</p>
+<p>If you're using signals in clever ways (for example, catching
+SIGSEGV, modifying page state and restarting the instruction), you're
+probably relying on precise exceptions. In this case, you will need
+to use <code class="option">--vex-iropt-register-updates=allregs-at-mem-access</code>
+or <code class="option">--vex-iropt-register-updates=allregs-at-each-insn</code>.
+</p>
+<p>If your program dies as a result of a fatal core-dumping signal,
+Valgrind will generate its own core file
+(<code class="computeroutput">vgcore.NNNNN</code>) containing your program's
+state. You may use this core file for post-mortem debugging with GDB or
+similar. (Note: it will not generate a core if your core dump size limit is
+0.) At the time of writing the core dumps do not include all the floating
+point register information.</p>
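+<p>A typical post-mortem workflow (the program name and PID are only
+   placeholders) is to ensure the core dump size limit is not zero,
+   run the program under Valgrind, and then load the resulting core
+   file into GDB:</p>
+<pre class="programlisting">
+ulimit -c unlimited
+valgrind ./myprog
+gdb ./myprog vgcore.12345</pre>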
+<p>In the unlikely event that Valgrind itself crashes, the operating system
+will create a core dump in the usual way.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.install"></a>2.9. Building and Installing Valgrind</h2></div></div></div>
+<p>We use the standard Unix
+<code class="computeroutput">./configure</code>,
+<code class="computeroutput">make</code>, <code class="computeroutput">make
+install</code> mechanism. Once you have completed
+<code class="computeroutput">make install</code> you may then want
+to run the regression tests
+with <code class="computeroutput">make regtest</code>.
+</p>
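+<p>A minimal build sketch (the installation prefix is just an
+   example) therefore looks like:</p>
+<pre class="programlisting">
+./configure --prefix=$HOME/valgrind-install
+make
+make install
+make regtest</pre>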
+<p>In addition to the usual
+<code class="option">--prefix=/path/to/install/tree</code>, there are three
+ options which affect how Valgrind is built:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="option">--enable-inner</code></p>
+<p>This builds Valgrind with some special magic hacks which make
+ it possible to run it on a standard build of Valgrind (what the
+ developers call "self-hosting"). Ordinarily you should not use
+ this option as various kinds of safety checks are disabled.
+ </p>
+</li>
+<li class="listitem">
+<p><code class="option">--enable-only64bit</code></p>
+<p><code class="option">--enable-only32bit</code></p>
+<p>On 64-bit platforms (amd64-linux, ppc64-linux,
+ amd64-darwin), Valgrind is by default built in such a way that
+ both 32-bit and 64-bit executables can be run. Sometimes this
+ cleverness is a problem for a variety of reasons. These two
+ options allow for single-target builds in this situation. If you
+ issue both, the configure script will complain. Note they are
+ ignored on 32-bit-only platforms (x86-linux, ppc32-linux,
+ arm-linux, x86-darwin).
+ </p>
+</li>
+</ul></div>
+<p>
+</p>
+<p>The <code class="computeroutput">configure</code> script tests
+the version of the X server currently indicated by the current
+<code class="computeroutput">$DISPLAY</code>. This is a known bug.
+The intention was to detect the version of the current X
+client libraries, so that correct suppressions could be selected
+for them, but instead the test checks the server version. This
+is just plain wrong.</p>
+<p>If you are building a binary package of Valgrind for
+distribution, please read <code class="literal">README_PACKAGERS</code>
+<a class="xref" href="dist.readme-packagers.html" title="7. README_PACKAGERS">Readme Packagers</a>. It contains some
+important information.</p>
+<p>Apart from that, there's not much excitement here. Let us
+know if you have build problems.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.problems"></a>2.10. If You Have Problems</h2></div></div></div>
+<p>Contact us at <a class="ulink" href="http://www.valgrind.org/" target="_top">http://www.valgrind.org/</a>.</p>
+<p>See <a class="xref" href="manual-core.html#manual-core.limits" title="2.11. Limitations">Limitations</a> for the known
+limitations of Valgrind, and for a list of programs which are
+known not to work on it.</p>
+<p>All parts of the system make heavy use of assertions and
+internal self-checks. They are permanently enabled, and we have no
+plans to disable them. If one of them breaks, please mail us!</p>
+<p>If you get an assertion failure
+in <code class="filename">m_mallocfree.c</code>, this may have happened because
+your program wrote off the end of a heap block, or before its
+beginning, thus corrupting heap metadata. Valgrind hopefully will have
+emitted a message to that effect before dying in this way.</p>
+<p>Read the <a class="xref" href="FAQ.html" title="Valgrind FAQ">Valgrind FAQ</a> for more advice about common problems,
+crashes, etc.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.limits"></a>2.11. Limitations</h2></div></div></div>
+<p>The following list of limitations seems long. However, most
+programs actually work fine.</p>
+<p>Valgrind will run programs on the supported platforms
+subject to the following constraints:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>On Linux, Valgrind determines at startup the size of the 'brk
+ segment' using the RLIMIT_DATA rlim_cur, with a minimum of 1 MB and
+ a maximum of 8 MB. Valgrind outputs a message each time a program
+ tries to extend the brk segment beyond the size determined at
+ startup. Most programs will work properly with this limit,
+ typically by switching to the use of mmap to get more memory.
+ If your program really needs a big brk segment, you must change
+ the 8 MB hardcoded limit and recompile Valgrind.
+ </p></li>
+<li class="listitem"><p>On x86 and amd64, there is no support for 3DNow!
+ instructions. If the translator encounters these, Valgrind will
+ generate a SIGILL when the instruction is executed. Apart from
+ that, on x86 and amd64, essentially all instructions are supported,
+ up to and including AVX and AES in 64-bit mode and SSSE3 in 32-bit
+ mode. 32-bit mode does in fact support the bare minimum SSE4
+ instructions needed to run programs on MacOSX 10.6 on 32-bit
+ targets.
+ </p></li>
+<li class="listitem"><p>On ppc32 and ppc64, almost all integer, floating point and
+ Altivec instructions are supported. Specifically: integer and FP
+ insns that are mandatory for PowerPC, the "General-purpose
+ optional" group (fsqrt, fsqrts, stfiwx), the "Graphics optional"
+ group (fre, fres, frsqrte, frsqrtes), and the Altivec (also known
+ as VMX) SIMD instruction set, are supported. Also, instructions
+ from the Power ISA 2.05 specification, as present in POWER6 CPUs,
+ are supported.</p></li>
+<li class="listitem"><p>On ARM, essentially the entire ARMv7-A instruction set
+ is supported, in both ARM and Thumb mode. ThumbEE and Jazelle are
+ not supported. NEON, VFPv3 and ARMv6 media support is fairly
+ complete.
+ </p></li>
+<li class="listitem"><p>If your program does its own memory management, rather than
+ using malloc/new/free/delete, it should still work, but Memcheck's
+ error checking won't be so effective. If you describe your
+ program's memory management scheme using "client requests" (see
+ <a class="xref" href="manual-core-adv.html#manual-core-adv.clientreq" title="3.1. The Client Request mechanism">The Client Request mechanism</a>), Memcheck can do
+ better. Nevertheless, using malloc/new and free/delete is still
+ the best approach.</p></li>
+<li class="listitem"><p>Valgrind's signal simulation is not as robust as it could be.
+ Basic POSIX-compliant sigaction and sigprocmask functionality is
+ supplied, but it's conceivable that things could go badly awry if you
+ do weird things with signals. Workaround: don't. Programs that do
+ non-POSIX signal tricks are in any case inherently unportable, so
+ should be avoided if possible.</p></li>
+<li class="listitem"><p>Machine instructions, and system calls, have been implemented
+ on demand. So it's possible, although unlikely, that a program will
+ fall over with a message to that effect. If this happens, please
+ report all the details printed out, so we can try and implement the
+ missing feature.</p></li>
+<li class="listitem"><p>Memory consumption of your program is majorly increased
+ whilst running under Valgrind's Memcheck tool. This is due to the
+ large amount of administrative information maintained behind the
+ scenes. Another cause is that Valgrind dynamically translates the
+ original executable. Translated, instrumented code is 12-18 times
+ larger than the original so you can easily end up with 150+ MB of
+ translations when running (eg) a web browser.</p></li>
+<li class="listitem">
+<p>Valgrind can handle dynamically-generated code just fine. If
+   you regenerate code over the top of old code (i.e. at the same
+   memory addresses) and the code is on the stack, Valgrind will
+   realise the code has changed, and work correctly. This is
+   necessary to handle the trampolines GCC uses to implement nested
+   functions. If you regenerate code somewhere other than the stack,
+   and you are running on a 32- or 64-bit x86 CPU, you will need to
+ use the <code class="option">--smc-check=all</code> option, and Valgrind will
+ run more slowly than normal. Or you can add client requests that
+ tell Valgrind when your program has overwritten code.
+ </p>
+<p> On other platforms (ARM, PowerPC) Valgrind observes and
+ honours the cache invalidation hints that programs are obliged to
+ emit when they generate new code, and so self-modifying-code support should
+ work automatically, without the need
+ for <code class="option">--smc-check=all</code>.</p>
+</li>
+<li class="listitem">
+<p>Valgrind has the following limitations
+ in its implementation of x86/AMD64 floating point relative to
+ IEEE754.</p>
+<p>Precision: There is no support for 80 bit arithmetic.
+ Internally, Valgrind represents all such "long double" numbers in 64
+ bits, and so there may be some differences in results. Whether or
+ not this is critical remains to be seen. Note, the x86/amd64
+ fldt/fstpt instructions (read/write 80-bit numbers) are correctly
+ simulated, using conversions to/from 64 bits, so that in-memory
+ images of 80-bit numbers look correct if anyone wants to see.</p>
+<p>The impression observed from many FP regression tests is that
+ the accuracy differences aren't significant. Generally speaking, if
+ a program relies on 80-bit precision, there may be difficulties
+ porting it to non x86/amd64 platforms which only support 64-bit FP
+ precision. Even on x86/amd64, the program may get different results
+ depending on whether it is compiled to use SSE2 instructions (64-bits
+ only), or x87 instructions (80-bit). The net effect is to make FP
+ programs behave as if they had been run on a machine with 64-bit IEEE
+ floats, for example PowerPC. On amd64 FP arithmetic is done by
+ default on SSE2, so amd64 looks more like PowerPC than x86 from an FP
+ perspective, and there are far fewer noticeable accuracy differences
+ than with x86.</p>
+<p>Rounding: Valgrind does observe the 4 IEEE-mandated rounding
+ modes (to nearest, to +infinity, to -infinity, to zero) for the
+ following conversions: float to integer, integer to float where
+ there is a possibility of loss of precision, and float-to-float
+ rounding. For all other FP operations, only the IEEE default mode
+ (round to nearest) is supported.</p>
+<p>Numeric exceptions in FP code: IEEE754 defines five types of
+ numeric exception that can happen: invalid operation (sqrt of
+ negative number, etc), division by zero, overflow, underflow,
+ inexact (loss of precision).</p>
+<p>For each exception, two courses of action are defined by IEEE754:
+ either (1) a user-defined exception handler may be called, or (2) a
+ default action is defined, which "fixes things up" and allows the
+ computation to proceed without throwing an exception.</p>
+<p>Currently Valgrind only supports the default fixup actions.
+ Again, feedback on the importance of exception support would be
+ appreciated.</p>
+<p>When Valgrind detects that the program is trying to exceed any
+ of these limitations (setting exception handlers, rounding mode, or
+ precision control), it can print a message giving a traceback of
+ where this has happened, and continue execution. This behaviour used
+ to be the default, but the messages are annoying and so showing them
+ is now disabled by default. Use <code class="option">--show-emwarns=yes</code> to see
+ them.</p>
+<p>The above limitations define precisely the IEEE754 'default'
+ behaviour: default fixup on all exceptions, round-to-nearest
+ operations, and 64-bit precision.</p>
+</li>
+<li class="listitem">
+<p>Valgrind has the following limitations in
+ its implementation of x86/AMD64 SSE2 FP arithmetic, relative to
+ IEEE754.</p>
+<p>Essentially the same: no exceptions, and limited observance of
+ rounding mode. Also, SSE2 has control bits which make it treat
+ denormalised numbers as zero (DAZ) and a related action, flush
+ denormals to zero (FTZ). Both of these cause SSE2 arithmetic to be
+ less accurate than IEEE requires. Valgrind detects, ignores, and can
+ warn about, attempts to enable either mode.</p>
+</li>
+<li class="listitem">
+<p>Valgrind has the following limitations in
+ its implementation of ARM VFPv3 arithmetic, relative to
+ IEEE754.</p>
+<p>Essentially the same: no exceptions, and limited observance
+ of rounding mode. Also, switching the VFP unit into vector mode
+ will cause Valgrind to abort the program -- it has no way to
+ emulate vector uses of VFP at a reasonable performance level. This
+ is no big deal given that non-scalar uses of VFP instructions are
+ in any case deprecated.</p>
+</li>
+<li class="listitem">
+<p>Valgrind has the following limitations
+ in its implementation of PPC32 and PPC64 floating point
+ arithmetic, relative to IEEE754.</p>
+<p>Scalar (non-Altivec): Valgrind provides a bit-exact emulation of
+ all floating point instructions, except for "fre" and "fres", which are
+ done more precisely than required by the PowerPC architecture specification.
+ All floating point operations observe the current rounding mode.
+ </p>
+<p>However, fpscr[FPRF] is not set after each operation. That could
+ be done but would give measurable performance overheads, and so far
+ no need for it has been found.</p>
+<p>As on x86/AMD64, IEEE754 exceptions are not supported: all floating
+ point exceptions are handled using the default IEEE fixup actions.
+ Valgrind detects, ignores, and can warn about, attempts to unmask
+ the 5 IEEE FP exception kinds by writing to the floating-point status
+ and control register (fpscr).
+ </p>
+<p>Vector (Altivec, VMX): essentially as with x86/AMD64 SSE/SSE2:
+ no exceptions, and limited observance of rounding mode.
+ For Altivec, FP arithmetic
+ is done in IEEE/Java mode, which is more accurate than the Linux default
+ setting. "More accurate" means that denormals are handled properly,
+ rather than simply being flushed to zero.</p>
+</li>
+</ul></div>
+<p>Programs which are known not to work are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>emacs starts up but immediately concludes it is out of
+ memory and aborts. It may be that Memcheck does not provide
+ a good enough emulation of the
+ <code class="computeroutput">mallinfo</code> function.
+ Emacs works fine if you build it to use
+ the standard malloc/free routines.</p></li></ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.example"></a>2.12. An Example Run</h2></div></div></div>
+<p>This is the log for a run of a small program using Memcheck.
+The program is in fact correct, and the reported error is as the
+result of a potentially serious code generation bug in GNU g++
+(snapshot 20010527).</p>
+<pre class="programlisting">
+sewardj@phoenix:~/newmat10$ ~/Valgrind-6/valgrind -v ./bogon
+==25832== Valgrind 0.10, a memory error detector for x86 RedHat 7.1.
+==25832== Copyright (C) 2000-2001, and GNU GPL'd, by Julian Seward.
+==25832== Startup, with flags:
+==25832== --suppressions=/home/sewardj/Valgrind/redhat71.supp
+==25832== reading syms from /lib/ld-linux.so.2
+==25832== reading syms from /lib/libc.so.6
+==25832== reading syms from /mnt/pima/jrs/Inst/lib/libgcc_s.so.0
+==25832== reading syms from /lib/libm.so.6
+==25832== reading syms from /mnt/pima/jrs/Inst/lib/libstdc++.so.3
+==25832== reading syms from /home/sewardj/Valgrind/valgrind.so
+==25832== reading syms from /proc/self/exe
+==25832==
+==25832== Invalid read of size 4
+==25832== at 0x8048724: BandMatrix::ReSize(int,int,int) (bogon.cpp:45)
+==25832== by 0x80487AF: main (bogon.cpp:66)
+==25832== Address 0xBFFFF74C is not stack'd, malloc'd or free'd
+==25832==
+==25832== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
+==25832== malloc/free: in use at exit: 0 bytes in 0 blocks.
+==25832== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
+==25832== For a detailed leak analysis, rerun with: --leak-check=yes
+</pre>
+<p>The GCC folks fixed this about a week before GCC 3.0
+shipped.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-core.warnings"></a>2.13. Warning Messages You Might See</h2></div></div></div>
+<p>Some of these only appear if you run in verbose mode
+(enabled by <code class="option">-v</code>):</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="computeroutput">More than 100 errors detected. Subsequent
+ errors will still be recorded, but in less detail than
+ before.</code></p>
+<p>After 100 different errors have been shown, Valgrind becomes
+ more conservative about collecting them. It then requires only the
+ program counters in the top two stack frames to match when deciding
+ whether or not two errors are really the same one. Prior to this
+ point, the PCs in the top four frames are required to match. This
+ hack has the effect of slowing down the appearance of new errors
+ after the first 100. The 100 constant can be changed by recompiling
+ Valgrind.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">More than 1000 errors detected. I'm not
+ reporting any more. Final error counts may be inaccurate. Go fix
+ your program!</code></p>
+<p>After 1000 different errors have been detected, Valgrind
+ ignores any more. It seems unlikely that collecting even more
+ different ones would be of practical help to anybody, and it avoids
+ the danger that Valgrind spends more and more of its time comparing
+ new errors against an ever-growing collection. As above, the 1000
+ number is a compile-time constant.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">Warning: client switching stacks?</code></p>
+<p>Valgrind spotted such a large change in the stack pointer
+ that it guesses the client is switching to a different stack. At
+ this point it makes a kludgey guess where the base of the new
+ stack is, and sets memory permissions accordingly. At the moment
+ "large change" is defined as a change of more that 2000000 in the
+ value of the stack pointer register. If Valgrind guesses wrong,
+ you may get many bogus error messages following this and/or have
+ crashes in the stack trace recording code. You might avoid these
+ problems by informing Valgrind about the stack bounds using the
+ VALGRIND_STACK_REGISTER client request.</p>
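+<p> For example, a minimal sketch of registering a manually allocated
+ stack; the size and allocation method are placeholders, and the two
+ arguments give the bounds of the new stack:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+#include "valgrind.h"
+
+#define STACK_SIZE (64 * 1024)
+
+int main(void)
+{
+   char *lo = malloc(STACK_SIZE);
+   /* Register the new stack's bounds before switching onto it. */
+   unsigned id = VALGRIND_STACK_REGISTER(lo, lo + STACK_SIZE);
+
+   /* ... switch to the new stack, e.g. via makecontext/swapcontext ... */
+
+   VALGRIND_STACK_DEREGISTER(id);
+   free(lo);
+   return 0;
+}
+</pre>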
+</li>
+<li class="listitem">
+<p><code class="computeroutput">Warning: client attempted to close Valgrind's
+ logfile fd <number></code></p>
+<p>Valgrind doesn't allow the client to close the logfile,
+ because you'd never see any diagnostic information after that point.
+ If you see this message, you may want to use the
+ <code class="option">--log-fd=<number></code> option to specify a
+ different logfile file-descriptor number.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">Warning: noted but unhandled ioctl
+ <number></code></p>
+<p>Valgrind observed a call to one of the vast family of
+ <code class="computeroutput">ioctl</code> system calls, but did not
+ modify its memory status info (because nobody has yet written a
+ suitable wrapper). The call will still have gone through, but you may get
+ spurious errors after this as a result of the non-update of the
+ memory info.</p>
+</li>
+<li class="listitem">
+<p><code class="computeroutput">Warning: set address range perms: large range
+ <number></code></p>
+<p>Diagnostic message, mostly for the benefit of the Valgrind
+ developers, to do with memory permissions.</p>
+</li>
+</ul></div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="manual-intro.html"><< 1. Introduction</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="manual-core-adv.html">3. Using and understanding the Valgrind core: Advanced Topics >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/manual-intro.html b/docs/html/manual-intro.html
new file mode 100644
index 0000000..965c051
--- /dev/null
+++ b/docs/html/manual-intro.html
@@ -0,0 +1,129 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>1. Introduction</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="manual.html" title="Valgrind User Manual">
+<link rel="next" href="manual-core.html" title="2. Using and understanding the Valgrind core">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="manual-core.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="manual-intro"></a>1. Introduction</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="manual-intro.html#manual-intro.overview">1.1. An Overview of Valgrind</a></span></dt>
+<dt><span class="sect1"><a href="manual-intro.html#manual-intro.navigation">1.2. How to navigate this manual</a></span></dt>
+</dl>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-intro.overview"></a>1.1. An Overview of Valgrind</h2></div></div></div>
+<p>Valgrind is an instrumentation framework for building dynamic analysis
+tools. It comes with a set of tools each of which performs some kind of
+debugging, profiling, or similar task that helps you improve your programs.
+Valgrind's architecture is modular, so new tools can be created easily
+and without disturbing the existing structure.</p>
+<p>A number of useful tools are supplied as standard.</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p><span class="command"><strong>Memcheck</strong></span> is a memory error detector. It helps
+ you make your programs, particularly those written in C and C++, more
+ correct.</p></li>
+<li class="listitem"><p><span class="command"><strong>Cachegrind</strong></span> is a cache and branch-prediction
+ profiler. It helps you make your programs run faster.</p></li>
+<li class="listitem"><p><span class="command"><strong>Callgrind</strong></span> is a call-graph generating cache
+ profiler. It has some overlap with Cachegrind, but also gathers some
+ information that Cachegrind does not.</p></li>
+<li class="listitem"><p><span class="command"><strong>Helgrind</strong></span> is a thread error detector.
+ It helps you make your multi-threaded programs more correct.
+ </p></li>
+<li class="listitem"><p><span class="command"><strong>DRD</strong></span> is also a thread error detector. It is
+ similar to Helgrind but uses different analysis techniques and so may
+ find different problems.</p></li>
+<li class="listitem"><p><span class="command"><strong>Massif</strong></span> is a heap profiler. It helps you
+ make your programs use less memory.</p></li>
+<li class="listitem"><p><span class="command"><strong>DHAT</strong></span> is a different kind of heap
+ profiler. It helps you understand issues of block lifetimes,
+ block utilisation, and layout inefficiencies.</p></li>
+<li class="listitem"><p><span class="command"><strong>SGcheck</strong></span> is an experimental tool that can
+ detect overruns of stack and global arrays. Its functionality is
+ complementary to that of Memcheck: SGcheck finds problems that
+ Memcheck can't, and vice versa.</p></li>
+<li class="listitem"><p><span class="command"><strong>BBV</strong></span> is an experimental SimPoint basic block
+ vector generator. It is useful to people doing computer architecture
+ research and development.</p></li>
+</ol></div>
+<p>There are also a couple of minor tools that aren't useful to
+most users: <span class="command"><strong>Lackey</strong></span> is an example tool that illustrates
+some instrumentation basics; and <span class="command"><strong>Nulgrind</strong></span> is the minimal
+Valgrind tool that does no analysis or instrumentation, and is only useful
+for testing purposes.</p>
+<p>Valgrind is closely tied to details of the CPU and operating
+system, and to a lesser extent, the compiler and basic C libraries.
+Nonetheless, it supports a number of widely-used platforms, listed in full
+at <a class="ulink" href="http://www.valgrind.org/" target="_top">http://www.valgrind.org/</a>.</p>
+<p>Valgrind is built via the standard Unix
+<code class="computeroutput">./configure</code>,
+<code class="computeroutput">make</code>, <code class="computeroutput">make
+install</code> process; full details are given in the
+README file in the distribution.</p>
+<p>Valgrind is licensed under the <a class="xref" href="license.gpl.html" title="1. The GNU General Public License"> The GNU General Public License</a>,
+version 2. The <code class="computeroutput">valgrind/*.h</code> headers
+that you may wish to include in your code (eg.
+<code class="filename">valgrind.h</code>, <code class="filename">memcheck.h</code>,
+<code class="filename">helgrind.h</code>, etc.) are
+distributed under a BSD-style license, so you may include them in your
+code without worrying about license conflicts. Some of the PThreads
+test cases, <code class="filename">pth_*.c</code>, are taken from "Pthreads
+Programming" by Bradford Nichols, Dick Buttlar & Jacqueline Proulx
+Farrell, ISBN 1-56592-115-1, published by O'Reilly & Associates,
+Inc.</p>
+<p>If you contribute code to Valgrind, please ensure your
+contributions are licensed as "GPLv2, or (at your option) any later
+version." This is so as to allow the possibility of easily upgrading
+the license to GPLv3 in future. If you want to modify code in the VEX
+subdirectory, please also see the file VEX/HACKING.README in the
+distribution.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-intro.navigation"></a>1.2. How to navigate this manual</h2></div></div></div>
+<p>This manual's structure reflects the structure of Valgrind itself.
+First, we describe the Valgrind core, how to use it, and the options
+it supports. Then, each tool has its own chapter in this manual. You
+only need to read the documentation for the core and for the tool(s) you
+actually use, although you may find it helpful to be at least a little
+bit familiar with what all tools do. If you're new to all this, you probably
+want to run the Memcheck tool and you might find the <a class="xref" href="quick-start.html" title="The Valgrind Quick Start Guide">The Valgrind Quick Start Guide</a> useful.</p>
+<p>Be aware that the core understands some command line options, and
+the tools have their own options which they know about. This means
+there is no central place describing all the options that are
+accepted -- you have to read the options documentation both for
+<a class="xref" href="manual-core.html" title="2. Using and understanding the Valgrind core">Valgrind's core</a> and for the tool you want to use.</p>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="manual.html"><< Valgrind User Manual</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="manual-core.html">2. Using and understanding the Valgrind core >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/manual-writing-tools.html b/docs/html/manual-writing-tools.html
new file mode 100644
index 0000000..59fc84d
--- /dev/null
+++ b/docs/html/manual-writing-tools.html
@@ -0,0 +1,501 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>2. Writing a New Valgrind Tool</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="tech-docs.html" title="Valgrind Technical Documentation">
+<link rel="prev" href="design-impl.html" title="1. The Design and Implementation of Valgrind">
+<link rel="next" href="cl-format.html" title="3. Callgrind Format Specification">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="design-impl.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="tech-docs.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Technical Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="cl-format.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="manual-writing-tools"></a>2. Writing a New Valgrind Tool</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.intro">2.1. Introduction</a></span></dt>
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.writingatool">2.2. Basics</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.howtoolswork">2.2.1. How tools work</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.gettingcode">2.2.2. Getting the code</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.gettingstarted">2.2.3. Getting started</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.writingcode">2.2.4. Writing the code</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.init">2.2.5. Initialisation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.instr">2.2.6. Instrumentation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.fini">2.2.7. Finalisation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.otherinfo">2.2.8. Other Important Information</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.advtopics">2.3. Advanced Topics</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.advice">2.3.1. Debugging Tips</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.suppressions">2.3.2. Suppressions</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.docs">2.3.3. Documentation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.regtests">2.3.4. Regression Tests</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.profiling">2.3.5. Profiling</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.mkhackery">2.3.6. Other Makefile Hackery</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.ifacever">2.3.7. The Core/tool Interface</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.finalwords">2.4. Final Words</a></span></dt>
+</dl>
+</div>
+
+So you want to write a Valgrind tool? Here are some instructions that may
+help.
+
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-writing-tools.intro"></a>2.1. Introduction</h2></div></div></div>
+<p>The key idea behind Valgrind's architecture is the division
+between its <span class="emphasis"><em>core</em></span> and <span class="emphasis"><em>tools</em></span>.</p>
+<p>The core provides the common low-level infrastructure to
+support program instrumentation, including the JIT
+compiler, low-level memory manager, signal handling and a
+thread scheduler. It also provides certain services that
+are useful to some but not all tools, such as support for error
+recording, and support for replacing heap allocation functions such as
+<code class="function">malloc</code>.</p>
+<p>But the core leaves certain operations undefined, which
+must be filled by tools. Most notably, tools define how program
+code should be instrumented. They can also call certain
+functions to indicate to the core that they would like to use
+certain services, or be notified when certain interesting events
+occur. But the core takes care of all the hard work.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-writing-tools.writingatool"></a>2.2. Basics</h2></div></div></div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.howtoolswork"></a>2.2.1. How tools work</h3></div></div></div>
+<p>Tools must define various functions for instrumenting programs
+that are called by Valgrind's core. They are then linked against
+Valgrind's core to define a complete Valgrind tool which will be used
+when the <code class="option">--tool</code> option is used to select it.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.gettingcode"></a>2.2.2. Getting the code</h3></div></div></div>
+<p>To write your own tool, you'll need the Valgrind source code. You'll
+need a check-out of the Subversion repository for the automake/autoconf
+build instructions to work. See the information about how to do a check-out
+from the repository at <a class="ulink" href="http://www.valgrind.org/downloads/repository.html" target="_top">the Valgrind
+website</a>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.gettingstarted"></a>2.2.3. Getting started</h3></div></div></div>
+<p>Valgrind uses GNU <code class="computeroutput">automake</code> and
+<code class="computeroutput">autoconf</code> for the creation of Makefiles
+and configuration. But don't worry, these instructions should be enough
+to get you started even if you know nothing about those tools.</p>
+<p>In what follows, all filenames are relative to Valgrind's
+top-level directory <code class="computeroutput">valgrind/</code>.</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>Choose a name for the tool, and a two-letter abbreviation that can
+ be used as a short prefix. We'll use
+ <code class="computeroutput">foobar</code> and
+ <code class="computeroutput">fb</code> as an example.</p></li>
+<li class="listitem"><p>Make three new directories <code class="filename">foobar/</code>,
+ <code class="filename">foobar/docs/</code> and
+ <code class="filename">foobar/tests/</code>.
+ </p></li>
+<li class="listitem"><p>Create an empty file <code class="filename">foobar/tests/Makefile.am</code>.
+ </p></li>
+<li class="listitem"><p>Copy <code class="filename">none/Makefile.am</code> into
+ <code class="filename">foobar/</code>. Edit it by replacing all
+ occurrences of the strings
+ <code class="computeroutput">"none"</code>,
+ <code class="computeroutput">"nl_"</code> and
+ <code class="computeroutput">"nl-"</code> with
+ <code class="computeroutput">"foobar"</code>,
+ <code class="computeroutput">"fb_"</code> and
+ <code class="computeroutput">"fb-"</code> respectively.</p></li>
+<li class="listitem"><p>Copy <code class="filename">none/nl_main.c</code> into
+ <code class="computeroutput">foobar/</code>, renaming it as
+ <code class="filename">fb_main.c</code>. Edit it by changing the
+ <code class="computeroutput">details</code> lines in
+ <code class="function">nl_pre_clo_init</code> to something appropriate for the
+ tool. These fields are used in the startup message, except for
+ <code class="computeroutput">bug_reports_to</code> which is used if a
+ tool assertion fails. Also, replace the string
+ <code class="computeroutput">"nl_"</code> throughout with
+ <code class="computeroutput">"fb_"</code> again.</p></li>
+<li class="listitem"><p>Edit <code class="filename">Makefile.am</code>, adding the new directory
+ <code class="filename">foobar</code> to the
+ <code class="computeroutput">TOOLS</code> or
+ <code class="computeroutput">EXP_TOOLS</code> variables.</p></li>
+<li class="listitem"><p>Edit <code class="filename">configure.in</code>, adding
+ <code class="filename">foobar/Makefile</code> and
+ <code class="filename">foobar/tests/Makefile</code> to the
+ <code class="computeroutput">AC_OUTPUT</code> list.</p></li>
+<li class="listitem">
+<p>Run:</p>
+<pre class="programlisting">
+ autogen.sh
+ ./configure --prefix=`pwd`/inst
+ make
+ make install</pre>
+<p>It should automake, configure and compile without errors,
+ putting copies of the tool in
+ <code class="filename">foobar/</code> and
+ <code class="filename">inst/lib/valgrind/</code>.</p>
+</li>
+<li class="listitem">
+<p>You can test it with a command like:</p>
+<pre class="programlisting">
+ inst/bin/valgrind --tool=foobar date</pre>
+<p>(almost any program should work;
+ <code class="computeroutput">date</code> is just an example).
+ The output should be something like this:</p>
+<pre class="programlisting">
+ ==738== foobar-0.0.1, a foobarring tool.
+ ==738== Copyright (C) 2002-2009, and GNU GPL'd, by J. Programmer.
+ ==738== Using Valgrind-3.5.0.SVN and LibVEX; rerun with -h for copyright info
+ ==738== Command: date
+ ==738==
+ Tue Nov 27 12:40:49 EST 2007
+ ==738==</pre>
+<p>The tool does nothing except run the program uninstrumented.</p>
+</li>
+</ol></div>
+<p>These steps don't have to be followed exactly -- you can choose
+different names for your source files, and use a different
+<code class="option">--prefix</code> for
+<code class="computeroutput">./configure</code>.</p>
+<p>Now that we've set up, built and tested the simplest possible tool,
+on to the interesting stuff...</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.writingcode"></a>2.2.4. Writing the code</h3></div></div></div>
+<p>A tool must define at least these four functions:</p>
+<pre class="programlisting">
+ pre_clo_init()
+ post_clo_init()
+ instrument()
+ fini()</pre>
+<p>The names can be different to the above, but these are the usual
+names. The first one is registered using the macro
+<code class="computeroutput">VG_DETERMINE_INTERFACE_VERSION</code>.
+The last three are registered using the
+<code class="computeroutput">VG_(basic_tool_funcs)</code> function.</p>
+<p>In addition, if a tool wants to use some of the optional services
+provided by the core, it may have to define other functions and tell the
+core about them.</p>
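+<p>Putting the pieces above together, the usual skeleton of
+ <code class="filename">fb_main.c</code> (modelled on Nulgrind's
+ <code class="filename">none/nl_main.c</code>) looks roughly like the sketch
+ below. The exact parameter list of the instrumentation function should be
+ checked against <code class="filename">include/pub_tool_tooliface.h</code>
+ for the Valgrind version you are building against.</p>
+<pre class="programlisting">
+#include "pub_tool_basics.h"
+#include "pub_tool_tooliface.h"
+
+static void fb_post_clo_init(void)
+{
+   /* command-line options have now been processed */
+}
+
+static IRSB* fb_instrument ( VgCallbackClosure* closure, IRSB* sb_in,
+                             const VexGuestLayout* layout,
+                             const VexGuestExtents* vge,
+                             const VexArchInfo* archinfo_host,
+                             IRType gWordTy, IRType hWordTy )
+{
+   return sb_in;   /* no instrumentation: run each block unchanged */
+}
+
+static void fb_fini(Int exitcode)
+{
+   /* print final results here */
+}
+
+static void fb_pre_clo_init(void)
+{
+   VG_(details_name)            ("foobar");
+   VG_(details_version)         (NULL);
+   VG_(details_description)     ("an example Valgrind tool");
+   VG_(details_copyright_author)("Copyright (C) 2016, and GNU GPL'd, by J. Programmer.");
+   VG_(details_bug_reports_to)  (VG_BUGS_TO);
+
+   VG_(basic_tool_funcs)(fb_post_clo_init, fb_instrument, fb_fini);
+}
+
+VG_DETERMINE_INTERFACE_VERSION(fb_pre_clo_init)
+</pre>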
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.init"></a>2.2.5. Initialisation</h3></div></div></div>
+<p>Most of the initialisation should be done in
+<code class="function">pre_clo_init</code>. Only use
+<code class="function">post_clo_init</code> if a tool provides command line
+options and must do some initialisation after option processing takes
+place (<code class="computeroutput">"clo"</code> stands for "command line
+options").</p>
+<p>First of all, various "details" need to be set for a tool, using
+the functions <code class="function">VG_(details_*)</code>. Some are all
+compulsory, some aren't. Some are used when constructing the startup
+message, <code class="computeroutput">detail_bug_reports_to</code> is used
+if <code class="computeroutput">VG_(tool_panic)</code> is ever called, or
+a tool assertion fails. Others have other uses.</p>
+<p>Second, various "needs" can be set for a tool, using the functions
+<code class="function">VG_(needs_*)</code>. They are mostly booleans, and can
+be left untouched (they default to <code class="varname">False</code>). They
+determine whether a tool can do various things such as: record, report
+and suppress errors; process command line options; wrap system calls;
+record extra information about heap blocks; etc.</p>
+<p>For example, if a tool wants the core's help in recording and
+reporting errors, it must call
+<code class="function">VG_(needs_tool_errors)</code> and provide definitions of
+eight functions for comparing errors, printing out errors, reading
+suppressions from a suppressions file, etc. While writing these
+functions requires some work, it's much less than doing error handling
+from scratch because the core is doing most of the work.
+</p>
+<p>Third, the tool can indicate which events in core it wants to be
+notified about, using the functions <code class="function">VG_(track_*)</code>.
+These include things such as heap blocks being allocated, the stack
+pointer changing, a mutex being locked, etc. If a tool wants to know
+about this, it should provide a pointer to a function, which will be
+called when that event happens.</p>
+<p>For example, if the tool wants to be notified when a new heap block
+is allocated, it should call
+<code class="function">VG_(track_new_mem_heap)</code> with an appropriate
+function pointer, and the assigned function will be called each time
+this happens.</p>
+<p>More information about "details", "needs" and "trackable events"
+can be found in
+<code class="filename">include/pub_tool_tooliface.h</code>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.instr"></a>2.2.6. Instrumentation</h3></div></div></div>
+<p><code class="function">instrument</code> is the interesting one. It
+allows you to instrument <span class="emphasis"><em>VEX IR</em></span>, which is
+Valgrind's RISC-like intermediate language. VEX IR is described
+in the comments of the header file
+<code class="filename">VEX/pub/libvex_ir.h</code>.</p>
+<p>The easiest way to instrument VEX IR is to insert calls to C
+functions when interesting things happen. See the tool "Lackey"
+(<code class="filename">lackey/lk_main.c</code>) for a simple example of this, or
+Cachegrind (<code class="filename">cachegrind/cg_main.c</code>) for a more
+complex example.</p>
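+<p>As a sketch of the "insert calls to C functions" approach, the following
+ fleshes out the <code class="function">fb_instrument</code> skeleton shown
+ earlier so that it counts guest instructions, in the style of Lackey. The
+ helper and counter names are ours; the IR-building functions
+ (<code class="function">deepCopyIRSBExceptStmts</code>,
+ <code class="function">unsafeIRDirty_0_N</code>, and so on) are declared in
+ <code class="filename">VEX/pub/libvex_ir.h</code>.</p>
+<pre class="programlisting">
+static ULong n_guest_instrs = 0;
+
+static void fb_count_instr(void)
+{
+   n_guest_instrs++;
+}
+
+static IRSB* fb_instrument ( VgCallbackClosure* closure, IRSB* sb_in,
+                             const VexGuestLayout* layout,
+                             const VexGuestExtents* vge,
+                             const VexArchInfo* archinfo_host,
+                             IRType gWordTy, IRType hWordTy )
+{
+   IRSB* sb_out = deepCopyIRSBExceptStmts(sb_in);
+   Int i;
+
+   for (i = 0; i < sb_in->stmts_used; i++) {
+      IRStmt* st = sb_in->stmts[i];
+      if (st->tag == Ist_IMark) {
+         /* Call fb_count_instr() each time this guest instruction runs. */
+         IRDirty* di = unsafeIRDirty_0_N(
+                          0, "fb_count_instr",
+                          VG_(fnptr_to_fnentry)( &fb_count_instr ),
+                          mkIRExprVec_0() );
+         addStmtToIRSB(sb_out, IRStmt_Dirty(di));
+      }
+      addStmtToIRSB(sb_out, st);   /* keep the original statement */
+   }
+   return sb_out;
+}
+</pre>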
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.fini"></a>2.2.7. Finalisation</h3></div></div></div>
+<p>This is where you can present the final results, such as a summary
+of the information collected. Any log files should be written out at
+this point.</p>
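+<p>Continuing the instruction-counting sketch from the previous section, a
+ finalisation function might simply print the totals;
+ <code class="function">VG_(umsg)</code> writes to Valgrind's normal output
+ channel.</p>
+<pre class="programlisting">
+static void fb_fini(Int exitcode)
+{
+   VG_(umsg)("guest instructions executed: %llu\n", n_guest_instrs);
+}
+</pre>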
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.otherinfo"></a>2.2.8. Other Important Information</h3></div></div></div>
+<p>Please note that the core/tool split infrastructure is quite
+complex and not brilliantly documented. Here are some important points,
+but there are undoubtedly many others that I should note but haven't
+thought of.</p>
+<p>The files <code class="filename">include/pub_tool_*.h</code> contain all the
+types, macros, functions, etc. that a tool should (hopefully) need, and are
+the only <code class="filename">.h</code> files a tool should need to
+<code class="computeroutput">#include</code>. They have a reasonable amount of
+documentation in it that should hopefully be enough to get you going.</p>
+<p>Note that you can't use anything from the C library (there
+are deep reasons for this, trust us). Valgrind provides an
+implementation of a reasonable subset of the C library, details of which
+are in <code class="filename">pub_tool_libc*.h</code>.</p>
+<p>When writing a tool, in theory you shouldn't need to look at any of
+the code in Valgrind's core, but in practice it might be useful sometimes to
+help understand something.</p>
+<p>The <code class="filename">include/pub_tool_basics.h</code> and
+<code class="filename">VEX/pub/libvex_basictypes.h</code> files have some basic
+types that are widely used.</p>
+<p>Ultimately, the tools distributed (Memcheck, Cachegrind, Lackey, etc.)
+are probably the best documentation of all, for the moment.</p>
+<p>The <code class="computeroutput">VG_</code> macro is used
+heavily. This just prepends a longer string in front of names to avoid
+potential namespace clashes. It is defined in
+<code class="filename">include/pub_tool_basics.h</code>.</p>
+<p>There are some assorted notes about various aspects of the
+implementation in <code class="filename">docs/internals/</code>. Much of it
+isn't that relevant to tool-writers, however.</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-writing-tools.advtopics"></a>2.3. Advanced Topics</h2></div></div></div>
+<p>Once a tool becomes more complicated, there are some extra
+things you may want/need to do.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.advice"></a>2.3.1. Debugging Tips</h3></div></div></div>
+<p>Writing and debugging tools is not trivial. Here are some
+suggestions for solving common problems.</p>
+<p>If you are getting segmentation faults in C functions used by your
+tool, the usual GDB command:</p>
+<pre class="screen">
+ gdb <prog> core</pre>
+<p>usually gives the location of the segmentation fault.</p>
+<p>If you want to debug C functions used by your tool, there are
+instructions on how to do so in the file
+<code class="filename">README_DEVELOPERS</code>.</p>
+<p>If you are having problems with your VEX IR instrumentation, it's
+likely that GDB won't be able to help at all. In this case, Valgrind's
+<code class="option">--trace-flags</code> option is invaluable for observing the
+results of instrumentation.</p>
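+<p>For example, an invocation along the following lines prints the IR at
+ selected points in the translation pipeline. The bit pattern shown is only
+ illustrative; run <code class="computeroutput">valgrind --help-debug</code>
+ to see what each of the eight digits controls.</p>
+<pre class="screen">
+ inst/bin/valgrind --tool=foobar --trace-flags=10001000 date</pre>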
+<p>If you just want to know whether a program point has been reached,
+using the <code class="computeroutput">OINK</code> macro (in
+<code class="filename">include/pub_tool_libcprint.h</code>) can be easier than
+using GDB.</p>
+<p>The other debugging command line options can be useful too (run
+<code class="computeroutput">valgrind --help-debug</code> for the
+list).</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.suppressions"></a>2.3.2. Suppressions</h3></div></div></div>
+<p>If your tool reports errors and you want to suppress some common
+ones, you can add suppressions to the suppression files. The relevant
+files are <code class="filename">*.supp</code>; the final suppression
+file is aggregated from these files by combining the relevant
+<code class="filename">.supp</code> files depending on the versions of linux, X
+and glibc on a system.</p>
+<p>Suppression types have the form
+<code class="computeroutput">tool_name:suppression_name</code>. The
+<code class="computeroutput">tool_name</code> here is the name you specify
+for the tool during initialisation with
+<code class="function">VG_(details_name)</code>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.docs"></a>2.3.3. Documentation</h3></div></div></div>
+<p>If you are feeling conscientious and want to write some
+documentation for your tool, please use XML as the rest of Valgrind does.
+The file <code class="filename">docs/README</code> has more details on getting
+the XML toolchain to work; this can be difficult, unfortunately.</p>
+<p>To write the documentation, follow these steps (using
+<code class="computeroutput">foobar</code> as the example tool name
+again):</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>The docs go in
+ <code class="computeroutput">foobar/docs/</code>, which you will
+ have created when you started writing the tool.</p></li>
+<li class="listitem">
+<p>Copy the XML documentation file for the tool Nulgrind from
+ <code class="filename">none/docs/nl-manual.xml</code> to
+ <code class="computeroutput">foobar/docs/</code>, and rename it to
+ <code class="filename">foobar/docs/fb-manual.xml</code>.</p>
+<p><span class="command"><strong>Note</strong></span>: there is a tetex bug
+ involving underscores in filenames, so don't use '_'.</p>
+</li>
+<li class="listitem"><p>Write the documentation. There are some helpful bits and
+ pieces on using XML markup in
+ <code class="filename">docs/xml/xml_help.txt</code>.</p></li>
+<li class="listitem"><p>Include it in the User Manual by adding the relevant entry to
+ <code class="filename">docs/xml/manual.xml</code>. Copy and edit an
+ existing entry.</p></li>
+<li class="listitem"><p>Include it in the man page by adding the relevant entry to
+ <code class="filename">docs/xml/valgrind-manpage.xml</code>. Copy and
+ edit an existing entry.</p></li>
+<li class="listitem">
+<p>Validate <code class="filename">foobar/docs/fb-manual.xml</code> using
+ the following command from within <code class="filename">docs/</code>:
+ </p>
+<pre class="screen">
+make valid
+</pre>
+<p>You may get errors that look like this:</p>
+<pre class="screen">
+./xml/index.xml:5: element chapter: validity error : No declaration for
+attribute base of element chapter
+</pre>
+<p>Ignore (only) these -- they're not important.</p>
+<p>Because the XML toolchain is fragile, it is important to ensure
+ that <code class="filename">fb-manual.xml</code> won't break the documentation
+ set build. Note that just because an XML file happily transforms to
+ html does not necessarily mean the same holds true for pdf/ps.</p>
+</li>
+<li class="listitem">
+<p>You can (re-)generate the HTML docs while you are writing
+ <code class="filename">fb-manual.xml</code> to help you see how it's looking.
+ The generated files end up in
+ <code class="filename">docs/html/</code>. Use the following
+ command, within <code class="filename">docs/</code>:</p>
+<pre class="screen">
+make html-docs
+</pre>
+</li>
+<li class="listitem">
+<p>When you have finished, try to generate PDF and PostScript output to
+ check all is well, from within <code class="filename">docs/</code>:
+ </p>
+<pre class="screen">
+make print-docs
+</pre>
+<p>Check the output <code class="filename">.pdf</code> and
+ <code class="filename">.ps</code> files in
+ <code class="computeroutput">docs/print/</code>.</p>
+<p>Note that the toolchain is even more fragile for the print docs,
+ so don't feel too bad if you can't get it working.</p>
+</li>
+</ol></div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.regtests"></a>2.3.4. Regression Tests</h3></div></div></div>
+<p>Valgrind has some support for regression tests. If you want to
+write regression tests for your tool:</p>
+<div class="orderedlist"><ol class="orderedlist" type="1">
+<li class="listitem"><p>The tests go in <code class="computeroutput">foobar/tests/</code>,
+ which you will have created when you started writing the tool.</p></li>
+<li class="listitem"><p>Write <code class="filename">foobar/tests/Makefile.am</code>. Use
+ <code class="filename">memcheck/tests/Makefile.am</code> as an
+ example.</p></li>
+<li class="listitem"><p>Write the tests, <code class="computeroutput">.vgtest</code> test
+ description files, <code class="computeroutput">.stdout.exp</code> and
+ <code class="computeroutput">.stderr.exp</code> expected output files.
+ (Note that Valgrind's output goes to stderr.) Some details on
+ writing and running tests are given in the comments at the top of
+ the testing script
+ <code class="computeroutput">tests/vg_regtest</code>.</p></li>
+<li class="listitem"><p>Write a filter for stderr results
+ <code class="computeroutput">foobar/tests/filter_stderr</code>. It can
+ call the existing filters in
+ <code class="computeroutput">tests/</code>. See
+ <code class="computeroutput">memcheck/tests/filter_stderr</code> for an
+ example; in particular note the
+ <code class="computeroutput">$dir</code> trick that ensures the filter
+ works correctly from any directory.</p></li>
+</ol></div>
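+<p>As mentioned in step 3 above, a
+ <code class="computeroutput">.vgtest</code> file is a small set of
+ <code class="computeroutput">key: value</code> lines. A minimal, illustrative
+ example follows; the program name and arguments are placeholders, and the
+ full list of recognised fields is described in
+ <code class="computeroutput">tests/vg_regtest</code>.</p>
+<pre class="programlisting">
+prog: fb_test1
+vgopts: -q
+args: 100
+</pre>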
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.profiling"></a>2.3.5. Profiling</h3></div></div></div>
+<p>Lots of profiling tools have trouble running Valgrind. For example,
+trying to use gprof is hopeless.</p>
+<p>Probably the best way to profile a tool is with OProfile on Linux.</p>
+<p>You can also use Cachegrind on it. Read
+<code class="filename">README_DEVELOPERS</code> for details on running Valgrind under
+Valgrind; it's a bit fragile but can usually be made to work.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.mkhackery"></a>2.3.6. Other Makefile Hackery</h3></div></div></div>
+<p>If you add any directories under
+<code class="computeroutput">foobar/</code>, you will need to add
+an appropriate <code class="filename">Makefile.am</code> to it, and add a
+corresponding entry to the <code class="computeroutput">AC_OUTPUT</code>
+list in <code class="filename">configure.in</code>.</p>
+<p>If you add any scripts to your tool (see Cachegrind for an
+example) you need to add them to the
+<code class="computeroutput">bin_SCRIPTS</code> variable in
+<code class="filename">foobar/Makefile.am</code> and possible also to the
+<code class="computeroutput">AC_OUTPUT</code> list in
+<code class="filename">configure.in</code>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="manual-writing-tools.ifacever"></a>2.3.7. The Core/tool Interface</h3></div></div></div>
+<p>The core/tool interface evolves over time, but it's pretty stable.
+We deliberately do not provide backward compatibility with old interfaces,
+because it is too difficult and too restrictive. We view this as a good
+thing -- if we had to be backward compatible with earlier versions, many
+improvements now in the system could not have been added.</p>
+<p>Because tools are statically linked with the core, if a tool compiles
+successfully then it should be compatible with the core. We would not
+deliberately violate this property by, for example, changing the behaviour
+of a core function without changing its prototype.</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="manual-writing-tools.finalwords"></a>2.4. Final Words</h2></div></div></div>
+<p>Writing a new Valgrind tool is not easy, but the tools you can write
+with Valgrind are among the most powerful programming tools there are.
+Happy programming!</p>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="design-impl.html"><< 1. The Design and Implementation of Valgrind</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="tech-docs.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="cl-format.html">3. Callgrind Format Specification >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/manual.html b/docs/html/manual.html
new file mode 100644
index 0000000..ed44036
--- /dev/null
+++ b/docs/html/manual.html
@@ -0,0 +1,323 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>Valgrind User Manual</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="index.html" title="Valgrind Documentation">
+<link rel="prev" href="quick-start.html" title="The Valgrind Quick Start Guide">
+<link rel="next" href="manual-intro.html" title="1. Introduction">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="quick-start.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="index.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="manual-intro.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="book">
+<div class="titlepage">
+<div>
+<div><h1 class="title">
+<a name="manual"></a>Valgrind User Manual</h1></div>
+<div><p class="releaseinfo">Release 3.12.0 20 October 2016</p></div>
+<div><p class="copyright">Copyright © 2000-2016 <a class="ulink" href="http://www.valgrind.org/info/developers.html" target="_top">Valgrind Developers</a></p></div>
+<div><div class="legalnotice">
+<a name="idm140639115873632"></a><p>Email: <a class="ulink" href="mailto:valgrind@valgrind.org" target="_top">valgrind@valgrind.org</a></p>
+</div></div>
+</div>
+<hr>
+</div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="chapter"><a href="manual-intro.html">1. Introduction</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="manual-intro.html#manual-intro.overview">1.1. An Overview of Valgrind</a></span></dt>
+<dt><span class="sect1"><a href="manual-intro.html#manual-intro.navigation">1.2. How to navigate this manual</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="manual-core.html">2. Using and understanding the Valgrind core</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.whatdoes">2.1. What Valgrind does with your program</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.started">2.2. Getting started</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.comment">2.3. The Commentary</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.report">2.4. Reporting of errors</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.suppress">2.5. Suppressing errors</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.options">2.6. Core Command-line Options</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.toolopts">2.6.1. Tool-selection Option</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.basicopts">2.6.2. Basic Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.erropts">2.6.3. Error-related Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.mallocopts">2.6.4. malloc-related Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.rareopts">2.6.5. Uncommon Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.debugopts">2.6.6. Debugging Options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core.html#manual-core.defopts">2.6.7. Setting Default Options</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.pthreads">2.7. Support for Threads</a></span></dt>
+<dd><dl><dt><span class="sect2"><a href="manual-core.html#manual-core.pthreads_perf_sched">2.7.1. Scheduling and Multi-Thread Performance</a></span></dt></dl></dd>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.signals">2.8. Handling of Signals</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.install">2.9. Building and Installing Valgrind</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.problems">2.10. If You Have Problems</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.limits">2.11. Limitations</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.example">2.12. An Example Run</a></span></dt>
+<dt><span class="sect1"><a href="manual-core.html#manual-core.warnings">2.13. Warning Messages You Might See</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="manual-core-adv.html">3. Using and understanding the Valgrind core: Advanced Topics</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="manual-core-adv.html#manual-core-adv.clientreq">3.1. The Client Request mechanism</a></span></dt>
+<dt><span class="sect1"><a href="manual-core-adv.html#manual-core-adv.gdbserver">3.2. Debugging your program using Valgrind gdbserver and GDB</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-simple">3.2.1. Quick Start: debugging in 3 steps</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-concept">3.2.2. Valgrind gdbserver overall organisation</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-gdb">3.2.3. Connecting GDB to a Valgrind gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-gdb-android">3.2.4. Connecting to an Android gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-commandhandling">3.2.5. Monitor command handling by the Valgrind gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-threads">3.2.6. Valgrind gdbserver thread information</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-shadowregisters">3.2.7. Examining and modifying Valgrind shadow registers</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.gdbserver-limitations">3.2.8. Limitations of the Valgrind gdbserver</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.vgdb">3.2.9. vgdb command line options</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.valgrind-monitor-commands">3.2.10. Valgrind monitor commands</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-core-adv.html#manual-core-adv.wrapping">3.3. Function wrapping</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.example">3.3.1. A Simple Example</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.specs">3.3.2. Wrapping Specifications</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.semantics">3.3.3. Wrapping Semantics</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.debugging">3.3.4. Debugging</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.limitations-cf">3.3.5. Limitations - control flow</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.limitations-sigs">3.3.6. Limitations - original function signatures</a></span></dt>
+<dt><span class="sect2"><a href="manual-core-adv.html#manual-core-adv.wrapping.examples">3.3.7. Examples</a></span></dt>
+</dl></dd>
+</dl></dd>
+<dt><span class="chapter"><a href="mc-manual.html">4. Memcheck: a memory error detector</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.overview">4.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.errormsgs">4.2. Explanation of error messages from Memcheck</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.badrw">4.2.1. Illegal read / Illegal write errors</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.uninitvals">4.2.2. Use of uninitialised values</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.bad-syscall-args">4.2.3. Use of uninitialised or unaddressable values in system
+ calls</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.badfrees">4.2.4. Illegal frees</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.rudefn">4.2.5. When a heap block is freed with an inappropriate deallocation
+function</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.overlap">4.2.6. Overlapping source and destination blocks</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.fishyvalue">4.2.7. Fishy argument values</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.leaks">4.2.8. Memory leak detection</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.options">4.3. Memcheck Command-Line Options</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.suppfiles">4.4. Writing suppression files</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.machine">4.5. Details of Memcheck's checking machinery</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.value">4.5.1. Valid-value (V) bits</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.vaddress">4.5.2. Valid-address (A) bits</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.together">4.5.3. Putting it all together</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.monitor-commands">4.6. Memcheck Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.clientreqs">4.7. Client Requests</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.mempools">4.8. Memory Pools: describing and working with custom allocators</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.mpiwrap">4.9. Debugging MPI Parallel Programs with Valgrind</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.build">4.9.1. Building and installing the wrappers</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.gettingstarted">4.9.2. Getting started</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.controlling">4.9.3. Controlling the wrapper library</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.limitations.functions">4.9.4. Functions</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.limitations.types">4.9.5. Types</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.writingwrappers">4.9.6. Writing new wrappers</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.whattoexpect">4.9.7. What to expect when using the wrappers</a></span></dt>
+</dl></dd>
+</dl></dd>
+<dt><span class="chapter"><a href="cg-manual.html">5. Cachegrind: a cache and branch-prediction profiler</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.overview">5.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.profile">5.2. Using Cachegrind, cg_annotate and cg_merge</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.running-cachegrind">5.2.1. Running Cachegrind</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.outputfile">5.2.2. Output File</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.running-cg_annotate">5.2.3. Running cg_annotate</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.the-output-preamble">5.2.4. The Output Preamble</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.the-global">5.2.5. The Global and Function-level Counts</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.line-by-line">5.2.6. Line-by-line Counts</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.assembler">5.2.7. Annotating Assembly Code Programs</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#ms-manual.forkingprograms">5.2.8. Forking Programs</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.annopts.warnings">5.2.9. cg_annotate Warnings</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.annopts.things-to-watch-out-for">5.2.10. Unusual Annotation Cases</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.cg_merge">5.2.11. Merging Profiles with cg_merge</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.cg_diff">5.2.12. Differencing Profiles with cg_diff</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.cgopts">5.3. Cachegrind Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.annopts">5.4. cg_annotate Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.mergeopts">5.5. cg_merge Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.diffopts">5.6. cg_diff Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.acting-on">5.7. Acting on Cachegrind's Information</a></span></dt>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.sim-details">5.8. Simulation Details</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cg-manual.html#cache-sim">5.8.1. Cache Simulation Specifics</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#branch-sim">5.8.2. Branch Simulation Specifics</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.annopts.accuracy">5.8.3. Accuracy</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cg-manual.html#cg-manual.impl-details">5.9. Implementation Details</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.impl-details.how-cg-works">5.9.1. How Cachegrind Works</a></span></dt>
+<dt><span class="sect2"><a href="cg-manual.html#cg-manual.impl-details.file-format">5.9.2. Cachegrind Output File Format</a></span></dt>
+</dl></dd>
+</dl></dd>
+<dt><span class="chapter"><a href="cl-manual.html">6. Callgrind: a call-graph generating cache and branch prediction profiler</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.use">6.1. Overview</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.functionality">6.1.1. Functionality</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.basics">6.1.2. Basic Usage</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.usage">6.2. Advanced Usage</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.dumps">6.2.1. Multiple profiling dumps from one program run</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.limits">6.2.2. Limiting the range of collected events</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.busevents">6.2.3. Counting global bus events</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.cycles">6.2.4. Avoiding cycles</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.forkingprograms">6.2.5. Forking Programs</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.options">6.3. Callgrind Command-line Options</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.creation">6.3.1. Dump creation options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.activity">6.3.2. Activity options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.collection">6.3.3. Data collection options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.separation">6.3.4. Cost entity separation options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.simulation">6.3.5. Simulation options</a></span></dt>
+<dt><span class="sect2"><a href="cl-manual.html#cl-manual.options.cachesimulation">6.3.6. Cache simulation options</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.monitor-commands">6.4. Callgrind Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.clientrequests">6.5. Callgrind specific client requests</a></span></dt>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.callgrind_annotate-options">6.6. callgrind_annotate Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="cl-manual.html#cl-manual.callgrind_control-options">6.7. callgrind_control Command-line Options</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="hg-manual.html">7. Helgrind: a thread error detector</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.overview">7.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.api-checks">7.2. Detected errors: Misuses of the POSIX pthreads API</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.lock-orders">7.3. Detected errors: Inconsistent Lock Orderings</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.data-races">7.4. Detected errors: Data Races</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="hg-manual.html#hg-manual.data-races.example">7.4.1. A Simple Data Race</a></span></dt>
+<dt><span class="sect2"><a href="hg-manual.html#hg-manual.data-races.algorithm">7.4.2. Helgrind's Race Detection Algorithm</a></span></dt>
+<dt><span class="sect2"><a href="hg-manual.html#hg-manual.data-races.errmsgs">7.4.3. Interpreting Race Error Messages</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.effective-use">7.5. Hints and Tips for Effective Use of Helgrind</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.options">7.6. Helgrind Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.monitor-commands">7.7. Helgrind Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.client-requests">7.8. Helgrind Client Requests</a></span></dt>
+<dt><span class="sect1"><a href="hg-manual.html#hg-manual.todolist">7.9. A To-Do List for Helgrind</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="drd-manual.html">8. DRD: a thread error detector</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.overview">8.1. Overview</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.mt-progr-models">8.1.1. Multithreaded Programming Paradigms</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.pthreads-model">8.1.2. POSIX Threads Programming Model</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.mt-problems">8.1.3. Multithreaded Programming Problems</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.data-race-detection">8.1.4. Data Race Detection</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.using-drd">8.2. Using DRD</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.options">8.2.1. DRD Command-line Options</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.data-races">8.2.2. Detected Errors: Data Races</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.lock-contention">8.2.3. Detected Errors: Lock Contention</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.api-checks">8.2.4. Detected Errors: Misuse of the POSIX threads API</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.clientreqs">8.2.5. Client Requests</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.C++11">8.2.6. Debugging C++11 Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.gnome">8.2.7. Debugging GNOME Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.boost.thread">8.2.8. Debugging Boost.Thread Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.openmp">8.2.9. Debugging OpenMP Programs</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.cust-mem-alloc">8.2.10. DRD and Custom Memory Allocators</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.drd-versus-memcheck">8.2.11. DRD Versus Memcheck</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.resource-requirements">8.2.12. Resource Requirements</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.effective-use">8.2.13. Hints and Tips for Effective Use of DRD</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.Pthreads">8.3. Using the POSIX Threads API Effectively</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.mutex-types">8.3.1. Mutex types</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.condvar">8.3.2. Condition variables</a></span></dt>
+<dt><span class="sect2"><a href="drd-manual.html#drd-manual.pctw">8.3.3. pthread_cond_timedwait and timeouts</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.limitations">8.4. Limitations</a></span></dt>
+<dt><span class="sect1"><a href="drd-manual.html#drd-manual.feedback">8.5. Feedback</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="ms-manual.html">9. Massif: a heap profiler</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.overview">9.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.using">9.2. Using Massif and ms_print</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.anexample">9.2.1. An Example Program</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.running-massif">9.2.2. Running Massif</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.running-ms_print">9.2.3. Running ms_print</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.theoutputpreamble">9.2.4. The Output Preamble</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.theoutputgraph">9.2.5. The Output Graph</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.thesnapshotdetails">9.2.6. The Snapshot Details</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.forkingprograms">9.2.7. Forking Programs</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.not-measured">9.2.8. Measuring All Memory in a Process</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.acting">9.2.9. Acting on Massif's Information</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.options">9.3. Massif Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.monitor-commands">9.4. Massif Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.clientreqs">9.5. Massif Client Requests</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.ms_print-options">9.6. ms_print Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.fileformat">9.7. Massif's Output File Format</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="dh-manual.html">10. DHAT: a dynamic heap analysis tool</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="dh-manual.html#dh-manual.overview">10.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="dh-manual.html#dh-manual.understanding">10.2. Understanding DHAT's output</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="dh-manual.html#idm140639117126160">10.2.1. Interpreting the max-live, tot-alloc and deaths fields</a></span></dt>
+<dt><span class="sect2"><a href="dh-manual.html#idm140639113841488">10.2.2. Interpreting the acc-ratios fields</a></span></dt>
+<dt><span class="sect2"><a href="dh-manual.html#idm140639116741152">10.2.3. Interpreting "Aggregated access counts by offset" data</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="dh-manual.html#dh-manual.options">10.3. DHAT Command-line Options</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="sg-manual.html">11. SGCheck: an experimental stack and global array overrun detector</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.overview">11.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.options">11.2. SGCheck Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.how-works.sg-checks">11.3. How SGCheck Works</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.cmp-w-memcheck">11.4. Comparison with Memcheck</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.limitations">11.5. Limitations</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.todo-user-visible">11.6. Still To Do: User-visible Functionality</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.todo-implementation">11.7. Still To Do: Implementation Tidying</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="bbv-manual.html">12. BBV: an experimental basic block vector generation tool</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.overview">12.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.quickstart">12.2. Using Basic Block Vectors to create SimPoints</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.usage">12.3. BBV Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.fileformat">12.4. Basic Block Vector File Format</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.implementation">12.5. Implementation</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.threadsupport">12.6. Threaded Executable Support</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.validation">12.7. Validation</a></span></dt>
+<dt><span class="sect1"><a href="bbv-manual.html#bbv-manual.performance">12.8. Performance</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="lk-manual.html">13. Lackey: an example tool</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="lk-manual.html#lk-manual.overview">13.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="lk-manual.html#lk-manual.options">13.2. Lackey Command-line Options</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="nl-manual.html">14. Nulgrind: the minimal Valgrind tool</a></span></dt>
+<dd><dl><dt><span class="sect1"><a href="nl-manual.html#ms-manual.overview">14.1. Overview</a></span></dt></dl></dd>
+</dl>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="quick-start.html"><< The Valgrind Quick Start Guide</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="index.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="manual-intro.html">1. Introduction >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/mc-manual.html b/docs/html/mc-manual.html
new file mode 100644
index 0000000..b47da7d
--- /dev/null
+++ b/docs/html/mc-manual.html
@@ -0,0 +1,2386 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>4. Memcheck: a memory error detector</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="manual-core-adv.html" title="3. Using and understanding the Valgrind core: Advanced Topics">
+<link rel="next" href="cg-manual.html" title="5. Cachegrind: a cache and branch-prediction profiler">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="manual-core-adv.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="cg-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="mc-manual"></a>4. Memcheck: a memory error detector</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.overview">4.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.errormsgs">4.2. Explanation of error messages from Memcheck</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.badrw">4.2.1. Illegal read / Illegal write errors</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.uninitvals">4.2.2. Use of uninitialised values</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.bad-syscall-args">4.2.3. Use of uninitialised or unaddressable values in system
+ calls</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.badfrees">4.2.4. Illegal frees</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.rudefn">4.2.5. When a heap block is freed with an inappropriate deallocation
+function</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.overlap">4.2.6. Overlapping source and destination blocks</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.fishyvalue">4.2.7. Fishy argument values</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.leaks">4.2.8. Memory leak detection</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.options">4.3. Memcheck Command-Line Options</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.suppfiles">4.4. Writing suppression files</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.machine">4.5. Details of Memcheck's checking machinery</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.value">4.5.1. Valid-value (V) bits</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.vaddress">4.5.2. Valid-address (A) bits</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.together">4.5.3. Putting it all together</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.monitor-commands">4.6. Memcheck Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.clientreqs">4.7. Client Requests</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.mempools">4.8. Memory Pools: describing and working with custom allocators</a></span></dt>
+<dt><span class="sect1"><a href="mc-manual.html#mc-manual.mpiwrap">4.9. Debugging MPI Parallel Programs with Valgrind</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.build">4.9.1. Building and installing the wrappers</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.gettingstarted">4.9.2. Getting started</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.controlling">4.9.3. Controlling the wrapper library</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.limitations.functions">4.9.4. Functions</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.limitations.types">4.9.5. Types</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.writingwrappers">4.9.6. Writing new wrappers</a></span></dt>
+<dt><span class="sect2"><a href="mc-manual.html#mc-manual.mpiwrap.whattoexpect">4.9.7. What to expect when using the wrappers</a></span></dt>
+</dl></dd>
+</dl>
+</div>
+<p>To use this tool, you may specify <code class="option">--tool=memcheck</code>
+on the Valgrind command line. You don't have to, though, since Memcheck
+is the default tool.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.overview"></a>4.1. Overview</h2></div></div></div>
+<p>Memcheck is a memory error detector. It can detect the following
+problems that are common in C and C++ programs.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Accessing memory you shouldn't, e.g. overrunning and underrunning
+ heap blocks, overrunning the top of the stack, and accessing memory after
+ it has been freed.</p></li>
+<li class="listitem"><p>Using undefined values, i.e. values that have not been initialised,
+ or that have been derived from other undefined values.</p></li>
+<li class="listitem"><p>Incorrect freeing of heap memory, such as double-freeing heap
+ blocks, or mismatched use of
+ <code class="function">malloc</code>/<code class="computeroutput">new</code>/<code class="computeroutput">new[]</code>
+ versus
+ <code class="function">free</code>/<code class="computeroutput">delete</code>/<code class="computeroutput">delete[]</code></p></li>
+<li class="listitem"><p>Overlapping <code class="computeroutput">src</code> and
+ <code class="computeroutput">dst</code> pointers in
+ <code class="computeroutput">memcpy</code> and related
+ functions.</p></li>
+<li class="listitem"><p>Passing a fishy (presumably negative) value to the
+ <code class="computeroutput">size</code> parameter of a memory
+ allocation function.</p></li>
+<li class="listitem"><p>Memory leaks.</p></li>
+</ul></div>
+<p>Problems like these can be difficult to find by other means,
+often remaining undetected for long periods, then causing occasional,
+difficult-to-diagnose crashes.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.errormsgs"></a>4.2. Explanation of error messages from Memcheck</h2></div></div></div>
+<p>Memcheck issues a range of error messages. This section presents a
+quick summary of what error messages mean. The precise behaviour of the
+error-checking machinery is described in <a class="xref" href="mc-manual.html#mc-manual.machine" title="4.5. Details of Memcheck's checking machinery">Details of Memcheck's checking machinery</a>.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.badrw"></a>4.2.1. Illegal read / Illegal write errors</h3></div></div></div>
+<p>For example:</p>
+<pre class="programlisting">
+Invalid read of size 4
+ at 0x40F6BBCC: (within /usr/lib/libpng.so.2.1.0.9)
+ by 0x40F6B804: (within /usr/lib/libpng.so.2.1.0.9)
+ by 0x40B07FF4: read_png_image(QImageIO *) (kernel/qpngio.cpp:326)
+ by 0x40AC751B: QImageIO::read() (kernel/qimage.cpp:3621)
+ Address 0xBFFFF0E0 is not stack'd, malloc'd or free'd
+</pre>
+<p>This happens when your program reads or writes memory at a place
+which Memcheck reckons it shouldn't. In this example, the program did a
+4-byte read at address 0xBFFFF0E0, somewhere within the system-supplied
+library libpng.so.2.1.0.9, which was called from somewhere else in the
+same library, called from line 326 of <code class="filename">qpngio.cpp</code>,
+and so on.</p>
+<p>Memcheck tries to establish what the illegal address might relate
+to, since that's often useful. So, if it points into a block of memory
+which has already been freed, you'll be informed of this, and also where
+the block was freed. Likewise, if it should turn out to be just off
+the end of a heap block, a common result of off-by-one errors in
+array subscripting, you'll be informed of this fact, and also where the
+block was allocated. If you use the <code class="option"><a class="xref" href="manual-core.html#opt.read-var-info">--read-var-info</a></code> option Memcheck will run more slowly
+but may give a more detailed description of any illegal address.</p>
+<p>In this example, Memcheck can't identify the address. Actually
+the address is on the stack, but, for some reason, this is not a valid
+stack address -- it is below the stack pointer and that isn't allowed.
+In this particular case it's probably caused by GCC generating invalid
+code, a known bug in some ancient versions of GCC.</p>
+<p>Note that Memcheck only tells you that your program is about to
+access memory at an illegal address. It can't stop the access from
+happening. So, if your program makes an access which normally would
+result in a segmentation fault, your program will still suffer the same
+fate -- but you will get a message from Memcheck immediately prior to
+this. In this particular example, reading junk on the stack is
+non-fatal, and the program stays alive.</p>
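+<p>For illustration, here is a minimal, hypothetical sketch (unrelated to
+the libpng trace above) of code that would produce an "Invalid read"
+report, because it reads one byte just past the end of a heap block:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+
+int main(void)
+{
+   char* p = (char*)malloc(10);
+   volatile char junk = p[10];   /* invalid read of size 1: one byte past
+                                    the end of the 10-byte block          */
+   (void)junk;
+   free(p);
+   return 0;
+}
+</pre>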
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.uninitvals"></a>4.2.2. Use of uninitialised values</h3></div></div></div>
+<p>For example:</p>
+<pre class="programlisting">
+Conditional jump or move depends on uninitialised value(s)
+ at 0x402DFA94: _IO_vfprintf (_itoa.h:49)
+ by 0x402E8476: _IO_printf (printf.c:36)
+ by 0x8048472: main (tests/manuel1.c:8)
+</pre>
+<p>An uninitialised-value use error is reported when your program
+uses a value which hasn't been initialised -- in other words, is
+undefined. Here, the undefined value is used somewhere inside the
+<code class="function">printf</code> machinery of the C library. This error was
+reported when running the following small program:</p>
+<pre class="programlisting">
+int main()
+{
+ int x;
+ printf ("x = %d\n", x);
+}</pre>
+<p>It is important to understand that your program can copy around
+junk (uninitialised) data as much as it likes. Memcheck observes this
+and keeps track of the data, but does not complain. A complaint is
+issued only when your program attempts to make use of uninitialised
+data in a way that might affect your program's externally-visible behaviour.
+In this example, <code class="varname">x</code> is uninitialised. Memcheck observes
+the value being passed to <code class="function">_IO_printf</code> and thence to
+<code class="function">_IO_vfprintf</code>, but makes no comment. However,
+<code class="function">_IO_vfprintf</code> has to examine the value of
+<code class="varname">x</code> so it can turn it into the corresponding ASCII string,
+and it is at this point that Memcheck complains.</p>
+<p>Sources of uninitialised data tend to be:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Local variables in procedures which have not been initialised,
+ as in the example above.</p></li>
+<li class="listitem"><p>The contents of heap blocks (allocated with
+ <code class="function">malloc</code>, <code class="function">new</code>, or a similar
+ function) before you (or a constructor) write something there.
+ </p></li>
+</ul></div>
+<p>To see information on the sources of uninitialised data in your
+program, use the <code class="option">--track-origins=yes</code> option. This
+makes Memcheck run more slowly, but can make it much easier to track down
+the root causes of uninitialised value errors.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.bad-syscall-args"></a>4.2.3. Use of uninitialised or unaddressable values in system
+ calls</h3></div></div></div>
+<p>Memcheck checks all parameters to system calls:
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>It checks whether all the direct parameters themselves are
+ initialised.</p></li>
+<li class="listitem"><p>Also, if a system call needs to read from a buffer provided by
+ your program, Memcheck checks that the entire buffer is addressable
+ and its contents are initialised.</p></li>
+<li class="listitem"><p>Also, if the system call needs to write to a user-supplied
+ buffer, Memcheck checks that the buffer is addressable.</p></li>
+</ul></div>
+<p>
+</p>
+<p>After the system call, Memcheck updates its tracked information to
+precisely reflect any changes in memory state caused by the system
+call.</p>
+<p>Here's an example of two system calls with invalid parameters:</p>
+<pre class="programlisting">
+ #include <stdlib.h>
+ #include <unistd.h>
+ int main( void )
+ {
+ char* arr = malloc(10);
+ int* arr2 = malloc(sizeof(int));
+ write( 1 /* stdout */, arr, 10 );
+ exit(arr2[0]);
+ }
+</pre>
+<p>You get these complaints ...</p>
+<pre class="programlisting">
+ Syscall param write(buf) points to uninitialised byte(s)
+ at 0x25A48723: __write_nocancel (in /lib/tls/libc-2.3.3.so)
+ by 0x259AFAD3: __libc_start_main (in /lib/tls/libc-2.3.3.so)
+ by 0x8048348: (within /auto/homes/njn25/grind/head4/a.out)
+ Address 0x25AB8028 is 0 bytes inside a block of size 10 alloc'd
+ at 0x259852B0: malloc (vg_replace_malloc.c:130)
+ by 0x80483F1: main (a.c:5)
+
+ Syscall param exit(error_code) contains uninitialised byte(s)
+ at 0x25A21B44: __GI__exit (in /lib/tls/libc-2.3.3.so)
+ by 0x8048426: main (a.c:8)
+</pre>
+<p>... because the program has (a) written uninitialised junk
+from the heap block to the standard output, and (b) passed an
+uninitialised value to <code class="function">exit</code>. Note that the first
+error refers to the memory pointed to by
+<code class="computeroutput">buf</code> (not
+<code class="computeroutput">buf</code> itself), but the second error
+refers directly to <code class="computeroutput">exit</code>'s argument
+<code class="computeroutput">arr2[0]</code>.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.badfrees"></a>4.2.4. Illegal frees</h3></div></div></div>
+<p>For example:</p>
+<pre class="programlisting">
+Invalid free()
+ at 0x4004FFDF: free (vg_clientmalloc.c:577)
+ by 0x80484C7: main (tests/doublefree.c:10)
+ Address 0x3807F7B4 is 0 bytes inside a block of size 177 free'd
+ at 0x4004FFDF: free (vg_clientmalloc.c:577)
+ by 0x80484C7: main (tests/doublefree.c:10)
+</pre>
+<p>Memcheck keeps track of the blocks allocated by your program
+with <code class="function">malloc</code>/<code class="computeroutput">new</code>,
+so it can know exactly whether the argument to
+<code class="function">free</code>/<code class="computeroutput">delete</code> is
+legitimate or not. Here, this test program has freed the same block
+twice. As with the illegal read/write errors, Memcheck attempts to
+make sense of the address freed. If, as here, the address is one
+which has previously been freed, you will be told that -- making
+duplicate frees of the same block easy to spot. You will also get this
+message if you try to free a pointer that doesn't point to the start of a
+heap block.</p>
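+<p>For illustration, a minimal, hypothetical sketch (not the doublefree.c
+referred to in the trace above) that would produce such a report:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+
+int main(void)
+{
+   char* p = (char*)malloc(177);
+   free(p);
+   free(p);   /* second free of the same block: reported as Invalid free() */
+   return 0;
+}
+</pre>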
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.rudefn"></a>4.2.5. When a heap block is freed with an inappropriate deallocation
+function</h3></div></div></div>
+<p>In the following example, a block allocated with
+<code class="function">new[]</code> has wrongly been deallocated with
+<code class="function">free</code>:</p>
+<pre class="programlisting">
+Mismatched free() / delete / delete []
+ at 0x40043249: free (vg_clientfuncs.c:171)
+ by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)
+ by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)
+ by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)
+ Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd
+ at 0x4004318C: operator new[](unsigned int) (vg_clientfuncs.c:152)
+ by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)
+ by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)
+ by 0x4C21788F: OLEFilter::convert(QCString const &) (olefilter.cc:272)
+</pre>
+<p>In <code class="literal">C++</code> it's important to deallocate memory in a
+way compatible with how it was allocated. The deal is:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>If allocated with
+ <code class="function">malloc</code>,
+ <code class="function">calloc</code>,
+ <code class="function">realloc</code>,
+ <code class="function">valloc</code> or
+ <code class="function">memalign</code>, you must
+ deallocate with <code class="function">free</code>.</p></li>
+<li class="listitem"><p>If allocated with <code class="function">new</code>, you must deallocate
+ with <code class="function">delete</code>.</p></li>
+<li class="listitem"><p>If allocated with <code class="function">new[]</code>, you must
+ deallocate with <code class="function">delete[]</code>.</p></li>
+</ul></div>
+<p>The worst thing is that on Linux apparently it doesn't matter if
+you do mix these up, but the same program may then crash on a
+different platform, Solaris for example. So it's best to fix it
+properly. According to the KDE folks "it's amazing how many C++
+programmers don't know this".</p>
+<p>The reason behind the requirement is as follows. In some C++
+implementations, <code class="function">delete[]</code> must be used for
+objects allocated by <code class="function">new[]</code> because the compiler
+stores the size of the array and the pointer-to-member to the
+destructor of the array's content just before the pointer actually
+returned. <code class="function">delete</code> doesn't account for this and will get
+confused, possibly corrupting the heap.</p>
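+<p>For illustration, a minimal, hypothetical sketch of the kind of mismatch
+described above, in which a block allocated with
+<code class="function">new[]</code> is released with
+<code class="function">free</code>:</p>
+<pre class="programlisting">
+#include <cstdlib>
+
+int main()
+{
+   int* a = new int[10];   // allocated with new[]
+   std::free(a);           // wrong: should be  delete [] a;
+   return 0;
+}
+</pre>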
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.overlap"></a>4.2.6. Overlapping source and destination blocks</h3></div></div></div>
+<p>The following C library functions copy some data from one
+memory block to another (or something similar):
+<code class="function">memcpy</code>,
+<code class="function">strcpy</code>,
+<code class="function">strncpy</code>,
+<code class="function">strcat</code>,
+<code class="function">strncat</code>.
+The blocks pointed to by their <code class="computeroutput">src</code> and
+<code class="computeroutput">dst</code> pointers aren't allowed to overlap.
+The POSIX standards have wording along the lines of "If copying takes place
+between objects that overlap, the behavior is undefined." Therefore,
+Memcheck checks for this.
+</p>
+<p>For example:</p>
+<pre class="programlisting">
+==27492== Source and destination overlap in memcpy(0xbffff294, 0xbffff280, 21)
+==27492== at 0x40026CDC: memcpy (mc_replace_strmem.c:71)
+==27492== by 0x804865A: main (overlap.c:40)
+</pre>
+<p>You don't want the two blocks to overlap because one of them could
+get partially overwritten by the copying.</p>
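+<p>A minimal, hypothetical sketch (not the overlap.c from the trace above)
+that would trigger this check:</p>
+<pre class="programlisting">
+#include <string.h>
+
+int main(void)
+{
+   char buf[40] = "some string to be copied";
+   memcpy(buf + 10, buf, 21);   /* source and destination ranges overlap */
+   return 0;
+}
+</pre>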
+<p>You might think that Memcheck is being overly pedantic reporting
+this in the case where <code class="computeroutput">dst</code> is less than
+<code class="computeroutput">src</code>. For example, the obvious way to
+implement <code class="function">memcpy</code> is by copying from the first
+byte to the last. However, the optimisation guides of some
+architectures recommend copying from the last byte down to the first.
+Also, some implementations of <code class="function">memcpy</code> zero
+<code class="computeroutput">dst</code> before copying, because zeroing the
+destination's cache line(s) can improve performance.</p>
+<p>The moral of the story is: if you want to write truly portable
+code, don't make any assumptions about the language
+implementation.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.fishyvalue"></a>4.2.7. Fishy argument values</h3></div></div></div>
+<p>All memory allocation functions take an argument specifying the
+size of the memory block that should be allocated. Clearly, the requested
+size should be a non-negative value and is typically not excessively large.
+For instance, it is extremely unlikely that the size of an allocation
+request exceeds 2**63 bytes on a 64-bit machine. It is much more likely that
+such a value is the result of an erroneous size calculation and is in effect
+a negative value (that just happens to appear excessively large because
+the bit pattern is interpreted as an unsigned integer).
+Such a value is called a "fishy value".
+
+The <code class="varname">size</code> argument of the following allocation functions
+is checked for being fishy:
+<code class="function">malloc</code>,
+<code class="function">calloc</code>,
+<code class="function">realloc</code>,
+<code class="function">memalign</code>,
+<code class="function">new</code>,
+<code class="function">new []</code>,
+<code class="function">__builtin_new</code> and
+<code class="function">__builtin_vec_new</code>.
+For <code class="function">calloc</code>, both arguments are checked.
+</p>
+<p>For example:</p>
+<pre class="programlisting">
+==32233== Argument 'size' of function malloc has a fishy (possibly negative) value: -3
+==32233== at 0x4C2CFA7: malloc (vg_replace_malloc.c:298)
+==32233== by 0x400555: foo (fishy.c:15)
+==32233== by 0x400583: main (fishy.c:23)
+</pre>
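+<p>A typical way such a value arises is an unsigned size calculation that
+underflows, as in this minimal, hypothetical sketch (not the fishy.c from
+the output above):</p>
+<pre class="programlisting">
+#include <stdlib.h>
+
+int main(void)
+{
+   size_t used = 10, limit = 7;
+   char* p = (char*)malloc(limit - used);   /* wraps around to a huge
+                                               unsigned value: fishy    */
+   free(p);
+   return 0;
+}
+</pre>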
+<p>In earlier Valgrind versions such values were referred to
+as "silly arguments" and no back-trace was included.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.leaks"></a>4.2.8. Memory leak detection</h3></div></div></div>
+<p>Memcheck keeps track of all heap blocks issued in response to
+calls to
+<code class="function">malloc</code>/<code class="function">new</code> et al.
+So when the program exits, it knows which blocks have not been freed.
+</p>
+<p>If <code class="option">--leak-check</code> is set appropriately, for each
+remaining block, Memcheck determines if the block is reachable from pointers
+within the root-set. The root-set consists of (a) general purpose registers
+of all threads, and (b) initialised, aligned, pointer-sized data words in
+accessible client memory, including stacks.</p>
+<p>There are two ways a block can be reached. The first is with a
+"start-pointer", i.e. a pointer to the start of the block. The second is with
+an "interior-pointer", i.e. a pointer to the middle of the block. There are
+several ways we know of that an interior-pointer can occur:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>The pointer might have originally been a start-pointer and have been
+ moved along deliberately (or not deliberately) by the program. In
+ particular, this can happen if your program uses tagged pointers, i.e.
+ if it uses the bottom one, two or three bits of a pointer, which are
+ normally always zero due to alignment, in order to store extra
+ information.</p></li>
+<li class="listitem"><p>It might be a random junk value in memory, entirely unrelated, just
+ a coincidence.</p></li>
+<li class="listitem"><p>It might be a pointer to the inner char array of a C++
+ <code class="computeroutput">std::string</code>. For example, some
+ compilers add 3 words at the beginning of the std::string to
+ store the length, the capacity and a reference count before the
+ memory containing the array of characters. They return a pointer
+ just after these 3 words, pointing at the char array.</p></li>
+<li class="listitem"><p>Some code might allocate a block of memory, and use the first 8
+ bytes to store (block size - 8) as a 64-bit number.
+ <code class="computeroutput">sqlite3MemMalloc</code> does this. (A sketch of
+ this pattern follows this list.)</p></li>
+<li class="listitem"><p>It might be a pointer to an array of C++ objects (which possess
+ destructors) allocated with <code class="computeroutput">new[]</code>. In
+ this case, some compilers store a "magic cookie" containing the array
+ length at the start of the allocated block, and return a pointer to just
+ past that magic cookie, i.e. an interior-pointer.
+ See <a class="ulink" href="http://theory.uwinnipeg.ca/gnu/gcc/gxxint_14.html" target="_top">this
+ page</a> for more information.</p></li>
+<li class="listitem"><p>It might be a pointer to an inner part of a C++ object using
+ multiple inheritance. </p></li>
+</ul></div>
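+<p>For instance, here is a minimal, hypothetical sketch of the
+size-in-the-first-8-bytes pattern mentioned in the list above, which leaves
+only an interior-pointer to the heap block:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+#include <stdint.h>
+
+/* Allocate n usable bytes, storing the usable size (block size - 8) in the
+   first 8 bytes, sqlite3MemMalloc-style.  The returned pointer points 8
+   bytes into the heap block, i.e. it is an interior-pointer. */
+static void* size_prefixed_malloc(size_t n)
+{
+   uint64_t* p = (uint64_t*)malloc(n + 8);
+   p[0] = n;
+   return p + 1;
+}
+
+static void* q;   /* root-set pointer holding only an interior-pointer */
+
+int main(void)
+{
+   q = size_prefixed_malloc(100);
+   return 0;   /* typically "possibly lost", unless the length64 heuristic
+                  treats the interior-pointer as a start-pointer           */
+}
+</pre>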
+<p>You can optionally activate heuristics to use during the leak
+search to detect the interior pointers corresponding to
+the <code class="computeroutput">stdstring</code>,
+<code class="computeroutput">length64</code>,
+<code class="computeroutput">newarray</code>
+and <code class="computeroutput">multipleinheritance</code> cases. If the
+heuristic detects that an interior pointer corresponds to such a case,
+the block will be considered as reachable by the interior
+pointer. In other words, the interior pointer will be treated
+as if it were a start pointer.</p>
+<p>With that in mind, consider the nine possible cases described by the
+following figure.</p>
+<pre class="programlisting">
+      Pointer chain            AAA Leak Case   BBB Leak Case
+      -------------            -------------   -------------
+(1)   RRR ------------> BBB                     DR
+(2)   RRR ---> AAA ---> BBB    DR               IR
+(3)   RRR               BBB                     DL
+(4)   RRR      AAA ---> BBB    DL               IL
+(5)   RRR ------?-----> BBB                     (y)DR, (n)DL
+(6)   RRR ---> AAA -?-> BBB    DR               (y)IR, (n)DL
+(7)   RRR -?-> AAA ---> BBB    (y)DR, (n)DL     (y)IR, (n)IL
+(8)   RRR -?-> AAA -?-> BBB    (y)DR, (n)DL     (y,y)IR, (n,y)IL, (_,n)DL
+(9)   RRR      AAA -?-> BBB    DL               (y)IL, (n)DL
+
+Pointer chain legend:
+- RRR: a root set node or DR block
+- AAA, BBB: heap blocks
+- --->: a start-pointer
+- -?->: an interior-pointer
+
+Leak Case legend:
+- DR: Directly reachable
+- IR: Indirectly reachable
+- DL: Directly lost
+- IL: Indirectly lost
+- (y)XY: it's XY if the interior-pointer is a real pointer
+- (n)XY: it's XY if the interior-pointer is not a real pointer
+- (_)XY: it's XY in either case
+</pre>
+<p>Every possible case can be reduced to one of the above nine. Memcheck
+merges some of these cases in its output, resulting in the following four
+leak kinds.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>"Still reachable". This covers cases 1 and 2 (for the BBB blocks)
+ above. A start-pointer or chain of start-pointers to the block is
+ found. Since the block is still pointed at, the programmer could, at
+ least in principle, have freed it before program exit. "Still reachable"
+ blocks are very common and arguably not a problem. So, by default,
+ Memcheck won't report such blocks individually.</p></li>
+<li class="listitem"><p>"Definitely lost". This covers case 3 (for the BBB blocks) above.
+ This means that no pointer to the block can be found. The block is
+ classified as "lost", because the programmer could not possibly have
+ freed it at program exit, since no pointer to it exists. This is likely
+ a symptom of having lost the pointer at some earlier point in the
+ program. Such cases should be fixed by the programmer.</p></li>
+<li class="listitem"><p>"Indirectly lost". This covers cases 4 and 9 (for the BBB blocks)
+ above. This means that the block is lost, not because there are no
+ pointers to it, but rather because all the blocks that point to it are
+ themselves lost. For example, if you have a binary tree and the root
+ node is lost, all its children nodes will be indirectly lost. Because
+ the problem will disappear if the definitely lost block that caused the
+ indirect leak is fixed, Memcheck won't report such blocks individually
+ by default.</p></li>
+<li class="listitem"><p>"Possibly lost". This covers cases 5--8 (for the BBB blocks)
+ above. This means that a chain of one or more pointers to the block has
+ been found, but at least one of the pointers is an interior-pointer.
+ This could just be a random value in memory that happens to point into a
+ block, and so you shouldn't consider this ok unless you know you have
+ interior-pointers.</p></li>
+</ul></div>
+<p>(Note: This mapping of the nine possible cases onto four leak kinds is
+not necessarily the best way that leaks could be reported; in particular,
+interior-pointers are treated inconsistently. It is possible the
+categorisation may be improved in the future.)</p>
+<p>Furthermore, if a suppression exists for a block, it will be reported
+as "suppressed" no matter which of the above four kinds it belongs
+to.</p>
+<p>The following is an example leak summary.</p>
+<pre class="programlisting">
+LEAK SUMMARY:
+ definitely lost: 48 bytes in 3 blocks.
+ indirectly lost: 32 bytes in 2 blocks.
+ possibly lost: 96 bytes in 6 blocks.
+ still reachable: 64 bytes in 4 blocks.
+ suppressed: 0 bytes in 0 blocks.
+</pre>
+<p>If heuristics have been used to consider some blocks as
+reachable, the leak summary details the heuristically reachable subset
+of 'still reachable:' per heuristic. In the example below, of the 95
+bytes still reachable, 87 bytes (56+7+8+16) have been considered
+heuristically reachable.
+</p>
+<pre class="programlisting">
+LEAK SUMMARY:
+ definitely lost: 4 bytes in 1 blocks
+ indirectly lost: 0 bytes in 0 blocks
+ possibly lost: 0 bytes in 0 blocks
+ still reachable: 95 bytes in 6 blocks
+ of which reachable via heuristic:
+ stdstring : 56 bytes in 2 blocks
+ length64 : 16 bytes in 1 blocks
+ newarray : 7 bytes in 1 blocks
+ multipleinheritance: 8 bytes in 1 blocks
+ suppressed: 0 bytes in 0 blocks
+</pre>
+<p>If <code class="option">--leak-check=full</code> is specified,
+Memcheck will give details for each definitely lost or possibly lost block,
+including where it was allocated. (Actually, it merges results for all
+blocks that have the same leak kind and sufficiently similar stack traces
+into a single "loss record". The
+<code class="option">--leak-resolution</code> option lets you control the
+meaning of "sufficiently similar".) It cannot tell you when or how or why
+the pointer to a leaked block was lost; you have to work that out for
+yourself. In general, you should attempt to ensure your programs do not
+have any definitely lost or possibly lost blocks at exit.</p>
+<p>For example:</p>
+<pre class="programlisting">
+8 bytes in 1 blocks are definitely lost in loss record 1 of 14
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: mk (leak-tree.c:11)
+ by 0x........: main (leak-tree.c:39)
+
+88 (8 direct, 80 indirect) bytes in 1 blocks are definitely lost in loss record 13 of 14
+ at 0x........: malloc (vg_replace_malloc.c:...)
+ by 0x........: mk (leak-tree.c:11)
+ by 0x........: main (leak-tree.c:25)
+</pre>
+<p>The first message describes a simple case of a single 8 byte block
+that has been definitely lost. The second case mentions another 8 byte
+block that has been definitely lost; the difference is that a further 80
+bytes in other blocks are indirectly lost because of this lost block.
+The loss records are not presented in any notable order, so the loss record
+numbers aren't particularly meaningful. The loss record numbers can be used
+in the Valgrind gdbserver to list the addresses of the leaked blocks and/or give
+more details about how a block is still reachable.</p>
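+<p>For illustration, a minimal, hypothetical sketch (unrelated to the
+leak-tree.c frames shown above) of code that produces a definitely lost
+block:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+
+int main(void)
+{
+   char* p = (char*)malloc(8);   /* 8-byte heap block                       */
+   p = NULL;                     /* the only pointer to it is overwritten,  */
+   (void)p;                      /* so at exit the block is definitely lost */
+   return 0;
+}
+</pre>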
+<p>The option <code class="option">--show-leak-kinds=<set></code>
+controls the set of leak kinds to show
+when <code class="option">--leak-check=full</code> is specified. </p>
+<p>The <code class="option"><set></code> of leak kinds is specified
+in one of the following ways:
+
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>a comma separated list of one or more of
+ <code class="option">definite indirect possible reachable</code>.</p></li>
+<li class="listitem"><p><code class="option">all</code> to specify the complete set (all leak kinds).</p></li>
+<li class="listitem"><p><code class="option">none</code> for the empty set.</p></li>
+</ul></div>
+<p>
+
+</p>
+<p> The default value for the leak kinds to show is
+ <code class="option">--show-leak-kinds=definite,possible</code>.
+</p>
+<p>To also show the reachable and indirectly lost blocks in
+addition to the definitely and possibly lost blocks, you can
+use <code class="option">--show-leak-kinds=all</code>. To only show the
+reachable and indirectly lost blocks, use
+<code class="option">--show-leak-kinds=indirect,reachable</code>. The reachable
+and indirectly lost blocks will then be presented as shown in
+the following two examples.</p>
+<pre class="programlisting">
+64 bytes in 4 blocks are still reachable in loss record 2 of 4
+ at 0x........: malloc (vg_replace_malloc.c:177)
+ by 0x........: mk (leak-cases.c:52)
+ by 0x........: main (leak-cases.c:74)
+
+32 bytes in 2 blocks are indirectly lost in loss record 1 of 4
+ at 0x........: malloc (vg_replace_malloc.c:177)
+ by 0x........: mk (leak-cases.c:52)
+ by 0x........: main (leak-cases.c:80)
+</pre>
+<p>Because there are different kinds of leaks with different
+severities, an interesting question is: which leaks should be
+counted as true "errors" and which should not?
+</p>
+<p> The answer to this question affects the numbers printed in
+the <code class="computeroutput">ERROR SUMMARY</code> line, and also the
+effect of the <code class="option">--error-exitcode</code> option. First, a leak
+is only counted as a true "error"
+if <code class="option">--leak-check=full</code> is specified. Then, the
+option <code class="option">--errors-for-leak-kinds=<set></code> controls
+the set of leak kinds to consider as errors. The default value
+is <code class="option">--errors-for-leak-kinds=definite,possible</code>.
+</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.options"></a>4.3. Memcheck Command-Line Options</h2></div></div></div>
+<div class="variablelist">
+<a name="mc.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.leak-check"></a><span class="term">
+ <code class="option">--leak-check=<no|summary|yes|full> [default: summary] </code>
+ </span>
+</dt>
+<dd><p>When enabled, search for memory leaks when the client
+ program finishes. If set to <code class="varname">summary</code>, it says how
+ many leaks occurred. If set to <code class="varname">full</code> or
+ <code class="varname">yes</code>, each individual leak will be shown
+ in detail and/or counted as an error, as specified by the options
+ <code class="option">--show-leak-kinds</code> and
+ <code class="option">--errors-for-leak-kinds</code>. </p></dd>
+<dt>
+<a name="opt.leak-resolution"></a><span class="term">
+ <code class="option">--leak-resolution=<low|med|high> [default: high] </code>
+ </span>
+</dt>
+<dd>
+<p>When doing leak checking, determines how willing
+ Memcheck is to consider different backtraces to
+ be the same for the purposes of merging multiple leaks into a single
+ leak report. When set to <code class="varname">low</code>, only the first
+ two entries need match. When <code class="varname">med</code>, four entries
+ have to match. When <code class="varname">high</code>, all entries need to
+ match.</p>
+<p>For hardcore leak debugging, you probably want to use
+ <code class="option">--leak-resolution=high</code> together with
+ <code class="option">--num-callers=40</code> or some such large number.
+ </p>
+<p>Note that the <code class="option">--leak-resolution</code> setting
+ does not affect Memcheck's ability to find
+ leaks. It only changes how the results are presented.</p>
+</dd>
+<dt>
+<a name="opt.show-leak-kinds"></a><span class="term">
+ <code class="option">--show-leak-kinds=<set> [default: definite,possible] </code>
+ </span>
+</dt>
+<dd>
+<p>Specifies the leak kinds to show in a <code class="varname">full</code>
+ leak search, in one of the following ways: </p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>a comma separated list of one or more of
+ <code class="option">definite indirect possible reachable</code>.</p></li>
+<li class="listitem"><p><code class="option">all</code> to specify the complete set (all leak kinds).
+ It is equivalent to
+ <code class="option">--show-leak-kinds=definite,indirect,possible,reachable</code>.</p></li>
+<li class="listitem"><p><code class="option">none</code> for the empty set.</p></li>
+</ul></div>
+</dd>
+<dt>
+<a name="opt.errors-for-leak-kinds"></a><span class="term">
+ <code class="option">--errors-for-leak-kinds=<set> [default: definite,possible] </code>
+ </span>
+</dt>
+<dd><p>Specifies the leak kinds to count as errors in a
+ <code class="varname">full</code> leak search. The
+ <code class="option"><set></code> is specified similarly to
+ <code class="option">--show-leak-kinds</code>.
+ </p></dd>
+<dt>
+<a name="opt.leak-check-heuristics"></a><span class="term">
+ <code class="option">--leak-check-heuristics=<set> [default: all] </code>
+ </span>
+</dt>
+<dd>
+<p>Specifies the set of leak check heuristics to be used
+ during leak searches. The heuristics control which interior pointers
+ to a block cause it to be considered as reachable.
+ The heuristic set is specified in one of the following ways:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>a comma separated list of one or more of
+ <code class="option">stdstring length64 newarray multipleinheritance</code>.</p></li>
+<li class="listitem"><p><code class="option">all</code> to activate the complete set of
+ heuristics.
+ It is equivalent to
+ <code class="option">--leak-check-heuristics=stdstring,length64,newarray,multipleinheritance</code>.</p></li>
+<li class="listitem"><p><code class="option">none</code> for the empty set.</p></li>
+</ul></div>
+</dd>
+<dt>
+<a name="opt.show-reachable"></a><span class="term">
+ <code class="option">--show-reachable=<yes|no> </code>
+ , </span><span class="term">
+ <code class="option">--show-possibly-lost=<yes|no> </code>
+ </span>
+</dt>
+<dd>
+<p>These options provide an alternative way to specify the leak kinds to show:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>
+ <code class="option">--show-reachable=no --show-possibly-lost=yes</code> is equivalent to
+ <code class="option">--show-leak-kinds=definite,possible</code>.
+ </p></li>
+<li class="listitem"><p>
+ <code class="option">--show-reachable=no --show-possibly-lost=no</code> is equivalent to
+ <code class="option">--show-leak-kinds=definite</code>.
+ </p></li>
+<li class="listitem"><p>
+ <code class="option">--show-reachable=yes</code> is equivalent to
+ <code class="option">--show-leak-kinds=all</code>.
+ </p></li>
+</ul></div>
+</dd>
+<dt>
+<a name="opt.undef-value-errors"></a><span class="term">
+ <code class="option">--undef-value-errors=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>Controls whether Memcheck reports
+ undefined value errors. Set this to
+ <code class="varname">no</code> if you don't want to see undefined value
+ errors. It also has the side effect of speeding up
+ Memcheck somewhat.
+ </p></dd>
+<dt>
+<a name="opt.track-origins"></a><span class="term">
+ <code class="option">--track-origins=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>Controls whether Memcheck tracks
+ the origin of uninitialised values. By default, it does not,
+ which means that although it can tell you that an
+ uninitialised value is being used in a dangerous way, it
+ cannot tell you where the uninitialised value came from. This
+ often makes it difficult to track down the root problem.
+ </p>
+<p>When set
+ to <code class="varname">yes</code>, Memcheck keeps
+ track of the origins of all uninitialised values. Then, when
+ an uninitialised value error is
+ reported, Memcheck will try to show the
+ origin of the value. An origin can be one of the following
+ four places: a heap block, a stack allocation, a client
+ request, or miscellaneous other sources (eg, a call
+ to <code class="varname">brk</code>).
+ </p>
+<p>For uninitialised values originating from a heap
+ block, Memcheck shows where the block was
+ allocated. For uninitialised values originating from a stack
+ allocation, Memcheck can tell you which
+ function allocated the value, but no more than that -- typically
+ it shows you the source location of the opening brace of the
+ function. So you should carefully check that all of the
+ function's local variables are initialised properly.
+ </p>
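+<p>As a hedged sketch of what origin tracking reports (the variable
+ names are purely illustrative, and the program should be compiled
+ without optimisation), consider:</p>
+<pre class="programlisting">
+#include <stdio.h>
+#include <stdlib.h>
+
+int main(void)
+{
+   int  stack_val;                              /* stack allocation */
+   int *heap_val = malloc(sizeof *heap_val);    /* heap block       */
+
+   /* Both branches depend on uninitialised data.  With
+      --track-origins=yes, Memcheck attributes the first error to a
+      stack allocation in main (typically the opening brace of main)
+      and the second to the malloc call above. */
+   if (stack_val > 0)  printf("stack\n");
+   if (*heap_val > 0)  printf("heap\n");
+
+   free(heap_val);
+   return 0;
+}</pre>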
+<p>Performance overhead: origin tracking is expensive. It
+ halves Memcheck's speed and increases
+ memory use by a minimum of 100MB, and possibly more.
+ Nevertheless it can drastically reduce the effort required to
+ identify the root cause of uninitialised value errors, and so
+ is often a programmer productivity win, despite running
+ more slowly.
+ </p>
+<p>Accuracy: Memcheck tracks origins
+ quite accurately. To avoid very large space and time
+ overheads, some approximations are made. It is possible,
+ although unlikely, that Memcheck will report an incorrect origin, or
+ not be able to identify any origin.
+ </p>
+<p>Note that the combination
+ <code class="option">--track-origins=yes</code>
+ and <code class="option">--undef-value-errors=no</code> is
+ nonsensical. Memcheck checks for and
+ rejects this combination at startup.
+ </p>
+</dd>
+<dt>
+<a name="opt.partial-loads-ok"></a><span class="term">
+ <code class="option">--partial-loads-ok=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>Controls how Memcheck handles 32-, 64-, 128- and 256-bit
+ naturally aligned loads from addresses for which some bytes are
+ addressable and others are not. When <code class="varname">yes</code>, such
+ loads do not produce an address error. Instead, loaded bytes
+ originating from illegal addresses are marked as uninitialised, and
+ those corresponding to legal addresses are handled in the normal
+ way.</p>
+<p>When <code class="varname">no</code>, loads from partially invalid
+ addresses are treated the same as loads from completely invalid
+ addresses: an illegal-address error is issued, and the resulting
+ bytes are marked as initialised.</p>
+<p>Note that code that behaves in this way is in violation of
+ the ISO C/C++ standards, and should be considered broken. If
+ at all possible, such code should be fixed.</p>
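+<p>A minimal sketch of such a partial load (it deliberately breaks the
+ rules, as discussed above, and is for illustration only):</p>
+<pre class="programlisting">
+#include <stdint.h>
+#include <stdlib.h>
+
+int main(void)
+{
+   /* Only 6 bytes are addressable, but the naturally aligned 8-byte
+      load below also touches the 2 bytes just past the block's end. */
+   uint8_t  *p = malloc(6);
+   uint64_t  v = *(uint64_t *)p;
+
+   /* With --partial-loads-ok=yes no address error is reported here;
+      the 2 out-of-range bytes of v are merely marked undefined.
+      With --partial-loads-ok=no an illegal-address error is issued. */
+   (void)v;
+   free(p);
+   return 0;
+}</pre>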
+</dd>
+<dt>
+<a name="opt.expensive-definedness-checks"></a><span class="term">
+ <code class="option">--expensive-definedness-checks=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>Controls whether Memcheck should employ more precise but also more
+ expensive (time consuming) algorithms when checking the definedness of a
+ value. The default setting is not to do that and it is usually
+ sufficient. However, for highly optimised code Valgrind may sometimes
+ incorrectly complain.
+ Invoking Valgrind with <code class="option">--expensive-definedness-checks=yes</code>
+ helps but comes at a performance cost. A runtime degradation of
+ 25% has been observed, but the extra cost depends a lot on the
+ application at hand.
+ </p></dd>
+<dt>
+<a name="opt.keep-stacktraces"></a><span class="term">
+ <code class="option">--keep-stacktraces=alloc|free|alloc-and-free|alloc-then-free|none [default: alloc-and-free] </code>
+ </span>
+</dt>
+<dd>
+<p>Controls which stack trace(s) to keep for malloc'd and/or
+ free'd blocks.
+ </p>
+<p>With <code class="varname">alloc-then-free</code>, a stack trace is
+ recorded at allocation time, and is associated with the block.
+ When the block is freed, a second stack trace is recorded, and
+ this replaces the allocation stack trace. As a result, any "use
+ after free" errors relating to this block can only show a stack
+ trace for where the block was freed.
+ </p>
+<p>With <code class="varname">alloc-and-free</code>, both allocation
+ and the deallocation stack traces for the block are stored.
+ Hence a "use after free" error will
+ show both, which may make the error easier to diagnose.
+ Compared to <code class="varname">alloc-then-free</code>, this setting
+ slightly increases Valgrind's memory use as the block contains two
+ references instead of one.
+ </p>
+<p>With <code class="varname">alloc</code>, only the allocation stack
+ trace is recorded (and reported). With <code class="varname">free</code>,
+ only the deallocation stack trace is recorded (and reported).
+ These values somewhat decrease Valgrind's memory and cpu usage.
+ They can be useful depending on the error types you are
+ searching for and the level of detail you need to analyse
+ them. For example, if you are only interested in memory leak
+ errors, it is sufficient to record the allocation stack traces.
+ </p>
+<p>With <code class="varname">none</code>, no stack traces are recorded
+ for malloc and free operations. If your program allocates a lot
+ of blocks and/or allocates/frees from many different stack
+ traces, this can significantly decrease cpu and/or memory
+ required. Of course, few details will be reported for errors
+ related to heap blocks.
+ </p>
+<p>Note that once a stack trace is recorded, Valgrind keeps
+ the stack trace in memory even if it is not referenced by any
+ block. Some programs (for example, recursive algorithms) can
+ generate a huge number of stack traces. If Valgrind uses too
+ much memory in such circumstances, you can reduce the memory
+ required with the option <code class="varname">--keep-stacktraces</code>
+ and/or by using a smaller value for the
+ option <code class="varname">--num-callers</code>.
+ </p>
+</dd>
+<dt>
+<a name="opt.freelist-vol"></a><span class="term">
+ <code class="option">--freelist-vol=<number> [default: 20000000] </code>
+ </span>
+</dt>
+<dd>
+<p>When the client program releases memory using
+ <code class="function">free</code> (in <code class="literal">C</code>) or
+ <code class="computeroutput">delete</code>
+ (<code class="literal">C++</code>), that memory is not immediately made
+ available for re-allocation. Instead, it is marked inaccessible
+ and placed in a queue of freed blocks. The purpose is to defer as
+ long as possible the point at which freed-up memory comes back
+ into circulation. This increases the chance that
+ Memcheck will be able to detect invalid
+ accesses to blocks for some significant period of time after they
+ have been freed.</p>
+<p>This option specifies the maximum total size, in bytes, of the
+ blocks in the queue. The default value is twenty million bytes.
+ Increasing this increases the total amount of memory used by
+ Memcheck but may detect invalid uses of freed
+ blocks which would otherwise go undetected.</p>
+</dd>
+<dt>
+<a name="opt.freelist-big-blocks"></a><span class="term">
+ <code class="option">--freelist-big-blocks=<number> [default: 1000000] </code>
+ </span>
+</dt>
+<dd>
+<p>When making blocks from the queue of freed blocks available
+ for re-allocation, Memcheck will in priority re-circulate the blocks
+ with a size greater or equal to <code class="option">--freelist-big-blocks</code>.
+ This ensures that freeing big blocks (in particular freeing blocks bigger than
+ <code class="option">--freelist-vol</code>) does not immediately lead to a re-circulation
+ of all (or a lot of) the small blocks in the free list. In other words,
+ this option increases the likelihood to discover dangling pointers
+ for the "small" blocks, even when big blocks are freed.</p>
+<p>Setting a value of 0 means that all the blocks are re-circulated
+ in a FIFO order. </p>
+</dd>
+<dt>
+<a name="opt.workaround-gcc296-bugs"></a><span class="term">
+ <code class="option">--workaround-gcc296-bugs=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled, Memcheck assumes that reads and writes some small
+ distance below the stack pointer are due to bugs in GCC 2.96, and
+ does not report them. The "small distance" is 256 bytes by
+ default. Note that GCC 2.96 is the default compiler on some ancient
+ Linux distributions (RedHat 7.X) and so you may need to use this
+ option. Do not use it if you do not have to, as it can cause real
+ errors to be overlooked. A better alternative is to use a more
+ recent GCC in which this bug is fixed.</p>
+<p>You may also need to use this option when working with
+ GCC 3.X or 4.X on 32-bit PowerPC Linux. This is because
+ GCC generates code which occasionally accesses below the
+ stack pointer, particularly for floating-point to/from integer
+ conversions. This is in violation of the 32-bit PowerPC ELF
+ specification, which makes no provision for locations below the
+ stack pointer to be accessible.</p>
+<p>This option is deprecated as of version 3.12 and may be
+ removed from future versions. You should instead use
+ <code class="option">--ignore-range-below-sp</code> to specify the exact
+ range of offsets below the stack pointer that should be ignored.
+ A suitable equivalent
+ is <code class="option">--ignore-range-below-sp=1024-1</code>.
+ </p>
+</dd>
+<dt>
+<a name="opt.ignore-range-below-sp"></a><span class="term">
+ <code class="option">--ignore-range-below-sp=<number>-<number> </code>
+ </span>
+</dt>
+<dd><p>This is a more general replacement for the deprecated
+ <code class="option">--workaround-gcc296-bugs</code> option. When
+ specified, it causes Memcheck not to report errors for accesses
+ at the specified offsets below the stack pointer. The two
+ offsets must be positive decimal numbers and -- somewhat
+ counterintuitively -- the first one must be larger, in order to
+ imply a non-wraparound address range to ignore. For example,
+ to ignore 4 byte accesses at 8192 bytes below the stack
+ pointer,
+ use <code class="option">--ignore-range-below-sp=8192-8189</code>. Only
+ one range may be specified.
+ </p></dd>
+<dt>
+<a name="opt.show-mismatched-frees"></a><span class="term">
+ <code class="option">--show-mismatched-frees=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd>
+<p>When enabled, Memcheck checks that heap blocks are
+ deallocated using a function that matches the allocating
+ function. That is, it expects <code class="varname">free</code> to be
+ used to deallocate blocks allocated
+ by <code class="varname">malloc</code>, <code class="varname">delete</code> for
+ blocks allocated by <code class="varname">new</code>,
+ and <code class="varname">delete[]</code> for blocks allocated
+ by <code class="varname">new[]</code>. If a mismatch is detected, an
+ error is reported. This is in general important because in some
+ environments, freeing with a non-matching function can cause
+ crashes.</p>
+<p>There is however a scenario where such mismatches cannot
+ be avoided. That is when the user provides implementations of
+ <code class="varname">new</code>/<code class="varname">new[]</code> that
+ call <code class="varname">malloc</code> and
+ of <code class="varname">delete</code>/<code class="varname">delete[]</code> that
+ call <code class="varname">free</code>, and these functions are
+ asymmetrically inlined. For example, imagine
+ that <code class="varname">delete[]</code> is inlined
+ but <code class="varname">new[]</code> is not. The result is that
+ Memcheck "sees" all <code class="varname">delete[]</code> calls as direct
+ calls to <code class="varname">free</code>, even when the program source
+ contains no mismatched calls.</p>
+<p>This causes a lot of confusing and irrelevant error
+ reports. <code class="varname">--show-mismatched-frees=no</code> disables
+ these checks. It is not generally advisable to disable them,
+ though, because you may miss real errors as a result.</p>
+</dd>
+<dt>
+<a name="opt.ignore-ranges"></a><span class="term">
+ <code class="option">--ignore-ranges=0xPP-0xQQ[,0xRR-0xSS] </code>
+ </span>
+</dt>
+<dd><p>Any ranges listed in this option (and multiple ranges can be
+ specified, separated by commas) will be ignored by Memcheck's
+ addressability checking.</p></dd>
+<dt>
+<a name="opt.malloc-fill"></a><span class="term">
+ <code class="option">--malloc-fill=<hexnumber> </code>
+ </span>
+</dt>
+<dd><p>Fills blocks allocated
+ by <code class="computeroutput">malloc</code>,
+ <code class="computeroutput">new</code>, etc, but not
+ by <code class="computeroutput">calloc</code>, with the specified
+ byte. This can be useful when trying to shake out obscure
+ memory corruption problems. The allocated area is still
+ regarded by Memcheck as undefined -- this option only affects its
+ contents. Note that <code class="option">--malloc-fill</code> does not
+ affect a block of memory when it is used as argument
+ to client requests VALGRIND_MEMPOOL_ALLOC or
+ VALGRIND_MALLOCLIKE_BLOCK.
+ </p></dd>
+<dt>
+<a name="opt.free-fill"></a><span class="term">
+ <code class="option">--free-fill=<hexnumber> </code>
+ </span>
+</dt>
+<dd><p>Fills blocks freed
+ by <code class="computeroutput">free</code>,
+ <code class="computeroutput">delete</code>, etc, with the
+ specified byte value. This can be useful when trying to shake out
+ obscure memory corruption problems. The freed area is still
+ regarded by Memcheck as not valid for access -- this option only
+ affects its contents. Note that <code class="option">--free-fill</code> does not
+ affect a block of memory when it is used as argument to
+ client requests VALGRIND_MEMPOOL_FREE or VALGRIND_FREELIKE_BLOCK.
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.suppfiles"></a>4.4. Writing suppression files</h2></div></div></div>
+<p>The basic suppression format is described in
+<a class="xref" href="manual-core.html#manual-core.suppress" title="2.5. Suppressing errors">Suppressing errors</a>.</p>
+<p>The suppression-type (second) line should have the form:</p>
+<pre class="programlisting">
+Memcheck:suppression_type</pre>
+<p>The Memcheck suppression types are as follows:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="varname">Value1</code>,
+ <code class="varname">Value2</code>,
+ <code class="varname">Value4</code>,
+ <code class="varname">Value8</code>,
+ <code class="varname">Value16</code>,
+ meaning an uninitialised-value error when
+ using a value of 1, 2, 4, 8 or 16 bytes.</p></li>
+<li class="listitem"><p><code class="varname">Cond</code> (or its old
+ name, <code class="varname">Value0</code>), meaning use
+ of an uninitialised CPU condition code.</p></li>
+<li class="listitem"><p><code class="varname">Addr1</code>,
+ <code class="varname">Addr2</code>,
+ <code class="varname">Addr4</code>,
+ <code class="varname">Addr8</code>,
+ <code class="varname">Addr16</code>,
+ meaning an invalid address during a
+ memory access of 1, 2, 4, 8 or 16 bytes respectively.</p></li>
+<li class="listitem"><p><code class="varname">Jump</code>, meaning a
+ jump to an unaddressable location error.</p></li>
+<li class="listitem"><p><code class="varname">Param</code>, meaning an
+ invalid system call parameter error.</p></li>
+<li class="listitem"><p><code class="varname">Free</code>, meaning an
+ invalid or mismatching free.</p></li>
+<li class="listitem"><p><code class="varname">Overlap</code>, meaning a
+ <code class="computeroutput">src</code> /
+ <code class="computeroutput">dst</code> overlap in
+ <code class="function">memcpy</code> or a similar function.</p></li>
+<li class="listitem"><p><code class="varname">Leak</code>, meaning
+ a memory leak.</p></li>
+</ul></div>
+<p><code class="computeroutput">Param</code> errors have a mandatory extra
+information line at this point, which is the name of the offending
+system call parameter. </p>
+<p><code class="computeroutput">Leak</code> errors have an optional
+extra information line, with the following format:</p>
+<pre class="programlisting">
+match-leak-kinds:<set></pre>
+<p>where <code class="computeroutput"><set></code> specifies which
+leak kinds are matched by this suppression entry.
+<code class="computeroutput"><set></code> is specified in the
+same way as with the option <code class="option">--show-leak-kinds</code>, that is,
+one of the following:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">a comma separated list of one or more of
+ <code class="option">definite indirect possible reachable</code>.
+ </li>
+<li class="listitem">
+<code class="option">all</code> to specify the complete set (all leak kinds).
+ </li>
+<li class="listitem">
+<code class="option">none</code> for the empty set.
+ </li>
+</ul></div>
+<p>If this optional extra line is not present, the suppression
+entry will match all leak kinds.</p>
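+<p>For example, a leak suppression that matches only definitely lost
+blocks allocated via a (hypothetical) function
+<code class="function">mk</code> called from <code class="function">main</code> could
+look like this:</p>
+<pre class="programlisting">
+{
+   suppress-definite-leaks-from-mk
+   Memcheck:Leak
+   match-leak-kinds:definite
+   fun:malloc
+   fun:mk
+   fun:main
+}</pre>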
+<p>Be aware that leak suppressions that are created using
+<code class="option">--gen-suppressions</code> will contain this optional extra
+line, and therefore may match fewer leaks than you expect. You may
+want to remove the line before using the generated
+suppressions.</p>
+<p>The other Memcheck error kinds do not have extra lines.</p>
+<p>
+If you give the <code class="option">-v</code> option, Valgrind will print
+the list of used suppressions at the end of execution.
+For a leak suppression, this output gives the number of different
+loss records that match the suppression, and the number of bytes
+and blocks suppressed by the suppression.
+If the run contains multiple leak checks, the number of bytes and blocks
+are reset to zero before each new leak check. Note that the number of different
+loss records is not reset to zero.</p>
+<p>In the example below, in the last leak search, 7 blocks and 96 bytes have
+been suppressed by a suppression with the name
+<code class="option">some_leak_suppression</code>:</p>
+<pre class="programlisting">
+--21041-- used_suppression: 10 some_other_leak_suppression s.supp:14 suppressed: 12,400 bytes in 1 blocks
+--21041-- used_suppression: 39 some_leak_suppression s.supp:2 suppressed: 96 bytes in 7 blocks
+</pre>
+<p>For <code class="varname">ValueN</code> and <code class="varname">AddrN</code>
+errors, the first line of the calling context is either the name of
+the function in which the error occurred, or, failing that, the full
+path of the <code class="filename">.so</code> file or executable containing the
+error location. For <code class="varname">Free</code> errors, the first line is
+the name of the function doing the freeing (eg,
+<code class="function">free</code>, <code class="function">__builtin_vec_delete</code>,
+etc). For <code class="varname">Overlap</code> errors, the first line is the name of the
+function with the overlapping arguments (eg.
+<code class="function">memcpy</code>, <code class="function">strcpy</code>, etc).</p>
+<p>The last part of any suppression specifies the rest of the
+calling context that needs to be matched.</p>
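+<p>As a further sketch (the frame names are hypothetical), a
+<code class="varname">Param</code> suppression carries the offending system call
+parameter name as its extra information line, followed by the calling
+context to match:</p>
+<pre class="programlisting">
+{
+   suppress-write-param-from-do-logging
+   Memcheck:Param
+   write(buf)
+   ...
+   fun:do_logging
+   fun:main
+}</pre>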
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.machine"></a>4.5. Details of Memcheck's checking machinery</h2></div></div></div>
+<p>Read this section if you want to know, in detail, exactly
+what and how Memcheck is checking.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.value"></a>4.5.1. Valid-value (V) bits</h3></div></div></div>
+<p>It is simplest to think of Memcheck implementing a synthetic CPU
+which is identical to a real CPU, except for one crucial detail. Every
+bit (literally) of data processed, stored and handled by the real CPU
+has, in the synthetic CPU, an associated "valid-value" bit, which says
+whether or not the accompanying bit has a legitimate value. In the
+discussions which follow, this bit is referred to as the V (valid-value)
+bit.</p>
+<p>Each byte in the system therefore has 8 V bits which follow it
+wherever it goes. For example, when the CPU loads a word-size item (4
+bytes) from memory, it also loads the corresponding 32 V bits from a
+bitmap which stores the V bits for the process' entire address space.
+If the CPU should later write the whole or some part of that value to
+memory at a different address, the relevant V bits will be stored back
+in the V-bit bitmap.</p>
+<p>In short, each bit in the system has (conceptually) an associated V
+bit, which follows it around everywhere, even inside the CPU. Yes, all the
+CPU's registers (integer, floating point, vector and condition registers)
+have their own V bit vectors. For this to work, Memcheck uses a great deal
+of compression to represent the V bits compactly.</p>
+<p>Copying values around does not cause Memcheck to check for, or
+report on, errors. However, when a value is used in a way which might
+conceivably affect your program's externally-visible behaviour,
+the associated V bits are immediately checked. If any of these indicate
+that the value is undefined (even partially), an error is reported.</p>
+<p>Here's an (admittedly nonsensical) example:</p>
+<pre class="programlisting">
+int i, j;
+int a[10], b[10];
+for ( i = 0; i < 10; i++ ) {
+ j = a[i];
+ b[i] = j;
+}</pre>
+<p>Memcheck emits no complaints about this, since it merely copies
+uninitialised values from <code class="varname">a[]</code> into
+<code class="varname">b[]</code>, and doesn't use them in a way which could
+affect the behaviour of the program. However, if
+the loop is changed to:</p>
+<pre class="programlisting">
+for ( i = 0; i < 10; i++ ) {
+ j += a[i];
+}
+if ( j == 77 )
+ printf("hello there\n");
+</pre>
+<p>then Memcheck will complain, at the
+<code class="computeroutput">if</code>, that the condition depends on
+uninitialised values. Note that it <span class="command"><strong>doesn't</strong></span> complain
+at the <code class="varname">j += a[i];</code>, since at that point the
+undefinedness is not "observable". It's only when a decision has to be
+made as to whether or not to do the <code class="function">printf</code> -- an
+observable action of your program -- that Memcheck complains.</p>
+<p>Most low level operations, such as adds, cause Memcheck to use the
+V bits for the operands to calculate the V bits for the result. Even if
+the result is partially or wholly undefined, it does not
+complain.</p>
+<p>Checks on definedness only occur in three places: when a value is
+used to generate a memory address, when a control flow decision needs to
+be made, and when a system call is detected (Memcheck checks the
+definedness of system call parameters as required).</p>
+<p>If a check should detect undefinedness, an error message is
+issued. The resulting value is subsequently regarded as well-defined.
+To do otherwise would give long chains of error messages. In other
+words, once Memcheck reports an undefined value error, it tries to
+avoid reporting further errors derived from that same undefined
+value.</p>
+<p>This sounds overcomplicated. Why not just check all reads from
+memory, and complain if an undefined value is loaded into a CPU
+register? Well, that doesn't work well, because perfectly legitimate C
+programs routinely copy uninitialised values around in memory, and we
+don't want endless complaints about that. Here's the canonical example.
+Consider a struct like this:</p>
+<pre class="programlisting">
+struct S { int x; char c; };
+struct S s1, s2;
+s1.x = 42;
+s1.c = 'z';
+s2 = s1;
+</pre>
+<p>The question to ask is: how large is <code class="varname">struct S</code>,
+in bytes? An <code class="varname">int</code> is 4 bytes and a
+<code class="varname">char</code> one byte, so perhaps a <code class="varname">struct
+S</code> occupies 5 bytes? Wrong. All non-toy compilers we know
+of will round the size of <code class="varname">struct S</code> up to a whole
+number of words, in this case 8 bytes. Not doing this forces compilers
+to generate truly appalling code for accessing arrays of
+<code class="varname">struct S</code>'s on some architectures.</p>
+<p>So <code class="varname">s1</code> occupies 8 bytes, yet only 5 of them will
+be initialised. For the assignment <code class="varname">s2 = s1</code>, GCC
+generates code to copy all 8 bytes wholesale into <code class="varname">s2</code>
+without regard for their meaning. If Memcheck simply checked values as
+they came out of memory, it would yelp every time a structure assignment
+like this happened. So the more complicated behaviour described above
+is necessary. This allows GCC to copy
+<code class="varname">s1</code> into <code class="varname">s2</code> any way it likes, and a
+warning will only be emitted if the uninitialised values are later
+used.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.vaddress"></a>4.5.2. Valid-address (A) bits</h3></div></div></div>
+<p>Notice that the previous subsection describes how the validity of
+values is established and maintained without having to say whether the
+program does or does not have the right to access any particular memory
+location. We now consider the latter question.</p>
+<p>As described above, every bit in memory or in the CPU has an
+associated valid-value (V) bit. In addition, all bytes in memory, but
+not in the CPU, have an associated valid-address (A) bit. This
+indicates whether or not the program can legitimately read or write that
+location. It does not give any indication of the validity of the data
+at that location -- that's the job of the V bits -- only whether or not
+the location may be accessed.</p>
+<p>Every time your program reads or writes memory, Memcheck checks
+the A bits associated with the address. If any of them indicate an
+invalid address, an error is emitted. Note that the reads and writes
+themselves do not change the A bits, only consult them.</p>
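+<p>For illustration, here is a minimal sketch of accesses that fail
+this A-bit check; both are reported as invalid writes:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+
+int main(void)
+{
+   char *p = malloc(8);   /* exactly 8 bytes become addressable */
+
+   p[8] = 'x';            /* one byte past the block: the A bit says
+                             "no", so an invalid write is reported */
+   free(p);               /* the whole block becomes unaddressable */
+   p[0] = 'y';            /* use after free: another invalid write */
+
+   return 0;
+}</pre>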
+<p>So how do the A bits get set/cleared? Like this:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>When the program starts, all the global data areas are
+ marked as accessible.</p></li>
+<li class="listitem"><p>When the program does
+ <code class="function">malloc</code>/<code class="computeroutput">new</code>,
+ the A bits for exactly the area allocated, and not a byte more,
+ are marked as accessible. Upon freeing the area the A bits are
+ changed to indicate inaccessibility.</p></li>
+<li class="listitem"><p>When the stack pointer register (<code class="literal">SP</code>) moves
+ up or down, A bits are set. The rule is that the area from
+ <code class="literal">SP</code> up to the base of the stack is marked as
+ accessible, and below <code class="literal">SP</code> is inaccessible. (If
+ that sounds illogical, bear in mind that the stack grows down, not
+ up, on almost all Unix systems, including GNU/Linux.) Tracking
+ <code class="literal">SP</code> like this has the useful side-effect that the
+ section of stack used by a function for local variables etc is
+ automatically marked accessible on function entry and inaccessible
+ on exit.</p></li>
+<li class="listitem"><p>When doing system calls, A bits are changed appropriately.
+ For example, <code class="literal">mmap</code>
+ magically makes files appear in the process'
+ address space, so the A bits must be updated if <code class="literal">mmap</code>
+ succeeds.</p></li>
+<li class="listitem"><p>Optionally, your program can tell Memcheck about such changes
+ explicitly, using the client request mechanism described
+ above.</p></li>
+</ul></div>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.together"></a>4.5.3. Putting it all together</h3></div></div></div>
+<p>Memcheck's checking machinery can be summarised as
+follows:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Each byte in memory has 8 associated V (valid-value) bits,
+ saying whether or not the byte has a defined value, and a single A
+ (valid-address) bit, saying whether or not the program currently has
+ the right to read/write that address. As mentioned above, heavy
+ use of compression means the overhead is typically around 25%.</p></li>
+<li class="listitem"><p>When memory is read or written, the relevant A bits are
+ consulted. If they indicate an invalid address, Memcheck emits an
+ Invalid read or Invalid write error.</p></li>
+<li class="listitem"><p>When memory is read into the CPU's registers, the relevant V
+ bits are fetched from memory and stored in the simulated CPU. They
+ are not consulted.</p></li>
+<li class="listitem"><p>When a register is written out to memory, the V bits for that
+ register are written back to memory too.</p></li>
+<li class="listitem"><p>When values in CPU registers are used to generate a memory
+ address, or to determine the outcome of a conditional branch, the V
+ bits for those values are checked, and an error emitted if any of
+ them are undefined.</p></li>
+<li class="listitem"><p>When values in CPU registers are used for any other purpose,
+ Memcheck computes the V bits for the result, but does not check
+ them.</p></li>
+<li class="listitem"><p>Once the V bits for a value in the CPU have been checked, they
+ are then set to indicate validity. This avoids long chains of
+ errors.</p></li>
+<li class="listitem">
+<p>When values are loaded from memory, Memcheck checks the A bits
+ for that location and issues an illegal-address warning if needed.
+ In that case, the V bits loaded are forced to indicate Valid,
+ despite the location being invalid.</p>
+<p>This apparently strange choice reduces the amount of confusing
+ information presented to the user. It avoids the unpleasant
+ phenomenon in which memory is read from a place which is both
+ unaddressable and contains invalid values, and, as a result, you get
+ not only an invalid-address (read/write) error, but also a
+ potentially large set of uninitialised-value errors, one for every
+ time the value is used.</p>
+<p>There is a hazy boundary case to do with multi-byte loads from
+ addresses which are partially valid and partially invalid. See the
+ description of the option <code class="option">--partial-loads-ok</code> for details.
+ </p>
+</li>
+</ul></div>
+<p>Memcheck intercepts calls to <code class="function">malloc</code>,
+<code class="function">calloc</code>, <code class="function">realloc</code>,
+<code class="function">valloc</code>, <code class="function">memalign</code>,
+<code class="function">free</code>, <code class="computeroutput">new</code>,
+<code class="computeroutput">new[]</code>,
+<code class="computeroutput">delete</code> and
+<code class="computeroutput">delete[]</code>. The behaviour you get
+is:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="function">malloc</code>/<code class="function">new</code>/<code class="computeroutput">new[]</code>:
+ the returned memory is marked as addressable but not having valid
+ values. This means you have to write to it before you can read
+ it.</p></li>
+<li class="listitem"><p><code class="function">calloc</code>: returned memory is marked both
+ addressable and valid, since <code class="function">calloc</code> clears
+ the area to zero.</p></li>
+<li class="listitem"><p><code class="function">realloc</code>: if the new size is larger than
+ the old, the new section is addressable but invalid, as with
+ <code class="function">malloc</code>. If the new size is smaller, the
+ dropped-off section is marked as unaddressable. You may only pass to
+ <code class="function">realloc</code> a pointer previously issued to you by
+ <code class="function">malloc</code>/<code class="function">calloc</code>/<code class="function">realloc</code>.</p></li>
+<li class="listitem"><p><code class="function">free</code>/<code class="computeroutput">delete</code>/<code class="computeroutput">delete[]</code>:
+ you may only pass to these functions a pointer previously issued
+ to you by the corresponding allocation function. Otherwise,
+ Memcheck complains. If the pointer is indeed valid, Memcheck
+ marks the entire area it points at as unaddressable, and places
+ the block in the freed-blocks-queue. The aim is to defer as long
+ as possible reallocation of this block. Until that happens, all
+ attempts to access it will elicit an invalid-address error, as you
+ would hope.</p></li>
+</ul></div>
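+<p>A short sketch illustrating several of the behaviours listed above;
+the commented-out access would be reported as an invalid write:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+#include <string.h>
+
+int main(void)
+{
+   char *p = malloc(16);      /* addressable, but undefined            */
+   memset(p, 0, 16);          /* now addressable and defined           */
+
+   p = realloc(p, 8);         /* only 8 bytes remain addressable       */
+   /* p[12] = 1; */           /* would be reported as an invalid write */
+
+   char *q = calloc(4, 4);    /* addressable and defined (zero-filled) */
+   q[0] = 1;
+
+   free(q);
+   free(p);
+   return 0;
+}</pre>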
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.monitor-commands"></a>4.6. Memcheck Monitor Commands</h2></div></div></div>
+<p>The Memcheck tool provides monitor commands handled by Valgrind's
+built-in gdbserver (see <a class="xref" href="manual-core-adv.html#manual-core-adv.gdbserver-commandhandling" title="3.2.5. Monitor command handling by the Valgrind gdbserver">Monitor command handling by the Valgrind gdbserver</a>).
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p><code class="varname">xb <addr> [<len>]</code>
+ shows the definedness (V) bits and values for <len> (default 1)
+ bytes starting at <addr>.
+ For each 8 bytes, two lines are output.
+ </p>
+<p>
+ The first line shows the validity bits for 8 bytes.
+ The definedness of each byte in the range is given using two hexadecimal
+ digits. These hexadecimal digits encode the validity of each bit of the
+ corresponding byte,
+ using 0 if the bit is defined and 1 if the bit is undefined.
+ If a byte is not addressable, its validity bits are replaced
+ by <code class="varname">__</code> (a double underscore).
+ </p>
+<p>
+ The second line shows the values of the bytes below the corresponding
+ validity bits. The format used to show the byte values is similar to the
+ GDB command 'x /<len>xb <addr>'. The value of a non-addressable
+ byte is shown as ?? (two question marks).
+ </p>
+<p>
+ In the following example, <code class="varname">string10</code> is an array
+ of 10 characters, in which the even-numbered bytes are
+ undefined. In addition, the byte corresponding
+ to <code class="varname">string10[5]</code> is not addressable.
+ </p>
+<pre class="programlisting">
+(gdb) p &string10
+$4 = (char (*)[10]) 0x804a2f0
+(gdb) mo xb 0x804a2f0 10
+ ff 00 ff 00 ff __ ff 00
+0x804A2F0: 0x3f 0x6e 0x3f 0x65 0x3f 0x?? 0x3f 0x65
+ ff 00
+0x804A2F8: 0x3f 0x00
+Address 0x804A2F0 len 10 has 1 bytes unaddressable
+(gdb)
+</pre>
+<p> The command xb cannot be used with registers. To get
+ the validity bits of a register, you must start Valgrind with the
+ option <code class="option">--vgdb-shadow-registers=yes</code>. The validity
+ bits of a register can then be obtained by printing the 'shadow 1'
+ corresponding register. In the below x86 example, the register
+ eax has all its bits undefined, while the register ebx is fully
+ defined.
+ </p>
+<pre class="programlisting">
+(gdb) p /x $eaxs1
+$9 = 0xffffffff
+(gdb) p /x $ebxs1
+$10 = 0x0
+(gdb)
+</pre>
+</li>
+<li class="listitem">
+<p><code class="varname">get_vbits <addr> [<len>]</code>
+ shows the definedness (V) bits for <len> (default 1) bytes
+ starting at <addr> using the same convention as the
+ <code class="varname">xb</code> command. <code class="varname">get_vbits</code> only
+ shows the V bits (grouped by 4 bytes). It does not show the values.
+ If you want to associate V bits with the corresponding byte values, the
+ <code class="varname">xb</code> command will be easier to use, in particular
+ on little endian computers when associating undefined parts of an integer
+ with their V bits values.
+ </p>
+<p>
+ The following example shows the result of <code class="varname">get_vbits</code>
+ on the <code class="varname">string10</code> used in the <code class="varname">xb</code>
+ command explanation.
+ </p>
+<pre class="programlisting">
+(gdb) monitor get_vbits 0x804a2f0 10
+ff00ff00 ff__ff00 ff00
+Address 0x804A2F0 len 10 has 1 bytes unaddressable
+(gdb)
+</pre>
+</li>
+<li class="listitem">
+<p><code class="varname">make_memory
+ [noaccess|undefined|defined|Definedifaddressable] <addr>
+ [<len>]</code> marks the range of <len> (default 1)
+ bytes at <addr> as having the given status. Parameter
+ <code class="varname">noaccess</code> marks the range as non-accessible, so
+ Memcheck will report an error on any access to it.
+ <code class="varname">undefined</code> or <code class="varname">defined</code> mark
+ the area as accessible, but Memcheck regards the bytes in it
+ respectively as having undefined or defined values.
+ <code class="varname">Definedifaddressable</code> marks as defined the bytes in
+ the range which are already addressable, but makes no change to
+ the status of bytes in the range which are not addressable. Note
+ that the first letter of <code class="varname">Definedifaddressable</code>
+ is an uppercase D to avoid confusion with <code class="varname">defined</code>.
+ </p>
+<p>
+ In the following example, the first byte of the
+ <code class="varname">string10</code> is marked as defined:
+ </p>
+<pre class="programlisting">
+(gdb) monitor make_memory defined 0x8049e28 1
+(gdb) monitor get_vbits 0x8049e28 10
+0000ff00 ff00ff00 ff00
+(gdb)
+</pre>
+</li>
+<li class="listitem">
+<p><code class="varname">check_memory [addressable|defined] <addr>
+ [<len>]</code> checks that the range of <len>
+ (default 1) bytes at <addr> has the specified accessibility.
+ It then outputs a description of <addr>. In the following
+ example, a detailed description is available because the
+ option <code class="option">--read-var-info=yes</code> was given at Valgrind
+ startup:
+ </p>
+<pre class="programlisting">
+(gdb) monitor check_memory defined 0x8049e28 1
+Address 0x8049E28 len 1 defined
+==14698== Location 0x8049e28 is 0 bytes inside string10[0],
+==14698== declared at prog.c:10, in frame #0 of thread 1
+(gdb)
+</pre>
+</li>
+<li class="listitem">
+<p><code class="varname">leak_check [full*|summary]
+ [kinds <set>|reachable|possibleleak*|definiteleak]
+ [heuristics heur1,heur2,...]
+ [increased*|changed|any]
+ [unlimited*|limited <max_loss_records_output>]
+ </code>
+ performs a leak check. The <code class="varname">*</code> in the arguments
+ indicates the default values. </p>
+<p> If the <code class="varname">[full*|summary]</code> argument is
+ <code class="varname">summary</code>, only a summary of the leak search is given;
+ otherwise a full leak report is produced. A full leak report gives
+ detailed information for each leak: the stack trace where the leaked blocks
+ were allocated, the number of blocks leaked and their total size. When a
+ full report is requested, the next two arguments further specify what
+ kind of leaks to report. A leak's details are shown if they match
+ both the second and third argument. A full leak report might
+ output detailed information for many leaks. The number of leaks for
+ which information is output can be controlled using
+ the <code class="varname">limited</code> argument followed by the maximum number
+ of leak records to output. If this maximum is reached, the leak
+ search outputs the records with the biggest number of bytes.
+ </p>
+<p>The <code class="varname">kinds</code> argument controls what kind of blocks
+ are shown for a <code class="varname">full</code> leak search. The set of leak kinds
+ to show can be specified using a <code class="varname"><set></code> similarly
+ to the command line option <code class="option">--show-leak-kinds</code>.
+ Alternatively, the value <code class="varname">definiteleak</code>
+ is equivalent to <code class="varname">kinds definite</code>, the
+ value <code class="varname">possibleleak</code> is equivalent to
+ <code class="varname">kinds definite,possible</code>: it will also show
+ possibly leaked blocks, i.e. those for which only an interior
+ pointer was found. The value <code class="varname">reachable</code> will
+ show all block categories (i.e. is equivalent to <code class="varname">kinds
+ all</code>).
+ </p>
+<p>The <code class="varname">heuristics</code> argument controls the heuristics
+ used during the leak search. The set of heuristics to use can be specified
+ using a <code class="varname"><set></code> similarly
+ to the command line option <code class="option">--leak-check-heuristics</code>.
+ The default value for the <code class="varname">heuristics</code> argument is
+ <code class="varname">heuristics none</code>.
+ </p>
+<p>The <code class="varname">[increased*|changed|any]</code> argument controls what
+ kinds of changes are shown for a <code class="varname">full</code> leak search. The
+ value <code class="varname">increased</code> specifies that only block
+ allocation stacks with an increased number of leaked bytes or
+ blocks since the previous leak check should be shown. The
+ value <code class="varname">changed</code> specifies that allocation stacks
+ with any change since the previous leak check should be shown.
+ The value <code class="varname">any</code> specifies that all leak entries
+ should be shown, regardless of any increase or decrease.
+ If <code class="varname">increased</code> or <code class="varname">changed</code> are
+ specified, the leak report entries will show the delta relative to
+ the previous leak report.
+ </p>
+<p>The following example shows usage of the
+ <code class="varname">leak_check</code> monitor command on
+ the <code class="varname">memcheck/tests/leak-cases.c</code> regression
+ test. The first command outputs one entry having an increase in
+ the leaked bytes. The second command is the same as the first
+ command, but uses the abbreviated forms accepted by GDB and the
+ Valgrind gdbserver. It only outputs the summary information, as
+ there was no increase since the previous leak search.</p>
+<pre class="programlisting">
+(gdb) monitor leak_check full possibleleak increased
+==19520== 16 (+16) bytes in 1 (+1) blocks are possibly lost in loss record 9 of 12
+==19520== at 0x40070B4: malloc (vg_replace_malloc.c:263)
+==19520== by 0x80484D5: mk (leak-cases.c:52)
+==19520== by 0x804855F: f (leak-cases.c:81)
+==19520== by 0x80488E0: main (leak-cases.c:107)
+==19520==
+==19520== LEAK SUMMARY:
+==19520== definitely lost: 32 (+0) bytes in 2 (+0) blocks
+==19520== indirectly lost: 16 (+0) bytes in 1 (+0) blocks
+==19520== possibly lost: 32 (+16) bytes in 2 (+1) blocks
+==19520== still reachable: 96 (+16) bytes in 6 (+1) blocks
+==19520== suppressed: 0 (+0) bytes in 0 (+0) blocks
+==19520== Reachable blocks (those to which a pointer was found) are not shown.
+==19520== To see them, add 'reachable any' args to leak_check
+==19520==
+(gdb) mo l
+==19520== LEAK SUMMARY:
+==19520== definitely lost: 32 (+0) bytes in 2 (+0) blocks
+==19520== indirectly lost: 16 (+0) bytes in 1 (+0) blocks
+==19520== possibly lost: 32 (+0) bytes in 2 (+0) blocks
+==19520== still reachable: 96 (+0) bytes in 6 (+0) blocks
+==19520== suppressed: 0 (+0) bytes in 0 (+0) blocks
+==19520== Reachable blocks (those to which a pointer was found) are not shown.
+==19520== To see them, add 'reachable any' args to leak_check
+==19520==
+(gdb)
+</pre>
+<p>Note that when using Valgrind's gdbserver, it is not
+ necessary to rerun
+ with <code class="option">--leak-check=full</code>
+ <code class="option">--show-reachable=yes</code> to see the reachable
+ blocks. You can obtain the same information without rerunning by
+ using the GDB command <code class="computeroutput">monitor leak_check full
+ reachable any</code> (or, using
+ abbreviation: <code class="computeroutput">mo l f r a</code>).
+ </p>
+</li>
+<li class="listitem">
+<p><code class="varname">block_list <loss_record_nr>|<loss_record_nr_from>..<loss_record_nr_to>
+ [unlimited*|limited <max_blocks>]
+ [heuristics heur1,heur2,...]
+ </code>
+ shows the list of blocks belonging to
+ <code class="varname"><loss_record_nr></code> (or to the loss records range
+ <code class="varname"><loss_record_nr_from>..<loss_record_nr_to></code>).
+ The number of blocks to print can be controlled using the
+ <code class="varname">limited</code> argument followed by the maximum number
+ of blocks to output.
+ If one or more heuristics are given, only prints the loss records
+ and blocks found via one of the given <code class="varname">heur1,heur2,...</code>
+ heuristics.
+ </p>
+<p> A leak search merges the allocated blocks into loss records:
+ a loss record groups together all blocks having the same state (for
+ example, Definitely Lost) and the same allocation backtrace.
+ Each loss record is identified in the leak search result
+ by a loss record number.
+ The <code class="varname">block_list</code> command shows the loss record information
+ followed by the addresses and sizes of the blocks which have been
+ merged in the loss record. If a block was found using a heuristic, the block size
+ is followed by the heuristic.
+ </p>
+<p> If a directly lost block causes some other blocks to be indirectly
+ lost, the block_list command will also show these indirectly lost blocks.
+ The indirectly lost blocks will be indented according to the level of indirection
+ between the directly lost block and the indirectly lost block(s).
+ Each indirectly lost block is followed by the reference of its loss record.
+ </p>
+<p> The block_list command can be used on the results of a leak search as long
+ as no block has been freed after this leak search: as soon as the program frees
+ a block, a new leak search is needed before block_list can be used again.
+ </p>
+<p>
+ In the below example, the program leaks a tree structure by losing the pointer to
+ the block A (top of the tree).
+ So, the block A is directly lost, causing an indirect
+ loss of blocks B to G. The first block_list command shows the loss record of A
+ (a definitely lost block with address 0x4028028, size 16). The addresses and sizes
+ of the indirectly lost blocks due to block A are shown below the block A.
+ The second command shows the details of one of the indirect loss records output
+ by the first command.
+ </p>
+<pre class="programlisting">
+ A
+ / \
+ B C
+ / \ / \
+ D E F G
+</pre>
+<pre class="programlisting">
+(gdb) bt
+#0 main () at leak-tree.c:69
+(gdb) monitor leak_check full any
+==19552== 112 (16 direct, 96 indirect) bytes in 1 blocks are definitely lost in loss record 7 of 7
+==19552== at 0x40070B4: malloc (vg_replace_malloc.c:263)
+==19552== by 0x80484D5: mk (leak-tree.c:28)
+==19552== by 0x80484FC: f (leak-tree.c:41)
+==19552== by 0x8048856: main (leak-tree.c:63)
+==19552==
+==19552== LEAK SUMMARY:
+==19552== definitely lost: 16 bytes in 1 blocks
+==19552== indirectly lost: 96 bytes in 6 blocks
+==19552== possibly lost: 0 bytes in 0 blocks
+==19552== still reachable: 0 bytes in 0 blocks
+==19552== suppressed: 0 bytes in 0 blocks
+==19552==
+(gdb) monitor block_list 7
+==19552== 112 (16 direct, 96 indirect) bytes in 1 blocks are definitely lost in loss record 7 of 7
+==19552== at 0x40070B4: malloc (vg_replace_malloc.c:263)
+==19552== by 0x80484D5: mk (leak-tree.c:28)
+==19552== by 0x80484FC: f (leak-tree.c:41)
+==19552== by 0x8048856: main (leak-tree.c:63)
+==19552== 0x4028028[16]
+==19552== 0x4028068[16] indirect loss record 1
+==19552== 0x40280E8[16] indirect loss record 3
+==19552== 0x4028128[16] indirect loss record 4
+==19552== 0x40280A8[16] indirect loss record 2
+==19552== 0x4028168[16] indirect loss record 5
+==19552== 0x40281A8[16] indirect loss record 6
+(gdb) mo b 2
+==19552== 16 bytes in 1 blocks are indirectly lost in loss record 2 of 7
+==19552== at 0x40070B4: malloc (vg_replace_malloc.c:263)
+==19552== by 0x80484D5: mk (leak-tree.c:28)
+==19552== by 0x8048519: f (leak-tree.c:43)
+==19552== by 0x8048856: main (leak-tree.c:63)
+==19552== 0x40280A8[16]
+==19552== 0x4028168[16] indirect loss record 5
+==19552== 0x40281A8[16] indirect loss record 6
+(gdb)
+
+</pre>
+</li>
+<li class="listitem">
+<p><code class="varname">who_points_at <addr> [<len>]</code>
+ shows all the locations where a pointer to addr is found.
+ If len is equal to 1, the command only shows the locations pointing
+ exactly at addr (i.e. the "start pointers" to addr).
+ If len is > 1, "interior pointers" pointing at any of the first len bytes
+ will also be shown.
+ </p>
+<p>The locations searched for are the same as the locations
+ used in the leak search. So, <code class="varname">who_points_at</code> can, among
+ other things, be used to show why the leak search can still reach a block, or to
+ search for dangling pointers to a freed block.
+ Each location pointing at addr (or pointing inside addr if interior pointers
+ are being searched for) will be described.
+ </p>
+<p>In the below example, the pointer to the 'tree block A' (see the example
+ for the <code class="varname">block_list</code> command) is shown before the tree was leaked.
+ The descriptions are detailed as the option <code class="option">--read-var-info=yes</code>
+ was given at Valgrind startup. The second call shows the pointers (start and interior
+ pointers) to block G. The block G (0x40281A8) is reachable via block C (0x40280a8)
+ and register ECX of tid 1 (tid is the Valgrind thread id).
+ It is "interior reachable" via the register EBX.
+ </p>
+<pre class="programlisting">
+(gdb) monitor who_points_at 0x4028028
+==20852== Searching for pointers to 0x4028028
+==20852== *0x8049e20 points at 0x4028028
+==20852== Location 0x8049e20 is 0 bytes inside global var "t"
+==20852== declared at leak-tree.c:35
+(gdb) monitor who_points_at 0x40281A8 16
+==20852== Searching for pointers pointing in 16 bytes from 0x40281a8
+==20852== *0x40280ac points at 0x40281a8
+==20852== Address 0x40280ac is 4 bytes inside a block of size 16 alloc'd
+==20852== at 0x40070B4: malloc (vg_replace_malloc.c:263)
+==20852== by 0x80484D5: mk (leak-tree.c:28)
+==20852== by 0x8048519: f (leak-tree.c:43)
+==20852== by 0x8048856: main (leak-tree.c:63)
+==20852== tid 1 register ECX points at 0x40281a8
+==20852== tid 1 register EBX interior points at 2 bytes inside 0x40281a8
+(gdb)
+</pre>
+<p> When <code class="varname">who_points_at</code> finds an interior pointer,
+ it will report the heuristic(s) with which this interior pointer
+ will be considered as reachable. Note that this is done independently
+ of the value of the option <code class="option">--leak-check-heuristics</code>.
+ In the below example, the loss record 6 indicates a possibly lost
+ block. <code class="varname">who_points_at</code> reports that there is an interior
+ pointer pointing in this block, and that the block can be considered
+ reachable using the heuristic
+ <code class="computeroutput">multipleinheritance</code>.
+ </p>
+<pre class="programlisting">
+(gdb) monitor block_list 6
+==3748== 8 bytes in 1 blocks are possibly lost in loss record 6 of 7
+==3748== at 0x4007D77: operator new(unsigned int) (vg_replace_malloc.c:313)
+==3748== by 0x8048954: main (leak_cpp_interior.cpp:43)
+==3748== 0x402A0E0[8]
+(gdb) monitor who_points_at 0x402A0E0 8
+==3748== Searching for pointers pointing in 8 bytes from 0x402a0e0
+==3748== *0xbe8ee078 interior points at 4 bytes inside 0x402a0e0
+==3748== Address 0xbe8ee078 is on thread 1's stack
+==3748== block at 0x402a0e0 considered reachable by ptr 0x402a0e4 using multipleinheritance heuristic
+(gdb)
+</pre>
+</li>
+</ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.clientreqs"></a>4.7. Client Requests</h2></div></div></div>
+<p>The following client requests are defined in
+<code class="filename">memcheck.h</code>.
+See <code class="filename">memcheck.h</code> for exact details of their
+arguments.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="varname">VALGRIND_MAKE_MEM_NOACCESS</code>,
+ <code class="varname">VALGRIND_MAKE_MEM_UNDEFINED</code> and
+ <code class="varname">VALGRIND_MAKE_MEM_DEFINED</code>.
+ These mark address ranges as completely inaccessible,
+ accessible but containing undefined data, and accessible and
+   containing defined data, respectively. They return -1 when
+ run on Valgrind and 0 otherwise.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE</code>.
+ This is just like <code class="varname">VALGRIND_MAKE_MEM_DEFINED</code> but only
+ affects those bytes that are already addressable.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_CHECK_MEM_IS_ADDRESSABLE</code> and
+ <code class="varname">VALGRIND_CHECK_MEM_IS_DEFINED</code>: check immediately
+ whether or not the given address range has the relevant property,
+ and if not, print an error message. Also, for the convenience of
+ the client, returns zero if the relevant property holds; otherwise,
+ the returned value is the address of the first byte for which the
+ property is not true. Always returns 0 when not run on
+ Valgrind.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_CHECK_VALUE_IS_DEFINED</code>: a quick and easy
+ way to find out whether Valgrind thinks a particular value
+ (lvalue, to be precise) is addressable and defined. Prints an error
+ message if not. It has no return value.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_DO_LEAK_CHECK</code>: does a full memory leak
+ check (like <code class="option">--leak-check=full</code>) right now.
+ This is useful for incrementally checking for leaks between arbitrary
+ places in the program's execution. It has no return value.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_DO_ADDED_LEAK_CHECK</code>: same as
+ <code class="varname"> VALGRIND_DO_LEAK_CHECK</code> but only shows the
+ entries for which there was an increase in leaked bytes or leaked
+ number of blocks since the previous leak search. It has no return
+ value.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_DO_CHANGED_LEAK_CHECK</code>: same as
+ <code class="varname">VALGRIND_DO_LEAK_CHECK</code> but only shows the
+ entries for which there was an increase or decrease in leaked
+ bytes or leaked number of blocks since the previous leak search. It
+ has no return value.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_DO_QUICK_LEAK_CHECK</code>: like
+ <code class="varname">VALGRIND_DO_LEAK_CHECK</code>, except it produces only a leak
+ summary (like <code class="option">--leak-check=summary</code>).
+ It has no return value.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_COUNT_LEAKS</code>: fills in the four
+ arguments with the number of bytes of memory found by the previous
+ leak check to be leaked (i.e. the sum of direct leaks and indirect leaks),
+ dubious, reachable and suppressed. This is useful in test harness code,
+ after calling <code class="varname">VALGRIND_DO_LEAK_CHECK</code> or
+ <code class="varname">VALGRIND_DO_QUICK_LEAK_CHECK</code>.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_COUNT_LEAK_BLOCKS</code>: identical to
+ <code class="varname">VALGRIND_COUNT_LEAKS</code> except that it returns the
+ number of blocks rather than the number of bytes in each
+ category.</p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_GET_VBITS</code> and
+ <code class="varname">VALGRIND_SET_VBITS</code>: allow you to get and set the
+ V (validity) bits for an address range. You should probably only
+ set V bits that you have got with
+ <code class="varname">VALGRIND_GET_VBITS</code>. Only for those who really
+ know what they are doing.</p></li>
+<li class="listitem">
+<p><code class="varname">VALGRIND_CREATE_BLOCK</code> and
+ <code class="varname">VALGRIND_DISCARD</code>. <code class="varname">VALGRIND_CREATE_BLOCK</code>
+ takes an address, a number of bytes and a character string. The
+ specified address range is then associated with that string. When
+ Memcheck reports an invalid access to an address in the range, it
+ will describe it in terms of this block rather than in terms of
+ any other block it knows about. Note that the use of this macro
+ does not actually change the state of memory in any way -- it
+ merely gives a name for the range.
+ </p>
+<p>At some point you may want Memcheck to stop reporting errors
+ in terms of the block named
+ by <code class="varname">VALGRIND_CREATE_BLOCK</code>. To make this
+ possible, <code class="varname">VALGRIND_CREATE_BLOCK</code> returns a
+ "block handle", which is a C <code class="varname">int</code> value. You
+ can pass this block handle to <code class="varname">VALGRIND_DISCARD</code>.
+ After doing so, Valgrind will no longer relate addressing errors
+ in the specified range to the block. Passing invalid handles to
+ <code class="varname">VALGRIND_DISCARD</code> is harmless.
+ </p>
+</li>
+</ul></div>
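+<p>As an illustration only (this is not code from the Valgrind sources), the
+following sketch shows how a few of these requests might be combined in a
+small program. It assumes the installed header can be included as
+<code class="computeroutput">valgrind/memcheck.h</code>; the requests are
+essentially no-ops when the program is not running under Valgrind:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+#include <valgrind/memcheck.h>
+
+int main(void)
+{
+   unsigned long leaked, dubious, reachable, suppressed;
+   char *buf = malloc(16);
+
+   /* Freshly malloc'd memory is addressable but undefined, so this
+      check prints an error when run under Memcheck.                 */
+   VALGRIND_CHECK_MEM_IS_DEFINED(buf, 16);
+
+   /* Pretend the bytes were initialised by something Memcheck cannot
+      see (e.g. a device or foreign code); now the check passes.     */
+   VALGRIND_MAKE_MEM_DEFINED(buf, 16);
+   VALGRIND_CHECK_MEM_IS_DEFINED(buf, 16);
+
+   /* Make the block off-limits again: any later read or write of it
+      will be reported as an invalid access.                         */
+   VALGRIND_MAKE_MEM_NOACCESS(buf, 16);
+
+   /* Run a leak check now and collect the totals, as a test harness
+      might do.                                                      */
+   VALGRIND_DO_LEAK_CHECK;
+   VALGRIND_COUNT_LEAKS(leaked, dubious, reachable, suppressed);
+
+   return leaked ? 1 : 0;
+}
+</pre>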
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.mempools"></a>4.8. Memory Pools: describing and working with custom allocators</h2></div></div></div>
+<p>Some programs use custom memory allocators, often for performance
+reasons. Left to itself, Memcheck is unable to understand the
+behaviour of custom allocation schemes as well as it understands the
+standard allocators, and so may miss errors and leaks in your program. What
+this section describes is a way to give Memcheck enough of a description of
+your custom allocator that it can make at least some sense of what is
+happening.</p>
+<p>There are many different sorts of custom allocator, so Memcheck
+attempts to reason about them using a loose, abstract model. We
+use the following terminology when describing custom allocation
+systems:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Custom allocation involves a set of independent "memory pools".
+ </p></li>
+<li class="listitem"><p>Memcheck's notion of a memory pool consists of a single "anchor
+ address" and a set of non-overlapping "chunks" associated with the
+ anchor address.</p></li>
+<li class="listitem"><p>Typically a pool's anchor address is the address of a
+ book-keeping "header" structure.</p></li>
+<li class="listitem"><p>Typically the pool's chunks are drawn from a contiguous
+ "superblock" acquired through the system
+ <code class="function">malloc</code> or
+ <code class="function">mmap</code>.</p></li>
+</ul></div>
+<p>Keep in mind that the last two points above say "typically": the
+Valgrind mempool client request API is intentionally vague about the
+exact structure of a mempool. There is no specific mention made of
+headers or superblocks. Nevertheless, the following picture may help
+elucidate the intention of the terms in the API:</p>
+<pre class="programlisting">
+ "pool"
+ (anchor address)
+ |
+ v
+ +--------+---+
+ | header | o |
+ +--------+-|-+
+ |
+ v superblock
+ +------+---+--------------+---+------------------+
+ | |rzB| allocation |rzB| |
+ +------+---+--------------+---+------------------+
+ ^ ^
+ | |
+ "addr" "addr"+"size"
+</pre>
+<p>
+Note that the header and the superblock may be contiguous or
+discontiguous, and there may be multiple superblocks associated with a
+single header; such variations are opaque to Memcheck. The API
+only requires that your allocation scheme can present sensible values
+of "pool", "addr" and "size".</p>
+<p>
+Typically, before making client requests related to mempools, a client
+program will have allocated such a header and superblock for its
+mempool, and marked the superblock NOACCESS using the
+<code class="varname">VALGRIND_MAKE_MEM_NOACCESS</code> client request.</p>
+<p>
+When dealing with mempools, the goal is to maintain a particular
+invariant condition: that Memcheck believes the unallocated portions
+of the pool's superblock (including redzones) are NOACCESS. To
+maintain this invariant, the client program must ensure that the
+superblock starts out in that state; Memcheck cannot make it so, since
+Memcheck never explicitly learns about the superblock of a pool, only
+the allocated chunks within the pool.</p>
+<p>
+Once the header and superblock for a pool are established and properly
+marked, there are a number of client requests programs can use to
+inform Memcheck about changes to the state of a mempool:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>
+ <code class="varname">VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed)</code>:
+ This request registers the address <code class="varname">pool</code> as the anchor
+ address for a memory pool. It also provides a size
+ <code class="varname">rzB</code>, specifying how large the redzones placed around
+ chunks allocated from the pool should be. Finally, it provides an
+ <code class="varname">is_zeroed</code> argument that specifies whether the pool's
+ chunks are zeroed (more precisely: defined) when allocated.
+ </p>
+<p>
+ Upon completion of this request, no chunks are associated with the
+ pool. The request simply tells Memcheck that the pool exists, so that
+ subsequent calls can refer to it as a pool.
+ </p>
+</li>
+<li class="listitem">
+<p>
+ <code class="varname">VALGRIND_CREATE_MEMPOOL_EXT(pool, rzB, is_zeroed, flags)</code>:
+ Create a memory pool with some flags (that can
+ be OR-ed together) specifying extended behaviour. When flags is
+ zero, the behaviour is identical to
+ <code class="varname">VALGRIND_CREATE_MEMPOOL</code>.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: circle; ">
+<li class="listitem"><p> The flag <code class="varname">VALGRIND_MEMPOOL_METAPOOL</code>
+      specifies that the pieces of memory associated with the pool
+      using <code class="varname">VALGRIND_MEMPOOL_ALLOC</code> will be used
+      by the application as superblocks from which to dole out MALLOC_LIKE
+      blocks using <code class="varname">VALGRIND_MALLOCLIKE_BLOCK</code>.
+      In other words, a meta pool is a two-level pool: the first
+      level consists of the blocks described
+      by <code class="varname">VALGRIND_MEMPOOL_ALLOC</code>; the second-level
+      blocks are described
+      using <code class="varname">VALGRIND_MALLOCLIKE_BLOCK</code>. Note
+      that the association between the pool and the second-level
+      blocks is implicit: second-level blocks are located
+      inside first-level blocks. It is necessary to use
+      the <code class="varname">VALGRIND_MEMPOOL_METAPOOL</code> flag for
+      such two-level pools, as otherwise valgrind will detect
+      overlapping memory blocks and abort execution
+      (e.g. during a leak search).
+      </p></li>
+<li class="listitem"><p>
+      <code class="varname">VALGRIND_MEMPOOL_AUTO_FREE</code>. A meta
+      pool can additionally be marked as an 'auto free' pool using
+      this flag, which must be OR-ed together with
+      <code class="varname">VALGRIND_MEMPOOL_METAPOOL</code>. For an
+      'auto free' pool, <code class="varname">VALGRIND_MEMPOOL_FREE</code>
+      will automatically free the second-level blocks that are
+      contained inside the first-level block being freed. In other
+      words, calling <code class="varname">VALGRIND_MEMPOOL_FREE</code> will
+      cause implicit calls
+      to <code class="varname">VALGRIND_FREELIKE_BLOCK</code> for all the
+      second-level blocks included in the first-level block.
+      Note: it is an error to use
+      the <code class="varname">VALGRIND_MEMPOOL_AUTO_FREE</code> flag
+      without the
+      <code class="varname">VALGRIND_MEMPOOL_METAPOOL</code> flag.
+      </p></li>
+</ul></div>
+</li>
+<li class="listitem"><p><code class="varname">VALGRIND_DESTROY_MEMPOOL(pool)</code>:
+ This request tells Memcheck that a pool is being torn down. Memcheck
+ then removes all records of chunks associated with the pool, as well
+ as its record of the pool's existence. While destroying its records of
+ a mempool, Memcheck resets the redzones of any live chunks in the pool
+ to NOACCESS.
+ </p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_MEMPOOL_ALLOC(pool, addr, size)</code>:
+ This request informs Memcheck that a <code class="varname">size</code>-byte chunk
+ has been allocated at <code class="varname">addr</code>, and associates the chunk with the
+ specified
+ <code class="varname">pool</code>. If the pool was created with nonzero
+ <code class="varname">rzB</code> redzones, Memcheck will mark the
+ <code class="varname">rzB</code> bytes before and after the chunk as NOACCESS. If
+ the pool was created with the <code class="varname">is_zeroed</code> argument set,
+ Memcheck will mark the chunk as DEFINED, otherwise Memcheck will mark
+ the chunk as UNDEFINED.
+ </p></li>
+<li class="listitem"><p><code class="varname">VALGRIND_MEMPOOL_FREE(pool, addr)</code>:
+ This request informs Memcheck that the chunk at <code class="varname">addr</code>
+ should no longer be considered allocated. Memcheck will mark the chunk
+ associated with <code class="varname">addr</code> as NOACCESS, and delete its
+ record of the chunk's existence.
+ </p></li>
+<li class="listitem">
+<p><code class="varname">VALGRIND_MEMPOOL_TRIM(pool, addr, size)</code>:
+ This request trims the chunks associated with <code class="varname">pool</code>.
+ The request only operates on chunks associated with
+ <code class="varname">pool</code>. Trimming is formally defined as:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: circle; ">
+<li class="listitem"><p> All chunks entirely inside the range
+ <code class="varname">addr..(addr+size-1)</code> are preserved.</p></li>
+<li class="listitem"><p>All chunks entirely outside the range
+ <code class="varname">addr..(addr+size-1)</code> are discarded, as though
+ <code class="varname">VALGRIND_MEMPOOL_FREE</code> was called on them. </p></li>
+<li class="listitem"><p>All other chunks must intersect with the range
+ <code class="varname">addr..(addr+size-1)</code>; areas outside the
+ intersection are marked as NOACCESS, as though they had been
+ independently freed with
+ <code class="varname">VALGRIND_MEMPOOL_FREE</code>.</p></li>
+</ul></div>
+<p>This is a somewhat rare request, but can be useful in
+ implementing the type of mass-free operations common in custom
+ LIFO allocators.</p>
+</li>
+<li class="listitem">
+<p><code class="varname">VALGRIND_MOVE_MEMPOOL(poolA, poolB)</code>: This
+ request informs Memcheck that the pool previously anchored at
+ address <code class="varname">poolA</code> has moved to anchor address
+ <code class="varname">poolB</code>. This is a rare request, typically only needed
+ if you <code class="function">realloc</code> the header of a mempool.</p>
+<p>No memory-status bits are altered by this request.</p>
+</li>
+<li class="listitem">
+<p>
+ <code class="varname">VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB,
+ size)</code>: This request informs Memcheck that the chunk
+ previously allocated at address <code class="varname">addrA</code> within
+ <code class="varname">pool</code> has been moved and/or resized, and should be
+ changed to cover the region <code class="varname">addrB..(addrB+size-1)</code>. This
+ is a rare request, typically only needed if you
+ <code class="function">realloc</code> a superblock or wish to extend a chunk
+ without changing its memory-status bits.
+ </p>
+<p>No memory-status bits are altered by this request.
+ </p>
+</li>
+<li class="listitem"><p><code class="varname">VALGRIND_MEMPOOL_EXISTS(pool)</code>:
+ This request informs the caller whether or not Memcheck is currently
+ tracking a mempool at anchor address <code class="varname">pool</code>. It
+ evaluates to 1 when there is a mempool associated with that address, 0
+ otherwise. This is a rare request, only useful in circumstances when
+ client code might have lost track of the set of active mempools.
+ </p></li>
+</ul></div>
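+<p>As an illustration only (this is not code from the Valgrind distribution),
+the following sketch shows how a deliberately simple "bump" allocator might be
+annotated with these requests. It omits all overflow and error handling, and
+assumes the installed headers are reachable as
+<code class="computeroutput">valgrind/valgrind.h</code> and
+<code class="computeroutput">valgrind/memcheck.h</code>:</p>
+<pre class="programlisting">
+#include <stdlib.h>
+#include <valgrind/valgrind.h>   /* mempool client requests            */
+#include <valgrind/memcheck.h>   /* VALGRIND_MAKE_MEM_NOACCESS         */
+
+#define RZB        8             /* redzone size reported to Memcheck  */
+#define POOL_SIZE  4096
+
+typedef struct {                 /* the header is the anchor address   */
+   char *superblock;
+   char *next;                   /* bump pointer into the superblock   */
+} Pool;
+
+Pool *pool_create(void)
+{
+   Pool *p = malloc(sizeof(Pool));
+   p->superblock = malloc(POOL_SIZE);
+   p->next = p->superblock;
+   /* Register the pool, then establish the invariant that unallocated
+      parts of the superblock are NOACCESS.                            */
+   VALGRIND_CREATE_MEMPOOL(p, RZB, /*is_zeroed*/0);
+   VALGRIND_MAKE_MEM_NOACCESS(p->superblock, POOL_SIZE);
+   return p;
+}
+
+void *pool_alloc(Pool *p, size_t n)
+{
+   char *chunk = p->next + RZB;         /* leave a redzone before...   */
+   p->next = chunk + n + RZB;           /* ...and after the chunk      */
+   VALGRIND_MEMPOOL_ALLOC(p, chunk, n); /* chunk becomes UNDEFINED     */
+   return chunk;
+}
+
+void pool_free(Pool *p, void *chunk)
+{
+   VALGRIND_MEMPOOL_FREE(p, chunk);     /* chunk becomes NOACCESS      */
+}
+
+void pool_destroy(Pool *p)
+{
+   VALGRIND_DESTROY_MEMPOOL(p);         /* forget all remaining chunks */
+   free(p->superblock);
+   free(p);
+}
+</pre>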
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="mc-manual.mpiwrap"></a>4.9. Debugging MPI Parallel Programs with Valgrind</h2></div></div></div>
+<p>Memcheck supports debugging of distributed-memory applications
+which use the MPI message passing standard. This support consists of a
+library of wrapper functions for the
+<code class="computeroutput">PMPI_*</code> interface. When incorporated
+into the application's address space, either by direct linking or by
+<code class="computeroutput">LD_PRELOAD</code>, the wrappers intercept
+calls to <code class="computeroutput">PMPI_Send</code>,
+<code class="computeroutput">PMPI_Recv</code>, etc. They then
+use client requests to inform Memcheck of memory state changes caused
+by the function being wrapped. This reduces the number of false
+positives that Memcheck otherwise typically reports for MPI
+applications.</p>
+<p>The wrappers also take the opportunity to carefully check
+size and definedness of buffers passed as arguments to MPI functions, hence
+detecting errors such as passing undefined data to
+<code class="computeroutput">PMPI_Send</code>, or receiving data into a
+buffer which is too small.</p>
+<p>Unlike most of the rest of Valgrind, the wrapper library is subject to a
+BSD-style license, so you can link it into any code base you like.
+See the top of <code class="computeroutput">mpi/libmpiwrap.c</code>
+for license details.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.mpiwrap.build"></a>4.9.1. Building and installing the wrappers</h3></div></div></div>
+<p> The wrapper library will be built automatically if possible.
+Valgrind's configure script will look for a suitable
+<code class="computeroutput">mpicc</code> to build it with. This must be
+the same <code class="computeroutput">mpicc</code> you use to build the
+MPI application you want to debug. By default, Valgrind tries
+<code class="computeroutput">mpicc</code>, but you can specify a
+different one by using the configure-time option
+<code class="option">--with-mpicc</code>. Currently the
+wrappers are only buildable with
+<code class="computeroutput">mpicc</code>s which are based on GNU
+GCC or Intel's C++ Compiler.</p>
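+<p>For example (the paths here are purely illustrative), a configure
+invocation pointing at a specific MPI compiler driver might look like:</p>
+<pre class="programlisting">
+./configure --prefix=$HOME/valgrind-install --with-mpicc=/opt/mympi/bin/mpicc
+</pre>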
+<p>Check that the configure script prints a line like this:</p>
+<pre class="programlisting">
+checking for usable MPI2-compliant mpicc and mpi.h... yes, mpicc
+</pre>
+<p>If it says <code class="computeroutput">... no</code>, your
+<code class="computeroutput">mpicc</code> has failed to compile and link
+a test MPI2 program.</p>
+<p>If the configure test succeeds, continue in the usual way with
+<code class="computeroutput">make</code> and <code class="computeroutput">make
+install</code>. The final install tree should then contain
+<code class="computeroutput">libmpiwrap-<platform>.so</code>.
+</p>
+<p>Compile up a test MPI program (e.g. an MPI hello-world) and try
+this:</p>
+<pre class="programlisting">
+LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
+ mpirun [args] $prefix/bin/valgrind ./hello
+</pre>
+<p>You should see something similar to the following:</p>
+<pre class="programlisting">
+valgrind MPI wrappers 31901: Active for pid 31901
+valgrind MPI wrappers 31901: Try MPIWRAP_DEBUG=help for possible options
+</pre>
+<p>repeated for every process in the group. If you do not see
+these, there is a build/installation problem of some kind.</p>
+<p> The MPI functions to be wrapped are assumed to be in an ELF
+shared object with soname matching
+<code class="computeroutput">libmpi.so*</code>. This is known to be
+correct at least for Open MPI and Quadrics MPI, and can easily be
+changed if required.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.mpiwrap.gettingstarted"></a>4.9.2. Getting started</h3></div></div></div>
+<p>Compile your MPI application as usual, taking care to link it
+using the same <code class="computeroutput">mpicc</code> that your
+Valgrind build was configured with.</p>
+<p>
+Use the following basic scheme to run your application on Valgrind with
+the wrappers engaged:</p>
+<pre class="programlisting">
+MPIWRAP_DEBUG=[wrapper-args] \
+ LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
+ mpirun [mpirun-args] \
+ $prefix/bin/valgrind [valgrind-args] \
+ [application] [app-args]
+</pre>
+<p>As an alternative to
+<code class="computeroutput">LD_PRELOAD</code>ing
+<code class="computeroutput">libmpiwrap-<platform>.so</code>, you can
+simply link it to your application if desired. This should not disturb
+native behaviour of your application in any way.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.mpiwrap.controlling"></a>4.9.3. Controlling the wrapper library</h3></div></div></div>
+<p>Environment variable
+<code class="computeroutput">MPIWRAP_DEBUG</code> is consulted at
+startup. The default behaviour is to print a starting banner</p>
+<pre class="programlisting">
+valgrind MPI wrappers 16386: Active for pid 16386
+valgrind MPI wrappers 16386: Try MPIWRAP_DEBUG=help for possible options
+</pre>
+<p> and then be relatively quiet.</p>
+<p>You can give a list of comma-separated options in
+<code class="computeroutput">MPIWRAP_DEBUG</code>. These are</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="computeroutput">verbose</code>:
+ show entries/exits of all wrappers. Also show extra
+ debugging info, such as the status of outstanding
+ <code class="computeroutput">MPI_Request</code>s resulting
+ from uncompleted <code class="computeroutput">MPI_Irecv</code>s.</p></li>
+<li class="listitem"><p><code class="computeroutput">quiet</code>:
+ opposite of <code class="computeroutput">verbose</code>, only print
+ anything when the wrappers want
+ to report a detected programming error, or in case of catastrophic
+ failure of the wrappers.</p></li>
+<li class="listitem"><p><code class="computeroutput">warn</code>:
+ by default, functions which lack proper wrappers
+ are not commented on, just silently
+ ignored. This causes a warning to be printed for each unwrapped
+ function used, up to a maximum of three warnings per function.</p></li>
+<li class="listitem"><p><code class="computeroutput">strict</code>:
+ print an error message and abort the program if
+ a function lacking a wrapper is used.</p></li>
+</ul></div>
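+<p>For example, to be warned about unwrapped functions and also trace wrapper
+entries and exits, you could combine options like this (following the scheme
+shown above):</p>
+<pre class="programlisting">
+MPIWRAP_DEBUG=warn,verbose \
+   LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so \
+   mpirun [mpirun-args] $prefix/bin/valgrind [valgrind-args] \
+   [application] [app-args]
+</pre>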
+<p> If you want to use Valgrind's XML output facility
+(<code class="option">--xml=yes</code>), you should pass
+<code class="computeroutput">quiet</code> in
+<code class="computeroutput">MPIWRAP_DEBUG</code> so as to get rid of any
+extraneous printing from the wrappers.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.mpiwrap.limitations.functions"></a>4.9.4. Functions</h3></div></div></div>
+<p>All MPI2 functions except
+<code class="computeroutput">MPI_Wtick</code>,
+<code class="computeroutput">MPI_Wtime</code> and
+<code class="computeroutput">MPI_Pcontrol</code> have wrappers. The
+first two are not wrapped because they return a
+<code class="computeroutput">double</code>, which Valgrind's
+function-wrap mechanism cannot handle (but it could easily be
+extended to do so). <code class="computeroutput">MPI_Pcontrol</code> cannot be
+wrapped as it has variable arity:
+<code class="computeroutput">int MPI_Pcontrol(const int level, ...)</code></p>
+<p>Most functions are wrapped with a default wrapper which does
+nothing except complain or abort if it is called, depending on
+settings in <code class="computeroutput">MPIWRAP_DEBUG</code> listed
+above. The following functions have "real", do-something-useful
+wrappers:</p>
+<pre class="programlisting">
+PMPI_Send PMPI_Bsend PMPI_Ssend PMPI_Rsend
+
+PMPI_Recv PMPI_Get_count
+
+PMPI_Isend PMPI_Ibsend PMPI_Issend PMPI_Irsend
+
+PMPI_Irecv
+PMPI_Wait PMPI_Waitall
+PMPI_Test PMPI_Testall
+
+PMPI_Iprobe PMPI_Probe
+
+PMPI_Cancel
+
+PMPI_Sendrecv
+
+PMPI_Type_commit PMPI_Type_free
+
+PMPI_Pack PMPI_Unpack
+
+PMPI_Bcast PMPI_Gather PMPI_Scatter PMPI_Alltoall
+PMPI_Reduce PMPI_Allreduce PMPI_Op_create
+
+PMPI_Comm_create PMPI_Comm_dup PMPI_Comm_free PMPI_Comm_rank PMPI_Comm_size
+
+PMPI_Error_string
+PMPI_Init PMPI_Initialized PMPI_Finalize
+</pre>
+<p> A few functions such as
+<code class="computeroutput">PMPI_Address</code> are listed as
+<code class="computeroutput">HAS_NO_WRAPPER</code>. They have no wrapper
+at all as there is nothing worth checking, and giving a no-op wrapper
+would reduce performance for no reason.</p>
+<p> Note that the wrapper library can itself generate large
+numbers of calls to the MPI implementation, especially when walking
+complex types. The most common functions called are
+<code class="computeroutput">PMPI_Extent</code>,
+<code class="computeroutput">PMPI_Type_get_envelope</code>,
+<code class="computeroutput">PMPI_Type_get_contents</code>, and
+<code class="computeroutput">PMPI_Type_free</code>. </p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.mpiwrap.limitations.types"></a>4.9.5. Types</h3></div></div></div>
+<p> MPI-1.1 structured types are supported, and walked exactly.
+The currently supported combiners are
+<code class="computeroutput">MPI_COMBINER_NAMED</code>,
+<code class="computeroutput">MPI_COMBINER_CONTIGUOUS</code>,
+<code class="computeroutput">MPI_COMBINER_VECTOR</code>,
+<code class="computeroutput">MPI_COMBINER_HVECTOR</code>,
+<code class="computeroutput">MPI_COMBINER_INDEXED</code>,
+<code class="computeroutput">MPI_COMBINER_HINDEXED</code> and
+<code class="computeroutput">MPI_COMBINER_STRUCT</code>. This should
+cover all MPI-1.1 types. The mechanism (function
+<code class="computeroutput">walk_type</code>) should extend easily to
+cover MPI2 combiners.</p>
+<p>MPI defines some named structured types
+(<code class="computeroutput">MPI_FLOAT_INT</code>,
+<code class="computeroutput">MPI_DOUBLE_INT</code>,
+<code class="computeroutput">MPI_LONG_INT</code>,
+<code class="computeroutput">MPI_2INT</code>,
+<code class="computeroutput">MPI_SHORT_INT</code>,
+<code class="computeroutput">MPI_LONG_DOUBLE_INT</code>) which are pairs
+of some basic type and a C <code class="computeroutput">int</code>.
+Unfortunately the MPI specification makes it impossible to look inside
+these types and see where the fields are. Therefore these wrappers
+assume the types are laid out as <code class="computeroutput">struct { float val;
+int loc; }</code> (for
+<code class="computeroutput">MPI_FLOAT_INT</code>), etc, and act
+accordingly. This appears to be correct at least for Open MPI 1.0.2
+and for Quadrics MPI.</p>
+<p>If <code class="computeroutput">strict</code> is an option specified
+in <code class="computeroutput">MPIWRAP_DEBUG</code>, the application
+will abort if an unhandled type is encountered. Otherwise, the
+application will print a warning message and continue.</p>
+<p>Some effort is made to mark/check memory ranges corresponding to
+arrays of values in a single pass. This is important for performance
+since asking Valgrind to mark/check any range, no matter how small,
+carries quite a large constant cost. This optimisation is applied to
+arrays of primitive types (<code class="computeroutput">double</code>,
+<code class="computeroutput">float</code>,
+<code class="computeroutput">int</code>,
+<code class="computeroutput">long</code>, <code class="computeroutput">long
+long</code>, <code class="computeroutput">short</code>,
+<code class="computeroutput">char</code>, and <code class="computeroutput">long
+double</code> on platforms where <code class="computeroutput">sizeof(long
+double) == 8</code>). For arrays of all other types, the
+wrappers handle each element individually and so there can be a very
+large performance cost.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.mpiwrap.writingwrappers"></a>4.9.6. Writing new wrappers</h3></div></div></div>
+<p>
+For the most part the wrappers are straightforward. The only
+significant complexity arises with nonblocking receives.</p>
+<p>The issue is that <code class="computeroutput">MPI_Irecv</code>
+specifies the receive buffer and returns immediately, giving a handle
+(<code class="computeroutput">MPI_Request</code>) for the transaction.
+Later the user will have to poll for completion with
+<code class="computeroutput">MPI_Wait</code> etc, and when the
+transaction completes successfully, the wrappers have to paint the
+recv buffer. But the recv buffer details are not presented to
+<code class="computeroutput">MPI_Wait</code> -- only the handle is. The
+library therefore maintains a shadow table which associates
+uncompleted <code class="computeroutput">MPI_Request</code>s with the
+corresponding buffer address/count/type. When an operation completes,
+the table is searched for the associated address/count/type info, and
+memory is marked accordingly.</p>
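+<p>As a rough sketch of the idea only -- this is not the actual
+<code class="computeroutput">libmpiwrap.c</code> code, and for brevity it handles
+only contiguous datatypes -- each table entry and the buffer-painting step
+might look something like this:</p>
+<pre class="programlisting">
+#include <mpi.h>
+#include <valgrind/memcheck.h>
+
+/* What the wrappers must remember about each uncompleted
+   nonblocking receive.                                             */
+typedef struct {
+   MPI_Request  req;     /* handle returned by PMPI_Irecv           */
+   void        *buf;     /* receive buffer                          */
+   int          count;   /* element count                           */
+   MPI_Datatype ty;      /* element type                            */
+} ShadowEntry;
+
+/* Conceptually called when a wrapped PMPI_Wait/PMPI_Test sees the
+   request complete: mark the received bytes as defined.            */
+static void paint_recv_buffer(const ShadowEntry *e)
+{
+   int tysize = 0;
+   PMPI_Type_size(e->ty, &tysize);
+   VALGRIND_MAKE_MEM_DEFINED(e->buf, (long)e->count * tysize);
+}
+</pre>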
+<p>Access to the table is guarded by a (POSIX pthreads) lock, so as
+to make the library thread-safe.</p>
+<p>The table is allocated with
+<code class="computeroutput">malloc</code> and never
+<code class="computeroutput">free</code>d, so it will show up in leak
+checks.</p>
+<p>Writing new wrappers should be fairly easy. The source file is
+<code class="computeroutput">mpi/libmpiwrap.c</code>. If possible,
+find an existing wrapper for a function of similar behaviour to the
+one you want to wrap, and use it as a starting point. The wrappers
+are organised in sections in the same order as the MPI 1.1 spec, to
+aid navigation. When adding a wrapper, remember to comment out the
+definition of the default wrapper in the long list of defaults at the
+bottom of the file (do not remove it, just comment it out).</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="mc-manual.mpiwrap.whattoexpect"></a>4.9.7. What to expect when using the wrappers</h3></div></div></div>
+<p>The wrappers should reduce Memcheck's false-error rate on MPI
+applications. Because the wrapping is done at the MPI interface,
+there will still potentially be a large number of errors reported in
+the MPI implementation below the interface. The best you can do is
+try to suppress them.</p>
+<p>You may also find that the input-side (buffer
+length/definedness) checks find errors in your MPI use, for example
+passing too short a buffer to
+<code class="computeroutput">MPI_Recv</code>.</p>
+<p>Functions which are not wrapped may increase the false
+error rate. A possible approach is to run with
+<code class="computeroutput">MPIWRAP_DEBUG</code> containing
+<code class="computeroutput">warn</code>. This will show you functions
+which lack proper wrappers but which are nevertheless used. You can
+then write wrappers for them.
+</p>
+<p>A known source of potential false errors is the
+<code class="computeroutput">PMPI_Reduce</code> family of functions, when
+using a custom (user-defined) reduction function. In a reduction
+operation, each node notionally sends data to a "central point" which
+uses the specified reduction function to merge the data items into a
+single item. Hence, in general, data is passed between nodes and fed
+to the reduction function, but the wrapper library cannot mark the
+transferred data as initialised before it is handed to the reduction
+function, because all that happens "inside" the
+<code class="computeroutput">PMPI_Reduce</code> call. As a result you
+may see false positives reported in your reduction function.</p>
+</div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="manual-core-adv.html"><< 3. Using and understanding the Valgrind core: Advanced Topics</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="cg-manual.html">5. Cachegrind: a cache and branch-prediction profiler >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/ms-manual.html b/docs/html/ms-manual.html
new file mode 100644
index 0000000..fbb0e5a
--- /dev/null
+++ b/docs/html/ms-manual.html
@@ -0,0 +1,852 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>9. Massif: a heap profiler</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="drd-manual.html" title="8. DRD: a thread error detector">
+<link rel="next" href="dh-manual.html" title="10. DHAT: a dynamic heap analysis tool">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="drd-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="dh-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="ms-manual"></a>9. Massif: a heap profiler</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.overview">9.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.using">9.2. Using Massif and ms_print</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.anexample">9.2.1. An Example Program</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.running-massif">9.2.2. Running Massif</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.running-ms_print">9.2.3. Running ms_print</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.theoutputpreamble">9.2.4. The Output Preamble</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.theoutputgraph">9.2.5. The Output Graph</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.thesnapshotdetails">9.2.6. The Snapshot Details</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.forkingprograms">9.2.7. Forking Programs</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.not-measured">9.2.8. Measuring All Memory in a Process</a></span></dt>
+<dt><span class="sect2"><a href="ms-manual.html#ms-manual.acting">9.2.9. Acting on Massif's Information</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.options">9.3. Massif Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.monitor-commands">9.4. Massif Monitor Commands</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.clientreqs">9.5. Massif Client Requests</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.ms_print-options">9.6. ms_print Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="ms-manual.html#ms-manual.fileformat">9.7. Massif's Output File Format</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=massif</code> on the Valgrind
+command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.overview"></a>9.1. Overview</h2></div></div></div>
+<p>Massif is a heap profiler. It measures how much heap memory your
+program uses. This includes both the useful space, and the extra bytes
+allocated for book-keeping and alignment purposes. It can also
+measure the size of your program's stack(s), although it does not do so by
+default.</p>
+<p>Heap profiling can help you reduce the amount of memory your program
+uses. On modern machines with virtual memory, this provides the following
+benefits:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>It can speed up your program -- a smaller
+ program will interact better with your machine's caches and
+ avoid paging.</p></li>
+<li class="listitem"><p>If your program uses lots of memory, it will
+ reduce the chance that it exhausts your machine's swap
+ space.</p></li>
+</ul></div>
+<p>Also, there are certain space leaks that aren't detected by
+traditional leak-checkers, such as Memcheck's. That's because
+the memory isn't ever actually lost -- a pointer remains to it --
+but it's not in use. Programs that have leaks like this can
+unnecessarily increase the amount of memory they are using over
+time. Massif can help identify these leaks.</p>
+<p>Importantly, Massif tells you not only how much heap memory your
+program is using, it also gives very detailed information that indicates
+which parts of your program are responsible for allocating the heap memory.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.using"></a>9.2. Using Massif and ms_print</h2></div></div></div>
+<p>First off, as for the other Valgrind tools, you should compile with
+debugging info (the <code class="option">-g</code> option). It shouldn't
+matter much what optimisation level you compile your program with, as this
+is unlikely to affect the heap memory usage.</p>
+<p>Then, you need to run Massif itself to gather the profiling
+information, and then run ms_print to present it in a readable way.</p>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.anexample"></a>9.2.1. An Example Program</h3></div></div></div>
+<p>An example will make things clear. Consider the following C program
+(annotated with line numbers) which allocates a number of different blocks
+on the heap.</p>
+<pre class="screen">
+ 1 #include <stdlib.h>
+ 2
+ 3 void g(void)
+ 4 {
+ 5 malloc(4000);
+ 6 }
+ 7
+ 8 void f(void)
+ 9 {
+10 malloc(2000);
+11 g();
+12 }
+13
+14 int main(void)
+15 {
+16 int i;
+17 int* a[10];
+18
+19 for (i = 0; i < 10; i++) {
+20 a[i] = malloc(1000);
+21 }
+22
+23 f();
+24
+25 g();
+26
+27 for (i = 0; i < 10; i++) {
+28 free(a[i]);
+29 }
+30
+31 return 0;
+32 }
+</pre>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.running-massif"></a>9.2.2. Running Massif</h3></div></div></div>
+<p>To gather heap profiling information about the program
+<code class="computeroutput">prog</code>, type:</p>
+<pre class="screen">
+valgrind --tool=massif prog
+</pre>
+<p>The program will execute (slowly). Upon completion, no summary
+statistics are printed to Valgrind's commentary; all of Massif's profiling
+data is written to a file. By default, this file is called
+<code class="filename">massif.out.<pid></code>, where
+<code class="filename"><pid></code> is the process ID, although this filename
+can be changed with the <code class="option">--massif-out-file</code> option.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.running-ms_print"></a>9.2.3. Running ms_print</h3></div></div></div>
+<p>To see the information gathered by Massif in an easy-to-read form, use
+ms_print. If the output file's name is
+<code class="filename">massif.out.12345</code>, type:</p>
+<pre class="screen">
+ms_print massif.out.12345</pre>
+<p>ms_print will produce (a) a graph showing the memory consumption over
+the program's execution, and (b) detailed information about the responsible
+allocation sites at various points in the program, including the point of
+peak memory allocation. The use of a separate script for presenting the
+results is deliberate: it separates the data gathering from its
+presentation, and means that new methods of presenting the data can be added in
+the future.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.theoutputpreamble"></a>9.2.4. The Output Preamble</h3></div></div></div>
+<p>After running this program under Massif, the first part of ms_print's
+output contains a preamble which just states how the program, Massif and
+ms_print were each invoked:</p>
+<pre class="screen">
+--------------------------------------------------------------------------------
+Command: example
+Massif arguments: (none)
+ms_print arguments: massif.out.12797
+--------------------------------------------------------------------------------
+</pre>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.theoutputgraph"></a>9.2.5. The Output Graph</h3></div></div></div>
+<p>The next part is the graph that shows how memory consumption occurred
+as the program executed:</p>
+<pre class="screen">
+ KB
+19.63^ #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | #
+ | :#
+ | :#
+ | :#
+   0 +----------------------------------------------------------------------->ki
+     0                                                                   113.4
+
+
+Number of snapshots: 25
+ Detailed snapshots: [9, 14 (peak), 24]
+</pre>
+<p>Why is most of the graph empty, with only a couple of bars at the very
+end? By default, Massif uses "instructions executed" as the unit of time.
+For very short-run programs such as the example, most of the executed
+instructions involve the loading and dynamic linking of the program. The
+execution of <code class="computeroutput">main</code> (and thus the heap
+allocations) only occur at the very end. For a short-running program like
+this, we can use the <code class="option">--time-unit=B</code> option
+to specify that we want the time unit to instead be the number of bytes
+allocated/deallocated on the heap and stack(s).</p>
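+<p>Concretely, for the example program this is an invocation along the lines
+of:</p>
+<pre class="screen">
+valgrind --tool=massif --time-unit=B example
+</pre>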
+<p>If we re-run the program under Massif with this option, and then
+re-run ms_print, we get this more useful graph:</p>
+<pre class="screen">
+19.63^ ###
+ | #
+ | # ::
+ | # : :::
+ | :::::::::# : : ::
+ | : # : : : ::
+ | : # : : : : :::
+ | : # : : : : : ::
+ | ::::::::::: # : : : : : : :::
+ | : : # : : : : : : : ::
+ | ::::: : # : : : : : : : : ::
+ | @@@: : : # : : : : : : : : : @
+ | ::@ : : : # : : : : : : : : : @
+ | :::: @ : : : # : : : : : : : : : @
+ | ::: : @ : : : # : : : : : : : : : @
+ | ::: : : @ : : : # : : : : : : : : : @
+ | :::: : : : @ : : : # : : : : : : : : : @
+ | ::: : : : : @ : : : # : : : : : : : : : @
+ | :::: : : : : : @ : : : # : : : : : : : : : @
+ | ::: : : : : : : @ : : : # : : : : : : : : : @
+   0 +----------------------------------------------------------------------->KB
+     0                                                                   29.48
+
+Number of snapshots: 25
+ Detailed snapshots: [9, 14 (peak), 24]
+</pre>
+<p>The size of the graph can be changed with ms_print's
+<code class="option">--x</code> and <code class="option">--y</code> options. Each vertical bar
+represents a snapshot, i.e. a measurement of the memory usage at a certain
+point in time. If the next snapshot is more than one column away, a
+horizontal line of characters is drawn from the top of the snapshot to just
+before the next snapshot column. The text at the bottom shows that 25
+snapshots were taken for this program, which is one per heap
+allocation/deallocation, plus a couple of extras. Massif starts by taking
+snapshots for every heap allocation/deallocation, but as a program runs for
+longer, it takes snapshots less frequently. It also discards older
+snapshots as the program goes on; when it reaches the maximum number of
+snapshots (100 by default, although changeable with the
+<code class="option">--max-snapshots</code> option) half of them are
+deleted. This means that a reasonable number of snapshots are always
+maintained.</p>
+<p>Most snapshots are <span class="emphasis"><em>normal</em></span>, and only basic
+information is recorded for them. Normal snapshots are represented in the
+graph by bars consisting of ':' characters.</p>
+<p>Some snapshots are <span class="emphasis"><em>detailed</em></span>. Information about
+where allocations happened is recorded for these snapshots, as we will see
+shortly. Detailed snapshots are represented in the graph by bars consisting
+of '@' characters. The text at the bottom shows that 3 detailed
+snapshots were taken for this program (snapshots 9, 14 and 24). By default,
+every 10th snapshot is detailed, although this can be changed via the
+<code class="option">--detailed-freq</code> option.</p>
+<p>Finally, there is at most one <span class="emphasis"><em>peak</em></span> snapshot. The
+peak snapshot is a detailed snapshot, and records the point where memory
+consumption was greatest. The peak snapshot is represented in the graph by
+a bar consisting of '#' characters. The text at the bottom shows
+that snapshot 14 was the peak.</p>
+<p>Massif's determination of when the peak occurred can be wrong, for
+two reasons.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Peak snapshots are only ever taken after a deallocation
+ happens. This avoids lots of unnecessary peak snapshot recordings
+ (imagine what happens if your program allocates a lot of heap blocks in
+ succession, hitting a new peak every time). But it means that if your
+ program never deallocates any blocks, no peak will be recorded. It also
+ means that if your program does deallocate blocks but later allocates to a
+ higher peak without subsequently deallocating, the reported peak will be
+ too low.
+ </p></li>
+<li class="listitem"><p>Even with this behaviour, recording the peak accurately
+ is slow. So by default Massif records a peak whose size is within 1% of
+ the size of the true peak. This inaccuracy in the peak measurement can be
+ changed with the <code class="option">--peak-inaccuracy</code> option.</p></li>
+</ul></div>
+<p>The following graph is from an execution of Konqueror, the KDE web
+browser. It shows what graphs for larger programs look like.</p>
+<pre class="screen">
+ MB
+3.952^ #
+ | @#:
+ | :@@#:
+ | @@::::@@#:
+ | @ :: :@@#::
+ | @@@ :: :@@#::
+ | @@:@@@ :: :@@#::
+ | :::@ :@@@ :: :@@#::
+ | : :@ :@@@ :: :@@#::
+ | :@: :@ :@@@ :: :@@#::
+ | @@:@: :@ :@@@ :: :@@#:::
+ | : :: ::@@:@: :@ :@@@ :: :@@#:::
+ | :@@: ::::: ::::@@@:::@@:@: :@ :@@@ :: :@@#:::
+ | ::::@@: ::: ::::::: @ :::@@:@: :@ :@@@ :: :@@#:::
+ | @: ::@@: ::: ::::::: @ :::@@:@: :@ :@@@ :: :@@#:::
+ | @: ::@@: ::: ::::::: @ :::@@:@: :@ :@@@ :: :@@#:::
+ | @: ::@@:::::: ::::::: @ :::@@:@: :@ :@@@ :: :@@#:::
+ | ::@@@: ::@@:: ::: ::::::: @ :::@@:@: :@ :@@@ :: :@@#:::
+ | :::::@ @: ::@@:: ::: ::::::: @ :::@@:@: :@ :@@@ :: :@@#:::
+ | @@:::::@ @: ::@@:: ::: ::::::: @ :::@@:@: :@ :@@@ :: :@@#:::
+ 0 +----------------------------------------------------------------------->Mi
+ 0 626.4
+
+Number of snapshots: 63
+ Detailed snapshots: [3, 4, 10, 11, 15, 16, 29, 33, 34, 36, 39, 41,
+ 42, 43, 44, 49, 50, 51, 53, 55, 56, 57 (peak)]
+</pre>
+<p>Note that the larger size units are KB, MB, GB, etc. As is typical
+for memory measurements, these are based on a multiplier of 1024, rather
+than the standard SI multiplier of 1000. Strictly speaking, they should be
+written KiB, MiB, GiB, etc.</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.thesnapshotdetails"></a>9.2.6. The Snapshot Details</h3></div></div></div>
+<p>Returning to our example, the graph is followed by the detailed
+information for each snapshot. The first nine snapshots are normal, so only
+a small amount of information is recorded for each one:</p>
+<pre class="screen">
+--------------------------------------------------------------------------------
+ n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
+--------------------------------------------------------------------------------
+ 0 0 0 0 0 0
+ 1 1,008 1,008 1,000 8 0
+ 2 2,016 2,016 2,000 16 0
+ 3 3,024 3,024 3,000 24 0
+ 4 4,032 4,032 4,000 32 0
+ 5 5,040 5,040 5,000 40 0
+ 6 6,048 6,048 6,000 48 0
+ 7 7,056 7,056 7,000 56 0
+ 8 8,064 8,064 8,000 64 0
+</pre>
+<p>Each normal snapshot records several things.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Its number.</p></li>
+<li class="listitem"><p>The time it was taken. In this case, the time unit is
+ bytes, due to the use of
+ <code class="option">--time-unit=B</code>.</p></li>
+<li class="listitem"><p>The total memory consumption at that point.</p></li>
+<li class="listitem"><p>The number of useful heap bytes allocated at that point.
+ This reflects the number of bytes asked for by the
+ program.</p></li>
+<li class="listitem">
+<p>The number of extra heap bytes allocated at that point.
+ This reflects the number of bytes allocated in excess of what the program
+ asked for. There are two sources of extra heap bytes.</p>
+<p>First, every heap block has administrative bytes associated with it.
+ The exact number of administrative bytes depends on the details of the
+ allocator. By default Massif assumes 8 bytes per block, as can be seen
+ from the example, but this number can be changed via the
+ <code class="option">--heap-admin</code> option.</p>
+<p>Second, allocators often round up the number of bytes asked for to a
+ larger number, usually 8 or 16. This is required to ensure that elements
+ within the block are suitably aligned. If N bytes are asked for, Massif
+ rounds N up to the nearest multiple of the value specified by the
+ <code class="option"><a class="xref" href="manual-core.html#opt.alignment">--alignment</a></code> option.
+ </p>
+</li>
+<li class="listitem"><p>The size of the stack(s). By default, stack profiling is
+ off as it slows Massif down greatly. Therefore, the stack column is zero
+ in the example. Stack profiling can be turned on with the
+ <code class="option">--stacks=yes</code> option.
+
+ </p></li>
+</ul></div>
+<p>The next snapshot is detailed. As well as the basic counts, it gives
+an allocation tree which indicates exactly which pieces of code were
+responsible for allocating heap memory:</p>
+<pre class="screen">
+ 9 9,072 9,072 9,000 72 0
+99.21% (9,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
+->99.21% (9,000B) 0x804841A: main (example.c:20)
+</pre>
+<p>The allocation tree can be read from the top down. The first line
+indicates all heap allocation functions such as <code class="function">malloc</code>
+and C++ <code class="function">new</code>. All heap allocations go through these
+functions, and so all 9,000 useful bytes (which is 99.21% of all allocated
+bytes) go through them. But how were <code class="function">malloc</code> and new
+called? At this point, every allocation so far has been due to line 20
+inside <code class="function">main</code>, hence the second line in the tree. The
+<code class="option">-></code> indicates that main (line 20) called
+<code class="function">malloc</code>.</p>
+<p>Let's see what the subsequent output shows happened next:</p>
+<pre class="screen">
+--------------------------------------------------------------------------------
+ n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
+--------------------------------------------------------------------------------
+ 10 10,080 10,080 10,000 80 0
+ 11 12,088 12,088 12,000 88 0
+ 12 16,096 16,096 16,000 96 0
+ 13 20,104 20,104 20,000 104 0
+ 14 20,104 20,104 20,000 104 0
+99.48% (20,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
+->49.74% (10,000B) 0x804841A: main (example.c:20)
+|
+->39.79% (8,000B) 0x80483C2: g (example.c:5)
+| ->19.90% (4,000B) 0x80483E2: f (example.c:11)
+| | ->19.90% (4,000B) 0x8048431: main (example.c:23)
+| |
+| ->19.90% (4,000B) 0x8048436: main (example.c:25)
+|
+->09.95% (2,000B) 0x80483DA: f (example.c:10)
+ ->09.95% (2,000B) 0x8048431: main (example.c:23)
+</pre>
+<p>The first four snapshots are similar to the previous ones. But then
+the global allocation peak is reached, and a detailed snapshot (number 14)
+is taken. Its allocation tree shows that 20,000B of useful heap memory has
+been allocated, and the lines and arrows indicate that this is from three
+different code locations: line 20, which is responsible for 10,000B
+(49.74%); line 5, which is responsible for 8,000B (39.79%); and line 10,
+which is responsible for 2,000B (9.95%).</p>
+<p>We can then drill down further in the allocation tree. For example,
+of the 8,000B asked for by line 5, half of it was due to a call from line
+11, and half was due to a call from line 25.</p>
+<p>In short, Massif collates the stack trace of every single allocation
+point in the program into a single tree, which gives a complete picture at
+a particular point in time of how and why all heap memory was
+allocated.</p>
+<p>Note that the tree entries correspond not to functions, but to
+individual code locations. For example, if function <code class="function">A</code>
+calls <code class="function">malloc</code>, and function <code class="function">B</code> calls
+<code class="function">A</code> twice, once on line 10 and once on line 11, then
+the two calls will result in two distinct stack traces in the tree. In
+contrast, if <code class="function">B</code> calls <code class="function">A</code> repeatedly
+from line 15 (e.g. due to a loop), then each of those calls will be
+represented by the same stack trace in the tree.</p>
+<p>Note also that each tree entry with children in the example satisfies an
+invariant: the entry's size is equal to the sum of its children's sizes.
+For example, the first entry has size 20,000B, and its children have sizes
+10,000B, 8,000B, and 2,000B. In general, this invariant almost always
+holds. However, in rare circumstances stack traces can be malformed, in
+which case a stack trace can be a sub-trace of another stack trace. This
+means that some entries in the tree may not satisfy the invariant -- the
+entry's size will be greater than the sum of its children's sizes. This is
+not a big problem, but could make the results confusing. Massif can
+sometimes detect when this happens; if it does, it issues a warning:</p>
+<pre class="screen">
+Warning: Malformed stack trace detected. In Massif's output,
+ the size of an entry's child entries may not sum up
+ to the entry's size as they normally do.
+</pre>
+<p>However, Massif does not detect and warn about every such occurrence.
+Fortunately, malformed stack traces are rare in practice.</p>
+<p>Returning now to ms_print's output, the final part is similar:</p>
+<pre class="screen">
+--------------------------------------------------------------------------------
+ n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
+--------------------------------------------------------------------------------
+ 15 21,112 19,096 19,000 96 0
+ 16 22,120 18,088 18,000 88 0
+ 17 23,128 17,080 17,000 80 0
+ 18 24,136 16,072 16,000 72 0
+ 19 25,144 15,064 15,000 64 0
+ 20 26,152 14,056 14,000 56 0
+ 21 27,160 13,048 13,000 48 0
+ 22 28,168 12,040 12,000 40 0
+ 23 29,176 11,032 11,000 32 0
+ 24 30,184 10,024 10,000 24 0
+99.76% (10,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
+->79.81% (8,000B) 0x80483C2: g (example.c:5)
+| ->39.90% (4,000B) 0x80483E2: f (example.c:11)
+| | ->39.90% (4,000B) 0x8048431: main (example.c:23)
+| |
+| ->39.90% (4,000B) 0x8048436: main (example.c:25)
+|
+->19.95% (2,000B) 0x80483DA: f (example.c:10)
+| ->19.95% (2,000B) 0x8048431: main (example.c:23)
+|
+->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
+</pre>
+<p>The final detailed snapshot shows how the heap looked at termination.
+The 00.00% entry represents the code locations for which memory was
+allocated and then freed (line 20 in this case, the memory for which was
+freed on line 28). However, no code location details are given for this
+entry; by default, Massif only records the details for code locations
+responsible for more than 1% of useful memory bytes, and ms_print likewise
+only prints the details for code locations responsible for more than 1%.
+The entries that do not meet this threshold are aggregated. This avoids
+filling up the output with large numbers of unimportant entries. The
+thresholds can be changed with the
+<code class="option">--threshold</code> option that both Massif and
+ms_print support.</p>
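+<p>For example, to lower both thresholds to 0.5% (here
+<code class="computeroutput">myprog</code> and the process ID 12345 are just
+placeholders):</p>
+<pre class="screen">
+valgrind --tool=massif --threshold=0.5 myprog
+ms_print --threshold=0.5 massif.out.12345
+</pre>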
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.forkingprograms"></a>9.2.7. Forking Programs</h3></div></div></div>
+<p>If your program forks, the child will inherit all the profiling data that
+has been gathered for the parent.</p>
+<p>If the output file format string (controlled by
+<code class="option">--massif-out-file</code>) does not contain <code class="option">%p</code>, then
+the outputs from the parent and child will be intermingled in a single output
+file, which will almost certainly make it unreadable by ms_print.</p>
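+<p>For example, an invocation along these lines (with
+<code class="computeroutput">myprog</code> as a placeholder) keeps the
+parent's and child's data in separate files, one per process ID:</p>
+<pre class="screen">
+valgrind --tool=massif --massif-out-file=massif.out.%p myprog
+</pre>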
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.not-measured"></a>9.2.8. Measuring All Memory in a Process</h3></div></div></div>
+<p>
+It is worth emphasising that by default Massif measures only heap memory, i.e.
+memory allocated with
+<code class="function">malloc</code>,
+<code class="function">calloc</code>,
+<code class="function">realloc</code>,
+<code class="function">memalign</code>,
+<code class="function">new</code>,
+<code class="function">new[]</code>,
+and a few other, similar functions. (And it can optionally measure stack
+memory, of course.) This means it does <span class="emphasis"><em>not</em></span> directly
+measure memory allocated with lower-level system calls such as
+<code class="function">mmap</code>,
+<code class="function">mremap</code>, and
+<code class="function">brk</code>.
+</p>
+<p>
+Heap allocation functions such as <code class="function">malloc</code> are built on
+top of these system calls. For example, when needed, an allocator will
+typically call <code class="function">mmap</code> to allocate a large chunk of
+memory, and then hand over pieces of that memory chunk to the client program
+in response to calls to <code class="function">malloc</code> et al. Massif directly
+measures only these higher-level <code class="function">malloc</code> et al calls,
+not the lower-level system calls.
+</p>
+<p>
+Furthermore, a client program may use these lower-level system calls
+directly to allocate memory. By default, Massif does not measure these. Nor
+does it measure the size of code, data and BSS segments. Therefore, the
+numbers reported by Massif may be significantly smaller than those reported by
+tools such as <code class="filename">top</code> that measure a program's total size in
+memory.
+</p>
+<p>
+However, if you wish to measure <span class="emphasis"><em>all</em></span> the memory used by
+your program, you can use the <code class="option">--pages-as-heap=yes</code> option. When this
+option is enabled, Massif's normal heap block profiling is replaced by
+lower-level page profiling. Every page allocated via
+<code class="function">mmap</code> and similar system calls is treated as a distinct
+block. This means that code, data and BSS segments are all measured, as they
+are just memory pages. Even the stack is measured, since it is ultimately
+allocated (and extended when necessary) via <code class="function">mmap</code>; for
+this reason <code class="option">--stacks=yes</code> is not allowed in conjunction with
+<code class="option">--pages-as-heap=yes</code>.
+</p>
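+<p>
+A typical invocation (with <code class="computeroutput">myprog</code> as a
+placeholder) might be:
+</p>
+<pre class="screen">
+valgrind --tool=massif --pages-as-heap=yes myprog
+</pre>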
+<p>
+After <code class="option">--pages-as-heap=yes</code> is used, ms_print's output is
+mostly unchanged. One difference is that the start of each detailed snapshot
+says:
+</p>
+<pre class="screen">
+(page allocation syscalls) mmap/mremap/brk, --alloc-fns, etc.
+</pre>
+<p>instead of the usual:</p>
+
+<pre class="screen">
+(heap allocation functions) malloc/new/new[], --alloc-fns, etc.
+</pre>
+<p>
+The stack traces in the output may be more difficult to read, and interpreting
+them may require some detailed understanding of the lower levels of a program,
+such as the memory allocators. But for some programs having the full information
+about memory usage can be very useful.
+</p>
+</div>
+<div class="sect2">
+<div class="titlepage"><div><div><h3 class="title">
+<a name="ms-manual.acting"></a>9.2.9. Acting on Massif's Information</h3></div></div></div>
+<p>Massif's information is generally fairly easy to act upon. The
+obvious place to start looking is the peak snapshot.</p>
+<p>It can also be useful to look at the overall shape of the graph, to
+see if memory usage climbs and falls as you expect; spikes in the graph
+might be worth investigating.</p>
+<p>The detailed snapshots can get quite large. It is worth viewing them
+in a very wide window. It's also a good idea to view them with a text
+editor. That makes it easy to scroll up and down while keeping the cursor
+in a particular column, which makes following the allocation chains easier.
+</p>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.options"></a>9.3. Massif Command-line Options</h2></div></div></div>
+<p>Massif-specific command-line options are:</p>
+<div class="variablelist">
+<a name="ms.opts.list"></a><dl class="variablelist">
+<dt>
+<a name="opt.heap"></a><span class="term">
+ <code class="option">--heap=<yes|no> [default: yes] </code>
+ </span>
+</dt>
+<dd><p>Specifies whether heap profiling should be done.</p></dd>
+<dt>
+<a name="opt.heap-admin"></a><span class="term">
+ <code class="option">--heap-admin=<size> [default: 8] </code>
+ </span>
+</dt>
+<dd><p>If heap profiling is enabled, gives the number of administrative
+ bytes per block to use. This should be an estimate of the average,
+ since it may vary. For example, the allocator used by
+    glibc on Linux requires somewhere between 4 and
+ 15 bytes per block, depending on various factors. That allocator also
+ requires admin space for freed blocks, but Massif cannot
+ account for this.</p></dd>
+<dt>
+<a name="opt.stacks"></a><span class="term">
+ <code class="option">--stacks=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>Specifies whether stack profiling should be done. This option
+ slows Massif down greatly, and so is off by default. Note that Massif
+ assumes that the main stack has size zero at start-up. This is not
+ true, but doing otherwise accurately is difficult. Furthermore,
+ starting at zero better indicates the size of the part of the main
+ stack that a user program actually has control over.</p></dd>
+<dt>
+<a name="opt.pages-as-heap"></a><span class="term">
+ <code class="option">--pages-as-heap=<yes|no> [default: no] </code>
+ </span>
+</dt>
+<dd><p>Tells Massif to profile memory at the page level rather
+ than at the malloc'd block level. See above for details.
+ </p></dd>
+<dt>
+<a name="opt.depth"></a><span class="term">
+ <code class="option">--depth=<number> [default: 30] </code>
+ </span>
+</dt>
+<dd><p>Maximum depth of the allocation trees recorded for detailed
+ snapshots. Increasing it will make Massif run somewhat more slowly,
+ use more memory, and produce bigger output files.</p></dd>
+<dt>
+<a name="opt.alloc-fn"></a><span class="term">
+ <code class="option">--alloc-fn=<name> </code>
+ </span>
+</dt>
+<dd>
+<p>Functions specified with this option will be treated as though
+ they were a heap allocation function such as
+ <code class="function">malloc</code>. This is useful for functions that are
+ wrappers to <code class="function">malloc</code> or <code class="function">new</code>,
+ which can fill up the allocation trees with uninteresting information.
+ This option can be specified multiple times on the command line, to
+ name multiple functions.</p>
+<p>Note that the named function will only be treated this way if it is
+ the top entry in a stack trace, or just below another function treated
+ this way. For example, if you have a function
+ <code class="function">malloc1</code> that wraps <code class="function">malloc</code>,
+ and <code class="function">malloc2</code> that wraps
+ <code class="function">malloc1</code>, just specifying
+ <code class="option">--alloc-fn=malloc2</code> will have no effect. You need to
+ specify <code class="option">--alloc-fn=malloc1</code> as well. This is a little
+ inconvenient, but the reason is that checking for allocation functions
+ is slow, and it saves a lot of time if Massif can stop looking through
+ the stack trace entries as soon as it finds one that doesn't match
+ rather than having to continue through all the entries.</p>
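+<p>As an illustrative sketch, with wrappers along these lines:</p>
+<pre class="programlisting">
+  void* malloc1(size_t n) { return malloc(n);  }   // wraps malloc
+  void* malloc2(size_t n) { return malloc1(n); }   // wraps malloc1
+</pre>
+<p>you would pass both
+  <code class="option">--alloc-fn=malloc1 --alloc-fn=malloc2</code> so that
+  allocations made through either wrapper are attributed to their
+  callers.</p>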
+<p>Note that C++ names are demangled. Note also that overloaded
+ C++ names must be written in full. Single quotes may be necessary to
+ prevent the shell from breaking them up. For example:
+</p>
+<pre class="screen">
+--alloc-fn='operator new(unsigned, std::nothrow_t const&)'
+</pre>
+<p>
+ </p>
+</dd>
+<dt>
+<a name="opt.ignore-fn"></a><span class="term">
+ <code class="option">--ignore-fn=<name> </code>
+ </span>
+</dt>
+<dd>
+<p>Any direct heap allocation (i.e. a call to
+ <code class="function">malloc</code>, <code class="function">new</code>, etc, or a call
+ to a function named by an <code class="option">--alloc-fn</code>
+ option) that occurs in a function specified by this option will be
+ ignored. This is mostly useful for testing purposes. This option can
+ be specified multiple times on the command line, to name multiple
+ functions.
+ </p>
+<p>Any <code class="function">realloc</code> of an ignored block will
+ also be ignored, even if the <code class="function">realloc</code> call does
+ not occur in an ignored function. This avoids the possibility of
+ negative heap sizes if ignored blocks are shrunk with
+ <code class="function">realloc</code>.
+ </p>
+<p>The rules for writing C++ function names are the same as
+ for <code class="option">--alloc-fn</code> above.
+ </p>
+</dd>
+<dt>
+<a name="opt.threshold"></a><span class="term">
+ <code class="option">--threshold=<m.n> [default: 1.0] </code>
+ </span>
+</dt>
+<dd><p>The significance threshold for heap allocations, as a
+ percentage of total memory size. Allocation tree entries that account
+ for less than this will be aggregated. Note that this should be
+ specified in tandem with ms_print's option of the same name.</p></dd>
+<dt>
+<a name="opt.peak-inaccuracy"></a><span class="term">
+ <code class="option">--peak-inaccuracy=<m.n> [default: 1.0] </code>
+ </span>
+</dt>
+<dd><p>Massif does not necessarily record the actual global memory
+ allocation peak; by default it records a peak only when the global
+ memory allocation size exceeds the previous peak by at least 1.0%.
+ This is because there can be many local allocation peaks along the way,
+ and doing a detailed snapshot for every one would be expensive and
+ wasteful, as all but one of them will be later discarded. This
+ inaccuracy can be changed (even to 0.0%) via this option, but Massif
+ will run drastically slower as the number approaches zero.</p></dd>
+<dt>
+<a name="opt.time-unit"></a><span class="term">
+ <code class="option">--time-unit=<i|ms|B> [default: i] </code>
+ </span>
+</dt>
+<dd><p>The time unit used for the profiling. There are three
+ possibilities: instructions executed (i), which is good for most
+ cases; real (wallclock) time (ms, i.e. milliseconds), which is
+ sometimes useful; and bytes allocated/deallocated on the heap and/or
+ stack (B), which is useful for very short-run programs, and for
+ testing purposes, because it is the most reproducible across different
+ machines.</p></dd>
+<dt>
+<a name="opt.detailed-freq"></a><span class="term">
+ <code class="option">--detailed-freq=<n> [default: 10] </code>
+ </span>
+</dt>
+<dd><p>Frequency of detailed snapshots. With
+ <code class="option">--detailed-freq=1</code>, every snapshot is
+ detailed.</p></dd>
+<dt>
+<a name="opt.max-snapshots"></a><span class="term">
+ <code class="option">--max-snapshots=<n> [default: 100] </code>
+ </span>
+</dt>
+<dd><p>The maximum number of snapshots recorded. If set to N, for all
+ programs except very short-running ones, the final number of snapshots
+ will be between N/2 and N.</p></dd>
+<dt>
+<a name="opt.massif-out-file"></a><span class="term">
+ <code class="option">--massif-out-file=<file> [default: massif.out.%p] </code>
+ </span>
+</dt>
+<dd><p>Write the profile data to <code class="computeroutput">file</code>
+ rather than to the default output file,
+ <code class="computeroutput">massif.out.<pid></code>. The
+ <code class="option">%p</code> and <code class="option">%q</code> format specifiers can be
+ used to embed the process ID and/or the contents of an environment
+ variable in the name, as is the case for the core option
+ <code class="option"><a class="xref" href="manual-core.html#opt.log-file">--log-file</a></code>.
+ </p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.monitor-commands"></a>9.4. Massif Monitor Commands</h2></div></div></div>
+<p>The Massif tool provides monitor commands handled by the Valgrind
+gdbserver (see <a class="xref" href="manual-core-adv.html#manual-core-adv.gdbserver-commandhandling" title="3.2.5. Monitor command handling by the Valgrind gdbserver">Monitor command handling by the Valgrind gdbserver</a>).
+</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p><code class="varname">snapshot [<filename>]</code> requests
+ to take a snapshot and save it in the given <filename>
+ (default massif.vgdb.out).
+ </p></li>
+<li class="listitem"><p><code class="varname">detailed_snapshot [<filename>]</code>
+ requests to take a detailed snapshot and save it in the given
+ <filename> (default massif.vgdb.out).
+ </p></li>
+<li class="listitem"><p><code class="varname">all_snapshots [<filename>]</code>
+ requests to take all captured snapshots so far and save them in the given
+ <filename> (default massif.vgdb.out).
+ </p></li>
+</ul></div>
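+<p>For example, assuming the program is running under Massif with the
+embedded gdbserver enabled (the default), a detailed snapshot can be
+requested from another shell with something like:</p>
+<pre class="screen">
+vgdb --pid=&lt;pid&gt; detailed_snapshot massif.vgdb.out
+</pre>
+<p>where &lt;pid&gt; is the process ID of the program running under
+Massif.</p>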
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.clientreqs"></a>9.5. Massif Client Requests</h2></div></div></div>
+<p>Massif does not have a <code class="filename">massif.h</code> file, but it does
+implement two of the core client requests:
+<code class="function">VALGRIND_MALLOCLIKE_BLOCK</code> and
+<code class="function">VALGRIND_FREELIKE_BLOCK</code>; they are described in
+<a class="xref" href="manual-core-adv.html#manual-core-adv.clientreq" title="3.1. The Client Request mechanism">The Client Request mechanism</a>.
+</p>
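+<p>
+As a rough sketch of how a custom allocator might use these requests (the
+pool functions here are hypothetical stand-ins, defined trivially so the
+fragment compiles):
+</p>
+<pre class="programlisting">
+  #include &lt;stdlib.h&gt;
+  #include &lt;valgrind/valgrind.h&gt;
+
+  // stand-ins for a hypothetical pool allocator
+  static void* pool_get(size_t n) { return malloc(n); }
+  static void  pool_put(void* p)  { free(p); }
+
+  void* my_alloc(size_t n)
+  {
+     void* p = pool_get(n);
+     VALGRIND_MALLOCLIKE_BLOCK(p, n, /*rzB*/0, /*is_zeroed*/0);
+     return p;
+  }
+
+  void my_free(void* p)
+  {
+     pool_put(p);
+     VALGRIND_FREELIKE_BLOCK(p, /*rzB*/0);
+  }
+</pre>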
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.ms_print-options"></a>9.6. ms_print Command-line Options</h2></div></div></div>
+<p>ms_print's options are:</p>
+<div class="variablelist">
+<a name="ms_print.opts.list"></a><dl class="variablelist">
+<dt><span class="term">
+ <code class="option">-h --help </code>
+ </span></dt>
+<dd><p>Show the help message.</p></dd>
+<dt><span class="term">
+ <code class="option">--version </code>
+ </span></dt>
+<dd><p>Show the version number.</p></dd>
+<dt><span class="term">
+ <code class="option">--threshold=<m.n> [default: 1.0] </code>
+ </span></dt>
+<dd><p>Same as Massif's <code class="option">--threshold</code> option, but
+ applied after profiling rather than during.</p></dd>
+<dt><span class="term">
+ <code class="option">--x=<4..1000> [default: 72]</code>
+ </span></dt>
+<dd><p>Width of the graph, in columns.</p></dd>
+<dt><span class="term">
+ <code class="option">--y=<4..1000> [default: 20] </code>
+ </span></dt>
+<dd><p>Height of the graph, in rows.</p></dd>
+</dl>
+</div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.fileformat"></a>9.7. Massif's Output File Format</h2></div></div></div>
+<p>Massif's file format is plain text (i.e. not binary) and deliberately
+easy to read for both humans and machines. Nonetheless, the exact format
+is not described here. This is because the format is currently very
+Massif-specific. In the future we hope to make the format more general, and
+thus suitable for possible use with other tools. Once this has been done,
+the format will be documented here.</p>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="drd-manual.html"><< 8. DRD: a thread error detector</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="dh-manual.html">10. DHAT: a dynamic heap analysis tool >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/nl-manual.html b/docs/html/nl-manual.html
new file mode 100644
index 0000000..643a272
--- /dev/null
+++ b/docs/html/nl-manual.html
@@ -0,0 +1,56 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>14. Nulgrind: the minimal Valgrind tool</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="lk-manual.html" title="13. Lackey: an example tool">
+<link rel="next" href="FAQ.html" title="Valgrind FAQ">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="lk-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="FAQ.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="nl-manual"></a>14. Nulgrind: the minimal Valgrind tool</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc"><dt><span class="sect1"><a href="nl-manual.html#ms-manual.overview">14.1. Overview</a></span></dt></dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=none</code> on the Valgrind
+command line.</p>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="ms-manual.overview"></a>14.1. Overview</h2></div></div></div>
+<p>Nulgrind is the simplest possible Valgrind tool. It performs no
+instrumentation or analysis of a program, just runs it normally. It is
+mainly of use for Valgrind's developers for debugging and regression
+testing.</p>
+<p>Nonetheless you can run programs with Nulgrind. They will run
+roughly 5 times more slowly than normal, for no useful effect. Note
+that you need to use the option <code class="option">--tool=none</code> to run
+Nulgrind (ie. not <code class="option">--tool=nulgrind</code>).</p>
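+<p>For example (with <code class="computeroutput">myprog</code> as a
+placeholder):</p>
+<pre class="screen">
+valgrind --tool=none myprog arg1 arg2
+</pre>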
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="lk-manual.html"><< 13. Lackey: an example tool</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="FAQ.html">Valgrind FAQ >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/quick-start.html b/docs/html/quick-start.html
new file mode 100644
index 0000000..00d7b07
--- /dev/null
+++ b/docs/html/quick-start.html
@@ -0,0 +1,203 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>The Valgrind Quick Start Guide</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="QuickStart.html" title="The Valgrind Quick Start Guide">
+<link rel="prev" href="QuickStart.html" title="The Valgrind Quick Start Guide">
+<link rel="next" href="manual.html" title="Valgrind User Manual">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="QuickStart.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="QuickStart.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">The Valgrind Quick Start Guide</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="article">
+<div class="titlepage">
+<div><div><h1 class="title">
+<a name="quick-start"></a>The Valgrind Quick Start Guide</h1></div></div>
+<hr>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="quick-start.intro"></a>1. Introduction</h2></div></div></div>
+<p>The Valgrind tool suite provides a number of debugging and
+profiling tools that help you make your programs faster and more correct.
+The most popular of these tools is called Memcheck. It can detect many
+memory-related errors that are common in C and C++ programs and that can
+lead to crashes and unpredictable behaviour.</p>
+<p>The rest of this guide gives the minimum information you need to start
+detecting memory errors in your program with Memcheck. For full
+documentation of Memcheck and the other tools, please read the User Manual.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="quick-start.prepare"></a>2. Preparing your program</h2></div></div></div>
+<p>Compile your program with <code class="option">-g</code> to include debugging
+information so that Memcheck's error messages include exact line
+numbers. Using <code class="option">-O0</code> is also a good
+idea, if you can tolerate the slowdown. With
+<code class="option">-O1</code> line numbers in error messages can
+be inaccurate, although generally speaking running Memcheck on code compiled
+at <code class="option">-O1</code> works fairly well, and the speed improvement
+compared to running <code class="option">-O0</code> is quite significant.
+Use of
+<code class="option">-O2</code> and above is not recommended as
+Memcheck occasionally reports uninitialised-value errors which don't
+really exist.</p>
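+<p>For example, with GCC (and <code class="computeroutput">myprog.c</code> as
+a placeholder for your source file):</p>
+<pre class="programlisting"> gcc -g -O0 myprog.c -o myprog
+</pre>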
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="quick-start.mcrun"></a>3. Running your program under Memcheck</h2></div></div></div>
+<p>If you normally run your program like this:</p>
+<pre class="programlisting"> myprog arg1 arg2
+</pre>
+<p>Use this command line:</p>
+<pre class="programlisting"> valgrind --leak-check=yes myprog arg1 arg2
+</pre>
+<p>Memcheck is the default tool. The <code class="option">--leak-check</code>
+option turns on the detailed memory leak detector.</p>
+<p>Your program will run much slower (eg. 20 to 30 times) than
+normal, and use a lot more memory. Memcheck will issue messages about
+memory errors and leaks that it detects.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="quick-start.interpret"></a>4. Interpreting Memcheck's output</h2></div></div></div>
+<p>Here's an example C program, in a file called a.c, with a memory error
+and a memory leak.</p>
+<pre class="programlisting">
+ #include <stdlib.h>
+
+ void f(void)
+ {
+ int* x = malloc(10 * sizeof(int));
+ x[10] = 0; // problem 1: heap block overrun
+ } // problem 2: memory leak -- x not freed
+
+ int main(void)
+ {
+ f();
+ return 0;
+ }
+</pre>
+<p>Most error messages look like the following, which describes
+problem 1, the heap block overrun:</p>
+<pre class="programlisting">
+ ==19182== Invalid write of size 4
+  ==19182==    at 0x804838F: f (a.c:6)
+  ==19182==    by 0x80483AB: main (a.c:11)
+  ==19182==  Address 0x1BA45050 is 0 bytes after a block of size 40 alloc'd
+  ==19182==    at 0x1B8FF5CD: malloc (vg_replace_malloc.c:130)
+  ==19182==    by 0x8048385: f (a.c:5)
+  ==19182==    by 0x80483AB: main (a.c:11)
+</pre>
+<p>Things to notice:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>There is a lot of information in each error message; read it
+ carefully.</p></li>
+<li class="listitem"><p>The 19182 is the process ID; it's usually unimportant.</p></li>
+<li class="listitem"><p>The first line ("Invalid write...") tells you what kind of
+ error it is. Here, the program wrote to some memory it should not
+ have due to a heap block overrun.</p></li>
+<li class="listitem"><p>Below the first line is a stack trace telling you where the
+ problem occurred. Stack traces can get quite large, and be
+ confusing, especially if you are using the C++ STL. Reading them
+ from the bottom up can help. If the stack trace is not big enough,
+ use the <code class="option">--num-callers</code> option to make it
+ bigger.</p></li>
+<li class="listitem"><p>The code addresses (eg. 0x804838F) are usually unimportant, but
+ occasionally crucial for tracking down weirder bugs.</p></li>
+<li class="listitem"><p>Some error messages have a second component which describes
+ the memory address involved. This one shows that the written memory
+ is just past the end of a block allocated with malloc() on line 5 of
+    a.c.</p></li>
+</ul></div>
+<p>It's worth fixing errors in the order they are reported, as
+later errors can be caused by earlier errors. Failing to do this is a
+common cause of difficulty with Memcheck.</p>
+<p>Memory leak messages look like this:</p>
+<pre class="programlisting">
+ ==19182== 40 bytes in 1 blocks are definitely lost in loss record 1 of 1
+ ==19182== at 0x1B8FF5CD: malloc (vg_replace_malloc.c:130)
+ ==19182== by 0x8048385: f (a.c:5)
+ ==19182== by 0x80483AB: main (a.c:11)
+</pre>
+<p>The stack trace tells you where the leaked memory was allocated.
+Memcheck cannot tell you why the memory leaked, unfortunately.
+(Ignore the "vg_replace_malloc.c", that's an implementation
+detail.)</p>
+<p>There are several kinds of leaks; the two most important
+categories are:</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>"definitely lost": your program is leaking memory -- fix
+ it!</p></li>
+<li class="listitem"><p>"probably lost": your program is leaking memory, unless you're
+ doing funny things with pointers (such as moving them to point to
+ the middle of a heap block).</p></li>
+</ul></div>
+<p>Memcheck also reports uses of uninitialised values, most commonly with
+the message "Conditional jump or move depends on uninitialised
+value(s)". It can be difficult to determine the root cause of these errors.
+Try using the <code class="option">--track-origins=yes</code> option to get extra information.
+This makes Memcheck run slower, but the extra information you get often
+saves a lot of time figuring out where the uninitialised values are coming
+from.</p>
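+<p>For example:</p>
+<pre class="programlisting"> valgrind --leak-check=yes --track-origins=yes myprog arg1 arg2
+</pre>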
+<p>If you don't understand an error message, please consult
+<a class="xref" href="mc-manual.html#mc-manual.errormsgs" title="4.2. Explanation of error messages from Memcheck">Explanation of error messages from Memcheck</a> in the <a class="xref" href="manual.html" title="Valgrind User Manual">Valgrind User Manual</a>
+which has examples of all the error messages Memcheck produces.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="quick-start.caveats"></a>5. Caveats</h2></div></div></div>
+<p>Memcheck is not perfect; it occasionally produces false positives,
+and there are mechanisms for suppressing these (see
+<a class="xref" href="manual-core.html#manual-core.suppress" title="2.5. Suppressing errors">Suppressing errors</a> in the <a class="xref" href="manual.html" title="Valgrind User Manual">Valgrind User Manual</a>).
+However, it is typically right 99% of the time, so you should be wary of
+ignoring its error messages. After all, you wouldn't ignore warning
+messages produced by a compiler, right? The suppression mechanism is
+also useful if Memcheck is reporting errors in library code that you
+cannot change. The default suppression set hides a lot of these, but you
+may come across more.</p>
+<p>Memcheck cannot detect every memory error your program has.
+For example, it can't detect out-of-range reads or writes to arrays
+that are allocated statically or on the stack. But it should detect many
+errors that could crash your program (eg. cause a segmentation
+fault).</p>
+<p>Try to make your program so clean that Memcheck reports no
+errors. Once you achieve this state, it is much easier to see when
+changes to the program cause Memcheck to report new errors.
+Experience from several years of Memcheck use shows that it is
+possible to make even huge programs run Memcheck-clean. For example,
+large parts of KDE, OpenOffice.org and Firefox are Memcheck-clean, or very
+close to it.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="quick-start.info"></a>6. More information</h2></div></div></div>
+<p>Please consult the <a class="xref" href="FAQ.html" title="Valgrind FAQ">Valgrind FAQ</a> and the
+<a class="xref" href="manual.html" title="Valgrind User Manual">Valgrind User Manual</a>, which have much more information. Note that
+the other tools in the Valgrind distribution can be invoked with the
+<code class="option">--tool</code> option.</p>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="QuickStart.html"><< The Valgrind Quick Start Guide</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="QuickStart.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="manual.html">Valgrind User Manual >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/sg-manual.html b/docs/html/sg-manual.html
new file mode 100644
index 0000000..ec96ab9
--- /dev/null
+++ b/docs/html/sg-manual.html
@@ -0,0 +1,264 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>11. SGCheck: an experimental stack and global array overrun detector</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="manual.html" title="Valgrind User Manual">
+<link rel="prev" href="dh-manual.html" title="10. DHAT: a dynamic heap analysis tool">
+<link rel="next" href="bbv-manual.html" title="12. BBV: an experimental basic block vector generation tool">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="dh-manual.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="manual.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind User Manual</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="bbv-manual.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="chapter">
+<div class="titlepage"><div><div><h1 class="title">
+<a name="sg-manual"></a>11. SGCheck: an experimental stack and global array overrun detector</h1></div></div></div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.overview">11.1. Overview</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.options">11.2. SGCheck Command-line Options</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.how-works.sg-checks">11.3. How SGCheck Works</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.cmp-w-memcheck">11.4. Comparison with Memcheck</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.limitations">11.5. Limitations</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.todo-user-visible">11.6. Still To Do: User-visible Functionality</a></span></dt>
+<dt><span class="sect1"><a href="sg-manual.html#sg-manual.todo-implementation">11.7. Still To Do: Implementation Tidying</a></span></dt>
+</dl>
+</div>
+<p>To use this tool, you must specify
+<code class="option">--tool=exp-sgcheck</code> on the Valgrind
+command line.</p>
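+<p>For example (with <code class="computeroutput">myprog</code> as a
+placeholder):</p>
+<pre class="screen">
+valgrind --tool=exp-sgcheck myprog arg1 arg2
+</pre>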
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="sg-manual.overview"></a>11.1. Overview</h2></div></div></div>
+<p>SGCheck is a tool for finding overruns of stack and global
+arrays. It works by using a heuristic approach derived from an
+observation about the likely forms of stack and global array accesses.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="sg-manual.options"></a>11.2. SGCheck Command-line Options</h2></div></div></div>
+<p><a name="sg.opts.list"></a>There are no SGCheck-specific command-line options at present.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="sg-manual.how-works.sg-checks"></a>11.3. How SGCheck Works</h2></div></div></div>
+<p>When a source file is compiled
+with <code class="option">-g</code>, the compiler attaches DWARF3
+debugging information which describes the location of all stack and
+global arrays in the file.</p>
+<p>Checking of accesses to such arrays would then be relatively
+simple, if the compiler could also tell us which array (if any) each
+memory referencing instruction was supposed to access. Unfortunately
+the DWARF3 debugging format does not provide a way to represent such
+information, so we have to resort to a heuristic technique to
+approximate it. The key observation is that
+ <span class="emphasis"><em>
+ if a memory referencing instruction accesses inside a stack or
+ global array once, then it is highly likely to always access that
+ same array</em></span>.</p>
+<p>To see how this might be useful, consider the following buggy
+fragment:</p>
+<pre class="programlisting">
+ { int i, a[10]; // both are auto vars
+ for (i = 0; i <= 10; i++)
+ a[i] = 42;
+ }
+</pre>
+<p>At run time we will know the precise address
+of <code class="computeroutput">a[]</code> on the stack, and so we can
+observe that the first store resulting from <code class="computeroutput">a[i] =
+42</code> writes <code class="computeroutput">a[]</code>, and
+we will (correctly) assume that that instruction is intended always to
+access <code class="computeroutput">a[]</code>. Then, on the 11th
+iteration, it accesses somewhere else, possibly a different local,
+possibly an unaccounted-for area of the stack (eg. a spill slot), so
+SGCheck reports an error.</p>
+<p>There is an important caveat.</p>
+<p>Imagine a function such as <code class="function">memcpy</code>, which is used
+to read and write many different areas of memory over the lifetime of the
+program. If we insist that the read and write instructions in its memory
+copying loop only ever access one particular stack or global variable, we
+will be flooded with errors resulting from calls to
+<code class="function">memcpy</code>.</p>
+<p>To avoid this problem, SGCheck instantiates fresh likely-target
+records for each entry to a function, and discards them on exit. This
+allows detection of cases where (e.g.) <code class="function">memcpy</code>
+overflows its source or destination buffers for any specific call, but
+does not carry any restriction from one call to the next. Indeed,
+multiple threads may make multiple simultaneous calls to
+(e.g.) <code class="function">memcpy</code> without mutual interference.</p>
+<p>It is important to note that the association is done between
+ a <span class="emphasis"><em>binary instruction</em></span> and an array, the
+ <span class="emphasis"><em>first time</em></span> this binary instruction accesses an
+ array during a function call. When the same instruction is executed
+ again during the same function call, then SGCheck might report a
+ problem, if these further executions are not accessing the same
+ array. This technique causes several limitations in SGCheck, see
+ <a class="xref" href="sg-manual.html#sg-manual.limitations" title="11.5. Limitations">Limitations</a>.
+</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="sg-manual.cmp-w-memcheck"></a>11.4. Comparison with Memcheck</h2></div></div></div>
+<p>SGCheck and Memcheck are complementary: their capabilities do
+not overlap. Memcheck performs bounds checks and use-after-free
+checks for heap arrays. It also finds uses of uninitialised values
+created by heap or stack allocations. But it does not perform bounds
+checking for stack or global arrays.</p>
+<p>SGCheck, on the other hand, does do bounds checking for stack or
+global arrays, but it doesn't do anything else.</p>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="sg-manual.limitations"></a>11.5. Limitations</h2></div></div></div>
+<p>This is an experimental tool, which relies rather too heavily on some
+not-as-robust-as-I-would-like assumptions about the behaviour of correct
+programs. There are a number of limitations which you should be aware
+of.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem">
+<p>False negatives (missed errors): it follows from the
+ description above (<a class="xref" href="sg-manual.html#sg-manual.how-works.sg-checks" title="11.3. How SGCheck Works">How SGCheck Works</a>)
+ that the first access by a memory referencing instruction to a
+ stack or global array creates an association between that
+ instruction and the array, which is checked on subsequent accesses
+ by that instruction, until the containing function exits. Hence,
+ the first access by an instruction to an array (in any given
+ function instantiation) is not checked for overrun, since SGCheck
+ uses that as the "example" of how subsequent accesses should
+ behave.</p>
+<p>It also means that errors will not be found in an instruction
+ executed only once (e.g. because this instruction is not in a loop,
+ or the loop is executed only once).</p>
+</li>
+<li class="listitem">
+<p>False positives (false errors): similarly, and more serious,
+ it is clearly possible to write legitimate pieces of code which
+ break the basic assumption upon which the checking algorithm
+ depends. For example:</p>
+<pre class="programlisting">
+ { int a[10], b[10], *p, i;
+ for (i = 0; i < 10; i++) {
+ p = /* arbitrary condition */ ? &a[i] : &b[i];
+ *p = 42;
+ }
+ }
+</pre>
+<p>In this case the store sometimes
+ accesses <code class="computeroutput">a[]</code> and
+ sometimes <code class="computeroutput">b[]</code>, but in no cases is
+ the addressed array overrun. Nevertheless the change in target
+ will cause an error to be reported.</p>
+<p>It is hard to see how to get around this problem. The only
+ mitigating factor is that such constructions appear very rare, at
+ least judging from the results using the tool so far. Such a
+ construction appears only once in the Valgrind sources (running
+ Valgrind on Valgrind) and perhaps two or three times for a start
+ and exit of Firefox. The best that can be done is to suppress the
+ errors.</p>
+</li>
+<li class="listitem"><p>Performance: SGCheck has to read all of
+ the DWARF3 type and variable information on the executable and its
+ shared objects. This is computationally expensive and makes
+ startup quite slow. You can expect debuginfo reading time to be in
+ the region of a minute for an OpenOffice sized application, on a
+ 2.4 GHz Core 2 machine. Reading this information also requires a
+ lot of memory. To make it viable, SGCheck goes to considerable
+ trouble to compress the in-memory representation of the DWARF3
+ data, which is why the process of reading it appears slow.</p></li>
+<li class="listitem"><p>Performance: SGCheck runs slower than Memcheck. This is
+ partly due to a lack of tuning, but partly due to algorithmic
+ difficulties. The
+ stack and global checks can sometimes require a number of range
+ checks per memory access, and these are difficult to short-circuit,
+ despite considerable efforts having been made. A
+ redesign and reimplementation could potentially make it much faster.
+ </p></li>
+<li class="listitem">
+<p>Coverage: Stack and global checking is fragile. If a shared
+ object does not have debug information attached, then SGCheck will
+ not be able to determine the bounds of any stack or global arrays
+ defined within that shared object, and so will not be able to check
+ accesses to them. This is true even when those arrays are accessed
+ from some other shared object which was compiled with debug
+ info.</p>
+<p>At the moment SGCheck accepts objects lacking debuginfo
+ without comment. This is dangerous as it causes SGCheck to
+ silently skip stack and global checking for such objects. It would
+ be better to print a warning in such circumstances.</p>
+</li>
+<li class="listitem"><p>Coverage: SGCheck does not check whether the areas read
+ or written by system calls do overrun stack or global arrays. This
+ would be easy to add.</p></li>
+<li class="listitem"><p>Platforms: the stack/global checks won't work properly on
+ PowerPC, ARM or S390X platforms, only on X86 and AMD64 targets.
+ That's because the stack and global checking requires tracking
+ function calls and exits reliably, and there's no obvious way to do
+ it on ABIs that use a link register for function returns.
+ </p></li>
+<li class="listitem"><p>Robustness: related to the previous point. Function
+ call/exit tracking for X86 and AMD64 is believed to work properly
+ even in the presence of longjmps within the same stack (although
+ this has not been tested). However, code which switches stacks is
+ likely to cause breakage/chaos.</p></li>
+</ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="sg-manual.todo-user-visible"></a>11.6. Still To Do: User-visible Functionality</h2></div></div></div>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p>Extend system call checking to work on stack and global arrays.</p></li>
+<li class="listitem"><p>Print a warning if a shared object does not have debug info
+ attached, or if, for whatever reason, debug info could not be
+ found, or read.</p></li>
+<li class="listitem"><p>Add some heuristic filtering that removes obvious false
+ positives. This would be easy to do. For example, an access
+ transition from a heap to a stack object almost certainly isn't a
+ bug and so should not be reported to the user.</p></li>
+</ul></div>
+</div>
+<div class="sect1">
+<div class="titlepage"><div><div><h2 class="title" style="clear: both">
+<a name="sg-manual.todo-implementation"></a>11.7. Still To Do: Implementation Tidying</h2></div></div></div>
+<p>Items marked CRITICAL are considered important for correctness:
+non-fixage of them is liable to lead to crashes or assertion failures
+in real use.</p>
+<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
+<li class="listitem"><p> sg_main.c: Redesign and reimplement the basic checking
+ algorithm. It could be done much faster than it is -- the current
+ implementation isn't very good.
+ </p></li>
+<li class="listitem"><p> sg_main.c: Improve the performance of the stack / global
+ checks by doing some up-front filtering to ignore references in
+ areas which "obviously" can't be stack or globals. This will
+ require using information that m_aspacemgr knows about the address
+ space layout.</p></li>
+<li class="listitem"><p>sg_main.c: fix compute_II_hash to make it a bit more sensible
+ for ppc32/64 targets (except that sg_ doesn't work on ppc32/64
+ targets, so this is a bit academic at the moment).</p></li>
+</ul></div>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="dh-manual.html"><< 10. DHAT: a dynamic heap analysis tool</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="manual.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="bbv-manual.html">12. BBV: an experimental basic block vector generation tool >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/tech-docs.html b/docs/html/tech-docs.html
new file mode 100644
index 0000000..f2fc1e0
--- /dev/null
+++ b/docs/html/tech-docs.html
@@ -0,0 +1,98 @@
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+<title>Valgrind Technical Documentation</title>
+<link rel="stylesheet" type="text/css" href="vg_basic.css">
+<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
+<link rel="home" href="index.html" title="Valgrind Documentation">
+<link rel="up" href="index.html" title="Valgrind Documentation">
+<link rel="prev" href="faq.html" title="Valgrind Frequently Asked Questions">
+<link rel="next" href="design-impl.html" title="1. The Design and Implementation of Valgrind">
+</head>
+<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
+<div><table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header"><tr>
+<td width="22px" align="center" valign="middle"><a accesskey="p" href="faq.html"><img src="images/prev.png" width="18" height="21" border="0" alt="Prev"></a></td>
+<td width="25px" align="center" valign="middle"><a accesskey="u" href="index.html"><img src="images/up.png" width="21" height="18" border="0" alt="Up"></a></td>
+<td width="31px" align="center" valign="middle"><a accesskey="h" href="index.html"><img src="images/home.png" width="27" height="20" border="0" alt="Up"></a></td>
+<th align="center" valign="middle">Valgrind Documentation</th>
+<td width="22px" align="center" valign="middle"><a accesskey="n" href="design-impl.html"><img src="images/next.png" width="18" height="21" border="0" alt="Next"></a></td>
+</tr></table></div>
+<div class="book">
+<div class="titlepage">
+<div>
+<div><h1 class="title">
+<a name="tech-docs"></a>Valgrind Technical Documentation</h1></div>
+<div><p class="releaseinfo">Release 3.12.0 20 October 2016</p></div>
+<div><p class="copyright">Copyright © 2000-2016 <a class="ulink" href="http://www.valgrind.org/info/developers.html" target="_top">Valgrind Developers</a></p></div>
+<div><div class="legalnotice">
+<a name="idm140639109670320"></a><p>Email: <a class="ulink" href="mailto:valgrind@valgrind.org" target="_top">valgrind@valgrind.org</a></p>
+</div></div>
+</div>
+<hr>
+</div>
+<div class="toc">
+<p><b>Table of Contents</b></p>
+<dl class="toc">
+<dt><span class="chapter"><a href="design-impl.html">1. The Design and Implementation of Valgrind</a></span></dt>
+<dt><span class="chapter"><a href="manual-writing-tools.html">2. Writing a New Valgrind Tool</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.intro">2.1. Introduction</a></span></dt>
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.writingatool">2.2. Basics</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.howtoolswork">2.2.1. How tools work</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.gettingcode">2.2.2. Getting the code</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.gettingstarted">2.2.3. Getting started</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.writingcode">2.2.4. Writing the code</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.init">2.2.5. Initialisation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.instr">2.2.6. Instrumentation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.fini">2.2.7. Finalisation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.otherinfo">2.2.8. Other Important Information</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.advtopics">2.3. Advanced Topics</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.advice">2.3.1. Debugging Tips</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.suppressions">2.3.2. Suppressions</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.docs">2.3.3. Documentation</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.regtests">2.3.4. Regression Tests</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.profiling">2.3.5. Profiling</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.mkhackery">2.3.6. Other Makefile Hackery</a></span></dt>
+<dt><span class="sect2"><a href="manual-writing-tools.html#manual-writing-tools.ifacever">2.3.7. The Core/tool Interface</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="manual-writing-tools.html#manual-writing-tools.finalwords">2.4. Final Words</a></span></dt>
+</dl></dd>
+<dt><span class="chapter"><a href="cl-format.html">3. Callgrind Format Specification</a></span></dt>
+<dd><dl>
+<dt><span class="sect1"><a href="cl-format.html#cl-format.overview">3.1. Overview</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.basics">3.1.1. Basic Structure</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.example1">3.1.2. Simple Example</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.associations">3.1.3. Associations</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.example2">3.1.4. Extended Example</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.compression1">3.1.5. Name Compression</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.compression2">3.1.6. Subposition Compression</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.overview.misc">3.1.7. Miscellaneous</a></span></dt>
+</dl></dd>
+<dt><span class="sect1"><a href="cl-format.html#cl-format.reference">3.2. Reference</a></span></dt>
+<dd><dl>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.reference.grammar">3.2.1. Grammar</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.reference.header">3.2.2. Description of Header Lines</a></span></dt>
+<dt><span class="sect2"><a href="cl-format.html#cl-format.reference.body">3.2.3. Description of Body Lines</a></span></dt>
+</dl></dd>
+</dl></dd>
+</dl>
+</div>
+</div>
+<div>
+<br><table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+<tr>
+<td rowspan="2" width="40%" align="left">
+<a accesskey="p" href="faq.html"><< Valgrind Frequently Asked Questions</a> </td>
+<td width="20%" align="center"><a accesskey="u" href="index.html">Up</a></td>
+<td rowspan="2" width="40%" align="right"> <a accesskey="n" href="design-impl.html">1. The Design and Implementation of Valgrind >></a>
+</td>
+</tr>
+<tr><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td></tr>
+</table>
+</div>
+</body>
+</html>
diff --git a/docs/html/vg_basic.css b/docs/html/vg_basic.css
new file mode 100644
index 0000000..49367fe
--- /dev/null
+++ b/docs/html/vg_basic.css
@@ -0,0 +1,67 @@
+/* default link colours */
+a, a:link, a:visited, a:active { color: #74240f; }
+a:hover { color: #888800; }
+
+body {
+ color: #202020;
+ background-color: #ffffff;
+}
+
+body, td {
+ font-size: 90%;
+ line-height: 125%;
+ font-family: Arial, Geneva, Helvetica, sans-serif;
+}
+
+h1, h2, h3, h4 { color: #74240f; }
+h3 { margin-bottom: 0.4em; }
+
+code, tt { color: #761596; }
+code a, code a:link, code a:visited, code a:active, code a:hover {
+ color: #761596;
+ text-decoration: none;
+ border-bottom: dashed 1px #761596;
+}
+
+pre { color: #3366cc; }
+pre.programlisting {
+ color: #000000;
+ padding: 0.5em;
+ background: #f2f2f9;
+ border: 1px solid #3366cc;
+}
+pre.screen {
+ color: #000000;
+ padding: 0.5em;
+ background: #eeeeee;
+ border: 1px solid #626262;
+}
+
+ul { list-style: url("images/li-brown.png"); }
+
+.titlepage hr {
+ height: 1px;
+ border: 0px;
+ background-color: #7f7f7f;
+}
+
+/* header / footer nav tables */
+table.nav {
+ color: #0f7355;
+ border: solid 1px #0f7355;
+ background: #edf7f4;
+ background-color: #edf7f4;
+ margin-bottom: 0.5em;
+}
+/* don't have underlined links in chunked nav menus */
+table.nav a { text-decoration: none; }
+table.nav a:hover { text-decoration: underline; }
+table.nav td { font-size: 85%; }
+
+/* yellow box just for massif blockquotes */
+blockquote {
+ padding: 0.5em;
+ background: #fffbc9;
+ border: solid 1px #ffde84;
+}
+