Overhauled the docs.  Removed all the HTML files, put in XML files as
converted by Donna.  Hooked them into the build system so they are only
built when specifically asked for, and when doing "make dist".

They're not perfect;  in particular, there are the following problems:
- The plain-text FAQ should be built from FAQ.xml, but this is not
  currently done.  (The text FAQ has been left in for now.)

- The PS/PDF building doesn't work -- it fails with an incomprehensible
  error message which I haven't yet deciphered.

Nonetheless, I'm putting it in so others can see it.



git-svn-id: svn://svn.valgrind.org/valgrind/trunk@3153 a5019735-40e9-0310-863c-91ae7b9d1cf9
diff --git a/COPYING b/COPYING
index d60c31a..e90dfed 100644
--- a/COPYING
+++ b/COPYING
@@ -55,7 +55,7 @@
 
   The precise terms and conditions for copying, distribution and
 modification follow.
-
+
 		    GNU GENERAL PUBLIC LICENSE
    TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
 
@@ -110,7 +110,7 @@
     License.  (Exception: if the Program itself is interactive but
     does not normally print such an announcement, your work based on
     the Program is not required to print an announcement.)
-
+
 These requirements apply to the modified work as a whole.  If
 identifiable sections of that work are not derived from the Program,
 and can be reasonably considered independent and separate works in
@@ -168,7 +168,7 @@
 access to copy the source code from the same place counts as
 distribution of the source code, even though third parties are not
 compelled to copy the source along with the object code.
-
+
   4. You may not copy, modify, sublicense, or distribute the Program
 except as expressly provided under this License.  Any attempt
 otherwise to copy, modify, sublicense or distribute the Program is
@@ -225,7 +225,7 @@
 
 This section is intended to make thoroughly clear what is believed to
 be a consequence of the rest of this License.
-
+
   8. If the distribution and/or use of the Program is restricted in
 certain countries either by patents or by copyrighted interfaces, the
 original copyright holder who places the Program under this License
@@ -278,7 +278,7 @@
 POSSIBILITY OF SUCH DAMAGES.
 
 		     END OF TERMS AND CONDITIONS
-
+
 	    How to Apply These Terms to Your New Programs
 
   If you develop a new program, and you want it to be of the greatest
diff --git a/COPYING.DOCS b/COPYING.DOCS
new file mode 100644
index 0000000..1ad50b0
--- /dev/null
+++ b/COPYING.DOCS
@@ -0,0 +1,398 @@
+        GNU Free Documentation License
+          Version 1.2, November 2002
+
+
+ Copyright (C) 2000,2001,2002  Free Software Foundation, Inc.
+     59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+0. PREAMBLE
+
+The purpose of this License is to make a manual, textbook, or other
+functional and useful document "free" in the sense of freedom: to
+assure everyone the effective freedom to copy and redistribute it,
+with or without modifying it, either commercially or noncommercially.
+Secondarily, this License preserves for the author and publisher a way
+to get credit for their work, while not being considered responsible
+for modifications made by others.
+
+This License is a kind of "copyleft", which means that derivative
+works of the document must themselves be free in the same sense.  It
+complements the GNU General Public License, which is a copyleft
+license designed for free software.
+
+We have designed this License in order to use it for manuals for free
+software, because free software needs free documentation: a free
+program should come with manuals providing the same freedoms that the
+software does.  But this License is not limited to software manuals;
+it can be used for any textual work, regardless of subject matter or
+whether it is published as a printed book.  We recommend this License
+principally for works whose purpose is instruction or reference.
+
+
+1. APPLICABILITY AND DEFINITIONS
+
+This License applies to any manual or other work, in any medium, that
+contains a notice placed by the copyright holder saying it can be
+distributed under the terms of this License.  Such a notice grants a
+world-wide, royalty-free license, unlimited in duration, to use that
+work under the conditions stated herein.  The "Document", below,
+refers to any such manual or work.  Any member of the public is a
+licensee, and is addressed as "you".  You accept the license if you
+copy, modify or distribute the work in a way requiring permission
+under copyright law.
+
+A "Modified Version" of the Document means any work containing the
+Document or a portion of it, either copied verbatim, or with
+modifications and/or translated into another language.
+
+A "Secondary Section" is a named appendix or a front-matter section of
+the Document that deals exclusively with the relationship of the
+publishers or authors of the Document to the Document's overall subject
+(or to related matters) and contains nothing that could fall directly
+within that overall subject.  (Thus, if the Document is in part a
+textbook of mathematics, a Secondary Section may not explain any
+mathematics.)  The relationship could be a matter of historical
+connection with the subject or with related matters, or of legal,
+commercial, philosophical, ethical or political position regarding
+them.
+
+The "Invariant Sections" are certain Secondary Sections whose titles
+are designated, as being those of Invariant Sections, in the notice
+that says that the Document is released under this License.  If a
+section does not fit the above definition of Secondary then it is not
+allowed to be designated as Invariant.  The Document may contain zero
+Invariant Sections.  If the Document does not identify any Invariant
+Sections then there are none.
+
+The "Cover Texts" are certain short passages of text that are listed,
+as Front-Cover Texts or Back-Cover Texts, in the notice that says that
+the Document is released under this License.  A Front-Cover Text may
+be at most 5 words, and a Back-Cover Text may be at most 25 words.
+
+A "Transparent" copy of the Document means a machine-readable copy,
+represented in a format whose specification is available to the
+general public, that is suitable for revising the document
+straightforwardly with generic text editors or (for images composed of
+pixels) generic paint programs or (for drawings) some widely available
+drawing editor, and that is suitable for input to text formatters or
+for automatic translation to a variety of formats suitable for input
+to text formatters.  A copy made in an otherwise Transparent file
+format whose markup, or absence of markup, has been arranged to thwart
+or discourage subsequent modification by readers is not Transparent.
+An image format is not Transparent if used for any substantial amount
+of text.  A copy that is not "Transparent" is called "Opaque".
+
+Examples of suitable formats for Transparent copies include plain
+ASCII without markup, Texinfo input format, LaTeX input format, SGML
+or XML using a publicly available DTD, and standard-conforming simple
+HTML, PostScript or PDF designed for human modification.  Examples of
+transparent image formats include PNG, XCF and JPG.  Opaque formats
+include proprietary formats that can be read and edited only by
+proprietary word processors, SGML or XML for which the DTD and/or
+processing tools are not generally available, and the
+machine-generated HTML, PostScript or PDF produced by some word
+processors for output purposes only.
+
+The "Title Page" means, for a printed book, the title page itself,
+plus such following pages as are needed to hold, legibly, the material
+this License requires to appear in the title page.  For works in
+formats which do not have any title page as such, "Title Page" means
+the text near the most prominent appearance of the work's title,
+preceding the beginning of the body of the text.
+
+A section "Entitled XYZ" means a named subunit of the Document whose
+title either is precisely XYZ or contains XYZ in parentheses following
+text that translates XYZ in another language.  (Here XYZ stands for a
+specific section name mentioned below, such as "Acknowledgements",
+"Dedications", "Endorsements", or "History".)  To "Preserve the Title"
+of such a section when you modify the Document means that it remains a
+section "Entitled XYZ" according to this definition.
+
+The Document may include Warranty Disclaimers next to the notice which
+states that this License applies to the Document.  These Warranty
+Disclaimers are considered to be included by reference in this
+License, but only as regards disclaiming warranties: any other
+implication that these Warranty Disclaimers may have is void and has
+no effect on the meaning of this License.
+
+
+2. VERBATIM COPYING
+
+You may copy and distribute the Document in any medium, either
+commercially or noncommercially, provided that this License, the
+copyright notices, and the license notice saying this License applies
+to the Document are reproduced in all copies, and that you add no other
+conditions whatsoever to those of this License.  You may not use
+technical measures to obstruct or control the reading or further
+copying of the copies you make or distribute.  However, you may accept
+compensation in exchange for copies.  If you distribute a large enough
+number of copies you must also follow the conditions in section 3.
+
+You may also lend copies, under the same conditions stated above, and
+you may publicly display copies.
+
+
+3. COPYING IN QUANTITY
+
+If you publish printed copies (or copies in media that commonly have
+printed covers) of the Document, numbering more than 100, and the
+Document's license notice requires Cover Texts, you must enclose the
+copies in covers that carry, clearly and legibly, all these Cover
+Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
+the back cover.  Both covers must also clearly and legibly identify
+you as the publisher of these copies.  The front cover must present
+the full title with all words of the title equally prominent and
+visible.  You may add other material on the covers in addition.
+Copying with changes limited to the covers, as long as they preserve
+the title of the Document and satisfy these conditions, can be treated
+as verbatim copying in other respects.
+
+If the required texts for either cover are too voluminous to fit
+legibly, you should put the first ones listed (as many as fit
+reasonably) on the actual cover, and continue the rest onto adjacent
+pages.
+
+If you publish or distribute Opaque copies of the Document numbering
+more than 100, you must either include a machine-readable Transparent
+copy along with each Opaque copy, or state in or with each Opaque copy
+a computer-network location from which the general network-using
+public has access to download using public-standard network protocols
+a complete Transparent copy of the Document, free of added material.
+If you use the latter option, you must take reasonably prudent steps,
+when you begin distribution of Opaque copies in quantity, to ensure
+that this Transparent copy will remain thus accessible at the stated
+location until at least one year after the last time you distribute an
+Opaque copy (directly or through your agents or retailers) of that
+edition to the public.
+
+It is requested, but not required, that you contact the authors of the
+Document well before redistributing any large number of copies, to give
+them a chance to provide you with an updated version of the Document.
+
+
+4. MODIFICATIONS
+
+You may copy and distribute a Modified Version of the Document under
+the conditions of sections 2 and 3 above, provided that you release
+the Modified Version under precisely this License, with the Modified
+Version filling the role of the Document, thus licensing distribution
+and modification of the Modified Version to whoever possesses a copy
+of it.  In addition, you must do these things in the Modified Version:
+
+A. Use in the Title Page (and on the covers, if any) a title distinct
+   from that of the Document, and from those of previous versions
+   (which should, if there were any, be listed in the History section
+   of the Document).  You may use the same title as a previous version
+   if the original publisher of that version gives permission.
+B. List on the Title Page, as authors, one or more persons or entities
+   responsible for authorship of the modifications in the Modified
+   Version, together with at least five of the principal authors of the
+   Document (all of its principal authors, if it has fewer than five),
+   unless they release you from this requirement.
+C. State on the Title page the name of the publisher of the
+   Modified Version, as the publisher.
+D. Preserve all the copyright notices of the Document.
+E. Add an appropriate copyright notice for your modifications
+   adjacent to the other copyright notices.
+F. Include, immediately after the copyright notices, a license notice
+   giving the public permission to use the Modified Version under the
+   terms of this License, in the form shown in the Addendum below.
+G. Preserve in that license notice the full lists of Invariant Sections
+   and required Cover Texts given in the Document's license notice.
+H. Include an unaltered copy of this License.
+I. Preserve the section Entitled "History", Preserve its Title, and add
+   to it an item stating at least the title, year, new authors, and
+   publisher of the Modified Version as given on the Title Page.  If
+   there is no section Entitled "History" in the Document, create one
+   stating the title, year, authors, and publisher of the Document as
+   given on its Title Page, then add an item describing the Modified
+   Version as stated in the previous sentence.
+J. Preserve the network location, if any, given in the Document for
+   public access to a Transparent copy of the Document, and likewise
+   the network locations given in the Document for previous versions
+   it was based on.  These may be placed in the "History" section.
+   You may omit a network location for a work that was published at
+   least four years before the Document itself, or if the original
+   publisher of the version it refers to gives permission.
+K. For any section Entitled "Acknowledgements" or "Dedications",
+   Preserve the Title of the section, and preserve in the section all
+   the substance and tone of each of the contributor acknowledgements
+   and/or dedications given therein.
+L. Preserve all the Invariant Sections of the Document,
+   unaltered in their text and in their titles.  Section numbers
+   or the equivalent are not considered part of the section titles.
+M. Delete any section Entitled "Endorsements".  Such a section
+   may not be included in the Modified Version.
+N. Do not retitle any existing section to be Entitled "Endorsements"
+   or to conflict in title with any Invariant Section.
+O. Preserve any Warranty Disclaimers.
+
+If the Modified Version includes new front-matter sections or
+appendices that qualify as Secondary Sections and contain no material
+copied from the Document, you may at your option designate some or all
+of these sections as invariant.  To do this, add their titles to the
+list of Invariant Sections in the Modified Version's license notice.
+These titles must be distinct from any other section titles.
+
+You may add a section Entitled "Endorsements", provided it contains
+nothing but endorsements of your Modified Version by various
+parties--for example, statements of peer review or that the text has
+been approved by an organization as the authoritative definition of a
+standard.
+
+You may add a passage of up to five words as a Front-Cover Text, and a
+passage of up to 25 words as a Back-Cover Text, to the end of the list
+of Cover Texts in the Modified Version.  Only one passage of
+Front-Cover Text and one of Back-Cover Text may be added by (or
+through arrangements made by) any one entity.  If the Document already
+includes a cover text for the same cover, previously added by you or
+by arrangement made by the same entity you are acting on behalf of,
+you may not add another; but you may replace the old one, on explicit
+permission from the previous publisher that added the old one.
+
+The author(s) and publisher(s) of the Document do not by this License
+give permission to use their names for publicity for or to assert or
+imply endorsement of any Modified Version.
+
+
+5. COMBINING DOCUMENTS
+
+You may combine the Document with other documents released under this
+License, under the terms defined in section 4 above for modified
+versions, provided that you include in the combination all of the
+Invariant Sections of all of the original documents, unmodified, and
+list them all as Invariant Sections of your combined work in its
+license notice, and that you preserve all their Warranty Disclaimers.
+
+The combined work need only contain one copy of this License, and
+multiple identical Invariant Sections may be replaced with a single
+copy.  If there are multiple Invariant Sections with the same name but
+different contents, make the title of each such section unique by
+adding at the end of it, in parentheses, the name of the original
+author or publisher of that section if known, or else a unique number.
+Make the same adjustment to the section titles in the list of
+Invariant Sections in the license notice of the combined work.
+
+In the combination, you must combine any sections Entitled "History"
+in the various original documents, forming one section Entitled
+"History"; likewise combine any sections Entitled "Acknowledgements",
+and any sections Entitled "Dedications".  You must delete all sections
+Entitled "Endorsements".
+
+
+6. COLLECTIONS OF DOCUMENTS
+
+You may make a collection consisting of the Document and other documents
+released under this License, and replace the individual copies of this
+License in the various documents with a single copy that is included in
+the collection, provided that you follow the rules of this License for
+verbatim copying of each of the documents in all other respects.
+
+You may extract a single document from such a collection, and distribute
+it individually under this License, provided you insert a copy of this
+License into the extracted document, and follow this License in all
+other respects regarding verbatim copying of that document.
+
+
+7. AGGREGATION WITH INDEPENDENT WORKS
+
+A compilation of the Document or its derivatives with other separate
+and independent documents or works, in or on a volume of a storage or
+distribution medium, is called an "aggregate" if the copyright
+resulting from the compilation is not used to limit the legal rights
+of the compilation's users beyond what the individual works permit.
+When the Document is included in an aggregate, this License does not
+apply to the other works in the aggregate which are not themselves
+derivative works of the Document.
+
+If the Cover Text requirement of section 3 is applicable to these
+copies of the Document, then if the Document is less than one half of
+the entire aggregate, the Document's Cover Texts may be placed on
+covers that bracket the Document within the aggregate, or the
+electronic equivalent of covers if the Document is in electronic form.
+Otherwise they must appear on printed covers that bracket the whole
+aggregate.
+
+
+8. TRANSLATION
+
+Translation is considered a kind of modification, so you may
+distribute translations of the Document under the terms of section 4.
+Replacing Invariant Sections with translations requires special
+permission from their copyright holders, but you may include
+translations of some or all Invariant Sections in addition to the
+original versions of these Invariant Sections.  You may include a
+translation of this License, and all the license notices in the
+Document, and any Warranty Disclaimers, provided that you also include
+the original English version of this License and the original versions
+of those notices and disclaimers.  In case of a disagreement between
+the translation and the original version of this License or a notice
+or disclaimer, the original version will prevail.
+
+If a section in the Document is Entitled "Acknowledgements",
+"Dedications", or "History", the requirement (section 4) to Preserve
+its Title (section 1) will typically require changing the actual
+title.
+
+
+9. TERMINATION
+
+You may not copy, modify, sublicense, or distribute the Document except
+as expressly provided for under this License.  Any other attempt to
+copy, modify, sublicense or distribute the Document is void, and will
+automatically terminate your rights under this License.  However,
+parties who have received copies, or rights, from you under this
+License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+
+10. FUTURE REVISIONS OF THIS LICENSE
+
+The Free Software Foundation may publish new, revised versions
+of the GNU Free Documentation License from time to time.  Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.  See
+http://www.gnu.org/copyleft/.
+
+Each version of the License is given a distinguishing version number.
+If the Document specifies that a particular numbered version of this
+License "or any later version" applies to it, you have the option of
+following the terms and conditions either of that specified version or
+of any later version that has been published (not as a draft) by the
+Free Software Foundation.  If the Document does not specify a version
+number of this License, you may choose any version ever published (not
+as a draft) by the Free Software Foundation.
+
+
+ADDENDUM: How to use this License for your documents
+
+To use this License in a document you have written, include a copy of
+the License in the document and put the following copyright and
+license notices just after the title page:
+
+    Copyright (c)  YEAR  YOUR NAME.
+    Permission is granted to copy, distribute and/or modify this document
+    under the terms of the GNU Free Documentation License, Version 1.2
+    or any later version published by the Free Software Foundation;
+    with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+    A copy of the license is included in the section entitled "GNU
+    Free Documentation License".
+
+If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
+replace the "with...Texts." line with this:
+
+    with the Invariant Sections being LIST THEIR TITLES, with the
+    Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
+
+If you have Invariant Sections without Cover Texts, or some other
+combination of the three, merge those two alternatives to suit the
+situation.
+
+If your document contains nontrivial examples of program code, we
+recommend releasing these examples in parallel under your choice of
+free software license, such as the GNU General Public License,
+to permit their use in free software.
+
diff --git a/Makefile.am b/Makefile.am
index 74f35a9..48f0475 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,7 +1,7 @@
 
 AUTOMAKE_OPTIONS = foreign 1.6 dist-bzip2
 
-include $(top_srcdir)/Makefile.all.am
+include $(top_srcdir)/Makefile.all.am 
 
 ## include must be first for tool.h
 ## addrcheck must come after memcheck, for mac_*.o
diff --git a/addrcheck/docs/Makefile.am b/addrcheck/docs/Makefile.am
index 6e049ab..b6ce351 100644
--- a/addrcheck/docs/Makefile.am
+++ b/addrcheck/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = ac_main.html
+EXTRA_DIST = ac-manual.xml
diff --git a/addrcheck/docs/ac-manual.xml b/addrcheck/docs/ac-manual.xml
new file mode 100644
index 0000000..bf55c37
--- /dev/null
+++ b/addrcheck/docs/ac-manual.xml
@@ -0,0 +1,131 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="ac-manual" xreflabel="Addrcheck: a lightweight memory checker">
+  <title>Addrcheck: a lightweight memory checker</title>
+
+<para>To use this tool, you must specify
+<computeroutput>--tool=addrcheck</computeroutput> on the Valgrind
+command line.</para>
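+
+<para>For example, to run <computeroutput>ls -l</computeroutput>
+under Addrcheck (an illustrative invocation; substitute your own
+program and arguments):</para>
+
+<programlisting><![CDATA[
+valgrind --tool=addrcheck ls -l]]></programlisting>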
+
+<sect1>
+<title>Kinds of bugs that Addrcheck can find</title>
+
+<para>Addrcheck is a simplified version of the Memcheck tool
+described in Section 3.  It is identical in every way to
+Memcheck, except for one important detail: it does not do the
+undefined-value checks that Memcheck does.  This means Addrcheck
+is about twice as fast as Memcheck, and uses less memory.
+Addrcheck can detect the following errors:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>Reading/writing memory after it has been free'd</para>
+  </listitem>
+  <listitem>
+    <para>Reading/writing off the end of malloc'd blocks</para>
+  </listitem>
+  <listitem>
+    <para>Reading/writing inappropriate areas on the stack</para>
+  </listitem>
+  <listitem>
+    <para>Memory leaks -- where pointers to malloc'd blocks are lost
+    forever</para>
+  </listitem>
+  <listitem>
+    <para>Mismatched use of malloc/new/new [] vs free/delete/delete []</para>
+  </listitem>
+  <listitem>
+    <para>Overlapping <computeroutput>src</computeroutput> and
+    <computeroutput>dst</computeroutput> pointers in
+    <computeroutput>memcpy()</computeroutput> and related
+    functions</para>
+  </listitem>
+  <listitem>
+    <para>Some misuses of the POSIX pthreads API</para>
+  </listitem>
+</itemizedlist>
+
+
+<para>Rather than duplicate much of the Memcheck docs here
+(a.k.a. since I am a lazy b'stard), users of Addrcheck are
+advised to read <xref linkend="mc-manual.bugs"/>.  Some important
+points:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>Addrcheck is exactly like Memcheck, except that all the
+    value-definedness tracking machinery has been removed.
+    Therefore, the Memcheck documentation which discusses
+    definedness ("V-bits") is irrelevant.  The stuff on
+    addressability ("A-bits") is still relevant.</para>
+  </listitem>
+
+  <listitem>
+    <para>Addrcheck accepts the same command-line flags as
+    Memcheck, with the exception of ... (to be filled in).</para>
+  </listitem>
+
+  <listitem>
+    <para>Like Memcheck, Addrcheck will do memory leak checking
+    (internally, the same code does leak checking for both
+    tools).  The only difference is how the two tools decide
+    which memory locations to consider when searching for
+    pointers to blocks.  Memcheck will only consider 4-byte
+    aligned locations which are validly addressable and which
+    hold defined values.  Addrcheck does not track definedness
+    and so cannot apply the last, "defined value",
+    criterion.</para>
+
+    <para>The result is that Addrcheck's leak checker may
+    "discover" pointers to blocks that Memcheck would not.  So it
+    is possible that Memcheck could (correctly) conclude that a
+    block is leaked, yet Addrcheck would not conclude
+    that.</para>
+
+    <para>Whether or not this has any effect in practice is
+    unknown.  I suspect not, but that is mere speculation at this
+    stage.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Addrcheck is, therefore, a fine-grained address checker.
+All it really does is check each memory reference to say whether
+or not that location may validly be addressed.  Addrcheck has a
+memory overhead of one bit per byte of used address space.  In
+contrast, Memcheck has an overhead of nine bits per byte.</para>
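+
+<para>To put those overheads in concrete (and purely illustrative)
+terms: for a program touching 100MB of address space, Addrcheck's
+shadow state needs roughly 100MB / 8 = 12.5MB, whereas Memcheck's
+needs roughly 9/8 of 100MB, about 112.5MB.</para>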
+
+<para>Due to laziness on the part of the implementor (Julian),
+error messages from Addrcheck do not distinguish reads from
+writes.  So it will say, for example, "Invalid memory access of
+size 4", whereas Memcheck would have said whether the access is a
+read or a write.  This could easily be remedied, if anyone is
+particularly bothered.</para>
+
+<para>Addrcheck is quite pleasant to use.  It's faster than
+Memcheck, and the lack of valid-value checks has another side
+effect: the errors it does report are relatively easy to track
+down, compared to the tedious and often confusing search
+sometimes needed to find the cause of uninitialised-value errors
+reported by Memcheck.</para>
+
+<para>Because it is faster and lighter than Memcheck, our hope is
+that Addrcheck is more suitable for less-intrusive, larger scale
+testing than is viable with Memcheck.  As of mid-November 2002,
+we have experimented with running the KDE-3.1 desktop on
+Addrcheck (the entire process tree, starting from
+<computeroutput>startkde</computeroutput>).  Running on a 512MB,
+1.7 GHz P4, the result is nearly usable.  The ultimate aim is
+that it is fast and unintrusive enough that (eg) KDE sessions
+may be monitored for addressing errors whilst people do real
+work with their KDE desktop.</para>
+
+<para>Addrcheck is a new experiment in the Valgrind world.  We'd
+be interested to hear your feedback on it.</para>
+
+</sect1>
+
+</chapter>
diff --git a/addrcheck/docs/ac_main.html b/addrcheck/docs/ac_main.html
deleted file mode 100644
index d540fc0..0000000
--- a/addrcheck/docs/ac_main.html
+++ /dev/null
@@ -1,103 +0,0 @@
-<html>
-  <head>
-    <title>Addrcheck: a lightweight memory checker</title>
-  </head>
-
-<body>
-<a name="ac-top"></a>
-<h2>5&nbsp; <b>Addrcheck</b>: a lightweight memory checker</h2>
-
-To use this tool, you must specify <code>--tool=addrcheck</code>
-on the Valgrind command line.
-
-<h3>5.1&nbsp; Kinds of bugs that Addrcheck can find</h3>
-
-Addrcheck is a simplified version of the Memcheck tool described
-in Section 3.  It is identical in every way to Memcheck, except for
-one important detail: it does not do the undefined-value checks that
-Memcheck does.  This means Addrcheck is about twice as fast as
-Memcheck, and uses less memory.  Addrcheck can detect the following
-errors:
-    <ul>
-        <li>Reading/writing memory after it has been free'd</li>
-        <li>Reading/writing off the end of malloc'd blocks</li>
-        <li>Reading/writing inappropriate areas on the stack</li>
-        <li>Memory leaks -- where pointers to malloc'd blocks are lost
-            forever</li>
-        <li>Mismatched use of malloc/new/new [] vs free/delete/delete []</li>
-        <li>Overlapping <code>src</code> and <code>dst</code> pointers in 
-            <code>memcpy()</code> and related functions</li>
-        <li>Some misuses of the POSIX pthreads API</li>
-    </ul>
-    <p>
-
-<p>
-Rather than duplicate much of the Memcheck docs here (a.k.a. since I
-am a lazy b'stard), users of Addrcheck are advised to read
-the section on Memcheck.  Some important points:
-<ul>
-<li>Addrcheck is exactly like Memcheck, except that all the
-   value-definedness tracking machinery has been removed.  Therefore,
-   the Memcheck documentation which discusses definedess ("V-bits") is
-   irrelevant.  The stuff on addressibility ("A-bits") is still
-   relevant.
-<p>
-<li>Addrcheck accepts the same command-line flags as Memcheck, with
-    the exception of ... (to be filled in).
-<p>
-<li>Like Memcheck, Addrcheck will do memory leak checking (internally,
-    the same code does leak checking for both tools).  The only
-    difference is how the two tools decide which memory locations
-    to consider when searching for pointers to blocks.  Memcheck will
-    only consider 4-byte aligned locations which are validly
-    addressible and which hold defined values.  Addrcheck does not
-    track definedness and so cannot apply the last, "defined value",
-    criteria.  
-    <p>
-    The result is that Addrcheck's leak checker may "discover"
-    pointers to blocks that Memcheck would not.  So it is possible
-    that Memcheck could (correctly) conclude that a block is leaked,
-    yet Addrcheck would not conclude that.
-    <p>
-    Whether or not this has any effect in practice is unknown.  I
-    suspect not, but that is mere speculation at this stage.
-</ul>
-
-<p>
-Addrcheck is, therefore, a fine-grained address checker.  All it
-really does is check each memory reference to say whether or not that
-location may validly be addressed.  Addrcheck has a memory overhead of
-one bit per byte of used address space.  In contrast, Memcheck has an
-overhead of nine bits per byte.
-
-<p>
-Due to lazyness on the part of the implementor (Julian), error
-messages from Addrcheck do not distinguish reads from writes.  So it
-will say, for example, "Invalid memory access of size 4", whereas 
-Memcheck would have said whether the access is a read or a write.
-This could easily be remedied, if anyone is particularly bothered.
-
-<p>
-Addrcheck is quite pleasant to use.  It's faster than Memcheck, and
-the lack of valid-value checks has another side effect: the errors it
-does report are relatively easy to track down, compared to the 
-tedious and often confusing search sometimes needed to find the 
-cause of uninitialised-value errors reported by Memcheck.  
-
-<p>
-Because it is faster and lighter than Memcheck, our hope is that
-Addrcheck is more suitable for less-intrusive, larger scale testing
-than is viable with Memcheck.  As of mid-November 2002, we have
-experimented with running the KDE-3.1 desktop on Addrcheck (the entire
-process tree, starting from <code>startkde</code>).  Running on a
-512MB, 1.7 GHz P4, the result is nearly usable.  The ultimate aim is
-that is fast and unintrusive enough that (eg) KDE sessions may be
-unintrusively monitored for addressing errors whilst people do real
-work with their KDE desktop.
-
-<p>
-Addrcheck is a new experiment in the Valgrind world.  We'd be
-interested to hear your feedback on it.
-
-</body>
-</html>
diff --git a/cachegrind/docs/Makefile.am b/cachegrind/docs/Makefile.am
index 9657fe5..f052e04 100644
--- a/cachegrind/docs/Makefile.am
+++ b/cachegrind/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = cg_main.html cg_techdocs.html 
+EXTRA_DIST = cg-manual.xml cg-tech-docs.xml 
diff --git a/cachegrind/docs/cg-manual.xml b/cachegrind/docs/cg-manual.xml
new file mode 100644
index 0000000..58df498
--- /dev/null
+++ b/cachegrind/docs/cg-manual.xml
@@ -0,0 +1,1012 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="cg-manual" xreflabel="Cachegrind: a cache-miss profiler">
+<title>Cachegrind: a cache profiler</title>
+
+<para>Detailed technical documentation on how Cachegrind works is
+available in <xref linkend="cg-tech-docs"/>.  If you only want to know
+how to <command>use</command> it, this is the page you need to
+read.</para>
+
+
+<sect1 id="cg-manual.cache" xreflabel="Cache profiling">
+<title>Cache profiling</title>
+
+<para>To use this tool, you must specify
+<computeroutput>--tool=cachegrind</computeroutput> on the
+Valgrind command line.</para>
+
+<para>Cachegrind is a tool for doing cache simulations and
+annotating your source line-by-line with the number of cache
+misses.  In particular, it records:</para>
+<itemizedlist>
+  <listitem>
+    <para>L1 instruction cache reads and misses;</para>
+  </listitem>
+  <listitem>
+    <para>L1 data cache reads and read misses, writes and write
+    misses;</para>
+  </listitem>
+  <listitem>
+    <para>L2 unified cache reads and read misses, writes and
+    write misses.</para>
+  </listitem>
+</itemizedlist>
+
+<para>On a modern x86 machine, an L1 miss will typically cost
+around 10 cycles, and an L2 miss can cost as much as 200
+cycles. Detailed cache profiling can be very useful for improving
+the performance of your program.</para>
+
+<para>Also, since one instruction cache read is performed per
+instruction executed, you can find out how many instructions are
+executed per line, which can be useful for traditional profiling
+and test coverage.</para>
+
+<para>Any feedback, bug-fixes, suggestions, etc, welcome.</para>
+
+
+
+<sect2 id="cg-manual.overview" xreflabel="Overview">
+<title>Overview</title>
+
+<para>First off, as for normal Valgrind use, you probably want to
+compile with debugging info (the
+<computeroutput>-g</computeroutput> flag).  But by contrast with
+normal Valgrind use, you probably <command>do</command> want to turn
+optimisation on, since you should profile your program as it will
+be normally run.</para>
+
+<para>The two steps are:</para>
+<orderedlist>
+  <listitem>
+    <para>Run your program with <computeroutput>valgrind
+    --tool=cachegrind</computeroutput> in front of the normal
+    command line invocation.  When the program finishes,
+    Cachegrind will print summary cache statistics. It also
+    collects line-by-line information in a file
+    <computeroutput>cachegrind.out.pid</computeroutput>, where
+    <computeroutput>pid</computeroutput> is the program's process
+    id.</para>
+
+    <para>This step should be done every time you want to collect
+    information about a new program, a changed program, or about
+    the same program with different input.</para>
+  </listitem>
+
+  <listitem>
+    <para>Generate a function-by-function summary, and possibly
+    annotate source files, using the supplied
+    <computeroutput>cg_annotate</computeroutput> program. Source
+    files to annotate can be specified manually on the command
+    line, or "interesting" source files can be
+    annotated automatically with the
+    <computeroutput>--auto=yes</computeroutput> option.  You can
+    annotate C/C++ files or assembly language files equally
+    easily.</para>
+
+    <para>This step can be performed as many times as you like
+    for each Step 1.  You may want to do multiple annotations
+    showing different information each time.</para>
+  </listitem>
+
+</orderedlist>
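+
+<para>As a concrete sketch of the two steps (the program name
+<computeroutput>myprog</computeroutput> and its compile line are
+purely illustrative):</para>
+
+<programlisting><![CDATA[
+gcc -g -O2 -o myprog myprog.c        # debug info on, optimisation on
+valgrind --tool=cachegrind ./myprog  # step 1: writes cachegrind.out.<pid>
+cg_annotate --<pid>                  # step 2: function-by-function summary]]></programlisting>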
+
+<para>The steps are described in detail in the following
+sections.</para>
+
+</sect2>
+
+
+<sect2>
+<title>Cache simulation specifics</title>
+
+<para>Cachegrind uses a simulation for a machine with a split L1
+cache and a unified L2 cache.  This configuration is used for all
+(modern) x86-based machines we are aware of.  Old Cyrix CPUs had
+a unified I and D L1 cache, but they are ancient history
+now.</para>
+
+<para>The more specific characteristics of the simulation are as
+follows.</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>Write-allocate: when a write miss occurs, the block
+    written to is brought into the D1 cache.  Most modern caches
+    have this property.</para>
+  </listitem>
+
+  <listitem>
+    <para>Bit-selection hash function: the line(s) in the cache
+    to which a memory block maps is chosen by the middle bits
+    M--(M+N-1) of the byte address, where:</para>
+    <itemizedlist>
+      <listitem>
+        <para>line size = 2^M bytes</para>
+      </listitem>
+      <listitem>
+        <para>(cache size / line size) = 2^N</para>
+      </listitem>
+    </itemizedlist> 
+  </listitem>
+
+  <listitem>
+    <para>Inclusive L2 cache: the L2 cache replicates all the
+    entries of the L1 cache.  This is standard on Pentium chips,
+    but AMD Athlons use an exclusive L2 cache that only holds
+    blocks evicted from L1.  Ditto AMD Durons and most modern
+    VIAs.</para>
+  </listitem>
+
+</itemizedlist>
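+
+<para>As a worked illustration of the bit-selection hash function
+above, take the 65536 B, 64 B line D1 configuration that appears in
+the <computeroutput>cg_annotate</computeroutput> output later in
+this chapter: line size = 64 = 2^6, so M = 6; cache size / line
+size = 65536 / 64 = 1024 = 2^10, so N = 10.  Bits 6--15 of the byte
+address therefore choose the line(s) to which a block maps.</para>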
+
+<para>The cache configuration simulated (cache size,
+associativity and line size) is determined automagically using
+the CPUID instruction.  If you have an old machine that (a)
+doesn't support the CPUID instruction, or (b) supports it in an
+early incarnation that doesn't give any cache information, then
+Cachegrind will fall back to using a default configuration (that
+of a model 3/4 Athlon).  Cachegrind will tell you if this
+happens.  You can manually specify one, two or all three levels
+(I1/D1/L2) of the cache from the command line using the
+<computeroutput>--I1</computeroutput>,
+<computeroutput>--D1</computeroutput> and
+<computeroutput>--L2</computeroutput> options.</para>
+
+
+<para>Other noteworthy behaviour:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>References that straddle two cache lines are treated as
+    follows:</para>
+    <itemizedlist>
+      <listitem>
+        <para>If both blocks hit --&gt; counted as one hit</para>
+      </listitem>
+      <listitem>
+        <para>If one block hits, the other misses --&gt; counted
+        as one miss.</para>
+      </listitem>
+      <listitem>
+        <para>If both blocks miss --&gt; counted as one miss (not
+        two)</para>
+      </listitem>
+    </itemizedlist>
+  </listitem>
+
+  <listitem>
+    <para>Instructions that modify a memory location
+    (eg. <computeroutput>inc</computeroutput> and
+    <computeroutput>dec</computeroutput>) are counted as doing
+    just a read, ie. a single data reference.  This may seem
+    strange, but since the write can never cause a miss (the read
+    guarantees the block is in the cache) it's not very
+    interesting.</para>
+
+    <para>Thus it measures not the number of times the data cache
+    is accessed, but the number of times a data cache miss could
+    occur.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>If you are interested in simulating a cache with different
+properties, it is not particularly hard to write your own cache
+simulator, or to modify the existing ones in
+<computeroutput>vg_cachesim_I1.c</computeroutput>,
+<computeroutput>vg_cachesim_D1.c</computeroutput>,
+<computeroutput>vg_cachesim_L2.c</computeroutput> and
+<computeroutput>vg_cachesim_gen.c</computeroutput>.  We'd be
+interested to hear from anyone who does.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="cg-manual.profile" xreflabel="Profiling programs">
+<title>Profiling programs</title>
+
+<para>To gather cache profiling information about the program
+<computeroutput>ls -l</computeroutput>, invoke Cachegrind like
+this:</para>
+
+<programlisting><![CDATA[
+valgrind --tool=cachegrind ls -l]]></programlisting>
+
+<para>The program will execute (slowly).  Upon completion,
+summary statistics that look like this will be printed:</para>
+
+<programlisting><![CDATA[
+==31751== I   refs:      27,742,716
+==31751== I1  misses:           276
+==31751== L2  misses:           275
+==31751== I1  miss rate:        0.0%
+==31751== L2i miss rate:        0.0%
+==31751== 
+==31751== D   refs:      15,430,290  (10,955,517 rd + 4,474,773 wr)
+==31751== D1  misses:        41,185  (    21,905 rd +    19,280 wr)
+==31751== L2  misses:        23,085  (     3,987 rd +    19,098 wr)
+==31751== D1  miss rate:        0.2% (       0.1%   +       0.4%)
+==31751== L2d miss rate:        0.1% (       0.0%   +       0.4%)
+==31751== 
+==31751== L2 misses:         23,360  (     4,262 rd +    19,098 wr)
+==31751== L2 miss rate:         0.0% (       0.0%   +       0.4%)]]></programlisting>
+
+<para>Cache accesses for instruction fetches are summarised
+first, giving the number of fetches made (this is the number of
+instructions executed, which can be useful to know in its own
+right), the number of I1 misses, and the number of L2 instruction
+(<computeroutput>L2i</computeroutput>) misses.</para>
+
+<para>Cache accesses for data follow. The information is similar
+to that of the instruction fetches, except that the values are
+also shown split between reads and writes (note each row's
+<computeroutput>rd</computeroutput> and
+<computeroutput>wr</computeroutput> values add up to the row's
+total).</para>
+
+<para>Combined instruction and data figures for the L2 cache
+follow that.</para>
+
+
+
+<sect2 id="cg-manual.outputfile" xreflabel="Output file">
+<title>Output file</title>
+
+<para>As well as printing summary information, Cachegrind also
+writes line-by-line cache profiling information to a file named
+<computeroutput>cachegrind.out.pid</computeroutput>.  This file
+is human-readable, but is best interpreted by the accompanying
+program <computeroutput>cg_annotate</computeroutput>, described
+in the next section.</para>
+
+<para>Things to note about the
+<computeroutput>cachegrind.out.pid</computeroutput>
+file:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>It is written every time Cachegrind is run, and will
+    overwrite any existing
+    <computeroutput>cachegrind.out.pid</computeroutput>
+    in the current directory (but that won't happen very often
+    because it takes some time for process ids to be
+    recycled).</para>
+  </listitem>
+  <listitem>
+    <para>It can be huge: <computeroutput>ls -l</computeroutput>
+    generates a file of about 350KB.  Browsing a few files and
+    web pages with a Konqueror built with full debugging
+    information generates a file of around 15 MB.</para>
+  </listitem>
+</itemizedlist>
+
+<para>Note that older versions of Cachegrind used a log file
+named <computeroutput>cachegrind.out</computeroutput> (i.e. no
+<computeroutput>.pid</computeroutput> suffix).  The suffix serves
+two purposes.  Firstly, it means you don't have to rename old log
+files that you don't want to overwrite.  Secondly, and more
+importantly, it allows correct profiling with the
+<computeroutput>--trace-children=yes</computeroutput> option of
+programs that spawn child processes.</para>
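+
+<para>For example (an illustrative command line;
+<computeroutput>parent_prog</computeroutput> is a made-up program
+that spawns child processes):</para>
+
+<programlisting><![CDATA[
+valgrind --tool=cachegrind --trace-children=yes ./parent_prog
+# each process in the tree writes its own cachegrind.out.<pid>]]></programlisting>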
+
+</sect2>
+
+
+
+<sect2 id="cg-manual.cgopts" xreflabel="Cachegrind options">
+<title>Cachegrind options</title>
+
+<para>Cache-simulation specific options are:</para>
+
+<screen><![CDATA[
+--I1=<size>,<associativity>,<line_size>
+--D1=<size>,<associativity>,<line_size>
+--L2=<size>,<associativity>,<line_size>
+
+[default: uses CPUID for automagic cache configuration]]]></screen>
+
+<para>Manually specifies the I1/D1/L2 cache configuration, where
+<computeroutput>size</computeroutput> and
+<computeroutput>line_size</computeroutput> are measured in bytes.
+The three items must be comma-separated, but with no spaces,
+eg:</para>
+
+<programlisting><![CDATA[
+valgrind --tool=cachegrind --I1=65536,2,64]]></programlisting>
+
+<para>You can specify one, two or three of the I1/D1/L2 caches.
+Any level not manually specified will be simulated using the
+configuration found in the normal way (via the CPUID instruction,
+or failing that, via defaults).</para>
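+
+<para>For instance (an illustrative command line; the sizes are
+only examples), to override the D1 and L2 configurations while
+letting I1 be detected in the normal way:</para>
+
+<programlisting><![CDATA[
+valgrind --tool=cachegrind --D1=32768,8,64 --L2=1048576,8,64 ls -l]]></programlisting>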
+
+</sect2>
+
+
+  
+<sect2 id="cg-manual.annotate" xreflabel="Annotating C/C++ programs">
+<title>Annotating C/C++ programs</title>
+
+<para>Before using <computeroutput>cg_annotate</computeroutput>,
+it is worth widening your window to be at least 120 characters
+wide if possible, as the output lines can be quite long.</para>
+
+<para>To get a function-by-function summary, run
+<computeroutput>cg_annotate --pid</computeroutput> in a directory
+containing a <computeroutput>cachegrind.out.pid</computeroutput>
+file.  The <emphasis>--pid</emphasis> is required so that
+<computeroutput>cg_annotate</computeroutput> knows which log file
+to use when several are present.</para>
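+
+<para>For instance, for the <computeroutput>ls -l</computeroutput>
+run shown earlier (process id 31751), you would run:</para>
+
+<programlisting><![CDATA[
+cg_annotate --31751]]></programlisting>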
+
+<para>The output looks like this:</para>
+
+<programlisting><![CDATA[
+--------------------------------------------------------------------------------
+I1 cache:              65536 B, 64 B, 2-way associative
+D1 cache:              65536 B, 64 B, 2-way associative
+L2 cache:              262144 B, 64 B, 8-way associative
+Command:               concord vg_to_ucode.c
+Events recorded:       Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
+Events shown:          Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
+Event sort order:      Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
+Threshold:             99%
+Chosen for annotation:
+Auto-annotation:       on
+
+--------------------------------------------------------------------------------
+Ir         I1mr I2mr Dr         D1mr   D2mr  Dw        D1mw   D2mw
+--------------------------------------------------------------------------------
+27,742,716  276  275 10,955,517 21,905 3,987 4,474,773 19,280 19,098  PROGRAM TOTALS
+
+--------------------------------------------------------------------------------
+Ir        I1mr I2mr Dr        D1mr  D2mr  Dw        D1mw   D2mw    file:function
+--------------------------------------------------------------------------------
+8,821,482    5    5 2,242,702 1,621    73 1,794,230      0      0  getc.c:_IO_getc
+5,222,023    4    4 2,276,334    16    12   875,959      1      1  concord.c:get_word
+2,649,248    2    2 1,344,810 7,326 1,385         .      .      .  vg_main.c:strcmp
+2,521,927    2    2   591,215     0     0   179,398      0      0  concord.c:hash
+2,242,740    2    2 1,046,612   568    22   448,548      0      0  ctype.c:tolower
+1,496,937    4    4   630,874 9,000 1,400   279,388      0      0  concord.c:insert
+  897,991   51   51   897,831    95    30        62      1      1  ???:???
+  598,068    1    1   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__flockfile
+  598,068    0    0   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__funlockfile
+  598,024    4    4   213,580    35    16   149,506      0      0  vg_clientmalloc.c:malloc
+  446,587    1    1   215,973 2,167   430   129,948 14,057 13,957  concord.c:add_existing
+  341,760    2    2   128,160     0     0   128,160      0      0  vg_clientmalloc.c:vg_trap_here_WRAPPER
+  320,782    4    4   150,711   276     0    56,027     53     53  concord.c:init_hash_table
+  298,998    1    1   106,785     0     0    64,071      1      1  concord.c:create
+  149,518    0    0   149,516     0     0         1      0      0  ???:tolower@@GLIBC_2.0
+  149,518    0    0   149,516     0     0         1      0      0  ???:fgetc@@GLIBC_2.0
+   95,983    4    4    38,031     0     0    34,409  3,152  3,150  concord.c:new_word_node
+   85,440    0    0    42,720     0     0    21,360      0      0  vg_clientmalloc.c:vg_bogus_epilogue]]></programlisting>
+
+
+<para>First up is a summary of the annotation options:</para>
+                    
+<itemizedlist>
+
+  <listitem>
+    <para>I1 cache, D1 cache, L2 cache: cache configuration.  So
+    you know the configuration with which these results were
+    obtained.</para>
+  </listitem>
+
+  <listitem>
+    <para>Command: the command line invocation of the program
+      under examination.</para>
+  </listitem>
+
+  <listitem>
+   <para>Events recorded: event abbreviations are:</para>
+   <itemizedlist>
+     <listitem>
+       <para><computeroutput>Ir </computeroutput>: I cache reads
+       (ie. instructions executed)</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>I1mr</computeroutput>: I1 cache read
+       misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>I2mr</computeroutput>: L2 cache
+       instruction read misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>Dr </computeroutput>: D cache reads
+       (ie. memory reads)</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D1mr</computeroutput>: D1 cache read
+       misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D2mr</computeroutput>: L2 cache data
+       read misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>Dw </computeroutput>: D cache writes
+       (ie. memory writes)</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D1mw</computeroutput>: D1 cache write
+       misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D2mw</computeroutput>: L2 cache data
+       write misses</para>
+     </listitem>
+   </itemizedlist>
+
+   <para>Note that total D1 misses are given by
+   <computeroutput>D1mr</computeroutput> +
+   <computeroutput>D1mw</computeroutput>, and that total L2
+   misses are given by <computeroutput>I2mr</computeroutput> +
+   <computeroutput>D2mr</computeroutput> +
+   <computeroutput>D2mw</computeroutput>.  (For the program
+   totals above: 21,905 + 19,280 = 41,185 D1 misses, and
+   275 + 3,987 + 19,098 = 23,360 L2 misses, matching the
+   summary statistics Cachegrind printed.)</para>
+ </listitem>
+
+ <listitem>
+   <para>Events shown: the events shown (a subset of events
+   gathered).  This can be adjusted with the
+   <computeroutput>--show</computeroutput> option.</para>
+  </listitem>
+
+  <listitem>
+    <para>Event sort order: the sort order in which functions are
+    shown.  For example, in this case the functions are sorted
+    from highest <computeroutput>Ir</computeroutput> counts to
+    lowest.  If two functions have identical
+    <computeroutput>Ir</computeroutput> counts, they will then be
+    sorted by <computeroutput>I1mr</computeroutput> counts, and
+    so on.  This order can be adjusted with the
+    <computeroutput>--sort</computeroutput> option.</para>
+
+    <para>Note that this dictates the order the functions appear.
+    It is <command>not</command> the order in which the columns
+    appear; that is dictated by the "events shown" line (and can
+    be changed with the <computeroutput>--show</computeroutput>
+    option).</para>
+  </listitem>
+
+  <listitem>
+    <para>Threshold: <computeroutput>cg_annotate</computeroutput>
+    by default omits functions that cause very low numbers of
+    misses to avoid drowning you in information.  In this case,
+    cg_annotate summarises the functions that account for
+    99% of the <computeroutput>Ir</computeroutput> counts;
+    <computeroutput>Ir</computeroutput> is chosen as the
+    threshold event since it is the primary sort event.  The
+    threshold can be adjusted with the
+    <computeroutput>--threshold</computeroutput>
+    option.</para>
+  </listitem>
+
+  <listitem>
+    <para>Chosen for annotation: names of files specified
+    manually for annotation; in this case none.</para>
+  </listitem>
+
+  <listitem>
+    <para>Auto-annotation: whether auto-annotation was requested
+    via the <computeroutput>--auto=yes</computeroutput>
+    option. In this case no.</para>
+  </listitem>
+
+</itemizedlist>
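+
+<para>These defaults can all be changed on the
+<computeroutput>cg_annotate</computeroutput> command line.  A sketch
+of such an invocation follows; the exact value syntax shown here is
+an assumption, so check
+<computeroutput>cg_annotate</computeroutput>'s own usage message for
+the authoritative form:</para>
+
+<programlisting><![CDATA[
+cg_annotate --31751 --show=Ir,D1mr,D1mw --sort=D1mr --threshold=95]]></programlisting>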
+
+<para>Then follows summary statistics for the whole
+program. These are similar to the summary provided when running
+<computeroutput>valgrind
+--tool=cachegrind</computeroutput>.</para>
+  
+<para>Then follows function-by-function statistics. Each function
+is identified by a
+<computeroutput>file_name:function_name</computeroutput> pair. If
+a column contains only a dot it means the function never performs
+that event (eg. the third row shows that
+<computeroutput>strcmp()</computeroutput> contains no
+instructions that write to memory). The name
+<computeroutput>???</computeroutput> is used if the file name
+and/or function name could not be determined from debugging
+information. If most of the entries have the form
+<computeroutput>???:???</computeroutput> the program probably
+wasn't compiled with <computeroutput>-g</computeroutput>.  If any
+code was invalidated (either due to self-modifying code or
+unloading of shared objects) its counts are aggregated into a
+single cost centre written as
+<computeroutput>(discarded):(discarded)</computeroutput>.</para>
+
+<para>It is worth noting that functions will come from three
+types of source files:</para>
+
+<orderedlist>
+  <listitem>
+    <para>From the profiled program
+    (<filename>concord.c</filename> in this example).</para>
+  </listitem>
+  <listitem>
+    <para>From libraries (eg. <filename>getc.c</filename>)</para>
+  </listitem>
+  <listitem>
+    <para>From Valgrind's implementation of some libc functions
+    (eg. <computeroutput>vg_clientmalloc.c:malloc</computeroutput>).
+    These are recognisable because the filename begins with
+    <computeroutput>vg_</computeroutput>, and is probably one of
+    <filename>vg_main.c</filename>,
+    <filename>vg_clientmalloc.c</filename> or
+    <filename>vg_mylibc.c</filename>.</para>
+  </listitem>
+
+</orderedlist>
+
+<para>There are two ways to annotate source files -- by choosing
+them manually, or with the
+<computeroutput>--auto=yes</computeroutput> option. To do it
+manually, just specify the filenames as arguments to
+<computeroutput>cg_annotate</computeroutput>. For example, the
+output from running <filename>cg_annotate concord.c</filename>
+for our example produces the same output as above followed by an
+annotated version of <filename>concord.c</filename>, a section of
+which looks like:</para>
+
+<programlisting><![CDATA[
+--------------------------------------------------------------------------------
+-- User-annotated source: concord.c
+--------------------------------------------------------------------------------
+Ir        I1mr I2mr Dr      D1mr  D2mr  Dw      D1mw   D2mw
+
+[snip]
+
+        .    .    .       .     .     .       .      .      .  void init_hash_table(char *file_name, Word_Node *table[])
+        3    1    1       .     .     .       1      0      0  {
+        .    .    .       .     .     .       .      .      .      FILE *file_ptr;
+        .    .    .       .     .     .       .      .      .      Word_Info *data;
+        1    0    0       .     .     .       1      1      1      int line = 1, i;
+        .    .    .       .     .     .       .      .      .
+        5    0    0       .     .     .       3      0      0      data = (Word_Info *) create(sizeof(Word_Info));
+        .    .    .       .     .     .       .      .      .
+    4,991    0    0   1,995     0     0     998      0      0      for (i = 0; i < TABLE_SIZE; i++)
+    3,988    1    1   1,994     0     0     997     53     52          table[i] = NULL;
+        .    .    .       .     .     .       .      .      .
+        .    .    .       .     .     .       .      .      .      /* Open file, check it. */
+        6    0    0       1     0     0       4      0      0      file_ptr = fopen(file_name, "r");
+        2    0    0       1     0     0       .      .      .      if (!(file_ptr)) {
+        .    .    .       .     .     .       .      .      .          fprintf(stderr, "Couldn't open '%s'.\n", file_name);
+        1    1    1       .     .     .       .      .      .          exit(EXIT_FAILURE);
+        .    .    .       .     .     .       .      .      .      }
+        .    .    .       .     .     .       .      .      .
+  165,062    1    1  73,360     0     0  91,700      0      0      while ((line = get_word(data, line, file_ptr)) != EOF)
+  146,712    0    0  73,356     0     0  73,356      0      0          insert(data->word, data->line, table);
+        .    .    .       .     .     .       .      .      .
+        4    0    0       1     0     0       2      0      0      free(data);
+        4    0    0       1     0     0       2      0      0      fclose(file_ptr);
+        3    0    0       2     0     0       .      .      .  }]]></programlisting>
+
+<para>(Although column widths are automatically minimised, a wide
+terminal is clearly useful.)</para>
+  
+<para>Each source file is clearly marked
+(<computeroutput>User-annotated source</computeroutput>) as
+having been chosen manually for annotation.  If the file was
+found in one of the directories specified with the
+<computeroutput>-I / --include</computeroutput> option, the directory
+and file are both given.</para>
+
+<para>Each line is annotated with its event counts.  Events not
+applicable for a line are represented by a `.'; this is useful
+for distinguishing between an event which cannot happen, and one
+which can but did not.</para>
+
+<para>Sometimes only a small section of a source file is
+executed.  To minimise uninteresting output, Valgrind only shows
+annotated lines and lines within a small distance of annotated
+lines.  Gaps are marked with the line numbers so you know which
+part of a file the shown code comes from, eg:</para>
+
+<programlisting><![CDATA[
+(figures and code for line 704)
+-- line 704 ----------------------------------------
+-- line 878 ----------------------------------------
+(figures and code for line 878)]]></programlisting>
+
+<para>The amount of context to show around annotated lines is
+controlled by the <computeroutput>--context</computeroutput>
+option.</para>
+
+<para>To get automatic annotation, run
+<computeroutput>cg_annotate --auto=yes</computeroutput>.
+cg_annotate will automatically annotate every source file it can
+find that is mentioned in the function-by-function summary.
+Therefore, the files chosen for auto-annotation are affected by
+the <computeroutput>--sort</computeroutput> and
+<computeroutput>--threshold</computeroutput> options.  Each
+source file is clearly marked (<computeroutput>Auto-annotated
+source</computeroutput>) as being chosen automatically.  Any
+files that could not be found are mentioned at the end of the
+output, eg:</para>
+
+<programlisting><![CDATA[
+------------------------------------------------------------------
+The following files chosen for auto-annotation could not be found:
+------------------------------------------------------------------
+  getc.c
+  ctype.c
+  ../sysdeps/generic/lockfile.c]]></programlisting>
+
+<para>This is quite common for library files, since libraries are
+usually compiled with debugging information, but the source files
+are often not present on a system.  If a file is chosen for
+annotation <command>both</command> manually and automatically, it
+is marked as <computeroutput>User-annotated
+source</computeroutput>. Use the <computeroutput>-I /
+--include</computeroutput> option to tell Valgrind where to look
+for source files if the filenames found from the debugging
+information aren't specific enough.</para>
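+
+<para>For example, to tell cg_annotate to look for library sources
+under a particular directory when auto-annotating, you might run
+something like the following (the process id and the directory here
+are purely illustrative):</para>
+
+<programlisting><![CDATA[
+cg_annotate --12345 --auto=yes -I=/usr/src/glibc-2.3.2/libio]]></programlisting>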
+
+<para>Beware that cg_annotate can take some time to digest large
+<computeroutput>cachegrind.out.pid</computeroutput> files,
+e.g. 30 seconds or more.  Also beware that auto-annotation can
+produce a lot of output if your program is large!</para>
+
+</sect2>
+
+
+<sect2 id="cg-manual.assembler" xreflabel="Annotating assembler programs">
+<title>Annotating assembler programs</title>
+
+<para>Valgrind can annotate assembler programs too, or annotate
+the assembler generated for your C program.  Sometimes this is
+useful for understanding what is really happening when an
+interesting line of C code is translated into multiple
+instructions.</para>
+
+<para>To do this, you just need to assemble your
+<computeroutput>.s</computeroutput> files with assembler-level
+debug information.  gcc doesn't do this, but you can use the GNU
+assembler with the <computeroutput>--gstabs</computeroutput>
+option to generate object files with this information, eg:</para>
+
+<programlisting><![CDATA[
+as --gstabs foo.s]]></programlisting>
+
+<para>You can then profile and annotate source files in the same
+way as for C/C++ programs.</para>
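+
+<para>A complete session might therefore look something like the
+following sketch, assuming <filename>foo.s</filename> defines
+<computeroutput>main</computeroutput> (the process id 12345 is just
+an example):</para>
+
+<programlisting><![CDATA[
+as --gstabs foo.s -o foo.o
+gcc foo.o -o foo
+valgrind --tool=cachegrind ./foo
+cg_annotate --12345 foo.s]]></programlisting>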
+
+</sect2>
+
+</sect1>
+
+
+<sect1 id="cg-manual.annopts" xreflabel="cg_annotate options">
+<title><computeroutput>cg_annotate</computeroutput> options</title>
+
+<itemizedlist>
+
+  <listitem>
+    <para><computeroutput>--pid</computeroutput></para>
+    <para>Indicates which
+    <computeroutput>cachegrind.out.pid</computeroutput> file to
+    read.  Not actually an option -- it is required.</para>
+  </listitem>
+    
+  <listitem>
+    <para><computeroutput>-h, --help</computeroutput></para>
+    <para><computeroutput>-v, --version</computeroutput></para>
+    <para>Help and version, as usual.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--sort=A,B,C</computeroutput> [default:
+    order in
+    <computeroutput>cachegrind.out.pid</computeroutput>]</para>
+    <para>Specifies the events upon which the sorting of the
+    function-by-function entries will be based.  Useful if you
+    want to concentrate on eg. I cache misses
+    (<computeroutput>--sort=I1mr,I2mr</computeroutput>), or D
+    cache misses
+    (<computeroutput>--sort=D1mr,D2mr</computeroutput>), or L2
+    misses
+    (<computeroutput>--sort=D2mr,I2mr</computeroutput>).</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--show=A,B,C</computeroutput> [default:
+    all, using order in
+    <computeroutput>cachegrind.out.pid</computeroutput>]</para>
+    <para>Specifies which events to show (and the column
+    order). Default is to use all present in the
+    <computeroutput>cachegrind.out.pid</computeroutput> file (and
+    use the order in the file).</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--threshold=X</computeroutput>
+    [default: 99%]</para>
+    <para>Sets the threshold for the function-by-function
+    summary.  Functions are shown that account for more than X%
+    of the primary sort event.  If auto-annotating, also affects
+    which files are annotated.</para>
+      
+    <para>Note: thresholds can be set for more than one of the
+    events by appending any events for the
+    <computeroutput>--sort</computeroutput> option with a colon
+    and a number (no spaces, though).  E.g. if you want to see
+    the functions that cover 99% of L2 read misses and 99% of L2
+    write misses, use this option:</para>
+    <para><computeroutput>--sort=D2mr:99,D2mw:99</computeroutput></para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--auto=no</computeroutput> [default]</para>
+    <para><computeroutput>--auto=yes</computeroutput></para>
+    <para>When enabled, automatically annotates every file that
+    is mentioned in the function-by-function summary that can be
+    found.  Also gives a list of those that couldn't be found.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--context=N</computeroutput> [default:
+    8]</para>
+    <para>Print N lines of context before and after each
+    annotated line.  Avoids printing large sections of source
+    files that were not executed.  Use a large number
+    (eg. 10,000) to show all source lines.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>-I=&lt;dir&gt;,
+      --include=&lt;dir&gt;</computeroutput> [default: empty
+      string]</para>
+    <para>Adds a directory to the list in which to search for
+    files.  Multiple -I/--include options can be given to add
+    multiple directories.</para>
+  </listitem>
+
+</itemizedlist>
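+
+<para>Putting several of these options together, a typical (purely
+illustrative) invocation concentrating on L2 data misses might look
+like:</para>
+
+<programlisting><![CDATA[
+cg_annotate --12345 --auto=yes --sort=D2mr,D2mw --context=4]]></programlisting>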
+  
+
+
+<sect2>
+<title>Warnings</title>
+
+<para>There are a couple of situations in which
+<computeroutput>cg_annotate</computeroutput> issues
+warnings.</para>
+
+<itemizedlist>
+  <listitem>
+    <para>If a source file is more recent than the
+    <computeroutput>cachegrind.out.pid</computeroutput> file.
+    This is because the information in
+    <computeroutput>cachegrind.out.pid</computeroutput> is only
+    recorded with line numbers, so if the line numbers change at
+    all in the source (eg.  lines added, deleted, swapped), any
+    annotations will be incorrect.</para>
+  </listitem>
+  <listitem>
+    <para>If information is recorded about line numbers past the
+    end of a file.  This can be caused by the above problem,
+    ie. shortening the source file while using an old
+    <computeroutput>cachegrind.out.pid</computeroutput> file.  If
+    this happens, the figures for the bogus lines are printed
+    anyway (clearly marked as bogus) in case they are
+    important.</para>
+  </listitem>
+</itemizedlist>
+
+</sect2>
+
+
+
+<sect2>
+<title>Things to watch out for</title>
+
+<para>Some odd things that can occur during annotation:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>If annotating at the assembler level, you might see
+    something like this:</para>
+<programlisting><![CDATA[
+      1    0    0  .    .    .  .    .    .          leal -12(%ebp),%eax
+      1    0    0  .    .    .  1    0    0          movl %eax,84(%ebx)
+      2    0    0  0    0    0  1    0    0          movl $1,-20(%ebp)
+      .    .    .  .    .    .  .    .    .          .align 4,0x90
+      1    0    0  .    .    .  .    .    .          movl $.LnrB,%eax
+      1    0    0  .    .    .  1    0    0          movl %eax,-16(%ebp)]]></programlisting>
+
+    <para>How can the third instruction be executed twice when
+    the others are executed only once?  As it turns out, it
+    isn't.  Here's a dump of the executable, using
+    <computeroutput>objdump -d</computeroutput>:</para>
+<programlisting><![CDATA[
+      8048f25:       8d 45 f4                lea    0xfffffff4(%ebp),%eax
+      8048f28:       89 43 54                mov    %eax,0x54(%ebx)
+      8048f2b:       c7 45 ec 01 00 00 00    movl   $0x1,0xffffffec(%ebp)
+      8048f32:       89 f6                   mov    %esi,%esi
+      8048f34:       b8 08 8b 07 08          mov    $0x8078b08,%eax
+      8048f39:       89 45 f0                mov    %eax,0xfffffff0(%ebp)]]></programlisting>
+
+    <para>Notice the extra <computeroutput>mov
+    %esi,%esi</computeroutput> instruction.  Where did this come
+    from?  The GNU assembler inserted it to serve as the two
+    bytes of padding needed to align the <computeroutput>movl
+    $.LnrB,%eax</computeroutput> instruction on a four-byte
+    boundary, but pretended it didn't exist when adding debug
+    information.  Thus when Valgrind reads the debug info it
+    thinks that the <computeroutput>movl
+    $0x1,0xffffffec(%ebp)</computeroutput> instruction covers the
+    address range 0x8048f2b--0x8048f33 by itself, and attributes
+    the counts for the <computeroutput>mov
+    %esi,%esi</computeroutput> to it.</para>
+  </listitem>
+
+  <listitem>
+    <para>Inlined functions can cause strange results in the
+    function-by-function summary.  If a function
+    <computeroutput>inline_me()</computeroutput> is defined in
+    <filename>foo.h</filename> and inlined in the functions
+    <computeroutput>f1()</computeroutput>,
+    <computeroutput>f2()</computeroutput> and
+    <computeroutput>f3()</computeroutput> in
+    <filename>bar.c</filename>, there will not be a
+    <computeroutput>foo.h:inline_me()</computeroutput> function
+    entry.  Instead, there will be separate function entries for
+    each inlining site, ie.
+    <computeroutput>foo.h:f1()</computeroutput>,
+    <computeroutput>foo.h:f2()</computeroutput> and
+    <computeroutput>foo.h:f3()</computeroutput>.  To find the
+    total counts for
+    <computeroutput>foo.h:inline_me()</computeroutput>, add up
+    the counts from each entry.</para>
+
+    <para>The reason for this is that although the debug info
+    output by gcc indicates the switch from
+    <filename>bar.c</filename> to <filename>foo.h</filename>, it
+    doesn't indicate the name of the function in
+    <filename>foo.h</filename>, so Valgrind keeps using the old
+    one.</para>
+  </listitem>
+
+  <listitem>
+    <para>Sometimes, the same filename might be represented with
+    a relative name and with an absolute name in different parts
+    of the debug info, eg:
+    <filename>/home/user/proj/proj.h</filename> and
+    <filename>../proj.h</filename>.  In this case, if you use
+    auto-annotation, the file will be annotated twice with the
+    counts split between the two.</para>
+  </listitem>
+
+  <listitem>
+    <para>Files with more than 65,535 lines cause difficulties
+    for the stabs debug info reader.  This is because the line
+    number in the <computeroutput>struct nlist</computeroutput>
+    defined in <filename>a.out.h</filename> under Linux is only a
+    16-bit value.  Valgrind can handle some files with more than
+    65,535 lines correctly by making some guesses to identify
+    line number overflows.  But some cases are beyond it, in
+    which case you'll get a warning message explaining that
+    annotations for the file might be incorrect.</para>
+  </listitem>
+
+  <listitem>
+    <para>If you compile some files with
+    <computeroutput>-g</computeroutput> and some without, some
+    events that take place in a file without debug info could be
+    attributed to the last line of a file with debug info
+    (whichever one gets placed before the non-debug-info file in
+    the executable).</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>This list looks long, but these cases should be fairly
+rare.</para>
+
+<formalpara>
+  <title>Note:</title>
+  <para><computeroutput>stabs</computeroutput> is not an easy
+  format to read.  If you come across bizarre annotations that
+  look like they might be caused by a bug in the stabs reader, please
+  let us know.</para>
+</formalpara>
+
+</sect2>
+
+
+
+<sect2>
+<title>Accuracy</title>
+
+<para>Valgrind's cache profiling has a number of
+shortcomings:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>It doesn't account for kernel activity -- the effect of
+    system calls on the cache contents is ignored.</para>
+  </listitem>
+
+  <listitem>
+    <para>It doesn't account for other process activity (although
+    this is probably desirable when considering a single
+    program).</para>
+  </listitem>
+
+  <listitem>
+    <para>It doesn't account for virtual-to-physical address
+    mappings; hence the entire simulation is not a true
+    representation of what's happening in the
+    cache.</para>
+  </listitem>
+
+  <listitem>
+    <para>It doesn't account for cache misses not visible at the
+    instruction level, eg. those arising from TLB misses, or
+    speculative execution.</para>
+  </listitem>
+
+  <listitem>
+    <para>Valgrind's custom threads implementation will schedule
+    threads differently to the standard one.  This could warp the
+    results for threaded programs.</para>
+  </listitem>
+
+  <listitem>
+    <para>The instructions <computeroutput>bts</computeroutput>,
+    <computeroutput>btr</computeroutput> and
+    <computeroutput>btc</computeroutput> will incorrectly be
+    counted as doing a data read if both the arguments are
+    registers, eg:</para>
+<programlisting><![CDATA[
+    btsl %eax, %edx]]></programlisting>
+
+    <para>This should only happen rarely.</para>
+  </listitem>
+
+  <listitem>
+    <para>FPU instructions with data sizes of 28 and 108 bytes
+    (e.g.  <computeroutput>fsave</computeroutput>) are treated as
+    though they only access 16 bytes.  These instructions seem to
+    be rare so hopefully this won't affect accuracy much.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Another thing worth noting is that the results are very
+sensitive.  Changing the size of the
+<filename>valgrind.so</filename> file, the size of the program
+being profiled, or even the length of its name can perturb the
+results.  Variations will be small, but don't expect perfectly
+repeatable results if your program changes at all.</para>
+
+<para>While these factors mean you shouldn't trust the results to
+be super-accurate, hopefully they should be close enough to be
+useful.</para>
+
+</sect2>
+
+
+<sect2>
+<title>Todo</title>
+
+<itemizedlist>
+  <listitem>
+    <para>Program start-up/shut-down calls a lot of functions
+    that aren't interesting and just complicate the output.
+    Would be nice to exclude these somehow.</para>
+  </listitem>
+</itemizedlist> 
+
+</sect2>
+
+</sect1>
+</chapter>
diff --git a/cachegrind/docs/cg-tech-docs.xml b/cachegrind/docs/cg-tech-docs.xml
new file mode 100644
index 0000000..210dee0
--- /dev/null
+++ b/cachegrind/docs/cg-tech-docs.xml
@@ -0,0 +1,560 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="cg-tech-docs" xreflabel="How Cachegrind works">
+
+<title>How Cachegrind works</title>
+
+<sect1 id="cg-tech-docs.profiling" xreflabel="Cache profiling">
+<title>Cache profiling</title>
+
+<para>Valgrind is a very nice platform for doing cache profiling
+and other kinds of simulation, because it converts horrible x86
+instructions into nice clean RISC-like UCode.  For example, for
+cache profiling we are interested in instructions that read and
+write memory; in UCode there are only four instructions that do
+this: <computeroutput>LOAD</computeroutput>,
+<computeroutput>STORE</computeroutput>,
+<computeroutput>FPU_R</computeroutput> and
+<computeroutput>FPU_W</computeroutput>.  By contrast, because of
+the x86 addressing modes, almost every instruction can read or
+write memory.</para>
+
+<para>Most of the cache profiling machinery is in the file
+<filename>vg_cachesim.c</filename>.</para>
+
+<para>These notes are a somewhat haphazard guide to how
+Valgrind's cache profiling works.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.costcentres" xreflabel="Cost centres">
+<title>Cost centres</title>
+
+<para>Valgrind gathers cache profiling information about every
+instruction executed, individually.  Each instruction has a <command>cost
+centre</command> associated with it.  There are two kinds of cost
+centre: one for instructions that don't reference memory
+(<computeroutput>iCC</computeroutput>), and one for instructions
+that do (<computeroutput>idCC</computeroutput>):</para>
+
+<programlisting><![CDATA[
+typedef struct _CC {
+  ULong a;
+  ULong m1;
+  ULong m2;
+} CC;
+
+typedef struct _iCC {
+  /* word 1 */
+  UChar tag;
+  UChar instr_size;
+
+  /* words 2+ */
+  Addr instr_addr;
+  CC I;
+} iCC;
+   
+typedef struct _idCC {
+  /* word 1 */
+  UChar tag;
+  UChar instr_size;
+  UChar data_size;
+
+  /* words 2+ */
+  Addr instr_addr;
+  CC I; 
+  CC D; 
+} idCC; ]]></programlisting>
+
+<para>Each <computeroutput>CC</computeroutput> has three fields
+<computeroutput>a</computeroutput>,
+<computeroutput>m1</computeroutput>,
+<computeroutput>m2</computeroutput> for recording references,
+level 1 misses and level 2 misses.  Each of these is a 64-bit
+<computeroutput>ULong</computeroutput> -- the numbers can get
+very large, ie. greater than the 4.2 billion allowed by a 32-bit
+unsigned int.</para>
+
+<para>An <computeroutput>iCC</computeroutput> has one
+<computeroutput>CC</computeroutput> for instruction cache
+accesses.  An <computeroutput>idCC</computeroutput> has two: one
+for instruction cache accesses, and one for data cache
+accesses.</para>
+
+<para>The <computeroutput>iCC</computeroutput> and
+<computeroutput>idCC</computeroutput> structs also store
+unchanging information about the instruction:</para>
+<itemizedlist>
+  <listitem>
+    <para>An instruction-type identification tag (explained
+    below)</para>
+  </listitem>
+  <listitem>
+    <para>Instruction size</para>
+  </listitem>
+  <listitem>
+    <para>Data reference size
+    (<computeroutput>idCC</computeroutput> only)</para>
+  </listitem>
+  <listitem>
+    <para>Instruction address</para>
+  </listitem>
+</itemizedlist>
+
+<para>Note that data address is not one of the fields for
+<computeroutput>idCC</computeroutput>.  This is because for many
+memory-referencing instructions the data address can change each
+time it's executed (eg. if it uses register-offset addressing).
+We have to give this item to the cache simulation in a different
+way (see Instrumentation section below). Some memory-referencing
+instructions do always reference the same address, but we don't
+try to treat them specially in order to keep things simple.</para>
+
+<para>Also note that there is only room for recording info about
+one data cache access in an
+<computeroutput>idCC</computeroutput>.  So what about
+instructions that do a read then a write, such as:</para>
+<programlisting><![CDATA[
+inc (%esi)]]></programlisting>
+
+<para>In a write-allocate cache, as simulated by Valgrind, the
+write cannot miss, since it immediately follows the read which
+will drag the block into the cache if it's not already there.  So
+the write access isn't really interesting, and Valgrind doesn't
+record it.  This means that Valgrind doesn't measure memory
+references, but rather memory references that could miss in the
+cache.  This behaviour is the same as that used by the AMD Athlon
+hardware counters.  It also has the benefit of simplifying the
+implementation -- instructions that read and write memory can be
+treated like instructions that read memory.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.ccstore" xreflabel="Storing cost-centres">
+<title>Storing cost-centres</title>
+
+<para>Cost centres are stored in a way that makes them very cheap
+to look up, which is important since one is looked up for every
+original x86 instruction executed.</para>
+
+<para>Valgrind does JIT translations at the basic block level,
+and cost centres are also set up and stored at the basic block
+level.  By doing things carefully, we store all the cost centres
+for a basic block in a contiguous array, and lookup comes almost
+for free.</para>
+
+<para>Consider this part of a basic block (for exposition
+purposes, pretend it's an entire basic block):</para>
+<programlisting><![CDATA[
+movl $0x0,%eax
+movl $0x99, -4(%ebp)]]></programlisting>
+
+<para>The translation to UCode looks like this:</para>
+<programlisting><![CDATA[
+MOVL      $0x0, t20
+PUTL      t20, %EAX
+INCEIPo   $5
+
+LEA1L     -4(t4), t14
+MOVL      $0x99, t18
+STL       t18, (t14)
+INCEIPo   $7]]></programlisting>
+
+<para>The first step is to allocate the cost centres.  This
+requires a preliminary pass to count how many x86 instructions
+were in the basic block, and their types (and thus sizes).  UCode
+translations for single x86 instructions are delimited by the
+<computeroutput>INCEIPo</computeroutput> instruction, the
+argument of which gives the byte size of the instruction (note
+that lazy INCEIP updating is turned off to allow this).</para>
+
+<para>We can tell if an x86 instruction references memory by
+looking for <computeroutput>LDL</computeroutput> and
+<computeroutput>STL</computeroutput> UCode instructions, and thus
+what kind of cost centre is required.  From this we can determine
+how many cost centres we need for the basic block, and their
+sizes.  We can then allocate them in a single array.</para>
+
+<para>Consider the example code above.  After the preliminary
+pass, we know we need two cost centres, one
+<computeroutput>iCC</computeroutput> and one
+<computeroutput>idCC</computeroutput>.  So we allocate an array to
+store these which looks like this:</para>
+
+<programlisting><![CDATA[
+|(uninit)|      tag         (1 byte)
+|(uninit)|      instr_size  (1 byte)
+|(uninit)|      (padding)   (2 bytes)
+|(uninit)|      instr_addr  (4 bytes)
+|(uninit)|      I.a         (8 bytes)
+|(uninit)|      I.m1        (8 bytes)
+|(uninit)|      I.m2        (8 bytes)
+
+|(uninit)|      tag         (1 byte)
+|(uninit)|      instr_size  (1 byte)
+|(uninit)|      data_size   (1 byte)
+|(uninit)|      (padding)   (1 byte)
+|(uninit)|      instr_addr  (4 bytes)
+|(uninit)|      I.a         (8 bytes)
+|(uninit)|      I.m1        (8 bytes)
+|(uninit)|      I.m2        (8 bytes)
+|(uninit)|      D.a         (8 bytes)
+|(uninit)|      D.m1        (8 bytes)
+|(uninit)|      D.m2        (8 bytes)]]></programlisting>
+
+<para>(We can see now why we need tags to distinguish between the
+two types of cost centres.)</para>
+
+<para>We also record the size of the array.  We look up the debug
+info of the first instruction in the basic block, and then stick
+the array into a table indexed by filename and function name.
+This makes it easy to dump the information quickly to file at the
+end.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.instrum" xreflabel="Instrumentation">
+<title>Instrumentation</title>
+
+<para>The instrumentation pass has two main jobs:</para>
+
+<orderedlist>
+  <listitem>
+    <para>Fill in the gaps in the allocated cost centres.</para>
+  </listitem>
+  <listitem>
+    <para>Add UCode to call the cache simulator for each
+   instruction.</para>
+  </listitem>
+</orderedlist>
+
+<para>The instrumentation pass steps through the UCode and the
+cost centres in tandem.  As each original x86 instruction's UCode
+is processed, the appropriate gaps in the instruction's cost
+centre are filled in, for example:</para>
+
+<programlisting><![CDATA[
+|INSTR_CC|      tag         (1 byte)
+|5       |      instr_size  (1 byte)
+|(uninit)|      (padding)   (2 bytes)
+|i_addr1 |      instr_addr  (4 bytes)
+|0       |      I.a         (8 bytes)
+|0       |      I.m1        (8 bytes)
+|0       |      I.m2        (8 bytes)
+
+|WRITE_CC|      tag         (1 byte)
+|7       |      instr_size  (1 byte)
+|4       |      data_size   (1 byte)
+|(uninit)|      (padding)   (1 byte)
+|i_addr2 |      instr_addr  (4 bytes)
+|0       |      I.a         (8 bytes)
+|0       |      I.m1        (8 bytes)
+|0       |      I.m2        (8 bytes)
+|0       |      D.a         (8 bytes)
+|0       |      D.m1        (8 bytes)
+|0       |      D.m2        (8 bytes)]]></programlisting>
+
+<para>(Note that this step is not performed if a basic block is
+re-translated; see <xref linkend="cg-tech-docs.retranslations"/> for
+more information.)</para>
+
+<para>GCC inserts padding before the
+<computeroutput>instr_addr</computeroutput> field so that it is
+word aligned.</para>
+
+<para>The instrumentation added to call the cache simulation
+function looks like this (instrumentation is indented to
+distinguish it from the original UCode):</para>
+
+<programlisting><![CDATA[
+MOVL      $0x0, t20
+PUTL      t20, %EAX
+  PUSHL     %eax
+  PUSHL     %ecx
+  PUSHL     %edx
+  MOVL      $0x4091F8A4, t46  # address of 1st CC
+  PUSHL     t46
+  CALLMo    $0x12             # second cachesim function
+  CLEARo    $0x4
+  POPL      %edx
+  POPL      %ecx
+  POPL      %eax
+INCEIPo   $5
+
+LEA1L     -4(t4), t14
+MOVL      $0x99, t18
+  MOVL      t14, t42
+STL       t18, (t14)
+  PUSHL     %eax
+  PUSHL     %ecx
+  PUSHL     %edx
+  PUSHL     t42
+  MOVL      $0x4091F8C4, t44  # address of 2nd CC
+  PUSHL     t44
+  CALLMo    $0x13             # second cachesim function
+  CLEARo    $0x8
+  POPL      %edx
+  POPL      %ecx
+  POPL      %eax
+INCEIPo   $7]]></programlisting>
+
+<para>Consider the first instruction's UCode.  Each call is
+surrounded by three <computeroutput>PUSHL</computeroutput> and
+<computeroutput>POPL</computeroutput> instructions to save and
+restore the caller-save registers.  Then the address of the
+instruction's cost centre is pushed onto the stack, to be the
+first argument to the cache simulation function.  The address is
+known at this point because we are doing a simultaneous pass
+through the cost centre array.  This means the cost centre lookup
+for each instruction is almost free (just the cost of pushing an
+argument for a function call).  Then the call to the cache
+simulation function for non-memory-reference instructions is made
+(note that the <computeroutput>CALLMo</computeroutput>
+UInstruction takes an offset into a table of predefined
+functions; it is not an absolute address), and the single
+argument is <computeroutput>CLEAR</computeroutput>ed from the
+stack.</para>
+
+<para>The second instruction's UCode is similar.  The only
+difference is that, as mentioned before, we have to pass the
+address of the data item referenced to the cache simulation
+function too.  This explains the <computeroutput>MOVL t14,
+t42</computeroutput> and <computeroutput>PUSHL
+t42</computeroutput> UInstructions.  (Note that the seemingly
+redundant <computeroutput>MOV</computeroutput>ing will probably
+be optimised away during register allocation.)</para>
+
+<para>Note that instead of storing unchanging information about
+each instruction (instruction size, data size, etc) in its cost
+centre, we could have passed in these arguments to the simulation
+function.  But this would slow the calls down (two or three extra
+arguments pushed onto the stack).  Also it would bloat the UCode
+instrumentation by amounts similar to the space required for them
+in the cost centre; bloated UCode would also fill the translation
+cache more quickly, requiring more translations for large
+programs and slowing them down more.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.retranslations" 
+         xreflabel="Handling basic block retranslations">
+<title>Handling basic block retranslations</title>
+
+<para>The above description ignores one complication.  Valgrind
+has a limited size cache for basic block translations; if it
+fills up, old translations are discarded.  If a discarded basic
+block is executed again, it must be re-translated.</para>
+
+<para>However, we can't use this approach for profiling -- we
+can't throw away cost centres for instructions in the middle of
+execution!  So when a basic block is translated, we first look
+for its cost centre array in the hash table.  If there is no cost
+centre array, it must be the first translation, so we proceed as
+described above.  But if there is a cost centre array already, it
+must be a retranslation.  In this case, we skip the cost centre
+allocation and initialisation steps, but still do the UCode
+instrumentation step.</para>
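+
+<para>In outline, the logic is roughly as follows (a sketch only;
+the type, function and table names here are invented for
+illustration and are not the actual Valgrind code):</para>
+
+<programlisting><![CDATA[
+/* Called whenever a basic block is (re)translated. */
+cost_centre_array* get_cost_centres(unsigned int bb_orig_addr)
+{
+   cost_centre_array* ccs = hash_table_lookup(bb_cc_table, bb_orig_addr);
+   if (ccs == NULL) {
+      /* First translation: allocate and initialise the cost
+         centres for this basic block. */
+      ccs = allocate_and_init_cost_centres(bb_orig_addr);
+      hash_table_insert(bb_cc_table, bb_orig_addr, ccs);
+   }
+   /* Otherwise it is a retranslation: reuse the existing array;
+      the caller still performs the UCode instrumentation step. */
+   return ccs;
+}]]></programlisting>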
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.cachesim" xreflabel="The cache simulation">
+<title>The cache simulation</title>
+
+<para>The cache simulation is fairly straightforward.  It just
+tracks which memory blocks are in the cache at the moment (it
+doesn't track the contents, since that is irrelevant).</para>
+
+<para>The interface to the simulation is quite clean.  The
+functions called from the UCode contain calls to the simulation
+functions in the files
+<filename>vg_cachesim_{I1,D1,L2}.c</filename>; these calls are
+inlined so that only one function call is done per simulated x86
+instruction.  The file <filename>vg_cachesim.c</filename> simply
+<computeroutput>#include</computeroutput>s the three files
+containing the simulation, which makes plugging in new cache
+simulations very easy -- you just replace the three files and
+recompile.</para>
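+
+<para>To give a flavour of what such a simulation function might
+look like, here is a minimal sketch of a set-associative cache using
+the bit-selection hash function; it is illustrative only (in
+particular, the LRU replacement policy and all the names are
+assumptions of the sketch, not the actual code in
+<filename>vg_cachesim_{I1,D1,L2}.c</filename>):</para>
+
+<programlisting><![CDATA[
+typedef struct {
+   int           sets;            /* number of sets (a power of two) */
+   int           assoc;           /* lines per set                   */
+   int           line_size_bits;  /* log2(line size in bytes)        */
+   unsigned int* tags;            /* sets * assoc tags, LRU-ordered  */
+} cache_t;
+
+/* Simulate one reference; returns 1 on a miss, 0 on a hit. */
+static int cachesim_ref(cache_t* c, unsigned int addr)
+{
+   unsigned int  tag = addr >> c->line_size_bits;  /* block number  */
+   unsigned int  set = tag & (c->sets - 1);        /* bit selection */
+   unsigned int* w   = &c->tags[set * c->assoc];
+   int i, j;
+
+   for (i = 0; i < c->assoc; i++) {
+      if (w[i] == tag) {
+         /* Hit: move the tag to the most-recently-used slot. */
+         for (j = i; j > 0; j--) w[j] = w[j-1];
+         w[0] = tag;
+         return 0;
+      }
+   }
+   /* Miss: evict the least-recently-used tag, insert the new one. */
+   for (j = c->assoc - 1; j > 0; j--) w[j] = w[j-1];
+   w[0] = tag;
+   return 1;
+}]]></programlisting>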
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.output" xreflabel="Output">
+<title>Output</title>
+
+<para>Output is fairly straightforward, basically printing the
+cost centre for every instruction, grouped by files and
+functions.  Total counts (eg. total cache accesses, total L1
+misses) are calculated when traversing this structure rather than
+during execution, to save time; the cache simulation functions
+are called so often that even one or two extra adds can make a
+sizeable difference.</para>
+
+<para>The input file has the following format:</para>
+<programlisting><![CDATA[
+file         ::= desc_line* cmd_line events_line data_line+ summary_line
+desc_line    ::= "desc:" ws? non_nl_string
+cmd_line     ::= "cmd:" ws? cmd
+events_line  ::= "events:" ws? (event ws)+
+data_line    ::= file_line | fn_line | count_line
+file_line    ::= ("fl=" | "fi=" | "fe=") filename
+fn_line      ::= "fn=" fn_name
+count_line   ::= line_num ws? (count ws)+
+summary_line ::= "summary:" ws? (count ws)+
+count        ::= num | "."]]></programlisting>
+
+<para>Where:</para>
+<itemizedlist>
+  <listitem>
+    <para><computeroutput>non_nl_string</computeroutput> is any
+    string not containing a newline.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>cmd</computeroutput> is a command line
+    invocation.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>filename</computeroutput> and
+    <computeroutput>fn_name</computeroutput> can be anything.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>num</computeroutput> and
+    <computeroutput>line_num</computeroutput> are decimal
+    numbers.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>ws</computeroutput> is whitespace.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>nl</computeroutput> is a newline.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>The contents of the "desc:" lines is printed out at the top
+of the summary.  This is a generic way of providing simulation
+specific information, eg. for giving the cache configuration for
+cache simulation.</para>
+
+<para>Counts can be "." to represent "N/A", eg. the number of
+write misses for an instruction that doesn't write to
+memory.</para>
+
+<para>The number of counts in each
+<computeroutput>count_line</computeroutput> and the
+<computeroutput>summary_line</computeroutput> should not exceed
+the number of events in the
+<computeroutput>events_line</computeroutput>.  If the number in a
+<computeroutput>count_line</computeroutput> is smaller, cg_annotate
+treats the missing counts as though they were "." entries.</para>
+
+<para>A <computeroutput>file_line</computeroutput> changes the
+current file name.  A <computeroutput>fn_line</computeroutput>
+changes the current function name.  A
+<computeroutput>count_line</computeroutput> contains counts that
+pertain to the current filename/fn_name.  A "fl="
+<computeroutput>file_line</computeroutput> and a
+<computeroutput>fn_line</computeroutput> must appear before any
+<computeroutput>count_line</computeroutput>s to give the context
+of the first <computeroutput>count_line</computeroutput>s.</para>
+
+<para>Each <computeroutput>file_line</computeroutput> should be
+immediately followed by a
+<computeroutput>fn_line</computeroutput>.  "fi="
+<computeroutput>file_lines</computeroutput> are used to switch
+filenames for inlined functions; "fe="
+<computeroutput>file_lines</computeroutput> are similar, but are
+put at the end of a basic block in which the file name hasn't
+been switched back to the original file name.  (fi and fe lines
+behave the same; they are only distinguished to help
+debugging.)</para>
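+
+<para>To make the format concrete, here is a minimal, entirely
+made-up example file that follows the grammar above:</para>
+
+<programlisting><![CDATA[
+desc: I1 cache: 65536 B, 64 B, 2-way associative
+cmd: ./foo
+events: Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
+fl=foo.c
+fn=main
+5 3 1 1 . . . 1 0 0
+6 2 0 0 1 0 0 . . .
+summary: 5 1 1 1 0 0 1 0 0]]></programlisting>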
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.summary" 
+         xreflabel="Summary of performance features">
+<title>Summary of performance features</title>
+
+<para>Quite a lot of work has gone into making the profiling as
+fast as possible.  This is a summary of the important
+features:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>The basic block-level cost centre storage allows almost
+    free cost centre lookup.</para>
+  </listitem>
+  
+  <listitem>
+    <para>Only one function call is made per instruction
+    simulated; even this accounts for a sizeable percentage of
+    execution time, but it seems unavoidable if we want
+    flexibility in the cache simulator.</para>
+  </listitem>
+
+  <listitem>
+    <para>Unchanging information about an instruction is stored
+    in its cost centre, avoiding unnecessary argument pushing,
+    and minimising UCode instrumentation bloat.</para>
+  </listitem>
+
+  <listitem>
+    <para>Summary counts are calculated at the end, rather than
+    during execution.</para>
+  </listitem>
+
+  <listitem>
+    <para>The <computeroutput>cachegrind.out</computeroutput>
+    output files can contain huge amounts of information; the file
+    format was carefully chosen to minimise file sizes.</para>
+  </listitem>
+
+</itemizedlist>
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.annotate" xreflabel="Annotation">
+<title>Annotation</title>
+
+<para>Annotation is done by cg_annotate.  It is a fairly
+straightforward Perl script that slurps up all the cost centres,
+and then runs through all the chosen source files, printing out
+cost centres with them.  It too has been carefully optimised.</para>
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.extensions" xreflabel="Similar work, extensions">
+<title>Similar work, extensions</title>
+
+<para>It would be relatively straightforward to do other
+simulations and obtain line-by-line information about interesting
+events.  A good example would be branch prediction -- all
+branches could be instrumented to interact with a branch
+prediction simulator, using very similar techniques to those
+described above.</para>
+
+<para>In particular, cg_annotate would not need to change -- the
+file format is such that it is not specific to the cache
+simulation, but could be used for any kind of line-by-line
+information.  The only part of cg_annotate that is specific to
+the cache simulation is the name of the input file
+(<computeroutput>cachegrind.out</computeroutput>), although it
+would be very simple to add an option to control this.</para>
+
+</sect1>
+
+</chapter>
diff --git a/cachegrind/docs/cg_main.html b/cachegrind/docs/cg_main.html
deleted file mode 100644
index 545748a..0000000
--- a/cachegrind/docs/cg_main.html
+++ /dev/null
@@ -1,714 +0,0 @@
-<html>
-  <head>
-    <title>Cachegrind: a cache-miss profiler</title>
-  </head>
-
-<body>
-<a name="cg-top"></a>
-<h2>4&nbsp; <b>Cachegrind</b>: a cache-miss profiler</h2>
-
-To use this tool, you must specify <code>--tool=cachegrind</code>
-on the Valgrind command line.
-
-<p>
-Detailed technical documentation on how Cachegrind works is available
-<A HREF="cg_techdocs.html">here</A>.  If you want to know how
-to <b>use</b> it, you only need to read this page.
-
-
-<a name="cache"></a>
-<h3>4.1&nbsp; Cache profiling</h3>
-Cachegrind is a tool for doing cache simulations and annotating your source
-line-by-line with the number of cache misses.  In particular, it records:
-<ul>
-  <li>L1 instruction cache reads and misses;
-  <li>L1 data cache reads and read misses, writes and write misses;
-  <li>L2 unified cache reads and read misses, writes and writes misses.
-</ul>
-On a modern x86 machine, an L1 miss will typically cost around 10 cycles,
-and an L2 miss can cost as much as 200 cycles. Detailed cache profiling can be
-very useful for improving the performance of your program.<p>
-
-Also, since one instruction cache read is performed per instruction executed,
-you can find out how many instructions are executed per line, which can be
-useful for traditional profiling and test coverage.<p>
-
-Any feedback, bug-fixes, suggestions, etc, welcome.
-
-
-<h3>4.2&nbsp; Overview</h3>
-First off, as for normal Valgrind use, you probably want to compile with
-debugging info (the <code>-g</code> flag).  But by contrast with normal
-Valgrind use, you probably <b>do</b> want to turn optimisation on, since you
-should profile your program as it will be normally run.
-
-The two steps are:
-<ol>
-  <li>Run your program with <code>valgrind --tool=cachegrind</code> in front of
-      the normal command line invocation.  When the program finishes,
-      Cachegrind will print summary cache statistics. It also collects
-      line-by-line information in a file
-      <code>cachegrind.out.<i>pid</i></code>, where <code><i>pid</i></code>
-      is the program's process id.
-      <p>
-      This step should be done every time you want to collect
-      information about a new program, a changed program, or about the
-      same program with different input.
-  </li><p>
-  <li>Generate a function-by-function summary, and possibly annotate
-      source files, using the supplied
-      <code>cg_annotate</code> program. Source files to annotate can be
-      specified manually, or manually on the command line, or
-      "interesting" source files can be annotated automatically with
-      the <code>--auto=yes</code> option.  You can annotate C/C++
-      files or assembly language files equally easily.
-      <p>
-      This step can be performed as many times as you like for each
-      Step 2.  You may want to do multiple annotations showing
-      different information each time.
-  </li><p>
-</ol>
-
-The steps are described in detail in the following sections.
-
-
-<h3>4.3&nbsp; Cache simulation specifics</h3>
-
-Cachegrind uses a simulation for a machine with a split L1 cache and a unified
-L2 cache.  This configuration is used for all (modern) x86-based machines we
-are aware of.  Old Cyrix CPUs had a unified I and D L1 cache, but they are
-ancient history now.<p>
-
-The more specific characteristics of the simulation are as follows.
-
-<ul>
-  <li>Write-allocate: when a write miss occurs, the block written to
-      is brought into the D1 cache.  Most modern caches have this
-      property.<p>
-  </li>
-  <p>
-  <li>Bit-selection hash function: the line(s) in the cache to which a
-      memory block maps is chosen by the middle bits M--(M+N-1) of the
-      byte address, where:
-      <ul>
-        <li>&nbsp;line size = 2^M bytes&nbsp;</li>
-        <li>(cache size / line size) = 2^N bytes</li>
-      </ul> 
-  </li>
-  <p>
-  <li>Inclusive L2 cache: the L2 cache replicates all the entries of
-      the L1 cache.  This is standard on Pentium chips, but AMD
-      Athlons use an exclusive L2 cache that only holds blocks evicted
-      from L1.  Ditto AMD Durons and most modern VIAs.</li>
-</ul>
-
-The cache configuration simulated (cache size, associativity and line size) is
-determined automagically using the CPUID instruction.  If you have an old
-machine that (a) doesn't support the CPUID instruction, or (b) supports it in
-an early incarnation that doesn't give any cache information, then Cachegrind
-will fall back to using a default configuration (that of a model 3/4 Athlon).
-Cachegrind will tell you if this happens.  You can manually specify one, two or
-all three levels (I1/D1/L2) of the cache from the command line using the
-<code>--I1</code>, <code>--D1</code> and <code>--L2</code> options.
-
-<p>
-Other noteworthy behaviour:
-
-<ul>
-  <li>References that straddle two cache lines are treated as follows:
-  <ul>
-    <li>If both blocks hit --&gt; counted as one hit</li>
-    <li>If one block hits, the other misses --&gt; counted as one miss</li>
-    <li>If both blocks miss --&gt; counted as one miss (not two)</li>
-  </ul>
-  </li>
-
-  <li>Instructions that modify a memory location (eg. <code>inc</code> and
-      <code>dec</code>) are counted as doing just a read, ie. a single data
-      reference.  This may seem strange, but since the write can never cause a
-      miss (the read guarantees the block is in the cache) it's not very
-      interesting.
-      <p>
-      Thus it measures not the number of times the data cache is accessed, but
-      the number of times a data cache miss could occur.<p>
-      </li>
-</ul>
-
-If you are interested in simulating a cache with different properties, it is
-not particularly hard to write your own cache simulator, or to modify the
-existing ones in <code>vg_cachesim_I1.c</code>, <code>vg_cachesim_D1.c</code>,
-<code>vg_cachesim_L2.c</code> and <code>vg_cachesim_gen.c</code>.  We'd be
-interested to hear from anyone who does.
-
-
-<a name="profile"></a>
-<h3>4.4&nbsp; Profiling programs</h3>
-
-To gather cache profiling information about the program <code>ls -l</code>,
-invoke Cachegrind like this:
-
-<blockquote><code>valgrind --tool=cachegrind ls -l</code></blockquote>
-
-The program will execute (slowly).  Upon completion, summary statistics
-that look like this will be printed:
-
-<pre>
-==31751== I   refs:      27,742,716
-==31751== I1  misses:           276
-==31751== L2  misses:           275
-==31751== I1  miss rate:        0.0%
-==31751== L2i miss rate:        0.0%
-==31751== 
-==31751== D   refs:      15,430,290  (10,955,517 rd + 4,474,773 wr)
-==31751== D1  misses:        41,185  (    21,905 rd +    19,280 wr)
-==31751== L2  misses:        23,085  (     3,987 rd +    19,098 wr)
-==31751== D1  miss rate:        0.2% (       0.1%   +       0.4%)
-==31751== L2d miss rate:        0.1% (       0.0%   +       0.4%)
-==31751== 
-==31751== L2 misses:         23,360  (     4,262 rd +    19,098 wr)
-==31751== L2 miss rate:         0.0% (       0.0%   +       0.4%)
-</pre>
-
-Cache accesses for instruction fetches are summarised first, giving the
-number of fetches made (this is the number of instructions executed, which
-can be useful to know in its own right), the number of I1 misses, and the
-number of L2 instruction (<code>L2i</code>) misses.
-<p>
-Cache accesses for data follow. The information is similar to that of the
-instruction fetches, except that the values are also shown split between reads
-and writes (note each row's <code>rd</code> and <code>wr</code> values add up
-to the row's total).
-<p>
-Combined instruction and data figures for the L2 cache follow that.
-
-
-<h3>4.5&nbsp; Output file</h3>
-
-As well as printing summary information, Cachegrind also writes
-line-by-line cache profiling information to a file named
-<code>cachegrind.out.<i>pid</i></code>.  This file is human-readable, but is
-best interpreted by the accompanying program <code>cg_annotate</code>,
-described in the next section.
-<p>
-Things to note about the <code>cachegrind.out.<i>pid</i></code> file:
-<ul>
-  <li>It is written every time Cachegrind
-      is run, and will overwrite any existing
-      <code>cachegrind.out.<i>pid</i></code> in the current directory (but
-      that won't happen very often because it takes some time for process ids
-      to be recycled).</li><p>
-  <li>It can be huge: <code>ls -l</code> generates a file of about
-      350KB.  Browsing a few files and web pages with a Konqueror
-      built with full debugging information generates a file
-      of around 15 MB.</li>
-</ul>
-
-Note that older versions of Cachegrind used a log file named
-<code>cachegrind.out</code> (i.e. no <code><i>.pid</i></code> suffix).
-The suffix serves two purposes.  Firstly, it means you don't have to
-rename old log files that you don't want to overwrite.  Secondly, and
-more importantly, it allows correct profiling with the
-<code>--trace-children=yes</code> option of programs that spawn child
-processes.
-
-
-<a name="profileflags"></a>
-<h3>4.6&nbsp; Cachegrind options</h3>
-
-Cache-simulation specific options are:
-
-<ul>
-  <li><code>--I1=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><br>
-      <code>--D1=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><br> 
-      <code>--L2=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><p> 
-      [default: uses CPUID for automagic cache configuration]<p>
-
-      Manually specifies the I1/D1/L2 cache configuration, where
-      <code>size</code> and <code>line_size</code> are measured in bytes.  The
-      three items must be comma-separated, but with no spaces, eg:
-
-      <blockquote>
-      <code>valgrind --tool=cachegrind --I1=65535,2,64</code>
-      </blockquote>
-
-      You can specify one, two or three of the I1/D1/L2 caches.  Any level not
-      manually specified will be simulated using the configuration found in the
-      normal way (via the CPUID instruction, or failing that, via defaults).
-</ul>
-
-  
-<a name="annotate"></a>
-<h3>4.7&nbsp; Annotating C/C++ programs</h3>
-
-Before using <code>cg_annotate</code>, it is worth widening your
-window to be at least 120-characters wide if possible, as the output
-lines can be quite long.
-<p>
-To get a function-by-function summary, run <code>cg_annotate
---<i>pid</i></code> in a directory containing a
-<code>cachegrind.out.<i>pid</i></code> file.  The <code>--<i>pid</i></code>
-is required so that <code>cg_annotate</code> knows which log file to use when
-several are present.
-<p>
-The output looks like this:
-
-<pre>
---------------------------------------------------------------------------------
-I1 cache:              65536 B, 64 B, 2-way associative
-D1 cache:              65536 B, 64 B, 2-way associative
-L2 cache:              262144 B, 64 B, 8-way associative
-Command:               concord vg_to_ucode.c
-Events recorded:       Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
-Events shown:          Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
-Event sort order:      Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
-Threshold:             99%
-Chosen for annotation:
-Auto-annotation:       on
-
---------------------------------------------------------------------------------
-Ir         I1mr I2mr Dr         D1mr   D2mr  Dw        D1mw   D2mw
---------------------------------------------------------------------------------
-27,742,716  276  275 10,955,517 21,905 3,987 4,474,773 19,280 19,098  PROGRAM TOTALS
-
---------------------------------------------------------------------------------
-Ir        I1mr I2mr Dr        D1mr  D2mr  Dw        D1mw   D2mw    file:function
---------------------------------------------------------------------------------
-8,821,482    5    5 2,242,702 1,621    73 1,794,230      0      0  getc.c:_IO_getc
-5,222,023    4    4 2,276,334    16    12   875,959      1      1  concord.c:get_word
-2,649,248    2    2 1,344,810 7,326 1,385         .      .      .  vg_main.c:strcmp
-2,521,927    2    2   591,215     0     0   179,398      0      0  concord.c:hash
-2,242,740    2    2 1,046,612   568    22   448,548      0      0  ctype.c:tolower
-1,496,937    4    4   630,874 9,000 1,400   279,388      0      0  concord.c:insert
-  897,991   51   51   897,831    95    30        62      1      1  ???:???
-  598,068    1    1   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__flockfile
-  598,068    0    0   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__funlockfile
-  598,024    4    4   213,580    35    16   149,506      0      0  vg_clientmalloc.c:malloc
-  446,587    1    1   215,973 2,167   430   129,948 14,057 13,957  concord.c:add_existing
-  341,760    2    2   128,160     0     0   128,160      0      0  vg_clientmalloc.c:vg_trap_here_WRAPPER
-  320,782    4    4   150,711   276     0    56,027     53     53  concord.c:init_hash_table
-  298,998    1    1   106,785     0     0    64,071      1      1  concord.c:create
-  149,518    0    0   149,516     0     0         1      0      0  ???:tolower@@GLIBC_2.0
-  149,518    0    0   149,516     0     0         1      0      0  ???:fgetc@@GLIBC_2.0
-   95,983    4    4    38,031     0     0    34,409  3,152  3,150  concord.c:new_word_node
-   85,440    0    0    42,720     0     0    21,360      0      0  vg_clientmalloc.c:vg_bogus_epilogue
-</pre>
-
-First up is a summary of the annotation options:
-                    
-<ul>
-  <li>I1 cache, D1 cache, L2 cache: cache configuration.  So you know the
-      configuration with which these results were obtained.</li><p>
-
-  <li>Command: the command line invocation of the program under
-      examination.</li><p>
-
-  <li>Events recorded: event abbreviations are:<p>
-  <ul>
-    <li><code>Ir  </code>:  I cache reads (ie. instructions executed)</li>
-    <li><code>I1mr</code>: I1 cache read misses</li>
-    <li><code>I2mr</code>: L2 cache instruction read misses</li>
-    <li><code>Dr  </code>:  D cache reads (ie. memory reads)</li>
-    <li><code>D1mr</code>: D1 cache read misses</li>
-    <li><code>D2mr</code>: L2 cache data read misses</li>
-    <li><code>Dw  </code>:  D cache writes (ie. memory writes)</li>
-    <li><code>D1mw</code>: D1 cache write misses</li>
-    <li><code>D2mw</code>: L2 cache data write misses</li>
-  </ul><p>
-      Note that D1 total accesses is given by <code>D1mr</code> +
-      <code>D1mw</code>, and that L2 total accesses is given by
-      <code>I2mr</code> + <code>D2mr</code> + <code>D2mw</code>.</li><p>
-
-  <li>Events shown: the events shown (a subset of events gathered).  This can
-      be adjusted with the <code>--show</code> option.</li><p>
-
-  <li>Event sort order: the sort order in which functions are shown.  For
-      example, in this case the functions are sorted from highest
-      <code>Ir</code> counts to lowest.  If two functions have identical
-      <code>Ir</code> counts, they will then be sorted by <code>I1mr</code>
-      counts, and so on.  This order can be adjusted with the
-      <code>--sort</code> option.<p>
-
-      Note that this dictates the order the functions appear.  It is <b>not</b>
-      the order in which the columns appear;  that is dictated by the "events
-      shown" line (and can be changed with the <code>--show</code> option).
-      </li><p>
-
-  <li>Threshold: <code>cg_annotate</code> by default omits functions
-      that cause very low numbers of misses to avoid drowning you in
-      information.  In this case, cg_annotate shows summaries the
-      functions that account for 99% of the <code>Ir</code> counts;
-      <code>Ir</code> is chosen as the threshold event since it is the
-      primary sort event.  The threshold can be adjusted with the
-      <code>--threshold</code> option.</li><p>
-
-  <li>Chosen for annotation: names of files specified manually for annotation; 
-      in this case none.</li><p>
-
-  <li>Auto-annotation: whether auto-annotation was requested via the 
-      <code>--auto=yes</code> option. In this case no.</li><p>
-</ul>
-
-Then follows summary statistics for the whole program. These are similar
-to the summary provided when running <code>valgrind --tool=cachegrind</code>.<p>
-  
-Then follows function-by-function statistics. Each function is
-identified by a <code>file_name:function_name</code> pair. If a column
-contains only a dot it means the function never performs
-that event (eg. the third row shows that <code>strcmp()</code>
-contains no instructions that write to memory). The name
-<code>???</code> is used if the the file name and/or function name
-could not be determined from debugging information. If most of the
-entries have the form <code>???:???</code> the program probably wasn't
-compiled with <code>-g</code>.  If any code was invalidated (either due to
-self-modifying code or unloading of shared objects) its counts are aggregated
-into a single cost centre written as <code>(discarded):(discarded)</code>.<p>
-
-It is worth noting that functions will come from three types of source files:
-<ol>
-  <li> From the profiled program (<code>concord.c</code> in this example).</li>
-  <li>From libraries (eg. <code>getc.c</code>)</li>
-  <li>From Valgrind's implementation of some libc functions (eg.
-      <code>vg_clientmalloc.c:malloc</code>).  These are recognisable because
-      the filename begins with <code>vg_</code>, and is probably one of
-      <code>vg_main.c</code>, <code>vg_clientmalloc.c</code> or
-      <code>vg_mylibc.c</code>.
-  </li>
-</ol>
-
-There are two ways to annotate source files -- by choosing them
-manually, or with the <code>--auto=yes</code> option. To do it
-manually, just specify the filenames as arguments to
-<code>cg_annotate</code>. For example, the output from running
-<code>cg_annotate concord.c</code> for our example produces the same
-output as above followed by an annotated version of
-<code>concord.c</code>, a section of which looks like:
-
-<pre>
---------------------------------------------------------------------------------
--- User-annotated source: concord.c
---------------------------------------------------------------------------------
-Ir        I1mr I2mr Dr      D1mr  D2mr  Dw      D1mw   D2mw
-
-[snip]
-
-        .    .    .       .     .     .       .      .      .  void init_hash_table(char *file_name, Word_Node *table[])
-        3    1    1       .     .     .       1      0      0  {
-        .    .    .       .     .     .       .      .      .      FILE *file_ptr;
-        .    .    .       .     .     .       .      .      .      Word_Info *data;
-        1    0    0       .     .     .       1      1      1      int line = 1, i;
-        .    .    .       .     .     .       .      .      .
-        5    0    0       .     .     .       3      0      0      data = (Word_Info *) create(sizeof(Word_Info));
-        .    .    .       .     .     .       .      .      .
-    4,991    0    0   1,995     0     0     998      0      0      for (i = 0; i < TABLE_SIZE; i++)
-    3,988    1    1   1,994     0     0     997     53     52          table[i] = NULL;
-        .    .    .       .     .     .       .      .      .
-        .    .    .       .     .     .       .      .      .      /* Open file, check it. */
-        6    0    0       1     0     0       4      0      0      file_ptr = fopen(file_name, "r");
-        2    0    0       1     0     0       .      .      .      if (!(file_ptr)) {
-        .    .    .       .     .     .       .      .      .          fprintf(stderr, "Couldn't open '%s'.\n", file_name);
-        1    1    1       .     .     .       .      .      .          exit(EXIT_FAILURE);
-        .    .    .       .     .     .       .      .      .      }
-        .    .    .       .     .     .       .      .      .
-  165,062    1    1  73,360     0     0  91,700      0      0      while ((line = get_word(data, line, file_ptr)) != EOF)
-  146,712    0    0  73,356     0     0  73,356      0      0          insert(data->word, data->line, table);
-        .    .    .       .     .     .       .      .      .
-        4    0    0       1     0     0       2      0      0      free(data);
-        4    0    0       1     0     0       2      0      0      fclose(file_ptr);
-        3    0    0       2     0     0       .      .      .  }
-</pre>
-
-(Although column widths are automatically minimised, a wide terminal is clearly
-useful.)<p>
-  
-Each source file is clearly marked (<code>User-annotated source</code>) as
-having been chosen manually for annotation.  If the file was found in one of
-the directories specified with the <code>-I</code>/<code>--include</code>
-option, the directory and file are both given.<p>
-
-Each line is annotated with its event counts.  Events not applicable for a line
-are represented by a `.';  this is useful for distinguishing between an event
-which cannot happen, and one which can but did not.<p> 
-
-Sometimes only a small section of a source file is executed.  To minimise
-uninteresting output, Valgrind only shows annotated lines and lines within a
-small distance of annotated lines.  Gaps are marked with the line numbers so
-you know which part of a file the shown code comes from, eg:
-
-<pre>
-(figures and code for line 704)
--- line 704 ----------------------------------------
--- line 878 ----------------------------------------
-(figures and code for line 878)
-</pre>
-
-The amount of context to show around annotated lines is controlled by the
-<code>--context</code> option.<p>
-
-To get automatic annotation, run <code>cg_annotate --auto=yes</code>.
-cg_annotate will automatically annotate every source file it can find that is
-mentioned in the function-by-function summary.  Therefore, the files chosen for
-auto-annotation  are affected by the <code>--sort</code> and
-<code>--threshold</code> options.  Each source file is clearly marked
-(<code>Auto-annotated source</code>) as being chosen automatically.  Any files
-that could not be found are mentioned at the end of the output, eg:    
-
-<pre>
---------------------------------------------------------------------------------
-The following files chosen for auto-annotation could not be found:
---------------------------------------------------------------------------------
-  getc.c
-  ctype.c
-  ../sysdeps/generic/lockfile.c
-</pre>
-
-This is quite common for library files, since libraries are usually compiled
-with debugging information, but the source files are often not present on a
-system.  If a file is chosen for annotation <b>both</b> manually and
-automatically, it is marked as <code>User-annotated source</code>.
-
-Use the <code>-I/--include</code> option to tell Valgrind where to look for
-source files if the filenames found from the debugging information aren't
-specific enough.
-
-Beware that cg_annotate can take some time to digest large
-<code>cachegrind.out.<i>pid</i></code> files, e.g. 30 seconds or more.  Also
-beware that auto-annotation can produce a lot of output if your program is
-large!
-
-
-<h3>4.8&nbsp; Annotating assembler programs</h3>
-
-Valgrind can annotate assembler programs too, or annotate the
-assembler generated for your C program.  Sometimes this is useful for
-understanding what is really happening when an interesting line of C
-code is translated into multiple instructions.<p>
-
-To do this, you just need to assemble your <code>.s</code> files with
-assembler-level debug information.  gcc doesn't do this, but you can
-use the GNU assembler with the <code>--gstabs</code> option to
-generate object files with this information, eg:
-
-<blockquote><code>as --gstabs foo.s</code></blockquote>
-
-You can then profile and annotate source files in the same way as for C/C++
-programs.
-
-
-<h3>4.9&nbsp; <code>cg_annotate</code> options</h3>
-<ul>
-  <li><code>--<i>pid</i></code></li><p>
-
-      Indicates which <code>cachegrind.out.<i>pid</i></code> file to read.
-      Not actually an option -- it is required.
-    
-  <li><code>-h, --help</code></li><p>
-  <li><code>-v, --version</code><p>
-
-      Help and version, as usual.</li>
-
-  <li><code>--sort=A,B,C</code> [default: order in 
-      <code>cachegrind.out.<i>pid</i></code>]<p>
-      Specifies the events upon which the sorting of the function-by-function
-      entries will be based.  Useful if you want to concentrate on eg. I cache
-      misses (<code>--sort=I1mr,I2mr</code>), or D cache misses
-      (<code>--sort=D1mr,D2mr</code>), or L2 misses
-      (<code>--sort=D2mr,I2mr</code>).</li><p>
-
-  <li><code>--show=A,B,C</code> [default: all, using order in
-      <code>cachegrind.out.<i>pid</i></code>]<p>
-      Specifies which events to show (and the column order). Default is to use
-      all present in the <code>cachegrind.out.<i>pid</i></code> file (and use
-      the order in the file).</li><p>
-
-  <li><code>--threshold=X</code> [default: 99%] <p>
-      Sets the threshold for the function-by-function summary.  Functions are
-      shown that account for more than X% of the primary sort event.  If
-      auto-annotating, also affects which files are annotated.
-      
-      Note: thresholds can be set for more than one of the events by appending
-      a colon and a number to any of the events given to the <code>--sort</code>
-      option (no spaces, though).  E.g. if you want to see the functions that cover
-      99% of L2 read misses and 99% of L2 write misses, use this option:
-      
-      <blockquote><code>--sort=D2mr:99,D2mw:99</code></blockquote>
-      </li><p>
-
-  <li><code>--auto=no</code> [default]<br>
-      <code>--auto=yes</code> <p>
-      When enabled, automatically annotates every file mentioned in the
-      function-by-function summary that can be found.  Also gives a list of
-      those that couldn't be found.
-
-  <li><code>--context=N</code> [default: 8]<p>
-      Print N lines of context before and after each annotated line.  Avoids
-      printing large sections of source files that were not executed.  Use a 
-      large number (eg. 10,000) to show all source lines.
-      </li><p>
-
-  <li><code>-I=&lt;dir&gt;, --include=&lt;dir&gt;</code> 
-      [default: empty string]<p>
-      Adds a directory to the list in which to search for files.  Multiple
-      -I/--include options can be given to add multiple directories.
-</ul>
-  
-
-<h3>4.10&nbsp; Warnings</h3>
-There are a couple of situations in which cg_annotate issues warnings.
-
-<ul>
-  <li>If a source file is more recent than the
-      <code>cachegrind.out.<i>pid</i></code> file.  This is because the
-      information in <code>cachegrind.out.<i>pid</i></code> is only recorded
-      with line numbers, so if the line numbers change at all in the source
-      (eg.  lines added, deleted, swapped), any annotations will be
-      incorrect.<p>
-
-  <li>If information is recorded about line numbers past the end of a file.
-      This can be caused by the above problem, ie. shortening the source file
-      while using an old <code>cachegrind.out.<i>pid</i></code> file.  If this
-      happens, the figures for the bogus lines are printed anyway (clearly
-      marked as bogus) in case they are important.</li><p>
-</ul>
-
-
-<h3>4.11&nbsp; Things to watch out for</h3>
-Some odd things that can occur during annotation:
-
-<ul>
-  <li>If annotating at the assembler level, you might see something like this:
-
-      <pre>
-      1    0    0  .    .    .  .    .    .          leal -12(%ebp),%eax
-      1    0    0  .    .    .  1    0    0          movl %eax,84(%ebx)
-      2    0    0  0    0    0  1    0    0          movl $1,-20(%ebp)
-      .    .    .  .    .    .  .    .    .          .align 4,0x90
-      1    0    0  .    .    .  .    .    .          movl $.LnrB,%eax
-      1    0    0  .    .    .  1    0    0          movl %eax,-16(%ebp)
-      </pre>
-
-      How can the third instruction be executed twice when the others are
-      executed only once?  As it turns out, it isn't.  Here's a dump of the
-      executable, using <code>objdump -d</code>:
-
-      <pre>
-      8048f25:       8d 45 f4                lea    0xfffffff4(%ebp),%eax
-      8048f28:       89 43 54                mov    %eax,0x54(%ebx)
-      8048f2b:       c7 45 ec 01 00 00 00    movl   $0x1,0xffffffec(%ebp)
-      8048f32:       89 f6                   mov    %esi,%esi
-      8048f34:       b8 08 8b 07 08          mov    $0x8078b08,%eax
-      8048f39:       89 45 f0                mov    %eax,0xfffffff0(%ebp)
-      </pre>
-
-      Notice the extra <code>mov %esi,%esi</code> instruction.  Where did this
-      come from?  The GNU assembler inserted it to serve as the two bytes of
-      padding needed to align the <code>movl $.LnrB,%eax</code> instruction on
-      a four-byte boundary, but pretended it didn't exist when adding debug
-      information.  Thus when Valgrind reads the debug info it thinks that the
-      <code>movl $0x1,0xffffffec(%ebp)</code> instruction covers the address
-      range 0x8048f2b--0x8048f33 by itself, and attributes the counts for the
-      <code>mov %esi,%esi</code> to it.<p>
-  </li>
-
-  <li>Inlined functions can cause strange results in the function-by-function
-      summary.  If a function <code>inline_me()</code> is defined in
-      <code>foo.h</code> and inlined in the functions <code>f1()</code>,
-      <code>f2()</code> and <code>f3()</code> in <code>bar.c</code>, there will
-      not be a <code>foo.h:inline_me()</code> function entry.  Instead, there
-      will be separate function entries for each inlining site, ie.
-      <code>foo.h:f1()</code>, <code>foo.h:f2()</code> and
-      <code>foo.h:f3()</code>.  To find the total counts for
-      <code>foo.h:inline_me()</code>, add up the counts from each entry.<p>
-
-      The reason for this is that although the debug info output by gcc
-      indicates the switch from <code>bar.c</code> to <code>foo.h</code>, it
-      doesn't indicate the name of the function in <code>foo.h</code>, so
-      Valgrind keeps using the old one.<p>
-
-  <li>Sometimes, the same filename might be represented with a relative name
-      and with an absolute name in different parts of the debug info, eg:
-      <code>/home/user/proj/proj.h</code> and <code>../proj.h</code>.  In this
-      case, if you use auto-annotation, the file will be annotated twice with
-      the counts split between the two.<p>
-  </li>
-
-  <li>Files with more than 65,535 lines cause difficulties for the stabs debug
-      info reader.  This is because the line number in the <code>struct
-      nlist</code> defined in <code>a.out.h</code> under Linux is only a 16-bit
-      value.  Valgrind can handle some files with more than 65,535 lines
-      correctly by making some guesses to identify line number overflows (see
-      the sketch after this list).  But some cases are beyond it, in which
-      case you'll get a warning message explaining that annotations for the
-      file might be incorrect.<p>
-  </li>
-
-  <li>If you compile some files with <code>-g</code> and some without, some
-      events that take place in a file without debug info could be attributed
-      to the last line of a file with debug info (whichever one gets placed
-      before the non-debug-info file in the executable).<p>
-  </li>
-</ul>
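-
-Regarding the 65,535-line limitation mentioned in the list above, the kind
-of guessing involved can be illustrated with a small self-contained sketch.
-The heuristic here -- treating a large backwards jump in the 16-bit line
-field as a wrap -- is an assumption made for this example, not a description
-of the real stabs reader:
-
-<pre>
-#include &lt;stdio.h>
-
-int main(void)
-{
-   /* Made-up 16-bit line numbers as a stabs reader might see them for one
-      file: the real lines are 65530, 65534, 65538 and 65546, but the
-      16-bit field wraps at 65536. */
-   unsigned raw[4] = { 65530, 65534, 2, 10 };
-   unsigned wraps  = 0;      /* how many times we think it has wrapped */
-   unsigned prev   = 0;
-
-   for (int i = 0; i &lt; 4; i++) {
-      if (raw[i] + 10000 &lt; prev)    /* big jump backwards: assume a wrap */
-         wraps++;
-      prev = raw[i];
-      printf("guessed line %u\n", 65536 * wraps + raw[i]);
-   }
-   return 0;
-}
-</pre>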
-
-This list looks long, but these cases should be fairly rare.<p>
-
-Note: stabs is not an easy format to read.  If you come across bizarre
-annotations that look like they might be caused by a bug in the stabs reader,
-please let us know.<p>
-
-
-<h3>4.12&nbsp; Accuracy</h3>
-Valgrind's cache profiling has a number of shortcomings:
-
-<ul>
-  <li>It doesn't account for kernel activity -- the effect of system calls on
-      the cache contents is ignored.</li><p>
-
-  <li>It doesn't account for other process activity (although this is probably
-      desirable when considering a single program).</li><p>
-
-  <li>It doesn't account for virtual-to-physical address mappings;  hence the
-      entire simulation is not a true representation of what's happening in the
-      cache.</li><p>
-
-  <li>It doesn't account for cache misses not visible at the instruction level,
-      eg. those arising from TLB misses, or speculative execution.</li><p>
-
-  <li>Valgrind's custom threads implementation will schedule threads
-      differently to the standard one.  This could warp the results for
-      threaded programs.
-      </li><p>
-
-  <li>The instructions <code>bts</code>, <code>btr</code> and <code>btc</code>
-      will incorrectly be counted as doing a data read if both the arguments
-      are registers, eg:
-
-      <blockquote><code>btsl %eax, %edx</code></blockquote>
-
-      This should only happen rarely.
-      </li><p>
-
-  <li>FPU instructions with data sizes of 28 and 108 bytes (e.g.
-      <code>fsave</code>) are treated as though they only access 16 bytes.
-      These instructions seem to be rare so hopefully this won't affect
-      accuracy much.
-      </li><p>
-</ul>
-
-Another thing worth noting is that results are very sensitive.  Changing the
-size of the <code>valgrind.so</code> file, the size of the program being
-profiled, or even the length of its name can perturb the results.  Variations
-will be small, but don't expect perfectly repeatable results if your program
-changes at all.<p>
-
-While these factors mean you shouldn't trust the results to be super-accurate,
-hopefully they should be close enough to be useful.<p>
-
-
-<h3>4.13&nbsp; Todo</h3>
-<ul>
-  <li>Program start-up/shut-down calls a lot of functions that aren't
-      interesting and just complicate the output.  Would be nice to exclude
-      these somehow.</li>
-  <p>
-</ul> 
-</body>
-</html>
-
diff --git a/cachegrind/docs/cg_techdocs.html b/cachegrind/docs/cg_techdocs.html
deleted file mode 100644
index 0ac5b67..0000000
--- a/cachegrind/docs/cg_techdocs.html
+++ /dev/null
@@ -1,458 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>How Cachegrind works</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="cg-techdocs">&nbsp;</a>
-<h1 align=center>How Cachegrind works</h1>
-
-<center>
-Detailed technical notes for hackers, maintainers and the
-overly-curious<br>
-<p>
-<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-<a
-href="http://valgrind.kde.org">http://valgrind.kde.org</a><br>
-<p>
-Copyright &copy; 2001-2003 Nick Nethercote
-<p>
-</center>
-
-<p>
-
-
-
-
-<hr width="100%">
-
-<h2>Cache profiling</h2>
-Valgrind is a very nice platform for doing cache profiling and other kinds of
-simulation, because it converts horrible x86 instructions into nice clean
-RISC-like UCode.  For example, for cache profiling we are interested in
-instructions that read and write memory;  in UCode there are only four
-instructions that do this:  <code>LOAD</code>, <code>STORE</code>,
-<code>FPU_R</code> and <code>FPU_W</code>.  By contrast, because of the x86
-addressing modes, almost every instruction can read or write memory.<p>
-
-Most of the cache profiling machinery is in the file
-<code>vg_cachesim.c</code>.<p>
-
-These notes are a somewhat haphazard guide to how Valgrind's cache profiling
-works.<p>
-
-<h3>Cost centres</h3>
-Valgrind gathers cache profiling information about every instruction executed,
-individually.  Each instruction has a <b>cost centre</b> associated with it.
-There are two kinds of cost centre: one for instructions that don't reference
-memory (<code>iCC</code>), and one for instructions that do
-(<code>idCC</code>):
-
-<pre>
-typedef struct _CC {
-   ULong a;
-   ULong m1;
-   ULong m2;
-} CC;
-
-typedef struct _iCC {
-   /* word 1 */
-   UChar tag;
-   UChar instr_size;
-
-   /* words 2+ */
-   Addr instr_addr;
-   CC I;
-} iCC;
-   
-typedef struct _idCC {
-   /* word 1 */
-   UChar tag;
-   UChar instr_size;
-   UChar data_size;
-
-   /* words 2+ */
-   Addr instr_addr;
-   CC I; 
-   CC D; 
-} idCC; 
-</pre>
-
-Each <code>CC</code> has three fields <code>a</code>, <code>m1</code>,
-<code>m2</code> for recording references, level 1 misses and level 2 misses.
-Each of these is a 64-bit <code>ULong</code> -- the numbers can get very large,
-ie. greater than the 4.2 billion allowed by a 32-bit unsigned int.<p>
-
-An <code>iCC</code> has one <code>CC</code> for instruction cache accesses.  An
-<code>idCC</code> has two, one for instruction cache accesses, and one for data
-cache accesses.<p>
-
-The <code>iCC</code> and <code>idCC</code> structs also store unchanging
-information about the instruction:
-<ul>
-  <li>An instruction-type identification tag (explained below)</li><p>
-  <li>Instruction size</li><p>
-  <li>Data reference size (<code>idCC</code> only)</li><p>
-  <li>Instruction address</li><p>
-</ul>
-
-Note that data address is not one of the fields for <code>idCC</code>.  This is
-because for many memory-referencing instructions the data address can change
-each time it's executed (eg. if it uses register-offset addressing).  We have
-to give this item to the cache simulation in a different way (see
-Instrumentation section below). Some memory-referencing instructions do always
-reference the same address, but we don't try to treat them specially in order to
-keep things simple.<p>
-
-Also note that there is only room for recording info about one data cache
-access in an <code>idCC</code>.  So what about instructions that do a read then
-a write, such as:
-
-<blockquote><code>inc %(esi)</code></blockquote>
-
-In a write-allocate cache, as simulated by Valgrind, the write cannot miss,
-since it immediately follows the read which will drag the block into the cache
-if it's not already there.  So the write access isn't really interesting, and
-Valgrind doesn't record it.  This means that Valgrind doesn't measure
-memory references, but rather memory references that could miss in the cache.
-This behaviour is the same as that used by the AMD Athlon hardware counters.
-It also has the benefit of simplifying the implementation -- instructions that
-read and write memory can be treated like instructions that read memory.<p>
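-
-The following self-contained sketch illustrates the two points just made: what
-a <code>CC</code>'s three counters mean, and why a read-then-write instruction
-is given to the simulator as a single data read.  The names
-(<code>Outcome</code>, <code>d_ref</code>) are invented for this example and
-are not Valgrind's:
-
-<pre>
-#include &lt;stdio.h>
-
-typedef unsigned long long ULong;
-typedef struct { ULong a; ULong m1; ULong m2; } CC;
-
-/* Invented type: what the cache simulation reports for one reference. */
-typedef enum { HIT, MISS_L1, MISS_L1_L2 } Outcome;
-
-static CC D;                           /* the data CC of one instruction */
-
-static void d_ref(Outcome out)
-{
-   D.a++;                              /* every reference counts         */
-   if (out != HIT)        D.m1++;      /* missed in the level 1 cache    */
-   if (out == MISS_L1_L2) D.m2++;      /* ... and also missed in the L2  */
-}
-
-int main(void)
-{
-   /* For 'inc (%esi)' only the read is simulated: in a write-allocate
-      cache the write that immediately follows it cannot miss, so it is
-      never given to the simulator at all. */
-   d_ref(MISS_L1_L2);
-
-   printf("D.a=%llu  D.m1=%llu  D.m2=%llu\n", D.a, D.m1, D.m2);
-   return 0;
-}
-</pre>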
-
-<h3>Storing cost-centres</h3>
-Cost centres are stored in a way that makes them very cheap to look up, which is
-important since one is looked up for every original x86 instruction
-executed.<p>
-
-Valgrind does JIT translations at the basic block level, and cost centres are
-also set up and stored at the basic block level.  By doing things carefully, we
-store all the cost centres for a basic block in a contiguous array, and lookup
-comes almost for free.<p>
-
-Consider this part of a basic block (for exposition purposes, pretend it's an
-entire basic block):
-
-<pre>
-movl $0x0,%eax
-movl $0x99, -4(%ebp)
-</pre>
-
-The translation to UCode looks like this:
-                
-<pre>
-MOVL      $0x0, t20
-PUTL      t20, %EAX
-INCEIPo   $5
-
-LEA1L     -4(t4), t14
-MOVL      $0x99, t18
-STL       t18, (t14)
-INCEIPo   $7
-</pre>
-
-The first step is to allocate the cost centres.  This requires a preliminary
-pass to count how many x86 instructions were in the basic block, and their
-types (and thus sizes).  UCode translations for single x86 instructions are
-delimited by the <code>INCEIPo</code> instruction, the argument of which gives
-the byte size of the instruction (note that lazy INCEIP updating is turned off
-to allow this).<p>
-
-We can tell if an x86 instruction references memory by looking for
-<code>LDL</code> and <code>STL</code> UCode instructions, and thus what kind of
-cost centre is required.  From this we can determine how many cost centres we
-need for the basic block, and their sizes.  We can then allocate them in a
-single array.<p>
-
-Consider the example code above.  After the preliminary pass, we know we need
-two cost centres, one <code>iCC</code> and one <code>idCC</code>.  So we
-allocate an array to store these which looks like this:
-
-<pre>
-|(uninit)|      tag         (1 byte)
-|(uninit)|      instr_size  (1 byte)
-|(uninit)|      (padding)   (2 bytes)
-|(uninit)|      instr_addr  (4 bytes)
-|(uninit)|      I.a         (8 bytes)
-|(uninit)|      I.m1        (8 bytes)
-|(uninit)|      I.m2        (8 bytes)
-
-|(uninit)|      tag         (1 byte)
-|(uninit)|      instr_size  (1 byte)
-|(uninit)|      data_size   (1 byte)
-|(uninit)|      (padding)   (1 byte)
-|(uninit)|      instr_addr  (4 bytes)
-|(uninit)|      I.a         (8 bytes)
-|(uninit)|      I.m1        (8 bytes)
-|(uninit)|      I.m2        (8 bytes)
-|(uninit)|      D.a         (8 bytes)
-|(uninit)|      D.m1        (8 bytes)
-|(uninit)|      D.m2        (8 bytes)
-</pre>
-
-(We can see now why we need tags to distinguish between the two types of cost
-centres.)<p>
-
-We also record the size of the array.  We look up the debug info of the first
-instruction in the basic block, and then stick the array into a table indexed
-by filename and function name.  This makes it easy to dump the information
-quickly to file at the end.<p>
-
-<h3>Instrumentation</h3>
-The instrumentation pass has two main jobs:
-
-<ol>
-  <li>Fill in the gaps in the allocated cost centres.</li><p>
-  <li>Add UCode to call the cache simulator for each instruction.</li><p>
-</ol>
-
-The instrumentation pass steps through the UCode and the cost centres in
-tandem.  As each original x86 instruction's UCode is processed, the appropriate
-gaps in the instruction's cost centre are filled in, for example:
-
-<pre>
-|INSTR_CC|      tag         (1 byte)
-|5       |      instr_size  (1 byte)
-|(uninit)|      (padding)   (2 bytes)
-|i_addr1 |      instr_addr  (4 bytes)
-|0       |      I.a         (8 bytes)
-|0       |      I.m1        (8 bytes)
-|0       |      I.m2        (8 bytes)
-
-|WRITE_CC|      tag         (1 byte)
-|7       |      instr_size  (1 byte)
-|4       |      data_size   (1 byte)
-|(uninit)|      (padding)   (1 byte)
-|i_addr2 |      instr_addr  (4 bytes)
-|0       |      I.a         (8 bytes)
-|0       |      I.m1        (8 bytes)
-|0       |      I.m2        (8 bytes)
-|0       |      D.a         (8 bytes)
-|0       |      D.m1        (8 bytes)
-|0       |      D.m2        (8 bytes)
-</pre>
-
-(Note that this step is not performed if a basic block is re-translated;  see
-<a href="#retranslations">here</a> for more information.)<p>
-
-GCC inserts two bytes of padding after the <code>instr_size</code> field so that
-the <code>instr_addr</code> field is word-aligned.<p>
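-
-If you want to see this padding for yourself, the following self-contained
-check (which simply re-declares the structs with local typedefs; it is not
-part of Valgrind) typically prints 4 and 32 with gcc on x86:
-
-<pre>
-#include &lt;stdio.h>
-#include &lt;stddef.h>
-
-typedef unsigned char      UChar;
-typedef unsigned long long ULong;
-typedef unsigned int       Addr;     /* 32-bit addresses, as on x86 */
-
-typedef struct { ULong a; ULong m1; ULong m2; } CC;
-
-typedef struct {
-   UChar tag;
-   UChar instr_size;
-   Addr  instr_addr;
-   CC    I;
-} iCC;
-
-int main(void)
-{
-   /* Two bytes of padding are inserted after instr_size so that
-      instr_addr is word-aligned, exactly as in the layout shown above. */
-   printf("offsetof(iCC, instr_addr) = %u\n",
-          (unsigned)offsetof(iCC, instr_addr));
-   printf("sizeof(iCC)               = %u\n", (unsigned)sizeof(iCC));
-   return 0;
-}
-</pre>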
-
-The instrumentation added to call the cache simulation function looks like this
-(instrumentation is indented to distinguish it from the original UCode):
-
-<pre>
-MOVL      $0x0, t20
-PUTL      t20, %EAX
-  PUSHL     %eax
-  PUSHL     %ecx
-  PUSHL     %edx
-  MOVL      $0x4091F8A4, t46  # address of 1st CC
-  PUSHL     t46
-  CALLMo    $0x12             # cachesim function (no data ref)
-  CLEARo    $0x4
-  POPL      %edx
-  POPL      %ecx
-  POPL      %eax
-INCEIPo   $5
-
-LEA1L     -4(t4), t14
-MOVL      $0x99, t18
-  MOVL      t14, t42
-STL       t18, (t14)
-  PUSHL     %eax
-  PUSHL     %ecx
-  PUSHL     %edx
-  PUSHL     t42
-  MOVL      $0x4091F8C4, t44  # address of 2nd CC
-  PUSHL     t44
-  CALLMo    $0x13             # cachesim function (with data ref)
-  CLEARo    $0x8
-  POPL      %edx
-  POPL      %ecx
-  POPL      %eax
-INCEIPo   $7
-</pre>
-
-Consider the first instruction's UCode.  Each call is surrounded by three
-<code>PUSHL</code> and <code>POPL</code> instructions to save and restore the
-caller-save registers.  Then the address of the instruction's cost centre is
-pushed onto the stack, to be the first argument to the cache simulation
-function.  The address is known at this point because we are doing a
-simultaneous pass through the cost centre array.  This means the cost centre
-lookup for each instruction is almost free (just the cost of pushing an
-argument for a function call).  Then the call to the cache simulation function
-for non-memory-reference instructions is made (note that the
-<code>CALLMo</code> UInstruction takes an offset into a table of predefined
-functions;  it is not an absolute address), and the single argument is
-<code>CLEAR</code>ed from the stack.<p>
-
-The second instruction's UCode is similar.  The only difference is that, as
-mentioned before, we have to pass the address of the data item referenced to
-the cache simulation function too.  This explains the <code>MOVL t14,
-t42</code> and <code>PUSHL t42</code> UInstructions.  (Note that the seemingly
-redundant <code>MOV</code>ing will probably be optimised away during register
-allocation.)<p>
-
-Note that instead of storing unchanging information about each instruction
-(instruction size, data size, etc) in its cost centre, we could have passed in
-these arguments to the simulation function.  But this would slow the calls down
-(two or three extra arguments pushed onto the stack).  Also it would bloat the
-UCode instrumentation by amounts similar to the space required for them in the
-cost centre;  bloated UCode would also fill the translation cache more quickly,
-requiring more translations for large programs and slowing them down more.<p>
-
-<a name="retranslations"></a>
-<h3>Handling basic block retranslations</h3>
-The above description ignores one complication.  Valgrind has a limited size
-cache for basic block translations;  if it fills up, old translations are
-discarded.  If a discarded basic block is executed again, it must be
-re-translated.<p>
-
-However, we can't use this approach for profiling -- we can't throw away cost
-centres for instructions in the middle of execution!  So when a basic block is
-translated, we first look for its cost centre array in the hash table.  If
-there is no cost centre array, it must be the first translation, so we proceed
-as described above.  But if there is a cost centre array already, it must be a
-retranslation.  In this case, we skip the cost centre allocation and
-initialisation steps, but still do the UCode instrumentation step.<p>
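-
-The following fragment sketches that decision.  It is illustrative only:
-the table is a plain array rather than Valgrind's real hash table, and the
-names (<code>BBEntry</code>, <code>get_cc_array</code>) are invented:
-
-<pre>
-#include &lt;stdio.h>
-#include &lt;stdlib.h>
-
-typedef unsigned int Addr;
-
-/* One entry per basic block seen so far: its original address and its
-   (already initialised) cost-centre array. */
-typedef struct { Addr orig_addr; void* cc_array; } BBEntry;
-
-static BBEntry table[1000];
-static int     n_entries = 0;
-
-/* Return the cost-centre array for this basic block, allocating and
-   zeroing one only on the first translation; on a retranslation the old
-   array (and the counts accumulated in it) is reused. */
-static void* get_cc_array(Addr orig_addr, int n_bytes, int* is_retrans)
-{
-   for (int i = 0; i &lt; n_entries; i++) {
-      if (table[i].orig_addr == orig_addr) {
-         *is_retrans = 1;
-         return table[i].cc_array;
-      }
-   }
-   *is_retrans = 0;
-   table[n_entries].orig_addr = orig_addr;
-   table[n_entries].cc_array  = calloc(1, n_bytes);
-   return table[n_entries++].cc_array;
-}
-
-int main(void)
-{
-   int retrans;
-   /* 88 = 32 + 56, the size of the two cost centres in the example above */
-   void* ccs1 = get_cc_array(0x8048f25, 88, &retrans);  /* first translation */
-   void* ccs2 = get_cc_array(0x8048f25, 88, &retrans);  /* retranslation     */
-   printf("same array reused: %s\n",
-          (ccs1 == ccs2 && retrans) ? "yes" : "no");
-   return 0;
-}
-</pre>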
-
-<h3>The cache simulation</h3>
-The cache simulation is fairly straightforward.  It just tracks which memory
-blocks are in the cache at the moment (it doesn't track the contents, since
-that is irrelevant).<p>
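-
-To make the "track which blocks are resident" idea concrete, here is a tiny
-self-contained direct-mapped model.  It is not the code in
-<code>vg_cachesim_{I1,D1,L2}.c</code>;  the geometry and the direct-mapped
-organisation are just examples:
-
-<pre>
-#include &lt;stdio.h>
-
-#define LINE_SIZE  32     /* bytes per cache line                        */
-#define N_LINES    512    /* 512 lines * 32 bytes = 16KB, direct-mapped  */
-
-typedef unsigned long Addr;
-
-static Addr          tags[N_LINES];
-static int           valid[N_LINES];
-static unsigned long accesses, misses;
-
-static void cache_ref(Addr a)
-{
-   Addr block = a / LINE_SIZE;         /* which memory block is touched  */
-   int  line  = block % N_LINES;       /* the one line it can live in    */
-   accesses++;
-   if (!valid[line] || tags[line] != block) {
-      misses++;                        /* miss: install the block        */
-      valid[line] = 1;
-      tags[line]  = block;
-   }
-}
-
-int main(void)
-{
-   /* Stream through 64KB of data in 4-byte steps: one access per 32-byte
-      line (every 8th) misses, so this prints accesses=16384 misses=2048. */
-   for (Addr a = 0; a &lt; 65536; a += 4)
-      cache_ref(a);
-   printf("accesses=%lu misses=%lu\n", accesses, misses);
-   return 0;
-}
-</pre>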
-
-The interface to the simulation is quite clean.  The functions called from the
-UCode contain calls to the simulation functions in the files
-<code>vg_cachesim_{I1,D1,L2}.c</code>;  these calls are inlined so that only
-one function call is done per simulated x86 instruction.  The file
-<code>vg_cachesim.c</code> simply <code>#include</code>s the three files
-containing the simulation, which makes plugging in new cache simulations is
-very easy -- you just replace the three files and recompile.<p>
-
-<h3>Output</h3>
-Output is fairly straightforward, basically printing the cost centre for every
-instruction, grouped by files and functions.  Total counts (eg. total cache
-accesses, total L1 misses) are calculated when traversing this structure rather
-than during execution, to save time;  the cache simulation functions are called
-so often that even one or two extra adds can make a sizeable difference.<p>
-
-The input file (<code>cachegrind.out</code>) has the following format:
-
-<pre>
-file         ::= desc_line* cmd_line events_line data_line+ summary_line
-desc_line    ::= "desc:" ws? non_nl_string
-cmd_line     ::= "cmd:" ws? cmd
-events_line  ::= "events:" ws? (event ws)+
-data_line    ::= file_line | fn_line | count_line
-file_line    ::= ("fl=" | "fi=" | "fe=") filename
-fn_line      ::= "fn=" fn_name
-count_line   ::= line_num ws? (count ws)+
-summary_line ::= "summary:" ws? (count ws)+
-count        ::= num | "."
-</pre>
-
-Where:
-
-<ul>
-  <li><code>non_nl_string</code> is any string not containing a newline.</li><p>
-  <li><code>cmd</code> is a command line invocation.</li><p>
-  <li><code>filename</code> and <code>fn_name</code> can be anything.</li><p>
-  <li><code>num</code> and <code>line_num</code> are decimal numbers.</li><p>
-  <li><code>ws</code> is whitespace.</li><p>
-  <li><code>nl</code> is a newline.</li><p>
-</ul>
-
-The contents of the "desc:" lines are printed out at the top of the summary.
-This is a generic way of providing simulation-specific information, eg. for
-giving the cache configuration for cache simulation.<p>
-
-Counts can be "." to represent "N/A", eg. the number of write misses for an
-instruction that doesn't write to memory.<p>
-
-The number of counts in each <code>count_line</code> and the
-<code>summary_line</code> should not exceed the number of events in the
-<code>events_line</code>.  If the number in a <code>count_line</code> is less,
-cg_annotate treats the missing counts as though they were "." entries.  <p>
-
-A <code>file_line</code> changes the current file name.  A <code>fn_line</code>
-changes the current function name.  A <code>count_line</code> contains counts
-that pertain to the current filename/fn_name.  An "fl=" <code>file_line</code>
-and a <code>fn_line</code> must appear before any <code>count_line</code>s to
-give the context of the first <code>count_line</code>s.<p>
-
-Each <code>file_line</code> should be immediately followed by a
-<code>fn_line</code>.  "fi=" <code>file_lines</code> are used to switch
-filenames for inlined functions; "fe=" <code>file_lines</code> are similar, but
-are put at the end of a basic block in which the file name hasn't been switched
-back to the original file name.  (fi and fe lines behave the same;  they are
-distinguished only to help debugging.)<p>
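-
-As an illustration of how simple the format is to consume, here is a small,
-self-contained C program (not cg_annotate itself, which is a Perl script)
-that reads a file in this format and totals the counts from every
-<code>count_line</code>.  Missing counts and "." counts are treated as zero:
-
-<pre>
-#include &lt;stdio.h>
-#include &lt;stdlib.h>
-#include &lt;string.h>
-#include &lt;ctype.h>
-
-#define MAX_EVENTS 16
-
-int main(int argc, char** argv)
-{
-   FILE* fp = fopen(argc > 1 ? argv[1] : "cachegrind.out", "r");
-   if (!fp) { perror("fopen"); return 1; }
-
-   char line[4096];
-   char events[MAX_EVENTS][32];
-   unsigned long long totals[MAX_EVENTS] = { 0 };
-   int n_events = 0;
-
-   while (fgets(line, sizeof line, fp)) {
-      if (strncmp(line, "events:", 7) == 0) {
-         /* events_line: remember the event names, in order */
-         char* tok = strtok(line + 7, " \t\n");
-         while (tok != NULL && n_events &lt; MAX_EVENTS) {
-            strncpy(events[n_events], tok, 31);
-            events[n_events][31] = '\0';
-            n_events++;
-            tok = strtok(NULL, " \t\n");
-         }
-      } else if (isdigit((unsigned char)line[0])) {
-         /* count_line: the first number is line_num, the rest are counts */
-         char* tok = strtok(line, " \t\n");       /* skip line_num */
-         for (int i = 0; i &lt; n_events; i++) {
-            tok = strtok(NULL, " \t\n");
-            if (tok == NULL) break;               /* missing counts: zero */
-            if (tok[0] != '.')                    /* "." means N/A (zero) */
-               totals[i] += strtoull(tok, NULL, 10);
-         }
-      }
-      /* desc:, cmd:, fl=, fn=, fi=, fe= and summary: lines are ignored */
-   }
-   fclose(fp);
-
-   for (int i = 0; i &lt; n_events; i++)
-      printf("%-6s %llu\n", events[i], totals[i]);
-   return 0;
-}
-</pre>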
-
-
-<h3>Summary of performance features</h3>
-Quite a lot of work has gone into making the profiling as fast as possible.
-This is a summary of the important features:
-
-<ul>
-  <li>The basic block-level cost centre storage allows almost free cost centre
-      lookup.</li><p>
-  
-  <li>Only one function call is made per instruction simulated;  even this
-      accounts for a sizeable percentage of execution time, but it seems
-      unavoidable if we want flexibility in the cache simulator.</li><p>
-
-  <li>Unchanging information about an instruction is stored in its cost centre,
-      avoiding unnecessary argument pushing, and minimising UCode
-      instrumentation bloat.</li><p>
-
-  <li>Summary counts are calculated at the end, rather than during
-      execution.</li><p>
-
-  <li>The <code>cachegrind.out</code> output files can contain huge amounts of
-      information; the file format was carefully chosen to minimise file
-      sizes.</li><p>
-</ul>
-
-
-<h3>Annotation</h3>
-Annotation is done by cg_annotate.  It is a fairly straightforward Perl script
-that slurps up all the cost centres, and then runs through all the chosen
-source files, printing out cost centres with them.  It too has been carefully
-optimised.
-
-
-<h3>Similar work, extensions</h3>
-It would be relatively straightforward to do other simulations and obtain
-line-by-line information about interesting events.  A good example would be
-branch prediction -- all branches could be instrumented to interact with a
-branch prediction simulator, using very similar techniques to those described
-above.<p>
-
-In particular, cg_annotate would not need to change -- the file format is such
-that it is not specific to the cache simulation, but could be used for any kind
-of line-by-line information.  The only part of cg_annotate that is specific to
-the cache simulation is the name of the input file
-(<code>cachegrind.out</code>), although it would be very simple to add an
-option to control this.<p>
-
-</body>
-</html>
diff --git a/configure.in b/configure.in
index 437dcc3..63df619 100644
--- a/configure.in
+++ b/configure.in
@@ -356,6 +356,9 @@
    valgrind.spec
    valgrind.pc
    docs/Makefile 
+   docs/lib/Makefile
+   docs/images/Makefile
+   docs/xml/Makefile
    tests/Makefile 
    tests/vg_regtest 
    tests/unused/Makefile 
@@ -371,7 +374,6 @@
    auxprogs/Makefile
    coregrind/Makefile 
    coregrind/demangle/Makefile 
-   coregrind/docs/Makefile
    coregrind/amd64/Makefile
    coregrind/arm/Makefile
    coregrind/x86/Makefile
diff --git a/corecheck/docs/Makefile.am b/corecheck/docs/Makefile.am
index 4e4da80..859c364 100644
--- a/corecheck/docs/Makefile.am
+++ b/corecheck/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = cc_main.html
+EXTRA_DIST = cc-manual.xml
diff --git a/corecheck/docs/cc-manual.xml b/corecheck/docs/cc-manual.xml
new file mode 100644
index 0000000..4316bd5
--- /dev/null
+++ b/corecheck/docs/cc-manual.xml
@@ -0,0 +1,50 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="cc-manual" xreflabel="CoreCheck">
+
+<title>CoreCheck: a very simple error detector</title>
+
+<para>CoreCheck is a very simple tool for Valgrind.  It adds no
+instrumentation to the program's code, and only reports the few
+kinds of errors detected by Valgrind's core.  It is mainly of use
+for Valgrind's developers for debugging and regression
+testing.</para>
+
+<para>The errors detected are those found by the core when
+<computeroutput>VG_(needs).core_errors</computeroutput> is set.
+These include:</para>
+
+<itemizedlist>
+
+ <listitem>
+  <para>Pthread API errors (many; eg. unlocking a non-locked
+  mutex)</para>
+ </listitem>
+
+ <listitem>
+  <para>Silly arguments to <computeroutput>malloc() </computeroutput> et al
+  (eg. negative size)</para>
+ </listitem>
+
+ <listitem>
+  <para>Invalid file descriptors to blocking syscalls
+  <computeroutput>read()</computeroutput> and
+  <computeroutput>write()</computeroutput></para>
+ </listitem>
+
+ <listitem>
+  <para>Bad signal numbers passed to
+  <computeroutput>sigaction()</computeroutput></para>
+ </listitem>
+
+ <listitem>
+  <para>Attempts to install signal handler for
+  <computeroutput>SIGKILL</computeroutput> or
+  <computeroutput>SIGSTOP</computeroutput></para>
+ </listitem>
+
+</itemizedlist>
+
+</chapter>
diff --git a/corecheck/docs/cc_main.html b/corecheck/docs/cc_main.html
deleted file mode 100644
index 3a374a4..0000000
--- a/corecheck/docs/cc_main.html
+++ /dev/null
@@ -1,66 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>CoreCheck</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="title"></a>
-<h1 align=center>CoreCheck</h1>
-<center>This manual was last updated on 2002-10-03</center>
-<p>
-
-<center>
-<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-Copyright &copy; 2000-2004 Nicholas Nethercote
-<p>
-CoreCheck is licensed under the GNU General Public License, 
-version 2<br>
-CoreCheck is a Valgrind tool that does very basic error checking.
-</center>
-
-<p>
-
-<h2>1&nbsp; CoreCheck</h2>
-
-CoreCheck is a very simple tool for Valgrind.  It adds no instrumentation to
-the program's code, and only reports the few kinds of errors detected by
-Valgrind's core.  It is mainly of use for Valgrind's developers for debugging
-and regression testing.
-<p>
-The errors detected are those found by the core when
-<code>VG_(needs).core_errors</code> is set.  These include:
-
-<ul>
-<li>Pthread API errors (many;  eg. unlocking a non-locked mutex)<p>
-<li>Silly arguments to <code>malloc() </code> et al (eg. negative size)<p>
-<li>Invalid file descriptors to blocking syscalls <code>read()</code> and 
-    <code>write()</code><p>
-<li>Bad signal numbers passed to <code>sigaction()</code><p>
-<li>Attempts to install signal handler for <code>SIGKILL</code> or
-    <code>SIGSTOP</code> <p>
-</ul>
-
-<hr width="100%">
-</body>
-</html>
-
diff --git a/coregrind/Makefile.am b/coregrind/Makefile.am
index 4874b57..06b46f6 100644
--- a/coregrind/Makefile.am
+++ b/coregrind/Makefile.am
@@ -4,8 +4,8 @@
 ## When building, we are only interested in the current arch/OS/platform.
 ## But when doing 'make dist', we are interested in every arch/OS/platform.
 ## That's what DIST_SUBDIRS specifies.
-SUBDIRS      = $(VG_ARCH)     $(VG_OS)     $(VG_PLATFORM)     demangle . docs
-DIST_SUBDIRS = $(VG_ARCH_ALL) $(VG_OS_ALL) $(VG_PLATFORM_ALL) demangle . docs
+SUBDIRS      = $(VG_ARCH)     $(VG_OS)     $(VG_PLATFORM)     demangle .
+DIST_SUBDIRS = $(VG_ARCH_ALL) $(VG_OS_ALL) $(VG_PLATFORM_ALL) demangle .
 
 AM_CPPFLAGS += -DVG_LIBDIR="\"$(valdir)"\" -I$(srcdir)/demangle \
 		-DKICKSTART_BASE=@KICKSTART_BASE@ \
diff --git a/coregrind/docs/.cvsignore b/coregrind/docs/.cvsignore
deleted file mode 100644
index 3dda729..0000000
--- a/coregrind/docs/.cvsignore
+++ /dev/null
@@ -1,2 +0,0 @@
-Makefile.in
-Makefile
diff --git a/coregrind/docs/Makefile.am b/coregrind/docs/Makefile.am
deleted file mode 100644
index 27a9e9b..0000000
--- a/coregrind/docs/Makefile.am
+++ /dev/null
@@ -1,3 +0,0 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = coregrind_core.html coregrind_intro.html coregrind_tools.html
diff --git a/coregrind/docs/coregrind_core.html b/coregrind/docs/coregrind_core.html
deleted file mode 100644
index feaf757..0000000
--- a/coregrind/docs/coregrind_core.html
+++ /dev/null
@@ -1,1552 +0,0 @@
-
-
-<a name="core"></a>
-<h2>2&nbsp; Using and understanding the Valgrind core</h2>
-
-This section describes the Valgrind core services, flags and behaviours.  That
-means it is relevant regardless of what particular tool you are using.
-A point of terminology: most references to "valgrind" in the rest of
-this section (Section 2) refer to the valgrind core services.
-
-
-<a name="core-whatdoes"></a>
-<h3>2.1&nbsp; What it does with your program</h3>
-
-Valgrind is designed to be as non-intrusive as possible. It works
-directly with existing executables. You don't need to recompile,
-relink, or otherwise modify the program to be checked.
-
-Simply put <code>valgrind --tool=<i>tool_name</i></code> at the start of
-the command line normally used to run the program.  For example,
-if you want to run the command <code>ls -l</code>
-using the heavyweight memory-checking tool Memcheck, issue the command:
-
-  <blockquote>
-  <code>valgrind --tool=memcheck ls -l</code>.  
-  </blockquote>
-
-<p>Regardless of which tool is in use, Valgrind takes control of your
-program before it starts.  Debugging information is read from the
-executable and associated libraries, so that error messages and other
-outputs can be phrased in terms of source code locations (if that is
-appropriate)
-
-<p>
-Your program is then run on a synthetic x86 CPU provided by the
-Valgrind core.  As new code is executed for the first time, the core
-hands the code to the selected tool.  The tool adds its own
-instrumentation code to this and hands the result back to the core,
-which coordinates the continued execution of this instrumented code.
-
-<p>
-The amount of instrumentation code added varies widely between tools.
-At one end of the scale, Memcheck adds code to check every
-memory access and every value computed, increasing the size of the
-code at least 12 times, and making it run 25-50 times slower than
-natively.  At the other end of the spectrum, the ultra-trivial "none"
-tool (a.k.a. Nulgrind) adds no instrumentation at all and causes in total
-"only" about a 4 times slowdown.  
-
-<p>
-Valgrind simulates every single instruction your program executes.
-Because of this, the active tool checks, or profiles, not only the
-code in your application but also in all supporting dynamically-linked
-(<code>.so</code>-format) libraries, including the GNU C library, the
-X client libraries, Qt (if you work with KDE), and so on.
-
-<p>
-If you're using one of the error-detection tools, Valgrind will often
-detect errors in libraries, for example the GNU C or X11 libraries,
-which you have to use.  You might not be interested in these errors,
-since you probably have no control over that code.  Therefore, Valgrind
-allows you to selectively suppress errors, by recording them in a
-suppressions file which is read when Valgrind starts up.  The build
-mechanism attempts to select suppressions which give reasonable
-behaviour for the libc and XFree86 versions detected on your machine.
-To make it easier to write suppressions, you can use the
-<code>--gen-suppressions=yes</code> option which tells Valgrind to print
-out a suppression for each error that appears, which you can then copy
-into a suppressions file.
-
-<p>
-Different error-checking tools report different kinds of errors.  The
-suppression mechanism therefore allows you to say which tool or tool(s)
-each suppression applies to.
-
-
-<a name="started"></a>
-<h3>2.2&nbsp; Getting started</h3>
-
-First off, consider whether it might be beneficial to recompile your
-application and supporting libraries with debugging info enabled (the
-<code>-g</code> flag).  Without debugging info, the best the Valgrind tools
-will be able to do is guess which function a particular piece of code
-belongs to, which makes both error messages and profiling output
-nearly useless.  With <code>-g</code>, you'll hopefully get messages
-which point directly to the relevant source code lines.
-
-<p>
-Another flag you might like to consider, if you are working with 
-C++, is <code>-fno-inline</code>.  That makes it easier to see the
-function-call chain, which can help reduce confusion when navigating
-around large C++ apps.  For whatever it's worth, debugging
-OpenOffice.org with Memcheck is a bit easier when using this flag.
-
-<p>
-You don't have to do this, but doing so helps Valgrind produce more
-accurate and less confusing error reports.  Chances are you're set up
-like this already, if you intended to debug your program with GNU gdb,
-or some other debugger.
-
-<p>
-This paragraph applies only if you plan to use Memcheck:
-On rare occasions, optimisation levels
-at <code>-O2</code> and above have been observed to generate code which
-fools Memcheck into wrongly reporting uninitialised value
-errors.  We have looked in detail into fixing this, and unfortunately 
-the result is that doing so would give a further significant slowdown
-in what is already a slow tool.  So the best solution is to turn off
-optimisation altogether.  Since this often makes things unmanageably
-slow, a plausible compromise is to use <code>-O</code>.  This gets 
-you the majority of the benefits of higher optimisation levels whilst
-keeping relatively small the chances of false complaints from Memcheck.
-All other tools (as far as we know) are unaffected by optimisation
-level.
-
-<p>
-Valgrind understands both the older "stabs" debugging format, used by
-gcc versions prior to 3.1, and the newer DWARF2 format used by gcc 3.1
-and later.  We continue to refine and debug our debug-info readers,
-although the majority of effort will naturally enough go into the 
-newer DWARF2 reader.
-
-<p>
-When you're ready to roll, just run your application as you would
-normally, but place <code>valgrind --tool=<i>tool_name</i></code> in
-front of your usual command-line invocation.  Note that you should run
-the real (machine-code) executable here.  If your application is
-started by, for example, a shell or perl script, you'll need to modify
-it to invoke Valgrind on the real executables.  Running such scripts
-directly under Valgrind will result in you getting error reports
-pertaining to <code>/bin/sh</code>, <code>/usr/bin/perl</code>, or
-whatever interpreter you're using.  This may not be what you want and
-can be confusing.  You can force the issue by giving the flag
-<code>--trace-children=yes</code>, but confusion is still likely.
-
-
-<a name="comment"></a>
-<h3>2.3&nbsp; The commentary</h3>
-
-Valgrind tools write a commentary, a stream of text, detailing error
-reports and other significant events.  All lines in the commentary
-have the following form:<br>
-<pre>
-  ==12345== some-message-from-Valgrind
-</pre>
-
-<p>The <code>12345</code>  is the process ID.  This scheme makes it easy
-to distinguish program output from Valgrind commentary, and also easy
-to differentiate commentaries from different processes which have
-become merged together, for whatever reason.
-
-<p>By default, Valgrind tools write only essential messages to the commentary,
-so as to avoid flooding you with information of secondary importance.
-If you want more information about what is happening, re-run, passing
-the <code>-v</code> flag to Valgrind.
-
-<p>
-You can direct the commentary to three different places:
-
-<ul>
-<li>The default: send it to a file descriptor, which is by default 2
-    (stderr).  So, if you give the core no options, it will write 
-    commentary to the standard error stream.  If you want to send 
-    it to some other file descriptor, for example number 9,
-    you can specify <code>--log-fd=9</code>.
-<p>
-<li>A less intrusive option is to write the commentary to a file, 
-    which you specify by <code>--log-file=filename</code>.  Note 
-    carefully that the commentary is <b>not</b> written to the file
-    you specify, but instead to one called
-    <code>filename.pid12345</code>, if for example the pid of the
-    traced process is 12345.  This is helpful when valgrinding a whole
-    tree of processes at once, since it means that each process writes
-    to its own logfile, rather than the result being jumbled up in one
-    big logfile.
-<p>
-<li>The least intrusive option is to send the commentary to a network
-    socket.  The socket is specified as an IP address and port number
-    pair, like this: <code>--log-socket=192.168.0.1:12345</code> if you
-    want to send the output to host IP 192.168.0.1 port 12345 (I have
-    no idea if 12345 is a port of pre-existing significance).  You can
-    also omit the port number: <code>--log-socket=192.168.0.1</code>, 
-    in which case a default port of 1500 is used.  This default is
-    defined by the constant <code>VG_CLO_DEFAULT_LOGPORT</code>
-    in the sources.
-    <p>
-    Note, unfortunately, that you have to use an IP address here, rather
-    than a hostname.
-    <p>
-    Writing to a network socket is pretty useless if you don't have
-    something listening at the other end.  We provide a simple
-    listener program, <code>valgrind-listener</code>, which accepts 
-    connections on the specified port and copies whatever it is sent
-    to stdout.  Probably someone will tell us this is a horrible
-    security risk.  It seems likely that people will write more
-    sophisticated listeners in the fullness of time.
-    <p>
-    valgrind-listener can accept simultaneous connections from up to 50
-    valgrinded processes.  In front of each line of output it prints
-    the current number of active connections in round brackets.  
-    <p>
-    valgrind-listener accepts two command-line flags:
-    <ul>
-    <li><code>-e</code> or <code>--exit-at-zero</code>: when the
-        number of connected processes falls back to zero, exit.
-        Without this, it will run forever, that is, until you send it
-        Control-C.
-    <p>
-    <li><code>portnumber</code>: changes the port it listens on from
-        the default (1500).  The specified port must be in the range
-        1024 to 65535.  The same restriction applies to port numbers
-        specified by a <code>--log-socket=</code> option given to Valgrind itself.
-    </ul>
-    <p>
-    If a valgrinded process fails to connect to a listener, for
-    whatever reason (the listener isn't running, invalid or
-    unreachable host or port, etc), Valgrind switches back to writing
-    the commentary to stderr.  The same goes for any process which
-    loses an established connection to a listener.  In other words,
-    killing the listener doesn't kill the processes sending data to
-    it.
-</ul>
-<p>
-Here is an important point about the relationship between the
-commentary and profiling output from tools.  The commentary contains a
-mix of messages from the Valgrind core and the selected tool.  If the
-tool reports errors, it will report them to the commentary.  However,
-if the tool does profiling, the profile data will be written to a file
-of some kind, depending on the tool, and independent of what
-<code>--log-*</code> options are in force.  The commentary is intended
-to be a low-bandwidth, human-readable channel.  Profiling data, on the
-other hand, is usually voluminous and not meaningful without further
-processing, which is why we have chosen this arrangement.
-
-
-<a name="report"></a>
-<h3>2.4&nbsp; Reporting of errors</h3>
-
-When one of the error-checking tools (Memcheck, Addrcheck, Helgrind)
-detects something bad happening in the program, an error message is
-written to the commentary.  For example:<br>
-<pre>
-  ==25832== Invalid read of size 4
-  ==25832==    at 0x8048724: BandMatrix::ReSize(int, int, int) (bogon.cpp:45)
-  ==25832==    by 0x80487AF: main (bogon.cpp:66)
-  ==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
-  ==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
-  ==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd
-</pre>
-
-<p>
-This message says that the program did an illegal 4-byte read of
-address 0xBFFFF74C, which, as far as Memcheck can tell, is not a valid
-stack address, nor corresponds to any currently malloc'd or free'd
-blocks.  The read is happening at line 45 of <code>bogon.cpp</code>,
-called from line 66 of the same file, etc.  For errors associated with
-an identified malloc'd/free'd block, for example reading free'd
-memory, Valgrind reports not only the location where the error
-happened, but also where the associated block was malloc'd/free'd.
-
-<p>
-Valgrind remembers all error reports.  When an error is detected,
-it is compared against old reports, to see if it is a duplicate.  If
-so, the error is noted, but no further commentary is emitted.  This
-avoids you being swamped with bazillions of duplicate error reports.
-
-<p>
-If you want to know how many times each error occurred, run with the
-<code>-v</code> option.  When execution finishes, all the reports are
-printed out, along with, and sorted by, their occurrence counts.  This
-makes it easy to see which errors have occurred most frequently.
-
-<p>
-Errors are reported before the associated operation actually happens.
-If you're using a tool (Memcheck, Addrcheck) which does address
-checking, and your program attempts to read from address zero, the
-tool will emit a message to this effect, and the program will then
-duly die with a segmentation fault.
-
-<p>
-In general, you should try and fix errors in the order that they are
-reported.  Not doing so can be confusing.  For example, a program
-which copies uninitialised values to several memory locations, and
-later uses them, will generate several error messages, when run on
-Memcheck.  The first such error message may well give the most direct
-clue to the root cause of the problem.
-
-<p>
-The process of detecting duplicate errors is quite an expensive one
-and can become a significant performance overhead if your program
-generates huge quantities of errors.  To avoid serious problems here,
-Valgrind will simply stop collecting errors after 300 different errors
-have been seen, or 30000 errors in total have been seen.  In this
-situation you might as well stop your program and fix it, because
-Valgrind won't tell you anything else useful after this.  Note that
-the 300/30000 limits apply after suppressed errors are removed.  These
-limits are defined in <code>core.h</code> and can be increased
-if necessary.
-
-<p>
-To avoid this cutoff you can use the <code>--error-limit=no</code>
-flag.  Then Valgrind will always show errors, regardless of how many
-there are.  Use this flag carefully, since it may have a dire effect
-on performance.
-
-
-<a name="suppress"></a>
-<h3>2.5&nbsp; Suppressing errors</h3>
-
-The error-checking tools detect numerous problems in the base
-libraries, such as the GNU C library, and the XFree86 client
-libraries, which come pre-installed on your GNU/Linux system.  You
-can't easily fix these, but you don't want to see these errors (and
-yes, there are many!)  So Valgrind reads a list of errors to suppress
-at startup.  A default suppression file is cooked up by the
-<code>./configure</code> script when the system is built.
-
-<p>
-You can modify and add to the suppressions file at your leisure,
-or, better, write your own.  Multiple suppression files are allowed.
-This is useful if part of your project contains errors you can't or
-don't want to fix, yet you don't want to continuously be reminded of
-them.
-
-<p>
-<b>Note:</b> By far the easiest way to add suppressions is to use the
-<code>--gen-suppressions=yes</code> flag described in <a href="#flags">this
-section</a>.
-
-<p>
-Each error to be suppressed is described very specifically, to
-minimise the possibility that a suppression-directive inadvertently
-suppresses a bunch of similar errors which you did want to see.  The
-suppression mechanism is designed to allow precise yet flexible
-specification of errors to suppress.
-
-<p>
-If you use the <code>-v</code> flag, at the end of execution, Valgrind
-prints out one line for each used suppression, giving its name and the
-number of times it got used.  Here are the suppressions used by a run of
-<code>valgrind --tool=memcheck ls -l</code>:
-<pre>
-  --27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getgrgid_r
-  --27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getpwuid_r
-  --27579-- supp: 6 strrchr/_dl_map_object_from_fd/_dl_map_object
-</pre>
-
-<p>
-Multiple suppressions files are allowed.  By default, Valgrind uses
-<code>$PREFIX/lib/valgrind/default.supp</code>.  You can ask to add
-suppressions from another file, by specifying
-<code>--suppressions=/path/to/file.supp</code>.
-
-<p>
-If you want to understand more about suppressions, look at an existing
-suppressions file whilst reading the following documentation.  The file
-<code>glibc-2.2.supp</code>, in the source distribution, provides some good
-examples.
-
-<p>Each suppression has the following components:<br>
-<ul>
-  <li>First line: its name.  This merely gives a handy name to the suppression,
-      by which it is referred to in the summary of used suppressions printed
-      out when a program finishes.  It's not important what the name is; any
-      identifying string will do.
-      </li>
-      <p>
-
-  <li>Second line: name of the tool(s) that the suppression is for (if more
-      than one, comma-separated), and the name of the suppression itself,
-      separated by a colon, eg:
-      <pre>
-      tool_name1,tool_name2:suppression_name
-      </pre>
-      (Nb: no spaces are allowed).
-      <p>      
-      Recall that Valgrind-2.0.X is a modular system, in which
-      different instrumentation tools can observe your program whilst
-      it is running.  Since different tools detect different kinds of
-      errors, it is necessary to say which tool(s) the suppression is
-      meaningful to.
-      <p>
-      A tool will complain, at startup, if it does not understand any
-      suppression directed to it.  Tools ignore suppressions which are
-      not directed to them.  As a result, it is quite practical to put
-      suppressions for all tools into the same suppression file.
-      <p>
-      Valgrind's core can detect certain PThreads API errors, for which this
-      line reads:
-      <pre>
-      core:PThread
-      </pre>
-
-  <li>Next line: a small number of suppression types have extra information
-      after the second line (eg. the <code>Param</code> suppression for
-      Memcheck)<p>
-
-  <li>Remaining lines: This is the calling context for the error -- the chain
-      of function calls that led to it.  There can be up to four of these lines.
-      <p>
-      Locations may be either names of shared objects/executables or wildcards
-      matching function names.  They begin <code>obj:</code> and
-      <code>fun:</code> respectively.  Function and object names to match
-      against may use the wildcard characters <code>*</code> and
-      <code>?</code>.
-      <p>
-      <b>Important note:</b> C++ function names must be <b>mangled</b>.  If
-      you are writing suppressions by hand, use the <code>--demangle=no</code>
-      option to get the mangled names in your error messages.
-      <p>
-    
-  <li>Finally, the entire suppression must be between curly braces. Each
-      brace must be the first character on its own line.
-</ul>
-
-<p>
-
-A suppression only suppresses an error when the error matches all the
-details in the suppression.  Here's an example:
-<pre>
-  {
-    __gconv_transform_ascii_internal/__mbrtowc/mbtowc
-    Memcheck:Value4
-    fun:__gconv_transform_ascii_internal
-    fun:__mbr*toc
-    fun:mbtowc
-  }
-</pre>
-
-<p>What it means is: for Memcheck only, suppress a
-use-of-uninitialised-value error, when the data size is 4, when it
-occurs in the function <code>__gconv_transform_ascii_internal</code>,
-when that is called from any function of name matching
-<code>__mbr*toc</code>, when that is called from <code>mbtowc</code>.
-It doesn't apply under any other circumstances.  The string by which
-this suppression is identified to the user is
-__gconv_transform_ascii_internal/__mbrtowc/mbtowc.
-<p>
-(See <a href="mc_main.html#suppfiles">this section</a> for more details on
-the specifics of Memcheck's suppression kinds.)
-
-<p>Another example, again for the Memcheck tool:
-<pre>
-  {
-    libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
-    Memcheck:Value4
-    obj:/usr/X11R6/lib/libX11.so.6.2
-    obj:/usr/X11R6/lib/libX11.so.6.2
-    obj:/usr/X11R6/lib/libXaw.so.7.0
-  }
-</pre>
-
-<p>Suppress any size 4 uninitialised-value error which occurs anywhere
-in <code>libX11.so.6.2</code>, when called from anywhere in the same
-library, when called from anywhere in <code>libXaw.so.7.0</code>.  The
-inexact specification of locations is regrettable, but is about all
-you can hope for, given that the X11 libraries shipped with Red Hat
-7.2 have had their symbol tables removed.
-
-<p>Note -- since the above two examples did not make it clear -- that
-you can freely mix the <code>obj:</code> and <code>fun:</code>
-styles of description within a single suppression record.
-<p>
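-For instance -- this is an illustrative sketch rather than an entry
-from a real suppressions file -- the following suppression mixes both
-styles.  It matches a size-4 uninitialised-value error in any function
-whose name starts with <code>my_suspect_fn</code>, when called from
-anywhere in <code>libX11.so.6.2</code>:
-<pre>
-  {
-    mixed_fun_and_obj_example
-    Memcheck:Value4
-    fun:my_suspect_fn*
-    obj:/usr/X11R6/lib/libX11.so.6.2
-  }
-</pre>
-<p>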
-
-<a name="flags"></a>
-<h3>2.6&nbsp; Command-line flags for the Valgrind core</h3>
-
-
-As mentioned above, Valgrind's core accepts a common set of flags.
-The tools also accept tool-specific flags, which are documented
-separately for each tool.
-
-You invoke Valgrind like this:
-<pre>
-  valgrind --tool=<i>tool_name</i> [options-for-Valgrind] your-prog [options for your-prog]
-</pre>
-
-<p>Valgrind's default settings succeed in giving reasonable behaviour
-in most cases.  We group the available options by rough categories.
-
-<h4>Tool-selection option</h4>
-The single most important option.
-<ul>
-  <li><code>--tool=<i>name</i></code><br>
-      <p>Run the Valgrind tool called <i>name</i>, e.g. Memcheck, Addrcheck,
-      Cachegrind, etc.
-      </li><br><p>
-</ul>
-
-<h4>Basic Options</h4>
-These options work with all tools.
-
-<ul>
-  <li><code>--help</code><br>
-      <p>Show help for all options, both for the core and for the
-      selected tool. </li><br><p>
-
-  <li><code>--help-debug</code><br>
-      <p>Same as <code>--help</code>, but also lists debugging options which
-      usually are only of use to developers.</li><br><p>
-
-  <li><code>--version</code><br> <p>Show the version number of the
-      Valgrind core.  Tools can have their own version numbers.  There
-      is a scheme in place to ensure that tools only execute when the
-      core version is one they are known to work with.  This was done
-      to minimise the chances of strange problems arising from
-      tool-vs-core version incompatibilities.  </li><br><p>
-
-  <li><code>-v --verbose</code><br> <p>Be more verbose.  Gives extra
-      information on various aspects of your program, such as: the
-      shared objects loaded, the suppressions used, the progress of
-      the instrumentation and execution engines, and warnings about
-      unusual behaviour.  Repeating the flag increases the verbosity
-      level.  </li><br><p>
-
-  <li><code>-q --quiet</code><br>
-      <p>Run silently, and only print error messages.  Useful if you
-      are running regression tests or have some other automated test
-      machinery.
-      </li><br><p>
-
-  <li><code>--trace-children=no</code> [default]<br>
-      <code>--trace-children=yes</code>
-      <p>When enabled, Valgrind will trace into child processes.  This
-      is confusing and often not what you want, so is disabled by
-      default.
-
-      <p>Note that the name of this option is slightly misleading.
-      It actually controls whether programs started with
-      <code>exec()</code> are run under Valgrind's control.  If your
-      program calls <code>fork()</code>, both the parent and the child
-      will run under Valgrind's control.
-      </li><br><p>
-
-  <li><code>--log-fd=&lt;number&gt;</code> [default: 2, stderr]
-      <p>Specifies that Valgrind should send all of its
-      messages to the specified file descriptor.  The default, 2, is
-      the standard error channel (stderr).  Note that this may
-      interfere with the client's own use of stderr.  
-      </li><br><p>
-
-  <li><code>--log-file=&lt;filename&gt;</code>
-      <p>Specifies that Valgrind should send all of its
-      messages to the specified file.  The file name actually used
-      is created by concatenating <code>filename</code>, the text
-      ".pid" and the process ID, so that each process gets its own
-      log file (see the example after this list).  The specified
-      file name may not be the empty string.
-      </li><br><p>
-
-  <li><code>--log-socket=&lt;ip-address:port-number&gt;</code>
-      <p>Specifies that Valgrind should send all of its messages to
-      the specified port at the specified IP address.  The port may be
-      omitted, in which case port 1500 is used.  If a connection
-      cannot be made to the specified socket, Valgrind falls back to
-      writing output to the standard error (stderr).  This option is
-      intended to be used in conjunction with the
-      <code>valgrind-listener</code> program.  For further details,
-      see section <a href="#core-comment">2.3</a>.
-      </li><br><p>
-
-  <li><code>--time-stamp=no</code> [default]<br>
-      <code>--time-stamp=yes</code>
-      <p>Specifies that Valgrind should output a timestamp before
-      each message that it outputs.
-      </li><br><p>
-</ul>
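-
-<p>
-As a small worked example of the logging options -- the program name
-and process ID here are of course only illustrative -- the following
-invocation sends all of Valgrind's messages to a file named
-<code>vg.log.pid12345</code> instead of to stderr:
-<pre>
-  valgrind --tool=memcheck --log-file=vg.log ./myprog
-</pre>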
-
-<h4>Error-related options</h4>
-These options are used by all tools that can report errors, e.g. Memcheck, but
-not Cachegrind.
-<ul>
-  <li><code>--demangle=no</code><br>
-      <code>--demangle=yes</code> [default]
-      <p>Disable/enable automatic demangling (decoding) of C++ names.
-      Enabled by default.  When enabled, Valgrind will attempt to
-      translate encoded C++ procedure names back to something
-      approaching the original.  The demangler handles symbols mangled
-      by g++ versions 2.X and 3.X.
-
-      <p>An important fact about demangling is that function
-      names mentioned in suppressions files should be in their mangled
-      form.  Valgrind does not demangle function names when searching
-      for applicable suppressions, because to do otherwise would make
-      suppressions file contents dependent on the state of Valgrind's
-      demangling machinery, and would also be slow and pointless.
-      </li><br><p>
-
-  <li><code>--num-callers=&lt;number&gt;</code> [default=4]<br>
-      <p>By default, Valgrind shows four levels of function call names
-      to help you identify program locations.  You can change that
-      number with this option.  This can help in determining the
-      program's location in deeply-nested call chains.  Note that errors
-      are commoned up using only the top three function locations (the
-      place in the current function, and that of its two immediate
-      callers).  So this doesn't affect the total number of errors
-      reported.  
-      <p>
-      The maximum value for this is 50.  Note that higher settings
-      will make Valgrind run a bit more slowly and take a bit more
-      memory, but can be useful when working with programs with
-      deeply-nested call chains.  
-      </li><br><p>
-
-  <li><code>--error-limit=yes</code> [default]<br>
-      <code>--error-limit=no</code> <p>When enabled, Valgrind stops
-      reporting errors after 30000 in total, or 300 different ones,
-      have been seen.  This is to stop the error tracking machinery
-      from becoming a huge performance overhead in programs with many
-      errors.  
-      </li><br><p>
-
-  <li><code>--show-below-main=yes</code><br>
-      <code>--show-below-main=no</code>  [default]
-      <p>By default, stack traces for errors do not show any functions that
-      appear beneath <code>main()</code>;  most of the time it's uninteresting
-      C library stuff.  If this option is enabled, these entries below
-      <code>main()</code> will be shown.
-      </li><br><p>
-
-  <li><code>--suppressions=&lt;filename&gt;</code> 
-      [default: $PREFIX/lib/valgrind/default.supp]
-      <p>Specifies an extra
-      file from which to read descriptions of errors to suppress.  You
-      may use as many extra suppressions files as you
-      like.
-      </li><br><p>
-
-  <li><code>--gen-suppressions=no</code> [default]<br>
-      <code>--gen-suppressions=yes</code>
-      <p>When enabled, Valgrind will pause after every error shown,
-      and print the line
-      <br>
-      <code>---- Print suppression ? --- [Return/N/n/Y/y/C/c] ----</code>
-      <p>
-      The prompt's behaviour is the same as for the <code>--db-attach</code>
-      option.
-      <p>
-      If you choose to, Valgrind will print out a suppression for this error.
-      You can then cut and paste it into a suppression file if you don't want
-      to hear about the error in the future.
-      <p>
-      This option is particularly useful with C++ programs, as it prints out
-      the suppressions with mangled names, as required.
-      <p>
-      Note that the suppressions printed are as specific as possible.  You
-      may want to common up similar ones, eg. by adding wildcards to function
-      names.  Also, sometimes two different errors are suppressed by the same
-      suppression, in which case Valgrind will output the suppression more than
-      once, but you only need to have one copy in your suppression file (but
-      having more than one won't cause problems).  Also, the suppression
-      name is given as <code>&lt;insert a suppression name here&gt;</code>;
-      the name doesn't really matter, it's only used with the
-      <code>-v</code> option which prints out all used suppression records.
-      </li><br><p>
-
-  <li><code>--track-fds=no</code> [default]<br>
-      <code>--track-fds=yes</code>
-      <p>When enabled, Valgrind will print out a list of open file
-      descriptors on exit.  Along with each file descriptor, Valgrind
-      prints out a stack backtrace of where the file was opened and any
-      details relating to the file descriptor such as the file name or
-      socket details.
-      </li><br><p>
-
-  <li><code>--db-attach=no</code> [default]<br>
-      <code>--db-attach=yes</code>
-      <p>When enabled, Valgrind will pause after every error shown,
-      and print the line
-      <br>
-      <code>---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ----</code>
-      <p>
-      Pressing <code>Ret</code>, or <code>N</code> <code>Ret</code>
-      or <code>n</code> <code>Ret</code>, causes Valgrind not to
-      start a debugger for this error.
-      <p>
-      <code>Y</code> <code>Ret</code>
-      or <code>y</code> <code>Ret</code> causes Valgrind to
-      start a debugger, for the program at this point.  When you have
-      finished with the debugger, quit from it, and the program will continue.
-      Trying to continue from inside the debugger doesn't work.
-      <p>
-      <code>C</code> <code>Ret</code>
-      or <code>c</code> <code>Ret</code> causes Valgrind not to
-      start a debugger, and not to ask again.
-      <p>
-      <code>--db-attach=yes</code> conflicts with
-      <code>--trace-children=yes</code>.  You can't use them together.
-      Valgrind refuses to start up in this situation.  1 May 2002:
-      this is a historical relic which could be easily fixed if it
-      gets in your way.  Mail me and complain if this is a problem for
-      you.
-      <p>
-      Nov 2002: if you're sending output to a logfile or to a network
-      socket, I guess this option doesn't make any sense.  Caveat emptor.
-      </li><br><p>
-
-  <li><code>--db-command=&lt;command&gt;</code> [default: gdb -nw %f %p]<br>
-      <p>This specifies how Valgrind will invoke the debugger.  By
-      default it will use whatever GDB is detected at build time,
-      which is usually <code>/usr/bin/gdb</code>.  Using this option,
-      you can specify an alternative command with which to invoke the
-      debugger you want to use.
-      <p>
-      The command string given can include one or more instances of the
-      %p and %f expansions.  Each instance of %p expands to the PID of
-      the process to be debugged and each instance of %f expands to
-      the path to the executable for the process to be debugged.  (See
-      the example after this list.)
-      </li><br><p>
-
-  <li><code>--input-fd=&lt;number&gt;</code> [default=0, stdin]<br>
-      <p>When using <code>--db-attach=yes</code> and
-      <code>--gen-suppressions=yes</code>, Valgrind will stop and
-      wait for keyboard input from you when each error occurs.
-      By default it reads from the standard input (stdin), which is
-      problematic for programs which close stdin.  This option
-      allows you to specify an alternative file descriptor from
-      which to read input.  
-      </li><br><p>
-</ul>
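-
-<p>
-For example -- the debugger command and program name here are purely
-illustrative -- the following invocation asks Valgrind to offer to
-attach the DDD debugger, rather than GDB, whenever an error is shown:
-<pre>
-  valgrind --tool=memcheck --db-attach=yes --db-command="ddd %f %p" ./myprog
-</pre>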
-
-<h4><code>malloc()</code>-related options</h4>
-For tools that use their own version of <code>malloc()</code> (e.g. Memcheck
-and Addrcheck), the following options apply.
-<ul>
-  <li><code>--alignment=&lt;number&gt;</code> [default: 8]<br> <p>By
-      default Valgrind's <code>malloc</code>, <code>realloc</code>,
-      etc, return 8-byte aligned addresses.  These are suitable for
-      any accesses on x86 processors.
-      Some programs might however assume that <code>malloc</code> et
-      al return memory with greater alignment.  The supplied value
-      must be between 4 and 4096 inclusive, and must be a power of
-      two.</li><br><p>
-
-  <li><code>--sloppy-malloc=no</code> [default]<br>
-      <code>--sloppy-malloc=yes</code>
-      <p>When enabled, all requests for malloc/calloc are rounded up
-      to a multiple of 4 bytes.  For example, a request for 17 bytes of space
-      would result in a 20-byte area being made available.  This works
-      around bugs in sloppy libraries which assume that they can
-      safely rely on malloc/calloc requests being rounded up in this
-      fashion.  Without the workaround, these libraries tend to
-      generate large numbers of errors when they access the ends of
-      these areas.  
-      <p>
-      Valgrind snapshots dated 17 Feb 2002 and later are
-      cleverer about this problem, and you should no longer need to 
-      use this flag.  To put it bluntly, if you do need to use this
-      flag, your program violates the ANSI C semantics defined for
-      <code>malloc</code> and <code>free</code>, even if it appears to
-      work correctly, and you should fix it, at least if you hope for
-      maximum portability.
-      </li><br><p>
-</ul>
-
-<h4>Rare options</h4>
-These options apply to all tools, as they affect certain obscure workings of
-the Valgrind core.  Most people won't need to use these.
-<ul>
-  <li><code>--run-libc-freeres=yes</code> [default]<br>
-      <code>--run-libc-freeres=no</code>
-      <p>The GNU C library (<code>libc.so</code>), which is used by
-      all programs, may allocate memory for its own uses.  Usually it
-      doesn't bother to free that memory when the program ends - there
-      would be no point, since the Linux kernel reclaims all process
-      resources when a process exits anyway, so it would just slow
-      things down.
-      <p>
-      The glibc authors realised that this behaviour causes leak
-      checkers, such as Valgrind, to falsely report leaks in glibc,
-      when a leak check is done at exit.  In order to avoid this, they
-      provided a routine called <code>__libc_freeres</code>
-      specifically to make glibc release all memory it has allocated.
-      Memcheck and Addrcheck therefore try and run
-      <code>__libc_freeres</code> at exit.
-      <p>
-      Unfortunately, in some versions of glibc,
-      <code>__libc_freeres</code> is sufficiently buggy to cause
-      segmentation faults.  This is particularly noticeable on Red Hat
-      7.1.  So this flag is provided in order to inhibit the run of
-      <code>__libc_freeres</code>.  If your program seems to run fine
-      on Valgrind, but segfaults at exit, you may find that
-      <code>--run-libc-freeres=no</code> fixes that, although at the
-      cost of possibly falsely reporting space leaks in
-      <code>libc.so</code>.
-      </li><br><p>
-
-  <li><code>--weird-hacks=hack1,hack2,...</code>
-      <p>Pass miscellaneous hints to Valgrind which slightly modify the
-      simulated behaviour in nonstandard or dangerous ways, possibly
-      to help the simulation of strange features.  By default no hacks
-      are enabled.  Use with caution!  Currently known hacks are:
-      <p>
-      <ul>
-      <li><code>lax-ioctls</code> Be very lax about ioctl handling; the only
-          assumption is that the size is correct. Doesn't require the full
-          buffer to be initialized when writing.  Without this, using some
-          device drivers with a large number of strange ioctl commands becomes
-          very tiresome.
-      </ul>
-      </li><br><p>
-
-  <li><code>--signal-polltime=&lt;time&gt;</code> [default=50]<br>
-      <p>How often to poll for signals (in milliseconds).  Only applies for
-      older kernels that need signal routing.
-      </li><br><p>
-
-  <li><code>--lowlat-signals=no</code> [default]<br>
-      <code>--lowlat-signals=yes</code><br>
-      <p>Improve wake-up latency when a thread receives a signal.
-      </li><br><p>
-
-  <li><code>--lowlat-syscalls=no</code> [default]<br>
-      <code>--lowlat-syscalls=yes</code><br>
-      <p>Improve wake-up latency when a thread's syscall completes. 
-      </li><br><p>
-
-</ul>
-
-There are also some options for debugging Valgrind itself.  You
-shouldn't need to use them in the normal run of things.  Nevertheless:
-
-<ul>
-
-  <li><code>--single-step=no</code> [default]<br>
-      <code>--single-step=yes</code>
-      <p>When enabled, each x86 insn is translated separately into
-      instrumented code.  When disabled, translation is done on a
-      per-basic-block basis, giving much better translations.</li><br>
-      <p>
-
-  <li><code>--optimise=no</code><br>
-      <code>--optimise=yes</code> [default]
-      <p>When enabled, various improvements are applied to the
-      intermediate code, mainly aimed at allowing the simulated CPU's
-      registers to be cached in the real CPU's registers over several
-      simulated instructions.</li><br>
-      <p>
-
-  <li><code>--profile=no</code><br>
-      <code>--profile=yes</code> [default]
-      <p>When enabled, does crude internal profiling of Valgrind 
-      itself.  This is not for profiling your programs.  Rather it is
-      to allow the developers to assess where Valgrind is spending
-      its time.  The tools must be built for profiling for this to
-      work.
-      </li><br><p>
-
-  <li><code>--trace-syscalls=no</code> [default]<br>
-      <code>--trace-syscalls=yes</code>
-      <p>Enable/disable tracing of system call intercepts.</li><br>
-      <p>
-
-  <li><code>--trace-signals=no</code> [default]<br>
-      <code>--trace-signals=yes</code>
-      <p>Enable/disable tracing of signal handling.</li><br>
-      <p>
-
-  <li><code>--trace-sched=no</code> [default]<br>
-      <code>--trace-sched=yes</code>
-      <p>Enable/disable tracing of thread scheduling events.</li><br>
-      <p>
-
-  <li><code>--trace-pthread=none</code> [default]<br>
-      <code>--trace-pthread=some</code> <br>
-      <code>--trace-pthread=all</code>
-      <p>Specifies amount of trace detail for pthread-related events.</li><br>
-      <p>
-
-  <li><code>--trace-symtab=no</code> [default]<br>
-      <code>--trace-symtab=yes</code>
-      <p>Enable/disable tracing of symbol table reading.</li><br>
-      <p>
-
-  <li><code>--trace-malloc=no</code> [default]<br>
-      <code>--trace-malloc=yes</code>
-      <p>Enable/disable tracing of malloc/free (et al) intercepts.
-      </li><br>
-      <p>
-
-  <li><code>--trace-codegen=XXXXX</code> [default: 00000]
-      <p>Enable/disable tracing of code generation.  Code can be printed
-      at five different stages of translation;  each <code>X</code> element
-      must be 0 or 1.
-      </li><br>
-      <p>
-
-  <li><code>--dump-error=&lt;number></code> [default: inactive]
-      <p>After the program has exited, show gory details of the
-      translation of the basic block containing the &lt;number>'th
-      error context.  When used with <code>--single-step=yes</code>,
-      can show the exact x86 instruction causing an error.  This is
-      all fairly dodgy and doesn't work at all if threads are
-      involved.</li><br>
-      <p>
-</ul>
-
-<h4>Setting default options</h4>
-
-<p>Note that Valgrind also reads options from three places:
-<ul>
-<li>The file <code>~/.valgrindrc</code>
-<li>The environment variable <code>$VALGRIND_OPTS</code>
-<li>The file <code>./.valgrindrc</code>
-</ul>
-These are processed in the given order, before the command-line options.
-Options processed later override those processed earlier;  for example,
-options in <code>./.valgrindrc</code> will take precedence over those in
-<code>~/.valgrindrc</code>.  The first two are particularly useful for
-setting the default tool to use.
-<p>
-Any tool-specific options put in <code>$VALGRIND_OPTS</code> or the
-<code>.valgrindrc</code> files should be prefixed with the tool name and
-a colon.  For example, if you want Memcheck to always do leak checking,
-you can put the following entry in <code>~/.valgrindrc</code>:
-
-<pre>
-    --memcheck:leak-check=yes
-</pre>
-
-This will be ignored if any tool other than Memcheck is run.
-Without the <code>memcheck:</code> part, this will cause problems if you
-select other tools that don't understand <code>--leak-check=yes</code>.
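-
-<p>
-As a sketch -- the particular flags chosen here are only an example --
-a <code>~/.valgrindrc</code> which selects Memcheck as the default tool
-and sets a couple of defaults might look like this:
-<pre>
-    --tool=memcheck
-    --num-callers=12
-    --memcheck:leak-check=yes
-</pre>
-The core flags apply whichever tool is run; the
-<code>--memcheck:</code> line is ignored unless Memcheck is selected.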
-
-
-<a name="clientreq"></a>
-<h3>2.7&nbsp; The Client Request mechanism</h3>
-
-Valgrind has a trapdoor mechanism via which the client program can
-pass all manner of requests and queries to Valgrind and the current tool.
-Internally, this is used extensively to make malloc, free, signals, threads,
-etc, work, although you don't see that.
-<p>
-For your convenience, a subset of these so-called client requests is
-provided to allow you to tell Valgrind facts about the behaviour of
-your program, and conversely to make queries.  In particular, your
-program can tell Valgrind about changes in memory range permissions
-that Valgrind would not otherwise know about, and so can get Valgrind
-to do arbitrary custom checks.
-<p>
-Clients need to include a header file to make this work.  Which header file
-depends on which client requests you use.  Some client requests are handled by
-the core, and are defined in the header file <code>valgrind.h</code>.
-Tool-specific header files are named after the tool, e.g.
-<code>memcheck.h</code>.  All header files can be found in the
-<code>include</code> directory of wherever Valgrind was installed.
-<p>
-The macros in these header files have the magical property that
-they generate code in-line which Valgrind can spot.  However, the code
-does nothing when not run on Valgrind, so you are not forced to run
-your program on Valgrind just because you use the macros in this file.
-Also, you are not required to link your program with any extra
-supporting libraries.
-<p>
-Here is a brief description of the macros available in
-<code>valgrind.h</code>, which work with more than one tool (see the
-tool-specific documentation for explanations of the tool-specific macros).
-<ul>
-<li><code>RUNNING_ON_VALGRIND</code>: returns 1 if running on
-    Valgrind, 0 if running on the real CPU.
-<p>
-<li><code>VALGRIND_DISCARD_TRANSLATIONS</code>: discard translations
-    of code in the specified address range.  Useful if you are
-    debugging a JITter or some other dynamic code generation system.
-    After this call, attempts to execute code in the invalidated
-    address range will cause Valgrind to make new translations of that
-    code, which is probably the semantics you want.  Note that this is
-    implemented naively, and involves checking all 200191 entries in
-    the translation table to see if any of them overlap the specified
-    address range.  So try not to call it often, or performance will
-    nosedive.  Note that you can be clever about this: you only need
-    to call it when an area which previously contained code is
-    overwritten with new code.  You can choose to write code into
-    fresh memory, and just call this occasionally to discard large
-    chunks of old code all at once.
-    <p>
-    Warning: minimally tested, especially for tools other than Memcheck.
-<p>
-<li><code>VALGRIND_COUNT_ERRORS</code>: returns the number of errors
-    found so far by Valgrind.  Can be useful in test harness code when
-    combined with the <code>--log-fd=-1</code> option;  this runs
-    Valgrind silently, but the client program can detect when errors
-    occur.  Only useful for tools that report errors, e.g. it's useful for
-    Memcheck, but for Cachegrind it will always return zero because 
-    Cachegrind doesn't report errors.
-<p>
-<li><code>VALGRIND_MALLOCLIKE_BLOCK</code>: If your program manages its own
-    memory instead of using the standard
-    <code>malloc()</code>/<code>new</code>/<code>new[]</code>, tools that track
-    information about heap blocks will not do nearly as good a
-    job.  For example, Memcheck won't detect nearly as many errors, and the
-    error messages won't be as informative.  To improve this situation, use
-    this macro just after your custom allocator allocates some new memory.  See
-    the comments in <code>valgrind.h</code> for information on how to use it.
-<p>
-<li><code>VALGRIND_FREELIKE_BLOCK</code>: This should be used in conjunction 
-    with <code>VALGRIND_MALLOCLIKE_BLOCK</code>.  Again, see
-    <code>memcheck/memcheck.h</code> for information on how to use it.  
-<p>
-<li><code>VALGRIND_CREATE_MEMPOOL</code>: This is similar to
-    <code>VALGRIND_MALLOCLIKE_BLOCK</code>, but is tailored towards code
-    that uses memory pools.  See the comments in <code>valgrind.h</code>
-    for information on how to use it.
-<p>
-<li><code>VALGRIND_DESTROY_MEMPOOL</code>: This should be used in
-    conjunction with <code>VALGRIND_CREATE_MEMPOOL</code>.  Again, see the
-    comments in <code>valgrind.h</code> for information on how to use it.
-<p>
-<li><code>VALGRIND_MEMPOOL_ALLOC</code>: This should be used in
-    conjunction with <code>VALGRIND_CREATE_MEMPOOL</code>.  Again, see the
-    comments in <code>valgrind.h</code> for information on how to use it.
-<p>
-<li><code>VALGRIND_MEMPOOL_FREE</code>: This should be used in
-    conjunction with <code>VALGRIND_CREATE_MEMPOOL</code>.  Again, see the
-    comments in <code>valgrind.h</code> for information on how to use it.
-<p>
-<li><code>VALGRIND_NON_SIMD_CALL[0123]</code>: executes a function of 0, 1, 2
-     or 3 args in the client program on the <i>real</i> CPU, not the virtual
-     CPU that Valgrind normally runs code on.  These are used in various ways
-     internally to Valgrind.  They might be useful to client programs.
-     <b>Warning:</b> Only use these if you <i>really</i> know what you are
-     doing.
-<p>
-<li><code>VALGRIND_PRINTF(format, ...)</code>: printf a message to the
-    log file when running under Valgrind.  Nothing is output if not
-    running under Valgrind.  Returns the number of characters output.
-<p>
-<li><code>VALGRIND_PRINTF_BACKTRACE(format, ...)</code>: printf a message
-    to the log file along with a stack backtrace when running under
-    Valgrind.  Nothing is output if not running under Valgrind.
-    Returns the number of characters output.
-<p>
-</ul>
-Note that <code>valgrind.h</code> is included by all the tool-specific header
-files (such as <code>memcheck.h</code>), so you don't need to include it in
-your client if you include a tool-specific header.
-<p>
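-As a small illustration -- a sketch only, not one of the supplied
-examples -- the following C fragment uses two of the core macros
-described above.  It prints a message to Valgrind's log only when the
-program is actually running under Valgrind, and behaves normally
-otherwise:
-<pre>
-  #include &lt;stdio.h&gt;
-  #include "valgrind.h"   /* the core client-request macros */
-
-  int main ( void )
-  {
-     if (RUNNING_ON_VALGRIND) {
-        /* Only reached on the synthetic CPU; on the real CPU the
-           macro evaluates to zero and this branch is skipped. */
-        VALGRIND_PRINTF("myprog: running on Valgrind\n");
-     } else {
-        printf("myprog: running natively\n");
-     }
-     return 0;
-  }
-</pre>
-<p>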
-
-
-<a name="pthreads"></a>
-<h3>2.8&nbsp; Support for POSIX Pthreads</h3>
-
-Valgrind supports programs which use POSIX pthreads.  Getting this to
-work was technically challenging, but it now works well enough to run
-significant threaded applications.
-<p>
-It works as follows: threaded apps are (dynamically) linked against
-<code>libpthread.so</code>.  Usually this is the one installed with
-your Linux distribution.  Valgrind, however, supplies its own
-<code>libpthread.so</code> and automatically connects your program to
-it instead.
-<p>
-The fake <code>libpthread.so</code> and Valgrind cooperate to
-implement a user-space pthreads package.  This approach avoids the
-horrible problems of implementing a truly multiprocessor version of
-Valgrind, but it does mean that threaded apps run only on one CPU,
-even if you have a multiprocessor machine.
-<p>
-Valgrind schedules your threads in a round-robin fashion, with all
-threads having equal priority.  It switches threads every 50000 basic
-blocks (typically around 300000 x86 instructions), which means you'll
-get a much finer interleaving of thread executions than when run
-natively.  This in itself may cause your program to behave differently
-if you have concurrency, critical race, locking, or similar bugs.
-<p>
-As of the Valgrind-1.0 release, the state of pthread support was as follows:
-<ul>
-<li>Mutexes, condition variables, thread-specific data,
-    <code>pthread_once</code>, reader-writer locks, semaphores,
-    cleanup stacks, cancellation and thread detaching currently work.
-    Various attribute-like calls are handled but ignored; you get a
-    warning message.
-<p>
-<li>Currently the following syscalls are thread-safe (nonblocking):
-    <code>write</code> <code>read</code> <code>nanosleep</code>
-    <code>sleep</code> <code>select</code> <code>poll</code> 
-    <code>recvmsg</code> and
-    <code>accept</code>.
-<p>
-<li>Signals in pthreads are now handled properly(ish): 
-    <code>pthread_sigmask</code>, <code>pthread_kill</code>,
-    <code>sigwait</code> and <code>raise</code> are now implemented.
-    Each thread has its own signal mask, as POSIX requires.
-    It's a bit kludgey -- there's a system-wide pending signal set,
-    rather than one for each thread.  But hey.
-</ul>
-
-As of 18 May 02, the following threaded programs now work fine on my
-RedHat 7.2 box: Opera 6.0Beta2, KNode in KDE 3.0, Mozilla-0.9.2.1 and
-Galeon-0.11.3, both as supplied with RedHat 7.2.  Also Mozilla 1.0RC2.
-OpenOffice 1.0.  MySQL 3.something (the current stable release).
-
-
-
-<a name="signals"></a>
-<h3>2.9&nbsp; Handling of signals</h3>
-
-Valgrind provides suitable handling of signals, so, provided you stick
-to POSIX stuff, you should be ok.  Basic sigaction() and sigprocmask()
-are handled.  Signal handlers may return in the normal way or do
-longjmp(); both should work ok.  As specified by POSIX, a signal is
-blocked in its own handler.  Default actions for signals should work
-as before.  Etc, etc.
-
-<p>Under the hood, dealing with signals is a real pain, and Valgrind's
-simulation leaves much to be desired.  If your program does
-way-strange stuff with signals, bad things may happen.  If so, let me
-know.  I don't promise to fix it, but I'd at least like to be aware of
-it.
-
-
-
-<a name="install"></a>
-<h3>2.10&nbsp; Building and installing</h3>
-
-We now use the standard Unix <code>./configure</code>,
-<code>make</code>, <code>make install</code> mechanism, and I have
-attempted to ensure that it works on machines with kernel 2.4 or 2.6
-and glibc 2.2.X or 2.3.X.  I don't think there is much else to say.
-There are no options apart from the usual <code>--prefix</code> that
-you should give to <code>./configure</code>.
-
-<p>
-The <code>configure</code> script tests the version of the X server
-currently indicated by the current <code>$DISPLAY</code>.  This is a
-known bug.  The intention was to detect the version of the current
-XFree86 client libraries, so that correct suppressions could be
-selected for them, but instead the test checks the server version.
-This is just plain wrong.
-
-<p>
-If you are building a binary package of Valgrind for distribution,
-please read <code>README_PACKAGERS</code>.  It contains some important
-information.
-
-<p>
-Apart from that there is no excitement here.  Let me know if you have
-build problems.
-
-
-
-<a name="problems"></a>
-<h3>2.11&nbsp; If you have problems</h3>
-Contact us at <a href="http://valgrind.kde.org">valgrind.kde.org</a>.
-
-<p>See <a href="#limits">this section</a> for the known limitations of
-Valgrind, and for a list of programs which are known not to work on
-it.
-
-<p>The translator/instrumentor has a lot of assertions in it.  They
-are permanently enabled, and I have no plans to disable them.  If one
-of these breaks, please mail us!
-
-<p>If you get an assertion failure on the expression
-<code>chunkSane(ch)</code> in <code>vg_free()</code> in
-<code>vg_malloc.c</code>, this may have happened because your program
-wrote off the end of a malloc'd block, or before its beginning.
-Valgrind should have emitted a proper message to that effect before
-dying in this way.  This is a known problem which I should fix.
-
-<p>
-Read the file <code>FAQ.txt</code> in the source distribution, for
-more advice about common problems, crashes, etc.
-
-<a name="limits"></a>
-<h3>2.12&nbsp; Limitations</h3>
-
-The following list of limitations seems depressingly long.  However,
-most programs actually work fine.
-
-<p>Valgrind will run x86-GNU/Linux ELF dynamically linked binaries, on
-a kernel 2.4.X or 2.6.X system, subject to the following constraints:
-
-<ul>
-  <li>No support for 3DNow instructions.  If the translator encounters
-      these, Valgrind will generate a SIGILL when the instruction is
-      executed.</li>
-      <p>
-
-  <li>Pthreads support is improving, but there are still significant
-      limitations in that department.  See the section above on
-      Pthreads.  Note that your program must be dynamically linked
-      against <code>libpthread.so</code>, so that Valgrind can
-      substitute its own implementation at program startup time.  If
-      you're statically linked against it, things will fail
-      badly.</li>
-      <p>
-
-  <li>Memcheck assumes that the floating point registers are
-      not used as intermediaries in memory-to-memory copies, so it
-      immediately checks definedness of values loaded from memory by
-      floating-point loads.  If you want to write code which copies
-      around possibly-uninitialised values, you must ensure these
-      travel through the integer registers, not the FPU.</li>
-      <p>
-
-  <li>If your program does its own memory management, rather than
-      using malloc/new/free/delete, it should still work, but
-      Valgrind's error checking won't be so effective.
-      If you describe your program's memory management scheme
-      using "client requests" (Section 3.7 of this manual),
-      Memcheck can do better.  Nevertheless, using malloc/new
-      and free/delete is still the best approach.
-      </li>
-      <p>
-
-  <li>Valgrind's signal simulation is not as robust as it could be.
-      Basic POSIX-compliant sigaction and sigprocmask functionality is
-      supplied, but it's conceivable that things could go badly awry
-      if you do weird things with signals.  Workaround: don't.
-      Programs that do non-POSIX signal tricks are in any case
-      inherently unportable, so should be avoided if
-      possible.</li>
-      <p>
-
-  <li>Programs which switch stacks are not well handled.  Valgrind
-      does have support for this, but I don't have great faith in it.
-      It's difficult -- there's no cast-iron way to decide whether a
-      large change in %esp is as a result of the program switching
-      stacks, or merely allocating a large object temporarily on the
-      current stack -- yet Valgrind needs to handle the two situations
-      differently.</li>
-      <p>
-
-  <li>x86 instructions, and system calls, have been implemented on
-      demand.  So it's possible, although unlikely, that a program
-      will fall over with a message to that effect.  If this happens,
-      please report ALL the details printed out, so we can try and
-      implement the missing feature.</li>
-      <p>
-
-  <li>x86 floating point works correctly, but floating-point code may
-      run even more slowly than integer code, due to my simplistic
-      approach to FPU emulation.</li>
-      <p>
-
-  <li>Memory consumption of your program is majorly increased whilst
-      running under Valgrind.  This is due to the large amount of
-      administrative information maintained behind the scenes.  Another
-      cause is that Valgrind dynamically translates the original
-      executable.  Translated, instrumented code is 14-16 times larger
-      than the original (!) so you can easily end up with 30+ MB of
-      translations when running (eg) a web browser.
-      </li>
-      <p>
-
-  <li>Valgrind can handle dynamically-generated code just fine.
-      However, if you regenerate code over the top of old code 
-      (ie. at the same memory addresses) Valgrind will not realise the 
-      code has changed, and will run its old translations, which will 
-      be out-of-date.  You need to use the VALGRIND_DISCARD_TRANSLATIONS 
-      client request in that case. For the same reason gcc's
-      <a href="http://gcc.gnu.org/onlinedocs/gcc/Nested-Functions.html">
-      trampolines for nested functions</a> are currently
-      unsupported, see <a href="http://bugs.kde.org/show_bug.cgi?id=69511">
-      bug 69511</a>.
-      </li>
-      <p>
-      
-</ul>
-
-Programs which are known not to work are:
-
-<ul>
-  <li>emacs starts up but immediately concludes it is out of memory
-      and aborts.  Emacs has its own memory-management scheme, but I
-      don't understand why this should interact so badly with
-      Valgrind.  Emacs works fine if you build it to use the standard
-      malloc/free routines.</li><br>
-      <p>
-</ul>
-
-Known platform-specific limitations, as of release 1.0.0:
-
-<ul>
-  <li>On Red Hat 7.3, there have been reports of link errors (at
-      program start time) for threaded programs using
-      <code>__pthread_clock_gettime</code> and
-      <code>__pthread_clock_settime</code>.  This appears to be due to
-      <code>/lib/librt-2.2.5.so</code> needing them.  Unfortunately I
-      do not understand enough about this problem to fix it properly,
-      and I can't reproduce it on my test RedHat 7.3 system.  Please
-      mail me if you have more information / understanding.  </li><br>
-      <p>
-</ul>
-
-
-
-<a name="howworks"></a>
-<h3>2.13&nbsp; How it works -- a rough overview</h3>
-Some gory details, for those with a passion for gory details.  You
-don't need to read this section if all you want to do is use Valgrind.
-What follows is an outline of the machinery.  A more detailed
-(and somewhat out of date) description is to be found
-<A HREF="mc_techdocs.html">here</A>.
-
-<a name="startb"></a>
-<h4>2.13.1&nbsp; Getting started</h4>
-
-Valgrind is compiled into a shared object, valgrind.so.  The shell
-script valgrind sets the LD_PRELOAD environment variable to point to
-valgrind.so.  This causes the .so to be loaded as an extra library to
-any subsequently executed dynamically-linked ELF binary, viz, the
-program you want to debug.
-
-<p>The dynamic linker allows each .so in the process image to have an
-initialisation function which is run before main().  It also allows
-each .so to have a finalisation function run after main() exits.
-
-<p>When valgrind.so's initialisation function is called by the dynamic
-linker, the synthetic CPU starts up.  The real CPU remains locked
-in valgrind.so for the entire rest of the program, but the synthetic
-CPU returns from the initialisation function.  Startup of the program
-now continues as usual -- the dynamic linker calls all the other .so's
-initialisation routines, and eventually runs main().  This all runs on
-the synthetic CPU, not the real one, but the client program cannot
-tell the difference.
-
-<p>Eventually main() exits, so the synthetic CPU calls valgrind.so's
-finalisation function.  Valgrind detects this, and uses it as its cue
-to exit.  It prints summaries of all errors detected, possibly checks
-for memory leaks, and then exits the finalisation routine, but now on
-the real CPU.  The synthetic CPU has now lost control -- permanently
--- so the program exits back to the OS on the real CPU, just as it
-would have done anyway.
-
-<p>On entry, Valgrind switches stacks, so it runs on its own stack.
-On exit, it switches back.  This means that the client program
-continues to run on its own stack, so we can switch back and forth
-between running it on the simulated and real CPUs without difficulty.
-This was an important design decision, because it makes it easy (well,
-significantly less difficult) to debug the synthetic CPU.
-
-
-<a name="engine"></a>
-<h4>2.13.2&nbsp; The translation/instrumentation engine</h4>
-
-Valgrind does not directly run any of the original program's code.  Only
-instrumented translations are run.  Valgrind maintains a translation
-table, which allows it to find the translation quickly for any branch
-target (code address).  If no translation has yet been made, the
-translator - a just-in-time translator - is summoned.  This makes an
-instrumented translation, which is added to the collection of
-translations.  Subsequent jumps to that address will use this
-translation.
-
-<p>Valgrind no longer directly supports detection of self-modifying
-code.  Such checking is expensive, and in practice (fortunately)
-almost no applications need it.  However, to help people who are
-debugging dynamic code generation systems, there is a Client Request 
-(basically a macro you can put in your program) which directs Valgrind
-to discard translations in a given address range.  So Valgrind can
-still work in this situation provided the client tells it when
-code has become out-of-date and needs to be retranslated.
-
-<p>The JITter translates basic blocks -- blocks of straight-line-code
--- as single entities.  To minimise the considerable difficulties of
-dealing with the x86 instruction set, x86 instructions are first
-translated to a RISC-like intermediate code, similar to sparc code,
-but with an infinite number of virtual integer registers.  Initially
-each insn is translated separately, and there is no attempt at
-instrumentation.
-
-<p>The intermediate code is improved, mostly so as to try and cache
-the simulated machine's registers in the real machine's registers over
-several simulated instructions.  This is often very effective.  Also,
-we try to remove redundant updates of the simulated machine's
-condition-code register.
-
-<p>The intermediate code is then instrumented, giving more
-intermediate code.  There are a few extra intermediate-code operations
-to support instrumentation; it is all refreshingly simple.  After
-instrumentation there is a cleanup pass to remove redundant value
-checks.
-
-<p>This gives instrumented intermediate code which mentions arbitrary
-numbers of virtual registers.  A linear-scan register allocator is
-used to assign real registers and possibly generate spill code.  All
-of this is still phrased in terms of the intermediate code.  This
-machinery is inspired by the work of Reuben Thomas (Mite).
-
-<p>Then, and only then, is the final x86 code emitted.  The
-intermediate code is carefully designed so that x86 code can be
-generated from it without need for spare registers or other
-inconveniences.
-
-<p>The translations are managed using a traditional LRU-based caching
-scheme.  The translation cache has a default size of about 14MB.
-
-<a name="track"></a>
-
-<h4>2.13.3&nbsp; Tracking the status of memory</h4> Each byte in the
-process' address space has nine bits associated with it: one A bit and
-eight V bits.  The A and V bits for each byte are stored using a
-sparse array, which flexibly and efficiently covers arbitrary parts of
-the 32-bit address space without imposing significant space or
-performance overheads for the parts of the address space never
-visited.  The scheme used, and speedup hacks, are described in detail
-at the top of the source file vg_memory.c, so you should read that for
-the gory details.
-
-<a name="sys_calls"></a>
-
-<h4>2.13.4 System calls</h4>
-All system calls are intercepted.  The memory status map is consulted
-before and updated after each call.  It's all rather tiresome.  See
-coregrind/vg_syscalls.c for details.
-
-<a name="sys_signals"></a>
-
-<h4>2.13.5&nbsp; Signals</h4>
-All system calls to sigaction() and sigprocmask() are intercepted.  If
-the client program is trying to set a signal handler, Valgrind makes a
-note of the handler address and which signal it is for.  Valgrind then
-arranges for the same signal to be delivered to its own handler.
-
-<p>When such a signal arrives, Valgrind's own handler catches it, and
-notes the fact.  At a convenient safe point in execution, Valgrind
-builds a signal delivery frame on the client's stack and runs its
-handler.  If the handler longjmp()s, there is nothing more to be said.
-If the handler returns, Valgrind notices this, zaps the delivery
-frame, and carries on where it left off before delivering the signal.
-
-<p>The purpose of this nonsense is that setting signal handlers
-essentially amounts to giving callback addresses to the Linux kernel.
-We can't allow this to happen, because if it did, signal handlers
-would run on the real CPU, not the simulated one.  This means the
-checking machinery would not operate during the handler run, and,
-worse, memory permissions maps would not be updated, which could cause
-spurious error reports once the handler had returned.
-
-<p>An even worse thing would happen if the signal handler longjmp'd
-rather than returned: Valgrind would completely lose control of the
-client program.
-
-<p>Upshot: we can't allow the client to install signal handlers
-directly.  Instead, Valgrind must catch, on behalf of the client, any
-signal the client asks to catch, and must deliver it to the client on
-the simulated CPU, not the real one.  This involves considerable
-gruesome fakery; see vg_signals.c for details.
-<p>
-
-
-
-<a name="example"></a>
-<h3>2.14&nbsp; An example run</h3>
-This is the log for a run of a small program using Memcheck.
-The program is in fact correct, and the reported error is the
-result of a potentially serious code generation bug in GNU g++
-(snapshot 20010527).
-<pre>
-sewardj@phoenix:~/newmat10$
-~/Valgrind-6/valgrind -v ./bogon 
-==25832== Valgrind 0.10, a memory error detector for x86 RedHat 7.1.
-==25832== Copyright (C) 2000-2001, and GNU GPL'd, by Julian Seward.
-==25832== Startup, with flags:
-==25832== --suppressions=/home/sewardj/Valgrind/redhat71.supp
-==25832== reading syms from /lib/ld-linux.so.2
-==25832== reading syms from /lib/libc.so.6
-==25832== reading syms from /mnt/pima/jrs/Inst/lib/libgcc_s.so.0
-==25832== reading syms from /lib/libm.so.6
-==25832== reading syms from /mnt/pima/jrs/Inst/lib/libstdc++.so.3
-==25832== reading syms from /home/sewardj/Valgrind/valgrind.so
-==25832== reading syms from /proc/self/exe
-==25832== loaded 5950 symbols, 142333 line number locations
-==25832== 
-==25832== Invalid read of size 4
-==25832==    at 0x8048724: _ZN10BandMatrix6ReSizeEiii (bogon.cpp:45)
-==25832==    by 0x80487AF: main (bogon.cpp:66)
-==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
-==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
-==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd
-==25832==
-==25832== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
-==25832== malloc/free: in use at exit: 0 bytes in 0 blocks.
-==25832== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
-==25832== For a detailed leak analysis, rerun with: --leak-check=yes
-==25832==
-==25832== exiting, did 1881 basic blocks, 0 misses.
-==25832== 223 translations, 3626 bytes in, 56801 bytes out.
-</pre>
-<p>The GCC folks fixed this about a week before gcc-3.0 shipped.
-<p>
-
-<a name="warnings"></a>
-<h3>2.15&nbsp; Warning messages you might see</h3>
-
-Most of these only appear if you run in verbose mode (enabled by
-<code>-v</code>):
-<ul>
-<li> <code>More than 50 errors detected.  Subsequent errors
-     will still be recorded, but in less detail than before.</code>
-     <br>
-     After 50 different errors have been shown, Valgrind becomes 
-     more conservative about collecting them.  It then requires only 
-     the program counters in the top two stack frames to match when
-     deciding whether or not two errors are really the same one.
-     Prior to this point, the PCs in the top four frames are required
-     to match.  This hack has the effect of slowing down the
-     appearance of new errors after the first 50.  The 50 constant can
-     be changed by recompiling Valgrind.
-<p>
-<li> <code>More than 300 errors detected.  I'm not reporting any more.
-     Final error counts may be inaccurate.  Go fix your
-     program!</code>
-     <br>
-     After 300 different errors have been detected, Valgrind ignores
-     any more.  It seems unlikely that collecting even more different
-     ones would be of practical help to anybody, and it avoids the
-     danger that Valgrind spends more and more of its time comparing
-     new errors against an ever-growing collection.  As above, the 300
-     number is a compile-time constant.
-<p>
-<li> <code>Warning: client switching stacks?</code>
-     <br>
-     Valgrind spotted such a large change in the stack pointer, %esp,
-     that it guesses the client is switching to a different stack.
-     At this point it makes a kludgey guess where the base of the new
-     stack is, and sets memory permissions accordingly.  You may get
-     many bogus error messages following this, if Valgrind guesses
-     wrong.  At the moment "large change" is defined as a change of
-     more than 2000000 in the value of the %esp (stack pointer)
-     register.
-<p>
-<li> <code>Warning: client attempted to close Valgrind's logfile fd &lt;number>
-     </code>
-     <br>
-     Valgrind doesn't allow the client
-     to close the logfile, because you'd never see any diagnostic
-     information after that point.  If you see this message,
-     you may want to use the <code>--log-fd=&lt;number></code>
-     option to specify a different logfile file-descriptor number,
-     or send the output elsewhere with <code>--log-file</code> or
-     <code>--log-socket</code>.
-<p>
-<li> <code>Warning: noted but unhandled ioctl &lt;number></code>
-     <br>
-     Valgrind observed a call to one of the vast family of
-     <code>ioctl</code> system calls, but did not modify its
-     memory status info (because I have not yet got round to it).
-     The call will still have gone through, but you may get spurious
-     errors after this as a result of the non-update of the memory info.
-<p>
-<li> <code>Warning: set address range perms: large range &lt;number></code>
-     <br> 
-     Diagnostic message, mostly for benefit of the valgrind
-     developers, to do with memory permissions.
-</ul>
-
-</body>
-</html>
-
-
-
diff --git a/coregrind/docs/coregrind_intro.html b/coregrind/docs/coregrind_intro.html
deleted file mode 100644
index 662e205..0000000
--- a/coregrind/docs/coregrind_intro.html
+++ /dev/null
@@ -1,162 +0,0 @@
-
-
-<a name="intro"></a>
-<h2>1&nbsp; Introduction</h2>
-
-<a name="intro-overview"></a>
-<h3>1.1&nbsp; An overview of Valgrind</h3>
-
-Valgrind is a flexible system for debugging and profiling Linux-x86
-executables.  The system consists of a core, which provides a synthetic
-x86 CPU in software, and a series of tools, each of which performs some
-kind of debugging, profiling, or similar task.  The architecture is
-modular, so that new tools can be created easily and without disturbing
-the existing structure.
-
-<p>
-A number of useful tools are supplied as standard.  In summary, these
-are:
-
-<ul>
-<li><b>Memcheck</b> detects memory-management problems in your programs.
-    All reads and writes of memory are checked, and calls to
-    malloc/new/free/delete are intercepted. As a result, Memcheck can
-    detect the following problems:
-    <ul>
-        <li>Use of uninitialised memory</li>
-        <li>Reading/writing memory after it has been free'd</li>
-        <li>Reading/writing off the end of malloc'd blocks</li>
-        <li>Reading/writing inappropriate areas on the stack</li>
-        <li>Memory leaks -- where pointers to malloc'd blocks are lost
-            forever</li>
-        <li>Mismatched use of malloc/new/new [] vs free/delete/delete []</li>
-        <li>Overlapping <code>src</code> and <code>dst</code> pointers in 
-            <code>memcpy()</code> and related functions</li>
-        <li>Some misuses of the POSIX pthreads API</li>
-    </ul>
-    <p>
-    Problems like these can be difficult to find by other means, often
-    lying undetected for long periods, then causing occasional,
-    difficult-to-diagnose crashes.
-<p>
-<li><b>Addrcheck</b> is a lightweight version of
-    Memcheck.  It is identical to Memcheck except
-    for the single detail that it does not do any uninitialised-value
-    checks.  All of the other checks -- primarily the fine-grained
-    address checking -- are still done.  The downside of this is that
-    you don't catch the uninitialised-value errors that
-    Memcheck can find.
-    <p>
-    But the upside is significant: programs run about twice as fast as
-    they do on Memcheck, and a lot less memory is used.  It
-    still finds reads/writes of freed memory, memory off the end of
-    blocks and in other invalid places, bugs which you really want to
-    find before release!
-    <p>
-    Because Addrcheck is lighter and faster than
-    Memcheck, you can run more programs for longer, and so you
-    may be able to cover more test scenarios.  Addrcheck was 
-    created because one of us (Julian) wanted to be able to 
-    run a complete KDE desktop session with checking.  As of early 
-    November 2002, we have been able to run KDE-3.0.3 on a 1.7 GHz P4
-    with 512 MB of memory, using Addrcheck.  Although the
-    result is not stellar, it's quite usable, and it seems plausible
-    to run KDE for long periods at a time like this, collecting up
-    all the addressing errors that appear.
-<p>
-<li><b>Cachegrind</b> is a cache profiler.  It performs detailed simulation of
-    the I1, D1 and L2 caches in your CPU and so can accurately
-    pinpoint the sources of cache misses in your code.  If you desire,
-    it will show the number of cache misses, memory references and
-    instructions accruing to each line of source code, with
-    per-function, per-module and whole-program summaries.  If you ask
-    really nicely it will even show counts for each individual x86
-    instruction.
-    <p>
-    Cachegrind auto-detects your machine's cache configuration
-    using the <code>CPUID</code> instruction, and so needs no further
-    configuration info, in most cases.
-    <p>
-    Cachegrind is nicely complemented by Josef Weidendorfer's
-    amazing KCacheGrind visualisation tool (<A
-    HREF="http://kcachegrind.sourceforge.net">
-    http://kcachegrind.sourceforge.net</A>), a KDE application which
-    presents these profiling results in a graphical and
-    easier-to-understand form.
-<p>
-<li><b>Helgrind</b> finds data races in multithreaded programs.
-    Helgrind looks for
-    memory locations which are accessed by more than one (POSIX
-    p-)thread, but for which no consistently used (pthread_mutex_)lock
-    can be found.  Such locations are indicative of missing
-    synchronisation between threads, and could cause hard-to-find
-    timing-dependent problems.
-    <p>
-    Helgrind ("Hell's Gate", in Norse mythology) implements the
-    so-called "Eraser" data-race-detection algorithm, along with
-    various refinements (thread-segment lifetimes) which reduce the
-    number of false errors it reports.  It is as yet somewhat of an
-    experimental tool, so your feedback is especially welcomed here.
-    <p>
-    Helgrind has been hacked on extensively by Jeremy
-    Fitzhardinge, and we have him to thank for getting it to a
-    releasable state.
-</ul>
-
-A number of minor tools (<b>corecheck</b>, <b>lackey</b> and
-<b>Nulgrind</b>) are also supplied.  These aren't particularly useful --
-they exist to illustrate how to create simple tools and to help the
-valgrind developers in various ways.
-
-
-<p>
-Valgrind is closely tied to details of the CPU, operating system and
-to a lesser extent, the compiler and basic C libraries. This makes it
-difficult to make it portable, so we have chosen at the outset to
-concentrate on what we believe to be a widely used platform: Linux on
-x86s.  Valgrind uses the standard Unix <code>./configure</code>,
-<code>make</code>, <code>make install</code> mechanism, and we have
-attempted to ensure that it works on machines with kernel 2.2 or 2.4
-and glibc 2.1.X, 2.2.X or 2.3.1.  This should cover the vast majority
-of modern Linux installations.  Note that glibc-2.3.2+, with the
-NPTL (Native Posix Threads Library) package won't work.  We hope to
-be able to fix this, but it won't be easy.
-
-
-<p>
-Valgrind is licensed under the GNU General Public License, version
-2. Read the file LICENSE in the source distribution for details.  Some
-of the PThreads test cases, <code>pth_*.c</code>, are taken from
-"Pthreads Programming" by Bradford Nichols, Dick Buttlar &amp;
-Jacqueline Proulx Farrell, ISBN 1-56592-115-1, published by O'Reilly
-&amp; Associates, Inc.
-
-
-
-
-<a name="intro-navigation"></a>
-<h3>1.2&nbsp; How to navigate this manual</h3>
-
-The Valgrind distribution consists of the Valgrind core, upon which are
-built Valgrind tools, which do different kinds of debugging and
-profiling.  This manual is structured similarly.  
-
-<p>
-First, we describe the Valgrind core, how to use it, and the flags it
-supports.  Then, each tool has its own chapter in this manual.  You only
-need to read the documentation for the core and for the tool(s) you
-actually use, although you may find it helpful to be at least a little
-bit familiar with what all tools do.  If you're new to all this, you 
-probably want to run the Memcheck tool.  If you want to write a new tool,
-read <A HREF="coregrind_tools.html">this</A>.
-
-<p>
-Be aware that the core understands some command line flags, and the
-tools have their own flags which they know about.  This means
-there is no central place describing all the flags that are accepted
--- you have to read the flags documentation both for 
-<A HREF="coregrind_core.html#core">Valgrind's core</A>
-and for the tool you want to use.
-
-<p>
-
diff --git a/coregrind/docs/coregrind_tools.html b/coregrind/docs/coregrind_tools.html
deleted file mode 100644
index e185244..0000000
--- a/coregrind/docs/coregrind_tools.html
+++ /dev/null
@@ -1,735 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>Valgrind</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="title">&nbsp;</a>
-<h1 align=center>Valgrind Tools</h1>
-<center>
-  A guide to writing new tools for Valgrind<br>
-  This guide was last updated on 20030520
-</center>
-<p>
-
-<center>
-<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-Nick Nethercote
-<p>
-Valgrind is licensed under the GNU General Public License, 
-version 2<br>
-An open-source tool for supervising execution of Linux-x86 executables.
-</center>
-
-<p>
-
-<hr width="100%">
-<a name="contents"></a>
-<h2>Contents of this manual</h2>
-
-<h4>1&nbsp; <a href="#intro">Introduction</a></h4>
-    1.1&nbsp; <a href="#supexec">Supervised Execution</a><br>
-    1.2&nbsp; <a href="#tools">Tools</a><br>
-    1.3&nbsp; <a href="#execspaces">Execution Spaces</a><br>
-
-<h4>2&nbsp; <a href="#writingatool">Writing a Tool</a></h4>
-    2.1&nbsp; <a href="#whywriteatool">Why write a tool?</a><br>
-    2.2&nbsp; <a href="#suggestedtools">Suggested tools</a><br>
-    2.3&nbsp; <a href="#howtoolswork">How tools work</a><br>
-    2.4&nbsp; <a href="#gettingcode">Getting the code</a><br>
-    2.5&nbsp; <a href="#gettingstarted">Getting started</a><br>
-    2.6&nbsp; <a href="#writingcode">Writing the code</a><br>
-    2.7&nbsp; <a href="#init">Initialisation</a><br>
-    2.8&nbsp; <a href="#instr">Instrumentation</a><br>
-    2.9&nbsp; <a href="#fini">Finalisation</a><br>
-    2.10&nbsp; <a href="#otherimportantinfo">Other important information</a><br>
-    2.11&nbsp; <a href="#wordsofadvice">Words of advice</a><br>
-
-<h4>3&nbsp; <a href="#advancedtopics">Advanced Topics</a></h4>
-    3.1&nbsp; <a href="#suppressions">Suppressions</a><br>
-    3.2&nbsp; <a href="#documentation">Documentation</a><br>
-    3.3&nbsp; <a href="#regressiontests">Regression tests</a><br>
-    3.4&nbsp; <a href="#profiling">Profiling</a><br>
-    3.5&nbsp; <a href="#othermakefilehackery">Other makefile hackery</a><br>
-    3.6&nbsp; <a href="#interfaceversions">Core/tool interface versions</a><br>
-
-<h4>4&nbsp; <a href="#finalwords">Final Words</a></h4>
-
-<hr width="100%">
-
-<a name="intro"></a>
-<h2>1&nbsp; Introduction</h2>
-
-<a name="supexec"></a>
-<h3>1.1&nbsp; Supervised Execution</h3>
-
-Valgrind provides a generic infrastructure for supervising the execution of
-programs.  This is done by providing a way to instrument programs in very
-precise ways, making it relatively easy to support activities such as dynamic
-error detection and profiling.<p>
-
-Although writing a tool is not easy, and requires learning quite a few things
-about Valgrind, it is much easier than instrumenting a program from scratch
-yourself.
-
-<a name="tools"></a>
-<h3>1.2&nbsp; Tools</h3>
-The key idea behind Valgrind's architecture is the division between its
-``core'' and ``tools''.
-<p>
-The core provides the common low-level infrastructure to support program
-instrumentation, including the x86-to-x86 JIT compiler, low-level memory
-manager, signal handling and a scheduler (for pthreads).   It also provides
-certain services that are useful to some but not all tools, such as support
-for error recording and suppression.
-<p>
-But the core leaves certain operations undefined, which must be filled by tools.
-Most notably, tools define how program code should be instrumented.  They can
-also define certain variables to indicate to the core that they would like to
-use certain services, or be notified when certain interesting events occur.
-But the core takes care of all the hard work.
-<p>
-
-<a name="execspaces"></a>
-<h3>1.3&nbsp; Execution Spaces</h3>
-An important concept to understand before writing a tool is that there are
-three spaces in which program code executes:
-
-<ol>
-  <li>User space: this covers most of the program's execution.  The tool is
-      given the code and can instrument it any way it likes, providing (more or
-      less) total control over the code.<p>
-
-      Code executed in user space includes all the program code, almost all of
-      the C library (including things like the dynamic linker), and almost
-      all parts of all other libraries.
-  </li><p>
-
-  <li>Core space: a small proportion of the program's execution takes place
-      entirely within Valgrind's core.  This includes:<p>
-
-      <ul>
-        <li>Dynamic memory management (<code>malloc()</code> etc.)</li>
-
-        <li>Pthread operations and scheduling</li>
-
-        <li>Signal handling</li>
-      </ul><p>
-
-      A tool has no control over these operations;  it never ``sees'' the code
-      doing this work and thus cannot instrument it.  However, the core
-      provides hooks so a tool can be notified when certain interesting events
-      happen, for example when dynamic memory is allocated or freed, the
-      stack pointer is changed, or a pthread mutex is locked, etc.<p>
-
-      Note that these hooks only notify tools of events relevant to user 
-      space.  For example, when the core allocates some memory for its own use,
-      the tool is not notified of this, because it's not directly part of the
-      supervised program's execution.
-  </li><p>
-      
-  <li>Kernel space: execution in the kernel.  Two kinds:<p>
-   
-      <ol>
-        <li>System calls:  can't be directly observed by either the tool or the
-            core.  But the core does have some idea of what happens to the
-            arguments, and it provides hooks for a tool to wrap system calls.
-        </li><p>
-
-        <li>Other: all other kernel activity (e.g. process scheduling) is
-            totally opaque and irrelevant to the program.
-        </li><p>
-      </ol>
-  </li><p>
-
-  It should be noted that a tool only has direct control over code executed in
-  user space.  This is the vast majority of code executed, but it is not
-  absolutely all of it, so any profiling information recorded by a tool won't
-  be totally accurate.
-</ol>
-
-
-<a name="writingatool"></a>
-<h2>2&nbsp; Writing a Tool</h2>
-
-<a name="whywriteatool"></a>
-<h3>2.1&nbsp; Why write a tool?</h3>
-
-Before you write a tool, you should have some idea of what it should do.  What
-is it you want to know about your programs of interest?  Consider some existing
-tools:
-
-<ul>
-  <li>memcheck: among other things, performs fine-grained validity and
-      addressability checks of every memory reference performed by the program
-      </li><p>
-
-  <li>addrcheck: performs lighter-weight addressability checks of every memory
-      reference performed by the program</li><p>
-
-  <li>cachegrind: tracks every instruction and memory reference to simulate
-      instruction and data caches, tracking cache accesses and misses that
-      occur on every line in the program</li><p>
-
-  <li>helgrind: tracks every memory access and mutex lock/unlock to determine
-      if a program contains any data races</li><p>
-
-  <li>lackey: does simple counting of various things: the number of calls to a
-      particular function (<code>_dl_runtime_resolve()</code>);  the number of
-      basic blocks, x86 instructions and UCode instructions executed;  the number
-      of branches executed and the proportion of those which were taken.</li><p>
-</ul>
-
-These examples give a reasonable idea of what kinds of things Valgrind can be
-used for.  The instrumentation can range from very lightweight (e.g. counting
-the number of times a particular function is called) to very intrusive (e.g.
-memcheck's memory checking).
-
-
-<a name="suggestedtools"></a>
-<h3>2.2&nbsp; Suggested tools</h3>
-
-Here is a list of ideas we have had for tools that should not be too hard to
-implement.
-
-<ul>
-  <li>branch profiler: A machine's branch prediction hardware could be
-      simulated, and each branch annotated with the number of predicted and
-      mispredicted branches.  Would be implemented quite similarly to
-      Cachegrind, and could reuse the <code>cg_annotate</code> script to
-      annotate source code.<p>
-
-      The biggest difficulty with this is the simulation;  the chip-makers
-      are very cagey about how their chips do branch prediction.  But
-      implementing one or more of the basic algorithms could still give good
-      information.
-      </li><p>
-
-  <li>coverage tool:  Cachegrind can already be used for doing test coverage,
-      but it's massive overkill to use it just for that.<p>
-
-      It would be easy to write a coverage tool that records how many times
-      each basic block was executed.  Again, the <code>cg_annotate</code>
-      script could be used for annotating source code with the gathered
-      information.  However, <code>cg_annotate</code> is only designed to work
-      with single program runs.  It could be extended relatively easily
-      to deal with multiple runs of a program, so that the coverage of a whole
-      test suite could be determined.<p>
-
-      In addition to the standard coverage information, such a tool could
-      record extra information that would help a user generate test cases to
-      exercise unexercised paths.  For example, for each conditional branch,
-      the tool could record all inputs to the conditional test, and print these
-      out when annotating.<p>
-  
-  <li>run-time type checking: A nice example of a dynamic checker is given
-      in this paper:
-
-      <blockquote>
-      Debugging via Run-Time Type Checking<br>
-      Alexey Loginov, Suan Hsi Yong, Susan Horwitz and Thomas Reps<br>
-      Proceedings of Fundamental Approaches to Software Engineering<br>
-      April 2001.
-      </blockquote>
-
-      Similar is the tool described in this paper:
-
-      <blockquote>
-      Run-Time Type Checking for Binary Programs<br>
-      Michael Burrows, Stephen N. Freund, Janet L. Wiener<br>
-      Proceedings of the 12th International Conference on Compiler Construction
-      (CC 2003)<br>
-      April 2003.
-      </blockquote>
-
-      These approaches can find quite a range of bugs, particularly in C and C++
-      programs, and could be implemented quite nicely as a Valgrind tool.<p>
-
-      Ways to speed up this run-time type checking are described in this paper:
-
-      <blockquote>
-      Reducing the Overhead of Dynamic Analysis<br>
-      Suan Hsi Yong and Susan Horwitz<br>
-      Proceedings of Runtime Verification '02<br>
-      July 2002.
-      </blockquote>
-
-      Valgrind's client requests could be used to pass information to a tool
-      about which elements need instrumentation and which don't.
-      </li><p>
-</ul>
-
-We would love to hear from anyone who implements these or other tools.
-
-<a name="howtoolswork"></a>
-<h3>2.3&nbsp; How tools work</h3>
-
-Tools must define various functions for instrumenting programs that are called
-by Valgrind's core, yet they must be implemented in such a way that they can be
-written and compiled without touching Valgrind's core.  This is important,
-because one of our aims is to allow people to write and distribute their own
-tools that can be plugged into Valgrind's core easily.<p>
-
-This is achieved by packaging each tool into a separate shared object which is
-then loaded ahead of the core shared object <code>valgrind.so</code>, using the
-dynamic linker's <code>LD_PRELOAD</code> variable.  Any functions defined in
-the tool that share a name with a function defined in the core (such as
-the instrumentation function <code>TL_(instrument)()</code>) override the
-core's definition.  Thus the core can call the necessary tool functions.<p>
-
-This magic is all done for you;  the shared object used is chosen with the
-<code>--tool</code> option to the <code>valgrind</code> startup script.  The
-default tool used is <code>memcheck</code>, Valgrind's original memory checker.
-
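-The override trick itself is ordinary dynamic-linker behaviour.  As a rough,
-hypothetical illustration (none of these files or names are real Valgrind
-code), a preloaded shared object wins symbol lookup over one loaded later:
-
-<pre>
-/* mytool.c  --  built as libmytool.so, e.g.
- *     gcc -shared -fPIC -o libmytool.so mytool.c
- * If some other shared object also defines instrument_hook(), running a
- * program with
- *     LD_PRELOAD=./libmytool.so ./someprog
- * makes the dynamic linker resolve calls to instrument_hook() to this
- * definition, just as a tool's definitions override the core's. */
-#include &lt;stdio.h>
-
-void instrument_hook ( void )
-{
-   printf("preloaded instrument_hook() called\n");
-}
-</pre>
-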
-<a name="gettingcode"></a>
-<h3>2.4&nbsp; Getting the code</h3>
-
-To write your own tool, you'll need to check out a copy of Valgrind from the
-CVS repository, rather than using a packaged distribution.  This is because it
-contains several extra files needed for writing tools.<p>
-
-To check out the code from the CVS repository, first login:
-<blockquote><code>
-cvs -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind login
-</code></blockquote>
-
-Then checkout the code.  To get a copy of the current development version
-(recommended for the brave only):
-<blockquote><code>
-cvs -z3 -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind co valgrind
-</code></blockquote>
-
-To get a copy of the stable released branch:
-<blockquote><code>
-cvs -z3 -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind co -r <i>TAG</i> valgrind
-</code></blockquote>
-
-where <code><i>TAG</i></code> has the form <code>VALGRIND_X_Y_Z</code> for
-version X.Y.Z.
-
-<a name="gettingstarted"></a>
-<h3>2.5&nbsp; Getting started</h3>
-
-Valgrind uses GNU <code>automake</code> and <code>autoconf</code> for the
-creation of Makefiles and configuration.  But don't worry, these instructions
-should be enough to get you started even if you know nothing about those
-tools.<p>
-
-In what follows, all filenames are relative to Valgrind's top-level directory
-<code>valgrind/</code>.
-
-<ol>
-  <li>Choose a name for the tool, and an abbreviation that can be used as a
-      short prefix.  We'll use <code>foobar</code> and <code>fb</code> as an
-      example.
-  </li><p>
-
-  <li>Make a new directory <code>foobar/</code> which will hold the tool.
-  </li><p>
-
-  <li>Copy <code>none/Makefile.am</code> into <code>foobar/</code>.
-      Edit it by replacing all occurrences of the string
-      ``<code>none</code>'' with ``<code>foobar</code>'' and the one
-      occurrence of the string ``<code>nl_</code>'' with ``<code>fb_</code>''.
-      It might be worth trying to understand this file, at least a little;  you
-      might have to do more complicated things with it later on.  In
-      particular, the name of the <code>vgtool_foobar_so_SOURCES</code> variable
-      determines the name of the tool's shared object, which determines what
-      name must be passed to the <code>--tool</code> option to use the tool.
-  </li><p>
-
-  <li>Copy <code>none/nl_main.c</code> into
-      <code>foobar/</code>, renaming it as <code>fb_main.c</code>.
-      Edit it by changing the lines in <code>TL_(pre_clo_init)()</code>
-      to something appropriate for the tool.  These fields are used in the
-      startup message, except for <code>bug_reports_to</code> which is used
-      if a tool assertion fails.
-  </li><p>
-
-  <li>Edit <code>Makefile.am</code>, adding the new directory
-      <code>foobar</code> to the <code>SUBDIRS</code> variable.
-  </li><p>
-
-  <li>Edit <code>configure.in</code>, adding <code>foobar/Makefile</code> to the
-      <code>AC_OUTPUT</code> list.
-  </li><p>
-
-  <li>Run:
-      <pre>
-    autogen.sh
-    ./configure --prefix=`pwd`/inst
-    make install</pre>
-
-      It should automake, configure and compile without errors, putting copies
-      of the tool's shared object <code>vgtool_foobar.so</code> in
-      <code>foobar/</code> and
-      <code>inst/lib/valgrind/</code>.
-  </li><p>
-
-  <li>You can test it with a command like
-      <pre>
-    inst/bin/valgrind --tool=foobar date</pre>
-
-      (almost any program should work; <code>date</code> is just an example).  
-      The output should be something like this:
-      <pre>
-==738== foobar-0.0.1, a foobarring tool for x86-linux.
-==738== Copyright (C) 1066AD, and GNU GPL'd, by J. Random Hacker.
-==738== Built with valgrind-1.1.0, a program execution monitor.
-==738== Copyright (C) 2000-2003, and GNU GPL'd, by Julian Seward.
-==738== Estimated CPU clock rate is 1400 MHz
-==738== For more details, rerun with: -v
-==738== 
-Wed Sep 25 10:31:54 BST 2002
-==738==</pre>
-
-      The tool does nothing except run the program uninstrumented.
-  </li><p>
-</ol>
-
-These steps don't have to be followed exactly - you can choose different names
-for your source files, and use a different <code>--prefix</code> for
-<code>./configure</code>.<p>
-
-Now that we've set up, built and tested the simplest possible tool, on to the
-interesting stuff...
-
-
-<a name="writingcode"></a>
-<h3>2.6&nbsp; Writing the code</h3>
-
-A tool must define at least these four functions:
-<pre>
-    TL_(pre_clo_init)()
-    TL_(post_clo_init)()
-    TL_(instrument)()
-    TL_(fini)()
-</pre>
-
-Also, it must use the macro <code>VG_DETERMINE_INTERFACE_VERSION</code>
-exactly once in its source code.  If it doesn't, you will get a link error
-explaining the problem.  This macro is used to ensure the core/tool interface
-used by the core and a plugged-in tool are binary compatible.
-
-In addition, if a tool wants to use some of the optional services provided by
-the core, it may have to define other functions.
-
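-Pulling that together, a do-nothing tool looks roughly like this.  (This is
-only a sketch: the argument lists are assumptions based on the description
-above and on <code>none/nl_main.c</code>;  <code>include/tool.h</code> is the
-authoritative reference.)
-
-<pre>
-#include "tool.h"        /* the only header a tool should need */
-
-VG_DETERMINE_INTERFACE_VERSION   /* must appear exactly once */
-
-void TL_(pre_clo_init)(void)
-{
-   /* set details, needs and trackable events here (see section 2.7) */
-}
-
-void TL_(post_clo_init)(void)
-{
-   /* initialisation that depends on command line options */
-}
-
-UCodeBlock* TL_(instrument)(UCodeBlock* cb, Addr orig_addr)
-{
-   return cb;   /* no instrumentation: the program runs unmodified */
-}
-
-void TL_(fini)(Int exitcode)
-{
-   /* print results, write out any log files */
-}
-</pre>
-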
-<a name="init"></a>
-<h3>2.7&nbsp; Initialisation</h3>
-
-Most of the initialisation should be done in <code>TL_(pre_clo_init)()</code>.
-Only use <code>TL_(post_clo_init)()</code> if a tool provides command line
-options and must do some initialisation after option processing takes place
-(``<code>clo</code>'' stands for ``command line options'').<p>
-
-First of all, various ``details'' need to be set for a tool, using the
-functions <code>VG_(details_*)()</code>.  Some are compulsory, some aren't.
-Some are used when constructing the startup message;
-<code>detail_bug_reports_to</code> is used if <code>VG_(tool_panic)()</code> is
-ever called or a tool assertion fails.  Others have other uses.<p>
-
-Second, various ``needs'' can be set for a tool, using the functions
-<code>VG_(needs_*)()</code>.  They are mostly booleans, and can be left
-untouched (they default to <code>False</code>).  They determine whether a tool
-can do various things such as:  record, report and suppress errors; process
-command line options;  wrap system calls;  record extra information about
-malloc'd blocks, etc.<p>
-
-For example, if a tool wants the core's help in recording and reporting errors,
-it must set the <code>tool_errors</code> need to <code>True</code>, and then
-provide definitions of six functions for comparing errors, printing out errors,
-reading suppressions from a suppressions file, etc.  While writing these
-functions requires some work, it's much less than doing error handling from
-scratch because the core is doing most of the work.  See the type
-<code>VgNeeds</code> in <code>include/tool.h</code> for full details of all
-the needs.<p>
-
-Third, the tool can indicate which events in the core it wants to be notified
-about, using the functions <code>VG_(track_*)()</code>.  These include things
-such as blocks of memory being malloc'd, the stack pointer changing, a mutex
-being locked, etc.  If a tool wants to know about this, it should set the
-relevant pointer in the structure to point to a function, which will be called
-when that event happens.<p>
-
-For example, if the tool wants to be notified when a new block of memory is
-malloc'd, it should call <code>VG_(track_new_mem_heap)()</code> with an
-appropriate function pointer, and the assigned function will be called each
-time this happens.<p>
-
-More information about ``details'', ``needs'' and ``trackable events'' can be
-found in <code>include/tool.h</code>.<p>
-
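-As a concrete sketch, an initialisation function covering all three steps
-might look like this.  (The callback signature, and any setter names not
-already mentioned above, are assumptions;  check <code>include/tool.h</code>.)
-
-<pre>
-static void fb_new_mem_heap ( Addr a, UInt len, Bool is_inited )
-{
-   /* called by the core each time a block is malloc'd */
-}
-
-void TL_(pre_clo_init)(void)
-{
-   /* details: used for the startup message and assertion failures */
-   VG_(details_name)           ("foobar");
-   VG_(details_version)        ("0.0.1");
-   VG_(details_description)    ("an example tool");
-   VG_(details_bug_reports_to) ("jrh@example.com");
-
-   /* needs: shown commented out, because really setting tool_errors
-      would also require the six error-handling functions described above */
-   /* VG_(needs_tool_errors)(); */
-
-   /* trackable events: be told whenever a block is malloc'd */
-   VG_(track_new_mem_heap)( fb_new_mem_heap );
-}
-</pre>
-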
-<a name="instr"></a>
-<h3>2.8&nbsp; Instrumentation</h3>
-
-<code>TL_(instrument)()</code> is the interesting one.  It allows you to
-instrument <i>UCode</i>, which is Valgrind's RISC-like intermediate language.
-UCode is described in the <a href="mc_techdocs.html">technical docs</a> for
-Memcheck.
-
-The easiest way to instrument UCode is to insert calls to C functions when
-interesting things happen.  See the tool ``Lackey''
-(<code>lackey/lk_main.c</code>) for a simple example of this, or
-Cachegrind (<code>cachegrind/cg_main.c</code>) for a more complex
-example.<p>
-
-A much more complicated way to instrument UCode, albeit one that might result
-in faster instrumented programs, is to extend UCode with new UCode
-instructions.  This is recommended for advanced Valgrind hackers only!  See
-Memcheck for an example.
-
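-For instance, Lackey-style counting needs nothing more on the C side than a
-counter and a function for the instrumented code to call;  the UCode-insertion
-half lives in <code>TL_(instrument)()</code> and is best copied from
-<code>lackey/lk_main.c</code>:
-
-<pre>
-static ULong fb_n_bbs = 0;
-
-/* Called from instrumented code, once per basic block entry.
-   TL_(instrument)() inserts the call at the start of each basic block. */
-static void fb_count_bb ( void )
-{
-   fb_n_bbs++;
-}
-</pre>
-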
-<a name="fini"></a>
-<h3>2.9&nbsp; Finalisation</h3>
-
-This is where you can present the final results, such as a summary of the
-information collected.  Any log files should be written out at this point.
-
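-Continuing the counting sketch from section 2.8, a finalisation function
-might simply print the total (assuming the core's usual
-<code>VG_(message)()</code>/<code>Vg_UserMsg</code> facility and its
-printf-style formatting):
-
-<pre>
-void TL_(fini)(Int exitcode)
-{
-   VG_(message)(Vg_UserMsg, "basic blocks entered: %llu", fb_n_bbs);
-}
-</pre>
-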
-<a name="otherimportantinfo"></a>
-<h3>2.10&nbsp; Other important information</h3>
-
-Please note that the core/tool split infrastructure is quite complex and
-not brilliantly documented.  Here are some important points, but there are
-undoubtedly many others that I should note but haven't thought of.<p>
-
-The file <code>include/tool.h</code> contains all the types,
-macros, functions, etc. that a tool should (hopefully) need, and is the only
-<code>.h</code> file a tool should need to <code>#include</code>.<p>
-
-In particular, you probably shouldn't use anything from the C library (there
-are deep reasons for this, trust us).  Valgrind provides an implementation of a
-reasonable subset of the C library, details of which are in
-<code>tool.h</code>.<p>
-
-Similarly, when writing a tool, you shouldn't need to look at any of the code
-in Valgrind's core, although it might sometimes be useful for understanding
-something.<p>
-
-<code>tool.h</code> has a reasonable amount of documentation in it that
-should hopefully be enough to get you going.  But ultimately, the tools
-distributed (Memcheck, Addrcheck, Cachegrind, Lackey, etc.) are probably the
-best documentation of all, for the moment.<p>
-
-Note that the <code>VG_</code> and <code>TL_</code> macros are used heavily.
-These just prepend longer strings in front of names to avoid potential
-namespace clashes.  We strongly recommend using the <code>TL_</code> macro for
-any global functions and variables in your tool, or writing a similar macro.<p>
-
-<a name="wordsofadvice"></a>
-<h3>2.11&nbsp; Words of Advice</h3>
-
-Writing and debugging tools is not trivial.  Here are some suggestions for
-solving common problems.<p>
-
-If you are getting segmentation faults in C functions used by your tool, the
-usual GDB command:
-<blockquote><code>gdb <i>prog</i> core</code></blockquote>
-usually gives the location of the segmentation fault.<p>
-
-If you want to debug C functions used by your tool, you can attach GDB to
-Valgrind with some effort; see the file <code>README_DEVELOPERS</code> in
-CVS for instructions.<p>
-
-GDB may be able to give you useful information.  Note that by default
-most of the system is built with <code>-fomit-frame-pointer</code>,
-and you'll need to get rid of this to extract useful tracebacks from
-GDB.<p>
-
-If you just want to know whether a program point has been reached, using the
-<code>OINK</code> macro (in <code>include/tool.h</code>) can be easier than
-using GDB.<p>
-
-If you are having problems with your UCode instrumentation, it's likely that
-GDB won't be able to help at all.  In this case, Valgrind's
-<code>--trace-codegen</code> option is invaluable for observing the results of
-instrumentation.<p>
-
-The other debugging command line options can be useful too (run <code>valgrind
--h</code> for the list).<p>
-
-<a name="advancedtopics"></a>
-<h2>3&nbsp; Advanced Topics</h2>
-
-Once a tool becomes more complicated, there are some extra things you may
-want/need to do.
-
-<a name="suppressions"></a>
-<h3>3.1&nbsp; Suppressions</h3>
-
-If your tool reports errors and you want to suppress some common ones, you can
-add suppressions to the suppression files.  The relevant files are 
-<code>valgrind/*.supp</code>;  the final suppression file is aggregated from
-these files by combining the relevant <code>.supp</code> files depending on the
-versions of Linux, X and glibc on a system.
-<p>
-Suppression types have the form <code>tool_name:suppression_name</code>.  The
-<code>tool_name</code> here is the name you specify for the tool during
-initialisation with <code>VG_(details_name)()</code>.
-
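-For example, a suppression entry for a hypothetical <code>foobar</code> tool
-might look like this (standard Valgrind suppression-file syntax is assumed;
-every name here is made up):
-
-<pre>
-{
-   foobar-ignore-this-one
-   foobar:SomeErrorKind
-   fun:some_library_function
-   fun:main
-}
-</pre>
-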
-<a name="documentation"></a>
-<h3>3.2&nbsp; Documentation</h3>
-
-If you are feeling conscientious and want to write some HTML documentation for
-your tool, follow these steps (using <code>foobar</code> as the example tool
-name again):
-
-<ol>
-  <li>Make a directory <code>foobar/docs/</code>.
-  </li><p>
-
-  <li>Edit <code>foobar/Makefile.am</code>, adding <code>docs</code> to
-      the <code>SUBDIRS</code> variable.
-  </li><p>
-
-  <li>Edit <code>configure.in</code>, adding
-      <code>foobar/docs/Makefile</code> to the <code>AC_OUTPUT</code> list.
-  </li><p>
-
-  <li>Write <code>foobar/docs/Makefile.am</code>.  Use
-      <code>memcheck/docs/Makefile.am</code> as an example.
-  </li><p>
-
-  <li>Write the documentation, putting it in <code>foobar/docs/</code>.
-  </li><p>
-</ol>
-
-<a name="regressiontests"></a>
-<h3>3.3&nbsp; Regression tests</h3>
-
-Valgrind has some support for regression tests.  If you want to write
-regression tests for your tool:
-
-<ol>
-  <li>Make a directory <code>foobar/tests/</code>.
-  </li><p>
-
-  <li>Edit <code>foobar/Makefile.am</code>, adding <code>tests</code> to
-      the <code>SUBDIRS</code> variable.
-  </li><p>
-
-  <li>Edit <code>configure.in</code>, adding
-      <code>foobar/tests/Makefile</code> to the <code>AC_OUTPUT</code> list.
-  </li><p>
-
-  <li>Write <code>foobar/tests/Makefile.am</code>.  Use
-      <code>memcheck/tests/Makefile.am</code> as an example.
-  </li><p>
-
-  <li>Write the tests, <code>.vgtest</code> test description files, 
-      <code>.stdout.exp</code> and <code>.stderr.exp</code> expected output
-      files.  (Note that Valgrind's output goes to stderr.)  Some details
-      on writing and running tests are given in the comments at the top of the
-      testing script <code>tests/vg_regtest</code>.
-  </li><p>
-
-  <li>Write a filter for stderr results <code>foobar/tests/filter_stderr</code>.
-      It can call the existing filters in <code>tests/</code>.  See
-      <code>memcheck/tests/filter_stderr</code> for an example;  in particular
-      note the <code>$dir</code> trick that ensures the filter works correctly
-      from any directory.
-  </li><p>
-</ol>
-
-<a name="profiling"></a>
-<h3>3.4&nbsp; Profiling</h3>
-
-To do simple tick-based profiling of a tool, include the line 
-<blockquote>
-#include "vg_profile.c"
-</blockquote>
-in the tool somewhere, and rebuild (you may have to <code>make clean</code>
-first).  Then run Valgrind with the <code>--profile=yes</code> option.<p>
-
-The profiler is stack-based;  you can register a profiling event with
-<code>VGP_(register_profile_event)()</code> and then use the
-<code>VGP_PUSHCC</code> and <code>VGP_POPCC</code> macros to record time spent
-doing certain things.  New profiling event numbers must not overlap with the
-core profiling event numbers.  See <code>include/tool.h</code> for details
-and Memcheck for an example.
-
-
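-A rough sketch of how a tool might use this (the event-numbering convention
-and the exact signature of <code>VGP_(register_profile_event)()</code> are
-assumptions;  see <code>include/tool.h</code> for the real details):
-
-<pre>
-#define FbProfExpensiveOp  100   /* must not clash with core event numbers */
-
-static void fb_init_profiling ( void )
-{
-   VGP_(register_profile_event)( FbProfExpensiveOp, "fb-expensive-op" );
-}
-
-static void fb_expensive_op ( void )
-{
-   VGP_PUSHCC(FbProfExpensiveOp);
-   /* ... do the expensive work ... */
-   VGP_POPCC(FbProfExpensiveOp);
-}
-</pre>
-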
-<a name="othermakefilehackery"></a>
-<h3>3.5&nbsp; Other makefile hackery</h3>
-
-If you add any directories under <code>valgrind/foobar/</code>, you will
-need to add an appropriate <code>Makefile.am</code> to it, and add a
-corresponding entry to the <code>AC_OUTPUT</code> list in
-<code>valgrind/configure.in</code>.<p>
-
-If you add any scripts to your tool (see Cachegrind for an example) you need to
-add them to the <code>bin_SCRIPTS</code> variable in
-<code>valgrind/foobar/Makefile.am</code>.<p>
-
-
-<a name="interfaceversions"></a>
-<h3>3.6&nbsp; Core/tool interface versions</h3>
-
-In order to allow for the core/tool interface to evolve over time, Valgrind
-uses a basic interface versioning system.  All a tool has to do is use the
-<code>VG_DETERMINE_INTERFACE_VERSION</code> macro exactly once in its code.
-If not, a link error will occur when the tool is built.
-<p>
-The interface version number has the form X.Y.  Changes in Y indicate binary
-compatible changes.  Changes in X indicate binary incompatible changes.  If
-the core and tool have the same major version number X, they should work
-together.  If X doesn't match, Valgrind will abort execution with an
-explanation of the problem.
-<p>
-This approach was chosen so that if the interface changes in the future,
-old tools won't work and the reason will be clearly explained, instead of
-possibly crashing mysteriously.  We have attempted to minimise the potential
-for binary incompatible changes by means such as minimising the use of naked
-structs in the interface.
-
-<a name="finalwords"></a>
-<h2>4&nbsp; Final Words</h2>
-
-This whole core/tool business is under active development, although it's slowly
-maturing.<p>
-
-The first consequence of this is that the core/tool interface will continue
-to change in the future;  we have no intention of freezing it and then
-regretting the inevitable stupidities.  Hopefully most of the future changes
-will be to add new features, hooks, functions, etc, rather than to change old
-ones, which should cause a minimum of trouble for existing tools, and we've put
-some effort into future-proofing the interface to avoid binary incompatibility.
-But we can't guarantee anything.  The versioning system should catch any
-incompatibilities.  Just something to be aware of.<p>
-
-The second consequence of this is that we'd love to hear your feedback about
-it:
-
-<ul>
-  <li>If you love it or hate it</li><p>
-  <li>If you find bugs</li><p>
-  <li>If you write a tool</li><p>
-  <li>If you have suggestions for new features, needs, trackable events,
-      functions</li><p>
-  <li>If you have suggestions for making tools easier to write</li><p>
-  <li>If you have suggestions for improving this documentation</li><p>
-  <li>If you don't understand something</li><p>
-</ul>
-
-or anything else!<p>
-
-Happy programming.
-
diff --git a/docs/Makefile.am b/docs/Makefile.am
index 5bb35dc..c36c75e 100644
--- a/docs/Makefile.am
+++ b/docs/Makefile.am
@@ -1,3 +1,81 @@
-docdir = $(datadir)/doc/valgrind
+SUBDIRS = xml lib images
 
-dist_doc_DATA = manual.html
+EXTRA_DIST = README
+
+##-------------------------------------------------------------------
+## Below here is more ordinary make stuff...
+##-------------------------------------------------------------------
+docdir   = ./
+xmldir   = $(docdir)xml
+imgdir   = $(docdir)images
+libdir   = $(docdir)lib
+htmldir  = $(docdir)html
+printdir = $(docdir)print
+
+XML_CATALOG_FILES = /etc/xml/catalog
+
+# file to log print output to
+LOGFILE = print.log
+
+# validation stuff
+XMLLINT       = xmllint
+LINT_FLAGS    = --noout --xinclude --noblanks --postvalid
+VALID_FLAGS   = --dtdvalid http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd 
+XMLLINT_FLAGS = $(LINT_FLAGS) $(VALID_FLAGS)
+
+# stylesheet processor
+XSLTPROC       = xsltproc
+XSLTPROC_FLAGS = --nonet --xinclude 
+
+# stylesheets
+XSL_HTML_CHUNK_STYLE  = $(libdir)/vg-html-chunk.xsl
+XSL_HTML_SINGLE_STYLE = $(libdir)/vg-html-single.xsl
+XSL_FO_STYLE          = $(libdir)/vg-fo.xsl
+
+all-docs: html-docs print-docs
+
+valid:
+	$(XMLLINT) $(XMLLINT_FLAGS) $(xmldir)/index.xml
+
+# chunked html
+html-docs:
+	@echo "Generating html files..."
+	export XML_CATALOG_FILES=$(XML_CATALOG_FILES)
+	mkdir -p $(htmldir)
+	/bin/rm -fr $(htmldir)/
+	mkdir -p $(htmldir)/
+	mkdir -p $(htmldir)/images
+	cp $(libdir)/vg_basic.css $(htmldir)/
+	cp $(imgdir)/*.png $(htmldir)/images
+	$(XSLTPROC) $(XSLTPROC_FLAGS) -o $(htmldir)/ $(XSL_HTML_CHUNK_STYLE) $(xmldir)/index.xml
+
+# pdf and postscript
+print-docs:
+	@echo "Generating pdf file: $(printdir)/index.pdf ...";
+	export XML_CATALOG_FILES=$(XML_CATALOG_FILES);
+	mkdir -p $(printdir);
+	mkdir -p $(printdir)/images;
+	cp $(imgdir)/massif-graph-sm.png $(printdir)/images;
+	$(XSLTPROC) $(XSLTPROC_FLAGS) -o $(printdir)/index.fo $(XSL_FO_STYLE) $(xmldir)/index.xml;
+	(cd $(printdir); \
+	 pdfxmltex index.fo &> $(LOGFILE); \
+	 pdfxmltex index.fo &> $(LOGFILE); \
+	 pdfxmltex index.fo &> $(LOGFILE); \
+	 echo "Generating ps file: $(printdir)/index.ps"; \
+	 pdftops index.pdf; \
+	 rm *.log *.aux *.fo *.out)
+
+# If the docs have been built, install them.  But don't worry if they have 
+# not -- developers do 'make install' not from a 'make dist'-ified distro all
+# the time.
+install-data-hook:
+	if test -r html ; then \
+		mkdir -p $(datadir)/doc/valgrind; \
+		cp -r html $(datadir)/doc/valgrind; \
+	fi
+
+dist-hook: html-docs
+	cp -r html $(distdir)	
+
+distclean-local:
+	rm -rf html print
diff --git a/docs/README b/docs/README
new file mode 100644
index 0000000..19f947d
--- /dev/null
+++ b/docs/README
@@ -0,0 +1,166 @@
+Valgrind Documentation
+----------------------
+This text assumes the following directory structure:
+
+Distribution text files (eg. README):
+  valgrind/
+
+Main /docs/ dir:
+  valgrind/docs/
+
+Top-level XML files: 
+  valgrind/docs/xml/
+
+Tool specific XML docs:
+  valgrind/<toolname>/docs/
+
+All images used in the docs:
+  valgrind/docs/images/
+
+Stylesheets, catalogs, parsing/formatting scripts:
+  valgrind/docs/lib/
+
+Some files of note:
+  docs/xml/index.xml:       Top-level book-set wrapper
+  docs/xml/FAQ.xml:         The FAQ
+  docs/xml/vg-entities.xml: Various strings, dates etc. used all over
+  docs/xml/xml_help.txt:    Basic guide to common XML tags.
+
+
+Overview
+---------
+The Documentation Set contains all books, articles,
+etc. pertaining to Valgrind, and is designed to be built as:
+- chunked html files
+- PDF file
+- PS file
+
+The whole thing is a "book set", made up of multiple books (the user
+manual, the FAQ, the tech-docs, the licenses).  Each book could be
+made individually, but the build system doesn't do that.
+
+CSS: the style-sheet used by the docs is the same as that used by the
+website (consistency is king).  It might be worth doing a pre-build diff
+to check whether the website stylesheet has changed.
+
+
+The build process
+-----------------
+It's not obvious exactly when things get built, and so on.  Here's an
+overview:
+
+- The HTML docs can be built manually by running 'make html-docs' in
+  valgrind/docs/.  (Don't use 'make html'; that is a valid built-in
+  automake target, but does nothing.)  Likewise for PDF/PS with 'make
+  print-docs'.
+
+- 'make dist' puts the XML files into the tarball.  It also builds the
+  HTML docs and puts them in too, in valgrind/docs/html/ (including
+  style sheets, images, etc).
+
+- 'make install' installs the HTML docs in
+  $(install)/share/doc/valgrind/html/, if they are present.  (They will
+  be present if you are installing from the result of a 'make dist'.
+  They might not be present if you are developing in a Subversion
+  workspace and have not built them.)  It doesn't install the XML docs,
+  as they're not useful installed.
+
+If the XML processing tools ever mature enough to become standard, we
+could just build the docs from XML when doing 'make install', which
+would be simpler.
+
+
+The XML Toolchain
+------------------
+I spent some time on the docbook-apps list in order to ascertain
+the most-useful / widely-available / least-fragile / advanced
+toolchain.  Basically, everything has problems of one sort or
+another, so I ended up going with what I felt was the
+least-problematical of the various options.
+
+The maintainer is responsible for ensuring the following tools are
+present on his system:
+- xmllint:   using libxml version 20607
+- xsltproc:  using libxml 20607, libxslt 10102 and libexslt 802
+             (Nb: be sure to use a version based on libxml2
+              version 2.6.11 or later.  There was a bug in
+              xml:base processing in versions before that.)
+- pdfxmltex: pdfTeX (Web2C 7.4.5) 3.14159-1.10b
+- pdftops:   version 3.00
+- DocBook:   version 4.2
+- bzip2
+- lynx
+
+A big problem is latency.  Norman Walsh is constantly updating
+DocBook, but the tools tend to lag behind somewhat.  It is
+important that the versions get on with each other.  If you
+decide to upgrade something, then it is your responsibility to
+ascertain whether things still work nicely - this *cannot* be
+assumed.
+
+Print output: if make fails with an error, cat the log file (print.log).
+If you see something like this:
+  ! TeX capacity exceeded, sorry [pool size=436070]
+
+then look at this:
+  http://lists.debian.org/debian-doc/2003/12/msg00020.html
+and modify your texmf files accordingly.
+
+
+Catalog Locations
+------------------
+oasis:
+http://www.oasis-open.org/docbook/xml/4.2/catalog.xml
+http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd
+
+Suse 9.1: 
+/usr/share/xml/docbook/stylesheet/nwalsh/1.64.1/html/docbook.xsl
+/usr/share/xml/docbook/schema/dtd/4.2/docbookx.dtd
+/usr/share/xml/docbook/schema/dtd/4.2/catalog.xml
+
+
+Notes:
+------
+- the end of file.xml must have only ONE newline after the last tag:
+  </book>
+
+- pdfxmltex barfs if given a filename with an underscore in it
+  
+
+References:
+----------
+- samba have got all the stuff
+http://websvn.samba.org/listing.php?rep=4&path=/trunk/&opt=dir&sc=1
+
+excellent on-line howto reference:
+- http://www.cogent.ca/
+
+using automake with docbook:
+- http://www.movement.uklinux.net/docs/docbook-autotools/index.html
+
+Debugging catalog processing:
+- http://xmlsoft.org/catalog.html#Declaring
+  xmlcatalog -v <catalog-file>
+
+shell script to generate xml catalogs for docbook 4.1.2:
+- http://xmlsoft.org/XSLT/docbook.html
+
+configure.in re pdfxmltex
+- http://cvs.sourceforge.net/viewcvs.py/logreport/service/configure.in?rev=1.325
+
+some useful xsl stylesheets in cvs:
+- http://cvs.sourceforge.net/viewcvs.py/perl-xml/perl-xml-faq/
+
+
+TODO:
+----
+- get rid of blank pages in fo output
+- concat titlepage + subtitle page in fo output
+- generate an index for the user manual (??)
+- fix tex so it does not run out of memory
+- run through and check for not-linked hrefs: grep on 'http'
+- run through and check for bad email addresses: grep on '@' etc.
+- when we move to svn, change all refs to sourceforge.cvs
+- go through and wrap refs+addresses in '<address>' tags
+
+
diff --git a/docs/images/Makefile.am b/docs/images/Makefile.am
new file mode 100644
index 0000000..c99caed
--- /dev/null
+++ b/docs/images/Makefile.am
@@ -0,0 +1,3 @@
+EXTRA_DIST = \
+	home.png next.png prev.png up.png \
+	massif-graph-sm.png massif-graph.png
diff --git a/docs/images/home.png b/docs/images/home.png
new file mode 100644
index 0000000..1ccfb7b
--- /dev/null
+++ b/docs/images/home.png
Binary files differ
diff --git a/docs/images/massif-graph-sm.png b/docs/images/massif-graph-sm.png
new file mode 100644
index 0000000..35894ff
--- /dev/null
+++ b/docs/images/massif-graph-sm.png
Binary files differ
diff --git a/docs/images/massif-graph.png b/docs/images/massif-graph.png
new file mode 100644
index 0000000..972b31f
--- /dev/null
+++ b/docs/images/massif-graph.png
Binary files differ
diff --git a/docs/images/next.png b/docs/images/next.png
new file mode 100644
index 0000000..6d0c11a
--- /dev/null
+++ b/docs/images/next.png
Binary files differ
diff --git a/docs/images/prev.png b/docs/images/prev.png
new file mode 100644
index 0000000..9fdf29e
--- /dev/null
+++ b/docs/images/prev.png
Binary files differ
diff --git a/docs/images/up.png b/docs/images/up.png
new file mode 100644
index 0000000..a75f0b3
--- /dev/null
+++ b/docs/images/up.png
Binary files differ
diff --git a/docs/lib/Makefile.am b/docs/lib/Makefile.am
new file mode 100644
index 0000000..627e39d
--- /dev/null
+++ b/docs/lib/Makefile.am
@@ -0,0 +1,6 @@
+EXTRA_DIST = \
+	vg-common.xsl \
+	vg-fo.xsl \
+	vg-html-chunk.xsl \
+	vg-html-single.xsl \
+	vg_basic.css
diff --git a/docs/lib/vg-common.xsl b/docs/lib/vg-common.xsl
new file mode 100644
index 0000000..0302b52
--- /dev/null
+++ b/docs/lib/vg-common.xsl
@@ -0,0 +1,45 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<xsl:stylesheet 
+     xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+
+<!-- we like '1.2 Title' -->
+<xsl:param name="section.autolabel" select="'1'"/> 
+<xsl:param name="section.label.includes.component.label" select="'1'"/>
+
+<!-- Do not put 'Chapter' at the start of eg 'Chapter 1. Doing This' -->
+<xsl:param name="local.l10n.xml" select="document('')"/> 
+<l:i18n xmlns:l="http://docbook.sourceforge.net/xmlns/l10n/1.0"> 
+  <l:l10n language="en"> 
+    <l:context name="title-numbered">
+      <l:template name="chapter" text="%n.&#160;%t"/>
+    </l:context> 
+  </l:l10n>
+</l:i18n>
+
+<!-- don't generate sub-tocs for qanda sets -->
+<xsl:param name="generate.toc">
+set       toc,title
+book      toc,title,figure,table,example,equation
+chapter   toc,title
+section   toc
+sect1     toc
+sect2     toc
+sect3     toc
+sect4     nop
+sect5     nop
+qandaset  toc
+qandadiv  nop
+appendix  toc,title
+article/appendix  nop
+<!-- article   toc,title -->
+article   nop
+preface   toc,title
+reference toc,title
+</xsl:param>
+
+<!-- center everything at the top of a titlepage -->
+<xsl:attribute-set name="set.titlepage.recto.style">
+  <xsl:attribute name="align">center</xsl:attribute>
+</xsl:attribute-set>
+
+</xsl:stylesheet>
diff --git a/docs/lib/vg-fo.xsl b/docs/lib/vg-fo.xsl
new file mode 100644
index 0000000..647d91b
--- /dev/null
+++ b/docs/lib/vg-fo.xsl
@@ -0,0 +1,320 @@
+<?xml version="1.0" encoding="UTF-8"?> <!-- -*- sgml -*- -->
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" 
+     xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
+
+<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/fo/docbook.xsl"/>
+<xsl:import href="vg-common.xsl"/>
+
+<!-- set indent = yes while debugging, then change to NO -->
+<xsl:output method="xml" indent="no"/>
+
+<!-- ensure only passivetex extensions are on -->
+<xsl:param name="stylesheet.result.type" select="'fo'"/>
+<!-- fo extensions: PDF bookmarks and index terms -->
+<xsl:param name="use.extensions" select="'1'"/>
+<xsl:param name="xep.extensions" select="0"/>      
+<xsl:param name="fop.extensions" select="0"/>     
+<xsl:param name="saxon.extensions" select="0"/>   
+<xsl:param name="passivetex.extensions" select="1"/>
+<xsl:param name="tablecolumns.extension" select="'1'"/>
+
+<!-- ensure we are using single sided -->
+<xsl:param name="double.sided" select="'0'"/> 
+
+<!-- insert cross references to page numbers -->
+<xsl:param name="insert.xref.page.number" select="1"/>
+
+<!-- <?custom-pagebreak?> inserts a page break at this point -->
+<xsl:template match="processing-instruction('custom-pagebreak')">
+  <fo:block break-before='page'/>
+</xsl:template>
+
+<!-- show links in color -->
+<xsl:attribute-set name="xref.properties">
+  <xsl:attribute name="color">blue</xsl:attribute>
+</xsl:attribute-set>
+
+<!-- make pre listings indented a bit + a bg colour -->
+<xsl:template match="programlisting | screen">
+  <fo:block start-indent="0.25in" wrap-option="no-wrap" 
+            white-space-collapse="false" text-align="start" 
+            font-family="monospace" background-color="#f2f2f9"
+            linefeed-treatment="preserve" 
+            xsl:use-attribute-sets="normal.para.spacing">
+    <xsl:apply-templates/>
+  </fo:block>
+</xsl:template>
+
+<!-- workaround bug in passivetex fo output for itemizedlist -->
+<xsl:template match="itemizedlist/listitem">
+  <xsl:variable name="id">
+  <xsl:call-template name="object.id"/></xsl:variable>
+  <xsl:variable name="itemsymbol">
+    <xsl:call-template name="list.itemsymbol">
+      <xsl:with-param name="node" select="parent::itemizedlist"/>
+    </xsl:call-template>
+  </xsl:variable>
+  <xsl:variable name="item.contents">
+    <fo:list-item-label end-indent="label-end()">
+      <fo:block>
+        <xsl:choose>
+          <xsl:when test="$itemsymbol='disc'">&#x2022;</xsl:when>
+          <xsl:when test="$itemsymbol='bullet'">&#x2022;</xsl:when>
+          <xsl:otherwise>&#x2022;</xsl:otherwise>
+        </xsl:choose>
+      </fo:block>
+    </fo:list-item-label>
+    <fo:list-item-body start-indent="body-start()">
+      <xsl:apply-templates/>    <!-- removed extra block wrapper -->
+    </fo:list-item-body>
+  </xsl:variable>
+  <xsl:choose>
+    <xsl:when test="parent::*/@spacing = 'compact'">
+      <fo:list-item id="{$id}" 
+          xsl:use-attribute-sets="compact.list.item.spacing">
+        <xsl:copy-of select="$item.contents"/>
+      </fo:list-item>
+    </xsl:when>
+    <xsl:otherwise>
+      <fo:list-item id="{$id}" xsl:use-attribute-sets="list.item.spacing">
+        <xsl:copy-of select="$item.contents"/>
+      </fo:list-item>
+    </xsl:otherwise>
+  </xsl:choose>
+</xsl:template>
+
+<!-- workaround bug in passivetex fo output for orderedlist -->
+<xsl:template match="orderedlist/listitem">
+  <xsl:variable name="id">
+  <xsl:call-template name="object.id"/></xsl:variable>
+  <xsl:variable name="item.contents">
+    <fo:list-item-label end-indent="label-end()">
+      <fo:block>
+        <xsl:apply-templates select="." mode="item-number"/>
+      </fo:block>
+    </fo:list-item-label>
+    <fo:list-item-body start-indent="body-start()">
+      <xsl:apply-templates/>    <!-- removed extra block wrapper -->
+    </fo:list-item-body>
+  </xsl:variable>
+  <xsl:choose>
+    <xsl:when test="parent::*/@spacing = 'compact'">
+      <fo:list-item id="{$id}" 
+          xsl:use-attribute-sets="compact.list.item.spacing">
+        <xsl:copy-of select="$item.contents"/>
+      </fo:list-item>
+    </xsl:when>
+    <xsl:otherwise>
+      <fo:list-item id="{$id}" xsl:use-attribute-sets="list.item.spacing">
+        <xsl:copy-of select="$item.contents"/>
+      </fo:list-item>
+    </xsl:otherwise>
+  </xsl:choose>
+</xsl:template>
+
+<!-- workaround bug in passivetex fo output for variablelist -->
+<xsl:param name="variablelist.as.blocks" select="1"/>
+<xsl:template match="varlistentry" mode="vl.as.blocks">
+  <xsl:variable name="id">
+    <xsl:call-template name="object.id"/></xsl:variable>
+  <fo:block id="{$id}" xsl:use-attribute-sets="list.item.spacing"  
+      keep-together.within-column="always" 
+      keep-with-next.within-column="always">
+    <xsl:apply-templates select="term"/>
+  </fo:block>
+  <fo:block start-indent="0.5in" end-indent="0in" 
+            space-after.minimum="0.2em" 
+            space-after.optimum="0.4em" 
+            space-after.maximum="0.6em">
+    <fo:block>
+      <xsl:apply-templates select="listitem"/>
+    </fo:block>
+  </fo:block>
+</xsl:template>
+
+<!-- workaround bug in passivetex fo output for revhistory -->
+<xsl:template match="revhistory" mode="titlepage.mode">
+  <fo:block space-before="1.0em">
+  <fo:table table-layout="fixed" width="100%">
+    <fo:table-column column-number="1" column-width="33%"/>
+    <fo:table-column column-number="2" column-width="33%"/>
+    <fo:table-column column-number="3" column-width="34%"/>
+    <fo:table-body>
+      <fo:table-row>
+        <fo:table-cell number-columns-spanned="3" text-align="left">
+          <fo:block>
+            <xsl:call-template name="gentext">
+              <xsl:with-param name="key" select="'RevHistory'"/>
+            </xsl:call-template>
+          </fo:block>
+        </fo:table-cell>
+      </fo:table-row>
+      <xsl:apply-templates mode="titlepage.mode"/>
+    </fo:table-body>
+  </fo:table>
+  </fo:block>
+</xsl:template>
+
+<xsl:template match="revhistory/revision" mode="titlepage.mode">
+  <xsl:variable name="revnumber" select=".//revnumber"/>
+  <xsl:variable name="revdate"   select=".//date"/>
+  <xsl:variable name="revauthor" select=".//authorinitials"/>
+  <xsl:variable name="revremark" select=".//revremark"/>
+  <fo:table-row>
+    <fo:table-cell text-align="left">
+      <fo:block>
+        <xsl:if test="$revnumber">
+          <xsl:call-template name="gentext">
+            <xsl:with-param name="key" select="'Revision'"/>
+          </xsl:call-template>
+          <xsl:call-template name="gentext.space"/>
+          <xsl:apply-templates select="$revnumber[1]" mode="titlepage.mode"/>
+        </xsl:if>
+      </fo:block>
+    </fo:table-cell>
+    <fo:table-cell text-align="left">
+      <fo:block>
+        <xsl:apply-templates select="$revdate[1]" mode="titlepage.mode"/>
+      </fo:block>
+    </fo:table-cell>
+    <fo:table-cell text-align="left">
+      <fo:block>
+        <xsl:apply-templates select="$revauthor[1]" mode="titlepage.mode"/>
+      </fo:block>
+    </fo:table-cell>
+  </fo:table-row>
+  <xsl:if test="$revremark">
+    <fo:table-row>
+      <fo:table-cell number-columns-spanned="3" text-align="left">
+        <fo:block>
+          <xsl:apply-templates select="$revremark[1]" mode="titlepage.mode"/>
+        </fo:block>
+      </fo:table-cell>
+    </fo:table-row>
+  </xsl:if>
+</xsl:template>
+
+
+<!-- workaround bug in footers: force right-align w/two 80|20 cols -->
+<xsl:template name="footer.table">
+  <xsl:param name="pageclass" select="''"/>
+  <xsl:param name="sequence" select="''"/>
+  <xsl:param name="gentext-key" select="''"/>
+  <xsl:choose>
+    <xsl:when test="$pageclass = 'index'">
+      <xsl:attribute name="margin-left">0pt</xsl:attribute>
+    </xsl:when>
+  </xsl:choose>
+  <xsl:variable name="candidate">
+    <fo:table table-layout="fixed" width="100%">
+      <fo:table-column column-number="1" column-width="80%"/>
+      <fo:table-column column-number="2" column-width="20%"/>
+      <fo:table-body>
+        <fo:table-row height="14pt">
+          <fo:table-cell text-align="left" display-align="after">
+            <xsl:attribute name="relative-align">baseline</xsl:attribute>
+            <fo:block> 
+              <fo:block> </fo:block><!-- empty cell -->
+            </fo:block>
+          </fo:table-cell>
+          <fo:table-cell text-align="center" display-align="after">
+            <xsl:attribute name="relative-align">baseline</xsl:attribute>
+            <fo:block>
+              <xsl:call-template name="footer.content">
+                <xsl:with-param name="pageclass" select="$pageclass"/>
+                <xsl:with-param name="sequence" select="$sequence"/>
+                <xsl:with-param name="position" select="'center'"/>
+                <xsl:with-param name="gentext-key" select="$gentext-key"/>
+              </xsl:call-template>
+            </fo:block>
+          </fo:table-cell>
+        </fo:table-row>
+      </fo:table-body>
+    </fo:table>
+  </xsl:variable>
+  <!-- Really output a footer? -->
+  <xsl:choose>
+    <xsl:when test="$pageclass='titlepage' and $gentext-key='book'
+                    and $sequence='first'">
+      <!-- no, book titlepages have no footers at all -->
+    </xsl:when>
+    <xsl:when test="$sequence = 'blank' and $footers.on.blank.pages = 0">
+      <!-- no output -->
+    </xsl:when>
+    <xsl:otherwise>
+      <xsl:copy-of select="$candidate"/>
+    </xsl:otherwise>
+  </xsl:choose>
+</xsl:template>
+
+
+<!-- fix bug in headers: force right-align w/two 40|60 cols -->
+<xsl:template name="header.table">
+  <xsl:param name="pageclass" select="''"/>
+  <xsl:param name="sequence" select="''"/>
+  <xsl:param name="gentext-key" select="''"/>
+  <xsl:choose>
+    <xsl:when test="$pageclass = 'index'">
+      <xsl:attribute name="margin-left">0pt</xsl:attribute>
+    </xsl:when>
+  </xsl:choose>
+  <xsl:variable name="candidate">
+    <fo:table table-layout="fixed" width="100%">
+      <xsl:call-template name="head.sep.rule">
+        <xsl:with-param name="pageclass" select="$pageclass"/>
+        <xsl:with-param name="sequence" select="$sequence"/>
+        <xsl:with-param name="gentext-key" select="$gentext-key"/>
+      </xsl:call-template>
+      <fo:table-column column-number="1" column-width="40%"/>
+      <fo:table-column column-number="2" column-width="60%"/>
+      <fo:table-body>
+        <fo:table-row height="14pt">
+          <fo:table-cell text-align="left" display-align="before">
+            <xsl:attribute name="relative-align">baseline</xsl:attribute>
+            <fo:block>
+              <fo:block> </fo:block><!-- empty cell -->
+            </fo:block>
+          </fo:table-cell>
+          <fo:table-cell text-align="center" display-align="before">
+            <xsl:attribute name="relative-align">baseline</xsl:attribute>
+            <fo:block>
+              <xsl:call-template name="header.content">
+                <xsl:with-param name="pageclass" select="$pageclass"/>
+                <xsl:with-param name="sequence" select="$sequence"/>
+                <xsl:with-param name="position" select="'center'"/>
+                <xsl:with-param name="gentext-key" select="$gentext-key"/>
+              </xsl:call-template>
+            </fo:block>
+          </fo:table-cell>
+        </fo:table-row>
+      </fo:table-body>
+    </fo:table>
+  </xsl:variable>
+  <!-- Really output a header? -->
+  <xsl:choose>
+    <xsl:when test="$pageclass = 'titlepage' and $gentext-key = 'book'
+                    and $sequence='first'">
+      <!-- no, book titlepages have no headers at all -->
+    </xsl:when>
+    <xsl:when test="$sequence = 'blank' and $headers.on.blank.pages = 0">
+      <!-- no output -->
+    </xsl:when>
+    <xsl:otherwise>
+      <xsl:copy-of select="$candidate"/>
+    </xsl:otherwise>
+  </xsl:choose>
+</xsl:template>
+
+
+</xsl:stylesheet>
+
+<!--
+pagebreaks in fo output:
+- http://www.dpawson.co.uk/docbook/styling/fo.html#d1408e636
+http://www.dpawson.co.uk/docbook/styling/fo.html
+http://docbook.sourceforge.net/release/xsl/current/doc/fo/variablelist.as.blocks.html
+alt. book to oreilly:
+- http://www.ravelgrane.com/ER/doc/lx/book.html
+tex memory:
+- http://www.dpawson.co.uk/docbook/tools.html#d4e191
+-->
diff --git a/docs/lib/vg-html-chunk.xsl b/docs/lib/vg-html-chunk.xsl
new file mode 100644
index 0000000..66537c0
--- /dev/null
+++ b/docs/lib/vg-html-chunk.xsl
@@ -0,0 +1,321 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<xsl:stylesheet 
+     xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+
+<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/docbook.xsl"/>
+<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/chunk-common.xsl"/>
+<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/manifest.xsl"/>
+<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/chunk-code.xsl"/>
+<xsl:import href="vg-common.xsl"/>
+
+<!-- use 8859-1 encoding -->
+<xsl:output method="html" encoding="ISO-8859-1" indent="yes"/>
+
+<xsl:param name="use.id.as.filename" select="'1'"/> 
+<xsl:param name="chunker.output.indent" select="'yes'"/>
+<!-- use our custom html stylesheet -->
+<xsl:param name="html.stylesheet" select="'vg_basic.css'"/>
+
+<!-- use our custom header -->
+<xsl:template name="header.navigation">
+  <xsl:param name="prev" select="/foo"/>
+  <xsl:param name="next" select="/foo"/>
+  <xsl:param name="nav.context"/>
+
+  <xsl:variable name="home" select="/*[1]"/>
+  <xsl:variable name="up" select="parent::*"/>
+
+  <xsl:variable name="row1" select="$navig.showtitles != 0"/>
+  <xsl:variable name="row2" select="count($prev) &gt; 0
+                            or (count($up) &gt; 0 
+                            and generate-id($up) != generate-id($home) )
+                            or count($next) &gt; 0"/>
+
+<div>
+<!-- never show header nav stuff on title page -->
+<xsl:if test="count($prev)>0">
+ <xsl:if test="$row1 or $row2">
+  <table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header">
+   <xsl:if test="$row2">
+    <tr>
+     <!-- prev -->
+     <td width="22px" align="center" valign="middle">
+      <xsl:if test="count($prev)>0">
+       <a accesskey="p">
+        <xsl:attribute name="href">
+         <xsl:call-template name="href.target">
+          <xsl:with-param name="object" select="$prev"/>
+         </xsl:call-template>
+        </xsl:attribute>
+        <img src="images/prev.png" width="18" height="21" border="0">
+         <xsl:attribute name="alt">
+          <xsl:call-template name="gentext">
+           <xsl:with-param name="key">nav-prev</xsl:with-param>
+          </xsl:call-template>
+         </xsl:attribute>
+        </img>
+       </a>
+      </xsl:if>
+     </td>
+     <!-- up -->
+     <xsl:if test="count($up)>0">
+      <td width="25px" align="center" valign="middle">
+       <a accesskey="u">
+        <xsl:attribute name="href">
+         <xsl:call-template name="href.target">
+          <xsl:with-param name="object" select="$up"/>
+         </xsl:call-template>
+        </xsl:attribute>
+        <img src="images/up.png" width="21" height="18" border="0">
+         <xsl:attribute name="alt">
+          <xsl:call-template name="gentext">
+           <xsl:with-param name="key">nav-up</xsl:with-param>
+          </xsl:call-template>
+         </xsl:attribute>
+        </img>
+       </a>
+      </td>
+     </xsl:if>
+     <!-- home -->
+     <xsl:if test="$home != . or $nav.context = 'toc'">
+      <td width="31px" align="center" valign="middle">
+       <a accesskey="h">
+        <xsl:attribute name="href">
+         <xsl:call-template name="href.target">
+          <xsl:with-param name="object" select="$home"/>
+         </xsl:call-template>
+        </xsl:attribute>
+        <img src="images/home.png" width="27" height="20" border="0">
+         <xsl:attribute name="alt">
+          <xsl:call-template name="gentext">
+           <xsl:with-param name="key">nav-up</xsl:with-param>
+          </xsl:call-template>
+         </xsl:attribute>
+        </img>
+       </a>
+      </td>
+     </xsl:if>
+     <!-- chapter|section heading -->
+     <th align="center" valign="middle">
+       <xsl:apply-templates select="$up" mode="object.title.markup"/>
+<!--
+      <xsl:choose>
+       <xsl:when test="count($up) > 0 and generate-id($up) != generate-id($home)">
+        <xsl:apply-templates select="$up" mode="object.title.markup"/>
+       </xsl:when>
+       <xsl:otherwise>
+        <xsl:text>Valgrind User's Manual</xsl:text>
+       </xsl:otherwise>
+      </xsl:choose>
+-->
+     </th>
+     <!-- next -->
+      <td width="22px" align="center" valign="middle">
+        <xsl:if test="count($next)>0">
+         <a accesskey="n">
+          <xsl:attribute name="href">
+           <xsl:call-template name="href.target">
+            <xsl:with-param name="object" select="$next"/>
+           </xsl:call-template>
+          </xsl:attribute>
+          <img src="images/next.png" width="18" height="21" border="0">
+           <xsl:attribute name="alt">
+            <xsl:call-template name="gentext">
+             <xsl:with-param name="key">nav-next</xsl:with-param>
+            </xsl:call-template>
+           </xsl:attribute>
+          </img>
+         </a>
+        </xsl:if>
+       </td>
+      </tr>
+    </xsl:if>
+   </table>
+ </xsl:if>
+</xsl:if>
+</div>
+</xsl:template>
+
+
+<!-- our custom footer -->
+<xsl:template name="footer.navigation">
+  <xsl:param name="prev" select="/foo"/>
+  <xsl:param name="next" select="/foo"/>
+  <xsl:param name="nav.context"/>
+
+  <xsl:variable name="home" select="/*[1]"/>
+  <xsl:variable name="up" select="parent::*"/>
+
+  <xsl:variable name="row1" select="count($prev) &gt; 0
+                                    or count($up) &gt; 0
+                                    or count($next) &gt; 0"/>
+
+  <xsl:variable name="row2" select="($prev != 0)
+                                    or (generate-id($home) != generate-id(.)
+                                        or $nav.context = 'toc')
+                                    or ($chunk.tocs.and.lots != 0
+                                        and $nav.context != 'toc')
+                                    or ($next != 0)"/>
+  <div>
+  <xsl:if test="$row1 or $row2">
+   <br />
+   <table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
+    <xsl:if test="$row1">
+     <tr>
+      <td rowspan="2" width="40%" align="left">
+       <xsl:if test="count($prev)>0">
+        <a accesskey="p">
+         <xsl:attribute name="href">
+          <xsl:call-template name="href.target">
+           <xsl:with-param name="object" select="$prev"/>
+          </xsl:call-template>
+         </xsl:attribute>
+         <xsl:text>&#060;&#060;&#160;</xsl:text>
+         <xsl:apply-templates select="$prev" mode="object.title.markup"/>
+        </a>
+       </xsl:if>
+       <xsl:text>&#160;</xsl:text>
+      </td>
+      <td width="20%" align="center">
+       <xsl:choose>
+        <xsl:when test="count($up)>0">
+         <a accesskey="u">
+          <xsl:attribute name="href">
+           <xsl:call-template name="href.target">
+            <xsl:with-param name="object" select="$up"/>
+           </xsl:call-template>
+          </xsl:attribute>
+          <xsl:call-template name="navig.content">
+           <xsl:with-param name="direction" select="'up'"/>
+          </xsl:call-template>
+         </a>
+        </xsl:when>
+        <xsl:otherwise>&#160;</xsl:otherwise>
+       </xsl:choose>
+      </td>
+      <td rowspan="2" width="40%" align="right">
+       <xsl:text>&#160;</xsl:text>
+       <xsl:if test="count($next)>0">
+        <a accesskey="n">
+         <xsl:attribute name="href">
+          <xsl:call-template name="href.target">
+           <xsl:with-param name="object" select="$next"/>
+          </xsl:call-template>
+         </xsl:attribute>
+         <xsl:apply-templates select="$next" mode="object.title.markup"/>
+         <xsl:text>&#160;&#062;&#062;</xsl:text>
+        </a>
+       </xsl:if>
+      </td>
+     </tr>
+    </xsl:if>
+
+    <xsl:if test="$row2">
+     <tr>
+      <td width="20%" align="center">
+       <xsl:choose>
+       <xsl:when test="$home != . or $nav.context = 'toc'">
+        <a accesskey="h">
+         <xsl:attribute name="href">
+          <xsl:call-template name="href.target">
+           <xsl:with-param name="object" select="$home"/>
+          </xsl:call-template>
+         </xsl:attribute>
+         <xsl:call-template name="navig.content">
+          <xsl:with-param name="direction" select="'home'"/>
+         </xsl:call-template>
+        </a>
+        <xsl:if test="$chunk.tocs.and.lots != 0 and $nav.context != 'toc'">
+         <xsl:text>&#160;|&#160;</xsl:text>
+        </xsl:if>
+       </xsl:when>
+       <xsl:otherwise>&#160;</xsl:otherwise>
+       </xsl:choose>
+       <xsl:if test="$chunk.tocs.and.lots != 0 and $nav.context != 'toc'">
+        <a accesskey="t">
+         <xsl:attribute name="href">
+          <xsl:apply-templates select="/*[1]" mode="recursive-chunk-filename"/>
+          <xsl:text>-toc</xsl:text>
+          <xsl:value-of select="$html.ext"/>
+         </xsl:attribute>
+         <xsl:call-template name="gentext">
+          <xsl:with-param name="key" select="'nav-toc'"/>
+         </xsl:call-template>
+        </a>
+       </xsl:if>
+      </td>
+     </tr>
+    </xsl:if>
+   </table>
+  </xsl:if>
+ </div>
+</xsl:template>
+
+<!-- We don't like tables with borders -->
+<xsl:template match="revhistory" mode="titlepage.mode">
+  <xsl:variable name="numcols">
+    <xsl:choose>
+      <xsl:when test="//authorinitials">3</xsl:when>
+      <xsl:otherwise>2</xsl:otherwise>
+    </xsl:choose>
+  </xsl:variable>
+  <table width="100%" border="0" summary="Revision history">
+    <tr>
+      <th align="left" colspan="{$numcols}">
+        <h3>Revision History</h3>
+      </th>
+    </tr>
+    <xsl:apply-templates mode="titlepage.mode">
+      <xsl:with-param name="numcols" select="$numcols"/>
+    </xsl:apply-templates>
+  </table>
+</xsl:template>
+
+<!-- don't put an expanded set-level TOC, only book titles -->
+<xsl:template match="book" mode="toc">
+  <xsl:param name="toc-context" select="."/>
+  <xsl:choose>
+    <xsl:when test="local-name($toc-context) = 'set'">
+      <xsl:call-template name="subtoc">
+        <xsl:with-param name="toc-context" select="$toc-context"/>
+        <xsl:with-param name="nodes" select="foo"/>
+      </xsl:call-template>
+    </xsl:when>
+    <xsl:otherwise>
+      <xsl:call-template name="subtoc">
+        <xsl:with-param name="toc-context" select="$toc-context"/>
+        <xsl:with-param name="nodes" select="part|reference
+                                         |preface|chapter|appendix
+                                         |article
+                                         |bibliography|glossary|index
+                                         |refentry
+                                         |bridgehead[$bridgehead.in.toc != 0]"/>
+      </xsl:call-template>
+    </xsl:otherwise>
+  </xsl:choose>
+</xsl:template>
+
+<!-- question and answer set mods -->
+<xsl:template match="answer">
+  <xsl:variable name="deflabel">
+    <xsl:choose>
+      <xsl:when test="ancestor-or-self::*[@defaultlabel]">
+        <xsl:value-of select="(ancestor-or-self::*[@defaultlabel])[last()]
+                              /@defaultlabel"/>
+      </xsl:when>
+      <xsl:otherwise>
+        <xsl:value-of select="$qanda.defaultlabel"/>
+      </xsl:otherwise>
+    </xsl:choose>
+  </xsl:variable>
+  <tr class="{name(.)}">
+    <td><xsl:text>&#160;</xsl:text></td>
+    <td align="left" valign="top">
+      <xsl:apply-templates select="*[name(.) != 'label']"/>
+    </td>
+  </tr>
+  <tr><td colspan="2"><xsl:text>&#160;</xsl:text></td></tr>
+</xsl:template>
+
+</xsl:stylesheet>
diff --git a/docs/lib/vg-html-single.xsl b/docs/lib/vg-html-single.xsl
new file mode 100644
index 0000000..c6c1cec
--- /dev/null
+++ b/docs/lib/vg-html-single.xsl
@@ -0,0 +1,63 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE xsl:stylesheet [ <!ENTITY vg-css SYSTEM "vg_basic.css"> ]>
+
+<xsl:stylesheet 
+   xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+
+<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/docbook.xsl"/>
+<xsl:import href="vg-common.xsl"/>
+
+<!-- use 8859-1 encoding -->
+<xsl:output method="html" encoding="ISO-8859-1" indent="yes"/>
+
+<!-- we include the css directly when generating one large file -->
+<xsl:template name="user.head.content">  
+  <style type="text/css" media="screen">
+    <xsl:text>&vg-css;</xsl:text>
+  </style>
+</xsl:template>
+
+<!-- We don't like tables with borders -->
+<xsl:template match="revhistory" mode="titlepage.mode">
+  <xsl:variable name="numcols">
+    <xsl:choose>
+      <xsl:when test="//authorinitials">3</xsl:when>
+      <xsl:otherwise>2</xsl:otherwise>
+    </xsl:choose>
+  </xsl:variable>
+  <table width="100%" border="0" summary="Revision history">
+    <tr>
+      <th align="left" colspan="{$numcols}">
+        <h4>Revision History</h4>
+      </th>
+    </tr>
+    <xsl:apply-templates mode="titlepage.mode">
+      <xsl:with-param name="numcols" select="$numcols"/>
+    </xsl:apply-templates>
+  </table>
+</xsl:template>
+
+<!-- question and answer set mods -->
+<xsl:template match="answer">
+  <xsl:variable name="deflabel">
+    <xsl:choose>
+      <xsl:when test="ancestor-or-self::*[@defaultlabel]">
+        <xsl:value-of select="(ancestor-or-self::*[@defaultlabel])[last()]
+                              /@defaultlabel"/>
+      </xsl:when>
+      <xsl:otherwise>
+        <xsl:value-of select="$qanda.defaultlabel"/>
+      </xsl:otherwise>
+    </xsl:choose>
+  </xsl:variable>
+  <tr class="{name(.)}">
+    <td><xsl:text>&#160;</xsl:text></td>
+    <td align="left" valign="top">
+      <xsl:apply-templates select="*[name(.) != 'label']"/>
+    </td>
+  </tr>
+  <tr><td colspan="2"><xsl:text>&#160;</xsl:text></td></tr>
+</xsl:template>
+
+</xsl:stylesheet>
+
diff --git a/docs/lib/vg_basic.css b/docs/lib/vg_basic.css
new file mode 100644
index 0000000..16e6cc2
--- /dev/null
+++ b/docs/lib/vg_basic.css
@@ -0,0 +1,62 @@
+/* default link colours */
+a, a:link, a:visited, a:active { color: #74240f; }
+a:hover { color: #888800; }
+
+body { 
+ color: #202020; 
+ background-color: #ffffff;
+}
+
+body, td {
+ font-size:   90%;
+ line-height: 125%;
+ font-family: Arial, Geneva, Helvetica, sans-serif;
+}
+
+h1, h2, h3, h4 { color: #74240f; }
+h3 { margin-bottom: 0.4em; }
+
+pre      { color: #3366cc; }
+code, tt { color: #761596; }
+
+pre.programlisting {
+ color:      #000000;
+ padding:    0.5em;
+ background: #f2f2f9;
+ border:     1px solid #3366cc;
+}
+pre.screen {
+ color:      #000000;
+ padding:    0.5em;
+ background: #eeeeee;
+ border:     1px solid #626262;
+}
+
+ul { list-style: url("images/li-brown.png"); }
+
+.titlepage hr {
+  height:  1px;
+  border:  0px;
+  background-color: #7f7f7f;
+}
+
+/* header / footer nav tables */
+table.nav {
+ color:      #0f7355;
+ border:     solid 1px #0f7355;
+ background: #edf7f4;
+ background-color: #edf7f4;
+ margin-bottom: 0.5em;
+}
+/* don't have underlined links in chunked nav menus */
+table.nav a { text-decoration: none; }
+table.nav a:hover { text-decoration: underline; }
+table.nav td { font-size: 85%; }
+
+/* yellow box just for massif blockquotes */
+blockquote {
+ padding:     0.5em;
+ background:  #fffbc9; 
+ border:      solid 1px #ffde84; 
+}
+
diff --git a/docs/manual.html b/docs/manual.html
deleted file mode 100644
index 26d82b6..0000000
--- a/docs/manual.html
+++ /dev/null
@@ -1,125 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>Valgrind</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="title">&nbsp;</a>
-<h1 align=center>Valgrind, version 2.2.0</h1>
-<center>This manual was last updated on 31 August 2004</center>
-<p>
-
-<center>
-<a href="mailto:jseward@acm.org">jseward@acm.org</a>,
-   <a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-Copyright &copy; 2000-2004 Julian Seward, Nick Nethercote
-<p>
-
-Valgrind is licensed under the GNU General Public License, version
-2<br>
-
-An open-source tool for debugging and profiling Linux-x86 executables.
-</center>
-
-<p>
-
-<hr width="100%">
-<a name="contents"></a>
-<h2>Contents of this manual</h2>
-
-<h4>1&nbsp; <a href="coregrind_intro.html#intro">Introduction</a></h4>
-    1.1&nbsp; <a href="coregrind_intro.html#intro-overview">
-              An overview of Valgrind</a><br>
-    1.2&nbsp; <a href="coregrind_intro.html#intro-navigation">
-              How to navigate this manual</a>
-
-<h4>2&nbsp; <a href="coregrind_core.html#core">
-            Using and understanding the Valgrind core</a></h4>
-    2.1&nbsp; <a href="coregrind_core.html#core-whatdoes">
-                      What it does with your program</a><br>
-    2.2&nbsp; <a href="coregrind_core.html#started">
-                      Getting started</a><br>
-    2.3&nbsp; <a href="coregrind_core.html#comment">
-                      The commentary</a><br>
-    2.4&nbsp; <a href="coregrind_core.html#report">
-                      Reporting of errors</a><br>
-    2.5&nbsp; <a href="coregrind_core.html#suppress">
-                      Suppressing errors</a><br>
-    2.6&nbsp; <a href="coregrind_core.html#flags">
-                      Command-line flags for the Valgrind core</a><br>
-    2.7&nbsp; <a href="coregrind_core.html#clientreq">
-                      The Client Request mechanism</a><br>
-    2.8&nbsp; <a href="coregrind_core.html#pthreads">
-                      Support for POSIX pthreads</a><br>
-    2.9&nbsp; <a href="coregrind_core.html#signals">
-                      Handling of signals</a><br>
-    2.10&nbsp; <a href="coregrind_core.html#install">
-                       Building and installing</a><br>
-    2.11&nbsp; <a href="coregrind_core.html#problems">
-                        If you have problems</a><br>
-    2.12&nbsp; <a href="coregrind_core.html#limits">
-                       Limitations</a><br>
-    2.13&nbsp; <a href="coregrind_core.html#howworks">
-                       How it works -- a rough overview</a><br>
-    2.14&nbsp; <a href="coregrind_core.html#example">
-                       An example run</a><br>
-    2.15&nbsp; <a href="coregrind_core.html#warnings">
-                       Warning messages you might see</a><br>
-
-<h4>3&nbsp; <a href="mc_main.html#mc-top">
-            Memcheck: a heavyweight memory checker</a></h4>
-
-<h4>4&nbsp; <a href="cg_main.html#cg-top">
-            Cachegrind: a cache-miss profiler</a></h4>
-
-<h4>5&nbsp; <a href="ac_main.html#ac-top">
-            Addrcheck: a lightweight memory checker</a></h4>
-
-<h4>6&nbsp; <a href="hg_main.html#hg-top">
-            Helgrind: a data-race detector</a></h4>
-
-<h4>7&nbsp; <a href="ms_main.html#ms-top">
-            Massif: a heap profiler</a></h4>
-
-<p>
-The following is not part of the user manual.  It describes how you can
-write tools for Valgrind, in order to make new program supervision
-tools.
-
-<h4>8&nbsp; <a href="coregrind_tools.html">
-            Valgrind Tools</a></h4>
-
-<p>
-The following are not part of the user manual.  They describe internal
-details of how Valgrind works.  Reading them may rot your brain.  You
-have been warned.
-
-<h4>9&nbsp; <a href="mc_techdocs.html#mc-techdocs">
-            The design and implementation of Valgrind</a></h4>
-
-<h4>10&nbsp; <a href="cg_techdocs.html#cg-techdocs">
-            How Cachegrind works</a></h4>
-
-<hr width="100%">
-
-
diff --git a/docs/xml/FAQ.xml b/docs/xml/FAQ.xml
new file mode 100644
index 0000000..96897d6
--- /dev/null
+++ b/docs/xml/FAQ.xml
@@ -0,0 +1,674 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
+[ <!ENTITY % vg-entities SYSTEM "vg-entities.xml"> %vg-entities; ]>
+
+<book id="FAQ" xreflabel="Valgrind FAQ">
+
+  <bookinfo>
+    <title>Valgrind FAQ</title>
+  </bookinfo>
+
+
+<chapter id="faq.background" xreflabel="Background">
+<title>Background</title>
+
+<qandaset id="qset.background">
+
+<qandaentry id="faq.pronounce">
+ <question>
+  <para>How do you pronounce "Valgrind"?</para>
+ </question>
+ <answer>
+  <para>The "Val" as in the world "value".  The "grind" is
+  pronounced with a short 'i' -- ie. "grinned" (rhymes with
+  "tinned") rather than "grined" (rhymes with "find").</para>
+  <para>Don't feel bad: almost everyone gets it wrong at
+  first.</para>
+ </answer>
+</qandaentry>
+
+<qandaentry id="faq.whence">
+ <question>
+  <para>Where does the name "Valgrind" come from?</para>
+ </question>
+ <answer>
+  <para>From Nordic mythology.  Originally (before release) the
+  project was named Heimdall, after the watchman of the Nordic
+  gods.  He could "see a hundred miles by day or night, hear the
+  grass growing, see the wool growing on a sheep's back" (etc).
+  This would have been a great name, but it was already taken by
+  a security package "Heimdal".</para> <para>Keeping with the
+  Nordic theme, Valgrind was chosen.  Valgrind is the name of the
+  main entrance to Valhalla (the Hall of the Chosen Slain in
+  Asgard).  Over this entrance there resides a wolf and over it
+  there is the head of a boar and on it perches a huge eagle,
+  whose eyes can see to the far regions of the nine worlds.  Only
+  those judged worthy by the guardians are allowed to pass
+  through Valgrind.  All others are refused entrance.</para>
+  <para>It's not short for "value grinder", although that's not a
+  bad guess.</para>
+  </answer>
+ </qandaentry>
+
+</qandaset>
+
+</chapter>
+
+
+<chapter id="faq.installing" 
+       xreflabel="Compiling, installing and configuring">
+<title>Compiling, installing and configuring</title>
+<qandaset id="qset.installing">
+
+<qandaentry id="faq.make_dies">
+ <question>
+  <para>When I try building Valgrind, 'make' dies partway with
+  an assertion failure, something like this: 
+<screen>
+% make: expand.c:489: allocated_variable_append: 
+        Assertion 'current_variable_set_list->next != 0' failed.
+</screen>
+  </para>
+ </question>
+ <answer>
+  <para>It's probably a bug in 'make'.  Some, but not all,
+  instances of version 3.79.1 have this bug, see
+  www.mail-archive.com/bug-make@gnu.org/msg01658.html.  Try
+  upgrading to a more recent version of 'make'.  Alternatively,
+  we have heard that unsetting the CFLAGS environment variable
+  avoids the problem.</para>
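+  <para>For example, you could try something like this (just a
+  sketch; how CFLAGS came to be set depends on your setup):</para>
+<screen>
+unset CFLAGS
+make
+</screen>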
+ </answer>
+</qandaentry>
+
+</qandaset>
+</chapter>
+
+
+
+<chapter id="faq.abort" 
+       xreflabel="Valgrind aborts unexpectedly">
+<title>Valgrind aborts unexpectedly</title>
+<qandaset id="qset.abort">
+
+<qandaentry id="faq.exit_errors">
+ <question>
+  <para>Programs run OK on Valgrind, but at exit produce a bunch
+  of errors a bit like this:</para>
+ </question>
+ <answer><para>
+<programlisting>
+==20755== Invalid read of size 4
+==20755==    at 0x40281C8A: _nl_unload_locale (loadlocale.c:238)
+==20755==    by 0x4028179D: free_mem (findlocale.c:257)
+==20755==    by 0x402E0962: __libc_freeres (set-freeres.c:34)
+==20755==    by 0x40048DCC: vgPlain___libc_freeres_wrapper (vg_clientfuncs.c:585)
+==20755==    Address 0x40CC304C is 8 bytes inside a block of size 380 free'd
+==20755==    at 0x400484C9: free (vg_clientfuncs.c:180)
+==20755==    by 0x40281CBA: _nl_unload_locale (loadlocale.c:246)
+==20755==    by 0x40281218: free_mem (setlocale.c:461)
+==20755==    by 0x402E0962: __libc_freeres (set-freeres.c:34)
+</programlisting>
+
+ and then die with a segmentation fault.</para>
+ <para>When the program exits, Valgrind runs the procedure
+ <literal>__libc_freeres()</literal> in glibc.  This is a hook
+ for memory debuggers, so they can ask glibc to free up any
+ memory it has used.  Doing that is needed to ensure that
+ Valgrind doesn't incorrectly report space leaks in glibc.</para>
+ <para>The problem is that running
+ <literal>__libc_freeres()</literal> in older glibc versions
+ causes this crash.</para>
+ <para>WORKAROUND FOR 1.1.X and later
+ versions of Valgrind: use the
+ <literal>--run-libc-freeres=no</literal> flag.  You may then get
+ space leak reports for glibc-allocations (please _don't_ report
+ these to the glibc people, since they are not real leaks), but
+ at least the program runs.</para>
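+ <para>For example, an invocation using this flag might look like
+ this ("my_program" is just a placeholder):</para>
+<screen>
+valgrind --tool=memcheck --run-libc-freeres=no ./my_program
+</screen>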
+ </answer>
+</qandaentry>
+
+<qandaentry id="faq.bugdeath">
+ <question>
+  <para>My (buggy) program dies like this:</para>
+ </question>
+ <answer>
+  <screen>
+% valgrind: vg_malloc2.c:442 (bszW_to_pszW): Assertion 'pszW >= 0' failed.
+</screen>
+
+  <para>If Memcheck (the memory checker) shows any invalid reads,
+  invalid writes and invalid frees in your program, the above may
+  happen.  The reason is that your program may trash Valgrind's
+  low-level memory manager, which then dies with the above
+  assertion, or something similar.  The cure is to fix your
+  program so that it doesn't do any illegal memory accesses.  The
+  above failure will hopefully go away after that.</para>
+ </answer>
+</qandaentry>
+
+<qandaentry id="faq.msgdeath">
+ <question>
+  <para>My program dies, printing a message like this along the
+    way:</para>
+ </question>
+ <answer>
+<screen>
+% disInstr: unhandled instruction bytes: 0x66 0xF 0x2E 0x5
+</screen>
+
+  <para>Older versions did not support some x86 instructions,
+  particularly SSE/SSE2 instructions.  Try a newer Valgrind; we
+  now support almost all instructions.  If it still happens with
+  newer versions, if the failing instruction is an SSE/SSE2
+  instruction, you might be able to recompile your program
+  without it by using the flag
+  <computeroutput>-march</computeroutput> to gcc.  Either way,
+  let us know and we'll try to fix it.</para>
+
+  <para>Another possibility is that your program has a bug and
+  erroneously jumps to a non-code address, in which case you'll
+  get a SIGILL signal.  Memcheck/Addrcheck may issue a warning
+  just before this happens, but they might not if the jump
+  happens to land in addressable memory.</para>
+ </answer>
+</qandaentry>
+
+<qandaentry id="faq.defdeath">
+ <question>
+  <para>My program dies like this:</para>
+ </question>
+ <answer>
+<screen>
+% error: /lib/librt.so.1: symbol __pthread_clock_settime, 
+  version GLIBC_PRIVATE not defined in file libpthread.so.0 with link time reference
+</screen>
+
+  <para>This is a total swamp.  Nevertheless there is a way out.
+  It's a problem which is not easy to fix.  Really the problem is
+  that <filename>/lib/librt.so.1</filename> refers to some
+  symbols <literal>__pthread_clock_settime</literal> and
+  <literal>__pthread_clock_gettime</literal> in
+  <filename>/lib/libpthread.so</filename> which are not intended
+  to be exported, ie they are private.</para>
+
+  <para>Best solution is to ensure your program does not use
+  <filename>/lib/librt.so.1</filename>.</para>
+
+  <para>However ... since you're probably not using it directly,
+  or even knowingly, that's hard to do.  You might instead be
+  able to fix it by playing around with
+  <filename>coregrind/vg_libpthread.vs</filename>.  Things to
+  try:</para>
+
+  <para>Remove this:</para>
+<programlisting>
+GLIBC_PRIVATE {
+  __pthread_clock_gettime;
+  __pthread_clock_settime;
+};
+</programlisting>
+
+<para>or maybe remove this:</para>
+<programlisting>
+GLIBC_2.2.3 {
+  __pthread_clock_gettime;
+  __pthread_clock_settime;
+}  GLIBC_2.2;
+</programlisting>
+
+<para>or maybe add this:</para>
+<programlisting>
+GLIBC_2.2.4 {
+  __pthread_clock_gettime;
+  __pthread_clock_settime;
+} GLIBC_2.2;
+
+GLIBC_2.2.5 {
+  __pthread_clock_gettime;
+  __pthread_clock_settime;
+} GLIBC_2.2;
+</programlisting>
+
+  <para>or some combination of the above.  After each change you
+  need to delete <filename>coregrind/libpthread.so</filename> and
+  do <computeroutput>make &amp;&amp; make
+  install</computeroutput>.</para>
+
+  <para>I just don't know if any of the above will work.  If you
+  can find a solution which works, I would be interested to hear
+  it.</para>
+
+  <para>To which someone replied:</para>
+<screen>
+I deleted this:
+
+GLIBC_2.2.3 { 
+   __pthread_clock_gettime; 
+   __pthread_clock_settime; 
+} GLIBC_2.2; 
+
+and it worked.
+</screen>
+
+ </answer>
+</qandaentry>
+
+</qandaset>
+</chapter>
+
+
+<chapter id="faq.unexpected" 
+       xreflabel="Valgrind behaves unexpectedly">
+<title>Valgrind behaves unexpectedly</title>
+<qandaset id="qset.unexpected">
+
+<qandaentry id="faq.no-output">
+ <question>
+  <para>I try running "valgrind my-program", but my-program runs
+  normally, and Valgrind doesn't emit any output at all.</para>
+ </question>
+ <answer>
+  <para><command>For versions prior to 2.1.1:</command></para>
+
+  <para>Valgrind doesn't work out-of-the-box with programs that
+  are entirely statically linked.  It does a quick test at
+  startup, and if it detects that the program is statically
+  linked, it aborts with an explanation.</para>
+
+  <para>This test may fail in some obscure cases, eg. if you run
+  a script under Valgrind and the script interpreter is
+  statically linked.</para>
+
+  <para>If you still want static linking, you can ask gcc to link
+  certain libraries statically.  Try the following options:</para>
+<screen>
+-Wl,-Bstatic -lmyLibrary1 -lotherLibrary -Wl,-Bdynamic
+</screen>
+
+  <para>Just make sure you end with
+  <computeroutput>-Wl,-Bdynamic</computeroutput> so that libc is
+  dynamically linked.</para>
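+
+  <para>So a complete link line might look something like this
+  (the object file and library names here are only
+  placeholders):</para>
+<screen>
+gcc -o myprog main.o -Wl,-Bstatic -lmyLibrary1 -lotherLibrary -Wl,-Bdynamic
+</screen>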
+
+  <para>If you absolutely cannot use dynamic libraries, you can
+  try statically linking together all the .o files in coregrind/,
+  all the .o files of the tool of your choice (eg. those in
+  memcheck/), and the .o files of your program.  You'll end up
+  with a statically linked binary that runs permanently under
+  Valgrind's control.  Note that we haven't tested this procedure
+  thoroughly.</para>
+
+  <para><command>For versions 2.1.1 and later:</command></para>
+  <para>Valgrind does now work with static binaries, although
+  beware that some of the tools won't operate as well as normal,
+  because they have access to less information about how the
+  program runs.  Eg. Memcheck will miss some errors that it would
+  otherwise find.  This is because Valgrind doesn't replace
+  malloc() and friends with its own versions.  It's best if your
+  program is dynamically linked with glibc.</para>
+ </answer>
+</qandaentry>
+
+<qandaentry id="faq.slowthread">
+ <question>
+  <para>My threaded server process runs unbelievably slowly on
+  Valgrind.  So slowly, in fact, that at first I thought it had
+  completely locked up.</para>
+ </question>
+ <answer>
+  <para>We are not completely sure about this, but one
+  possibility is that laptops with power management fool
+  Valgrind's timekeeping mechanism, which is (somewhat in error)
+  based on the x86 RDTSC instruction.  A "fix" which is claimed
+  to work is to run some other cpu-intensive process at the same
+  time, so that the laptop's power-management clock-slowing does
+  not kick in.  We would be interested in hearing more feedback
+  on this.</para>
+
+  <para>Another possible cause is that versions prior to 1.9.6
+  did not support threading on glibc 2.3.X systems well.
+  Hopefully the situation is much improved with 1.9.6 and later
+  versions.</para>
+ </answer>
+</qandaentry>
+
+
+<qandaentry id="faq.reports">
+ <question>
+  <para>My program uses the C++ STL and string classes.  Valgrind
+  reports 'still reachable' memory leaks involving these classes
+  at the exit of the program, but there should be none.</para>
+ </question>
+ <answer>
+  <para>First of all: relax, it's probably not a bug, but a
+  feature.  Many implementations of the C++ standard libraries
+  use their own memory pool allocators.  Memory for quite a
+  number of destructed objects is not immediately freed and given
+  back to the OS, but kept in the pool(s) for later re-use.  The
+  fact that the pools are not freed at the exit() of the program
+  causes Valgrind to report this memory as still reachable.  The
+  behaviour of not freeing the pools at exit() could be considered
+  a bug of the library, though.</para>
+
+  <para>Using gcc, you can force the STL to use malloc and to
+  free memory as soon as possible by globally disabling memory
+  caching.  Beware!  Doing so will probably slow down your
+  program, sometimes drastically.</para>
+  <itemizedlist>
+   <listitem>
+    <para>With gcc 2.91, 2.95, 3.0 and 3.1, compile all source
+    using the STL with <literal>-D__USE_MALLOC</literal>. Beware!
+    This is removed from gcc starting with version 3.3.</para>
+   </listitem>
+   <listitem>
+    <para>With 3.2.2 and later, you should export the environment
+    variable <literal>GLIBCPP_FORCE_NEW</literal> before running
+    your program.</para>
+   </listitem>
+  </itemizedlist>
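+
+  <para>For the second case (gcc 3.2.2 and later), a run might look
+  like this ("my_program" is a placeholder, and the value given to
+  the variable is not significant -- it only needs to be set):</para>
+<screen>
+export GLIBCPP_FORCE_NEW=1
+valgrind --tool=memcheck ./my_program
+</screen>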
+
+  <para>There are other ways to disable memory pooling: using the
+  <literal>malloc_alloc</literal> template with your objects (not
+  portable, but should work for gcc) or even writing your own
+  memory allocators. But all this goes beyond the scope of this
+  FAQ.  Start by reading <ulink
+  url="http://gcc.gnu.org/onlinedocs/libstdc++/ext/howto.html#3">
+  http://gcc.gnu.org/onlinedocs/libstdc++/ext/howto.html#3</ulink>
+  if you absolutely want to do that. But beware:</para>
+
+  <orderedlist>
+   <listitem>
+    <para>there are currently changes underway for gcc which are
+    not totally reflected in the docs right now ("now" == 26 Apr
+    03)</para>
+   </listitem>
+   <listitem>
+    <para>allocators belong to the messier parts of the STL
+    and people went to great lengths to make them portable across
+    platforms. Chances are good that your solution will work on
+    your platform, but not on others.</para>
+   </listitem>
+  </orderedlist>
+ </answer>
+</qandaentry>
+
+
+<qandaentry id="faq.unhelpful">
+ <question>
+  <para>The stack traces given by Memcheck (or another tool)
+  aren't helpful.  How can I improve them?</para>
+ </question>
+ <answer>
+  <para>If they're not long enough, use
+  <literal>--num-callers</literal> to make them longer.</para>
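+  <para>For example (the depth given here is only an
+  illustration):</para>
+<screen>
+valgrind --tool=memcheck --num-callers=20 ./my_program
+</screen>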
+  <para>If they're not detailed enough, make sure you are
+  compiling with <literal>-g</literal> to add debug information.
+  And don't strip symbol tables (programs should be unstripped
+  unless you run 'strip' on them; some libraries ship
+  stripped).</para>
+
+  <para>Also, <literal>-fomit-frame-pointer</literal> and
+  <literal>-fstack-check</literal> can make stack traces
+  worse.</para>
+
+  <para>Some example sub-traces:</para>
+
+  <para>With debug information and unstripped (best):</para>
+<programlisting>
+Invalid write of size 1
+   at 0x80483BF: really (malloc1.c:20)
+   by 0x8048370: main (malloc1.c:9)
+</programlisting>
+
+  <para>With no debug information, unstripped:</para>
+<programlisting>
+Invalid write of size 1
+   at 0x80483BF: really (in /auto/homes/njn25/grind/head5/a.out)
+   by 0x8048370: main (in /auto/homes/njn25/grind/head5/a.out)
+</programlisting>
+
+  <para>With no debug information, stripped:</para>
+<programlisting>
+Invalid write of size 1
+   at 0x80483BF: (within /auto/homes/njn25/grind/head5/a.out)
+   by 0x8048370: (within /auto/homes/njn25/grind/head5/a.out)
+   by 0x42015703: __libc_start_main (in /lib/tls/libc-2.3.2.so)
+   by 0x80482CC: (within /auto/homes/njn25/grind/head5/a.out)
+</programlisting>
+
+  <para>With debug information and -fomit-frame-pointer:</para>
+<programlisting>
+Invalid write of size 1
+   at 0x80483C4: really (malloc1.c:20)
+   by 0x42015703: __libc_start_main (in /lib/tls/libc-2.3.2.so)
+   by 0x80482CC: ??? (start.S:81)
+</programlisting>
+
+ </answer>
+</qandaentry>
+
+</qandaset>
+</chapter>
+
+
+<chapter id="faq.notfound" xreflabel="Memcheck doesn't find my bug">
+<title>Memcheck doesn't find my bug</title>
+<qandaset id="qset.notfound">
+
+<qandaentry id="faq.hiddenbug">
+ <question>
+  <para>I try running "valgrind --tool=memcheck my_program" and
+  get Valgrind's startup message, but I don't get any errors and
+  I know my program has errors.</para>
+ </question>
+ <answer>
+  <para>By default, Valgrind only traces the top-level process.
+  So if your program spawns children, they won't be traced by
+  Valgrind by default.  Also, if your program is started by a
+  shell script, Perl script, or something similar, Valgrind will
+  trace the shell, or the Perl interpreter, or equivalent.</para>
+
+  <para>To trace child processes, use the
+  <literal>--trace-children=yes</literal> option.</para>
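+
+  <para>For example, to follow into the children started by a
+  (hypothetical) wrapper script:</para>
+<screen>
+valgrind --tool=memcheck --trace-children=yes ./my_wrapper_script
+</screen>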
+
+  <para>If you are tracing large trees of processes, it can be
+  less disruptive to have the output sent over the network.  Give
+  Valgrind the flag
+  <literal>--log-socket=127.0.0.1:12345</literal> (if you want
+  logging output sent to <literal>port 12345</literal> on
+  <literal>localhost</literal>).  You can use the
+  valgrind-listener program to listen on that port:</para>
+<programlisting>
+valgrind-listener 12345
+</programlisting>
+
+  <para>Obviously you have to start the listener process first.
+  See the Manual: <ulink url="http://www.valgrind.org/docs/bookset/manual-core.out2file.html">Directing output to file</ulink> for more details.</para>
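+
+  <para>As a sketch, the two halves might then be started like
+  this, in separate shells (the port number and program name are
+  only examples):</para>
+<screen>
+valgrind-listener 12345
+valgrind --tool=memcheck --log-socket=127.0.0.1:12345 ./my_program
+</screen>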
+ </answer>
+</qandaentry>
+
+
+<qandaentry id="faq.overruns">
+ <question>
+  <para>Why doesn't Memcheck find the array overruns in this program?</para>
+ </question>
+ <answer>
+<programlisting>
+int static_array[5];
+
+int main(void)
+{
+  int stack[5];
+
+  static_array[5] = 0;  /* overrun of the static array */
+  stack[5]        = 0;  /* overrun of the stack array */
+          
+  return 0;
+}
+</programlisting>
+  <para>Unfortunately, Memcheck doesn't do bounds checking on
+  static or stack arrays.  We'd like to, but it's just not
+  possible to do in a reasonable way that fits with how Memcheck
+  works.  Sorry.</para>
+ </answer>
+</qandaentry>
+
+
+<qandaentry id="faq.segfault">
+ <question>
+  <para>My program dies with a segmentation fault, but Memcheck
+  doesn't give any error messages before it, or none that look
+  related.</para>
+ </question>
+ <answer>
+  <para>One possibility is that your program accesses memory
+  with inappropriate permissions, such as writing to
+  read-only memory.  Maybe your program is writing to a static
+  string like this:</para>
+<programlisting>
+char* s = "hello";
+s[0] = 'j';
+</programlisting>
+
+  <para>or something similar.  Writing to read-only memory can
+  also apparently make LinuxThreads behave strangely.</para>
+ </answer>
+</qandaentry>
+
+</qandaset>
+</chapter>
+
+
+<chapter id="faq.misc" 
+       xreflabel="Miscellaneous">
+<title>Miscellaneous</title>
+<qandaset id="qset.misc">
+
+<qandaentry id="faq.writesupp">
+ <question>
+  <para>I tried writing a suppression but it didn't work.  Can
+  you write my suppression for me?</para>
+ </question>
+ <answer>
+  <para>Yes!  Use the
+  <computeroutput>--gen-suppressions=yes</computeroutput> feature
+  to spit out suppressions automatically for you.  You can then
+  edit them if you like, eg.  combining similar automatically
+  generated suppressions using wildcards like
+  <literal>'*'</literal>.</para>
+
+  <para>If you really want to write suppressions by hand, read
+  the manual carefully.  Note particularly that C++ function
+  names must be <literal>_mangled_</literal>.</para>
+ </answer>
+</qandaentry>
+
+
+<qandaentry id="faq.deflost">
+ <question>
+  <para>With Memcheck/Addrcheck's memory leak detector, what's
+  the difference between "definitely lost", "possibly lost",
+  "still reachable", and "suppressed"?</para>
+ </question>
+ <answer>
+  <para>The details are in the Manual: 
+   <ulink url="http://www.valgrind.org/docs/bookset/mc-manual.leaks.html">Memory leak detection</ulink>.</para>
+
+  <para>In short:</para>
+   <itemizedlist>
+    <listitem>
+     <para>"definitely lost" means your program is leaking memory
+     -- fix it!</para>
+    </listitem>
+    <listitem>
+     <para>"possibly lost" means your program is probably leaking
+     memory, unless you're doing funny things with
+     pointers.</para>
+    </listitem>
+    <listitem>
+     <para>"still reachable" means your program is probably ok --
+     it didn't free some memory it could have.  This is quite
+     common and often reasonable.  Don't use
+     <computeroutput>--show-reachable=yes</computeroutput> if you
+     don't want to see these reports.</para>
+    </listitem>
+    <listitem>
+     <para>"suppressed" means that a leak error has been
+     suppressed.  There are some suppressions in the default
+     suppression files.  You can ignore suppressed errors.</para>
+    </listitem>
+   </itemizedlist>
+  </answer>
+</qandaentry>
+
+
+</qandaset>
+</chapter>
+
+
+<!-- template 
+<chapter id="faq." 
+       xreflabel="xx">
+<title>xx</title>
+<qandaset id="qset.">
+
+<qandaentry id="faq.deflost">
+ <question>
+  <para></para>
+ </question>
+ <answer>
+  <para></para>
+ </answer>
+</qandaentry>
+
+</qandaset>
+</chapter>
+-->
+
+
+
+<chapter id="faq.help" xreflabel="How To Get Further Assistance">
+<title>How To Get Further Assistance</title>
+
+
+<para>Please read all of this section before posting.</para>
+
+<para>If you think an answer is incomplete or inaccurate, please
+e-mail <ulink url="mailto:&vg-vemail;">&vg-vemail;</ulink>.</para>
+
+<para>Read the appropriate section(s) of the Manual(s): 
+<ulink url="http://www.valgrind.org/docs/">Valgrind 
+Documentation</ulink>.</para>
+
+<para>Read the <ulink url="http://www.valgrind.org/docs/">Distribution Documents</ulink>.</para>
+
+<para><ulink url="http://search.gmane.org">Search</ulink> the 
+<ulink url="http://news.gmane.org/gmane.comp.debugging.valgrind">valgrind-users</ulink> mailing list archives, using the group name 
+<computeroutput>gmane.comp.debugging.valgrind</computeroutput>.</para>
+
+<para>Only when you have tried all of these things and are still
+stuck should you post to the <ulink url="&vg-users-list;">valgrind-users
+mailing list</ulink>.  If you do, please read the following
+carefully.  Making a complete posting will greatly increase the chances
+that an expert or fellow user reading it will have enough information
+and motivation to reply.</para>
+
+<para>Make sure you give full details of the problem,
+including the full output of <computeroutput>valgrind
+-v</computeroutput>, if applicable.  Also say which Linux distribution
+you're using (Red Hat, Debian, etc) and its version number.</para>
+
+<para>You are in little danger of making your posting too long
+unless you include large chunks of valgrind's (unsuppressed)
+output, so err on the side of giving too much information.</para>
+
+<para>Clearly written subject lines and message bodies are appreciated,
+too.</para>
+
+<para>Finally, remember that, despite the fact that most of the
+community are very helpful and responsive to emailed questions,
+you are probably requesting help from unpaid volunteers, so you
+have no guarantee of receiving an answer.</para>
+
+</chapter>
+
+</book>
diff --git a/docs/xml/Makefile.am b/docs/xml/Makefile.am
new file mode 100644
index 0000000..c07286e
--- /dev/null
+++ b/docs/xml/Makefile.am
@@ -0,0 +1,10 @@
+EXTRA_DIST =  \
+	index.xml 		\
+	FAQ.xml 		\
+	manual.xml manual-intro.xml manual-core.xml \
+	writing-tools.xml	\
+	dist-docs.xml 		\
+	tech-docs.xml 		\
+	licenses.xml 		\
+	vg-entities.xml 	\
+	xml_help.txt
diff --git a/docs/xml/dist-docs.xml b/docs/xml/dist-docs.xml
new file mode 100644
index 0000000..6fb0244
--- /dev/null
+++ b/docs/xml/dist-docs.xml
@@ -0,0 +1,82 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<book id="dist" xreflabel="Distribution Documents">
+
+  <bookinfo>
+    <title>Distribution Documents</title>
+  </bookinfo>
+
+  <!-- Nb: because these are all text files, we have to wrap them in suitable
+       XML.  Hence the chapter/title stuff -->
+
+  <chapter id="dist.acknowledge" xreflabel="Acknowledgements">
+    <title>ACKNOWLEDGEMENTS</title>
+    <literallayout>
+      <xi:include href="../../ACKNOWLEDGEMENTS" parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+
+  <chapter id="dist.authors" xreflabel="Valgrind Developers">
+    <title id="dist.authors.title">AUTHORS</title>
+    <literallayout>
+        <xi:include href="../../AUTHORS" parse="text"  
+            xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+
+  <chapter id="dist.install" xreflabel="Install">
+    <title>INSTALL</title>
+    <literallayout>
+      <xi:include href="../../INSTALL" parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+
+  <chapter id="dist.news" xreflabel="News">
+    <title>NEWS</title>
+    <literallayout>
+      <xi:include href="../../NEWS" parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+
+  <chapter id="dist.readme" xreflabel="Readme">
+    <title>README</title>
+    <literallayout>
+      <xi:include href="../../README" parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+
+  <chapter id="dist.readme-missing" 
+             xreflabel="Readme Missing Syscall or Ioctl">
+    <title>README_MISSING_SYSCALL_OR_IOCTL</title>
+    <literallayout>
+      <xi:include href="../../README_MISSING_SYSCALL_OR_IOCTL" 
+          parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+
+  <chapter id="dist.readme-packagers" 
+             xreflabel="Readme Packagers">
+    <title>README_PACKAGERS</title>
+    <literallayout>
+      <xi:include href="../../README_PACKAGERS" 
+          parse="text" 
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+
+  <chapter id="dist.todo" xreflabel="Todo">
+    <title>TODO</title>
+    <literallayout>
+      <xi:include href="../../TODO" 
+          parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+    </literallayout>
+    </chapter>
+</book>
diff --git a/docs/xml/index.xml b/docs/xml/index.xml
new file mode 100644
index 0000000..97d7231
--- /dev/null
+++ b/docs/xml/index.xml
@@ -0,0 +1,54 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE set PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
+[
+ <!-- various strings, dates etc. common to all docs -->
+ <!ENTITY % vg-entities SYSTEM "vg-entities.xml"> %vg-entities;
+]>
+
+<set lang="en" id="index">
+
+  <setinfo>
+    <title>Valgrind Documentation</title>
+    <releaseinfo>&rel-type; &rel-version; &rel-date;</releaseinfo>
+    <copyright>
+      <year>&vg-lifespan;</year>
+      <holder>
+       <link linkend="dist.authors" endterm="dist.authors.title"></link>
+      </holder>
+    </copyright>
+
+    <legalnotice>
+      <para>Permission is granted to copy, distribute and/or
+      modify this document under the terms of the GNU Free
+      Documentation License, Version 1.2 or any later version
+      published by the Free Software Foundation; with no
+      Invariant Sections, with no Front-Cover Texts, and with no
+      Back-Cover Texts.  A copy of the license is included in the
+      section entitled <xref linkend="license.gfdl"/>.</para>
+    </legalnotice>
+
+  </setinfo>
+
+  <!-- User Manual -->
+  <xi:include href="manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+
+  <!-- FAQ -->
+  <xi:include href="FAQ.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+
+  <!-- Technical Docs -->
+  <xi:include href="tech-docs.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+
+  <!-- Distribution Docs -->
+  <xi:include href="dist-docs.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+
+  <!-- GNU Licenses -->
+  <xi:include href="licenses.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+
+
+</set>
diff --git a/docs/xml/licenses.xml b/docs/xml/licenses.xml
new file mode 100644
index 0000000..2b31e9a
--- /dev/null
+++ b/docs/xml/licenses.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<book id="licenses" xreflabel="GNU Licenses">
+
+  <bookinfo>
+   <title>GNU Licenses</title>
+  </bookinfo>
+
+  <chapter id="license.gpl" xreflabel=" The GNU General Public License">
+    <title>The GNU General Public License</title>
+      <literallayout>
+      <xi:include href="../../COPYING" 
+          parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+      </literallayout>
+  </chapter>
+
+  <chapter id="license.gfdl" xreflabel="The GNU Free Documentation License">
+    <title>The GNU Free Documentation License</title>
+      <literallayout>
+      <xi:include href="../../COPYING.DOCS" 
+          parse="text"  
+          xmlns:xi="http://www.w3.org/2001/XInclude" />
+      </literallayout>
+  </chapter>
+
+</book>
diff --git a/docs/xml/manual-core.xml b/docs/xml/manual-core.xml
new file mode 100644
index 0000000..6ed8eea
--- /dev/null
+++ b/docs/xml/manual-core.xml
@@ -0,0 +1,1951 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="manual-core" xreflabel="Valgrind's core">
+<title>Using and understanding the Valgrind core</title>
+
+<para>This section describes the Valgrind core services, flags
+and behaviours.  That means it is relevant regardless of what
+particular tool you are using.  A point of terminology: most
+references to "valgrind" in the rest of this section (Section 2)
+refer to the valgrind core services.</para>
+
+<sect1 id="manual-core.whatdoes" 
+       xreflabel="What Valgrind does with your program">
+<title>What Valgrind does with your program</title>
+
+<para>Valgrind is designed to be as non-intrusive as possible. It
+works directly with existing executables. You don't need to
+recompile, relink, or otherwise modify, the program to be
+checked.</para>
+
+<para>Simply put <computeroutput>valgrind
+--tool=tool_name</computeroutput> at the start of the command
+line normally used to run the program.  For example, if you want to
+run the command <computeroutput>ls -l</computeroutput> using the
+heavyweight memory-checking tool Memcheck, issue the
+command:</para>
+
+<programlisting><![CDATA[
+valgrind --tool=memcheck ls -l]]></programlisting>
+
+<para>Regardless of which tool is in use, Valgrind takes control
+of your program before it starts.  Debugging information is read
+from the executable and associated libraries, so that error
+messages and other outputs can be phrased in terms of source code
+locations (if that is appropriate).</para>
+
+<para>Your program is then run on a synthetic x86 CPU provided by
+the Valgrind core.  As new code is executed for the first time,
+the core hands the code to the selected tool.  The tool adds its
+own instrumentation code to this and hands the result back to the
+core, which coordinates the continued execution of this
+instrumented code.</para>
+
+<para>The amount of instrumentation code added varies widely
+between tools.  At one end of the scale, Memcheck adds code to
+check every memory access and every value computed, increasing
+the size of the code at least 12 times, and making it run 25-50
+times slower than natively.  At the other end of the spectrum,
+the ultra-trivial "none" tool (a.k.a. Nulgrind) adds no
+instrumentation at all and causes in total "only" about a 4 times
+slowdown.</para>
+
+<para>Valgrind simulates every single instruction your program
+executes.  Because of this, the active tool checks, or profiles,
+not only the code in your application but also in all supporting
+dynamically-linked (<computeroutput>.so</computeroutput>-format)
+libraries, including the GNU C library, the X client libraries,
+Qt, if you work with KDE, and so on.</para>
+
+<para>If you're using one of the error-detection tools, Valgrind
+will often detect errors in libraries, for example the GNU C or
+X11 libraries, which you have to use.  You might not be
+interested in these errors, since you probably have no control
+over that code.  Therefore, Valgrind allows you to selectively
+suppress errors, by recording them in a suppressions file which
+is read when Valgrind starts up.  The build mechanism attempts to
+select suppressions which give reasonable behaviour for the libc
+and XFree86 versions detected on your machine.  To make it easier
+to write suppressions, you can use the
+<computeroutput>--gen-suppressions=yes</computeroutput> option
+which tells Valgrind to print out a suppression for each error
+that appears, which you can then copy into a suppressions
+file.</para>
+
+<para>Different error-checking tools report different kinds of
+errors.  The suppression mechanism therefore allows you to say
+which tool or tool(s) each suppression applies to.</para>
+
+</sect1>
+
+
+<sect1 id="manual-core.started" xreflabel="Getting started">
+<title>Getting started</title>
+
+<para>First off, consider whether it might be beneficial to
+recompile your application and supporting libraries with
+debugging info enabled (the <computeroutput>-g</computeroutput>
+flag).  Without debugging info, the best Valgrind tools will be
+able to do is guess which function a particular piece of code
+belongs to, which makes both error messages and profiling output
+nearly useless.  With <computeroutput>-g</computeroutput>, you'll
+hopefully get messages which point directly to the relevant
+source code lines.</para>
+
+<para>Another flag you might like to consider, if you are working
+with C++, is <computeroutput>-fno-inline</computeroutput>.  That
+makes it easier to see the function-call chain, which can help
+reduce confusion when navigating around large C++ apps.  For
+whatever it's worth, debugging OpenOffice.org with Memcheck is a
+bit easier when using this flag.</para>
+
+<para>You don't have to do this, but doing so helps Valgrind
+produce more accurate and less confusing error reports.  Chances
+are you're set up like this already, if you intended to debug
+your program with GNU gdb, or some other debugger.</para>
+
+<para>This paragraph applies only if you plan to use Memcheck: On
+rare occasions, optimisation levels at
+<computeroutput>-O2</computeroutput> and above have been observed
+to generate code which fools Memcheck into wrongly reporting
+uninitialised value errors.  We have looked in detail into fixing
+this, and unfortunately the result is that doing so would give a
+further significant slowdown in what is already a slow tool.  So
+the best solution is to turn off optimisation altogether.  Since
+this often makes things unmanagably slow, a plausible compromise
+is to use <computeroutput>-O</computeroutput>.  This gets you the
+majority of the benefits of higher optimisation levels whilst
+keeping relatively small the chances of false complaints from
+Memcheck.  All other tools (as far as we know) are unaffected by
+optimisation level.</para>
+
+<para>Valgrind understands both the older "stabs" debugging
+format, used by gcc versions prior to 3.1, and the newer DWARF2
+format used by gcc 3.1 and later.  We continue to refine and
+debug our debug-info readers, although the majority of effort
+will naturally enough go into the newer DWARF2 reader.</para>
+
+<para>When you're ready to roll, just run your application as you
+would normally, but place <computeroutput>valgrind
+--tool=tool_name</computeroutput> in front of your usual
+command-line invocation.  Note that you should run the real
+(machine-code) executable here.  If your application is started
+by, for example, a shell or perl script, you'll need to modify it
+to invoke Valgrind on the real executables.  Running such scripts
+directly under Valgrind will result in you getting error reports
+pertaining to <computeroutput>/bin/sh</computeroutput>,
+<computeroutput>/usr/bin/perl</computeroutput>, or whatever
+interpreter you're using.  This may not be what you want and can
+be confusing.  You can force the issue by giving the flag
+<computeroutput>--trace-children=yes</computeroutput>, but
+confusion is still likely.</para>
+
+</sect1>
+
+
+<sect1 id="manual-core.comment" xreflabel="The commentary">
+<title>The commentary</title>
+
+<para>Valgrind tools write a commentary, a stream of text,
+detailing error reports and other significant events.  All lines
+in the commentary have following form:
+
+<programlisting><![CDATA[
+==12345== some-message-from-Valgrind]]></programlisting>
+</para>
+
+<para>The <computeroutput>12345</computeroutput> is the process
+ID.  This scheme makes it easy to distinguish program output from
+Valgrind commentary, and also easy to differentiate commentaries
+from different processes which have become merged together, for
+whatever reason.</para>
+
+<para>By default, Valgrind tools write only essential messages to
+the commentary, so as to avoid flooding you with information of
+secondary importance.  If you want more information about what is
+happening, re-run, passing the
+<computeroutput>-v</computeroutput> flag to Valgrind.</para>
+
+<para>You can direct the commentary to three different
+places:</para>
+
+<orderedlist>
+
+ <listitem id="manual-core.out2fd" xreflabel="Directing output to fd">
+   <para>The default: send it to a file descriptor, which is by
+   default 2 (stderr).  So, if you give the core no options, it
+   will write commentary to the standard error stream.  If you
+   want to send it to some other file descriptor, for example
+   number 9, you can specify
+   <computeroutput>--log-fd=9</computeroutput>.</para>
+  </listitem>
+
+  <listitem id="manual-core.out2file" 
+            xreflabel="Directing output to file">
+   <para>A less intrusive option is to write the commentary to a
+   file, which you specify by
+   <computeroutput>--log-file=filename</computeroutput>.  Note
+   carefully that the commentary is <command>not</command>
+   written to the file you specify, but instead to one called
+   <computeroutput>filename.pid12345</computeroutput>, if for
+   example the pid of the traced process is 12345.  This is
+   helpful when valgrinding a whole tree of processes at once,
+   since it means that each process writes to its own logfile,
+   rather than the result being jumbled up in one big
+   logfile.</para>
+  </listitem>
+
+  <listitem id="manual-core.out2socket" 
+            xreflabel="Directing output to network socket">
+   <para>The least intrusive option is to send the commentary to
+   a network socket.  The socket is specified as an IP address
+   and port number pair, like this:
+   <computeroutput>--log-socket=192.168.0.1:12345</computeroutput>
+   if you want to send the output to host IP 192.168.0.1 port
+   12345 (I have no idea if 12345 is a port of pre-existing
+   significance).  You can also omit the port number:
+   <computeroutput>--log-socket=192.168.0.1</computeroutput>, in
+   which case a default port of 1500 is used.  This default is
+   defined by the constant
+   <computeroutput>VG_CLO_DEFAULT_LOGPORT</computeroutput> in the
+   sources.</para>
+
+   <para>Note, unfortunately, that you have to use an IP address
+   here, rather than a hostname.</para>
+
+   <para>Writing to a network socket is pretty useless if you
+   don't have something listening at the other end.  We provide a
+   simple listener program,
+   <computeroutput>valgrind-listener</computeroutput>, which
+   accepts connections on the specified port and copies whatever
+   it is sent to stdout.  Probably someone will tell us this is a
+   horrible security risk.  It seems likely that people will
+   write more sophisticated listeners in the fullness of
+   time.</para>
+
+   <para>valgrind-listener can accept simultaneous connections
+   from up to 50 valgrinded processes.  In front of each line of
+   output it prints the current number of active connections in
+   round brackets.</para>
+
+   <para>valgrind-listener accepts two command-line flags:</para>
+    <itemizedlist>
+     <listitem>
+      <para><computeroutput>-e</computeroutput> or 
+      <computeroutput>--exit-at-zero</computeroutput>: when the
+      number of connected processes falls back to zero, exit.
+      Without this, it will run forever, that is, until you send
+      it Control-C.</para>
+     </listitem>
+     <listitem>
+      <para><computeroutput>portnumber</computeroutput>: changes
+      the port it listens on from the default (1500).  The
+      specified port must be in the range 1024 to 65535.  The
+      same restriction applies to port numbers specified by a
+      <computeroutput>--log-socket=</computeroutput> to Valgrind
+      itself.</para>
+     </listitem>
+    </itemizedlist>
+
+    <para>If a valgrinded process fails to connect to a listener,
+    for whatever reason (the listener isn't running, invalid or
+    unreachable host or port, etc), Valgrind switches back to
+    writing the commentary to stderr.  The same goes for any
+    process which loses an established connection to a listener.
+    In other words, killing the listener doesn't kill the
+    processes sending data to it.</para>
+  </listitem>
+ </orderedlist>
+
+<para>Here is an important point about the relationship between
+the commentary and profiling output from tools.  The commentary
+contains a mix of messages from the Valgrind core and the
+selected tool.  If the tool reports errors, it will report them
+to the commentary.  However, if the tool does profiling, the
+profile data will be written to a file of some kind, depending on
+the tool, and independent of what
+<computeroutput>--log-*</computeroutput> options are in force.
+The commentary is intended to be a low-bandwidth, human-readable
+channel.  Profiling data, on the other hand, is usually
+voluminous and not meaningful without further processing, which
+is why we have chosen this arrangement.</para>
+
+</sect1>
+
+
+<sect1 id="manual-core.report" xreflabel="Reporting of errors">
+<title>Reporting of errors</title>
+
+<para>When one of the error-checking tools (Memcheck, Addrcheck,
+Helgrind) detects something bad happening in the program, an
+error message is written to the commentary.  For example:</para>
+
+<programlisting><![CDATA[
+==25832== Invalid read of size 4
+==25832==    at 0x8048724: BandMatrix::ReSize(int, int, int) (bogon.cpp:45)
+==25832==    by 0x80487AF: main (bogon.cpp:66)
+==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
+==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
+==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd]]></programlisting>
+
+<para>This message says that the program did an illegal 4-byte
+read of address 0xBFFFF74C, which, as far as Memcheck can tell,
+is not a valid stack address, nor corresponds to any currently
+malloc'd or free'd blocks.  The read is happening at line 45 of
+<filename>bogon.cpp</filename>, called from line 66 of the same
+file, etc.  For errors associated with an identified
+malloc'd/free'd block, for example reading free'd memory,
+Valgrind reports not only the location where the error happened,
+but also where the associated block was malloc'd/free'd.</para>
+
+<para>Valgrind remembers all error reports.  When an error is
+detected, it is compared against old reports, to see if it is a
+duplicate.  If so, the error is noted, but no further commentary
+is emitted.  This avoids you being swamped with bazillions of
+duplicate error reports.</para>
+
+<para>If you want to know how many times each error occurred, run
+with the <computeroutput>-v</computeroutput> option.  When
+execution finishes, all the reports are printed out, along with,
+and sorted by, their occurrence counts.  This makes it easy to
+see which errors have occurred most frequently.</para>
+
+<para>Errors are reported before the associated operation
+actually happens.  If you're using a tool (Memcheck, Addrcheck)
+which does address checking, and your program attempts to read
+from address zero, the tool will emit a message to this effect,
+and the program will then duly die with a segmentation
+fault.</para>
+
+<para>In general, you should try and fix errors in the order that
+they are reported.  Not doing so can be confusing.  For example,
+a program which copies uninitialised values to several memory
+locations, and later uses them, will generate several error
+messages, when run on Memcheck.  The first such error message may
+well give the most direct clue to the root cause of the
+problem.</para>
+
+<para>The process of detecting duplicate errors is quite an
+expensive one and can become a significant performance overhead
+if your program generates huge quantities of errors.  To avoid
+serious problems here, Valgrind will simply stop collecting
+errors after 300 different errors have been seen, or 30000 errors
+in total have been seen.  In this situation you might as well
+stop your program and fix it, because Valgrind won't tell you
+anything else useful after this.  Note that the 300/30000 limits
+apply after suppressed errors are removed.  These limits are
+defined in <filename>vg_include.h</filename> and can be increased
+if necessary.</para>
+
+<para>To avoid this cutoff you can use the
+<computeroutput>--error-limit=no</computeroutput> flag.  Then
+Valgrind will always show errors, regardless of how many there
+are.  Use this flag carefully, since it may have a dire effect on
+performance.</para>
+
+</sect1>
+
+
+<sect1 id="manual-core.suppress" xreflabel="Suppressing errors">
+<title>Suppressing errors</title>
+
+<para>The error-checking tools detect numerous problems in the
+base libraries, such as the GNU C library, and the XFree86 client
+libraries, which come pre-installed on your GNU/Linux system.
+You can't easily fix these, but you don't want to see these
+errors (and yes, there are many!)  So Valgrind reads a list of
+errors to suppress at startup.  A default suppression file is
+cooked up by the <computeroutput>./configure</computeroutput>
+script when the system is built.</para>
+
+<para>You can modify and add to the suppressions file at your
+leisure, or, better, write your own.  Multiple suppression files
+are allowed.  This is useful if part of your project contains
+errors you can't or don't want to fix, yet you don't want to
+continuously be reminded of them.</para>
+
+<formalpara><title>Note:</title>
+<para>By far the easiest way to add suppressions is to use the
+<computeroutput>--gen-suppressions=yes</computeroutput> flag
+described in <xref linkend="manual-core.flags"/>.</para>
+</formalpara>
+
+<para>Each error to be suppressed is described very specifically,
+to minimise the possibility that a suppression-directive
+inadvertantly suppresses a bunch of similar errors which you did
+want to see.  The suppression mechanism is designed to allow
+precise yet flexible specification of errors to suppress.</para>
+
+<para>If you use the <computeroutput>-v</computeroutput> flag, at
+the end of execution, Valgrind prints out one line for each used
+suppression, giving its name and the number of times it got used.
+Here's the suppressions used by a run of <computeroutput>valgrind
+--tool=memcheck ls -l</computeroutput>:</para>
+
+<programlisting><![CDATA[
+--27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getgrgid_r
+--27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getpwuid_r
+--27579-- supp: 6 strrchr/_dl_map_object_from_fd/_dl_map_object]]></programlisting>
+
+<para>Multiple suppressions files are allowed.  By default,
+Valgrind uses
+<computeroutput>$PREFIX/lib/valgrind/default.supp</computeroutput>.
+You can ask to add suppressions from another file, by specifying
+<computeroutput>--suppressions=/path/to/file.supp</computeroutput>.
+</para>
+
+<para>If you want to understand more about suppressions, look at
+an existing suppressions file whilst reading the following
+documentation.  The file
+<computeroutput>glibc-2.2.supp</computeroutput>, in the source
+distribution, provides some good examples.</para>
+
+<para>Each suppression has the following components:</para>
+
+ <itemizedlist>
+
+  <listitem>
+   <para>First line: its name.  This merely gives a handy name to
+   the suppression, by which it is referred to in the summary of
+   used suppressions printed out when a program finishes.  It's
+   not important what the name is; any identifying string will
+   do.</para>
+  </listitem>
+
+  <listitem>
+   <para>Second line: name of the tool(s) that the suppression is
+   for (if more than one, comma-separated), and the name of the
+   suppression itself, separated by a colon (Nb: no spaces are
+   allowed), eg:</para>
+
+<programlisting><![CDATA[
+tool_name1,tool_name2:suppression_name]]></programlisting>
+
+   <para>Recall that Valgrind-2.0.X is a modular system, in which
+   different instrumentation tools can observe your program
+   whilst it is running.  Since different tools detect different
+   kinds of errors, it is necessary to say which tool(s) the
+   suppression is meaningful to.</para>
+
+   <para>Tools will complain, at startup, if a tool does not
+   understand any suppression directed to it.  Tools ignore
+   suppressions which are not directed to them.  As a result, it
+   is quite practical to put suppressions for all tools into the
+   same suppression file.</para>
+
+   <para>Valgrind's core can detect certain PThreads API errors,
+   for which this line reads:</para>
+
+<programlisting><![CDATA[
+core:PThread]]></programlisting>
+  </listitem>
+
+  <listitem>
+   <para>Next line: a small number of suppression types have
+   extra information after the second line (eg. the
+   <computeroutput>Param</computeroutput> suppression for
+   Memcheck)</para>
+  </listitem>
+
+  <listitem>
+   <para>Remaining lines: This is the calling context for the
+   error -- the chain of function calls that led to it.  There
+   can be up to four of these lines.</para>
+
+   <para>Locations may be either names of shared
+   objects/executables or wildcards matching function names.
+   They begin <computeroutput>obj:</computeroutput> and
+   <computeroutput>fun:</computeroutput> respectively.  Function
+   and object names to match against may use the wildcard
+   characters <computeroutput>*</computeroutput> and
+   <computeroutput>?</computeroutput>.</para>
+
+   <formalpara><title>Important note:</title>
+    <para>C++ function names must be <command>mangled</command>.
+    If you are writing suppressions by hand, use the
+    <computeroutput>--demangle=no</computeroutput> option to get
+    the mangled names in your error messages.</para>
+   </formalpara>
+  </listitem>
+
+  <listitem>
+   <para>Finally, the entire suppression must be between curly
+   braces. Each brace must be the first character on its own
+   line.</para>
+  </listitem>
+
+ </itemizedlist>
+
+<para>A suppression only suppresses an error when the error
+matches all the details in the suppression.  Here's an
+example:</para>
+
+<programlisting><![CDATA[
+{
+  __gconv_transform_ascii_internal/__mbrtowc/mbtowc
+  Memcheck:Value4
+  fun:__gconv_transform_ascii_internal
+  fun:__mbr*toc
+  fun:mbtowc
+}]]></programlisting>
+
+
+<para>What it means is: for Memcheck only, suppress a
+use-of-uninitialised-value error, when the data size is 4, when
+it occurs in the function
+<computeroutput>__gconv_transform_ascii_internal</computeroutput>,
+when that is called from any function of name matching
+<computeroutput>__mbr*toc</computeroutput>, when that is called
+from <computeroutput>mbtowc</computeroutput>.  It doesn't apply
+under any other circumstances.  The string by which this
+suppression is identified to the user is
+<computeroutput>__gconv_transform_ascii_internal/__mbrtowc/mbtowc</computeroutput>.</para>
+
+<para>(See <xref linkend="mc-manual.suppfiles"/> for more details
+on the specifics of Memcheck's suppression kinds.)</para>
+
+<para>Another example, again for the Memcheck tool:</para>
+
+<programlisting><![CDATA[
+{
+  libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
+  Memcheck:Value4
+  obj:/usr/X11R6/lib/libX11.so.6.2
+  obj:/usr/X11R6/lib/libX11.so.6.2
+  obj:/usr/X11R6/lib/libXaw.so.7.0
+}]]></programlisting>
+
+<para>Suppress any size 4 uninitialised-value error which occurs
+anywhere in <computeroutput>libX11.so.6.2</computeroutput>, when
+called from anywhere in the same library, when called from
+anywhere in <computeroutput>libXaw.so.7.0</computeroutput>.  The
+inexact specification of locations is regrettable, but is about
+all you can hope for, given that the X11 libraries shipped with
+Red Hat 7.2 have had their symbol tables removed.</para>
+
+<para>Note: since the above two examples did not make it clear,
+you can freely mix the <computeroutput>obj:</computeroutput> and
+<computeroutput>fun:</computeroutput> styles of description
+within a single suppression record.</para>
+
+</sect1>
+
+
+<sect1 id="manual-core.flags" 
+       xreflabel="Command-line flags for the Valgrind core">
+<title>Command-line flags for the Valgrind core</title>
+
+<para>As mentioned above, Valgrind's core accepts a common set of
+flags.  The tools also accept tool-specific flags, which are
+documented seperately for each tool.</para>
+
+<para>You invoke Valgrind like this:</para>
+
+<programlisting><![CDATA[
+valgrind --tool=<emphasis>tool_name</emphasis> [valgrind-options] your-prog [your-prog options]]]></programlisting>
+
+<para>Valgrind's default settings succeed in giving reasonable
+behaviour in most cases.  We group the available options by rough
+categories.</para>
+
+<sect2 id="manual-core.toolopts" xreflabel="Tool-selection option">
+<title>Tool-selection option</title>
+
+<para>The single most important option.</para>
+  <itemizedlist>
+   <listitem>
+    <para><computeroutput>--tool=name</computeroutput></para>
+    <para>Run the Valgrind tool called <emphasis>name</emphasis>,
+    e.g. Memcheck, Addrcheck, Cachegrind, etc.</para>
+   </listitem>
+  </itemizedlist>
+</sect2>
+
+<sect2 id="manual-core.basicopts" xreflabel="Basic Options">
+<title>Basic Options</title>
+
+<para>These options work with all tools.</para>
+
+ <itemizedlist>
+   <listitem>
+    <para><computeroutput>--help</computeroutput></para>
+    <para>Show help for all options, both for the core and for
+    the selected tool.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--help-debug</computeroutput></para>
+    <para>Same as <computeroutput>--help</computeroutput>, but
+     also lists debugging options which usually are only of use
+     to developers.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--version</computeroutput></para>
+    <para>Show the version number of the Valgrind core.  Tools
+    can have their own version numbers.  There is a scheme in
+    place to ensure that tools only execute when the core version
+    is one they are known to work with.  This was done to
+    minimise the chances of strange problems arising from
+    tool-vs-core version incompatibilities.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>-v --verbose</computeroutput></para>
+    <para>Be more verbose.  Gives extra information on various
+    aspects of your program, such as: the shared objects loaded,
+    the suppressions used, the progress of the instrumentation
+    and execution engines, and warnings about unusual behaviour.
+    Repeating the flag increases the verbosity level.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>-q --quiet</computeroutput></para>
+    <para>Run silently, and only print error messages.  Useful if
+    you are running regression tests or have some other automated
+    test machinery.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-children=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--trace-children=yes</computeroutput></para>
+    <para>When enabled, Valgrind will trace into child processes.
+    This is confusing and usually not what you want, so is
+    disabled by default.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--log-fd=&lt;number&gt;</computeroutput>
+    [default: 2, stderr]</para>
+    <para>Specifies that Valgrind should send all of its messages
+    to the specified file descriptor.  The default, 2, is the
+    standard error channel (stderr).  Note that this may
+    interfere with the client's own use of stderr.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--log-file=&lt;filename&gt;</computeroutput></para>
+    <para>Specifies that Valgrind should send all of its messages
+    to the specified file.  In fact, the file name used is
+    created by concatenating the text
+    <computeroutput>filename</computeroutput>, ".pid" and the
+    process ID, so as to create a file per process.  The
+    specified file name may not be the empty string.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--log-socket=&lt;ip-address:port-number&gt;</computeroutput></para>
+    <para>Specifies that Valgrind should send all of its messages
+    to the specified port at the specified IP address.  The port
+    may be omitted, in which case port 1500 is used.  If a
+    connection cannot be made to the specified socket, Valgrind
+    falls back to writing output to the standard error (stderr).
+    This option is intended to be used in conjunction with the
+    <computeroutput>valgrind-listener</computeroutput> program.
+    For further details, see <xref linkend="manual-core.comment"/>.</para>
+   </listitem>
+
+  </itemizedlist>
+ </sect2>
+
+
+<sect2 id="manual-core.erropts" xreflabel="Error-related Options">
+<title>Error-related options</title>
+
+<para>These options are used by all tools that can report
+errors, e.g. Memcheck, but not Cachegrind.</para>
+
+  <itemizedlist>
+
+   <listitem>
+    <para><computeroutput>--demangle=no</computeroutput></para>
+    <para><computeroutput>--demangle=yes</computeroutput> [default]</para>
+    <para>Disable/enable automatic demangling (decoding) of C++
+    names.  Enabled by default.  When enabled, Valgrind will
+    attempt to translate encoded C++ procedure names back to
+    something approaching the original.  The demangler handles
+    symbols mangled by g++ versions 2.X and 3.X.</para>
+
+    <para>An important fact about demangling is that function
+    names mentioned in suppressions files should be in their
+    mangled form.  Valgrind does not demangle function names when
+    searching for applicable suppressions, because to do
+    otherwise would make suppressions file contents dependent on
+    the state of Valgrind's demangling machinery, and would also
+    be slow and pointless.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--num-callers=&lt;number&gt;</computeroutput> [default=4]</para>
+    <para>By default, Valgrind shows four levels of function call
+    names to help you identify program locations.  You can change
+    that number with this option.  This can help in determining
+    the program's location in deeply-nested call chains.  Note
+    that errors are commoned up using only the top three function
+    locations (the place in the current function, and that of its
+    two immediate callers).  So this doesn't affect the total
+    number of errors reported.</para>
+
+    <para>The maximum value for this is 50.  Note that higher
+    settings will make Valgrind run a bit more slowly and take a
+    bit more memory, but can be useful when working with programs
+    with deeply-nested call chains.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--error-limit=yes</computeroutput>
+    [default]</para>
+    <para><computeroutput>--error-limit=no</computeroutput></para>
+    <para>When enabled, Valgrind stops reporting errors after
+    30000 in total, or 300 different ones, have been seen.  This
+    is to stop the error tracking machinery from becoming a huge
+    performance overhead in programs with many errors.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--show-below-main=yes</computeroutput></para>
+    <para><computeroutput>--show-below-main=no</computeroutput>
+    [default]</para>
+    <para>By default, stack traces for errors do not show any
+    functions that appear beneath
+    <computeroutput>main()</computeroutput>; most of the time
+    it's uninteresting C library stuff.  If this option is
+    enabled, these entries below
+    <computeroutput>main()</computeroutput> will be shown.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--suppressions=&lt;filename&gt;</computeroutput>
+    [default: $PREFIX/lib/valgrind/default.supp]</para>
+    <para>Specifies an extra file from which to read descriptions
+    of errors to suppress.  You may use as many extra
+    suppressions files as you like.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--gen-suppressions=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--gen-suppressions=yes</computeroutput></para>
+    <para>When enabled, Valgrind will pause after every error
+    shown, and print the line: <computeroutput>---- Print
+    suppression ?  --- [Return/N/n/Y/y/C/c] ----</computeroutput></para>
+
+    <para>The prompt's behaviour is the same as for the 
+    <computeroutput>--db-attach</computeroutput> option.</para>
+
+    <para>If you choose to, Valgrind will print out a suppression
+    for this error.  You can then cut and paste it into a
+    suppression file if you don't want to hear about the error in
+    the future.</para>
+
+    <para>This option is particularly useful with C++ programs,
+    as it prints out the suppressions with mangled names, as
+    required.</para>
+
+    <para>Note that the suppressions printed are as specific as
+    possible.  You may want to common up similar ones, eg. by
+    adding wildcards to function names.  Also, sometimes two
+    different errors are suppressed by the same suppression, in
+    which case Valgrind will output the suppression more than
+    once, but you only need to have one copy in your suppression
+    file (but having more than one won't cause problems).  Also,
+    the suppression name is given as <computeroutput>&lt;insert a
+    suppression name here&gt;</computeroutput>; the name doesn't
+    really matter, it's only used with the
+    <computeroutput>-v</computeroutput> option which prints out
+    all used suppression records.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--track-fds=no</computeroutput> [default]</para>
+    <para><computeroutput>--track-fds=yes</computeroutput></para>
+    <para>When enabled, Valgrind will print out a list of open
+    file descriptors on exit.  Along with each file descriptor,
+    Valgrind prints out a stack backtrace of where the file was
+    opened and any details relating to the file descriptor such
+    as the file name or socket details.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--db-attach=no</computeroutput> [default]</para>
+    <para><computeroutput>--db-attach=yes</computeroutput></para>
+    <para>When enabled, Valgrind will pause after every error
+    shown, and print the line: <computeroutput>---- Attach to
+    debugger ? --- [Return/N/n/Y/y/C/c] ----</computeroutput></para>
+
+    <para>Pressing <literal>Ret</literal>, or 
+    <literal>N Ret</literal> or <literal>n Ret</literal>, causes
+    Valgrind not to start a debugger for this error.</para>
+
+    <para><literal>Y Ret</literal> or <literal>y Ret</literal>
+    causes Valgrind to start a debugger, for the program at this
+    point.  When you have finished with the debugger, quit from
+    it, and the program will continue.  Trying to continue from
+    inside the debugger doesn't work.</para>
+
+    <para><literal>C Ret</literal> or <literal>c Ret</literal>
+    causes Valgrind not to start a debugger, and not to ask
+    again.</para>
+
+    <formalpara>
+     <title>Note:</title>
+     <para><computeroutput>--db-attach=yes</computeroutput>
+     conflicts with
+     <computeroutput>--trace-children=yes</computeroutput>.  You
+     can't use them together.  Valgrind refuses to start up in
+     this situation.</para>
+    </formalpara>
+    <para>1 May 2002: this is a historical relic which could be
+    easily fixed if it gets in your way.  Mail me and complain if
+    this is a problem for you.</para> <para>Nov 2002: if you're
+    sending output to a logfile or to a network socket, I guess
+    this option doesn't make any sense.  Caveat emptor.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--db-command=&lt;command&gt;</computeroutput>
+    [default: gdb -nw %f %p]</para>
+    <para>This specifies how Valgrind will invoke the debugger.
+    By default it will use whatever GDB is detected at build
+    time, which is usually
+    <computeroutput>/usr/bin/gdb</computeroutput>.  Using this
+    command, you can specify some alternative command to invoke
+    the debugger you want to use.</para>
+
+    <para>The command string given can include one or instances
+    of the <literal>%p</literal> and <literal>%f</literal>
+    expansions. Each instance of <literal>%p</literal> expands to
+    the PID of the process to be debugged and each instance of
+    <literal>%f</literal> expands to the path to the executable
+    for the process to be debugged.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--input-fd=&lt;number&gt;</computeroutput>
+    [default=0, stdin]</para>
+    <para>When using
+    <computeroutput>--db-attach=yes</computeroutput> and
+    <computeroutput>--gen-suppressions=yes</computeroutput>,
+    Valgrind will stop so as to read keyboard input from you,
+    when each error occurs.  By default it reads from the
+    standard input (stdin), which is problematic for programs
+    which close stdin.  This option allows you to specify an
+    alternative file descriptor from which to read input.</para>
+   </listitem>
+
+  </itemizedlist>
+</sect2>
+
+<sect2 id="manual-core.mallocopts" xreflabel="malloc()-related Options">
+<title><computeroutput>malloc()</computeroutput>-related Options</title>
+
+<para>For tools that use their own version of
+<computeroutput>malloc()</computeroutput> (e.g. Memcheck and
+Addrcheck), the following options apply.</para>
+
+  <itemizedlist>
+
+   <listitem>
+    <para><computeroutput>--alignment=&lt;number&gt;</computeroutput>
+    [default: 8]</para>
+    <para>By default Valgrind's
+    <computeroutput>malloc</computeroutput>,
+    <computeroutput>realloc</computeroutput>, etc, return 4-byte
+    aligned addresses.  These are suitable for any accesses on
+    x86 processors.  Some programs might however assume that
+    <computeroutput>malloc</computeroutput> et al return 8- or
+    more aligned memory.  The supplied value must be between 4
+    and 4096 inclusive, and must be a power of two.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--sloppy-malloc=no</computeroutput>
+     [default]</para>
+    <para><computeroutput>--sloppy-malloc=yes</computeroutput></para>
+    <para>When enabled, all requests for malloc/calloc are
+    rounded up to a whole number of machine words -- in other
+    words, made divisible by 4.  For example, a request for 17
+    bytes of space would result in a 20-byte area being made
+    available.  This works around bugs in sloppy libraries which
+    assume that they can safely rely on malloc/calloc requests
+    being rounded up in this fashion.  Without the workaround,
+    these libraries tend to generate large numbers of errors when
+    they access the ends of these areas.</para>
+
+    <para>Valgrind snapshots dated 17 Feb 2002 and later are
+    cleverer about this problem, and you should no longer need to
+    use this flag.  To put it bluntly, if you do need to use this
+    flag, your program violates the ANSI C semantics defined for
+    <computeroutput>malloc</computeroutput> and
+    <computeroutput>free</computeroutput>, even if it appears to
+    work correctly, and you should fix it, at least if you hope
+    for maximum portability.</para>
+   </listitem>
+  </itemizedlist>
+
+ </sect2>
+
+
+ <sect2 id="manual-core.rareopts" xreflabel="Rare Options">
+  <title>Rare Options</title>
+
+  <para>These options apply to all tools, as they affect certain
+  obscure workings of the Valgrind core.  Most people won't need
+  to use these.</para>
+
+  <itemizedlist>
+
+   <listitem>
+    <para><computeroutput>--run-libc-freeres=yes</computeroutput>
+     [default]</para>
+    <para><computeroutput>--run-libc-freeres=no</computeroutput></para>
+    <para>The GNU C library
+    (<computeroutput>libc.so</computeroutput>), which is used by
+    all programs, may allocate memory for its own uses.  Usually
+    it doesn't bother to free that memory when the program ends -
+    there would be no point, since the Linux kernel reclaims all
+    process resources when a process exits anyway, so it would
+    just slow things down.</para>
+
+    <para>The glibc authors realised that this behaviour causes
+    leak checkers, such as Valgrind, to falsely report leaks in
+    glibc, when a leak check is done at exit.  In order to avoid
+    this, they provided a routine called
+    <computeroutput>__libc_freeres</computeroutput> specifically
+    to make glibc release all memory it has allocated.  Memcheck
+    and Addrcheck therefore try and run
+    <computeroutput>__libc_freeres</computeroutput> at
+    exit.</para>
+
+    <para>Unfortunately, in some versions of glibc,
+    <computeroutput>__libc_freeres</computeroutput> is
+    sufficiently buggy to cause segmentation faults.  This is
+    particularly noticeable on Red Hat 7.1.  So this flag is
+    provided in order to inhibit the run of
+    <computeroutput>__libc_freeres</computeroutput>.  If your
+    program seems to run fine on Valgrind, but segfaults at exit,
+    you may find that
+    <computeroutput>--run-libc-freeres=no</computeroutput> fixes
+    that, although at the cost of possibly falsely reporting
+    space leaks in
+    <computeroutput>libc.so</computeroutput>.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--weird-hacks=hack1,hack2,...</computeroutput></para>
+    <para>Pass miscellaneous hints to Valgrind which slightly
+    modify the simulated behaviour in nonstandard or dangerous
+    ways, possibly to help the simulation of strange features.
+    By default no hacks are enabled.  Use with caution!
+    Currently known hacks are:</para>
+    <itemizedlist>
+     <listitem><para><computeroutput>lax-ioctls</computeroutput></para>
+      <para>Be very lax about ioctl handling; the only assumption
+      is that the size is correct. Doesn't require the full
+      buffer to be initialized when writing.  Without this, using
+      some device drivers with a large number of strange ioctl
+      commands becomes very tiresome.</para>
+     </listitem>
+    </itemizedlist>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--signal-polltime=&lt;time&gt;</computeroutput>
+    [default=50]</para>
+    <para>How often to poll for signals (in milliseconds).  Only
+    applies for older kernels that need signal routing.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--lowlat-signals=no</computeroutput>
+     [default]</para>
+    <para><computeroutput>--lowlat-signals=yes</computeroutput></para>
+    <para>Improve wake-up latency when a thread receives a signal.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--lowlat-syscalls=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--lowlat-syscalls=yes</computeroutput></para>
+    <para>Improve wake-up latency when a thread's syscall
+    completes.</para>
+   </listitem>
+
+  </itemizedlist>
+</sect2>
+
+
+<sect2 id="manual-core.debugopts" xreflabel="Debugging Valgrind Options">
+<title>Debugging Valgrind Options</title>
+
+<para>There are also some options for debugging Valgrind itself.
+You shouldn't need to use them in the normal run of things.
+Nevertheless:</para>
+
+ <itemizedlist>
+
+   <listitem>
+    <para><computeroutput>--single-step=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--single-step=yes</computeroutput></para>
+    <para>When enabled, each x86 insn is translated separately
+    into instrumented code.  When disabled, translation is done
+    on a per-basic-block basis, giving much better
+    translations.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--optimise=no</computeroutput></para>
+    <para><computeroutput>--optimise=yes</computeroutput> [default]</para>
+    <para>When enabled, various improvements are applied to the
+    intermediate code, mainly aimed at allowing the simulated
+    CPU's registers to be cached in the real CPU's registers over
+    several simulated instructions.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--profile=no</computeroutput></para>
+    <para><computeroutput>--profile=yes</computeroutput> [default]</para>
+    <para>When enabled, does crude internal profiling of Valgrind
+    itself.  This is not for profiling your programs.  Rather it
+    is to allow the developers to assess where Valgrind is
+    spending its time.  The tools must be built for profiling for
+    this to work.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-syscalls=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--trace-syscalls=yes</computeroutput></para>
+    <para>Enable/disable tracing of system call intercepts.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-signals=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--trace-signals=yes</computeroutput></para>
+    <para>Enable/disable tracing of signal handling.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-sched=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--trace-sched=yes</computeroutput></para>
+    <para>Enable/disable tracing of thread scheduling events.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-pthread=none</computeroutput>
+    [default]</para>
+    <para><computeroutput>--trace-pthread=some</computeroutput></para>
+    <para><computeroutput>--trace-pthread=all</computeroutput></para>
+    <para>Specifies amount of trace detail for pthread-related
+    events.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-symtab=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--trace-symtab=yes</computeroutput></para>
+    <para>Enable/disable tracing of symbol table reading.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-malloc=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--trace-malloc=yes</computeroutput></para>
+    <para>Enable/disable tracing of malloc/free (et al)
+    intercepts.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--trace-codegen=XXXXX</computeroutput>
+    [default: 00000]</para>
+    <para>Enable/disable tracing of code generation.  Code can be
+    printed at five different stages of translation; each
+    <computeroutput>X</computeroutput> element must be 0 or
+    1.</para>
+   </listitem>
+
+   <listitem>
+    <para><computeroutput>--dump-error=&lt;number></computeroutput>
+    [default: inactive]</para>
+    <para>After the program has exited, show gory details of the
+    translation of the basic block containing the &lt;number>'th
+    error context.  When used with
+    <computeroutput>--single-step=yes</computeroutput>, can show
+    the exact x86 instruction causing an error.  This is all
+    fairly dodgy and doesn't work at all if threads are
+    involved.</para>
+   </listitem>
+
+  </itemizedlist>
+</sect2>
+
+
+<sect2 id="manual-core.defopts" xreflabel="Setting default options">
+<title>Setting default Options</title>
+
+<para>Note that Valgrind also reads options from three places:</para>
+
+  <orderedlist>
+   <listitem>
+    <para>The file <computeroutput>~/.valgrindrc</computeroutput></para>
+   </listitem>
+
+   <listitem>
+    <para>The environment variable
+    <computeroutput>$VALGRIND_OPTS</computeroutput></para>
+   </listitem>
+
+   <listitem>
+    <para>The file <computeroutput>./.valgrindrc</computeroutput></para>
+   </listitem>
+  </orderedlist>
+
+<para>These are processed in the given order, before the
+command-line options.  Options processed later override those
+processed earlier; for example, options in
+<computeroutput>./.valgrindrc</computeroutput> will take
+precedence over those in
+<computeroutput>~/.valgrindrc</computeroutput>.  The first two
+are particularly useful for setting the default tool to
+use.</para>
+
+<para>Any tool-specific options put in
+<computeroutput>$VALGRIND_OPTS</computeroutput> or the
+<computeroutput>.valgrindrc</computeroutput> files should be
+prefixed with the tool name and a colon.  For example, if you
+want Memcheck to always do leak checking, you can put the
+following entry in <literal>~/.valgrindrc</literal>:</para>
+
+<programlisting><![CDATA[
+--memcheck:leak-check=yes]]></programlisting>
+
+<para>This will be ignored if any tool other than Memcheck is
+run.  Without the <computeroutput>memcheck:</computeroutput>
+part, this will cause problems if you select other tools that
+don't understand
+<computeroutput>--leak-check=yes</computeroutput>.</para>
+
+</sect2>
+
+</sect1>
+
+
+<sect1 id="manual-core.clientreq" 
+       xreflabel="The Client Request mechanism">
+<title>The Client Request mechanism</title>
+
+<para>Valgrind has a trapdoor mechanism via which the client
+program can pass all manner of requests and queries to Valgrind
+and the current tool.  Internally, this is used extensively to
+make malloc, free, signals, threads, etc, work, although you
+don't see that.</para>
+
+<para>For your convenience, a subset of these so-called client
+requests is provided to allow you to tell Valgrind facts about
+the behaviour of your program, and conversely to make queries.
+In particular, your program can tell Valgrind about changes in
+memory range permissions that Valgrind would not otherwise know
+about, and so allows clients to get Valgrind to do arbitrary
+custom checks.</para>
+
+<para>Clients need to include a header file to make this work.
+Which header file depends on which client requests you use.  Some
+client requests are handled by the core, and are defined in the
+header file <filename>valgrind.h</filename>.  Tool-specific
+header files are named after the tool, e.g.
+<filename>memcheck.h</filename>.  All header files can be found
+in the <literal>include</literal> directory of wherever Valgrind
+was installed.</para>
+
+<para>The macros in these header files have the magical property
+that they generate code in-line which Valgrind can spot.
+However, the code does nothing when not run on Valgrind, so you
+are not forced to run your program on Valgrind just because you
+use the macros in this file.  Also, you are not required to link
+your program with any extra supporting libraries.</para>
+
+<para>Here is a brief description of the macros available in
+<filename>valgrind.h</filename>, which work with more than one
+tool (see the tool-specific documentation for explanations of the
+tool-specific macros).</para>
+
+ <variablelist>
+
+  <varlistentry>
+   <term><computeroutput>RUNNING_ON_VALGRIND</computeroutput>:</term>
+   <listitem>
+    <para>returns 1 if running on Valgrind, 0 if running on the
+    real CPU.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_DISCARD_TRANSLATIONS</computeroutput>:</term>
+   <listitem>
+    <para>discard translations of code in the specified address
+    range.  Useful if you are debugging a JITter or some other
+    dynamic code generation system.  After this call, attempts to
+    execute code in the invalidated address range will cause
+    Valgrind to make new translations of that code, which is
+    probably the semantics you want.  Note that this is
+    implemented naively, and involves checking all 200191 entries
+    in the translation table to see if any of them overlap the
+    specified address range.  So try not to call it often, or
+    performance will nosedive.  Note that you can be clever about
+    this: you only need to call it when an area which previously
+    contained code is overwritten with new code.  You can choose
+    to write coode into fresh memory, and just call this
+    occasionally to discard large chunks of old code all at
+    once.</para>
+
+    <para><command>Warning:</command> minimally tested,
+    especially for tools other than Memcheck.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_COUNT_ERRORS</computeroutput>:</term>
+   <listitem>
+    <para>returns the number of errors found so far by Valgrind.
+    Can be useful in test harness code when combined with the
+    <computeroutput>--log-fd=-1</computeroutput> option; this
+    runs Valgrind silently, but the client program can detect
+    when errors occur.  Only useful for tools that report errors,
+    e.g. it's useful for Memcheck, but for Cachegrind it will
+    always return zero because Cachegrind doesn't report
+    errors.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_MALLOCLIKE_BLOCK</computeroutput>:</term>
+   <listitem>
+    <para>If your program manages its own memory instead of using
+    the standard <computeroutput>malloc()</computeroutput> /
+    <computeroutput>new</computeroutput> /
+    <computeroutput>new[]</computeroutput>, tools that track
+    information about heap blocks will not do nearly as good a
+    job.  For example, Memcheck won't detect nearly as many
+    errors, and the error messages won't be as informative.  To
+    improve this situation, use this macro just after your custom
+    allocator allocates some new memory.  See the comments in
+    <filename>valgrind.h</filename> for information on how to use
+    it.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_FREELIKE_BLOCK</computeroutput>:</term>
+   <listitem>
+    <para>This should be used in conjunction with
+    <computeroutput>VALGRIND_MALLOCLIKE_BLOCK</computeroutput>.
+    Again, see <filename>memcheck/memcheck.h</filename> for
+    information on how to use it.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>:</term>
+   <listitem>
+    <para>This is similar to
+    <computeroutput>VALGRIND_MALLOCLIKE_BLOCK</computeroutput>,
+    but is tailored towards code that uses memory pools.  See the
+    comments in <filename>valgrind.h</filename> for information
+    on how to use it.</para>
+   </listitem>
+  </varlistentry>
+  
+  <varlistentry>
+  <term><computeroutput>VALGRIND_DESTROY_MEMPOOL</computeroutput>:</term>
+   <listitem>
+    <para>This should be used in conjunction with
+    <computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>
+    Again, see the comments in <filename>valgrind.h</filename> for
+    information on how to use it.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_MEMPOOL_ALLOC</computeroutput>:</term>
+   <listitem>
+    <para>This should be used in conjunction with
+    <computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>
+    Again, see the comments in <filename>valgrind.h</filename> for
+    information on how to use it.</para>
+   </listitem>
+  </varlistentry>
+   
+  <varlistentry>
+   <term><computeroutput>VALGRIND_MEMPOOL_FREE</computeroutput>:</term>
+   <listitem>
+    <para>This should be used in conjunction with
+    <computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>
+    Again, see the comments in <filename>valgrind.h</filename> for
+    information on how to use it.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_NON_SIMD_CALL[0123]</computeroutput>:</term>
+   <listitem>
+    <para>executes a function of 0, 1, 2 or 3 args in the client
+    program on the <emphasis>real</emphasis> CPU, not the virtual
+    CPU that Valgrind normally runs code on.  These are used in
+    various ways internally to Valgrind.  They might be useful to
+    client programs.</para> <formalpara><title>Warning:</title>
+    <para>Only use these if you <emphasis>really</emphasis> know
+    what you are doing.</para>
+    </formalpara>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_PRINTF(format, ...)</computeroutput>:</term>
+   <listitem>
+    <para>printf a message to the log file when running under
+    Valgrind.  Nothing is output if not running under Valgrind.
+    Returns the number of characters output.</para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
+   <term><computeroutput>VALGRIND_PRINTF_BACKTRACE(format, ...)</computeroutput>:</term>
+   <listitem>
+    <para>printf a message to the log file along with a stack
+    backtrace when running under Valgrind.  Nothing is output if
+    not running under Valgrind.  Returns the number of characters
+    output.</para>
+   </listitem>
+  </varlistentry>
+
+ </variablelist>
+
+<para>Note that <filename>valgrind.h</filename> is included by
+all the tool-specific header files (such as
+<filename>memcheck.h</filename>), so you don't need to include it
+in your client if you include a tool-specific header.</para>
+
+</sect1>
+
+
+
+<sect1 id="manual-core.pthreads" xreflabel="Support for POSIX Pthreads">
+<title>Support for POSIX Pthreads</title>
+
+<para>Valgrind supports programs which use POSIX pthreads.
+Getting this to work was technically challenging but it all works
+well enough for significant threaded applications to work.</para>
+
+<para>It works as follows: threaded apps are (dynamically) linked
+against <literal>libpthread.so</literal>.  Usually this is the
+one installed with your Linux distribution.  Valgrind, however,
+supplies its own <literal>libpthread.so</literal> and
+automatically connects your program to it instead.</para>
+
+<para>The fake <literal>libpthread.so</literal> and Valgrind
+cooperate to implement a user-space pthreads package.  This
+approach avoids the horrible implementation problems of
+implementing a truly multiprocessor version of Valgrind, but it
+does mean that threaded apps run only on one CPU, even if you
+have a multiprocessor machine.</para>
+
+<para>Valgrind schedules your threads in a round-robin fashion,
+with all threads having equal priority.  It switches threads
+every 50000 basic blocks (typically around 300000 x86
+instructions), which means you'll get a much finer interleaving
+of thread executions than when run natively.  This in itself may
+cause your program to behave differently if you have some kind of
+concurrency, critical race, locking, or similar, bugs.</para>
+
+<para>As of the Valgrind-1.0 release, the state of pthread
+support was as follows:</para>
+
+ <itemizedlist>
+
+  <listitem>
+   <para>Mutexes, condition variables, thread-specific data,
+   <computeroutput>pthread_once</computeroutput>, reader-writer
+   locks, semaphores, cleanup stacks, cancellation and thread
+   detaching currently work.  Various attribute-like calls are
+   handled but ignored; you get a warning message.</para>
+  </listitem>
+
+  <listitem>
+   <para>Currently the following syscalls are thread-safe
+   (nonblocking): <literal>write</literal>,
+   <literal>read</literal>, <literal>nanosleep</literal>,
+   <literal>sleep</literal>, <literal>select</literal>,
+   <literal>poll</literal>, <literal>recvmsg</literal> and
+   <literal>accept</literal>.</para>
+  </listitem>
+
+  <listitem>
+   <para>Signals in pthreads are now handled properly(ish):
+   <literal>pthread_sigmask</literal>,
+   <literal>pthread_kill</literal>, <literal>sigwait</literal>
+   and <literal>raise</literal> are now implemented.  Each thread
+   has its own signal mask, as POSIX requires.  It's a bit
+   kludgey -- there's a system-wide pending signal set, rather
+   than one for each thread.  But hey.</para>
+  </listitem>
+
+ </itemizedlist>
+
+<formalpara>
+<title>Note:</title> 
+<para>As of 18 May 2002, the following threaded programs now work
+fine on my RedHat 7.2 box: Opera 6.0Beta2, KNode in KDE 3.0,
+Mozilla-0.9.2.1 and Galeon-0.11.3, both as supplied with RedHat
+7.2.  Also Mozilla 1.0RC2.  OpenOffice 1.0.  MySQL 3.something
+(the current stable release).</para>
+</formalpara>
+
+</sect1>
+
+
+<sect1 id="manual-core.signals" xreflabel="Handling of Signals">
+<title>Handling of Signals</title>
+
+<para>Valgrind provides suitable handling of signals, so,
+provided you stick to POSIX stuff, you should be ok.  Basic
+sigaction() and sigprocmask() are handled.  Signal handlers may
+return in the normal way or do longjmp(); both should work ok.
+As specified by POSIX, a signal is blocked in its own handler.
+Default actions for signals should work as before.  Etc,
+etc.</para>
+
+<para>Under the hood, dealing with signals is a real pain, and
+Valgrind's simulation leaves much to be desired.  If your program
+does way-strange stuff with signals, bad things may happen.  If
+so, let us know.  We don't promise to fix it, but we'd at least
+like to be aware of it.</para>
+
+</sect1>
+
+
+
+<sect1 id="manual-core.install" xreflabel="Building and Installing">
+<title>Building and Installing</title>
+
+<para>We use the standard Unix
+<computeroutput>./configure</computeroutput>,
+<computeroutput>make</computeroutput>, <computeroutput>make
+install</computeroutput> mechanism, and we have attempted to
+ensure that it works on machines with kernel 2.4 or 2.6 and glibc
+2.2.X or 2.3.X.  I don't think there is much else to say.  There
+are no options apart from the usual
+<computeroutput>--prefix</computeroutput> that you should give to
+<computeroutput>./configure</computeroutput>.</para>
+
+<para>The <computeroutput>configure</computeroutput> script tests
+the version of the X server indicated by the current
+<computeroutput>$DISPLAY</computeroutput>.  This is a known bug.
+The intention was to detect the version of the current XFree86
+client libraries, so that correct suppressions could be selected
+for them, but instead the test checks the server version.  This
+is just plain wrong.</para>
+
+<para>If you are building a binary package of Valgrind for
+distribution, please read <literal>README_PACKAGERS</literal>
+<xref linkend="dist.readme-packagers"/>.  It contains some
+important information.</para>
+
+<para>Apart from that, there's not much excitement here.  Let us
+know if you have build problems.</para>
+
+</sect1>
+
+
+
+<sect1 id="manual-core.problems" xreflabel="If You Have Problems">
+<title>If You Have Problems</title>
+
+<para>Contact us at <ulink url="http://www.valgrind.org">http://www.valgrind.org</ulink>.</para>
+
+<para>See <xref linkend="manual-core.limits"/> for the known
+limitations of Valgrind, and for a list of programs which are
+known not to work on it.</para>
+
+<para>The translator/instrumentor has a lot of assertions in it.
+They are permanently enabled, and I have no plans to disable
+them.  If one of these breaks, please mail us!</para>
+
+<para>If you get an assertion failure on the expression
+<computeroutput>chunkSane(ch)</computeroutput> in
+<computeroutput>vg_free()</computeroutput> in
+<filename>vg_malloc.c</filename>, this may have happened because
+your program wrote off the end of a malloc'd block, or before its
+beginning.  Valgrind should have emitted a proper message to that
+effect before dying in this way.  This is a known problem which
+we should fix.</para>
+
+<para>Read the 
+<ulink url="http://www.valgrind.org/docs/faq/index.html">FAQ</ulink> for
+more advice about common problems, crashes, etc.</para>
+
+</sect1>
+
+
+
+<sect1 id="manual-core.limits" xreflabel="Limitations">
+<title>Limitations</title>
+
+<para>The following list of limitations seems depressingly long.
+However, most programs actually work fine.</para>
+
+<para>Valgrind will run x86-GNU/Linux ELF dynamically linked
+binaries, on a kernel 2.4.X or 2.6.X system, subject to
+the following constraints:</para>
+
+ <itemizedlist>
+
+  <listitem>
+   <para>No support for 3DNow instructions.  If the translator
+   encounters these, Valgrind will generate a SIGILL when the
+   instruction is executed.</para>
+  </listitem>
+
+  <listitem>
+   <para>Pthreads support is improving, but there are still
+   significant limitations in that department.  See the section
+   above on Pthreads.  Note that your program must be dynamically
+   linked against <literal>libpthread.so</literal>, so that
+   Valgrind can substitute its own implementation at program
+   startup time.  If you're statically linked against it, things
+   will fail badly.</para>
+  </listitem>
+
+  <listitem>
+   <para>Memcheck assumes that the floating point registers are
+   not used as intermediaries in memory-to-memory copies, so it
+   immediately checks definedness of values loaded from memory by
+   floating-point loads.  If you want to write code which copies
+   around possibly-uninitialised values, you must ensure these
+   travel through the integer registers, not the FPU.</para>
+  </listitem>
+
+  <listitem>
+   <para>If your program does its own memory management, rather
+   than using malloc/new/free/delete, it should still work, but
+   Valgrind's error checking won't be so effective.  If you
+   describe your program's memory management scheme using "client
+   requests" (Section 3.7 of this manual), Memcheck can do
+   better.  Nevertheless, using malloc/new and free/delete is
+   still the best approach.</para>
+  </listitem>
+
+  <listitem>
+   <para>Valgrind's signal simulation is not as robust as it
+   could be.  Basic POSIX-compliant sigaction and sigprocmask
+   functionality is supplied, but it's conceivable that things
+   could go badly awry if you do weird things with signals.
+   Workaround: don't.  Programs that do non-POSIX signal tricks
+   are in any case inherently unportable, so should be avoided if
+   possible.</para>
+  </listitem>
+
+  <listitem>
+   <para>Programs which switch stacks are not well handled.
+   Valgrind does have support for this, but I don't have great
+   faith in it.  It's difficult -- there's no cast-iron way to
+   decide whether a large change in %esp is the result of the
+   program switching stacks, or merely allocating a large object
+   temporarily on the current stack -- yet Valgrind needs to
+   handle the two situations differently.</para>
+  </listitem>
+
+  <listitem>
+   <para>x86 instructions, and system calls, have been
+   implemented on demand.  So it's possible, although unlikely,
+   that a program will fall over with a message to that effect.
+   If this happens, please report ALL the details printed out, so
+   we can try and implement the missing feature.</para>
+  </listitem>
+
+  <listitem>
+   <para>x86 floating point works correctly, but floating-point
+   code may run even more slowly than integer code, due to my
+   simplistic approach to FPU emulation.</para>
+  </listitem>
+
+  <listitem>
+   <para>Memory consumption of your program is greatly increased
+   whilst running under Valgrind.  This is due to the large
+   amount of administrative information maintained behind the
+   scenes.  Another cause is that Valgrind dynamically translates
+   the original executable.  Translated, instrumented code is
+   14-16 times larger than the original (!) so you can easily end
+   up with 30+ MB of translations when running (e.g.) a web
+   browser.</para>
+  </listitem>
+
+  <listitem>
+   <para>Valgrind can handle dynamically-generated code just
+   fine.  However, if you regenerate code over the top of old code
+   (i.e. at the same memory addresses), Valgrind will not realise
+   the code has changed, and will keep running its old, out-of-date
+   translations.  You need to use the
+   VALGRIND_DISCARD_TRANSLATIONS client request in that case (see
+   the sketch after this list).  For
+   the same reason gcc's <ulink
+   url="http://gcc.gnu.org/onlinedocs/gcc/Nested-Functions.html">trampolines
+   for nested functions</ulink> are currently unsupported, see
+   <ulink url="http://bugs.kde.org/show_bug.cgi?id=69511">bug
+   69511</ulink>.</para>
+  </listitem>
+
+ </itemizedlist>
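+
+<para>For example, a dynamic code generator could tell Valgrind about
+regenerated code roughly as follows.  This is only a sketch: the
+macro comes from <filename>valgrind.h</filename>, and the
+<computeroutput>code_buf</computeroutput> /
+<computeroutput>code_len</computeroutput> names are made up for
+illustration.</para>
+<programlisting><![CDATA[
+#include "valgrind.h"
+
+/* After writing code_len bytes of fresh code over previously-executed
+   code at code_buf, throw away any stale translations of that range. */
+VALGRIND_DISCARD_TRANSLATIONS(code_buf, code_len);]]></programlisting>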
+
+
+ <para>Programs which are known not to work are:</para>
+ <itemizedlist>
+  <listitem>
+   <para>Emacs starts up but immediately concludes it is out of
+   memory and aborts.  Emacs has its own memory-management
+   scheme, but I don't understand why this should interact so
+   badly with Valgrind.  Emacs works fine if you build it to use
+   the standard malloc/free routines.</para>
+  </listitem>
+ </itemizedlist>
+
+
+ <para>Known platform-specific limitations, as of release 1.0.0:</para>
+ <itemizedlist>
+  <listitem>
+   <para>On Red Hat 7.3, there have been reports of link errors
+   (at program start time) for threaded programs using
+   <computeroutput>__pthread_clock_gettime</computeroutput> and
+   <computeroutput>__pthread_clock_settime</computeroutput>.
+   This appears to be due to
+   <computeroutput>/lib/librt-2.2.5.so</computeroutput> needing
+   them.  Unfortunately I do not understand enough about this
+   problem to fix it properly, and I can't reproduce it on my
+   test RedHat 7.3 system.  Please mail me if you have more
+   information / understanding.</para>
+  </listitem>
+ </itemizedlist>
+
+</sect1>
+
+
+
+<sect1 id="manual-core.howworks" xreflabel="How It Works - A Rough Overview">
+<title>How It Works -- A Rough Overview</title>
+
+<para>Some gory details, for those with a passion for gory
+details.  You don't need to read this section if all you want to
+do is use Valgrind.  What follows is an outline of the machinery.
+A more detailed (and somewhat out-of-date) description can be
+found in <xref linkend="mc-tech-docs"/>.</para>
+
+<sect2 id="manual-core.startb" xreflabel="Getting Started">
+<title>Getting started</title>
+
+<para>Valgrind is compiled into a shared object, valgrind.so.
+The shell script valgrind sets the LD_PRELOAD environment
+variable to point to valgrind.so.  This causes the .so to be
+loaded as an extra library to any subsequently executed
+dynamically-linked ELF binary, viz, the program you want to
+debug.</para>
+
+<para>The dynamic linker allows each .so in the process image to
+have an initialisation function which is run before main().  It
+also allows each .so to have a finalisation function run after
+main() exits.</para>
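+
+<para>(As a minimal illustration of that mechanism -- not Valgrind's
+actual code -- here is how a shared object can arrange to run code
+before and after main():)</para>
+<programlisting><![CDATA[
+#include <stdio.h>
+
+__attribute__((constructor))
+static void my_init(void)    /* run by the dynamic linker before main() */
+{
+   printf("initialisation: the synthetic CPU would start here\n");
+}
+
+__attribute__((destructor))
+static void my_fini(void)    /* run after main() exits */
+{
+   printf("finalisation: error summaries would be printed here\n");
+}]]></programlisting>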
+
+<para>When valgrind.so's initialisation function is called by the
+dynamic linker, the synthetic CPU starts up.  The real CPU
+remains locked in valgrind.so for the entire rest of the program,
+but the synthetic CPU returns from the initialisation function.
+Startup of the program now continues as usual -- the dynamic
+linker calls all the other .so's initialisation routines, and
+eventually runs main().  This all runs on the synthetic CPU, not
+the real one, but the client program cannot tell the
+difference.</para>
+
+<para>Eventually main() exits, so the synthetic CPU calls
+valgrind.so's finalisation function.  Valgrind detects this, and
+uses it as its cue to exit.  It prints summaries of all errors
+detected, possibly checks for memory leaks, and then exits the
+finalisation routine, but now on the real CPU.  The synthetic CPU
+has now lost control -- permanently -- so the program exits back
+to the OS on the real CPU, just as it would have done
+anyway.</para>
+
+<para>On entry, Valgrind switches stacks, so it runs on its own
+stack.  On exit, it switches back.  This means that the client
+program continues to run on its own stack, so we can switch back
+and forth between running it on the simulated and real CPUs
+without difficulty.  This was an important design decision,
+because it makes it easy (well, significantly less difficult) to
+debug the synthetic CPU.</para>
+
+</sect2>
+
+
+<sect2 id="manual-core.engine" 
+       xreflabel="The translation/instrumentation engine">
+<title>The translation/instrumentation engine</title>
+
+<para>Valgrind does not directly run any of the original
+program's code.  Only instrumented translations are run.
+Valgrind maintains a translation table, which allows it to find
+the translation quickly for any branch target (code address).  If
+no translation has yet been made, the translator - a just-in-time
+translator - is summoned.  This makes an instrumented
+translation, which is added to the collection of translations.
+Subsequent jumps to that address will use this
+translation.</para>
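+
+<para>In outline, the dispatch loop behaves something like this
+(a conceptual sketch with made-up helper names, not Valgrind's
+actual code):</para>
+<programlisting><![CDATA[
+while (still_running) {
+   /* fast lookup in the translation table */
+   Translation* t = find_translation(next_addr);
+   if (t == NULL) {
+      /* never seen before: JIT-translate and instrument it */
+      t = make_instrumented_translation(next_addr);
+      add_to_translation_table(next_addr, t);
+   }
+   /* run the translation; it returns the next branch target */
+   next_addr = run_translation(t);
+}]]></programlisting>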
+
+<para>Valgrind no longer directly supports detection of
+self-modifying code.  Such checking is expensive, and in practice
+(fortunately) almost no applications need it.  However, to help
+people who are debugging dynamic code generation systems, there
+is a Client Request (basically a macro you can put in your
+program) which directs Valgrind to discard translations in a
+given address range.  So Valgrind can still work in this
+situation provided the client tells it when code has become
+out-of-date and needs to be retranslated.</para>
+
+<para>The JITter translates basic blocks -- blocks of
+straight-line code -- as single entities.  To minimise the
+considerable difficulties of dealing with the x86 instruction
+set, x86 instructions are first translated to a RISC-like
+intermediate code, similar to sparc code, but with an infinite
+number of virtual integer registers.  Initially each insn is
+translated separately, and there is no attempt at
+instrumentation.</para>
+
+<para>The intermediate code is improved, mostly so as to try and
+cache the simulated machine's registers in the real machine's
+registers over several simulated instructions.  This is often
+very effective.  Also, we try to remove redundant updates of the
+simulated machine's condition-code register.</para>
+
+<para>The intermediate code is then instrumented, giving more
+intermediate code.  There are a few extra intermediate-code
+operations to support instrumentation; it is all refreshingly
+simple.  After instrumentation there is a cleanup pass to remove
+redundant value checks.</para>
+
+<para>This gives instrumented intermediate code which mentions
+arbitrary numbers of virtual registers.  A linear-scan register
+allocator is used to assign real registers and possibly generate
+spill code.  All of this is still phrased in terms of the
+intermediate code.  This machinery is inspired by the work of
+Reuben Thomas (Mite).</para>
+
+<para>Then, and only then, is the final x86 code emitted.  The
+intermediate code is carefully designed so that x86 code can be
+generated from it without need for spare registers or other
+inconveniences.</para>
+
+<para>The translations are managed using a traditional LRU-based
+caching scheme.  The translation cache has a default size of
+about 14MB.</para>
+
+</sect2>
+
+
+<sect2 id="manual-core.track" 
+       xreflabel="Tracking the Status of Memory">
+<title>Tracking the Status of Memory</title>
+
+<para>Each byte in the process' address space has nine bits
+associated with it: one A bit and eight V bits.  The A and V bits
+for each byte are stored using a sparse array, which flexibly and
+efficiently covers arbitrary parts of the 32-bit address space
+without imposing significant space or performance overheads for
+the parts of the address space never visited.  The scheme used,
+and speedup hacks, are described in detail at the top of the
+source file <filename>coregrind/vg_memory.c</filename>, so you
+should read that for the gory details.</para>
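+
+<para>The general flavour of such a two-level sparse map is sketched
+below.  This is only an illustration of the idea, not the actual
+<filename>vg_memory.c</filename> code:</para>
+<programlisting><![CDATA[
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+/* Each secondary map covers 64KB: one A bit and eight V bits per byte. */
+typedef struct {
+   uint8_t abits[65536 / 8];
+   uint8_t vbytes[65536];
+} SecMap;
+
+/* The primary map is indexed by the top 16 bits of an address.  It is
+   mostly empty: secondaries are only created for regions the program
+   actually touches. */
+static SecMap* primary_map[65536];
+
+static SecMap* get_secmap(uint32_t addr)
+{
+   SecMap** sm = &primary_map[addr >> 16];
+   if (*sm == NULL) {
+      *sm = malloc(sizeof(SecMap));
+      memset((*sm)->abits,  0xFF, sizeof((*sm)->abits));   /* not addressable */
+      memset((*sm)->vbytes, 0xFF, sizeof((*sm)->vbytes));  /* undefined */
+   }
+   return *sm;
+}]]></programlisting>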
+
+</sect2>
+
+
+
+<sect2 id="manual-core.syscalls" xreflabel="System calls">
+<title>System calls</title>
+
+<para>All system calls are intercepted.  The memory status map is
+consulted before and updated after each call.  It's all rather
+tiresome.  See <filename>coregrind/vg_syscalls.c</filename> for
+details.</para>
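+
+<para>For example, for a successful
+<computeroutput>read(fd, buf, n)</computeroutput> the core does
+something conceptually like the following (the helper names here are
+hypothetical, for illustration only):</para>
+<programlisting><![CDATA[
+/* before: the kernel will write into buf, so buf..buf+n-1 must be
+   addressable */
+check_mem_is_addressable(buf, n);
+
+res = do_the_actual_syscall();
+
+/* after: whatever the kernel actually wrote is now defined */
+if (res > 0)
+   mark_mem_defined(buf, res);]]></programlisting>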
+
+</sect2>
+
+
+<sect2 id="manual-core.syssignals" xreflabel="Signals">
+<title>Signals</title>
+
+<para>All system calls to sigaction() and sigprocmask() are
+intercepted.  If the client program is trying to set a signal
+handler, Valgrind makes a note of the handler address and which
+signal it is for.  Valgrind then arranges for the same signal to
+be delivered to its own handler.</para>
+
+<para>When such a signal arrives, Valgrind's own handler catches
+it, and notes the fact.  At a convenient safe point in execution,
+Valgrind builds a signal delivery frame on the client's stack and
+runs its handler.  If the handler longjmp()s, there is nothing
+more to be said.  If the handler returns, Valgrind notices this,
+zaps the delivery frame, and carries on where it left off before
+delivering the signal.</para>
+
+<para>The purpose of this nonsense is that setting signal
+handlers essentially amounts to giving callback addresses to the
+Linux kernel.  We can't allow this to happen, because if it did,
+signal handlers would run on the real CPU, not the simulated one.
+This means the checking machinery would not operate during the
+handler run, and, worse, memory permissions maps would not be
+updated, which could cause spurious error reports once the
+handler had returned.</para>
+
+<para>An even worse thing would happen if the signal handler
+longjmp'd rather than returned: Valgrind would completely lose
+control of the client program.</para>
+
+<para>Upshot: we can't allow the client to install signal
+handlers directly.  Instead, Valgrind must catch, on behalf of
+the client, any signal the client asks to catch, and must
+delivery it to the client on the simulated CPU, not the real one.
+This involves considerable gruesome fakery; see
+<filename>coregrind/vg_signals.c</filename> for details.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="manual-core.example" xreflabel="An Example Run">
+<title>An Example Run</title>
+
+<para>This is the log for a run of a small program using Memcheck.
+The program is in fact correct, and the reported error is the
+result of a potentially serious code generation bug in GNU g++
+(snapshot 20010527).</para>
+
+<programlisting><![CDATA[
+sewardj@phoenix:~/newmat10$
+~/Valgrind-6/valgrind -v ./bogon 
+==25832== Valgrind 0.10, a memory error detector for x86 RedHat 7.1.
+==25832== Copyright (C) 2000-2001, and GNU GPL'd, by Julian Seward.
+==25832== Startup, with flags:
+==25832== --suppressions=/home/sewardj/Valgrind/redhat71.supp
+==25832== reading syms from /lib/ld-linux.so.2
+==25832== reading syms from /lib/libc.so.6
+==25832== reading syms from /mnt/pima/jrs/Inst/lib/libgcc_s.so.0
+==25832== reading syms from /lib/libm.so.6
+==25832== reading syms from /mnt/pima/jrs/Inst/lib/libstdc++.so.3
+==25832== reading syms from /home/sewardj/Valgrind/valgrind.so
+==25832== reading syms from /proc/self/exe
+==25832== loaded 5950 symbols, 142333 line number locations
+==25832== 
+==25832== Invalid read of size 4
+==25832==    at 0x8048724: _ZN10BandMatrix6ReSizeEiii (bogon.cpp:45)
+==25832==    by 0x80487AF: main (bogon.cpp:66)
+==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
+==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
+==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd
+==25832==
+==25832== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
+==25832== malloc/free: in use at exit: 0 bytes in 0 blocks.
+==25832== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
+==25832== For a detailed leak analysis, rerun with: --leak-check=yes
+==25832==
+==25832== exiting, did 1881 basic blocks, 0 misses.
+==25832== 223 translations, 3626 bytes in, 56801 bytes out.]]></programlisting>
+
+<para>The GCC folks fixed this about a week before gcc-3.0
+shipped.</para>
+
+</sect1>
+
+
+<sect1 id="manual-core.warnings" xreflabel="Warning Messages">
+<title>Warning Messages You Might See</title>
+
+<para>Most of these only appear if you run in verbose mode
+(enabled by <computeroutput>-v</computeroutput>):</para>
+
+ <itemizedlist>
+
+  <listitem>
+   <para><computeroutput>More than 50 errors detected.
+   Subsequent errors will still be recorded, but in less detail
+   than before.</computeroutput></para>
+   <para>After 50 different errors have been shown, Valgrind
+   becomes more conservative about collecting them.  It then
+   requires only the program counters in the top two stack frames
+   to match when deciding whether or not two errors are really
+   the same one.  Prior to this point, the PCs in the top four
+   frames are required to match.  This hack has the effect of
+   slowing down the appearance of new errors after the first 50.
+   The 50 constant can be changed by recompiling Valgrind.</para>
+  </listitem>
+
+  <listitem>
+   <para><computeroutput>More than 300 errors detected.  I'm not
+   reporting any more.  Final error counts may be inaccurate.  Go
+   fix your program!</computeroutput></para>
+   <para>After 300 different errors have been detected, Valgrind
+   ignores any more.  It seems unlikely that collecting even more
+   different ones would be of practical help to anybody, and it
+   avoids the danger that Valgrind spends more and more of its
+   time comparing new errors against an ever-growing collection.
+   As above, the 300 number is a compile-time constant.</para>
+  </listitem>
+
+  <listitem>
+   <para><computeroutput>Warning: client switching
+   stacks?</computeroutput></para>
+   <para>Valgrind spotted such a large change in the stack
+   pointer, <literal>%esp</literal>, that it guesses the client
+   is switching to a different stack.  At this point it makes a
+   kludgey guess where the base of the new stack is, and sets
+   memory permissions accordingly.  You may get many bogus error
+   messages following this, if Valgrind guesses wrong.  At the
+   moment "large change" is defined as a change of more that
+   2000000 in the value of the <literal>%esp</literal> (stack
+   pointer) register.</para>
+  </listitem>
+
+  <listitem>
+   <para><computeroutput>Warning: client attempted to close
+   Valgrind's logfile fd &lt;number&gt;</computeroutput></para>
+   <para>Valgrind doesn't allow the client to close the logfile,
+   because you'd never see any diagnostic information after that
+   point.  If you see this message, you may want to use the
+   <computeroutput>--log-fd=&lt;number&gt;</computeroutput> option
+   to specify a different logfile file-descriptor number.</para>
+  </listitem>
+
+  <listitem>
+   <para><computeroutput>Warning: noted but unhandled ioctl
+   &lt;number&gt;</computeroutput></para>
+   <para>Valgrind observed a call to one of the vast family of
+   <computeroutput>ioctl</computeroutput> system calls, but did
+   not modify its memory status info (because I have not yet got
+   round to it).  The call will still have gone through, but you
+   may get spurious errors after this as a result of the
+   non-update of the memory info.</para>
+  </listitem>
+
+  <listitem>
+   <para><computeroutput>Warning: set address range perms: large
+   range &lt;number&gt;</computeroutput></para>
+   <para>Diagnostic message, mostly for the benefit of the Valgrind
+   developers, to do with memory permissions.</para>
+  </listitem>
+
+ </itemizedlist>
+
+</sect1>
+</chapter>
diff --git a/docs/xml/manual-intro.xml b/docs/xml/manual-intro.xml
new file mode 100644
index 0000000..844774b
--- /dev/null
+++ b/docs/xml/manual-intro.xml
@@ -0,0 +1,199 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="manual-intro" xreflabel="Introduction">
+<title>Introduction</title>
+
+<sect1 id="manual-intro.overview" xreflabel="An Overview of Valgrind">
+<title>An Overview of Valgrind</title>
+
+<para>Valgrind is a flexible system for debugging and profiling
+Linux-x86 executables.  The system consists of a core, which
+provides a synthetic x86 CPU in software, and a series of tools,
+each of which performs some kind of debugging, profiling, or
+similar task.  The architecture is modular, so that new tools can
+be created easily and without disturbing the existing
+structure.</para>
+
+<para>A number of useful tools are supplied as standard.  In
+summary, these are:</para>
+
+<orderedlist>
+
+  <listitem>
+    <para><command>Memcheck</command> detects memory-management
+    problems in your programs.  All reads and writes of memory
+    are checked, and calls to malloc/new/free/delete are
+    intercepted. As a result, Memcheck can detect the following
+    problems:</para>
+
+    <itemizedlist>
+     <listitem>
+      <para>Use of uninitialised memory</para>
+     </listitem>
+     <listitem>
+      <para>Reading/writing memory after it has been
+      free'd</para>
+     </listitem>
+     <listitem>
+      <para>Reading/writing off the end of malloc'd
+      blocks</para>
+     </listitem>
+     <listitem>
+      <para>Reading/writing inappropriate areas on the
+      stack</para>
+     </listitem>
+     <listitem>
+      <para>Memory leaks -- where pointers to malloc'd
+      blocks are lost forever</para>
+     </listitem>
+     <listitem>
+      <para>Mismatched use of malloc/new/new [] vs
+      free/delete/delete []</para>
+      </listitem>
+     <listitem>
+      <para>Overlapping <computeroutput>src</computeroutput> and
+      <computeroutput>dst</computeroutput> pointers in
+      <computeroutput>memcpy()</computeroutput> and related
+      functions</para>
+     </listitem>
+     <listitem>
+      <para>Some misuses of the POSIX pthreads API</para>
+     </listitem>
+    </itemizedlist>
+
+    <para>Problems like these can be difficult to find by other
+    means, often lying undetected for long periods, then causing
+    occasional, difficult-to-diagnose crashes.</para>
+   </listitem>
+ 
+   <listitem>
+    <para><command>Addrcheck</command> is a lightweight version
+    of Memcheck.  It is identical to Memcheck except for the
+    single detail that it does not do any uninitialised-value
+    checks.  All of the other checks -- primarily the
+    fine-grained address checking -- are still done.  The
+    downside of this is that you don't catch the
+    uninitialised-value errors that Memcheck can find.</para>
+
+    <para>But the upside is significant: programs run about twice
+    as fast as they do on Memcheck, and a lot less memory is
+    used.  It still finds reads/writes of freed memory, memory
+    off the end of blocks and in other invalid places, bugs which
+    you really want to find before release!</para>
+
+    <para>Because Addrcheck is lighter and faster than Memcheck,
+    you can run more programs for longer, and so you may be able
+    to cover more test scenarios.  Addrcheck was created because
+    one of us (Julian) wanted to be able to run a complete KDE
+    desktop session with checking.  As of early November 2002, we
+    have been able to run KDE-3.0.3 on a 1.7 GHz P4 with 512 MB
+    of memory, using Addrcheck.  Although the result is not
+    stellar, it's quite usable, and it seems plausible to run KDE
+    for long periods at a time like this, collecting up all the
+    addressing errors that appear.</para>
+   </listitem>
+
+   <listitem>
+    <para><command>Cachegrind</command> is a cache profiler.  It
+    performs detailed simulation of the I1, D1 and L2 caches in
+    your CPU and so can accurately pinpoint the sources of cache
+    misses in your code.  If you desire, it will show the number
+    of cache misses, memory references and instructions accruing
+    to each line of source code, with per-function, per-module
+    and whole-program summaries.  If you ask really nicely it
+    will even show counts for each individual x86
+    instruction.</para>
+
+    <para>Cachegrind auto-detects your machine's cache
+    configuration using the
+    <computeroutput>CPUID</computeroutput> instruction, and so
+    needs no further configuration info, in most cases.</para>
+
+    <para>Cachegrind is nicely complemented by Josef
+    Weidendorfer's amazing KCacheGrind visualisation tool 
+    (<ulink url="http://kcachegrind.sourceforge.net">http://kcachegrind.sourceforge.net</ulink>),
+    a KDE application which presents these profiling results in a
+    graphical and easier-to-understand form.</para>
+   </listitem>
+
+   <listitem>
+    <para><command>Helgrind</command> finds data races in
+    multithreaded programs.  Helgrind looks for memory locations
+    which are accessed by more than one (POSIX p-)thread, but for
+    which no consistently used (pthread_mutex_)lock can be found.
+    Such locations are indicative of missing synchronisation
+    between threads, and could cause hard-to-find
+    timing-dependent problems.</para>
+
+    <para>Helgrind ("Hell's Gate", in Norse mythology) implements
+    the so-called "Eraser" data-race-detection algorithm, along
+    with various refinements (thread-segment lifetimes) which
+    reduce the number of false errors it reports.  It is as yet
+    somewhat of an experimental tool, so your feedback is
+    especially welcomed here.</para>
+
+    <para>Helgrind has been hacked on extensively by Jeremy
+    Fitzhardinge, and we have him to thank for getting it to a
+    releasable state.</para>
+   </listitem>
+
+</orderedlist>
+  
+
+<para>A number of minor tools (<command>Corecheck</command>,
+<command>Lackey</command> and <command>Nulgrind</command>) are
+also supplied.  These aren't particularly useful -- they exist to
+illustrate how to create simple tools and to help the valgrind
+developers in various ways.</para>
+
+<para>Valgrind is closely tied to details of the CPU and operating
+system, and, to a lesser extent, the compiler and basic C libraries.
+This makes it difficult to make it portable, so we have chosen at the
+outset to concentrate on what we believe to be a widely used
+platform: Linux on x86s.  Valgrind uses the standard Unix
+<computeroutput>./configure</computeroutput>,
+<computeroutput>make</computeroutput>, <computeroutput>make
+install</computeroutput> mechanism, and we have attempted to
+ensure that it works on machines with kernel 2.2 or 2.4 and glibc
+2.1.X, 2.2.X or 2.3.1.  This should cover the vast majority of
+modern Linux installations.  Note that glibc-2.3.2+, with the
+NPTL (Native Posix Threads Library) package won't work.  We hope
+to be able to fix this, but it won't be easy.</para>
+
+<para>Valgrind is licensed under the <xref linkend="license.gpl"/>,
+version 2. Some of the PThreads test cases,
+<computeroutput>pth_*.c</computeroutput>, are taken from
+"Pthreads Programming" by Bradford Nichols, Dick Buttlar &amp;
+Jacqueline Proulx Farrell, ISBN 1-56592-115-1, published by
+O'Reilly &amp; Associates, Inc.</para>
+
+</sect1>
+
+
+<sect1 id="manual-intro.navigation" xreflabel="How to navigate this manual">
+<title>How to navigate this manual</title>
+
+<para>The Valgrind distribution consists of the Valgrind core,
+upon which are built Valgrind tools, which do different kinds of
+debugging and profiling.  This manual is structured
+similarly.</para>
+
+<para>First, we describe the Valgrind core, how to use it, and
+the flags it supports.  Then, each tool has its own chapter in
+this manual.  You only need to read the documentation for the
+core and for the tool(s) you actually use, although you may find
+it helpful to be at least a little bit familiar with what all
+tools do.  If you're new to all this, you probably want to run
+the Memcheck tool.  If you want to write a new tool, read 
+<xref linkend="writing-tools"/>.</para>
+
+<para>Be aware that the core understands some command line flags,
+and the tools have their own flags which they know about.  This
+means there is no central place describing all the flags that are
+accepted -- you have to read the flags documentation both for
+<xref linkend="manual-core"/> and for the tool you want to
+use.</para>
+
+</sect1>
+
+</chapter>
diff --git a/docs/xml/manual.xml b/docs/xml/manual.xml
new file mode 100644
index 0000000..f68f2eb
--- /dev/null
+++ b/docs/xml/manual.xml
@@ -0,0 +1,32 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<book id="manual" xreflabel="Valgrind User Manual">
+
+  <bookinfo>
+    <title>Valgrind User Manual</title>
+  </bookinfo>
+
+  <xi:include href="manual-intro.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="manual-core.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../memcheck/docs/mc-manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../addrcheck/docs/ac-manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../cachegrind/docs/cg-manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../massif/docs/ms-manual.xml" parse="xml"  
+       xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../helgrind/docs/hg-manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../none/docs/nl-manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../corecheck/docs/cc-manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../lackey/docs/lk-manual.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+
+</book>
diff --git a/docs/xml/tech-docs.xml b/docs/xml/tech-docs.xml
new file mode 100644
index 0000000..3e8a60b
--- /dev/null
+++ b/docs/xml/tech-docs.xml
@@ -0,0 +1,18 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<book id="tech-docs" xreflabel="Valgrind Technical Documentation">
+
+  <bookinfo>
+    <title>Valgrind Technical Documentation</title>
+  </bookinfo>
+
+  <xi:include href="../../memcheck/docs/mc-tech-docs.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="../../cachegrind/docs/cg-tech-docs.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+  <xi:include href="writing-tools.xml" parse="xml"  
+      xmlns:xi="http://www.w3.org/2001/XInclude" />
+
+</book>
diff --git a/docs/xml/vg-entities.xml b/docs/xml/vg-entities.xml
new file mode 100644
index 0000000..638d436
--- /dev/null
+++ b/docs/xml/vg-entities.xml
@@ -0,0 +1,12 @@
+<!-- misc. strings -->
+<!ENTITY vg-url      "http://www.valgrind.org">
+<!ENTITY vg-jemail   "jseward@valgrind.org">
+<!ENTITY vg-vemail   "valgrind@valgrind.org">
+<!ENTITY vg-lifespan "2000-2004">
+<!ENTITY vg-users-list "http://lists.sourceforge.net/lists/listinfo/valgrind-users">
+
+<!-- valgrind release + version stuff -->
+<!ENTITY rel-type    "Development release">
+<!ENTITY rel-version "2.1.2">
+<!ENTITY rel-date    "July 18 2004">
+
diff --git a/docs/xml/writing-tools.xml b/docs/xml/writing-tools.xml
new file mode 100644
index 0000000..b8b8aff
--- /dev/null
+++ b/docs/xml/writing-tools.xml
@@ -0,0 +1,1248 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"[
+<!ENTITY % vg-entities SYSTEM "../../docs/xml/vg-entities.xml"> %vg-entities;
+]>
+
+<chapter id="writing-tools" xreflabel="Writing a New Valgrind Tool">
+<title>Writing a New Valgrind Tool</title>
+
+<sect1 id="writing-tools.intro" xreflabel="Introduction">
+<title>Introduction</title>
+
+<sect2 id="writing-tools.supexec" xreflabel="Supervised Execution">
+<title>Supervised Execution</title>
+
+<para>Valgrind provides a generic infrastructure for supervising
+the execution of programs.  This is done by providing a way to
+instrument programs in very precise ways, making it relatively
+easy to support activities such as dynamic error detection and
+profiling.</para>
+
+<para>Although writing a tool is not easy, and requires learning
+quite a few things about Valgrind, it is much easier than
+instrumenting a program from scratch yourself.</para>
+
+</sect2>
+
+
+<sect2 id="writing-tools.tools" xreflabel="Tools">
+<title>Tools</title>
+
+<para>The key idea behind Valgrind's architecture is the division
+between its "core" and "tools".</para>
+
+<para>The core provides the common low-level infrastructure to
+support program instrumentation, including the x86-to-x86 JIT
+compiler, low-level memory manager, signal handling and a
+scheduler (for pthreads).  It also provides certain services that
+are useful to some but not all tools, such as support for error
+recording and suppression.</para>
+
+<para>But the core leaves certain operations undefined, which
+must be filled in by tools.  Most notably, tools define how program
+code should be instrumented.  They can also define certain
+variables to indicate to the core that they would like to use
+certain services, or be notified when certain interesting events
+occur.  But the core takes care of all the hard work.</para>
+
+</sect2>
+
+
+<sect2 id="writing-tools.execspaces" xreflabel="Execution Spaces">
+<title>Execution Spaces</title>
+
+<para>An important concept to understand before writing a tool is
+that there are three spaces in which program code executes:</para>
+
+
+<orderedlist>
+
+ <listitem>
+  <para>User space: this covers most of the program's execution.
+  The tool is given the code and can instrument it any way it
+  likes, providing (more or less) total control over the
+  code.</para>
+
+  <para>Code executed in user space includes all the program
+  code, almost all of the C library (including things like the
+  dynamic linker), and almost all parts of all other
+  libraries.</para>
+ </listitem>
+
+  <listitem>
+   <para>Core space: a small proportion of the program's execution
+   takes place entirely within Valgrind's core.  This includes:</para>
+   <itemizedlist>
+    <listitem>
+      <para>Dynamic memory management 
+      (<computeroutput>malloc()</computeroutput> etc.)</para>
+    </listitem>
+    <listitem>
+     <para>Pthread operations and scheduling</para>
+    </listitem>
+    <listitem>
+     <para>Signal handling</para>
+    </listitem>
+   </itemizedlist>
+
+   <para>A tool has no control over these operations; it never
+   "sees" the code doing this work and thus cannot instrument it.
+   However, the core provides hooks so a tool can be notified
+   when certain interesting events happen, for example when
+   dynamic memory is allocated or freed, the stack pointer is
+   changed, or a pthread mutex is locked, etc.</para>
+
+   <para>Note that these hooks only notify tools of events
+   relevant to user space.  For example, when the core allocates
+   some memory for its own use, the tool is not notified of this,
+   because it's not directly part of the supervised program's
+   execution.</para>
+  </listitem>
+
+  <listitem>
+   <para>Kernel space: execution in the kernel.  Two kinds:</para>
+    <orderedlist>
+     <listitem>
+      <para>System calls: can't be directly observed by either
+      the tool or the core.  But the core does have some idea of
+      what happens to the arguments, and it provides hooks for a
+      tool to wrap system calls.</para>
+     </listitem>
+     <listitem>
+      <para>Other: all other kernel activity (e.g. process
+      scheduling) is totally opaque and irrelevant to the
+      program.</para>
+     </listitem>
+    </orderedlist>
+  </listitem>
+
+</orderedlist>
+
+<para>It should be noted that a tool only has direct control over
+code executed in user space.  This is the vast majority of code
+executed, but it is not absolutely all of it, so any profiling
+information recorded by a tool won't be totally accurate.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="writing-tools.writingatool" xreflabel="Writing a Tool">
+<title>Writing a Tool</title>
+
+
+<sect2 id="writing-tools.whywriteatool" xreflabel="Why write a tool?">
+<title>Why write a tool?</title>
+
+<para>Before you write a tool, you should have some idea of what
+it should do.  What is it you want to know about your programs of
+interest?  Consider some existing tools:</para>
+
+<itemizedlist>
+
+ <listitem>
+  <para><command>memcheck</command>: among other things, performs
+  fine-grained validity and addressability checks of every memory
+  reference performed by the program.</para>
+ </listitem>
+
+ <listitem>
+  <para><command>addrcheck</command>: performs lighter-weight
+  addressability checks of every memory reference performed by
+  the program.</para>
+ </listitem>
+
+ <listitem>
+  <para><command>cachegrind</command>: tracks every instruction
+  and memory reference to simulate instruction and data caches,
+  tracking cache accesses and misses that occur on every line in
+  the program.</para>
+ </listitem>
+
+ <listitem>
+  <para><command>helgrind</command>: tracks every memory access
+  and mutex lock/unlock to determine if a program contains any
+  data races.</para>
+ </listitem>
+
+ <listitem>
+  <para><command>lackey</command>: does simple counting of
+  various things: the number of calls to a particular function
+  (<computeroutput>_dl_runtime_resolve()</computeroutput>); the
+  number of basic blocks, x86 instructions and UCode instructions
+  executed; the number of branches executed and the proportion of
+  those which were taken.</para>
+ </listitem>
+</itemizedlist>
+
+<para>These examples give a reasonable idea of what kinds of
+things Valgrind can be used for.  The instrumentation can range
+from very lightweight (e.g. counting the number of times a
+particular function is called) to very intrusive (e.g.
+memcheck's memory checking).</para>
+
+</sect2>
+
+
+<sect2 id="writing-tools.suggestedtools" xreflabel="Suggested tools">
+<title>Suggested tools</title>
+
+<para>Here is a list of ideas we have had for tools that should
+not be too hard to implement.</para>
+
+<itemizedlist>
+ <listitem>
+  <para><command>branch profiler</command>: A machine's branch
+  prediction hardware could be simulated, and each branch
+  annotated with the number of predicted and mispredicted
+  branches.  Would be implemented quite similarly to Cachegrind,
+  and could reuse the
+  <computeroutput>cg_annotate</computeroutput> script to annotate
+  source code.</para>
+
+  <para>The biggest difficulty with this is the simulation; the
+  chip-makers are very cagey about how their chips do branch
+  prediction.  But implementing one or more of the basic
+  algorithms could still give good information.</para>
+ </listitem>
+
+ <listitem>
+  <para><command>coverage tool</command>: Cachegrind can already
+  be used for doing test coverage, but it's massive overkill to
+  use it just for that.</para>
+
+  <para>It would be easy to write a coverage tool that records
+  how many times each basic block was executed.  Again, the
+  <computeroutput>cg_annotate</computeroutput> script could be
+  used for annotating source code with the gathered information.
+  <computeroutput>cg_annotate</computeroutput> is currently only
+  designed for working with single program runs, but it could be
+  extended relatively easily to deal with multiple runs of a
+  program, so that the coverage of a whole test suite could be
+  determined.</para>
+
+  <para>In addition to the standard coverage information, such a
+  tool could record extra information that would help a user
+  generate test cases to exercise unexercised paths.  For
+  example, for each conditional branch, the tool could record all
+  inputs to the conditional test, and print these out when
+  annotating.</para>
+ </listitem>
+
+ <listitem>
+  <para><command>run-time type checking</command>: A nice example
+  of a dynamic checker is given in this paper:</para>
+  <address>Debugging via Run-Time Type Checking
+  Alexey Loginov, Suan Hsi Yong, Susan Horwitz and Thomas Reps
+  Proceedings of Fundamental Approaches to Software Engineering
+  April 2001.
+  </address>
+
+  <para>Similar is the tool described in this paper:</para>
+  <address>Run-Time Type Checking for Binary Programs
+  Michael Burrows, Stephen N. Freund, Janet L. Wiener
+  Proceedings of the 12th International Conference on Compiler Construction (CC 2003)
+  April 2003.
+  </address>
+
+  <para>This approach can find quite a range of bugs,
+  particularly in C and C++ programs, and could be implemented
+  quite nicely as a Valgrind tool.</para>
+
+  <para>Ways to speed up this run-time type checking are
+  described in this paper:</para>
+  <address>Reducing the Overhead of Dynamic Analysis
+  Suan Hsi Yong and Susan Horwitz
+  Proceedings of Runtime Verification '02
+  July 2002.
+  </address>
+
+  <para>Valgrind's client requests could be used to pass
+  information to a tool about which elements need instrumentation
+  and which don't.</para>
+ </listitem>
+</itemizedlist>
+
+<para>We would love to hear from anyone who implements these or
+other tools.</para>
+
+</sect2>
+
+
+<sect2 id="writing-tools.howtoolswork" xreflabel="How tools work">
+<title>How tools work</title>
+
+<para>Tools must define various functions for instrumenting
+programs that are called by Valgrind's core, yet they must be
+implemented in such a way that they can be written and compiled
+without touching Valgrind's core.  This is important, because one
+of our aims is to allow people to write and distribute their own
+tools that can be plugged into Valgrind's core easily.</para>
+
+<para>This is achieved by packaging each tool into a separate
+shared object which is then loaded ahead of the core shared
+object <computeroutput>valgrind.so</computeroutput>, using the
+dynamic linker's <computeroutput>LD_PRELOAD</computeroutput>
+variable.  Any functions defined in the tool that share a name
+with a function defined in the core (such as the instrumentation
+function <computeroutput>SK_(instrument)()</computeroutput>)
+override the core's definition.  Thus the core can call the
+necessary tool functions.</para>
+
+<para>This magic is all done for you; the shared object used is
+chosen with the <computeroutput>--tool</computeroutput> option to
+the <computeroutput>valgrind</computeroutput> startup script.
+The default tool used is
+<computeroutput>memcheck</computeroutput>, Valgrind's original
+memory checker.</para>
+
+</sect2>
+
+
+<sect2 id="writing-tools.gettingcode" xreflabel="Getting the code">
+<title>Getting the code</title>
+
+<para>To write your own tool, you'll need to check out a copy of
+Valgrind from the CVS repository, rather than using a packaged
+distribution.  This is because it contains several extra files
+needed for writing tools.</para>
+
+<para>To check out the code from the CVS repository, first login:</para>
+<programlisting><![CDATA[
+cvs -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind
+login]]></programlisting>
+
+<para>Then checkout the code.  To get a copy of the current
+development version (recommended for the brave only):</para>
+<programlisting><![CDATA[
+cvs -z3 -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind 
+co valgrind]]></programlisting>
+
+<para>To get a copy of the stable released branch:</para>
+<programlisting><![CDATA[
+cvs -z3 -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind
+co -r <TAG> valgrind]]></programlisting>
+
+<para>where &lt;<computeroutput>TAG</computeroutput>&gt; has the
+form <computeroutput>VALGRIND_X_Y_Z</computeroutput> for version
+X.Y.Z.</para>
+
+</sect2>
+
+
+<sect2 id="writing-tools.gettingstarted" xreflabel="Getting started">
+<title>Getting started</title>
+
+<para>Valgrind uses GNU <computeroutput>automake</computeroutput>
+and <computeroutput>autoconf</computeroutput> for the creation of
+Makefiles and configuration.  But don't worry, these instructions
+should be enough to get you started even if you know nothing
+about those tools.</para>
+
+<para>In what follows, all filenames are relative to Valgrind's
+top-level directory <computeroutput>valgrind/</computeroutput>.</para>
+
+<orderedlist>
+ <listitem>
+  <para>Choose a name for the tool, and an abbreviation that can
+  be used as a short prefix.  We'll use
+  <computeroutput>foobar</computeroutput> and
+  <computeroutput>fb</computeroutput> as an example.</para>
+ </listitem>
+
+ <listitem>
+  <para>Make a new directory
+  <computeroutput>foobar/</computeroutput> which will hold the
+  tool.</para>
+ </listitem>
+
+ <listitem>
+  <para>Copy <computeroutput>none/Makefile.am</computeroutput>
+  into <computeroutput>foobar/</computeroutput>.  Edit it by
+  replacing all occurrences of the string
+  <computeroutput>"none"</computeroutput> with
+  <computeroutput>"foobar"</computeroutput> and the one
+  occurrence of the string <computeroutput>"nl_"</computeroutput>
+  with <computeroutput>"fb_"</computeroutput>.  It might be worth
+  trying to understand this file, at least a little; you might
+  have to do more complicated things with it later on.  In
+  particular, the name of the
+  <computeroutput>vgskin_foobar_so_SOURCES</computeroutput>
+  variable determines the name of the tool's shared object, which
+  determines what name must be passed to the
+  <computeroutput>--tool</computeroutput> option to use the
+  tool.</para>
+ </listitem>
+
+ <listitem>
+  <para>Copy <filename>none/nl_main.c</filename> into
+  <computeroutput>foobar/</computeroutput>, renaming it as
+  <filename>fb_main.c</filename>.  Edit it by changing the lines
+  in <computeroutput>SK_(pre_clo_init)()</computeroutput> to
+  something appropriate for the tool.  These fields are used in
+  the startup message, except for
+  <computeroutput>bug_reports_to</computeroutput> which is used
+  if a tool assertion fails.</para>
+ </listitem>
+
+  <listitem>
+   <para>Edit <computeroutput>Makefile.am</computeroutput>,
+   adding the new directory
+   <computeroutput>foobar</computeroutput> to the
+   <computeroutput>SUBDIRS</computeroutput> variable.</para>
+  </listitem>
+
+  <listitem>
+   <para>Edit <computeroutput>configure.in</computeroutput>,
+   adding <computeroutput>foobar/Makefile</computeroutput> to the
+   <computeroutput>AC_OUTPUT</computeroutput> list.</para>
+  </listitem>
+
+  <listitem>
+   <para>Run:</para>
+<programlisting><![CDATA[
+  autogen.sh
+  ./configure --prefix=`pwd`/inst
+  make install]]></programlisting>
+
+   <para>It should automake, configure and compile without
+   errors, putting copies of the tool's shared object
+   <computeroutput>vgskin_foobar.so</computeroutput> in
+   <computeroutput>foobar/</computeroutput> and
+   <computeroutput>inst/lib/valgrind/</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+   <para>You can test it with a command like:</para>
+<programlisting><![CDATA[
+  inst/bin/valgrind --tool=foobar date]]></programlisting>
+
+   <para>(almost any program should work;
+   <computeroutput>date</computeroutput> is just an example).
+   The output should be something like this:</para>
+<programlisting><![CDATA[
+  ==738== foobar-0.0.1, a foobarring tool for x86-linux.
+  ==738== Copyright (C) 1066AD, and GNU GPL'd, by J. Random Hacker.
+  ==738== Built with valgrind-1.1.0, a program execution monitor.
+  ==738== Copyright (C) 2000-2003, and GNU GPL'd, by Julian Seward.
+  ==738== Estimated CPU clock rate is 1400 MHz
+  ==738== For more details, rerun with: -v
+  ==738== Wed Sep 25 10:31:54 BST 2002
+  ==738==]]></programlisting>
+
+   <para>The tool does nothing except run the program
+   uninstrumented.</para>
+  </listitem>
+
+</orderedlist>
+
+<para>These steps don't have to be followed exactly -- you can
+choose different names for your source files, and use a different
+<computeroutput>--prefix</computeroutput> for
+<computeroutput>./configure</computeroutput>.</para>
+
+<para>Now that we've set up, built and tested the simplest
+possible tool, on to the interesting stuff...</para>
+
+</sect2>
+
+
+
+<sect2 id="writing-tools.writingcode" xreflabel="Writing the Code">
+<title>Writing the code</title>
+
+<para>A tool must define at least these four functions:</para>
+<programlisting><![CDATA[
+  SK_(pre_clo_init)()
+  SK_(post_clo_init)()
+  SK_(instrument)()
+  SK_(fini)()]]></programlisting>
+
+<para>Also, it must use the macro
+<computeroutput>VG_DETERMINE_INTERFACE_VERSION</computeroutput>
+exactly once in its source code.  If it doesn't, you will get a
+link error involving
+<computeroutput>VG_(skin_interface_major_version)</computeroutput>.
+This macro is used to ensure the core/tool interface used by the
+core and a plugged-in tool are binary compatible.</para>
+
+<para>In addition, if a tool wants to use some of the optional
+services provided by the core, it may have to define other
+functions.</para>
+
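+<para>Putting this together, a complete (if useless) tool source file
+looks roughly like the sketch below.  It is modelled on
+<filename>none/nl_main.c</filename>; exact function signatures and
+the precise form of the interface-version macro can differ between
+Valgrind versions, so copy from the
+<computeroutput>none</computeroutput> tool in your own checkout
+rather than from here.</para>
+<programlisting><![CDATA[
+#include "vg_skin.h"
+
+void SK_(pre_clo_init)(void)
+{
+   VG_(details_name)            ("Foobar");
+   VG_(details_version)         ("0.0.1");
+   VG_(details_description)     ("an example do-nothing tool");
+   VG_(details_copyright_author)("Copyright (C) 2004, J. Random Hacker.");
+   VG_(details_bug_reports_to)  ("jrh@example.com");
+}
+
+void SK_(post_clo_init)(void)
+{
+}
+
+UCodeBlock* SK_(instrument)(UCodeBlock* cb, Addr orig_addr)
+{
+   return cb;   /* no instrumentation: run the code unchanged */
+}
+
+void SK_(fini)(Int exitcode)
+{
+}
+
+/* must appear exactly once; check nl_main.c for the exact form */
+VG_DETERMINE_INTERFACE_VERSION]]></programlisting>
+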
+</sect2>
+
+
+
+<sect2 id="writing-tools.init" xreflabel="Initialisation">
+<title>Initialisation</title>
+
+<para>Most of the initialisation should be done in
+<computeroutput>SK_(pre_clo_init)()</computeroutput>.  Only use
+<computeroutput>SK_(post_clo_init)()</computeroutput> if a tool
+provides command line options and must do some initialisation
+after option processing takes place
+(<computeroutput>"clo"</computeroutput> stands for "command line
+options").</para>
+
+<para>First of all, various "details" need to be set for a tool,
+using the functions
+<computeroutput>VG_(details_*)()</computeroutput>.  Some are
+compulsory, some aren't.  Some are used when constructing the
+startup message;
+<computeroutput>detail_bug_reports_to</computeroutput> is used if
+<computeroutput>VG_(skin_panic)()</computeroutput> is ever
+called or a tool assertion fails.  Others have other uses.</para>
+
+<para>Second, various "needs" can be set for a tool, using the
+functions <computeroutput>VG_(needs_*)()</computeroutput>.  They
+are mostly booleans, and can be left untouched (they default to
+<computeroutput>False</computeroutput>).  They determine whether
+a tool can do various things such as: record, report and suppress
+errors; process command line options; wrap system calls; record
+extra information about malloc'd blocks, etc.</para>
+
+<para>For example, if a tool wants the core's help in recording
+and reporting errors, it must set the
+<computeroutput>skin_errors</computeroutput> need to
+<computeroutput>True</computeroutput>, and then provide
+definitions of six functions for comparing errors, printing out
+errors, reading suppressions from a suppressions file, etc.
+While writing these functions requires some work, it's much less
+than doing error handling from scratch because the core is doing
+most of the work.  See the type
+<computeroutput>VgNeeds</computeroutput> in
+<filename>include/vg_skin.h</filename> for full details of all
+the needs.</para>
+
+<para>Third, the tool can indicate which events in the core it wants
+to be notified about, using the functions
+<computeroutput>VG_(track_*)()</computeroutput>.  These include
+things such as blocks of memory being malloc'd, the stack pointer
+changing, a mutex being locked, etc.  If a tool wants to know
+about one of these, it should pass a function pointer to the
+relevant <computeroutput>VG_(track_*)()</computeroutput> call; that
+function will then be called whenever the event happens.</para>
+
+<para>For example, if the tool wants to be notified when a new
+block of memory is malloc'd, it should call
+<computeroutput>VG_(track_new_mem_heap)()</computeroutput> with
+an appropriate function pointer, and the assigned function will
+be called each time this happens.</para>
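+
+<para>A sketch of what that looks like in the tool's source (the
+callback signature shown is an assumption -- check
+<filename>include/vg_skin.h</filename> for the real one):</para>
+<programlisting><![CDATA[
+static void fb_new_mem_heap ( Addr a, UInt len, Bool is_inited )
+{
+   VG_(printf)("new heap block at 0x%x, %u bytes\n", (UInt)a, len);
+}
+
+void SK_(pre_clo_init)(void)
+{
+   /* ... details and needs ... */
+   VG_(track_new_mem_heap)( fb_new_mem_heap );
+}]]></programlisting>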
+
+<para>More information about "details", "needs" and "trackable
+events" can be found in
+<filename>include/vg_skin.h</filename>.</para>
+
+</sect2>
+
+
+
+<sect2 id="writing-tools.instr" xreflabel="Instrumentation">
+<title>Instrumentation</title>
+
+<para><computeroutput>SK_(instrument)()</computeroutput> is the
+interesting one.  It allows you to instrument
+<emphasis>UCode</emphasis>, which is Valgrind's RISC-like
+intermediate language.  UCode is described in 
+<xref linkend="mc-tech-docs.ucode"/>.</para>
+
+<para>The easiest way to instrument UCode is to insert calls to C
+functions when interesting things happen.  See the tool "Lackey"
+(<filename>lackey/lk_main.c</filename>) for a simple example of
+this, or Cachegrind (<filename>cachegrind/cg_main.c</filename>)
+for a more complex example.</para>
+
+<para>A much more complicated way to instrument UCode, albeit one
+that might result in faster instrumented programs, is to extend
+UCode with new UCode instructions.  This is recommended for
+advanced Valgrind hackers only!  See Memcheck for an example.</para>
+
+</sect2>
+
+
+
+<sect2 id="writing-tools.fini" xreflabel="Finalisation">
+<title>Finalisation</title>
+
+<para>This is where you can present the final results, such as a
+summary of the information collected.  Any log files should be
+written out at this point.</para>
+
+</sect2>
+
+
+
+<sect2 id="writing-tools.otherinfo" xreflabel="Other Important Information">
+<title>Other Important Information</title>
+
+<para>Please note that the core/tool split infrastructure is
+quite complex and not brilliantly documented.  Here are some
+important points, but there are undoubtedly many others that I
+should note but haven't thought of.</para>
+
+<para>The file <filename>include/vg_skin.h</filename> contains
+all the types, macros, functions, etc. that a tool should
+(hopefully) need, and is the only <filename>.h</filename> file a
+tool should need to
+<computeroutput>#include</computeroutput>.</para>
+
+<para>In particular, you probably shouldn't use anything from the
+C library (there are deep reasons for this, trust us).  Valgrind
+provides an implementation of a reasonable subset of the C
+library, details of which are in
+<filename>vg_skin.h</filename>.</para>
+
+<para>Similarly, when writing a tool, you shouldn't need to look
+at any of the code in Valgrind's core, although doing so might
+sometimes be useful for understanding something.</para>
+
+<para><filename>vg_skin.h</filename> has a reasonable amount of
+documentation in it that should hopefully be enough to get you
+going.  But ultimately, the tools distributed (Memcheck,
+Addrcheck, Cachegrind, Lackey, etc.) are probably the best
+documentation of all, for the moment.</para>
+
+<para>Note that the <computeroutput>VG_</computeroutput> and
+<computeroutput>SK_</computeroutput> macros are used heavily.
+These just prepend longer strings in front of names to avoid
+potential namespace clashes.  We strongly recommend using the
+<computeroutput>SK_</computeroutput> macro for any global
+functions and variables in your tool, or writing a similar
+macro.</para>
+
+</sect2>
+
+
+<sect2 id="writing-tools.advice" xreflabel="Words of Advice">
+<title>Words of Advice</title>
+
+<para>Writing and debugging tools is not trivial.  Here are some
+suggestions for solving common problems.</para>
+
+
+<sect3 id="writing-tools.segfaults">
+<title>Segmentation Faults</title>
+
+<para>If you are getting segmentation faults in C functions used
+by your tool, the standard GDB command:</para>
+<screen><![CDATA[
+  gdb <prog> core]]></screen>
+<para>usually gives the location of the segmentation fault.</para>
+
+</sect3>
+
+
+<sect3 id="writing-tools.debugfns">
+<title>Debugging C functions</title>
+
+<para>If you want to debug C functions used by your tool, you can
+attach GDB to Valgrind with some effort:</para>
+<orderedlist>
+ <listitem>
+  <para>Enable the following code in
+  <filename>coregrind/vg_main.c</filename> by changing
+  <computeroutput>if (0)</computeroutput> 
+  into <computeroutput>if (1)</computeroutput>:
+<programlisting><![CDATA[
+  /* Hook to delay things long enough so we can get the pid and
+     attach GDB in another shell. */
+  if (0) { 
+    Int p, q;
+    for ( p = 0; p < 50000; p++ )
+      for ( q = 0; q < 50000; q++ ) ;
+  }]]></programlisting>
+  and rebuild Valgrind.</para>
+ </listitem>
+
+ <listitem>
+  <para>Then run:</para>
+<programlisting><![CDATA[
+  valgrind <prog>]]></programlisting>
+  <para>Valgrind starts the program, printing its process id, and
+  then delays for a few seconds (you may have to change the loop
+  bounds to get a suitable delay).</para>
+ </listitem>
+
+ <listitem>
+  <para>In a second shell run:</para>
+<programlisting><![CDATA[
+  gdb <prog pid>]]></programlisting>
+ </listitem>
+
+</orderedlist>
+
+<para>GDB may be able to give you useful information.  Note that
+by default most of the system is built with
+<computeroutput>-fomit-frame-pointer</computeroutput>, and you'll
+need to get rid of this to extract useful tracebacks from GDB.</para>
+
+</sect3>
+
+
+<sect3 id="writing-tools.ucode-probs">
+<title>UCode Instrumentation Problems</title>
+
+<para>If you are having problems with your UCode instrumentation,
+it's likely that GDB won't be able to help at all.  In this case,
+Valgrind's <computeroutput>--trace-codegen</computeroutput>
+option is invaluable for observing the results of
+instrumentation.</para>
+
+</sect3>
+
+
+<sect3 id="writing-tools.misc">
+<title>Miscellaneous</title>
+
+<para>If you just want to know whether a program point has been
+reached, using the <computeroutput>OINK</computeroutput> macro
+(in <filename>include/vg_skin.h</filename>) can be easier than
+using GDB.</para>
+
+<para>The other debugging command line options can be useful too
+(run <computeroutput>valgrind -h</computeroutput> for the
+list).</para>
+
+</sect3>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="writing-tools.advtopics" xreflabel="Advanced Topics">
+<title>Advanced Topics</title>
+
+<para>Once a tool becomes more complicated, there are some extra
+things you may want/need to do.</para>
+
+<sect2 id="writing-tools.suppressions" xreflabel="Suppressions">
+<title>Suppressions</title>
+
+<para>If your tool reports errors and you want to suppress some
+common ones, you can add suppressions to the suppression files.
+The relevant files are
+<computeroutput>valgrind/*.supp</computeroutput>; the final
+suppression file is assembled from these by combining the
+relevant <computeroutput>.supp</computeroutput> files, depending
+on the versions of Linux, X and glibc on the system.</para>
+
+<para>Suppression types have the form
+<computeroutput>tool_name:suppression_name</computeroutput>.  The
+<computeroutput>tool_name</computeroutput> here is the name you
+specify for the tool during initialisation with
+<computeroutput>VG_(details_name)()</computeroutput>.</para>
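+
+<para>A hypothetical suppression entry for the
+<computeroutput>foobar</computeroutput> tool might look like the
+sketch below; the suppression and function names are made up, and
+the exact layout of the location lines
+(<computeroutput>fun:</computeroutput>,
+<computeroutput>obj:</computeroutput>, etc.) is best copied from
+the existing entries in the
+<computeroutput>.supp</computeroutput> files:</para>
+<programlisting><![CDATA[
+{
+   name-of-this-suppression
+   foobar:SomeErrorKind
+   fun:some_library_function
+   fun:main
+}]]></programlisting>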
+
+</sect2>
+
+
+<!--
+<sect2 id="writing-tools.docs" xreflabel="Documentation">
+<title>Documentation</title>
+
+<para>As of version &rel-version;, Valgrind documentation has
+been converted to XML. Why? 
+See <ulink url="http://www.ucc.ie/xml/">The XML FAQ</ulink>.
+</para>
+
+
+<sect3 id="writing-tools.xml" xreflabel="The XML Toolchain">
+<title>The XML Toolchain</title>
+
+<para>If you are feeling conscientious and want to write some
+documentation for your tool, please use XML.  The Valgrind
+Docs use the following toolchain and versions:</para>
+
+<programlisting>
+ xmllint:   using libxml version 20607
+ xsltproc:  using libxml 20607, libxslt 10102 and libexslt 802
+ pdfxmltex: pdfTeX (Web2C 7.4.5) 3.14159-1.10b
+ pdftops:   version 3.00
+ DocBook:   version 4.2
+</programlisting>
+
+<para><command>Latency:</command> you should note that latency is
+a big problem: DocBook is constantly being updated, but the tools
+tend to lag behind somewhat.  It is important that the versions
+get on with each other, so if you decide to upgrade something,
+then you need to ascertain whether things still work nicely -
+this *cannot* be assumed.</para>
+
+<para><command>Stylesheets:</command> The Valgrind docs use
+various custom stylesheet layers, all of which are in
+<computeroutput>valgrind/docs/lib/</computeroutput>. You
+shouldn't need to modify these in any way.</para>
+
+<para><command>Catalogs:</command> Assuming that you have the
+various tools listed above installed, you will probably need to
+modify
+<computeroutput>valgrind/docs/lib/vg-catalog.xml</computeroutput>
+so that the parser can find your DocBook installation. Catalogs
+provide a mapping from generic addresses to specific local
+directories on a given machine.  Just add another
+<computeroutput>group</computeroutput> to this file, reflecting
+your local installation.</para>
+
+</sect3>
+
+
+<sect3 id="writing-tools.writing" xreflabel="Writing the Documentation">
+<title>Writing the Documentation</title>
+
+<para>If you aren't confident using XML, or you have problems
+with the toolchain, then write your documentation in text format,
+email it to
+<computeroutput>valgrind@valgrind.org</computeroutput>, and
+someone will convert it to XML for you.  Otherwise, follow these
+steps (using <computeroutput>foobar</computeroutput> as the
+example tool name again):</para>
+
+<orderedlist>
+
+  <listitem>
+   <para>Make a directory
+  <computeroutput>valgrind/foobar/docs/</computeroutput>.</para>
+ </listitem>
+
+ <listitem>
+  <para>Copy the xml tool documentation template file 
+  <computeroutput>valgrind/docs/xml/tool-template.xml</computeroutput>
+  to <computeroutput>foobar/docs/</computeroutput>, and rename it
+  to
+  <computeroutput>foobar/docs/fb-manual.xml</computeroutput>.</para>
+  <para><command>Note</command>: there is a *really stupid* tetex
+  bug with underscores in filenames, so don't use '_'.</para>
+ </listitem>
+
+ <listitem>
+  <para>Write the documentation. There are some helpful bits and
+  pieces on using xml markup in
+  <filename>valgrind/docs/xml/xml_help.txt</filename>.</para>
+ </listitem>
+
+ <listitem>
+  <para>Validate <computeroutput>foobar/docs/fb-manual.xml</computeroutput>
+  using the shell script
+  <filename>valgrind/docs/lib/xmlproc.sh</filename>.</para>
+<screen><![CDATA[
+% cd valgrind/docs/lib/
+% ./xmlproc.sh -valid ../../foobar/docs/fb-manual.xml
+]]></screen>
+
+   <para>If you have linked to other documents in the Valgrind
+   Documentation Set, you will get errors of the form:</para>
+
+<screen><![CDATA[
+fb-manual.xml:1632: element xref: validity error : 
+        IDREF attribute linkend references an unknown ID "mc-tech-docs"
+]]></screen>
+
+   <para>Ignore (only) these - they will disappear when
+   <filename>fb-manual.xml</filename> is integrated into the
+   Set.</para>
+
+  <para>Because the xml toolchain is fragile, it is important to
+  ensure that <computeroutput>fb-manual.xml</computeroutput> won't
+  break the documentation set build.  Note that just because an
+  xml file happily transforms to html does not necessarily mean
+  the same holds true for pdf/ps.</para>
+ </listitem>
+
+ <listitem>
+  <para>You can (re-)generate <filename>fb-manual.html</filename>
+  while you are writing <filename>fb-manual.xml</filename> to help
+  you see how it's looking.  The generated file
+  <filename>fb-manual.html</filename> will be output in
+  <computeroutput>foobar/docs/</computeroutput>.</para>
+
+<screen><![CDATA[
+% ./xmlproc.sh -html ../../foobar/docs/fb-manual.xml
+]]></screen>
+ </listitem>
+
+ <listitem>
+  <para>When you have finished, generate html, pdf and ps output
+  to check all is well:</para>
+
+<screen><![CDATA[
+% cp ../../foobar/fb-manual.xml .
+% ./xmlproc.sh -test fb-manual.xml
+]]></screen>
+
+   <para>Check the output files (<filename>index.html,
+   fb-manual.pdf, fb-manual.ps</filename>) in
+   <computeroutput>/lib/test/</computeroutput> with the relevant
+   viewers.  When you are happy and have finished tinkering with
+   <computeroutput>fb-manual.xml</computeroutput>:</para>
+
+<screen><![CDATA[
+% ./xmlproc.sh -clean fb-manual.xml
+]]></screen>
+</listitem>
+
+ <listitem>
+  <para>In order for your documentation to be included in the
+  User Manual, the relevant entries must be made in
+  <filename>/valgrind/docs/xml/vg-bookset.xml</filename> in this
+  format (hopefully, it should be pretty obvious):</para>
+
+<programlisting><![CDATA[
+<!ENTITY fb-manual   SYSTEM "../../foobar/docs/fb-manual.xml">
+... ...
+&fb-manual;
+]]></programlisting>
+
+  <para>Send a patch for this to
+  <computeroutput>valgrind@valgrind.org</computeroutput>.</para>
+
+  <para>To achieve true anality, try for a full doc-set build:</para>
+<screen><![CDATA[
+% cd valgrind/docs/
+% make all
+]]></screen>
+ </listitem>
+
+</orderedlist>
+
+</sect3>
+
+</sect2>
+-->
+<sect2 id="writing-tools.docs" xreflabel="Documentation">
+<title>Documentation</title>
+
+<para>As of version &rel-version;, Valgrind documentation has
+been converted to XML. Why? 
+See <ulink url="http://www.ucc.ie/xml/">The XML FAQ</ulink>.
+</para>
+
+
+<sect3 id="writing-tools.xml" xreflabel="The XML Toolchain">
+<title>The XML Toolchain</title>
+
+<para>If you are feeling conscientious and want to write some
+documentation for your tool, please use XML.  The Valgrind
+Docs use the following toolchain and versions:</para>
+
+<programlisting>
+ xmllint:   using libxml version 20607
+ xsltproc:  using libxml 20607, libxslt 10102 and libexslt 802
+ pdfxmltex: pdfTeX (Web2C 7.4.5) 3.14159-1.10b
+ pdftops:   version 3.00
+ DocBook:   version 4.2
+</programlisting>
+
+<para><command>Latency:</command> you should note that latency is
+a big problem: DocBook is constantly being updated, but the tools
+tend to lag behind somewhat.  It is important that the versions
+get on with each other, so if you decide to upgrade something,
+then you need to ascertain whether things still work nicely -
+this *cannot* be assumed.</para>
+
+<para><command>Stylesheets:</command> The Valgrind docs use
+various custom stylesheet layers, all of which are in
+<computeroutput>valgrind/docs/lib/</computeroutput>. You
+shouldn't need to modify these in any way.</para>
+
+<para><command>Catalogs:</command> Assuming that you have the
+various tools listed above installed, you will probably need to
+modify
+<computeroutput>valgrind/docs/lib/vg-catalog.xml</computeroutput>
+so that the parser can find your DocBook installation. Catalogs
+provide a mapping from generic addresses to specific local
+directories on a given machine.  Just add another
+<computeroutput>group</computeroutput> to this file, reflecting
+your local installation.</para>
+
+</sect3>
+
+
+<sect3 id="writing-tools.writing" xreflabel="Writing the Documentation">
+<title>Writing the Documentation</title>
+
+<para>Follow these steps (using <computeroutput>foobar</computeroutput>
+as the example tool name again):</para>
+
+<orderedlist>
+
+  <listitem>
+   <para>Make a directory
+  <computeroutput>valgrind/foobar/docs/</computeroutput>.</para>
+ </listitem>
+
+ <listitem>
+  <para>Copy the XML documentation file for the tool Nulgrind from
+  <computeroutput>valgrind/none/docs/nl-manual.xml</computeroutput>
+  to <computeroutput>foobar/docs/</computeroutput>, and rename it
+  to
+  <computeroutput>foobar/docs/fb-manual.xml</computeroutput>.</para>
+  <para><command>Note</command>: there is a *really stupid* tetex
+  bug with underscores in filenames, so don't use '_'.</para>
+ </listitem>
+
+ <listitem>
+  <para>Write the documentation. There are some helpful bits and
+  pieces on using xml markup in
+  <filename>valgrind/docs/xml/xml_help.txt</filename>.</para>
+ </listitem>
+
+ <listitem>
+  <para>Include it in the User Manual by adding the relevant entry
+  to <filename>valgrind/docs/xml/manual.xml</filename>.  Copy
+  and edit an existing entry.</para>
+ </listitem>
+
+ <listitem>
+  <para>Validate <computeroutput>foobar/docs/fb-manual.xml</computeroutput>
+  using the following command from within <filename>valgrind/docs/</filename>:
+  </para>
+<screen><![CDATA[
+% make valid
+]]></screen>
+
+   <para>You will probably get errors that look like this:</para>
+
+<screen><![CDATA[
+./xml/index.xml:5: element chapter: validity error : No declaration for
+attribute base of element chapter
+]]></screen>
+
+  <para>Ignore (only) these -- they're not important.</para>
+
+  <para>Because the xml toolchain is fragile, it is important to
+  ensure that <filename>fb-manual.xml</filename> won't
+  break the documentation set build.  Note that just because an
+  xml file happily transforms to html does not necessarily mean
+  the same holds true for pdf/ps.</para>
+ </listitem>
+
+ <listitem>
+  <para>You can (re-)generate the HTML docs
+  while you are writing <filename>fb-manual.xml</filename> to help
+  you see how it's looking.  The generated files end up in
+  <filename>valgrind/docs/html/</filename>.  Use the following
+  command, within <filename>valgrind/docs/</filename>:</para>
+
+<screen><![CDATA[
+% make html-docs
+]]></screen>
+ </listitem>
+
+ <listitem>
+  <para>When you have finished, also generate pdf and ps output
+  to check all is well, from within <filename>valgrind/docs/</filename>:
+  </para>
+
+<screen><![CDATA[
+% make print-docs
+]]></screen>
+
+   <para>Check the output <filename>.pdf</filename> and
+   <filename>.ps</filename> files in
+   <computeroutput>valgrind/docs/print/</computeroutput>. 
+   </para>
+</listitem>
+
+</orderedlist>
+
+</sect3>
+
+</sect2>
+
+
+<sect2 id="writing-tools.regtests" xreflabel="Regression Tests">
+<title>Regression Tests</title>
+
+<para>Valgrind has some support for regression tests.  If you
+want to write regression tests for your tool:</para>
+
+<orderedlist>
+ <listitem>
+  <para>Make a directory
+  <computeroutput>foobar/tests/</computeroutput>.</para>
+ </listitem>
+
+ <listitem>
+  <para>Edit <computeroutput>foobar/Makefile.am</computeroutput>,
+  adding <computeroutput>tests</computeroutput> to the
+  <computeroutput>SUBDIRS</computeroutput> variable.</para>
+ </listitem>
+
+ <listitem>
+  <para>Edit <computeroutput>configure.in</computeroutput>,
+  adding <computeroutput>foobar/tests/Makefile</computeroutput>
+  to the <computeroutput>AC_OUTPUT</computeroutput> list.</para>
+ </listitem>
+
+ <listitem>
+  <para>Write
+  <computeroutput>foobar/tests/Makefile.am</computeroutput>.  Use
+  <computeroutput>memcheck/tests/Makefile.am</computeroutput> as
+  an example.</para>
+ </listitem>
+
+ <listitem>
+  <para>Write the tests, <computeroutput>.vgtest</computeroutput>
+  test description files,
+  <computeroutput>.stdout.exp</computeroutput> and
+  <computeroutput>.stderr.exp</computeroutput> expected output
+  files.  (Note that Valgrind's output goes to stderr.)  Some
+  details on writing and running tests are given in the comments
+  at the top of the testing script
+  <computeroutput>tests/vg_regtest</computeroutput>; a sketch of
+  a minimal test description file is given after this list.</para>
+ </listitem>
+
+ <listitem>
+  <para>Write a filter for stderr results
+  <computeroutput>foobar/tests/filter_stderr</computeroutput>.
+  It can call the existing filters in
+  <computeroutput>tests/</computeroutput>.  See
+  <computeroutput>memcheck/tests/filter_stderr</computeroutput>
+  for an example; in particular note the
+  <computeroutput>$dir</computeroutput> trick that ensures the
+  filter works correctly from any directory.</para>
+ </listitem>
+
+</orderedlist>
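+
+<para>As mentioned in step 5, a test is described by a small
+<computeroutput>.vgtest</computeroutput> file, e.g.
+<filename>foobar/tests/mytest.vgtest</filename>.  A minimal
+sketch is shown below; the program name and option are made up,
+and the authoritative description of the recognised fields is in
+the comments at the top of
+<computeroutput>tests/vg_regtest</computeroutput>:</para>
+<programlisting><![CDATA[
+prog: mytest
+vgopts: --some-foobar-option=yes]]></programlisting>
+<para>The expected output then goes in
+<filename>mytest.stdout.exp</filename> and
+<filename>mytest.stderr.exp</filename> alongside it.</para>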
+
+</sect2>
+
+
+
+<sect2 id="writing-tools.profiling" xreflabel="Profiling">
+<title>Profiling</title>
+
+<para>To do simple tick-based profiling of a tool, include the
+line:</para>
+<programlisting><![CDATA[
+  #include "vg_profile.c"]]></programlisting>
+<para>in the tool somewhere, and rebuild (you may have to
+<computeroutput>make clean</computeroutput> first).  Then run
+Valgrind with the <computeroutput>--profile=yes</computeroutput>
+option.</para>
+
+<para>The profiler is stack-based; you can register a profiling
+event with
+<computeroutput>VGP_(register_profile_event)()</computeroutput>
+and then use the <computeroutput>VGP_PUSHCC</computeroutput> and
+<computeroutput>VGP_POPCC</computeroutput> macros to record time
+spent doing certain things.  New profiling event numbers must not
+overlap with the core profiling event numbers.  See
+<filename>include/vg_skin.h</filename> for details and Memcheck
+for an example.</para>
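+
+<para>A sketch of the usual pattern follows.  The event number
+and name are made up, and the exact macro arguments are an
+assumption -- check <filename>include/vg_skin.h</filename>
+before copying this:</para>
+<programlisting><![CDATA[
+/* The event number must not clash with the core's own event
+   numbers; the range the core uses is given in include/vg_skin.h.
+   The value 100 here is just an illustrative guess. */
+#define VgpFbExpensiveOp  100
+
+void SK_(pre_clo_init) ( void )
+{
+   /* ... */
+   VGP_(register_profile_event) ( VgpFbExpensiveOp, "fb-expensive-op" );
+}
+
+static void fb_do_expensive_op ( void )
+{
+   VGP_PUSHCC(VgpFbExpensiveOp);
+   /* ... the work being timed ... */
+   VGP_POPCC(VgpFbExpensiveOp);
+}]]></programlisting>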
+
+</sect2>
+
+
+
+<sect2 id="writing-tools.mkhackery" xreflabel="Other Makefile Hackery">
+<title>Other Makefile Hackery</title>
+
+<para>If you add any directories under
+<computeroutput>valgrind/foobar/</computeroutput>, you will need
+to add an appropriate <filename>Makefile.am</filename> to it, and
+add a corresponding entry to the
+<computeroutput>AC_OUTPUT</computeroutput> list in
+<filename>valgrind/configure.in</filename>.</para>
+
+<para>If you add any scripts to your tool (see Cachegrind for an
+example) you need to add them to the
+<computeroutput>bin_SCRIPTS</computeroutput> variable in
+<filename>valgrind/foobar/Makefile.am</filename>.</para>
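+
+<para>A hypothetical sketch of both changes (the directory and
+script names are invented for illustration):</para>
+<programlisting><![CDATA[
+## In valgrind/configure.in, extend the AC_OUTPUT list:
+AC_OUTPUT(
+   ...
+   foobar/Makefile
+   foobar/scripts/Makefile
+   ...
+)
+
+## In valgrind/foobar/Makefile.am, ship an extra script:
+bin_SCRIPTS = fb-annotate]]></programlisting>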
+
+</sect2>
+
+
+
+<sect2 id="writing-tools.ifacever" xreflabel="Core/tool Interface Versions">
+<title>Core/tool Interface Versions</title>
+
+<para>In order to allow for the core/tool interface to evolve
+over time, Valgrind uses a basic interface versioning system.
+All a tool has to do is use the
+<computeroutput>VG_DETERMINE_INTERFACE_VERSION</computeroutput>
+macro exactly once in its code.  If not, a link error will occur
+when the tool is built.</para>
+
+<para>The interface version number has the form X.Y.  Changes in
+Y indicate binary compatible changes.  Changes in X indicate
+binary incompatible changes.  If the core and tool have the same
+major version number X, they should work together.  If X doesn't
+match, Valgrind will abort execution with an explanation of the
+problem.</para>
+
+<para>This approach was chosen so that if the interface changes
+in the future, old tools won't work and the reason will be
+clearly explained, instead of possibly crashing mysteriously.  We
+have attempted to minimise the potential for binary incompatible
+changes by means such as minimising the use of naked structs in
+the interface.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="writing-tools.finalwords" xreflabel="Final Words">
+<title>Final Words</title>
+
+<para>This whole core/tool business is under active development,
+although it's slowly maturing.</para>
+
+<para>The first consequence of this is that the core/tool
+interface will continue to change in the future; we have no
+intention of freezing it and then regretting the inevitable
+stupidities.  Hopefully most of the future changes will be to add
+new features, hooks, functions, etc., rather than to change old
+ones, which should cause a minimum of trouble for existing tools.
+We've also put some effort into future-proofing the interface to
+avoid binary incompatibility, but we can't guarantee anything;
+the versioning system should at least catch any incompatibilities.
+Just something to be aware of.</para>
+
+<para>The second consequence of this is that we'd love to hear
+your feedback about it:</para>
+
+<itemizedlist>
+ <listitem>
+  <para>If you love it or hate it</para>
+ </listitem>
+ <listitem>
+  <para>If you find bugs</para>
+ </listitem>
+ <listitem>
+  <para>If you write a tool</para>
+ </listitem>
+ <listitem>
+  <para>If you have suggestions for new features, needs,
+  trackable events, functions</para>
+ </listitem>
+ <listitem>
+  <para>If you have suggestions for making tools easier to
+  write</para>
+ </listitem>
+ <listitem>
+  <para>If you have suggestions for improving this
+  documentation</para>
+ </listitem>
+ <listitem>
+  <para>If you don't understand something</para>
+ </listitem>
+</itemizedlist>
+
+<para>or anything else!</para>
+
+<para>Happy programming.</para>
+
+</sect1>
+
+</chapter>
diff --git a/docs/xml/xml_help.txt b/docs/xml/xml_help.txt
new file mode 100644
index 0000000..41d5ed7
--- /dev/null
+++ b/docs/xml/xml_help.txt
@@ -0,0 +1,174 @@
+ <!-- -*- sgml -*- -->
+----------------------------------------------
+Docbook Reference Manual (1999):
+- http://www.oreilly.com/catalog/docbook/
+DocBook XSL: The Complete Guide (2002)
+- http://www.sagehill.net/docbookxsl/index.html
+
+DocBook elements (what tags are allowed where)
+- http://www.oreilly.com/catalog/docbook/chapter/book/refelem.html
+
+Catalogs:
+- http://www.sagehill.net/docbookxsl/WriteCatalog.html
+
+
+----------------------------------------------
+xml to html markup transformations:
+
+<programlisting> --> <pre class="programlisting">
+<screen>         --> <pre class="screen">
+<computeroutput> --> <tt class="computeroutput">
+<literal>        --> <tt>
+<emphasis>       --> <i>
+<command>        --> <b class="command">
+<blockquote>     --> <div class="blockquote">
+                     <blockquote class="blockquote">
+
+Important: inside <screen> and <programlisting> blocks, do NOT
+use 'html entities' in your markup, e.g. '&lt;'.  If you *do* use
+them, they will be output verbatim, which is not what you want.
+
+
+----------------------------------------------
+
+<ulink url="http://..">http://kcachegrind.sourceforge.net</ulink>
+
+
+----------------------------------------------
+<variablelist>                         --> <dl>
+ <varlistentry>
+  <term>TTF</term>                     --> <dt>
+  <listitem>TrueType fonts.</listitem> --> <dd>
+ </varlistentry>
+</variablelist>                        --> <dl>
+
+
+----------------------------------------------
+<itemizedlist>          --> <ul>
+ <listitem>             --> <li>
+  <para>....</para>
+  <para>....</para>
+ </listitem>            --> </li>
+</itemizedlist>         --> </ul>
+
+
+----------------------------------------------
+<orderedlist>           --> <ol>
+ <listitem>             --> <li>
+  <para>....</para>
+  <para>....</para>
+ </listitem>            --> </li>
+</orderedlist>          --> </ol>
+
+
+----------------------------------------------
+To achieve this:
+
+This is a paragraph of text before a list:
+
+  * some text
+
+  * some more text
+
+and this is some more text after the list.
+
+Do this:
+<para>This is a paragraph of text before a list:</para>
+<itemizedlist>
+ <listitem>
+  <para>some text</para>
+ </listitem>
+ <listitem>
+  <para>some more text</para>
+ </listitem>
+</itemizedlist>
+
+
+----------------------------------------------
+To achieve this:
+For further details, see <a href="clientreq">The Mechanism</a>
+
+Do this:
+
+  Given:
+  <sect1 id="clientreq" xreflabel="The Mechanism">
+   <title>The Mechanism</title>
+   <para>...</para>
+  </sect1>
+
+  Then do:
+  For further details, see <xref linkend="clientreq"/>.
+
+
+----------------------------------------------
+To achieve this:
+<p><b>Warning:</b> Only do this if ...</p>
+
+Do this:
+<formalpara>
+ <title>Warning:</title>
+ <para>Only do this if ...</para>
+</formalpara>
+
+Or this:
+<para><command>Warning:</command> Only do this if ... </para>
+
+
+----------------------------------------------
+To achieve this:
+<p>It uses the Eraser algorithm described in:<br />
+<br />
+  Eraser: A Dynamic Data Race Detector for Multithreaded Programs<br />
+  Stefan Savage, Michael Burrows, Patrick Sobalvarro and Thomas Anderson<br />
+  ACM Transactions on Computer Systems, 15(4):391-411<br />
+  November 1997.<br />
+</p>
+
+Do this:
+<literallayout>
+It uses the Eraser algorithm described in:
+
+  Eraser: A Dynamic Data Race Detector for Multithreaded Programs
+  Stefan Savage, Michael Burrows, Patrick Sobalvarro and Thomas Anderson
+  ACM Transactions on Computer Systems, 15(4):391-411
+  November 1997.
+</literallayout>
+
+
+----------------------------------------------
+To achieve this:
+<pre>
+/* Hook to delay things long enough so we can get the pid 
+   and attach GDB in another shell. */
+if (0) { 
+  Int p, q;
+  for ( p = 0; p < 50000; p++ )
+    for ( q = 0; q < 50000; q++ ) ;
+}
+</pre>
+
+Do this:
+<programlisting><![CDATA[
+/* Hook to delay things long enough so we can get the pid
+   and attach GDB in another shell. */
+if (0) { 
+  Int p, q;
+  for ( p = 0; p < 50000; p++ )
+    for ( q = 0; q < 50000; q++ ) ;
+}]]></programlisting>
+
+
+(do the same thing for <screen> tag)
+
+
+----------------------------------------------
+To achieve this:
+  where <i><code>TAG</code></i> has the ...
+
+Do this:
+  where <emphasis><computeroutput>TAG</computeroutput></emphasis> has the ...
+
+Note: you cannot put <emphasis> inside <computeroutput>, unfortunately.
+
+----------------------------------------------
+
+Any other helpful hints?  Please tell us.
diff --git a/helgrind/docs/Makefile.am b/helgrind/docs/Makefile.am
index 54b4b1b..84f630f 100644
--- a/helgrind/docs/Makefile.am
+++ b/helgrind/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = hg_main.html
+EXTRA_DIST = hg-manual.xml
diff --git a/helgrind/docs/hg-manual.xml b/helgrind/docs/hg-manual.xml
new file mode 100644
index 0000000..385b60a
--- /dev/null
+++ b/helgrind/docs/hg-manual.xml
@@ -0,0 +1,57 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="hg-manual" xreflabel="Helgrind: a data-race detector">
+  <title>Helgrind: a data-race detector</title>
+
+<para>Helgrind is a Valgrind tool for detecting data races in C
+and C++ programs that use the Pthreads library.</para>
+
+<para>To use this tool, you specify
+<computeroutput>--tool=helgrind</computeroutput> on the Valgrind
+command line.</para>
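+
+<para>For example, to run a program
+<computeroutput>prog</computeroutput> under Helgrind:</para>
+<screen><![CDATA[
+% valgrind --tool=helgrind prog]]></screen>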
+
+<para>It uses the Eraser algorithm described in:
+
+ <address>Eraser: A Dynamic Data Race Detector for Multithreaded Programs
+  Stefan Savage, Michael Burrows, Greg Nelson, Patrick Sobalvarro and Thomas Anderson
+  ACM Transactions on Computer Systems, 15(4):391-411
+  November 1997.
+ </address>
+</para>
+
+<para>We also incorporate significant improvements from this paper:
+
+ <address>Runtime Checking of Multithreaded Applications with Visual Threads
+  Jerry J. Harrow, Jr.
+  Proceedings of the 7th International SPIN Workshop on Model Checking of Software
+  Stanford, California, USA
+  August 2000
+  LNCS 1885, pp331--342
+  K. Havelund, J. Penix, and W. Visser, editors.
+ </address>
+</para>
+
+<para>Basically what Helgrind does is to look for memory
+locations which are accessed by more than one thread.  For each
+such location, Helgrind records which of the program's
+(pthread_mutex_)locks were held by the accessing thread at the
+time of the access.  The hope is to discover that there is indeed
+at least one lock which is used by all threads to protect that
+location.  If no such lock can be found, then there is
+(apparently) no consistent locking strategy being applied for
+that location, and so a possible data race might result.</para>
+
+<para>Helgrind also allows for "thread segment lifetimes".  If
+the execution of two threads cannot overlap -- for example, if
+your main thread waits on another thread with a
+<computeroutput>pthread_join()</computeroutput> operation -- they
+can both access the same variable without holding a lock.</para>
+
+<para>There's a lot of other sophistication in Helgrind, aimed at
+reducing the number of false reports, and at producing useful
+error reports.  We hope to have more documentation one
+day...</para>
+
+</chapter>
diff --git a/helgrind/docs/hg_main.html b/helgrind/docs/hg_main.html
deleted file mode 100644
index 74ee451..0000000
--- a/helgrind/docs/hg_main.html
+++ /dev/null
@@ -1,60 +0,0 @@
-
-<html>
-  <head>
-    <title>Helgrind: a data-race detector</title>
-  </head>
-
-<a name="hg-top"></a>
-<h2>6&nbsp; Helgrind: a data-race detector</h2>
-
-To use this tool, you must specify <code>--tool=helgrind</code> on the
-Valgrind command line.
-<p>
-
-Helgrind is a Valgrind tool for detecting data races in C and C++ programs
-that use the Pthreads library.
-<p>
-It uses the Eraser algorithm described in 
-<blockquote>
-    Eraser: A Dynamic Data Race Detector for Multithreaded Programs<br>
-    Stefan Savage, Michael Burrows, Greg Nelson, Patrick Sobalvarro and 
-    Thomas Anderson<br>
-    ACM Transactions on Computer Systems, 15(4):391-411<br>
-    November 1997.
-</blockquote>
-
-We also incorporate significant improvements from this paper:
-
-<blockquote>
-    Runtime Checking of Multithreaded Applications with Visual Threads
-    Jerry J. Harrow, Jr.<br>
-    Proceedings of the 7th International SPIN Workshop on Model Checking of 
-    Software<br>
-    Stanford, California, USA<br>
-    August 2000<br>
-    LNCS 1885, pp331--342<br>
-    K. Havelund, J. Penix, and W. Visser, editors.<br>
-</blockquote>
-
-<p>
-Basically what Helgrind does is to look for memory locations which are
-accessed by more than one thread.  For each such location, Helgrind
-records which of the program's (pthread_mutex_)locks were held by the
-accessing thread at the time of the access.  The hope is to discover
-that there is indeed at least one lock which is used by all threads to
-protect that location.  If no such lock can be found, then there is 
-(apparently) no consistent locking strategy being applied for that
-location, and so a possible data race might result.
-<p>
-Helgrind also allows for "thread segment lifetimes".  If the execution of two
-threads cannot overlap -- for example, if your main thread waits on another
-thread with a <code>pthread_join()</code> operation -- they can both access the
-same variable without holding a lock.
-<p>
-There's a lot of other sophistication in Helgrind, aimed at
-reducing the number of false reports, and at producing useful error
-reports.  We hope to have more documentation one day...
-
-</body>
-</html>
-
diff --git a/lackey/docs/Makefile.am b/lackey/docs/Makefile.am
index 4872f33..86dc406 100644
--- a/lackey/docs/Makefile.am
+++ b/lackey/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = lk_main.html
+EXTRA_DIST = lk-manual.xml
diff --git a/lackey/docs/lk-manual.xml b/lackey/docs/lk-manual.xml
new file mode 100644
index 0000000..7baa753
--- /dev/null
+++ b/lackey/docs/lk-manual.xml
@@ -0,0 +1,39 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="lk-manual" xreflabel="Lackey">
+
+<title>Lackey: a very simple profiler</title>
+
+<para>Lackey is a simple Valgrind tool that does some basic
+program measurement.  It adds quite a lot of simple
+instrumentation to the program's code.  It is primarily intended
+to be of use as an example tool.</para>
+
+<para>It measures three things:</para>
+
+<orderedlist>
+
+ <listitem>
+  <para>The number of calls to
+  <computeroutput>_dl_runtime_resolve()</computeroutput>, the
+  function in glibc's dynamic linker that resolves function
+  lookups into shared objects.</para>
+ </listitem>
+
+ <listitem>
+  <para>The number of UCode instructions (UCode is Valgrind's
+  RISC-like intermediate language), x86 instructions, and basic
+  blocks executed by the program, and some ratios between the
+  three counts.</para>
+ </listitem>
+
+ <listitem>
+  <para>The number of conditional branches encountered and the
+  proportion of those taken.</para>
+ </listitem>
+
+</orderedlist>
+
+</chapter>
diff --git a/lackey/docs/lk_main.html b/lackey/docs/lk_main.html
deleted file mode 100644
index a6f22a0..0000000
--- a/lackey/docs/lk_main.html
+++ /dev/null
@@ -1,68 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>Cachegrind</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="title"></a>
-<h1 align=center>Lackey</h1>
-<center>This manual was last updated on 2002-10-03</center>
-<p>
-
-<center>
-<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-Copyright &copy; 2002-2004 Nicholas Nethercote
-<p>
-Lackey is licensed under the GNU General Public License, 
-version 2<br>
-Lackey is an example Valgrind tool that does some very basic program
-measurement.
-</center>
-
-<p>
-
-<h2>1&nbsp; Lackey</h2>
-
-Lackey is a simple Valgrind tool that does some basic program measurement.
-It adds quite a lot of simple instrumentation to the program's code.  It is
-primarily intended to be of use as an example tool.
-<p>
-It measures three things:
-
-<ol>
-<li>The number of calls to <code>_dl_runtime_resolve()</code>, the function
-    in glibc's dynamic linker that resolves function lookups into shared 
-    objects.<p>
-
-<li>The number of UCode instructions (UCode is Valgrind's RISC-like
-    intermediate language), x86 instructions, and basic blocks executed by the
-    program, and some ratios between the three counts.<p>
-
-<li>The number of conditional branches encountered and the proportion of those
-    taken.<p>
-</ol>
-
-<hr width="100%">
-</body>
-</html>
-
diff --git a/massif/docs/Makefile.am b/massif/docs/Makefile.am
index a53c352..fc698e9 100644
--- a/massif/docs/Makefile.am
+++ b/massif/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = ms_main.html date.gif
+EXTRA_DIST = ms-manual.xml
diff --git a/massif/docs/date.gif b/massif/docs/date.gif
deleted file mode 100644
index eff527a..0000000
--- a/massif/docs/date.gif
+++ /dev/null
Binary files differ
diff --git a/massif/docs/ms-manual.xml b/massif/docs/ms-manual.xml
new file mode 100644
index 0000000..f34af7b
--- /dev/null
+++ b/massif/docs/ms-manual.xml
@@ -0,0 +1,465 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="ms-manual" xreflabel="Massif: a heap profiler">
+  <title>Massif: a heap profiler</title>
+
+<para>To use this tool, you must specify
+<computeroutput>--tool=massif</computeroutput> on the Valgrind
+command line.</para>
+
+<sect1 id="ms-manual.spaceprof" xreflabel="Heap profiling">
+<title>Heap profiling</title>
+
+<para>Massif is a heap profiler, i.e. it measures how much heap
+memory programs use.  In particular, it can give you information
+about:</para>
+
+<itemizedlist>
+  <listitem><para>Heap blocks;</para></listitem>
+  <listitem><para>Heap administration blocks;</para></listitem>
+  <listitem><para>Stack sizes.</para></listitem>
+</itemizedlist>
+
+<para>Heap profiling is useful to help you reduce the amount of
+memory your program uses.  On modern machines with virtual
+memory, this provides the following benefits:</para>
+
+<itemizedlist>
+  <listitem><para>It can speed up your program -- a smaller
+    program will interact better with your machine's caches,
+    avoid paging, and so on.</para></listitem>
+
+  <listitem><para>If your program uses lots of memory, it will
+    reduce the chance that it exhausts your machine's swap
+    space.</para></listitem>
+</itemizedlist>
+
+<para>Also, there are certain space leaks that aren't detected by
+traditional leak-checkers, such as Memcheck's.  That's because
+the memory isn't ever actually lost -- a pointer remains to it --
+but it's not in use.  Programs that have leaks like this can
+unnecessarily increase the amount of memory they are using over
+time.</para>
+
+
+
+<sect2 id="ms-manual.heapprof" 
+       xreflabel="Why Use a Heap Profiler?">
+<title>Why Use a Heap Profiler?</title>
+
+<para>Everybody knows how useful time profilers are for speeding
+up programs.  They are particularly useful because people are
+notoriously bad at predicting where the bottlenecks in their
+programs are.</para>
+
+<para>But the story is different for heap profilers.  Some
+programming languages, particularly lazy functional languages
+like <ulink url="http://www.haskell.org">Haskell</ulink>, have
+quite sophisticated heap profilers.  But there are few tools as
+powerful for profiling C and C++ programs.</para>
+
+<para>Why is this?  Maybe it's because C and C++ programmers must
+think that they know where the memory is being allocated.  After
+all, you can see all the calls to
+<computeroutput>malloc()</computeroutput> and
+<computeroutput>new</computeroutput> and
+<computeroutput>new[]</computeroutput>, right?  But, in a big
+program, do you really know which heap allocations are being
+executed, how many times, and how large each allocation is?  Can
+you give even a vague estimate of the memory footprint for your
+program?  Do you know this for all the libraries your program
+uses?  What about administration bytes required by the heap
+allocator to track heap blocks -- have you thought about them?
+What about the stack?  If you are unsure about any of these
+things, maybe you should think about heap profiling.</para>
+
+<para>Massif can tell you these things.</para>
+
+<para>Or maybe it's because it's relatively easy to add basic
+heap profiling functionality into a program, to tell you how many
+bytes you have allocated for certain objects, or similar.  But
+this information might be as simple as total counts for the
+whole program's execution.  What about space usage at different
+points in the program's execution, for example?  And
+reimplementing heap profiling code for each project is a
+pain.</para>
+
+<para>Massif can save you this effort.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="ms-manual.using" xreflabel="Using Massif">
+<title>Using Massif</title>
+
+
+<sect2 id="ms-manual.overview" xreflabel="Overview">
+<title>Overview</title>
+
+<para>First off, as for normal Valgrind use, you probably want to
+compile with debugging info (the
+<computeroutput>-g</computeroutput> flag).  But, as opposed to
+Memcheck, you probably <command>do</command> want to turn
+optimisation on, since you should profile your program as it will
+be normally run.</para>
+
+<para>Then, run your program with <computeroutput>valgrind
+--tool=massif</computeroutput> in front of the normal command
+line invocation.  When the program finishes, Massif will print
+summary space statistics.  It also creates a graph representing
+the program's heap usage in a file called
+<filename>massif.pid.ps</filename>, which can be read by any
+PostScript viewer, such as Ghostview.</para>
+
+<para>It also puts detailed information about heap consumption in
+a file <filename>massif.pid.txt</filename> (text format) or
+<filename>massif.pid.html</filename> (HTML format), where
+<emphasis>pid</emphasis> is the program's process id.</para>
+
+</sect2>
+
+
+<sect2 id="ms-manual.basicresults" xreflabel="Basic Results of Profiling">
+<title>Basic Results of Profiling</title>
+
+<para>To gather heap profiling information about the program
+<computeroutput>prog</computeroutput>, type:</para>
+<screen><![CDATA[
+% valgrind --tool=massif prog]]></screen>
+
+<para>The program will execute (slowly).  Upon completion,
+summary statistics that look like this will be printed:</para>
+<programlisting><![CDATA[
+==27519== Total spacetime:   2,258,106 ms.B
+==27519== heap:              24.0%
+==27519== heap admin:         2.2%
+==27519== stack(s):          73.7%]]></programlisting>
+
+<para>All measurements are done in
+<emphasis>spacetime</emphasis>, i.e. space (in bytes) multiplied
+by time (in milliseconds).  Note that because Massif slows a
+program down a lot, the actual spacetime figure is fairly
+meaningless; it's the relative values that are
+interesting.</para>
+
+<para>Which entries you see in the breakdown depends on the
+command line options given.  The above example measures all the
+possible parts of memory:</para>
+
+<itemizedlist>
+  <listitem><para>Heap: number of words allocated on the heap, via
+    <computeroutput>malloc()</computeroutput>,
+    <computeroutput>new</computeroutput> and
+    <computeroutput>new[]</computeroutput>.</para>
+  </listitem>
+  <listitem>
+    <para>Heap admin: each heap block allocated requires some
+    administration data, which lets the allocator track certain
+    things about the block.  It is easy to forget about this, and
+    if your program allocates lots of small blocks, it can add
+    up.  This value is an estimate of the space required for this
+    administration data.</para>
+  </listitem>
+  <listitem>
+    <para>Stack(s): the spacetime used by the program's stack(s).
+    (Threaded programs can have multiple stacks.)  This includes
+    signal handler stacks.</para>
+  </listitem>
+</itemizedlist>
+
+</sect2>
+
+
+<sect2 id="ms-manual.graphs" xreflabel="Spacetime Graphs">
+<title>Spacetime Graphs</title>
+
+<para>As well as printing summary information, Massif also
+creates a file representing a spacetime graph,
+<filename>massif.pid.hp</filename>.  It will produce a file
+called <filename>massif.pid.ps</filename>, which can be viewed in
+a PostScript viewer.</para>
+
+<para>Massif uses a program called
+<computeroutput>hp2ps</computeroutput> to convert the raw data
+into the PostScript graph.  It's distributed with Massif, but
+came originally from the 
+<ulink url="http://haskell.cs.yale.edu/ghc/">Glasgow Haskell
+Compiler</ulink>.  You shouldn't need to worry about this at all.
+However, if the graph creation fails for any reason, Massif will
+tell you, and will leave behind a file named
+<filename>massif.pid.hp</filename>, containing the raw heap
+profiling data.</para>
+
+<para>Here's an example graph:</para>
+<mediaobject id="spacetime-graph">
+  <imageobject>
+    <imagedata fileref="images/massif-graph-sm.png" format="PNG"/>
+  </imageobject>
+  <textobject>
+    <phrase>Spacetime Graph</phrase>
+  </textobject>
+</mediaobject>
+
+<para>The graph is broken into several bands.  Most bands
+represent a single line of your program that does some heap
+allocation; each such band represents all the allocations and
+deallocations done from that line.  Up to twenty bands are shown;
+less significant allocation sites are merged into "other" and/or
+"OTHER" bands.  The accompanying text/HTML file produced by
+Massif has more detail about these heap allocation bands.  Then
+there are single bands for the stack(s) and heap admin
+bytes.</para>
+
+<formalpara>
+<title>Note:</title>
+<para>it's the height of a band that's important.  Don't let the
+ups and downs caused by other bands confuse you.  For example,
+the <computeroutput>read_alias_file</computeroutput> band in the
+example has the same height all the time it's in existence.</para>
+</formalpara>
+
+<para>The triangles on the x-axis show each point at which a
+memory census was taken.  These aren't necessarily evenly spread;
+Massif only takes a census when memory is allocated or
+deallocated.  The time on the x-axis is wallclock time, which is
+not ideal because you can get different graphs for different
+executions of the same program, due to random OS delays.  But
+it's not too bad, and it becomes less of a problem the longer a
+program runs.</para>
+
+<para>Massif takes censuses at an appropriate timescale; censuses
+take place less frequently as the program runs for longer.  There
+is no point having more than 100-200 censuses on a single
+graph.</para>
+
+<para>The graphs give a good overview of where your program's
+space use comes from, and how that varies over time.  The
+accompanying text/HTML file gives a lot more information about
+heap use.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="ms-manual.heapdetails" 
+       xreflabel="Details of Heap Allocations">
+<title>Details of Heap Allocations</title>
+
+<para>The text/HTML file contains information to help interpret
+the heap bands of the graph.  It also contains a lot of extra
+information about heap allocations that you don't see in the
+graph.</para>
+
+
+<para>Here's part of the information that accompanies the above
+graph.</para>
+
+<blockquote>
+<literallayout>== 0 ===========================</literallayout>
+
+<para>Heap allocation functions accounted for 50.8% of measured
+spacetime</para>
+
+<para>Called from:</para>
+<itemizedlist>
+  <listitem id="a401767D1"><para>
+    <ulink url="#b401767D1">22.1%</ulink>: 0x401767D0:
+    _nl_intern_locale_data (in /lib/i686/libc-2.3.2.so)</para>
+  </listitem>
+  <listitem id="a4017C394"><para>
+    <ulink url="#b4017C394">8.6%</ulink>: 0x4017C393:
+    read_alias_file (in /lib/i686/libc-2.3.2.so)</para>
+  </listitem>
+  <listitem>
+    <para>... ... <emphasis>(several entries omitted)</emphasis></para>
+  </listitem>
+  <listitem>
+    <para>and 6 other insignificant places</para>
+  </listitem>
+</itemizedlist>
+</blockquote>
+
+<para>The first part shows the total spacetime due to heap
+allocations, and the places in the program where most memory was
+allocated (Nb: if this program had been compiled with
+<computeroutput>-g</computeroutput>, actual line numbers would be
+given).  These places are sorted, from most significant to least,
+and correspond to the bands seen in the graph.  Insignificant
+sites (accounting for less than 0.5% of total spacetime) are
+omitted.</para>
+
+<para>That alone can be useful, but often isn't enough.  What if
+one of these functions was called from several different places
+in the program?  Which one of these is responsible for most of
+the memory used?  For
+<computeroutput>_nl_intern_locale_data()</computeroutput>, this
+question is answered by clicking on the 
+<ulink url="#b401767D1">22.1%</ulink> link, which takes us to the
+following part of the file:</para>
+
+<blockquote id="b401767D1">
+<literallayout>== 1 ===========================</literallayout>
+
+<para>Context accounted for <ulink url="#a401767D1">22.1%</ulink>
+of measured spacetime</para>
+
+<para><computeroutput> 0x401767D0: _nl_intern_locale_data (in
+/lib/i686/libc-2.3.2.so)</computeroutput></para>
+
+<para>Called from:</para>
+<itemizedlist>
+  <listitem id="a40176F96"><para>
+    <ulink url="#b40176F96">22.1%</ulink>: 0x40176F95:
+    _nl_load_locale_from_archive (in
+    /lib/i686/libc-2.3.2.so)</para>
+  </listitem>
+</itemizedlist>
+</blockquote>
+
+<para>At this level, we can see all the places from which
+<computeroutput>_nl_load_locale_from_archive()</computeroutput>
+was called such that it allocated memory at 0x401767D0.  (We can
+click on the top <ulink url="#a40176F96">22.1%</ulink> link to go back
+to the parent entry.)  At this level, we have moved beyond the
+information presented in the graph.  In this case, it is only
+called from one place.  We can again follow the link for more
+detail, moving to the following part of the file.</para>
+
+<blockquote>
+<literallayout>== 2 ===========================</literallayout>
+<para id="b40176F96">
+Context accounted for <ulink url="#a40176F96">22.1%</ulink> of
+measured spacetime</para>
+
+<para><computeroutput> 0x401767D0: _nl_intern_locale_data (in
+/lib/i686/libc-2.3.2.so)</computeroutput> <computeroutput>
+0x40176F95: _nl_load_locale_from_archive (in
+/lib/i686/libc-2.3.2.so)</computeroutput></para>
+
+<para>Called from:</para>
+<itemizedlist>
+  <listitem id="a40176185">
+    <para>22.1%: 0x40176184: _nl_find_locale (in
+    /lib/i686/libc-2.3.2.so)</para>
+  </listitem>
+</itemizedlist>
+</blockquote>
+
+<para>In this way we can dig deeper into the call stack, to work
+out exactly what sequence of calls led to some memory being
+allocated.  At this point, with a call depth of 3, the
+information runs out (thus the address of the child entry,
+0x40176184, isn't a link).  We could rerun the program with a
+greater <computeroutput>--depth</computeroutput> value if we
+wanted more information.</para>
+
+<para>Sometimes you will get a code location like this:</para>
+<programlisting><![CDATA[
+30.8% : 0xFFFFFFFF: ???]]></programlisting>
+
+<para>The code address isn't really 0xFFFFFFFF -- that's
+impossible.  This is what Massif does when it can't work out what
+the real code address is.</para>
+
+<para>Massif produces this information in a plain text file by
+default, or HTML with the
+<computeroutput>--format=html</computeroutput> option.  The plain
+text version obviously doesn't have the links, but a similar
+effect can be achieved by searching on the code addresses.  (In
+Vim, the '*' and '#' searches are ideal for this.)</para>
+
+
+<sect2 id="ms-manual.accuracy" xreflabel="Accuracy">
+<title>Accuracy</title>
+
+<para>The information should be pretty accurate.  Some
+approximations made might cause some allocation contexts to be
+attributed with less memory than they actually allocated, but the
+amounts should be minuscule.</para>
+
+<para>The heap admin spacetime figure is an approximation, as
+described above.  If anyone knows how to improve its accuracy,
+please let us know.</para>
+
+</sect2>
+
+</sect1>
+
+
+<sect1 id="ms-manual.options" xreflabel="Massif options">
+<title>Massif options</title>
+
+<para>Massif-specific options are:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para><computeroutput>--heap=no</computeroutput></para>
+    <para><computeroutput>--heap=yes</computeroutput> [default]</para>
+    <para>When enabled, profile heap usage in detail.  Without
+    it, the <filename>massif.pid.txt</filename> or
+    <filename>massif.pid.html</filename> file will be very
+    short.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--heap-admin=n</computeroutput>
+    [default: 8]</para>
+    <para>The number of admin bytes per block to use.  This can
+    only be an estimate of the average, since it may vary.  The
+    allocator used by <computeroutput>glibc</computeroutput>
+    requires somewhere between 4--15 bytes per block, depending
+    on various factors.  It also requires admin space for freed
+    blocks, although Massif does not count this.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--stacks=no</computeroutput></para>
+    <para><computeroutput>--stacks=yes</computeroutput> [default]</para>
+    <para>When enabled, include stack(s) in the profile.
+    Threaded programs can have multiple stacks.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--depth=n</computeroutput>
+    [default: 3]</para>
+    <para>Depth of call chains to present in the detailed heap
+    information.  Increasing it will give more information, but
+    Massif will run the program more slowly, using more memory,
+    and produce a bigger <computeroutput>.txt</computeroutput> /
+    <computeroutput>.hp</computeroutput> file.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--alloc-fn=name</computeroutput></para>
+    <para>Specify a function that allocates memory.  This is
+    useful for functions that are wrappers to
+    <computeroutput>malloc()</computeroutput>, which can fill up
+    the context information uselessly (and give very
+    uninformative bands on the graph).  Functions specified will
+    be ignored in contexts, i.e. treated as though they were
+    <computeroutput>malloc()</computeroutput>.  This option can
+    be specified multiple times on the command line, to name
+    multiple functions; an example invocation is shown after this
+    list.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--format=text</computeroutput> [default]</para>
+    <para><computeroutput>--format=html</computeroutput></para>
+    <para>Produce the detailed heap information in text or HTML
+    format.  The file suffix used will be either
+    <computeroutput>.txt</computeroutput> or
+    <computeroutput>.html</computeroutput>.</para>
+  </listitem>
+
+</itemizedlist>
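+
+<para>For example, to name two hypothetical
+<computeroutput>malloc()</computeroutput> wrappers with
+<computeroutput>--alloc-fn</computeroutput> (the function names
+here are invented):</para>
+<screen><![CDATA[
+% valgrind --tool=massif --alloc-fn=xmalloc --alloc-fn=my_alloc prog]]></screen>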
+
+</sect1>
+</chapter>
diff --git a/massif/docs/ms_main.html b/massif/docs/ms_main.html
deleted file mode 100644
index 87d1abc..0000000
--- a/massif/docs/ms_main.html
+++ /dev/null
@@ -1,331 +0,0 @@
-<html>
-  <head>
-    <title>Massif: a heap profiler</title>
-  </head>
-
-<body>
-<a name="ms-top"></a>
-<h2>7&nbsp; <b>Massif</b>: a heap profiler</h2>
-
-To use this tool, you must specify <code>--tool=massif</code>
-on the Valgrind command line.
-
-<a name="spaceprof"></a>
-<h3>7.1&nbsp; Heap profiling</h3>
-Massif is a heap profiler, i.e. it measures how much heap memory programs use.
-In particular, it can give you information about:
-<ul>
-  <li>Heap blocks;
-  <li>Heap administration blocks;
-  <li>Stack sizes.
-</ul>
-
-Heap profiling is useful to help you reduce the amount of memory your program
-uses.  On modern machines with virtual memory, this provides the following
-benefits:
-<ul>
-<li>It can speed up your program -- a smaller program will interact better
-    with your machine's caches, avoid paging, and so on.
-
-<li>If your program uses lots of memory, it will reduce the chance that it
-    exhausts your machine's swap space.
-</ul>
-
-Also, there are certain space leaks that aren't detected by traditional
-leak-checkers, such as Memcheck's.  That's because the memory isn't ever
-actually lost -- a pointer remains to it -- but it's not in use.  Programs
-that have leaks like this can unnecessarily increase the amount of memory
-they are using over time.
-<p>
-
-
-<a name="whyuse_heapprof"></a>
-<h3>7.2&nbsp; Why Use a Heap Profiler?</h3>
-
-Everybody knows how useful time profilers are for speeding up programs.  They
-are particularly useful because people are notoriously bad at predicting where
-are the bottlenecks in their programs.
-<p>
-But the story is different for heap profilers.  Some programming languages,
-particularly lazy functional languages like <a
-href="http://www.haskell.org">Haskell</a>, have quite sophisticated heap
-profilers.  But there are few tools as powerful for profiling C and C++
-programs.
-<p>
-Why is this?  Maybe it's because C and C++ programmers must think that
-they know where the memory is being allocated.  After all, you can see all the
-calls to <code>malloc()</code> and <code>new</code> and <code>new[]</code>,
-right?  But, in a big program, do you really know which heap allocations are
-being executed, how many times, and how large each allocation is?  Can you give
-even a vague estimate of the memory footprint for your program?  Do you know
-this for all the libraries your program uses?  What about administration bytes
-required by the heap allocator to track heap blocks -- have you thought about
-them?  What about the stack?  If you are unsure about any of these things,
-maybe you should think about heap profiling.  
-<p>
-Massif can tell you these things.
-<p>
-Or maybe it's because it's relatively easy to add basic heap profiling
-functionality into a program, to tell you how many bytes you have allocated for
-certain objects, or similar.  But this information might only be simple like
-total counts for the whole program's execution.  What about space usage at
-different points in the program's execution, for example?  And reimplementing
-heap profiling code for each project is a pain.
-<p>
-Massif can save you this effort.
-<p>
-
-
-<a name="overview"></a>
-<h3>7.3&nbsp; Overview</h3>
-First off, as for normal Valgrind use, you probably want to compile with
-debugging info (the <code>-g</code> flag).  But, as opposed to Memcheck,
-you probably <b>do</b> want to turn optimisation on, since you should profile
-your program as it will be normally run.
-<p>
-Then, run your program with <code>valgrind --tool=massif</code> in front of the
-normal command line invocation.  When the program finishes, Massif will print
-summary space statistics.  It also creates a graph representing the program's
-heap usage in a file called <code>massif.<i>pid</i>.ps</code>, which can
-be read by any PostScript viewer, such as Ghostview.
-<p>
-It also puts detailed information about heap consumption in a file file
-<code>massif.<i>pid</i>.txt</code> (text format) or
-<code>massif.<i>pid</i>.html</code> (HTML format), where
-<code><i>pid</i></code> is the program's process id.
-<p>
-
-
-<a name="basicresults"></a>
-<h3>7.4&nbsp; Basic Results of Profiling</h3>
-
-To gather heap profiling information about the program <code>prog</code>,
-type:
-<p>
-<blockquote>
-<code>valgrind --tool=massif prog</code>
-</blockquote>
-<p>
-The program will execute (slowly).  Upon completion, summary statistics
-that look like this will be printed:
-
-<pre>
-==27519== Total spacetime:   2,258,106 ms.B
-==27519== heap:              24.0%
-==27519== heap admin:         2.2%
-==27519== stack(s):          73.7%
-</pre>
-
-All measurements are done in <i>spacetime</i>, i.e. space (in bytes) multiplied
-by time (in milliseconds).  Note that because Massif slows a program down a
-lot, the actual spacetime figure is fairly meaningless;  it's the relative
-values that are interesting.
-<p>
-Which entries you see in the breakdown depends on the command line options
-given.  The above example measures all the possible parts of memory:
-<ul>
-<li>Heap: number of words allocated on the heap, via <code>malloc()</code>,
-    <code>new</code> and <code>new[]</code>.
-    <p>
-<li>Heap admin: each heap block allocated requires some administration data,
-    which lets the allocator track certain things about the block.  It is easy
-    to forget about this, and if your program allocates lots of small blocks,
-    it can add up.  This value is an estimate of the space required for this
-    administration data.
-    <p>
-<li>Stack(s): the spacetime used by the programs' stack(s).  (Threaded programs
-    can have multiple stacks.)  This includes signal handler stacks.
-    <p>
-</ul>
-<p>
-
-
-<a name="graphs"></a>
-<h3>7.5&nbsp; Spacetime Graphs</h3>
-As well as printing summary information, Massif also creates a file
-representing a spacetime graph, <code>massif.<i>pid</i>.hp</code>.  
-It will produce a file called <code>massif.<i>pid</i>.ps</code>, which can be
-viewed in a PostScript viewer.
-<p>
-Massif uses a program called <code>hp2ps</code> to convert the raw data into
-the PostScript graph.  It's distributed with Massif, but came originally
-from the <a href="http://haskell.cs.yale.edu/ghc/">Glasgow Haskell
-Compiler</a>.  You shouldn't need to worry about this at all.  However, if
-the graph creation fails for any reason, Massif tell you, and will leave
-behind a file named <code>massif.<i>pid</i>.hp</code>, containing the raw
-heap profiling data.
-<p>
-Here's an example graph:<br>
-    <img src="date.gif" alt="spacetime graph">
-<p>
-The graph is broken into several bands.  Most bands represent a single line of
-your program that does some heap allocation;  each such band represents all
-the allocations and deallocations done from that line.  Up to twenty bands are
-shown; less significant allocation sites are merged into "other" and/or "OTHER"
-bands.  The accompanying text/HTML file produced by Massif has more detail
-about these heap allocation bands.  Then there are single bands for the
-stack(s) and heap admin bytes.
-<p>
-Note: it's the height of a band that's important.  Don't let the ups and downs
-caused by other bands confuse you.  For example, the
-<code>read_alias_file</code> band in the example has the same height all the
-time it's in existence.
-<p>
-The triangles on the x-axis show each point at which a memory census was taken.
-These aren't necessarily evenly spread;  Massif only takes a census when
-memory is allocated or deallocated.  The time on the x-axis is wallclock
-time, which is not ideal because you can get different graphs for different
-executions of the same program, due to random OS delays.  But it's not too
-bad, and it becomes less of a problem the longer a program runs.
-<p>
-Massif takes censuses at an appropriate timescale;  censuses take place less
-frequently as the program runs for longer.  There is no point having more
-than 100-200 censuses on a single graph.
-<p>
-The graphs give a good overview of where your program's space use comes from,
-and how that varies over time.  The accompanying text/HTML file gives a lot
-more information about heap use.
-
-<a name="detailsofheap"></a>
-<h3>7.6&nbsp; Details of Heap Allocations</h3>
-
-The text/HTML file contains information to help interpret the heap bands of the
-graph.  It also contains a lot of extra information about heap allocations that you don't see in the graph.
-<p>
-Here's part of the information that accompanies the above graph.
-
-<hr>
-== 0 ===========================<br>
-Heap allocation functions accounted for 50.8% of measured spacetime<br>
-<p>
-Called from:
-<ul>
-<li><a name="a401767D1"></a><a href="#b401767D1">22.1%</a>: 0x401767D0: _nl_intern_locale_data (in /lib/i686/libc-2.3.2.so)
-<li><a name="a4017C394"></a><a href="#b4017C394"> 8.6%</a>: 0x4017C393: read_alias_file (in /lib/i686/libc-2.3.2.so)
-
-<li><i>(several entries omitted)</i>
-
-<li>and 6 other insignificant places</li>
-</ul>
-<hr>
-The first part shows the total spacetime due to heap allocations, and the
-places in the program where most memory was allocated (nb: if this program had
-been compiled with <code>-g</code>, actual line numbers would be given).  These
-places are sorted, from most significant to least, and correspond to the bands
-seen in the graph.  Insignificant sites (accounting for less than 0.5% of total
-spacetime) are omitted.
-<p>
-That alone can be useful, but often isn't enough.  What if one of these
-functions was called from several different places in the program?  Which one
-of these is responsible for most of the memory used?  For
-<code>_nl_intern_locale_data()</code>, this question is answered by clicking on
-the <a href="#b401767D1">22.1%</a> link, which takes us to the following part
-of the file.
-
-<hr>
-<p>== 1 ===========================<br>
-<a name="b401767D1"></a>Context accounted for <a href="#a401767D1">22.1%</a> of measured spacetime<br>
-  &nbsp;&nbsp;0x401767D0: _nl_intern_locale_data (in /lib/i686/libc-2.3.2.so)<br>
-<p>
-Called from:
-<ul>
-<li><a name="a40176F96"></a><a href="#b40176F96">22.1%</a>: 0x40176F95: _nl_load_locale_from_archive (in /lib/i686/libc-2.3.2.so)
-</ul>
-<hr>
-
-At this level, we can see all the places from which
-<code>_nl_load_locale_from_archive()</code> was called such that it allocated
-memory at 0x401767D0.  (We can click on the top <a href="#a40176F96">22.1%</a>
-link to go back to the parent entry.)  At this level, we have moved beyond the
-information presented in the graph.  In this case, it is only called from one
-place.  We can again follow the link for more detail, moving to the following
-part of the file.
-
-<hr>
-<p>== 2 ===========================<br>
-<a name="b40176F96"></a>Context accounted for <a href="#a40176F96">22.1%</a> of measured spacetime<br>
-  &nbsp;&nbsp;0x401767D0: _nl_intern_locale_data (in /lib/i686/libc-2.3.2.so)<br>
-  &nbsp;&nbsp;0x40176F95: _nl_load_locale_from_archive (in /lib/i686/libc-2.3.2.so)<br>
-<p>
-Called from:
-<ul>
-<li><a name="a40176185"></a>22.1%: 0x40176184: _nl_find_locale (in /lib/i686/libc-2.3.2.so)
-</ul>
-<hr>
-
-In this way we can dig deeper into the call stack, to work out exactly what
-sequence of calls led to some memory being allocated.  At this point, with a
-call depth of 3, the information runs out (thus the address of the child entry,
-0x40176184, isn't a link).  We could rerun the program with a greater
-<code>--depth</code> value if we wanted more information.
-<p>
-Sometimes you will get a code location like this:
-<ul>
-<li>30.8% : 0xFFFFFFFF: ???
-</ul>
-The code address isn't really 0xFFFFFFFF -- that's impossible.  This is what
-Massif does when it can't work out what the real code address is.
-<p>
-Massif produces this information in a plain text file by default, or HTML with
-the <code>--format=html</code> option.  The plain text version obviously
-doesn't have the links, but a similar effect can be achieved by searching on
-the code addresses.  (In Vim, the '*' and '#' searches are ideal for this.)
-
-
-<a name="massifoptions"></a>
-<h3>7.7&nbsp; Massif options</h3>
-
-Massif-specific options are:
-
-<ul>
-<li><code>--heap=no</code><br>
-    <code>--heap=yes</code> [default]<br>
-    When enabled, profile heap usage in detail.  Without it, the 
-    <code>massif.<i>pid</i>.txt</code> or
-    <code>massif.<i>pid</i>.html</code> will be very short.
-    <p>
-<li><code>--heap-admin=<i>n</i></code> [default: 8]<br>
-    The number of admin bytes per block to use.  This can only be an
-    estimate of the average, since it may vary.  The allocator used by
-    <code>glibc</code> requires somewhere between 4--15 bytes per block,
-    depending on various factors.  It also requires admin space for freed
-    blocks, although Massif does not count this.
-    <p>
-<li><code>--stacks=no</code><br>
-    <code>--stacks=yes</code> [default]<br>
-    When enabled, include stack(s) in the profile.  Threaded programs can
-    have multiple stacks.
-    <p>
-<li><code>--depth=<i>n</i></code> [default: 3]<br>
-    Depth of call chains to present in the detailed heap information.
-    Increasing it will give more information, but Massif will run the program
-    more slowly, using more memory, and produce a bigger
-    <code>.txt</code>/<code>.hp</code> file.
-    <p>
-<li><code>--alloc-fn=<i>name</i></code><br>
-    Specify a function that allocates memory.  This is useful for functions
-    that are wrappers to <code>malloc()</code>, which can fill up the context
-    information uselessly (and give very uninformative bands on the graph).
-    Functions specified will be ignored in contexts, i.e. treated as though
-    they were <code>malloc()</code>.  This option can be specified multiple
-    times on the command line, to name multiple functions.
-    <p>
-<li><code>--format=text</code> [default]<br>
-    <code>--format=html</code><br>
-    Produce the detailed heap information in text or HTML format.  The file
-    suffix used will be either <code>.txt</code> or <code>.html</code>.
-    <p>
-</ul>
-  
-<a name="accuracy"></a>
-<h3>7.8&nbsp; Accuracy</h3>
-The information should be pretty accurate.  Some approximations made might
-cause some allocation contexts to be attributed with less memory than they
-actually allocated, but the amounts should be miniscule.
-<p>
-The heap admin spacetime figure is an approximation, as described above.  If
-anyone knows how to improve its accuracy, please let us know.
-
-</body>
-</html>
-
diff --git a/memcheck/docs/Makefile.am b/memcheck/docs/Makefile.am
index 8d9e7c8..e620710 100644
--- a/memcheck/docs/Makefile.am
+++ b/memcheck/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = mc_main.html mc_techdocs.html
+EXTRA_DIST = mc-manual.xml mc-tech-docs.xml
diff --git a/memcheck/docs/mc-manual.xml b/memcheck/docs/mc-manual.xml
new file mode 100644
index 0000000..b540bf8
--- /dev/null
+++ b/memcheck/docs/mc-manual.xml
@@ -0,0 +1,1100 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="mc-manual" xreflabel="Memcheck: a heavyweight memory checker">
+<title>Memcheck: a heavyweight memory checker</title>
+
+<para>To use this tool, you must specify
+<computeroutput>--tool=memcheck</computeroutput> on the Valgrind
+command line.</para>
+
+
+<sect1 id="mc-manual.bugs" 
+       xreflabel="Kinds of bugs that Memcheck can find">
+<title>Kinds of bugs that Memcheck can find</title>
+
+<para>Memcheck is Valgrind-1.0.X's checking mechanism bundled up
+into a tool.  All reads and writes of memory are checked, and
+calls to malloc/new/free/delete are intercepted. As a result,
+Memcheck can detect the following problems:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>Use of uninitialised memory</para>
+  </listitem>
+  <listitem>
+    <para>Reading/writing memory after it has been free'd</para>
+  </listitem>
+  <listitem>
+    <para>Reading/writing off the end of malloc'd blocks</para>
+  </listitem>
+  <listitem>
+    <para>Reading/writing inappropriate areas on the stack</para>
+  </listitem>
+  <listitem>
+    <para>Memory leaks -- where pointers to malloc'd blocks are
+   lost forever</para>
+  </listitem>
+  <listitem>
+    <para>Mismatched use of malloc/new/new [] vs
+    free/delete/delete []</para>
+  </listitem>
+  <listitem>
+    <para>Overlapping <computeroutput>src</computeroutput> and
+    <computeroutput>dst</computeroutput> pointers in
+    <computeroutput>memcpy()</computeroutput> and related
+    functions</para>
+  </listitem>
+  <listitem>
+    <para>Some misuses of the POSIX pthreads API</para>
+  </listitem>
+</itemizedlist>
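+
+<para>As a concrete illustration -- an invented fragment, not taken
+from the Valgrind sources -- the following few lines contain several
+of the problems listed above:</para>
+<programlisting><![CDATA[
+#include <stdio.h>
+#include <stdlib.h>
+int main ( void )
+{
+   int*  a = malloc(10 * sizeof(int));
+   void* p = malloc(7);
+   if (a[5] == 42)      /* use of uninitialised memory */
+      printf("hello\n");
+   a[10] = 1;           /* write off the end of the malloc'd block */
+   free(a);
+   a[0] = 2;            /* write to memory after it has been free'd */
+   p = NULL;            /* the only pointer to the 7-byte block is
+                           lost, so the block is leaked */
+   return 0;
+}]]></programlisting>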
+
+</sect1>
+
+
+
+<sect1 id="mc-manual.flags" 
+       xreflabel="Command-line flags specific to memcheck">
+<title>Command-line flags specific to memcheck</title>
+
+<itemizedlist>
+  <listitem>
+    <para><computeroutput>--leak-check=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--leak-check=yes</computeroutput></para>
+    <para>When enabled, search for memory leaks when the client
+    program finishes.  A memory leak means a malloc'd block,
+    which has not yet been free'd, but to which no pointer can be
+    found.  Such a block can never be free'd by the program,
+    since no pointer to it exists.  Leak checking is disabled by
+    default because it tends to generate dozens of error
+    messages.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--show-reachable=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--show-reachable=yes</computeroutput></para>
+    <para>When disabled, the memory leak detector only shows
+    blocks to which it cannot find a pointer at all, or to which
+    it can only find a pointer to the middle.  These blocks are
+    prime candidates for memory leaks.  When enabled, the leak
+    detector also reports on blocks which it could find a pointer
+    to.  Your program could, at least in principle, have freed
+    such blocks before exit.  Contrast this to blocks for which
+    no pointer, or only an interior pointer could be found: they
+    are more likely to indicate memory leaks, because you do not
+    actually have a pointer to the start of the block which you
+    can hand to <computeroutput>free</computeroutput>, even if
+    you wanted to.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--leak-resolution=low</computeroutput>
+    [default]</para>
+    <para><computeroutput>--leak-resolution=med</computeroutput></para>
+    <para><computeroutput>--leak-resolution=high</computeroutput></para>
+    <para>When doing leak checking, determines how willing
+    Memcheck is to consider different backtraces to be the same.
+    When set to <computeroutput>low</computeroutput>, the
+    default, only the first two entries need match.  When
+    <computeroutput>med</computeroutput>, four entries have to
+    match.  When <computeroutput>high</computeroutput>, all
+    entries need to match.</para>
+    <para>For hardcore leak debugging, you probably want to use
+    <computeroutput>--leak-resolution=high</computeroutput>
+    together with
+    <computeroutput>--num-callers=40</computeroutput> or some
+    such large number.  Note however that this can give an
+    overwhelming amount of information, which is why the defaults
+    are 4 callers and low-resolution matching.</para>
+    <para>Note that the
+    <computeroutput>--leak-resolution=</computeroutput> setting
+    does not affect Memcheck's ability to find leaks.  It only
+    changes how the results are presented.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--freelist-vol=&lt;number></computeroutput>
+    [default: 1000000]</para>
+    <para>When the client program releases memory using free (in
+    <literal>C</literal>) or delete (<literal>C++</literal>),
+    that memory is not immediately made available for
+    re-allocation.  Instead it is marked inaccessible and placed
+    in a queue of freed blocks.  The purpose is to delay the
+    point at which freed-up memory comes back into circulation.
+    This increases the chance that Memcheck will be able to
+    detect invalid accesses to blocks for some significant period
+    of time after they have been freed.</para>
+    <para>This flag specifies the maximum total size, in bytes,
+    of the blocks in the queue.  The default value is one million
+    bytes.  Increasing this increases the total amount of memory
+    used by Memcheck but may detect invalid uses of freed blocks
+    which would otherwise go undetected.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--workaround-gcc296-bugs=no</computeroutput>
+    [default]</para>
+    <para><computeroutput>--workaround-gcc296-bugs=yes</computeroutput></para>
+    <para>When enabled, assume that reads and writes some small
+    distance below the stack pointer
+    <computeroutput>%esp</computeroutput> are due to bugs in gcc
+    2.96, and do not report them.  The "small distance" is 256
+    bytes by default.  Note that gcc 2.96 is the default compiler
+    on some popular Linux distributions (RedHat 7.X, Mandrake)
+    and so you may well need to use this flag.  Do not use it if
+    you do not have to, as it can cause real errors to be
+    overlooked.  Another option is to use a gcc/g++ which does
+    not generate accesses below the stack pointer.  2.95.3 seems
+    to be a good choice in this respect.</para>
+    <para>Unfortunately (27 Feb 02) it looks like g++ 3.0.4 has a
+    similar bug, so you may need to issue this flag if you use
+    3.0.4.  A while later (early Apr 02) this was confirmed as a
+    scheduling bug in g++-3.0.4.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--partial-loads-ok=yes</computeroutput>
+    [the default]</para>
+    <para><computeroutput>--partial-loads-ok=no</computeroutput></para>
+    <para>Controls how Memcheck handles word (4-byte) loads from
+    addresses for which some bytes are addressible and others are
+    not.  When <computeroutput>yes</computeroutput> (the
+    default), such loads do not elicit an address error.
+    Instead, the loaded V bytes corresponding to the illegal
+    addresses indicate undefined, and those corresponding to
+    legal addresses are loaded from shadow memory, as usual.</para>
+    <para>When <computeroutput>no</computeroutput>, loads from
+    partially invalid addresses are treated the same as loads
+    from completely invalid addresses: an illegal-address error
+    is issued, and the resulting V bytes indicate valid data.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--cleanup=no</computeroutput></para>
+    <para><computeroutput>--cleanup=yes</computeroutput> [default]</para>
+    <para><command>This is a flag to help debug Valgrind itself.
+    It is of no use to end-users.</command> When enabled, various
+    improvements are applied to the post-instrumented intermediate
+    code, aimed at removing redundant value checks.</para>
+  </listitem>
+
+</itemizedlist>
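+
+<para>For example (an illustrative command line, not taken from the
+original manual), a thorough leak hunt might combine several of the
+flags above with the core
+<computeroutput>--num-callers</computeroutput> flag:</para>
+<programlisting><![CDATA[
+valgrind --tool=memcheck --leak-check=yes --leak-resolution=high \
+         --show-reachable=yes --num-callers=40 prog
+]]></programlisting>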
+</sect1>
+
+
+<sect1 id="mc-manual.errormsgs"
+       xreflabel="Explanation of error messages from Memcheck">
+<title>Explanation of error messages from Memcheck</title>
+
+<para>Despite considerable sophistication under the hood,
+Memcheck can only really detect two kinds of errors: use of
+illegal addresses and use of undefined values.  Nevertheless,
+this is enough to help you discover all sorts of
+memory-management nasties in your code.  This section presents a
+quick summary of what error messages mean.  The precise behaviour
+of the error-checking machinery is described in <xref
+linkend="mc-manual.machine"/>.</para>
+
+
+<sect2 id="mc-manual.badrw" 
+       xreflabel="Illegal read / Illegal write errors">
+<title>Illegal read / Illegal write errors</title>
+
+<para>For example:</para>
+<programlisting><![CDATA[
+Invalid read of size 4
+   at 0x40F6BBCC: (within /usr/lib/libpng.so.2.1.0.9)
+   by 0x40F6B804: (within /usr/lib/libpng.so.2.1.0.9)
+   by 0x40B07FF4: read_png_image__FP8QImageIO (kernel/qpngio.cpp:326)
+   by 0x40AC751B: QImageIO::read() (kernel/qimage.cpp:3621)
+   Address 0xBFFFF0E0 is not stack'd, malloc'd or free'd
+]]></programlisting>
+
+<para>This happens when your program reads or writes memory at a
+place which Memcheck reckons it shouldn't.  In this example, the
+program did a 4-byte read at address 0xBFFFF0E0, somewhere within
+the system-supplied library libpng.so.2.1.0.9, which was called
+from somewhere else in the same library, called from line 326 of
+<filename>qpngio.cpp</filename>, and so on.</para>
+
+<para>Memcheck tries to establish what the illegal address might
+relate to, since that's often useful.  So, if it points into a
+block of memory which has already been freed, you'll be informed
+of this, and also where the block was free'd.  Likewise, if it
+should turn out to be just off the end of a malloc'd block, a
+common result of off-by-one errors in array subscripting, you'll
+be informed of this fact, and also where the block was
+malloc'd.</para>
+
+<para>In this example, Memcheck can't identify the address.
+Actually the address is on the stack, but, for some reason, this
+is not a valid stack address -- it is below the stack pointer,
+<literal>%esp</literal>, and that isn't allowed.  In this
+particular case it's probably caused by gcc generating invalid
+code, a known bug in various flavours of gcc.</para>
+
+<para>Note that Memcheck only tells you that your program is
+about to access memory at an illegal address.  It can't stop the
+access from happening.  So, if your program makes an access which
+normally would result in a segmentation fault, your program will
+still suffer the same fate -- but you will get a message from
+Memcheck immediately prior to this.  In this particular example,
+reading junk on the stack is non-fatal, and the program stays
+alive.</para>
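+
+<para>As an illustration -- an invented fragment, not the code which
+produced the message above -- an off-by-one array subscript is a
+typical way to provoke such an error:</para>
+<programlisting><![CDATA[
+#include <stdlib.h>
+int main ( void )
+{
+   int* a = malloc(10 * sizeof(int));
+   int  i;
+   for (i = 0; i <= 10; i++)   /* should be i < 10 */
+      a[i] = i;                /* last iteration writes off the end */
+   free(a);
+   return 0;
+}]]></programlisting>
+<para>Here Memcheck would report an invalid write of size 4, and note
+that the address is just past the end of a malloc'd block.</para>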
+
+</sect2>
+
+
+
+<sect2 id="mc-manual.uninitvals" 
+       xreflabel="Use of uninitialised values">
+<title>Use of uninitialised values</title>
+
+<para>For example:</para>
+<programlisting><![CDATA[
+Conditional jump or move depends on uninitialised value(s)
+   at 0x402DFA94: _IO_vfprintf (_itoa.h:49)
+   by 0x402E8476: _IO_printf (printf.c:36)
+   by 0x8048472: main (tests/manuel1.c:8)
+   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
+]]></programlisting>
+
+<para>An uninitialised-value use error is reported when your
+program uses a value which hasn't been initialised -- in other
+words, is undefined.  Here, the undefined value is used somewhere
+inside the printf() machinery of the C library.  This error was
+reported when running the following small program:</para>
+<programlisting><![CDATA[
+int main()
+{
+  int x;
+  printf ("x = %d\n", x);
+}]]></programlisting>
+
+<para>It is important to understand that your program can copy
+around junk (uninitialised) data to its heart's content.
+Memcheck observes this and keeps track of the data, but does not
+complain.  A complaint is issued only when your program attempts
+to make use of uninitialised data.  In this example, x is
+uninitialised.  Memcheck observes the value being passed to
+<literal>_IO_printf</literal> and thence to
+<literal>_IO_vfprintf</literal>, but makes no comment.  However,
+_IO_vfprintf has to examine the value of x so it can turn it into
+the corresponding ASCII string, and it is at this point that
+Memcheck complains.</para>
+
+<para>Sources of uninitialised data tend to be:</para>
+<itemizedlist>
+  <listitem>
+    <para>Local variables in procedures which have not been
+    initialised, as in the example above.</para>
+  </listitem>
+  <listitem>
+    <para>The contents of malloc'd blocks, before you write
+    something there.  In C++, the new operator is a wrapper round
+    malloc, so if you create an object with new, its fields will
+    be uninitialised until you (or the constructor) fill them in,
+    which is only Right and Proper.</para>
+  </listitem>
+</itemizedlist>
+
+</sect2>
+
+
+
+<sect2 id="mc-manual.badfrees" xreflabel="Illegal frees">
+<title>Illegal frees</title>
+
+<para>For example:</para>
+<programlisting><![CDATA[
+Invalid free()
+   at 0x4004FFDF: free (vg_clientmalloc.c:577)
+   by 0x80484C7: main (tests/doublefree.c:10)
+   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
+   by 0x80483B1: (within tests/doublefree)
+   Address 0x3807F7B4 is 0 bytes inside a block of size 177 free'd
+   at 0x4004FFDF: free (vg_clientmalloc.c:577)
+   by 0x80484C7: main (tests/doublefree.c:10)
+   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
+   by 0x80483B1: (within tests/doublefree)
+]]></programlisting>
+
+<para>Memcheck keeps track of the blocks allocated by your
+program with malloc/new, so it knows exactly whether or not
+the argument to free/delete is legitimate.  Here, this
+test program has freed the same block twice.  As with the illegal
+read/write errors, Memcheck attempts to make sense of the address
+free'd.  If, as here, the address is one which has previously
+been freed, you will be told that -- making duplicate frees of the
+same block easy to spot.</para>
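+
+<para>A minimal sketch of the kind of code which provokes this
+message (not the actual
+<filename>tests/doublefree.c</filename>):</para>
+<programlisting><![CDATA[
+#include <stdlib.h>
+int main ( void )
+{
+   char* p = malloc(177);
+   free(p);
+   free(p);    /* the second free is reported as an Invalid free() */
+   return 0;
+}]]></programlisting>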
+
+</sect2>
+
+
+<sect2 id="mc-manual.rudefn" 
+       xreflabel="When a block is freed with an inappropriate deallocation
+function">
+<title>When a block is freed with an inappropriate deallocation
+function</title>
+
+<para>In the following example, a block allocated with
+<computeroutput>new[]</computeroutput> has wrongly been
+deallocated with <computeroutput>free</computeroutput>:</para>
+<programlisting><![CDATA[
+Mismatched free() / delete / delete []
+   at 0x40043249: free (vg_clientfuncs.c:171)
+   by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)
+   by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)
+   by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)
+   Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd
+   at 0x4004318C: __builtin_vec_new (vg_clientfuncs.c:152)
+   by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)
+   by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)
+   by 0x4C21788F: OLEFilter::convert(QCString const &) (olefilter.cc:272)
+]]></programlisting>
+
+<para>The following was told to me by the KDE 3 developers.  I
+didn't know any of it myself.  They also implemented the check
+itself.</para>
+
+<para>In <literal>C++</literal> it's important to deallocate
+memory in a way compatible with how it was allocated.  The deal
+is:</para>
+<itemizedlist>
+  <listitem>
+    <para>If allocated with
+    <computeroutput>malloc</computeroutput>,
+    <computeroutput>calloc</computeroutput>,
+    <computeroutput>realloc</computeroutput>,
+    <computeroutput>valloc</computeroutput> or
+    <computeroutput>memalign</computeroutput>, you must
+    deallocate with <computeroutput>free</computeroutput>.</para>
+  </listitem>
+  <listitem>
+    <para>If allocated with
+    <computeroutput>new[]</computeroutput>, you must deallocate
+    with <computeroutput>delete[]</computeroutput>.</para>
+  </listitem>
+  <listitem>
+   <para>If allocated with <computeroutput>new</computeroutput>,
+   you must deallocate with
+   <computeroutput>delete</computeroutput>.</para>
+  </listitem>
+</itemizedlist>
+
+<para>The worst thing is that on Linux apparently it doesn't
+matter if you do muddle these up, and it all seems to work ok,
+but the same program may then crash on a different platform,
+Solaris for example.  So it's best to fix it properly.  According
+to the KDE folks "it's amazing how many C++ programmers don't
+know this".</para>
+
+<para>Pascal Massimino adds the following clarification:
+<computeroutput>delete[]</computeroutput> must be paired with a
+<computeroutput>new[]</computeroutput> because the compiler
+stores the size of the array and the pointer-to-member to the
+destructor of the array's content just before the pointer that
+is actually returned.  This implies a variable-sized overhead in
+what's returned by <computeroutput>new</computeroutput> or
+<computeroutput>new[]</computeroutput>.  It is rather surprising
+how robust compilers
+<footnote>
+  <para>[Ed: runtime-support libraries ?]</para>
+</footnote>
+are to mismatches between <computeroutput>new</computeroutput> /
+<computeroutput>delete</computeroutput> and
+<computeroutput>new[]</computeroutput> /
+<computeroutput>delete[]</computeroutput>.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-manual.badperm" 
+       xreflabel="Passing system call parameters with 
+       inadequate read/write permissions">
+<title>Passing system call parameters with inadequate read/write
+permissions</title>
+
+<para>Memcheck checks all parameters to system calls.  If a
+system call needs to read from a buffer provided by your program,
+Memcheck checks that the entire buffer is addressible and has
+valid data, ie, it is readable.  And if the system call needs to
+write to a user-supplied buffer, Memcheck checks that the buffer
+is addressible.  After the system call, Memcheck updates its
+administrative information to precisely reflect any changes in
+memory permissions caused by the system call.</para>
+
+<para>Here's an example of a system call with an invalid
+parameter:</para>
+<programlisting><![CDATA[
+#include <stdlib.h>
+#include <unistd.h>
+int main( void )
+{
+  char* arr = malloc(10);
+  (void) write( 1 /* stdout */, arr, 10 );
+  return 0;
+}]]></programlisting>
+
+<para>You get this complaint ...</para>
+<programlisting><![CDATA[
+Syscall param write(buf) contains uninitialised or unaddressable byte(s)
+   at 0x4035E072: __libc_write
+   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
+   by 0x80483B1: (within tests/badwrite)
+   by <bogus frame pointer> ???
+   Address 0x3807E6D0 is 0 bytes inside a block of size 10 alloc'd
+   at 0x4004FEE6: malloc (ut_clientmalloc.c:539)
+   by 0x80484A0: main (tests/badwrite.c:6)
+   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
+   by 0x80483B1: (within tests/badwrite)
+]]></programlisting>
+
+<para>... because the program has tried to write uninitialised
+junk from the malloc'd block to the standard output.</para>
+
+</sect2>
+
+
+<sect2 id="mc-manual.overlap" 
+       xreflabel="Overlapping source and destination blocks">
+<title>Overlapping source and destination blocks</title>
+
+<para>The following C library functions copy some data from one
+memory block to another (or something similar):
+<computeroutput>memcpy()</computeroutput>,
+<computeroutput>strcpy()</computeroutput>,
+<computeroutput>strncpy()</computeroutput>,
+<computeroutput>strcat()</computeroutput>,
+<computeroutput>strncat()</computeroutput>. 
+The blocks pointed to by their
+<computeroutput>src</computeroutput> and
+<computeroutput>dst</computeroutput> pointers aren't allowed to
+overlap.  Memcheck checks for this.</para>
+
+<para>For example:</para>
+<programlisting><![CDATA[
+==27492== Source and destination overlap in memcpy(0xbffff294, 0xbffff280, 21)
+==27492==    at 0x40026CDC: memcpy (mc_replace_strmem.c:71)
+==27492==    by 0x804865A: main (overlap.c:40)
+==27492==    by 0x40246335: __libc_start_main (../sysdeps/generic/libc-start.c:129)
+==27492==    by 0x8048470: (within /auto/homes/njn25/grind/head6/memcheck/tests/overlap)
+==27492== 
+]]></programlisting>
+
+<para>You don't want the two blocks to overlap because one of
+them could get partially trashed by the copying.</para>
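+
+<para>A minimal sketch which triggers the check (not the actual
+<filename>overlap.c</filename> test):</para>
+<programlisting><![CDATA[
+#include <string.h>
+int main ( void )
+{
+   char x[40];
+   memset(x, 'x', sizeof(x));
+   memcpy(x + 19, x, 20);   /* source and destination share a byte */
+   return 0;
+}]]></programlisting>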
+
+</sect2>
+
+
+</sect1>
+
+
+
+<sect1 id="mc-manual.suppfiles" xreflabel="Writing suppressions files">
+<title>Writing suppressions files</title>
+
+<para>The basic suppression format is described in 
+<xref linkend="manual-core.suppress"/>.</para>
+
+<para>The suppression (2nd) line should have the form:</para>
+<programlisting><![CDATA[
+Memcheck:suppression_type]]></programlisting>
+
+<para>Or, since some of the suppressions are shared with Addrcheck:</para>
+<programlisting><![CDATA[
+Memcheck,Addrcheck:suppression_type]]></programlisting>
+
+<para>The Memcheck suppression types are as follows:</para>
+
+<itemizedlist>
+  <listitem>
+    <para><computeroutput>Value1</computeroutput>, 
+    <computeroutput>Value2</computeroutput>,
+    <computeroutput>Value4</computeroutput>,
+    <computeroutput>Value8</computeroutput>,
+    <computeroutput>Value16</computeroutput>,
+    meaning an uninitialised-value error when
+    using a value of 1, 2, 4, 8 or 16 bytes.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>Cond</computeroutput> (or its old
+    name, <computeroutput>Value0</computeroutput>), meaning use
+    of an uninitialised CPU condition code.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>Addr1</computeroutput>,
+    <computeroutput>Addr2</computeroutput>, 
+    <computeroutput>Addr4</computeroutput>,
+    <computeroutput>Addr8</computeroutput>,
+    <computeroutput>Addr16</computeroutput>, 
+    meaning an invalid address during a
+    memory access of 1, 2, 4, 8 or 16 bytes respectively.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>Param</computeroutput>, meaning an
+    invalid system call parameter error.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>Free</computeroutput>, meaning an
+    invalid or mismatching free.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>Overlap</computeroutput>, meaning a
+    <computeroutput>src</computeroutput> /
+    <computeroutput>dst</computeroutput> overlap in
+    <computeroutput>memcpy()</computeroutput> or a similar
+    function.</para>
+  </listitem>
+
+  <listitem>
+    <para>Last but not least, you can suppress leak reports with
+    <computeroutput>Leak</computeroutput>.  Leak suppression was
+    added in valgrind-1.9.3, I believe.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>The extra information line: for Param errors, this is the name
+of the offending system call parameter.  No other error kinds
+have this extra line.</para>
+
+<para>The first line of the calling context: for Value and Addr
+errors, it is either the name of the function in which the error
+occurred, or, failing that, the full path of the .so file or
+executable containing the error location.  For Free errors, it is
+the name of the function doing the freeing (eg,
+<computeroutput>free</computeroutput>,
+<computeroutput>__builtin_vec_delete</computeroutput>, etc).  For
+Overlap errors, it is the name of the function with the overlapping
+arguments (eg.  <computeroutput>memcpy()</computeroutput>,
+<computeroutput>strcpy()</computeroutput>, etc).</para>
+
+<para>Lastly, there's the rest of the calling context.</para>
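+
+<para>Putting the pieces together, a complete suppression entry might
+look like the following.  This is only an illustration of the layout
+described here and in <xref linkend="manual-core.suppress"/>; the
+suppression name and the object name are invented, and the error
+being suppressed is the write(buf) Param error shown earlier.</para>
+<programlisting><![CDATA[
+{
+   suppress-uninitialised-write-buffer
+   Memcheck:Param
+   write(buf)
+   fun:__libc_write
+   obj:/lib/i686/libc-2.3.2.so
+}]]></programlisting>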
+
+</sect1>
+
+
+
+<sect1 id="mc-manual.machine" 
+       xreflabel="Details of Memcheck's checking machinery">
+<title>Details of Memcheck's checking machinery</title>
+
+<para>Read this section if you want to know, in detail, exactly
+what and how Memcheck is checking.</para>
+
+
+<sect2 id="mc-manual.value" xreflabel="Valid-value (V) bit">
+<title>Valid-value (V) bits</title>
+
+<para>It is simplest to think of Memcheck implementing a
+synthetic Intel x86 CPU which is identical to a real CPU, except
+for one crucial detail.  Every bit (literally) of data processed,
+stored and handled by the real CPU has, in the synthetic CPU, an
+associated "valid-value" bit, which says whether or not the
+accompanying bit has a legitimate value.  In the discussions
+which follow, this bit is referred to as the V (valid-value)
+bit.</para>
+
+<para>Each byte in the system therefore has 8 V bits which
+follow it wherever it goes.  For example, when the CPU loads a
+word-size item (4 bytes) from memory, it also loads the
+corresponding 32 V bits from a bitmap which stores the V bits for
+the process' entire address space.  If the CPU should later write
+the whole or some part of that value to memory at a different
+address, the relevant V bits will be stored back in the V-bit
+bitmap.</para>
+
+<para>In short, each bit in the system has an associated V bit,
+which follows it around everywhere, even inside the CPU.  Yes,
+the CPU's (integer and <computeroutput>%eflags</computeroutput>)
+registers have their own V bit vectors.</para>
+
+<para>Copying values around does not cause Memcheck to check for,
+or report on, errors.  However, when a value is used in a way
+which might conceivably affect the outcome of your program's
+computation, the associated V bits are immediately checked.  If
+any of these indicate that the value is undefined, an error is
+reported.</para>
+
+<para>Here's an (admittedly nonsensical) example:</para>
+<programlisting><![CDATA[
+int i, j;
+int a[10], b[10];
+for ( i = 0; i < 10; i++ ) {
+  j = a[i];
+  b[i] = j;
+}]]></programlisting>
+
+<para>Memcheck emits no complaints about this, since it merely
+copies uninitialised values from
+<computeroutput>a[]</computeroutput> into
+<computeroutput>b[]</computeroutput>, and doesn't use them in any
+way.  However, if the loop is changed to:</para>
+<programlisting><![CDATA[
+for ( i = 0; i < 10; i++ ) {
+  j += a[i];
+}
+if ( j == 77 ) 
+  printf("hello there\n");
+]]></programlisting>
+
+<para>then Valgrind will complain, at the
+<computeroutput>if</computeroutput>, that the condition depends
+on uninitialised values.  Note that it <command>doesn't</command>
+complain at the <computeroutput>j += a[i];</computeroutput>,
+since at that point the undefinedness is not "observable".  It's
+only when a decision has to be made as to whether or not to do
+the <computeroutput>printf</computeroutput> -- an observable
+action of your program -- that Memcheck complains.</para>
+
+<para>Most low level operations, such as adds, cause Memcheck to
+use the <literal>V bits</literal> for the operands to calculate
+the V bits for the result.  Even if the result is partially or
+wholly undefined, it does not complain.</para>
+
+<para>Checks on definedness only occur in two places: when a
+value is used to generate a memory address, and where a
+control-flow decision needs to be made.  Also, when a system call
+is detected, Valgrind checks the definedness of parameters as
+required.</para>
+
+<para>If a check should detect undefinedness, an error message is
+issued.  The resulting value is subsequently regarded as
+well-defined.  To do otherwise would give long chains of error
+messages.  In effect, we say that undefined values are
+non-infectious.</para>
+
+<para>This sounds overcomplicated.  Why not just check all reads
+from memory, and complain if an undefined value is loaded into a
+CPU register?  Well, that doesn't work well, because perfectly
+legitimate C programs routinely copy uninitialised values around
+in memory, and we don't want endless complaints about that.
+Here's the canonical example.  Consider a struct like
+this:</para>
+<programlisting><![CDATA[
+struct S { int x; char c; };
+struct S s1, s2;
+s1.x = 42;
+s1.c = 'z';
+s2 = s1;
+]]></programlisting>
+
+<para>The question to ask is: how large is <computeroutput>struct
+S</computeroutput>, in bytes?  An
+<computeroutput>int</computeroutput> is 4 bytes and a
+<computeroutput>char</computeroutput> one byte, so perhaps a
+<computeroutput>struct S</computeroutput> occupies 5 bytes?
+Wrong.  All (non-toy) compilers we know of will round the size of
+<computeroutput>struct S</computeroutput> up to a whole number of
+words, in this case 8 bytes.  Not doing this forces compilers to
+generate truly appalling code for subscripting arrays of
+<computeroutput>struct S</computeroutput>'s.</para>
+
+<para>So <computeroutput>s1</computeroutput> occupies 8 bytes,
+yet only 5 of them will be initialised.  For the assignment
+<computeroutput>s2 = s1</computeroutput>, gcc generates code to
+copy all 8 bytes wholesale into
+<computeroutput>s2</computeroutput> without regard for their
+meaning.  If Memcheck simply checked values as they came out of
+memory, it would yelp every time a structure assignment like this
+happened.  So the more complicated semantics described above is
+necessary.  This allows <literal>gcc</literal> to copy
+<computeroutput>s1</computeroutput> into
+<computeroutput>s2</computeroutput> any way it likes, and a
+warning will only be emitted if the uninitialised values are
+later used.</para>
+
+<para>One final twist to this story.  The above scheme allows
+garbage to pass through the CPU's integer registers without
+complaint.  It does this by giving the integer registers
+<literal>V</literal> tags, passing these around in the expected
+way.  This is complicated and computationally expensive to do, but
+is necessary.  Memcheck is more simplistic about floating-point
+loads and stores.  In particular, <literal>V</literal> bits for
+data read as a result of floating-point loads are checked at the
+load instruction.  So if your program uses the floating-point
+registers to do memory-to-memory copies, you will get complaints
+about uninitialised values.  Fortunately, I have not yet
+encountered a program which (ab)uses the floating-point registers
+in this way.</para>
+
+</sect2>
+
+
+<sect2 id="mc-manual.vaddress" xreflabel=" Valid-address (A) bits">
+<title>Valid-address (A) bits</title>
+
+<para>Notice that the previous subsection describes how the
+validity of values is established and maintained without having
+to say whether the program does or does not have the right to
+access any particular memory location.  We now consider the
+latter issue.</para>
+
+<para>As described above, every bit in memory or in the CPU has
+an associated valid-value (<literal>V</literal>) bit.  In
+addition, all bytes in memory, but not in the CPU, have an
+associated valid-address (<literal>A</literal>) bit.  This
+indicates whether or not the program can legitimately read or
+write that location.  It does not give any indication of the
+validity of the data at that location -- that's the job of the
+<literal>V</literal> bits -- only whether or not the location may
+be accessed.</para>
+
+<para>Every time your program reads or writes memory, Memcheck
+checks the <literal>A</literal> bits associated with the address.
+If any of them indicate an invalid address, an error is emitted.
+Note that the reads and writes themselves do not change the A
+bits, only consult them.</para>
+
+<para>So how do the <literal>A</literal> bits get set/cleared?
+Like this:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>When the program starts, all the global data areas are
+    marked as accessible.</para>
+  </listitem>
+
+  <listitem>
+    <para>When the program does malloc/new, the A bits for
+    exactly the area allocated, and not a byte more, are marked
+    as accessible.  Upon freeing the area the A bits are changed
+    to indicate inaccessibility.</para>
+  </listitem>
+
+  <listitem>
+
+    <para>When the stack pointer register
+    (<literal>%esp</literal>) moves up or down,
+    <literal>A</literal> bits are set.  The rule is that the area
+    from <literal>%esp</literal> up to the base of the stack is
+    marked as accessible, and below <literal>%esp</literal> is
+    inaccessible.  (If that sounds illogical, bear in mind that
+    the stack grows down, not up, on almost all Unix systems,
+    including GNU/Linux.)  Tracking <literal>%esp</literal> like
+    this has the useful side-effect that the section of stack
+    used by a function for local variables etc is automatically
+    marked accessible on function entry and inaccessible on
+    exit (see the example after this list).</para>
+  </listitem>
+
+  <listitem>
+    <para>When doing system calls, A bits are changed
+    appropriately.  For example, mmap() magically makes files
+    appear in the process's address space, so the A bits must be
+    updated if mmap() succeeds.</para>
+  </listitem>
+
+  <listitem>
+    <para>Optionally, your program can tell Valgrind about such
+    changes explicitly, using the client request mechanism
+    described above.</para>
+  </listitem>
+
+</itemizedlist>
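+
+<para>The <literal>%esp</literal> rule above means, for example, that
+using a pointer into a dead stack frame is caught.  A minimal,
+invented illustration:</para>
+<programlisting><![CDATA[
+int* get_local ( void )
+{
+   int local = 7;
+   return &local;   /* 'local' dies when the function returns */
+}
+
+int main ( void )
+{
+   int* p = get_local();
+   return *p;       /* invalid read: the address is now below %esp */
+}]]></programlisting>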
+
+</sect2>
+
+
+<sect2 id="mc-manual.together" xreflabel="Putting it all together">
+<title>Putting it all together</title>
+
+<para>Memcheck's checking machinery can be summarised as
+follows:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>Each byte in memory has 8 associated
+    <literal>V</literal> (valid-value) bits, saying whether or
+    not the byte has a defined value, and a single
+    <literal>A</literal> (valid-address) bit, saying whether or
+    not the program currently has the right to read/write that
+    address.</para>
+  </listitem>
+
+  <listitem>
+    <para>When memory is read or written, the relevant
+    <literal>A</literal> bits are consulted.  If they indicate an
+    invalid address, Valgrind emits an Invalid read or Invalid
+    write error.</para>
+  </listitem>
+
+  <listitem>
+    <para>When memory is read into the CPU's integer registers,
+    the relevant <literal>V</literal> bits are fetched from
+    memory and stored in the simulated CPU.  They are not
+    consulted.</para>
+  </listitem>
+
+  <listitem>
+    <para>When an integer register is written out to memory, the
+    <literal>V</literal> bits for that register are written back
+    to memory too.</para>
+  </listitem>
+
+  <listitem>
+    <para>When memory is read into the CPU's floating point
+    registers, the relevant <literal>V</literal> bits are read
+    from memory and they are immediately checked.  If any are
+    invalid, an uninitialised value error is emitted.  This
+    precludes using the floating-point registers to copy
+    possibly-uninitialised memory, but simplifies Valgrind in
+    that it does not have to track the validity status of the
+    floating-point registers.</para>
+  </listitem>
+
+  <listitem>
+    <para>As a result, when a floating-point register is written
+    to memory, the associated V bits are set to indicate a valid
+    value.</para>
+  </listitem>
+
+  <listitem>
+    <para>When values in integer CPU registers are used to
+    generate a memory address, or to determine the outcome of a
+    conditional branch, the <literal>V</literal> bits for those
+    values are checked, and an error emitted if any of them are
+    undefined.</para>
+  </listitem>
+
+  <listitem>
+    <para>When values in integer CPU registers are used for any
+    other purpose, Valgrind computes the V bits for the result,
+    but does not check them.</para>
+  </listitem>
+
+  <listitem>
+    <para>Once the <literal>V</literal> bits for a value in the
+    CPU have been checked, they are then set to indicate
+    validity.  This avoids long chains of errors.</para>
+  </listitem>
+
+  <listitem>
+    <para>When values are loaded from memory, Valgrind checks the
+    A bits for that location and issues an illegal-address
+    warning if needed.  In that case, the V bits loaded are
+    forced to indicate Valid, despite the location being invalid.</para>
+    <para>This apparently strange choice reduces the amount of
+    confusing information presented to the user.  It avoids the
+    unpleasant phenomenon in which memory is read from a place
+    which is both unaddressible and contains invalid values, and,
+    as a result, you get not only an invalid-address (read/write)
+    error, but also a potentially large set of
+    uninitialised-value errors, one for every time the value is
+    used.</para>
+    <para>There is a hazy boundary case to do with multi-byte
+    loads from addresses which are partially valid and partially
+    invalid.  See the description of the flag
+    <computeroutput>--partial-loads-ok</computeroutput> for
+    details.</para>
+  </listitem>
+
+</itemizedlist>
+
+
+<para>Memcheck intercepts calls to malloc, calloc, realloc,
+valloc, memalign, free, new and delete.  The behaviour you get
+is:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>malloc/new: the returned memory is marked as
+    addressible but not having valid values.  This means you have
+    to write on it before you can read it.</para>
+  </listitem>
+
+  <listitem>
+    <para>calloc: returned memory is marked both addressible and
+    valid, since calloc() clears the area to zero.</para>
+  </listitem>
+
+  <listitem>
+    <para>realloc: if the new size is larger than the old, the
+    new section is addressible but invalid, as with
+    malloc.</para>
+    <para>If the new size is smaller, the dropped-off section is
+    marked as unaddressible.  You may only pass to realloc a
+    pointer previously issued to you by malloc/calloc/realloc.</para>
+  </listitem>
+
+  <listitem>
+    <para>free/delete: you may only pass to free a pointer
+    previously issued to you by malloc/calloc/realloc, or the
+    value NULL. Otherwise, Valgrind complains.  If the pointer is
+    indeed valid, Valgrind marks the entire area it points at as
+    unaddressible, and places the block in the
+    freed-blocks-queue.  The aim is to defer as long as possible
+    reallocation of this block.  Until that happens, all attempts
+    to access it will elicit an invalid-address error, as you
+    would hope.</para>
+  </listitem>
+
+</itemizedlist>
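+
+<para>A short, invented fragment showing the malloc/calloc behaviour
+described above:</para>
+<programlisting><![CDATA[
+#include <stdio.h>
+#include <stdlib.h>
+int main ( void )
+{
+   int* p = malloc(4 * sizeof(int));  /* addressible, but undefined */
+   int* q = calloc(4, sizeof(int));   /* addressible and defined (zero) */
+   if (p[0] == 0) printf("p\n");      /* uninitialised-value error */
+   if (q[0] == 0) printf("q\n");      /* no complaint */
+   free(p);
+   free(q);
+   return 0;
+}]]></programlisting>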
+
+</sect2>
+</sect1>
+
+
+
+<sect1 id="mc-manual.leaks" xreflabel="Memory leak detection">
+<title>Memory leak detection</title>
+
+<para>Memcheck keeps track of all memory blocks issued in
+response to calls to malloc/calloc/realloc/new.  So when the
+program exits, it knows which blocks are still outstanding --
+have not been returned, in other words.  Ideally, you want your
+program to have no blocks still in use at exit.  But many
+programs do.</para>
+
+<para>For each such block, Memcheck scans the entire address
+space of the process, looking for pointers to the block.  One of
+three situations may result:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>A pointer to the start of the block is found.  This
+    usually indicates programming sloppiness; since the block is
+    still pointed at, the programmer could, at least in
+    principle, have free'd it before program exit.</para>
+  </listitem>
+
+  <listitem>
+    <para>A pointer to the interior of the block is found.  The
+    pointer might originally have pointed to the start and have
+    been moved along, or it might be entirely unrelated.
+    Memcheck deems such a block as "dubious", that is, possibly
+    leaked, because it's unclear whether or not a pointer to it
+    still exists.</para>
+  </listitem>
+
+  <listitem>
+    <para>The worst outcome is that no pointer to the block can
+    be found.  The block is classified as "leaked", because the
+    programmer could not possibly have free'd it at program exit,
+    since no pointer to it exists.  This might be a symptom of
+    having lost the pointer at some earlier point in the
+    program.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Memcheck reports summaries about leaked and dubious blocks.
+For each such block, it will also tell you where the block was
+allocated.  This should help you figure out why the pointer to it
+has been lost.  In general, you should attempt to ensure your
+programs do not have any leaked or dubious blocks at exit.</para>
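+
+<para>For instance (an invented fragment), the three outcomes can be
+seen in a program like this:</para>
+<programlisting><![CDATA[
+#include <stdlib.h>
+char* still_pointed_at;
+char* interior_pointer;
+int main ( void )
+{
+   char* lost;
+   still_pointed_at = malloc(10);             /* still reachable at exit */
+   interior_pointer = (char*)malloc(10) + 5;  /* interior pointer only:
+                                                 dubious */
+   lost = malloc(10);
+   lost = NULL;                               /* no pointer remains: leaked */
+   return 0;
+}]]></programlisting>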
+
+<para>The precise area of memory in which Memcheck searches for
+pointers is: all naturally-aligned 4-byte words for which all A
+bits indicate addressibility and all V bits indicate that the
+stored value is actually valid.</para>
+
+</sect1>
+
+
+<sect1 id="mc-manual.clientreqs" xreflabel="Client requests">
+<title>Client Requests</title>
+
+<para>The following client requests are defined in
+<filename>memcheck.h</filename>.  They also work for Addrcheck.
+See <filename>memcheck.h</filename> for exact details of their
+arguments.</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para><computeroutput>VALGRIND_MAKE_NOACCESS</computeroutput>,
+    <computeroutput>VALGRIND_MAKE_WRITABLE</computeroutput> and
+    <computeroutput>VALGRIND_MAKE_READABLE</computeroutput>.
+    These mark address ranges as completely inaccessible,
+    accessible but containing undefined data, and accessible and
+    containing defined data, respectively.  Subsequent errors may
+    have their faulting addresses described in terms of these
+    blocks.  Returns a "block handle".  Returns zero when not run
+    on Valgrind.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>VALGRIND_DISCARD</computeroutput>: At
+    some point you may want Valgrind to stop reporting errors in
+    terms of the blocks defined by the previous three macros.  To
+    do this, the above macros return a small-integer "block
+    handle".  You can pass this block handle to
+    <computeroutput>VALGRIND_DISCARD</computeroutput>.  After
+    doing so, Valgrind will no longer be able to relate
+    addressing errors to the user-defined block associated with
+    the handle.  The permissions settings associated with the
+    handle remain in place; this just affects how errors are
+    reported, not whether they are reported.  Returns 1 for an
+    invalid handle and 0 for a valid handle (although passing
+    invalid handles is harmless).  Always returns 0 when not run
+    on Valgrind.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>VALGRIND_CHECK_WRITABLE</computeroutput>
+    and <computeroutput>VALGRIND_CHECK_READABLE</computeroutput>:
+    check immediately whether or not the given address range has
+    the relevant property, and if not, print an error message.
+    Also, for the convenience of the client, returns zero if the
+    relevant property holds; otherwise, the returned value is the
+    address of the first byte for which the property is not true.
+    Always returns 0 when not run on Valgrind.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>VALGRIND_CHECK_DEFINED</computeroutput>:
+    a quick and easy way to find out whether Valgrind thinks a
+    particular variable (lvalue, to be precise) is addressible
+    and defined.  Prints an error message if not.  Returns no
+    value.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>VALGRIND_DO_LEAK_CHECK</computeroutput>:
+    run the memory leak detector right now.  Returns no value.  I
+    guess this could be used to incrementally check for leaks
+    between arbitrary places in the program's execution.
+    Warning: not properly tested!</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>VALGRIND_COUNT_LEAKS</computeroutput>:
+    fills in the four arguments with the number of bytes of
+    memory found by the previous leak check to be leaked,
+    dubious, reachable and suppressed.  Again, useful in test
+    harness code, after calling
+    <computeroutput>VALGRIND_DO_LEAK_CHECK</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>VALGRIND_GET_VBITS</computeroutput> and
+    <computeroutput>VALGRIND_SET_VBITS</computeroutput>: allow
+    you to get and set the V (validity) bits for an address
+    range.  You should probably only set V bits that you have got
+    with <computeroutput>VALGRIND_GET_VBITS</computeroutput>.
+    Only for those who really know what they are doing.</para>
+  </listitem>
+
+</itemizedlist>
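+
+<para>A small, invented example of using a few of these requests.
+The argument forms shown are from memory and may not be exact;
+<filename>memcheck.h</filename> is the authoritative reference.</para>
+<programlisting><![CDATA[
+#include <stdlib.h>
+#include "memcheck.h"
+int main ( void )
+{
+   char* buf = malloc(100);
+
+   /* should complain: the bytes are addressible but undefined */
+   VALGRIND_CHECK_DEFINED(buf[0]);
+
+   /* mark the block inaccessible; any later access is an error */
+   VALGRIND_MAKE_NOACCESS(buf, 100);
+
+   /* run the leak detector right now */
+   VALGRIND_DO_LEAK_CHECK;
+
+   return 0;
+}]]></programlisting>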
+
+</sect1>
+</chapter>
diff --git a/memcheck/docs/mc-tech-docs.xml b/memcheck/docs/mc-tech-docs.xml
new file mode 100644
index 0000000..492902c
--- /dev/null
+++ b/memcheck/docs/mc-tech-docs.xml
@@ -0,0 +1,2747 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="mc-tech-docs" 
+         xreflabel="The design and implementation of Valgrind">
+
+<title>The Design and Implementation of Valgrind</title>
+<subtitle>Detailed technical notes for hackers, maintainers and
+          the overly-curious</subtitle>
+
+<sect1 id="mc-tech-docs.intro" xreflabel="Introduction">
+<title>Introduction</title>
+
+<para>This document contains a detailed, highly-technical
+description of the internals of Valgrind.  This is not the user
+manual; if you are an end-user of Valgrind, you do not want to
+read this.  Conversely, if you really are a hacker-type and want
+to know how it works, I assume that you have read the user manual
+thoroughly.</para>
+
+<para>You may need to read this document several times, and
+carefully.  Some important things, I only say once.</para>
+
+
+
+
+<sect2 id="mc-tech-docs.history" xreflabel="History">
+<title>History</title>
+
+<para>Valgrind came into public view in late Feb 2002.  However,
+it has been under contemplation for a very long time, perhaps
+seriously for about five years.  Somewhat over two years ago, I
+started working on the x86 code generator for the Glasgow Haskell
+Compiler (http://www.haskell.org/ghc), gaining familiarity with
+x86 internals on the way.  I then did Cacheprof
+(http://www.cacheprof.org), gaining further x86 experience.  Some
+time around Feb 2000 I started experimenting with a user-space
+x86 interpreter for x86-Linux.  This worked, but it was clear
+that a JIT-based scheme would be necessary to give reasonable
+performance for Valgrind.  Design work for the JITter started in
+earnest in Oct 2000, and by early 2001 I had an x86-to-x86
+dynamic translator which could run quite large programs.  This
+translator was in a sense pointless, since it did not do any
+instrumentation or checking.</para>
+
+<para>Most of the rest of 2001 was taken up designing and
+implementing the instrumentation scheme.  The main difficulty,
+which consumed a lot of effort, was to design a scheme which did
+not generate large numbers of false uninitialised-value warnings.
+By late 2001 a satisfactory scheme had been arrived at, and I
+started to test it on ever-larger programs, with an eventual eye
+to making it work well enough so that it was helpful to folks
+debugging the upcoming version 3 of KDE.  I've used KDE since
+before version 1.0, and wanted Valgrind to be an indirect
+contribution to the KDE 3 development effort.  At the start of
+Feb 02 the kde-core-devel crew started using it, and gave a huge
+amount of helpful feedback and patches in the space of three
+weeks.  Snapshot 20020306 is the result.</para>
+
+<para>In the best Unix tradition, or perhaps in the spirit of
+Fred Brooks' depressing-but-completely-accurate epitaph "build
+one to throw away; you will anyway", much of Valgrind is a second
+or third rendition of the initial idea.  The instrumentation
+machinery (<filename>vg_translate.c</filename>,
+<filename>vg_memory.c</filename>) and core CPU simulation
+(<filename>vg_to_ucode.c</filename>,
+<filename>vg_from_ucode.c</filename>) have had three redesigns
+and rewrites; the register allocator, low-level memory manager
+(<filename>vg_malloc2.c</filename>) and symbol table reader
+(<filename>vg_symtab2.c</filename>) are on the second rewrite.
+In a sense, this document serves to record some of the knowledge
+gained as a result.</para>
+
+</sect2>
+
+
+<sect2 id="mc-tech-docs.overview" xreflabel="Design overview">
+<title>Design overview</title>
+
+<para>Valgrind is compiled into a Linux shared object,
+<filename>valgrind.so</filename>, and also a dummy one,
+<filename>valgrinq.so</filename>, of which more later.  The
+<filename>valgrind</filename> shell script adds
+<filename>valgrind.so</filename> to the
+<computeroutput>LD_PRELOAD</computeroutput> list of extra
+libraries to be loaded with any dynamically linked library.  This
+is a standard trick, one which I assume the
+<computeroutput>LD_PRELOAD</computeroutput> mechanism was
+developed to support.</para>
+
+<para><filename>valgrind.so</filename> is linked with the
+<computeroutput>-z initfirst</computeroutput> flag, which
+requests that its initialisation code is run before that of any
+other object in the executable image.  When this happens,
+valgrind gains control.  The real CPU becomes "trapped" in
+<filename>valgrind.so</filename> and the translations it
+generates.  The synthetic CPU provided by Valgrind does, however,
+return from this initialisation function.  So the normal startup
+actions, orchestrated by the dynamic linker
+<filename>ld.so</filename>, continue as usual, except on the
+synthetic CPU, not the real one.  Eventually
+<computeroutput>main</computeroutput> is run and returns, and
+then the finalisation code of the shared objects is run,
+presumably in inverse order to which they were initialised.
+Remember, this is still all happening on the simulated CPU.
+Eventually <filename>valgrind.so</filename>'s own finalisation
+code is called.  It spots this event, shuts down the simulated
+CPU, prints any error summaries and/or does leak detection, and
+returns from the initialisation code on the real CPU.  At this
+point, in effect the real and synthetic CPUs have merged back
+into one, Valgrind has lost control of the program, and the
+program finally <computeroutput>exit()s</computeroutput> back to
+the kernel in the usual way.</para>
+
+<para>The normal course of activity, once Valgrind has started
+up, is as follows.  Valgrind never runs any part of your program
+(usually referred to as the "client"), not a single byte of it,
+directly.  Instead it uses function
+<computeroutput>VG_(translate)</computeroutput> to translate
+basic blocks (BBs, straight-line sequences of code) into
+instrumented translations, and those are run instead.  The
+translations are stored in the translation cache (TC),
+<computeroutput>vg_tc</computeroutput>, with the translation
+table (TT), <computeroutput>vg_tt</computeroutput> supplying the
+original-to-translation code address mapping.  Auxiliary array
+<computeroutput>VG_(tt_fast)</computeroutput> is used as a
+direct-map cache for fast lookups in TT; it usually achieves a
+hit rate of around 98% and facilitates an orig-to-trans lookup in
+4 x86 insns, which is not bad.</para>
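+
+<para>The exact layout of
+<computeroutput>VG_(tt_fast)</computeroutput> doesn't matter here,
+but the following sketch, with invented names and sizes, conveys
+the direct-map idea: hash the original address down to an index
+and see whether the entry there belongs to that address.</para>
+
+<programlisting><![CDATA[
+/* Sketch of the VG_(tt_fast) idea; names, size and hash are
+   illustrative only, not the real declarations. */
+typedef struct {
+   unsigned int orig_addr;    /* original (client) code address  */
+   void*        trans_addr;   /* corresponding translation in TC */
+} FastCacheEntry;
+
+#define FAST_CACHE_SIZE 8192  /* assumption: some power of two */
+static FastCacheEntry fast_cache[FAST_CACHE_SIZE];
+
+void* fast_lookup ( unsigned int orig_addr )
+{
+   FastCacheEntry* ent = &fast_cache[orig_addr & (FAST_CACHE_SIZE-1)];
+   if (ent->orig_addr == orig_addr)
+      return ent->trans_addr;   /* hit */
+   return 0;   /* miss: fall back to VG_(search_transtab) */
+}]]></programlisting>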
+
+<para>Function <computeroutput>VG_(dispatch)</computeroutput> in
+<filename>vg_dispatch.S</filename> is the heart of the JIT
+dispatcher.  Once a translated code address has been found, it is
+executed simply by an x86 <computeroutput>call</computeroutput>
+to the translation.  At the end of the translation, the next
+original code addr is loaded into
+<computeroutput>%eax</computeroutput>, and the translation then
+does a <computeroutput>ret</computeroutput>, taking it back to
+the dispatch loop, with, interestingly, zero branch
+mispredictions.  The address requested in
+<computeroutput>%eax</computeroutput> is looked up first in
+<computeroutput>VG_(tt_fast)</computeroutput>, and, if not found,
+by calling C helper
+<computeroutput>VG_(search_transtab)</computeroutput>.  If there
+is still no translation available,
+<computeroutput>VG_(dispatch)</computeroutput> exits back to the
+top-level C dispatcher
+<computeroutput>VG_(toploop)</computeroutput>, which arranges for
+<computeroutput>VG_(translate)</computeroutput> to make a new
+translation.  All fairly unsurprising, really.  There are various
+complexities described below.</para>
+
+<para>The translator, orchestrated by
+<computeroutput>VG_(translate)</computeroutput>, is complicated
+but entirely self-contained.  It is described in great detail in
+subsequent sections.  Translations are stored in TC, with TT
+tracking administrative information.  The translations are
+subject to an approximate LRU-based management scheme.  With the
+current settings, the TC can hold at most about 15MB of
+translations, and LRU passes prune it to about 13.5MB.  Given
+that the orig-to-translation expansion ratio is about 13:1 to
+14:1, this means TC holds translations for more or less a
+megabyte of original code, which generally comes to about 70000
+basic blocks for C++ compiled with optimisation on.  Generating
+new translations is expensive, so it is worth having a large TC
+to minimise the (capacity) miss rate.</para>
+
+<para>The dispatcher,
+<computeroutput>VG_(dispatch)</computeroutput>, receives hints
+from the translations which allow it to cheaply spot all control
+transfers corresponding to x86
+<computeroutput>call</computeroutput> and
+<computeroutput>ret</computeroutput> instructions.  It has to do
+this in order to spot some special events:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>Calls to
+    <computeroutput>VG_(shutdown)</computeroutput>.  This is
+    Valgrind's cue to exit.  NOTE: actually this is done a
+    different way; it should be cleaned up.</para>
+  </listitem>
+
+  <listitem>
+    <para>Returns of signal handlers, to the return address
+    <computeroutput>VG_(signalreturn_bogusRA)</computeroutput>.
+    The signal simulator needs to know when a signal handler is
+    returning, so we spot jumps (returns) to this address.</para>
+  </listitem>
+
+  <listitem>
+    <para>Calls to <computeroutput>vg_trap_here</computeroutput>.
+    All <computeroutput>malloc</computeroutput>,
+    <computeroutput>free</computeroutput>, etc calls that the
+    client program makes are eventually routed to a call to
+    <computeroutput>vg_trap_here</computeroutput>, and Valgrind
+    does its own special thing with these calls.  In effect this
+    provides a trapdoor, by which Valgrind can intercept certain
+    calls on the simulated CPU, run the call as it sees fit
+    itself (on the real CPU), and return the result to the
+    simulated CPU, quite transparently to the client
+    program.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Valgrind intercepts the client's
+<computeroutput>malloc</computeroutput>,
+<computeroutput>free</computeroutput>, etc, calls, so that it can
+store additional information.  Each block
+<computeroutput>malloc</computeroutput>'d by the client gives
+rise to a shadow block in which Valgrind stores the call stack at
+the time of the <computeroutput>malloc</computeroutput> call.
+When the client calls <computeroutput>free</computeroutput>,
+Valgrind tries to find the shadow block corresponding to the
+address passed to <computeroutput>free</computeroutput>, and
+emits an error message if none can be found.  If it is found, the
+block is placed on the freed blocks queue
+<computeroutput>vg_freed_list</computeroutput>, it is marked as
+inaccessible, and its shadow block now records the call stack at
+the time of the <computeroutput>free</computeroutput> call.
+Keeping <computeroutput>free</computeroutput>'d blocks in this
+queue allows Valgrind to spot all (presumably invalid) accesses
+to them.  However, once the volume of blocks in the free queue
+exceeds <computeroutput>VG_(clo_freelist_vol)</computeroutput>,
+blocks are finally removed from the queue.</para>
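+
+<para>A rough sketch of the bookkeeping this implies is given
+below; the field names are illustrative rather than the actual
+declarations in <filename>vg_clientmalloc.c</filename>.</para>
+
+<programlisting><![CDATA[
+/* Sketch of a shadow block and the freed-blocks queue (illustrative). */
+typedef struct _ShadowChunk {
+   struct _ShadowChunk* next;   /* link in vg_freed_list, once freed   */
+   unsigned int         size;   /* size the client asked for           */
+   unsigned int         data;   /* address handed back to the client   */
+   void*                where;  /* call stack at malloc, later at free */
+} ShadowChunk;
+
+/* Freed blocks stay on the queue, marked inaccessible, until their
+   total volume exceeds VG_(clo_freelist_vol). */]]></programlisting>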
+
+<para>Keeping track of <literal>A</literal> and
+<literal>V</literal> bits (note: if you don't know what these
+are, you haven't read the user guide carefully enough) for memory
+is done in <filename>vg_memory.c</filename>.  This implements a
+sparse array structure which covers the entire 4G address space
+in a way which is reasonably fast and reasonably space efficient.
+The 4G address space is divided up into 64K sections, each
+covering 64KB of address space.  Given a 32-bit address, the top
+16 bits are used to select one of the 65536 entries in
+<computeroutput>VG_(primary_map)</computeroutput>.  The resulting
+"secondary" (<computeroutput>SecMap</computeroutput>) holds A and
+V bits for the 64KB chunk of address space corresponding to the
+lower 16 bits of the address.</para>
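+
+<para>In outline the lookup works as follows; the
+<computeroutput>SecMap</computeroutput> layout shown is invented
+for illustration only.</para>
+
+<programlisting><![CDATA[
+/* Sketch of the two-level A/V map lookup (layout is illustrative). */
+typedef struct {
+   unsigned char abits[65536 / 8];  /* one A bit per byte of the chunk    */
+   unsigned char vbyte[65536];      /* eight V bits per byte of the chunk */
+} SecMap;
+
+static SecMap* primary_map[65536];  /* stands in for VG_(primary_map) */
+
+unsigned char get_vbyte ( unsigned int addr )
+{
+   SecMap* sm = primary_map[addr >> 16];  /* top 16 bits: which secondary */
+   return sm->vbyte[addr & 0xFFFF];       /* low 16 bits: which byte      */
+}]]></programlisting>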
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.design" xreflabel="Design decisions">
+<title>Design decisions</title>
+
+<para>Some design decisions were motivated by the need to make
+Valgrind debuggable.  Imagine you are writing a CPU simulator.
+It works fairly well.  However, you run some large program, like
+Netscape, and after tens of millions of instructions, it crashes.
+How can you figure out where in your simulator the bug is?</para>
+
+<para>Valgrind's answer is: cheat.  Valgrind is designed so that
+it is possible to switch back to running the client program on
+the real CPU at any point.  Using the
+<computeroutput>--stop-after=</computeroutput> flag, you can ask
+Valgrind to run just some number of basic blocks, and then run
+the rest of the way on the real CPU.  If you are searching for a
+bug in the simulated CPU, you can use this to do a binary search,
+which quickly leads you to the specific basic block which is
+causing the problem.</para>
+
+<para>This is all very handy.  It does constrain the design in
+certain unimportant ways.  Firstly, the layout of memory, when
+viewed from the client's point of view, must be identical
+regardless of whether it is running on the real or simulated CPU.
+This means that Valgrind can't do pointer swizzling -- well, no
+great loss -- and it can't run on the same stack as the client --
+again, no great loss.  Valgrind operates on its own stack,
+<computeroutput>VG_(stack)</computeroutput>, which it switches to
+at startup, temporarily switching back to the client's stack when
+doing system calls for the client.</para>
+
+<para>Valgrind also receives signals on its own stack,
+<computeroutput>VG_(sigstack)</computeroutput>, but for different
+gruesome reasons discussed below.</para>
+
+<para>This nice clean
+switch-back-to-the-real-CPU-whenever-you-like story is muddied by
+signals.  Problem is that signals arrive at arbitrary times and
+tend to slightly perturb the basic block count, with the result
+that you can get close to the basic block causing a problem but
+can't home in on it exactly.  My kludgey hack is to define
+<computeroutput>SIGNAL_SIMULATION</computeroutput> to 1 towards
+the bottom of <filename>vg_syscall_mem.c</filename>, so that
+signal handlers are run on the real CPU and don't change the BB
+counts.</para>
+
+<para>A second hole in the switch-back-to-real-CPU story is that
+Valgrind's way of delivering signals to the client is different
+from that of the kernel.  Specifically, the layout of the signal
+delivery frame, and the mechanism used to detect a sighandler
+returning, are different.  So you can't expect to make the
+transition inside a sighandler and still have things working, but
+in practice that's not much of a restriction.</para>
+
+<para>Valgrind's implementation of
+<computeroutput>malloc</computeroutput>,
+<computeroutput>free</computeroutput>, etc, (in
+<filename>vg_clientmalloc.c</filename>, not the low-level stuff
+in <filename>vg_malloc2.c</filename>) is somewhat complicated by
+the need to handle switching back at arbitrary points.  It does
+work, though.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.correctness" xreflabel="Correctness">
+<title>Correctness</title>
+
+<para>There's only one of me, and I have a Real Life (tm) as well
+as hacking Valgrind [allegedly :-].  That means I don't have time
+to waste chasing endless bugs in Valgrind.  My emphasis is
+therefore on doing everything as simply as possible, with
+correctness, stability and robustness being the number one
+priority, more important than performance or functionality.  As a
+result:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>The code is absolutely loaded with assertions, and
+    these are <command>permanently enabled.</command> I have no
+    plan to remove or disable them later.  Over the past couple
+    of months, as valgrind has become more widely used, they have
+    shown their worth, pulling up various bugs which would
+    otherwise have appeared as hard-to-find segmentation
+    faults.</para>
+
+    <para>I am of the view that it's acceptable to spend 5% of
+    the total running time of your valgrindified program doing
+    assertion checks and other internal sanity checks.</para>
+  </listitem>
+
+  <listitem>
+    <para>Aside from the assertions, valgrind contains various
+    sets of internal sanity checks, which get run at varying
+    frequencies during normal operation.
+    <computeroutput>VG_(do_sanity_checks)</computeroutput> runs
+    every 1000 basic blocks, which means 500 to 2000 times/second
+    for typical machines at present.  It checks that Valgrind
+    hasn't overrun its private stack, and does some simple checks
+    on the memory permissions maps.  Once every 25 calls it does
+    some more extensive checks on those maps.  Etc, etc.</para>
+    <para>The following components also have sanity check code,
+    which can be enabled to aid debugging:</para>
+    <itemizedlist>
+      <listitem><para>The low-level memory-manager
+        (<computeroutput>VG_(mallocSanityCheckArena)</computeroutput>).
+        This does a complete check of all blocks and chains in an
+        arena, which is very slow.  Is not engaged by default.</para>
+      </listitem>
+
+      <listitem>
+        <para>The symbol table reader(s): various checks to
+        ensure uniqueness of mappings; see
+        <computeroutput>VG_(read_symbols)</computeroutput> for a
+        start.  Is permanently engaged.</para>
+      </listitem>
+
+      <listitem>
+        <para>The A and V bit tracking stuff in
+        <filename>vg_memory.c</filename>.  This can be compiled
+        with cpp symbol
+        <computeroutput>VG_DEBUG_MEMORY</computeroutput> defined,
+        which removes all the fast, optimised cases, and uses
+        simple-but-slow fallbacks instead.  Not engaged by
+        default.</para>
+      </listitem>
+
+      <listitem>
+        <para>Ditto
+        <computeroutput>VG_DEBUG_LEAKCHECK</computeroutput>.</para>
+      </listitem>
+
+      <listitem>
+        <para>The JITter parses x86 basic blocks into sequences
+        of UCode instructions.  It then sanity checks each one
+        with <computeroutput>VG_(saneUInstr)</computeroutput> and
+        sanity checks the sequence as a whole with
+        <computeroutput>VG_(saneUCodeBlock)</computeroutput>.
+        This stuff is engaged by default, and has caught some
+        way-obscure bugs in the simulated CPU machinery in its
+        time.</para>
+      </listitem>
+
+      <listitem>
+        <para>The system call wrapper does
+        <computeroutput>VG_(first_and_last_secondaries_look_plausible)</computeroutput>
+        after every syscall; this is known to pick up bugs in the
+        syscall wrappers.  Engaged by default.</para>
+      </listitem>
+
+      <listitem>
+        <para>The main dispatch loop, in
+        <computeroutput>VG_(dispatch)</computeroutput>, checks
+        that translations do not set
+        <computeroutput>%ebp</computeroutput> to any value
+        different from
+        <computeroutput>VG_EBP_DISPATCH_CHECKED</computeroutput>
+        or <computeroutput>&amp; VG_(baseBlock)</computeroutput>.
+        In effect this test is free, and is permanently
+        engaged.</para>
+      </listitem>
+
+      <listitem>
+        <para>There are a couple of ifdefed-out consistency
+        checks I inserted whilst debugging the new register
+        allocator,
+        <computeroutput>vg_do_register_allocation</computeroutput>.</para>
+      </listitem>
+    </itemizedlist>
+  </listitem>
+
+  <listitem>
+    <para>I try to avoid techniques, algorithms, mechanisms, etc,
+    for which I can supply neither a convincing argument that
+    they are correct, nor sanity-check code which might pick up
+    bugs in my implementation.  I don't always succeed in this,
+    but I try.  Basically the idea is: avoid techniques which
+    are, in practice, unverifiable, in some sense.  When doing
+    anything, always have in mind: "how can I verify that this is
+    correct?"</para>
+  </listitem>
+
+</itemizedlist>
+
+
+<para>Some more specific things are:</para>
+<itemizedlist>
+  <listitem>
+    <para>Valgrind runs in the same namespace as the client, at
+    least from <filename>ld.so</filename>'s point of view, and it
+    therefore absolutely had better not export any symbol with a
+    name which could clash with that of the client or any of its
+    libraries.  Therefore, all globally visible symbols exported
+    from <filename>valgrind.so</filename> are defined using the
+    <computeroutput>VG_</computeroutput> CPP macro.  As you'll
+    see from <filename>vg_constants.h</filename>, this appends
+    some arbitrary prefix to the symbol, in order that it be, we
+    hope, globally unique.  Currently the prefix is
+    <computeroutput>vgPlain_</computeroutput>; a two-line sketch
+    of the macro appears after this list.  For convenience
+    there are also <computeroutput>VGM_</computeroutput>,
+    <computeroutput>VGP_</computeroutput> and
+    <computeroutput>VGOFF_</computeroutput>.  All locally defined
+    symbols are declared <computeroutput>static</computeroutput>
+    and do not appear in the final shared object.</para>
+
+    <para>To check this, I periodically do <computeroutput>nm
+    valgrind.so | grep " T "</computeroutput>, which shows you
+    all the globally exported text symbols.  They should all have
+    an approved prefix, except for those like
+    <computeroutput>malloc</computeroutput>,
+    <computeroutput>free</computeroutput>, etc, which we
+    deliberately want to shadow and take precedence over the same
+    names exported from <filename>glibc.so</filename>, so that
+    valgrind can intercept those calls easily.  Similarly,
+    <computeroutput>nm valgrind.so | grep " D "</computeroutput>
+    allows you to find any rogue data-segment symbol
+    names.</para>
+  </listitem>
+
+  <listitem>
+    <para>Valgrind tries, and almost succeeds, in being
+    completely independent of all other shared objects, in
+    particular of <filename>glibc.so</filename>.  For example, we
+    have our own low-level memory manager in
+    <filename>vg_malloc2.c</filename>, which is a fairly standard
+    malloc/free scheme augmented with arenas, and
+    <filename>vg_mylibc.c</filename> exports reimplementations of
+    various bits and pieces you'd normally get from the C
+    library.</para>
+
+    <para>Why all the hassle?  Because imagine the potential
+    chaos of both the simulated and real CPUs executing in
+    <filename>glibc.so</filename>.  It just seems simpler and
+    cleaner to be completely self-contained, so that only the
+    simulated CPU visits <filename>glibc.so</filename>.  In
+    practice it's not much hassle anyway.  Also, valgrind starts
+    up before glibc has a chance to initialise itself, and who
+    knows what difficulties that could lead to.  Finally, glibc
+    has definitions for some types, specifically
+    <computeroutput>sigset_t</computeroutput>, which conflict
+    (are different from) the Linux kernel's idea of same.  When
+    Valgrind wants to fiddle around with signal stuff, it wants
+    to use the kernel's definitions, not glibc's definitions.  So
+    it's simplest just to keep glibc out of the picture
+    entirely.</para>
+
+    <para>To find out which glibc symbols are used by Valgrind,
+    reinstate the link flags <computeroutput>-nostdlib
+    -Wl,-no-undefined</computeroutput>.  This causes linking to
+    fail, but will tell you what you depend on.  I have mostly,
+    but not entirely, got rid of the glibc dependencies; what
+    remains is, IMO, fairly harmless.  AFAIK the current
+    dependencies are: <computeroutput>memset</computeroutput>,
+    <computeroutput>memcmp</computeroutput>,
+    <computeroutput>stat</computeroutput>,
+    <computeroutput>system</computeroutput>,
+    <computeroutput>sbrk</computeroutput>,
+    <computeroutput>setjmp</computeroutput> and
+    <computeroutput>longjmp</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para>Similarly, valgrind should not really import any
+    headers other than the Linux kernel headers, since it knows
+    of no API other than the kernel interface to talk to.  At the
+    moment this is really not in a good state, and
+    <computeroutput>vg_syscall_mem</computeroutput> imports, via
+    <filename>vg_unsafe.h</filename>, a significant number of
+    C-library headers so as to know the sizes of various structs
+    passed across the kernel boundary.  This is of course
+    completely bogus, since there is no guarantee that the C
+    library's definitions of these structs matches those of the
+    kernel.  I have started to sort this out using
+    <filename>vg_kerneliface.h</filename>, into which I had
+    intended to copy all kernel definitions which valgrind could
+    need, but this has not gotten very far.  At the moment it
+    mostly contains definitions for
+    <computeroutput>sigset_t</computeroutput> and
+    <computeroutput>struct sigaction</computeroutput>, since the
+    kernel's definition for these really does clash with glibc's.
+    I plan to use a <computeroutput>vki_</computeroutput> prefix
+    on all these types and constants, to denote the fact that
+    they pertain to <command>V</command>algrind's
+    <command>K</command>ernel
+    <command>I</command>nterface.</para>
+
+    <para>Another advantage of having a
+    <filename>vg_kerneliface.h</filename> file is that it makes
+    it simpler to interface to a different kernel.  One can, for
+    example, easily imagine writing a new
+    <filename>vg_kerneliface.h</filename> for FreeBSD, or x86
+    NetBSD.</para>
+  </listitem>
+
+</itemizedlist>
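+
+<para>Returning to the naming convention in the first item above,
+the effect of the <computeroutput>VG_</computeroutput> macro is
+roughly the following; this is a sketch, not the exact text of
+<filename>vg_constants.h</filename>.</para>
+
+<programlisting><![CDATA[
+/* Sketch of the naming macro; the real one lives in vg_constants.h. */
+#define VGAPPEND(str1,str2)  str1##str2
+#define VG_(str)             VGAPPEND(vgPlain_, str)
+
+/* So VG_(dispatch) becomes vgPlain_dispatch in the shared object. */]]></programlisting>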
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.limits" xreflabel="Current limitations">
+<title>Current limitations</title>
+
+<para>Support for weird (non-POSIX) signal stuff is patchy.  Does
+anybody care?</para>
+
+</sect2>
+
+</sect1>
+
+
+
+
+
+<sect1 id="mc-tech-docs.jitter" xreflabel="The instrumenting JITter">
+<title>The instrumenting JITter</title>
+
+<para>This really is the heart of the matter.  We begin with
+various side issues.</para>
+
+
+<sect2 id="mc-tech-docs.storage" 
+       xreflabel="Run-time storage, and the use of host registers">
+<title>Run-time storage, and the use of host registers</title>
+
+<para>Valgrind translates client (original) basic blocks into
+instrumented basic blocks, which live in the translation cache
+TC, until either the client finishes or the translations are
+ejected from TC to make room for newer ones.</para>
+
+<para>Since it generates x86 code in memory, Valgrind has
+complete control of the use of registers in the translations.
+Now pay attention.  I shall say this only once, and it is
+important you understand this.  In what follows I will refer to
+registers in the host (real) cpu using their standard names,
+<computeroutput>%eax</computeroutput>,
+<computeroutput>%edi</computeroutput>, etc.  I refer to registers
+in the simulated CPU by capitalising them:
+<computeroutput>%EAX</computeroutput>,
+<computeroutput>%EDI</computeroutput>, etc.  These two sets of
+registers usually bear no direct relationship to each other;
+there is no fixed mapping between them.  This naming scheme is
+used fairly consistently in the comments in the sources.</para>
+
+<para>Host registers, once things are up and running, are used as
+follows:</para>
+
+<itemizedlist>
+  <listitem>
+    <para><computeroutput>%esp</computeroutput>, the real stack
+    pointer, points somewhere in Valgrind's private stack area,
+    <computeroutput>VG_(stack)</computeroutput> or, transiently,
+    into its signal delivery stack,
+    <computeroutput>VG_(sigstack)</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>%edi</computeroutput> is used as a
+    temporary in code generation; it is almost always dead,
+    except when used for the
+    <computeroutput>Left</computeroutput> value-tag operations.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>%eax</computeroutput>,
+    <computeroutput>%ebx</computeroutput>,
+    <computeroutput>%ecx</computeroutput>,
+    <computeroutput>%edx</computeroutput> and
+    <computeroutput>%esi</computeroutput> are available to
+    Valgrind's register allocator.  They are dead (carry
+    unimportant values) in between translations, and are live
+    only in translations.  The one exception to this is
+    <computeroutput>%eax</computeroutput>, which, as mentioned
+    far above, has a special significance to the dispatch loop
+    <computeroutput>VG_(dispatch)</computeroutput>: when a
+    translation returns to the dispatch loop,
+    <computeroutput>%eax</computeroutput> is expected to contain
+    the original-code-address of the next translation to run.
+    The register allocator is so good at minimising spill code
+    that using five regs and not having to save/restore
+    <computeroutput>%edi</computeroutput> actually gives better
+    code than allocating to <computeroutput>%edi</computeroutput>
+    as well, but then having to push/pop it around special
+    uses.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>%ebp</computeroutput> points
+    permanently at
+    <computeroutput>VG_(baseBlock)</computeroutput>.  Valgrind's
+    translations are position-independent, partly because this is
+    convenient, but also because translations get moved around in
+    TC as part of the LRUing activity.  <command>All</command>
+    static entities which need to be referred to from generated
+    code, whether data or helper functions, are stored starting
+    at <computeroutput>VG_(baseBlock)</computeroutput> and are
+    therefore reached by indexing from
+    <computeroutput>%ebp</computeroutput>.  There is but one
+    exception, which is that by placing the value
+    <computeroutput>VG_EBP_DISPATCH_CHECKED</computeroutput> in
+    <computeroutput>%ebp</computeroutput> just before a return to
+    the dispatcher, the dispatcher is informed that the next
+    address to run, in <computeroutput>%eax</computeroutput>,
+    requires special treatment.</para>
+  </listitem>
+
+  <listitem>
+    <para>The real machine's FPU state is pretty much
+    unimportant, for reasons which will become obvious.  Ditto
+    its <computeroutput>%eflags</computeroutput> register.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>The state of the simulated CPU is stored in memory, in
+<computeroutput>VG_(baseBlock)</computeroutput>, which is a block
+of 200 words IIRC.  Recall that
+<computeroutput>%ebp</computeroutput> points permanently at the
+start of this block.  Function
+<computeroutput>vg_init_baseBlock</computeroutput> decides what
+the offsets of various entities in
+<computeroutput>VG_(baseBlock)</computeroutput> are to be, and
+allocates word offsets for them.  The code generator then emits
+<computeroutput>%ebp</computeroutput> relative addresses to get
+at those things.  The sequence in which entities are allocated
+has been carefully chosen so that the 32 most popular entities
+come first, because this means 8-bit offsets can be used in the
+generated code.</para>
+
+<para>If I was clever, I could make
+<computeroutput>%ebp</computeroutput> point 32 words along
+<computeroutput>VG_(baseBlock)</computeroutput>, so that I'd have
+another 32 words of short-form offsets available, but that's just
+complicated, and it's not important -- the first 32 words take
+99% (or whatever) of the traffic.</para>
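+
+<para>The arithmetic behind the magic number 32 is just the x86
+disp8 encoding: word offsets 0 to 31 give byte offsets 0 to 124,
+which fit in a signed 8-bit displacement, as this little sketch
+spells out.</para>
+
+<programlisting><![CDATA[
+/* Why the first 32 words of VG_(baseBlock) are special: their
+   %ebp-relative byte offsets fit the signed 8-bit displacement form. */
+int fits_in_disp8 ( int word_offset )
+{
+   int byte_offset = word_offset * 4;
+   return byte_offset >= -128 && byte_offset <= 127;  /* true for 0..31 */
+}]]></programlisting>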
+
+<para>Currently, the sequence of stuff in
+<computeroutput>VG_(baseBlock)</computeroutput> is as
+follows:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>9 words, holding the simulated integer registers,
+    <computeroutput>%EAX</computeroutput>
+    .. <computeroutput>%EDI</computeroutput>, and the simulated
+    flags, <computeroutput>%EFLAGS</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para>Another 9 words, holding the V bit "shadows" for the
+    above 9 regs.</para>
+  </listitem>
+
+  <listitem>
+    <para>The <command>addresses</command> of various helper
+    routines called from generated code:
+    <computeroutput>VG_(helper_value_check4_fail)</computeroutput>,
+    <computeroutput>VG_(helper_value_check0_fail)</computeroutput>,
+    which register V-check failures,
+    <computeroutput>VG_(helperc_STOREV4)</computeroutput>,
+    <computeroutput>VG_(helperc_STOREV1)</computeroutput>,
+    <computeroutput>VG_(helperc_LOADV4)</computeroutput>,
+    <computeroutput>VG_(helperc_LOADV1)</computeroutput>, which
+    do stores and loads of V bits to/from the sparse array which
+    keeps track of V bits in memory, and
+    <computeroutput>VGM_(handle_esp_assignment)</computeroutput>,
+    which messes with memory addressibility resulting from
+    changes in <computeroutput>%ESP</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para>The simulated <computeroutput>%EIP</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para>24 spill words, for when the register allocator can't
+    make it work with 5 measly registers.</para>
+  </listitem>
+
+  <listitem>
+    <para>Addresses of helpers
+    <computeroutput>VG_(helperc_STOREV2)</computeroutput>,
+    <computeroutput>VG_(helperc_LOADV2)</computeroutput>.  These
+    are here because 2-byte loads and stores are relatively rare,
+    so are placed above the magic 32-word offset boundary.</para>
+  </listitem>
+
+  <listitem>
+    <para>For similar reasons, addresses of helper functions
+    <computeroutput>VGM_(fpu_write_check)</computeroutput> and
+    <computeroutput>VGM_(fpu_read_check)</computeroutput>, which
+    handle the A/V maps testing and changes required by FPU
+    writes/reads.</para>
+  </listitem>
+
+  <listitem>
+    <para>Some other boring helper addresses:
+    <computeroutput>VG_(helper_value_check2_fail)</computeroutput>
+    and
+    <computeroutput>VG_(helper_value_check1_fail)</computeroutput>.
+    These are probably never emitted now, and should be
+    removed.</para>
+  </listitem>
+
+  <listitem>
+    <para>The entire state of the simulated FPU, which I believe
+    to be 108 bytes long.</para>
+  </listitem>
+
+  <listitem>
+    <para>Finally, the addresses of various other helper
+    functions in <filename>vg_helpers.S</filename>, which deal
+    with rare situations which are tedious or difficult to
+    generate code in-line for.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>As a general rule, the simulated machine's state lives
+permanently in memory at
+<computeroutput>VG_(baseBlock)</computeroutput>.  However, the
+JITter does some optimisations which allow the simulated integer
+registers to be cached in real registers over multiple simulated
+instructions within the same basic block.  These are always
+flushed back into memory at the end of every basic block, so that
+the in-memory state is up-to-date between basic blocks.  (This
+flushing is implied by the statement above that the real
+machine's allocatable registers are dead in between simulated
+blocks).</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.startup" 
+       xreflabel="Startup, shutdown, and system calls">
+<title>Startup, shutdown, and system calls</title>
+
+<para>Getting into Valgrind
+(<computeroutput>VG_(startup)</computeroutput>, called from
+<filename>valgrind.so</filename>'s initialisation section),
+really means copying the real CPU's state into
+<computeroutput>VG_(baseBlock)</computeroutput>, and then
+installing our own stack pointer, etc, into the real CPU, and
+then starting up the JITter.  Exiting valgrind involves copying
+the simulated state back to the real state.</para>
+
+<para>Unfortunately, there's a complication at startup time.
+Problem is that at the point where we need to take a snapshot of
+the real CPU's state, the offsets in
+<computeroutput>VG_(baseBlock)</computeroutput> are not set up
+yet, because to do so would involve disrupting the real machine's
+state significantly.  The way round this is to dump the real
+machine's state into a temporary, static block of memory,
+<computeroutput>VG_(m_state_static)</computeroutput>.  We can
+then set up the <computeroutput>VG_(baseBlock)</computeroutput>
+offsets at our leisure, and copy into it from
+<computeroutput>VG_(m_state_static)</computeroutput> at some
+convenient later time.  This copying is done by
+<computeroutput>VG_(copy_m_state_static_to_baseBlock)</computeroutput>.</para>
+
+<para>On exit, the inverse transformation is (rather
+unnecessarily) used: stuff in
+<computeroutput>VG_(baseBlock)</computeroutput> is copied to
+<computeroutput>VG_(m_state_static)</computeroutput>, and the
+assembly stub then copies from
+<computeroutput>VG_(m_state_static)</computeroutput> into the
+real machine registers.</para>
+
+<para>Doing system calls on behalf of the client
+(<filename>vg_syscall.S</filename>) is something of a half-way
+house.  We have to make the world look sufficiently like what the
+client would normally see for the syscall to work properly, but
+we can't afford to lose control.  So the trick
+is to copy all of the client's state, <command>except its program
+counter</command>, into the real CPU, do the system call, and
+copy the state back out.  Note that the client's state includes
+its stack pointer register, so one effect of this partial
+restoration is to cause the system call to be run on the client's
+stack, as it should be.</para>
+
+<para>As ever there are complications.  We have to save some of
+our own state somewhere when restoring the client's state into
+the CPU, so that we can keep going sensibly afterwards.  In fact
+the only thing which is important is our own stack pointer, but
+for paranoia reasons I save and restore our own FPU state as
+well, even though that's probably pointless.</para>
+
+<para>The complication on the above complication is, that for
+horrible reasons to do with signals, we may have to handle a
+second client system call whilst the client is blocked inside
+some other system call (unbelievable!).  That means there are two
+sets of places to dump Valgrind's stack pointer and FPU state
+across the syscall, and we decide which to use by consulting
+<computeroutput>VG_(syscall_depth)</computeroutput>, which is in
+turn maintained by
+<computeroutput>VG_(wrap_syscall)</computeroutput>.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.ucode" xreflabel="Introduction to UCode">
+<title>Introduction to UCode</title>
+
+<para>UCode lies at the heart of the x86-to-x86 JITter.  The
+basic premise is that dealing with the x86 instruction set head-on
+is just too darn complicated, so we do the traditional
+compiler-writer's trick and translate it into a simpler,
+easier-to-deal-with form.</para>
+
+<para>In normal operation, translation proceeds through six
+stages, coordinated by
+<computeroutput>VG_(translate)</computeroutput>:</para>
+
+<orderedlist>
+  <listitem>
+    <para>Parsing of an x86 basic block into a sequence of UCode
+    instructions (<computeroutput>VG_(disBB)</computeroutput>).</para>
+  </listitem>
+
+  <listitem>
+    <para>UCode optimisation
+    (<computeroutput>vg_improve</computeroutput>), with the aim
+    of caching simulated registers in real registers over
+    multiple simulated instructions, and removing redundant
+    simulated <computeroutput>%EFLAGS</computeroutput>
+    saving/restoring.</para>
+  </listitem>
+
+  <listitem>
+    <para>UCode instrumentation
+    (<computeroutput>vg_instrument</computeroutput>), which adds
+    value and address checking code.</para>
+  </listitem>
+
+  <listitem>
+    <para>Post-instrumentation cleanup
+    (<computeroutput>vg_cleanup</computeroutput>), removing
+    redundant value-check computations.</para>
+  </listitem>
+
+  <listitem>
+    <para>Register allocation
+    (<computeroutput>vg_do_register_allocation</computeroutput>),
+    which, note, is done on UCode.</para>
+  </listitem>
+
+  <listitem>
+    <para>Emission of final instrumented x86 code
+    (<computeroutput>VG_(emit_code)</computeroutput>).</para>
+  </listitem>
+
+</orderedlist>
+
+<para>Notice how steps 2, 3, 4 and 5 are simple UCode-to-UCode
+transformation passes, all on straight-line blocks of UCode (type
+<computeroutput>UCodeBlock</computeroutput>).  Steps 2 and 4 are
+optimisation passes and can be disabled for debugging purposes,
+with <computeroutput>--optimise=no</computeroutput> and
+<computeroutput>--cleanup=no</computeroutput> respectively.</para>
+
+<para>Valgrind can also run in a no-instrumentation mode, given
+<computeroutput>--instrument=no</computeroutput>.  This is useful
+for debugging the JITter quickly without having to deal with the
+complexity of the instrumentation mechanism too.  In this mode,
+steps 3 and 4 are omitted.</para>
+
+<para>These flags combine, so that
+<computeroutput>--instrument=no</computeroutput> together with
+<computeroutput>--optimise=no</computeroutput> means only steps
+1, 5 and 6 are used.
+<computeroutput>--single-step=yes</computeroutput> causes each
+x86 instruction to be treated as a single basic block.  The
+translations are terrible but this is sometimes instructive.</para>
+
+<para>The <computeroutput>--stop-after=N</computeroutput> flag
+switches back to the real CPU after
+<computeroutput>N</computeroutput> basic blocks.  It also re-JITs
+the final basic block executed and prints the debugging info
+resulting, so this gives you a way to get a quick snapshot of how
+a basic block looks as it passes through the six stages mentioned
+above.  If you want to see full information for every block
+translated (probably not, but still ...) find, in
+<computeroutput>VG_(translate)</computeroutput>, the lines</para>
+<programlisting><![CDATA[
+dis = True;
+dis = debugging_translation;]]></programlisting>
+
+<para>and comment out the second line.  This will spew out
+debugging junk faster than you can possibly imagine.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.tags" xreflabel="UCode operand tags: type 'Tag'">
+<title>UCode operand tags: type <computeroutput>Tag</computeroutput></title>
+
+<para>UCode is, more or less, a simple two-address RISC-like
+code.  In keeping with the x86 AT&amp;T assembly syntax,
+generally speaking the first operand is the source operand, and
+the second is the destination operand, which is modified when the
+uinstr is notionally executed.</para>
+
+<para>UCode instructions have up to three operand fields, each of
+which has a corresponding <computeroutput>Tag</computeroutput>
+describing it.  Possible values for the tag are:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para><computeroutput>NoValue</computeroutput>: indicates
+    that the field is not in use.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>Lit16</computeroutput>: the field
+    contains a 16-bit literal.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>Literal</computeroutput>: the field
+    denotes a 32-bit literal, whose value is stored in the
+    <computeroutput>lit32</computeroutput> field of the uinstr
+    itself.  Since there is only one
+    <computeroutput>lit32</computeroutput> for the whole uinstr,
+    only one operand field may contain this tag.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>SpillNo</computeroutput>: the field
+    contains a spill slot number, in the range 0 to 23 inclusive,
+    denoting one of the spill slots contained inside
+    <computeroutput>VG_(baseBlock)</computeroutput>.  Such tags
+    only exist after register allocation.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>RealReg</computeroutput>: the field
+    contains a number in the range 0 to 7 denoting an integer x86
+    ("real") register on the host.  The number is the Intel
+    encoding for integer registers.  Such tags only exist after
+    register allocation.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>ArchReg</computeroutput>: the field
+    contains a number in the range 0 to 7 denoting an integer x86
+    register on the simulated CPU.  In reality this means a
+    reference to one of the first 8 words of
+    <computeroutput>VG_(baseBlock)</computeroutput>.  Such tags
+    can exist at any point in the translation process.</para>
+  </listitem>
+
+  <listitem>
+    <para>Last, but not least,
+    <computeroutput>TempReg</computeroutput>.  The field contains
+    the number of one of an infinite set of virtual (integer)
+    registers. <computeroutput>TempReg</computeroutput>s are used
+    everywhere throughout the translation process; you can have
+    as many as you want.  The register allocator maps as many as
+    it can into <computeroutput>RealReg</computeroutput>s and
+    turns the rest into
+    <computeroutput>SpillNo</computeroutput>s, so
+    <computeroutput>TempReg</computeroutput>s should not exist
+    after the register allocation phase.</para>
+
+    <para><computeroutput>TempReg</computeroutput>s are always 32
+    bits long, even if the data they hold is logically shorter.
+    In that case the upper unused bits are required, and, I
+    think, generally assumed to be zero.
+    <computeroutput>TempReg</computeroutput>s holding V bits for
+    quantities shorter than 32 bits are expected to have ones in
+    the unused places, since a one denotes "undefined".</para>
+  </listitem>
+
+</itemizedlist>
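+
+<para>Gathering those up, the tag type amounts to the following
+enumeration; this is a sketch, and the real declaration may
+differ in order and representation.</para>
+
+<programlisting><![CDATA[
+/* Sketch of the Tag type implied by the list above. */
+typedef enum {
+   TempReg,   /* one of an unlimited supply of virtual registers  */
+   ArchReg,   /* simulated integer register, 0 .. 7               */
+   RealReg,   /* host integer register, Intel encoding, 0 .. 7    */
+   SpillNo,   /* spill slot 0 .. 23 inside VG_(baseBlock)         */
+   Literal,   /* 32-bit literal, held in the uinstr's lit32 field */
+   Lit16,     /* 16-bit literal, held in the operand field itself */
+   NoValue    /* operand field not in use                         */
+} Tag;]]></programlisting>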
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.uinstr" 
+       xreflabel="UCode instructions: type 'UInstr'">
+<title>UCode instructions: type <computeroutput>UInstr</computeroutput></title>
+
+<para>UCode was carefully designed to make it possible to do
+register allocation on UCode and then translate the result into
+x86 code without needing any extra registers ... well, that was
+the original plan, anyway.  Things have gotten a little more
+complicated since then.  In what follows, UCode instructions are
+referred to as uinstrs, to distinguish them from x86
+instructions.  Uinstrs of course have uopcodes which are
+(naturally) different from x86 opcodes.</para>
+
+<para>A uinstr (type <computeroutput>UInstr</computeroutput>)
+contains various fields, not all of which are used by any one
+uopcode:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>Three 16-bit operand fields,
+    <computeroutput>val1</computeroutput>,
+    <computeroutput>val2</computeroutput> and
+    <computeroutput>val3</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para>Three tag fields,
+    <computeroutput>tag1</computeroutput>,
+    <computeroutput>tag2</computeroutput> and
+    <computeroutput>tag3</computeroutput>.  Each of these has a
+    value of type <computeroutput>Tag</computeroutput>, and they
+    describe what the <computeroutput>val1</computeroutput>,
+    <computeroutput>val2</computeroutput> and
+    <computeroutput>val3</computeroutput> fields contain.</para>
+  </listitem>
+
+  <listitem>
+    <para>A 32-bit literal field.</para>
+  </listitem>
+
+  <listitem>
+    <para>Two <computeroutput>FlagSet</computeroutput>s,
+    specifying which x86 condition codes are read and written by
+    the uinstr.</para>
+  </listitem>
+
+  <listitem>
+    <para>An opcode byte, containing a value of type
+    <computeroutput>Opcode</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para>A size field, indicating the data transfer size
+    (1/2/4/8/10) in cases where this makes sense, or zero
+    otherwise.</para>
+  </listitem>
+
+  <listitem>
+    <para>A condition-code field, which, for jumps, holds a value
+    of type <computeroutput>Condcode</computeroutput>, indicating
+    the condition which applies.  The encoding is as it is in the
+    x86 insn stream, except we add a 17th value
+    <computeroutput>CondAlways</computeroutput> to indicate an
+    unconditional transfer.</para>
+  </listitem>
+
+  <listitem>
+    <para>Various 1-bit flags, indicating whether this insn
+    pertains to an x86 CALL or RET instruction, whether a
+    widening is signed or not, etc.</para>
+  </listitem>
+
+</itemizedlist>
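+
+<para>Pulling those fields together, a uinstr looks roughly like
+this; the field names, types and ordering are illustrative, not
+the real declaration.</para>
+
+<programlisting><![CDATA[
+/* Sketch of a UInstr; illustrative only, not the real declaration. */
+typedef struct {
+   unsigned int   lit32;             /* 32-bit literal, if a tag is Literal  */
+   unsigned short val1, val2, val3;  /* the three operand fields             */
+   unsigned char  tag1, tag2, tag3;  /* Tags describing val1 .. val3         */
+   unsigned char  opcode;            /* an Opcode value                      */
+   unsigned char  size;              /* transfer size 1/2/4/8/10, or 0       */
+   unsigned char  cond;              /* Condcode for jumps, incl. CondAlways */
+   unsigned char  flags_r, flags_w;  /* FlagSets: condition codes read/written */
+   /* ... plus various 1-bit flags: CALL/RET hints, signed widening, etc. */
+} UInstr;]]></programlisting>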
+
+<para>UOpcodes (type <computeroutput>Opcode</computeroutput>) are
+divided into two groups: those necessary merely to express the
+functionality of the x86 code, and extra uopcodes needed to
+express the instrumentation.  The former group contains:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para><computeroutput>GET</computeroutput> and
+    <computeroutput>PUT</computeroutput>, which move values from
+    the simulated CPU's integer registers
+    (<computeroutput>ArchReg</computeroutput>s) into
+    <computeroutput>TempReg</computeroutput>s, and back.
+    <computeroutput>GETF</computeroutput> and
+    <computeroutput>PUTF</computeroutput> do the corresponding
+    thing for the simulated
+    <computeroutput>%EFLAGS</computeroutput>.  There are no
+    corresponding insns for the FPU register stack, since we
+    don't explicitly simulate its registers.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>LOAD</computeroutput> and
+    <computeroutput>STORE</computeroutput>, which, in RISC-like
+    fashion, are the only uinstrs able to interact with
+    memory.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>MOV</computeroutput> and
+    <computeroutput>CMOV</computeroutput> allow unconditional and
+    conditional moves of values between
+    <computeroutput>TempReg</computeroutput>s.</para>
+  </listitem>
+
+  <listitem>
+    <para>ALU operations.  Again in RISC-like fashion, these only
+    operate on <computeroutput>TempReg</computeroutput>s (before
+    reg-alloc) or <computeroutput>RealReg</computeroutput>s
+    (after reg-alloc).  These are:
+    <computeroutput>ADD</computeroutput>,
+    <computeroutput>ADC</computeroutput>,
+    <computeroutput>AND</computeroutput>,
+    <computeroutput>OR</computeroutput>,
+    <computeroutput>XOR</computeroutput>,
+    <computeroutput>SUB</computeroutput>,
+    <computeroutput>SBB</computeroutput>,
+    <computeroutput>SHL</computeroutput>,
+    <computeroutput>SHR</computeroutput>,
+    <computeroutput>SAR</computeroutput>,
+    <computeroutput>ROL</computeroutput>,
+    <computeroutput>ROR</computeroutput>,
+    <computeroutput>RCL</computeroutput>,
+    <computeroutput>RCR</computeroutput>,
+    <computeroutput>NOT</computeroutput>,
+    <computeroutput>NEG</computeroutput>,
+    <computeroutput>INC</computeroutput>,
+    <computeroutput>DEC</computeroutput>,
+    <computeroutput>BSWAP</computeroutput>,
+    <computeroutput>CC2VAL</computeroutput> and
+    <computeroutput>WIDEN</computeroutput>.
+    <computeroutput>WIDEN</computeroutput> does signed or
+    unsigned value widening.
+    <computeroutput>CC2VAL</computeroutput> is used to convert
+    condition codes into a value, zero or one.  The rest are
+    obvious.</para>
+
+    <para>To allow for more efficient code generation, we bend
+    slightly the restriction at the start of the previous para:
+    for <computeroutput>ADD</computeroutput>,
+    <computeroutput>ADC</computeroutput>,
+    <computeroutput>XOR</computeroutput>,
+    <computeroutput>SUB</computeroutput> and
+    <computeroutput>SBB</computeroutput>, we allow the first
+    (source) operand to also be an
+    <computeroutput>ArchReg</computeroutput>, that is, one of the
+    simulated machine's registers.  Also, many of these ALU ops
+    allow the source operand to be a literal.  See
+    <computeroutput>VG_(saneUInstr)</computeroutput> for the
+    final word on the allowable forms of uinstrs.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>LEA1</computeroutput> and
+    <computeroutput>LEA2</computeroutput> are not strictly
+    necessary, but facilitate better translations.  They
+    record the fancy x86 addressing modes in a direct way, which
+    allows those amodes to be emitted back into the final
+    instruction stream more or less verbatim.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>CALLM</computeroutput> calls a
+    machine-code helper, one of the methods whose address is
+    stored at some
+    <computeroutput>VG_(baseBlock)</computeroutput> offset.
+    <computeroutput>PUSH</computeroutput> and
+    <computeroutput>POP</computeroutput> move values to/from
+    <computeroutput>TempReg</computeroutput> to the real
+    (Valgrind's) stack, and
+    <computeroutput>CLEAR</computeroutput> removes values from
+    the stack.  <computeroutput>CALLM_S</computeroutput> and
+    <computeroutput>CALLM_E</computeroutput> delimit the
+    boundaries of call setups and clearings, for the benefit of
+    the instrumentation passes.  Getting this right is critical,
+    and so <computeroutput>VG_(saneUCodeBlock)</computeroutput>
+    makes various checks on the use of these uopcodes.</para>
+
+    <para>It is important to understand that these uopcodes have
+    nothing to do with the x86
+    <computeroutput>call</computeroutput>,
+    <computeroutput>return</computeroutput>,
+    <computeroutput>push</computeroutput> or
+    <computeroutput>pop</computeroutput> instructions, and are
+    not used to implement them.  Those guys turn into
+    combinations of <computeroutput>GET</computeroutput>,
+    <computeroutput>PUT</computeroutput>,
+    <computeroutput>LOAD</computeroutput>,
+    <computeroutput>STORE</computeroutput>,
+    <computeroutput>ADD</computeroutput>,
+    <computeroutput>SUB</computeroutput>, and
+    <computeroutput>JMP</computeroutput>.  What these uopcodes
+    support is calling of helper functions such as
+    <computeroutput>VG_(helper_imul_32_64)</computeroutput>,
+    which do stuff which is too difficult or tedious to emit
+    inline.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>FPU</computeroutput>,
+    <computeroutput>FPU_R</computeroutput> and
+    <computeroutput>FPU_W</computeroutput>.  Valgrind doesn't
+    attempt to simulate the internal state of the FPU at all.
+    Consequently it only needs to be able to distinguish FPU ops
+    which read and write memory from those that don't, and for
+    those which do, it needs to know the effective address and
+    data transfer size.  This is made easier because the x86 FP
+    instruction encoding is very regular, basically consisting of
+    16 bits for a non-memory FPU insn and 11 (IIRC) bits + an
+    address mode for a memory FPU insn.  So our
+    <computeroutput>FPU</computeroutput> uinstr carries the 16
+    bits in its <computeroutput>val1</computeroutput> field.  And
+    <computeroutput>FPU_R</computeroutput> and
+    <computeroutput>FPU_W</computeroutput> carry 11 bits in that
+    field, together with the identity of a
+    <computeroutput>TempReg</computeroutput> or (later)
+    <computeroutput>RealReg</computeroutput> which contains the
+    address.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>JIFZ</computeroutput> is unique, in
+    that it allows a control-flow transfer which is not deemed to
+    end a basic block.  It causes a jump to a literal (original)
+    address if the specified argument is zero.</para>
+  </listitem>
+
+  <listitem>
+    <para>Finally, <computeroutput>INCEIP</computeroutput>
+    advances the simulated <computeroutput>%EIP</computeroutput>
+    by the specified literal amount.  This supports lazy
+    <computeroutput>%EIP</computeroutput> updating, as described
+    below.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Stages 1 and 2 of the 6-stage translation process mentioned
+above deal purely with these uopcodes, and no others.  They are
+sufficient to express pretty much all the x86 32-bit
+protected-mode instruction set, at least everything understood by
+a pre-MMX original Pentium (P54C).</para>
+
+<para>Stages 3, 4, 5 and 6 also deal with the following extra
+"instrumentation" uopcodes.  They are used to express all the
+definedness-tracking and -checking machinery which valgrind does.
+In later sections we show how to create checking code for each of
+the uopcodes above.  Note that these instrumentation uopcodes,
+although some of them appear complicated, have been carefully chosen
+so that efficient x86 code can be generated for them.  GNU
+superopt v2.5 did a great job helping out here.  Anyways, the
+uopcodes are as follows:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para><computeroutput>GETV</computeroutput> and
+    <computeroutput>PUTV</computeroutput> are analogues to
+    <computeroutput>GET</computeroutput> and
+    <computeroutput>PUT</computeroutput> above.  They are
+    identical except that they move the V bits for the specified
+    values back and forth to
+    <computeroutput>TempRegs</computeroutput>, rather than moving
+    the values themselves.</para>
+  </listitem>
+
+  <listitem>
+    <para>Similarly, <computeroutput>LOADV</computeroutput> and
+    <computeroutput>STOREV</computeroutput> read and write V bits
+    from the synthesised shadow memory that Valgrind maintains.
+    In fact they do more than that, since they also do
+    address-validity checks, and emit complaints if the
+    read/written addresses are unaddressable.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>TESTV</computeroutput>, whose
+    parameters are a <computeroutput>TempReg</computeroutput> and
+    a size, tests the V bits in the
+    <computeroutput>TempReg</computeroutput>, at the specified
+    operation size (0/1/2/4 byte) and emits an error if any of
+    them indicate undefinedness.  This is the only uopcode
+    capable of doing such tests.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>SETV</computeroutput>, whose parameters
+    are also <computeroutput>TempReg</computeroutput> and a size,
+    sets the V bits in the
+    <computeroutput>TempReg</computeroutput> to indicate
+    definedness, at the specified operation size.  This is
+    usually used to generate the correct V bits for a literal
+    value, which is of course fully defined.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>GETVF</computeroutput> and
+    <computeroutput>PUTVF</computeroutput> are analogues to
+    <computeroutput>GETF</computeroutput> and
+    <computeroutput>PUTF</computeroutput>.  They move the single
+    V bit used to model definedness of
+    <computeroutput>%EFLAGS</computeroutput> between its home in
+    <computeroutput>VG_(baseBlock)</computeroutput> and the
+    specified <computeroutput>TempReg</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>TAG1</computeroutput> denotes one of a
+    family of unary operations on
+    <computeroutput>TempReg</computeroutput>s containing V bits.
+    Similarly, <computeroutput>TAG2</computeroutput> denotes one
+    in a family of binary operations on V bits.</para>
+  </listitem>
+
+</itemizedlist>
+
+
+<para>These 10 uopcodes are sufficient to express Valgrind's
+entire definedness-checking semantics.  In fact most of the
+interesting magic is done by the
+<computeroutput>TAG1</computeroutput> and
+<computeroutput>TAG2</computeroutput> suboperations.</para>
+
+<para>First, however, I need to explain about V-vector operation
+sizes.  There are four sizes: 1, 2 and 4, which operate on groups
+of 8, 16 and 32 V bits at a time, supporting the usual 1, 2 and
+4 byte x86 operations.  However there is also the mysterious
+fourth size, 0, which really means a single V bit.  Single V
+bits are used in
+various circumstances; in particular, the definedness of
+<computeroutput>%EFLAGS</computeroutput> is modelled with a
+single V bit.  Now might be a good time to also point out that
+for V bits, 1 means "undefined" and 0 means "defined".
+Similarly, for A bits, 1 means "invalid address" and 0 means
+"valid address".  This seems counterintuitive (and so it is), but
+testing against zero on x86s saves instructions compared to
+testing against all 1s, because many ALU operations set the Z
+flag for free, so to speak.</para>
+
+<para>With that in mind, the tag ops are:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <formalpara>
+    <title>(UNARY) Pessimising casts:</title>
+    <para><computeroutput>VgT_PCast40</computeroutput>,
+    <computeroutput>VgT_PCast20</computeroutput>,
+    <computeroutput>VgT_PCast10</computeroutput>,
+    <computeroutput>VgT_PCast01</computeroutput>,
+    <computeroutput>VgT_PCast02</computeroutput> and
+    <computeroutput>VgT_PCast04</computeroutput>.  A "pessimising
+    cast" takes a V-bit vector at one size, and creates a new one
+    at another size, pessimised in the sense that if any of the
+    bits in the source vector indicate undefinedness, then all
+    the bits in the result indicate undefinedness.  In this case
+    the casts are all to or from a single V bit, so for example
+    <computeroutput>VgT_PCast40</computeroutput> is a pessimising
+    cast from 32 bits to 1, whereas
+    <computeroutput>VgT_PCast04</computeroutput> simply copies
+    the single source V bit into all 32 bit positions in the
+    result.  Surprisingly, these ops can all be implemented very
+    efficiently.</para>
+    </formalpara>
+
+    <para>There are also the pessimising casts
+    <computeroutput>VgT_PCast14</computeroutput>, from 8 bits to
+    32, <computeroutput>VgT_PCast12</computeroutput>, from 8 bits
+    to 16, and <computeroutput>VgT_PCast11</computeroutput>, from
+    8 bits to 8.  This last one seems nonsensical, but in fact it
+    isn't a no-op because, as mentioned above, any undefined (1)
+    bits in the source infect the entire result.</para>
+  </listitem>
+
+  <listitem>
+    <formalpara>
+    <title>(UNARY) Propagating undefinedness upwards in a
+    word:</title>
+    <para><computeroutput>VgT_Left4</computeroutput>,
+    <computeroutput>VgT_Left2</computeroutput> and
+    <computeroutput>VgT_Left1</computeroutput>.  These are used
+    to simulate the worst-case effects of carry propagation in
+    adds and subtracts.  They return a V vector identical to the
+    original, except that if the original contained any undefined
+    bits, then it and all bits above it are marked as undefined
+    too.  Hence the Left bit in the names.</para></formalpara>
+  </listitem>
+
+  <listitem>
+    <formalpara>
+    <title>(UNARY) Signed and unsigned value widening:</title> 
+    <para><computeroutput>VgT_SWiden14</computeroutput>,
+    <computeroutput>VgT_SWiden24</computeroutput>,
+    <computeroutput>VgT_SWiden12</computeroutput>,
+    <computeroutput>VgT_ZWiden14</computeroutput>,
+    <computeroutput>VgT_ZWiden24</computeroutput> and
+    <computeroutput>VgT_ZWiden12</computeroutput>.  These mimic
+    the definedness effects of standard signed and unsigned
+    integer widening.  Unsigned widening creates zero bits in the
+    new positions, so
+    <computeroutput>VgT_ZWiden*</computeroutput> accordingly
+    mark those parts of their argument as defined.  Signed
+    widening copies the sign bit into the new positions, so
+    <computeroutput>VgT_SWiden*</computeroutput> copies the
+    definedness of the sign bit into the new positions.  Because
+    1 means undefined and 0 means defined, these operations can
+    (fascinatingly) be done by the same operations which they
+    mimic.  Go figure.</para>
+    </formalpara>
+  </listitem>
+
+  <listitem>
+    <formalpara>
+    <title>(BINARY) Undefined-if-either-Undefined,
+    Defined-if-either-Defined:</title>
+    <para><computeroutput>VgT_UifU4</computeroutput>,
+    <computeroutput>VgT_UifU2</computeroutput>,
+    <computeroutput>VgT_UifU1</computeroutput>,
+    <computeroutput>VgT_UifU0</computeroutput>,
+    <computeroutput>VgT_DifD4</computeroutput>,
+    <computeroutput>VgT_DifD2</computeroutput>,
+    <computeroutput>VgT_DifD1</computeroutput>.  These do simple
+    bitwise operations on pairs of V-bit vectors, with
+    <computeroutput>UifU</computeroutput> giving undefined if
+    either arg bit is undefined, and
+    <computeroutput>DifD</computeroutput> giving defined if
+    either arg bit is defined.  Abstract interpretation junkies,
+    if any make it this far, may like to think of them as meets
+    and joins (or is it joins and meets) in the definedness
+    lattices.</para>
+    </formalpara>
+  </listitem>
+
+  <listitem>
+    <formalpara>
+    <title>(BINARY; one value, one V-bit vector) Generate argument
+    improvement terms for AND and OR:</title>
+    <para><computeroutput>VgT_ImproveAND4_TQ</computeroutput>,
+    <computeroutput>VgT_ImproveAND2_TQ</computeroutput>,
+    <computeroutput>VgT_ImproveAND1_TQ</computeroutput>,
+    <computeroutput>VgT_ImproveOR4_TQ</computeroutput>,
+    <computeroutput>VgT_ImproveOR2_TQ</computeroutput>,
+    <computeroutput>VgT_ImproveOR1_TQ</computeroutput>.  These
+    help out with AND and OR operations.  AND and OR have the
+    inconvenient property that the definedness of the result
+    depends on the actual values of the arguments as well as
+    their definedness.  At the bit level:</para></formalpara>
+<programlisting><![CDATA[
+1 AND undefined = undefined, but
+0 AND undefined = 0, and
+similarly 
+0 OR undefined = undefined, but
+1 OR undefined = 1.]]></programlisting>
+    
+    <para>It turns out that gcc (quite legitimately) generates
+    code which relies on this fact, so we have to model it
+    properly in order to avoid flooding users with spurious value
+    errors.  The ultimate definedness result of AND and OR is
+    calculated using <computeroutput>UifU</computeroutput> on the
+    definedness of the arguments, but we also
+    <computeroutput>DifD</computeroutput> in some "improvement"
+    terms which take into account the above phenomena.</para>
+
+    <para><computeroutput>ImproveAND</computeroutput> takes as
+    its first argument the actual value of an argument to AND
+    (the T) and the definedness of that argument (the Q), and
+    returns a V-bit vector which is defined (0) for bits which
+    have value 0 and are defined; this, when
+    <computeroutput>DifD</computeroutput> into the final result
+    causes those bits to be defined even if the corresponding bit
+    in the other argument is undefined.</para>
+
+    <para>The <computeroutput>ImproveOR</computeroutput> ops do
+    the dual thing for OR arguments.  Note that XOR does not have
+    this property that one argument can make the other
+    irrelevant, so there is no need for such complexity for
+    XOR.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>That's all the tag ops.  If you stare at this long enough,
+and then run Valgrind and stare at the pre- and post-instrumented
+ucode, it should be fairly obvious how the instrumentation
+machinery hangs together.</para>
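+
+<para>To make the tag op semantics concrete, here is a small
+hedged sketch in C of what a few of them compute.  This is not
+Valgrind's actual code, and the function names are invented for
+the illustration; it merely restates the rules above, remembering
+that 0 means "defined" and 1 means "undefined":</para>
+<programlisting><![CDATA[
+typedef unsigned int UInt;
+
+/* PCast40: 32 V bits down to 1: undefined if any source bit is. */
+UInt pcast40 ( UInt v )         { return v == 0 ? 0 : 1; }
+
+/* PCast04: 1 V bit up to 32: copy it into every bit position. */
+UInt pcast04 ( UInt v )         { return v == 0 ? 0x00000000 : 0xFFFFFFFF; }
+
+/* Left4: an undefined bit infects itself and all bits above it,
+   mimicking worst-case carry propagation in adds/subtracts. */
+UInt left4 ( UInt v )           { return v | (0u - v); }
+
+/* UifU4: undefined if either argument bit is (bitwise OR). */
+UInt uifu4 ( UInt v1, UInt v2 ) { return v1 | v2; }
+
+/* DifD4: defined if either argument bit is (bitwise AND). */
+UInt difd4 ( UInt v1, UInt v2 ) { return v1 & v2; }
+
+/* ImproveAND4_TQ: a result bit is forced to "defined" where the
+   actual value t has a defined 0 bit, since 0 AND anything is 0. */
+UInt improve_and4_tq ( UInt t, UInt q ) { return t | q; }]]></programlisting>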
+
+<para>One point, if you do this: in order to make it easy to
+differentiate <computeroutput>TempReg</computeroutput>s carrying
+values from <computeroutput>TempReg</computeroutput>s carrying V
+bit vectors, Valgrind prints the former as (for example)
+<computeroutput>t28</computeroutput> and the latter as
+<computeroutput>q28</computeroutput>; the fact that they carry
+the same number serves to indicate their relationship.  This is
+purely for the convenience of the human reader; the register
+allocator and code generator don't regard them as
+different.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-manual.trans" xreflabel="Translation into UCode">
+<title>Translation into UCode</title>
+
+<para><computeroutput>VG_(disBB)</computeroutput> allocates a new
+<computeroutput>UCodeBlock</computeroutput> and then uses
+<computeroutput>disInstr</computeroutput> to translate x86
+instructions one at a time into UCode, dumping the result in the
+<computeroutput>UCodeBlock</computeroutput>.  This goes on until
+a control-flow transfer instruction is encountered.</para>
+
+<para>Despite the large size of
+<filename>vg_to_ucode.c</filename>, this translation is really
+very simple.  Each x86 instruction is translated entirely
+independently of its neighbours, merrily allocating new
+<computeroutput>TempReg</computeroutput>s as it goes.  The idea
+is to have a simple translator -- in reality, no more than a
+macro-expander -- and to let the resulting poor-quality UCode be
+cleaned up by the UCode optimisation phase which follows.  To
+give you an idea, here are some x86 instructions and their
+translations (this is a complete basic block, as Valgrind sees
+it):</para>
+<programlisting><![CDATA[
+0x40435A50:  incl %edx
+     0: GETL      %EDX, t0
+     1: INCL      t0  (-wOSZAP)
+     2: PUTL      t0, %EDX
+
+0x40435A51:  movsbl (%edx),%eax
+     3: GETL      %EDX, t2
+     4: LDB       (t2), t2
+     5: WIDENL_Bs t2
+     6: PUTL      t2, %EAX
+
+0x40435A54:  testb $0x20, 1(%ecx,%eax,2)
+     7: GETL      %EAX, t6
+     8: GETL      %ECX, t8
+     9: LEA2L     1(t8,t6,2), t4
+    10: LDB       (t4), t10
+    11: MOVB      $0x20, t12
+    12: ANDB      t12, t10  (-wOSZACP)
+    13: INCEIPo   $9
+
+0x40435A59:  jnz-8 0x40435A50
+    14: Jnzo      $0x40435A50  (-rOSZACP)
+    15: JMPo      $0x40435A5B]]></programlisting>
+
+<para>Notice how the block always ends with an unconditional jump
+to the next block.  This is a bit unnecessary, but makes many
+things simpler.</para>
+
+<para>Most x86 instructions turn into sequences of
+<computeroutput>GET</computeroutput>,
+<computeroutput>PUT</computeroutput>,
+<computeroutput>LEA1</computeroutput>,
+<computeroutput>LEA2</computeroutput>,
+<computeroutput>LOAD</computeroutput> and
+<computeroutput>STORE</computeroutput>.  Some complicated ones
+however rely on calling helper bits of code in
+<filename>vg_helpers.S</filename>.  The ucode instructions
+<computeroutput>PUSH</computeroutput>,
+<computeroutput>POP</computeroutput>,
+<computeroutput>CALL</computeroutput>,
+<computeroutput>CALLM_S</computeroutput> and
+<computeroutput>CALLM_E</computeroutput> support this.  The
+calling convention is somewhat ad-hoc and is not the C calling
+convention.  The helper routines must save all integer registers,
+and the flags, that they use.  Args are passed on the stack
+underneath the return address, as usual, and if any results are
+to be returned, they are either placed in dummy arg slots
+created by the ucode <computeroutput>PUSH</computeroutput>
+sequence, or simply overwrite the incoming args.</para>
+
+<para>In order that the instrumentation mechanism can handle
+calls to these helpers,
+<computeroutput>VG_(saneUCodeBlock)</computeroutput> enforces the
+following restrictions on calls to helpers:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>Each <computeroutput>CALL</computeroutput> uinstr must
+    be bracketed by a preceding
+    <computeroutput>CALLM_S</computeroutput> marker (dummy
+    uinstr) and a trailing
+    <computeroutput>CALLM_E</computeroutput> marker.  These
+    markers are used by the instrumentation mechanism later to
+    establish the boundaries of the
+    <computeroutput>PUSH</computeroutput>,
+    <computeroutput>POP</computeroutput> and
+    <computeroutput>CLEAR</computeroutput> sequences for the
+    call.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>PUSH</computeroutput>,
+    <computeroutput>POP</computeroutput> and
+    <computeroutput>CLEAR</computeroutput> may only appear inside
+    sections bracketed by
+    <computeroutput>CALLM_S</computeroutput> and
+    <computeroutput>CALLM_E</computeroutput>, and nowhere else.</para>
+  </listitem>
+
+  <listitem>
+    <para>In any such bracketed section, no two
+    <computeroutput>PUSH</computeroutput> insns may push the same
+    <computeroutput>TempReg</computeroutput>.  Dually, no two
+    <computeroutput>POP</computeroutput>s may pop the same
+    <computeroutput>TempReg</computeroutput>.</para>
+  </listitem>
+
+  <listitem>
+    <para>Finally, although this is not checked, args should be
+    removed from the stack with
+    <computeroutput>CLEAR</computeroutput>, rather than with
+    <computeroutput>POP</computeroutput>s into a
+    <computeroutput>TempReg</computeroutput> which is not
+    subsequently used.  This is because the instrumentation
+    mechanism assumes that all values
+    <computeroutput>POP</computeroutput>ped from the stack are
+    actually used.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Some of the translations may appear to have redundant
+<computeroutput>TempReg</computeroutput>-to-<computeroutput>TempReg</computeroutput>
+moves.  This helps the next phase, UCode optimisation, to
+generate better code.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.optim" xreflabel="UCode optimisation">
+<title>UCode optimisation</title>
+
+<para>UCode is then subjected to an improvement pass
+(<computeroutput>vg_improve()</computeroutput>), which blurs the
+boundaries between the translations of the original x86
+instructions.  It's pretty straightforward.  Three
+transformations are done:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>Redundant <computeroutput>GET</computeroutput>
+    elimination.  Actually, more general than that -- eliminates
+    redundant fetches of ArchRegs.  In our running example,
+    uinstr 3 <computeroutput>GET</computeroutput>s
+    <computeroutput>%EDX</computeroutput> into
+    <computeroutput>t2</computeroutput> despite the fact that, by
+    looking at the previous uinstr, it is already in
+    <computeroutput>t0</computeroutput>.  The
+    <computeroutput>GET</computeroutput> is therefore removed,
+    and <computeroutput>t2</computeroutput> renamed to
+    <computeroutput>t0</computeroutput>.  Assuming
+    <computeroutput>t0</computeroutput> is allocated to a host
+    register, it means the simulated
+    <computeroutput>%EDX</computeroutput> will exist in a host
+    CPU register for more than one simulated x86 instruction,
+    which seems to me to be a highly desirable property.</para>
+
+    <para>There is some mucking around to do with subregisters;
+    <computeroutput>%AL</computeroutput> vs
+    <computeroutput>%AH</computeroutput>,
+    <computeroutput>%AX</computeroutput> vs
+    <computeroutput>%EAX</computeroutput> etc.  I can't remember
+    how it works, but in general we are very conservative, and
+    these tend to invalidate the caching.</para>
+  </listitem>
+
+  <listitem>
+    <para>Redundant <computeroutput>PUT</computeroutput>
+    elimination.  This annuls
+    <computeroutput>PUT</computeroutput>s of values back to
+    simulated CPU registers if a later
+    <computeroutput>PUT</computeroutput> would overwrite the
+    earlier <computeroutput>PUT</computeroutput> value, and there
+    are no intervening reads of the simulated register
+    (<computeroutput>ArchReg</computeroutput>).</para>
+
+    <para>As before, we are paranoid when faced with subregister
+    references.  Also, <computeroutput>PUT</computeroutput>s of
+    <computeroutput>%ESP</computeroutput> are never annulled,
+    because it is vital that the instrumenter always has an
+    up-to-date <computeroutput>%ESP</computeroutput> value
+    available, since <computeroutput>%ESP</computeroutput>
+    changes affect the addressability of the memory around the
+    simulated stack pointer.</para>
+
+    <para>The implication of the above paragraph is that the
+    simulated machine's registers are only lazily updated once
+    the above two optimisation phases have run, with the
+    exception of <computeroutput>%ESP</computeroutput>.
+    <computeroutput>TempReg</computeroutput>s go dead at the end
+    of every basic block, from which it is inferable that any
+    <computeroutput>TempReg</computeroutput> caching a simulated
+    CPU reg is flushed (back into the relevant
+    <computeroutput>VG_(baseBlock)</computeroutput> slot) at the
+    end of every basic block.  The further implication is that
+    the simulated registers are only up-to-date in between
+    basic blocks, and not at arbitrary points inside basic
+    blocks.  And the consequence of that is that we can only
+    deliver signals to the client in between basic blocks.  None
+    of this seems to be a problem in practice.</para>
+  </listitem>
+
+  <listitem>
+    <para>Finally there is a simple def-use thing for condition
+    codes.  If an earlier uinstr writes the condition codes, and
+    the next uinstr along that actually cares about the condition
+    codes writes the same or a larger set of them, but does not
+    read any, then the earlier uinstr is marked as not writing
+    any condition codes.  This saves a lot of redundant cond-code
+    saving and restoring.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>The effect of these transformations on our short block is
+rather unexciting, and shown below.  On longer basic blocks they
+can dramatically improve code quality.</para>
+
+<programlisting><![CDATA[
+at 3: delete GET, rename t2 to t0 in (4 .. 6)
+at 7: delete GET, rename t6 to t0 in (8 .. 9)
+at 1: annul flag write OSZAP due to later OSZACP
+
+Improved code:
+     0: GETL      %EDX, t0
+     1: INCL      t0
+     2: PUTL      t0, %EDX
+     4: LDB       (t0), t0
+     5: WIDENL_Bs t0
+     6: PUTL      t0, %EAX
+     8: GETL      %ECX, t8
+     9: LEA2L     1(t8,t0,2), t4
+    10: LDB       (t4), t10
+    11: MOVB      $0x20, t12
+    12: ANDB      t12, t10  (-wOSZACP)
+    13: INCEIPo   $9
+    14: Jnzo      $0x40435A50  (-rOSZACP)
+    15: JMPo      $0x40435A5B]]></programlisting>
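+
+<para>The redundant-GET pass can be pictured as a single forward
+walk which remembers, for each ArchReg, the TempReg (if any)
+currently caching it.  The following is a hedged C sketch of that
+idea, with made-up data structures and names; it is not
+<computeroutput>vg_improve()</computeroutput> itself, and it
+simply gives up on anything it doesn't understand, such as
+subregister accesses:</para>
+<programlisting><![CDATA[
+#define N_ARCH_REGS 8
+
+typedef struct { int opcode; int areg; int treg; } UInstr;
+enum { OP_GET, OP_PUT, OP_NOP, OP_OTHER };
+
+/* 'ren' maps TempReg numbers to replacements; the caller passes it
+   initialised to the identity mapping. */
+void elim_redundant_gets ( UInstr* code, int n, int ren[] )
+{
+   int i, r, cache[N_ARCH_REGS];
+   for (r = 0; r < N_ARCH_REGS; r++) cache[r] = -1;
+   for (i = 0; i < n; i++) {
+      UInstr* u = &code[i];
+      if (u->opcode == OP_GET && cache[u->areg] != -1) {
+         ren[u->treg] = cache[u->areg];   /* reuse the cached TempReg */
+         u->opcode = OP_NOP;              /* and delete the GET       */
+      } else if (u->opcode == OP_GET || u->opcode == OP_PUT) {
+         cache[u->areg] = u->treg;        /* this TempReg now caches the reg */
+      } else {
+         /* be very conservative about everything else: forget the lot */
+         for (r = 0; r < N_ARCH_REGS; r++) cache[r] = -1;
+      }
+   }
+}]]></programlisting>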
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.instrum" xreflabel="UCode instrumentation">
+<title>UCode instrumentation</title>
+
+<para>Once you understand the meaning of the instrumentation
+uinstrs, discussed in detail above, the instrumentation scheme is
+fairly straightforward.  Each uinstr is instrumented in
+isolation, and the instrumentation uinstrs are placed before the
+original uinstr.  Our running example continues below.  I have
+placed a blank line after every original ucode, to make it easier
+to see which instrumentation uinstrs correspond to which
+originals.</para>
+
+<para>As mentioned somewhere above,
+<computeroutput>TempReg</computeroutput>s carrying values have
+names like <computeroutput>t28</computeroutput>, and each one has
+a shadow carrying its V bits, with names like
+<computeroutput>q28</computeroutput>.  This pairing aids in
+reading instrumented ucode.</para>
+
+<para>One decision about all this is where to have "observation
+points", that is, where to check that V bits are valid.  I use a
+minimalistic scheme, only checking where a failure of validity
+could cause the original program to (seg)fault.  So the use of
+values as memory addresses causes a check, as do conditional
+jumps (these cause a check on the definedness of the condition
+codes).  And arguments <computeroutput>PUSH</computeroutput>ed
+for helper calls are checked, hence the weird restrictions on
+helper call preambles described above.</para>
+
+<para>Another decision is that once a value is tested, it is
+thereafter regarded as defined, so that we do not emit multiple
+undefined-value errors for the same undefined value.  That means
+that <computeroutput>TESTV</computeroutput> uinstrs are always
+followed by <computeroutput>SETV</computeroutput> on the same
+(shadow) <computeroutput>TempReg</computeroutput>s.  Most of
+these <computeroutput>SETV</computeroutput>s are redundant and
+are removed by the post-instrumentation cleanup phase.</para>
+
+<para>The instrumentation for calling helper functions deserves
+further comment.  The definedness of results from a helper is
+modelled using just one V bit.  So, in short, we do pessimising
+casts of the definedness of all the args, down to a single bit,
+and then <computeroutput>UifU</computeroutput> these bits
+together.  So this single V bit will say "undefined" if any part
+of any arg is undefined.  This V bit is then pessimally cast back
+up to the result(s) sizes, as needed.  If, by seeing that all the
+args are got rid of with <computeroutput>CLEAR</computeroutput>
+and none with <computeroutput>POP</computeroutput>, Valgrind sees
+that the result of the call is not actually used, it immediately
+examines the result V bit with a
+<computeroutput>TESTV</computeroutput> --
+<computeroutput>SETV</computeroutput> pair.  If it did not do
+this, there would be no observation point to detect that some
+of the args to the helper were undefined.  Of course, if the
+helper's results are indeed used, we don't do this, since the
+result usage will presumably cause the result definedness to be
+checked at some suitable future point.</para>
+
+<para>In general Valgrind tries to track definedness on a
+bit-for-bit basis, but as the above para shows, for calls to
+helpers we throw in the towel and approximate down to a single
+bit.  This is because it's too complex and difficult to track
+bit-level definedness through complex ops such as integer
+multiply and divide, and in any case there are no reasonable code
+fragments which attempt to (eg) multiply two partially-defined
+values and end up with something meaningful, so there seems
+little point in modelling multiplies, divides, etc, at that level
+of detail.</para>
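+
+<para>As a concrete (and hedged) illustration of this single-bit
+approximation, the V bits for the result of a two-argument helper
+boil down to something like the following C; the function names
+are invented for the example:</para>
+<programlisting><![CDATA[
+typedef unsigned int UInt;
+
+/* Any undefinedness at all collapses to a single 1 bit. */
+UInt pcast_to_1   ( UInt v ) { return v == 0 ? 0 : 1; }
+
+/* ... and is then spread back over the full result width. */
+UInt pcast_from_1 ( UInt v ) { return v == 0 ? 0x00000000 : 0xFFFFFFFF; }
+
+/* V bits for a helper's 32-bit result, given its args' V bits. */
+UInt helper_result_vbits ( UInt arg1_v, UInt arg2_v )
+{
+   UInt b = pcast_to_1(arg1_v) | pcast_to_1(arg2_v);   /* UifU of the bits */
+   return pcast_from_1(b);
+}]]></programlisting>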
+
+<para>Integer loads and stores are instrumented with firstly a
+test of the definedness of the address, followed by a
+<computeroutput>LOADV</computeroutput> or
+<computeroutput>STOREV</computeroutput> respectively.  These turn
+into calls to (for example)
+<computeroutput>VG_(helperc_LOADV4)</computeroutput>.  These
+helpers do two things: they perform an address-valid check, and
+they load or store V bits from/to the relevant address in the
+(simulated V-bit) memory.</para>
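+
+<para>A hedged sketch of what such a load helper amounts to is
+shown below; the helper names here are invented for the
+illustration and error reporting is reduced to a stub:</para>
+<programlisting><![CDATA[
+typedef unsigned int Addr;
+typedef unsigned int UInt;
+
+extern int  is_addressable ( Addr a, int nBytes );        /* consult A bits */
+extern UInt get_vbytes     ( Addr a, int nBytes );        /* consult V bits */
+extern void report_addressing_error ( Addr a, int nBytes );
+
+/* The shape of a LOADV-style helper: complain if the address is
+   bad, then hand back the shadow V bits for the loaded word. */
+UInt loadv4 ( Addr a )
+{
+   if (!is_addressable(a, 4))
+      report_addressing_error(a, 4);
+   return get_vbytes(a, 4);
+}]]></programlisting>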
+
+<para>FPU loads and stores are different.  As above the
+definedness of the address is first tested.  However, the helper
+routine for FPU loads
+(<computeroutput>VGM_(fpu_read_check)</computeroutput>) emits an
+error if either the address is invalid or the referenced area
+contains undefined values.  It has to do this because we do not
+simulate the FPU at all, and so cannot track definedness of
+values loaded into it from memory, so we have to check them as
+soon as they are loaded into the FPU, ie, at this point.  We
+notionally assume that everything in the FPU is defined.</para>
+
+<para>It follows therefore that FPU writes first check the
+definedness of the address, then the validity of the address, and
+finally mark the written bytes as well-defined.</para>
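+
+<para>In the same hedged spirit, the FPU-side checks just
+described amount to roughly the following; again the helper names
+are invented:</para>
+<programlisting><![CDATA[
+typedef unsigned int Addr;
+
+extern int  is_addressable ( Addr a, int nBytes );
+extern int  is_defined     ( Addr a, int nBytes );
+extern void report_addressing_error ( Addr a, int nBytes );
+extern void report_value_error      ( Addr a, int nBytes );
+extern void set_vbytes_defined      ( Addr a, int nBytes );
+
+void fpu_read_check ( Addr a, int nBytes )
+{
+   /* The FPU itself is not simulated, so undefinedness has to be
+      flagged here, at the moment the data enters the FPU. */
+   if (!is_addressable(a, nBytes))   report_addressing_error(a, nBytes);
+   else if (!is_defined(a, nBytes))  report_value_error(a, nBytes);
+}
+
+void fpu_write_check ( Addr a, int nBytes )
+{
+   if (!is_addressable(a, nBytes))   report_addressing_error(a, nBytes);
+   set_vbytes_defined(a, nBytes);    /* FPU contents are notionally defined */
+}]]></programlisting>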
+
+<para>If anyone is inspired to extend Valgrind to MMX/SSE insns,
+I suggest you use the same trick.  It works provided that the
+FPU/MMX unit is not used merely as a conduit to copy partially
+undefined data from one place in memory to another.
+Unfortunately the integer CPU is used like that (when copying C
+structs with holes, for example) and this is the cause of much of
+the elaborateness of the instrumentation here described.</para>
+
+<para><computeroutput>vg_instrument()</computeroutput> in
+<filename>vg_translate.c</filename> actually does the
+instrumentation.  There are comments explaining how each uinstr
+is handled, so we do not repeat that here.  As explained already,
+it is bit-accurate, except for calls to helper functions.
+Unfortunately the x86 insns
+<computeroutput>bt/bts/btc/btr</computeroutput> are done by
+helper fns, so bit-level accuracy is lost there.  This should be
+fixed by doing them inline; it will probably require adding a
+couple of new uinstrs.  Also, left and right rotates through the
+carry flag (x86 <computeroutput>rcl</computeroutput> and
+<computeroutput>rcr</computeroutput>) are approximated via a
+single V bit; so far this has not caused anyone to complain.  The
+non-carry rotates, <computeroutput>rol</computeroutput> and
+<computeroutput>ror</computeroutput>, are much more common and
+are done exactly.  Revisiting the instrumentation for AND and
+OR, it seems rather verbose, and I wonder if it could be done
+more concisely now.</para>
+
+<para>The lowercase <computeroutput>o</computeroutput> on many of
+the uopcodes in the running example indicates that the size field
+is zero, usually meaning a single-bit operation.</para>
+
+<para>Anyroads, the post-instrumented version of our running
+example looks like this:</para>
+
+<programlisting><![CDATA[
+Instrumented code:
+     0: GETVL     %EDX, q0
+     1: GETL      %EDX, t0
+
+     2: TAG1o     q0 = Left4 ( q0 )
+     3: INCL      t0
+
+     4: PUTVL     q0, %EDX
+     5: PUTL      t0, %EDX
+
+     6: TESTVL    q0
+     7: SETVL     q0
+     8: LOADVB    (t0), q0
+     9: LDB       (t0), t0
+
+    10: TAG1o     q0 = SWiden14 ( q0 )
+    11: WIDENL_Bs t0
+
+    12: PUTVL     q0, %EAX
+    13: PUTL      t0, %EAX
+
+    14: GETVL     %ECX, q8
+    15: GETL      %ECX, t8
+
+    16: MOVL      q0, q4
+    17: SHLL      $0x1, q4
+    18: TAG2o     q4 = UifU4 ( q8, q4 )
+    19: TAG1o     q4 = Left4 ( q4 )
+    20: LEA2L     1(t8,t0,2), t4
+
+    21: TESTVL    q4
+    22: SETVL     q4
+    23: LOADVB    (t4), q10
+    24: LDB       (t4), t10
+
+    25: SETVB     q12
+    26: MOVB      $0x20, t12
+
+    27: MOVL      q10, q14
+    28: TAG2o     q14 = ImproveAND1_TQ ( t10, q14 )
+    29: TAG2o     q10 = UifU1 ( q12, q10 )
+    30: TAG2o     q10 = DifD1 ( q14, q10 )
+    31: MOVL      q12, q14
+    32: TAG2o     q14 = ImproveAND1_TQ ( t12, q14 )
+    33: TAG2o     q10 = DifD1 ( q14, q10 )
+    34: MOVL      q10, q16
+    35: TAG1o     q16 = PCast10 ( q16 )
+    36: PUTVFo    q16
+    37: ANDB      t12, t10  (-wOSZACP)
+
+    38: INCEIPo   $9
+
+    39: GETVFo    q18
+    40: TESTVo    q18
+    41: SETVo     q18
+    42: Jnzo      $0x40435A50  (-rOSZACP)
+
+    43: JMPo      $0x40435A5B]]></programlisting>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.cleanup" 
+       xreflabel="UCode post-instrumentation cleanup">
+<title>UCode post-instrumentation cleanup</title>
+
+<para>This pass, coordinated by
+<computeroutput>vg_cleanup()</computeroutput>, removes redundant
+definedness computation created by the simplistic instrumentation
+pass.  It consists of two passes,
+<computeroutput>vg_propagate_definedness()</computeroutput>
+followed by
+<computeroutput>vg_delete_redundant_SETVs</computeroutput>.</para>
+
+<para><computeroutput>vg_propagate_definedness()</computeroutput>
+is a simple constant-propagation and constant-folding pass.  It
+tries to determine which
+<computeroutput>TempReg</computeroutput>s containing V bits will
+always indicate "fully defined", and it propagates this
+information as far as it can, and folds out as many operations as
+possible.  For example, the instrumentation for an ADD of a
+literal to a variable quantity will be reduced down so that the
+definedness of the result is simply the definedness of the
+variable quantity, since the literal is by definition fully
+defined.</para>
+
+<para><computeroutput>vg_delete_redundant_SETVs</computeroutput>
+removes <computeroutput>SETV</computeroutput>s on shadow
+<computeroutput>TempReg</computeroutput>s for which the next
+action is a write.  I don't think there's anything else worth
+saying about this; it is simple.  Read the sources for
+details.</para>
+
+<para>So the cleaned-up running example looks like this.  As
+above, I have inserted line breaks after every original
+(non-instrumentation) uinstr to aid readability.  As with
+straightforward ucode optimisation, the results in this block are
+undramatic because it is so short; longer blocks benefit more
+because they have more redundancy which gets eliminated.</para>
+
+<programlisting><![CDATA[
+at 29: delete UifU1 due to defd arg1
+at 32: change ImproveAND1_TQ to MOV due to defd arg2
+at 41: delete SETV
+at 31: delete MOV
+at 25: delete SETV
+at 22: delete SETV
+at 7: delete SETV
+
+     0: GETVL     %EDX, q0
+     1: GETL      %EDX, t0
+
+     2: TAG1o     q0 = Left4 ( q0 )
+     3: INCL      t0
+
+     4: PUTVL     q0, %EDX
+     5: PUTL      t0, %EDX
+
+     6: TESTVL    q0
+     8: LOADVB    (t0), q0
+     9: LDB       (t0), t0
+
+    10: TAG1o     q0 = SWiden14 ( q0 )
+    11: WIDENL_Bs t0
+
+    12: PUTVL     q0, %EAX
+    13: PUTL      t0, %EAX
+
+    14: GETVL     %ECX, q8
+    15: GETL      %ECX, t8
+
+    16: MOVL      q0, q4
+    17: SHLL      $0x1, q4
+    18: TAG2o     q4 = UifU4 ( q8, q4 )
+    19: TAG1o     q4 = Left4 ( q4 )
+    20: LEA2L     1(t8,t0,2), t4
+
+    21: TESTVL    q4
+    23: LOADVB    (t4), q10
+    24: LDB       (t4), t10
+
+    26: MOVB      $0x20, t12
+
+    27: MOVL      q10, q14
+    28: TAG2o     q14 = ImproveAND1_TQ ( t10, q14 )
+    30: TAG2o     q10 = DifD1 ( q14, q10 )
+    32: MOVL      t12, q14
+    33: TAG2o     q10 = DifD1 ( q14, q10 )
+    34: MOVL      q10, q16
+    35: TAG1o     q16 = PCast10 ( q16 )
+    36: PUTVFo    q16
+    37: ANDB      t12, t10  (-wOSZACP)
+
+    38: INCEIPo   $9
+    39: GETVFo    q18
+    40: TESTVo    q18
+    42: Jnzo      $0x40435A50  (-rOSZACP)
+
+    43: JMPo      $0x40435A5B]]></programlisting>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.transfrom" xreflabel="Translation from UCode">
+<title>Translation from UCode</title>
+
+<para>This is all very simple, even though
+<filename>vg_from_ucode.c</filename> is a big file.
+Position-independent x86 code is generated into a dynamically
+allocated array <computeroutput>emitted_code</computeroutput>;
+this is doubled in size when it overflows.  Eventually the array
+is handed back to the caller of
+<computeroutput>VG_(translate)</computeroutput>, who must copy
+the result into TC and TT, and free the array.</para>
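+
+<para>The emission buffer itself is nothing more exotic than a
+byte array which doubles when full; a hedged sketch of the idea
+(not the actual code) is:</para>
+<programlisting><![CDATA[
+#include <stdlib.h>
+
+typedef struct {
+   unsigned char* buf;
+   int            used;
+   int            size;
+} EmitBuf;
+
+static void emit_byte ( EmitBuf* eb, unsigned char b )
+{
+   if (eb->used == eb->size) {                       /* full: double it */
+      eb->size = (eb->size == 0) ? 64 : 2 * eb->size;
+      eb->buf  = realloc(eb->buf, eb->size);
+   }
+   eb->buf[eb->used++] = b;
+}]]></programlisting>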
+
+<para>This file is structured into four layers of abstraction,
+which, thankfully, are glued back together with extensive
+<computeroutput>__inline__</computeroutput> directives.  From the
+bottom upwards:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>Address-mode emitters,
+    <computeroutput>emit_amode_regmem_reg</computeroutput> et
+    al.</para>
+  </listitem>
+
+  <listitem>
+    <para>Emitters for specific x86 instructions.  There are
+    quite a lot of these, with names such as
+    <computeroutput>emit_movv_offregmem_reg</computeroutput>.
+    The <computeroutput>v</computeroutput> suffix is Intel
+    parlance for a 16/32 bit insn; there are also
+    <computeroutput>b</computeroutput> suffixes for 8 bit
+    insns.</para>
+  </listitem>
+
+  <listitem>
+    <para>The next level up are the
+    <computeroutput>synth_*</computeroutput> functions, which
+    synthesise possibly a sequence of raw x86 instructions to do
+    some simple task.  Some of these are quite complex because
+    they have to work around Intel's silly restrictions on
+    subregister naming.  See
+    <computeroutput>synth_nonshiftop_reg_reg</computeroutput> for
+    example.</para>
+  </listitem>
+
+  <listitem>
+    <para>Finally, at the top of the heap, we have
+    <computeroutput>emitUInstr()</computeroutput>, which emits
+    code for a single uinstr.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Some comments:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>The hack for FPU instructions becomes apparent here.
+    To do a <computeroutput>FPU</computeroutput> ucode
+    instruction, we load the simulated FPU's state from its
+    <computeroutput>VG_(baseBlock)</computeroutput> into the real
+    FPU using an x86 <computeroutput>frstor</computeroutput>
+    insn, do the ucode <computeroutput>FPU</computeroutput> insn
+    on the real CPU, and write the updated FPU state back into
+    <computeroutput>VG_(baseBlock)</computeroutput> using an
+    <computeroutput>fnsave</computeroutput> instruction.  This is
+    pretty brutal, but is simple and it works, and even seems
+    tolerably efficient.  There is no attempt to cache the
+    simulated FPU state in the real FPU over multiple
+    back-to-back ucode FPU instructions.</para>
+
+    <para><computeroutput>FPU_R</computeroutput> and
+    <computeroutput>FPU_W</computeroutput> are also done this
+    way, with the minor complication that we need to patch in
+    some addressing mode bits so the resulting insn knows the
+    effective address to use.  This is easy because of the
+    regularity of the x86 FPU instruction encodings.</para>
+  </listitem>
+
+  <listitem>
+    <para>An analogous trick is done with ucode insns which
+    claim, in their <computeroutput>flags_r</computeroutput> and
+    <computeroutput>flags_w</computeroutput> fields, that they
+    read or write the simulated
+    <computeroutput>%EFLAGS</computeroutput>.  For such cases we
+    first copy the simulated
+    <computeroutput>%EFLAGS</computeroutput> into the real
+    <computeroutput>%eflags</computeroutput>, then do the insn,
+    then, if the insn says it writes the flags, copy back to
+    <computeroutput>%EFLAGS</computeroutput>.  This is a bit
+    expensive, which is why the ucode optimisation pass goes to
+    some effort to remove redundant flag-update annotations.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>And so ... that's the end of the documentation for the
+instrumenting translator!  It's really not that complex,
+because it's composed as a sequence of simple(ish) self-contained
+transformations on straight-line blocks of code.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.dispatch" xreflabel="Top-level dispatch loop">
+<title>Top-level dispatch loop</title>
+
+<para>Urk.  In <computeroutput>VG_(toploop)</computeroutput>.
+This is basically boring and unsurprising, not to mention fiddly
+and fragile.  It needs to be cleaned up.</para>
+
+<para>Perhaps the only surprise is that the whole thing is run on
+top of a <computeroutput>setjmp</computeroutput>-installed
+exception handler, because, supposing a translation got a
+segfault, we have to bail out of the Valgrind-supplied exception
+handler <computeroutput>VG_(oursignalhandler)</computeroutput>
+and immediately start running the client's segfault handler, if
+it has one.  In particular we can't finish the current basic
+block and then deliver the signal at some convenient future
+point, because signals like SIGILL, SIGSEGV and SIGBUS mean that
+the faulting insn should not simply be re-tried.  (I'm sure there
+is a clearer way to explain this).</para>
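+
+<para>The shape of that arrangement, in a hedged sketch with
+invented names, is roughly:</para>
+<programlisting><![CDATA[
+#include <setjmp.h>
+
+extern void run_innerloop ( void );                  /* run translations      */
+extern void deliver_fault_to_client ( int signo );   /* run client's handler  */
+
+static jmp_buf toploop_jmpbuf;
+static int     pending_fault_signo;
+
+/* The signal handler longjmp()s back here when a translation faults. */
+void toploop ( void )
+{
+   while (1) {
+      if (setjmp(toploop_jmpbuf) == 0) {
+         run_innerloop();                            /* the normal case */
+      } else {
+         /* A translation took SIGSEGV/SIGILL/SIGBUS: don't retry the
+            faulting insn, hand the signal straight to the client. */
+         deliver_fault_to_client(pending_fault_signo);
+      }
+   }
+}]]></programlisting>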
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.lazy" 
+       xreflabel="Lazy updates of the simulated program counter">
+<title>Lazy updates of the simulated program counter</title>
+
+<para>Simulated <computeroutput>%EIP</computeroutput> is not
+updated after every simulated x86 insn as this was regarded as
+too expensive.  Instead ucode
+<computeroutput>INCEIP</computeroutput> insns move it along as
+and when necessary.  Currently we don't allow it to fall more
+than 4 bytes behind reality (see
+<computeroutput>VG_(disBB)</computeroutput> for the way this
+works).</para>
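+
+<para>A hedged sketch of the bookkeeping, with invented names,
+might look like this:</para>
+<programlisting><![CDATA[
+extern void emit_INCEIP ( int delta );    /* append an INCEIP uinstr */
+
+static int eip_delta = 0;    /* bytes the simulated %EIP lags behind */
+
+/* Called after each x86 insn has been translated. */
+void note_insn_translated ( int insn_length )
+{
+   eip_delta += insn_length;
+   if (eip_delta > 4) {          /* don't let it fall further behind */
+      emit_INCEIP(eip_delta);
+      eip_delta = 0;
+   }
+}]]></programlisting>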
+
+<para>Note that <computeroutput>%EIP</computeroutput> is always
+brought up to date by the inner dispatch loop in
+<computeroutput>VG_(dispatch)</computeroutput>, so that if the
+client takes a fault we know at least which basic block this
+happened in.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.signals" xreflabel="Signals">
+<title>Signals</title>
+
+<para>Horrible, horrible.  <filename>vg_signals.c</filename>.
+Basically, since we have to intercept all system calls anyway, we
+can see when the client tries to install a signal handler.  If it
+does so, we make a note of what the client asked to happen, and
+ask the kernel to route the signal to our own signal handler,
+<computeroutput>VG_(oursignalhandler)</computeroutput>.  This
+simply notes the delivery of signals, and returns.</para>
+
+<para>Every 1000 basic blocks, we see if more signals have
+arrived.  If so,
+<computeroutput>VG_(deliver_signals)</computeroutput> builds
+signal delivery frames on the client's stack, and allows their
+handlers to be run.  Valgrind places in these signal delivery
+frames a bogus return address,
+<computeroutput>VG_(signalreturn_bogusRA)</computeroutput>, and
+checks all jumps to see if any jump to it.  If so, this is a sign
+that a signal handler is returning, so Valgrind removes the
+relevant signal frame from the client's stack, restores, from
+the signal frame, the simulated state as it was before the
+signal was delivered, and allows the client to run onwards.  We
+have to do
+it this way because some signal handlers never return, they just
+<computeroutput>longjmp()</computeroutput>, which nukes the
+signal delivery frame.</para>
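+
+<para>In outline, the check made on every jump amounts to the
+following hedged sketch (the unwinding and dispatch functions are
+invented names):</para>
+<programlisting><![CDATA[
+typedef unsigned int Addr;
+
+extern void signalreturn_bogusRA ( void );          /* the magic marker address */
+extern void pop_signal_frame_and_restore ( void );  /* invented name            */
+extern void run_translation_at ( Addr a );          /* invented name            */
+
+void dispatch_jump ( Addr target )
+{
+   if (target == (Addr) &signalreturn_bogusRA) {
+      /* The client's signal handler just "returned": unwind our frame. */
+      pop_signal_frame_and_restore();
+   } else {
+      run_translation_at(target);
+   }
+}]]></programlisting>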
+
+<para>The Linux kernel has a different but equally horrible hack
+for detecting signal handler returns.  Discovering it is left as
+an exercise for the reader.</para>
+
+</sect2>
+
+
+<sect2 id="mc-tech-docs.todo">
+<title>To be written</title>
+
+<para>The following is a list of as-yet-not-written stuff. Apologies.</para>
+<orderedlist>
+  <listitem>
+    <para>The translation cache and translation table</para>
+  </listitem>
+  <listitem>
+    <para>Exceptions, creating new translations</para>
+  </listitem>
+  <listitem>
+    <para>Self-modifying code</para>
+  </listitem>
+  <listitem>
+    <para>Errors, error contexts, error reporting, suppressions</para>
+  </listitem>
+  <listitem>
+    <para>Client malloc/free</para>
+  </listitem>
+  <listitem>
+    <para>Low-level memory management</para>
+  </listitem>
+  <listitem>
+    <para>A and V bitmaps</para>
+  </listitem>
+  <listitem>
+    <para>Symbol table management</para>
+  </listitem>
+  <listitem>
+    <para>Dealing with system calls</para>
+  </listitem>
+  <listitem>
+    <para>Namespace management</para>
+  </listitem>
+  <listitem>
+    <para>GDB attaching</para>
+  </listitem>
+  <listitem>
+    <para>Non-dependence on glibc or anything else</para>
+  </listitem>
+  <listitem>
+    <para>The leak detector</para>
+  </listitem>
+  <listitem>
+    <para>Performance problems</para>
+  </listitem>
+  <listitem>
+    <para>Continuous sanity checking</para>
+  </listitem>
+  <listitem>
+    <para>Tracing, or not tracing, child processes</para>
+  </listitem>
+  <listitem>
+    <para>Assembly glue for syscalls</para>
+  </listitem>
+</orderedlist>
+
+</sect2>
+
+</sect1>
+
+
+
+
+<sect1 id="mc-tech-docs.extensions" xreflabel="Extensions">
+<title>Extensions</title>
+
+<para>Some comments about Stuff To Do.</para>
+
+<sect2 id="mc-tech-docs.bugs" xreflabel="Bugs">
+<title>Bugs</title>
+
+<para>Stephan Kulow and Marc Mutz report problems with kmail in
+KDE 3 CVS (RC2 ish) when run on Valgrind.  Stephan has it
+deadlocking; Marc has it looping at startup.  I can't repro
+either behaviour. Needs repro-ing and fixing.</para>
+
+</sect2>
+
+
+<sect2 id="mc-tech-docs.threads" xreflabel="Threads">
+<title>Threads</title>
+
+<para>Doing a good job of thread support strikes me as almost a
+research-level problem.  The central issues are how to do fast
+cheap locking of the
+<computeroutput>VG_(primary_map)</computeroutput> structure,
+whether or not accesses to the individual secondary maps need
+locking, what race-condition issues result, and whether the
+already-nasty mess that is the signal simulator needs further
+hackery.</para>
+
+<para>I realise that threads are the most-frequently-requested
+feature, and I am thinking about it all.  If you have guru-level
+understanding of fast mutual exclusion mechanisms and race
+conditions, I would be interested in hearing from you.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.verify" xreflabel="Verification suite">
+<title>Verification suite</title>
+
+<para>Directory <computeroutput>tests/</computeroutput> contains
+various ad-hoc tests for Valgrind.  However, there is no
+systematic verification or regression suite that, for example,
+exercises all the stuff in <filename>vg_memory.c</filename>, to
+ensure that illegal memory accesses and undefined value uses are
+detected as they should be.  It would be good to have such a
+suite.</para>
+
+</sect2>
+
+
+<sect2 id="mc-tech-docs.porting" xreflabel="Porting to other platforms">
+<title>Porting to other platforms</title>
+
+<para>It would be great if Valgrind were ported to x86 FreeBSD,
+NetBSD and OpenBSD, if that's possible (doesn't OpenBSD use
+a.out-style executables, not ELF?)</para>
+
+<para>The main difficulties, for an x86-ELF platform, seem to
+be:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>You'd need to rewrite the
+    <computeroutput>/proc/self/maps</computeroutput> parser
+    (<filename>vg_procselfmaps.c</filename>).  Easy.</para>
+  </listitem>
+
+  <listitem>
+    <para>You'd need to rewrite
+    <filename>vg_syscall_mem.c</filename>, or, more specifically,
+    provide one for your OS.  This is tedious, but you can
+    implement syscalls on demand, and the Linux kernel interface
+    is, for the most part, going to look very similar to the *BSD
+    interfaces, so it's really a copy-paste-and-modify-on-demand
+    job.  As part of this, you'd need to supply a new
+    <filename>vg_kerneliface.h</filename> file.</para>
+  </listitem>
+
+  <listitem>
+    <para>You'd also need to change the syscall wrappers for
+    Valgrind's internal use, in
+    <filename>vg_mylibc.c</filename>.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>All in all, I think a port to x86-ELF *BSDs is not really
+very difficult, and in some ways I would like to see it happen,
+because that would force a more clear factoring of Valgrind into
+platform dependent and independent pieces.  Not to mention, *BSD
+folks also deserve to use Valgrind just as much as the Linux crew
+do.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="mc-tech-docs.easystuff" 
+       xreflabel="Easy stuff which ought to be done">
+<title>Easy stuff which ought to be done</title>
+
+
+<sect2 id="mc-tech-docs.mmx" xreflabel="MMX Instructions">
+<title>MMX Instructions</title>
+
+<para>MMX insns should be supported, using the same trick as for
+FPU insns.  If the MMX registers are not used to copy
+uninitialised junk from one place to another in memory, this
+means we don't have to actually simulate the internal MMX unit
+state, so the FPU hack applies.  This should be fairly
+easy.</para>
+
+</sect2>
+
+
+<sect2 id="mc-tech-docs.fixstabs" xreflabel="Fix stabs-info Reader">
+<title>Fix stabs-info reader</title>
+
+<para>The machinery in <filename>vg_symtab2.c</filename> which
+reads "stabs" style debugging info is pretty weak.  It usually
+correctly translates simulated program counter values into line
+numbers and procedure names, but the file name is often
+completely wrong.  I think the logic used to parse "stabs"
+entries is weak.  It should be fixed.  The simplest solution,
+IMO, is to copy either the logic or simply the code out of GNU
+binutils which does this; since GDB can clearly get it right,
+binutils (or GDB?) must have code to do this somewhere.</para>
+
+</sect2>
+
+
+
+<sect2 id="mc-tech-docs.x86instr" xreflabel="BT/BTC/BTS/BTR">
+<title>BT/BTC/BTS/BTR</title>
+
+<para>These are x86 instructions which test, complement, set, or
+reset, a single bit in a word.  At the moment they are both
+incorrectly implemented and incorrectly instrumented.</para>
+
+<para>The incorrect instrumentation is due to use of helper
+functions.  This means we lose bit-level definedness tracking,
+which could wind up giving spurious uninitialised-value use
+errors.  The Right Thing to do is to invent a couple of new
+UOpcodes, I think <computeroutput>GET_BIT</computeroutput> and
+<computeroutput>SET_BIT</computeroutput>, which can be used to
+implement all 4 x86 insns, get rid of the helpers, and give
+bit-accurate instrumentation rules for the two new
+UOpcodes.</para>
+
+<para>I realised the other day that they are mis-implemented too.
+The x86 insns take a bit-index and a register or memory location
+to access.  For registers the bit index clearly can only be in
+the range zero to register-width minus 1, and I assumed the same
+applied to memory locations too.  But evidently not; for memory
+locations the index can be arbitrary, and the processor will
+index arbitrarily into memory as a result.  This too should be
+fixed.  Sigh.  Presumably indexing outside the immediate word is
+not actually used by any programs yet tested on Valgrind, for
+otherwise they (presumably) would simply not work at all.  If you
+plan to hack on this, first check the Intel docs to make sure my
+understanding is really correct.</para>
+
+</sect2>
+
+
+<sect2 id="mc-tech-docs.prefetch" xreflabel="Using PREFETCH Instructions">
+<title>Using PREFETCH Instructions</title>
+
+<para>Here's a small but potentially interesting project for
+performance junkies.  Experiments with valgrind's code generator
+and optimiser(s) suggest that reducing the number of instructions
+executed in the translations and mem-check helpers gives
+disappointingly small performance improvements.  Perhaps this is
+because performance of Valgrindified code is limited by cache
+misses.  After all, each read in the original program now gives
+rise to at least three reads, one for the
+<computeroutput>VG_(primary_map)</computeroutput>, one for the
+resulting secondary map, and one for the original.  Not to
+mention, the
+instrumented translations are 13 to 14 times larger than the
+originals.  All in all one would expect the memory system to be
+hammered to hell and then some.</para>
+
+<para>So here's an idea.  An x86 insn involving a read from
+memory, after instrumentation, will turn into ucode of the
+following form:</para>
+<programlisting><![CDATA[
+... calculate effective addr, into ta and qa ...
+  TESTVL qa             -- is the addr defined?
+  LOADV (ta), qloaded   -- fetch V bits for the addr
+  LOAD  (ta), tloaded   -- do the original load]]></programlisting>
+
+<para>At the point where the
+<computeroutput>LOADV</computeroutput> is done, we know the
+actual address (<computeroutput>ta</computeroutput>) from which
+the real <computeroutput>LOAD</computeroutput> will be done.  We
+also know that the <computeroutput>LOADV</computeroutput> will
+take around 20 x86 insns to do.  So it seems plausible that doing
+a prefetch of <computeroutput>ta</computeroutput> just before the
+<computeroutput>LOADV</computeroutput> might just avoid a miss at
+the <computeroutput>LOAD</computeroutput> point, and that might
+be a significant performance win.</para>
+
+<para>Prefetch insns are notoriously temperamental, more often
+than not making things worse rather than better, so this would
+require considerable fiddling around.  It's complicated because
+Intels and AMDs have different prefetch insns with different
+semantics, so that too needs to be taken into account.  As a
+general rule, even placing the prefetches immediately before the
+<computeroutput>LOADV</computeroutput> insn puts them too near the
+<computeroutput>LOAD</computeroutput>; the ideal distance is
+apparently circa 200 CPU cycles.  So it might be worth having
+another analysis/transformation pass which pushes prefetches as
+far back as possible, hopefully immediately after the effective
+address becomes available.</para>
+
+<para>Doing too many prefetches is also bad because they soak up
+bus bandwidth / cpu resources, so some cleverness in deciding
+which loads to prefetch and which to not might be helpful.  One
+can imagine not prefetching client-stack-relative
+(<computeroutput>%EBP</computeroutput> or
+<computeroutput>%ESP</computeroutput>) accesses, since the stack
+in general tends to show good locality anyway.</para>
+
+<para>There's quite a lot of experimentation to do here, but I
+think it might make an interesting week's work for
+someone.</para>
+
+<para>As of 15-ish March 2002, I've started to experiment with
+this, using the AMD
+<computeroutput>prefetch/prefetchw</computeroutput> insns.</para>
+
+</sect2>
+
+
+<sect2 id="mc-tech-docs.pranges" xreflabel="User-defined Permission Ranges">
+<title>User-defined Permission Ranges</title>
+
+<para>This is quite a large project -- perhaps a month's hacking
+for a capable hacker to do a good job -- but it's potentially
+very interesting.  The outcome would be that Valgrind could
+detect a whole class of bugs which it currently cannot.</para>
+
+<para>The presentation falls into two pieces.</para>
+
+<sect3 id="mc-tech-docs.psetting" 
+  xreflabel="Part 1: User-defined Address-range Permission Setting">
+<title>Part 1: User-defined Address-range Permission Setting</title>
+
+<para>Valgrind intercepts the client's
+<computeroutput>malloc</computeroutput>,
+<computeroutput>free</computeroutput>, etc calls, watches system
+calls, and watches the stack pointer move.  This is currently the
+only way it knows about which addresses are valid and which not.
+Sometimes the client program knows extra information about its
+memory areas.  For example, the client could at some point know
+that all elements of an array are out-of-date.  We would like to
+be able to convey to Valgrind this information that the array is
+now addressable-but-uninitialised, so that Valgrind can then warn
+if elements are used before they get new values.</para>
+
+<para>What I would like are some macros like this:</para>
+<programlisting><![CDATA[
+  VALGRIND_MAKE_NOACCESS(addr, len)
+  VALGRIND_MAKE_WRITABLE(addr, len)
+  VALGRIND_MAKE_READABLE(addr, len)]]></programlisting>
+
+<para>and also, to check that memory is
+addressible/initialised,</para>
+<programlisting><![CDATA[
+  VALGRIND_CHECK_ADDRESSIBLE(addr, len)
+  VALGRIND_CHECK_INITIALISED(addr, len)]]></programlisting>
+
+<para>I then include in my sources a header defining these
+macros, rebuild my app, run under Valgrind, and get user-defined
+checks.</para>
+
+<para>Now here's a neat trick.  It's a nuisance to have to
+re-link the app with some new library which implements the above
+macros.  So the idea is to define the macros so that the
+resulting executable is still completely stand-alone, and can be
+run without Valgrind, in which case the macros do nothing, but
+when run on Valgrind, the Right Thing happens.  How to do this?
+The idea is for these macros to turn into a piece of inline
+assembly code, which (1) has no effect when run on the real CPU,
+(2) is easily spotted by Valgrind's JITter, and (3) no sane
+person would ever write, which is important for avoiding false
+matches in (2).  So here's a suggestion:</para>
+<programlisting><![CDATA[
+  VALGRIND_MAKE_NOACCESS(addr, len)]]></programlisting>
+
+<para>becomes (roughly speaking)</para>
+<programlisting><![CDATA[
+  movl addr, %eax
+  movl len,  %ebx
+  movl $1,   %ecx   -- 1 describes the action; MAKE_WRITABLE might be
+                    -- 2, etc
+  rorl $13, %ecx
+  rorl $19, %ecx
+  rorl $11, %eax
+  rorl $21, %eax]]></programlisting>
+
+<para>The rotate sequences have no effect, and it's unlikely they
+would appear for any other reason, but they define a unique
+byte-sequence which the JITter can easily spot.  Using the
+operand constraints section at the end of a gcc inline-assembly
+statement, we can tell gcc that the assembly fragment kills
+<computeroutput>%eax</computeroutput>,
+<computeroutput>%ebx</computeroutput>,
+<computeroutput>%ecx</computeroutput> and the condition codes, so
+this fragment is made harmless when not running on Valgrind, runs
+quickly when not on Valgrind, and does not require any other
+library support.</para>
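+
+<para>Speculatively, and assuming nothing beyond standard gcc
+inline assembly, the first macro could be wrapped up along the
+following lines.  This is a sketch of the idea, not a tested
+interface; the constraint letters simply pin the three values
+into the registers the byte-sequence expects:</para>
+<programlisting><![CDATA[
+#define VALGRIND_MAKE_NOACCESS(addr, len)                           \
+   __asm__ __volatile__(                                            \
+      /* the no-op rotate signature the JITter would look for */    \
+      "rorl $13, %%ecx\n\t"                                         \
+      "rorl $19, %%ecx\n\t"                                         \
+      "rorl $11, %%eax\n\t"                                         \
+      "rorl $21, %%eax"                                             \
+      : /* no outputs */                                            \
+      : "a" ((unsigned int)(addr)),  /* addr pinned into %eax */    \
+        "b" ((unsigned int)(len)),   /* len  pinned into %ebx */    \
+        "c" (1)                      /* 1 = MAKE_NOACCESS     */    \
+      : "cc" )]]></programlisting>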
+
+
+</sect3>
+
+
+<sect3 id="mc-tech-docs.prange-detect" 
+  xreflabel="Part 2: Using it to detect Interference between Stack 
+Variables">
+<title>Part 2: Using it to detect Interference between Stack 
+Variables</title>
+
+<para>Currently Valgrind cannot detect errors of the following
+form:</para>
+<programlisting><![CDATA[
+void fooble ( void )
+{
+  int a[10];
+  int b[10];
+  a[10] = 99;
+}]]></programlisting>
+
+<para>Now imagine rewriting this as</para>
+<programlisting><![CDATA[
+void fooble ( void )
+{
+  int spacer0;
+  int a[10];
+  int spacer1;
+  int b[10];
+  int spacer2;
+  VALGRIND_MAKE_NOACCESS(&spacer0, sizeof(int));
+  VALGRIND_MAKE_NOACCESS(&spacer1, sizeof(int));
+  VALGRIND_MAKE_NOACCESS(&spacer2, sizeof(int));
+  a[10] = 99;
+}]]></programlisting>
+
+<para>Now the invalid write is certain to hit
+<computeroutput>spacer0</computeroutput> or
+<computeroutput>spacer1</computeroutput>, so Valgrind will spot
+the error.</para>
+
+<para>There are two complications.</para>
+
+<orderedlist>
+
+  <listitem>
+    <para>The first is that we don't want to annotate sources by
+    hand, so the Right Thing to do is to write a C/C++ parser,
+    annotator, prettyprinter which does this automatically, and
+    run it on post-CPP'd C/C++ source.  See
+    http://www.cacheprof.org for an example of a system which
+    transparently inserts another phase into the gcc/g++
+    compilation route.  The parser/prettyprinter is probably not
+    as hard as it sounds; I would write it in Haskell, a powerful
+    functional language well suited to doing symbolic
+    computation, with which I am intimately familiar.  There is
+    already a C parser written in Haskell by someone in the
+    Haskell community, and that would probably be a good starting
+    point.</para>
+  </listitem>
+
+
+  <listitem>
+    <para>The second complication is how to get rid of these
+    <computeroutput>NOACCESS</computeroutput> records inside
+    Valgrind when the instrumented function exits; after all,
+    these refer to stack addresses and will make no sense
+    whatever when some other function happens to re-use the same
+    stack address range, probably shortly afterwards.  I think I
+    would be inclined to define a special stack-specific
+    macro:</para>
+<programlisting><![CDATA[
+  VALGRIND_MAKE_NOACCESS_STACK(addr, len)]]></programlisting>
+    <para>which causes Valgrind to record the client's
+    <computeroutput>%ESP</computeroutput> at the time it is
+    executed.  Valgrind will then watch for changes in
+    <computeroutput>%ESP</computeroutput> and discard such
+    records as soon as the protected area is uncovered by an
+    increase in <computeroutput>%ESP</computeroutput>.  I
+    hesitate with this scheme only because it is potentially
+    expensive, if there are hundreds of such records, and
+    considering that changes in
+    <computeroutput>%ESP</computeroutput> already require
+    expensive messing with stack access permissions.</para>
+  </listitem>
+</orderedlist>
+
+<para>This is probably easier and more robust than for the
+instrumenter program to try and spot all exit points for the
+procedure and place suitable deallocation annotations there.
+Plus C++ procedures can bomb out at any point if they get an
+exception, so spotting return points at the source level just
+won't work at all.</para>
+
+<para>Although it involves some work, it's all eminently doable, and it would
+make Valgrind into an even-more-useful tool.</para>
+
+</sect3>
+
+</sect2>
+
+</sect1>
+</chapter>
diff --git a/memcheck/docs/mc_main.html b/memcheck/docs/mc_main.html
deleted file mode 100644
index 022fe53..0000000
--- a/memcheck/docs/mc_main.html
+++ /dev/null
@@ -1,841 +0,0 @@
-
-<html>
-  <head>
-    <title>Memcheck: a heavyweight memory checker</title>
-  </head>
-
-<a name="mc-top"></a>
-<h2>3&nbsp; <b>Memcheck</b>: a heavyweight memory checker</h2>
-
-To use this tool, you must specify <code>--tool=memcheck</code> on the
-Valgrind command line.
-
-<h3>3.1&nbsp; Kinds of bugs that memcheck can find</h3>
-
-Memcheck is Valgrind-1.0.X's checking mechanism bundled up into a tool.
-    All reads and writes of memory are checked, and calls to
-    malloc/new/free/delete are intercepted. As a result, memcheck can
-    detect the following problems:
-    <ul>
-        <li>Use of uninitialised memory</li>
-        <li>Reading/writing memory after it has been free'd</li>
-        <li>Reading/writing off the end of malloc'd blocks</li>
-        <li>Reading/writing inappropriate areas on the stack</li>
-        <li>Memory leaks -- where pointers to malloc'd blocks are lost
-            forever</li>
-        <li>Mismatched use of malloc/new/new [] vs free/delete/delete []</li>
-        <li>Overlapping <code>src</code> and <code>dst</code> pointers in 
-            <code>memcpy()</code> and related functions</li>
-        <li>Some misuses of the POSIX pthreads API</li>
-    </ul>
-    <p>
-
-
-<h3>3.2&nbsp; Command-line flags specific to memcheck</h3>
-
-<ul>
-  <li><code>--leak-check=no</code> [default]<br>
-      <code>--leak-check=yes</code> 
-      <p>When enabled, search for memory leaks when the client program
-      finishes.  A memory leak means a malloc'd block, which has not
-      yet been free'd, but to which no pointer can be found.  Such a
-      block can never be free'd by the program, since no pointer to it
-      exists.  Leak checking is disabled by default because it tends
-      to generate dozens of error messages.  </li><br><p>
-
-  <li><code>--show-reachable=no</code> [default]<br>
-      <code>--show-reachable=yes</code> 
-      <p>When disabled, the memory leak detector only shows blocks to
-      which it cannot find a pointer at all, or can only find a
-      pointer to the middle of.  These blocks are prime candidates for
-      memory leaks.  When enabled, the leak detector also reports on
-      blocks which it could find a pointer to.  Your program could, at
-      least in principle, have freed such blocks before exit.
-      Contrast this to blocks for which no pointer, or only an
-      interior pointer could be found: they are more likely to
-      indicate memory leaks, because you do not actually have a
-      pointer to the start of the block which you can hand to
-      <code>free</code>, even if you wanted to.  </li><br><p>
-
-  <li><code>--leak-resolution=low</code> [default]<br>
-      <code>--leak-resolution=med</code> <br>
-      <code>--leak-resolution=high</code>
-      <p>When doing leak checking, determines how willing Memcheck is
-      to consider different backtraces to be the same.  When set to
-      <code>low</code>, the default, only the first two entries need
-      match.  When <code>med</code>, four entries have to match.  When
-      <code>high</code>, all entries need to match.  
-      <p>
-      For hardcore leak debugging, you probably want to use
-      <code>--leak-resolution=high</code> together with 
-      <code>--num-callers=40</code> or some such large number.  Note
-      however that this can give an overwhelming amount of
-      information, which is why the defaults are 4 callers and
-      low-resolution matching.
-      <p>
-      Note that the <code>--leak-resolution=</code> setting does not
-      affect Memcheck's ability to find leaks.  It only changes how
-      the results are presented.
-      </li><br><p>
-
-  <li><code>--freelist-vol=&lt;number></code> [default: 1000000]
-      <p>When the client program releases memory using free (in C) or
-      delete (C++), that memory is not immediately made available for
-      re-allocation.  Instead it is marked inaccessible and placed in
-      a queue of freed blocks.  The purpose is to delay the point at
-      which freed-up memory comes back into circulation.  This
-      increases the chance that Memcheck will be able to detect
-      invalid accesses to blocks for some significant period of time
-      after they have been freed.  
-      <p>
-      This flag specifies the maximum total size, in bytes, of the
-      blocks in the queue.  The default value is one million bytes.
-      Increasing this increases the total amount of memory used by
-      Memcheck but may detect invalid uses of freed blocks which would
-      otherwise go undetected.</li><br><p>
-
-  <li><code>--workaround-gcc296-bugs=no</code> [default]<br>
-      <code>--workaround-gcc296-bugs=yes</code> <p>When enabled,
-      Memcheck assumes that reads and writes some small distance below
-      the stack pointer <code>%esp</code> are due to bugs in gcc 2.96,
-      and does not report them.  The "small distance" is 256 bytes by
-      default.
-      Note that gcc 2.96 is the default compiler on some popular Linux
-      distributions (RedHat 7.X, Mandrake) and so you may well need to
-      use this flag.  Do not use it if you do not have to, as it can
-      cause real errors to be overlooked.  Another option is to use a
-      gcc/g++ which does not generate accesses below the stack
-      pointer.  2.95.3 seems to be a good choice in this respect.
-      <p>
-      Unfortunately (27 Feb 02) it looks like g++ 3.0.4 has a similar
-      bug, so you may need to issue this flag if you use 3.0.4.  A
-      while later (early Apr 02) this was confirmed as a scheduling bug
-      in g++-3.0.4.
-      </li><br><p>
-
-  <li><code>--partial-loads-ok=yes</code> [the default]<br>
-      <code>--partial-loads-ok=no</code>
-      <p>Controls how Memcheck handles word (4-byte) loads from
-      addresses for which some bytes are addressible and others
-      are not.  When <code>yes</code> (the default), such loads
-      do not elicit an address error.  Instead, the loaded V bytes
-      corresponding to the illegal addresses indicate undefined, and
-      those corresponding to legal addresses are loaded from shadow 
-      memory, as usual.
-      <p>
-      When <code>no</code>, loads from partially
-      invalid addresses are treated the same as loads from completely
-      invalid addresses: an illegal-address error is issued,
-      and the resulting V bytes indicate valid data.
-      </li><br><p>
-
-  <li><code>--cleanup=no</code><br>
-      <code>--cleanup=yes</code> [default]
-      <p><b>This is a flag to help debug valgrind itself.  It is of no
-      use to end-users.</b> When enabled, various improvements are
-      applied to the post-instrumented intermediate code, aimed at
-      removing redundant value checks.</li><br>
-      <p>
-</ul>
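-
-For example, a fairly thorough leak-hunting run, using the flags
-described above plus the core <code>--num-callers</code> flag, might
-be started like this (illustrative only):
-<pre>
-  valgrind --tool=memcheck --leak-check=yes --show-reachable=yes \
-           --leak-resolution=high --num-callers=40  myprog arg1 arg2
-</pre>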
-
-
-<a name="errormsgs"></a>
-<h3>3.3&nbsp; Explanation of error messages from Memcheck</h3>
-
-Despite considerable sophistication under the hood, Memcheck can only
-really detect two kinds of errors, use of illegal addresses, and use
-of undefined values.  Nevertheless, this is enough to help you
-discover all sorts of memory-management nasties in your code.  This
-section presents a quick summary of what error messages mean.  The
-precise behaviour of the error-checking machinery is described in
-<a href="#machine">this section</a>.
-
-
-<h4>3.3.1&nbsp; Illegal read / Illegal write errors</h4>
-For example:
-<pre>
-  Invalid read of size 4
-     at 0x40F6BBCC: (within /usr/lib/libpng.so.2.1.0.9)
-     by 0x40F6B804: (within /usr/lib/libpng.so.2.1.0.9)
-     by 0x40B07FF4: read_png_image__FP8QImageIO (kernel/qpngio.cpp:326)
-     by 0x40AC751B: QImageIO::read() (kernel/qimage.cpp:3621)
-     Address 0xBFFFF0E0 is not stack'd, malloc'd or free'd
-</pre>
-
-<p>This happens when your program reads or writes memory at a place
-which Memcheck reckons it shouldn't.  In this example, the program did
-a 4-byte read at address 0xBFFFF0E0, somewhere within the
-system-supplied library libpng.so.2.1.0.9, which was called from
-somewhere else in the same library, called from line 326 of
-qpngio.cpp, and so on.
-
-<p>Memcheck tries to establish what the illegal address might relate
-to, since that's often useful.  So, if it points into a block of
-memory which has already been freed, you'll be informed of this, and
-also of where the block was free'd.  Likewise, if it should turn out
-to be just off the end of a malloc'd block, a common result of
-off-by-one errors in array subscripting, you'll be informed of this
-fact, and also of where the block was malloc'd.
-
-<p>In this example, Memcheck can't identify the address.  Actually the
-address is on the stack, but, for some reason, this is not a valid
-stack address -- it is below the stack pointer, %esp, and that isn't
-allowed.  In this particular case it's probably caused by gcc
-generating invalid code, a known bug in various flavours of gcc.
-
-<p>Note that Memcheck only tells you that your program is about to
-access memory at an illegal address.  It can't stop the access from
-happening.  So, if your program makes an access which normally would
-result in a segmentation fault, your program will still suffer the same
-fate -- but you will get a message from Memcheck immediately prior to
-this.  In this particular example, reading junk on the stack is
-non-fatal, and the program stays alive.
-
-
-<h4>3.3.2&nbsp; Use of uninitialised values</h4>
-For example:
-<pre>
-  Conditional jump or move depends on uninitialised value(s)
-     at 0x402DFA94: _IO_vfprintf (_itoa.h:49)
-     by 0x402E8476: _IO_printf (printf.c:36)
-     by 0x8048472: main (tests/manuel1.c:8)
-     by 0x402A6E5E: __libc_start_main (libc-start.c:129)
-</pre>
-
-<p>An uninitialised-value use error is reported when your program uses
-a value which hasn't been initialised -- in other words, is undefined.
-Here, the undefined value is used somewhere inside the printf()
-machinery of the C library.  This error was reported when running the
-following small program:
-<pre>
-  int main()
-  {
-    int x;
-    printf ("x = %d\n", x);
-  }
-</pre>
-
-<p>It is important to understand that your program can copy around
-junk (uninitialised) data to its heart's content.  Memcheck observes
-this and keeps track of the data, but does not complain.  A complaint
-is issued only when your program attempts to make use of uninitialised
-data.  In this example, x is uninitialised.  Memcheck observes the
-value being passed to _IO_printf and thence to _IO_vfprintf, but makes
-no comment.  However, _IO_vfprintf has to examine the value of x so it
-can turn it into the corresponding ASCII string, and it is at this
-point that Memcheck complains.
-
-<p>Sources of uninitialised data tend to be:
-<ul>
-  <li>Local variables in procedures which have not been initialised,
-      as in the example above.</li><p>
-
-  <li>The contents of malloc'd blocks, before you write something
-      there.  In C++, the new operator is a wrapper round malloc, so
-      if you create an object with new, its fields will be
-      uninitialised until you (or the constructor) fill them in, which
-      is only Right and Proper.</li>
-</ul>
-
-
-
-<h4>3.3.3&nbsp; Illegal frees</h4>
-For example:
-<pre>
-  Invalid free()
-     at 0x4004FFDF: free (vg_clientmalloc.c:577)
-     by 0x80484C7: main (tests/doublefree.c:10)
-     by 0x402A6E5E: __libc_start_main (libc-start.c:129)
-     by 0x80483B1: (within tests/doublefree)
-     Address 0x3807F7B4 is 0 bytes inside a block of size 177 free'd
-     at 0x4004FFDF: free (vg_clientmalloc.c:577)
-     by 0x80484C7: main (tests/doublefree.c:10)
-     by 0x402A6E5E: __libc_start_main (libc-start.c:129)
-     by 0x80483B1: (within tests/doublefree)
-</pre>
-<p>Memcheck keeps track of the blocks allocated by your program with
-malloc/new, so it can know exactly whether or not the argument to
-free/delete is legitimate.  Here, this test program has
-freed the same block twice.  As with the illegal read/write errors,
-Memcheck attempts to make sense of the address free'd.  If, as
-here, the address is one which has previously been freed, you will
-be told that -- making duplicate frees of the same block easy to spot.
-
-
-<h4>3.3.4&nbsp; When a block is freed with an inappropriate
-deallocation function</h4>
-In the following example, a block allocated with <code>new[]</code>
-has wrongly been deallocated with <code>free</code>:
-<pre>
-  Mismatched free() / delete / delete []
-     at 0x40043249: free (vg_clientfuncs.c:171)
-     by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)
-     by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)
-     by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)
-     Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd
-     at 0x4004318C: __builtin_vec_new (vg_clientfuncs.c:152)
-     by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)
-     by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)
-     by 0x4C21788F: OLEFilter::convert(QCString const &amp;) (olefilter.cc:272)
-</pre>
-The following was told to me by the KDE 3 developers.  I didn't know
-any of it myself.  They also implemented the check itself.
-<p>
-In C++ it's important to deallocate memory in a way compatible with
-how it was allocated.  The deal is:
-<ul>
-<li>If allocated with <code>malloc</code>, <code>calloc</code>,
-    <code>realloc</code>, <code>valloc</code> or
-    <code>memalign</code>, you must deallocate with <code>free</code>.
-<li>If allocated with <code>new[]</code>, you must deallocate with
-    <code>delete[]</code>.
-<li>If allocated with <code>new</code>, you must deallocate with
-    <code>delete</code>.
-</ul>
-The worst thing is that on Linux apparently it doesn't matter if you
-do muddle these up, and it all seems to work ok, but the same program
-may then crash on a different platform, Solaris for example.  So it's
-best to fix it properly.  According to the KDE folks "it's amazing how
-many C++ programmers don't know this".  
-<p>
-Pascal Massimino adds the following clarification:
-<code>delete[]</code> must be paired with a
-<code>new[]</code> because the compiler stores the size of the array
-and the pointer-to-member to the destructor of the array's content
-just before the pointer that is actually returned.  This implies a
-variable-sized overhead in what's returned by <code>new</code> or
-<code>new[]</code>.  It is rather surprising how robust compilers [Ed:
-runtime-support libraries?] are to mismatches of
-<code>new</code>/<code>delete</code> and
-<code>new[]</code>/<code>delete[]</code>.
-
-
-<h4>3.3.5&nbsp; Passing system call parameters with inadequate
-read/write permissions</h4>
-
-Memcheck checks all parameters to system calls, i.e.:
-<ul>
-<li>It checks all the direct parameters themselves.
-<li>Also, if a system call needs to read from a buffer provided by your
-    program, Memcheck checks that the entire buffer is addressible and has
-    valid data, i.e. it is readable.
-<li>Also, if the system call needs to write to a user-supplied buffer, Memcheck
-    checks that the buffer is addressible.  
-</ul>
-
-After the system call, Memcheck updates its administrative information to
-precisely reflect any changes in memory permissions caused by the system call.
-
-<p>Here's an example of two system calls with invalid parameters:
-<pre>
-  #include &lt;stdlib.h>
-  #include &lt;unistd.h>
-  int main( void )
-  {
-    char* arr  = malloc(10);
-    int*  arr2 = malloc(sizeof(int));
-    write( 1 /* stdout */, arr, 10 );
-    exit(arr2[0]);
-  }
-</pre>
-
-<p>You get these complaints ...
-<pre>
-  Syscall param write(buf) points to uninitialised byte(s)
-     at 0x25A48723: __write_nocancel (in /lib/tls/libc-2.3.3.so)
-     by 0x259AFAD3: __libc_start_main (in /lib/tls/libc-2.3.3.so)
-     by 0x8048348: (within /auto/homes/njn25/grind/head4/a.out)
-   Address 0x25AB8028 is 0 bytes inside a block of size 10 alloc'd
-     at 0x259852B0: malloc (vg_replace_malloc.c:130)
-     by 0x80483F1: main (a.c:5)
-  
-  Syscall param exit(error_code) contains uninitialised byte(s)
-     at 0x25A21B44: __GI__exit (in /lib/tls/libc-2.3.3.so)
-     by 0x8048426: main (a.c:8)
-</pre>
-
-<p>... because the program has (a) tried to write uninitialised junk from
-the malloc'd block to the standard output, and (b) passed an uninitialised
-value to <code>exit</code>.  Note that the first error refers to the memory
-pointed to by <code>buf</code> (not <code>buf</code> itself), but the second
-error refers to the argument <code>error_code</code> itself.
-
-<h4>3.3.6&nbsp; Overlapping source and destination blocks</h4>
-The following C library functions copy some data from one memory block
-to another (or something similar): <code>memcpy()</code>,
-<code>strcpy()</code>, <code>strncpy()</code>, <code>strcat()</code>,
-<code>strncat()</code>.  The blocks pointed to by their <code>src</code> and
-<code>dst</code> pointers aren't allowed to overlap.  Memcheck checks
-for this.
-<p>
-For example:
-<pre>
-==27492== Source and destination overlap in memcpy(0xbffff294, 0xbffff280, 21)
-==27492==    at 0x40026CDC: memcpy (mc_replace_strmem.c:71)
-==27492==    by 0x804865A: main (overlap.c:40)
-==27492==    by 0x40246335: __libc_start_main (../sysdeps/generic/libc-start.c:129)
-==27492==    by 0x8048470: (within /auto/homes/njn25/grind/head6/memcheck/tests/overlap)
-==27492== 
-</pre>
-<p>
-You don't want the two blocks to overlap because one of them could get
-partially trashed by the copying.
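-<p>
-For instance, a contrived fragment which would provoke such a report
-(illustrative only) is:
-<pre>
-  #include &lt;string.h>
-  int main ( void )
-  {
-    char str[64];
-    strcpy ( str, "some string" );
-    memcpy ( str+1, str, strlen(str)+1 );  /* src and dst overlap */
-    return 0;
-  }
-</pre>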
-
-<a name="suppfiles"></a>
-<h3>3.4&nbsp; Writing suppressions files</h3>
-
-The basic suppression format was described in <a
-href="coregrind_core.html#suppress">this section</a>.
-<p>
-The suppression (2nd) line should have the form:
-<pre>
-Memcheck:suppression_type
-</pre>
-Or, since some of the suppressions are shared with Addrcheck:
-<pre>
-Memcheck,Addrcheck:suppression_type
-</pre>
-
-<p>
-The Memcheck suppression types are as follows:
-<code>Value1</code>, 
-<code>Value2</code>,
-<code>Value4</code>,
-<code>Value8</code>,
-<code>Value16</code>,
-meaning an uninitialised-value error when
-using a value of 1, 2, 4, 8 or 16 bytes.
-Or
-<code>Cond</code> (or its old name, <code>Value0</code>),
-meaning use of an uninitialised CPU condition code.  Or: 
-<code>Addr1</code>,
-<code>Addr2</code>, 
-<code>Addr4</code>,
-<code>Addr8</code>,
-<code>Addr16</code>, 
-meaning an invalid address during a
-memory access of 1, 2, 4, 8 or 16 bytes respectively.  Or 
-<code>Param</code>,
-meaning an invalid system call parameter error.  Or
-<code>Free</code>, meaning an invalid or mismatching free.  Or
-<code>Overlap</code>, meaning a <code>src</code>/<code>dst</code>
-overlap in <code>memcpy()</code> or a similar function.  Last but not least,
-you can suppress leak reports with <code>Leak</code>.  Leak suppression was
-added in valgrind-1.9.3, I believe.
-<p>
-
-The extra information line: for Param errors, this is the name of the
-offending system call parameter.
-No other error kinds have this extra line.
-<p>
-The first line of the calling context: for Value and Addr errors, it is either
-the name of the function in which the error occurred, or, failing that, the
-full path of the .so file or executable containing the error location.  For
-Free errors, it is the name of the function doing the freeing (eg,
-<code>free</code>, <code>__builtin_vec_delete</code>, etc).  For Overlap
-errors, it is the name of the function with the overlapping arguments (eg.
-<code>memcpy()</code>, <code>strcpy()</code>, etc).
-<p>
-Lastly, there's the rest of the calling context.
-<p>
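-Putting that together, a complete suppression entry has a shape
-something like this (schematic; the exact syntax of the
-calling-context lines is given in the core documentation referred to
-above):
-<pre>
-  {
-     name-for-this-suppression
-     Memcheck:Cond
-     ...first line of calling context...
-     ...rest of calling context...
-  }
-</pre>
-<p>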
-
-<a name="machine"></a>
-<h3>3.5&nbsp; Details of Memcheck's checking machinery</h3>
-
-Read this section if you want to know, in detail, exactly what and how
-Memcheck is checking.
-
-<a name="vvalue"></a>
-<h4>3.5.1&nbsp; Valid-value (V) bits</h4>
-
-It is simplest to think of Memcheck implementing a synthetic Intel x86
-CPU which is identical to a real CPU, except for one crucial detail.
-Every bit (literally) of data processed, stored and handled by the
-real CPU has, in the synthetic CPU, an associated "valid-value" bit,
-which says whether or not the accompanying bit has a legitimate value.
-In the discussions which follow, this bit is referred to as the V
-(valid-value) bit.
-
-<p>Each byte in the system therefore has 8 V bits which follow
-it wherever it goes.  For example, when the CPU loads a word-size item
-(4 bytes) from memory, it also loads the corresponding 32 V bits from
-a bitmap which stores the V bits for the process' entire address
-space.  If the CPU should later write the whole or some part of that
-value to memory at a different address, the relevant V bits will be
-stored back in the V-bit bitmap.
-
-<p>In short, each bit in the system has an associated V bit, which
-follows it around everywhere, even inside the CPU.  Yes, the CPU's
-(integer and <code>%eflags</code>) registers have their own V bit
-vectors.
-
-<p>Copying values around does not cause Memcheck to check for, or
-report on, errors.  However, when a value is used in a way which might
-conceivably affect the outcome of your program's computation, the
-associated V bits are immediately checked.  If any of these indicate
-that the value is undefined, an error is reported.
-
-<p>Here's an (admittedly nonsensical) example:
-<pre>
-  int i, j;
-  int a[10], b[10];
-  for (i = 0; i &lt; 10; i++) {
-    j = a[i];
-    b[i] = j;
-  }
-</pre>
-
-<p>Memcheck emits no complaints about this, since it merely copies
-uninitialised values from <code>a[]</code> into <code>b[]</code>, and
-doesn't use them in any way.  However, if the loop is changed to
-<pre>
-  for (i = 0; i &lt; 10; i++) {
-    j += a[i];
-  }
-  if (j == 77) 
-     printf("hello there\n");
-</pre>
-then Valgrind will complain, at the <code>if</code>, that the
-condition depends on uninitialised values.  Note that it
-<b>doesn't</b> complain at the <code>j += a[i];</code>, since 
-at that point the undefinedness is not "observable".  It's only
-when a decision has to be made as to whether or not to do the
-<code>printf</code> -- an observable action of your program -- that
-Memcheck complains.
-
-<p>Most low level operations, such as adds, cause Memcheck to 
-use the V bits for the operands to calculate the V bits for the
-result.  Even if the result is partially or wholly undefined,
-it does not complain.
-
-<p>Checks on definedness only occur in two places: when a value is
-used to generate a memory address, and where a control-flow decision
-needs to be made.  Also, when a system call is detected, valgrind
-checks definedness of parameters as required.
-
-<p>If a check should detect undefinedness, an error message is
-issued.  The resulting value is subsequently regarded as well-defined.
-To do otherwise would give long chains of error messages.  In effect,
-we say that undefined values are non-infectious.
-
-<p>This sounds overcomplicated.  Why not just check all reads from
-memory, and complain if an undefined value is loaded into a CPU register? 
-Well, that doesn't work well, because perfectly legitimate C programs routinely
-copy uninitialised values around in memory, and we don't want endless complaints
-about that.  Here's the canonical example.  Consider a struct
-like this:
-<pre>
-  struct S { int x; char c; };
-  struct S s1, s2;
-  s1.x = 42;
-  s1.c = 'z';
-  s2 = s1;
-</pre>
-
-<p>The question to ask is: how large is <code>struct S</code>, in
-bytes?  An int is 4 bytes and a char one byte, so perhaps a struct S
-occupies 5 bytes?  Wrong.  All (non-toy) compilers we know of will
-round the size of <code>struct S</code> up to a whole number of words,
-in this case 8 bytes.  Not doing this forces compilers to generate
-truly appalling code for subscripting arrays of <code>struct
-S</code>'s.
-
-<p>So s1 occupies 8 bytes, yet only 5 of them will be initialised.
-For the assignment <code>s2 = s1</code>, gcc generates code to copy
-all 8 bytes wholesale into <code>s2</code> without regard for their
-meaning.  If Memcheck simply checked values as they came out of
-memory, it would yelp every time a structure assignment like this
-happened.  So the more complicated semantics described above is
-necessary.  This allows gcc to copy <code>s1</code> into
-<code>s2</code> any way it likes, and a warning will only be emitted
-if the uninitialised values are later used.
-
-<p>One final twist to this story.  The above scheme allows garbage to
-pass through the CPU's integer registers without complaint.  It does
-this by giving the integer registers V tags, passing these around in
-the expected way.  This is complicated and computationally expensive to
-do, but is necessary.  Memcheck is more simplistic about
-floating-point loads and stores.  In particular, V bits for data read
-as a result of floating-point loads are checked at the load
-instruction.  So if your program uses the floating-point registers to
-do memory-to-memory copies, you will get complaints about
-uninitialised values.  Fortunately, I have not yet encountered a
-program which (ab)uses the floating-point registers in this way.
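-
-<p>To make concrete what this means: on x86, copying a
-<code>double</code> typically goes through the FPU, so a fragment
-like the following (illustrative only) can provoke an
-uninitialised-value complaint at the point of the load:
-<pre>
-  double d1;        /* never initialised */
-  double d2 = d1;   /* if compiled as an FPU load/store, the V bits
-                       are checked at the load, and an error issued */
-</pre>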
-
-<a name="vaddress"></a>
-<h4>3.5.2&nbsp; Valid-address (A) bits</h4>
-
-Notice that the previous subsection describes how the validity of values
-is established and maintained without having to say whether the
-program does or does not have the right to access any particular
-memory location.  We now consider the latter issue.
-
-<p>As described above, every bit in memory or in the CPU has an
-associated valid-value (V) bit.  In addition, all bytes in memory, but
-not in the CPU, have an associated valid-address (A) bit.  This
-indicates whether or not the program can legitimately read or write
-that location.  It does not give any indication of the validity of the
-data at that location -- that's the job of the V bits -- only whether
-or not the location may be accessed.
-
-<p>Every time your program reads or writes memory, Memcheck checks the
-A bits associated with the address.  If any of them indicate an
-invalid address, an error is emitted.  Note that the reads and writes
-themselves do not change the A bits, only consult them.
-
-<p>So how do the A bits get set/cleared?  Like this:
-
-<ul>
-  <li>When the program starts, all the global data areas are marked as
-      accessible.</li><br>
-      <p>
-
-  <li>When the program does malloc/new, the A bits for exactly the
-      area allocated, and not a byte more, are marked as accessible.
-      Upon freeing the area the A bits are changed to indicate
-      inaccessibility.</li><br>
-      <p>
-
-  <li>When the stack pointer register (%esp) moves up or down, A bits
-      are set.  The rule is that the area from %esp up to the base of
-      the stack is marked as accessible, and below %esp is
-      inaccessible.  (If that sounds illogical, bear in mind that the
-      stack grows down, not up, on almost all Unix systems, including
-      GNU/Linux.)  Tracking %esp like this has the useful side-effect
-      that the section of stack used by a function for local variables
-      etc is automatically marked accessible on function entry and
-      inaccessible on exit.</li><br>
-      <p>
-
-  <li>When doing system calls, A bits are changed appropriately.  For
-      example, mmap() magically makes files appear in the process's
-      address space, so the A bits must be updated if mmap()
-      succeeds.</li><br>
-      <p>
-
-  <li>Optionally, your program can tell Valgrind about such changes
-      explicitly, using the client request mechanism described above.
-</ul>
-
-
-<a name="together"></a>
-<h4>3.5.3&nbsp; Putting it all together</h4>
-Memcheck's checking machinery can be summarised as follows:
-
-<ul>
-  <li>Each byte in memory has 8 associated V (valid-value) bits,
-      saying whether or not the byte has a defined value, and a single
-      A (valid-address) bit, saying whether or not the program
-      currently has the right to read/write that address.</li><br>
-      <p>
-
-  <li>When memory is read or written, the relevant A bits are
-      consulted.  If they indicate an invalid address, Valgrind emits
-      an Invalid read or Invalid write error.</li><br>
-      <p>
-
-  <li>When memory is read into the CPU's integer registers, the
-      relevant V bits are fetched from memory and stored in the
-      simulated CPU.  They are not consulted.</li><br>
-      <p>
-
-  <li>When an integer register is written out to memory, the V bits
-      for that register are written back to memory too.</li><br>
-      <p>
-
-  <li>When memory is read into the CPU's floating point registers, the
-      relevant V bits are read from memory and they are immediately
-      checked.  If any are invalid, an uninitialised value error is
-      emitted.  This precludes using the floating-point registers to
-      copy possibly-uninitialised memory, but simplifies Valgrind in
-      that it does not have to track the validity status of the
-      floating-point registers.</li><br>
-      <p>
-
-  <li>As a result, when a floating-point register is written to
-      memory, the associated V bits are set to indicate a valid
-      value.</li><br>
-      <p>
-
-  <li>When values in integer CPU registers are used to generate a
-      memory address, or to determine the outcome of a conditional
-      branch, the V bits for those values are checked, and an error
-      emitted if any of them are undefined.</li><br>
-      <p>
-
-  <li>When values in integer CPU registers are used for any other
-      purpose, Valgrind computes the V bits for the result, but does
-      not check them.</li><br>
-      <p>
-
-  <li>Once the V bits for a value in the CPU have been checked, they
-      are then set to indicate validity.  This avoids long chains of
-      errors.</li><br>
-      <p>
-
-  <li>When values are loaded from memory, valgrind checks the A bits
-      for that location and issues an illegal-address warning if
-      needed.  In that case, the V bits loaded are forced to indicate
-      Valid, despite the location being invalid.
-      <p>
-      This apparently strange choice reduces the amount of confusing
-      information presented to the user.  It avoids the
-      unpleasant phenomenon in which memory is read from a place which
-      is both unaddressible and contains invalid values, and, as a
-      result, you get not only an invalid-address (read/write) error,
-      but also a potentially large set of uninitialised-value errors,
-      one for every time the value is used.
-      <p>
-      There is a hazy boundary case to do with multi-byte loads from
-      addresses which are partially valid and partially invalid.  See
-      the description of the flag <code>--partial-loads-ok</code>.
-      </li><br>
-</ul>
-
-Memcheck intercepts calls to malloc, calloc, realloc, valloc,
-memalign, free, new and delete.  The behaviour you get is:
-
-<ul>
-
-  <li>malloc/new: the returned memory is marked as addressible but not
-      having valid values.  This means you have to write on it before
-      you can read it.</li><br>
-      <p>
-
-  <li>calloc: returned memory is marked both addressible and valid,
-      since calloc() clears the area to zero.</li><br>
-      <p>
-
-  <li>realloc: if the new size is larger than the old, the new section
-      is addressible but invalid, as with malloc.</li><br>
-      <p>
-
-  <li>If the new size is smaller, the dropped-off section is marked as
-      unaddressible.  You may only pass to realloc a pointer
-      previously issued to you by malloc/calloc/realloc.</li><br>
-      <p>
-
-  <li>free/delete: you may only pass to free a pointer previously
-      issued to you by malloc/calloc/realloc, or the value
-      NULL. Otherwise, Valgrind complains.  If the pointer is indeed
-      valid, Valgrind marks the entire area it points at as
-      unaddressible, and places the block in the freed-blocks-queue.
-      The aim is to defer as long as possible reallocation of this
-      block.  Until that happens, all attempts to access it will
-      elicit an invalid-address error, as you would hope.</li><br>
-</ul>
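-
-<p>So, for example, the following (deliberately buggy) fragment
-produces an invalid read error, because the freed block sits in the
-freed-blocks queue, marked unaddressible:
-<pre>
-  #include &lt;stdlib.h>
-  int main ( void )
-  {
-    int* p = malloc(sizeof(int));
-    *p = 7;
-    free(p);
-    return *p;   /* invalid read of size 4 */
-  }
-</pre>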
-
-
-
-
-<a name="leaks"></a>
-<h3>3.6&nbsp; Memory leak detection</h3>
-
-Memcheck keeps track of all memory blocks issued in response to calls
-to malloc/calloc/realloc/new.  So when the program exits, it knows
-which blocks are still outstanding -- have not been returned, in other
-words.  Ideally, you want your program to have no blocks still in use
-at exit.  But many programs do.
-
-<p>For each such block, Memcheck scans the entire address space of the
-process, looking for pointers to the block.  One of three situations
-may result:
-
-<ul>
-  <li>A pointer to the start of the block is found.  This usually
-      indicates programming sloppiness; since the block is still
-      pointed at, the programmer could, at least in principle, have
-      free'd it before program exit.</li><br>
-      <p>
-
-  <li>A pointer to the interior of the block is found.  The pointer
-      might originally have pointed to the start and have been moved
-      along, or it might be entirely unrelated.  Memcheck deems such a
-      block as "dubious", that is, possibly leaked,
-      because it's unclear whether or
-      not a pointer to it still exists.</li><br>
-      <p>
-
-  <li>The worst outcome is that no pointer to the block can be found.
-      The block is classified as "leaked", because the
-      programmer could not possibly have free'd it at program exit,
-      since no pointer to it exists.  This might be a symptom of
-      having lost the pointer at some earlier point in the
-      program.</li>
-</ul>
-
-Memcheck reports summaries about leaked and dubious blocks.
-For each such block, it will also tell you where the block was
-allocated.  This should help you figure out why the pointer to it has
-been lost.  In general, you should attempt to ensure your programs do
-not have any leaked or dubious blocks at exit.
-
-<p>The precise area of memory in which Memcheck searches for pointers
-is: all naturally-aligned 4-byte words for which all A bits indicate
-addressibility and all V bits indicate that the stored value is
-actually valid.
-<p>
-
-
-<a name="clientreqs"></a>
-<h3>3.7&nbsp; Client Requests</h3>
-
-The following client requests are defined in <code>memcheck.h</code>.  They
-also work for Addrcheck.  See <code>memcheck.h</code> for exact
-details of their arguments.
-
-<ul>
-<li><code>VALGRIND_MAKE_NOACCESS</code>,
-    <code>VALGRIND_MAKE_WRITABLE</code> and
-    <code>VALGRIND_MAKE_READABLE</code>.  These mark address
-    ranges as completely inaccessible, accessible but containing
-    undefined data, and accessible and containing defined data,
-    respectively.  Subsequent errors may have their faulting
-    addresses described in terms of these blocks.  Returns a
-    "block handle".  Returns zero when not run on Valgrind.
-<p>
-<li><code>VALGRIND_DISCARD</code>: At some point you may want
-    Valgrind to stop reporting errors in terms of the blocks
-    defined by the previous three macros.  To do this, the above
-    macros return a small-integer "block handle".  You can pass
-    this block handle to <code>VALGRIND_DISCARD</code>.  After
-    doing so, Valgrind will no longer be able to relate
-    addressing errors to the user-defined block associated with
-    the handle.  The permissions settings associated with the
-    handle remain in place; this just affects how errors are
-    reported, not whether they are reported.  Returns 1 for an
-    invalid handle and 0 for a valid handle (although passing
-    invalid handles is harmless).  Always returns 0 when not run
-    on Valgrind.
-<p>
-<li><code>VALGRIND_CHECK_WRITABLE</code> and
-    <code>VALGRIND_CHECK_READABLE</code>: check immediately
-    whether or not the given address range has the relevant
-    property, and if not, print an error message.  Also, for the
-    convenience of the client, returns zero if the relevant
-    property holds; otherwise, the returned value is the address
-    of the first byte for which the property is not true.
-    Always returns 0 when not run on Valgrind.
-<p>
-<li><code>VALGRIND_CHECK_DEFINED</code>: a quick and easy way
-    to find out whether Valgrind thinks a particular variable
-    (lvalue, to be precise) is addressible and defined.  Prints
-    an error message if not.  Returns no value.
-<p>
-<li><code>VALGRIND_DO_LEAK_CHECK</code>: run the memory leak detector
-    right now.  Returns no value.  I guess this could be used to
-    incrementally check for leaks between arbitrary places in the
-    program's execution.  Warning: not properly tested!
-<p>
-<li><code>VALGRIND_COUNT_LEAKS</code>: fills in the four arguments with
-    the number of bytes of memory found by the previous leak check to
-    be leaked, dubious, reachable and suppressed.  Again, useful in
-    test harness code, after calling <code>VALGRIND_DO_LEAK_CHECK</code>.
-<p>
-<li><code>VALGRIND_GET_VBITS</code> and
-    <code>VALGRIND_SET_VBITS</code>: allow you to get and set the V (validity)
-    bits for an address range.  You should probably only set V bits that you
-    have got with <code>VALGRIND_GET_VBITS</code>.  Only for those who really
-    know what they are doing.
-<p>
-</ul>
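-
-By way of illustration, a minimal (untested) sketch of how a few of
-these requests might be combined in client code:
-<pre>
-  #include "memcheck.h"
-
-  void example ( void )
-  {
-     char buf[256];
-     int  handle;
-
-     /* Make buf inaccessible; keep the handle so the block
-        description can be discarded later. */
-     handle = VALGRIND_MAKE_NOACCESS(buf, sizeof(buf));
-
-     /* ... later: make it usable again, and forget the block. */
-     VALGRIND_MAKE_WRITABLE(buf, sizeof(buf));
-     VALGRIND_DISCARD(handle);
-
-     /* Complain immediately if buf[0] is unaddressible or undefined. */
-     buf[0] = 'x';
-     VALGRIND_CHECK_DEFINED(buf[0]);
-
-     /* Run the leak detector at this point in the program's execution. */
-     VALGRIND_DO_LEAK_CHECK;
-  }
-</pre>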
-
diff --git a/memcheck/docs/mc_techdocs.html b/memcheck/docs/mc_techdocs.html
deleted file mode 100644
index b02baad..0000000
--- a/memcheck/docs/mc_techdocs.html
+++ /dev/null
@@ -1,2119 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>The design and implementation of Valgrind</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="mc-techdocs">&nbsp;</a>
-<h1 align=center>The design and implementation of Valgrind</h1>
-
-<center>
-Detailed technical notes for hackers, maintainers and the
-overly-curious<br>
-These notes pertain to snapshot 20020306<br>
-<p>
-<a href="mailto:jseward@acm.org">jseward@acm.org</a><br>
-<a href="http://valgrind.kde.org">http://valgrind.kde.org</a><br>
-Copyright &copy; 2000-2004 Julian Seward
-<p>
-Valgrind is licensed under the GNU General Public License, 
-version 2<br>
-An open-source tool for finding memory-management problems in
-x86 GNU/Linux executables.
-</center>
-
-<p>
-
-
-
-
-<hr width="100%">
-
-<h2>Introduction</h2>
-
-This document contains a detailed, highly-technical description of the
-internals of Valgrind.  This is not the user manual; if you are an
-end-user of Valgrind, you do not want to read this.  Conversely, if
-you really are a hacker-type and want to know how it works, I assume
-that you have read the user manual thoroughly.
-<p>
-You may need to read this document several times, and carefully.  Some
-important things, I only say once.
-<p>
-[Nb: this document is now badly out of date.  There are some annotations in
-here that explain particular inaccuracies, but there are many more that are not
-annotated in such a way.]
-
-<h3>History</h3>
-
-Valgrind came into public view in late Feb 2002.  However, it has been
-under contemplation for a very long time, perhaps seriously for about
-five years.  Somewhat over two years ago, I started working on the x86
-code generator for the Glasgow Haskell Compiler
-(http://www.haskell.org/ghc), gaining familiarity with x86 internals
-on the way.  I then did Cacheprof (http://www.cacheprof.org), gaining
-further x86 experience.  Some time around Feb 2000 I started
-experimenting with a user-space x86 interpreter for x86-Linux.  This
-worked, but it was clear that a JIT-based scheme would be necessary to
-give reasonable performance for Valgrind.  Design work for the JITter
-started in earnest in Oct 2000, and by early 2001 I had an x86-to-x86
-dynamic translator which could run quite large programs.  This
-translator was in a sense pointless, since it did not do any
-instrumentation or checking.
-
-<p>
-Most of the rest of 2001 was taken up designing and implementing the
-instrumentation scheme.  The main difficulty, which consumed a lot
-of effort, was to design a scheme which did not generate large numbers
-of false uninitialised-value warnings.  By late 2001 a satisfactory
-scheme had been arrived at, and I started to test it on ever-larger
-programs, with an eventual eye to making it work well enough so that
-it was helpful to folks debugging the upcoming version 3 of KDE.  I've
-used KDE since before version 1.0, and wanted Valgrind to be an
-indirect contribution to the KDE 3 development effort.  At the start of
-Feb 02 the kde-core-devel crew started using it, and gave a huge
-amount of helpful feedback and patches in the space of three weeks.
-Snapshot 20020306 is the result.
-
-<p>
-In the best Unix tradition, or perhaps in the spirit of Fred Brooks'
-depressing-but-completely-accurate epitaph "build one to throw away;
-you will anyway", much of Valgrind is a second or third rendition of
-the initial idea.  The instrumentation machinery
-(<code>vg_translate.c</code>, <code>vg_memory.c</code>) and core CPU
-simulation (<code>vg_to_ucode.c</code>, <code>vg_from_ucode.c</code>)
-have had three redesigns and rewrites; the register allocator,
-low-level memory manager (<code>vg_malloc2.c</code>) and symbol table
-reader (<code>vg_symtab2.c</code>) are on the second rewrite.  In a
-sense, this document serves to record some of the knowledge gained as
-a result.
-
-[Nb: the entire instrumentation/simulation part has again been rewritten,
-so as to be suitable for porting to architectures other than x86.]
-
-
-<h3>Design overview</h3>
-
-Valgrind is compiled into a Linux shared object,
-<code>valgrind.so</code>, and also a dummy one,
-<code>valgrinq.so</code>, of which more later.  The
-<code>valgrind</code> shell script adds <code>valgrind.so</code> to
-the <code>LD_PRELOAD</code> list of extra libraries to be
-loaded with any dynamically linked library.  This is a standard trick,
-one which I assume the <code>LD_PRELOAD</code> mechanism was developed
-to support.
-
-<p>
-<code>valgrind.so</code>
-is linked with the <code>-z initfirst</code> flag, which requests that
-its initialisation code is run before that of any other object in the
-executable image.  When this happens, valgrind gains control.  The
-real CPU becomes "trapped" in <code>valgrind.so</code> and the 
-translations it generates.  The synthetic CPU provided by Valgrind
-does, however, return from this initialisation function.  So the 
-normal startup actions, orchestrated by the dynamic linker
-<code>ld.so</code>, continue as usual, except on the synthetic CPU,
-not the real one.  Eventually <code>main</code> is run and returns,
-and then the finalisation code of the shared objects is run,
-presumably in inverse order to which they were initialised.  Remember,
-this is still all happening on the simulated CPU.  Eventually
-<code>valgrind.so</code>'s own finalisation code is called.  It spots
-this event, shuts down the simulated CPU, prints any error summaries
-and/or does leak detection, and returns from the initialisation code
-on the real CPU.  At this point, in effect the real and synthetic CPUs
-have merged back into one, Valgrind has lost control of the program,
-and the program finally <code>exit()s</code> back to the kernel in the
-usual way.
-
-<p>
-The normal course of activity, once Valgrind has started up, is as
-follows.  Valgrind never runs any part of your program (usually
-referred to as the "client"), not a single byte of it, directly.
-Instead it uses function <code>VG_(translate)</code> to translate
-basic blocks (BBs, straight-line sequences of code) into instrumented
-translations, and those are run instead.  The translations are stored
-in the translation cache (TC), <code>vg_tc</code>, with the
-translation table (TT), <code>vg_tt</code> supplying the
-original-to-translation code address mapping.  Auxiliary array
-<code>VG_(tt_fast)</code> is used as a direct-map cache for fast
-lookups in TT; it usually achieves a hit rate of around 98% and
-facilitates an orig-to-trans lookup in 4 x86 insns, which is not bad.
-
-<p>
-Function <code>VG_(dispatch)</code> in <code>vg_dispatch.S</code> is
-the heart of the JIT dispatcher.  Once a translated code address has
-been found, it is executed simply by an x86 <code>call</code>
-to the translation.  At the end of the translation, the next 
-original code addr is loaded into <code>%eax</code>, and the 
-translation then does a <code>ret</code>, taking it back to the
-dispatch loop, with, interestingly, zero branch mispredictions.  
-The address requested in <code>%eax</code> is looked up first in
-<code>VG_(tt_fast)</code>, and, if not found, by calling C helper
-<code>VG_(search_transtab)</code>.  If there is still no translation 
-available, <code>VG_(dispatch)</code> exits back to the top-level
-C dispatcher <code>VG_(toploop)</code>, which arranges for 
-<code>VG_(translate)</code> to make a new translation.  All fairly
-unsurprising, really.  There are various complexities described below.
-
-<p>
-The translator, orchestrated by <code>VG_(translate)</code>, is
-complicated but entirely self-contained.  It is described in great
-detail in subsequent sections.  Translations are stored in TC, with TT
-tracking administrative information.  The translations are subject to
-an approximate LRU-based management scheme.  With the current
-settings, the TC can hold at most about 15MB of translations, and LRU
-passes prune it to about 13.5MB.  Given that the
-orig-to-translation expansion ratio is about 13:1 to 14:1, this means
-TC holds translations for more or less a megabyte of original code,
-which generally comes to about 70000 basic blocks for C++ compiled
-with optimisation on.  Generating new translations is expensive, so it
-is worth having a large TC to minimise the (capacity) miss rate.
-
-<p>
-The dispatcher, <code>VG_(dispatch)</code>, receives hints from
-the translations which allow it to cheaply spot all control 
-transfers corresponding to x86 <code>call</code> and <code>ret</code>
-instructions.  It has to do this in order to spot some special events:
-<ul>
-<li>Calls to <code>VG_(shutdown)</code>.  This is Valgrind's cue to
-    exit.  NOTE: actually this is done a different way; it should be
-    cleaned up.
-<p>
-<li>Returns of system call handlers, to the return address 
-    <code>VG_(signalreturn_bogusRA)</code>.  The signal simulator
-    needs to know when a signal handler is returning, so we spot
-    jumps (returns) to this address.
-<p>
-<li>Calls to <code>vg_trap_here</code>.  All <code>malloc</code>,
-    <code>free</code>, etc calls that the client program makes are
-    eventually routed to a call to <code>vg_trap_here</code>,
-    and Valgrind does its own special thing with these calls.
-    In effect this provides a trapdoor, by which Valgrind can
-    intercept certain calls on the simulated CPU, run the call as it
-    sees fit itself (on the real CPU), and return the result to
-    the simulated CPU, quite transparently to the client program.
-</ul>
-Valgrind intercepts the client's <code>malloc</code>,
-<code>free</code>, etc,
-calls, so that it can store additional information.  Each block 
-<code>malloc</code>'d by the client gives rise to a shadow block
-in which Valgrind stores the call stack at the time of the
-<code>malloc</code>
-call.  When the client calls <code>free</code>, Valgrind tries to
-find the shadow block corresponding to the address passed to
-<code>free</code>, and emits an error message if none can be found.
-If it is found, the block is placed on the freed blocks queue 
-<code>vg_freed_list</code>, it is marked as inaccessible, and
-its shadow block now records the call stack at the time of the
-<code>free</code> call.  Keeping <code>free</code>'d blocks in
-this queue allows Valgrind to spot all (presumably invalid) accesses
-to them.  However, once the volume of blocks in the free queue 
-exceeds <code>VG_(clo_freelist_vol)</code>, blocks are finally
-removed from the queue.
-
-<p>
-Keeping track of A and V bits (note: if you don't know what these are,
-you haven't read the user guide carefully enough) for memory is done
-in <code>vg_memory.c</code>.  This implements a sparse array structure
-which covers the entire 4G address space in a way which is reasonably
-fast and reasonably space efficient.  The 4G address space is divided
-up into 64K sections, each covering 64Kb of address space.  Given a
-32-bit address, the top 16 bits are used to select one of the 65536
-entries in <code>VG_(primary_map)</code>.  The resulting "secondary"
-(<code>SecMap</code>) holds A and V bits for the 64k of address space
-chunk corresponding to the lower 16 bits of the address.
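-
-<p>
-In outline (this is a simplification, not the real declarations in
-<code>vg_memory.c</code>):
-<pre>
-  typedef struct {
-    unsigned char abits[8192];    /* 1 A bit  per byte of the 64K chunk */
-    unsigned char vbyte[65536];   /* 8 V bits per byte of the 64K chunk */
-  } SecMap;
-
-  SecMap* primary_map[65536];
-
-  /* A and V bits for a 32-bit address a are found in
-     primary_map[a >> 16], at offset (a & 0xFFFF). */
-</pre>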
-
-
-<h3>Design decisions</h3>
-
-Some design decisions were motivated by the need to make Valgrind
-debuggable.  Imagine you are writing a CPU simulator.  It works fairly
-well.  However, you run some large program, like Netscape, and after
-tens of millions of instructions, it crashes.  How can you figure out
-where in your simulator the bug is?
-
-<p>
-Valgrind's answer is: cheat.  Valgrind is designed so that it is
-possible to switch back to running the client program on the real
-CPU at any point.  Using the <code>--stop-after= </code> flag, you can 
-ask Valgrind to run just some number of basic blocks, and then 
-run the rest of the way on the real CPU.  If you are searching for
-a bug in the simulated CPU, you can use this to do a binary search,
-which quickly leads you to the specific basic block which is
-causing the problem.  
-
-<p>
-This is all very handy.  It does constrain the design in certain
-unimportant ways.  Firstly, the layout of memory, when viewed from the
-client's point of view, must be identical regardless of whether it is
-running on the real or simulated CPU.  This means that Valgrind can't
-do pointer swizzling -- well, no great loss -- and it can't run on 
-the same stack as the client -- again, no great loss.  
-Valgrind operates on its own stack, <code>VG_(stack)</code>, which
-it switches to at startup, temporarily switching back to the client's
-stack when doing system calls for the client.
-
-<p>
-Valgrind also receives signals on its own stack,
-<code>VG_(sigstack)</code>, but for different gruesome reasons
-discussed below.
-
-<p>
-This nice clean switch-back-to-the-real-CPU-whenever-you-like story
-is muddied by signals.  Problem is that signals arrive at arbitrary
-times and tend to slightly perturb the basic block count, with the
-result that you can get close to the basic block causing a problem but
-can't home in on it exactly.  My kludgey hack is to define
-<code>SIGNAL_SIMULATION</code> to 1 towards the bottom of 
-<code>vg_syscall_mem.c</code>, so that signal handlers are run on the
-real CPU and don't change the BB counts.
-
-<p>
-A second hole in the switch-back-to-real-CPU story is that Valgrind's
-way of delivering signals to the client is different from that of the
-kernel.  Specifically, the layout of the signal delivery frame, and
-the mechanism used to detect a sighandler returning, are different.
-So you can't expect to make the transition inside a sighandler and
-still have things working, but in practice that's not much of a
-restriction.
-
-<p>
-Valgrind's implementation of <code>malloc</code>, <code>free</code>,
-etc, (in <code>vg_clientmalloc.c</code>, not the low-level stuff in
-<code>vg_malloc2.c</code>) is somewhat complicated by the need to 
-handle switching back at arbitrary points.  It does work, though.
-
-
-
-<h3>Correctness</h3>
-
-There's only one of me, and I have a Real Life (tm) as well as hacking
-Valgrind [allegedly :-].  That means I don't have time to waste
-chasing endless bugs in Valgrind.  My emphasis is therefore on doing
-everything as simply as possible, with correctness, stability and
-robustness being the number one priority, more important than
-performance or functionality.  As a result:
-<ul>
-<li>The code is absolutely loaded with assertions, and these are
-    <b>permanently enabled.</b>  I have no plan to remove or disable
-    them later.  Over the past couple of months, as valgrind has
-    become more widely used, they have shown their worth, pulling
-    up various bugs which would otherwise have appeared as
-    hard-to-find segmentation faults.
-    <p>
-    I am of the view that it's acceptable to spend 5% of the total
-    running time of your valgrindified program doing assertion checks
-    and other internal sanity checks.
-<p>
-<li>Aside from the assertions, valgrind contains various sets of
-    internal sanity checks, which get run at varying frequencies
-    during normal operation.  <code>VG_(do_sanity_checks)</code>
-    runs every 1000 basic blocks, which means 500 to 2000 times/second 
-    for typical machines at present.  It checks that Valgrind hasn't
-    overrun its private stack, and does some simple checks on the
-    memory permissions maps.  Once every 25 calls it does some more
-    extensive checks on those maps.  Etc, etc.
-    <p>
-    The following components also have sanity check code, which can
-    be enabled to aid debugging:
-    <ul>
-    <li>The low-level memory-manager
-        (<code>VG_(mallocSanityCheckArena)</code>).  This does a 
-        complete check of all blocks and chains in an arena, which
-        is very slow.  Is not engaged by default.
-    <p>
-    <li>The symbol table reader(s): various checks to ensure
-        uniqueness of mappings; see <code>VG_(read_symbols)</code>
-        for a start.  Is permanently engaged.
-    <p>
-    <li>The A and V bit tracking stuff in <code>vg_memory.c</code>.
-        This can be compiled with cpp symbol
-        <code>VG_DEBUG_MEMORY</code> defined, which removes all the
-        fast, optimised cases, and uses simple-but-slow fallbacks
-        instead.  Not engaged by default.
-    <p>
-    <li>Ditto <code>VG_DEBUG_LEAKCHECK</code>.
-    <p>
-    <li>The JITter parses x86 basic blocks into sequences of 
-        UCode instructions.  It then sanity checks each one with
-        <code>VG_(saneUInstr)</code> and sanity checks the sequence
-        as a whole with <code>VG_(saneUCodeBlock)</code>.  This stuff
-        is engaged by default, and has caught some way-obscure bugs
-        in the simulated CPU machinery in its time.
-    <p>
-    <li>The system call wrapper does
-        <code>VG_(first_and_last_secondaries_look_plausible)</code> after
-        every syscall; this is known to pick up bugs in the syscall
-        wrappers.  Engaged by default.
-    <p>
-    <li>The main dispatch loop, in <code>VG_(dispatch)</code>, checks
-        that translations do not set <code>%ebp</code> to any value
-        different from <code>VG_EBP_DISPATCH_CHECKED</code> or
-        <code>&amp; VG_(baseBlock)</code>.  In effect this test is free,
-        and is permanently engaged.
-    <p>
-    <li>There are a couple of ifdefed-out consistency checks I
-        inserted whilst debugging the new register allocator,
-        <code>vg_do_register_allocation</code>.
-    </ul>
-<p>
-<li>I try to avoid techniques, algorithms, mechanisms, etc, for which
-    I can supply neither a convincing argument that they are correct,
-    nor sanity-check code which might pick up bugs in my
-    implementation.  I don't always succeed in this, but I try.
-    Basically the idea is: avoid techniques which are, in practice,
-    unverifiable, in some sense.   When doing anything, always have in
-    mind: "how can I verify that this is correct?"
-</ul>
-
-<p>
-Some more specific things are:
-
-<ul>
-<li>Valgrind runs in the same namespace as the client, at least from
-    <code>ld.so</code>'s point of view, and it therefore absolutely
-    had better not export any symbol with a name which could clash
-    with that of the client or any of its libraries.  Therefore, all
-    globally visible symbols exported from <code>valgrind.so</code>
-    are defined using the <code>VG_</code> CPP macro.  As you'll see
-    from <code>tool_asm.h</code>, this prepends an arbitrary
-    prefix to the symbol, in order that it be, we hope, globally
-    unique; a sketch of the macro appears after this list.
-    Currently the prefix is <code>vgPlain_</code>.  For
-    convenience there are also <code>VGM_</code>, <code>VGP_</code>
-    and <code>VGOFF_</code>.  All locally defined symbols are declared
-    <code>static</code> and do not appear in the final shared object.
-    <p>
-    To check this, I periodically do 
-    <code>nm valgrind.so | grep " T "</code>, 
-    which shows you all the globally exported text symbols.
-    They should all have an approved prefix, except for those like
-    <code>malloc</code>, <code>free</code>, etc, which we deliberately
-    want to shadow and take precedence over the same names exported
-    from <code>glibc.so</code>, so that valgrind can intercept those
-    calls easily.  Similarly, <code>nm valgrind.so | grep " D "</code>
-    allows you to find any rogue data-segment symbol names.
-<p>
-<li>Valgrind tries, and almost succeeds, in being completely
-    independent of all other shared objects, in particular of
-    <code>glibc.so</code>.  For example, we have our own low-level
-    memory manager in <code>vg_malloc2.c</code>, which is a fairly
-    standard malloc/free scheme augmented with arenas, and
-    <code>vg_mylibc.c</code> exports reimplementations of various bits
-    and pieces you'd normally get from the C library.
-    <p>
-    Why all the hassle?  Because imagine the potential chaos of both
-    the simulated and real CPUs executing in <code>glibc.so</code>.
-    It just seems simpler and cleaner to be completely self-contained,
-    so that only the simulated CPU visits <code>glibc.so</code>.  In
-    practice it's not much hassle anyway.  Also, valgrind starts up
-    before glibc has a chance to initialise itself, and who knows what
-    difficulties that could lead to.  Finally, glibc has definitions
-    for some types, specifically <code>sigset_t</code>, which conflict
-    (are different from) the Linux kernel's idea of same.  When 
-    Valgrind wants to fiddle around with signal stuff, it wants to
-    use the kernel's definitions, not glibc's definitions.  So it's 
-    simplest just to keep glibc out of the picture entirely.
-    <p>
-    To find out which glibc symbols are used by Valgrind, reinstate
-    the link flags <code>-nostdlib -Wl,-no-undefined</code>.  This
-    causes linking to fail, but will tell you what you depend on.
-    I have mostly, but not entirely, got rid of the glibc
-    dependencies; what remains is, IMO, fairly harmless.  AFAIK the
-    current dependencies are: <code>memset</code>,
-    <code>memcmp</code>, <code>stat</code>, <code>system</code>,
-    <code>sbrk</code>, <code>setjmp</code> and <code>longjmp</code>.
-
-<p>
-<li>[Update: this is now out of date;  there are a number of such
-    kernel interface files -- vki*.h -- and now no kernel headers are used by
-    Valgrind at all.  We did this because unfortunately kernel headers
-    are frequently broken, and cannot be relied on.]
-
-    Similarly, valgrind should not really import any headers other
-    than the Linux kernel headers, since it knows of no API other than
-    the kernel interface to talk to.  At the moment this is really not
-    in a good state, and <code>vg_syscall_mem</code> imports, via
-    <code>vg_unsafe.h</code>, a significant number of C-library
-    headers so as to know the sizes of various structs passed across
-    the kernel boundary.  This is of course completely bogus, since
-    there is no guarantee that the C library's definitions of these
-    structs matches those of the kernel.  I have started to sort this
-    out using <code>vg_kerneliface.h</code>, into which I had intended
-    to copy all kernel definitions which valgrind could need, but this
-    has not gotten very far.  At the moment it mostly contains
-    definitions for <code>sigset_t</code> and <code>struct
-    sigaction</code>, since the kernel's definition for these really
-    does clash with glibc's.  I plan to use a <code>vki_</code> prefix
-    on all these types and constants, to denote the fact that they
-    pertain to <b>V</b>algrind's <b>K</b>ernel <b>I</b>nterface.
-    <p>
-    Another advantage of having a <code>vg_kerneliface.h</code> file
-    is that it makes it simpler to interface to a different kernel.
-    One can, for example, easily imagine writing a new
-    <code>vg_kerneliface.h</code> for FreeBSD, or x86 NetBSD.
-
-</ul>
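-
-<p>
-As a sketch of the name-prefixing trick described in the first bullet
-above (the real macro lives in the header mentioned there; the only
-detail relied on here is that the prefix is <code>vgPlain_</code>):
-<pre>
-/* Sketch only: wrap every global symbol in a prefixing macro, so
-   that e.g. VG_(read_symbols) compiles to vgPlain_read_symbols and
-   cannot collide with a symbol exported by the client or glibc.   */
-#define VG_(str)  vgPlain_##str
-
-void VG_(read_symbols) ( void );  /* linker sees vgPlain_read_symbols */
-</pre>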
-
-<h3>Current limitations</h3>
-
-Support for weird (non-POSIX) signal stuff is patchy.  Does anybody
-care?
-<p>
-
-
-
-
-<hr width="100%">
-
-<h2>The instrumenting JITter</h2>
-
-This really is the heart of the matter.  We begin with various side
-issues.
-
-<h3>Run-time storage, and the use of host registers</h3>
-
-Valgrind translates client (original) basic blocks into instrumented
-basic blocks, which live in the translation cache TC, until either the
-client finishes or the translations are ejected from TC to make room
-for newer ones.
-<p>
-Since it generates x86 code in memory, Valgrind has complete control
-of the use of registers in the translations.  Now pay attention.  I
-shall say this only once, and it is important you understand this.  In
-what follows I will refer to registers in the host (real) cpu using
-their standard names, <code>%eax</code>, <code>%edi</code>, etc.  I
-refer to registers in the simulated CPU by capitalising them:
-<code>%EAX</code>, <code>%EDI</code>, etc.  These two sets of
-registers usually bear no direct relationship to each other; there is
-no fixed mapping between them.  This naming scheme is used fairly
-consistently in the comments in the sources.
-<p>
-Host registers, once things are up and running, are used as follows:
-<ul>
-<li><code>%esp</code>, the real stack pointer, points
-    somewhere in Valgrind's private stack area,
-    <code>VG_(stack)</code> or, transiently, into its signal delivery
-    stack, <code>VG_(sigstack)</code>.
-<p>
-<li><code>%edi</code> is used as a temporary in code generation; it
-    is almost always dead, except when used for the <code>Left</code>
-    value-tag operations.
-<p>
-<li><code>%eax</code>, <code>%ebx</code>, <code>%ecx</code>,
-    <code>%edx</code> and <code>%esi</code> are available to
-    Valgrind's register allocator.  They are dead (carry unimportant
-    values) in between translations, and are live only in
-    translations.  The one exception to this is <code>%eax</code>,
-    which, as mentioned far above, has a special significance to the
-    dispatch loop <code>VG_(dispatch)</code>: when a translation
-    returns to the dispatch loop, <code>%eax</code> is expected to
-    contain the original-code-address of the next translation to run.
-    The register allocator is so good at minimising spill code that
-    using five regs and not having to save/restore <code>%edi</code>
-    actually gives better code than allocating to <code>%edi</code>
-    as well, but then having to push/pop it around special uses.
-<p>
-<li><code>%ebp</code> points permanently at
-    <code>VG_(baseBlock)</code>.  Valgrind's translations are
-    position-independent, partly because this is convenient, but also
-    because translations get moved around in TC as part of the LRUing
-    activity.  <b>All</b> static entities which need to be referred to
-    from generated code, whether data or helper functions, are stored
-    starting at <code>VG_(baseBlock)</code> and are therefore reached
-    by indexing from <code>%ebp</code>.  There is but one exception, 
-    which is that by placing the value
-    <code>VG_EBP_DISPATCH_CHECKED</code>
-    in <code>%ebp</code> just before a return to the dispatcher, 
-    the dispatcher is informed that the next address to run, 
-    in <code>%eax</code>, requires special treatment.
-<p>
-<li>The real machine's FPU state is pretty much unimportant, for
-    reasons which will become obvious.  Ditto its <code>%eflags</code>
-    register.
-</ul>
-
-<p>
-The state of the simulated CPU is stored in memory, in
-<code>VG_(baseBlock)</code>, which is a block of 200 words IIRC.
-Recall that <code>%ebp</code> points permanently at the start of this
-block.  Function <code>vg_init_baseBlock</code> decides what the
-offsets of various entities in <code>VG_(baseBlock)</code> are to be,
-and allocates word offsets for them.  The code generator then emits
-<code>%ebp</code> relative addresses to get at those things.  The
-sequence in which entities are allocated has been carefully chosen so
-that the 32 most popular entities come first, because this means 8-bit
-offsets can be used in the generated code.
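-
-<p>
-A minimal sketch of that offset-allocation idea follows.  The names
-here are made up for illustration; only the 200-word array and the
-"first 32 words are the hot ones" policy come from the text above:
-<pre>
-typedef unsigned int UInt;
-
-static UInt baseBlock[200];      /* stands in for VG_(baseBlock)       */
-static UInt next_free_word = 0;  /* next unallocated word offset       */
-
-/* Hand out the next 'n' words and return the starting word offset.
-   The most popular entities are allocated first, so they land in the
-   first 32 words and can be reached with an 8-bit displacement off
-   %ebp in the generated code.                                         */
-static UInt alloc_baseblock_words ( UInt n )
-{
-   UInt off = next_free_word;
-   next_free_word += n;
-   return off;
-}
-
-/* For example, the simulated %EAX might get word offset 0, so that
-   generated code reads it with  movl 0(%ebp), reg  and writes it back
-   with the corresponding store.                                       */
-</pre>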
-
-<p>
-If I was clever, I could make <code>%ebp</code> point 32 words along 
-<code>VG_(baseBlock)</code>, so that I'd have another 32 words of
-short-form offsets available, but that's just complicated, and it's
-not important -- the first 32 words take 99% (or whatever) of the
-traffic.
-
-<p>
-Currently, the sequence of stuff in <code>VG_(baseBlock)</code> is as
-follows:
-<ul>
-<li>9 words, holding the simulated integer registers,
-    <code>%EAX</code> .. <code>%EDI</code>, and the simulated flags,
-    <code>%EFLAGS</code>.
-<p>
-<li>Another 9 words, holding the V bit "shadows" for the above 9 regs.
-<p>
-<li>The <b>addresses</b> of various helper routines called from
-    generated code: 
-    <code>VG_(helper_value_check4_fail)</code>,
-    <code>VG_(helper_value_check0_fail)</code>,
-    which register V-check failures,
-    <code>VG_(helperc_STOREV4)</code>,
-    <code>VG_(helperc_STOREV1)</code>,
-    <code>VG_(helperc_LOADV4)</code>,
-    <code>VG_(helperc_LOADV1)</code>,
-    which do stores and loads of V bits to/from the 
-    sparse array which keeps track of V bits in memory,
-    and
-    <code>VGM_(handle_esp_assignment)</code>, which messes with
-    memory addressibility resulting from changes in <code>%ESP</code>.
-<p>
-<li>The simulated <code>%EIP</code>.
-<p>
-<li>24 spill words, for when the register allocator can't make it work
-    with 5 measly registers.
-<p>
-<li>Addresses of helpers <code>VG_(helperc_STOREV2)</code>,
-    <code>VG_(helperc_LOADV2)</code>.  These are here because 2-byte
-    loads and stores are relatively rare, so are placed above the
-    magic 32-word offset boundary.
-<p>
-<li>For similar reasons, addresses of helper functions 
-    <code>VGM_(fpu_write_check)</code> and
-    <code>VGM_(fpu_read_check)</code>, which handle the A/V maps
-    testing and changes required by FPU writes/reads.  
-<p>
-<li>Some other boring helper addresses:
-    <code>VG_(helper_value_check2_fail)</code> and
-    <code>VG_(helper_value_check1_fail)</code>.  These are probably
-    never emitted now, and should be removed.
-<p>
-<li>The entire state of the simulated FPU, which I believe to be
-    108 bytes long.
-<p>
-<li>Finally, the addresses of various other helper functions in
-    <code>vg_helpers.S</code>, which deal with rare situations which
-    are tedious or difficult to generate code in-line for.
-</ul>
-
-<p>
-As a general rule, the simulated machine's state lives permanently in
-memory at <code>VG_(baseBlock)</code>.  However, the JITter does some
-optimisations which allow the simulated integer registers to be
-cached in real registers over multiple simulated instructions within
-the same basic block.  These are always flushed back into memory at
-the end of every basic block, so that the in-memory state is
-up-to-date between basic blocks.  (This flushing is implied by the
-statement above that the real machine's allocatable registers are
-dead in between simulated blocks).
-
-
-<h3>Startup, shutdown, and system calls</h3>
-
-Getting into Valgrind (<code>VG_(startup)</code>, called from
-<code>valgrind.so</code>'s initialisation section), really means
-copying the real CPU's state into <code>VG_(baseBlock)</code>, and
-then installing our own stack pointer, etc, into the real CPU, and
-then starting up the JITter.  Exiting valgrind involves copying the
-simulated state back to the real state.
-
-<p>
-Unfortunately, there's a complication at startup time.  Problem is
-that at the point where we need to take a snapshot of the real CPU's
-state, the offsets in <code>VG_(baseBlock)</code> are not set up yet,
-because to do so would involve disrupting the real machine's state
-significantly.  The way round this is to dump the real machine's state
-into a temporary, static block of memory,
-<code>VG_(m_state_static)</code>.  We can then set up the
-<code>VG_(baseBlock)</code> offsets at our leisure, and copy into it
-from <code>VG_(m_state_static)</code> at some convenient later time.
-This copying is done by
-<code>VG_(copy_m_state_static_to_baseBlock)</code>.
-
-<p>
-On exit, the inverse transformation is (rather unnecessarily) used:
-stuff in <code>VG_(baseBlock)</code> is copied to
-<code>VG_(m_state_static)</code>, and the assembly stub then copies
-from <code>VG_(m_state_static)</code> into the real machine registers.
-
-<p>
-Doing system calls on behalf of the client (<code>vg_syscall.S</code>)
-is something of a half-way house.  We have to make the world look
-sufficiently like the one the client would normally see, in order to
-make the syscall actually work properly, but we can't afford to lose
-control.  So the trick is to copy all of the client's state, <b>except
-its program counter</b>, into the real CPU, do the system call, and
-copy the state back out.  Note that the client's state includes its
-stack pointer register, so one effect of this partial restoration is
-to cause the system call to be run on the client's stack, as it should
-be.
-
-<p>
-As ever there are complications.  We have to save some of our own state
-somewhere when restoring the client's state into the CPU, so that we
-can keep going sensibly afterwards.  In fact the only thing which is
-important is our own stack pointer, but for paranoia reasons I save 
-and restore our own FPU state as well, even though that's probably
-pointless.
-
-<p>
-The complication on the above complication is, that for horrible
-reasons to do with signals, we may have to handle a second client
-system call whilst the client is blocked inside some other system 
-call (unbelievable!).  That means there are two sets of places to 
-dump Valgrind's stack pointer and FPU state across the syscall,
-and we decide which to use by consulting
-<code>VG_(syscall_depth)</code>, which is in turn maintained by
-<code>VG_(wrap_syscall)</code>.
-
-
-
-<h3>Introduction to UCode</h3>
-
-UCode lies at the heart of the x86-to-x86 JITter.  The basic premise
-is that dealing the the x86 instruction set head-on is just too darn
-complicated, so we do the traditional compiler-writer's trick and
-translate it into a simpler, easier-to-deal-with form.
-
-<p>
-In normal operation, translation proceeds through six stages,
-coordinated by <code>VG_(translate)</code>:
-<ol>
-<li>Parsing of an x86 basic block into a sequence of UCode
-    instructions (<code>VG_(disBB)</code>).
-<p>
-<li>UCode optimisation (<code>vg_improve</code>), with the aim of
-    caching simulated registers in real registers over multiple
-    simulated instructions, and removing redundant simulated
-    <code>%EFLAGS</code> saving/restoring.
-<p>
-<li>UCode instrumentation (<code>vg_instrument</code>), which adds
-    value and address checking code.
-<p>
-<li>Post-instrumentation cleanup (<code>vg_cleanup</code>), removing
-    redundant value-check computations.
-<p>
-<li>Register allocation (<code>vg_do_register_allocation</code>),
-    which, note, is done on UCode.
-<p>
-<li>Emission of final instrumented x86 code
-    (<code>VG_(emit_code)</code>).
-</ol>
-
-<p>
-Notice how steps 2, 3, 4 and 5 are simple UCode-to-UCode
-transformation passes, all on straight-line blocks of UCode (type
-<code>UCodeBlock</code>).  Steps 2 and 4 are optimisation passes and
-can be disabled for debugging purposes, with
-<code>--optimise=no</code> and <code>--cleanup=no</code> respectively.
-
-<p>
-Valgrind can also run in a no-instrumentation mode, given
-<code>--instrument=no</code>.  This is useful for debugging the JITter
-quickly without having to deal with the complexity of the
-instrumentation mechanism too.  In this mode, steps 3 and 4 are
-omitted.
-
-<p>
-These flags combine, so that <code>--instrument=no</code> together with 
-<code>--optimise=no</code> means only steps 1, 5 and 6 are used.
-<code>--single-step=yes</code> causes each x86 instruction to be
-treated as a single basic block.  The translations are terrible but
-this is sometimes instructive.  
-
-<p>
-The <code>--stop-after=N</code> flag switches back to the real CPU
-after <code>N</code> basic blocks.  It also re-JITs the final basic
-block executed and prints the debugging info resulting, so this
-gives you a way to get a quick snapshot of how a basic block looks as
-it passes through the six stages mentioned above.  If you want to 
-see full information for every block translated (probably not, but
-still ...) find, in <code>VG_(translate)</code>, the lines
-<br><code>   dis = True;</code>
-<br><code>   dis = debugging_translation;</code>
-<br>
-and comment out the second line.  This will spew out debugging
-junk faster than you can possibly imagine.
-
-
-
-<h3>UCode operand tags: type <code>Tag</code></h3>
-
-UCode is, more or less, a simple two-address RISC-like code.  In
-keeping with the x86 AT&amp;T assembly syntax, generally speaking the
-first operand is the source operand, and the second is the destination
-operand, which is modified when the uinstr is notionally executed.
-
-<p>
-UCode instructions have up to three operand fields, each of which has
-a corresponding <code>Tag</code> describing it.  Possible values for
-the tag are:
-
-<ul>
-<li><code>NoValue</code>: indicates that the field is not in use.
-<p>
-<li><code>Lit16</code>: the field contains a 16-bit literal.
-<p>
-<li><code>Literal</code>: the field denotes a 32-bit literal, whose
-    value is stored in the <code>lit32</code> field of the uinstr
-    itself.  Since there is only one <code>lit32</code> for the whole
-    uinstr, only one operand field may contain this tag.
-<p>
-<li><code>SpillNo</code>: the field contains a spill slot number, in
-    the range 0 to 23 inclusive, denoting one of the spill slots
-    contained inside <code>VG_(baseBlock)</code>.  Such tags only
-    exist after register allocation.
-<p>
-<li><code>RealReg</code>: the field contains a number in the range 0
-    to 7 denoting an integer x86 ("real") register on the host.  The
-    number is the Intel encoding for integer registers.  Such tags
-    only exist after register allocation.
-<p>
-<li><code>ArchReg</code>: the field contains a number in the range 0
-    to 7 denoting an integer x86 register on the simulated CPU.  In
-    reality this means a reference to one of the first 8 words of
-    <code>VG_(baseBlock)</code>.  Such tags can exist at any point in
-    the translation process.
-<p>
-<li>Last, but not least, <code>TempReg</code>.  The field contains the
-    number of one of an infinite set of virtual (integer)
-    registers. <code>TempReg</code>s are used everywhere throughout
-    the translation process; you can have as many as you want.  The
-    register allocator maps as many as it can into
-    <code>RealReg</code>s and turns the rest into
-    <code>SpillNo</code>s, so <code>TempReg</code>s should not exist
-    after the register allocation phase.
-    <p>
-    <code>TempReg</code>s are always 32 bits long, even if the data
-    they hold is logically shorter.  In that case the upper unused
-    bits are required, and, I think, generally assumed, to be zero.  
-    <code>TempReg</code>s holding V bits for quantities shorter than 
-    32 bits are expected to have ones in the unused places, since a
-    one denotes "undefined".
-</ul>
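-
-<p>
-Collecting those up, a <code>Tag</code> can be pictured as an
-enumeration along the following lines (a sketch which just restates
-the list above; the real declaration lives in the sources):
-<pre>
-typedef enum {
-   NoValue,   /* operand field not in use                          */
-   Lit16,     /* field holds a 16-bit literal                      */
-   Literal,   /* field refers to the uinstr's 32-bit lit32 field   */
-   SpillNo,   /* spill slot number, 0 .. 23, in VG_(baseBlock)     */
-   RealReg,   /* host integer register, Intel encoding, 0 .. 7     */
-   ArchReg,   /* simulated integer register, 0 .. 7                */
-   TempReg    /* virtual register; as many as you like             */
-} Tag;
-</pre>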
-
-
-<h3>UCode instructions: type <code>UInstr</code></h3>
-
-<p>
-UCode was carefully designed to make it possible to do register
-allocation on UCode and then translate the result into x86 code
-without needing any extra registers ... well, that was the original
-plan, anyway.  Things have gotten a little more complicated since
-then.  In what follows, UCode instructions are referred to as uinstrs,
-to distinguish them from x86 instructions.  Uinstrs of course have
-uopcodes which are (naturally) different from x86 opcodes.
-
-<p>
-A uinstr (type <code>UInstr</code>) contains
-various fields, not all of which are used by any one uopcode:
-<ul>
-<li>Three 16-bit operand fields, <code>val1</code>, <code>val2</code>
-    and <code>val3</code>.
-<p>
-<li>Three tag fields, <code>tag1</code>, <code>tag2</code>
-    and <code>tag3</code>.  Each of these has a value of type
-    <code>Tag</code>,
-    and they describe what the <code>val1</code>, <code>val2</code>
-    and <code>val3</code> fields contain.
-<p>
-<li>A 32-bit literal field.
-<p>
-<li>Two <code>FlagSet</code>s, specifying which x86 condition codes are
-    read and written by the uinstr.
-<p>
-<li>An opcode byte, containing a value of type <code>Opcode</code>.
-<p>
-<li>A size field, indicating the data transfer size (1/2/4/8/10) in
-    cases where this makes sense, or zero otherwise.
-<p>
-<li>A condition-code field, which, for jumps, holds a
-    value of type <code>Condcode</code>, indicating the condition
-    which applies.  The encoding is as it is in the x86 insn stream,
-    except we add a 17th value <code>CondAlways</code> to indicate
-    an unconditional transfer.
-<p>
-<li>Various 1-bit flags, indicating whether this insn pertains to an
-    x86 CALL or RET instruction, whether a widening is signed or not,
-    etc.
-</ul>
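-
-<p>
-So a uinstr can be pictured roughly as the following struct.  This is
-only a sketch assembled from the field descriptions above -- the real
-<code>UInstr</code> is packed more carefully and has more 1-bit flags
-than are shown here:
-<pre>
-typedef unsigned int   UInt;
-typedef unsigned short UShort;
-typedef unsigned char  UChar;
-
-typedef struct {
-   UInt   lit32;              /* 32-bit literal, when a tag says Literal */
-   UShort val1, val2, val3;   /* the three operand fields                */
-   UChar  tag1, tag2, tag3;   /* Tags describing val1, val2, val3        */
-   UChar  opcode;             /* an Opcode                               */
-   UChar  size;               /* transfer size: 1/2/4/8/10, or 0         */
-   UChar  cond;               /* a Condcode, for jumps                   */
-   UChar  flags_r, flags_w;   /* FlagSets: condition codes read/written  */
-   UChar  is_call_or_ret;     /* example 1-bit flag (illustrative name)  */
-   UChar  signed_widen;       /* ditto                                   */
-} UInstr;
-</pre>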
-
-<p>
-UOpcodes (type <code>Opcode</code>) are divided into two groups: those
-necessary merely to express the functionality of the x86 code, and
-extra uopcodes needed to express the instrumentation.  The former
-group contains:
-<ul>
-<li><code>GET</code> and <code>PUT</code>, which move values from the
-    simulated CPU's integer registers (<code>ArchReg</code>s) into
-    <code>TempReg</code>s, and back.  <code>GETF</code> and
-    <code>PUTF</code> do the corresponding thing for the simulated
-    <code>%EFLAGS</code>.  There are no corresponding insns for the
-    FPU register stack, since we don't explicitly simulate its
-    registers.
-<p>
-<li><code>LOAD</code> and <code>STORE</code>, which, in RISC-like
-    fashion, are the only uinstrs able to interact with memory.
-<p>
-<li><code>MOV</code> and <code>CMOV</code> allow unconditional and
-    conditional moves of values between <code>TempReg</code>s.
-<p>
-<li>ALU operations.  Again in RISC-like fashion, these only operate on
-    <code>TempReg</code>s (before reg-alloc) or <code>RealReg</code>s
-    (after reg-alloc).  These are: <code>ADD</code>, <code>ADC</code>,
-    <code>AND</code>, <code>OR</code>, <code>XOR</code>,
-    <code>SUB</code>, <code>SBB</code>, <code>SHL</code>,
-    <code>SHR</code>, <code>SAR</code>, <code>ROL</code>,
-    <code>ROR</code>, <code>RCL</code>, <code>RCR</code>,
-    <code>NOT</code>, <code>NEG</code>, <code>INC</code>,
-    <code>DEC</code>, <code>BSWAP</code>, <code>CC2VAL</code> and
-    <code>WIDEN</code>.  <code>WIDEN</code> does signed or unsigned
-    value widening.  <code>CC2VAL</code> is used to convert condition
-    codes into a value, zero or one.  The rest are obvious.
-    <p>
-    To allow for more efficient code generation, we bend slightly the
-    restriction at the start of the previous para: for
-    <code>ADD</code>, <code>ADC</code>, <code>XOR</code>,
-    <code>SUB</code> and <code>SBB</code>, we allow the first (source)
-    operand to also be an <code>ArchReg</code>, that is, one of the
-    simulated machine's registers.  Also, many of these ALU ops allow
-    the source operand to be a literal.  See
-    <code>VG_(saneUInstr)</code> for the final word on the allowable
-    forms of uinstrs.
-<p>
-<li><code>LEA1</code> and <code>LEA2</code> are not strictly
-    necessary, but facilitate better translations.  They
-    record the fancy x86 addressing modes in a direct way, which
-    allows those amodes to be emitted back into the final
-    instruction stream more or less verbatim.
-<p>
-<li><code>CALLM</code> calls a machine-code helper, one of the methods
-    whose address is stored at some <code>VG_(baseBlock)</code>
-    offset.  <code>PUSH</code> and <code>POP</code> move values
-    to/from <code>TempReg</code> to the real (Valgrind's) stack, and
-    <code>CLEAR</code> removes values from the stack.
-    <code>CALLM_S</code> and <code>CALLM_E</code> delimit the
-    boundaries of call setups and clearings, for the benefit of the
-    instrumentation passes.  Getting this right is critical, and so
-    <code>VG_(saneUCodeBlock)</code> makes various checks on the use
-    of these uopcodes.
-    <p>
-    It is important to understand that these uopcodes have nothing to
-    do with the x86 <code>call</code>, <code>return</code>,
-    <code>push</code> or <code>pop</code> instructions, and are not
-    used to implement them.  Those guys turn into combinations of
-    <code>GET</code>, <code>PUT</code>, <code>LOAD</code>,
-    <code>STORE</code>, <code>ADD</code>, <code>SUB</code>, and
-    <code>JMP</code>.  What these uopcodes support is calling of
-    helper functions such as <code>VG_(helper_imul_32_64)</code>,
-    which do stuff which is too difficult or tedious to emit inline.
-<p>
-<li><code>FPU</code>, <code>FPU_R</code> and <code>FPU_W</code>.
-    Valgrind doesn't attempt to simulate the internal state of the
-    FPU at all.  Consequently it only needs to be able to distinguish
-    FPU ops which read and write memory from those that don't, and
-    for those which do, it needs to know the effective address and
-    data transfer size.  This is made easier because the x86 FP
-    instruction encoding is very regular, basically consisting of
-    16 bits for a non-memory FPU insn and 11 (IIRC) bits + an address mode
-    for a memory FPU insn.  So our <code>FPU</code> uinstr carries
-    the 16 bits in its <code>val1</code> field.  And
-    <code>FPU_R</code> and <code>FPU_W</code> carry 11 bits in that
-    field, together with the identity of a <code>TempReg</code> or
-    (later) <code>RealReg</code> which contains the address.
-<p>
-<li><code>JIFZ</code> is unique, in that it allows a control-flow
-    transfer which is not deemed to end a basic block.  It causes a
-    jump to a literal (original) address if the specified argument
-    is zero.
-<p>
-<li>Finally, <code>INCEIP</code> advances the simulated
-    <code>%EIP</code> by the specified literal amount.  This supports
-    lazy <code>%EIP</code> updating, as described below.
-</ul>
-
-<p>
-Stages 1 and 2 of the 6-stage translation process mentioned above
-deal purely with these uopcodes, and no others.  They are
-sufficient to express pretty much all the x86 32-bit protected-mode 
-instruction set, at
-least everything understood by a pre-MMX original Pentium (P54C). 
-
-<p>
-Stages 3, 4, 5 and 6 also deal with the following extra
-"instrumentation" uopcodes.  They are used to express all the
-definedness-tracking and -checking machinery which valgrind does.  In
-later sections we show how to create checking code for each of the
-uopcodes above.  Note that these instrumentation uopcodes, although
-some of them appear complicated, have been carefully chosen so that
-efficient x86 code can be generated for them.  GNU superopt v2.5 did a
-great job helping out here.  Anyways, the uopcodes are as follows:
-
-<ul>
-<li><code>GETV</code> and <code>PUTV</code> are analogues to
-    <code>GET</code> and <code>PUT</code> above.  They are identical
-    except that they move the V bits for the specified values back and
-    forth to <code>TempRegs</code>, rather than moving the values
-    themselves.
-<p>
-<li>Similarly, <code>LOADV</code> and <code>STOREV</code> read and
-    write V bits from the synthesised shadow memory that Valgrind
-    maintains.  In fact they do more than that, since they also do
-    address-validity checks, and emit complaints if the read/written
-    addresses are unaddressible.
-<p>
-<li><code>TESTV</code>, whose parameters are a <code>TempReg</code>
-    and a size, tests the V bits in the <code>TempReg</code>, at the
-    specified operation size (0/1/2/4 byte) and emits an error if any
-    of them indicate undefinedness.  This is the only uopcode capable
-    of doing such tests.
-<p>
-<li><code>SETV</code>, whose parameters are also a <code>TempReg</code>
-    and a size, makes the V bits in the <code>TempReg</code> indicate
-    definedness, at the specified operation size.  This is usually
-    used to generate the correct V bits for a literal value, which is
-    of course fully defined.
-<p>
-<li><code>GETVF</code> and <code>PUTVF</code> are analogues to
-    <code>GETF</code> and <code>PUTF</code>.  They move the single V
-    bit used to model definedness of <code>%EFLAGS</code> between its
-    home in <code>VG_(baseBlock)</code> and the specified
-    <code>TempReg</code>.
-<p>
-<li><code>TAG1</code> denotes one of a family of unary operations on
-    <code>TempReg</code>s containing V bits.  Similarly,
-    <code>TAG2</code> denotes one in a family of binary operations on
-    V bits.
-</ul>
-
-<p>
-These 10 uopcodes are sufficient to express Valgrind's entire
-definedness-checking semantics.  In fact most of the interesting magic
-is done by the <code>TAG1</code> and <code>TAG2</code>
-suboperations.
-
-<p>
-First, however, I need to explain about V-vector operation sizes.
-There are 4 sizes: 1, 2 and 4, which operate on groups of 8, 16 and 32
-V bits at a time, supporting the usual 1, 2 and 4 byte x86 operations.
-However there is also the mysterious size 0, which really means a
-single V bit.  Single V bits are used in various circumstances; in
-particular, the definedness of <code>%EFLAGS</code> is modelled with a
-single V bit.  Now might be a good time to also point out that for
-V bits, 1 means "undefined" and 0 means "defined".  Similarly, for A
-bits, 1 means "invalid address" and 0 means "valid address".  This
-seems counterintuitive (and so it is), but testing against zero on
-x86s saves instructions compared to testing against all 1s, because
-many ALU operations set the Z flag for free, so to speak.
-
-<p>
-With that in mind, the tag ops are:
-
-<ul>
-<li><b>(UNARY) Pessimising casts</b>: <code>VgT_PCast40</code>,
-    <code>VgT_PCast20</code>, <code>VgT_PCast10</code>,
-    <code>VgT_PCast01</code>, <code>VgT_PCast02</code> and
-    <code>VgT_PCast04</code>.  A "pessimising cast" takes a V-bit
-    vector at one size, and creates a new one at another size,
-    pessimised in the sense that if any of the bits in the source
-    vector indicate undefinedness, then all the bits in the result
-    indicate undefinedness.  In this case the casts are all to or from
-    a single V bit, so for example <code>VgT_PCast40</code> is a
-    pessimising cast from 32 bits to 1, whereas
-    <code>VgT_PCast04</code> simply copies the single source V bit
-    into all 32 bit positions in the result.  Surprisingly, these ops
-    can all be implemented very efficiently (a sketch of how appears
-    after this list).
-    <p>
-    There are also the pessimising casts <code>VgT_PCast14</code>,
-    from 8 bits to 32, <code>VgT_PCast12</code>, from 8 bits to 16,
-    and <code>VgT_PCast11</code>, from 8 bits to 8.  This last one
-    seems nonsensical, but in fact it isn't a no-op because, as
-    mentioned above, any undefined (1) bits in the source infect the
-    entire result.
-<p>
-<li><b>(UNARY) Propagating undefinedness upwards in a word</b>:
-    <code>VgT_Left4</code>, <code>VgT_Left2</code> and
-    <code>VgT_Left1</code>.  These are used to simulate the worst-case
-    effects of carry propagation in adds and subtracts.  They return a
-    V vector identical to the original, except that if the original
-    contained any undefined bits, then it and all bits above it are
-    marked as undefined too.  Hence the Left bit in the names.
-<p>
-<li><b>(UNARY) Signed and unsigned value widening</b>:
-     <code>VgT_SWiden14</code>, <code>VgT_SWiden24</code>,
-     <code>VgT_SWiden12</code>, <code>VgT_ZWiden14</code>,
-     <code>VgT_ZWiden24</code> and <code>VgT_ZWiden12</code>.  These
-     mimic the definedness effects of standard signed and unsigned
-     integer widening.  Unsigned widening creates zero bits in the new
-     positions, so <code>VgT_ZWiden*</code> accordingly mark
-     those parts of their argument as defined.  Signed widening copies
-     the sign bit into the new positions, so <code>VgT_SWiden*</code>
-     copies the definedness of the sign bit into the new positions.
-     Because 1 means undefined and 0 means defined, these operations
-     can (fascinatingly) be done by the same operations which they
-     mimic.  Go figure.
-<p>
-<li><b>(BINARY) Undefined-if-either-Undefined,
-     Defined-if-either-Defined</b>: <code>VgT_UifU4</code>,
-     <code>VgT_UifU2</code>, <code>VgT_UifU1</code>,
-     <code>VgT_UifU0</code>, <code>VgT_DifD4</code>,
-     <code>VgT_DifD2</code>, <code>VgT_DifD1</code>.  These do simple
-     bitwise operations on pairs of V-bit vectors, with
-     <code>UifU</code> giving undefined if either arg bit is
-     undefined, and <code>DifD</code> giving defined if either arg bit
-     is defined.  Abstract interpretation junkies, if any make it this
-     far, may like to think of them as meets and joins (or is it joins
-     and meets) in the definedness lattices.  
-<p>
-<li><b>(BINARY; one value operand, one V-bit operand) Generate argument improvement
-    terms for AND and OR</b>: <code>VgT_ImproveAND4_TQ</code>,
-    <code>VgT_ImproveAND2_TQ</code>, <code>VgT_ImproveAND1_TQ</code>,
-    <code>VgT_ImproveOR4_TQ</code>, <code>VgT_ImproveOR2_TQ</code>,
-    <code>VgT_ImproveOR1_TQ</code>.  These help out with AND and OR
-    operations.  AND and OR have the inconvenient property that the
-    definedness of the result depends on the actual values of the
-    arguments as well as their definedness.  At the bit level:
-    <br><code>1 AND undefined = undefined</code>, but 
-    <br><code>0 AND undefined = 0</code>, and similarly 
-    <br><code>0 OR  undefined = undefined</code>, but 
-    <br><code>1 OR  undefined = 1</code>.
-    <br>
-    <p>
-    It turns out that gcc (quite legitimately) generates code which
-    relies on this fact, so we have to model it properly in order to
-    avoid flooding users with spurious value errors.  The ultimate
-    definedness result of AND and OR is calculated using
-    <code>UifU</code> on the definedness of the arguments, but we
-    also <code>DifD</code> in some "improvement" terms which 
-    take into account the above phenomena.  
-    <p>
-    <code>ImproveAND</code> takes as its first argument the actual
-    value of an argument to AND (the T) and the definedness of that
-    argument (the Q), and returns a V-bit vector which is defined (0)
-    for bits which have value 0 and are defined; this, when
-    <code>DifD</code> into the final result causes those bits to be
-    defined even if the corresponding bit in the other argument is undefined.
-    <p>
-    The <code>ImproveOR</code> ops do the dual thing for OR
-    arguments.  Note that XOR does not have this property that one
-    argument can make the other irrelevant, so there is no need for
-    such complexity for XOR.
-</ul>
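-
-<p>
-As promised above, here is a sketch of why these ops are cheap.  The
-functions below work on 32-bit V-bit vectors held in ordinary words,
-with the single-V-bit (size 0) case held as 0 or 1 in a whole word;
-they are illustrations of the underlying identities, not the code
-Valgrind actually emits:
-<pre>
-typedef unsigned int UInt;
-
-/* UifU: a bit is undefined (1) if it is undefined in either input.  */
-static UInt UifU4 ( UInt qa, UInt qb ) { return qa | qb; }
-
-/* DifD: a bit is defined (0) if it is defined in either input.      */
-static UInt DifD4 ( UInt qa, UInt qb ) { return qa &amp; qb; }
-
-/* PCast40: 32 V bits down to one -- undefined if any source bit is. */
-static UInt PCast40 ( UInt q ) { return q == 0 ? 0u : 1u; }
-
-/* PCast04: copy the single V bit into all 32 positions.             */
-static UInt PCast04 ( UInt q ) { return 0u - q; }
-
-/* Left4: any undefined bit makes itself and all bits above it
-   undefined -- the worst case of carry propagation in an add.       */
-static UInt Left4 ( UInt q ) { return q | (0u - q); }
-
-/* ImproveAND4_TQ: given an AND argument's actual value t and its
-   definedness q, return a vector which is defined (0) exactly where
-   t has a defined 0 bit; DifD'ing this into the result makes those
-   bits defined regardless of the other argument.                    */
-static UInt ImproveAND4_TQ ( UInt t, UInt q ) { return t | q; }
-</pre>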
-
-<p>
-That's all the tag ops.  If you stare at this long enough, and then
-run Valgrind and stare at the pre- and post-instrumented ucode, it
-should be fairly obvious how the instrumentation machinery hangs
-together.
-
-<p>
-One point, if you do this: in order to make it easy to differentiate
-<code>TempReg</code>s carrying values from <code>TempReg</code>s
-carrying V bit vectors, Valgrind prints the former as (for example)
-<code>t28</code> and the latter as <code>q28</code>; the fact that
-they carry the same number serves to indicate their relationship.
-This is purely for the convenience of the human reader; the register
-allocator and code generator don't regard them as different.
-
-
-<h3>Translation into UCode</h3>
-
-<code>VG_(disBB)</code> allocates a new <code>UCodeBlock</code> and
-then uses <code>disInstr</code> to translate x86 instructions one at a
-time into UCode, dumping the result in the <code>UCodeBlock</code>.
-This goes on until a control-flow transfer instruction is encountered.
-
-<p>
-Despite the large size of <code>vg_to_ucode.c</code>, this translation
-is really very simple.  Each x86 instruction is translated entirely
-independently of its neighbours, merrily allocating new
-<code>TempReg</code>s as it goes.  The idea is to have a simple
-translator -- in reality, no more than a macro-expander -- and the
-resulting bad UCode translation is cleaned up by the UCode
-optimisation phase which follows.  To give you an idea of some x86
-instructions and their translations (this is a complete basic block,
-as Valgrind sees it):
-<pre>
-        0x40435A50:  incl %edx
-
-           0: GETL      %EDX, t0
-           1: INCL      t0  (-wOSZAP)
-           2: PUTL      t0, %EDX
-
-        0x40435A51:  movsbl (%edx),%eax
-
-           3: GETL      %EDX, t2
-           4: LDB       (t2), t2
-           5: WIDENL_Bs t2
-           6: PUTL      t2, %EAX
-
-        0x40435A54:  testb $0x20, 1(%ecx,%eax,2)
-
-           7: GETL      %EAX, t6
-           8: GETL      %ECX, t8
-           9: LEA2L     1(t8,t6,2), t4
-          10: LDB       (t4), t10
-          11: MOVB      $0x20, t12
-          12: ANDB      t12, t10  (-wOSZACP)
-          13: INCEIPo   $9
-
-        0x40435A59:  jnz-8 0x40435A50
-
-          14: Jnzo      $0x40435A50  (-rOSZACP)
-          15: JMPo      $0x40435A5B
-</pre>
-
-<p>
-Notice how the block always ends with an unconditional jump to the
-next block.  This is a bit unnecessary, but makes many things simpler.
-
-<p>
-Most x86 instructions turn into sequences of <code>GET</code>,
-<code>PUT</code>, <code>LEA1</code>, <code>LEA2</code>,
-<code>LOAD</code> and <code>STORE</code>.  Some complicated ones
-however rely on calling helper bits of code in 
-<code>vg_helpers.S</code>.  The ucode instructions <code>PUSH</code>,
-<code>POP</code>, <code>CALL</code>, <code>CALLM_S</code> and
-<code>CALLM_E</code> support this.  The calling convention is somewhat
-ad-hoc and is not the C calling convention.  The helper routines must 
-save all integer registers, and the flags, that they use.  Args are
-passed on the stack underneath the return address, as usual, and if 
-results are to be returned, they are either placed in dummy arg
-slots created by the ucode <code>PUSH</code> sequence, or just
-overwrite the incoming args.
-
-<p>
-In order that the instrumentation mechanism can handle calls to these
-helpers, <code>VG_(saneUCodeBlock)</code> enforces the following
-restrictions on calls to helpers:
-
-<ul>
-<li>Each <code>CALL</code> uinstr must be bracketed by a preceding
-    <code>CALLM_S</code> marker (dummy uinstr) and a trailing
-    <code>CALLM_E</code> marker.  These markers are used by the
-    instrumentation mechanism later to establish the boundaries of the
-    <code>PUSH</code>, <code>POP</code> and <code>CLEAR</code>
-    sequences for the call.
-<p>
-<li><code>PUSH</code>, <code>POP</code> and <code>CLEAR</code>
-    may only appear inside sections bracketed by <code>CALLM_S</code>
-    and <code>CALLM_E</code>, and nowhere else.
-<p>
-<li>In any such bracketed section, no two <code>PUSH</code> insns may
-    push the same <code>TempReg</code>.  Dually, no two
-    <code>POP</code>s may pop the same <code>TempReg</code>.
-<p>
-<li>Finally, although this is not checked, args should be removed from
-    the stack with <code>CLEAR</code>, rather than <code>POP</code>s
-    into a <code>TempReg</code> which is not subsequently used.  This
-    is because the instrumentation mechanism assumes that all values
-    <code>POP</code>ped from the stack are actually used.
-</ul>
-
-Some of the translations may appear to have redundant
-<code>TempReg</code>-to-<code>TempReg</code> moves.  This helps the
-next phase, UCode optimisation, to generate better code.
-
-
-
-<h3>UCode optimisation</h3>
-
-UCode is then subjected to an improvement pass
-(<code>vg_improve()</code>), which blurs the boundaries between the
-translations of the original x86 instructions.  It's pretty
-straightforward.  Three transformations are done:
-
-<ul>
-<li>Redundant <code>GET</code> elimination.  Actually, more general
-    than that -- eliminates redundant fetches of ArchRegs.  In our
-    running example, uinstr 3 <code>GET</code>s <code>%EDX</code> into
-    <code>t2</code> despite the fact that, by looking at the previous
-    uinstr, it is already in <code>t0</code>.  The <code>GET</code> is
-    therefore removed, and <code>t2</code> renamed to <code>t0</code>.
-    Assuming <code>t0</code> is allocated to a host register, it means
-    the simulated <code>%EDX</code> will exist in a host CPU register
-    for more than one simulated x86 instruction, which seems to me to
-    be a highly desirable property.  (A sketch of the bookkeeping
-    involved appears after this list.)
-    <p>
-    There is some mucking around to do with subregisters;
-    <code>%AL</code> vs <code>%AH</code>, <code>%AX</code> vs
-    <code>%EAX</code> etc.  I can't remember how it works, but in
-    general we are very conservative, and these tend to invalidate the
-    caching. 
-<p>
-<li>Redundant <code>PUT</code> elimination.  This annuls
-    <code>PUT</code>s of values back to simulated CPU registers if a
-    later <code>PUT</code> would overwrite the earlier
-    <code>PUT</code> value, and there are no intervening reads of the
-    simulated register (<code>ArchReg</code>).
-    <p>
-    As before, we are paranoid when faced with subregister references.
-    Also, <code>PUT</code>s of <code>%ESP</code> are never annulled,
-    because it is vital the instrumenter always has an up-to-date
-    <code>%ESP</code> value available, since <code>%ESP</code> changes
-    affect addressibility of the memory around the simulated stack
-    pointer.
-    <p>
-    The implication of the above paragraph is that the simulated
-    machine's registers are only lazily updated once the above two
-    optimisation phases have run, with the exception of
-    <code>%ESP</code>.  <code>TempReg</code>s go dead at the end of
-    every basic block, from which it is inferrable that any
-    <code>TempReg</code> caching a simulated CPU reg is flushed (back
-    into the relevant <code>VG_(baseBlock)</code> slot) at the end of
-    every basic block.  The further implication is that the simulated
-    registers are only up-to-date in between basic blocks, and not
-    at arbitrary points inside basic blocks.  And the consequence of
-    that is that we can only deliver signals to the client in between
-    basic blocks.  None of this seems any problem in practice.
-<p>
-<li>Finally there is a simple def-use thing for condition codes.  If
-    an earlier uinstr writes the condition codes, and the next uinstr
-    along which touches the condition codes writes the same or a
-    larger set of them but does not read any, the earlier uinstr is
-    marked as not writing any condition codes.  This saves 
-    a lot of redundant cond-code saving and restoring.
-</ul>
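-
-<p>
-The bookkeeping behind the first of these (redundant <code>GET</code>
-elimination) can be sketched as follows; this ignores the subregister
-complications mentioned above, and the names are made up purely for
-illustration:
-<pre>
-#define N_ARCH_REGS   8
-#define NOT_CACHED  (-1)
-
-/* For each simulated integer register, the TempReg currently caching
-   it, or NOT_CACHED.  Reset at the start of every basic block, and
-   invalidated whenever we can no longer be sure the mapping holds
-   (subregister writes, etc).                                        */
-static int cache_of_archreg[N_ARCH_REGS];
-
-/* Called for each  GET archreg, tN  uinstr.  Returns the TempReg to
-   use in place of tN from here on; if that differs from tN, the GET
-   itself is deleted and tN renamed to the returned TempReg.         */
-static int handle_GET ( int archreg, int tN )
-{
-   int cached = cache_of_archreg[archreg];
-   if (cached != NOT_CACHED)
-      return cached;                 /* value already in a TempReg   */
-   cache_of_archreg[archreg] = tN;   /* first fetch: keep this GET   */
-   return tN;
-}
-</pre>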
-
-The effect of these transformations on our short block is rather
-unexciting, and shown below.  On longer basic blocks they can
-dramatically improve code quality.
-
-<pre>
-at 3: delete GET, rename t2 to t0 in (4 .. 6)
-at 7: delete GET, rename t6 to t0 in (8 .. 9)
-at 1: annul flag write OSZAP due to later OSZACP
-
-Improved code:
-           0: GETL      %EDX, t0
-           1: INCL      t0
-           2: PUTL      t0, %EDX
-           4: LDB       (t0), t0
-           5: WIDENL_Bs t0
-           6: PUTL      t0, %EAX
-           8: GETL      %ECX, t8
-           9: LEA2L     1(t8,t0,2), t4
-          10: LDB       (t4), t10
-          11: MOVB      $0x20, t12
-          12: ANDB      t12, t10  (-wOSZACP)
-          13: INCEIPo   $9
-          14: Jnzo      $0x40435A50  (-rOSZACP)
-          15: JMPo      $0x40435A5B
-</pre>
-
-<h3>UCode instrumentation</h3>
-
-Once you understand the meaning of the instrumentation uinstrs,
-discussed in detail above, the instrumentation scheme is fairly
-straightforward.  Each uinstr is instrumented in isolation, and the
-instrumentation uinstrs are placed before the original uinstr.
-Our running example continues below.  I have placed a blank line 
-after every original ucode, to make it easier to see which
-instrumentation uinstrs correspond to which originals.
-
-<p>
-As mentioned somewhere above, <code>TempReg</code>s carrying values 
-have names like <code>t28</code>, and each one has a shadow carrying
-its V bits, with names like <code>q28</code>.  This pairing aids in
-reading instrumented ucode.
-
-<p>
-One decision about all this is where to have "observation points",
-that is, where to check that V bits are valid.  I use a minimalistic
-scheme, only checking where a failure of validity could cause the 
-original program to (seg)fault.  So the use of values as memory
-addresses causes a check, as do conditional jumps (these cause a check
-on the definedness of the condition codes).  And arguments
-<code>PUSH</code>ed for helper calls are checked, hence the weird
-restrictions on helper-call preambles described above.
-
-<p>
-Another decision is that once a value is tested, it is thereafter
-regarded as defined, so that we do not emit multiple undefined-value
-errors for the same undefined value.  That means that
-<code>TESTV</code> uinstrs are always followed by <code>SETV</code> 
-on the same (shadow) <code>TempReg</code>s.  Most of these
-<code>SETV</code>s are redundant and are removed by the
-post-instrumentation cleanup phase.
-
-<p>
-The instrumentation for calling helper functions deserves further
-comment.  The definedness of results from a helper is modelled using
-just one V bit.  So, in short, we do pessimising casts of the
-definedness of all the args, down to a single bit, and then
-<code>UifU</code> these bits together.  So this single V bit will say
-"undefined" if any part of any arg is undefined.  This V bit is then
-pessimally cast back up to the result(s) sizes, as needed.  If, by
-seeing that all the args are got rid of with <code>CLEAR</code> and
-none with <code>POP</code>, Valgrind sees that the result of the call
-is not actually used, it immediately examines the result V bit with a
-<code>TESTV</code> -- <code>SETV</code> pair.  If it did not do this,
-there would be no observation point to detect that some of the
-args to the helper were undefined.  Of course, if the helper's results
-are indeed used, we don't do this, since the result usage will
-presumably cause the result definedness to be checked at some suitable
-future point.
-
-<p>
-In general Valgrind tries to track definedness on a bit-for-bit basis,
-but as the above para shows, for calls to helpers we throw in the
-towel and approximate down to a single bit.  This is because it's too
-complex and difficult to track bit-level definedness through complex
-ops such as integer multiply and divide, and in any case there are no
-reasonable code fragments which attempt to (eg) multiply two
-partially-defined values and end up with something meaningful, so
-there seems little point in modelling multiplies, divides, etc, in
-that level of detail.
-
-<p>
-Integer loads and stores are instrumented with firstly a test of the
-definedness of the address, followed by a <code>LOADV</code> or
-<code>STOREV</code> respectively.  These turn into calls to 
-(for example) <code>VG_(helperc_LOADV4)</code>.  These helpers do two
-things: they perform an address-valid check, and they load or store V
-bits from/to the relevant address in the (simulated V-bit) memory.
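-
-<p>
-In outline, the 4-byte load helper has the shape below.  The three
-lower-level routines are invented names standing in for the real A/V
-map machinery in <code>vg_memory.c</code>, which is considerably more
-elaborate (and much faster):
-<pre>
-typedef unsigned int  UInt;
-typedef unsigned char Bool;
-
-extern Bool addr4_is_addressible ( UInt addr );  /* all 4 A bits OK?  */
-extern UInt read_vbytes4         ( UInt addr );  /* fetch 32 V bits   */
-extern void report_addr_error    ( UInt addr, UInt size );
-
-/* Sketch of a LOADV4-style helper: complain if the address is bad,
-   then hand back the V bits for those 4 bytes.                       */
-UInt sketch_helperc_LOADV4 ( UInt addr )
-{
-   if (!addr4_is_addressible(addr))
-      report_addr_error(addr, 4);
-   return read_vbytes4(addr);
-}
-</pre>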
-
-<p>
-FPU loads and stores are different.  As above the definedness of the
-address is first tested.  However, the helper routine for FPU loads
-(<code>VGM_(fpu_read_check)</code>) emits an error if either the
-address is invalid or the referenced area contains undefined values.
-It has to do this because we do not simulate the FPU at all, and so
-cannot track definedness of values loaded into it from memory, so we
-have to check them as soon as they are loaded into the FPU, ie, at
-this point.  We notionally assume that everything in the FPU is
-defined.
-
-<p>
-It follows therefore that FPU writes first check the definedness of
-the address, then the validity of the address, and finally mark the
-written bytes as well-defined.
-
-<p>
-If anyone is inspired to extend Valgrind to MMX/SSE insns, I suggest
-you use the same trick.  It works provided that the FPU/MMX unit is
-not used merely as a conduit to copy partially undefined data from
-one place in memory to another.  Unfortunately the integer CPU is used
-like that (when copying C structs with holes, for example) and this is
-the cause of much of the elaborateness of the instrumentation here
-described.
-
-<p>
-<code>vg_instrument()</code> in <code>vg_translate.c</code> actually
-does the instrumentation.  There are comments explaining how each
-uinstr is handled, so we do not repeat that here.  As explained
-already, it is bit-accurate, except for calls to helper functions.
-Unfortunately the x86 insns <code>bt/bts/btc/btr</code> are done by
-helper fns, so bit-level accuracy is lost there.  This should be fixed
-by doing them inline; it will probably require adding a couple of new
-uinstrs.  Also, left and right rotates through the carry flag (x86
-<code>rcl</code> and <code>rcr</code>) are approximated via a single
-V bit; so far this has not caused anyone to complain.  The
-non-carry rotates, <code>rol</code> and <code>ror</code>, are much
-more common and are done exactly.  Re-visiting the instrumentation for
-AND and OR, it seems rather verbose, and I wonder if it could be done
-more concisely now.
-
-<p>
-The lowercase <code>o</code> on many of the uopcodes in the running
-example indicates that the size field is zero, usually meaning a
-single-bit operation.
-
-<p>
-Anyroads, the post-instrumented version of our running example looks
-like this:
-
-<pre>
-Instrumented code:
-           0: GETVL     %EDX, q0
-           1: GETL      %EDX, t0
-
-           2: TAG1o     q0 = Left4 ( q0 )
-           3: INCL      t0
-
-           4: PUTVL     q0, %EDX
-           5: PUTL      t0, %EDX
-
-           6: TESTVL    q0
-           7: SETVL     q0
-           8: LOADVB    (t0), q0
-           9: LDB       (t0), t0
-
-          10: TAG1o     q0 = SWiden14 ( q0 )
-          11: WIDENL_Bs t0
-
-          12: PUTVL     q0, %EAX
-          13: PUTL      t0, %EAX
-
-          14: GETVL     %ECX, q8
-          15: GETL      %ECX, t8
-
-          16: MOVL      q0, q4
-          17: SHLL      $0x1, q4
-          18: TAG2o     q4 = UifU4 ( q8, q4 )
-          19: TAG1o     q4 = Left4 ( q4 )
-          20: LEA2L     1(t8,t0,2), t4
-
-          21: TESTVL    q4
-          22: SETVL     q4
-          23: LOADVB    (t4), q10
-          24: LDB       (t4), t10
-
-          25: SETVB     q12
-          26: MOVB      $0x20, t12
-
-          27: MOVL      q10, q14
-          28: TAG2o     q14 = ImproveAND1_TQ ( t10, q14 )
-          29: TAG2o     q10 = UifU1 ( q12, q10 )
-          30: TAG2o     q10 = DifD1 ( q14, q10 )
-          31: MOVL      q12, q14
-          32: TAG2o     q14 = ImproveAND1_TQ ( t12, q14 )
-          33: TAG2o     q10 = DifD1 ( q14, q10 )
-          34: MOVL      q10, q16
-          35: TAG1o     q16 = PCast10 ( q16 )
-          36: PUTVFo    q16
-          37: ANDB      t12, t10  (-wOSZACP)
-
-          38: INCEIPo   $9
-
-          39: GETVFo    q18
-          40: TESTVo    q18
-          41: SETVo     q18
-          42: Jnzo      $0x40435A50  (-rOSZACP)
-
-          43: JMPo      $0x40435A5B
-</pre>
-
-
-<h3>UCode post-instrumentation cleanup</h3>
-
-<p>
-This pass, coordinated by <code>vg_cleanup()</code>, removes redundant
-definedness computation created by the simplistic instrumentation
-pass.  It consists of two passes,
-<code>vg_propagate_definedness()</code> followed by
-<code>vg_delete_redundant_SETVs</code>.
-
-<p>
-<code>vg_propagate_definedness()</code> is a simple
-constant-propagation and constant-folding pass.  It tries to determine
-which <code>TempReg</code>s containing V bits will always indicate
-"fully defined", and it propagates this information as far as it can,
-and folds out as many operations as possible.  For example, the
-instrumentation for an ADD of a literal to a variable quantity will be
-reduced down so that the definedness of the result is simply the
-definedness of the variable quantity, since the literal is by
-definition fully defined.
-
-<p>
-<code>vg_delete_redundant_SETVs</code> removes <code>SETV</code>s on
-shadow <code>TempReg</code>s for which the next action is a write.
-I don't think there's anything else worth saying about this; it is
-simple.  Read the sources for details.
-
-<p>
-So the cleaned-up running example looks like this.  As above, I have
-inserted line breaks after every original (non-instrumentation) uinstr
-to aid readability.  As with straightforward ucode optimisation, the
-results in this block are undramatic because it is so short; longer
-blocks benefit more because they have more redundancy which gets
-eliminated.
-
-
-<pre>
-at 29: delete UifU1 due to defd arg1
-at 32: change ImproveAND1_TQ to MOV due to defd arg2
-at 41: delete SETV
-at 31: delete MOV
-at 25: delete SETV
-at 22: delete SETV
-at 7: delete SETV
-
-           0: GETVL     %EDX, q0
-           1: GETL      %EDX, t0
-
-           2: TAG1o     q0 = Left4 ( q0 )
-           3: INCL      t0
-
-           4: PUTVL     q0, %EDX
-           5: PUTL      t0, %EDX
-
-           6: TESTVL    q0
-           8: LOADVB    (t0), q0
-           9: LDB       (t0), t0
-
-          10: TAG1o     q0 = SWiden14 ( q0 )
-          11: WIDENL_Bs t0
-
-          12: PUTVL     q0, %EAX
-          13: PUTL      t0, %EAX
-
-          14: GETVL     %ECX, q8
-          15: GETL      %ECX, t8
-
-          16: MOVL      q0, q4
-          17: SHLL      $0x1, q4
-          18: TAG2o     q4 = UifU4 ( q8, q4 )
-          19: TAG1o     q4 = Left4 ( q4 )
-          20: LEA2L     1(t8,t0,2), t4
-
-          21: TESTVL    q4
-          23: LOADVB    (t4), q10
-          24: LDB       (t4), t10
-
-          26: MOVB      $0x20, t12
-
-          27: MOVL      q10, q14
-          28: TAG2o     q14 = ImproveAND1_TQ ( t10, q14 )
-          30: TAG2o     q10 = DifD1 ( q14, q10 )
-          32: MOVL      t12, q14
-          33: TAG2o     q10 = DifD1 ( q14, q10 )
-          34: MOVL      q10, q16
-          35: TAG1o     q16 = PCast10 ( q16 )
-          36: PUTVFo    q16
-          37: ANDB      t12, t10  (-wOSZACP)
-
-          38: INCEIPo   $9
-          39: GETVFo    q18
-          40: TESTVo    q18
-          42: Jnzo      $0x40435A50  (-rOSZACP)
-
-          43: JMPo      $0x40435A5B
-</pre>
-
-
-<h3>Translation from UCode</h3>
-
-This is all very simple, even though <code>vg_from_ucode.c</code>
-is a big file.  Position-independent x86 code is generated into 
-a dynamically allocated array <code>emitted_code</code>; this is
-doubled in size when it overflows.  Eventually the array is handed
-back to the caller of <code>VG_(translate)</code>, who must copy
-the result into TC and TT, and free the array.
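-
-<p>
-The buffer management itself is nothing exotic; schematically it is
-just this (a sketch -- the real code presumably uses Valgrind's own
-allocator rather than plain <code>malloc</code>, and the initial size
-chosen here is arbitrary):
-<pre>
-#include &lt;stdlib.h&gt;
-#include &lt;string.h&gt;
-
-typedef unsigned char UChar;
-
-static UChar* emitted_code = NULL;
-static int    emitted_code_size = 0;   /* bytes allocated            */
-static int    emitted_code_used = 0;   /* bytes emitted so far       */
-
-/* Emit one byte, doubling the array whenever it fills up.           */
-static void emitB ( UChar b )
-{
-   if (emitted_code_used == emitted_code_size) {
-      int new_size = emitted_code_size == 0 ? 256 : 2 * emitted_code_size;
-      UChar* bigger = malloc(new_size);
-      if (emitted_code != NULL) {
-         memcpy(bigger, emitted_code, emitted_code_used);
-         free(emitted_code);
-      }
-      emitted_code      = bigger;
-      emitted_code_size = new_size;
-   }
-   emitted_code[emitted_code_used++] = b;
-}
-</pre>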
-
-<p>
-This file is structured into four layers of abstraction, which,
-thankfully, are glued back together with extensive
-<code>__inline__</code> directives.  From the bottom upwards:
-
-<ul>
-<li>Address-mode emitters, <code>emit_amode_regmem_reg</code> et al.
-<p>
-<li>Emitters for specific x86 instructions.  There are quite a lot of
-    these, with names such as <code>emit_movv_offregmem_reg</code>.
-    The <code>v</code> suffix is Intel parlance for a 16/32 bit insn;
-    there are also <code>b</code> suffixes for 8 bit insns.
-<p>
-<li>The next level up are the <code>synth_*</code> functions, which
-    synthesise possibly a sequence of raw x86 instructions to do some
-    simple task.  Some of these are quite complex because they have to
-    work around Intel's silly restrictions on subregister naming.  See 
-    <code>synth_nonshiftop_reg_reg</code> for example.
-<p>
-<li>Finally, at the top of the heap, we have
-    <code>emitUInstr()</code>,
-    which emits code for a single uinstr.
-</ul>
-
-<p>
-Some comments:
-<ul>
-<li>The hack for FPU instructions becomes apparent here.  To do a
-    <code>FPU</code> ucode instruction, we load the simulated FPU's
-    state from <code>VG_(baseBlock)</code> into the real FPU
-    using an x86 <code>frstor</code> insn, do the ucode
-    <code>FPU</code> insn on the real CPU, and write the updated FPU
-    state back into <code>VG_(baseBlock)</code> using an
-    <code>fnsave</code> instruction.  This is pretty brutal, but is
-    simple and it works, and even seems tolerably efficient.  There is
-    no attempt to cache the simulated FPU state in the real FPU over
-    multiple back-to-back ucode FPU instructions.
-    <p>
-    <code>FPU_R</code> and <code>FPU_W</code> are also done this way,
-    with the minor complication that we need to patch in some
-    addressing mode bits so the resulting insn knows the effective
-    address to use.  This is easy because of the regularity of the x86
-    FPU instruction encodings.  (A sketch of the save/do/restore
-    sequence appears just after this list.)
-<p>
-<li>An analogous trick is done with ucode insns which claim, in their
-    <code>flags_r</code> and <code>flags_w</code> fields, that they
-    read or write the simulated <code>%EFLAGS</code>.  For such cases
-    we first copy the simulated <code>%EFLAGS</code> into the real
-    <code>%eflags</code>, then do the insn, then, if the insn says it
-    writes the flags, copy back to <code>%EFLAGS</code>.  This is a
-    bit expensive, which is why the ucode optimisation pass goes to
-    some effort to remove redundant flag-update annotations.
-</ul>
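-
-<p>
-To make the FPU trick concrete, here is a sketch -- not the literal
-emitter output -- of the save/do/restore sequence, with
-<code>fpu_state</code> standing in for the 108-byte image kept in
-<code>VG_(baseBlock)</code>:
-
-<pre>
-/* Sketch only: swap the simulated FPU state onto the real FPU, run one
-   client FPU insn, then capture the updated state again. */
-struct fpu_image { unsigned char bytes[108]; };
-static struct fpu_image fpu_state;    /* stands in for the baseBlock slot */
-
-static void do_one_client_fpu_insn ( void )
-{
-   __asm__ __volatile__ ("frstor %0" : : "m" (fpu_state));
-   /* ... the client's FPU instruction is executed here ... */
-   __asm__ __volatile__ ("fnsave %0" : "=m" (fpu_state));
-}
-</pre>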
-
-<p>
-And so ... that's the end of the documentation for the instrumenting
-translator!  It's really not that complex, because it's composed as a
-sequence of simple(ish) self-contained transformations on
-straight-line blocks of code.
-
-
-<h3>Top-level dispatch loop</h3>
-
-Urk.  In <code>VG_(toploop)</code>.  This is basically boring and
-unsurprising, not to mention fiddly and fragile.  It needs to be
-cleaned up.  
-
-<p>
-Perhaps the only surprise is that the whole thing is run
-on top of a <code>setjmp</code>-installed exception handler, because,
-supposing a translation got a segfault, we have to bail out of the
-Valgrind-supplied exception handler <code>VG_(oursignalhandler)</code>
-and immediately start running the client's segfault handler, if it has
-one.  In particular we can't finish the current basic block and then
-deliver the signal at some convenient future point, because signals
-like SIGILL, SIGSEGV and SIGBUS mean that the faulting insn should not
-simply be re-tried.  (I'm sure there is a clearer way to explain this).
-
-
-<h3>Exceptions, creating new translations</h3>
-<h3>Self-modifying code</h3>
-
-<h3>Lazy updates of the simulated program counter</h3>
-
-Simulated <code>%EIP</code> is not updated after every simulated x86
-insn as this was regarded as too expensive.  Instead ucode
-<code>INCEIP</code> insns move it along as and when necessary.
-Currently we don't allow it to fall more than 4 bytes behind reality
-(see <code>VG_(disBB)</code> for the way this works).
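-
-<p>
-As a sketch of the scheme (the helpers here are invented; this is not
-<code>VG_(disBB)</code> itself), the decoding loop tracks how far the
-simulated <code>%EIP</code> has fallen behind and emits an
-<code>INCEIP</code> before the gap grows too large:
-
-<pre>
-/* Hypothetical sketch of keeping simulated %EIP nearly up to date. */
-extern int  dis_one_insn     ( unsigned int eip );  /* returns insn length */
-extern void emit_INCEIP      ( int delta );         /* emit ucode INCEIP   */
-extern int  more_insns_in_bb ( unsigned int eip );
-
-void disBB_sketch ( unsigned int eip )
-{
-   int delta = 0;                  /* bytes decoded since the last INCEIP */
-   while (more_insns_in_bb(eip)) {
-      int sz = dis_one_insn(eip);  /* translate one x86 insn into ucode   */
-      eip   += sz;
-      delta += sz;
-      if (delta &gt; 4) {             /* keep the lag small                  */
-         emit_INCEIP(delta);
-         delta = 0;
-      }
-   }
-   if (delta &gt; 0)
-      emit_INCEIP(delta);          /* bring %EIP fully up to date         */
-}
-</pre>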
-<p>
-Note that <code>%EIP</code> is always brought up to date by the inner
-dispatch loop in <code>VG_(dispatch)</code>, so that if the client
-takes a fault we know at least which basic block this happened in.
-
-
-<h3>The translation cache and translation table</h3>
-
-<h3>Signals</h3>
-
-Horrible, horrible.  <code>vg_signals.c</code>.
-Basically, since we have to intercept all system
-calls anyway, we can see when the client tries to install a signal
-handler.  If it does so, we make a note of what the client asked to
-happen, and ask the kernel to route the signal to our own signal
-handler, <code>VG_(oursignalhandler)</code>.  This simply notes the
-delivery of signals, and returns.  
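-
-<p>
-In outline -- this is a sketch, not the real <code>vg_signals.c</code>
-code; <code>our_signal_handler</code> and <code>real_sigaction</code>
-stand for <code>VG_(oursignalhandler)</code> and the raw syscall
-respectively -- the interception looks like:
-
-<pre>
-#include &lt;signal.h&gt;
-
-static struct sigaction client_handlers[64];  /* what the client asked for */
-
-extern void our_signal_handler ( int signo );
-extern int  real_sigaction ( int signo, const struct sigaction* act,
-                             struct sigaction* oldact );
-
-/* Called when the client's sigaction() system call is spotted. */
-int intercepted_sigaction ( int signo, const struct sigaction* act,
-                            struct sigaction* oldact )
-{
-   if (oldact != NULL)
-      *oldact = client_handlers[signo];      /* report the client's old view */
-   if (act != NULL) {
-      struct sigaction ours = *act;
-      client_handlers[signo] = *act;         /* remember the client's wishes */
-      ours.sa_handler = our_signal_handler;  /* but route delivery to us     */
-      return real_sigaction(signo, &amp;ours, NULL);
-   }
-   return 0;
-}
-</pre>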
-
-<p>
-Every 1000 basic blocks, we see if more signals have arrived.  If so,
-<code>VG_(deliver_signals)</code> builds signal delivery frames on the
-client's stack, and allows their handlers to be run.  Valgrind places
-in these signal delivery frames a bogus return address,
-<code>VG_(signalreturn_bogusRA)</code>, and checks all jumps to see
-if any jump to it.  Such a jump is a sign that a signal handler is
-returning, so Valgrind removes the relevant signal frame from
-the client's stack, restores from the signal frame the simulated
-state as it was before the signal was delivered, and allows the client
-to run onwards.  We have to do it this way because some signal handlers
-never return; they just <code>longjmp()</code>, which nukes the signal
-delivery frame.
-
-<p>
-The Linux kernel has a different but equally horrible hack for
-detecting signal handler returns.  Discovering it is left as an
-exercise for the reader.
-
-
-
-<h3>Errors, error contexts, error reporting, suppressions</h3>
-<h3>Client malloc/free</h3>
-<h3>Low-level memory management</h3>
-<h3>A and V bitmaps</h3>
-<h3>Symbol table management</h3>
-<h3>Dealing with system calls</h3>
-<h3>Namespace management</h3>
-<h3>GDB attaching</h3>
-<h3>Non-dependence on glibc or anything else</h3>
-<h3>The leak detector</h3>
-<h3>Performance problems</h3>
-<h3>Continuous sanity checking</h3>
-<h3>Tracing, or not tracing, child processes</h3>
-<h3>Assembly glue for syscalls</h3>
-
-
-<hr width="100%">
-
-<h2>Extensions</h2>
-
-Some comments about Stuff To Do.
-
-<h3>Bugs</h3>
-
-Stephan Kulow and Marc Mutz report problems with kmail in KDE 3 CVS
-(RC2 ish) when run on Valgrind.  Stephan has it deadlocking; Marc has
-it looping at startup.  I can't repro either behaviour. Needs
-repro-ing and fixing.
-
-
-<h3>Threads</h3>
-
-Doing a good job of thread support strikes me as almost a
-research-level problem.  The central issues are how to do fast cheap
-locking of the <code>VG_(primary_map)</code> structure, whether or not
-accesses to the individual secondary maps need locking, what
-race-condition issues result, and whether the already-nasty mess that
-is the signal simulator needs further hackery.
-
-<p>
-I realise that threads are the most-frequently-requested feature, and
-I am thinking about it.  If you have guru-level understanding of
-fast mutual exclusion mechanisms and race conditions, I would be
-interested in hearing from you.
-
-
-<h3>Verification suite</h3>
-
-Directory <code>tests/</code> contains various ad-hoc tests for
-Valgrind.  However, there is no systematic verification or regression
-suite that, for example, exercises all the stuff in
-<code>vg_memory.c</code>, to ensure that illegal memory accesses and
-undefined value uses are detected as they should be.  It would be good
-to have such a suite.
-
-
-<h3>Porting to other platforms</h3>
-
-It would be great if Valgrind were ported to FreeBSD and x86 NetBSD,
-and to x86 OpenBSD, if it's possible (doesn't OpenBSD use a.out-style
-executables, not ELF?).
-
-<p>
-The main difficulties, for an x86-ELF platform, seem to be:
-
-<ul>
-<li>You'd need to rewrite the <code>/proc/self/maps</code> parser
-    (<code>vg_procselfmaps.c</code>).
-    Easy.
-<p>
-<li>You'd need to rewrite <code>vg_syscall_mem.c</code>, or, more
-    specifically, provide one for your OS.  This is tedious, but you
-    can implement syscalls on demand, and the Linux kernel interface
-    is, for the most part, going to look very similar to the *BSD
-    interfaces, so it's really a copy-paste-and-modify-on-demand job.
-    As part of this, you'd need to supply a new
-    <code>vg_kerneliface.h</code> file.
-<p>
-<li>You'd also need to change the syscall wrappers for Valgrind's
-    internal use, in <code>vg_mylibc.c</code>.
-</ul>
-
-All in all, I think a port to x86-ELF *BSDs is not really very
-difficult, and in some ways I would like to see it happen, because
-that would force a clearer factoring of Valgrind into platform-dependent
-and platform-independent pieces.  Not to mention, *BSD folks also
-deserve to use Valgrind just as much as the Linux crew do.
-
-
-<p>
-<hr width="100%">
-
-<h2>Easy stuff which ought to be done</h2>
-
-<h3>MMX instructions</h3>
-
-MMX insns should be supported, using the same trick as for FPU insns.
-Provided the MMX registers are not used to copy uninitialised junk from one
-place to another in memory, we don't have to actually
-simulate the internal MMX unit state, so the FPU hack applies.  This
-should be fairly easy.
-
-
-
-<h3>Fix stabs-info reader</h3>
-
-The machinery in <code>vg_symtab2.c</code> which reads "stabs" style
-debugging info is pretty weak.  It usually correctly translates 
-simulated program counter values into line numbers and procedure
-names, but the file name is often completely wrong.  I think the
-logic used to parse "stabs" entries is at fault.  It should be fixed.
-The simplest solution, IMO, is to copy either the logic or simply the
-code out of GNU binutils which does this; since GDB can clearly get it
-right, binutils (or GDB?) must have code to do this somewhere.
-
-
-
-
-
-<h3>BT/BTC/BTS/BTR</h3>
-
-These are x86 instructions which test, complement, set, or reset a
-single bit in a word.  At the moment they are both incorrectly
-implemented and incorrectly instrumented.
-
-<p>
-The incorrect instrumentation is due to use of helper functions.  This
-means we lose bit-level definedness tracking, which could wind up
-giving spurious uninitialised-value use errors.  The Right Thing to do
-is to invent a couple of new UOpcodes, I think <code>GET_BIT</code>
-and <code>SET_BIT</code>, which can be used to implement all 4 x86
-insns, get rid of the helpers, and give bit-accurate instrumentation
-rules for the two new UOpcodes.
-
-<p>
-I realised the other day that they are mis-implemented too.  The x86
-insns take a bit-index and a register or memory location to access.
-For registers the bit index clearly can only be in the range zero to
-register-width minus 1, and I assumed the same applied to memory
-locations too.  But evidently not; for memory locations the index can
-be arbitrary, and the processor will index arbitrarily into memory as
-a result.  This too should be fixed.  Sigh.  Presumably indexing
-outside the immediate word is not actually used by any programs yet
-tested on Valgrind, for otherwise they would simply not
-work at all.  If you plan to hack on this, first check the Intel docs
-to make sure my understanding is really correct.
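-
-<p>
-For what it's worth, here is the surprising behaviour expressed in C,
-assuming my reading of the docs is right: with a register bit offset
-and a memory operand, the index is not confined to the addressed word.
-
-<pre>
-/* Roughly what 'btl %ecx, (mem)' computes when %ecx holds 70: the CF
-   flag gets bit 70 counted from 'mem', i.e. bit 6 of the third word. */
-int bt_sketch ( void )
-{
-   unsigned int mem[4] = { 0, 0, 64, 0 };   /* bit 6 of mem[2] is set */
-   int idx = 70;
-   return (mem[idx / 32] &gt;&gt; (idx % 32)) &amp; 1;   /* 1, not a bit of mem[0] */
-}
-</pre>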
-
-
-
-<h3>Using PREFETCH instructions</h3>
-
-Here's a small but potentially interesting project for performance
-junkies.  Experiments with Valgrind's code generator and optimiser(s)
-suggest that reducing the number of instructions executed in the
-translations and mem-check helpers gives disappointingly small
-performance improvements.  Perhaps this is because performance of
-Valgrindified code is limited by cache misses.  After all, each read
-in the original program now gives rise to at least three reads, one
-for the <code>VG_(primary_map)</code>, one for the resulting
-secondary map, and one for the original.  Not to mention, the instrumented
-translations are 13 to 14 times larger than the originals.  All in all
-one would expect the memory system to be hammered to hell and then
-some.
-
-<p>
-So here's an idea.  An x86 insn involving a read from memory, after
-instrumentation, will turn into ucode of the following form:
-<pre>
-    ... calculate effective addr, into ta and qa ...
-    TESTVL qa             -- is the addr defined?
-    LOADV (ta), qloaded   -- fetch V bits for the addr
-    LOAD  (ta), tloaded   -- do the original load
-</pre>
-At the point where the <code>LOADV</code> is done, we know the actual
-address (<code>ta</code>) from which the real <code>LOAD</code> will
-be done.  We also know that the <code>LOADV</code> will take around
-20 x86 insns to do.  So it seems plausible that doing a prefetch of
-<code>ta</code> just before the <code>LOADV</code> might just avoid a
-miss at the <code>LOAD</code> point, and that might be a significant
-performance win.
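-
-<p>
-At the C level the extra work is trivial -- this is a sketch of the
-intent, not of the emitter changes themselves:
-
-<pre>
-/* Sketch: prefetch the client address 'ta' before the ~20-insn V-bit
-   lookup, so that the real load below is hopefully a cache hit. */
-unsigned char load_with_prefetch ( unsigned char* ta )
-{
-   __builtin_prefetch(ta);   /* gcc builtin; emits a prefetch insn */
-   /* ... the LOADV work for 'ta' would go here ... */
-   return *ta;               /* the original LOAD                  */
-}
-</pre>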
-
-<p>
-Prefetch insns are notoriously temperamental, more often than not
-making things worse rather than better, so this would require
-considerable fiddling around.  It's complicated because Intel and
-AMD CPUs have different prefetch insns with different semantics, so that
-too needs to be taken into account.  As a general rule, even placing
-the prefetch immediately before the <code>LOADV</code> insn is too near the
-<code>LOAD</code>; the ideal distance is apparently circa 200 CPU
-cycles.  So it might be worth having another analysis/transformation
-pass which pushes prefetches as far back as possible, hopefully 
-immediately after the effective address becomes available.
-
-<p>
-Doing too many prefetches is also bad because they soak up bus
-bandwidth and CPU resources, so some cleverness in deciding which loads
-to prefetch and which not to might be helpful.  One can imagine not
-prefetching client-stack-relative (<code>%EBP</code> or
-<code>%ESP</code>) accesses, since the stack in general tends to show
-good locality anyway.
-
-<p>
-There's quite a lot of experimentation to do here, but I think it
-might make an interesting week's work for someone.
-
-<p>
-As of 15-ish March 2002, I've started to experiment with this, using
-the AMD <code>prefetch/prefetchw</code> insns.
-
-
-
-<h3>User-defined permission ranges</h3>
-
-This is quite a large project -- perhaps a month's hacking for a
-capable hacker to do a good job -- but it's potentially very
-interesting.  The outcome would be that Valgrind could detect a 
-whole class of bugs which it currently cannot.
-
-<p>
-The presentation falls into two pieces.
-
-<p>
-<b>Part 1: user-defined address-range permission setting</b>
-<p>
-
-Valgrind intercepts the client's <code>malloc</code>,
-<code>free</code>, etc calls, watches system calls, and watches the
-stack pointer move.  These are currently its only sources of information
-about which addresses are valid and which are not.  Sometimes the client program
-knows extra information about its memory areas.  For example, the
-client could at some point know that all elements of an array are
-out-of-date.  We would like to be able to tell Valgrind that the array
-is now addressable-but-uninitialised, so
-that Valgrind can then warn if elements are used before they get new
-values. 
-
-<p>
-What I would like are some macros like this:
-<pre>
-   VALGRIND_MAKE_NOACCESS(addr, len)
-   VALGRIND_MAKE_WRITABLE(addr, len)
-   VALGRIND_MAKE_READABLE(addr, len)
-</pre>
-and also, to check that memory is addressable/initialised,
-<pre>
-   VALGRIND_CHECK_ADDRESSIBLE(addr, len)
-   VALGRIND_CHECK_INITIALISED(addr, len)
-</pre>
-
-<p>
-I then include in my sources a header defining these macros, rebuild
-my app, run under Valgrind, and get user-defined checks.
-
-<p>
-Now here's a neat trick.  It's a nuisance to have to re-link the app
-with some new library which implements the above macros.  So the idea
-is to define the macros so that the resulting executable is still
-completely stand-alone, and can be run without Valgrind, in which case
-the macros do nothing, but when run on Valgrind, the Right Thing
-happens.  How to do this?  The idea is for these macros to turn into a
-piece of inline assembly code, which (1) has no effect when run on the
-real CPU, (2) is easily spotted by Valgrind's JITter, and (3) is
-something no sane person would ever write, which is important for
-avoiding false matches in (2).  So here's a suggestion:
-<pre>
-   VALGRIND_MAKE_NOACCESS(addr, len)
-</pre>
-becomes (roughly speaking)
-<pre>
-   movl addr, %eax
-   movl len,  %ebx
-   movl $1,   %ecx   -- 1 describes the action; MAKE_WRITABLE might be
-                     -- 2, etc
-   rorl $13, %ecx
-   rorl $19, %ecx
-   rorl $11, %eax
-   rorl $21, %eax
-</pre>
-The rotate sequences have no effect, and it's unlikely they would
-appear for any other reason, but they define a unique byte-sequence
-which the JITter can easily spot.  Using the operand constraints
-section at the end of a gcc inline-assembly statement, we can tell gcc
-that the assembly fragment kills <code>%eax</code>, <code>%ebx</code>,
-<code>%ecx</code> and the condition codes, so the fragment is both
-harmless and cheap when not running on Valgrind, and requires no other
-library support.
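-
-<p>
-For concreteness, the macro could be packaged with gcc's extended asm
-roughly as follows.  This is a sketch of the proposal only -- it is
-not an existing Valgrind header, and the action code and rotate
-amounts are simply the ones suggested above:
-
-<pre>
-/* Sketch: a no-op on the real CPU, but a recognisable byte sequence
-   for Valgrind's JITter.  Each rotate pair sums to 32, so the
-   registers end up unchanged. */
-#define VALGRIND_MAKE_NOACCESS(addr, len)                         \
-   __asm__ __volatile__ (                                         \
-      "movl %0, %%eax\n\t"                                        \
-      "movl %1, %%ebx\n\t"                                        \
-      "movl $1, %%ecx\n\t"     /* 1 == the MAKE_NOACCESS action */ \
-      "rorl $13, %%ecx\n\t"                                       \
-      "rorl $19, %%ecx\n\t"                                       \
-      "rorl $11, %%eax\n\t"                                       \
-      "rorl $21, %%eax"                                           \
-      : /* no outputs */                                          \
-      : "r" (addr), "r" (len)                                     \
-      : "eax", "ebx", "ecx", "cc" /* the promised clobbers */ )
-</pre>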
-
-
-<p>
-<b>Part 2: using it to detect interference between stack variables</b>
-<p>
-
-Currently Valgrind cannot detect errors of the following form:
-<pre>
-void fooble ( void )
-{
-   int a[10];
-   int b[10];
-   a[10] = 99;
-}
-</pre>
-Now imagine rewriting this as
-<pre>
-void fooble ( void )
-{
-   int spacer0;
-   int a[10];
-   int spacer1;
-   int b[10];
-   int spacer2;
-   VALGRIND_MAKE_NOACCESS(&amp;spacer0, sizeof(int));
-   VALGRIND_MAKE_NOACCESS(&amp;spacer1, sizeof(int));
-   VALGRIND_MAKE_NOACCESS(&amp;spacer2, sizeof(int));
-   a[10] = 99;
-}
-</pre>
-Now the invalid write is certain to hit <code>spacer0</code> or
-<code>spacer1</code>, so Valgrind will spot the error.
-
-<p>
-There are two complications.
-
-<p>
-The first is that we don't want to annotate sources by hand, so the
-Right Thing to do is to write a C/C++ parser, annotator, prettyprinter
-which does this automatically, and run it on post-CPP'd C/C++ source.
-See http://www.cacheprof.org for an example of a system which
-transparently inserts another phase into the gcc/g++ compilation
-route.  The parser/prettyprinter is probably not as hard as it sounds;
-I would write it in Haskell, a powerful functional language well
-suited to doing symbolic computation, with which I am intimately
-familiar.  There is already a C parser written in Haskell by someone in
-the Haskell community, and that would probably be a good starting
-point.
-
-<p>
-The second complication is how to get rid of these
-<code>NOACCESS</code> records inside Valgrind when the instrumented
-function exits; after all, these refer to stack addresses and will
-make no sense whatever when some other function happens to re-use the
-same stack address range, probably shortly afterwards.  I think I
-would be inclined to define a special stack-specific macro
-<pre>
-   VALGRIND_MAKE_NOACCESS_STACK(addr, len)
-</pre>
-which causes Valgrind to record the client's <code>%ESP</code> at the
-time it is executed.  Valgrind will then watch for changes in
-<code>%ESP</code> and discard such records as soon as the protected
-area is uncovered by an increase in <code>%ESP</code>.  I hesitate
-with this scheme only because it is potentially expensive, if there
-are hundreds of such records, and considering that changes in
-<code>%ESP</code> already require expensive messing with stack access
-permissions.
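-
-<p>
-The bookkeeping needed for this is small; as a sketch, with types and
-names invented for illustration:
-
-<pre>
-/* Sketch only.  One record per VALGRIND_MAKE_NOACCESS_STACK request;
-   records are discarded once %ESP has risen past the protected area. */
-typedef struct {
-   unsigned int esp_at_creation;   /* client %ESP when the macro ran */
-   unsigned int addr;              /* protected range                */
-   unsigned int len;
-} StackNoAccessRec;
-
-static StackNoAccessRec recs[1000];
-static int n_recs = 0;
-
-/* Called whenever the simulated %ESP changes. */
-void discard_uncovered_records ( unsigned int new_esp )
-{
-   int i = 0;
-   while (i &lt; n_recs) {
-      if (new_esp &gt; recs[i].addr) {
-         /* The frame holding this record has been popped; the range
-            reverts to ordinary stack permissions. */
-         recs[i] = recs[--n_recs];    /* drop by swapping in the last one */
-      } else {
-         i++;
-      }
-   }
-}
-</pre>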
-
-<p>
-This is probably easier and more robust than having the instrumenter
-program try to spot all exit points for the procedure and place
-suitable deallocation annotations there.  Plus C++ procedures can 
-bomb out at any point if they get an exception, so spotting return
-points at the source level just won't work at all.
-
-<p>
-Although it is some work, it's all eminently doable, and it would make
-Valgrind into an even-more-useful tool.
-
-
-<p>
-
-</body>
-</html>
diff --git a/none/docs/Makefile.am b/none/docs/Makefile.am
index bbd2296..0a59eb1 100644
--- a/none/docs/Makefile.am
+++ b/none/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = nl_main.html
+EXTRA_DIST = nl-manual.xml
diff --git a/none/docs/nl-manual.xml b/none/docs/nl-manual.xml
new file mode 100644
index 0000000..384773e
--- /dev/null
+++ b/none/docs/nl-manual.xml
@@ -0,0 +1,22 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="nl-manual" xreflabel="Nulgrind">
+
+<title>Nulgrind: the ``null'' tool</title>
+<subtitle>A tool that does not very much at all</subtitle>
+
+<para>Nulgrind is the minimal tool for Valgrind.  It does no
+initialisation or finalisation, and adds no instrumentation to
+the program's code.  It is mainly of use for Valgrind's
+developers for debugging and regression testing.</para>
+
+<para>Nonetheless you can run programs with Nulgrind.  They will
+run roughly 5 times more slowly than normal, for no useful
+effect.  Note that you need to use the option
+<computeroutput>--tool=none</computeroutput> to run Nulgrind
+(i.e. not <computeroutput>--tool=nulgrind</computeroutput>).</para>
+
+</chapter>
+
diff --git a/none/docs/nl_main.html b/none/docs/nl_main.html
deleted file mode 100644
index a431944..0000000
--- a/none/docs/nl_main.html
+++ /dev/null
@@ -1,57 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>Cachegrind</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="title"></a>
-<h1 align=center>Nulgrind</h1>
-<center>This manual was last updated on 2002-10-02</center>
-<p>
-
-<center>
-<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-Copyright &copy; 2000-2004 Nicholas Nethercote
-<p>
-Nulgrind is licensed under the GNU General Public License, 
-version 2<br>
-Nulgrind is a Valgrind tool that does not very much at all.
-</center>
-
-<p>
-
-<h2>1&nbsp; Nulgrind</h2>
-
-Nulgrind is the minimal tool for Valgrind.  It does no initialisation or
-finalisation, and adds no instrumentation to the program's code.  It is mainly
-of use for Valgrind's developers for debugging and regression testing.
-<p>
-Nonetheless you can run programs with Nulgrind.  They will run roughly 5-10
-times more slowly than normal, for no useful effect.  Note that you need to use
-the option <code>--tool=none</code> to run Nulgrind (ie. not
-<code>--tool=nulgrind</code>).
-
-<hr width="100%">
-</body>
-</html>
-