Overhauled the docs.  Removed all the HTML files, put in XML files as
converted by Donna.  Hooked it into the build system so they are only
built when specifically asked for, and when doing "make dist".

They're not perfect;  in particular, there are the following problems:
- The plain-text FAQ should be built from FAQ.xml, but this is not
  currently done.  (The text FAQ has been left in for now.)

- The PS/PDF building doesn't work -- it fails with an incomprehensible
  error message which I haven't yet deciphered.

Nonetheless, I'm putting it in so others can see it.



git-svn-id: svn://svn.valgrind.org/valgrind/trunk@3153 a5019735-40e9-0310-863c-91ae7b9d1cf9
diff --git a/cachegrind/docs/Makefile.am b/cachegrind/docs/Makefile.am
index 9657fe5..f052e04 100644
--- a/cachegrind/docs/Makefile.am
+++ b/cachegrind/docs/Makefile.am
@@ -1,3 +1 @@
-docdir = $(datadir)/doc/valgrind
-
-dist_doc_DATA = cg_main.html cg_techdocs.html 
+EXTRA_DIST = cg-manual.xml cg-tech-docs.xml 
diff --git a/cachegrind/docs/cg-manual.xml b/cachegrind/docs/cg-manual.xml
new file mode 100644
index 0000000..58df498
--- /dev/null
+++ b/cachegrind/docs/cg-manual.xml
@@ -0,0 +1,1012 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="cg-manual" xreflabel="Cachegrind: a cache-miss profiler">
+<title>Cachegrind: a cache profiler</title>
+
+<para>Detailed technical documentation on how Cachegrind works is
+available in <xref linkend="cg-tech-docs"/>.  If you only want to know
+how to <emphasis>use</emphasis> it, this is the page you need to
+read.</para>
+
+
+<sect1 id="cg-manual.cache" xreflabel="Cache profiling">
+<title>Cache profiling</title>
+
+<para>To use this tool, you must specify
+<computeroutput>--tool=cachegrind</computeroutput> on the
+Valgrind command line.</para>
+
+<para>Cachegrind is a tool for doing cache simulations and
+annotating your source line-by-line with the number of cache
+misses.  In particular, it records:</para>
+<itemizedlist>
+  <listitem>
+    <para>L1 instruction cache reads and misses;</para>
+  </listitem>
+  <listitem>
+    <para>L1 data cache reads and read misses, writes and write
+    misses;</para>
+  </listitem>
+  <listitem>
+    <para>L2 unified cache reads and read misses, writes and
+    write misses.</para>
+  </listitem>
+</itemizedlist>
+
+<para>On a modern x86 machine, an L1 miss will typically cost
+around 10 cycles, and an L2 miss can cost as much as 200
+cycles. Detailed cache profiling can be very useful for improving
+the performance of your program.</para>
+
+<para>Also, since one instruction cache read is performed per
+instruction executed, you can find out how many instructions are
+executed per line, which can be useful for traditional profiling
+and test coverage.</para>
+
+<para>Any feedback, bug-fixes, suggestions, etc, welcome.</para>
+
+
+
+<sect2 id="cg-manual.overview" xreflabel="Overview">
+<title>Overview</title>
+
+<para>First off, as for normal Valgrind use, you probably want to
+compile with debugging info (the
+<computeroutput>-g</computeroutput> flag).  But by contrast with
+normal Valgrind use, you probably <emphasis>do</emphasis> want to turn
+optimisation on, since you should profile your program as it will
+be normally run.</para>
+
+<para>The two steps are:</para>
+<orderedlist>
+  <listitem>
+    <para>Run your program with <computeroutput>valgrind
+    --tool=cachegrind</computeroutput> in front of the normal
+    command line invocation.  When the program finishes,
+    Cachegrind will print summary cache statistics. It also
+    collects line-by-line information in a file
+    <computeroutput>cachegrind.out.pid</computeroutput>, where
+    <computeroutput>pid</computeroutput> is the program's process
+    id.</para>
+
+    <para>This step should be done every time you want to collect
+    information about a new program, a changed program, or about
+    the same program with different input.</para>
+  </listitem>
+
+  <listitem>
+    <para>Generate a function-by-function summary, and possibly
+    annotate source files, using the supplied
+    <computeroutput>cg_annotate</computeroutput> program. Source
+    files to annotate can be specified manually on the command
+    line, or "interesting" source files can be
+    annotated automatically with the
+    <computeroutput>--auto=yes</computeroutput> option.  You can
+    annotate C/C++ files or assembly language files equally
+    easily.</para>
+
+    <para>This step can be performed as many times as you like
+    for each Step 1.  You may want to do multiple annotations
+    showing different information each time.</para>
+  </listitem>
+
+</orderedlist>
+
+<para>The steps are described in detail in the following
+sections.</para>
+
+</sect2>
+
+
+<sect2>
+<title>Cache simulation specifics</title>
+
+<para>Cachegrind simulates a machine with a split L1
+cache and a unified L2 cache.  This configuration is used for all
+(modern) x86-based machines we are aware of.  Old Cyrix CPUs had
+a unified I and D L1 cache, but they are ancient history
+now.</para>
+
+<para>The more specific characteristics of the simulation are as
+follows.</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>Write-allocate: when a write miss occurs, the block
+    written to is brought into the D1 cache.  Most modern caches
+    have this property.</para>
+  </listitem>
+
+  <listitem>
+    <para>Bit-selection hash function: the line(s) in the cache
+    to which a memory block maps is chosen by the middle bits
+    M--(M+N-1) of the byte address, where:</para>
+    <itemizedlist>
+      <listitem>
+        <para>line size = 2^M bytes</para>
+      </listitem>
+      <listitem>
+        <para>(cache size / line size) = 2^N</para>
+      </listitem>
+    </itemizedlist> 
+  </listitem>
+
+  <listitem>
+    <para>Inclusive L2 cache: the L2 cache replicates all the
+    entries of the L1 cache.  This is standard on Pentium chips,
+    but AMD Athlons use an exclusive L2 cache that only holds
+    blocks evicted from L1.  Ditto AMD Durons and most modern
+    VIAs.</para>
+  </listitem>
+
+</itemizedlist>
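The bit-selection mapping above can be sketched in a few lines. This is an illustrative snippet, not Cachegrind's own code, and it uses the direct-mapped view from the description (for a set-associative cache, N would be taken over sets rather than individual lines):

```python
def cache_line_index(addr: int, cache_size: int, line_size: int) -> int:
    """Bit-selection hash: the middle bits M..(M+N-1) of the byte
    address choose the line, where line_size = 2**M and
    cache_size // line_size = 2**N."""
    m = line_size.bit_length() - 1                   # M = log2(line size)
    n = (cache_size // line_size).bit_length() - 1   # N = log2(number of lines)
    return (addr >> m) & ((1 << n) - 1)

# 64 KB cache with 64-byte lines: addresses one line apart map to
# adjacent indices, and the mapping wraps every cache_size bytes.
print(cache_line_index(0x12345, 65536, 64))   # -> 141
```

Two addresses whose middle bits coincide therefore contend for the same cache line, which is how bit-selection caches produce conflict misses.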
+
+<para>The cache configuration simulated (cache size,
+associativity and line size) is determined automagically using
+the CPUID instruction.  If you have an old machine that (a)
+doesn't support the CPUID instruction, or (b) supports it in an
+early incarnation that doesn't give any cache information, then
+Cachegrind will fall back to using a default configuration (that
+of a model 3/4 Athlon).  Cachegrind will tell you if this
+happens.  You can manually specify one, two or all three levels
+(I1/D1/L2) of the cache from the command line using the
+<computeroutput>--I1</computeroutput>,
+<computeroutput>--D1</computeroutput> and
+<computeroutput>--L2</computeroutput> options.</para>
+
+
+<para>Other noteworthy behaviour:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>References that straddle two cache lines are treated as
+    follows:</para>
+    <itemizedlist>
+      <listitem>
+        <para>If both blocks hit --&gt; counted as one hit</para>
+      </listitem>
+      <listitem>
+        <para>If one block hits, the other misses --&gt; counted
+        as one miss.</para>
+      </listitem>
+      <listitem>
+        <para>If both blocks miss --&gt; counted as one miss (not
+        two)</para>
+      </listitem>
+    </itemizedlist>
+  </listitem>
+
+  <listitem>
+    <para>Instructions that modify a memory location
+    (eg. <computeroutput>inc</computeroutput> and
+    <computeroutput>dec</computeroutput>) are counted as doing
+    just a read, ie. a single data reference.  This may seem
+    strange, but since the write can never cause a miss (the read
+    guarantees the block is in the cache) it's not very
+    interesting.</para>
+
+    <para>Thus it measures not the number of times the data cache
+    is accessed, but the number of times a data cache miss could
+    occur.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>If you are interested in simulating a cache with different
+properties, it is not particularly hard to write your own cache
+simulator, or to modify the existing ones in
+<computeroutput>vg_cachesim_I1.c</computeroutput>,
+<computeroutput>vg_cachesim_D1.c</computeroutput>,
+<computeroutput>vg_cachesim_L2.c</computeroutput> and
+<computeroutput>vg_cachesim_gen.c</computeroutput>.  We'd be
+interested to hear from anyone who does.</para>
+
+</sect2>
+
+</sect1>
+
+
+
+<sect1 id="cg-manual.profile" xreflabel="Profiling programs">
+<title>Profiling programs</title>
+
+<para>To gather cache profiling information about the program
+<computeroutput>ls -l</computeroutput>, invoke Cachegrind like
+this:</para>
+
+<programlisting><![CDATA[
+valgrind --tool=cachegrind ls -l]]></programlisting>
+
+<para>The program will execute (slowly).  Upon completion,
+summary statistics that look like this will be printed:</para>
+
+<programlisting><![CDATA[
+==31751== I   refs:      27,742,716
+==31751== I1  misses:           276
+==31751== L2i misses:           275
+==31751== I1  miss rate:        0.0%
+==31751== L2i miss rate:        0.0%
+==31751== 
+==31751== D   refs:      15,430,290  (10,955,517 rd + 4,474,773 wr)
+==31751== D1  misses:        41,185  (    21,905 rd +    19,280 wr)
+==31751== L2d misses:        23,085  (     3,987 rd +    19,098 wr)
+==31751== D1  miss rate:        0.2% (       0.1%   +       0.4%)
+==31751== L2d miss rate:        0.1% (       0.0%   +       0.4%)
+==31751== 
+==31751== L2 misses:         23,360  (     4,262 rd +    19,098 wr)
+==31751== L2 miss rate:         0.0% (       0.0%   +       0.4%)]]></programlisting>
+
+<para>Cache accesses for instruction fetches are summarised
+first, giving the number of fetches made (this is the number of
+instructions executed, which can be useful to know in its own
+right), the number of I1 misses, and the number of L2 instruction
+(<computeroutput>L2i</computeroutput>) misses.</para>
+
+<para>Cache accesses for data follow. The information is similar
+to that of the instruction fetches, except that the values are
+also shown split between reads and writes (note each row's
+<computeroutput>rd</computeroutput> and
+<computeroutput>wr</computeroutput> values add up to the row's
+total).</para>
+
+<para>Combined instruction and data figures for the L2 cache
+follow that.</para>
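As a quick consistency check (nothing Cachegrind-specific, just arithmetic on the sample figures above): each row's rd and wr values sum to the row's total, and the combined L2 figures are the instruction-side and data-side figures added together:

```python
# (total, rd, wr) triples copied from the sample run above.
d_refs     = (15_430_290, 10_955_517, 4_474_773)
d1_misses  = (41_185, 21_905, 19_280)
l2d_misses = (23_085, 3_987, 19_098)
for total, rd, wr in (d_refs, d1_misses, l2d_misses):
    assert total == rd + wr   # each row's halves add up

# Combined L2 misses = L2 instruction misses + L2 data misses.
l2i_misses = 275
assert l2i_misses + l2d_misses[0] == 23_360   # combined total
assert l2i_misses + l2d_misses[1] == 4_262    # combined rd side
print("all sums check out")
```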
+
+
+
+<sect2 id="cg-manual.outputfile" xreflabel="Output file">
+<title>Output file</title>
+
+<para>As well as printing summary information, Cachegrind also
+writes line-by-line cache profiling information to a file named
+<computeroutput>cachegrind.out.pid</computeroutput>.  This file
+is human-readable, but is best interpreted by the accompanying
+program <computeroutput>cg_annotate</computeroutput>, described
+in the next section.</para>
+
+<para>Things to note about the
+<computeroutput>cachegrind.out.pid</computeroutput>
+file:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>It is written every time Cachegrind is run, and will
+    overwrite any existing
+    <computeroutput>cachegrind.out.pid</computeroutput>
+    in the current directory (but that won't happen very often
+    because it takes some time for process ids to be
+    recycled).</para>
+  </listitem>
+  <listitem>
+    <para>It can be huge: <computeroutput>ls -l</computeroutput>
+    generates a file of about 350 KB.  Browsing a few files and
+    web pages with a Konqueror built with full debugging
+    information generates a file of around 15 MB.</para>
+  </listitem>
+</itemizedlist>
+
+<para>Note that older versions of Cachegrind used a log file
+named <computeroutput>cachegrind.out</computeroutput> (i.e. no
+<computeroutput>.pid</computeroutput> suffix).  The suffix serves
+two purposes.  Firstly, it means you don't have to rename old log
+files that you don't want to overwrite.  Secondly, and more
+importantly, it allows correct profiling with the
+<computeroutput>--trace-children=yes</computeroutput> option of
+programs that spawn child processes.</para>
+
+</sect2>
+
+
+
+<sect2 id="cg-manual.cgopts" xreflabel="Cachegrind options">
+<title>Cachegrind options</title>
+
+<para>Cache-simulation specific options are:</para>
+
+<screen><![CDATA[
+--I1=<size>,<associativity>,<line_size>
+--D1=<size>,<associativity>,<line_size>
+--L2=<size>,<associativity>,<line_size>
+
+[default: uses CPUID for automagic cache configuration]]]></screen>
+
+<para>Manually specifies the I1/D1/L2 cache configuration, where
+<computeroutput>size</computeroutput> and
+<computeroutput>line_size</computeroutput> are measured in bytes.
+The three items must be comma-separated, but with no spaces,
+eg:</para>
+
+<programlisting><![CDATA[
+valgrind --tool=cachegrind --I1=65536,2,64]]></programlisting>
+
+<para>You can specify one, two or three of the I1/D1/L2 caches.
+Any level not manually specified will be simulated using the
+configuration found in the normal way (via the CPUID instruction,
+or failing that, via defaults).</para>
+
+</sect2>
+
+
+  
+<sect2 id="cg-manual.annotate" xreflabel="Annotating C/C++ programs">
+<title>Annotating C/C++ programs</title>
+
+<para>Before using <computeroutput>cg_annotate</computeroutput>,
+it is worth widening your window to at least 120 characters
+if possible, as the output lines can be quite long.</para>
+
+<para>To get a function-by-function summary, run
+<computeroutput>cg_annotate --pid</computeroutput> in a directory
+containing a <computeroutput>cachegrind.out.pid</computeroutput>
+file.  The <computeroutput>--pid</computeroutput> is required so that
+<computeroutput>cg_annotate</computeroutput> knows which log file
+to use when several are present.</para>
+
+<para>The output looks like this:</para>
+
+<programlisting><![CDATA[
+--------------------------------------------------------------------------------
+I1 cache:              65536 B, 64 B, 2-way associative
+D1 cache:              65536 B, 64 B, 2-way associative
+L2 cache:              262144 B, 64 B, 8-way associative
+Command:               concord vg_to_ucode.c
+Events recorded:       Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
+Events shown:          Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
+Event sort order:      Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
+Threshold:             99%
+Chosen for annotation:
+Auto-annotation:       on
+
+--------------------------------------------------------------------------------
+Ir         I1mr I2mr Dr         D1mr   D2mr  Dw        D1mw   D2mw
+--------------------------------------------------------------------------------
+27,742,716  276  275 10,955,517 21,905 3,987 4,474,773 19,280 19,098  PROGRAM TOTALS
+
+--------------------------------------------------------------------------------
+Ir        I1mr I2mr Dr        D1mr  D2mr  Dw        D1mw   D2mw    file:function
+--------------------------------------------------------------------------------
+8,821,482    5    5 2,242,702 1,621    73 1,794,230      0      0  getc.c:_IO_getc
+5,222,023    4    4 2,276,334    16    12   875,959      1      1  concord.c:get_word
+2,649,248    2    2 1,344,810 7,326 1,385         .      .      .  vg_main.c:strcmp
+2,521,927    2    2   591,215     0     0   179,398      0      0  concord.c:hash
+2,242,740    2    2 1,046,612   568    22   448,548      0      0  ctype.c:tolower
+1,496,937    4    4   630,874 9,000 1,400   279,388      0      0  concord.c:insert
+  897,991   51   51   897,831    95    30        62      1      1  ???:???
+  598,068    1    1   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__flockfile
+  598,068    0    0   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__funlockfile
+  598,024    4    4   213,580    35    16   149,506      0      0  vg_clientmalloc.c:malloc
+  446,587    1    1   215,973 2,167   430   129,948 14,057 13,957  concord.c:add_existing
+  341,760    2    2   128,160     0     0   128,160      0      0  vg_clientmalloc.c:vg_trap_here_WRAPPER
+  320,782    4    4   150,711   276     0    56,027     53     53  concord.c:init_hash_table
+  298,998    1    1   106,785     0     0    64,071      1      1  concord.c:create
+  149,518    0    0   149,516     0     0         1      0      0  ???:tolower@@GLIBC_2.0
+  149,518    0    0   149,516     0     0         1      0      0  ???:fgetc@@GLIBC_2.0
+   95,983    4    4    38,031     0     0    34,409  3,152  3,150  concord.c:new_word_node
+   85,440    0    0    42,720     0     0    21,360      0      0  vg_clientmalloc.c:vg_bogus_epilogue]]></programlisting>
+
+
+<para>First up is a summary of the annotation options:</para>
+                    
+<itemizedlist>
+
+  <listitem>
+    <para>I1 cache, D1 cache, L2 cache: cache configuration.  So
+    you know the configuration with which these results were
+    obtained.</para>
+  </listitem>
+
+  <listitem>
+    <para>Command: the command line invocation of the program
+      under examination.</para>
+  </listitem>
+
+  <listitem>
+   <para>Events recorded: event abbreviations are:</para>
+   <itemizedlist>
+     <listitem>
+       <para><computeroutput>Ir </computeroutput>: I cache reads
+       (ie. instructions executed)</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>I1mr</computeroutput>: I1 cache read
+       misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>I2mr</computeroutput>: L2 cache
+       instruction read misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>Dr </computeroutput>: D cache reads
+       (ie. memory reads)</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D1mr</computeroutput>: D1 cache read
+       misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D2mr</computeroutput>: L2 cache data
+       read misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>Dw </computeroutput>: D cache writes
+       (ie. memory writes)</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D1mw</computeroutput>: D1 cache write
+       misses</para>
+     </listitem>
+     <listitem>
+       <para><computeroutput>D2mw</computeroutput>: L2 cache data
+       write misses</para>
+     </listitem>
+   </itemizedlist>
+
+   <para>Note that total D1 misses are given by
+   <computeroutput>D1mr</computeroutput> +
+   <computeroutput>D1mw</computeroutput>, and that total L2
+   misses are given by <computeroutput>I2mr</computeroutput> +
+   <computeroutput>D2mr</computeroutput> +
+   <computeroutput>D2mw</computeroutput>.</para>
+ </listitem>
+
+ <listitem>
+   <para>Events shown: the events shown (a subset of events
+   gathered).  This can be adjusted with the
+   <computeroutput>--show</computeroutput> option.</para>
+  </listitem>
+
+  <listitem>
+    <para>Event sort order: the sort order in which functions are
+    shown.  For example, in this case the functions are sorted
+    from highest <computeroutput>Ir</computeroutput> counts to
+    lowest.  If two functions have identical
+    <computeroutput>Ir</computeroutput> counts, they will then be
+    sorted by <computeroutput>I1mr</computeroutput> counts, and
+    so on.  This order can be adjusted with the
+    <computeroutput>--sort</computeroutput> option.</para>
+
+    <para>Note that this dictates the order in which the
+    functions appear.  It is <emphasis>not</emphasis> the order
+    in which the columns appear; that is dictated by the "events
+    shown" line (and can be changed with the
+    <computeroutput>--show</computeroutput> option).</para>
+  </listitem>
+
+  <listitem>
+    <para>Threshold: <computeroutput>cg_annotate</computeroutput>
+    by default omits functions that cause very low numbers of
+    misses to avoid drowning you in information.  In this case,
+    cg_annotate summarises the functions that account for
+    99% of the <computeroutput>Ir</computeroutput> counts;
+    <computeroutput>Ir</computeroutput> is chosen as the
+    threshold event since it is the primary sort event.  The
+    threshold can be adjusted with the
+    <computeroutput>--threshold</computeroutput>
+    option.</para>
+  </listitem>
+
+  <listitem>
+    <para>Chosen for annotation: names of files specified
+    manually for annotation; in this case none.</para>
+  </listitem>
+
+  <listitem>
+    <para>Auto-annotation: whether auto-annotation was requested
+    via the <computeroutput>--auto=yes</computeroutput>
+    option. In this case no.</para>
+  </listitem>
+
+</itemizedlist>
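These events also let you reconstruct the miss totals from the run summary. A quick check (again just arithmetic, not part of cg_annotate) against the PROGRAM TOTALS row above:

```python
# PROGRAM TOTALS row from the sample cg_annotate output above.
totals = dict(Ir=27_742_716, I1mr=276, I2mr=275,
              Dr=10_955_517, D1mr=21_905, D2mr=3_987,
              Dw=4_474_773, D1mw=19_280, D2mw=19_098)

d1_misses = totals["D1mr"] + totals["D1mw"]                    # total D1 misses
l2_misses = totals["I2mr"] + totals["D2mr"] + totals["D2mw"]   # total L2 misses
print(d1_misses, l2_misses)   # -> 41185 23360
```

Both values match the miss totals printed in the run summary, as they should.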
+
+<para>Then follow summary statistics for the whole
+program. These are similar to the summary provided when running
+<computeroutput>valgrind
+--tool=cachegrind</computeroutput>.</para>
+  
+<para>Then follow function-by-function statistics. Each function
+is identified by a
+<computeroutput>file_name:function_name</computeroutput> pair. If
+a column contains only a dot it means the function never performs
+that event (eg. the third row shows that
+<computeroutput>strcmp()</computeroutput> contains no
+instructions that write to memory). The name
+<computeroutput>???</computeroutput> is used if the file name
+and/or function name could not be determined from debugging
+information. If most of the entries have the form
+<computeroutput>???:???</computeroutput> the program probably
+wasn't compiled with <computeroutput>-g</computeroutput>.  If any
+code was invalidated (either due to self-modifying code or
+unloading of shared objects) its counts are aggregated into a
+single cost centre written as
+<computeroutput>(discarded):(discarded)</computeroutput>.</para>
+
+<para>It is worth noting that functions will come from three
+types of source files:</para>
+
+<orderedlist>
+  <listitem>
+    <para>From the profiled program
+    (<filename>concord.c</filename> in this example).</para>
+  </listitem>
+  <listitem>
+    <para>From libraries (eg. <filename>getc.c</filename>).</para>
+  </listitem>
+  <listitem>
+    <para>From Valgrind's implementation of some libc functions
+    (eg. <computeroutput>vg_clientmalloc.c:malloc</computeroutput>).
+    These are recognisable because the filename begins with
+    <computeroutput>vg_</computeroutput>, and is probably one of
+    <filename>vg_main.c</filename>,
+    <filename>vg_clientmalloc.c</filename> or
+    <filename>vg_mylibc.c</filename>.</para>
+  </listitem>
+
+</orderedlist>
+
+<para>There are two ways to annotate source files -- by choosing
+them manually, or with the
+<computeroutput>--auto=yes</computeroutput> option. To do it
+manually, just specify the filenames as arguments to
+<computeroutput>cg_annotate</computeroutput>. For example, the
+output from running <computeroutput>cg_annotate concord.c</computeroutput>
+for our example produces the same output as above followed by an
+annotated version of <filename>concord.c</filename>, a section of
+which looks like:</para>
+
+<programlisting><![CDATA[
+--------------------------------------------------------------------------------
+-- User-annotated source: concord.c
+--------------------------------------------------------------------------------
+Ir        I1mr I2mr Dr      D1mr  D2mr  Dw      D1mw   D2mw
+
+[snip]
+
+        .    .    .       .     .     .       .      .      .  void init_hash_table(char *file_name, Word_Node *table[])
+        3    1    1       .     .     .       1      0      0  {
+        .    .    .       .     .     .       .      .      .      FILE *file_ptr;
+        .    .    .       .     .     .       .      .      .      Word_Info *data;
+        1    0    0       .     .     .       1      1      1      int line = 1, i;
+        .    .    .       .     .     .       .      .      .
+        5    0    0       .     .     .       3      0      0      data = (Word_Info *) create(sizeof(Word_Info));
+        .    .    .       .     .     .       .      .      .
+    4,991    0    0   1,995     0     0     998      0      0      for (i = 0; i < TABLE_SIZE; i++)
+    3,988    1    1   1,994     0     0     997     53     52          table[i] = NULL;
+        .    .    .       .     .     .       .      .      .
+        .    .    .       .     .     .       .      .      .      /* Open file, check it. */
+        6    0    0       1     0     0       4      0      0      file_ptr = fopen(file_name, "r");
+        2    0    0       1     0     0       .      .      .      if (!(file_ptr)) {
+        .    .    .       .     .     .       .      .      .          fprintf(stderr, "Couldn't open '%s'.\n", file_name);
+        1    1    1       .     .     .       .      .      .          exit(EXIT_FAILURE);
+        .    .    .       .     .     .       .      .      .      }
+        .    .    .       .     .     .       .      .      .
+  165,062    1    1  73,360     0     0  91,700      0      0      while ((line = get_word(data, line, file_ptr)) != EOF)
+  146,712    0    0   73,356     0     0  73,356      0      0          insert(data->word, data->line, table);
+        .    .    .       .     .     .       .      .      .
+        4    0    0       1     0     0       2      0      0      free(data);
+        4    0    0       1     0     0       2      0      0      fclose(file_ptr);
+        3    0    0       2     0     0       .      .      .  }]]></programlisting>
+
+<para>(Although column widths are automatically minimised, a wide
+terminal is clearly useful.)</para>
+  
+<para>Each source file is clearly marked
+(<computeroutput>User-annotated source</computeroutput>) as
+having been chosen manually for annotation.  If the file was
+found in one of the directories specified with the
+<computeroutput>-I / --include</computeroutput> option, the directory
+and file are both given.</para>
+
+<para>Each line is annotated with its event counts.  Events not
+applicable for a line are represented by a `.'; this is useful
+for distinguishing between an event which cannot happen, and one
+which can but did not.</para>
+
+<para>Sometimes only a small section of a source file is
+executed.  To minimise uninteresting output, Valgrind only shows
+annotated lines and lines within a small distance of annotated
+lines.  Gaps are marked with the line numbers so you know which
+part of a file the shown code comes from, eg:</para>
+
+<programlisting><![CDATA[
+(figures and code for line 704)
+-- line 704 ----------------------------------------
+-- line 878 ----------------------------------------
+(figures and code for line 878)]]></programlisting>
+
+<para>The amount of context to show around annotated lines is
+controlled by the <computeroutput>--context</computeroutput>
+option.</para>
+
+<para>To get automatic annotation, run
+<computeroutput>cg_annotate --auto=yes</computeroutput>.
+cg_annotate will automatically annotate every source file it can
+find that is mentioned in the function-by-function summary.
+Therefore, the files chosen for auto-annotation are affected by
+the <computeroutput>--sort</computeroutput> and
+<computeroutput>--threshold</computeroutput> options.  Each
+source file is clearly marked (<computeroutput>Auto-annotated
+source</computeroutput>) as being chosen automatically.  Any
+files that could not be found are mentioned at the end of the
+output, eg:</para>
+
+<programlisting><![CDATA[
+------------------------------------------------------------------
+The following files chosen for auto-annotation could not be found:
+------------------------------------------------------------------
+  getc.c
+  ctype.c
+  ../sysdeps/generic/lockfile.c]]></programlisting>
+
+<para>This is quite common for library files, since libraries are
+usually compiled with debugging information, but the source files
+are often not present on a system.  If a file is chosen for
+annotation <emphasis>both</emphasis> manually and automatically, it
+is marked as <computeroutput>User-annotated
+source</computeroutput>. Use the <computeroutput>-I /
+--include</computeroutput> option to tell Valgrind where to look
+for source files if the filenames found from the debugging
+information aren't specific enough.</para>
+
+<para>Beware that cg_annotate can take some time to digest large
+<computeroutput>cachegrind.out.pid</computeroutput> files,
+e.g. 30 seconds or more.  Also beware that auto-annotation can
+produce a lot of output if your program is large!</para>
+
+</sect2>
+
+
+<sect2 id="cg-manual.assembler" xreflabel="Annotating assembler programs">
+<title>Annotating assembler programs</title>
+
+<para>Valgrind can annotate assembler programs too, or annotate
+the assembler generated for your C program.  Sometimes this is
+useful for understanding what is really happening when an
+interesting line of C code is translated into multiple
+instructions.</para>
+
+<para>To do this, you just need to assemble your
+<computeroutput>.s</computeroutput> files with assembler-level
+debug information.  gcc doesn't do this, but you can use the GNU
+assembler with the <computeroutput>--gstabs</computeroutput>
+option to generate object files with this information, eg:</para>
+
+<programlisting><![CDATA[
+as --gstabs foo.s]]></programlisting>
+
+<para>You can then profile and annotate source files in the same
+way as for C/C++ programs.</para>
+
+</sect2>
+
+</sect1>
+
+
+<sect1 id="cg-manual.annopts" xreflabel="cg_annotate options">
+<title><computeroutput>cg_annotate</computeroutput> options</title>
+
+<itemizedlist>
+
+  <listitem>
+    <para><computeroutput>--pid</computeroutput></para>
+    <para>Indicates which
+    <computeroutput>cachegrind.out.pid</computeroutput> file to
+    read.  Not actually an option -- it is required.</para>
+  </listitem>
+    
+  <listitem>
+    <para><computeroutput>-h, --help</computeroutput></para>
+    <para><computeroutput>-v, --version</computeroutput></para>
+    <para>Help and version, as usual.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--sort=A,B,C</computeroutput> [default:
+    order in
+    <computeroutput>cachegrind.out.pid</computeroutput>]</para>
+    <para>Specifies the events upon which the sorting of the
+    function-by-function entries will be based.  Useful if you
+    want to concentrate on eg. I cache misses
+    (<computeroutput>--sort=I1mr,I2mr</computeroutput>), or D
+    cache misses
+    (<computeroutput>--sort=D1mr,D2mr</computeroutput>), or L2
+    misses
+    (<computeroutput>--sort=D2mr,I2mr</computeroutput>).</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--show=A,B,C</computeroutput> [default:
+    all, using order in
+    <computeroutput>cachegrind.out.pid</computeroutput>]</para>
+    <para>Specifies which events to show (and the column
+    order). Default is to use all present in the
+    <computeroutput>cachegrind.out.pid</computeroutput> file (and
+    use the order in the file).</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--threshold=X</computeroutput>
+    [default: 99%]</para>
+    <para>Sets the threshold for the function-by-function
+    summary.  Enough functions are shown, in sort order, to
+    account for X% of the primary sort event.  If
+    auto-annotating, this also affects which files are
+    annotated.</para>
+      
+    <para>Note: thresholds can be set for more than one of the
+    events by appending a colon and a number to any of the
+    events given to the <computeroutput>--sort</computeroutput>
+    option (no spaces, though).  E.g. if you want to see
+    the functions that cover 99% of L2 read misses and 99% of L2
+    write misses, use this option:</para>
+    <para><computeroutput>--sort=D2mr:99,D2mw:99</computeroutput></para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--auto=no</computeroutput> [default]</para>
+    <para><computeroutput>--auto=yes</computeroutput></para>
+    <para>When enabled, automatically annotates every file
+    mentioned in the function-by-function summary that can be
+    found, and lists those that couldn't be found.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>--context=N</computeroutput> [default:
+    8]</para>
+    <para>Print N lines of context before and after each
+    annotated line.  Avoids printing large sections of source
+    files that were not executed.  Use a large number
+    (eg. 10,000) to show all source lines.</para>
+  </listitem>
+
+  <listitem>
+    <para><computeroutput>-I=&lt;dir&gt;,
+      --include=&lt;dir&gt;</computeroutput> [default: empty
+      string]</para>
+    <para>Adds a directory to the list in which to search for
+    files.  Multiple -I/--include options can be given to add
+    multiple directories.</para>
+  </listitem>
+
+</itemizedlist>
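The interaction of the sorting and threshold options can be illustrated with a short Python sketch (hypothetical function names and counts; this is not cg_annotate's actual code), which selects functions until they cumulatively account for the threshold percentage of the primary sort event:

```python
# Sketch of the function-by-function selection behind --sort/--threshold.
# Functions are sorted by the primary event, then listed until they
# cumulatively account for the threshold percentage of that event.
# (Hypothetical counts; real cg_annotate handles per-event thresholds too.)

def select_functions(counts, threshold=99.0):
    """counts: dict of function name -> primary event count."""
    total = sum(counts.values())
    shown, cumulative = [], 0
    for fn, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        shown.append(fn)
        cumulative += n
        if cumulative * 100.0 >= threshold * total:
            break
    return shown

counts = {"main": 5000, "hot_loop": 94000, "helper": 900, "init": 100}
print(select_functions(counts))        # ['hot_loop', 'main']
print(select_functions(counts, 90.0))  # ['hot_loop']
```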
+  
+
+
+<sect2>
+<title>Warnings</title>
+
+<para>There are a couple of situations in which
+<computeroutput>cg_annotate</computeroutput> issues
+warnings.</para>
+
+<itemizedlist>
+  <listitem>
+    <para>If a source file is more recent than the
+    <computeroutput>cachegrind.out.pid</computeroutput> file.
+    This is because the information in
+    <computeroutput>cachegrind.out.pid</computeroutput> is only
+    recorded with line numbers, so if the line numbers change at
+    all in the source (eg.  lines added, deleted, swapped), any
+    annotations will be incorrect.</para>
+  </listitem>
+  <listitem>
+    <para>If information is recorded about line numbers past the
+    end of a file.  This can be caused by the above problem,
+    ie. shortening the source file while using an old
+    <computeroutput>cachegrind.out.pid</computeroutput> file.  If
+    this happens, the figures for the bogus lines are printed
+    anyway (clearly marked as bogus) in case they are
+    important.</para>
+  </listitem>
+</itemizedlist>
+
+</sect2>
+
+
+
+<sect2>
+<title>Things to watch out for</title>
+
+<para>Some odd things that can occur during annotation:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>If annotating at the assembler level, you might see
+    something like this:</para>
+<programlisting><![CDATA[
+      1    0    0  .    .    .  .    .    .          leal -12(%ebp),%eax
+      1    0    0  .    .    .  1    0    0          movl %eax,84(%ebx)
+      2    0    0  0    0    0  1    0    0          movl $1,-20(%ebp)
+      .    .    .  .    .    .  .    .    .          .align 4,0x90
+      1    0    0  .    .    .  .    .    .          movl $.LnrB,%eax
+      1    0    0  .    .    .  1    0    0          movl %eax,-16(%ebp)]]></programlisting>
+
+    <para>How can the third instruction be executed twice when
+    the others are executed only once?  As it turns out, it
+    isn't.  Here's a dump of the executable, using
+    <computeroutput>objdump -d</computeroutput>:</para>
+<programlisting><![CDATA[
+      8048f25:       8d 45 f4                lea    0xfffffff4(%ebp),%eax
+      8048f28:       89 43 54                mov    %eax,0x54(%ebx)
+      8048f2b:       c7 45 ec 01 00 00 00    movl   $0x1,0xffffffec(%ebp)
+      8048f32:       89 f6                   mov    %esi,%esi
+      8048f34:       b8 08 8b 07 08          mov    $0x8078b08,%eax
+      8048f39:       89 45 f0                mov    %eax,0xfffffff0(%ebp)]]></programlisting>
+
+    <para>Notice the extra <computeroutput>mov
+    %esi,%esi</computeroutput> instruction.  Where did this come
+    from?  The GNU assembler inserted it to serve as the two
+    bytes of padding needed to align the <computeroutput>movl
+    $.LnrB,%eax</computeroutput> instruction on a four-byte
+    boundary, but pretended it didn't exist when adding debug
+    information.  Thus when Valgrind reads the debug info it
+    thinks that the <computeroutput>movl
+    $0x1,0xffffffec(%ebp)</computeroutput> instruction covers the
+    address range 0x8048f2b--0x8048f33 by itself, and attributes
+    the counts for the <computeroutput>mov
+    %esi,%esi</computeroutput> to it.</para>
+  </listitem>
+
+  <listitem>
+    <para>Inlined functions can cause strange results in the
+    function-by-function summary.  If a function
+    <computeroutput>inline_me()</computeroutput> is defined in
+    <filename>foo.h</filename> and inlined in the functions
+    <computeroutput>f1()</computeroutput>,
+    <computeroutput>f2()</computeroutput> and
+    <computeroutput>f3()</computeroutput> in
+    <filename>bar.c</filename>, there will not be a
+    <computeroutput>foo.h:inline_me()</computeroutput> function
+    entry.  Instead, there will be separate function entries for
+    each inlining site, ie.
+    <computeroutput>foo.h:f1()</computeroutput>,
+    <computeroutput>foo.h:f2()</computeroutput> and
+    <computeroutput>foo.h:f3()</computeroutput>.  To find the
+    total counts for
+    <computeroutput>foo.h:inline_me()</computeroutput>, add up
+    the counts from each entry.</para>
+
+    <para>The reason for this is that although the debug info
+    output by gcc indicates the switch from
+    <filename>bar.c</filename> to <filename>foo.h</filename>, it
+    doesn't indicate the name of the function in
+    <filename>foo.h</filename>, so Valgrind keeps using the old
+    one.</para>
+  </listitem>
+
+  <listitem>
+    <para>Sometimes, the same filename might be represented with
+    a relative name and with an absolute name in different parts
+    of the debug info, eg:
+    <filename>/home/user/proj/proj.h</filename> and
+    <filename>../proj.h</filename>.  In this case, if you use
+    auto-annotation, the file will be annotated twice with the
+    counts split between the two.</para>
+  </listitem>
+
+  <listitem>
+    <para>Files with more than 65,535 lines cause difficulties
+    for the stabs debug info reader.  This is because the line
+    number in the <computeroutput>struct nlist</computeroutput>
+    defined in <filename>a.out.h</filename> under Linux is only a
+    16-bit value.  Valgrind can handle some files with more than
+    65,535 lines correctly by making some guesses to identify
+    line number overflows.  But some cases are beyond it, in
+    which case you'll get a warning message explaining that
+    annotations for the file might be incorrect.</para>
+  </listitem>
+
+  <listitem>
+    <para>If you compile some files with
+    <computeroutput>-g</computeroutput> and some without, some
+    events that take place in a file without debug info could be
+    attributed to the last line of a file with debug info
+    (whichever one gets placed before the non-debug-info file in
+    the executable).</para>
+  </listitem>
+
+</itemizedlist>
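To make the inlined-function case concrete, here is a tiny Python sketch (hypothetical entry names and counts, following the foo.h/bar.c example above) of adding up the per-site entries:

```python
# Summing the separate per-call-site entries that an inlined function
# produces in the function-by-function summary.  Names and counts are
# hypothetical, following the foo.h/bar.c example in the text.

entries = {
    "foo.h:f1()": 120,
    "foo.h:f2()": 30,
    "foo.h:f3()": 50,
    "bar.c:f1()": 400,
}

# Total for the inlined foo.h code, regardless of which function it
# was inlined into:
inline_total = sum(n for name, n in entries.items()
                   if name.startswith("foo.h:"))
print(inline_total)  # 200
```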
+
+<para>This list looks long, but these cases should be fairly
+rare.</para>
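The line-number overflow guessing mentioned above for files beyond 65,535 lines could look something like this sketch (a plausible heuristic; Valgrind's actual stabs reader may differ):

```python
# Sketch of unwrapping 16-bit stabs line numbers.  Line numbers are
# stored modulo 65536, so a sudden large backwards jump in an otherwise
# increasing sequence suggests a wrap-around.  (Illustrative heuristic,
# not Valgrind's real code.)

def unwrap_lines(raw_lines, jump=30000):
    base, prev, out = 0, 0, []
    for n in raw_lines:
        if n + jump < prev:       # big backwards jump: assume wrap-around
            base += 65536
        out.append(base + n)
        prev = n
    return out

print(unwrap_lines([65530, 65534, 2, 10]))  # [65530, 65534, 65538, 65546]
```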
+
+<formalpara>
+  <title>Note:</title>
+  <para><computeroutput>stabs</computeroutput> is not an easy
+  format to read.  If you come across bizarre annotations that
+  look like they might be caused by a bug in the stabs reader, please
+  let us know.</para>
+</formalpara>
+
+</sect2>
+
+
+
+<sect2>
+<title>Accuracy</title>
+
+<para>Valgrind's cache profiling has a number of
+shortcomings:</para>
+
+<itemizedlist>
+  <listitem>
+    <para>It doesn't account for kernel activity -- the effect of
+    system calls on the cache contents is ignored.</para>
+  </listitem>
+
+  <listitem>
+    <para>It doesn't account for other process activity (although
+    this is probably desirable when considering a single
+    program).</para>
+  </listitem>
+
+  <listitem>
+    <para>It doesn't account for virtual-to-physical address
+    mappings; hence the entire simulation is not a true
+    representation of what's happening in the
+    cache.</para>
+  </listitem>
+
+  <listitem>
+    <para>It doesn't account for cache misses not visible at the
+    instruction level, eg. those arising from TLB misses, or
+    speculative execution.</para>
+  </listitem>
+
+  <listitem>
+    <para>Valgrind's custom threads implementation will schedule
+    threads differently to the standard one.  This could warp the
+    results for threaded programs.</para>
+  </listitem>
+
+  <listitem>
+    <para>The instructions <computeroutput>bts</computeroutput>,
+    <computeroutput>btr</computeroutput> and
+    <computeroutput>btc</computeroutput> will incorrectly be
+    counted as doing a data read if both the arguments are
+    registers, eg:</para>
+<programlisting><![CDATA[
+    btsl %eax, %edx]]></programlisting>
+
+    <para>This should only happen rarely.</para>
+  </listitem>
+
+  <listitem>
+    <para>FPU instructions with data sizes of 28 and 108 bytes
+    (e.g.  <computeroutput>fsave</computeroutput>) are treated as
+    though they only access 16 bytes.  These instructions seem to
+    be rare so hopefully this won't affect accuracy much.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>Another thing worth noting is that results are very
+sensitive.  Changing the size of the
+<filename>valgrind.so</filename> file, the size of the program
+being profiled, or even the length of its name can perturb the
+results.  Variations will be small, but don't expect perfectly
+repeatable results if your program changes at all.</para>
+
+<para>While these factors mean you shouldn't trust the results to
+be super-accurate, hopefully they should be close enough to be
+useful.</para>
+
+</sect2>
+
+
+<sect2>
+<title>Todo</title>
+
+<itemizedlist>
+  <listitem>
+    <para>Program start-up/shut-down calls a lot of functions
+    that aren't interesting and just complicate the output.
+    Would be nice to exclude these somehow.</para>
+  </listitem>
+</itemizedlist> 
+
+</sect2>
+
+</sect1>
+</chapter>
diff --git a/cachegrind/docs/cg-tech-docs.xml b/cachegrind/docs/cg-tech-docs.xml
new file mode 100644
index 0000000..210dee0
--- /dev/null
+++ b/cachegrind/docs/cg-tech-docs.xml
@@ -0,0 +1,560 @@
+<?xml version="1.0"?> <!-- -*- sgml -*- -->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
+
+<chapter id="cg-tech-docs" xreflabel="How Cachegrind works">
+
+<title>How Cachegrind works</title>
+
+<sect1 id="cg-tech-docs.profiling" xreflabel="Cache profiling">
+<title>Cache profiling</title>
+
+<para>Valgrind is a very nice platform for doing cache profiling
+and other kinds of simulation, because it converts horrible x86
+instructions into nice clean RISC-like UCode.  For example, for
+cache profiling we are interested in instructions that read and
+write memory; in UCode there are only four instructions that do
+this: <computeroutput>LOAD</computeroutput>,
+<computeroutput>STORE</computeroutput>,
+<computeroutput>FPU_R</computeroutput> and
+<computeroutput>FPU_W</computeroutput>.  By contrast, because of
+the x86 addressing modes, almost every instruction can read or
+write memory.</para>
+
+<para>Most of the cache profiling machinery is in the file
+<filename>vg_cachesim.c</filename>.</para>
+
+<para>These notes are a somewhat haphazard guide to how
+Valgrind's cache profiling works.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.costcentres" xreflabel="Cost centres">
+<title>Cost centres</title>
+
+<para>Valgrind gathers cache profiling information about every
+instruction executed, individually.  Each instruction has a <command>cost
+centre</command> associated with it.  There are two kinds of cost
+centre: one for instructions that don't reference memory
+(<computeroutput>iCC</computeroutput>), and one for instructions
+that do (<computeroutput>idCC</computeroutput>):</para>
+
+<programlisting><![CDATA[
+typedef struct _CC {
+  ULong a;
+  ULong m1;
+  ULong m2;
+} CC;
+
+typedef struct _iCC {
+  /* word 1 */
+  UChar tag;
+  UChar instr_size;
+
+  /* words 2+ */
+  Addr instr_addr;
+  CC I;
+} iCC;
+   
+typedef struct _idCC {
+  /* word 1 */
+  UChar tag;
+  UChar instr_size;
+  UChar data_size;
+
+  /* words 2+ */
+  Addr instr_addr;
+  CC I; 
+  CC D; 
+} idCC; ]]></programlisting>
+
+<para>Each <computeroutput>CC</computeroutput> has three fields
+<computeroutput>a</computeroutput>,
+<computeroutput>m1</computeroutput>,
+<computeroutput>m2</computeroutput> for recording references,
+level 1 misses and level 2 misses.  Each of these is a 64-bit
+<computeroutput>ULong</computeroutput> -- the numbers can get
+very large, ie. greater than the 4.2 billion allowed by a 32-bit
+unsigned int.</para>
+
+<para>An <computeroutput>iCC</computeroutput> has one
+<computeroutput>CC</computeroutput> for instruction cache
+accesses.  An <computeroutput>idCC</computeroutput> has two, one
+for instruction cache accesses, and one for data cache
+accesses.</para>
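The sizes and padding described here can be sanity-checked with Python's struct module, assuming the 32-bit x86 layout in question (4-byte Addr, 8-byte ULong; "x" denotes a padding byte):

```python
import struct

# Layouts of the cost centre structs above, assuming 4-byte Addr and
# 8-byte ULong as on 32-bit x86.
CC   = "QQQ"               # a, m1, m2: three 64-bit counters
iCC  = "=BB2xI"  + CC      # tag, instr_size, padding, instr_addr, I
idCC = "=BBB1xI" + CC + CC # ... plus data_size, and both I and D

print(struct.calcsize("=" + CC))  # 24
print(struct.calcsize(iCC))       # 32
print(struct.calcsize(idCC))      # 56
```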
+
+<para>The <computeroutput>iCC</computeroutput> and
+<computeroutput>idCC</computeroutput> structs also store
+unchanging information about the instruction:</para>
+<itemizedlist>
+  <listitem>
+    <para>An instruction-type identification tag (explained
+    below)</para>
+  </listitem>
+  <listitem>
+    <para>Instruction size</para>
+  </listitem>
+  <listitem>
+    <para>Data reference size
+    (<computeroutput>idCC</computeroutput> only)</para>
+  </listitem>
+  <listitem>
+    <para>Instruction address</para>
+  </listitem>
+</itemizedlist>
+
+<para>Note that data address is not one of the fields for
+<computeroutput>idCC</computeroutput>.  This is because for many
+memory-referencing instructions the data address can change each
+time it's executed (eg. if it uses register-offset addressing).
+We have to give this item to the cache simulation in a different
+way (see the Instrumentation section below).  Some
+memory-referencing instructions do always reference the same
+address, but we don't try to treat them specially in order to
+keep things simple.</para>
+
+<para>Also note that there is only room for recording info about
+one data cache access in an
+<computeroutput>idCC</computeroutput>.  So what about
+instructions that do a read then a write, such as:</para>
+<programlisting><![CDATA[
+incl (%esi)]]></programlisting>
+
+<para>In a write-allocate cache, as simulated by Valgrind, the
+write cannot miss, since it immediately follows the read which
+will drag the block into the cache if it's not already there.  So
+the write access isn't really interesting, and Valgrind doesn't
+record it.  This means that Valgrind doesn't measure memory
+references, but rather memory references that could miss in the
+cache.  This behaviour is the same as that used by the AMD Athlon
+hardware counters.  It also has the benefit of simplifying the
+implementation -- instructions that read and write memory can be
+treated like instructions that read memory.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.ccstore" xreflabel="Storing cost-centres">
+<title>Storing cost-centres</title>
+
+<para>Cost centres are stored in a way that makes them very cheap
+to look up, which is important since one is looked up for every
+original x86 instruction executed.</para>
+
+<para>Valgrind does JIT translations at the basic block level,
+and cost centres are also set up and stored at the basic block
+level.  By doing things carefully, we store all the cost centres
+for a basic block in a contiguous array, and lookup comes almost
+for free.</para>
+
+<para>Consider this part of a basic block (for exposition
+purposes, pretend it's an entire basic block):</para>
+<programlisting><![CDATA[
+movl $0x0,%eax
+movl $0x99, -4(%ebp)]]></programlisting>
+
+<para>The translation to UCode looks like this:</para>
+<programlisting><![CDATA[
+MOVL      $0x0, t20
+PUTL      t20, %EAX
+INCEIPo   $5
+
+LEA1L     -4(t4), t14
+MOVL      $0x99, t18
+STL       t18, (t14)
+INCEIPo   $7]]></programlisting>
+
+<para>The first step is to allocate the cost centres.  This
+requires a preliminary pass to count how many x86 instructions
+were in the basic block, and their types (and thus sizes).  UCode
+translations for single x86 instructions are delimited by the
+<computeroutput>INCEIPo</computeroutput> instruction, the
+argument of which gives the byte size of the instruction (note
+that lazy INCEIP updating is turned off to allow this).</para>
+
+<para>We can tell if an x86 instruction references memory by
+looking for <computeroutput>LDL</computeroutput> and
+<computeroutput>STL</computeroutput> UCode instructions, and thus
+what kind of cost centre is required.  From this we can determine
+how many cost centres we need for the basic block, and their
+sizes.  We can then allocate them in a single array.</para>
+
+<para>Consider the example code above.  After the preliminary
+pass, we know we need two cost centres, one
+<computeroutput>iCC</computeroutput> and one
+<computeroutput>idCC</computeroutput>.  So we allocate an array to
+store these which looks like this:</para>
+
+<programlisting><![CDATA[
+|(uninit)|      tag         (1 byte)
+|(uninit)|      instr_size  (1 byte)
+|(uninit)|      (padding)   (2 bytes)
+|(uninit)|      instr_addr  (4 bytes)
+|(uninit)|      I.a         (8 bytes)
+|(uninit)|      I.m1        (8 bytes)
+|(uninit)|      I.m2        (8 bytes)
+
+|(uninit)|      tag         (1 byte)
+|(uninit)|      instr_size  (1 byte)
+|(uninit)|      data_size   (1 byte)
+|(uninit)|      (padding)   (1 byte)
+|(uninit)|      instr_addr  (4 bytes)
+|(uninit)|      I.a         (8 bytes)
+|(uninit)|      I.m1        (8 bytes)
+|(uninit)|      I.m2        (8 bytes)
+|(uninit)|      D.a         (8 bytes)
+|(uninit)|      D.m1        (8 bytes)
+|(uninit)|      D.m2        (8 bytes)]]></programlisting>
+
+<para>(We can see now why we need tags to distinguish between the
+two types of cost centres.)</para>
+
+<para>We also record the size of the array.  We look up the debug
+info of the first instruction in the basic block, and then stick
+the array into a table indexed by filename and function name.
+This makes it easy to dump the information quickly to file at the
+end.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.instrum" xreflabel="Instrumentation">
+<title>Instrumentation</title>
+
+<para>The instrumentation pass has two main jobs:</para>
+
+<orderedlist>
+  <listitem>
+    <para>Fill in the gaps in the allocated cost centres.</para>
+  </listitem>
+  <listitem>
+    <para>Add UCode to call the cache simulator for each
+   instruction.</para>
+  </listitem>
+</orderedlist>
+
+<para>The instrumentation pass steps through the UCode and the
+cost centres in tandem.  As each original x86 instruction's UCode
+is processed, the appropriate gaps in the instruction's cost
+centre are filled in, for example:</para>
+
+<programlisting><![CDATA[
+|INSTR_CC|      tag         (1 byte)
+|5       |      instr_size  (1 byte)
+|(uninit)|      (padding)   (2 bytes)
+|i_addr1 |      instr_addr  (4 bytes)
+|0       |      I.a         (8 bytes)
+|0       |      I.m1        (8 bytes)
+|0       |      I.m2        (8 bytes)
+
+|WRITE_CC|      tag         (1 byte)
+|7       |      instr_size  (1 byte)
+|4       |      data_size   (1 byte)
+|(uninit)|      (padding)   (1 byte)
+|i_addr2 |      instr_addr  (4 bytes)
+|0       |      I.a         (8 bytes)
+|0       |      I.m1        (8 bytes)
+|0       |      I.m2        (8 bytes)
+|0       |      D.a         (8 bytes)
+|0       |      D.m1        (8 bytes)
+|0       |      D.m2        (8 bytes)]]></programlisting>
+
+<para>(Note that this step is not performed if a basic block is
+re-translated; see <xref linkend="cg-tech-docs.retranslations"/> for
+more information.)</para>
+
+<para>GCC inserts padding before the
+<computeroutput>instr_addr</computeroutput> field so that it is
+word-aligned.</para>
+
+<para>The instrumentation added to call the cache simulation
+function looks like this (instrumentation is indented to
+distinguish it from the original UCode):</para>
+
+<programlisting><![CDATA[
+MOVL      $0x0, t20
+PUTL      t20, %EAX
+  PUSHL     %eax
+  PUSHL     %ecx
+  PUSHL     %edx
+  MOVL      $0x4091F8A4, t46  # address of 1st CC
+  PUSHL     t46
+  CALLMo    $0x12             # first cachesim function
+  CLEARo    $0x4
+  POPL      %edx
+  POPL      %ecx
+  POPL      %eax
+INCEIPo   $5
+
+LEA1L     -4(t4), t14
+MOVL      $0x99, t18
+  MOVL      t14, t42
+STL       t18, (t14)
+  PUSHL     %eax
+  PUSHL     %ecx
+  PUSHL     %edx
+  PUSHL     t42
+  MOVL      $0x4091F8C4, t44  # address of 2nd CC
+  PUSHL     t44
+  CALLMo    $0x13             # second cachesim function
+  CLEARo    $0x8
+  POPL      %edx
+  POPL      %ecx
+  POPL      %eax
+INCEIPo   $7]]></programlisting>
+
+<para>Consider the first instruction's UCode.  Each call is
+surrounded by three <computeroutput>PUSHL</computeroutput> and
+<computeroutput>POPL</computeroutput> instructions to save and
+restore the caller-save registers.  Then the address of the
+instruction's cost centre is pushed onto the stack, to be the
+first argument to the cache simulation function.  The address is
+known at this point because we are doing a simultaneous pass
+through the cost centre array.  This means the cost centre lookup
+for each instruction is almost free (just the cost of pushing an
+argument for a function call).  Then the call to the cache
+simulation function for non-memory-reference instructions is made
+(note that the <computeroutput>CALLMo</computeroutput>
+UInstruction takes an offset into a table of predefined
+functions; it is not an absolute address), and the single
+argument is <computeroutput>CLEAR</computeroutput>ed from the
+stack.</para>
+
+<para>The second instruction's UCode is similar.  The only
+difference is that, as mentioned before, we have to pass the
+address of the data item referenced to the cache simulation
+function too.  This explains the <computeroutput>MOVL t14,
+t42</computeroutput> and <computeroutput>PUSHL
+t42</computeroutput> UInstructions.  (Note that the seemingly
+redundant <computeroutput>MOV</computeroutput>ing will probably
+be optimised away during register allocation.)</para>
+
+<para>Note that instead of storing unchanging information about
+each instruction (instruction size, data size, etc) in its cost
+centre, we could have passed in these arguments to the simulation
+function.  But this would slow the calls down (two or three extra
+arguments pushed onto the stack).  Also it would bloat the UCode
+instrumentation by amounts similar to the space required for them
+in the cost centre; bloated UCode would also fill the translation
+cache more quickly, requiring more translations for large
+programs and slowing them down more.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.retranslations" 
+         xreflabel="Handling basic block retranslations">
+<title>Handling basic block retranslations</title>
+
+<para>The above description ignores one complication.  Valgrind
+has a limited size cache for basic block translations; if it
+fills up, old translations are discarded.  If a discarded basic
+block is executed again, it must be re-translated.</para>
+
+<para>However, we can't use this approach for profiling -- we
+can't throw away cost centres for instructions in the middle of
+execution!  So when a basic block is translated, we first look
+for its cost centre array in the hash table.  If there is no cost
+centre array, it must be the first translation, so we proceed as
+described above.  But if there is a cost centre array already, it
+must be a retranslation.  In this case, we skip the cost centre
+allocation and initialisation steps, but still do the UCode
+instrumentation step.</para>
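The translate-time decision can be sketched as follows (a dictionary-based Python analogue, not Valgrind's actual data structures):

```python
# Sketch of the retranslation logic: cost centre arrays survive in a
# table keyed by basic-block address, so counts are never lost when a
# translation is discarded and redone.  (Hypothetical structure.)

cc_table = {}

def translate(bb_addr, n_instrs):
    if bb_addr not in cc_table:
        # First translation: allocate and zero the cost centres.
        cc_table[bb_addr] = [{"a": 0, "m1": 0, "m2": 0}
                             for _ in range(n_instrs)]
    # The instrumentation step happens on every (re)translation,
    # pointing the new code at the existing array.
    return cc_table[bb_addr]

ccs = translate(0x8048F25, 3)
ccs[0]["a"] += 1                  # simulated execution bumps a counter
again = translate(0x8048F25, 3)   # retranslation: same array, counts kept
print(again[0]["a"])  # 1
```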
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.cachesim" xreflabel="The cache simulation">
+<title>The cache simulation</title>
+
+<para>The cache simulation is fairly straightforward.  It just
+tracks which memory blocks are in the cache at the moment (it
+doesn't track the contents, since that is irrelevant).</para>
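A minimal version of such a simulation (illustrative sizes and structure; the real code lives in vg_cachesim_{I1,D1,L2}.c) only needs to track block addresses per set, with LRU replacement:

```python
# Minimal set-associative cache model of the kind described: it tracks
# which block addresses are resident (never their contents) and counts
# misses.  LRU replacement; the sizes are illustrative, not Valgrind's.

class Cache:
    def __init__(self, size=8192, assoc=2, line=32):
        self.line = line
        self.sets = size // (assoc * line)
        self.assoc = assoc
        self.tags = [[] for _ in range(self.sets)]  # MRU at the end
        self.misses = 0

    def access(self, addr):
        block = addr // self.line
        s = self.tags[block % self.sets]
        if block in s:
            s.remove(block)          # hit: move to MRU position
        else:
            self.misses += 1
            if len(s) == self.assoc:
                s.pop(0)             # evict the LRU block
        s.append(block)

c = Cache()
for a in (0, 4, 32, 0, 8192):        # 8192 maps to the same set as 0
    c.access(a)
print(c.misses)  # 3
```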
+
+<para>The interface to the simulation is quite clean.  The
+functions called from the UCode contain calls to the simulation
+functions in the files
+<filename>vg_cachesim_{I1,D1,L2}.c</filename>; these calls are
+inlined so that only one function call is done per simulated x86
+instruction.  The file <filename>vg_cachesim.c</filename> simply
+<computeroutput>#include</computeroutput>s the three files
+containing the simulation, which makes plugging in new cache
+simulations very easy -- you just replace the three files and
+recompile.</para>
+
+</sect1>
+
+
+<sect1 id="cg-tech-docs.output" xreflabel="Output">
+<title>Output</title>
+
+<para>Output is fairly straightforward, basically printing the
+cost centre for every instruction, grouped by files and
+functions.  Total counts (eg. total cache accesses, total L1
+misses) are calculated when traversing this structure rather than
+during execution, to save time; the cache simulation functions
+are called so often that even one or two extra adds can make a
+sizeable difference.</para>
+
+<para>The input file has the following format:</para>
+<programlisting><![CDATA[
+file         ::= desc_line* cmd_line events_line data_line+ summary_line
+desc_line    ::= "desc:" ws? non_nl_string
+cmd_line     ::= "cmd:" ws? cmd
+events_line  ::= "events:" ws? (event ws)+
+data_line    ::= file_line | fn_line | count_line
+file_line    ::= ("fl=" | "fi=" | "fe=") filename
+fn_line      ::= "fn=" fn_name
+count_line   ::= line_num ws? (count ws)+
+summary_line ::= "summary:" ws? (count ws)+
+count        ::= num | "."]]></programlisting>
+
+<para>Where:</para>
+<itemizedlist>
+  <listitem>
+    <para><computeroutput>non_nl_string</computeroutput> is any
+    string not containing a newline.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>cmd</computeroutput> is a command line
+    invocation.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>filename</computeroutput> and
+    <computeroutput>fn_name</computeroutput> can be anything.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>num</computeroutput> and
+    <computeroutput>line_num</computeroutput> are decimal
+    numbers.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>ws</computeroutput> is whitespace.</para>
+  </listitem>
+  <listitem>
+    <para><computeroutput>nl</computeroutput> is a newline.</para>
+  </listitem>
+
+</itemizedlist>
+
+<para>The contents of the "desc:" lines are printed out at the top
+of the summary.  This is a generic way of providing
+simulation-specific information, eg. giving the cache
+configuration for the cache simulation.</para>
+
+<para>Counts can be "." to represent "N/A", eg. the number of
+write misses for an instruction that doesn't write to
+memory.</para>
+
+<para>The number of counts in each
+<computeroutput>count_line</computeroutput> and the
+<computeroutput>summary_line</computeroutput> should not exceed
+the number of events in the
+<computeroutput>events_line</computeroutput>.  If a
+<computeroutput>count_line</computeroutput> has fewer, cg_annotate
+treats the missing ones as though they were "." entries.</para>
+
+<para>A <computeroutput>file_line</computeroutput> changes the
+current file name.  A <computeroutput>fn_line</computeroutput>
+changes the current function name.  A
+<computeroutput>count_line</computeroutput> contains counts that
+pertain to the current filename/fn_name.  A "fl="
+<computeroutput>file_line</computeroutput> and a
+<computeroutput>fn_line</computeroutput> must appear before any
+<computeroutput>count_line</computeroutput>s to give the context
+of the first <computeroutput>count_line</computeroutput>s.</para>
+
+<para>Each <computeroutput>file_line</computeroutput> should be
+immediately followed by a
+<computeroutput>fn_line</computeroutput>.  "fi="
+<computeroutput>file_lines</computeroutput> are used to switch
+filenames for inlined functions; "fe="
+<computeroutput>file_lines</computeroutput> are similar, but are
+put at the end of a basic block in which the file name hasn't
+been switched back to the original file name.  ("fi" and "fe"
+lines behave the same; they are distinguished only to aid
+debugging.)</para>
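A simplified reader for this format might look like the following sketch (it treats "fi="/"fe=" like "fl=" and just accumulates counts per file/function; the real cg_annotate does considerably more):

```python
# Simplified reader for the grammar above: accumulates event counts per
# (file, function).  A "." count is treated as zero, and missing
# trailing counts are padded with zeros.  (Illustrative sketch only.)

def parse(text):
    events, totals = [], {}
    cur_file = cur_fn = None
    for line in text.splitlines():
        if line.startswith(("desc:", "cmd:", "summary:")):
            continue
        if line.startswith("events:"):
            events = line.split()[1:]
        elif line.startswith(("fl=", "fi=", "fe=")):
            cur_file = line[3:]
        elif line.startswith("fn="):
            cur_fn = line[3:]
        elif line.strip():
            # count_line: first token is the line number
            counts = [0 if c == "." else int(c) for c in line.split()[1:]]
            counts += [0] * (len(events) - len(counts))
            key = (cur_file, cur_fn)
            old = totals.get(key, [0] * len(events))
            totals[key] = [a + b for a, b in zip(old, counts)]
    return events, totals

sample = """\
desc: I1 cache: 8192 B
cmd: ./prog
events: Ir I1mr
fl=bar.c
fn=f1
5 100 2
6 50 .
summary: 150 2
"""
events, totals = parse(sample)
print(events)                    # ['Ir', 'I1mr']
print(totals[("bar.c", "f1")])   # [150, 2]
```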
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.summary" 
+         xreflabel="Summary of performance features">
+<title>Summary of performance features</title>
+
+<para>Quite a lot of work has gone into making the profiling as
+fast as possible.  This is a summary of the important
+features:</para>
+
+<itemizedlist>
+
+  <listitem>
+    <para>The basic block-level cost centre storage allows almost
+    free cost centre lookup.</para>
+  </listitem>
+  
+  <listitem>
+    <para>Only one function call is made per instruction
+    simulated; even this accounts for a sizeable percentage of
+    execution time, but it seems unavoidable if we want
+    flexibility in the cache simulator.</para>
+  </listitem>
+
+  <listitem>
+    <para>Unchanging information about an instruction is stored
+    in its cost centre, avoiding unnecessary argument pushing,
+    and minimising UCode instrumentation bloat.</para>
+  </listitem>
+
+  <listitem>
+    <para>Summary counts are calculated at the end, rather than
+    during execution.</para>
+  </listitem>
+
+  <listitem>
+    <para>The <computeroutput>cachegrind.out</computeroutput>
+    output files can contain huge amounts of information; the file
+    format was carefully chosen to minimise file sizes.</para>
+  </listitem>
+
+</itemizedlist>
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.annotate" xreflabel="Annotation">
+<title>Annotation</title>
+
+<para>Annotation is done by cg_annotate.  It is a fairly
+straightforward Perl script that slurps up all the cost centres,
+and then runs through all the chosen source files, printing out
+cost centres with them.  It too has been carefully optimised.</para>
+
+</sect1>
+
+
+
+<sect1 id="cg-tech-docs.extensions" xreflabel="Similar work, extensions">
+<title>Similar work, extensions</title>
+
+<para>It would be relatively straightforward to do other
+simulations and obtain line-by-line information about interesting
+events.  A good example would be branch prediction -- all
+branches could be instrumented to interact with a branch
+prediction simulator, using very similar techniques to those
+described above.</para>
+
+<para>In particular, cg_annotate would not need to change -- the
+file format is such that it is not specific to the cache
+simulation, but could be used for any kind of line-by-line
+information.  The only part of cg_annotate that is specific to
+the cache simulation is the name of the input file
+(<computeroutput>cachegrind.out</computeroutput>), although it
+would be very simple to add an option to control this.</para>
+
+</sect1>
+
+</chapter>
diff --git a/cachegrind/docs/cg_main.html b/cachegrind/docs/cg_main.html
deleted file mode 100644
index 545748a..0000000
--- a/cachegrind/docs/cg_main.html
+++ /dev/null
@@ -1,714 +0,0 @@
-<html>
-  <head>
-    <title>Cachegrind: a cache-miss profiler</title>
-  </head>
-
-<body>
-<a name="cg-top"></a>
-<h2>4&nbsp; <b>Cachegrind</b>: a cache-miss profiler</h2>
-
-To use this tool, you must specify <code>--tool=cachegrind</code>
-on the Valgrind command line.
-
-<p>
-Detailed technical documentation on how Cachegrind works is available
-<A HREF="cg_techdocs.html">here</A>.  If you want to know how
-to <b>use</b> it, you only need to read this page.
-
-
-<a name="cache"></a>
-<h3>4.1&nbsp; Cache profiling</h3>
-Cachegrind is a tool for doing cache simulations and annotating your source
-line-by-line with the number of cache misses.  In particular, it records:
-<ul>
-  <li>L1 instruction cache reads and misses;
-  <li>L1 data cache reads and read misses, writes and write misses;
-  <li>L2 unified cache reads and read misses, writes and writes misses.
-</ul>
-On a modern x86 machine, an L1 miss will typically cost around 10 cycles,
-and an L2 miss can cost as much as 200 cycles. Detailed cache profiling can be
-very useful for improving the performance of your program.<p>
-
-Also, since one instruction cache read is performed per instruction executed,
-you can find out how many instructions are executed per line, which can be
-useful for traditional profiling and test coverage.<p>
-
-Any feedback, bug-fixes, suggestions, etc, welcome.
-
-
-<h3>4.2&nbsp; Overview</h3>
-First off, as for normal Valgrind use, you probably want to compile with
-debugging info (the <code>-g</code> flag).  But by contrast with normal
-Valgrind use, you probably <b>do</b> want to turn optimisation on, since you
-should profile your program as it will be normally run.
-
-The two steps are:
-<ol>
-  <li>Run your program with <code>valgrind --tool=cachegrind</code> in front of
-      the normal command line invocation.  When the program finishes,
-      Cachegrind will print summary cache statistics. It also collects
-      line-by-line information in a file
-      <code>cachegrind.out.<i>pid</i></code>, where <code><i>pid</i></code>
-      is the program's process id.
-      <p>
-      This step should be done every time you want to collect
-      information about a new program, a changed program, or about the
-      same program with different input.
-  </li><p>
-  <li>Generate a function-by-function summary, and possibly annotate
-      source files, using the supplied
-      <code>cg_annotate</code> program. Source files to annotate can be
-      specified manually on the command line, or
-      "interesting" source files can be annotated automatically with
-      the <code>--auto=yes</code> option.  You can annotate C/C++
-      files or assembly language files equally easily.
-      <p>
-      This step can be performed as many times as you like for each
-      Step 1.  You may want to do multiple annotations showing
-      different information each time.
-  </li><p>
-</ol>
-
-The steps are described in detail in the following sections.
-
-
-<h3>4.3&nbsp; Cache simulation specifics</h3>
-
-Cachegrind uses a simulation for a machine with a split L1 cache and a unified
-L2 cache.  This configuration is used for all (modern) x86-based machines we
-are aware of.  Old Cyrix CPUs had a unified I and D L1 cache, but they are
-ancient history now.<p>
-
-The more specific characteristics of the simulation are as follows.
-
-<ul>
-  <li>Write-allocate: when a write miss occurs, the block written to
-      is brought into the D1 cache.  Most modern caches have this
-      property.<p>
-  </li>
-  <p>
-  <li>Bit-selection hash function: the line(s) in the cache to which a
-      memory block maps is chosen by the middle bits M--(M+N-1) of the
-      byte address, where:
-      <ul>
-        <li>&nbsp;line size = 2^M bytes&nbsp;</li>
-        <li>(cache size / line size) = 2^N lines</li>
-      </ul> 
-  </li>
-  <p>
-  <li>Inclusive L2 cache: the L2 cache replicates all the entries of
-      the L1 cache.  This is standard on Pentium chips, but AMD
-      Athlons use an exclusive L2 cache that only holds blocks evicted
-      from L1.  Ditto AMD Durons and most modern VIAs.</li>
-</ul>
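
The bit-selection mapping described above can be sketched in Python for the
direct-mapped case (a hypothetical helper for illustration, not Cachegrind's
own code; in a set-associative cache the same middle bits select a set of
lines rather than a single line):

```python
# Bit-selection hash: the line a block maps to is given by the middle
# bits M..(M+N-1) of the byte address, where line size = 2^M bytes and
# the cache holds 2^N lines.
def cache_line(addr, line_size, cache_size):
    m = line_size.bit_length() - 1        # line size = 2^M bytes
    n_lines = cache_size // line_size     # 2^N lines in the cache
    return (addr >> m) & (n_lines - 1)    # extract the middle N bits
```

For example, with 64-byte lines in a 64KB cache, addresses 0 and 64 map to
lines 0 and 1, and address 65536 wraps back around to line 0.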
-
-The cache configuration simulated (cache size, associativity and line size) is
-determined automagically using the CPUID instruction.  If you have an old
-machine that (a) doesn't support the CPUID instruction, or (b) supports it in
-an early incarnation that doesn't give any cache information, then Cachegrind
-will fall back to using a default configuration (that of a model 3/4 Athlon).
-Cachegrind will tell you if this happens.  You can manually specify one, two or
-all three levels (I1/D1/L2) of the cache from the command line using the
-<code>--I1</code>, <code>--D1</code> and <code>--L2</code> options.
-
-<p>
-Other noteworthy behaviour:
-
-<ul>
-  <li>References that straddle two cache lines are treated as follows:
-  <ul>
-    <li>If both blocks hit --&gt; counted as one hit</li>
-    <li>If one block hits, the other misses --&gt; counted as one miss</li>
-    <li>If both blocks miss --&gt; counted as one miss (not two)</li>
-  </ul>
-  </li>
-
-  <li>Instructions that modify a memory location (eg. <code>inc</code> and
-      <code>dec</code>) are counted as doing just a read, ie. a single data
-      reference.  This may seem strange, but since the write can never cause a
-      miss (the read guarantees the block is in the cache) it's not very
-      interesting.
-      <p>
-      Thus it measures not the number of times the data cache is accessed, but
-      the number of times a data cache miss could occur.<p>
-      </li>
-</ul>
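
The straddling rule above amounts to asking how many cache lines a reference
touches; a small sketch (hypothetical helper, assuming a 64-byte line size by
default):

```python
def lines_touched(addr, size, line_size=64):
    # Number of cache lines covered by a reference of `size` bytes
    # starting at byte address `addr`; a result of 2 means the
    # reference straddles a line boundary.
    first = addr // line_size
    last = (addr + size - 1) // line_size
    return last - first + 1

# Per the rule above, even a straddling reference is counted as at
# most one miss, never two.
```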
-
-If you are interested in simulating a cache with different properties, it is
-not particularly hard to write your own cache simulator, or to modify the
-existing ones in <code>vg_cachesim_I1.c</code>, <code>vg_cachesim_D1.c</code>,
-<code>vg_cachesim_L2.c</code> and <code>vg_cachesim_gen.c</code>.  We'd be
-interested to hear from anyone who does.
-
-
-<a name="profile"></a>
-<h3>4.4&nbsp; Profiling programs</h3>
-
-To gather cache profiling information about the program <code>ls -l</code>,
-invoke Cachegrind like this:
-
-<blockquote><code>valgrind --tool=cachegrind ls -l</code></blockquote>
-
-The program will execute (slowly).  Upon completion, summary statistics
-that look like this will be printed:
-
-<pre>
-==31751== I   refs:      27,742,716
-==31751== I1  misses:           276
-==31751== L2  misses:           275
-==31751== I1  miss rate:        0.0%
-==31751== L2i miss rate:        0.0%
-==31751== 
-==31751== D   refs:      15,430,290  (10,955,517 rd + 4,474,773 wr)
-==31751== D1  misses:        41,185  (    21,905 rd +    19,280 wr)
-==31751== L2  misses:        23,085  (     3,987 rd +    19,098 wr)
-==31751== D1  miss rate:        0.2% (       0.1%   +       0.4%)
-==31751== L2d miss rate:        0.1% (       0.0%   +       0.4%)
-==31751== 
-==31751== L2 misses:         23,360  (     4,262 rd +    19,098 wr)
-==31751== L2 miss rate:         0.0% (       0.0%   +       0.4%)
-</pre>
-
-Cache accesses for instruction fetches are summarised first, giving the
-number of fetches made (this is the number of instructions executed, which
-can be useful to know in its own right), the number of I1 misses, and the
-number of L2 instruction (<code>L2i</code>) misses.
-<p>
-Cache accesses for data follow. The information is similar to that of the
-instruction fetches, except that the values are also shown split between reads
-and writes (note each row's <code>rd</code> and <code>wr</code> values add up
-to the row's total).
-<p>
-Combined instruction and data figures for the L2 cache follow that.
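
As a quick sanity check, the rd/wr split in the summary really is additive
(the numbers below are copied from the example output above):

```python
# D refs and D1 misses from the example "ls -l" run, split rd/wr.
d_rd, d_wr = 10_955_517, 4_474_773
d1_rd_miss, d1_wr_miss = 21_905, 19_280

assert d_rd + d_wr == 15_430_290            # matches the "D refs" total
assert d1_rd_miss + d1_wr_miss == 41_185    # matches the "D1 misses" total
```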
-
-
-<h3>4.5&nbsp; Output file</h3>
-
-As well as printing summary information, Cachegrind also writes
-line-by-line cache profiling information to a file named
-<code>cachegrind.out.<i>pid</i></code>.  This file is human-readable, but is
-best interpreted by the accompanying program <code>cg_annotate</code>,
-described in the next section.
-<p>
-Things to note about the <code>cachegrind.out.<i>pid</i></code> file:
-<ul>
-  <li>It is written every time Cachegrind
-      is run, and will overwrite any existing
-      <code>cachegrind.out.<i>pid</i></code> in the current directory (but
-      that won't happen very often because it takes some time for process ids
-      to be recycled).</li><p>
-  <li>It can be huge: <code>ls -l</code> generates a file of about
-      350KB.  Browsing a few files and web pages with a Konqueror
-      built with full debugging information generates a file
-      of around 15 MB.</li>
-</ul>
-
-Note that older versions of Cachegrind used a log file named
-<code>cachegrind.out</code> (i.e. no <code><i>.pid</i></code> suffix).
-The suffix serves two purposes.  Firstly, it means you don't have to
-rename old log files that you don't want to overwrite.  Secondly, and
-more importantly, it allows correct profiling with the
-<code>--trace-children=yes</code> option of programs that spawn child
-processes.
-
-
-<a name="profileflags"></a>
-<h3>4.6&nbsp; Cachegrind options</h3>
-
-Cache-simulation specific options are:
-
-<ul>
-  <li><code>--I1=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><br>
-      <code>--D1=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><br> 
-      <code>--L2=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><p> 
-      [default: uses CPUID for automagic cache configuration]<p>
-
-      Manually specifies the I1/D1/L2 cache configuration, where
-      <code>size</code> and <code>line_size</code> are measured in bytes.  The
-      three items must be comma-separated, but with no spaces, eg:
-
-      <blockquote>
-      <code>valgrind --tool=cachegrind --I1=65536,2,64</code>
-      </blockquote>
-
-      You can specify one, two or three of the I1/D1/L2 caches.  Any level not
-      manually specified will be simulated using the configuration found in the
-      normal way (via the CPUID instruction, or failing that, via defaults).
-</ul>
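
The three numbers relate to the cache geometry by standard arithmetic (this
relation is an assumption of the sketch, not text from the option
description): the number of sets is the size divided by associativity times
line size.

```python
def n_sets(size, assoc, line_size):
    # A <size>,<associativity>,<line_size> triple describes
    # size // (assoc * line_size) sets; the bit-selection mapping
    # requires the result to divide evenly.
    assert size % (assoc * line_size) == 0
    return size // (assoc * line_size)
```

So `--I1=65536,2,64` describes a 2-way cache with 512 sets of 64-byte lines.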
-
-  
-<a name="annotate"></a>
-<h3>4.7&nbsp; Annotating C/C++ programs</h3>
-
-Before using <code>cg_annotate</code>, it is worth widening your
-window to be at least 120 characters wide if possible, as the output
-lines can be quite long.
-<p>
-To get a function-by-function summary, run <code>cg_annotate
---<i>pid</i></code> in a directory containing a
-<code>cachegrind.out.<i>pid</i></code> file.  The <code>--<i>pid</i></code>
-is required so that <code>cg_annotate</code> knows which log file to use when
-several are present.
-<p>
-The output looks like this:
-
-<pre>
---------------------------------------------------------------------------------
-I1 cache:              65536 B, 64 B, 2-way associative
-D1 cache:              65536 B, 64 B, 2-way associative
-L2 cache:              262144 B, 64 B, 8-way associative
-Command:               concord vg_to_ucode.c
-Events recorded:       Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
-Events shown:          Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
-Event sort order:      Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
-Threshold:             99%
-Chosen for annotation:
-Auto-annotation:       on
-
---------------------------------------------------------------------------------
-Ir         I1mr I2mr Dr         D1mr   D2mr  Dw        D1mw   D2mw
---------------------------------------------------------------------------------
-27,742,716  276  275 10,955,517 21,905 3,987 4,474,773 19,280 19,098  PROGRAM TOTALS
-
---------------------------------------------------------------------------------
-Ir        I1mr I2mr Dr        D1mr  D2mr  Dw        D1mw   D2mw    file:function
---------------------------------------------------------------------------------
-8,821,482    5    5 2,242,702 1,621    73 1,794,230      0      0  getc.c:_IO_getc
-5,222,023    4    4 2,276,334    16    12   875,959      1      1  concord.c:get_word
-2,649,248    2    2 1,344,810 7,326 1,385         .      .      .  vg_main.c:strcmp
-2,521,927    2    2   591,215     0     0   179,398      0      0  concord.c:hash
-2,242,740    2    2 1,046,612   568    22   448,548      0      0  ctype.c:tolower
-1,496,937    4    4   630,874 9,000 1,400   279,388      0      0  concord.c:insert
-  897,991   51   51   897,831    95    30        62      1      1  ???:???
-  598,068    1    1   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__flockfile
-  598,068    0    0   299,034     0     0   149,517      0      0  ../sysdeps/generic/lockfile.c:__funlockfile
-  598,024    4    4   213,580    35    16   149,506      0      0  vg_clientmalloc.c:malloc
-  446,587    1    1   215,973 2,167   430   129,948 14,057 13,957  concord.c:add_existing
-  341,760    2    2   128,160     0     0   128,160      0      0  vg_clientmalloc.c:vg_trap_here_WRAPPER
-  320,782    4    4   150,711   276     0    56,027     53     53  concord.c:init_hash_table
-  298,998    1    1   106,785     0     0    64,071      1      1  concord.c:create
-  149,518    0    0   149,516     0     0         1      0      0  ???:tolower@@GLIBC_2.0
-  149,518    0    0   149,516     0     0         1      0      0  ???:fgetc@@GLIBC_2.0
-   95,983    4    4    38,031     0     0    34,409  3,152  3,150  concord.c:new_word_node
-   85,440    0    0    42,720     0     0    21,360      0      0  vg_clientmalloc.c:vg_bogus_epilogue
-</pre>
-
-First up is a summary of the annotation options:
-                    
-<ul>
-  <li>I1 cache, D1 cache, L2 cache: cache configuration.  So you know the
-      configuration with which these results were obtained.</li><p>
-
-  <li>Command: the command line invocation of the program under
-      examination.</li><p>
-
-  <li>Events recorded: event abbreviations are:<p>
-  <ul>
-    <li><code>Ir  </code>:  I cache reads (ie. instructions executed)</li>
-    <li><code>I1mr</code>: I1 cache read misses</li>
-    <li><code>I2mr</code>: L2 cache instruction read misses</li>
-    <li><code>Dr  </code>:  D cache reads (ie. memory reads)</li>
-    <li><code>D1mr</code>: D1 cache read misses</li>
-    <li><code>D2mr</code>: L2 cache data read misses</li>
-    <li><code>Dw  </code>:  D cache writes (ie. memory writes)</li>
-    <li><code>D1mw</code>: D1 cache write misses</li>
-    <li><code>D2mw</code>: L2 cache data write misses</li>
-  </ul><p>
-      Note that D1 total accesses is given by <code>Dr</code> +
-      <code>Dw</code>, and that L2 total accesses is given by
-      <code>I1mr</code> + <code>D1mr</code> + <code>D1mw</code>.</li><p>
-
-  <li>Events shown: the events shown (a subset of events gathered).  This can
-      be adjusted with the <code>--show</code> option.</li><p>
-
-  <li>Event sort order: the sort order in which functions are shown.  For
-      example, in this case the functions are sorted from highest
-      <code>Ir</code> counts to lowest.  If two functions have identical
-      <code>Ir</code> counts, they will then be sorted by <code>I1mr</code>
-      counts, and so on.  This order can be adjusted with the
-      <code>--sort</code> option.<p>
-
-      Note that this dictates the order the functions appear.  It is <b>not</b>
-      the order in which the columns appear;  that is dictated by the "events
-      shown" line (and can be changed with the <code>--show</code> option).
-      </li><p>
-
-  <li>Threshold: <code>cg_annotate</code> by default omits functions
-      that cause very low numbers of misses to avoid drowning you in
-      information.  In this case, cg_annotate shows summaries for the
-      functions that account for 99% of the <code>Ir</code> counts;
-      <code>Ir</code> is chosen as the threshold event since it is the
-      primary sort event.  The threshold can be adjusted with the
-      <code>--threshold</code> option.</li><p>
-
-  <li>Chosen for annotation: names of files specified manually for annotation; 
-      in this case none.</li><p>
-
-  <li>Auto-annotation: whether auto-annotation was requested via the 
-      <code>--auto=yes</code> option. In this case no.</li><p>
-</ul>
-
-Then follows summary statistics for the whole program. These are similar
-to the summary provided when running <code>valgrind --tool=cachegrind</code>.<p>
-  
-Then follows function-by-function statistics. Each function is
-identified by a <code>file_name:function_name</code> pair. If a column
-contains only a dot it means the function never performs
-that event (eg. the third row shows that <code>strcmp()</code>
-contains no instructions that write to memory). The name
-<code>???</code> is used if the file name and/or function name
-could not be determined from debugging information. If most of the
-entries have the form <code>???:???</code> the program probably wasn't
-compiled with <code>-g</code>.  If any code was invalidated (either due to
-self-modifying code or unloading of shared objects) its counts are aggregated
-into a single cost centre written as <code>(discarded):(discarded)</code>.<p>
-
-It is worth noting that functions will come from three types of source files:
-<ol>
-  <li> From the profiled program (<code>concord.c</code> in this example).</li>
-  <li>From libraries (eg. <code>getc.c</code>).</li>
-  <li>From Valgrind's implementation of some libc functions (eg.
-      <code>vg_clientmalloc.c:malloc</code>).  These are recognisable because
-      the filename begins with <code>vg_</code>, and is probably one of
-      <code>vg_main.c</code>, <code>vg_clientmalloc.c</code> or
-      <code>vg_mylibc.c</code>.
-  </li>
-</ol>
-
-There are two ways to annotate source files -- by choosing them
-manually, or with the <code>--auto=yes</code> option. To do it
-manually, just specify the filenames as arguments to
-<code>cg_annotate</code>. For example, the output from running
-<code>cg_annotate concord.c</code> for our example produces the same
-output as above followed by an annotated version of
-<code>concord.c</code>, a section of which looks like:
-
-<pre>
---------------------------------------------------------------------------------
--- User-annotated source: concord.c
---------------------------------------------------------------------------------
-Ir        I1mr I2mr Dr      D1mr  D2mr  Dw      D1mw   D2mw
-
-[snip]
-
-        .    .    .       .     .     .       .      .      .  void init_hash_table(char *file_name, Word_Node *table[])
-        3    1    1       .     .     .       1      0      0  {
-        .    .    .       .     .     .       .      .      .      FILE *file_ptr;
-        .    .    .       .     .     .       .      .      .      Word_Info *data;
-        1    0    0       .     .     .       1      1      1      int line = 1, i;
-        .    .    .       .     .     .       .      .      .
-        5    0    0       .     .     .       3      0      0      data = (Word_Info *) create(sizeof(Word_Info));
-        .    .    .       .     .     .       .      .      .
-    4,991    0    0   1,995     0     0     998      0      0      for (i = 0; i < TABLE_SIZE; i++)
-    3,988    1    1   1,994     0     0     997     53     52          table[i] = NULL;
-        .    .    .       .     .     .       .      .      .
-        .    .    .       .     .     .       .      .      .      /* Open file, check it. */
-        6    0    0       1     0     0       4      0      0      file_ptr = fopen(file_name, "r");
-        2    0    0       1     0     0       .      .      .      if (!(file_ptr)) {
-        .    .    .       .     .     .       .      .      .          fprintf(stderr, "Couldn't open '%s'.\n", file_name);
-        1    1    1       .     .     .       .      .      .          exit(EXIT_FAILURE);
-        .    .    .       .     .     .       .      .      .      }
-        .    .    .       .     .     .       .      .      .
-  165,062    1    1  73,360     0     0  91,700      0      0      while ((line = get_word(data, line, file_ptr)) != EOF)
-  146,712    0    0   73,356     0     0  73,356      0      0          insert(data-&gt;word, data-&gt;line, table);
-        .    .    .       .     .     .       .      .      .
-        4    0    0       1     0     0       2      0      0      free(data);
-        4    0    0       1     0     0       2      0      0      fclose(file_ptr);
-        3    0    0       2     0     0       .      .      .  }
-</pre>
-
-(Although column widths are automatically minimised, a wide terminal is clearly
-useful.)<p>
-  
-Each source file is clearly marked (<code>User-annotated source</code>) as
-having been chosen manually for annotation.  If the file was found in one of
-the directories specified with the <code>-I</code>/<code>--include</code>
-option, the directory and file are both given.<p>
-
-Each line is annotated with its event counts.  Events not applicable for a line
-are represented by a `.';  this is useful for distinguishing between an event
-which cannot happen, and one which can but did not.<p> 
-
-Sometimes only a small section of a source file is executed.  To minimise
-uninteresting output, Valgrind only shows annotated lines and lines within a
-small distance of annotated lines.  Gaps are marked with the line numbers so
-you know which part of a file the shown code comes from, eg:
-
-<pre>
-(figures and code for line 704)
--- line 704 ----------------------------------------
--- line 878 ----------------------------------------
-(figures and code for line 878)
-</pre>
-
-The amount of context to show around annotated lines is controlled by the
-<code>--context</code> option.<p>
-
-To get automatic annotation, run <code>cg_annotate --auto=yes</code>.
-cg_annotate will automatically annotate every source file it can find that is
-mentioned in the function-by-function summary.  Therefore, the files chosen for
-auto-annotation  are affected by the <code>--sort</code> and
-<code>--threshold</code> options.  Each source file is clearly marked
-(<code>Auto-annotated source</code>) as being chosen automatically.  Any files
-that could not be found are mentioned at the end of the output, eg:    
-
-<pre>
---------------------------------------------------------------------------------
-The following files chosen for auto-annotation could not be found:
---------------------------------------------------------------------------------
-  getc.c
-  ctype.c
-  ../sysdeps/generic/lockfile.c
-</pre>
-
-This is quite common for library files, since libraries are usually compiled
-with debugging information, but the source files are often not present on a
-system.  If a file is chosen for annotation <b>both</b> manually and
-automatically, it is marked as <code>User-annotated source</code>.
-
-Use the <code>-I/--include</code> option to tell Valgrind where to look for
-source files if the filenames found from the debugging information aren't
-specific enough.
-
-Beware that cg_annotate can take some time to digest large
-<code>cachegrind.out.<i>pid</i></code> files, e.g. 30 seconds or more.  Also
-beware that auto-annotation can produce a lot of output if your program is
-large!
-
-
-<h3>4.8&nbsp; Annotating assembler programs</h3>
-
-Valgrind can annotate assembler programs too, or annotate the
-assembler generated for your C program.  Sometimes this is useful for
-understanding what is really happening when an interesting line of C
-code is translated into multiple instructions.<p>
-
-To do this, you just need to assemble your <code>.s</code> files with
-assembler-level debug information.  gcc doesn't do this, but you can
-use the GNU assembler with the <code>--gstabs</code> option to
-generate object files with this information, eg:
-
-<blockquote><code>as --gstabs foo.s</code></blockquote>
-
-You can then profile and annotate source files in the same way as for C/C++
-programs.
-
-
-<h3>4.9&nbsp; <code>cg_annotate</code> options</h3>
-<ul>
-  <li><code>--<i>pid</i></code></li><p>
-
-      Indicates which <code>cachegrind.out.<i>pid</i></code> file to read.
-      Not actually an option -- it is required.
-    
-  <li><code>-h, --help</code></li><p>
-  <li><code>-v, --version</code><p>
-
-      Help and version, as usual.</li>
-
-  <li><code>--sort=A,B,C</code> [default: order in 
-      <code>cachegrind.out.<i>pid</i></code>]<p>
-      Specifies the events upon which the sorting of the function-by-function
-      entries will be based.  Useful if you want to concentrate on eg. I cache
-      misses (<code>--sort=I1mr,I2mr</code>), or D cache misses
-      (<code>--sort=D1mr,D2mr</code>), or L2 misses
-      (<code>--sort=D2mr,I2mr</code>).</li><p>
-
-  <li><code>--show=A,B,C</code> [default: all, using order in
-      <code>cachegrind.out.<i>pid</i></code>]<p>
-      Specifies which events to show (and the column order). Default is to use
-      all present in the <code>cachegrind.out.<i>pid</i></code> file (and use
-      the order in the file).</li><p>
-
-  <li><code>--threshold=X</code> [default: 99%] <p>
-      Sets the threshold for the function-by-function summary.  Functions are
-      shown that account for more than X% of the primary sort event.  If
-      auto-annotating, also affects which files are annotated.
-      
-      Note: thresholds can be set for more than one of the events by appending
-      a colon and a number (no spaces) to any of the events in the
-      <code>--sort</code> option.  E.g. if you want to see the functions that cover
-      99% of L2 read misses and 99% of L2 write misses, use this option:
-      
-      <blockquote><code>--sort=D2mr:99,D2mw:99</code></blockquote>
-      </li><p>
-
-  <li><code>--auto=no</code> [default]<br>
-      <code>--auto=yes</code> <p>
-      When enabled, automatically annotates every file that is mentioned in the
-      function-by-function summary that can be found.  Also gives a list of
-      those that couldn't be found.
-
-  <li><code>--context=N</code> [default: 8]<p>
-      Print N lines of context before and after each annotated line.  Avoids
-      printing large sections of source files that were not executed.  Use a 
-      large number (eg. 10,000) to show all source lines.
-      </li><p>
-
-  <li><code>-I=&lt;dir&gt;, --include=&lt;dir&gt;</code> 
-      [default: empty string]<p>
-      Adds a directory to the list in which to search for files.  Multiple
-      -I/--include options can be given to add multiple directories.
-</ul>
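
The <code>--threshold</code> option above can be read as cumulative coverage
of the primary sort event; a sketch of that interpretation (a hypothetical
helper, not cg_annotate's actual code):

```python
def covered(counts, threshold=99.0):
    # Keep the largest per-function counts until their cumulative
    # share of the total reaches the threshold percentage.
    total, acc, keep = sum(counts), 0, []
    for c in sorted(counts, reverse=True):
        if acc >= total * threshold / 100:
            break
        keep.append(c)
        acc += c
    return keep
```

With counts of 90, 8, 1 and 1 and the default 99% threshold, the first three
functions are shown and the last is omitted.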
-  
-
-<h3>4.10&nbsp; Warnings</h3>
-There are a couple of situations in which cg_annotate issues warnings.
-
-<ul>
-  <li>If a source file is more recent than the
-      <code>cachegrind.out.<i>pid</i></code> file.  This is because the
-      information in <code>cachegrind.out.<i>pid</i></code> is only recorded
-      with line numbers, so if the line numbers change at all in the source
-      (eg.  lines added, deleted, swapped), any annotations will be
-      incorrect.<p>
-
-  <li>If information is recorded about line numbers past the end of a file.
-      This can be caused by the above problem, ie. shortening the source file
-      while using an old <code>cachegrind.out.<i>pid</i></code> file.  If this
-      happens, the figures for the bogus lines are printed anyway (clearly
-      marked as bogus) in case they are important.</li><p>
-</ul>
-
-
-<h3>4.11&nbsp; Things to watch out for</h3>
-Some odd things that can occur during annotation:
-
-<ul>
-  <li>If annotating at the assembler level, you might see something like this:
-
-      <pre>
-      1    0    0  .    .    .  .    .    .          leal -12(%ebp),%eax
-      1    0    0  .    .    .  1    0    0          movl %eax,84(%ebx)
-      2    0    0  0    0    0  1    0    0          movl $1,-20(%ebp)
-      .    .    .  .    .    .  .    .    .          .align 4,0x90
-      1    0    0  .    .    .  .    .    .          movl $.LnrB,%eax
-      1    0    0  .    .    .  1    0    0          movl %eax,-16(%ebp)
-      </pre>
-
-      How can the third instruction be executed twice when the others are
-      executed only once?  As it turns out, it isn't.  Here's a dump of the
-      executable, using <code>objdump -d</code>:
-
-      <pre>
-      8048f25:       8d 45 f4                lea    0xfffffff4(%ebp),%eax
-      8048f28:       89 43 54                mov    %eax,0x54(%ebx)
-      8048f2b:       c7 45 ec 01 00 00 00    movl   $0x1,0xffffffec(%ebp)
-      8048f32:       89 f6                   mov    %esi,%esi
-      8048f34:       b8 08 8b 07 08          mov    $0x8078b08,%eax
-      8048f39:       89 45 f0                mov    %eax,0xfffffff0(%ebp)
-      </pre>
-
-      Notice the extra <code>mov %esi,%esi</code> instruction.  Where did this
-      come from?  The GNU assembler inserted it to serve as the two bytes of
-      padding needed to align the <code>movl $.LnrB,%eax</code> instruction on
-      a four-byte boundary, but pretended it didn't exist when adding debug
-      information.  Thus when Valgrind reads the debug info it thinks that the
-      <code>movl $0x1,0xffffffec(%ebp)</code> instruction covers the address
-      range 0x8048f2b--0x8048f33 by itself, and attributes the counts for the
-      <code>mov %esi,%esi</code> to it.<p>
-  </li>
-
-  <li>Inlined functions can cause strange results in the function-by-function
-      summary.  If a function <code>inline_me()</code> is defined in
-      <code>foo.h</code> and inlined in the functions <code>f1()</code>,
-      <code>f2()</code> and <code>f3()</code> in <code>bar.c</code>, there will
-      not be a <code>foo.h:inline_me()</code> function entry.  Instead, there
-      will be separate function entries for each inlining site, ie.
-      <code>foo.h:f1()</code>, <code>foo.h:f2()</code> and
-      <code>foo.h:f3()</code>.  To find the total counts for
-      <code>foo.h:inline_me()</code>, add up the counts from each entry.<p>
-
-      The reason for this is that although the debug info output by gcc
-      indicates the switch from <code>bar.c</code> to <code>foo.h</code>, it
-      doesn't indicate the name of the function in <code>foo.h</code>, so
-      Valgrind keeps using the old one.<p>
-
-  <li>Sometimes, the same filename might be represented with a relative name
-      and with an absolute name in different parts of the debug info, eg:
-      <code>/home/user/proj/proj.h</code> and <code>../proj.h</code>.  In this
-      case, if you use auto-annotation, the file will be annotated twice with
-      the counts split between the two.<p>
-  </li>
-
-  <li>Files with more than 65,535 lines cause difficulties for the stabs debug
-      info reader.  This is because the line number in the <code>struct
-      nlist</code> defined in <code>a.out.h</code> under Linux is only a 16-bit
-      value.  Valgrind can handle some files with more than 65,535 lines
-      correctly by making some guesses to identify line number overflows.  But
-      some cases are beyond it, in which case you'll get a warning message
-      explaining that annotations for the file might be incorrect.<p>
-  </li>
-
-  <li>If you compile some files with <code>-g</code> and some without, some
-      events that take place in a file without debug info could be attributed
-      to the last line of a file with debug info (whichever one gets placed
-      before the non-debug-info file in the executable).<p>
-  </li>
-</ul>
-
-This list looks long, but these cases should be fairly rare.<p>
-
-Note: stabs is not an easy format to read.  If you come across bizarre
-annotations that look like they might be caused by a bug in the stabs reader,
-please let us know.<p>
-
-
-<h3>4.12&nbsp; Accuracy</h3>
-Valgrind's cache profiling has a number of shortcomings:
-
-<ul>
-  <li>It doesn't account for kernel activity -- the effect of system calls on
-      the cache contents is ignored.</li><p>
-
-  <li>It doesn't account for other process activity (although this is probably
-      desirable when considering a single program).</li><p>
-
-  <li>It doesn't account for virtual-to-physical address mappings;  hence the
-      entire simulation is not a true representation of what's happening in the
-      cache.</li><p>
-
-  <li>It doesn't account for cache misses not visible at the instruction level,
-      eg. those arising from TLB misses, or speculative execution.</li><p>
-
-  <li>Valgrind's custom threads implementation will schedule threads
-      differently to the standard one.  This could warp the results for
-      threaded programs.
-      </li><p>
-
-  <li>The instructions <code>bts</code>, <code>btr</code> and <code>btc</code>
-      will incorrectly be counted as doing a data read if both the arguments
-      are registers, eg:
-
-      <blockquote><code>btsl %eax, %edx</code></blockquote>
-
-      This should only happen rarely.
-      </li><p>
-
-  <li>FPU instructions with data sizes of 28 and 108 bytes (e.g.
-      <code>fsave</code>) are treated as though they only access 16 bytes.
-      These instructions seem to be rare so hopefully this won't affect
-      accuracy much.
-      </li><p>
-</ul>
-
-Another thing worth noting is that results are very sensitive.  Changing the
-size of the <code>valgrind.so</code> file, the size of the program being
-profiled, or even the length of its name can perturb the results.  Variations
-will be small, but don't expect perfectly repeatable results if your program
-changes at all.<p>
-
-While these factors mean you shouldn't trust the results to be super-accurate,
-hopefully they should be close enough to be useful.<p>
-
-
-<h3>4.13&nbsp; Todo</h3>
-<ul>
-  <li>Program start-up/shut-down calls a lot of functions that aren't
-      interesting and just complicate the output.  Would be nice to exclude
-      these somehow.</li>
-  <p>
-</ul> 
-</body>
-</html>
-
diff --git a/cachegrind/docs/cg_techdocs.html b/cachegrind/docs/cg_techdocs.html
deleted file mode 100644
index 0ac5b67..0000000
--- a/cachegrind/docs/cg_techdocs.html
+++ /dev/null
@@ -1,458 +0,0 @@
-<html>
-  <head>
-    <style type="text/css">
-      body      { background-color: #ffffff;
-                  color:            #000000;
-                  font-family:      Times, Helvetica, Arial;
-                  font-size:        14pt}
-      h4        { margin-bottom:    0.3em}
-      code      { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      pre       { color:            #000000;
-                  font-family:      Courier; 
-                  font-size:        13pt }
-      a:link    { color:            #0000C0;
-                  text-decoration:  none; }
-      a:visited { color:            #0000C0; 
-                  text-decoration:  none; }
-      a:active  { color:            #0000C0;
-                  text-decoration:  none; }
-    </style>
-    <title>How Cachegrind works</title>
-  </head>
-
-<body bgcolor="#ffffff">
-
-<a name="cg-techdocs">&nbsp;</a>
-<h1 align=center>How Cachegrind works</h1>
-
-<center>
-Detailed technical notes for hackers, maintainers and the
-overly-curious<br>
-<p>
-<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
-<a
-href="http://valgrind.kde.org">http://valgrind.kde.org</a><br>
-<p>
-Copyright &copy; 2001-2003 Nick Nethercote
-<p>
-</center>
-
-<p>
-
-
-
-
-<hr width="100%">
-
-<h2>Cache profiling</h2>
-Valgrind is a very nice platform for doing cache profiling and other kinds of
-simulation, because it converts horrible x86 instructions into nice clean
-RISC-like UCode.  For example, for cache profiling we are interested in
-instructions that read and write memory;  in UCode there are only four
-instructions that do this:  <code>LOAD</code>, <code>STORE</code>,
-<code>FPU_R</code> and <code>FPU_W</code>.  By contrast, because of the x86
-addressing modes, almost every instruction can read or write memory.<p>
-
-Most of the cache profiling machinery is in the file
-<code>vg_cachesim.c</code>.<p>
-
-These notes are a somewhat haphazard guide to how Valgrind's cache profiling
-works.<p>
-
-<h3>Cost centres</h3>
-Valgrind gathers cache profiling information about every instruction executed,
-individually.  Each instruction has a <b>cost centre</b> associated with it.
-There are two kinds of cost centre: one for instructions that don't reference
-memory (<code>iCC</code>), and one for instructions that do
-(<code>idCC</code>):
-
-<pre>
-typedef struct _CC {
-   ULong a;
-   ULong m1;
-   ULong m2;
-} CC;
-
-typedef struct _iCC {
-   /* word 1 */
-   UChar tag;
-   UChar instr_size;
-
-   /* words 2+ */
-   Addr instr_addr;
-   CC I;
-} iCC;
-   
-typedef struct _idCC {
-   /* word 1 */
-   UChar tag;
-   UChar instr_size;
-   UChar data_size;
-
-   /* words 2+ */
-   Addr instr_addr;
-   CC I; 
-   CC D; 
-} idCC; 
-</pre>
-
-Each <code>CC</code> has three fields <code>a</code>, <code>m1</code>,
-<code>m2</code> for recording references, level 1 misses and level 2 misses.
-Each of these is a 64-bit <code>ULong</code> -- the numbers can get very large,
-ie. greater than the 4.2 billion allowed by a 32-bit unsigned int.<p>
-
-An <code>iCC</code> has one <code>CC</code> for instruction cache accesses.  An
-<code>idCC</code> has two, one for instruction cache accesses, and one for data
-cache accesses.<p>
-
-The <code>iCC</code> and <code>idCC</code> structs also store unchanging
-information about the instruction:
-<ul>
-  <li>An instruction-type identification tag (explained below)</li><p>
-  <li>Instruction size</li><p>
-  <li>Data reference size (<code>idCC</code> only)</li><p>
-  <li>Instruction address</li><p>
-</ul>
-
-Note that the data address is not one of the fields for <code>idCC</code>.  This
-is because for many memory-referencing instructions the data address can change
-each time they're executed (eg. if they use register-offset addressing).  We
-have to give this item to the cache simulation in a different way (see the
-Instrumentation section below).  Some memory-referencing instructions do always
-reference the same address, but we don't try to treat them specially, in order
-to keep things simple.<p>
-
-Also note that there is only room for recording info about one data cache
-access in an <code>idCC</code>.  So what about instructions that do a read then
-a write, such as:
-
-<blockquote><code>incl (%esi)</code></blockquote>
-
-In a write-allocate cache, as simulated by Valgrind, the write cannot miss,
-since it immediately follows the read which will drag the block into the cache
-if it's not already there.  So the write access isn't really interesting, and
-Valgrind doesn't record it.  This means that Valgrind doesn't measure
-memory references, but rather memory references that could miss in the cache.
-This behaviour is the same as that used by the AMD Athlon hardware counters.
-It also has the benefit of simplifying the implementation -- instructions that
-read and write memory can be treated like instructions that read memory.<p>
-
-<h3>Storing cost-centres</h3>
-Cost centres are stored in a way that makes them very cheap to look up, which is
-important since one is looked up for every original x86 instruction
-executed.<p>
-
-Valgrind does JIT translations at the basic block level, and cost centres are
-also set up and stored at the basic block level.  By doing things carefully, we
-store all the cost centres for a basic block in a contiguous array, and lookup
-comes almost for free.<p>
-
-Consider this part of a basic block (for exposition purposes, pretend it's an
-entire basic block):
-
-<pre>
-movl $0x0,%eax
-movl $0x99, -4(%ebp)
-</pre>
-
-The translation to UCode looks like this:
-                
-<pre>
-MOVL      $0x0, t20
-PUTL      t20, %EAX
-INCEIPo   $5
-
-LEA1L     -4(t4), t14
-MOVL      $0x99, t18
-STL       t18, (t14)
-INCEIPo   $7
-</pre>
-
-The first step is to allocate the cost centres.  This requires a preliminary
-pass to count how many x86 instructions were in the basic block, and their
-types (and thus sizes).  UCode translations for single x86 instructions are
-delimited by the <code>INCEIPo</code> instruction, the argument of which gives
-the byte size of the instruction (note that lazy INCEIP updating is turned off
-to allow this).<p>
-
-We can tell if an x86 instruction references memory by looking for
-<code>LDL</code> and <code>STL</code> UCode instructions, and thus what kind of
-cost centre is required.  From this we can determine how many cost centres we
-need for the basic block, and their sizes.  We can then allocate them in a
-single array.<p>
-
-Consider the example code above.  After the preliminary pass, we know we need
-two cost centres, one <code>iCC</code> and one <code>idCC</code>.  So we
-allocate an array to store these which looks like this:
-
-<pre>
-|(uninit)|      tag         (1 byte)
-|(uninit)|      instr_size  (1 byte)
-|(uninit)|      (padding)   (2 bytes)
-|(uninit)|      instr_addr  (4 bytes)
-|(uninit)|      I.a         (8 bytes)
-|(uninit)|      I.m1        (8 bytes)
-|(uninit)|      I.m2        (8 bytes)
-
-|(uninit)|      tag         (1 byte)
-|(uninit)|      instr_size  (1 byte)
-|(uninit)|      data_size   (1 byte)
-|(uninit)|      (padding)   (1 byte)
-|(uninit)|      instr_addr  (4 bytes)
-|(uninit)|      I.a         (8 bytes)
-|(uninit)|      I.m1        (8 bytes)
-|(uninit)|      I.m2        (8 bytes)
-|(uninit)|      D.a         (8 bytes)
-|(uninit)|      D.m1        (8 bytes)
-|(uninit)|      D.m2        (8 bytes)
-</pre>
-
-(We can see now why we need tags to distinguish between the two types of cost
-centres.)<p>
-
-We also record the size of the array.  We look up the debug info of the first
-instruction in the basic block, and then stick the array into a table indexed
-by filename and function name.  This makes it easy to dump the information
-quickly to file at the end.<p>
-
-<h3>Instrumentation</h3>
-The instrumentation pass has two main jobs:
-
-<ol>
-  <li>Fill in the gaps in the allocated cost centres.</li><p>
-  <li>Add UCode to call the cache simulator for each instruction.</li><p>
-</ol>
-
-The instrumentation pass steps through the UCode and the cost centres in
-tandem.  As each original x86 instruction's UCode is processed, the appropriate
-gaps in the instruction's cost centre are filled in, for example:
-
-<pre>
-|INSTR_CC|      tag         (1 byte)
-|5       |      instr_size  (1 byte)
-|(uninit)|      (padding)   (2 bytes)
-|i_addr1 |      instr_addr  (4 bytes)
-|0       |      I.a         (8 bytes)
-|0       |      I.m1        (8 bytes)
-|0       |      I.m2        (8 bytes)
-
-|WRITE_CC|      tag         (1 byte)
-|7       |      instr_size  (1 byte)
-|4       |      data_size   (1 byte)
-|(uninit)|      (padding)   (1 byte)
-|i_addr2 |      instr_addr  (4 bytes)
-|0       |      I.a         (8 bytes)
-|0       |      I.m1        (8 bytes)
-|0       |      I.m2        (8 bytes)
-|0       |      D.a         (8 bytes)
-|0       |      D.m1        (8 bytes)
-|0       |      D.m2        (8 bytes)
-</pre>
-
-(Note that this step is not performed if a basic block is re-translated;  see
-<a href="#retranslations">here</a> for more information.)<p>
-
-GCC inserts padding before the <code>instr_addr</code> field so that it is word
-aligned.<p>
-
-The instrumentation added to call the cache simulation function looks like this
-(instrumentation is indented to distinguish it from the original UCode):
-
-<pre>
-MOVL      $0x0, t20
-PUTL      t20, %EAX
-  PUSHL     %eax
-  PUSHL     %ecx
-  PUSHL     %edx
-  MOVL      $0x4091F8A4, t46  # address of 1st CC
-  PUSHL     t46
-  CALLMo    $0x12             # first cachesim function
-  CLEARo    $0x4
-  POPL      %edx
-  POPL      %ecx
-  POPL      %eax
-INCEIPo   $5
-
-LEA1L     -4(t4), t14
-MOVL      $0x99, t18
-  MOVL      t14, t42
-STL       t18, (t14)
-  PUSHL     %eax
-  PUSHL     %ecx
-  PUSHL     %edx
-  PUSHL     t42
-  MOVL      $0x4091F8C4, t44  # address of 2nd CC
-  PUSHL     t44
-  CALLMo    $0x13             # second cachesim function
-  CLEARo    $0x8
-  POPL      %edx
-  POPL      %ecx
-  POPL      %eax
-INCEIPo   $7
-</pre>
-
-Consider the first instruction's UCode.  Each call is surrounded by three
-<code>PUSHL</code> and <code>POPL</code> instructions to save and restore the
-caller-save registers.  Then the address of the instruction's cost centre is
-pushed onto the stack, to be the first argument to the cache simulation
-function.  The address is known at this point because we are doing a
-simultaneous pass through the cost centre array.  This means the cost centre
-lookup for each instruction is almost free (just the cost of pushing an
-argument for a function call).  Then the call to the cache simulation function
-for non-memory-reference instructions is made (note that the
-<code>CALLMo</code> UInstruction takes an offset into a table of predefined
-functions;  it is not an absolute address), and the single argument is
-<code>CLEAR</code>ed from the stack.<p>
-
-The second instruction's UCode is similar.  The only difference is that, as
-mentioned before, we have to pass the address of the data item referenced to
-the cache simulation function too.  This explains the <code>MOVL t14,
-t42</code> and <code>PUSHL t42</code> UInstructions.  (Note that the seemingly
-redundant <code>MOV</code>ing will probably be optimised away during register
-allocation.)<p>
-
-Note that instead of storing unchanging information about each instruction
-(instruction size, data size, etc) in its cost centre, we could have passed in
-these arguments to the simulation function.  But this would slow the calls down
-(two or three extra arguments pushed onto the stack).  Also it would bloat the
-UCode instrumentation by amounts similar to the space required for them in the
-cost centre;  bloated UCode would also fill the translation cache more quickly,
-requiring more translations for large programs and slowing them down more.<p>
-
-<a name="retranslations"></a>
-<h3>Handling basic block retranslations</h3>
-The above description ignores one complication.  Valgrind has a limited size
-cache for basic block translations;  if it fills up, old translations are
-discarded.  If a discarded basic block is executed again, it must be
-re-translated.<p>
-
-However, we can't use this approach for profiling -- we can't throw away cost
-centres for instructions in the middle of execution!  So when a basic block is
-translated, we first look for its cost centre array in the hash table.  If
-there is no cost centre array, it must be the first translation, so we proceed
-as described above.  But if there is a cost centre array already, it must be a
-retranslation.  In this case, we skip the cost centre allocation and
-initialisation steps, but still do the UCode instrumentation step.<p>
-
-<h3>The cache simulation</h3>
-The cache simulation is fairly straightforward.  It just tracks which memory
-blocks are in the cache at the moment (it doesn't track the contents, since
-that is irrelevant).<p>
-
-The interface to the simulation is quite clean.  The functions called from the
-UCode contain calls to the simulation functions in the files
-<code>vg_cachesim_{I1,D1,L2}.c</code>;  these calls are inlined so that only
-one function call is done per simulated x86 instruction.  The file
-<code>vg_cachesim.c</code> simply <code>#include</code>s the three files
-containing the simulation, which makes plugging in new cache simulations
-very easy -- you just replace the three files and recompile.<p>
-
-<h3>Output</h3>
-Output is fairly straightforward, basically printing the cost centre for every
-instruction, grouped by files and functions.  Total counts (eg. total cache
-accesses, total L1 misses) are calculated when traversing this structure rather
-than during execution, to save time;  the cache simulation functions are called
-so often that even one or two extra adds can make a sizeable difference.<p>
-
-The output file (cg_annotate's input) has the following format:
-
-<pre>
-file         ::= desc_line* cmd_line events_line data_line+ summary_line
-desc_line    ::= "desc:" ws? non_nl_string
-cmd_line     ::= "cmd:" ws? cmd
-events_line  ::= "events:" ws? (event ws)+
-data_line    ::= file_line | fn_line | count_line
-file_line    ::= ("fl=" | "fi=" | "fe=") filename
-fn_line      ::= "fn=" fn_name
-count_line   ::= line_num ws? (count ws)+
-summary_line ::= "summary:" ws? (count ws)+
-count        ::= num | "."
-</pre>
-
-Where:
-
-<ul>
-  <li><code>non_nl_string</code> is any string not containing a newline.</li><p>
-  <li><code>cmd</code> is a command line invocation.</li><p>
-  <li><code>filename</code> and <code>fn_name</code> can be anything.</li><p>
-  <li><code>num</code> and <code>line_num</code> are decimal numbers.</li><p>
-  <li><code>ws</code> is whitespace.</li><p>
-  <li><code>nl</code> is a newline.</li><p>
-</ul>
-
-The contents of the "desc:" lines are printed out at the top of the summary.
-This is a generic way of providing simulation-specific information, eg.
-giving the cache configuration for the cache simulation.<p>
-
-Counts can be "." to represent "N/A", eg. the number of write misses for an
-instruction that doesn't write to memory.<p>
-
-The number of counts in each <code>count_line</code> and the
-<code>summary_line</code> should not exceed the number of events in the
-<code>events_line</code>.  If the number in a <code>count_line</code> is less,
-cg_annotate treats the missing counts as though they were "." entries.  <p>
-
-A <code>file_line</code> changes the current file name.  A <code>fn_line</code>
-changes the current function name.  A <code>count_line</code> contains counts
-that pertain to the current filename/fn_name.  A "fl=" <code>file_line</code>
-and a <code>fn_line</code> must appear before any <code>count_line</code>s to
-give the context of the first <code>count_line</code>s.<p>
-
-Each <code>file_line</code> should be immediately followed by a
-<code>fn_line</code>.  "fi=" <code>file_lines</code> are used to switch
-filenames for inlined functions; "fe=" <code>file_lines</code> are similar, but
-are put at the end of a basic block in which the file name hasn't been switched
-back to the original file name.  (fi and fe lines behave the same; they are
-only distinguished to help debugging.)<p>
-
-
-<h3>Summary of performance features</h3>
-Quite a lot of work has gone into making the profiling as fast as possible.
-This is a summary of the important features:
-
-<ul>
-  <li>The basic block-level cost centre storage allows almost free cost centre
-      lookup.</li><p>
-  
-  <li>Only one function call is made per instruction simulated;  even this
-      accounts for a sizeable percentage of execution time, but it seems
-      unavoidable if we want flexibility in the cache simulator.</li><p>
-
-  <li>Unchanging information about an instruction is stored in its cost centre,
-      avoiding unnecessary argument pushing, and minimising UCode
-      instrumentation bloat.</li><p>
-
-  <li>Summary counts are calculated at the end, rather than during
-      execution.</li><p>
-
-  <li>The <code>cachegrind.out</code> output files can contain huge amounts of
-      information; file format was carefully chosen to minimise file
-      sizes.</li><p>
-</ul>
-
-
-<h3>Annotation</h3>
-Annotation is done by cg_annotate.  It is a fairly straightforward Perl script
-that slurps up all the cost centres, and then runs through all the chosen
-source files, printing out cost centres with them.  It too has been carefully
-optimised.
-
-
-<h3>Similar work, extensions</h3>
-It would be relatively straightforward to do other simulations and obtain
-line-by-line information about interesting events.  A good example would be
-branch prediction -- all branches could be instrumented to interact with a
-branch prediction simulator, using very similar techniques to those described
-above.<p>
-
-In particular, cg_annotate would not need to change -- the file format is such
-that it is not specific to the cache simulation, but could be used for any kind
-of line-by-line information.  The only part of cg_annotate that is specific to
-the cache simulation is the name of the input file
-(<code>cachegrind.out</code>), although it would be very simple to add an
-option to control this.<p>
-
-</body>
-</html>