-----------------------------------------------------------------------------
overview
-----------------------------------------------------------------------------
Previously, Valgrind had its own versions of malloc() et al that replaced
glibc's.  This is necessary for various reasons for Memcheck, but isn't needed
by, and was actually detrimental to, some other skins.  I never managed to
treat this satisfactorily w.r.t. the core/skin split.

Now I have.  If a skin needs to know about malloc() et al, it must provide its
own replacements.  But because this is not uncommon, the core provides a
module, vg_replace_malloc.c, which a skin can link with;  it provides skeleton
definitions to reduce the amount of work a skin must do.  The skeletons handle
the transfer of control from the simd CPU to the real CPU, and also the
--alignment, --sloppy-malloc and --trace-malloc options.  These skeleton
definitions subsequently call functions SK_(malloc), SK_(free), etc., which
the skin must define;  in these functions the skin can do whatever it needs to
do to track heap blocks.
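
For illustration, roughly the least a skin must supply is sketched below,
using the signatures from the Memcheck wrappers in mac_malloc_wrappers.c (see
the diff below);  a real skin would do its block tracking in these functions:

    /* Minimal sketch: replace malloc()/free() but track nothing extra. */
    void* SK_(malloc) ( ThreadState* tst, Int n )
    {
       if (n < 0) return NULL;   /* silly arg */
       return VG_(cli_malloc) ( VG_(clo_alignment), n );
    }

    void SK_(free) ( ThreadState* tst, void* p )
    {
       VG_(cli_free) ( p );
    }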

For skins that track extra info about malloc'd blocks -- previously done with
ShadowChunks -- there is a new file, vg_hashtable.c, that implements a
generic-ish hash table (using dodgy C-style inheritance via struct overlays),
which allows skins to continue doing this fairly easily.
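
The overlay trick looks roughly like this (the layout shown is illustrative
-- see vg_hashtable.c for the real thing):

    /* The generic node type, as the hash table sees it. */
    typedef struct _VgHashNode {
       struct _VgHashNode* next;
       UInt                key;
    } VgHashNode;

    /* A skin "inherits" by giving its own node type the same leading
       fields and appending whatever it wants;  e.g. Memcheck's MAC_Chunk
       (in the diff below) adds size, allockind and where, and casts
       MAC_Chunk* to VgHashNode* when talking to the table. */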

Skins can replace other functions too, e.g. Memcheck has its own versions
of strcpy(), memcpy(), etc.

Overall, it's slightly more work now for skins that need to replace malloc(),
but other skins don't have to use Valgrind's malloc(), so they get a "purer"
program run, which is good, and most of the remaining rough edges from the
core/skin split have been removed.

-----------------------------------------------------------------------------
details
-----------------------------------------------------------------------------
Moved the malloc() et al intercepts from vg_clientfuncs.c into
vg_replace_malloc.c.  Skins can link with it if they want to replace malloc()
and friends;  it does some administrative work, then passes control to
SK_(malloc)() et al, which the skin must define.  These can call
VG_(cli_malloc)() and VG_(cli_free)() to do the actual
allocation/deallocation.  The redzone size for the client (the CLIENT arena)
is specified by the global variable VG_(vg_malloc_redzone_szB).
vg_replace_malloc.c thus represents a kind of "mantle" level service.

To get automake to build vg_replace_malloc.o, had to resort to the same trick
as used for the demangler -- asking for a "no install" library (which is never
used) to be built from it.

Note that malloc, calloc, realloc, builtin_new, builtin_vec_new and memalign
are now all aware of --alignment, whether running on the simd CPU or the real
CPU.

This means the new_mem_heap, die_mem_heap, copy_mem_heap and ban_mem_heap
events no longer exist, since the core no longer controls malloc();  skins
can watch for these events themselves.

This required moving all the ShadowChunk stuff out of the core, which meant
the sizeof_shadow_block ``need'' could be removed, yay -- it was a horrible
hack.  Now ShadowChunks are done with a generic HashTable type, in
vg_hashtable.c, which skins can "inherit from" (in a dodgy C-only fashion, by
using structs with matching layouts).  The free_list stuff was all moved as
part of this, and VgAllocKind was moved out of the core into
Memcheck/Addrcheck and renamed MAC_AllocKind.

Moved these options out of core into vg_replace_malloc.c:
    --trace-malloc
    --sloppy-malloc
    --alignment

The alternative_free ``need'' could go too, since Memcheck is now in complete
control of free(), yay -- another horribility.

The bad_free and free_mismatch events could go too, since they're no longer
detected by the core, yay -- yet another horribility.

Moved malloc() et al wrappers for Memcheck out of vg_clientmalloc.c into
mac_malloc_wrappers.c.  Helgrind has its own wrappers now too.

Introduced the VG_USERREQ__CLIENT_CALL[123] client requests.  When a skin
function is operating on the simd CPU, these let it call a given function and
have it run on the real CPU.  The macros VG_NON_SIMD_CALL[123] in valgrind.h
present a cleaner interface for actual use.  Also introduced analogues of
these that pass 'tst' from the scheduler as the first arg to the called
function -- needed for MC_(client_malloc)() et al.
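
Usage is roughly as follows;  this is a sketch, note_block and its signature
are made up for illustration, and only the macro itself comes from
valgrind.h:

    /* Hypothetical helper that must run on the real CPU, e.g. because
       it calls VG_(malloc)(). */
    static UInt note_block ( void* p )
    {
       /* ... record p, on the real CPU ... */
       return 0;
    }

    /* In a replacement function, running on the simd CPU: */
    UInt res = VG_NON_SIMD_CALL1 ( note_block, p );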

Fiddled with USERREQ_{MALLOC,FREE} etc. in vg_scheduler.c;  they call
SK_({malloc,free})(), which by default call VG_(cli_malloc)() -- glibc's
malloc() can't be called here.  All the other default SK_(calloc)() etc.
instantly panic;  a lock variable ensures that the default SK_({malloc,free})()
are only called from the scheduler, which catches any skin that forgets to
override SK_({malloc,free})().  Got rid of the unused USERREQ_CALLOC,
USERREQ_BUILTIN_NEW, etc.
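
The lock variable is just a flag;  the pattern is roughly this (the names
here are illustrative, not the actual ones in vg_scheduler.c):

    /* Guard around the default implementations. */
    static Bool called_from_scheduler = False;

    void* SK_(malloc) ( ThreadState* tst, Int n )   /* default version */
    {
       /* Asserts (and so panics) if a skin intercepts malloc() but
          forgot to override SK_(malloc)(). */
       vg_assert(called_from_scheduler);
       return VG_(cli_malloc) ( VG_(clo_alignment), n );
    }

    /* In the scheduler, around the call:
          called_from_scheduler = True;
          p = SK_(malloc) ( tst, size );
          called_from_scheduler = False;        */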

Moved the special versions of strcpy(), strlen(), etc., memcpy() and memchr()
into mac_replace_strmem.c -- they are only necessary for Memcheck, partly
because the hyper-optimised normal glibc versions confuse it, and partly for
the overlap checking of memcpy() etc.

Also added dst/src overlap checks to strcpy(), memcpy() and strcat().  They
are reported not as proper errors, but just with single-line warnings, as for
silly args to malloc() et al;  this is mainly because they run on the
simulated CPU and proper error handling would be a pain;  hopefully they're
rare enough not to be a problem.  The strcpy() check is done after the copy,
because doing it beforehand would require counting the length of the string
first.  Also added strncpy() and strncat(), which have overlap checks too.
Note that Addrcheck doesn't do overlap checking.
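
The heart of each check is just a range test;  a sketch (the real helper's
name and exact form aren't shown in this commit, and e.g. the strcpy() case
needs two different lengths):

    /* Do [dst, dst+len) and [src, src+len) intersect? */
    static __inline__
    Bool ranges_overlap ( Addr dst, Addr src, UInt len )
    {
       return (src <= dst && dst < src + len)
           || (dst <= src && src < dst + len);
    }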

Put USERREQ__LOGMESSAGE in vg_skin.h, used for printing the overlap check
warning messages.

After moving malloc() et al and strcpy() et al out of vg_clientfuncs.c, moved
the remaining three things (sigsuspend, VG_(__libc_freeres_wrapper),
__errno_location) into vg_intercept.c, since that also contains things that
run on the simulated CPU.  Removed vg_clientfuncs.c altogether.

Moved the regression test "malloc3" out of corecheck into memcheck, since
corecheck no longer looks for silly (e.g. negative) args to malloc().

Removed the m_eip, m_esp and m_ebp fields from the `Error' type.  They were
being set up, then read immediately just once, and only if GDB attachment was
done;  so now they're just held in local variables.  This saves 12 bytes per
Error.

Made the replacement calloc() check for --sloppy-malloc;  previously it
didn't.

Added "silly" negative size arg check to realloc(), it didn't have one.

Changed VG_(read_selfprocmaps)() so it can parse the file directly, or from a
previously read buffer;  the buffer can be filled with the new
VG_(read_selfprocmaps_contents)().  This is used at start-up to snapshot
/proc/self/maps before the skins do anything, and to parse the snapshot once
they have done their setup stuff.  Skins can now safely call VG_(malloc)() in
SK_({pre,post}_clo_init)() without the mmap'd superblock being erroneously
identified as client memory.
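
In outline, the start-up order is presumably now as below (how the parse
source is selected is illustrative -- the real signature isn't shown in this
commit):

    VG_(read_selfprocmaps_contents)();  /* snapshot /proc/self/maps     */
    SK_(pre_clo_init)();                /* skin may VG_(malloc) here... */
    /* ... process command-line options, SK_(post_clo_init)() ...       */
    VG_(read_selfprocmaps)( ... );      /* now parse the snapshot       */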

Changed the --help usage message slightly;  it's now divided into four
sections: core normal, skin normal, core debugging, skin debugging.  Changed
the interface for the command_line_options need slightly -- it's now two
functions, VG_(print_usage)() and VG_(print_debug_usage)(), which do the
printing themselves instead of just returning a string -- that's more
flexible.
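
A sketch of the new shape (function names as given above;  the option text is
invented for illustration):

    void VG_(print_usage) ( void )
    {
       VG_(printf)(
    "    --example-opt=no|yes      enable an illustrative option [no]\n"
       );
    }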

Removed the DEBUG_CLIENTMALLOC code;  it wasn't being used and was a pain.

Added a regression test testing leak suppressions (nanoleak_supp), and another
testing strcpy/memcpy/etc overlap warnings (overlap).

Also changed Addrcheck to link with the files it shares with Memcheck, rather
than #including the .c files directly.

Commoned up a little more shared Addrcheck/Memcheck code, for the usage
message and initialisation/finalisation.

Added a Bool param to VG_(unique_error)() dictating whether it should allow
GDB to be attached;  needed because we don't want to attach GDB on leak
errors (it causes seg faults).  A bit hacky, but it will do.

Had to change lots of the expected outputs of the regression tests, now that
malloc() et al are in vg_replace_malloc.c rather than vg_clientfuncs.c.


git-svn-id: svn://svn.valgrind.org/valgrind/trunk@1524 a5019735-40e9-0310-863c-91ae7b9d1cf9
diff --git a/memcheck/mac_malloc_wrappers.c b/memcheck/mac_malloc_wrappers.c
new file mode 100644
index 0000000..0636477
--- /dev/null
+++ b/memcheck/mac_malloc_wrappers.c
@@ -0,0 +1,415 @@
+
+/*--------------------------------------------------------------------*/
+/*--- malloc/free wrappers for detecting errors and updating bits. ---*/
+/*---                                        mac_malloc_wrappers.c ---*/
+/*--------------------------------------------------------------------*/
+
+/*
+   This file is part of MemCheck, a heavyweight Valgrind skin for
+   detecting memory errors, and AddrCheck, a lightweight Valgrind skin 
+   for detecting memory errors.
+
+   Copyright (C) 2000-2002 Julian Seward 
+      jseward@acm.org
+
+   This program is free software; you can redistribute it and/or
+   modify it under the terms of the GNU General Public License as
+   published by the Free Software Foundation; either version 2 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, write to the Free Software
+   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+   02111-1307, USA.
+
+   The GNU General Public License is contained in the file COPYING.
+*/
+
+#include "mac_shared.h"
+
+/*------------------------------------------------------------*/
+/*--- Defns                                                ---*/
+/*------------------------------------------------------------*/
+
+/* Stats ... */
+static UInt cmalloc_n_mallocs  = 0;
+static UInt cmalloc_n_frees    = 0;
+static UInt cmalloc_bs_mallocd = 0;
+
+/* We want a 16B redzone on heap blocks for Addrcheck and Memcheck */
+UInt VG_(vg_malloc_redzone_szB) = 16;
+
+/*------------------------------------------------------------*/
+/*--- Tracking malloc'd and free'd blocks                  ---*/
+/*------------------------------------------------------------*/
+
+/* Record malloc'd blocks.  Nb: Addrcheck and Memcheck construct this
+   separately in their respective initialisation functions. */
+VgHashTable MAC_(malloc_list) = NULL;
+   
+/* Records blocks after freeing. */
+static MAC_Chunk* freed_list_start  = NULL;
+static MAC_Chunk* freed_list_end    = NULL;
+static Int        freed_list_volume = 0;
+
+/* Put a shadow chunk on the freed blocks queue, possibly freeing up
+   some of the oldest blocks in the queue at the same time. */
+static void add_to_freed_queue ( MAC_Chunk* mc )
+{
+   MAC_Chunk* sc1;
+
+   /* Put it at the end of the freed list */
+   if (freed_list_end == NULL) {
+      sk_assert(freed_list_start == NULL);
+      freed_list_end    = freed_list_start = mc;
+      freed_list_volume = mc->size;
+   } else {
+      sk_assert(freed_list_end->next == NULL);
+      freed_list_end->next = mc;
+      freed_list_end       = mc;
+      freed_list_volume += mc->size;
+   }
+   mc->next = NULL;
+
+   /* Release enough of the oldest blocks to bring the free queue
+      volume below vg_clo_freelist_vol. */
+
+   while (freed_list_volume > MAC_(clo_freelist_vol)) {
+      sk_assert(freed_list_start != NULL);
+      sk_assert(freed_list_end != NULL);
+
+      sc1 = freed_list_start;
+      freed_list_volume -= sc1->size;
+      /* VG_(printf)("volume now %d\n", freed_list_volume); */
+      sk_assert(freed_list_volume >= 0);
+
+      if (freed_list_start == freed_list_end) {
+         freed_list_start = freed_list_end = NULL;
+      } else {
+         freed_list_start = sc1->next;
+      }
+      sc1->next = NULL; /* just paranoia */
+
+      /* free MAC_Chunk */
+      VG_(cli_free) ( (void*)(sc1->data) );
+      VG_(free) ( sc1 );
+   }
+}
+
+/* Return the first shadow chunk satisfying the predicate p. */
+MAC_Chunk* MAC_(first_matching_freed_MAC_Chunk) ( Bool (*p)(MAC_Chunk*) )
+{
+   MAC_Chunk* mc;
+
+   /* No point looking through freed blocks if we're not keeping
+      them around for a while... */
+   for (mc = freed_list_start; mc != NULL; mc = mc->next)
+      if (p(mc))
+         return mc;
+
+   return NULL;
+}
+
+/* Allocate a shadow chunk for the user block at p, size bytes, record
+   where it was allocated, and put it on the malloc list.  (The user
+   block itself, and its memory protections, are handled by the
+   caller, alloc_and_new_mem().) */
+
+static void add_MAC_Chunk ( ThreadState* tst,
+                            Addr p, UInt size, MAC_AllocKind kind )
+{
+   MAC_Chunk* mc;
+
+   mc            = VG_(malloc)(sizeof(MAC_Chunk));
+   mc->data      = p;
+   mc->size      = size;
+   mc->allockind = kind;
+   mc->where     = VG_(get_ExeContext)(tst);
+
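+   /* MAC_Chunk's leading fields overlay VgHashNode's, so the cast
+      below is safe. */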
+   VG_(HT_add_node)( MAC_(malloc_list), (VgHashNode*)mc );
+}
+
+/*------------------------------------------------------------*/
+/*--- client_malloc(), etc                                 ---*/
+/*------------------------------------------------------------*/
+
+/* Function pointers for the two skins to track interesting events. */
+void (*MAC_(new_mem_heap)) ( Addr a, UInt len, Bool is_inited );
+void (*MAC_(ban_mem_heap)) ( Addr a, UInt len );
+void (*MAC_(die_mem_heap)) ( Addr a, UInt len );
+void (*MAC_(copy_mem_heap))( Addr from, Addr to, UInt len );
+
+/* Allocate memory and note change in memory available */
+static __inline__
+void* alloc_and_new_mem ( ThreadState* tst, UInt size, UInt alignment,
+                          Bool is_zeroed, MAC_AllocKind kind )
+{
+   Addr p;
+
+   VGP_PUSHCC(VgpCliMalloc);
+
+   cmalloc_n_mallocs ++;
+   cmalloc_bs_mallocd += size;
+
+   p = (Addr)VG_(cli_malloc)(alignment, size);
+
+   add_MAC_Chunk ( tst, p, size, kind );
+
+   MAC_(ban_mem_heap)( p-VG_(vg_malloc_redzone_szB), 
+                         VG_(vg_malloc_redzone_szB) );
+   MAC_(new_mem_heap)( p, size, is_zeroed );
+   MAC_(ban_mem_heap)( p+size, VG_(vg_malloc_redzone_szB) );
+
+   VGP_POPCC(VgpCliMalloc);
+   return (void*)p;
+}
+
+void* SK_(malloc) ( ThreadState* tst, Int n )
+{
+   if (n < 0) {
+      VG_(message)(Vg_UserMsg, "Warning: silly arg (%d) to malloc()", n );
+      return NULL;
+   } else {
+      return alloc_and_new_mem ( tst, n, VG_(clo_alignment), 
+                                 /*is_zeroed*/False, MAC_AllocMalloc );
+   }
+}
+
+void* SK_(__builtin_new) ( ThreadState* tst, Int n )
+{
+   if (n < 0) {
+      VG_(message)(Vg_UserMsg, "Warning: silly arg (%d) to __builtin_new()", n);
+      return NULL;
+   } else {
+      return alloc_and_new_mem ( tst, n, VG_(clo_alignment), 
+                                 /*is_zeroed*/False, MAC_AllocNew );
+   }
+}
+
+void* SK_(__builtin_vec_new) ( ThreadState* tst, Int n )
+{
+   if (n < 0) {
+      VG_(message)(Vg_UserMsg, 
+                   "Warning: silly arg (%d) to __builtin_vec_new()", n );
+      return NULL;
+   } else {
+      return alloc_and_new_mem ( tst, n, VG_(clo_alignment), 
+                                 /*is_zeroed*/False, MAC_AllocNewVec );
+   }
+}
+
+void* SK_(memalign) ( ThreadState* tst, Int align, Int n )
+{
+   if (n < 0) {
+      VG_(message)(Vg_UserMsg, "Warning: silly arg (%d) to memalign()", n);
+      return NULL;
+   } else {
+      return alloc_and_new_mem ( tst, n, align, /*is_zeroed*/False, 
+                                 MAC_AllocMalloc );
+   }
+}
+
+void* SK_(calloc) ( ThreadState* tst, Int nmemb, Int size1 )
+{
+   void* p;
+   Int   size, i;
+
+   size = nmemb * size1;
+
+   if (nmemb < 0 || size1 < 0) {
+      VG_(message)(Vg_UserMsg, "Warning: silly args (%d,%d) to calloc()",
+                               nmemb, size1 );
+      return NULL;
+   } else {
+      p = alloc_and_new_mem ( tst, size, VG_(clo_alignment), 
+                              /*is_zeroed*/True, MAC_AllocMalloc );
+      for (i = 0; i < size; i++) 
+         ((UChar*)p)[i] = 0;
+      return p;
+   }
+}
+
+static
+void die_and_free_mem ( ThreadState* tst, MAC_Chunk* mc,
+                        MAC_Chunk** prev_chunks_next_ptr )
+{
+   /* Note: ban redzones again -- just in case user de-banned them
+      with a client request... */
+   MAC_(ban_mem_heap)( mc->data-VG_(vg_malloc_redzone_szB), 
+                                VG_(vg_malloc_redzone_szB) );
+   MAC_(die_mem_heap)( mc->data, mc->size );
+   MAC_(ban_mem_heap)( mc->data+mc->size, VG_(vg_malloc_redzone_szB) );
+
+   /* Remove mc from the malloclist using prev_chunks_next_ptr to
+      avoid repeating the hash table lookup.  Can't remove until at least
+      after free and free_mismatch errors are done because they use
+      describe_addr() which looks for it in malloclist. */
+   *prev_chunks_next_ptr = mc->next;
+
+   /* Record where freed */
+   mc->where = VG_(get_ExeContext) ( tst );
+
+   /* Put it out of harm's way for a while. */
+   add_to_freed_queue ( mc );
+}
+
+
+static __inline__
+void handle_free ( ThreadState* tst, void* p, MAC_AllocKind kind )
+{
+   MAC_Chunk*  mc;
+   MAC_Chunk** prev_chunks_next_ptr;
+
+   VGP_PUSHCC(VgpCliMalloc);
+
+   cmalloc_n_frees++;
+
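+   /* Look up p;  also get the address of the previous node's `next'
+      field, so die_and_free_mem() can unlink the chunk without a
+      second traversal. */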
+   mc = (MAC_Chunk*)VG_(HT_get_node) ( MAC_(malloc_list), (UInt)p,
+                                       (VgHashNode***)&prev_chunks_next_ptr );
+
+   if (mc == NULL) {
+      MAC_(record_free_error) ( tst, (Addr)p );
+      VGP_POPCC(VgpCliMalloc);
+      return;
+   }
+
+   /* check if it's a matching free() / delete / delete [] */
+   if (kind != mc->allockind) {
+      MAC_(record_freemismatch_error) ( tst, (Addr)p );
+   }
+
+   die_and_free_mem ( tst, mc, prev_chunks_next_ptr );
+   VGP_POPCC(VgpCliMalloc);
+}
+
+void SK_(free) ( ThreadState* tst, void* p )
+{
+   handle_free(tst, p, MAC_AllocMalloc);
+}
+
+void SK_(__builtin_delete) ( ThreadState* tst, void* p )
+{
+   handle_free(tst, p, MAC_AllocNew);
+}
+
+void SK_(__builtin_vec_delete) ( ThreadState* tst, void* p )
+{
+   handle_free(tst, p, MAC_AllocNewVec);
+}
+
+void* SK_(realloc) ( ThreadState* tst, void* p, Int new_size )
+{
+   MAC_Chunk  *mc;
+   MAC_Chunk **prev_chunks_next_ptr;
+   UInt        i;
+
+   VGP_PUSHCC(VgpCliMalloc);
+
+   cmalloc_n_frees ++;
+   cmalloc_n_mallocs ++;
+   cmalloc_bs_mallocd += new_size;
+
+   if (new_size < 0) {
+      VG_(message)(Vg_UserMsg, 
+                   "Warning: silly arg (%d) to realloc()", new_size );
+      return NULL;
+   }
+
+   /* First try and find the block. */
+   mc = (MAC_Chunk*)VG_(HT_get_node) ( MAC_(malloc_list), (UInt)p,
+                                       (VgHashNode***)&prev_chunks_next_ptr );
+
+   if (mc == NULL) {
+      MAC_(record_free_error) ( tst, (Addr)p );
+      /* Perhaps we should return to the program regardless. */
+      VGP_POPCC(VgpCliMalloc);
+      return NULL;
+   }
+  
+   /* check if it's a matching free() / delete / delete [] */
+   if (MAC_AllocMalloc != mc->allockind) {
+      /* cannot realloc a range that was allocated with new or new [] */
+      MAC_(record_freemismatch_error) ( tst, (Addr)p );
+      /* but keep going anyway */
+   }
+
+   if (mc->size == new_size) {
+      /* size unchanged */
+      VGP_POPCC(VgpCliMalloc);
+      return p;
+      
+   } else if (mc->size > new_size) {
+      /* new size is smaller */
+      MAC_(die_mem_heap)( mc->data+new_size, mc->size-new_size );
+      mc->size = new_size;
+      VGP_POPCC(VgpCliMalloc);
+      return p;
+
+   } else {
+      /* new size is bigger */
+      Addr p_new;
+
+      /* Get new memory */
+      p_new = (Addr)VG_(cli_malloc)(VG_(clo_alignment), new_size);
+
+      /* First half kept and copied, second half new, 
+         red zones as normal */
+      MAC_(ban_mem_heap) ( p_new-VG_(vg_malloc_redzone_szB), 
+                                 VG_(vg_malloc_redzone_szB) );
+      MAC_(copy_mem_heap)( (Addr)p, p_new, mc->size );
+      MAC_(new_mem_heap) ( p_new+mc->size, new_size-mc->size, /*inited*/False );
+      MAC_(ban_mem_heap) ( p_new+new_size, VG_(vg_malloc_redzone_szB) );
+
+      /* Copy from old to new */
+      for (i = 0; i < mc->size; i++)
+         ((UChar*)p_new)[i] = ((UChar*)p)[i];
+
+      /* Free old memory */
+      die_and_free_mem ( tst, mc, prev_chunks_next_ptr );
+
+      /* this has to be after die_and_free_mem, otherwise the
+         former succeeds in shorting out the new block, not the
+         old, in the case when both are on the same list.  */
+      add_MAC_Chunk ( tst, p_new, new_size, MAC_AllocMalloc );
+
+      VGP_POPCC(VgpCliMalloc);
+      return (void*)p_new;
+   }  
+}
+
+void MAC_(print_malloc_stats) ( void )
+{
+   UInt nblocks = 0, nbytes = 0;
+   
+   /* Mmm... more lexical scoping */
+   void count_one_chunk(VgHashNode* node) {
+      MAC_Chunk* mc = (MAC_Chunk*)node;
+      nblocks ++;
+      nbytes  += mc->size;
+   }
+
+   if (VG_(clo_verbosity) == 0)
+      return;
+
+   /* Count memory still in use. */
+   VG_(HT_apply_to_all_nodes)(MAC_(malloc_list), count_one_chunk);
+
+   VG_(message)(Vg_UserMsg, 
+                "malloc/free: in use at exit: %d bytes in %d blocks.",
+                nbytes, nblocks);
+   VG_(message)(Vg_UserMsg, 
+                "malloc/free: %d allocs, %d frees, %u bytes allocated.",
+                cmalloc_n_mallocs,
+                cmalloc_n_frees, cmalloc_bs_mallocd);
+   if (VG_(clo_verbosity) > 1)
+      VG_(message)(Vg_UserMsg, "");
+}
+
+/*--------------------------------------------------------------------*/
+/*--- end                                    mac_malloc_wrappers.c ---*/
+/*--------------------------------------------------------------------*/