This commit moves some skin-specific stuff out of core, and generally
neatens other things up.
Also, it adds the --gen-suppressions option for automatically generating
suppressions for each error.
Note that it changes the core/skin interface:
SK_(dup_extra_and_update)() is replaced by SK_(update_extra)(), and
SK_(get_error_name)() and SK_(print_extra_suppression_info)() are added.
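For skin authors, the reshaped hooks look roughly like this. This is a
sketch based on the weak defaults in vg_default.c; the MyErrorExtra type
and the error name string are hypothetical stand-ins for whatever a real
skin uses.

   /* Replaces SK_(dup_extra_and_update): update the `extra' part in
      place, and return the number of bytes the core should copy. */
   UInt SK_(update_extra) ( Error* err )
   {
      return sizeof(MyErrorExtra);   /* hypothetical skin-side type */
   }

   /* New, for --gen-suppressions: the name used in suppression files,
      or NULL if this kind of error cannot be suppressed. */
   Char* SK_(get_error_name) ( Error* err )
   {
      return "MySkinError";          /* hypothetical error name */
   }

   /* New, for --gen-suppressions: print any extra lines the
      suppression needs (often nothing). */
   void SK_(print_extra_suppression_info) ( Error* err )
   {
   }

   /* The pp_ExeContext closure argument is gone; the context can be
      printed directly with
      VG_(pp_ExeContext)( VG_(get_error_where)(err) ). */
   void SK_(pp_SkinError) ( Error* err )
   {
      /* skin-specific error printing */
   }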
-----------------------------------------------------------------------------
details
-----------------------------------------------------------------------------
Removed ac_common.c -- it just #included another .c file; moved the
#include into ac_main.c.
Introduced "mac_" prefixes for files shared between Addrcheck and Memcheck,
to make it clearer which code is shared. Also using a "MAC_" prefix for
functions and variables and types that are shared. Addrcheck doesn't see
the "MC_" prefix at all.
Factored out the almost-identical mc_describe_addr() and describe_addr()
(Addrcheck's version) into MAC_(describe_addr)().
Got rid of the "pp_ExeContext" closure passed to SK_(pp_SkinError)(); it
wasn't really necessary.
Introduced MAC_(pp_shared_SkinError)() for the error printing code shared by
Addrcheck and Memcheck. Fixed some bogus stuff in Addrcheck error messages
about "uninitialised bytes" (there because of an imperfect conversion from
Memcheck).
Moved the leak checker out of core (vg_memory.c), into mac_leakcheck.c.
- This meant the hacky way of recording Leak errors, which was different
from that for normal errors, could be changed to something better: introduced
a function VG_(unique_error)() which, unlike VG_(maybe_record_error)(), just
prints the error (unless suppressed) but doesn't record it. Used for
leaks; a much better solution all round, as it allowed me to remove a lot
of almost-identical code from leak handling (is_suppressible_leak(),
leaksupp_matches_callers()). (See the call sketch after this list.)
- As part of this, changed the horrible SK_(dup_extra_and_update) into the
slightly less horrible SK_(update_extra), which returns the size of the
`extra' part for the core to duplicate.
- Also renamed it from VG_(generic_detect_memory_leaks)() to
MAC_(do_detect_memory_leaks)(). In making the code nicer w.r.t. suppressions
and error reporting, I tied it a bit more closely to Memcheck/Addrcheck,
and got rid of some of the args. It's not really "generic" any more, but
then it never really was. (This could be undone, but there doesn't seem
to be much point.)
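As a call sketch of the new entry point (the LeakErr kind and the loss
record `lr' with its allocated_at field are illustrative stand-ins for
what mac_leakcheck.c actually passes):

   /* Print (unless suppressed) one commoned-up leak record, without
      recording it; VG_(unique_error) returns True if it was suppressed. */
   Bool suppressed
      = VG_(unique_error) ( /*tst*/   NULL,      /* use current thread */
                            /*ekind*/ LeakErr,   /* illustrative ErrorKind */
                            /*a*/     0,
                            /*s*/     NULL,
                            /*extra*/ (void*)lr,
                            /*where*/ lr->allocated_at,
                            /*print_error*/ True );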
STREQ and STREQN were #defined in several places, and in two different ways.
Replaced them with the global macros VG_STREQ, VG_CLO_STREQ and VG_CLO_STREQN
in vg_skin.h.
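The CLO versions use the whitespace-tolerant comparisons previously used
for option parsing in vg_main.c. The new definitions in vg_skin.h are
presumably along these lines (that file isn't in this diff, so the exact
spelling there may differ):

   #define VG_STREQ(s1,s2)          (s1 != NULL && s2 != NULL \
                                     && VG_(strcmp)((s1),(s2)) == 0)
   #define VG_CLO_STREQ(s1,s2)      (0 == VG_(strcmp_ws)((s1),(s2)))
   #define VG_CLO_STREQN(nn,s1,s2)  (0 == VG_(strncmp_ws)((s1),(s2),(nn)))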
Added the --gen-suppressions code. This required adding the functions
SK_(get_error_name)() and SK_(print_extra_suppression_info)(), which skins
that use the error-handling machinery need to provide.
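With those hooks in place, a generated suppression looks roughly like
the following (illustrative names: the skin name comes from
VG_(details).name, the error name from SK_(get_error_name)(), and any
further lines from SK_(print_extra_suppression_info)()):

   {
      <insert a suppression name here>
      Memcheck:Addr4
      fun:some_leaky_fn
      obj:/usr/lib/libfoo.so
   }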
Added documentation for --gen-suppressions, and fixed some other minor
documentation problems.
Various other minor related changes too.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@1517 a5019735-40e9-0310-863c-91ae7b9d1cf9
diff --git a/coregrind/docs/coregrind_core.html b/coregrind/docs/coregrind_core.html
index 16b7c1b..14a235e 100644
--- a/coregrind/docs/coregrind_core.html
+++ b/coregrind/docs/coregrind_core.html
@@ -72,7 +72,10 @@
recording them in a suppressions file which is read when Valgrind
starts up. The build mechanism attempts to select suppressions which
give reasonable behaviour for the libc and XFree86 versions detected
-on your machine.
+on your machine. To make it easier to write suppressions, you can use
+the <code>--gen-suppressions=yes</code> option which tells Valgrind to
+print out a suppression for each error that appears, which you can
+then copy into a suppressions file.
<p>
Different skins report different kinds of errors. The suppression
@@ -189,7 +192,7 @@
for technical reasons, valgrind's core itself can't use the GNU C
library, and this makes it difficult to do hostname-to-IP lookups.
<p>
- Writing to a network socket it pretty useless if you don't have
+ Writing to a network socket is pretty useless if you don't have
something listening at the other end. We provide a simple
listener program, <code>valgrind-listener</code>, which accepts
connections on the specified port and copies whatever it is sent
@@ -453,6 +456,34 @@
socket, I guess this option doesn't make any sense. Caveat emptor.
</li><br><p>
+ <li><code>--gen-suppressions=no</code> [the default]<br>
+ <code>--gen-suppressions=yes</code>
+ <p>When enabled, Valgrind will pause after every error shown,
+ and print the line
+ <br>
+ <code>---- Print suppression ? --- [Return/N/n/Y/y/C/c] ----</code>
+ <p>
+ The prompt's behaviour is the same as for the <code>--gdb-attach</code>
+ option.
+ <p>
+ If you choose to, Valgrind will print out a suppression for this error.
+ You can then cut and paste it into a suppression file if you don't want
+ to hear about the error in the future.
+ <p>
+ This option is particularly useful with C++ programs, as it prints out
+ the suppressions with mangled names, as required.
+ <p>
+ Note that the suppressions printed are as specific as possible. You
+ may want to common up similar ones, eg. by adding wildcards to function
+ names. Also, sometimes two different errors are suppressed by the same
+ suppression, in which case Valgrind will output the suppression more than
+ once, but you only need to have one copy in your suppression file (but
+ having more than one won't cause problems). Also, the suppression
+ name is given as <code><insert a suppression name here></code>;
+ the name doesn't really matter, it's only used with the
+ <code>-v</code> option which prints out all used suppression records.
+ </li><br><p>
+
<li><code>--alignment=<number></code> [default: 4]<br> <p>By
default valgrind's <code>malloc</code>, <code>realloc</code>,
etc, return 4-byte aligned addresses. These are suitable for
@@ -620,6 +651,12 @@
unexpectedly in the <code>write()</code> system call, you
may find the <code>--trace-syscalls=yes
--trace-sched=yes</code> flags useful.
+ <p>
+ <li><code>lax-ioctls</code> Be very lax about ioctl handling; the only
+ assumption is that the size is correct. Doesn't require the full
+ buffer to be initialized when writing. Without this, using some
+ device drivers with a large number of strange ioctl commands becomes
+ very tiresome.
</ul>
</li><br><p>
</ul>
@@ -685,6 +722,13 @@
</li><br>
<p>
+ <li><code>--trace-codegen=XXXXX</code> [default: 00000]
+ <p>Enable/disable tracing of code generation. Code can be printed
+ at five different stages of translation; each <code>X</code> element
+ must be 0 or 1.
+ </li><br>
+ <p>
+
<li><code>--stop-after=<number></code>
[default: infinity, more or less]
<p>After <number> basic blocks have been executed, shut down
@@ -709,6 +753,11 @@
(NOTE 20021117: this subsection is illogical here now; it jumbles up
core and skin issues. To be fixed.).
+(NOTE 20030318: the most important correction is that
+<code>valgrind.h</code> should not be included in your program, but
+instead <code>memcheck.h</code> (for the Memcheck and Addrcheck skins)
+or <code>helgrind.h</code> (for Helgrind).)
+
<p>
Valgrind has a trapdoor mechanism via which the client program can
pass all manner of requests and queries to Valgrind. Internally, this
@@ -937,7 +986,7 @@
<h3>2.11 If you have problems</h3>
Mail me (<a href="mailto:jseward@acm.org">jseward@acm.org</a>).
-<p>See <a href="#limits">Section 4</a> for the known limitations of
+<p>See <a href="#limits">this section</a> for the known limitations of
Valgrind, and for a list of programs which are known not to work on
it.
diff --git a/coregrind/vg_clientmalloc.c b/coregrind/vg_clientmalloc.c
index 653dc06..025de27 100644
--- a/coregrind/vg_clientmalloc.c
+++ b/coregrind/vg_clientmalloc.c
@@ -153,6 +153,38 @@
VG_(arena_free) ( VG_AR_CORE, sc );
}
+static
+void sort_malloc_shadows ( ShadowChunk** shadows, UInt n_shadows )
+{
+ Int incs[14] = { 1, 4, 13, 40, 121, 364, 1093, 3280,
+ 9841, 29524, 88573, 265720,
+ 797161, 2391484 };
+ Int lo = 0;
+ Int hi = n_shadows-1;
+ Int i, j, h, bigN, hp;
+ ShadowChunk* v;
+
+ bigN = hi - lo + 1; if (bigN < 2) return;
+ hp = 0; while (hp < 14 && incs[hp] < bigN) hp++; hp--;
+ vg_assert(0 <= hp && hp < 14);
+
+ for (; hp >= 0; hp--) {
+ h = incs[hp];
+ i = lo + h;
+ while (1) {
+ if (i > hi) break;
+ v = shadows[i];
+ j = i;
+ while (shadows[j-h]->data > v->data) {
+ shadows[j] = shadows[j-h];
+ j = j - h;
+ if (j <= (lo + h - 1)) break;
+ }
+ shadows[j] = v;
+ i++;
+ }
+ }
+}
/* Allocate a suitably-sized array, copy all the malloc-d block
shadows into it, and return both the array and the size of it.
@@ -180,6 +212,16 @@
}
}
vg_assert(i == *n_shadows);
+
+ sort_malloc_shadows(arr, *n_shadows);
+
+ /* Sanity check; assert that the blocks are now in order and that
+ they don't overlap. */
+ for (i = 0; i < *n_shadows-1; i++) {
+ sk_assert( arr[i]->data < arr[i+1]->data );
+ sk_assert( arr[i]->data + arr[i]->size < arr[i+1]->data );
+ }
+
return arr;
}
@@ -190,7 +232,7 @@
}
/* Return the first shadow chunk satisfying the predicate p. */
-ShadowChunk* VG_(any_matching_mallocd_ShadowChunks)
+ShadowChunk* VG_(first_matching_mallocd_ShadowChunk)
( Bool (*p) ( ShadowChunk* ))
{
UInt ml_no;
diff --git a/coregrind/vg_default.c b/coregrind/vg_default.c
index 8778b84..1bd36ff 100644
--- a/coregrind/vg_default.c
+++ b/coregrind/vg_default.c
@@ -105,15 +105,15 @@
}
__attribute__ ((weak))
-void SK_(pp_SkinError)(Error* err, void (*pp_ExeContext)(void))
+void SK_(pp_SkinError)(Error* err)
{
non_fund_panic("SK_(pp_SkinError)");
}
__attribute__ ((weak))
-void* SK_(dup_extra_and_update)(Error* err)
+UInt SK_(update_extra)(Error* err)
{
- non_fund_panic("SK_(dup_extra_and_update)");
+ non_fund_panic("SK_(update_extra)");
}
__attribute__ ((weak))
@@ -134,6 +134,18 @@
non_fund_panic("SK_(error_matches_suppression)");
}
+__attribute__ ((weak))
+Char* SK_(get_error_name)(Error* err)
+{
+ non_fund_panic("SK_(get_error_name)");
+}
+
+__attribute__ ((weak))
+void SK_(print_extra_suppression_info)(Error* err)
+{
+ non_fund_panic("SK_(print_extra_suppression_info)");
+}
+
/* ---------------------------------------------------------------------
For throwing out basic block level info when code is invalidated
diff --git a/coregrind/vg_errcontext.c b/coregrind/vg_errcontext.c
index 6ba7343..3040fb2 100644
--- a/coregrind/vg_errcontext.c
+++ b/coregrind/vg_errcontext.c
@@ -91,13 +91,6 @@
static void pp_Error ( Error* err, Bool printCount )
{
- /* Closure for printing where the error occurred. Abstracts details
- about the `where' field away from the skin. */
- void pp_ExeContextClosure(void)
- {
- VG_(pp_ExeContext) ( err->where );
- }
-
if (printCount)
VG_(message)(Vg_UserMsg, "Observed %d times:", err->count );
if (err->tid > 1)
@@ -111,7 +104,7 @@
break;
default:
if (VG_(needs).skin_errors)
- SK_(pp_SkinError)( err, &pp_ExeContextClosure );
+ SK_(pp_SkinError)( err );
else {
VG_(printf)("\nUnhandled error type: %u. VG_(needs).skin_errors\n"
"probably needs to be set?\n",
@@ -123,13 +116,12 @@
/* Figure out if we want to attach for GDB for this error, possibly
by asking the user. */
-static
-Bool vg_is_GDB_attach_requested ( void )
+Bool VG_(is_action_requested) ( Char* action, Bool* clo )
{
Char ch, ch2;
Int res;
- if (VG_(clo_GDB_attach) == False)
+ if (*clo == False)
return False;
VG_(message)(Vg_UserMsg, "");
@@ -137,8 +129,8 @@
again:
VG_(printf)(
"==%d== "
- "---- Attach to GDB ? --- [Return/N/n/Y/y/C/c] ---- ",
- VG_(getpid)()
+ "---- %s ? --- [Return/N/n/Y/y/C/c] ---- ",
+ VG_(getpid)(), action
);
res = VG_(read)(0 /*stdin*/, &ch, 1);
@@ -152,15 +144,15 @@
if (res != 1) goto ioerror;
if (ch2 != '\n') goto again;
- /* No, don't want to attach. */
+ /* No, don't want to do action. */
if (ch == 'n' || ch == 'N') return False;
- /* Yes, want to attach. */
+ /* Yes, want to do action. */
if (ch == 'y' || ch == 'Y') return True;
- /* No, don't want to attach, and don't ask again either. */
+ /* No, don't want to do action, and don't ask again either. */
vg_assert(ch == 'c' || ch == 'C');
ioerror:
- VG_(clo_GDB_attach) = False;
+ *clo = False;
return False;
}
@@ -178,14 +170,17 @@
stored thread state, not from VG_(baseBlock).
*/
static __inline__
-void construct_error ( Error* err, ThreadState* tst,
- ErrorKind ekind, Addr a, Char* s, void* extra )
+void construct_error ( Error* err, ThreadState* tst, ErrorKind ekind, Addr a,
+ Char* s, void* extra, ExeContext* where )
{
/* Core-only parts */
err->next = NULL;
err->supp = NULL;
err->count = 1;
- err->where = VG_(get_ExeContext)( tst );
+ if (NULL == where)
+ err->where = VG_(get_ExeContext)( tst );
+ else
+ err->where = where;
if (NULL == tst) {
err->tid = VG_(get_current_tid)();
@@ -209,6 +204,66 @@
vg_assert(err->tid >= 0 && err->tid < VG_N_THREADS);
}
+void VG_(gen_suppression)(Error* err)
+{
+ UInt i;
+ UChar buf[M_VG_ERRTXT];
+ ExeContext* ec = VG_(get_error_where)(err);
+ Int stop_at = VG_(clo_backtrace_size);
+ Char* name = SK_(get_error_name)(err);
+
+ if (NULL == name) {
+ VG_(message)(Vg_UserMsg, "(skin does not allow error to be suppressed)");
+ return;
+ }
+
+ if (stop_at > 3) stop_at = 3; /* At most three names */
+ vg_assert(stop_at > 0);
+
+ VG_(printf)("{\n");
+ VG_(printf)(" <insert a suppression name here>\n");
+ VG_(printf)(" %s:%s\n", VG_(details).name, name);
+ SK_(print_extra_suppression_info)(err);
+
+ /* This loop condensed from VG_(mini_stack_dump)() */
+ i = 0;
+ do {
+ Addr eip = ec->eips[i];
+ if (i > 0)
+ eip--; /* point to calling line */
+
+ if ( VG_(get_fnname_nodemangle) (eip, buf, M_VG_ERRTXT) ) {
+ VG_(printf)(" fun:%s\n", buf);
+ } else if ( VG_(get_objname)(eip, buf, M_VG_ERRTXT) ) {
+ VG_(printf)(" obj:%s\n", buf);
+ } else {
+ VG_(printf)(" ???:??? "
+ "# unknown, suppression will not work, sorry)\n");
+ }
+ i++;
+ } while (i < stop_at && ec->eips[i] != 0);
+
+ VG_(printf)("}\n");
+}
+
+void do_actions_on_error(Error* err)
+{
+ /* Perhaps we want a GDB attach at this point? */
+ if (VG_(is_action_requested)( "Attach to GDB", & VG_(clo_GDB_attach) )) {
+ VG_(swizzle_esp_then_start_GDB)(
+ err->m_eip, err->m_esp, err->m_ebp);
+ }
+ /* Or maybe we want to generate the error's suppression? */
+ if (VG_(is_action_requested)( "Print suppression",
+ & VG_(clo_gen_suppressions) )) {
+ VG_(gen_suppression)(err);
+ }
+}
+
+/* Shared between VG_(maybe_record_error)() and VG_(unique_error)(),
+ just for pretty printing purposes. */
+static Bool is_first_shown_context = True;
+
/* Top-level entry point to the error management subsystem.
All detected errors are notified here; this routine decides if/when the
user should see the error. */
@@ -218,8 +273,8 @@
Error err;
Error* p;
Error* p_prev;
+ UInt extra_size;
VgRes exe_res = Vg_MedRes;
- static Bool is_first_shown_context = True;
static Bool stopping_message = False;
static Bool slowdown_message = False;
static Int vg_n_errs_shown = 0;
@@ -279,7 +334,7 @@
}
/* Build ourselves the error */
- construct_error ( &err, tst, ekind, a, s, extra );
+ construct_error ( &err, tst, ekind, a, s, extra, NULL );
/* First, see if we've got an error record matching this one. */
p = vg_errors;
@@ -312,20 +367,34 @@
/* Didn't see it. Copy and add. */
- /* OK, we're really going to collect it. First make a copy,
- because the error context is on the stack and will disappear shortly.
- We can duplicate the main part ourselves, but use
- SK_(dup_extra_and_update) to duplicate the `extra' part.
+ /* OK, we're really going to collect it. The context is on the stack and
+ will disappear shortly, so we must copy it. First do the main
+ (non-`extra') part.
- SK_(dup_extra_and_update) can also update the `extra' part. This is
- for when there are more details to fill in which take time to work out
- but don't affect our earlier decision to include the error -- by
+ Then SK_(update_extra) can update the `extra' part. This is for when
+ there are more details to fill in which take time to work out but
+ don't affect our earlier decision to include the error -- by
postponing those details until now, we avoid the extra work in the
case where we ignore the error. Ugly.
- */
+
+ Then, if there is an `extra' part, copy it too, using the size that
+ SK_(update_extra) returned.
+ */
+
+ /* copy main part */
p = VG_(arena_malloc)(VG_AR_ERRORS, sizeof(Error));
*p = err;
- p->extra = SK_(dup_extra_and_update)(p);
+
+ /* update `extra' */
+ extra_size = SK_(update_extra)(p);
+
+ /* copy `extra' if there is one */
+ if (NULL != p->extra) {
+ void* new_extra = VG_(malloc)(extra_size);
+ VG_(memcpy)(new_extra, p->extra, extra_size);
+ p->extra = new_extra;
+ }
+
p->next = vg_errors;
p->supp = is_suppressible_error(&err);
vg_errors = p;
@@ -333,20 +402,56 @@
vg_n_errs_found++;
if (!is_first_shown_context)
VG_(message)(Vg_UserMsg, "");
- pp_Error(p, False);
+ pp_Error(p, False);
is_first_shown_context = False;
vg_n_errs_shown++;
- /* Perhaps we want a GDB attach at this point? */
- if (vg_is_GDB_attach_requested()) {
- VG_(swizzle_esp_then_start_GDB)(
- err.m_eip, err.m_esp, err.m_ebp);
- }
+ do_actions_on_error(p);
} else {
vg_n_errs_suppressed++;
p->supp->count++;
}
}
+/* Second top-level entry point to the error management subsystem, for
+ errors that the skin want to report immediately, eg. because they're
+ guaranteed to only happen once. This avoids all the recording and
+ comparing stuff. But they can be suppressed; returns True if it is
+ suppressed. Bool `print_error' dictates whether to print the error. */
+Bool VG_(unique_error) ( ThreadState* tst, ErrorKind ekind, Addr a, Char* s,
+ void* extra, ExeContext* where, Bool print_error )
+{
+ Error err;
+
+ /* Build ourselves the error */
+ construct_error ( &err, tst, ekind, a, s, extra, where );
+
+ /* Unless it's suppressed, we're going to show it. Don't need to make
+ a copy, because it's only temporary anyway.
+
+ Then update the `extra' part with SK_(update_extra), because that can
+ have an affect on whether it's suppressed. Ignore the size return
+ value of SK_(update_extra), because we're not copying `extra'. */
+ (void)SK_(update_extra)(&err);
+
+ if (NULL == is_suppressible_error(&err)) {
+ vg_n_errs_found++;
+
+ if (print_error) {
+ if (!is_first_shown_context)
+ VG_(message)(Vg_UserMsg, "");
+ pp_Error(&err, False);
+ is_first_shown_context = False;
+ }
+ do_actions_on_error(&err);
+
+ return False;
+
+ } else {
+ vg_n_errs_suppressed++;
+ return True;
+ }
+}
+
/*------------------------------------------------------------*/
/*--- Exported fns ---*/
@@ -529,9 +634,6 @@
return found;
}
-#define STREQ(s1,s2) (s1 != NULL && s2 != NULL \
- && VG_(strcmp)((s1),(s2))==0)
-
/* Read suppressions from the file specified in vg_clo_suppressions
and place them in the suppressions list. If there's any difficulty
doing this, just give up -- there's no point in trying to recover.
@@ -563,10 +665,10 @@
eof = VG_(get_line) ( fd, buf, N_BUF );
if (eof) break;
- if (!STREQ(buf, "{")) goto syntax_error;
+ if (!VG_STREQ(buf, "{")) goto syntax_error;
eof = VG_(get_line) ( fd, buf, N_BUF );
- if (eof || STREQ(buf, "}")) goto syntax_error;
+ if (eof || VG_STREQ(buf, "}")) goto syntax_error;
supp->sname = VG_(arena_strdup)(VG_AR_CORE, buf);
eof = VG_(get_line) ( fd, buf, N_BUF );
@@ -588,7 +690,7 @@
/* Is it a core suppression? */
if (VG_(needs).core_errors && skin_name_present("core", skin_names))
{
- if (STREQ(supp_name, "PThread"))
+ if (VG_STREQ(supp_name, "PThread"))
supp->skind = PThreadSupp;
else
goto syntax_error;
@@ -610,7 +712,7 @@
while (True) {
eof = VG_(get_line) ( fd, buf, N_BUF );
if (eof) goto syntax_error;
- if (STREQ(buf, "}"))
+ if (VG_STREQ(buf, "}"))
break;
}
continue;
@@ -624,7 +726,7 @@
for (i = 0; i < VG_N_SUPP_CALLERS; i++) {
eof = VG_(get_line) ( fd, buf, N_BUF );
if (eof) goto syntax_error;
- if (i > 0 && STREQ(buf, "}"))
+ if (i > 0 && VG_STREQ(buf, "}"))
break;
supp->caller[i] = VG_(arena_strdup)(VG_AR_CORE, buf);
if (!setLocationTy(&(supp->caller[i]), &(supp->caller_ty[i])))
@@ -673,9 +775,8 @@
Doesn't demangle the fn name, because we want to refer to
mangled names in the suppressions file.
*/
-void VG_(get_objname_fnname) ( Addr a,
- Char* obj_buf, Int n_obj_buf,
- Char* fun_buf, Int n_fun_buf )
+static void get_objname_fnname ( Addr a, Char* obj_buf, Int n_obj_buf,
+ Char* fun_buf, Int n_fun_buf )
{
(void)VG_(get_objname) ( a, obj_buf, n_obj_buf );
(void)VG_(get_fnname_nodemangle)( a, fun_buf, n_fun_buf );
@@ -714,7 +815,7 @@
case FunName: if (VG_(string_match)(su->caller[i],
caller_fun[i])) break;
return False;
- default: VG_(skin_panic)("is_suppressible_error");
+ default: VG_(skin_panic)("supp_matches_callers");
}
}
@@ -736,7 +837,7 @@
Supp* su;
/* get_objname_fnname() writes the function name and object name if
- it finds them in the debug info. so the strings in the suppression
+ it finds them in the debug info. So the strings in the suppression
file should match these.
*/
@@ -746,9 +847,8 @@
caller_obj[i][0] = caller_fun[i][0] = 0;
for (i = 0; i < VG_N_SUPP_CALLERS && i < VG_(clo_backtrace_size); i++) {
- VG_(get_objname_fnname) ( err->where->eips[i],
- caller_obj[i], M_VG_ERRTXT,
- caller_fun[i], M_VG_ERRTXT );
+ get_objname_fnname ( err->where->eips[i], caller_obj[i], M_VG_ERRTXT,
+ caller_fun[i], M_VG_ERRTXT );
}
/* See if the error context matches any suppression. */
@@ -761,8 +861,6 @@
return NULL; /* no matches */
}
-#undef STREQ
-
/*--------------------------------------------------------------------*/
/*--- end vg_errcontext.c ---*/
/*--------------------------------------------------------------------*/
diff --git a/coregrind/vg_include.h b/coregrind/vg_include.h
index a420d45..85ba492 100644
--- a/coregrind/vg_include.h
+++ b/coregrind/vg_include.h
@@ -171,6 +171,8 @@
extern Bool VG_(clo_error_limit);
/* Enquire about whether to attach to GDB at errors? default: NO */
extern Bool VG_(clo_GDB_attach);
+/* Enquire about generating a suppression for each error? default: NO */
+extern Bool VG_(clo_gen_suppressions);
/* Sanity-check level: 0 = none, 1 (default), > 1 = expensive. */
extern Int VG_(sanity_level);
/* Automatically attempt to demangle C++ names? default: YES */
@@ -1192,14 +1194,13 @@
extern void VG_(show_all_errors) ( void );
-extern void VG_(get_objname_fnname) ( Addr a,
- Char* obj_buf, Int n_obj_buf,
- Char* fun_buf, Int n_fun_buf );
-
/* Get hold of the suppression list ... just so we don't have to
make it global. */
extern Supp* VG_(get_suppressions) ( void );
+extern Bool VG_(is_action_requested) ( Char* action, Bool* clo );
+
+extern void VG_(gen_suppression) ( Error* err );
/* ---------------------------------------------------------------------
Exports of vg_procselfmaps.c
diff --git a/coregrind/vg_main.c b/coregrind/vg_main.c
index 351aa50..aca07cc 100644
--- a/coregrind/vg_main.c
+++ b/coregrind/vg_main.c
@@ -487,6 +487,7 @@
/* Define, and set defaults. */
Bool VG_(clo_error_limit) = True;
Bool VG_(clo_GDB_attach) = False;
+Bool VG_(clo_gen_suppressions) = False;
Int VG_(sanity_level) = 1;
Int VG_(clo_verbosity) = 1;
Bool VG_(clo_demangle) = True;
@@ -588,6 +589,7 @@
" -q --quiet run silently; only print error msgs\n"
" -v --verbose be more verbose, incl counts of errors\n"
" --gdb-attach=no|yes start GDB when errors detected? [no]\n"
+" --gen-suppressions=no|yes print suppressions for errors detected [no]\n"
" --demangle=no|yes automatically demangle C++ names? [yes]\n"
" --num-callers=<number> show <num> callers in stack traces [4]\n"
" --error-limit=no|yes stop showing new errors if too many? [yes]\n"
@@ -685,8 +687,6 @@
Int i, eventually_logfile_fd, ctr;
# define ISSPACE(cc) ((cc) == ' ' || (cc) == '\t' || (cc) == '\n')
-# define STREQ(s1,s2) (0==VG_(strcmp_ws)((s1),(s2)))
-# define STREQN(nn,s1,s2) (0==VG_(strncmp_ws)((s1),(s2),(nn)))
eventually_logfile_fd = VG_(clo_logfile_fd);
@@ -863,64 +863,71 @@
for (i = 0; i < argc; i++) {
- if (STREQ(argv[i], "-v") || STREQ(argv[i], "--verbose"))
+ if (VG_CLO_STREQ(argv[i], "-v") ||
+ VG_CLO_STREQ(argv[i], "--verbose"))
VG_(clo_verbosity)++;
- else if (STREQ(argv[i], "-q") || STREQ(argv[i], "--quiet"))
+ else if (VG_CLO_STREQ(argv[i], "-q") ||
+ VG_CLO_STREQ(argv[i], "--quiet"))
VG_(clo_verbosity)--;
- else if (STREQ(argv[i], "--error-limit=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--error-limit=yes"))
VG_(clo_error_limit) = True;
- else if (STREQ(argv[i], "--error-limit=no"))
+ else if (VG_CLO_STREQ(argv[i], "--error-limit=no"))
VG_(clo_error_limit) = False;
- else if (STREQ(argv[i], "--gdb-attach=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--gdb-attach=yes"))
VG_(clo_GDB_attach) = True;
- else if (STREQ(argv[i], "--gdb-attach=no"))
+ else if (VG_CLO_STREQ(argv[i], "--gdb-attach=no"))
VG_(clo_GDB_attach) = False;
- else if (STREQ(argv[i], "--demangle=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--gen-suppressions=yes"))
+ VG_(clo_gen_suppressions) = True;
+ else if (VG_CLO_STREQ(argv[i], "--gen-suppressions=no"))
+ VG_(clo_gen_suppressions) = False;
+
+ else if (VG_CLO_STREQ(argv[i], "--demangle=yes"))
VG_(clo_demangle) = True;
- else if (STREQ(argv[i], "--demangle=no"))
+ else if (VG_CLO_STREQ(argv[i], "--demangle=no"))
VG_(clo_demangle) = False;
- else if (STREQ(argv[i], "--sloppy-malloc=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--sloppy-malloc=yes"))
VG_(clo_sloppy_malloc) = True;
- else if (STREQ(argv[i], "--sloppy-malloc=no"))
+ else if (VG_CLO_STREQ(argv[i], "--sloppy-malloc=no"))
VG_(clo_sloppy_malloc) = False;
- else if (STREQN(12, argv[i], "--alignment="))
+ else if (VG_CLO_STREQN(12, argv[i], "--alignment="))
VG_(clo_alignment) = (Int)VG_(atoll)(&argv[i][12]);
- else if (STREQ(argv[i], "--trace-children=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-children=yes"))
VG_(clo_trace_children) = True;
- else if (STREQ(argv[i], "--trace-children=no"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-children=no"))
VG_(clo_trace_children) = False;
- else if (STREQ(argv[i], "--run-libc-freeres=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--run-libc-freeres=yes"))
VG_(clo_run_libc_freeres) = True;
- else if (STREQ(argv[i], "--run-libc-freeres=no"))
+ else if (VG_CLO_STREQ(argv[i], "--run-libc-freeres=no"))
VG_(clo_run_libc_freeres) = False;
- else if (STREQN(15, argv[i], "--sanity-level="))
+ else if (VG_CLO_STREQN(15, argv[i], "--sanity-level="))
VG_(sanity_level) = (Int)VG_(atoll)(&argv[i][15]);
- else if (STREQN(13, argv[i], "--logfile-fd=")) {
+ else if (VG_CLO_STREQN(13, argv[i], "--logfile-fd=")) {
VG_(clo_log_to) = VgLogTo_Fd;
VG_(clo_logfile_name) = NULL;
eventually_logfile_fd = (Int)VG_(atoll)(&argv[i][13]);
}
- else if (STREQN(10, argv[i], "--logfile=")) {
+ else if (VG_CLO_STREQN(10, argv[i], "--logfile=")) {
VG_(clo_log_to) = VgLogTo_File;
VG_(clo_logfile_name) = &argv[i][10];
}
- else if (STREQN(12, argv[i], "--logsocket=")) {
+ else if (VG_CLO_STREQN(12, argv[i], "--logsocket=")) {
VG_(clo_log_to) = VgLogTo_Socket;
VG_(clo_logfile_name) = &argv[i][12];
}
- else if (STREQN(15, argv[i], "--suppressions=")) {
+ else if (VG_CLO_STREQN(15, argv[i], "--suppressions=")) {
if (VG_(clo_n_suppressions) >= VG_CLO_MAX_SFILES) {
VG_(message)(Vg_UserMsg, "Too many suppression files specified.");
VG_(message)(Vg_UserMsg,
@@ -930,28 +937,28 @@
VG_(clo_suppressions)[VG_(clo_n_suppressions)] = &argv[i][15];
VG_(clo_n_suppressions)++;
}
- else if (STREQ(argv[i], "--profile=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--profile=yes"))
VG_(clo_profile) = True;
- else if (STREQ(argv[i], "--profile=no"))
+ else if (VG_CLO_STREQ(argv[i], "--profile=no"))
VG_(clo_profile) = False;
- else if (STREQ(argv[i], "--chain-bb=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--chain-bb=yes"))
VG_(clo_chain_bb) = True;
- else if (STREQ(argv[i], "--chain-bb=no"))
+ else if (VG_CLO_STREQ(argv[i], "--chain-bb=no"))
VG_(clo_chain_bb) = False;
- else if (STREQ(argv[i], "--single-step=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--single-step=yes"))
VG_(clo_single_step) = True;
- else if (STREQ(argv[i], "--single-step=no"))
+ else if (VG_CLO_STREQ(argv[i], "--single-step=no"))
VG_(clo_single_step) = False;
- else if (STREQ(argv[i], "--optimise=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--optimise=yes"))
VG_(clo_optimise) = True;
- else if (STREQ(argv[i], "--optimise=no"))
+ else if (VG_CLO_STREQ(argv[i], "--optimise=no"))
VG_(clo_optimise) = False;
/* "vwxyz" --> 000zyxwv (binary) */
- else if (STREQN(16, argv[i], "--trace-codegen=")) {
+ else if (VG_CLO_STREQN(16, argv[i], "--trace-codegen=")) {
Int j;
char* opt = & argv[i][16];
@@ -971,48 +978,48 @@
}
}
- else if (STREQ(argv[i], "--trace-syscalls=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-syscalls=yes"))
VG_(clo_trace_syscalls) = True;
- else if (STREQ(argv[i], "--trace-syscalls=no"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-syscalls=no"))
VG_(clo_trace_syscalls) = False;
- else if (STREQ(argv[i], "--trace-signals=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-signals=yes"))
VG_(clo_trace_signals) = True;
- else if (STREQ(argv[i], "--trace-signals=no"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-signals=no"))
VG_(clo_trace_signals) = False;
- else if (STREQ(argv[i], "--trace-symtab=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-symtab=yes"))
VG_(clo_trace_symtab) = True;
- else if (STREQ(argv[i], "--trace-symtab=no"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-symtab=no"))
VG_(clo_trace_symtab) = False;
- else if (STREQ(argv[i], "--trace-malloc=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-malloc=yes"))
VG_(clo_trace_malloc) = True;
- else if (STREQ(argv[i], "--trace-malloc=no"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-malloc=no"))
VG_(clo_trace_malloc) = False;
- else if (STREQ(argv[i], "--trace-sched=yes"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-sched=yes"))
VG_(clo_trace_sched) = True;
- else if (STREQ(argv[i], "--trace-sched=no"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-sched=no"))
VG_(clo_trace_sched) = False;
- else if (STREQ(argv[i], "--trace-pthread=none"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-pthread=none"))
VG_(clo_trace_pthread_level) = 0;
- else if (STREQ(argv[i], "--trace-pthread=some"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-pthread=some"))
VG_(clo_trace_pthread_level) = 1;
- else if (STREQ(argv[i], "--trace-pthread=all"))
+ else if (VG_CLO_STREQ(argv[i], "--trace-pthread=all"))
VG_(clo_trace_pthread_level) = 2;
- else if (STREQN(14, argv[i], "--weird-hacks="))
+ else if (VG_CLO_STREQN(14, argv[i], "--weird-hacks="))
VG_(clo_weird_hacks) = &argv[i][14];
- else if (STREQN(13, argv[i], "--stop-after="))
+ else if (VG_CLO_STREQN(13, argv[i], "--stop-after="))
VG_(clo_stop_after) = VG_(atoll)(&argv[i][13]);
- else if (STREQN(13, argv[i], "--dump-error="))
+ else if (VG_CLO_STREQN(13, argv[i], "--dump-error="))
VG_(clo_dump_error) = (Int)VG_(atoll)(&argv[i][13]);
- else if (STREQN(14, argv[i], "--num-callers=")) {
+ else if (VG_CLO_STREQN(14, argv[i], "--num-callers=")) {
/* Make sure it's sane. */
VG_(clo_backtrace_size) = (Int)VG_(atoll)(&argv[i][14]);
if (VG_(clo_backtrace_size) < 2)
@@ -1031,8 +1038,6 @@
}
# undef ISSPACE
-# undef STREQ
-# undef STREQN
if (VG_(clo_verbosity < 0))
VG_(clo_verbosity) = 0;
diff --git a/coregrind/vg_memory.c b/coregrind/vg_memory.c
index 64200ed..6ade642 100644
--- a/coregrind/vg_memory.c
+++ b/coregrind/vg_memory.c
@@ -293,612 +293,6 @@
}
/*--------------------------------------------------------------------*/
-/*--- Support for memory leak detectors ---*/
-/*--------------------------------------------------------------------*/
-
-/*------------------------------------------------------------*/
-/*--- Low-level address-space scanning, for the leak ---*/
-/*--- detector. ---*/
-/*------------------------------------------------------------*/
-
-static
-jmp_buf memscan_jmpbuf;
-
-
-static
-void vg_scan_all_valid_memory_sighandler ( Int sigNo )
-{
- __builtin_longjmp(memscan_jmpbuf, 1);
-}
-
-
-/* Safely (avoiding SIGSEGV / SIGBUS) scan the entire valid address
- space and pass the addresses and values of all addressible,
- defined, aligned words to notify_word. This is the basis for the
- leak detector. Returns the number of calls made to notify_word.
-
- Addresses are validated 3 ways. First we enquire whether (addr >>
- 16) denotes a 64k chunk in use, by asking is_valid_64k_chunk(). If
- so, we decide for ourselves whether each x86-level (4 K) page in
- the chunk is safe to inspect. If yes, we enquire with
- is_valid_address() whether or not each of the 1024 word-locations
- on the page is valid. Only if so are that address and its contents
- passed to notify_word.
-
- This is all to avoid duplication of this machinery between the
- memcheck and addrcheck skins.
-*/
-static
-UInt vg_scan_all_valid_memory ( Bool is_valid_64k_chunk ( UInt ),
- Bool is_valid_address ( Addr ),
- void (*notify_word)( Addr, UInt ) )
-{
- /* All volatile, because some gccs seem paranoid about longjmp(). */
- volatile Bool anyValid;
- volatile Addr pageBase, addr;
- volatile UInt res, numPages, page, primaryMapNo;
- volatile UInt page_first_word, nWordsNotified;
-
- vki_ksigaction sigbus_saved;
- vki_ksigaction sigbus_new;
- vki_ksigaction sigsegv_saved;
- vki_ksigaction sigsegv_new;
- vki_ksigset_t blockmask_saved;
- vki_ksigset_t unblockmask_new;
-
- /* Temporarily install a new sigsegv and sigbus handler, and make
- sure SIGBUS, SIGSEGV and SIGTERM are unblocked. (Perhaps the
- first two can never be blocked anyway?) */
-
- sigbus_new.ksa_handler = vg_scan_all_valid_memory_sighandler;
- sigbus_new.ksa_flags = VKI_SA_ONSTACK | VKI_SA_RESTART;
- sigbus_new.ksa_restorer = NULL;
- res = VG_(ksigemptyset)( &sigbus_new.ksa_mask );
- sk_assert(res == 0);
-
- sigsegv_new.ksa_handler = vg_scan_all_valid_memory_sighandler;
- sigsegv_new.ksa_flags = VKI_SA_ONSTACK | VKI_SA_RESTART;
- sigsegv_new.ksa_restorer = NULL;
- res = VG_(ksigemptyset)( &sigsegv_new.ksa_mask );
- sk_assert(res == 0+0);
-
- res = VG_(ksigemptyset)( &unblockmask_new );
- res |= VG_(ksigaddset)( &unblockmask_new, VKI_SIGBUS );
- res |= VG_(ksigaddset)( &unblockmask_new, VKI_SIGSEGV );
- res |= VG_(ksigaddset)( &unblockmask_new, VKI_SIGTERM );
- sk_assert(res == 0+0+0);
-
- res = VG_(ksigaction)( VKI_SIGBUS, &sigbus_new, &sigbus_saved );
- sk_assert(res == 0+0+0+0);
-
- res = VG_(ksigaction)( VKI_SIGSEGV, &sigsegv_new, &sigsegv_saved );
- sk_assert(res == 0+0+0+0+0);
-
- res = VG_(ksigprocmask)( VKI_SIG_UNBLOCK, &unblockmask_new, &blockmask_saved );
- sk_assert(res == 0+0+0+0+0+0);
-
- /* The signal handlers are installed. Actually do the memory scan. */
- numPages = 1 << (32-VKI_BYTES_PER_PAGE_BITS);
- sk_assert(numPages == 1048576);
- sk_assert(4096 == (1 << VKI_BYTES_PER_PAGE_BITS));
-
- nWordsNotified = 0;
-
- for (page = 0; page < numPages; page++) {
-
- /* Base address of this 4k page. */
- pageBase = page << VKI_BYTES_PER_PAGE_BITS;
-
- /* Skip if this page is in an unused 64k chunk. */
- primaryMapNo = pageBase >> 16;
- if (!is_valid_64k_chunk(primaryMapNo))
- continue;
-
- /* Next, establish whether or not we want to consider any
- locations on this page. We need to do so before actually
- prodding it, because prodding it when in fact it is not
- needed can cause a page fault which under some rare
- circumstances can cause the kernel to extend the stack
- segment all the way down to here, which is seriously bad.
- Hence: */
- anyValid = False;
- for (addr = pageBase; addr < pageBase+VKI_BYTES_PER_PAGE; addr += 4) {
- if (is_valid_address(addr)) {
- anyValid = True;
- break;
- }
- }
-
- if (!anyValid)
- continue; /* nothing interesting here .. move to the next page */
-
- /* Ok, we have to prod cautiously at the page and see if it
- explodes or not. */
- if (__builtin_setjmp(memscan_jmpbuf) == 0) {
- /* try this ... */
- page_first_word = * (volatile UInt*)pageBase;
- /* we get here if we didn't get a fault */
- /* Scan the page */
- for (addr = pageBase; addr < pageBase+VKI_BYTES_PER_PAGE; addr += 4) {
- if (is_valid_address(addr)) {
- nWordsNotified++;
- notify_word ( addr, *(UInt*)addr );
- }
- }
- } else {
- /* We get here if reading the first word of the page caused a
- fault, which in turn caused the signal handler to longjmp.
- Ignore this page. */
- if (0)
- VG_(printf)(
- "vg_scan_all_valid_memory_sighandler: ignoring page at %p\n",
- (void*)pageBase
- );
- }
- }
-
- /* Restore signal state to whatever it was before. */
- res = VG_(ksigaction)( VKI_SIGBUS, &sigbus_saved, NULL );
- sk_assert(res == 0 +0);
-
- res = VG_(ksigaction)( VKI_SIGSEGV, &sigsegv_saved, NULL );
- sk_assert(res == 0 +0 +0);
-
- res = VG_(ksigprocmask)( VKI_SIG_SETMASK, &blockmask_saved, NULL );
- sk_assert(res == 0 +0 +0 +0);
-
- return nWordsNotified;
-}
-
-
-/*------------------------------------------------------------*/
-/*--- Detecting leaked (unreachable) malloc'd blocks. ---*/
-/*------------------------------------------------------------*/
-
-/* A block is either
- -- Proper-ly reached; a pointer to its start has been found
- -- Interior-ly reached; only an interior pointer to it has been found
- -- Unreached; so far, no pointers to any part of it have been found.
-*/
-typedef
- enum { Unreached, Interior, Proper }
- Reachedness;
-
-/* A block record, used for generating err msgs. */
-typedef
- struct _LossRecord {
- struct _LossRecord* next;
- /* Where these lost blocks were allocated. */
- ExeContext* allocated_at;
- /* Their reachability. */
- Reachedness loss_mode;
- /* Number of blocks and total # bytes involved. */
- UInt total_bytes;
- UInt num_blocks;
- }
- LossRecord;
-
-
-/* Find the i such that ptr points at or inside the block described by
- shadows[i]. Return -1 if none found. This assumes that shadows[]
- has been sorted on the ->data field. */
-
-#ifdef VG_DEBUG_LEAKCHECK
-/* Used to sanity-check the fast binary-search mechanism. */
-static
-Int find_shadow_for_OLD ( Addr ptr,
- ShadowChunk** shadows,
- Int n_shadows )
-
-{
- Int i;
- Addr a_lo, a_hi;
- PROF_EVENT(70);
- for (i = 0; i < n_shadows; i++) {
- PROF_EVENT(71);
- a_lo = shadows[i]->data;
- a_hi = ((Addr)shadows[i]->data) + shadows[i]->size - 1;
- if (a_lo <= ptr && ptr <= a_hi)
- return i;
- }
- return -1;
-}
-#endif
-
-
-static
-Int find_shadow_for ( Addr ptr,
- ShadowChunk** shadows,
- Int n_shadows )
-{
- Addr a_mid_lo, a_mid_hi;
- Int lo, mid, hi, retVal;
- /* VG_(printf)("find shadow for %p = ", ptr); */
- retVal = -1;
- lo = 0;
- hi = n_shadows-1;
- while (True) {
- /* invariant: current unsearched space is from lo to hi,
- inclusive. */
- if (lo > hi) break; /* not found */
-
- mid = (lo + hi) / 2;
- a_mid_lo = shadows[mid]->data;
- a_mid_hi = ((Addr)shadows[mid]->data) + shadows[mid]->size - 1;
-
- if (ptr < a_mid_lo) {
- hi = mid-1;
- continue;
- }
- if (ptr > a_mid_hi) {
- lo = mid+1;
- continue;
- }
- sk_assert(ptr >= a_mid_lo && ptr <= a_mid_hi);
- retVal = mid;
- break;
- }
-
-# ifdef VG_DEBUG_LEAKCHECK
- vg_assert(retVal == find_shadow_for_OLD ( ptr, shadows, n_shadows ));
-# endif
- /* VG_(printf)("%d\n", retVal); */
- return retVal;
-}
-
-
-
-static
-void sort_malloc_shadows ( ShadowChunk** shadows, UInt n_shadows )
-{
- Int incs[14] = { 1, 4, 13, 40, 121, 364, 1093, 3280,
- 9841, 29524, 88573, 265720,
- 797161, 2391484 };
- Int lo = 0;
- Int hi = n_shadows-1;
- Int i, j, h, bigN, hp;
- ShadowChunk* v;
-
- bigN = hi - lo + 1; if (bigN < 2) return;
- hp = 0; while (hp < 14 && incs[hp] < bigN) hp++; hp--;
- vg_assert(0 <= hp && hp < 14);
-
- for (; hp >= 0; hp--) {
- h = incs[hp];
- i = lo + h;
- while (1) {
- if (i > hi) break;
- v = shadows[i];
- j = i;
- while (shadows[j-h]->data > v->data) {
- shadows[j] = shadows[j-h];
- j = j - h;
- if (j <= (lo + h - 1)) break;
- }
- shadows[j] = v;
- i++;
- }
- }
-}
-
-
-/* Globals, for the callback used by VG_(detect_memory_leaks). */
-
-static ShadowChunk** vglc_shadows;
-static Int vglc_n_shadows;
-static Reachedness* vglc_reachedness;
-static Addr vglc_min_mallocd_addr;
-static Addr vglc_max_mallocd_addr;
-
-static
-void vg_detect_memory_leaks_notify_addr ( Addr a, UInt word_at_a )
-{
- Int sh_no;
- Addr ptr;
-
- /* Rule out some known causes of bogus pointers. Mostly these do
- not cause much trouble because only a few false pointers can
- ever lurk in these places. This mainly stops it reporting that
- blocks are still reachable in stupid test programs like this
-
- int main (void) { char* a = malloc(100); return 0; }
-
- which people seem inordinately fond of writing, for some reason.
-
- Note that this is a complete kludge. It would be better to
- ignore any addresses corresponding to valgrind.so's .bss and
- .data segments, but I cannot think of a reliable way to identify
- where the .bss segment has been put. If you can, drop me a
- line.
- */
- if (VG_(within_stack)(a)) return;
- if (VG_(within_m_state_static)(a)) return;
- if (a == (Addr)(&vglc_min_mallocd_addr)) return;
- if (a == (Addr)(&vglc_max_mallocd_addr)) return;
-
- /* OK, let's get on and do something Useful for a change. */
-
- ptr = (Addr)word_at_a;
- if (ptr >= vglc_min_mallocd_addr && ptr <= vglc_max_mallocd_addr) {
- /* Might be legitimate; we'll have to investigate further. */
- sh_no = find_shadow_for ( ptr, vglc_shadows, vglc_n_shadows );
- if (sh_no != -1) {
- /* Found a block at/into which ptr points. */
- sk_assert(sh_no >= 0 && sh_no < vglc_n_shadows);
- sk_assert(ptr < vglc_shadows[sh_no]->data
- + vglc_shadows[sh_no]->size);
- /* Decide whether Proper-ly or Interior-ly reached. */
- if (ptr == vglc_shadows[sh_no]->data) {
- if (0) VG_(printf)("pointer at %p to %p\n", a, word_at_a );
- vglc_reachedness[sh_no] = Proper;
- } else {
- if (vglc_reachedness[sh_no] == Unreached)
- vglc_reachedness[sh_no] = Interior;
- }
- }
- }
-}
-
-
-/* Stuff for figuring out if a leak report should be suppressed. */
-static
-Bool leaksupp_matches_callers(Supp* su, Char caller_obj[][M_VG_ERRTXT],
- Char caller_fun[][M_VG_ERRTXT])
-{
- Int i;
-
- for (i = 0; su->caller[i] != NULL; i++) {
- switch (su->caller_ty[i]) {
- case ObjName: if (VG_(string_match)(su->caller[i],
- caller_obj[i])) break;
- return False;
- case FunName: if (VG_(string_match)(su->caller[i],
- caller_fun[i])) break;
- return False;
- default: VG_(skin_panic)("leaksupp_matches_callers");
- }
- }
-
- /* If we reach here, it's a match */
- return True;
-}
-
-
-/* Does a leak record match a suppression? ie is this a suppressible
- leak? Tries to minimise the number of symbol searches since they
- are expensive. Copy n paste (more or less) of
- is_suppressible_error. We have to pass in the actual value of
- LeakSupp for comparison since this is the core and LeakSupp is a
- skin-specific value. */
-static
-Bool is_suppressible_leak ( ExeContext* allocated_at,
- UInt /*CoreErrorKind*/ leakSupp )
-{
- Int i;
-
- Char caller_obj[VG_N_SUPP_CALLERS][M_VG_ERRTXT];
- Char caller_fun[VG_N_SUPP_CALLERS][M_VG_ERRTXT];
-
- Supp* su;
-
- /* get_objname_fnname() writes the function name and object name if
- it finds them in the debug info. so the strings in the suppression
- file should match these.
- */
-
- /* Initialise these strs so they are always safe to compare, even
- if get_objname_fnname doesn't write anything to them. */
- for (i = 0; i < VG_N_SUPP_CALLERS; i++)
- caller_obj[i][0] = caller_fun[i][0] = 0;
-
- for (i = 0; i < VG_N_SUPP_CALLERS && i < VG_(clo_backtrace_size); i++) {
- VG_(get_objname_fnname) ( allocated_at->eips[i],
- caller_obj[i], M_VG_ERRTXT,
- caller_fun[i], M_VG_ERRTXT );
- }
-
- /* See if the leak record any suppression. */
- for (su = VG_(get_suppressions)(); su != NULL; su = su->next) {
- if (VG_(get_supp_kind)(su) == (CoreErrorKind)leakSupp
- && leaksupp_matches_callers(su, caller_obj, caller_fun)) {
- return True;
- }
- }
- return False; /* no matches */
-}
-
-/* Top level entry point to leak detector. Call here, passing in
- suitable address-validating functions (see comment at top of
- vg_scan_all_valid_memory above). All this is to avoid duplication
- of the leak-detection code for the Memcheck and Addrcheck skins.
- Also pass in a skin-specific function to extract the .where field
- for allocated blocks, an indication of the resolution wanted for
- distinguishing different allocation points, and whether or not
- reachable blocks should be shown.
-*/
-void VG_(generic_detect_memory_leaks) (
- Bool is_valid_64k_chunk ( UInt ),
- Bool is_valid_address ( Addr ),
- ExeContext* get_where ( ShadowChunk* ),
- VgRes leak_resolution,
- Bool show_reachable,
- UInt /*CoreErrorKind*/ leakSupp
-)
-{
- Int i;
- Int blocks_leaked, bytes_leaked;
- Int blocks_dubious, bytes_dubious;
- Int blocks_reachable, bytes_reachable;
- Int blocks_suppressed, bytes_suppressed;
- Int n_lossrecords;
- UInt bytes_notified;
-
- LossRecord* errlist;
- LossRecord* p;
-
- /* VG_(get_malloc_shadows) allocates storage for shadows */
- vglc_shadows = VG_(get_malloc_shadows)( &vglc_n_shadows );
- if (vglc_n_shadows == 0) {
- sk_assert(vglc_shadows == NULL);
- VG_(message)(Vg_UserMsg,
- "No malloc'd blocks -- no leaks are possible.");
- return;
- }
-
- VG_(message)(Vg_UserMsg,
- "searching for pointers to %d not-freed blocks.",
- vglc_n_shadows );
- sort_malloc_shadows ( vglc_shadows, vglc_n_shadows );
-
- /* Sanity check; assert that the blocks are now in order and that
- they don't overlap. */
- for (i = 0; i < vglc_n_shadows-1; i++) {
- sk_assert( ((Addr)vglc_shadows[i]->data)
- < ((Addr)vglc_shadows[i+1]->data) );
- sk_assert( ((Addr)vglc_shadows[i]->data) + vglc_shadows[i]->size
- < ((Addr)vglc_shadows[i+1]->data) );
- }
-
- vglc_min_mallocd_addr = ((Addr)vglc_shadows[0]->data);
- vglc_max_mallocd_addr = ((Addr)vglc_shadows[vglc_n_shadows-1]->data)
- + vglc_shadows[vglc_n_shadows-1]->size - 1;
-
- vglc_reachedness
- = VG_(malloc)( vglc_n_shadows * sizeof(Reachedness) );
- for (i = 0; i < vglc_n_shadows; i++)
- vglc_reachedness[i] = Unreached;
-
- /* Do the scan of memory. */
- bytes_notified
- = VKI_BYTES_PER_WORD
- * vg_scan_all_valid_memory (
- is_valid_64k_chunk,
- is_valid_address,
- &vg_detect_memory_leaks_notify_addr
- );
-
- VG_(message)(Vg_UserMsg, "checked %d bytes.", bytes_notified);
-
- /* Common up the lost blocks so we can print sensible error
- messages. */
-
- n_lossrecords = 0;
- errlist = NULL;
- for (i = 0; i < vglc_n_shadows; i++) {
-
- /* 'where' stored in 'skin_extra' field; extract using function
- supplied by the calling skin. */
- ExeContext* where = get_where ( vglc_shadows[i] );
-
- for (p = errlist; p != NULL; p = p->next) {
- if (p->loss_mode == vglc_reachedness[i]
- && VG_(eq_ExeContext) ( leak_resolution,
- p->allocated_at,
- where) ) {
- break;
- }
- }
- if (p != NULL) {
- p->num_blocks ++;
- p->total_bytes += vglc_shadows[i]->size;
- } else {
- n_lossrecords ++;
- p = VG_(malloc)(sizeof(LossRecord));
- p->loss_mode = vglc_reachedness[i];
- p->allocated_at = where;
- p->total_bytes = vglc_shadows[i]->size;
- p->num_blocks = 1;
- p->next = errlist;
- errlist = p;
- }
- }
-
- /* Print out the commoned-up blocks and collect summary stats. */
-
- blocks_leaked = bytes_leaked = 0;
- blocks_dubious = bytes_dubious = 0;
- blocks_reachable = bytes_reachable = 0;
- blocks_suppressed = bytes_suppressed = 0;
-
- for (i = 0; i < n_lossrecords; i++) {
- LossRecord* p_min = NULL;
- UInt n_min = 0xFFFFFFFF;
- for (p = errlist; p != NULL; p = p->next) {
- if (p->num_blocks > 0 && p->total_bytes < n_min) {
- n_min = p->total_bytes;
- p_min = p;
- }
- }
- sk_assert(p_min != NULL);
-
- if (is_suppressible_leak(p_min->allocated_at, leakSupp)) {
- blocks_suppressed += p_min->num_blocks;
- bytes_suppressed += p_min->total_bytes;
- p_min->num_blocks = 0;
- continue;
- } else {
- switch (p_min->loss_mode) {
- case Unreached:
- blocks_leaked += p_min->num_blocks;
- bytes_leaked += p_min->total_bytes;
- break;
- case Interior:
- blocks_dubious += p_min->num_blocks;
- bytes_dubious += p_min->total_bytes;
- break;
- case Proper:
- blocks_reachable += p_min->num_blocks;
- bytes_reachable += p_min->total_bytes;
- break;
- default:
- VG_(core_panic)("generic_detect_memory_leaks: "
- "unknown loss mode");
- }
- }
-
- if ( (!show_reachable) && (p_min->loss_mode == Proper)) {
- p_min->num_blocks = 0;
- continue;
- }
-
- VG_(message)(Vg_UserMsg, "");
- VG_(message)(
- Vg_UserMsg,
- "%d bytes in %d blocks are %s in loss record %d of %d",
- p_min->total_bytes, p_min->num_blocks,
- p_min->loss_mode==Unreached ? "definitely lost" :
- (p_min->loss_mode==Interior ? "possibly lost"
- : "still reachable"),
- i+1, n_lossrecords
- );
- VG_(pp_ExeContext)(p_min->allocated_at);
- p_min->num_blocks = 0;
- }
-
- VG_(message)(Vg_UserMsg, "");
- VG_(message)(Vg_UserMsg, "LEAK SUMMARY:");
- VG_(message)(Vg_UserMsg, " definitely lost: %d bytes in %d blocks.",
- bytes_leaked, blocks_leaked );
- VG_(message)(Vg_UserMsg, " possibly lost: %d bytes in %d blocks.",
- bytes_dubious, blocks_dubious );
- VG_(message)(Vg_UserMsg, " still reachable: %d bytes in %d blocks.",
- bytes_reachable, blocks_reachable );
- VG_(message)(Vg_UserMsg, " suppressed: %d bytes in %d blocks.",
- bytes_suppressed, blocks_suppressed );
- if (!show_reachable) {
- VG_(message)(Vg_UserMsg,
- "Reachable blocks (those to which a pointer was found) are not shown.");
- VG_(message)(Vg_UserMsg,
- "To see them, rerun with: --show-reachable=yes");
- }
- VG_(message)(Vg_UserMsg, "");
-
- VG_(free) ( vglc_shadows );
- VG_(free) ( vglc_reachedness );
-}
-
-
-/*--------------------------------------------------------------------*/
/*--- end vg_memory.c ---*/
/*--------------------------------------------------------------------*/
diff --git a/coregrind/vg_scheduler.c b/coregrind/vg_scheduler.c
index 90b2ac6..72f33d6 100644
--- a/coregrind/vg_scheduler.c
+++ b/coregrind/vg_scheduler.c
@@ -197,7 +197,7 @@
if none do. A small complication is dealing with any currently
VG_(baseBlock)-resident thread.
*/
-ThreadId VG_(any_matching_thread_stack)
+ThreadId VG_(first_matching_thread_stack)
( Bool (*p) ( Addr stack_min, Addr stack_max ))
{
ThreadId tid, tid_to_skip;
diff --git a/coregrind/vg_symtab2.c b/coregrind/vg_symtab2.c
index c36d32b..5f64156 100644
--- a/coregrind/vg_symtab2.c
+++ b/coregrind/vg_symtab2.c
@@ -2272,10 +2272,9 @@
n = 0;
if (i > 0)
eip--; /* point to calling line */
- know_fnname = get_fnname (True, eip, buf_fn, M_VG_ERRTXT, True, False);
+ know_fnname = VG_(get_fnname) (eip, buf_fn, M_VG_ERRTXT);
know_objname = VG_(get_objname)(eip, buf_obj, M_VG_ERRTXT);
- know_srcloc = VG_(get_filename_linenum)(eip,
- buf_srcloc, M_VG_ERRTXT,
+ know_srcloc = VG_(get_filename_linenum)(eip, buf_srcloc, M_VG_ERRTXT,
&lineno);
if (i == 0) APPEND(" at ") else APPEND(" by ");