JIT: Trace selection tuning to reduce number of spurious "hot" traces.

This change trades a smallish adverse performance impact on programs
with flat execution profiles (for example, dacapo's xalan) for significant
reductions in useless translations.  For dacapo, performance dropped
by about 2% in return for a 40% reduction in memory usage (300 Kbytes).
No significant performance drop in loopy profiles, but occasionally good
memory savings.  BenchmarkPi performance unchanged, but memory usage
after 4 runs went from 120 Kbytes to 55 Kbytes.

This is still not ideal (and probably will never be).  For programs with
flat execution profiles we do best with loose hotness detection - try to
get as much of the working set compiled as quickly as possible.  However,
too loose and we end up translating a lot of little-used UI code.
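
For reference, a minimal sketch of the kind of counter-based hotness
check this tuning adjusts, assuming a simple per-trace-head counter
compared against a threshold.  The names and constants below are
illustrative only, not the actual symbols or values in this codebase:

    #include <stdio.h>

    #define HOTNESS_THRESHOLD 40      /* illustrative value, not the real tuning constant */
    #define MAX_TRACE_HEADS   1024    /* illustrative table size */

    /* Hypothetical per-trace-head counters: bump on every visit to a
     * candidate trace head, and request a translation once the counter
     * crosses the threshold.  Raising the threshold ("tight" selection)
     * filters out rarely-revisited code at the cost of compiling hot
     * code later; lowering it ("loose" selection) compiles the working
     * set sooner but also translates little-used paths such as UI code. */
    static int hotness[MAX_TRACE_HEADS];

    static int shouldCompileTrace(int traceHeadId)
    {
        if (++hotness[traceHeadId] >= HOTNESS_THRESHOLD) {
            hotness[traceHeadId] = 0; /* reset after triggering a request */
            return 1;                 /* hand the trace head to the JIT */
        }
        return 0;                     /* keep interpreting for now */
    }

    int main(void)
    {
        /* Simulate 100 visits to one trace head; with a threshold of 40
         * it triggers a translation request on the 40th and 80th visit. */
        int i, compiles = 0;
        for (i = 0; i < 100; i++)
            compiles += shouldCompileTrace(7);
        printf("translation requests: %d\n", compiles);
        return 0;
    }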

My current inclination is to err towards tight standards for the trace
JIT and defer better performance for flat profiles to a future world in
which we analyze profile data across runs - perhaps in conjunction with
a method JIT.

Change-Id: If1b6e940ca7799acd6266e47175dae644269bc87
2 files changed