Suspend check reworking (ready for review)

I hate burning a register, but the cost of suspend checks was just too
high in our current environment.  There are things we can do in future
releases to avoid the register burn, but for now the tradeoff is
worthwhile.

The general strategy is to reserve r4 as a suspend check counter.
Rather than poll the thread suspendPending counter, we simply
decrement the counter register; when it rolls over to zero, we do the
full check.  For now I'm only using the counter scheme on backwards
branches - we always poll on returns (which are already heavyweight
enough that the extra cost isn't especially noticeable).
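
A minimal C sketch of the counter scheme (OnBackwardBranch() and
CheckSuspend() are hypothetical stand-ins for illustration; the real
version is emitted as ARM code that decrements rSUSPEND directly):

    #include <stdio.h>

    #define SUSPEND_CHECK_INTERVAL 1000  /* matches asm_support.h */

    /* Hypothetical stand-in for the full thread suspend-point poll. */
    static void CheckSuspend(void) {
      printf("full suspend check\n");
    }

    static int suspend_count = SUSPEND_CHECK_INTERVAL;  /* models rSUSPEND */

    /* Run on every backward branch: a bare decrement in the common case;
       the expensive poll fires once per SUSPEND_CHECK_INTERVAL trips. */
    static void OnBackwardBranch(void) {
      if (--suspend_count == 0) {
        suspend_count = SUSPEND_CHECK_INTERVAL;  /* reload the interval */
        CheckSuspend();
      }
    }

    int main(void) {
      for (int i = 0; i < 3000; i++) {
        OnBackwardBranch();  /* fires the full check three times */
      }
      return 0;
    }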

I've also added an optimization hint to the MIR in case we have enough
time to test and enable the existing loop analysis code that omits the
suspend check on smallish counted loops.
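
For illustration only (the struct, field, and threshold names below are
invented, not the actual MIR definitions), this is roughly the kind of
decision the hint would enable at code-generation time:

    #include <stdio.h>

    /* Hypothetical loop hint; not the real compiler data structures. */
    struct LoopInfo {
      int is_counted;   /* trip count is a known compile-time constant */
      int trip_count;
    };

    #define SMALL_LOOP_MAX_TRIPS 100  /* assumed threshold, illustrative only */

    /* Decide whether a loop's backward branch needs a suspend check. */
    static int NeedsSuspendCheck(const struct LoopInfo* loop) {
      return !(loop->is_counted && loop->trip_count <= SMALL_LOOP_MAX_TRIPS);
    }

    int main(void) {
      struct LoopInfo small = {1, 16};  /* counted, 16 trips: check omitted */
      struct LoopInfo open = {0, 0};    /* unbounded loop: check required */
      printf("%d %d\n", NeedsSuspendCheck(&small), NeedsSuspendCheck(&open));
      return 0;
    }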

Change-Id: I82d8bad5882a4cf2ccff590942e2d1520d58969d
diff --git a/src/asm_support.h b/src/asm_support.h
index 6eda4bf..097ab7a 100644
--- a/src/asm_support.h
+++ b/src/asm_support.h
@@ -3,9 +3,16 @@
 #ifndef ART_SRC_ASM_SUPPORT_H_
 #define ART_SRC_ASM_SUPPORT_H_
 
+#if defined(__arm__)
+#define rSUSPEND r4
+#define rSELF r9
+#define rLR r14
+#define SUSPEND_CHECK_INTERVAL (1000)
+#endif
+
 #if defined(__i386__)
 // Offset of field Thread::self_ verified in InitCpu
-#define THREAD_SELF_OFFSET 0x161
+#define THREAD_SELF_OFFSET 0x165
 #endif
 
 #endif  // ART_SRC_ASM_SUPPORT_H_