Register promotion support for 64-bit targets
Not sufficiently tested for 64-bit targets, but should be
fairly close.
A significant amount of refactoring could still be done (in
later CLs).
This change leaves the vmap scheme untouched. As a result,
if a vreg is promoted to both a 32-bit view and the low half
of a 64-bit view, the two views must share the same physical
register. We may relax this restriction later to allow more
flexibility for 32-bit ARM.
For example, if v4, v5, v4/v5 and v5/v6 are all hot enough to
promote, we'd end up with something like:
v4 (as an int) -> r10
v4/v5 (as a long) -> r10
v5 (as an int) -> r11
v5/v6 (as a long) -> r11
Fix a couple of ARM64 bugs on the way...
Change-Id: I6a152b9c164d9f1a053622266e165428045362f3
diff --git a/runtime/vmap_table.h b/runtime/vmap_table.h
index 9821753..df5cd80 100644
--- a/runtime/vmap_table.h
+++ b/runtime/vmap_table.h
@@ -64,6 +64,12 @@
const uint8_t* table = table_;
uint16_t adjusted_vreg = vreg + kEntryAdjustment;
size_t end = DecodeUnsignedLeb128(&table);
+ bool high_reg = (kind == kLongHiVReg) || (kind == kDoubleHiVReg);
+ bool target64 = (kRuntimeISA == kArm64) || (kRuntimeISA == kX86_64);
+ if (target64 && high_reg) {
+ // Wide promoted registers are associated with the sreg of the low portion.
+ adjusted_vreg--;
+ }
for (size_t i = 0; i < end; ++i) {
    // Stop if we find what we are looking for.
uint16_t adjusted_entry = DecodeUnsignedLeb128(&table);