Fix a double unmap issue in MemMap::UnMapAtEnd().

MemMap::UnMapAtEnd() unmaps the unused tail of the alloc space during
a zygote fork, but it can cause the same tail region of memory to be
unmapped twice: once in UnMapAtEnd() and once more in ~MemMap() during
shutdown.

I encountered a crash because of this issue in SpaceTest.ZygoteTest
(which happens to reproduce only on a device in a branch with the
rosalloc change, probably due to some randomness in the mmap address
choice, etc.)

Here's what happens:

1) CreateZygoteSpace() calls UnMapAtEnd() to unmap the unused tail of
the alloc space.

2) In the same function, after UnMapAtEnd(), several libc new/malloc
allocations happen, including that of a new DlMallocSpace object. This
happens to cause libc to map a new memory region that overlaps with
the region just unmapped in 1) and to place those allocations in it
(that is, the new DlMallocSpace object ends up in that memory
region.) This second DlMallocSpace becomes the new alloc space after
the zygote fork; the first DlMallocSpace becomes the zygote
space. Note that libc maps that memory region before the underlying
memory of the second DlMallocSpace is mapped.

3) During a Runtime shutdown (which happens once for a normal VM
shutdown or at the end of each test run), all the spaces, including
the two DlMallocSpaces, get destructed one by one. When the first
DlMallocSpace gets destructed (note the space list is sorted by
address), its super destructor ~MemMap() unmaps the original memory
region, which was already partially unmapped in 1). This memory
region now includes the libc memory region that contains the second
DlMallocSpace object.

4) When the second DlMallocSpace object in turn gets destructed, the
memory in which the object resides has already been unmapped in 3),
and we get a SIGSEGV.
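
For illustration only (this is not the ART code), here is a
standalone sketch that reproduces the same sequence deterministically,
with a MAP_FIXED mmap standing in for libc's mapping in step 2 (error
checking omitted for brevity):

  #include <sys/mman.h>
  #include <unistd.h>
  #include <cstdio>

  int main() {
    const size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
    // Step 0: map a two-page region (stands in for the alloc space).
    char* base = static_cast<char*>(mmap(NULL, 2 * page,
        PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    // Step 1: unmap the unused tail, as UnMapAtEnd() does.
    munmap(base + page, page);
    // Step 2: a new mapping lands on the freed tail. MAP_FIXED makes
    // the address reuse deterministic here; in the real scenario
    // libc's mmap happened to land on it.
    char* obj = static_cast<char*>(mmap(base + page, page,
        PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0));
    obj[0] = 42;
    // Step 3: like ~MemMap() of the first space, unmap the original
    // two-page range, taking the unrelated step-2 mapping with it.
    munmap(base, 2 * page);
    // Step 4: touching the step-2 mapping now faults.
    printf("%d\n", obj[0]);  // SIGSEGV
    return 0;
  }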

This change replaces UnMapAtEnd() with a new function, RemapAtEnd(),
which combines the unmapping and remapping of the tail region to
achieve the following two things:

1) It fixes the double unmap issue by updating the base_size_ member
variable to exclude the already-unmapped tail region, so that
~MemMap() will not unmap the tail region again.

2) It narrows the non-atomic unmap/map window in
CreateZygoteSpace(). Previously, once the unused tail of the original
alloc space's memory region was unmapped, something like libc could
come along and take that region before it was mapped again for the
new alloc space. That would leave a hole between the old alloc (new
zygote) space and the new alloc space, making the two spaces
non-contiguous. RemapAtEnd() eliminates new/malloc allocations
between the unmap and the map calls. Note this still isn't perfect,
though, as other threads could in theory take the memory region
between the munmap and the mmap calls.
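
Roughly, RemapAtEnd() has the following shape (a simplified sketch,
not the exact code in this change; it assumes an anonymous mapping, a
page-aligned new_end, and a MemMap constructor taking
name/begin/size/base_begin/base_size/prot):

  MemMap* MemMap::RemapAtEnd(byte* new_end, const char* tail_name,
                             int tail_prot, std::string* error_msg) {
    byte* old_end = begin_ + size_;
    size_t tail_size = old_end - new_end;
    // Shrink this map first so that base_size_ no longer covers the
    // tail; this is what keeps ~MemMap() from unmapping it again.
    size_ = new_end - begin_;
    base_size_ = new_end - reinterpret_cast<byte*>(base_begin_);
    // Unmap the tail and immediately remap it at the same address.
    // No new/malloc happens in between, but another thread could in
    // theory still grab the region here.
    if (munmap(new_end, tail_size) == -1) {
      *error_msg = "munmap failed";
      return NULL;
    }
    byte* actual = static_cast<byte*>(mmap(new_end, tail_size,
        tail_prot, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0));
    if (actual == MAP_FAILED) {
      *error_msg = "mmap failed";
      return NULL;
    }
    return new MemMap(tail_name, actual, tail_size, actual, tail_size,
                      tail_prot);
  }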

Added tests.

Change-Id: I43bc3a33a2cbfc7a092890312e34aa5285384589
diff --git a/runtime/mem_map_test.cc b/runtime/mem_map_test.cc
index 09de320..cf2c9d0 100644
--- a/runtime/mem_map_test.cc
+++ b/runtime/mem_map_test.cc
@@ -21,7 +21,16 @@
 
 namespace art {
 
-class MemMapTest : public testing::Test {};
+class MemMapTest : public testing::Test {
+ public:
+  // Expose MemMap's private base mapping fields to the tests.
+  byte* BaseBegin(MemMap* mem_map) {
+    return reinterpret_cast<byte*>(mem_map->base_begin_);
+  }
+  size_t BaseSize(MemMap* mem_map) {
+    return mem_map->base_size_;
+  }
+};
 
 TEST_F(MemMapTest, MapAnonymousEmpty) {
   std::string error_msg;
@@ -34,4 +43,59 @@
   ASSERT_TRUE(error_msg.empty());
 }
 
+TEST_F(MemMapTest, RemapAtEnd) {
+  std::string error_msg;
+  // Cast the page size to size_t.
+  const size_t page_size = static_cast<size_t>(kPageSize);
+  // Map a two-page memory region.
+  MemMap* m0 = MemMap::MapAnonymous("MemMapTest_RemapAtEndTest_map0",
+                                    NULL,
+                                    2 * page_size,
+                                    PROT_READ | PROT_WRITE,
+                                    &error_msg);
+  // Check its state and write to it.
+  ASSERT_TRUE(m0 != NULL) << error_msg;
+  byte* base0 = m0->Begin();
+  ASSERT_TRUE(base0 != NULL);
+  size_t size0 = m0->Size();
+  EXPECT_EQ(m0->Size(), 2 * page_size);
+  EXPECT_EQ(BaseBegin(m0), base0);
+  EXPECT_EQ(BaseSize(m0), size0);
+  memset(base0, 42, 2 * page_size);
+  // Remap the latter half into a second MemMap.
+  MemMap* m1 = m0->RemapAtEnd(base0 + page_size,
+                              "MemMapTest_RemapAtEndTest_map1",
+                              PROT_READ | PROT_WRITE,
+                              &error_msg);
+  // Check the states of the two maps.
+  ASSERT_TRUE(m1 != NULL) << error_msg;
+  EXPECT_EQ(m0->Begin(), base0);
+  EXPECT_EQ(m0->Size(), page_size);
+  EXPECT_EQ(BaseBegin(m0), base0);
+  EXPECT_EQ(BaseSize(m0), page_size);
+  byte* base1 = m1->Begin();
+  size_t size1 = m1->Size();
+  EXPECT_EQ(base1, base0 + page_size);
+  EXPECT_EQ(size1, page_size);
+  EXPECT_EQ(BaseBegin(m1), base1);
+  EXPECT_EQ(BaseSize(m1), size1);
+  // Write to the second region.
+  memset(base1, 43, page_size);
+  // Check the contents of the two regions.
+  for (size_t i = 0; i < page_size; ++i) {
+    EXPECT_EQ(base0[i], 42);
+  }
+  for (size_t i = 0; i < page_size; ++i) {
+    EXPECT_EQ(base1[i], 43);
+  }
+  // Unmap the first region.
+  delete m0;
+  // Make sure the second region is still accessible after the first
+  // region is unmapped.
+  for (size_t i = 0; i < page_size; ++i) {
+    EXPECT_EQ(base1[i], 43);
+  }
+  delete m1;
+}
+
 }  // namespace art