[PATCH] mm: Ensure proper alignment for node_remap_start_pfn

While reserving KVA for the lmem_maps of a node, we have to make sure that
node_remap_start_pfn[] is aligned to a proper pmd boundary.
(node_remap_start_pfn[] gets its value from node_end_pfn[])
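For illustration, here is a minimal user-space sketch of the arithmetic; it is
not kernel code.  It assumes (per the note above) that the remap start pfn is
derived as node_end_pfn[nid] - size, and the PTRS_PER_PTE value used is the
i386 non-PAE one, chosen only for the example:

#include <assert.h>
#include <stdio.h>

#define PTRS_PER_PTE 1024UL	/* 4KB ptes per 4MB pmd, i386 without PAE */

/*
 * Grow "size" by the misalignment of end_pfn so that end_pfn - size
 * falls on a pmd boundary (mirrors the adjustment in the hunk below).
 */
static unsigned long pad_reserve_size(unsigned long end_pfn, unsigned long size)
{
	if (end_pfn & (PTRS_PER_PTE - 1))
		size += end_pfn & (PTRS_PER_PTE - 1);
	return size;
}

int main(void)
{
	unsigned long end_pfn = 0x1f4d3;	/* hypothetical, not pmd aligned */
	unsigned long size = 4 * PTRS_PER_PTE;	/* a whole number of pmds */

	size = pad_reserve_size(end_pfn, size);
	/* the start of the remapped area is now pmd aligned */
	assert(((end_pfn - size) & (PTRS_PER_PTE - 1)) == 0);
	printf("start pfn 0x%lx is pmd aligned\n", end_pfn - size);
	return 0;
}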

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/arch/i386/mm/discontig.c b/arch/i386/mm/discontig.c
index b358f07..c369a8b 100644
--- a/arch/i386/mm/discontig.c
+++ b/arch/i386/mm/discontig.c
@@ -243,6 +243,14 @@
 		/* now the roundup is correct, convert to PAGE_SIZE pages */
 		size = size * PTRS_PER_PTE;
 
+		if (node_end_pfn[nid] & (PTRS_PER_PTE-1)) {
+			/*
+			 * Adjust size if node_end_pfn is not on a proper
+			 * pmd boundary. remap_numa_kva will barf otherwise.
+			 */
+			size += node_end_pfn[nid] & (PTRS_PER_PTE-1);
+		}
+
 		/*
 		 * Validate the region we are allocating only contains valid
 		 * pages.