ARC: fix page address calculation if PAGE_OFFSET != LINUX_LINK_BASE

We used to calculate the page address differently in 2 cases:

1. In virt_to_page(x) we do
 --->8---
 mem_map + ((x - CONFIG_LINUX_LINK_BASE) >> PAGE_SHIFT)
 --->8---

2. In pte_page(x) we do
 --->8---
 mem_map + ((pte_val(x) - PAGE_OFFSET) >> PAGE_SHIFT)
 --->8---

That leads to problems when PAGE_OFFSET != CONFIG_LINUX_LINK_BASE - a
different page gets selected depending on where and how the page
address is calculated.
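
For illustration, here is a minimal user-space sketch of that index
arithmetic; the base addresses below are made-up example values, not
the real ARC configuration:

 --->8---
 #include <stdio.h>

 #define PAGE_SHIFT		12
 #define LINUX_LINK_BASE	0x80000000UL	/* assumed example value */
 #define PAGE_OFFSET		0x90000000UL	/* assumed example value */

 int main(void)
 {
 	unsigned long addr = 0x90001000UL;	/* same value fed to both formulas */

 	/* index as computed by virt_to_page() */
 	unsigned long idx1 = (addr - LINUX_LINK_BASE) >> PAGE_SHIFT;
 	/* index as computed by the old pte_page() */
 	unsigned long idx2 = (addr - PAGE_OFFSET) >> PAGE_SHIFT;

 	/*
 	 * Prints 0x10001 vs 0x1 - two different mem_map entries,
 	 * off by (PAGE_OFFSET - LINUX_LINK_BASE) >> PAGE_SHIFT pages.
 	 */
 	printf("%#lx vs %#lx\n", idx1, idx2);
 	return 0;
 }
 --->8---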

In particular, in STAR 9000853582, when gdb attempted to read the
memory of another process it got the wrong page from get_user_pages(),
because that is exactly one of the places where we look up a page via
pte_page().

The fix is trivial - we need to calculate the page address the same
way in both cases.
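
As a sanity check, a similarly minimal user-space sketch (same made-up
bases as above) showing that once both helpers subtract
LINUX_LINK_BASE, the resulting mem_map index agrees for every page,
independent of PAGE_OFFSET:

 --->8---
 #include <assert.h>
 #include <stdio.h>

 #define PAGE_SHIFT		12
 #define LINUX_LINK_BASE	0x80000000UL	/* assumed example value */

 /* after the fix both helpers reduce to the same calculation */
 static unsigned long virt_to_page_idx(unsigned long x)
 {
 	return (x - LINUX_LINK_BASE) >> PAGE_SHIFT;
 }

 static unsigned long pte_page_idx(unsigned long pteval)
 {
 	return (pteval - LINUX_LINK_BASE) >> PAGE_SHIFT;
 }

 int main(void)
 {
 	unsigned long a;

 	for (a = 0x90000000UL; a < 0x90010000UL; a += 0x1000UL)
 		assert(virt_to_page_idx(a) == pte_page_idx(a));

 	puts("indices agree for every page");
 	return 0;
 }
 --->8---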

Cc: <stable@vger.kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 6b0b7f7e..7670f33 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -259,7 +259,8 @@
 #define pmd_clear(xp)			do { pmd_val(*(xp)) = 0; } while (0)
 
 #define pte_page(x) (mem_map + \
-		(unsigned long)(((pte_val(x) - PAGE_OFFSET) >> PAGE_SHIFT)))
+		(unsigned long)(((pte_val(x) - CONFIG_LINUX_LINK_BASE) >> \
+				PAGE_SHIFT)))
 
 #define mk_pte(page, pgprot)						\
 ({									\