Stop producing .data.rel sections.
If a section is read-write, it is irrelevant whether the dynamic linker
will write to it or not.
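For illustration, consider a global whose initializer needs a dynamic
relocation under PIC (a minimal sketch, not taken from this commit's
tests; the names are made up):

  @target = global i32 42
  @ptr = global i32* @target   ; address of @target is only known at load time

Before this change such a global was emitted under .section .data.rel;
now it simply goes into .data like any other writable global.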
It looks like LLVM implemented this because GCC was doing it, and GCC
in turn seems to have implemented it in the hope that it would put all
the relocated items close together and speed up the dynamic linker.
There are two problems with this:
* It doesn't work. Both bfd and gold will map .data.rel to .data and
concatenate the input sections in the order they are seen (see the
linker script excerpt after this list).
* If we want a feature like that, it can be implemented directly in the
linker since it knows where the dynamic relocations are.
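To illustrate the first point: bfd's default linker script (and gold's
equivalent built-in rules) catch .data.rel with the generic .data
wildcards, roughly like this (a simplified excerpt, not the exact
script):

  .data : { *(.data .data.* .gnu.linkonce.d.*) }

so .data.rel input sections end up interleaved with ordinary .data
input sections in link order and nothing gets grouped.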
llvm-svn: 253436
diff --git a/llvm/test/CodeGen/Mips/emutls_generic.ll b/llvm/test/CodeGen/Mips/emutls_generic.ll
index 6d0bb33..a6cf23a 100644
--- a/llvm/test/CodeGen/Mips/emutls_generic.ll
+++ b/llvm/test/CodeGen/Mips/emutls_generic.ll
@@ -30,13 +30,13 @@
; MIPS_32: lw {{.+}}call16(__emutls_get_address
; MIPS_32-NOT: __emutls_t.external_x
; MIPS_32-NOT: __emutls_v.external_x:
-; MIPS_32: .section .data.rel
+; MIPS_32: .data
; MIPS_32: .align 2
; MIPS_32-LABEL: __emutls_v.external_y:
; MIPS_32: .section .rodata,
; MIPS_32-LABEL: __emutls_t.external_y:
; MIPS_32-NEXT: .byte 7
-; MIPS_32: .section .data.rel
+; MIPS_32: .data
; MIPS_32: .align 2
; MIPS_32-LABEL: __emutls_v.internal_y:
; MIPS_32-NEXT: .4byte 8
@@ -58,7 +58,7 @@
; MIPS_64: .section .rodata,
; MIPS_64-LABEL: __emutls_t.external_y:
; MIPS_64-NEXT: .byte 7
-; MIPS_64: .section .data.rel
+; MIPS_64: .data
; MIPS_64: .align 3
; MIPS_64-LABEL: __emutls_v.internal_y:
; MIPS_64-NEXT: .8byte 8