<rdar://problem/13069948>

Major fixes to allow reading files that are over 4GB. The main problem was that DataExtractor used 32-bit offsets as its data cursor, and since we mmap all of our object files, a very large core file (over 4GB) could push that cursor past the 4GB boundary.

So I defined a new type, "lldb::offset_t", which should be used for all file offsets.
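
For illustration, the cursor pattern looks roughly like this (a minimal
sketch with made-up names, not the actual DataExtractor API; lldb::offset_t
is just a 64-bit unsigned typedef):

    #include <stdint.h>
    #include <string.h>

    typedef uint64_t offset_t; // what lldb::offset_t boils down to

    // Cursor-style read: consume 4 bytes at *offset_ptr and advance it.
    // With a uint32_t cursor the offset would wrap once it crossed the
    // 4GB boundary in an mmap'ed file; a uint64_t cursor does not.
    static uint32_t GetU32(const uint8_t *data, offset_t *offset_ptr) {
        uint32_t value;
        memcpy(&value, data + *offset_ptr, sizeof(value));
        *offset_ptr += sizeof(value);
        return value;
    }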

After making this change, I temporarily enabled warnings for data loss and for unexpected implicit conversions, and found a ton of things that I fixed.

Any functions that internally take an index should use "size_t" for indexes, and should also return "size_t" for any sizes of collections.
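
A sketch of that convention (illustrative declarations only, not actual
LLDB API names):

    #include <stddef.h>

    struct Symbol;

    // Illustrative only: indexes in and sizes out are both size_t, so
    // no truncating implicit conversions are needed on 64-bit hosts.
    struct SymbolTable {
        size_t GetNumSymbols() const;       // collection size
        Symbol *SymbolAtIndex(size_t idx);  // index parameter
    };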

llvm-svn: 173463
diff --git a/lldb/source/Plugins/ObjectFile/PECOFF/ObjectFilePECOFF.h b/lldb/source/Plugins/ObjectFile/PECOFF/ObjectFilePECOFF.h
index 2e41ce4..446999c 100644
--- a/lldb/source/Plugins/ObjectFile/PECOFF/ObjectFilePECOFF.h
+++ b/lldb/source/Plugins/ObjectFile/PECOFF/ObjectFilePECOFF.h
@@ -68,7 +68,7 @@
     virtual bool
     IsExecutable () const;
     
-    virtual size_t
+    virtual uint32_t
     GetAddressByteSize ()  const;
     
 //    virtual lldb_private::AddressClass
@@ -212,8 +212,8 @@
 	} coff_symbol_t;
     
 	bool ParseDOSHeader ();
-	bool ParseCOFFHeader (uint32_t* offset_ptr);
-	bool ParseCOFFOptionalHeader (uint32_t* offset_ptr);
+	bool ParseCOFFHeader (lldb::offset_t *offset_ptr);
+	bool ParseCOFFOptionalHeader (lldb::offset_t *offset_ptr);
 	bool ParseSectionHeaders (uint32_t offset);
 	
 	static	void DumpDOSHeader(lldb_private::Stream *s, const dos_header_t& header);