<rdar://problem/13069948>

Major fixes to allow reading files that are over 4GB. The main problem was that the DataExtractor was using 32-bit offsets as its data cursor, and since we mmap all of our object files, a very large core file (one over 4GB) would run into the 4GB offset boundary.
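
For illustration, here is a minimal, hypothetical sketch (not lldb code) of the failure mode: a 32-bit cursor silently wraps once it crosses the 4GB boundary.

    // Hypothetical sketch of the bug: a 32-bit offset wraps at 4GB.
    #include <cstdint>
    #include <cstdio>

    int main ()
    {
        uint32_t offset = 0xFFFFFFF0u;       // cursor near the end of a 4GB mapping
        offset += 0x20;                      // advancing past 4GB wraps to 0x10
        printf ("offset = 0x%x\n", offset);  // prints 0x10, not 0x100000010
        return 0;
    }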

So I defined a new "lldb::offset_t", which should be used for all file offsets.
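
A sketch of the shape of the new type, assuming a 64-bit unsigned underlying type:

    // Sketch of the new typedef; assumes uint64_t is the underlying type.
    #include <stdint.h>

    namespace lldb {
        typedef uint64_t offset_t;  // use for all file offsets instead of uint32_t
    }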

After making this change, I temporarily enabled warnings for data loss and for unexpected implicit conversions and found (and fixed) a ton of issues.

Any function that takes an index should use "size_t" for the index, and any function that returns the size of a collection should return "size_t" as well.
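
For example (hypothetical declarations following lldb's formatting style; the names are illustrative only):

    // Hypothetical signatures showing the index/size convention.
    #include <stddef.h>

    class Collection
    {
    public:
        size_t
        GetSize () const;                    // collection sizes are size_t

        const char *
        GetEntryAtIndex (size_t idx) const;  // indexes are size_t, not int
    };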

llvm-svn: 173463
diff --git a/lldb/source/Core/ConstString.cpp b/lldb/source/Core/ConstString.cpp
index 72a4332..8751694 100644
--- a/lldb/source/Core/ConstString.cpp
+++ b/lldb/source/Core/ConstString.cpp
@@ -87,7 +87,7 @@
     }
 
     const char *
-    GetConstCStringWithLength (const char *cstr, int cstr_len)
+    GetConstCStringWithLength (const char *cstr, size_t cstr_len)
     {
         if (cstr)
         {
@@ -132,11 +132,11 @@
     }
 
     const char *
-    GetConstTrimmedCStringWithLength (const char *cstr, int cstr_len)
+    GetConstTrimmedCStringWithLength (const char *cstr, size_t cstr_len)
     {
         if (cstr)
         {
-            int trimmed_len = std::min<int> (strlen (cstr), cstr_len);
+            const size_t trimmed_len = std::min<size_t> (strlen (cstr), cstr_len);
             return GetConstCStringWithLength (cstr, trimmed_len);
         }
         return NULL;