<rdar://problem/13069948>

Major fixes to allow reading files that are over 4GB. The main problem was that DataExtractor used a 32 bit offset as its data cursor, and since we mmap all of our object files, reading a very large core file (over 4GB) would run into the 4GB boundary.

So I defined a new "lldb::offset_t" which should be used for all file offsets.
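
Roughly, the new type is just a 64 bit unsigned offset (the real definition lives in lldb-types.h; this sketch and the GetU32 call below are illustration, not verbatim source):

    #include <stdint.h>

    namespace lldb {
        // 64 bit offset for file and data-extraction cursors,
        // replacing the old 32 bit uint32_t offsets.
        typedef uint64_t offset_t;
    }

    // Given some DataExtractor 'data', a cursor can now walk
    // past the 4GB boundary:
    lldb::offset_t offset = 0;
    const uint32_t magic = data.GetU32(&offset);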

After making this change, I temporarily enabled warnings for data loss and for unexpected implicit conversions and fixed a ton of issues that they flagged.
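
For illustration only (the exact warning flags aren't recorded here), clang's -Wshorten-64-to-32 and -Wconversion catch truncations like this; GetItemCount is a hypothetical helper:

    #include <stddef.h>

    size_t GetItemCount();  // hypothetical helper returning a 64 bit size

    void Example()
    {
        // warning: implicit conversion loses integer precision:
        //   'size_t' (aka 'unsigned long') to 'int'
        int count = GetItemCount();

        // fixed: keep the full width of the value
        size_t fixed_count = GetItemCount();
        (void)count;
        (void)fixed_count;
    }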

Any function that takes an index should use "size_t" for the index, and any function that returns the size of a collection should return "size_t" as well.
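
As a sketch of that convention (mirroring the Args changes in the diff below; the member names here are illustrative, not the actual class):

    #include <stddef.h>
    #include <vector>

    class Args {
    public:
        // sizes of collections are returned as size_t, not int
        size_t GetArgumentCount() const { return m_argv.size(); }

        // indexes into collections are taken as size_t as well
        const char *GetArgumentAtIndex(size_t idx) const
        {
            return (idx < m_argv.size()) ? m_argv[idx] : NULL;
        }

    private:
        std::vector<const char *> m_argv;
    };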



git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@173463 91177308-0d34-0410-b5e6-96231b3b80d8
diff --git a/source/Commands/CommandObjectCommands.cpp b/source/Commands/CommandObjectCommands.cpp
index e8937ea..7448a87 100644
--- a/source/Commands/CommandObjectCommands.cpp
+++ b/source/Commands/CommandObjectCommands.cpp
@@ -188,7 +188,7 @@
         return "";
     }
     
-    int
+    virtual int
     HandleArgumentCompletion (Args &input,
                               int &cursor_index,
                               int &cursor_char_position,
@@ -285,7 +285,7 @@
     bool
     DoExecute(Args& command, CommandReturnObject &result)
     {
-        const int argc = command.GetArgumentCount();
+        const size_t argc = command.GetArgumentCount();
         if (argc == 1)
         {
             const char *filename = command.GetArgumentAtIndex(0);
@@ -1289,7 +1289,7 @@
     {
     }
     
-    int
+    virtual int
     HandleArgumentCompletion (Args &input,
                               int &cursor_index,
                               int &cursor_char_position,