<rdar://problem/13069948>
Major fixes to allow reading files that are over 4GB. The main problem was that the DataExtractor was using 32-bit offsets as a data cursor; since we mmap all of our object files, a very large core file that was over 4GB would run into the 4GB boundary.
So I defined a new "lldb::offset_t" which should be used for all file offsets.
After making this change, I temporarily enabled warnings for data loss and for unexpected implicit conversions and found a ton of things that I fixed.
Any functions that take an index should use "size_t" for the index and should also return "size_t" for any collection sizes.
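As a minimal sketch of the intent (not the actual LLDB sources; the helper below is hypothetical), the idea is that the data cursor is a 64-bit "lldb::offset_t" rather than a 32-bit integer, so offsets past the 4GB boundary in mmap'ed data remain valid, and indexes/sizes use "size_t":

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    namespace lldb {
    typedef uint64_t offset_t; // wide enough for files larger than 4GB
    }

    // Hypothetical extractor-style helper: advances a 64-bit cursor through
    // mmap'ed data so the offset no longer truncates at 4GB.
    static uint32_t ReadU32(const uint8_t *data, lldb::offset_t &offset) {
        uint32_t value;
        std::memcpy(&value, data + offset, sizeof(value));
        offset += sizeof(value);
        return value;
    }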
llvm-svn: 173463
diff --git a/lldb/source/Commands/CommandObjectMultiword.cpp b/lldb/source/Commands/CommandObjectMultiword.cpp
index d2e2811..aa3a8eb 100644
--- a/lldb/source/Commands/CommandObjectMultiword.cpp
+++ b/lldb/source/Commands/CommandObjectMultiword.cpp
@@ -143,7 +143,7 @@
else
{
std::string error_msg;
- int num_subcmd_matches = matches.GetSize();
+ const size_t num_subcmd_matches = matches.GetSize();
if (num_subcmd_matches > 0)
error_msg.assign ("ambiguous command ");
else
@@ -158,14 +158,14 @@
if (num_subcmd_matches > 0)
{
error_msg.append (" Possible completions:");
- for (int i = 0; i < num_subcmd_matches; i++)
+ for (size_t i = 0; i < num_subcmd_matches; i++)
{
error_msg.append ("\n\t");
error_msg.append (matches.GetStringAtIndex (i));
}
}
error_msg.append ("\n");
- result.AppendRawError (error_msg.c_str(), error_msg.size());
+ result.AppendRawError (error_msg.c_str());
result.SetStatus (eReturnStatusFailed);
}
}