Switch the asmprinter (.ll) and all the stuff it requires over to
use raw_ostream instead of std::ostream. Among other goodness,
this speeds up llvm-dis of kc++ with a release build from 0.85s
to 0.49s (roughly 73% faster).
Other interesting changes:
1) This makes Value::print be non-virtual.
2) AP[S]Int and ConstantRange can no longer print to ostream directly,
use raw_ostream instead (see the sketch after this list).
3) This fixes a bug in raw_os_ostream where it didn't flush itself
when destroyed.
4) This adds a new SDNode::print method, instead of only allowing "dump".
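For orientation, a minimal sketch of the new printing entry points from items 1-3;
the header paths and exact signatures here are assumptions for illustration and may
differ between revisions:

  #include "llvm/ADT/APInt.h"
  #include "llvm/Value.h"                 // assumed pre-IR/ header layout
  #include "llvm/Support/raw_ostream.h"   // raw_os_ostream may live in its own header in later trees
  #include <iostream>

  // Hypothetical helper: print an APInt and a Value through raw_ostream.
  void printThroughRawOstream(const llvm::Value &V) {
    llvm::APInt X(32, 42);
    X.print(llvm::errs(), /*isSigned=*/false);   // raw_ostream overload; the std::ostream one is gone
    llvm::errs() << "\n";

    V.print(llvm::errs());                       // Value::print, now non-virtual
    llvm::errs() << "\n";

    // Bridge to code that still hands out a std::ostream; the adapter's
    // destructor now flushes whatever it buffered (the bug fixed in item 3).
    llvm::raw_os_ostream Bridge(std::cerr);
    V.print(Bridge);
  }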
A lot of APIs have both std::ostream and raw_ostream versions; it would
be useful to go through and systematically annihilate the std::ostream
versions.
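For typical client code the migration looks like this (the Module pointer and
header path are placeholders for illustration); the HowToUseJIT diff below is
the same pattern:

  #include "llvm/Module.h"
  #include "llvm/Support/raw_ostream.h"

  // Before: std::cout << *M << std::flush;
  // After: write through the buffered stream and flush explicitly, since
  // raw_ostream does not understand the std::flush manipulator.
  void emitModule(llvm::Module *M) {
    llvm::outs() << *M;
    llvm::outs().flush();
  }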
This passes dejagnu, but there may be minor fallout; please let me know if
so and I'll fix it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55263 91177308-0d34-0410-b5e6-96231b3b80d8
diff --git a/examples/HowToUseJIT/HowToUseJIT.cpp b/examples/HowToUseJIT/HowToUseJIT.cpp
index d500005..0482df6 100644
--- a/examples/HowToUseJIT/HowToUseJIT.cpp
+++ b/examples/HowToUseJIT/HowToUseJIT.cpp
@@ -42,7 +42,7 @@
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/ExecutionEngine/Interpreter.h"
#include "llvm/ExecutionEngine/GenericValue.h"
-#include <iostream>
+#include "llvm/Support/raw_ostream.h"
using namespace llvm;
int main() {
@@ -99,14 +99,15 @@
   ExistingModuleProvider* MP = new ExistingModuleProvider(M);
   ExecutionEngine* EE = ExecutionEngine::create(MP, false);
 
-  std::cout << "We just constructed this LLVM module:\n\n" << *M;
-  std::cout << "\n\nRunning foo: " << std::flush;
+  outs() << "We just constructed this LLVM module:\n\n" << *M;
+  outs() << "\n\nRunning foo: ";
+  outs().flush();
 
   // Call the `foo' function with no arguments:
   std::vector<GenericValue> noargs;
   GenericValue gv = EE->runFunction(FooF, noargs);
 
   // Import result of execution:
-  std::cout << "Result: " << gv.IntVal << "\n";
+  outs() << "Result: " << gv.IntVal << "\n";
   return 0;
 }