<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
          "http://www.w3.org/TR/html4/strict.dtd">

<html>
<head>
  <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <meta name="author" content="Chris Lattner">
  <link rel="stylesheet" href="../llvm.css" type="text/css">
</head>

<body>

<div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>

<ul>
<li><a href="index.html">Up to Tutorial Index</a></li>
<li>Chapter 4
  <ol>
    <li><a href="#intro">Chapter 4 Introduction</a></li>
    <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
    <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
    <li><a href="#jit">Adding a JIT Compiler</a></li>
    <li><a href="#code">Full Code Listing</a></li>
  </ol>
</li>
<li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control
Flow</li>
</ul>

<div class="doc_author">
  <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
</div>

<!-- *********************************************************************** -->
<div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
with LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple
language and added support for generating LLVM IR. This chapter describes
two new techniques: adding optimizer support to your language and adding JIT
compiler support, which together show how to get efficient code out of your
language.</p>

</div>

<!-- *********************************************************************** -->
<div class="doc_section"><a name="trivialconstfold">Trivial Constant
Folding</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>
Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
it does not produce wonderful code. For example, when compiling simple code,
we don't get obvious optimizations:</p>
Chris Lattnerc0b42e92007-10-23 06:27:55 +000061
62<div class="doc_code">
63<pre>
Chris Lattner118749e2007-10-25 06:23:36 +000064ready&gt; <b>def test(x) 1+2+x;</b>
65Read function definition:
66define double @test(double %x) {
67entry:
68 %addtmp = add double 1.000000e+00, 2.000000e+00
69 %addtmp1 = add double %addtmp, %x
70 ret double %addtmp1
71}
72</pre>
73</div>
74
<p>This code is a very literal transcription of the AST built by parsing
our code, and as such, lacks optimizations like constant folding (we'd like to
get "<tt>add x, 3.0</tt>" in the example above) as well as other more important
optimizations. Constant folding in particular is a very common and very
important optimization: so much so that many language implementors implement
constant folding support in their AST representation.</p>

<p>With LLVM, you don't need to. Since all calls to build LLVM IR go through
the LLVM builder, it would be nice if the builder itself checked to see if there
was a constant folding opportunity when you call it. If so, it could just do
the constant fold and return the constant instead of creating an instruction.
This is exactly what the <tt>LLVMFoldingBuilder</tt> class does. Let's make one
change:</p>

<div class="doc_code">
<pre>
static LLVMFoldingBuilder Builder;
</pre>
</div>

<p>All we did was switch from <tt>LLVMBuilder</tt> to
<tt>LLVMFoldingBuilder</tt>. Though we change no other code, all of our
instructions are now implicitly constant folded without us having to do anything
about it. For example, our example above now compiles to:</p>

<div class="doc_code">
<pre>
ready&gt; <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
entry:
  %addtmp = add double 3.000000e+00, %x
  ret double %addtmp
}
</pre>
</div>

111
Chris Lattnera54c2012007-11-07 05:28:43 +0000112<p>Well, that was easy :). In practice, we recommend always using
Owen Anderson6867aec2007-10-25 06:50:30 +0000113<tt>LLVMFoldingBuilder</tt> when generating code like this. It has no
Chris Lattner118749e2007-10-25 06:23:36 +0000114"syntactic overhead" for its use (you don't have to uglify your compiler with
115constant checks everywhere) and it can dramatically reduce the amount of
116LLVM IR that is generated in some cases (particular for languages with a macro
117preprocessor or that use a lot of constants).</p>
118
119<p>On the other hand, the <tt>LLVMFoldingBuilder</tt> is limited by the fact
120that it does all of its analysis inline with the code as it is built. If you
121take a slightly more complex example:</p>
122
<div class="doc_code">
<pre>
ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready&gt; Read function definition:
define double @test(double %x) {
entry:
  %addtmp = add double 3.000000e+00, %x
  %addtmp1 = add double %x, 3.000000e+00
  %multmp = mul double %addtmp, %addtmp1
  ret double %multmp
}
</pre>
</div>

<p>In this case, the LHS and RHS of the multiplication are the same value. We'd
really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
of computing "<tt>x+3</tt>" twice.</p>

<p>Unfortunately, no amount of local analysis will be able to detect and correct
this. It requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
to delete the redundant add instruction. Fortunately, LLVM provides a broad
range of optimizations that you can use, in the form of "passes".</p>

</div>

<!-- *********************************************************************** -->
<div class="doc_section"><a name="optimizerpasses">LLVM Optimization
Passes</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>LLVM provides many optimization passes, which do many different sorts of
things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
to the mistaken notion that one set of optimizations is right for all languages
and for all situations. LLVM allows a compiler implementor to make complete
decisions about what optimizations to use, in which order, and in what
situation.</p>

<p>As a concrete example, LLVM supports both "whole module" passes, which look
across as large a body of code as they can (often a whole file, but if run
at link time, this can be a substantial portion of the whole program), and
"per-function" passes, which just operate on a single function at a time
without looking at other functions. For more information
on passes and how they get run, see the <a href="../WritingAnLLVMPass.html">How
to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
Passes</a>.</p>

<p>For Kaleidoscope, we are currently generating functions on the fly, one at
a time, as the user types them in. We aren't shooting for the ultimate
optimization experience in this setting, but we also want to catch the easy and
quick stuff where possible. As such, we will choose to run a few per-function
optimizations as the user types the function in. If we wanted to make a "static
Kaleidoscope compiler", we would use exactly the code we have now, except that
we would defer running the optimizer until the entire file has been parsed.</p>

<p>In order to get per-function optimizations going, we need to set up a
<a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold
and organize the LLVM optimizations that we want to run. Once we have that, we
can add a set of optimizations to run. The code looks like this:</p>

<div class="doc_code">
<pre>
  ExistingModuleProvider OurModuleProvider(TheModule);
  FunctionPassManager OurFPM(&amp;OurModuleProvider);

  // Set up the optimizer pipeline.  Start with registering info about how the
  // target lays out data structures.
  OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
  // Do simple "peephole" optimizations and bit-twiddling optzns.
  OurFPM.add(createInstructionCombiningPass());
  // Reassociate expressions.
  OurFPM.add(createReassociatePass());
  // Eliminate Common SubExpressions.
  OurFPM.add(createGVNPass());
  // Simplify the control flow graph (deleting unreachable blocks, etc).
  OurFPM.add(createCFGSimplificationPass());

  // Set the global so the code gen can use this.
  TheFPM = &amp;OurFPM;

  // Run the main "interpreter loop" now.
  MainLoop();
</pre>
</div>

<p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
<tt>FunctionPassManager</tt>. The former is basically a wrapper around our
<tt>Module</tt> that the PassManager requires. It provides certain flexibility
that we're not going to take advantage of here, so I won't dive into what it is
all about.</p>

<p>The meat of the matter is the definition of "<tt>OurFPM</tt>". It
requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
to construct itself. Once it is set up, we use a series of "add" calls to add
a bunch of LLVM passes. The first pass is basically boilerplate: it adds a pass
so that later optimizations know how the data structures in the program are
laid out. The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
which we will get to in the next section.</p>

<p>In this case, we choose to add 4 optimization passes. The passes we chose
here are a pretty standard set of "cleanup" optimizations that are useful for
a wide variety of code. I won't delve into what they do, but believe me that
they are a good starting place :).</p>

<p>Once the PassManager is set up, we need to make use of it. We do this by
running it after our newly created function is constructed (in
<tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>

<div class="doc_code">
<pre>
  if (Value *RetVal = Body-&gt;Codegen()) {
    // Finish off the function.
    Builder.CreateRet(RetVal);

    // Validate the generated code, checking for consistency.
    verifyFunction(*TheFunction);

    <b>// Optimize the function.
    TheFPM-&gt;run(*TheFunction);</b>

    return TheFunction;
  }
</pre>
</div>

<p>As you can see, this is pretty straightforward. The
<tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
improving (hopefully) its body. With this in place, we can try our test above
again:</p>

<div class="doc_code">
<pre>
ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready&gt; Read function definition:
define double @test(double %x) {
entry:
  %addtmp = add double %x, 3.000000e+00
  %multmp = mul double %addtmp, %addtmp
  ret double %multmp
}
</pre>
</div>

<p>As expected, we now get our nicely optimized code, saving a floating point
add instruction from every execution of this function.</p>

<p>LLVM provides a wide variety of optimizations that can be used in certain
circumstances. Some <a href="../Passes.html">documentation about the various
passes</a> is available, but it isn't very complete. Another good source of
ideas is to look at the passes that <tt>llvm-gcc</tt> or
<tt>llvm-ld</tt> run to get started. The "<tt>opt</tt>" tool allows you to
experiment with passes from the command line, so you can see if they do
anything.</p>

<p>Now that we have reasonable code coming out of our front-end, let's talk
about executing it!</p>

</div>

<!-- *********************************************************************** -->
<div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>Code that is available in LLVM IR can have a wide variety of tools
applied to it. For example, you can run optimizations on it (as we did above),
you can dump it out in textual or binary forms, you can compile the code to an
assembly file (.s) for some target, or you can JIT compile it. The nice thing
about the LLVM IR representation is that it is the "common currency" between
many different parts of the compiler.
</p>

<p>In this section, we'll add JIT compiler support to our interpreter. The
basic idea that we want for Kaleidoscope is to have the user enter function
bodies as they do now, but immediately evaluate the top-level expressions they
type in. For example, if they type in "1 + 2;", we should evaluate and print
out 3. If they define a function, they should be able to call it from the
command line.</p>

<p>In order to do this, we first declare and initialize the JIT. This is done
by adding a global variable and a call in <tt>main</tt>:</p>

<div class="doc_code">
<pre>
<b>static ExecutionEngine *TheExecutionEngine;</b>
...
int main() {
  ..
  <b>// Create the JIT.
  TheExecutionEngine = ExecutionEngine::create(TheModule);</b>
  ..
}
</pre>
</div>

<p>This creates an abstract "Execution Engine", which can be either a JIT
compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
for you if one is available for your platform, otherwise it will fall back to
the interpreter.</p>

<p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
There are a variety of APIs that are useful, but the simplest one is the
"<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
specified LLVM Function and returns a function pointer to the generated machine
code. In our case, this means that we can change the code that parses a
top-level expression to look like this:</p>

<div class="doc_code">
<pre>
static void HandleTopLevelExpression() {
  // Evaluate a top level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F-&gt;Codegen()) {
      LF-&gt;dump();  // Dump the function for exposition purposes.

      <b>// JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());</b>
    }
</pre>
</div>

<p>Recall that we compile top-level expressions into a self-contained LLVM
function that takes no arguments and returns the computed double. Because the
LLVM JIT compiler matches the native platform ABI, this means that you can just
cast the result pointer to a function pointer of that type and call it directly.
As such, there is no difference between JIT compiled code and native machine
code that is statically linked into your application.</p>

<p>With just these two changes, let's see how Kaleidoscope works now!</p>

<div class="doc_code">
<pre>
ready&gt; <b>4+5;</b>
define double @""() {
entry:
  ret double 9.000000e+00
}

<em>Evaluated to 9.000000</em>
</pre>
</div>

<p>Well, this looks like it is basically working. The dump of the function
shows the "no argument function that always returns double" that we synthesize
for each top-level expression that is typed in. This demonstrates very basic
functionality, but can we do more?</p>

<div class="doc_code">
<pre>
ready&gt; <b>def testfunc(x y) x + y*2; </b>
Read function definition:
define double @testfunc(double %x, double %y) {
entry:
  %multmp = mul double %y, 2.000000e+00
  %addtmp = add double %multmp, %x
  ret double %addtmp
}

ready&gt; <b>testfunc(4, 10);</b>
define double @""() {
entry:
  %calltmp = call double @testfunc( double 4.000000e+00, double 1.000000e+01 )
  ret double %calltmp
}

<em>Evaluated to 24.000000</em>
</pre>
</div>

<p>This illustrates that we can now call user code, but it is a bit subtle what
is going on here. Note that we only invoke the JIT on the anonymous function
that <em>calls testfunc</em>; we never invoked it on <em>testfunc
itself</em>.</p>

<p>What actually happened here is that the anonymous function was
JIT'd when requested. When the Kaleidoscope app calls through the function
pointer that is returned, the anonymous function starts executing. It ends up
making the call to the "testfunc" function, and ends up in a stub that invokes
the JIT, lazily, on testfunc. Once the JIT finishes lazily compiling testfunc,
it returns and the code re-executes the call.</p>

<p>In summary, the JIT will lazily JIT code on the fly as it is needed. The
JIT provides a number of other more advanced interfaces for things like freeing
allocated machine code, rejit'ing functions to update them, etc. However, even
with this simple code, we get some surprisingly powerful capabilities - check
this out (I removed the dump of the anonymous functions, you should get the idea
by now :) :</p>

<div class="doc_code">
<pre>
ready&gt; <b>extern sin(x);</b>
Read extern:
declare double @sin(double)

ready&gt; <b>extern cos(x);</b>
Read extern:
declare double @cos(double)

ready&gt; <b>sin(1.0);</b>
<em>Evaluated to 0.841471</em>

ready&gt; <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
Read function definition:
define double @foo(double %x) {
entry:
  %calltmp = call double @sin( double %x )
  %multmp = mul double %calltmp, %calltmp
  %calltmp2 = call double @cos( double %x )
  %multmp4 = mul double %calltmp2, %calltmp2
  %addtmp = add double %multmp, %multmp4
  ret double %addtmp
}

ready&gt; <b>foo(4.0);</b>
<em>Evaluated to 1.000000</em>
</pre>
</div>

<p>Whoa, how does the JIT know about sin and cos? The answer is surprisingly
simple: in this example, the JIT started execution of a function and got to a
function call. It
realized that the function was not yet JIT compiled and invoked the standard set
of routines to resolve the function. In this case, there is no body defined
for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on the
Kaleidoscope process itself.
Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
patches up calls in the module to call the libm version of <tt>sin</tt>
directly.</p>

<p>The LLVM JIT provides a number of interfaces (look in the
<tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
resolved. It allows you to establish explicit mappings between IR objects and
addresses (useful for LLVM global variables that you want to map to static
tables, for example), allows you to dynamically decide on the fly based on the
function name, and even allows you to have the JIT abort itself if any lazy
compilation is attempted.</p>

<p>One interesting application of this is that we can now extend the language
by writing arbitrary C++ code to implement operations. For example, if we add:
</p>

<div class="doc_code">
<pre>
/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
</pre>
</div>

<p>Now we can produce simple output to the console by using things like:
"<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
the console (120 is the ASCII code for 'x'). Similar code could be used to
implement file I/O, console input, and many other capabilities in
Kaleidoscope.</p>

<p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
this point, we can compile a non-Turing-complete programming language, and
optimize and JIT compile it in a user-driven way. Next up we'll look into <a
href="LangImpl5.html">extending the language with control flow constructs</a>,
tackling some interesting LLVM IR issues along the way.</p>

</div>

<!-- *********************************************************************** -->
<div class="doc_section"><a name="code">Full Code Listing</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>
Here is the complete code listing for our running example, enhanced with the
LLVM JIT and optimizer. To build this example, use:
</p>

<div class="doc_code">
<pre>
  # Compile
  g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
  # Run
  ./toy
</pre>
</div>

<p>Here is the code:</p>

<div class="doc_code">
<pre>
#include "llvm/DerivedTypes.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/Module.h"
#include "llvm/ModuleProvider.h"
#include "llvm/PassManager.h"
#include "llvm/Analysis/Verifier.h"
#include "llvm/Target/TargetData.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Support/LLVMBuilder.h"
#include &lt;cstdio&gt;
#include &lt;string&gt;
#include &lt;map&gt;
#include &lt;vector&gt;
using namespace llvm;

//===----------------------------------------------------------------------===//
// Lexer
//===----------------------------------------------------------------------===//

// The lexer returns tokens [0-255] if it is an unknown character, otherwise one
// of these for known things.
enum Token {
  tok_eof = -1,

  // commands
  tok_def = -2, tok_extern = -3,

  // primary
  tok_identifier = -4, tok_number = -5,
};

static std::string IdentifierStr;  // Filled in if tok_identifier
static double NumVal;              // Filled in if tok_number

/// gettok - Return the next token from standard input.
static int gettok() {
  static int LastChar = ' ';

  // Skip any whitespace.
  while (isspace(LastChar))
    LastChar = getchar();

  if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
    IdentifierStr = LastChar;
    while (isalnum((LastChar = getchar())))
      IdentifierStr += LastChar;

    if (IdentifierStr == "def") return tok_def;
    if (IdentifierStr == "extern") return tok_extern;
    return tok_identifier;
  }

  if (isdigit(LastChar) || LastChar == '.') {   // Number: [0-9.]+
    std::string NumStr;
    do {
      NumStr += LastChar;
      LastChar = getchar();
    } while (isdigit(LastChar) || LastChar == '.');

    NumVal = strtod(NumStr.c_str(), 0);
    return tok_number;
  }

  if (LastChar == '#') {
    // Comment until end of line.
    do LastChar = getchar();
    while (LastChar != EOF &amp;&amp; LastChar != '\n' &amp;&amp; LastChar != '\r');

    if (LastChar != EOF)
      return gettok();
  }

  // Check for end of file.  Don't eat the EOF.
  if (LastChar == EOF)
    return tok_eof;

  // Otherwise, just return the character as its ascii value.
  int ThisChar = LastChar;
  LastChar = getchar();
  return ThisChar;
}

//===----------------------------------------------------------------------===//
// Abstract Syntax Tree (aka Parse Tree)
//===----------------------------------------------------------------------===//

/// ExprAST - Base class for all expression nodes.
class ExprAST {
public:
  virtual ~ExprAST() {}
  virtual Value *Codegen() = 0;
};

/// NumberExprAST - Expression class for numeric literals like "1.0".
class NumberExprAST : public ExprAST {
  double Val;
public:
  NumberExprAST(double val) : Val(val) {}
  virtual Value *Codegen();
};

/// VariableExprAST - Expression class for referencing a variable, like "a".
class VariableExprAST : public ExprAST {
  std::string Name;
public:
  VariableExprAST(const std::string &amp;name) : Name(name) {}
  virtual Value *Codegen();
};

/// BinaryExprAST - Expression class for a binary operator.
class BinaryExprAST : public ExprAST {
  char Op;
  ExprAST *LHS, *RHS;
public:
  BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
    : Op(op), LHS(lhs), RHS(rhs) {}
  virtual Value *Codegen();
};

/// CallExprAST - Expression class for function calls.
class CallExprAST : public ExprAST {
  std::string Callee;
  std::vector&lt;ExprAST*&gt; Args;
public:
  CallExprAST(const std::string &amp;callee, std::vector&lt;ExprAST*&gt; &amp;args)
    : Callee(callee), Args(args) {}
  virtual Value *Codegen();
};

/// PrototypeAST - This class represents the "prototype" for a function,
/// which captures its argument names as well as if it is an operator.
class PrototypeAST {
  std::string Name;
  std::vector&lt;std::string&gt; Args;
public:
  PrototypeAST(const std::string &amp;name, const std::vector&lt;std::string&gt; &amp;args)
    : Name(name), Args(args) {}

  Function *Codegen();
};

/// FunctionAST - This class represents a function definition itself.
class FunctionAST {
  PrototypeAST *Proto;
  ExprAST *Body;
public:
  FunctionAST(PrototypeAST *proto, ExprAST *body)
    : Proto(proto), Body(body) {}

  Function *Codegen();
};

//===----------------------------------------------------------------------===//
// Parser
//===----------------------------------------------------------------------===//

/// CurTok/getNextToken - Provide a simple token buffer.  CurTok is the current
/// token the parser is looking at.  getNextToken reads another token from the
/// lexer and updates CurTok with its results.
static int CurTok;
static int getNextToken() {
  return CurTok = gettok();
}

/// BinopPrecedence - This holds the precedence for each binary operator that is
/// defined.
static std::map&lt;char, int&gt; BinopPrecedence;

/// GetTokPrecedence - Get the precedence of the pending binary operator token.
static int GetTokPrecedence() {
  if (!isascii(CurTok))
    return -1;

  // Make sure it's a declared binop.
  int TokPrec = BinopPrecedence[CurTok];
  if (TokPrec &lt;= 0) return -1;
  return TokPrec;
}

/// Error* - These are little helper functions for error handling.
ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str); return 0; }
PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }

static ExprAST *ParseExpression();

/// identifierexpr
///   ::= identifier
///   ::= identifier '(' expression* ')'
static ExprAST *ParseIdentifierExpr() {
  std::string IdName = IdentifierStr;

  getNextToken();  // eat identifier.

  if (CurTok != '(') // Simple variable ref.
    return new VariableExprAST(IdName);

  // Call.
  getNextToken();  // eat (
  std::vector&lt;ExprAST*&gt; Args;
  if (CurTok != ')') {
    while (1) {
      ExprAST *Arg = ParseExpression();
      if (!Arg) return 0;
      Args.push_back(Arg);

      if (CurTok == ')') break;

      if (CurTok != ',')
        return Error("Expected ')'");
      getNextToken();
    }
  }

  // Eat the ')'.
  getNextToken();

  return new CallExprAST(IdName, Args);
}

/// numberexpr ::= number
static ExprAST *ParseNumberExpr() {
  ExprAST *Result = new NumberExprAST(NumVal);
  getNextToken(); // consume the number
  return Result;
}

/// parenexpr ::= '(' expression ')'
static ExprAST *ParseParenExpr() {
  getNextToken();  // eat (.
  ExprAST *V = ParseExpression();
  if (!V) return 0;

  if (CurTok != ')')
    return Error("expected ')'");
  getNextToken();  // eat ).
  return V;
}

/// primary
///   ::= identifierexpr
///   ::= numberexpr
///   ::= parenexpr
static ExprAST *ParsePrimary() {
  switch (CurTok) {
  default: return Error("unknown token when expecting an expression");
  case tok_identifier: return ParseIdentifierExpr();
  case tok_number:     return ParseNumberExpr();
  case '(':            return ParseParenExpr();
  }
}
772
773/// binoprhs
774/// ::= ('+' primary)*
775static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
776 // If this is a binop, find its precedence.
777 while (1) {
778 int TokPrec = GetTokPrecedence();
779
780 // If this is a binop that binds at least as tightly as the current binop,
781 // consume it, otherwise we are done.
782 if (TokPrec &lt; ExprPrec)
783 return LHS;
784
785 // Okay, we know this is a binop.
786 int BinOp = CurTok;
787 getNextToken(); // eat binop
788
789 // Parse the primary expression after the binary operator.
790 ExprAST *RHS = ParsePrimary();
791 if (!RHS) return 0;
792
793 // If BinOp binds less tightly with RHS than the operator after RHS, let
794 // the pending operator take RHS as its LHS.
795 int NextPrec = GetTokPrecedence();
796 if (TokPrec &lt; NextPrec) {
797 RHS = ParseBinOpRHS(TokPrec+1, RHS);
798 if (RHS == 0) return 0;
799 }
800
801 // Merge LHS/RHS.
802 LHS = new BinaryExprAST(BinOp, LHS, RHS);
803 }
804}
805
806/// expression
807/// ::= primary binoprhs
808///
809static ExprAST *ParseExpression() {
810 ExprAST *LHS = ParsePrimary();
811 if (!LHS) return 0;
812
813 return ParseBinOpRHS(0, LHS);
814}
815
/// prototype
///   ::= id '(' id* ')'
static PrototypeAST *ParsePrototype() {
  if (CurTok != tok_identifier)
    return ErrorP("Expected function name in prototype");
  
  std::string FnName = IdentifierStr;
  getNextToken();
  
  if (CurTok != '(')
    return ErrorP("Expected '(' in prototype");
  
  std::vector&lt;std::string&gt; ArgNames;
  while (getNextToken() == tok_identifier)
    ArgNames.push_back(IdentifierStr);
  if (CurTok != ')')
    return ErrorP("Expected ')' in prototype");
  
  // success.
  getNextToken();  // eat ')'.
  
  return new PrototypeAST(FnName, ArgNames);
}

/// definition ::= 'def' prototype expression
static FunctionAST *ParseDefinition() {
  getNextToken();  // eat def.
  PrototypeAST *Proto = ParsePrototype();
  if (Proto == 0) return 0;
  
  if (ExprAST *E = ParseExpression())
    return new FunctionAST(Proto, E);
  return 0;
}

/// toplevelexpr ::= expression
static FunctionAST *ParseTopLevelExpr() {
  if (ExprAST *E = ParseExpression()) {
    // Make an anonymous proto.
    PrototypeAST *Proto = new PrototypeAST("", std::vector&lt;std::string&gt;());
    return new FunctionAST(Proto, E);
  }
  return 0;
}

/// external ::= 'extern' prototype
static PrototypeAST *ParseExtern() {
  getNextToken();  // eat extern.
  return ParsePrototype();
}

//===----------------------------------------------------------------------===//
// Code Generation
//===----------------------------------------------------------------------===//

static Module *TheModule;
static LLVMFoldingBuilder Builder;
static std::map&lt;std::string, Value*&gt; NamedValues;
static FunctionPassManager *TheFPM;

Value *ErrorV(const char *Str) { Error(Str); return 0; }

Value *NumberExprAST::Codegen() {
  return ConstantFP::get(Type::DoubleTy, APFloat(Val));
}

Value *VariableExprAST::Codegen() {
  // Look this variable up in the function.
  Value *V = NamedValues[Name];
  return V ? V : ErrorV("Unknown variable name");
}

Value *BinaryExprAST::Codegen() {
  Value *L = LHS-&gt;Codegen();
  Value *R = RHS-&gt;Codegen();
  if (L == 0 || R == 0) return 0;
  
  switch (Op) {
  case '+': return Builder.CreateAdd(L, R, "addtmp");
  case '-': return Builder.CreateSub(L, R, "subtmp");
  case '*': return Builder.CreateMul(L, R, "multmp");
  case '&lt;':
    L = Builder.CreateFCmpULT(L, R, "cmptmp");
    // Convert bool 0/1 to double 0.0 or 1.0
    return Builder.CreateUIToFP(L, Type::DoubleTy, "booltmp");
  default: return ErrorV("invalid binary operator");
  }
}

Value *CallExprAST::Codegen() {
  // Look up the name in the global module table.
  Function *CalleeF = TheModule-&gt;getFunction(Callee);
  if (CalleeF == 0)
    return ErrorV("Unknown function referenced");
  
  // If argument mismatch error.
  if (CalleeF-&gt;arg_size() != Args.size())
    return ErrorV("Incorrect # arguments passed");
  
  std::vector&lt;Value*&gt; ArgsV;
  for (unsigned i = 0, e = Args.size(); i != e; ++i) {
    ArgsV.push_back(Args[i]-&gt;Codegen());
    if (ArgsV.back() == 0) return 0;
  }
  
  return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
}

Function *PrototypeAST::Codegen() {
  // Make the function type:  double(double,double) etc.
  std::vector&lt;const Type*&gt; Doubles(Args.size(), Type::DoubleTy);
  FunctionType *FT = FunctionType::get(Type::DoubleTy, Doubles, false);
  
  Function *F = new Function(FT, Function::ExternalLinkage, Name, TheModule);
  
  // If F conflicted, there was already something named 'Name'.  If it has a
  // body, don't allow redefinition or reextern.
  if (F-&gt;getName() != Name) {
    // Delete the one we just made and get the existing one.
    F-&gt;eraseFromParent();
    F = TheModule-&gt;getFunction(Name);
    
    // If F already has a body, reject this.
    if (!F-&gt;empty()) {
      ErrorF("redefinition of function");
      return 0;
    }
    
    // If F took a different number of args, reject.
    if (F-&gt;arg_size() != Args.size()) {
      ErrorF("redefinition of function with different # args");
      return 0;
    }
  }
  
  // Set names for all arguments.
  unsigned Idx = 0;
  for (Function::arg_iterator AI = F-&gt;arg_begin(); Idx != Args.size();
       ++AI, ++Idx) {
    AI-&gt;setName(Args[Idx]);
    
    // Add arguments to variable symbol table.
    NamedValues[Args[Idx]] = AI;
  }
  
  return F;
}

Function *FunctionAST::Codegen() {
  NamedValues.clear();
  
  Function *TheFunction = Proto-&gt;Codegen();
  if (TheFunction == 0)
    return 0;
  
  // Create a new basic block to start insertion into.
  BasicBlock *BB = new BasicBlock("entry", TheFunction);
  Builder.SetInsertPoint(BB);
  
  if (Value *RetVal = Body-&gt;Codegen()) {
    // Finish off the function.
    Builder.CreateRet(RetVal);
    
    // Validate the generated code, checking for consistency.
    verifyFunction(*TheFunction);
    
    // Optimize the function.
    TheFPM-&gt;run(*TheFunction);
    
    return TheFunction;
  }
  
  // Error reading body, remove function.
  TheFunction-&gt;eraseFromParent();
  return 0;
}

//===----------------------------------------------------------------------===//
// Top-Level parsing and JIT Driver
//===----------------------------------------------------------------------===//

static ExecutionEngine *TheExecutionEngine;

static void HandleDefinition() {
  if (FunctionAST *F = ParseDefinition()) {
    if (Function *LF = F-&gt;Codegen()) {
      fprintf(stderr, "Read function definition:");
      LF-&gt;dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

static void HandleExtern() {
  if (PrototypeAST *P = ParseExtern()) {
    if (Function *F = P-&gt;Codegen()) {
      fprintf(stderr, "Read extern: ");
      F-&gt;dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

static void HandleTopLevelExpression() {
  // Evaluate a top level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F-&gt;Codegen()) {
      // JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
      
      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

/// top ::= definition | external | expression | ';'
static void MainLoop() {
  while (1) {
    fprintf(stderr, "ready&gt; ");
    switch (CurTok) {
    case tok_eof:    return;
    case ';':        getNextToken(); break;  // ignore top level semicolons.
    case tok_def:    HandleDefinition(); break;
    case tok_extern: HandleExtern(); break;
    default:         HandleTopLevelExpression(); break;
    }
  }
}


//===----------------------------------------------------------------------===//
// "Library" functions that can be "extern'd" from user code.
//===----------------------------------------------------------------------===//

/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}

//===----------------------------------------------------------------------===//
// Main driver code.
//===----------------------------------------------------------------------===//

int main() {
  // Install standard binary operators.
  // 1 is lowest precedence.
  BinopPrecedence['&lt;'] = 10;
  BinopPrecedence['+'] = 20;
  BinopPrecedence['-'] = 20;
  BinopPrecedence['*'] = 40;  // highest.
  
  // Prime the first token.
  fprintf(stderr, "ready&gt; ");
  getNextToken();
  
  // Make the module, which holds all the code.
  TheModule = new Module("my cool jit");
  
  // Create the JIT.
  TheExecutionEngine = ExecutionEngine::create(TheModule);
  
  {
    ExistingModuleProvider OurModuleProvider(TheModule);
    FunctionPassManager OurFPM(&amp;OurModuleProvider);
    
    // Set up the optimizer pipeline.  Start with registering info about how the
    // target lays out data structures.
    OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
    // Do simple "peephole" optimizations and bit-twiddling optzns.
    OurFPM.add(createInstructionCombiningPass());
    // Reassociate expressions.
    OurFPM.add(createReassociatePass());
    // Eliminate Common SubExpressions.
    OurFPM.add(createGVNPass());
    // Simplify the control flow graph (deleting unreachable blocks, etc).
    OurFPM.add(createCFGSimplificationPass());
    
    // Set the global so the code gen can use this.
    TheFPM = &amp;OurFPM;
    
    // Run the main "interpreter loop" now.
    MainLoop();
    
    TheFPM = 0;
  }  // Free module provider and pass manager.
  
  // Print out all of the generated code.
  TheModule-&gt;dump();
  return 0;
}
</pre>
</div>

</div>

<!-- *********************************************************************** -->
<hr>
<address>
  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
  src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
  <a href="http://validator.w3.org/check/referer"><img
  src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>

  <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
  <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
</address>
</body>
</html>