<html>
<head>
<title>"clang" CFE Internals Manual</title>
<link type="text/css" rel="stylesheet" href="../menu.css" />
<link type="text/css" rel="stylesheet" href="../content.css" />
</head>
<body>
<!--#include virtual="../menu.html.incl"-->
<div id="content">
<h1>"clang" CFE Internals Manual</h1>
<ul>
<li><a href="#intro">Introduction</a></li>
<li><a href="#libsystem">LLVM System and Support Libraries</a></li>
<li><a href="#libbasic">The clang 'Basic' Library</a>
<ul>
<li><a href="#SourceLocation">The SourceLocation and SourceManager
classes</a></li>
</ul>
</li>
<li><a href="#liblex">The Lexer and Preprocessor Library</a>
<ul>
<li><a href="#Token">The Token class</a></li>
<li><a href="#Lexer">The Lexer class</a></li>
<li><a href="#TokenLexer">The TokenLexer class</a></li>
<li><a href="#MultipleIncludeOpt">The MultipleIncludeOpt class</a></li>
</ul>
</li>
<li><a href="#libparse">The Parser Library</a>
<ul>
</ul>
</li>
<li><a href="#libast">The AST Library</a>
<ul>
<li><a href="#Type">The Type class and its subclasses</a></li>
<li><a href="#QualType">The QualType class</a></li>
<li><a href="#CFG">The CFG class</a></li>
</ul>
</li>
</ul>
<!-- ======================================================================= -->
<h2 id="intro">Introduction</h2>
<!-- ======================================================================= -->
<p>This document describes some of the more important APIs and internal design
decisions made in the clang C front-end.  Its purpose is to capture this
high-level information and to explain the design decisions behind it.  This is
meant for people interested in hacking on clang, not for end-users.  The
description below is organized by library and does not describe any of the
clients of the libraries.</p>
<!-- ======================================================================= -->
<h2 id="libsystem">LLVM System and Support Libraries</h2>
<!-- ======================================================================= -->
<p>The LLVM libsystem library provides the basic clang system abstraction layer,
which is used for file system access. The LLVM libsupport library provides many
underlying libraries and <a
href="http://llvm.org/docs/ProgrammersManual.html">data-structures</a>,
including command line option
processing and various containers.</p>
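<p>As a small, hedged illustration (this is not code from the clang tree, and the
option name is made up for the example), the sketch below uses two commonly used
pieces of libsupport: the <tt>cl::opt</tt> command line facility and the
<tt>SmallVector</tt> container:</p>
<code>
#include "llvm/Support/CommandLine.h"<br>
#include "llvm/ADT/SmallVector.h"<br>
<br>
<i>// Hypothetical flag, declared once at file scope.</i><br>
static llvm::cl::opt&lt;bool&gt; Verbose("verbose", llvm::cl::desc("Print extra output"));<br>
<br>
int main(int argc, char **argv) {<br>
&nbsp;&nbsp;llvm::cl::ParseCommandLineOptions(argc, argv);<br>
&nbsp;&nbsp;llvm::SmallVector&lt;int, 8&gt; Vals;&nbsp;&nbsp;<i>// the first 8 elements are stored inline, no heap traffic</i><br>
&nbsp;&nbsp;Vals.push_back(42);<br>
&nbsp;&nbsp;return Verbose &amp;&amp; Vals.empty();<br>
}
</code>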
<!-- ======================================================================= -->
<h2 id="libbasic">The clang 'Basic' Library</h2>
<!-- ======================================================================= -->
<p>This library certainly needs a better name.  The 'basic' library contains a
number of low-level utilities for tracking and manipulating source buffers,
locations within the source buffers, diagnostics, tokens, target abstraction,
and information about the language dialect being compiled.</p>
<p>Part of this infrastructure is specific to C (such as the TargetInfo class),
while other parts could be reused for other non-C-based languages
(SourceLocation, SourceManager, Diagnostics, FileManager).  If and when there
is future demand, we can figure out whether it makes sense to introduce a new
library, move the general classes somewhere else, or adopt some other
solution.</p>
<p>We describe the roles of these classes in order of their dependencies.</p>
<!-- ======================================================================= -->
<h3 id="SourceLocation">The SourceLocation and SourceManager classes</h3>
<!-- ======================================================================= -->
<p>Strangely enough, the SourceLocation class represents a location within the
source code of the program. Important design points include:</p>
<ol>
<li>sizeof(SourceLocation) must be extremely small, as these are embedded into
many AST nodes and are passed around often. Currently it is 32 bits.</li>
<li>SourceLocation must be a simple value object that can be efficiently
copied.</li>
<li>We should be able to represent a source location for any byte of any input
file. This includes in the middle of tokens, in whitespace, in trigraphs,
etc.</li>
<li>A SourceLocation must encode the current #include stack that was active when
the location was processed. For example, if the location corresponds to a
token, it should contain the set of #includes active when the token was
lexed. This allows us to print the #include stack for a diagnostic.</li>
<li>SourceLocation must be able to describe macro expansions, capturing both
the ultimate instantiation point and the source of the original character
data.</li>
</ol>
<p>In practice, SourceLocation works together with the SourceManager class
to encode two pieces of information about a location: its physical location
and its virtual location.  For most tokens, these will be the same.  However,
for a macro expansion (or tokens that came from a _Pragma directive) these will
describe the location of the characters corresponding to the token and the
location where the token was used (i.e. the macro instantiation point or the
location of the _Pragma itself).</p>
<p>For efficiency, we only track one level of macro instantiation: if a token was
produced by multiple instantiations, we only track the source and the ultimate
destination.  Though we could track the intermediate instantiation points, this
would require extra bookkeeping and no known client would benefit substantially
from it.</p>
<p>The clang front-end inherently depends on the location of a token being
tracked correctly. If it is ever incorrect, the front-end may get confused and
die. The reason for this is that the notion of the 'spelling' of a Token in
clang depends on being able to find the original input characters for the token.
This concept maps directly to the "physical" location for the token.</p>
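<p>As a hedged sketch of how a client recovers both pieces of information, the
fragment below asks the SourceManager for each location of a token that came out
of a macro expansion.  The method names follow current clang trees and are an
assumption here; they correspond to the "physical" and "virtual" locations
described above:</p>
<code>
<i>// Loc is the SourceLocation of some token; SM is the SourceManager.</i><br>
clang::SourceLocation CharLoc = SM.getSpellingLoc(Loc);&nbsp;&nbsp;<i>// original character data</i><br>
clang::SourceLocation UseLoc = SM.getExpansionLoc(Loc);&nbsp;&nbsp;<i>// macro instantiation point</i><br>
CharLoc.dump(SM);&nbsp;&nbsp;<i>// prints the physical location to standard error</i><br>
UseLoc.dump(SM);&nbsp;&nbsp;&nbsp;<i>// prints the virtual location to standard error</i>
</code>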
<!-- ======================================================================= -->
<h2 id="liblex">The Lexer and Preprocessor Library</h2>
<!-- ======================================================================= -->
<p>The Lexer library contains several tightly-connected classes that are involved
with the nasty process of lexing and preprocessing C source code. The main
interface to this library for outside clients is the large <a
href="#Preprocessor">Preprocessor</a> class.
It contains the various pieces of state that are required to coherently read
tokens out of a translation unit.</p>
<p>The core interface to the Preprocessor object (once it is set up) is the
Preprocessor::Lex method, which returns the next <a href="#Token">Token</a> from
the preprocessor stream. There are two types of token providers that the
preprocessor is capable of reading from: a buffer lexer (provided by the <a
href="#Lexer">Lexer</a> class) and a buffered token stream (provided by the <a
href="#TokenLexer">TokenLexer</a> class).
<!-- ======================================================================= -->
<h3 id="Token">The Token class</h3>
<!-- ======================================================================= -->
<p>The Token class is used to represent a single lexed token.  Tokens are
intended to be used by the lexer/preprocessor and parser libraries, but are not
intended to live beyond them (for example, they should not live in the ASTs).</p>
<p>Tokens most often live on the stack (or some other location that is efficient
to access) as the parser is running, but occasionally do get buffered up. For
example, macro definitions are stored as a series of tokens, and the C++
front-end will eventually need to buffer tokens up for tentative parsing and
various pieces of look-ahead.  As such, the size of a Token matters.  On a
32-bit system, sizeof(Token) is currently 16 bytes.</p>
<p>Tokens contain the following information:</p>
<ul>
<li><b>A SourceLocation</b> - This indicates the location of the start of the
token.</li>
<li><b>A length</b> - This stores the length of the token as stored in the
SourceBuffer. For tokens that include them, this length includes trigraphs and
escaped newlines which are ignored by later phases of the compiler. By pointing
into the original source buffer, it is always possible to get the original
spelling of a token completely accurately.</li>
<li><b>IdentifierInfo</b> - If a token takes the form of an identifier, and if
identifier lookup was enabled when the token was lexed (e.g. the lexer was not
reading in 'raw' mode) this contains a pointer to the unique hash value for the
identifier. Because the lookup happens before keyword identification, this
field is set even for language keywords like 'for'.</li>
<li><b>TokenKind</b> - This indicates the kind of token as classified by the
lexer. This includes things like <tt>tok::starequal</tt> (for the "*="
operator), <tt>tok::ampamp</tt> for the "&amp;&amp;" token, and keyword values
(e.g. <tt>tok::kw_for</tt>) for identifiers that correspond to keywords. Note
that some tokens can be spelled multiple ways. For example, C++ supports
"operator keywords", where things like "and" are treated exactly like the
"&amp;&amp;" operator. In these cases, the kind value is set to
<tt>tok::ampamp</tt>, which is good for the parser, which doesn't have to
consider both forms. For something that cares about which form is used (e.g.
the preprocessor 'stringize' operator) the spelling indicates the original
form.</li>
<li><b>Flags</b> - There are currently four flags tracked by the
lexer/preprocessor system on a per-token basis:
<ol>
<li><b>StartOfLine</b> - This was the first token that occurred on its input
source line.</li>
<li><b>LeadingSpace</b> - There was a space character either immediately
before the token or transitively before the token as it was expanded
through a macro.  The definition of this flag is closely tied to the
stringizing requirements of the preprocessor.</li>
<li><b>DisableExpand</b> - This flag is used internally to the preprocessor to
represent identifier tokens which have macro expansion disabled. This
prevents them from being considered as candidates for macro expansion ever
in the future.</li>
<li><b>NeedsCleaning</b> - This flag is set if the original spelling for the
token includes a trigraph or escaped newline.  Since this is uncommon,
many pieces of code can fast-path on tokens that did not need cleaning.</li>
</ol>
</li>
</ul>
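<p>The information above is exposed through accessors on Token.  As a hedged
sketch (the method names are taken from the Token class in the clang tree and
may drift over time), a client might query a token like this:</p>
<code>
void inspect(const clang::Token &amp;Tok) {<br>
&nbsp;&nbsp;clang::SourceLocation Loc = Tok.getLocation();&nbsp;&nbsp;<i>// start of the token</i><br>
&nbsp;&nbsp;unsigned Len = Tok.getLength();&nbsp;&nbsp;<i>// length in the source buffer</i><br>
&nbsp;&nbsp;clang::IdentifierInfo *II = Tok.getIdentifierInfo();&nbsp;&nbsp;<i>// null unless identifier-shaped</i><br>
&nbsp;&nbsp;bool IsForKeyword = Tok.is(clang::tok::kw_for);&nbsp;&nbsp;<i>// TokenKind query</i><br>
&nbsp;&nbsp;bool AtBOL = Tok.isAtStartOfLine();&nbsp;&nbsp;<i>// StartOfLine flag</i><br>
&nbsp;&nbsp;bool Space = Tok.hasLeadingSpace();&nbsp;&nbsp;<i>// LeadingSpace flag</i><br>
&nbsp;&nbsp;bool Dirty = Tok.needsCleaning();&nbsp;&nbsp;<i>// NeedsCleaning flag</i><br>
}
</code>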
<p>One interesting (and somewhat unusual) aspect of tokens is that they don't
contain any semantic information about the lexed value. For example, if the
token was a pp-number token, we do not represent the value of the number that
was lexed (this is left for later pieces of code to decide). Additionally, the
lexer library has no notion of typedef names vs variable names: both are
returned as identifiers, and the parser is left to decide whether a specific
identifier is a typedef or a variable (tracking this requires scope information
among other things).</p>
<!-- ======================================================================= -->
<h3 id="Lexer">The Lexer class</h3>
<!-- ======================================================================= -->
<p>The Lexer class provides the mechanics of lexing tokens out of a source
buffer and deciding what they mean. The Lexer is complicated by the fact that
it operates on raw buffers that have not had spelling eliminated (this is a
necessity to get decent performance), but this is countered with careful coding
as well as standard performance techniques (for example, the comment handling
code is vectorized on X86 and PowerPC hosts).</p>
<p>The lexer has a couple of interesting modal features:</p>
<ul>
<li>The lexer can operate in 'raw' mode. This mode has several features that
make it possible to quickly lex the file (e.g. it stops identifier lookup,
doesn't specially handle preprocessor tokens, handles EOF differently, etc).
This mode is used for lexing within an "<tt>#if 0</tt>" block, for
example.</li>
<li>The lexer can capture and return comments as tokens.  This is required to
support the -C preprocessor mode, which passes comments through, and is
used by the diagnostic checker to identify expected-error annotations.</li>
<li>The lexer can be in ParsingFilename mode, which is entered when the
preprocessor is lexing the filename after a #include directive.  This mode
changes the lexing of '&lt;' to return an "angled string" instead of a bunch
of tokens for each thing within the filename.</li>
<li>When parsing a preprocessor directive (after "<tt>#</tt>") the
ParsingPreprocessorDirective mode is entered.  This causes the lexer to
return an EOM token at the terminating newline.</li>
<li>The Lexer uses a LangOptions object to know whether trigraphs are enabled,
whether C++ or ObjC keywords are recognized, etc.</li>
</ul>
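<p>To make the raw mode concrete, here is a hedged sketch of lexing a character
buffer directly, without a Preprocessor.  The Lexer constructor signature
follows recent clang trees and is an assumption, and <tt>FileLoc</tt>,
<tt>LangOpts</tt>, <tt>BufStart</tt> and <tt>BufEnd</tt> are assumed to be set
up elsewhere:</p>
<code>
clang::Lexer RawLex(FileLoc, LangOpts, BufStart, BufStart, BufEnd);<br>
clang::Token Tok;<br>
<i>// LexFromRawLexer returns true once the end of the buffer has been reached.</i><br>
while (!RawLex.LexFromRawLexer(Tok)) {<br>
&nbsp;&nbsp;<i>// Tok has a kind, location and length, but no IdentifierInfo, and no</i><br>
&nbsp;&nbsp;<i>// macro expansion or directive handling has been performed.</i><br>
}
</code>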
<p>In addition to these modes, the lexer keeps track of a couple of other
features that are local to a lexed buffer, which change as the buffer is
lexed:</p>
<ul>
<li>The Lexer uses BufferPtr to keep track of the current character being
lexed.</li>
<li>The Lexer uses IsAtStartOfLine to keep track of whether the next lexed token
will start with its "start of line" bit set.</li>
<li>The Lexer keeps track of the current #if directives that are active (which
can be nested).</li>
<li>The Lexer keeps track of an <a href="#MultipleIncludeOpt">
MultipleIncludeOpt</a> object, which is used to
detect whether the buffer uses the standard "<tt>#ifndef XX</tt> /
<tt>#define XX</tt>" idiom to prevent multiple inclusion. If a buffer does,
subsequent includes can be ignored if the XX macro is defined.</li>
</ul>
<!-- ======================================================================= -->
<h3 id="TokenLexer">The TokenLexer class</h3>
<!-- ======================================================================= -->
<p>The TokenLexer class is a token provider that returns tokens from a list
of tokens that came from somewhere else.  It is typically used for two things: 1)
returning tokens from a macro definition as it is being expanded, and 2) returning
tokens from an arbitrary buffer of tokens.  The latter is used by _Pragma and
will most likely be used to handle unbounded look-ahead for the C++ parser.</p>
<!-- ======================================================================= -->
<h3 id="MultipleIncludeOpt">The MultipleIncludeOpt class</h3>
<!-- ======================================================================= -->
<p>The MultipleIncludeOpt class implements a really simple little state machine
that is used to detect the standard "<tt>#ifndef XX</tt> / <tt>#define XX</tt>"
idiom that people typically use to prevent multiple inclusion of headers. If a
buffer uses this idiom and is subsequently #include'd, the preprocessor can
simply check to see whether the guarding condition is defined or not. If so,
the preprocessor can completely ignore the include of the header.</p>
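<p>For reference, the shape the state machine looks for is just the ordinary
include guard pattern (the macro name <tt>MYHEADER_H</tt> is arbitrary):</p>
<code>
#ifndef MYHEADER_H<br>
#define MYHEADER_H<br>
<i>// ... entire contents of the header ...</i><br>
#endif
</code>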
<!-- ======================================================================= -->
<h2 id="libparse">The Parser Library</h2>
<!-- ======================================================================= -->
<!-- ======================================================================= -->
<h2 id="libast">The AST Library</h2>
<!-- ======================================================================= -->
<!-- ======================================================================= -->
<h3 id="Type">The Type class and its subclasses</h3>
<!-- ======================================================================= -->
<p>The Type class (and its subclasses) is an important part of the AST.  Types
are accessed through the ASTContext class, which implicitly creates and uniques
them as they are needed.  Types have a couple of non-obvious features: 1) they
do not capture type qualifiers like const or volatile (see
<a href="#QualType">QualType</a>), and 2) they implicitly capture typedef
information.  Once created, types are immutable (unlike decls).</p>
<p>Typedefs in C make semantic analysis a bit more complex than it would
be without them. The issue is that we want to capture typedef information
and represent it in the AST perfectly, but the semantics of operations need to
"see through" typedefs. For example, consider this code:</p>
<code>
void func() {<br>
&nbsp;&nbsp;typedef int foo;<br>
&nbsp;&nbsp;foo X, *Y;<br>
&nbsp;&nbsp;typedef foo* bar;<br>
&nbsp;&nbsp;bar Z;<br>
&nbsp;&nbsp;*X; <i>// error</i><br>
&nbsp;&nbsp;**Y; <i>// error</i><br>
&nbsp;&nbsp;**Z; <i>// error</i><br>
}<br>
</code>
<p>The code above is illegal, and thus we expect there to be diagnostics emitted
on the annotated lines. In this example, we expect to get:</p>
<pre>
<b>test.c:6:1: error: indirection requires pointer operand ('foo' invalid)</b>
*X; // error
<font color="blue">^~</font>
<b>test.c:7:1: error: indirection requires pointer operand ('foo' invalid)</b>
**Y; // error
<font color="blue">^~~</font>
<b>test.c:8:1: error: indirection requires pointer operand ('foo' invalid)</b>
**Z; // error
<font color="blue">^~~</font>
</pre>
<p>While this example is somewhat silly, it illustrates the point: we want to
retain typedef information where possible, so that we can emit errors about
"<tt>std::string</tt>" instead of "<tt>std::basic_string&lt;char, std:...</tt>".
Doing this requires properly keeping typedef information (for example, the type
of "X" is "foo", not "int"), and requires properly propagating it through the
various operators (for example, the type of *Y is "foo", not "int"). In order
to retain this information, the type of these expressions is an instance of the
TypedefType class, which indicates that the type of these expressions is a
typedef for foo.
</p>
<p>Representing types like this is great for diagnostics, because the
user-specified type is always immediately available.  There are two problems
with this: first, various semantic checks need to make judgements about the
<em>actual structure</em> of a type, ignoring typedefs.  Second, we need an
efficient way to query whether two types are structurally identical to each
other, ignoring typedefs.  The solution to both of these problems is the idea of
canonical types.</p>
<h4>Canonical Types</h4>
<p>Every instance of the Type class contains a canonical type pointer. For
simple types with no typedefs involved (e.g. "<tt>int</tt>", "<tt>int*</tt>",
"<tt>int**</tt>"), the type just points to itself. For types that have a
typedef somewhere in their structure (e.g. "<tt>foo</tt>", "<tt>foo*</tt>",
"<tt>foo**</tt>", "<tt>bar</tt>"), the canonical type pointer points to their
structurally equivalent type without any typedefs (e.g. "<tt>int</tt>",
"<tt>int*</tt>", "<tt>int**</tt>", and "<tt>int*</tt>" respectively).</p>
<p>This design provides a constant time operation (dereferencing the canonical
type pointer) that gives us access to the structure of types. For example,
we can trivially tell that "bar" and "foo*" are the same type by dereferencing
their canonical type pointers and doing a pointer comparison (they both point
to the single "<tt>int*</tt>" type).</p>
<p>Canonical types and typedef types bring up some complexities that must be
carefully managed. Specifically, the "isa/cast/dyncast" operators generally
shouldn't be used in code that is inspecting the AST. For example, when type
checking the indirection operator (unary '*' on a pointer), the type checker
must verify that the operand has a pointer type. It would not be correct to
check that with "<tt>isa&lt;PointerType&gt;(SubExpr-&gt;getType())</tt>",
because this predicate would fail if the subexpression had a typedef type.</p>
<p>The solution to this problem is a set of helper methods on Type, used to
check their properties.  In this case, it would be correct to use
"<tt>SubExpr-&gt;getType()-&gt;isPointerType()</tt>" to do the check.  This
predicate will return true if the <em>canonical type is a pointer</em>, which is
true any time the type is structurally a pointer type.  The only hard part here
is remembering not to use the <tt>isa/cast/dyncast</tt> operations.</p>
<p>The second problem we face is how to get access to the pointer type once we
know it exists. To continue the example, the result type of the indirection
operator is the pointee type of the subexpression. In order to determine the
type, we need to get the instance of PointerType that best captures the typedef
information in the program. If the type of the expression is literally a
PointerType, we can return that, otherwise we have to dig through the
typedefs to find the pointer type. For example, if the subexpression had type
"<tt>foo*</tt>", we could return that type as the result. If the subexpression
had type "<tt>bar</tt>", we want to return "<tt>foo*</tt>" (note that we do
<em>not</em> want "<tt>int*</tt>"). In order to provide all of this, Type has
a getAsPointerType() method that checks whether the type is structurally a
PointerType and, if so, returns the best one. If not, it returns a null
pointer.</p>
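<p>Putting the two pieces together, a hedged sketch of the indirection check
described above might look like the following, where <tt>SubExpr</tt> is assumed
to be the already-analyzed operand expression:</p>
<code>
clang::QualType OpTy = SubExpr-&gt;getType();<br>
if (!OpTy-&gt;isPointerType()) {<br>
&nbsp;&nbsp;<i>// ... emit "indirection requires pointer operand" and bail out ...</i><br>
}<br>
<i>// The best PointerType, preserving typedef information where possible.</i><br>
const clang::PointerType *PT = OpTy-&gt;getAsPointerType();<br>
clang::QualType ResultTy = PT-&gt;getPointeeType();&nbsp;&nbsp;<i>// "foo" rather than "int" for a "bar" operand</i>
</code>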
<p>This structure is somewhat mystical, but after meditating on it, it will
make sense to you :).</p>
<!-- ======================================================================= -->
<h3 id="QualType">The QualType class</h3>
<!-- ======================================================================= -->
<p>The QualType class is designed as a trivial value class that is small,
passed by value, and efficient to query.  The idea of QualType is that it
stores the type qualifiers (const, volatile, restrict) separately from the types
themselves: QualType is conceptually a pair of "Type*" and bits for the type
qualifiers.</p>
<p>By storing the type qualifiers as bits in the conceptual pair, it is
extremely efficient to get the set of qualifiers on a QualType (just return the
field of the pair), add a type qualifier (which is a trivial constant-time
operation that sets a bit), and remove one or more type qualifiers (just return
a QualType with the bitfield set to empty).</p>
<p>Further, because the bits are stored outside of the type itself, we do not
need to create duplicates of types with different sets of qualifiers (i.e. there
is only a single heap allocated "int" type: "const int" and "volatile const int"
both point to the same heap allocated "int" type).  This reduces the heap space
used to represent types and also means we do not have to consider qualifiers when
uniquing types (<a href="#Type">Type</a> does not even contain qualifiers).</p>
<p>In practice, on hosts where it is safe, the 3 type qualifiers are stored in
the low bits of the pointer to the Type object.  This means that QualType is
exactly the same size as a pointer, and this works fine on any system where
malloc'd objects are at least 8 byte aligned.</p>
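<p>A hedged sketch of the conceptual pair in use (the accessor names are taken
from QualType in the clang tree; <tt>E</tt> is an assumed expression):</p>
<code>
clang::QualType QT = E-&gt;getType();<br>
bool IsConst = QT.isConstQualified();&nbsp;&nbsp;<i>// read a qualifier bit</i><br>
bool IsVolatile = QT.isVolatileQualified();<br>
clang::QualType Bare = QT.getUnqualifiedType();&nbsp;&nbsp;<i>// same Type*, empty qualifier bits</i><br>
const clang::Type *T = QT.getTypePtr();&nbsp;&nbsp;<i>// the underlying Type object itself</i>
</code>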
<!-- ======================================================================= -->
<h3 id="CFG">The <tt>CFG</tt> class</h3>
<!-- ======================================================================= -->
<p>The <tt>CFG</tt> class is designed to represent a source-level
control-flow graph for a single statement (<tt>Stmt*</tt>). Typically
instances of <tt>CFG</tt> are constructed for function bodies (usually
an instance of <tt>CompoundStmt</tt>), but can also be instantiated to
represent the control-flow of any class that subclasses <tt>Stmt</tt>,
which includes simple expressions. Control-flow graphs are especially
useful for performing
<a href="http://en.wikipedia.org/wiki/Data_flow_analysis#Sensitivities">flow-
or path-sensitive</a> program analyses on a given function.</p>
<h4>Basic Blocks</h4>
<p>Concretely, an instance of <tt>CFG</tt> is a collection of basic
blocks. Each basic block is an instance of <tt>CFGBlock</tt>, which
simply contains an ordered sequence of <tt>Stmt*</tt> (each referring
to statements in the AST). The ordering of statements within a block
indicates unconditional flow of control from one statement to the
next. <a href="#ConditionalControlFlow">Conditional control-flow</a>
is represented using edges between basic blocks. The statements
within a given <tt>CFGBlock</tt> can be traversed using
the <tt>CFGBlock::*iterator</tt> interface.</p>
<p>
A <tt>CFG</tt> object owns the instances of <tt>CFGBlock</tt> within
the control-flow graph it represents. Each <tt>CFGBlock</tt> within a
CFG is also uniquely numbered (accessible
via <tt>CFGBlock::getBlockID()</tt>).  Currently the numbering is
based on the order in which the blocks were created, but no assumptions
should be made about how <tt>CFGBlock</tt>s are numbered other than that their
numbers are unique and that they are numbered from 0..N-1 (where N is
the number of basic blocks in the CFG).</p>
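<p>A hedged sketch of walking the blocks of a CFG and reading their IDs,
assuming a <tt>CFG *cfg</tt> built as shown later in this section (the exact
iterator and element types have shifted across clang revisions):</p>
<code>
for (clang::CFG::iterator I = cfg-&gt;begin(), E = cfg-&gt;end(); I != E; ++I) {<br>
&nbsp;&nbsp;clang::CFGBlock *B = *I;&nbsp;&nbsp;<i>// assumes the iterator yields CFGBlock*</i><br>
&nbsp;&nbsp;unsigned ID = B-&gt;getBlockID();&nbsp;&nbsp;<i>// unique, in the range 0..N-1</i><br>
&nbsp;&nbsp;<i>// B-&gt;begin()/B-&gt;end() walk the block's statements in order.</i><br>
}
</code>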
<h4>Entry and Exit Blocks</h4>
<p>Each instance of <tt>CFG</tt> contains two special blocks:
an <i>entry</i> block (accessible via <tt>CFG::getEntry()</tt>), which
has no incoming edges, and an <i>exit</i> block (accessible
via <tt>CFG::getExit()</tt>), which has no outgoing edges.  Neither
block contains any statements, and they serve the role of providing a
clear entrance and exit for a body of code such as a function body.
The presence of these empty blocks greatly simplifies the
implementation of many analyses built on top of CFGs.</p>
<h4 id ="ConditionalControlFlow">Conditional Control-Flow</h4>
<p>Conditional control-flow (such as that induced by if-statements
and loops) is represented as edges between <tt>CFGBlock</tt>s.
Because different C language constructs can induce control-flow,
each <tt>CFGBlock</tt> also records an extra <tt>Stmt*</tt> that
represents the <i>terminator</i> of the block. A terminator is simply
the statement that caused the control-flow, and is used to identify
the nature of the conditional control-flow between blocks. For
example, in the case of an if-statement, the terminator refers to
the <tt>IfStmt</tt> object in the AST that represented the given
branch.</p>
<p>To illustrate, consider the following code example:</p>
<code>
int foo(int x) {<br>
&nbsp;&nbsp;x = x + 1;<br>
<br>
&nbsp;&nbsp;if (x > 2) x++;<br>
&nbsp;&nbsp;else {<br>
&nbsp;&nbsp;&nbsp;&nbsp;x += 2;<br>
&nbsp;&nbsp;&nbsp;&nbsp;x *= 2;<br>
&nbsp;&nbsp;}<br>
<br>
&nbsp;&nbsp;return x;<br>
}
</code>
<p>After invoking the parser+semantic analyzer on this code fragment,
the AST of the body of <tt>foo</tt> is referenced by a
single <tt>Stmt*</tt>. We can then construct an instance
of <tt>CFG</tt> representing the control-flow graph of this function
body by a single call to a static class method:</p>
<code>
&nbsp;&nbsp;Stmt* FooBody = ...<br>
&nbsp;&nbsp;CFG* FooCFG = <b>CFG::buildCFG</b>(FooBody);
</code>
<p>It is the responsibility of the caller of <tt>CFG::buildCFG</tt>
to <tt>delete</tt> the returned <tt>CFG*</tt> when the CFG is no
longer needed.</p>
<p>Along with providing an interface to iterate over
its <tt>CFGBlock</tt>s, the <tt>CFG</tt> class also provides methods
that are useful for debugging and visualizing CFGs. For example, the
method
<tt>CFG::dump()</tt> dumps a pretty-printed version of the CFG to
standard error. This is especially useful when one is using a
debugger such as gdb. For example, here is the output
of <tt>FooCFG->dump()</tt>:</p>
<code>
&nbsp;[ B5 (ENTRY) ]<br>
&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (0):<br>
&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B4<br>
<br>
&nbsp;[ B4 ]<br>
&nbsp;&nbsp;&nbsp;&nbsp;1: x = x + 1<br>
&nbsp;&nbsp;&nbsp;&nbsp;2: (x > 2)<br>
&nbsp;&nbsp;&nbsp;&nbsp;<b>T: if [B4.2]</b><br>
&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B5<br>
&nbsp;&nbsp;&nbsp;&nbsp;Successors (2): B3 B2<br>
<br>
&nbsp;[ B3 ]<br>
&nbsp;&nbsp;&nbsp;&nbsp;1: x++<br>
&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B4<br>
&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B1<br>
<br>
&nbsp;[ B2 ]<br>
&nbsp;&nbsp;&nbsp;&nbsp;1: x += 2<br>
&nbsp;&nbsp;&nbsp;&nbsp;2: x *= 2<br>
&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B4<br>
&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B1<br>
<br>
&nbsp;[ B1 ]<br>
&nbsp;&nbsp;&nbsp;&nbsp;1: return x;<br>
&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (2): B2 B3<br>
&nbsp;&nbsp;&nbsp;&nbsp;Successors (1): B0<br>
<br>
&nbsp;[ B0 (EXIT) ]<br>
&nbsp;&nbsp;&nbsp;&nbsp;Predecessors (1): B1<br>
&nbsp;&nbsp;&nbsp;&nbsp;Successors (0):
</code>
<p>For each block, the pretty-printed output displays the number
of <i>predecessor</i> blocks (blocks that have outgoing
control-flow into the given block) and <i>successor</i> blocks (blocks
that receive control-flow from the given block).  We can also clearly
see the special entry and exit blocks at the beginning and end of the
pretty-printed output.  For the entry block (block B5), the number of
predecessor blocks is 0, while for the exit block (block B0) the number
of successor blocks is 0.</p>
<p>The most interesting block here is B4, whose outgoing control-flow
represents the branching caused by the sole if-statement
in <tt>foo</tt>. Of particular interest is the second statement in
the block, <b><tt>(x > 2)</tt></b>, and the terminator, printed
as <b><tt>if [B4.2]</tt></b>. The second statement represents the
evaluation of the condition of the if-statement, which occurs before
the actual branching of control-flow. Within the <tt>CFGBlock</tt>
for B4, the <tt>Stmt*</tt> for the second statement refers to the
actual expression in the AST for <b><tt>(x > 2)</tt></b>. Thus
pointers to subclasses of <tt>Expr</tt> can appear in the list of
statements in a block, and not just subclasses of <tt>Stmt</tt> that
refer to proper C statements.</p>
<p>The terminator of block B4 is a pointer to the <tt>IfStmt</tt>
object in the AST. The pretty-printer outputs <b><tt>if
[B4.2]</tt></b> because the condition expression of the if-statement
has an actual place in the basic block, and thus the terminator is
essentially
<i>referring</i> to the expression that is the second statement of
block B4 (i.e., B4.2). In this manner, conditions for control-flow
(which also includes conditions for loops and switch statements) are
hoisted into the actual basic block.</p>
<!--
<h4>Implicit Control-Flow</h4>
-->
<!--
<p>A key design principle of the <tt>CFG</tt> class was to not require
any transformations to the AST in order to represent control-flow.
Thus the <tt>CFG</tt> does not perform any "lowering" of the
statements in an AST: loops are not transformed into guarded gotos,
short-circuit operations are not converted to a set of if-statements,
and so on.</p>
-->
</div>
</body>
</html>