=========================
Compiling CUDA with clang
=========================

.. contents::
  :local:

Introduction
============

This document describes how to compile CUDA code with clang, and gives some
details about LLVM and clang's CUDA implementations.

This document assumes a basic familiarity with CUDA. Information about CUDA
programming can be found in the
`CUDA programming guide
<http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html>`_.

Compiling CUDA Code
===================

Prerequisites
-------------

CUDA is supported in LLVM 3.9, but it's still in active development, so we
recommend you `compile clang/LLVM from HEAD
<http://llvm.org/docs/GettingStarted.html>`_.

Before you build CUDA code, you'll need to have installed the appropriate
driver for your NVIDIA GPU and the CUDA SDK. See `NVIDIA's CUDA installation
guide <https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_
for details. Note that clang `does not support
<https://llvm.org/bugs/show_bug.cgi?id=26966>`_ the CUDA toolkit as installed
by many Linux package managers; you probably need to install NVIDIA's own
package.

You will need CUDA 7.0 or 7.5 to compile with clang. CUDA 8 support is in the
works.

Invoking clang
--------------

Invoking clang for CUDA compilation works similarly to compiling regular C++.
You just need to be aware of a few additional flags.

You can use `this <https://gist.github.com/855e277884eb6b388cd2f00d956c2fd4>`_
program as a toy example. Save it as ``axpy.cu``. (Clang detects that you're
compiling CUDA code by noticing that your filename ends with ``.cu``.
Alternatively, you can pass ``-x cuda``.)

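If you'd rather not download the gist, a minimal ``axpy.cu`` along the
following lines produces the output shown below. (This is an illustrative
sketch and may differ from the linked program.)

.. code-block:: c++

  #include <cstdio>

  __global__ void axpy(float a, float* x, float* y) {
    y[threadIdx.x] = a * x[threadIdx.x] + y[threadIdx.x];
  }

  int main() {
    const int kN = 4;
    float host_x[kN] = {1, 2, 3, 4};
    float host_y[kN] = {0, 0, 0, 0};

    // Allocate device memory and copy the inputs over.
    float* device_x;
    float* device_y;
    cudaMalloc(&device_x, kN * sizeof(float));
    cudaMalloc(&device_y, kN * sizeof(float));
    cudaMemcpy(device_x, host_x, kN * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(device_y, host_y, kN * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one block of kN threads, then wait for the kernel to finish.
    axpy<<<1, kN>>>(2.0f, device_x, device_y);
    cudaDeviceSynchronize();

    // Copy the result back to the host and print it.
    cudaMemcpy(host_y, device_y, kN * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < kN; ++i)
      printf("y[%d] = %g\n", i, host_y[i]);

    cudaFree(device_x);
    cudaFree(device_y);
    return 0;
  }
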
To build and run, run the following commands, filling in the parts in angle
brackets as described below:

.. code-block:: console

  $ clang++ axpy.cu -o axpy --cuda-gpu-arch=<GPU arch> \
      -L<CUDA install path>/<lib64 or lib> \
      -lcudart_static -ldl -lrt -pthread
  $ ./axpy
  y[0] = 2
  y[1] = 4
  y[2] = 6
  y[3] = 8

* ``<CUDA install path>`` -- the directory where you installed the CUDA SDK.
  Typically, ``/usr/local/cuda``.

  Pass e.g. ``-L/usr/local/cuda/lib64`` if compiling in 64-bit mode; otherwise,
  pass e.g. ``-L/usr/local/cuda/lib``. (In CUDA, the device code and host code
  always have the same pointer widths, so if you're compiling 64-bit code for
  the host, you're also compiling 64-bit code for the device.)

* ``<GPU arch>`` -- the `compute capability
  <https://developer.nvidia.com/cuda-gpus>`_ of your GPU. For example, if you
  want to run your program on a GPU with compute capability of 3.5, specify
  ``--cuda-gpu-arch=sm_35``.

  Note: You cannot pass ``compute_XX`` as an argument to ``--cuda-gpu-arch``;
  only ``sm_XX`` is currently supported. However, clang always includes PTX in
  its binaries, so e.g. a binary compiled with ``--cuda-gpu-arch=sm_30`` would
  be forwards-compatible with e.g. ``sm_35`` GPUs.

  You can pass ``--cuda-gpu-arch`` multiple times to compile for multiple archs.

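Putting the pieces together, a concrete invocation on a machine with the SDK in
``/usr/local/cuda`` and a compute-capability-3.5 GPU might look like this (the
path and arch here are just examples; adjust them for your system):

.. code-block:: console

  $ clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
      -L/usr/local/cuda/lib64 \
      -lcudart_static -ldl -lrt -pthread

To also embed code for a second architecture, add another flag, e.g.
``--cuda-gpu-arch=sm_30``.
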
The ``-L`` and ``-l`` flags only need to be passed when linking. When
compiling, you may also need to pass ``--cuda-path=/path/to/cuda`` if you
didn't install the CUDA SDK into ``/usr/local/cuda``, ``/usr/local/cuda-7.0``,
or ``/usr/local/cuda-7.5``.

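For instance, if the SDK lives in a nonstandard location, compiling and linking
as separate steps might look like the following (the install path is purely
illustrative):

.. code-block:: console

  $ clang++ -c axpy.cu -o axpy.o --cuda-gpu-arch=sm_35 \
      --cuda-path=/opt/cuda-7.5
  $ clang++ axpy.o -o axpy -L/opt/cuda-7.5/lib64 \
      -lcudart_static -ldl -lrt -pthread
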
Flags that control numerical code
---------------------------------

If you're using GPUs, you probably care about making numerical code run fast.
GPU hardware allows for more control over numerical operations than most CPUs,
but this results in more compiler options for you to juggle.

Flags you may wish to tweak include:

* ``-ffp-contract={on,off,fast}`` (defaults to ``fast`` on host and device when
  compiling CUDA) Controls whether the compiler emits fused multiply-add
  operations.

  * ``off``: never emit fma operations, and prevent ptxas from fusing multiply
    and add instructions.
  * ``on``: fuse multiplies and adds within a single statement, but never
    across statements (C11 semantics). Prevent ptxas from fusing other
    multiplies and adds.
  * ``fast``: fuse multiplies and adds wherever profitable, even across
    statements. Doesn't prevent ptxas from fusing additional multiplies and
    adds.

  Fused multiply-add instructions can be much faster than the unfused
  equivalents, but because the intermediate result in an fma is not rounded,
  this flag can affect numerical code. (See the example after this list.)

* ``-fcuda-flush-denormals-to-zero`` (default: off) When this is enabled,
  floating point operations may flush `denormal
  <https://en.wikipedia.org/wiki/Denormal_number>`_ inputs and/or outputs to 0.
  Operations on denormal numbers are often much slower than the same operations
  on normal numbers.

* ``-fcuda-approx-transcendentals`` (default: off) When this is enabled, the
  compiler may emit calls to faster, approximate versions of transcendental
  functions, instead of using the slower, fully IEEE-compliant versions. For
  example, this flag allows clang to emit the ptx ``sin.approx.f32``
  instruction.

  This is implied by ``-ffast-math``.

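As a small illustration of what ``-ffp-contract`` controls (an explanatory
sketch, not code from the toy program):

.. code-block:: c++

  __device__ float contract_example(float a, float b, float c) {
    // With -ffp-contract=on or -ffp-contract=fast, the multiply and add in
    // this single statement may be fused into one fma instruction.
    float d = a * b + c;

    // With -ffp-contract=on, fusion never crosses statement boundaries, so
    // the two statements below stay separate; with -ffp-contract=fast they
    // may also be fused into an fma.
    float t = a * b;
    float e = t + c;
    return d + e;
  }

These are ordinary clang flags, so you can simply append them to the compile
command shown earlier, e.g.:

.. code-block:: console

  $ clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
      -L/usr/local/cuda/lib64 -lcudart_static -ldl -lrt -pthread \
      -ffp-contract=fast -fcuda-flush-denormals-to-zero \
      -fcuda-approx-transcendentals
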
Detecting clang vs NVCC from code
=================================

Although clang's CUDA implementation is largely compatible with NVCC's, you may
still want to detect when you're compiling CUDA code specifically with clang.

This is tricky, because NVCC may invoke clang as part of its own compilation
process! For example, NVCC uses the host compiler's preprocessor when
compiling for device code, and that host compiler may in fact be clang.

When clang is actually compiling CUDA code -- rather than being used as a
subtool of NVCC's -- it defines the ``__CUDA__`` macro. ``__CUDA_ARCH__`` is
defined only in device mode (but will be defined if NVCC is using clang as a
preprocessor). So you can use the following incantations to detect clang CUDA
compilation, in host and device modes:

.. code-block:: c++

  #if defined(__clang__) && defined(__CUDA__) && !defined(__CUDA_ARCH__)
  // clang compiling CUDA code, host mode.
  #endif

  #if defined(__clang__) && defined(__CUDA__) && defined(__CUDA_ARCH__)
  // clang compiling CUDA code, device mode.
  #endif

Both clang and nvcc define ``__CUDACC__`` during CUDA compilation. You can
detect NVCC specifically by looking for ``__NVCC__``.

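For example, to tell the two CUDA compilers apart (a sketch based on the macros
described above):

.. code-block:: c++

  #if defined(__NVCC__)
  // CUDA compilation driven by nvcc (which may still use clang internally).
  #elif defined(__CUDACC__) && defined(__clang__)
  // CUDA compilation driven by clang itself.
  #endif
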
Optimizations
=============

CPU and GPU have different design philosophies and architectures. For example,
a typical CPU relies on branch prediction, out-of-order execution, and
superscalar execution, whereas a typical GPU has none of these. Due to such
differences, an optimization pipeline well-tuned for CPUs may not be suitable
for GPUs.

LLVM performs several general and CUDA-specific optimizations for GPUs. The
list below shows some of the more important optimizations for GPUs. Most of
them have been upstreamed to ``lib/Transforms/Scalar`` and
``lib/Target/NVPTX``. A few of them have not been upstreamed due to lack of a
customizable target-independent optimization pipeline.

* **Straight-line scalar optimizations**. These optimizations reduce redundancy
  in straight-line code. Details can be found in the `design document for
  straight-line scalar optimizations <https://goo.gl/4Rb9As>`_.

* **Inferring memory spaces**. `This optimization
  <https://github.com/llvm-mirror/llvm/blob/master/lib/Target/NVPTX/NVPTXInferAddressSpaces.cpp>`_
  infers the memory space of an address so that the backend can emit faster
  special loads and stores from it.

* **Aggressive loop unrolling and function inlining**. Loop unrolling and
  function inlining need to be more aggressive for GPUs than for CPUs because
  control flow transfer on a GPU is more expensive. They also promote other
  optimizations, such as constant propagation and SROA, which sometimes speed
  up code by over 10x. An empirical inline threshold for GPUs is 1100. This
  configuration has yet to be upstreamed with a target-specific optimization
  pipeline. LLVM also provides `loop unrolling pragmas
  <http://clang.llvm.org/docs/AttributeReference.html#pragma-unroll-pragma-nounroll>`_
  and ``__attribute__((always_inline))`` for programmers to force unrolling and
  inlining; see the example after this list.

* **Aggressive speculative execution**. `This transformation
  <http://llvm.org/docs/doxygen/html/SpeculativeExecution_8cpp_source.html>`_ is
  mainly for promoting straight-line scalar optimizations, which are most
  effective on code along dominator paths.

* **Memory-space alias analysis**. `This alias analysis
  <http://reviews.llvm.org/D12414>`_ infers that two pointers in different
  special memory spaces do not alias. It has yet to be integrated into the new
  alias analysis infrastructure; the new infrastructure does not run
  target-specific alias analysis.

* **Bypassing 64-bit divides**. `An existing optimization
  <http://llvm.org/docs/doxygen/html/BypassSlowDivision_8cpp_source.html>`_
  enabled in the NVPTX backend. 64-bit integer divides are much slower than
  32-bit ones on NVIDIA GPUs because the hardware lacks a divide unit. Many of
  the 64-bit divides in our benchmarks have a divisor and dividend which fit in
  32 bits at runtime. This optimization provides a fast path for this common
  case.

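As an example of the pragmas and attribute mentioned in the **Aggressive loop
unrolling and function inlining** item above, you can annotate your code along
these lines (a hypothetical kernel, shown only to illustrate the syntax):

.. code-block:: c++

  __attribute__((always_inline)) __device__ float square(float x) {
    return x * x;
  }

  __global__ void sum_of_squares(const float* in, float* out, int n) {
    float acc = 0;
    #pragma unroll 4  // Ask clang to unroll this loop by a factor of 4.
    for (int i = 0; i < n; ++i)
      acc += square(in[i]);  // always_inline forces this call to be inlined.
    *out = acc;
  }
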
Publication
===========

| `gpucc: An Open-Source GPGPU Compiler <http://dl.acm.org/citation.cfm?id=2854041>`_
| Jingyue Wu, Artem Belevich, Eli Bendersky, Mark Heffernan, Chris Leary, Jacques Pienaar, Bjarke Roune, Rob Springer, Xuetian Weng, Robert Hundt
| *Proceedings of the 2016 International Symposium on Code Generation and Optimization (CGO 2016)*
| `Slides for the CGO talk <http://wujingyue.com/docs/gpucc-talk.pdf>`_

Tutorial
========

`CGO 2016 gpucc tutorial <http://wujingyue.com/docs/gpucc-tutorial.pdf>`_

Obtaining Help
==============

To obtain help on LLVM in general and its CUDA support, see `the LLVM
community <http://llvm.org/docs/#mailing-lists>`_.