=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses. Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t". The kernel manages device resources like registers as
physical addresses. These are the addresses in /proc/iomem. The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address". If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses. In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not. IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space. For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system. For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B). The address B
is stored in a struct resource and usually exposed via /proc/iomem. When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C). It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X). The virtual
memory system maps X to a physical address (Y) in system RAM. The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y. But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y. This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z. The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the
drivers, namely it must take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.

The following API will of course work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

        #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t. This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.
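
For example, a minimal sketch of putting such memory to use (assuming a
hypothetical driver with a ``struct device *dev`` and using the streaming
dma_map_single() interface described later; BUF_SIZE and the error labels
are illustrative names only)::

        void *buf;
        dma_addr_t dma_handle;

        /* kmalloc()'ed memory is DMA'able; BUF_SIZE is a hypothetical size. */
        buf = kmalloc(BUF_SIZE, GFP_KERNEL);
        if (!buf)
                goto out_of_memory;

        dma_handle = dma_map_single(dev, buf, BUF_SIZE, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, dma_handle))
                goto map_failed;

A vmalloc()'ed buffer, by contrast, could not be passed to
dma_map_single() like this, as the next paragraph explains.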

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().

What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing limitations
==========================

Does your device have any DMA addressing limitations? For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32-bits. For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has. It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.

The query is performed via a call to dma_set_mask_and_coherent()::

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

  The query for streaming mappings is performed via a call to
  dma_set_mask()::

        int dma_set_mask(struct device *dev, u64 mask);

  The query for consistent allocations is performed via a call
  to dma_set_coherent_mask()::

        int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports. It returns zero if your card can perform DMA properly on
the machine given the address mask you provided. In general, the
device struct of your device is embedded in the bus-specific device
struct of your device. For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior. You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3. In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this::

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

Another common scenario is a 64-bit capable device. The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail. The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing. Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing. For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA::

        int using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this::

        int using_dac, consistent_using_dac;

        if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
                consistent_using_dac = 1;
        } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
                consistent_using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The coherent mask can always be set to the same or a smaller mask than
the streaming mask. However, for the rare case that a device driver only
uses consistent allocations, one would have to check the return value from
dma_set_coherent_mask().
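
For such a consistent-allocation-only driver, the check might look like
the following sketch (the 64-bit mask is only an example value)::

        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64))) {
                dev_warn(dev, "mydev: No suitable coherent DMA available\n");
                goto ignore_this_device;
        }

As with dma_set_mask(), a non-zero return value means that mask cannot
be used on this platform.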

Finally, if your device can only drive the low 24-bits of
address you might do something like::

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided. The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                         card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                         card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space. However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa. Consistent mappings guarantee this.

  .. important::

             Consistent DMA memory does not preclude the usage of
             proper memory barriers. The CPU may reorder stores to
             consistent memory just as it may normal memory. Example:
             if it is important for the device to see the first word
             of a descriptor updated before the second, you must do
             something like::

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

  Also, on some platforms your driver may need to flush CPU write
  buffers in much the same way as it needs to flush write buffers
  found in PCI bridges (such as by reading a register's value
  after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows. To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable. Even if the
device indicates (via DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask(). This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call::

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling::

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
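
Putting the dma_pool calls together, here is a brief sketch of the whole
lifecycle (the pool name, block size, and alignment below are only example
values for a hypothetical device)::

        struct dma_pool *pool;
        void *cpu_addr;
        dma_addr_t dma_handle;

        /* Pool of 64-byte blocks, 64-byte aligned, no boundary restriction. */
        pool = dma_pool_create("mydev_desc", dev, 64, 64, 0);
        if (!pool)
                goto no_pool;

        cpu_addr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
        if (!cpu_addr)
                goto no_block;

        /* ... CPU uses cpu_addr, the device is given dma_handle ... */

        dma_pool_free(pool, cpu_addr, dma_handle);
        dma_pool_destroy(pool);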

DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

        DMA_BIDIRECTIONAL
        DMA_TO_DEVICE
        DMA_FROM_DEVICE
        DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device";
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging. One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space. Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
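
As a sketch only (tx_dma, rx_dma, rx_buf, and rx_buf_len are hypothetical
driver variables, and error handling is deferred to "Handling Errors"
below), the two directions look like::

        /* Transmit: the device reads the packet out of main memory. */
        tx_dma = dma_map_single(dev, skb->data, skb_headlen(skb),
                                DMA_TO_DEVICE);

        /* Receive: the device writes incoming data into the buffer. */
        rx_dma = dma_map_single(dev, rx_buf, rx_buf_len, DMA_FROM_DEVICE);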

Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it::

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and return
an error. Doing so will ensure that the mapping code will work correctly on all
DMA implementations without any dependency on the specifics of the underlying
implementation. Using the returned address without checking for errors could
result in failures ranging from panics to silent data corruption. The same
applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single(). These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and return
an error as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

        The 'nents' argument to the dma_unmap_sg call must be
        the _same_ one you passed into the dma_map_sg call,
        it should _NOT_ be the 'count' value _returned_ from the
        dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

        The 'nents' argument to dma_sync_sg_for_cpu() and
        dma_sync_sg_for_device() must be the same passed to
        dma_map_sg(). It is _NOT_ the count returned by
        dma_map_sg().

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data. But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt(). Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.

Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmap pages that are already mapped, when a mapping error occurs in the middle
  of a multiple page mapping attempt. These examples are applicable to
  dma_map_page() as well.

Example 1::

        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
        dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2::

        /*
         * if buffers are allocated in a loop, unmap all mapped buffers when
         * mapping error is detected in the middle
         */

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
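
A brief sketch of that convention in a hypothetical transmit hook
(mydev_start_xmit, struct mydev_priv, and priv->dev are illustrative
names, not part of any real driver)::

        static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                            struct net_device *netdev)
        {
                struct mydev_priv *priv = netdev_priv(netdev);  /* hypothetical */
                dma_addr_t mapping;

                mapping = dma_map_single(priv->dev, skb->data,
                                         skb_headlen(skb), DMA_TO_DEVICE);
                if (dma_mapping_error(priv->dev, mapping)) {
                        /* Drop the packet; do not ask the stack to retry. */
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
                }

                /* ... hand 'mapping' to the hardware ... */
                return NETDEV_TX_OK;
        }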

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.

Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after::

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

        ringp->mapping = FOO;
        ringp->len = BAR;

   after::

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after::

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
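
Put together, a short sketch of how the two halves pair up across one
map/unmap cycle (ringp, addr, and buf_len are hypothetical driver state)::

        dma_addr_t dma_handle;

        /* At map time, record what the unmap will later need. */
        dma_handle = dma_map_single(dev, addr, buf_len, DMA_FROM_DEVICE);
        dma_unmap_addr_set(ringp, mapping, dma_handle);
        dma_unmap_len_set(ringp, len, buf_len);

        /* At unmap time, read the saved values back out. */
        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

On configurations where the unmap really is a nop, the stores above and the
structure fields they touch can compile away entirely.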

Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffer is
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffer doesn't share a cache line with
   the others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
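
   As a loose sketch of what such an architecture header might contain
   (the shift value is only an example; real architectures pick it from
   their cache geometry or a config option)::

        #define L1_CACHE_SHIFT          6
        #define L1_CACHE_BYTES          (1 << L1_CACHE_SHIFT)

        /*
         * Make kmalloc() return buffers that never share a cache line
         * with unrelated data on DMA-incoherent hardware.
         */
        #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES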

Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>