=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is thus crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-them-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

.. warning::
   This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_bo_device_init, together with an
initialized global reference to the memory manager. The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

    struct drm_global_reference {
            enum ttm_global_types global_type;
            size_t size;
            void *object;
            int (*init) (struct drm_global_reference *);
            void (*release) (struct drm_global_reference *);
    };

There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
TTM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.

Once your global TTM accounting structure is set up and initialized by
calling ttm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
TTM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_init() and
ttm_bo_global_release(), respectively. Also, like the previous
object, ttm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.
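
As a sketch of the sequence described above, a driver might set up its two
global references as follows. The identifiers follow this section's text and
the radeon example referenced below; as the warning above notes, this section
is outdated, so the exact names (including the struct ttm_bo_global_ref
wrapper borrowed from radeon) may differ in the current tree, and the
``mydrv_*`` names are purely illustrative:

.. code-block:: c

    /* Driver-private container for the two global references. */
    struct mydrv_mman {
            struct drm_global_reference mem_global_ref;
            struct ttm_bo_global_ref bo_global_ref;
    };

    static int mydrv_mem_global_init(struct drm_global_reference *ref)
    {
            return ttm_mem_global_init(ref->object);
    }

    static void mydrv_mem_global_release(struct drm_global_reference *ref)
    {
            ttm_mem_global_release(ref->object);
    }

    static int mydrv_global_init(struct mydrv_mman *mman)
    {
            struct drm_global_reference *ref;
            int ret;

            /* Global TTM memory accounting object. */
            ref = &mman->mem_global_ref;
            ref->global_type = TTM_GLOBAL_TTM_MEM;
            ref->size = sizeof(struct ttm_mem_global);
            ref->init = &mydrv_mem_global_init;
            ref->release = &mydrv_mem_global_release;
            ret = ttm_global_item_ref(ref);
            if (ret)
                    return ret;

            /* Buffer object pool, fed by the accounting object above. */
            mman->bo_global_ref.mem_glob = mman->mem_global_ref.object;
            ref = &mman->bo_global_ref.ref;
            ref->global_type = TTM_GLOBAL_TTM_BO;
            ref->size = sizeof(struct ttm_bo_global);
            ref->init = &ttm_bo_global_init;
            ref->release = &ttm_bo_global_release;
            return ttm_global_item_ref(ref); /* error unwinding omitted */
    }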

See the radeon_ttm.c file for an example of usage.

.. kernel-doc:: drivers/gpu/drm/drm_global.c
   :export:

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers, are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.
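
For instance, a driver might advertise GEM support and hand the stolen region
to its own range manager roughly as follows (the ``mydrv_*`` names and the
stolen-region address and size are placeholders):

.. code-block:: c

    #include <drm/drmP.h>
    #include <drm/drm_mm.h>

    /* GEM support is advertised through the driver feature flags. */
    static struct drm_driver mydrv_driver = {
            .driver_features = DRIVER_GEM | DRIVER_MODESET,
            /* ... fops, ioctls, GEM hooks ... */
    };

    /* The stolen region is not handed to GEM; it gets its own drm_mm. */
    static struct drm_mm mydrv_stolen_mm;

    static void mydrv_init_stolen(u64 stolen_start, u64 stolen_size)
    {
            drm_mm_init(&mydrv_stolen_mm, stolen_start, stolen_size);
    }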

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to :c:func:`drm_gem_object_init()`. The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
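
A minimal sketch of such a driver-specific object type and its creation helper
(``mydrv_gem_object`` and ``mydrv_gem_create()`` are hypothetical):

.. code-block:: c

    #include <linux/err.h>
    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <drm/drm_gem.h>

    struct mydrv_gem_object {
            struct drm_gem_object base;     /* embedded, not a pointer */
            /* driver-private data: page array, GPU address, ... */
    };

    static struct mydrv_gem_object *mydrv_gem_create(struct drm_device *dev,
                                                     size_t size)
    {
            struct mydrv_gem_object *obj;
            int ret;

            size = PAGE_ALIGN(size);        /* object sizes are page aligned */

            obj = kzalloc(sizeof(*obj), GFP_KERNEL);
            if (!obj)
                    return ERR_PTR(-ENOMEM);

            /* Creates the shmfs backing file of the requested size. */
            ret = drm_gem_object_init(dev, &obj->base, size);
            if (ret) {
                    kfree(obj);
                    return ERR_PTR(ret);
            }

            return obj;
    }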

GEM uses shmem to allocate anonymous pageable memory.
:c:func:`drm_gem_object_init()` will create an shmfs file of the
requested size and store it into the
:c:type:`struct drm_gem_object <drm_gem_object>` filp field. The memory
is used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual physical page allocation, by
calling :c:func:`shmem_read_mapping_page_gfp()` for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).
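
A sketch of such a per-page allocation loop over the object's backing shmfs
file (the caller provides a large enough ``pages`` array; unwinding of
already-pinned pages on error is omitted):

.. code-block:: c

    #include <linux/err.h>
    #include <linux/pagemap.h>
    #include <linux/shmem_fs.h>
    #include <drm/drm_gem.h>

    static int mydrv_gem_get_pages(struct drm_gem_object *obj,
                                   struct page **pages)
    {
            struct address_space *mapping = obj->filp->f_mapping;
            unsigned long i, npages = obj->size >> PAGE_SHIFT;

            for (i = 0; i < npages; i++) {
                    struct page *page;

                    page = shmem_read_mapping_page_gfp(mapping, i, GFP_KERNEL);
                    if (IS_ERR(page))
                            return PTR_ERR(page);
                    pages[i] = page;
            }

            return 0;
    }

Note that the GEM core also provides the :c:func:`drm_gem_get_pages()` and
:c:func:`drm_gem_put_pages()` helpers, which implement essentially this loop
and its cleanup counterpart.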

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to :c:func:`drm_gem_private_object_init()` instead of
:c:func:`drm_gem_object_init()`. Storage for private GEM objects
must be managed by drivers.
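
For example, a driver whose buffers come from a CMA pool or a carveout might
initialize its objects like this (the actual allocation is left out):

.. code-block:: c

    #include <drm/drm_gem.h>

    static void mydrv_gem_init_private(struct drm_device *dev,
                                       struct drm_gem_object *obj, size_t size)
    {
            /* No shmfs file is created; obj->filp stays NULL. */
            drm_gem_private_object_init(dev, obj, size);

            /* The driver now provides and manages the backing storage. */
    }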

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling :c:func:`drm_gem_object_get()` and
:c:func:`drm_gem_object_put()` respectively. The caller must hold the
:c:type:`struct drm_device <drm_device>` struct_mutex lock when calling
:c:func:`drm_gem_object_put()`. As a convenience, GEM provides the
:c:func:`drm_gem_object_put_unlocked()` function that can be called
without holding the lock.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

The gem_free_object operation has the following prototype:

.. code-block:: c

    void (*gem_free_object) (struct drm_gem_object *obj);

Drivers are responsible for freeing all GEM object resources. This
includes the resources created by the GEM core, which need to be
released with :c:func:`drm_gem_object_release()`.
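
A driver's free callback typically mirrors its creation path; a sketch for the
hypothetical object type introduced earlier:

.. code-block:: c

    static void mydrv_gem_free_object(struct drm_gem_object *gem_obj)
    {
            struct mydrv_gem_object *obj =
                    container_of(gem_obj, struct mydrv_gem_object, base);

            /* Release driver-private resources (pages, mappings, ...) here. */

            /* Then release the resources created by the GEM core. */
            drm_gem_object_release(gem_obj);
            kfree(obj);
    }

The callback is hooked up through the gem_free_object field of
:c:type:`struct drm_driver <drm_driver>`.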

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object, drivers call
:c:func:`drm_gem_handle_create()`. The function takes a pointer
to the DRM file and the GEM object and returns a locally unique handle.
When the handle is no longer needed drivers delete it with a call to
:c:func:`drm_gem_handle_delete()`. Finally, the GEM object
associated with a handle can be retrieved by a call to
:c:func:`drm_gem_object_lookup()`.

Handles don't take ownership of GEM objects, they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
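
A sketch of the dumb_create pattern described above, reusing the hypothetical
creation helper from the previous section:

.. code-block:: c

    static int mydrv_dumb_create(struct drm_file *file, struct drm_device *dev,
                                 struct drm_mode_create_dumb *args)
    {
            struct mydrv_gem_object *obj;
            u32 handle;
            int ret;

            args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
            args->size = PAGE_ALIGN(args->pitch * args->height);

            obj = mydrv_gem_create(dev, args->size);
            if (IS_ERR(obj))
                    return PTR_ERR(obj);

            ret = drm_gem_handle_create(file, &obj->base, &handle);
            /*
             * Drop the initial creation reference: on success the handle now
             * keeps the object alive, on failure the object is freed.
             */
            drm_gem_object_put_unlocked(&obj->base);
            if (ret)
                    return ret;

            args->handle = handle;
            return 0;
    }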

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See the PRIME Buffer
Sharing section below. Since sharing file descriptors is inherently more
secure than the easily guessable and global GEM names it is the
preferred buffer sharing mechanism. Sharing buffers through GEM names is
only supported for legacy userspace. Furthermore, PRIME also allows
cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight, GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
:c:func:`do_mmap()` under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle:

.. code-block:: c

    void *mmap(void *addr, size_t length, int prot, int flags, int fd,
               off_t offset);

DRM identifies the GEM object to be mapped by a fake offset
passed through the mmap offset argument. Prior to being mapped, a GEM
object must thus be associated with a fake offset. To do so, drivers
must call :c:func:`drm_gem_create_mmap_offset()` on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
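
A sketch of a driver-specific ioctl handing the fake offset back to userspace
(the ioctl payload structure is hypothetical; :c:func:`drm_vma_node_offset_addr()`
converts the object's vma_node into the value userspace passes to mmap):

.. code-block:: c

    struct mydrv_gem_mmap_offset {          /* hypothetical ioctl payload */
            __u32 handle;
            __u32 pad;
            __u64 offset;
    };

    static int mydrv_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
                                           struct drm_file *file)
    {
            struct mydrv_gem_mmap_offset *args = data;
            struct drm_gem_object *obj;
            int ret;

            obj = drm_gem_object_lookup(file, args->handle);
            if (!obj)
                    return -ENOENT;

            ret = drm_gem_create_mmap_offset(obj);
            if (!ret)
                    args->offset = drm_vma_node_offset_addr(&obj->vma_node);

            drm_gem_object_put_unlocked(obj);
            return ret;
    }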

The GEM core provides a helper method :c:func:`drm_gem_mmap()` to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to those provided in the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field. Note that
:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but
relies on the driver-provided fault handler to map pages individually.

To use :c:func:`drm_gem_mmap()`, drivers must fill the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field
with a pointer to VM operations.

The VM operations are described by a
:c:type:`struct vm_operations_struct <vm_operations_struct>`,
the most interesting fields for GEM being:

.. code-block:: c

    struct vm_operations_struct {
            void (*open)(struct vm_area_struct * area);
            void (*close)(struct vm_area_struct * area);
            int (*fault)(struct vm_fault *vmf);
    };

The open and close operations must update the GEM object reference
count. Drivers can use the :c:func:`drm_gem_vm_open()` and
:c:func:`drm_gem_vm_close()` helper functions directly as open
and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.
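
Putting the pieces together, a driver that lets :c:func:`drm_gem_mmap()` do
the heavy lifting could wire things up as follows (the fault handler body is
only a sketch, and ``mydrv_gem_page_at()`` is a hypothetical lookup of the
backing page):

.. code-block:: c

    #include <linux/mm.h>
    #include <drm/drmP.h>

    static int mydrv_gem_fault(struct vm_fault *vmf)
    {
            struct vm_area_struct *vma = vmf->vma;
            struct drm_gem_object *obj = vma->vm_private_data;
            pgoff_t pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
            struct page *page;

            page = mydrv_gem_page_at(obj, pgoff);   /* hypothetical */
            if (IS_ERR(page))
                    return VM_FAULT_SIGBUS;

            if (vm_insert_page(vma, vmf->address, page))
                    return VM_FAULT_SIGBUS;

            return VM_FAULT_NOPAGE;
    }

    static const struct vm_operations_struct mydrv_gem_vm_ops = {
            .fault = mydrv_gem_fault,
            .open  = drm_gem_vm_open,
            .close = drm_gem_vm_close,
    };

    static const struct file_operations mydrv_fops = {
            .owner = THIS_MODULE,
            .open = drm_open,
            .release = drm_release,
            .unlocked_ioctl = drm_ioctl,
            .mmap = drm_gem_mmap,   /* looks up the object, installs gem_vm_ops */
            /* ... */
    };

The gem_vm_ops and fops fields of the driver's
:c:type:`struct drm_driver <drm_driver>` then point at ``mydrv_gem_vm_ops``
and ``mydrv_fops`` respectively.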

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without MMU, the GEM core provides a helper method
:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
this to get a proposed address for the mapping.

To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area
field with a pointer to :c:func:`drm_gem_cma_get_unmapped_area`.

More detailed information about get_unmapped_area can be found in
Documentation/nommu-mmap.txt.
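
On a no-MMU platform the file operations would then look roughly like this
(``drm_gem_cma_mmap()`` is shown for completeness, assuming the driver uses
the CMA helpers):

.. code-block:: c

    static const struct file_operations mydrv_nommu_fops = {
            .owner = THIS_MODULE,
            .open = drm_open,
            .release = drm_release,
            .unlocked_ioctl = drm_ioctl,
            .mmap = drm_gem_cma_mmap,
            .get_unmapped_area = drm_gem_cma_get_unmapped_area,
            /* ... */
    };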

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in DRM, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.

Overview and Driver Interface
-----------------------------

Similar to GEM global names, PRIME file descriptors are also used to
share buffer objects across processes. They offer additional security:
as file descriptors must be explicitly sent over UNIX domain sockets to
be shared between applications, they can't be guessed like the globally
unique GEM names.

Drivers that support the PRIME API must set the DRIVER_PRIME bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features field, and
implement the prime_handle_to_fd and prime_fd_to_handle operations:

.. code-block:: c

    int (*prime_handle_to_fd)(struct drm_device *dev, struct drm_file *file_priv,
                              uint32_t handle, uint32_t flags, int *prime_fd);
    int (*prime_fd_to_handle)(struct drm_device *dev, struct drm_file *file_priv,
                              int prime_fd, uint32_t *handle);

Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors. Similar to the mode setting API,
PRIME is agnostic to the underlying buffer object manager, as long as
handles are 32bit unsigned integers.

While non-GEM drivers must implement the operations themselves, GEM
drivers must use the :c:func:`drm_gem_prime_handle_to_fd()` and
:c:func:`drm_gem_prime_fd_to_handle()` helper functions. Those
helpers rely on the driver gem_prime_export and gem_prime_import
operations to create a dma-buf instance from a GEM object (dma-buf
exporter role) and to create a GEM object from a dma-buf instance
(dma-buf importer role):

.. code-block:: c

    struct dma_buf *(*gem_prime_export)(struct drm_device *dev,
                                        struct drm_gem_object *obj, int flags);
    struct drm_gem_object *(*gem_prime_import)(struct drm_device *dev,
                                               struct dma_buf *dma_buf);

These two operations are mandatory for GEM drivers that support PRIME.
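
A GEM driver that relies entirely on the core PRIME helpers can therefore
extend the hypothetical ``mydrv_driver`` sketched earlier as follows (assuming
the default :c:func:`drm_gem_prime_export()` and
:c:func:`drm_gem_prime_import()` implementations are sufficient for the
hardware):

.. code-block:: c

    static struct drm_driver mydrv_driver = {
            .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,
            /* handle <-> fd conversion is done by the GEM PRIME helpers */
            .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
            .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
            /* export to and import from dma-buf */
            .gem_prime_export = drm_gem_prime_export,
            .gem_prime_import = drm_gem_prime_import,
            /* ... */
    };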

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Cache Handling
==================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export: