Erik Gilling | 1bcf88f | 2012-09-14 14:36:34 -0700

Motivation:

In complicated DMA pipelines such as graphics (multimedia, camera, gpu, display)
a consumer of a buffer needs to know when the producer has finished producing
it. Likewise, the producer needs to know when the consumer is finished with the
buffer so it can reuse it. A particular buffer may be consumed by multiple
consumers, which will retain the buffer for different amounts of time. In
addition, a consumer may consume multiple buffers atomically.
The sync framework adds an API which allows synchronization between the
producers and consumers in a generic way, while also allowing platforms which
have shared hardware synchronization primitives to exploit them.

Goals:
  * provide a generic API for expressing synchronization dependencies
  * allow drivers to exploit hardware synchronization between hardware
    blocks
  * provide a userspace API that allows a compositor to manage
    dependencies
  * provide rich telemetry data to allow debugging slowdowns and stalls of
    the graphics pipeline

Objects:
  * sync_timeline
  * sync_pt
  * sync_fence
sync_timeline:

A sync_timeline is an abstract monotonically increasing counter. In general,
each driver/hardware block context will have one of these. They can be backed
by the appropriate hardware or rely on the generic sw_sync implementation.
Timelines are only ever created through their specific implementations
(e.g. sw_sync).

sync_pt:

A sync_pt is an abstract value which marks a point on a sync_timeline. Sync_pts
have a single timeline parent. They have three states: active, signaled, and
error. They start in the active state and transition, once, to either the
signaled state (when the timeline counter advances beyond the sync_pt's value)
or the error state.

sync_fence:

Sync_fences are the primary primitives used by drivers to coordinate
synchronization of their buffers. They are collections of sync_pts which may
or may not have the same timeline parent. A sync_pt can only exist in one
fence, and the fence's list of sync_pts is immutable once created. Fences can
be waited on synchronously or asynchronously. Two fences can also be merged to
create a third fence containing copies of the two fences' sync_pts. Fences are
backed by file descriptors to allow userspace to coordinate display pipeline
dependencies.

Use:

A driver implementing sync support should have a work submission function
which:
  * takes a fence argument specifying when to begin work
  * asynchronously queues that work to kick off when the fence is signaled
  * returns a fence to indicate when its work will be done
  * signals the returned fence once the work is completed

Consider an imaginary display driver that has the following API:

    /*
     * assumes buf is ready to be displayed.
     * blocks until the buffer is on screen.
     */
    void display_buffer(struct dma_buf *buf);

The new API will become:

    /*
     * will display buf when fence is signaled.
     * returns immediately with a fence that will signal when buf
     * is no longer displayed.
     */
    struct sync_fence *display_buffer(struct dma_buf *buf,
                                      struct sync_fence *fence);