completions - wait for completion handling
==========================================

This document was originally written based on 3.18.0 (linux-next)

Introduction:
-------------

If you have one or more threads of execution that must wait for some process
to have reached a point or a specific state, completions can provide a
race-free solution to this problem. Semantically they are somewhat like a
pthread_barrier and have similar use-cases.

Completions are a code synchronization mechanism which is preferable to any
misuse of locks. Any time you think of using yield() or some quirky
msleep(1) loop to allow something else to proceed, you probably want to
look into using one of the wait_for_completion*() calls instead. The
advantage of using completions is the clear intent of the code; completions
also result in more efficient code, as both threads can continue until the
result is actually needed.

Completions are built on top of the generic event infrastructure in Linux,
with the event reduced to a simple flag (appropriately called "done") in
struct completion that tells the waiting threads of execution if they
can continue safely.

As completions are scheduling related, the code is found in
kernel/sched/completion.c - for details on completion design and
implementation see completions-design.txt


Usage:
------

There are three parts to using completions: the initialization of the
struct completion, the waiting part through a call to one of the variants of
wait_for_completion(), and the signaling side through a call to complete()
or complete_all(). There are also some helper functions for checking the
state of completions.

To use completions one needs to include <linux/completion.h> and
create a variable of type struct completion. The structure used for
handling of completions is:

    struct completion {
        unsigned int done;
        wait_queue_head_t wait;
    };

providing the wait queue to place tasks on for waiting and the flag for
indicating the state of affairs.

Completions should be named to convey the intent of the waiter. A good
example is:

    wait_for_completion(&early_console_added);

    complete(&early_console_added);

Good naming (as always) helps code readability.


Initializing completions:
-------------------------

Initialization of dynamically allocated completions, often embedded in
other structures, is done with a call to init_completion():

    init_completion(&done);

Initialization is accomplished by initializing the wait queue and setting
the default state to "not available", that is, "done" is set to 0.

The re-initialization function, reinit_completion(), simply resets the
done element to "not available", thus again to 0, without touching the
wait queue. Calling init_completion() twice on the same completion object is
most likely a bug as it re-initializes the queue to an empty queue and
enqueued tasks could get "lost" - use reinit_completion() in that case.
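
As a minimal sketch, assuming a hypothetical device structure (the
foo_device name and its members are invented for illustration): the
completion is initialized once and then re-armed with reinit_completion()
before each reuse:

    struct foo_device {
        struct completion cmd_done;
        /* ... other members ... */
    };

    static void foo_init(struct foo_device *dev)
    {
        init_completion(&dev->cmd_done);        /* once, at setup time */
    }

    static void foo_run_command(struct foo_device *dev)
    {
        reinit_completion(&dev->cmd_done);   /* re-arm, keep the wait queue */
        /* ... start the command; some other context calls complete() ... */
        wait_for_completion(&dev->cmd_done);
    }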

For static declaration and initialization, macros are available. These are:

    static DECLARE_COMPLETION(setup_done);

used for static declarations in file scope. Within functions the static
initialization should always use:

    DECLARE_COMPLETION_ONSTACK(setup_done);

which is suitable for automatic/local variables on the stack and will make
lockdep happy. Note also that one needs to make *sure* the completion passed
to work threads remains in-scope, and no references remain to on-stack data
when the initiating function returns.

Using on-stack completions for code that calls any of the _timeout or
_interruptible/_killable variants is not advisable as they will require
additional synchronization to prevent the on-stack completion object in
the timeout/signal cases from going out of scope. Consider using dynamically
allocated completions when intending to use the _interruptible/_killable
or _timeout variants of wait_for_completion().
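
A rough sketch of the on-stack pattern, assuming an invented helper
(start_setup_async() is not an existing API) that hands the completion to
another context which eventually calls complete() on it:

    static int do_setup(void)
    {
        DECLARE_COMPLETION_ONSTACK(setup_done); /* lives on this stack frame */

        start_setup_async(&setup_done);         /* hypothetical helper */

        /*
         * Do not return before the signaling side is done with
         * &setup_done - it points into this stack frame.
         */
        wait_for_completion(&setup_done);
        return 0;
    }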


Waiting for completions:
------------------------

For a thread of execution to wait for some concurrent work to finish, it
calls wait_for_completion() on the initialized completion structure.
A typical usage scenario is:

    struct completion setup_done;
    init_completion(&setup_done);
    initialize_work(..., &setup_done, ...);

    /* run non-dependent code */            /* do setup */

    wait_for_completion(&setup_done);       complete(&setup_done);

This is not implying any temporal order on wait_for_completion() and the
call to complete() - if the call to complete() happened before the call
to wait_for_completion() then the waiting side simply will continue
immediately as all dependencies are satisfied; if not, it will block until
completion is signaled by complete().
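
A more concrete sketch of the scenario above, using the workqueue API in
addition to completions (needs <linux/workqueue.h>; the setup_work structure
and function names are invented for illustration): the waiting side
initializes the completion and hands it to a work item, and the worker does
the setup and signals it:

    struct setup_work {
        struct work_struct work;
        struct completion setup_done;
    };

    static void setup_work_fn(struct work_struct *work)
    {
        struct setup_work *sw = container_of(work, struct setup_work, work);

        /* do setup */
        complete(&sw->setup_done);
    }

    static int run_setup(struct setup_work *sw)
    {
        init_completion(&sw->setup_done);
        INIT_WORK(&sw->work, setup_work_fn);
        schedule_work(&sw->work);

        /* run non-dependent code */

        wait_for_completion(&sw->setup_done);
        return 0;
    }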

Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(),
so it can only be called safely when you know that interrupts are enabled.
Calling it from hard-irq or irqs-off atomic contexts will result in
hard-to-detect spurious enabling of interrupts.

wait_for_completion():

    void wait_for_completion(struct completion *done)

The default behavior is to wait without a timeout and to mark the task as
uninterruptible. wait_for_completion() and its variants are only safe
in process context (as they can sleep) but not in atomic context, interrupt
context, with IRQs disabled, or with preemption disabled - see also
try_wait_for_completion() below for handling completion in atomic/interrupt
context.

As all variants of wait_for_completion() can (obviously) block for a long
time, you probably don't want to call this while holding mutexes.


Variants available:
-------------------

The below variants all return status and this status should be checked in
most(/all) cases - in cases where the status is deliberately not checked you
probably want to make a note explaining this (e.g. see
arch/arm/kernel/smp.c:__cpu_up()).

A common problem is assigning the return value to a variable of the wrong
type, so care should be taken to use a variable of the proper type for each
variant. Checking for the specific meaning of return values also has been
found to be quite error-prone, e.g. constructs like
if (!wait_for_completion_interruptible_timeout(...)) would execute the same
code path for successful completion and for the interrupted case - which is
probably not what you want.
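
As a sketch of the correct pattern (the variable names and the 100 ms budget
are illustrative), assign the result to a signed long and handle the
interrupted, timed-out and completed cases separately instead of collapsing
them with a single negation:

    long ret;   /* may hold -ERESTARTSYS, 0, or the remaining jiffies */

    ret = wait_for_completion_interruptible_timeout(&setup_done,
                                                    msecs_to_jiffies(100));
    if (ret < 0)
        return ret;             /* -ERESTARTSYS: interrupted by a signal */
    else if (ret == 0)
        return -ETIMEDOUT;      /* timed out without completion */
    /* ret > 0: completed with 'ret' jiffies of the timeout left */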

    int wait_for_completion_interruptible(struct completion *done)

This function marks the task TASK_INTERRUPTIBLE. If a signal was received
while waiting it will return -ERESTARTSYS; 0 otherwise.

    unsigned long wait_for_completion_timeout(struct completion *done,
                                              unsigned long timeout)

The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
jiffies. If a timeout occurs it returns 0, otherwise the remaining time in
jiffies (but at least 1). Timeouts are preferably calculated with
msecs_to_jiffies() or usecs_to_jiffies(). If the returned timeout value is
deliberately ignored a comment should probably explain why (e.g. see
drivers/mfd/wm8350-core.c wm8350_read_auxadc()).
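
For instance, a sketch with an invented completion name and a made-up 50 ms
budget, where only the timeout case needs special handling:

    unsigned long timeout;

    timeout = wait_for_completion_timeout(&transfer_done,
                                          msecs_to_jiffies(50));
    if (!timeout)
        return -ETIMEDOUT;      /* 0: the 50 ms budget expired */
    /* non-zero: completed with 'timeout' jiffies of the budget left */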

    long wait_for_completion_interruptible_timeout(
            struct completion *done, unsigned long timeout)

This function passes a timeout in jiffies and marks the task as
TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
otherwise it returns 0 if the completion timed out or the remaining time in
jiffies if completion occurred.

Further variants include _killable which uses TASK_KILLABLE as the
designated task state and will return -ERESTARTSYS if it is interrupted,
or else 0 if completion was achieved. There is a _timeout variant as well:

    long wait_for_completion_killable(struct completion *done)
    long wait_for_completion_killable_timeout(struct completion *done,
                                              unsigned long timeout)

The _io variants, wait_for_completion_io() and
wait_for_completion_io_timeout(), behave the same as the non-_io variants,
except that waiting time is accounted as waiting on IO, which has an impact
on how the task is accounted in scheduling stats.

    void wait_for_completion_io(struct completion *done)
    unsigned long wait_for_completion_io_timeout(struct completion *done,
                                                 unsigned long timeout)


Signaling completions:
----------------------

A thread that wants to signal that the conditions for continuation have been
achieved calls complete() to signal exactly one of the waiters that it can
continue.

    void complete(struct completion *done)

or calls complete_all() to signal all current and future waiters.

    void complete_all(struct completion *done)

The signaling will work as expected even if completions are signaled before
a thread starts waiting. This is achieved by the waiter "consuming"
(decrementing) the done element of struct completion. Waiting threads are
woken up in the order in which they were enqueued (FIFO order).

If complete() is called multiple times then this will allow for that number
of waiters to continue - each call to complete() will simply increment the
done element. Calling complete_all() multiple times is a bug though. Both
complete() and complete_all() can be called in hard-irq/atomic context safely.

There can only be one thread calling complete() or complete_all() on a
particular struct completion at any time - serialized through the wait
queue spinlock. Any such concurrent calls to complete() or complete_all()
are probably a design bug.

Signaling completion from hard-irq context is fine as it will appropriately
lock with spin_lock_irqsave()/spin_unlock_irqrestore() and it will never sleep.
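
As an illustrative sketch (reusing the invented foo_device structure from the
initialization example above), a common pattern is a process-context caller
waiting with a timeout while the interrupt handler signals the completion:

    static irqreturn_t foo_irq_handler(int irq, void *data)
    {
        struct foo_device *dev = data;

        /* ... acknowledge the hardware ... */
        complete(&dev->cmd_done);       /* safe in hard-irq context */
        return IRQ_HANDLED;
    }

    static int foo_send_command(struct foo_device *dev)
    {
        reinit_completion(&dev->cmd_done);
        /* ... write the command to the hardware ... */

        if (!wait_for_completion_timeout(&dev->cmd_done,
                                         msecs_to_jiffies(100)))
            return -ETIMEDOUT;
        return 0;
    }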
Nicholas Mc Guire | 202799b | 2015-01-30 08:01:52 +0100 | [diff] [blame] | 229 | |
| 230 | |
| 231 | try_wait_for_completion()/completion_done(): |
| 232 | -------------------------------------------- |
| 233 | |
Nicholas Mc Guire | 4988aaa | 2015-02-20 12:28:48 -0500 | [diff] [blame] | 234 | The try_wait_for_completion() function will not put the thread on the wait |
| 235 | queue but rather returns false if it would need to enqueue (block) the thread, |
Jonathan Corbet | 7085f6c | 2015-03-27 10:16:35 -0600 | [diff] [blame] | 236 | else it consumes one posted completion and returns true. |
Nicholas Mc Guire | 202799b | 2015-01-30 08:01:52 +0100 | [diff] [blame] | 237 | |
Nicholas Mc Guire | 4988aaa | 2015-02-20 12:28:48 -0500 | [diff] [blame] | 238 | bool try_wait_for_completion(struct completion *done) |
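
A brief sketch, assuming a completion named data_ready that some other
context posts and a hypothetical process_data() helper, usable where
blocking is not allowed:

    if (try_wait_for_completion(&data_ready)) {
        /* one posted completion was consumed */
        process_data();
    } else {
        /* would have had to block - try again later */
    }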
Nicholas Mc Guire | 202799b | 2015-01-30 08:01:52 +0100 | [diff] [blame] | 239 | |
Jonathan Corbet | 7085f6c | 2015-03-27 10:16:35 -0600 | [diff] [blame] | 240 | Finally, to check the state of a completion without changing it in any way, |
| 241 | call completion_done(), which returns false if there are no posted |
| 242 | completions that were not yet consumed by waiters (implying that there are |
| 243 | waiters) and true otherwise; |
Nicholas Mc Guire | 202799b | 2015-01-30 08:01:52 +0100 | [diff] [blame] | 244 | |
Nicholas Mc Guire | 4988aaa | 2015-02-20 12:28:48 -0500 | [diff] [blame] | 245 | bool completion_done(struct completion *done) |
Nicholas Mc Guire | 202799b | 2015-01-30 08:01:52 +0100 | [diff] [blame] | 246 | |
| 247 | Both try_wait_for_completion() and completion_done() are safe to be called in |
Nicholas Mc Guire | 4988aaa | 2015-02-20 12:28:48 -0500 | [diff] [blame] | 248 | hard-irq or atomic context. |
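
A short sketch (again with the invented foo_device names) of using
completion_done() purely as a non-destructive check:

    /* true if the last command has already been signaled as finished;
     * does not consume the completion and never blocks */
    static bool foo_cmd_finished(struct foo_device *dev)
    {
        return completion_done(&dev->cmd_done);
    }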