===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on simple data structures that are shared between
the remote processors, and access to them is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent, drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
if an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).

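As an illustration, a DT-aware driver might combine the two calls above in
its probe routine (the driver name, phandle index and error handling here are
hypothetical)::

  static int foo_probe(struct platform_device *pdev)
  {
          struct hwspinlock *hwlock;
          int id;

          /* map the first "hwlocks" phandle entry to a global lock id */
          id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
          if (id < 0)
                  return id;      /* possibly -EPROBE_DEFER */

          /* reserve that specific hwspinlock */
          hwlock = hwspin_lock_request_specific(id);
          if (!hwlock)
                  return -EBUSY;

          ...
  }
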
::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

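For instance, the save/restore pair might wrap a short critical section like
this (the shared structure and timeout value are hypothetical)::

  unsigned long flags;
  int ret;

  /* spin for up to 5 msecs; on success irqs are disabled */
  ret = hwspin_lock_timeout_irqsave(hwlock, 5, &flags);
  if (ret)
          return ret;

  /* short, non-sleeping critical section */
  shared_data->counter++;

  /* reenable preemption and restore the saved irq state */
  hwspin_unlock_irqrestore(hwlock, &flags);
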
::

  int hwspin_lock_timeout_raw(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

Caution: the caller must protect the acquisition of the hardware lock with
a mutex or spinlock to avoid deadlocks. In return, since this variant leaves
preemption and interrupts untouched, the caller may perform time-consuming
or sleepable operations while holding the hardware lock.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

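A sketch of the intended pattern, with a regular mutex serializing local
callers so that a sleepable operation can run under the hardware lock (the
mutex, function and helper names are hypothetical)::

  static DEFINE_MUTEX(shared_area_mutex);   /* serializes local CPUs */

  int update_shared_area(struct hwspinlock *hwlock)
  {
          int ret;

          mutex_lock(&shared_area_mutex);

          /* spin for up to 10 msecs against the remote core */
          ret = hwspin_lock_timeout_raw(hwlock, 10);
          if (!ret) {
                  /* may sleep here: preemption and irqs were untouched */
                  update_shared_messages();
                  hwspin_unlock_raw(hwlock);
          }

          mutex_unlock(&shared_area_mutex);
          return ret;
  }
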
::

  int hwspin_lock_timeout_in_atomic(struct hwspinlock *hwlock, unsigned int to);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

This function shall be called only from an atomic context and the timeout
value shall not exceed a few msecs.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as soon
as possible, in order to minimize remote cores polling on the hardware
interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).

The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_raw(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Caution: the caller must protect the acquisition of the hardware lock with
a mutex or spinlock to avoid deadlocks. In return, since this variant leaves
preemption and interrupts untouched, the caller may perform time-consuming
or sleepable operations while holding the hardware lock.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_in_atomic(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

This function shall be called only from an atomic context.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.

Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  void hwspin_unlock_raw(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  void hwspin_unlock_in_atomic(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve the id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

  #include <linux/hwspinlock.h>
  #include <linux/err.h>

  int hwspinlock_example1(void)
  {
          struct hwspinlock *hwlock;
          int id;
          int ret;

          /* dynamically assign a hwspinlock */
          hwlock = hwspin_lock_request();
          if (!hwlock)
                  ...

          id = hwspin_lock_get_id(hwlock);
          /* probably need to communicate id to a remote processor now */

          /* take the lock, spin for 1 sec if it's already taken */
          ret = hwspin_lock_timeout(hwlock, 1000);
          if (ret)
                  ...

          /*
           * we took the lock, do our thing now, but do NOT sleep
           */

          /* release the lock */
          hwspin_unlock(hwlock);

          /* free the lock */
          ret = hwspin_lock_free(hwlock);
          if (ret)
                  ...

          return ret;
  }

  int hwspinlock_example2(void)
  {
          struct hwspinlock *hwlock;
          int ret;

          /*
           * assign a specific hwspinlock id - this should be called early
           * by board init code.
           */
          hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
          if (!hwlock)
                  ...

          /* try to take it, but don't spin on it */
          ret = hwspin_trylock(hwlock);
          if (ret) {
                  pr_info("lock is already taken\n");
                  return -EBUSY;
          }

          /*
           * we took the lock, do our thing now, but do NOT sleep
           */

          /* release the lock */
          hwspin_unlock(hwlock);

          /* free the lock */
          ret = hwspin_lock_free(hwlock);
          if (ret)
                  ...

          return ret;
  }


API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
                           const struct hwspinlock_ops *ops, int base_id,
                           int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or an appropriate error code on failure.

::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g.
if the hwspinlock is still in use).

Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

  /**
   * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
   * @dev: underlying device, will be used to invoke runtime PM api
   * @ops: platform-specific hwspinlock handlers
   * @base_id: id index of the first lock in this device
   * @num_locks: number of locks in this device
   * @lock: dynamically allocated array of 'struct hwspinlock'
   */
  struct hwspinlock_device {
          struct device *dev;
          const struct hwspinlock_ops *ops;
          int base_id;
          int num_locks;
          struct hwspinlock lock[0];
  };

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

  /**
   * struct hwspinlock - this struct represents a single hwspinlock instance
   * @bank: the hwspinlock_device structure which owns this lock
   * @lock: initialized and used by hwspinlock core
   * @priv: private data, owned by the underlying platform-specific hwspinlock drv
   */
  struct hwspinlock {
          struct hwspinlock_device *bank;
          spinlock_t lock;
          void *priv;
  };

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

  struct hwspinlock_ops {
          int (*trylock)(struct hwspinlock *lock);
          void (*unlock)(struct hwspinlock *lock);
          void (*relax)(struct hwspinlock *lock);
  };

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.
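
As a sketch, the callbacks for a hypothetical memory-mapped lock bank
(driver names and register semantics are invented for illustration: reading
0 acquires the lock, writing 0 releases it) could look like::

  static int foo_hwspinlock_trylock(struct hwspinlock *lock)
  {
          void __iomem *lock_addr = lock->priv;

          /* reading 0 means we now own the lock; nonzero means busy */
          return readl(lock_addr) == 0;
  }

  static void foo_hwspinlock_unlock(struct hwspinlock *lock)
  {
          void __iomem *lock_addr = lock->priv;

          /* writing 0 releases the lock */
          writel(0, lock_addr);
  }

  static const struct hwspinlock_ops foo_hwspinlock_ops = {
          .trylock = foo_hwspinlock_trylock,
          .unlock = foo_hwspinlock_unlock,
          /* ->relax() omitted: it is optional */
  };

The exact register semantics are device-specific; the only contract the core
relies on is the 1/0 return convention of ->trylock() described above.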