===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors, and between processors
that do not operate under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for Inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on a simple data structure that is shared between
the remote processors, and access to it is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).

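As an illustration (not taken from any in-tree driver), a DT-aware client
might resolve and reserve its lock roughly as follows; the example_probe()
function, the use of phandle index 0 and the error handling policy are
assumptions made for this sketch only::

  #include <linux/hwspinlock.h>
  #include <linux/of.h>
  #include <linux/platform_device.h>

  static int example_probe(struct platform_device *pdev)
  {
          struct hwspinlock *hwlock;
          int id;

          /* translate the hwlocks phandle at index 0 into a global lock id */
          id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
          if (id < 0)
                  return id;      /* possibly -EPROBE_DEFER */

          /* reserve that specific lock for this driver */
          hwlock = hwspin_lock_request_specific(id);
          if (!hwlock)
                  return -EBUSY;

          /* ... use the lock, and eventually hwspin_lock_free() it ... */

          return 0;
  }
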
::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved at the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

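The _irqsave variant must be paired with hwspin_unlock_irqrestore(), passing
the same flags variable to both calls. A minimal sketch, assuming the hwlock
handle was obtained earlier and that a 100 msecs timeout is acceptable (the
function name, timeout value and shared-data comment are assumptions of the
example, not requirements of the API)::

  #include <linux/hwspinlock.h>

  static int update_shared_area(struct hwspinlock *hwlock)
  {
          unsigned long flags;
          int ret;

          /* spin for at most 100 msecs; on success, irqs and preemption are off */
          ret = hwspin_lock_timeout_irqsave(hwlock, 100, &flags);
          if (ret)
                  return ret;     /* most likely -ETIMEDOUT */

          /* ... briefly touch the shared data here, without sleeping ... */

          /* release the lock and restore the saved interrupt state */
          hwspin_unlock_irqrestore(hwlock, &flags);

          return 0;
  }
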
::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).

The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
at the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.

Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve the id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

  #include <linux/hwspinlock.h>
  #include <linux/err.h>

  int hwspinlock_example1(void)
  {
          struct hwspinlock *hwlock;
          int id, ret;

          /* dynamically assign a hwspinlock */
          hwlock = hwspin_lock_request();
          if (!hwlock)
                  ...

          id = hwspin_lock_get_id(hwlock);
          /* probably need to communicate id to a remote processor now */

          /* take the lock, spin for 1 sec if it's already taken */
          ret = hwspin_lock_timeout(hwlock, 1000);
          if (ret)
                  ...

          /*
           * we took the lock, do our thing now, but do NOT sleep
           */

          /* release the lock */
          hwspin_unlock(hwlock);

          /* free the lock */
          ret = hwspin_lock_free(hwlock);
          if (ret)
                  ...

          return ret;
  }

  int hwspinlock_example2(void)
  {
          struct hwspinlock *hwlock;
          int ret;

          /*
           * assign a specific hwspinlock id - this should be called early
           * by board init code.
           */
          hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
          if (!hwlock)
                  ...

          /* try to take it, but don't spin on it */
          ret = hwspin_trylock(hwlock);
          if (ret) {
                  pr_info("lock is already taken\n");
                  return -EBUSY;
          }

          /*
           * we took the lock, do our thing now, but do NOT sleep
           */

          /* release the lock */
          hwspin_unlock(hwlock);

          /* free the lock */
          ret = hwspin_lock_free(hwlock);
          if (ret)
                  ...

          return ret;
  }

API for implementors
====================

::

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
                           const struct hwspinlock_ops *ops, int base_id,
                           int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or an appropriate error code on failure.

::

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks).

Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g.
-EBUSY if one of the hwspinlocks is still in use).

Important structs
=================

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

::

  /**
   * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
   * @dev: underlying device, will be used to invoke runtime PM api
   * @ops: platform-specific hwspinlock handlers
   * @base_id: id index of the first lock in this device
   * @num_locks: number of locks in this device
   * @lock: dynamically allocated array of 'struct hwspinlock'
   */
  struct hwspinlock_device {
          struct device *dev;
          const struct hwspinlock_ops *ops;
          int base_id;
          int num_locks;
          struct hwspinlock lock[0];
  };

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock::

  /**
   * struct hwspinlock - this struct represents a single hwspinlock instance
   * @bank: the hwspinlock_device structure which owns this lock
   * @lock: initialized and used by hwspinlock core
   * @priv: private data, owned by the underlying platform-specific hwspinlock drv
   */
  struct hwspinlock {
          struct hwspinlock_device *bank;
          spinlock_t lock;
          void *priv;
  };

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.

Implementation callbacks
========================

There are three possible callbacks defined in 'struct hwspinlock_ops'::

  struct hwspinlock_ops {
          int (*trylock)(struct hwspinlock *lock);
          void (*unlock)(struct hwspinlock *lock);
          void (*relax)(struct hwspinlock *lock);
  };

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by the hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not**
sleep.

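To tie the implementor-facing pieces together, below is a minimal,
illustrative sketch of a hypothetical platform driver registering a bank of
two locks. The vendor_* names, the register layout (one 32-bit register per
lock, where reading 1 grants the lock and writing 1 releases it) and the
base_id of 0 are all invented for this example; in-tree drivers get the
struct definitions from drivers/hwspinlock/hwspinlock_internal.h::

  #include <linux/err.h>
  #include <linux/hwspinlock.h>
  #include <linux/io.h>
  #include <linux/module.h>
  #include <linux/overflow.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>

  #include "hwspinlock_internal.h"

  #define VENDOR_NUM_LOCKS        2

  static int vendor_hwspinlock_trylock(struct hwspinlock *lock)
  {
          void __iomem *reg = lock->priv;

          /* per the invented HW: reading 1 grants the lock, 0 means it is taken */
          return readl(reg) == 1;
  }

  static void vendor_hwspinlock_unlock(struct hwspinlock *lock)
  {
          void __iomem *reg = lock->priv;

          /* per the invented HW: writing 1 releases the lock */
          writel(1, reg);
  }

  static const struct hwspinlock_ops vendor_hwspinlock_ops = {
          .trylock        = vendor_hwspinlock_trylock,
          .unlock         = vendor_hwspinlock_unlock,
          /* no ->relax(): the core will busy loop on ->trylock() alone */
  };

  static int vendor_hwspinlock_probe(struct platform_device *pdev)
  {
          struct hwspinlock_device *bank;
          void __iomem *base;
          int i;

          base = devm_platform_ioremap_resource(pdev, 0);
          if (IS_ERR(base))
                  return PTR_ERR(base);

          bank = devm_kzalloc(&pdev->dev,
                              struct_size(bank, lock, VENDOR_NUM_LOCKS),
                              GFP_KERNEL);
          if (!bank)
                  return -ENOMEM;

          /* the driver only fills in priv; the core initializes the rest */
          for (i = 0; i < VENDOR_NUM_LOCKS; i++)
                  bank->lock[i].priv = base + i * sizeof(u32);

          return hwspin_lock_register(bank, &pdev->dev, &vendor_hwspinlock_ops,
                                      0, VENDOR_NUM_LOCKS);
  }

A corresponding remove callback would call hwspin_lock_unregister() on the
bank; the device tree match table and module boilerplate that a real driver
needs are omitted here for brevity.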