Hardware Spinlock Framework

1. Introduction

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on a simple data structure that is shared between
the remote processors, and access to it is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

2. User API

  struct hwspinlock *hwspin_lock_request(void);
   - dynamically assign an hwspinlock and return its address, or NULL
     in case an unused hwspinlock isn't available. Users of this
     API will usually want to communicate the lock's id to the remote core
     before it can be used to achieve synchronization.
     Should be called from a process context (might sleep).

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);
   - assign a specific hwspinlock id and return its address, or NULL
     if that hwspinlock is already in use. Usually board code will
     be calling this function in order to reserve specific hwspinlock
     ids for predefined purposes.
     Should be called from a process context (might sleep).

  int hwspin_lock_free(struct hwspinlock *hwlock);
   - free a previously-assigned hwspinlock; returns 0 on success, or an
     appropriate error code on failure (e.g. -EINVAL if the hwspinlock
     is already free).
     Should be called from a process context (might sleep).

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy-loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption is disabled so
     the caller must not sleep, and is advised to release the hwspinlock as
     soon as possible, in order to minimize remote cores polling on the
     hardware interconnect.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy-loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption and the local
     interrupts are disabled, so the caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy-loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption is disabled,
     local interrupts are disabled and their previous state is saved at the
     given flags placeholder. The caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

  int hwspin_trylock(struct hwspinlock *hwlock);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption is disabled so
     the caller must not sleep, and is advised to release the hwspinlock as
     soon as possible, in order to minimize remote cores polling on the
     hardware interconnect.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  int hwspin_trylock_irq(struct hwspinlock *hwlock);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption and the local
     interrupts are disabled so the caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption is disabled,
     the local interrupts are disabled and their previous state is saved
     at the given flags placeholder. The caller must not sleep, and is advised
     to release the hwspinlock as soon as possible.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  void hwspin_unlock(struct hwspinlock *hwlock);
   - unlock a previously-locked hwspinlock. Always succeeds, and can be called
     from any context (the function never sleeps). Note: code should _never_
     unlock an hwspinlock which is already unlocked (there is no protection
     against this).

  void hwspin_unlock_irq(struct hwspinlock *hwlock);
   - unlock a previously-locked hwspinlock and enable local interrupts.
     The caller should _never_ unlock an hwspinlock which is already unlocked.
     Doing so is considered a bug (there is no protection against this).
     Upon a successful return from this function, preemption and local
     interrupts are enabled. This function will never sleep.

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);
   - unlock a previously-locked hwspinlock.
     The caller should _never_ unlock an hwspinlock which is already unlocked.
     Doing so is considered a bug (there is no protection against this).
     Upon a successful return from this function, preemption is reenabled,
     and the state of the local interrupts is restored to the state saved at
     the given flags. This function will never sleep.
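
For instance, a minimal sketch pairing the irqsave lock with the irqrestore
unlock might look as follows (the function name is made up for illustration,
<linux/hwspinlock.h> is assumed to be included as in section 3 below, and
hwlock is assumed to have been requested already):

int hwspinlock_irqsave_example(struct hwspinlock *hwlock)
{
        unsigned long flags;
        int ret;

        /* take the lock, spin for up to 10 msecs, save + disable local irqs */
        ret = hwspin_lock_timeout_irqsave(hwlock, 10, &flags);
        if (ret)
                return ret;

        /*
         * critical section: preemption and local interrupts are disabled
         * here, so do NOT sleep
         */

        /* release the lock and restore the previous interrupt state */
        hwspin_unlock_irqrestore(hwlock, &flags);

        return 0;
}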

  int hwspin_lock_get_id(struct hwspinlock *hwlock);
   - retrieve the id number of a given hwspinlock. This is needed when an
     hwspinlock is dynamically assigned: before it can be used to achieve
     mutual exclusion with a remote cpu, the id number should be communicated
     to the remote task with which we want to synchronize.
     Returns the hwspinlock id number, or -EINVAL if hwlock is null.
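
As a sketch of communicating the id, assuming some platform-specific IPC
primitive is available (ipc_send_u32() below is a made-up stand-in, not part
of this framework or of any real IPC stack):

#include <linux/hwspinlock.h>
#include <linux/types.h>

/* stand-in for whatever IPC primitive the platform provides */
extern int ipc_send_u32(unsigned int remote_proc, u32 val);

static int share_hwlock_id(struct hwspinlock *hwlock, unsigned int remote_proc)
{
        int id = hwspin_lock_get_id(hwlock);

        if (id < 0)
                return id;

        return ipc_send_u32(remote_proc, id);
}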

3. Typical usage

#include <linux/hwspinlock.h>
#include <linux/err.h>

int hwspinlock_example1(void)
{
        struct hwspinlock *hwlock;
        int ret, id;

        /* dynamically assign a hwspinlock */
        hwlock = hwspin_lock_request();
        if (!hwlock)
                ...

        id = hwspin_lock_get_id(hwlock);
        /* probably need to communicate id to a remote processor now */

        /* take the lock, spin for 1 sec if it's already taken */
        ret = hwspin_lock_timeout(hwlock, 1000);
        if (ret)
                ...

        /*
         * we took the lock, do our thing now, but do NOT sleep
         */

        /* release the lock */
        hwspin_unlock(hwlock);

        /* free the lock */
        ret = hwspin_lock_free(hwlock);
        if (ret)
                ...

        return ret;
}

int hwspinlock_example2(void)
{
        struct hwspinlock *hwlock;
        int ret;

        /*
         * assign a specific hwspinlock id - this should be called early
         * by board init code.
         */
        hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
        if (!hwlock)
                ...

        /* try to take it, but don't spin on it */
        ret = hwspin_trylock(hwlock);
        if (ret) {
                pr_info("lock is already taken\n");
                return ret;
        }

        /*
         * we took the lock, do our thing now, but do NOT sleep
         */

        /* release the lock */
        hwspin_unlock(hwlock);

        /* free the lock */
        ret = hwspin_lock_free(hwlock);
        if (ret)
                ...

        return ret;
}

4. API for implementors

  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
                const struct hwspinlock_ops *ops, int base_id, int num_locks);
   - to be called from the underlying platform-specific implementation, in
     order to register a new hwspinlock device (which is usually a bank of
     numerous locks). Should be called from a process context (this function
     might sleep).
     Returns 0 on success, or appropriate error code on failure.

  int hwspin_lock_unregister(struct hwspinlock_device *bank);
   - to be called from the underlying vendor-specific implementation, in order
     to unregister an hwspinlock device (which is usually a bank of numerous
     locks).
     Should be called from a process context (this function might sleep).
     Returns 0 on success, or an appropriate error code on failure (e.g.
     -EBUSY if the hwspinlock is still in use).

5. Important structs

struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.

/**
 * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
 * @dev: underlying device, will be used to invoke runtime PM api
 * @ops: platform-specific hwspinlock handlers
 * @base_id: id index of the first lock in this device
 * @num_locks: number of locks in this device
 * @lock: dynamically allocated array of 'struct hwspinlock'
 */
struct hwspinlock_device {
        struct device *dev;
        const struct hwspinlock_ops *ops;
        int base_id;
        int num_locks;
        struct hwspinlock lock[0];
};

struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock:

/**
 * struct hwspinlock - this struct represents a single hwspinlock instance
 * @bank: the hwspinlock_device structure which owns this lock
 * @lock: initialized and used by hwspinlock core
 * @priv: private data, owned by the underlying platform-specific hwspinlock drv
 */
struct hwspinlock {
        struct hwspinlock_device *bank;
        spinlock_t lock;
        void *priv;
};

When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.
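
As a rough sketch of what this means for a vendor driver (which lives next to
the hwspinlock core and includes its hwspinlock_internal.h for the struct
definitions shown above), allocating a bank can look like the following. All
my_* names and the register layout (one 32-bit lock register per lock,
starting at io_base) are illustrative assumptions, not a real driver:

#include <linux/slab.h>
#include <linux/io.h>

#include "hwspinlock_internal.h"        /* struct hwspinlock_device */

static struct hwspinlock_device *
my_hwspinlock_alloc_bank(void __iomem *io_base, int num_locks)
{
        struct hwspinlock_device *bank;
        int i;

        /* the bank embeds a variable-length array of struct hwspinlock */
        bank = kzalloc(sizeof(*bank) + num_locks * sizeof(bank->lock[0]),
                       GFP_KERNEL);
        if (!bank)
                return NULL;

        /* only priv is set by the driver; the core initializes the rest */
        for (i = 0; i < num_locks; i++)
                bank->lock[i].priv = io_base + i * sizeof(u32);

        return bank;
}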

6. Implementation callbacks

There are three possible callbacks defined in 'struct hwspinlock_ops':

struct hwspinlock_ops {
        int (*trylock)(struct hwspinlock *lock);
        void (*unlock)(struct hwspinlock *lock);
        void (*relax)(struct hwspinlock *lock);
};

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may _not_ sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may _not_ sleep.

The ->relax() callback is optional. It is called by the hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may _not_ sleep.
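
For completeness, here is a hedged sketch of what a vendor-specific
implementation of these callbacks might look like. It assumes hypothetical
hardware where reading a lock's 32-bit register returns 0 when the lock has
just been acquired by the reader, and writing 0 releases it; the my_* names
and register semantics are assumptions for illustration only:

#include <linux/io.h>
#include <linux/delay.h>

#include "hwspinlock_internal.h"        /* struct hwspinlock, hwspinlock_ops */

/* hypothetical register semantics, not real hardware */
#define MY_LOCK_FREE    0       /* read value 0: we now own the lock */
#define MY_UNLOCK_VAL   0       /* writing 0 releases the lock       */

static int my_hwspinlock_trylock(struct hwspinlock *lock)
{
        void __iomem *lock_addr = lock->priv;

        /* a single attempt; return 1 on success, 0 on failure */
        return readl(lock_addr) == MY_LOCK_FREE;
}

static void my_hwspinlock_unlock(struct hwspinlock *lock)
{
        void __iomem *lock_addr = lock->priv;

        writel(MY_UNLOCK_VAL, lock_addr);
}

static void my_hwspinlock_relax(struct hwspinlock *lock)
{
        /* back off a bit between two ->trylock() attempts */
        ndelay(50);
}

static const struct hwspinlock_ops my_hwspinlock_ops = {
        .trylock        = my_hwspinlock_trylock,
        .unlock         = my_hwspinlock_unlock,
        .relax          = my_hwspinlock_relax,
};

Such an ops structure, together with a bank allocated and filled in as
sketched in section 5, would then be handed to hwspin_lock_register(bank,
dev, &my_hwspinlock_ops, base_id, num_locks) from the driver's probe code.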