Semantics and Behavior of Local Atomic Operations

			    Mathieu Desnoyers


	This document explains the purpose of the local atomic operations, how
to implement them for any given architecture and shows how they can be used
properly. It also stresses the precautions which must be taken when reading
those local variables across CPUs when the order of memory writes matters.


* Purpose of local atomic operations

Local atomic operations are meant to provide fast and highly reentrant per CPU
counters. They minimize the performance cost of standard atomic operations by
removing the LOCK prefix and memory barriers normally required to synchronize
across CPUs.

Having fast per CPU atomic counters is interesting in many cases: it does not
require disabling interrupts to protect from interrupt handlers and it permits
coherent counters in NMI handlers. It is especially useful for tracing purposes
and for various performance monitoring counters.

Local atomic operations only guarantee variable modification atomicity with
respect to the CPU which owns the data. Therefore, care must be taken to make
sure that only one CPU writes to the local_t data. This is done by using per
cpu data and making sure that we modify it from within a preemption safe
context. It is however permitted to read local_t data from any CPU: it will
then appear to be written out of order with respect to other memory writes by
the owner CPU.


* Implementation for a given architecture

It can be done by slightly modifying the standard atomic operations: only
their UP variant must be kept. It typically means removing the LOCK prefix (on
i386 and x86_64) and any SMP synchronization barrier. If the architecture does
not have a different behavior between SMP and UP, including asm-generic/local.h
in your architecture's local.h is sufficient.

The local_t type is defined as an opaque signed long by embedding an
atomic_long_t inside a structure. This is made so a cast from this type to a
long fails. The definition looks like:

typedef struct { atomic_long_t a; } local_t;

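On architectures where asm-generic/local.h is sufficient, the local ops simply
fall back to the generic atomic_long_t operations. A condensed sketch of what
that header provides (not the complete set of operations) looks like:

```c
/* Sketch: local ops implemented as atomic_long_t operations. Since the
 * variable is only written by its owner CPU, the UP-safe atomic_long
 * primitives are enough; no cross-CPU LOCK prefix or barrier is needed. */
#define LOCAL_INIT(i)		{ ATOMIC_LONG_INIT(i) }

#define local_read(l)		atomic_long_read(&(l)->a)
#define local_set(l, i)		atomic_long_set(&(l)->a, (i))
#define local_inc(l)		atomic_long_inc(&(l)->a)
#define local_dec(l)		atomic_long_dec(&(l)->a)
#define local_add(i, l)		atomic_long_add((i), &(l)->a)
#define local_sub(i, l)		atomic_long_sub((i), &(l)->a)
```

An architecture with cheaper single-CPU atomics (such as x86, which can drop
the LOCK prefix) overrides these with its own asm implementation instead.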

* Rules to follow when using local atomic operations

- Variables touched by local ops must be per cpu variables.
- _Only_ the CPU owner of these variables must write to them.
- This CPU can use local ops from any context (process, irq, softirq, nmi, ...)
  to update its local_t variables.
- Preemption (or interrupts) must be disabled when using local ops in
  process context to make sure the process won't be migrated to a
  different CPU between getting the per-cpu variable and doing the
  actual local op.
- When using local ops in interrupt context, no special care must be
  taken on a mainline kernel, since they will run on the local CPU with
  preemption already disabled. I suggest, however, explicitly disabling
  preemption anyway to make sure it will still work correctly on
  -rt kernels.
- Reading the local cpu variable will provide the current copy of the
  variable.
- Reads of these variables can be done from any CPU, because updates to
  "long", aligned, variables are always atomic. Since no memory
  synchronization is done by the writer CPU, an outdated copy of the
  variable can be read when reading some _other_ cpu's variables.
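The interrupt-context rule can be sketched as follows; sample_irq_handler is a
hypothetical handler (it would also need linux/interrupt.h), and counters is a
per-cpu local_t variable:

```c
/* Sketch: on a mainline kernel the preempt_disable()/preempt_enable()
 * pair is redundant in a hardirq handler, but it keeps the code correct
 * on -rt kernels, where interrupt handlers can run in preemptible
 * threads. */
static irqreturn_t sample_irq_handler(int irq, void *dev_id)
{
	preempt_disable();
	local_inc(&__get_cpu_var(counters));
	preempt_enable();
	return IRQ_HANDLED;
}
```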


* How to use local atomic operations

#include <linux/percpu.h>
#include <asm/local.h>

static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);


* Counting

Counting is done on all the bits of a signed long.

In preemptible context, use get_cpu_var() and put_cpu_var() around local atomic
operations: it makes sure that preemption is disabled around write access to
the per cpu variable. For instance:

	local_inc(&get_cpu_var(counters));
	put_cpu_var(counters);

If you are already in a preemption-safe context, you can directly use
__get_cpu_var() instead.

	local_inc(&__get_cpu_var(counters));



* Reading the counters

Those local counters can be read from foreign CPUs to sum the count. Note that
the data seen by local_read across CPUs must be considered to be out of order
relative to other memory writes happening on the CPU that owns the data.

	long sum = 0;
	for_each_online_cpu(cpu)
		sum += local_read(&per_cpu(counters, cpu));

If you want to use a remote local_read to synchronize access to a resource
between CPUs, explicit smp_wmb() and smp_rmb() memory barriers must be used
respectively on the writer and the reader CPUs. It would be the case if you use
the local_t variable as a counter of bytes written in a buffer: there should
be a smp_wmb() between the buffer write and the counter increment and also a
smp_rmb() between the counter read and the buffer read.
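The byte-counter case can be sketched as follows; buf, bytes_written, c and
nbytes are hypothetical names, and the writer side runs on the CPU that owns
the counter:

```c
/* Writer (owner CPU): fill the buffer, then publish the new count. */
buf[local_read(&__get_cpu_var(bytes_written))] = c;
smp_wmb();	/* order the buffer write before the counter increment */
local_inc(&__get_cpu_var(bytes_written));

/* Reader (any CPU): read the published count, then the buffer. */
nbytes = local_read(&per_cpu(bytes_written, cpu));
smp_rmb();	/* order the counter read before the buffer reads */
/* ... it is now safe to read the first nbytes bytes of buf ... */
```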


Here is a sample module which implements a basic per cpu counter using local.h.

--- BEGIN ---
/* test-local.c
 *
 * Sample module for local.h usage.
 */


#include <asm/local.h>
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/timer.h>

static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);

static struct timer_list test_timer;

/* IPI called on each CPU. */
static void test_each(void *info)
{
	/* Increment the counter from a non preemptible context */
	printk("Increment on cpu %d\n", smp_processor_id());
	local_inc(&__get_cpu_var(counters));

	/* This is what incrementing the variable would look like within a
	 * preemptible context (it disables preemption):
	 *
	 * local_inc(&get_cpu_var(counters));
	 * put_cpu_var(counters);
	 */
}

static void do_test_timer(unsigned long data)
{
	int cpu;

	/* Increment the counters */
	on_each_cpu(test_each, NULL, 0, 1);
	/* Read all the counters */
	printk("Counters read from CPU %d\n", smp_processor_id());
	for_each_online_cpu(cpu) {
		printk("Read : CPU %d, count %ld\n", cpu,
			local_read(&per_cpu(counters, cpu)));
	}
	del_timer(&test_timer);
	test_timer.expires = jiffies + 1000;
	add_timer(&test_timer);
}

static int __init test_init(void)
{
	/* initialize the timer that will increment the counter */
	init_timer(&test_timer);
	test_timer.function = do_test_timer;
	test_timer.expires = jiffies + 1;
	add_timer(&test_timer);

	return 0;
}

static void __exit test_exit(void)
{
	del_timer_sync(&test_timer);
}

module_init(test_init);
module_exit(test_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mathieu Desnoyers");
MODULE_DESCRIPTION("Local Atomic Ops");
--- END ---