UNALIGNED MEMORY ACCESSES
=========================

Linux runs on a wide variety of architectures which have varying behaviour
when it comes to memory access. This document presents some details about
unaligned accesses, why you need to write code that doesn't cause them,
and how to write such code!


The definition of an unaligned access
=====================================

Unaligned memory accesses occur when you try to read N bytes of data starting
from an address that is not evenly divisible by N (i.e. addr % N != 0).
For example, reading 4 bytes of data from address 0x10004 is fine, but
reading 4 bytes of data from address 0x10005 would be an unaligned memory
access.
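
As a purely illustrative sketch (the function and the address mentioned in
the comment are hypothetical), the following statement compiles to a single
4-byte load; whether that load is aligned depends entirely on the runtime
value of the pointer:

	u32 example(const u8 *buf)
	{
		/* if buf happens to hold 0x10005, this is an unaligned access */
		return *(const u32 *)buf;
	}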

The above may seem a little vague, as memory access can happen in different
ways. The context here is at the machine code level: certain instructions read
or write a number of bytes to or from memory (e.g. movb, movw, movl in x86
assembly). As will become clear, it is relatively easy to spot C statements
which will compile to multiple-byte memory access instructions, namely when
dealing with types such as u16, u32 and u64.


Natural alignment
=================

The rule mentioned above forms what we refer to as natural alignment:
When accessing N bytes of memory, the base memory address must be evenly
divisible by N, i.e. addr % N == 0.
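
Expressed in code, the rule is simply a modulo (or mask) test on the address.
A minimal sketch follows; the helper name is made up for illustration, and
the kernel's IS_ALIGNED() macro provides an equivalent test for power-of-two
sizes:

	/* true if 'p' is suitably aligned for an n-byte access */
	static inline bool is_naturally_aligned(const void *p, unsigned long n)
	{
		return ((unsigned long)p % n) == 0;
	}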

When writing code, assume the target architecture has natural alignment
requirements.

In reality, only a few architectures require natural alignment on all sizes
of memory access. However, we must consider ALL supported architectures;
writing code that satisfies natural alignment requirements is the easiest way
to achieve full portability.


Why unaligned access is bad
===========================

The effects of performing an unaligned memory access vary from architecture
to architecture. It would be easy to write a whole document on the differences
here; a summary of the common scenarios is presented below:

 - Some architectures are able to perform unaligned memory accesses
   transparently, but there is usually a significant performance cost.
 - Some architectures raise processor exceptions when unaligned accesses
   happen. The exception handler is able to correct the unaligned access,
   at significant cost to performance.
 - Some architectures raise processor exceptions when unaligned accesses
   happen, but the exceptions do not contain enough information for the
   unaligned access to be corrected.
 - Some architectures are not capable of unaligned memory access, but will
   silently perform a different memory access to the one that was requested,
   resulting in a subtle code bug that is hard to detect!

It should be obvious from the above that if your code causes unaligned
memory accesses to happen, your code will not work correctly on certain
platforms and will cause performance problems on others.


Code that does not cause unaligned access
=========================================

At first, the concepts above may seem a little hard to relate to actual
coding practice. After all, you don't have a great deal of control over
memory addresses of certain variables, etc.

Fortunately things are not too complex, as in most cases, the compiler
ensures that things will work for you. For example, take the following
structure:

	struct foo {
		u16 field1;
		u32 field2;
		u8 field3;
	};

Let us assume that an instance of the above structure resides in memory
starting at address 0x10000. With a basic level of understanding, it would
not be unreasonable to expect that accessing field2 would cause an unaligned
access. You'd be expecting field2 to be located at offset 2 bytes into the
structure, i.e. address 0x10002, but that address is not evenly divisible
by 4 (remember, we're reading a 4 byte value here).

Fortunately, the compiler understands the alignment constraints, so in the
above case it would insert 2 bytes of padding in between field1 and field2.
Therefore, for standard structure types you can always rely on the compiler
to pad structures so that accesses to fields are suitably aligned (assuming
you do not cast the field to a type of different length).
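
As an illustration, the resulting layout of struct foo on a typical
architecture with natural alignment requirements is sketched below (the
exact padding is determined by the architecture ABI):

	/*
	 * offsetof(struct foo, field1) == 0
	 *   (2 bytes of padding)
	 * offsetof(struct foo, field2) == 4
	 * offsetof(struct foo, field3) == 8
	 *   (3 bytes of trailing padding)
	 * sizeof(struct foo)           == 12
	 */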

Similarly, you can also rely on the compiler to align variables and function
parameters to a naturally aligned scheme, based on the size of the type of
the variable.

At this point, it should be clear that accessing a single byte (u8 or char)
will never cause an unaligned access, because all memory addresses are evenly
divisible by one.

On a related topic, with the above considerations in mind you may observe
that you could reorder the fields in the structure in order to place fields
where padding would otherwise be inserted, and hence reduce the overall
resident memory size of structure instances. The optimal layout of the
above example is:

	struct foo {
		u32 field2;
		u16 field1;
		u8 field3;
	};

For a natural alignment scheme, the compiler would only have to add a single
byte of padding at the end of the structure. This padding is added in order
to satisfy alignment constraints for arrays of these structures.
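
The space saving can be seen directly; again an illustrative sketch for a
typical natural-alignment ABI:

	/*
	 * original order  (u16, u32, u8): sizeof(struct foo) == 12
	 * reordered order (u32, u16, u8): sizeof(struct foo) ==  8
	 */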

Another point worth mentioning is the use of __attribute__((packed)) on a
structure type. This GCC-specific attribute tells the compiler never to
insert any padding within structures, useful when you want to use a C struct
to represent some data that comes in a fixed arrangement 'off the wire'.

You might be inclined to believe that usage of this attribute can easily
lead to unaligned accesses when accessing fields that do not satisfy
architectural alignment requirements. However, again, the compiler is aware
of the alignment constraints and will generate extra instructions to perform
the memory access in a way that does not cause unaligned access. Of course,
the extra instructions cause a loss in performance compared to the non-packed
case, so the packed attribute should only be used when avoiding structure
padding is of importance.
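
As a minimal sketch of such an 'off the wire' structure (the field names
here are made up for illustration):

	struct wire_hdr {
		u8 type;
		u32 length;	/* offset 1: misaligned, but still safe to access */
		u16 checksum;
	} __attribute__((packed));

When the length field is read, the compiler emits byte loads (or another
safe sequence) on architectures that cannot perform the unaligned 4-byte
load directly.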


Code that causes unaligned access
=================================

With the above in mind, let's move on to a real life example of a function
that can cause an unaligned memory access. The following function taken
from include/linux/etherdevice.h is an optimized routine to compare two
ethernet MAC addresses for equality.

bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
{
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
		   ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));

	return fold == 0;
#else
	const u16 *a = (const u16 *)addr1;
	const u16 *b = (const u16 *)addr2;
	return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) == 0;
#endif
}

In the above function, when the hardware has efficient unaligned access
capability, there is no issue with this code. But when the hardware isn't
able to access memory on arbitrary boundaries, the reference to a[0] causes
2 bytes (16 bits) to be read from memory starting at address addr1.

Think about what would happen if addr1 was an odd address such as 0x10003.
(Hint: it'd be an unaligned access.)

Despite the potential unaligned access problems with the above function, it
is included in the kernel anyway but is understood to only work normally on
16-bit-aligned addresses. It is up to the caller to ensure this alignment or
not use this function at all. This alignment-unsafe function is still useful
as it is a decent optimization for the cases when you can ensure alignment,
which is true almost all of the time in the ethernet networking context.


Here is another example of some code that could cause unaligned accesses:
	void myfunc(u8 *data, u32 value)
	{
		[...]
		*((u32 *) data) = cpu_to_le32(value);
		[...]
	}

This code will cause unaligned accesses every time the data parameter points
to an address that is not evenly divisible by 4.

In summary, the 2 main scenarios where you may run into unaligned access
problems involve:
 1. Casting variables to types of different lengths
 2. Pointer arithmetic followed by access to at least 2 bytes of data


Avoiding unaligned accesses
===========================

The easiest way to avoid unaligned access is to use the get_unaligned() and
put_unaligned() macros provided by the <asm/unaligned.h> header file.

Going back to an earlier example of code that potentially causes unaligned
access:

	void myfunc(u8 *data, u32 value)
	{
		[...]
		*((u32 *) data) = cpu_to_le32(value);
		[...]
	}

To avoid the unaligned memory access, you would rewrite it as follows:

	void myfunc(u8 *data, u32 value)
	{
		[...]
		value = cpu_to_le32(value);
		put_unaligned(value, (u32 *) data);
		[...]
	}

The get_unaligned() macro works similarly. Assuming 'data' is a pointer to
memory and you wish to avoid unaligned access, its usage is as follows:

	u32 value = get_unaligned((u32 *) data);

These macros work for memory accesses of any length (not just 32 bits as
in the examples above). Be aware that when compared to standard access of
aligned memory, using these macros to access unaligned memory can be costly in
terms of performance.

If use of such macros is not convenient, another option is to use memcpy(),
where the source or destination (or both) are of type u8* or unsigned char*.
Due to the byte-wise nature of this operation, unaligned accesses are avoided.
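
For example, the earlier store could be written with memcpy() instead; a
minimal sketch:

	void myfunc(u8 *data, u32 value)
	{
		u32 tmp = cpu_to_le32(value);

		/* copies byte by byte, so no multi-byte access to 'data' */
		memcpy(data, &tmp, sizeof(tmp));
	}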


Alignment vs. Networking
========================

On architectures that require aligned loads, networking requires that the IP
header is aligned on a four-byte boundary to optimise the IP stack. For
regular ethernet hardware, the constant NET_IP_ALIGN is used. On most
architectures this constant has the value 2 because the normal ethernet
header is 14 bytes long, so in order to get proper alignment one needs to
DMA to an address which can be expressed as 4*n + 2. One notable exception
here is powerpc which defines NET_IP_ALIGN to 0 because DMA to unaligned
addresses can be very expensive and dwarf the cost of unaligned loads.
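
As an illustrative sketch, a driver that allocates its own receive buffers
commonly reserves NET_IP_ALIGN bytes of headroom before DMA so that the IP
header ends up four-byte aligned (the rx_len variable is hypothetical):

	skb = dev_alloc_skb(rx_len + NET_IP_ALIGN);
	if (!skb)
		return -ENOMEM;
	skb_reserve(skb, NET_IP_ALIGN);
	/* DMA the frame to skb->data, which now sits at a 4*n + 2 address */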

For some ethernet hardware that cannot DMA to unaligned addresses like
4*n+2 or non-ethernet hardware, this can be a problem, and it is then
required to copy the incoming frame into an aligned buffer. Because this is
unnecessary on architectures that can do unaligned accesses, the code can be
made dependent on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS like so:

#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	skb = original skb
#else
	skb = copy skb
#endif

--
Authors: Daniel Drake <dsd@gentoo.org>,
         Johannes Berg <johannes@sipsolutions.net>
With help from: Alan Cox, Avuton Olrich, Heikki Orsila, Jan Engelhardt,
Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock, Uli Kunitz,
Vadim Lobanov