		function tracer guts
		====================
		By Mike Frysinger

Introduction
------------

Here we will cover the architecture pieces that the common function tracing
code relies on for proper functioning.  Things are broken down into increasing
complexity so that you can start simple and at least get basic functionality.

Note that this focuses on architecture implementation details only.  If you
want more explanation of a feature in terms of common code, review the common
ftrace.txt file.

Ideally, everyone who wishes to retain performance while supporting tracing in
their kernel should make it all the way to dynamic ftrace support.


Prerequisites
-------------

Ftrace relies on these features being implemented:
	STACKTRACE_SUPPORT - implement save_stack_trace()
	TRACE_IRQFLAGS_SUPPORT - implement include/asm/irqflags.h



HAVE_FUNCTION_TRACER
--------------------

You will need to implement the mcount and the ftrace_stub functions.

The exact mcount symbol name will depend on your toolchain.  Some call it
"mcount", "_mcount", or even "__mcount".  You can probably figure it out by
running something like:
	$ echo 'main(){}' | gcc -x c -S -o - - -pg | grep mcount
	        call    mcount
We'll assume below that the symbol is "mcount" just to keep the examples
nice and simple.

Keep in mind that the ABI that is in effect inside of the mcount function is
*highly* architecture/toolchain specific.  We cannot help you in this regard,
sorry.  Dig up some old documentation and/or find someone more familiar than
you to bang ideas off of.  Typically, register usage (argument/scratch/etc...)
is a major issue at this point, especially in relation to the location of the
mcount call (before/after function prologue).  You might also want to look at
how glibc has implemented the mcount function for your architecture.  It might
be (semi-)relevant.

The mcount function should check the function pointer ftrace_trace_function
to see if it is set to ftrace_stub.  If it is, there is nothing for you to do,
so return immediately.  If it isn't, then call that function in the same way
the mcount function normally calls __mcount_internal -- the first argument is
the "frompc" while the second argument is the "selfpc" (adjusted to remove the
size of the mcount call that is embedded in the function).

For example, if the function foo() calls bar(), when the bar() function calls
mcount(), the arguments mcount() will pass to the tracer are:
	"frompc" - the address bar() will use to return to foo()
	"selfpc" - the address of bar() (adjusted by the mcount call size)

Also keep in mind that this mcount function will be called *a lot*, so
optimizing for the default case of no tracer will help the smooth running of
your system when tracing is disabled.  So the start of the mcount function
typically does the bare minimum of checking before returning.  That also
means the code flow should usually be kept linear (i.e. no branching in the
nop case).  This is of course an optimization and not a hard requirement.
Here is some pseudo code that should help (these functions should actually be
implemented in assembly):

void ftrace_stub(void)
{
	return;
}

void mcount(void)
{
	/* save any bare state needed in order to do initial checking */

	extern void (*ftrace_trace_function)(unsigned long, unsigned long);
	if (ftrace_trace_function != ftrace_stub)
		goto do_trace;

	/* restore any bare state */

	return;

do_trace:

	/* save all state needed by the ABI (see paragraph above) */

	unsigned long frompc = ...;
	unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;
	ftrace_trace_function(frompc, selfpc);

	/* restore all state needed by the ABI */
}

Don't forget to export mcount for modules!
	extern void mcount(void);
	EXPORT_SYMBOL(mcount);



HAVE_FUNCTION_GRAPH_TRACER
--------------------------

Deep breath ... time to do some real work.  Here you will need to update the
mcount function to check the ftrace graph function pointers, as well as
implement some functions to save (hijack) and restore the return address.

The mcount function should check the function pointers ftrace_graph_return
(compare to ftrace_stub) and ftrace_graph_entry (compare to
ftrace_graph_entry_stub).  If either of those is not set to the relevant stub
function, call the arch-specific function ftrace_graph_caller which in turn
calls the arch-specific function prepare_ftrace_return.  Neither of these
function names is strictly required, but you should use them anyway to stay
consistent across the architecture ports -- easier to compare & contrast
things.

The arguments to prepare_ftrace_return are slightly different from those
passed to ftrace_trace_function.  The second argument "selfpc" is the same,
but the first argument should be a pointer to the "frompc".  Typically this is
located on the stack.  This allows the function to hijack the return address
temporarily to have it point to the arch-specific function return_to_handler.
That function will simply call the common ftrace_return_to_handler function and
that will return the original return address with which you can return to the
original call site.

Here is the updated mcount pseudo code:
void mcount(void)
{
	...
	if (ftrace_trace_function != ftrace_stub)
		goto do_trace;

+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	extern void (*ftrace_graph_return)(...);
+	extern void (*ftrace_graph_entry)(...);
+	if (ftrace_graph_return != ftrace_stub ||
+	    ftrace_graph_entry != ftrace_graph_entry_stub)
+		ftrace_graph_caller();
+#endif

	/* restore any bare state */
	...

Here is the pseudo code for the new ftrace_graph_caller assembly function:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
void ftrace_graph_caller(void)
{
	/* save all state needed by the ABI */

	unsigned long *frompc = &...;
	unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;
	/* passing frame pointer up is optional -- see below */
	prepare_ftrace_return(frompc, selfpc, frame_pointer);

	/* restore all state needed by the ABI */
}
#endif

For information on how to implement prepare_ftrace_return(), simply look at the
x86 version (the frame pointer passing is optional; see the next section for
more information).  The only architecture-specific piece in it is the setup of
the fault recovery table (the asm(...) code).  The rest should be the same
across architectures.

Here is the pseudo code for the new return_to_handler assembly function.  Note
that the ABI that applies here is different from what applies to the mcount
code.  Since you are returning from a function (after the epilogue), you might
be able to skimp on things saved/restored (usually just registers used to pass
return values).

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
void return_to_handler(void)
{
	/* save all state needed by the ABI (see paragraph above) */

	void (*original_return_point)(void) = ftrace_return_to_handler();

	/* restore all state needed by the ABI */

	/* this is usually either a return or a jump */
	original_return_point();
}
#endif


HAVE_FUNCTION_GRAPH_FP_TEST
---------------------------

An arch may pass in a unique value (frame pointer) to both the entering and
exiting of a function.  On exit, the value is compared and, if it does not
match, the kernel will panic.  This is largely a sanity check for bad code
generation with gcc.  If gcc for your port sanely updates the frame pointer
under different optimization levels, then ignore this option.

However, adding support for it isn't terribly difficult.  In your assembly
code that calls prepare_ftrace_return(), pass the frame pointer as the 3rd
argument.  Then in the C version of that function, do what the x86 port does
and pass it along to ftrace_push_return_trace() instead of a stub value of 0.

Similarly, when you call ftrace_return_to_handler(), pass it the frame pointer.


HAVE_FTRACE_NMI_ENTER
---------------------

If you can't trace NMI functions, then skip this option.

<details to be filled>

HAVE_SYSCALL_TRACEPOINTS
------------------------

You need very few things to get syscall tracing working on an arch.

- Support HAVE_ARCH_TRACEHOOK (see arch/Kconfig).
- Have a NR_syscalls variable in <asm/unistd.h> that provides the number
  of syscalls supported by the arch.
- Support the TIF_SYSCALL_TRACEPOINT thread flag.
- Put the trace_sys_enter() and trace_sys_exit() tracepoint calls from ptrace
  in the ptrace syscall tracing path.
- If the system call table on this arch is more complicated than a simple array
  of addresses of the system calls, implement an arch_syscall_addr to return
  the address of a given system call.
- If the symbol names of the system calls do not match the function names on
  this arch, define ARCH_HAS_SYSCALL_MATCH_SYM_NAME in asm/ftrace.h and
  implement arch_syscall_match_sym_name with the appropriate logic to return
  true if the function name corresponds with the symbol name.
- Tag this arch as HAVE_SYSCALL_TRACEPOINTS.


HAVE_FTRACE_MCOUNT_RECORD
-------------------------

See scripts/recordmcount.pl for more info.  Just fill in the arch-specific
details for how to locate the addresses of mcount call sites via objdump.
This option doesn't make much sense without also implementing dynamic ftrace.


HAVE_DYNAMIC_FTRACE
-------------------

You will first need HAVE_FTRACE_MCOUNT_RECORD and HAVE_FUNCTION_TRACER, so
scroll your reader back up if you got overeager.

Once those are out of the way, you will need to implement:
	- asm/ftrace.h:
		- MCOUNT_ADDR
		- ftrace_call_adjust()
		- struct dyn_arch_ftrace{}
	- asm code:
		- mcount() (new stub)
		- ftrace_caller()
		- ftrace_call()
		- ftrace_stub()
	- C code:
		- ftrace_dyn_arch_init()
		- ftrace_make_nop()
		- ftrace_make_call()
		- ftrace_update_ftrace_func()

First you will need to fill out some arch details in your asm/ftrace.h.

Define MCOUNT_ADDR as the address of your mcount symbol, similar to:
	#define MCOUNT_ADDR ((unsigned long)mcount)
Since no one else will have a decl for that function, you will need to:
	extern void mcount(void);

You will also need the helper function ftrace_call_adjust().  Most people
will be able to stub it out like so:
	static inline unsigned long ftrace_call_adjust(unsigned long addr)
	{
		return addr;
	}
<details to be filled>

Lastly you will need the custom dyn_arch_ftrace structure.  If you need
some extra state when runtime patching arbitrary call sites, this is the
place.  For now though, create an empty struct:
	struct dyn_arch_ftrace {
		/* No extra data needed */
	};

With the header out of the way, we can fill out the assembly code.  While we
did already create a mcount() function earlier, dynamic ftrace only wants a
stub function.  This is because mcount() will only be used during boot
and then all references to it will be patched out, never to return.  Instead,
the guts of the old mcount() will be used to create a new ftrace_caller()
function.  Because the two are hard to merge, it will most likely be a lot
easier to have two separate definitions split up by #ifdefs.  Same goes for
the ftrace_stub() as that will now be inlined in ftrace_caller().

Before we get any more confused, let's check out some pseudo code so you can
implement your own stuff in assembly:

void mcount(void)
{
	return;
}

void ftrace_caller(void)
{
	/* save all state needed by the ABI (see paragraph above) */

	unsigned long frompc = ...;
	unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;

ftrace_call:
	ftrace_stub(frompc, selfpc);

	/* restore all state needed by the ABI */

ftrace_stub:
	return;
}

This might look a little odd at first, but keep in mind that we will be runtime
patching multiple things.  First, only functions that we actually want to trace
will be patched to call ftrace_caller().  Second, since we only have one tracer
active at a time, we will patch the ftrace_caller() function itself to call the
specific tracer in question.  That is the point of the ftrace_call label.

With that in mind, let's move on to the C code that will actually be doing the
runtime patching.  You'll need a little knowledge of your arch's opcodes in
order to make it through the next section.

Every arch has an init callback function.  If you need to do something early on
to initialize some state, this is the time to do that.  Otherwise, this simple
function below should be sufficient for most people:

int __init ftrace_dyn_arch_init(void)
{
	return 0;
}

There are two functions that are used to do runtime patching of arbitrary
functions.  The first is used to turn the mcount call site into a nop (which
is what helps us retain runtime performance when not tracing).  The second is
used to turn the mcount call site into a call to an arbitrary location (but
typically that is ftrace_caller()).  See the general function definitions in
linux/ftrace.h for the functions:
	ftrace_make_nop()
	ftrace_make_call()
The rec->ip value is the address of the mcount call site that was collected
by scripts/recordmcount.pl during build time.

The last function is used to do runtime patching of the active tracer.  This
will be modifying the assembly code at the location of the ftrace_call symbol
inside of the ftrace_caller() function.  So you should have sufficient padding
at that location to support the new function calls you'll be inserting.  Some
people will be using a "call" type instruction while others will be using a
"branch" type instruction.  Specifically, the function is:
	ftrace_update_ftrace_func()


HAVE_DYNAMIC_FTRACE + HAVE_FUNCTION_GRAPH_TRACER
------------------------------------------------

The function grapher needs a few tweaks in order to work with dynamic ftrace.
Basically, you will need to:
	- update:
		- ftrace_caller()
		- ftrace_graph_call()
		- ftrace_graph_caller()
	- implement:
		- ftrace_enable_ftrace_graph_caller()
		- ftrace_disable_ftrace_graph_caller()

<details to be filled>
Quick notes:
	- add a nop stub after the ftrace_call location named ftrace_graph_call;
	  the stub needs to be large enough to support a call to
	  ftrace_graph_caller()
	- update ftrace_graph_caller() to work with being called by the new
	  ftrace_caller() since some semantics may have changed
	- ftrace_enable_ftrace_graph_caller() will runtime patch the
	  ftrace_graph_call location with a call to ftrace_graph_caller()
	- ftrace_disable_ftrace_graph_caller() will runtime patch the
	  ftrace_graph_call location with nops