#
# General architecture dependent options
#

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than there are
	  hardware counters. This is realized by switching between events
	  at a user-specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

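#
# Illustrative sketch only (not part of the Kconfig syntax above): how a
# module might use the register_kprobe() interface mentioned in the KPROBES
# help text.  The probed symbol and handler names are hypothetical.
#
#	#include <linux/kernel.h>
#	#include <linux/kprobes.h>
#
#	static int pre_handler(struct kprobe *p, struct pt_regs *regs)
#	{
#		pr_info("kprobe hit at %p\n", p->addr);
#		return 0;	/* continue with the probed instruction */
#	}
#
#	static struct kprobe kp = {
#		.symbol_name	= "do_fork",	/* hypothetical probe point */
#		.pre_handler	= pre_handler,
#	};
#
#	/* register_kprobe(&kp) in module init, unregister_kprobe(&kp) on exit */
#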
config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM, has such
	  branches and includes support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are always very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

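#
# Illustrative sketch only, assuming the static_key interface that backs
# this option; the key and function names below are made up for the example.
#
#	#include <linux/jump_label.h>
#
#	extern void do_rare_feature_work(void);	/* hypothetical */
#
#	static struct static_key my_feature_key = STATIC_KEY_INIT_FALSE;
#
#	void hot_path(void)
#	{
#		/* Compiles to a nop until the key is enabled. */
#		if (static_key_false(&my_feature_key))
#			do_rare_feature_work();
#	}
#
#	/* Slow path: static_key_slow_inc(&my_feature_key) patches the nop
#	 * into a jump; static_key_slow_dec() restores the nop. */
#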
config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	depends on !PREEMPT

config UPROBES
	bool "Transparent user-space probes (EXPERIMENTAL)"
	depends on UPROBE_EVENT && PERF_EVENTS
	default n
	select PERCPU_RWSEM
	help
	  Uprobes are the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish non-intrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

	  If in doubt, say "N".

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

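#
# Illustrative sketch only: the get_unaligned()/put_unaligned() helpers
# referred to above, reading a field from a possibly misaligned buffer.
# The function name and field offset are hypothetical.
#
#	#include <linux/types.h>
#	#include <asm/unaligned.h>
#
#	u32 read_len(const void *pkt)
#	{
#		/* Safe on every architecture; typically compiles to a plain
#		 * load where HAVE_EFFICIENT_UNALIGNED_ACCESS is selected. */
#		return get_unaligned((const u32 *)(pkt + 2));
#	}
#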
config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

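#
# Illustrative sketch only: with this option the swab helpers can expand to
# the GCC builtins, so the compiler may fuse the swap with a nearby load or
# store.  The function name is hypothetical.
#
#	#include <linux/types.h>
#	#include <linux/swab.h>
#
#	u32 swap_word(u32 x)
#	{
#		return swab32(x);	/* may become __builtin_bswap32(x) */
#	}
#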
config HAVE_SYSCALL_WRAPPERS
	bool

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_NMI_WATCHDOG
	bool
#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_ATTRS
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config USE_GENERIC_SMP_HELPERS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

# Select if arch init_task initializer is different to init/init_task.c
config ARCH_INIT_TASK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

# Select if arch has its private alloc_thread_info() function
config ARCH_THREAD_INFO_ALLOCATOR
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example the kprobes-based event tracer needs this API.

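#
# Illustrative sketch only, assuming the asm/ptrace.h accessors this option
# advertises (regs_query_register_offset() and regs_get_register()); the
# helper and register name below are hypothetical.
#
#	#include <linux/ptrace.h>
#
#	static unsigned long read_named_reg(struct pt_regs *regs)
#	{
#		/* Look up a register by its arch-specific name, then
#		 * fetch it from pt_regs. */
#		int off = regs_query_register_offset("ax");
#
#		if (off < 0)
#			return 0;
#		return regs_get_register(regs, off);
#	}
#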
config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

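#
# Illustrative sketch only: the <linux/clk.h> gating calls mentioned above,
# as a driver might use them.  The function name and clock id are
# hypothetical.
#
#	#include <linux/clk.h>
#	#include <linux/device.h>
#	#include <linux/err.h>
#
#	static int enable_block_clock(struct device *dev)
#	{
#		struct clk *clk = clk_get(dev, "uart");	/* hypothetical id */
#
#		if (IS_ERR(clk))
#			return PTR_ERR(clk);
#		/* Ungate the clock; pair with clk_disable_unprepare() and
#		 * clk_put() when the device is quiesced. */
#		return clk_prepare_enable(clk);
#	}
#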
config HAVE_DMA_API_DEBUG
	bool

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, while others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. It also supports counting CPU cycle events to
	  determine how many clock cycles have elapsed in a given period.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  a bit mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_ARCH_MUTEX_CPU_RELAX
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However, selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.

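#
# Illustrative sketch only: a user-space task installing a minimal BPF
# filter via prctl(), in the style described in
# Documentation/prctl/seccomp_filter.txt.  The allow-everything policy
# shown here is purely for illustration.
#
#	#include <sys/prctl.h>
#	#include <linux/seccomp.h>
#	#include <linux/filter.h>
#
#	static struct sock_filter filter[] = {
#		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
#	};
#	static struct sock_fprog prog = {
#		.len	= sizeof(filter) / sizeof(filter[0]),
#		.filter	= filter,
#	};
#
#	/* prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
#	 * prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);		*/
#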
config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter()
	  through the slow path using the TIF_NOHZ flag. Exception
	  handlers must be wrapped as well. IRQs are already protected
	  inside rcu_irq_enter/rcu_irq_exit() but preemption or signal
	  handling on irq exit still needs to be protected.

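#
# Illustrative sketch only, assuming the user_enter()/user_exit() hooks
# named above: the rough shape of an arch's syscall slow path under this
# option.  The hook function names are hypothetical.
#
#	#include <linux/context_tracking.h>
#
#	/* called on syscall entry when TIF_NOHZ forces the slow path */
#	void syscall_trace_enter_hook(void)
#	{
#		user_exit();	/* tell RCU etc. we left userspace */
#	}
#
#	/* called just before returning to userspace */
#	void syscall_trace_leave_hook(void)
#	{
#		user_enter();	/* re-enter the userspace quiescent state */
#	}
#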
config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch-specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config GENERIC_SIGALTSTACK
	bool

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

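#
# Illustrative sketch only of the argument orders described above (not an
# authoritative prototype): the usual convention is
#	clone(flags, newsp, parent_tidptr, child_tidptr, tls)
# a CLONE_BACKWARDS architecture instead takes tls as the 4th argument:
#	clone(flags, newsp, parent_tidptr, tls, child_tidptr)
# and CLONE_BACKWARDS2 swaps the first two arguments:
#	clone(newsp, flags, ...)
#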
config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments.

source "kernel/gcov/Kconfig"