#
# General architecture dependent options
#

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than counters
	  are provided by the hardware. This is realized by switching
	  between events at a user-specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

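# Illustrative sketch (an assumption, not part of this file): the
# register_kprobe() API named in the KPROBES help text above is typically
# used from a module roughly as below; the probed symbol name and the
# printed message are arbitrary examples.
#
#	static int handler_pre(struct kprobe *p, struct pt_regs *regs)
#	{
#		pr_info("kprobe hit at %p\n", p->addr);
#		return 0;	/* let the probed instruction run */
#	}
#
#	static struct kprobe kp = {
#		.symbol_name	= "do_fork",	/* arbitrary example symbol */
#		.pre_handler	= handler_pre,
#	};
#
#	/* module init: */  register_kprobe(&kp);
#	/* module exit: */  unregister_kprobe(&kp);
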
config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM, has such
	  branches and includes support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

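# Illustrative sketch (an assumption, not part of this file): the kind of
# static-key branch this option optimizes, using the jump-label API from
# <linux/jump_label.h> of this era; the key and function names are made up.
#
#	static struct static_key my_key = STATIC_KEY_INIT_FALSE;
#
#	/* fast path: compiles to a NOP while the key is disabled */
#	if (static_key_false(&my_key))
#		do_rare_thing();
#
#	/* rare, slow update path: patches the NOP into a jump and back */
#	static_key_slow_inc(&my_key);
#	static_key_slow_dec(&my_key);
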
config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	depends on !PREEMPT

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	select PERCPU_RWSEM
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. they trap on
	  unaligned accesses and require fixing them up in the exception
	  handler).

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

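# Illustrative sketch (an assumption, not part of this file): the
# get_unaligned/put_unaligned helpers mentioned above, from
# <asm/unaligned.h>, are what portable code uses when a pointer may not
# be naturally aligned; the variable names are made up.
#
#	u32 val;
#
#	val = get_unaligned((u32 *)ptr);	/* safe on any architecture */
#	put_unaligned(val + 1, (u32 *)ptr);
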
config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_NMI_WATCHDOG
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_ATTRS
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

# Select if arch init_task initializer is different to init/init_task.c
config ARCH_INIT_TASK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

# Select if arch has its private alloc_thread_info() function
config ARCH_THREAD_INFO_ALLOCATOR
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h. For example, the kprobes-based event
	  tracer needs this API.

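# Illustrative sketch (an assumption, not part of this file): the pt_regs
# access API referred to above, as used e.g. by the kprobes event tracer;
# the register name "ax" is an x86-specific example.
#
#	unsigned long val, nth;
#
#	val = regs_get_register(regs, regs_query_register_offset("ax"));
#	nth = regs_get_kernel_stack_nth(regs, 2);	/* 2nd stack entry */
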
config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_DMA_API_DEBUG
	bool

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. This also supports calculating CPU cycle events
	  to determine how many clock cycles elapsed in a given period.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However, selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

	  For best performance, an arch should use seccomp_phase1 and
	  seccomp_phase2 directly. It should call seccomp_phase1 for all
	  syscalls if TIF_SECCOMP is set, but seccomp_phase1 does not
	  need to be called from a ptrace-safe context. It must then
	  call seccomp_phase2 if seccomp_phase1 returns anything other
	  than SECCOMP_PHASE1_OK or SECCOMP_PHASE1_SKIP.

	  As an additional optimization, an arch may provide seccomp_data
	  directly to seccomp_phase1; this avoids multiple calls
	  to the syscall_xyz helpers for every syscall.

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.

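# Illustrative sketch (an assumption, not part of this file): a minimal
# user-space filter of the kind described above, installed with prctl();
# the choice of __NR_ptrace and EPERM is an arbitrary example.
#
#	struct sock_filter filter[] = {
#		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
#			 offsetof(struct seccomp_data, nr)),
#		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_ptrace, 0, 1),
#		BPF_STMT(BPF_RET | BPF_K,
#			 SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
#		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
#	};
#	struct sock_fprog prog = {
#		.len = sizeof(filter) / sizeof(filter[0]),
#		.filter = filter,
#	};
#
#	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
#	prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
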
config HAVE_CC_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - its compiler supports the -fstack-protector option
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_STACKPROTECTOR
	def_bool n
	help
	  Set when a stack-protector mode is enabled, so that the build
	  can enable kernel-side support for the GCC feature.

choice
	prompt "Stack Protector buffer overflow detection"
	depends on HAVE_CC_STACKPROTECTOR
	default CC_STACKPROTECTOR_NONE
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

config CC_STACKPROTECTOR_NONE
	bool "None"
	help
	  Disable "stack-protector" GCC feature.

config CC_STACKPROTECTOR_REGULAR
	bool "Regular"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config CC_STACKPROTECTOR_STRONG
	bool "Strong"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

endchoice

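# Illustrative sketch (an assumption, not part of this file): two functions
# showing the difference between the "Regular" and "Strong" conditions
# described above; the function and variable names are made up.
#
#	/* gets a canary under -fstack-protector-strong only: a local
#	 * array of non-character type, smaller than 8 bytes */
#	int sum_two(void)
#	{
#		int v[2] = { 1, 2 };
#		return v[0] + v[1];
#	}
#
#	/* gets a canary under both modes: char array of 8+ bytes */
#	void copy_name(const char *s)
#	{
#		char buf[16];
#		strcpy(buf, s);
#	}
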
config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter()
	  through the slow path using the TIF_NOHZ flag. Exception
	  handlers must be wrapped as well. Irqs are already protected
	  inside rcu_irq_enter()/rcu_irq_exit(), but preemption or signal
	  handling on irq exit still needs to be protected.

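# Illustrative sketch (an assumption, not part of this file): the shape of
# the syscall slow-path wrapping described above, using user_exit() and
# user_enter() from <linux/context_tracking.h>; the surrounding code is
# purely schematic.
#
#	/* syscall entry slow path, reached when TIF_NOHZ is set */
#	user_exit();		/* tell context tracking: now in the kernel */
#
#	/* ... handle the syscall ... */
#
#	/* syscall exit slow path */
#	user_enter();		/* about to resume user-space execution */
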
config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_UNDERSCORE_SYMBOL_PREFIX
	bool
	help
	  Some architectures generate an _ in front of C symbols; things like
	  module loading and assembly files need to know about this.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  The architecture executes not only the irq handler but also
	  irq_exit() on the irq stack. This way softirqs can be processed
	  on this irq stack instead of switching to a new one when
	  __do_softirq() is called at the end of a hardirq.
	  This spares a stack switch and improves cache usage for softirq
	  processing.
| 489 | |
Kirill A. Shutemov | 235a8f0 | 2015-04-14 15:46:17 -0700 | [diff] [blame] | 490 | config PGTABLE_LEVELS |
| 491 | int |
| 492 | default 2 |
| 493 | |
Kees Cook | 2b68f6c | 2015-04-14 15:48:00 -0700 | [diff] [blame] | 494 | config ARCH_HAS_ELF_RANDOMIZE |
| 495 | bool |
| 496 | help |
| 497 | An architecture supports choosing randomized locations for |
| 498 | stack, mmap, brk, and ET_DYN. Defined functions: |
| 499 | - arch_mmap_rnd() |
Kees Cook | 204db6e | 2015-04-14 15:48:12 -0700 | [diff] [blame] | 500 | - arch_randomize_brk() |
Kees Cook | 2b68f6c | 2015-04-14 15:48:00 -0700 | [diff] [blame] | 501 | |
Al Viro | d212504 | 2012-10-23 13:17:59 -0400 | [diff] [blame] | 502 | # |
| 503 | # ABI hall of shame |
| 504 | # |
| 505 | config CLONE_BACKWARDS |
| 506 | bool |
| 507 | help |
| 508 | Architecture has tls passed as the 4th argument of clone(2), |
| 509 | not the 5th one. |
| 510 | |
| 511 | config CLONE_BACKWARDS2 |
| 512 | bool |
| 513 | help |
| 514 | Architecture has the first two arguments of clone(2) swapped. |
| 515 | |
Michal Simek | dfa9771 | 2013-08-13 16:00:53 -0700 | [diff] [blame] | 516 | config CLONE_BACKWARDS3 |
| 517 | bool |
| 518 | help |
| 519 | Architecture has tls passed as the 3rd argument of clone(2), |
| 520 | not the 5th one. |
| 521 | |
Al Viro | eaca6ea | 2012-11-25 23:12:10 -0500 | [diff] [blame] | 522 | config ODD_RT_SIGACTION |
| 523 | bool |
| 524 | help |
| 525 | Architecture has unusual rt_sigaction(2) arguments |
| 526 | |
Al Viro | 0a0e8cd | 2012-12-25 16:04:12 -0500 | [diff] [blame] | 527 | config OLD_SIGSUSPEND |
| 528 | bool |
| 529 | help |
| 530 | Architecture has old sigsuspend(2) syscall, of one-argument variety |
| 531 | |
| 532 | config OLD_SIGSUSPEND3 |
| 533 | bool |
| 534 | help |
| 535 | Even weirder antique ABI - three-argument sigsuspend(2) |
| 536 | |
Al Viro | 495dfbf | 2012-12-25 19:09:45 -0500 | [diff] [blame] | 537 | config OLD_SIGACTION |
| 538 | bool |
| 539 | help |
| 540 | Architecture has old sigaction(2) syscall. Nope, not the same |
| 541 | as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2), |
| 542 | but fairly different variant of sigaction(2), thanks to OSF/1 |
| 543 | compatibility... |
| 544 | |
| 545 | config COMPAT_OLD_SIGACTION |
| 546 | bool |
| 547 | |
Peter Oberparleiter | 2521f2c | 2009-06-17 16:28:08 -0700 | [diff] [blame] | 548 | source "kernel/gcov/Kconfig" |