REDUCING OS JITTER DUE TO PER-CPU KTHREADS

This document lists per-CPU kthreads in the Linux kernel and presents
options to control their OS jitter. Note that non-per-CPU kthreads are
not listed here. To reduce OS jitter from non-per-CPU kthreads, bind
them to a "housekeeping" CPU dedicated to such work.
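
For example, an application task (or a non-per-CPU kthread) can be
confined to a housekeeping CPU with the taskset command; here CPU 0 is
assumed to be the housekeeping CPU, and "my_app" and $PID are
placeholders for the program and for an already-running task's PID:

    taskset -c 0 ./my_app     # Start my_app bound to CPU 0.
    taskset -cp 0 $PID        # Move an already-running task to CPU 0.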


REFERENCES

o Documentation/IRQ-affinity.txt: Binding interrupts to sets of CPUs.

o Documentation/cgroups: Using cgroups to bind tasks to sets of CPUs.

o man taskset: Using the taskset command to bind tasks to sets
  of CPUs.

o man sched_setaffinity: Using the sched_setaffinity() system
  call to bind tasks to sets of CPUs.

o /sys/devices/system/cpu/cpuN/online: Control CPU N's hotplug state,
  writing "0" to offline and "1" to online.

o In order to locate kernel-generated OS jitter on CPU N:

    cd /sys/kernel/debug/tracing
    echo 1 > max_graph_depth # Increase the "1" for more detail
    echo function_graph > current_tracer
    # run workload
    cat per_cpu/cpuN/trace


KTHREADS

Name: ehca_comp/%u
Purpose: Periodically process Infiniband-related work.
To reduce its OS jitter, do any of the following:
1. Don't use eHCA Infiniband hardware, instead choosing hardware
   that does not require per-CPU kthreads. This will prevent these
   kthreads from being created in the first place. (This will
   work for most people, as this hardware, though important, is
   relatively old and is produced in relatively low unit volumes.)
2. Do all eHCA-Infiniband-related work on other CPUs, including
   interrupts.
3. Rework the eHCA driver so that its per-CPU kthreads are
   provisioned only on selected CPUs.


Name: irq/%d-%s
Purpose: Handle threaded interrupts.
To reduce its OS jitter, do the following:
1. Use irq affinity to force the irq threads to execute on
   some other CPU.
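
For example, assuming a four-CPU system on which CPU 3 is to be
de-jittered and the interrupt in question is IRQ 77 (both numbers are
placeholders for your system's values), the corresponding irq thread
can be confined to the remaining CPUs as follows:

    echo 0-2 > /proc/irq/77/smp_affinity_list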

Name: kcmtpd_ctr_%d
Purpose: Handle Bluetooth work.
To reduce its OS jitter, do one of the following:
1. Don't use Bluetooth, in which case these kthreads won't be
   created in the first place.
2. Use irq affinity to force Bluetooth-related interrupts to
   occur on some other CPU and furthermore initiate all
   Bluetooth activity on some other CPU.

Name: ksoftirqd/%u
Purpose: Execute softirq handlers when threaded or when under heavy load.
To reduce its OS jitter, each softirq vector must be handled
separately as follows:
TIMER_SOFTIRQ: Do all of the following:
1. To the extent possible, keep the CPU out of the kernel when it
   is non-idle, for example, by avoiding system calls and by forcing
   both kernel threads and interrupts to execute elsewhere.
2. Build with CONFIG_HOTPLUG_CPU=y. After boot completes, force
   the CPU offline, then bring it back online. This forces
   recurring timers to migrate elsewhere. If you are concerned
   with multiple CPUs, force them all offline before bringing the
   first one back online. Once you have onlined the CPUs in question,
   do not offline any other CPUs, because doing so could force the
   timer back onto one of the CPUs in question.
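
For example, assuming a CONFIG_HOTPLUG_CPU=y kernel and that CPU 3 is
the CPU to be de-jittered (the CPU number is a placeholder), the
offline/online cycle can be carried out as follows:

    echo 0 > /sys/devices/system/cpu/cpu3/online
    echo 1 > /sys/devices/system/cpu/cpu3/online
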
NET_TX_SOFTIRQ and NET_RX_SOFTIRQ: Do all of the following:
1. Force networking interrupts onto other CPUs.
2. Initiate any network I/O on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
   from being initiated from tasks that might run on the CPU to
   be de-jittered. (It is OK to force this CPU offline and then
   bring it back online before you start your application.)
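
For example, assuming that the network device is eth0 and that CPUs 0
and 1 are the housekeeping CPUs (the device name, IRQ number, and CPU
numbers are all placeholders), networking interrupts can be forced onto
the housekeeping CPUs as follows:

    grep eth0 /proc/interrupts                 # Find eth0's IRQ numbers.
    echo 0-1 > /proc/irq/77/smp_affinity_list  # Repeat for each such IRQ.
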
BLOCK_SOFTIRQ: Do all of the following:
1. Force block-device interrupts onto some other CPU.
2. Initiate any block I/O on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
   from being initiated from tasks that might run on the CPU to
   be de-jittered. (It is OK to force this CPU offline and then
   bring it back online before you start your application.)
IRQ_POLL_SOFTIRQ: Do all of the following:
1. Force block-device interrupts onto some other CPU.
2. Initiate any block I/O and block-I/O polling on other CPUs.
3. Once your application has started, prevent CPU-hotplug operations
   from being initiated from tasks that might run on the CPU to
   be de-jittered. (It is OK to force this CPU offline and then
   bring it back online before you start your application.)
TASKLET_SOFTIRQ: Do one or more of the following:
1. Avoid use of drivers that use tasklets. (Such drivers will contain
   calls to things like tasklet_schedule().)
2. Convert all drivers that you must use from tasklets to workqueues.
3. Force interrupts for drivers using tasklets onto other CPUs,
   and also do I/O involving these drivers on other CPUs.
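
For example, drivers that schedule tasklets can be located by searching
the kernel source tree (the path to the source tree is a placeholder):

    grep -rl tasklet_schedule /usr/src/linux/drivers/
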
SCHED_SOFTIRQ: Do all of the following:
1. Avoid sending scheduler IPIs to the CPU to be de-jittered,
   for example, ensure that at most one runnable kthread is present
   on that CPU. If a thread that expects to run on the de-jittered
   CPU awakens, the scheduler will send an IPI that can result in
   a subsequent SCHED_SOFTIRQ.
2. Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
   CONFIG_NO_HZ_FULL=y, and, in addition, ensure that the CPU
   to be de-jittered is marked as an adaptive-ticks CPU using the
   "nohz_full=" boot parameter. This reduces the number of
   scheduler-clock interrupts that the de-jittered CPU receives,
   minimizing its chances of being selected to do the load balancing
   work that runs in SCHED_SOFTIRQ context.
3. To the extent possible, keep the CPU out of the kernel when it
   is non-idle, for example, by avoiding system calls and by
   forcing both kernel threads and interrupts to execute elsewhere.
   This further reduces the number of scheduler-clock interrupts
   received by the de-jittered CPU.
HRTIMER_SOFTIRQ: Do all of the following:
1. To the extent possible, keep the CPU out of the kernel when it
   is non-idle. For example, avoid system calls and force both
   kernel threads and interrupts to execute elsewhere.
2. Build with CONFIG_HOTPLUG_CPU=y. Once boot completes, force the
   CPU offline, then bring it back online. This forces recurring
   timers to migrate elsewhere. If you are concerned with multiple
   CPUs, force them all offline before bringing the first one
   back online. Once you have onlined the CPUs in question, do not
   offline any other CPUs, because doing so could force the timer
   back onto one of the CPUs in question.
RCU_SOFTIRQ: Do at least one of the following:
1. Offload callbacks and keep the CPU in either dyntick-idle or
   adaptive-ticks state by doing all of the following:
   a. Build with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_ALL=y,
      CONFIG_NO_HZ_FULL=y, and, in addition, ensure that the CPU
      to be de-jittered is marked as an adaptive-ticks CPU using
      the "nohz_full=" boot parameter. Bind the rcuo kthreads
      to housekeeping CPUs, which can tolerate OS jitter.
   b. To the extent possible, keep the CPU out of the kernel
      when it is non-idle, for example, by avoiding system
      calls and by forcing both kernel threads and interrupts
      to execute elsewhere.
2. Enable RCU to do its processing remotely via dyntick-idle by
   doing all of the following:
   a. Build with CONFIG_NO_HZ=y and CONFIG_RCU_FAST_NO_HZ=y.
   b. Ensure that the CPU goes idle frequently, allowing other
      CPUs to detect that it has passed through an RCU quiescent
      state. If the kernel is built with CONFIG_NO_HZ_FULL=y,
      userspace execution also allows other CPUs to detect that
      the CPU in question has passed through a quiescent state.
   c. To the extent possible, keep the CPU out of the kernel
      when it is non-idle, for example, by avoiding system
      calls and by forcing both kernel threads and interrupts
      to execute elsewhere.
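
For example, assuming that CPUs 2-7 are to be de-jittered (the CPU list
is a placeholder) and that the kernel is built with the Kconfig options
called out above, the following kernel boot parameters mark those CPUs
as adaptive-ticks CPUs and offload their RCU callbacks:

    nohz_full=2-7 rcu_nocbs=2-7

The "rcu_nocbs=" parameter should be unnecessary on kernels built with
CONFIG_RCU_NOCB_CPU_ALL=y, which offloads callbacks from all CPUs.
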

Name: kworker/%u:%d%s (cpu, id, priority)
Purpose: Execute workqueue requests.
To reduce its OS jitter, do any of the following:
1. Run your workload at a real-time priority, which will allow
   preempting the kworker daemons.
2. A given workqueue can be made visible in the sysfs filesystem
   by passing the WQ_SYSFS flag to that workqueue's alloc_workqueue().
   Such a workqueue can be confined to a given subset of the
   CPUs using the /sys/devices/virtual/workqueue/*/cpumask sysfs
   files. The set of WQ_SYSFS workqueues can be displayed using
   "ls /sys/devices/virtual/workqueue". (See the example following
   this list.) That said, the workqueues maintainer would like to
   caution people against indiscriminately sprinkling WQ_SYSFS
   across all the workqueues. The reason for caution is that it is
   easy to add WQ_SYSFS, but because sysfs is part of the formal
   user/kernel API, it can be nearly impossible to remove it, even
   if its addition was a mistake.
3. Do any of the following needed to avoid jitter that your
   application cannot tolerate:
   a. Build your kernel with CONFIG_SLUB=y rather than
      CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
      use of each CPU's workqueues to run its cache_reap()
      function.
   b. Avoid using oprofile, thus avoiding OS jitter from
      wq_sync_buffer().
   c. Limit your CPU frequency so that a CPU-frequency
      governor is not required, possibly enlisting the aid of
      special heatsinks or other cooling technologies. If done
      correctly, and if your CPU architecture permits, you should
      be able to build your kernel with CONFIG_CPU_FREQ=n to
      avoid the CPU-frequency governor periodically running
      on each CPU, including cs_dbs_timer() and od_dbs_timer().
      WARNING: Please check your CPU specifications to
      make sure that this is safe on your particular system.
   d. As of v3.18, Christoph Lameter's on-demand vmstat workers
      commit prevents OS jitter due to vmstat_update() on
      CONFIG_SMP=y systems. Before v3.18, it is not possible
      to entirely get rid of the OS jitter, but you can
      decrease its frequency by writing a large value to
      /proc/sys/vm/stat_interval. The default value is HZ,
      for an interval of one second. Of course, larger values
      will make your virtual-memory statistics update more
      slowly. Of course, you can also run your workload at
      a real-time priority, thus preempting vmstat_update(),
      but if your workload is CPU-bound, this is a bad idea.
      However, there is an RFC patch from Christoph Lameter
      (based on an earlier one from Gilad Ben-Yossef) that
      reduces or even eliminates vmstat overhead for some
      workloads at https://lkml.org/lkml/2013/9/4/379.
   e. Boot with "elevator=noop" to avoid workqueue use by
      the block layer.
   f. If running on high-end powerpc servers, build with
      CONFIG_PPC_RTAS_DAEMON=n. This prevents the RTAS
      daemon from running on each CPU every second or so.
      (This will require editing Kconfig files and will defeat
      this platform's RAS functionality.) This avoids jitter
      due to the rtas_event_scan() function.
      WARNING: Please check your CPU specifications to
      make sure that this is safe on your particular system.
   g. If running on Cell Processor, build your kernel with
      CONFIG_CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
      spu_gov_work().
      WARNING: Please check your CPU specifications to
      make sure that this is safe on your particular system.
   h. If running on PowerMAC, build your kernel with
      CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
      avoiding OS jitter from rackmeter_do_timer().
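
For example, assuming that CPUs 0 and 1 are the housekeeping CPUs and
that the WQ_SYSFS "writeback" workqueue is present on your system, that
workqueue can be confined to the housekeeping CPUs as follows ("3" is
the hexadecimal mask covering CPUs 0 and 1):

    ls /sys/devices/virtual/workqueue            # List WQ_SYSFS workqueues.
    echo 3 > /sys/devices/virtual/workqueue/writeback/cpumask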

Name: rcuc/%u
Purpose: Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.
To reduce its OS jitter, do at least one of the following:
1. Build the kernel with CONFIG_PREEMPT=n. This prevents these
   kthreads from being created in the first place, and also obviates
   the need for RCU priority boosting. This approach is feasible
   for workloads that do not require high degrees of responsiveness.
2. Build the kernel with CONFIG_RCU_BOOST=n. This prevents these
   kthreads from being created in the first place. This approach
   is feasible only if your workload never requires RCU priority
   boosting, for example, if you ensure frequent idle time on all
   CPUs that might execute within the kernel.
3. Build with CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y,
   which offloads all RCU callbacks to kthreads that can be moved
   off of CPUs susceptible to OS jitter. This approach prevents the
   rcuc/%u kthreads from having any work to do, so that they are
   never awakened.
4. Ensure that the CPU never enters the kernel, and, in particular,
   avoid initiating any CPU hotplug operations on this CPU. This is
   another way of preventing any callbacks from being queued on the
   CPU, again preventing the rcuc/%u kthreads from having any work
   to do.

Name: rcuob/%d, rcuop/%d, and rcuos/%d
Purpose: Offload RCU callbacks from the corresponding CPU.
To reduce its OS jitter, do at least one of the following:
1. Use affinity, cgroups, or other mechanism to force these kthreads
   to execute on some other CPU.
2. Build with CONFIG_RCU_NOCB_CPU=n, which will prevent these
   kthreads from being created in the first place. However, please
   note that this will not eliminate OS jitter, but will instead
   shift it to RCU_SOFTIRQ.
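
For example, assuming that CPU 0 is the housekeeping CPU, all of the
rcuo callback-offload kthreads can be bound to that CPU as follows (a
minimal sketch; adjust the CPU number for your system):

    for pid in $(pgrep '^rcuo'); do taskset -cp 0 $pid; done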

Name: watchdog/%u
Purpose: Detect software lockups on each CPU.
To reduce its OS jitter, do at least one of the following:
1. Build with CONFIG_LOCKUP_DETECTOR=n, which will prevent these
   kthreads from being created in the first place.
2. Boot with "nosoftlockup=0", which will also prevent these kthreads
   from being created. Other related watchdog and softlockup boot
   parameters may be found in Documentation/kernel-parameters.txt
   and Documentation/watchdog/watchdog-parameters.txt.
3. Echo a zero to /proc/sys/kernel/watchdog to disable the
   watchdog timer.
4. Echo a large number to /proc/sys/kernel/watchdog_thresh in
   order to reduce the frequency of OS jitter due to the watchdog
   timer down to a level that is acceptable for your workload.
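
For example, the first command below disables the watchdog entirely,
while the second instead raises the soft-lockup threshold to 60 seconds
(a placeholder value), reducing the frequency of watchdog-induced OS
jitter:

    echo 0 > /proc/sys/kernel/watchdog
    echo 60 > /proc/sys/kernel/watchdog_thresh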