| Version 10 of schedstats includes support for sched_domains, which |
| hit the mainline kernel in 2.6.7. Some counters make more sense to be |
| per-runqueue; others make more sense to be per-domain. Note that domains
| (and their associated information) will only be pertinent and available on
| machines built with CONFIG_SMP.
| |
| In version 10 of schedstats, there is at least one level of domain
| statistics for each cpu listed, and there may well be more than one |
| domain. Domains have no particular names in this implementation, but |
| the highest numbered one typically arbitrates balancing across all the |
| cpus on the machine, while domain0 is the most tightly focused domain, |
| sometimes balancing only between pairs of cpus. At this time, there |
| are no architectures which need more than three domain levels. The first |
| field in the domain stats is a bit mask indicating which cpus are affected
| by that domain. |
| |
| These fields are counters, and only increment. Programs which make use |
| of these will need to start with a baseline observation and then calculate |
| the change in the counters at each subsequent observation. A perl script |
| which does this for many of the fields is available at |
| |
| http://eaglet.rain.com/rick/linux/schedstat/ |
| |
| Note that any such script will necessarily be version-specific, as the main |
| reason to change versions is changes in the output format. For those wishing |
| to write their own scripts, the fields are described here. |
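| 
| For illustration, here is a minimal C sketch of the baseline-and-delta
| approach (it is not the script referenced above): it samples one counter
| from the cpu0 line of /proc/schedstat twice, ten seconds apart, and reports
| the difference. The choice of field 4 (the sched_yield() call count
| described below) is just an example, and the parsing assumes the version 10
| output format.
| 
| #include <stdio.h>
| #include <stdlib.h>
| #include <string.h>
| #include <unistd.h>
| 
| /* 1-based position after the "cpu0" tag; field 4 is the sched_yield() count */
| #define FIELD 4
| 
| static unsigned long long read_cpu0_field(int field)
| {
|     FILE *fp = fopen("/proc/schedstat", "r");
|     char line[1024];
|     unsigned long long val = 0;
| 
|     if (!fp)
|         return 0;
|     while (fgets(line, sizeof(line), fp)) {
|         if (strncmp(line, "cpu0 ", 5) == 0) {
|             char *tok = strtok(line, " \n");    /* "cpu0" tag */
|             int i;
| 
|             /* skip to the requested counter */
|             for (i = 0; tok && i < field; i++)
|                 tok = strtok(NULL, " \n");
|             if (tok)
|                 val = strtoull(tok, NULL, 10);
|             break;
|         }
|     }
|     fclose(fp);
|     return val;
| }
| 
| int main(void)
| {
|     unsigned long long before, after;
| 
|     before = read_cpu0_field(FIELD);
|     sleep(10);                          /* observation interval */
|     after = read_cpu0_field(FIELD);
|     printf("cpu0 field %d increased by %llu in 10 seconds\n",
|            FIELD, after - before);
|     return 0;
| }
| 
| The same skip-and-subtract pattern extends to the other counters and to the
| domain lines.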
| |
| CPU statistics |
| -------------- |
| cpu<N> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 |
| |
| NOTE: In the sched_yield() statistics, the active queue is considered empty |
| if it has only one process in it, since obviously the process calling |
| sched_yield() is that process. |
| |
| First four fields are sched_yield() statistics: |
| 1) # of times both the active and the expired queue were empty |
| 2) # of times just the active queue was empty |
| 3) # of times just the expired queue was empty |
| 4) # of times sched_yield() was called |
| |
| Next four are schedule() statistics: |
| 5) # of times the active queue had at least one other process on it |
| 6) # of times we switched to the expired queue and reused it |
| 7) # of times schedule() was called |
| 8) # of times schedule() left the processor idle |
| |
| Next four are active_load_balance() statistics: |
| 9) # of times active_load_balance() was called |
| 10) # of times active_load_balance() caused this cpu to gain a task |
| 11) # of times active_load_balance() caused this cpu to lose a task |
| 12) # of times active_load_balance() tried to move a task and failed |
| |
| Next three are try_to_wake_up() statistics: |
| 13) # of times try_to_wake_up() was called |
| 14) # of times try_to_wake_up() successfully moved the awakening task |
| 15) # of times try_to_wake_up() attempted to move the awakening task |
| |
| Next two are wake_up_new_task() statistics: |
| 16) # of times wake_up_new_task() was called |
| 17) # of times wake_up_new_task() successfully moved the new task |
| |
| Next one is a sched_migrate_task() statistic: |
| 18) # of times sched_migrate_task() was called |
| |
| Next one is a sched_balance_exec() statistic: |
| 19) # of times sched_balance_exec() was called |
| |
| Next three are statistics describing scheduling latency (see the sketch
| following this list):
| 20) sum of all time spent running by tasks on this processor (in ms) |
| 21) sum of all time spent waiting to run by tasks on this processor (in ms) |
| 22) # of tasks (not necessarily unique) given to the processor |
| |
| The last six are statistics dealing with pull_task(): |
| 23) # of times pull_task() moved a task to this cpu when newly idle |
| 24) # of times pull_task() stole a task from this cpu when another cpu |
| was newly idle |
| 25) # of times pull_task() moved a task to this cpu when idle |
| 26) # of times pull_task() stole a task from this cpu when another cpu |
| was idle |
| 27) # of times pull_task() moved a task to this cpu when busy |
| 28) # of times pull_task() stole a task from this cpu when another cpu |
| was busy |
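| 
| To make the latency fields concrete, the hedged sketch below reads each cpu
| line and reports fields 20-22 together with an average wait per timeslice.
| Because the counters accumulate from boot, this is a lifetime average;
| sampling twice and differencing, as described earlier, gives figures for a
| chosen interval. The field positions assume the version 10 output format.
| 
| #include <stdio.h>
| #include <stdlib.h>
| #include <string.h>
| 
| int main(void)
| {
|     FILE *fp = fopen("/proc/schedstat", "r");
|     char line[1024];
| 
|     if (!fp)
|         return 1;
|     while (fgets(line, sizeof(line), fp)) {
|         char *tok;
|         unsigned long long run_ms = 0, wait_ms = 0, slices = 0;
|         int i;
| 
|         if (strncmp(line, "cpu", 3) != 0)
|             continue;                   /* only look at the per-cpu lines */
|         tok = strtok(line, " \n");      /* "cpu<N>" tag */
|         printf("%s:", tok);
|         /* walk the counters; keep fields 20, 21 and 22 */
|         for (i = 1; tok && i <= 22; i++) {
|             tok = strtok(NULL, " \n");
|             if (!tok)
|                 break;
|             if (i == 20)
|                 run_ms = strtoull(tok, NULL, 10);
|             else if (i == 21)
|                 wait_ms = strtoull(tok, NULL, 10);
|             else if (i == 22)
|                 slices = strtoull(tok, NULL, 10);
|         }
|         if (slices)
|             printf(" ran %llums, waited %llums over %llu timeslices"
|                    " (avg wait %.2f ms)\n", run_ms, wait_ms, slices,
|                    (double)wait_ms / slices);
|         else
|             printf(" no timeslices recorded\n");
|     }
|     fclose(fp);
|     return 0;
| }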
| |
| |
| Domain statistics |
| ----------------- |
| One of these is produced per domain for each cpu described. (Note that if |
| CONFIG_SMP is not defined, *no* domains are utilized and these lines |
| will not appear in the output.) |
| |
| domain<N> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 |
| |
| The first field is a bit mask indicating what cpus this domain operates over. |
| |
| The next fifteen are a variety of load_balance() statistics: |
| |
| 1) # of times in this domain load_balance() was called when the cpu |
| was idle |
| 2) # of times in this domain load_balance() was called when the cpu |
| was busy |
| 3) # of times in this domain load_balance() was called when the cpu |
| was just becoming idle |
| 4) # of times in this domain load_balance() tried to move one or more |
| tasks and failed, when the cpu was idle |
| 5) # of times in this domain load_balance() tried to move one or more |
| tasks and failed, when the cpu was busy |
| 6) # of times in this domain load_balance() tried to move one or more |
| tasks and failed, when the cpu was just becoming idle |
| 7) sum of imbalances discovered (if any) with each call to |
| load_balance() in this domain when the cpu was idle |
| 8) sum of imbalances discovered (if any) with each call to |
| load_balance() in this domain when the cpu was busy |
| 9) sum of imbalances discovered (if any) with each call to |
| load_balance() in this domain when the cpu was just becoming idle |
| 10) # of times in this domain load_balance() was called but did not find |
| a busier queue while the cpu was idle |
| 11) # of times in this domain load_balance() was called but did not find |
| a busier queue while the cpu was busy |
| 12) # of times in this domain load_balance() was called but did not find |
| a busier queue while the cpu was just becoming idle |
| 13) # of times in this domain a busier queue was found while the cpu was |
| idle but no busier group was found |
| 14) # of times in this domain a busier queue was found while the cpu was |
| busy but no busier group was found |
| 15) # of times in this domain a busier queue was found while the cpu was |
| just becoming idle but no busier group was found |
| |
| Next two are sched_balance_exec() statistics: |
| 16) # of times in this domain sched_balance_exec() successfully pushed
| a task to a new cpu
| 17) # of times in this domain sched_balance_exec() tried but failed to
| push a task to a new cpu
| 
| Next two are try_to_wake_up() statistics:
| 18) # of times in this domain try_to_wake_up() tried to move a task based
| on affinity and cache warmth
| 19) # of times in this domain try_to_wake_up() tried to move a task based
| on load balancing
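| 
| As a parsing illustration for the domain lines described above, the sketch
| below prints, for every domain, the cpu mask it spans (verbatim) and the
| three load_balance() call counts (fields 1-3). It assumes the version 10
| field layout and that each cpu's domain lines follow that cpu's line, as in
| the grouping described above.
| 
| #include <stdio.h>
| #include <string.h>
| 
| int main(void)
| {
|     FILE *fp = fopen("/proc/schedstat", "r");
|     char line[1024], cpu[32] = "?", dom[32], mask[64];
|     unsigned long long idle, busy, newidle;
| 
|     if (!fp)
|         return 1;
|     while (fgets(line, sizeof(line), fp)) {
|         if (strncmp(line, "cpu", 3) == 0) {
|             /* remember which cpu the following domain lines belong to */
|             sscanf(line, "%31s", cpu);
|         } else if (strncmp(line, "domain", 6) == 0 &&
|                    sscanf(line, "%31s %63s %llu %llu %llu",
|                           dom, mask, &idle, &busy, &newidle) == 5) {
|             printf("%s %s span=%s load_balance() calls: "
|                    "idle=%llu busy=%llu newidle=%llu\n",
|                    cpu, dom, mask, idle, busy, newidle);
|         }
|     }
|     fclose(fp);
|     return 0;
| }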
| |
| |
| /proc/<pid>/schedstat |
| ---------------------
| schedstats also adds a new /proc/<pid>/schedstat file to include some of
| the same information on a per-process level. There are three fields in
| this file corresponding to fields 20, 21, and 22 of the CPU statistics,
| but they apply only to that process.
| |
| A program could easily be written to make use of these extra fields to
| report on how well a particular process or set of processes is faring |
| under the scheduler's policies. A simple version of such a program is |
| available at |
| http://eaglet.rain.com/rick/linux/schedstat/v10/latency.c |
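| 
| A rough sketch along those lines (it is not the latency.c referenced above)
| might look like this; the interpretation of the three fields, including the
| millisecond units, follows the field descriptions above and is specific to
| this version of schedstats:
| 
| #include <stdio.h>
| 
| int main(int argc, char *argv[])
| {
|     char path[64];
|     unsigned long long run_ms, wait_ms, slices;
|     FILE *fp;
| 
|     if (argc != 2) {
|         fprintf(stderr, "usage: %s <pid>\n", argv[0]);
|         return 1;
|     }
|     snprintf(path, sizeof(path), "/proc/%s/schedstat", argv[1]);
|     fp = fopen(path, "r");
|     if (!fp) {
|         perror(path);
|         return 1;
|     }
|     if (fscanf(fp, "%llu %llu %llu", &run_ms, &wait_ms, &slices) != 3) {
|         fprintf(stderr, "unexpected format in %s\n", path);
|         fclose(fp);
|         return 1;
|     }
|     fclose(fp);
| 
|     printf("pid %s: ran %llu ms, waited %llu ms, got %llu timeslices\n",
|            argv[1], run_ms, wait_ms, slices);
|     if (slices)
|         printf("average wait per timeslice: %.2f ms\n",
|                (double)wait_ms / slices);
|     return 0;
| }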