Memory Resource Controller

NOTE: This document is hopelessly outdated and is in need of a complete
      rewrite. It still contains useful information, so we are keeping it
      here, but make sure to check the current code if you need a deeper
      understanding.

NOTE: The Memory Resource Controller has generically been referred to as the
      memory controller in this document. Do not confuse the memory controller
      used here with the memory controller that is used in hardware.

(For editors)
In this document:
      When we mention a cgroup (cgroupfs's directory) with the memory
      controller, we call it a "memory cgroup". In git logs and source code,
      you will see that patch titles and function names tend to use "memcg".
      In this document, we avoid using it.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one or use the controller just
   for fun (to learn and hack on the VM subsystem).

Current Status: linux-2.6.34-mmotm (development version of 2010/April)

Features:
 - accounting of anonymous pages, file caches, and swap cache usage, and
   limiting them.
 - pages are linked to per-memcg LRU exclusively, and there is no global LRU.
 - optionally, memory+swap usage can be accounted and limited.
 - hierarchical accounting
 - soft limit
 - moving (recharging) a task's accounted pages when it migrates between
   cgroups is selectable.
 - usage threshold notifier
 - memory pressure notifier
 - oom-killer disable knob and oom-notifier
 - Root cgroup has no limit controls.

 Kernel memory support is a work in progress, and the current version provides
 basic functionality. (See Section 2.7)

Brief summary of control files.

 tasks				 # attach a task (thread) and show list of threads
 cgroup.procs			 # show list of processes
 cgroup.event_control		 # an interface for event_fd()
 memory.usage_in_bytes		 # show current usage for memory
				   (See 5.5 for details)
 memory.memsw.usage_in_bytes	 # show current usage for memory+Swap
				   (See 5.5 for details)
 memory.limit_in_bytes		 # set/show limit of memory usage
 memory.memsw.limit_in_bytes	 # set/show limit of memory+Swap usage
 memory.failcnt			 # show the number of times memory usage hit the limit
 memory.memsw.failcnt		 # show the number of times memory+Swap usage hit the limit
 memory.max_usage_in_bytes	 # show max memory usage recorded
 memory.memsw.max_usage_in_bytes # show max memory+Swap usage recorded
 memory.soft_limit_in_bytes	 # set/show soft limit of memory usage
 memory.stat			 # show various statistics
 memory.use_hierarchy		 # set/show hierarchical account enabled
 memory.force_empty		 # trigger forced move charge to parent
 memory.pressure_level		 # set memory pressure notifications
 memory.swappiness		 # set/show swappiness parameter of vmscan
				   (See sysctl's vm.swappiness)
 memory.move_charge_at_immigrate # set/show controls of moving charges
 memory.oom_control		 # set/show oom controls.
 memory.numa_stat		 # show memory usage per NUMA node

 memory.kmem.limit_in_bytes	 # set/show hard limit for kernel memory
 memory.kmem.usage_in_bytes	 # show current kernel memory allocation
 memory.kmem.failcnt		 # show the number of times kernel memory usage hit the limit
 memory.kmem.max_usage_in_bytes	 # show max kernel memory usage recorded

 memory.kmem.tcp.limit_in_bytes	 # set/show hard limit for tcp buf memory
 memory.kmem.tcp.usage_in_bytes	 # show current tcp buf memory allocation
 memory.kmem.tcp.failcnt	 # show the number of times tcp buf memory usage hit the limit
 memory.kmem.tcp.max_usage_in_bytes # show max tcp buf memory usage recorded

1. History

The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]. At the time the RFC was posted
there were several implementations for memory control. The goal of the
RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]
in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the
RSS controller. At OLS, at the resource management BoF, everyone suggested
that we handle both page cache and RSS together. Another request was raised
to allow user space handling of OOM. The current memory controller is
at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the page_counter. The
page_counter tracks the current memory usage and limit of the group of
processes associated with the controller. Each cgroup has a memory controller
specific data structure (mem_cgroup) associated with it.

2.2. Accounting

		+--------------------+
		|    mem_cgroup      |
		|   (page_counter)   |
		+--------------------+
		 /            ^      \
		/             |       \
      +---------------+      |        +---------------+
      |  mm_struct    |      |....    |  mm_struct    |
      |               |      |        |               |
      +---------------+      |        +---------------+
                             |
                             +---------------+
                                             |
      +---------------+          +-----------+---+
      |  page         +--------->|  page_cgroup  |
      |               |          |               |
      +---------------+          +---------------+

	     (Figure 1: Hierarchy of Accounting)


Figure 1 shows the important aspects of the controller

1. Accounting happens per cgroup
2. Each mm_struct knows about which cgroup it belongs to
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge_common() is invoked to
set up the necessary data structures and check if the cgroup that is being
charged is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data structure called page_cgroup is
updated. page_cgroup is kept on the cgroup's own per-cgroup LRU.
(*) page_cgroup structure is allocated at boot/memory-hotplug time.

2.2.1 Accounting details

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
Some pages which are never reclaimable and will not be on the LRU
are not accounted. We just account pages under usual VM management.

RSS pages are accounted at page_fault unless they've already been accounted
for earlier. A file page will be accounted for as Page Cache when it's
inserted into the inode (radix-tree). While it's mapped into the page tables
of processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from the radix-tree. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCaches are also accounted.
A swapped-in page is not accounted until it's mapped.

Note: The kernel does swapin-readahead and reads multiple swaps at once.
This means swapped-in pages may belong to tasks other than the one that
caused the page fault. So, we avoid accounting at swap-in I/O time.

At page migration, accounting information is kept.

Note: we just account pages-on-LRU because our purpose is to control the
amount of used pages; not-on-LRU pages tend to be out of the VM's control.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

But see section 8.2: when moving a task to another cgroup, its pages may
be recharged to the new cgroup, if move_charge_at_immigrate has been chosen.

Exception: If CONFIG_MEMCG_SWAP is not used.
When you do swapoff and force swapped-out pages of shmem (tmpfs) back
into memory, charges for those pages are accounted against the caller
of swapoff rather than against the users of shmem.

2.4 Swap Extension (CONFIG_MEMCG_SWAP)

Swap Extension allows you to record charges for swap. A swapped-in page is
charged back to the original page allocator if possible.

When swap is accounted, the following files are added.
 - memory.memsw.usage_in_bytes.
 - memory.memsw.limit_in_bytes.

memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.

Example: Assume a system with 4G of swap. A task which allocates 6G of memory
(by mistake) under a 2G memory limit will use up all of the swap.
In this case, setting memsw.limit_in_bytes=3G will prevent such bad use of
swap. By using the memsw limit, you can avoid a system OOM which can be
caused by swap shortage.

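For instance, following the example above (the group path is only
illustrative, and the memsw files only exist when swap accounting is
enabled):

# echo 2G > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
# echo 3G > /sys/fs/cgroup/memory/0/memory.memsw.limit_in_bytes
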
* why 'memory+swap' rather than swap.
The global LRU (kswapd) can swap out arbitrary pages. Swap-out means
to move the account from memory to swap...there is no change in the usage of
memory+swap. In other words, when we want to limit the usage of swap without
affecting global LRU, a memory+swap limit is better than just limiting swap
from an OS point of view.

* What happens when a cgroup hits memory.memsw.limit_in_bytes
When a cgroup hits memory.memsw.limit_in_bytes, it's useless to do swap-out
in this cgroup. Then, swap-out will not be done by the cgroup routine and file
caches are dropped instead. But as mentioned above, the global LRU can still
swap out memory from it for the sanity of the system's memory management
state. You cannot forbid that via the cgroup.

2.5 Reclaim

Each cgroup maintains a per cgroup LRU which has the same structure as
the global VM. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup. (See 10. OOM Control below.)

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per-cgroup LRU
list.

NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.

Note2: When panic_on_oom is set to "2", the whole system will panic.

When an oom event notifier is registered, an event will be delivered.
(See oom_control section)

2.6 Locking

   lock_page_cgroup()/unlock_page_cgroup() should not be called under
   mapping->tree_lock.

   The other lock order is the following:
   PG_locked.
     mm->page_table_lock
       zone->lru_lock
         lock_page_cgroup.
   In many cases, just lock_page_cgroup() is called.
   The per-zone-per-cgroup LRU (cgroup's private LRU) is guarded only by
   zone->lru_lock; it has no lock of its own.

2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)

With the Kernel memory extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different from user memory, since it can't be swapped out, which makes it
possible to DoS the system by consuming too much of this precious resource.

Kernel memory won't be accounted at all until a limit on a group is set. This
allows for existing setups to continue working without disruption. The limit
cannot be set if the cgroup has children, or if there are already tasks in the
cgroup. Attempting to set the limit under those conditions will return -EBUSY.
When use_hierarchy == 1 and a group is accounted, its children will
automatically be accounted regardless of their limit value.

After a group is first limited, it will keep being accounted until it
is removed. The memory limitation itself can of course be removed by writing
-1 to memory.kmem.limit_in_bytes. In this case, kmem will be accounted, but not
limited.

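A minimal sketch of setting and then relaxing the limit (the group name is
only illustrative; remember that the limit must be set before the group has
tasks or children):

# mkdir /sys/fs/cgroup/memory/g1
# echo 500M > /sys/fs/cgroup/memory/g1/memory.kmem.limit_in_bytes
# echo -1 > /sys/fs/cgroup/memory/g1/memory.kmem.limit_in_bytes
  (kmem stays accounted, but is no longer limited)
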
Kernel memory limits are not imposed for the root cgroup. Usage for the root
cgroup may or may not be accounted. The memory used is accumulated into
memory.kmem.usage_in_bytes, or in a separate counter when it makes sense.
(currently only for tcp).
The main "kmem" counter is fed into the main counter, so kmem charges will
also be visible from the user counter.

Currently no soft limit is implemented for kernel memory. It is future work
to trigger slab reclaim when those limits are reached.

2.7.1 Current Kernel Memory resources accounted

* stack pages: every process consumes some stack pages. By accounting into
kernel memory, we prevent new processes from being created when the kernel
memory usage is too high.

* slab pages: pages allocated by the SLAB or SLUB allocator are tracked. A copy
of each kmem_cache is created the first time the cache is touched from inside
the memcg. The creation is done lazily, so some objects can still be
skipped while the cache is being created. All objects in a slab page should
belong to the same memcg. This only fails to hold when a task is migrated to a
different memcg during the page allocation by the cache.

* sockets memory pressure: some socket protocols have memory pressure
thresholds. The Memory Controller allows them to be controlled individually
per cgroup, instead of globally.

* tcp memory pressure: sockets memory pressure for the tcp protocol.

2.7.2 Common use cases

Because the "kmem" counter is fed to the main user counter, kernel memory can
never be limited completely independently of user memory. Say "U" is the user
limit, and "K" the kernel limit. There are three possible ways limits can be
set:

    U != 0, K = unlimited:
    This is the standard memcg limitation mechanism already present before kmem
    accounting. Kernel memory is completely ignored.

    U != 0, K < U:
    Kernel memory is a subset of the user memory. This setup is useful in
    deployments where the total amount of memory per-cgroup is overcommitted.
    Overcommitting kernel memory limits is definitely not recommended, since
    the box can still run out of non-reclaimable memory.
    In this case, the admin could set up K so that the sum of all groups is
    never greater than the total memory, and freely set U at the cost of its
    QoS.
    WARNING: In the current implementation, memory reclaim will NOT be
    triggered for a cgroup when it hits K while staying below U, which makes
    this setup impractical.

    U != 0, K >= U:
    Since kmem charges are also fed to the user counter, reclaim will be
    triggered for the cgroup for both kinds of memory. This setup gives the
    admin a unified view of memory, and it is also useful for people who just
    want to track kernel memory usage.

3. User Interface

3.0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_MEMCG
c. Enable CONFIG_MEMCG_SWAP (to use swap extension)
d. Enable CONFIG_MEMCG_KMEM (to use kmem extension)

3.1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?)
# mount -t tmpfs none /sys/fs/cgroup
# mkdir /sys/fs/cgroup/memory
# mount -t cgroup none /sys/fs/cgroup/memory -o memory

3.2. Make the new group and move bash into it
# mkdir /sys/fs/cgroup/memory/0
# echo $$ > /sys/fs/cgroup/memory/0/tasks

Now that we're in the 0 cgroup, we can alter the memory limit:
# echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes, Gibibytes.)

NOTE: We can write "-1" to reset the *.limit_in_bytes (unlimited).
NOTE: We cannot set limits on the root cgroup any more.

# cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
4194304

We can check the usage:
# cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
1216512

A successful write to this file does not guarantee that the limit was set
to the exact value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to see the value actually committed by the kernel.

# echo 1 > memory.limit_in_bytes
# cat memory.limit_in_bytes
4096

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

The memory.stat file gives accounting information. Currently, the number of
caches, RSS and active/inactive pages is shown.

4. Testing

For testing features and implementation, see memcg_test.txt.

Performance testing is also important. To see the pure overhead of the
memory controller, testing on tmpfs will give you good numbers for its
small overhead.
Example: do a kernel make on tmpfs.

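A rough sketch of such a run (the group name, tmpfs mount point and source
tree location are only illustrative):

# mkdir /sys/fs/cgroup/memory/test
# echo $$ > /sys/fs/cgroup/memory/test/tasks
# mount -t tmpfs none /mnt/build
# cp -a /usr/src/linux /mnt/build/
# cd /mnt/build/linux && make -j8
# cat /sys/fs/cgroup/memory/test/memory.max_usage_in_bytes
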
Page-fault scalability is also important. When measuring parallel
page-fault performance, a multi-process test may be better than a
multi-thread test because the latter adds noise from shared objects/status.

But the above two test extreme situations.
Trying your usual workloads under the memory controller is always helpful.

4.1 Troubleshooting

Sometimes a user might find that the application under a cgroup is
terminated by the OOM killer. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).

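For example (note that this drops clean page cache globally, not only for
the cgroup):

# sync
# echo 1 > /proc/sys/vm/drop_caches
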
To see what is happening, disabling the OOM killer as described in
"10. OOM Control" (below) and observing the cgroup's behaviour will be helpful.

4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.

You can move charges of a task along with task migration.
See 8. "Move charges at task migration"

4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it. (because we charge against pages, not
against tasks.)

We move the stats to the root (if use_hierarchy==0) or to the parent (if
use_hierarchy==1); apart from uncharging the child, the charges themselves
are not changed.

Charges recorded in swap information are not updated when a cgroup is removed.
The recorded information is discarded, and a cgroup which later uses the swap
(swapcache) will be charged as its new owner.

About use_hierarchy, see Section 6.

5. Misc. interfaces.

5.1 force_empty
  The memory.force_empty interface is provided to make a cgroup's memory
  usage empty. When anything is written to this file

  # echo 0 > memory.force_empty

  the cgroup is reclaimed and as many pages as possible are reclaimed.

  The typical use case for this interface is before calling rmdir().
  Because rmdir() moves all pages to the parent, some out-of-use page cache
  can be moved to the parent. If you want to avoid that, force_empty will be
  useful.

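  A minimal sketch of that use case (the group name is only illustrative):

  # echo 0 > /sys/fs/cgroup/memory/0/memory.force_empty
  # rmdir /sys/fs/cgroup/memory/0
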
  Also, note that when memory.kmem.limit_in_bytes is set, the charges due to
  kernel pages will still be seen. This is not considered a failure and the
  write will still return success. In this case, it is expected that
  memory.kmem.usage_in_bytes == memory.usage_in_bytes.

  About use_hierarchy, see Section 6.

5.2 stat file

The memory.stat file includes the following statistics

# per-memory cgroup local status
cache		- # of bytes of page cache memory.
rss		- # of bytes of anonymous and swap cache memory (includes
		  transparent hugepages).
rss_huge	- # of bytes of anonymous transparent hugepages.
mapped_file	- # of bytes of mapped file (includes tmpfs/shmem)
pgpgin		- # of charging events to the memory cgroup. The charging
		  event happens each time a page is accounted as either a mapped
		  anon page (RSS) or a cache page (Page Cache) to the cgroup.
pgpgout		- # of uncharging events to the memory cgroup. The uncharging
		  event happens each time a page is unaccounted from the cgroup.
swap		- # of bytes of swap usage
dirty		- # of bytes that are waiting to get written back to the disk.
writeback	- # of bytes of file/anon cache that are queued for syncing to
		  disk.
inactive_anon	- # of bytes of anonymous and swap cache memory on inactive
		  LRU list.
active_anon	- # of bytes of anonymous and swap cache memory on active
		  LRU list.
inactive_file	- # of bytes of file-backed memory on inactive LRU list.
active_file	- # of bytes of file-backed memory on active LRU list.
unevictable	- # of bytes of memory that cannot be reclaimed (mlocked etc).

# status considering hierarchy (see memory.use_hierarchy settings)

hierarchical_memory_limit - # of bytes of memory limit with regard to the
			hierarchy under which the memory cgroup is
hierarchical_memsw_limit - # of bytes of memory+swap limit with regard to
			the hierarchy under which the memory cgroup is.

total_<counter>		- # hierarchical version of <counter>, which in
			addition to the cgroup's own value includes the
			sum of all hierarchical children's values of
			<counter>, i.e. total_cache

# The following additional stats are dependent on CONFIG_DEBUG_VM.

recent_rotated_anon	- VM internal parameter. (see mm/vmscan.c)
recent_rotated_file	- VM internal parameter. (see mm/vmscan.c)
recent_scanned_anon	- VM internal parameter. (see mm/vmscan.c)
recent_scanned_file	- VM internal parameter. (see mm/vmscan.c)

Memo:
	recent_rotated means the recent frequency of LRU rotation.
	recent_scanned means the recent # of scans of the LRU.
	These are shown for easier debugging; please see the code for their
	precise meanings.

Note:
	Only anonymous and swap cache memory is listed as part of the 'rss'
	stat. This should not be confused with the true 'resident set size' or
	the amount of physical memory used by the cgroup.
	'rss + mapped_file' will give you the resident set size of the cgroup.
	(Note: file and shmem may be shared among other cgroups. In that case,
	mapped_file is accounted only when the memory cgroup is the owner of
	the page cache.)

5.3 swappiness

Overrides /proc/sys/vm/swappiness for the particular group. The tunable
in the root cgroup corresponds to the global swappiness setting.

Please note that, unlike during global reclaim, limit reclaim enforces that
a swappiness of 0 really prevents any swapping even if swap storage is
available. This might lead to the memcg OOM killer being invoked if there
are no file pages to reclaim.

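For example, to keep limit reclaim in a group from swapping as long as there
is page cache left to reclaim (the group path is only illustrative):

# echo 0 > /sys/fs/cgroup/memory/0/memory.swappiness
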
5.4 failcnt

A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
This failcnt (== failure count) shows the number of times that a usage counter
hit its limit. When a memory cgroup hits a limit, failcnt increases and
memory under it will be reclaimed.

You can reset failcnt by writing 0 to the failcnt file.
# echo 0 > .../memory.failcnt

5.5 usage_in_bytes

For efficiency, like other kernel components, the memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is
affected by this method and doesn't show the 'exact' value of memory (and
swap) usage; it's a fuzzy value for efficient access. (Of course, when
necessary, it's synchronized.) If you want to know the more exact memory
usage, you should use the RSS+CACHE(+SWAP) value in memory.stat (see 5.2).

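A rough sketch of computing that sum from memory.stat (run from inside the
group's directory):

# awk '$1 == "rss" || $1 == "cache" || $1 == "swap" {sum += $2} END {print sum}' memory.stat
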
5.6 numa_stat

This is similar to numa_maps but operates on a per-memcg basis. This is
useful for providing visibility into the numa locality information within
a memcg since the pages are allowed to be allocated from any physical
node. One of the use cases is evaluating application performance by
combining this information with the application's CPU allocation.

Each memcg's numa_stat file includes "total", "file", "anon" and "unevictable"
per-node page counts including "hierarchical_<counter>" which sums up all
hierarchical children's values in addition to the memcg's own value.

The output format of memory.numa_stat is:

total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ...

The "total" count is the sum of file + anon + unevictable.

6. Hierarchy support

The memory controller supports a deep hierarchy and hierarchical accounting.
The hierarchy is created by creating the appropriate cgroups in the
cgroup filesystem. Consider for example, the following cgroup filesystem
hierarchy

	       root
	     /  |   \
            /   |    \
	   a    b     c
		      | \
		      |  \
		      d   e

In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up until the root (i.e., c and root)
that have memory.use_hierarchy enabled. If one of the ancestors goes over its
limit, the reclaim algorithm reclaims from the tasks in the ancestor and the
children of the ancestor.

6.1 Enabling hierarchical accounting and reclaim

A memory cgroup by default disables the hierarchy feature. Support
can be enabled by writing 1 to the memory.use_hierarchy file of the root cgroup

# echo 1 > memory.use_hierarchy

The feature can be disabled by

# echo 0 > memory.use_hierarchy

NOTE1: Enabling/disabling will fail if either the cgroup already has other
       cgroups created below it, or if the parent cgroup has use_hierarchy
       enabled.

NOTE2: When panic_on_oom is set to "2", the whole system will panic in
       case of an OOM event in any cgroup.

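For illustration, the subtree under c in the diagram above could be set up
as follows (paths are only an example; use_hierarchy has to be enabled
before the children are created):

# mkdir /sys/fs/cgroup/memory/c
# echo 1 > /sys/fs/cgroup/memory/c/memory.use_hierarchy
# mkdir /sys/fs/cgroup/memory/c/d /sys/fs/cgroup/memory/c/e
# echo 200M > /sys/fs/cgroup/memory/c/memory.limit_in_bytes

With this setup, memory used by tasks in d and e is also counted against
c's 200M limit.
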
7. Soft limits

Soft limits allow for greater sharing of memory. The idea behind soft limits
is to allow control groups to use as much of the memory as needed, provided

a. There is no memory contention
b. They do not exceed their hard limit

When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make
sure that one control group does not starve the others of memory.

Please note that soft limits are a best-effort feature; they come with
no guarantees, but they do their best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently soft limit based reclaim is set up such that
it gets invoked from balance_pgdat (kswapd).

7.1 Interface

Soft limits can be set up by using the following commands (in this example we
assume a soft limit of 256 MiB)

# echo 256M > memory.soft_limit_in_bytes

If we want to change this to 1G, we can at any time use

# echo 1G > memory.soft_limit_in_bytes

NOTE1: Soft limits take effect over a long period of time, since they involve
       reclaiming memory for balancing between memory cgroups
NOTE2: It is recommended to set the soft limit always below the hard limit,
       otherwise the hard limit will take precedence.

8. Move charges at task migration

Users can move charges associated with a task along with task migration, that
is, uncharge the task's pages from the old cgroup and charge them to the new
cgroup. This feature is not supported in !CONFIG_MMU environments because of
lack of page tables.

8.1 Interface

This feature is disabled by default. It can be enabled (and disabled again) by
writing to memory.move_charge_at_immigrate of the destination cgroup.

If you want to enable it:

# echo (some positive value) > memory.move_charge_at_immigrate

Note: Each bit of move_charge_at_immigrate has its own meaning about what type
      of charges should be moved. See 8.2 for details.
Note: Charges are moved only when you move mm->owner, in other words,
      a leader of a thread group.
Note: If we cannot find enough space for the task in the destination cgroup, we
      try to make space by reclaiming memory. Task migration may fail if we
      cannot make enough space.
Note: It can take several seconds if you move a large amount of charge.

And if you want to disable it again:

# echo 0 > memory.move_charge_at_immigrate

8.2 Type of charges which can be moved

Each bit in move_charge_at_immigrate has its own meaning about what type of
charges should be moved. But in any case, it must be noted that an account of
a page or a swap can be moved only when it is charged to the task's current
(old) memory cgroup. (See the example after the table below.)

  bit | what type of charges would be moved ?
 -----+------------------------------------------------------------------------
   0  | A charge of an anonymous page (or swap of it) used by the target task.
      | You must enable Swap Extension (see 2.4) to enable move of swap charges.
 -----+------------------------------------------------------------------------
   1  | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory)
      | and swaps of tmpfs file) mmapped by the target task. Unlike the case of
      | anonymous pages, file pages (and swaps) in the range mmapped by the task
      | will be moved even if the task hasn't done page fault, i.e. they might
      | not be the task's "RSS", but other task's "RSS" that maps the same file.
      | And the mapcount of the page is ignored (the page can be moved even if
      | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to
      | enable move of swap charges.

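For example, to move both anonymous and file page charges (bits 0 and 1,
i.e. value 3) when migrating a task into a destination group (the path and
pid are only illustrative):

# echo 3 > /sys/fs/cgroup/memory/dst/memory.move_charge_at_immigrate
# echo <pid of the thread group leader> > /sys/fs/cgroup/memory/dst/tasks
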
8.3 TODO

- All moving-charge operations are done under cgroup_mutex. It's not good
  behavior to hold the mutex too long, so we may need some trick.

Kirill A. Shutemov | 2e72b63 | 2010-03-10 15:22:24 -0800 | [diff] [blame] | 720 | 9. Memory thresholds |
| 721 | |
Michael Kerrisk | 1939c55 | 2012-10-08 16:33:09 -0700 | [diff] [blame] | 722 | Memory cgroup implements memory thresholds using the cgroups notification |
Kirill A. Shutemov | 2e72b63 | 2010-03-10 15:22:24 -0800 | [diff] [blame] | 723 | API (see cgroups.txt). It allows to register multiple memory and memsw |
| 724 | thresholds and gets notifications when it crosses. |
| 725 | |
Michael Kerrisk | 1939c55 | 2012-10-08 16:33:09 -0700 | [diff] [blame] | 726 | To register a threshold, an application must: |
KAMEZAWA Hiroyuki | dc10e28 | 2010-05-26 14:42:40 -0700 | [diff] [blame] | 727 | - create an eventfd using eventfd(2); |
| 728 | - open memory.usage_in_bytes or memory.memsw.usage_in_bytes; |
| 729 | - write string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>" to |
| 730 | cgroup.event_control. |
Kirill A. Shutemov | 2e72b63 | 2010-03-10 15:22:24 -0800 | [diff] [blame] | 731 | |
| 732 | Application will be notified through eventfd when memory usage crosses |
| 733 | threshold in any direction. |
| 734 | |
This works for both root and non-root cgroups.
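
For illustration, a minimal C sketch of this sequence might look as follows.
The cgroup path "/sys/fs/cgroup/memory/foo" and the 50 MB threshold are
made-up values, and error handling is omitted for brevity:

  #include <sys/eventfd.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[64];
      uint64_t count;

      int efd = eventfd(0, 0);                           /* 1. eventfd(2)  */
      int ufd = open("/sys/fs/cgroup/memory/foo/"
                     "memory.usage_in_bytes", O_RDONLY); /* 2. usage file  */
      int cfd = open("/sys/fs/cgroup/memory/foo/"
                     "cgroup.event_control", O_WRONLY);

      /* 3. "<event_fd> <fd of memory.usage_in_bytes> <threshold>" */
      int len = snprintf(buf, sizeof(buf), "%d %d 52428800", efd, ufd);
      write(cfd, buf, len);

      /* Blocks until usage crosses the 50 MB threshold in either direction. */
      read(efd, &count, sizeof(count));
      printf("threshold crossed %llu time(s)\n", (unsigned long long)count);
      return 0;
  }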
| 736 | |
KAMEZAWA Hiroyuki | 9490ff2 | 2010-05-26 14:42:36 -0700 | [diff] [blame] | 737 | 10. OOM Control |
| 738 | |
The memory.oom_control file is used for OOM notification and other controls.
| 740 | |
The memory cgroup implements an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows multiple OOM notification deliveries to be
registered, and a notification is delivered when an OOM occurs.
KAMEZAWA Hiroyuki | 9490ff2 | 2010-05-26 14:42:36 -0700 | [diff] [blame] | 744 | |
Michael Kerrisk | 1939c55 | 2012-10-08 16:33:09 -0700 | [diff] [blame] | 745 | To register a notifier, an application must: |
KAMEZAWA Hiroyuki | 9490ff2 | 2010-05-26 14:42:36 -0700 | [diff] [blame] | 746 | - create an eventfd using eventfd(2) |
| 747 | - open memory.oom_control file |
- write a string like "<event_fd> <fd of memory.oom_control>" to
| 749 | cgroup.event_control |
KAMEZAWA Hiroyuki | 9490ff2 | 2010-05-26 14:42:36 -0700 | [diff] [blame] | 750 | |
The application will be notified through the eventfd when an OOM occurs.
| 752 | OOM notification doesn't work for the root cgroup. |
KAMEZAWA Hiroyuki | 9490ff2 | 2010-05-26 14:42:36 -0700 | [diff] [blame] | 753 | |
You can disable the OOM killer by writing "1" to the memory.oom_control file:

# echo 1 > memory.oom_control
| 757 | |
If the OOM killer is disabled, tasks in the cgroup will hang/sleep on the
memory cgroup's OOM waitqueue when they request accountable memory.
KAMEZAWA Hiroyuki | 3c11ecf | 2010-05-26 14:42:37 -0700 | [diff] [blame] | 760 | |
To let them run again, you have to relax the memory cgroup's OOM status by
 * enlarging the limit or reducing the usage.
To reduce the usage,
 * kill some tasks.
 * move some tasks to another group with account (charge) migration.
 * remove some files (on tmpfs?)

Then, the stopped tasks will run again.
| 769 | |
Reading the file shows the current OOM status:
	oom_kill_disable 0 or 1 (if 1, the OOM killer is disabled)
	under_oom	 0 or 1 (if 1, the memory cgroup is under OOM and tasks
			  may be stopped.)
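
As a rough sketch (again assuming a hypothetical cgroup at
/sys/fs/cgroup/memory/foo and omitting error handling), an application could
register for OOM notifications and, once woken up, re-read memory.oom_control
to inspect under_oom:

  #include <sys/eventfd.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[128];
      uint64_t count;
      ssize_t n;

      int efd = eventfd(0, 0);
      int ofd = open("/sys/fs/cgroup/memory/foo/memory.oom_control",
                     O_RDONLY);
      int cfd = open("/sys/fs/cgroup/memory/foo/cgroup.event_control",
                     O_WRONLY);

      /* "<event_fd> <fd of memory.oom_control>" -- no further arguments */
      int len = snprintf(buf, sizeof(buf), "%d %d", efd, ofd);
      write(cfd, buf, len);

      read(efd, &count, sizeof(count));            /* blocks until an OOM */

      /* Re-read memory.oom_control to see oom_kill_disable and under_oom. */
      lseek(ofd, 0, SEEK_SET);
      n = read(ofd, buf, sizeof(buf) - 1);
      buf[n > 0 ? n : 0] = '\0';
      printf("OOM event in cgroup, current status:\n%s", buf);
      return 0;
  }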
KAMEZAWA Hiroyuki | 9490ff2 | 2010-05-26 14:42:36 -0700 | [diff] [blame] | 774 | |
Anton Vorontsov | 70ddf63 | 2013-04-29 15:08:31 -0700 | [diff] [blame] | 775 | 11. Memory Pressure |
| 776 | |
| 777 | The pressure level notifications can be used to monitor the memory |
| 778 | allocation cost; based on the pressure, applications can implement |
different strategies for managing their memory resources. The pressure
levels are defined as follows:
| 781 | |
The "low" level means that the system is reclaiming memory for new
allocations. Monitoring this reclaiming activity might be useful for
maintaining the cache level. Upon notification, the program (typically an
"Activity Manager") might analyze vmstat and act in advance (e.g. shut down
unimportant services early).
| 787 | |
The "medium" level means that the system is experiencing medium memory
pressure; the system might be swapping, paging out active file caches,
etc. Upon this event, applications may decide to further analyze
vmstat/zoneinfo/memcg or internal memory usage statistics and free any
resources that can be easily reconstructed or re-read from disk.
| 793 | |
The "critical" level means that the system is actively thrashing; it is
about to run out of memory (OOM), or the in-kernel OOM killer is about to
trigger. Applications should do whatever they can to help the system. It
might be too late to consult vmstat or any other statistics, so it is
advisable to take immediate action.
| 799 | |
The events are propagated upward until the event is handled, i.e. the
events are not pass-through. Here is what this means: suppose you have
three cgroups, A->B->C, with an event listener set up on each of A, B
and C, and suppose group C experiences some pressure. In this situation,
only group C will receive the notification, i.e. groups A and B will not
receive it. This is done to avoid excessive "broadcasting" of messages,
which disturbs the system and which is especially bad if we are low on
memory or thrashing. So, organize the cgroups wisely, or propagate the
events manually (or ask us to implement pass-through events, explaining
why you would need them).
| 810 | |
The file memory.pressure_level is only used to set up an eventfd. To
| 812 | register a notification, an application must: |
| 813 | |
| 814 | - create an eventfd using eventfd(2); |
| 815 | - open memory.pressure_level; |
- write a string like "<event_fd> <fd of memory.pressure_level> <level>"
| 817 | to cgroup.event_control. |
| 818 | |
The application will be notified through the eventfd when memory pressure
is at the specified level (or higher). Read/write operations on
memory.pressure_level are not implemented.
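
A minimal C sketch of this registration, analogous to the ones in sections 9
and 10 (the cgroup path is illustrative and error handling is omitted), could
look like this:

  #include <sys/eventfd.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[64];
      uint64_t count;

      int efd = eventfd(0, 0);
      int pfd = open("/sys/fs/cgroup/memory/foo/memory.pressure_level",
                     O_RDONLY);
      int cfd = open("/sys/fs/cgroup/memory/foo/cgroup.event_control",
                     O_WRONLY);

      /* "<event_fd> <fd of memory.pressure_level> <level>" */
      int len = snprintf(buf, sizeof(buf), "%d %d low", efd, pfd);
      write(cfd, buf, len);

      for (;;) {
          read(efd, &count, sizeof(count));  /* blocks until pressure */
          fprintf(stderr, "low (or higher) memory pressure reported\n");
      }
  }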
| 822 | |
| 823 | Test: |
| 824 | |
| 825 | Here is a small script example that makes a new cgroup, sets up a |
memory limit, sets up a notification in the cgroup, and then makes the
child cgroup experience critical pressure:
| 828 | |
| 829 | # cd /sys/fs/cgroup/memory/ |
| 830 | # mkdir foo |
| 831 | # cd foo |
| 832 | # cgroup_event_listener memory.pressure_level low & |
| 833 | # echo 8000000 > memory.limit_in_bytes |
| 834 | # echo 8000000 > memory.memsw.limit_in_bytes |
| 835 | # echo $$ > tasks |
| 836 | # dd if=/dev/zero | read x |
| 837 | |
| 838 | (Expect a bunch of notifications, and eventually, the oom-killer will |
| 839 | trigger.) |
| 840 | |
| 841 | 12. TODO |
Balbir Singh | 1b6df3a | 2008-02-07 00:13:46 -0800 | [diff] [blame] | 842 | |
1. Make the per-cgroup scanner reclaim non-shared pages first
2. Teach the controller to account for shared pages
3. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer
Balbir Singh | 1b6df3a | 2008-02-07 00:13:46 -0800 | [diff] [blame] | 847 | |
| 848 | Summary |
| 849 | |
| 850 | Overall, the memory controller has been a stable controller and has been |
| 851 | commented and discussed quite extensively in the community. |
| 852 | |
| 853 | References |
| 854 | |
1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/78
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
   subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller v2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller v2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 test results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller introduction (v6),
    http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/