Memory Controller

Salient features

a. Enable control of both RSS (mapped) and Page Cache (unmapped) pages
b. The infrastructure allows easy addition of other types of memory to control
c. Provides *zero overhead* for non memory controller users
d. Provides a double LRU: global memory pressure causes reclaim from the
   global LRU; a cgroup, on hitting a limit, reclaims from the per
   cgroup LRU
NOTE: Swap Cache (unmapped) is not currently accounted.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications
   Memory hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one or use the controller just
   for fun (to learn and hack on the VM subsystem).

1. History

The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]. At the time the RFC was posted
there were several implementations for memory control. The goal of the
RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]
in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the
RSS controller. At OLS, at the resource management BoF, everyone suggested
that we handle both page cache and RSS together. Another request was raised
to allow user space handling of OOM. The current memory controller is
at version 6; it combines both mapped (RSS) and unmapped Page
Cache control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the res_counter. The res_counter
tracks the current memory usage and limit of the group of processes associated
with the controller. Each cgroup has a memory controller specific data
structure (mem_cgroup) associated with it.

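As a minimal illustration of how the res_counter surfaces to user space
(assuming the controller is mounted at /cgroups with a child group "0" as set
up in section 3 below), the counter's usage, limit and failure count are
exposed as per cgroup files:

# cat /cgroups/0/memory.usage_in_bytes    # current usage tracked by the res_counter
# cat /cgroups/0/memory.limit_in_bytes    # limit enforced by the res_counter
# cat /cgroups/0/memory.failcnt           # number of times the limit was exceeded
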
2.2. Accounting

              +--------------------+
              |     mem_cgroup     |
              |    (res_counter)   |
              +--------------------+
                /        ^        \
               /         |         \
  +---------------+      |      +---------------+
  |   mm_struct   |      |....  |   mm_struct   |
  |               |      |      |               |
  +---------------+      |      +---------------+
                         |
                         +--------------+
                                        |
  +---------------+           +---------+-----+
  |     page      +---------->|  page_cgroup  |
  |               |           |               |
  +---------------+           +---------------+

          (Figure 1: Hierarchy of Accounting)

Figure 1 shows the important aspects of the controller

1. Accounting happens per cgroup
2. Each mm_struct knows about which cgroup it belongs to
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge() is invoked to set up
the necessary data structures and check if the cgroup that is being charged
is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page metadata structure called page_cgroup is
allocated and associated with the page. This routine also adds the page to
the per cgroup LRU.

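The effect of charging can be observed from user space. The sketch below
assumes the setup described in section 3 (controller mounted at /cgroups with
a group "0"); the file path and the size used with dd are purely illustrative:

# echo $$ > /cgroups/0/tasks                        # further allocations by this shell are charged to group 0
# dd if=/dev/zero of=/tmp/testfile bs=1M count=8    # bring roughly 8MB of page cache into the cgroup
# cat /cgroups/0/memory.usage_in_bytes              # usage grows by roughly the amount of data written
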
2.2.1 Accounting details

All mapped pages (RSS) and unmapped user pages (Page Cache) are accounted.
RSS pages are accounted at the time of page_add_*_rmap() unless they've already
been accounted for earlier. A file page is accounted for as Page Cache; when
it is also mapped into the page tables of a process, duplicate accounting is
carefully avoided. Page Cache pages are accounted at the time of
add_to_page_cache(). The corresponding routines that remove a page from the
page tables or remove a page from the Page Cache are used to decrement the
accounting counters of the cgroup.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first touch approach. The
cgroup that first touches a page is charged for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

2.4 Reclaim

Each cgroup maintains a per cgroup LRU that consists of an active
and inactive list. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup.

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per cgroup LRU
list.

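A rough way to watch per cgroup reclaim and the OOM path in action is to set a
small limit and run a memory-hungry workload inside the group. In the sketch
below, memory-hog is a hypothetical program that allocates and touches the
requested amount of anonymous memory; the limit and sizes are illustrative:

# echo -n 4M > /cgroups/0/memory.limit_in_bytes
# echo $$ > /cgroups/0/tasks
# ./memory-hog 64M                    # hypothetical allocator; with little or no swap it may be OOM killed
# cat /cgroups/0/memory.failcnt       # incremented each time the limit was hit
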
2.5 Locking

The memory controller uses the following locking hierarchy:

1. zone->lru_lock is used for selecting pages to be isolated
2. mem->per_zone->lru_lock protects the per cgroup LRU (per zone)
3. lock_page_cgroup() is used to protect page->page_cgroup

3. User Interface

0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_RESOURCE_COUNTERS
c. Enable CONFIG_CGROUP_MEM_CONT

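One quick way to confirm that a running kernel was built with these options
(a sketch; it assumes the kernel was also built with CONFIG_IKCONFIG_PROC so
the configuration is exported at /proc/config.gz) is:

# zcat /proc/config.gz | grep -E 'CONFIG_CGROUPS|CONFIG_RESOURCE_COUNTERS|CONFIG_CGROUP_MEM_CONT'
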
1. Prepare the cgroups
# mkdir -p /cgroups
# mount -t cgroup none /cgroups -o memory

2. Make the new group and move bash into it
# mkdir /cgroups/0
# echo $$ > /cgroups/0/tasks
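
To verify that the shell has actually moved into the new group (the tasks file
lists the PIDs attached to the cgroup, and /proc/<pid>/cgroup shows membership
from the task's side):

# cat /cgroups/0/tasks
# cat /proc/$$/cgroup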

Since we are now in the 0 cgroup, we can alter the memory limit:
# echo -n 4M > /cgroups/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes.

# cat /cgroups/0/memory.limit_in_bytes
4194304 Bytes

NOTE: The interface has now changed to display the usage in bytes
instead of pages.

We can check the usage:
# cat /cgroups/0/memory.usage_in_bytes
1216512 Bytes

A successful write to this file does not guarantee that the limit is set to
exactly the value written. The limit can differ due to a number of factors,
such as rounding up to page boundaries or the total availability of memory on
the system. The user is required to re-read this file after a write to see
the value actually committed by the kernel.

# echo -n 1 > memory.limit_in_bytes
# cat memory.limit_in_bytes
4096 Bytes

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

The memory.stat file gives accounting information. It currently shows the
number of cache, RSS, active and inactive pages.

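For example, assuming the shell is still inside /cgroups/0 as set up above
(the field values below are purely illustrative, and the exact set of fields
and their units may differ between kernel versions):

# cat memory.stat
cache 212992
rss 811008
active 155648
inactive 868352
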
The memory.force_empty file gives an interface to drop *all* charges by force.

# echo -n 1 > memory.force_empty

will drop all charges in the cgroup. Currently, this is maintained mainly for
testing purposes.

4. Testing

Balbir posted lmbench, AIM9, LTP and vmmstress results [10] and [11].
Apart from that, v6 has been tested with several applications and regular
daily use. The controller has also been tested on the PPC64, x86_64 and
UML platforms.

4.1 Troubleshooting

Sometimes a user might find that an application under a cgroup is
terminated. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).

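Spelled out as commands (sync makes dirty page cache clean and therefore
reclaimable; writing 1 to drop_caches asks the kernel to drop clean page cache
pages, which also releases their charges):

# sync
# echo 1 > /proc/sys/vm/drop_caches
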
4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.

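For example (continuing the setup from section 3; the second group is created
here only for illustration):

# mkdir /cgroups/1
# echo $$ > /cgroups/1/tasks              # move the shell from group 0 to group 1
# cat /cgroups/0/memory.usage_in_bytes    # pages charged earlier remain charged to group 0
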
4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it. Such charges are automatically dropped at
rmdir() time if there are no remaining tasks.

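For example, assuming the shell is the only task left in /cgroups/0:

# echo $$ > /cgroups/tasks    # move the shell back to the root cgroup
# rmdir /cgroups/0            # any remaining charges are dropped automatically
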
4.4 Choosing what to account -- Page Cache (unmapped) vs RSS (mapped)?

The type of memory accounted by the cgroup can be limited to just
mapped pages by writing "1" to the memory.control_type field

# echo -n 1 > memory.control_type

5. TODO

1. Add support for accounting huge pages (as a separate controller)
2. Make per-cgroup scanner reclaim not-shared pages first
3. Teach controller to account for shared-pages
4. Start reclamation when the limit is lowered
5. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer

Summary

Overall, the memory controller has been a stable controller and has been
commented and discussed quite extensively in the community.

References

1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/74
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
   subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller V2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller V2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller v6, http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/