Memory Controller
(Balbir Singh, commit 1b6df3a, 2008-02-07)

Salient features

a. Enable control of both RSS (mapped) and Page Cache (unmapped) pages
b. The infrastructure allows easy addition of other types of memory to control
c. Provides *zero overhead* for non-memory-controller users
d. Provides a double LRU: global memory pressure causes reclaim from the
   global LRU; a cgroup, on hitting a limit, reclaims from the per
   cgroup LRU

NOTE: Page Cache (unmapped) also includes Swap Cache pages as a subset
and will not be referred to explicitly in the rest of the documentation.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications.
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one, or use the controller just
   for fun (to learn and hack on the VM subsystem).

1. History

The memory controller has a long history. A request for comments for the
memory controller was posted by Balbir Singh [1]. At the time the RFC was
posted there were several implementations for memory control. The goal of
the RFC was to build consensus and agreement for the minimal features
required for memory control. The first RSS controller was posted by Balbir
Singh [2] in Feb 2007. Pavel Emelianov [3][4][5] has since posted three
versions of the RSS controller. At OLS, at the resource management BoF,
everyone suggested that we handle both page cache and RSS together. Another
request was raised to allow user space handling of OOM. The current memory
controller is at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the res_counter. The res_counter
tracks the current memory usage and limit of the group of processes
associated with the controller. Each cgroup has a memory controller specific
data structure (mem_cgroup) associated with it.
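
The res_counter idea can be sketched in ordinary C. This is an illustrative
model only (the field and function names follow the text, not the kernel
sources); the real res_counter is protected by a spinlock and embedded in
the mem_cgroup structure:

```c
#include <stdbool.h>

/* Illustrative sketch of a res_counter: tracks current usage,
 * the configured limit, and how often the limit was hit. */
struct res_counter {
    unsigned long usage;   /* currently accounted pages */
    unsigned long limit;   /* maximum allowed pages */
    unsigned long failcnt; /* number of times the limit was exceeded */
};

/* Try to charge `nr_pages` to the counter; fail if it would push
 * usage over the limit (the kernel would try reclaim first). */
bool res_counter_charge(struct res_counter *cnt, unsigned long nr_pages)
{
    if (cnt->usage + nr_pages > cnt->limit) {
        cnt->failcnt++;
        return false;
    }
    cnt->usage += nr_pages;
    return true;
}

/* Return pages to the counter when they are freed or reclaimed. */
void res_counter_uncharge(struct res_counter *cnt, unsigned long nr_pages)
{
    if (nr_pages > cnt->usage)
        nr_pages = cnt->usage; /* never underflow the counter */
    cnt->usage -= nr_pages;
}
```

The memory.failcnt field described in the user interface section is, in this
model, simply the failcnt value exposed to user space.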

2.2. Accounting

                      +--------------------+
                      |     mem_cgroup     |
                      |    (res_counter)   |
                      +--------------------+
                       /         ^        \
                      /          |         \
     +---------------+           |          +---------------+
     |   mm_struct   |...........|..........|   mm_struct   |
     |               |           |          |               |
     +---------------+           |          +---------------+
                                 |
                                 +--------------+
                                                |
     +---------------+                  +-------+-------+
     |     page      +----------------->|  page_cgroup  |
     |               |                  |               |
     +---------------+                  +---------------+

               (Figure 1: Hierarchy of Accounting)


Figure 1 shows the important aspects of the controller:

1. Accounting happens per cgroup
2. Each mm_struct knows about which cgroup it belongs to
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge() is invoked to set up
the necessary data structures and check if the cgroup that is being charged
is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data structure called page_cgroup is
allocated and associated with the page. This routine also adds the page to
the per cgroup LRU.
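
The charge path described above can be sketched as follows. This is a
hypothetical, simplified model: the reclaim stand-in here frees pages by
fiat, whereas the kernel actually walks the per cgroup LRU (see the reclaim
section), and the retry count is invented for illustration:

```c
#include <stdbool.h>

/* Simplified model of the charge path: check the limit, reclaim
 * from the cgroup on failure, then retry. Not kernel code. */
struct mem_cgroup {
    unsigned long usage;
    unsigned long limit;
};

/* Stand-in for per cgroup reclaim: pretend up to `want` pages can
 * be freed and report how many actually were. */
static unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
                                                  unsigned long want)
{
    unsigned long freed = want < mem->usage ? want : mem->usage;
    mem->usage -= freed;
    return freed;
}

/* Charge one page to the cgroup; on success the caller would allocate
 * a page_cgroup and put the page on the per cgroup LRU. */
bool mem_cgroup_charge(struct mem_cgroup *mem)
{
    int retries = 3;

    while (mem->usage + 1 > mem->limit) {
        if (!retries--)
            return false; /* give up: the OOM routine would run */
        try_to_free_mem_cgroup_pages(mem, 1);
    }
    mem->usage += 1;
    return true;
}
```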

2.2.1 Accounting details

All mapped pages (RSS) and unmapped user pages (Page Cache) are accounted.
RSS pages are accounted at the time of page_add_*_rmap() unless they've
already been accounted for earlier. A file page is accounted for as Page
Cache; when it is mapped into the page tables of a process, duplicate
accounting is carefully avoided. Page Cache pages are accounted at the time
of add_to_page_cache(). The corresponding routines that remove a page from
the page tables or from the Page Cache are used to decrement the accounting
counters of the cgroup.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first-touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).
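
A minimal sketch of first-touch accounting, assuming the page remembers the
cgroup that charged it (through its page_cgroup); all names here are
illustrative, not the kernel's:

```c
#include <stddef.h>

/* Sketch of first-touch accounting for shared pages: later touches
 * by other cgroups do not charge again until the page is uncharged. */
struct cgroup {
    unsigned long usage;
};

struct page {
    struct cgroup *owner; /* cgroup charged for this page, or NULL */
};

/* Charge the page to `cg` only if nobody has been charged yet. */
void touch_page(struct page *pg, struct cgroup *cg)
{
    if (pg->owner == NULL) { /* first touch: charge this cgroup */
        pg->owner = cg;
        cg->usage++;
    }                        /* later touches: already accounted */
}

/* On uncharge (free or reclaim) the page loses its owner; the next
 * cgroup to touch it gets charged, as the text describes. */
void uncharge_page(struct page *pg)
{
    if (pg->owner) {
        pg->owner->usage--;
        pg->owner = NULL;
    }
}
```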

2.4 Reclaim

Each cgroup maintains a per cgroup LRU that consists of an active
and an inactive list. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup.

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per cgroup LRU
list.
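
The "bulkiest task" selection can be illustrated with a sketch. The kernel's
actual OOM code weighs victims more carefully, so treat the "largest RSS
wins" rule and all names below as simplifications for illustration:

```c
#include <stddef.h>

/* Hypothetical task descriptor: just a name and a footprint. */
struct task {
    const char *name;
    unsigned long rss_pages;
};

/* Pick the task with the largest footprint -- a simplified stand-in
 * for the per cgroup OOM routine's victim selection. */
const struct task *select_oom_victim(const struct task *tasks, size_t n)
{
    const struct task *victim = NULL;
    size_t i;

    for (i = 0; i < n; i++)
        if (!victim || tasks[i].rss_pages > victim->rss_pages)
            victim = &tasks[i];
    return victim;
}
```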

2.5 Locking

The memory controller uses the following locking hierarchy:

1. zone->lru_lock is used for selecting pages to be isolated
2. mem->lru_lock protects the per cgroup LRU
3. lock_page_cgroup() is used to protect page->page_cgroup
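
The point of a fixed hierarchy is that every path takes the locks in the
same order, which rules out deadlock between paths that need more than one
of them. A user-space sketch, with pthread mutexes standing in for the
kernel's spinlocks and all names illustrative:

```c
#include <pthread.h>

/* Locks mirror the hierarchy above: zone lru_lock first, then the
 * cgroup lru_lock, then the per page page_cgroup lock. */
static pthread_mutex_t zone_lru_lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mem_lru_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page_cgroup_lock = PTHREAD_MUTEX_INITIALIZER;

static int isolated_pages;

/* A path that needs all three levels takes them top-down and
 * releases them in reverse order. */
void isolate_and_account_page(void)
{
    pthread_mutex_lock(&zone_lru_lock);    /* 1. pick pages to isolate */
    pthread_mutex_lock(&mem_lru_lock);     /* 2. touch the cgroup LRU */
    pthread_mutex_lock(&page_cgroup_lock); /* 3. update page->page_cgroup */

    isolated_pages++;

    pthread_mutex_unlock(&page_cgroup_lock);
    pthread_mutex_unlock(&mem_lru_lock);
    pthread_mutex_unlock(&zone_lru_lock);
}
```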

3. User Interface

0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_RESOURCE_COUNTERS
c. Enable CONFIG_CGROUP_MEM_CONT

1. Prepare the cgroups
# mkdir -p /cgroups
# mount -t cgroup none /cgroups -o memory

2. Make the new group and move bash into it
# mkdir /cgroups/0
# echo $$ > /cgroups/0/tasks

Since we are now in the 0 cgroup, we can alter the memory limit:
# echo -n 6000 > /cgroups/0/memory.limit

We can check the usage:
# cat /cgroups/0/memory.usage
25

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

4. Testing

Balbir posted lmbench, AIM9, LTP and vmmstress results [10] and [11].
Apart from that, v6 has been tested with several applications and regular
daily use. The controller has also been tested on the PPC64, x86_64 and
UML platforms.

4.1 Troubleshooting

Sometimes a user might find that the application under a cgroup is
terminated. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).

4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.

4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2,
a cgroup might have some charge associated with it, even though all tasks
have migrated away from it. If some pages are still left after following
the steps listed in sections 4.1 and 4.2, check the Swap Cache usage in
/proc/meminfo to see if the Swap Cache usage is showing up in the cgroup's
memory.usage counter. A simple test of swapoff -a and swapon -a should free
any pending Swap Cache usage.

4.4 Choosing what to account -- Page Cache (unmapped) vs RSS (mapped)?

The type of memory accounted by the cgroup can be limited to just
mapped pages by writing "1" to the memory.control_type field:

# echo -n 1 > memory.control_type

5. TODO

1. Add support for accounting huge pages (as a separate controller)
2. Improve the user interface to accept/display memory limits in KB or MB
   rather than pages (since page sizes can differ across platforms/machines)
3. Make cgroup lists per-zone
4. Make the per-cgroup scanner reclaim non-shared pages first
5. Teach the controller to account for shared pages
6. Start reclamation when the limit is lowered
7. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer
8. Create per-zone LRU lists per cgroup

Summary

Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.

References

1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/74
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
   subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller V2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller V2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller v6, http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/