Cgroup unified hierarchy

April, 2014		Tejun Heo <tj@kernel.org>

This document describes the changes made by unified hierarchy and
their rationales. It will eventually be merged into the main cgroup
documentation.

CONTENTS

1. Background
2. Basic Operation
  2-1. Mounting
  2-2. cgroup.subtree_control
  2-3. cgroup.controllers
3. Structural Constraints
  3-1. Top-down
  3-2. No internal tasks
4. Delegation
  4-1. Model of delegation
  4-2. Common ancestor rule
5. Other Changes
  5-1. [Un]populated Notification
  5-2. Other Core Changes
  5-3. Per-Controller Changes
    5-3-1. blkio
    5-3-2. cpuset
    5-3-3. memory
6. Planned Changes
  6-1. CAP for resource control


1. Background

cgroup allows an arbitrary number of hierarchies and each hierarchy
can host any number of controllers. While this seems to provide a
high level of flexibility, it isn't very useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies can only be used in one. The issue is exacerbated by the
fact that controllers can't be moved around once hierarchies are
populated. Another issue is that all controllers bound to a hierarchy
are forced to have exactly the same view of the hierarchy. It isn't
possible to vary the granularity depending on the specific controller.

In practice, these issues heavily limit which controllers can be put
on the same hierarchy, and most configurations resort to putting each
controller on its own hierarchy. Only closely related ones, such as
the cpu and cpuacct controllers, make sense to put on the same
hierarchy. This often means that userland ends up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation is necessary.

Unfortunately, support for multiple hierarchies comes at a steep cost.
The internal implementation in cgroup core proper is dazzlingly
complicated, but more importantly, the support for multiple
hierarchies restricts how cgroup can be used in general and what
controllers can do.

There's no limit on how many hierarchies there may be, which means
that a task's cgroup membership can't be described in finite length.
The key may contain an arbitrary number of entries and is unlimited in
length, which makes it highly awkward to handle and leads to the
addition of controllers which exist only to identify membership, which
in turn exacerbates the original problem.

Also, as a controller can't have any expectation regarding the shape
of hierarchies other controllers would be on, each controller has to
assume that all other controllers are operating on completely
orthogonal hierarchies. This makes it impossible, or at least very
cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.

Unified hierarchy is the next version of the cgroup interface. It
aims to address the aforementioned issues by having more structure
while retaining enough flexibility for most use cases. Various other
general and controller-specific interface issues are also addressed in
the process.


2. Basic Operation

2-1. Mounting

Currently, unified hierarchy can be mounted with the following mount
command. Note that this is still under development and scheduled to
change soon.

  mount -t cgroup -o __DEVEL__sane_behavior cgroup $MOUNT_POINT

All controllers which support the unified hierarchy and are not bound
to other hierarchies are automatically bound to the unified hierarchy
and show up at its root. Controllers which are enabled only in the
root of the unified hierarchy can be bound to other hierarchies. This
allows mixing the unified hierarchy with the traditional multiple
hierarchies in a fully backward compatible way.

For development purposes, the following boot parameter makes all
controllers appear on the unified hierarchy whether supported or not.

  cgroup__DEVEL__legacy_files_on_dfl

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the unified hierarchy after the final umount of the previous
hierarchy. Similarly, a controller should be fully disabled to be
moved out of the unified hierarchy and it may take some time for the
disabled controller to become available for other hierarchies;
furthermore, due to dependencies among controllers, other controllers
may need to be disabled too.

While useful for development and manual configurations, dynamically
moving controllers between the unified and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers.

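As a hedged illustration of the move sequence above (the controller
choice and mount points are hypothetical; the commands require root
and a kernel with unified hierarchy support):

  # Unbind freezer from its traditional hierarchy. The final umount
  # starts the asynchronous release of the per-cgroup states.
  umount /sys/fs/cgroup/freezer

  # The controller may not be available immediately; once released, it
  # appears in the unified hierarchy root's "cgroup.controllers".
  cat /sys/fs/cgroup/unified/cgroup.controllers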

2-2. cgroup.subtree_control

All cgroups on the unified hierarchy have a "cgroup.subtree_control"
file which governs which controllers are enabled on the children of
the cgroup. Let's assume a hierarchy like the following.

  root - A - B - C
               \ D

root's "cgroup.subtree_control" file determines which controllers are
enabled on A. A's on B. B's on C and D. This coincides with the
fact that controllers on the immediate sub-level are used to
distribute the resources of the parent. In fact, it's natural to
assume that resource control knobs of a child belong to its parent.
Enabling a controller in a "cgroup.subtree_control" file declares that
distribution of the respective resources of the cgroup will be
controlled. Note that this means that controller enable states are
shared among siblings.

When read, the file contains a space-separated list of currently
enabled controllers. A write to the file should contain a
space-separated list of controllers with '+' or '-' prefixed (without
the quotes). Controllers prefixed with '+' are enabled and those
prefixed with '-' are disabled. If a controller is listed multiple
times, the last entry wins. The specified operations are executed
atomically - either all succeed or all fail.

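The read/write semantics above can be sketched with a short shell
session (illustrative; assumes the unified hierarchy is mounted at
/sys/fs/cgroup/unified with a child cgroup A, and requires root):

  # See which controllers are currently enabled for A's children.
  cat /sys/fs/cgroup/unified/A/cgroup.subtree_control

  # Enable memory and disable cpu for A's children in one atomic
  # write; if either operation can't be carried out, neither is.
  echo "+memory -cpu" > /sys/fs/cgroup/unified/A/cgroup.subtree_control

  # "-memory +memory" in a single write leaves memory enabled - the
  # last entry wins.
  echo "-memory +memory" > /sys/fs/cgroup/unified/A/cgroup.subtree_control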

2-3. cgroup.controllers

The read-only "cgroup.controllers" file contains a space-separated
list of controllers which can be enabled in the cgroup's
"cgroup.subtree_control" file.

In the root cgroup, this lists controllers which are not bound to
other hierarchies, and its content changes as controllers are bound to
and unbound from other hierarchies.

In non-root cgroups, the content of this file equals that of the
parent's "cgroup.subtree_control" file, as only controllers enabled
by the parent can be used in its children.


3. Structural Constraints

3-1. Top-down

As it doesn't make sense to nest control of an uncontrolled resource,
all non-root "cgroup.subtree_control" files can only contain
controllers which are enabled in the parent's "cgroup.subtree_control"
file. A controller can be enabled only if the parent has the
controller enabled, and a controller can't be disabled if one or more
children have it enabled.


3-2. No internal tasks

One long-standing issue that cgroup faces is the competition between
tasks belonging to the parent cgroup and its child cgroups. This is
inherently nasty as two different types of entities compete and there
is no agreed-upon obvious way to handle it. Different controllers are
doing different things.

The cpu controller considers tasks and cgroups as equivalents and maps
nice levels to cgroup weights. This works for some cases but falls
flat when children should be allocated specific ratios of CPU cycles
and the number of internal tasks fluctuates - the ratios constantly
change as the number of competing entities fluctuates. There are
other issues too. The mapping from nice level to weight isn't obvious
or universal, and there are various other knobs which simply aren't
available for tasks.

The blkio controller implicitly creates a hidden leaf node for each
cgroup to host the tasks. The hidden leaf has its own copies of all
the knobs with "leaf_" prefixed. While this allows equivalent control
over internal tasks, it comes with serious drawbacks. It always adds
an extra layer of nesting which may not be necessary, makes the
interface messy and significantly complicates the implementation.

The memory controller currently doesn't have a way to control what
happens between internal tasks and child cgroups and the behavior is
not clearly defined. There have been attempts to add ad-hoc behaviors
and knobs to tailor the behavior to specific workloads. Continuing
in this direction will lead to problems which will be extremely
difficult to resolve in the long term.

Multiple controllers struggle with internal tasks and have come up
with different ways to deal with it; unfortunately, all the approaches
in use now are severely flawed and, furthermore, the widely different
behaviors make cgroup as a whole highly inconsistent.

It is clear that this is something which needs to be addressed from
cgroup core proper in a uniform way so that controllers don't need to
worry about it and cgroup as a whole shows a consistent and logical
behavior. To achieve that, unified hierarchy enforces the following
structural constraint:

  Except for the root, only cgroups which don't contain any task may
  have controllers enabled in their "cgroup.subtree_control" files.

Combined with other properties, this guarantees that, when a
controller is looking at the part of the hierarchy which has it
enabled, tasks are always only on the leaves. This rules out
situations where child cgroups compete against internal tasks of the
parent.

There are two things to note. Firstly, the root cgroup is exempt from
this restriction. Root contains tasks and anonymous resource
consumption which can't be associated with any other cgroup and
requires special treatment from most controllers. How resource
consumption in the root cgroup is governed is up to each controller.

Secondly, the restriction doesn't take effect if there is no enabled
controller in the cgroup's "cgroup.subtree_control" file. This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its tasks to the children
before enabling controllers in its "cgroup.subtree_control" file.

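The transfer-then-enable sequence described above might look like the
following sketch (cgroup names are hypothetical; requires root):

  # A contains tasks and we want to control how its resources are
  # distributed; internal tasks must be moved to a leaf first.
  cd /sys/fs/cgroup/unified/A
  mkdir leaf

  # Move every process currently in A into the new leaf.
  while read pid; do echo "$pid" > leaf/cgroup.procs; done < cgroup.procs

  # Now that A itself is task-free, controllers can be enabled in its
  # "cgroup.subtree_control" file.
  echo "+memory" > cgroup.subtree_control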

4. Delegation

4-1. Model of delegation

A cgroup can be delegated to a less privileged user by granting write
access to the directory and its "cgroup.procs" file to the user. Note
that the resource control knobs in a given directory concern the
resources of the parent and thus must not be delegated along with the
directory.

Once delegated, the user can build a sub-hierarchy under the
directory, organize processes as it sees fit and further distribute
the resources it received from the parent. The limits and other
settings of all resource controllers are hierarchical and, regardless
of what happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.

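Delegation itself is plain file ownership; for example (the uid and
paths are illustrative):

  # Grant U0 (uid 1000 here) the ability to create sub-cgroups under
  # C0 and to move its own processes among them. The resource control
  # knobs in C0 stay owned by root - they configure distribution of
  # the parent's resources and must not be delegated.
  chown 1000 /sys/fs/cgroup/unified/C0
  chown 1000 /sys/fs/cgroup/unified/C0/cgroup.procs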

4-2. Common ancestor rule

On the unified hierarchy, to write to a "cgroup.procs" file, in
addition to the usual write permission to the file and uid match, the
writer must also have write access to the "cgroup.procs" file of the
common ancestor of the source and destination cgroups. This prevents
delegatees from smuggling processes across disjoint sub-hierarchies.

Let's say cgroups C0 and C1 have been delegated to user U0 who created
C00 and C01 under C0 and C10 under C1 as follows.

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

C0 and C1 are separate entities in terms of resource distribution
regardless of their relative positions in the hierarchy. The
resources the processes under C0 are entitled to are controlled by
C0's ancestors and may be completely different from those of C1. It's
clear that the intention of delegating C0 to U0 is to allow U0 to
organize the processes under C0 and further control the distribution
of C0's resources.

On traditional hierarchies, if a task has write access to the "tasks"
or "cgroup.procs" file of a cgroup and its uid agrees with the
target's, it can move the target to the cgroup. In the above example,
U0 would not only be able to move processes within each sub-hierarchy
but also across the two sub-hierarchies, effectively allowing it to
violate the organizational and resource restrictions implied by the
hierarchical structure above C0 and C1.

On the unified hierarchy, let's say U0 wants to write the pid of a
process which has a matching uid and is currently in C10 into
"C00/cgroup.procs". U0 obviously has write access to the file and
migration permission on the process; however, the common ancestor of
the source cgroup C10 and the destination cgroup C00 is above the
points of delegation, so U0 would not have write access to its
"cgroup.procs" and the write would be denied with -EACCES.

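Sketched as a shell session run by U0 (paths illustrative):

  # $PID has U0's uid and currently lives under C1/C10. The common
  # ancestor of C10 and C00 lies above the delegation points, so U0
  # can't write its "cgroup.procs" and the migration is rejected
  # with -EACCES.
  echo $PID > /sys/fs/cgroup/unified/C0/C00/cgroup.procs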

5. Other Changes

5-1. [Un]populated Notification

cgroup users often need a way to determine when a cgroup's
sub-hierarchy becomes empty so that it can be cleaned up. cgroup
currently provides release_agent for this; unfortunately, the
mechanism is riddled with issues.

- It delivers events by forking and execing a userland binary
  specified as the release_agent. This is a long deprecated method of
  notification delivery. It's extremely heavy, slow and cumbersome to
  integrate with larger infrastructure.

- There is a single monitoring point at the root. There's no way to
  delegate management of a subtree.

- The event isn't recursive. It triggers when a cgroup doesn't have
  any tasks or child cgroups. Events for internal nodes trigger only
  after all children are removed. This again makes it impossible to
  delegate management of a subtree.

- Events are filtered from the kernel side. A "notify_on_release"
  file is used to subscribe to or suppress release events. This is
  unnecessarily complicated and probably done this way because event
  delivery itself was expensive.

Unified hierarchy implements an interface file "cgroup.populated"
which can be used to monitor whether the cgroup's sub-hierarchy has
tasks in it or not. Its value is 0 if there is no task in the cgroup
and its descendants; otherwise, 1. poll and [id]notify events are
triggered when the value changes.

This is significantly lighter and simpler and trivially allows
delegating management of a sub-hierarchy - a monitor can block further
propagation simply by putting itself or another process in the
sub-hierarchy and watching the events it's interested in from there,
without interfering with monitoring higher in the tree.

In unified hierarchy, the release_agent mechanism is no longer
supported and the interface files "release_agent" and
"notify_on_release" do not exist.

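For example, the file can be watched with standard tooling (a sketch;
inotifywait comes from the inotify-tools package and the path is
illustrative):

  # Block until A's "cgroup.populated" changes, then check whether
  # the sub-hierarchy has become empty.
  while inotifywait -qq -e modify /sys/fs/cgroup/unified/A/cgroup.populated; do
          if [ "$(cat /sys/fs/cgroup/unified/A/cgroup.populated)" = "0" ]; then
                  echo "A and its descendants are now empty"
          fi
  done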

5-2. Other Core Changes

- None of the mount options is allowed.

- remount is disallowed.

- rename(2) is disallowed.

- The "tasks" file is removed. Everything should be at process
  granularity. Use the "cgroup.procs" file instead.

- The "cgroup.procs" file is not sorted. pids will be unique unless
  they were recycled between reads.

- The "cgroup.clone_children" file is removed.


5-3. Per-Controller Changes

5-3-1. blkio

- blk-throttle becomes properly hierarchical.


5-3-2. cpuset

- Tasks are kept in empty cpusets after hotplug and take on the masks
  of the nearest non-empty ancestor, instead of being moved to it.

- A task can be moved into an empty cpuset, and again it takes on the
  masks of the nearest non-empty ancestor.


5-3-3. memory

- use_hierarchy is on by default and the cgroup file for the flag is
  not created.

- The original lower boundary, the soft limit, is defined as a limit
  that is unset by default. As a result, the set of cgroups that
  global reclaim prefers is opt-in, rather than opt-out. The costs
  for optimizing these mostly negative lookups are so high that the
  implementation, despite its enormous size, does not even provide the
  basic desirable behavior. First off, the soft limit has no
  hierarchical meaning. All configured groups are organized in a
  global rbtree and treated like equal peers, regardless of where they
  are located in the hierarchy. This makes subtree delegation
  impossible. Second, the soft limit reclaim pass is so aggressive
  that it not only introduces high allocation latencies into the
  system, but also impacts system performance due to overreclaim, to
  the point where the feature becomes self-defeating.

  The memory.low boundary on the other hand is a top-down allocated
  reserve. A cgroup enjoys reclaim protection when it and all its
  ancestors are below their low boundaries, which makes delegation of
  subtrees possible. Secondly, new cgroups have no reserve by default
  and in the common case most cgroups are eligible for the preferred
  reclaim pass. This allows the new low boundary to be efficiently
  implemented with just a minor addition to the generic reclaim code,
  without the need for out-of-band data structures and reclaim passes.
  Because the generic reclaim code considers all cgroups except for
  the ones running low in the preferred first reclaim pass,
  overreclaim of individual groups is eliminated as well, resulting in
  much better overall workload performance.

- The original high boundary, the hard limit, is defined as a strict
  limit that cannot budge, even if the OOM killer has to be called.
  But this generally goes against the goal of making the most out of
  the available memory. The memory consumption of workloads varies
  during runtime, and that requires users to overcommit. But doing
  that with a strict upper limit requires either a fairly accurate
  prediction of the working set size or adding slack to the limit.
  Since working set size estimation is hard and error prone, and
  getting it wrong results in OOM kills, most users tend to err on the
  side of a looser limit and end up wasting precious resources.

  The memory.high boundary on the other hand can be set much more
  conservatively. When hit, it throttles allocations by forcing them
  into direct reclaim to work off the excess, but it never invokes the
  OOM killer. As a result, a high boundary that is chosen too
  aggressively will not terminate the processes, but instead it will
  lead to gradual performance degradation. The user can monitor this
  and make corrections until the minimal memory footprint that still
  gives acceptable performance is found.

  In extreme cases, with many concurrent allocations and a complete
  breakdown of reclaim progress within the group, the high boundary
  can be exceeded. But even then it's mostly better to satisfy the
  allocation from the slack available in other groups or the rest of
  the system than to kill the group. Otherwise, memory.max is there
  to limit this type of spillover and ultimately contain buggy or even
  malicious applications.

- The original control file names are unwieldy and inconsistent in
  many different ways. For example, the upper boundary hit count is
  exported in the memory.failcnt file, but an OOM event count has to
  be manually counted by listening to memory.oom_control events, and
  lower boundary / soft limit events have to be counted by first
  setting a threshold for that value and then counting those events.
  Also, usage and limit files encode their units in the filename.
  That makes the filenames very long, even though this is not
  information that a user needs to be reminded of every time they type
  out those names.

  To address these naming issues, as well as to signal clearly that
  the new interface carries a new configuration model, the naming
  conventions in it necessarily differ from the old interface.

- The original limit files indicate the state of an unset limit with a
  Very High Number, and a configured limit can be unset by echoing -1
  into those files. But that very high number is implementation and
  architecture dependent and not very descriptive. And while -1 can
  be understood as an underflow into the highest possible value, -2 or
  -10M etc. do not work, so it's not consistent.

  memory.low, memory.high, and memory.max will use the string "max" to
  indicate and set the highest possible value.
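
  For example (path illustrative; requires root):

    # Remove A's high boundary by writing the literal string "max"
    # instead of -1 or an architecture-dependent huge number.
    echo max > /sys/fs/cgroup/unified/A/memory.high
    cat /sys/fs/cgroup/unified/A/memory.high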

6. Planned Changes

6-1. CAP for resource control

Unified hierarchy will require one of the capabilities(7), which is
yet to be decided, for all resource control related knobs. Process
organization operations - creation of sub-cgroups and migration of
processes in sub-hierarchies - may be delegated by changing the
ownership and/or permissions on the cgroup directory and
"cgroup.procs" interface file; however, all operations which affect
resource control - writes to a "cgroup.subtree_control" file or any
controller-specific knobs - will require an explicit CAP privilege.

This, in part, is to prevent the cgroup interface from being
inadvertently promoted to a programmable API used by non-privileged
binaries. cgroup exposes various aspects of the system in ways which
aren't properly abstracted for direct consumption by regular programs.
This is an administration interface much closer to sysctl knobs than
system calls. Even the basic access model, being filesystem path
based, isn't suitable for direct consumption. There's no way to
access "my cgroup" in a race-free way or make multiple operations
atomic against migration to another cgroup.

Another aspect is that, for better or for worse, the cgroup interface
goes through far less scrutiny than regular interfaces for
unprivileged userland. The upside is that cgroup is able to expose
useful features which may not be suitable for general consumption in a
reasonable time frame. It provides a relatively short path between
internal details and the userland-visible interface. Of course, this
shortcut comes with high risk. We go through what we go through for
general kernel APIs for good reasons. It may end up leaking internal
details in a way which can exert significant pain by locking the
kernel into a contract that can't be maintained in a reasonable
manner.

Also, due to its specific nature, cgroup and its controllers don't
tend to attract attention from a wide scope of developers. cgroup's
short history is already fraught with severely mis-designed
interfaces, unnecessary commitments to and exposing of internal
details, and broken and dangerous implementations of various features.

Keeping cgroup as an administration interface is both advantageous for
its role and imperative given its nature. Some of the cgroup features
may make sense for unprivileged access. If deemed justified, those
must be further abstracted and implemented as a different interface,
be it a system call or process-private filesystem, and survive the
scrutiny that any interface for general consumption is required to go
through.

Requiring CAP is not a complete solution but should serve as a
significant deterrent against spraying cgroup usages in non-privileged
programs.