Commit d206e090 authored by Linus Torvalds

Merge branch 'for-3.8' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup

Pull cgroup changes from Tejun Heo:
 "A lot of activities on cgroup side.  The big changes are focused on
  making cgroup hierarchy handling saner.

   - cgroup_rmdir() had peculiar semantics - it allowed cgroup
     destruction to be vetoed by individual controllers and tried to
     drain refcnt synchronously.  The vetoing never worked properly and
     caused a good deal of contortions in cgroup.  memcg was the last
     remaining user.  Michal Hocko removed the usage and cgroup_rmdir()
     path has been simplified significantly.  This was done in a
     separate branch so that the memcg people can base further memcg
     changes on top.

   - The above allowed cleaning up cgroup lifecycle management and
     implementation of generic cgroup iterators which are used to
     improve hierarchy support.

   - cgroup_freezer updated to allow migration in and out of a frozen
     cgroup and handle hierarchy.  If a cgroup is frozen, all descendant
     cgroups are frozen.

   - netcls_cgroup and netprio_cgroup updated to handle hierarchy
     properly.

   - Various fixes and cleanups.

   - Two merge commits.  One to pull in memcg and rmdir cleanups (needed
     to build iterators).  The other pulled in cgroup/for-3.7-fixes for
     device_cgroup fixes so that further device_cgroup patches can be
     stacked on top."

Fixed up a trivial conflict in mm/memcontrol.c as per Tejun (due to
commit bea8c150 ("memcg: fix hotplugged memory zone oops") in master
touching code close to commit 2ef37d3f ("memcg: Simplify
mem_cgroup_force_empty_list error handling") in for-3.8)

* 'for-3.8' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (65 commits)
  cgroup: update Documentation/cgroups/00-INDEX
  cgroup_rm_file: don't delete the uncreated files
  cgroup: remove subsystem files when remounting cgroup
  cgroup: use cgroup_addrm_files() in cgroup_clear_directory()
  cgroup: warn about broken hierarchies only after css_online
  cgroup: list_del_init() on removed events
  cgroup: fix lockdep warning for event_control
  cgroup: move list add after list head initilization
  netprio_cgroup: allow nesting and inherit config on cgroup creation
  netprio_cgroup: implement netprio[_set]_prio() helpers
  netprio_cgroup: use cgroup->id instead of cgroup_netprio_state->prioidx
  netprio_cgroup: reimplement priomap expansion
  netprio_cgroup: shorten variable names in extend_netdev_table()
  netprio_cgroup: simplify write_priomap()
  netcls_cgroup: move config inheritance to ->css_online() and remove .broken_hierarchy marking
  cgroup: remove obsolete guarantee from cgroup_task_migrate.
  cgroup: add cgroup->id
  cgroup, cpuset: remove cgroup_subsys->post_clone()
  cgroup: s/CGRP_CLONE_CHILDREN/CGRP_CPUSET_CLONE_CHILDREN/
  cgroup: rename ->create/post_create/pre_destroy/destroy() to ->css_alloc/online/offline/free()
  ...
parents fef3ff2e 15ef4ffa
00-INDEX
- this file
blkio-controller.txt
- Description for Block IO Controller, implementation and usage details.
cgroups.txt
- Control Groups definition, implementation details, examples and API.
cgroup_event_listener.c
- A user program for cgroup listener.
cpuacct.txt
- CPU Accounting Controller; account CPU usage for groups of tasks.
cpusets.txt
@@ -10,9 +14,13 @@ devices.txt
- Device Whitelist Controller; description, interface and security.
freezer-subsystem.txt
- checkpointing; rationale to not use signals, interface.
hugetlb.txt
- HugeTLB Controller implementation and usage details.
memcg_test.txt
- Memory Resource Controller; implementation details.
memory.txt
- Memory Resource Controller; design, accounting, interface, testing.
net_prio.txt
- Network priority cgroups details and usages.
resource_counter.txt
- Resource Counter API.
@@ -299,11 +299,9 @@ a cgroup hierarchy's release_agent path is empty.
1.5 What does clone_children do ?
---------------------------------
If the clone_children flag is enabled (1) in a cgroup, then all
cgroups created beneath will call the post_clone callbacks for each
subsystem of the newly created cgroup. Usually when this callback is
implemented for a subsystem, it copies the values of the parent
subsystem, this is the case for the cpuset.
This flag only affects the cpuset controller. If the clone_children
flag is enabled (1) in a cgroup, a new cpuset cgroup will copy its
configuration from the parent during initialization.
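For illustration, a minimal user-space sketch of the flag in action (the paths are hypothetical and assume a cpuset hierarchy mounted at /sys/fs/cgroup/cpuset):

/* Hypothetical sketch: enable clone_children on a parent cpuset, then
 * create a child; the child starts with the parent's cpus/mems copied. */
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/fs/cgroup/cpuset/parent/cgroup.clone_children",
		      O_WRONLY);
	if (fd < 0)
		return 1;
	if (write(fd, "1", 1) != 1)	/* turn the flag on */
		return 1;
	close(fd);

	/* the new cpuset copies cpus/mems from "parent" at creation */
	return mkdir("/sys/fs/cgroup/cpuset/parent/child", 0755) ? 1 : 0;
}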
1.6 How do I use cgroups ?
--------------------------
@@ -553,16 +551,16 @@ call to cgroup_unload_subsys(). It should also set its_subsys.module =
THIS_MODULE in its .c file.
Each subsystem may export the following methods. The only mandatory
methods are create/destroy. Any others that are null are presumed to
methods are css_alloc/free. Any others that are null are presumed to
be successful no-ops.
struct cgroup_subsys_state *create(struct cgroup *cgrp)
struct cgroup_subsys_state *css_alloc(struct cgroup *cgrp)
(cgroup_mutex held by caller)
Called to create a subsystem state object for a cgroup. The
Called to allocate a subsystem state object for a cgroup. The
subsystem should allocate its subsystem state object for the passed
cgroup, returning a pointer to the new object on success or a
negative error code. On success, the subsystem pointer should point to
ERR_PTR() value. On success, the subsystem pointer should point to
a structure of type cgroup_subsys_state (typically embedded in a
larger subsystem-specific object), which will be initialized by the
cgroup system. Note that this will be called at initialization to
@@ -571,24 +569,33 @@ identified by the passed cgroup object having a NULL parent (since
it's the root of the hierarchy) and may be an appropriate place for
initialization code.
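As a sketch of the contract described above (the foo_* names are hypothetical, not part of the API):

#include <linux/cgroup.h>
#include <linux/err.h>
#include <linux/slab.h>

/* Hypothetical subsystem state: css embedded in a larger object. */
struct foo_cgroup {
	struct cgroup_subsys_state css;
	u64 some_setting;
};

static struct cgroup_subsys_state *foo_css_alloc(struct cgroup *cgrp)
{
	struct foo_cgroup *foo;

	foo = kzalloc(sizeof(*foo), GFP_KERNEL);
	if (!foo)
		return ERR_PTR(-ENOMEM);	/* not NULL - an ERR_PTR() value */

	/* !cgrp->parent means this css is for the hierarchy root */
	return &foo->css;
}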
void destroy(struct cgroup *cgrp)
int css_online(struct cgroup *cgrp)
(cgroup_mutex held by caller)
The cgroup system is about to destroy the passed cgroup; the subsystem
should do any necessary cleanup and free its subsystem state
object. By the time this method is called, the cgroup has already been
unlinked from the file system and from the child list of its parent;
cgroup->parent is still valid. (Note - can also be called for a
newly-created cgroup if an error occurs after this subsystem's
create() method has been called for the new cgroup).
Called after @cgrp has successfully completed all allocations and is
made visible to cgroup_for_each_child/descendant_*() iterators. The
subsystem may choose to fail creation by returning -errno. This
callback can be used to implement reliable state sharing and
propagation along the hierarchy. See the comment on
cgroup_for_each_descendant_pre() for details.
int pre_destroy(struct cgroup *cgrp);
void css_offline(struct cgroup *cgrp);
Called before checking the reference count on each subsystem. This may
be useful for subsystems which have some extra references even if
there are no tasks in the cgroup. If pre_destroy() returns an error code,
rmdir() will fail with it. From this behavior, pre_destroy() can be
called multiple times against a cgroup.
This is the counterpart of css_online() and called iff css_online()
has succeeded on @cgrp. This signifies the beginning of the end of
@cgrp. @cgrp is being removed and the subsystem should start dropping
all references it's holding on @cgrp. When all references are dropped,
cgroup removal will proceed to the next step - css_free(). After this
callback, @cgrp should be considered dead to the subsystem.
void css_free(struct cgroup *cgrp)
(cgroup_mutex held by caller)
The cgroup system is about to free @cgrp; the subsystem should free
its subsystem state object. By the time this method is called, @cgrp
is completely unused; @cgrp->parent is still valid. (Note - can also
be called for a newly-created cgroup if an error occurs after this
subsystem's css_alloc() method has been called for the new cgroup).
int can_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)
@@ -635,14 +642,6 @@ void exit(struct task_struct *task)
Called during task exit.
void post_clone(struct cgroup *cgrp)
(cgroup_mutex held by caller)
Called during cgroup_create() to do any parameter
initialization which might be required before a task could attach. For
example, in cpusets, no task may attach before 'cpus' and 'mems' are set
up.
void bind(struct cgroup *root)
(cgroup_mutex held by caller)
......
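Taken together, a subsystem on the new interface wires these lifecycle callbacks roughly as in the following hypothetical sketch; the blkio, cpuset, perf_event and hugetlb conversions in the diffs below all have this shape:

struct cgroup_subsys foo_subsys = {
	.name		= "foo",
	.subsys_id	= foo_subsys_id,
	.css_alloc	= foo_css_alloc,	/* mandatory */
	.css_online	= foo_css_online,	/* optional; may return -errno */
	.css_offline	= foo_css_offline,	/* optional; start dropping refs */
	.css_free	= foo_css_free,		/* mandatory */
};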
@@ -49,13 +49,49 @@ prevent the freeze/unfreeze cycle from becoming visible to the tasks
being frozen. This allows the bash example above and gdb to run as
expected.
The freezer subsystem in the container filesystem defines a file named
freezer.state. Writing "FROZEN" to the state file will freeze all tasks in the
cgroup. Subsequently writing "THAWED" will unfreeze the tasks in the cgroup.
Reading will return the current state.
The cgroup freezer is hierarchical. Freezing a cgroup freezes all
tasks belonging to the cgroup and all its descendant cgroups. Each
cgroup has its own state (self-state) and the state inherited from the
parent (parent-state). Iff both states are THAWED, the cgroup is
THAWED.
Note that freezer.state doesn't exist in the root cgroup, which means the
root cgroup is non-freezable.
The following cgroupfs files are created by the cgroup freezer.
* freezer.state: Read-write.
When read, returns the effective state of the cgroup - "THAWED",
"FREEZING" or "FROZEN". This is the combined self and parent-states.
If either is freezing, the cgroup is freezing (FREEZING or FROZEN).
FREEZING cgroup transitions into FROZEN state when all tasks
belonging to the cgroup and its descendants become frozen. Note that
a cgroup reverts to FREEZING from FROZEN after a new task is added
to the cgroup or one of its descendant cgroups until the new task is
frozen.
When written, sets the self-state of the cgroup. Two values are
allowed - "FROZEN" and "THAWED". If FROZEN is written, the cgroup,
if not already freezing, enters FREEZING state along with all its
descendant cgroups.
If THAWED is written, the self-state of the cgroup is changed to
THAWED. Note that the effective state may not change to THAWED if
the parent-state is still freezing. If a cgroup's effective state
becomes THAWED, all its descendants which are freezing because of
the cgroup also leave the freezing state.
* freezer.self_freezing: Read only.
Shows the self-state. 0 if the self-state is THAWED; otherwise, 1.
This value is 1 iff the last write to freezer.state was "FROZEN".
* freezer.parent_freezing: Read only.
Shows the parent-state. 0 if none of the cgroup's ancestors is
frozen; otherwise, 1.
The root cgroup is non-freezable and the above interface files don't
exist.
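As a quick illustration of the interface files above, a hedged user-space sketch (the cgroup path passed in is hypothetical; error handling is mostly omitted):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Freeze a cgroup and wait for the effective state to reach FROZEN. */
static int freeze_cgroup(const char *state_file)
{
	char buf[16] = "";
	FILE *f;

	f = fopen(state_file, "w");
	if (!f)
		return -1;
	fputs("FROZEN", f);		/* sets the self-state */
	fclose(f);

	do {				/* FREEZING until every task froze */
		sleep(1);
		f = fopen(state_file, "r");
		if (!f)
			return -1;
		fgets(buf, sizeof(buf), f);
		fclose(f);
	} while (strncmp(buf, "FROZEN", 6) != 0);
	return 0;
}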
* Examples of usage :
@@ -85,18 +121,3 @@ to unfreeze all tasks in the container :
This is the basic mechanism which should do the right thing for user space
tasks in a simple scenario.
It's important to note that freezing can be incomplete. In that case we return
EBUSY. This means that some tasks in the cgroup are busy doing something that
prevents us from completely freezing the cgroup at this time. After EBUSY,
the cgroup will remain partially frozen -- reflected by freezer.state reporting
"FREEZING" when read. The state will remain "FREEZING" until one of these
things happens:
1) Userspace cancels the freezing operation by writing "THAWED" to
the freezer.state file
2) Userspace retries the freezing operation by writing "FROZEN" to
the freezer.state file (writing "FREEZING" is not legal
and returns EINVAL)
3) The tasks that blocked the cgroup from entering the "FROZEN"
state disappear from the cgroup's set of tasks.
@@ -51,3 +51,5 @@ One usage for the net_prio cgroup is with mqprio qdisc allowing application
traffic to be steered to hardware/driver based traffic classes. These mappings
can then be managed by administrators or other networking protocols such as
DCBX.
A new net_prio cgroup inherits the parent's configuration.
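A brief hedged sketch of what that means in practice (device name and mount path are illustrative):

#include <stdio.h>

int main(void)
{
	/* Give this cgroup priority 5 on eth0; cgroups created beneath it
	 * now start with the same mapping instead of priority 0. */
	FILE *f = fopen("/sys/fs/cgroup/net_prio/batch/net_prio.ifpriomap", "w");

	if (!f)
		return 1;
	fprintf(f, "eth0 5\n");		/* "<ifname> <prio>" */
	return fclose(f) ? 1 : 0;
}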
@@ -600,7 +600,7 @@ struct cftype blkcg_files[] = {
};
/**
* blkcg_pre_destroy - cgroup pre_destroy callback
* blkcg_css_offline - cgroup css_offline callback
* @cgroup: cgroup of interest
*
* This function is called when @cgroup is about to go away and responsible
@@ -610,7 +610,7 @@ struct cftype blkcg_files[] = {
*
* This is the blkcg counterpart of ioc_release_fn().
*/
static int blkcg_pre_destroy(struct cgroup *cgroup)
static void blkcg_css_offline(struct cgroup *cgroup)
{
struct blkcg *blkcg = cgroup_to_blkcg(cgroup);
@@ -632,10 +632,9 @@ static int blkcg_pre_destroy(struct cgroup *cgroup)
}
spin_unlock_irq(&blkcg->lock);
return 0;
}
static void blkcg_destroy(struct cgroup *cgroup)
static void blkcg_css_free(struct cgroup *cgroup)
{
struct blkcg *blkcg = cgroup_to_blkcg(cgroup);
@@ -643,7 +642,7 @@ static void blkcg_destroy(struct cgroup *cgroup)
kfree(blkcg);
}
static struct cgroup_subsys_state *blkcg_create(struct cgroup *cgroup)
static struct cgroup_subsys_state *blkcg_css_alloc(struct cgroup *cgroup)
{
static atomic64_t id_seq = ATOMIC64_INIT(0);
struct blkcg *blkcg;
@@ -740,10 +739,10 @@ static int blkcg_can_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
struct cgroup_subsys blkio_subsys = {
.name = "blkio",
.create = blkcg_create,
.css_alloc = blkcg_css_alloc,
.css_offline = blkcg_css_offline,
.css_free = blkcg_css_free,
.can_attach = blkcg_can_attach,
.pre_destroy = blkcg_pre_destroy,
.destroy = blkcg_destroy,
.subsys_id = blkio_subsys_id,
.base_cftypes = blkcg_files,
.module = THIS_MODULE,
......
@@ -75,35 +75,68 @@ static inline bool cgroup_freezing(struct task_struct *task)
*/
/* Tell the freezer not to count the current task as freezable. */
/**
* freezer_do_not_count - tell freezer to ignore %current
*
* Tell freezers to ignore the current task when determining whether the
* target frozen state is reached. IOW, the current task will be
* considered frozen enough by freezers.
*
* The caller shouldn't do anything which isn't allowed for a frozen task
* until freezer_count() is called. Usually, a freezer[_do_not]_count() pair
* wraps a scheduling operation and nothing much else.
*/
static inline void freezer_do_not_count(void)
{
current->flags |= PF_FREEZER_SKIP;
}
/*
* Tell the freezer to count the current task as freezable again and try to
* freeze it.
/**
* freezer_count - tell freezer to stop ignoring %current
*
* Undo freezer_do_not_count(). It tells freezers that %current should be
* considered again and tries to freeze if freezing condition is already in
* effect.
*/
static inline void freezer_count(void)
{
current->flags &= ~PF_FREEZER_SKIP;
/*
* If freezing is in progress, the following paired with smp_mb()
* in freezer_should_skip() ensures that either we see %true
* freezing() or freezer_should_skip() sees !PF_FREEZER_SKIP.
*/
smp_mb();
try_to_freeze();
}
/*
* Check if the task should be counted as freezable by the freezer
/**
* freezer_should_skip - whether to skip a task when determining frozen
* state is reached
* @p: task in question
*
* This function is used by freezers after establishing %true freezing() to
* test whether a task should be skipped when determining the target frozen
* state is reached. IOW, if this function returns %true, @p is considered
* frozen enough.
*/
static inline int freezer_should_skip(struct task_struct *p)
static inline bool freezer_should_skip(struct task_struct *p)
{
return !!(p->flags & PF_FREEZER_SKIP);
/*
* The following smp_mb() paired with the one in freezer_count()
* ensures that either freezer_count() sees %true freezing() or we
* see cleared %PF_FREEZER_SKIP and return %false. This makes it
* impossible for a task to slip frozen state testing after
* clearing %PF_FREEZER_SKIP.
*/
smp_mb();
return p->flags & PF_FREEZER_SKIP;
}
/*
* These macros are intended to be used whenever you want to allow a task that's
* sleeping in TASK_UNINTERRUPTIBLE or TASK_KILLABLE state to be frozen. Note
* that neither returns any clear indication of whether a freeze event happened
* while in this function.
* These macros are intended to be used whenever you want to allow a sleeping
* task to be frozen. Note that neither returns any clear indication of
* whether a freeze event happened while in this function.
*/
/* Like schedule(), but should not block the freezer. */
......
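The intended pairing of freezer_do_not_count() and freezer_count() around a sleep, as the comments above describe, looks roughly like the following sketch (illustrative only; the in-tree freezable_schedule() helper used by the signal/ptrace changes below follows this pattern):

#include <linux/freezer.h>
#include <linux/sched.h>

/* Sketch: sleep while letting freezers treat us as already frozen,
 * then re-arm and freeze immediately if freezing is in effect. */
static inline void foo_freezable_sleep(void)
{
	freezer_do_not_count();		/* freezers skip this task */
	schedule();
	freezer_count();		/* also calls try_to_freeze() */
}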
@@ -27,7 +27,6 @@ struct netprio_map {
struct cgroup_netprio_state {
struct cgroup_subsys_state css;
u32 prioidx;
};
extern void sock_update_netprioidx(struct sock *sk, struct task_struct *task);
@@ -36,13 +35,12 @@ extern void sock_update_netprioidx(struct sock *sk, struct task_struct *task);
static inline u32 task_netprioidx(struct task_struct *p)
{
struct cgroup_netprio_state *state;
struct cgroup_subsys_state *css;
u32 idx;
rcu_read_lock();
state = container_of(task_subsys_state(p, net_prio_subsys_id),
struct cgroup_netprio_state, css);
idx = state->prioidx;
css = task_subsys_state(p, net_prio_subsys_id);
idx = css->cgroup->id;
rcu_read_unlock();
return idx;
}
@@ -57,8 +55,7 @@ static inline u32 task_netprioidx(struct task_struct *p)
rcu_read_lock();
css = task_subsys_state(p, net_prio_subsys_id);
if (css)
idx = container_of(css,
struct cgroup_netprio_state, css)->prioidx;
idx = css->cgroup->id;
rcu_read_unlock();
return idx;
}
......
@@ -1784,56 +1784,20 @@ static struct cftype files[] = {
};
/*
* post_clone() is called during cgroup_create() when the
* clone_children mount argument was specified. The cgroup
* can not yet have any tasks.
*
* Currently we refuse to set up the cgroup - thereby
* refusing the task to be entered, and as a result refusing
* the sys_unshare() or clone() which initiated it - if any
* sibling cpusets have exclusive cpus or mem.
*
* If this becomes a problem for some users who wish to
* allow that scenario, then cpuset_post_clone() could be
* changed to grant parent->cpus_allowed-sibling_cpus_exclusive
* (and likewise for mems) to the new cgroup. Called with cgroup_mutex
* held.
*/
static void cpuset_post_clone(struct cgroup *cgroup)
{
struct cgroup *parent, *child;
struct cpuset *cs, *parent_cs;
parent = cgroup->parent;
list_for_each_entry(child, &parent->children, sibling) {
cs = cgroup_cs(child);
if (is_mem_exclusive(cs) || is_cpu_exclusive(cs))
return;
}
cs = cgroup_cs(cgroup);
parent_cs = cgroup_cs(parent);
mutex_lock(&callback_mutex);
cs->mems_allowed = parent_cs->mems_allowed;
cpumask_copy(cs->cpus_allowed, parent_cs->cpus_allowed);
mutex_unlock(&callback_mutex);
return;
}
/*
* cpuset_create - create a cpuset
* cpuset_css_alloc - allocate a cpuset css
* cont: control group that the new cpuset will be part of
*/
static struct cgroup_subsys_state *cpuset_create(struct cgroup *cont)
static struct cgroup_subsys_state *cpuset_css_alloc(struct cgroup *cont)
{
struct cpuset *cs;
struct cpuset *parent;
struct cgroup *parent_cg = cont->parent;
struct cgroup *tmp_cg;
struct cpuset *parent, *cs;
if (!cont->parent) {
if (!parent_cg)
return &top_cpuset.css;
}
parent = cgroup_cs(cont->parent);
parent = cgroup_cs(parent_cg);
cs = kmalloc(sizeof(*cs), GFP_KERNEL);
if (!cs)
return ERR_PTR(-ENOMEM);
@@ -1855,7 +1819,36 @@ static struct cgroup_subsys_state *cpuset_create(struct cgroup *cont)
cs->parent = parent;
number_of_cpusets++;
return &cs->css ;
if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &cont->flags))
goto skip_clone;
/*
* Clone @parent's configuration if CGRP_CPUSET_CLONE_CHILDREN is
* set. This flag handling is implemented in cgroup core for
* historical reasons - the flag may be specified during mount.
*
* Currently, if any sibling cpusets have exclusive cpus or mem, we
* refuse to clone the configuration - thereby refusing the task to
* be entered, and as a result refusing the sys_unshare() or
* clone() which initiated it. If this becomes a problem for some
* users who wish to allow that scenario, then this could be
* changed to grant parent->cpus_allowed-sibling_cpus_exclusive
* (and likewise for mems) to the new cgroup.
*/
list_for_each_entry(tmp_cg, &parent_cg->children, sibling) {
struct cpuset *tmp_cs = cgroup_cs(tmp_cg);
if (is_mem_exclusive(tmp_cs) || is_cpu_exclusive(tmp_cs))
goto skip_clone;
}
mutex_lock(&callback_mutex);
cs->mems_allowed = parent->mems_allowed;
cpumask_copy(cs->cpus_allowed, parent->cpus_allowed);
mutex_unlock(&callback_mutex);
skip_clone:
return &cs->css;
}
/*
@@ -1864,7 +1857,7 @@ static struct cgroup_subsys_state *cpuset_create(struct cgroup *cont)
* will call async_rebuild_sched_domains().
*/
static void cpuset_destroy(struct cgroup *cont)
static void cpuset_css_free(struct cgroup *cont)
{
struct cpuset *cs = cgroup_cs(cont);
@@ -1878,11 +1871,10 @@ static void cpuset_destroy(struct cgroup *cont)
struct cgroup_subsys cpuset_subsys = {
.name = "cpuset",
.create = cpuset_create,
.destroy = cpuset_destroy,
.css_alloc = cpuset_css_alloc,
.css_free = cpuset_css_free,
.can_attach = cpuset_can_attach,
.attach = cpuset_attach,
.post_clone = cpuset_post_clone,
.subsys_id = cpuset_subsys_id,
.base_cftypes = files,
.early_init = 1,
......
@@ -7434,7 +7434,7 @@ static int __init perf_event_sysfs_init(void)
device_initcall(perf_event_sysfs_init);
#ifdef CONFIG_CGROUP_PERF
static struct cgroup_subsys_state *perf_cgroup_create(struct cgroup *cont)
static struct cgroup_subsys_state *perf_cgroup_css_alloc(struct cgroup *cont)
{
struct perf_cgroup *jc;
@@ -7451,7 +7451,7 @@ static struct cgroup_subsys_state *perf_cgroup_create(struct cgroup *cont)
return &jc->css;
}
static void perf_cgroup_destroy(struct cgroup *cont)
static void perf_cgroup_css_free(struct cgroup *cont)
{
struct perf_cgroup *jc;
jc = container_of(cgroup_subsys_state(cont, perf_subsys_id),
@@ -7492,8 +7492,8 @@ static void perf_cgroup_exit(struct cgroup *cgrp, struct cgroup *old_cgrp,
struct cgroup_subsys perf_subsys = {
.name = "perf_event",
.subsys_id = perf_subsys_id,
.create = perf_cgroup_create,
.destroy = perf_cgroup_destroy,
.css_alloc = perf_cgroup_css_alloc,
.css_free = perf_cgroup_css_free,
.exit = perf_cgroup_exit,
.attach = perf_cgroup_attach,
......
@@ -1137,7 +1137,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
{
int retval;
struct task_struct *p;
int cgroup_callbacks_done = 0;
if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS))
return ERR_PTR(-EINVAL);
@@ -1395,12 +1394,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
INIT_LIST_HEAD(&p->thread_group);
p->task_works = NULL;
/* Now that the task is set up, run cgroup callbacks if
* necessary. We need to run them before the task is visible
* on the tasklist. */
cgroup_fork_callbacks(p);
cgroup_callbacks_done = 1;
/* Need tasklist lock for parent etc handling! */
write_lock_irq(&tasklist_lock);
@@ -1505,7 +1498,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
#endif
if (clone_flags & CLONE_THREAD)
threadgroup_change_end(current);
cgroup_exit(p, cgroup_callbacks_done);
cgroup_exit(p, 0);
delayacct_tsk_free(p);
module_put(task_thread_info(p)->exec_domain->module);
bad_fork_cleanup_count:
......
@@ -116,17 +116,10 @@ bool freeze_task(struct task_struct *p)
return false;
}
if (!(p->flags & PF_KTHREAD)) {
if (!(p->flags & PF_KTHREAD))
fake_signal_wake_up(p);
/*
* fake_signal_wake_up() goes through p's scheduler
* lock and guarantees that TASK_STOPPED/TRACED ->
* TASK_RUNNING transition can't race with task state
* testing in try_to_freeze_tasks().
*/
} else {
else
wake_up_state(p, TASK_INTERRUPTIBLE);
}
spin_unlock_irqrestore(&freezer_lock, flags);
return true;
......
@@ -48,18 +48,7 @@ static int try_to_freeze_tasks(bool user_only)
if (p == current || !freeze_task(p))
continue;
/*
* Now that we've done set_freeze_flag, don't
* perturb a task in TASK_STOPPED or TASK_TRACED.
* It is "frozen enough". If the task does wake
* up, it will immediately call try_to_freeze.
*
* Because freeze_task() goes through p's scheduler lock, it's
* guaranteed that TASK_STOPPED/TRACED -> TASK_RUNNING
* transition can't race with task state testing here.
*/
if (!task_is_stopped_or_traced(p) &&
!freezer_should_skip(p))
if (!freezer_should_skip(p))
todo++;
} while_each_thread(g, p);
read_unlock(&tasklist_lock);
......
@@ -7484,7 +7484,7 @@ static inline struct task_group *cgroup_tg(struct cgroup *cgrp)
struct task_group, css);
}
static struct cgroup_subsys_state *cpu_cgroup_create(struct cgroup *cgrp)
static struct cgroup_subsys_state *cpu_cgroup_css_alloc(struct cgroup *cgrp)
{
struct task_group *tg, *parent;
@@ -7501,7 +7501,7 @@ static struct cgroup_subsys_state *cpu_cgroup_create(struct cgroup *cgrp)
return &tg->css;
}
static void cpu_cgroup_destroy(struct cgroup *cgrp)
static void cpu_cgroup_css_free(struct cgroup *cgrp)
{
struct task_group *tg = cgroup_tg(cgrp);
@@ -7861,8 +7861,8 @@ static struct cftype cpu_files[] = {
struct cgroup_subsys cpu_cgroup_subsys = {
.name = "cpu",
.create = cpu_cgroup_create,
.destroy = cpu_cgroup_destroy,
.css_alloc = cpu_cgroup_css_alloc,
.css_free = cpu_cgroup_css_free,
.can_attach = cpu_cgroup_can_attach,
.attach = cpu_cgroup_attach,
.exit = cpu_cgroup_exit,
@@ -7885,7 +7885,7 @@ struct cgroup_subsys cpu_cgroup_subsys = {
struct cpuacct root_cpuacct;
/* create a new cpu accounting group */
static struct cgroup_subsys_state *cpuacct_create(struct cgroup *cgrp)
static struct cgroup_subsys_state *cpuacct_css_alloc(struct cgroup *cgrp)
{
struct cpuacct *ca;
@@ -7915,7 +7915,7 @@ static struct cgroup_subsys_state *cpuacct_create(struct cgroup *cgrp)
}
/* destroy an existing cpu accounting group */
static void cpuacct_destroy(struct cgroup *cgrp)
static void cpuacct_css_free(struct cgroup *cgrp)
{
struct cpuacct *ca = cgroup_ca(cgrp);
@@ -8086,8 +8086,8 @@ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
struct cgroup_subsys cpuacct_subsys = {
.name = "cpuacct",
.create = cpuacct_create,
.destroy = cpuacct_destroy,
.css_alloc = cpuacct_css_alloc,
.css_free = cpuacct_css_free,
.subsys_id = cpuacct_subsys_id,
.base_cftypes = files,
};
......
@@ -1908,7 +1908,7 @@ static void ptrace_stop(int exit_code, int why, int clear_code, siginfo_t *info)
preempt_disable();
read_unlock(&tasklist_lock);
preempt_enable_no_resched();
schedule();
freezable_schedule();
} else {
/*
* By the time we got the lock, our tracer went away.
@@ -1929,13 +1929,6 @@ static void ptrace_stop(int exit_code, int why, int clear_code, siginfo_t *info)
read_unlock(&tasklist_lock);
}
/*
* While in TASK_TRACED, we were considered "frozen enough".
* Now that we woke up, it's crucial if we're supposed to be
* frozen that we freeze now before running anything substantial.
*/
try_to_freeze();
/*
* We are back. Now reacquire the siglock before touching
* last_siginfo, so that we are sure to have synchronized with
@@ -2092,7 +2085,7 @@ static bool do_signal_stop(int signr)
}
/* Now we don't run again until woken by SIGCONT or SIGKILL */
schedule();
freezable_schedule();
return true;
} else {
/*
@@ -2200,15 +2193,14 @@ int get_signal_to_deliver(siginfo_t *info, struct k_sigaction *return_ka,
if (unlikely(uprobe_deny_signal()))
return 0;
relock:
/*
* We'll jump back here after any time we were stopped in TASK_STOPPED.
* While in TASK_STOPPED, we were considered "frozen enough".
* Now that we woke up, it's crucial if we're supposed to be
* frozen that we freeze now before running anything substantial.
* Do this once, we can't return to user-mode if freezing() == T.
* do_signal_stop() and ptrace_stop() do freezable_schedule() and
* thus do not need another check after return.
*/
try_to_freeze();
relock:
spin_lock_irq(&sighand->siglock);
/*
* Every stopped thread goes here after wakeup. Check to see if
......
@@ -77,7 +77,7 @@ static inline bool hugetlb_cgroup_have_usage(struct cgroup *cg)
return false;
}
static struct cgroup_subsys_state *hugetlb_cgroup_create(struct cgroup *cgroup)
static struct cgroup_subsys_state *hugetlb_cgroup_css_alloc(struct cgroup *cgroup)
{
int idx;
struct cgroup *parent_cgroup;
@@ -101,7 +101,7 @@ static struct cgroup_subsys_state *hugetlb_cgroup_create(struct cgroup *cgroup)
return &h_cgroup->css;
}
static void hugetlb_cgroup_destroy(struct cgroup *cgroup)
static void hugetlb_cgroup_css_free(struct cgroup *cgroup)
{
struct hugetlb_cgroup *h_cgroup;
@@ -155,18 +155,13 @@ static void hugetlb_cgroup_move_parent(int idx, struct cgroup *cgroup,
* Force the hugetlb cgroup to empty the hugetlb resources by moving them to
* the parent cgroup.
*/
static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
static void hugetlb_cgroup_css_offline(struct cgroup *cgroup)
{
struct hstate *h;
struct page *page;
int ret = 0, idx = 0;
int idx = 0;
do {
if (cgroup_task_count(cgroup) ||
!list_empty(&cgroup->children)) {
ret = -EBUSY;
goto out;
}
for_each_hstate(h) {
spin_lock(&hugetlb_lock);
list_for_each_entry(page, &h->hugepage_activelist, lru)
@@ -177,8 +172,6 @@ static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
}
cond_resched();
} while (hugetlb_cgroup_have_usage(cgroup));
out:
return ret;
}
int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
@@ -411,8 +404,8 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
struct cgroup_subsys hugetlb_subsys = {
.name = "hugetlb",
.create = hugetlb_cgroup_create,
.pre_destroy = hugetlb_cgroup_pre_destroy,
.destroy = hugetlb_cgroup_destroy,
.subsys_id = hugetlb_subsys_id,
.css_alloc = hugetlb_cgroup_css_alloc,
.css_offline = hugetlb_cgroup_css_offline,
.css_free = hugetlb_cgroup_css_free,
.subsys_id = hugetlb_subsys_id,
};
@@ -27,11 +27,7 @@
#include <linux/fdtable.h>
#define PRIOIDX_SZ 128
static unsigned long prioidx_map[PRIOIDX_SZ];
static DEFINE_SPINLOCK(prioidx_map_lock);
static atomic_t max_prioidx = ATOMIC_INIT(0);
#define PRIOMAP_MIN_SZ 128
static inline struct cgroup_netprio_state *cgrp_netprio_state(struct cgroup *cgrp)
{
@@ -39,136 +35,157 @@ static inline struct cgroup_netprio_state *cgrp_netprio_state(struct cgroup *cgr
struct cgroup_netprio_state, css);
}
static int get_prioidx(u32 *prio)
{
unsigned long flags;
u32 prioidx;
spin_lock_irqsave(&prioidx_map_lock, flags);
prioidx = find_first_zero_bit(prioidx_map, sizeof(unsigned long) * PRIOIDX_SZ);
if (prioidx == sizeof(unsigned long) * PRIOIDX_SZ) {
spin_unlock_irqrestore(&prioidx_map_lock, flags);
return -ENOSPC;
}
set_bit(prioidx, prioidx_map);
if (atomic_read(&max_prioidx) < prioidx)
atomic_set(&max_prioidx, prioidx);
spin_unlock_irqrestore(&prioidx_map_lock, flags);
*prio = prioidx;
return 0;
}
static void put_prioidx(u32 idx)
/*
* Extend @dev->priomap so that it's large enough to accommodate
* @target_idx. @dev->priomap.priomap_len > @target_idx after successful
* return. Must be called under rtnl lock.
*/
static int extend_netdev_table(struct net_device *dev, u32 target_idx)
{
unsigned long flags;
spin_lock_irqsave(&prioidx_map_lock, flags);
clear_bit(idx, prioidx_map);
spin_unlock_irqrestore(&prioidx_map_lock, flags);
}
struct netprio_map *old, *new;
size_t new_sz, new_len;
static int extend_netdev_table(struct net_device *dev, u32 new_len)
{
size_t new_size = sizeof(struct netprio_map) +
((sizeof(u32) * new_len));
struct netprio_map *new_priomap = kzalloc(new_size, GFP_KERNEL);
struct netprio_map *old_priomap;
/* is the existing priomap large enough? */
old = rtnl_dereference(dev->priomap);
if (old && old->priomap_len > target_idx)
return 0;
old_priomap = rtnl_dereference(dev->priomap);
/*
* Determine the new size. Let's keep it power-of-two. We start
* from PRIOMAP_MIN_SZ and double it until it's large enough to
* accommodate @target_idx.
*/
new_sz = PRIOMAP_MIN_SZ;
while (true) {
new_len = (new_sz - offsetof(struct netprio_map, priomap)) /
sizeof(new->priomap[0]);
if (new_len > target_idx)
break;
new_sz *= 2;
/* overflowed? */
if (WARN_ON(new_sz < PRIOMAP_MIN_SZ))
return -ENOSPC;
}
if (!new_priomap) {
/* allocate & copy */
new = kzalloc(new_sz, GFP_KERNEL);
if (!new) {
pr_warn("Unable to alloc new priomap!\n");
return -ENOMEM;
}
if (old_priomap)
memcpy(new_priomap->priomap, old_priomap->priomap,
old_priomap->priomap_len *
sizeof(old_priomap->priomap[0]));
if (old)
memcpy(new->priomap, old->priomap,
old->priomap_len * sizeof(old->priomap[0]));
new_priomap->priomap_len = new_len;
new->priomap_len = new_len;
rcu_assign_pointer(dev->priomap, new_priomap);
if (old_priomap)
kfree_rcu(old_priomap, rcu);
/* install the new priomap */
rcu_assign_pointer(dev->priomap, new);
if (old)
kfree_rcu(old, rcu);
return 0;
}
static int write_update_netdev_table(struct net_device *dev)
/**
* netprio_prio - return the effective netprio of a cgroup-net_device pair
* @cgrp: cgroup part of the target pair
* @dev: net_device part of the target pair
*
* Should be called under RCU read or rtnl lock.
*/
static u32 netprio_prio(struct cgroup *cgrp, struct net_device *dev)
{
struct netprio_map *map = rcu_dereference_rtnl(dev->priomap);
if (map && cgrp->id < map->priomap_len)
return map->priomap[cgrp->id];
return 0;
}
/**
* netprio_set_prio - set netprio on a cgroup-net_device pair
* @cgrp: cgroup part of the target pair
* @dev: net_device part of the target pair
* @prio: prio to set
*
* Set netprio to @prio on @cgrp-@dev pair. Should be called under rtnl
* lock and may fail under memory pressure for non-zero @prio.
*/
static int netprio_set_prio(struct cgroup *cgrp, struct net_device *dev,
u32 prio)
{
int ret = 0;
u32 max_len;
struct netprio_map *map;
int ret;
max_len = atomic_read(&max_prioidx) + 1;
/* avoid extending priomap for zero writes */
map = rtnl_dereference(dev->priomap);
if (!map || map->priomap_len < max_len)
ret = extend_netdev_table(dev, max_len);
if (!prio && (!map || map->priomap_len <= cgrp->id))
return 0;
return ret;
ret = extend_netdev_table(dev, cgrp->id);
if (ret)
return ret;
map = rtnl_dereference(dev->priomap);
map->priomap[cgrp->id] = prio;
return 0;
}
static struct cgroup_subsys_state *cgrp_create(struct cgroup *cgrp)
static struct cgroup_subsys_state *cgrp_css_alloc(struct cgroup *cgrp)
{
struct cgroup_netprio_state *cs;
int ret = -EINVAL;
cs = kzalloc(sizeof(*cs), GFP_KERNEL);
if (!cs)
return ERR_PTR(-ENOMEM);
if (cgrp->parent && cgrp_netprio_state(cgrp->parent)->prioidx)
goto out;
ret = get_prioidx(&cs->prioidx);
if (ret < 0) {
pr_warn("No space in priority index array\n");
goto out;
}
return &cs->css;
out:
kfree(cs);
return ERR_PTR(ret);
}
static void cgrp_destroy(struct cgroup *cgrp)
static int cgrp_css_online(struct cgroup *cgrp)
{
struct cgroup_netprio_state *cs;
struct cgroup *parent = cgrp->parent;
struct net_device *dev;
struct netprio_map *map;
int ret = 0;
if (!parent)
return 0;
cs = cgrp_netprio_state(cgrp);
rtnl_lock();
/*
* Inherit prios from the parent. As all prios are set during
* onlining, there is no need to clear them on offline.
*/
for_each_netdev(&init_net, dev) {
map = rtnl_dereference(dev->priomap);
if (map && cs->prioidx < map->priomap_len)
map->priomap[cs->prioidx] = 0;
u32 prio = netprio_prio(parent, dev);
ret = netprio_set_prio(cgrp, dev, prio);
if (ret)
break;
}
rtnl_unlock();
put_prioidx(cs->prioidx);
kfree(cs);
return ret;
}
static void cgrp_css_free(struct cgroup *cgrp)
{
kfree(cgrp_netprio_state(cgrp));
}
static u64 read_prioidx(struct cgroup *cgrp, struct cftype *cft)
{
return (u64)cgrp_netprio_state(cgrp)->prioidx;
return cgrp->id;
}
static int read_priomap(struct cgroup *cont, struct cftype *cft,
struct cgroup_map_cb *cb)
{
struct net_device *dev;
u32 prioidx = cgrp_netprio_state(cont)->prioidx;
u32 priority;
struct netprio_map *map;
rcu_read_lock();
for_each_netdev_rcu(&init_net, dev) {
map = rcu_dereference(dev->priomap);
priority = (map && prioidx < map->priomap_len) ? map->priomap[prioidx] : 0;
cb->fill(cb, dev->name, priority);
}
for_each_netdev_rcu(&init_net, dev)
cb->fill(cb, dev->name, netprio_prio(cont, dev));
rcu_read_unlock();
return 0;
}
@@ -176,66 +193,24 @@ static int read_priomap(struct cgroup *cont, struct cftype *cft,
static int write_priomap(struct cgroup *cgrp, struct cftype *cft,
const char *buffer)
{
char *devname = kstrdup(buffer, GFP_KERNEL);
int ret = -EINVAL;
u32 prioidx = cgrp_netprio_state(cgrp)->prioidx;
unsigned long priority;
char *priostr;
char devname[IFNAMSIZ + 1];
struct net_device *dev;
struct netprio_map *map;
if (!devname)
return -ENOMEM;
/*
* Minimally sized valid priomap string
*/
if (strlen(devname) < 3)
goto out_free_devname;
priostr = strstr(devname, " ");
if (!priostr)
goto out_free_devname;
/*
*Separate the devname from the associated priority
*and advance the priostr pointer to the priority value
*/
*priostr = '\0';
priostr++;
/*
* If the priostr points to NULL, we're at the end of the passed
* in string, and its not a valid write
*/
if (*priostr == '\0')
goto out_free_devname;
ret = kstrtoul(priostr, 10, &priority);
if (ret < 0)
goto out_free_devname;
u32 prio;
int ret;
ret = -ENODEV;
if (sscanf(buffer, "%"__stringify(IFNAMSIZ)"s %u", devname, &prio) != 2)
return -EINVAL;
dev = dev_get_by_name(&init_net, devname);
if (!dev)
goto out_free_devname;
return -ENODEV;
rtnl_lock();
ret = write_update_netdev_table(dev);
if (ret < 0)
goto out_put_dev;
map = rtnl_dereference(dev->priomap);
if (map)
map->priomap[prioidx] = priority;
ret = netprio_set_prio(cgrp, dev, prio);
out_put_dev:
rtnl_unlock();
dev_put(dev);
out_free_devname:
kfree(devname);
return ret;
}
@@ -276,22 +251,13 @@ static struct cftype ss_files[] = {
struct cgroup_subsys net_prio_subsys = {
.name = "net_prio",
.create = cgrp_create,
.destroy = cgrp_destroy,
.css_alloc = cgrp_css_alloc,
.css_online = cgrp_css_online,
.css_free = cgrp_css_free,
.attach = net_prio_attach,
.subsys_id = net_prio_subsys_id,
.base_cftypes = ss_files,
.module = THIS_MODULE,
/*
* net_prio has artificial limit on the number of cgroups and
* disallows nesting making it impossible to co-mount it with other
* hierarchical subsystems. Remove the artificially low PRIOIDX_SZ
* limit and properly nest configuration such that children follow
* their parents' configurations by default and are allowed to
* override and remove the following.
*/
.broken_hierarchy = true,
};
static int netprio_device_event(struct notifier_block *unused,
......
@@ -34,21 +34,25 @@ static inline struct cgroup_cls_state *task_cls_state(struct task_struct *p)
struct cgroup_cls_state, css);
}
static struct cgroup_subsys_state *cgrp_create(struct cgroup *cgrp)
static struct cgroup_subsys_state *cgrp_css_alloc(struct cgroup *cgrp)
{
struct cgroup_cls_state *cs;
cs = kzalloc(sizeof(*cs), GFP_KERNEL);
if (!cs)
return ERR_PTR(-ENOMEM);
return &cs->css;
}
static int cgrp_css_online(struct cgroup *cgrp)
{
if (cgrp->parent)
cs->classid = cgrp_cls_state(cgrp->parent)->classid;
return &cs->css;
cgrp_cls_state(cgrp)->classid =
cgrp_cls_state(cgrp->parent)->classid;
return 0;
}
static void cgrp_destroy(struct cgroup *cgrp)
static void cgrp_css_free(struct cgroup *cgrp)
{
kfree(cgrp_cls_state(cgrp));
}
@@ -75,20 +79,12 @@ static struct cftype ss_files[] = {
struct cgroup_subsys net_cls_subsys = {
.name = "net_cls",
.create = cgrp_create,
.destroy = cgrp_destroy,
.css_alloc = cgrp_css_alloc,
.css_online = cgrp_css_online,
.css_free = cgrp_css_free,
.subsys_id = net_cls_subsys_id,
.base_cftypes = ss_files,
.module = THIS_MODULE,
/*
* While net_cls cgroup has the rudimentary hierarchy support of
* inheriting the parent's classid on cgroup creation, it doesn't
* properly propagate config changes in ancestors to their
* descendants. A child should follow the parent's configuration
* but be allowed to override it. Fix it and remove the following.
*/
.broken_hierarchy = true,
};
struct cls_cgroup_head {
......
@@ -82,6 +82,8 @@ static int dev_exceptions_copy(struct list_head *dest, struct list_head *orig)
{
struct dev_exception_item *ex, *tmp, *new;
lockdep_assert_held(&devcgroup_mutex);
list_for_each_entry(ex, orig, list) {
new = kmemdup(ex, sizeof(*ex), GFP_KERNEL);
if (!new)
@@ -107,6 +109,8 @@ static int dev_exception_add(struct dev_cgroup *dev_cgroup,
{
struct dev_exception_item *excopy, *walk;
lockdep_assert_held(&devcgroup_mutex);
excopy = kmemdup(ex, sizeof(*ex), GFP_KERNEL);
if (!excopy)
return -ENOMEM;
@@ -137,6 +141,8 @@ static void dev_exception_rm(struct dev_cgroup *dev_cgroup,
{
struct dev_exception_item *walk, *tmp;
lockdep_assert_held(&devcgroup_mutex);
list_for_each_entry_safe(walk, tmp, &dev_cgroup->exceptions, list) {
if (walk->type != ex->type)
continue;
@@ -163,6 +169,8 @@ static void dev_exception_clean(struct dev_cgroup *dev_cgroup)
{
struct dev_exception_item *ex, *tmp;
lockdep_assert_held(&devcgroup_mutex);
list_for_each_entry_safe(ex, tmp, &dev_cgroup->exceptions, list) {
list_del_rcu(&ex->list);
kfree_rcu(ex, rcu);
@@ -172,7 +180,7 @@ static void dev_exception_clean(struct dev_cgroup *dev_cgroup)
/*
* called from kernel/cgroup.c with cgroup_lock() held.
*/
static struct cgroup_subsys_state *devcgroup_create(struct cgroup *cgroup)
static struct cgroup_subsys_state *devcgroup_css_alloc(struct cgroup *cgroup)
{
struct dev_cgroup *dev_cgroup, *parent_dev_cgroup;
struct cgroup *parent_cgroup;
@@ -202,7 +210,7 @@ static struct cgroup_subsys_state *devcgroup_create(struct cgroup *cgroup)
return &dev_cgroup->css;
}
static void devcgroup_destroy(struct cgroup *cgroup)
static void devcgroup_css_free(struct cgroup *cgroup)
{
struct dev_cgroup *dev_cgroup;
@@ -298,6 +306,10 @@ static int may_access(struct dev_cgroup *dev_cgroup,
struct dev_exception_item *ex;
bool match = false;
rcu_lockdep_assert(rcu_read_lock_held() ||
lockdep_is_held(&devcgroup_mutex),
"device_cgroup::may_access() called without proper synchronization");
list_for_each_entry_rcu(ex, &dev_cgroup->exceptions, list) {
if ((refex->type & DEV_BLOCK) && !(ex->type & DEV_BLOCK))
continue;
@@ -552,8 +564,8 @@ static struct cftype dev_cgroup_files[] = {
struct cgroup_subsys devices_subsys = {
.name = "devices",
.can_attach = devcgroup_can_attach,
.create = devcgroup_create,
.destroy = devcgroup_destroy,
.css_alloc = devcgroup_css_alloc,
.css_free = devcgroup_css_free,
.subsys_id = devices_subsys_id,
.base_cftypes = dev_cgroup_files,
......