Commit 14208b0e authored by Linus Torvalds

Merge branch 'for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup

Pull cgroup updates from Tejun Heo:
 "A lot of activities on cgroup side.  Heavy restructuring including
  locking simplification took place to improve the code base and enable
  implementation of the unified hierarchy, which currently exists behind
  a __DEVEL__ mount option.  The core support is mostly complete but
  individual controllers need further work.  To explain the design and
  rationales of the unified hierarchy

        Documentation/cgroups/unified-hierarchy.txt

  is added.

  Another notable change is css (cgroup_subsys_state - what each
  controller uses to identify and interact with a cgroup) iteration
  update.  This is part of ongoing work on css object lifetime and
  visibility.  cgroup started out with reference count draining on
  removal way back and is now reaching the point where csses behave and
  are iterated like normal refcounted objects, albeit with some added
  complexity to distinguish csses that are being deleted.  The css
  iteration update isn't taken advantage of yet but is planned to be
  used to simplify memcg significantly"
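
The conversions to css_tryget_online() in the freezer, cpuset and hugetlb
hunks below all follow the same pin-and-walk shape.  A minimal, hedged
sketch of that pattern (walk_online_csses() and process_css() are
illustrative names, not functions from this merge):

#include <linux/cgroup.h>
#include <linux/rcupdate.h>

/* Placeholder for whatever per-css work a controller needs to do. */
void process_css(struct cgroup_subsys_state *css);

/* Hedged sketch of the css iteration pattern this merge moves towards:
 * walk a css subtree under RCU, pin each css with css_tryget_online()
 * so that offline (dying) csses are skipped, drop the RCU lock to do
 * work that may sleep, then put the reference and continue the walk.
 */
static void walk_online_csses(struct cgroup_subsys_state *root)
{
	struct cgroup_subsys_state *pos;

	rcu_read_lock();
	css_for_each_descendant_pre(pos, root) {
		if (!css_tryget_online(pos))	/* css already offline, skip */
			continue;
		rcu_read_unlock();

		process_css(pos);		/* may sleep */

		rcu_read_lock();
		css_put(pos);
	}
	rcu_read_unlock();
}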

* 'for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (77 commits)
  cgroup: disallow disabled controllers on the default hierarchy
  cgroup: don't destroy the default root
  cgroup: disallow debug controller on the default hierarchy
  cgroup: clean up MAINTAINERS entries
  cgroup: implement css_tryget()
  device_cgroup: use css_has_online_children() instead of has_children()
  cgroup: convert cgroup_has_live_children() into css_has_online_children()
  cgroup: use CSS_ONLINE instead of CGRP_DEAD
  cgroup: iterate cgroup_subsys_states directly
  cgroup: introduce CSS_RELEASED and reduce css iteration fallback window
  cgroup: move cgroup->serial_nr into cgroup_subsys_state
  cgroup: link all cgroup_subsys_states in their sibling lists
  cgroup: move cgroup->sibling and ->children into cgroup_subsys_state
  cgroup: remove cgroup->parent
  device_cgroup: remove direct access to cgroup->children
  memcg: update memcg_has_children() to use css_next_child()
  memcg: remove tasks/children test from mem_cgroup_force_empty()
  cgroup: remove css_parent()
  cgroup: skip refcnting on normal root csses and cgrp_dfl_root self css
  cgroup: use cgroup->self.refcnt for cgroup refcnting
  ...
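
For orientation, the __DEVEL__ mount option mentioned in the pull message
is how the unified hierarchy is reached from userspace at this stage.  A
hedged sketch only: the full option name (__DEVEL__sane_behavior) and the
mount point are assumptions, not spelled out in this commit.

/* Hedged userspace sketch: mount the development-only unified hierarchy.
 * Roughly equivalent to:
 *   mount -t cgroup -o __DEVEL__sane_behavior none /mnt/cgroup
 * Both the option name and /mnt/cgroup are assumptions for illustration.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	if (mount("none", "/mnt/cgroup", "cgroup", 0,
		  "__DEVEL__sane_behavior") < 0) {
		perror("mount cgroup");
		return 1;
	}
	return 0;
}
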
parents 6ea4fa70 c731ae1d
......@@ -458,15 +458,11 @@ About use_hierarchy, see Section 6.
5.1 force_empty
memory.force_empty interface is provided to make cgroup's memory usage empty.
You can use this interface only when the cgroup has no tasks.
When writing anything to this
# echo 0 > memory.force_empty
Almost all pages tracked by this memory cgroup will be unmapped and freed.
Some pages cannot be freed because they are locked or in-use. Such pages are
moved to parent (if use_hierarchy==1) or root (if use_hierarchy==0) and this
cgroup will be empty.
the cgroup will be reclaimed and as many pages reclaimed as possible.
The typical use case for this interface is before calling rmdir().
Because rmdir() moves all pages to parent, some out-of-use page caches can be
......
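
The reworded documentation above still describes the same usage: write
anything to memory.force_empty, then remove the group.  A hedged userspace
sketch of that sequence; the cgroup path is a made-up example:

/* Hedged sketch: empty a memory cgroup, then remove it.
 * /sys/fs/cgroup/memory/example is a hypothetical group path.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/fs/cgroup/memory/example/memory.force_empty",
		      O_WRONLY);

	if (fd < 0 || write(fd, "0", 1) < 0) {	/* any write triggers reclaim */
		perror("memory.force_empty");
		return 1;
	}
	close(fd);

	if (rmdir("/sys/fs/cgroup/memory/example") < 0) {
		perror("rmdir");
		return 1;
	}
	return 0;
}
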
......@@ -2384,16 +2384,35 @@ L: netdev@vger.kernel.org
S: Maintained
F: drivers/connector/
CONTROL GROUPS (CGROUPS)
CONTROL GROUP (CGROUP)
M: Tejun Heo <tj@kernel.org>
M: Li Zefan <lizefan@huawei.com>
L: containers@lists.linux-foundation.org
L: cgroups@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
S: Maintained
F: Documentation/cgroups/
F: include/linux/cgroup*
F: kernel/cgroup*
F: mm/*cgroup*
CONTROL GROUP - CPUSET
M: Li Zefan <lizefan@huawei.com>
L: cgroups@vger.kernel.org
W: http://www.bullopensource.org/cpuset/
W: http://oss.sgi.com/projects/cpusets/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
S: Maintained
F: Documentation/cgroups/cpusets.txt
F: include/linux/cpuset.h
F: kernel/cpuset.c
CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
M: Johannes Weiner <hannes@cmpxchg.org>
M: Michal Hocko <mhocko@suse.cz>
L: cgroups@vger.kernel.org
L: linux-mm@kvack.org
S: Maintained
F: mm/memcontrol.c
F: mm/page_cgroup.c
CORETEMP HARDWARE MONITORING DRIVER
M: Fenghua Yu <fenghua.yu@intel.com>
......@@ -2464,17 +2483,6 @@ M: Thomas Renninger <trenn@suse.de>
S: Maintained
F: tools/power/cpupower/
CPUSETS
M: Li Zefan <lizefan@huawei.com>
L: cgroups@vger.kernel.org
W: http://www.bullopensource.org/cpuset/
W: http://oss.sgi.com/projects/cpusets/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
S: Maintained
F: Documentation/cgroups/cpusets.txt
F: include/linux/cpuset.h
F: kernel/cpuset.c
CRAMFS FILESYSTEM
W: http://sourceforge.net/projects/cramfs/
S: Orphan / Obsolete
......@@ -5757,17 +5765,6 @@ F: include/linux/memory_hotplug.h
F: include/linux/vmalloc.h
F: mm/
MEMORY RESOURCE CONTROLLER
M: Johannes Weiner <hannes@cmpxchg.org>
M: Michal Hocko <mhocko@suse.cz>
M: Balbir Singh <bsingharora@gmail.com>
M: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
L: cgroups@vger.kernel.org
L: linux-mm@kvack.org
S: Maintained
F: mm/memcontrol.c
F: mm/page_cgroup.c
MEMORY TECHNOLOGY DEVICES (MTD)
M: David Woodhouse <dwmw2@infradead.org>
M: Brian Norris <computersforpeace@gmail.com>
......
......@@ -1971,7 +1971,7 @@ int bio_associate_current(struct bio *bio)
/* associate blkcg if exists */
rcu_read_lock();
css = task_css(current, blkio_cgrp_id);
if (css && css_tryget(css))
if (css && css_tryget_online(css))
bio->bi_css = css;
rcu_read_unlock();
......
......@@ -185,7 +185,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
lockdep_assert_held(q->queue_lock);
/* blkg holds a reference to blkcg */
if (!css_tryget(&blkcg->css)) {
if (!css_tryget_online(&blkcg->css)) {
ret = -EINVAL;
goto err_free_blkg;
}
......
......@@ -204,7 +204,7 @@ static inline struct blkcg *bio_blkcg(struct bio *bio)
*/
static inline struct blkcg *blkcg_parent(struct blkcg *blkcg)
{
return css_to_blkcg(css_parent(&blkcg->css));
return css_to_blkcg(blkcg->css.parent);
}
/**
......
......@@ -1346,10 +1346,10 @@ static int tg_print_conf_uint(struct seq_file *sf, void *v)
return 0;
}
static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft,
const char *buf, bool is_u64)
static ssize_t tg_set_conf(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off, bool is_u64)
{
struct blkcg *blkcg = css_to_blkcg(css);
struct blkcg *blkcg = css_to_blkcg(of_css(of));
struct blkg_conf_ctx ctx;
struct throtl_grp *tg;
struct throtl_service_queue *sq;
......@@ -1368,9 +1368,9 @@ static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft,
ctx.v = -1;
if (is_u64)
*(u64 *)((void *)tg + cft->private) = ctx.v;
*(u64 *)((void *)tg + of_cft(of)->private) = ctx.v;
else
*(unsigned int *)((void *)tg + cft->private) = ctx.v;
*(unsigned int *)((void *)tg + of_cft(of)->private) = ctx.v;
throtl_log(&tg->service_queue,
"limit change rbps=%llu wbps=%llu riops=%u wiops=%u",
......@@ -1404,19 +1404,19 @@ static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft,
}
blkg_conf_finish(&ctx);
return 0;
return nbytes;
}
static int tg_set_conf_u64(struct cgroup_subsys_state *css, struct cftype *cft,
char *buf)
static ssize_t tg_set_conf_u64(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
return tg_set_conf(css, cft, buf, true);
return tg_set_conf(of, buf, nbytes, off, true);
}
static int tg_set_conf_uint(struct cgroup_subsys_state *css, struct cftype *cft,
char *buf)
static ssize_t tg_set_conf_uint(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
return tg_set_conf(css, cft, buf, false);
return tg_set_conf(of, buf, nbytes, off, false);
}
static struct cftype throtl_files[] = {
......@@ -1424,25 +1424,25 @@ static struct cftype throtl_files[] = {
.name = "throttle.read_bps_device",
.private = offsetof(struct throtl_grp, bps[READ]),
.seq_show = tg_print_conf_u64,
.write_string = tg_set_conf_u64,
.write = tg_set_conf_u64,
},
{
.name = "throttle.write_bps_device",
.private = offsetof(struct throtl_grp, bps[WRITE]),
.seq_show = tg_print_conf_u64,
.write_string = tg_set_conf_u64,
.write = tg_set_conf_u64,
},
{
.name = "throttle.read_iops_device",
.private = offsetof(struct throtl_grp, iops[READ]),
.seq_show = tg_print_conf_uint,
.write_string = tg_set_conf_uint,
.write = tg_set_conf_uint,
},
{
.name = "throttle.write_iops_device",
.private = offsetof(struct throtl_grp, iops[WRITE]),
.seq_show = tg_print_conf_uint,
.write_string = tg_set_conf_uint,
.write = tg_set_conf_uint,
},
{
.name = "throttle.io_service_bytes",
......
......@@ -1670,11 +1670,11 @@ static int cfq_print_leaf_weight(struct seq_file *sf, void *v)
return 0;
}
static int __cfqg_set_weight_device(struct cgroup_subsys_state *css,
struct cftype *cft, const char *buf,
static ssize_t __cfqg_set_weight_device(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off,
bool is_leaf_weight)
{
struct blkcg *blkcg = css_to_blkcg(css);
struct blkcg *blkcg = css_to_blkcg(of_css(of));
struct blkg_conf_ctx ctx;
struct cfq_group *cfqg;
int ret;
......@@ -1697,19 +1697,19 @@ static int __cfqg_set_weight_device(struct cgroup_subsys_state *css,
}
blkg_conf_finish(&ctx);
return ret;
return ret ?: nbytes;
}
static int cfqg_set_weight_device(struct cgroup_subsys_state *css,
struct cftype *cft, char *buf)
static ssize_t cfqg_set_weight_device(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
return __cfqg_set_weight_device(css, cft, buf, false);
return __cfqg_set_weight_device(of, buf, nbytes, off, false);
}
static int cfqg_set_leaf_weight_device(struct cgroup_subsys_state *css,
struct cftype *cft, char *buf)
static ssize_t cfqg_set_leaf_weight_device(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
return __cfqg_set_weight_device(css, cft, buf, true);
return __cfqg_set_weight_device(of, buf, nbytes, off, true);
}
static int __cfq_set_weight(struct cgroup_subsys_state *css, struct cftype *cft,
......@@ -1837,7 +1837,7 @@ static struct cftype cfq_blkcg_files[] = {
.name = "weight_device",
.flags = CFTYPE_ONLY_ON_ROOT,
.seq_show = cfqg_print_leaf_weight_device,
.write_string = cfqg_set_leaf_weight_device,
.write = cfqg_set_leaf_weight_device,
},
{
.name = "weight",
......@@ -1851,7 +1851,7 @@ static struct cftype cfq_blkcg_files[] = {
.name = "weight_device",
.flags = CFTYPE_NOT_ON_ROOT,
.seq_show = cfqg_print_weight_device,
.write_string = cfqg_set_weight_device,
.write = cfqg_set_weight_device,
},
{
.name = "weight",
......@@ -1863,7 +1863,7 @@ static struct cftype cfq_blkcg_files[] = {
{
.name = "leaf_weight_device",
.seq_show = cfqg_print_leaf_weight_device,
.write_string = cfqg_set_leaf_weight_device,
.write = cfqg_set_leaf_weight_device,
},
{
.name = "leaf_weight",
......
......@@ -7,10 +7,6 @@
SUBSYS(cpuset)
#endif
#if IS_ENABLED(CONFIG_CGROUP_DEBUG)
SUBSYS(debug)
#endif
#if IS_ENABLED(CONFIG_CGROUP_SCHED)
SUBSYS(cpu)
#endif
......@@ -50,6 +46,13 @@ SUBSYS(net_prio)
#if IS_ENABLED(CONFIG_CGROUP_HUGETLB)
SUBSYS(hugetlb)
#endif
/*
* The following subsystems are not supported on the default hierarchy.
*/
#if IS_ENABLED(CONFIG_CGROUP_DEBUG)
SUBSYS(debug)
#endif
/*
* DO NOT ADD ANY SUBSYSTEM WITHOUT EXPLICIT ACKS FROM CGROUP MAINTAINERS.
*/
......@@ -59,7 +59,7 @@ static inline struct freezer *task_freezer(struct task_struct *task)
static struct freezer *parent_freezer(struct freezer *freezer)
{
return css_freezer(css_parent(&freezer->css));
return css_freezer(freezer->css.parent);
}
bool cgroup_freezing(struct task_struct *task)
......@@ -73,10 +73,6 @@ bool cgroup_freezing(struct task_struct *task)
return ret;
}
/*
* cgroups_write_string() limits the size of freezer state strings to
* CGROUP_LOCAL_BUFFER_SIZE
*/
static const char *freezer_state_strs(unsigned int state)
{
if (state & CGROUP_FROZEN)
......@@ -304,7 +300,7 @@ static int freezer_read(struct seq_file *m, void *v)
/* update states bottom-up */
css_for_each_descendant_post(pos, css) {
if (!css_tryget(pos))
if (!css_tryget_online(pos))
continue;
rcu_read_unlock();
......@@ -404,7 +400,7 @@ static void freezer_change_state(struct freezer *freezer, bool freeze)
struct freezer *pos_f = css_freezer(pos);
struct freezer *parent = parent_freezer(pos_f);
if (!css_tryget(pos))
if (!css_tryget_online(pos))
continue;
rcu_read_unlock();
......@@ -423,20 +419,22 @@ static void freezer_change_state(struct freezer *freezer, bool freeze)
mutex_unlock(&freezer_mutex);
}
static int freezer_write(struct cgroup_subsys_state *css, struct cftype *cft,
char *buffer)
static ssize_t freezer_write(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
bool freeze;
if (strcmp(buffer, freezer_state_strs(0)) == 0)
buf = strstrip(buf);
if (strcmp(buf, freezer_state_strs(0)) == 0)
freeze = false;
else if (strcmp(buffer, freezer_state_strs(CGROUP_FROZEN)) == 0)
else if (strcmp(buf, freezer_state_strs(CGROUP_FROZEN)) == 0)
freeze = true;
else
return -EINVAL;
freezer_change_state(css_freezer(css), freeze);
return 0;
freezer_change_state(css_freezer(of_css(of)), freeze);
return nbytes;
}
static u64 freezer_self_freezing_read(struct cgroup_subsys_state *css,
......@@ -460,7 +458,7 @@ static struct cftype files[] = {
.name = "state",
.flags = CFTYPE_NOT_ON_ROOT,
.seq_show = freezer_read,
.write_string = freezer_write,
.write = freezer_write,
},
{
.name = "self_freezing",
......
......@@ -119,7 +119,7 @@ static inline struct cpuset *task_cs(struct task_struct *task)
static inline struct cpuset *parent_cs(struct cpuset *cs)
{
return css_cs(css_parent(&cs->css));
return css_cs(cs->css.parent);
}
#ifdef CONFIG_NUMA
......@@ -691,10 +691,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
if (nslot == ndoms) {
static int warnings = 10;
if (warnings) {
printk(KERN_WARNING
"rebuild_sched_domains confused:"
" nslot %d, ndoms %d, csn %d, i %d,"
" apn %d\n",
pr_warn("rebuild_sched_domains confused: nslot %d, ndoms %d, csn %d, i %d, apn %d\n",
nslot, ndoms, csn, i, apn);
warnings--;
}
......@@ -870,7 +867,7 @@ static void update_tasks_cpumask_hier(struct cpuset *root_cs, bool update_root)
continue;
}
}
if (!css_tryget(&cp->css))
if (!css_tryget_online(&cp->css))
continue;
rcu_read_unlock();
......@@ -885,6 +882,7 @@ static void update_tasks_cpumask_hier(struct cpuset *root_cs, bool update_root)
/**
* update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it
* @cs: the cpuset to consider
* @trialcs: trial cpuset
* @buf: buffer of cpu numbers written to this cpuset
*/
static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
......@@ -1105,7 +1103,7 @@ static void update_tasks_nodemask_hier(struct cpuset *root_cs, bool update_root)
continue;
}
}
if (!css_tryget(&cp->css))
if (!css_tryget_online(&cp->css))
continue;
rcu_read_unlock();
......@@ -1600,13 +1598,15 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
/*
* Common handling for a write to a "cpus" or "mems" file.
*/
static int cpuset_write_resmask(struct cgroup_subsys_state *css,
struct cftype *cft, char *buf)
static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
struct cpuset *cs = css_cs(css);
struct cpuset *cs = css_cs(of_css(of));
struct cpuset *trialcs;
int retval = -ENODEV;
buf = strstrip(buf);
/*
* CPU or memory hotunplug may leave @cs w/o any execution
* resources, in which case the hotplug code asynchronously updates
......@@ -1630,7 +1630,7 @@ static int cpuset_write_resmask(struct cgroup_subsys_state *css,
goto out_unlock;
}
switch (cft->private) {
switch (of_cft(of)->private) {
case FILE_CPULIST:
retval = update_cpumask(cs, trialcs, buf);
break;
......@@ -1645,7 +1645,7 @@ static int cpuset_write_resmask(struct cgroup_subsys_state *css,
free_trial_cpuset(trialcs);
out_unlock:
mutex_unlock(&cpuset_mutex);
return retval;
return retval ?: nbytes;
}
/*
......@@ -1747,7 +1747,7 @@ static struct cftype files[] = {
{
.name = "cpus",
.seq_show = cpuset_common_seq_show,
.write_string = cpuset_write_resmask,
.write = cpuset_write_resmask,
.max_write_len = (100U + 6 * NR_CPUS),
.private = FILE_CPULIST,
},
......@@ -1755,7 +1755,7 @@ static struct cftype files[] = {
{
.name = "mems",
.seq_show = cpuset_common_seq_show,
.write_string = cpuset_write_resmask,
.write = cpuset_write_resmask,
.max_write_len = (100U + 6 * MAX_NUMNODES),
.private = FILE_MEMLIST,
},
......@@ -2011,7 +2011,7 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs)
parent = parent_cs(parent);
if (cgroup_transfer_tasks(parent->css.cgroup, cs->css.cgroup)) {
printk(KERN_ERR "cpuset: failed to transfer tasks out of empty cpuset ");
pr_err("cpuset: failed to transfer tasks out of empty cpuset ");
pr_cont_cgroup_name(cs->css.cgroup);
pr_cont("\n");
}
......@@ -2149,7 +2149,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
rcu_read_lock();
cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
if (cs == &top_cpuset || !css_tryget(&cs->css))
if (cs == &top_cpuset || !css_tryget_online(&cs->css))
continue;
rcu_read_unlock();
......@@ -2530,7 +2530,7 @@ int cpuset_mems_allowed_intersects(const struct task_struct *tsk1,
/**
* cpuset_print_task_mems_allowed - prints task's cpuset and mems_allowed
* @task: pointer to task_struct of some task.
* @tsk: pointer to task_struct of some task.
*
* Description: Prints @task's name, cpuset name, and cached copy of its
* mems_allowed to the kernel log.
......@@ -2548,7 +2548,7 @@ void cpuset_print_task_mems_allowed(struct task_struct *tsk)
cgrp = task_cs(tsk)->css.cgroup;
nodelist_scnprintf(cpuset_nodelist, CPUSET_NODELIST_LEN,
tsk->mems_allowed);
printk(KERN_INFO "%s cpuset=", tsk->comm);
pr_info("%s cpuset=", tsk->comm);
pr_cont_cgroup_name(cgrp);
pr_cont(" mems_allowed=%s\n", cpuset_nodelist);
......@@ -2640,10 +2640,10 @@ int proc_cpuset_show(struct seq_file *m, void *unused_v)
/* Display task mems_allowed in /proc/<pid>/status file. */
void cpuset_task_status_allowed(struct seq_file *m, struct task_struct *task)
{
seq_printf(m, "Mems_allowed:\t");
seq_puts(m, "Mems_allowed:\t");
seq_nodemask(m, &task->mems_allowed);
seq_printf(m, "\n");
seq_printf(m, "Mems_allowed_list:\t");
seq_puts(m, "\n");
seq_puts(m, "Mems_allowed_list:\t");
seq_nodemask_list(m, &task->mems_allowed);
seq_printf(m, "\n");
seq_puts(m, "\n");
}
......@@ -608,7 +608,8 @@ static inline int perf_cgroup_connect(int fd, struct perf_event *event,
if (!f.file)
return -EBADF;
css = css_tryget_from_dir(f.file->f_dentry, &perf_event_cgrp_subsys);
css = css_tryget_online_from_dir(f.file->f_dentry,
&perf_event_cgrp_subsys);
if (IS_ERR(css)) {
ret = PTR_ERR(css);
goto out;
......
......@@ -7669,7 +7669,7 @@ cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
{
struct task_group *tg = css_tg(css);
struct task_group *parent = css_tg(css_parent(css));
struct task_group *parent = css_tg(css->parent);
if (parent)
sched_online_group(tg, parent);
......
......@@ -46,7 +46,7 @@ static inline struct cpuacct *task_ca(struct task_struct *tsk)
static inline struct cpuacct *parent_ca(struct cpuacct *ca)
{
return css_ca(css_parent(&ca->css));
return css_ca(ca->css.parent);
}
static DEFINE_PER_CPU(u64, root_cpuacct_cpuusage);
......
......@@ -52,7 +52,7 @@ static inline bool hugetlb_cgroup_is_root(struct hugetlb_cgroup *h_cg)
static inline struct hugetlb_cgroup *
parent_hugetlb_cgroup(struct hugetlb_cgroup *h_cg)
{
return hugetlb_cgroup_from_css(css_parent(&h_cg->css));
return hugetlb_cgroup_from_css(h_cg->css.parent);
}
static inline bool hugetlb_cgroup_have_usage(struct hugetlb_cgroup *h_cg)
......@@ -181,7 +181,7 @@ int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
again:
rcu_read_lock();
h_cg = hugetlb_cgroup_from_task(current);
if (!css_tryget(&h_cg->css)) {
if (!css_tryget_online(&h_cg->css)) {
rcu_read_unlock();
goto again;
}
......@@ -253,15 +253,16 @@ static u64 hugetlb_cgroup_read_u64(struct cgroup_subsys_state *css,
return res_counter_read_u64(&h_cg->hugepage[idx], name);
}
static int hugetlb_cgroup_write(struct cgroup_subsys_state *css,
struct cftype *cft, char *buffer)
static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
int idx, name, ret;
unsigned long long val;
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(css);
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(of_css(of));
idx = MEMFILE_IDX(cft->private);
name = MEMFILE_ATTR(cft->private);
buf = strstrip(buf);
idx = MEMFILE_IDX(of_cft(of)->private);
name = MEMFILE_ATTR(of_cft(of)->private);
switch (name) {
case RES_LIMIT:
......@@ -271,7 +272,7 @@ static int hugetlb_cgroup_write(struct cgroup_subsys_state *css,
break;
}
/* This function does all necessary parse...reuse it */
ret = res_counter_memparse_write_strategy(buffer, &val);
ret = res_counter_memparse_write_strategy(buf, &val);
if (ret)
break;
ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
......@@ -280,17 +281,17 @@ static int hugetlb_cgroup_write(struct cgroup_subsys_state *css,
ret = -EINVAL;
break;
}
return ret;
return ret ?: nbytes;
}
static int hugetlb_cgroup_reset(struct cgroup_subsys_state *css,
unsigned int event)
static ssize_t hugetlb_cgroup_reset(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
int idx, name, ret = 0;
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(css);
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(of_css(of));
idx = MEMFILE_IDX(event);
name = MEMFILE_ATTR(event);
idx = MEMFILE_IDX(of_cft(of)->private);
name = MEMFILE_ATTR(of_cft(of)->private);
switch (name) {
case RES_MAX_USAGE:
......@@ -303,7 +304,7 @@ static int hugetlb_cgroup_reset(struct cgroup_subsys_state *css,
ret = -EINVAL;
break;
}
return ret;
return ret ?: nbytes;
}
static char *mem_fmt(char *buf, int size, unsigned long hsize)
......@@ -331,7 +332,7 @@ static void __init __hugetlb_cgroup_file_init(int idx)
snprintf(cft->name, MAX_CFTYPE_NAME, "%s.limit_in_bytes", buf);
cft->private = MEMFILE_PRIVATE(idx, RES_LIMIT);
cft->read_u64 = hugetlb_cgroup_read_u64;
cft->write_string = hugetlb_cgroup_write;
cft->write = hugetlb_cgroup_write;
/* Add the usage file */
cft = &h->cgroup_files[1];
......@@ -343,14 +344,14 @@ static void __init __hugetlb_cgroup_file_init(int idx)
cft = &h->cgroup_files[2];
snprintf(cft->name, MAX_CFTYPE_NAME, "%s.max_usage_in_bytes", buf);
cft->private = MEMFILE_PRIVATE(idx, RES_MAX_USAGE);
cft->trigger = hugetlb_cgroup_reset;
cft->write = hugetlb_cgroup_reset;
cft->read_u64 = hugetlb_cgroup_read_u64;
/* Add the failcntfile */
cft = &h->cgroup_files[3];
snprintf(cft->name, MAX_CFTYPE_NAME, "%s.failcnt", buf);
cft->private = MEMFILE_PRIVATE(idx, RES_FAILCNT);
cft->trigger = hugetlb_cgroup_reset;
cft->write = hugetlb_cgroup_reset;
cft->read_u64 = hugetlb_cgroup_read_u64;
/* NULL terminate the last cft */
......
......@@ -42,7 +42,7 @@ cgrp_css_alloc(struct cgroup_subsys_state *parent_css)
static int cgrp_css_online(struct cgroup_subsys_state *css)
{
struct cgroup_cls_state *cs = css_cls_state(css);
struct cgroup_cls_state *parent = css_cls_state(css_parent(css));
struct cgroup_cls_state *parent = css_cls_state(css->parent);
if (parent)
cs->classid = parent->classid;
......