Commit c7cc0cc1 authored by Fenghua Yu, committed by Thomas Gleixner

x86/intel_rdt: Reset per cpu closids on unmount

All CPUs in an rdtgroup are given back to the default rdtgroup before the
rdtgroup is removed during umount. After umount, the default rdtgroup
contains all online CPUs, but the per cpu closids are not cleared. As a
result, the stale closid value will be used immediately after the next
mount.

Move all cpus to the default group and update the percpu closid storage.

[ tglx: Massaged changelog ]
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
Cc: "Tony Luck" <tony.luck@intel.com>
Cc: "Sai Prakhya" <sai.praneeth.prakhya@intel.com>
Cc: "Vikas Shivappa" <vikas.shivappa@linux.intel.com>
Cc: "Ingo Molnar" <mingo@elte.hu>
Cc: "H. Peter Anvin" <h.peter.anvin@intel.com>
Link: http://lkml.kernel.org/r/1478912558-55514-2-git-send-email-fenghua.yu@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent 59fe5a77
@@ -799,6 +799,7 @@ static void rmdir_all_sub(void)
 {
 	struct rdtgroup *rdtgrp, *tmp;
 	struct task_struct *p, *t;
+	int cpu;
 
 	/* move all tasks to default resource group */
 	read_lock(&tasklist_lock);
@@ -813,14 +814,29 @@ static void rmdir_all_sub(void)
 	smp_call_function_many(cpu_online_mask, rdt_reset_pqr_assoc_closid,
 			       NULL, 1);
 	put_cpu();
+
 	list_for_each_entry_safe(rdtgrp, tmp, &rdt_all_groups, rdtgroup_list) {
 		/* Remove each rdtgroup other than root */
 		if (rdtgrp == &rdtgroup_default)
 			continue;
+
+		/*
+		 * Give any CPUs back to the default group. We cannot copy
+		 * cpu_online_mask because a CPU might have executed the
+		 * offline callback already, but is still marked online.
+		 */
+		cpumask_or(&rdtgroup_default.cpu_mask,
+			   &rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);
+
 		kernfs_remove(rdtgrp->kn);
 		list_del(&rdtgrp->rdtgroup_list);
 		kfree(rdtgrp);
 	}
+
+	/* Reset all per cpu closids to the default value */
+	for_each_cpu(cpu, &rdtgroup_default.cpu_mask)
+		per_cpu(cpu_closid, cpu) = 0;
+
 	kernfs_remove(kn_info);
 }
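For illustration, a self-contained userspace sketch of the failure mode this patch addresses. None of the code below is kernel code: the array stands in for the per-cpu cpu_closid variable, and the toy_* function names are made up for the example.

	#include <stdio.h>

	#define NR_CPUS	4

	static int cpu_closid[NR_CPUS];		/* toy stand-in for the per-cpu closid */

	/* Toy model: placing a CPU into a resource group caches that group's closid. */
	static void toy_assign_cpu(int cpu, int closid)
	{
		cpu_closid[cpu] = closid;
	}

	/* Toy model of the unmount path after this patch: clear every cached closid. */
	static void toy_unmount_reset(void)
	{
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			cpu_closid[cpu] = 0;	/* back to the default closid */
	}

	int main(void)
	{
		toy_assign_cpu(2, 3);	/* CPU 2 was in a group using closid 3 */

		/*
		 * Without the reset, cpu_closid[2] would still hold 3, and that
		 * stale closid would be used immediately after the next mount.
		 */
		toy_unmount_reset();

		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			printf("cpu %d closid %d\n", cpu, cpu_closid[cpu]);
		return 0;
	}

The reset loop mirrors the for_each_cpu() pass added at the end of rmdir_all_sub() in the diff above, which runs after every CPU has been given back to rdtgroup_default.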