Commit 3a978e55 authored by Paul Jackson, committed by Linus Torvalds

[PATCH] cpusets - big numa cpu and memory placement

This is my cpuset patch, with the following changes in the last two weeks:

 1) Updated to 2.6.8.1-mm1
 2) [Simon Derr <Simon.Derr@bull.net>] Fix new cpuset to begin empty,
    not copied from parent.  Needed to avoid breaking exclusive property.
 3) [Dinakar Guniguntala <dino@in.ibm.com>] Finish initializing top
    cpuset from cpu_possible_map after smp_init() called.
 4) [Paul Jackson <pj@sgi.com>] Check on each call to __alloc_pages()
    if the current task's cpuset mems_allowed has changed.  Use a cpuset
    generation number, bumped on any cpuset memory placement change,
    to make this check efficient.  Update the task's mems_allowed from
    its cpuset, if the cpuset has changed.  (A sketch of this check
    follows the list below.)
 5) [Paul Jackson <pj@sgi.com>] If a task is moved to another cpuset,
    then update its cpus_allowed, using set_cpus_allowed().
 6) [Paul Jackson <pj@sgi.com>] Update Documentation/cpusets.txt to
    reflect above changes (4) and (5).
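
To make item (4) concrete, here is a minimal sketch of the generation-number
check.  kernel/cpuset.c itself is not reproduced in this mail, so the struct
cpuset fields and the lack of locking shown here are illustrative assumptions;
only the task_struct fields and the cpuset_update_current_mems_allowed() name
come from the patch below:

    /*
     * Sketch only.  Each cpuset is assumed to carry a mems_generation
     * counter that is bumped whenever its memory placement changes.
     * Tasks remember the generation they last synchronized with in
     * task_struct->cpuset_mems_generation (added by this patch).
     */
    struct cpuset {
            nodemask_t mems_allowed;        /* assumed field name */
            int mems_generation;            /* assumed field name */
    };

    void cpuset_update_current_mems_allowed(void)
    {
            struct cpuset *cs = current->cpuset;

            if (!cs)
                    return;         /* task not attached to a cpuset */
            if (current->cpuset_mems_generation != cs->mems_generation) {
                    /* placement changed since we last looked; resync */
                    current->mems_allowed = cs->mems_allowed;
                    current->cpuset_mems_generation = cs->mems_generation;
            }
    }

This keeps the common __alloc_pages() path to a single integer compare; the
nodemask copy happens only when some cpuset's memory placement has actually
been changed.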

I continue to recommend the following patch for inclusion in your 2.6.9-*mm
series, when that opens.  It provides an important facility for high
performance computing on large systems.  Simon Derr of Bull (France) and
myself are the primary authors.  Erich Focht has indicated that NEC is also
a potential user of this patch on the TX-7 NUMA machines, and that he
"would very much welcome the inclusion of cpusets."

I offer this update to lkml, in order to invite continued feedback.

The one prerequisite patch for this cpuset patch was just posted before this
one.  That was a patch to provide a new bitmap list format, of which
cpusets is the first user.
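
For reference, the list format referred to here is the human-readable,
comma-separated decimal ranges form (the particular values below are just
an example):

    0-3,8,12-15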

This patch has been built on top of 2.6.8.1-mm1, for these arches:

  i386 x86_64 sparc ia64 powerpc-405 powerpc-750 sparc64

with and without CONFIG_CPUSET.  It has been booted and tested on ia64
(sn2_defconfig, SN2 hardware).  The 'alpha' arch also built, except for
what seems to be an unrelated toolchain problem (crosstool ld sigsegv) in
the final link step.

===

Cpusets provide a mechanism for assigning a set of CPUs and Memory Nodes to
a set of tasks.

Cpusets constrain the CPU and Memory placement of tasks to only the
processor and memory resources within a task's current cpuset.  They form a
nested hierarchy visible in a virtual file system.  These are the essential
hooks, beyond what is already present, required to manage dynamic job
placement on large systems.

Cpusets require small kernel hooks in init, exit, fork, mempolicy,
sched_setaffinity, page_alloc and vmscan.  And they require a "struct
cpuset" pointer, a cpuset_mems_generation, and a "mems_allowed" nodemask_t
(to go along with the "cpus_allowed" cpumask_t that's already there) in
each task struct.

These hooks:
  1) establish and propagate cpusets,
  2) enforce CPU placement in sched_setaffinity,
  3) enforce Memory placement in mbind and sys_set_mempolicy,
  4) restrict page allocation and scanning to mems_allowed, and
  5) restrict migration and set_cpus_allowed to cpus_allowed.

The other required hook, restricting task scheduling to CPUs in a task's
cpus_allowed mask, is already present.

Cpusets extend the usefulness of the existing placement support that was
added to Linux 2.6 kernels: sched_setaffinity() for CPU placement, and
mbind() and set_mempolicy() for memory placement.  On smaller or dedicated
use systems, the existing calls are often sufficient.

On larger NUMA systems running more than one performance-critical job,
it is necessary to be able to manage jobs in their entirety.  This includes
providing a job with exclusive CPU and memory that no other job can use,
and being able to list all tasks currently in a cpuset.

A given job running within a cpuset would likely use the existing
placement calls to manage its CPU and memory placement in more detail.

Cpusets are named, nested sets of CPUs and Memory Nodes.  Each cpuset is
represented by a directory in the cpuset virtual file system, normally
mounted at /dev/cpuset.

Each cpuset directory provides the following files, which can be
read and written:

  cpus:
      List of CPUs allowed to tasks in that cpuset.

  mems:
      List of Memory Nodes allowed to tasks in that cpuset.

  tasks:
      List of pids of tasks in that cpuset.

  cpu_exclusive:
      Flag (0 or 1) - if set, cpuset has exclusive use of
      its CPUs (no sibling or cousin cpuset may overlap CPUs).

  mem_exclusive:
      Flag (0 or 1) - if set, cpuset has exclusive use of
      its Memory Nodes (no sibling or cousin may overlap).

  notify_on_release:
      Flag (0 or 1) - if set, then /sbin/cpuset_release_agent
      will be invoked, with the name (/dev/cpuset relative path)
      of that cpuset in argv[1], when the last user of it (task
      or child cpuset) goes away.  This supports automatic
      cleanup of abandoned cpusets.

In addition, one new file type is added to the /proc file system:

  /proc/<pid>/cpuset:
      For each task (pid), list its cpuset path, relative to the
      root of the cpuset file system.  This file is read-only.

New cpusets are created using 'mkdir' (at the shell or in C).  Old ones are
removed using 'rmdir'.  The above files are accessed using read(2) and
write(2) system calls, or shell commands such as 'cat' and 'echo'.
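
A typical session might look like the following sketch; the mount invocation
and the "bigjob" cpuset name are illustrative, and Documentation/cpusets.txt
in the patch is the authoritative reference:

    mkdir /dev/cpuset
    mount -t cpuset cpuset /dev/cpuset          # attach the cpuset file system
    mkdir /dev/cpuset/bigjob                    # create a child cpuset "bigjob"
    echo 0-3 > /dev/cpuset/bigjob/cpus          # allow CPUs 0 through 3
    echo 0-1 > /dev/cpuset/bigjob/mems          # allow Memory Nodes 0 and 1
    echo 1 > /dev/cpuset/bigjob/cpu_exclusive   # only valid if the parent is also cpu_exclusive
    echo $$ > /dev/cpuset/bigjob/tasks          # move the current shell into it
    cat /proc/self/cpuset                       # now reports /bigjob

Writing a pid to the tasks file moves that task into the cpuset; the cpus and
mems files take the list format mentioned earlier.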

The CPUs and Memory Nodes in a given cpuset are always a subset of its
parent.  The root cpuset has all possible CPUs and Memory Nodes in the
system.  A cpuset may be exclusive (cpu or memory) only if its parent is
similarly exclusive.

See further Documentation/cpusets.txt, at the top of the following
patch.


/proc interface:

When learning about cpusets and placement, or developing new uses for them, it
is useful to be able to see the current values of a task's cpus_allowed and
mems_allowed, which are the actual placement constraints used by the kernel
scheduler and memory allocator.

The cpus_allowed and mems_allowed values are also needed by user space apps
that are micromanaging placement, such as when moving an app to a new cpuset,
so that the placement previously obtained by that app within its cpuset using
sched_setaffinity, mbind and set_mempolicy can be re-established.

The cpus_allowed value is also available via the sched_getaffinity system
call.  But since the entire rest of the cpuset API, including the display
of mems_allowed added here, uses an ASCII-style presentation in /proc and
/dev/cpuset, it is worth the extra couple of lines of code to display
cpus_allowed in the same way.

This patch adds the display of these two fields to the 'status' file in the
/proc/<pid> directory of each task.  The fields are only added if
CONFIG_CPUSETS is enabled (which is also needed to define the mems_allowed
field of each task).  The new output lines look like:

  $ tail -2 /proc/1/status
  Cpus_allowed:   ffffffff,ffffffff,ffffffff,ffffffff
  Mems_allowed:   ffffffff,ffffffff
Signed-off-by: Dinakar Guniguntala <dino@in.ibm.com>
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Simon Derr <simon.derr@bull.net>
Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 794c8de9
[collapsed diff not shown]
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -73,6 +73,7 @@
 #include <linux/highmem.h>
 #include <linux/file.h>
 #include <linux/times.h>
+#include <linux/cpuset.h>
 
 #include <asm/uaccess.h>
 #include <asm/pgtable.h>
@@ -300,6 +301,7 @@ int proc_pid_status(struct task_struct *task, char * buffer)
         }
         buffer = task_sig(task, buffer);
         buffer = task_cap(task, buffer);
+        buffer = cpuset_task_status_allowed(task, buffer);
 #if defined(CONFIG_ARCH_S390)
         buffer = task_show_regs(task, buffer);
 #endif
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -33,6 +33,7 @@
 #include <linux/security.h>
 #include <linux/ptrace.h>
 #include <linux/seccomp.h>
+#include <linux/cpuset.h>
 #include "internal.h"
 
 /*
@@ -68,6 +69,9 @@ enum pid_directory_inos {
 #ifdef CONFIG_SCHEDSTATS
         PROC_TGID_SCHEDSTAT,
 #endif
+#ifdef CONFIG_CPUSETS
+        PROC_TGID_CPUSET,
+#endif
 #ifdef CONFIG_SECURITY
         PROC_TGID_ATTR,
         PROC_TGID_ATTR_CURRENT,
@@ -102,6 +106,9 @@ enum pid_directory_inos {
 #ifdef CONFIG_SCHEDSTATS
         PROC_TID_SCHEDSTAT,
 #endif
+#ifdef CONFIG_CPUSETS
+        PROC_TID_CPUSET,
+#endif
 #ifdef CONFIG_SECURITY
         PROC_TID_ATTR,
         PROC_TID_ATTR_CURRENT,
@@ -152,6 +159,9 @@ static struct pid_entry tgid_base_stuff[] = {
 #endif
 #ifdef CONFIG_SCHEDSTATS
         E(PROC_TGID_SCHEDSTAT, "schedstat", S_IFREG|S_IRUGO),
+#endif
+#ifdef CONFIG_CPUSETS
+        E(PROC_TGID_CPUSET, "cpuset", S_IFREG|S_IRUGO),
 #endif
         E(PROC_TGID_OOM_SCORE, "oom_score",S_IFREG|S_IRUGO),
         E(PROC_TGID_OOM_ADJUST,"oom_adj", S_IFREG|S_IRUGO|S_IWUSR),
@@ -185,6 +195,9 @@ static struct pid_entry tid_base_stuff[] = {
 #endif
 #ifdef CONFIG_SCHEDSTATS
         E(PROC_TID_SCHEDSTAT, "schedstat",S_IFREG|S_IRUGO),
+#endif
+#ifdef CONFIG_CPUSETS
+        E(PROC_TID_CPUSET, "cpuset", S_IFREG|S_IRUGO),
 #endif
         E(PROC_TID_OOM_SCORE, "oom_score",S_IFREG|S_IRUGO),
         E(PROC_TID_OOM_ADJUST, "oom_adj", S_IFREG|S_IRUGO|S_IWUSR),
@@ -1556,6 +1569,12 @@ static struct dentry *proc_pident_lookup(struct inode *dir,
                 inode->i_fop = &proc_info_file_operations;
                 ei->op.proc_read = proc_pid_schedstat;
                 break;
+#endif
+#ifdef CONFIG_CPUSETS
+        case PROC_TID_CPUSET:
+        case PROC_TGID_CPUSET:
+                inode->i_fop = &proc_cpuset_operations;
+                break;
 #endif
         case PROC_TID_OOM_SCORE:
         case PROC_TGID_OOM_SCORE:
#ifndef _LINUX_CPUSET_H
#define _LINUX_CPUSET_H
/*
 * cpuset interface
 *
 * Copyright (C) 2003 BULL SA
 * Copyright (C) 2004 Silicon Graphics, Inc.
 *
 */

#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/nodemask.h>

#ifdef CONFIG_CPUSETS

extern int cpuset_init(void);
extern void cpuset_init_smp(void);
extern void cpuset_fork(struct task_struct *p);
extern void cpuset_exit(struct task_struct *p);
extern const cpumask_t cpuset_cpus_allowed(const struct task_struct *p);
void cpuset_init_current_mems_allowed(void);
void cpuset_update_current_mems_allowed(void);
void cpuset_restrict_to_mems_allowed(unsigned long *nodes);
int cpuset_zonelist_valid_mems_allowed(struct zonelist *zl);
int cpuset_zone_allowed(struct zone *z);
extern struct file_operations proc_cpuset_operations;
extern char *cpuset_task_status_allowed(struct task_struct *task, char *buffer);

#else /* !CONFIG_CPUSETS */

static inline int cpuset_init(void) { return 0; }
static inline void cpuset_init_smp(void) {}
static inline void cpuset_fork(struct task_struct *p) {}
static inline void cpuset_exit(struct task_struct *p) {}

static inline const cpumask_t cpuset_cpus_allowed(struct task_struct *p)
{
        return cpu_possible_map;
}

static inline void cpuset_init_current_mems_allowed(void) {}
static inline void cpuset_update_current_mems_allowed(void) {}
static inline void cpuset_restrict_to_mems_allowed(unsigned long *nodes) {}

static inline int cpuset_zonelist_valid_mems_allowed(struct zonelist *zl)
{
        return 1;
}

static inline int cpuset_zone_allowed(struct zone *z)
{
        return 1;
}

static inline char *cpuset_task_status_allowed(struct task_struct *task,
                                                char *buffer)
{
        return buffer;
}

#endif /* !CONFIG_CPUSETS */
#endif /* _LINUX_CPUSET_H */
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -14,6 +14,7 @@
 #include <linux/thread_info.h>
 #include <linux/cpumask.h>
 #include <linux/errno.h>
+#include <linux/nodemask.h>
 
 #include <asm/system.h>
 #include <asm/semaphore.h>
@@ -514,6 +515,7 @@ extern void cpu_attach_domain(struct sched_domain *sd, int cpu);
 struct io_context;              /* See blkdev.h */
 void exit_io_context(void);
 
+struct cpuset;
 
 #define NGROUPS_SMALL           32
 #define NGROUPS_PER_BLOCK       ((int)(PAGE_SIZE / sizeof(gid_t)))
@@ -712,6 +714,11 @@ struct task_struct {
         struct mempolicy *mempolicy;
         short il_next;
 #endif
+#ifdef CONFIG_CPUSETS
+        struct cpuset *cpuset;
+        nodemask_t mems_allowed;
+        int cpuset_mems_generation;
+#endif
 };
 
 static inline pid_t process_group(struct task_struct *tsk)
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -237,6 +237,16 @@ config IKCONFIG_PROC
           This option enables access to the kernel configuration file
           through /proc/config.gz.
 
+config CPUSETS
+        bool "Cpuset support"
+        depends on SMP
+        help
+          This options will let you create and manage CPUSET's which
+          allow dynamically partitioning a system into sets of CPUs and
+          Memory Nodes and assigning tasks to run only within those sets.
+          This is primarily useful on large SMP or NUMA systems.
+
+          Say N if unsure.
 
 menuconfig EMBEDDED
         bool "Configure standard kernel features (for small systems)"
--- a/init/main.c
+++ b/init/main.c
@@ -41,6 +41,7 @@
 #include <linux/kallsyms.h>
 #include <linux/writeback.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/efi.h>
 #include <linux/unistd.h>
 #include <linux/rmap.h>
@@ -515,6 +516,8 @@ asmlinkage void __init start_kernel(void)
 #ifdef CONFIG_PROC_FS
         proc_root_init();
 #endif
+        cpuset_init();
+
         check_bugs();
 
         acpi_early_init(); /* before LAPIC and SMP init */
@@ -656,6 +659,8 @@ static int init(void * unused)
         smp_init();
         sched_init_smp();
 
+        cpuset_init_smp();
+
         /*
          * Do this before initcalls, because some drivers want to access
          * firmware files.
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -18,6 +18,7 @@ obj-$(CONFIG_KALLSYMS) += kallsyms.o
 obj-$(CONFIG_PM) += power/
 obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
 obj-$(CONFIG_COMPAT) += compat.o
+obj-$(CONFIG_CPUSETS) += cpuset.o
 obj-$(CONFIG_IKCONFIG) += configs.o
 obj-$(CONFIG_IKCONFIG_PROC) += configs.o
 obj-$(CONFIG_STOP_MACHINE) += stop_machine.o
[collapsed diff not shown]
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -25,6 +25,7 @@
 #include <linux/mount.h>
 #include <linux/proc_fs.h>
 #include <linux/mempolicy.h>
+#include <linux/cpuset.h>
 #include <linux/syscalls.h>
 
 #include <asm/uaccess.h>
@@ -820,6 +821,7 @@ fastcall NORET_TYPE void do_exit(long code)
         __exit_fs(tsk);
         exit_namespace(tsk);
         exit_thread();
+        cpuset_exit(tsk);
         exit_keys(tsk);
 
         if (group_dead && tsk->signal->leader)
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -29,6 +29,7 @@
 #include <linux/mman.h>
 #include <linux/fs.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/security.h>
 #include <linux/swap.h>
 #include <linux/syscalls.h>
@@ -1069,6 +1070,8 @@ static task_t *copy_process(unsigned long clone_flags,
         if (unlikely(p->ptrace & PT_PTRACED))
                 __ptrace_link(p, current->parent);
 
+        cpuset_fork(p);
+
         attach_pid(p, PIDTYPE_PID, p->pid);
         attach_pid(p, PIDTYPE_TGID, p->tgid);
         if (thread_group_leader(p)) {
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -40,6 +40,7 @@
 #include <linux/timer.h>
 #include <linux/rcupdate.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/percpu.h>
 #include <linux/kthread.h>
 #include <linux/seq_file.h>
@@ -3539,6 +3540,7 @@ long sched_setaffinity(pid_t pid, cpumask_t new_mask)
 {
         task_t *p;
         int retval;
+        cpumask_t cpus_allowed;
 
         lock_cpu_hotplug();
         read_lock(&tasklist_lock);
@@ -3563,6 +3565,8 @@ long sched_setaffinity(pid_t pid, cpumask_t new_mask)
                         !capable(CAP_SYS_NICE))
                 goto out_unlock;
 
+        cpus_allowed = cpuset_cpus_allowed(p);
+        cpus_and(new_mask, new_mask, cpus_allowed);
         retval = set_cpus_allowed(p, new_mask);
 
 out_unlock:
@@ -4240,7 +4244,7 @@ static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *tsk)
                 /* No more Mr. Nice Guy. */
                 if (dest_cpu == NR_CPUS) {
-                        cpus_setall(tsk->cpus_allowed);
+                        tsk->cpus_allowed = cpuset_cpus_allowed(tsk);
                         dest_cpu = any_online_cpu(tsk->cpus_allowed);
 
                         /*
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -67,6 +67,7 @@
 #include <linux/sched.h>
 #include <linux/mm.h>
 #include <linux/nodemask.h>
+#include <linux/cpuset.h>
 #include <linux/gfp.h>
 #include <linux/slab.h>
 #include <linux/string.h>
@@ -167,6 +168,10 @@ static int get_nodes(unsigned long *nodes, unsigned long __user *nmask,
         if (copy_from_user(nodes, nmask, nlongs*sizeof(unsigned long)))
                 return -EFAULT;
         nodes[nlongs-1] &= endmask;
+        /* Update current mems_allowed */
+        cpuset_update_current_mems_allowed();
+        /* Ignore nodes not set in current->mems_allowed */
+        cpuset_restrict_to_mems_allowed(nodes);
         return mpol_check_policy(mode, nodes);
 }
 
@@ -655,7 +660,9 @@ static struct zonelist *zonelist_policy(unsigned gfp, struct mempolicy *policy)
                 break;
         case MPOL_BIND:
                 /* Lower zones don't get a policy applied */
+                /* Careful: current->mems_allowed might have moved */
                 if (gfp >= policy_zone)
+                if (cpuset_zonelist_valid_mems_allowed(policy->v.zonelist))
                         return policy->v.zonelist;
                 /*FALL THROUGH*/
         case MPOL_INTERLEAVE: /* should not happen */
@@ -747,6 +754,8 @@ alloc_page_vma(unsigned gfp, struct vm_area_struct *vma, unsigned long addr)
 {
         struct mempolicy *pol = get_vma_policy(vma, addr);
 
+        cpuset_update_current_mems_allowed();
+
         if (unlikely(pol->policy == MPOL_INTERLEAVE)) {
                 unsigned nid;
 
                 if (vma) {
@@ -784,6 +793,8 @@ struct page *alloc_pages_current(unsigned gfp, unsigned order)
 {
         struct mempolicy *pol = current->mempolicy;
 
+        if (!in_interrupt())
+                cpuset_update_current_mems_allowed();
         if (!pol || in_interrupt())
                 pol = &default_policy;
         if (pol->policy == MPOL_INTERLEAVE)
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -31,6 +31,7 @@
 #include <linux/topology.h>
 #include <linux/sysctl.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/nodemask.h>
 #include <linux/vmalloc.h>
 
@@ -765,6 +766,9 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
                                        classzone_idx, 0, 0))
                         continue;
 
+                if (!cpuset_zone_allowed(z))
+                        continue;
+
                 page = buffered_rmqueue(z, order, gfp_mask);
                 if (page)
                         goto got_pg;
@@ -783,6 +787,9 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
                                        gfp_mask & __GFP_HIGH))
                         continue;
 
+                if (!cpuset_zone_allowed(z))
+                        continue;
+
                 page = buffered_rmqueue(z, order, gfp_mask);
                 if (page)
                         goto got_pg;
@@ -792,6 +799,8 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
         if (((p->flags & PF_MEMALLOC) || unlikely(test_thread_flag(TIF_MEMDIE))) && !in_interrupt()) {
                 /* go through the zonelist yet again, ignoring mins */
                 for (i = 0; (z = zones[i]) != NULL; i++) {
+                        if (!cpuset_zone_allowed(z))
+                                continue;
                         page = buffered_rmqueue(z, order, gfp_mask);
                         if (page)
                                 goto got_pg;
@@ -831,6 +840,9 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
                                        gfp_mask & __GFP_HIGH))
                         continue;
 
+                if (!cpuset_zone_allowed(z))
+                        continue;
+
                 page = buffered_rmqueue(z, order, gfp_mask);
                 if (page)
                         goto got_pg;
@@ -847,6 +859,9 @@ __alloc_pages(unsigned int gfp_mask, unsigned int order,
                                        classzone_idx, 0, 0))
                         continue;
 
+                if (!cpuset_zone_allowed(z))
+                        continue;
+
                 page = buffered_rmqueue(z, order, gfp_mask);
                 if (page)
                         goto got_pg;
@@ -1490,6 +1505,7 @@ void __init build_all_zonelists(void)
         for_each_online_node(i)
                 build_zonelists(NODE_DATA(i));
         printk("Built %i zonelists\n", num_online_nodes());
+        cpuset_init_current_mems_allowed();
 }
 
 /*
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -30,6 +30,7 @@
 #include <linux/rmap.h>
 #include <linux/topology.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/notifier.h>
 #include <linux/rwsem.h>
 
@@ -865,6 +866,9 @@ shrink_caches(struct zone **zones, struct scan_control *sc)
                 if (zone->present_pages == 0)
                         continue;
 
+                if (!cpuset_zone_allowed(zone))
+                        continue;
+
                 zone->temp_priority = sc->priority;
                 if (zone->prev_priority > sc->priority)
                         zone->prev_priority = sc->priority;
@@ -908,6 +912,9 @@ int try_to_free_pages(struct zone **zones,
         for (i = 0; zones[i] != NULL; i++) {
                 struct zone *zone = zones[i];
 
+                if (!cpuset_zone_allowed(zone))
+                        continue;
+
                 zone->temp_priority = DEF_PRIORITY;
                 lru_pages += zone->nr_active + zone->nr_inactive;
         }
@@ -948,8 +955,14 @@ int try_to_free_pages(struct zone **zones,
                 blk_congestion_wait(WRITE, HZ/10);
         }
 out:
-        for (i = 0; zones[i] != 0; i++)
-                zones[i]->prev_priority = zones[i]->temp_priority;
+        for (i = 0; zones[i] != 0; i++) {
+                struct zone *zone = zones[i];
+
+                if (!cpuset_zone_allowed(zone))
+                        continue;
+
+                zone->prev_priority = zone->temp_priority;
+        }
         return ret;
 }
 
@@ -1210,6 +1223,8 @@ void wakeup_kswapd(struct zone *zone, int order)
                 return;
         if (pgdat->kswapd_max_order < order)
                 pgdat->kswapd_max_order = order;
+        if (!cpuset_zone_allowed(zone))
+                return;
         if (!waitqueue_active(&zone->zone_pgdat->kswapd_wait))
                 return;
         wake_up_interruptible(&zone->zone_pgdat->kswapd_wait);