Commit deb74f5c authored by Linus Torvalds

Merge tag 'for-linus' of git://github.com/rustyrussell/linux

Pull cpumask cleanups from Rusty Russell:
 "(Somehow forgot to send this out; it's been sitting in linux-next, and
  if you don't want it, it can sit there another cycle)"

I'm a sucker for things that actually delete lines of code.

Fix up trivial conflict in arch/arm/kernel/kprobes.c, where Rusty fixed
a user of &cpu_online_map to be cpu_online_mask, but that code got
deleted by commit b21d55e9 ("ARM: 7332/1: extract out code patch
function from kprobes").

* tag 'for-linus' of git://github.com/rustyrussell/linux:
  cpumask: remove old cpu_*_map.
  documentation: remove references to cpu_*_map.
  drivers/cpufreq/db8500-cpufreq: remove references to cpu_*_map.
  remove references to cpu_*_map in arch/
parents dd775ae2 615399c8
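
The series is essentially a mechanical translation from the old writable cpu_*_map variables to the const cpu_*_mask pointers and their accessor helpers. As a rough before/after sketch of the pattern repeated throughout the diff below (the wrapper function is hypothetical and exists only to illustrate the API mapping; it is not part of the patch):

#include <linux/cpumask.h>

/* Hypothetical helper illustrating the old-API -> new-API mapping:
 *	cpu_set(cpu, cpu_possible_map)   -> set_cpu_possible(cpu, true)
 *	cpu_set(cpu, cpu_present_map)    -> set_cpu_present(cpu, true)
 *	cpu_isset(cpu, cpu_online_map)   -> cpu_online(cpu)
 *	cpu_clear(cpu, cpu_online_map)   -> set_cpu_online(cpu, false)
 *	for_each_cpu_mask(cpu, mask)     -> for_each_cpu(cpu, &mask)
 */
static void mark_cpu_ready(unsigned int cpu)
{
	set_cpu_possible(cpu, true);
	set_cpu_present(cpu, true);
	if (!cpu_online(cpu))
		set_cpu_online(cpu, true);
}
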
@@ -217,7 +217,7 @@ and name space for cpusets, with a minimum of additional kernel code.
 The cpus and mems files in the root (top_cpuset) cpuset are
 read-only. The cpus file automatically tracks the value of
-cpu_online_map using a CPU hotplug notifier, and the mems file
+cpu_online_mask using a CPU hotplug notifier, and the mems file
 automatically tracks the value of node_states[N_HIGH_MEMORY]--i.e.,
 nodes with memory--using the cpuset_track_online_nodes() hook.
...
@@ -47,7 +47,7 @@ maxcpus=n    Restrict boot time cpus to n. Say if you have 4 cpus, using
 			other cpus later online, read FAQ's for more info.
 additional_cpus=n (*)	Use this to limit hotpluggable cpus. This option sets
-			cpu_possible_map = cpu_present_map + additional_cpus
+			cpu_possible_mask = cpu_present_mask + additional_cpus
 cede_offline={"off","on"}	Use this option to disable/enable putting offlined
 				processors to an extended H_CEDE state on
@@ -64,11 +64,11 @@ should only rely on this to count the # of cpus, but *MUST* not rely
 on the apicid values in those tables for disabled apics. In the event
 BIOS doesn't mark such hot-pluggable cpus as disabled entries, one could
 use this parameter "additional_cpus=x" to represent those cpus in the
-cpu_possible_map.
+cpu_possible_mask.
 possible_cpus=n		[s390,x86_64] use this to set hotpluggable cpus.
 			This option sets possible_cpus bits in
-			cpu_possible_map. Thus keeping the numbers of bits set
+			cpu_possible_mask. Thus keeping the numbers of bits set
 			constant even if the machine gets rebooted.
 CPU maps and such
@@ -76,7 +76,7 @@ CPU maps and such
 [More on cpumaps and primitive to manipulate, please check
 include/linux/cpumask.h that has more descriptive text.]
-cpu_possible_map: Bitmap of possible CPUs that can ever be available in the
+cpu_possible_mask: Bitmap of possible CPUs that can ever be available in the
 system. This is used to allocate some boot time memory for per_cpu variables
 that aren't designed to grow/shrink as CPUs are made available or removed.
 Once set during boot time discovery phase, the map is static, i.e no bits
@@ -84,13 +84,13 @@ are added or removed anytime. Trimming it accurately for your system needs
 upfront can save some boot time memory. See below for how we use heuristics
 in x86_64 case to keep this under check.
-cpu_online_map: Bitmap of all CPUs currently online. Its set in __cpu_up()
+cpu_online_mask: Bitmap of all CPUs currently online. Its set in __cpu_up()
 after a cpu is available for kernel scheduling and ready to receive
 interrupts from devices. Its cleared when a cpu is brought down using
 __cpu_disable(), before which all OS services including interrupts are
 migrated to another target CPU.
-cpu_present_map: Bitmap of CPUs currently present in the system. Not all
+cpu_present_mask: Bitmap of CPUs currently present in the system. Not all
 of them may be online. When physical hotplug is processed by the relevant
 subsystem (e.g ACPI) can change and new bit either be added or removed
 from the map depending on the event is hot-add/hot-remove. There are currently
@@ -99,22 +99,22 @@ at which time hotplug is disabled.
 You really dont need to manipulate any of the system cpu maps. They should
 be read-only for most use. When setting up per-cpu resources almost always use
-cpu_possible_map/for_each_possible_cpu() to iterate.
+cpu_possible_mask/for_each_possible_cpu() to iterate.
 Never use anything other than cpumask_t to represent bitmap of CPUs.
 	#include <linux/cpumask.h>
-	for_each_possible_cpu     - Iterate over cpu_possible_map
-	for_each_online_cpu       - Iterate over cpu_online_map
-	for_each_present_cpu      - Iterate over cpu_present_map
+	for_each_possible_cpu     - Iterate over cpu_possible_mask
+	for_each_online_cpu       - Iterate over cpu_online_mask
+	for_each_present_cpu      - Iterate over cpu_present_mask
 	for_each_cpu_mask(x,mask) - Iterate over some random collection of cpu mask.
 	#include <linux/cpu.h>
 	get_online_cpus() and put_online_cpus():
 The above calls are used to inhibit cpu hotplug operations. While the
-cpu_hotplug.refcount is non zero, the cpu_online_map will not change.
+cpu_hotplug.refcount is non zero, the cpu_online_mask will not change.
 If you merely need to avoid cpus going away, you could also use
 preempt_disable() and preempt_enable() for those sections.
 Just remember the critical section cannot call any
...
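
The hotplug documentation above describes the exclusion pair get_online_cpus()/put_online_cpus() and the for_each_*_cpu() iterators. A minimal sketch of how the two are typically combined (the helper function and its message are hypothetical, added here only for illustration):

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>

/* Hypothetical example: walk the online CPUs while holding off hotplug.
 * get_online_cpus() pins cpu_online_mask so it cannot change until
 * put_online_cpus() releases the reference again.
 */
static void report_online_cpus(void)
{
	unsigned int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu)
		pr_info("cpu%u is online\n", cpu);
	put_online_cpus();
}
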
@@ -450,7 +450,7 @@ setup_smp(void)
 		smp_num_probed = 1;
 	}
-	printk(KERN_INFO "SMP: %d CPUs probed -- cpu_present_map = %lx\n",
+	printk(KERN_INFO "SMP: %d CPUs probed -- cpu_present_mask = %lx\n",
 	       smp_num_probed, cpumask_bits(cpu_present_mask)[0]);
 }
...
@@ -152,7 +152,7 @@ int __kprobes __arch_disarm_kprobe(void *p)
 void __kprobes arch_disarm_kprobe(struct kprobe *p)
 {
-	stop_machine(__arch_disarm_kprobe, p, &cpu_online_map);
+	stop_machine(__arch_disarm_kprobe, p, cpu_online_mask);
 }
 void __kprobes arch_remove_kprobe(struct kprobe *p)
...
@@ -349,7 +349,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	 * re-initialize the map in platform_smp_prepare_cpus() if
 	 * present != possible (e.g. physical hotplug).
 	 */
-	init_cpu_present(&cpu_possible_map);
+	init_cpu_present(cpu_possible_mask);
 	/*
 	 * Initialise the SCU if there are more than one CPU
@@ -581,8 +581,9 @@ void smp_send_stop(void)
 	unsigned long timeout;
 	if (num_online_cpus() > 1) {
-		cpumask_t mask = cpu_online_map;
-		cpu_clear(smp_processor_id(), mask);
+		struct cpumask mask;
+		cpumask_copy(&mask, cpu_online_mask);
+		cpumask_clear_cpu(smp_processor_id(), &mask);
 		smp_cross_call(&mask, IPI_CPU_STOP);
 	}
...
@@ -35,7 +35,7 @@
 #define BASE_IPI_IRQ 26
 /*
- * cpu_possible_map needs to be filled out prior to setup_per_cpu_areas
+ * cpu_possible_mask needs to be filled out prior to setup_per_cpu_areas
  * (which is prior to any of our smp_prepare_cpu crap), in order to set
  * up the... per_cpu areas.
  */
@@ -208,7 +208,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
 	stack_start = ((void *) thread) + THREAD_SIZE;
 	__vmstart(start_secondary, stack_start);
-	while (!cpu_isset(cpu, cpu_online_map))
+	while (!cpu_online(cpu))
 		barrier();
 	return 0;
@@ -229,7 +229,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	/* Right now, let's just fake it. */
 	for (i = 0; i < max_cpus; i++)
-		cpu_set(i, cpu_present_map);
+		set_cpu_present(i, true);
 	/* Also need to register the interrupts for IPI */
 	if (max_cpus > 1)
@@ -269,5 +269,5 @@ void smp_start_cpus(void)
 	int i;
 	for (i = 0; i < NR_CPUS; i++)
-		cpu_set(i, cpu_possible_map);
+		set_cpu_possible(i, true);
 }
@@ -839,7 +839,7 @@ static __init int setup_additional_cpus(char *s)
 early_param("additional_cpus", setup_additional_cpus);
 /*
- * cpu_possible_map should be static, it cannot change as CPUs
+ * cpu_possible_mask should be static, it cannot change as CPUs
  * are onlined, or offlined. The reason is per-cpu data-structures
  * are allocated by some modules at init time, and dont expect to
  * do this dynamically on cpu arrival/departure.
...
@@ -78,7 +78,7 @@ static inline void octeon_send_ipi_mask(const struct cpumask *mask,
 }
 /**
- * Detect available CPUs, populate cpu_possible_map
+ * Detect available CPUs, populate cpu_possible_mask
  */
 static void octeon_smp_hotplug_setup(void)
 {
@@ -268,7 +268,7 @@ static int octeon_cpu_disable(void)
 	spin_lock(&smp_reserve_lock);
-	cpu_clear(cpu, cpu_online_map);
+	set_cpu_online(cpu, false);
 	cpu_clear(cpu, cpu_callin_map);
 	local_irq_disable();
 	fixup_irqs();
...
@@ -173,7 +173,7 @@ asmlinkage long mipsmt_sys_sched_getaffinity(pid_t pid, unsigned int len,
 	if (retval)
 		goto out_unlock;
-	cpus_and(mask, p->thread.user_cpus_allowed, cpu_possible_map);
+	cpumask_and(&mask, &p->thread.user_cpus_allowed, cpu_possible_mask);
 out_unlock:
 	read_unlock(&tasklist_lock);
...
@@ -25,7 +25,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	int i;
 #ifdef CONFIG_SMP
-	if (!cpu_isset(n, cpu_online_map))
+	if (!cpu_online(n))
 		return 0;
 #endif
...
@@ -317,7 +317,7 @@ static int bmips_cpu_disable(void)
 	pr_info("SMP: CPU%d is offline\n", cpu);
-	cpu_clear(cpu, cpu_online_map);
+	set_cpu_online(cpu, false);
 	cpu_clear(cpu, cpu_callin_map);
 	local_flush_tlb_all();
...
@@ -148,7 +148,7 @@ static void stop_this_cpu(void *dummy)
 	/*
 	 * Remove this CPU:
 	 */
-	cpu_clear(smp_processor_id(), cpu_online_map);
+	set_cpu_online(smp_processor_id(), false);
 	for (;;) {
 		if (cpu_wait)
 			(*cpu_wait)();	/* Wait if available. */
@@ -174,7 +174,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	mp_ops->prepare_cpus(max_cpus);
 	set_cpu_sibling_map(0);
 #ifndef CONFIG_HOTPLUG_CPU
-	init_cpu_present(&cpu_possible_map);
+	init_cpu_present(cpu_possible_mask);
 #endif
 }
@@ -248,7 +248,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
 	while (!cpu_isset(cpu, cpu_callin_map))
 		udelay(100);
-	cpu_set(cpu, cpu_online_map);
+	set_cpu_online(cpu, true);
 	return 0;
 }
@@ -320,14 +320,13 @@ void flush_tlb_mm(struct mm_struct *mm)
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_on_other_tlbs(flush_tlb_mm_ipi, mm);
 	} else {
-		cpumask_t mask = cpu_online_map;
 		unsigned int cpu;
-		cpu_clear(smp_processor_id(), mask);
-		for_each_cpu_mask(cpu, mask)
-			if (cpu_context(cpu, mm))
+		for_each_online_cpu(cpu) {
+			if (cpu != smp_processor_id() && cpu_context(cpu, mm))
 				cpu_context(cpu, mm) = 0;
+		}
 	}
 	local_flush_tlb_mm(mm);
 	preempt_enable();
@@ -360,14 +359,13 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned l
 		smp_on_other_tlbs(flush_tlb_range_ipi, &fd);
 	} else {
-		cpumask_t mask = cpu_online_map;
 		unsigned int cpu;
-		cpu_clear(smp_processor_id(), mask);
-		for_each_cpu_mask(cpu, mask)
-			if (cpu_context(cpu, mm))
+		for_each_online_cpu(cpu) {
+			if (cpu != smp_processor_id() && cpu_context(cpu, mm))
 				cpu_context(cpu, mm) = 0;
+		}
 	}
 	local_flush_tlb_range(vma, start, end);
 	preempt_enable();
 }
@@ -407,14 +405,13 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 		smp_on_other_tlbs(flush_tlb_page_ipi, &fd);
 	} else {
-		cpumask_t mask = cpu_online_map;
 		unsigned int cpu;
-		cpu_clear(smp_processor_id(), mask);
-		for_each_cpu_mask(cpu, mask)
-			if (cpu_context(cpu, vma->vm_mm))
+		for_each_online_cpu(cpu) {
+			if (cpu != smp_processor_id() && cpu_context(cpu, vma->vm_mm))
 				cpu_context(cpu, vma->vm_mm) = 0;
+		}
 	}
 	local_flush_tlb_page(vma, page);
 	preempt_enable();
 }
...
@@ -291,7 +291,7 @@ static void smtc_configure_tlb(void)
  * possibly leave some TCs/VPEs as "slave" processors.
  *
  * Use c0_MVPConf0 to find out how many TCs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
  */
 int __init smtc_build_cpu_map(int start_cpu_slot)
...
@@ -80,9 +80,9 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
 	if (vma)
 		mask = *mm_cpumask(vma->vm_mm);
 	else
-		mask = cpu_online_map;
-	cpu_clear(cpu, mask);
-	for_each_cpu_mask(cpu, mask)
+		mask = *cpu_online_mask;
+	cpumask_clear_cpu(cpu, &mask);
+	for_each_cpu(cpu, &mask)
 		octeon_send_ipi_single(cpu, SMP_ICACHE_FLUSH);
 	preempt_enable();
...
@@ -165,7 +165,7 @@ void __init nlm_smp_setup(void)
 	cpu_set(boot_cpu, phys_cpu_present_map);
 	__cpu_number_map[boot_cpu] = 0;
 	__cpu_logical_map[0] = boot_cpu;
-	cpu_set(0, cpu_possible_map);
+	set_cpu_possible(0, true);
 	num_cpus = 1;
 	for (i = 0; i < NR_CPUS; i++) {
@@ -177,14 +177,14 @@ void __init nlm_smp_setup(void)
 			cpu_set(i, phys_cpu_present_map);
 			__cpu_number_map[i] = num_cpus;
 			__cpu_logical_map[num_cpus] = i;
-			cpu_set(num_cpus, cpu_possible_map);
+			set_cpu_possible(num_cpus, true);
 			++num_cpus;
 		}
 	}
 	pr_info("Phys CPU present map: %lx, possible map %lx\n",
 		(unsigned long)phys_cpu_present_map.bits[0],
-		(unsigned long)cpu_possible_map.bits[0]);
+		(unsigned long)cpumask_bits(cpu_possible_mask)[0]);
 	pr_info("Detected %i Slave CPU(s)\n", num_cpus);
 	nlm_set_nmi_handler(nlm_boot_secondary_cpus);
...
@@ -146,7 +146,7 @@ static void __cpuinit yos_boot_secondary(int cpu, struct task_struct *idle)
 }
 /*
- * Detect available CPUs, populate cpu_possible_map before smp_init
+ * Detect available CPUs, populate cpu_possible_mask before smp_init
  *
  * We don't want to start the secondary CPU yet nor do we have a nice probing
  * feature in PMON so we just assume presence of the secondary core.
@@ -155,10 +155,10 @@ static void __init yos_smp_setup(void)
 {
 	int i;
-	cpus_clear(cpu_possible_map);
+	init_cpu_possible(cpu_none_mask);
 	for (i = 0; i < 2; i++) {
-		cpu_set(i, cpu_possible_map);
+		set_cpu_possible(i, true);
 		__cpu_number_map[i] = i;
 		__cpu_logical_map[i] = i;
 	}
@@ -169,7 +169,7 @@ static void __init yos_prepare_cpus(unsigned int max_cpus)
 	/*
 	 * Be paranoid. Enable the IPI only if we're really about to go SMP.
 	 */
-	if (cpus_weight(cpu_possible_map))
+	if (num_possible_cpus())
 		set_c0_status(STATUSF_IP5);
 }
...
@@ -76,7 +76,7 @@ static int do_cpumask(cnodeid_t cnode, nasid_t nasid, int highest)
 		/* Only let it join in if it's marked enabled */
 		if ((acpu->cpu_info.flags & KLINFO_ENABLE) &&
 		    (tot_cpus_found != NR_CPUS)) {
-			cpu_set(cpuid, cpu_possible_map);
+			set_cpu_possible(cpuid, true);
 			alloc_cpupda(cpuid, tot_cpus_found);
 			cpus_found++;
 			tot_cpus_found++;
...
@@ -138,7 +138,7 @@ static void __cpuinit bcm1480_boot_secondary(int cpu, struct task_struct *idle)
 /*
  * Use CFE to find out how many CPUs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
  * XXXKW will the boot CPU ever not be physical 0?
  *
  * Common setup before any secondaries are started
@@ -147,14 +147,13 @@ static void __init bcm1480_smp_setup(void)
 {
 	int i, num;
-	cpus_clear(cpu_possible_map);
-	cpu_set(0, cpu_possible_map);
+	init_cpu_possible(cpumask_of(0));
 	__cpu_number_map[0] = 0;
 	__cpu_logical_map[0] = 0;
 	for (i = 1, num = 0; i < NR_CPUS; i++) {
 		if (cfe_cpu_stop(i) == 0) {
-			cpu_set(i, cpu_possible_map);
+			set_cpu_possible(i, true);
 			__cpu_number_map[i] = ++num;
 			__cpu_logical_map[num] = i;
 		}
...
@@ -126,7 +126,7 @@ static void __cpuinit sb1250_boot_secondary(int cpu, struct task_struct *idle)
 /*
  * Use CFE to find out how many CPUs are available, setting up
- * cpu_possible_map and the logical/physical mappings.
+ * cpu_possible_mask and the logical/physical mappings.
  * XXXKW will the boot CPU ever not be physical 0?
  *
  * Common setup before any secondaries are started
@@ -135,14 +135,13 @@ static void __init sb1250_smp_setup(void)
 {
 	int i, num;
-	cpus_clear(cpu_possible_map);
-	cpu_set(0, cpu_possible_map);
+	init_cpu_possible(cpumask_of(0));
 	__cpu_number_map[0] = 0;
 	__cpu_logical_map[0] = 0;
 	for (i = 1, num = 0; i < NR_CPUS; i++) {
 		if (cfe_cpu_stop(i) == 0) {
-			cpu_set(i, cpu_possible_map);
+			set_cpu_possible(i, true);
 			__cpu_number_map[i] = ++num;
 			__cpu_logical_map[num] = i;
 		}
...
@@ -104,11 +104,11 @@ static int irq_choose_cpu(const struct cpumask *affinity)
 {
 	cpumask_t mask;
-	cpus_and(mask, cpu_online_map, *affinity);
-	if (cpus_equal(mask, cpu_online_map) || cpus_empty(mask))
+	cpumask_and(&mask, cpu_online_mask, affinity);
+	if (cpumask_equal(&mask, cpu_online_mask) || cpumask_empty(&mask))
 		return boot_cpu_id;
 	else
-		return first_cpu(mask);
+		return cpumask_first(&mask);
 }
 #else
 #define irq_choose_cpu(affinity) boot_cpu_id
...
@@ -1100,7 +1100,7 @@ EXPORT_SYMBOL(hash_for_home_map);
 /*
  * cpu_cacheable_map lists all the cpus whose caches the hypervisor can
- * flush on our behalf. It is set to cpu_possible_map OR'ed with
+ * flush on our behalf. It is set to cpu_possible_mask OR'ed with
  * hash_for_home_map, and it is what should be passed to
  * hv_flush_remote() to flush all caches. Note that if there are
  * dedicated hypervisor driver tiles that have authorized use of their
@@ -1186,7 +1186,7 @@ static void __init setup_cpu_maps(void)
 			      sizeof(cpu_lotar_map));
 	if (rc < 0) {
 		pr_err("warning: no HV_INQ_TILES_LOTAR; using AVAIL\n");
-		cpu_lotar_map = cpu_possible_map;
+		cpu_lotar_map = *cpu_possible_mask;
 	}
 #if CHIP_HAS_CBOX_HOME_MAP()
@@ -1196,9 +1196,9 @@ static void __init setup_cpu_maps(void)
 			      sizeof(hash_for_home_map));
 	if (rc < 0)
 		early_panic("hv_inquire_tiles(HFH_CACHE) failed: rc %d\n", rc);
-	cpumask_or(&cpu_cacheable_map, &cpu_possible_map, &hash_for_home_map);
+	cpumask_or(&cpu_cacheable_map, cpu_possible_mask, &hash_for_home_map);
 #else
-	cpu_cacheable_map = cpu_possible_map;
+	cpu_cacheable_map = *cpu_possible_mask;
 #endif
 }
...
@@ -41,7 +41,7 @@ static int __init start_kernel_proc(void *unused)
 	cpu_tasks[0].pid = pid;
 	cpu_tasks[0].task = current;
 #ifdef CONFIG_SMP
-	cpu_online_map = cpumask_of_cpu(0);
+	init_cpu_online(get_cpu_mask(0));
 #endif
 	start_kernel();
 	return 0;
...
@@ -76,7 +76,7 @@ static int idle_proc(void *cpup)
 		cpu_relax();
 	notify_cpu_starting(cpu);
-	cpu_set(cpu, cpu_online_map);
+	set_cpu_online(cpu, true);
 	default_idle();
 	return 0;
 }
@@ -110,8 +110,7 @@ void smp_prepare_cpus(unsigned int maxcpus)
 	for (i = 0; i < ncpus; ++i)
 		set_cpu_possible(i, true);
-	cpu_clear(me, cpu_online_map);
-	cpu_set(me, cpu_online_map);
+	set_cpu_online(me, true);
 	cpu_set(me, cpu_callin_map);
 	err = os_pipe(cpu_data[me].ipi_pipe, 1, 1);
@@ -138,13 +137,13 @@ void smp_prepare_cpus(unsigned int maxcpus)
 void smp_prepare_boot_cpu(void)
 {
-	cpu_set(smp_processor_id(), cpu_online_map);
+	set_cpu_online(smp_processor_id(), true);
 }
 int __cpu_up(unsigned int cpu)
 {
 	cpu_set(cpu, smp_commenced_mask);
-	while (!cpu_isset(cpu, cpu_online_map))
+	while (!cpu_online(cpu))
 		mb();
 	return 0;
 }
...
@@ -967,7 +967,7 @@ void xen_setup_shared_info(void)
 	xen_setup_mfn_list_list();
 }
-/* This is called once we have the cpu_possible_map */
+/* This is called once we have the cpu_possible_mask */
 void xen_setup_vcpu_info_placement(void)
 {
 	int cpu;
...
@@ -142,7 +142,7 @@ static int __cpuinit db8500_cpufreq_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.transition_latency = 20 * 1000; /* in ns */
 	/* policy sharing between dual CPUs */
-	cpumask_copy(policy->cpus, &cpu_present_map);
+	cpumask_copy(policy->cpus, cpu_present_mask);
 	policy->shared_type = CPUFREQ_SHARED_TYPE_ALL;
...
@@ -764,12 +764,6 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
  *
  */
 #ifndef CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
-/* These strip const, as traditionally they weren't const. */
-#define cpu_possible_map	(*(cpumask_t *)cpu_possible_mask)
-#define cpu_online_map		(*(cpumask_t *)cpu_online_mask)
-#define cpu_present_map	(*(cpumask_t *)cpu_present_mask)
-#define cpu_active_map		(*(cpumask_t *)cpu_active_mask)
 #define cpumask_of_cpu(cpu) (*get_cpu_mask(cpu))
 #define CPU_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(NR_CPUS)
...
@@ -1414,8 +1414,8 @@ endif # MODULES
 config INIT_ALL_POSSIBLE
 	bool
 	help
-	  Back when each arch used to define their own cpu_online_map and
-	  cpu_possible_map, some of them chose to initialize cpu_possible_map
+	  Back when each arch used to define their own cpu_online_mask and
+	  cpu_possible_mask, some of them chose to initialize cpu_possible_mask
 	  with all 1s, and others with all 0s. When they were centralised,
 	  it was better to provide this option than to break all the archs
 	  and have several arch maintainers pursuing me down dark alleys.
...
@@ -270,11 +270,11 @@ static struct file_system_type cpuset_fs_type = {
  * are online. If none are online, walk up the cpuset hierarchy
  * until we find one that does have some online cpus. If we get
  * all the way to the top and still haven't found any online cpus,
- * return cpu_online_map. Or if passed a NULL cs from an exit'ing
- * task, return cpu_online_map.
+ * return cpu_online_mask. Or if passed a NULL cs from an exit'ing
+ * task, return cpu_online_mask.
  *
  * One way or another, we guarantee to return some non-empty subset
- * of cpu_online_map.
+ * of cpu_online_mask.
  *
  * Call with callback_mutex held.
  */
@@ -867,7 +867,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	int retval;
 	int is_load_balanced;
-	/* top_cpuset.cpus_allowed tracks cpu_online_map; it's read-only */
+	/* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */
 	if (cs == &top_cpuset)
 		return -EACCES;
@@ -2149,7 +2149,7 @@ void __init cpuset_init_smp(void)
  *
  * Description: Returns the cpumask_var_t cpus_allowed of the cpuset
  * attached to the specified @tsk. Guaranteed to return some non-empty
- * subset of cpu_online_map, even if this means going outside the
+ * subset of cpu_online_mask, even if this means going outside the
  * tasks cpuset.
  **/
...