- 08 Jul, 2008 2 commits
-
-
Yinghai Lu authored
and use max_pfn directly.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Mike Travis authored
* Introduce a new PER_CPU macro called "EARLY_PER_CPU". This is used by
  some per_cpu variables that are initialized and accessed before there
  are per_cpu areas allocated.

  ["Early" in respect to per_cpu variables is "earlier than the per_cpu
  areas have been setup".]

  This patchset adds these new macros:

        DEFINE_EARLY_PER_CPU(_type, _name, _initvalue)
        EXPORT_EARLY_PER_CPU_SYMBOL(_name)
        DECLARE_EARLY_PER_CPU(_type, _name)

        early_per_cpu_ptr(_name)
        early_per_cpu_map(_name, _idx)
        early_per_cpu(_name, _cpu)

  The DEFINE macro defines the per_cpu variable as well as the early map
  and pointer. It also initializes the per_cpu variable and map elements
  to "_initvalue". The early_* macros provide access to the initial map
  (usually setup during system init) and the early pointer. This pointer
  is initialized to point to the early map but is then NULL'ed when the
  actual per_cpu areas are setup. After that the per_cpu variable is the
  correct access to the variable.

  The early_per_cpu() macro is not very efficient but does show how to
  access the variable if you have a function that can be called both
  "early" and "late". It tests the early ptr to be NULL, and if not then
  it's still valid. Otherwise, the per_cpu variable is used instead:

        #define early_per_cpu(_name, _cpu)              \
                (early_per_cpu_ptr(_name) ?             \
                        early_per_cpu_ptr(_name)[_cpu] :\
                        per_cpu(_name, _cpu))

  A better method is to actually check the pointer manually. In the case
  below, numa_set_node can be called both "early" and "late":

        void __cpuinit numa_set_node(int cpu, int node)
        {
            int *cpu_to_node_map = early_per_cpu_ptr(x86_cpu_to_node_map);

            if (cpu_to_node_map)
                    cpu_to_node_map[cpu] = node;
            else
                    per_cpu(x86_cpu_to_node_map, cpu) = node;
        }

* Add a flag "arch_provides_topology_pointers" that indicates pointers to
  topology cpumask_t maps are available. Otherwise, use the function
  returning the cpumask_t value. This is useful if cpumask_t set size is
  very large to avoid copying data on to/off of the stack.

* The coverage of CONFIG_DEBUG_PER_CPU_MAPS has been increased while the
  non-debug case has been optimized a bit.

* Remove an unreferenced compiler warning in drivers/base/topology.c

* Clean up #ifdef in setup.c

For inclusion into sched-devel/latest tree.

Based on: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
    + sched-devel/latest .../mingo/linux-2.6-sched-devel.git

Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
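For illustration, a minimal sketch of how the DEFINE/DECLARE/EXPORT macros
listed above might be paired with the early map for a variable such as
x86_cpu_to_node_map (the exact call sites and the NUMA_NO_NODE initializer
are assumptions, not part of this commit text):

        /* Sketch only - assumed usage of the macros listed above. */
        DEFINE_EARLY_PER_CPU(int, x86_cpu_to_node_map, NUMA_NO_NODE);
        EXPORT_EARLY_PER_CPU_SYMBOL(x86_cpu_to_node_map);

        /* In a header, for other files: */
        DECLARE_EARLY_PER_CPU(int, x86_cpu_to_node_map);

        /* Before the per_cpu areas exist, writes go to the static map: */
        early_per_cpu_map(x86_cpu_to_node_map, cpu) = node;

        /* Callers that may run either early or late can use the helper: */
        node = early_per_cpu(x86_cpu_to_node_map, cpu);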
-
- 11 Jun, 2008 1 commit
-
-
Fenghua Yu authored
This is a SLIT sanity checking patch. It moves the slit_valid() function to
generic ACPI code and does sanity checking for both x86 and ia64. It sets
up node_distance with LOCAL_DISTANCE and REMOTE_DISTANCE when hitting an
invalid SLIT table on ia64. It also cleans up the unused variable
"localities" in acpi_parse_slit() on x86.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
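As a rough sketch of the kind of check a generic slit_valid() can perform
(assumed shape; the field names follow the ACPI SLIT table layout, and the
exact rules may differ from what this commit implements):

        static int slit_valid(struct acpi_table_slit *slit)
        {
                int d = slit->locality_count;
                int i, j;

                for (i = 0; i < d; i++) {
                        for (j = 0; j < d; j++) {
                                u8 val = slit->entry[d * i + j];

                                /* Distance to self must be LOCAL_DISTANCE;
                                 * any other distance must be larger. */
                                if (i == j) {
                                        if (val != LOCAL_DISTANCE)
                                                return 0;
                                } else if (val <= LOCAL_DISTANCE) {
                                        return 0;
                                }
                        }
                }
                return 1;
        }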
-
- 25 May, 2008 1 commit
-
-
Thomas Gleixner authored
memory_add_physaddr_to_nid() is only used in the
CONFIG_MEMORY_HOTPLUG_SPARSE || CONFIG_ACPI_HOTPLUG_MEMORY case.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 24 Apr, 2008 1 commit
-
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 19 Apr, 2008 1 commit
-
-
Suresh Siddha authored
For example, if the physical address layout on a two node system with 8 GB
memory is something like:

        node 0: 0-2GB, 4-6GB
        node 1: 2-4GB, 6-8GB

Current kernels fail to boot/detect this NUMA topology. ACPI SRAT tables
can expose such a topology, which needs to be supported.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 17 Apr, 2008 2 commits
-
-
Jack Steiner authored
Increase the number of bits in an apicid from 8 to 32.

By default, MP_processor_info() gets the APICID from the
mpc_config_processor structure. However, this structure limits the size of
APICID to 8 bits. This patch allows the caller of MP_processor_info() to
optionally pass a larger APICID that will be used instead of the one in
the mpc_config_processor struct.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Yinghai Lu authored
we don't need to get that so early.

Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 19 Feb, 2008 1 commit
-
-
Sam Ravnborg authored
reserve_hotadd() is only used by __init acpi_numa_memory_affinity_init().
Annotating reserve_hotadd() with __init is the trivial fix.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 07 Feb, 2008 1 commit
-
-
Bernhard Walle authored
This patchset adds a flags variable to reserve_bootmem() and uses the
BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions
between the crashkernel area and already used memory.

This patch:

Change the reserve_bootmem() function to accept a new flag
BOOTMEM_EXCLUSIVE. If that flag is set, the function returns with -EBUSY
if the memory already has been reserved in the past. This is to avoid
conflicts.

Because that code runs before SMP initialisation, there's no race
condition inside reserve_bootmem_core().

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix powerpc build]
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: <linux-arch@vger.kernel.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
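A minimal sketch of how a caller such as the crashkernel reservation code
can use the new flag (the variable names and message text here are
illustrative assumptions):

        int ret;

        ret = reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE);
        if (ret < 0) {
                /* -EBUSY: the region has already been reserved, so do not
                 * place the crash kernel there. */
                printk(KERN_INFO "crashkernel reservation failed - "
                       "memory is in use\n");
                return;
        }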
-
- 30 Jan, 2008 10 commits
-
-
Sam Ravnborg authored
Fix the following warnings:

WARNING: arch/x86/mm/built-in.o(.text+0x1abc): Section mismatch: reference to .init.data:nodes_parsed in 'unparse_node'
WARNING: arch/x86/mm/built-in.o(.text+0x1ac6): Section mismatch: reference to .cpuinit.data:apicid_to_node in 'unparse_node'
WARNING: arch/x86/mm/built-in.o(.text+0x1ad2): Section mismatch: reference to .cpuinit.data:apicid_to_node in 'unparse_node'

unparse_node is static and only used by acpi_scan_nodes, which is already
annotated __init. So we annotate unparse_node with __init.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Minoru Usui authored
I found a small bug in the NUMA emulation code for x86_64
(CONFIG_NUMA_EMU).

If the machine is non-NUMA, find_node_by_addr() should return
NUMA_NO_NODE, but the current implementation returns the existent maximum
NUMA node number + 1, which is not an existent NUMA node number.

Fortunately, this behaviour does not affect NUMA emulation, because
acpi_fake_nodes(), the caller of find_node_by_addr(), gets the pxm
(proximity domain) by node_to_pxm() from the non-existent NUMA node number
returned by find_node_by_addr(), and node_to_pxm() returns PXM_INVAL,
which means an illegal or non-existent NUMA node number.

Signed-off-by: Minoru Usui <usui@mxm.nes.nec.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
travis@sgi.com authored
Change the static bios_cpu_apicid array to a per_cpu data variable. This
includes using a static array used during initialization, similar to the
way x86_cpu_to_apicid[] is handled.

There is one early use of bios_cpu_apicid in apic_is_clustered_box(). The
other reference, in cpu_present_to_apicid(), is called after
smp_set_apicids() has set up the percpu version of bios_cpu_apicid.

Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Mike Travis authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
travis@sgi.com authored
Change the following static arrays sized by NR_CPUS to per_cpu data
variables:

        char cpu_to_node_map[NR_CPUS];

fixup:

  - Split the cpu_to_node function into "early" and "late" versions so
    that x86_cpu_to_node_map_early_ptr is not EXPORT'ed and the
    cpu_to_node inline function is more streamlined (see the sketch after
    this entry).

  - This also involves setting up the percpu maps as early as possible.

  - Fix X86_32 NUMA build errors that the previous version of this patch
    caused.

V2->V3:
  - add early_cpu_to_node function to keep cpu_to_node efficient
  - move and rename smp_set_apicids() to setup_percpu_maps()
  - call setup_percpu_maps() as early as possible

V1->V2:
  - Removed extraneous casts
  - Fix !NUMA builds with '#ifdef CONFIG_NUMA'

Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
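A plausible shape for the early/late split mentioned above, as a sketch
only: it assumes the early pointer is simply checked before falling back
to the per_cpu copy, and is not the literal patch.

        static inline int early_cpu_to_node(int cpu)
        {
                /* Early in boot the static map is still valid... */
                if (x86_cpu_to_node_map_early_ptr)
                        return x86_cpu_to_node_map_early_ptr[cpu];
                /* ...afterwards the per_cpu copy is authoritative. */
                return per_cpu(x86_cpu_to_node_map, cpu);
        }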
-
Mike Travis authored
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
travis@sgi.com authored
Change the size of node ids from 8 bits to 16 bits to accommodate more
than 256 nodes.

Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
travis@sgi.com authored
Change the size of APICIDs from u8 to u16. This partially supports the new
x2apic mode that will be present on future processor chips. (Chips
actually support 32-bit APICIDs, but that change is more intrusive.
Supporting 16-bit is sufficient for now).

Signed-off-by: Jack Steiner <steiner@sgi.com>

I've included just the partial change from u8 to u16 apicids. The
remaining x2apic changes will be in a separate patch.

In addition, the fake_node_to_pxm_map[] and fake_apicid_to_node[] tables
have been moved from local data to the __initdata section, reducing stack
pressure when MAX_NUMNODES and MAX_LOCAL_APIC are increased in size.

Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Joe Perches authored
Spelling fixes.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Christoph Lameter authored
Use sparsemem as the only memory model for UP, SMP and NUMA. Measurements
indicate that DISCONTIGMEM has a higher overhead than sparsemem. And
FLATMEM's benefits are minimal. So I think it's best to simply standardize
on sparsemem.

Results of page allocator tests (test can be had via git from slab git
tree branch tests)

Measurements in cycle counts. 1000 allocations were performed and then the
average cycle count was calculated.

Order   FlatMem   Discontig   SparseMem
 0          639         665         641
 1          567         647         593
 2          679         774         692
 3          763         967         781
 4          961        1501         962
 5         1356        2344        1392
 6         2224        3982        2336
 7         4869        7225        5074
 8        12500       14048       12732
 9        27926       28223       28165
10        58578       58714       58682

(Note that FlatMem is an SMP config and the rest NUMA configurations)

Memory use:

SMP Sparsemem
-------------

Kernel size:

   text    data     bss     dec     hex filename
3849268  397739 1264856 5511863  541ab7 vmlinux

             total       used       free     shared    buffers     cached
Mem:       8242252      41164    8201088          0        352      11512
-/+ buffers/cache:      29300    8212952
Swap:      9775512          0    9775512

SMP Flatmem
-----------

Kernel size:

   text    data     bss     dec     hex filename
3844612  397739 1264536 5506887  540747 vmlinux

So 4.5k growth in text size vs. FLATMEM.

             total       used       free     shared    buffers     cached
Mem:       8244052      40544    8203508          0        352      11484
-/+ buffers/cache:      28708    8215344

2k growth in overall memory use after boot.

NUMA discontig:

   text    data     bss     dec     hex filename
3888124  470659 1276504 5635287  55fcd7 vmlinux

             total       used       free     shared    buffers     cached
Mem:       8256256      56908    8199348          0        352      11496
-/+ buffers/cache:      45060    8211196
Swap:      9775512          0    9775512

NUMA sparse:

   text    data     bss     dec     hex filename
3896428  470659 1276824 5643911  561e87 vmlinux

8k text growth. Given that we fully inline virt_to_page and friends now
that is rather good.

             total       used       free     shared    buffers     cached
Mem:       8264720      57240    8207480          0        352      11516
-/+ buffers/cache:      45372    8219348
Swap:      9775512          0    9775512

The total available memory is increased by 8k.

This patch makes sparsemem the default and removes discontig and flatmem
support from x86.

[akpm@linux-foundation.org: allnoconfig build fix]

Acked-by: Andi Kleen <ak@suse.de>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 19 Oct, 2007 1 commit
-
-
Simon Arlott authored
Spelling fixes in arch/x86_64/.

Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
-
- 17 Oct, 2007 1 commit
-
-
Mike Travis authored
In x86_64 and i386 architectures most arrays that are sized using NR_CPUS
lie in local memory on node 0. Not only will most (99%?) of the systems
not use all the slots in these arrays, particularly when NR_CPUS is
increased to accommodate future very high cpu count systems, but a number
of cache lines are passed unnecessarily on the system bus when these
arrays are referenced by cpus on other nodes.

Typically, the values in these arrays are referenced by the cpu accessing
its own values, though when passing IPI interrupts, the cpu does access
the data relevant to the targeted cpu/node. Of course, if the referencing
cpu is not on node 0, then the reference will still require cross node
exchanges of cache lines. A common use of this is for an interrupt service
routine to pass the interrupt to other cpus local to that node.

Ideally, all the elements in these arrays should be moved to the per_cpu
data area. In some cases (such as x86_cpu_to_apicid) the array is
referenced before the per_cpu data areas are setup. In this case, a static
array is declared in the __initdata area and initialized by the booting
cpu (BSP). The values are then moved to the per_cpu area after it is
initialized and the original static array is freed with the rest of the
__initdata.

This patch:

Fix four instances where cpu_to_node is referenced by array instead of via
the cpu_to_node macro. This is preparation for moving it to the per_cpu
data area.

Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
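The "copy the __initdata map into the per_cpu area once it exists" step
described above might look roughly like the sketch below; the _init suffix
on the static map name is an assumption used for illustration.

        int cpu;

        /* After the per_cpu areas have been allocated: */
        for_each_possible_cpu(cpu)
                per_cpu(x86_cpu_to_apicid, cpu) = x86_cpu_to_apicid_init[cpu];

        /* The static __initdata map (x86_cpu_to_apicid_init[]) is then
         * freed along with the rest of the init data. */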
-
- 11 Oct, 2007 2 commits
-
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 22 Jul, 2007 4 commits
-
-
David Rientjes authored
When we are in the emulated NUMA case, we need to make sure that all
existing apicid_to_node mappings that point to real node IDs now point to
the equivalent fake node IDs.

If we simply iterate over all apicid_to_node[] members for each node, we
risk remapping an entry if it shares a node ID with a real node. Since
apicids may not be consecutive, we're forced to create an automatic array
of apicid_to_node mappings and then copy it over once we have finished
remapping fake to real nodes.

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
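The "remap via a temporary automatic array" idea reads roughly like this
sketch; real_to_fake_nid() is a hypothetical helper standing in for the
actual fake-node lookup, and the rest is illustrative only.

        int fake_apicid_to_node[MAX_LOCAL_APIC];
        int i;

        /* Build the remapped table first, so an entry is never translated
         * twice when a fake node ID collides with a real node ID... */
        for (i = 0; i < MAX_LOCAL_APIC; i++)
                fake_apicid_to_node[i] = real_to_fake_nid(apicid_to_node[i]);

        /* ...then install it in one pass. */
        for (i = 0; i < MAX_LOCAL_APIC; i++)
                apicid_to_node[i] = fake_apicid_to_node[i];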
-
David Rientjes authored
For NUMA emulation, our SLIT should represent the true NUMA topology of
the system but our proximity domain to node ID mapping needs to reflect
the emulated state.

When NUMA emulation has successfully setup fake nodes on the system, a new
function, acpi_fake_nodes(), is called. This function determines the
proximity domain (_PXM) for each true node found on the system. It then
finds which emulated nodes have been allocated on this true node as
determined by its starting address.

The node ID to PXM mapping is changed so that each fake node ID points to
the PXM of the true node that it is located on. If the machine failed to
register a SLIT, then we assume there is no special requirement for
emulated node affinity so we use the default LOCAL_DISTANCE, which is
newly exported to this code, as our measurement if the emulated nodes
appear in the same PXM. Otherwise, we use REMOTE_DISTANCE.

PXM_INVAL and NID_INVAL are also exported to the ACPI header file so that
we can compare node_to_pxm() results in generic code (in this case, the
SRAT code).

Cc: Len Brown <lenb@kernel.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Rientjes authored
In acpi_scan_nodes(), we immediately return -1 if acpi_numa <= 0, meaning
we haven't detected any underlying ACPI topology or we have explicitly
disabled its use from the command-line with numa=noacpi.

acpi_table_print_srat_entry() and acpi_table_parse_srat() are only
referenced within drivers/acpi/numa.c, so we can mark them as static and
remove their prototypes from the header file. Likewise, pxm_to_node_map[]
and node_to_pxm_map[] are only used within drivers/acpi/numa.c, so we mark
them as static and remove their externs from the header file.

The automatic 'result' variable is unused in acpi_numa_init(), so it's
removed.

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Rientjes authored
Use LOCAL_DISTANCE and REMOTE_DISTANCE in the x86_64 ACPI code.

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 May, 2007 1 commit
-
-
Suresh Siddha authored
Set the node_possible_map at runtime on x86_64. On a non-NUMA system,
num_possible_nodes() will now say '1'.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Eric Dumazet <dada1@cosmosbay.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <clameter@engr.sgi.com>
-
- 03 Feb, 2007 1 commit
-
-
Alexey Starikovskiy authored
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 21 Oct, 2006 1 commit
-
-
keith mannthey authored
This patch corrects the logic used in srat.c to figure out what action to
take when parsing and registering hot-add areas. Hot-add areas should only
be added to the node information for the MEMORY_HOTPLUG_RESERVE case.

When booting MEMORY_HOTPLUG_SPARSE, hot-add areas on everything but the
last node are getting included in the node data; the pages are set up
during kernel boot and then the kernel dies when the pages are used. This
patch fixes this issue.

Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
- 01 Oct, 2006 3 commits
-
-
Keith Mannthey authored
In cases where the acpi memory-add event does not contain the pxm (node)
information, allow the driver to look up node info based on the address.

The acpi_get_node call returns -1 if it can't decode the pxm info, which
causes add_memory to panic. acpi_get_node would have to decode the
resource from the handle (a lengthy proposition). This seems to be the
cleanest point to interject the hook.

[kamezawa.hiroyu@jp.fujitsu.com: build fixes]
[y-goto@jp.fujitsu.com: build fixes]
Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Keith Mannthey authored
The API for hot-add memory already has a construct for finding nodes based
on an address, memory_add_physaddr_to_nid. This patch allows the function
to do something besides return 0. It uses the nodes_add information to
look up the node info for a hot-add event.

Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
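Conceptually, the lookup over the nodes_add ranges can be pictured like
this sketch; it assumes each nodes_add[] entry carries a start/end
physical range, and the fallback to node 0 mirrors the old behaviour.

        int memory_add_physaddr_to_nid(u64 start)
        {
                int i;

                /* Return the node whose registered hot-add range covers
                 * the address being added. */
                for_each_node(i)
                        if (nodes_add[i].start <= start &&
                            start < nodes_add[i].end)
                                return i;

                /* No hot-add range claims this address: fall back to 0. */
                return 0;
        }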
-
Keith Mannthey authored
Enable x86_64 srat.c to share code between both the reserve and sparsemem
based add memory paths. Both paths need the hot-add area node locality
information (nodes_add). This code refactors the code path to allow this.

Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 27 Sep, 2006 3 commits
-
-
Mel Gorman authored
Arch-independent zone-sizing determines the size of a node
(pgdat->node_spanned_pages) based on the physical memory that was
registered by the architecture. However, when CONFIG_MEMORY_HOTPLUG_RESERVE
is set, the architecture expects that the spanned_pages will be much
larger and that mem_map will be allocated that is used later for memory
hot-add.

This patch allows an architecture that sets CONFIG_MEMORY_HOTPLUG_RESERVE
to call push_node_boundaries() which will set the node beginning and end
to at *least* the requested boundary.

Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Andi Kleen <ak@muc.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "Keith Mannthey" <kmannth@gmail.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Mel Gorman authored
absent_pages_in_range() made the assumption that users of the API would
not care about holes beyond the end of physical memory. This was not the
case. This patch will account for ranges outside of physical memory as
holes correctly.

Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Andi Kleen <ak@muc.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "Keith Mannthey" <kmannth@gmail.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Mel Gorman authored
Size zones and holes in an architecture independent manner for x86_64.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Andi Kleen <ak@muc.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "Keith Mannthey" <kmannth@gmail.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 26 Sep, 2006 1 commit
-
-
Andi Kleen authored
Move it into srat.c. No need to clutter up setup.c for it. And remove its
use in setup.c completely - it only guarded a printk which can be done
unconditionally.

Signed-off-by: Andi Kleen <ak@suse.de>
-
- 23 Jun, 2006 1 commit
-
-
Yasunori Goto authored
Consolidate the various arch-specific implementations of pxm_to_node() and
node_to_pxm() into a single generic version.

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: "Brown, Len" <len.brown@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 31 May, 2006 1 commit
-
-
Daniel Yeisley authored
From: Daniel Yeisley <dan.yeisley@unisys.com>

It is possible to boot a Unisys ES7000 with CPUs from multiple cells, and
not also include the memory from those cells. This can create a scenario
where node 0 has cpus, but no associated memory. The system will boot fine
in a configuration where node 0 has memory, but nodes 2 and 3 do not.

[AK: I rechecked the code and generic code seems to indeed handle that
already. Dan's original patch had a change for mm/slab.c that seems to be
already in now.]

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-