Commit 4f817847 authored by Jeremy Fitzhardinge, committed by Jeremy Fitzhardinge

remove dead code in pgtable_cache_init

The conversion from using a slab cache to quicklist left some residual
dead code.

I note that in the conversion it now always allocates a whole page for
the pgd, rather than the 32 bytes needed for a PAE pgd.  Was this
intended?
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
parent 8965c1c0
@@ -746,24 +746,12 @@ struct kmem_cache *pmd_cache;
 void __init pgtable_cache_init(void)
 {
-	size_t pgd_size = PTRS_PER_PGD*sizeof(pgd_t);
-	if (PTRS_PER_PMD > 1) {
+	if (PTRS_PER_PMD > 1)
 		pmd_cache = kmem_cache_create("pmd",
 					PTRS_PER_PMD*sizeof(pmd_t),
 					PTRS_PER_PMD*sizeof(pmd_t),
 					SLAB_PANIC,
 					pmd_ctor);
-		if (!SHARED_KERNEL_PMD) {
-			/* If we're in PAE mode and have a non-shared
-			   kernel pmd, then the pgd size must be a
-			   page size.  This is because the pgd_list
-			   links through the page structure, so there
-			   can only be one pgd per page for this to
-			   work. */
-			pgd_size = PAGE_SIZE;
-		}
-	}
 }
 /*
...
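
For context on the question raised in the commit message: under PAE on i386 the top-level pgd holds 4 entries of 8 bytes each, which is where the 32-byte figure comes from, versus the 4096-byte page that a whole-page allocation hands back. A minimal standalone sketch of that arithmetic follows; the PAE_* and I386_* constants are illustrative stand-ins, not the kernel's own identifiers.

#include <stdio.h>

/* Illustrative values only: under i386 PAE the top-level table has
 * 4 entries (PTRS_PER_PGD) of 8 bytes each (sizeof(pgd_t)), and a
 * page is 4096 bytes (PAGE_SIZE). */
#define PAE_PTRS_PER_PGD	4
#define PAE_PGD_ENTRY_SIZE	8
#define I386_PAGE_SIZE		4096

int main(void)
{
	unsigned int pae_pgd_bytes = PAE_PTRS_PER_PGD * PAE_PGD_ENTRY_SIZE;

	printf("PAE pgd needs %u bytes\n", pae_pgd_bytes);		/* 32 */
	printf("whole-page allocation uses %u bytes\n", I386_PAGE_SIZE);
	return 0;
}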