    sched: Dynamically allocate sched_domain/sched_group data-structures · dce840a0
    Peter Zijlstra authored
    Instead of relying on static allocations for the sched_domain and
    sched_group trees, dynamically allocate and RCU free them.
    
    Allocating this dynamically also allows for some build_sched_groups()
    simplification since we can now (like with other simplifications) rely
    on the sched_domain tree instead of hard-coded knowledge.
    
    One tricky thing to note is that detach_destroy_domains() needs to hold
    rcu_read_lock() over the entire tear-down; a per-cpu read-side section is
    not sufficient since that can lead to partial sched_group existence (this
    could possibly be solved by doing the tear-down backwards, but holding
    the lock over the whole thing is much more robust).
    
    A consequence of the above is that we can no longer print the
    sched_domain debug stuff from cpu_attach_domain() since that might now
    run with preemption disabled (due to classic RCU etc.) and
    sched_domain_debug() does some GFP_KERNEL allocations.
    
    Another thing to note is that we now fully rely on normal RCU rather than
    RCU-sched; with the new and exciting RCU flavours we have grown over the
    years, BH doesn't necessarily hold off RCU-sched grace periods (-rt is
    known to break this). This would in fact already cause us grief since we
    do sched_domain/sched_group iterations from softirq context.
    
    This patch is somewhat larger than I would like it to be, but I didn't
    find any means of shrinking/splitting this.
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Nick Piggin <npiggin@kernel.dk>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Link: http://lkml.kernel.org/r/20110407122942.245307941@chello.nl
    Signed-off-by: Ingo Molnar <mingo@elte.hu>