    [PATCH] Fix an SMP+preempt latency problem · 2faf4338
    Andrew Morton authored
    Here is spin_lock():
    
    #define spin_lock(lock) \
    do { \
            preempt_disable(); \
            _raw_spin_lock(lock); \
    } while(0)
    
    
    Here is the scenario:
    
    CPU0:
    	spin_lock(some_lock);
    	do_very_long_thing();	/* This has cond_resched()s in it */
    
    CPU1:
    	spin_lock(some_lock);
    
Now suppose that the scheduler tries to schedule a task on CPU1.  Nothing
happens, because CPU1 is spinning on the lock with preemption disabled.  CPU0
will happily hold the lock for a long time because nobody has set
need_resched() against CPU0.
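
For context, cond_resched() only reschedules when the resched flag has
already been set for the current task.  A simplified sketch (not the
exact kernel source):

void cond_resched(void)
{
        /* Simplified: yield only if a reschedule was requested here. */
        if (need_resched())
                schedule();
}

In the scenario above the wakeup targets CPU1, so CPU0's flag is never
set and the cond_resched() calls in do_very_long_thing() never yield.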
    
This problem can cause scheduling latencies of many tens of milliseconds on
SMP kernels, even though the same kernels behave quite happily on UP.
    
    
    This patch fixes the problem by changing the spin_lock() and write_lock()
    contended slowpath to spin on the lock by hand, while polling for preemption
    requests.
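
Conceptually, the new contended slowpath spins with preemption enabled and
only disables preemption around a trylock attempt.  A minimal sketch (the
helper name is illustrative; it assumes the existing _raw_spin_trylock(),
spin_is_locked() and cpu_relax() primitives):

/* Called from spin_lock() with preemption already disabled. */
void __preempt_spin_lock(spinlock_t *lock)
{
        do {
                /*
                 * Re-enable preemption while waiting so that a
                 * pending reschedule can run on this CPU.
                 */
                preempt_enable();
                while (spin_is_locked(lock))
                        cpu_relax();
                preempt_disable();
        } while (!_raw_spin_trylock(lock));
}

The lock is only ever acquired via trylock with preemption disabled, so a
waiter never keeps preemption off for longer than one trylock attempt.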
    
    I would have done read_lock() too, but we don't seem to have read_trylock()
    primitives.
    
The patch also shrinks the kernel by 30k, because there is now a single
shared spinning function instead of separate out-of-line spinning code for
each spin_lock() callsite.