    random: optimize add_interrupt_randomness · e8e8a2e4
    Andi Kleen authored
    add_interrupt_randomness() always wakes up code blocking on
    /dev/random. This wake-up is done unconditionally. Unfortunately
    this means all interrupts take the wait-queue spinlock, which can
    be rather expensive on large systems processing lots of interrupts.
    
    We saw about 1% of CPU time spent spinning on this lock with a
    large macro workload running on a large system.
    
    I believe it's a recent regression (?)
    
    Always check if there is a waiter on the wait queue
    before waking up. This check can be done without
    taking a spinlock.
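
    A minimal sketch of the idea (not the exact patch): wq_has_sleeper()
    from <linux/wait.h> issues a memory barrier and then checks
    waitqueue_active(), so the entropy-crediting path only calls
    wake_up_interruptible(), and only then takes the wait-queue spinlock
    in __wake_up_common_lock(), when a reader is actually sleeping.
    The function and parameter names below are illustrative.

        #include <linux/wait.h>

        static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);

        static void maybe_wake_readers(int entropy_bits, int wakeup_bits)
        {
                /*
                 * wq_has_sleeper() does smp_mb() + waitqueue_active(),
                 * pairing with the barrier in prepare_to_wait(), so the
                 * check is safe without taking the wait-queue lock.
                 */
                if (entropy_bits >= wakeup_bits &&
                    wq_has_sleeper(&random_read_wait))
                        wake_up_interruptible(&random_read_wait);
        }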
    
    1.06%         10460  [kernel.vmlinux] [k] native_queued_spin_lock_slowpath
             |
             ---native_queued_spin_lock_slowpath
                |
                 --0.57%--_raw_spin_lock_irqsave
                           |
                            --0.56%--__wake_up_common_lock
                                      credit_entropy_bits
                                      add_interrupt_randomness
                                      handle_irq_event_percpu
                                      handle_irq_event
                                      handle_edge_irq
                                      handle_irq
                                      do_IRQ
                                      common_interrupt
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>