    crypto: mcryptd - protect the per-CPU queue with a lock · 9abffc6f
    Sebastian Andrzej Siewior authored
    mcryptd_enqueue_request() grabs the per-CPU queue struct and protects
    access to it by disabling preemption. It then schedules a worker on
    the same CPU. The worker, mcryptd_queue_worker(), guards access to
    the same per-CPU variable by disabling preemption as well.
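
    Roughly, the pre-fix enqueue path looks like the sketch below. This is
    a minimal reconstruction based on the description above, not the exact
    code: the kcrypto_wq workqueue reference and the tag.cpu assignment
    are assumptions for illustration.

        #include <linux/smp.h>
        #include <linux/workqueue.h>
        #include <crypto/algapi.h>
        #include <crypto/internal/hash.h>
        #include <crypto/mcryptd.h>

        /* Stand-in for the (CPU_INTENSIVE) workqueue mcryptd queues its
         * work on; the actual name is an assumption. */
        extern struct workqueue_struct *kcrypto_wq;

        /*
         * Pre-fix enqueue path: the per-CPU queue is protected only by
         * disabling preemption (get_cpu()/put_cpu()) and the worker is
         * scheduled on the CPU we are currently running on.
         */
        static int mcryptd_enqueue_request(struct mcryptd_queue *queue,
                                           struct crypto_async_request *request,
                                           struct mcryptd_hash_request_ctx *rctx)
        {
                struct mcryptd_cpu_queue *cpu_queue;
                int cpu, err;

                cpu = get_cpu();                /* disables preemption */
                cpu_queue = this_cpu_ptr(queue->cpu_queue);
                rctx->tag.cpu = cpu;

                err = crypto_enqueue_request(&cpu_queue->queue, request);
                /* worker runs on the same CPU -- as long as it stays up */
                queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
                put_cpu();                      /* re-enables preemption */

                return err;
        }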
    
    If we take CPU hotplug into account, it is possible that the CPU goes
    down between queue_work_on() and the actual invocation of the worker,
    in which case the worker is scheduled on _another_ CPU. There the
    preempt_disable() protection no longer works. The easiest fix is to
    add a spin_lock() to guard access to the list.
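
    A minimal sketch of that approach, assuming a spinlock_t is added to
    struct mcryptd_cpu_queue in mcryptd.h (field name and the surrounding
    code as in the sketch above):

        #include <linux/spinlock.h>

        /* The lock serialises the enqueue path and the worker even when
         * the worker ends up on a different CPU after a hotplug event. */
        struct mcryptd_cpu_queue {
                struct crypto_queue queue;
                spinlock_t lock;                /* protects 'queue' */
                struct work_struct work;
        };

        static int mcryptd_enqueue_request(struct mcryptd_queue *queue,
                                           struct crypto_async_request *request,
                                           struct mcryptd_hash_request_ctx *rctx)
        {
                struct mcryptd_cpu_queue *cpu_queue;
                int cpu, err;

                cpu_queue = raw_cpu_ptr(queue->cpu_queue);
                spin_lock(&cpu_queue->lock);
                cpu = smp_processor_id();
                rctx->tag.cpu = cpu;

                err = crypto_enqueue_request(&cpu_queue->queue, request);
                spin_unlock(&cpu_queue->lock);

                queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
                return err;
        }

    The lock would of course have to be initialised with spin_lock_init()
    when the per-CPU queues are set up. Once everything touching the
    crypto_queue does so under cpu_queue->lock, it no longer matters on
    which CPU the worker eventually runs.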
    
    Another detail: mcryptd_queue_worker() does not process more than
    MCRYPTD_BATCH invocations in a row. If there are still items left, it
    invokes queue_work() to continue with the rest later. *I* would
    suggest simply dropping that check, because the code does not use a
    system workqueue and its workqueue is already marked as
    "CPU_INTENSIVE"; if preemption is required, the scheduler should take
    care of it.
    However, if queue_work() is used, the work item is marked as CPU
    unbound. That means it will try to run on the local CPU but may run
    on another CPU as well, especially with CONFIG_DEBUG_WQ_FORCE_RR_CPU=y.
    Again, the preempt_disable() protection does not work here, but the
    lock which was introduced will help.
    In order to keep the work item on the local CPU (and avoid the
    round-robin placement), I changed it to queue_work_on().
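
    On the worker side this ends up looking roughly like the sketch below
    (request completion is heavily simplified; the real loop also handles
    backlog requests and the hash-specific completion paths, and kcrypto_wq
    is again an assumed workqueue name):

        static void mcryptd_queue_worker(struct work_struct *work)
        {
                struct mcryptd_cpu_queue *cpu_queue =
                        container_of(work, struct mcryptd_cpu_queue, work);
                struct crypto_async_request *req;
                int i;

                /* Process at most MCRYPTD_BATCH requests per invocation. */
                for (i = 0; i < MCRYPTD_BATCH; i++) {
                        spin_lock(&cpu_queue->lock);
                        req = crypto_dequeue_request(&cpu_queue->queue);
                        spin_unlock(&cpu_queue->lock);

                        if (!req)
                                return;
                        req->complete(req, 0);  /* simplified completion */
                }

                /* Re-queue leftovers on the local CPU instead of letting
                 * an unbound work item be round-robined elsewhere. */
                if (cpu_queue->queue.qlen)
                        queue_work_on(smp_processor_id(), kcrypto_wq,
                                      &cpu_queue->work);
        }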
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>