Commit e22553e2 authored by Chris Mason, committed by Linus Torvalds

eventfd: don't take the spinlock in eventfd_poll

The spinlock in eventfd_poll is trying to protect the count of events so
it can decide if it should return POLLIN, POLLERR, or POLLOUT.  But
because we only take the lock after calling poll_wait and drop it again
before returning, we have the same pile of races with the lock as we do
with a single read of ctx->count.
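
As a rough userspace sketch of that argument (illustrative only, not kernel
code; snapshot_locked and snapshot_lockless are hypothetical names), both of
the following return a value that can already be stale by the time the caller
inspects it:

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Sample the counter under the lock.  The lock is released before the
 * caller sees the value, so another thread may change count in between. */
static uint64_t snapshot_locked(void)
{
        pthread_mutex_lock(&lock);
        uint64_t c = count;
        pthread_mutex_unlock(&lock);
        return c;
}

/* One lockless read: the same staleness window, without the lock traffic. */
static uint64_t snapshot_lockless(void)
{
        return atomic_load(&count);
}

int main(void)
{
        atomic_store(&count, 3);
        printf("%llu %llu\n",
               (unsigned long long)snapshot_locked(),
               (unsigned long long)snapshot_lockless());
        return 0;
}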

This replaces the lock with a read barrier and single read.

eventfd_write does a single bump of ctx->count, so this should not add
new races with adding events.  eventfd_read is similar; it does a
single decrement with the lock held, so we are only making the race
with concurrent readers slightly larger.
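
A companion sketch of the pattern being relied on (again illustrative:
userspace memory_order_release/acquire stand in for the kernel's locking and
smp_rmb(), and count_add/poll_events are hypothetical names): the writer
changes the counter in one store made under a lock, and the reader makes all
of its readiness decisions from one ordered load.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Writer analogue of the single bump in eventfd_write: the counter only
 * changes by one store made with the lock held, so an unlocked reader can
 * only ever observe the old value or the new one. */
static void count_add(uint64_t n)
{
        pthread_mutex_lock(&lock);
        atomic_store_explicit(&count,
                              atomic_load_explicit(&count, memory_order_relaxed) + n,
                              memory_order_release);
        pthread_mutex_unlock(&lock);
}

/* Reader analogue of the patched poll path: one ordered load, then every
 * readiness bit is computed from that single snapshot. */
static unsigned int poll_events(void)
{
        uint64_t c = atomic_load_explicit(&count, memory_order_acquire);
        unsigned int events = 0;

        if (c > 0)
                events |= 0x001;        /* POLLIN-like  */
        if (c == UINT64_MAX)
                events |= 0x008;        /* POLLERR-like */
        if (UINT64_MAX - 1 > c)
                events |= 0x004;        /* POLLOUT-like */
        return events;
}

int main(void)
{
        count_add(1);
        return (poll_events() & 0x001) ? 0 : 1;
}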

This spinlock is the top CPU user in kernel code during one of our
workloads.  Removing it gives us a ~2% boost.

[arnd@arndb.de: avoid unused variable warning]
[dan.carpenter@oracle.com: type bug in eventfd_poll()]
Signed-off-by: Chris Mason <clm@fb.com>
Cc: Davide Libenzi <davidel@xmailserver.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 7647f14f
@@ -118,18 +118,18 @@ static unsigned int eventfd_poll(struct file *file, poll_table *wait)
 {
        struct eventfd_ctx *ctx = file->private_data;
        unsigned int events = 0;
-       unsigned long flags;
+       u64 count;

        poll_wait(file, &ctx->wqh, wait);
+       smp_rmb();
+       count = ctx->count;

-       spin_lock_irqsave(&ctx->wqh.lock, flags);
-       if (ctx->count > 0)
+       if (count > 0)
                events |= POLLIN;
-       if (ctx->count == ULLONG_MAX)
+       if (count == ULLONG_MAX)
                events |= POLLERR;
-       if (ULLONG_MAX - 1 > ctx->count)
+       if (ULLONG_MAX - 1 > count)
                events |= POLLOUT;
-       spin_unlock_irqrestore(&ctx->wqh.lock, flags);

        return events;
 }
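
For reference, the readiness rules that eventfd_poll computes can be observed
from userspace with eventfd(2) and poll(2).  This small demo (not part of the
commit) expects an empty eventfd to report only POLLOUT and a non-zero count
to report POLLIN and POLLOUT:

#include <sys/eventfd.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int efd = eventfd(0, EFD_NONBLOCK);
        struct pollfd pfd = { .fd = efd, .events = POLLIN | POLLOUT };
        uint64_t val = 1;

        if (efd < 0)
                return 1;

        poll(&pfd, 1, 0);
        printf("count=0: POLLIN=%d POLLOUT=%d\n",
               !!(pfd.revents & POLLIN), !!(pfd.revents & POLLOUT));

        if (write(efd, &val, sizeof(val)) != sizeof(val))
                return 1;

        poll(&pfd, 1, 0);
        printf("count=1: POLLIN=%d POLLOUT=%d\n",
               !!(pfd.revents & POLLIN), !!(pfd.revents & POLLOUT));

        close(efd);
        return 0;
}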