Commit 973aa818 authored by Andy Lutomirski, committed by Thomas Gleixner

x86-64: Optimize vDSO time()

This function just reads a 64-bit variable that's updated
atomically, so we don't need any locks.
Signed-off-by: Andy Lutomirski <luto@mit.edu>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Borislav Petkov <bp@amd64.org>
Link: http://lkml.kernel.org/r/%3C40e2700f8cda4d511e5910be1e633025d28b36c2.1306156808.git.luto%40mit.edu%3E
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent f144a6b4
@@ -180,12 +180,8 @@ notrace time_t __vdso_time(time_t *t)
 	if (unlikely(!VVAR(vsyscall_gtod_data).sysctl_enabled))
 		return time_syscall(t);
-	do {
-		seq = read_seqbegin(&VVAR(vsyscall_gtod_data).lock);
-		result = VVAR(vsyscall_gtod_data).wall_time_sec;
-	} while (read_seqretry(&VVAR(vsyscall_gtod_data).lock, seq));
+	/* This is atomic on x86_64 so we don't need any locks. */
+	result = ACCESS_ONCE(VVAR(vsyscall_gtod_data).wall_time_sec);
 	if (t)
 		*t = result;