Commit d32cdbfb authored by Peter Zijlstra, committed by Ingo Molnar

locking/lglock: Remove lglock implementation

It is now unused; remove it before someone else thinks it's a good idea
to use this.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent e6253970
lglock - local/global locks for mostly local access patterns
------------------------------------------------------------
Origin: Nick Piggin's VFS scalability series introduced during
2.6.35++ [1] [2]
Location: kernel/locking/lglock.c
include/linux/lglock.h
Users: currently only the VFS and stop_machine related code
Design Goal:
------------
Improve scalability of globally used large data sets that are
distributed over all CPUs as per_cpu elements.
lglock is used to manage global data structures that are partitioned
over all CPUs as per_cpu elements but can mostly be handled by CPU-local
actions: the majority of accesses are CPU-local reads and occasional
CPU-local writes, with very infrequent global write access.
* deal with things locally whenever possible
    - very fast access to the local per_cpu data
    - reasonably fast access to specific per_cpu data on a different
      CPU
* while making global action possible when needed
    - by expensive access to all CPUs' locks - effectively
      resulting in a globally visible critical section.
Design:
-------
Basically it is an array of per_cpu spinlocks; lg_local_lock/unlock
access the local CPU's lock object and lg_local_lock_cpu/unlock_cpu
access a remote CPU's lock object. lg_local_lock has to disable
preemption as migration protection, so that the reference to the local
CPU's lock does not go out of scope.
Because lg_local_lock/unlock only touch CPU-local resources, they are
fast. Taking the local lock of a different CPU is more expensive but
still relatively cheap.
One can relax the migration constraints by acquiring the current CPU's
lock with lg_local_lock_cpu, remembering the cpu, and releasing that
lock at the end of the critical section even if migrated. This should
give most of the performance benefits without inhibiting migration,
though it needs careful consideration of lglock nesting and of
deadlocks with lg_global_lock.
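A minimal sketch of this relaxed pattern (hedged: my_lglock is a
hypothetical lock name used for illustration only, not an in-tree
user):

    int cpu = raw_smp_processor_id();  /* remember the current CPU */

    lg_local_lock_cpu(&my_lglock, cpu);
    /*
     * Critical section: even if the task migrates meanwhile (e.g. on
     * -RT, where preemption is not fully disabled), we release the
     * lock we actually took, identified by the remembered 'cpu',
     * rather than by whatever CPU we run on at unlock time.
     */
    lg_local_unlock_cpu(&my_lglock, cpu);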
The lg_global_lock/unlock locks all underlying spinlocks of all
possible CPUs (including those off-line). The preemption disable/enable
is needed in the non-RT kernels to prevent deadlocks like:
    on cpu 1

    task A                          task B
    lg_global_lock
      got cpu 0 lock
          <<<< preempt <<<<
                                    lg_local_lock_cpu for cpu 0
                                      spin on cpu 0 lock
On -RT this deadlock scenario does not arise because the
arch_spin_locks in the lglocks are replaced by rt_mutexes, which
resolve the above situation by boosting the lock-holder.
Implementation:
---------------
The initial lglock implementation from Nick Piggin used some complex
macros to generate the lglock/brlock in lglock.h - they were later
turned into a set of functions by Andi Kleen [7]. The change to
functions was motivated by the presence of multiple lock users and by
functions being easier to maintain than the generating macros. This
change to functions is also the basis for eliminating the restriction
that lglocks cannot be initialized in kernel modules (the remaining
problem is that the locks are not explicitly initialized - see
lockdep-design.txt).
Declaration and initialization:
-------------------------------
    #include <linux/lglock.h>

    DEFINE_LGLOCK(name)
or:
    DEFINE_STATIC_LGLOCK(name);

    lg_lock_init(&name, "lockdep_name_string");

On UP this is mapped to DEFINE_SPINLOCK(name) in both cases. Note also
that as of 3.18-rc6 all declarations in use are of the _STATIC_ variant
(and it seems that the non-static variant was never in use).
lg_lock_init initializes the lockdep map only.
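A hedged declaration/initialization sketch (the lock name is modeled
on the VFS's files_lglock; the init function name is hypothetical):

    #include <linux/lglock.h>

    /* static definition, as all in-tree users employ */
    DEFINE_STATIC_LGLOCK(files_lglock);

    static int __init my_subsys_init(void)
    {
        /*
         * Only sets up the lockdep map; the underlying spinlocks are
         * already initialized by the DEFINE_STATIC_LGLOCK macro.
         */
        lg_lock_init(&files_lglock, "files_lglock");
        return 0;
    }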
Usage:
------
In terms of locking semantics it is a spinlock; it could be called a
locality-aware spinlock. lg_local_* behaves like a per_cpu spinlock and
lg_global_* like a global spinlock. No surprises in the API.
    lg_local_lock(*lglock);
        access to protected per_cpu object on this CPU
    lg_local_unlock(*lglock);

    lg_local_lock_cpu(*lglock, cpu);
        access to protected per_cpu object on other CPU cpu
    lg_local_unlock_cpu(*lglock, cpu);

    lg_global_lock(*lglock);
        access all protected per_cpu objects on all CPUs
    lg_global_unlock(*lglock);
There are no _trylock variants of the lglocks.
Note that lg_global_lock/unlock has to iterate over all possible CPUs
rather than only the actually present CPUs - otherwise a CPU could go
off-line with a held lock [4] - and that makes it very expensive. A
discussion of these issues can be found at [5].
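To make the semantics concrete, here is a hedged usage sketch: a
per-CPU counter protected by an lglock. The names counter_lglock and
my_counter are hypothetical:

    #include <linux/lglock.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, my_counter);
    DEFINE_STATIC_LGLOCK(counter_lglock);
    /* lg_lock_init(&counter_lglock, "counter_lglock") sets up lockdep */

    static void count_local(void)
    {
        lg_local_lock(&counter_lglock);   /* fast: this CPU's lock only */
        this_cpu_inc(my_counter);
        lg_local_unlock(&counter_lglock);
    }

    static unsigned long count_sum(void)
    {
        unsigned long sum = 0;
        int cpu;

        lg_global_lock(&counter_lglock);  /* slow: takes every CPU's lock */
        for_each_possible_cpu(cpu)
            sum += per_cpu(my_counter, cpu);
        lg_global_unlock(&counter_lglock);
        return sum;
    }

The global side (count_sum) iterates over all possible CPUs, matching
the locking granularity described above.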
Constraints:
------------
* currently the declaration of lglocks in kernel modules is not
possible, though this should be doable with little change.
* lglocks are not recursive.
* suitable for code that can do most operations on the CPU local
data and will very rarely need the global lock
* lg_global_lock/unlock is *very* expensive and does not scale
* on UP systems all lg_* primitives are simply spinlocks
* in PREEMPT_RT the spinlock becomes an rt-mutex and can sleep but
does not change the task's state while sleeping [6].
* in PREEMPT_RT the preempt_disable/enable in lg_local_lock/unlock
is downgraded to a migrate_disable/enable; the other
preempt_disable/enable pairs are downgraded to barriers [6].
The deadlock noted for non-RT above is resolved because rt_mutexes
boost the lock-holder in this case, which arch_spin_locks do not do.
lglocks were designed for very specific problems in the VFS and are
probably only the right answer in those corner cases. Any new user
considering lglocks probably wants to look at the seqlock and RCU
alternatives first. There are also efforts to resolve the RCU issues
that currently prevent using RCU in place of the few remaining lglocks.
Note on brlock history:
-----------------------
The 'Big Reader' read-write spinlocks were originally introduced by
Ingo Molnar in 2000 (2.4/2.5 kernel series) and removed in 2003. They
were later reintroduced by the VFS scalability patch set in the 2.6
series as the "big reader lock" brlock [2] variant of lglock, which was
subsequently replaced by seqlock primitives or by RCU based primitives,
as had been suggested in [3] in 2003. The brlock was entirely removed
in the 3.13 kernel series.
Link: 1 http://lkml.org/lkml/2010/8/2/81
Link: 2 http://lwn.net/Articles/401738/
Link: 3 http://lkml.org/lkml/2003/3/9/205
Link: 4 https://lkml.org/lkml/2011/8/24/185
Link: 5 http://lkml.org/lkml/2011/12/18/189
Link: 6 https://www.kernel.org/pub/linux/kernel/projects/rt/
patch series - lglocks-rt.patch.patch
Link: 7 http://lkml.org/lkml/2012/3/5/26
/*
 * Specialised local-global spinlock. Can only be declared as global variables
 * to avoid overhead and keep things simple (and we don't want to start using
 * these inside dynamically allocated structures).
 *
 * "local/global locks" (lglocks) can be used to:
 *
 * - Provide fast exclusive access to per-CPU data, with exclusive access to
 *   another CPU's data allowed but possibly subject to contention, and to
 *   provide very slow exclusive access to all per-CPU data.
 * - Or to provide very fast and scalable read serialisation, and to provide
 *   very slow exclusive serialisation of data (not necessarily per-CPU data).
 *
 * Brlocks are also implemented as a short-hand notation for the latter use
 * case.
 *
 * Copyright 2009, 2010, Nick Piggin, Novell Inc.
 */
#ifndef __LINUX_LGLOCK_H
#define __LINUX_LGLOCK_H
#include <linux/spinlock.h>
#include <linux/lockdep.h>
#include <linux/percpu.h>
#include <linux/cpu.h>
#include <linux/notifier.h>
#ifdef CONFIG_SMP
#ifdef CONFIG_DEBUG_LOCK_ALLOC
#define LOCKDEP_INIT_MAP lockdep_init_map
#else
#define LOCKDEP_INIT_MAP(a, b, c, d)
#endif
struct lglock {
	arch_spinlock_t __percpu *lock;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lock_class_key lock_key;
	struct lockdep_map lock_dep_map;
#endif
};

#define DEFINE_LGLOCK(name)						\
	static DEFINE_PER_CPU(arch_spinlock_t, name ## _lock)		\
	= __ARCH_SPIN_LOCK_UNLOCKED;					\
	struct lglock name = { .lock = &name ## _lock }

#define DEFINE_STATIC_LGLOCK(name)					\
	static DEFINE_PER_CPU(arch_spinlock_t, name ## _lock)		\
	= __ARCH_SPIN_LOCK_UNLOCKED;					\
	static struct lglock name = { .lock = &name ## _lock }
void lg_lock_init(struct lglock *lg, char *name);
void lg_local_lock(struct lglock *lg);
void lg_local_unlock(struct lglock *lg);
void lg_local_lock_cpu(struct lglock *lg, int cpu);
void lg_local_unlock_cpu(struct lglock *lg, int cpu);
void lg_global_lock(struct lglock *lg);
void lg_global_unlock(struct lglock *lg);
#else
/* When !CONFIG_SMP, map lglock to spinlock */
#define lglock spinlock
#define DEFINE_LGLOCK(name) DEFINE_SPINLOCK(name)
#define DEFINE_STATIC_LGLOCK(name) static DEFINE_SPINLOCK(name)
#define lg_lock_init(lg, name) spin_lock_init(lg)
#define lg_local_lock spin_lock
#define lg_local_unlock spin_unlock
#define lg_local_lock_cpu(lg, cpu) spin_lock(lg)
#define lg_local_unlock_cpu(lg, cpu) spin_unlock(lg)
#define lg_global_lock spin_lock
#define lg_global_unlock spin_unlock
#endif
#endif
@@ -18,7 +18,6 @@ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
 endif
 obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_LOCK_SPIN_ON_OWNER) += osq_lock.o
-obj-$(CONFIG_SMP) += lglock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
 obj-$(CONFIG_QUEUED_SPINLOCKS) += qspinlock.o
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
/* See include/linux/lglock.h for description */
#include <linux/module.h>
#include <linux/lglock.h>
#include <linux/cpu.h>
#include <linux/string.h>
/*
 * Note there is no uninit, so lglocks cannot be defined in
 * modules (but it's fine to use them from there)
 * Could be added though, just undo lg_lock_init
 */

void lg_lock_init(struct lglock *lg, char *name)
{
	LOCKDEP_INIT_MAP(&lg->lock_dep_map, name, &lg->lock_key, 0);
}
EXPORT_SYMBOL(lg_lock_init);

void lg_local_lock(struct lglock *lg)
{
	arch_spinlock_t *lock;

	preempt_disable();
	lock_acquire_shared(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
	lock = this_cpu_ptr(lg->lock);
	arch_spin_lock(lock);
}
EXPORT_SYMBOL(lg_local_lock);

void lg_local_unlock(struct lglock *lg)
{
	arch_spinlock_t *lock;

	lock_release(&lg->lock_dep_map, 1, _RET_IP_);
	lock = this_cpu_ptr(lg->lock);
	arch_spin_unlock(lock);
	preempt_enable();
}
EXPORT_SYMBOL(lg_local_unlock);

void lg_local_lock_cpu(struct lglock *lg, int cpu)
{
	arch_spinlock_t *lock;

	preempt_disable();
	lock_acquire_shared(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
	lock = per_cpu_ptr(lg->lock, cpu);
	arch_spin_lock(lock);
}
EXPORT_SYMBOL(lg_local_lock_cpu);

void lg_local_unlock_cpu(struct lglock *lg, int cpu)
{
	arch_spinlock_t *lock;

	lock_release(&lg->lock_dep_map, 1, _RET_IP_);
	lock = per_cpu_ptr(lg->lock, cpu);
	arch_spin_unlock(lock);
	preempt_enable();
}
EXPORT_SYMBOL(lg_local_unlock_cpu);

void lg_global_lock(struct lglock *lg)
{
	int i;

	preempt_disable();
	lock_acquire_exclusive(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
	for_each_possible_cpu(i) {
		arch_spinlock_t *lock;
		lock = per_cpu_ptr(lg->lock, i);
		arch_spin_lock(lock);
	}
}
EXPORT_SYMBOL(lg_global_lock);

void lg_global_unlock(struct lglock *lg)
{
	int i;

	lock_release(&lg->lock_dep_map, 1, _RET_IP_);
	for_each_possible_cpu(i) {
		arch_spinlock_t *lock;
		lock = per_cpu_ptr(lg->lock, i);
		arch_spin_unlock(lock);
	}
	preempt_enable();
}
EXPORT_SYMBOL(lg_global_unlock);