Commit 3a33c760 authored by Will Deacon

arm64: context: Fix comments and remove pointless smp_wmb()

The comments in the ASID allocator incorrectly hint at an MP-style idiom
using the asid_generation and the active_asids array. In fact, the
synchronisation is achieved using a combination of an xchg operation
and a spinlock, so update the comments and remove the pointless smp_wmb().

Cc: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
parent 770ba060
@@ -96,12 +96,6 @@ static void flush_context(unsigned int cpu)

 	set_reserved_asid_bits();

-	/*
-	 * Ensure the generation bump is observed before we xchg the
-	 * active_asids.
-	 */
-	smp_wmb();
-
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
 		/*
@@ -205,11 +199,18 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	asid = atomic64_read(&mm->context.id);

 	/*
-	 * The memory ordering here is subtle. We rely on the control
-	 * dependency between the generation read and the update of
-	 * active_asids to ensure that we are synchronised with a
-	 * parallel rollover (i.e. this pairs with the smp_wmb() in
-	 * flush_context).
+	 * The memory ordering here is subtle.
+	 * If our ASID matches the current generation, then we update
+	 * our active_asids entry with a relaxed xchg. Racing with a
+	 * concurrent rollover means that either:
+	 *
+	 * - We get a zero back from the xchg and end up waiting on the
+	 *   lock. Taking the lock synchronises with the rollover and so
+	 *   we are forced to see the updated generation.
+	 *
+	 * - We get a valid ASID back from the xchg, which means the
+	 *   relaxed xchg in flush_context will treat us as reserved
+	 *   because atomic RmWs are totally ordered for a given location.
 	 */
 	if (!((asid ^ atomic64_read(&asid_generation)) >> asid_bits)
 	    && atomic64_xchg_relaxed(&per_cpu(active_asids, cpu), asid))
...