Commit 23fb3b17 authored by Andrew Jones, committed by Paolo Bonzini

arm/arm64: spinlocks: fix memory barriers



It shouldn't be necessary to use a barrier on the way into
spin_lock. We'll be focused on a single address until we get
it (exclusively) set, and then we'll do a barrier on the way
out. Also, it does make sense to do a barrier on the way into
spin_unlock, i.e. ensure what we did in the critical section
is ordered with respect to what we do outside it, before we
announce that we're outside.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent cb3c62d6
@@ -7,10 +7,9 @@ void spin_lock(struct spinlock *lock)
 {
 	u32 val, fail;
 
-	dmb();
-
 	if (!mmu_enabled()) {
 		lock->v = 1;
+		smp_mb();
 		return;
 	}
@@ -25,11 +24,12 @@ void spin_lock(struct spinlock *lock)
 		: "r" (&lock->v)
 		: "cc" );
 	} while (fail);
-	dmb();
+
+	smp_mb();
 }
 
 void spin_unlock(struct spinlock *lock)
 {
+	smp_mb();
 	lock->v = 0;
-	dmb();
 }
@@ -13,10 +13,9 @@ void spin_lock(struct spinlock *lock)
 {
 	u32 val, fail;
 
-	smp_mb();
-
 	if (!mmu_enabled()) {
 		lock->v = 1;
+		smp_mb();
 		return;
 	}
@@ -35,9 +34,9 @@ void spin_lock(struct spinlock *lock)
 
 void spin_unlock(struct spinlock *lock)
 {
+	smp_mb();
 	if (mmu_enabled())
 		asm volatile("stlrh wzr, [%0]" :: "r" (&lock->v));
 	else
 		lock->v = 0;
-	smp_mb();
 }