Commit e5380f6d authored by Sean Christopherson, committed by Paolo Bonzini

KVM: SVM: Hide SEV migration lockdep goo behind CONFIG_PROVE_LOCKING

Wrap the manipulation of @role and the manual mutex_{release,acquire}()
invocations in CONFIG_PROVE_LOCKING=y to squash a clang-15 warning.  When
building with -Wunused-but-set-parameter and CONFIG_DEBUG_LOCK_ALLOC=n,
clang-15 sees there's no usage of @role in mutex_lock_killable_nested()
and yells.  PROVE_LOCKING selects DEBUG_LOCK_ALLOC, and the only reason
KVM manipulates @role is to make PROVE_LOCKING happy.
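
For context, the relevant wiring in include/linux/mutex.h looks roughly like
this (paraphrased; exact prototypes vary by kernel version).  With
CONFIG_DEBUG_LOCK_ALLOC=n the _nested() variant collapses to the plain call
and the subclass argument, i.e. @role, is never evaluated:

  #ifdef CONFIG_DEBUG_LOCK_ALLOC
  extern int __must_check mutex_lock_killable_nested(struct mutex *lock,
  						     unsigned int subclass);
  #else
  /* The subclass (@role in KVM's case) is dropped entirely. */
  #define mutex_lock_killable_nested(lock, subclass)	mutex_lock_killable(lock)
  #endif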

To avoid true ugliness, use "i" and "j" to detect the first pass in the
loops; the "idx" field that's used by kvm_for_each_vcpu() is guaranteed
to be '0' on the first pass as it's simply the first entry in the vCPUs
XArray, which is fully KVM controlled.  kvm_for_each_vcpu() passes '0'
for xa_for_each_range()'s "start", and xa_for_each_range() will not enter
the loop if there's no entry at '0'.
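
For reference, kvm_for_each_vcpu() is a thin wrapper around
xa_for_each_range() (paraphrased from include/linux/kvm_host.h; exact form
may vary by kernel version), which is why the iterator starts at '0' whenever
the loop is entered at all:

  #define kvm_for_each_vcpu(idx, vcpup, kvm)			\
  	xa_for_each_range(&kvm->vcpu_array, idx, vcpup, 0,	\
  			  (atomic_read(&kvm->online_vcpus) - 1))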

Fixes: 0c2c7c06 ("KVM: SEV: Mark nested locking of vcpu->lock")
Reported-by: kernel test robot <lkp@intel.com>
Cc: Peter Gonda <pgonda@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220613214237.2538266-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 5bdae49f
@@ -1606,38 +1606,35 @@ static int sev_lock_vcpus_for_migration(struct kvm *kvm,
 {
 	struct kvm_vcpu *vcpu;
 	unsigned long i, j;
-	bool first = true;
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		if (mutex_lock_killable_nested(&vcpu->mutex, role))
 			goto out_unlock;
 
-		if (first) {
+#ifdef CONFIG_PROVE_LOCKING
+		if (!i)
 			/*
 			 * Reset the role to one that avoids colliding with
 			 * the role used for the first vcpu mutex.
 			 */
 			role = SEV_NR_MIGRATION_ROLES;
-			first = false;
-		} else {
+		else
 			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
-		}
+#endif
 	}
 
 	return 0;
 
 out_unlock:
-	first = true;
 	kvm_for_each_vcpu(j, vcpu, kvm) {
 		if (i == j)
 			break;
 
-		if (first)
-			first = false;
-		else
+#ifdef CONFIG_PROVE_LOCKING
+		if (j)
 			mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
+#endif
 
 		mutex_unlock(&vcpu->mutex);
 	}
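
For illustration only (a sketch, not part of the patch): with
CONFIG_PROVE_LOCKING=n the preprocessor drops the lockdep-only branches, and
the two loops reduce to plain lock/unlock passes:

  kvm_for_each_vcpu(i, vcpu, kvm) {
  	if (mutex_lock_killable_nested(&vcpu->mutex, role))
  		goto out_unlock;
  }

  return 0;

  out_unlock:
  kvm_for_each_vcpu(j, vcpu, kvm) {
  	if (i == j)
  		break;

  	mutex_unlock(&vcpu->mutex);
  }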