
Commit a355aa54 authored by Paul Mackerras, committed by Avi Kivity

KVM: Add barriers to allow mmu_notifier_retry to be used locklessly



This adds an smp_wmb in kvm_mmu_notifier_invalidate_range_end() and an
smp_rmb in mmu_notifier_retry() so that mmu_notifier_retry() will give
the correct answer when called without kvm->mmu_lock being held.
PowerPC Book3S HV KVM wants to use a bitlock per guest page rather than
a single global spinlock in order to improve the scalability of updates
to the guest MMU hashed page table, and so needs this.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
parent 342d3db7
+9 −5
@@ -702,12 +702,16 @@ static inline int mmu_notifier_retry(struct kvm_vcpu *vcpu, unsigned long mmu_se
 	if (unlikely(vcpu->kvm->mmu_notifier_count))
 		return 1;
 	/*
-	 * Both reads happen under the mmu_lock and both values are
-	 * modified under mmu_lock, so there's no need of smb_rmb()
-	 * here in between, otherwise mmu_notifier_count should be
-	 * read before mmu_notifier_seq, see
-	 * mmu_notifier_invalidate_range_end write side.
+	 * Ensure the read of mmu_notifier_count happens before the read
+	 * of mmu_notifier_seq.  This interacts with the smp_wmb() in
+	 * mmu_notifier_invalidate_range_end to make sure that the caller
+	 * either sees the old (non-zero) value of mmu_notifier_count or
+	 * the new (incremented) value of mmu_notifier_seq.
+	 * PowerPC Book3s HV KVM calls this under a per-page lock
+	 * rather than under kvm->mmu_lock, for scalability, so
+	 * can't rely on kvm->mmu_lock to keep things ordered.
 	 */
+	smp_rmb();
 	if (vcpu->kvm->mmu_notifier_seq != mmu_seq)
 		return 1;
 	return 0;
+3 −3
@@ -357,11 +357,11 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 	 * been freed.
 	 */
 	kvm->mmu_notifier_seq++;
+	smp_wmb();
 	/*
 	 * The above sequence increase must be visible before the
-	 * below count decrease but both values are read by the kvm
-	 * page fault under mmu_lock spinlock so we don't need to add
-	 * a smb_wmb() here in between the two.
+	 * below count decrease, which is ensured by the smp_wmb above
+	 * in conjunction with the smp_rmb in mmu_notifier_retry().
 	 */
 	kvm->mmu_notifier_count--;
 	spin_unlock(&kvm->mmu_lock);