
Commit fbb158cb authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86/mmu: Revert "Revert "KVM: MMU: zap pages in batch""



Now that the fast invalidate mechanism has been reintroduced, restore
the performance tweaks for fast invalidation that existed prior to its
removal.

Paraphrasing the original changelog:

  Zap at least 10 shadow pages before releasing mmu_lock to reduce the
  overhead associated with re-acquiring the lock.

  Note: "10" is an arbitrary number, speculated to be high enough so
  that a vCPU isn't stuck zapping obsolete pages for an extended period,
  but small enough so that other vCPUs aren't starved waiting for
  mmu_lock.

This reverts commit 43d2b14b.
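
To restate the mechanism the patch restores: the zap loop only considers dropping mmu_lock after it has zapped at least BATCH_ZAP_PAGES pages since the last yield. Below is a minimal user-space sketch of that batch-before-yield pattern, purely illustrative and not the actual KVM code (which follows in the diff); zap_one_page() and lock_contended() are made-up stand-ins, not KVM APIs.

  #include <stdbool.h>
  #include <stdio.h>

  #define BATCH_ZAP_PAGES 10

  /* Stand-in for need_resched() || spin_needbreak(): pretend another
   * task wants the lock on every third check. */
  static bool lock_contended(void)
  {
      static int calls;
      return (++calls % 3) == 0;
  }

  /* Stand-in for __kvm_mmu_prepare_zap_page(); returns pages zapped. */
  static int zap_one_page(int page)
  {
      printf("zapping page %d\n", page);
      return 1;
  }

  int main(void)
  {
      int batch = 0;

      for (int page = 0; page < 40; page++) {
          /* Only consider yielding after a full batch, so the loop is
           * not dominated by lock release/re-acquire overhead. */
          if (batch >= BATCH_ZAP_PAGES && lock_contended()) {
              printf("yielding lock after %d pages\n", batch);
              batch = 0;
          }
          batch += zap_one_page(page);
      }
      return 0;
  }

The threshold trades time spent holding the lock (other vCPUs waiting on mmu_lock) against release/re-acquire churn, which is why the changelog describes 10 as an arbitrary but workable middle ground.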

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 14a3c4f4
+9 −26
@@ -5671,12 +5671,12 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-
+#define BATCH_ZAP_PAGES	10
 static void kvm_zap_obsolete_pages(struct kvm *kvm)
 {
 	struct kvm_mmu_page *sp, *node;
 	LIST_HEAD(invalid_list);
-	int ign;
+	int nr_zapped, batch = 0;
 
 restart:
 	list_for_each_entry_safe_reverse(sp, node,
@@ -5689,28 +5689,6 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 			break;
 
 		/*
-		 * Do not repeatedly zap a root page to avoid unnecessary
-		 * KVM_REQ_MMU_RELOAD, otherwise we may not be able to
-		 * progress:
-		 *    vcpu 0                        vcpu 1
-		 *                         call vcpu_enter_guest():
-		 *                            1): handle KVM_REQ_MMU_RELOAD
-		 *                                and require mmu-lock to
-		 *                                load mmu
-		 * repeat:
-		 *    1): zap root page and
-		 *        send KVM_REQ_MMU_RELOAD
-		 *
-		 *    2): if (cond_resched_lock(mmu-lock))
-		 *
-		 *                            2): hold mmu-lock and load mmu
-		 *
-		 *                            3): see KVM_REQ_MMU_RELOAD bit
-		 *                                on vcpu->requests is set
-		 *                                then return 1 to call
-		 *                                vcpu_enter_guest() again.
-		 *            goto repeat;
-		 *
 		 * Since we are reversely walking the list and the invalid
 		 * list will be moved to the head, skip the invalid page
 		 * can help us to avoid the infinity list walking.
@@ -5718,15 +5696,20 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 		if (sp->role.invalid)
 			continue;
 
-		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+		if (batch >= BATCH_ZAP_PAGES &&
+		    (need_resched() || spin_needbreak(&kvm->mmu_lock))) {
+			batch = 0;
 			kvm_mmu_commit_zap_page(kvm, &invalid_list);
 			cond_resched_lock(&kvm->mmu_lock);
 			goto restart;
 		}
 
-		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign))
+		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list,
+					       &nr_zapped)) {
+			batch += nr_zapped;
 			goto restart;
+		}
 	}
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 }