
Commit d162f30a authored by Junaid Shahid, committed by Paolo Bonzini

kvm: x86: mmu: Move pgtbl walk inside retry loop in fast_page_fault



Redo the page table walk in fast_page_fault when retrying so that we are
working on the latest PTE even if the hierarchy changes.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 20d65236
+5 −5
@@ -3088,14 +3088,16 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 		return false;
 
 	walk_shadow_page_lockless_begin(vcpu);
-	for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
-		if (!is_shadow_present_pte(spte) || iterator.level < level)
-			break;
 
 	do {
 		bool remove_write_prot = false;
 		bool remove_acc_track;
 
+		for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
+			if (!is_shadow_present_pte(spte) ||
+			    iterator.level < level)
+				break;
+
 		sp = page_header(__pa(iterator.sptep));
 		if (!is_last_spte(spte, sp->role.level))
 			break;
@@ -3176,8 +3178,6 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 			break;
 		}
 
-		spte = mmu_spte_get_lockless(iterator.sptep);
-
 	} while (true);
 
 	trace_fast_page_fault(vcpu, gva, error_code, iterator.sptep,