
Commit 1ea11b16 authored by Sean Christopherson, committed by Greg Kroah-Hartman

KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages



commit e89505698c9f70125651060547da4ff5046124fc upstream.

Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock() when rescheduling, it's
possible that lpage_disallowed_mmu_pages could be emptied by a different
thread without to_zap reaching zero despite to_zap being derived from
the number of disallowed lpages.

Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid <junaids@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200923183735.584-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent a4f751c4
@@ -6225,6 +6225,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 				cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);
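
For context, below is a condensed reading of kvm_recover_nx_lpages() with this fix applied. It is abbreviated from the upstream function (WARN_ON_ONCE checks and some locals trimmed), so treat it as a sketch rather than verbatim source. Inside the loop, invalid_list is only committed on the pass where the thread decides to reschedule or where to_zap reaches zero; if lpage_disallowed_mmu_pages drains first, the loop can exit with pages prepared but not yet committed, which is exactly what the trailing call added by this patch handles.

static void kvm_recover_nx_lpages(struct kvm *kvm)
{
	struct kvm_mmu_page *sp;
	LIST_HEAD(invalid_list);
	unsigned int ratio;
	ulong to_zap;
	int rcu_idx;

	rcu_idx = srcu_read_lock(&kvm->srcu);
	spin_lock(&kvm->mmu_lock);

	/* The zap budget is derived from the current number of disallowed lpages. */
	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;

	while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
		sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
				      struct kvm_mmu_page,
				      lpage_disallowed_link);
		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

		if (!--to_zap || need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			if (to_zap)
				/*
				 * Drops and retakes mmu_lock; other threads
				 * can remove entries while the lock is
				 * dropped, so the list may drain before
				 * to_zap reaches zero.
				 */
				cond_resched_lock(&kvm->mmu_lock);
		}
	}
	/* Added by this patch: flush anything still queued when the loop exits early. */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);

	spin_unlock(&kvm->mmu_lock);
	srcu_read_unlock(&kvm->srcu, rcu_idx);
}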