
Commit 9d1beefb authored by Takuya Yoshikawa, committed by Gleb Natapov

KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself



It is better to place the mmu_lock handling and the TLB flushing code together, since kvm_mmu_slot_remove_write_access() is a self-contained function.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
parent b34cb590
+3 −0
@@ -4173,6 +4173,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 	memslot = id_to_memslot(kvm->memslots, slot);
 	last_gfn = memslot->base_gfn + memslot->npages - 1;
 
+	spin_lock(&kvm->mmu_lock);
+
 	for (i = PT_PAGE_TABLE_LEVEL;
 	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
 		unsigned long *rmapp;
@@ -4188,6 +4190,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
 	}
 
 	kvm_flush_remote_tlbs(kvm);
+	spin_unlock(&kvm->mmu_lock);
 }
 
 void kvm_mmu_zap_all(struct kvm *kvm)
+1 −4
@@ -6899,11 +6899,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * Existing largepage mappings are destroyed here and new ones will
 	 * not be created until the end of the logging.
 	 */
-	if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
-		spin_lock(&kvm->mmu_lock);
+	if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
 		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
-		spin_unlock(&kvm->mmu_lock);
-	}
	/*
	 * If memory slot is created, or moved, we need to clear all
	 * mmio sptes.