
Commit 6dbf79e7 authored by Takuya Yoshikawa, committed by Avi Kivity

KVM: Fix write protection race during dirty logging



This patch fixes a race introduced by:

  commit 95d4c16c
  KVM: Optimize dirty logging by rmap_write_protect()

While get_dirty_log is protecting pages for dirty logging, other threads
may also try to write protect a page, in mmu_sync_children() or
kvm_mmu_get_page().

In such a case, because get_dirty_log releases mmu_lock before flushing
TLBs, the following race can occur:

  A (get_dirty_log)     B (another thread)

  lock(mmu_lock)
  clear pte.w
  unlock(mmu_lock)
                        lock(mmu_lock)
                        pte.w is already cleared
                        unlock(mmu_lock)
                        skip TLB flush
                        return
  ...
  TLB flush

Though thread B assumes the page has already been protected when it
returns, the remaining TLB entry will break that assumption.

This patch fixes the problem by making get_dirty_log hold mmu_lock
until after it flushes the TLBs.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
parent 10166744
+5 −6
@@ -3065,6 +3065,8 @@ static void write_protect_slot(struct kvm *kvm,
 			       unsigned long *dirty_bitmap,
 			       unsigned long nr_dirty_pages)
 {
+	spin_lock(&kvm->mmu_lock);
+
 	/* Not many dirty pages compared to # of shadow pages. */
 	if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
 		unsigned long gfn_offset;
@@ -3072,17 +3074,14 @@ static void write_protect_slot(struct kvm *kvm,
 		for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
 			unsigned long gfn = memslot->base_gfn + gfn_offset;
 
-			spin_lock(&kvm->mmu_lock);
 			kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
-			spin_unlock(&kvm->mmu_lock);
 		}
 		kvm_flush_remote_tlbs(kvm);
-	} else {
-		spin_lock(&kvm->mmu_lock);
+	} else
 		kvm_mmu_slot_remove_write_access(kvm, memslot->id);
-		spin_unlock(&kvm->mmu_lock);
-	}
+
+	spin_unlock(&kvm->mmu_lock);
 }
 
 /*
  * Get (and clear) the dirty memory log for a memory slot.