
Commit 53c810c3 authored by Mario Smarduch, committed by Christoffer Dall

KVM: arm: dirty logging write protect support



Add support for tracking dirty pages between user space KVM_GET_DIRTY_LOG ioctl
calls. We call the kvm_get_dirty_log_protect() function to do most of the work.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
parent c6473555
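For context (not part of this commit): user space retrieves the dirty bitmap for a memory slot with the KVM_GET_DIRTY_LOG ioctl, and each call returns the pages dirtied since the previous one. A minimal sketch, assuming vm_fd, slot_id and bitmap come from earlier KVM_CREATE_VM / KVM_SET_USER_MEMORY_REGION setup with KVM_MEM_LOG_DIRTY_PAGES enabled; read_dirty_log is a hypothetical helper:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Hypothetical helper: fetch the pages dirtied since the previous call. */
int read_dirty_log(int vm_fd, int slot_id, void *bitmap)
{
	struct kvm_dirty_log log = {
		.slot = slot_id,	/* memory slot to query */
		.dirty_bitmap = bitmap,	/* one bit per page in the slot */
	};

	/* The kernel snapshots and clears the log, write protecting the
	 * pages that were dirty so future writes fault and get re-marked. */
	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}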
arch/arm/kvm/Kconfig +1 −0
@@ -24,6 +24,7 @@ config KVM
	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
	select KVM_MMIO
	select KVM_ARM_HOST
	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
	depends on ARM_VIRT_EXT && ARM_LPAE
	---help---
	  Support hosting virtualized guest machines. You will also
arch/arm/kvm/arm.c +34 −0
@@ -787,9 +787,43 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
	}
}

/**
 * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
 * @kvm: kvm instance
 * @log: slot id and address to which we copy the log
 *
 * Steps 1-4 below provide a general overview of dirty page logging. See the
 * kvm_get_dirty_log_protect() function description for additional details.
 *
 * We call kvm_get_dirty_log_protect() to handle steps 1-3; upon return we
 * always flush the TLB (step 4) even if the previous step failed and the
 * dirty bitmap may be corrupt. Regardless of the previous outcome, the KVM
 * logging API does not preclude a subsequent dirty log read by user space.
 * Flushing the TLB ensures writes will be marked dirty on the next log read.
 *
 *   1. Take a snapshot of the bit and clear it if needed.
 *   2. Write protect the corresponding page.
 *   3. Copy the snapshot to the userspace.
 *   4. Flush TLBs if needed.
 */
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
{
#ifdef CONFIG_ARM
	bool is_dirty = false;
	int r;

	mutex_lock(&kvm->slots_lock);

	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);

	if (is_dirty)
		kvm_flush_remote_tlbs(kvm);

	mutex_unlock(&kvm->slots_lock);
	return r;
#else /* arm64 */
	return -EINVAL;
#endif
}

static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
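kvm_get_dirty_log_protect() itself is not part of this diff. As a rough, self-contained model only (not the kernel implementation), steps 1-3 amount to atomically snapshotting and clearing each word of the live dirty bitmap, write protecting whatever was set, and copying the snapshot out to the caller:

#include <stdatomic.h>
#include <stdbool.h>

#define NWORDS 4

static _Atomic unsigned long dirty[NWORDS];	/* live bitmap, set on guest writes */

static void write_protect_pages(int word, unsigned long mask)
{
	/* stand-in for stage2_wp_range() over the pages set in 'mask' */
	(void)word;
	(void)mask;
}

bool snapshot_dirty_log(unsigned long *snapshot)
{
	bool is_dirty = false;

	for (int i = 0; i < NWORDS; i++) {
		/* step 1: grab and clear the live bits in one atomic op */
		unsigned long mask = atomic_exchange(&dirty[i], 0);

		if (mask) {
			is_dirty = true;
			/* step 2: write protect so new writes fault and
			 * re-mark their pages dirty */
			write_protect_pages(i, mask);
		}
		/* step 3: copy the snapshot out */
		snapshot[i] = mask;
	}
	return is_dirty;	/* caller flushes TLBs (step 4) if true */
}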
arch/arm/kvm/mmu.c +22 −0
@@ -1029,6 +1029,28 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
	spin_unlock(&kvm->mmu_lock);
	kvm_flush_remote_tlbs(kvm);
}

/**
 * kvm_arch_mmu_write_protect_pt_masked() - write protect dirty pages
 * @kvm:	The KVM pointer
 * @slot:	The memory slot associated with mask
 * @gfn_offset:	The gfn offset in memory slot
 * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
 *		slot to be write protected
 *
 * Walks the bits set in mask and write protects the associated PTEs. The
 * caller must hold kvm->mmu_lock.
 */
void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
		struct kvm_memory_slot *slot,
		gfn_t gfn_offset, unsigned long mask)
{
	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;

	stage2_wp_range(kvm, start, end);
}
#endif

static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
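To make the range math above concrete: __ffs() returns the bit index of the lowest set bit and __fls() that of the highest, so the function write protects one contiguous span from the lowest through the highest dirty page covered by the mask. A standalone sketch, using GCC builtins in place of the kernel helpers and assuming a 64-bit unsigned long:

#include <stdio.h>

/* Stand-ins for the kernel's __ffs()/__fls(): index of first/last set bit. */
static unsigned long my_ffs(unsigned long x) { return __builtin_ctzl(x); }
static unsigned long my_fls(unsigned long x) { return 63 - __builtin_clzl(x); }

int main(void)
{
	unsigned long base_gfn = 64;
	unsigned long mask = 0x18;	/* bits 3 and 4 dirty */

	/* Mirrors the start/end computation above, minus PAGE_SHIFT scaling;
	 * 'end' is exclusive, hence the +1 in the kernel code. */
	unsigned long first = base_gfn + my_ffs(mask);	/* 67 */
	unsigned long last = base_gfn + my_fls(mask);	/* 68 */

	printf("write protect gfn %lu..%lu\n", first, last);
	return 0;
}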