
Commit c812a51d authored by Linus Torvalds

Merge branch 'kvm-updates/2.6.34' of git://git.kernel.org/pub/scm/virt/kvm/kvm

* 'kvm-updates/2.6.34' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (145 commits)
  KVM: x86: Add KVM_CAP_X86_ROBUST_SINGLESTEP
  KVM: VMX: Update instruction length on intercepted BP
  KVM: Fix emulate_sys[call, enter, exit]()'s fault handling
  KVM: Fix segment descriptor loading
  KVM: Fix load_guest_segment_descriptor() to inject page fault
  KVM: x86 emulator: Forbid modifying CS segment register by mov instruction
  KVM: Convert kvm->requests_lock to raw_spinlock_t
  KVM: Convert i8254/i8259 locks to raw_spinlocks
  KVM: x86 emulator: disallow opcode 82 in 64-bit mode
  KVM: x86 emulator: code style cleanup
  KVM: Plan obsolescence of kernel allocated slots, paravirt mmu
  KVM: x86 emulator: Add LOCK prefix validity checking
  KVM: x86 emulator: Check CPL level during privilege instruction emulation
  KVM: x86 emulator: Fix popf emulation
  KVM: x86 emulator: Check IOPL level during io instruction emulation
  KVM: x86 emulator: fix memory access during x86 emulation
  KVM: x86 emulator: Add Virtual-8086 mode of emulation
  KVM: x86 emulator: Add group9 instruction decoding
  KVM: x86 emulator: Add group8 instruction decoding
  KVM: do not store wqh in irqfd
  ...

Trivial conflicts in Documentation/feature-removal-schedule.txt
parents 9467c4fd d2be1651
+32 −0
@@ -556,3 +556,35 @@ Why: udev fully replaces this special file system that only contains CAPI
	NCCI TTY device nodes. User space (pppdcapiplugin) works without
	noticing the difference.
Who:	Jan Kiszka <jan.kiszka@web.de>

----------------------------

What:	KVM memory aliases support
When:	July 2010
Why:	Memory aliasing support is used for speeding up guest vga access
	through the vga windows.

	Modern userspace no longer uses this feature, so it's just bitrotted
	code and can be removed with no impact.
Who:	Avi Kivity <avi@redhat.com>

----------------------------

What:	KVM kernel-allocated memory slots
When:	July 2010
Why:	Since 2.6.25, kvm supports user-allocated memory slots, which are
	much more flexible than kernel-allocated slots.  All current userspace
	supports the newer interface and this code can be removed with no
	impact.
Who:	Avi Kivity <avi@redhat.com>

----------------------------

What:	KVM paravirt mmu host support
When:	January 2011
Why:	The paravirt mmu host support is slower than non-paravirt mmu, both
	on newer and older hardware.  It is already not exposed to the guest,
	and kept only for live migration purposes.
Who:	Avi Kivity <avi@redhat.com>

----------------------------
+6 −6
@@ -23,12 +23,12 @@ of a virtual machine. The ioctls belong to three classes
   Only run vcpu ioctls from the same thread that was used to create the
   vcpu.

-2. File descritpors
+2. File descriptors

The kvm API is centered around file descriptors.  An initial
open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
can be used to issue system ioctls.  A KVM_CREATE_VM ioctl on this
-handle will create a VM file descripror which can be used to issue VM
+handle will create a VM file descriptor which can be used to issue VM
ioctls.  A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
and return a file descriptor pointing to it.  Finally, ioctls on a vcpu
fd can be used to control the vcpu, including the important task of
@@ -643,7 +643,7 @@ Type: vm ioctl
Parameters: struct kvm_clock_data (in)
Returns: 0 on success, -1 on error

-Sets the current timestamp of kvmclock to the valued specific in its parameter.
+Sets the current timestamp of kvmclock to the value specified in its parameter.
In conjunction with KVM_GET_CLOCK, it is used to ensure monotonicity on scenarios
such as migration.

@@ -795,11 +795,11 @@ Unused.
			__u64 data_offset; /* relative to kvm_run start */
		} io;

-If exit_reason is KVM_EXIT_IO_IN or KVM_EXIT_IO_OUT, then the vcpu has
+If exit_reason is KVM_EXIT_IO, then the vcpu has
executed a port I/O instruction which could not be satisfied by kvm.
data_offset describes where the data is located (KVM_EXIT_IO_OUT) or
where kvm expects application code to place the data for the next
-KVM_RUN invocation (KVM_EXIT_IO_IN).  Data format is a patcked array.
+KVM_RUN invocation (KVM_EXIT_IO_IN).  Data format is a packed array.

		struct {
			struct kvm_debug_exit_arch arch;
@@ -815,7 +815,7 @@ Unused.
			__u8  is_write;
		} mmio;

-If exit_reason is KVM_EXIT_MMIO or KVM_EXIT_IO_OUT, then the vcpu has
+If exit_reason is KVM_EXIT_MMIO, then the vcpu has
executed a memory-mapped I/O instruction which could not be satisfied
by kvm.  The 'data' member contains the written data if 'is_write' is
true, and should be filled by application code otherwise.
+1 −1
@@ -3173,7 +3173,7 @@ F: arch/x86/include/asm/svm.h
F:	arch/x86/kvm/svm.c

KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC
-M:	Hollis Blanchard <hollisb@us.ibm.com>
+M:	Alexander Graf <agraf@suse.de>
L:	kvm-ppc@vger.kernel.org
W:	http://kvm.qumranet.com
S:	Supported
+1 −0
@@ -26,6 +26,7 @@ config KVM
	select ANON_INODES
	select HAVE_KVM_IRQCHIP
	select KVM_APIC_ARCHITECTURE
+	select KVM_MMIO
	---help---
	  Support hosting fully virtualized guest machines using hardware
	  virtualization extensions.  You will need a fairly recent
+30 −20
@@ -241,10 +241,10 @@ static int handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
	return 0;
mmio:
	if (p->dir)
-		r = kvm_io_bus_read(&vcpu->kvm->mmio_bus, p->addr,
+		r = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, p->addr,
				    p->size, &p->data);
	else
-		r = kvm_io_bus_write(&vcpu->kvm->mmio_bus, p->addr,
+		r = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, p->addr,
				     p->size, &p->data);
	if (r)
		printk(KERN_ERR"kvm: No iodevice found! addr:%lx\n", p->addr);
@@ -636,12 +636,9 @@ static void kvm_vcpu_post_transition(struct kvm_vcpu *vcpu)
static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
	union context *host_ctx, *guest_ctx;
-	int r;
+	int r, idx;

-	/*
-	 * down_read() may sleep and return with interrupts enabled
-	 */
-	down_read(&vcpu->kvm->slots_lock);
+	idx = srcu_read_lock(&vcpu->kvm->srcu);

again:
	if (signal_pending(current)) {
@@ -663,7 +660,7 @@ again:
	if (r < 0)
		goto vcpu_run_fail;

-	up_read(&vcpu->kvm->slots_lock);
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
	kvm_guest_enter();

	/*
@@ -687,7 +684,7 @@ again:
	kvm_guest_exit();
	preempt_enable();

-	down_read(&vcpu->kvm->slots_lock);
+	idx = srcu_read_lock(&vcpu->kvm->srcu);

	r = kvm_handle_exit(kvm_run, vcpu);

@@ -697,10 +694,10 @@ again:
	}

out:
-	up_read(&vcpu->kvm->slots_lock);
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
	if (r > 0) {
		kvm_resched(vcpu);
-		down_read(&vcpu->kvm->slots_lock);
+		idx = srcu_read_lock(&vcpu->kvm->srcu);
		goto again;
	}

@@ -971,7 +968,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
			goto out;
		r = kvm_setup_default_irq_routing(kvm);
		if (r) {
-			kfree(kvm->arch.vioapic);
+			kvm_ioapic_destroy(kvm);
			goto out;
		}
		break;
@@ -1377,12 +1374,14 @@ static void free_kvm(struct kvm *kvm)

static void kvm_release_vm_pages(struct kvm *kvm)
{
+	struct kvm_memslots *slots;
	struct kvm_memory_slot *memslot;
	int i, j;
	unsigned long base_gfn;

-	for (i = 0; i < kvm->nmemslots; i++) {
-		memslot = &kvm->memslots[i];
+	slots = rcu_dereference(kvm->memslots);
+	for (i = 0; i < slots->nmemslots; i++) {
+		memslot = &slots->memslots[i];
		base_gfn = memslot->base_gfn;

		for (j = 0; j < memslot->npages; j++) {
@@ -1405,6 +1404,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
	kfree(kvm->arch.vioapic);
	kvm_release_vm_pages(kvm);
	kvm_free_physmem(kvm);
+	cleanup_srcu_struct(&kvm->srcu);
	free_kvm(kvm);
}

@@ -1576,15 +1576,15 @@ out:
	return r;
}

-int kvm_arch_set_memory_region(struct kvm *kvm,
-		struct kvm_userspace_memory_region *mem,
+int kvm_arch_prepare_memory_region(struct kvm *kvm,
+		struct kvm_memory_slot *memslot,
		struct kvm_memory_slot old,
+		struct kvm_userspace_memory_region *mem,
		int user_alloc)
{
	unsigned long i;
	unsigned long pfn;
-	int npages = mem->memory_size >> PAGE_SHIFT;
-	struct kvm_memory_slot *memslot = &kvm->memslots[mem->slot];
+	int npages = memslot->npages;
	unsigned long base_gfn = memslot->base_gfn;

	if (base_gfn + npages > (KVM_MAX_MEM_SIZE >> PAGE_SHIFT))
@@ -1608,6 +1608,14 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
	return 0;
}

+void kvm_arch_commit_memory_region(struct kvm *kvm,
+		struct kvm_userspace_memory_region *mem,
+		struct kvm_memory_slot old,
+		int user_alloc)
+{
+	return;
+}

void kvm_arch_flush_shadow(struct kvm *kvm)
{
	kvm_flush_remote_tlbs(kvm);
@@ -1802,7 +1810,7 @@ static int kvm_ia64_sync_dirty_log(struct kvm *kvm,
	if (log->slot >= KVM_MEMORY_SLOTS)
		goto out;

-	memslot = &kvm->memslots[log->slot];
+	memslot = &kvm->memslots->memslots[log->slot];
	r = -ENOENT;
	if (!memslot->dirty_bitmap)
		goto out;
@@ -1827,6 +1835,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
	struct kvm_memory_slot *memslot;
	int is_dirty = 0;

+	mutex_lock(&kvm->slots_lock);
	spin_lock(&kvm->arch.dirty_log_lock);

	r = kvm_ia64_sync_dirty_log(kvm, log);
@@ -1840,12 +1849,13 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
	/* If nothing is dirty, don't bother messing with page tables. */
	if (is_dirty) {
		kvm_flush_remote_tlbs(kvm);
-		memslot = &kvm->memslots[log->slot];
+		memslot = &kvm->memslots->memslots[log->slot];
		n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
		memset(memslot->dirty_bitmap, 0, n);
	}
	r = 0;
out:
+	mutex_unlock(&kvm->slots_lock);
	spin_unlock(&kvm->arch.dirty_log_lock);
	return r;
}