
Commit 119031be authored by Marc Orr, committed by Greg Kroah-Hartman

KVM: x86: nVMX: close leak of L0's x2APIC MSRs (CVE-2019-3887)

commit acff78477b9b4f26ecdf65733a4ed77fe837e9dc upstream.

The nested_vmx_prepare_msr_bitmap() function doesn't directly guard the
x2APIC MSR intercepts with the "virtualize x2APIC mode" VM-execution
control. As a result, we discovered the potential for a buggy or
malicious L1 to gain access to L0's x2APIC MSRs via an L2, as follows.

1. L1 executes WRMSR(IA32_SPEC_CTRL, 1). This causes the spec_ctrl
variable in nested_vmx_prepare_msr_bitmap() to become true.
2. L1 disables "virtualize x2APIC mode" in VMCS12.
3. L1 enables "APIC-register virtualization" in VMCS12.

Now, KVM will set VMCS02's x2APIC MSR intercepts from VMCS12, and then
set "virtualize x2APIC mode" to 0 in VMCS02. Oops.
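To make the leak concrete, here is a small standalone simulation of the
pre-patch merge logic. This is a hypothetical userspace model, not kernel
code: merge_prepatch() and its flag parameters are invented for
illustration, the pred_cmd check is omitted for brevity, and a set bit
means "intercept, i.e. exit to L0".

#include <stdio.h>
#include <string.h>

#define BITS_PER_LONG (8 * sizeof(long))
#define BITMAP_LONGS (4096 / sizeof(long))	/* one 4 KiB MSR bitmap */

/* Hypothetical model of the pre-patch x2APIC merge in
 * nested_vmx_prepare_msr_bitmap(). */
static void merge_prepatch(unsigned long *l0, const unsigned long *l1,
			   int virt_x2apic_mode, int apic_reg_virt,
			   int spec_ctrl)
{
	int msr;

	/* Pre-patch, this early return was effectively the only use of
	 * "virtualize x2APIC mode"; step 1 (spec_ctrl == true) defeats it. */
	if (!virt_x2apic_mode && !spec_ctrl)
		return;

	for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
		unsigned word = msr / BITS_PER_LONG;

		/* Bug: reads guarded only by "APIC-register virtualization". */
		l0[word] = apic_reg_virt ? l1[word] : ~0UL;
		l0[word + (0x800 / sizeof(long))] = ~0UL;
	}
}

int main(void)
{
	unsigned long l0[BITMAP_LONGS], l1[BITMAP_LONGS];

	memset(l0, 0xff, sizeof(l0));	/* L0 starts out intercepting all */
	memset(l1, 0x00, sizeof(l1));	/* L1 intercepts nothing */

	/* Steps 1-3 above: spec_ctrl true, "virtualize x2APIC mode" off,
	 * "APIC-register virtualization" on. */
	merge_prepatch(l0, l1, 0, 1, 1);

	/* Prints 0: L2 reads of L0's x2APIC MSRs no longer cause exits. */
	printf("read-intercept word for MSR 0x800: %#lx\n",
	       l0[0x800 / BITS_PER_LONG]);
	return 0;
}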

This patch closes the leak by explicitly guarding VMCS02's x2APIC MSR
intercepts with VMCS12's "virtualize x2APIC mode" control.
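For contrast, the post-patch merge in the same simulation style (again a
hypothetical sketch, reusing BITS_PER_LONG and the bitmap layout from the
model above) defaults the whole x2APIC range to intercepted and relaxes
reads only when VMCS12 actually enables "virtualize x2APIC mode",
mirroring the diff below:

static void merge_postpatch(unsigned long *l0, const unsigned long *l1,
			    int virt_x2apic_mode, int apic_reg_virt)
{
	int msr;

	/* enable_x2apic_msr_intercepts(): everything in 0x800-0x8ff exits. */
	for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
		unsigned word = msr / BITS_PER_LONG;

		l0[word] = ~0UL;
		l0[word + (0x800 / sizeof(long))] = ~0UL;
	}

	/* Reads are relaxed only under the correct pair of controls. */
	if (virt_x2apic_mode && apic_reg_virt) {
		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
			unsigned word = msr / BITS_PER_LONG;

			l0[word] = l1[word];
		}
	}
}

Re-running the scenario above against merge_postpatch() leaves the
read-intercept words at ~0, so L2's accesses keep exiting to L0.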

The scenario outlined above, and the fix prescribed here, were verified
with a related kvm-unit-tests patch titled "Add leak scenario to
virt_x2apic_mode_test".

Note: it looks like this issue may have been introduced inadvertently
during a merge; see commit 15303ba5.

Signed-off-by: Marc Orr <marcorr@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent f8053df6
+44 −28
@@ -11582,6 +11582,17 @@ static int nested_vmx_check_tpr_shadow_controls(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap) {
+	int msr;
+
+	for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+		unsigned word = msr / BITS_PER_LONG;
+
+		msr_bitmap[word] = ~0;
+		msr_bitmap[word + (0x800 / sizeof(long))] = ~0;
+	}
+}
+
 /*
  * Merge L0's and L1's MSR bitmap, return false to indicate that
  * we do not use the hardware.
@@ -11623,22 +11634,26 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 		return false;
 
 	msr_bitmap_l1 = (unsigned long *)kmap(page);
-	if (nested_cpu_has_apic_reg_virt(vmcs12)) {
-		/*
-		 * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
-		 * just lets the processor take the value from the virtual-APIC page;
-		 * take those 256 bits directly from the L1 bitmap.
-		 */
-		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
-			unsigned word = msr / BITS_PER_LONG;
-			msr_bitmap_l0[word] = msr_bitmap_l1[word];
-			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
-		}
-	} else {
-		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
-			unsigned word = msr / BITS_PER_LONG;
-			msr_bitmap_l0[word] = ~0;
-			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+
+	/*
+	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
+	 * 4-byte writes on 32-bit systems) up front to enable intercepts for
+	 * the x2APIC MSR range and selectively disable them below.
+	 */
+	enable_x2apic_msr_intercepts(msr_bitmap_l0);
+
+	if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
+		if (nested_cpu_has_apic_reg_virt(vmcs12)) {
+			/*
+			 * L0 need not intercept reads for MSRs between 0x800
+			 * and 0x8ff, it just lets the processor take the value
+			 * from the virtual-APIC page; take those 256 bits
+			 * directly from the L1 bitmap.
+			 */
+			for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+				unsigned word = msr / BITS_PER_LONG;
+
+				msr_bitmap_l0[word] = msr_bitmap_l1[word];
+			}
 		}
-	}
 
@@ -11657,6 +11672,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 				X2APIC_MSR(APIC_SELF_IPI),
 				MSR_TYPE_W);
 		}
+	}
 
 	if (spec_ctrl)
 		nested_vmx_disable_intercept_for_msr(
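
For readers puzzling over the index arithmetic in the hunks above: in the
hardware MSR-bitmap layout, the read bitmap for low MSRs occupies the
first 0x400 bytes of the 4 KiB page and the matching write bitmap starts
at byte 0x800, which is where the word + (0x800 / sizeof(long)) offset
comes from. A worked example, assuming 64-bit longs (0x808 is
X2APIC_MSR(APIC_TASKPRI), the x2APIC TPR):

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(long))

int main(void)
{
	int msr = 0x808;	/* x2APIC TPR, i.e. X2APIC_MSR(APIC_TASKPRI) */

	/* Read-intercept bit: the low-MSR read bitmap starts at byte 0. */
	printf("read:  word %zu, bit %zu\n",
	       msr / BITS_PER_LONG, msr % BITS_PER_LONG);

	/* Write-intercept bit: the low-MSR write bitmap starts at byte
	 * 0x800, i.e. 0x800 / sizeof(long) longs into the page, hence
	 * the offset used by enable_x2apic_msr_intercepts(). */
	printf("write: word %zu, bit %zu\n",
	       msr / BITS_PER_LONG + 0x800 / sizeof(long),
	       msr % BITS_PER_LONG);
	return 0;
}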