
Commit 4e933027 authored by Linux Build Service Account, committed by Gerrit - the friendly Code Review server

Merge "Merge android-4.4.181 (bd858d73) into msm-4.4"

parents c176a066 5ef154a2
+6 −38
@@ -142,45 +142,13 @@ Mitigation points
   mds_user_clear.
 
   The mitigation is invoked in prepare_exit_to_usermode() which covers
-  most of the kernel to user space transitions. There are a few exceptions
-  which are not invoking prepare_exit_to_usermode() on return to user
-  space. These exceptions use the paranoid exit code.
+  all but one of the kernel to user space transitions.  The exception
+  is when we return from a Non Maskable Interrupt (NMI), which is
+  handled directly in do_nmi().
 
-  - Non Maskable Interrupt (NMI):
-
-    Access to sensible data like keys, credentials in the NMI context is
-    mostly theoretical: The CPU can do prefetching or execute a
-    misspeculated code path and thereby fetching data which might end up
-    leaking through a buffer.
-
-    But for mounting other attacks the kernel stack address of the task is
-    already valuable information. So in full mitigation mode, the NMI is
-    mitigated on the return from do_nmi() to provide almost complete
-    coverage.
-
-  - Double fault (#DF):
-
-    A double fault is usually fatal, but the ESPFIX workaround, which can
-    be triggered from user space through modify_ldt(2) is a recoverable
-    double fault. #DF uses the paranoid exit path, so explicit mitigation
-    in the double fault handler is required.
-
-  - Machine Check Exception (#MC):
-
-    Another corner case is a #MC which hits between the CPU buffer clear
-    invocation and the actual return to user. As this still is in kernel
-    space it takes the paranoid exit path which does not clear the CPU
-    buffers. So the #MC handler repopulates the buffers to some
-    extent. Machine checks are not reliably controllable and the window is
-    extremly small so mitigation would just tick a checkbox that this
-    theoretical corner case is covered. To keep the amount of special
-    cases small, ignore #MC.
-
-  - Debug Exception (#DB):
-
-    This takes the paranoid exit path only when the INT1 breakpoint is in
-    kernel space. #DB on a user space address takes the regular exit path,
-    so no extra mitigation required.
+  (The reason that NMI is special is that prepare_exit_to_usermode() can
+   enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
+   enable IRQs with NMIs blocked.)


2. C-State transition
+1 −1
 VERSION = 4
 PATCHLEVEL = 4
-SUBLEVEL = 180
+SUBLEVEL = 181
 EXTRAVERSION =
 NAME = Blurry Fish Butt

+4 −0
@@ -259,6 +259,8 @@ static int aesbs_xts_encrypt(struct blkcipher_desc *desc,

 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt_block(desc, &walk, 8 * AES_BLOCK_SIZE);
+	if (err)
+		return err;
 
 	/* generate the initial tweak */
 	AES_encrypt(walk.iv, walk.iv, &ctx->twkey);
@@ -283,6 +285,8 @@ static int aesbs_xts_decrypt(struct blkcipher_desc *desc,

 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt_block(desc, &walk, 8 * AES_BLOCK_SIZE);
+	if (err)
+		return err;
 
 	/* generate the initial tweak */
 	AES_encrypt(walk.iv, walk.iv, &ctx->twkey);
+8 −3
@@ -748,7 +748,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			       const struct kvm_vcpu_init *init)
 {
-	unsigned int i;
+	unsigned int i, ret;
 	int phys_target = kvm_target_cpu();
 
 	if (init->target != phys_target)
@@ -783,9 +783,14 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 	vcpu->arch.target = phys_target;
 
 	/* Now we know what it is, we can reset it. */
-	return kvm_reset_vcpu(vcpu);
+	ret = kvm_reset_vcpu(vcpu);
+	if (ret) {
+		vcpu->arch.target = -1;
+		bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
+	}
+
+	return ret;
 }
 
 static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 					 struct kvm_vcpu_init *init)
+1 −0
@@ -207,6 +207,7 @@ void __init exynos_firmware_init(void)
 		return;
 
 	addr = of_get_address(nd, 0, NULL, NULL);
+	of_node_put(nd);
 	if (!addr) {
 		pr_err("%s: No address specified.\n", __func__);
 		return;