
Commit dbff124e authored by Joel Schopp, committed by Christoffer Dall

arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc



The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
and not all the bits in the PA range. This is clearly a bug that
manifests itself on systems that allocate memory in the higher address
space range.

 [ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
   instead of a hard-coded value and to move the alignment check of the
   allocation to mmu.c.  Also added a comment explaining why we hardcode
   the IPA range and changed the stage-2 pgd allocation to be based on
   the 40 bit IPA range instead of the maximum possible 48 bit PA range.
   - Christoffer ]

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
parent 0fea6d76
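
For illustration only (not part of the commit): the sketch below recomputes the old and new VTTBR_BADDR_MASK in userspace and shows how a stage-2 pgd allocated at or above 1 << 39 has its upper bits silently dropped by the old mask. It assumes VTTBR_X = 13 (the 4K-page, T0SZ = 24 configuration) and PHYS_MASK_SHIFT = 48; neither constant appears in the diffs below, so treat both as assumptions.

#include <stdint.h>
#include <stdio.h>

#define VTTBR_X            13ULL  /* assumed: 4K pages, T0SZ = 24 */
#define PHYS_MASK_SHIFT    48ULL  /* assumed: arm64 maximum PA width */
#define VTTBR_BADDR_SHIFT  (VTTBR_X - 1)

/* old definition: only reaches bit 38 */
#define OLD_BADDR_MASK (((1ULL << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
/* new definition: reaches the top of the PA range */
#define NEW_BADDR_MASK (((1ULL << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)

int main(void)
{
	uint64_t pgd_phys = 1ULL << 39;  /* pgd allocated above 512GB */

	printf("old mask %#llx, new mask %#llx\n",
	       (unsigned long long)OLD_BADDR_MASK,
	       (unsigned long long)NEW_BADDR_MASK);
	/* old code masked the base address: the high bits vanish */
	printf("old VTTBR baddr %#llx\n",
	       (unsigned long long)(pgd_phys & OLD_BADDR_MASK));
	/* new code keeps the address; out-of-range bits trip the BUG_ON */
	printf("new VTTBR baddr %#llx\n",
	       (unsigned long long)(pgd_phys & NEW_BADDR_MASK));
	return 0;
}

With the old mask the printed base address is 0, so the guest would run with a bogus stage-2 translation table base; with the widened mask the address survives, and an out-of-range allocation is caught by the new BUG_ON() instead of being silently truncated.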
+2 −2
@@ -410,9 +410,9 @@ static void update_vttbr(struct kvm *kvm)
 
 	/* update vttbr to be used with the new vmid */
 	pgd_phys = virt_to_phys(kvm->arch.pgd);
+	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
-	kvm->arch.vttbr = pgd_phys & VTTBR_BADDR_MASK;
-	kvm->arch.vttbr |= vmid;
+	kvm->arch.vttbr = pgd_phys | vmid;
 
 	spin_unlock(&kvm_vmid_lock);
 }
+12 −1
@@ -122,6 +122,17 @@
 #define VTCR_EL2_T0SZ_MASK	0x3f
 #define VTCR_EL2_T0SZ_40B	24
 
+/*
+ * We configure the Stage-2 page tables to always restrict the IPA space to be
+ * 40 bits wide (T0SZ = 24).  Systems with a PARange smaller than 40 bits are
+ * not known to exist and will break with this configuration.
+ *
+ * Note that when using 4K pages, we concatenate two first level page tables
+ * together.
+ *
+ * The magic numbers used for VTTBR_X in this patch can be found in Tables
+ * D4-23 and D4-25 in ARM DDI 0487A.b.
+ */
 #ifdef CONFIG_ARM64_64K_PAGES
 /*
  * Stage2 translation configuration:
@@ -149,7 +160,7 @@
 #endif
 
 #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK  (((1LLU << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
 #define VTTBR_VMID_SHIFT  (48LLU)
 #define VTTBR_VMID_MASK	  (0xffLLU << VTTBR_VMID_SHIFT)
 
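
As a rough check of the arithmetic behind the new comment (assumptions: 4K granule, 9 translation bits per level, and VTTBR_X = 13 for this configuration, none of which appear verbatim in the hunk above):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	const unsigned t0sz = 24;             /* VTCR_EL2_T0SZ_40B */
	const unsigned ipa_bits = 64 - t0sz;  /* 40-bit IPA space */
	const unsigned page_bits = 12;        /* 4K pages */
	const unsigned level_bits = 9;        /* 512 entries per 4K table */

	/* one first-level table resolves 12 + 3*9 = 39 bits of IPA */
	const unsigned l1_bits = page_bits + 3 * level_bits;
	/* so two concatenated first-level tables are needed for 40 bits */
	const unsigned ntables = 1u << (ipa_bits - l1_bits);
	const unsigned pgd_bytes = ntables << page_bits;

	printf("IPA bits %u, concatenated tables %u, pgd %u bytes\n",
	       ipa_bits, ntables, pgd_bytes);
	assert(ntables == 2 && pgd_bytes == 8192);  /* 8kB, 2^13-aligned base */
	return 0;
}

The 8kB (2^13) base alignment of that concatenated table is what the VTTBR_X "magic number" encodes for the 4K-page case; the widened VTTBR_BADDR_MASK simply lets the base address field extend up to PHYS_MASK_SHIFT instead of stopping at 40 bits.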
+2 −3
@@ -59,10 +59,9 @@
 #define KERN_TO_HYP(kva)	((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
 
 /*
- * Align KVM with the kernel's view of physical memory. Should be
- * 40bit IPA, with PGD being 8kB aligned in the 4KB page configuration.
+ * We currently only support a 40bit IPA.
  */
-#define KVM_PHYS_SHIFT	PHYS_MASK_SHIFT
+#define KVM_PHYS_SHIFT	(40)
 #define KVM_PHYS_SIZE	(1UL << KVM_PHYS_SHIFT)
 #define KVM_PHYS_MASK	(KVM_PHYS_SIZE - 1UL)
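
The commit note mentions that the stage-2 pgd allocation is now sized from this 40-bit IPA rather than the full PA range. A minimal sketch of the difference, assuming a first-level stage-2 entry maps 1GB (a 30-bit stride with 4K pages), 8-byte descriptors, and PHYS_MASK_SHIFT = 48 on arm64; the kernel's own computation (via macros along the lines of PTRS_PER_S2_PGD) is not shown in this hunk:

#include <stdint.h>
#include <stdio.h>

#define S2_PGDIR_SHIFT	30	/* assumed: each first-level entry maps 1GB */
#define S2_DESC_BYTES	8	/* assumed: 64-bit descriptors */

static uint64_t s2_pgd_bytes(unsigned int phys_shift)
{
	return ((uint64_t)1 << (phys_shift - S2_PGDIR_SHIFT)) * S2_DESC_BYTES;
}

int main(void)
{
	/* sized for PHYS_MASK_SHIFT (48 bits): 2MB of first-level tables */
	printf("48-bit sizing: %llu bytes\n",
	       (unsigned long long)s2_pgd_bytes(48));
	/* sized for KVM_PHYS_SHIFT (40 bits): the 8kB actually needed */
	printf("40-bit sizing: %llu bytes\n",
	       (unsigned long long)s2_pgd_bytes(40));
	return 0;
}

Tying the allocation to the hard-coded 40-bit IPA keeps the pgd at the 8kB (two concatenated 4K tables) the hardware configuration actually uses, instead of allocating for the maximum possible 48-bit physical address range.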