
Commit de35cab0 authored by Shiraz Hashim

arm: mm: fix pte allocation with CONFIG_FORCE_PAGES feature



CONFIG_FORCE_PAGES introduces a debug option that marks free
pages as read-only in order to trigger a fault if any code
attempts to write to a page on the buddy list.

To achieve this, it splits all section mappings into
PAGE_SIZE pages during boot. While doing this split, it
wrongly allocates a new pte table (a full page) for every
pmd. On ARMv7 with the short-descriptor format, however,
Linux uses the same second-level pte table page for two
consecutive pmds, each offset by the actual hardware
second-level table size. See the comments in
arch/arm/include/asm/pgtable-2level.h for details.

Fix this by allocating a pte table for a section-mapped pmd
only when none exists yet, and by computing the pte pointer
at the required offset into the existing table otherwise.

Change-Id: Iadbaa5e71e4b220208a7275bf039a2a413349e42
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
parent e80004be
@@ -1600,8 +1600,25 @@ static noinline void __init split_pmd(pmd_t *pmd, unsigned long addr,
				const struct mem_type *type)
{
	pte_t *pte, *start_pte;
	pmd_t *base_pmd;

	base_pmd = pmd_offset(
			pud_offset(pgd_offset(&init_mm, addr), addr), addr);

	if (pmd_none(*base_pmd) || pmd_bad(*base_pmd)) {
		start_pte = early_alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
#ifndef CONFIG_ARM_LPAE
		/*
		 * Following is needed when new pte is allocated for pmd[1]
		 * cases, which may happen when base (start) address falls
		 * under pmd[1].
		 */
		if (addr & SECTION_SIZE)
			start_pte += pte_index(addr);
#endif
	} else {
		start_pte = pte_offset_kernel(base_pmd, addr);
	}

	pte = start_pte;