
Commit c8e910d7 authored by Chintan Pandya, committed by Sudarshan Rajagopalan

arm64: Implement page table free interfaces



arm64 requires break-before-make. Previously, before
setting up a new pmd/pud entry for a huge mapping, in
some cases the pmd/pud entry being modified was still
valid and pointing to a next-level page table, because
the unmap path only clears the leaf PTEs.

 a) This left stale entries in TLBs (some TLBs also
    cache intermediate mappings for performance
    reasons).
 b) The pmd/pud entry being modified was also the only
    reference to the next-level page table, which was
    lost without being freed, so page tables leaked.

Implement pud_free_pmd_page() and pmd_free_pte_page() to
enforce BBM and also free the leaking page tables.

The implementation requires:
 1) Clearing the current pud/pmd entry
 2) Invalidating the TLB
 3) Freeing the unused next-level page tables

Change-Id: I5d2ef6ff8a15a2fa832f2f376a885288a071a49e
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Git-Commit: ec28bb9c9b0826d7bd36f44cccfa5295c291cadd
Git-Repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
parent bdb6dcae
44 additions, 4 deletions
@@ -47,6 +47,7 @@
 #include <asm/memblock.h>
 #include <asm/mmu_context.h>
 #include <asm/ptdump.h>
+#include <asm/tlbflush.h>
 
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
@@ -1398,12 +1399,51 @@ int pmd_clear_huge(pmd_t *pmd)
 	return 1;
 }
 
-int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
 {
-	return pud_none(*pud);
+	pte_t *table;
+	pmd_t pmd;
+
+	pmd = READ_ONCE(*pmdp);
+
+	/* No-op for empty entry and WARN_ON for valid entry */
+	if (!pmd_present(pmd) || !pmd_table(pmd)) {
+		VM_WARN_ON(!pmd_table(pmd));
+		return 1;
+	}
+
+	table = pte_offset_kernel(pmdp, addr);
+	pmd_clear(pmdp);
+	__flush_tlb_kernel_pgtable(addr);
+	pte_free_kernel(NULL, table);
+	return 1;
 }
 
-int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
 {
-	return pmd_none(*pmd);
+	pmd_t *table;
+	pmd_t *pmdp;
+	pud_t pud;
+	unsigned long next, end;
+
+	pud = READ_ONCE(*pudp);
+
+	/* No-op for empty entry and WARN_ON for valid entry */
+	if (!pud_present(pud) || !pud_table(pud)) {
+		VM_WARN_ON(!pud_table(pud));
+		return 1;
+	}
+
+	table = pmd_offset(pudp, addr);
+	pmdp = table;
+	next = addr;
+	end = addr + PUD_SIZE;
+	do {
+		pmd_free_pte_page(pmdp, next);
+	} while (pmdp++, next += PMD_SIZE, next != end);
+
+	pud_clear(pudp);
+	__flush_tlb_kernel_pgtable(addr);
+	pmd_free(NULL, table);
+	return 1;
 }