
Commit 0b9d7052 authored by Andrea Arcangeli, committed by Mel Gorman

mm: numa: Support NUMA hinting page faults from gup/gup_fast

Introduce FOLL_NUMA to tell follow_page to check
pte/pmd_numa. get_user_pages must use FOLL_NUMA, and it's safe to do
so because it always invokes handle_mm_fault and retries
follow_page afterwards.
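
For context, a hedged sketch of the retry loop this relies on,
modelled on the __get_user_pages() of the era (not quoted verbatim):

	/*
	 * Sketch of the __get_user_pages() inner loop: under FOLL_NUMA
	 * a pte_numa page makes follow_page() return NULL,
	 * handle_mm_fault() services the hinting fault (clearing the
	 * NUMA bit, possibly migrating the page), and the lookup is
	 * simply retried.
	 */
	while (!(page = follow_page(vma, start, foll_flags))) {
		unsigned int fault_flags = 0;
		int ret;

		if (foll_flags & FOLL_WRITE)
			fault_flags |= FAULT_FLAG_WRITE;
		ret = handle_mm_fault(mm, vma, start, fault_flags);
		if (ret & VM_FAULT_ERROR)
			return i ? i : -EFAULT;	/* real failure: stop */
		/* hinting fault (or COW, swap-in, ...) handled: retry */
	}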

KVM secondary MMU page faults will trigger the NUMA hinting page
faults through gup_fast -> get_user_pages -> follow_page ->
handle_mm_fault.
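
The fast path never treats the NUMA pte as usable: with _PAGE_NUMA
sharing the _PAGE_PROTNONE encoding, the present bit is clear, so the
lockless walk bails out and the slow get_user_pages path (now carrying
FOLL_NUMA) takes the hinting fault. A sketch of the relevant test,
modelled on x86's gup_pte_range() of the era (an illustration, not
necessarily verbatim):

	unsigned long mask = _PAGE_PRESENT | _PAGE_USER;
	pte_t pte = gup_get_pte(ptep);

	if (write)
		mask |= _PAGE_RW;
	/* A pte_numa pte has _PAGE_PRESENT clear: the test fails,
	 * gup_fast returns 0 and the caller falls back to the slow
	 * get_user_pages() path. */
	if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask)
		return 0;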

Other follow_page callers like KSM should not use FOLL_NUMA, or they
would fail to get the pages if they use follow_page instead of
get_user_pages.
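
By contrast, a caller like KSM does a one-shot lookup with no
fault-and-retry loop, so under FOLL_NUMA every NUMA-hinted page would
just look unmapped. A sketch in the style of KSM's break_ksm()
(simplified, not verbatim):

	/* One-shot follow_page() user: there is no handle_mm_fault()
	 * retry loop here, so passing FOLL_NUMA would turn every
	 * NUMA-hinted page into an apparent miss. Hence: no FOLL_NUMA. */
	page = follow_page(vma, addr, FOLL_GET);
	if (IS_ERR_OR_NULL(page))
		goto out;	/* genuinely absent (or erroring) mapping */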

[ This patch was picked up from the AutoNUMA tree. ]

Originally-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
[ ported to this tree. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
parent be3a7284
include/linux/mm.h +1 −0
@@ -1572,6 +1572,7 @@ struct page *follow_page(struct vm_area_struct *, unsigned long address,
 #define FOLL_MLOCK	0x40	/* mark page as mlocked */
 #define FOLL_SPLIT	0x80	/* don't return transhuge pages, split them */
 #define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
+#define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
 
 typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
 			void *data);
mm/memory.c +17 −0
@@ -1517,6 +1517,8 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 		page = follow_huge_pmd(mm, address, pmd, flags & FOLL_WRITE);
 		goto out;
 	}
+	if ((flags & FOLL_NUMA) && pmd_numa(*pmd))
+		goto no_page_table;
 	if (pmd_trans_huge(*pmd)) {
 		if (flags & FOLL_SPLIT) {
 			split_huge_page_pmd(mm, pmd);
@@ -1546,6 +1548,8 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	pte = *ptep;
 	if (!pte_present(pte))
 		goto no_page;
+	if ((flags & FOLL_NUMA) && pte_numa(pte))
+		goto no_page;
 	if ((flags & FOLL_WRITE) && !pte_write(pte))
 		goto unlock;
 
@@ -1697,6 +1701,19 @@ int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			(VM_WRITE | VM_MAYWRITE) : (VM_READ | VM_MAYREAD);
 	vm_flags &= (gup_flags & FOLL_FORCE) ?
 			(VM_MAYREAD | VM_MAYWRITE) : (VM_READ | VM_WRITE);
+
+	/*
+	 * If FOLL_FORCE and FOLL_NUMA are both set, handle_mm_fault
+	 * would be called on PROT_NONE ranges. We must never invoke
+	 * handle_mm_fault on PROT_NONE ranges or the NUMA hinting
+	 * page faults would unprotect the PROT_NONE ranges if
+	 * _PAGE_NUMA and _PAGE_PROTNONE are sharing the same pte/pmd
+	 * bitflag. So to avoid that, don't set FOLL_NUMA if
+	 * FOLL_FORCE is set.
+	 */
+	if (!(gup_flags & FOLL_FORCE))
+		gup_flags |= FOLL_NUMA;
+
 	i = 0;
 
 	do {
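
To make the hazard in the comment above concrete: on x86 at the time,
_PAGE_NUMA was defined as an alias of _PAGE_PROTNONE, so the generic
pte_numa() test also fires on a plain PROT_NONE pte. A hinting fault
taken on such a range would "repair" the pte and silently drop the
protection; masking FOLL_NUMA under FOLL_FORCE avoids ever taking that
fault there. Illustration, close to (but not necessarily verbatim) the
contemporary code:

/*
 * With _PAGE_NUMA aliased to _PAGE_PROTNONE, pte_numa() cannot tell a
 * NUMA-hinted pte from a genuine PROT_NONE one by the bit alone: both
 * have the shared flag set and _PAGE_PRESENT clear.
 */
#define _PAGE_NUMA	_PAGE_PROTNONE

static inline int pte_numa(pte_t pte)
{
	return (pte_flags(pte) & (_PAGE_NUMA | _PAGE_PRESENT))
		== _PAGE_NUMA;
}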