
Commit 88a9ab6e authored by Rik van Riel's avatar Rik van Riel Committed by Linus Torvalds

mm,numa: reorganize change_pmd_range()



Reorganize the order of ifs in change_pmd_range a little, in preparation
for the next patch.

[akpm@linux-foundation.org: fix indenting, per David]
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Xing Gang <gang.xing@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent a9af0c5d
+4 −3
@@ -118,6 +118,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		unsigned long this_pages;
 
 		next = pmd_addr_end(addr, end);
+		if (!pmd_trans_huge(*pmd) && pmd_none_or_clear_bad(pmd))
+			continue;
 		if (pmd_trans_huge(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE)
 				split_huge_page_pmd(vma, addr, pmd);
@@ -133,10 +135,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 					continue;
 				}
 			}
-			/* fall through */
+			/* fall through, the trans huge pmd just split */
 		}
-		if (pmd_none_or_clear_bad(pmd))
-			continue;
+		VM_BUG_ON(pmd_trans_huge(*pmd));
 		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
 				 dirty_accountable, prot_numa);
 		pages += this_pages;