
Commit 569f48b8 authored by Hillf Danton, committed by Linus Torvalds

mm: hugetlb: fix __unmap_hugepage_range()



First, after flushing the TLB there is no need to scan the PTEs from the start again; the loop can resume from the current address. Second, before bailing out of the loop, the address is advanced one step so the page that triggered the flush is not processed twice on restart.

Signed-off-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e4bd6a02
+3 −1
@@ -2638,8 +2638,9 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	tlb_start_vma(tlb, vma);
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	address = start;
 again:
-	for (address = start; address < end; address += sz) {
+	for (; address < end; address += sz) {
 		ptep = huge_pte_offset(mm, address);
 		if (!ptep)
 			continue;
@@ -2686,6 +2687,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		page_remove_rmap(page);
 		force_flush = !__tlb_remove_page(tlb, page);
 		if (force_flush) {
+			address += sz;
 			spin_unlock(ptl);
 			break;
 		}