
Commit 479f0abb authored by Kirill A. Shutemov, committed by Linus Torvalds

thp: zap_huge_pmd(): zap huge zero pmd



We don't have a mapped page to zap in the huge zero page case.  Let's just
clear the pmd and remove it from the TLB.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@linux.intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4a6c1297
mm/huge_memory.c: +13 −8
This appears to be a whitespace-ignoring view of the diff: the pre-existing lines that merely gained a level of indentation inside the new `else` branch show up as unchanged context, which is why fewer lines carry `+`/`-` markers than the +13 −8 total suggests.

```diff
@@ -1085,8 +1085,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		pmd_t orig_pmd;
 		pgtable = pgtable_trans_huge_withdraw(tlb->mm);
 		orig_pmd = pmdp_get_and_clear(tlb->mm, addr, pmd);
-		page = pmd_page(orig_pmd);
 		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
+		if (is_huge_zero_pmd(orig_pmd)) {
+			tlb->mm->nr_ptes--;
+			spin_unlock(&tlb->mm->page_table_lock);
+		} else {
+			page = pmd_page(orig_pmd);
 			page_remove_rmap(page);
 			VM_BUG_ON(page_mapcount(page) < 0);
 			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
@@ -1094,6 +1098,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			tlb->mm->nr_ptes--;
 			spin_unlock(&tlb->mm->page_table_lock);
 			tlb_remove_page(tlb, page);
+		}
 		pte_free(tlb->mm, pgtable);
 		ret = 1;
 	}
```