
Commit d6692183 authored by Kenneth W Chen, committed by Linus Torvalds

[PATCH] fix extra page ref count in follow_hugetlb_page



git-commit: d5d4b0aa
"[PATCH] optimize follow_hugetlb_page" breaks mlock on hugepage areas.

I misinterpreted the "pages" argument and made get_page() unconditional.  It
should only take a page reference when the "pages" argument is non-NULL.

Credit goes to Adam Litke who spotted the bug.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Adam Litke <agl@us.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 93fac704
+3 −2
@@ -697,9 +697,10 @@ int follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		pfn_offset = (vaddr & ~HPAGE_MASK) >> PAGE_SHIFT;
 		page = pte_page(*pte);
 same_page:
-		get_page(page);
-		if (pages)
+		if (pages) {
+			get_page(page);
 			pages[i] = page + pfn_offset;
+		}
 
 		if (vmas)
 			vmas[i] = vma;