
Commit c27fe4c8 authored by KOSAKI Motohiro's avatar KOSAKI Motohiro Committed by Linus Torvalds

pagewalk: add locking-rule comments



Originally, walk_hugetlb_range() didn't require the caller to take any lock.
But commit d33b9f45 ("mm: hugetlb: fix hugepage memory leak in
walk_page_range") changed that rule, because it added a find_vma() call in
walk_hugetlb_range().

Any commit that changes a locking rule should document the new rule too.

[akpm@linux-foundation.org: clarify comment]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hiroyuki Kamezawa <kamezawa.hiroyuki@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 6c6d5280
+2 −0
@@ -911,6 +911,8 @@ unsigned long unmap_vmas(struct mmu_gather *tlb,
 * @pte_entry: if set, called for each non-empty PTE (4th-level) entry
 * @pte_hole: if set, called for each hole at all levels
 * @hugetlb_entry: if set, called for each hugetlb entry
 *		   *Caution*: The caller must hold mmap_sem() if @hugetlb_entry
 * 			      is used.
 *
 * (see walk_page_range for more details)
 */
+3 −0
@@ -181,6 +181,9 @@ static int walk_hugetlb_range(struct vm_area_struct *vma,
 *
 * If any callback returns a non-zero value, the walk is aborted and
 * the return value is propagated back to the caller. Otherwise 0 is returned.
 *
 * walk->mm->mmap_sem must be held for at least read if walk->hugetlb_entry
 * is !NULL.
 */
int walk_page_range(unsigned long addr, unsigned long end,
		    struct mm_walk *walk)
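
The documented rule can be illustrated with a short caller sketch. This is not part of the commit; it is a hedged, kernel-style fragment assuming the mm_walk API of this era (walk_page_range() taking explicit addr/end, and mm->mmap_sem as the per-mm semaphore). The callback name my_hugetlb_entry and the wrapper walk_with_hugetlb are hypothetical.

```c
/* Illustrative sketch only -- assumes the ~2.6.x mm_walk API. */
static int my_hugetlb_entry(pte_t *pte, unsigned long hmask,
			    unsigned long addr, unsigned long end,
			    struct mm_walk *walk)
{
	/* Inspect the hugetlb entry here. Returning non-zero
	 * would abort the walk. */
	return 0;
}

static void walk_with_hugetlb(struct mm_struct *mm,
			      unsigned long start, unsigned long end)
{
	struct mm_walk walk = {
		.hugetlb_entry	= my_hugetlb_entry,
		.mm		= mm,
	};

	/* Because .hugetlb_entry is non-NULL, mmap_sem must be held
	 * for at least read: walk_hugetlb_range() calls find_vma(),
	 * which requires the vma list to be stable. */
	down_read(&mm->mmap_sem);
	walk_page_range(start, end, &walk);
	up_read(&mm->mmap_sem);
}
```

Holding mmap_sem for read (rather than write) is sufficient here, since the walk only looks up vmas and does not modify the address space.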