
Commit bea04b07 authored by Jianyu Zhan, committed by Linus Torvalds

mm: use the light version __mod_zone_page_state in mlocked_vma_newpage()



mlocked_vma_newpage() is called with the pte lock held (a spinlock), which
implies preemption is disabled, and the vm stat counter is not modified
from interrupt context, so we do not need the irq-safe
mod_zone_page_state() here; the light-weight __mod_zone_page_state() is
sufficient.

This patch also documents __mod_zone_page_state() and some of its
callsites.  The comment above __mod_zone_page_state() is from Hugh
Dickins, and acked by Christoph.

Most credit goes to Hugh and Christoph for clarifying the usage of
__mod_zone_page_state().

[akpm@linux-foundation.org: coding-style fixes]
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Jianyu Zhan <nasa4836@gmail.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e9ade569
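
As an aside, the rule the patch applies can be sketched roughly as follows. This is an illustrative example only, not code from the commit: the helper name is hypothetical, and it assumes only the 3.15-era APIs that appear in the diff below (spin_lock(), TestSetPageMlocked(), page_zone(), hpage_nr_pages(), __mod_zone_page_state()).

/* Illustrative sketch -- not part of this commit. */
static void mlock_newpage_stat_sketch(struct page *page, spinlock_t *ptl)
{
	spin_lock(ptl);		/* pte lock is a spinlock: preemption is now disabled */

	if (!TestSetPageMlocked(page))
		/*
		 * NR_MLOCK is not modified from interrupt context, and
		 * preemption is disabled here, so the cheaper irq-unsafe
		 * __mod_zone_page_state() is enough.  If either condition
		 * did not hold, the irq-safe mod_zone_page_state() would
		 * be needed instead.
		 */
		__mod_zone_page_state(page_zone(page), NR_MLOCK,
				      hpage_nr_pages(page));

	spin_unlock(ptl);
}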
+6 −1
@@ -201,7 +201,12 @@ static inline int mlocked_vma_newpage(struct vm_area_struct *vma,
		return 0;

	if (!TestSetPageMlocked(page)) {
-		mod_zone_page_state(page_zone(page), NR_MLOCK,
+		/*
+		 * We use the irq-unsafe __mod_zone_page_stat because this
+		 * counter is not modified from interrupt context, and the pte
+		 * lock is held(spinlock), which implies preemption disabled.
+		 */
+		__mod_zone_page_state(page_zone(page), NR_MLOCK,
				    hpage_nr_pages(page));
		count_vm_event(UNEVICTABLE_PGMLOCKED);
	}
+11 −0
@@ -988,6 +988,12 @@ void do_page_add_anon_rmap(struct page *page,
{
	int first = atomic_inc_and_test(&page->_mapcount);
	if (first) {
+		/*
+		 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
+		 * these counters are not modified in interrupt context, and
+		 * pte lock(a spinlock) is held, which implies preemption
+		 * disabled.
+		 */
		if (PageTransHuge(page))
			__inc_zone_page_state(page,
					      NR_ANON_TRANSPARENT_HUGEPAGES);
@@ -1079,6 +1085,11 @@ void page_remove_rmap(struct page *page)
	/*
	 * Hugepages are not counted in NR_ANON_PAGES nor NR_FILE_MAPPED
	 * and not charged by memcg for now.
+	 *
+	 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
+	 * these counters are not modified in interrupt context, and
+	 * these counters are not modified in interrupt context, and
+	 * pte lock(a spinlock) is held, which implies preemption disabled.
	 */
	if (unlikely(PageHuge(page)))
		goto out;
+3 −1
@@ -207,7 +207,9 @@ void set_pgdat_percpu_threshold(pg_data_t *pgdat,
}

/*
- * For use when we know that interrupts are disabled.
+ * For use when we know that interrupts are disabled,
+ * or when we know that preemption is disabled and that
+ * particular counter cannot be updated from interrupt context.
 */
void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
				int delta)
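
As a usage note on the comment added above, the two situations it permits can be illustrated roughly as follows. This is a sketch only, not code from the commit; the two wrapper functions are made-up names, and NR_MLOCK is used merely as a sample counter.

/* Case 1: interrupts are known to be disabled around the update. */
static void stat_update_irqs_off_sketch(struct zone *zone)
{
	unsigned long flags;

	local_irq_save(flags);
	__mod_zone_page_state(zone, NR_MLOCK, 1);
	local_irq_restore(flags);
}

/*
 * Case 2: preemption is disabled (e.g. a spinlock such as the pte lock
 * is held) and the counter is never modified from interrupt context.
 */
static void stat_update_preempt_off_sketch(struct zone *zone, spinlock_t *lock)
{
	spin_lock(lock);
	__mod_zone_page_state(zone, NR_MLOCK, 1);
	spin_unlock(lock);
}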