
Commit b555749a authored by Andrew Morton, committed by Linus Torvalds

vmscan: shrink_active_list(): reduce lru_lock hold time



The three statements moved above the spin_lock_irq() call manipulate only local
variables and do not need lock coverage.

Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 1e9e6365
+7 −7
@@ -1237,6 +1237,13 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 		list_add(&page->lru, &l_inactive);
 	}
 
+	/*
+	 * Move the pages to the [file or anon] inactive list.
+	 */
+	pagevec_init(&pvec, 1);
+	pgmoved = 0;
+	lru = LRU_BASE + file * LRU_FILE;
+
 	spin_lock_irq(&zone->lru_lock);
 	/*
 	 * Count referenced pages from currently used mappings as
@@ -1247,13 +1254,6 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 	if (scan_global_lru(sc))
 		zone->recent_rotated[!!file] += pgmoved;
 
-	/*
-	 * Move the pages to the [file or anon] inactive list.
-	 */
-	pagevec_init(&pvec, 1);
-
-	pgmoved = 0;
-	lru = LRU_BASE + file * LRU_FILE;
 	while (!list_empty(&l_inactive)) {
 		page = lru_to_page(&l_inactive);
 		prefetchw_prev_lru_page(page, &l_inactive, flags);