
Commit 41a40165 authored by Johannes Weiner, committed by Vinayak Menon

mm: don't avoid high-priority reclaim on unreclaimable nodes

Commit 246e87a9 ("memcg: fix get_scan_count() for small targets")
sought to avoid high reclaim priorities for kswapd by forcing it to scan
a minimum amount of pages when lru_pages >> priority yielded nothing.

Commit b95a2f2d ("mm: vmscan: convert global reclaim to per-memcg
LRU lists"), due to switching global reclaim to a round-robin scheme
over all cgroups, had to restrict this forceful behavior to
unreclaimable zones in order to prevent massive overreclaim with many
cgroups.

The latter patch effectively neutered the behavior completely for all
but extreme memory pressure.  But in those situations we might as well
drop the reclaimers to lower priority levels.  Remove the check.

Change-Id: I23a5889a202303e496eefe364c24049b55b8a4e8
Link: http://lkml.kernel.org/r/20170228214007.5621-6-hannes@cmpxchg.org


Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Jia He <hejianet@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: a2d7f8e461881394167bafb616112a96f5f567d0
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git


Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
parent b52d8b61
+5 −14

@@ -2206,22 +2206,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 	int pass;

 	/*
-	 * If the zone or memcg is small, nr[l] can be 0.  This
-	 * results in no scanning on this priority and a potential
-	 * priority drop.  Global direct reclaim can go to the next
-	 * zone and tends to have no problems. Global kswapd is for
-	 * zone balancing and it needs to scan a minimum amount. When
+	 * If the zone or memcg is small, nr[l] can be 0. When
 	 * reclaiming for a memcg, a priority drop can cause high
-	 * latencies, so it's better to scan a minimum amount there as
-	 * well.
+	 * latencies, so it's better to scan a minimum amount. When a
+	 * cgroup has already been deleted, scrape out the remaining
+	 * cache forcefully to get rid of the lingering state.
 	 */
-	if (current_is_kswapd()) {
-		if (!pgdat_reclaimable(pgdat))
-			force_scan = true;
-		if (!mem_cgroup_online(memcg))
-			force_scan = true;
-	}
-	if (!global_reclaim(sc))
+	if (!global_reclaim(sc) || !mem_cgroup_online(memcg))
 		force_scan = true;

 	/* If we have no swap space, do not bother scanning anon pages. */