
Commit 3da367c3 authored by Shaohua Li, committed by Linus Torvalds

vmscan: add block plug for page reclaim



A per-task block plug can reduce block queue lock contention and increase
request merging.  Currently page reclaim doesn't use it.  I originally
thought page reclaim didn't need it, because the kswapd thread count is
limited and file cache writeback is mostly done by the flusher threads.

When I test a workload with heavy swapping on a 4-node machine, every CPU
ends up doing direct page reclaim and swap.  This causes block queue lock
contention.  In my test, without the patch below, CPU utilization is about
2% ~ 7%.  With the patch, CPU utilization is about 1% ~ 3%.  Disk
throughput is unchanged.  This should also improve normal kswapd writeback
and file cache writeback (by increasing request merging, for example), but
the effect might be less obvious there for the reasons explained above.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 3fa36acb
+3 −0
@@ -2005,12 +2005,14 @@ static void shrink_zone(int priority, struct zone *zone,
 	enum lru_list l;
 	unsigned long nr_reclaimed, nr_scanned;
 	unsigned long nr_to_reclaim = sc->nr_to_reclaim;
+	struct blk_plug plug;
 
 restart:
 	nr_reclaimed = 0;
 	nr_scanned = sc->nr_scanned;
 	get_scan_count(zone, sc, nr, priority);
 
+	blk_start_plug(&plug);
 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
 					nr[LRU_INACTIVE_FILE]) {
 		for_each_evictable_lru(l) {
@@ -2034,6 +2036,7 @@ static void shrink_zone(int priority, struct zone *zone,
 		if (nr_reclaimed >= nr_to_reclaim && priority < DEF_PRIORITY)
 			break;
 	}
+	blk_finish_plug(&plug);
 	sc->nr_reclaimed += nr_reclaimed;
 
 	/*