
Commit 2b618a16 authored by Vlastimil Babka, committed by Suren Baghdasaryan

UPSTREAM: mm: rename and change semantics of nr_indirectly_reclaimable_bytes

The vmstat counter NR_INDIRECTLY_RECLAIMABLE_BYTES was introduced by
commit eb592546 ("mm: introduce NR_INDIRECTLY_RECLAIMABLE_BYTES") with
the goal of accounting objects that can be reclaimed, but cannot be
allocated via a SLAB_RECLAIM_ACCOUNT cache.  This is now possible via
kmalloc() with __GFP_RECLAIMABLE flag, and the dcache external names user
is converted.

The counter is however still useful for accounting direct page allocations
(i.e.  not slab) with a shrinker, such as the ION page pool.  So keep it,
and:

- change granularity to pages to be more like other counters; sub-page
  allocations should be able to use kmalloc
- rename the counter to NR_KERNEL_MISC_RECLAIMABLE
- expose the counter again in vmstat as "nr_kernel_misc_reclaimable"; we can
  again remove the check for not printing "hidden" counters

Link: http://lkml.kernel.org/r/20180731090649.16028-5-vbabka@suse.cz


Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Vijayanand Jitta <vjitta@codeaurora.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

(cherry picked from commit b29940c1abd7a4c3abeb926df0a5ec84d6902d47)

Bug: 138148041
Test: verify KReclaimable accounting after ION allocation+deallocation
Change-Id: I6694939516e5dfec388e038131021e3885c2640b
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
parent 0ca673b3
+4 −4
@@ -36,8 +36,8 @@ static void ion_page_pool_add(struct ion_page_pool *pool, struct page *page)
 		pool->low_count++;
 	}
 
-	mod_node_page_state(page_pgdat(page), NR_INDIRECTLY_RECLAIMABLE_BYTES,
-			    (1 << (PAGE_SHIFT + pool->order)));
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+							1 << pool->order);
 	mutex_unlock(&pool->mutex);
 }

@@ -56,8 +56,8 @@ static struct page *ion_page_pool_remove(struct ion_page_pool *pool, bool high)
 	}
 
 	list_del(&page->lru);
-	mod_node_page_state(page_pgdat(page), NR_INDIRECTLY_RECLAIMABLE_BYTES,
-			    -(1 << (PAGE_SHIFT + pool->order)));
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+							-(1 << pool->order));
 	return page;
 }

+1 −1
@@ -184,7 +184,7 @@ enum node_stat_item {
 	NR_VMSCAN_IMMEDIATE,	/* Prioritise for reclaim when writeback ends */
 	NR_DIRTIED,		/* page dirtyings since bootup */
 	NR_WRITTEN,		/* page writings since bootup */
-	NR_INDIRECTLY_RECLAIMABLE_BYTES, /* measured in bytes */
+	NR_KERNEL_MISC_RECLAIMABLE,	/* reclaimable non-slab kernel pages */
 	NR_VM_NODE_STAT_ITEMS
 };

+7 −12
@@ -4804,6 +4804,7 @@ long si_mem_available(void)
 	unsigned long pagecache;
 	unsigned long wmark_low = 0;
 	unsigned long pages[NR_LRU_LISTS];
+	unsigned long reclaimable;
 	struct zone *zone;
 	int lru;

@@ -4829,19 +4830,13 @@ long si_mem_available(void)
 	available += pagecache;
 
 	/*
-	 * Part of the reclaimable slab consists of items that are in use,
-	 * and cannot be freed. Cap this estimate at the low watermark.
+	 * Part of the reclaimable slab and other kernel memory consists of
+	 * items that are in use, and cannot be freed. Cap this estimate at the
+	 * low watermark.
 	 */
-	available += global_node_page_state(NR_SLAB_RECLAIMABLE) -
-		     min(global_node_page_state(NR_SLAB_RECLAIMABLE) / 2,
-			 wmark_low);
-
-	/*
-	 * Part of the kernel memory, which can be released under memory
-	 * pressure.
-	 */
-	available += global_node_page_state(NR_INDIRECTLY_RECLAIMABLE_BYTES) >>
-		PAGE_SHIFT;
+	reclaimable = global_node_page_state(NR_SLAB_RECLAIMABLE) +
+			global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
+	available += reclaimable - min(reclaimable / 2, wmark_low);
 
 	if (available < 0)
 		available = 0;
+1 −2
@@ -685,8 +685,7 @@ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
 		 * Part of the kernel memory, which can be released
 		 * under memory pressure.
 		 */
-		free += global_node_page_state(
-			NR_INDIRECTLY_RECLAIMABLE_BYTES) >> PAGE_SHIFT;
+		free += global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
 
 		/*
 		 * Leave reserved pages. The pages are not for anonymous pages.
+1 −5
@@ -1165,7 +1165,7 @@ const char * const vmstat_text[] = {
 	"nr_vmscan_immediate_reclaim",
 	"nr_dirtied",
 	"nr_written",
-	"", /* nr_indirectly_reclaimable */
+	"nr_kernel_misc_reclaimable",
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
@@ -1709,10 +1709,6 @@ static int vmstat_show(struct seq_file *m, void *arg)
 	unsigned long *l = arg;
 	unsigned long off = l - (unsigned long *)m->private;
 
-	/* Skip hidden vmstat items. */
-	if (*vmstat_text[off] == '\0')
-		return 0;
-
 	seq_puts(m, vmstat_text[off]);
 	seq_put_decimal_ull(m, " ", *l);
 	seq_putc(m, '\n');