
Commit ebe945c2 authored by Glauber Costa, committed by Linus Torvalds

memcg: add comments clarifying aspects of cache attribute propagation



This patch clarifies two aspects of cache attribute propagation.

First, the expected context for the for_each_memcg_cache_index macro in
memcontrol.h.  The usages already in the codebase are safe.  In mm/slub.c,
it is trivially safe because the lock is acquired right before the loop.
In mm/slab.c, it is less so: the lock is acquired by an outer function a
few steps back in the stack, so a VM_BUG_ON() is added to make sure it is
indeed safe.

A comment is also added to detail why we are returning the value of the
parent cache and ignoring the children's when we propagate the attributes.

Signed-off-by: Glauber Costa <glommer@parallels.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 92e79349
include/linux/memcontrol.h: +6 −0
@@ -422,6 +422,12 @@ static inline void sock_release_memcg(struct sock *sk)
extern struct static_key memcg_kmem_enabled_key;

extern int memcg_limited_groups_array_size;

/*
 * Helper macro to loop through all memcg-specific caches. Callers must still
 * check that the cache is valid (it is either valid or NULL).
 * The slab_mutex must be held while looping through these caches.
 */
#define for_each_memcg_cache_index(_idx)	\
	for ((_idx) = 0; (_idx) < memcg_limited_groups_array_size; (_idx)++)

mm/slab.c: +1 −0
@@ -4099,6 +4099,7 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
	if ((ret < 0) || !is_root_cache(cachep))
		return ret;

	VM_BUG_ON(!mutex_is_locked(&slab_mutex));
	for_each_memcg_cache_index(i) {
		c = cache_from_memcg(cachep, i);
		if (c)
mm/slub.c: +17 −4
@@ -5108,12 +5108,25 @@ static ssize_t slab_attr_store(struct kobject *kobj,
		if (s->max_attr_size < len)
			s->max_attr_size = len;

		/*
		 * This is a best effort propagation, so this function's return
		 * value will be determined by the parent cache only. This is
		 * basically because not all attributes will have a well
		 * defined semantics for rollbacks - most of the actions will
		 * have permanent effects.
		 *
		 * Returning the error value of any of the children that fail
		 * is not 100 % defined, in the sense that users seeing the
		 * error code won't be able to know anything about the state of
		 * the cache.
		 *
		 * Only returning the error code for the parent cache at least
		 * has well defined semantics. The cache being written to
		 * directly either failed or succeeded, in which case we loop
		 * through the descendants with best-effort propagation.
		 */
		for_each_memcg_cache_index(i) {
			struct kmem_cache *c = cache_from_memcg(s, i);
			if (c)
				attribute->store(c, buf, len);
		}