
Commit 312722cb authored by Johannes Weiner, committed by Linus Torvalds

mm: memcontrol: shorten the page statistics update slowpath



While moving charges from one memcg to another, page stat updates must
acquire the old memcg's move_lock to prevent double accounting.  That
situation is denoted by an elevated memcg->moving_account.  However, the
charge moving code raises this counter much too early, even before
summing up the RSS and pre-allocating the destination charges.

Shorten this slowpath mode by increasing memcg->moving_account only right
before walking the task's address space with the intention of actually
moving the pages.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Reviewed-by: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e544a4e7
+8 −13
@@ -5333,8 +5333,6 @@ static void __mem_cgroup_clear_mc(void)

static void mem_cgroup_clear_mc(void)
{
	struct mem_cgroup *from = mc.from;

	/*
	 * we must clear moving_task before waking up waiters at the end of
	 * task migration.
@@ -5345,8 +5343,6 @@ static void mem_cgroup_clear_mc(void)
	mc.from = NULL;
	mc.to = NULL;
	spin_unlock(&mc.lock);

	atomic_dec(&from->moving_account);
}

static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
@@ -5380,15 +5376,6 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
			VM_BUG_ON(mc.moved_charge);
			VM_BUG_ON(mc.moved_swap);

			/*
			 * Signal mem_cgroup_begin_page_stat() to take
			 * the memcg's move_lock while we're moving
			 * its pages to another memcg.  Then wait for
			 * already started RCU-only updates to finish.
			 */
			atomic_inc(&from->moving_account);
			synchronize_rcu();

			spin_lock(&mc.lock);
			mc.from = from;
			mc.to = memcg;
@@ -5520,6 +5507,13 @@ static void mem_cgroup_move_charge(struct mm_struct *mm)
	struct vm_area_struct *vma;

	lru_add_drain_all();
	/*
	 * Signal mem_cgroup_begin_page_stat() to take the memcg's
	 * move_lock while we're moving its pages to another memcg.
	 * Then wait for already started RCU-only updates to finish.
	 */
	atomic_inc(&mc.from->moving_account);
	synchronize_rcu();
retry:
	if (unlikely(!down_read_trylock(&mm->mmap_sem))) {
		/*
@@ -5552,6 +5546,7 @@ static void mem_cgroup_move_charge(struct mm_struct *mm)
			break;
	}
	up_read(&mm->mmap_sem);
	atomic_dec(&mc.from->moving_account);
}

static void mem_cgroup_move_task(struct cgroup_subsys_state *css,