
Commit f3b39d47 authored by Miao Xie, committed by Linus Torvalds

cpusets: restructure the function cpuset_update_task_memory_state()



The kernel still allocates the page cache on the old nodes after its cpuset's
mems is modified while 'memory_spread_page' is set, or it does not spread the
page cache evenly over all the nodes that the faulting task is allowed to use
after memory_spread_page is set.  This is caused by the task's stale
mems_allowed and flags: the current kernel does not update them unless some
function invokes cpuset_update_task_memory_state(), which is sometimes too
late.  We must update the tasks' mems_allowed and flags in time.

Slab has the same problem.
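
For context, page-cache placement is decided entirely from the calling task's
own cached state (PF_SPREAD_PAGE and mems_allowed), so a task keeps allocating
under the old policy until something happens to call
cpuset_update_task_memory_state().  The sketch below is a simplified model of
the NUMA page-cache allocation path (cf. __page_cache_alloc(); the exact
allocator call varies between kernel versions) and is not part of this patch;
the slab allocator consults PF_SPREAD_SLAB analogously.

/*
 * Simplified sketch (not part of this patch): how the NUMA page-cache
 * allocator consults the task's spread flag, modeled on
 * __page_cache_alloc() in mm/filemap.c of this era.  The exact
 * allocator call differs between kernel versions.
 */
#include <linux/cpuset.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *page_cache_alloc_sketch(gfp_t gfp)
{
	if (cpuset_do_page_mem_spread()) {
		/* PF_SPREAD_PAGE is set in current->flags: round-robin the
		 * page over the nodes in current->mems_allowed. */
		int n = cpuset_mem_spread_node();

		return alloc_pages_node(n, gfp, 0);
	}
	/* Flag clear: fall back to the default local allocation policy. */
	return alloc_pages(gfp, 0);
}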

The following patches fix this bug by updating tasks' mems_allowed and spread
flags after their cpuset's mems or spread flag is changed.

This patch:

Extract a function from cpuset_update_task_memory_state().  It will be used
later to update tasks' page/slab spread flags after their cpuset's flag is
set.

Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Paul Menage <menage@google.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent dcf975d5
+19 −8
@@ -331,6 +331,24 @@ static void guarantee_online_mems(const struct cpuset *cs, nodemask_t *pmask)
 	BUG_ON(!nodes_intersects(*pmask, node_states[N_HIGH_MEMORY]));
 }
 
+/*
+ * update task's spread flag if cpuset's page/slab spread flag is set
+ *
+ * Called with callback_mutex/cgroup_mutex held
+ */
+static void cpuset_update_task_spread_flag(struct cpuset *cs,
+					struct task_struct *tsk)
+{
+	if (is_spread_page(cs))
+		tsk->flags |= PF_SPREAD_PAGE;
+	else
+		tsk->flags &= ~PF_SPREAD_PAGE;
+	if (is_spread_slab(cs))
+		tsk->flags |= PF_SPREAD_SLAB;
+	else
+		tsk->flags &= ~PF_SPREAD_SLAB;
+}
+
 /**
  * cpuset_update_task_memory_state - update task memory placement
  *
@@ -388,14 +406,7 @@ void cpuset_update_task_memory_state(void)
 		cs = task_cs(tsk); /* Maybe changed when task not locked */
 		guarantee_online_mems(cs, &tsk->mems_allowed);
 		tsk->cpuset_mems_generation = cs->mems_generation;
-		if (is_spread_page(cs))
-			tsk->flags |= PF_SPREAD_PAGE;
-		else
-			tsk->flags &= ~PF_SPREAD_PAGE;
-		if (is_spread_slab(cs))
-			tsk->flags |= PF_SPREAD_SLAB;
-		else
-			tsk->flags &= ~PF_SPREAD_SLAB;
+		cpuset_update_task_spread_flag(cs, tsk);
 		task_unlock(tsk);
 		mutex_unlock(&callback_mutex);
 		mpol_rebind_task(tsk, &tsk->mems_allowed);
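
Later patches in this series use this helper to refresh every task attached to
a cpuset as soon as the cpuset's page/slab spread flag is changed, instead of
waiting for the task to call cpuset_update_task_memory_state().  The sketch
below shows one possible shape of that update; it is illustrative only (the
function name is made up, locking is omitted, and it assumes the cgroup_iter_*
task iterators of this era rather than whatever mechanism the follow-up
patches actually use).

/*
 * Illustrative sketch only; not taken from the follow-up patches.
 * Walks every task attached to the cpuset's cgroup and applies the
 * newly extracted helper, assuming the cgroup_iter_* iterators of
 * this era.  Callers would need to hold the appropriate cpuset
 * locks (callback_mutex/cgroup_mutex), omitted here for brevity.
 */
#include <linux/cgroup.h>
#include <linux/sched.h>

static void update_tasks_spread_flag_sketch(struct cpuset *cs)
{
	struct cgroup_iter it;
	struct task_struct *tsk;

	cgroup_iter_start(cs->css.cgroup, &it);
	while ((tsk = cgroup_iter_next(cs->css.cgroup, &it)) != NULL)
		cpuset_update_task_spread_flag(cs, tsk);
	cgroup_iter_end(cs->css.cgroup, &it);
}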