
Commit e17224bf authored by Nick Piggin, committed by Linus Torvalds

[PATCH] sched: less locking



During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues; the scan can take a non-trivial amount of time,
especially on very large systems.

Holding the runqueue lock only helps to stabilise ->nr_running, but that
doesn't buy much: tasks being woken will simply get held up on the runqueue
lock, so ->nr_running would not give an accurate picture of runqueue load
in that case anyway.

What's more, ->nr_running (and possibly the cpu_load averages) of remote
runqueues won't be stable anyway, so load balancing is always an inexact
operation.
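
To make the resulting shape of load_balance() concrete, here is a minimal
user-space sketch of the same pattern (invented names, pthread mutexes
instead of runqueue spinlocks; an illustration of the idea, not the kernel
code): the scan for the busiest queue runs with no local lock held, and both
queue locks are taken, in a fixed address order to avoid ABBA deadlock, only
around the actual migration, which is what double_rq_lock()/double_rq_unlock()
do for runqueues.

/*
 * User-space sketch only: the queue type and helper names below are
 * invented for illustration, they are not the kernel's API.  The point
 * is the shape of the locking: scan with no local lock held, then lock
 * both queues (in a fixed address order) just around the migration.
 */
#include <pthread.h>

struct queue {
	pthread_mutex_t lock;
	int nr_running;		/* read unlocked during the scan: a hint only */
};

/* Lock two queues in a stable (address) order to avoid ABBA deadlock. */
static void double_queue_lock(struct queue *a, struct queue *b)
{
	if (a < b) {
		pthread_mutex_lock(&a->lock);
		pthread_mutex_lock(&b->lock);
	} else {
		pthread_mutex_lock(&b->lock);
		pthread_mutex_lock(&a->lock);
	}
}

static void double_queue_unlock(struct queue *a, struct queue *b)
{
	pthread_mutex_unlock(&a->lock);
	pthread_mutex_unlock(&b->lock);
}

/* Caller holds both locks; pull items until the queues are nearly even. */
static int move_items(struct queue *this_q, struct queue *busiest)
{
	int moved = 0;

	while (busiest->nr_running > this_q->nr_running + 1) {
		busiest->nr_running--;
		this_q->nr_running++;
		moved++;
	}
	return moved;
}

int balance(struct queue *this_q, struct queue **all, int n)
{
	struct queue *busiest = NULL;
	int i, moved = 0;

	/* Scan phase: no locks held; the counters are approximate anyway. */
	for (i = 0; i < n; i++) {
		if (all[i] == this_q)
			continue;
		if (!busiest || all[i]->nr_running > busiest->nr_running)
			busiest = all[i];
	}
	if (!busiest || busiest->nr_running <= this_q->nr_running)
		return 0;

	/* Move phase: take both locks only for the migration itself. */
	double_queue_lock(this_q, busiest);
	moved = move_items(this_q, busiest);
	double_queue_unlock(this_q, busiest);

	return moved;
}

The unlocked reads in the scan phase are exactly the trade-off the message
above describes: ->nr_running is only ever a snapshot, so there is little to
gain from pinning the local queue while collecting it.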

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent d6d5cfaf
+2 −7
@@ -2075,7 +2075,6 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 	int nr_moved, all_pinned = 0;
 	int active_balance = 0;
 
-	spin_lock(&this_rq->lock);
 	schedstat_inc(sd, lb_cnt[idle]);
 
 	group = find_busiest_group(sd, this_cpu, &imbalance, idle);
@@ -2102,18 +2101,16 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 		 * still unbalanced. nr_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.
 		 */
-		double_lock_balance(this_rq, busiest);
+		double_rq_lock(this_rq, busiest);
 		nr_moved = move_tasks(this_rq, this_cpu, busiest,
 					imbalance, sd, idle, &all_pinned);
-		spin_unlock(&busiest->lock);
+		double_rq_unlock(this_rq, busiest);
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(all_pinned))
 			goto out_balanced;
 	}
 
-	spin_unlock(&this_rq->lock);
-
 	if (!nr_moved) {
 		schedstat_inc(sd, lb_failed[idle]);
 		sd->nr_balance_failed++;
@@ -2156,8 +2153,6 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 	return nr_moved;
 
 out_balanced:
-	spin_unlock(&this_rq->lock);
-
 	schedstat_inc(sd, lb_balanced[idle]);
 
 	sd->nr_balance_failed = 0;
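
The helper swap in the second hunk follows directly from dropping the lock:
double_lock_balance() assumes the caller already holds this_rq->lock and only
has to pick up busiest->lock, briefly dropping and re-taking this_rq->lock
when the lock ordering demands it, whereas double_rq_lock() starts with no
runqueue locks held and takes both in a fixed order. Once this_rq->lock is no
longer taken at the top of load_balance(), only the latter is correct.
Roughly (a paraphrased sketch of the scheduler's helpers from this era, not a
verbatim quote):

/* Sketch, not verbatim: caller already holds this_rq->lock and wants
 * busiest->lock as well, respecting the address ordering. */
static void double_lock_balance(runqueue_t *this_rq, runqueue_t *busiest)
{
	if (unlikely(!spin_trylock(&busiest->lock))) {
		if (busiest < this_rq) {
			spin_unlock(&this_rq->lock);
			spin_lock(&busiest->lock);
			spin_lock(&this_rq->lock);
		} else
			spin_lock(&busiest->lock);
	}
}

/* Sketch, not verbatim: no runqueue locks held on entry; take both in
 * address order so that concurrent balancers cannot deadlock. */
static void double_rq_lock(runqueue_t *rq1, runqueue_t *rq2)
{
	if (rq1 == rq2)
		spin_lock(&rq1->lock);
	else if (rq1 < rq2) {
		spin_lock(&rq1->lock);
		spin_lock(&rq2->lock);
	} else {
		spin_lock(&rq2->lock);
		spin_lock(&rq1->lock);
	}
}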