
Commit fb58bac5 authored by Peter Zijlstra, committed by Ingo Molnar

sched: Remove unnecessary RCU exclusion



As Nick pointed out, and as I realized myself while doing:
   sched: Fix balance vs hotplug race
the patch:
   sched: for_each_domain() vs RCU

is wrong: sched_domains are only freed after synchronize_sched(), which
means disabling preemption is enough to keep them alive.

Reported-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 6cecd084
+2 −7
@@ -1403,7 +1403,6 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 		new_cpu = prev_cpu;
 	}
 
-	rcu_read_lock();
 	for_each_domain(cpu, tmp) {
 		/*
 		 * If power savings logic is enabled for a domain, see if we
@@ -1484,10 +1483,8 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 			update_shares(tmp);
 	}
 
-	if (affine_sd && wake_affine(affine_sd, p, sync)) {
-		new_cpu = cpu;
-		goto out;
-	}
+	if (affine_sd && wake_affine(affine_sd, p, sync))
+		return cpu;
 
 	while (sd) {
 		int load_idx = sd->forkexec_idx;
@@ -1528,8 +1525,6 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 		/* while loop will break here if sd == NULL */
 	}
 
-out:
-	rcu_read_unlock();
 	return new_cpu;
 }
 #endif /* CONFIG_SMP */