
Commit d942fb6c authored by Peter Zijlstra, committed by Ingo Molnar

sched: fix sync wakeups



Pawel Dziekonski reported that the openssl benchmark and his
quantum chemistry application both show slowdowns due to the
scheduler under-parallelizing execution.

The reason is that pipe wakeups still do 'sync' wakeups, which
override the normal buddy wakeup logic - even when waker and
wakee are only loosely coupled.

Fix an inversion of logic in the buddy wakeup code.

Reported-by: Pawel Dziekonski <dzieko@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent f90d4118
kernel/sched.c: +4 −0
@@ -2266,6 +2266,10 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
 	if (!sched_feat(SYNC_WAKEUPS))
 		sync = 0;
 
+	if (!sync && (current->se.avg_overlap < sysctl_sched_migration_cost &&
+			    p->se.avg_overlap < sysctl_sched_migration_cost))
+		sync = 1;
+
 #ifdef CONFIG_SMP
 	if (sched_feat(LB_WAKEUP_UPDATE)) {
 		struct sched_domain *sd;
kernel/sched_fair.c: +2 −9
@@ -1179,20 +1179,15 @@ wake_affine(struct sched_domain *this_sd, struct rq *this_rq,
 	    int idx, unsigned long load, unsigned long this_load,
 	    unsigned int imbalance)
 {
-	struct task_struct *curr = this_rq->curr;
-	struct task_group *tg;
 	unsigned long tl = this_load;
 	unsigned long tl_per_task;
+	struct task_group *tg;
 	unsigned long weight;
 	int balanced;
 
 	if (!(this_sd->flags & SD_WAKE_AFFINE) || !sched_feat(AFFINE_WAKEUPS))
 		return 0;
 
-	if (sync && (curr->se.avg_overlap > sysctl_sched_migration_cost ||
-			p->se.avg_overlap > sysctl_sched_migration_cost))
-		sync = 0;
-
 	/*
 	 * If sync wakeup then subtract the (maximum possible)
 	 * effect of the currently running task from the load
@@ -1419,9 +1414,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int sync)
 	if (!sched_feat(WAKEUP_PREEMPT))
 		return;
 
-	if (sched_feat(WAKEUP_OVERLAP) && (sync ||
-			(se->avg_overlap < sysctl_sched_migration_cost &&
-			 pse->avg_overlap < sysctl_sched_migration_cost))) {
+	if (sched_feat(WAKEUP_OVERLAP) && sync) {
 		resched_task(curr);
 		return;
 	}