
Commit 230bfa57 authored by Joonwoo Park, committed by Matt Wagantall

sched: fix race conditions where HMP tunables change



When multiple threads race to update HMP scheduler tunables, a tunable
that requires a big/small task count fix-up can currently be updated
without that fix-up, which can trigger a BUG_ON().
This happens because sched_hmp_proc_update_handler() acquires the rq
locks and performs the fix-up only when the tunable being written
affects the big/small task counts, even though it always calls
set_hmp_defaults(), which re-calculates all derived values from the
sysctl inputs at that point.  Consequently, a thread updating a tunable
that does not affect the big/small task counts can call
set_hmp_defaults() and pick up a count-affecting sysctl value that
another thread has just stored but not yet fixed up.

Example of the problem scenario:
thread 0                               thread 1
Sets sched_small_task - needs fix-up.
                                       Sets sched_init_task_load - no
                                       fix-up needed.
proc_dointvec_minmax() completes, so
sysctl_sched_small_task now holds the
new value.
                                       Calls set_hmp_defaults() without
                                       lock/fix-up. set_hmp_defaults()
                                       still updates sched_small_task
                                       with the new
                                       sysctl_sched_small_task value
                                       written by thread 0.
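The interleaving above can be replayed in a single-threaded userspace C model (names and values are hypothetical; the two "threads" are just explicit sequential steps, but the state they leave behind matches the race):

```c
#include <assert.h>

/* Hypothetical model: one sysctl input that needs a big/small task
 * count fix-up, and the derived value set_hmp_defaults() computes. */
static unsigned int sysctl_sched_small_task = 20;
static unsigned int sched_small_task = 20;
static int fixup_done = 1;      /* task counts are consistent initially */

/* Re-calculates ALL derived tunables from the sysctl inputs, regardless
 * of which tunable the caller actually changed. */
static void set_hmp_defaults(void)
{
	sched_small_task = sysctl_sched_small_task;
}

/* The racy interleaving from the commit message, step by step. */
static void replay_race(void)
{
	/* thread 0: proc_dointvec_minmax() stores the new value ... */
	sysctl_sched_small_task = 40;
	fixup_done = 0;         /* ... but has not done the fix-up yet */

	/* thread 1: updating an unrelated tunable, calls
	 * set_hmp_defaults() without the rq locks or the fix-up that
	 * thread 0 still owes. */
	set_hmp_defaults();
}
```

After replay_race(), sched_small_task already holds thread 0's new value while fixup_done is still 0 — the inconsistent state the BUG_ON() catches.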

Fix this by wrapping the proc update handler in the already existing
policy mutex.
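The shape of that fix — take the mutex up front and funnel every exit through a single unlock site — can be sketched in userspace C (pthread_mutex_t stands in for the kernel's policy_mutex; the tunable and its range check are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t policy_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned int tunable = 0;

/* Serialized update: snapshot, store, validation, and any fix-up of
 * derived state all happen under policy_mutex, and every early exit
 * goes through the one unlock at "done". */
static int update_tunable(unsigned int val)
{
	int ret = 0;
	unsigned int old_val;

	pthread_mutex_lock(&policy_mutex);

	old_val = tunable;
	tunable = val;

	if (val > 100) {        /* percentage tunable: reject, roll back */
		tunable = old_val;
		ret = -EINVAL;
		goto done;
	}

	/* ... fix-up of derived state would happen here, still locked ... */

done:
	pthread_mutex_unlock(&policy_mutex);
	return ret;
}
```

Because no path returns while holding the lock, a concurrent writer can never observe a stored sysctl value whose fix-up has not yet run.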

CRs-fixed: 812443
Change-Id: I7aa4c0efc1ca56e28dc0513480aca3264786d4f7
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
parent 49c0c6d9
+17 −11
@@ -3730,37 +3730,41 @@ int sched_hmp_proc_update_handler(struct ctl_table *table, int write,
 		loff_t *ppos)
 {
 	int ret;
+	unsigned int old_val;
 	unsigned int *data = (unsigned int *)table->data;
-	unsigned int old_val = *data;
 	int update_min_nice = 0;
 
+	mutex_lock(&policy_mutex);
+
+	old_val = *data;
+
 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 
 	if (ret || !write || !sched_enable_hmp)
-		return ret;
+		goto done;
 
 	if (write && (old_val == *data))
-		return 0;
+		goto done;
 
 	if (data == &sysctl_sched_min_runtime) {
 		sched_min_runtime = ((u64) sysctl_sched_min_runtime) * 1000;
-		return 0;
+		goto done;
 	}
 
-	if (data == (unsigned int *)&sysctl_sched_upmigrate_min_nice)
-		update_min_nice = 1;
-
-	if (update_min_nice) {
+	if (data == (unsigned int *)&sysctl_sched_upmigrate_min_nice) {
 		if ((*(int *)data) < -20 || (*(int *)data) > 19) {
 			*data = old_val;
-			return -EINVAL;
+			ret = -EINVAL;
+			goto done;
 		}
+		update_min_nice = 1;
 	} else {
 		/* all tunables other than min_nice are in percentage */
 		if (sysctl_sched_downmigrate_pct >
 		    sysctl_sched_upmigrate_pct || *data > 100) {
 			*data = old_val;
-			return -EINVAL;
+			ret = -EINVAL;
+			goto done;
 		}
 	}
 
@@ -3786,7 +3790,9 @@ int sched_hmp_proc_update_handler(struct ctl_table *table, int write,
 		put_online_cpus();
 	}
 
-	return 0;
+done:
+	mutex_unlock(&policy_mutex);
+	return ret;
 }
 
 /*