
Commit fa28515c authored by Joonwoo Park, committed by Gerrit - the friendly Code Review server

sched: fix race conditions where HMP tunables change

At present, when multiple threads race to update HMP scheduler tunables,
a tunable that requires a big/small task count fix-up can take effect
without that fix-up, which can trigger a BUG_ON().
This happens because sched_hmp_proc_update_handler() acquires the rq
locks and performs the fix-up only when the tunable being written
affects the big/small task counts, yet it always calls
set_hmp_defaults(), which recomputes the derived values from all sysctl
inputs as they stand at that point.  Consequently, a thread updating a
tunable that does not affect the big/small task counts can call
set_hmp_defaults() and consume a count-affecting sysctl value that
another thread has just written but not yet fixed up.

Example of the problem scenario:

  thread 0: writes sched_small_task, which needs a fix-up.
  thread 1: writes sched_init_task_load, which needs no fix-up.
  thread 0: proc_dointvec_minmax() completes, so sysctl_sched_small_task
            now holds the new value.
  thread 1: calls set_hmp_defaults() without the rq locks or fix-up;
            set_hmp_defaults() nevertheless updates sched_small_tasks
            from the new sysctl_sched_small_task value written by
            thread 0.
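
The handler structure behind this interleaving is sketched below.  This
is an illustrative reconstruction, not the verbatim msm source:
update_affects_task_counts() is a hypothetical helper standing in for
the real check, and the locking and fix-up steps are shown as comments.

static int sched_hmp_proc_update_handler(struct ctl_table *table, int write,
                                         void __user *buffer, size_t *lenp,
                                         loff_t *ppos)
{
        int ret;

        ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
        if (ret || !write)
                return ret;

        if (update_affects_task_counts(table)) {
                /* take all rq locks */
                set_hmp_defaults();
                /* fix up big/small task counts on every rq */
                /* release rq locks */
        } else {
                /*
                 * No fix-up path: set_hmp_defaults() runs without the rq
                 * locks, yet it recomputes the derived values from all
                 * sysctl inputs, including a count-affecting value that
                 * another writer has just stored but not yet fixed up.
                 * The resulting stale counts are what trip the BUG_ON().
                 */
                set_hmp_defaults();
        }

        return ret;
}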

Fix this by serializing the proc update handler with the already
existing policy mutex.
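
A minimal sketch of the shape of the fix, assuming the pre-existing
mutex is named policy_mutex (the name is inferred from the wording above
and may differ in the tree); the handler body itself is unchanged and is
elided to a comment:

static DEFINE_MUTEX(policy_mutex);      /* already present in the scheduler */

static int sched_hmp_proc_update_handler(struct ctl_table *table, int write,
                                         void __user *buffer, size_t *lenp,
                                         loff_t *ppos)
{
        int ret;

        /*
         * Serialize the whole sysctl-store / fix-up / set_hmp_defaults()
         * sequence so that no writer can consume another writer's raw
         * sysctl value before that writer's fix-up has completed.
         */
        mutex_lock(&policy_mutex);

        ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
        if (!ret && write) {
                /*
                 * Unchanged body: the conditional big/small task count
                 * fix-up under the rq locks, plus set_hmp_defaults().
                 */
                set_hmp_defaults();
        }

        mutex_unlock(&policy_mutex);
        return ret;
}

Taking the mutex around the entire handler, rather than only around
set_hmp_defaults(), also covers the proc_dointvec_minmax() store itself,
so a tunable's raw sysctl value and its fix-up become atomic with
respect to other writers.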

CRs-fixed: 812443
Change-Id: I7aa4c0efc1ca56e28dc0513480aca3264786d4f7
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
parent 4510bf0c