
Commit a3c6d226 authored by Pavankumar Kondeti

sched/walt: Fix invalid access of CPU cycle counter callback



There is a potential reordering issue with CPU cycle counter
callback update and access.

The update path is invoked from the low level clock driver by
calling register_cpu_cycle_counter_cb().

register_cpu_cycle_counter_cb()
{
	cpu_cycle_counter_cb = *cb
	use_cycle_counter = true
}

The access/reader path is invoked from update_task_ravg()->
update_task_cpu_cycles().

update_task_cpu_cycles()
{
	if (use_cycle_counter)
		*cpu_cycle_counter_cb()
}

If the stores of cpu_cycle_counter_cb and use_cycle_counter are
re-ordered (either at compile time or execution time), there is
a possibility of accessing the callback function pointer before
the update.
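
Concretely, if the two stores become visible out of order, a reader
can take the branch before the callback pointer is published. A
hypothetical interleaving, using the snippets above:

	CPU0 (update path)              CPU1 (access path)
	------------------              ------------------
	use_cycle_counter = true
	                                if (use_cycle_counter)    /* sees true */
	                                        *cpu_cycle_counter_cb()  /* stale pointer */
	cpu_cycle_counter_cb = *cb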

This can be fixed by adding a write memory barrier on the update
side and a read memory barrier on the access side. However, this
reordering issue can only arise once, during boot. Adding a read
memory barrier to the access side, which sits in scheduler hot
paths, is therefore inefficient. Since the access path always runs
under a rq lock, instead acquire all CPUs' rq locks on the update
side.

Change-Id: I81715ab0255ff9f52410dcf707a04ea7c6ccf165
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
parent 802d279d
+6 −1
/*
- * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
@@ -323,14 +323,19 @@ update_window_start(struct rq *rq, u64 wallclock, int event)

 int register_cpu_cycle_counter_cb(struct cpu_cycle_counter_cb *cb)
 {
+	unsigned long flags;
 
 	mutex_lock(&cluster_lock);
 	if (!cb->get_cpu_cycle_counter) {
 		mutex_unlock(&cluster_lock);
 		return -EINVAL;
 	}
 
+	acquire_rq_locks_irqsave(cpu_possible_mask, &flags);
 	cpu_cycle_counter_cb = *cb;
 	use_cycle_counter = true;
+	release_rq_locks_irqrestore(cpu_possible_mask, &flags);
 
 	mutex_unlock(&cluster_lock);
 
 	return 0;