Commit fd6298c8 authored by Pavankumar Kondeti

tracing: rework sched_preempt_disable trace point implementation



The current implementation of the sched_preempt_disable trace point
fails to detect the preemption disable time inside spin_lock_bh()
and spin_unlock_bh(). This is because __local_bh_disable_ip() calls
__preempt_count_add() directly, which skips the preemption disable
tracking. Instead of relying on updates to the preempt count, it
is better to hook the preemption disable tracking directly into the
preemptoff tracer. This is similar to how irq disable tracking
is done.

The current code handles the false positives coming from __schedule()
by directly resetting the time stamp. This requires an interface
from the scheduler into the preemptoff tracer. To avoid this additional
interface, this patch detects the same condition by comparing
the task pid and context switch count. If they do not match between
preemption disable and enable, the preemption disable time is not
tracked, since a context switch occurred in between.

Due to this rework, the sched_preempt_disable trace point location is
changed to

/sys/kernel/debug/tracing/events/preemptirq/sched_preempt_disable/enable

Change-Id: I7f58d316b7c54bc7a54102bfeb678404bda010d4
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
parent 5e3f09ce