
Commit 9bbeacf5 authored by Jiri Olsa, committed by Ingo Molnar

kprobes, x86: Disable irqs during optimized callback



Disable irqs during the optimized callback, so we don't miss any in-irq kprobes.

The following commands:

 # cd /debug/tracing/
 # echo "p mutex_unlock" >> kprobe_events
 # echo "p _raw_spin_lock" >> kprobe_events
 # echo "p smp_apic_timer_interrupt" >> kprobe_events
 # echo 1 > events/enable

cause optimized kprobes to be missed. None are missed
with the fix applied.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/20110511110613.GB2390@jolsa.brq.redhat.com


Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 693d92a1
+3 −2
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 					 struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	preempt_disable();
+	local_irq_save(flags);
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
-	preempt_enable_no_resched();
+	local_irq_restore(flags);
 }
 
 static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)