
Commit 3ccb0123 authored by Steven Rostedt

tracing: Only run synchronize_sched() at instance deletion time

It has been reported that boot up with FTRACE_SELFTEST enabled can take a
very long time. There can be stalls of over a minute.

This was tracked down to the synchronize_sched() that is called when a system
call event is disabled. As the self tests enable and disable thousands of
events, synchronize_sched() ends up being called thousands of times.

The synchronize_sched() was added by commit d562aff9 ("tracing: Add support
for SOFT_DISABLE to syscall events"), which introduced this regression
in 3.13-rc1.

The synchronize_sched() is there to protect against the events being accessed
while a tracer instance is being deleted. When an instance is deleted,
all the events associated with it are unregistered. The synchronize_sched()
makes sure that no users are still running when it finishes.

Instead of calling synchronize_sched() for all syscall events, we only
need to call it once, after the events are unregistered and before the
instance is deleted. The event_mutex is held during this action to
prevent new users from enabling events.

Link: http://lkml.kernel.org/r/20131203124120.427b9661@gandalf.local.home

Reported-by: Petr Mladek <pmladek@suse.cz>
Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Acked-by: Petr Mladek <pmladek@suse.cz>
Tested-by: Petr Mladek <pmladek@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
parent dc1ccc48
+3 −0
@@ -2314,6 +2314,9 @@ int event_trace_del_tracer(struct trace_array *tr)
 	/* Disable any running events */
 	__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
 
+	/* Access to events are within rcu_read_lock_sched() */
+	synchronize_sched();
+
 	down_write(&trace_event_sem);
 	__trace_remove_event_dirs(tr);
 	debugfs_remove_recursive(tr->event_dir);
+0 −10
@@ -431,11 +431,6 @@ static void unreg_event_syscall_enter(struct ftrace_event_file *file,
 	if (!tr->sys_refcount_enter)
 		unregister_trace_sys_enter(ftrace_syscall_enter, tr);
 	mutex_unlock(&syscall_trace_lock);
-	/*
-	 * Callers expect the event to be completely disabled on
-	 * return, so wait for current handlers to finish.
-	 */
-	synchronize_sched();
 }
 
 static int reg_event_syscall_exit(struct ftrace_event_file *file,
@@ -474,11 +469,6 @@ static void unreg_event_syscall_exit(struct ftrace_event_file *file,
 	if (!tr->sys_refcount_exit)
 		unregister_trace_sys_exit(ftrace_syscall_exit, tr);
 	mutex_unlock(&syscall_trace_lock);
-	/*
-	 * Callers expect the event to be completely disabled on
-	 * return, so wait for current handlers to finish.
-	 */
-	synchronize_sched();
 }
 
 static int __init init_syscall_trace(struct ftrace_event_call *call)