
Commit fdccc3fb authored by leilei.lin, committed by Ingo Molnar

perf/core: Reduce context switch overhead

Skip most of the PMU context switching overhead when ctx->nr_events is 0.

A 50% performance overhead was observed under an extreme test case.

Signed-off-by: leilei.lin <leilei.lin@alibaba-inc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: alexander.shishkin@linux.intel.com
Cc: eranian@gmail.com
Cc: jolsa@redhat.com
Cc: linxiulei@gmail.com
Cc: yang_oliver@hotmail.com
Link: http://lkml.kernel.org/r/20170809002921.69813-1-leilei.lin@alibaba-inc.com

[ Rewrote the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent ab027620
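
The change amounts to a fast-path guard: take ctx->lock, test whether the context has any events at all, and skip the PMU disable/sched-in/enable sequence entirely when it does not. Below is a minimal userspace sketch of that pattern, assuming a pthread mutex in place of ctx->lock; every name in it is illustrative, only the shape matches the kernel change.

/*
 * Sketch (not kernel code): test the event count under the lock and
 * bail out before doing any expensive PMU work when it is zero.
 * Testing under the lock serializes against a concurrent "install"
 * that makes the count non-zero.
 */
#include <pthread.h>

struct ctx {
	pthread_mutex_t	lock;		/* stands in for ctx->lock      */
	int		nr_events;	/* stands in for ctx->nr_events */
};

/* Placeholder for the costly disable/sched-in/enable sequence. */
static void expensive_pmu_switch(struct ctx *c)
{
	(void)c;
}

static void sched_in(struct ctx *c)
{
	pthread_mutex_lock(&c->lock);
	if (!c->nr_events)
		goto unlock;	/* fast path: nothing to schedule */

	expensive_pmu_switch(c);
unlock:
	pthread_mutex_unlock(&c->lock);
}

The actual change, shown in the hunks below, applies exactly this shape inside perf_event_context_sched_in().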
Loading
Loading
Loading
Loading
kernel/events/core.c +9 −0
@@ -3211,6 +3211,13 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 		return;
 
 	perf_ctx_lock(cpuctx, ctx);
+	/*
+	 * We must check ctx->nr_events while holding ctx->lock, such
+	 * that we serialize against perf_install_in_context().
+	 */
+	if (!ctx->nr_events)
+		goto unlock;
+
 	perf_pmu_disable(ctx->pmu);
 	/*
 	 * We want to keep the following priority order:
@@ -3224,6 +3231,8 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 	perf_event_sched_in(cpuctx, ctx, task);
 	perf_pmu_enable(ctx->pmu);
+
+unlock:
 	perf_ctx_unlock(cpuctx, ctx);
 }
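
The placement of the test matters: perf_install_in_context() adds events to a context under the same ctx->lock, which is why the new comment insists that nr_events be checked while holding it. For contrast, a hypothetical unlocked variant (illustration only; broken_sched_in and its body are not part of the patch) would leave a window in which the first event could be installed and then silently skipped:

/*
 * Hypothetical, broken ordering -- NOT what the patch does. With an
 * unlocked test, a concurrent perf_install_in_context() can add the
 * context's first event in the marked window, and this sched-in path
 * would return without scheduling it.
 */
static void broken_sched_in(struct perf_event_context *ctx,
			    struct perf_cpu_context *cpuctx)
{
	if (!ctx->nr_events)		/* unlocked read sees 0 ...        */
		return;			/* ... first event installed here  */

	perf_ctx_lock(cpuctx, ctx);
	/* ... schedule events in ... */
	perf_ctx_unlock(cpuctx, ctx);
}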