
Commit 6050cb0b authored by Frederic Weisbecker, committed by Ingo Molnar

perf: Fix branch stack refcount leak on callchain init failure



On callchain buffer allocation failure, free_event() is
called and all the accounting performed in perf_event_alloc()
for that event is cancelled.

But if the event uses branch stack sampling, it is also
unaccounted from the branch stack sampling events refcounts.

This is a bug because that accounting is only performed after
the callchain buffer allocation, so on failure it has not happened
yet and the branch stack sampling events refcount can go negative.

To fix this, move the branch stack event accounting before the
callchain buffer allocation.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1374539466-4799-2-git-send-email-fweisbec@gmail.com

Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 7d9ffa89
@@ -6567,6 +6567,12 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 			atomic_inc(&nr_comm_events);
 		if (event->attr.task)
 			atomic_inc(&nr_task_events);
+		if (has_branch_stack(event)) {
+			static_key_slow_inc(&perf_sched_events.key);
+			if (!(event->attach_state & PERF_ATTACH_TASK))
+				atomic_inc(&per_cpu(perf_branch_stack_events,
+						    event->cpu));
+		}
 		if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) {
 			err = get_callchain_buffers();
 			if (err) {
@@ -6574,12 +6580,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 				return ERR_PTR(err);
 			}
 		}
-		if (has_branch_stack(event)) {
-			static_key_slow_inc(&perf_sched_events.key);
-			if (!(event->attach_state & PERF_ATTACH_TASK))
-				atomic_inc(&per_cpu(perf_branch_stack_events,
-						    event->cpu));
-		}
 	}
 
 	return event;