
Commit 2dbf0116 authored by Andi Kleen, committed by Ingo Molnar

perf/x86/intel: Avoid checkpointed counters causing excessive TSX aborts



With checkpointed counters there can be a situation where the counter
overflows, aborts the transaction, is rolled back to a non-overflowing
checkpointed value, and then raises an interrupt. The interrupt handler
doesn't see the overflow because it has been checkpointed away.  This is
then a spurious PMI, typically with an ugly NMI message.  It can also lead
to excessive aborts.

Avoid this problem by:

- Using the full counter width for counting counters (earlier patch)

- Forbid sampling for checkpointed counters. It's not too useful anyway,
  since checkpointing is mainly meant for counting. The check is approximate
  (to still handle KVM), but should catch the majority of cases; a usage
  sketch follows below.

- On a PMI, always reset checkpointed counters back to zero.
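
For context (this sketch is not part of the commit), the counting-only usage
looks roughly as follows from user space: a checkpointed event opened with
sample_period = 0.  It assumes the Haswell PMU exposes the in_tx/in_tx_cp bits
as raw config bits 32/33 and uses the architectural core-cycles event code
0x3c purely as an illustration.  With in_tx_cp set, counts accumulated in
transactions that later abort are rolled back, which is what makes the
overflow/rollback interaction above possible; requesting a sampling period
between 1 and 0x7fffffff on such an event would be rejected by the
hsw_hw_config() check added further down.

/* Hedged sketch: counting-mode use of a checkpointed (IN_TXCP) event.
 * Assumes in_tx is raw config bit 32 and in_tx_cp is bit 33 on Haswell;
 * the event code 0x3c (core cycles) is only an illustration. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_RAW;
	/* illustrative event code; bits 32/33 select in_tx / in_tx_cp */
	attr.config = 0x3c | (1ULL << 32) | (1ULL << 33);
	attr.sample_period = 0;	/* counting mode: sampling would be rejected */

	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	/* ... run transactional code here ... */
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("checkpointed count: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}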

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1378438661-24765-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 06c939c1
+37 −0
@@ -1282,6 +1282,11 @@ static void intel_pmu_enable_event(struct perf_event *event)
	__x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE);
}

static inline bool event_is_checkpointed(struct perf_event *event)
{
	return (event->hw.config & HSW_IN_TX_CHECKPOINTED) != 0;
}

/*
 * Save and restart an expired event. Called by NMI contexts,
 * so it has to be careful about preempting normal event ops:
@@ -1289,6 +1294,17 @@ static void intel_pmu_enable_event(struct perf_event *event)
int intel_pmu_save_and_restart(struct perf_event *event)
{
	x86_perf_event_update(event);
	/*
	 * For a checkpointed counter always reset back to 0.  This
	 * avoids a situation where the counter overflows, aborts the
	 * transaction and is then set back to shortly before the
	 * overflow, and overflows and aborts again.
	 */
	if (unlikely(event_is_checkpointed(event))) {
		/* No race with NMIs because the counter should not be armed */
		wrmsrl(event->hw.event_base, 0);
		local64_set(&event->hw.prev_count, 0);
	}
	return x86_perf_event_set_period(event);
}

@@ -1372,6 +1388,13 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
		x86_pmu.drain_pebs(regs);
	}

	/*
	 * To avoid spurious interrupts with perf stat, always probe the
	 * checkpointed counter: a TSX abort can roll the counter back so
	 * that the PMI no longer sees its overflow.  On Haswell the
	 * IN_TXCP bit is only valid on general purpose counter 2, hence
	 * the fixed index.
	 */
	if (cpuc->events[2] && event_is_checkpointed(cpuc->events[2]))
		status |= (1ULL << 2);

	for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
		struct perf_event *event = cpuc->events[bit];

@@ -1837,6 +1860,20 @@ static int hsw_hw_config(struct perf_event *event)
	      event->attr.precise_ip > 0))
		return -EOPNOTSUPP;

	if (event_is_checkpointed(event)) {
		/*
		 * Sampling of checkpointed events can cause situations where
		 * the CPU constantly aborts because of an overflow, which is
		 * then checkpointed back and ignored. Forbid checkpointing
		 * for sampling.
		 *
		 * But still allow a long sampling period, so that perf stat
		 * from KVM works.
		 */
		if (event->attr.sample_period > 0 &&
		    event->attr.sample_period < 0x7fffffff)
			return -EOPNOTSUPP;
	}
	return 0;
}
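
As a usage note (again not part of the commit, and under the same raw-config
assumptions as the sketch above), the user-visible effect of this check is
that perf_event_open() on a checkpointed event with an ordinary sampling
period is expected to fail with EOPNOTSUPP, while a very long period is still
accepted so that, as the comment says, perf stat from KVM keeps working.

/* Hedged sketch: expected rejection of sampling on a checkpointed event.
 * Same assumptions as above: in_tx/in_tx_cp in raw config bits 32/33,
 * event code 0x3c purely as an illustration. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_RAW;
	attr.config = 0x3c | (1ULL << 32) | (1ULL << 33);
	attr.sample_period = 100000;	/* 0 < period < 0x7fffffff */

	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0 && errno == EOPNOTSUPP)
		printf("sampling a checkpointed event rejected, as expected\n");
	else if (fd >= 0)
		close(fd);	/* e.g. a period >= 0x7fffffff would be allowed */
	return 0;
}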