
Commit 89ae5a7c authored by Joel Fernandes (Google), committed by Ryan Savitski

BACKPORT: perf_event: Add support for LSM and SELinux checks

In current mainline, the degree of access to perf_event_open(2) system
call depends on the perf_event_paranoid sysctl.  This has a number of
limitations:

1. The sysctl is only a single value. Many types of access are controlled
   by that one value, which makes the control very limited and
   coarse-grained.
2. The sysctl is global, so loosening it grants every process access to
   perf_event_open(2), opening the door to security issues.

This patch adds LSM and SELinux access checking which will be used in
Android to access perf_event_open(2) for the purposes of attaching BPF
programs to tracepoints, perf profiling and other operations from
userspace. These operations are intended for production systems.

5 new LSM hooks are added:
1. perf_event_open: This controls access during the perf_event_open(2)
   syscall itself. The hook is called from all the places that the
   perf_event_paranoid sysctl is checked to keep it consistent with the
   sysctl. The hook gets passed a 'type' argument which controls CPU,
   kernel and tracepoint accesses (in this context, CPU, kernel and
   tracepoint have the same semantics as the perf_event_paranoid sysctl).
   Additionally, I added an 'open' type which is similar to the
   perf_event_paranoid sysctl == 3 patch carried in Android and several other
   distros, but which was rejected in mainline [1] in 2016.

2. perf_event_alloc: This allocates a new security object for the event
   which stores the current SID within the event. It will be useful when
   the perf event's FD is passed through IPC to another process which may
   try to read the FD. Appropriate security checks will limit access.

3. perf_event_free: Called when the event is closed.

4. perf_event_read: Called from the read(2) and mmap(2) syscalls for the event.

5. perf_event_write: Called from the ioctl(2) syscalls for the event.

[1] https://lwn.net/Articles/696240/



Since Peter had suggested LSM hooks in 2016 [1], I am adding his
Suggested-by tag below.

To use this patch, we set the perf_event_paranoid sysctl to -1 and then
apply SELinux checking as appropriate (default-deny everything, then
add policy rules to give access to domains that need it). In the future
we can remove the perf_event_paranoid sysctl altogether.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Co-developed-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: James Morris <jmorris@namei.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: rostedt@goodmis.org
Cc: Yonghong Song <yhs@fb.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: jeffv@google.com
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: primiano@google.com
Cc: Song Liu <songliubraving@fb.com>
Cc: rsavitski@google.com
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Matthew Garrett <matthewgarrett@google.com>
Link: https://lkml.kernel.org/r/20191014170308.70668-1-joel@joelfernandes.org



(cherry picked from commit da97e18458fb42d7c00fac5fd1c56a3896ec666e)
[ Ryan Savitski: Resolved conflicts with existing code, and folded in
  upstream ae79d5588a04 (perf/core: Fix !CONFIG_PERF_EVENTS build
  warnings and failures). This should fix the build errors from the
  previous backport attempt, where certain configurations would end up
  with functions referring to the perf_event struct prior to its
  declaration (and therefore declaring it with a different scope). ]
Bug: 137092007
Signed-off-by: Ryan Savitski <rsavitski@google.com>
Change-Id: Ief8c669083c81f4ea2fa75d5c0d947d19ea741b3
parent 59a8f6e9
+8 −10
@@ -95,7 +95,7 @@ static inline unsigned long perf_ip_adjust(struct pt_regs *regs)
{
	return 0;
}
-static inline void perf_get_data_addr(struct pt_regs *regs, u64 *addrp) { }
+static inline void perf_get_data_addr(struct perf_event *event, struct pt_regs *regs, u64 *addrp) { }
static inline u32 perf_get_misc_flags(struct pt_regs *regs)
{
	return 0;
@@ -126,7 +126,7 @@ static unsigned long ebb_switch_in(bool ebb, struct cpu_hw_events *cpuhw)
static inline void power_pmu_bhrb_enable(struct perf_event *event) {}
static inline void power_pmu_bhrb_disable(struct perf_event *event) {}
static void power_pmu_sched_task(struct perf_event_context *ctx, bool sched_in) {}
-static inline void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw) {}
+static inline void power_pmu_bhrb_read(struct perf_event *event, struct cpu_hw_events *cpuhw) {}
static void pmao_restore_workaround(bool ebb) { }
#endif /* CONFIG_PPC32 */

@@ -170,7 +170,7 @@ static inline unsigned long perf_ip_adjust(struct pt_regs *regs)
 * pointed to by SIAR; this is indicated by the [POWER6_]MMCRA_SDSYNC, the
 * [POWER7P_]MMCRA_SDAR_VALID bit in MMCRA, or the SDAR_VALID bit in SIER.
 */
-static inline void perf_get_data_addr(struct pt_regs *regs, u64 *addrp)
+static inline void perf_get_data_addr(struct perf_event *event, struct pt_regs *regs, u64 *addrp)
{
	unsigned long mmcra = regs->dsisr;
	bool sdar_valid;
@@ -195,8 +195,7 @@ static inline void perf_get_data_addr(struct pt_regs *regs, u64 *addrp)
	if (!(mmcra & MMCRA_SAMPLE_ENABLE) || sdar_valid)
		*addrp = mfspr(SPRN_SDAR);

-	if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN) &&
-		is_kernel_addr(mfspr(SPRN_SDAR)))
+	if (is_kernel_addr(mfspr(SPRN_SDAR)) && perf_allow_kernel(&event->attr) != 0)
		*addrp = 0;
}

@@ -435,7 +434,7 @@ static __u64 power_pmu_bhrb_to(u64 addr)
}

/* Processing BHRB entries */
-static void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw)
+static void power_pmu_bhrb_read(struct perf_event *event, struct cpu_hw_events *cpuhw)
{
	u64 val;
	u64 addr;
@@ -463,8 +462,7 @@ static void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw)
			 * exporting it to userspace (avoid exposure of regions
			 * where we could have speculative execution)
			 */
-			if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN) &&
-				is_kernel_addr(addr))
+			if (is_kernel_addr(addr) && perf_allow_kernel(&event->attr) != 0)
				continue;

			/* Branches are read most recent first (ie. mfbhrb 0 is
@@ -2068,12 +2066,12 @@ static void record_and_restart(struct perf_event *event, unsigned long val,

		if (event->attr.sample_type &
		    (PERF_SAMPLE_ADDR | PERF_SAMPLE_PHYS_ADDR))
-			perf_get_data_addr(regs, &data.addr);
+			perf_get_data_addr(event, regs, &data.addr);

		if (event->attr.sample_type & PERF_SAMPLE_BRANCH_STACK) {
			struct cpu_hw_events *cpuhw;
			cpuhw = this_cpu_ptr(&cpu_hw_events);
-			power_pmu_bhrb_read(cpuhw);
+			power_pmu_bhrb_read(event, cpuhw);
			data.br_stack = &cpuhw->bhrb_stack;
		}

+5 −3
@@ -563,9 +563,11 @@ static int bts_event_init(struct perf_event *event)
	 * Note that the default paranoia setting permits unprivileged
	 * users to profile the kernel.
	 */
-	if (event->attr.exclude_kernel && perf_paranoid_kernel() &&
-	    !capable(CAP_SYS_ADMIN))
-		return -EACCES;
+	if (event->attr.exclude_kernel) {
+		ret = perf_allow_kernel(&event->attr);
+		if (ret)
+			return ret;
+	}

	if (x86_add_exclusive(x86_lbr_exclusive_bts))
		return -EBUSY;
+3 −2
@@ -3109,8 +3109,9 @@ static int intel_pmu_hw_config(struct perf_event *event)
	if (x86_pmu.version < 3)
		return -EINVAL;

-	if (perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN))
-		return -EACCES;
+	ret = perf_allow_cpu(&event->attr);
+	if (ret)
+		return ret;

	event->hw.config |= ARCH_PERFMON_EVENTSEL_ANY;

+3 −2
@@ -776,8 +776,9 @@ static int p4_validate_raw_event(struct perf_event *event)
	 * the user needs special permissions to be able to use it
	 */
	if (p4_ht_active() && p4_event_bind_map[v].shared) {
-		if (perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN))
-			return -EACCES;
+		v = perf_allow_cpu(&event->attr);
+		if (v)
+			return v;
	}

	/* ESCR EventMask bits may be invalid */
+15 −0
@@ -1777,6 +1777,14 @@ union security_list_options {
	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
#endif /* CONFIG_BPF_SYSCALL */
+#ifdef CONFIG_PERF_EVENTS
+	int (*perf_event_open)(struct perf_event_attr *attr, int type);
+	int (*perf_event_alloc)(struct perf_event *event);
+	void (*perf_event_free)(struct perf_event *event);
+	int (*perf_event_read)(struct perf_event *event);
+	int (*perf_event_write)(struct perf_event *event);
+
+#endif
};

struct security_hook_heads {
@@ -2011,6 +2019,13 @@ struct security_hook_heads {
	struct hlist_head bpf_prog_alloc_security;
	struct hlist_head bpf_prog_free_security;
#endif /* CONFIG_BPF_SYSCALL */
+#ifdef CONFIG_PERF_EVENTS
+	struct hlist_head perf_event_open;
+	struct hlist_head perf_event_alloc;
+	struct hlist_head perf_event_free;
+	struct hlist_head perf_event_read;
+	struct hlist_head perf_event_write;
+#endif
} __randomize_layout;

/*