
Commit 40e34ea0 authored by Toke Høiland-Jørgensen, committed by Greg Kroah-Hartman

bpf: Avoid deadlock when using queue and stack maps from NMI



[ Upstream commit a34a9f1a19afe9c60ca0ea61dfeee63a1c2baac8 ]

Syzbot discovered that the queue and stack maps can deadlock if they are
being used from a BPF program that can be called from NMI context (such as
one that is attached to a perf HW counter event). To fix this, add an
in_nmi() check and use raw_spin_trylock() in NMI context, erroring out if
grabbing the lock fails.

Fixes: f1a2e44a ("bpf: add queue and stack maps")
Reported-by: Hsin-Wei Hung <hsinweih@uci.edu>
Tested-by: Hsin-Wei Hung <hsinweih@uci.edu>
Co-developed-by: Hsin-Wei Hung <hsinweih@uci.edu>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/r/20230911132815.717240-1-toke@redhat.com


Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent eec98134

+18 −3
@@ -118,7 +118,12 @@ static int __queue_map_get(struct bpf_map *map, void *value, bool delete)
 	int err = 0;
 	void *ptr;
 
-	raw_spin_lock_irqsave(&qs->lock, flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, flags);
+	}
 
 	if (queue_stack_map_is_empty(qs)) {
 		memset(value, 0, qs->map.value_size);
@@ -148,7 +153,12 @@ static int __stack_map_get(struct bpf_map *map, void *value, bool delete)
 	void *ptr;
 	u32 index;
 
-	raw_spin_lock_irqsave(&qs->lock, flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, flags);
+	}
 
 	if (queue_stack_map_is_empty(qs)) {
 		memset(value, 0, qs->map.value_size);
@@ -213,7 +223,12 @@ static int queue_stack_map_push_elem(struct bpf_map *map, void *value,
 	if (flags & BPF_NOEXIST || flags > BPF_EXIST)
 		return -EINVAL;
 
-	raw_spin_lock_irqsave(&qs->lock, irq_flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, irq_flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, irq_flags);
+	}
 
 	if (queue_stack_map_is_full(qs)) {
 		if (!replace) {