
Commit d4408ffe authored by Judy Hsiao, committed by Greg Kroah-Hartman

neighbour: Don't let neigh_forced_gc() disable preemption for long



[ Upstream commit e5dc5afff62f3e97e86c3643ec9fcad23de4f2d3 ]

We are seeing cases where neigh_cleanup_and_release() is called by
neigh_forced_gc() many times in a row with preemption turned off.
When running on a low-powered CPU at a low CPU frequency, this has
been measured to keep preemption off for ~10 ms. That's not great on a
system with HZ=1000, which expects tasks to be able to schedule in
with ~1 ms latency.

Suggested-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Judy Hsiao <judyhsiao@chromium.org>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 9cc9683a
+8 −1
@@ -226,9 +226,11 @@ static int neigh_forced_gc(struct neigh_table *tbl)
 {
 	int max_clean = atomic_read(&tbl->gc_entries) -
 			READ_ONCE(tbl->gc_thresh2);
+	u64 tmax = ktime_get_ns() + NSEC_PER_MSEC;
 	unsigned long tref = jiffies - 5 * HZ;
 	struct neighbour *n, *tmp;
 	int shrunk = 0;
+	int loop = 0;
 
 	NEIGH_CACHE_STAT_INC(tbl, forced_gc_runs);
 
@@ -251,11 +253,16 @@ static int neigh_forced_gc(struct neigh_table *tbl)
 				shrunk++;
 			if (shrunk >= max_clean)
 				break;
+			if (++loop == 16) {
+				if (ktime_get_ns() > tmax)
+					goto unlock;
+				loop = 0;
+			}
 		}
 	}
 
 	WRITE_ONCE(tbl->last_flush, jiffies);
-
+unlock:
 	write_unlock_bh(&tbl->lock);
 
 	return shrunk;