
Commit 9ada1934 authored by Shaohua Li, committed by Pekka Enberg

slub: move discard_slab out of node lock



Lockdep reports a potential deadlock on the slub node list_lock:
discard_slab() is called with the lock held in unfreeze_partials(),
and it can trigger a slab allocation, which may try to take the lock
again.

discard_slab() doesn't actually need to hold the lock, since the slab
has already been removed from the partial list.

Acked-by: Christoph Lameter <cl@linux.com>
Reported-and-tested-by: Yong Zhang <yong.zhang0@gmail.com>
Reported-and-tested-by: Julie Sullivan <kernelmail.jms@gmail.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
parent f64ae042
@@ -1862,7 +1862,7 @@ static void unfreeze_partials(struct kmem_cache *s)
 {
 	struct kmem_cache_node *n = NULL;
 	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
-	struct page *page;
+	struct page *page, *discard_page = NULL;
 
 	while ((page = c->partial)) {
 		enum slab_modes { M_PARTIAL, M_FREE };
@@ -1916,14 +1916,22 @@ static void unfreeze_partials(struct kmem_cache *s)
 				"unfreezing slab"));
 
 		if (m == M_FREE) {
-			stat(s, DEACTIVATE_EMPTY);
-			discard_slab(s, page);
-			stat(s, FREE_SLAB);
+			page->next = discard_page;
+			discard_page = page;
 		}
 	}
 
 	if (n)
 		spin_unlock(&n->list_lock);
+
+	while (discard_page) {
+		page = discard_page;
+		discard_page = discard_page->next;
+
+		stat(s, DEACTIVATE_EMPTY);
+		discard_slab(s, page);
+		stat(s, FREE_SLAB);
+	}
 }
 
 /*