
Commit 7cccd80b authored by Christoph Lameter, committed by Pekka Enberg

slub: tid must be retrieved from the percpu area of the current processor



As Steven Rostedt has pointed out: rescheduling onto a different
processor could occur after the determination of the per cpu pointer and
before the tid is retrieved. This could result in allocation from the
wrong node in slab_alloc().

The effect is much more severe in slab_free() where we could free to the
freelist of the wrong page.

The window for something like that occurring is pretty small but it is
possible.

Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
parent 4d7868e6
+9 −3
@@ -2332,13 +2332,18 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,

 	s = memcg_kmem_get_cache(s, gfpflags);
 redo:
-
 	/*
 	 * Must read kmem_cache cpu data via this cpu ptr. Preemption is
 	 * enabled. We may switch back and forth between cpus while
 	 * reading from one cpu area. That does not matter as long
 	 * as we end up on the original cpu again when doing the cmpxchg.
+	 *
+	 * Preemption is disabled for the retrieval of the tid because that
+	 * must occur from the current processor. We cannot allow rescheduling
+	 * on a different processor between the determination of the pointer
+	 * and the retrieval of the tid.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
	/*
@@ -2348,7 +2353,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	 * linked list in between.
 	 */
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	object = c->freelist;
 	page = c->page;
@@ -2595,10 +2600,11 @@ static __always_inline void slab_free(struct kmem_cache *s,
 	 * data is retrieved via this pointer. If we are on the same cpu
 	 * during the cmpxchg then the free will succeed.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);