
Commit d4d84fef authored by Chris Metcalf, committed by Pekka Enberg
slub: always align cpu_slab to honor cmpxchg_double requirement



On an architecture without CMPXCHG_LOCAL but with DEBUG_VM enabled,
the VM_BUG_ON() in __pcpu_double_call_return_bool() will cause an early
panic during boot unless we always align cpu_slab properly.

In principle we could remove the alignment-testing VM_BUG_ON() for
architectures that don't have CMPXCHG_LOCAL, but leaving it in means
that new code will tend not to break x86 even if it is introduced
on another platform, and it's low cost to require alignment.

Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
parent 55922c9d
+3 −0
@@ -259,6 +259,9 @@ extern void __bad_size_call_parameter(void);
  * Special handling for cmpxchg_double.  cmpxchg_double is passed two
  * percpu variables.  The first has to be aligned to a double word
  * boundary and the second has to follow directly thereafter.
+ * We enforce this on all architectures even if they don't support
+ * a double cmpxchg instruction, since it's a cheap requirement, and it
+ * avoids breaking the requirement for architectures with the instruction.
  */
 #define __pcpu_double_call_return_bool(stem, pcp1, pcp2, ...)		\
 ({									\
+4 −8
@@ -2320,16 +2320,12 @@ static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
 	BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
 			SLUB_PAGE_SHIFT * sizeof(struct kmem_cache_cpu));
 
-#ifdef CONFIG_CMPXCHG_LOCAL
 	/*
-	 * Must align to double word boundary for the double cmpxchg instructions
-	 * to work.
+	 * Must align to double word boundary for the double cmpxchg
+	 * instructions to work; see __pcpu_double_call_return_bool().
 	 */
-	s->cpu_slab = __alloc_percpu(sizeof(struct kmem_cache_cpu), 2 * sizeof(void *));
-#else
-	/* Regular alignment is sufficient */
-	s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
-#endif
+	s->cpu_slab = __alloc_percpu(sizeof(struct kmem_cache_cpu),
+				     2 * sizeof(void *));
 
 	if (!s->cpu_slab)
 		return 0;