
Commit cba77f03 authored by Waiman Long's avatar Waiman Long Committed by Ingo Molnar

locking/pvqspinlock: Fix kernel panic in locking-selftest



Enabling locking-selftest in a VM guest may cause the following
kernel panic:

  kernel BUG at .../kernel/locking/qspinlock_paravirt.h:137!

This is due to the fact that the pvqspinlock unlock function expects
either _Q_LOCKED_VAL or _Q_SLOW_VAL in the lock byte, and BUGs on
anything else. This patch prevents the panic: when debug_locks_silent
is set, an unexpected value is silently ignored; otherwise a warning
is printed.

With this patch applied, the kernel locking-selftest completed
without any noise.

Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1436663959-53092-1-git-send-email-Waiman.Long@hp.com

Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 9d634c41
@@ -4,6 +4,7 @@
 
 #include <linux/hash.h>
 #include <linux/bootmem.h>
+#include <linux/debug_locks.h>
 
 /*
  * Implement paravirt qspinlocks; the general idea is to halt the vcpus instead
@@ -286,15 +287,23 @@ __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
 {
 	struct __qspinlock *l = (void *)lock;
 	struct pv_node *node;
+	u8 lockval = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
 
 	/*
 	 * We must not unlock if SLOW, because in that case we must first
 	 * unhash. Otherwise it would be possible to have multiple @lock
 	 * entries, which would be BAD.
 	 */
-	if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
+	if (likely(lockval == _Q_LOCKED_VAL))
 		return;
 
+	if (unlikely(lockval != _Q_SLOW_VAL)) {
+		if (debug_locks_silent)
+			return;
+		WARN(1, "pvqspinlock: lock %p has corrupted value 0x%x!\n", lock, atomic_read(&lock->val));
+		return;
+	}
+
 	/*
 	 * Since the above failed to release, this must be the SLOW path.
 	 * Therefore start by looking up the blocked node and unhashing it.