
Commit 8b405d5c authored by Peter Zijlstra, committed by Ingo Molnar

locking/lockdep: Fix stacktrace mess



There is some complication between check_prevs_add() and
check_prev_add() wrt. saving stack traces. The problem is that we want
to be frugal with saving stack traces, since it consumes static
resources.

We'll only know in check_prev_add() if we need the trace, but we can
call into it multiple times. So we want to save the trace on demand
and then re-use it.

A further complication is that check_prev_add() can drop graph_lock
and mess with our static resources.

In any case, the current state, after commit:

  ce07a941 ("locking/lockdep: Make check_prev_add() able to handle external stack_trace")

is that we'll assume the trace contains valid data once
check_prev_add() returns '2'. However, as noted by Josh, this is
false: check_prev_add() can return '2' before having saved a trace,
which then results in the possibility of using uninitialized data.
Testing, as reported by Wu, shows a NULL deref.

So simplify.

Since the graph_lock() thing is a debug path that hasn't
really been used in a long while, take it out back and avoid the
head-ache.

Further initialize the stack_trace to a known 'empty' state; as long
as nr_entries == 0, nothing should deref entries. We can then use the
'entries == NULL' test for a valid trace / on-demand saving.
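
As a standalone illustration (not part of this commit), the pattern reads
roughly like the sketch below. The struct trace, save() and add_dep() names
are stand-ins for struct stack_trace, save_trace() and check_prev_add();
only the shape of the logic is meant to match:

#include <stddef.h>
#include <stdio.h>

struct trace {				/* stand-in for struct stack_trace */
	unsigned int nr_entries;	/* 0: nothing may be dereferenced */
	unsigned long *entries;		/* NULL: no trace saved yet */
};

static unsigned long buf[4];		/* stand-in for the static lockdep storage */

static int save(struct trace *t)	/* stand-in for save_trace() */
{
	buf[0] = 0x1234;
	t->entries = buf;
	t->nr_entries = 1;
	return 1;
}

/* stand-in for check_prev_add(): save on demand, at most once */
static int add_dep(struct trace *t)
{
	if (!t->entries && !save(t))
		return 0;
	return 2;
}

int main(void)
{
	/* known 'empty' state: safe even if nothing ever saves into it */
	struct trace t = { .nr_entries = 0, .entries = NULL };
	int i;

	/* multiple calls, as from check_prevs_add(), re-use the one trace */
	for (i = 0; i < 3; i++)
		if (add_dep(&t) == 2)
			printf("dep %d: nr_entries=%u\n", i, t.nr_entries);
	return 0;
}

With this, the caller no longer has to juggle a 'save' function pointer;
the trace itself records whether it has already been filled.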

Analyzed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: ce07a941 ("locking/lockdep: Make check_prev_add() able to handle external stack_trace")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 529a86e0
kernel/locking/lockdep.c  +20 −28
@@ -1873,10 +1873,10 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	       struct held_lock *next, int distance, struct stack_trace *trace,
 	       int (*save)(struct stack_trace *trace))
 {
+	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list *entry;
-	int ret;
 	struct lock_list this;
-	struct lock_list *uninitialized_var(target_entry);
+	int ret;
 
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
@@ -1890,8 +1890,17 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	this.class = hlock_class(next);
 	this.parent = NULL;
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
-	if (unlikely(!ret))
+	if (unlikely(!ret)) {
+		if (!trace->entries) {
+			/*
+			 * If @save fails here, the printing might trigger
+			 * a WARN but because of the !nr_entries it should
+			 * not do bad things.
+			 */
+			save(trace);
+		}
 		return print_circular_bug(&this, target_entry, next, prev, trace);
+	}
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);

@@ -1938,7 +1947,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 		return print_bfs_bug(ret);
 
 
-	if (save && !save(trace))
+	if (!trace->entries && !save(trace))
 		return 0;
 
 	/*
@@ -1958,20 +1967,6 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	if (!ret)
 		return 0;
 
-	/*
-	 * Debugging printouts:
-	 */
-	if (verbose(hlock_class(prev)) || verbose(hlock_class(next))) {
-		graph_unlock();
-		printk("\n new dependency: ");
-		print_lock_name(hlock_class(prev));
-		printk(KERN_CONT " => ");
-		print_lock_name(hlock_class(next));
-		printk(KERN_CONT "\n");
-		dump_stack();
-		if (!graph_lock())
-			return 0;
-	}
 	return 2;
 }

@@ -1986,8 +1981,12 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 {
 	int depth = curr->lockdep_depth;
 	struct held_lock *hlock;
-	struct stack_trace trace;
-	int (*save)(struct stack_trace *trace) = save_trace;
+	struct stack_trace trace = {
+		.nr_entries = 0,
+		.max_entries = 0,
+		.entries = NULL,
+		.skip = 0,
+	};
 
 	/*
 	 * Debugging checks.
@@ -2018,17 +2017,10 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 		 */
 		if (hlock->read != 2 && hlock->check) {
 			int ret = check_prev_add(curr, hlock, next,
-						 distance, &trace, save);
+						 distance, &trace, save_trace);
 			if (!ret)
 				return 0;
 
-			/*
-			 * Stop saving stack_trace if save_trace() was
-			 * called at least once:
-			 */
-			if (save && ret == 2)
-				save = NULL;
-
 			/*
 			 * Stop after the first non-trylock entry,
 			 * as non-trylock entries have added their