
Commit 8d67c821 authored by Jinshan Xiong, committed by Greg Kroah-Hartman

staging/lustre/clio: Solve a race in cl_lock_put



In cl_lock_put(), checking the last reference and the state of a
cl_lock is not atomic. If a process is preempted between
atomic_dec_and_test() and the check of (lock->cll_state ==
CLS_FREEING), a lock that is still in use can be freed.
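
To make the window concrete, here is a hypothetical user-space sketch
of the racy pattern; the struct, function names, and states are
simplified stand-ins, not the actual Lustre source:

/*
 * Hypothetical sketch of the race (simplified, not the actual Lustre
 * code).  One thread drops the last reference, then checks the state;
 * a concurrent lookup can slip in between the two steps.
 */
#include <stdatomic.h>
#include <stdlib.h>

enum { CLS_CACHED, CLS_FREEING };

struct lock_sketch {
	atomic_int refcount;
	int        state;	/* CLS_CACHED or CLS_FREEING */
};

void lock_put_sketch(struct lock_sketch *lock)
{
	/* atomic_fetch_sub() returning 1 means this was the last reference */
	if (atomic_fetch_sub(&lock->refcount, 1) == 1) {
		/*
		 * Preemption window: without a cache-held reference, a
		 * concurrent lookup can still find this lock in the
		 * cache, take a fresh reference, and start using it
		 * right here.
		 */
		if (lock->state == CLS_FREEING)
			free(lock);	/* frees a lock another thread is using */
	}
}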

This is solved by having coh_locks hold a reference on each cached
lock. With that, once the lock's refcount reaches zero, the lock is
guaranteed to be out of the cache, so nobody else can get a chance to
use it again.
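
Continuing the same hypothetical sketch, the fix amounts to the cache
owning one reference of its own, dropped only after the lock has been
unlinked; a refcount of zero then implies the lock is no longer
findable. cache_insert_sketch() and lock_delete_sketch() below are
illustrative stand-ins for the cached-lock insert and delete paths:

/*
 * Sketch of the fix (same simplified model): the cache holds its own
 * reference, dropped only after the lock is unlinked, so reaching a
 * refcount of zero implies the lock is out of the cache.
 */
void cache_insert_sketch(struct lock_sketch *lock)
{
	atomic_fetch_add(&lock->refcount, 1);	/* reference owned by the cache */
	/* ... add to the cache list under the cache guard lock ... */
}

void lock_delete_sketch(struct lock_sketch *lock, int in_cache)
{
	lock->state = CLS_FREEING;
	/* ... unlink from the cache list under the cache guard lock ... */
	if (in_cache)
		lock_put_sketch(lock);	/* drop the cache's reference */
	/* No lookup can hand out new references from this point on. */
}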

Signed-off-by: Jinshan Xiong <jinshan.xiong@intel.com>
Reviewed-on: http://review.whamcloud.com/9881
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-4558


Reviewed-by: Bobi Jam <bobijam@gmail.com>
Reviewed-by: Lai Siyao <lai.siyao@intel.com>
Signed-off-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 4de665c1
+9 −1
@@ -533,6 +533,7 @@ static struct cl_lock *cl_lock_find(const struct lu_env *env,
 			spin_lock(&head->coh_lock_guard);
 			ghost = cl_lock_lookup(env, obj, io, need);
 			if (ghost == NULL) {
+				cl_lock_get_trust(lock);
 				list_add_tail(&lock->cll_linkage,
 					      &head->coh_locks);
 				spin_unlock(&head->coh_lock_guard);
@@ -791,15 +792,22 @@ static void cl_lock_delete0(const struct lu_env *env, struct cl_lock *lock)
 	LINVRNT(cl_lock_invariant(env, lock));
 
 	if (lock->cll_state < CLS_FREEING) {
+		bool in_cache;
+
 		LASSERT(lock->cll_state != CLS_INTRANSIT);
 		cl_lock_state_set(env, lock, CLS_FREEING);
 
 		head = cl_object_header(lock->cll_descr.cld_obj);
 
 		spin_lock(&head->coh_lock_guard);
-		list_del_init(&lock->cll_linkage);
+		in_cache = !list_empty(&lock->cll_linkage);
+		if (in_cache)
+			list_del_init(&lock->cll_linkage);
 		spin_unlock(&head->coh_lock_guard);
 
+		if (in_cache) /* coh_locks cache holds a refcount. */
+			cl_lock_put(env, lock);
+
 		/*
		 * From now on, no new references to this lock can be acquired
		 * by cl_lock_lookup().