
Commit 8e9a2dba authored by Linus Torvalds

Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core locking updates from Ingo Molnar:
 "The main changes in this cycle are:

   - Another attempt at enabling cross-release lockdep dependency
     tracking (automatically part of CONFIG_PROVE_LOCKING=y), this time
     with better performance and fewer false positives. (Byungchul Park)

   - Introduce lockdep_assert_irqs_enabled()/disabled() and convert
     open-coded equivalents to lockdep variants. (Frederic Weisbecker)

   - Add down_read_killable() and use it in the VFS's iterate_dir()
     method. (Kirill Tkhai)

   - Convert remaining uses of ACCESS_ONCE() to
     READ_ONCE()/WRITE_ONCE(). Most of the conversion was Coccinelle
     driven. (Mark Rutland, Paul E. McKenney)

   - Get rid of lockless_dereference(), by strengthening Alpha atomics,
     strengthening READ_ONCE() with smp_read_barrier_depends() and thus
     being able to convert users of lockless_dereference() to
     READ_ONCE(). (Will Deacon)

   - Various micro-optimizations:

         - better PV qspinlocks (Waiman Long),
         - better x86 barriers (Michael S. Tsirkin),
         - better x86 refcounts (Kees Cook)

   - ... plus other fixes and enhancements. (Borislav Petkov, Juergen
     Gross, Miguel Bernal Marin)"
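
For context on the cross-release item: classic lockdep only learns ordering
between locks that are acquired and released in the same context, so a
deadlock built around a completion escapes it. Below is a minimal
kernel-style sketch of the dependency shape cross-release is meant to
report; the functions and variable names are illustrative, not taken from
this merge.

/* Illustrative only: thread_a() blocks on a completion while holding a
 * mutex that thread_b() needs before it can complete(). Classic lockdep
 * sees no lock cycle here; cross-release also records the
 * wait_for_completion()/complete() edge and can flag the deadlock.
 */
#include <linux/mutex.h>
#include <linux/completion.h>

static DEFINE_MUTEX(lock);
static DECLARE_COMPLETION(done);

static void thread_a(void)
{
	mutex_lock(&lock);
	wait_for_completion(&done);	/* waits for thread_b()... */
	mutex_unlock(&lock);
}

static void thread_b(void)
{
	mutex_lock(&lock);		/* ...which blocks here: deadlock */
	complete(&done);
	mutex_unlock(&lock);
}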
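
The new down_read_killable() is a small API; a usage sketch follows. Apart
from down_read_killable()/up_read() themselves, the surrounding function is
hypothetical.

#include <linux/rwsem.h>

/* down_read_killable() behaves like down_read(), except the sleep can be
 * interrupted by a fatal signal: it returns 0 with the rwsem held for
 * reading, or -EINTR without the rwsem held.
 */
static int read_side_work(struct rw_semaphore *sem)
{
	int err = down_read_killable(sem);

	if (err)
		return err;	/* -EINTR: the task was killed while waiting */

	/* ... read-side critical section, as iterate_dir() now does with
	 * the directory inode's rwsem ... */
	up_read(sem);
	return 0;
}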

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
  locking/x86: Use LOCK ADD for smp_mb() instead of MFENCE
  rcu: Use lockdep to assert IRQs are disabled/enabled
  netpoll: Use lockdep to assert IRQs are disabled/enabled
  timers/posix-cpu-timers: Use lockdep to assert IRQs are disabled/enabled
  sched/clock, sched/cputime: Use lockdep to assert IRQs are disabled/enabled
  irq_work: Use lockdep to assert IRQs are disabled/enabled
  irq/timings: Use lockdep to assert IRQs are disabled/enabled
  perf/core: Use lockdep to assert IRQs are disabled/enabled
  x86: Use lockdep to assert IRQs are disabled/enabled
  smp/core: Use lockdep to assert IRQs are disabled/enabled
  timers/hrtimer: Use lockdep to assert IRQs are disabled/enabled
  timers/nohz: Use lockdep to assert IRQs are disabled/enabled
  workqueue: Use lockdep to assert IRQs are disabled/enabled
  irq/softirqs: Use lockdep to assert IRQs are disabled/enabled
  locking/lockdep: Add IRQs disabled/enabled assertion APIs: lockdep_assert_irqs_enabled()/disabled()
  locking/pvqspinlock: Implement hybrid PV queued/unfair locks
  locking/rwlocks: Fix comments
  x86/paravirt: Set up the virt_spin_lock_key after static keys get initialized
  block, locking/lockdep: Assign a lock_class per gendisk used for wait_for_completion()
  workqueue: Remove now redundant lock acquisitions wrt. workqueue flushes
  ...
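
Most of the per-subsystem commits above are mechanical conversions to the
new assertion helpers. A before/after sketch, with a hypothetical
surrounding function:

#include <linux/lockdep.h>
#include <linux/irqflags.h>

static void requires_irqs_off(void)
{
	/* Before: an open-coded check, compiled in and paid for on every
	 * kernel configuration.
	 */
	WARN_ON_ONCE(!irqs_disabled());

	/* After: validated against lockdep's own IRQ-state tracking and
	 * compiled away entirely on kernels built without lockdep.
	 */
	lockdep_assert_irqs_disabled();

	/* ... work that must run with interrupts off ... */
}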
parents 6098850e 450cbdd0
+3 −0
@@ -709,6 +709,9 @@
			It will be ignored when crashkernel=X,high is not used
			or memory reserved is below 4G.

+	crossrelease_fullstack
+			[KNL] Allow to record full stack trace in cross-release
+
	cryptomgr.notests
                        [KNL] Disable crypto self-tests
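
(The entry above is a plain boot-time flag: booting with
crossrelease_fullstack on the kernel command line makes cross-release save
complete stack traces for the events it tracks, where by default it keeps
an abbreviated trace to limit overhead. The default behaviour described
here is our reading of the feature, not something this hunk spells out.)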

+3 −3
@@ -826,9 +826,9 @@ If the filesystem may need to revalidate dcache entries, then
*is* passed the dentry but does not have access to the `inode` or the
`seq` number from the `nameidata`, so it needs to be extra careful
when accessing fields in the dentry.  This "extra care" typically
-involves using `ACCESS_ONCE()` or the newer [`READ_ONCE()`] to access
-fields, and verifying the result is not NULL before using it.  This
-pattern can be see in `nfs_lookup_revalidate()`.
+involves using [`READ_ONCE()`] to access fields, and verifying the
+result is not NULL before using it.  This pattern can be seen in
+`nfs_lookup_revalidate()`.
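
As a hedged sketch of the pattern the corrected paragraph describes, for a
filesystem's ->d_revalidate() running in RCU-walk mode; everything except
READ_ONCE() and the -ECHILD convention is hypothetical, with
nfs_lookup_revalidate() being the in-tree example the text points to:

static int example_d_revalidate(struct dentry *dentry, unsigned int flags)
{
	struct inode *inode = READ_ONCE(dentry->d_inode);

	/* The dentry may be unhashed concurrently: snapshot the inode
	 * pointer exactly once, and verify it before dereferencing.
	 */
	if (!inode)
		return -ECHILD;		/* drop out of RCU-walk mode */

	/* ... inode fields can be inspected safely from here ... */
	return 1;			/* dentry is still valid */
}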

A pair of patterns
------------------
+0 −12
@@ -1880,18 +1880,6 @@ There are some more advanced barrier functions:
     See Documentation/atomic_{t,bitops}.txt for more information.


- (*) lockless_dereference();
-
-     This can be thought of as a pointer-fetch wrapper around the
-     smp_read_barrier_depends() data-dependency barrier.
-
-     This is also similar to rcu_dereference(), but in cases where
-     object lifetime is handled by some mechanism other than RCU, for
-     example, when the objects removed only when the system goes down.
-     In addition, lockless_dereference() is used in some data structures
-     that can be used both with and without RCU.
-

 (*) dma_wmb();
 (*) dma_rmb();
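
The section above could be removed because READ_ONCE() now performs the
smp_read_barrier_depends() itself, so every lockless_dereference() caller
converts one-for-one. A before/after sketch; the struct, the global
pointer and the consumer are illustrative:

#include <linux/compiler.h>
#include <linux/printk.h>

struct foo { int field; };
static struct foo *gp;		/* published by some writer elsewhere */

static void reader(void)
{
	struct foo *p;

	/* Before: p = lockless_dereference(gp);
	 * the wrapper supplied the data-dependency barrier explicitly.
	 */

	/* After: READ_ONCE() implies smp_read_barrier_depends(), so the
	 * dependent load of p->field stays ordered even on Alpha.
	 */
	p = READ_ONCE(gp);
	if (p)
		pr_info("field=%d\n", p->field);
}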

+0 −12
@@ -1858,18 +1858,6 @@ Mandatory barriers enforce SMP effects on both SMP and UP systems
     for more information.

-
- (*) lockless_dereference();
-
-     This function can be thought of as a pointer-fetch wrapper around
-     the smp_read_barrier_depends() data-dependency barrier.
-
-     It is also similar to rcu_dereference(), except that the object's
-     lifetime is managed by a mechanism other than RCU, for example
-     when the object is removed only when the system shuts down.  In
-     addition, lockless_dereference() is used in some data structures
-     that can be used both with and without RCU.


 (*) dma_wmb();
 (*) dma_rmb();

+13 −0
@@ -14,6 +14,15 @@
 * than regular operations.
 */

+/*
+ * To ensure dependency ordering is preserved for the _relaxed and
+ * _release atomics, an smp_read_barrier_depends() is unconditionally
+ * inserted into the _relaxed variants, which are used to build the
+ * barriered versions. To avoid redundant back-to-back fences, we can
+ * define the _acquire and _fence versions explicitly.
+ */
+#define __atomic_op_acquire(op, args...)	op##_relaxed(args)
+#define __atomic_op_fence			__atomic_op_release

#define ATOMIC_INIT(i)		{ (i) }
#define ATOMIC64_INIT(i)	{ (i) }
@@ -61,6 +70,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \
	".previous"							\
	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
	return result;							\
}

@@ -78,6 +88,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
	".previous"							\
	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
	return result;							\
}

@@ -112,6 +123,7 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
	".previous"							\
	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
	return result;							\
}

@@ -129,6 +141,7 @@ static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
	".previous"							\
	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
	return result;							\
}
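
Why the plain aliases in the first hunk are safe: the generic builder in
<linux/atomic.h>, sketched from memory below, constructs the _acquire form
from the _relaxed form plus an explicit barrier. Every Alpha _relaxed
variant now ends in smp_read_barrier_depends(), which is a full memory
barrier on Alpha, so the extra fence would be a redundant back-to-back
barrier and Alpha can override the builder with a simple alias.

/* Generic fallback (a sketch, reproduced from memory): run the relaxed
 * op, then upgrade the result with a barrier. Architectures that get
 * acquire semantics for free, as Alpha now does, define their own.
 */
#ifndef __atomic_op_acquire
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	smp_mb__after_atomic();						\
	__ret;								\
})
#endif

With the hunk above applied, atomic_add_return_acquire(i, v) on Alpha
compiles straight to atomic_add_return_relaxed(i, v), whose asm already
ends in the barrier.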
