
Commit 6cdf89b1 authored by Linus Torvalds

Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:
 "The tree got pretty big in this development cycle, but the net effect
  is pretty good:

    115 files changed, 673 insertions(+), 1522 deletions(-)

  The main changes were:

   - Rework and generalize the mutex code to remove per arch mutex
     primitives. (Peter Zijlstra)

   - Add vCPU preemption support: add an interface to query the
     preemption status of vCPUs and use it in locking primitives - this
     optimizes paravirt performance. (Pan Xinhui, Juergen Gross,
     Christian Borntraeger) (See the sketch just after this message.)

   - Introduce cpu_relax_yield() and remove cpu_relax_lowlatency() to
     clean up and improve the s390 lock yielding machinery and its core
     kernel impact. (Christian Borntraeger)

   - Micro-optimize mutexes some more. (Waiman Long)

   - Reluctantly add the to-be-deprecated mutex_trylock_recursive()
     interface on a temporary basis, to give the DRM code more time to
     get rid of its locking hacks. Any other users will be NAK-ed on
     sight. (We turned off the deprecation warning for the time being to
     not pollute the build log.) (Peter Zijlstra)

   - Improve the rtmutex code a bit, in light of recent long lived
     bugs/races. (Thomas Gleixner)

   - Misc fixes, cleanups"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
  x86/paravirt: Fix bool return type for PVOP_CALL()
  x86/paravirt: Fix native_patch()
  locking/ww_mutex: Use relaxed atomics
  locking/rtmutex: Explain locking rules for rt_mutex_proxy_unlock()/init_proxy_locked()
  locking/rtmutex: Get rid of RT_MUTEX_OWNER_MASKALL
  x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()
  locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted
  locking/osq: Break out of spin-wait busy waiting loop for a preempted vCPU in osq_lock()
  Documentation/virtual/kvm: Support the vCPU preemption check
  x86/xen: Support the vCPU preemption check
  x86/kvm: Support the vCPU preemption check
  x86/kvm: Support the vCPU preemption check
  kvm: Introduce kvm_write_guest_offset_cached()
  locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM and Xen guests
  locking/spinlocks, s390: Implement vcpu_is_preempted(cpu)
  locking/core, powerpc: Implement vcpu_is_preempted(cpu)
  sched/core: Introduce the vcpu_is_preempted(cpu) interface
  sched/wake_q: Rename WAKE_Q to DEFINE_WAKE_Q
  locking/core: Provide common cpu_relax_yield() definition
  locking/mutex: Don't mark mutex_trylock_recursive() as deprecated, temporarily
  ...
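
For reference, a condensed sketch of the temporary
mutex_trylock_recursive() semantics mentioned above: behave like
mutex_trylock(), but report recursive acquisition by the current task
instead of deadlocking on it. This follows the shape of the interface
added by the series; treat it as a sketch, not the verbatim upstream
code.

enum mutex_trylock_recursive_enum {
	MUTEX_TRYLOCK_FAILED    = 0,
	MUTEX_TRYLOCK_SUCCESS   = 1,
	MUTEX_TRYLOCK_RECURSIVE,
};

static inline enum mutex_trylock_recursive_enum
mutex_trylock_recursive(struct mutex *lock)
{
	/* The calling task already owns the lock: tell the caller. */
	if (unlikely(__mutex_owner(lock) == current))
		return MUTEX_TRYLOCK_RECURSIVE;

	return mutex_trylock(lock) ? MUTEX_TRYLOCK_SUCCESS
				   : MUTEX_TRYLOCK_FAILED;
}
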
parents 3940cf0b 11f254db
Documentation/virtual/kvm/msr.txt
+8 −1
@@ -208,7 +208,9 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
 		__u64 steal;
 		__u32 version;
 		__u32 flags;
-		__u32 pad[12];
+		__u8  preempted;
+		__u8  u8_pad[3];
+		__u32 pad[11];
 	}
 
 	whose data will be filled in by the hypervisor periodically. Only one
@@ -232,6 +234,11 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
 		nanoseconds. Time during which the vcpu is idle, will not be
 		reported as steal time.
 
+		preempted: indicates whether the vCPU that owns this struct
+		is running or not. Non-zero values mean the vCPU has been
+		preempted. Zero means the vCPU is not preempted. NOTE: it
+		is always zero if the hypervisor doesn't support this field.
+
 MSR_KVM_EOI_EN: 0x4b564d04
 	data: Bit 0 is 1 when PV end of interrupt is enabled on the vcpu; 0
 	when disabled.  Bit 1 is reserved and must be zero.  When PV end of
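
A condensed sketch of how the guest side can consume the new field
(assuming, as in the x86/kvm commits above, a per-CPU kvm_steal_time
area registered via MSR_KVM_STEAL_TIME; the per-CPU variable name
steal_time is illustrative):

static bool kvm_vcpu_is_preempted(int cpu)
{
	struct kvm_steal_time *st = &per_cpu(steal_time, cpu);

	/* The hypervisor keeps this byte up to date; just read it. */
	return !!READ_ONCE(st->preempted);
}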

arch/alpha/include/asm/mutex.h

deleted 100644 → 0
+0 −9
-/*
- * Pull in the generic implementation for the mutex fastpath.
- *
- * TODO: implement optimized primitives instead, or leave the generic
- * implementation in place, or pick the atomic_xchg() based generic
- * implementation. (see asm-generic/mutex-xchg.h for details)
- */
-
-#include <asm-generic/mutex-dec.h>
arch/alpha/include/asm/processor.h
+0 −1
@@ -58,7 +58,6 @@ unsigned long get_wchan(struct task_struct *p);
   ((tsk) == current ? rdusp() : task_thread_info(tsk)->pcb.usp)
 
 #define cpu_relax()	barrier()
-#define cpu_relax_lowlatency() cpu_relax()
 
 #define ARCH_HAS_PREFETCH
 #define ARCH_HAS_PREFETCHW

arch/arc/include/asm/mutex.h

deleted 100644 → 0
+0 −18
-/*
- * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-/*
- * xchg() based mutex fast path maintains a state of 0 or 1, as opposed to
- * atomic dec based which can "count" any number of lock contenders.
- * This ideally needs to be fixed in core, but for now switching to dec ver.
- */
-#if defined(CONFIG_SMP) && (CONFIG_NR_CPUS > 2)
-#include <asm-generic/mutex-dec.h>
-#else
-#include <asm-generic/mutex-xchg.h>
-#endif
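
The xchg-vs-dec trade-off described in the comment above is easiest to
see in the two (now deleted) generic fastpaths themselves. A condensed
sketch after asm-generic/mutex-dec.h and mutex-xchg.h, renamed here so
they can sit side by side; the mutex count uses 1 = unlocked,
0 = locked, negative = contended in the dec variant:

/* dec-based: the counter can go negative, so it "counts" contenders. */
static inline void
__mutex_fastpath_lock_dec(atomic_t *count, void (*fail_fn)(atomic_t *))
{
	if (unlikely(atomic_dec_return(count) < 0))
		fail_fn(count);
}

/* xchg-based: state collapses to 0 or 1, losing the contender count. */
static inline void
__mutex_fastpath_lock_xchg(atomic_t *count, void (*fail_fn)(atomic_t *))
{
	if (unlikely(atomic_xchg(count, 0) != 1))
		fail_fn(count);
}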
arch/arc/include/asm/processor.h
+0 −3
@@ -60,15 +60,12 @@ struct task_struct;
 #ifndef CONFIG_EZNPS_MTM_EXT
 
 #define cpu_relax()		barrier()
-#define cpu_relax_lowlatency()	cpu_relax()
 
 #else
 
 #define cpu_relax()     \
 	__asm__ __volatile__ (".word %0" : : "i"(CTOP_INST_SCHD_RW) : "memory")
 
-#define cpu_relax_lowlatency()	barrier()
-
 #endif
 
 #define copy_segments(tsk, mm)      do { } while (0)
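
With cpu_relax_lowlatency() gone everywhere, the arrangement after this
series (a sketch assuming the generic definitions): cpu_relax() remains
the cheap busy-wait hint, and only code that really wants to give the
CPU back to the hypervisor, such as stop_machine(), calls
cpu_relax_yield(), which falls back to cpu_relax() on every
architecture except s390. wait_for_flag() is a hypothetical caller.

#ifndef cpu_relax_yield
#define cpu_relax_yield() cpu_relax()	/* generic fallback */
#endif

/* Ordinary spin-wait loops keep using plain cpu_relax(). */
static void wait_for_flag(volatile int *flag)
{
	while (!*flag)
		cpu_relax();
}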