
Commit 45dbea5f authored by Peter Zijlstra, committed by Ingo Molnar

x86/paravirt: Fix native_patch()

While chasing a regression I noticed we potentially patch the wrong
code in native_patch().

If we do not select the native code sequence, we must use the default
patcher, not fall through into the next switch case.
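
As a minimal, self-contained sketch of the hazard (hypothetical names and
return strings, not the kernel's actual code): in the buggy shape a missing
jump lets the non-native unlock case fall through into the vcpu_is_preempted
case, while the fixed shape routes it to the default patcher, mirroring the
goto patch_default added by this commit.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's pv_is_native_*() checks. */
static bool native_unlock_selected(void)  { return false; }
static bool native_preempt_selected(void) { return true;  }

/* Buggy shape: with no jump after the failed check, control falls
 * through into the next case, so an unlock site is "patched" with
 * the vcpu_is_preempted sequence. */
static const char *patch_buggy(int type)
{
	switch (type) {
	case 0:				/* queued_spin_unlock */
		if (native_unlock_selected())
			return "native unlock sequence";
					/* falls through! */
	case 1:				/* vcpu_is_preempted */
		if (native_preempt_selected())
			return "native vcpu_is_preempted sequence";
	default:
		return "default patcher";
	}
}

/* Fixed shape: each non-native path jumps to the default patcher. */
static const char *patch_fixed(int type)
{
	switch (type) {
	case 0:
		if (native_unlock_selected())
			return "native unlock sequence";
		goto patch_default;
	case 1:
		if (native_preempt_selected())
			return "native vcpu_is_preempted sequence";
		goto patch_default;
	default:
patch_default:
		return "default patcher";
	}
}

int main(void)
{
	printf("buggy: %s\n", patch_buggy(0));	/* wrong code patched in */
	printf("fixed: %s\n", patch_fixed(0));	/* default patcher used */
	return 0;
}

Calling both with type 0 while the native unlock is not selected shows the
difference: the buggy variant reports the vcpu_is_preempted sequence, the
fixed one the default patcher.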

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Anvin <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel test robot <xiaolong.ye@intel.com>
Fixes: 3cded417 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Link: http://lkml.kernel.org/r/20161208154349.270616999@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 6f387515
arch/x86/kernel/paravirt_patch_32.c +4 −0

@@ -58,15 +58,19 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 			end   = end_pv_lock_ops_queued_spin_unlock;
 			goto patch_site;
 		}
+		goto patch_default;
+
 	case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 		if (pv_is_native_vcpu_is_preempted()) {
 			start = start_pv_lock_ops_vcpu_is_preempted;
 			end   = end_pv_lock_ops_vcpu_is_preempted;
 			goto patch_site;
 		}
+		goto patch_default;
 #endif

 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;

arch/x86/kernel/paravirt_patch_64.c +4 −0

@@ -70,15 +70,19 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 			end   = end_pv_lock_ops_queued_spin_unlock;
 			goto patch_site;
 		}
+		goto patch_default;
+
 	case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 		if (pv_is_native_vcpu_is_preempted()) {
 			start = start_pv_lock_ops_vcpu_is_preempted;
 			end   = end_pv_lock_ops_vcpu_is_preempted;
 			goto patch_site;
 		}
+		goto patch_default;
 #endif

 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;