
Commit 2aba73a6 authored by Ingo Molnar

Merge tag 'pr-20141223-x86-vdso' of git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux into x86/urgent

Pull VDSO fix from Andy Lutomirski:

 "This is hopefully the last vdso fix for 3.19.  It should be very
  safe (it just adds a volatile).

  I don't think it fixes an actual bug (the __getcpu calls in the
  pvclock code may not have been needed in the first place), but
  discussion on that point is ongoing.

  It also fixes a big performance issue in 3.18 and earlier in which
  the lsl instructions in vclock_gettime got hoisted so far up the
  function that they happened even when the function they were in was
  never called.  In 3.19, the performance issue seems to be gone due to
  the whims of my compiler and some interaction with a branch that's
  now gone.

  I'll hopefully have a much bigger overhaul of the pvclock code
  for 3.20, but it needs careful review."

Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 280dbc57 1ddf0b1b
+4 −2
@@ -80,9 +80,11 @@ static inline unsigned int __getcpu(void)

 	/*
 	 * Load per CPU data from GDT.  LSL is faster than RDTSCP and
-	 * works on all CPUs.
+	 * works on all CPUs.  This is volatile so that it orders
+	 * correctly wrt barrier() and to keep gcc from cleverly
+	 * hoisting it out of the calling function.
 	 */
-	asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+	asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));

 	return p;
 }