
Commit 17be0aec authored by Denys Vlasenko, committed by Ingo Molnar

x86/asm/entry/64: Implement better check for canonical addresses



This change makes the check exact (no more false positives
on "negative" addresses).

Andy explains:

 "Canonical addresses either start with 17 zeros or 17 ones.

  In the old code, we checked that the top (64-47) = 17 bits were all
  zero.  We did this by shifting right by 47 bits and making sure that
  nothing was left.

  In the new code, we're shifting left by (64 - 48) = 16 bits and then
  shifting right arithmetically by the same amount, thus propagating the
  17th-highest bit to all positions to its left.  If we get the same value
  we started with, then we're good to go."
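
For illustration, here is a minimal user-space C sketch of both checks (not
kernel code; it assumes __VIRTUAL_MASK_SHIFT == 47, i.e. 48-bit virtual
addresses, and that the compiler emits an arithmetic right shift for int64_t,
as x86-64 compilers do):

#include <stdint.h>
#include <stdio.h>

/* Old check: reject if any of the top 17 bits is set.  This also
 * rejects canonical kernel ("negative") addresses - the false
 * positives the changelog mentions. */
static int old_check_ok(uint64_t addr)
{
	return (addr >> 47) == 0;
}

/* New check: shl 16 then sar 16 propagates bit 47 into bits 48..63.
 * A canonical address survives the round trip unchanged. */
static int new_check_ok(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 16) >> 16) == addr;
}

int main(void)
{
	uint64_t user   = 0x00007fffffffe000ULL; /* canonical, user half   */
	uint64_t kernel = 0xffff880000000000ULL; /* canonical, kernel half */
	uint64_t bogus  = 0x0000800000000000ULL; /* non-canonical          */

	printf("user:   old=%d new=%d\n", old_check_ok(user),   new_check_ok(user));
	printf("kernel: old=%d new=%d\n", old_check_ok(kernel), new_check_ok(kernel));
	printf("bogus:  old=%d new=%d\n", old_check_ok(bogus),  new_check_ok(bogus));
	return 0;
}

The kernel address is the interesting case: the old check flags it as bad
(the false positive), while the new check correctly accepts it.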

While it isn't really important to be fully correct here
(almost all addresses we'll ever see will be userspace ones),
the exact check turns out to be cheap enough: the new code uses
two more ALU ops but preserves %rcx, so it no longer needs to be
reloaded from pt_regs->cx.

On disassembly level, the changes are:

  cmp %rcx,0x80(%rsp) -> mov 0x80(%rsp),%r11; cmp %rcx,%r11
  shr $0x2f,%rcx      -> shl $0x10,%rcx; sar $0x10,%rcx; cmp %rcx,%r11
  mov 0x58(%rsp),%rcx -> (eliminated)

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1429633649-20169-1-git-send-email-dvlasenk@redhat.com


[ Changelog massage. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 3462bd2a
+12 −11
@@ -410,26 +410,27 @@ syscall_return:
 	 * a completely clean 64-bit userspace context.
 	 */
 	movq RCX(%rsp),%rcx
-	cmpq %rcx,RIP(%rsp)		/* RCX == RIP */
+	movq RIP(%rsp),%r11
+	cmpq %rcx,%r11			/* RCX == RIP */
 	jne opportunistic_sysret_failed
 
 	/*
 	 * On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
 	 * in kernel space.  This essentially lets the user take over
-	 * the kernel, since userspace controls RSP.  It's not worth
-	 * testing for canonicalness exactly -- this check detects any
-	 * of the 17 high bits set, which is true for non-canonical
-	 * or kernel addresses.  (This will pessimize vsyscall=native.
-	 * Big deal.)
+	 * the kernel, since userspace controls RSP.
 	 *
-	 * If virtual addresses ever become wider, this will need
+	 * If width of "canonical tail" ever becomes variable, this will need
 	 * to be updated to remain correct on both old and new CPUs.
 	 */
 	.ifne __VIRTUAL_MASK_SHIFT - 47
 	.error "virtual address width changed -- SYSRET checks need update"
 	.endif
-	shr $__VIRTUAL_MASK_SHIFT, %rcx
-	jnz opportunistic_sysret_failed
+	/* Change top 16 bits to be the sign-extension of 47th bit */
+	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
+	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
+	/* If this changed %rcx, it was not canonical */
+	cmpq	%rcx, %r11
+	jne	opportunistic_sysret_failed
 
 	cmpq $__USER_CS,CS(%rsp)	/* CS must match SYSRET */
 	jne opportunistic_sysret_failed
@@ -466,8 +467,8 @@ syscall_return:
 	 */
 syscall_return_via_sysret:
 	CFI_REMEMBER_STATE
-	/* r11 is already restored (see code above) */
-	RESTORE_C_REGS_EXCEPT_R11
+	/* rcx and r11 are already restored (see code above) */
+	RESTORE_C_REGS_EXCEPT_RCX_R11
 	movq RSP(%rsp),%rsp
 	USERGS_SYSRET64
 	CFI_RESTORE_STATE
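
To make the control flow in the first hunk easier to follow, here is a rough
C model of just the checks visible there; the struct and helper names are
hypothetical stand-ins for pt_regs fields, and the real code performs further
checks (SS, saved R11 vs. EFLAGS, etc.) that fall outside this diff:

#include <stdint.h>

#define USER_CS 0x33	/* __USER_CS on x86-64: GDT entry 6, RPL 3 */

struct saved_regs {	/* hypothetical stand-in for the pt_regs fields used here */
	uint64_t cx, ip, cs;
};

/* Same shl/sar trick as the diff, with __VIRTUAL_MASK_SHIFT == 47.
 * Assumes the compiler emits an arithmetic right shift for int64_t. */
static int rip_is_canonical(uint64_t rip)
{
	return (uint64_t)((int64_t)(rip << 16) >> 16) == rip;
}

/* 1 if the checks shown in the first hunk would allow the opportunistic
 * SYSRET fast path, 0 if the kernel must fall back to IRET. */
static int sysret_checks_pass(const struct saved_regs *r)
{
	if (r->cx != r->ip)		/* SYSRET resumes at RCX, so RCX must equal RIP */
		return 0;
	if (!rip_is_canonical(r->ip))	/* non-canonical RIP would #GP in kernel mode */
		return 0;
	if (r->cs != USER_CS)		/* CS must match what SYSRET loads */
		return 0;
	return 1;
}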