
Commit 70ad6368 authored by Linus Torvalds

Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Ingo Molnar:
 "The biggest part is a series of reverts for the macro based GCC
  inlining workarounds. It caused regressions in distro build and other
  kernel tooling environments, and the GCC project was very receptive to
  fixing the underlying inliner weaknesses - so as time ran out we
  decided to do a reasonably straightforward revert of the patches. The
  plan is to rely on the 'asm inline' GCC 9 feature, which might be
  backported to GCC 8 and could thus become reasonably widely available
  on modern distros.

  Other than those reverts, there are misc fixes from all around the
  place.

  I wish our final x86 pull request for v4.20 was smaller..."

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  Revert "kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs"
  Revert "x86/objtool: Use asm macros to work around GCC inlining bugs"
  Revert "x86/refcount: Work around GCC inlining bug"
  Revert "x86/alternatives: Macrofy lock prefixes to work around GCC inlining bugs"
  Revert "x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs"
  Revert "x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops"
  Revert "x86/extable: Macrofy inline assembly code to work around GCC inlining bugs"
  Revert "x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs"
  Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs"
  x86/mtrr: Don't copy uninitialized gentry fields back to userspace
  x86/fsgsbase/64: Fix the base write helper functions
  x86/mm/cpa: Fix cpa_flush_array() TLB invalidation
  x86/vdso: Pass --eh-frame-hdr to the linker
  x86/mm: Fix decoy address handling vs 32-bit builds
  x86/intel_rdt: Ensure a CPU remains online for the region's pseudo-locking sequence
  x86/dump_pagetables: Fix LDT remap address marker
  x86/mm: Fix guard hole handling
parents 96d6ee7d 6ac38934
+2 −7
@@ -1076,7 +1076,7 @@ scripts: scripts_basic scripts_dtc asm-generic gcc-plugins $(autoksyms_h)
 # version.h and scripts_basic is processed / created.
 
 # Listed in dependency order
-PHONY += prepare archprepare macroprepare prepare0 prepare1 prepare2 prepare3
+PHONY += prepare archprepare prepare0 prepare1 prepare2 prepare3
 
 # prepare3 is used to check if we are building in a separate output directory,
 # and if so do:
@@ -1099,9 +1099,7 @@ prepare2: prepare3 outputmakefile asm-generic
 prepare1: prepare2 $(version_h) $(autoksyms_h) include/generated/utsrelease.h
 	$(cmd_crmodverdir)
 
-macroprepare: prepare1 archmacros
-
-archprepare: archheaders archscripts macroprepare scripts_basic
+archprepare: archheaders archscripts prepare1 scripts_basic
 
 prepare0: archprepare gcc-plugins
 	$(Q)$(MAKE) $(build)=.
@@ -1177,9 +1175,6 @@ archheaders:
 PHONY += archscripts
 archscripts:
 
-PHONY += archmacros
-archmacros:
-
 PHONY += __headers
 __headers: $(version_h) scripts_basic uapi-asm-generic archheaders archscripts
 	$(Q)$(MAKE) $(build)=scripts build_unifdef
+0 −7
@@ -232,13 +232,6 @@ archscripts: scripts_basic
 archheaders:
 	$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
 
-archmacros:
-	$(Q)$(MAKE) $(build)=arch/x86/kernel arch/x86/kernel/macros.s
-
-ASM_MACRO_FLAGS = -Wa,arch/x86/kernel/macros.s
-export ASM_MACRO_FLAGS
-KBUILD_CFLAGS += $(ASM_MACRO_FLAGS)
-
 ###
 # Kernel objects
 
+1 −1
@@ -352,7 +352,7 @@ For 32-bit we have the following conventions - kernel is built with
 .macro CALL_enter_from_user_mode
 #ifdef CONFIG_CONTEXT_TRACKING
 #ifdef HAVE_JUMP_LABEL
-	STATIC_BRANCH_JMP l_yes=.Lafter_call_\@, key=context_tracking_enabled, branch=1
+	STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_enabled, def=0
 #endif
 	call enter_from_user_mode
 .Lafter_call_\@:
+2 −1
@@ -171,7 +171,8 @@ quiet_cmd_vdso = VDSO $@
 		 sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'
 
 VDSO_LDFLAGS = -shared $(call ld-option, --hash-style=both) \
-	$(call ld-option, --build-id) -Bsymbolic
+	$(call ld-option, --build-id) $(call ld-option, --eh-frame-hdr) \
+	-Bsymbolic
 GCOV_PROFILE := n
 
 #
+6 −14
@@ -7,23 +7,15 @@
 #include <asm/asm.h>
 
 #ifdef CONFIG_SMP
-.macro LOCK_PREFIX_HERE
+	.macro LOCK_PREFIX
+672:	lock
 	.pushsection .smp_locks,"a"
 	.balign 4
-	.long 671f - .		# offset
+	.long 672b - .
 	.popsection
-671:
-.endm
-
-.macro LOCK_PREFIX insn:vararg
-	LOCK_PREFIX_HERE
-	lock \insn
-.endm
+	.endm
 #else
-.macro LOCK_PREFIX_HERE
-.endm
-
-.macro LOCK_PREFIX insn:vararg
-.endm
+	.macro LOCK_PREFIX
+	.endm
 #endif
 