
Commit 7af3494e authored by Greg Kroah-Hartman

Merge 4.14.10 into android-4.14



Changes in 4.14.10
	Revert "ipv6: grab rt->rt6i_ref before allocating pcpu rt"
	objtool: Move synced files to their original relative locations
	objtool: Move kernel headers/code sync check to a script
	objtool: Fix cross-build
	tools/headers: Sync objtool UAPI header
	objtool: Fix 64-bit build on 32-bit host
	x86/decoder: Fix and update the opcodes map
	x86/insn-eval: Add utility functions to get segment selector
	x86/Kconfig: Limit NR_CPUS on 32-bit to a sane amount
	x86/mm/dump_pagetables: Check PAGE_PRESENT for real
	x86/mm/dump_pagetables: Make the address hints correct and readable
	x86/vsyscall/64: Explicitly set _PAGE_USER in the pagetable hierarchy
	x86/vsyscall/64: Warn and fail vsyscall emulation in NATIVE mode
	arch, mm: Allow arch_dup_mmap() to fail
	x86/ldt: Rework locking
	x86/ldt: Prevent LDT inheritance on exec
	x86/mm/64: Improve the memory map documentation
	x86/doc: Remove obvious weirdnesses from the x86 MM layout documentation
	x86/entry: Rename SYSENTER_stack to CPU_ENTRY_AREA_entry_stack
	x86/uv: Use the right TLB-flush API
	x86/microcode: Dont abuse the TLB-flush interface
	x86/mm: Use __flush_tlb_one() for kernel memory
	x86/mm: Remove superfluous barriers
	x86/mm: Add comments to clarify which TLB-flush functions are supposed to flush what
	x86/mm: Move the CR3 construction functions to tlbflush.h
	x86/mm: Remove hard-coded ASID limit checks
	x86/mm: Put MMU to hardware ASID translation in one place
	x86/mm: Create asm/invpcid.h
	x86/cpu_entry_area: Move it to a separate unit
	x86/cpu_entry_area: Move it out of the fixmap
	init: Invoke init_espfix_bsp() from mm_init()
	x86/cpu_entry_area: Prevent wraparound in setup_cpu_entry_area_ptes() on 32bit
	ACPI: APEI / ERST: Fix missing error handling in erst_reader()
	acpi, nfit: fix health event notification
	crypto: skcipher - set walk.iv for zero-length inputs
	crypto: mcryptd - protect the per-CPU queue with a lock
	crypto: af_alg - wait for data at beginning of recvmsg
	crypto: af_alg - fix race accessing cipher request
	mfd: cros ec: spi: Don't send first message too soon
	mfd: twl4030-audio: Fix sibling-node lookup
	mfd: twl6040: Fix child-node lookup
	ALSA: rawmidi: Avoid racy info ioctl via ctl device
	ALSA: hda/realtek - Fix Dell AIO LineOut issue
	ALSA: hda - Add vendor id for Cannonlake HDMI codec
	ALSA: usb-audio: Add native DSD support for Esoteric D-05X
	ALSA: usb-audio: Fix the missing ctl name suffix at parsing SU
	PCI / PM: Force devices to D0 in pci_pm_thaw_noirq()
	block: unalign call_single_data in struct request
	block-throttle: avoid double charge
	parisc: Align os_hpmc_size on word boundary
	parisc: Fix indenting in puts()
	parisc: Hide Diva-built-in serial aux and graphics card
	Revert "parisc: Re-enable interrupts early"
	spi: xilinx: Detect stall with Unknown commands
	spi: a3700: Fix clk prescaling for coefficient over 15
	pinctrl: cherryview: Mask all interrupts on Intel_Strago based systems
	arm64: kvm: Prevent restoring stale PMSCR_EL1 for vcpu
	KVM: arm/arm64: Fix HYP unmapping going off limits
	KVM: PPC: Book3S: fix XIVE migration of pending interrupts
	KVM: PPC: Book3S HV: Fix pending_pri value in kvmppc_xive_get_icp()
	KVM: MMU: Fix infinite loop when there is no available mmu page
	KVM: X86: Fix load RFLAGS w/o the fixed bit
	kvm: x86: fix RSM when PCID is non-zero
	clk: sunxi: sun9i-mmc: Implement reset callback for reset controls
	powerpc/perf: Dereference BHRB entries safely
	drm/i915: Flush pending GTT writes before unbinding
	drm/sun4i: Fix error path handling
	libnvdimm, dax: fix 1GB-aligned namespaces vs physical misalignment
	libnvdimm, btt: Fix an incompatibility in the log layout
	libnvdimm, pfn: fix start_pad handling for aligned namespaces
	net: mvneta: clear interface link status on port disable
	net: mvneta: use proper rxq_number in loop on rx queues
	net: mvneta: eliminate wrong call to handle rx descriptor error
	Revert "ipmi_si: fix memory leak on new_smi"
	Linux 4.14.10

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
parents 0cf36be1 b8ce8232
Documentation/x86/x86_64/mm.txt +11 −13


Virtual memory map with 4 level page tables:

0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
@@ -14,13 +12,15 @@ ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
... unused hole ...
ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
... unused hole ...
+fffffe8000000000 - fffffeffffffffff (=39 bits) cpu_entry_area mapping
ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
... unused hole ...
ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
... unused hole ...
ffffffff80000000 - ffffffff9fffffff (=512 MB)  kernel text mapping, from phys 0
-ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space (variable)
-ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
+ffffffffa0000000 - [fixmap start]   (~1526 MB) module mapping space (variable)
+[fixmap start]   - ffffffffff5fffff kernel-internal fixmap range
+ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole

Virtual memory map with 5 level page tables:
@@ -36,19 +36,22 @@ ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
... unused hole ...
ffdf000000000000 - fffffc0000000000 (=53 bits) kasan shadow memory (8PB)
... unused hole ...
+fffffe8000000000 - fffffeffffffffff (=39 bits) cpu_entry_area mapping
ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
... unused hole ...
ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
... unused hole ...
ffffffff80000000 - ffffffff9fffffff (=512 MB)  kernel text mapping, from phys 0
-ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space
-ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
+ffffffffa0000000 - [fixmap start]   (~1526 MB) module mapping space
+[fixmap start]   - ffffffffff5fffff kernel-internal fixmap range
+ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole

Architecture defines a 64-bit virtual address. Implementations can support
less. Currently supported are 48- and 57-bit virtual addresses. Bits 63
-through to the most-significant implemented bit are set to either all ones
-or all zero. This causes hole between user space and kernel addresses.
+through to the most-significant implemented bit are sign extended.
+This causes hole between user space and kernel addresses if you interpret them
+as unsigned.

The direct mapping covers all memory in the system up to the highest
memory address (this means in some cases it can also include PCI memory
@@ -58,9 +61,6 @@ vmalloc space is lazily synchronized into the different PML4/PML5 pages of
the processes using the page fault handler, with init_top_pgt as
reference.

-Current X86-64 implementations support up to 46 bits of address space (64 TB),
-which is our current limit. This expands into MBZ space in the page tables.

We map EFI runtime services in the 'efi_pgd' PGD in a 64Gb large virtual
memory window (this size is arbitrary, it can be raised later if needed).
The mappings are not part of any other kernel PGD and are only available
@@ -72,5 +72,3 @@ following fixmap section.
Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
physical memory, vmalloc/ioremap space and virtual memory map are randomized.
Their order is preserved but their base will be offset early at boot time.

-Andi Kleen, Jul 2004
Makefile +1 −1
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 14
-SUBLEVEL = 9
+SUBLEVEL = 10
EXTRAVERSION =
NAME = Petit Gorille

arch/arm64/kvm/hyp/debug-sr.c +3 −0
@@ -84,6 +84,9 @@ static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
{
	u64 reg;

+	/* Clear pmscr in case of early return */
+	*pmscr_el1 = 0;
+
	/* SPE present on this CPU? */
	if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
						  ID_AA64DFR0_PMSVER_SHIFT))
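The SPE fix above follows a general pattern: initialize the out-parameter before any early return, so the caller never consumes a stale value. A minimal standalone sketch (not the kernel code; `spe_present` and `hw_pmscr` are stand-ins for the CPUID feature check and the PMSCR_EL1 register):

```c
#include <stdint.h>
#include <stdbool.h>

static bool spe_present;          /* stand-in for the CPUID feature check */
static uint64_t hw_pmscr = 0x11;  /* stand-in for the PMSCR_EL1 register  */

/* Mirrors the fix: without the up-front clear, an early return would
 * leave *pmscr_el1 holding whatever the caller last stored, and that
 * stale value would be "restored" on the next vcpu run. */
static void debug_save_spe(uint64_t *pmscr_el1)
{
	/* Clear pmscr in case of early return */
	*pmscr_el1 = 0;

	if (!spe_present)
		return;           /* early return no longer leaks stale data */

	*pmscr_el1 = hw_pmscr;    /* save the live register value */
}
```

Calling `debug_save_spe` with SPE absent now always yields 0, even if the caller's variable was previously nonzero.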
arch/parisc/kernel/entry.S +9 −3
@@ -878,9 +878,6 @@ ENTRY_CFI(syscall_exit_rfi)
	STREG   %r19,PT_SR7(%r16)

intr_return:
-	/* NOTE: Need to enable interrupts incase we schedule. */
-	ssm     PSW_SM_I, %r0
-
	/* check for reschedule */
	mfctl   %cr30,%r1
	LDREG   TI_FLAGS(%r1),%r19	/* sched.h: TIF_NEED_RESCHED */
@@ -907,6 +904,11 @@ intr_check_sig:
	LDREG	PT_IASQ1(%r16), %r20
	cmpib,COND(=),n 0,%r20,intr_restore /* backward */

+	/* NOTE: We need to enable interrupts if we have to deliver
+	 * signals. We used to do this earlier but it caused kernel
+	 * stack overflows. */
+	ssm     PSW_SM_I, %r0
+
	copy	%r0, %r25			/* long in_syscall = 0 */
#ifdef CONFIG_64BIT
	ldo	-16(%r30),%r29			/* Reference param save area */
@@ -958,6 +960,10 @@ intr_do_resched:
	cmpib,COND(=)	0, %r20, intr_do_preempt
	nop

+	/* NOTE: We need to enable interrupts if we schedule.  We used
+	 * to do this earlier but it caused kernel stack overflows. */
+	ssm     PSW_SM_I, %r0
+
#ifdef CONFIG_64BIT
	ldo	-16(%r30),%r29		/* Reference param save area */
#endif
arch/parisc/kernel/hpmc.S +1 −0
@@ -305,6 +305,7 @@ ENDPROC_CFI(os_hpmc)


	__INITRODATA
+	.align 4
	.export os_hpmc_size
os_hpmc_size:
	.word .os_hpmc_end-.os_hpmc