Commit 2ef14f46 authored by Linus Torvalds

Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 mm changes from Peter Anvin:
 "This is a huge set of several partly interrelated (and concurrently
  developed) changes, which is why the branch history is messier than
  one would like.

  The *really* big items are two humongous patchsets, mostly developed
  by Yinghai Lu at my request, which completely revamp the way we
  create initial page tables.  In particular, rather than estimating
  how much memory we will need for page tables and then building them
  into that memory -- a calculation that has proven to be incredibly
  fragile -- we now build them (on 64 bits) with the aid of a
  "pseudo-linear mode" -- a #PF handler which creates temporary page
  tables on demand.

  This has several advantages:

  1. It makes it much easier to support things that need access to data
     very early (a followon patchset uses this to load microcode way
     early in the kernel startup).

  2. It allows the kernel and all the kernel data objects to be loaded
     above the 4 GB limit.  This allows kdump to work on very large
     systems.

  3. It greatly reduces the difference between Xen and native (Xen's
     equivalent of the #PF handler is the temporary page tables created
     by the domain builder), eliminating a bunch of fragile hooks.

  The patch series also gets us a bit closer to W^X.

  Also in this pull is the 64-bit get_user() work, which you were
  involved with as well, and a bunch of cleanups/speedups to
  __phys_addr()/__pa()."
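
To make the "pseudo-linear mode" idea concrete, here is a minimal
user-space sketch of on-demand page-table construction. It is an
illustration only: the pool size and every name in it are invented,
and the kernel's real early #PF handler of course operates on live
hardware page tables and is considerably more involved.

/* pseudo_linear_demo.c -- illustrative sketch, NOT kernel code.
 * Models the idea: instead of sizing all page tables up front, a
 * fault handler allocates table pages from a small pool and installs
 * mappings on demand. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define POOL_PAGES	64
#define PTRS_PER_TABLE	512	/* entries per table, as on x86-64 */

static uint64_t pool[POOL_PAGES][PTRS_PER_TABLE];
static int next_page;

static uint64_t *alloc_table(void)
{
	if (next_page >= POOL_PAGES)
		return NULL;	/* pool exhausted */
	memset(pool[next_page], 0, sizeof(pool[next_page]));
	return pool[next_page++];
}

/* "Fault handler": walk PGD -> PUD -> PMD, creating missing tables
 * as we go, then record a 2 MB identity mapping for the faulting
 * address.  (Flags, TLB handling etc. are omitted.) */
static int handle_early_fault(uint64_t *pgd, uint64_t addr)
{
	unsigned int idx[3] = {
		(unsigned int)(addr >> 39) & 511,	/* PGD index */
		(unsigned int)(addr >> 30) & 511,	/* PUD index */
		(unsigned int)(addr >> 21) & 511,	/* PMD index */
	};
	uint64_t *table = pgd;
	int level;

	for (level = 0; level < 2; level++) {
		if (!table[idx[level]]) {
			uint64_t *child = alloc_table();
			if (!child)
				return -1;
			table[idx[level]] = (uint64_t)(uintptr_t)child;
		}
		table = (uint64_t *)(uintptr_t)table[idx[level]];
	}
	table[idx[2]] = addr & ~((1ULL << 21) - 1);	/* 2 MB "page" */
	return 0;
}

int main(void)
{
	static uint64_t pgd[PTRS_PER_TABLE];

	/* A "fault" at an arbitrary address builds only the tables it
	 * actually needs -- no up-front estimate required. */
	if (handle_early_fault(pgd, 0x123456789000ULL) == 0)
		printf("mapped on demand using %d table pages\n",
		       next_page);
	return 0;
}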

* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (105 commits)
  x86, mm: Move reserving low memory later in initialization
  x86, doc: Clarify the use of asm("%edx") in uaccess.h
  x86, mm: Redesign get_user with a __builtin_choose_expr hack
  x86: Be consistent with data size in getuser.S
  x86, mm: Use a bitfield to mask nuisance get_user() warnings
  x86/kvm: Fix compile warning in kvm_register_steal_time()
  x86-32: Add support for 64bit get_user()
  x86-32, mm: Remove reference to alloc_remap()
  x86-32, mm: Remove reference to resume_map_numa_kva()
  x86-32, mm: Rip out x86_32 NUMA remapping code
  x86/numa: Use __pa_nodebug() instead
  x86: Don't panic if can not alloc buffer for swiotlb
  mm: Add alloc_bootmem_low_pages_nopanic()
  x86, 64bit, mm: hibernate use generic mapping_init
  x86, 64bit, mm: Mark data/bss/brk to nx
  x86: Merge early kernel reserve for 32bit and 64bit
  x86: Add Crash kernel low reservation
  x86, kdump: Remove crashkernel range find limit for 64bit
  memblock: Add memblock_mem_size()
  x86, boot: Not need to check setup_header version for setup_data
  ...
parents cb715a83 0da3e7f5
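
A note on the get_user() entries in the shortlog above: the x86-32
work lets a 32-bit kernel fetch a 64-bit value from user space in a
single get_user() call. As a hedged sketch (read_user_counter and
uptr are invented names, not part of this series):

#include <linux/uaccess.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Sketch only: with 64-bit get_user() support on x86-32, a u64 can
 * be read from user space in one call instead of via copy_from_user()
 * or two 32-bit reads. */
static int read_user_counter(const u64 __user *uptr, u64 *out)
{
	u64 val;

	if (get_user(val, uptr))
		return -EFAULT;	/* bad or unmapped user address */

	*out = val;
	return 0;
}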
Documentation/kernel-parameters.txt +3 −0
@@ -594,6 +594,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			is selected automatically. Check
			Documentation/kdump/kdump.txt for further details.

+	crashkernel_low=size[KMG]
+			[KNL, x86] Reserve the given amount of memory below 4G.
+
	crashkernel=range1:size1[,range2:size2,...][@offset]
			[KNL] Same as above, but depends on the memory
			in the running system. The syntax of range is
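
As a usage note for the crashkernel_low addition above (the sizes
here are arbitrary examples, not recommendations): a large crash
kernel reservation may be placed above 4G, so a small low reservation
is paired with it to leave the kdump kernel some DMA-capable memory,
e.g. on the kernel command line:

	crashkernel=512M crashkernel_low=72M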
Documentation/x86/boot.txt +38 −0
@@ -1055,6 +1055,44 @@ must have read/write permission; CS must be __BOOT_CS and DS, ES, SS
must be __BOOT_DS; interrupt must be disabled; %esi must hold the base
address of the struct boot_params; %ebp, %edi and %ebx must be zero.

+**** 64-bit BOOT PROTOCOL
+
+For machines with 64-bit CPUs and a 64-bit kernel, a 64-bit boot
+loader can be used, and for that we need a 64-bit boot protocol.
+
+In the 64-bit boot protocol, the first step in loading a Linux kernel
+should be to set up the boot parameters (struct boot_params,
+traditionally known as the "zero page"). The memory for struct
+boot_params may be allocated anywhere (even above 4G) and must be
+initialized to all zero. Then the setup header at offset 0x01f1 of
+the kernel image should be loaded into struct boot_params and
+examined. The end of the setup header can be calculated as follows:
+
+	0x0202 + byte value at offset 0x0201
+
+In addition to reading/modifying/writing the setup header of struct
+boot_params as with the 16-bit boot protocol, the boot loader should
+also fill in the additional fields of struct boot_params as described
+in zero-page.txt.
+
+After setting up struct boot_params, the boot loader can load the
+64-bit kernel in the same way as with the 16-bit boot protocol, except
+that the kernel may be loaded above 4G.
+
+In the 64-bit boot protocol, the kernel is started by jumping to the
+64-bit kernel entry point, which is the start address of the loaded
+64-bit kernel plus 0x200.
+
+At entry, the CPU must be in 64-bit mode with paging enabled. The
+range from the start address of the loaded kernel up to
+setup_header.init_size, as well as the zero page and the command line
+buffer, must be identity mapped; a GDT must be loaded with the
+descriptors for selectors __BOOT_CS(0x10) and __BOOT_DS(0x18); both
+descriptors must be 4G flat segments; __BOOT_CS must have execute/read
+permission, and __BOOT_DS must have read/write permission; CS must be
+__BOOT_CS and DS, ES, SS must be __BOOT_DS; interrupts must be
+disabled; %rsi must hold the base address of the struct boot_params.
+
**** EFI HANDOVER PROTOCOL

This protocol allows boot loaders to defer initialisation to the EFI
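
The setup-header arithmetic and entry-point rule in the new 64-bit
section can be sketched from a boot loader's point of view as follows.
This is a hedged illustration: setup_64bit_boot, kernel_image and bp
are invented names, and the identity-mapping, GDT, and register setup
(including putting the boot_params address in %rsi before the jump)
are left as comments.

#include <stdint.h>
#include <string.h>

/* Illustrative boot-loader-side sketch of the 64-bit boot protocol
 * steps documented above; all names here are placeholders. */
static uint64_t setup_64bit_boot(const uint8_t *kernel_image,
				 uint8_t *bp, uint64_t load_addr)
{
	/* End of setup header = 0x0202 + byte value at offset 0x0201. */
	size_t setup_end = 0x0202 + kernel_image[0x0201];

	/* Copy the setup header (it starts at offset 0x01f1) into the
	 * zeroed struct boot_params page. */
	memcpy(bp + 0x01f1, kernel_image + 0x01f1, setup_end - 0x01f1);

	/* ... fill the additional boot_params fields per zero-page.txt,
	 * load the kernel (possibly above 4G), build identity mappings
	 * covering the kernel up to setup_header.init_size plus the
	 * zero page and command line, load the GDT, disable
	 * interrupts ... */

	/* The 64-bit entry point is the load address plus 0x200; the
	 * real jump is done in assembly with %rsi = address of bp. */
	return load_addr + 0x200;
}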
arch/mips/cavium-octeon/dma-octeon.c +2 −1
@@ -317,7 +317,8 @@ void __init plat_swiotlb_setup(void)

	octeon_swiotlb = alloc_bootmem_low_pages(swiotlbsize);

-	swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs, 1);
+	if (swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs, 1) == -ENOMEM)
+		panic("Cannot allocate SWIOTLB buffer");

	mips_dma_map_ops = &octeon_linear_dma_map_ops.dma_map_ops;
}
arch/sparc/mm/init_64.c +11 −13
@@ -2027,6 +2027,16 @@ static void __init patch_tlb_miss_handler_bitmap(void)
	flushi(&valid_addr_bitmap_insn[0]);
}

+static void __init register_page_bootmem_info(void)
+{
+#ifdef CONFIG_NEED_MULTIPLE_NODES
+	int i;
+
+	for_each_online_node(i)
+		if (NODE_DATA(i)->node_spanned_pages)
+			register_page_bootmem_info_node(NODE_DATA(i));
+#endif
+}
void __init mem_init(void)
{
	unsigned long codepages, datapages, initpages;
@@ -2044,20 +2054,8 @@ void __init mem_init(void)

	high_memory = __va(last_valid_pfn << PAGE_SHIFT);

-#ifdef CONFIG_NEED_MULTIPLE_NODES
-	{
-		int i;
-		for_each_online_node(i) {
-			if (NODE_DATA(i)->node_spanned_pages != 0) {
-				totalram_pages +=
-					free_all_bootmem_node(NODE_DATA(i));
-			}
-		}
-		totalram_pages += free_low_memory_core_early(MAX_NUMNODES);
-	}
-#else
+	register_page_bootmem_info();
 	totalram_pages = free_all_bootmem();
-#endif

	/* We subtract one to account for the mem_map_zero page
	 * allocated below.
arch/x86/Kconfig +0 −4
@@ -1277,10 +1277,6 @@ config NODES_SHIFT
	  Specify the maximum number of NUMA Nodes available on the target
	  system.  Increases memory reserved to accommodate various tables.

-config HAVE_ARCH_ALLOC_REMAP
-	def_bool y
-	depends on X86_32 && NUMA
-
config ARCH_HAVE_MEMORY_PRESENT
	def_bool y
	depends on X86_32 && DISCONTIGMEM