
Commit 031bc574 authored by Joonsoo Kim, committed by Linus Torvalds

mm/debug-pagealloc: make debug-pagealloc boottime configurable



We are now prepared to avoid using debug-pagealloc at boot time.  So
introduce a new kernel parameter that enables debug-pagealloc at boot
time, and make the related functions no-ops when it is not enabled.

The only non-intuitive part is the change to the guard page functions.
Because guard pages are effective only if debug-pagealloc is enabled,
turning them off along with debug-pagealloc is the reasonable thing to do.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Dave Hansen <dave@sr71.net>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Jungsoo Son <jungsoo.son@lge.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e30825f1
+9 −0
@@ -829,6 +829,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			CONFIG_DEBUG_PAGEALLOC, hence this option will not help
			tracking down these problems.

+	debug_pagealloc=
+			[KNL] When CONFIG_DEBUG_PAGEALLOC is set, this
+			parameter enables the feature at boot time. By
+			default, it is disabled. We can avoid allocating a
+			huge chunk of memory for debug pagealloc if we don't
+			enable it at boot time, and the system will work
+			mostly the same as a kernel built without
+			CONFIG_DEBUG_PAGEALLOC.
+			on: enable the feature

	debugpat	[X86] Enable PAT debugging

	decnet.addr=	[HW,NET]
+1 −1
@@ -1514,7 +1514,7 @@ static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
			       mmu_kernel_ssize, 0);
}

-void kernel_map_pages(struct page *page, int numpages, int enable)
+void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	unsigned long flags, vaddr, lmi;
	int i;
+1 −1
@@ -429,7 +429,7 @@ static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
}


-void kernel_map_pages(struct page *page, int numpages, int enable)
+void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	if (PageHighMem(page))
		return;
+1 −1
@@ -120,7 +120,7 @@ static void ipte_range(pte_t *pte, unsigned long address, int nr)
	}
}

-void kernel_map_pages(struct page *page, int numpages, int enable)
+void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	unsigned long address;
	int nr, i, j;
+1 −1
@@ -1621,7 +1621,7 @@ static void __init kernel_physical_mapping_init(void)
}

#ifdef CONFIG_DEBUG_PAGEALLOC
-void kernel_map_pages(struct page *page, int numpages, int enable)
+void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	unsigned long phys_start = page_to_pfn(page) << PAGE_SHIFT;
	unsigned long phys_end = phys_start + (numpages * PAGE_SIZE);