
Commit 34b10060 authored by Yaowei Bai, committed by Linus Torvalds

mm/page_alloc.c: change sysctl_lower_zone_reserve_ratio to sysctl_lowmem_reserve_ratio in comments



We use sysctl_lowmem_reserve_ratio, not sysctl_lower_zone_reserve_ratio,
to determine how aggressively the kernel defends lowmem from being
captured into pinned user memory.  Fix the stale name in these comments
to avoid misleading readers.

Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 013110a7
mm/page_alloc.c +2 −2
@@ -6075,7 +6075,7 @@ void __init page_alloc_init(void)
 }
 
 /*
- * calculate_totalreserve_pages - called when sysctl_lower_zone_reserve_ratio
+ * calculate_totalreserve_pages - called when sysctl_lowmem_reserve_ratio
  *	or min_free_kbytes changes.
  */
 static void calculate_totalreserve_pages(void)
@@ -6119,7 +6119,7 @@ static void calculate_totalreserve_pages(void)
 
 /*
  * setup_per_zone_lowmem_reserve - called whenever
- *	sysctl_lower_zone_reserve_ratio changes.  Ensures that each zone
+ *	sysctl_lowmem_reserve_ratio changes.  Ensures that each zone
  *	has a correct pages reserved value, so an adequate number of
  *	pages are left in the zone after a successful __alloc_pages().
  */