
Commit 33e247c7 authored by Linus Torvalds

Merge branch 'akpm' (patches from Andrew)

Merge third patch-bomb from Andrew Morton:

 - even more of the rest of MM

 - lib/ updates

 - checkpatch updates

 - small changes to a few scruffy filesystems

 - kmod fixes/cleanups

 - kexec updates

 - a dma-mapping cleanup series from hch

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (81 commits)
  dma-mapping: consolidate dma_set_mask
  dma-mapping: consolidate dma_supported
  dma-mapping: cosolidate dma_mapping_error
  dma-mapping: consolidate dma_{alloc,free}_noncoherent
  dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent}
  mm: use vma_is_anonymous() in create_huge_pmd() and wp_huge_pmd()
  mm: make sure all file VMAs have ->vm_ops set
  mm, mpx: add "vm_flags_t vm_flags" arg to do_mmap_pgoff()
  mm: mark most vm_operations_struct const
  namei: fix warning while make xmldocs caused by namei.c
  ipc: convert invalid scenarios to use WARN_ON
  zlib_deflate/deftree: remove bi_reverse()
  lib/decompress_unlzma: Do a NULL check for pointer
  lib/decompressors: use real out buf size for gunzip with kernel
  fs/affs: make root lookup from blkdev logical size
  sysctl: fix int -> unsigned long assignments in INT_MIN case
  kexec: export KERNEL_IMAGE_SIZE to vmcoreinfo
  kexec: align crash_notes allocation to make it be inside one physical page
  kexec: remove unnecessary test in kimage_alloc_crash_control_pages()
  kexec: split kexec_load syscall from kexec core code
  ...
parents d71fc239 452e06af
CREDITS +4 −0
@@ -2992,6 +2992,10 @@ S: 2200 Mission College Blvd
 S: Santa Clara, CA 95052
 S: USA
 
+N: Anil Ravindranath
+E: anil_ravindranath@pmc-sierra.com
+D: PMC-Sierra MaxRAID driver
+
 N: Eric S. Raymond
 E: esr@thyrsus.com
 W: http://www.tuxedo.org/~esr/
Documentation/vm/00-INDEX +2 −0
@@ -14,6 +14,8 @@ hugetlbpage.txt
 	- a brief summary of hugetlbpage support in the Linux kernel.
 hwpoison.txt
 	- explains what hwpoison is
+idle_page_tracking.txt
+	- description of the idle page tracking feature.
 ksm.txt
 	- how to use the Kernel Samepage Merging feature.
 numa
Documentation/vm/idle_page_tracking.txt +98 −0 (new file)
MOTIVATION

The idle page tracking feature makes it possible to track which memory pages are being
accessed by a workload and which are idle. This information can be useful for
estimating the workload's working set size, which, in turn, can be taken into
account when configuring the workload parameters, setting memory cgroup limits,
or deciding where to place the workload within a compute cluster.

It is enabled by CONFIG_IDLE_PAGE_TRACKING=y.

USER API

The idle page tracking API is located at /sys/kernel/mm/page_idle. Currently,
it consists of a single read-write file, /sys/kernel/mm/page_idle/bitmap.

The file implements a bitmap where each bit corresponds to a memory page. The
bitmap is represented by an array of 8-byte integers, and the page at PFN #i is
mapped to bit #i%64 of array element #i/64; the byte order is native. When a bit is
set, the corresponding page is idle.

A page is considered idle if it has not been accessed since it was marked idle
(for more details on what "accessed" actually means see the IMPLEMENTATION
DETAILS section). To mark a page idle one has to set the bit corresponding to
the page by writing to the file. A value written to the file is OR-ed with the
current bitmap value.

Only accesses to user memory pages are tracked: pages mapped into a process
address space, page cache and buffer pages, and swap cache pages. For other
page types (e.g. SLAB pages) an attempt to mark a page idle is silently ignored,
and hence such pages are never reported idle.

For huge pages the idle flag is set only on the head page, so one has to read
/proc/kpageflags in order to correctly count idle huge pages.

Reading from or writing to /sys/kernel/mm/page_idle/bitmap will return
-EINVAL if you are not starting the read/write on an 8-byte boundary, or
if the size of the read/write is not a multiple of 8 bytes. Writing to
this file beyond max PFN will return -ENXIO.
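
To make the bitmap layout and the access-size rules concrete, here is a minimal
user-space sketch (illustrative only, not part of the kernel sources; the helper
names are hypothetical and the PFN is assumed to have been obtained elsewhere,
e.g. from /proc/pid/pagemap):

/* page_idle_example.c - illustrative sketch only */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define BITMAP_PATH "/sys/kernel/mm/page_idle/bitmap"

/* Mark the page at 'pfn' idle: write its bit; the kernel ORs it in. */
static int mark_page_idle(int fd, uint64_t pfn)
{
	uint64_t entry = 1ULL << (pfn % 64);
	/* each 8-byte entry covers 64 pages; offsets must be 8-byte aligned */
	off_t offset = (pfn / 64) * sizeof(uint64_t);

	return pwrite(fd, &entry, sizeof(entry), offset) == sizeof(entry) ? 0 : -1;
}

/* Return 1 if the page at 'pfn' is still flagged idle, 0 if not, -1 on error. */
static int page_is_idle(int fd, uint64_t pfn)
{
	uint64_t entry;

	if (pread(fd, &entry, sizeof(entry),
		  (pfn / 64) * sizeof(uint64_t)) != sizeof(entry))
		return -1;
	return (entry >> (pfn % 64)) & 1;
}

int main(void)
{
	uint64_t pfn = 0x12345;		/* hypothetical PFN */
	int fd = open(BITMAP_PATH, O_RDWR);

	if (fd < 0)
		return 1;
	mark_page_idle(fd, pfn);
	/* ... let the workload run (step 2 of the procedure below) ... */
	printf("page %#llx idle: %d\n", (unsigned long long)pfn,
	       page_is_idle(fd, pfn));
	close(fd);
	return 0;
}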

With that in mind, in order to estimate the number of pages that are not used
by a workload one should do the following (a sketch of step 1 is given after
the list):

 1. Mark all the workload's pages as idle by setting corresponding bits in
    /sys/kernel/mm/page_idle/bitmap. The pages can be found by reading
    /proc/pid/pagemap if the workload is represented by a process, or by
    filtering out alien pages using /proc/kpagecgroup in case the workload is
    placed in a memory cgroup.

 2. Wait until the workload accesses its working set.

 3. Read /sys/kernel/mm/page_idle/bitmap and count the number of bits set. If
    one wants to ignore certain types of pages, e.g. mlocked pages since they
    are not reclaimable, he or she can filter them out using /proc/kpageflags.

See Documentation/vm/pagemap.txt for more information about /proc/pid/pagemap,
/proc/kpageflags, and /proc/kpagecgroup.
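
For step 1 in the process case, the bookkeeping might look like the sketch
below. It relies on the /proc/pid/pagemap format documented there (one 64-bit
entry per virtual page, PFN in bits 0-54, bit 63 set when the page is present)
and assumes a 4 KiB page size and sufficient privilege to see PFNs in pagemap;
the program name and arguments are illustrative.

/* mark_region_idle.c - illustrative sketch only
 * usage: ./mark_region_idle <pid> <start_hex> <end_hex>
 * (region boundaries would normally come from /proc/<pid>/maps)
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_BYTES	4096ULL		/* assumption; use sysconf(_SC_PAGESIZE) in real code */
#define PFN_MASK	((1ULL << 55) - 1)	/* pagemap bits 0-54 hold the PFN */
#define PM_PRESENT	(1ULL << 63)

static int mark_region_idle(int pagemap_fd, int bitmap_fd,
			    uint64_t start, uint64_t end)
{
	for (uint64_t vaddr = start; vaddr < end; vaddr += PAGE_BYTES) {
		uint64_t pm, bit;

		if (pread(pagemap_fd, &pm, sizeof(pm),
			  (vaddr / PAGE_BYTES) * sizeof(pm)) != sizeof(pm))
			return -1;
		if (!(pm & PM_PRESENT))
			continue;	/* page not resident, nothing to mark */

		/* one 8-byte write per page for simplicity; writes are OR-ed,
		 * so batching several bits per entry is also possible */
		bit = 1ULL << ((pm & PFN_MASK) % 64);
		if (pwrite(bitmap_fd, &bit, sizeof(bit),
			   ((pm & PFN_MASK) / 64) * sizeof(bit)) != sizeof(bit))
			return -1;
	}
	return 0;
}

int main(int argc, char **argv)
{
	char path[64];

	if (argc != 4)
		return 1;
	snprintf(path, sizeof(path), "/proc/%s/pagemap", argv[1]);

	int pagemap_fd = open(path, O_RDONLY);
	int bitmap_fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
	if (pagemap_fd < 0 || bitmap_fd < 0)
		return 1;

	return mark_region_idle(pagemap_fd, bitmap_fd,
				strtoull(argv[2], NULL, 16),
				strtoull(argv[3], NULL, 16)) ? 1 : 0;
}

After the workload has run (step 2), the same PFNs can be re-checked by reading
the bitmap back, or by testing the IDLE bit in /proc/kpageflags, and the
still-set bits counted (step 3).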

IMPLEMENTATION DETAILS

The kernel internally keeps track of accesses to user memory pages in order to
reclaim unreferenced pages first under memory shortage conditions. A page is
considered referenced if it has been recently accessed via a process address
space, in which case one or more of the PTEs it is mapped to will have the
Accessed bit set, or if it has been marked accessed explicitly by the kernel
(see mark_page_accessed()). The latter happens when:

 - a userspace process reads or writes a page using a system call (e.g. read(2)
   or write(2))

 - a page that is used for storing filesystem buffers is read or written,
   because a process needs filesystem metadata stored in it (e.g. lists a
   directory tree)

 - a page is accessed by a device driver using get_user_pages()

When a dirty page is written to swap or disk as a result of memory reclaim or
exceeding the dirty memory limit, it is not marked referenced.

The idle memory tracking feature adds a new page flag, the Idle flag. This flag
is set manually, by writing to /sys/kernel/mm/page_idle/bitmap (see the USER API
section), and cleared automatically whenever a page is referenced as defined
above.

When a page is marked idle, the Accessed bit must be cleared in all PTEs it is
mapped to, otherwise we will not be able to detect accesses to the page coming
from a process address space. To avoid interference with the reclaimer, which,
as noted above, uses the Accessed bit to promote actively referenced pages, one
more page flag is introduced, the Young flag. When the PTE Accessed bit is
cleared as a result of setting or updating a page's Idle flag, the Young flag
is set on the page. The reclaimer treats the Young flag as an extra PTE
Accessed bit and therefore will consider such a page as referenced.

Since the idle memory tracking feature is based on the memory reclaimer logic,
it only works with pages that are on an LRU list; other pages are silently
ignored. That means it will ignore a user memory page if it is isolated, but
since there are usually not many of them, it should not affect the overall
result noticeably. In order not to stall scanning of the idle page bitmap,
locked pages may be skipped too.
Documentation/vm/pagemap.txt +12 −1
@@ -5,7 +5,7 @@ pagemap is a new (as of 2.6.25) set of interfaces in the kernel that allow
 userspace programs to examine the page tables and related information by
 reading files in /proc.
 
-There are three components to pagemap:
+There are four components to pagemap:
 
  * /proc/pid/pagemap.  This file lets a userspace process find out which
    physical frame each virtual page is mapped to.  It contains one 64-bit
@@ -70,6 +70,11 @@ There are three components to pagemap:
     22. THP
     23. BALLOON
     24. ZERO_PAGE
+    25. IDLE
+
+ * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
+   memory cgroup each page is charged to, indexed by PFN. Only available when
+   CONFIG_MEMCG is set.
 
 Short descriptions to the page flags:
 
@@ -116,6 +121,12 @@ Short descriptions to the page flags:
 24. ZERO_PAGE
     zero page for pfn_zero or huge_zero page
 
+25. IDLE
+    page has not been accessed since it was marked idle (see
+    Documentation/vm/idle_page_tracking.txt). Note that this flag may be
+    stale in case the page was accessed via a PTE. To make sure the flag
+    is up-to-date one has to read /sys/kernel/mm/page_idle/bitmap first.
+
     [IO related page flags]
  1. ERROR     IO error occurred
  3. UPTODATE  page has up-to-date data
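
As a small illustration of the two PFN-indexed interfaces touched in the hunks
above, the sketch below reads the IDLE flag (bit 25 of /proc/kpageflags) and the
memory cgroup inode (/proc/kpagecgroup) for a given PFN. It is illustrative
only; the bit number and file paths come from the documentation above, while
the program name and arguments are hypothetical.

/* kpage_query.c - illustrative sketch only
 * usage: ./kpage_query <pfn>
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define KPF_IDLE 25	/* bit number from the flag list above */

/* Both files hold one 64-bit entry per page, indexed by PFN. */
static int read_u64_at_pfn(const char *path, uint64_t pfn, uint64_t *out)
{
	int fd = open(path, O_RDONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = pread(fd, out, sizeof(*out), pfn * sizeof(*out));
	close(fd);
	return n == sizeof(*out) ? 0 : -1;
}

int main(int argc, char **argv)
{
	uint64_t pfn, flags, cgino;

	if (argc != 2)
		return 1;
	pfn = strtoull(argv[1], NULL, 0);

	if (read_u64_at_pfn("/proc/kpageflags", pfn, &flags) == 0)
		printf("idle: %d\n", (int)((flags >> KPF_IDLE) & 1));
	if (read_u64_at_pfn("/proc/kpagecgroup", pfn, &cgino) == 0)
		printf("memcg inode: %llu\n", (unsigned long long)cgino);
	return 0;
}
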
Documentation/vm/zswap.txt +28 −8
@@ -32,7 +32,7 @@ can also be enabled and disabled at runtime using the sysfs interface.
 An example command to enable zswap at runtime, assuming sysfs is mounted
 at /sys, is:
 
-echo 1 > /sys/modules/zswap/parameters/enabled
+echo 1 > /sys/module/zswap/parameters/enabled
 
 When zswap is disabled at runtime it will stop storing pages that are
 being swapped out.  However, it will _not_ immediately write out or fault
@@ -49,14 +49,26 @@ Zswap receives pages for compression through the Frontswap API and is able to
 evict pages from its own compressed pool on an LRU basis and write them back to
 the backing swap device in the case that the compressed pool is full.
 
-Zswap makes use of zbud for the managing the compressed memory pool.  Each
-allocation in zbud is not directly accessible by address.  Rather, a handle is
+Zswap makes use of zpool for the managing the compressed memory pool.  Each
+allocation in zpool is not directly accessible by address.  Rather, a handle is
 returned by the allocation routine and that handle must be mapped before being
 accessed.  The compressed memory pool grows on demand and shrinks as compressed
-pages are freed.  The pool is not preallocated.
+pages are freed.  The pool is not preallocated.  By default, a zpool of type
+zbud is created, but it can be selected at boot time by setting the "zpool"
+attribute, e.g. zswap.zpool=zbud.  It can also be changed at runtime using the
+sysfs "zpool" attribute, e.g.
+
+echo zbud > /sys/module/zswap/parameters/zpool
+
+The zbud type zpool allocates exactly 1 page to store 2 compressed pages, which
+means the compression ratio will always be 2:1 or worse (because of half-full
+zbud pages).  The zsmalloc type zpool has a more complex compressed page
+storage method, and it can achieve greater storage densities.  However,
+zsmalloc does not implement compressed page eviction, so once zswap fills it
+cannot evict the oldest page, it can only reject new pages.
 
 When a swap page is passed from frontswap to zswap, zswap maintains a mapping
-of the swap entry, a combination of the swap type and swap offset, to the zbud
+of the swap entry, a combination of the swap type and swap offset, to the zpool
 handle that references that compressed swap page.  This mapping is achieved
 with a red-black tree per swap type.  The swap offset is the search key for the
 tree nodes.
@@ -74,9 +86,17 @@ controlled policy:
 * max_pool_percent - The maximum percentage of memory that the compressed
     pool can occupy.
 
-Zswap allows the compressor to be selected at kernel boot time by setting the
-"compressor" attribute.  The default compressor is lzo.  e.g.
-zswap.compressor=deflate
+The default compressor is lzo, but it can be selected at boot time by setting
+the "compressor" attribute, e.g. zswap.compressor=lzo.  It can also be changed
+at runtime using the sysfs "compressor" attribute, e.g.
+
+echo lzo > /sys/module/zswap/parameters/compressor
+
+When the zpool and/or compressor parameter is changed at runtime, any existing
+compressed pages are not modified; they are left in their own zpool.  When a
+request is made for a page in an old zpool, it is uncompressed using its
+original compressor.  Once all pages are removed from an old zpool, the zpool
+and its compressor are freed.
 
 A debugfs interface is provided for various statistic about pool size, number
 of pages stored, and various counters for the reasons pages are rejected.