
Commit 314e51b9 authored by Konstantin Khlebnikov, committed by Linus Torvalds

mm: kill vma flag VM_RESERVED and mm->reserved_vm counter



A long time ago, in v2.4, VM_RESERVED kept the swapout process off a VMA.
It has since lost that original meaning, but it still has several effects:

 | effect                 | alternative flags
-+------------------------+---------------------------------------------
1| account as reserved_vm | VM_IO
2| skip in core dump      | VM_IO, VM_DONTDUMP
3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP

This patch removes the reserved_vm counter from mm_struct.  Nobody seems to
care about it: it is not exported to userspace directly, it only reduces the
total_vm shown in /proc.

Thus VM_RESERVED can be replaced with VM_IO or with the pair VM_DONTEXPAND | VM_DONTDUMP.

remap_pfn_range() and io_remap_pfn_range() now set VM_IO | VM_DONTEXPAND | VM_DONTDUMP.
remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.
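
As a rough illustration of the conversion (not part of this patch), a driver
mmap handler that used to tag its mapping with VM_RESERVED would now set the
explicit flag pair.  The handler name example_mmap and the pfn EXAMPLE_PFN
below are made-up placeholders, and a sketch only:

	#include <linux/mm.h>
	#include <linux/fs.h>

	/* Hypothetical page frame to expose; illustrative value only. */
	#define EXAMPLE_PFN 0x1000UL

	static int example_mmap(struct file *file, struct vm_area_struct *vma)
	{
		/* Old code would have done: vma->vm_flags |= VM_RESERVED; */
		vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;

		/*
		 * remap_pfn_range() itself now sets VM_IO | VM_DONTEXPAND |
		 * VM_DONTDUMP, so the explicit assignment above only matters
		 * for mappings that are not created through
		 * remap_pfn_range()/io_remap_pfn_range().
		 */
		return remap_pfn_range(vma, vma->vm_start, EXAMPLE_PFN,
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot);
	}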

[akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Paris <eparis@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Venkatesh Pallipadi <venki@google.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 0103bd16
+2 −2
@@ -371,8 +371,8 @@ mlock_fixup() filters several classes of "special" VMAs:
   mlock_fixup() will call make_pages_present() in the hugetlbfs VMA range to
   allocate the huge pages and populate the ptes.
 
-3) VMAs with VM_DONTEXPAND or VM_RESERVED are generally userspace mappings of
-   kernel pages, such as the VDSO page, relay channel pages, etc.  These pages
+3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
+   such as the VDSO page, relay channel pages, etc. These pages
   are inherently unevictable and are not managed on the LRU lists.
   mlock_fixup() treats these VMAs the same as hugetlbfs VMAs.  It calls
   make_pages_present() to populate the ptes.
+1 −1
@@ -26,7 +26,7 @@ static int hose_mmap_page_range(struct pci_controller *hose,
 		base = sparse ? hose->sparse_io_base : hose->dense_io_base;
 
 	vma->vm_pgoff += base >> PAGE_SHIFT;
-	vma->vm_flags |= (VM_IO | VM_RESERVED);
+	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
 
 	return io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
 				  vma->vm_end - vma->vm_start,
+1 −1
@@ -2307,7 +2307,7 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
 	 */
 	vma->vm_mm	     = mm;
 	vma->vm_file	     = get_file(filp);
-	vma->vm_flags	     = VM_READ| VM_MAYREAD |VM_RESERVED;
+	vma->vm_flags	     = VM_READ|VM_MAYREAD|VM_DONTEXPAND|VM_DONTDUMP;
 	vma->vm_page_prot    = PAGE_READONLY; /* XXX may need to change */
 
 	/*
+2 −1
@@ -138,7 +138,8 @@ ia64_init_addr_space (void)
 			vma->vm_mm = current->mm;
 			vma->vm_end = PAGE_SIZE;
 			vma->vm_page_prot = __pgprot(pgprot_val(PAGE_READONLY) | _PAGE_MA_NAT);
-			vma->vm_flags = VM_READ | VM_MAYREAD | VM_IO | VM_RESERVED;
+			vma->vm_flags = VM_READ | VM_MAYREAD | VM_IO |
+					VM_DONTEXPAND | VM_DONTDUMP;
 			down_write(&current->mm->mmap_sem);
 			if (insert_vm_struct(current->mm, vma)) {
 				up_write(&current->mm->mmap_sem);
+1 −1
@@ -1183,7 +1183,7 @@ static const struct vm_operations_struct kvm_rma_vm_ops = {
 
 static int kvm_rma_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	vma->vm_flags |= VM_RESERVED;
+	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
 	vma->vm_ops = &kvm_rma_vm_ops;
 	return 0;
 }