
Commit 33e5d769 authored by David Howells, committed by Linus Torvalds

nommu: fix a number of issues with the per-MM VMA patch



Fix a number of issues with the per-MM VMA patch:

 (1) Make mmap_pages_allocated an atomic_long_t, just in case this is used on
     a NOMMU system with more than 2G pages.  Makes no difference on a 32-bit
     system.

 (2) Report vma->vm_pgoff * PAGE_SIZE as a 64-bit value, not a 32-bit value,
     lest it overflow.

 (3) Move the allocation of the vm_area_struct slab back to fork.c.

 (4) Use KMEM_CACHE() for both vm_area_struct and vm_region slabs.

 (5) Use BUG_ON() rather than if () BUG().

 (6) Make the default validate_nommu_regions() a static inline rather than a
     #define.

 (7) Make free_page_series()'s objection to pages with a refcount != 1 more
     informative.

 (8) Adjust the __put_nommu_region() banner comment to indicate that the
     semaphore must be held for writing.

 (9) Limit the number of warnings about munmaps of non-mmapped regions.
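The overflow fixed by point (2) can be shown in isolation. The sketch below is a hypothetical userspace analog, not kernel code: uint32_t stands in for the 32-bit unsigned long vm_pgoff, and DEMO_PAGE_SHIFT for PAGE_SHIFT. The key detail is that the cast binds tighter than <<, so widening happens before the shift.

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12	/* 4 KiB pages, as on most architectures */

/* Buggy form: the shift is done in 32 bits, so any page offset at
 * or beyond 4 GiB worth of file data loses its high bits before
 * the result is widened. */
static uint64_t byte_offset_32(uint32_t pgoff)
{
	return pgoff << DEMO_PAGE_SHIFT;	/* truncated to 32 bits */
}

/* Fixed form, mirroring the patch: widen first, then shift, so the
 * whole computation happens in 64 bits. */
static uint64_t byte_offset_64(uint32_t pgoff)
{
	return (uint64_t)pgoff << DEMO_PAGE_SHIFT;
}
```

With pgoff = 0x00200000 (an 8 GiB byte offset), the 32-bit form wraps to zero while the widened form reports the offset correctly.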

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: Greg Ungerer <gerg@snapgear.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 5482415a
+1 −1
@@ -120,7 +120,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		K(i.freeram-i.freehigh),
 #endif
 #ifndef CONFIG_MMU
-		K((unsigned long) atomic_read(&mmap_pages_allocated)),
+		K((unsigned long) atomic_long_read(&mmap_pages_allocated)),
 #endif
 		K(i.totalswap),
 		K(i.freeswap),
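Point (1) has a direct C11 analog: widening the counter from int-sized to long-sized atomics. This is a hypothetical userspace sketch using stdatomic.h, assuming an LP64 host where long is 64 bits; the names are illustrative stand-ins for the kernel's atomic_long_t and its accessors.

```c
#include <assert.h>
#include <stdatomic.h>

/* Long-sized atomic counter: on an LP64 system this is 64 bits
 * wide, so a running page count past 2^31 does not wrap the way
 * an atomic int would. Static storage zero-initializes it. */
static atomic_long demo_pages_allocated;

/* Account npages and return the new total, analogous to bumping
 * mmap_pages_allocated and reading it back with atomic_long_read(). */
static long demo_account_pages(long npages)
{
	atomic_fetch_add(&demo_pages_allocated, npages);
	return atomic_load(&demo_pages_allocated);
}
```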
+2 −2
@@ -136,14 +136,14 @@ static int nommu_vma_show(struct seq_file *m, struct vm_area_struct *vma)
 	}
 
 	seq_printf(m,
-		   "%08lx-%08lx %c%c%c%c %08lx %02x:%02x %lu %n",
+		   "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu %n",
 		   vma->vm_start,
 		   vma->vm_end,
 		   flags & VM_READ ? 'r' : '-',
 		   flags & VM_WRITE ? 'w' : '-',
 		   flags & VM_EXEC ? 'x' : '-',
 		   flags & VM_MAYSHARE ? flags & VM_SHARED ? 'S' : 's' : 'p',
-		   vma->vm_pgoff << PAGE_SHIFT,
+		   (unsigned long long) vma->vm_pgoff << PAGE_SHIFT,
 		   MAJOR(dev), MINOR(dev), ino, &len);
 
 	if (file) {
+1 −1
@@ -1079,7 +1079,7 @@ static inline void setup_per_cpu_pageset(void) {}
 #endif
 
 /* nommu.c */
-extern atomic_t mmap_pages_allocated;
+extern atomic_long_t mmap_pages_allocated;
 
 /* prio_tree.c */
 void vma_prio_tree_add(struct vm_area_struct *, struct vm_area_struct *old);
+1 −0
@@ -1488,6 +1488,7 @@ void __init proc_caches_init(void)
 	mm_cachep = kmem_cache_create("mm_struct",
 			sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
+	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC);
 	mmap_init();
 }
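What KMEM_CACHE() buys over the open-coded kmem_cache_create() call, per point (4), is that the cache name, object size, and alignment are all derived from the struct itself, so they cannot drift apart. The sketch below is a hypothetical userspace imitation of that macro pattern (DEMO_CACHE and struct vm_region_demo are made up for illustration), not the kernel's definition.

```c
#include <assert.h>
#include <string.h>

/* Record of the values a slab-cache creation would receive. */
struct demo_cache {
	const char *name;
	size_t size;
	size_t align;
};

/* Like the kernel's KMEM_CACHE(): stringify the struct tag for the
 * cache name and take size/alignment straight from the type. */
#define DEMO_CACHE(type) \
	((struct demo_cache){ #type, sizeof(struct type), _Alignof(struct type) })

/* Illustrative stand-in for the kernel's vm_region. */
struct vm_region_demo {
	unsigned long vm_start;
	unsigned long vm_end;
};
```

Renaming or resizing the struct automatically updates the cache parameters, which is the maintenance win the commit is after.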

+0 −3
@@ -2481,7 +2481,4 @@ void mm_drop_all_locks(struct mm_struct *mm)
  */
 void __init mmap_init(void)
 {
-	vm_area_cachep = kmem_cache_create("vm_area_struct",
-			sizeof(struct vm_area_struct), 0,
-			SLAB_PANIC, NULL);
 }