
Commit 874bfcaf authored by Davidlohr Bueso, committed by Linus Torvalds

mm/xip: share the i_mmap_rwsem



__xip_unmap() removes the xip sparse page from the cache and takes down the
pte mapping without altering the interval tree, so we can share the
i_mmap_rwsem when searching for the ptes to unmap.

Additionally, tidy up the function a bit and make variables only local to
the interval tree walk loop.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4a23717a
+9 −14
@@ -155,22 +155,14 @@ xip_file_read(struct file *filp, char __user *buf, size_t len, loff_t *ppos)
 EXPORT_SYMBOL_GPL(xip_file_read);
 
 /*
- * __xip_unmap is invoked from xip_unmap and
- * xip_write
+ * __xip_unmap is invoked from xip_unmap and xip_write
  *
  * This function walks all vmas of the address_space and unmaps the
  * __xip_sparse_page when found at pgoff.
  */
-static void
-__xip_unmap (struct address_space * mapping,
-		     unsigned long pgoff)
+static void __xip_unmap(struct address_space * mapping, unsigned long pgoff)
 {
 	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long address;
-	pte_t *pte;
-	pte_t pteval;
-	spinlock_t *ptl;
 	struct page *page;
 	unsigned count;
 	int locked = 0;
@@ -182,11 +174,14 @@ __xip_unmap (struct address_space * mapping,
 		return;
 
 retry:
-	i_mmap_lock_write(mapping);
+	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
-		mm = vma->vm_mm;
-		address = vma->vm_start +
+		pte_t *pte, pteval;
+		spinlock_t *ptl;
+		struct mm_struct *mm = vma->vm_mm;
+		unsigned long address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+
 		BUG_ON(address < vma->vm_start || address >= vma->vm_end);
 		pte = page_check_address(page, mm, address, &ptl, 1);
 		if (pte) {
@@ -202,7 +197,7 @@ __xip_unmap (struct address_space * mapping,
 			page_cache_release(page);
 		}
 	}
-	i_mmap_unlock_write(mapping);
+	i_mmap_unlock_read(mapping);
 
 	if (locked) {
 		mutex_unlock(&xip_sparse_mutex);