
Commit 76253fbc authored by Jan Kara, committed by Linus Torvalds

mm: move accounting updates before page_cache_tree_delete()

Move updates of various counters before page_cache_tree_delete() call.
It will be easier to batch things this way and there is no difference
whether the counters get updated before or after removal from the radix
tree.

Link: http://lkml.kernel.org/r/20171010151937.26984-5-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 59c66c5f
+25 −24
@@ -224,15 +224,8 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 		}
 	}
 
-	page_cache_tree_delete(mapping, page, shadow);
-
-	page->mapping = NULL;
-	/* Leave page->index set: truncation lookup relies upon it */
-
 	/* hugetlb pages do not participate in page cache accounting. */
-	if (PageHuge(page))
-		return;
-
+	if (!PageHuge(page)) {
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 		if (PageSwapBacked(page)) {
 			__mod_node_page_state(page_pgdat(page), NR_SHMEM, -nr);
@@ -243,15 +236,23 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 		}
 
 		/*
-	 * At this point page must be either written or cleaned by truncate.
-	 * Dirty page here signals a bug and loss of unwritten data.
+		 * At this point page must be either written or cleaned by
+		 * truncate.  Dirty page here signals a bug and loss of
+		 * unwritten data.
 		 *
-	 * This fixes dirty accounting after removing the page entirely but
-	 * leaves PageDirty set: it has no effect for truncated page and
-	 * anyway will be cleared before returning page into buddy allocator.
+		 * This fixes dirty accounting after removing the page entirely
+		 * but leaves PageDirty set: it has no effect for truncated
+		 * page and anyway will be cleared before returning page into
+		 * buddy allocator.
 		 */
 		if (WARN_ON_ONCE(PageDirty(page)))
-		account_page_cleaned(page, mapping, inode_to_wb(mapping->host));
+			account_page_cleaned(page, mapping,
+					     inode_to_wb(mapping->host));
+	}
+	page_cache_tree_delete(mapping, page, shadow);
+
+	page->mapping = NULL;
+	/* Leave page->index set: truncation lookup relies upon it */
 }
 
 static void page_cache_free_page(struct address_space *mapping,