
Commit 8a5c743e authored by Michal Hocko, committed by Linus Torvalds

mm, memcg: use consistent gfp flags during readahead

Vladimir has noticed that we might declare memcg oom even during
readahead because read_pages only uses GFP_KERNEL (with mapping_gfp
restriction) while __do_page_cache_readahead uses
page_cache_alloc_readahead, which adds __GFP_NORETRY to prevent OOMs.
This gfp mask discrepancy is really unfortunate and easily fixable.
Drop page_cache_alloc_readahead(), which only has one user, move the
gfp_mask logic into readahead_gfp_mask, and propagate this mask from
__do_page_cache_readahead down to read_pages.

On its own this would have only very limited impact, as most filesystems
implement ->readpages and the common implementation, mpage_readpages,
uses GFP_KERNEL (with mapping_gfp restriction) again.  We can tell it to
use readahead_gfp_mask instead, as this function is called only during
readahead as well.  The same applies to read_cache_pages.

ext4 has its own ext4_mpage_readpages, but the path which has pages !=
NULL can use the same gfp mask.  Btrfs, cifs, f2fs and orangefs follow
a pattern very similar to mpage_readpages, so the same change can be
applied to them as well.

[akpm@linux-foundation.org: coding-style fixes]
[mhocko@suse.com: restrict gfp mask in mpage_alloc]
  Link: http://lkml.kernel.org/r/20160610074223.GC32285@dhcp22.suse.cz
Link: http://lkml.kernel.org/r/1465301556-26431-1-git-send-email-mhocko@kernel.org


Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: Chris Mason <clm@fb.com>
Cc: Steve French <sfrench@samba.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Jan Kara <jack@suse.cz>
Cc: Mike Marshall <hubcap@omnibond.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Changman Lee <cm224.lee@samsung.com>
Cc: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e5e3f4c4
+2 −1
@@ -4180,7 +4180,8 @@ int extent_readpages(struct extent_io_tree *tree,
 		prefetchw(&page->flags);
 		list_del(&page->lru);
 		if (add_to_page_cache_lru(page, mapping,
-					page->index, GFP_NOFS)) {
+					page->index,
+					readahead_gfp_mask(mapping))) {
 			put_page(page);
 			continue;
 		}
+1 −1
@@ -3366,7 +3366,7 @@ readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
 	struct page *page, *tpage;
 	unsigned int expected_index;
 	int rc;
-	gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
+	gfp_t gfp = readahead_gfp_mask(mapping);

 	INIT_LIST_HEAD(tmplist);

+1 −1
@@ -166,7 +166,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
 			page = list_entry(pages->prev, struct page, lru);
 			list_del(&page->lru);
 			if (add_to_page_cache_lru(page, mapping, page->index,
-				  mapping_gfp_constraint(mapping, GFP_KERNEL)))
+				  readahead_gfp_mask(mapping)))
 				goto next_page;
 		}

+2 −1
@@ -996,7 +996,8 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 			page = list_entry(pages->prev, struct page, lru);
 			list_del(&page->lru);
 			if (add_to_page_cache_lru(page, mapping,
-						  page->index, GFP_KERNEL))
+						  page->index,
+						  readahead_gfp_mask(mapping)))
 				goto next_page;
 		}

+3 −1
@@ -71,6 +71,8 @@ mpage_alloc(struct block_device *bdev,
 {
 	struct bio *bio;

+	/* Restrict the given (page cache) mask for slab allocations */
+	gfp_flags &= GFP_KERNEL;
 	bio = bio_alloc(gfp_flags, nr_vecs);

 	if (bio == NULL && (current->flags & PF_MEMALLOC)) {
@@ -362,7 +364,7 @@ mpage_readpages(struct address_space *mapping, struct list_head *pages,
 	sector_t last_block_in_bio = 0;
 	struct buffer_head map_bh;
 	unsigned long first_logical_block = 0;
-	gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
+	gfp_t gfp = readahead_gfp_mask(mapping);

 	map_bh.b_state = 0;