
Commit 29bed634 authored by Jaegeuk Kim

f2fs: updates on v4.16-rc1



Pull f2fs updates from Jaegeuk Kim:
 "In this round, we've added support for some generic features such
  as cgroup writeback, block reservation, fscrypt_ops linking,
  write-hint delivery, and some new ioctls. We've also fixed some
  corner cases in power-cut recovery and some subtle deadlocks.

  Enhancements:
   - bitmap operations to handle NAT blocks
   - readahead to improve readdir speed
   - switch to use fscrypt_*
   - apply write hints for direct IO
   - add reserve_root=%u,resuid=%u,resgid=%u to reserve blocks for root/uid/gid
   - modify b_avail and b_free to consider root reserved blocks
   - support cgroup writeback
   - support FIEMAP_FLAG_XATTR for fiemap
   - add F2FS_IOC_PRECACHE_EXTENTS to pre-cache extents
   - add F2FS_IOC_{GET/SET}_PIN_FILE to pin LBAs for data blocks
   - support inode creation time

  Bug fixes:
   - sysfile-based quota operations
   - memory footprint accounting
   - allow writing data in the partial preallocation case
   - fix a deadlock case on fallocate
   - fix handling of fill_super errors
   - fix missing inode updates of an fsync'ed file
   - recover a renamed file which was fsync'ed beforehand
   - drop inmemory pages in a corner error case
   - keep last_disk_size correct
   - recover missing i_inline flags during roll-forward

  Various clean-up patches were added as well"

Cherry-pick from origin/upstream-f2fs-stable-linux-3.18.y:

b85b2040 f2fs: support inode creation time
c72ef472 f2fs: rebuild sit page from sit info in mem
f368156e f2fs: stop issuing discard if fs is readonly
be823aa3 f2fs: clean up duplicated assignment in init_discard_policy
67ebfab1 f2fs: use GFP_F2FS_ZERO for cleanup
2ebcc5d8 f2fs: allow to recover node blocks given updated checkpoint
2ea7d6f1 f2fs: recover some i_inline flags
2e1f3aa5 f2fs: correct removexattr behavior for null valued extended attribute
c8330bdc f2fs: drop page cache after fs shutdown
ef83735b f2fs: stop gc/discard thread after fs shutdown
cb472c71 f2fs: hanlde error case in f2fs_ioc_shutdown
b66b08d9 f2fs: split need_inplace_update
af69483e f2fs: fix to update last_disk_size correctly
dc4b6954 f2fs: kill F2FS_INLINE_XATTR_ADDRS for cleanup
13240c90 f2fs: clean up error path of fill_super
2e5c39e5 f2fs: avoid hungtask when GC encrypted block if io_bits is set
a14cced5 f2fs: allow quota to use reserved blocks
859cc837 f2fs: fix to drop all inmem pages correctly
8975d2b9 f2fs: speed up defragment on sparse file
06e30f6f f2fs: support F2FS_IOC_PRECACHE_EXTENTS
f3d5ace5 f2fs: add an ioctl to disable GC for specific file
11228b15 f2fs: prevent newly created inode from being dirtied incorrectly
623e2841 f2fs: support FIEMAP_FLAG_XATTR
2075b0e8 f2fs: fix to cover f2fs_inline_data_fiemap with inode_lock
0ea602b1 f2fs: check node page again in write end io
d503f1e0 f2fs: fix to caclulate required free section correctly
e72c4237 f2fs: handle newly created page when revoking inmem pages
177018aa f2fs: add resgid and resuid to reserve root blocks
6ad1915c f2fs: implement cgroup writeback support
1ee182bc f2fs: remove unused pend_list_tag
e732db71 f2fs: avoid high cpu usage in discard thread
647763fa f2fs: make local functions static
3f81bf52 f2fs: add reserved blocks for root user
cb4ea095 f2fs: check segment type in __f2fs_replace_block
2a6f5454 f2fs: update inode info to inode page for new file
db2e6b82 f2fs: show precise # of blocks that user/root can use
add96ed3 f2fs: clean up unneeded declaration
babfbc08 f2fs: continue to do direct IO if we only preallocate partial blocks
f9289908 f2fs: enable quota at remount from r to w
cfee78c6 f2fs: skip stop_checkpoint for user data writes
29f0297f f2fs: fix missing error number for xattr operation
1e85f5d7 f2fs: recover directory operations by fsync
f1b68a50 f2fs: return error during fill_super
e913b190 f2fs: fix an error case of missing update inode page
62b6a5f6 f2fs: fix potential hangtask in f2fs_trace_pid
54c06e52 f2fs: no need return value in restore summary process
e88ab669 f2fs: use unlikely for release case
24601828 f2fs: don't return value in truncate_data_blocks_range
15f92902 f2fs: clean up f2fs_map_blocks
8dfee8c4 f2fs: clean up hash codes
5d81acf5 f2fs: fix error handling in fill_super
3acc2f31 f2fs: spread f2fs_k{m,z}alloc
8c72d9db f2fs: inject fault to kvmalloc
fc42fc2c f2fs: inject fault to kzalloc
c821080d f2fs: remove a redundant conditional expression
612e589b f2fs: apply write hints to select the type of segment for direct write
63a9fc80 f2fs: switch to fscrypt_prepare_setattr()
16c5bfa1 f2fs: switch to fscrypt_prepare_lookup()
5998a21b f2fs: switch to fscrypt_prepare_rename()
dd5ca5fe f2fs: switch to fscrypt_prepare_link()
09c91079 f2fs: switch to fscrypt_file_open()
08cae724 f2fs: remove repeated f2fs_bug_on
7357b452 f2fs: remove an excess variable
6f2915eb f2fs: fix lock dependency in between dio_rwsem & i_mmap_sem
8c3b1444 f2fs: remove unused parameter
35b94063 f2fs: still write data if preallocate only partial blocks
b6453fcb f2fs: do not preallocate blocks which has wrong buffer
bee58ad4 f2fs: introduce sysfs readdir_ra to readahead inode block in readdir
5b10dbde f2fs: fix concurrent problem for updating free bitmap
2638ff75 f2fs: remove unneeded memory footprint accounting
c569c0b1 f2fs: no need to read nat block if nat_block_bitmap is set
5321a23c f2fs: reserve nid resource for quota sysfile

Change-Id: I5f95446f4d51232e82f48b716e5796d871e2d9ee
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
parent 0cecf331
+6 −0
@@ -186,3 +186,9 @@ Date: August 2017
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:
		 Controls sleep time of GC urgent mode

What:		/sys/fs/f2fs/<disk>/readdir_ra
Date:		November 2017
Contact:	"Sheng Yong" <shengyong1@huawei.com>
Description:
		 Controls readahead inode block in readdir.
+7 −3
@@ -238,12 +238,15 @@ static int __f2fs_write_meta_page(struct page *page,

	trace_f2fs_writepage(page, META);

+	if (unlikely(f2fs_cp_error(sbi))) {
+		dec_page_count(sbi, F2FS_DIRTY_META);
+		unlock_page(page);
+		return 0;
+	}
	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
		goto redirty_out;
	if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
		goto redirty_out;
-	if (unlikely(f2fs_cp_error(sbi)))
-		goto redirty_out;

	write_meta_page(sbi, page, io_type);
	dec_page_count(sbi, F2FS_DIRTY_META);
@@ -797,7 +800,7 @@ int get_valid_checkpoint(struct f2fs_sb_info *sbi)
	block_t cp_blk_no;
	int i;

-	sbi->ckpt = kzalloc(cp_blks * blk_size, GFP_KERNEL);
+	sbi->ckpt = f2fs_kzalloc(sbi, cp_blks * blk_size, GFP_KERNEL);
	if (!sbi->ckpt)
		return -ENOMEM;
	/*
@@ -1158,6 +1161,7 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)

	/* set this flag to activate crc|cp_ver for recovery */
	__set_ckpt_flags(ckpt, CP_CRC_RECOVERY_FLAG);
	__clear_ckpt_flags(ckpt, CP_NOCRC_RECOVERY_FLAG);

	spin_unlock_irqrestore(&sbi->cp_lock, flags);
}
+261 −47
@@ -112,8 +112,13 @@ static void f2fs_write_end_io(struct bio *bio, int err)

		if (unlikely(err)) {
			set_bit(AS_EIO, &page->mapping->flags);
			if (type == F2FS_WB_CP_DATA)
				f2fs_stop_checkpoint(sbi, true);
		}

		f2fs_bug_on(sbi, page->mapping == NODE_MAPPING(sbi) &&
					page->index != nid_of_node(page));

		dec_page_count(sbi, type);
		clear_cold_data(page);
		end_page_writeback(page);
@@ -169,6 +174,7 @@ static bool __same_bdev(struct f2fs_sb_info *sbi,
 * Low-level block read/write IO operations.
 */
static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
				struct writeback_control *wbc,
				int npages, bool is_read)
{
	struct bio *bio;
@@ -178,6 +184,8 @@ static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
	f2fs_target_device(sbi, blk_addr, bio);
	bio->bi_end_io = is_read ? f2fs_read_end_io : f2fs_write_end_io;
	bio->bi_private = is_read ? NULL : sbi;
	if (wbc)
		wbc_init_bio(wbc, bio);

	return bio;
}
@@ -373,7 +381,8 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
	f2fs_trace_ios(fio, 0);

	/* Allocate a new bio */
-	bio = __bio_alloc(fio->sbi, fio->new_blkaddr, 1, is_read_io(fio->op));
+	bio = __bio_alloc(fio->sbi, fio->new_blkaddr, fio->io_wbc,
+				1, is_read_io(fio->op));

	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
		bio_put(bio);
@@ -435,7 +444,7 @@ alloc_new:
			dec_page_count(sbi, WB_DATA_TYPE(bio_page));
			goto out_fail;
		}
-		io->bio = __bio_alloc(sbi, fio->new_blkaddr,
+		io->bio = __bio_alloc(sbi, fio->new_blkaddr, fio->io_wbc,
						BIO_MAX_PAGES, false);
		io->fio = *fio;
	}
@@ -445,6 +454,9 @@ alloc_new:
		goto alloc_new;
	}

	if (fio->io_wbc)
		wbc_account_io(fio->io_wbc, bio_page, PAGE_SIZE);

	io->last_block_in_bio = fio->new_blkaddr;
	f2fs_trace_ios(fio, 0);

@@ -783,7 +795,7 @@ got_it:
	return page;
}

-static int __allocate_data_block(struct dnode_of_data *dn)
+static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
	struct f2fs_summary sum;
@@ -808,7 +820,7 @@ alloc:
	set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);

	allocate_data_block(sbi, NULL, dn->data_blkaddr, &dn->data_blkaddr,
-					&sum, CURSEG_WARM_DATA, NULL, false);
+					&sum, seg_type, NULL, false);
	set_data_blkaddr(dn);

	/* update i_size */
@@ -828,18 +840,22 @@ static inline bool __force_buffered_io(struct inode *inode, int rw)
}

int f2fs_preallocate_blocks(struct inode *inode, loff_t pos,
-					size_t count, bool dio)
+				size_t count, bool direct_io)
{
	struct f2fs_map_blocks map;
	int flag;
	int err = 0;

	/* convert inline data for Direct I/O*/
-	if (dio) {
+	if (direct_io) {
		err = f2fs_convert_inline_inode(inode);
		if (err)
			return err;
	}

	if (is_inode_flag_set(inode, FI_NO_PREALLOC))
		return 0;

	map.m_lblk = F2FS_BLK_ALIGN(pos);
	map.m_len = F2FS_BYTES_TO_BLK(pos + count);
	if (map.m_len > map.m_lblk)
@@ -848,19 +864,34 @@ int f2fs_preallocate_blocks(struct inode *inode, loff_t pos,
		map.m_len = 0;

	map.m_next_pgofs = NULL;
	map.m_next_extent = NULL;
	map.m_seg_type = NO_CHECK_TYPE;

-	if (dio)
-		return f2fs_map_blocks(inode, &map, 1,
-			__force_buffered_io(inode, WRITE) ?
-				F2FS_GET_BLOCK_PRE_AIO :
-				F2FS_GET_BLOCK_PRE_DIO);
+	if (direct_io) {
+		/* map.m_seg_type = rw_hint_to_seg_type(iocb->ki_hint); */
+		map.m_seg_type = rw_hint_to_seg_type(WRITE_LIFE_NOT_SET);
+		flag = __force_buffered_io(inode, WRITE) ?
+					F2FS_GET_BLOCK_PRE_AIO :
+					F2FS_GET_BLOCK_PRE_DIO;
+		goto map_blocks;
+	}
	if (pos + count > MAX_INLINE_DATA(inode)) {
		err = f2fs_convert_inline_inode(inode);
		if (err)
			return err;
	}
-	if (!f2fs_has_inline_data(inode))
-		return f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_AIO);
+	if (f2fs_has_inline_data(inode))
+		return err;

	flag = F2FS_GET_BLOCK_PRE_AIO;

map_blocks:
	err = f2fs_map_blocks(inode, &map, 1, flag);
	if (map.m_len > 0 && err == -ENOSPC) {
		if (!direct_io)
			set_inode_flag(inode, FI_NO_PREALLOC);
		err = 0;
	}
	return err;
}

@@ -901,6 +932,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
	blkcnt_t prealloc;
	struct extent_info ei = {0,0,0};
	block_t blkaddr;
	unsigned int start_pgofs;

	if (!maxblocks)
		return 0;
@@ -916,6 +948,8 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
		map->m_pblk = ei.blk + pgofs - ei.fofs;
		map->m_len = min((pgoff_t)maxblocks, ei.fofs + ei.len - pgofs);
		map->m_flags = F2FS_MAP_MAPPED;
		if (map->m_next_extent)
			*map->m_next_extent = pgofs + map->m_len;
		goto out;
	}

@@ -934,10 +968,14 @@ next_dnode:
			if (map->m_next_pgofs)
				*map->m_next_pgofs =
					get_next_page_offset(&dn, pgofs);
			if (map->m_next_extent)
				*map->m_next_extent =
					get_next_page_offset(&dn, pgofs);
		}
		goto unlock_out;
	}

	start_pgofs = pgofs;
	prealloc = 0;
	last_ofs_in_node = ofs_in_node = dn.ofs_in_node;
	end_offset = ADDRS_PER_PAGE(dn.node_page, inode);
@@ -957,7 +995,8 @@ next_block:
					last_ofs_in_node = dn.ofs_in_node;
				}
			} else {
				err = __allocate_data_block(&dn);
				err = __allocate_data_block(&dn,
							map->m_seg_type);
				if (!err)
					set_inode_flag(inode, FI_APPEND_WRITE);
			}
@@ -970,16 +1009,22 @@ next_block:
				map->m_pblk = 0;
				goto sync_out;
			}
			if (flag == F2FS_GET_BLOCK_PRECACHE)
				goto sync_out;
			if (flag == F2FS_GET_BLOCK_FIEMAP &&
						blkaddr == NULL_ADDR) {
				if (map->m_next_pgofs)
					*map->m_next_pgofs = pgofs + 1;
				goto sync_out;
			}
			if (flag != F2FS_GET_BLOCK_FIEMAP ||
						blkaddr != NEW_ADDR)
			if (flag != F2FS_GET_BLOCK_FIEMAP) {
				/* for defragment case */
				if (map->m_next_pgofs)
					*map->m_next_pgofs = pgofs + 1;
				goto sync_out;
			}
		}
	}

	if (flag == F2FS_GET_BLOCK_PRE_AIO)
		goto skip;
@@ -1028,6 +1073,16 @@ skip:
	else if (dn.ofs_in_node < end_offset)
		goto next_block;

	if (flag == F2FS_GET_BLOCK_PRECACHE) {
		if (map->m_flags & F2FS_MAP_MAPPED) {
			unsigned int ofs = start_pgofs - map->m_lblk;

			f2fs_update_extent_cache_range(&dn,
				start_pgofs, map->m_pblk + ofs,
				map->m_len - ofs);
		}
	}

	f2fs_put_dnode(&dn);

	if (create) {
@@ -1037,6 +1092,17 @@ skip:
	goto next_dnode;

sync_out:
	if (flag == F2FS_GET_BLOCK_PRECACHE) {
		if (map->m_flags & F2FS_MAP_MAPPED) {
			unsigned int ofs = start_pgofs - map->m_lblk;

			f2fs_update_extent_cache_range(&dn,
				start_pgofs, map->m_pblk + ofs,
				map->m_len - ofs);
		}
		if (map->m_next_extent)
			*map->m_next_extent = pgofs + 1;
	}
	f2fs_put_dnode(&dn);
unlock_out:
	if (create) {
@@ -1050,7 +1116,7 @@ out:

static int __get_data_block(struct inode *inode, sector_t iblock,
			struct buffer_head *bh, int create, int flag,
			pgoff_t *next_pgofs)
			pgoff_t *next_pgofs, int seg_type)
{
	struct f2fs_map_blocks map;
	int err;
@@ -1058,6 +1124,8 @@ static int __get_data_block(struct inode *inode, sector_t iblock,
	map.m_lblk = iblock;
	map.m_len = bh->b_size >> inode->i_blkbits;
	map.m_next_pgofs = next_pgofs;
	map.m_next_extent = NULL;
	map.m_seg_type = seg_type;

	err = f2fs_map_blocks(inode, &map, create, flag);
	if (!err) {
@@ -1073,14 +1141,18 @@ static int get_data_block(struct inode *inode, sector_t iblock,
			pgoff_t *next_pgofs)
{
	return __get_data_block(inode, iblock, bh_result, create,
-							flag, next_pgofs);
+							flag, next_pgofs,
+							NO_CHECK_TYPE);
}

static int get_data_block_dio(struct inode *inode, sector_t iblock,
			struct buffer_head *bh_result, int create)
{
	return __get_data_block(inode, iblock, bh_result, create,
-						F2FS_GET_BLOCK_DEFAULT, NULL);
+						F2FS_GET_BLOCK_DEFAULT, NULL,
+						rw_hint_to_seg_type(
+							WRITE_LIFE_NOT_SET));
+						/* inode->i_write_hint)); */
}

static int get_data_block_bmap(struct inode *inode, sector_t iblock,
@@ -1091,7 +1163,8 @@ static int get_data_block_bmap(struct inode *inode, sector_t iblock,
		return -EFBIG;

	return __get_data_block(inode, iblock, bh_result, create,
-						F2FS_GET_BLOCK_BMAP, NULL);
+						F2FS_GET_BLOCK_BMAP, NULL,
+						NO_CHECK_TYPE);
}

static inline sector_t logical_to_blk(struct inode *inode, loff_t offset)
@@ -1104,6 +1177,68 @@ static inline loff_t blk_to_logical(struct inode *inode, sector_t blk)
	return (blk << inode->i_blkbits);
}

static int f2fs_xattr_fiemap(struct inode *inode,
				struct fiemap_extent_info *fieinfo)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	struct page *page;
	struct node_info ni;
	__u64 phys = 0, len;
	__u32 flags;
	nid_t xnid = F2FS_I(inode)->i_xattr_nid;
	int err = 0;

	if (f2fs_has_inline_xattr(inode)) {
		int offset;

		page = f2fs_grab_cache_page(NODE_MAPPING(sbi),
						inode->i_ino, false);
		if (!page)
			return -ENOMEM;

		get_node_info(sbi, inode->i_ino, &ni);

		phys = (__u64)blk_to_logical(inode, ni.blk_addr);
		offset = offsetof(struct f2fs_inode, i_addr) +
					sizeof(__le32) * (DEF_ADDRS_PER_INODE -
					get_inline_xattr_addrs(inode));

		phys += offset;
		len = inline_xattr_size(inode);

		f2fs_put_page(page, 1);

		flags = FIEMAP_EXTENT_DATA_INLINE | FIEMAP_EXTENT_NOT_ALIGNED;

		if (!xnid)
			flags |= FIEMAP_EXTENT_LAST;

		err = fiemap_fill_next_extent(fieinfo, 0, phys, len, flags);
		if (err || err == 1)
			return err;
	}

	if (xnid) {
		page = f2fs_grab_cache_page(NODE_MAPPING(sbi), xnid, false);
		if (!page)
			return -ENOMEM;

		get_node_info(sbi, xnid, &ni);

		phys = (__u64)blk_to_logical(inode, ni.blk_addr);
		len = inode->i_sb->s_blocksize;

		f2fs_put_page(page, 1);

		flags = FIEMAP_EXTENT_LAST;
	}

	if (phys)
		err = fiemap_fill_next_extent(fieinfo, 0, phys, len, flags);

	return (err < 0 ? err : 0);
}

int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
		u64 start, u64 len)
{
@@ -1114,18 +1249,29 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
	u32 flags = 0;
	int ret = 0;

-	ret = fiemap_check_flags(fieinfo, FIEMAP_FLAG_SYNC);
+	if (fieinfo->fi_flags & FIEMAP_FLAG_CACHE) {
+		ret = f2fs_precache_extents(inode);
+		if (ret)
+			return ret;
+	}
+
+	ret = fiemap_check_flags(fieinfo, FIEMAP_FLAG_SYNC | FIEMAP_FLAG_XATTR);
	if (ret)
		return ret;

+	inode_lock(inode);
+
+	if (fieinfo->fi_flags & FIEMAP_FLAG_XATTR) {
+		ret = f2fs_xattr_fiemap(inode, fieinfo);
+		goto out;
+	}

	if (f2fs_has_inline_data(inode)) {
		ret = f2fs_inline_data_fiemap(inode, fieinfo, start, len);
		if (ret != -EAGAIN)
-			return ret;
+			goto out;
	}

-	inode_lock(inode);

	if (logical_to_blk(inode, len) == 0)
		len = blk_to_logical(inode, 1);

@@ -1195,7 +1341,6 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
			unsigned nr_pages)
{
	struct bio *bio = NULL;
	unsigned page_idx;
	sector_t last_block_in_bio = 0;
	struct inode *inode = mapping->host;
	const unsigned blkbits = inode->i_blkbits;
@@ -1211,9 +1356,10 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
	map.m_len = 0;
	map.m_flags = 0;
	map.m_next_pgofs = NULL;
	map.m_next_extent = NULL;
	map.m_seg_type = NO_CHECK_TYPE;

-	for (page_idx = 0; nr_pages; page_idx++, nr_pages--) {
+	for (; nr_pages; nr_pages--) {
		if (pages) {
			page = list_last_entry(pages, struct page, lru);

@@ -1372,18 +1518,79 @@ retry_encrypt:
	return PTR_ERR(fio->encrypted_page);
}

-static inline bool need_inplace_update(struct f2fs_io_info *fio)
+static inline bool check_inplace_update_policy(struct inode *inode,
+				struct f2fs_io_info *fio)
{
-	struct inode *inode = fio->page->mapping->host;
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	unsigned int policy = SM_I(sbi)->ipu_policy;

	if (policy & (0x1 << F2FS_IPU_FORCE))
		return true;
	if (policy & (0x1 << F2FS_IPU_SSR) && need_SSR(sbi))
		return true;
	if (policy & (0x1 << F2FS_IPU_UTIL) &&
			utilization(sbi) > SM_I(sbi)->min_ipu_util)
		return true;
	if (policy & (0x1 << F2FS_IPU_SSR_UTIL) && need_SSR(sbi) &&
			utilization(sbi) > SM_I(sbi)->min_ipu_util)
		return true;

	/*
	 * IPU for rewrite async pages
	 */
	if (policy & (0x1 << F2FS_IPU_ASYNC) &&
			fio && fio->op == REQ_OP_WRITE &&
			!(fio->op_flags & REQ_SYNC) &&
			!f2fs_encrypted_inode(inode))
		return true;

	/* this is only set during fdatasync */
	if (policy & (0x1 << F2FS_IPU_FSYNC) &&
			is_inode_flag_set(inode, FI_NEED_IPU))
		return true;

-	if (S_ISDIR(inode->i_mode) || f2fs_is_atomic_file(inode))
	return false;
}

bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio)
{
	if (f2fs_is_pinned_file(inode))
		return true;

	/* if this is cold file, we should overwrite to avoid fragmentation */
	if (file_is_cold(inode))
		return true;

	return check_inplace_update_policy(inode, fio);
}

bool should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);

	if (test_opt(sbi, LFS))
		return true;
	if (S_ISDIR(inode->i_mode))
		return true;
	if (f2fs_is_atomic_file(inode))
		return true;
	if (fio) {
		if (is_cold_data(fio->page))
-		return false;
			return true;
		if (IS_ATOMIC_WRITTEN_PAGE(fio->page))
			return true;
	}
	return false;
}

static inline bool need_inplace_update(struct f2fs_io_info *fio)
{
	struct inode *inode = fio->page->mapping->host;

	if (should_update_outplace(inode, fio))
		return false;

-	return need_inplace_update_policy(inode, fio);
+	return should_update_inplace(inode, fio);
}

static inline bool valid_ipu_blkaddr(struct f2fs_io_info *fio)
@@ -1504,10 +1711,17 @@ static int __write_data_page(struct page *page, bool *submitted,
		.submitted = false,
		.need_lock = LOCK_RETRY,
		.io_type = io_type,
		.io_wbc = wbc,
	};

	trace_f2fs_writepage(page, DATA);

+	/* we should bypass data pages to proceed the kworkder jobs */
+	if (unlikely(f2fs_cp_error(sbi))) {
+		mapping_set_error(page->mapping, -EIO);
+		goto out;
+	}

	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
		goto redirty_out;

@@ -1532,12 +1746,6 @@ write:
			available_free_memory(sbi, BASE_CHECK))))
		goto redirty_out;

-	/* we should bypass data pages to proceed the kworkder jobs */
-	if (unlikely(f2fs_cp_error(sbi))) {
-		mapping_set_error(page->mapping, -EIO);
-		goto out;
-	}

	/* Dentry blocks are controlled by checkpoint */
	if (S_ISDIR(inode->i_mode)) {
		fio.need_lock = LOCK_DONE;
@@ -1567,10 +1775,14 @@ write:
		}
	}

	if (err) {
		file_set_keep_isize(inode);
	} else {
		down_write(&F2FS_I(inode)->i_sem);
		if (F2FS_I(inode)->last_disk_size < psize)
			F2FS_I(inode)->last_disk_size = psize;
		up_write(&F2FS_I(inode)->i_sem);
	}

done:
	if (err && err != -ENOENT)
@@ -1865,7 +2077,8 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
	 * we already allocated all the blocks, so we don't need to get
	 * the block addresses when there is no need to fill the page.
	 */
-	if (!f2fs_has_inline_data(inode) && len == PAGE_SIZE)
+	if (!f2fs_has_inline_data(inode) && len == PAGE_SIZE &&
+			!is_inode_flag_set(inode, FI_NO_PREALLOC))
		return 0;

	if (f2fs_has_inline_data(inode) ||
@@ -1933,7 +2146,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	struct page *page = NULL;
	pgoff_t index = ((unsigned long long) pos) >> PAGE_SHIFT;
-	bool need_balance = false;
+	bool need_balance = false, drop_atomic = false;
	block_t blkaddr = NULL_ADDR;
	int err = 0;

@@ -1952,6 +2165,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
	if (f2fs_is_atomic_file(inode) &&
			!available_free_memory(sbi, INMEM_PAGES)) {
		err = -ENOMEM;
		drop_atomic = true;
		goto fail;
	}

@@ -2032,7 +2246,7 @@ repeat:
fail:
	f2fs_put_page(page, 1);
	f2fs_write_failed(mapping, pos + len);
-	if (f2fs_is_atomic_file(inode))
+	if (drop_atomic)
		drop_inmem_pages_all(sbi);
	return err;
}
+2 −10
@@ -49,14 +49,7 @@ static void update_general_status(struct f2fs_sb_info *sbi)
	si->ndirty_imeta = get_pages(sbi, F2FS_DIRTY_IMETA);
	si->ndirty_dirs = sbi->ndirty_inode[DIR_INODE];
	si->ndirty_files = sbi->ndirty_inode[FILE_INODE];

-	si->nquota_files = 0;
-	if (f2fs_sb_has_quota_ino(sbi->sb)) {
-		for (i = 0; i < MAXQUOTAS; i++) {
-			if (f2fs_qf_ino(sbi->sb, i))
-				si->nquota_files++;
-		}
-	}
+	si->nquota_files = sbi->nquota_files;
	si->ndirty_all = sbi->ndirty_inode[DIRTY_META];
	si->inmem_pages = get_pages(sbi, F2FS_INMEM_PAGES);
	si->aw_cnt = atomic_read(&sbi->aw_cnt);
@@ -186,7 +179,6 @@ static void update_mem_info(struct f2fs_sb_info *sbi)
	si->base_mem += sizeof(struct f2fs_sb_info) + sbi->sb->s_blocksize;
	si->base_mem += 2 * sizeof(struct f2fs_inode_info);
	si->base_mem += sizeof(*sbi->ckpt);
	si->base_mem += sizeof(struct percpu_counter) * NR_COUNT_TYPE;

	/* build sm */
	si->base_mem += sizeof(struct f2fs_sm_info);
@@ -449,7 +441,7 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
	struct f2fs_stat_info *si;

-	si = kzalloc(sizeof(struct f2fs_stat_info), GFP_KERNEL);
+	si = f2fs_kzalloc(sbi, sizeof(struct f2fs_stat_info), GFP_KERNEL);
	if (!si)
		return -ENOMEM;

+6 −0
@@ -713,6 +713,8 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,

	f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);

	add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);

	if (f2fs_has_inline_dentry(dir))
		return f2fs_delete_inline_entry(dentry, page, dir, inode);

@@ -798,6 +800,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
	unsigned int bit_pos;
	struct f2fs_dir_entry *de = NULL;
	struct fscrypt_str de_name = FSTR_INIT(NULL, 0);
	struct f2fs_sb_info *sbi = F2FS_I_SB(d->inode);

	bit_pos = ((unsigned long)ctx->pos % d->max);

@@ -836,6 +839,9 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
					le32_to_cpu(de->ino), d_type))
			return 1;

		if (sbi->readdir_ra == 1)
			ra_node_page(sbi, le32_to_cpu(de->ino));

		bit_pos += GET_DENTRY_SLOTS(le16_to_cpu(de->name_len));
		ctx->pos = start_pos + bit_pos;
	}