
Commit e76eebee authored by Yu Chao, committed by Jaegeuk Kim

f2fs: optimize fs_lock for better performance



There is a performance problem: when all sbi->fs_lock mutexes are held, every
subsequent thread may read the same next_lock value from sbi->next_lock_num in
mutex_lock_op() and then wait on the same lock (fs_lock[next_lock]), which
degrades performance.
Move the sbi->next_lock_num++ increment to before taking the lock, so that when
all sbi->fs_lock mutexes are held, waiting threads are spread evenly across the
locks.

v1-->v2:
	Drop the needless spin_lock as Jaegeuk suggested.

Suggested-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Signed-off-by: Yu Chao <chao2.yu@samsung.com>
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
parent 4a10c2ac
+2 −2
@@ -544,15 +544,15 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
 
 static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
 {
-	unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
+	unsigned char next_lock;
 	int i = 0;
 
 	for (; i < NR_GLOBAL_LOCKS; i++)
 		if (mutex_trylock(&sbi->fs_lock[i]))
 			return i;
 
+	next_lock = sbi->next_lock_num++ % NR_GLOBAL_LOCKS;
 	mutex_lock(&sbi->fs_lock[next_lock]);
-	sbi->next_lock_num++;
 	return next_lock;
 }