
Commit ed8ad838 authored by Jan Kara, committed by Theodore Ts'o

ext4: fix bh->b_state corruption



ext4 can update bh->b_state non-atomically in _ext4_get_block() and
ext4_da_get_block_prep(). Usually this is fine since the bh is just
temporary storage for mapping information on the stack, but in some
cases it can be a fully live bh attached to a page. In such a case a
non-atomic update of bh->b_state can race with an atomic update, which
then gets lost. Usually when we are mapping a bh and thus updating
bh->b_state non-atomically, nobody else touches the bh, so things work
out fine, but there is one case to worry about in particular:
ext4_finish_bio() uses BH_Uptodate_Lock on the first bh in the page to
synchronize handling of PageWriteback state. So when blocksize <
pagesize, ext4_finish_bio() can be atomically modifying bh->b_state of
a buffer that isn't itself under IO and thus can race e.g. with
delalloc trying to map that buffer. The result is that we can
mistakenly set / clear the BH_Uptodate_Lock bit, resulting in
corruption of the PageWriteback state or a missed unlock of
BH_Uptodate_Lock.

Fix the problem by always updating bh->b_state bits atomically.
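
The patch below does this with a cmpxchg() retry loop on bh->b_state.
Continuing the userspace sketch above (again with made-up names), the
same pattern looks roughly like this:

/*
 * Fold the mapping bits in with a compare-and-swap retry loop so that a
 * concurrent update of any other bit in the word is never lost.
 */
static void update_map_bits_atomic(_Atomic unsigned long *state,
				   unsigned long flags)
{
	unsigned long old = atomic_load(state);
	unsigned long new;

	do {
		new = (old & ~MAP_FLAG) | (flags & MAP_FLAG);
		/*
		 * On failure, atomic_compare_exchange_weak() reloads 'old'
		 * with the current value; recompute and retry.
		 */
	} while (!atomic_compare_exchange_weak(state, &old, new));
}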

CC: stable@vger.kernel.org
Reported-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
parent c906f38e
+30 −2
@@ -686,6 +686,34 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
 	return retval;
 }
 
+/*
+ * Update EXT4_MAP_FLAGS in bh->b_state. For buffer heads attached to pages
+ * we have to be careful as someone else may be manipulating b_state as well.
+ */
+static void ext4_update_bh_state(struct buffer_head *bh, unsigned long flags)
+{
+	unsigned long old_state;
+	unsigned long new_state;
+
+	flags &= EXT4_MAP_FLAGS;
+
+	/* Dummy buffer_head? Set non-atomically. */
+	if (!bh->b_page) {
+		bh->b_state = (bh->b_state & ~EXT4_MAP_FLAGS) | flags;
+		return;
+	}
+	/*
+	 * Someone else may be modifying b_state. Be careful! This is ugly but
+	 * once we get rid of using bh as a container for mapping information
+	 * to pass to / from get_block functions, this can go away.
+	 */
+	do {
+		old_state = READ_ONCE(bh->b_state);
+		new_state = (old_state & ~EXT4_MAP_FLAGS) | flags;
+	} while (unlikely(
+		 cmpxchg(&bh->b_state, old_state, new_state) != old_state));
+}
+
 /* Maximum number of blocks we map for direct IO at once. */
 #define DIO_MAX_BLOCKS 4096
 
@@ -722,7 +750,7 @@ static int _ext4_get_block(struct inode *inode, sector_t iblock,
 		ext4_io_end_t *io_end = ext4_inode_aio(inode);
 
 		map_bh(bh, inode->i_sb, map.m_pblk);
-		bh->b_state = (bh->b_state & ~EXT4_MAP_FLAGS) | map.m_flags;
+		ext4_update_bh_state(bh, map.m_flags);
 		if (io_end && io_end->flag & EXT4_IO_END_UNWRITTEN)
 			set_buffer_defer_completion(bh);
 		bh->b_size = inode->i_sb->s_blocksize * map.m_len;
@@ -1685,7 +1713,7 @@ int ext4_da_get_block_prep(struct inode *inode, sector_t iblock,
 		return ret;
 
 	map_bh(bh, inode->i_sb, map.m_pblk);
-	bh->b_state = (bh->b_state & ~EXT4_MAP_FLAGS) | map.m_flags;
+	ext4_update_bh_state(bh, map.m_flags);
 
 	if (buffer_unwritten(bh)) {
 		/* A delayed write to unwritten bh should be marked