
Commit 83bbd66b authored by Gao Xiang, committed by Greg Kroah-Hartman

staging: erofs: fix error handling when failed to read compressed data



commit b6391ac73400eff38377a4a7364bd3df5efb5178 upstream.

Complete read error handling paths for all three kinds of
compressed pages:

 1) For cache-managed pages, PG_uptodate will be checked since
    read_endio will unlock and SetPageUptodate for these pages;

 2) For inplaced pages, read_endio cannot SetPageUptodate directly
    since it should be used to mark the final decompressed data,
    PG_error will be set with page locked for IO error instead;

 3) For staging pages, PG_error is used, which is similar to
    what we do for inplaced pages.
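The error-aggregation pattern the three cases describe can be sketched in plain userspace C. Everything below (`enum page_kind`, the `struct page` stand-in, `check_compressed_pages`) is illustrative and not the kernel API; the point is that the loop records `-EIO` and keeps scanning instead of treating a failed page as usable:

```c
#include <errno.h>

/* Hypothetical stand-ins for the three kinds of compressed pages. */
enum page_kind { CACHE_MANAGED, INPLACED, STAGING };

struct page {
	enum page_kind kind;
	int uptodate;	/* like PG_uptodate: set by read_endio on success
			 * for cache-managed pages */
	int error;	/* like PG_error: set on IO failure for inplaced
			 * and staging pages */
};

/* Walk all compressed pages of a cluster and aggregate read errors,
 * mirroring how the fix accumulates `err` before decompressing. */
static int check_compressed_pages(const struct page *pages, int n)
{
	int err = 0;

	for (int i = 0; i < n; ++i) {
		const struct page *page = &pages[i];

		if (page->kind == CACHE_MANAGED) {
			/* case 1: success is signalled via PG_uptodate */
			if (!page->uptodate)
				err = -EIO;
			continue;
		}
		/* cases 2 and 3: PG_error marks a failed read */
		if (page->error)
			err = -EIO;
	}
	return err;	/* caller bails out before decompression if nonzero */
}
```

Only after the whole cluster scans clean does decompression proceed; a single failed page poisons the result with `-EIO`, which is the behavior the hunks below add.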

Fixes: 3883a79a ("staging: erofs: introduce VLE decompression support")
Cc: <stable@vger.kernel.org> # 4.19+
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 3a18eaba
+29 −13
@@ -885,6 +885,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 	overlapped = false;
 	compressed_pages = grp->compressed_pages;
 
+	err = 0;
 	for (i = 0; i < clusterpages; ++i) {
 		unsigned pagenr;
 
@@ -894,16 +895,19 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		DBG_BUGON(page == NULL);
 		DBG_BUGON(page->mapping == NULL);
 
-		if (z_erofs_is_stagingpage(page))
-			continue;
+		if (!z_erofs_is_stagingpage(page)) {
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
 			if (page->mapping == mngda) {
-			DBG_BUGON(!PageUptodate(page));
+				if (unlikely(!PageUptodate(page)))
+					err = -EIO;
 				continue;
 			}
 #endif
 
-		/* only non-head page could be reused as a compressed page */
+			/*
+			 * only if non-head page can be selected
+			 * for inplace decompression
+			 */
 			pagenr = z_erofs_onlinepage_index(page);
 
 			DBG_BUGON(pagenr >= nr_pages);
@@ -914,6 +918,16 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 			overlapped = true;
+		}
+
+		/* PG_error needs checking for inplaced and staging pages */
+		if (unlikely(PageError(page))) {
+			DBG_BUGON(PageUptodate(page));
+			err = -EIO;
+		}
 	}
 
+	if (unlikely(err))
+		goto out;
+
 	llen = (nr_pages << PAGE_SHIFT) - work->pageofs;
 
 	if (z_erofs_vle_workgrp_fmt(grp) == Z_EROFS_VLE_WORKGRP_FMT_PLAIN) {
@@ -1082,6 +1096,8 @@ static inline bool recover_managed_page(struct z_erofs_vle_workgroup *grp,
 		return true;
 
 	lock_page(page);
+	ClearPageError(page);
+
 	if (unlikely(!PagePrivate(page))) {
 		set_page_private(page, (unsigned long)grp);
 		SetPagePrivate(page);