
Commit 9f8bdb3f authored by Hugh Dickins, committed by Linus Torvalds

mm: make swapoff more robust against soft dirty



Both s390 and powerpc have hit the issue of swapoff hanging when their
CONFIG_HAVE_ARCH_SOFT_DIRTY and CONFIG_MEM_SOFT_DIRTY ifdefs were not
quite as x86_64 had them.  I think it would be much clearer if
HAVE_ARCH_SOFT_DIRTY were just a Kconfig option set by architectures to
determine whether the MEM_SOFT_DIRTY option should be offered, and the
actual code depended upon CONFIG_MEM_SOFT_DIRTY alone.

But I won't embark on that change myself: instead, make swapoff more
robust by using pte_swp_clear_soft_dirty() on each pte it encounters,
without an explicit #ifdef CONFIG_MEM_SOFT_DIRTY.  That call is a no-op,
whether the bit in question is defined as 0 or the asm-generic fallback
is used, unless soft dirty is fully turned on.
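
For illustration, a rough sketch of why the unconditional call is safe
(paraphrasing the asm-generic fallback, not quoting the exact kernel
sources): when an architecture provides no soft-dirty support,
pte_swp_clear_soft_dirty() reduces to an identity function, so the
renamed helper degenerates into a plain pte_same() comparison.

/* Sketch of the asm-generic fallback, assuming no arch soft-dirty support: */
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
	return pte;	/* identity: no soft-dirty bit to clear */
}

/* The comparison swapoff now relies on (as renamed by this patch): */
static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
{
	return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
}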

Why "maybe" in maybe_same_pte()? Rename it pte_same_as_swp().

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 88f306b6
mm/swapfile.c  +4 −14

@@ -1111,19 +1111,9 @@ unsigned int count_swap_pages(int type, int free)
 }
 #endif /* CONFIG_HIBERNATION */
 
-static inline int maybe_same_pte(pte_t pte, pte_t swp_pte)
+static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
 {
-#ifdef CONFIG_MEM_SOFT_DIRTY
-	/*
-	 * When pte keeps soft dirty bit the pte generated
-	 * from swap entry does not has it, still it's same
-	 * pte from logical point of view.
-	 */
-	pte_t swp_pte_dirty = pte_swp_mksoft_dirty(swp_pte);
-	return pte_same(pte, swp_pte) || pte_same(pte, swp_pte_dirty);
-#else
-	return pte_same(pte, swp_pte);
-#endif
+	return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
 }
 
 /*
@@ -1152,7 +1142,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
-	if (unlikely(!maybe_same_pte(*pte, swp_entry_to_pte(entry)))) {
+	if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
 		mem_cgroup_cancel_charge(page, memcg, false);
 		ret = 0;
 		goto out;
@@ -1210,7 +1200,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		 * swapoff spends a _lot_ of time in this loop!
 		 * Test inline before going to call unuse_pte.
 		 */
-		if (unlikely(maybe_same_pte(*pte, swp_pte))) {
+		if (unlikely(pte_same_as_swp(*pte, swp_pte))) {
 			pte_unmap(pte);
 			ret = unuse_pte(vma, pmd, addr, entry, page);
 			if (ret)