
Commit 18863d3a authored by Minchan Kim, committed by Linus Torvalds

mm: remove SWAP_DIRTY in ttu

If we find a lazyfree page is dirty, try_to_unmap_one() can simply
SetPageSwapBacked() right there, as is done for a PG_mlocked page, and
return SWAP_FAIL, which is natural because the page is not swappable
right now, so that vmscan can activate it.  There is no point in
introducing a new return value SWAP_DIRTY in try_to_unmap() at the
moment.

Link: http://lkml.kernel.org/r/1489555493-14659-3-git-send-email-minchan@kernel.org

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent c24f386c
include/linux/rmap.h (+0 −1)
@@ -298,6 +298,5 @@ static inline int page_mkclean(struct page *page)
 #define SWAP_AGAIN	1
 #define SWAP_FAIL	2
 #define SWAP_MLOCK	3
-#define SWAP_DIRTY	4
 
 #endif	/* _LINUX_RMAP_H */
mm/rmap.c (+2 −2)
@@ -1436,7 +1436,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 * discarded. Remap the page to page table.
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = SWAP_DIRTY;
+				SetPageSwapBacked(page);
+				ret = SWAP_FAIL;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1506,7 +1507,6 @@ static int page_mapcount_is_zero(struct page *page)
  * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
  * SWAP_MLOCK	- page is mlocked.
- * SWAP_DIRTY	- page is dirty MADV_FREE page
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {
mm/vmscan.c (+0 −3)
@@ -1147,9 +1147,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (page_mapped(page)) {
 			switch (ret = try_to_unmap(page,
 				ttu_flags | TTU_BATCH_FLUSH)) {
-			case SWAP_DIRTY:
-				SetPageSwapBacked(page);
-				/* fall through */
 			case SWAP_FAIL:
 				nr_unmap_fail++;
 				goto activate_locked;