
Commit b79bc0a0 authored by Hugh Dickins, committed by Linus Torvalds

ksm: enable KSM page migration



Migration of KSM pages is now safe: remove the PageKsm restrictions from
mempolicy.c and migrate.c.

But keep PageKsm out of __unmap_and_move()'s anon_vma contortions, which
are irrelevant to KSM: it looks as if that code was preventing hotremove
migration of KSM pages, unless they happened to be in swapcache.

There is some question as to whether enforcing a NUMA mempolicy migration
ought to migrate KSM pages, mapped into entirely unrelated processes; but
moving page_mapcount > 1 is only permitted with MPOL_MF_MOVE_ALL anyway,
and it seems reasonable to assume that you wouldn't set MADV_MERGEABLE on
any area where this is a worry.
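
To make that gate concrete, here is a minimal userspace sketch, not part of this commit: it marks an area MADV_MERGEABLE and then asks mbind() to migrate it. Plain MPOL_MF_MOVE skips pages with page_mapcount > 1 (such as merged KSM pages); MPOL_MF_MOVE_ALL also moves those, and requires CAP_SYS_NICE. The target node 1 is an assumption; link with -lnuma.

	#define _GNU_SOURCE
	#include <numaif.h>		/* mbind(), MPOL_*; link with -lnuma */
	#include <sys/mman.h>		/* mmap(), madvise(), MADV_MERGEABLE */
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		size_t len = 16 * 4096;
		char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (area == MAP_FAILED)
			return 1;

		/* Let ksmd merge these pages: identical content, MADV_MERGEABLE. */
		madvise(area, len, MADV_MERGEABLE);
		memset(area, 0xaa, len);

		unsigned long nodemask = 1UL << 1;	/* node 1: an assumption */

		/*
		 * MPOL_MF_MOVE leaves pages with page_mapcount > 1 (e.g. merged
		 * KSM pages) where they are; MPOL_MF_MOVE_ALL also migrates
		 * those, and therefore requires CAP_SYS_NICE.
		 */
		if (mbind(area, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask),
			  MPOL_MF_MOVE_ALL))
			perror("mbind(MPOL_MF_MOVE_ALL)");
		return 0;
	}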

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Petr Holasek <pholasek@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4146d2d6
mm/mempolicy.c +1 −2
@@ -496,9 +496,8 @@ static int check_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		/*
 		 * vm_normal_page() filters out zero pages, but there might
 		 * still be PageReserved pages to skip, perhaps in a VDSO.
-		 * And we cannot move PageKsm pages sensibly or safely yet.
 		 */
-		if (PageReserved(page) || PageKsm(page))
+		if (PageReserved(page))
 			continue;
 		nid = page_to_nid(page);
 		if (node_isset(nid, *nodes) == !!(flags & MPOL_MF_INVERT))
mm/migrate.c +3 −18
@@ -731,20 +731,6 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		lock_page(page);
 	}
 
-	/*
-	 * Only memory hotplug's offline_pages() caller has locked out KSM,
-	 * and can safely migrate a KSM page.  The other cases have skipped
-	 * PageKsm along with PageReserved - but it is only now when we have
-	 * the page lock that we can be certain it will not go KSM beneath us
-	 * (KSM will not upgrade a page from PageAnon to PageKsm when it sees
-	 * its pagecount raised, but only here do we take the page lock which
-	 * serializes that).
-	 */
-	if (PageKsm(page) && !offlining) {
-		rc = -EBUSY;
-		goto unlock;
-	}
-
 	/* charge against new page */
 	mem_cgroup_prepare_migration(page, newpage, &mem);
 
@@ -771,7 +757,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	 * File Caches may use write_page() or lock_page() in migration, then,
 	 * just care Anon page here.
 	 */
-	if (PageAnon(page)) {
+	if (PageAnon(page) && !PageKsm(page)) {
 		/*
 		 * Only page_lock_anon_vma_read() understands the subtleties of
 		 * getting a hold on an anon_vma from outside one of its mms.
@@ -851,7 +837,6 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	mem_cgroup_end_migration(mem, page, newpage,
 				 (rc == MIGRATEPAGE_SUCCESS ||
 				  rc == MIGRATEPAGE_BALLOON_SUCCESS));
-unlock:
 	unlock_page(page);
 out:
 	return rc;
@@ -1155,7 +1140,7 @@ static int do_move_page_to_node_array(struct mm_struct *mm,
 			goto set_status;
 
 		/* Use PageReserved to check for zero page */
-		if (PageReserved(page) || PageKsm(page))
+		if (PageReserved(page))
 			goto put_and_set;
 
 		pp->page = page;
@@ -1317,7 +1302,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 
 		err = -ENOENT;
 		/* Use PageReserved to check for zero page */
-		if (!page || PageReserved(page) || PageKsm(page))
+		if (!page || PageReserved(page))
 			goto set_status;
 
 		err = page_to_nid(page);
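
The two hunks above sit on the move_pages(2) path: a call with a nodes array ends in do_move_page_to_node_array(), while the nodes == NULL query form ends in do_pages_stat_array(). A minimal sketch of both forms, not part of this commit; node 1 as destination is an assumption, and the program must be linked with -lnuma.

	#include <numaif.h>		/* move_pages(); link with -lnuma */
	#include <sys/mman.h>
	#include <stdio.h>

	int main(void)
	{
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		p[0] = 1;			/* fault the page in */

		void *pages[1] = { p };
		int node = 1;			/* destination node: an assumption */
		int status[1];

		/* nodes != NULL: migrate (the do_move_page_to_node_array() path). */
		if (move_pages(0 /* self */, 1, pages, &node, status, MPOL_MF_MOVE))
			perror("move_pages");
		else
			printf("moved: status = %d\n", status[0]);

		/* nodes == NULL: query only (the do_pages_stat_array() path). */
		if (!move_pages(0, 1, pages, NULL, status, 0))
			printf("page is on node %d\n", status[0]);
		return 0;
	}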