
Commit eb03aa00 authored by Gerald Schaefer, committed by Linus Torvalds

mm/hugetlb: improve locking in dissolve_free_huge_pages()

For every pfn aligned to minimum_order, dissolve_free_huge_pages() calls
dissolve_free_huge_page(), which takes the hugetlb spinlock even if the
page is not a hugepage at all, or is a hugepage that is in use.

Improve this by performing the PageHuge() and page_count() checks in
dissolve_free_huge_pages() before calling dissolve_free_huge_page().  In
dissolve_free_huge_page(), those checks must then be revalidated while
holding the spinlock, since the page state may have changed in the
meantime.

Link: http://lkml.kernel.org/r/20160926172811.94033-4-gerald.schaefer@de.ibm.com


Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Rui Teng <rui.teng@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 082d5b6b
+9 −3
@@ -1476,14 +1476,20 @@ static int dissolve_free_huge_page(struct page *page)
 int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
+	struct page *page;
 	int rc = 0;
 
 	if (!hugepages_supported())
 		return rc;
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
-		if (rc = dissolve_free_huge_page(pfn_to_page(pfn)))
-			break;
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
+		page = pfn_to_page(pfn);
+		if (PageHuge(page) && !page_count(page)) {
+			rc = dissolve_free_huge_page(page);
+			if (rc)
+				break;
+		}
+	}
 
 	return rc;
 }