
Commit fe1668ae authored by Kenneth W Chen, committed by Linus Torvalds

[PATCH] enforce proper tlb flush in unmap_hugepage_range



Hugh spotted that in unmap_hugepage_range() the hugetlb page is freed back to
the global pool before any TLB flush is performed.  This potentially allows
another thread to exploit a free-alloc race: the page can be reallocated and
written to while other CPUs still hold stale TLB entries mapping it at the old
virtual address.

The generic TLB gather code is unsuitable for use by hugetlb, so I open-coded
a page gathering list and delayed the put_page() until after the TLB flush is
performed (a condensed sketch of the resulting ordering follows the diff).

Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: William Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent e80ee884
mm/hugetlb.c +7 −1
@@ -364,6 +364,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 	pte_t *ptep;
 	pte_t pte;
 	struct page *page;
+	struct page *tmp;
+	LIST_HEAD(page_list);
 
 	WARN_ON(!is_vm_hugetlb_page(vma));
 	BUG_ON(start & ~HPAGE_MASK);
@@ -384,12 +386,16 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			continue;
 
 		page = pte_page(pte);
-		put_page(page);
+		list_add(&page->lru, &page_list);
 		add_mm_counter(mm, file_rss, (int) -(HPAGE_SIZE / PAGE_SIZE));
 	}
 
 	spin_unlock(&mm->page_table_lock);
 	flush_tlb_range(vma, start, end);
+	list_for_each_entry_safe(page, tmp, &page_list, lru) {
+		list_del(&page->lru);
+		put_page(page);
+	}
 }
 
 static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
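
Read as a whole, the patched function follows a simple gather-then-flush
ordering.  The condensed sketch below illustrates that ordering; it is a
simplification of the patched unmap_hugepage_range() above, not a drop-in
replacement: the PTE-walk details are abbreviated, and the 2.6-era locking
via mm->page_table_lock is assumed.

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/list.h>

/* Sketch: unmap a hugepage range without racing the global pool. */
static void sketch_unmap_hugepage_range(struct vm_area_struct *vma,
					unsigned long start, unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;
	struct page *page, *tmp;
	unsigned long address;
	pte_t *ptep;
	pte_t pte;
	LIST_HEAD(page_list);		/* open-coded gathering list */

	spin_lock(&mm->page_table_lock);
	for (address = start; address < end; address += HPAGE_SIZE) {
		ptep = huge_pte_offset(mm, address);
		if (!ptep)
			continue;
		pte = ptep_get_and_clear(mm, address, ptep);
		if (pte_none(pte))
			continue;
		/*
		 * Do NOT put_page() here: the page would go back to the
		 * global pool while other CPUs may still hold stale TLB
		 * entries mapping it.  Gather it on the local list instead.
		 */
		list_add(&pte_page(pte)->lru, &page_list);
	}
	spin_unlock(&mm->page_table_lock);

	flush_tlb_range(vma, start, end);	/* stale translations gone */

	/*
	 * Only now is it safe to drop the references and let the pages
	 * return to the global hugepage pool.
	 */
	list_for_each_entry_safe(page, tmp, &page_list, lru) {
		list_del(&page->lru);
		put_page(page);
	}
}

The key design point is the placement of flush_tlb_range() between the page
table walk and the put_page() loop: the reference count on each gathered page
keeps it out of the allocator until no CPU can reach it through a stale
translation.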