
Commit 48040c3f authored by Laurent Dufour, committed by Gerrit - the friendly Code Review server

mm: flush TLB once pages are copied when SPF is on



Vinayak Menon reported that the following scenario results in
threads A and B of process 1 blocking on pthread_mutex_lock
forever after a few iterations.

CPU 1                   CPU 2                    CPU 3
Process 1,              Process 1,               Process 1,
Thread A                Thread B                 Thread C

while (1) {             while (1) {              while(1) {
pthread_mutex_lock(l)   pthread_mutex_lock(l)    fork
pthread_mutex_unlock(l) pthread_mutex_unlock(l)  }
}                       }

When copy_one_pte(), called from thread C, write-protects the parent
pte (of lock l), stale TLB entries with write permission can remain
on at least one of the CPUs. This creates a problem if thread A or B
then hits a write fault. Although dup_mmap() calls flush_tlb_mm()
after copy_page_range(), the speculative page fault handler does not
take mmap_sem, so it can proceed to fix a fault soon after CPU 3 does
ptep_set_wrprotect(). The CPU with the stale TLB entry can then still
modify old_page even after it has been copied to new_page by
wp_page_copy(), causing corruption.

Change-Id: Id9cb10d745f96fadec693ffad61c7d999d15bd81
Patch-mainline: linux-mm @ Wed, 16 Jan 2019 14:31:21
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
[vinmenon@codeaurora.org: fix the addr passed to flush_tlb_range]
Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
parent 5cd2a161
+10 −0
@@ -934,6 +934,7 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	spinlock_t *src_ptl, *dst_ptl;
 	int progress = 0;
 	int rss[NR_MM_COUNTERS];
+	unsigned long orig_addr = addr;
 	swp_entry_t entry = (swp_entry_t){0};
 
 again:
@@ -972,6 +973,15 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
 
 	arch_leave_lazy_mmu_mode();
+
+	/*
+	 * Prevent the page fault handler from copying the page while stale
+	 * TLB entries are not yet flushed.
+	 */
+	if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT) &&
+	    is_cow_mapping(vma->vm_flags))
+		flush_tlb_range(vma, orig_addr, end);
+
 	spin_unlock(src_ptl);
 	pte_unmap(orig_src_pte);
 	add_mm_rss_vec(dst_mm, rss);