
Commit 62dbec78 authored by David S. Miller

[SPARC64] mm: Do not flush TLB mm in tlb_finish_mmu()



It isn't needed any longer, as noted by Hugh Dickins.

We still need the flush routines, due to the one remaining
call site in hugetlb_prefault_arch_hook().  That can be
eliminated at some later point, however.

Signed-off-by: David S. Miller <davem@davemloft.net>
parent 4c85ce52
+17 −31
@@ -839,25 +839,12 @@ void smp_flush_tlb_all(void)
  *    questionable (in theory the big win for threads is the massive sharing of
  *    address space state across processors).
  */
-void smp_flush_tlb_mm(struct mm_struct *mm)
-{
-        /*
-         * This code is called from two places, dup_mmap and exit_mmap. In the
-         * former case, we really need a flush. In the later case, the callers
-         * are single threaded exec_mmap (really need a flush), multithreaded
-         * exec_mmap case (do not need to flush, since the caller gets a new
-         * context via activate_mm), and all other callers of mmput() whence
-         * the flush can be optimized since the associated threads are dead and
-         * the mm is being torn down (__exit_mm and other mmput callers) or the
-         * owning thread is dissociating itself from the mm. The
-         * (atomic_read(&mm->mm_users) == 0) check ensures real work is done
-         * for single thread exec and dup_mmap cases. An alternate check might
-         * have been (current->mm != mm).
-         *                                              Kanoj Sarcar
-         */
-        if (atomic_read(&mm->mm_users) == 0)
-                return;
-
-	{
-		u32 ctx = CTX_HWBITS(mm->context);
-		int cpu = get_cpu();
+/* This currently is only used by the hugetlb arch pre-fault
+ * hook on UltraSPARC-III+ and later when changing the pagesize
+ * bits of the context register for an address space.
+ */
+void smp_flush_tlb_mm(struct mm_struct *mm)
+{
+	u32 ctx = CTX_HWBITS(mm->context);
+	int cpu = get_cpu();
 
@@ -876,7 +863,6 @@ void smp_flush_tlb_mm(struct mm_struct *mm)
 
-		put_cpu();
-	}
+	put_cpu();
 }
 
 void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long *vaddrs)
 {
+2 −4
@@ -78,11 +78,9 @@ static inline void tlb_finish_mmu(struct mmu_gather *mp, unsigned long start, un
 {
 	tlb_flush_mmu(mp);
 
-	if (mp->fullmm) {
-		if (CTX_VALID(mp->mm->context))
-			do_flush_tlb_mm(mp->mm);
+	if (mp->fullmm)
 		mp->fullmm = 0;
-	} else
+	else
 		flush_tlb_pending();
 
 	/* keep the page table cache within bounds */