
Commit 1c7037db authored by Benjamin Herrenschmidt, committed by Linus Torvalds

remove unused flush_tlb_pgtables



Nobody uses flush_tlb_pgtables anymore, this patch removes all remaining
traces of it from all archs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 22124c99
+2 −25
@@ -87,30 +87,7 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) void flush_tlb_pgtables(struct mm_struct *mm,
-			   unsigned long start, unsigned long end)
-
-   The software page tables for address space 'mm' for virtual
-   addresses in the range 'start' to 'end-1' are being torn down.
-
-   Some platforms cache the lowest level of the software page tables
-   in a linear virtually mapped array, to make TLB miss processing
-   more efficient.  On such platforms, since the TLB is caching the
-   software page table structure, it needs to be flushed when parts
-   of the software page table tree are unlinked/freed.
-
-   Sparc64 is one example of a platform which does this.
-
-   Usually, when munmap()'ing an area of user virtual address
-   space, the kernel leaves the page table parts around and just
-   marks the individual pte's as invalid.  However, if very large
-   portions of the address space are unmapped, the kernel frees up
-   those portions of the software page tables to prevent potential
-   excessive kernel memory usage caused by erratic mmap/mmunmap
-   sequences.  It is at these times that flush_tlb_pgtables will
-   be invoked.
-
-6) void update_mmu_cache(struct vm_area_struct *vma,
+5) void update_mmu_cache(struct vm_area_struct *vma,
 			 unsigned long address, pte_t pte)
 
	At the end of every page fault, this routine is invoked to
@@ -123,7 +100,7 @@ changes occur:
 	translations for software managed TLB configurations.
 	The sparc64 port currently does this.
 
-7) void tlb_migrate_finish(struct mm_struct *mm)
+6) void tlb_migrate_finish(struct mm_struct *mm)
 
 	This interface is called at the end of an explicit
 	process migration. This interface provides a hook
+0 −11
@@ -92,17 +92,6 @@ flush_tlb_other(struct mm_struct *mm)
 	if (*mmc) *mmc = 0;
 }
 
-/* Flush a specified range of user mapping page tables from TLB.
-   Although Alpha uses VPTE caches, this can be a nop, as Alpha does
-   not have finegrained tlb flushing, so it will flush VPTE stuff
-   during next flush_tlb_range.  */
-
-static inline void
-flush_tlb_pgtables(struct mm_struct *mm, unsigned long start,
-		   unsigned long end)
-{
-}
-
 #ifndef CONFIG_SMP
 /* Flush everything (kernel mapping may also have changed
    due to vmalloc/vfree).  */
+0 −5
@@ -463,11 +463,6 @@ extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
  */
 extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte);
 
-/*
- * ARM processors do not cache TLB tables in RAM.
- */
-#define flush_tlb_pgtables(mm,start,end)	do { } while (0)
-
 #endif
 
 #endif /* CONFIG_MMU */
+0 −7
@@ -19,7 +19,6 @@
  *  - flush_tlb_page(vma, vmaddr) flushes one page
  *  - flush_tlb_range(vma, start, end) flushes a range of pages
  *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
- *  - flush_tlb_pgtables(mm, start, end) flushes a range of page tables
  */
 extern void flush_tlb(void);
 extern void flush_tlb_all(void);
@@ -29,12 +28,6 @@ extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
 extern void __flush_tlb_page(unsigned long asid, unsigned long page);
 
-static inline void flush_tlb_pgtables(struct mm_struct *mm,
-				      unsigned long start, unsigned long end)
-{
-	/* Nothing to do */
-}
-
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 #endif /* __ASM_AVR32_TLBFLUSH_H */
+0 −6
@@ -53,10 +53,4 @@ static inline void flush_tlb_kernel_page(unsigned long addr)
 	BUG();
 }
 
-static inline void flush_tlb_pgtables(struct mm_struct *mm,
-				      unsigned long start, unsigned long end)
-{
-	BUG();
-}
-
 #endif