
Commit 4b3073e1 authored by Russell King

MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself



On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies.  We do this via make_coherent() by making the pages
uncacheable.

This used to work fine until we allowed highmem with highpte: the page
table is now mapped on demand (and unmapped again afterwards), so it is
not available for modification via update_mmu_cache().
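
For context, a minimal sketch of why the pointer matters under
CONFIG_HIGHPTE (illustrative only, not a quote from any caller):
pte_offset_map() hides a kmap_atomic() of the highmem pte page, so the
resulting pointer is valid only until pte_unmap(), and only the
faulting path holds it:

  pte_t *ptep = pte_offset_map(pmd, address);
  /* ... fault handling installs or updates the entry ... */
  update_mmu_cache(vma, address, ptep);   /* ptep is still mapped here */
  pte_unmap(ptep);

Handing the architecture code that mapped pointer lets it read or
modify the entry; a plain pte_t value allows neither.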

Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():

  On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
  to construct a pointer to the pte again.  Passing a pte_t * is much
  more elegant.  Maybe we might even replace the pte argument with the
  pte_t?
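
To make that concrete, here is a hedged sketch of the walk an
implementation otherwise has to repeat (a simplified form of what a
port does inside __update_tlb(); real code must also cope with huge
pages):

  pgd_t *pgd = pgd_offset(vma->vm_mm, address);
  pud_t *pud = pud_offset(pgd, address);
  pmd_t *pmd = pmd_offset(pud, address);
  pte_t *ptep = pte_offset_map(pmd, address);  /* the pointer we already had */

Passing ptep in from the caller makes the whole walk redundant.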

Ben Herrenschmidt would also like the pte pointer for PowerPC:

  Passing the ptep in there is exactly what I want.  I want that
  -instead- of the PTE value, because I have an issue in some ppc
  cases, for I$/D$ coherency, where set_pte_at() may decide to mask
  out the _PAGE_EXEC.
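
A hedged illustration of Ben's case (page_icache_clean() and
pte_clear_exec() are hypothetical helpers, named only for this sketch):
an architecture's set_pte_at() may install something different from the
value it was handed, so the pte_t value the generic code still holds is
stale, and only *ptep reflects what was really written:

  static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte)
  {
          /* hypothetical: withhold exec permission until the icache is
           * known to be coherent with the dcache for this page */
          if (pte_present(pte) && !page_icache_clean(pte_page(pte)))
                  pte = pte_clear_exec(pte);  /* masks out _PAGE_EXEC */
          set_pte(ptep, pte);
  }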

So, pass the mapped page table pointer into update_mmu_cache(), drop
the PTE value argument, and update all implementations and call sites
to suit.
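
The shape of the change, sketched rather than quoted from any one file
(entry and ptep are illustrative local names):

  /* Call sites: pass the mapped pointer instead of the value. */
  update_mmu_cache(vma, address, entry);  /* old: pte_t value    */
  update_mmu_cache(vma, address, ptep);   /* new: pte_t *pointer */

  /* Implementations that need the value simply dereference it;
   * this is safe because the pte lock is held and ptep is mapped. */
  void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
                        pte_t *ptep)
  {
          pte_t pte = *ptep;
          /* ... architecture specific TLB/cache work ... */
  }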

Includes a fix from Stephen Rothwell:

  sparc: fix fallout from update_mmu_cache API change

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
parent ed42acae
+3 −3
@@ -88,12 +88,12 @@ changes occur:
 	This is used primarily during fault processing.
 
 5) void update_mmu_cache(struct vm_area_struct *vma,
-			 unsigned long address, pte_t pte)
+			 unsigned long address, pte_t *ptep)
 
 	At the end of every page fault, this routine is invoked to
 	tell the architecture specific code that a translation
-	described by "pte" now exists at virtual address "address"
-	for address space "vma->vm_mm", in the software page tables.
+	now exists at virtual address "address" for address space
+	"vma->vm_mm", in the software page tables.
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
+1 −1
@@ -329,7 +329,7 @@ extern pgd_t swapper_pg_dir[1024];
  * tables contain all the necessary information.
  */
 extern inline void update_mmu_cache(struct vm_area_struct * vma,
-	unsigned long address, pte_t pte)
+	unsigned long address, pte_t *ptep)
 {
 }

+2 −1
@@ -529,7 +529,8 @@ extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
  * cache entries for the kernels virtual memory range are written
  * back to the page.
  */
-extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte);
+extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
+	pte_t *ptep);
 
 #endif

+3 −2
@@ -149,9 +149,10 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma, unsigne
  *
  * Note that the pte lock will be held.
  */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
+void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
+	pte_t *ptep)
 {
-	unsigned long pfn = pte_pfn(pte);
+	unsigned long pfn = pte_pfn(*ptep);
 	struct address_space *mapping;
 	struct page *page;
 
+1 −1
@@ -325,7 +325,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)

 struct vm_area_struct;
 extern void update_mmu_cache(struct vm_area_struct * vma,
-			     unsigned long address, pte_t pte);
+			     unsigned long address, pte_t *ptep);
 
 /*
  * Encode and decode a swap entry