
Commit b93b0163 authored by Matthew Wilcox, committed by Linus Torvalds

page cache: use xa_lock

Remove the address_space ->tree_lock and use the xa_lock newly added to
the radix_tree_root.  Rename the address_space ->page_tree to ->i_pages,
since we don't really care that it's a tree.

[willy@infradead.org: fix nds32, fs/dax.c]
  Link: http://lkml.kernel.org/r/20180406145415.GB20605@bombadil.infradead.org
  Link: http://lkml.kernel.org/r/20180313132639.17387-9-willy@infradead.org


Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Jeff Layton <jlayton@redhat.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f6bb2a2c
+1 −1
@@ -262,7 +262,7 @@ When oom event notifier is registered, event will be delivered.
2.6 Locking

   lock_page_cgroup()/unlock_page_cgroup() should not be called under
-  mapping->tree_lock.
+  the i_pages lock.

   Other lock order is following:
   PG_locked.
+7 −7
@@ -90,7 +90,7 @@ Steps:

1. Lock the page to be migrated

-2. Insure that writeback is complete.
+2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses to
   this (not yet uptodate) page immediately lock while the move is in progress.
@@ -100,8 +100,8 @@ Steps:
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock.

-5. The radix tree lock is taken. This will cause all processes trying
-   to access the page via the mapping to block on the radix tree spinlock.
+5. The i_pages lock is taken. This will cause all processes trying
+   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain
   otherwise we know that we are the only one referencing this page.
@@ -114,12 +114,12 @@ Steps:

9. The radix tree is changed to point to the new page.

-10. The reference count of the old page is dropped because the radix tree
+10. The reference count of the old page is dropped because the address space
     reference is gone. A reference to the new page is established because
-    the new page is referenced to by the radix tree.
+    the new page is referenced by the address space.

-11. The radix tree lock is dropped. With that lookups in the mapping
-    become possible again. Processes will move from spinning on the tree_lock
+11. The i_pages lock is dropped. With that lookups in the mapping
+    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.
+2 −4
@@ -318,10 +318,8 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
extern void flush_kernel_dcache_page(struct page *);

-#define flush_dcache_mmap_lock(mapping) \
-	spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
-	spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
+#define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)

#define flush_icache_user_range(vma,page,addr,len) \
	flush_dcache_page(page)
+2 −2
@@ -34,8 +34,8 @@ void flush_anon_page(struct vm_area_struct *vma,
void flush_kernel_dcache_page(struct page *page);
void flush_icache_range(unsigned long start, unsigned long end);
void flush_icache_page(struct vm_area_struct *vma, struct page *page);
-#define flush_dcache_mmap_lock(mapping)   spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping)   xa_lock_irq(&(mapping)->i_pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&(mapping)->i_pages)

#else
#include <asm-generic/cacheflush.h>
+2 −4
@@ -46,9 +46,7 @@ extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
extern void flush_dcache_range(unsigned long start, unsigned long end);
extern void invalidate_dcache_range(unsigned long start, unsigned long end);

-#define flush_dcache_mmap_lock(mapping) \
-	spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
-	spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
+#define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)

#endif /* _ASM_NIOS2_CACHEFLUSH_H */