This project is mirrored from https://github.com/Professor-Berni/android_kernel_sony_msm8994.git.
  1. 30 Sep, 2021 1 commit
  2. 26 Aug, 2021 1 commit
  3. 11 Aug, 2021 1 commit
  4. 09 Aug, 2021 24 commits
    • Revert "list_lru: dynamically adjust node arrays" · 64759646
      threader authored
      This reverts commit b72bdd1bf94b9d538c53c95a85c67e5bc07b45aa.
    • Revert "binder: create userspace-to-binder-buffer copy function" · 80a5357c
      threader authored
      This reverts commit fb28e186b9faf19346dd18915cc9d1aab98c1264.
    • Revert "UPSTREAM: binder: add flag to clear buffer on txn complete" · 4830fce8
      threader authored
      This reverts commit 67d42f9d8a7c71e35f33fb4862aef67b0137f764.
    • mm: makefile: missed a 'new line' · 967b9f2b
      threader authored
    • mm: list_lru: fix almost infinite loop causing effective livelock · 34193aee
      Russell King authored
      
      
      I've seen a fair number of issues with kswapd and other processes
      appearing to get stuck in v3.12-rc.  Using sysrq-p many times seems to
      indicate that it gets stuck somewhere in list_lru_walk_node(), called
      from prune_icache_sb() and super_cache_scan().
      
      I never seem to be able to trigger a calltrace for functions above that
      point.
      
      So I decided to add the following to super_cache_scan():
      
          @@ -81,10 +81,14 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
                  inodes = list_lru_count_node(&sb->s_inode_lru, sc->nid);
                  dentries = list_lru_count_node(&sb->s_dentry_lru, sc->nid);
                  total_objects = dentries + inodes + fs_objects + 1;
          +printk("%s:%u: %s: dentries %lu inodes %lu total %lu\n", current->comm, current->pid, __func__, dentries, inodes, total_objects);
      
                  /* proportion the scan between the caches */
                  dentries = mult_frac(sc->nr_to_scan, dentries, total_objects);
                  inodes = mult_frac(sc->nr_to_scan, inodes, total_objects);
          +printk("%s:%u: %s: dentries %lu inodes %lu\n", current->comm, current->pid, __func__, dentries, inodes);
          +BUG_ON(dentries == 0);
          +BUG_ON(inodes == 0);
      
                  /*
                   * prune the dcache first as the icache is pinned by it, then
          @@ -99,7 +103,7 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
                          freed += sb->s_op->free_cached_objects(sb, fs_objects,
                                                                 sc->nid);
                  }
          -
          +printk("%s:%u: %s: dentries %lu inodes %lu freed %lu\n", current->comm, current->pid, __func__, dentries, inodes, freed);
                  drop_super(sb);
                  return freed;
           }
      
      and shortly thereafter, having applied some pressure, I got this:
      
          update-apt-xapi:1616: super_cache_scan: dentries 25632 inodes 2 total 25635
          update-apt-xapi:1616: super_cache_scan: dentries 1023 inodes 0
          ------------[ cut here ]------------
          Kernel BUG at c0101994 [verbose debug info unavailable]
          Internal error: Oops - BUG: 0 [#3] SMP ARM
          Modules linked in: fuse rfcomm bnep bluetooth hid_cypress
          CPU: 0 PID: 1616 Comm: update-apt-xapi Tainted: G      D      3.12.0-rc7+ #154
          task: daea1200 ti: c3bf8000 task.ti: c3bf8000
          PC is at super_cache_scan+0x1c0/0x278
          LR is at trace_hardirqs_on+0x14/0x18
          Process update-apt-xapi (pid: 1616, stack limit = 0xc3bf8240)
          ...
          Backtrace:
            (super_cache_scan) from [<c00cd69c>] (shrink_slab+0x254/0x4c8)
            (shrink_slab) from [<c00d09a0>] (try_to_free_pages+0x3a0/0x5e0)
            (try_to_free_pages) from [<c00c59cc>] (__alloc_pages_nodemask+0x5)
            (__alloc_pages_nodemask) from [<c00e07c0>] (__pte_alloc+0x2c/0x13)
            (__pte_alloc) from [<c00e3a70>] (handle_mm_fault+0x84c/0x914)
            (handle_mm_fault) from [<c001a4cc>] (do_page_fault+0x1f0/0x3bc)
            (do_page_fault) from [<c001a7b0>] (do_translation_fault+0xac/0xb8)
            (do_translation_fault) from [<c000840c>] (do_DataAbort+0x38/0xa0)
            (do_DataAbort) from [<c00133f8>] (__dabt_usr+0x38/0x40)
      
      Notice that we had a very low number of inodes, which were reduced to
      zero by mult_frac().
      
      Now, prune_icache_sb() calls list_lru_walk_node() passing that number of
      inodes (0) into that as the number of objects to scan:
      
          long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan,
                               int nid)
          {
                  LIST_HEAD(freeable);
                  long freed;
      
                  freed = list_lru_walk_node(&sb->s_inode_lru, nid, inode_lru_isolate,
                                                 &freeable, &nr_to_scan);
      
      which does:
      
          unsigned long
          list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate,
                             void *cb_arg, unsigned long *nr_to_walk)
          {
      
                  struct list_lru_node    *nlru = &lru->node[nid];
                  struct list_head *item, *n;
                  unsigned long isolated = 0;
      
                  spin_lock(&nlru->lock);
          restart:
                  list_for_each_safe(item, n, &nlru->list) {
                          enum lru_status ret;
      
                          /*
                           * decrement nr_to_walk first so that we don't livelock if we
                         * get stuck on large numbers of LRU_RETRY items
                           */
                          if (--(*nr_to_walk) == 0)
                                  break;
      
      So, if *nr_to_walk was zero when this function was entered, that means
      we're wanting to operate on (~0UL)+1 objects - which might as well be
      infinite.
      
      Clearly this is not correct behaviour.  If we think about the behaviour
      of this function when *nr_to_walk is 1, then clearly it's wrong - we
      decrement first and then test for zero - which results in us doing
      nothing at all.  A post-decrement would give the desired behaviour -
      we'd try to walk one object and one object only if *nr_to_walk were one.
      
      It also gives the correct behaviour for zero - we exit at this point.
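
      For reference, a minimal sketch of the corrected walk (hedged: this
      follows the mainline fix and Linus's note below, testing the counter
      before decrementing it, so zero exits immediately and the count can
      never underflow):

          list_for_each_safe(item, n, &nlru->list) {
                  enum lru_status ret;

                  /* test before decrement: 0 exits, and cannot wrap to ~0UL */
                  if (!*nr_to_walk)
                          break;
                  --*nr_to_walk;
                  ...
          }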
      
      Fixes: 5cedf721a7cd ("list_lru: fix broken LRU_RETRY behaviour")
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      [ Modified to make sure we never underflow the count: this function gets
        called in a loop, so the 0 -> ~0ul transition is dangerous  - Linus ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit c56b097af26cb11c1f49a4311ba538c825666fed)
    • list_lru: dynamically adjust node arrays · 7d970b63
      Glauber Costa authored
      
      
      We currently use a compile-time constant to size the node array for the
      list_lru structure.  Due to this, we don't need to allocate any memory at
      initialization time.  But as a consequence, the structures that contain
      embedded list_lru lists can become way too big (the superblock for
      instance contains two of them).
      
      This patch aims at ameliorating this situation by dynamically allocating
      the node arrays with the firmware-provided nr_node_ids.
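
      As a hedged sketch (modelled on the mainline commit this was picked
      from; details may differ in this tree), the init path now allocates the
      array at runtime:

          int list_lru_init(struct list_lru *lru)
          {
                  int i;
                  size_t size = sizeof(*lru->node) * nr_node_ids;

                  /* one list_lru_node per possible node, sized at runtime */
                  lru->node = kzalloc(size, GFP_KERNEL);
                  if (!lru->node)
                          return -ENOMEM;

                  nodes_clear(lru->active_nodes);
                  for (i = 0; i < nr_node_ids; i++) {
                          spin_lock_init(&lru->node[i].lock);
                          INIT_LIST_HEAD(&lru->node[i].list);
                          lru->node[i].nr_items = 0;
                  }
                  return 0;
          }
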
      Signed-off-by: Glauber Costa <glommer@openvz.org>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Cc: Arve Hjønnevåg <arve@android.com>
      Cc: Carlos Maiolino <cmaiolino@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chuck Lever <chuck.lever@oracle.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: J. Bruce Fields <bfields@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      (cherry picked from commit 5ca302c8e502ca53b7d75f12127ec0289904003a)
    • list_lru: per-node API · 89fcb02d
      Glauber Costa authored
      
      
      This patch adapts the list_lru API to accept an optional node argument, to
      be used by NUMA aware shrinking functions.  Code that does not care about
      the NUMA placement of objects can still call into the very same functions
      as before.  They will simply iterate over all nodes.
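
      As a hedged sketch (modelled on the mainline commit), the node-agnostic
      entry points simply loop over the active nodes:

          unsigned long list_lru_count(struct list_lru *lru)
          {
                  long count = 0;
                  int nid;

                  /* NUMA-unaware callers just sum every active node */
                  for_each_node_mask(nid, lru->active_nodes)
                          count += list_lru_count_node(lru, nid);

                  return count;
          }
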
      Signed-off-by: Glauber Costa <glommer@openvz.org>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Cc: Arve Hjønnevåg <arve@android.com>
      Cc: Carlos Maiolino <cmaiolino@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chuck Lever <chuck.lever@oracle.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: J. Bruce Fields <bfields@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      (cherry picked from commit 6a4f496fd2fc74fa036732ae52c184952d6e3e37)
    • list_lru: fix broken LRU_RETRY behaviour · e8734293
      Dave Chinner authored
      
      
      The LRU_RETRY code assumes that the list traversal status is unchanged
      after we have dropped and regained the list lock.  Unfortunately, this
      is not a valid assumption, and that can lead to racing traversals
      isolating objects that the other traversal expects to be the next item
      on the list.
      
      This is causing problems with the inode cache shrinker isolation, with
      races resulting in an inode on a dispose list being "isolated" because a
      racing traversal still thinks it is on the LRU.  The inode is then never
      reclaimed and that causes hangs if a subsequent lookup on that inode
      occurs.
      
      Fix it by always restarting the list walk on a LRU_RETRY return from the
      isolate callback.  Avoid the possibility of livelocks the current code was
      trying to avoid by always decrementing the nr_to_walk counter on retries
      so that even if we keep hitting the same item on the list we'll eventually
      stop trying to walk and exit out of the situation causing the problem.
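
      A hedged sketch of the resulting isolate-callback handling (from the
      mainline patch; the decrement of nr_to_walk still bounds the retries):

          case LRU_RETRY:
                  /*
                   * The lru lock has been dropped, our list traversal is
                   * now invalid and so we have to restart from scratch.
                   */
                  goto restart;
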
      Reported-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      (cherry picked from commit 5cedf721a7cdb54e9222133516c916210d836470)
    • list_lru: per-node list infrastructure · d65a68a0
      Dave Chinner authored
      
      
      Now that we have an LRU list API, we can start to enhance the
      implementation.  This splits the single LRU list into per-node lists and
      locks to enhance scalability.  Items are placed on lists according to the
      node the memory belongs to.  To make scanning the lists efficient, also
      track whether the per-node lists have entries in them in an active
      nodemask.
      
      Note: We use a fixed-size array for the node LRU; this struct can be
      very big if MAX_NUMNODES is big.  If this becomes a problem it is
      fixable by turning this into a pointer and dynamically allocating it to
      nr_node_ids.  That quantity is firmware-provided, and still would
      provide room for all nodes at the cost of a pointer lookup and an extra
      allocation.  Because that allocation will most likely come from a
      different slab cache than the main structure, it may very well fail.
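
      The resulting layout, as a hedged sketch of the mainline commit:

          struct list_lru_node {
                  spinlock_t              lock;
                  struct list_head        list;
                  /* kept private to the lru list to simplify locking */
                  long                    nr_items;
          } ____cacheline_aligned_in_smp;

          struct list_lru {
                  /* fixed-size for now; see the note above */
                  struct list_lru_node    node[MAX_NUMNODES];
                  nodemask_t              active_nodes;
          };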
      
      [glommer@openvz.org: fix warnings, added note about node lru]
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Glauber Costa <glommer@openvz.org>
      Reviewed-by: Greg Thelen <gthelen@google.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Cc: Arve Hjønnevåg <arve@android.com>
      Cc: Carlos Maiolino <cmaiolino@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chuck Lever <chuck.lever@oracle.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: J. Bruce Fields <bfields@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      (cherry picked from commit 3b1d58a4c96799eb4c92039e1b851b86f853548a)
    • list: add a new LRU list type · cfc0e18d
      Dave Chinner authored
      
      
      Several subsystems use the same construct for LRU lists - a list head, a
      spin lock and an item count.  They also use exactly the same code for
      adding and removing items from the LRU.  Create a generic type for these
      LRU lists.
      
      This is the beginning of generic, node aware LRUs for shrinkers to work
      with.
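
      A hedged sketch of the generic type this introduces (per the mainline
      commit; the per-node split comes later in this series):

          struct list_lru {
                  spinlock_t              lock;
                  struct list_head        list;
                  long                    nr_items;
          };

          /* both return whether the item actually moved on/off the lru */
          bool list_lru_add(struct list_lru *lru, struct list_head *item);
          bool list_lru_del(struct list_lru *lru, struct list_head *item);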
      
      [glommer@openvz.org: enum defined constants for lru. Suggested by gthelen, don't relock over retry]
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Glauber Costa <glommer@openvz.org>
      Reviewed-by: Greg Thelen <gthelen@google.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Cc: Arve Hjønnevåg <arve@android.com>
      Cc: Carlos Maiolino <cmaiolino@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chuck Lever <chuck.lever@oracle.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: J. Bruce Fields <bfields@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      (cherry picked from commit a38e40824844a5ec85f3ea95632be953477d2afa)
    • binder: create userspace-to-binder-buffer copy function · 5c132516
      Todd Kjos authored
      
      
      The binder driver uses a vm_area to map the per-process
      binder buffer space. For 32-bit android devices, this is
      now taking too much vmalloc space. This patch removes
      the use of vm_area when copying the transaction data
      from the sender to the buffer space. Instead of using
      copy_from_user() for multi-page copies, it now uses
      binder_alloc_copy_user_to_buffer() which uses kmap()
      and kunmap() to map each page, and uses copy_from_user()
      for copying to that page.
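
      A condensed, hedged sketch of the per-page copy loop this describes
      (simplified from the mainline commit):

          unsigned long
          binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
                                           struct binder_buffer *buffer,
                                           binder_size_t buffer_offset,
                                           const void __user *from,
                                           size_t bytes)
          {
                  while (bytes) {
                          unsigned long size, ret;
                          struct page *page;
                          pgoff_t pgoff;
                          void *kptr;

                          page = binder_alloc_get_page(alloc, buffer,
                                                       buffer_offset, &pgoff);
                          size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
                          /* map one page at a time instead of a whole vm_area */
                          kptr = kmap(page) + pgoff;
                          ret = copy_from_user(kptr, from, size);
                          kunmap(page);
                          if (ret)
                                  return bytes - size + ret;
                          bytes -= size;
                          from += size;
                          buffer_offset += size;
                  }
                  return 0;
          }
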
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      (cherry picked from commit 1a7c3d9bb7a926e88d5f57643e75ad1abfc55013)
    • UPSTREAM: binder: add flag to clear buffer on txn complete · 8f409272
      Todd Kjos authored
      
      
      Add a per-transaction flag to indicate that the buffer
      must be cleared when the transaction is complete to
      prevent copies of sensitive data from being preserved
      in memory.
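
      A hedged sketch of the mechanism (per the mainline commit: a new
      transaction flag, remembered on the buffer and honoured on free):

          enum transaction_flags {
                  ...
                  TF_CLEAR_BUF = 0x20,    /* clear buffer on txn complete */
          };

          /* in binder_transaction(): record the sender's request */
          t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);

          /* when the buffer is freed, zero the payload before reuse */
          if (buffer->clear_on_free)
                  binder_alloc_clear_buf(alloc, buffer);
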
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Link: https://lore.kernel.org/r/20201120233743.3617529-1-tkjos@google.com
      
      
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Bug: 171501513
      Change-Id: Ic9338c85cbe3b11ab6f2bda55dce9964bb48447a
      (cherry picked from commit 0f966cba95c78029f491b433ea95ff38f414a761)
      Signed-off-by: Todd Kjos <tkjos@google.com>
      (cherry picked from commit 92b2ec21896a13b5aca1425dca1bd9a1df705cd0)
    • highmem: Provide generic variant of kmap_atomic* · 5866d2f8
      Thomas Gleixner authored
      
      
      The kmap_atomic* interfaces in all architectures are pretty much the same
      except for post-map operations (flush) and pre- and post-unmap operations.
      
      Provide a generic variant for that.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linuxfoundation.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: https://lore.kernel.org/r/20201103095857.175939340@linutronix.de
      
      (cherry picked from commit 298fa1ad5571f59cb3ca5497a9455f36867f065e)
    • kmap: consolidate kmap_prot definitions · 23a51173
      Ira Weiny authored
      
      
      Most architectures define kmap_prot to be PAGE_KERNEL.
      
      Let sparc and xtensa define their own, and define PAGE_KERNEL as the
      default if not overridden.
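
      The consolidation boils down to a fallback definition in the core
      header, roughly (a hedged sketch; the exact placement may differ):

          /* sparc and xtensa provide their own kmap_prot */
          #ifndef kmap_prot
          #define kmap_prot PAGE_KERNEL
          #endif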
      
      [akpm@linux-foundation.org: coding style fixes]
      Suggested-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20200507150004.1423069-16-ira.weiny@intel.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit 090e77e166334b83f555de408df64b9ab394ea08)
    • arch/kmap: define kmap_atomic_prot() for all arch's · ac103e66
      Ira Weiny authored
      
      
      To support kmap_atomic_prot(), all architectures need to support
      protections passed to their kmap_atomic_high() function.  Pass protections
      into kmap_atomic_high() and change the name to kmap_atomic_high_prot() to
      match.
      
      Then define kmap_atomic_prot() as a core function which calls
      kmap_atomic_high_prot() when needed.
      
      Finally, redefine kmap_atomic() as a wrapper of kmap_atomic_prot() with
      the default kmap_prot exported by the architectures.
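
      Sketched (hedged) from the mainline commit, the core wrapper becomes:

          static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
          {
                  preempt_disable();
                  pagefault_disable();
                  if (!PageHighMem(page))
                          return page_address(page);
                  return kmap_atomic_high_prot(page, prot);
          }

          /* kmap_atomic() is now just the default-protection case */
          #define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)
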
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20200507150004.1423069-11-ira.weiny@intel.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit 20b271dfe9d932b02b067a1f7ba9805c5b8d79bd)
    • arch/kunmap_atomic: consolidate duplicate code · 462964ea
      Ira Weiny authored
      Every single architecture (including !CONFIG_HIGHMEM) calls...
      
      	pagefault_enable();
      	preempt_enable();
      
      ... before returning from __kunmap_atomic().  Lift this code into the
      kunmap_atomic() macro.
      
      While we are at it rename __kunmap_atomic() to kunmap_atomic_high() to
      be consistent.
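
      The lifted macro, as a hedged sketch of the mainline version:

          #define kunmap_atomic(addr)                                     \
          do {                                                            \
                  BUILD_BUG_ON(__same_type((addr), struct page *));       \
                  kunmap_atomic_high(addr);  /* arch-specific part */     \
                  pagefault_enable();                                     \
                  preempt_enable();                                       \
          } while (0)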
      
      [ira.weiny@intel.com: don't enable pagefault/preempt twice]
        Link: http://lkml.kernel.org/r/20200518184843.3029640-1-ira.weiny@intel.com
      
      
      [akpm@linux-foundation.org: coding style fixes]
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Link: http://lkml.kernel.org/r/20200507150004.1423069-8-ira.weiny@intel.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit abca2500c0c1b20c3e552f259da4c4a99db3b4d1)
    • arch/kmap_atomic: consolidate duplicate code · 2a5a03a3
      Ira Weiny authored
      
      
      Every arch has the same code to ensure atomic operations and a check
      for a !HIGHMEM page.
      
      Remove the duplicate code by defining a core kmap_atomic() which only
      calls the arch specific kmap_atomic_high() when the page is high memory.
      
      [akpm@linux-foundation.org: coding style fixes]
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20200507150004.1423069-7-ira.weiny@intel.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit 78b6d91ec7bbfc5bcc2dd05bb2cf13c9de1dc7cd)
    • arch/kunmap: remove duplicate kunmap implementations · f31ce0b3
      Ira Weiny authored
      
      
      All architectures do exactly the same thing for kunmap(); remove all the
      duplicate definitions and lift the call to the core.
      
      This also has the benefit of changing kunmap() on a number of
      architectures to be an inline call rather than an actual function.
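
      The lifted core implementation, as a hedged sketch of the mainline
      version:

          static inline void kunmap(struct page *page)
          {
                  might_sleep();
                  if (!PageHighMem(page))
                          return;
                  kunmap_high(page);      /* only highmem needs arch help */
          }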
      
      [akpm@linux-foundation.org: fix CONFIG_HIGHMEM=n build on various architectures]
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20200507150004.1423069-5-ira.weiny@intel.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit e23c45976f82ac789469c37e4d5a72ea2ce30bba)
    • arch/kmap: clean-up after b365c5d17d7e · 93a3203c
      threader authored
    • arch/kmap: remove redundant arch specific kmaps · fae7d1c5
      Ira Weiny authored
      
      
      The kmap code for all the architectures is almost 100% identical.
      
      Lift the common code to the core.  Use ARCH_HAS_KMAP_FLUSH_TLB to indicate
      if an arch defines kmap_flush_tlb() and call it if needed.
      
      This also has the benefit of changing kmap() on a number of architectures
      to be an inline call rather than an actual function.
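
      A hedged sketch of the lifted core kmap() (per the mainline commit):

          #ifndef ARCH_HAS_KMAP_FLUSH_TLB
          static inline void kmap_flush_tlb(unsigned long addr) { }
          #endif

          static inline void *kmap(struct page *page)
          {
                  void *addr;

                  might_sleep();
                  if (!PageHighMem(page))
                          addr = page_address(page);
                  else
                          addr = kmap_high(page);
                  kmap_flush_tlb((unsigned long)addr);
                  return addr;
          }
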
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20200507150004.1423069-4-ira.weiny@intel.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit 525aaf9bad00e7454b9f9b3873e92795afb59f8e)
    • mm: Allow arch code to override copy_highpage() · 60a26e6c
      Khalid Aziz authored
      
      
      Some architectures can support metadata for memory pages and when a
      page is copied, its metadata must also be copied. Sparc processors
      from M7 onwards support metadata for memory pages. This metadata
      provides tag based protection for access to memory pages. To maintain
      this protection, the tag data must be copied to the new page when a
      page is migrated across NUMA nodes. This patch allows arch specific
      code to override default copy_highpage() and copy metadata along
      with page data upon migration.
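
      The override hook, sketched (hedged) from the mainline commit:

          /* arches with page metadata define their own copy_highpage()
           * and set __HAVE_ARCH_COPY_HIGHPAGE to suppress this default */
          #ifndef __HAVE_ARCH_COPY_HIGHPAGE
          static inline void copy_highpage(struct page *to, struct page *from)
          {
                  char *vfrom, *vto;

                  vfrom = kmap_atomic(from);
                  vto = kmap_atomic(to);
                  copy_page(vto, vfrom);
                  kunmap_atomic(vto);
                  kunmap_atomic(vfrom);
          }
          #endif
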
      Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
      Cc: Khalid Aziz <khalid@gonehiking.org>
      Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      (cherry picked from commit a4602b62d9fdea41412ba765bbf32ecfc2b6a94c)
    • sched/preempt, mm/kmap: Explicitly disable/enable preemption in kmap_atomic_* · 30029f69
      David Hildenbrand authored
      
      
      The existing code relies on pagefault_disable() implicitly disabling
      preemption, so that no schedule will happen between kmap_atomic() and
      kunmap_atomic().
      
      Let's make this explicit, to prepare for pagefault_disable() not
      touching preemption anymore.
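
      The change itself is mechanical; as a hedged sketch:

           void *kmap_atomic(struct page *page)
           {
          +        preempt_disable();   /* was implied by pagefault_disable() */
                   pagefault_disable();
                   ...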
      Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David.Laight@ACULAB.COM
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: airlied@linux.ie
      Cc: akpm@linux-foundation.org
      Cc: benh@kernel.crashing.org
      Cc: bigeasy@linutronix.de
      Cc: borntraeger@de.ibm.com
      Cc: daniel.vetter@intel.com
      Cc: heiko.carstens@de.ibm.com
      Cc: herbert@gondor.apana.org.au
      Cc: hocko@suse.cz
      Cc: hughd@google.com
      Cc: mst@redhat.com
      Cc: paulus@samba.org
      Cc: ralf@linux-mips.org
      Cc: schwidefsky@de.ibm.com
      Cc: yang.shi@windriver.com
      Link: http://lkml.kernel.org/r/1431359540-32227-5-git-send-email-dahi@linux.vnet.ibm.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      (cherry picked from commit 2cb7c9cb426660b5ed58b643d9e7dd5d50ba901f)
    • mm: BUG when __kmap_atomic_idx equals KM_TYPE_NR · c79ac9a3
      Chintan Pandya authored
      
      
      __kmap_atomic_idx is a per_cpu variable.  Each CPU can use KM_TYPE_NR
      entries from FIXMAP, i.e. from 0 to KM_TYPE_NR - 1.  Allowing
      __kmap_atomic_idx to overshoot to KM_TYPE_NR can mess up the next
      CPU's 0th entry, which is a bug.  Hence BUG_ON if __kmap_atomic_idx >=
      KM_TYPE_NR.
      
      Fix the off-by-one in this test.
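
      The check in question, as a hedged sketch (the fix tightens > to >=):

          static inline int kmap_atomic_idx_push(void)
          {
                  int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;

          #ifdef CONFIG_DEBUG_HIGHMEM
                  WARN_ON_ONCE(in_irq() && !irqs_disabled());
                  BUG_ON(idx >= KM_TYPE_NR);      /* was: idx > KM_TYPE_NR */
          #endif
                  return idx;
          }
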
      Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      (cherry picked from commit 1d352bfd41e8219cdf9bebe79677700bdc38b540)
    • BINDER: Fix warning on moved struct · e182d1b3
      threader authored
  5. 07 Aug, 2021 3 commits
  6. 03 Aug, 2021 7 commits
    • BINDER: Clean up pick conflict · 7b36094a
      threader authored
    • drivers: android: Remove braces for a single statement if-else block · c4ce50a3
      Mrinal Pandey authored
      
      
      Remove braces for both the if and the else block, as suggested by checkpatch.
      Signed-off-by: Mrinal Pandey <mrinalmni@gmail.com>
      Link: https://lore.kernel.org/r/20200724131403.dahfhdwa3wirzkxj@mrinalpandey
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      (cherry picked from commit 8df5b9492202e9cac9917465e945fcf478d55404)
    • binder: Don't modify VMA bounds in ->mmap handler · b39e6c0e
      Jann Horn authored
      
      
      binder_mmap() tries to prevent the creation of overly big binder mappings
      by silently truncating the size of the VMA to 4MiB. However, this violates
      the API contract of mmap(). If userspace attempts to create a large binder
      VMA, and later attempts to unmap that VMA, it will call munmap() on a range
      beyond the end of the VMA, which may have been allocated to another VMA in
      the meantime. This can lead to userspace memory corruption.
      
      The following sequence of calls leads to a segfault without this commit:
      
          int main(void) {
                  int binder_fd = open("/dev/binder", O_RDWR);
                  if (binder_fd == -1) err(1, "open binder");
                  void *binder_mapping = mmap(NULL, 0x800000UL, PROT_READ, MAP_SHARED,
                                              binder_fd, 0);
                  if (binder_mapping == MAP_FAILED) err(1, "mmap binder");
                  void *data_mapping = mmap(NULL, 0x400000UL, PROT_READ|PROT_WRITE,
                                            MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
                  if (data_mapping == MAP_FAILED) err(1, "mmap data");
                  munmap(binder_mapping, 0x800000UL);
                  *(char*)data_mapping = 1;
                  return 0;
          }
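
      The fix leaves the VMA bounds alone and caps only the driver's internal
      accounting; a hedged sketch of the mainline change:

          /* in binder_alloc_mmap_handler(): don't shrink vma->vm_end,
           * just limit how much of the mapping binder will use */
          alloc->buffer_size = min_t(unsigned long,
                                     vma->vm_end - vma->vm_start, SZ_4M);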
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Jann Horn <jannh@google.com>
      Acked-by: Todd Kjos <tkjos@google.com>
      Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
      Link: https://lore.kernel.org/r/20191016150119.154756-1-jannh@google.com
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      (cherry picked from commit 45d02f79b539073b76077836871de6b674e36eb4)
    • binder: remove BINDER_DEBUG_ENTRY() · 1ed959a6
      Yangtao Li authored
      
      
      We already have DEFINE_SHOW_ATTRIBUTE.  There is no need to define
      such a macro, so remove BINDER_DEBUG_ENTRY.
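
      For illustration (hedged; "foo" here is a generic placeholder, not
      necessarily a name used in binder.c): given a foo_show(struct seq_file *,
      void *), the macro generates foo_open() and foo_fops for debugfs, which
      is exactly the boilerplate BINDER_DEBUG_ENTRY() used to open-code:

          DEFINE_SHOW_ATTRIBUTE(foo);
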
      Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
      Acked-by: Todd Kjos <tkjos@android.com>
      Reviewed-by: Joey Pabalinas <joeypabalinas@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      (cherry picked from commit c13e0a5288195aadec1e53af7a48ea8dae971416)
    • fs: move filp_close() outside of __close_fd_get_file() · c1437c6d
      Jens Axboe authored
      
      
      Just one caller of this, and just use filp_close() there manually.
      This is important to allow async close/removal of the fd.
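
      A hedged sketch of the resulting calling convention (assuming the
      two-argument __close_fd_get_file() of this kernel's vintage):

          struct file *file = NULL;

          __close_fd_get_file(fd, &file);   /* detach the fd, keep a ref */
          if (file)
                  filp_close(file, current->files);   /* caller closes (or defers) */
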
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      (cherry picked from commit 6e802a4ba056a6f2f51ac9d54eead3ed6f9829a2)
    • binder: fix use-after-free due to ksys_close() during fdget() · 29043942
      Todd Kjos authored
      
      
      44d8047f1d8 ("binder: use standard functions to allocate fds")
      exposed a pre-existing issue in the binder driver.
      
      fdget() is used in ksys_ioctl() as a performance optimization.
      One of the rules associated with fdget() is that ksys_close() must
      not be called between the fdget() and the fdput(). There is a case
      where this requirement is not met in the binder driver which results
      in the reference count dropping to 0 when the device is still in
      use. This can result in use-after-free or other issues.
      
      If userspace has passed a file descriptor for the binder driver using
      a BINDER_TYPE_FDA object, then ksys_close() is called on it when
      handling a binder_ioctl(BC_FREE_BUFFER) command.  This violates
      the assumptions for using fdget().
      
      The problem is fixed by deferring the close using task_work_add(). A
      new variant of __close_fd() was created that returns a struct file
      with a reference. The fput() is deferred instead of using ksys_close().
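
      A hedged sketch of the deferral described above (modelled on the
      mainline fix):

          static void binder_do_fd_close(struct callback_head *twork)
          {
                  struct binder_task_work_cb *twcb = container_of(twork,
                                  struct binder_task_work_cb, twork);

                  fput(twcb->file);   /* runs after the ioctl's fdput() */
                  kfree(twcb);
          }

          static void binder_deferred_fd_close(int fd)
          {
                  struct binder_task_work_cb *twcb;

                  twcb = kzalloc(sizeof(*twcb), GFP_KERNEL);
                  if (!twcb)
                          return;
                  init_task_work(&twcb->twork, binder_do_fd_close);
                  __close_fd_get_file(fd, &twcb->file);   /* not ksys_close() */
                  if (twcb->file)
                          task_work_add(current, &twcb->twork, true);
                  else
                          kfree(twcb);
          }
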
      
      Fixes: 44d8047f1d87a ("binder: use standard functions to allocate fds")
      Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      (cherry picked from commit 80cd795630d6526ba729a089a435bf74a57af927)
    • binder: fix null deref of proc->context · 91596232
      Todd Kjos authored
      
      
      The binder driver makes the assumption that the proc->context pointer is invariant after
      initialization (as documented in the kerneldoc header for struct proc).
      However, in commit f0fe2c0f050d ("binder: prevent UAF for binderfs devices II")
      proc->context is set to NULL during binder_deferred_release().
      
      Another proc was in the middle of setting up a transaction to the dying
      process and crashed on a NULL pointer deref on "context" which is a local
      set to &proc->context:
      
          new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
      
      Here's the stack:
      
      [ 5237.855435] Call trace:
      [ 5237.855441] binder_get_ref_for_node_olocked+0x100/0x2ec
      [ 5237.855446] binder_inc_ref_for_node+0x140/0x280
      [ 5237.855451] binder_translate_binder+0x1d0/0x388
      [ 5237.855456] binder_transaction+0x2228/0x3730
      [ 5237.855461] binder_thread_write+0x640/0x25bc
      [ 5237.855466] binder_ioctl_write_read+0xb0/0x464
      [ 5237.855471] binder_ioctl+0x30c/0x96c
      [ 5237.855477] do_vfs_ioctl+0x3e0/0x700
      [ 5237.855482] __arm64_sys_ioctl+0x78/0xa4
      [ 5237.855488] el0_svc_common+0xb4/0x194
      [ 5237.855493] el0_svc_handler+0x74/0x98
      [ 5237.855497] el0_svc+0x8/0xc
      
      The fix is to move the kfree of the binder_device to binder_free_proc()
      so the binder_device is freed when we know there are no references
      remaining on the binder_proc.
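
      A hedged sketch of the move (per the mainline fix):

          static void binder_free_proc(struct binder_proc *proc)
          {
                  struct binder_device *device;

                  ...
                  device = container_of(proc->context, struct binder_device,
                                        context);
                  /* free the device only when no proc still references it */
                  if (refcount_dec_and_test(&device->ref))
                          kfree(device);
                  ...
          }
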
      
      Fixes: f0fe2c0f050d ("binder: prevent UAF for binderfs devices II")
      Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Cc: stable <stable@vger.kernel.org>
      Link: https://lore.kernel.org/r/20200622200715.114382-1-tkjos@google.com
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      (cherry picked from commit d35d3660e065b69fdb8bf512f3d899f350afce52)
  7. 31 Jul, 2021 3 commits