This project is mirrored from https://github.com/Professor-Berni/android_kernel_sony_msm8994.git.
Pull mirroring updated.
- 19 Oct, 2021 1 commit
-
Pete Fotheringham authored
3866 Use Los18 Kernel. See merge request !3
-
- 30 Sep, 2021 1 commit
-
LuK1337 authored
* Userspace needs it for model detection. Change-Id: Id47e30fb941fc1fd6822d0d3ba32cd5e7572b2b5
-
- 26 Aug, 2021 1 commit
-
Bernhard Thoben authored
This option turns on support for the /proc/keys file. See kernel/sony/msm8994/security/keys/Kconfig for more information.
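For illustration, a minimal userspace sketch that reads the file this option exposes (assuming the option is the standard CONFIG_PROC_KEYS and that it is enabled in the running kernel):

#include <stdio.h>

int main(void)
{
        char line[512];
        FILE *f = fopen("/proc/keys", "r");     /* only exists with CONFIG_PROC_KEYS=y */

        if (!f) {
                perror("fopen /proc/keys");
                return 1;
        }
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);    /* one line per key visible to the caller */
        fclose(f);
        return 0;
}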
-
- 11 Aug, 2021 1 commit
-
threader authored
-
- 09 Aug, 2021 24 commits
-
threader authored
This reverts commit b72bdd1bf94b9d538c53c95a85c67e5bc07b45aa.
-
threader authored
This reverts commit fb28e186b9faf19346dd18915cc9d1aab98c1264.
-
threader authored
This reverts commit 67d42f9d8a7c71e35f33fb4862aef67b0137f764.
-
threader authored
-
Russell King authored
I've seen a fair number of issues with kswapd and other processes appearing to get stuck in v3.12-rc. Using sysrq-p many times seems to indicate that it gets stuck somewhere in list_lru_walk_node(), called from prune_icache_sb() and super_cache_scan(). I never seem to be able to trigger a calltrace for functions above that point.

So I decided to add the following to super_cache_scan():

@@ -81,10 +81,14 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
        inodes = list_lru_count_node(&sb->s_inode_lru, sc->nid);
        dentries = list_lru_count_node(&sb->s_dentry_lru, sc->nid);
        total_objects = dentries + inodes + fs_objects + 1;
+printk("%s:%u: %s: dentries %lu inodes %lu total %lu\n", current->comm, current->pid, __func__, dentries, inodes, total_objects);

        /* proportion the scan between the caches */
        dentries = mult_frac(sc->nr_to_scan, dentries, total_objects);
        inodes = mult_frac(sc->nr_to_scan, inodes, total_objects);
+printk("%s:%u: %s: dentries %lu inodes %lu\n", current->comm, current->pid, __func__, dentries, inodes);
+BUG_ON(dentries == 0);
+BUG_ON(inodes == 0);

        /*
         * prune the dcache first as the icache is pinned by it, then
@@ -99,7 +103,7 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
                freed += sb->s_op->free_cached_objects(sb, fs_objects, sc->nid);
        }
-
+printk("%s:%u: %s: dentries %lu inodes %lu freed %lu\n", current->comm, current->pid, __func__, dentries, inodes, freed);
        drop_super(sb);
        return freed;
 }

and shortly thereafter, having applied some pressure, I got this:

update-apt-xapi:1616: super_cache_scan: dentries 25632 inodes 2 total 25635
update-apt-xapi:1616: super_cache_scan: dentries 1023 inodes 0
------------[ cut here ]------------
Kernel BUG at c0101994 [verbose debug info unavailable]
Internal error: Oops - BUG: 0 [#3] SMP ARM
Modules linked in: fuse rfcomm bnep bluetooth hid_cypress
CPU: 0 PID: 1616 Comm: update-apt-xapi Tainted: G D 3.12.0-rc7+ #154
task: daea1200 ti: c3bf8000 task.ti: c3bf8000
PC is at super_cache_scan+0x1c0/0x278
LR is at trace_hardirqs_on+0x14/0x18
Process update-apt-xapi (pid: 1616, stack limit = 0xc3bf8240)
...
Backtrace:
(super_cache_scan) from [<c00cd69c>] (shrink_slab+0x254/0x4c8)
(shrink_slab) from [<c00d09a0>] (try_to_free_pages+0x3a0/0x5e0)
(try_to_free_pages) from [<c00c59cc>] (__alloc_pages_nodemask+0x5)
(__alloc_pages_nodemask) from [<c00e07c0>] (__pte_alloc+0x2c/0x13)
(__pte_alloc) from [<c00e3a70>] (handle_mm_fault+0x84c/0x914)
(handle_mm_fault) from [<c001a4cc>] (do_page_fault+0x1f0/0x3bc)
(do_page_fault) from [<c001a7b0>] (do_translation_fault+0xac/0xb8)
(do_translation_fault) from [<c000840c>] (do_DataAbort+0x38/0xa0)
(do_DataAbort) from [<c00133f8>] (__dabt_usr+0x38/0x40)

Notice that we had a very low number of inodes, which were reduced to zero by mult_frac().
Now, prune_icache_sb() calls list_lru_walk_node(), passing that number of inodes (0) in as the number of objects to scan:

long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan,
                     int nid)
{
        LIST_HEAD(freeable);
        long freed;

        freed = list_lru_walk_node(&sb->s_inode_lru, nid, inode_lru_isolate,
                                   &freeable, &nr_to_scan);

which does:

unsigned long
list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate,
                   void *cb_arg, unsigned long *nr_to_walk)
{
        struct list_lru_node *nlru = &lru->node[nid];
        struct list_head *item, *n;
        unsigned long isolated = 0;

        spin_lock(&nlru->lock);
restart:
        list_for_each_safe(item, n, &nlru->list) {
                enum lru_status ret;

                /*
                 * decrement nr_to_walk first so that we don't livelock if we
                 * get stuck on large numbers of LRU_RETRY items
                 */
                if (--(*nr_to_walk) == 0)
                        break;

So, if *nr_to_walk was zero when this function was entered, that means we're wanting to operate on (~0UL)+1 objects - which might as well be infinite.

Clearly this is not correct behaviour. If we think about the behaviour of this function when *nr_to_walk is 1, then clearly it's wrong - we decrement first and then test for zero - which results in us doing nothing at all. A post-decrement would give the desired behaviour - we'd try to walk one object and one object only if *nr_to_walk were one. It also gives the correct behaviour for zero - we exit at this point.

Fixes: 5cedf721a7cd ("list_lru: fix broken LRU_RETRY behaviour")
Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk> Cc: Dave Chinner <dchinner@redhat.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> [ Modified to make sure we never underflow the count: this function gets called in a loop, so the 0 -> ~0ul transition is dangerous - Linus ] Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit c56b097af26cb11c1f49a4311ba538c825666fed)
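For a concrete feel for the off-by-one, here is a standalone userspace sketch (not kernel code) contrasting the buggy pre-decrement, the proposed post-decrement, and the merged test-then-decrement for a walk budget starting at n:

#include <stdio.h>

static unsigned long walked(unsigned long n, int variant)
{
        unsigned long *nr_to_walk = &n, items = 0;

        /* pretend the LRU list is unbounded; cap at 100 so the buggy
         * variant terminates */
        while (items < 100) {
                if (variant == 0) {             /* buggy pre-decrement */
                        if (--(*nr_to_walk) == 0)
                                break;          /* n==0 wraps to ~0UL first */
                } else if (variant == 1) {      /* proposed post-decrement */
                        if ((*nr_to_walk)-- == 0)
                                break;          /* exits, but n still wraps */
                } else {                        /* merged fix: test, then decrement */
                        if (!*nr_to_walk)
                                break;
                        --(*nr_to_walk);
                }
                items++;
        }
        return items;
}

int main(void)
{
        for (unsigned long n = 0; n <= 2; n++)
                printf("n=%lu pre:%lu post:%lu fix:%lu\n", n,
                       walked(n, 0), walked(n, 1), walked(n, 2));
        return 0;
}

The pre-decrement variant walks nothing for n=1 and runs away for n=0; the post-decrement variant exits for n=0 but still underflows the counter, which is the 0 -> ~0ul transition Linus flagged.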
-
Glauber Costa authored
We currently use a compile-time constant to size the node array for the list_lru structure. Due to this, we don't need to allocate any memory at initialization time. But as a consequence, the structures that contain embedded list_lru lists can become way too big (the superblock for instance contains two of them). This patch aims at ameliorating this situation by dynamically allocating the node arrays with the firmware-provided nr_node_ids. Signed-off-by:
Glauber Costa <glommer@openvz.org> Cc: Dave Chinner <dchinner@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: "Theodore Ts'o" <tytso@mit.edu> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Cc: Arve Hjønnevåg <arve@android.com> Cc: Carlos Maiolino <cmaiolino@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: David Rientjes <rientjes@google.com> Cc: Gleb Natapov <gleb@redhat.com> Cc: Greg Thelen <gthelen@google.com> Cc: J. Bruce Fields <bfields@redhat.com> Cc: Jan Kara <jack@suse.cz> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Stultz <john.stultz@linaro.org> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Kent Overstreet <koverstreet@google.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Steven Whitehouse <swhiteho@redhat.com> Cc: Thomas Hellstrom <thellstrom@vmware.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk> (cherry picked from commit 5ca302c8e502ca53b7d75f12127ec0289904003a)
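A simplified sketch of the resulting init path (condensed from the upstream commit; error handling trimmed):

int list_lru_init(struct list_lru *lru)
{
        int i;

        /* runtime-sized: nr_node_ids nodes instead of MAX_NUMNODES */
        lru->node = kzalloc(nr_node_ids * sizeof(*lru->node), GFP_KERNEL);
        if (!lru->node)
                return -ENOMEM;

        nodes_clear(lru->active_nodes);
        for (i = 0; i < nr_node_ids; i++) {
                spin_lock_init(&lru->node[i].lock);
                INIT_LIST_HEAD(&lru->node[i].list);
                lru->node[i].nr_items = 0;
        }
        return 0;
}

void list_lru_destroy(struct list_lru *lru)
{
        kfree(lru->node);
}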
-
Glauber Costa authored
This patch adapts the list_lru API to accept an optional node argument, to be used by NUMA aware shrinking functions. Code that does not care about the NUMA placement of objects can still call into the very same functions as before. They will simply iterate over all nodes. Signed-off-by:
Glauber Costa <glommer@openvz.org> Cc: Dave Chinner <dchinner@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: "Theodore Ts'o" <tytso@mit.edu> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Cc: Arve Hjønnevåg <arve@android.com> Cc: Carlos Maiolino <cmaiolino@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: David Rientjes <rientjes@google.com> Cc: Gleb Natapov <gleb@redhat.com> Cc: Greg Thelen <gthelen@google.com> Cc: J. Bruce Fields <bfields@redhat.com> Cc: Jan Kara <jack@suse.cz> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Stultz <john.stultz@linaro.org> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Kent Overstreet <koverstreet@google.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Steven Whitehouse <swhiteho@redhat.com> Cc: Thomas Hellstrom <thellstrom@vmware.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk> (cherry picked from commit 6a4f496fd2fc74fa036732ae52c184952d6e3e37)
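Sketch of the pattern (simplified): node-aware callers use the *_node variants directly, while the legacy entry points simply loop over the active nodes:

static inline unsigned long list_lru_count(struct list_lru *lru)
{
        long count = 0;
        int nid;

        /* callers that don't care about NUMA placement get every node */
        for_each_node_mask(nid, lru->active_nodes)
                count += list_lru_count_node(lru, nid);
        return count;
}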
-
Dave Chinner authored
The LRU_RETRY code assumes that the list traversal status is unchanged after we have dropped and regained the list lock. Unfortunately, this is not a valid assumption, and that can lead to racing traversals isolating objects that the other traversal expects to be the next item on the list. This is causing problems with the inode cache shrinker isolation, with races resulting in an inode on a dispose list being "isolated" because a racing traversal still thinks it is on the LRU. The inode is then never reclaimed and that causes hangs if a subsequent lookup on that inode occurs. Fix it by always restarting the list walk on an LRU_RETRY return from the isolate callback. Avoid the possibility of livelocks the current code was trying to avoid by always decrementing the nr_to_walk counter on retries, so that even if we keep hitting the same item on the list we'll eventually stop trying to walk and exit out of the situation causing the problem. Reported-by:
Michal Hocko <mhocko@suse.cz> Signed-off-by:
Dave Chinner <dchinner@redhat.com> Cc: Glauber Costa <glommer@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk> (cherry picked from commit 5cedf721a7cdb54e9222133516c916210d836470)
-
Dave Chinner authored
Now that we have an LRU list API, we can start to enhance the implementation. This splits the single LRU list into per-node lists and locks to enhance scalability. Items are placed on lists according to the node the memory belongs to. To make scanning the lists efficient, we also track whether the per-node lists have entries in them in an active nodemask. Note: we use a fixed-size array for the node LRU; this struct can be very big if MAX_NUMNODES is big. If this becomes a problem, it is fixable by turning this into a pointer and dynamically allocating it to nr_node_ids. This quantity is firmware-provided, and would still provide room for all nodes at the cost of a pointer lookup and an extra allocation. Because that allocation will most likely come from a different slab cache than the main structure, it may very well fail. [glommer@openvz.org: fix warnings, added note about node lru] Signed-off-by:
Dave Chinner <dchinner@redhat.com> Signed-off-by:
Glauber Costa <glommer@openvz.org> Reviewed-by:
Greg Thelen <gthelen@google.com> Acked-by:
Mel Gorman <mgorman@suse.de> Cc: "Theodore Ts'o" <tytso@mit.edu> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Cc: Arve Hjønnevåg <arve@android.com> Cc: Carlos Maiolino <cmaiolino@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: David Rientjes <rientjes@google.com> Cc: Gleb Natapov <gleb@redhat.com> Cc: Greg Thelen <gthelen@google.com> Cc: J. Bruce Fields <bfields@redhat.com> Cc: Jan Kara <jack@suse.cz> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Stultz <john.stultz@linaro.org> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Kent Overstreet <koverstreet@google.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Steven Whitehouse <swhiteho@redhat.com> Cc: Thomas Hellstrom <thellstrom@vmware.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk> (cherry picked from commit 3b1d58a4c96799eb4c92039e1b851b86f853548a)
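The resulting data layout, roughly as introduced by this commit (field names from upstream, simplified here):

struct list_lru_node {
        spinlock_t              lock;           /* per-node lock, not global */
        struct list_head        list;
        long                    nr_items;
} ____cacheline_aligned_in_smp;

struct list_lru {
        /* fixed-size array, see the note above about MAX_NUMNODES */
        struct list_lru_node    node[MAX_NUMNODES];
        /* lets scans skip nodes whose lists are empty */
        nodemask_t              active_nodes;
};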
-
Dave Chinner authored
Several subsystems use the same construct for LRU lists - a list head, a spin lock and an item count. They also use exactly the same code for adding and removing items from the LRU. Create a generic type for these LRU lists. This is the beginning of generic, node-aware LRUs for shrinkers to work with. [glommer@openvz.org: enum defined constants for lru. Suggested by gthelen, don't relock over retry] Signed-off-by:
Dave Chinner <dchinner@redhat.com> Signed-off-by:
Glauber Costa <glommer@openvz.org> Reviewed-by:
Greg Thelen <gthelen@google.com> Acked-by:
Mel Gorman <mgorman@suse.de> Cc: "Theodore Ts'o" <tytso@mit.edu> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Cc: Arve Hjønnevåg <arve@android.com> Cc: Carlos Maiolino <cmaiolino@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: David Rientjes <rientjes@google.com> Cc: Gleb Natapov <gleb@redhat.com> Cc: Greg Thelen <gthelen@google.com> Cc: J. Bruce Fields <bfields@redhat.com> Cc: Jan Kara <jack@suse.cz> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Stultz <john.stultz@linaro.org> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Kent Overstreet <koverstreet@google.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Steven Whitehouse <swhiteho@redhat.com> Cc: Thomas Hellstrom <thellstrom@vmware.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk> (cherry picked from commit a38e40824844a5ec85f3ea95632be953477d2afa)
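A sketch of the generic type and its add path (condensed from the upstream commit):

struct list_lru {
        spinlock_t              lock;
        struct list_head        list;
        long                    nr_items;
};

bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
        spin_lock(&lru->lock);
        if (list_empty(item)) {         /* not already on an LRU */
                list_add_tail(item, &lru->list);
                lru->nr_items++;
                spin_unlock(&lru->lock);
                return true;
        }
        spin_unlock(&lru->lock);
        return false;
}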
-
Todd Kjos authored
The binder driver uses a vm_area to map the per-process binder buffer space. For 32-bit android devices, this is now taking too much vmalloc space. This patch removes the use of vm_area when copying the transaction data from the sender to the buffer space. Instead of using copy_from_user() for multi-page copies, it now uses binder_alloc_copy_user_to_buffer() which uses kmap() and kunmap() to map each page, and uses copy_from_user() for copying to that page. Signed-off-by:
Todd Kjos <tkjos@google.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 1a7c3d9bb7a926e88d5f57643e75ad1abfc55013)
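Condensed sketch of the new copy loop (simplified from the upstream commit; binder_alloc_get_page() is an internal helper that resolves the page and in-page offset for a buffer offset):

unsigned long
binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
                                 struct binder_buffer *buffer,
                                 binder_size_t buffer_offset,
                                 const void __user *from,
                                 size_t bytes)
{
        while (bytes) {
                unsigned long size, ret;
                struct page *page;
                pgoff_t pgoff;
                void *kptr;

                page = binder_alloc_get_page(alloc, buffer,
                                             buffer_offset, &pgoff);
                size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
                kptr = kmap(page) + pgoff;      /* map one page, not a whole VMA */
                ret = copy_from_user(kptr, from, size);
                kunmap(page);
                if (ret)
                        return bytes - size + ret;      /* bytes left uncopied */
                bytes -= size;
                from += size;
                buffer_offset += size;
        }
        return 0;
}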
-
Todd Kjos authored
Add a per-transaction flag to indicate that the buffer must be cleared when the transaction is complete to prevent copies of sensitive data from being preserved in memory. Signed-off-by:
Todd Kjos <tkjos@google.com> Link: https://lore.kernel.org/r/20201120233743.3617529-1-tkjos@google.com Cc: stable <stable@vger.kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Bug: 171501513 Change-Id: Ic9338c85cbe3b11ab6f2bda55dce9964bb48447a (cherry picked from commit 0f966cba95c78029f491b433ea95ff38f414a761) Signed-off-by:
Todd Kjos <tkjos@google.com> (cherry picked from commit 92b2ec21896a13b5aca1425dca1bd9a1df705cd0)
-
Thomas Gleixner authored
The kmap_atomic* interfaces in all architectures are pretty much the same except for post-map operations (flush) and pre- and post-unmap operations. Provide a generic variant for that. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linuxfoundation.org> Cc: Christoph Hellwig <hch@lst.de> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/r/20201103095857.175939340@linutronix.de (cherry picked from commit 298fa1ad5571f59cb3ca5497a9455f36867f065e)
-
Ira Weiny authored
Most architectures define kmap_prot to be PAGE_KERNEL. Let sparc and xtensa define their own and define PAGE_KERNEL as the default if not overridden. [akpm@linux-foundation.org: coding style fixes] Suggested-by:
Christoph Hellwig <hch@infradead.org> Signed-off-by:
Ira Weiny <ira.weiny@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian König <christian.koenig@amd.com> Cc: Chris Zankel <chris@zankel.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20200507150004.1423069-16-ira.weiny@intel.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 090e77e166334b83f555de408df64b9ab394ea08)
-
Ira Weiny authored
To support kmap_atomic_prot(), all architectures need to support protections passed to their kmap_atomic_high() function. Pass protections into kmap_atomic_high() and change the name to kmap_atomic_high_prot() to match. Then define kmap_atomic_prot() as a core function which calls kmap_atomic_high_prot() when needed. Finally, redefine kmap_atomic() as a wrapper of kmap_atomic_prot() with the default kmap_prot exported by the architectures. Signed-off-by:
Ira Weiny <ira.weiny@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian König <christian.koenig@amd.com> Cc: Chris Zankel <chris@zankel.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20200507150004.1423069-11-ira.weiny@intel.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 20b271dfe9d932b02b067a1f7ba9805c5b8d79bd)
-
Ira Weiny authored
Every single architecture (including !CONFIG_HIGHMEM) calls... pagefault_enable(); preempt_enable(); ... before returning from __kunmap_atomic(). Lift this code into the kunmap_atomic() macro. While we are at it rename __kunmap_atomic() to kunmap_atomic_high() to be consistent. [ira.weiny@intel.com: don't enable pagefault/preempt twice] Link: http://lkml.kernel.org/r/20200518184843.3029640-1-ira.weiny@intel.com [akpm@linux-foundation.org: coding style fixes] Signed-off-by:
Ira Weiny <ira.weiny@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian König <christian.koenig@amd.com> Cc: Chris Zankel <chris@zankel.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Guenter Roeck <linux@roeck-us.net> Link: http://lkml.kernel.org/r/20200507150004.1423069-8-ira.weiny@intel.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit abca2500c0c1b20c3e552f259da4c4a99db3b4d1)
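The lifted macro, roughly as it reads after this commit:

#define kunmap_atomic(addr)                                     \
do {                                                            \
        BUILD_BUG_ON(__same_type((addr), struct page *));       \
        kunmap_atomic_high(addr);       /* arch-specific part */\
        pagefault_enable();             /* common part, now in core */\
        preempt_enable();                                       \
} while (0)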
-
Ira Weiny authored
Every arch has the same code to ensure atomic operations and a check for !HIGHMEM page. Remove the duplicate code by defining a core kmap_atomic() which only calls the arch specific kmap_atomic_high() when the page is high memory. [akpm@linux-foundation.org: coding style fixes] Signed-off-by:
Ira Weiny <ira.weiny@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian König <christian.koenig@amd.com> Cc: Chris Zankel <chris@zankel.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20200507150004.1423069-7-ira.weiny@intel.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 78b6d91ec7bbfc5bcc2dd05bb2cf13c9de1dc7cd)
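Sketch of the consolidated entry point (simplified from the commit):

static inline void *kmap_atomic(struct page *page)
{
        preempt_disable();
        pagefault_disable();
        if (!PageHighMem(page))         /* lowmem: already mapped */
                return page_address(page);
        return kmap_atomic_high(page);  /* arch hook, highmem only */
}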
-
Ira Weiny authored
All architectures do exactly the same thing for kunmap(); remove all the duplicate definitions and lift the call to the core. This also has the benefit of changing kunmap() on a number of architectures to be an inline call rather than an actual function. [akpm@linux-foundation.org: fix CONFIG_HIGHMEM=n build on various architectures] Signed-off-by:
Ira Weiny <ira.weiny@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian König <christian.koenig@amd.com> Cc: Chris Zankel <chris@zankel.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20200507150004.1423069-5-ira.weiny@intel.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit e23c45976f82ac789469c37e4d5a72ea2ce30bba)
-
threader authored
-
Ira Weiny authored
The kmap code for all the architectures is almost 100% identical. Lift the common code to the core. Use ARCH_HAS_KMAP_FLUSH_TLB to indicate if an arch defines kmap_flush_tlb() and call it if needed. This also has the benefit of changing kmap() on a number of architectures to be an inline call rather than an actual function. Signed-off-by:
Ira Weiny <ira.weiny@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian König <christian.koenig@amd.com> Cc: Chris Zankel <chris@zankel.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20200507150004.1423069-4-ira.weiny@intel.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 525aaf9bad00e7454b9f9b3873e92795afb59f8e)
-
Khalid Aziz authored
Some architectures can support metadata for memory pages and when a page is copied, its metadata must also be copied. Sparc processors from M7 onwards support metadata for memory pages. This metadata provides tag based protection for access to memory pages. To maintain this protection, the tag data must be copied to the new page when a page is migrated across NUMA nodes. This patch allows arch specific code to override default copy_highpage() and copy metadata along with page data upon migration. Signed-off-by:
Khalid Aziz <khalid.aziz@oracle.com> Cc: Khalid Aziz <khalid@gonehiking.org> Reviewed-by:
Anthony Yznaga <anthony.yznaga@oracle.com> Acked-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
David S. Miller <davem@davemloft.net> (cherry picked from commit a4602b62d9fdea41412ba765bbf32ecfc2b6a94c)
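The override hook follows the usual ifndef pattern; roughly:

/* arch code that must copy page metadata defines its own
 * copy_highpage() and sets __HAVE_ARCH_COPY_HIGHPAGE */
#ifndef __HAVE_ARCH_COPY_HIGHPAGE
static inline void copy_highpage(struct page *to, struct page *from)
{
        char *vfrom = kmap_atomic(from);
        char *vto = kmap_atomic(to);

        copy_page(vto, vfrom);          /* generic: page data only */
        kunmap_atomic(vto);
        kunmap_atomic(vfrom);
}
#endif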
-
David Hildenbrand authored
The existing code relies on pagefault_disable() implicitly disabling preemption, so that no schedule will happen between kmap_atomic() and kunmap_atomic(). Let's make this explicit, to prepare for pagefault_disable() not touching preemption anymore. Reviewed-and-tested-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Cc: David.Laight@ACULAB.COM Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: airlied@linux.ie Cc: akpm@linux-foundation.org Cc: benh@kernel.crashing.org Cc: bigeasy@linutronix.de Cc: borntraeger@de.ibm.com Cc: daniel.vetter@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: hocko@suse.cz Cc: hughd@google.com Cc: mst@redhat.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: schwidefsky@de.ibm.com Cc: yang.shi@windriver.com Link: http://lkml.kernel.org/r/1431359540-32227-5-git-send-email-dahi@linux.vnet.ibm.com Signed-off-by:
Ingo Molnar <mingo@kernel.org> (cherry picked from commit 2cb7c9cb426660b5ed58b643d9e7dd5d50ba901f)
-
Chintan Pandya authored
__kmap_atomic_idx is a per-CPU variable. Each CPU can use KM_TYPE_NR entries from FIXMAP, i.e. from 0 to KM_TYPE_NR - 1. Allowing __kmap_atomic_idx to overshoot to KM_TYPE_NR can mess up the next CPU's 0th entry, which is a bug. Hence BUG_ON if __kmap_atomic_idx >= KM_TYPE_NR. Fix the off-by-one in this test. Signed-off-by:
Chintan Pandya <cpandya@codeaurora.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 1d352bfd41e8219cdf9bebe79677700bdc38b540)
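The check after the fix, roughly (valid indices are 0..KM_TYPE_NR-1):

static inline int kmap_atomic_idx_push(void)
{
        int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;

        BUG_ON(idx >= KM_TYPE_NR);      /* was: idx > KM_TYPE_NR */
        return idx;
}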
-
threader authored
-
- 07 Aug, 2021 3 commits
-
syphyr authored
Sdcardfs needs to wait for the file to finish writing before returning an error. Backported from the 3.18 kernel to 3.10. Change-Id: I0fbdfd9a4c46ad34b8826099d9e3b255289d4794
-
Daniel Rosenberg authored
This adds the nocache mount option, which will cause sdcardfs to always drop dentries that are not in use, preventing cached entries from holding on to lower dentries, which could cause strange behavior when bypassing the sdcardfs layer and directly changing the lower fs. Change-Id: I70268584a20b989ae8cfdd278a2e4fa1605217fb Signed-off-by:
Daniel Rosenberg <drosen@google.com> [basilgello: imply 5dd43fe0853d27266768781a530d2862dff76170 ] Signed-Off-By:
Vasyl Gello <vasek.gello@gmail.com>
-
Hang Lu authored
Add BR_FROZEN_REPLY to binder_return_strings to support the stats function. Fixes: ae28c1be1e54 ("binder: BINDER_GET_FROZEN_INFO ioctl") Acked-by:
Todd Kjos <tkjos@google.com> Signed-off-by:
Hang Lu <hangl@codeaurora.org> Link: https://lore.kernel.org/r/1617961246-4502-2-git-send-email-hangl@codeaurora.org Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 005169157448ca41eff8716d79dc1b8f158229d2 git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git char-misc-next) Change-Id: Ib12e3f1dc1a389c9b4d5e9f60bd740d269dadf94 Signed-off-by:
Hang Lu <hangl@codeaurora.org> (cherry picked from commit 6c3c18a4c9f62d186e767b7e55c4f86769c77bf4)
-
- 03 Aug, 2021 7 commits
-
threader authored
-
Mrinal Pandey authored
Remove braces for both if and else block as suggested by checkpatch. Signed-off-by:
Mrinal Pandey <mrinalmni@gmail.com> Link: https://lore.kernel.org/r/20200724131403.dahfhdwa3wirzkxj@mrinalpandey Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 8df5b9492202e9cac9917465e945fcf478d55404)
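An illustrative (hypothetical) before/after for the checkpatch rule in question:

/* before: braces around single-statement branches */
static int before(int err)
{
        if (err) {
                return err;
        } else {
                return 0;
        }
}

/* after: checkpatch-clean */
static int after(int err)
{
        if (err)
                return err;
        else
                return 0;
}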
-
Jann Horn authored
binder_mmap() tries to prevent the creation of overly big binder mappings by silently truncating the size of the VMA to 4MiB. However, this violates the API contract of mmap(). If userspace attempts to create a large binder VMA, and later attempts to unmap that VMA, it will call munmap() on a range beyond the end of the VMA, which may have been allocated to another VMA in the meantime. This can lead to userspace memory corruption.

The following sequence of calls leads to a segfault without this commit:

int main(void)
{
        int binder_fd = open("/dev/binder", O_RDWR);
        if (binder_fd == -1)
                err(1, "open binder");
        void *binder_mapping = mmap(NULL, 0x800000UL, PROT_READ,
                                    MAP_SHARED, binder_fd, 0);
        if (binder_mapping == MAP_FAILED)
                err(1, "mmap binder");
        void *data_mapping = mmap(NULL, 0x400000UL, PROT_READ|PROT_WRITE,
                                  MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        if (data_mapping == MAP_FAILED)
                err(1, "mmap data");
        munmap(binder_mapping, 0x800000UL);
        *(char *)data_mapping = 1;
        return 0;
}

Cc: stable@vger.kernel.org
Signed-off-by:
Jann Horn <jannh@google.com> Acked-by:
Todd Kjos <tkjos@google.com> Acked-by:
Christian Brauner <christian.brauner@ubuntu.com> Link: https://lore.kernel.org/r/20191016150119.154756-1-jannh@google.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 45d02f79b539073b76077836871de6b674e36eb4)
-
Yangtao Li authored
We already have DEFINE_SHOW_ATTRIBUTE. There is no need to define such a macro, so remove BINDER_DEBUG_ENTRY. Signed-off-by:
Yangtao Li <tiny.windzz@gmail.com> Acked-by:
Todd Kjos <tkjos@android.com> Reviewed-by:
Joey Pabalinas <joeypabalinas@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit c13e0a5288195aadec1e53af7a48ea8dae971416)
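For reference, the generic macro's usage pattern (hypothetical "state" file; DEFINE_SHOW_ATTRIBUTE comes from <linux/seq_file.h>):

static int state_show(struct seq_file *m, void *unused)
{
        seq_puts(m, "example state\n");
        return 0;
}
DEFINE_SHOW_ATTRIBUTE(state);   /* generates state_open() and state_fops */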
-
Jens Axboe authored
Just one caller of this, and just use filp_close() there manually. This is important to allow async close/removal of the fd. Signed-off-by:
Jens Axboe <axboe@kernel.dk> (cherry picked from commit 6e802a4ba056a6f2f51ac9d54eead3ed6f9829a2)
-
Todd Kjos authored
44d8047f1d8 ("binder: use standard functions to allocate fds") exposed a pre-existing issue in the binder driver. fdget() is used in ksys_ioctl() as a performance optimization. One of the rules associated with fdget() is that ksys_close() must not be called between the fdget() and the fdput(). There is a case where this requirement is not met in the binder driver, which results in the reference count dropping to 0 when the device is still in use. This can result in use-after-free or other issues. If userspace has passed a file descriptor for the binder driver using a BINDER_TYPE_FDA object, then ksys_close() is called on it when handling a binder_ioctl(BC_FREE_BUFFER) command. This violates the assumptions for using fdget(). The problem is fixed by deferring the close using task_work_add(). A new variant of __close_fd() was created that returns a struct file with a reference. The fput() is deferred instead of using ksys_close(). Fixes: 44d8047f1d87a ("binder: use standard functions to allocate fds") Suggested-by:
Al Viro <viro@zeniv.linux.org.uk> Signed-off-by:
Todd Kjos <tkjos@google.com> Cc: stable <stable@vger.kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit 80cd795630d6526ba729a089a435bf74a57af927)
-
Todd Kjos authored
The binder driver makes the assumption that the proc->context pointer is invariant after initialization (as documented in the kerneldoc header for struct proc). However, in commit f0fe2c0f050d ("binder: prevent UAF for binderfs devices II") proc->context is set to NULL during binder_deferred_release(). Another proc was in the middle of setting up a transaction to the dying process and crashed on a NULL pointer deref on "context", which is a local set to &proc->context:

        new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;

Here's the stack:

[ 5237.855435] Call trace:
[ 5237.855441] binder_get_ref_for_node_olocked+0x100/0x2ec
[ 5237.855446] binder_inc_ref_for_node+0x140/0x280
[ 5237.855451] binder_translate_binder+0x1d0/0x388
[ 5237.855456] binder_transaction+0x2228/0x3730
[ 5237.855461] binder_thread_write+0x640/0x25bc
[ 5237.855466] binder_ioctl_write_read+0xb0/0x464
[ 5237.855471] binder_ioctl+0x30c/0x96c
[ 5237.855477] do_vfs_ioctl+0x3e0/0x700
[ 5237.855482] __arm64_sys_ioctl+0x78/0xa4
[ 5237.855488] el0_svc_common+0xb4/0x194
[ 5237.855493] el0_svc_handler+0x74/0x98
[ 5237.855497] el0_svc+0x8/0xc

The fix is to move the kfree of the binder_device to binder_free_proc() so the binder_device is freed when we know there are no references remaining on the binder_proc. Fixes: f0fe2c0f050d ("binder: prevent UAF for binderfs devices II") Acked-by:
Christian Brauner <christian.brauner@ubuntu.com> Signed-off-by:
Todd Kjos <tkjos@google.com> Cc: stable <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20200622200715.114382-1-tkjos@google.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit d35d3660e065b69fdb8bf512f3d899f350afce52)
-
- 31 Jul, 2021 2 commits