  1. 24 Jun, 2018 10 commits
  2. 13 Jun, 2018 1 commit
  3. 08 Jun, 2018 1 commit
  4. 07 Jun, 2018 3 commits
  5. 01 Jun, 2018 1 commit
  6. 31 May, 2018 1 commit
  7. 30 May, 2018 23 commits
    • staging: Update binder driver · f5c99a55
      Dyneteve authored
      Change-Id: I948b327cc7c40cd2bea739f3dae66e93c7233655
    • config: Regenerate · 65cef798
      Dyneteve authored
      Change-Id: I88e6521f024324b7ccba3d031906ceddffd80f41
    • firmware_class: make firmware caching configurable · 417df88c
      Tim Murray authored
      Because firmware caching generates uevent messages that are sent over a
      netlink socket, it can prevent suspend on many platforms. It's also not
      always useful, so make it a configurable option.
      bug 32180327
      Change-Id: I1250512b27edb56caa78d536e5ccf1fb669476ad
      Signed-off-by: Ajay Dudani <>
    • BACKPORT: Sanitize 'move_pages()' permission checks · 6ec69ce8
      Marissa Wall authored
      The 'move_pages()' system call was introduced long long ago with the
      same permission checks as for sending a signal (except using
      CAP_SYS_NICE instead of CAP_SYS_KILL for the overriding capability).
      That turns out to not be a great choice - while the system call really
      only moves physical page allocations around (and you need other
      capabilities to do a lot of it), you can check the return value to map
      out some of the virtual address choices and defeat ASLR of a binary that
      still shares your uid.
      So change the access checks to the more common 'ptrace_may_access()'
      model instead.
      This tightens the access checks for the uid, and also effectively
      changes the CAP_SYS_NICE check to CAP_SYS_PTRACE, but it's unlikely that
      anybody really _uses_ this legacy system call any more (we have better
      NUMA placement models these days), so I expect nobody to notice.
      Famous last words.
      Reported-by: Otto Ebeling <>
      Acked-by: Eric W. Biederman <>
      Cc: Willy Tarreau <>
      Signed-off-by: Linus Torvalds <>
      cherry-picked from: 197e7e521384a23b9e585178f3f11c9fa08274b9
      This branch does not have the PTRACE_MODE_REALCREDS flag but its
      default behavior is the same as PTRACE_MODE_REALCREDS. So use
      Change-Id: I75364561d91155c01f78dd62cdd41c5f0f418854
    • UPSTREAM: packet: hold bind lock when rebinding to fanout hook · b177208c
      Willem de Bruijn authored
      [ Upstream commit 008ba2a13f2d04c947adc536d19debb8fe66f110 ]
      Packet socket bind operations must hold the po->bind_lock. This keeps
      po->running consistent with whether the socket is actually on a ptype
      list to receive packets.
      fanout_add unbinds a socket and its packet_rcv/tpacket_rcv call, then
      binds the fanout object to receive through packet_rcv_fanout.
      Make it hold the po->bind_lock when testing po->running and rebinding.
      Else, it can race with other rebind operations, such as that in
      packet_set_ring from packet_rcv to tpacket_rcv. Concurrent updates
      can result in a socket being added to a fanout group twice, causing
      use-after-free KASAN bug reports, among others.
      Reported independently by both trinity and syzkaller.
      Verified that the syzkaller reproducer passes after this patch.
      Fixes: dc99f600 ("packet: Add fanout support.")
      Change-Id: I6817d1f12654dd682a962cfd4645006a7315360d
      Reported-by: nixioaming <>
      Signed-off-by: Willem de Bruijn <>
      Signed-off-by: David S. Miller <>
      Signed-off-by: Greg Kroah-Hartman <>
      Signed-off-by: Marissa Wall <>
    • BACKPORT: packet: in packet_do_bind, test fanout with bind_lock held · 64e201ec
      Marissa Wall authored
      [ Upstream commit 4971613c1639d8e5f102c4e797c3bf8f83a5a69e ]
      Once a socket has po->fanout set, it remains a member of the group
      until it is destroyed. The prot_hook must be constant and identical
      across sockets in the group.
      If fanout_add races with packet_do_bind between the test of po->fanout
      and taking the lock, the bind call may make type or dev inconsistent
      with that of the fanout group.
      Hold po->bind_lock when testing po->fanout to avoid this race.
      I had to introduce artificial delay (local_bh_enable) to actually
      observe the race.
      Fixes: dc99f600 ("packet: Add fanout support.")
      Change-Id: I899f9b6bcbd1d4b033388ef22c472857574bfc30
      Signed-off-by: Willem de Bruijn <>
      Reviewed-by: Eric Dumazet <>
      Signed-off-by: David S. Miller <>
      Signed-off-by: Greg Kroah-Hartman <>
      Signed-off-by: Marissa Wall <>
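      Both fanout fixes apply the same pattern: the test of the socket's state and the rebind that depends on it must happen under one lock, never as separate steps. A minimal user-space model of that pattern (the names fanout_add, running, and in_group are illustrative, not the kernel's actual structures):

      ```c
      #include <assert.h>
      #include <pthread.h>
      #include <stdio.h>

      /* Toy model of the fanout race: two threads race to add the same
       * "socket" to a fanout group.  Holding the lock across the test
       * *and* the add guarantees the socket is added at most once. */
      static pthread_mutex_t bind_lock = PTHREAD_MUTEX_INITIALIZER;
      static int in_group;   /* how many times the socket was added */
      static int running;    /* models po->running                  */

      static void *fanout_add(void *arg)
      {
          (void)arg;
          pthread_mutex_lock(&bind_lock);   /* the fix: test under the lock */
          if (running && in_group == 0)
              in_group++;                   /* bind to the fanout hook      */
          pthread_mutex_unlock(&bind_lock);
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          running = 1;
          pthread_create(&a, NULL, fanout_add, NULL);
          pthread_create(&b, NULL, fanout_add, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          assert(in_group == 1);            /* never added twice */
          printf("in_group = %d\n", in_group);
          return 0;
      }
      ```

      Without the lock spanning the check and the increment, both threads can observe `in_group == 0` and add the socket twice, which is exactly the double-membership that produced the use-after-free reports.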
    • mqueue: fix a use-after-free in sys_mq_notify() · a604464d
      Cong Wang authored
      commit f991af3daabaecff34684fd51fac80319d1baad1 upstream.
      The retry logic for netlink_attachskb() inside sys_mq_notify()
      is nasty and vulnerable:
      1) The sock refcnt is already released when retry is needed
      2) The fd is controllable by user-space because we already
         release the file refcnt
      so when we retry but the fd has just been closed by user-space
      during this small window, we end up calling netlink_detachskb()
      on the error path, which releases the sock again; later, when
      user-space closes this socket, a use-after-free could be triggered.
      Setting 'sock' to NULL here should be sufficient to fix it.
      Change-Id: I2a2d5b9330a8b79cbe1303b7b8ca434c2824f0fe
      Reported-by: GeneBlue <>
      Signed-off-by: Cong Wang <>
      Cc: Andrew Morton <>
      Cc: Manfred Spraul <>
      Signed-off-by: Linus Torvalds <>
      Signed-off-by: Ben Hutchings <>
      Signed-off-by: Kevin F. Haggerty <>
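      The one-line fix follows a standard discipline: once a reference has been dropped, clear the local pointer so a later error path cannot drop it again. A small user-space sketch of that pattern (sock_obj and sock_put are illustrative stand-ins for the kernel's socket refcounting):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Toy model of the sys_mq_notify() fix: the retry path has already
       * dropped its reference, so the local pointer is cleared to keep
       * the error path from dropping it a second time (the double
       * release is what caused the use-after-free). */
      struct sock_obj { int refcnt; };

      static void sock_put(struct sock_obj *s)
      {
          assert(s->refcnt > 0 && "double release detected");
          s->refcnt--;
      }

      int main(void)
      {
          struct sock_obj s = { .refcnt = 1 };
          struct sock_obj *sock = &s;

          /* retry path: reference already dropped... */
          sock_put(sock);
          sock = NULL;          /* the one-line fix */

          /* ...error path later: only release if we still hold a ref */
          if (sock)
              sock_put(sock);

          printf("refcnt = %d\n", s.refcnt);
          return 0;
      }
      ```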
    • ANDROID: sound: rawmidi: Hold lock around realloc · 53769980
      Daniel Rosenberg authored
      The SNDRV_RAWMIDI_STREAM_{OUTPUT,INPUT} ioctls may reallocate
      runtime->buffer while other kernel threads are accessing it.  If the
      underlying krealloc() call frees the original buffer, then this can turn
      into a use-after-free.
      Most of these accesses happen while the thread is holding runtime->lock,
      and can be fixed by just holding the same lock while replacing
      runtime->buffer, however we can't hold this spinlock while
      snd_rawmidi_kernel_{read1,write1} are copying to/from userspace.  We
      need to add and acquire a new mutex to prevent this from happening
      concurrently with reallocation.  We hold this mutex during the entire
      reallocation process, to also prevent multiple concurrent reallocations
      leading to a double-free.
      Signed-off-by: Daniel Rosenberg <>
      bug: 64315347
      Change-Id: I05764d4f1a38f373eb7c0ac1c98607ee5ff0eded
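      The core of the fix is serializing every buffer resize behind one mutex, since krealloc() may free the old buffer out from under concurrent users. A minimal user-space sketch of the pattern (realloc_mutex and resize_buffer are illustrative names, not the rawmidi API):

      ```c
      #include <assert.h>
      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Sketch of the rawmidi fix: every resize of the shared buffer
       * takes a dedicated mutex, so two concurrent resizes can never
       * double-free, and no user observes a stale pointer mid-realloc. */
      static pthread_mutex_t realloc_mutex = PTHREAD_MUTEX_INITIALIZER;
      static char *buffer;
      static size_t buffer_size;

      static int resize_buffer(size_t new_size)
      {
          pthread_mutex_lock(&realloc_mutex);
          char *nbuf = realloc(buffer, new_size);  /* may free the old buffer */
          if (!nbuf) {
              pthread_mutex_unlock(&realloc_mutex);
              return -1;
          }
          buffer = nbuf;                           /* publish under the lock */
          buffer_size = new_size;
          pthread_mutex_unlock(&realloc_mutex);
          return 0;
      }

      int main(void)
      {
          buffer = malloc(16);
          buffer_size = 16;
          strcpy(buffer, "midi");
          assert(resize_buffer(4096) == 0);
          printf("%s %zu\n", buffer, buffer_size);
          free(buffer);
          return 0;
      }
      ```

      A mutex (rather than the existing spinlock) is the right tool here because, as the message notes, the resize path must coexist with code that copies to/from userspace and may sleep.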
    • BACKPORT: FROMLIST: pids: make task_tgid_nr_ns() safe · 4dc0b2a5
      Oleg Nesterov authored
      This was reported many times, and this was even mentioned in commit
      52ee2dfd "pids: refactor vnr/nr_ns helpers to make them safe" but
      somehow nobody bothered to fix the obvious problem: task_tgid_nr_ns()
      is not safe because task->group_leader points to nowhere after the
      exiting task passes exit_notify(), rcu_read_lock() can not help.
      We really need to change __unhash_process() to nullify group_leader,
      parent, and real_parent, but this needs some cleanups. Until then we
      can turn task_tgid_nr_ns() into another user of __task_pid_nr_ns() and
      fix the problem.
      Reported-by: Troy Kensinger <>
      Signed-off-by: Oleg Nesterov <>
      Acked-by: Peter Zijlstra (Intel) <>
      Bug: 31495866
      Change-Id: I5e67b02a77e805f71fa3a787249f13c1310f02e2
      (cherry picked from commit 48de73e34a5760591758ff7ffdc79bc464e60f69)
    • KEYS: prevent KEYCTL_READ on negative key · 629d4b0e
      Eric Biggers authored
      commit 37863c43b2c6464f252862bf2e9768264e961678 upstream.
      Because keyctl_read_key() looks up the key with no permissions
      requested, it may find a negatively instantiated key.  If the key is
      also possessed, we went ahead and called ->read() on the key.  But the
      key payload will actually contain the ->reject_error rather than the
      normal payload.  Thus, the kernel oopses trying to read the
      user_key_payload from memory address (int)-ENOKEY = 0x00000000ffffff82.
      Fortunately the payload data is stored inline, so it shouldn't be
      possible to abuse this as an arbitrary memory read primitive...
      Reproducer:
          keyctl new_session
          keyctl request2 user desc '' @s
          keyctl read $(keyctl show | awk '/user: desc/ {print $1}')
      It causes a crash like the following:
           BUG: unable to handle kernel paging request at 00000000ffffff92
           IP: user_read+0x33/0xa0
           PGD 36a54067 P4D 36a54067 PUD 0
           Oops: 0000 [#1] SMP
           CPU: 0 PID: 211 Comm: keyctl Not tainted 4.14.0-rc1 #337
           Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-20170228_101828-anatol 04/01/2014
           task: ffff90aa3b74c3c0 task.stack: ffff9878c0478000
           RIP: 0010:user_read+0x33/0xa0
           RSP: 0018:ffff9878c047bee8 EFLAGS: 00010246
           RAX: 0000000000000001 RBX: ffff90aa3d7da340 RCX: 0000000000000017
           RDX: 0000000000000000 RSI: 00000000ffffff82 RDI: ffff90aa3d7da340
           RBP: ffff9878c047bf00 R08: 00000024f95da94f R09: 0000000000000000
           R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000000
           R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
           FS:  00007f58ece69740(0000) GS:ffff90aa3e200000(0000) knlGS:0000000000000000
           CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
           CR2: 00000000ffffff92 CR3: 0000000036adc001 CR4: 00000000003606f0
           Call Trace:
           RIP: 0033:0x7f58ec787bb9
           RSP: 002b:00007ffc8d401678 EFLAGS: 00000206 ORIG_RAX: 00000000000000fa
           RAX: ffffffffffffffda RBX: 00007ffc8d402800 RCX: 00007f58ec787bb9
           RDX: 0000000000000000 RSI: 00000000174a63ac RDI: 000000000000000b
           RBP: 0000000000000004 R08: 00007ffc8d402809 R09: 0000000000000020
           R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc8d402800
           R13: 00007ffc8d4016e0 R14: 0000000000000000 R15: 0000000000000000
           Code: e5 41 55 49 89 f5 41 54 49 89 d4 53 48 89 fb e8 a4 b4 ad ff 85 c0 74 09 80 3d b9 4c 96 00 00 74 43 48 8b b3 20 01 00 00 4d 85 ed <0f> b7 5e 10 74 29 4d 85 e4 74 24 4c 39 e3 4c 89 e2 4c 89 ef 48
           RIP: user_read+0x33/0xa0 RSP: ffff9878c047bee8
           CR2: 00000000ffffff92
      Change-Id: Icb005a3f0f766690ff7303339372c369d2e2ca69
      Fixes: 61ea0c0ba904 ("KEYS: Skip key state checks when checking for possession")
      Signed-off-by: Eric Biggers <>
      Signed-off-by: David Howells <>
      Signed-off-by: Greg Kroah-Hartman <>
      Signed-off-by: Kevin F. Haggerty <>
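      The underlying hazard is that a negatively instantiated key stores an error code where the payload would normally live, so any read path must check the key's state before dereferencing. A user-space model of that check (the struct layout and key_read are illustrative, not the keyrings API):

      ```c
      #include <assert.h>
      #include <stdio.h>

      #define ENOKEY 126  /* matches the Linux errno value */

      /* Toy model of the KEYCTL_READ fix: a negatively instantiated key
       * stores ->reject_error where a payload pointer would normally
       * live, so reading must check the state before touching it. */
      struct key {
          int negative;                 /* negatively instantiated?  */
          union {
              const char *data;         /* valid only when !negative */
              long reject_error;        /* valid only when negative  */
          } payload;
      };

      static long key_read(const struct key *k, const char **out)
      {
          if (k->negative)              /* the fix: bail out early   */
              return -ENOKEY;
          *out = k->payload.data;
          return 0;
      }

      int main(void)
      {
          struct key neg = { .negative = 1, .payload.reject_error = -ENOKEY };
          const char *data = NULL;
          long ret = key_read(&neg, &data);
          assert(ret == -ENOKEY && data == NULL);  /* no oops on a garbage pointer */
          printf("ret = %ld\n", ret);
          return 0;
      }
      ```

      Without the early check, the error value would be interpreted as a pointer, which is exactly the 0x00000000ffffff82 dereference in the oops above.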
    • msm: vidc: Fix broken debugfs creation error checks and error paths · be809edd
      Sultanxda authored
      The debugfs helper functions do not always return NULL when they fail;
      instead, they can return an error number cast as a pointer, so that
      their users have the option to determine the exact cause of failure.
      Use the IS_ERR_OR_NULL() helper when checking for debugfs errors to fix the
      error checks.
      Signed-off-by: Sultanxda <>
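      The error-pointer convention this fix relies on can be modeled entirely in user space; the macro definitions below mirror the kernel's include/linux/err.h, and the point is that a plain NULL check misses the error-pointer case:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* User-space model of the kernel's error-pointer convention: a
       * failing helper may return NULL *or* a small negative errno
       * encoded as a pointer, so a plain NULL check is not enough. */
      #define MAX_ERRNO 4095
      #define IS_ERR_VALUE(x)   ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)
      #define ERR_PTR(err)      ((void *)(long)(err))
      #define IS_ERR_OR_NULL(p) (!(p) || IS_ERR_VALUE((unsigned long)(p)))

      int main(void)
      {
          void *ok = &(int){0};          /* a real pointer            */
          void *null_err = NULL;         /* failure reported as NULL  */
          void *enomem = ERR_PTR(-12);   /* -ENOMEM cast to a pointer */

          assert(!IS_ERR_OR_NULL(ok));
          assert(IS_ERR_OR_NULL(null_err));
          assert(IS_ERR_OR_NULL(enomem)); /* a NULL check would miss this */
          printf("checks ok\n");
          return 0;
      }
      ```

      Checking only `if (!dentry)` would treat the `ERR_PTR(-ENOMEM)` case as success, which is the broken error check the commit fixes.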
    • soc: qcom: watchdog_v2: Fix memory leaks when memory_dump_v2 isn't built · 7682b53b
      Sultanxda authored
      When the memory_dump_v2.c driver isn't built (CONFIG_MSM_MEMORY_DUMP_V2=n),
      an inline version of msm_dump_data_register() will be used that does
      nothing. This means that all memory allocated with the intention of going
      to msm_dump_data_register() will be leaked.
      Fix this by ignoring the entire code block when CONFIG_MSM_MEMORY_DUMP_V2
      is disabled.
      Signed-off-by: Sultanxda <>
      Signed-off-by: Francisco Franco <>
    • net: ipc_router: Fix memory leaks when releasing a remote port · 357c1423
      Sultanxda authored
      In ipc_router_set_conn(), memory is allocated to a local port for the
      conn_info linked list, but the memory for the local ports is never
      freed when the remote port is released.
      Delete the local ports and free all of their memory when releasing a remote
      port to fix the memory leaks. This is just like how memory is freed for the
      remote_tx ports.
      Signed-off-by: Sultanxda <>
      Signed-off-by: Francisco Franco <>
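      The leak is the classic linked-list variety: the list head is dropped without walking the nodes. A minimal model of the corrected teardown (struct conn_info exists in the driver, but the fields and helpers here are illustrative):

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Sketch of the ipc_router fix: a remote port keeps a linked list
       * of conn_info nodes; releasing the port must walk the list and
       * free every node, not just drop the head pointer. */
      struct conn_info {
          int local_port;
          struct conn_info *next;
      };

      static int free_conn_list(struct conn_info *head)
      {
          int freed = 0;
          while (head) {
              struct conn_info *next = head->next;  /* save before free */
              free(head);
              head = next;
              freed++;
          }
          return freed;
      }

      int main(void)
      {
          struct conn_info *head = NULL;
          for (int i = 0; i < 3; i++) {         /* model ipc_router_set_conn() */
              struct conn_info *c = malloc(sizeof(*c));
              c->local_port = i;
              c->next = head;
              head = c;
          }
          assert(free_conn_list(head) == 3);    /* model releasing the port */
          printf("freed all\n");
          return 0;
      }
      ```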
    • ANDROID: Fix missing header file for get_cmdline() call · ba0a3959
      Dmitry Shmidt authored
      commit ab6f11cf39b6a605b0e644ac1fde21d517d56d40
          uid_sys_stats: log task io with a debug flag
      causes the following error on some builds:
      error: implicit declaration of function 'get_cmdline'
      Change-Id: I41adaf5af999b94bac41e3f17a67cfc88400ee31
      Signed-off-by: Dmitry Shmidt <>
      Signed-off-by: Harsh Shandilya <>
    • uid_sys_stats: log task io with a debug flag · 174e2f67
      Yang Jin authored
      Add a hashmap inside each uid_entry to keep track of task name and io.
      Task full name is a combination of thread and process name.
      Bug: 63739275
      Change-Id: I30083b757eaef8c61e55a213a883ce8d0c9cf2b1
      Signed-off-by: Yang Jin <>
    • uid_sys_stats: make hash_table static · a974dc64
      Greg Kroah-Hartman authored
      Having a global variable called "hash_table" is not a good idea; it
      should be static, so mark it as such.
      Signed-off-by: Greg Kroah-Hartman <>
      Signed-off-by: Harsh Shandilya <>
    • power: msm-core: Compile out temperature polling · f5e7b03d
      Sultanxda authored
      The temperature polling used here is expensive; according to top, it
      regularly eats up 10% of the CPU. Frequently spending 10% of CPU time on an
      *energy efficiency* feature is nonsense, so we'll just use the default
      temperature for power calculations so that we can still make use of
      power-aware scheduling.
      **[@dev-harsh1998]** : adapt for msm8916
      Signed-off-by: Sultanxda <>
    • writeback: Do not sort b_io list only because of block device inode · 9f9663a5
      Jan Kara authored
      It is very likely that the block device inode will be part of the BDI
      dirty list as well. However it doesn't make sense to sort inodes on
      the b_io list just because of this inode (as it contains buffers all
      over the device anyway). So save some CPU cycles, which is valuable
      since we hold the relatively contended wb->list_lock.
      Change-Id: If71669c354c7dc17610c6fcc79bd76fd6bf78220
      Signed-off-by: Jan Kara <>
    • writeback: fix race that cause writeback hung · 54e9f1d7
      Junxiao Bi authored
      There is a race between marking an inode dirty and the writeback
      thread; see the following scenario.  In this case, the writeback
      thread will not run even though there is dirty_io.
      __mark_inode_dirty()                                          bdi_writeback_workfn()
      	...                                                       	...
      	if (bdi_cap_writeback_dirty(bdi)) {
      	    <<< assume wb has dirty_io, so wakeup_bdi is false.
      	    <<< the following inode_dirty also have wakeup_bdi false.
      	    if (!wb_has_dirty_io(&bdi->wb))
      		    wakeup_bdi = true;
      	                                                            <<< assume last dirty_io is removed here.
      	                                                            pages_written = wb_do_writeback(wb);
      	                                                            <<< work_list empty and wb has no dirty_io,
      	                                                            <<< delayed_work will not be queued.
      	                                                            if (!list_empty(&bdi->work_list) ||
      	                                                                (wb_has_dirty_io(wb) && dirty_writeback_interval))
      	                                                                queue_delayed_work(bdi_wq, &wb->dwork,
      	                                                                    msecs_to_jiffies(dirty_writeback_interval * 10));
      	inode->dirtied_when = jiffies;
      	<<< new dirty_io is added.
      	list_move(&inode->i_wb_list, &bdi->wb.b_dirty);
      	<<< though there is dirty_io, but wakeup_bdi is false,
      	<<< so writeback thread will not be waked up and
      	<<< the new dirty_io will not be flushed.
      	if (wakeup_bdi)
      Writeback will not run until a new flush work is queued.  This may
      cause a lot of dirty pages to stay in memory for a long time.
      Change-Id: I973fcba5381881a003a035ffff48f64348660079
      Signed-off-by: Junxiao Bi <>
      Reviewed-by: Jan Kara <>
      Cc: Fengguang Wu <>
      Signed-off-by: Andrew Morton <>
      Signed-off-by: Linus Torvalds <>
    • mm: enable laptop mode by default · 679705c2
      Tomoms authored
      Change-Id: I5eeac24b0a8645eefeacd0535398e7e96b71ae21
      Signed-off-by: Albert I <>
    • sched/fair: Implement fast idling of CPUs when the system is partially loaded · 31530636
      Tim Chen authored
      When a system is lightly loaded (i.e. no more than 1 job per cpu),
      attempting to pull a job to a cpu before putting it to idle is
      unnecessary and can be skipped.  This patch adds an indicator so the
      scheduler can know when there's no more than 1 active job on any CPU
      in the system and skip needless job pulls.
      On a 4 socket machine with a request/response kind of workload from
      clients, we saw about 0.13 msec delay when we go through a full load
      balance to try pull job from all the other cpus.  While 0.1 msec was
      spent on processing the request and generating a response, the 0.13 msec
      load balance overhead was actually more than the actual work being done.
      This overhead can be skipped much of the time for lightly loaded systems.
      With this patch, we tested with a netperf request/response workload that
      has the server busy with half the cpus in a 4 socket system.  We found
      the patch eliminated 75% of the load balance attempts before idling a cpu.
      The overhead of setting/clearing the indicator is low as we already
      gather the necessary info while we call add_nr_running() and
      update_sd_lb_stats().
      We switch to full load balancing immediately if any cpu gets more
      than one job on its run queue in add_nr_running().  We'll clear the
      indicator to avoid load balancing when we detect no cpus have more
      than one job when we scan the work queues in update_sg_lb_stats().
      We are aggressive in turning on the load balance and opportunistic
      in skipping the load balance.
      Change-Id: Ief3b6af64a99ce47a8c72a25151c71927292ac8b
      Signed-off-by: Tim Chen <>
      Acked-by: Rik van Riel <>
      Acked-by: Jason Low <>
      Cc: "Paul E.McKenney" <>
      Cc: Andrew Morton <>
      Cc: Davidlohr Bueso <>
      Cc: Alex Shi <>
      Cc: Michel Lespinasse <>
      Cc: Peter Hurley <>
      Cc: Linus Torvalds <>
      Signed-off-by: Peter Zijlstra <>
      Signed-off-by: Ingo Molnar <>
      Signed-off-by: Albert I <>
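      The indicator described above can be modeled in a few lines: a single "overload" flag set eagerly on enqueue and cleared lazily during the stats scan. This sketch borrows the function names from the commit message but simplifies everything else (per-cpu counters stand in for run queues):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Sketch of the fast-idle indicator: the flag is set as soon as any
       * run queue holds more than one job (aggressive), and cleared only
       * when a scan finds no such queue (opportunistic); idle balancing
       * is skipped while the flag is clear. */
      #define NR_CPUS 4
      static int nr_running[NR_CPUS];
      static int overload;

      static void add_nr_running(int cpu)      /* called on enqueue */
      {
          if (++nr_running[cpu] > 1)
              overload = 1;
      }

      static void update_sg_lb_stats(void)     /* periodic scan */
      {
          for (int cpu = 0; cpu < NR_CPUS; cpu++)
              if (nr_running[cpu] > 1)
                  return;                      /* keep full balancing on */
          overload = 0;
      }

      int main(void)
      {
          add_nr_running(0);
          add_nr_running(1);
          update_sg_lb_stats();
          assert(!overload);    /* <= 1 job per cpu: skip idle balancing */

          add_nr_running(1);    /* second job lands on cpu 1 */
          assert(overload);     /* switch to full load balancing */
          printf("overload = %d\n", overload);
          return 0;
      }
      ```

      The asymmetry (set immediately, clear only on a full scan) is the design choice the commit describes: it is cheap to maintain and errs on the side of balancing rather than leaving work stranded.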