Commit 3b284d45 authored by Asutosh Das

scsi: ufs: fix deadlock between clock scaling and shutdown



There is a deadlock in which the shutdown context waits for a
mutex held by the clock-scaling monitor, which in turn waits for
a mutex held by the shutdown context.
Fix this by not allowing clocks to scale once clock scaling has
been suspended, and by also de-registering the device from the
devfreq framework during shutdown.

The deadlocked call stacks are shown below:
<call stack : init>
-005|current_thread_info(inline)
-005|mutex_set_owner(inline)
-005|mutex_lock
-006|devfreq_monitor_suspend
-007|devfreq_simple_ondemand_handler
-008|devfreq_suspend_device(?)
-009|__ufshcd_suspend_clkscaling(inline)
-009|ufshcd_suspend_clkscaling
-010|ufshcd_shutdown
-011|ufshcd_pltfrm_shutdown(?)
-012|platform_drv_shutdown
-013|device_unlock(inline)
-013|device_shutdown()
-014|kernel_restart_prepare(?)
-015|kernel_restart
-016|SYSC_reboot(inline)
-016|sys_reboot(?, ?, ?)
-017|el0_svc_naked(asm)
-->|exception
-018|NUX:0x538F6C(asm)

<call stack : kworker/u16:4>
-008|rwsem_down_write_failed
    |    sem = -> (
    |      count = -8589934591,
    |      wait_list = (next = , prev = ),
    |      wait_lock = (raw_lock = (owner = 4, next = 4)),
    |      osq = (tail = (counter = 0)),
    |      owner = -> (
    |      comm = "init",
-009|current_thread_info(inline)
-009|rwsem_set_owner(inline)
-009|down_write
-010|ufshcd_clock_scaling_prepare(inline)
-010|ufshcd_devfreq_scale
-011|ufshcd_devfreq_target(?, ?, ?)
-012|update_devfreq
-013|devfreq_monitor
-014|__read_once_size(inline)
-014|static_key_count(inline)
-014|static_key_false(inline)
-014|trace_workqueue_execute_end(inline)
-014|process_one_work
-015|worker_thread
-016|kthread
-017|ret_from_fork(asm)
---|end of frame
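
A minimal sketch of the approach described above, assuming the in-tree
devfreq API (devfreq_suspend_device()/devfreq_remove_device()) and a
per-host flag such as hba->clk_scaling.is_suspended guarded by the SCSI
host lock; the flag name, lock choice, and function bodies here are
illustrative assumptions, not the literal patch:

/*
 * Illustrative sketch only -- not the literal change.
 */
#include <linux/devfreq.h>
#include "ufshcd.h"		/* struct ufs_hba (drivers/scsi/ufs) */

static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
{
	unsigned long flags;

	/* Mark scaling suspended so the monitor refuses further work. */
	spin_lock_irqsave(hba->host->host_lock, flags);
	hba->clk_scaling.is_suspended = true;	/* assumed flag */
	spin_unlock_irqrestore(hba->host->host_lock, flags);

	devfreq_suspend_device(hba->devfreq);
}

static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
{
	unsigned long flags;
	bool suspended;

	spin_lock_irqsave(hba->host->host_lock, flags);
	suspended = hba->clk_scaling.is_suspended;
	spin_unlock_irqrestore(hba->host->host_lock, flags);

	/*
	 * Scaling has been suspended (e.g. by shutdown): bail out instead
	 * of contending for locks the shutdown context may already hold.
	 */
	if (suspended)
		return -EAGAIN;

	/* ... take the scaling lock and do the actual clock scaling ... */
	return 0;
}

static void ufshcd_shutdown_clkscaling(struct ufs_hba *hba)
{
	ufshcd_suspend_clkscaling(hba);

	/* De-register from devfreq so the monitor can no longer run. */
	if (hba->devfreq) {
		devfreq_remove_device(hba->devfreq);
		hba->devfreq = NULL;
	}
}

With scaling marked suspended and the devfreq device removed before the
rest of shutdown proceeds, the kworker path in the second call stack can
no longer race into down_write() against the shutdown context.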

Change-Id: Ic1853ef5143cadd95f0a6df474b35ad45fa918e1
Signed-off-by: Asutosh Das <asutoshd@codeaurora.org>
parent c7835c57