
Commit 4b67157f authored by Rafael J. Wysocki

Merge branch 'pm-core'

* pm-core: (29 commits)
  dmaengine: rcar-dmac: Make DMAC reinit during system resume explicit
  PM / runtime: Allow no callbacks in pm_runtime_force_suspend|resume()
  PM / runtime: Check ignore_children in pm_runtime_need_not_resume()
  PM / runtime: Rework pm_runtime_force_suspend/resume()
  PM / wakeup: Print warn if device gets enabled as wakeup source during sleep
  PM / core: Propagate wakeup_path status flag in __device_suspend_late()
  PM / core: Re-structure code for clearing the direct_complete flag
  PM: i2c-designware-platdrv: Optimize power management
  PM: i2c-designware-platdrv: Use DPM_FLAG_SMART_PREPARE
  PM / mfd: intel-lpss: Use DPM_FLAG_SMART_SUSPEND
  PCI / PM: Use SMART_SUSPEND and LEAVE_SUSPENDED flags for PCIe ports
  PM / wakeup: Add device_set_wakeup_path() helper to control wakeup path
  PM / core: Assign the wakeup_path status flag in __device_prepare()
  PM / wakeup: Do not fail dev_pm_attach_wake_irq() unnecessarily
  PM / core: Direct DPM_FLAG_LEAVE_SUSPENDED handling
  PM / core: Direct DPM_FLAG_SMART_SUSPEND optimization
  PM / core: Add helpers for subsystem callback selection
  PM / wakeup: Drop redundant check from device_init_wakeup()
  PM / wakeup: Drop redundant check from device_set_wakeup_enable()
  PM / wakeup: only recommend "call"ing device_init_wakeup() once
  ...
parents f9b736f6 1131b0a4
+44 −10
@@ -777,17 +777,51 @@ The driver can indicate that by setting ``DPM_FLAG_SMART_SUSPEND`` in
runtime suspend at the beginning of the ``suspend_late`` phase of system-wide
suspend (or in the ``poweroff_late`` phase of hibernation), when runtime PM
has been disabled for it, under the assumption that its state should not change
after that point until the system-wide transition is over.  If that happens, the
driver's system-wide resume callbacks, if present, may still be invoked during
the subsequent system-wide resume transition and the device's runtime power
management status may be set to "active" before enabling runtime PM for it,
so the driver must be prepared to cope with the invocation of its system-wide
resume callbacks back-to-back with its ``->runtime_suspend`` one (without the
intervening ``->runtime_resume`` and so on) and the final state of the device
must reflect the "active" status for runtime PM in that case.
after that point until the system-wide transition is over (the PM core itself
does that for devices whose "noirq", "late" and "early" system-wide PM callbacks
are executed directly by it).  If that happens, the driver's system-wide resume
callbacks, if present, may still be invoked during the subsequent system-wide
resume transition and the device's runtime power management status may be set
to "active" before enabling runtime PM for it, so the driver must be prepared to
cope with the invocation of its system-wide resume callbacks back-to-back with
its ``->runtime_suspend`` one (without the intervening ``->runtime_resume`` and
so on) and the final state of the device must reflect the "active" runtime PM
status in that case.

During system-wide resume from a sleep state it's easiest to put devices into
the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
Refer to that document for more information regarding this particular issue as
[Refer to that document for more information regarding this particular issue as
well as for information on the device runtime power management framework in
general.
general.]

However, it often is desirable to leave devices in suspend after system
transitions to the working state, especially if those devices had been in
runtime suspend before the preceding system-wide suspend (or analogous)
transition.  Device drivers can use the ``DPM_FLAG_LEAVE_SUSPENDED`` flag to
indicate to the PM core (and middle-layer code) that they prefer the specific
devices handled by them to be left suspended and they have no problems with
skipping their system-wide resume callbacks for this reason.  Whether or not the
devices will actually be left in suspend may depend on their state before the
given system suspend-resume cycle and on the type of the system transition under
way.  In particular, devices are not left suspended if that transition is a
restore from hibernation, as device states are not guaranteed to be reflected
by the information stored in the hibernation image in that case.

The middle-layer code involved in the handling of the device is expected to
indicate to the PM core if the device may be left in suspend by setting its
:c:member:`power.may_skip_resume` status bit which is checked by the PM core
during the "noirq" phase of the preceding system-wide suspend (or analogous)
transition.  The middle layer is then responsible for handling the device as
appropriate in its "noirq" resume callback, which is executed regardless of
whether or not the device is left suspended, but the other resume callbacks
(except for ``->complete``) will be skipped automatically by the PM core if the
device really can be left in suspend.

For devices whose "noirq", "late" and "early" driver callbacks are invoked
directly by the PM core, all of the system-wide resume callbacks are skipped if
``DPM_FLAG_LEAVE_SUSPENDED`` is set and the device is in runtime suspend during
the ``suspend_noirq`` (or analogous) phase or the transition under way is a
proper system suspend (rather than anything related to hibernation) and the
device's wakeup settings are suitable for runtime PM (that is, it cannot
generate wakeup signals at all or it is allowed to wake up the system from
sleep).
+11 −0
@@ -994,6 +994,17 @@ into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(),
the function will set the power.direct_complete flag for it (to make the PM core
skip the subsequent "thaw" callbacks for it) and return.

Setting the DPM_FLAG_LEAVE_SUSPENDED flag means that the driver prefers the
device to be left in suspend after system-wide transitions to the working state.
This flag is checked by the PM core, but the PCI bus type informs the PM core
which devices may be left in suspend from its perspective (that happens during
the "noirq" phase of system-wide suspend and analogous transitions) and next it
uses the dev_pm_may_skip_resume() helper to decide whether or not to return from
pci_pm_resume_noirq() early, as the PM core will skip the remaining resume
callbacks for the device during the transition under way and will set its
runtime PM status to "suspended" if dev_pm_may_skip_resume() returns "true" for
it.

3.2. Device Runtime Power Management
------------------------------------
In addition to providing device power management callbacks PCI device drivers
+24 −3
@@ -990,7 +990,7 @@ void acpi_subsys_complete(struct device *dev)
	 * the sleep state it is going out of and it has never been resumed till
	 * now, resume it in case the firmware powered it up.
	 */
	if (dev->power.direct_complete && pm_resume_via_firmware())
	if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
		pm_request_resume(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_complete);
@@ -1039,10 +1039,28 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
 */
int acpi_subsys_suspend_noirq(struct device *dev)
{
	if (dev_pm_smart_suspend_and_suspended(dev))
	int ret;

	if (dev_pm_smart_suspend_and_suspended(dev)) {
		dev->power.may_skip_resume = true;
		return 0;
	}

	ret = pm_generic_suspend_noirq(dev);
	if (ret)
		return ret;

	/*
	 * If the target system sleep state is suspend-to-idle, it is sufficient
	 * to check whether or not the device's wakeup settings are good for
	 * runtime PM.  Otherwise, the pm_resume_via_firmware() check will cause
	 * acpi_subsys_complete() to take care of fixing up the device's state
	 * anyway, if need be.
	 */
	dev->power.may_skip_resume = device_may_wakeup(dev) ||
					!device_can_wakeup(dev);

	return pm_generic_suspend_noirq(dev);
	return 0;
}
EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);

@@ -1052,6 +1070,9 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
 */
int acpi_subsys_resume_noirq(struct device *dev)
{
	if (dev_pm_may_skip_resume(dev))
		return 0;

	/*
	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
	 * during system suspend, so update their runtime PM status to "active"
+329 −86
@@ -18,7 +18,6 @@
 */

#include <linux/device.h>
#include <linux/kallsyms.h>
#include <linux/export.h>
#include <linux/mutex.h>
#include <linux/pm.h>
@@ -540,6 +539,73 @@ void dev_pm_skip_next_resume_phases(struct device *dev)
	dev->power.is_suspended = false;
}

/**
 * suspend_event - Return a "suspend" message for given "resume" one.
 * @resume_msg: PM message representing a system-wide resume transition.
 */
static pm_message_t suspend_event(pm_message_t resume_msg)
{
	switch (resume_msg.event) {
	case PM_EVENT_RESUME:
		return PMSG_SUSPEND;
	case PM_EVENT_THAW:
	case PM_EVENT_RESTORE:
		return PMSG_FREEZE;
	case PM_EVENT_RECOVER:
		return PMSG_HIBERNATE;
	}
	return PMSG_ON;
}

/**
 * dev_pm_may_skip_resume - System-wide device resume optimization check.
 * @dev: Target device.
 *
 * Checks whether or not the device may be left in suspend after a system-wide
 * transition to the working state.
 */
bool dev_pm_may_skip_resume(struct device *dev)
{
	return !dev->power.must_resume && pm_transition.event != PM_EVENT_RESTORE;
}

static pm_callback_t dpm_subsys_resume_noirq_cb(struct device *dev,
						pm_message_t state,
						const char **info_p)
{
	pm_callback_t callback;
	const char *info;

	if (dev->pm_domain) {
		info = "noirq power domain ";
		callback = pm_noirq_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "noirq type ";
		callback = pm_noirq_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "noirq class ";
		callback = pm_noirq_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "noirq bus ";
		callback = pm_noirq_op(dev->bus->pm, state);
	} else {
		return NULL;
	}

	if (info_p)
		*info_p = info;

	return callback;
}

static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev,
						 pm_message_t state,
						 const char **info_p);

static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev,
						pm_message_t state,
						const char **info_p);

/**
 * device_resume_noirq - Execute a "noirq resume" callback for given device.
 * @dev: Device to handle.
@@ -551,8 +617,9 @@ void dev_pm_skip_next_resume_phases(struct device *dev)
 */
static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
{
	pm_callback_t callback = NULL;
	const char *info = NULL;
	pm_callback_t callback;
	const char *info;
	bool skip_resume;
	int error = 0;

	TRACE_DEVICE(dev);
@@ -566,28 +633,60 @@ static int device_resume_noirq(struct device *dev, pm_message_t state, bool asyn

	dpm_wait_for_superior(dev, async);

	if (dev->pm_domain) {
		info = "noirq power domain ";
		callback = pm_noirq_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "noirq type ";
		callback = pm_noirq_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "noirq class ";
		callback = pm_noirq_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "noirq bus ";
		callback = pm_noirq_op(dev->bus->pm, state);
	skip_resume = dev_pm_may_skip_resume(dev);

	callback = dpm_subsys_resume_noirq_cb(dev, state, &info);
	if (callback)
		goto Run;

	if (skip_resume)
		goto Skip;

	if (dev_pm_smart_suspend_and_suspended(dev)) {
		pm_message_t suspend_msg = suspend_event(state);

		/*
		 * If "freeze" callbacks have been skipped during a transition
		 * related to hibernation, the subsequent "thaw" callbacks must
		 * be skipped too or bad things may happen.  Otherwise, resume
		 * callbacks are going to be run for the device, so its runtime
		 * PM status must be changed to reflect the new state after the
		 * transition under way.
		 */
		if (!dpm_subsys_suspend_late_cb(dev, suspend_msg, NULL) &&
		    !dpm_subsys_suspend_noirq_cb(dev, suspend_msg, NULL)) {
			if (state.event == PM_EVENT_THAW) {
				skip_resume = true;
				goto Skip;
			} else {
				pm_runtime_set_active(dev);
			}
		}
	}

	if (!callback && dev->driver && dev->driver->pm) {
	if (dev->driver && dev->driver->pm) {
		info = "noirq driver ";
		callback = pm_noirq_op(dev->driver->pm, state);
	}

Run:
	error = dpm_run_callback(callback, dev, state, info);

Skip:
	dev->power.is_noirq_suspended = false;

	if (skip_resume) {
		/*
		 * The device is going to be left in suspend, but it might not
		 * have been in runtime suspend before the system suspended, so
		 * its runtime PM status needs to be updated to avoid confusing
		 * the runtime PM framework when runtime PM is enabled for the
		 * device again.
		 */
		pm_runtime_set_suspended(dev);
		dev_pm_skip_next_resume_phases(dev);
	}

Out:
	complete_all(&dev->power.completion);
	TRACE_RESUME(error);
@@ -681,6 +780,35 @@ void dpm_resume_noirq(pm_message_t state)
	dpm_noirq_end();
}

static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev,
						pm_message_t state,
						const char **info_p)
{
	pm_callback_t callback;
	const char *info;

	if (dev->pm_domain) {
		info = "early power domain ";
		callback = pm_late_early_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "early type ";
		callback = pm_late_early_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "early class ";
		callback = pm_late_early_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "early bus ";
		callback = pm_late_early_op(dev->bus->pm, state);
	} else {
		return NULL;
	}

	if (info_p)
		*info_p = info;

	return callback;
}

/**
 * device_resume_early - Execute an "early resume" callback for given device.
 * @dev: Device to handle.
@@ -691,8 +819,8 @@ void dpm_resume_noirq(pm_message_t state)
 */
static int device_resume_early(struct device *dev, pm_message_t state, bool async)
{
	pm_callback_t callback = NULL;
	const char *info = NULL;
	pm_callback_t callback;
	const char *info;
	int error = 0;

	TRACE_DEVICE(dev);
@@ -706,19 +834,7 @@ static int device_resume_early(struct device *dev, pm_message_t state, bool asyn

	dpm_wait_for_superior(dev, async);

	if (dev->pm_domain) {
		info = "early power domain ";
		callback = pm_late_early_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "early type ";
		callback = pm_late_early_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "early class ";
		callback = pm_late_early_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "early bus ";
		callback = pm_late_early_op(dev->bus->pm, state);
	}
	callback = dpm_subsys_resume_early_cb(dev, state, &info);

	if (!callback && dev->driver && dev->driver->pm) {
		info = "early driver ";
@@ -1089,6 +1205,77 @@ static pm_message_t resume_event(pm_message_t sleep_state)
	return PMSG_ON;
}

static void dpm_superior_set_must_resume(struct device *dev)
{
	struct device_link *link;
	int idx;

	if (dev->parent)
		dev->parent->power.must_resume = true;

	idx = device_links_read_lock();

	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
		link->supplier->power.must_resume = true;

	device_links_read_unlock(idx);
}

static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev,
						 pm_message_t state,
						 const char **info_p)
{
	pm_callback_t callback;
	const char *info;

	if (dev->pm_domain) {
		info = "noirq power domain ";
		callback = pm_noirq_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "noirq type ";
		callback = pm_noirq_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "noirq class ";
		callback = pm_noirq_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "noirq bus ";
		callback = pm_noirq_op(dev->bus->pm, state);
	} else {
		return NULL;
	}

	if (info_p)
		*info_p = info;

	return callback;
}

static bool device_must_resume(struct device *dev, pm_message_t state,
			       bool no_subsys_suspend_noirq)
{
	pm_message_t resume_msg = resume_event(state);

	/*
	 * If all of the device driver's "noirq", "late" and "early" callbacks
	 * are invoked directly by the core, the decision to allow the device to
	 * stay in suspend can be based on its current runtime PM status and its
	 * wakeup settings.
	 */
	if (no_subsys_suspend_noirq &&
	    !dpm_subsys_suspend_late_cb(dev, state, NULL) &&
	    !dpm_subsys_resume_early_cb(dev, resume_msg, NULL) &&
	    !dpm_subsys_resume_noirq_cb(dev, resume_msg, NULL))
		return !pm_runtime_status_suspended(dev) &&
			(resume_msg.event != PM_EVENT_RESUME ||
			 (device_can_wakeup(dev) && !device_may_wakeup(dev)));

	/*
	 * The only safe strategy here is to require that if the device may not
	 * be left in suspend, resume callbacks must be invoked for it.
	 */
	return !dev->power.may_skip_resume;
}

/**
 * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
 * @dev: Device to handle.
@@ -1100,8 +1287,9 @@ static pm_message_t resume_event(pm_message_t sleep_state)
 */
static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
{
	pm_callback_t callback = NULL;
	const char *info = NULL;
	pm_callback_t callback;
	const char *info;
	bool no_subsys_cb = false;
	int error = 0;

	TRACE_DEVICE(dev);
@@ -1120,30 +1308,40 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
	if (dev->power.syscore || dev->power.direct_complete)
		goto Complete;

	if (dev->pm_domain) {
		info = "noirq power domain ";
		callback = pm_noirq_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "noirq type ";
		callback = pm_noirq_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "noirq class ";
		callback = pm_noirq_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "noirq bus ";
		callback = pm_noirq_op(dev->bus->pm, state);
	}
	callback = dpm_subsys_suspend_noirq_cb(dev, state, &info);
	if (callback)
		goto Run;

	if (!callback && dev->driver && dev->driver->pm) {
	no_subsys_cb = !dpm_subsys_suspend_late_cb(dev, state, NULL);

	if (dev_pm_smart_suspend_and_suspended(dev) && no_subsys_cb)
		goto Skip;

	if (dev->driver && dev->driver->pm) {
		info = "noirq driver ";
		callback = pm_noirq_op(dev->driver->pm, state);
	}

Run:
	error = dpm_run_callback(callback, dev, state, info);
	if (!error)
		dev->power.is_noirq_suspended = true;
	else
	if (error) {
		async_error = error;
		goto Complete;
	}

Skip:
	dev->power.is_noirq_suspended = true;

	if (dev_pm_test_driver_flags(dev, DPM_FLAG_LEAVE_SUSPENDED)) {
		dev->power.must_resume = dev->power.must_resume ||
				atomic_read(&dev->power.usage_count) > 1 ||
				device_must_resume(dev, state, no_subsys_cb);
	} else {
		dev->power.must_resume = true;
	}

	if (dev->power.must_resume)
		dpm_superior_set_must_resume(dev);

Complete:
	complete_all(&dev->power.completion);
@@ -1249,6 +1447,50 @@ int dpm_suspend_noirq(pm_message_t state)
	return ret;
}

static void dpm_propagate_wakeup_to_parent(struct device *dev)
{
	struct device *parent = dev->parent;

	if (!parent)
		return;

	spin_lock_irq(&parent->power.lock);

	if (dev->power.wakeup_path && !parent->power.ignore_children)
		parent->power.wakeup_path = true;

	spin_unlock_irq(&parent->power.lock);
}

static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev,
						pm_message_t state,
						const char **info_p)
{
	pm_callback_t callback;
	const char *info;

	if (dev->pm_domain) {
		info = "late power domain ";
		callback = pm_late_early_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "late type ";
		callback = pm_late_early_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "late class ";
		callback = pm_late_early_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "late bus ";
		callback = pm_late_early_op(dev->bus->pm, state);
	} else {
		return NULL;
	}

	if (info_p)
		*info_p = info;

	return callback;
}

/**
 * __device_suspend_late - Execute a "late suspend" callback for given device.
 * @dev: Device to handle.
@@ -1259,8 +1501,8 @@ int dpm_suspend_noirq(pm_message_t state)
 */
static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
{
	pm_callback_t callback = NULL;
	const char *info = NULL;
	pm_callback_t callback;
	const char *info;
	int error = 0;

	TRACE_DEVICE(dev);
@@ -1281,30 +1523,29 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool as
	if (dev->power.syscore || dev->power.direct_complete)
		goto Complete;

	if (dev->pm_domain) {
		info = "late power domain ";
		callback = pm_late_early_op(&dev->pm_domain->ops, state);
	} else if (dev->type && dev->type->pm) {
		info = "late type ";
		callback = pm_late_early_op(dev->type->pm, state);
	} else if (dev->class && dev->class->pm) {
		info = "late class ";
		callback = pm_late_early_op(dev->class->pm, state);
	} else if (dev->bus && dev->bus->pm) {
		info = "late bus ";
		callback = pm_late_early_op(dev->bus->pm, state);
	}
	callback = dpm_subsys_suspend_late_cb(dev, state, &info);
	if (callback)
		goto Run;

	if (!callback && dev->driver && dev->driver->pm) {
	if (dev_pm_smart_suspend_and_suspended(dev) &&
	    !dpm_subsys_suspend_noirq_cb(dev, state, NULL))
		goto Skip;

	if (dev->driver && dev->driver->pm) {
		info = "late driver ";
		callback = pm_late_early_op(dev->driver->pm, state);
	}

Run:
	error = dpm_run_callback(callback, dev, state, info);
	if (!error)
		dev->power.is_late_suspended = true;
	else
	if (error) {
		async_error = error;
		goto Complete;
	}
	dpm_propagate_wakeup_to_parent(dev);

Skip:
	dev->power.is_late_suspended = true;

Complete:
	TRACE_SUSPEND(error);
@@ -1435,11 +1676,17 @@ static int legacy_suspend(struct device *dev, pm_message_t state,
	return error;
}

static void dpm_clear_suppliers_direct_complete(struct device *dev)
static void dpm_clear_superiors_direct_complete(struct device *dev)
{
	struct device_link *link;
	int idx;

	if (dev->parent) {
		spin_lock_irq(&dev->parent->power.lock);
		dev->parent->power.direct_complete = false;
		spin_unlock_irq(&dev->parent->power.lock);
	}

	idx = device_links_read_lock();

	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
@@ -1500,6 +1747,9 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
		dev->power.direct_complete = false;
	}

	dev->power.may_skip_resume = false;
	dev->power.must_resume = false;

	dpm_watchdog_set(&wd, dev);
	device_lock(dev);

@@ -1543,20 +1793,12 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)

 End:
	if (!error) {
		struct device *parent = dev->parent;

		dev->power.is_suspended = true;
		if (parent) {
			spin_lock_irq(&parent->power.lock);
		if (device_may_wakeup(dev))
			dev->power.wakeup_path = true;

			dev->parent->power.direct_complete = false;
			if (dev->power.wakeup_path
			    && !dev->parent->power.ignore_children)
				dev->parent->power.wakeup_path = true;

			spin_unlock_irq(&parent->power.lock);
		}
		dpm_clear_suppliers_direct_complete(dev);
		dpm_propagate_wakeup_to_parent(dev);
		dpm_clear_superiors_direct_complete(dev);
	}

	device_unlock(dev);
@@ -1665,8 +1907,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
	if (dev->power.syscore)
		return 0;

	WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
		!pm_runtime_enabled(dev));
	WARN_ON(!pm_runtime_enabled(dev) &&
		dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
					      DPM_FLAG_LEAVE_SUSPENDED));

	/*
	 * If a device's parent goes into runtime suspend at the wrong time,
@@ -1678,7 +1921,7 @@ static int device_prepare(struct device *dev, pm_message_t state)

	device_lock(dev);

	dev->power.wakeup_path = device_may_wakeup(dev);
	dev->power.wakeup_path = false;

	if (dev->power.no_pm_callbacks) {
		ret = 1;	/* Let device go direct_complete */
+3 −8
@@ -41,20 +41,15 @@ extern void dev_pm_disable_wake_irq_check(struct device *dev);

#ifdef CONFIG_PM_SLEEP

extern int device_wakeup_attach_irq(struct device *dev,
				    struct wake_irq *wakeirq);
extern void device_wakeup_attach_irq(struct device *dev, struct wake_irq *wakeirq);
extern void device_wakeup_detach_irq(struct device *dev);
extern void device_wakeup_arm_wake_irqs(void);
extern void device_wakeup_disarm_wake_irqs(void);

#else

static inline int
device_wakeup_attach_irq(struct device *dev,
			 struct wake_irq *wakeirq)
{
	return 0;
}
static inline void device_wakeup_attach_irq(struct device *dev,
					    struct wake_irq *wakeirq) {}

static inline void device_wakeup_detach_irq(struct device *dev)
{