
Commit 643161ac authored by Rafael J. Wysocki

Merge branch 'pm-sleep'

* pm-sleep:
  PM / Freezer: Remove references to TIF_FREEZE in comments
  PM / Sleep: Add more wakeup source initialization routines
  PM / Hibernate: Enable usermodehelpers in hibernate() error path
  PM / Sleep: Make __pm_stay_awake() delete wakeup source timers
  PM / Sleep: Fix race conditions related to wakeup source timer function
  PM / Sleep: Fix possible infinite loop during wakeup source destruction
  PM / Hibernate: print physical addresses consistently with other parts of kernel
  PM: Add comment describing relationships between PM callbacks to pm.h
  PM / Sleep: Drop suspend_stats_update()
  PM / Sleep: Make enter_state() in kernel/power/suspend.c static
  PM / Sleep: Unify kerneldoc comments in kernel/power/suspend.c
  PM / Sleep: Remove unnecessary label from suspend_freeze_processes()
  PM / Sleep: Do not check wakeup too often in try_to_freeze_tasks()
  PM / Sleep: Initialize wakeup source locks in wakeup_source_add()
  PM / Hibernate: Refactor and simplify freezer_test_done
  PM / Hibernate: Thaw kernel threads in hibernation_snapshot() in error/test path
  PM / Freezer / Docs: Document the beauty of freeze/thaw semantics
  PM / Suspend: Avoid code duplication in suspend statistics update
  PM / Sleep: Introduce generic callbacks for new device PM phases
  PM / Sleep: Introduce "late suspend" and "early resume" of devices
parents 743c5bc2 37f08be1
+61 −32
@@ -96,6 +96,12 @@ struct dev_pm_ops {
 	int (*thaw)(struct device *dev);
 	int (*poweroff)(struct device *dev);
 	int (*restore)(struct device *dev);
+	int (*suspend_late)(struct device *dev);
+	int (*resume_early)(struct device *dev);
+	int (*freeze_late)(struct device *dev);
+	int (*thaw_early)(struct device *dev);
+	int (*poweroff_late)(struct device *dev);
+	int (*restore_early)(struct device *dev);
 	int (*suspend_noirq)(struct device *dev);
 	int (*resume_noirq)(struct device *dev);
 	int (*freeze_noirq)(struct device *dev);
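
As an illustration of how a driver can hook into the new phases (this sketch is
not part of the commit; the foo_* names are hypothetical stand-ins), the extra
members are simply filled in next to the existing callbacks:

#include <linux/pm.h>
#include <linux/platform_device.h>

/* Hypothetical handlers -- stand-ins for real driver code. */
static int foo_suspend(struct device *dev)      { return 0; }	/* quiesce I/O */
static int foo_suspend_late(struct device *dev) { return 0; }	/* save state */
static int foo_resume_early(struct device *dev) { return 0; }	/* restore state */
static int foo_resume(struct device *dev)       { return 0; }	/* restart I/O */

static const struct dev_pm_ops foo_pm_ops = {
	.suspend      = foo_suspend,
	.suspend_late = foo_suspend_late,
	.resume_early = foo_resume_early,
	.resume       = foo_resume,
};

static struct platform_driver foo_driver = {
	.driver = {
		.name = "foo",
		.pm   = &foo_pm_ops,
	},
};
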
@@ -305,7 +311,7 @@ Entering System Suspend
 -----------------------
 When the system goes into the standby or memory sleep state, the phases are:
 
-		prepare, suspend, suspend_noirq.
+		prepare, suspend, suspend_late, suspend_noirq.
 
     1.	The prepare phase is meant to prevent races by preventing new devices
 	from being registered; the PM core would never know that all the
@@ -324,7 +330,12 @@ When the system goes into the standby or memory sleep state, the phases are:
 	appropriate low-power state, depending on the bus type the device is on,
 	and they may enable wakeup events.
 
-    3.	The suspend_noirq phase occurs after IRQ handlers have been disabled,
+    3.	For a number of devices it is convenient to split suspend into the
+	"quiesce device" and "save device state" phases, in which cases
+	suspend_late is meant to do the latter.  It is always executed after
+	runtime power management has been disabled for all devices.
+
+    4.	The suspend_noirq phase occurs after IRQ handlers have been disabled,
 	which means that the driver's interrupt handler will not be called while
 	the callback method is running.  The methods should save the values of
 	the device's registers that weren't saved previously and finally put the
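
To make the "quiesce device" / "save device state" split concrete, here is a
hedged sketch of how a hypothetical driver might divide the work between
->suspend() and ->suspend_late(); struct foo_chip and its fields are invented
purely for illustration:

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/io.h>

/* Invented per-device state, for illustration only. */
struct foo_chip {
	int irq;
	u32 saved_ctrl;
	void __iomem *regs;
};

/* ->suspend(): quiesce the device so that no new I/O is started. */
static int foo_suspend(struct device *dev)
{
	struct foo_chip *chip = dev_get_drvdata(dev);

	disable_irq(chip->irq);
	return 0;
}

/* ->suspend_late(): save device state; runtime PM is already disabled here. */
static int foo_suspend_late(struct device *dev)
{
	struct foo_chip *chip = dev_get_drvdata(dev);

	chip->saved_ctrl = readl(chip->regs);	/* save a control register */
	return 0;
}
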
@@ -359,7 +370,7 @@ Leaving System Suspend
 ----------------------
 When resuming from standby or memory sleep, the phases are:
 
-		resume_noirq, resume, complete.
+		resume_noirq, resume_early, resume, complete.
 
     1.	The resume_noirq callback methods should perform any actions needed
 	before the driver's interrupt handlers are invoked.  This generally
@@ -375,14 +386,18 @@ When resuming from standby or memory sleep, the phases are:
 	device driver's ->pm.resume_noirq() method to perform device-specific
 	actions.
 
-    2.	The resume methods should bring the the device back to its operating
+    2.	The resume_early methods should prepare devices for the execution of
+	the resume methods.  This generally involves undoing the actions of the
+	preceding suspend_late phase.
+
+    3.	The resume methods should bring the the device back to its operating
 	state, so that it can perform normal I/O.  This generally involves
 	undoing the actions of the suspend phase.
 
-    3.	The complete phase uses only a bus callback.  The method should undo the
-	actions of the prepare phase.  Note, however, that new children may be
-	registered below the device as soon as the resume callbacks occur; it's
-	not necessary to wait until the complete phase.
+    4.	The complete phase should undo the actions of the prepare phase.  Note,
+	however, that new children may be registered below the device as soon as
+	the resume callbacks occur; it's not necessary to wait until the
+	complete phase.
 
 At the end of these phases, drivers should be as functional as they were before
 suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are
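
Continuing the hypothetical foo driver sketched earlier, the resume side would
undo those two suspend steps in reverse order (again an illustration, not code
from this commit):

/* ->resume_early(): restore what ->suspend_late() saved. */
static int foo_resume_early(struct device *dev)
{
	struct foo_chip *chip = dev_get_drvdata(dev);

	writel(chip->saved_ctrl, chip->regs);	/* restore the saved register */
	return 0;
}

/* ->resume(): bring the device back to full operation. */
static int foo_resume(struct device *dev)
{
	struct foo_chip *chip = dev_get_drvdata(dev);

	enable_irq(chip->irq);			/* normal I/O can resume */
	return 0;
}
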
@@ -429,8 +444,8 @@ an image of the system memory while everything is stable, reactivate all
 devices (thaw), write the image to permanent storage, and finally shut down the
 system (poweroff).  The phases used to accomplish this are:
 
-	prepare, freeze, freeze_noirq, thaw_noirq, thaw, complete,
-	prepare, poweroff, poweroff_noirq
+	prepare, freeze, freeze_late, freeze_noirq, thaw_noirq, thaw_early,
+	thaw, complete, prepare, poweroff, poweroff_late, poweroff_noirq
 
     1.	The prepare phase is discussed in the "Entering System Suspend" section
 	above.
@@ -441,7 +456,11 @@ system (poweroff). The phases used to accomplish this are:
 	save time it's best not to do so.  Also, the device should not be
 	prepared to generate wakeup events.
 
-    3.	The freeze_noirq phase is analogous to the suspend_noirq phase discussed
+    3.	The freeze_late phase is analogous to the suspend_late phase described
+	above, except that the device should not be put in a low-power state and
+	should not be allowed to generate wakeup events by it.
+
+    4.	The freeze_noirq phase is analogous to the suspend_noirq phase discussed
 	above, except again that the device should not be put in a low-power
 	state and should not be allowed to generate wakeup events.
 
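For the hypothetical foo driver above, a ->freeze_late() handler (item 3) might
therefore save state exactly as ->suspend_late() does, while deliberately
leaving the device powered and its wakeup logic unarmed; illustrative only:

/* ->freeze_late(): save state for the hibernation image; unlike
 * ->suspend_late() it must not power the device down or arm wakeup.
 */
static int foo_freeze_late(struct device *dev)
{
	struct foo_chip *chip = dev_get_drvdata(dev);

	chip->saved_ctrl = readl(chip->regs);	/* save state only; stay powered */
	return 0;
}
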
@@ -449,15 +468,19 @@ At this point the system image is created. All devices should be inactive and
 the contents of memory should remain undisturbed while this happens, so that the
 image forms an atomic snapshot of the system state.
 
-    4.	The thaw_noirq phase is analogous to the resume_noirq phase discussed
+    5.	The thaw_noirq phase is analogous to the resume_noirq phase discussed
 	above.  The main difference is that its methods can assume the device is
 	in the same state as at the end of the freeze_noirq phase.
 
-    5.	The thaw phase is analogous to the resume phase discussed above.  Its
+    6.	The thaw_early phase is analogous to the resume_early phase described
+	above.  Its methods should undo the actions of the preceding
+	freeze_late, if necessary.
+
+    7.	The thaw phase is analogous to the resume phase discussed above.  Its
 	methods should bring the device back to an operating state, so that it
 	can be used for saving the image if necessary.
 
-    6.	The complete phase is discussed in the "Leaving System Suspend" section
+    8.	The complete phase is discussed in the "Leaving System Suspend" section
 	above.
 
 At this point the system image is saved, and the devices then need to be
@@ -465,16 +488,19 @@ prepared for the upcoming system shutdown. This is much like suspending them
 before putting the system into the standby or memory sleep state, and the phases
 are similar.
 
-    7.	The prepare phase is discussed above.
+    9.	The prepare phase is discussed above.
+
+    10.	The poweroff phase is analogous to the suspend phase.
 
-    8.	The poweroff phase is analogous to the suspend phase.
+    11.	The poweroff_late phase is analogous to the suspend_late phase.
 
-    9.	The poweroff_noirq phase is analogous to the suspend_noirq phase.
+    12.	The poweroff_noirq phase is analogous to the suspend_noirq phase.
 
-The poweroff and poweroff_noirq callbacks should do essentially the same things
-as the suspend and suspend_noirq callbacks.  The only notable difference is that
-they need not store the device register values, because the registers should
-already have been stored during the freeze or freeze_noirq phases.
+The poweroff, poweroff_late and poweroff_noirq callbacks should do essentially
+the same things as the suspend, suspend_late and suspend_noirq callbacks,
+respectively.  The only notable difference is that they need not store the
+device register values, because the registers should already have been stored
+during the freeze, freeze_late or freeze_noirq phases.
 
 
 Leaving Hibernation
@@ -518,22 +544,25 @@ To achieve this, the image kernel must restore the devices' pre-hibernation
 functionality.  The operation is much like waking up from the memory sleep
 state, although it involves different phases:
 
-	restore_noirq, restore, complete
+	restore_noirq, restore_early, restore, complete
 
     1.	The restore_noirq phase is analogous to the resume_noirq phase.
 
-    2.	The restore phase is analogous to the resume phase.
+    2.	The restore_early phase is analogous to the resume_early phase.
+
+    3.	The restore phase is analogous to the resume phase.
 
-    3.	The complete phase is discussed above.
+    4.	The complete phase is discussed above.
 
-The main difference from resume[_noirq] is that restore[_noirq] must assume the
-device has been accessed and reconfigured by the boot loader or the boot kernel.
-Consequently the state of the device may be different from the state remembered
-from the freeze and freeze_noirq phases.  The device may even need to be reset
-and completely re-initialized.  In many cases this difference doesn't matter, so
-the resume[_noirq] and restore[_norq] method pointers can be set to the same
-routines.  Nevertheless, different callback pointers are used in case there is a
-situation where it actually matters.
+The main difference from resume[_early|_noirq] is that restore[_early|_noirq]
+must assume the device has been accessed and reconfigured by the boot loader or
+the boot kernel.  Consequently the state of the device may be different from the
+state remembered from the freeze, freeze_late and freeze_noirq phases.  The
+device may even need to be reset and completely re-initialized.  In many cases
+this difference doesn't matter, so the resume[_early|_noirq] and
+restore[_early|_norq] method pointers can be set to the same routines.
+Nevertheless, different callback pointers are used in case there is a situation
+where it actually does matter.
 
 
 Device Power Management Domains
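
Since the restore discussion above allows the restore pointers to reuse the
resume routines when the difference does not matter, a hypothetical driver
(reusing the illustrative foo_* handlers from the earlier sketches, not code
from this commit) could wire them up like this:

static const struct dev_pm_ops foo_pm_ops = {
	/* ... suspend/freeze callbacks omitted for brevity ... */
	.resume_early  = foo_resume_early,
	.resume        = foo_resume,
	.restore_early = foo_resume_early,	/* same routine reused for restore */
	.restore       = foo_resume,
};
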
+21 −0
@@ -63,6 +63,27 @@ devices have been reinitialized, the function thaw_processes() is called in
 order to clear the PF_FROZEN flag for each frozen task.  Then, the tasks that
 have been frozen leave __refrigerator() and continue running.
 
+
+Rationale behind the functions dealing with freezing and thawing of tasks:
+-------------------------------------------------------------------------
+
+freeze_processes():
+  - freezes only userspace tasks
+
+freeze_kernel_threads():
+  - freezes all tasks (including kernel threads) because we can't freeze
+    kernel threads without freezing userspace tasks
+
+thaw_kernel_threads():
+  - thaws only kernel threads; this is particularly useful if we need to do
+    anything special in between thawing of kernel threads and thawing of
+    userspace tasks, or if we want to postpone the thawing of userspace tasks
+
+thaw_processes():
+  - thaws all tasks (including kernel threads) because we can't thaw userspace
+    tasks without thawing kernel threads
+
+
 III. Which kernel threads are freezable?
 
 Kernel threads are not freezable by default.  However, a kernel thread may clear
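
A rough sketch of how these four helpers can be combined, for example when some
work must run with kernel threads thawed while user space stays frozen; this is
an illustrative calling-order example, not actual kernel/power code:

#include <linux/freezer.h>

static int do_frozen_work(void)
{
	int error;

	error = freeze_processes();		/* user space only */
	if (error)
		return error;

	error = freeze_kernel_threads();	/* now freezable kthreads too */
	if (error)
		goto thaw;

	/* ... work that needs a fully frozen system ... */

	thaw_kernel_threads();			/* kernel threads run again */

	/* ... work that still requires user space to stay frozen ... */

thaw:
	thaw_processes();			/* everything runs again */
	return error;
}
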
+5 −6
@@ -1234,8 +1234,7 @@ static int suspend(int vetoable)
 	struct apm_user	*as;
 
 	dpm_suspend_start(PMSG_SUSPEND);
-
-	dpm_suspend_noirq(PMSG_SUSPEND);
+	dpm_suspend_end(PMSG_SUSPEND);
 
 	local_irq_disable();
 	syscore_suspend();
@@ -1259,9 +1258,9 @@ static int suspend(int vetoable)
 	syscore_resume();
 	local_irq_enable();
 
-	dpm_resume_noirq(PMSG_RESUME);
-
+	dpm_resume_start(PMSG_RESUME);
 	dpm_resume_end(PMSG_RESUME);
+
 	queue_event(APM_NORMAL_RESUME, NULL);
 	spin_lock(&user_list_lock);
 	for (as = user_list; as != NULL; as = as->next) {
@@ -1277,7 +1276,7 @@ static void standby(void)
 {
 	int err;
 
-	dpm_suspend_noirq(PMSG_SUSPEND);
+	dpm_suspend_end(PMSG_SUSPEND);
 
 	local_irq_disable();
 	syscore_suspend();
@@ -1291,7 +1290,7 @@ static void standby(void)
 	syscore_resume();
 	local_irq_enable();
 
-	dpm_resume_noirq(PMSG_RESUME);
+	dpm_resume_start(PMSG_RESUME);
 }
 
 static apm_event_t get_event(void)
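
Assuming the new helpers behave as this series suggests (dpm_suspend_end()
covering the suspend_late and suspend_noirq phases, dpm_resume_start() covering
the resume_noirq and resume_early phases), the updated calling pattern in
platform sleep code looks roughly like the sketch below; platform_sleep() is a
made-up wrapper, not a function in this file:

#include <linux/pm.h>

static int platform_sleep(void)
{
	int error;

	error = dpm_suspend_start(PMSG_SUSPEND);	/* prepare + suspend */
	if (error)
		goto resume;

	error = dpm_suspend_end(PMSG_SUSPEND);		/* suspend_late + suspend_noirq */
	if (error)
		goto resume;

	/* ... enter the platform low-power state here ... */

	dpm_resume_start(PMSG_RESUME);			/* resume_noirq + resume_early */
resume:
	dpm_resume_end(PMSG_RESUME);			/* resume + complete */
	return error;
}
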
+104 −53
@@ -92,59 +92,28 @@ int pm_generic_prepare(struct device *dev)
 }
 
 /**
- * __pm_generic_call - Generic suspend/freeze/poweroff/thaw subsystem callback.
- * @dev: Device to handle.
- * @event: PM transition of the system under way.
- * @bool: Whether or not this is the "noirq" stage.
- *
- * Execute the PM callback corresponding to @event provided by the driver of
- * @dev, if defined, and return its error code.    Return 0 if the callback is
- * not present.
+ * pm_generic_suspend_noirq - Generic suspend_noirq callback for subsystems.
+ * @dev: Device to suspend.
  */
-static int __pm_generic_call(struct device *dev, int event, bool noirq)
+int pm_generic_suspend_noirq(struct device *dev)
 {
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
-	int (*callback)(struct device *);
 
-	if (!pm)
-		return 0;
-
-	switch (event) {
-	case PM_EVENT_SUSPEND:
-		callback = noirq ? pm->suspend_noirq : pm->suspend;
-		break;
-	case PM_EVENT_FREEZE:
-		callback = noirq ? pm->freeze_noirq : pm->freeze;
-		break;
-	case PM_EVENT_HIBERNATE:
-		callback = noirq ? pm->poweroff_noirq : pm->poweroff;
-		break;
-	case PM_EVENT_RESUME:
-		callback = noirq ? pm->resume_noirq : pm->resume;
-		break;
-	case PM_EVENT_THAW:
-		callback = noirq ? pm->thaw_noirq : pm->thaw;
-		break;
-	case PM_EVENT_RESTORE:
-		callback = noirq ? pm->restore_noirq : pm->restore;
-		break;
-	default:
-		callback = NULL;
-		break;
-	}
-
-	return callback ? callback(dev) : 0;
+	return pm && pm->suspend_noirq ? pm->suspend_noirq(dev) : 0;
 }
+EXPORT_SYMBOL_GPL(pm_generic_suspend_noirq);
 
 /**
- * pm_generic_suspend_noirq - Generic suspend_noirq callback for subsystems.
+ * pm_generic_suspend_late - Generic suspend_late callback for subsystems.
  * @dev: Device to suspend.
  */
-int pm_generic_suspend_noirq(struct device *dev)
+int pm_generic_suspend_late(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_SUSPEND, true);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->suspend_late ? pm->suspend_late(dev) : 0;
 }
-EXPORT_SYMBOL_GPL(pm_generic_suspend_noirq);
+EXPORT_SYMBOL_GPL(pm_generic_suspend_late);
 
 /**
  * pm_generic_suspend - Generic suspend callback for subsystems.
@@ -152,7 +121,9 @@ EXPORT_SYMBOL_GPL(pm_generic_suspend_noirq);
  */
 int pm_generic_suspend(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_SUSPEND, false);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->suspend ? pm->suspend(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_suspend);
 
@@ -162,17 +133,33 @@ EXPORT_SYMBOL_GPL(pm_generic_suspend);
  */
 int pm_generic_freeze_noirq(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_FREEZE, true);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->freeze_noirq ? pm->freeze_noirq(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_freeze_noirq);
 
 /**
+ * pm_generic_freeze_late - Generic freeze_late callback for subsystems.
+ * @dev: Device to freeze.
+ */
+int pm_generic_freeze_late(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->freeze_late ? pm->freeze_late(dev) : 0;
+}
+EXPORT_SYMBOL_GPL(pm_generic_freeze_late);
+
+/**
  * pm_generic_freeze - Generic freeze callback for subsystems.
  * @dev: Device to freeze.
  */
 int pm_generic_freeze(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_FREEZE, false);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->freeze ? pm->freeze(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_freeze);
 
@@ -182,17 +169,33 @@ EXPORT_SYMBOL_GPL(pm_generic_freeze);
  */
 int pm_generic_poweroff_noirq(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_HIBERNATE, true);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->poweroff_noirq ? pm->poweroff_noirq(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_poweroff_noirq);
 
 /**
+ * pm_generic_poweroff_late - Generic poweroff_late callback for subsystems.
+ * @dev: Device to handle.
+ */
+int pm_generic_poweroff_late(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->poweroff_late ? pm->poweroff_late(dev) : 0;
+}
+EXPORT_SYMBOL_GPL(pm_generic_poweroff_late);
+
+/**
  * pm_generic_poweroff - Generic poweroff callback for subsystems.
  * @dev: Device to handle.
  */
 int pm_generic_poweroff(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_HIBERNATE, false);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->poweroff ? pm->poweroff(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_poweroff);
 
@@ -202,17 +205,33 @@ EXPORT_SYMBOL_GPL(pm_generic_poweroff);
  */
 int pm_generic_thaw_noirq(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_THAW, true);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->thaw_noirq ? pm->thaw_noirq(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_thaw_noirq);
 
 /**
+ * pm_generic_thaw_early - Generic thaw_early callback for subsystems.
+ * @dev: Device to thaw.
+ */
+int pm_generic_thaw_early(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->thaw_early ? pm->thaw_early(dev) : 0;
+}
+EXPORT_SYMBOL_GPL(pm_generic_thaw_early);
+
+/**
  * pm_generic_thaw - Generic thaw callback for subsystems.
  * @dev: Device to thaw.
  */
 int pm_generic_thaw(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_THAW, false);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->thaw ? pm->thaw(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_thaw);
 
@@ -222,17 +241,33 @@ EXPORT_SYMBOL_GPL(pm_generic_thaw);
  */
 int pm_generic_resume_noirq(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_RESUME, true);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->resume_noirq ? pm->resume_noirq(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_resume_noirq);
 
 /**
+ * pm_generic_resume_early - Generic resume_early callback for subsystems.
+ * @dev: Device to resume.
+ */
+int pm_generic_resume_early(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->resume_early ? pm->resume_early(dev) : 0;
+}
+EXPORT_SYMBOL_GPL(pm_generic_resume_early);
+
+/**
  * pm_generic_resume - Generic resume callback for subsystems.
  * @dev: Device to resume.
  */
 int pm_generic_resume(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_RESUME, false);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->resume ? pm->resume(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_resume);
 
@@ -242,17 +277,33 @@ EXPORT_SYMBOL_GPL(pm_generic_resume);
  */
 int pm_generic_restore_noirq(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_RESTORE, true);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->restore_noirq ? pm->restore_noirq(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_restore_noirq);
 
 /**
+ * pm_generic_restore_early - Generic restore_early callback for subsystems.
+ * @dev: Device to resume.
+ */
+int pm_generic_restore_early(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->restore_early ? pm->restore_early(dev) : 0;
+}
+EXPORT_SYMBOL_GPL(pm_generic_restore_early);
+
+/**
  * pm_generic_restore - Generic restore callback for subsystems.
  * @dev: Device to restore.
  */
 int pm_generic_restore(struct device *dev)
 {
-	return __pm_generic_call(dev, PM_EVENT_RESTORE, false);
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+
+	return pm && pm->restore ? pm->restore(dev) : 0;
 }
 EXPORT_SYMBOL_GPL(pm_generic_restore);
 
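A subsystem or bus type with no special needs can simply point its dev_pm_ops
at these exported helpers, which forward each phase to whatever the device
driver implements; the foo bus below is hypothetical, not part of this commit:

#include <linux/device.h>
#include <linux/pm.h>

static const struct dev_pm_ops foo_bus_pm_ops = {
	.prepare        = pm_generic_prepare,
	.suspend        = pm_generic_suspend,
	.suspend_late   = pm_generic_suspend_late,
	.suspend_noirq  = pm_generic_suspend_noirq,
	.resume_noirq   = pm_generic_resume_noirq,
	.resume_early   = pm_generic_resume_early,
	.resume         = pm_generic_resume,
	.freeze         = pm_generic_freeze,
	.freeze_late    = pm_generic_freeze_late,
	.freeze_noirq   = pm_generic_freeze_noirq,
	.thaw_noirq     = pm_generic_thaw_noirq,
	.thaw_early     = pm_generic_thaw_early,
	.thaw           = pm_generic_thaw,
	.poweroff       = pm_generic_poweroff,
	.poweroff_late  = pm_generic_poweroff_late,
	.poweroff_noirq = pm_generic_poweroff_noirq,
	.restore_noirq  = pm_generic_restore_noirq,
	.restore_early  = pm_generic_restore_early,
	.restore        = pm_generic_restore,
	.complete       = pm_generic_complete,
};

static struct bus_type foo_bus_type = {
	.name = "foo",
	.pm   = &foo_bus_pm_ops,
};
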
+225 −22

Preview size limit exceeded, changes collapsed.