Documentation/power/devices.txt  (+30 −4)

@@ -2,6 +2,7 @@ Device Power Management

 Copyright (c) 2010-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 Copyright (c) 2010 Alan Stern <stern@rowland.harvard.edu>
+Copyright (c) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>

 Most of the code in Linux is device drivers, so most of the Linux power

@@ -326,6 +327,20 @@ the phases are:

	driver in some way for the upcoming system power transition, but it
	should not put the device into a low-power state.

+	For devices supporting runtime power management, the return value of
+	the prepare callback can be used to indicate to the PM core that it may
+	safely leave the device in runtime suspend (if runtime-suspended
+	already), provided that all of the device's descendants are also left
+	in runtime suspend.  Namely, if the prepare callback returns a positive
+	number and that happens for all of the descendants of the device too,
+	and all of them (including the device itself) are runtime-suspended,
+	the PM core will skip the suspend, suspend_late and suspend_noirq
+	phases as well as the resume_noirq, resume_early and resume phases of
+	the following system resume for all of these devices.  In that case,
+	the complete callback will be called directly after the prepare
+	callback and is entirely responsible for bringing the device back to
+	the functional state as appropriate.
+
    2.	The suspend methods should quiesce the device to stop it from
	performing I/O.  They also may save the device registers and put it
	into the appropriate low-power state, depending on the bus type the
	device is on,

@@ -400,12 +415,23 @@ When resuming from freeze, standby or memory sleep, the phases are:

	the resume callbacks occur; it's not necessary to wait until the
	complete phase.
+	Moreover, if the preceding prepare callback returned a positive number,
+	the device may have been left in runtime suspend throughout the whole
+	system suspend and resume (the suspend, suspend_late, suspend_noirq
+	phases of system suspend and the resume_noirq, resume_early, resume
+	phases of system resume may have been skipped for it).  In that case,
+	the complete callback is entirely responsible for bringing the device
+	back to the functional state after system suspend if necessary.  [For
+	example, it may need to queue up a runtime resume request for the
+	device for this purpose.]  To check if that is the case, the complete
+	callback can consult the device's power.direct_complete flag.  Namely,
+	if that flag is set when the complete callback is being run, it has
+	been called directly after the preceding prepare and special action may
+	be required to make the device work correctly afterward.
+
 At the end of these phases, drivers should be as functional as they were
 before suspending: I/O can be performed using DMA and IRQs, and the
-relevant clocks are gated on.  Even if the device was in a low-power state
-before the system sleep because of runtime power management, afterwards it
-should be back in its full-power state.  There are multiple reasons why
-it's best to do this; they are discussed in more detail in
-Documentation/power/runtime_pm.txt.
+relevant clocks are gated on.

 However, the details here may again be platform-specific.  For example,
 some systems support multiple "run" states, and the mode in effect at

Documentation/power/runtime_pm.txt  (+17 −0)

@@ -2,6 +2,7 @@ Runtime Power Management Framework for I/O Devices

 (C) 2009-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 (C) 2010 Alan Stern <stern@rowland.harvard.edu>
+(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>

 1. Introduction

@@ -444,6 +445,10 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:

   bool pm_runtime_status_suspended(struct device *dev);
     - return true if the device's runtime PM status is 'suspended'

+  bool pm_runtime_suspended_if_enabled(struct device *dev);
+    - return true if the device's runtime PM status is 'suspended' and its
+      'power.disable_depth' field is equal to 1
+
   void pm_runtime_allow(struct device *dev);
     - set the power.runtime_auto flag for the device and decrease its usage
       counter (used by the /sys/devices/.../power/control interface to

@@ -644,6 +649,18 @@ place (in particular, if the system is not waking up from hibernation), it

 may be more efficient to leave the devices that had been suspended before the
 system suspend began in the suspended state.

+To this end, the PM core provides a mechanism allowing some coordination
+between different levels of device hierarchy.  Namely, if a system suspend
+.prepare() callback returns a positive number for a device, that indicates
+to the PM core that the device appears to be runtime-suspended and its state
+is fine, so it may be left in runtime suspend provided that all of its
+descendants are also left in runtime suspend.  If that happens, the PM core
+will not execute any system suspend and resume callbacks for all of those
+devices, except for the complete callback, which is then entirely responsible
+for handling the device as appropriate.  This only applies to system suspend
+transitions that are not related to hibernation (see
+Documentation/power/devices.txt for more information).
+
 The PM core does its best to reduce the probability of race conditions
 between the runtime PM and system suspend/resume (and hibernation) callbacks
 by carrying out the following operations:

Documentation/power/swsusp.txt  (+4 −1)

@@ -220,7 +220,10 @@ Q: After resuming, system is paging heavily, leading to very bad interactivity.
 A: Try running

-cat `cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u` > /dev/null
+cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u | while read file
+do
+  test -f "$file" && cat "$file" > /dev/null
+done

 after resume. swapoff -a; swapon -a may also be useful.

drivers/acpi/acpi_lpss.c  (+213 −36)

@@ -19,6 +19,7 @@

 #include <linux/platform_device.h>
 #include <linux/platform_data/clk-lpss.h>
 #include <linux/pm_runtime.h>
+#include <linux/delay.h>

 #include "internal.h"

@@ -28,6 +29,7 @@ ACPI_MODULE_NAME("acpi_lpss");

 #define LPSS_LTR_SIZE			0x18

 /* Offsets relative to LPSS_PRIVATE_OFFSET */
+#define LPSS_CLK_DIVIDER_DEF_MASK	(BIT(1) | BIT(16))
 #define LPSS_GENERAL			0x08
 #define LPSS_GENERAL_LTR_MODE_SW	BIT(2)
 #define LPSS_GENERAL_UART_RTS_OVRD	BIT(3)

@@ -43,6 +45,8 @@ ACPI_MODULE_NAME("acpi_lpss");

 #define LPSS_TX_INT		0x20
 #define LPSS_TX_INT_MASK	BIT(1)

+#define LPSS_PRV_REG_COUNT	9
+
 struct lpss_shared_clock {
	const char *name;
	unsigned long rate;

@@ -57,7 +61,9 @@ struct lpss_device_desc {

	bool ltr_required;
	unsigned int prv_offset;
	size_t prv_size_override;
+	bool clk_divider;
	bool clk_gate;
+	bool save_ctx;
	struct lpss_shared_clock *shared_clock;
	void (*setup)(struct lpss_private_data *pdata);
 };

@@ -72,6 +78,7 @@ struct lpss_private_data {

	resource_size_t mmio_size;
	struct clk *clk;
	const struct lpss_device_desc *dev_desc;
+	u32 prv_reg_ctx[LPSS_PRV_REG_COUNT];
 };

 static void lpss_uart_setup(struct lpss_private_data *pdata)

@@ -89,6 +96,14 @@ static void lpss_uart_setup(struct lpss_private_data *pdata)

 }

 static struct lpss_device_desc lpt_dev_desc = {
	.clk_required = true,
	.prv_offset = 0x800,
	.ltr_required = true,
+	.clk_divider = true,
+	.clk_gate = true,
+};
+
+static struct lpss_device_desc lpt_i2c_dev_desc = {
+	.clk_required = true,
+	.prv_offset = 0x800,
+	.ltr_required = true,
 };

@@ -99,6 +114,7 @@ static struct lpss_device_desc lpt_uart_dev_desc = {
	.clk_required = true,
	.prv_offset = 0x800,
	.ltr_required = true,
+	.clk_divider = true,
	.clk_gate = true,
	.setup = lpss_uart_setup,
 };

@@ -116,32 +132,25 @@ static struct lpss_shared_clock pwm_clock = {

 static struct lpss_device_desc byt_pwm_dev_desc = {
	.clk_required = true,
+	.save_ctx = true,
	.shared_clock = &pwm_clock,
 };

-static struct lpss_shared_clock uart_clock = {
-	.name = "uart_clk",
-	.rate = 44236800,
-};
-
 static struct lpss_device_desc byt_uart_dev_desc = {
	.clk_required = true,
	.prv_offset = 0x800,
-	.shared_clock = &uart_clock,
+	.clk_divider = true,
+	.clk_gate = true,
+	.save_ctx = true,
	.setup = lpss_uart_setup,
 };

-static struct lpss_shared_clock spi_clock = {
-	.name = "spi_clk",
-	.rate = 50000000,
-};
-
 static struct lpss_device_desc byt_spi_dev_desc = {
	.clk_required = true,
	.prv_offset = 0x400,
-	.shared_clock = &spi_clock,
+	.clk_divider = true,
+	.clk_gate = true,
+	.save_ctx = true,
 };

 static struct lpss_device_desc byt_sdio_dev_desc = {

@@ -156,6 +165,7 @@ static struct lpss_shared_clock i2c_clock = {

 static struct lpss_device_desc byt_i2c_dev_desc = {
	.clk_required = true,
	.prv_offset = 0x800,
+	.save_ctx = true,
	.shared_clock = &i2c_clock,
 };

@@ -166,8 +176,8 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {

	/* Lynxpoint LPSS devices */
	{ "INT33C0", (unsigned long)&lpt_dev_desc },
	{ "INT33C1", (unsigned long)&lpt_dev_desc },
-	{ "INT33C2", (unsigned long)&lpt_dev_desc },
-	{ "INT33C3", (unsigned long)&lpt_dev_desc },
+	{ "INT33C2", (unsigned long)&lpt_i2c_dev_desc },
+	{ "INT33C3", (unsigned long)&lpt_i2c_dev_desc },
	{ "INT33C4", (unsigned long)&lpt_uart_dev_desc },
	{ "INT33C5", (unsigned long)&lpt_uart_dev_desc },
	{ "INT33C6", (unsigned long)&lpt_sdio_dev_desc },

@@ -183,8 +193,8 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {

	{ "INT3430", (unsigned long)&lpt_dev_desc },
	{ "INT3431", (unsigned long)&lpt_dev_desc },
-	{ "INT3432", (unsigned long)&lpt_dev_desc },
-	{ "INT3433", (unsigned long)&lpt_dev_desc },
+	{ "INT3432", (unsigned long)&lpt_i2c_dev_desc },
+	{ "INT3433", (unsigned long)&lpt_i2c_dev_desc },
	{ "INT3434", (unsigned long)&lpt_uart_dev_desc },
	{ "INT3435", (unsigned long)&lpt_uart_dev_desc },
	{ "INT3436", (unsigned long)&lpt_sdio_dev_desc },

@@ -212,9 +222,11 @@ static int register_device_clock(struct acpi_device *adev,

 {
	const struct lpss_device_desc *dev_desc = pdata->dev_desc;
	struct lpss_shared_clock *shared_clock = dev_desc->shared_clock;
+	const char *devname = dev_name(&adev->dev);
	struct clk *clk = ERR_PTR(-ENODEV);
	struct lpss_clk_data *clk_data;
-	const char *parent;
+	const char *parent, *clk_name;
+	void __iomem *prv_base;

	if (!lpss_clk_dev)
		lpt_register_clock_device();

@@ -225,7 +237,7 @@ static int register_device_clock(struct acpi_device *adev,

	if (dev_desc->clkdev_name) {
		clk_register_clkdev(clk_data->clk, dev_desc->clkdev_name,
-				    dev_name(&adev->dev));
+				    devname);
		return 0;
	}

@@ -234,6 +246,7 @@ static int register_device_clock(struct acpi_device *adev,

		return -ENODATA;

	parent = clk_data->name;
+	prv_base = pdata->mmio_base + dev_desc->prv_offset;

	if (shared_clock) {
		clk = shared_clock->clk;

@@ -247,16 +260,41 @@ static int register_device_clock(struct acpi_device *adev,

	}

	if (dev_desc->clk_gate) {
-		clk = clk_register_gate(NULL, dev_name(&adev->dev), parent, 0,
-					pdata->mmio_base + dev_desc->prv_offset,
-					0, 0, NULL);
-		pdata->clk = clk;
+		clk = clk_register_gate(NULL, devname, parent, 0,
+					prv_base, 0, 0, NULL);
+		parent = devname;
+	}
+
+	if (dev_desc->clk_divider) {
+		/* Prevent division by zero */
+		if (!readl(prv_base))
+			writel(LPSS_CLK_DIVIDER_DEF_MASK, prv_base);
+
+		clk_name = kasprintf(GFP_KERNEL, "%s-div", devname);
+		if (!clk_name)
+			return -ENOMEM;
+		clk = clk_register_fractional_divider(NULL, clk_name, parent,
+						      0, prv_base,
+						      1, 15, 16, 15, 0, NULL);
+		parent = clk_name;
+
+		clk_name = kasprintf(GFP_KERNEL, "%s-update", devname);
+		if (!clk_name) {
+			kfree(parent);
+			return -ENOMEM;
+		}
+		clk = clk_register_gate(NULL, clk_name, parent,
+					CLK_SET_RATE_PARENT | CLK_SET_RATE_GATE,
+					prv_base, 31, 0, NULL);
+		kfree(parent);
+		kfree(clk_name);
	}

	if (IS_ERR(clk))
		return PTR_ERR(clk);

-	clk_register_clkdev(clk, NULL, dev_name(&adev->dev));
+	pdata->clk = clk;
+	clk_register_clkdev(clk, NULL, devname);
	return 0;
 }

@@ -267,12 +305,14 @@ static int acpi_lpss_create_device(struct acpi_device *adev,

	struct lpss_private_data *pdata;
	struct resource_list_entry *rentry;
	struct list_head resource_list;
+	struct platform_device *pdev;
	int ret;

	dev_desc = (struct lpss_device_desc *)id->driver_data;
-	if (!dev_desc)
-		return acpi_create_platform_device(adev, id);
+	if (!dev_desc) {
+		pdev = acpi_create_platform_device(adev);
+		return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1;
+	}
	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
	if (!pdata)
		return -ENOMEM;

@@ -322,10 +362,13 @@ static int acpi_lpss_create_device(struct acpi_device *adev,

		dev_desc->setup(pdata);

	adev->driver_data = pdata;
-	ret = acpi_create_platform_device(adev, id);
-	if (ret > 0)
-		return ret;
+	pdev = acpi_create_platform_device(adev);
+	if (!IS_ERR_OR_NULL(pdev)) {
+		device_enable_async_suspend(&pdev->dev);
+		return 1;
+	}

+	ret = PTR_ERR(pdev);
	adev->driver_data = NULL;

  err_out:

@@ -449,6 +492,126 @@ static void acpi_lpss_set_ltr(struct device *dev, s32 val)

	}
 }

+#ifdef CONFIG_PM
+/**
+ * acpi_lpss_save_ctx() - Save the private registers of LPSS device
+ * @dev: LPSS device
+ *
+ * Most LPSS devices have private registers which may lose their context when
+ * the device is powered down. acpi_lpss_save_ctx() saves those registers into
+ * prv_reg_ctx array.
+ */
+static void acpi_lpss_save_ctx(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	unsigned int i;
+
+	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
+		unsigned long offset = i * sizeof(u32);
+
+		pdata->prv_reg_ctx[i] = __lpss_reg_read(pdata, offset);
+		dev_dbg(dev, "saving 0x%08x from LPSS reg at offset 0x%02lx\n",
+			pdata->prv_reg_ctx[i], offset);
+	}
+}
+
+/**
+ * acpi_lpss_restore_ctx() - Restore the private registers of LPSS device
+ * @dev: LPSS device
+ *
+ * Restores the registers that were previously stored with
+ * acpi_lpss_save_ctx().
+ */
+static void acpi_lpss_restore_ctx(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	unsigned int i;
+
+	/*
+	 * The following delay is needed or the subsequent write operations may
+	 * fail. The LPSS devices are actually PCI devices and the PCI spec
+	 * expects 10ms delay before the device can be accessed after D3 to D0
+	 * transition.
+	 */
+	msleep(10);
+
+	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
+		unsigned long offset = i * sizeof(u32);
+
+		__lpss_reg_write(pdata->prv_reg_ctx[i], pdata, offset);
+		dev_dbg(dev, "restoring 0x%08x to LPSS reg at offset 0x%02lx\n",
+			pdata->prv_reg_ctx[i], offset);
+	}
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int acpi_lpss_suspend_late(struct device *dev)
+{
+	int ret = pm_generic_suspend_late(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_save_ctx(dev);
+	return acpi_dev_suspend_late(dev);
+}
+
+static int acpi_lpss_restore_early(struct device *dev)
+{
+	int ret = acpi_dev_resume_early(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_restore_ctx(dev);
+	return pm_generic_resume_early(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+#ifdef CONFIG_PM_RUNTIME
+static int acpi_lpss_runtime_suspend(struct device *dev)
+{
+	int ret = pm_generic_runtime_suspend(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_save_ctx(dev);
+	return acpi_dev_runtime_suspend(dev);
+}
+
+static int acpi_lpss_runtime_resume(struct device *dev)
+{
+	int ret = acpi_dev_runtime_resume(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_restore_ctx(dev);
+	return pm_generic_runtime_resume(dev);
+}
+#endif /* CONFIG_PM_RUNTIME */
+#endif /* CONFIG_PM */
+
+static struct dev_pm_domain acpi_lpss_pm_domain = {
+	.ops = {
+#ifdef CONFIG_PM_SLEEP
+		.suspend_late = acpi_lpss_suspend_late,
+		.restore_early = acpi_lpss_restore_early,
+		.prepare = acpi_subsys_prepare,
+		.complete = acpi_subsys_complete,
+		.suspend = acpi_subsys_suspend,
+		.resume_early = acpi_subsys_resume_early,
+		.freeze = acpi_subsys_freeze,
+		.poweroff = acpi_subsys_suspend,
+		.poweroff_late = acpi_subsys_suspend_late,
+#endif
+#ifdef CONFIG_PM_RUNTIME
+		.runtime_suspend = acpi_lpss_runtime_suspend,
+		.runtime_resume = acpi_lpss_runtime_resume,
+#endif
+	},
+};
+
 static int acpi_lpss_platform_notify(struct notifier_block *nb,
				      unsigned long action, void *data)
 {

@@ -456,7 +619,6 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,

	struct lpss_private_data *pdata;
	struct acpi_device *adev;
	const struct acpi_device_id *id;
-	int ret = 0;

	id = acpi_match_device(acpi_lpss_device_ids, &pdev->dev);
	if (!id || !id->driver_data)

@@ -466,7 +628,7 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,

		return 0;

	pdata = acpi_driver_data(adev);
-	if (!pdata || !pdata->mmio_base || !pdata->dev_desc->ltr_required)
+	if (!pdata || !pdata->mmio_base)
		return 0;

	if (pdata->mmio_size < pdata->dev_desc->prv_offset + LPSS_LTR_SIZE) {

@@ -474,12 +636,27 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,

		return 0;
	}

-	if (action == BUS_NOTIFY_ADD_DEVICE)
-		ret = sysfs_create_group(&pdev->dev.kobj, &lpss_attr_group);
-	else if (action == BUS_NOTIFY_DEL_DEVICE)
-		sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
+	switch (action) {
+	case BUS_NOTIFY_BOUND_DRIVER:
+		if (pdata->dev_desc->save_ctx)
+			pdev->dev.pm_domain = &acpi_lpss_pm_domain;
+		break;
+	case BUS_NOTIFY_UNBOUND_DRIVER:
+		if (pdata->dev_desc->save_ctx)
+			pdev->dev.pm_domain = NULL;
+		break;
+	case BUS_NOTIFY_ADD_DEVICE:
+		if (pdata->dev_desc->ltr_required)
+			return sysfs_create_group(&pdev->dev.kobj,
+						  &lpss_attr_group);
+		break;
+	case BUS_NOTIFY_DEL_DEVICE:
+		if (pdata->dev_desc->ltr_required)
+			sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
+		break;
+	default:
+		break;
+	}

-	return ret;
+	return 0;
 }

 static struct notifier_block acpi_lpss_nb = {

drivers/acpi/acpi_platform.c  (+18 −11)

@@ -31,6 +31,10 @@ static const struct acpi_device_id acpi_platform_device_ids[] = {

	{ "PNP0D40" },
	{ "VPC2004" },
	{ "BCM4752" },
+	{ "LNV4752" },
+	{ "BCM2E1A" },
+	{ "BCM2E39" },
+	{ "BCM2E3D" },

	/* Intel Smart Sound Technology */
	{ "INT33C8" },

@@ -42,7 +46,6 @@ static const struct acpi_device_id acpi_platform_device_ids[] = {

 /**
  * acpi_create_platform_device - Create platform device for ACPI device node
  * @adev: ACPI device node to create a platform device for.
- * @id: ACPI device ID used to match @adev.
  *
  * Check if the given @adev can be represented as a platform device and, if
  * that's the case, create and register a platform device, populate its common

@@ -50,8 +53,7 @@ static const struct acpi_device_id acpi_platform_device_ids[] = {

  *
  * Name of the platform device will be the same as @adev's.
  */
-int acpi_create_platform_device(struct acpi_device *adev,
-				const struct acpi_device_id *id)
+struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
 {
	struct platform_device *pdev = NULL;
	struct acpi_device *acpi_parent;

@@ -63,19 +65,19 @@ int acpi_create_platform_device(struct acpi_device *adev,

	/* If the ACPI node already has a physical device attached, skip it. */
	if (adev->physical_node_count)
-		return 0;
+		return NULL;

	INIT_LIST_HEAD(&resource_list);
	count = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
	if (count < 0) {
-		return 0;
+		return NULL;
	} else if (count > 0) {
		resources = kmalloc(count * sizeof(struct resource), GFP_KERNEL);
		if (!resources) {
			dev_err(&adev->dev, "No memory for resources\n");
			acpi_dev_free_resource_list(&resource_list);
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
		}
		count = 0;
		list_for_each_entry(rentry, &resource_list, node)

@@ -112,22 +114,27 @@ int acpi_create_platform_device(struct acpi_device *adev,

	pdevinfo.num_res = count;
	pdevinfo.acpi_node.companion = adev;
	pdev = platform_device_register_full(&pdevinfo);
-	if (IS_ERR(pdev)) {
+	if (IS_ERR(pdev))
		dev_err(&adev->dev, "platform device creation failed: %ld\n",
			PTR_ERR(pdev));
-		pdev = NULL;
-	} else {
+	else
		dev_dbg(&adev->dev, "created platform device %s\n",
			dev_name(&pdev->dev));
-	}

	kfree(resources);
-	return 1;
+	return pdev;
 }

+static int acpi_platform_attach(struct acpi_device *adev,
+				const struct acpi_device_id *id)
+{
+	acpi_create_platform_device(adev);
+	return 1;
+}
+
 static struct acpi_scan_handler platform_handler = {
	.ids = acpi_platform_device_ids,
-	.attach = acpi_create_platform_device,
+	.attach = acpi_platform_attach,
 };

 void __init acpi_platform_init(void)
Documentation/power/devices.txt +30 −4 Original line number Diff line number Diff line Loading @@ -2,6 +2,7 @@ Device Power Management Copyright (c) 2010-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc. Copyright (c) 2010 Alan Stern <stern@rowland.harvard.edu> Copyright (c) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com> Most of the code in Linux is device drivers, so most of the Linux power Loading Loading @@ -326,6 +327,20 @@ the phases are: driver in some way for the upcoming system power transition, but it should not put the device into a low-power state. For devices supporting runtime power management, the return value of the prepare callback can be used to indicate to the PM core that it may safely leave the device in runtime suspend (if runtime-suspended already), provided that all of the device's descendants are also left in runtime suspend. Namely, if the prepare callback returns a positive number and that happens for all of the descendants of the device too, and all of them (including the device itself) are runtime-suspended, the PM core will skip the suspend, suspend_late and suspend_noirq suspend phases as well as the resume_noirq, resume_early and resume phases of the following system resume for all of these devices. In that case, the complete callback will be called directly after the prepare callback and is entirely responsible for bringing the device back to the functional state as appropriate. 2. The suspend methods should quiesce the device to stop it from performing I/O. They also may save the device registers and put it into the appropriate low-power state, depending on the bus type the device is on, Loading Loading @@ -400,12 +415,23 @@ When resuming from freeze, standby or memory sleep, the phases are: the resume callbacks occur; it's not necessary to wait until the complete phase. 
Moreover, if the preceding prepare callback returned a positive number, the device may have been left in runtime suspend throughout the whole system suspend and resume (the suspend, suspend_late, suspend_noirq phases of system suspend and the resume_noirq, resume_early, resume phases of system resume may have been skipped for it). In that case, the complete callback is entirely responsible for bringing the device back to the functional state after system suspend if necessary. [For example, it may need to queue up a runtime resume request for the device for this purpose.] To check if that is the case, the complete callback can consult the device's power.direct_complete flag. Namely, if that flag is set when the complete callback is being run, it has been called directly after the preceding prepare and special action may be required to make the device work correctly afterward. At the end of these phases, drivers should be as functional as they were before suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are gated on. Even if the device was in a low-power state before the system sleep because of runtime power management, afterwards it should be back in its full-power state. There are multiple reasons why it's best to do this; they are discussed in more detail in Documentation/power/runtime_pm.txt. gated on. However, the details here may again be platform-specific. For example, some systems support multiple "run" states, and the mode in effect at Loading
Documentation/power/runtime_pm.txt +17 −0 Original line number Diff line number Diff line Loading @@ -2,6 +2,7 @@ Runtime Power Management Framework for I/O Devices (C) 2009-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc. (C) 2010 Alan Stern <stern@rowland.harvard.edu> (C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com> 1. Introduction Loading Loading @@ -444,6 +445,10 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h: bool pm_runtime_status_suspended(struct device *dev); - return true if the device's runtime PM status is 'suspended' bool pm_runtime_suspended_if_enabled(struct device *dev); - return true if the device's runtime PM status is 'suspended' and its 'power.disable_depth' field is equal to 1 void pm_runtime_allow(struct device *dev); - set the power.runtime_auto flag for the device and decrease its usage counter (used by the /sys/devices/.../power/control interface to Loading Loading @@ -644,6 +649,18 @@ place (in particular, if the system is not waking up from hibernation), it may be more efficient to leave the devices that had been suspended before the system suspend began in the suspended state. To this end, the PM core provides a mechanism allowing some coordination between different levels of device hierarchy. Namely, if a system suspend .prepare() callback returns a positive number for a device, that indicates to the PM core that the device appears to be runtime-suspended and its state is fine, so it may be left in runtime suspend provided that all of its descendants are also left in runtime suspend. If that happens, the PM core will not execute any system suspend and resume callbacks for all of those devices, except for the complete callback, which is then entirely responsible for handling the device as appropriate. This only applies to system suspend transitions that are not related to hibernation (see Documentation/power/devices.txt for more information). 
The PM core does its best to reduce the probability of race conditions between the runtime PM and system suspend/resume (and hibernation) callbacks by carrying out the following operations: Loading
Documentation/power/swsusp.txt +4 −1 Original line number Diff line number Diff line Loading @@ -220,7 +220,10 @@ Q: After resuming, system is paging heavily, leading to very bad interactivity. A: Try running cat `cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u` > /dev/null cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u | while read file do test -f "$file" && cat "$file" > /dev/null done after resume. swapoff -a; swapon -a may also be useful. Loading
drivers/acpi/acpi_lpss.c +213 −36 Original line number Diff line number Diff line Loading @@ -19,6 +19,7 @@ #include <linux/platform_device.h> #include <linux/platform_data/clk-lpss.h> #include <linux/pm_runtime.h> #include <linux/delay.h> #include "internal.h" Loading @@ -28,6 +29,7 @@ ACPI_MODULE_NAME("acpi_lpss"); #define LPSS_LTR_SIZE 0x18 /* Offsets relative to LPSS_PRIVATE_OFFSET */ #define LPSS_CLK_DIVIDER_DEF_MASK (BIT(1) | BIT(16)) #define LPSS_GENERAL 0x08 #define LPSS_GENERAL_LTR_MODE_SW BIT(2) #define LPSS_GENERAL_UART_RTS_OVRD BIT(3) Loading @@ -43,6 +45,8 @@ ACPI_MODULE_NAME("acpi_lpss"); #define LPSS_TX_INT 0x20 #define LPSS_TX_INT_MASK BIT(1) #define LPSS_PRV_REG_COUNT 9 struct lpss_shared_clock { const char *name; unsigned long rate; Loading @@ -57,7 +61,9 @@ struct lpss_device_desc { bool ltr_required; unsigned int prv_offset; size_t prv_size_override; bool clk_divider; bool clk_gate; bool save_ctx; struct lpss_shared_clock *shared_clock; void (*setup)(struct lpss_private_data *pdata); }; Loading @@ -72,6 +78,7 @@ struct lpss_private_data { resource_size_t mmio_size; struct clk *clk; const struct lpss_device_desc *dev_desc; u32 prv_reg_ctx[LPSS_PRV_REG_COUNT]; }; static void lpss_uart_setup(struct lpss_private_data *pdata) Loading @@ -89,6 +96,14 @@ static void lpss_uart_setup(struct lpss_private_data *pdata) } static struct lpss_device_desc lpt_dev_desc = { .clk_required = true, .prv_offset = 0x800, .ltr_required = true, .clk_divider = true, .clk_gate = true, }; static struct lpss_device_desc lpt_i2c_dev_desc = { .clk_required = true, .prv_offset = 0x800, .ltr_required = true, Loading @@ -99,6 +114,7 @@ static struct lpss_device_desc lpt_uart_dev_desc = { .clk_required = true, .prv_offset = 0x800, .ltr_required = true, .clk_divider = true, .clk_gate = true, .setup = lpss_uart_setup, }; Loading @@ -116,32 +132,25 @@ static struct lpss_shared_clock pwm_clock = { static struct lpss_device_desc byt_pwm_dev_desc = { .clk_required = true, .save_ctx = 
true, .shared_clock = &pwm_clock, }; static struct lpss_shared_clock uart_clock = { .name = "uart_clk", .rate = 44236800, }; static struct lpss_device_desc byt_uart_dev_desc = { .clk_required = true, .prv_offset = 0x800, .clk_divider = true, .clk_gate = true, .shared_clock = &uart_clock, .save_ctx = true, .setup = lpss_uart_setup, }; static struct lpss_shared_clock spi_clock = { .name = "spi_clk", .rate = 50000000, }; static struct lpss_device_desc byt_spi_dev_desc = { .clk_required = true, .prv_offset = 0x400, .clk_divider = true, .clk_gate = true, .shared_clock = &spi_clock, .save_ctx = true, }; static struct lpss_device_desc byt_sdio_dev_desc = { Loading @@ -156,6 +165,7 @@ static struct lpss_shared_clock i2c_clock = { static struct lpss_device_desc byt_i2c_dev_desc = { .clk_required = true, .prv_offset = 0x800, .save_ctx = true, .shared_clock = &i2c_clock, }; Loading @@ -166,8 +176,8 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = { /* Lynxpoint LPSS devices */ { "INT33C0", (unsigned long)&lpt_dev_desc }, { "INT33C1", (unsigned long)&lpt_dev_desc }, { "INT33C2", (unsigned long)&lpt_dev_desc }, { "INT33C3", (unsigned long)&lpt_dev_desc }, { "INT33C2", (unsigned long)&lpt_i2c_dev_desc }, { "INT33C3", (unsigned long)&lpt_i2c_dev_desc }, { "INT33C4", (unsigned long)&lpt_uart_dev_desc }, { "INT33C5", (unsigned long)&lpt_uart_dev_desc }, { "INT33C6", (unsigned long)&lpt_sdio_dev_desc }, Loading @@ -183,8 +193,8 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = { { "INT3430", (unsigned long)&lpt_dev_desc }, { "INT3431", (unsigned long)&lpt_dev_desc }, { "INT3432", (unsigned long)&lpt_dev_desc }, { "INT3433", (unsigned long)&lpt_dev_desc }, { "INT3432", (unsigned long)&lpt_i2c_dev_desc }, { "INT3433", (unsigned long)&lpt_i2c_dev_desc }, { "INT3434", (unsigned long)&lpt_uart_dev_desc }, { "INT3435", (unsigned long)&lpt_uart_dev_desc }, { "INT3436", (unsigned long)&lpt_sdio_dev_desc }, Loading Loading @@ -212,9 +222,11 @@ static int 
 static int register_device_clock(struct acpi_device *adev,
				  struct lpss_private_data *pdata)
 {
 	const struct lpss_device_desc *dev_desc = pdata->dev_desc;
 	struct lpss_shared_clock *shared_clock = dev_desc->shared_clock;
 	const char *devname = dev_name(&adev->dev);
 	struct clk *clk = ERR_PTR(-ENODEV);
 	struct lpss_clk_data *clk_data;
-	const char *parent;
+	const char *parent, *clk_name;
+	void __iomem *prv_base;
 
 	if (!lpss_clk_dev)
 		lpt_register_clock_device();
@@ -225,7 +237,7 @@ static int register_device_clock(struct acpi_device *adev,
 	if (dev_desc->clkdev_name) {
 		clk_register_clkdev(clk_data->clk, dev_desc->clkdev_name,
-				    dev_name(&adev->dev));
+				    devname);
 		return 0;
 	}
@@ -234,6 +246,7 @@ static int register_device_clock(struct acpi_device *adev,
 		return -ENODATA;
 
 	parent = clk_data->name;
+	prv_base = pdata->mmio_base + dev_desc->prv_offset;
 
 	if (shared_clock) {
 		clk = shared_clock->clk;
@@ -247,16 +260,41 @@ static int register_device_clock(struct acpi_device *adev,
 	}
 
 	if (dev_desc->clk_gate) {
-		clk = clk_register_gate(NULL, dev_name(&adev->dev), parent, 0,
-					pdata->mmio_base + dev_desc->prv_offset,
-					0, 0, NULL);
-		pdata->clk = clk;
+		clk = clk_register_gate(NULL, devname, parent, 0,
+					prv_base, 0, 0, NULL);
+		parent = devname;
+	}
+
+	if (dev_desc->clk_divider) {
+		/* Prevent division by zero */
+		if (!readl(prv_base))
+			writel(LPSS_CLK_DIVIDER_DEF_MASK, prv_base);
+
+		clk_name = kasprintf(GFP_KERNEL, "%s-div", devname);
+		if (!clk_name)
+			return -ENOMEM;
+		clk = clk_register_fractional_divider(NULL, clk_name, parent,
+						      0, prv_base,
+						      1, 15, 16, 15, 0, NULL);
+		parent = clk_name;
+
+		clk_name = kasprintf(GFP_KERNEL, "%s-update", devname);
+		if (!clk_name) {
+			kfree(parent);
+			return -ENOMEM;
+		}
+		clk = clk_register_gate(NULL, clk_name, parent,
+					CLK_SET_RATE_PARENT | CLK_SET_RATE_GATE,
+					prv_base, 31, 0, NULL);
+		kfree(parent);
+		kfree(clk_name);
 	}
 
 	if (IS_ERR(clk))
 		return PTR_ERR(clk);
 
-	clk_register_clkdev(clk, NULL, dev_name(&adev->dev));
+	pdata->clk = clk;
+	clk_register_clkdev(clk, NULL, devname);
 	return 0;
 }
@@ -267,12 +305,14 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
 	struct lpss_private_data *pdata;
 	struct resource_list_entry *rentry;
 	struct list_head resource_list;
+	struct platform_device *pdev;
 	int ret;
 
 	dev_desc = (struct lpss_device_desc *)id->driver_data;
-	if (!dev_desc)
-		return acpi_create_platform_device(adev, id);
+	if (!dev_desc) {
+		pdev = acpi_create_platform_device(adev);
+		return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1;
+	}
 	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
 	if (!pdata)
 		return -ENOMEM;
@@ -322,10 +362,13 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
 		dev_desc->setup(pdata);
 
 	adev->driver_data = pdata;
-	ret = acpi_create_platform_device(adev, id);
-	if (ret > 0)
-		return ret;
+	pdev = acpi_create_platform_device(adev);
+	if (!IS_ERR_OR_NULL(pdev)) {
+		device_enable_async_suspend(&pdev->dev);
+		return 1;
+	}
 
+	ret = PTR_ERR(pdev);
 	adev->driver_data = NULL;
 
  err_out:
@@ -449,6 +492,126 @@ static void acpi_lpss_set_ltr(struct device *dev, s32 val)
 	}
 }
 
+#ifdef CONFIG_PM
+/**
+ * acpi_lpss_save_ctx() - Save the private registers of LPSS device
+ * @dev: LPSS device
+ *
+ * Most LPSS devices have private registers which may lose their context when
+ * the device is powered down. acpi_lpss_save_ctx() saves those registers into
+ * the prv_reg_ctx array.
+ */
+static void acpi_lpss_save_ctx(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	unsigned int i;
+
+	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
+		unsigned long offset = i * sizeof(u32);
+
+		pdata->prv_reg_ctx[i] = __lpss_reg_read(pdata, offset);
+		dev_dbg(dev, "saving 0x%08x from LPSS reg at offset 0x%02lx\n",
+			pdata->prv_reg_ctx[i], offset);
+	}
+}
+
+/**
+ * acpi_lpss_restore_ctx() - Restore the private registers of LPSS device
+ * @dev: LPSS device
+ *
+ * Restores the registers that were previously stored with acpi_lpss_save_ctx().
+ */
+static void acpi_lpss_restore_ctx(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	unsigned int i;
+
+	/*
+	 * The following delay is needed or the subsequent write operations may
+	 * fail. The LPSS devices are actually PCI devices and the PCI spec
+	 * expects 10ms delay before the device can be accessed after D3 to D0
+	 * transition.
+	 */
+	msleep(10);
+
+	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
+		unsigned long offset = i * sizeof(u32);
+
+		__lpss_reg_write(pdata->prv_reg_ctx[i], pdata, offset);
+		dev_dbg(dev, "restoring 0x%08x to LPSS reg at offset 0x%02lx\n",
+			pdata->prv_reg_ctx[i], offset);
+	}
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int acpi_lpss_suspend_late(struct device *dev)
+{
+	int ret = pm_generic_suspend_late(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_save_ctx(dev);
+	return acpi_dev_suspend_late(dev);
+}
+
+static int acpi_lpss_restore_early(struct device *dev)
+{
+	int ret = acpi_dev_resume_early(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_restore_ctx(dev);
+	return pm_generic_resume_early(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+#ifdef CONFIG_PM_RUNTIME
+static int acpi_lpss_runtime_suspend(struct device *dev)
+{
+	int ret = pm_generic_runtime_suspend(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_save_ctx(dev);
+	return acpi_dev_runtime_suspend(dev);
+}
+
+static int acpi_lpss_runtime_resume(struct device *dev)
+{
+	int ret = acpi_dev_runtime_resume(dev);
+
+	if (ret)
+		return ret;
+
+	acpi_lpss_restore_ctx(dev);
+	return pm_generic_runtime_resume(dev);
+}
+#endif /* CONFIG_PM_RUNTIME */
+#endif /* CONFIG_PM */
+
+static struct dev_pm_domain acpi_lpss_pm_domain = {
+	.ops = {
+#ifdef CONFIG_PM_SLEEP
+		.suspend_late = acpi_lpss_suspend_late,
+		.restore_early = acpi_lpss_restore_early,
+		.prepare = acpi_subsys_prepare,
+		.complete = acpi_subsys_complete,
+		.suspend = acpi_subsys_suspend,
+		.resume_early = acpi_subsys_resume_early,
+		.freeze = acpi_subsys_freeze,
+		.poweroff = acpi_subsys_suspend,
+		.poweroff_late = acpi_subsys_suspend_late,
+#endif
+#ifdef CONFIG_PM_RUNTIME
+		.runtime_suspend = acpi_lpss_runtime_suspend,
+		.runtime_resume = acpi_lpss_runtime_resume,
+#endif
+	},
+};
+
 static int acpi_lpss_platform_notify(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
@@ -456,7 +619,6 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
 	struct lpss_private_data *pdata;
 	struct acpi_device *adev;
 	const struct acpi_device_id *id;
-	int ret = 0;
 
 	id = acpi_match_device(acpi_lpss_device_ids, &pdev->dev);
 	if (!id || !id->driver_data)
@@ -466,7 +628,7 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
 		return 0;
 
 	pdata = acpi_driver_data(adev);
-	if (!pdata || !pdata->mmio_base || !pdata->dev_desc->ltr_required)
+	if (!pdata || !pdata->mmio_base)
 		return 0;
 
 	if (pdata->mmio_size < pdata->dev_desc->prv_offset + LPSS_LTR_SIZE) {
@@ -474,12 +636,27 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
 		return 0;
 	}
 
-	if (action == BUS_NOTIFY_ADD_DEVICE)
-		ret = sysfs_create_group(&pdev->dev.kobj, &lpss_attr_group);
-	else if (action == BUS_NOTIFY_DEL_DEVICE)
+	switch (action) {
+	case BUS_NOTIFY_BOUND_DRIVER:
+		if (pdata->dev_desc->save_ctx)
+			pdev->dev.pm_domain = &acpi_lpss_pm_domain;
+		break;
+	case BUS_NOTIFY_UNBOUND_DRIVER:
+		if (pdata->dev_desc->save_ctx)
+			pdev->dev.pm_domain = NULL;
+		break;
+	case BUS_NOTIFY_ADD_DEVICE:
+		if (pdata->dev_desc->ltr_required)
+			return sysfs_create_group(&pdev->dev.kobj,
+						  &lpss_attr_group);
+	case BUS_NOTIFY_DEL_DEVICE:
+		if (pdata->dev_desc->ltr_required)
 			sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
+	default:
+		break;
+	}
 
-	return ret;
+	return 0;
 }
 
 static struct notifier_block acpi_lpss_nb = {
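The save/restore pair added above can be illustrated outside the kernel. The sketch below mimics the shape of acpi_lpss_save_ctx()/acpi_lpss_restore_ctx() with a plain array standing in for the MMIO private-register window; the fake_device type, the PRV_REG_COUNT value, and the helper names are illustrative stand-ins, not the driver's own API.

```c
#include <stdint.h>

/* Illustrative stand-in for the LPSS private-register window. The real
 * driver accesses the registers through __lpss_reg_read()/__lpss_reg_write()
 * at prv_offset inside the device's MMIO space; here a plain array plays
 * the role of the hardware. */
#define PRV_REG_COUNT 9			/* assumed count, for illustration */

struct fake_device {
	uint32_t regs[PRV_REG_COUNT];		/* "hardware" registers */
	uint32_t prv_reg_ctx[PRV_REG_COUNT];	/* saved context */
};

/* Like acpi_lpss_save_ctx(): snapshot every 32-bit private register
 * before the device is powered down. */
static void save_ctx(struct fake_device *dev)
{
	for (unsigned int i = 0; i < PRV_REG_COUNT; i++)
		dev->prv_reg_ctx[i] = dev->regs[i];
}

/* Like acpi_lpss_restore_ctx(): write the snapshot back after power-up.
 * (The real driver additionally sleeps 10 ms first, per the PCI rule for
 * accessing a device after a D3-to-D0 transition.) */
static void restore_ctx(struct fake_device *dev)
{
	for (unsigned int i = 0; i < PRV_REG_COUNT; i++)
		dev->regs[i] = dev->prv_reg_ctx[i];
}
```

In the patch these two operations bracket the power transition: save_ctx runs from suspend_late and runtime_suspend before ACPI powers the device off, and restore_ctx runs from restore_early and runtime_resume after power returns.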
drivers/acpi/acpi_platform.c  +18 −11

@@ -31,6 +31,10 @@ static const struct acpi_device_id acpi_platform_device_ids[] = {
 	{ "PNP0D40" },
 	{ "VPC2004" },
 	{ "BCM4752" },
+	{ "LNV4752" },
+	{ "BCM2E1A" },
+	{ "BCM2E39" },
+	{ "BCM2E3D" },
 
 	/* Intel Smart Sound Technology */
 	{ "INT33C8" },
@@ -42,7 +46,6 @@ static const struct acpi_device_id acpi_platform_device_ids[] = {
 /**
  * acpi_create_platform_device - Create platform device for ACPI device node
  * @adev: ACPI device node to create a platform device for.
- * @id: ACPI device ID used to match @adev.
  *
  * Check if the given @adev can be represented as a platform device and, if
  * that's the case, create and register a platform device, populate its common
@@ -50,8 +53,7 @@ static const struct acpi_device_id acpi_platform_device_ids[] = {
  *
  * Name of the platform device will be the same as @adev's.
  */
-int acpi_create_platform_device(struct acpi_device *adev,
-				const struct acpi_device_id *id)
+struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
 {
 	struct platform_device *pdev = NULL;
 	struct acpi_device *acpi_parent;
@@ -63,19 +65,19 @@ int acpi_create_platform_device(struct acpi_device *adev,
 	/* If the ACPI node already has a physical device attached, skip it. */
 	if (adev->physical_node_count)
-		return 0;
+		return NULL;
 
 	INIT_LIST_HEAD(&resource_list);
 	count = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
 	if (count < 0) {
-		return 0;
+		return NULL;
 	} else if (count > 0) {
 		resources = kmalloc(count * sizeof(struct resource), GFP_KERNEL);
 		if (!resources) {
 			dev_err(&adev->dev, "No memory for resources\n");
 			acpi_dev_free_resource_list(&resource_list);
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 		}
 		count = 0;
 		list_for_each_entry(rentry, &resource_list, node)
@@ -112,22 +114,27 @@ int acpi_create_platform_device(struct acpi_device *adev,
 	pdevinfo.num_res = count;
 	pdevinfo.acpi_node.companion = adev;
 	pdev = platform_device_register_full(&pdevinfo);
-	if (IS_ERR(pdev)) {
+	if (IS_ERR(pdev))
 		dev_err(&adev->dev, "platform device creation failed: %ld\n",
 			PTR_ERR(pdev));
-		pdev = NULL;
-	} else {
+	else
 		dev_dbg(&adev->dev, "created platform device %s\n",
 			dev_name(&pdev->dev));
-	}
 
 	kfree(resources);
 	return pdev;
 }
 
+static int acpi_platform_attach(struct acpi_device *adev,
+				const struct acpi_device_id *id)
+{
+	acpi_create_platform_device(adev);
+	return 1;
+}
+
 static struct acpi_scan_handler platform_handler = {
 	.ids = acpi_platform_device_ids,
-	.attach = acpi_create_platform_device,
+	.attach = acpi_platform_attach,
 };
 
 void __init acpi_platform_init(void)
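The switch to returning struct platform_device * leans on the kernel's ERR_PTR convention, in which a small negative errno is encoded directly in the pointer value, so a single return slot can distinguish "nothing to do" (NULL), a hard failure (ERR_PTR), and a valid object. A minimal userspace sketch follows; the ERR_PTR/PTR_ERR/IS_ERR names mirror the kernel's <linux/err.h> helpers but are local re-implementations here, and create_object() is a hypothetical stand-in for acpi_create_platform_device().

```c
#include <stddef.h>

/* Local userspace stand-ins for <linux/err.h>: pointer values in the top
 * 4095 addresses are treated as encoded negative errnos, not addresses. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
static inline int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}

#define MY_ENOMEM 12		/* illustrative errno value */

static int dummy_obj;		/* stands in for a created device */

/* Hypothetical creator with the same three outcomes as the patched
 * acpi_create_platform_device(): NULL means "nothing to create", an
 * ERR_PTR encodes a hard failure, any other pointer is success. */
static void *create_object(int mode)
{
	if (mode == 0)
		return NULL;			/* skip: already handled */
	if (mode == 1)
		return ERR_PTR(-MY_ENOMEM);	/* allocation failed */
	return &dummy_obj;			/* success */
}
```

This is why acpi_lpss_create_device() can fold all three cases back into its historical int contract with `IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1`: PTR_ERR(NULL) is 0, so the NULL outcome naturally maps to "device not claimed" while an ERR_PTR propagates its errno.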