Commit fc82e1d5 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6: (21 commits)
  PM / Hibernate: Reduce autotuned default image size
  PM / Core: Introduce struct syscore_ops for core subsystems PM
  PM QoS: Make pm_qos settings readable
  PM / OPP: opp_find_freq_exact() documentation fix
  PM: Documentation/power/states.txt: fix repetition
  PM: Make system-wide PM and runtime PM treat subsystems consistently
  PM: Simplify kernel/power/Kconfig
  PM: Add support for device power domains
  PM: Drop pm_flags that is not necessary
  PM: Allow pm_runtime_suspend() to succeed during system suspend
  PM: Clean up PM_TRACE dependencies and drop unnecessary Kconfig option
  PM: Remove CONFIG_PM_OPS
  PM: Reorder power management Kconfig options
  PM: Make CONFIG_PM depend on (CONFIG_PM_SLEEP || CONFIG_PM_RUNTIME)
  PM / ACPI: Remove references to pm_flags from bus.c
  PM: Do not create wakeup sysfs files for devices that cannot wake up
  USB / Hub: Do not call device_set_wakeup_capable() under spinlock
  PM: Use appropriate printk() priority level in trace.c
  PM / Wakeup: Don't update events_check_enabled in pm_get_wakeup_count()
  PM / Wakeup: Make pm_save_wakeup_count() work as documented
  ...
parents 48d5f673 bea3864f
@@ -29,9 +29,8 @@ Description:
 		"disabled" to it.
 		For the devices that are not capable of generating system wakeup
-		events this file contains "\n".  In that cases the user space
-		cannot modify the contents of this file and the device cannot be
-		enabled to wake up the system.
+		events this file is not present.  In that case the device cannot
+		be enabled to wake up the system from sleep states.

 What:		/sys/devices/.../power/control
 Date:		January 2009

@@ -85,7 +84,7 @@ Description:
 		The /sys/devices/.../wakeup_count attribute contains the number
 		of signaled wakeup events associated with the device.  This
 		attribute is read-only.  If the device is not enabled to wake up
-		the system from sleep states, this attribute is empty.
+		the system from sleep states, this attribute is not present.

 What:		/sys/devices/.../power/wakeup_active_count
 Date:		September 2010

@@ -95,7 +94,7 @@ Description:
 		number of times the processing of wakeup events associated with
 		the device was completed (at the kernel level).  This attribute
 		is read-only.  If the device is not enabled to wake up the
-		system from sleep states, this attribute is empty.
+		system from sleep states, this attribute is not present.

 What:		/sys/devices/.../power/wakeup_hit_count
 Date:		September 2010

@@ -105,7 +104,8 @@ Description:
 		number of times the processing of a wakeup event associated with
 		the device might prevent the system from entering a sleep state.
 		This attribute is read-only.  If the device is not enabled to
-		wake up the system from sleep states, this attribute is empty.
+		wake up the system from sleep states, this attribute is not
+		present.

 What:		/sys/devices/.../power/wakeup_active
 Date:		September 2010

@@ -115,7 +115,7 @@ Description:
 		or 0, depending on whether or not a wakeup event associated with
 		the device is being processed (1).  This attribute is read-only.
 		If the device is not enabled to wake up the system from sleep
-		states, this attribute is empty.
+		states, this attribute is not present.

 What:		/sys/devices/.../power/wakeup_total_time_ms
 Date:		September 2010

@@ -125,7 +125,7 @@ Description:
 		the total time of processing wakeup events associated with the
 		device, in milliseconds.  This attribute is read-only.  If the
 		device is not enabled to wake up the system from sleep states,
-		this attribute is empty.
+		this attribute is not present.

 What:		/sys/devices/.../power/wakeup_max_time_ms
 Date:		September 2010

@@ -135,7 +135,7 @@ Description:
 		the maximum time of processing a single wakeup event associated
 		with the device, in milliseconds.  This attribute is read-only.
 		If the device is not enabled to wake up the system from sleep
-		states, this attribute is empty.
+		states, this attribute is not present.

 What:		/sys/devices/.../power/wakeup_last_time_ms
 Date:		September 2010

@@ -146,7 +146,7 @@ Description:
 		signaling the last wakeup event associated with the device, in
 		milliseconds.  This attribute is read-only.  If the device is
 		not enabled to wake up the system from sleep states, this
-		attribute is empty.
+		attribute is not present.

 What:		/sys/devices/.../power/autosuspend_delay_ms
 Date:		September 2010
...
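The practical effect of the ABI change above: a device that cannot generate wakeup events no longer exposes empty attributes, its power/wakeup file (and the wakeup_* statistics files) are simply absent. A minimal user-space sketch of a presence check; the device path is purely illustrative, not taken from this patch:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* example path only; substitute a real device under /sys/devices */
	const char *path = "/sys/devices/platform/serial8250/power/wakeup";
	int fd = open(path, O_RDONLY);

	if (fd < 0 && errno == ENOENT) {
		printf("no power/wakeup attribute: device cannot wake the system\n");
	} else if (fd >= 0) {
		printf("power/wakeup exists: wakeup is configurable for this device\n");
		close(fd);
	}
	return 0;
}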
 Device Power Management

-Copyright (c) 2010 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
+Copyright (c) 2010-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 Copyright (c) 2010 Alan Stern <stern@rowland.harvard.edu>
@@ -159,18 +159,18 @@ matter, and the kernel is responsible for keeping track of it.  By contrast,
 whether or not a wakeup-capable device should issue wakeup events is a policy
 decision, and it is managed by user space through a sysfs attribute: the
 power/wakeup file.  User space can write the strings "enabled" or "disabled" to
-set or clear the should_wakeup flag, respectively.  Reads from the file will
-return the corresponding string if can_wakeup is true, but if can_wakeup is
-false then reads will return an empty string, to indicate that the device
-doesn't support wakeup events.  (But even though the file appears empty, writes
-will still affect the should_wakeup flag.)
+set or clear the "should_wakeup" flag, respectively.  This file is only present
+for wakeup-capable devices (i.e. devices whose "can_wakeup" flags are set)
+and is created (or removed) by device_set_wakeup_capable().  Reads from the
+file will return the corresponding string.

 The device_may_wakeup() routine returns true only if both flags are set.
-Drivers should check this routine when putting devices in a low-power state
-during a system sleep transition, to see whether or not to enable the devices'
-wakeup mechanisms.  However for runtime power management, wakeup events should
-be enabled whenever the device and driver both support them, regardless of the
-should_wakeup flag.
+This information is used by subsystems, like the PCI bus type code, to see
+whether or not to enable the devices' wakeup mechanisms.  If device wakeup
+mechanisms are enabled or disabled directly by drivers, they also should use
+device_may_wakeup() to decide what to do during a system sleep transition.
+However for runtime power management, wakeup events should be enabled whenever
+the device and driver both support them, regardless of the should_wakeup flag.
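A minimal sketch of the driver-side usage described above; the foo_* names and the IRQ-based wakeup mechanism are hypothetical and not part of this patch. The driver arms its wakeup source during a system sleep transition only when device_may_wakeup() reports that both flags are set:

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/pm_wakeup.h>

struct foo_priv {
	int irq;
};

static int foo_suspend(struct device *dev)
{
	struct foo_priv *priv = dev_get_drvdata(dev);

	if (device_may_wakeup(dev))
		enable_irq_wake(priv->irq);	/* keep the wakeup IRQ armed across sleep */

	/* put the hardware into its low-power state here */
	return 0;
}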
/sys/devices/.../power/control files
@@ -249,23 +249,18 @@ various phases always run after tasks have been frozen and before they are
 unfrozen.  Furthermore, the *_noirq phases run at a time when IRQ handlers have
 been disabled (except for those marked with the IRQ_WAKEUP flag).

-Most phases use bus, type, and class callbacks (that is, methods defined in
-dev->bus->pm, dev->type->pm, and dev->class->pm).  The prepare and complete
-phases are exceptions; they use only bus callbacks.  When multiple callbacks
-are used in a phase, they are invoked in the order: <class, type, bus> during
-power-down transitions and in the opposite order during power-up transitions.
-For example, during the suspend phase the PM core invokes
-
-	dev->class->pm.suspend(dev);
-	dev->type->pm.suspend(dev);
-	dev->bus->pm.suspend(dev);
-
-before moving on to the next device, whereas during the resume phase the core
-invokes
-
-	dev->bus->pm.resume(dev);
-	dev->type->pm.resume(dev);
-	dev->class->pm.resume(dev);
+All phases use bus, type, or class callbacks (that is, methods defined in
+dev->bus->pm, dev->type->pm, or dev->class->pm).  These callbacks are mutually
+exclusive, so if the device type provides a struct dev_pm_ops object pointed to
+by its pm field (i.e. both dev->type and dev->type->pm are defined), the
+callbacks included in that object (i.e. dev->type->pm) will be used.  Otherwise,
+if the class provides a struct dev_pm_ops object pointed to by its pm field
+(i.e. both dev->class and dev->class->pm are defined), the PM core will use the
+callbacks from that object (i.e. dev->class->pm).  Finally, if the pm fields of
+both the device type and class objects are NULL (or those objects do not exist),
+the callbacks provided by the bus (that is, the callbacks from dev->bus->pm)
+will be used (this allows device types to override callbacks provided by bus
+types or classes if necessary).

 These callbacks may in turn invoke device- or driver-specific methods stored in
 dev->driver->pm, but they don't have to.
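The precedence rule described above can be summed up in a small helper. This is only an illustrative sketch of the selection order, not the PM core's actual code (the real logic lives in drivers/base/power/main.c and runtime.c, parts of which appear later in this diff):

#include <linux/device.h>
#include <linux/pm.h>

static const struct dev_pm_ops *pick_subsystem_pm_ops(struct device *dev)
{
	if (dev->type && dev->type->pm)
		return dev->type->pm;	/* device type wins ... */
	if (dev->class && dev->class->pm)
		return dev->class->pm;	/* ... then the device class ... */
	if (dev->bus && dev->bus->pm)
		return dev->bus->pm;	/* ... then the bus type */
	return NULL;			/* no subsystem callbacks at all */
}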
@@ -507,6 +502,49 @@ routines.  Nevertheless, different callback pointers are used in case there is a
 situation where it actually matters.
Device Power Domains
--------------------
Sometimes devices share reference clocks or other power resources. In those
cases it generally is not possible to put devices into low-power states
individually. Instead, a set of devices sharing a power resource can be put
into a low-power state together at the same time by turning off the shared
power resource. Of course, they also need to be put into the full-power state
together, by turning the shared power resource on. A set of devices with this
property is often referred to as a power domain.
Support for power domains is provided through the pwr_domain field of struct
device. This field is a pointer to an object of type struct dev_power_domain,
defined in include/linux/pm.h, providing a set of power management callbacks
analogous to the subsystem-level and device driver callbacks that are executed
for the given device during all power transitions, in addition to the respective
subsystem-level callbacks. Specifically, the power domain "suspend" callbacks
(i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are
executed after the analogous subsystem-level callbacks, while the power domain
"resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore,
etc.) are executed before the analogous subsystem-level callbacks. Error codes
returned by the "suspend" and "resume" power domain callbacks are ignored.
The power domain ->runtime_idle() callback is executed before the subsystem-level
->runtime_idle() callback and the result returned by it is not ignored.  Namely,
if it returns an error code, the subsystem-level ->runtime_idle() callback will
not be called and the helper function rpm_idle() executing it will return that
error code.  This mechanism is intended to help platforms where saving device
state is a time-consuming operation and should only be carried out if all
devices in the power domain are idle, before turning off the shared power
resource(s).  Specifically, the power domain ->runtime_idle() callback may
return an error code until the pm_runtime_idle() helper (or its asynchronous
version) has been called for all devices in the power domain (it is recommended
that the returned error code be -EBUSY in those cases), preventing the
subsystem-level ->runtime_idle() callback from being run prematurely.
Support for device power domains is only relevant to platforms that need to use
the same subsystem-level (e.g. platform bus type) and device driver power
management callbacks in many different power domain configurations, and that
want to avoid incorporating power domain handling into the subsystem-level
callbacks themselves.  Other platforms need not implement it or take it into
account in any way.
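A rough illustration of how a platform might use this, assuming only what is visible in the drivers/base/power changes below (dev->pwr_domain pointing to a struct dev_power_domain whose ops member is a struct dev_pm_ops); the foo_* names are hypothetical:

#include <linux/device.h>
#include <linux/pm.h>

static int foo_domain_runtime_suspend(struct device *dev)
{
	/* Runs after the subsystem-level ->runtime_suspend(): gate the shared
	 * clock or power resource here once this device has saved its state. */
	return 0;
}

static int foo_domain_runtime_resume(struct device *dev)
{
	/* Runs before the subsystem-level ->runtime_resume(): ungate the shared
	 * resource so the device is accessible again. */
	return 0;
}

static struct dev_power_domain foo_power_domain = {
	.ops = {
		.runtime_suspend = foo_domain_runtime_suspend,
		.runtime_resume = foo_domain_runtime_resume,
	},
};

/* Platform setup code would then point every device sharing the resource at
 * the domain, e.g.:  pdev->dev.pwr_domain = &foo_power_domain;  */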
 System Devices
 --------------
 System devices (sysdevs) follow a slightly different API, which can be found in
...
 Run-time Power Management Framework for I/O Devices

-(C) 2009 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
+(C) 2009-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 (C) 2010 Alan Stern <stern@rowland.harvard.edu>

 1. Introduction

@@ -44,11 +44,12 @@ struct dev_pm_ops {
 };

 The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks are
-executed by the PM core for either the bus type, or device type (if the bus
-type's callback is not defined), or device class (if the bus type's and device
-type's callbacks are not defined) of given device.  The bus type, device type
-and device class callbacks are referred to as subsystem-level callbacks in what
-follows.
+executed by the PM core for either the device type, or the class (if the device
+type's struct dev_pm_ops object does not exist), or the bus type (if the
+device type's and class' struct dev_pm_ops objects do not exist) of the given
+device (this allows device types to override callbacks provided by bus types or
+classes if necessary).  The bus type, device type and class callbacks are
+referred to as subsystem-level callbacks in what follows.

 By default, the callbacks are always invoked in process context with interrupts
 enabled.  However, subsystems can use the pm_runtime_irq_safe() helper function
...
@@ -62,12 +62,12 @@ setup via another operating system for it to use.  Despite the
 inconvenience, this method requires minimal work by the kernel, since
 the firmware will also handle restoring memory contents on resume.

-For suspend-to-disk, a mechanism called swsusp called 'swsusp' (Swap
-Suspend) is used to write memory contents to free swap space.
-swsusp has some restrictive requirements, but should work in most
-cases. Some, albeit outdated, documentation can be found in
-Documentation/power/swsusp.txt. Alternatively, userspace can do most
-of the actual suspend to disk work, see userland-swsusp.txt.
+For suspend-to-disk, a mechanism called 'swsusp' (Swap Suspend) is used
+to write memory contents to free swap space.  swsusp has some restrictive
+requirements, but should work in most cases.  Some, albeit outdated,
+documentation can be found in Documentation/power/swsusp.txt.
+Alternatively, userspace can do most of the actual suspend to disk work,
+see userland-swsusp.txt.

 Once memory state is written to disk, the system may either enter a
 low-power state (like ACPI S4), or it may simply power down.  Powering
...
@@ -227,6 +227,7 @@
 #include <linux/suspend.h>
 #include <linux/kthread.h>
 #include <linux/jiffies.h>
+#include <linux/acpi.h>
 #include <asm/system.h>
 #include <asm/uaccess.h>

@@ -2331,12 +2332,11 @@ static int __init apm_init(void)
 		apm_info.disabled = 1;
 		return -ENODEV;
 	}
-	if (pm_flags & PM_ACPI) {
+	if (!acpi_disabled) {
 		printk(KERN_NOTICE "apm: overridden by ACPI.\n");
 		apm_info.disabled = 1;
 		return -ENODEV;
 	}
-	pm_flags |= PM_APM;

 	/*
 	 * Set up the long jump entry point to the APM BIOS, which is called

@@ -2428,7 +2428,6 @@ static void __exit apm_exit(void)
 		kthread_stop(kapmd_task);
 		kapmd_task = NULL;
 	}
-	pm_flags &= ~PM_APM;
 }

 module_init(apm_init);
...
@@ -38,7 +38,7 @@ config XEN_MAX_DOMAIN_MEMORY

 config XEN_SAVE_RESTORE
 	bool
-	depends on XEN && PM
+	depends on XEN
 	default y

 config XEN_DEBUG_FS
...
@@ -7,7 +7,6 @@ menuconfig ACPI
 	depends on !IA64_HP_SIM
 	depends on IA64 || X86
 	depends on PCI
-	depends on PM
 	select PNP
 	default y
 	help
...
@@ -40,6 +40,7 @@
 #include <acpi/acpi_bus.h>
 #include <acpi/acpi_drivers.h>
 #include <linux/dmi.h>
+#include <linux/suspend.h>

 #include "internal.h"

@@ -1006,8 +1007,7 @@ struct kobject *acpi_kobj;

 static int __init acpi_init(void)
 {
-	int result = 0;
+	int result;

 	if (acpi_disabled) {
 		printk(KERN_INFO PREFIX "Interpreter disabled.\n");

@@ -1022,29 +1022,18 @@ static int __init acpi_init(void)
 	init_acpi_device_notify();
 	result = acpi_bus_init();
-
-	if (!result) {
-		pci_mmcfg_late_init();
-		if (!(pm_flags & PM_APM))
-			pm_flags |= PM_ACPI;
-		else {
-			printk(KERN_INFO PREFIX
-			       "APM is already active, exiting\n");
-			disable_acpi();
-			result = -ENODEV;
-		}
-	} else
+	if (result) {
 		disable_acpi();
-
-	if (acpi_disabled)
 		return result;
+	}

+	pci_mmcfg_late_init();
 	acpi_scan_init();
 	acpi_ec_init();
 	acpi_debugfs_init();
 	acpi_sleep_proc_init();
 	acpi_wakeup_device_init();
-	return result;
+	return 0;
 }

 subsys_initcall(acpi_init);

@@ -585,7 +585,7 @@ int acpi_suspend(u32 acpi_state)
 	return -EINVAL;
 }

-#ifdef CONFIG_PM_OPS
+#ifdef CONFIG_PM
 /**
  * acpi_pm_device_sleep_state - return preferred power state of ACPI device
  *		in the system sleep state given by %acpi_target_sleep_state

@@ -671,7 +671,7 @@ int acpi_pm_device_sleep_state(struct device *dev, int *d_min_p)
 	*d_min_p = d_min;
 	return d_max;
 }
-#endif /* CONFIG_PM_OPS */
+#endif /* CONFIG_PM */

 #ifdef CONFIG_PM_SLEEP
 /**
...
 # Makefile for the Linux device tree

-obj-y			:= core.o sys.o bus.o dd.o \
+obj-y			:= core.o sys.o bus.o dd.o syscore.o \
 			   driver.o class.o platform.o \
 			   cpu.o firmware.o init.o map.o devres.o \
 			   attribute_container.o transport_class.o
...

-obj-$(CONFIG_PM)	+= sysfs.o
+obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o
 obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 obj-$(CONFIG_PM_RUNTIME)	+= runtime.o
-obj-$(CONFIG_PM_OPS)	+= generic_ops.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp.o
...
...@@ -423,26 +423,22 @@ static int device_resume_noirq(struct device *dev, pm_message_t state) ...@@ -423,26 +423,22 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
TRACE_DEVICE(dev); TRACE_DEVICE(dev);
TRACE_RESUME(0); TRACE_RESUME(0);
if (dev->bus && dev->bus->pm) { if (dev->pwr_domain) {
pm_dev_dbg(dev, state, "EARLY "); pm_dev_dbg(dev, state, "EARLY power domain ");
error = pm_noirq_op(dev, dev->bus->pm, state); pm_noirq_op(dev, &dev->pwr_domain->ops, state);
if (error)
goto End;
} }
if (dev->type && dev->type->pm) { if (dev->type && dev->type->pm) {
pm_dev_dbg(dev, state, "EARLY type "); pm_dev_dbg(dev, state, "EARLY type ");
error = pm_noirq_op(dev, dev->type->pm, state); error = pm_noirq_op(dev, dev->type->pm, state);
if (error) } else if (dev->class && dev->class->pm) {
goto End;
}
if (dev->class && dev->class->pm) {
pm_dev_dbg(dev, state, "EARLY class "); pm_dev_dbg(dev, state, "EARLY class ");
error = pm_noirq_op(dev, dev->class->pm, state); error = pm_noirq_op(dev, dev->class->pm, state);
} else if (dev->bus && dev->bus->pm) {
pm_dev_dbg(dev, state, "EARLY ");
error = pm_noirq_op(dev, dev->bus->pm, state);
} }
End:
TRACE_RESUME(error); TRACE_RESUME(error);
return error; return error;
} }
...@@ -518,36 +514,39 @@ static int device_resume(struct device *dev, pm_message_t state, bool async) ...@@ -518,36 +514,39 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
dev->power.in_suspend = false; dev->power.in_suspend = false;
if (dev->bus) { if (dev->pwr_domain) {
if (dev->bus->pm) { pm_dev_dbg(dev, state, "power domain ");
pm_dev_dbg(dev, state, ""); pm_op(dev, &dev->pwr_domain->ops, state);
error = pm_op(dev, dev->bus->pm, state);
} else if (dev->bus->resume) {
pm_dev_dbg(dev, state, "legacy ");
error = legacy_resume(dev, dev->bus->resume);
}
if (error)
goto End;
} }
if (dev->type) { if (dev->type && dev->type->pm) {
if (dev->type->pm) { pm_dev_dbg(dev, state, "type ");
pm_dev_dbg(dev, state, "type "); error = pm_op(dev, dev->type->pm, state);
error = pm_op(dev, dev->type->pm, state); goto End;
}
if (error)
goto End;
} }
if (dev->class) { if (dev->class) {
if (dev->class->pm) { if (dev->class->pm) {
pm_dev_dbg(dev, state, "class "); pm_dev_dbg(dev, state, "class ");
error = pm_op(dev, dev->class->pm, state); error = pm_op(dev, dev->class->pm, state);
goto End;
} else if (dev->class->resume) { } else if (dev->class->resume) {
pm_dev_dbg(dev, state, "legacy class "); pm_dev_dbg(dev, state, "legacy class ");
error = legacy_resume(dev, dev->class->resume); error = legacy_resume(dev, dev->class->resume);
goto End;
} }
} }
if (dev->bus) {
if (dev->bus->pm) {
pm_dev_dbg(dev, state, "");
error = pm_op(dev, dev->bus->pm, state);
} else if (dev->bus->resume) {
pm_dev_dbg(dev, state, "legacy ");
error = legacy_resume(dev, dev->bus->resume);
}
}
End: End:
device_unlock(dev); device_unlock(dev);
complete_all(&dev->power.completion); complete_all(&dev->power.completion);
...@@ -629,19 +628,23 @@ static void device_complete(struct device *dev, pm_message_t state) ...@@ -629,19 +628,23 @@ static void device_complete(struct device *dev, pm_message_t state)
{ {
device_lock(dev); device_lock(dev);
if (dev->class && dev->class->pm && dev->class->pm->complete) { if (dev->pwr_domain && dev->pwr_domain->ops.complete) {
pm_dev_dbg(dev, state, "completing class "); pm_dev_dbg(dev, state, "completing power domain ");
dev->class->pm->complete(dev); dev->pwr_domain->ops.complete(dev);
} }
if (dev->type && dev->type->pm && dev->type->pm->complete) { if (dev->type && dev->type->pm) {
pm_dev_dbg(dev, state, "completing type "); pm_dev_dbg(dev, state, "completing type ");
dev->type->pm->complete(dev); if (dev->type->pm->complete)
} dev->type->pm->complete(dev);
} else if (dev->class && dev->class->pm) {
if (dev->bus && dev->bus->pm && dev->bus->pm->complete) { pm_dev_dbg(dev, state, "completing class ");
if (dev->class->pm->complete)
dev->class->pm->complete(dev);
} else if (dev->bus && dev->bus->pm) {
pm_dev_dbg(dev, state, "completing "); pm_dev_dbg(dev, state, "completing ");
dev->bus->pm->complete(dev); if (dev->bus->pm->complete)
dev->bus->pm->complete(dev);
} }
device_unlock(dev); device_unlock(dev);
...@@ -669,7 +672,6 @@ static void dpm_complete(pm_message_t state) ...@@ -669,7 +672,6 @@ static void dpm_complete(pm_message_t state)
mutex_unlock(&dpm_list_mtx); mutex_unlock(&dpm_list_mtx);
device_complete(dev, state); device_complete(dev, state);
pm_runtime_put_sync(dev);
mutex_lock(&dpm_list_mtx); mutex_lock(&dpm_list_mtx);
put_device(dev); put_device(dev);
...@@ -727,29 +729,31 @@ static pm_message_t resume_event(pm_message_t sleep_state) ...@@ -727,29 +729,31 @@ static pm_message_t resume_event(pm_message_t sleep_state)
*/ */
static int device_suspend_noirq(struct device *dev, pm_message_t state) static int device_suspend_noirq(struct device *dev, pm_message_t state)
{ {
int error = 0; int error;
if (dev->class && dev->class->pm) {
pm_dev_dbg(dev, state, "LATE class ");
error = pm_noirq_op(dev, dev->class->pm, state);
if (error)
goto End;
}
if (dev->type && dev->type->pm) { if (dev->type && dev->type->pm) {
pm_dev_dbg(dev, state, "LATE type "); pm_dev_dbg(dev, state, "LATE type ");
error = pm_noirq_op(dev, dev->type->pm, state); error = pm_noirq_op(dev, dev->type->pm, state);
if (error) if (error)
goto End; return error;
} } else if (dev->class && dev->class->pm) {
pm_dev_dbg(dev, state, "LATE class ");
if (dev->bus && dev->bus->pm) { error = pm_noirq_op(dev, dev->class->pm, state);
if (error)
return error;
} else if (dev->bus && dev->bus->pm) {
pm_dev_dbg(dev, state, "LATE "); pm_dev_dbg(dev, state, "LATE ");
error = pm_noirq_op(dev, dev->bus->pm, state); error = pm_noirq_op(dev, dev->bus->pm, state);
if (error)
return error;
} }
End: if (dev->pwr_domain) {
return error; pm_dev_dbg(dev, state, "LATE power domain ");
pm_noirq_op(dev, &dev->pwr_domain->ops, state);
}
return 0;
} }
/** /**
...@@ -836,25 +840,22 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async) ...@@ -836,25 +840,22 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
goto End; goto End;
} }
if (dev->type && dev->type->pm) {
pm_dev_dbg(dev, state, "type ");
error = pm_op(dev, dev->type->pm, state);
goto Domain;
}
if (dev->class) { if (dev->class) {
if (dev->class->pm) { if (dev->class->pm) {
pm_dev_dbg(dev, state, "class "); pm_dev_dbg(dev, state, "class ");
error = pm_op(dev, dev->class->pm, state); error = pm_op(dev, dev->class->pm, state);
goto Domain;
} else if (dev->class->suspend) { } else if (dev->class->suspend) {
pm_dev_dbg(dev, state, "legacy class "); pm_dev_dbg(dev, state, "legacy class ");
error = legacy_suspend(dev, state, dev->class->suspend); error = legacy_suspend(dev, state, dev->class->suspend);
goto Domain;
} }
if (error)
goto End;
}
if (dev->type) {
if (dev->type->pm) {
pm_dev_dbg(dev, state, "type ");
error = pm_op(dev, dev->type->pm, state);
}
if (error)
goto End;
} }
if (dev->bus) { if (dev->bus) {
...@@ -867,6 +868,12 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async) ...@@ -867,6 +868,12 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
} }
} }
Domain:
if (!error && dev->pwr_domain) {
pm_dev_dbg(dev, state, "power domain ");
pm_op(dev, &dev->pwr_domain->ops, state);
}
End: End:
device_unlock(dev); device_unlock(dev);
complete_all(&dev->power.completion); complete_all(&dev->power.completion);
...@@ -957,27 +964,34 @@ static int device_prepare(struct device *dev, pm_message_t state) ...@@ -957,27 +964,34 @@ static int device_prepare(struct device *dev, pm_message_t state)
device_lock(dev); device_lock(dev);
if (dev->bus && dev->bus->pm && dev->bus->pm->prepare) { if (dev->type && dev->type->pm) {
pm_dev_dbg(dev, state, "preparing type ");
if (dev->type->pm->prepare)
error = dev->type->pm->prepare(dev);
suspend_report_result(dev->type->pm->prepare, error);
if (error)
goto End;
} else if (dev->class && dev->class->pm) {
pm_dev_dbg(dev, state, "preparing class ");
if (dev->class->pm->prepare)
error = dev->class->pm->prepare(dev);
suspend_report_result(dev->class->pm->prepare, error);
if (error)
goto End;
} else if (dev->bus && dev->bus->pm) {
pm_dev_dbg(dev, state, "preparing "); pm_dev_dbg(dev, state, "preparing ");
error = dev->bus->pm->prepare(dev); if (dev->bus->pm->prepare)
error = dev->bus->pm->prepare(dev);
suspend_report_result(dev->bus->pm->prepare, error); suspend_report_result(dev->bus->pm->prepare, error);
if (error) if (error)
goto End; goto End;
} }
if (dev->type && dev->type->pm && dev->type->pm->prepare) { if (dev->pwr_domain && dev->pwr_domain->ops.prepare) {
pm_dev_dbg(dev, state, "preparing type "); pm_dev_dbg(dev, state, "preparing power domain ");
error = dev->type->pm->prepare(dev); dev->pwr_domain->ops.prepare(dev);
suspend_report_result(dev->type->pm->prepare, error);
if (error)
goto End;
} }
if (dev->class && dev->class->pm && dev->class->pm->prepare) {
pm_dev_dbg(dev, state, "preparing class ");
error = dev->class->pm->prepare(dev);
suspend_report_result(dev->class->pm->prepare, error);
}
End: End:
device_unlock(dev); device_unlock(dev);
...@@ -1005,12 +1019,9 @@ static int dpm_prepare(pm_message_t state) ...@@ -1005,12 +1019,9 @@ static int dpm_prepare(pm_message_t state)
if (pm_runtime_barrier(dev) && device_may_wakeup(dev)) if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
pm_wakeup_event(dev, 0); pm_wakeup_event(dev, 0);
if (pm_wakeup_pending()) { pm_runtime_put_sync(dev);
pm_runtime_put_sync(dev); error = pm_wakeup_pending() ?
error = -EBUSY; -EBUSY : device_prepare(dev, state);
} else {
error = device_prepare(dev, state);
}
mutex_lock(&dpm_list_mtx); mutex_lock(&dpm_list_mtx);
if (error) { if (error) {
......
@@ -222,7 +222,7 @@ int opp_get_opp_count(struct device *dev)
 /**
  * opp_find_freq_exact() - search for an exact frequency
  * @dev:		device for which we do this operation
  * @freq:		frequency to search for
- * @is_available:	true/false - match for available opp
+ * @available:		true/false - match for available opp
  *
  * Searches for exact match in the opp list and returns pointer to the matching
  * opp if found, else returns ERR_PTR in case of error and should be handled
...
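For reference, a hedged usage sketch of the function whose kerneldoc is fixed above; the 800 MHz rate, the foo_* name, and the assumption that lookups of this era are done under rcu_read_lock() are illustrative, not taken from this patch:

#include <linux/err.h>
#include <linux/opp.h>
#include <linux/rcupdate.h>

static int foo_check_opp(struct device *dev)
{
	struct opp *opp;

	rcu_read_lock();
	/* look for an enabled OPP at exactly 800 MHz */
	opp = opp_find_freq_exact(dev, 800000000, true);
	rcu_read_unlock();

	return IS_ERR(opp) ? PTR_ERR(opp) : 0;
}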
@@ -58,19 +58,18 @@ static inline void device_pm_move_last(struct device *dev) {}
  * sysfs.c
  */

-extern int dpm_sysfs_add(struct device *);
-extern void dpm_sysfs_remove(struct device *);
-extern void rpm_sysfs_remove(struct device *);
+extern int dpm_sysfs_add(struct device *dev);
+extern void dpm_sysfs_remove(struct device *dev);
+extern void rpm_sysfs_remove(struct device *dev);
+extern int wakeup_sysfs_add(struct device *dev);
+extern void wakeup_sysfs_remove(struct device *dev);

 #else /* CONFIG_PM */

-static inline int dpm_sysfs_add(struct device *dev)
-{
-	return 0;
-}
-
-static inline void dpm_sysfs_remove(struct device *dev)
-{
-}
+static inline int dpm_sysfs_add(struct device *dev) { return 0; }
+static inline void dpm_sysfs_remove(struct device *dev) {}
+static inline void rpm_sysfs_remove(struct device *dev) {}
+static inline int wakeup_sysfs_add(struct device *dev) { return 0; }
+static inline void wakeup_sysfs_remove(struct device *dev) {}

 #endif
@@ -168,6 +168,7 @@ static int rpm_check_suspend_allowed(struct device *dev)
 static int rpm_idle(struct device *dev, int rpmflags)
 {
 	int (*callback)(struct device *);
+	int (*domain_callback)(struct device *);
 	int retval;

 	retval = rpm_check_suspend_allowed(dev);

@@ -213,19 +214,28 @@ static int rpm_idle(struct device *dev, int rpmflags)
 	dev->power.idle_notification = true;

-	if (dev->bus && dev->bus->pm && dev->bus->pm->runtime_idle)
-		callback = dev->bus->pm->runtime_idle;
-	else if (dev->type && dev->type->pm && dev->type->pm->runtime_idle)
+	if (dev->type && dev->type->pm)
 		callback = dev->type->pm->runtime_idle;
 	else if (dev->class && dev->class->pm)
 		callback = dev->class->pm->runtime_idle;
+	else if (dev->bus && dev->bus->pm)
+		callback = dev->bus->pm->runtime_idle;
 	else
 		callback = NULL;

-	if (callback) {
+	if (dev->pwr_domain)
+		domain_callback = dev->pwr_domain->ops.runtime_idle;
+	else
+		domain_callback = NULL;
+
+	if (callback || domain_callback) {
 		spin_unlock_irq(&dev->power.lock);

-		callback(dev);
+		if (domain_callback)
+			retval = domain_callback(dev);
+
+		if (!retval && callback)
+			callback(dev);

 		spin_lock_irq(&dev->power.lock);
 	}

@@ -372,12 +382,12 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 	__update_runtime_status(dev, RPM_SUSPENDING);

-	if (dev->bus && dev->bus->pm && dev->bus->pm->runtime_suspend)
-		callback = dev->bus->pm->runtime_suspend;
-	else if (dev->type && dev->type->pm && dev->type->pm->runtime_suspend)
+	if (dev->type && dev->type->pm)
 		callback = dev->type->pm->runtime_suspend;
 	else if (dev->class && dev->class->pm)
 		callback = dev->class->pm->runtime_suspend;
+	else if (dev->bus && dev->bus->pm)
+		callback = dev->bus->pm->runtime_suspend;
 	else
 		callback = NULL;

@@ -390,6 +400,8 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 		else
 			pm_runtime_cancel_pending(dev);
 	} else {
+		if (dev->pwr_domain)
+			rpm_callback(dev->pwr_domain->ops.runtime_suspend, dev);
 no_callback:
 		__update_runtime_status(dev, RPM_SUSPENDED);
 		pm_runtime_deactivate_timer(dev);

@@ -569,12 +581,15 @@ static int rpm_resume(struct device *dev, int rpmflags)
 	__update_runtime_status(dev, RPM_RESUMING);

-	if (dev->bus && dev->bus->pm && dev->bus->pm->runtime_resume)
-		callback = dev->bus->pm->runtime_resume;
-	else if (dev->type && dev->type->pm && dev->type->pm->runtime_resume)
+	if (dev->pwr_domain)
+		rpm_callback(dev->pwr_domain->ops.runtime_resume, dev);
+
+	if (dev->type && dev->type->pm)
 		callback = dev->type->pm->runtime_resume;
 	else if (dev->class && dev->class->pm)
 		callback = dev->class->pm->runtime_resume;
+	else if (dev->bus && dev->bus->pm)
+		callback = dev->bus->pm->runtime_resume;
 	else
 		callback = NULL;
...
...@@ -431,26 +431,18 @@ static ssize_t async_store(struct device *dev, struct device_attribute *attr, ...@@ -431,26 +431,18 @@ static ssize_t async_store(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(async, 0644, async_show, async_store); static DEVICE_ATTR(async, 0644, async_show, async_store);
#endif /* CONFIG_PM_ADVANCED_DEBUG */ #endif /* CONFIG_PM_ADVANCED_DEBUG */
static struct attribute * power_attrs[] = { static struct attribute *power_attrs[] = {
&dev_attr_wakeup.attr,
#ifdef CONFIG_PM_SLEEP
&dev_attr_wakeup_count.attr,
&dev_attr_wakeup_active_count.attr,
&dev_attr_wakeup_hit_count.attr,
&dev_attr_wakeup_active.attr,
&dev_attr_wakeup_total_time_ms.attr,
&dev_attr_wakeup_max_time_ms.attr,
&dev_attr_wakeup_last_time_ms.attr,
#endif
#ifdef CONFIG_PM_ADVANCED_DEBUG #ifdef CONFIG_PM_ADVANCED_DEBUG
#ifdef CONFIG_PM_SLEEP
&dev_attr_async.attr, &dev_attr_async.attr,
#endif
#ifdef CONFIG_PM_RUNTIME #ifdef CONFIG_PM_RUNTIME
&dev_attr_runtime_status.attr, &dev_attr_runtime_status.attr,
&dev_attr_runtime_usage.attr, &dev_attr_runtime_usage.attr,
&dev_attr_runtime_active_kids.attr, &dev_attr_runtime_active_kids.attr,
&dev_attr_runtime_enabled.attr, &dev_attr_runtime_enabled.attr,
#endif #endif
#endif #endif /* CONFIG_PM_ADVANCED_DEBUG */
NULL, NULL,
}; };
static struct attribute_group pm_attr_group = { static struct attribute_group pm_attr_group = {
...@@ -458,9 +450,26 @@ static struct attribute_group pm_attr_group = { ...@@ -458,9 +450,26 @@ static struct attribute_group pm_attr_group = {
.attrs = power_attrs, .attrs = power_attrs,
}; };
#ifdef CONFIG_PM_RUNTIME static struct attribute *wakeup_attrs[] = {
#ifdef CONFIG_PM_SLEEP
&dev_attr_wakeup.attr,
&dev_attr_wakeup_count.attr,
&dev_attr_wakeup_active_count.attr,
&dev_attr_wakeup_hit_count.attr,
&dev_attr_wakeup_active.attr,
&dev_attr_wakeup_total_time_ms.attr,
&dev_attr_wakeup_max_time_ms.attr,
&dev_attr_wakeup_last_time_ms.attr,
#endif
NULL,
};
static struct attribute_group pm_wakeup_attr_group = {
.name = power_group_name,
.attrs = wakeup_attrs,
};
static struct attribute *runtime_attrs[] = { static struct attribute *runtime_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
#ifndef CONFIG_PM_ADVANCED_DEBUG #ifndef CONFIG_PM_ADVANCED_DEBUG
&dev_attr_runtime_status.attr, &dev_attr_runtime_status.attr,
#endif #endif
...@@ -468,6 +477,7 @@ static struct attribute *runtime_attrs[] = { ...@@ -468,6 +477,7 @@ static struct attribute *runtime_attrs[] = {
&dev_attr_runtime_suspended_time.attr, &dev_attr_runtime_suspended_time.attr,
&dev_attr_runtime_active_time.attr, &dev_attr_runtime_active_time.attr,
&dev_attr_autosuspend_delay_ms.attr, &dev_attr_autosuspend_delay_ms.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL, NULL,
}; };
static struct attribute_group pm_runtime_attr_group = { static struct attribute_group pm_runtime_attr_group = {
...@@ -480,35 +490,49 @@ int dpm_sysfs_add(struct device *dev) ...@@ -480,35 +490,49 @@ int dpm_sysfs_add(struct device *dev)
int rc; int rc;
rc = sysfs_create_group(&dev->kobj, &pm_attr_group); rc = sysfs_create_group(&dev->kobj, &pm_attr_group);
if (rc == 0 && !dev->power.no_callbacks) { if (rc)
return rc;
if (pm_runtime_callbacks_present(dev)) {
rc = sysfs_merge_group(&dev->kobj, &pm_runtime_attr_group); rc = sysfs_merge_group(&dev->kobj, &pm_runtime_attr_group);
if (rc) if (rc)
sysfs_remove_group(&dev->kobj, &pm_attr_group); goto err_out;
}
if (device_can_wakeup(dev)) {
rc = sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
if (rc) {
if (pm_runtime_callbacks_present(dev))
sysfs_unmerge_group(&dev->kobj,
&pm_runtime_attr_group);
goto err_out;
}
} }
return 0;
err_out:
sysfs_remove_group(&dev->kobj, &pm_attr_group);
return rc; return rc;
} }
void rpm_sysfs_remove(struct device *dev) int wakeup_sysfs_add(struct device *dev)
{ {
sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group); return sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
} }
void dpm_sysfs_remove(struct device *dev) void wakeup_sysfs_remove(struct device *dev)
{ {
rpm_sysfs_remove(dev); sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
sysfs_remove_group(&dev->kobj, &pm_attr_group);
} }
#else /* CONFIG_PM_RUNTIME */ void rpm_sysfs_remove(struct device *dev)
int dpm_sysfs_add(struct device * dev)
{ {
return sysfs_create_group(&dev->kobj, &pm_attr_group); sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
} }
void dpm_sysfs_remove(struct device * dev) void dpm_sysfs_remove(struct device *dev)
{ {
rpm_sysfs_remove(dev);
sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
sysfs_remove_group(&dev->kobj, &pm_attr_group); sysfs_remove_group(&dev->kobj, &pm_attr_group);
} }
#endif
@@ -112,7 +112,7 @@ static unsigned int read_magic_time(void)
 	unsigned int val;

 	get_rtc_time(&time);
-	printk("Time: %2d:%02d:%02d  Date: %02d/%02d/%02d\n",
+	pr_info("Time: %2d:%02d:%02d  Date: %02d/%02d/%02d\n",
 		time.tm_hour, time.tm_min, time.tm_sec,
 		time.tm_mon + 1, time.tm_mday, time.tm_year % 100);
 	val = time.tm_year;	/* 100 years */

@@ -179,7 +179,7 @@ static int show_file_hash(unsigned int value)
 		unsigned int hash = hash_string(lineno, file, FILEHASH);
 		if (hash != value)
 			continue;
-		printk("  hash matches %s:%u\n", file, lineno);
+		pr_info("  hash matches %s:%u\n", file, lineno);
 		match++;
 	}
 	return match;

@@ -255,7 +255,7 @@ static int late_resume_init(void)
 	val = val / FILEHASH;
 	dev = val /* % DEVHASH */;
-	printk("  Magic number: %d:%d:%d\n", user, file, dev);
+	pr_info("  Magic number: %d:%d:%d\n", user, file, dev);
 	show_file_hash(file);
 	show_dev_hash(dev);
 	return 0;
...
@@ -24,12 +24,26 @@
  */
 bool events_check_enabled;

-/* The counter of registered wakeup events. */
-static atomic_t event_count = ATOMIC_INIT(0);
-/* A preserved old value of event_count. */
+/*
+ * Combined counters of registered wakeup events and wakeup events in progress.
+ * They need to be modified together atomically, so it's better to use one
+ * atomic variable to hold them both.
+ */
+static atomic_t combined_event_count = ATOMIC_INIT(0);
+
+#define IN_PROGRESS_BITS	(sizeof(int) * 4)
+#define MAX_IN_PROGRESS		((1 << IN_PROGRESS_BITS) - 1)
+
+static void split_counters(unsigned int *cnt, unsigned int *inpr)
+{
+	unsigned int comb = atomic_read(&combined_event_count);
+
+	*cnt = (comb >> IN_PROGRESS_BITS);
+	*inpr = comb & MAX_IN_PROGRESS;
+}
+
+/* A preserved old value of the events counter. */
 static unsigned int saved_count;
-/* The counter of wakeup events being processed. */
-static atomic_t events_in_progress = ATOMIC_INIT(0);

 static DEFINE_SPINLOCK(events_lock);
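The packing used by combined_event_count above can be illustrated with a small stand-alone program (user-space, not kernel code): the registered-events count lives in the upper half of the word and the in-progress count in the lower half, so adding MAX_IN_PROGRESS bumps the former and drops the latter in a single atomic operation.

#include <stdio.h>

#define IN_PROGRESS_BITS	(sizeof(int) * 4)		/* 16 for a 32-bit int */
#define MAX_IN_PROGRESS		((1 << IN_PROGRESS_BITS) - 1)

int main(void)
{
	unsigned int comb = 0;

	comb += 1;			/* activate: one wakeup event now in progress */
	comb += MAX_IN_PROGRESS;	/* deactivate: +1 registered event, -1 in progress */

	printf("events = %u, in progress = %u\n",
	       comb >> IN_PROGRESS_BITS, comb & MAX_IN_PROGRESS);
	/* prints: events = 1, in progress = 0 */
	return 0;
}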
...@@ -227,6 +241,35 @@ int device_wakeup_disable(struct device *dev) ...@@ -227,6 +241,35 @@ int device_wakeup_disable(struct device *dev)
} }
EXPORT_SYMBOL_GPL(device_wakeup_disable); EXPORT_SYMBOL_GPL(device_wakeup_disable);
/**
* device_set_wakeup_capable - Set/reset device wakeup capability flag.
* @dev: Device to handle.
* @capable: Whether or not @dev is capable of waking up the system from sleep.
*
* If @capable is set, set the @dev's power.can_wakeup flag and add its
* wakeup-related attributes to sysfs. Otherwise, unset the @dev's
* power.can_wakeup flag and remove its wakeup-related attributes from sysfs.
*
* This function may sleep and it can't be called from any context where
* sleeping is not allowed.
*/
void device_set_wakeup_capable(struct device *dev, bool capable)
{
if (!!dev->power.can_wakeup == !!capable)
return;
if (device_is_registered(dev)) {
if (capable) {
if (wakeup_sysfs_add(dev))
return;
} else {
wakeup_sysfs_remove(dev);
}
}
dev->power.can_wakeup = capable;
}
EXPORT_SYMBOL_GPL(device_set_wakeup_capable);
/** /**
* device_init_wakeup - Device wakeup initialization. * device_init_wakeup - Device wakeup initialization.
* @dev: Device to handle. * @dev: Device to handle.
...@@ -307,7 +350,8 @@ static void wakeup_source_activate(struct wakeup_source *ws) ...@@ -307,7 +350,8 @@ static void wakeup_source_activate(struct wakeup_source *ws)
ws->timer_expires = jiffies; ws->timer_expires = jiffies;
ws->last_time = ktime_get(); ws->last_time = ktime_get();
atomic_inc(&events_in_progress); /* Increment the counter of events in progress. */
atomic_inc(&combined_event_count);
} }
/** /**
...@@ -394,14 +438,10 @@ static void wakeup_source_deactivate(struct wakeup_source *ws) ...@@ -394,14 +438,10 @@ static void wakeup_source_deactivate(struct wakeup_source *ws)
del_timer(&ws->timer); del_timer(&ws->timer);
/* /*
* event_count has to be incremented before events_in_progress is * Increment the counter of registered wakeup events and decrement the
* modified, so that the callers of pm_check_wakeup_events() and * couter of wakeup events in progress simultaneously.
* pm_save_wakeup_count() don't see the old value of event_count and
* events_in_progress equal to zero at the same time.
*/ */
atomic_inc(&event_count); atomic_add(MAX_IN_PROGRESS, &combined_event_count);
smp_mb__before_atomic_dec();
atomic_dec(&events_in_progress);
} }
/** /**
...@@ -556,8 +596,10 @@ bool pm_wakeup_pending(void) ...@@ -556,8 +596,10 @@ bool pm_wakeup_pending(void)
spin_lock_irqsave(&events_lock, flags); spin_lock_irqsave(&events_lock, flags);
if (events_check_enabled) { if (events_check_enabled) {
ret = ((unsigned int)atomic_read(&event_count) != saved_count) unsigned int cnt, inpr;
|| atomic_read(&events_in_progress);
split_counters(&cnt, &inpr);
ret = (cnt != saved_count || inpr > 0);
events_check_enabled = !ret; events_check_enabled = !ret;
} }
spin_unlock_irqrestore(&events_lock, flags); spin_unlock_irqrestore(&events_lock, flags);
...@@ -573,25 +615,25 @@ bool pm_wakeup_pending(void) ...@@ -573,25 +615,25 @@ bool pm_wakeup_pending(void)
* Store the number of registered wakeup events at the address in @count. Block * Store the number of registered wakeup events at the address in @count. Block
* if the current number of wakeup events being processed is nonzero. * if the current number of wakeup events being processed is nonzero.
* *
* Return false if the wait for the number of wakeup events being processed to * Return 'false' if the wait for the number of wakeup events being processed to
* drop down to zero has been interrupted by a signal (and the current number * drop down to zero has been interrupted by a signal (and the current number
* of wakeup events being processed is still nonzero). Otherwise return true. * of wakeup events being processed is still nonzero). Otherwise return 'true'.
*/ */
bool pm_get_wakeup_count(unsigned int *count) bool pm_get_wakeup_count(unsigned int *count)
{ {
bool ret; unsigned int cnt, inpr;
if (capable(CAP_SYS_ADMIN))
events_check_enabled = false;
while (atomic_read(&events_in_progress) && !signal_pending(current)) { for (;;) {
split_counters(&cnt, &inpr);
if (inpr == 0 || signal_pending(current))
break;
pm_wakeup_update_hit_counts(); pm_wakeup_update_hit_counts();
schedule_timeout_interruptible(msecs_to_jiffies(TIMEOUT)); schedule_timeout_interruptible(msecs_to_jiffies(TIMEOUT));
} }
ret = !atomic_read(&events_in_progress); split_counters(&cnt, &inpr);
*count = atomic_read(&event_count); *count = cnt;
return ret; return !inpr;
} }
/** /**
...@@ -600,24 +642,25 @@ bool pm_get_wakeup_count(unsigned int *count) ...@@ -600,24 +642,25 @@ bool pm_get_wakeup_count(unsigned int *count)
* *
* If @count is equal to the current number of registered wakeup events and the * If @count is equal to the current number of registered wakeup events and the
* current number of wakeup events being processed is zero, store @count as the * current number of wakeup events being processed is zero, store @count as the
* old number of registered wakeup events to be used by pm_check_wakeup_events() * old number of registered wakeup events for pm_check_wakeup_events(), enable
* and return true. Otherwise return false. * wakeup events detection and return 'true'. Otherwise disable wakeup events
* detection and return 'false'.
*/ */
bool pm_save_wakeup_count(unsigned int count) bool pm_save_wakeup_count(unsigned int count)
{ {
bool ret = false; unsigned int cnt, inpr;
events_check_enabled = false;
spin_lock_irq(&events_lock); spin_lock_irq(&events_lock);
if (count == (unsigned int)atomic_read(&event_count) split_counters(&cnt, &inpr);
&& !atomic_read(&events_in_progress)) { if (cnt == count && inpr == 0) {
saved_count = count; saved_count = count;
events_check_enabled = true; events_check_enabled = true;
ret = true;
} }
spin_unlock_irq(&events_lock); spin_unlock_irq(&events_lock);
if (!ret) if (!events_check_enabled)
pm_wakeup_update_hit_counts(); pm_wakeup_update_hit_counts();
return ret; return events_check_enabled;
} }
static struct dentry *wakeup_sources_stats_dentry; static struct dentry *wakeup_sources_stats_dentry;
......
/*
* syscore.c - Execution of system core operations.
*
* Copyright (C) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
*
* This file is released under the GPLv2.
*/
#include <linux/syscore_ops.h>
#include <linux/mutex.h>
#include <linux/module.h>
static LIST_HEAD(syscore_ops_list);
static DEFINE_MUTEX(syscore_ops_lock);
/**
* register_syscore_ops - Register a set of system core operations.
* @ops: System core operations to register.
*/
void register_syscore_ops(struct syscore_ops *ops)
{
mutex_lock(&syscore_ops_lock);
list_add_tail(&ops->node, &syscore_ops_list);
mutex_unlock(&syscore_ops_lock);
}
EXPORT_SYMBOL_GPL(register_syscore_ops);
/**
* unregister_syscore_ops - Unregister a set of system core operations.
* @ops: System core operations to unregister.
*/
void unregister_syscore_ops(struct syscore_ops *ops)
{
mutex_lock(&syscore_ops_lock);
list_del(&ops->node);
mutex_unlock(&syscore_ops_lock);
}
EXPORT_SYMBOL_GPL(unregister_syscore_ops);
#ifdef CONFIG_PM_SLEEP
/**
* syscore_suspend - Execute all the registered system core suspend callbacks.
*
* This function is executed with one CPU on-line and disabled interrupts.
*/
int syscore_suspend(void)
{
struct syscore_ops *ops;
int ret = 0;
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled before system core suspend.\n");
list_for_each_entry_reverse(ops, &syscore_ops_list, node)
if (ops->suspend) {
if (initcall_debug)
pr_info("PM: Calling %pF\n", ops->suspend);
ret = ops->suspend();
if (ret)
goto err_out;
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled after %pF\n", ops->suspend);
}
return 0;
err_out:
pr_err("PM: System core suspend callback %pF failed.\n", ops->suspend);
list_for_each_entry_continue(ops, &syscore_ops_list, node)
if (ops->resume)
ops->resume();
return ret;
}
/**
* syscore_resume - Execute all the registered system core resume callbacks.
*
* This function is executed with one CPU on-line and disabled interrupts.
*/
void syscore_resume(void)
{
struct syscore_ops *ops;
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled before system core resume.\n");
list_for_each_entry(ops, &syscore_ops_list, node)
if (ops->resume) {
if (initcall_debug)
pr_info("PM: Calling %pF\n", ops->resume);
ops->resume();
WARN_ONCE(!irqs_disabled(),
"Interrupts enabled after %pF\n", ops->resume);
}
}
#endif /* CONFIG_PM_SLEEP */
/**
* syscore_shutdown - Execute all the registered system core shutdown callbacks.
*/
void syscore_shutdown(void)
{
struct syscore_ops *ops;
mutex_lock(&syscore_ops_lock);
list_for_each_entry_reverse(ops, &syscore_ops_list, node)
if (ops->shutdown) {
if (initcall_debug)
pr_info("PM: Calling %pF\n", ops->shutdown);
ops->shutdown();
}
mutex_unlock(&syscore_ops_lock);
}
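A hedged sketch of how a core subsystem might use the new interface; only register_syscore_ops() and the suspend/resume members of struct syscore_ops come from the file above, the foo_* names are hypothetical:

#include <linux/init.h>
#include <linux/syscore_ops.h>

static int foo_syscore_suspend(void)
{
	/* Runs late, with one CPU online and interrupts disabled. */
	return 0;
}

static void foo_syscore_resume(void)
{
	/* Mirrors foo_syscore_suspend() early in the resume path. */
}

static struct syscore_ops foo_syscore_ops = {
	.suspend = foo_syscore_suspend,
	.resume = foo_syscore_resume,
};

static int __init foo_syscore_init(void)
{
	register_syscore_ops(&foo_syscore_ops);
	return 0;
}
core_initcall(foo_syscore_init);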
@@ -5338,7 +5338,7 @@ void e1000e_disable_aspm(struct pci_dev *pdev, u16 state)
 	__e1000e_disable_aspm(pdev, state);
 }

-#ifdef CONFIG_PM_OPS
+#ifdef CONFIG_PM
 static bool e1000e_pm_ready(struct e1000_adapter *adapter)
 {
 	return !!adapter->tx_ring->buffer_info;

@@ -5489,7 +5489,7 @@ static int e1000_runtime_resume(struct device *dev)
 	return __e1000_resume(pdev);
 }
 #endif /* CONFIG_PM_RUNTIME */
-#endif /* CONFIG_PM_OPS */
+#endif /* CONFIG_PM */

 static void e1000_shutdown(struct pci_dev *pdev)
 {

@@ -6196,7 +6196,7 @@ static DEFINE_PCI_DEVICE_TABLE(e1000_pci_tbl) = {
 };
 MODULE_DEVICE_TABLE(pci, e1000_pci_tbl);

-#ifdef CONFIG_PM_OPS
+#ifdef CONFIG_PM
 static const struct dev_pm_ops e1000_pm_ops = {
 	SET_SYSTEM_SLEEP_PM_OPS(e1000_suspend, e1000_resume)
 	SET_RUNTIME_PM_OPS(e1000_runtime_suspend,

@@ -6210,7 +6210,7 @@ static struct pci_driver e1000_driver = {
 	.id_table = e1000_pci_tbl,
 	.probe    = e1000_probe,
 	.remove   = __devexit_p(e1000_remove),
-#ifdef CONFIG_PM_OPS
+#ifdef CONFIG_PM
 	.driver.pm = &e1000_pm_ops,
 #endif
 	.shutdown = e1000_shutdown,
...
@@ -2446,7 +2446,7 @@ static struct pci_driver pch_gbe_pcidev = {
 	.id_table = pch_gbe_pcidev_id,
 	.probe = pch_gbe_probe,
 	.remove = pch_gbe_remove,
-#ifdef CONFIG_PM_OPS
+#ifdef CONFIG_PM
 	.driver.pm = &pch_gbe_pm_ops,
 #endif
 	.shutdown = pch_gbe_shutdown,
...
@@ -431,7 +431,7 @@ static void pci_device_shutdown(struct device *dev)
 		pci_msix_shutdown(pci_dev);
 }

-#ifdef CONFIG_PM_OPS
+#ifdef CONFIG_PM

 /* Auxiliary functions used for system resume and run-time resume. */

@@ -1059,7 +1059,7 @@ static int pci_pm_runtime_idle(struct device *dev)

 #endif /* !CONFIG_PM_RUNTIME */

-#ifdef CONFIG_PM_OPS
+#ifdef CONFIG_PM

 const struct dev_pm_ops pci_dev_pm_ops = {
 	.prepare = pci_pm_prepare,
...
...@@ -165,7 +165,7 @@ scsi_mod-$(CONFIG_SCSI_NETLINK) += scsi_netlink.o ...@@ -165,7 +165,7 @@ scsi_mod-$(CONFIG_SCSI_NETLINK) += scsi_netlink.o
scsi_mod-$(CONFIG_SYSCTL) += scsi_sysctl.o scsi_mod-$(CONFIG_SYSCTL) += scsi_sysctl.o
scsi_mod-$(CONFIG_SCSI_PROC_FS) += scsi_proc.o scsi_mod-$(CONFIG_SCSI_PROC_FS) += scsi_proc.o
scsi_mod-y += scsi_trace.o scsi_mod-y += scsi_trace.o
scsi_mod-$(CONFIG_PM_OPS) += scsi_pm.o scsi_mod-$(CONFIG_PM) += scsi_pm.o
scsi_tgt-y += scsi_tgt_lib.o scsi_tgt_if.o scsi_tgt-y += scsi_tgt_lib.o scsi_tgt_if.o
......
...@@ -146,7 +146,7 @@ static inline void scsi_netlink_exit(void) {} ...@@ -146,7 +146,7 @@ static inline void scsi_netlink_exit(void) {}
#endif #endif
/* scsi_pm.c */ /* scsi_pm.c */
#ifdef CONFIG_PM_OPS #ifdef CONFIG_PM
extern const struct dev_pm_ops scsi_bus_pm_ops; extern const struct dev_pm_ops scsi_bus_pm_ops;
#endif #endif
#ifdef CONFIG_PM_RUNTIME #ifdef CONFIG_PM_RUNTIME
......
...@@ -383,7 +383,7 @@ struct bus_type scsi_bus_type = { ...@@ -383,7 +383,7 @@ struct bus_type scsi_bus_type = {
.name = "scsi", .name = "scsi",
.match = scsi_bus_match, .match = scsi_bus_match,
.uevent = scsi_bus_uevent, .uevent = scsi_bus_uevent,
#ifdef CONFIG_PM_OPS #ifdef CONFIG_PM
.pm = &scsi_bus_pm_ops, .pm = &scsi_bus_pm_ops,
#endif #endif
}; };
......
...@@ -335,7 +335,7 @@ void usb_hcd_pci_shutdown(struct pci_dev *dev) ...@@ -335,7 +335,7 @@ void usb_hcd_pci_shutdown(struct pci_dev *dev)
} }
EXPORT_SYMBOL_GPL(usb_hcd_pci_shutdown); EXPORT_SYMBOL_GPL(usb_hcd_pci_shutdown);
#ifdef CONFIG_PM_OPS #ifdef CONFIG_PM
#ifdef CONFIG_PPC_PMAC #ifdef CONFIG_PPC_PMAC
static void powermac_set_asic(struct pci_dev *pci_dev, int enable) static void powermac_set_asic(struct pci_dev *pci_dev, int enable)
...@@ -580,4 +580,4 @@ const struct dev_pm_ops usb_hcd_pci_pm_ops = { ...@@ -580,4 +580,4 @@ const struct dev_pm_ops usb_hcd_pci_pm_ops = {
}; };
EXPORT_SYMBOL_GPL(usb_hcd_pci_pm_ops); EXPORT_SYMBOL_GPL(usb_hcd_pci_pm_ops);
#endif /* CONFIG_PM_OPS */ #endif /* CONFIG_PM */
...@@ -1465,6 +1465,7 @@ void usb_set_device_state(struct usb_device *udev, ...@@ -1465,6 +1465,7 @@ void usb_set_device_state(struct usb_device *udev,
enum usb_device_state new_state) enum usb_device_state new_state)
{ {
unsigned long flags; unsigned long flags;
int wakeup = -1;
spin_lock_irqsave(&device_state_lock, flags); spin_lock_irqsave(&device_state_lock, flags);
if (udev->state == USB_STATE_NOTATTACHED) if (udev->state == USB_STATE_NOTATTACHED)
...@@ -1479,11 +1480,10 @@ void usb_set_device_state(struct usb_device *udev, ...@@ -1479,11 +1480,10 @@ void usb_set_device_state(struct usb_device *udev,
|| new_state == USB_STATE_SUSPENDED) || new_state == USB_STATE_SUSPENDED)
; /* No change to wakeup settings */ ; /* No change to wakeup settings */
else if (new_state == USB_STATE_CONFIGURED) else if (new_state == USB_STATE_CONFIGURED)
device_set_wakeup_capable(&udev->dev, wakeup = udev->actconfig->desc.bmAttributes
(udev->actconfig->desc.bmAttributes & USB_CONFIG_ATT_WAKEUP;
& USB_CONFIG_ATT_WAKEUP));
else else
device_set_wakeup_capable(&udev->dev, 0); wakeup = 0;
} }
if (udev->state == USB_STATE_SUSPENDED && if (udev->state == USB_STATE_SUSPENDED &&
new_state != USB_STATE_SUSPENDED) new_state != USB_STATE_SUSPENDED)
...@@ -1495,6 +1495,8 @@ void usb_set_device_state(struct usb_device *udev, ...@@ -1495,6 +1495,8 @@ void usb_set_device_state(struct usb_device *udev,
} else } else
recursively_mark_NOTATTACHED(udev); recursively_mark_NOTATTACHED(udev);
spin_unlock_irqrestore(&device_state_lock, flags); spin_unlock_irqrestore(&device_state_lock, flags);
if (wakeup >= 0)
device_set_wakeup_capable(&udev->dev, wakeup);
} }
EXPORT_SYMBOL_GPL(usb_set_device_state); EXPORT_SYMBOL_GPL(usb_set_device_state);
......
...@@ -381,7 +381,7 @@ struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle); ...@@ -381,7 +381,7 @@ struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle);
int acpi_enable_wakeup_device_power(struct acpi_device *dev, int state); int acpi_enable_wakeup_device_power(struct acpi_device *dev, int state);
int acpi_disable_wakeup_device_power(struct acpi_device *dev); int acpi_disable_wakeup_device_power(struct acpi_device *dev);
#ifdef CONFIG_PM_OPS #ifdef CONFIG_PM
int acpi_pm_device_sleep_state(struct device *, int *); int acpi_pm_device_sleep_state(struct device *, int *);
#else #else
static inline int acpi_pm_device_sleep_state(struct device *d, int *p) static inline int acpi_pm_device_sleep_state(struct device *d, int *p)
......
...@@ -420,6 +420,7 @@ struct device { ...@@ -420,6 +420,7 @@ struct device {
void *platform_data; /* Platform specific data, device void *platform_data; /* Platform specific data, device
core doesn't touch it */ core doesn't touch it */
struct dev_pm_info power; struct dev_pm_info power;
struct dev_power_domain *pwr_domain;
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
int numa_node; /* NUMA node this device is close to */ int numa_node; /* NUMA node this device is close to */
......
...@@ -267,7 +267,7 @@ const struct dev_pm_ops name = { \ ...@@ -267,7 +267,7 @@ const struct dev_pm_ops name = { \
* callbacks provided by device drivers supporting both the system sleep PM and * callbacks provided by device drivers supporting both the system sleep PM and
* runtime PM, make the pm member point to generic_subsys_pm_ops. * runtime PM, make the pm member point to generic_subsys_pm_ops.
*/ */
#ifdef CONFIG_PM_OPS #ifdef CONFIG_PM
extern struct dev_pm_ops generic_subsys_pm_ops; extern struct dev_pm_ops generic_subsys_pm_ops;
#define GENERIC_SUBSYS_PM_OPS (&generic_subsys_pm_ops) #define GENERIC_SUBSYS_PM_OPS (&generic_subsys_pm_ops)
#else #else
...@@ -465,6 +465,14 @@ struct dev_pm_info { ...@@ -465,6 +465,14 @@ struct dev_pm_info {
extern void update_pm_runtime_accounting(struct device *dev); extern void update_pm_runtime_accounting(struct device *dev);
/*
* Power domains provide callbacks that are executed during system suspend,
* hibernation, system resume and during runtime PM transitions along with
* subsystem-level and driver-level callbacks.
*/
struct dev_power_domain {
struct dev_pm_ops ops;
};
/* /*
* The PM_EVENT_ messages are also used by drivers implementing the legacy * The PM_EVENT_ messages are also used by drivers implementing the legacy
...@@ -565,15 +573,6 @@ enum dpm_order { ...@@ -565,15 +573,6 @@ enum dpm_order {
DPM_ORDER_DEV_LAST, DPM_ORDER_DEV_LAST,
}; };
/*
* Global Power Management flags
* Used to keep APM and ACPI from both being active
*/
extern unsigned int pm_flags;
#define PM_APM 1
#define PM_ACPI 2
extern int pm_generic_suspend(struct device *dev); extern int pm_generic_suspend(struct device *dev);
extern int pm_generic_resume(struct device *dev); extern int pm_generic_resume(struct device *dev);
extern int pm_generic_freeze(struct device *dev); extern int pm_generic_freeze(struct device *dev);
......
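The struct dev_power_domain added to pm.h above wraps a full set of dev_pm_ops that run along with the subsystem-level and driver-level callbacks during system sleep and runtime PM transitions. A hedged sketch of how platform code might use it (the foo_* names and the clock handling are assumptions for illustration, not part of this series):

#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* Assumed platform helpers: gate/ungate a clock shared by the domain. */
static void foo_gate_shared_clock(void) { }
static void foo_ungate_shared_clock(void) { }

static int foo_domain_runtime_suspend(struct device *dev)
{
	int ret = pm_generic_runtime_suspend(dev);

	if (!ret)
		foo_gate_shared_clock();
	return ret;
}

static int foo_domain_runtime_resume(struct device *dev)
{
	foo_ungate_shared_clock();
	return pm_generic_runtime_resume(dev);
}

static struct dev_power_domain foo_pwr_domain = {
	.ops = {
		.runtime_suspend = foo_domain_runtime_suspend,
		.runtime_resume = foo_domain_runtime_resume,
	},
};

/*
 * Platform code would then point each device in the domain at it,
 * e.g. foo_pdev->dev.pwr_domain = &foo_pwr_domain, before registration.
 */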
...@@ -87,6 +87,11 @@ static inline bool pm_runtime_enabled(struct device *dev) ...@@ -87,6 +87,11 @@ static inline bool pm_runtime_enabled(struct device *dev)
return !dev->power.disable_depth; return !dev->power.disable_depth;
} }
static inline bool pm_runtime_callbacks_present(struct device *dev)
{
return !dev->power.no_callbacks;
}
static inline void pm_runtime_mark_last_busy(struct device *dev) static inline void pm_runtime_mark_last_busy(struct device *dev)
{ {
ACCESS_ONCE(dev->power.last_busy) = jiffies; ACCESS_ONCE(dev->power.last_busy) = jiffies;
...@@ -133,6 +138,7 @@ static inline int pm_generic_runtime_resume(struct device *dev) { return 0; } ...@@ -133,6 +138,7 @@ static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
static inline void pm_runtime_no_callbacks(struct device *dev) {} static inline void pm_runtime_no_callbacks(struct device *dev) {}
static inline void pm_runtime_irq_safe(struct device *dev) {} static inline void pm_runtime_irq_safe(struct device *dev) {}
static inline bool pm_runtime_callbacks_present(struct device *dev) { return false; }
static inline void pm_runtime_mark_last_busy(struct device *dev) {} static inline void pm_runtime_mark_last_busy(struct device *dev) {}
static inline void __pm_runtime_use_autosuspend(struct device *dev, static inline void __pm_runtime_use_autosuspend(struct device *dev,
bool use) {} bool use) {}
......
...@@ -62,18 +62,11 @@ struct wakeup_source { ...@@ -62,18 +62,11 @@ struct wakeup_source {
* Changes to device_may_wakeup take effect on the next pm state change. * Changes to device_may_wakeup take effect on the next pm state change.
*/ */
static inline void device_set_wakeup_capable(struct device *dev, bool capable)
{
dev->power.can_wakeup = capable;
}
static inline bool device_can_wakeup(struct device *dev) static inline bool device_can_wakeup(struct device *dev)
{ {
return dev->power.can_wakeup; return dev->power.can_wakeup;
} }
static inline bool device_may_wakeup(struct device *dev) static inline bool device_may_wakeup(struct device *dev)
{ {
return dev->power.can_wakeup && !!dev->power.wakeup; return dev->power.can_wakeup && !!dev->power.wakeup;
...@@ -88,6 +81,7 @@ extern struct wakeup_source *wakeup_source_register(const char *name); ...@@ -88,6 +81,7 @@ extern struct wakeup_source *wakeup_source_register(const char *name);
extern void wakeup_source_unregister(struct wakeup_source *ws); extern void wakeup_source_unregister(struct wakeup_source *ws);
extern int device_wakeup_enable(struct device *dev); extern int device_wakeup_enable(struct device *dev);
extern int device_wakeup_disable(struct device *dev); extern int device_wakeup_disable(struct device *dev);
extern void device_set_wakeup_capable(struct device *dev, bool capable);
extern int device_init_wakeup(struct device *dev, bool val); extern int device_init_wakeup(struct device *dev, bool val);
extern int device_set_wakeup_enable(struct device *dev, bool enable); extern int device_set_wakeup_enable(struct device *dev, bool enable);
extern void __pm_stay_awake(struct wakeup_source *ws); extern void __pm_stay_awake(struct wakeup_source *ws);
......
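In the pm_wakeup.h hunk above, device_set_wakeup_capable() stops being a trivial inline that only set a flag and becomes an out-of-line function, which is presumably why the hub change earlier in this diff defers the call until after the spin lock is dropped. A hedged sketch of the usual probe-time sequence (foo_* names are illustrative):

#include <linux/device.h>
#include <linux/pm_wakeup.h>

static int foo_setup_wakeup(struct device *dev, bool hw_can_wake)
{
	/*
	 * Call in process context (note how the hub code above releases
	 * its spin lock first); this records whether the device is able
	 * to wake the system at all.
	 */
	device_set_wakeup_capable(dev, hw_can_wake);

	/* Optionally enable wakeup by default for devices that support it. */
	if (hw_can_wake)
		return device_wakeup_enable(dev);
	return 0;
}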
/*
* syscore_ops.h - System core operations.
*
* Copyright (C) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
*
* This file is released under the GPLv2.
*/
#ifndef _LINUX_SYSCORE_OPS_H
#define _LINUX_SYSCORE_OPS_H
#include <linux/list.h>
struct syscore_ops {
struct list_head node;
int (*suspend)(void);
void (*resume)(void);
void (*shutdown)(void);
};
extern void register_syscore_ops(struct syscore_ops *ops);
extern void unregister_syscore_ops(struct syscore_ops *ops);
#ifdef CONFIG_PM_SLEEP
extern int syscore_suspend(void);
extern void syscore_resume(void);
#endif
extern void syscore_shutdown(void);
#endif
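The header above is the whole of the new interface: a core subsystem fills in a struct syscore_ops and registers it once. A hedged sketch of a user (the foo_* names and the saved state are made up for illustration):

#include <linux/init.h>
#include <linux/syscore_ops.h>

static unsigned long foo_saved_ctrl;	/* hypothetical register to preserve */

static int foo_syscore_suspend(void)
{
	/*
	 * Runs late in suspend with interrupts disabled; must not sleep.
	 * Returning non-zero aborts the transition.
	 */
	foo_saved_ctrl = 0;	/* read the hardware register here */
	return 0;
}

static void foo_syscore_resume(void)
{
	/* Also runs with interrupts disabled; restore what suspend saved. */
}

static void foo_syscore_shutdown(void)
{
	/* Invoked from syscore_shutdown() on halt, power-off and restart. */
}

static struct syscore_ops foo_syscore_ops = {
	.suspend = foo_syscore_suspend,
	.resume = foo_syscore_resume,
	.shutdown = foo_syscore_shutdown,
};

static int __init foo_syscore_init(void)
{
	register_syscore_ops(&foo_syscore_ops);
	return 0;
}
subsys_initcall(foo_syscore_init);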
...@@ -103,11 +103,14 @@ static struct pm_qos_object *pm_qos_array[] = { ...@@ -103,11 +103,14 @@ static struct pm_qos_object *pm_qos_array[] = {
static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf, static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
size_t count, loff_t *f_pos); size_t count, loff_t *f_pos);
static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
size_t count, loff_t *f_pos);
static int pm_qos_power_open(struct inode *inode, struct file *filp); static int pm_qos_power_open(struct inode *inode, struct file *filp);
static int pm_qos_power_release(struct inode *inode, struct file *filp); static int pm_qos_power_release(struct inode *inode, struct file *filp);
static const struct file_operations pm_qos_power_fops = { static const struct file_operations pm_qos_power_fops = {
.write = pm_qos_power_write, .write = pm_qos_power_write,
.read = pm_qos_power_read,
.open = pm_qos_power_open, .open = pm_qos_power_open,
.release = pm_qos_power_release, .release = pm_qos_power_release,
.llseek = noop_llseek, .llseek = noop_llseek,
...@@ -376,6 +379,27 @@ static int pm_qos_power_release(struct inode *inode, struct file *filp) ...@@ -376,6 +379,27 @@ static int pm_qos_power_release(struct inode *inode, struct file *filp)
} }
static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
size_t count, loff_t *f_pos)
{
s32 value;
unsigned long flags;
struct pm_qos_object *o;
struct pm_qos_request_list *pm_qos_req = filp->private_data;
if (!pm_qos_req)
return -EINVAL;
if (!pm_qos_request_active(pm_qos_req))
return -EINVAL;
o = pm_qos_array[pm_qos_req->pm_qos_class];
spin_lock_irqsave(&pm_qos_lock, flags);
value = pm_qos_get_value(o);
spin_unlock_irqrestore(&pm_qos_lock, flags);
return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
}
static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf, static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
size_t count, loff_t *f_pos) size_t count, loff_t *f_pos)
{ {
......
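With pm_qos_power_read() wired into the fops above, a process holding a pm_qos request open can now read back the aggregated s32 target for that class. A hedged user-space sketch (the device node name matches the existing pm_qos misc devices; error handling is kept minimal):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int32_t target;
	int fd = open("/dev/cpu_dma_latency", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/*
	 * Opening the node registers a request at the default value;
	 * reading returns the currently aggregated target for the class.
	 */
	if (read(fd, &target, sizeof(target)) == sizeof(target))
		printf("cpu_dma_latency target: %d usec\n", (int)target);
	close(fd);
	return 0;
}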
config PM
bool "Power Management support"
depends on !IA64_HP_SIM
---help---
"Power Management" means that parts of your computer are shut
off or put into a power conserving "sleep" mode if they are not
being used. There are two competing standards for doing this: APM
and ACPI. If you want to use either one, say Y here and then also
to the requisite support below.
Power Management is most important for battery powered laptop
computers; if you have a laptop, check out the Linux Laptop home
page on the WWW at <http://www.linux-on-laptops.com/> or
Tuxmobil - Linux on Mobile Computers at <http://www.tuxmobil.org/>
and the Battery Powered Linux mini-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
Note that, even if you say N here, Linux on the x86 architecture
will issue the hlt instruction if nothing is to be done, thereby
sending the processor to sleep and saving power.
config PM_DEBUG
bool "Power Management Debug Support"
depends on PM
---help---
This option enables various debugging support in the Power Management
code. This is helpful when debugging and reporting PM bugs, like
suspend support.
config PM_ADVANCED_DEBUG
bool "Extra PM attributes in sysfs for low-level debugging/testing"
depends on PM_DEBUG
default n
---help---
Add extra sysfs attributes allowing one to access some Power Management
fields of device objects from user space. If you are not a kernel
developer interested in debugging/testing Power Management, say "no".
config PM_VERBOSE
bool "Verbose Power Management debugging"
depends on PM_DEBUG
default n
---help---
This option enables verbose messages from the Power Management code.
config CAN_PM_TRACE
def_bool y
depends on PM_DEBUG && PM_SLEEP && EXPERIMENTAL
config PM_TRACE
bool
help
This enables code to save the last PM event point across
reboot. The architecture needs to support this, x86 for
example does by saving things in the RTC, see below.
The architecture specific code must provide the extern
functions from <linux/resume-trace.h> as well as the
<asm/resume-trace.h> header with a TRACE_RESUME() macro.
The way the information is presented is architecture-
dependent, x86 will print the information during a
late_initcall.
config PM_TRACE_RTC
bool "Suspend/resume event tracing"
depends on CAN_PM_TRACE
depends on X86
select PM_TRACE
default n
---help---
This enables some cheesy code to save the last PM event point in the
RTC across reboots, so that you can debug a machine that just hangs
during suspend (or more commonly, during resume).
To use this debugging feature you should attempt to suspend the
machine, reboot it and then run
dmesg -s 1000000 | grep 'hash matches'
CAUTION: this option will cause your machine's real-time clock to be
set to an invalid time after a resume.
config PM_SLEEP_SMP
bool
depends on SMP
depends on ARCH_SUSPEND_POSSIBLE || ARCH_HIBERNATION_POSSIBLE
depends on PM_SLEEP
select HOTPLUG
select HOTPLUG_CPU
default y
config PM_SLEEP
bool
depends on SUSPEND || HIBERNATION || XEN_SAVE_RESTORE
default y
config PM_SLEEP_ADVANCED_DEBUG
bool
depends on PM_ADVANCED_DEBUG
default n
config SUSPEND config SUSPEND
bool "Suspend to RAM and standby" bool "Suspend to RAM and standby"
depends on PM && ARCH_SUSPEND_POSSIBLE depends on ARCH_SUSPEND_POSSIBLE
default y default y
---help--- ---help---
Allow the system to enter sleep states in which main memory is Allow the system to enter sleep states in which main memory is
powered and thus its contents are preserved, such as the powered and thus its contents are preserved, such as the
suspend-to-RAM state (e.g. the ACPI S3 state). suspend-to-RAM state (e.g. the ACPI S3 state).
config PM_TEST_SUSPEND
bool "Test suspend/resume and wakealarm during bootup"
depends on SUSPEND && PM_DEBUG && RTC_CLASS=y
---help---
This option will let you suspend your machine during bootup, and
make it wake up a few seconds later using an RTC wakeup alarm.
Enable this with a kernel parameter like "test_suspend=mem".
You probably want to have your system's RTC driver statically
linked, ensuring that it's available when this test runs.
config SUSPEND_FREEZER config SUSPEND_FREEZER
bool "Enable freezer for suspend to RAM/standby" \ bool "Enable freezer for suspend to RAM/standby" \
if ARCH_WANTS_FREEZER_CONTROL || BROKEN if ARCH_WANTS_FREEZER_CONTROL || BROKEN
...@@ -133,7 +20,7 @@ config SUSPEND_FREEZER ...@@ -133,7 +20,7 @@ config SUSPEND_FREEZER
config HIBERNATION config HIBERNATION
bool "Hibernation (aka 'suspend to disk')" bool "Hibernation (aka 'suspend to disk')"
depends on PM && SWAP && ARCH_HIBERNATION_POSSIBLE depends on SWAP && ARCH_HIBERNATION_POSSIBLE
select LZO_COMPRESS select LZO_COMPRESS
select LZO_DECOMPRESS select LZO_DECOMPRESS
---help--- ---help---
...@@ -196,6 +83,106 @@ config PM_STD_PARTITION ...@@ -196,6 +83,106 @@ config PM_STD_PARTITION
suspended image to. It will simply pick the first available swap suspended image to. It will simply pick the first available swap
device. device.
config PM_SLEEP
def_bool y
depends on SUSPEND || HIBERNATION || XEN_SAVE_RESTORE
config PM_SLEEP_SMP
def_bool y
depends on SMP
depends on ARCH_SUSPEND_POSSIBLE || ARCH_HIBERNATION_POSSIBLE
depends on PM_SLEEP
select HOTPLUG
select HOTPLUG_CPU
config PM_RUNTIME
bool "Run-time PM core functionality"
depends on !IA64_HP_SIM
---help---
Enable functionality allowing I/O devices to be put into energy-saving
(low power) states at run time (or autosuspended) after a specified
period of inactivity and woken up in response to a hardware-generated
wake-up event or a driver's request.
Hardware support is generally required for this functionality to work
and the bus type drivers of the buses the devices are on are
responsible for the actual handling of the autosuspend requests and
wake-up events.
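As a hedged illustration of the functionality the PM_RUNTIME entry describes (the foo_* driver names are made up, not from this series), a driver typically provides runtime callbacks and brackets its I/O paths with get/put calls:

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_runtime_suspend(struct device *dev)
{
	/* put the hardware into its low-power state */
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	/* power the hardware back up */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};

/* In the I/O path: resume on demand, drop the reference when done. */
static int foo_do_io(struct device *dev)
{
	int ret = pm_runtime_get_sync(dev);

	if (ret < 0)
		return ret;
	/* ... talk to the hardware ... */
	pm_runtime_put(dev);
	return 0;
}

static int foo_probe(struct platform_device *pdev)
{
	pm_runtime_enable(&pdev->dev);
	return 0;
}

static struct platform_driver foo_driver = {
	.probe = foo_probe,
	.driver = {
		.name = "foo",
		.pm = &foo_pm_ops,
	},
};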
config PM
def_bool y
depends on PM_SLEEP || PM_RUNTIME
config PM_DEBUG
bool "Power Management Debug Support"
depends on PM
---help---
This option enables various debugging support in the Power Management
code. This is helpful when debugging and reporting PM bugs, like
suspend support.
config PM_VERBOSE
bool "Verbose Power Management debugging"
depends on PM_DEBUG
---help---
This option enables verbose messages from the Power Management code.
config PM_ADVANCED_DEBUG
bool "Extra PM attributes in sysfs for low-level debugging/testing"
depends on PM_DEBUG
---help---
Add extra sysfs attributes allowing one to access some Power Management
fields of device objects from user space. If you are not a kernel
developer interested in debugging/testing Power Management, say "no".
config PM_TEST_SUSPEND
bool "Test suspend/resume and wakealarm during bootup"
depends on SUSPEND && PM_DEBUG && RTC_CLASS=y
---help---
This option will let you suspend your machine during bootup, and
make it wake up a few seconds later using an RTC wakeup alarm.
Enable this with a kernel parameter like "test_suspend=mem".
You probably want to have your system's RTC driver statically
linked, ensuring that it's available when this test runs.
config CAN_PM_TRACE
def_bool y
depends on PM_DEBUG && PM_SLEEP
config PM_TRACE
bool
help
This enables code to save the last PM event point across
reboot. The architecture needs to support this, x86 for
example does by saving things in the RTC, see below.
The architecture specific code must provide the extern
functions from <linux/resume-trace.h> as well as the
<asm/resume-trace.h> header with a TRACE_RESUME() macro.
The way the information is presented is architecture-
dependent, x86 will print the information during a
late_initcall.
config PM_TRACE_RTC
bool "Suspend/resume event tracing"
depends on CAN_PM_TRACE
depends on X86
select PM_TRACE
---help---
This enables some cheesy code to save the last PM event point in the
RTC across reboots, so that you can debug a machine that just hangs
during suspend (or more commonly, during resume).
To use this debugging feature you should attempt to suspend the
machine, reboot it and then run
dmesg -s 1000000 | grep 'hash matches'
CAUTION: this option will cause your machine's real-time clock to be
set to an invalid time after a resume.
config APM_EMULATION config APM_EMULATION
tristate "Advanced Power Management Emulation" tristate "Advanced Power Management Emulation"
depends on PM && SYS_SUPPORTS_APM_EMULATION depends on PM && SYS_SUPPORTS_APM_EMULATION
...@@ -222,31 +209,11 @@ config APM_EMULATION ...@@ -222,31 +209,11 @@ config APM_EMULATION
anything, try disabling/enabling this option (or disabling/enabling anything, try disabling/enabling this option (or disabling/enabling
APM in your BIOS). APM in your BIOS).
config PM_RUNTIME
bool "Run-time PM core functionality"
depends on PM
---help---
Enable functionality allowing I/O devices to be put into energy-saving
(low power) states at run time (or autosuspended) after a specified
period of inactivity and woken up in response to a hardware-generated
wake-up event or a driver's request.
Hardware support is generally required for this functionality to work
and the bus type drivers of the buses the devices are on are
responsible for the actual handling of the autosuspend requests and
wake-up events.
config PM_OPS
bool
depends on PM_SLEEP || PM_RUNTIME
default y
config ARCH_HAS_OPP config ARCH_HAS_OPP
bool bool
config PM_OPP config PM_OPP
bool "Operating Performance Point (OPP) Layer library" bool "Operating Performance Point (OPP) Layer library"
depends on PM
depends on ARCH_HAS_OPP depends on ARCH_HAS_OPP
---help--- ---help---
SOCs have a standard set of tuples consisting of frequency and SOCs have a standard set of tuples consisting of frequency and
......
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/freezer.h> #include <linux/freezer.h>
#include <linux/gfp.h> #include <linux/gfp.h>
#include <linux/syscore_ops.h>
#include <scsi/scsi_scan.h> #include <scsi/scsi_scan.h>
#include <asm/suspend.h> #include <asm/suspend.h>
...@@ -272,6 +273,8 @@ static int create_image(int platform_mode) ...@@ -272,6 +273,8 @@ static int create_image(int platform_mode)
local_irq_disable(); local_irq_disable();
error = sysdev_suspend(PMSG_FREEZE); error = sysdev_suspend(PMSG_FREEZE);
if (!error)
error = syscore_suspend();
if (error) { if (error) {
printk(KERN_ERR "PM: Some system devices failed to power down, " printk(KERN_ERR "PM: Some system devices failed to power down, "
"aborting hibernation\n"); "aborting hibernation\n");
...@@ -295,6 +298,7 @@ static int create_image(int platform_mode) ...@@ -295,6 +298,7 @@ static int create_image(int platform_mode)
} }
Power_up: Power_up:
syscore_resume();
sysdev_resume(); sysdev_resume();
/* NOTE: dpm_resume_noirq() is just a resume() for devices /* NOTE: dpm_resume_noirq() is just a resume() for devices
* that suspended with irqs off ... no overall powerup. * that suspended with irqs off ... no overall powerup.
...@@ -403,6 +407,8 @@ static int resume_target_kernel(bool platform_mode) ...@@ -403,6 +407,8 @@ static int resume_target_kernel(bool platform_mode)
local_irq_disable(); local_irq_disable();
error = sysdev_suspend(PMSG_QUIESCE); error = sysdev_suspend(PMSG_QUIESCE);
if (!error)
error = syscore_suspend();
if (error) if (error)
goto Enable_irqs; goto Enable_irqs;
...@@ -429,6 +435,7 @@ static int resume_target_kernel(bool platform_mode) ...@@ -429,6 +435,7 @@ static int resume_target_kernel(bool platform_mode)
restore_processor_state(); restore_processor_state();
touch_softlockup_watchdog(); touch_softlockup_watchdog();
syscore_resume();
sysdev_resume(); sysdev_resume();
Enable_irqs: Enable_irqs:
...@@ -516,6 +523,7 @@ int hibernation_platform_enter(void) ...@@ -516,6 +523,7 @@ int hibernation_platform_enter(void)
local_irq_disable(); local_irq_disable();
sysdev_suspend(PMSG_HIBERNATE); sysdev_suspend(PMSG_HIBERNATE);
syscore_suspend();
if (pm_wakeup_pending()) { if (pm_wakeup_pending()) {
error = -EAGAIN; error = -EAGAIN;
goto Power_up; goto Power_up;
...@@ -526,6 +534,7 @@ int hibernation_platform_enter(void) ...@@ -526,6 +534,7 @@ int hibernation_platform_enter(void)
while (1); while (1);
Power_up: Power_up:
syscore_resume();
sysdev_resume(); sysdev_resume();
local_irq_enable(); local_irq_enable();
enable_nonboot_cpus(); enable_nonboot_cpus();
......
...@@ -17,9 +17,6 @@ ...@@ -17,9 +17,6 @@
DEFINE_MUTEX(pm_mutex); DEFINE_MUTEX(pm_mutex);
unsigned int pm_flags;
EXPORT_SYMBOL(pm_flags);
#ifdef CONFIG_PM_SLEEP #ifdef CONFIG_PM_SLEEP
/* Routines for PM-transition notifications */ /* Routines for PM-transition notifications */
......
...@@ -42,15 +42,15 @@ static void swsusp_unset_page_forbidden(struct page *); ...@@ -42,15 +42,15 @@ static void swsusp_unset_page_forbidden(struct page *);
/* /*
* Preferred image size in bytes (tunable via /sys/power/image_size). * Preferred image size in bytes (tunable via /sys/power/image_size).
* When it is set to N, swsusp will do its best to ensure the image * When it is set to N, the image creating code will do its best to
* size will not exceed N bytes, but if that is impossible, it will * ensure the image size will not exceed N bytes, but if that is
* try to create the smallest image possible. * impossible, it will try to create the smallest image possible.
*/ */
unsigned long image_size; unsigned long image_size;
void __init hibernate_image_size_init(void) void __init hibernate_image_size_init(void)
{ {
image_size = ((totalram_pages * 2) / 5) * PAGE_SIZE; image_size = (totalram_pages / 3) * PAGE_SIZE;
} }
/* List of PBEs needed for restoring the pages that were allocated before /* List of PBEs needed for restoring the pages that were allocated before
......
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/syscore_ops.h>
#include <trace/events/power.h> #include <trace/events/power.h>
#include "power.h" #include "power.h"
...@@ -163,11 +164,14 @@ static int suspend_enter(suspend_state_t state) ...@@ -163,11 +164,14 @@ static int suspend_enter(suspend_state_t state)
BUG_ON(!irqs_disabled()); BUG_ON(!irqs_disabled());
error = sysdev_suspend(PMSG_SUSPEND); error = sysdev_suspend(PMSG_SUSPEND);
if (!error)
error = syscore_suspend();
if (!error) { if (!error) {
if (!(suspend_test(TEST_CORE) || pm_wakeup_pending())) { if (!(suspend_test(TEST_CORE) || pm_wakeup_pending())) {
error = suspend_ops->enter(state); error = suspend_ops->enter(state);
events_check_enabled = false; events_check_enabled = false;
} }
syscore_resume();
sysdev_resume(); sysdev_resume();
} }
......
...@@ -37,6 +37,7 @@ ...@@ -37,6 +37,7 @@
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/fs_struct.h> #include <linux/fs_struct.h>
#include <linux/gfp.h> #include <linux/gfp.h>
#include <linux/syscore_ops.h>
#include <linux/compat.h> #include <linux/compat.h>
#include <linux/syscalls.h> #include <linux/syscalls.h>
...@@ -298,6 +299,7 @@ void kernel_restart_prepare(char *cmd) ...@@ -298,6 +299,7 @@ void kernel_restart_prepare(char *cmd)
system_state = SYSTEM_RESTART; system_state = SYSTEM_RESTART;
device_shutdown(); device_shutdown();
sysdev_shutdown(); sysdev_shutdown();
syscore_shutdown();
} }
/** /**
...@@ -336,6 +338,7 @@ void kernel_halt(void) ...@@ -336,6 +338,7 @@ void kernel_halt(void)
{ {
kernel_shutdown_prepare(SYSTEM_HALT); kernel_shutdown_prepare(SYSTEM_HALT);
sysdev_shutdown(); sysdev_shutdown();
syscore_shutdown();
printk(KERN_EMERG "System halted.\n"); printk(KERN_EMERG "System halted.\n");
kmsg_dump(KMSG_DUMP_HALT); kmsg_dump(KMSG_DUMP_HALT);
machine_halt(); machine_halt();
...@@ -355,6 +358,7 @@ void kernel_power_off(void) ...@@ -355,6 +358,7 @@ void kernel_power_off(void)
pm_power_off_prepare(); pm_power_off_prepare();
disable_nonboot_cpus(); disable_nonboot_cpus();
sysdev_shutdown(); sysdev_shutdown();
syscore_shutdown();
printk(KERN_EMERG "Power down.\n"); printk(KERN_EMERG "Power down.\n");
kmsg_dump(KMSG_DUMP_POWEROFF); kmsg_dump(KMSG_DUMP_POWEROFF);
machine_power_off(); machine_power_off();
......