Commit 55cc33ce authored by Rafael J. Wysocki

Merge branch 'pm-sleep' into acpi-pm

parents 1f0b6386 f71495f3
@@ -2,6 +2,7 @@ Device Power Management
 Copyright (c) 2010-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 Copyright (c) 2010 Alan Stern <stern@rowland.harvard.edu>
+Copyright (c) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
 
 Most of the code in Linux is device drivers, so most of the Linux power
@@ -326,6 +327,20 @@ the phases are:
 	driver in some way for the upcoming system power transition, but it
 	should not put the device into a low-power state.
 
+	For devices supporting runtime power management, the return value of the
+	prepare callback can be used to indicate to the PM core that it may
+	safely leave the device in runtime suspend (if runtime-suspended
+	already), provided that all of the device's descendants are also left in
+	runtime suspend. Namely, if the prepare callback returns a positive
+	number and that happens for all of the descendants of the device too,
+	and all of them (including the device itself) are runtime-suspended, the
+	PM core will skip the suspend, suspend_late and suspend_noirq suspend
+	phases as well as the resume_noirq, resume_early and resume phases of
+	the following system resume for all of these devices. In that case,
+	the complete callback will be called directly after the prepare callback
+	and is entirely responsible for bringing the device back to the
+	functional state as appropriate.
+
 	2. The suspend methods should quiesce the device to stop it from performing
 	I/O. They also may save the device registers and put it into the
 	appropriate low-power state, depending on the bus type the device is on,
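
The paragraph added above is easiest to read from a driver's perspective. Below is a minimal sketch, not part of this commit, of a prepare callback that opts in to this mechanism; the foo_* names and the state_valid flag are hypothetical:

#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Hypothetical per-device data; just enough for the example. */
struct foo_ctx {
	bool state_valid;	/* device context preserved by runtime suspend */
};

static int foo_prepare(struct device *dev)
{
	struct foo_ctx *ctx = dev_get_drvdata(dev);

	/*
	 * A positive return value declares that the device appears to be
	 * runtime-suspended and its state is fine. The PM core still
	 * verifies that the device (and every descendant) really is
	 * runtime-suspended before skipping the remaining phases.
	 */
	if (pm_runtime_status_suspended(dev) && ctx->state_valid)
		return 1;

	return 0;	/* run the full set of suspend/resume callbacks */
}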
@@ -400,12 +415,23 @@ When resuming from freeze, standby or memory sleep, the phases are:
 	the resume callbacks occur; it's not necessary to wait until the
 	complete phase.
 
+	Moreover, if the preceding prepare callback returned a positive number,
+	the device may have been left in runtime suspend throughout the whole
+	system suspend and resume (the suspend, suspend_late, suspend_noirq
+	phases of system suspend and the resume_noirq, resume_early, resume
+	phases of system resume may have been skipped for it). In that case,
+	the complete callback is entirely responsible for bringing the device
+	back to the functional state after system suspend if necessary. [For
+	example, it may need to queue up a runtime resume request for the device
+	for this purpose.] To check if that is the case, the complete callback
+	can consult the device's power.direct_complete flag. Namely, if that
+	flag is set when the complete callback is being run, it has been called
+	directly after the preceding prepare and special action may be required
+	to make the device work correctly afterward.
+
 At the end of these phases, drivers should be as functional as they were before
 suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are
-gated on. Even if the device was in a low-power state before the system sleep
-because of runtime power management, afterwards it should be back in its
-full-power state. There are multiple reasons why it's best to do this; they are
-discussed in more detail in Documentation/power/runtime_pm.txt.
+gated on.
 
 However, the details here may again be platform-specific. For example,
 some systems support multiple "run" states, and the mode in effect at
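
The complete callback of the same hypothetical driver, consulting power.direct_complete exactly as the paragraph added above describes; again a sketch rather than code from this commit:

static void foo_complete(struct device *dev)
{
	/*
	 * power.direct_complete set here means this callback ran directly
	 * after the prepare callback: the suspend/resume phases were
	 * skipped and the device stayed in runtime suspend throughout.
	 * Queue up a runtime resume request if the device needs to be
	 * functional right away.
	 */
	if (dev->power.direct_complete)
		pm_request_resume(dev);
}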
......
@@ -2,6 +2,7 @@ Runtime Power Management Framework for I/O Devices
 (C) 2009-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
 (C) 2010 Alan Stern <stern@rowland.harvard.edu>
+(C) 2014 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
 
 1. Introduction
@@ -444,6 +445,10 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
   bool pm_runtime_status_suspended(struct device *dev);
     - return true if the device's runtime PM status is 'suspended'
 
+  bool pm_runtime_suspended_if_enabled(struct device *dev);
+    - return true if the device's runtime PM status is 'suspended' and its
+      'power.disable_depth' field is equal to 1
+
   void pm_runtime_allow(struct device *dev);
     - set the power.runtime_auto flag for the device and decrease its usage
       counter (used by the /sys/devices/.../power/control interface to
@@ -644,6 +649,18 @@ place (in particular, if the system is not waking up from hibernation), it may
 be more efficient to leave the devices that had been suspended before the system
 suspend began in the suspended state.
 
+To this end, the PM core provides a mechanism allowing some coordination between
+different levels of device hierarchy. Namely, if a system suspend .prepare()
+callback returns a positive number for a device, that indicates to the PM core
+that the device appears to be runtime-suspended and its state is fine, so it
+may be left in runtime suspend provided that all of its descendants are also
+left in runtime suspend. If that happens, the PM core will not execute any
+system suspend and resume callbacks for all of those devices, except for the
+complete callback, which is then entirely responsible for handling the device
+as appropriate. This only applies to system suspend transitions that are not
+related to hibernation (see Documentation/power/devices.txt for more
+information).
+
 The PM core does its best to reduce the probability of race conditions between
 the runtime PM and system suspend/resume (and hibernation) callbacks by carrying
 out the following operations:
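
For illustration, the two hypothetical callbacks sketched earlier would be wired up through the driver's dev_pm_ops; foo_suspend() and foo_resume() stand for whatever system sleep callbacks the driver already implements:

static const struct dev_pm_ops foo_pm_ops = {
	/* May return a positive number to opt in to direct-complete. */
	.prepare	= foo_prepare,
	/* The only callback executed at resume if the phases were skipped. */
	.complete	= foo_complete,
	SET_SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
};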
......
@@ -220,7 +220,10 @@ Q: After resuming, system is paging heavily, leading to very bad interactivity.
 
 A: Try running
 
-cat `cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u` > /dev/null
+cat /proc/[0-9]*/maps | grep / | sed 's:.* /:/:' | sort -u | while read file
+do
+	test -f "$file" && cat "$file" > /dev/null
+done
 
 after resume. swapoff -a; swapon -a may also be useful.
......
@@ -479,7 +479,7 @@ static int device_resume_noirq(struct device *dev, pm_message_t state, bool asyn
 	TRACE_DEVICE(dev);
 	TRACE_RESUME(0);
 
-	if (dev->power.syscore)
+	if (dev->power.syscore || dev->power.direct_complete)
 		goto Out;
 
 	if (!dev->power.is_noirq_suspended)
@@ -605,7 +605,7 @@ static int device_resume_early(struct device *dev, pm_message_t state, bool asyn
 	TRACE_DEVICE(dev);
 	TRACE_RESUME(0);
 
-	if (dev->power.syscore)
+	if (dev->power.syscore || dev->power.direct_complete)
 		goto Out;
 
 	if (!dev->power.is_late_suspended)
@@ -735,6 +735,12 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
 	if (dev->power.syscore)
 		goto Complete;
 
+	if (dev->power.direct_complete) {
+		/* Match the pm_runtime_disable() in __device_suspend(). */
+		pm_runtime_enable(dev);
+		goto Complete;
+	}
+
 	dpm_wait(dev->parent, async);
 	dpm_watchdog_set(&wd, dev);
 	device_lock(dev);
@@ -1007,7 +1013,7 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
 		goto Complete;
 	}
 
-	if (dev->power.syscore)
+	if (dev->power.syscore || dev->power.direct_complete)
 		goto Complete;
 
 	dpm_wait_for_children(dev, async);
@@ -1146,7 +1152,7 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool as
 		goto Complete;
 	}
 
-	if (dev->power.syscore)
+	if (dev->power.syscore || dev->power.direct_complete)
 		goto Complete;
 
 	dpm_wait_for_children(dev, async);
@@ -1332,6 +1338,17 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 	if (dev->power.syscore)
 		goto Complete;
 
+	if (dev->power.direct_complete) {
+		if (pm_runtime_status_suspended(dev)) {
+			pm_runtime_disable(dev);
+			if (pm_runtime_suspended_if_enabled(dev))
+				goto Complete;
+
+			pm_runtime_enable(dev);
+		}
+		dev->power.direct_complete = false;
+	}
+
 	dpm_watchdog_set(&wd, dev);
 	device_lock(dev);
@@ -1382,10 +1399,19 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
  End:
 	if (!error) {
+		struct device *parent = dev->parent;
+
 		dev->power.is_suspended = true;
-		if (dev->power.wakeup_path
-		    && dev->parent && !dev->parent->power.ignore_children)
-			dev->parent->power.wakeup_path = true;
+		if (parent) {
+			spin_lock_irq(&parent->power.lock);
+
+			dev->parent->power.direct_complete = false;
+			if (dev->power.wakeup_path
+			    && !dev->parent->power.ignore_children)
+				dev->parent->power.wakeup_path = true;
+
+			spin_unlock_irq(&parent->power.lock);
+		}
 	}
 
 	device_unlock(dev);
@@ -1487,7 +1513,7 @@ static int device_prepare(struct device *dev, pm_message_t state)
 {
 	int (*callback)(struct device *) = NULL;
 	char *info = NULL;
-	int error = 0;
+	int ret = 0;
 
 	if (dev->power.syscore)
 		return 0;
@@ -1523,17 +1549,27 @@ static int device_prepare(struct device *dev, pm_message_t state)
 		callback = dev->driver->pm->prepare;
 	}
 
-	if (callback) {
-		error = callback(dev);
-		suspend_report_result(callback, error);
-	}
+	if (callback)
+		ret = callback(dev);
 
 	device_unlock(dev);
 
-	if (error)
+	if (ret < 0) {
+		suspend_report_result(callback, ret);
 		pm_runtime_put(dev);
-
-	return error;
+		return ret;
+	}
+	/*
+	 * A positive return value from ->prepare() means "this device appears
+	 * to be runtime-suspended and its state is fine, so if it really is
+	 * runtime-suspended, you can leave it in that state provided that you
+	 * will do the same thing with all of its descendants". This only
+	 * applies to suspend transitions, however.
+	 */
+	spin_lock_irq(&dev->power.lock);
+	dev->power.direct_complete = ret > 0 && state.event == PM_EVENT_SUSPEND;
+	spin_unlock_irq(&dev->power.lock);
+	return 0;
 }
 
 /**
......
@@ -32,6 +32,7 @@ LIST_HEAD(cpuidle_detected_devices);
 static int enabled_devices;
 static int off __read_mostly;
 static int initialized __read_mostly;
+static bool use_deepest_state __read_mostly;
 
 int cpuidle_disabled(void)
 {
@@ -65,23 +66,42 @@ int cpuidle_play_dead(void)
 }
 
 /**
- * cpuidle_enabled - check if the cpuidle framework is ready
- * @dev: cpuidle device for this cpu
- * @drv: cpuidle driver for this cpu
+ * cpuidle_use_deepest_state - Enable/disable the "deepest idle" mode.
+ * @enable: Whether enable or disable the feature.
+ *
+ * If the "deepest idle" mode is enabled, cpuidle will ignore the governor and
+ * always use the state with the greatest exit latency (out of the states that
+ * are not disabled).
  *
- * Return 0 on success, otherwise:
- * -NODEV : the cpuidle framework is not available
- * -EBUSY : the cpuidle framework is not initialized
+ * This function can only be called after cpuidle_pause() to avoid races.
  */
-int cpuidle_enabled(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+void cpuidle_use_deepest_state(bool enable)
 {
-	if (off || !initialized)
-		return -ENODEV;
+	use_deepest_state = enable;
+}
 
-	if (!drv || !dev || !dev->enabled)
-		return -EBUSY;
+/**
+ * cpuidle_find_deepest_state - Find the state of the greatest exit latency.
+ * @drv: cpuidle driver for a given CPU.
+ * @dev: cpuidle device for a given CPU.
+ */
+static int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+				      struct cpuidle_device *dev)
+{
+	unsigned int latency_req = 0;
+	int i, ret = CPUIDLE_DRIVER_STATE_START - 1;
 
-	return 0;
+	for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable || s->exit_latency <= latency_req)
+			continue;
+
+		latency_req = s->exit_latency;
+		ret = i;
+	}
+	return ret;
 }
 
 /**
@@ -138,6 +158,15 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
  */
 int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 {
+	if (off || !initialized)
+		return -ENODEV;
+
+	if (!drv || !dev || !dev->enabled)
+		return -EBUSY;
+
+	if (unlikely(use_deepest_state))
+		return cpuidle_find_deepest_state(drv, dev);
+
 	return cpuidle_curr_governor->select(drv, dev);
 }
 
@@ -169,7 +198,7 @@ int cpuidle_enter(struct cpuidle_driver *drv, struct cpuidle_device *dev,
  */
 void cpuidle_reflect(struct cpuidle_device *dev, int index)
 {
-	if (cpuidle_curr_governor->reflect)
+	if (cpuidle_curr_governor->reflect && !unlikely(use_deepest_state))
 		cpuidle_curr_governor->reflect(dev, index);
 }
......
@@ -296,7 +296,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 		data->needs_update = 0;
 	}
 
-	data->last_state_idx = 0;
+	data->last_state_idx = CPUIDLE_DRIVER_STATE_START - 1;
 
 	/* Special case when user has set very strict latency requirement */
 	if (unlikely(latency_req == 0))
@@ -310,13 +310,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 
 	data->bucket = which_bucket(data->next_timer_us);
 
-	/*
-	 * if the correction factor is 0 (eg first time init or cpu hotplug
-	 * etc), we actually want to start out with a unity factor.
-	 */
-	if (data->correction_factor[data->bucket] == 0)
-		data->correction_factor[data->bucket] = RESOLUTION * DECAY;
-
 	/*
 	 * Force the result of multiplication to be 64 bits even if both
 	 * operands are 32 bits.
@@ -466,9 +459,17 @@ static int menu_enable_device(struct cpuidle_driver *drv,
 				struct cpuidle_device *dev)
 {
 	struct menu_device *data = &per_cpu(menu_devices, dev->cpu);
+	int i;
 
 	memset(data, 0, sizeof(struct menu_device));
 
+	/*
+	 * if the correction factor is 0 (eg first time init or cpu hotplug
+	 * etc), we actually want to start out with a unity factor.
+	 */
+	for(i = 0; i < BUCKETS; i++)
+		data->correction_factor[i] = RESOLUTION * DECAY;
+
 	return 0;
 }
......
@@ -120,8 +120,6 @@ struct cpuidle_driver {
 #ifdef CONFIG_CPU_IDLE
 extern void disable_cpuidle(void);
-extern int cpuidle_enabled(struct cpuidle_driver *drv,
-			   struct cpuidle_device *dev);
 extern int cpuidle_select(struct cpuidle_driver *drv,
			   struct cpuidle_device *dev);
 extern int cpuidle_enter(struct cpuidle_driver *drv,
@@ -145,13 +143,11 @@ extern void cpuidle_resume(void);
 extern int cpuidle_enable_device(struct cpuidle_device *dev);
 extern void cpuidle_disable_device(struct cpuidle_device *dev);
 extern int cpuidle_play_dead(void);
+extern void cpuidle_use_deepest_state(bool enable);
 
 extern struct cpuidle_driver *cpuidle_get_cpu_driver(struct cpuidle_device *dev);
 #else
 static inline void disable_cpuidle(void) { }
-static inline int cpuidle_enabled(struct cpuidle_driver *drv,
-				  struct cpuidle_device *dev)
-{return -ENODEV; }
 static inline int cpuidle_select(struct cpuidle_driver *drv,
				 struct cpuidle_device *dev)
 {return -ENODEV; }
@@ -180,6 +176,7 @@ static inline int cpuidle_enable_device(struct cpuidle_device *dev)
 {return -ENODEV; }
 static inline void cpuidle_disable_device(struct cpuidle_device *dev) { }
 static inline int cpuidle_play_dead(void) {return -ENODEV; }
+static inline void cpuidle_use_deepest_state(bool enable) {}
 
 static inline struct cpuidle_driver *cpuidle_get_cpu_driver(
				struct cpuidle_device *dev) {return NULL; }
 #endif
......
@@ -93,13 +93,23 @@ typedef struct pm_message {
  *	been registered) to recover from the race condition.
  *	This method is executed for all kinds of suspend transitions and is
  *	followed by one of the suspend callbacks: @suspend(), @freeze(), or
- *	@poweroff(). The PM core executes subsystem-level @prepare() for all
- *	devices before starting to invoke suspend callbacks for any of them, so
- *	generally devices may be assumed to be functional or to respond to
- *	runtime resume requests while @prepare() is being executed. However,
- *	device drivers may NOT assume anything about the availability of user
- *	space at that time and it is NOT valid to request firmware from within
- *	@prepare() (it's too late to do that). It also is NOT valid to allocate
+ *	@poweroff(). If the transition is a suspend to memory or standby (that
+ *	is, not related to hibernation), the return value of @prepare() may be
+ *	used to indicate to the PM core to leave the device in runtime suspend
+ *	if applicable. Namely, if @prepare() returns a positive number, the PM
+ *	core will understand that as a declaration that the device appears to be
+ *	runtime-suspended and it may be left in that state during the entire
+ *	transition and during the subsequent resume if all of its descendants
+ *	are left in runtime suspend too. If that happens, @complete() will be
+ *	executed directly after @prepare() and it must ensure the proper
+ *	functioning of the device after the system resume.
+ *	The PM core executes subsystem-level @prepare() for all devices before
+ *	starting to invoke suspend callbacks for any of them, so generally
+ *	devices may be assumed to be functional or to respond to runtime resume
+ *	requests while @prepare() is being executed. However, device drivers
+ *	may NOT assume anything about the availability of user space at that
+ *	time and it is NOT valid to request firmware from within @prepare()
+ *	(it's too late to do that). It also is NOT valid to allocate
  *	substantial amounts of memory from @prepare() in the GFP_KERNEL mode.
  *	[To work around these limitations, drivers may register suspend and
  *	hibernation notifiers to be executed before the freezing of tasks.]
@@ -112,7 +122,16 @@ typedef struct pm_message {
  *	of the other devices that the PM core has unsuccessfully attempted to
  *	suspend earlier).
  *	The PM core executes subsystem-level @complete() after it has executed
- *	the appropriate resume callbacks for all devices.
+ *	the appropriate resume callbacks for all devices. If the corresponding
+ *	@prepare() at the beginning of the suspend transition returned a
+ *	positive number and the device was left in runtime suspend (without
+ *	executing any suspend and resume callbacks for it), @complete() will be
+ *	the only callback executed for the device during resume. In that case,
+ *	@complete() must be prepared to do whatever is necessary to ensure the
+ *	proper functioning of the device after the system resume. To this end,
+ *	@complete() can check the power.direct_complete flag of the device to
+ *	learn whether (unset) or not (set) the previous suspend and resume
+ *	callbacks have been executed for it.
  *
  * @suspend: Executed before putting the system into a sleep state in which the
  *	contents of main memory are preserved. The exact action to perform
@@ -546,6 +565,7 @@ struct dev_pm_info {
 	bool			is_late_suspended:1;
 	bool			ignore_children:1;
 	bool			early_init:1;	/* Owned by the PM core */
+	bool			direct_complete:1;	/* Owned by the PM core */
 	spinlock_t		lock;
 #ifdef CONFIG_PM_SLEEP
 	struct list_head	entry;
......
@@ -101,6 +101,11 @@ static inline bool pm_runtime_status_suspended(struct device *dev)
 	return dev->power.runtime_status == RPM_SUSPENDED;
 }
 
+static inline bool pm_runtime_suspended_if_enabled(struct device *dev)
+{
+	return pm_runtime_status_suspended(dev) && dev->power.disable_depth == 1;
+}
+
 static inline bool pm_runtime_enabled(struct device *dev)
 {
 	return !dev->power.disable_depth;
@@ -150,6 +155,7 @@ static inline void device_set_run_wake(struct device *dev, bool enable) {}
 static inline bool pm_runtime_suspended(struct device *dev) { return false; }
 static inline bool pm_runtime_active(struct device *dev) { return true; }
 static inline bool pm_runtime_status_suspended(struct device *dev) { return false; }
+static inline bool pm_runtime_suspended_if_enabled(struct device *dev) { return false; }
 static inline bool pm_runtime_enabled(struct device *dev) { return false; }
 
 static inline void pm_runtime_no_callbacks(struct device *dev) {}
......
@@ -35,7 +35,7 @@
 static int nocompress;
 static int noresume;
 static int resume_wait;
-static int resume_delay;
+static unsigned int resume_delay;
 static char resume_file[256] = CONFIG_PM_STD_PARTITION;
 dev_t swsusp_resume_device;
 sector_t swsusp_resume_block;
@@ -228,19 +228,23 @@ static void platform_recover(int platform_mode)
 void swsusp_show_speed(struct timeval *start, struct timeval *stop,
			unsigned nr_pages, char *msg)
 {
-	s64 elapsed_centisecs64;
-	int centisecs;
-	int k;
-	int kps;
+	u64 elapsed_centisecs64;
+	unsigned int centisecs;
+	unsigned int k;
+	unsigned int kps;
 
 	elapsed_centisecs64 = timeval_to_ns(stop) - timeval_to_ns(start);
+	/*
+	 * If "(s64)elapsed_centisecs64 < 0", it will print long elapsed time,
+	 * it is obvious enough for what went wrong.
+	 */
 	do_div(elapsed_centisecs64, NSEC_PER_SEC / 100);
 	centisecs = elapsed_centisecs64;
 	if (centisecs == 0)
 		centisecs = 1;	/* avoid div-by-zero */
 	k = nr_pages * (PAGE_SIZE / 1024);
 	kps = (k * 100) / centisecs;
-	printk(KERN_INFO "PM: %s %d kbytes in %d.%02d seconds (%d.%02d MB/s)\n",
+	printk(KERN_INFO "PM: %s %u kbytes in %u.%02u seconds (%u.%02u MB/s)\n",
		msg, k,
		centisecs / 100, centisecs % 100,
		kps / 1000, (kps % 1000) / 10);
@@ -595,7 +599,8 @@ static void power_down(void)
 	case HIBERNATION_PLATFORM:
 		hibernation_platform_enter();
 	case HIBERNATION_SHUTDOWN:
-		kernel_power_off();
+		if (pm_power_off)
+			kernel_power_off();
 		break;
 #ifdef CONFIG_SUSPEND
 	case HIBERNATION_SUSPEND:
@@ -623,7 +628,8 @@ static void power_down(void)
 	 * corruption after resume.
 	 */
 	printk(KERN_CRIT "PM: Please power down manually\n");
-	while(1);
+	while (1)
+		cpu_relax();
 }
 
 /**
@@ -1109,7 +1115,10 @@ static int __init resumewait_setup(char *str)
 
 static int __init resumedelay_setup(char *str)
 {
-	resume_delay = simple_strtoul(str, NULL, 0);
+	int rc = kstrtouint(str, 0, &resume_delay);
+
+	if (rc)
+		return rc;
 	return 1;
 }
......
@@ -62,9 +62,11 @@ static void freeze_begin(void)
 
 static void freeze_enter(void)
 {
+	cpuidle_use_deepest_state(true);
 	cpuidle_resume();
 	wait_event(suspend_freeze_wait_head, suspend_freeze_wake);
 	cpuidle_pause();
+	cpuidle_use_deepest_state(false);
 }
 
 void freeze_wake(void)
......
@@ -101,19 +101,13 @@ static int cpuidle_idle_call(void)
 	rcu_idle_enter();
 
 	/*
-	 * Check if the cpuidle framework is ready, otherwise fallback
-	 * to the default arch specific idle method
+	 * Ask the cpuidle framework to choose a convenient idle state.
+	 * Fall back to the default arch specific idle method on errors.
 	 */
-	ret = cpuidle_enabled(drv, dev);
+	next_state = cpuidle_select(drv, dev);
 
-	if (!ret) {
-		/*
-		 * Ask the governor to choose an idle state it thinks
-		 * it is convenient to go to. There is *always* a
-		 * convenient idle state
-		 */
-		next_state = cpuidle_select(drv, dev);
-
+	ret = next_state;
+	if (ret >= 0) {
 		/*
 		 * The idle task must be scheduled, it is pointless to
 		 * go to idle, just update no idle residency and get
@@ -140,7 +134,7 @@ static int cpuidle_idle_call(void)
				CLOCK_EVT_NOTIFY_BROADCAST_ENTER,
				&dev->cpu);
 
-		if (!ret) {
+		if (ret >= 0) {
			trace_cpu_idle_rcuidle(next_state, dev->cpu);
 
			/*
@@ -175,7 +169,7 @@ static int cpuidle_idle_call(void)
	 * We can't use the cpuidle framework, let's use the default
	 * idle routine
	 */
-	if (ret)
+	if (ret < 0)
		arch_cpu_idle();
 
	__current_set_polling();
......