Commit eb59c505 authored by Linus Torvalds

Merge branch 'pm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

* 'pm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (76 commits)
  PM / Hibernate: Implement compat_ioctl for /dev/snapshot
  PM / Freezer: fix return value of freezable_schedule_timeout_killable()
  PM / shmobile: Allow the A4R domain to be turned off at run time
  PM / input / touchscreen: Make st1232 use device PM QoS constraints
  PM / QoS: Introduce dev_pm_qos_add_ancestor_request()
  PM / shmobile: Remove the stay_on flag from SH7372's PM domains
  PM / shmobile: Don't include SH7372's INTCS in syscore suspend/resume
  PM / shmobile: Add support for the sh7372 A4S power domain / sleep mode
  PM: Drop generic_subsys_pm_ops
  PM / Sleep: Remove forward-only callbacks from AMBA bus type
  PM / Sleep: Remove forward-only callbacks from platform bus type
  PM: Run the driver callback directly if the subsystem one is not there
  PM / Sleep: Make pm_op() and pm_noirq_op() return callback pointers
  PM/Devfreq: Add Exynos4-bus device DVFS driver for Exynos4210/4212/4412.
  PM / Sleep: Merge internal functions in generic_ops.c
  PM / Sleep: Simplify generic system suspend callbacks
  PM / Hibernate: Remove deprecated hibernation snapshot ioctls
  PM / Sleep: Fix freezer failures due to racy usermodehelper_is_disabled()
  ARM: S3C64XX: Implement basic power domain support
  PM / shmobile: Use common always on power domain governor
  ...

Fix up trivial conflict in fs/xfs/xfs_buf.c due to removal of unused
XBT_FORCE_SLEEP bit
parents 1619ed8f c233523b
@@ -85,17 +85,6 @@ Who: Robin Getz <rgetz@blackfin.uclinux.org> & Matt Mackall <mpm@selenic.com>
 ---------------------------
-What:	Deprecated snapshot ioctls
-When:	2.6.36
-Why:	The ioctls in kernel/power/user.c were marked as deprecated long time
-	ago. Now they notify users about that so that they need to replace
-	their userspace. After some more time, remove them completely.
-Who:	Jiri Slaby <jirislaby@gmail.com>
----------------------------
 What:	The ieee80211_regdom module parameter
 When:	March 2010 / desktop catchup
...
@@ -126,7 +126,9 @@ The core methods to suspend and resume devices reside in struct dev_pm_ops
 pointed to by the ops member of struct dev_pm_domain, or by the pm member of
 struct bus_type, struct device_type and struct class.  They are mostly of
 interest to the people writing infrastructure for platforms and buses, like PCI
-or USB, or device type and device class drivers.
+or USB, or device type and device class drivers.  They also are relevant to the
+writers of device drivers whose subsystems (PM domains, device types, device
+classes and bus types) don't provide all power management methods.
 
 Bus drivers implement these methods as appropriate for the hardware and the
 drivers using it; PCI works differently from USB, and so on.  Not many people
@@ -268,32 +270,35 @@ various phases always run after tasks have been frozen and before they are
 unfrozen.  Furthermore, the *_noirq phases run at a time when IRQ handlers have
 been disabled (except for those marked with the IRQF_NO_SUSPEND flag).
 
-All phases use PM domain, bus, type, or class callbacks (that is, methods
-defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, or dev->class->pm).
-These callbacks are regarded by the PM core as mutually exclusive.  Moreover,
-PM domain callbacks always take precedence over bus, type and class callbacks,
-while type callbacks take precedence over bus and class callbacks, and class
-callbacks take precedence over bus callbacks.  To be precise, the following
-rules are used to determine which callback to execute in the given phase:
+All phases use PM domain, bus, type, class or driver callbacks (that is, methods
+defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, dev->class->pm or
+dev->driver->pm).  These callbacks are regarded by the PM core as mutually
+exclusive.  Moreover, PM domain callbacks always take precedence over all of the
+other callbacks and, for example, type callbacks take precedence over bus, class
+and driver callbacks.  To be precise, the following rules are used to determine
+which callback to execute in the given phase:
 
-  1. If dev->pm_domain is present, the PM core will attempt to execute the
-     callback included in dev->pm_domain->ops.  If that callback is not
-     present, no action will be carried out for the given device.
+  1. If dev->pm_domain is present, the PM core will choose the callback
+     included in dev->pm_domain->ops for execution.
 
   2. Otherwise, if both dev->type and dev->type->pm are present, the callback
-     included in dev->type->pm will be executed.
+     included in dev->type->pm will be chosen for execution.
 
   3. Otherwise, if both dev->class and dev->class->pm are present, the
-     callback included in dev->class->pm will be executed.
+     callback included in dev->class->pm will be chosen for execution.
 
   4. Otherwise, if both dev->bus and dev->bus->pm are present, the callback
-     included in dev->bus->pm will be executed.
+     included in dev->bus->pm will be chosen for execution.
 
 This allows PM domains and device types to override callbacks provided by bus
 types or device classes if necessary.
 
-These callbacks may in turn invoke device- or driver-specific methods stored in
-dev->driver->pm, but they don't have to.
+The PM domain, type, class and bus callbacks may in turn invoke device- or
+driver-specific methods stored in dev->driver->pm, but they don't have to do
+that.
+
+If the subsystem callback chosen for execution is not present, the PM core will
+execute the corresponding method from dev->driver->pm instead if there is one.
 
 
 Entering System Suspend
...
@@ -21,7 +21,7 @@ freeze_processes() (defined in kernel/power/process.c) is called.  It executes
 try_to_freeze_tasks() that sets TIF_FREEZE for all of the freezable tasks and
 either wakes them up, if they are kernel threads, or sends fake signals to them,
 if they are user space processes.  A task that has TIF_FREEZE set, should react
-to it by calling the function called refrigerator() (defined in
+to it by calling the function called __refrigerator() (defined in
 kernel/freezer.c), which sets the task's PF_FROZEN flag, changes its state
 to TASK_UNINTERRUPTIBLE and makes it loop until PF_FROZEN is cleared for it.
 Then, we say that the task is 'frozen' and therefore the set of functions
@@ -29,10 +29,10 @@ handling this mechanism is referred to as 'the freezer' (these functions are
 defined in kernel/power/process.c, kernel/freezer.c & include/linux/freezer.h).
 User space processes are generally frozen before kernel threads.
 
-It is not recommended to call refrigerator() directly.  Instead, it is
-recommended to use the try_to_freeze() function (defined in
-include/linux/freezer.h), that checks the task's TIF_FREEZE flag and makes the
-task enter refrigerator() if the flag is set.
+__refrigerator() must not be called directly.  Instead, use the
+try_to_freeze() function (defined in include/linux/freezer.h), that checks
+the task's TIF_FREEZE flag and makes the task enter __refrigerator() if the
+flag is set.
 
 For user space processes try_to_freeze() is called automatically from the
 signal-handling code, but the freezable kernel threads need to call it
@@ -61,13 +61,13 @@ wait_event_freezable() and wait_event_freezable_timeout() macros.
 After the system memory state has been restored from a hibernation image and
 devices have been reinitialized, the function thaw_processes() is called in
 order to clear the PF_FROZEN flag for each frozen task.  Then, the tasks that
-have been frozen leave refrigerator() and continue running.
+have been frozen leave __refrigerator() and continue running.
 
 III. Which kernel threads are freezable?
 
 Kernel threads are not freezable by default.  However, a kernel thread may clear
 PF_NOFREEZE for itself by calling set_freezable() (the resetting of PF_NOFREEZE
-directly is strongly discouraged).  From this point it is regarded as freezable
+directly is not allowed).  From this point it is regarded as freezable
 and must call try_to_freeze() in a suitable place.
 
 IV. Why do we do that?
@@ -176,3 +176,28 @@ tasks, since it generally exists anyway.
 A driver must have all firmwares it may need in RAM before suspend() is called.
 If keeping them is not practical, for example due to their size, they must be
 requested early enough using the suspend notifier API described in notifiers.txt.
+
+VI. Are there any precautions to be taken to prevent freezing failures?
+
+Yes, there are.
+
+First of all, grabbing the 'pm_mutex' lock to mutually exclude a piece of code
+from system-wide sleep such as suspend/hibernation is not encouraged.
+If possible, that piece of code must instead hook onto the suspend/hibernation
+notifiers to achieve mutual exclusion. Look at the CPU-Hotplug code
+(kernel/cpu.c) for an example.
+
+However, if that is not feasible, and grabbing 'pm_mutex' is deemed necessary,
+it is strongly discouraged to directly call mutex_[un]lock(&pm_mutex) since
+that could lead to freezing failures, because if the suspend/hibernate code
+successfully acquired the 'pm_mutex' lock, and hence that other entity failed
+to acquire the lock, then that task would get blocked in TASK_UNINTERRUPTIBLE
+state. As a consequence, the freezer would not be able to freeze that task,
+leading to freezing failure.
+
+However, the [un]lock_system_sleep() APIs are safe to use in this scenario,
+since they ask the freezer to skip freezing this task, since it is anyway
+"frozen enough" as it is blocked on 'pm_mutex', which will be released
+only after the entire suspend/hibernation sequence is complete.
+
+So, to summarize, use [un]lock_system_sleep() instead of directly using
+mutex_[un]lock(&pm_mutex). That would prevent freezing failures.
@@ -57,6 +57,10 @@ the following:
 
  4. Bus type of the device, if both dev->bus and dev->bus->pm are present.
 
+If the subsystem chosen by applying the above rules doesn't provide the relevant
+callback, the PM core will invoke the corresponding driver callback stored in
+dev->driver->pm directly (if present).
+
 The PM core always checks which callback to use in the order given above, so the
 priority order of callbacks from high to low is: PM domain, device type, class
 and bus type.  Moreover, the high-priority one will always take precedence over
@@ -64,86 +68,88 @@ a low-priority one.  The PM domain, bus type, device type and class callbacks
 are referred to as subsystem-level callbacks in what follows.
 
 By default, the callbacks are always invoked in process context with interrupts
-enabled.  However, subsystems can use the pm_runtime_irq_safe() helper function
-to tell the PM core that their ->runtime_suspend(), ->runtime_resume() and
-->runtime_idle() callbacks may be invoked in atomic context with interrupts
-disabled for a given device.  This implies that the callback routines in
-question must not block or sleep, but it also means that the synchronous helper
-functions listed at the end of Section 4 may be used for that device within an
-interrupt handler or generally in an atomic context.
+enabled.  However, the pm_runtime_irq_safe() helper function can be used to tell
+the PM core that it is safe to run the ->runtime_suspend(), ->runtime_resume()
+and ->runtime_idle() callbacks for the given device in atomic context with
+interrupts disabled.  This implies that the callback routines in question must
+not block or sleep, but it also means that the synchronous helper functions
+listed at the end of Section 4 may be used for that device within an interrupt
+handler or generally in an atomic context.
 
-The subsystem-level suspend callback is _entirely_ _responsible_ for handling
-the suspend of the device as appropriate, which may, but need not include
-executing the device driver's own ->runtime_suspend() callback (from the
+The subsystem-level suspend callback, if present, is _entirely_ _responsible_
+for handling the suspend of the device as appropriate, which may, but need not
+include executing the device driver's own ->runtime_suspend() callback (from the
 PM core's point of view it is not necessary to implement a ->runtime_suspend()
 callback in a device driver as long as the subsystem-level suspend callback
 knows what to do to handle the device).
 
-  * Once the subsystem-level suspend callback has completed successfully
-    for given device, the PM core regards the device as suspended, which need
-    not mean that the device has been put into a low power state.  It is
-    supposed to mean, however, that the device will not process data and will
-    not communicate with the CPU(s) and RAM until the subsystem-level resume
-    callback is executed for it.  The runtime PM status of a device after
-    successful execution of the subsystem-level suspend callback is 'suspended'.
+  * Once the subsystem-level suspend callback (or the driver suspend callback,
+    if invoked directly) has completed successfully for the given device, the PM
+    core regards the device as suspended, which need not mean that it has been
+    put into a low power state.  It is supposed to mean, however, that the
+    device will not process data and will not communicate with the CPU(s) and
+    RAM until the appropriate resume callback is executed for it.  The runtime
+    PM status of a device after successful execution of the suspend callback is
+    'suspended'.
 
-  * If the subsystem-level suspend callback returns -EBUSY or -EAGAIN,
-    the device's runtime PM status is 'active', which means that the device
-    _must_ be fully operational afterwards.
+  * If the suspend callback returns -EBUSY or -EAGAIN, the device's runtime PM
+    status remains 'active', which means that the device _must_ be fully
+    operational afterwards.
 
-  * If the subsystem-level suspend callback returns an error code different
-    from -EBUSY or -EAGAIN, the PM core regards this as a fatal error and will
-    refuse to run the helper functions described in Section 4 for the device,
-    until the status of it is directly set either to 'active', or to 'suspended'
-    (the PM core provides special helper functions for this purpose).
+  * If the suspend callback returns an error code different from -EBUSY and
+    -EAGAIN, the PM core regards this as a fatal error and will refuse to run
+    the helper functions described in Section 4 for the device until its status
+    is directly set to either 'active', or 'suspended' (the PM core provides
+    special helper functions for this purpose).
 
-In particular, if the driver requires remote wake-up capability (i.e. hardware
+In particular, if the driver requires remote wakeup capability (i.e. hardware
 mechanism allowing the device to request a change of its power state, such as
 PCI PME) for proper functioning and device_run_wake() returns 'false' for the
 device, then ->runtime_suspend() should return -EBUSY.  On the other hand, if
-device_run_wake() returns 'true' for the device and the device is put into a low
-power state during the execution of the subsystem-level suspend callback, it is
-expected that remote wake-up will be enabled for the device.  Generally, remote
-wake-up should be enabled for all input devices put into a low power state at
-run time.
+device_run_wake() returns 'true' for the device and the device is put into a
+low-power state during the execution of the suspend callback, it is expected
+that remote wakeup will be enabled for the device.  Generally, remote wakeup
+should be enabled for all input devices put into low-power states at run time.
 
-The subsystem-level resume callback is _entirely_ _responsible_ for handling the
-resume of the device as appropriate, which may, but need not include executing
-the device driver's own ->runtime_resume() callback (from the PM core's point of
-view it is not necessary to implement a ->runtime_resume() callback in a device
-driver as long as the subsystem-level resume callback knows what to do to handle
-the device).
+The subsystem-level resume callback, if present, is _entirely_ _responsible_ for
+handling the resume of the device as appropriate, which may, but need not
+include executing the device driver's own ->runtime_resume() callback (from the
+PM core's point of view it is not necessary to implement a ->runtime_resume()
+callback in a device driver as long as the subsystem-level resume callback knows
+what to do to handle the device).
 
-  * Once the subsystem-level resume callback has completed successfully, the PM
-    core regards the device as fully operational, which means that the device
-    _must_ be able to complete I/O operations as needed.  The runtime PM status
-    of the device is then 'active'.
+  * Once the subsystem-level resume callback (or the driver resume callback, if
+    invoked directly) has completed successfully, the PM core regards the device
+    as fully operational, which means that the device _must_ be able to complete
+    I/O operations as needed.  The runtime PM status of the device is then
+    'active'.
 
-  * If the subsystem-level resume callback returns an error code, the PM core
-    regards this as a fatal error and will refuse to run the helper functions
-    described in Section 4 for the device, until its status is directly set
-    either to 'active' or to 'suspended' (the PM core provides special helper
-    functions for this purpose).
+  * If the resume callback returns an error code, the PM core regards this as a
+    fatal error and will refuse to run the helper functions described in Section
+    4 for the device, until its status is directly set to either 'active', or
+    'suspended' (by means of special helper functions provided by the PM core
+    for this purpose).
 
-The subsystem-level idle callback is executed by the PM core whenever the device
-appears to be idle, which is indicated to the PM core by two counters, the
-device's usage counter and the counter of 'active' children of the device.
+The idle callback (a subsystem-level one, if present, or the driver one) is
+executed by the PM core whenever the device appears to be idle, which is
+indicated to the PM core by two counters, the device's usage counter and the
+counter of 'active' children of the device.
 
   * If any of these counters is decreased using a helper function provided by
    the PM core and it turns out to be equal to zero, the other counter is
    checked.  If that counter also is equal to zero, the PM core executes the
-    subsystem-level idle callback with the device as an argument.
+    idle callback with the device as its argument.
 
-The action performed by a subsystem-level idle callback is totally dependent on
-the subsystem in question, but the expected and recommended action is to check
+The action performed by the idle callback is totally dependent on the subsystem
+(or driver) in question, but the expected and recommended action is to check
 if the device can be suspended (i.e. if all of the conditions necessary for
 suspending the device are satisfied) and to queue up a suspend request for the
 device in that case.  The value returned by this callback is ignored by the PM
 core.
 
 The helper functions provided by the PM core, described in Section 4, guarantee
-that the following constraints are met with respect to the bus type's runtime
-PM callbacks:
+that the following constraints are met with respect to runtime PM callbacks for
+one device:
 
 (1) The callbacks are mutually exclusive (e.g. it is forbidden to execute
     ->runtime_suspend() in parallel with ->runtime_resume() or with another
...
@@ -79,7 +79,6 @@ register struct thread_info *__current_thread_info __asm__("$8");
 #define TIF_UAC_SIGBUS		12	/* ! userspace part of 'osf_sysinfo' */
 #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
 #define TIF_RESTORE_SIGMASK	14	/* restore signal mask in do_signal */
-#define TIF_FREEZE		16	/* is freezing for suspend */
 
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
 #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
@@ -87,7 +86,6 @@ register struct thread_info *__current_thread_info __asm__("$8");
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
 #define _TIF_RESTORE_SIGMASK	(1<<TIF_RESTORE_SIGMASK)
 #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
-#define _TIF_FREEZE		(1<<TIF_FREEZE)
 
 /* Work to do on interrupt/exception return. */
 #define _TIF_WORK_MASK		(_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
...
@@ -142,7 +142,6 @@ extern void vfp_flush_hwstate(struct thread_info *);
 #define TIF_POLLING_NRFLAG	16
 #define TIF_USING_IWMMXT	17
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
-#define TIF_FREEZE		19
 #define TIF_RESTORE_SIGMASK	20
 #define TIF_SECCOMP		21
@@ -152,7 +151,6 @@ extern void vfp_flush_hwstate(struct thread_info *);
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
 #define _TIF_USING_IWMMXT	(1 << TIF_USING_IWMMXT)
-#define _TIF_FREEZE		(1 << TIF_FREEZE)
 #define _TIF_RESTORE_SIGMASK	(1 << TIF_RESTORE_SIGMASK)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
...
@@ -8,6 +8,7 @@ config PLAT_S3C64XX
 	bool
 	depends on ARCH_S3C64XX
 	select SAMSUNG_WAKEMASK
+	select PM_GENERIC_DOMAINS
 	default y
 	help
 	  Base platform code for any Samsung S3C64XX device
...
@@ -706,7 +706,7 @@ static void __init crag6410_machine_init(void)
 	regulator_has_full_constraints();
 
-	s3c_pm_init();
+	s3c64xx_pm_init();
 }
 
 MACHINE_START(WLF_CRAGG_6410, "Wolfson Cragganmore 6410")
...
@@ -17,10 +17,12 @@
 #include <linux/serial_core.h>
 #include <linux/io.h>
 #include <linux/gpio.h>
+#include <linux/pm_domain.h>
 
 #include <mach/map.h>
 #include <mach/irqs.h>
 
+#include <plat/devs.h>
 #include <plat/pm.h>
 #include <plat/wakeup-mask.h>
@@ -31,6 +33,148 @@
 #include <mach/regs-gpio-memport.h>
 #include <mach/regs-modem.h>
 
+struct s3c64xx_pm_domain {
+	char *const name;
+	u32 ena;
+	u32 pwr_stat;
+	struct generic_pm_domain pd;
+};
+
+static int s3c64xx_pd_off(struct generic_pm_domain *domain)
+{
+	struct s3c64xx_pm_domain *pd;
+	u32 val;
+
+	pd = container_of(domain, struct s3c64xx_pm_domain, pd);
+
+	val = __raw_readl(S3C64XX_NORMAL_CFG);
+	val &= ~(pd->ena);
+	__raw_writel(val, S3C64XX_NORMAL_CFG);
+
+	return 0;
+}
+
+static int s3c64xx_pd_on(struct generic_pm_domain *domain)
+{
+	struct s3c64xx_pm_domain *pd;
+	u32 val;
+	long retry = 1000000L;
+
+	pd = container_of(domain, struct s3c64xx_pm_domain, pd);
+
+	val = __raw_readl(S3C64XX_NORMAL_CFG);
+	val |= pd->ena;
+	__raw_writel(val, S3C64XX_NORMAL_CFG);
+
+	/* Not all domains provide power status readback */
+	if (pd->pwr_stat) {
+		do {
+			cpu_relax();
+			if (__raw_readl(S3C64XX_BLK_PWR_STAT) & pd->pwr_stat)
+				break;
+		} while (retry--);
+
+		if (!retry) {
+			pr_err("Failed to start domain %s\n", pd->name);
+			return -EBUSY;
+		}
+	}
+
+	return 0;
+}
+
+static struct s3c64xx_pm_domain s3c64xx_pm_irom = {
+	.name = "IROM",
+	.ena = S3C64XX_NORMALCFG_IROM_ON,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain s3c64xx_pm_etm = {
+	.name = "ETM",
+	.ena = S3C64XX_NORMALCFG_DOMAIN_ETM_ON,
+	.pwr_stat = S3C64XX_BLKPWRSTAT_ETM,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain s3c64xx_pm_s = {
+	.name = "S",
+	.ena = S3C64XX_NORMALCFG_DOMAIN_S_ON,
+	.pwr_stat = S3C64XX_BLKPWRSTAT_S,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain s3c64xx_pm_f = {
+	.name = "F",
+	.ena = S3C64XX_NORMALCFG_DOMAIN_F_ON,
+	.pwr_stat = S3C64XX_BLKPWRSTAT_F,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain s3c64xx_pm_p = {
+	.name = "P",
+	.ena = S3C64XX_NORMALCFG_DOMAIN_P_ON,
+	.pwr_stat = S3C64XX_BLKPWRSTAT_P,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain s3c64xx_pm_i = {
+	.name = "I",
+	.ena = S3C64XX_NORMALCFG_DOMAIN_I_ON,
+	.pwr_stat = S3C64XX_BLKPWRSTAT_I,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain s3c64xx_pm_g = {
+	.name = "G",
+	.ena = S3C64XX_NORMALCFG_DOMAIN_G_ON,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain s3c64xx_pm_v = {
+	.name = "V",
+	.ena = S3C64XX_NORMALCFG_DOMAIN_V_ON,
+	.pwr_stat = S3C64XX_BLKPWRSTAT_V,
+	.pd = {
+		.power_off = s3c64xx_pd_off,
+		.power_on = s3c64xx_pd_on,
+	},
+};
+
+static struct s3c64xx_pm_domain *s3c64xx_always_on_pm_domains[] = {
+	&s3c64xx_pm_irom,
+};
+
+static struct s3c64xx_pm_domain *s3c64xx_pm_domains[] = {
+	&s3c64xx_pm_etm,
+	&s3c64xx_pm_g,
+	&s3c64xx_pm_v,
+	&s3c64xx_pm_i,
+	&s3c64xx_pm_p,
+	&s3c64xx_pm_s,
+	&s3c64xx_pm_f,
+};
+
 #ifdef CONFIG_S3C_PM_DEBUG_LED_SMDK
 void s3c_pm_debug_smdkled(u32 set, u32 clear)
 {
@@ -89,6 +233,8 @@ static struct sleep_save misc_save[] = {
 	SAVE_ITEM(S3C64XX_SDMA_SEL),
 	SAVE_ITEM(S3C64XX_MODEM_MIFPCON),
+
+	SAVE_ITEM(S3C64XX_NORMAL_CFG),
 };
 
 void s3c_pm_configure_extint(void)
@@ -179,7 +325,26 @@ static void s3c64xx_pm_prepare(void)
 	__raw_writel(__raw_readl(S3C64XX_WAKEUP_STAT), S3C64XX_WAKEUP_STAT);
 }
 
-static int s3c64xx_pm_init(void)
+int __init s3c64xx_pm_init(void)
+{
+	int i;
+
+	s3c_pm_init();
+
+	for (i = 0; i < ARRAY_SIZE(s3c64xx_always_on_pm_domains); i++)
+		pm_genpd_init(&s3c64xx_always_on_pm_domains[i]->pd,
+			      &pm_domain_always_on_gov, false);
+
+	for (i = 0; i < ARRAY_SIZE(s3c64xx_pm_domains); i++)
+		pm_genpd_init(&s3c64xx_pm_domains[i]->pd, NULL, false);
+
+	if (dev_get_platdata(&s3c_device_fb.dev))
+		pm_genpd_add_device(&s3c64xx_pm_f.pd, &s3c_device_fb.dev);
+
+	return 0;
+}
+
+static __init int s3c64xx_pm_initcall(void)
 {
 	pm_cpu_prep = s3c64xx_pm_prepare;
 	pm_cpu_sleep = s3c64xx_cpu_suspend;
...@@ -198,5 +363,12 @@ static int s3c64xx_pm_init(void) ...@@ -198,5 +363,12 @@ static int s3c64xx_pm_init(void)
return 0; return 0;
} }
arch_initcall(s3c64xx_pm_initcall);
static __init int s3c64xx_pm_late_initcall(void)
{
pm_genpd_poweroff_unused();
arch_initcall(s3c64xx_pm_init); return 0;
}
late_initcall(s3c64xx_pm_late_initcall);
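The hunk above registers two classes of domains: "always on" domains get a governor that vetoes power-off, all others get the default (no veto). As a rough illustration of that governor-veto idea, here is a minimal user-space sketch; the types and names (`pm_domain`, `pd_power_off`) are hypothetical stand-ins, not the kernel's genpd API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for a generic PM domain (illustration only). */
struct pm_domain {
	const char *name;
	int powered;					/* 1 = on, 0 = off */
	int (*gov_power_down_ok)(struct pm_domain *pd);	/* NULL = no veto */
};

/* Governor in the spirit of pm_domain_always_on_gov: never allow power-off. */
static int always_on_power_down_ok(struct pm_domain *pd)
{
	(void)pd;
	return 0;	/* 0 = power-down not ok */
}

/* Try to power a domain off; the governor, if present, may veto. */
static int pd_power_off(struct pm_domain *pd)
{
	if (pd->gov_power_down_ok && !pd->gov_power_down_ok(pd))
		return -1;	/* vetoed: domain stays on */
	pd->powered = 0;
	return 0;
}
```

The pattern matches the split between `s3c64xx_always_on_pm_domains` (initialized with the always-on governor) and `s3c64xx_pm_domains` (initialized with `NULL`).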
@@ -34,8 +34,8 @@ extern void sh7372_add_standard_devices(void);
 extern void sh7372_clock_init(void);
 extern void sh7372_pinmux_init(void);
 extern void sh7372_pm_init(void);
-extern void sh7372_resume_core_standby_a3sm(void);
-extern int sh7372_do_idle_a3sm(unsigned long unused);
+extern void sh7372_resume_core_standby_sysc(void);
+extern int sh7372_do_idle_sysc(unsigned long sleep_mode);
 extern struct clk sh7372_extal1_clk;
 extern struct clk sh7372_extal2_clk;
...
@@ -480,11 +480,10 @@ struct platform_device;
 struct sh7372_pm_domain {
 	struct generic_pm_domain genpd;
 	struct dev_power_governor *gov;
-	void (*suspend)(void);
+	int (*suspend)(void);
 	void (*resume)(void);
 	unsigned int bit_shift;
 	bool no_debug;
-	bool stay_on;
 };

 static inline struct sh7372_pm_domain *to_sh7372_pd(struct generic_pm_domain *d)

@@ -499,6 +498,7 @@ extern struct sh7372_pm_domain sh7372_d4;
 extern struct sh7372_pm_domain sh7372_a4r;
 extern struct sh7372_pm_domain sh7372_a3rv;
 extern struct sh7372_pm_domain sh7372_a3ri;
+extern struct sh7372_pm_domain sh7372_a4s;
 extern struct sh7372_pm_domain sh7372_a3sp;
 extern struct sh7372_pm_domain sh7372_a3sg;

@@ -515,5 +515,7 @@ extern void sh7372_pm_add_subdomain(struct sh7372_pm_domain *sh7372_pd,
 extern void sh7372_intcs_suspend(void);
 extern void sh7372_intcs_resume(void);
+extern void sh7372_intca_suspend(void);
+extern void sh7372_intca_resume(void);

 #endif /* __ASM_SH7372_H__ */
@@ -535,6 +535,7 @@ static struct resource intcs_resources[] __initdata = {
 static struct intc_desc intcs_desc __initdata = {
 	.name = "sh7372-intcs",
 	.force_enable = ENABLED_INTCS,
+	.skip_syscore_suspend = true,
 	.resource = intcs_resources,
 	.num_resources = ARRAY_SIZE(intcs_resources),
 	.hw = INTC_HW_DESC(intcs_vectors, intcs_groups, intcs_mask_registers,

@@ -611,3 +612,52 @@ void sh7372_intcs_resume(void)
 	for (k = 0x80; k <= 0x9c; k += 4)
 		__raw_writeb(ffd5[k], intcs_ffd5 + k);
 }
+
+static unsigned short e694[0x200];
+static unsigned short e695[0x200];
+
+void sh7372_intca_suspend(void)
+{
+	int k;
+
+	for (k = 0x00; k <= 0x38; k += 4)
+		e694[k] = __raw_readw(0xe6940000 + k);
+
+	for (k = 0x80; k <= 0xb4; k += 4)
+		e694[k] = __raw_readb(0xe6940000 + k);
+
+	for (k = 0x180; k <= 0x1b4; k += 4)
+		e694[k] = __raw_readb(0xe6940000 + k);
+
+	for (k = 0x00; k <= 0x50; k += 4)
+		e695[k] = __raw_readw(0xe6950000 + k);
+
+	for (k = 0x80; k <= 0xa8; k += 4)
+		e695[k] = __raw_readb(0xe6950000 + k);
+
+	for (k = 0x180; k <= 0x1a8; k += 4)
+		e695[k] = __raw_readb(0xe6950000 + k);
+}
+
+void sh7372_intca_resume(void)
+{
+	int k;
+
+	for (k = 0x00; k <= 0x38; k += 4)
+		__raw_writew(e694[k], 0xe6940000 + k);
+
+	for (k = 0x80; k <= 0xb4; k += 4)
+		__raw_writeb(e694[k], 0xe6940000 + k);
+
+	for (k = 0x180; k <= 0x1b4; k += 4)
+		__raw_writeb(e694[k], 0xe6940000 + k);
+
+	for (k = 0x00; k <= 0x50; k += 4)
+		__raw_writew(e695[k], 0xe6950000 + k);
+
+	for (k = 0x80; k <= 0xa8; k += 4)
+		__raw_writeb(e695[k], 0xe6950000 + k);
+
+	for (k = 0x180; k <= 0x1a8; k += 4)
+		__raw_writeb(e695[k], 0xe6950000 + k);
+}
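The INTCA suspend/resume routines above walk fixed register banks with a byte-offset stride and stash each value in an array indexed by that same byte offset. A small self-contained sketch of that save/restore pattern (the "register file" here is a plain array standing in for memory-mapped I/O; names are illustrative, not kernel API):

```c
#include <assert.h>

/* Toy register bank: 0x40 bytes of 16-bit registers, a stand-in for the
 * INTCA register window saved/restored above (illustration only). */
static unsigned short regs[0x40 / 2];
static unsigned short saved[0x40];	/* indexed by byte offset, like e694[] */

static unsigned short reg_read(int off)		{ return regs[off / 2]; }
static void reg_write(unsigned short v, int off)	{ regs[off / 2] = v; }

/* Walk the bank with a byte-offset stride of 4, as the suspend loops do. */
static void bank_save(void)
{
	for (int k = 0x00; k <= 0x38; k += 4)
		saved[k] = reg_read(k);
}

static void bank_restore(void)
{
	for (int k = 0x00; k <= 0x38; k += 4)
		reg_write(saved[k], k);
}
```

Indexing the save array by byte offset rather than packing it densely wastes a little space but keeps the save and restore loops trivially symmetric, which is the design choice the kernel code above makes as well.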
@@ -82,11 +82,12 @@ static int pd_power_down(struct generic_pm_domain *genpd)
 	struct sh7372_pm_domain *sh7372_pd = to_sh7372_pd(genpd);
 	unsigned int mask = 1 << sh7372_pd->bit_shift;

-	if (sh7372_pd->suspend)
-		sh7372_pd->suspend();
+	if (sh7372_pd->suspend) {
+		int ret = sh7372_pd->suspend();

-	if (sh7372_pd->stay_on)
-		return 0;
+		if (ret)
+			return ret;
+	}

 	if (__raw_readl(PSTR) & mask) {
 		unsigned int retry_count;

@@ -101,8 +102,8 @@ static int pd_power_down(struct generic_pm_domain *genpd)
 	}

 	if (!sh7372_pd->no_debug)
-		pr_debug("sh7372 power domain down 0x%08x -> PSTR = 0x%08x\n",
-			 mask, __raw_readl(PSTR));
+		pr_debug("%s: Power off, 0x%08x -> PSTR = 0x%08x\n",
+			 genpd->name, mask, __raw_readl(PSTR));

 	return 0;
 }

@@ -113,9 +114,6 @@ static int __pd_power_up(struct sh7372_pm_domain *sh7372_pd, bool do_resume)
 	unsigned int retry_count;
 	int ret = 0;

-	if (sh7372_pd->stay_on)
-		goto out;
-
 	if (__raw_readl(PSTR) & mask)
 		goto out;

@@ -133,8 +131,8 @@ static int __pd_power_up(struct sh7372_pm_domain *sh7372_pd, bool do_resume)
 		ret = -EIO;

 	if (!sh7372_pd->no_debug)
-		pr_debug("sh7372 power domain up 0x%08x -> PSTR = 0x%08x\n",
-			 mask, __raw_readl(PSTR));
+		pr_debug("%s: Power on, 0x%08x -> PSTR = 0x%08x\n",
+			 sh7372_pd->genpd.name, mask, __raw_readl(PSTR));

 out:
 	if (ret == 0 && sh7372_pd->resume && do_resume)

@@ -148,35 +146,60 @@ static int pd_power_up(struct generic_pm_domain *genpd)
 	return __pd_power_up(to_sh7372_pd(genpd), true);
 }
-static void sh7372_a4r_suspend(void)
+static int sh7372_a4r_suspend(void)
 {
 	sh7372_intcs_suspend();
 	__raw_writel(0x300fffff, WUPRMSK); /* avoid wakeup */
+	return 0;
 }

 static bool pd_active_wakeup(struct device *dev)
 {
-	return true;
+	bool (*active_wakeup)(struct device *dev);
+
+	active_wakeup = dev_gpd_data(dev)->ops.active_wakeup;
+	return active_wakeup ? active_wakeup(dev) : true;
 }

-static bool sh7372_power_down_forbidden(struct dev_pm_domain *domain)
+static int sh7372_stop_dev(struct device *dev)
 {
-	return false;
+	int (*stop)(struct device *dev);
+
+	stop = dev_gpd_data(dev)->ops.stop;
+	if (stop) {
+		int ret = stop(dev);
+
+		if (ret)
+			return ret;
+	}
+	return pm_clk_suspend(dev);
 }

-struct dev_power_governor sh7372_always_on_gov = {
-	.power_down_ok = sh7372_power_down_forbidden,
-};
+static int sh7372_start_dev(struct device *dev)
+{
+	int (*start)(struct device *dev);
+	int ret;
+
+	ret = pm_clk_resume(dev);
+	if (ret)
+		return ret;
+
+	start = dev_gpd_data(dev)->ops.start;
+	if (start)
+		ret = start(dev);
+
+	return ret;
+}

 void sh7372_init_pm_domain(struct sh7372_pm_domain *sh7372_pd)
 {
 	struct generic_pm_domain *genpd = &sh7372_pd->genpd;
+	struct dev_power_governor *gov = sh7372_pd->gov;

-	pm_genpd_init(genpd, sh7372_pd->gov, false);
-	genpd->stop_device = pm_clk_suspend;
-	genpd->start_device = pm_clk_resume;
+	pm_genpd_init(genpd, gov ? : &simple_qos_governor, false);
+	genpd->dev_ops.stop = sh7372_stop_dev;
+	genpd->dev_ops.start = sh7372_start_dev;
+	genpd->dev_ops.active_wakeup = pd_active_wakeup;
 	genpd->dev_irq_safe = true;
-	genpd->active_wakeup = pd_active_wakeup;
 	genpd->power_off = pd_power_down;
 	genpd->power_on = pd_power_up;
 	__pd_power_up(sh7372_pd, false);
@@ -199,48 +222,73 @@ void sh7372_pm_add_subdomain(struct sh7372_pm_domain *sh7372_pd,
 }

 struct sh7372_pm_domain sh7372_a4lc = {
+	.genpd.name = "A4LC",
 	.bit_shift = 1,
 };

 struct sh7372_pm_domain sh7372_a4mp = {
+	.genpd.name = "A4MP",
 	.bit_shift = 2,
 };

 struct sh7372_pm_domain sh7372_d4 = {
+	.genpd.name = "D4",
 	.bit_shift = 3,
 };

 struct sh7372_pm_domain sh7372_a4r = {
+	.genpd.name = "A4R",
 	.bit_shift = 5,
-	.gov = &sh7372_always_on_gov,
 	.suspend = sh7372_a4r_suspend,
 	.resume = sh7372_intcs_resume,
-	.stay_on = true,
 };

 struct sh7372_pm_domain sh7372_a3rv = {
+	.genpd.name = "A3RV",
 	.bit_shift = 6,
 };

 struct sh7372_pm_domain sh7372_a3ri = {
+	.genpd.name = "A3RI",
 	.bit_shift = 8,
 };
-struct sh7372_pm_domain sh7372_a3sp = {
-	.bit_shift = 11,
-	.gov = &sh7372_always_on_gov,
+static int sh7372_a4s_suspend(void)
+{
+	/*
+	 * The A4S domain contains the CPU core and therefore it should
+	 * only be turned off if the CPU is not in use.
+	 */
+	return -EBUSY;
+}
+
+struct sh7372_pm_domain sh7372_a4s = {
+	.genpd.name = "A4S",
+	.bit_shift = 10,
+	.gov = &pm_domain_always_on_gov,
 	.no_debug = true,
+	.suspend = sh7372_a4s_suspend,
 };
-static void sh7372_a3sp_init(void)
+static int sh7372_a3sp_suspend(void)
 {
-	/* serial consoles make use of SCIF hardware located in A3SP,
+	/*
+	 * Serial consoles make use of SCIF hardware located in A3SP,
 	 * keep such power domain on if "no_console_suspend" is set.
 	 */
-	sh7372_a3sp.stay_on = !console_suspend_enabled;
+	return console_suspend_enabled ? 0 : -EBUSY;
 }

+struct sh7372_pm_domain sh7372_a3sp = {
+	.genpd.name = "A3SP",
+	.bit_shift = 11,
+	.gov = &pm_domain_always_on_gov,
+	.no_debug = true,
+	.suspend = sh7372_a3sp_suspend,
+};
+
 struct sh7372_pm_domain sh7372_a3sg = {
+	.genpd.name = "A3SG",
 	.bit_shift = 13,
 };
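The suspend hooks above let a domain veto its own power-down by returning `-EBUSY` (the A3SP hook while the serial console must stay alive, the A4S hook unconditionally outside the special system-suspend path). A compact, hedged sketch of that veto pattern in plain C; `domain_power_down` and the flag are hypothetical stand-ins, not the kernel's `pd_power_down()`:

```c
#include <assert.h>
#include <errno.h>

static int console_suspend_enabled;	/* stand-in for the kernel flag */

/* Mirrors the shape of sh7372_a3sp_suspend(): refuse power-down while
 * the serial console must stay alive (illustrative only). */
static int a3sp_suspend_hook(void)
{
	return console_suspend_enabled ? 0 : -EBUSY;
}

/* A pd_power_down()-style caller honours a non-zero return by leaving
 * the domain powered. */
static int domain_power_down(int *powered, int (*suspend_hook)(void))
{
	if (suspend_hook) {
		int ret = suspend_hook();

		if (ret)
			return ret;	/* vetoed: stays on */
	}
	*powered = 0;
	return 0;
}
```

Moving the veto from the old static `stay_on` flag into a per-domain callback is what lets the decision depend on runtime state such as `console_suspend_enabled`.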
@@ -257,11 +305,16 @@ static int sh7372_do_idle_core_standby(unsigned long unused)
 	return 0;
 }

-static void sh7372_enter_core_standby(void)
+static void sh7372_set_reset_vector(unsigned long address)
 {
 	/* set reset vector, translate 4k */
-	__raw_writel(__pa(sh7372_resume_core_standby_a3sm), SBAR);
+	__raw_writel(address, SBAR);
 	__raw_writel(0, APARMBAREA);
+}
+
+static void sh7372_enter_core_standby(void)
+{
+	sh7372_set_reset_vector(__pa(sh7372_resume_core_standby_sysc));

 	/* enter sleep mode with SYSTBCR to 0x10 */
 	__raw_writel(0x10, SYSTBCR);
@@ -274,27 +327,22 @@ static void sh7372_enter_core_standby(void)
 #endif

 #ifdef CONFIG_SUSPEND
-static void sh7372_enter_a3sm_common(int pllc0_on)
+static void sh7372_enter_sysc(int pllc0_on, unsigned long sleep_mode)
 {
-	/* set reset vector, translate 4k */
-	__raw_writel(__pa(sh7372_resume_core_standby_a3sm), SBAR);
-	__raw_writel(0, APARMBAREA);
-
 	if (pllc0_on)
 		__raw_writel(0, PLLC01STPCR);
 	else
 		__raw_writel(1 << 28, PLLC01STPCR);

-	__raw_writel(0, PDNSEL); /* power-down A3SM only, not A4S */
 	__raw_readl(WUPSFAC); /* read wakeup int. factor before sleep */
-	cpu_suspend(0, sh7372_do_idle_a3sm);
+	cpu_suspend(sleep_mode, sh7372_do_idle_sysc);
 	__raw_readl(WUPSFAC); /* read wakeup int. factor after wakeup */

 	/* disable reset vector translation */
 	__raw_writel(0, SBAR);
 }

-static int sh7372_a3sm_valid(unsigned long *mskp, unsigned long *msk2p)
+static int sh7372_sysc_valid(unsigned long *mskp, unsigned long *msk2p)
 {
 	unsigned long mstpsr0, mstpsr1, mstpsr2, mstpsr3, mstpsr4;
 	unsigned long msk, msk2;

@@ -382,7 +430,7 @@ static void sh7372_icr_to_irqcr(unsigned long icr, u16 *irqcr1p, u16 *irqcr2p)
 	*irqcr2p = irqcr2;
 }

-static void sh7372_setup_a3sm(unsigned long msk, unsigned long msk2)
+static void sh7372_setup_sysc(unsigned long msk, unsigned long msk2)
 {
 	u16 irqcrx_low, irqcrx_high, irqcry_low, irqcry_high;
 	unsigned long tmp;

@@ -415,6 +463,22 @@ static void sh7372_setup_a3sm(unsigned long msk, unsigned long msk2)
 	__raw_writel((irqcrx_high << 16) | irqcrx_low, IRQCR3);
 	__raw_writel((irqcry_high << 16) | irqcry_low, IRQCR4);
 }
+
+static void sh7372_enter_a3sm_common(int pllc0_on)
+{
+	sh7372_set_reset_vector(__pa(sh7372_resume_core_standby_sysc));
+	sh7372_enter_sysc(pllc0_on, 1 << 12);
+}
+
+static void sh7372_enter_a4s_common(int pllc0_on)
+{
+	sh7372_intca_suspend();
+	memcpy((void *)SMFRAM, sh7372_resume_core_standby_sysc, 0x100);
+	sh7372_set_reset_vector(SMFRAM);
+	sh7372_enter_sysc(pllc0_on, 1 << 10);
+	sh7372_intca_resume();
+}
 #endif

 #ifdef CONFIG_CPU_IDLE

@@ -448,14 +512,20 @@ static int sh7372_enter_suspend(suspend_state_t suspend_state)
 	unsigned long msk, msk2;

 	/* check active clocks to determine potential wakeup sources */
-	if (sh7372_a3sm_valid(&msk, &msk2)) {
+	if (sh7372_sysc_valid(&msk, &msk2)) {
 		/* convert INTC mask and sense to SYSC mask and sense */
-		sh7372_setup_a3sm(msk, msk2);
+		sh7372_setup_sysc(msk, msk2);

-		/* enter A3SM sleep with PLLC0 off */
-		pr_debug("entering A3SM\n");
-		sh7372_enter_a3sm_common(0);
+		if (!console_suspend_enabled &&
+		    sh7372_a4s.genpd.status == GPD_STATE_POWER_OFF) {
+			/* enter A4S sleep with PLLC0 off */
+			pr_debug("entering A4S\n");
+			sh7372_enter_a4s_common(0);
+		} else {
+			/* enter A3SM sleep with PLLC0 off */
+			pr_debug("entering A3SM\n");
+			sh7372_enter_a3sm_common(0);
+		}
 	} else {
 		/* default to Core Standby that supports all wakeup sources */
 		pr_debug("entering Core Standby\n");

@@ -464,9 +534,37 @@ static int sh7372_enter_suspend(suspend_state_t suspend_state)
 	return 0;
 }
+/**
+ * sh7372_pm_notifier_fn - SH7372 PM notifier routine.
+ * @notifier: Unused.
+ * @pm_event: Event being handled.
+ * @unused: Unused.
+ */
+static int sh7372_pm_notifier_fn(struct notifier_block *notifier,
+				 unsigned long pm_event, void *unused)
+{
+	switch (pm_event) {
+	case PM_SUSPEND_PREPARE:
+		/*
+		 * This is necessary, because the A4R domain has to be "on"
+		 * when suspend_device_irqs() and resume_device_irqs() are
+		 * executed during system suspend and resume, respectively, so
+		 * that those functions don't crash while accessing the INTCS.
+		 */
+		pm_genpd_poweron(&sh7372_a4r.genpd);
+		break;
+	case PM_POST_SUSPEND:
+		pm_genpd_poweroff_unused();
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
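The notifier above brackets the suspend transition: force a required domain on before device IRQs are suspended, then shut down whatever is unused once resume completes. A minimal sketch of that two-event shape, with illustrative event values and flags standing in for the real `PM_*` constants and genpd calls:

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's PM notifier events. */
enum { SUSPEND_PREPARE = 1, POST_SUSPEND = 2 };

static int a4r_on;		/* models the A4R domain's power state */
static int unused_pd_on;	/* models a domain with no active devices */

/* Mirrors the shape of sh7372_pm_notifier_fn(): power a required domain
 * on before suspend, power unused domains off afterwards. */
static int pm_notifier_fn(unsigned long event)
{
	switch (event) {
	case SUSPEND_PREPARE:
		a4r_on = 1;		/* like pm_genpd_poweron() */
		break;
	case POST_SUSPEND:
		unused_pd_on = 0;	/* like pm_genpd_poweroff_unused() */
		break;
	}
	return 0;
}
```

Hooking both `PM_SUSPEND_PREPARE` and `PM_POST_SUSPEND` keeps the two sides symmetric: nothing forced on for suspend stays on after resume unless a device still needs it.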
 static void sh7372_suspend_init(void)
 {
 	shmobile_suspend_ops.enter = sh7372_enter_suspend;
+	pm_notifier(sh7372_pm_notifier_fn, 0);
 }
 #else
 static void sh7372_suspend_init(void) {}

@@ -482,8 +580,6 @@ void __init sh7372_pm_init(void)
 	/* do not convert A3SM, A3SP, A3SG, A4R power down into A4S */
 	__raw_writel(0, PDNSEL);

-	sh7372_a3sp_init();
-
 	sh7372_suspend_init();
 	sh7372_cpuidle_init();
 }

@@ -994,12 +994,16 @@ void __init sh7372_add_standard_devices(void)
 	sh7372_init_pm_domain(&sh7372_a4r);
 	sh7372_init_pm_domain(&sh7372_a3rv);
 	sh7372_init_pm_domain(&sh7372_a3ri);
-	sh7372_init_pm_domain(&sh7372_a3sg);
+	sh7372_init_pm_domain(&sh7372_a4s);
 	sh7372_init_pm_domain(&sh7372_a3sp);
+	sh7372_init_pm_domain(&sh7372_a3sg);

 	sh7372_pm_add_subdomain(&sh7372_a4lc, &sh7372_a3rv);
 	sh7372_pm_add_subdomain(&sh7372_a4r, &sh7372_a4lc);
+	sh7372_pm_add_subdomain(&sh7372_a4s, &sh7372_a3sg);
+	sh7372_pm_add_subdomain(&sh7372_a4s, &sh7372_a3sp);

 	platform_add_devices(sh7372_early_devices,
 			    ARRAY_SIZE(sh7372_early_devices));
...
@@ -37,13 +37,18 @@
 #if defined(CONFIG_SUSPEND) || defined(CONFIG_CPU_IDLE)
 	.align	12
 	.text
-	.global	sh7372_resume_core_standby_a3sm
-sh7372_resume_core_standby_a3sm:
+	.global	sh7372_resume_core_standby_sysc
+sh7372_resume_core_standby_sysc:
 	ldr	pc, 1f
 1:	.long	cpu_resume - PAGE_OFFSET + PLAT_PHYS_OFFSET

-	.global	sh7372_do_idle_a3sm
-sh7372_do_idle_a3sm:
+#define SPDCR 0xe6180008
+
+	/* A3SM & A4S power down */
+	.global	sh7372_do_idle_sysc
+sh7372_do_idle_sysc:
+	mov	r8, r0 /* sleep mode passed in r0 */
+
 	/*
 	 * Clear the SCTLR.C bit to prevent further data cache
 	 * allocation. Clearing SCTLR.C would make all the data accesses

@@ -80,13 +85,9 @@ sh7372_do_idle_a3sm:
 	dsb
 	dmb

-#define SPDCR 0xe6180008
-#define A3SM (1 << 12)
-
-	/* A3SM power down */
+	/* SYSC power down */
 	ldr	r0, =SPDCR
-	ldr	r1, =A3SM
-	str	r1, [r0]
+	str	r8, [r0]
 1:
 	b	1b
...
@@ -22,6 +22,7 @@ struct device;
 #ifdef CONFIG_PM

 extern __init int s3c_pm_init(void);
+extern __init int s3c64xx_pm_init(void);

 #else

@@ -29,6 +30,11 @@ static inline int s3c_pm_init(void)
 {
 	return 0;
 }
+
+static inline int s3c64xx_pm_init(void)
+{
+	return 0;
+}
 #endif

 /* configuration for the IRQ mask over sleep */
...
...@@ -85,7 +85,6 @@ static inline struct thread_info *current_thread_info(void) ...@@ -85,7 +85,6 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_RESTORE_SIGMASK 7 /* restore signal mask in do_signal */ #define TIF_RESTORE_SIGMASK 7 /* restore signal mask in do_signal */
#define TIF_CPU_GOING_TO_SLEEP 8 /* CPU is entering sleep 0 mode */ #define TIF_CPU_GOING_TO_SLEEP 8 /* CPU is entering sleep 0 mode */
#define TIF_NOTIFY_RESUME 9 /* callback before returning to user */ #define TIF_NOTIFY_RESUME 9 /* callback before returning to user */
#define TIF_FREEZE 29
#define TIF_DEBUG 30 /* debugging enabled */ #define TIF_DEBUG 30 /* debugging enabled */
#define TIF_USERSPACE 31 /* true if FS sets userspace */ #define TIF_USERSPACE 31 /* true if FS sets userspace */
...@@ -98,7 +97,6 @@ static inline struct thread_info *current_thread_info(void) ...@@ -98,7 +97,6 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
#define _TIF_CPU_GOING_TO_SLEEP (1 << TIF_CPU_GOING_TO_SLEEP) #define _TIF_CPU_GOING_TO_SLEEP (1 << TIF_CPU_GOING_TO_SLEEP)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FREEZE (1 << TIF_FREEZE)
/* Note: The masks below must never span more than 16 bits! */ /* Note: The masks below must never span more than 16 bits! */
......
...@@ -100,7 +100,6 @@ static inline struct thread_info *current_thread_info(void) ...@@ -100,7 +100,6 @@ static inline struct thread_info *current_thread_info(void)
TIF_NEED_RESCHED */ TIF_NEED_RESCHED */
#define TIF_MEMDIE 4 /* is terminating due to OOM killer */ #define TIF_MEMDIE 4 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */
#define TIF_FREEZE 6 /* is freezing for suspend */
#define TIF_IRQ_SYNC 7 /* sync pipeline stage */ #define TIF_IRQ_SYNC 7 /* sync pipeline stage */
#define TIF_NOTIFY_RESUME 8 /* callback before returning to user */ #define TIF_NOTIFY_RESUME 8 /* callback before returning to user */
#define TIF_SINGLESTEP 9 #define TIF_SINGLESTEP 9
...@@ -111,7 +110,6 @@ static inline struct thread_info *current_thread_info(void) ...@@ -111,7 +110,6 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED) #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_FREEZE (1<<TIF_FREEZE)
#define _TIF_IRQ_SYNC (1<<TIF_IRQ_SYNC) #define _TIF_IRQ_SYNC (1<<TIF_IRQ_SYNC)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP) #define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
......
...@@ -86,7 +86,6 @@ struct thread_info { ...@@ -86,7 +86,6 @@ struct thread_info {
#define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal() */ #define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal() */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */ #define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_MEMDIE 17 /* is terminating due to OOM killer */ #define TIF_MEMDIE 17 /* is terminating due to OOM killer */
#define TIF_FREEZE 18 /* is freezing for suspend */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME) #define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
...@@ -94,7 +93,6 @@ struct thread_info { ...@@ -94,7 +93,6 @@ struct thread_info {
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED) #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_FREEZE (1<<TIF_FREEZE)
#define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */
#define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */ #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
......
...@@ -111,7 +111,6 @@ register struct thread_info *__current_thread_info asm("gr15"); ...@@ -111,7 +111,6 @@ register struct thread_info *__current_thread_info asm("gr15");
#define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */ #define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_MEMDIE 17 /* is terminating due to OOM killer */ #define TIF_MEMDIE 17 /* is terminating due to OOM killer */
#define TIF_FREEZE 18 /* freezing for suspend */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
...@@ -120,7 +119,6 @@ register struct thread_info *__current_thread_info asm("gr15"); ...@@ -120,7 +119,6 @@ register struct thread_info *__current_thread_info asm("gr15");
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) #define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_FREEZE (1 << TIF_FREEZE)
#define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */
#define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */ #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
......
...@@ -90,7 +90,6 @@ static inline struct thread_info *current_thread_info(void) ...@@ -90,7 +90,6 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_MEMDIE 4 /* is terminating due to OOM killer */ #define TIF_MEMDIE 4 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */
#define TIF_NOTIFY_RESUME 6 /* callback before returning to user */ #define TIF_NOTIFY_RESUME 6 /* callback before returning to user */
#define TIF_FREEZE 16 /* is freezing for suspend */
/* as above, but as bit values */ /* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
...@@ -99,7 +98,6 @@ static inline struct thread_info *current_thread_info(void) ...@@ -99,7 +98,6 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FREEZE (1<<TIF_FREEZE)
#define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */
......
...@@ -113,7 +113,6 @@ struct thread_info { ...@@ -113,7 +113,6 @@ struct thread_info {
#define TIF_MEMDIE 17 /* is terminating due to OOM killer */ #define TIF_MEMDIE 17 /* is terminating due to OOM killer */
#define TIF_MCA_INIT 18 /* this task is processing MCA or INIT */ #define TIF_MCA_INIT 18 /* this task is processing MCA or INIT */
#define TIF_DB_DISABLED 19 /* debug trap disabled for fsyscall */ #define TIF_DB_DISABLED 19 /* debug trap disabled for fsyscall */
#define TIF_FREEZE 20 /* is freezing for suspend */
#define TIF_RESTORE_RSE 21 /* user RBS is newer than kernel RBS */ #define TIF_RESTORE_RSE 21 /* user RBS is newer than kernel RBS */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
...@@ -126,7 +125,6 @@ struct thread_info { ...@@ -126,7 +125,6 @@ struct thread_info {
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_MCA_INIT (1 << TIF_MCA_INIT) #define _TIF_MCA_INIT (1 << TIF_MCA_INIT)
#define _TIF_DB_DISABLED (1 << TIF_DB_DISABLED) #define _TIF_DB_DISABLED (1 << TIF_DB_DISABLED)
#define _TIF_FREEZE (1 << TIF_FREEZE)
#define _TIF_RESTORE_RSE (1 << TIF_RESTORE_RSE) #define _TIF_RESTORE_RSE (1 << TIF_RESTORE_RSE)
/* "work to do on user-return" bits */ /* "work to do on user-return" bits */
......
...@@ -138,7 +138,6 @@ static inline unsigned int get_thread_fault_code(void) ...@@ -138,7 +138,6 @@ static inline unsigned int get_thread_fault_code(void)
#define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */ #define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */
#define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */ #define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */ #define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_FREEZE 19 /* is freezing for suspend */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1<<TIF_SIGPENDING) #define _TIF_SIGPENDING (1<<TIF_SIGPENDING)
...@@ -149,7 +148,6 @@ static inline unsigned int get_thread_fault_code(void) ...@@ -149,7 +148,6 @@ static inline unsigned int get_thread_fault_code(void)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_USEDFPU (1<<TIF_USEDFPU) #define _TIF_USEDFPU (1<<TIF_USEDFPU)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_FREEZE (1<<TIF_FREEZE)
#define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */
#define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */ #define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
......
@@ -76,7 +76,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_DELAYED_TRACE	14	/* single step a syscall */
 #define TIF_SYSCALL_TRACE	15	/* syscall trace active */
 #define TIF_MEMDIE		16	/* is terminating due to OOM killer */
-#define TIF_FREEZE		17	/* thread is freezing for suspend */
 #define TIF_RESTORE_SIGMASK	18	/* restore signal mask in do_signal */
 #endif	/* _ASM_M68K_THREAD_INFO_H */
@@ -125,7 +125,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_MEMDIE		6	/* is terminating due to OOM killer */
 #define TIF_SYSCALL_AUDIT	9	/* syscall auditing active */
 #define TIF_SECCOMP		10	/* secure computing */
-#define TIF_FREEZE		14	/* Freezing for suspend */
 /* true if poll_idle() is polling TIF_NEED_RESCHED */
 #define TIF_POLLING_NRFLAG	16
@@ -137,7 +136,6 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_IRET		(1 << TIF_IRET)
 #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
-#define _TIF_FREEZE		(1 << TIF_FREEZE)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
......
@@ -117,7 +117,6 @@ register struct thread_info *__current_thread_info __asm__("$28");
 #define TIF_USEDFPU		16	/* FPU was used by this task this quantum (SMP) */
 #define TIF_POLLING_NRFLAG	17	/* true if poll_idle() is polling TIF_NEED_RESCHED */
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
-#define TIF_FREEZE		19
 #define TIF_FIXADE		20	/* Fix address errors in software */
 #define TIF_LOGADE		21	/* Log address errors to syslog */
 #define TIF_32BIT_REGS		22	/* also implies 16/32 fprs */
@@ -141,7 +140,6 @@ register struct thread_info *__current_thread_info __asm__("$28");
 #define _TIF_RESTORE_SIGMASK	(1<<TIF_RESTORE_SIGMASK)
 #define _TIF_USEDFPU		(1<<TIF_USEDFPU)
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
-#define _TIF_FREEZE		(1<<TIF_FREEZE)
 #define _TIF_FIXADE		(1<<TIF_FIXADE)
 #define _TIF_LOGADE		(1<<TIF_LOGADE)
 #define _TIF_32BIT_REGS		(1<<TIF_32BIT_REGS)
......
@@ -165,7 +165,6 @@ extern void free_thread_info(struct thread_info *);
 #define TIF_RESTORE_SIGMASK	5	/* restore signal mask in do_signal() */
 #define TIF_POLLING_NRFLAG	16	/* true if poll_idle() is polling TIF_NEED_RESCHED */
 #define TIF_MEMDIE		17	/* is terminating due to OOM killer */
-#define TIF_FREEZE		18	/* freezing for suspend */
 #define _TIF_SYSCALL_TRACE	+(1 << TIF_SYSCALL_TRACE)
 #define _TIF_NOTIFY_RESUME	+(1 << TIF_NOTIFY_RESUME)
@@ -174,7 +173,6 @@ extern void free_thread_info(struct thread_info *);
 #define _TIF_SINGLESTEP		+(1 << TIF_SINGLESTEP)
 #define _TIF_RESTORE_SIGMASK	+(1 << TIF_RESTORE_SIGMASK)
 #define _TIF_POLLING_NRFLAG	+(1 << TIF_POLLING_NRFLAG)
-#define _TIF_FREEZE		+(1 << TIF_FREEZE)
 #define _TIF_WORK_MASK		0x0000FFFE	/* work to do on interrupt/exception return */
 #define _TIF_ALLWORK_MASK	0x0000FFFF	/* work to do on any return to u-space */
......
@@ -58,7 +58,6 @@ struct thread_info {
 #define TIF_32BIT		4	/* 32 bit binary */
 #define TIF_MEMDIE		5	/* is terminating due to OOM killer */
 #define TIF_RESTORE_SIGMASK	6	/* restore saved signal mask */
-#define TIF_FREEZE		7	/* is freezing for suspend */
 #define TIF_NOTIFY_RESUME	8	/* callback before returning to user */
 #define TIF_SINGLESTEP		9	/* single stepping? */
 #define TIF_BLOCKSTEP		10	/* branch stepping? */
@@ -69,7 +68,6 @@ struct thread_info {
 #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 #define _TIF_RESTORE_SIGMASK	(1 << TIF_RESTORE_SIGMASK)
-#define _TIF_FREEZE		(1 << TIF_FREEZE)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_BLOCKSTEP		(1 << TIF_BLOCKSTEP)
......
@@ -109,7 +109,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_RESTOREALL		11	/* Restore all regs (implies NOERROR) */
 #define TIF_NOERROR		12	/* Force successful syscall return */
 #define TIF_NOTIFY_RESUME	13	/* callback before returning to user */
-#define TIF_FREEZE		14	/* Freezing for suspend */
 #define TIF_SYSCALL_TRACEPOINT	15	/* syscall tracepoint instrumentation */
 #define TIF_RUNLATCH		16	/* Is the runlatch enabled? */
@@ -127,7 +126,6 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_RESTOREALL		(1<<TIF_RESTOREALL)
 #define _TIF_NOERROR		(1<<TIF_NOERROR)
 #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
-#define _TIF_FREEZE		(1<<TIF_FREEZE)
 #define _TIF_SYSCALL_TRACEPOINT	(1<<TIF_SYSCALL_TRACEPOINT)
 #define _TIF_RUNLATCH		(1<<TIF_RUNLATCH)
 #define _TIF_SYSCALL_T_OR_A	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
......
@@ -1406,7 +1406,6 @@ static struct bus_type vio_bus_type = {
 	.match = vio_bus_match,
 	.probe = vio_bus_probe,
 	.remove = vio_bus_remove,
-	.pm = GENERIC_SUBSYS_PM_OPS,
 };

 /**
......
@@ -102,7 +102,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
 #define TIF_RESTORE_SIGMASK	19	/* restore signal mask in do_signal() */
 #define TIF_SINGLE_STEP		20	/* This task is single stepped */
-#define TIF_FREEZE		21	/* thread is freezing for suspend */
 #define _TIF_SYSCALL		(1<<TIF_SYSCALL)
 #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
@@ -119,7 +118,6 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
 #define _TIF_31BIT		(1<<TIF_31BIT)
 #define _TIF_SINGLE_STEP	(1<<TIF_SINGLE_STEP)
-#define _TIF_FREEZE		(1<<TIF_FREEZE)
 #ifdef CONFIG_64BIT
 #define is_32bit_task()		(test_thread_flag(TIF_31BIT))
......
@@ -122,7 +122,6 @@ extern void init_thread_xstate(void);
 #define TIF_SYSCALL_TRACEPOINT	8	/* for ftrace syscall instrumentation */
 #define TIF_POLLING_NRFLAG	17	/* true if poll_idle() is polling TIF_NEED_RESCHED */
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
-#define TIF_FREEZE		19	/* Freezing for suspend */
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
@@ -133,7 +132,6 @@ extern void init_thread_xstate(void);
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
 #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
-#define _TIF_FREEZE		(1 << TIF_FREEZE)
 /*
  * _TIF_ALLWORK_MASK and _TIF_WORK_MASK need to fit within 2 bytes, or we
......
@@ -133,7 +133,6 @@ BTFIXUPDEF_CALL(void, free_thread_info, struct thread_info *)
 #define TIF_POLLING_NRFLAG	9	/* true if poll_idle() is polling
					 * TIF_NEED_RESCHED */
 #define TIF_MEMDIE		10	/* is terminating due to OOM killer */
-#define TIF_FREEZE		11	/* is freezing for suspend */
 /* as above, but as bit values */
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
@@ -147,7 +146,6 @@ BTFIXUPDEF_CALL(void, free_thread_info, struct thread_info *)
 #define _TIF_DO_NOTIFY_RESUME_MASK	(_TIF_NOTIFY_RESUME | \
					 _TIF_SIGPENDING | \
					 _TIF_RESTORE_SIGMASK)
-#define _TIF_FREEZE		(1<<TIF_FREEZE)
 #endif /* __KERNEL__ */
......
@@ -225,7 +225,6 @@ register struct thread_info *current_thread_info_reg asm("g6");
 /* flag bit 12 is available */
 #define TIF_MEMDIE		13	/* is terminating due to OOM killer */
 #define TIF_POLLING_NRFLAG	14
-#define TIF_FREEZE		15	/* is freezing for suspend */
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
 #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
@@ -237,7 +236,6 @@ register struct thread_info *current_thread_info_reg asm("g6");
 #define _TIF_SYSCALL_AUDIT	(1<<TIF_SYSCALL_AUDIT)
 #define _TIF_SYSCALL_TRACEPOINT	(1<<TIF_SYSCALL_TRACEPOINT)
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
-#define _TIF_FREEZE		(1<<TIF_FREEZE)
 #define _TIF_USER_WORK_MASK	((0xff << TI_FLAG_WSAVED_SHIFT) | \
				 _TIF_DO_NOTIFY_RESUME_MASK | \
......
@@ -71,7 +71,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_MEMDIE		5	/* is terminating due to OOM killer */
 #define TIF_SYSCALL_AUDIT	6
 #define TIF_RESTORE_SIGMASK	7
-#define TIF_FREEZE		16	/* is freezing for suspend */
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
@@ -80,6 +79,5 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_RESTORE_SIGMASK	(1 << TIF_RESTORE_SIGMASK)
-#define _TIF_FREEZE		(1 << TIF_FREEZE)
 #endif
@@ -135,14 +135,12 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
 #define TIF_SYSCALL_TRACE	8
 #define TIF_MEMDIE		18
-#define TIF_FREEZE		19
 #define TIF_RESTORE_SIGMASK	20
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
-#define _TIF_FREEZE		(1 << TIF_FREEZE)
 #define _TIF_RESTORE_SIGMASK	(1 << TIF_RESTORE_SIGMASK)
 /*
......
@@ -91,7 +91,6 @@ struct thread_info {
 #define TIF_MEMDIE		20	/* is terminating due to OOM killer */
 #define TIF_DEBUG		21	/* uses debug registers */
 #define TIF_IO_BITMAP		22	/* uses I/O bitmap */
-#define TIF_FREEZE		23	/* is freezing for suspend */
 #define TIF_FORCED_TF		24	/* true if TF in eflags artificially */
 #define TIF_BLOCKSTEP		25	/* set when we want DEBUGCTLMSR_BTF */
 #define TIF_LAZY_MMU_UPDATES	27	/* task is updating the mmu lazily */
@@ -113,7 +112,6 @@ struct thread_info {
 #define _TIF_FORK		(1 << TIF_FORK)
 #define _TIF_DEBUG		(1 << TIF_DEBUG)
 #define _TIF_IO_BITMAP		(1 << TIF_IO_BITMAP)
-#define _TIF_FREEZE		(1 << TIF_FREEZE)
 #define _TIF_FORCED_TF		(1 << TIF_FORCED_TF)
 #define _TIF_BLOCKSTEP		(1 << TIF_BLOCKSTEP)
 #define _TIF_LAZY_MMU_UPDATES	(1 << TIF_LAZY_MMU_UPDATES)
......
@@ -132,7 +132,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_MEMDIE		5	/* is terminating due to OOM killer */
 #define TIF_RESTORE_SIGMASK	6	/* restore signal mask in do_signal() */
 #define TIF_POLLING_NRFLAG	16	/* true if poll_idle() is polling TIF_NEED_RESCHED */
-#define TIF_FREEZE		17	/* is freezing for suspend */
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
 #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
@@ -141,7 +140,6 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_IRET		(1<<TIF_IRET)
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
 #define _TIF_RESTORE_SIGMASK	(1<<TIF_RESTORE_SIGMASK)
-#define _TIF_FREEZE		(1<<TIF_FREEZE)
 #define _TIF_WORK_MASK		0x0000FFFE	/* work to do on interrupt/exception return */
 #define _TIF_ALLWORK_MASK	0x0000FFFF	/* work to do on any return to u-space */
......
@@ -476,6 +476,22 @@ static struct dmi_system_id __initdata acpisleep_dmi_table[] = {
 		DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FW520F"),
 		},
 	},
+	{
+	.callback = init_nvs_nosave,
+	.ident = "Asus K54C",
+	.matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
+		DMI_MATCH(DMI_PRODUCT_NAME, "K54C"),
+		},
+	},
+	{
+	.callback = init_nvs_nosave,
+	.ident = "Asus K54HR",
+	.matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
+		DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"),
+		},
+	},
 	{},
 };
 #endif /* CONFIG_SUSPEND */
......
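The quirk table above adds two Asus laptops whose ACPI NVS region must not be saved over suspend. The kernel scans such tables with its DMI helpers; the sketch below is a hypothetical userspace reduction of that matching logic (names like `struct quirk` and `match_quirk` are illustrative, not the kernel's `dmi_check_system()` API):

```c
#include <string.h>
#include <stddef.h>

/* One table entry: all listed DMI strings must match for the quirk to
 * fire; a NULL vendor terminates the table, mirroring the {} sentinel. */
struct quirk {
	const char *vendor;	/* required DMI_SYS_VENDOR, NULL = end */
	const char *product;	/* required DMI_PRODUCT_NAME */
	const char *ident;	/* human-readable machine name */
};

/* Walk the table and return the ident of the first full match; the
 * kernel would invoke the entry's callback (init_nvs_nosave) instead. */
static const char *match_quirk(const struct quirk *table,
			       const char *vendor, const char *product)
{
	for (; table->vendor; table++)
		if (strcmp(table->vendor, vendor) == 0 &&
		    strcmp(table->product, product) == 0)
			return table->ident;
	return NULL;	/* no quirk applies to this machine */
}
```

First full match wins, which is why more specific entries are listed before generic ones in real quirk tables.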
@@ -113,31 +113,7 @@ static int amba_legacy_resume(struct device *dev)
 	return ret;
 }

-static int amba_pm_prepare(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (drv && drv->pm && drv->pm->prepare)
-		ret = drv->pm->prepare(dev);
-
-	return ret;
-}
-
-static void amba_pm_complete(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-
-	if (drv && drv->pm && drv->pm->complete)
-		drv->pm->complete(dev);
-}
-
-#else /* !CONFIG_PM_SLEEP */
-
-#define amba_pm_prepare		NULL
-#define amba_pm_complete	NULL
-
-#endif /* !CONFIG_PM_SLEEP */
+#endif /* CONFIG_PM_SLEEP */

 #ifdef CONFIG_SUSPEND
@@ -159,22 +135,6 @@ static int amba_pm_suspend(struct device *dev)
 	return ret;
 }

-static int amba_pm_suspend_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->suspend_noirq)
-			ret = drv->pm->suspend_noirq(dev);
-	}
-
-	return ret;
-}
-
 static int amba_pm_resume(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -193,28 +153,10 @@ static int amba_pm_resume(struct device *dev)
 	return ret;
 }

-static int amba_pm_resume_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->resume_noirq)
-			ret = drv->pm->resume_noirq(dev);
-	}
-
-	return ret;
-}
-
 #else /* !CONFIG_SUSPEND */

 #define amba_pm_suspend		NULL
 #define amba_pm_resume		NULL
-#define amba_pm_suspend_noirq	NULL
-#define amba_pm_resume_noirq	NULL

 #endif /* !CONFIG_SUSPEND */
@@ -238,22 +180,6 @@ static int amba_pm_freeze(struct device *dev)
 	return ret;
 }

-static int amba_pm_freeze_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->freeze_noirq)
-			ret = drv->pm->freeze_noirq(dev);
-	}
-
-	return ret;
-}
-
 static int amba_pm_thaw(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -272,22 +198,6 @@ static int amba_pm_thaw(struct device *dev)
 	return ret;
 }

-static int amba_pm_thaw_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->thaw_noirq)
-			ret = drv->pm->thaw_noirq(dev);
-	}
-
-	return ret;
-}
-
 static int amba_pm_poweroff(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -306,22 +216,6 @@ static int amba_pm_poweroff(struct device *dev)
 	return ret;
 }

-static int amba_pm_poweroff_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->poweroff_noirq)
-			ret = drv->pm->poweroff_noirq(dev);
-	}
-
-	return ret;
-}
-
 static int amba_pm_restore(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -340,32 +234,12 @@ static int amba_pm_restore(struct device *dev)
 	return ret;
 }

-static int amba_pm_restore_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->restore_noirq)
-			ret = drv->pm->restore_noirq(dev);
-	}
-
-	return ret;
-}
-
 #else /* !CONFIG_HIBERNATE_CALLBACKS */

 #define amba_pm_freeze		NULL
 #define amba_pm_thaw		NULL
 #define amba_pm_poweroff	NULL
 #define amba_pm_restore		NULL
-#define amba_pm_freeze_noirq	NULL
-#define amba_pm_thaw_noirq	NULL
-#define amba_pm_poweroff_noirq	NULL
-#define amba_pm_restore_noirq	NULL

 #endif /* !CONFIG_HIBERNATE_CALLBACKS */
@@ -406,20 +280,12 @@ static int amba_pm_runtime_resume(struct device *dev)
 #ifdef CONFIG_PM

 static const struct dev_pm_ops amba_pm = {
-	.prepare = amba_pm_prepare,
-	.complete = amba_pm_complete,
 	.suspend = amba_pm_suspend,
 	.resume = amba_pm_resume,
 	.freeze = amba_pm_freeze,
 	.thaw = amba_pm_thaw,
 	.poweroff = amba_pm_poweroff,
 	.restore = amba_pm_restore,
-	.suspend_noirq = amba_pm_suspend_noirq,
-	.resume_noirq = amba_pm_resume_noirq,
-	.freeze_noirq = amba_pm_freeze_noirq,
-	.thaw_noirq = amba_pm_thaw_noirq,
-	.poweroff_noirq = amba_pm_poweroff_noirq,
-	.restore_noirq = amba_pm_restore_noirq,
 	SET_RUNTIME_PM_OPS(
 		amba_pm_runtime_suspend,
 		amba_pm_runtime_resume,
......
@@ -534,6 +534,8 @@ static int _request_firmware(const struct firmware **firmware_p,
 		return 0;
 	}

+	read_lock_usermodehelper();
+
 	if (WARN_ON(usermodehelper_is_disabled())) {
 		dev_err(device, "firmware: %s will not be loaded\n", name);
 		retval = -EBUSY;
@@ -572,6 +574,8 @@ static int _request_firmware(const struct firmware **firmware_p,
 	fw_destroy_instance(fw_priv);

 out:
+	read_unlock_usermodehelper();
+
 	if (retval) {
 		release_firmware(firmware);
 		*firmware_p = NULL;
......
@@ -700,25 +700,6 @@ static int platform_legacy_resume(struct device *dev)
 	return ret;
 }

-int platform_pm_prepare(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (drv && drv->pm && drv->pm->prepare)
-		ret = drv->pm->prepare(dev);
-
-	return ret;
-}
-
-void platform_pm_complete(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-
-	if (drv && drv->pm && drv->pm->complete)
-		drv->pm->complete(dev);
-}
-
 #endif /* CONFIG_PM_SLEEP */

 #ifdef CONFIG_SUSPEND
@@ -741,22 +722,6 @@ int platform_pm_suspend(struct device *dev)
 	return ret;
 }

-int platform_pm_suspend_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->suspend_noirq)
-			ret = drv->pm->suspend_noirq(dev);
-	}
-
-	return ret;
-}
-
 int platform_pm_resume(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -775,22 +740,6 @@ int platform_pm_resume(struct device *dev)
 	return ret;
 }

-int platform_pm_resume_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->resume_noirq)
-			ret = drv->pm->resume_noirq(dev);
-	}
-
-	return ret;
-}
-
 #endif /* CONFIG_SUSPEND */

 #ifdef CONFIG_HIBERNATE_CALLBACKS
@@ -813,22 +762,6 @@ int platform_pm_freeze(struct device *dev)
 	return ret;
 }

-int platform_pm_freeze_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->freeze_noirq)
-			ret = drv->pm->freeze_noirq(dev);
-	}
-
-	return ret;
-}
-
 int platform_pm_thaw(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -847,22 +780,6 @@ int platform_pm_thaw(struct device *dev)
 	return ret;
 }

-int platform_pm_thaw_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->thaw_noirq)
-			ret = drv->pm->thaw_noirq(dev);
-	}
-
-	return ret;
-}
-
 int platform_pm_poweroff(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -881,22 +798,6 @@ int platform_pm_poweroff(struct device *dev)
 	return ret;
 }

-int platform_pm_poweroff_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->poweroff_noirq)
-			ret = drv->pm->poweroff_noirq(dev);
-	}
-
-	return ret;
-}
-
 int platform_pm_restore(struct device *dev)
 {
 	struct device_driver *drv = dev->driver;
@@ -915,22 +816,6 @@ int platform_pm_restore(struct device *dev)
 	return ret;
 }

-int platform_pm_restore_noirq(struct device *dev)
-{
-	struct device_driver *drv = dev->driver;
-	int ret = 0;
-
-	if (!drv)
-		return 0;
-
-	if (drv->pm) {
-		if (drv->pm->restore_noirq)
-			ret = drv->pm->restore_noirq(dev);
-	}
-
-	return ret;
-}
-
 #endif /* CONFIG_HIBERNATE_CALLBACKS */

 static const struct dev_pm_ops platform_dev_pm_ops = {
......
@@ -3,7 +3,7 @@ obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 obj-$(CONFIG_PM_RUNTIME)	+= runtime.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp.o
-obj-$(CONFIG_PM_GENERIC_DOMAINS)	+=  domain.o
+obj-$(CONFIG_PM_GENERIC_DOMAINS)	+=  domain.o domain_governor.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o

 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
......
/*
* drivers/base/power/domain_governor.c - Governors for device PM domains.
*
* Copyright (C) 2011 Rafael J. Wysocki <rjw@sisk.pl>, Renesas Electronics Corp.
*
* This file is released under the GPLv2.
*/
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/pm_domain.h>
#include <linux/pm_qos.h>
#include <linux/hrtimer.h>
/**
* default_stop_ok - Default PM domain governor routine for stopping devices.
* @dev: Device to check.
*/
bool default_stop_ok(struct device *dev)
{
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
dev_dbg(dev, "%s()\n", __func__);
if (dev->power.max_time_suspended_ns < 0 || td->break_even_ns == 0)
return true;
return td->stop_latency_ns + td->start_latency_ns < td->break_even_ns
&& td->break_even_ns < dev->power.max_time_suspended_ns;
}
/**
* default_power_down_ok - Default generic PM domain power off governor routine.
* @pd: PM domain to check.
*
* This routine must be executed under the PM domain's lock.
*/
static bool default_power_down_ok(struct dev_pm_domain *pd)
{
struct generic_pm_domain *genpd = pd_to_genpd(pd);
struct gpd_link *link;
struct pm_domain_data *pdd;
s64 min_dev_off_time_ns;
s64 off_on_time_ns;
ktime_t time_now = ktime_get();
off_on_time_ns = genpd->power_off_latency_ns +
genpd->power_on_latency_ns;
/*
* It doesn't make sense to remove power from the domain if saving
* the state of all devices in it and the power off/power on operations
* take too much time.
*
* All devices in this domain have been stopped already at this point.
*/
list_for_each_entry(pdd, &genpd->dev_list, list_node) {
if (pdd->dev->driver)
off_on_time_ns +=
to_gpd_data(pdd)->td.save_state_latency_ns;
}
/*
* Check if subdomains can be off for enough time.
*
* All subdomains have been powered off already at this point.
*/
list_for_each_entry(link, &genpd->master_links, master_node) {
struct generic_pm_domain *sd = link->slave;
s64 sd_max_off_ns = sd->max_off_time_ns;
if (sd_max_off_ns < 0)
continue;
sd_max_off_ns -= ktime_to_ns(ktime_sub(time_now,
sd->power_off_time));
/*
* Check if the subdomain is allowed to be off long enough for
* the current domain to turn off and on (that's how much time
* it will have to wait worst case).
*/
if (sd_max_off_ns <= off_on_time_ns)
return false;
}
/*
* Check if the devices in the domain can be off for enough time.
*/
min_dev_off_time_ns = -1;
list_for_each_entry(pdd, &genpd->dev_list, list_node) {
struct gpd_timing_data *td;
struct device *dev = pdd->dev;
s64 dev_off_time_ns;
if (!dev->driver || dev->power.max_time_suspended_ns < 0)
continue;
td = &to_gpd_data(pdd)->td;
dev_off_time_ns = dev->power.max_time_suspended_ns -
(td->start_latency_ns + td->restore_state_latency_ns +
ktime_to_ns(ktime_sub(time_now,
dev->power.suspend_time)));
if (dev_off_time_ns <= off_on_time_ns)
return false;
if (min_dev_off_time_ns > dev_off_time_ns
|| min_dev_off_time_ns < 0)
min_dev_off_time_ns = dev_off_time_ns;
}
if (min_dev_off_time_ns < 0) {
/*
* There are no latency constraints, so the domain can spend
* arbitrary time in the "off" state.
*/
genpd->max_off_time_ns = -1;
return true;
}
/*
* The difference between the computed minimum delta and the time needed
* to turn the domain on is the maximum theoretical time this domain can
* spend in the "off" state.
*/
min_dev_off_time_ns -= genpd->power_on_latency_ns;
/*
* If the difference between the computed minimum delta and the time
* needed to turn the domain off and back on is smaller than the
* domain's power break even time, removing power from the domain is not
* worth it.
*/
if (genpd->break_even_ns >
min_dev_off_time_ns - genpd->power_off_latency_ns)
return false;
genpd->max_off_time_ns = min_dev_off_time_ns;
return true;
}
struct dev_power_governor simple_qos_governor = {
.stop_ok = default_stop_ok,
.power_down_ok = default_power_down_ok,
};
static bool always_on_power_down_ok(struct dev_pm_domain *domain)
{
return false;
}
/**
* pm_genpd_gov_always_on - A governor implementing an always-on policy
*/
struct dev_power_governor pm_domain_always_on_gov = {
.power_down_ok = always_on_power_down_ok,
.stop_ok = default_stop_ok,
};
...@@ -97,16 +97,16 @@ int pm_generic_prepare(struct device *dev)
* @event: PM transition of the system under way.
* @bool: Whether or not this is the "noirq" stage.
*
- * If the device has not been suspended at run time, execute the
- * suspend/freeze/poweroff/thaw callback provided by its driver, if defined, and
- * return its error code. Otherwise, return zero.
+ * Execute the PM callback corresponding to @event provided by the driver of
+ * @dev, if defined, and return its error code. Return 0 if the callback is
+ * not present.
*/
static int __pm_generic_call(struct device *dev, int event, bool noirq)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int (*callback)(struct device *);
-if (!pm || pm_runtime_suspended(dev))
+if (!pm)
return 0;
switch (event) {
...@@ -119,9 +119,15 @@ static int __pm_generic_call(struct device *dev, int event, bool noirq)
case PM_EVENT_HIBERNATE:
callback = noirq ? pm->poweroff_noirq : pm->poweroff;
break;
case PM_EVENT_RESUME:
callback = noirq ? pm->resume_noirq : pm->resume;
break;
case PM_EVENT_THAW:
callback = noirq ? pm->thaw_noirq : pm->thaw;
break;
case PM_EVENT_RESTORE:
callback = noirq ? pm->restore_noirq : pm->restore;
break;
default:
callback = NULL;
break;
...@@ -210,57 +216,13 @@ int pm_generic_thaw(struct device *dev)
}
EXPORT_SYMBOL_GPL(pm_generic_thaw);
/**
* __pm_generic_resume - Generic resume/restore callback for subsystems.
* @dev: Device to handle.
* @event: PM transition of the system under way.
* @bool: Whether or not this is the "noirq" stage.
*
* Execute the resume/restore callback provided by the @dev's driver, if
* defined. If it returns 0, change the device's runtime PM status to 'active'.
* Return the callback's error code.
*/
static int __pm_generic_resume(struct device *dev, int event, bool noirq)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int (*callback)(struct device *);
int ret;
if (!pm)
return 0;
switch (event) {
case PM_EVENT_RESUME:
callback = noirq ? pm->resume_noirq : pm->resume;
break;
case PM_EVENT_RESTORE:
callback = noirq ? pm->restore_noirq : pm->restore;
break;
default:
callback = NULL;
break;
}
if (!callback)
return 0;
ret = callback(dev);
if (!ret && !noirq && pm_runtime_enabled(dev)) {
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
}
return ret;
}
/**
* pm_generic_resume_noirq - Generic resume_noirq callback for subsystems.
* @dev: Device to resume.
*/
int pm_generic_resume_noirq(struct device *dev)
{
-return __pm_generic_resume(dev, PM_EVENT_RESUME, true);
+return __pm_generic_call(dev, PM_EVENT_RESUME, true);
}
EXPORT_SYMBOL_GPL(pm_generic_resume_noirq);
...@@ -270,7 +232,7 @@ EXPORT_SYMBOL_GPL(pm_generic_resume_noirq);
*/
int pm_generic_resume(struct device *dev)
{
-return __pm_generic_resume(dev, PM_EVENT_RESUME, false);
+return __pm_generic_call(dev, PM_EVENT_RESUME, false);
}
EXPORT_SYMBOL_GPL(pm_generic_resume);
...@@ -280,7 +242,7 @@ EXPORT_SYMBOL_GPL(pm_generic_resume);
*/
int pm_generic_restore_noirq(struct device *dev)
{
-return __pm_generic_resume(dev, PM_EVENT_RESTORE, true);
+return __pm_generic_call(dev, PM_EVENT_RESTORE, true);
}
EXPORT_SYMBOL_GPL(pm_generic_restore_noirq);
...@@ -290,7 +252,7 @@ EXPORT_SYMBOL_GPL(pm_generic_restore_noirq);
*/
int pm_generic_restore(struct device *dev)
{
-return __pm_generic_resume(dev, PM_EVENT_RESTORE, false);
+return __pm_generic_call(dev, PM_EVENT_RESTORE, false);
}
EXPORT_SYMBOL_GPL(pm_generic_restore);
...@@ -314,28 +276,3 @@ void pm_generic_complete(struct device *dev)
pm_runtime_idle(dev);
}
#endif /* CONFIG_PM_SLEEP */
struct dev_pm_ops generic_subsys_pm_ops = {
#ifdef CONFIG_PM_SLEEP
.prepare = pm_generic_prepare,
.suspend = pm_generic_suspend,
.suspend_noirq = pm_generic_suspend_noirq,
.resume = pm_generic_resume,
.resume_noirq = pm_generic_resume_noirq,
.freeze = pm_generic_freeze,
.freeze_noirq = pm_generic_freeze_noirq,
.thaw = pm_generic_thaw,
.thaw_noirq = pm_generic_thaw_noirq,
.poweroff = pm_generic_poweroff,
.poweroff_noirq = pm_generic_poweroff_noirq,
.restore = pm_generic_restore,
.restore_noirq = pm_generic_restore_noirq,
.complete = pm_generic_complete,
#endif
#ifdef CONFIG_PM_RUNTIME
.runtime_suspend = pm_generic_runtime_suspend,
.runtime_resume = pm_generic_runtime_resume,
.runtime_idle = pm_generic_runtime_idle,
#endif
};
EXPORT_SYMBOL_GPL(generic_subsys_pm_ops);
...@@ -47,21 +47,29 @@ static DEFINE_MUTEX(dev_pm_qos_mtx);
static BLOCKING_NOTIFIER_HEAD(dev_pm_notifiers);
/**
- * dev_pm_qos_read_value - Get PM QoS constraint for a given device.
+ * __dev_pm_qos_read_value - Get PM QoS constraint for a given device.
* @dev: Device to get the PM QoS constraint value for.
*
* This routine must be called with dev->power.lock held.
*/
s32 __dev_pm_qos_read_value(struct device *dev)
{
struct pm_qos_constraints *c = dev->power.constraints;
return c ? pm_qos_read_value(c) : 0;
}
/**
* dev_pm_qos_read_value - Get PM QoS constraint for a given device (locked).
* @dev: Device to get the PM QoS constraint value for.
*/
s32 dev_pm_qos_read_value(struct device *dev)
{
struct pm_qos_constraints *c;
unsigned long flags;
-s32 ret = 0;
+s32 ret;
spin_lock_irqsave(&dev->power.lock, flags);
ret = __dev_pm_qos_read_value(dev);
c = dev->power.constraints;
if (c)
ret = pm_qos_read_value(c);
spin_unlock_irqrestore(&dev->power.lock, flags);
return ret;
...@@ -412,3 +420,28 @@ int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier)
return blocking_notifier_chain_unregister(&dev_pm_notifiers, notifier);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_remove_global_notifier);
/**
* dev_pm_qos_add_ancestor_request - Add PM QoS request for device's ancestor.
* @dev: Device whose ancestor to add the request for.
* @req: Pointer to the preallocated handle.
* @value: Constraint latency value.
*/
int dev_pm_qos_add_ancestor_request(struct device *dev,
struct dev_pm_qos_request *req, s32 value)
{
struct device *ancestor = dev->parent;
int error = -ENODEV;
while (ancestor && !ancestor->power.ignore_children)
ancestor = ancestor->parent;
if (ancestor)
error = dev_pm_qos_add_request(ancestor, req, value);
if (error)
req->dev = NULL;
return error;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_ancestor_request);
...@@ -250,6 +250,9 @@ static int rpm_idle(struct device *dev, int rpmflags)
else
callback = NULL;
if (!callback && dev->driver && dev->driver->pm)
callback = dev->driver->pm->runtime_idle;
if (callback)
__rpm_callback(callback, dev);
...@@ -279,6 +282,47 @@ static int rpm_callback(int (*cb)(struct device *), struct device *dev)
return retval != -EACCES ? retval : -EIO;
}
struct rpm_qos_data {
ktime_t time_now;
s64 constraint_ns;
};
/**
* rpm_update_qos_constraint - Update a given PM QoS constraint data.
* @dev: Device whose timing data to use.
* @data: PM QoS constraint data to update.
*
* Use the suspend timing data of @dev to update PM QoS constraint data pointed
* to by @data.
*/
static int rpm_update_qos_constraint(struct device *dev, void *data)
{
struct rpm_qos_data *qos = data;
unsigned long flags;
s64 delta_ns;
int ret = 0;
spin_lock_irqsave(&dev->power.lock, flags);
if (dev->power.max_time_suspended_ns < 0)
goto out;
delta_ns = dev->power.max_time_suspended_ns -
ktime_to_ns(ktime_sub(qos->time_now, dev->power.suspend_time));
if (delta_ns <= 0) {
ret = -EBUSY;
goto out;
}
if (qos->constraint_ns > delta_ns || qos->constraint_ns == 0)
qos->constraint_ns = delta_ns;
out:
spin_unlock_irqrestore(&dev->power.lock, flags);
return ret;
}
/**
* rpm_suspend - Carry out runtime suspend of given device.
* @dev: Device to suspend.
...@@ -305,6 +349,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
{
int (*callback)(struct device *);
struct device *parent = NULL;
struct rpm_qos_data qos;
int retval;
trace_rpm_suspend(dev, rpmflags);
...@@ -400,8 +445,38 @@ static int rpm_suspend(struct device *dev, int rpmflags)
goto out;
}
qos.constraint_ns = __dev_pm_qos_read_value(dev);
if (qos.constraint_ns < 0) {
/* Negative constraint means "never suspend". */
retval = -EPERM;
goto out;
}
qos.constraint_ns *= NSEC_PER_USEC;
qos.time_now = ktime_get();
__update_runtime_status(dev, RPM_SUSPENDING); __update_runtime_status(dev, RPM_SUSPENDING);
if (!dev->power.ignore_children) {
if (dev->power.irq_safe)
spin_unlock(&dev->power.lock);
else
spin_unlock_irq(&dev->power.lock);
retval = device_for_each_child(dev, &qos,
rpm_update_qos_constraint);
if (dev->power.irq_safe)
spin_lock(&dev->power.lock);
else
spin_lock_irq(&dev->power.lock);
if (retval)
goto fail;
}
dev->power.suspend_time = qos.time_now;
dev->power.max_time_suspended_ns = qos.constraint_ns ? : -1;
if (dev->pm_domain)
callback = dev->pm_domain->ops.runtime_suspend;
else if (dev->type && dev->type->pm)
...@@ -413,28 +488,13 @@ static int rpm_suspend(struct device *dev, int rpmflags)
else
callback = NULL;
if (!callback && dev->driver && dev->driver->pm)
callback = dev->driver->pm->runtime_suspend;
retval = rpm_callback(callback, dev);
-if (retval) {
-__update_runtime_status(dev, RPM_ACTIVE);
+if (retval)
+goto fail;
dev->power.deferred_resume = false;
if (retval == -EAGAIN || retval == -EBUSY) {
dev->power.runtime_error = 0;
/*
* If the callback routine failed an autosuspend, and
* if the last_busy time has been updated so that there
* is a new autosuspend expiration time, automatically
* reschedule another autosuspend.
*/
if ((rpmflags & RPM_AUTO) &&
pm_runtime_autosuspend_expiration(dev) != 0)
goto repeat;
} else {
pm_runtime_cancel_pending(dev);
}
wake_up_all(&dev->power.wait_queue);
goto out;
}
no_callback:
__update_runtime_status(dev, RPM_SUSPENDED);
pm_runtime_deactivate_timer(dev);
...@@ -466,6 +526,29 @@ static int rpm_suspend(struct device *dev, int rpmflags)
trace_rpm_return_int(dev, _THIS_IP_, retval);
return retval;
fail:
__update_runtime_status(dev, RPM_ACTIVE);
dev->power.suspend_time = ktime_set(0, 0);
dev->power.max_time_suspended_ns = -1;
dev->power.deferred_resume = false;
if (retval == -EAGAIN || retval == -EBUSY) {
dev->power.runtime_error = 0;
/*
* If the callback routine failed an autosuspend, and
* if the last_busy time has been updated so that there
* is a new autosuspend expiration time, automatically
* reschedule another autosuspend.
*/
if ((rpmflags & RPM_AUTO) &&
pm_runtime_autosuspend_expiration(dev) != 0)
goto repeat;
} else {
pm_runtime_cancel_pending(dev);
}
wake_up_all(&dev->power.wait_queue);
goto out;
}
/**
...@@ -620,6 +703,9 @@ static int rpm_resume(struct device *dev, int rpmflags)
if (dev->power.no_callbacks)
goto no_callback; /* Assume success. */
dev->power.suspend_time = ktime_set(0, 0);
dev->power.max_time_suspended_ns = -1;
__update_runtime_status(dev, RPM_RESUMING);
if (dev->pm_domain)
...@@ -633,6 +719,9 @@ static int rpm_resume(struct device *dev, int rpmflags)
else
callback = NULL;
if (!callback && dev->driver && dev->driver->pm)
callback = dev->driver->pm->runtime_resume;
retval = rpm_callback(callback, dev);
if (retval) {
__update_runtime_status(dev, RPM_SUSPENDED);
...@@ -1279,6 +1368,9 @@ void pm_runtime_init(struct device *dev)
setup_timer(&dev->power.suspend_timer, pm_suspend_timer_fn,
(unsigned long)dev);
dev->power.suspend_time = ktime_set(0, 0);
dev->power.max_time_suspended_ns = -1;
init_waitqueue_head(&dev->power.wait_queue);
}
...@@ -1296,3 +1388,28 @@ void pm_runtime_remove(struct device *dev)
if (dev->power.irq_safe && dev->parent)
pm_runtime_put_sync(dev->parent);
}
/**
* pm_runtime_update_max_time_suspended - Update device's suspend time data.
* @dev: Device to handle.
* @delta_ns: Value to subtract from the device's max_time_suspended_ns field.
*
* Update the device's power.max_time_suspended_ns field by subtracting
* @delta_ns from it. The resulting value of power.max_time_suspended_ns is
* never negative.
*/
void pm_runtime_update_max_time_suspended(struct device *dev, s64 delta_ns)
{
unsigned long flags;
spin_lock_irqsave(&dev->power.lock, flags);
if (delta_ns > 0 && dev->power.max_time_suspended_ns > 0) {
if (dev->power.max_time_suspended_ns > delta_ns)
dev->power.max_time_suspended_ns -= delta_ns;
else
dev->power.max_time_suspended_ns = 0;
}
spin_unlock_irqrestore(&dev->power.lock, flags);
}
...@@ -475,8 +475,6 @@ static int btmrvl_service_main_thread(void *data)
init_waitqueue_entry(&wait, current);
current->flags |= PF_NOFREEZE;
for (;;) {
add_wait_queue(&thread->wait_q, &wait);
...
...@@ -65,4 +65,17 @@ config DEVFREQ_GOV_USERSPACE
comment "DEVFREQ Drivers"
config ARM_EXYNOS4_BUS_DEVFREQ
bool "ARM Exynos4210/4212/4412 Memory Bus DEVFREQ Driver"
depends on CPU_EXYNOS4210 || CPU_EXYNOS4212 || CPU_EXYNOS4412
select ARCH_HAS_OPP
select DEVFREQ_GOV_SIMPLE_ONDEMAND
help
This adds the DEVFREQ driver for Exynos4210 memory bus (vdd_int)
and Exynos4212/4412 memory interface and bus (vdd_mif + vdd_int).
It reads PPMU counters of memory controllers and adjusts
the operating frequencies and voltages with OPP support.
To operate with optimal voltages, ASV support is required
(CONFIG_EXYNOS_ASV).
endif # PM_DEVFREQ
...@@ -3,3 +3,6 @@ obj-$(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND) += governor_simpleondemand.o
obj-$(CONFIG_DEVFREQ_GOV_PERFORMANCE) += governor_performance.o
obj-$(CONFIG_DEVFREQ_GOV_POWERSAVE) += governor_powersave.o
obj-$(CONFIG_DEVFREQ_GOV_USERSPACE) += governor_userspace.o
# DEVFREQ Drivers
obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ) += exynos4_bus.o
...@@ -347,7 +347,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
if (!IS_ERR(devfreq)) {
dev_err(dev, "%s: Unable to create devfreq for the device. It already has one.\n", __func__);
err = -EINVAL;
-goto out;
+goto err_out;
}
}
...@@ -356,7 +356,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
dev_err(dev, "%s: Unable to create devfreq for the device\n",
__func__);
err = -ENOMEM;
-goto out;
+goto err_out;
}
mutex_init(&devfreq->lock);
...@@ -399,17 +399,16 @@ struct devfreq *devfreq_add_device(struct device *dev,
devfreq->next_polling);
}
mutex_unlock(&devfreq_list_lock);
-goto out;
+out:
return devfreq;
err_init:
device_unregister(&devfreq->dev);
err_dev:
mutex_unlock(&devfreq->lock);
kfree(devfreq);
-out:
-if (err)
+err_out:
+return ERR_PTR(err);
return ERR_PTR(err);
else
return devfreq;
}
/**
...
...@@ -214,9 +214,18 @@ static unsigned int dmatest_verify(u8 **bufs, unsigned int start,
return error_count;
}
-static void dmatest_callback(void *completion)
+/* poor man's completion - we want to use wait_event_freezable() on it */
struct dmatest_done {
bool done;
wait_queue_head_t *wait;
};
static void dmatest_callback(void *arg)
{
-complete(completion);
+struct dmatest_done *done = arg;
done->done = true;
wake_up_all(done->wait);
}
/*
...@@ -235,7 +244,9 @@ static void dmatest_callback(void *completion)
*/
static int dmatest_func(void *data)
{
DECLARE_WAIT_QUEUE_HEAD_ONSTACK(done_wait);
struct dmatest_thread *thread = data;
struct dmatest_done done = { .wait = &done_wait };
struct dma_chan *chan;
const char *thread_name;
unsigned int src_off, dst_off, len;
...@@ -252,7 +263,7 @@ static int dmatest_func(void *data)
int i;
thread_name = current->comm;
-set_freezable_with_signal();
+set_freezable();
ret = -ENOMEM;
...@@ -306,9 +317,6 @@ static int dmatest_func(void *data)
struct dma_async_tx_descriptor *tx = NULL;
dma_addr_t dma_srcs[src_cnt];
dma_addr_t dma_dsts[dst_cnt];
struct completion cmp;
unsigned long start, tmo, end = 0 /* compiler... */;
bool reload = true;
u8 align = 0;
total_tests++;
...@@ -391,9 +399,9 @@ static int dmatest_func(void *data)
continue;
}
-init_completion(&cmp);
+done.done = false;
tx->callback = dmatest_callback;
-tx->callback_param = &cmp;
+tx->callback_param = &done;
cookie = tx->tx_submit(tx);
if (dma_submit_error(cookie)) {
...@@ -407,20 +415,20 @@ static int dmatest_func(void *data)
}
dma_async_issue_pending(chan);
-do {
-start = jiffies;
+wait_event_freezable_timeout(done_wait, done.done,
+msecs_to_jiffies(timeout));
if (reload)
end = start + msecs_to_jiffies(timeout);
else if (end <= start)
end = start + 1;
tmo = wait_for_completion_interruptible_timeout(&cmp,
end - start);
reload = try_to_freeze();
} while (tmo == -ERESTARTSYS);
status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);
-if (tmo == 0) {
+if (!done.done) {
/*
* We're leaving the timed out dma operation with
* dangling pointer to done_wait. To make this
* correct, we'll need to allocate wait_done for
* each test iteration and perform "who's gonna
* free it this time?" dancing. For now, just
* leave it dangling.
*/
pr_warning("%s: #%u: test timed out\n",
thread_name, total_tests - 1);
failed_tests++;
...
...@@ -23,6 +23,7 @@
#include <linux/input.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pm_qos.h>
#include <linux/slab.h>
#include <linux/types.h>
...@@ -46,6 +47,7 @@ struct st1232_ts_data {
struct i2c_client *client;
struct input_dev *input_dev;
struct st1232_ts_finger finger[MAX_FINGERS];
struct dev_pm_qos_request low_latency_req;
};
static int st1232_ts_read_data(struct st1232_ts_data *ts)
...@@ -118,8 +120,17 @@ static irqreturn_t st1232_ts_irq_handler(int irq, void *dev_id)
}
/* SYN_MT_REPORT only if no contact */
-if (!count)
+if (!count) {
input_mt_sync(input_dev);
if (ts->low_latency_req.dev) {
dev_pm_qos_remove_request(&ts->low_latency_req);
ts->low_latency_req.dev = NULL;
}
} else if (!ts->low_latency_req.dev) {
/* First contact, request 100 us latency. */
dev_pm_qos_add_ancestor_request(&ts->client->dev,
&ts->low_latency_req, 100);
}
/* SYN_REPORT */
input_sync(input_dev);
...
...@@ -67,6 +67,7 @@ struct intc_desc_int {
struct intc_window *window;
unsigned int nr_windows;
struct irq_chip chip;
bool skip_suspend;
};
...