Commit 277edbab authored by Linus Torvalds

Merge tag 'pm+acpi-4.6-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI updates from Rafael Wysocki:
 "This time the majority of changes go into cpufreq and they are
  significant.

  First off, the way CPU frequency updates are triggered is different
  now.  Instead of having to set up and manage a deferrable timer for
  each CPU in the system to evaluate and possibly change its frequency
  periodically, cpufreq governors set up callbacks to be invoked by the
  scheduler on a regular basis (basically on utilization updates).  The
  "old" governors, "ondemand" and "conservative", still do all of their
  work in process context (although that is triggered by the scheduler
  now), but intel_pstate does it all in the callback invoked by the
  scheduler with no need for any additional asynchronous processing.

  Of course, this eliminates the overhead related to the management of
  all those timers, but it also allows the cpufreq governor code to be
  simplified quite a bit.  On top of that, the common code and data
  structures used by the "ondemand" and "conservative" governors are
  cleaned up and made more straightforward, and some long-standing and
  quite annoying problems are addressed.  In particular, the handling of
  governor sysfs attributes is modified and the related locking becomes
  more fine-grained, which allows some concurrency problems to be
  avoided (particularly deadlocks with the core cpufreq code).

  In principle, the new mechanism for triggering frequency updates
  allows utilization information to be passed from the scheduler to
  cpufreq.  Although the current code doesn't make use of it, in the
  works is a new cpufreq governor that will make decisions based on the
  scheduler's utilization data.  That should allow the scheduler and
  cpufreq to work more closely together in the long run.

  In addition to the core and governor changes, cpufreq drivers are
  updated too.  Fixes and optimizations go into intel_pstate, the
  cpufreq-dt driver is updated on top of modifications in the
  Operating Performance Points (OPP) framework, and there are fixes and
  other updates in the powernv cpufreq driver.

  Apart from the cpufreq updates there is some new ACPICA material,
  including a fix for a problem introduced by previous ACPICA updates,
  and some less significant changes in the ACPI code, like CPPC code
  optimizations, ACPI processor driver cleanups and support for loading
  ACPI tables from initrd.

  Also updated are the generic power domains framework, the Intel RAPL
  power capping driver and the turbostat utility, and there is the
  usual batch of assorted fixes and cleanups.

  Specifics:

   - Redesign of cpufreq governors and the intel_pstate driver to make
     them use callbacks invoked by the scheduler to trigger CPU
     frequency evaluation instead of using per-CPU deferrable timers for
     that purpose (Rafael Wysocki).

   - Reorganization and cleanup of cpufreq governor code to make it more
     straightforward and fix some concurrency problems in it (Rafael
     Wysocki, Viresh Kumar).

   - Cleanup and improvements of locking in the cpufreq core (Viresh
     Kumar).

   - Assorted cleanups in the cpufreq core (Rafael Wysocki, Viresh
     Kumar, Eric Biggers).

   - intel_pstate driver updates including fixes, optimizations and a
     modification to make it enable hardware-coordinated P-state
     selection (HWP) by default if supported by the processor (Philippe
     Longepe, Srinivas Pandruvada, Rafael Wysocki, Viresh Kumar, Felipe
     Franciosi).

   - Operating Performance Points (OPP) framework updates to improve its
     handling of voltage regulators and device clocks and updates of the
     cpufreq-dt driver on top of that (Viresh Kumar, Jon Hunter).

   - Updates of the powernv cpufreq driver to fix initialization and
     cleanup problems in it and correct its worker thread handling with
     respect to CPU offline, plus a new powernv_throttle tracepoint
     (Shilpasri Bhat).

   - ACPI cpufreq driver optimization and cleanup (Rafael Wysocki).

   - ACPICA updates including one fix for a regression introduced by
     previous changes in the ACPICA code (Bob Moore, Lv Zheng, David Box,
     Colin Ian King).

   - Support for installing ACPI tables from initrd (Lv Zheng).

   - Optimizations of the ACPI CPPC code (Prashanth Prakash, Ashwin
     Chaugule).

   - Support for _HID(ACPI0010) devices (ACPI processor containers) and
     ACPI processor driver cleanups (Sudeep Holla).

   - Support for ACPI-based enumeration of the AMBA bus (Graeme Gregory,
     Aleksey Makarov).

   - Modification of the ACPI PCI IRQ management code to make it treat
     255 in the Interrupt Line register as "not connected" on x86 (as
     per the specification) and avoid attempts to use that value as a
     valid interrupt vector (Chen Fan).

   - ACPI APEI fixes related to resource leaks (Josh Hunt).

   - Removal of modularity from a few ACPI drivers (BGRT, GHES,
     intel_pmic_crc) that cannot be built as modules in practice (Paul
     Gortmaker).

   - PNP framework update to make it treat ACPI_RESOURCE_TYPE_SERIAL_BUS
     as a valid resource type (Harb Abdulhamid).

   - New device ID (future AMD I2C controller) in the ACPI driver for
     AMD SoCs (APD) and in the designware I2C driver (Xiangliang Yu).

   - Assorted ACPI cleanups (Colin Ian King, Kaiyen Chang, Oleg Drokin).

   - cpuidle menu governor optimization to avoid a square root
     computation in it (Rasmus Villemoes).

   - Fix for potential use-after-free in the generic device properties
     framework (Heikki Krogerus).

   - Updates of the generic power domains (genpd) framework including
     support for multiple power states of a domain, fixes and debugfs
     output improvements (Axel Haslam, Jon Hunter, Laurent Pinchart,
     Geert Uytterhoeven).

   - Intel RAPL power capping driver updates to reduce IPI overhead in
     it (Jacob Pan).

   - System suspend/hibernation code cleanups (Eric Biggers, Saurabh
     Sengar).

   - Year 2038 fix for the process freezer (Abhilash Jindal).

   - turbostat utility updates including new features (decoding of more
     registers and CPUID fields, sub-second intervals support, GFX MHz
     and RC6 printout, --out command line option), fixes (syscall jitter
     detection and workaround, reduction of the number of syscalls
     made, fixes related to Xeon x200 processors, compiler warning
     fixes) and cleanups (Len Brown, Hubert Chrzaniuk, Chen Yu)"

* tag 'pm+acpi-4.6-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (182 commits)
  tools/power turbostat: bugfix: TDP MSRs print bits fixing
  tools/power turbostat: correct output for MSR_NHM_SNB_PKG_CST_CFG_CTL dump
  tools/power turbostat: call __cpuid() instead of __get_cpuid()
  tools/power turbostat: indicate SMX and SGX support
  tools/power turbostat: detect and work around syscall jitter
  tools/power turbostat: show GFX%rc6
  tools/power turbostat: show GFXMHz
  tools/power turbostat: show IRQs per CPU
  tools/power turbostat: make fewer systems calls
  tools/power turbostat: fix compiler warnings
  tools/power turbostat: add --out option for saving output in a file
  tools/power turbostat: re-name "%Busy" field to "Busy%"
  tools/power turbostat: Intel Xeon x200: fix turbo-ratio decoding
  tools/power turbostat: Intel Xeon x200: fix erroneous bclk value
  tools/power turbostat: allow sub-sec intervals
  ACPI / APEI: ERST: Fixed leaked resources in erst_init
  ACPI / APEI: Fix leaked resources
  intel_pstate: Do not skip samples partially
  intel_pstate: Remove freq calculation from intel_pstate_calc_busy()
  intel_pstate: Move intel_pstate_calc_busy() into get_target_pstate_use_performance()
  ...
parents 271ecc52 0d571b62
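
The scheduler-driven update scheme described in the pull message above boils
down to governors installing a per-CPU callback that the scheduler invokes on
utilization updates, in place of a per-CPU deferrable timer.  The following is
a minimal, self-contained C sketch of that idea; the type and helper names
(util_hook, register_util_hook(), sched_util_update()) are hypothetical
stand-ins, not the actual kernel API added by this series.

/* Hypothetical sketch of scheduler-driven cpufreq evaluation. */
#include <stddef.h>

struct util_hook {
	/* Invoked by the scheduler on every utilization update. */
	void (*func)(struct util_hook *hook, unsigned long long time_ns,
		     unsigned long util, unsigned long max);
};

#define NR_SKETCH_CPUS 8
static struct util_hook *per_cpu_hook[NR_SKETCH_CPUS];

/* Governor side: install or remove the per-CPU callback. */
static void register_util_hook(int cpu, struct util_hook *hook)
{
	per_cpu_hook[cpu] = hook;
}

static void unregister_util_hook(int cpu)
{
	per_cpu_hook[cpu] = NULL;
}

/* Scheduler side: called from the utilization-update path. */
static void sched_util_update(int cpu, unsigned long long now_ns,
			      unsigned long util, unsigned long max)
{
	struct util_hook *hook = per_cpu_hook[cpu];

	if (hook)
		hook->func(hook, now_ns, util, max);
}

In this model the "ondemand" and "conservative" governors would queue work from
such a callback so their heavy lifting still runs in process context, while
intel_pstate can select a P-state directly in the callback with no additional
asynchronous processing.
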
......@@ -25,7 +25,7 @@ callback, so cpufreq core can't request a transition to a specific frequency.
The driver provides minimum and maximum frequency limits and callbacks to set a
policy. The policy in cpufreq sysfs is referred to as the "scaling governor".
The cpufreq core can request the driver to operate in any of the two policies:
"performance: and "powersave". The driver decides which frequency to use based
"performance" and "powersave". The driver decides which frequency to use based
on the above policy selection considering minimum and maximum frequency limits.
The Intel P-State driver falls under the latter category, which implements the
......
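
The documentation hunk above distinguishes drivers that honor explicit target
frequencies from "setpolicy" drivers such as intel_pstate, which only receive
a "performance" or "powersave" policy plus min/max limits and pick frequencies
on their own.  Below is a minimal, hypothetical skeleton of a setpolicy-style
driver; the name, frequency values and callback bodies are illustrative
assumptions and are not taken from intel_pstate or any other code in this
merge.

#include <linux/cpufreq.h>
#include <linux/module.h>

static int sketch_cpu_init(struct cpufreq_policy *policy)
{
	/* Advertise hardware limits (kHz); the values here are made up. */
	policy->cpuinfo.min_freq = 500000;
	policy->cpuinfo.max_freq = 2400000;
	policy->min = policy->cpuinfo.min_freq;
	policy->max = policy->cpuinfo.max_freq;
	return 0;
}

static int sketch_verify(struct cpufreq_policy *policy)
{
	/* Clamp user-requested limits to what the hardware supports. */
	if (policy->min < policy->cpuinfo.min_freq)
		policy->min = policy->cpuinfo.min_freq;
	if (policy->max > policy->cpuinfo.max_freq)
		policy->max = policy->cpuinfo.max_freq;
	return 0;
}

static int sketch_set_policy(struct cpufreq_policy *policy)
{
	/*
	 * Called when the sysfs "scaling governor" is set to
	 * "performance" or "powersave"; the driver then chooses
	 * frequencies within policy->min..policy->max by itself.
	 */
	if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
		/* bias the hardware toward maximum performance */
	} else {
		/* bias the hardware toward power savings */
	}
	return 0;
}

static struct cpufreq_driver sketch_driver = {
	.name		= "setpolicy_sketch",
	.init		= sketch_cpu_init,
	.verify		= sketch_verify,
	.setpolicy	= sketch_set_policy,
};

static int __init sketch_driver_init(void)
{
	return cpufreq_register_driver(&sketch_driver);
}
module_init(sketch_driver_init);
MODULE_LICENSE("GPL");

A target-based driver would instead implement a ->target() or ->target_index()
callback and let the cpufreq governors request specific frequencies.
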
......@@ -193,6 +193,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
(e.g. thinkpad_acpi, sony_acpi, etc.) instead
of the ACPI video.ko driver.
acpi_force_32bit_fadt_addr
force FADT to use 32 bit addresses rather than the
64 bit X_* addresses. Some firmware have broken 64
bit addresses; this forces ACPI to ignore these and
use the older legacy 32 bit addresses.
acpica_no_return_repair [HW, ACPI]
Disable AML predefined validation mechanism
This mechanism can repair the evaluation result to make
......
......@@ -374,8 +374,13 @@ static struct pu_domain imx6q_pu_domain = {
.name = "PU",
.power_off = imx6q_pm_pu_power_off,
.power_on = imx6q_pm_pu_power_on,
.power_off_latency_ns = 25000,
.power_on_latency_ns = 2000000,
.states = {
[0] = {
.power_off_latency_ns = 25000,
.power_on_latency_ns = 2000000,
},
},
.state_count = 1,
},
};
......
......@@ -235,10 +235,10 @@
#define HWP_PACKAGE_LEVEL_REQUEST_BIT (1<<11)
/* IA32_HWP_CAPABILITIES */
#define HWP_HIGHEST_PERF(x) (x & 0xff)
#define HWP_GUARANTEED_PERF(x) ((x & (0xff << 8)) >>8)
#define HWP_MOSTEFFICIENT_PERF(x) ((x & (0xff << 16)) >>16)
#define HWP_LOWEST_PERF(x) ((x & (0xff << 24)) >>24)
#define HWP_HIGHEST_PERF(x) (((x) >> 0) & 0xff)
#define HWP_GUARANTEED_PERF(x) (((x) >> 8) & 0xff)
#define HWP_MOSTEFFICIENT_PERF(x) (((x) >> 16) & 0xff)
#define HWP_LOWEST_PERF(x) (((x) >> 24) & 0xff)
/* IA32_HWP_REQUEST */
#define HWP_MIN_PERF(x) (x & 0xff)
......
......@@ -43,6 +43,7 @@ acpi-y += pci_root.o pci_link.o pci_irq.o
acpi-y += acpi_lpss.o acpi_apd.o
acpi-y += acpi_platform.o
acpi-y += acpi_pnp.o
acpi-$(CONFIG_ARM_AMBA) += acpi_amba.o
acpi-y += int340x_thermal.o
acpi-y += power.o
acpi-y += event.o
......
/*
* ACPI support for platform bus type.
*
* Copyright (C) 2015, Linaro Ltd
* Author: Graeme Gregory <graeme.gregory@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/acpi.h>
#include <linux/amba/bus.h>
#include <linux/clkdev.h>
#include <linux/clk-provider.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "internal.h"
static const struct acpi_device_id amba_id_list[] = {
{"ARMH0061", 0}, /* PL061 GPIO Device */
{"", 0},
};
static void amba_register_dummy_clk(void)
{
static struct clk *amba_dummy_clk;
/* If clock already registered */
if (amba_dummy_clk)
return;
amba_dummy_clk = clk_register_fixed_rate(NULL, "apb_pclk", NULL,
CLK_IS_ROOT, 0);
clk_register_clkdev(amba_dummy_clk, "apb_pclk", NULL);
}
static int amba_handler_attach(struct acpi_device *adev,
const struct acpi_device_id *id)
{
struct amba_device *dev;
struct resource_entry *rentry;
struct list_head resource_list;
bool address_found = false;
int irq_no = 0;
int ret;
/* If the ACPI node already has a physical device attached, skip it. */
if (adev->physical_node_count)
return 0;
dev = amba_device_alloc(dev_name(&adev->dev), 0, 0);
if (!dev) {
dev_err(&adev->dev, "%s(): amba_device_alloc() failed\n",
__func__);
return -ENOMEM;
}
INIT_LIST_HEAD(&resource_list);
ret = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
if (ret < 0)
goto err_free;
list_for_each_entry(rentry, &resource_list, node) {
switch (resource_type(rentry->res)) {
case IORESOURCE_MEM:
if (!address_found) {
dev->res = *rentry->res;
address_found = true;
}
break;
case IORESOURCE_IRQ:
if (irq_no < AMBA_NR_IRQS)
dev->irq[irq_no++] = rentry->res->start;
break;
default:
dev_warn(&adev->dev, "Invalid resource\n");
break;
}
}
acpi_dev_free_resource_list(&resource_list);
/*
* If the ACPI node has a parent and that parent has a physical device
* attached to it, that physical device should be the parent of
* the amba device we are about to create.
*/
if (adev->parent)
dev->dev.parent = acpi_get_first_physical_node(adev->parent);
ACPI_COMPANION_SET(&dev->dev, adev);
ret = amba_device_add(dev, &iomem_resource);
if (ret) {
dev_err(&adev->dev, "%s(): amba_device_add() failed (%d)\n",
__func__, ret);
goto err_free;
}
return 1;
err_free:
amba_device_put(dev);
return ret;
}
static struct acpi_scan_handler amba_handler = {
.ids = amba_id_list,
.attach = amba_handler_attach,
};
void __init acpi_amba_init(void)
{
amba_register_dummy_clk();
acpi_scan_add_handler(&amba_handler);
}
......@@ -143,6 +143,7 @@ static const struct acpi_device_id acpi_apd_device_ids[] = {
/* Generic apd devices */
#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
{ "AMD0010", APD_ADDR(cz_i2c_desc) },
{ "AMDI0010", APD_ADDR(cz_i2c_desc) },
{ "AMD0020", APD_ADDR(cz_uart_desc) },
{ "AMD0030", },
#endif
......
......@@ -43,7 +43,6 @@ static const struct acpi_device_id forbidden_id_list[] = {
struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
{
struct platform_device *pdev = NULL;
struct acpi_device *acpi_parent;
struct platform_device_info pdevinfo;
struct resource_entry *rentry;
struct list_head resource_list;
......@@ -82,22 +81,8 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
* attached to it, that physical device should be the parent of the
* platform device we are about to create.
*/
pdevinfo.parent = NULL;
acpi_parent = adev->parent;
if (acpi_parent) {
struct acpi_device_physical_node *entry;
struct list_head *list;
mutex_lock(&acpi_parent->physical_node_lock);
list = &acpi_parent->physical_node_list;
if (!list_empty(list)) {
entry = list_first_entry(list,
struct acpi_device_physical_node,
node);
pdevinfo.parent = entry->dev;
}
mutex_unlock(&acpi_parent->physical_node_lock);
}
pdevinfo.parent = adev->parent ?
acpi_get_first_physical_node(adev->parent) : NULL;
pdevinfo.name = dev_name(&adev->dev);
pdevinfo.id = -1;
pdevinfo.res = resources;
......
......@@ -514,7 +514,24 @@ static struct acpi_scan_handler processor_handler = {
},
};
static int acpi_processor_container_attach(struct acpi_device *dev,
const struct acpi_device_id *id)
{
return 1;
}
static const struct acpi_device_id processor_container_ids[] = {
{ ACPI_PROCESSOR_CONTAINER_HID, },
{ }
};
static struct acpi_scan_handler processor_container_handler = {
.ids = processor_container_ids,
.attach = acpi_processor_container_attach,
};
void __init acpi_processor_init(void)
{
acpi_scan_add_handler_with_hotplug(&processor_handler, "processor");
acpi_scan_add_handler(&processor_container_handler);
}
......@@ -218,13 +218,6 @@ struct acpi_video_device {
struct thermal_cooling_device *cooling_dev;
};
static const char device_decode[][30] = {
"motherboard VGA device",
"PCI VGA device",
"AGP VGA device",
"UNKNOWN",
};
static void acpi_video_device_notify(acpi_handle handle, u32 event, void *data);
static void acpi_video_device_rebind(struct acpi_video_bus *video);
static void acpi_video_device_bind(struct acpi_video_bus *video,
......
......@@ -165,7 +165,7 @@ ACPI_GLOBAL(u8, acpi_gbl_next_owner_id_offset);
/* Initialization sequencing */
ACPI_INIT_GLOBAL(u8, acpi_gbl_reg_methods_enabled, FALSE);
ACPI_INIT_GLOBAL(u8, acpi_gbl_namespace_initialized, FALSE);
/* Misc */
......
......@@ -85,7 +85,7 @@ union acpi_parse_object;
#define ACPI_MTX_MEMORY 5 /* Debug memory tracking lists */
#define ACPI_MAX_MUTEX 5
#define ACPI_NUM_MUTEX ACPI_MAX_MUTEX+1
#define ACPI_NUM_MUTEX (ACPI_MAX_MUTEX+1)
/* Lock structure for reader/writer interfaces */
......@@ -103,11 +103,11 @@ struct acpi_rw_lock {
#define ACPI_LOCK_HARDWARE 1
#define ACPI_MAX_LOCK 1
#define ACPI_NUM_LOCK ACPI_MAX_LOCK+1
#define ACPI_NUM_LOCK (ACPI_MAX_LOCK+1)
/* This Thread ID means that the mutex is not in use (unlocked) */
#define ACPI_MUTEX_NOT_ACQUIRED (acpi_thread_id) 0
#define ACPI_MUTEX_NOT_ACQUIRED ((acpi_thread_id) 0)
/* This Thread ID means an invalid thread ID */
......
......@@ -88,7 +88,7 @@
*/
acpi_status acpi_ns_initialize_objects(void);
acpi_status acpi_ns_initialize_devices(void);
acpi_status acpi_ns_initialize_devices(u32 flags);
/*
* nsload - Namespace loading
......
......@@ -1125,7 +1125,7 @@ const union acpi_predefined_info acpi_gbl_resource_names[] = {
PACKAGE_INFO(0, 0, 0, 0, 0, 0) /* Table terminator */
};
static const union acpi_predefined_info acpi_gbl_scope_names[] = {
const union acpi_predefined_info acpi_gbl_scope_names[] = {
{{"_GPE", 0, 0}},
{{"_PR_", 0, 0}},
{{"_SB_", 0, 0}},
......
......@@ -348,7 +348,7 @@ void acpi_db_display_table_info(char *table_arg)
} else {
/* If the pointer is null, the table has been unloaded */
ACPI_INFO((AE_INFO, "%4.4s - Table has been unloaded",
ACPI_INFO(("%4.4s - Table has been unloaded",
table_desc->signature.ascii));
}
}
......
......@@ -408,7 +408,7 @@ void acpi_db_dump_pld_buffer(union acpi_object *obj_desc)
new_buffer = acpi_db_encode_pld_buffer(pld_info);
if (!new_buffer) {
return;
goto exit;
}
/* The two bit-packed buffers should match */
......@@ -479,6 +479,7 @@ void acpi_db_dump_pld_buffer(union acpi_object *obj_desc)
pld_info->horizontal_offset);
}
ACPI_FREE(pld_info);
ACPI_FREE(new_buffer);
exit:
ACPI_FREE(pld_info);
}
......@@ -809,8 +809,7 @@ acpi_ds_terminate_control_method(union acpi_operand_object *method_desc,
if (method_desc->method.
info_flags & ACPI_METHOD_SERIALIZED_PENDING) {
if (walk_state) {
ACPI_INFO((AE_INFO,
"Marking method %4.4s as Serialized "
ACPI_INFO(("Marking method %4.4s as Serialized "
"because of AE_ALREADY_EXISTS error",
walk_state->method_node->name.
ascii));
......
......@@ -524,8 +524,7 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
arg = arg->common.next;
}
ACPI_INFO((AE_INFO,
"Actual Package length (%u) is larger than "
ACPI_INFO(("Actual Package length (%u) is larger than "
"NumElements field (%u), truncated",
i, element_count));
} else if (i < element_count) {
......
......@@ -499,8 +499,7 @@ acpi_ev_initialize_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
}
if (gpe_enabled_count) {
ACPI_INFO((AE_INFO,
"Enabled %u GPEs in block %02X to %02X",
ACPI_INFO(("Enabled %u GPEs in block %02X to %02X",
gpe_enabled_count, (u32)gpe_block->block_base_number,
(u32)(gpe_block->block_base_number +
(gpe_block->gpe_count - 1))));
......
......@@ -281,7 +281,7 @@ void acpi_ev_update_gpes(acpi_owner_id table_owner_id)
}
if (walk_info.count) {
ACPI_INFO((AE_INFO, "Enabled %u new GPEs", walk_info.count));
ACPI_INFO(("Enabled %u new GPEs", walk_info.count));
}
(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
......
......@@ -600,7 +600,7 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
if (region_obj2->extra.method_REG == NULL ||
region_obj->region.handler == NULL ||
!acpi_gbl_reg_methods_enabled) {
!acpi_gbl_namespace_initialized) {
return_ACPI_STATUS(AE_OK);
}
......
......@@ -252,7 +252,7 @@ acpi_ex_load_table_op(struct acpi_walk_state *walk_state,
status = acpi_get_table_by_index(table_index, &table);
if (ACPI_SUCCESS(status)) {
ACPI_INFO((AE_INFO, "Dynamic OEM Table Load:"));
ACPI_INFO(("Dynamic OEM Table Load:"));
acpi_tb_print_table_header(0, table);
}
......@@ -472,7 +472,7 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
/* Install the new table into the local data structures */
ACPI_INFO((AE_INFO, "Dynamic OEM Table Load:"));
ACPI_INFO(("Dynamic OEM Table Load:"));
(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
status = acpi_tb_install_standard_table(ACPI_PTR_TO_PHYSADDR(table),
......
......@@ -123,8 +123,10 @@ acpi_status acpi_ex_opcode_3A_0T_0R(struct acpi_walk_state *walk_state)
* op is intended for use by disassemblers in order to properly
* disassemble control method invocations. The opcode or group of
* opcodes should be surrounded by an "if (0)" clause to ensure that
* AML interpreters never see the opcode.
* AML interpreters never see the opcode. Thus, something is
* wrong if an external opcode ever gets here.
*/
ACPI_ERROR((AE_INFO, "Executed External Op"));
status = AE_OK;
goto cleanup;
......
......@@ -378,8 +378,7 @@ void acpi_ns_exec_module_code_list(void)
acpi_ut_remove_reference(prev);
}
ACPI_INFO((AE_INFO,
"Executed %u blocks of module-level executable AML code",
ACPI_INFO(("Executed %u blocks of module-level executable AML code",
method_count));
ACPI_FREE(info);
......
......@@ -46,6 +46,7 @@
#include "acnamesp.h"
#include "acdispat.h"
#include "acinterp.h"
#include "acevents.h"
#define _COMPONENT ACPI_NAMESPACE
ACPI_MODULE_NAME("nsinit")
......@@ -83,6 +84,8 @@ acpi_status acpi_ns_initialize_objects(void)
ACPI_FUNCTION_TRACE(ns_initialize_objects);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"[Init] Completing Initialization of ACPI Objects\n"));
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"**** Starting initialization of namespace objects ****\n"));
ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
......@@ -133,82 +136,108 @@ acpi_status acpi_ns_initialize_objects(void)
*
******************************************************************************/
acpi_status acpi_ns_initialize_devices(void)
acpi_status acpi_ns_initialize_devices(u32 flags)
{
acpi_status status;
acpi_status status = AE_OK;
struct acpi_device_walk_info info;
ACPI_FUNCTION_TRACE(ns_initialize_devices);
/* Init counters */
if (!(flags & ACPI_NO_DEVICE_INIT)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"[Init] Initializing ACPI Devices\n"));
info.device_count = 0;
info.num_STA = 0;
info.num_INI = 0;
/* Init counters */
ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
"Initializing Device/Processor/Thermal objects "
"and executing _INI/_STA methods:\n"));
info.device_count = 0;
info.num_STA = 0;
info.num_INI = 0;
/* Tree analysis: find all subtrees that contain _INI methods */
ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
"Initializing Device/Processor/Thermal objects "
"and executing _INI/_STA methods:\n"));
status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, FALSE,
acpi_ns_find_ini_methods, NULL, &info,
NULL);
if (ACPI_FAILURE(status)) {
goto error_exit;
}
/* Tree analysis: find all subtrees that contain _INI methods */
status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, FALSE,
acpi_ns_find_ini_methods, NULL,
&info, NULL);
if (ACPI_FAILURE(status)) {
goto error_exit;
}
/* Allocate the evaluation information block */
/* Allocate the evaluation information block */
info.evaluate_info =
ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_evaluate_info));
if (!info.evaluate_info) {
status = AE_NO_MEMORY;
goto error_exit;
}
info.evaluate_info =
ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_evaluate_info));
if (!info.evaluate_info) {
status = AE_NO_MEMORY;
goto error_exit;
/*
* Execute the "global" _INI method that may appear at the root.
* This support is provided for Windows compatibility (Vista+) and
* is not part of the ACPI specification.
*/
info.evaluate_info->prefix_node = acpi_gbl_root_node;
info.evaluate_info->relative_pathname = METHOD_NAME__INI;
info.evaluate_info->parameters = NULL;
info.evaluate_info->flags = ACPI_IGNORE_RETURN_VALUE;
status = acpi_ns_evaluate(info.evaluate_info);
if (ACPI_SUCCESS(status)) {
info.num_INI++;
}
}
/*
* Execute the "global" _INI method that may appear at the root. This
* support is provided for Windows compatibility (Vista+) and is not
* part of the ACPI specification.
* Run all _REG methods
*
* Note: Any objects accessed by the _REG methods will be automatically
* initialized, even if they contain executable AML (see the call to
* acpi_ns_initialize_objects below).
*/
info.evaluate_info->prefix_node = acpi_gbl_root_node;
info.evaluate_info->relative_pathname = METHOD_NAME__INI;
info.evaluate_info->parameters = NULL;
info.evaluate_info->flags = ACPI_IGNORE_RETURN_VALUE;
if (!(flags & ACPI_NO_ADDRESS_SPACE_INIT)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"[Init] Executing _REG OpRegion methods\n"));
status = acpi_ns_evaluate(info.evaluate_info);
if (ACPI_SUCCESS(status)) {
info.num_INI++;
status = acpi_ev_initialize_op_regions();
if (ACPI_FAILURE(status)) {
goto error_exit;
}
}
/* Walk namespace to execute all _INIs on present devices */
if (!(flags & ACPI_NO_DEVICE_INIT)) {
status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, FALSE,
acpi_ns_init_one_device, NULL, &info,
NULL);
/* Walk namespace to execute all _INIs on present devices */
/*
* Any _OSI requests should be completed by now. If the BIOS has
* requested any Windows OSI strings, we will always truncate
* I/O addresses to 16 bits -- for Windows compatibility.
*/
if (acpi_gbl_osi_data >= ACPI_OSI_WIN_2000) {
acpi_gbl_truncate_io_addresses = TRUE;
}
status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, FALSE,
acpi_ns_init_one_device, NULL,
&info, NULL);
ACPI_FREE(info.evaluate_info);
if (ACPI_FAILURE(status)) {
goto error_exit;
}
/*
* Any _OSI requests should be completed by now. If the BIOS has
* requested any Windows OSI strings, we will always truncate
* I/O addresses to 16 bits -- for Windows compatibility.
*/
if (acpi_gbl_osi_data >= ACPI_OSI_WIN_2000) {
acpi_gbl_truncate_io_addresses = TRUE;
}
ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
" Executed %u _INI methods requiring %u _STA executions "
"(examined %u objects)\n",
info.num_INI, info.num_STA, info.device_count));
ACPI_FREE(info.evaluate_info);
if (ACPI_FAILURE(status)) {
goto error_exit;
}
ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
" Executed %u _INI methods requiring %u _STA executions "
"(examined %u objects)\n",
info.num_INI, info.num_STA,
info.device_count));
}
return_ACPI_STATUS(status);
......
......@@ -267,8 +267,7 @@ acpi_tb_install_standard_table(acpi_physical_address address,
if (!reload &&
acpi_gbl_disable_ssdt_table_install &&
ACPI_COMPARE_NAME(&new_table_desc.signature, ACPI_SIG_SSDT)) {
ACPI_INFO((AE_INFO,
"Ignoring installation of %4.4s at %8.8X%8.8X",
ACPI_INFO(("Ignoring installation of %4.4s at %8.8X%8.8X",
new_table_desc.signature.ascii,
ACPI_FORMAT_UINT64(address)));
goto release_and_exit;
......@@ -432,7 +431,7 @@ void acpi_tb_override_table(struct acpi_table_desc *old_table_desc)
return;
}
ACPI_INFO((AE_INFO, "%4.4s 0x%8.8X%8.8X"
ACPI_INFO(("%4.4s 0x%8.8X%8.8X"
" %s table override, new table: 0x%8.8X%8.8X",
old_table_desc->signature.ascii,
ACPI_FORMAT_UINT64(old_table_desc->address),
......
......@@ -132,7 +132,7 @@ acpi_tb_print_table_header(acpi_physical_address address,
/* FACS only has signature and length fields */
ACPI_INFO((AE_INFO, "%-4.4s 0x%8.8X%8.8X %06X",
ACPI_INFO(("%-4.4s 0x%8.8X%8.8X %06X",
header->signature, ACPI_FORMAT_UINT64(address),
header->length));
} else if (ACPI_VALIDATE_RSDP_SIG(header->signature)) {
......@@ -144,7 +144,7 @@ acpi_tb_print_table_header(acpi_physical_address address,
ACPI_OEM_ID_SIZE);
acpi_tb_fix_string(local_header.oem_id, ACPI_OEM_ID_SIZE);
ACPI_INFO((AE_INFO, "RSDP 0x%8.8X%8.8X %06X (v%.2d %-6.6s)",
ACPI_INFO(("RSDP 0x%8.8X%8.8X %06X (v%.2d %-6.6s)",
ACPI_FORMAT_UINT64(address),
(ACPI_CAST_PTR(struct acpi_table_rsdp, header)->
revision >
......@@ -158,8 +158,7 @@ acpi_tb_print_table_header(acpi_physical_address address,
acpi_tb_cleanup_table_header(&local_header, header);
ACPI_INFO((AE_INFO,
"%-4.4s 0x%8.8X%8.8X"
ACPI_INFO(("%-4.4s 0x%8.8X%8.8X"
" %06X (v%.2d %-6.6s %-8.8s %08X %-4.4s %08X)",
local_header.signature, ACPI_FORMAT_UINT64(address),
local_header.length, local_header.revision,
......
......@@ -174,9 +174,7 @@ struct acpi_table_header *acpi_tb_copy_dsdt(u32 table_index)
ACPI_TABLE_ORIGIN_INTERNAL_VIRTUAL,
new_table);
ACPI_INFO((AE_INFO,
"Forced DSDT copy: length 0x%05X copied locally, original unmapped",
new_table->length));
ACPI_INFO(("Forced DSDT copy: length 0x%05X copied locally, original unmapped", new_table->length));
return (new_table);
}
......
......@@ -47,6 +47,7 @@
#include "accommon.h"
#include "acnamesp.h"
#include "actables.h"
#include "acevents.h"
#define _COMPONENT ACPI_TABLES
ACPI_MODULE_NAME("tbxfload")
......@@ -68,6 +69,25 @@ acpi_status __init acpi_load_tables(void)
ACPI_FUNCTION_TRACE(acpi_load_tables);
/*
* Install the default operation region handlers. These are the
* handlers that are defined by the ACPI specification to be
* "always accessible" -- namely, system_memory, system_IO, and
* PCI_Config. This also means that no _REG methods need to be
* run for these address spaces. We need to have these handlers
* installed before any AML code can be executed, especially any
* module-level code (11/2015).
* Note that we allow OSPMs to install their own region handlers
* between acpi_initialize_subsystem() and acpi_load_tables() to use
* their customized default region handlers.
*/
status = acpi_ev_install_region_handlers();
if (ACPI_FAILURE(status) && status != AE_ALREADY_EXISTS) {
ACPI_EXCEPTION((AE_INFO, status,
"During Region initialization"));
return_ACPI_STATUS(status);
}
/* Load the namespace from the tables */
status = acpi_tb_load_namespace();
......@@ -83,6 +103,20 @@ acpi_status __init acpi_load_tables(void)
"While loading namespace from ACPI tables"));
}
if (!acpi_gbl_group_module_level_code) {
/*
* Initialize the objects that remain uninitialized. This
* runs the executable AML that may be part of the
* declaration of these objects:
* operation_regions, buffer_fields, Buffers, and Packages.
*/
status = acpi_ns_initialize_objects();
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
}
acpi_gbl_namespace_initialized = TRUE;
return_ACPI_STATUS(status);
}
......@@ -206,9 +240,7 @@ acpi_status acpi_tb_load_namespace(void)
}
if (!tables_failed) {
ACPI_INFO((AE_INFO,
"%u ACPI AML tables successfully acquired and loaded\n",
tables_loaded));
ACPI_INFO(("%u ACPI AML tables successfully acquired and loaded\n", tables_loaded));
} else {
ACPI_ERROR((AE_INFO,
"%u table load failures, %u successful",
......@@ -301,7 +333,7 @@ acpi_status acpi_load_table(struct acpi_table_header *table)
/* Install the table and load it into the namespace */
ACPI_INFO((AE_INFO, "Host-directed Dynamic ACPI Table Load:"));
ACPI_INFO(("Host-directed Dynamic ACPI Table Load:"));
(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
status = acpi_tb_install_standard_table(ACPI_PTR_TO_PHYSADDR(table),
......
......@@ -245,7 +245,7 @@ void *acpi_os_acquire_object(struct acpi_memory_list *cache)
acpi_status status;
void *object;
ACPI_FUNCTION_NAME(os_acquire_object);
ACPI_FUNCTION_TRACE(os_acquire_object);
if (!cache) {
return_PTR(NULL);
......
......@@ -140,6 +140,67 @@ int acpi_ut_stricmp(char *string1, char *string2)
return (c1 - c2);
}
#if defined (ACPI_DEBUGGER) || defined (ACPI_APPLICATION)
/*******************************************************************************
*
* FUNCTION: acpi_ut_safe_strcpy, acpi_ut_safe_strcat, acpi_ut_safe_strncat
*
* PARAMETERS: Adds a "DestSize" parameter to each of the standard string
* functions. This is the size of the Destination buffer.
*
* RETURN: TRUE if the operation would overflow the destination buffer.
*
* DESCRIPTION: Safe versions of standard Clib string functions. Ensure that
* the result of the operation will not overflow the output string
* buffer.
*
* NOTE: These functions are typically only helpful for processing
* user input and command lines. For most ACPICA code, the
* required buffer length is precisely calculated before buffer
* allocation, so the use of these functions is unnecessary.
*
******************************************************************************/
u8 acpi_ut_safe_strcpy(char *dest, acpi_size dest_size, char *source)
{
if (strlen(source) >= dest_size) {
return (TRUE);
}
strcpy(dest, source);
return (FALSE);
}
u8 acpi_ut_safe_strcat(char *dest, acpi_size dest_size, char *source)
{
if ((strlen(dest) + strlen(source)) >= dest_size) {
return (TRUE);
}
strcat(dest, source);
return (FALSE);
}
u8
acpi_ut_safe_strncat(char *dest,
acpi_size dest_size,
char *source, acpi_size max_transfer_length)
{
acpi_size actual_transfer_length;
actual_transfer_length = ACPI_MIN(max_transfer_length, strlen(source));
if ((strlen(dest) + actual_transfer_length) >= dest_size) {
return (TRUE);
}
strncat(dest, source, max_transfer_length);
return (FALSE);
}
#endif
/*******************************************************************************
*
* FUNCTION: acpi_ut_strtoul64
......@@ -155,7 +216,15 @@ int acpi_ut_stricmp(char *string1, char *string2)
* 32-bit or 64-bit conversion, depending on the current mode
* of the interpreter.
*
* NOTE: Does not support Octal strings, not needed.
* NOTES: acpi_gbl_integer_byte_width should be set to the proper width.
* For the core ACPICA code, this width depends on the DSDT
* version. For iASL, the default byte width is always 8.
*
* Does not support Octal strings, not needed at this time.
*
* There is an earlier version of the function after this one,
* below. It is slightly different than this one, and the two
* may eventually may need to be merged. (01/2016).
*
******************************************************************************/
......@@ -171,7 +240,7 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
u8 sign_of0x = 0;
u8 term = 0;
ACPI_FUNCTION_TRACE_STR(ut_stroul64, string);
ACPI_FUNCTION_TRACE_STR(ut_strtoul64, string);
switch (base) {
case ACPI_ANY_BASE:
......@@ -318,63 +387,162 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
}
}
#if defined (ACPI_DEBUGGER) || defined (ACPI_APPLICATION)
#ifdef _OBSOLETE_FUNCTIONS
/* TBD: use version in ACPICA main code base? */
/* DONE: 01/2016 */
/*******************************************************************************
*
* FUNCTION: acpi_ut_safe_strcpy, acpi_ut_safe_strcat, acpi_ut_safe_strncat
* FUNCTION: strtoul64
*
* PARAMETERS: Adds a "DestSize" parameter to each of the standard string
* functions. This is the size of the Destination buffer.
* PARAMETERS: string - Null terminated string
* terminater - Where a pointer to the terminating byte
* is returned
* base - Radix of the string
*
* RETURN: TRUE if the operation would overflow the destination buffer.
* RETURN: Converted value
*
* DESCRIPTION: Safe versions of standard Clib string functions. Ensure that
* the result of the operation will not overflow the output string
* buffer.
*
* NOTE: These functions are typically only helpful for processing
* user input and command lines. For most ACPICA code, the
* required buffer length is precisely calculated before buffer
* allocation, so the use of these functions is unnecessary.
* DESCRIPTION: Convert a string into an unsigned value.
*
******************************************************************************/
u8 acpi_ut_safe_strcpy(char *dest, acpi_size dest_size, char *source)
acpi_status strtoul64(char *string, u32 base, u64 *ret_integer)
{
u32 index;
u32 sign;
u64 return_value = 0;
acpi_status status = AE_OK;
if (strlen(source) >= dest_size) {
return (TRUE);
*ret_integer = 0;
switch (base) {
case 0:
case 8:
case 10:
case 16:
break;
default:
/*
* The specified Base parameter is not in the domain of
* this function:
*/
return (AE_BAD_PARAMETER);
}
strcpy(dest, source);
return (FALSE);
}
/* Skip over any white space in the buffer: */
u8 acpi_ut_safe_strcat(char *dest, acpi_size dest_size, char *source)
{
while (isspace((int)*string) || *string == '\t') {
++string;
}
if ((strlen(dest) + strlen(source)) >= dest_size) {
return (TRUE);
/*
* The buffer may contain an optional plus or minus sign.
* If it does, then skip over it but remember what is was:
*/
if (*string == '-') {
sign = ACPI_SIGN_NEGATIVE;
++string;
} else if (*string == '+') {
++string;
sign = ACPI_SIGN_POSITIVE;
} else {
sign = ACPI_SIGN_POSITIVE;
}
strcat(dest, source);
return (FALSE);
}
/*
* If the input parameter Base is zero, then we need to
* determine if it is octal, decimal, or hexadecimal:
*/
if (base == 0) {
if (*string == '0') {
if (tolower((int)*(++string)) == 'x') {
base = 16;
++string;
} else {
base = 8;
}
} else {
base = 10;
}
}
u8
acpi_ut_safe_strncat(char *dest,
acpi_size dest_size,
char *source, acpi_size max_transfer_length)
{
acpi_size actual_transfer_length;
/*
* For octal and hexadecimal bases, skip over the leading
* 0 or 0x, if they are present.
*/
if (base == 8 && *string == '0') {
string++;
}
actual_transfer_length = ACPI_MIN(max_transfer_length, strlen(source));
if (base == 16 && *string == '0' && tolower((int)*(++string)) == 'x') {
string++;
}
if ((strlen(dest) + actual_transfer_length) >= dest_size) {
return (TRUE);
/* Main loop: convert the string to an unsigned long */
while (*string) {
if (isdigit((int)*string)) {
index = ((u8)*string) - '0';
} else {
index = (u8)toupper((int)*string);
if (isupper((int)index)) {
index = index - 'A' + 10;
} else {
goto error_exit;
}
}
if (index >= base) {
goto error_exit;
}
/* Check to see if value is out of range: */
if (return_value > ((ACPI_UINT64_MAX - (u64)index) / (u64)base)) {
goto error_exit;
} else {
return_value *= base;
return_value += index;
}
++string;
}
strncat(dest, source, max_transfer_length);
return (FALSE);
/* If a minus sign was present, then "the conversion is negated": */
if (sign == ACPI_SIGN_NEGATIVE) {
return_value = (ACPI_UINT32_MAX - return_value) + 1;
}
*ret_integer = return_value;
return (status);
error_exit:
switch (base) {
case 8:
status = AE_BAD_OCTAL_CONSTANT;
break;
case 10:
status = AE_BAD_DECIMAL_CONSTANT;
break;
case 16:
status = AE_BAD_HEX_CONSTANT;
break;
default:
/* Base validated above */
break;
}
return (status);
}
#endif
......@@ -712,7 +712,7 @@ void acpi_ut_dump_allocations(u32 component, const char *module)
/* Print summary */
if (!num_outstanding) {
ACPI_INFO((AE_INFO, "No outstanding allocations"));
ACPI_INFO(("No outstanding allocations"));
} else {
ACPI_ERROR((AE_INFO, "%u(0x%X) Outstanding allocations",
num_outstanding, num_outstanding));
......
......@@ -175,8 +175,7 @@ ACPI_EXPORT_SYMBOL(acpi_warning)
* TBD: module_name and line_number args are not needed, should be removed.
*
******************************************************************************/
void ACPI_INTERNAL_VAR_XFACE
acpi_info(const char *module_name, u32 line_number, const char *format, ...)
void ACPI_INTERNAL_VAR_XFACE acpi_info(const char *format, ...)
{
va_list arg_list;
......
......@@ -154,21 +154,6 @@ acpi_status __init acpi_enable_subsystem(u32 flags)
*/
acpi_gbl_early_initialization = FALSE;
/*
* Install the default operation region handlers. These are the
* handlers that are defined by the ACPI specification to be
* "always accessible" -- namely, system_memory, system_IO, and
* PCI_Config. This also means that no _REG methods need to be
* run for these address spaces. We need to have these handlers
* installed before any AML code can be executed, especially any
* module-level code (11/2015).
*/
status = acpi_ev_install_region_handlers();
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"During Region initialization"));
return_ACPI_STATUS(status);
}
#if (!ACPI_REDUCED_HARDWARE)
/* Enable ACPI mode */
......@@ -260,23 +245,6 @@ acpi_status __init acpi_initialize_objects(u32 flags)
ACPI_FUNCTION_TRACE(acpi_initialize_objects);
/*
* Run all _REG methods
*
* Note: Any objects accessed by the _REG methods will be automatically
* initialized, even if they contain executable AML (see the call to
* acpi_ns_initialize_objects below).
*/
acpi_gbl_reg_methods_enabled = TRUE;
if (!(flags & ACPI_NO_ADDRESS_SPACE_INIT)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"[Init] Executing _REG OpRegion methods\n"));
status = acpi_ev_initialize_op_regions();
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
}
#ifdef ACPI_EXEC_APP
/*
* This call implements the "initialization file" option for acpi_exec.
......@@ -299,32 +267,27 @@ acpi_status __init acpi_initialize_objects(u32 flags)
*/
if (acpi_gbl_group_module_level_code) {
acpi_ns_exec_module_code_list();
}
/*
* Initialize the objects that remain uninitialized. This runs the
* executable AML that may be part of the declaration of these objects:
* operation_regions, buffer_fields, Buffers, and Packages.
*/
if (!(flags & ACPI_NO_OBJECT_INIT)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"[Init] Completing Initialization of ACPI Objects\n"));
status = acpi_ns_initialize_objects();
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
/*
* Initialize the objects that remain uninitialized. This
* runs the executable AML that may be part of the
* declaration of these objects:
* operation_regions, buffer_fields, Buffers, and Packages.
*/
if (!(flags & ACPI_NO_OBJECT_INIT)) {
status = acpi_ns_initialize_objects();
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
}
}
/*
* Initialize all device objects in the namespace. This runs the device
* _STA and _INI methods.
* Initialize all device/region objects in the namespace. This runs
* the device _STA and _INI methods and region _REG methods.
*/
if (!(flags & ACPI_NO_DEVICE_INIT)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"[Init] Initializing ACPI Devices\n"));
status = acpi_ns_initialize_devices();
if (!(flags & (ACPI_NO_DEVICE_INIT | ACPI_NO_ADDRESS_SPACE_INIT))) {
status = acpi_ns_initialize_devices(flags);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
......
......@@ -536,7 +536,8 @@ int apei_resources_request(struct apei_resources *resources,
goto err_unmap_ioport;
}
return 0;
goto arch_res_fini;
err_unmap_ioport:
list_for_each_entry(res, &resources->ioport, list) {
if (res == res_bak)
......@@ -551,7 +552,8 @@ int apei_resources_request(struct apei_resources *resources,
release_mem_region(res->start, res->end - res->start);
}
arch_res_fini:
apei_resources_fini(&arch_res);
if (arch_apei_filter_addr)
apei_resources_fini(&arch_res);
nvs_res_fini:
apei_resources_fini(&nvs_resources);
return rc;
......
......@@ -1207,6 +1207,9 @@ static int __init erst_init(void)
"Failed to allocate %lld bytes for persistent store error log.\n",
erst_erange.size);
/* Cleanup ERST Resources */
apei_resources_fini(&erst_resources);
return 0;
err_release_erange:
......
......@@ -26,7 +26,7 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/io.h>
......@@ -79,6 +79,11 @@
((struct acpi_hest_generic_status *) \
((struct ghes_estatus_node *)(estatus_node) + 1))
/*
* This driver isn't really modular, however for the time being,
* continuing to use module_param is the easiest way to remain
* compatible with existing boot arg use cases.
*/
bool ghes_disable;
module_param_named(disable, ghes_disable, bool, 0);
......@@ -1148,18 +1153,4 @@ static int __init ghes_init(void)
err:
return rc;
}
static void __exit ghes_exit(void)
{
platform_driver_unregister(&ghes_platform_driver);
ghes_estatus_pool_exit();
ghes_ioremap_exit();
}
module_init(ghes_init);
module_exit(ghes_exit);
MODULE_AUTHOR("Huang Ying");
MODULE_DESCRIPTION("APEI Generic Hardware Error Source support");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:GHES");
device_initcall(ghes_init);
/*
* BGRT boot graphic support
* Authors: Matthew Garrett, Josh Triplett <josh@joshtriplett.org>
* Copyright 2012 Red Hat, Inc <mjg@redhat.com>
* Copyright 2012 Intel Corporation
*
......@@ -8,7 +10,6 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/sysfs.h>
......@@ -103,9 +104,4 @@ static int __init bgrt_init(void)
kobject_put(bgrt_kobj);
return ret;
}
module_init(bgrt_init);
MODULE_AUTHOR("Matthew Garrett, Josh Triplett <josh@joshtriplett.org>");
MODULE_DESCRIPTION("BGRT boot graphic support");
MODULE_LICENSE("GPL");
device_initcall(bgrt_init);
......@@ -479,24 +479,38 @@ static void acpi_device_remove_notify_handler(struct acpi_device *device)
Device Matching
-------------------------------------------------------------------------- */
static struct acpi_device *acpi_primary_dev_companion(struct acpi_device *adev,
const struct device *dev)
/**
* acpi_get_first_physical_node - Get first physical node of an ACPI device
* @adev: ACPI device in question
*
* Return: First physical node of ACPI device @adev
*/
struct device *acpi_get_first_physical_node(struct acpi_device *adev)
{
struct mutex *physical_node_lock = &adev->physical_node_lock;
struct device *phys_dev;
mutex_lock(physical_node_lock);
if (list_empty(&adev->physical_node_list)) {
adev = NULL;
phys_dev = NULL;
} else {
const struct acpi_device_physical_node *node;
node = list_first_entry(&adev->physical_node_list,
struct acpi_device_physical_node, node);
if (node->dev != dev)
adev = NULL;
phys_dev = node->dev;
}
mutex_unlock(physical_node_lock);
return adev;
return phys_dev;
}
static struct acpi_device *acpi_primary_dev_companion(struct acpi_device *adev,
const struct device *dev)
{
const struct device *phys_dev = acpi_get_first_physical_node(adev);
return phys_dev && phys_dev == dev ? adev : NULL;
}
/**
......
......@@ -39,6 +39,7 @@
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/ktime.h>
#include <acpi/cppc_acpi.h>
/*
......@@ -63,58 +64,140 @@ static struct mbox_chan *pcc_channel;
static void __iomem *pcc_comm_addr;
static u64 comm_base_addr;
static int pcc_subspace_idx = -1;
static u16 pcc_cmd_delay;
static bool pcc_channel_acquired;
static ktime_t deadline;
static unsigned int pcc_mpar, pcc_mrtt;
/* pcc mapped address + header size + offset within PCC subspace */
#define GET_PCC_VADDR(offs) (pcc_comm_addr + 0x8 + (offs))
/*
* Arbitrary Retries in case the remote processor is slow to respond
* to PCC commands.
* to PCC commands. Keeping it high enough to cover emulators where
* the processors run painfully slow.
*/
#define NUM_RETRIES 500
static int check_pcc_chan(void)
{
int ret = -EIO;
struct acpi_pcct_shared_memory __iomem *generic_comm_base = pcc_comm_addr;
ktime_t next_deadline = ktime_add(ktime_get(), deadline);
/* Retry in case the remote processor was too slow to catch up. */
while (!ktime_after(ktime_get(), next_deadline)) {
/*
* Per spec, prior to boot the PCC space wil be initialized by
* platform and should have set the command completion bit when
* PCC can be used by OSPM
*/
if (readw_relaxed(&generic_comm_base->status) & PCC_CMD_COMPLETE) {
ret = 0;
break;
}
/*
* Reducing the bus traffic in case this loop takes longer than
* a few retries.
*/
udelay(3);
}
return ret;
}
static int send_pcc_cmd(u16 cmd)
{
int retries, result = -EIO;
struct acpi_pcct_hw_reduced *pcct_ss = pcc_channel->con_priv;
int ret = -EIO;
struct acpi_pcct_shared_memory *generic_comm_base =
(struct acpi_pcct_shared_memory *) pcc_comm_addr;
u32 cmd_latency = pcct_ss->latency;
static ktime_t last_cmd_cmpl_time, last_mpar_reset;
static int mpar_count;
unsigned int time_delta;
/* Min time OS should wait before sending next command. */
udelay(pcc_cmd_delay);
/*
* For CMD_WRITE we know for a fact the caller should have checked
* the channel before writing to PCC space
*/
if (cmd == CMD_READ) {
ret = check_pcc_chan();
if (ret)
return ret;
}
/*
* Handle the Minimum Request Turnaround Time(MRTT)
* "The minimum amount of time that OSPM must wait after the completion
* of a command before issuing the next command, in microseconds"
*/
if (pcc_mrtt) {
time_delta = ktime_us_delta(ktime_get(), last_cmd_cmpl_time);
if (pcc_mrtt > time_delta)
udelay(pcc_mrtt - time_delta);
}
/*
* Handle the non-zero Maximum Periodic Access Rate(MPAR)
* "The maximum number of periodic requests that the subspace channel can
* support, reported in commands per minute. 0 indicates no limitation."
*
* This parameter should be ideally zero or large enough so that it can
* handle maximum number of requests that all the cores in the system can
* collectively generate. If it is not, we will follow the spec and just
* not send the request to the platform after hitting the MPAR limit in
* any 60s window
*/
if (pcc_mpar) {
if (mpar_count == 0) {
time_delta = ktime_ms_delta(ktime_get(), last_mpar_reset);
if (time_delta < 60 * MSEC_PER_SEC) {
pr_debug("PCC cmd not sent due to MPAR limit");
return -EIO;
}
last_mpar_reset = ktime_get();
mpar_count = pcc_mpar;
}
mpar_count--;
}
/* Write to the shared comm region. */
writew(cmd, &generic_comm_base->command);
writew_relaxed(cmd, &generic_comm_base->command);
/* Flip CMD COMPLETE bit */
writew(0, &generic_comm_base->status);
writew_relaxed(0, &generic_comm_base->status);
/* Ring doorbell */
result = mbox_send_message(pcc_channel, &cmd);
if (result < 0) {
ret = mbox_send_message(pcc_channel, &cmd);
if (ret < 0) {
pr_err("Err sending PCC mbox message. cmd:%d, ret:%d\n",
cmd, result);
return result;
cmd, ret);
return ret;
}
/* Wait for a nominal time to let platform process command. */
udelay(cmd_latency);
/* Retry in case the remote processor was too slow to catch up. */
for (retries = NUM_RETRIES; retries > 0; retries--) {
if (readw_relaxed(&generic_comm_base->status) & PCC_CMD_COMPLETE) {
result = 0;
break;
}
/*
* For READs we need to ensure the cmd completed to ensure
* the ensuing read()s can proceed. For WRITEs we dont care
* because the actual write()s are done before coming here
* and the next READ or WRITE will check if the channel
* is busy/free at the entry of this call.
*
* If Minimum Request Turnaround Time is non-zero, we need
* to record the completion time of both READ and WRITE
* command for proper handling of MRTT, so we need to check
* for pcc_mrtt in addition to CMD_READ
*/
if (cmd == CMD_READ || pcc_mrtt) {
ret = check_pcc_chan();
if (pcc_mrtt)
last_cmd_cmpl_time = ktime_get();
}
mbox_client_txdone(pcc_channel, result);
return result;
mbox_client_txdone(pcc_channel, ret);
return ret;
}
static void cppc_chan_tx_done(struct mbox_client *cl, void *msg, int ret)
{
if (ret)
if (ret < 0)
pr_debug("TX did not complete: CMD sent:%x, ret:%d\n",
*(u16 *)msg, ret);
else
......@@ -306,6 +389,7 @@ static int register_pcc_channel(int pcc_subspace_idx)
{
struct acpi_pcct_hw_reduced *cppc_ss;
unsigned int len;
u64 usecs_lat;
if (pcc_subspace_idx >= 0) {
pcc_channel = pcc_mbox_request_channel(&cppc_mbox_cl,
......@@ -335,7 +419,16 @@ static int register_pcc_channel(int pcc_subspace_idx)
*/
comm_base_addr = cppc_ss->base_address;
len = cppc_ss->length;
pcc_cmd_delay = cppc_ss->min_turnaround_time;
/*
* cppc_ss->latency is just a Nominal value. In reality
* the remote processor could be much slower to reply.
* So add an arbitrary amount of wait on top of Nominal.
*/
usecs_lat = NUM_RETRIES * cppc_ss->latency;
deadline = ns_to_ktime(usecs_lat * NSEC_PER_USEC);
pcc_mrtt = cppc_ss->min_turnaround_time;
pcc_mpar = cppc_ss->max_access_rate;
pcc_comm_addr = acpi_os_ioremap(comm_base_addr, len);
if (!pcc_comm_addr) {
......@@ -546,29 +639,74 @@ void acpi_cppc_processor_exit(struct acpi_processor *pr)
}
EXPORT_SYMBOL_GPL(acpi_cppc_processor_exit);
static u64 get_phys_addr(struct cpc_reg *reg)
{
/* PCC communication addr space begins at byte offset 0x8. */
if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM)
return (u64)comm_base_addr + 0x8 + reg->address;
else
return reg->address;
}
/*
* Since cpc_read and cpc_write are called while holding pcc_lock, it should be
* as fast as possible. We have already mapped the PCC subspace during init, so
* we can directly write to it.
*/
static void cpc_read(struct cpc_reg *reg, u64 *val)
static int cpc_read(struct cpc_reg *reg, u64 *val)
{
u64 addr = get_phys_addr(reg);
int ret_val = 0;
acpi_os_read_memory((acpi_physical_address)addr,
val, reg->bit_width);
*val = 0;
if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM) {
void __iomem *vaddr = GET_PCC_VADDR(reg->address);
switch (reg->bit_width) {
case 8:
*val = readb_relaxed(vaddr);
break;
case 16:
*val = readw_relaxed(vaddr);
break;
case 32:
*val = readl_relaxed(vaddr);
break;
case 64:
*val = readq_relaxed(vaddr);
break;
default:
pr_debug("Error: Cannot read %u bit width from PCC\n",
reg->bit_width);
ret_val = -EFAULT;
}
} else
ret_val = acpi_os_read_memory((acpi_physical_address)reg->address,
val, reg->bit_width);
return ret_val;
}
static void cpc_write(struct cpc_reg *reg, u64 val)
static int cpc_write(struct cpc_reg *reg, u64 val)
{
u64 addr = get_phys_addr(reg);
int ret_val = 0;
acpi_os_write_memory((acpi_physical_address)addr,
val, reg->bit_width);
if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM) {
void __iomem *vaddr = GET_PCC_VADDR(reg->address);
switch (reg->bit_width) {
case 8:
writeb_relaxed(val, vaddr);
break;
case 16:
writew_relaxed(val, vaddr);
break;
case 32:
writel_relaxed(val, vaddr);
break;
case 64:
writeq_relaxed(val, vaddr);
break;
default:
pr_debug("Error: Cannot write %u bit width to PCC\n",
reg->bit_width);
ret_val = -EFAULT;
break;
}
} else
ret_val = acpi_os_write_memory((acpi_physical_address)reg->address,
val, reg->bit_width);
return ret_val;
}
/**
......@@ -604,7 +742,7 @@ int cppc_get_perf_caps(int cpunum, struct cppc_perf_caps *perf_caps)
(ref_perf->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) ||
(nom_perf->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM)) {
/* Ring doorbell once to update PCC subspace */
if (send_pcc_cmd(CMD_READ)) {
if (send_pcc_cmd(CMD_READ) < 0) {
ret = -EIO;
goto out_err;
}
......@@ -662,7 +800,7 @@ int cppc_get_perf_ctrs(int cpunum, struct cppc_perf_fb_ctrs *perf_fb_ctrs)
if ((delivered_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) ||
(reference_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM)) {
/* Ring doorbell once to update PCC subspace */
if (send_pcc_cmd(CMD_READ)) {
if (send_pcc_cmd(CMD_READ) < 0) {
ret = -EIO;
goto out_err;
}
......@@ -713,6 +851,13 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
spin_lock(&pcc_lock);
/* If this is PCC reg, check if channel is free before writing */
if (desired_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) {
ret = check_pcc_chan();
if (ret)
goto busy_channel;
}
/*
* Skip writing MIN/MAX until Linux knows how to come up with
* useful values.
......@@ -722,10 +867,10 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
/* Is this a PCC reg ?*/
if (desired_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) {
/* Ring doorbell so Remote can get our perf request. */
if (send_pcc_cmd(CMD_WRITE))
if (send_pcc_cmd(CMD_WRITE) < 0)
ret = -EIO;
}
busy_channel:
spin_unlock(&pcc_lock);
return ret;
......
......@@ -73,6 +73,9 @@ static ssize_t acpi_ec_write_io(struct file *f, const char __user *buf,
loff_t init_off = *off;
int err = 0;
if (!write_support)
return -EINVAL;
if (*off >= EC_SPACE_SIZE)
return 0;
if (*off + count >= EC_SPACE_SIZE) {
......
......@@ -46,7 +46,7 @@ MODULE_DEVICE_TABLE(acpi, fan_device_ids);
#ifdef CONFIG_PM_SLEEP
static int acpi_fan_suspend(struct device *dev);
static int acpi_fan_resume(struct device *dev);
static struct dev_pm_ops acpi_fan_pm = {
static const struct dev_pm_ops acpi_fan_pm = {
.resume = acpi_fan_resume,
.freeze = acpi_fan_suspend,
.thaw = acpi_fan_resume,
......
......@@ -20,6 +20,7 @@
#define PREFIX "ACPI: "
void acpi_initrd_initialize_tables(void);
acpi_status acpi_os_initialize1(void);
void init_acpi_device_notify(void);
int acpi_scan_init(void);
......@@ -29,6 +30,11 @@ void acpi_processor_init(void);
void acpi_platform_init(void);
void acpi_pnp_init(void);
void acpi_int340x_thermal_init(void);
#ifdef CONFIG_ARM_AMBA
void acpi_amba_init(void);
#else
static inline void acpi_amba_init(void) {}
#endif
int acpi_sysfs_init(void);
void acpi_container_init(void);
void acpi_memory_hotplug_init(void);
......@@ -106,6 +112,7 @@ bool acpi_device_is_present(struct acpi_device *adev);
bool acpi_device_is_battery(struct acpi_device *adev);
bool acpi_device_is_first_physical_node(struct acpi_device *adev,
const struct device *dev);
struct device *acpi_get_first_physical_node(struct acpi_device *adev);
/* --------------------------------------------------------------------------
Device Matching and Notification
......
......@@ -602,6 +602,14 @@ acpi_os_predefined_override(const struct acpi_predefined_names *init_val,
return AE_OK;
}
static void acpi_table_taint(struct acpi_table_header *table)
{
pr_warn(PREFIX
"Override [%4.4s-%8.8s], this is unsafe: tainting kernel\n",
table->signature, table->oem_table_id);
add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE);
}
#ifdef CONFIG_ACPI_INITRD_TABLE_OVERRIDE
#include <linux/earlycpio.h>
#include <linux/memblock.h>
......@@ -636,6 +644,7 @@ static const char * const table_sigs[] = {
#define ACPI_OVERRIDE_TABLES 64
static struct cpio_data __initdata acpi_initrd_files[ACPI_OVERRIDE_TABLES];
static DECLARE_BITMAP(acpi_initrd_installed, ACPI_OVERRIDE_TABLES);
#define MAP_CHUNK_SIZE (NR_FIX_BTMAPS << PAGE_SHIFT)
......@@ -746,96 +755,125 @@ void __init acpi_initrd_override(void *data, size_t size)
}
}
}
#endif /* CONFIG_ACPI_INITRD_TABLE_OVERRIDE */
static void acpi_table_taint(struct acpi_table_header *table)
acpi_status
acpi_os_physical_table_override(struct acpi_table_header *existing_table,
acpi_physical_address *address, u32 *length)
{
pr_warn(PREFIX
"Override [%4.4s-%8.8s], this is unsafe: tainting kernel\n",
table->signature, table->oem_table_id);
add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE);
}
int table_offset = 0;
int table_index = 0;
struct acpi_table_header *table;
u32 table_length;
*length = 0;
*address = 0;
if (!acpi_tables_addr)
return AE_OK;
acpi_status
acpi_os_table_override(struct acpi_table_header * existing_table,
struct acpi_table_header ** new_table)
{
if (!existing_table || !new_table)
return AE_BAD_PARAMETER;
while (table_offset + ACPI_HEADER_SIZE <= all_tables_size) {
table = acpi_os_map_memory(acpi_tables_addr + table_offset,
ACPI_HEADER_SIZE);
if (table_offset + table->length > all_tables_size) {
acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
WARN_ON(1);
return AE_OK;
}
*new_table = NULL;
table_length = table->length;
#ifdef CONFIG_ACPI_CUSTOM_DSDT
if (strncmp(existing_table->signature, "DSDT", 4) == 0)
*new_table = (struct acpi_table_header *)AmlCode;
#endif
if (*new_table != NULL)
/* Only override tables matched */
if (test_bit(table_index, acpi_initrd_installed) ||
memcmp(existing_table->signature, table->signature, 4) ||
memcmp(table->oem_table_id, existing_table->oem_table_id,
ACPI_OEM_TABLE_ID_SIZE)) {
acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
goto next_table;
}
*length = table_length;
*address = acpi_tables_addr + table_offset;
acpi_table_taint(existing_table);
acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
set_bit(table_index, acpi_initrd_installed);
break;
next_table:
table_offset += table_length;
table_index++;
}
return AE_OK;
}
acpi_status
acpi_os_physical_table_override(struct acpi_table_header *existing_table,
acpi_physical_address *address,
u32 *table_length)
void __init acpi_initrd_initialize_tables(void)
{
#ifndef CONFIG_ACPI_INITRD_TABLE_OVERRIDE
*table_length = 0;
*address = 0;
return AE_OK;
#else
int table_offset = 0;
int table_index = 0;
u32 table_length;
struct acpi_table_header *table;
*table_length = 0;
*address = 0;
if (!acpi_tables_addr)
return AE_OK;
do {
if (table_offset + ACPI_HEADER_SIZE > all_tables_size) {
WARN_ON(1);
return AE_OK;
}
return;
while (table_offset + ACPI_HEADER_SIZE <= all_tables_size) {
table = acpi_os_map_memory(acpi_tables_addr + table_offset,
ACPI_HEADER_SIZE);
if (table_offset + table->length > all_tables_size) {
acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
WARN_ON(1);
return AE_OK;
return;
}
table_offset += table->length;
table_length = table->length;
if (memcmp(existing_table->signature, table->signature, 4)) {
acpi_os_unmap_memory(table,
ACPI_HEADER_SIZE);
continue;
}
/* Only override tables with matching oem id */
if (memcmp(table->oem_table_id, existing_table->oem_table_id,
ACPI_OEM_TABLE_ID_SIZE)) {
acpi_os_unmap_memory(table,
ACPI_HEADER_SIZE);
continue;
/* Skip RSDT/XSDT which should only be used for override */
if (test_bit(table_index, acpi_initrd_installed) ||
ACPI_COMPARE_NAME(table->signature, ACPI_SIG_RSDT) ||
ACPI_COMPARE_NAME(table->signature, ACPI_SIG_XSDT)) {
acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
goto next_table;
}
table_offset -= table->length;
*table_length = table->length;
acpi_table_taint(table);
acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
*address = acpi_tables_addr + table_offset;
break;
} while (table_offset + ACPI_HEADER_SIZE < all_tables_size);
acpi_install_table(acpi_tables_addr + table_offset, TRUE);
set_bit(table_index, acpi_initrd_installed);
next_table:
table_offset += table_length;
table_index++;
}
}
#else
acpi_status
acpi_os_physical_table_override(struct acpi_table_header *existing_table,
acpi_physical_address *address,
u32 *table_length)
{
*table_length = 0;
*address = 0;
return AE_OK;
}
void __init acpi_initrd_initialize_tables(void)
{
}
#endif /* CONFIG_ACPI_INITRD_TABLE_OVERRIDE */
if (*address != 0)
acpi_status
acpi_os_table_override(struct acpi_table_header *existing_table,
struct acpi_table_header **new_table)
{
if (!existing_table || !new_table)
return AE_BAD_PARAMETER;
*new_table = NULL;
#ifdef CONFIG_ACPI_CUSTOM_DSDT
if (strncmp(existing_table->signature, "DSDT", 4) == 0)
*new_table = (struct acpi_table_header *)AmlCode;
#endif
if (*new_table != NULL)
acpi_table_taint(existing_table);
return AE_OK;
#endif
}
static irqreturn_t acpi_irq(int irq, void *dev_id)
......
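As a usage note that is not visible in the hunks above: the tables consumed by this path come from an uncompressed cpio archive prepended to the regular initrd, with the table images placed under kernel/firmware/acpi/ inside that archive (this is how the kernel documents the initrd table override feature). acpi_initrd_override() copies matching tables out of that archive early in boot, and the code above then installs them in place of, or in addition to, the firmware-provided ones.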
......@@ -33,6 +33,7 @@
#include <linux/pci.h>
#include <linux/acpi.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#define PREFIX "ACPI: "
......@@ -387,6 +388,23 @@ static inline int acpi_isa_register_gsi(struct pci_dev *dev)
}
#endif
static inline bool acpi_pci_irq_valid(struct pci_dev *dev, u8 pin)
{
#ifdef CONFIG_X86
/*
* On x86 irq line 0xff means "unknown" or "no connection"
* (PCI 3.0, Section 6.2.4, footnote on page 223).
*/
if (dev->irq == 0xff) {
dev->irq = IRQ_NOTCONNECTED;
dev_warn(&dev->dev, "PCI INT %c: not connected\n",
pin_name(pin));
return false;
}
#endif
return true;
}
int acpi_pci_irq_enable(struct pci_dev *dev)
{
struct acpi_prt_entry *entry;
......@@ -431,11 +449,14 @@ int acpi_pci_irq_enable(struct pci_dev *dev)
} else
gsi = -1;
/*
* No IRQ known to the ACPI subsystem - maybe the BIOS /
* driver reported one, then use it. Exit in any case.
*/
if (gsi < 0) {
/*
* No IRQ known to the ACPI subsystem - maybe the BIOS /
* driver reported one, then use it. Exit in any case.
*/
if (!acpi_pci_irq_valid(dev, pin))
return 0;
if (acpi_isa_register_gsi(dev))
dev_warn(&dev->dev, "PCI INT %c: no GSI\n",
pin_name(pin));
......
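A hedged sketch of what this buys a driver, assuming the genirq core maps IRQ_NOTCONNECTED to -ENOTCONN in request_irq() (request_irq(), IRQF_SHARED and IRQ_NOTCONNECTED are real kernel APIs; the device name and the polling fallback are made up): probe code gets a clean error instead of trying to use the bogus 0xff line.

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Illustrative probe fragment: fall back to polling when no INTx is wired up. */
static int example_setup_irq(struct pci_dev *pdev, irq_handler_t handler, void *priv)
{
	int ret = request_irq(pdev->irq, handler, IRQF_SHARED, "example-dev", priv);

	if (ret == -ENOTCONN) {
		dev_info(&pdev->dev, "PCI INT not connected, using polling\n");
		return 0;
	}
	return ret;
}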
......@@ -13,7 +13,7 @@
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/mfd/intel_soc_pmic.h>
#include <linux/regmap.h>
......@@ -205,7 +205,4 @@ static int __init intel_crc_pmic_opregion_driver_init(void)
{
return platform_driver_register(&intel_crc_pmic_opregion_driver);
}
module_init(intel_crc_pmic_opregion_driver_init);
MODULE_DESCRIPTION("CrystalCove ACPI operation region driver");
MODULE_LICENSE("GPL");
device_initcall(intel_crc_pmic_opregion_driver_init);
......@@ -314,7 +314,6 @@ static int __init acpi_processor_driver_init(void)
if (result < 0)
return result;
acpi_processor_syscore_init();
register_hotcpu_notifier(&acpi_cpu_notifier);
acpi_thermal_cpufreq_init();
acpi_processor_ppc_init();
......@@ -330,7 +329,6 @@ static void __exit acpi_processor_driver_exit(void)
acpi_processor_ppc_exit();
acpi_thermal_cpufreq_exit();
unregister_hotcpu_notifier(&acpi_cpu_notifier);
acpi_processor_syscore_exit();
driver_unregister(&acpi_processor_driver);
}
......
......@@ -23,6 +23,7 @@
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
#define pr_fmt(fmt) "ACPI: " fmt
#include <linux/module.h>
#include <linux/acpi.h>
......@@ -30,7 +31,6 @@
#include <linux/sched.h> /* need_resched() */
#include <linux/tick.h>
#include <linux/cpuidle.h>
#include <linux/syscore_ops.h>
#include <acpi/processor.h>
/*
......@@ -43,8 +43,6 @@
#include <asm/apic.h>
#endif
#define PREFIX "ACPI: "
#define ACPI_PROCESSOR_CLASS "processor"
#define _COMPONENT ACPI_PROCESSOR_COMPONENT
ACPI_MODULE_NAME("processor_idle");
......@@ -81,9 +79,9 @@ static int set_max_cstate(const struct dmi_system_id *id)
if (max_cstate > ACPI_PROCESSOR_MAX_POWER)
return 0;
printk(KERN_NOTICE PREFIX "%s detected - limiting to C%ld max_cstate."
" Override with \"processor.max_cstate=%d\"\n", id->ident,
(long)id->driver_data, ACPI_PROCESSOR_MAX_POWER + 1);
pr_notice("%s detected - limiting to C%ld max_cstate."
" Override with \"processor.max_cstate=%d\"\n", id->ident,
(long)id->driver_data, ACPI_PROCESSOR_MAX_POWER + 1);
max_cstate = (long)id->driver_data;
......@@ -194,42 +192,6 @@ static void lapic_timer_state_broadcast(struct acpi_processor *pr,
#endif
#ifdef CONFIG_PM_SLEEP
static u32 saved_bm_rld;
static int acpi_processor_suspend(void)
{
acpi_read_bit_register(ACPI_BITREG_BUS_MASTER_RLD, &saved_bm_rld);
return 0;
}
static void acpi_processor_resume(void)
{
u32 resumed_bm_rld = 0;
acpi_read_bit_register(ACPI_BITREG_BUS_MASTER_RLD, &resumed_bm_rld);
if (resumed_bm_rld == saved_bm_rld)
return;
acpi_write_bit_register(ACPI_BITREG_BUS_MASTER_RLD, saved_bm_rld);
}
static struct syscore_ops acpi_processor_syscore_ops = {
.suspend = acpi_processor_suspend,
.resume = acpi_processor_resume,
};
void acpi_processor_syscore_init(void)
{
register_syscore_ops(&acpi_processor_syscore_ops);
}
void acpi_processor_syscore_exit(void)
{
unregister_syscore_ops(&acpi_processor_syscore_ops);
}
#endif /* CONFIG_PM_SLEEP */
#if defined(CONFIG_X86)
static void tsc_check_state(int state)
{
......@@ -351,7 +313,7 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
/* There must be at least 2 elements */
if (!cst || (cst->type != ACPI_TYPE_PACKAGE) || cst->package.count < 2) {
printk(KERN_ERR PREFIX "not enough elements in _CST\n");
pr_err("not enough elements in _CST\n");
ret = -EFAULT;
goto end;
}
......@@ -360,7 +322,7 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
/* Validate number of power states. */
if (count < 1 || count != cst->package.count - 1) {
printk(KERN_ERR PREFIX "count given by _CST is not valid\n");
pr_err("count given by _CST is not valid\n");
ret = -EFAULT;
goto end;
}
......@@ -469,11 +431,9 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
* (From 1 through ACPI_PROCESSOR_MAX_POWER - 1)
*/
if (current_count >= (ACPI_PROCESSOR_MAX_POWER - 1)) {
printk(KERN_WARNING
"Limiting number of power states to max (%d)\n",
ACPI_PROCESSOR_MAX_POWER);
printk(KERN_WARNING
"Please increase ACPI_PROCESSOR_MAX_POWER if needed.\n");
pr_warn("Limiting number of power states to max (%d)\n",
ACPI_PROCESSOR_MAX_POWER);
pr_warn("Please increase ACPI_PROCESSOR_MAX_POWER if needed.\n");
break;
}
}
......@@ -1097,8 +1057,8 @@ int acpi_processor_power_init(struct acpi_processor *pr)
retval = cpuidle_register_driver(&acpi_idle_driver);
if (retval)
return retval;
printk(KERN_DEBUG "ACPI: %s registered with cpuidle\n",
acpi_idle_driver.name);
pr_debug("%s registered with cpuidle\n",
acpi_idle_driver.name);
}
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
......
......@@ -1930,6 +1930,7 @@ int __init acpi_scan_init(void)
acpi_memory_hotplug_init();
acpi_pnp_init();
acpi_int340x_thermal_init();
acpi_amba_init();
acpi_scan_add_handler(&generic_device_handler);
......
......@@ -19,6 +19,7 @@
#include <linux/reboot.h>
#include <linux/acpi.h>
#include <linux/module.h>
#include <linux/syscore_ops.h>
#include <asm/io.h>
#include <trace/events/power.h>
......@@ -677,6 +678,39 @@ static void acpi_sleep_suspend_setup(void)
static inline void acpi_sleep_suspend_setup(void) {}
#endif /* !CONFIG_SUSPEND */
#ifdef CONFIG_PM_SLEEP
static u32 saved_bm_rld;
static int acpi_save_bm_rld(void)
{
acpi_read_bit_register(ACPI_BITREG_BUS_MASTER_RLD, &saved_bm_rld);
return 0;
}
static void acpi_restore_bm_rld(void)
{
u32 resumed_bm_rld = 0;
acpi_read_bit_register(ACPI_BITREG_BUS_MASTER_RLD, &resumed_bm_rld);
if (resumed_bm_rld == saved_bm_rld)
return;
acpi_write_bit_register(ACPI_BITREG_BUS_MASTER_RLD, saved_bm_rld);
}
static struct syscore_ops acpi_sleep_syscore_ops = {
.suspend = acpi_save_bm_rld,
.resume = acpi_restore_bm_rld,
};
void acpi_sleep_syscore_init(void)
{
register_syscore_ops(&acpi_sleep_syscore_ops);
}
#else
static inline void acpi_sleep_syscore_init(void) {}
#endif /* CONFIG_PM_SLEEP */
#ifdef CONFIG_HIBERNATION
static unsigned long s4_hardware_signature;
static struct acpi_table_facs *facs;
......@@ -839,6 +873,7 @@ int __init acpi_sleep_init(void)
sleep_states[ACPI_STATE_S0] = 1;
acpi_sleep_syscore_init();
acpi_sleep_suspend_setup();
acpi_sleep_hibernate_setup();
......
......@@ -32,6 +32,7 @@
#include <linux/errno.h>
#include <linux/acpi.h>
#include <linux/bootmem.h>
#include "internal.h"
#define ACPI_MAX_TABLES 128
......@@ -456,6 +457,7 @@ int __init acpi_table_init(void)
status = acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0);
if (ACPI_FAILURE(status))
return -EINVAL;
acpi_initrd_initialize_tables();
check_multiple_madt();
return 0;
......@@ -484,3 +486,13 @@ static int __init acpi_force_table_verification_setup(char *s)
}
early_param("acpi_force_table_verification", acpi_force_table_verification_setup);
static int __init acpi_force_32bit_fadt_addr(char *s)
{
pr_info("Forcing 32 Bit FADT addresses\n");
acpi_gbl_use32_bit_fadt_addresses = TRUE;
return 0;
}
early_param("acpi_force_32bit_fadt_addr", acpi_force_32bit_fadt_addr);
......@@ -201,10 +201,6 @@ acpi_extract_package(union acpi_object *package,
u8 **pointer = NULL;
union acpi_object *element = &(package->package.elements[i]);
if (!element) {
return AE_BAD_DATA;
}
switch (element->type) {
case ACPI_TYPE_INTEGER:
......
......@@ -104,6 +104,7 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
{
unsigned int state_idx = genpd->state_idx;
ktime_t time_start;
s64 elapsed_ns;
int ret;
......@@ -120,10 +121,10 @@ static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
return ret;
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns <= genpd->power_on_latency_ns)
if (elapsed_ns <= genpd->states[state_idx].power_on_latency_ns)
return ret;
genpd->power_on_latency_ns = elapsed_ns;
genpd->states[state_idx].power_on_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
genpd->name, "on", elapsed_ns);
......@@ -133,6 +134,7 @@ static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
static int genpd_power_off(struct generic_pm_domain *genpd, bool timed)
{
unsigned int state_idx = genpd->state_idx;
ktime_t time_start;
s64 elapsed_ns;
int ret;
......@@ -149,10 +151,10 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool timed)
return ret;
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns <= genpd->power_off_latency_ns)
if (elapsed_ns <= genpd->states[state_idx].power_off_latency_ns)
return ret;
genpd->power_off_latency_ns = elapsed_ns;
genpd->states[state_idx].power_off_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
genpd->name, "off", elapsed_ns);
......@@ -485,8 +487,13 @@ static int pm_genpd_runtime_resume(struct device *dev)
if (timed && runtime_pm)
time_start = ktime_get();
genpd_start_dev(genpd, dev);
genpd_restore_dev(genpd, dev);
ret = genpd_start_dev(genpd, dev);
if (ret)
goto err_poweroff;
ret = genpd_restore_dev(genpd, dev);
if (ret)
goto err_stop;
/* Update resume latency value if the measured time exceeds it. */
if (timed && runtime_pm) {
......@@ -501,6 +508,17 @@ static int pm_genpd_runtime_resume(struct device *dev)
}
return 0;
err_stop:
genpd_stop_dev(genpd, dev);
err_poweroff:
if (!dev->power.irq_safe) {
mutex_lock(&genpd->lock);
genpd_poweroff(genpd, 0);
mutex_unlock(&genpd->lock);
}
return ret;
}
static bool pd_ignore_unused;
......@@ -585,6 +603,8 @@ static void pm_genpd_sync_poweroff(struct generic_pm_domain *genpd,
|| atomic_read(&genpd->sd_count) > 0)
return;
/* Choose the deepest state when suspending */
genpd->state_idx = genpd->state_count - 1;
genpd_power_off(genpd, timed);
genpd->status = GPD_STATE_POWER_OFF;
......@@ -1378,7 +1398,7 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
mutex_lock(&subdomain->lock);
mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
if (!list_empty(&subdomain->slave_links) || subdomain->device_count) {
if (!list_empty(&subdomain->master_links) || subdomain->device_count) {
pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
subdomain->name);
ret = -EBUSY;
......@@ -1508,6 +1528,20 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
genpd->dev_ops.start = pm_clk_resume;
}
if (genpd->state_idx >= GENPD_MAX_NUM_STATES) {
pr_warn("Initial state index out of bounds.\n");
genpd->state_idx = GENPD_MAX_NUM_STATES - 1;
}
if (genpd->state_count > GENPD_MAX_NUM_STATES) {
pr_warn("Limiting states to %d\n", GENPD_MAX_NUM_STATES);
genpd->state_count = GENPD_MAX_NUM_STATES;
}
/* Use only one "off" state if there were no states declared */
if (genpd->state_count == 0)
genpd->state_count = 1;
mutex_lock(&gpd_list_lock);
list_add(&genpd->gpd_list_node, &gpd_list);
mutex_unlock(&gpd_list_lock);
......@@ -1668,6 +1702,9 @@ struct generic_pm_domain *of_genpd_get_from_provider(
struct generic_pm_domain *genpd = ERR_PTR(-ENOENT);
struct of_genpd_provider *provider;
if (!genpdspec)
return ERR_PTR(-EINVAL);
mutex_lock(&of_genpd_mutex);
/* Check if we have such a provider in our array */
......@@ -1864,6 +1901,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
struct pm_domain_data *pm_data;
const char *kobj_path;
struct gpd_link *link;
char state[16];
int ret;
ret = mutex_lock_interruptible(&genpd->lock);
......@@ -1872,7 +1910,13 @@ static int pm_genpd_summary_one(struct seq_file *s,
if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup)))
goto exit;
seq_printf(s, "%-30s %-15s ", genpd->name, status_lookup[genpd->status]);
if (genpd->status == GPD_STATE_POWER_OFF)
snprintf(state, sizeof(state), "%s-%u",
status_lookup[genpd->status], genpd->state_idx);
else
snprintf(state, sizeof(state), "%s",
status_lookup[genpd->status]);
seq_printf(s, "%-30s %-15s ", genpd->name, state);
/*
* Modifications on the list require holding locks on both
......
......@@ -98,7 +98,8 @@ static bool default_stop_ok(struct device *dev)
*
* This routine must be executed under the PM domain's lock.
*/
static bool default_power_down_ok(struct dev_pm_domain *pd)
static bool __default_power_down_ok(struct dev_pm_domain *pd,
unsigned int state)
{
struct generic_pm_domain *genpd = pd_to_genpd(pd);
struct gpd_link *link;
......@@ -106,27 +107,9 @@ static bool default_power_down_ok(struct dev_pm_domain *pd)
s64 min_off_time_ns;
s64 off_on_time_ns;
if (genpd->max_off_time_changed) {
struct gpd_link *link;
/*
* We have to invalidate the cached results for the masters, so
* use the observation that default_power_down_ok() is not
* going to be called for any master until this instance
* returns.
*/
list_for_each_entry(link, &genpd->slave_links, slave_node)
link->master->max_off_time_changed = true;
genpd->max_off_time_changed = false;
genpd->cached_power_down_ok = false;
genpd->max_off_time_ns = -1;
} else {
return genpd->cached_power_down_ok;
}
off_on_time_ns = genpd->states[state].power_off_latency_ns +
genpd->states[state].power_on_latency_ns;
off_on_time_ns = genpd->power_off_latency_ns +
genpd->power_on_latency_ns;
min_off_time_ns = -1;
/*
......@@ -186,8 +169,6 @@ static bool default_power_down_ok(struct dev_pm_domain *pd)
min_off_time_ns = constraint_ns;
}
genpd->cached_power_down_ok = true;
/*
* If the computed minimum device off time is negative, there are no
* latency constraints, so the domain can spend arbitrary time in the
......@@ -201,10 +182,45 @@ static bool default_power_down_ok(struct dev_pm_domain *pd)
* time and the time needed to turn the domain on is the maximum
* theoretical time this domain can spend in the "off" state.
*/
genpd->max_off_time_ns = min_off_time_ns - genpd->power_on_latency_ns;
genpd->max_off_time_ns = min_off_time_ns -
genpd->states[state].power_on_latency_ns;
return true;
}
static bool default_power_down_ok(struct dev_pm_domain *pd)
{
struct generic_pm_domain *genpd = pd_to_genpd(pd);
struct gpd_link *link;
if (!genpd->max_off_time_changed)
return genpd->cached_power_down_ok;
/*
* We have to invalidate the cached results for the masters, so
* use the observation that default_power_down_ok() is not
* going to be called for any master until this instance
* returns.
*/
list_for_each_entry(link, &genpd->slave_links, slave_node)
link->master->max_off_time_changed = true;
genpd->max_off_time_ns = -1;
genpd->max_off_time_changed = false;
genpd->cached_power_down_ok = true;
genpd->state_idx = genpd->state_count - 1;
/* Find a state to power down to, starting from the deepest. */
while (!__default_power_down_ok(pd, genpd->state_idx)) {
if (genpd->state_idx == 0) {
genpd->cached_power_down_ok = false;
break;
}
genpd->state_idx--;
}
return genpd->cached_power_down_ok;
}
static bool always_on_power_down_ok(struct dev_pm_domain *domain)
{
return false;
......
......@@ -31,7 +31,7 @@
* @table: Cpufreq table returned back to caller
*
* Generate a cpufreq table for a provided device- this assumes that the
* opp list is already initialized and ready for usage.
* opp table is already initialized and ready for usage.
*
* This function allocates required memory for the cpufreq table. It is
* expected that the caller does the required maintenance such as freeing
......@@ -44,7 +44,7 @@
* WARNING: It is important for the callers to ensure refreshing their copy of
* the table if any of the mentioned functions have been invoked in the interim.
*
* Locking: The internal device_opp and opp structures are RCU protected.
* Locking: The internal opp_table and opp structures are RCU protected.
* Since we just use the regular accessor functions to access the internal data
* structures, we use RCU read lock inside this function. As a result, users of
* this function DONOT need to use explicit locks for invoking.
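A minimal usage sketch of the API this comment documents, not part of the diff; the device is assumed to be the CPU device, and hooking the table up to the cpufreq core is elided:

#include <linux/cpufreq.h>
#include <linux/pm_opp.h>

static struct cpufreq_frequency_table *example_freq_table;

/* Build the table from the OPPs already registered for cpu_dev. */
static int example_init_table(struct device *cpu_dev)
{
	return dev_pm_opp_init_cpufreq_table(cpu_dev, &example_freq_table);
}

/* Free it again when the driver goes away. */
static void example_free_table(struct device *cpu_dev)
{
	dev_pm_opp_free_cpufreq_table(cpu_dev, &example_freq_table);
}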
......@@ -122,15 +122,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
/* Required only for V1 bindings, as v2 can manage it from DT itself */
int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
{
struct device_list_opp *list_dev;
struct device_opp *dev_opp;
struct opp_device *opp_dev;
struct opp_table *opp_table;
struct device *dev;
int cpu, ret = 0;
mutex_lock(&dev_opp_list_lock);
mutex_lock(&opp_table_lock);
dev_opp = _find_device_opp(cpu_dev);
if (IS_ERR(dev_opp)) {
opp_table = _find_opp_table(cpu_dev);
if (IS_ERR(opp_table)) {
ret = -EINVAL;
goto unlock;
}
......@@ -146,15 +146,15 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
continue;
}
list_dev = _add_list_dev(dev, dev_opp);
if (!list_dev) {
dev_err(dev, "%s: failed to add list-dev for cpu%d device\n",
opp_dev = _add_opp_dev(dev, opp_table);
if (!opp_dev) {
dev_err(dev, "%s: failed to add opp-dev for cpu%d device\n",
__func__, cpu);
continue;
}
}
unlock:
mutex_unlock(&dev_opp_list_lock);
mutex_unlock(&opp_table_lock);
return ret;
}
......
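A sketch of how a driver using the v1 bindings might call this, loosely following what cpufreq-dt does; the policy is assumed to already list every CPU of the voltage domain in policy->cpus:

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/pm_opp.h>

static int example_mark_opp_sharing(struct cpufreq_policy *policy)
{
	struct device *cpu_dev = get_cpu_device(policy->cpu);

	if (!cpu_dev)
		return -ENODEV;

	/* Every CPU in policy->cpus shares cpu_dev's OPP table. */
	return dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
}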
......@@ -34,9 +34,9 @@ void opp_debug_remove_one(struct dev_pm_opp *opp)
debugfs_remove_recursive(opp->dentry);
}
int opp_debug_create_one(struct dev_pm_opp *opp, struct device_opp *dev_opp)
int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
{
struct dentry *pdentry = dev_opp->dentry;
struct dentry *pdentry = opp_table->dentry;
struct dentry *d;
char name[25]; /* 20 chars for 64 bit value + 5 (opp:\0) */
......@@ -83,52 +83,52 @@ int opp_debug_create_one(struct dev_pm_opp *opp, struct device_opp *dev_opp)
return 0;
}
static int device_opp_debug_create_dir(struct device_list_opp *list_dev,
struct device_opp *dev_opp)
static int opp_list_debug_create_dir(struct opp_device *opp_dev,
struct opp_table *opp_table)
{
const struct device *dev = list_dev->dev;
const struct device *dev = opp_dev->dev;
struct dentry *d;
opp_set_dev_name(dev, dev_opp->dentry_name);
opp_set_dev_name(dev, opp_table->dentry_name);
/* Create device specific directory */
d = debugfs_create_dir(dev_opp->dentry_name, rootdir);
d = debugfs_create_dir(opp_table->dentry_name, rootdir);
if (!d) {
dev_err(dev, "%s: Failed to create debugfs dir\n", __func__);
return -ENOMEM;
}
list_dev->dentry = d;
dev_opp->dentry = d;
opp_dev->dentry = d;
opp_table->dentry = d;
return 0;
}
static int device_opp_debug_create_link(struct device_list_opp *list_dev,
struct device_opp *dev_opp)
static int opp_list_debug_create_link(struct opp_device *opp_dev,
struct opp_table *opp_table)
{
const struct device *dev = list_dev->dev;
const struct device *dev = opp_dev->dev;
char name[NAME_MAX];
struct dentry *d;
opp_set_dev_name(list_dev->dev, name);
opp_set_dev_name(opp_dev->dev, name);
/* Create device specific directory link */
d = debugfs_create_symlink(name, rootdir, dev_opp->dentry_name);
d = debugfs_create_symlink(name, rootdir, opp_table->dentry_name);
if (!d) {
dev_err(dev, "%s: Failed to create link\n", __func__);
return -ENOMEM;
}
list_dev->dentry = d;
opp_dev->dentry = d;
return 0;
}
/**
* opp_debug_register - add a device opp node to the debugfs 'opp' directory
* @list_dev: list-dev pointer for device
* @dev_opp: the device-opp being added
* @opp_dev: opp-dev pointer for device
* @opp_table: the device-opp being added
*
* Dynamically adds device specific directory in debugfs 'opp' directory. If the
* device-opp is shared with other devices, then links will be created for all
......@@ -136,73 +136,72 @@ static int device_opp_debug_create_link(struct device_list_opp *list_dev,
*
* Return: 0 on success, otherwise negative error.
*/
int opp_debug_register(struct device_list_opp *list_dev,
struct device_opp *dev_opp)
int opp_debug_register(struct opp_device *opp_dev, struct opp_table *opp_table)
{
if (!rootdir) {
pr_debug("%s: Uninitialized rootdir\n", __func__);
return -EINVAL;
}
if (dev_opp->dentry)
return device_opp_debug_create_link(list_dev, dev_opp);
if (opp_table->dentry)
return opp_list_debug_create_link(opp_dev, opp_table);
return device_opp_debug_create_dir(list_dev, dev_opp);
return opp_list_debug_create_dir(opp_dev, opp_table);
}
static void opp_migrate_dentry(struct device_list_opp *list_dev,
struct device_opp *dev_opp)
static void opp_migrate_dentry(struct opp_device *opp_dev,
struct opp_table *opp_table)
{
struct device_list_opp *new_dev;
struct opp_device *new_dev;
const struct device *dev;
struct dentry *dentry;
/* Look for next list-dev */
list_for_each_entry(new_dev, &dev_opp->dev_list, node)
if (new_dev != list_dev)
/* Look for next opp-dev */
list_for_each_entry(new_dev, &opp_table->dev_list, node)
if (new_dev != opp_dev)
break;
/* new_dev is guaranteed to be valid here */
dev = new_dev->dev;
debugfs_remove_recursive(new_dev->dentry);
opp_set_dev_name(dev, dev_opp->dentry_name);
opp_set_dev_name(dev, opp_table->dentry_name);
dentry = debugfs_rename(rootdir, list_dev->dentry, rootdir,
dev_opp->dentry_name);
dentry = debugfs_rename(rootdir, opp_dev->dentry, rootdir,
opp_table->dentry_name);
if (!dentry) {
dev_err(dev, "%s: Failed to rename link from: %s to %s\n",
__func__, dev_name(list_dev->dev), dev_name(dev));
__func__, dev_name(opp_dev->dev), dev_name(dev));
return;
}
new_dev->dentry = dentry;
dev_opp->dentry = dentry;
opp_table->dentry = dentry;
}
/**
* opp_debug_unregister - remove a device opp node from debugfs opp directory
* @list_dev: list-dev pointer for device
* @dev_opp: the device-opp being removed
* @opp_dev: opp-dev pointer for device
* @opp_table: the device-opp being removed
*
* Dynamically removes device specific directory from debugfs 'opp' directory.
*/
void opp_debug_unregister(struct device_list_opp *list_dev,
struct device_opp *dev_opp)
void opp_debug_unregister(struct opp_device *opp_dev,
struct opp_table *opp_table)
{
if (list_dev->dentry == dev_opp->dentry) {
if (opp_dev->dentry == opp_table->dentry) {
/* Move the real dentry object under another device */
if (!list_is_singular(&dev_opp->dev_list)) {
opp_migrate_dentry(list_dev, dev_opp);
if (!list_is_singular(&opp_table->dev_list)) {
opp_migrate_dentry(opp_dev, opp_table);
goto out;
}
dev_opp->dentry = NULL;
opp_table->dentry = NULL;
}
debugfs_remove_recursive(list_dev->dentry);
debugfs_remove_recursive(opp_dev->dentry);
out:
list_dev->dentry = NULL;
opp_dev->dentry = NULL;
}
static int __init opp_debug_init(void)
......
......@@ -22,13 +22,16 @@
#include <linux/rculist.h>
#include <linux/rcupdate.h>
struct clk;
struct regulator;
/* Lock to allow exclusive modification to the device and opp lists */
extern struct mutex dev_opp_list_lock;
extern struct mutex opp_table_lock;
/*
* Internal data structure organization with the OPP layer library is as
* follows:
* dev_opp_list (root)
* opp_tables (root)
* |- device 1 (represents voltage domain 1)
* | |- opp 1 (availability, freq, voltage)
* | |- opp 2 ..
......@@ -37,18 +40,18 @@ extern struct mutex dev_opp_list_lock;
* |- device 2 (represents the next voltage domain)
* ...
* `- device m (represents mth voltage domain)
* device 1, 2.. are represented by dev_opp structure while each opp
* device 1, 2.. are represented by opp_table structure while each opp
* is represented by the opp structure.
*/
/**
* struct dev_pm_opp - Generic OPP description structure
* @node: opp list node. The nodes are maintained throughout the lifetime
* @node: opp table node. The nodes are maintained throughout the lifetime
* of boot. It is expected only an optimal set of OPPs are
* added to the library by the SoC framework.
* RCU usage: opp list is traversed with RCU locks. node
* RCU usage: opp table is traversed with RCU locks. node
* modification is possible realtime, hence the modifications
* are protected by the dev_opp_list_lock for integrity.
* are protected by the opp_table_lock for integrity.
* IMPORTANT: the opp nodes should be maintained in increasing
* order.
* @available: true/false - marks if this OPP as available or not
......@@ -62,7 +65,7 @@ extern struct mutex dev_opp_list_lock;
* @u_amp: Maximum current drawn by the device in microamperes
* @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
* frequency from any other OPP's frequency.
* @dev_opp: points back to the device_opp struct this opp belongs to
* @opp_table: points back to the opp_table struct this opp belongs to
* @rcu_head: RCU callback head used for deferred freeing
* @np: OPP's device node.
* @dentry: debugfs dentry pointer (per opp)
......@@ -84,7 +87,7 @@ struct dev_pm_opp {
unsigned long u_amp;
unsigned long clock_latency_ns;
struct device_opp *dev_opp;
struct opp_table *opp_table;
struct rcu_head rcu_head;
struct device_node *np;
......@@ -95,16 +98,16 @@ struct dev_pm_opp {
};
/**
* struct device_list_opp - devices managed by 'struct device_opp'
* struct opp_device - devices managed by 'struct opp_table'
* @node: list node
* @dev: device to which the struct object belongs
* @rcu_head: RCU callback head used for deferred freeing
* @dentry: debugfs dentry pointer (per device)
*
* This is an internal data structure maintaining the list of devices that are
* managed by 'struct device_opp'.
* This is an internal data structure maintaining the devices that are managed
* by 'struct opp_table'.
*/
struct device_list_opp {
struct opp_device {
struct list_head node;
const struct device *dev;
struct rcu_head rcu_head;
......@@ -115,16 +118,16 @@ struct device_list_opp {
};
/**
* struct device_opp - Device opp structure
* @node: list node - contains the devices with OPPs that
* struct opp_table - Device opp structure
* @node: table node - contains the devices with OPPs that
* have been registered. Nodes once added are not modified in this
* list.
* RCU usage: nodes are not modified in the list of device_opp,
* however addition is possible and is secured by dev_opp_list_lock
* table.
* RCU usage: nodes are not modified in the table of opp_table,
* however addition is possible and is secured by opp_table_lock
* @srcu_head: notifier head to notify the OPP availability changes.
* @rcu_head: RCU callback head used for deferred freeing
* @dev_list: list of devices that share these OPPs
* @opp_list: list of opps
* @opp_list: table of opps
* @np: struct device_node pointer for opp's DT node.
* @clock_latency_ns_max: Max clock latency in nanoseconds.
* @shared_opp: OPP is shared between multiple devices.
......@@ -132,9 +135,13 @@ struct device_list_opp {
* @supported_hw: Array of version number to support.
* @supported_hw_count: Number of elements in supported_hw array.
* @prop_name: A name to postfix to many DT properties, while parsing them.
* @clk: Device's clock handle
* @regulator: Supply regulator
* @dentry: debugfs dentry pointer of the real device directory (not links).
* @dentry_name: Name of the real dentry.
*
* @voltage_tolerance_v1: In percentage, for v1 bindings only.
*
* This is an internal data structure maintaining the link to opps attached to
* a device. This structure is not meant to be shared to users as it is
* meant for book keeping and private to OPP library.
......@@ -143,7 +150,7 @@ struct device_list_opp {
* need to wait for the grace period of both of them before freeing any
* resources. And so we have used kfree_rcu() from within call_srcu() handlers.
*/
struct device_opp {
struct opp_table {
struct list_head node;
struct srcu_notifier_head srcu_head;
......@@ -153,12 +160,18 @@ struct device_opp {
struct device_node *np;
unsigned long clock_latency_ns_max;
/* For backward compatibility with v1 bindings */
unsigned int voltage_tolerance_v1;
bool shared_opp;
struct dev_pm_opp *suspend_opp;
unsigned int *supported_hw;
unsigned int supported_hw_count;
const char *prop_name;
struct clk *clk;
struct regulator *regulator;
#ifdef CONFIG_DEBUG_FS
struct dentry *dentry;
......@@ -167,30 +180,27 @@ struct device_opp {
};
/* Routines internal to opp core */
struct device_opp *_find_device_opp(struct device *dev);
struct device_list_opp *_add_list_dev(const struct device *dev,
struct device_opp *dev_opp);
struct opp_table *_find_opp_table(struct device *dev);
struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
struct device_node *_of_get_opp_desc_node(struct device *dev);
#ifdef CONFIG_DEBUG_FS
void opp_debug_remove_one(struct dev_pm_opp *opp);
int opp_debug_create_one(struct dev_pm_opp *opp, struct device_opp *dev_opp);
int opp_debug_register(struct device_list_opp *list_dev,
struct device_opp *dev_opp);
void opp_debug_unregister(struct device_list_opp *list_dev,
struct device_opp *dev_opp);
int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table);
int opp_debug_register(struct opp_device *opp_dev, struct opp_table *opp_table);
void opp_debug_unregister(struct opp_device *opp_dev, struct opp_table *opp_table);
#else
static inline void opp_debug_remove_one(struct dev_pm_opp *opp) {}
static inline int opp_debug_create_one(struct dev_pm_opp *opp,
struct device_opp *dev_opp)
struct opp_table *opp_table)
{ return 0; }
static inline int opp_debug_register(struct device_list_opp *list_dev,
struct device_opp *dev_opp)
static inline int opp_debug_register(struct opp_device *opp_dev,
struct opp_table *opp_table)
{ return 0; }
static inline void opp_debug_unregister(struct device_list_opp *list_dev,
struct device_opp *dev_opp)
static inline void opp_debug_unregister(struct opp_device *opp_dev,
struct opp_table *opp_table)
{ }
#endif /* DEBUG_FS */
......
......@@ -166,14 +166,14 @@ void generate_pm_trace(const void *tracedata, unsigned int user)
}
EXPORT_SYMBOL(generate_pm_trace);
extern char __tracedata_start, __tracedata_end;
extern char __tracedata_start[], __tracedata_end[];
static int show_file_hash(unsigned int value)
{
int match;
char *tracedata;
match = 0;
for (tracedata = &__tracedata_start ; tracedata < &__tracedata_end ;
for (tracedata = __tracedata_start ; tracedata < __tracedata_end ;
tracedata += 2 + sizeof(unsigned long)) {
unsigned short lineno = *(unsigned short *)tracedata;
const char *file = *(const char **)(tracedata + 2);
......
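The switch from plain char symbols to incomplete array declarations is likely about keeping the walk over the .tracedata section well defined: taking the address of a single char object and iterating past it is out-of-bounds pointer arithmetic (the kind of thing UBSAN flags), whereas arrays of unspecified size make the same loop legitimate. The loop body is unchanged apart from dropping the address-of operators.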
......@@ -218,7 +218,8 @@ bool fwnode_property_present(struct fwnode_handle *fwnode, const char *propname)
bool ret;
ret = __fwnode_property_present(fwnode, propname);
if (ret == false && fwnode && !IS_ERR_OR_NULL(fwnode->secondary))
if (ret == false && !IS_ERR_OR_NULL(fwnode) &&
!IS_ERR_OR_NULL(fwnode->secondary))
ret = __fwnode_property_present(fwnode->secondary, propname);
return ret;
}
......@@ -423,7 +424,8 @@ EXPORT_SYMBOL_GPL(device_property_match_string);
int _ret_; \
_ret_ = FWNODE_PROP_READ(_fwnode_, _propname_, _type_, _proptype_, \
_val_, _nval_); \
if (_ret_ == -EINVAL && _fwnode_ && !IS_ERR_OR_NULL(_fwnode_->secondary)) \
if (_ret_ == -EINVAL && !IS_ERR_OR_NULL(_fwnode_) && \
!IS_ERR_OR_NULL(_fwnode_->secondary)) \
_ret_ = FWNODE_PROP_READ(_fwnode_->secondary, _propname_, _type_, \
_proptype_, _val_, _nval_); \
_ret_; \
......@@ -593,7 +595,8 @@ int fwnode_property_read_string_array(struct fwnode_handle *fwnode,
int ret;
ret = __fwnode_property_read_string_array(fwnode, propname, val, nval);
if (ret == -EINVAL && fwnode && !IS_ERR_OR_NULL(fwnode->secondary))
if (ret == -EINVAL && !IS_ERR_OR_NULL(fwnode) &&
!IS_ERR_OR_NULL(fwnode->secondary))
ret = __fwnode_property_read_string_array(fwnode->secondary,
propname, val, nval);
return ret;
......@@ -621,7 +624,8 @@ int fwnode_property_read_string(struct fwnode_handle *fwnode,
int ret;
ret = __fwnode_property_read_string(fwnode, propname, val);
if (ret == -EINVAL && fwnode && !IS_ERR_OR_NULL(fwnode->secondary))
if (ret == -EINVAL && !IS_ERR_OR_NULL(fwnode) &&
!IS_ERR_OR_NULL(fwnode->secondary))
ret = __fwnode_property_read_string(fwnode->secondary,
propname, val);
return ret;
......@@ -820,11 +824,16 @@ void device_remove_property_set(struct device *dev)
* the pset. If there is no real firmware node (ACPI/DT) primary
* will hold the pset.
*/
if (!is_pset_node(fwnode))
fwnode = fwnode->secondary;
if (!IS_ERR(fwnode) && is_pset_node(fwnode))
if (is_pset_node(fwnode)) {
set_primary_fwnode(dev, NULL);
pset_free_set(to_pset_node(fwnode));
set_secondary_fwnode(dev, NULL);
} else {
fwnode = fwnode->secondary;
if (!IS_ERR(fwnode) && is_pset_node(fwnode)) {
set_secondary_fwnode(dev, NULL);
pset_free_set(to_pset_node(fwnode));
}
}
}
EXPORT_SYMBOL_GPL(device_remove_property_set);
......
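A hedged illustration of the fallback these functions implement (the new IS_ERR_OR_NULL() checks just make it safe when the fwnode itself is missing or an error pointer): a lookup that misses on the primary fwnode (ACPI or DT) falls through to the secondary one, such as a built-in property set, so a driver reads the value the same way wherever it lives. The property name and default below are made up:

#include <linux/property.h>

static u32 example_get_threshold(struct device *dev)
{
	u32 threshold;

	/* Tries the primary fwnode first, then the secondary one. */
	if (device_property_read_u32(dev, "vendor,threshold", &threshold))
		threshold = 128;	/* illustrative default when neither node has it */

	return threshold;
}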
......@@ -19,6 +19,7 @@ config CPU_FREQ
if CPU_FREQ
config CPU_FREQ_GOV_COMMON
select IRQ_WORK
bool
config CPU_FREQ_BOOST_SW
......
......@@ -70,6 +70,8 @@ struct acpi_cpufreq_data {
unsigned int cpu_feature;
unsigned int acpi_perf_cpu;
cpumask_var_t freqdomain_cpus;
void (*cpu_freq_write)(struct acpi_pct_register *reg, u32 val);
u32 (*cpu_freq_read)(struct acpi_pct_register *reg);
};
/* acpi_perf_data is a pointer to percpu data. */
......@@ -243,125 +245,119 @@ static unsigned extract_freq(u32 val, struct acpi_cpufreq_data *data)
}
}
struct msr_addr {
u32 reg;
};
u32 cpu_freq_read_intel(struct acpi_pct_register *not_used)
{
u32 val, dummy;
struct io_addr {
u16 port;
u8 bit_width;
};
rdmsr(MSR_IA32_PERF_CTL, val, dummy);
return val;
}
void cpu_freq_write_intel(struct acpi_pct_register *not_used, u32 val)
{
u32 lo, hi;
rdmsr(MSR_IA32_PERF_CTL, lo, hi);
lo = (lo & ~INTEL_MSR_RANGE) | (val & INTEL_MSR_RANGE);
wrmsr(MSR_IA32_PERF_CTL, lo, hi);
}
u32 cpu_freq_read_amd(struct acpi_pct_register *not_used)
{
u32 val, dummy;
rdmsr(MSR_AMD_PERF_CTL, val, dummy);
return val;
}
void cpu_freq_write_amd(struct acpi_pct_register *not_used, u32 val)
{
wrmsr(MSR_AMD_PERF_CTL, val, 0);
}
u32 cpu_freq_read_io(struct acpi_pct_register *reg)
{
u32 val;
acpi_os_read_port(reg->address, &val, reg->bit_width);
return val;
}
void cpu_freq_write_io(struct acpi_pct_register *reg, u32 val)
{
acpi_os_write_port(reg->address, val, reg->bit_width);
}
struct drv_cmd {
unsigned int type;
const struct cpumask *mask;
union {
struct msr_addr msr;
struct io_addr io;
} addr;
struct acpi_pct_register *reg;
u32 val;
union {
void (*write)(struct acpi_pct_register *reg, u32 val);
u32 (*read)(struct acpi_pct_register *reg);
} func;
};
/* Called via smp_call_function_single(), on the target CPU */
static void do_drv_read(void *_cmd)
{
struct drv_cmd *cmd = _cmd;
u32 h;
switch (cmd->type) {
case SYSTEM_INTEL_MSR_CAPABLE:
case SYSTEM_AMD_MSR_CAPABLE:
rdmsr(cmd->addr.msr.reg, cmd->val, h);
break;
case SYSTEM_IO_CAPABLE:
acpi_os_read_port((acpi_io_address)cmd->addr.io.port,
&cmd->val,
(u32)cmd->addr.io.bit_width);
break;
default:
break;
}
cmd->val = cmd->func.read(cmd->reg);
}
/* Called via smp_call_function_many(), on the target CPUs */
static void do_drv_write(void *_cmd)
static u32 drv_read(struct acpi_cpufreq_data *data, const struct cpumask *mask)
{
struct drv_cmd *cmd = _cmd;
u32 lo, hi;
struct acpi_processor_performance *perf = to_perf_data(data);
struct drv_cmd cmd = {
.reg = &perf->control_register,
.func.read = data->cpu_freq_read,
};
int err;
switch (cmd->type) {
case SYSTEM_INTEL_MSR_CAPABLE:
rdmsr(cmd->addr.msr.reg, lo, hi);
lo = (lo & ~INTEL_MSR_RANGE) | (cmd->val & INTEL_MSR_RANGE);
wrmsr(cmd->addr.msr.reg, lo, hi);
break;
case SYSTEM_AMD_MSR_CAPABLE:
wrmsr(cmd->addr.msr.reg, cmd->val, 0);
break;
case SYSTEM_IO_CAPABLE:
acpi_os_write_port((acpi_io_address)cmd->addr.io.port,
cmd->val,
(u32)cmd->addr.io.bit_width);
break;
default:
break;
}
err = smp_call_function_any(mask, do_drv_read, &cmd, 1);
WARN_ON_ONCE(err); /* smp_call_function_any() was buggy? */
return cmd.val;
}
static void drv_read(struct drv_cmd *cmd)
/* Called via smp_call_function_many(), on the target CPUs */
static void do_drv_write(void *_cmd)
{
int err;
cmd->val = 0;
struct drv_cmd *cmd = _cmd;
err = smp_call_function_any(cmd->mask, do_drv_read, cmd, 1);
WARN_ON_ONCE(err); /* smp_call_function_any() was buggy? */
cmd->func.write(cmd->reg, cmd->val);
}
static void drv_write(struct drv_cmd *cmd)
static void drv_write(struct acpi_cpufreq_data *data,
const struct cpumask *mask, u32 val)
{
struct acpi_processor_performance *perf = to_perf_data(data);
struct drv_cmd cmd = {
.reg = &perf->control_register,
.val = val,
.func.write = data->cpu_freq_write,
};
int this_cpu;
this_cpu = get_cpu();
if (cpumask_test_cpu(this_cpu, cmd->mask))
do_drv_write(cmd);
smp_call_function_many(cmd->mask, do_drv_write, cmd, 1);
if (cpumask_test_cpu(this_cpu, mask))
do_drv_write(&cmd);
smp_call_function_many(mask, do_drv_write, &cmd, 1);
put_cpu();
}
static u32
get_cur_val(const struct cpumask *mask, struct acpi_cpufreq_data *data)
static u32 get_cur_val(const struct cpumask *mask, struct acpi_cpufreq_data *data)
{
struct acpi_processor_performance *perf;
struct drv_cmd cmd;
u32 val;
if (unlikely(cpumask_empty(mask)))
return 0;
switch (data->cpu_feature) {
case SYSTEM_INTEL_MSR_CAPABLE:
cmd.type = SYSTEM_INTEL_MSR_CAPABLE;
cmd.addr.msr.reg = MSR_IA32_PERF_CTL;
break;
case SYSTEM_AMD_MSR_CAPABLE:
cmd.type = SYSTEM_AMD_MSR_CAPABLE;
cmd.addr.msr.reg = MSR_AMD_PERF_CTL;
break;
case SYSTEM_IO_CAPABLE:
cmd.type = SYSTEM_IO_CAPABLE;
perf = to_perf_data(data);
cmd.addr.io.port = perf->control_register.address;
cmd.addr.io.bit_width = perf->control_register.bit_width;
break;
default:
return 0;
}
cmd.mask = mask;
drv_read(&cmd);
val = drv_read(data, mask);
pr_debug("get_cur_val = %u\n", cmd.val);
pr_debug("get_cur_val = %u\n", val);
return cmd.val;
return val;
}
static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
......@@ -416,7 +412,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
{
struct acpi_cpufreq_data *data = policy->driver_data;
struct acpi_processor_performance *perf;
struct drv_cmd cmd;
const struct cpumask *mask;
unsigned int next_perf_state = 0; /* Index into perf table */
int result = 0;
......@@ -434,42 +430,21 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
} else {
pr_debug("Already at target state (P%d)\n",
next_perf_state);
goto out;
return 0;
}
}
switch (data->cpu_feature) {
case SYSTEM_INTEL_MSR_CAPABLE:
cmd.type = SYSTEM_INTEL_MSR_CAPABLE;
cmd.addr.msr.reg = MSR_IA32_PERF_CTL;
cmd.val = (u32) perf->states[next_perf_state].control;
break;
case SYSTEM_AMD_MSR_CAPABLE:
cmd.type = SYSTEM_AMD_MSR_CAPABLE;
cmd.addr.msr.reg = MSR_AMD_PERF_CTL;
cmd.val = (u32) perf->states[next_perf_state].control;
break;
case SYSTEM_IO_CAPABLE:
cmd.type = SYSTEM_IO_CAPABLE;
cmd.addr.io.port = perf->control_register.address;
cmd.addr.io.bit_width = perf->control_register.bit_width;
cmd.val = (u32) perf->states[next_perf_state].control;
break;
default:
result = -ENODEV;
goto out;
}
/* cpufreq holds the hotplug lock, so we are safe from here on */
if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
cmd.mask = policy->cpus;
else
cmd.mask = cpumask_of(policy->cpu);
/*
* The core won't allow CPUs to go away until the governor has been
* stopped, so we can rely on the stability of policy->cpus.
*/
mask = policy->shared_type == CPUFREQ_SHARED_TYPE_ANY ?
cpumask_of(policy->cpu) : policy->cpus;
drv_write(&cmd);
drv_write(data, mask, perf->states[next_perf_state].control);
if (acpi_pstate_strict) {
if (!check_freqs(cmd.mask, data->freq_table[index].frequency,
if (!check_freqs(mask, data->freq_table[index].frequency,
data)) {
pr_debug("acpi_cpufreq_target failed (%d)\n",
policy->cpu);
......@@ -480,7 +455,6 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
if (!result)
perf->state = next_perf_state;
out:
return result;
}
......@@ -740,15 +714,21 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
}
pr_debug("SYSTEM IO addr space\n");
data->cpu_feature = SYSTEM_IO_CAPABLE;
data->cpu_freq_read = cpu_freq_read_io;
data->cpu_freq_write = cpu_freq_write_io;
break;
case ACPI_ADR_SPACE_FIXED_HARDWARE:
pr_debug("HARDWARE addr space\n");
if (check_est_cpu(cpu)) {
data->cpu_feature = SYSTEM_INTEL_MSR_CAPABLE;
data->cpu_freq_read = cpu_freq_read_intel;
data->cpu_freq_write = cpu_freq_write_intel;
break;
}
if (check_amd_hwpstate_cpu(cpu)) {
data->cpu_feature = SYSTEM_AMD_MSR_CAPABLE;
data->cpu_freq_read = cpu_freq_read_amd;
data->cpu_freq_write = cpu_freq_write_amd;
break;
}
result = -ENODEV;
......
......@@ -21,7 +21,7 @@
#include <asm/msr.h>
#include <asm/cpufeature.h>
#include "cpufreq_governor.h"
#include "cpufreq_ondemand.h"
#define MSR_AMD64_FREQ_SENSITIVITY_ACTUAL 0xc0010080
#define MSR_AMD64_FREQ_SENSITIVITY_REFERENCE 0xc0010081
......@@ -45,10 +45,10 @@ static unsigned int amd_powersave_bias_target(struct cpufreq_policy *policy,
long d_actual, d_reference;
struct msr actual, reference;
struct cpu_data_t *data = &per_cpu(cpu_data, policy->cpu);
struct dbs_data *od_data = policy->governor_data;
struct policy_dbs_info *policy_dbs = policy->governor_data;
struct dbs_data *od_data = policy_dbs->dbs_data;
struct od_dbs_tuners *od_tuners = od_data->tuners;
struct od_cpu_dbs_info_s *od_info =
od_data->cdata->get_cpu_dbs_info_s(policy->cpu);
struct od_policy_dbs_info *od_info = to_dbs_info(policy_dbs);
if (!od_info->freq_table)
return freq_next;
......
/*
* Header file for CPUFreq ondemand governor and related code.
*
* Copyright (C) 2016, Intel Corporation
* Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include "cpufreq_governor.h"
struct od_policy_dbs_info {
struct policy_dbs_info policy_dbs;
struct cpufreq_frequency_table *freq_table;
unsigned int freq_lo;
unsigned int freq_lo_delay_us;
unsigned int freq_hi_delay_us;
unsigned int sample_type:1;
};
static inline struct od_policy_dbs_info *to_dbs_info(struct policy_dbs_info *policy_dbs)
{
return container_of(policy_dbs, struct od_policy_dbs_info, policy_dbs);
}
struct od_dbs_tuners {
unsigned int powersave_bias;
};
......@@ -33,10 +33,7 @@ static int cpufreq_governor_powersave(struct cpufreq_policy *policy,
return 0;
}
#ifndef CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE
static
#endif
struct cpufreq_governor cpufreq_gov_powersave = {
static struct cpufreq_governor cpufreq_gov_powersave = {
.name = "powersave",
.governor = cpufreq_governor_powersave,
.owner = THIS_MODULE,
......@@ -57,6 +54,11 @@ MODULE_DESCRIPTION("CPUfreq policy governor 'powersave'");
MODULE_LICENSE("GPL");
#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE
struct cpufreq_governor *cpufreq_default_governor(void)
{
return &cpufreq_gov_powersave;
}
fs_initcall(cpufreq_gov_powersave_init);
#else
module_init(cpufreq_gov_powersave_init);
......