Commit 7d2b6ef1 authored by Linus Torvalds

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Olof Johansson:
 "Driver updates for v4.1.  Some of these are for drivers/soc, where we
  find more and more SoC-specific drivers these days.  Some are for
  other driver subsystems where we have received acks from the
  appropriate maintainers.

  The larger parts of this branch are:

   - MediaTek support for their PMIC wrapper interface, a high-level
     interface for talking to the system PMIC over a dedicated I2C
     interface.

   - Qualcomm SCM driver has been moved to drivers/firmware.  It's used
     for CPU up/down and needs to be in a shared location for arm/arm64
     common code.

   - cleanup of ARM-CCI PMU code.

   - another set of cleanups to the OMAP GPMC code"
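
For the Qualcomm SCM move mentioned in the list above, the boot-address call is now exported from drivers/firmware/qcom_scm.c. A minimal sketch of a caller, mirroring the arch/arm/mach-qcom/platsmp.c change further down in this diff (the secondary_startup_arm entry symbol is assumed to be provided by the platform's low-level boot code):

#include <linux/cpumask.h>
#include <linux/qcom_scm.h>

/* Low-level secondary entry point; provided elsewhere by the platform. */
extern void secondary_startup_arm(void);

static int qcom_cold_boot_sketch(void)
{
	/*
	 * One call programs the cold-boot entry for all present CPUs,
	 * replacing the per-CPU SCM_FLAG_COLDBOOT_* bookkeeping that the
	 * old mach-qcom/scm-boot.c interface required.
	 */
	return qcom_scm_set_cold_boot_addr(secondary_startup_arm,
					   cpu_present_mask);
}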

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (43 commits)
  soc/mediatek: Remove unused variables
  clocksource: atmel-st: select MFD_SYSCON
  soc: mediatek: Add PMIC wrapper for MT8135 and MT8173 SoCs
  arm-cci: Fix CCI PMU event validation
  arm-cci: Split the code for PMU vs driver support
  arm-cci: Get rid of secure transactions for PMU driver
  arm-cci: Abstract the CCI400 PMU specific definitions
  arm-cci: Rearrange code for splitting PMU vs driver code
  drivers: cci: reject groups spanning multiple HW PMUs
  ARM: at91: remove useless include
  clocksource: atmel-st: remove mach/hardware dependency
  clocksource: atmel-st: use syscon/regmap
  ARM: at91: time: move the system timer driver to drivers/clocksource
  ARM: at91: properly initialize timer
  ARM: at91: at91rm9200: remove deprecated arm_pm_restart
  watchdog: at91rm9200: implement restart handler
  watchdog: at91rm9200: use the system timer syscon
  mfd: syscon: Add atmel system timer registers definition
  ARM: at91/dt: declare atmel,at91rm9200-st as a syscon
  soc: qcom: gsbi: Add support for ADM CRCI muxing
  ...
parents 5c73cc4b 7415d97e
......@@ -46,10 +46,12 @@ PIT Timer required properties:
shared across all System Controller members.
System Timer (ST) required properties:
- compatible: Should be "atmel,at91rm9200-st"
- compatible: Should be "atmel,at91rm9200-st", "syscon", "simple-mfd"
- reg: Should contain registers location and length
- interrupts: Should contain interrupt for the ST which is the IRQ line
shared across all System Controller members.
Its subnodes can be:
- watchdog: compatible should be "atmel,at91rm9200-wdt"
TC/TCLIB Timer required properties:
- compatible: Should be "atmel,<chip>-tcb".
......
......@@ -94,8 +94,11 @@ specific to ARM.
- compatible
Usage: required
Value type: <string>
Definition: must be "arm,cci-400-pmu"
Definition: Must contain one of:
"arm,cci-400-pmu,r0"
"arm,cci-400-pmu,r1"
"arm,cci-400-pmu" - DEPRECATED, permitted only where OS has
secure access to CCI registers
- reg:
Usage: required
Value type: Integer cells. A register entry, expressed
......
Renesas Bus State Controller (BSC)
==================================
The Renesas Bus State Controller (BSC, sometimes called "LBSC within Bus
Bridge", or "External Bus Interface") can be found in several Renesas ARM SoCs.
It provides an external bus for connecting multiple external devices to the
SoC, driving several chip select lines, e.g. for NOR FLASH, Ethernet and USB.
While the BSC is a fairly simple memory-mapped bus, it may be part of a PM
domain, and may have a gateable functional clock.
Before a device connected to the BSC can be accessed, the PM domain
containing the BSC must be powered on, and the functional clock
driving the BSC must be enabled.
The bindings for the BSC extend the bindings for "simple-pm-bus".
Required properties
- compatible: Must contain an SoC-specific value, and "renesas,bsc" and
"simple-pm-bus" as fallbacks.
SoC-specific values can be:
"renesas,bsc-r8a73a4" for R-Mobile APE6 (r8a73a4)
"renesas,bsc-sh73a0" for SH-Mobile AG5 (sh73a0)
- #address-cells, #size-cells, ranges: Must describe the mapping between
parent address and child address spaces.
- reg: Must contain the base address and length to access the bus controller.
Optional properties:
- interrupts: Must contain a reference to the BSC interrupt, if available.
- clocks: Must contain a reference to the functional clock, if available.
- power-domains: Must contain a reference to the PM domain, if available.
Example:
bsc: bus@fec10000 {
compatible = "renesas,bsc-sh73a0", "renesas,bsc",
"simple-pm-bus";
#address-cells = <1>;
#size-cells = <1>;
ranges = <0 0 0x20000000>;
reg = <0xfec10000 0x400>;
interrupts = <0 39 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&zb_clk>;
power-domains = <&pd_a4s>;
};
Simple Power-Managed Bus
========================
A Simple Power-Managed Bus is a transparent bus that doesn't need a real
driver, as it's typically initialized by the boot loader.
However, its bus controller is part of a PM domain, or under the control of a
functional clock. Hence, the bus controller's PM domain and/or clock must be
enabled for child devices connected to the bus (either on-SoC or externally)
to function.
While "simple-pm-bus" follows the "simple-bus" set of properties, as specified
in ePAPR, it is not an extension of "simple-bus".
Required properties:
- compatible: Must contain at least "simple-pm-bus".
Must not contain "simple-bus".
It's recommended to let this be preceded by one or more
vendor-specific compatible values.
- #address-cells, #size-cells, ranges: Must describe the mapping between
parent address and child address spaces.
Optional platform-specific properties for clock or PM domain control (at least
one of them is required):
- clocks: Must contain a reference to the functional clock(s),
- power-domains: Must contain a reference to the PM domain.
Please refer to the binding documentation for the clock and/or PM domain
providers for more details.
Example:
bsc: bus@fec10000 {
compatible = "renesas,bsc-sh73a0", "renesas,bsc",
"simple-pm-bus";
#address-cells = <1>;
#size-cells = <1>;
ranges = <0 0 0x20000000>;
reg = <0xfec10000 0x400>;
interrupts = <0 39 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&zb_clk>;
power-domains = <&pd_a4s>;
};
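
Devices sitting behind a simple-pm-bus rely on runtime PM so that the bus controller's clock and/or PM domain are enabled before they touch their registers. A hypothetical child-driver sketch (device and function names are illustrative, not part of this series):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int child_probe(struct platform_device *pdev)
{
	/* Runtime-resuming the child also resumes its parent, the
	 * simple-pm-bus device, which enables the bus clock/PM domain. */
	pm_runtime_enable(&pdev->dev);
	pm_runtime_get_sync(&pdev->dev);

	/* ... device registers behind the bus are now accessible ... */

	return 0;
}

static int child_remove(struct platform_device *pdev)
{
	pm_runtime_put(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	return 0;
}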
......@@ -6,7 +6,8 @@ configuration settings. The mode setting will govern the input/output mode of
the 4 GSBI IOs.
Required properties:
- compatible: must contain "qcom,gsbi-v1.0.0" for APQ8064/IPQ8064
- compatible: Should contain "qcom,gsbi-v1.0.0"
- cell-index: Should contain the GSBI index
- reg: Address range for GSBI registers
- clocks: required clock
- clock-names: must contain "iface" entry
......@@ -16,6 +17,8 @@ Required properties:
Optional properties:
- qcom,crci : indicates CRCI MUX value for QUP CRCI ports. Please reference
dt-bindings/soc/qcom,gsbi.h for valid CRCI mux values.
- syscon-tcsr: indicates phandle of TCSR syscon node. Required if child uses
dma.
Required properties if child node exists:
- #address-cells: Must be 1
......@@ -39,6 +42,7 @@ Example for APQ8064:
gsbi4@16300000 {
compatible = "qcom,gsbi-v1.0.0";
cell-index = <4>;
reg = <0x16300000 0x100>;
clocks = <&gcc GSBI4_H_CLK>;
clock-names = "iface";
......@@ -48,22 +52,24 @@ Example for APQ8064:
qcom,mode = <GSBI_PROT_I2C_UART>;
qcom,crci = <GSBI_CRCI_QUP>;
syscon-tcsr = <&tcsr>;
/* child nodes go under here */
i2c_qup4: i2c@16380000 {
compatible = "qcom,i2c-qup-v1.1.1";
reg = <0x16380000 0x1000>;
interrupts = <0 153 0>;
compatible = "qcom,i2c-qup-v1.1.1";
reg = <0x16380000 0x1000>;
interrupts = <0 153 0>;
clocks = <&gcc GSBI4_QUP_CLK>, <&gcc GSBI4_H_CLK>;
clock-names = "core", "iface";
clocks = <&gcc GSBI4_QUP_CLK>, <&gcc GSBI4_H_CLK>;
clock-names = "core", "iface";
clock-frequency = <200000>;
clock-frequency = <200000>;
#address-cells = <1>;
#size-cells = <0>;
#address-cells = <1>;
#size-cells = <0>;
};
};
uart4: serial@16340000 {
compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm";
......@@ -76,3 +82,7 @@ Example for APQ8064:
};
};
tcsr: syscon@1a400000 {
compatible = "qcom,apq8064-tcsr", "syscon";
reg = <0x1a400000 0x100>;
};
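
The syscon-tcsr phandle and the tcsr node above are how the GSBI driver reaches the TCSR to program ADM CRCI muxing for its children. A hedged sketch of the lookup (the register offset below is a placeholder, not the real TCSR layout; the actual mux values come from dt-bindings/soc/qcom,gsbi.h):

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>

/* Placeholder offset for illustration only; not the real TCSR register map. */
#define TCSR_ADM_CRCI_SEL_EXAMPLE	0x70

static int gsbi_crci_mux_sketch(struct device_node *node, u32 crci_mask)
{
	struct regmap *tcsr;

	/* Follow the new "syscon-tcsr" phandle to the shared TCSR regmap. */
	tcsr = syscon_regmap_lookup_by_phandle(node, "syscon-tcsr");
	if (IS_ERR(tcsr))
		return PTR_ERR(tcsr);

	/* Set the CRCI mux bits for this GSBI's QUP/UART ports. */
	return regmap_update_bits(tcsr, TCSR_ADM_CRCI_SEL_EXAMPLE,
				  crci_mask, crci_mask);
}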
......@@ -1327,6 +1327,7 @@ F: drivers/tty/serial/msm_serial.h
F: drivers/tty/serial/msm_serial.c
F: drivers/*/pm8???-*
F: drivers/mfd/ssbi.c
F: drivers/firmware/qcom_scm.c
T: git git://git.kernel.org/pub/scm/linux/kernel/git/galak/linux-qcom.git
ARM/RADISYS ENP2611 MACHINE SUPPORT
......
......@@ -2146,6 +2146,8 @@ source "net/Kconfig"
source "drivers/Kconfig"
source "drivers/firmware/Kconfig"
source "fs/Kconfig"
source "arch/arm/Kconfig.debug"
......
......@@ -356,9 +356,13 @@ macb0_clk: macb0_clk {
};
st: timer@fffffd00 {
compatible = "atmel,at91rm9200-st";
compatible = "atmel,at91rm9200-st", "syscon", "simple-mfd";
reg = <0xfffffd00 0x100>;
interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>;
watchdog {
compatible = "atmel,at91rm9200-wdt";
};
};
rtc: rtc@fffffe00 {
......
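
The DT change above turns the AT91RM9200 system timer into a syscon with a watchdog subnode, matching the binding earlier in this diff. A hedged sketch of how the watchdog driver can reach the shared register map through its parent node and reproduce the reset sequence removed from at91rm9200.c below (function names are illustrative, not the exact upstream driver):

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/atmel-st.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

static struct regmap *st_regmap;

static int at91wdt_probe_sketch(struct platform_device *pdev)
{
	/* The watchdog node is a child of the "atmel,at91rm9200-st"
	 * syscon, so the regmap is looked up via the parent node. */
	st_regmap = syscon_node_to_regmap(pdev->dev.parent->of_node);
	return PTR_ERR_OR_ZERO(st_regmap);
}

static void at91wdt_restart_sketch(void)
{
	/* Same sequence as the removed at91rm9200_restart(): arm the
	 * watchdog for an immediate reset, then force the restart. */
	regmap_write(st_regmap, AT91_ST_WDMR,
		     AT91_ST_RSTEN | AT91_ST_EXTEN | 1);
	regmap_write(st_regmap, AT91_ST_CR, AT91_ST_WDRST);
}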
/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
/*
* arch/arm/include/asm/arm-cci.h
*
* Copyright (C) 2015 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
......@@ -10,30 +13,30 @@
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
* 02110-1301, USA.
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/module.h>
#include <linux/slab.h>
#ifndef __ASM_ARM_CCI_H
#define __ASM_ARM_CCI_H
#include "scm.h"
#include "scm-boot.h"
#ifdef CONFIG_MCPM
#include <asm/mcpm.h>
/*
* Set the cold/warm boot address for one of the CPU cores.
* We don't have a reliable way of detecting whether we
* have access to secure-only registers, unless MCPM is
* registered.
*/
int scm_set_boot_addr(u32 addr, int flags)
static inline bool platform_has_secure_cci_access(void)
{
struct {
__le32 flags;
__le32 addr;
} cmd;
return mcpm_is_available();
}
cmd.addr = cpu_to_le32(addr);
cmd.flags = cpu_to_le32(flags);
return scm_call(SCM_SVC_BOOT, SCM_BOOT_ADDR,
&cmd, sizeof(cmd), NULL, 0);
#else
static inline bool platform_has_secure_cci_access(void)
{
return false;
}
EXPORT_SYMBOL(scm_set_boot_addr);
#endif
#endif
......@@ -77,6 +77,7 @@ if SOC_SAM_V4_V5
config SOC_AT91RM9200
bool "AT91RM9200"
select ATMEL_AIC_IRQ
select ATMEL_ST
select COMMON_CLK_AT91
select CPU_ARM920T
select GENERIC_CLOCKEVENTS
......
......@@ -7,7 +7,7 @@ obj-y := soc.o
obj-$(CONFIG_SOC_AT91SAM9) += sam9_smc.o
# CPU-specific support
obj-$(CONFIG_SOC_AT91RM9200) += at91rm9200.o at91rm9200_time.o
obj-$(CONFIG_SOC_AT91RM9200) += at91rm9200.o
obj-$(CONFIG_SOC_AT91SAM9) += at91sam9.o
obj-$(CONFIG_SOC_SAMA5) += sama5.o
......
......@@ -15,8 +15,6 @@
#include <asm/mach/arch.h>
#include <asm/system_misc.h>
#include <mach/at91_st.h>
#include "generic.h"
#include "soc.h"
......@@ -25,21 +23,6 @@ static const struct at91_soc rm9200_socs[] = {
{ /* sentinel */ },
};
static void at91rm9200_restart(enum reboot_mode reboot_mode, const char *cmd)
{
/*
* Perform a hardware reset with the use of the Watchdog timer.
*/
at91_st_write(AT91_ST_WDMR, AT91_ST_RSTEN | AT91_ST_EXTEN | 1);
at91_st_write(AT91_ST_CR, AT91_ST_WDRST);
}
static void __init at91rm9200_dt_timer_init(void)
{
of_clk_init(NULL);
at91rm9200_timer_init();
}
static void __init at91rm9200_dt_device_init(void)
{
struct soc_device *soc;
......@@ -52,7 +35,6 @@ static void __init at91rm9200_dt_device_init(void)
of_platform_populate(NULL, of_default_bus_match_table, NULL, soc_dev);
arm_pm_idle = at91rm9200_idle;
arm_pm_restart = at91rm9200_restart;
at91rm9200_pm_init();
}
......@@ -62,7 +44,6 @@ static const char *at91rm9200_dt_board_compat[] __initconst = {
};
DT_MACHINE_START(at91rm9200_dt, "Atmel AT91RM9200")
.init_time = at91rm9200_dt_timer_init,
.init_machine = at91rm9200_dt_device_init,
.dt_compat = at91rm9200_dt_board_compat,
MACHINE_END
......@@ -18,9 +18,6 @@
extern void __init at91_map_io(void);
extern void __init at91_alt_map_io(void);
/* Timer */
extern void at91rm9200_timer_init(void);
/* idle */
extern void at91rm9200_idle(void);
extern void at91sam9_idle(void);
......
/*
* arch/arm/mach-at91/include/mach/at91_st.h
*
* Copyright (C) 2005 Ivan Kokshaysky
* Copyright (C) SAN People
*
* System Timer (ST) - System peripherals registers.
* Based on AT91RM9200 datasheet revision E.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#ifndef AT91_ST_H
#define AT91_ST_H
#ifndef __ASSEMBLY__
extern void __iomem *at91_st_base;
#define at91_st_read(field) \
__raw_readl(at91_st_base + field)
#define at91_st_write(field, value) \
__raw_writel(value, at91_st_base + field)
#else
.extern at91_st_base
#endif
#define AT91_ST_CR 0x00 /* Control Register */
#define AT91_ST_WDRST (1 << 0) /* Watchdog Timer Restart */
#define AT91_ST_PIMR 0x04 /* Period Interval Mode Register */
#define AT91_ST_PIV (0xffff << 0) /* Period Interval Value */
#define AT91_ST_WDMR 0x08 /* Watchdog Mode Register */
#define AT91_ST_WDV (0xffff << 0) /* Watchdog Counter Value */
#define AT91_ST_RSTEN (1 << 16) /* Reset Enable */
#define AT91_ST_EXTEN (1 << 17) /* External Signal Assertion Enable */
#define AT91_ST_RTMR 0x0c /* Real-time Mode Register */
#define AT91_ST_RTPRES (0xffff << 0) /* Real-time Prescalar Value */
#define AT91_ST_SR 0x10 /* Status Register */
#define AT91_ST_PITS (1 << 0) /* Period Interval Timer Status */
#define AT91_ST_WDOVF (1 << 1) /* Watchdog Overflow */
#define AT91_ST_RTTINC (1 << 2) /* Real-time Timer Increment */
#define AT91_ST_ALMS (1 << 3) /* Alarm Status */
#define AT91_ST_IER 0x14 /* Interrupt Enable Register */
#define AT91_ST_IDR 0x18 /* Interrupt Disable Register */
#define AT91_ST_IMR 0x1c /* Interrupt Mask Register */
#define AT91_ST_RTAR 0x20 /* Real-time Alarm Register */
#define AT91_ST_ALMV (0xfffff << 0) /* Alarm Value */
#define AT91_ST_CRTR 0x24 /* Current Real-time Register */
#define AT91_ST_CRTV (0xfffff << 0) /* Current Real-Time Value */
#endif
......@@ -123,7 +123,7 @@ config SOC_EXYNOS5800
config EXYNOS5420_MCPM
bool "Exynos5420 Multi-Cluster PM support"
depends on MCPM && SOC_EXYNOS5420
select ARM_CCI
select ARM_CCI400_PORT_CTRL
select ARM_CPU_SUSPEND
help
This is needed to provide CPU and cluster power management
......
menuconfig ARCH_MEDIATEK
bool "Mediatek MT65xx & MT81xx SoC" if ARCH_MULTI_V7
select ARM_GIC
select PINCTRL
select MTK_TIMER
help
Support for Mediatek MT65xx & MT81xx SoCs
......
......@@ -96,14 +96,6 @@ int gpmc_nand_init(struct omap_nand_platform_data *gpmc_nand_data,
gpmc_nand_res[1].start = gpmc_get_client_irq(GPMC_IRQ_FIFOEVENTENABLE);
gpmc_nand_res[2].start = gpmc_get_client_irq(GPMC_IRQ_COUNT_EVENT);
if (gpmc_t) {
err = gpmc_cs_set_timings(gpmc_nand_data->cs, gpmc_t);
if (err < 0) {
pr_err("omap2-gpmc: Unable to set gpmc timings: %d\n", err);
return err;
}
}
memset(&s, 0, sizeof(struct gpmc_settings));
if (gpmc_nand_data->of_node)
gpmc_read_settings_dt(gpmc_nand_data->of_node, &s);
......@@ -111,6 +103,16 @@ int gpmc_nand_init(struct omap_nand_platform_data *gpmc_nand_data,
gpmc_set_legacy(gpmc_nand_data, &s);
s.device_nand = true;
if (gpmc_t) {
err = gpmc_cs_set_timings(gpmc_nand_data->cs, gpmc_t, &s);
if (err < 0) {
pr_err("omap2-gpmc: Unable to set gpmc timings: %d\n",
err);
return err;
}
}
err = gpmc_cs_program_settings(gpmc_nand_data->cs, &s);
if (err < 0)
goto out_free_cs;
......
......@@ -293,7 +293,7 @@ static int omap2_onenand_setup_async(void __iomem *onenand_base)
if (ret < 0)
return ret;
ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t);
ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_async);
if (ret < 0)
return ret;
......@@ -331,7 +331,7 @@ static int omap2_onenand_setup_sync(void __iomem *onenand_base, int *freq_ptr)
if (ret < 0)
return ret;
ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t);
ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_sync);
if (ret < 0)
return ret;
......
......@@ -71,7 +71,7 @@ static int tusb_set_async_mode(unsigned sysclk_ps)
gpmc_calc_timings(&t, &tusb_async, &dev_t);
return gpmc_cs_set_timings(async_cs, &t);
return gpmc_cs_set_timings(async_cs, &t, &tusb_async);
}
static int tusb_set_sync_mode(unsigned sysclk_ps)
......@@ -98,7 +98,7 @@ static int tusb_set_sync_mode(unsigned sysclk_ps)
gpmc_calc_timings(&t, &tusb_sync, &dev_t);
return gpmc_cs_set_timings(sync_cs, &t);
return gpmc_cs_set_timings(sync_cs, &t, &tusb_sync);
}
/* tusb driver calls this when it changes the chip's clocking */
......
......@@ -22,7 +22,4 @@ config ARCH_MSM8974
bool "Enable support for MSM8974"
select HAVE_ARM_ARCH_TIMER
config QCOM_SCM
bool
endif
obj-y := board.o
obj-$(CONFIG_SMP) += platsmp.o
obj-$(CONFIG_QCOM_SCM) += scm.o scm-boot.o
CFLAGS_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1)
......@@ -17,10 +17,10 @@
#include <linux/of_address.h>
#include <linux/smp.h>
#include <linux/io.h>
#include <linux/qcom_scm.h>
#include <asm/smp_plat.h>
#include "scm-boot.h"
#define VDD_SC1_ARRAY_CLAMP_GFS_CTL 0x35a0
#define SCSS_CPU1CORE_RESET 0x2d80
......@@ -319,25 +319,10 @@ static int kpssv2_boot_secondary(unsigned int cpu, struct task_struct *idle)
static void __init qcom_smp_prepare_cpus(unsigned int max_cpus)
{
int cpu, map;
unsigned int flags = 0;
static const int cold_boot_flags[] = {
0,
SCM_FLAG_COLDBOOT_CPU1,
SCM_FLAG_COLDBOOT_CPU2,
SCM_FLAG_COLDBOOT_CPU3,
};
for_each_present_cpu(cpu) {
map = cpu_logical_map(cpu);
if (WARN_ON(map >= ARRAY_SIZE(cold_boot_flags))) {
set_cpu_present(cpu, false);
continue;
}
flags |= cold_boot_flags[map];
}
int cpu;
if (scm_set_boot_addr(virt_to_phys(secondary_startup_arm), flags)) {
if (qcom_scm_set_cold_boot_addr(secondary_startup_arm,
cpu_present_mask)) {
for_each_present_cpu(cpu) {
if (cpu == smp_processor_id())
continue;
......
......@@ -54,7 +54,7 @@ config ARCH_VEXPRESS_CORTEX_A5_A9_ERRATA
config ARCH_VEXPRESS_DCSCB
bool "Dual Cluster System Control Block (DCSCB) support"
depends on MCPM
select ARM_CCI
select ARM_CCI400_PORT_CTRL
help
Support for the Dual Cluster System Configuration Block (DCSCB).
This is needed to provide CPU and cluster power management
......@@ -72,7 +72,7 @@ config ARCH_VEXPRESS_SPC
config ARCH_VEXPRESS_TC2_PM
bool "Versatile Express TC2 power management"
depends on MCPM
select ARM_CCI
select ARM_CCI400_PORT_CTRL
select ARCH_VEXPRESS_SPC
select ARM_CPU_SUSPEND
help
......
/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
/*
* arch/arm64/include/asm/arm-cci.h
*
* Copyright (C) 2015 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __MACH_SCM_H
#define __MACH_SCM_H
#define SCM_SVC_BOOT 0x1
#define SCM_SVC_PIL 0x2
extern int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len,
void *resp_buf, size_t resp_len);
#define SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF))
#ifndef __ASM_ARM_CCI_H
#define __ASM_ARM_CCI_H
extern u32 scm_get_version(void);
static inline bool platform_has_secure_cci_access(void)
{
return false;
}
#endif
......@@ -4,6 +4,41 @@
menu "Bus devices"
config ARM_CCI
bool
config ARM_CCI400_COMMON
bool
select ARM_CCI
config ARM_CCI400_PMU
bool "ARM CCI400 PMU support"
default y
depends on ARM || ARM64
depends on HW_PERF_EVENTS
select ARM_CCI400_COMMON
help
Support for PMU events monitoring on the ARM CCI cache coherent
interconnect.
If unsure, say Y
config ARM_CCI400_PORT_CTRL
bool
depends on ARM && OF && CPU_V7
select ARM_CCI400_COMMON
help
Low level power management driver for CCI400 cache coherent
interconnect for ARM platforms.
config ARM_CCN
bool "ARM CCN driver support"
depends on ARM || ARM64
depends on PERF_EVENTS
help
PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
interconnect.
config BRCMSTB_GISB_ARB
bool "Broadcom STB GISB bus arbiter"
depends on ARM || MIPS
......@@ -40,15 +75,6 @@ config MVEBU_MBUS
Driver needed for the MBus configuration on Marvell EBU SoCs
(Kirkwood, Dove, Orion5x, MV78XX0 and Armada 370/XP).
config OMAP_OCP2SCP
tristate "OMAP OCP2SCP DRIVER"
depends on ARCH_OMAP2PLUS
help
Driver to enable ocp2scp module which transforms ocp interface
protocol to scp protocol. In OMAP4, USB PHY is connected via
OCP2SCP and in OMAP5, both USB PHY and SATA PHY is connected via
OCP2SCP.
config OMAP_INTERCONNECT
tristate "OMAP INTERCONNECT DRIVER"
depends on ARCH_OMAP2PLUS
......@@ -56,20 +82,27 @@ config OMAP_INTERCONNECT
help
Driver to enable OMAP interconnect error handling driver.
config ARM_CCI
bool "ARM CCI driver support"
depends on ARM && OF && CPU_V7
config OMAP_OCP2SCP
tristate "OMAP OCP2SCP DRIVER"
depends on ARCH_OMAP2PLUS
help
Driver supporting the CCI cache coherent interconnect for ARM
platforms.
Driver to enable ocp2scp module which transforms ocp interface
protocol to scp protocol. In OMAP4, USB PHY is connected via
OCP2SCP and in OMAP5, both USB PHY and SATA PHY is connected via
OCP2SCP.
config ARM_CCN
bool "ARM CCN driver support"
depends on ARM || ARM64
depends on PERF_EVENTS
config SIMPLE_PM_BUS
bool "Simple Power-Managed Bus Driver"
depends on OF && PM
depends on ARCH_SHMOBILE || COMPILE_TEST
help
PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
interconnect.
Driver for transparent busses that don't need a real driver, but
where the bus controller is part of a PM domain, or under the control
of a functional clock, and thus relies on runtime PM for managing
this PM domain and/or clock.
An example of such a bus controller is the Renesas Bus State
Controller (BSC, sometimes called "LBSC within Bus Bridge", or
"External Bus Interface") as found on several Renesas ARM SoCs.
config VEXPRESS_CONFIG
bool "Versatile Express configuration bus"
......
......@@ -2,17 +2,18 @@
# Makefile for the bus drivers.
#
# Interconnect bus drivers for ARM platforms
obj-$(CONFIG_ARM_CCI) += arm-cci.o
obj-$(CONFIG_ARM_CCN) += arm-ccn.o
obj-$(CONFIG_BRCMSTB_GISB_ARB) += brcmstb_gisb.o
obj-$(CONFIG_IMX_WEIM) += imx-weim.o
obj-$(CONFIG_MIPS_CDMM) += mips_cdmm.o
obj-$(CONFIG_MVEBU_MBUS) += mvebu-mbus.o
obj-$(CONFIG_OMAP_OCP2SCP) += omap-ocp2scp.o
obj-$(CONFIG_IMX_WEIM) += imx-weim.o
obj-$(CONFIG_MIPS_CDMM) += mips_cdmm.o
obj-$(CONFIG_MVEBU_MBUS) += mvebu-mbus.o
# Interconnect bus driver for OMAP SoCs.
obj-$(CONFIG_OMAP_INTERCONNECT) += omap_l3_smx.o omap_l3_noc.o
# Interconnect bus drivers for ARM platforms
obj-$(CONFIG_ARM_CCI) += arm-cci.o
obj-$(CONFIG_ARM_CCN) += arm-ccn.o
obj-$(CONFIG_OMAP_OCP2SCP) += omap-ocp2scp.o
obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
......@@ -29,41 +29,36 @@
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#define DRIVER_NAME "CCI-400"
#define DRIVER_NAME_PMU DRIVER_NAME " PMU"
#define CCI_PORT_CTRL 0x0
#define CCI_CTRL_STATUS 0xc
#define CCI_ENABLE_SNOOP_REQ 0x1
#define CCI_ENABLE_DVM_REQ 0x2
#define CCI_ENABLE_REQ (CCI_ENABLE_SNOOP_REQ | CCI_ENABLE_DVM_REQ)
static void __iomem *cci_ctrl_base;
static unsigned long cci_ctrl_phys;
#ifdef CONFIG_ARM_CCI400_PORT_CTRL
struct cci_nb_ports {
unsigned int nb_ace;
unsigned int nb_ace_lite;
};
enum cci_ace_port_type {
ACE_INVALID_PORT = 0x0,
ACE_PORT,
ACE_LITE_PORT,
static const struct cci_nb_ports cci400_ports = {
.nb_ace = 2,
.nb_ace_lite = 3
};
struct cci_ace_port {
void __iomem *base;
unsigned long phys;
enum cci_ace_port_type type;
struct device_node *dn;
};
#define CCI400_PORTS_DATA (&cci400_ports)
#else
#define CCI400_PORTS_DATA (NULL)
#endif
static struct cci_ace_port *ports;
static unsigned int nb_cci_ports;
static const struct of_device_id arm_cci_matches[] = {
#ifdef CONFIG_ARM_CCI400_COMMON
{.compatible = "arm,cci-400", .data = CCI400_PORTS_DATA },
#endif
{},
};
static void __iomem *cci_ctrl_base;
static unsigned long cci_ctrl_phys;
#ifdef CONFIG_ARM_CCI400_PMU
#ifdef CONFIG_HW_PERF_EVENTS
#define DRIVER_NAME "CCI-400"
#define DRIVER_NAME_PMU DRIVER_NAME " PMU"
#define CCI_PMCR 0x0100
#define CCI_PID2 0x0fe8
......@@ -75,20 +70,6 @@ static unsigned long cci_ctrl_phys;
#define CCI_PID2_REV_MASK 0xf0
#define CCI_PID2_REV_SHIFT 4
/* Port ids */
#define CCI_PORT_S0 0
#define CCI_PORT_S1 1
#define CCI_PORT_S2 2
#define CCI_PORT_S3 3
#define CCI_PORT_S4 4
#define CCI_PORT_M0 5
#define CCI_PORT_M1 6
#define CCI_PORT_M2 7
#define CCI_REV_R0 0
#define CCI_REV_R1 1
#define CCI_REV_R1_PX 5
#define CCI_PMU_EVT_SEL 0x000
#define CCI_PMU_CNTR 0x004
#define CCI_PMU_CNTR_CTRL 0x008
......@@ -100,76 +81,22 @@ static unsigned long cci_ctrl_phys;
#define CCI_PMU_CNTR_MASK ((1ULL << 32) -1)
/*
* Instead of an event id to monitor CCI cycles, a dedicated counter is
* provided. Use 0xff to represent CCI cycles and hope that no future revisions
* make use of this event in hardware.
*/
enum cci400_perf_events {
CCI_PMU_CYCLES = 0xff
};
#define CCI_PMU_EVENT_MASK 0xff
#define CCI_PMU_EVENT_MASK 0xffUL
#define CCI_PMU_EVENT_SOURCE(event) ((event >> 5) & 0x7)
#define CCI_PMU_EVENT_CODE(event) (event & 0x1f)
#define CCI_PMU_MAX_HW_EVENTS 5 /* CCI PMU has 4 counters + 1 cycle counter */
#define CCI_PMU_CYCLE_CNTR_IDX 0
#define CCI_PMU_CNTR0_IDX 1
#define CCI_PMU_CNTR_LAST(cci_pmu) (CCI_PMU_CYCLE_CNTR_IDX + cci_pmu->num_events - 1)
/*
* CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8
* ports and bits 4:0 are event codes. There are different event codes
* associated with each port type.
*
* Additionally, the range of events associated with the port types changed
* between Rev0 and Rev1.
*
* The constants below define the range of valid codes for each port type for
* the different revisions and are used to validate the event to be monitored.
*/
#define CCI_REV_R0_SLAVE_PORT_MIN_EV 0x00
#define CCI_REV_R0_SLAVE_PORT_MAX_EV 0x13
#define CCI_REV_R0_MASTER_PORT_MIN_EV 0x14
#define CCI_REV_R0_MASTER_PORT_MAX_EV 0x1a
#define CCI_REV_R1_SLAVE_PORT_MIN_EV 0x00
#define CCI_REV_R1_SLAVE_PORT_MAX_EV 0x14
#define CCI_REV_R1_MASTER_PORT_MIN_EV 0x00
#define CCI_REV_R1_MASTER_PORT_MAX_EV 0x11
struct pmu_port_event_ranges {
u8 slave_min;
u8 slave_max;
u8 master_min;
u8 master_max;
};
static struct pmu_port_event_ranges port_event_range[] = {
[CCI_REV_R0] = {
.slave_min = CCI_REV_R0_SLAVE_PORT_MIN_EV,
.slave_max = CCI_REV_R0_SLAVE_PORT_MAX_EV,
.master_min = CCI_REV_R0_MASTER_PORT_MIN_EV,
.master_max = CCI_REV_R0_MASTER_PORT_MAX_EV,
},
[CCI_REV_R1] = {
.slave_min = CCI_REV_R1_SLAVE_PORT_MIN_EV,
.slave_max = CCI_REV_R1_SLAVE_PORT_MAX_EV,
.master_min = CCI_REV_R1_MASTER_PORT_MIN_EV,
.master_max = CCI_REV_R1_MASTER_PORT_MAX_EV,
},
/* Types of interfaces that can generate events */
enum {
CCI_IF_SLAVE,
CCI_IF_MASTER,
CCI_IF_MAX,
};
/*
* Export different PMU names for the different revisions so userspace knows
* because the event ids are different
*/
static char *const pmu_names[] = {
[CCI_REV_R0] = "CCI_400",
[CCI_REV_R1] = "CCI_400_r1",
struct event_range {
u32 min;
u32 max;
};
struct cci_pmu_hw_events {
......@@ -178,13 +105,20 @@ struct cci_pmu_hw_events {
raw_spinlock_t pmu_lock;
};
struct cci_pmu_model {
char *name;
struct event_range event_ranges[CCI_IF_MAX];
};
static struct cci_pmu_model cci_pmu_models[];
struct cci_pmu {
void __iomem *base;
struct pmu pmu;
int nr_irqs;
int irqs[CCI_PMU_MAX_HW_EVENTS];
unsigned long active_irqs;
struct pmu_port_event_ranges *port_ranges;
const struct cci_pmu_model *model;
struct cci_pmu_hw_events hw_events;
struct platform_device *plat_device;
int num_events;
......@@ -196,52 +130,63 @@ static struct cci_pmu *pmu;
#define to_cci_pmu(c) (container_of(c, struct cci_pmu, pmu))
static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs)
{
int i;
for (i = 0; i < nr_irqs; i++)
if (irq == irqs[i])
return true;
return false;
}
/* Port ids */
#define CCI_PORT_S0 0
#define CCI_PORT_S1 1
#define CCI_PORT_S2 2
#define CCI_PORT_S3 3
#define CCI_PORT_S4 4
#define CCI_PORT_M0 5
#define CCI_PORT_M1 6
#define CCI_PORT_M2 7
static int probe_cci_revision(void)
{
int rev;
rev = readl_relaxed(cci_ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK;
rev >>= CCI_PID2_REV_SHIFT;
#define CCI_REV_R0 0
#define CCI_REV_R1 1
#define CCI_REV_R1_PX 5
if (rev < CCI_REV_R1_PX)
return CCI_REV_R0;
else
return CCI_REV_R1;
}
/*
* Instead of an event id to monitor CCI cycles, a dedicated counter is
* provided. Use 0xff to represent CCI cycles and hope that no future revisions
* make use of this event in hardware.
*/
enum cci400_perf_events {
CCI_PMU_CYCLES = 0xff
};
static struct pmu_port_event_ranges *port_range_by_rev(void)
{
int rev = probe_cci_revision();
#define CCI_PMU_CYCLE_CNTR_IDX 0
#define CCI_PMU_CNTR0_IDX 1
#define CCI_PMU_CNTR_LAST(cci_pmu) (CCI_PMU_CYCLE_CNTR_IDX + cci_pmu->num_events - 1)
return &port_event_range[rev];
}
/*
* CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8
* ports and bits 4:0 are event codes. There are different event codes
* associated with each port type.
*
* Additionally, the range of events associated with the port types changed
* between Rev0 and Rev1.
*
* The constants below define the range of valid codes for each port type for
* the different revisions and are used to validate the event to be monitored.
*/
static int pmu_is_valid_slave_event(u8 ev_code)
{
return pmu->port_ranges->slave_min <= ev_code &&
ev_code <= pmu->port_ranges->slave_max;
}
#define CCI_REV_R0_SLAVE_PORT_MIN_EV 0x00
#define CCI_REV_R0_SLAVE_PORT_MAX_EV 0x13
#define CCI_REV_R0_MASTER_PORT_MIN_EV 0x14
#define CCI_REV_R0_MASTER_PORT_MAX_EV 0x1a
static int pmu_is_valid_master_event(u8 ev_code)
{
return pmu->port_ranges->master_min <= ev_code &&
ev_code <= pmu->port_ranges->master_max;
}
#define CCI_REV_R1_SLAVE_PORT_MIN_EV 0x00
#define CCI_REV_R1_SLAVE_PORT_MAX_EV 0x14
#define CCI_REV_R1_MASTER_PORT_MIN_EV 0x00
#define CCI_REV_R1_MASTER_PORT_MAX_EV 0x11
static int pmu_validate_hw_event(u8 hw_event)
static int pmu_validate_hw_event(unsigned long hw_event)
{
u8 ev_source = CCI_PMU_EVENT_SOURCE(hw_event);
u8 ev_code = CCI_PMU_EVENT_CODE(hw_event);
int if_type;
if (hw_event & ~CCI_PMU_EVENT_MASK)
return -ENOENT;
switch (ev_source) {
case CCI_PORT_S0:
......@@ -250,21 +195,44 @@ static int pmu_validate_hw_event(u8 hw_event)
case CCI_PORT_S3:
case CCI_PORT_S4:
/* Slave Interface */
if (pmu_is_valid_slave_event(ev_code))
return hw_event;
if_type = CCI_IF_SLAVE;
break;
case CCI_PORT_M0:
case CCI_PORT_M1:
case CCI_PORT_M2:
/* Master Interface */
if (pmu_is_valid_master_event(ev_code))
return hw_event;
if_type = CCI_IF_MASTER;
break;
default:
return -ENOENT;
}
if (ev_code >= pmu->model->event_ranges[if_type].min &&
ev_code <= pmu->model->event_ranges[if_type].max)
return hw_event;
return -ENOENT;
}
static int probe_cci_revision(void)
{
int rev;
rev = readl_relaxed(cci_ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK;
rev >>= CCI_PID2_REV_SHIFT;
if (rev < CCI_REV_R1_PX)
return CCI_REV_R0;
else
return CCI_REV_R1;
}
static const struct cci_pmu_model *probe_cci_model(struct platform_device *pdev)
{
if (platform_has_secure_cci_access())
return &cci_pmu_models[probe_cci_revision()];
return NULL;
}
static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx)
{
return CCI_PMU_CYCLE_CNTR_IDX <= idx &&
......@@ -293,7 +261,6 @@ static void pmu_enable_counter(int idx)
static void pmu_set_event(int idx, unsigned long event)
{
event &= CCI_PMU_EVENT_MASK;
pmu_write_register(event, idx, CCI_PMU_EVT_SEL);
}
......@@ -310,7 +277,7 @@ static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *ev
{
struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
struct hw_perf_event *hw_event = &event->hw;
unsigned long cci_event = hw_event->config_base & CCI_PMU_EVENT_MASK;
unsigned long cci_event = hw_event->config_base;
int idx;
if (cci_event == CCI_PMU_CYCLES) {
......@@ -331,7 +298,7 @@ static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *ev
static int pmu_map_event(struct perf_event *event)
{
int mapping;
u8 config = event->attr.config & CCI_PMU_EVENT_MASK;
unsigned long config = event->attr.config;
if (event->attr.type < PERF_TYPE_MAX)
return -ENOENT;
......@@ -660,12 +627,21 @@ static void cci_pmu_del(struct perf_event *event, int flags)
}
static int
validate_event(struct cci_pmu_hw_events *hw_events,
struct perf_event *event)
validate_event(struct pmu *cci_pmu,
struct cci_pmu_hw_events *hw_events,
struct perf_event *event)
{
if (is_software_event(event))
return 1;
/*
* Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The
* core perf code won't check that the pmu->ctx == leader->ctx
* until after pmu->event_init(event).
*/
if (event->pmu != cci_pmu)
return 0;
if (event->state < PERF_EVENT_STATE_OFF)
return 1;
......@@ -687,15 +663,15 @@ validate_group(struct perf_event *event)
.used_mask = CPU_BITS_NONE,
};
if (!validate_event(&fake_pmu, leader))
if (!validate_event(event->pmu, &fake_pmu, leader))
return -EINVAL;
list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
if (!validate_event(&fake_pmu, sibling))
if (!validate_event(event->pmu, &fake_pmu, sibling))
return -EINVAL;
}
if (!validate_event(&fake_pmu, event))
if (!validate_event(event->pmu, &fake_pmu, event))
return -EINVAL;
return 0;
......@@ -831,9 +807,9 @@ static const struct attribute_group *pmu_attr_groups[] = {
static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev)
{
char *name = pmu_names[probe_cci_revision()];
char *name = cci_pmu->model->name;
cci_pmu->pmu = (struct pmu) {
.name = pmu_names[probe_cci_revision()],
.name = cci_pmu->model->name,
.task_ctx_nr = perf_invalid_context,
.pmu_enable = cci_pmu_enable,
.pmu_disable = cci_pmu_disable,
......@@ -886,22 +862,93 @@ static struct notifier_block cci_pmu_cpu_nb = {
.priority = CPU_PRI_PERF + 1,
};
static struct cci_pmu_model cci_pmu_models[] = {
[CCI_REV_R0] = {
.name = "CCI_400",
.event_ranges = {
[CCI_IF_SLAVE] = {
CCI_REV_R0_SLAVE_PORT_MIN_EV,
CCI_REV_R0_SLAVE_PORT_MAX_EV,
},
[CCI_IF_MASTER] = {
CCI_REV_R0_MASTER_PORT_MIN_EV,
CCI_REV_R0_MASTER_PORT_MAX_EV,
},
},
},
[CCI_REV_R1] = {
.name = "CCI_400_r1",
.event_ranges = {
[CCI_IF_SLAVE] = {
CCI_REV_R1_SLAVE_PORT_MIN_EV,
CCI_REV_R1_SLAVE_PORT_MAX_EV,
},
[CCI_IF_MASTER] = {
CCI_REV_R1_MASTER_PORT_MIN_EV,
CCI_REV_R1_MASTER_PORT_MAX_EV,
},
},
},
};
static const struct of_device_id arm_cci_pmu_matches[] = {
{
.compatible = "arm,cci-400-pmu",
.data = NULL,
},
{
.compatible = "arm,cci-400-pmu,r0",
.data = &cci_pmu_models[CCI_REV_R0],
},
{
.compatible = "arm,cci-400-pmu,r1",
.data = &cci_pmu_models[CCI_REV_R1],
},
{},
};
static inline const struct cci_pmu_model *get_cci_model(struct platform_device *pdev)
{
const struct of_device_id *match = of_match_node(arm_cci_pmu_matches,
pdev->dev.of_node);
if (!match)
return NULL;
if (match->data)
return match->data;
dev_warn(&pdev->dev, "DEPRECATED compatible property,"
" requires secure access to CCI registers");
return probe_cci_model(pdev);
}
static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs)
{
int i;
for (i = 0; i < nr_irqs; i++)
if (irq == irqs[i])
return true;
return false;
}
static int cci_pmu_probe(struct platform_device *pdev)
{
struct resource *res;
int i, ret, irq;
const struct cci_pmu_model *model;
model = get_cci_model(pdev);
if (!model) {
dev_warn(&pdev->dev, "CCI PMU version not supported\n");
return -ENODEV;
}
pmu = devm_kzalloc(&pdev->dev, sizeof(*pmu), GFP_KERNEL);
if (!pmu)
return -ENOMEM;
pmu->model = model;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pmu->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(pmu->base))
......@@ -933,12 +980,6 @@ static int cci_pmu_probe(struct platform_device *pdev)
return -EINVAL;
}
pmu->port_ranges = port_range_by_rev();
if (!pmu->port_ranges) {
dev_warn(&pdev->dev, "CCI PMU version not supported\n");
return -EINVAL;
}
raw_spin_lock_init(&pmu->hw_events.pmu_lock);
mutex_init(&pmu->reserve_mutex);
atomic_set(&pmu->active_events, 0);
......@@ -952,6 +993,7 @@ static int cci_pmu_probe(struct platform_device *pdev)
if (ret)
return ret;
pr_info("ARM %s PMU driver probed", pmu->model->name);
return 0;
}
......@@ -963,7 +1005,66 @@ static int cci_platform_probe(struct platform_device *pdev)
return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
}
#endif /* CONFIG_HW_PERF_EVENTS */
static struct platform_driver cci_pmu_driver = {
.driver = {
.name = DRIVER_NAME_PMU,
.of_match_table = arm_cci_pmu_matches,
},
.probe = cci_pmu_probe,
};
static struct platform_driver cci_platform_driver = {
.driver = {
.name = DRIVER_NAME,
.of_match_table = arm_cci_matches,
},
.probe = cci_platform_probe,
};
static int __init cci_platform_init(void)
{
int ret;
ret = platform_driver_register(&cci_pmu_driver);
if (ret)
return ret;
return platform_driver_register(&cci_platform_driver);
}
#else /* !CONFIG_ARM_CCI400_PMU */
static int __init cci_platform_init(void)
{
return 0;
}
#endif /* CONFIG_ARM_CCI400_PMU */
#ifdef CONFIG_ARM_CCI400_PORT_CTRL
#define CCI_PORT_CTRL 0x0
#define CCI_CTRL_STATUS 0xc
#define CCI_ENABLE_SNOOP_REQ 0x1
#define CCI_ENABLE_DVM_REQ 0x2
#define CCI_ENABLE_REQ (CCI_ENABLE_SNOOP_REQ | CCI_ENABLE_DVM_REQ)
enum cci_ace_port_type {
ACE_INVALID_PORT = 0x0,
ACE_PORT,
ACE_LITE_PORT,
};
struct cci_ace_port {
void __iomem *base;
unsigned long phys;
enum cci_ace_port_type type;
struct device_node *dn;
};
static struct cci_ace_port *ports;
static unsigned int nb_cci_ports;
struct cpu_port {
u64 mpidr;
......@@ -1284,36 +1385,20 @@ int notrace __cci_control_port_by_index(u32 port, bool enable)
}
EXPORT_SYMBOL_GPL(__cci_control_port_by_index);
static const struct cci_nb_ports cci400_ports = {
.nb_ace = 2,
.nb_ace_lite = 3
};
static const struct of_device_id arm_cci_matches[] = {
{.compatible = "arm,cci-400", .data = &cci400_ports },
{},
};
static const struct of_device_id arm_cci_ctrl_if_matches[] = {
{.compatible = "arm,cci-400-ctrl-if", },
{},
};
static int cci_probe(void)
static int cci_probe_ports(struct device_node *np)
{
struct cci_nb_ports const *cci_config;
int ret, i, nb_ace = 0, nb_ace_lite = 0;
struct device_node *np, *cp;
struct device_node *cp;
struct resource res;
const char *match_str;
bool is_ace;
np = of_find_matching_node(NULL, arm_cci_matches);
if (!np)
return -ENODEV;
if (!of_device_is_available(np))
return -ENODEV;
cci_config = of_match_node(arm_cci_matches, np)->data;
if (!cci_config)
......@@ -1325,17 +1410,6 @@ static int cci_probe(void)
if (!ports)
return -ENOMEM;
ret = of_address_to_resource(np, 0, &res);
if (!ret) {
cci_ctrl_base = ioremap(res.start, resource_size(&res));
cci_ctrl_phys = res.start;
}
if (ret || !cci_ctrl_base) {
WARN(1, "unable to ioremap CCI ctrl\n");
ret = -ENXIO;
goto memalloc_err;
}
for_each_child_of_node(np, cp) {
if (!of_match_node(arm_cci_ctrl_if_matches, cp))
continue;
......@@ -1395,12 +1469,37 @@ static int cci_probe(void)
sync_cache_w(&cpu_port);
__sync_cache_range_w(ports, sizeof(*ports) * nb_cci_ports);
pr_info("ARM CCI driver probed\n");
return 0;
}
#else /* !CONFIG_ARM_CCI400_PORT_CTRL */
static inline int cci_probe_ports(struct device_node *np)
{
return 0;
}
#endif /* CONFIG_ARM_CCI400_PORT_CTRL */
memalloc_err:
static int cci_probe(void)
{
int ret;
struct device_node *np;
struct resource res;
np = of_find_matching_node(NULL, arm_cci_matches);
if (!np || !of_device_is_available(np))
return -ENODEV;
kfree(ports);
return ret;
ret = of_address_to_resource(np, 0, &res);
if (!ret) {
cci_ctrl_base = ioremap(res.start, resource_size(&res));
cci_ctrl_phys = res.start;
}
if (ret || !cci_ctrl_base) {
WARN(1, "unable to ioremap CCI ctrl\n");
return -ENXIO;
}
return cci_probe_ports(np);
}
static int cci_init_status = -EAGAIN;
......@@ -1418,42 +1517,6 @@ static int cci_init(void)
return cci_init_status;
}
#ifdef CONFIG_HW_PERF_EVENTS
static struct platform_driver cci_pmu_driver = {
.driver = {
.name = DRIVER_NAME_PMU,
.of_match_table = arm_cci_pmu_matches,
},
.probe = cci_pmu_probe,
};
static struct platform_driver cci_platform_driver = {
.driver = {
.name = DRIVER_NAME,
.of_match_table = arm_cci_matches,
},
.probe = cci_platform_probe,
};
static int __init cci_platform_init(void)
{
int ret;
ret = platform_driver_register(&cci_pmu_driver);
if (ret)
return ret;
return platform_driver_register(&cci_platform_driver);
}
#else
static int __init cci_platform_init(void)
{
return 0;
}
#endif
/*
* To sort out early init calls ordering a helper function is provided to
* check if the CCI driver has been initialized. The function checks if the driver
......
/*
* Simple Power-Managed Bus Driver
*
* Copyright (C) 2014-2015 Glider bvba
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
static int simple_pm_bus_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
dev_dbg(&pdev->dev, "%s\n", __func__);
pm_runtime_enable(&pdev->dev);
if (np)
of_platform_populate(np, NULL, NULL, &pdev->dev);
return 0;
}
static int simple_pm_bus_remove(struct platform_device *pdev)
{
dev_dbg(&pdev->dev, "%s\n", __func__);
pm_runtime_disable(&pdev->dev);
return 0;
}
static const struct of_device_id simple_pm_bus_of_match[] = {
{ .compatible = "simple-pm-bus", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
static struct platform_driver simple_pm_bus_driver = {
.probe = simple_pm_bus_probe,
.remove = simple_pm_bus_remove,
.driver = {
.name = "simple-pm-bus",
.of_match_table = simple_pm_bus_of_match,
},
};
module_platform_driver(simple_pm_bus_driver);
MODULE_DESCRIPTION("Simple Power-Managed Bus Driver");
MODULE_AUTHOR("Geert Uytterhoeven <geert+renesas@glider.be>");
MODULE_LICENSE("GPL v2");
......@@ -143,6 +143,11 @@ config ATMEL_PIT
select CLKSRC_OF if OF
def_bool SOC_AT91SAM9 || SOC_SAMA5
config ATMEL_ST
bool
select CLKSRC_OF
select MFD_SYSCON
config CLKSRC_METAG_GENERIC
def_bool y if METAG
help
......
obj-$(CONFIG_CLKSRC_OF) += clksrc-of.o
obj-$(CONFIG_ATMEL_PIT) += timer-atmel-pit.o
obj-$(CONFIG_ATMEL_ST) += timer-atmel-st.o
obj-$(CONFIG_ATMEL_TCB_CLKSRC) += tcb_clksrc.o
obj-$(CONFIG_X86_PM_TIMER) += acpi_pm.o
obj-$(CONFIG_SCx200HR_TIMER) += scx200_hrt.o
......
......@@ -24,19 +24,17 @@
#include <linux/irq.h>
#include <linux/clockchips.h>
#include <linux/export.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/atmel-st.h>
#include <linux/of_irq.h>
#include <asm/mach/time.h>
#include <mach/at91_st.h>
#include <mach/hardware.h>
#include <linux/regmap.h>
static unsigned long last_crtr;
static u32 irqmask;
static struct clock_event_device clkevt;
static struct regmap *regmap_st;
#define AT91_SLOW_CLOCK 32768
#define RM9200_TIMER_LATCH ((AT91_SLOW_CLOCK + HZ/2) / HZ)
/*
......@@ -46,11 +44,11 @@ static struct clock_event_device clkevt;
*/
static inline unsigned long read_CRTR(void)
{
unsigned long x1, x2;
unsigned int x1, x2;
x1 = at91_st_read(AT91_ST_CRTR);
regmap_read(regmap_st, AT91_ST_CRTR, &x1);
do {
x2 = at91_st_read(AT91_ST_CRTR);
regmap_read(regmap_st, AT91_ST_CRTR, &x2);
if (x1 == x2)
break;
x1 = x2;
......@@ -63,7 +61,10 @@ static inline unsigned long read_CRTR(void)
*/
static irqreturn_t at91rm9200_timer_interrupt(int irq, void *dev_id)
{
u32 sr = at91_st_read(AT91_ST_SR) & irqmask;
u32 sr;
regmap_read(regmap_st, AT91_ST_SR, &sr);
sr &= irqmask;
/*
* irqs should be disabled here, but as the irq is shared they are only
......@@ -92,13 +93,6 @@ static irqreturn_t at91rm9200_timer_interrupt(int irq, void *dev_id)
return IRQ_NONE;
}
static struct irqaction at91rm9200_timer_irq = {
.name = "at91_tick",
.flags = IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL,
.handler = at91rm9200_timer_interrupt,
.irq = NR_IRQS_LEGACY + AT91_ID_SYS,
};
static cycle_t read_clk32k(struct clocksource *cs)
{
return read_CRTR();
......@@ -115,23 +109,25 @@ static struct clocksource clk32k = {
static void
clkevt32k_mode(enum clock_event_mode mode, struct clock_event_device *dev)
{
unsigned int val;
/* Disable and flush pending timer interrupts */
at91_st_write(AT91_ST_IDR, AT91_ST_PITS | AT91_ST_ALMS);
at91_st_read(AT91_ST_SR);
regmap_write(regmap_st, AT91_ST_IDR, AT91_ST_PITS | AT91_ST_ALMS);
regmap_read(regmap_st, AT91_ST_SR, &val);
last_crtr = read_CRTR();
switch (mode) {
case CLOCK_EVT_MODE_PERIODIC:
/* PIT for periodic irqs; fixed rate of 1/HZ */
irqmask = AT91_ST_PITS;
at91_st_write(AT91_ST_PIMR, RM9200_TIMER_LATCH);
regmap_write(regmap_st, AT91_ST_PIMR, RM9200_TIMER_LATCH);
break;
case CLOCK_EVT_MODE_ONESHOT:
/* ALM for oneshot irqs, set by next_event()
* before 32 seconds have passed
*/
irqmask = AT91_ST_ALMS;
at91_st_write(AT91_ST_RTAR, last_crtr);
regmap_write(regmap_st, AT91_ST_RTAR, last_crtr);
break;
case CLOCK_EVT_MODE_SHUTDOWN:
case CLOCK_EVT_MODE_UNUSED:
......@@ -139,7 +135,7 @@ clkevt32k_mode(enum clock_event_mode mode, struct clock_event_device *dev)
irqmask = 0;
break;
}
at91_st_write(AT91_ST_IER, irqmask);
regmap_write(regmap_st, AT91_ST_IER, irqmask);
}
static int
......@@ -147,6 +143,7 @@ clkevt32k_next_event(unsigned long delta, struct clock_event_device *dev)
{
u32 alm;
int status = 0;
unsigned int val;
BUG_ON(delta < 2);
......@@ -162,12 +159,12 @@ clkevt32k_next_event(unsigned long delta, struct clock_event_device *dev)
alm = read_CRTR();
/* Cancel any pending alarm; flush any pending IRQ */
at91_st_write(AT91_ST_RTAR, alm);
at91_st_read(AT91_ST_SR);
regmap_write(regmap_st, AT91_ST_RTAR, alm);
regmap_read(regmap_st, AT91_ST_SR, &val);
/* Schedule alarm by writing RTAR. */
alm += delta;
at91_st_write(AT91_ST_RTAR, alm);
regmap_write(regmap_st, AT91_ST_RTAR, alm);
return status;
}
......@@ -180,66 +177,40 @@ static struct clock_event_device clkevt = {
.set_mode = clkevt32k_mode,
};
void __iomem *at91_st_base;
EXPORT_SYMBOL_GPL(at91_st_base);
static const struct of_device_id at91rm9200_st_timer_ids[] = {
{ .compatible = "atmel,at91rm9200-st" },
{ /* sentinel */ }
};
static int __init of_at91rm9200_st_init(void)
{
struct device_node *np;
int ret;
np = of_find_matching_node(NULL, at91rm9200_st_timer_ids);
if (!np)
goto err;
at91_st_base = of_iomap(np, 0);
if (!at91_st_base)
goto node_err;
/* Get the interrupts property */
ret = irq_of_parse_and_map(np, 0);
if (!ret)
goto ioremap_err;
at91rm9200_timer_irq.irq = ret;
of_node_put(np);
return 0;
ioremap_err:
iounmap(at91_st_base);
node_err:
of_node_put(np);
err:
return -EINVAL;
}
/*
* ST (system timer) module supports both clockevents and clocksource.
*/
void __init at91rm9200_timer_init(void)
static void __init atmel_st_timer_init(struct device_node *node)
{
/* For device tree enabled device: initialize here */
of_at91rm9200_st_init();
unsigned int val;
int irq, ret;
regmap_st = syscon_node_to_regmap(node);
if (IS_ERR(regmap_st))
panic(pr_fmt("Unable to get regmap\n"));
/* Disable all timer interrupts, and clear any pending ones */
at91_st_write(AT91_ST_IDR,
regmap_write(regmap_st, AT91_ST_IDR,
AT91_ST_PITS | AT91_ST_WDOVF | AT91_ST_RTTINC | AT91_ST_ALMS);
at91_st_read(AT91_ST_SR);
regmap_read(regmap_st, AT91_ST_SR, &val);
/* Get the interrupts property */
irq = irq_of_parse_and_map(node, 0);
if (!irq)
panic(pr_fmt("Unable to get IRQ from DT\n"));
/* Make IRQs happen for the system timer */
setup_irq(at91rm9200_timer_irq.irq, &at91rm9200_timer_irq);
ret = request_irq(irq, at91rm9200_timer_interrupt,
IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL,
"at91_tick", regmap_st);
if (ret)
panic(pr_fmt("Unable to setup IRQ\n"));
/* The 32KiHz "Slow Clock" (tick every 30517.58 nanoseconds) is used
* directly for the clocksource and all clockevents, after adjusting
* its prescaler from the 1 Hz default.
*/
at91_st_write(AT91_ST_RTMR, 1);
regmap_write(regmap_st, AT91_ST_RTMR, 1);
/* Setup timer clockevent, with minimum of two ticks (important!!) */
clkevt.cpumask = cpumask_of(0);
......@@ -249,3 +220,5 @@ void __init at91rm9200_timer_init(void)
/* register clocksource */
clocksource_register_hz(&clk32k, AT91_SLOW_CLOCK);
}
CLOCKSOURCE_OF_DECLARE(atmel_st_timer, "atmel,at91rm9200-st",
atmel_st_timer_init);
......@@ -132,6 +132,10 @@ config ISCSI_IBFT
detect iSCSI boot parameters dynamically during system boot, say Y.
Otherwise, say N.
config QCOM_SCM
bool
depends on ARM || ARM64
source "drivers/firmware/google/Kconfig"
source "drivers/firmware/efi/Kconfig"
......
......@@ -11,6 +11,8 @@ obj-$(CONFIG_DMIID) += dmi-id.o
obj-$(CONFIG_ISCSI_IBFT_FIND) += iscsi_ibft_find.o
obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o
obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o
obj-$(CONFIG_QCOM_SCM) += qcom_scm.o
CFLAGS_qcom_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1)
obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
obj-$(CONFIG_EFI) += efi/
......
/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
* Copyright (C) 2015 Linaro Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
......@@ -21,46 +22,68 @@
#include <linux/mutex.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/qcom_scm.h>
#include <asm/outercache.h>
#include <asm/cacheflush.h>
#include "scm.h"
#define SCM_ENOMEM -5
#define SCM_EOPNOTSUPP -4
#define SCM_EINVAL_ADDR -3
#define SCM_EINVAL_ARG -2
#define SCM_ERROR -1
#define SCM_INTERRUPTED 1
#define QCOM_SCM_ENOMEM -5
#define QCOM_SCM_EOPNOTSUPP -4
#define QCOM_SCM_EINVAL_ADDR -3
#define QCOM_SCM_EINVAL_ARG -2
#define QCOM_SCM_ERROR -1
#define QCOM_SCM_INTERRUPTED 1
static DEFINE_MUTEX(scm_lock);
#define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00
#define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01
#define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08
#define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20
#define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04
#define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02
#define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10
#define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40
struct qcom_scm_entry {
int flag;
void *entry;
};
static struct qcom_scm_entry qcom_scm_wb[] = {
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 },
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 },
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 },
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 },
};
static DEFINE_MUTEX(qcom_scm_lock);
/**
* struct scm_command - one SCM command buffer
* struct qcom_scm_command - one SCM command buffer
* @len: total available memory for command and response
* @buf_offset: start of command buffer
* @resp_hdr_offset: start of response buffer
* @id: command to be executed
* @buf: buffer returned from scm_get_command_buffer()
* @buf: buffer returned from qcom_scm_get_command_buffer()
*
* An SCM command is laid out in memory as follows:
*
* ------------------- <--- struct scm_command
* ------------------- <--- struct qcom_scm_command
* | command header |
* ------------------- <--- scm_get_command_buffer()
* ------------------- <--- qcom_scm_get_command_buffer()
* | command buffer |
* ------------------- <--- struct scm_response and
* | response header | scm_command_to_response()
* ------------------- <--- scm_get_response_buffer()
* ------------------- <--- struct qcom_scm_response and
* | response header | qcom_scm_command_to_response()
* ------------------- <--- qcom_scm_get_response_buffer()
* | response buffer |
* -------------------
*
* There can be arbitrary padding between the headers and buffers so
* you should always use the appropriate scm_get_*_buffer() routines
* you should always use the appropriate qcom_scm_get_*_buffer() routines
* to access the buffers in a safe manner.
*/
struct scm_command {
struct qcom_scm_command {
__le32 len;
__le32 buf_offset;
__le32 resp_hdr_offset;
......@@ -69,38 +92,38 @@ struct scm_command {
};
/**
* struct scm_response - one SCM response buffer
* struct qcom_scm_response - one SCM response buffer
* @len: total available memory for response
* @buf_offset: start of response data relative to start of scm_response
* @buf_offset: start of response data relative to start of qcom_scm_response
* @is_complete: indicates if the command has finished processing
*/
struct scm_response {
struct qcom_scm_response {
__le32 len;
__le32 buf_offset;
__le32 is_complete;
};
/**
* alloc_scm_command() - Allocate an SCM command
* alloc_qcom_scm_command() - Allocate an SCM command
* @cmd_size: size of the command buffer
* @resp_size: size of the response buffer
*
* Allocate an SCM command, including enough room for the command
* and response headers as well as the command and response buffers.
*
* Returns a valid &scm_command on success or %NULL if the allocation fails.
* Returns a valid &qcom_scm_command on success or %NULL if the allocation fails.
*/
static struct scm_command *alloc_scm_command(size_t cmd_size, size_t resp_size)
static struct qcom_scm_command *alloc_qcom_scm_command(size_t cmd_size, size_t resp_size)
{
struct scm_command *cmd;
size_t len = sizeof(*cmd) + sizeof(struct scm_response) + cmd_size +
struct qcom_scm_command *cmd;
size_t len = sizeof(*cmd) + sizeof(struct qcom_scm_response) + cmd_size +
resp_size;
u32 offset;
cmd = kzalloc(PAGE_ALIGN(len), GFP_KERNEL);
if (cmd) {
cmd->len = cpu_to_le32(len);
offset = offsetof(struct scm_command, buf);
offset = offsetof(struct qcom_scm_command, buf);
cmd->buf_offset = cpu_to_le32(offset);
cmd->resp_hdr_offset = cpu_to_le32(offset + cmd_size);
}
......@@ -108,62 +131,62 @@ static struct scm_command *alloc_scm_command(size_t cmd_size, size_t resp_size)
}
/**
* free_scm_command() - Free an SCM command
* free_qcom_scm_command() - Free an SCM command
* @cmd: command to free
*
* Free an SCM command.
*/
static inline void free_scm_command(struct scm_command *cmd)
static inline void free_qcom_scm_command(struct qcom_scm_command *cmd)
{
kfree(cmd);
}
/**
* scm_command_to_response() - Get a pointer to a scm_response
* qcom_scm_command_to_response() - Get a pointer to a qcom_scm_response
* @cmd: command
*
* Returns a pointer to a response for a command.
*/
static inline struct scm_response *scm_command_to_response(
const struct scm_command *cmd)
static inline struct qcom_scm_response *qcom_scm_command_to_response(
const struct qcom_scm_command *cmd)
{
return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset);
}
/**
* scm_get_command_buffer() - Get a pointer to a command buffer
* qcom_scm_get_command_buffer() - Get a pointer to a command buffer
* @cmd: command
*
* Returns a pointer to the command buffer of a command.
*/
static inline void *scm_get_command_buffer(const struct scm_command *cmd)
static inline void *qcom_scm_get_command_buffer(const struct qcom_scm_command *cmd)
{
return (void *)cmd->buf;
}
/**
* scm_get_response_buffer() - Get a pointer to a response buffer
* qcom_scm_get_response_buffer() - Get a pointer to a response buffer
* @rsp: response
*
* Returns a pointer to a response buffer of a response.
*/
static inline void *scm_get_response_buffer(const struct scm_response *rsp)
static inline void *qcom_scm_get_response_buffer(const struct qcom_scm_response *rsp)
{
return (void *)rsp + le32_to_cpu(rsp->buf_offset);
}
static int scm_remap_error(int err)
static int qcom_scm_remap_error(int err)
{
pr_err("scm_call failed with error code %d\n", err);
pr_err("qcom_scm_call failed with error code %d\n", err);
switch (err) {
case SCM_ERROR:
case QCOM_SCM_ERROR:
return -EIO;
case SCM_EINVAL_ADDR:
case SCM_EINVAL_ARG:
case QCOM_SCM_EINVAL_ADDR:
case QCOM_SCM_EINVAL_ARG:
return -EINVAL;
case SCM_EOPNOTSUPP:
case QCOM_SCM_EOPNOTSUPP:
return -EOPNOTSUPP;
case SCM_ENOMEM:
case QCOM_SCM_ENOMEM:
return -ENOMEM;
}
return -EINVAL;
......@@ -188,12 +211,12 @@ static u32 smc(u32 cmd_addr)
: "=r" (r0)
: "r" (r0), "r" (r1), "r" (r2)
: "r3");
} while (r0 == SCM_INTERRUPTED);
} while (r0 == QCOM_SCM_INTERRUPTED);
return r0;
}
static int __scm_call(const struct scm_command *cmd)
static int __qcom_scm_call(const struct qcom_scm_command *cmd)
{
int ret;
u32 cmd_addr = virt_to_phys(cmd);
......@@ -207,12 +230,12 @@ static int __scm_call(const struct scm_command *cmd)
ret = smc(cmd_addr);
if (ret < 0)
ret = scm_remap_error(ret);
ret = qcom_scm_remap_error(ret);
return ret;
}
static void scm_inv_range(unsigned long start, unsigned long end)
static void qcom_scm_inv_range(unsigned long start, unsigned long end)
{
u32 cacheline_size, ctr;
......@@ -232,7 +255,7 @@ static void scm_inv_range(unsigned long start, unsigned long end)
}
/**
* scm_call() - Send an SCM command
* qcom_scm_call() - Send an SCM command
* @svc_id: service identifier
* @cmd_id: command identifier
* @cmd_buf: command buffer
......@@ -244,52 +267,90 @@ static void scm_inv_range(unsigned long start, unsigned long end)
*
* A note on cache maintenance:
* Note that any buffers that are expected to be accessed by the secure world
* must be flushed before invoking scm_call and invalidated in the cache
* immediately after scm_call returns. Cache maintenance on the command and
* response buffers is taken care of by scm_call; however, callers are
* must be flushed before invoking qcom_scm_call and invalidated in the cache
* immediately after qcom_scm_call returns. Cache maintenance on the command
* and response buffers is taken care of by qcom_scm_call; however, callers are
* responsible for any other cached buffers passed over to the secure world.
*/
int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len,
void *resp_buf, size_t resp_len)
static int qcom_scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf,
size_t cmd_len, void *resp_buf, size_t resp_len)
{
int ret;
struct scm_command *cmd;
struct scm_response *rsp;
struct qcom_scm_command *cmd;
struct qcom_scm_response *rsp;
unsigned long start, end;
cmd = alloc_scm_command(cmd_len, resp_len);
cmd = alloc_qcom_scm_command(cmd_len, resp_len);
if (!cmd)
return -ENOMEM;
cmd->id = cpu_to_le32((svc_id << 10) | cmd_id);
if (cmd_buf)
memcpy(scm_get_command_buffer(cmd), cmd_buf, cmd_len);
memcpy(qcom_scm_get_command_buffer(cmd), cmd_buf, cmd_len);
mutex_lock(&scm_lock);
ret = __scm_call(cmd);
mutex_unlock(&scm_lock);
mutex_lock(&qcom_scm_lock);
ret = __qcom_scm_call(cmd);
mutex_unlock(&qcom_scm_lock);
if (ret)
goto out;
rsp = scm_command_to_response(cmd);
rsp = qcom_scm_command_to_response(cmd);
start = (unsigned long)rsp;
do {
scm_inv_range(start, start + sizeof(*rsp));
qcom_scm_inv_range(start, start + sizeof(*rsp));
} while (!rsp->is_complete);
end = (unsigned long)scm_get_response_buffer(rsp) + resp_len;
scm_inv_range(start, end);
end = (unsigned long)qcom_scm_get_response_buffer(rsp) + resp_len;
qcom_scm_inv_range(start, end);
if (resp_buf)
memcpy(resp_buf, scm_get_response_buffer(rsp), resp_len);
memcpy(resp_buf, qcom_scm_get_response_buffer(rsp), resp_len);
out:
free_scm_command(cmd);
free_qcom_scm_command(cmd);
return ret;
}
EXPORT_SYMBOL(scm_call);
u32 scm_get_version(void)
#define SCM_CLASS_REGISTER (0x2 << 8)
#define SCM_MASK_IRQS BIT(5)
#define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10)|((cmd) & 0x3ff)) << 12) | \
SCM_CLASS_REGISTER | \
SCM_MASK_IRQS | \
(n & 0xf))
/**
* qcom_scm_call_atomic1() - Send an atomic SCM command with one argument
* @svc_id: service identifier
* @cmd_id: command identifier
* @arg1: first argument
*
* This shall only be used with commands that are guaranteed to be
* uninterruptible, atomic and SMP safe.
*/
static s32 qcom_scm_call_atomic1(u32 svc, u32 cmd, u32 arg1)
{
int context_id;
register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 1);
register u32 r1 asm("r1") = (u32)&context_id;
register u32 r2 asm("r2") = arg1;
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r0")
__asmeq("%2", "r1")
__asmeq("%3", "r2")
#ifdef REQUIRES_SEC
".arch_extension sec\n"
#endif
"smc #0 @ switch to secure world\n"
: "=r" (r0)
: "r" (r0), "r" (r1), "r" (r2)
: "r3");
return r0;
}
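For reference, SCM_ATOMIC() above packs the service id, command id, register-class marker, IRQ-mask bit and argument count into the single word placed in r0. A small standalone sketch of that packing, using the QCOM_SCM_SVC_BOOT/QCOM_SCM_CMD_TERMINATE_PC pair defined further down in this patch; the sketch is illustrative only and is not part of the driver:
#include <stdio.h>
/* Mirror of the kernel's SCM_ATOMIC() packing, for illustration only. */
#define SCM_CLASS_REGISTER      (0x2u << 8)
#define SCM_MASK_IRQS           (1u << 5)
#define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10) | ((cmd) & 0x3ff)) << 12) | \
                                 SCM_CLASS_REGISTER | SCM_MASK_IRQS | ((n) & 0xf))
int main(void)
{
        /* svc 0x1 = QCOM_SCM_SVC_BOOT, cmd 0x2 = QCOM_SCM_CMD_TERMINATE_PC */
        unsigned int r0 = SCM_ATOMIC(0x1u, 0x2u, 1);
        printf("atomic SMC command word: 0x%08x\n", r0); /* prints 0x00402221 */
        return 0;
}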
u32 qcom_scm_get_version(void)
{
int context_id;
static u32 version = -1;
......@@ -299,7 +360,7 @@ u32 scm_get_version(void)
if (version != -1)
return version;
mutex_lock(&scm_lock);
mutex_lock(&qcom_scm_lock);
r0 = 0x1 << 8;
r1 = (u32)&context_id;
......@@ -316,11 +377,118 @@ u32 scm_get_version(void)
: "=r" (r0), "=r" (r1)
: "r" (r0), "r" (r1)
: "r2", "r3");
} while (r0 == SCM_INTERRUPTED);
} while (r0 == QCOM_SCM_INTERRUPTED);
version = r1;
mutex_unlock(&scm_lock);
mutex_unlock(&qcom_scm_lock);
return version;
}
EXPORT_SYMBOL(scm_get_version);
EXPORT_SYMBOL(qcom_scm_get_version);
#define QCOM_SCM_SVC_BOOT 0x1
#define QCOM_SCM_BOOT_ADDR 0x1
/*
* Set the cold/warm boot address for one of the CPU cores.
*/
static int qcom_scm_set_boot_addr(u32 addr, int flags)
{
struct {
__le32 flags;
__le32 addr;
} cmd;
cmd.addr = cpu_to_le32(addr);
cmd.flags = cpu_to_le32(flags);
return qcom_scm_call(QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR,
&cmd, sizeof(cmd), NULL, 0);
}
/**
* qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus
* @entry: Entry point function for the cpus
* @cpus: The cpumask of cpus that will use the entry point
*
* Set the cold boot address of the cpus. Any cpu outside the supported
* range will be removed from the cpu present mask.
*/
int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
{
int flags = 0;
int cpu;
int scm_cb_flags[] = {
QCOM_SCM_FLAG_COLDBOOT_CPU0,
QCOM_SCM_FLAG_COLDBOOT_CPU1,
QCOM_SCM_FLAG_COLDBOOT_CPU2,
QCOM_SCM_FLAG_COLDBOOT_CPU3,
};
if (!cpus || (cpus && cpumask_empty(cpus)))
return -EINVAL;
for_each_cpu(cpu, cpus) {
if (cpu < ARRAY_SIZE(scm_cb_flags))
flags |= scm_cb_flags[cpu];
else
set_cpu_present(cpu, false);
}
return qcom_scm_set_boot_addr(virt_to_phys(entry), flags);
}
EXPORT_SYMBOL(qcom_scm_set_cold_boot_addr);
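A minimal sketch of how platform SMP code might consume this export; the include path and the secondary_startup symbol are assumptions made for illustration and are not taken from this diff:
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/qcom_scm.h>             /* assumed location of the prototype */
extern void secondary_startup(void);    /* assumed platform entry point */
static int __init example_setup_cold_boot(void)
{
        /* CPUs without a matching coldboot flag get dropped from cpu_present_mask */
        return qcom_scm_set_cold_boot_addr(secondary_startup, cpu_present_mask);
}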
/**
* qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus
* @entry: Entry point function for the cpus
* @cpus: The cpumask of cpus that will use the entry point
*
* Set the Linux entry point for the SCM to transfer control to when coming
* out of a power down. CPU power down may be executed on cpuidle or hotplug.
*/
int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus)
{
int ret;
int flags = 0;
int cpu;
/*
* Reassign only if we are switching from hotplug entry point
* to cpuidle entry point or vice versa.
*/
for_each_cpu(cpu, cpus) {
if (entry == qcom_scm_wb[cpu].entry)
continue;
flags |= qcom_scm_wb[cpu].flag;
}
/* No change in entry function */
if (!flags)
return 0;
ret = qcom_scm_set_boot_addr(virt_to_phys(entry), flags);
if (!ret) {
for_each_cpu(cpu, cpus)
qcom_scm_wb[cpu].entry = entry;
}
return ret;
}
EXPORT_SYMBOL(qcom_scm_set_warm_boot_addr);
#define QCOM_SCM_CMD_TERMINATE_PC 0x2
#define QCOM_SCM_FLUSH_FLAG_MASK 0x3
/**
* qcom_scm_cpu_power_down() - Power down the cpu
* @flags: Flags to flush cache
*
* This is the terminal call to power down the cpu. If there was a pending
* interrupt, control returns from this function; otherwise the cpu jumps to
* the warm boot entry point set for this cpu upon reset.
*/
void qcom_scm_cpu_power_down(u32 flags)
{
qcom_scm_call_atomic1(QCOM_SCM_SVC_BOOT, QCOM_SCM_CMD_TERMINATE_PC,
flags & QCOM_SCM_FLUSH_FLAG_MASK);
}
EXPORT_SYMBOL(qcom_scm_cpu_power_down);
......@@ -12,8 +12,6 @@
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#undef DEBUG
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/init.h>
......@@ -29,6 +27,7 @@
#include <linux/of_address.h>
#include <linux/of_mtd.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/omap-gpmc.h>
#include <linux/mtd/nand.h>
#include <linux/pm_runtime.h>
......@@ -136,13 +135,21 @@
#define GPMC_CONFIG1_WRITETYPE_ASYNC (0 << 27)
#define GPMC_CONFIG1_WRITETYPE_SYNC (1 << 27)
#define GPMC_CONFIG1_CLKACTIVATIONTIME(val) ((val & 3) << 25)
/** CLKACTIVATIONTIME Max Ticks */
#define GPMC_CONFIG1_CLKACTIVATIONTIME_MAX 2
#define GPMC_CONFIG1_PAGE_LEN(val) ((val & 3) << 23)
/** ATTACHEDDEVICEPAGELENGTH Max Value */
#define GPMC_CONFIG1_ATTACHEDDEVICEPAGELENGTH_MAX 2
#define GPMC_CONFIG1_WAIT_READ_MON (1 << 22)
#define GPMC_CONFIG1_WAIT_WRITE_MON (1 << 21)
#define GPMC_CONFIG1_WAIT_MON_IIME(val) ((val & 3) << 18)
#define GPMC_CONFIG1_WAIT_MON_TIME(val) ((val & 3) << 18)
/** WAITMONITORINGTIME Max Ticks */
#define GPMC_CONFIG1_WAITMONITORINGTIME_MAX 2
#define GPMC_CONFIG1_WAIT_PIN_SEL(val) ((val & 3) << 16)
#define GPMC_CONFIG1_DEVICESIZE(val) ((val & 3) << 12)
#define GPMC_CONFIG1_DEVICESIZE_16 GPMC_CONFIG1_DEVICESIZE(1)
/** DEVICESIZE Max Value */
#define GPMC_CONFIG1_DEVICESIZE_MAX 1
#define GPMC_CONFIG1_DEVICETYPE(val) ((val & 3) << 10)
#define GPMC_CONFIG1_DEVICETYPE_NOR GPMC_CONFIG1_DEVICETYPE(0)
#define GPMC_CONFIG1_MUXTYPE(val) ((val & 3) << 8)
......@@ -153,6 +160,15 @@
#define GPMC_CONFIG1_FCLK_DIV4 (GPMC_CONFIG1_FCLK_DIV(3))
#define GPMC_CONFIG7_CSVALID (1 << 6)
#define GPMC_CONFIG7_BASEADDRESS_MASK 0x3f
#define GPMC_CONFIG7_CSVALID_MASK BIT(6)
#define GPMC_CONFIG7_MASKADDRESS_OFFSET 8
#define GPMC_CONFIG7_MASKADDRESS_MASK (0xf << GPMC_CONFIG7_MASKADDRESS_OFFSET)
/* All CONFIG7 bits except reserved bits */
#define GPMC_CONFIG7_MASK (GPMC_CONFIG7_BASEADDRESS_MASK | \
GPMC_CONFIG7_CSVALID_MASK | \
GPMC_CONFIG7_MASKADDRESS_MASK)
#define GPMC_DEVICETYPE_NOR 0
#define GPMC_DEVICETYPE_NAND 2
#define GPMC_CONFIG_WRITEPROTECT 0x00000010
......@@ -169,6 +185,11 @@
*/
#define GPMC_NR_IRQ 2
enum gpmc_clk_domain {
GPMC_CD_FCLK,
GPMC_CD_CLK
};
struct gpmc_cs_data {
const char *name;
......@@ -267,16 +288,55 @@ static unsigned long gpmc_get_fclk_period(void)
return rate;
}
static unsigned int gpmc_ns_to_ticks(unsigned int time_ns)
/**
* gpmc_get_clk_period - get period of selected clock domain in ps
* @cs: Chip Select Region.
* @cd: Clock Domain.
*
* GPMC_CS_CONFIG1 GPMCFCLKDIVIDER for @cs has to be set up
* prior to calling this function with GPMC_CD_CLK.
*/
static unsigned long gpmc_get_clk_period(int cs, enum gpmc_clk_domain cd)
{
unsigned long tick_ps = gpmc_get_fclk_period();
u32 l;
int div;
switch (cd) {
case GPMC_CD_CLK:
/* get current clk divider */
l = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);
div = (l & 0x03) + 1;
/* get GPMC_CLK period */
tick_ps *= div;
break;
case GPMC_CD_FCLK:
/* FALL-THROUGH */
default:
break;
}
return tick_ps;
}
static unsigned int gpmc_ns_to_clk_ticks(unsigned int time_ns, int cs,
enum gpmc_clk_domain cd)
{
unsigned long tick_ps;
/* Calculate in picosecs to yield more exact results */
tick_ps = gpmc_get_fclk_period();
tick_ps = gpmc_get_clk_period(cs, cd);
return (time_ns * 1000 + tick_ps - 1) / tick_ps;
}
static unsigned int gpmc_ns_to_ticks(unsigned int time_ns)
{
return gpmc_ns_to_clk_ticks(time_ns, /* any CS */ 0, GPMC_CD_FCLK);
}
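The conversion above rounds up so a requested delay is never shortened; in the GPMC_CLK domain the tick is simply the FCLK period multiplied by the divider. A standalone check of the arithmetic with a hypothetical 6000 ps (~166 MHz) FCLK period:
#include <stdio.h>
/* Round a delay in ns up to whole clock ticks, as gpmc_ns_to_clk_ticks() does. */
static unsigned int ns_to_ticks(unsigned int time_ns, unsigned long tick_ps)
{
        return (time_ns * 1000 + tick_ps - 1) / tick_ps;
}
int main(void)
{
        unsigned long fclk_ps = 6000;   /* hypothetical GPMC_FCLK period */
        unsigned long div = 2;          /* hypothetical GPMC_CLK divider */
        printf("50 ns in FCLK ticks:     %u\n", ns_to_ticks(50, fclk_ps));       /* 9 */
        printf("50 ns in GPMC_CLK ticks: %u\n", ns_to_ticks(50, fclk_ps * div)); /* 5 */
        return 0;
}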
static unsigned int gpmc_ps_to_ticks(unsigned int time_ps)
{
unsigned long tick_ps;
......@@ -287,9 +347,15 @@ static unsigned int gpmc_ps_to_ticks(unsigned int time_ps)
return (time_ps + tick_ps - 1) / tick_ps;
}
unsigned int gpmc_clk_ticks_to_ns(unsigned ticks, int cs,
enum gpmc_clk_domain cd)
{
return ticks * gpmc_get_clk_period(cs, cd) / 1000;
}
unsigned int gpmc_ticks_to_ns(unsigned int ticks)
{
return ticks * gpmc_get_fclk_period() / 1000;
return gpmc_clk_ticks_to_ns(ticks, /* any CS */ 0, GPMC_CD_FCLK);
}
static unsigned int gpmc_ticks_to_ps(unsigned int ticks)
......@@ -338,33 +404,66 @@ static void gpmc_cs_bool_timings(int cs, const struct gpmc_bool_timings *p)
}
#ifdef DEBUG
static int get_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit,
bool raw, bool noval, int shift,
const char *name)
/**
* get_gpmc_timing_reg - read a timing parameter and print DTS settings for it.
* @cs: Chip Select Region
* @reg: GPMC_CS_CONFIGn register offset.
* @st_bit: Start Bit
* @end_bit: End Bit. Must be >= @st_bit.
* @max: Maximum parameter value (before optional @shift).
* If 0, maximum is as high as @st_bit and @end_bit allow.
* @name: DTS node name, w/o "gpmc,"
* @cd: Clock Domain of timing parameter.
* @shift: If non-zero, @shift is left-shifted by the parameter value and the shifted result is printed instead of the raw value.
* @raw: Raw Format Option.
* raw format: gpmc,name = <value>
* tick format: gpmc,name = <value> /&zwj;* x ns -- y ns; x ticks *&zwj;/
* Where x ns -- y ns result in the same tick value.
* When @max is exceeded, "invalid" is printed inside comment.
* @noval: Parameter values equal to 0 are not printed.
* @return: Specified timing parameter (after optional @shift).
*
*/
static int get_gpmc_timing_reg(
/* timing specifiers */
int cs, int reg, int st_bit, int end_bit, int max,
const char *name, const enum gpmc_clk_domain cd,
/* value transform */
int shift,
/* format specifiers */
bool raw, bool noval)
{
u32 l;
int nr_bits, max_value, mask;
int nr_bits;
int mask;
bool invalid;
l = gpmc_cs_read_reg(cs, reg);
nr_bits = end_bit - st_bit + 1;
max_value = (1 << nr_bits) - 1;
mask = max_value << st_bit;
l = (l & mask) >> st_bit;
mask = (1 << nr_bits) - 1;
l = (l >> st_bit) & mask;
if (!max)
max = mask;
invalid = l > max;
if (shift)
l = (shift << l);
if (noval && (l == 0))
return 0;
if (!raw) {
unsigned int time_ns_min, time_ns, time_ns_max;
time_ns_min = gpmc_ticks_to_ns(l ? l - 1 : 0);
time_ns = gpmc_ticks_to_ns(l);
time_ns_max = gpmc_ticks_to_ns(l + 1 > max_value ?
max_value : l + 1);
pr_info("gpmc,%s = <%u> (%u - %u ns, %i ticks)\n",
name, time_ns, time_ns_min, time_ns_max, l);
/* DTS tick format for timings in ns */
unsigned int time_ns;
unsigned int time_ns_min = 0;
if (l)
time_ns_min = gpmc_clk_ticks_to_ns(l - 1, cs, cd) + 1;
time_ns = gpmc_clk_ticks_to_ns(l, cs, cd);
pr_info("gpmc,%s = <%u> /* %u ns - %u ns; %i ticks%s*/\n",
name, time_ns, time_ns_min, time_ns, l,
invalid ? "; invalid " : " ");
} else {
pr_info("gpmc,%s = <%u>\n", name, l);
/* raw format */
pr_info("gpmc,%s = <%u>%s\n", name, l,
invalid ? " /* invalid */" : "");
}
return l;
......@@ -374,13 +473,19 @@ static int get_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit,
pr_info("cs%i %s: 0x%08x\n", cs, #config, \
gpmc_cs_read_reg(cs, config))
#define GPMC_GET_RAW(reg, st, end, field) \
get_gpmc_timing_reg(cs, (reg), (st), (end), 1, 0, 0, field)
get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, GPMC_CD_FCLK, 0, 1, 0)
#define GPMC_GET_RAW_MAX(reg, st, end, max, field) \
get_gpmc_timing_reg(cs, (reg), (st), (end), (max), field, GPMC_CD_FCLK, 0, 1, 0)
#define GPMC_GET_RAW_BOOL(reg, st, end, field) \
get_gpmc_timing_reg(cs, (reg), (st), (end), 1, 1, 0, field)
#define GPMC_GET_RAW_SHIFT(reg, st, end, shift, field) \
get_gpmc_timing_reg(cs, (reg), (st), (end), 1, 1, (shift), field)
get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, GPMC_CD_FCLK, 0, 1, 1)
#define GPMC_GET_RAW_SHIFT_MAX(reg, st, end, shift, max, field) \
get_gpmc_timing_reg(cs, (reg), (st), (end), (max), field, GPMC_CD_FCLK, (shift), 1, 1)
#define GPMC_GET_TICKS(reg, st, end, field) \
get_gpmc_timing_reg(cs, (reg), (st), (end), 0, 0, 0, field)
get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, GPMC_CD_FCLK, 0, 0, 0)
#define GPMC_GET_TICKS_CD(reg, st, end, field, cd) \
get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, (cd), 0, 0, 0)
#define GPMC_GET_TICKS_CD_MAX(reg, st, end, max, field, cd) \
get_gpmc_timing_reg(cs, (reg), (st), (end), (max), field, (cd), 0, 0, 0)
static void gpmc_show_regs(int cs, const char *desc)
{
......@@ -404,11 +509,14 @@ static void gpmc_cs_show_timings(int cs, const char *desc)
pr_info("gpmc cs%i access configuration:\n", cs);
GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 4, 4, "time-para-granularity");
GPMC_GET_RAW(GPMC_CS_CONFIG1, 8, 9, "mux-add-data");
GPMC_GET_RAW(GPMC_CS_CONFIG1, 12, 13, "device-width");
GPMC_GET_RAW_MAX(GPMC_CS_CONFIG1, 12, 13,
GPMC_CONFIG1_DEVICESIZE_MAX, "device-width");
GPMC_GET_RAW(GPMC_CS_CONFIG1, 16, 17, "wait-pin");
GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 21, 21, "wait-on-write");
GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 22, 22, "wait-on-read");
GPMC_GET_RAW_SHIFT(GPMC_CS_CONFIG1, 23, 24, 4, "burst-length");
GPMC_GET_RAW_SHIFT_MAX(GPMC_CS_CONFIG1, 23, 24, 4,
GPMC_CONFIG1_ATTACHEDDEVICEPAGELENGTH_MAX,
"burst-length");
GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 27, 27, "sync-write");
GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 28, 28, "burst-write");
GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 29, 29, "gpmc,sync-read");
......@@ -448,8 +556,12 @@ static void gpmc_cs_show_timings(int cs, const char *desc)
GPMC_GET_TICKS(GPMC_CS_CONFIG6, 0, 3, "bus-turnaround-ns");
GPMC_GET_TICKS(GPMC_CS_CONFIG6, 8, 11, "cycle2cycle-delay-ns");
GPMC_GET_TICKS(GPMC_CS_CONFIG1, 18, 19, "wait-monitoring-ns");
GPMC_GET_TICKS(GPMC_CS_CONFIG1, 25, 26, "clk-activation-ns");
GPMC_GET_TICKS_CD_MAX(GPMC_CS_CONFIG1, 18, 19,
GPMC_CONFIG1_WAITMONITORINGTIME_MAX,
"wait-monitoring-ns", GPMC_CD_CLK);
GPMC_GET_TICKS_CD_MAX(GPMC_CS_CONFIG1, 25, 26,
GPMC_CONFIG1_CLKACTIVATIONTIME_MAX,
"clk-activation-ns", GPMC_CD_FCLK);
GPMC_GET_TICKS(GPMC_CS_CONFIG6, 16, 19, "wr-data-mux-bus-ns");
GPMC_GET_TICKS(GPMC_CS_CONFIG6, 24, 28, "wr-access-ns");
......@@ -460,8 +572,24 @@ static inline void gpmc_cs_show_timings(int cs, const char *desc)
}
#endif
static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit,
int time, const char *name)
/**
* set_gpmc_timing_reg - set a single timing parameter for Chip Select Region.
* Caller is expected to have initialized CONFIG1 GPMCFCLKDIVIDER
* prior to calling this function with @cd equal to GPMC_CD_CLK.
*
* @cs: Chip Select Region.
* @reg: GPMC_CS_CONFIGn register offset.
* @st_bit: Start Bit
* @end_bit: End Bit. Must be >= @st_bit.
* @max: Maximum parameter value.
* If 0, maximum is as high as @st_bit and @end_bit allow.
* @time: Timing parameter in ns.
* @cd: Timing parameter clock domain.
* @name: Timing parameter name.
* @return: 0 on success, -1 on error.
*/
static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit, int max,
int time, enum gpmc_clk_domain cd, const char *name)
{
u32 l;
int ticks, mask, nr_bits;
......@@ -469,22 +597,25 @@ static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit,
if (time == 0)
ticks = 0;
else
ticks = gpmc_ns_to_ticks(time);
ticks = gpmc_ns_to_clk_ticks(time, cs, cd);
nr_bits = end_bit - st_bit + 1;
mask = (1 << nr_bits) - 1;
if (ticks > mask) {
pr_err("%s: GPMC error! CS%d: %s: %d ns, %d ticks > %d\n",
__func__, cs, name, time, ticks, mask);
if (!max)
max = mask;
if (ticks > max) {
pr_err("%s: GPMC CS%d: %s %d ns, %d ticks > %d ticks\n",
__func__, cs, name, time, ticks, max);
return -1;
}
l = gpmc_cs_read_reg(cs, reg);
#ifdef DEBUG
printk(KERN_INFO
"GPMC CS%d: %-10s: %3d ticks, %3lu ns (was %3i ticks) %3d ns\n",
cs, name, ticks, gpmc_get_fclk_period() * ticks / 1000,
pr_info(
"GPMC CS%d: %-17s: %3d ticks, %3lu ns (was %3i ticks) %3d ns\n",
cs, name, ticks, gpmc_get_clk_period(cs, cd) * ticks / 1000,
(l >> st_bit) & mask, time);
#endif
l &= ~(mask << st_bit);
......@@ -494,18 +625,56 @@ static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit,
return 0;
}
#define GPMC_SET_ONE(reg, st, end, field) \
if (set_gpmc_timing_reg(cs, (reg), (st), (end), \
t->field, #field) < 0) \
#define GPMC_SET_ONE_CD_MAX(reg, st, end, max, field, cd) \
if (set_gpmc_timing_reg(cs, (reg), (st), (end), (max), \
t->field, (cd), #field) < 0) \
return -1
#define GPMC_SET_ONE(reg, st, end, field) \
GPMC_SET_ONE_CD_MAX(reg, st, end, 0, field, GPMC_CD_FCLK)
/**
* gpmc_calc_waitmonitoring_divider - calculate proper GPMCFCLKDIVIDER based on WAITMONITORINGTIME
* WAITMONITORINGTIME will be _at least_ as long as desired, i.e.
* read --> don't sample the bus too early
* write --> data stays on the bus longer
*
* Formula:
* gpmc_clk_div + 1 = ceil(ceil(waitmonitoringtime_ns / gpmc_fclk_ns)
* / waitmonitoring_ticks)
* WAITMONITORINGTIME resulting in 0 or 1 tick with div = 1 are caught by
* div <= 0 check.
*
* @wait_monitoring: WAITMONITORINGTIME in ns.
* @return: -1 on failure to scale, else proper divider > 0.
*/
static int gpmc_calc_waitmonitoring_divider(unsigned int wait_monitoring)
{
int div = gpmc_ns_to_ticks(wait_monitoring);
div += GPMC_CONFIG1_WAITMONITORINGTIME_MAX - 1;
div /= GPMC_CONFIG1_WAITMONITORINGTIME_MAX;
if (div > 4)
return -1;
if (div <= 0)
div = 1;
return div;
}
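The formula in the comment boils down to ceil(fclk_ticks / WAITMONITORINGTIME_MAX) clamped to the 1..4 divider range; a standalone check of those steps with a hypothetical 6000 ps FCLK period:
#include <stdio.h>
#define WAITMONITORINGTIME_MAX  2       /* max ticks the CONFIG1 field can hold */
/* Same steps as gpmc_calc_waitmonitoring_divider(), with the period passed in. */
static int calc_divider(unsigned int wait_ns, unsigned long fclk_ps)
{
        int div = (wait_ns * 1000 + fclk_ps - 1) / fclk_ps;     /* ns -> FCLK ticks */
        div += WAITMONITORINGTIME_MAX - 1;
        div /= WAITMONITORINGTIME_MAX;                          /* ceil(ticks / max) */
        if (div > 4)
                return -1;      /* does not fit even with the largest divider */
        if (div <= 0)
                div = 1;
        return div;
}
int main(void)
{
        printf("40 ns -> divider %d\n", calc_divider(40, 6000)); /*  7 ticks ->  4 */
        printf("80 ns -> divider %d\n", calc_divider(80, 6000)); /* 14 ticks -> -1 */
        return 0;
}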
/**
* gpmc_calc_divider - calculate GPMC_FCLK divider for sync_clk GPMC_CLK period.
* @sync_clk: GPMC_CLK period in ps.
* @return: Returns at least 1 if GPMC_FCLK can be divided to GPMC_CLK.
* Else, returns -1.
*/
int gpmc_calc_divider(unsigned int sync_clk)
{
int div;
u32 l;
int div = gpmc_ps_to_ticks(sync_clk);
l = sync_clk + (gpmc_get_fclk_period() - 1);
div = l / gpmc_get_fclk_period();
if (div > 4)
return -1;
if (div <= 0)
......@@ -514,7 +683,15 @@ int gpmc_calc_divider(unsigned int sync_clk)
return div;
}
int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t)
/**
* gpmc_cs_set_timings - program timing parameters for Chip Select Region.
* @cs: Chip Select Region.
* @t: GPMC timing parameters.
* @s: GPMC timing settings.
* @return: 0 on success, -1 on error.
*/
int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t,
const struct gpmc_settings *s)
{
int div;
u32 l;
......@@ -524,6 +701,33 @@ int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t)
if (div < 0)
return div;
/*
* See if we need to change the divider for waitmonitoringtime.
*
* Calculate GPMCFCLKDIVIDER independent of gpmc,sync-clk-ps in DT for
* pure asynchronous accesses, i.e. both read and write asynchronous.
* However, only do so if WAITMONITORINGTIME is actually used, i.e.
* either WAITREADMONITORING or WAITWRITEMONITORING is set.
*
* This statement must not change div to scale async WAITMONITORINGTIME
* to protect mixed synchronous and asynchronous accesses.
*
* We raise an error later if WAITMONITORINGTIME does not fit.
*/
if (!s->sync_read && !s->sync_write &&
(s->wait_on_read || s->wait_on_write)
) {
div = gpmc_calc_waitmonitoring_divider(t->wait_monitoring);
if (div < 0) {
pr_err("%s: waitmonitoringtime %3d ns too large for greatest gpmcfclkdivider.\n",
__func__,
t->wait_monitoring
);
return -1;
}
}
GPMC_SET_ONE(GPMC_CS_CONFIG2, 0, 3, cs_on);
GPMC_SET_ONE(GPMC_CS_CONFIG2, 8, 12, cs_rd_off);
GPMC_SET_ONE(GPMC_CS_CONFIG2, 16, 20, cs_wr_off);
......@@ -546,27 +750,27 @@ int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t)
GPMC_SET_ONE(GPMC_CS_CONFIG6, 0, 3, bus_turnaround);
GPMC_SET_ONE(GPMC_CS_CONFIG6, 8, 11, cycle2cycle_delay);
GPMC_SET_ONE(GPMC_CS_CONFIG1, 18, 19, wait_monitoring);
GPMC_SET_ONE(GPMC_CS_CONFIG1, 25, 26, clk_activation);
if (gpmc_capability & GPMC_HAS_WR_DATA_MUX_BUS)
GPMC_SET_ONE(GPMC_CS_CONFIG6, 16, 19, wr_data_mux_bus);
if (gpmc_capability & GPMC_HAS_WR_ACCESS)
GPMC_SET_ONE(GPMC_CS_CONFIG6, 24, 28, wr_access);
/* caller is expected to have initialized CONFIG1 to cover
* at least sync vs async
*/
l = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);
if (l & (GPMC_CONFIG1_READTYPE_SYNC | GPMC_CONFIG1_WRITETYPE_SYNC)) {
l &= ~0x03;
l |= (div - 1);
gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, l);
GPMC_SET_ONE_CD_MAX(GPMC_CS_CONFIG1, 18, 19,
GPMC_CONFIG1_WAITMONITORINGTIME_MAX,
wait_monitoring, GPMC_CD_CLK);
GPMC_SET_ONE_CD_MAX(GPMC_CS_CONFIG1, 25, 26,
GPMC_CONFIG1_CLKACTIVATIONTIME_MAX,
clk_activation, GPMC_CD_FCLK);
#ifdef DEBUG
printk(KERN_INFO "GPMC CS%d CLK period is %lu ns (div %d)\n",
cs, (div * gpmc_get_fclk_period()) / 1000, div);
pr_info("GPMC CS%d CLK period is %lu ns (div %d)\n",
cs, (div * gpmc_get_fclk_period()) / 1000, div);
#endif
l &= ~0x03;
l |= (div - 1);
gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, l);
}
gpmc_cs_bool_timings(cs, &t->bool_timings);
gpmc_cs_show_timings(cs, "after gpmc_cs_set_timings");
......@@ -586,12 +790,15 @@ static int gpmc_cs_set_memconf(int cs, u32 base, u32 size)
if (base & (size - 1))
return -EINVAL;
base >>= GPMC_CHUNK_SHIFT;
mask = (1 << GPMC_SECTION_SHIFT) - size;
mask >>= GPMC_CHUNK_SHIFT;
mask <<= GPMC_CONFIG7_MASKADDRESS_OFFSET;
l = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG7);
l &= ~0x3f;
l = (base >> GPMC_CHUNK_SHIFT) & 0x3f;
l &= ~(0x0f << 8);
l |= ((mask >> GPMC_CHUNK_SHIFT) & 0x0f) << 8;
l &= ~GPMC_CONFIG7_MASK;
l |= base & GPMC_CONFIG7_BASEADDRESS_MASK;
l |= mask & GPMC_CONFIG7_MASKADDRESS_MASK;
l |= GPMC_CONFIG7_CSVALID;
gpmc_cs_write_reg(cs, GPMC_CS_CONFIG7, l);
......@@ -656,7 +863,7 @@ static void gpmc_cs_set_name(int cs, const char *name)
gpmc->name = name;
}
const char *gpmc_cs_get_name(int cs)
static const char *gpmc_cs_get_name(int cs)
{
struct gpmc_cs_data *gpmc = &gpmc_cs[cs];
......@@ -1786,7 +1993,7 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
if (ret < 0)
goto err;
ret = gpmc_cs_set_timings(cs, &gpmc_t);
ret = gpmc_cs_set_timings(cs, &gpmc_t, &gpmc_s);
if (ret) {
dev_err(&pdev->dev, "failed to set gpmc timings for: %s\n",
child->name);
......@@ -1802,8 +2009,21 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
gpmc_cs_enable_mem(cs);
no_timings:
if (of_platform_device_create(child, NULL, &pdev->dev))
return 0;
/* create platform device, NULL on error or when disabled */
if (!of_platform_device_create(child, NULL, &pdev->dev))
goto err_child_fail;
/* is child a common bus? */
if (of_match_node(of_default_bus_match_table, child))
/* create children and other common bus children */
if (of_platform_populate(child, of_default_bus_match_table,
NULL, &pdev->dev))
goto err_child_fail;
return 0;
err_child_fail:
dev_err(&pdev->dev, "failed to create gpmc child %s\n", child->name);
ret = -ENODEV;
......
menu "SOC (System On Chip) specific Drivers"
source "drivers/soc/mediatek/Kconfig"
source "drivers/soc/qcom/Kconfig"
source "drivers/soc/ti/Kconfig"
source "drivers/soc/versatile/Kconfig"
......
......@@ -2,6 +2,7 @@
# Makefile for the Linux Kernel SOC specific device drivers.
#
obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/
obj-$(CONFIG_ARCH_QCOM) += qcom/
obj-$(CONFIG_ARCH_TEGRA) += tegra/
obj-$(CONFIG_SOC_TI) += ti/
......
#
# MediaTek SoC drivers
#
config MTK_PMIC_WRAP
tristate "MediaTek PMIC Wrapper Support"
depends on ARCH_MEDIATEK
select REGMAP
help
Say yes here to add support for the MediaTek PMIC Wrapper found
on various MediaTek SoCs. The PMIC wrapper is proprietary
hardware used to connect to the PMIC.
obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o
/*
* Copyright (c) 2014 MediaTek Inc.
* Author: Flora Fu, MediaTek
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/clk.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#define PWRAP_MT8135_BRIDGE_IORD_ARB_EN 0x4
#define PWRAP_MT8135_BRIDGE_WACS3_EN 0x10
#define PWRAP_MT8135_BRIDGE_INIT_DONE3 0x14
#define PWRAP_MT8135_BRIDGE_WACS4_EN 0x24
#define PWRAP_MT8135_BRIDGE_INIT_DONE4 0x28
#define PWRAP_MT8135_BRIDGE_INT_EN 0x38
#define PWRAP_MT8135_BRIDGE_TIMER_EN 0x48
#define PWRAP_MT8135_BRIDGE_WDT_UNIT 0x50
#define PWRAP_MT8135_BRIDGE_WDT_SRC_EN 0x54
/* macro for wrapper status */
#define PWRAP_GET_WACS_RDATA(x) (((x) >> 0) & 0x0000ffff)
#define PWRAP_GET_WACS_FSM(x) (((x) >> 16) & 0x00000007)
#define PWRAP_GET_WACS_REQ(x) (((x) >> 19) & 0x00000001)
#define PWRAP_STATE_SYNC_IDLE0 (1 << 20)
#define PWRAP_STATE_INIT_DONE0 (1 << 21)
/* macro for WACS FSM */
#define PWRAP_WACS_FSM_IDLE 0x00
#define PWRAP_WACS_FSM_REQ 0x02
#define PWRAP_WACS_FSM_WFDLE 0x04
#define PWRAP_WACS_FSM_WFVLDCLR 0x06
#define PWRAP_WACS_INIT_DONE 0x01
#define PWRAP_WACS_WACS_SYNC_IDLE 0x01
#define PWRAP_WACS_SYNC_BUSY 0x00
/* macro for device wrapper default value */
#define PWRAP_DEW_READ_TEST_VAL 0x5aa5
#define PWRAP_DEW_WRITE_TEST_VAL 0xa55a
/* macro for manual command */
#define PWRAP_MAN_CMD_SPI_WRITE (1 << 13)
#define PWRAP_MAN_CMD_OP_CSH (0x0 << 8)
#define PWRAP_MAN_CMD_OP_CSL (0x1 << 8)
#define PWRAP_MAN_CMD_OP_CK (0x2 << 8)
#define PWRAP_MAN_CMD_OP_OUTS (0x8 << 8)
#define PWRAP_MAN_CMD_OP_OUTD (0x9 << 8)
#define PWRAP_MAN_CMD_OP_OUTQ (0xa << 8)
/* macro for slave device wrapper registers */
#define PWRAP_DEW_BASE 0xbc00
#define PWRAP_DEW_EVENT_OUT_EN (PWRAP_DEW_BASE + 0x0)
#define PWRAP_DEW_DIO_EN (PWRAP_DEW_BASE + 0x2)
#define PWRAP_DEW_EVENT_SRC_EN (PWRAP_DEW_BASE + 0x4)
#define PWRAP_DEW_EVENT_SRC (PWRAP_DEW_BASE + 0x6)
#define PWRAP_DEW_EVENT_FLAG (PWRAP_DEW_BASE + 0x8)
#define PWRAP_DEW_READ_TEST (PWRAP_DEW_BASE + 0xa)
#define PWRAP_DEW_WRITE_TEST (PWRAP_DEW_BASE + 0xc)
#define PWRAP_DEW_CRC_EN (PWRAP_DEW_BASE + 0xe)
#define PWRAP_DEW_CRC_VAL (PWRAP_DEW_BASE + 0x10)
#define PWRAP_DEW_MON_GRP_SEL (PWRAP_DEW_BASE + 0x12)
#define PWRAP_DEW_MON_FLAG_SEL (PWRAP_DEW_BASE + 0x14)
#define PWRAP_DEW_EVENT_TEST (PWRAP_DEW_BASE + 0x16)
#define PWRAP_DEW_CIPHER_KEY_SEL (PWRAP_DEW_BASE + 0x18)
#define PWRAP_DEW_CIPHER_IV_SEL (PWRAP_DEW_BASE + 0x1a)
#define PWRAP_DEW_CIPHER_LOAD (PWRAP_DEW_BASE + 0x1c)
#define PWRAP_DEW_CIPHER_START (PWRAP_DEW_BASE + 0x1e)
#define PWRAP_DEW_CIPHER_RDY (PWRAP_DEW_BASE + 0x20)
#define PWRAP_DEW_CIPHER_MODE (PWRAP_DEW_BASE + 0x22)
#define PWRAP_DEW_CIPHER_SWRST (PWRAP_DEW_BASE + 0x24)
#define PWRAP_MT8173_DEW_CIPHER_IV0 (PWRAP_DEW_BASE + 0x26)
#define PWRAP_MT8173_DEW_CIPHER_IV1 (PWRAP_DEW_BASE + 0x28)
#define PWRAP_MT8173_DEW_CIPHER_IV2 (PWRAP_DEW_BASE + 0x2a)
#define PWRAP_MT8173_DEW_CIPHER_IV3 (PWRAP_DEW_BASE + 0x2c)
#define PWRAP_MT8173_DEW_CIPHER_IV4 (PWRAP_DEW_BASE + 0x2e)
#define PWRAP_MT8173_DEW_CIPHER_IV5 (PWRAP_DEW_BASE + 0x30)
enum pwrap_regs {
PWRAP_MUX_SEL,
PWRAP_WRAP_EN,
PWRAP_DIO_EN,
PWRAP_SIDLY,
PWRAP_CSHEXT_WRITE,
PWRAP_CSHEXT_READ,
PWRAP_CSLEXT_START,
PWRAP_CSLEXT_END,
PWRAP_STAUPD_PRD,
PWRAP_STAUPD_GRPEN,
PWRAP_STAUPD_MAN_TRIG,
PWRAP_STAUPD_STA,
PWRAP_WRAP_STA,
PWRAP_HARB_INIT,
PWRAP_HARB_HPRIO,
PWRAP_HIPRIO_ARB_EN,
PWRAP_HARB_STA0,
PWRAP_HARB_STA1,
PWRAP_MAN_EN,
PWRAP_MAN_CMD,
PWRAP_MAN_RDATA,
PWRAP_MAN_VLDCLR,
PWRAP_WACS0_EN,
PWRAP_INIT_DONE0,
PWRAP_WACS0_CMD,
PWRAP_WACS0_RDATA,
PWRAP_WACS0_VLDCLR,
PWRAP_WACS1_EN,
PWRAP_INIT_DONE1,
PWRAP_WACS1_CMD,
PWRAP_WACS1_RDATA,
PWRAP_WACS1_VLDCLR,
PWRAP_WACS2_EN,
PWRAP_INIT_DONE2,
PWRAP_WACS2_CMD,
PWRAP_WACS2_RDATA,
PWRAP_WACS2_VLDCLR,
PWRAP_INT_EN,
PWRAP_INT_FLG_RAW,
PWRAP_INT_FLG,
PWRAP_INT_CLR,
PWRAP_SIG_ADR,
PWRAP_SIG_MODE,
PWRAP_SIG_VALUE,
PWRAP_SIG_ERRVAL,
PWRAP_CRC_EN,
PWRAP_TIMER_EN,
PWRAP_TIMER_STA,
PWRAP_WDT_UNIT,
PWRAP_WDT_SRC_EN,
PWRAP_WDT_FLG,
PWRAP_DEBUG_INT_SEL,
PWRAP_CIPHER_KEY_SEL,
PWRAP_CIPHER_IV_SEL,
PWRAP_CIPHER_RDY,
PWRAP_CIPHER_MODE,
PWRAP_CIPHER_SWRST,
PWRAP_DCM_EN,
PWRAP_DCM_DBC_PRD,
/* MT8135 only regs */
PWRAP_CSHEXT,
PWRAP_EVENT_IN_EN,
PWRAP_EVENT_DST_EN,
PWRAP_RRARB_INIT,
PWRAP_RRARB_EN,
PWRAP_RRARB_STA0,
PWRAP_RRARB_STA1,
PWRAP_EVENT_STA,
PWRAP_EVENT_STACLR,
PWRAP_CIPHER_LOAD,
PWRAP_CIPHER_START,
/* MT8173 only regs */
PWRAP_RDDMY,
PWRAP_SI_CK_CON,
PWRAP_DVFS_ADR0,
PWRAP_DVFS_WDATA0,
PWRAP_DVFS_ADR1,
PWRAP_DVFS_WDATA1,
PWRAP_DVFS_ADR2,
PWRAP_DVFS_WDATA2,
PWRAP_DVFS_ADR3,
PWRAP_DVFS_WDATA3,
PWRAP_DVFS_ADR4,
PWRAP_DVFS_WDATA4,
PWRAP_DVFS_ADR5,
PWRAP_DVFS_WDATA5,
PWRAP_DVFS_ADR6,
PWRAP_DVFS_WDATA6,
PWRAP_DVFS_ADR7,
PWRAP_DVFS_WDATA7,
PWRAP_SPMINF_STA,
PWRAP_CIPHER_EN,
};
static int mt8173_regs[] = {
[PWRAP_MUX_SEL] = 0x0,
[PWRAP_WRAP_EN] = 0x4,
[PWRAP_DIO_EN] = 0x8,
[PWRAP_SIDLY] = 0xc,
[PWRAP_RDDMY] = 0x10,
[PWRAP_SI_CK_CON] = 0x14,
[PWRAP_CSHEXT_WRITE] = 0x18,
[PWRAP_CSHEXT_READ] = 0x1c,
[PWRAP_CSLEXT_START] = 0x20,
[PWRAP_CSLEXT_END] = 0x24,
[PWRAP_STAUPD_PRD] = 0x28,
[PWRAP_STAUPD_GRPEN] = 0x2c,
[PWRAP_STAUPD_MAN_TRIG] = 0x40,
[PWRAP_STAUPD_STA] = 0x44,
[PWRAP_WRAP_STA] = 0x48,
[PWRAP_HARB_INIT] = 0x4c,
[PWRAP_HARB_HPRIO] = 0x50,
[PWRAP_HIPRIO_ARB_EN] = 0x54,
[PWRAP_HARB_STA0] = 0x58,
[PWRAP_HARB_STA1] = 0x5c,
[PWRAP_MAN_EN] = 0x60,
[PWRAP_MAN_CMD] = 0x64,
[PWRAP_MAN_RDATA] = 0x68,
[PWRAP_MAN_VLDCLR] = 0x6c,
[PWRAP_WACS0_EN] = 0x70,
[PWRAP_INIT_DONE0] = 0x74,
[PWRAP_WACS0_CMD] = 0x78,
[PWRAP_WACS0_RDATA] = 0x7c,
[PWRAP_WACS0_VLDCLR] = 0x80,
[PWRAP_WACS1_EN] = 0x84,
[PWRAP_INIT_DONE1] = 0x88,
[PWRAP_WACS1_CMD] = 0x8c,
[PWRAP_WACS1_RDATA] = 0x90,
[PWRAP_WACS1_VLDCLR] = 0x94,
[PWRAP_WACS2_EN] = 0x98,
[PWRAP_INIT_DONE2] = 0x9c,
[PWRAP_WACS2_CMD] = 0xa0,
[PWRAP_WACS2_RDATA] = 0xa4,
[PWRAP_WACS2_VLDCLR] = 0xa8,
[PWRAP_INT_EN] = 0xac,
[PWRAP_INT_FLG_RAW] = 0xb0,
[PWRAP_INT_FLG] = 0xb4,
[PWRAP_INT_CLR] = 0xb8,
[PWRAP_SIG_ADR] = 0xbc,
[PWRAP_SIG_MODE] = 0xc0,
[PWRAP_SIG_VALUE] = 0xc4,
[PWRAP_SIG_ERRVAL] = 0xc8,
[PWRAP_CRC_EN] = 0xcc,
[PWRAP_TIMER_EN] = 0xd0,
[PWRAP_TIMER_STA] = 0xd4,
[PWRAP_WDT_UNIT] = 0xd8,
[PWRAP_WDT_SRC_EN] = 0xdc,
[PWRAP_WDT_FLG] = 0xe0,
[PWRAP_DEBUG_INT_SEL] = 0xe4,
[PWRAP_DVFS_ADR0] = 0xe8,
[PWRAP_DVFS_WDATA0] = 0xec,
[PWRAP_DVFS_ADR1] = 0xf0,
[PWRAP_DVFS_WDATA1] = 0xf4,
[PWRAP_DVFS_ADR2] = 0xf8,
[PWRAP_DVFS_WDATA2] = 0xfc,
[PWRAP_DVFS_ADR3] = 0x100,
[PWRAP_DVFS_WDATA3] = 0x104,
[PWRAP_DVFS_ADR4] = 0x108,
[PWRAP_DVFS_WDATA4] = 0x10c,
[PWRAP_DVFS_ADR5] = 0x110,
[PWRAP_DVFS_WDATA5] = 0x114,
[PWRAP_DVFS_ADR6] = 0x118,
[PWRAP_DVFS_WDATA6] = 0x11c,
[PWRAP_DVFS_ADR7] = 0x120,
[PWRAP_DVFS_WDATA7] = 0x124,
[PWRAP_SPMINF_STA] = 0x128,
[PWRAP_CIPHER_KEY_SEL] = 0x12c,
[PWRAP_CIPHER_IV_SEL] = 0x130,
[PWRAP_CIPHER_EN] = 0x134,
[PWRAP_CIPHER_RDY] = 0x138,
[PWRAP_CIPHER_MODE] = 0x13c,
[PWRAP_CIPHER_SWRST] = 0x140,
[PWRAP_DCM_EN] = 0x144,
[PWRAP_DCM_DBC_PRD] = 0x148,
};
static int mt8135_regs[] = {
[PWRAP_MUX_SEL] = 0x0,
[PWRAP_WRAP_EN] = 0x4,
[PWRAP_DIO_EN] = 0x8,
[PWRAP_SIDLY] = 0xc,
[PWRAP_CSHEXT] = 0x10,
[PWRAP_CSHEXT_WRITE] = 0x14,
[PWRAP_CSHEXT_READ] = 0x18,
[PWRAP_CSLEXT_START] = 0x1c,
[PWRAP_CSLEXT_END] = 0x20,
[PWRAP_STAUPD_PRD] = 0x24,
[PWRAP_STAUPD_GRPEN] = 0x28,
[PWRAP_STAUPD_MAN_TRIG] = 0x2c,
[PWRAP_STAUPD_STA] = 0x30,
[PWRAP_EVENT_IN_EN] = 0x34,
[PWRAP_EVENT_DST_EN] = 0x38,
[PWRAP_WRAP_STA] = 0x3c,
[PWRAP_RRARB_INIT] = 0x40,
[PWRAP_RRARB_EN] = 0x44,
[PWRAP_RRARB_STA0] = 0x48,
[PWRAP_RRARB_STA1] = 0x4c,
[PWRAP_HARB_INIT] = 0x50,
[PWRAP_HARB_HPRIO] = 0x54,
[PWRAP_HIPRIO_ARB_EN] = 0x58,
[PWRAP_HARB_STA0] = 0x5c,
[PWRAP_HARB_STA1] = 0x60,
[PWRAP_MAN_EN] = 0x64,
[PWRAP_MAN_CMD] = 0x68,
[PWRAP_MAN_RDATA] = 0x6c,
[PWRAP_MAN_VLDCLR] = 0x70,
[PWRAP_WACS0_EN] = 0x74,
[PWRAP_INIT_DONE0] = 0x78,
[PWRAP_WACS0_CMD] = 0x7c,
[PWRAP_WACS0_RDATA] = 0x80,
[PWRAP_WACS0_VLDCLR] = 0x84,
[PWRAP_WACS1_EN] = 0x88,
[PWRAP_INIT_DONE1] = 0x8c,
[PWRAP_WACS1_CMD] = 0x90,
[PWRAP_WACS1_RDATA] = 0x94,
[PWRAP_WACS1_VLDCLR] = 0x98,
[PWRAP_WACS2_EN] = 0x9c,
[PWRAP_INIT_DONE2] = 0xa0,
[PWRAP_WACS2_CMD] = 0xa4,
[PWRAP_WACS2_RDATA] = 0xa8,
[PWRAP_WACS2_VLDCLR] = 0xac,
[PWRAP_INT_EN] = 0xb0,
[PWRAP_INT_FLG_RAW] = 0xb4,
[PWRAP_INT_FLG] = 0xb8,
[PWRAP_INT_CLR] = 0xbc,
[PWRAP_SIG_ADR] = 0xc0,
[PWRAP_SIG_MODE] = 0xc4,
[PWRAP_SIG_VALUE] = 0xc8,
[PWRAP_SIG_ERRVAL] = 0xcc,
[PWRAP_CRC_EN] = 0xd0,
[PWRAP_EVENT_STA] = 0xd4,
[PWRAP_EVENT_STACLR] = 0xd8,
[PWRAP_TIMER_EN] = 0xdc,
[PWRAP_TIMER_STA] = 0xe0,
[PWRAP_WDT_UNIT] = 0xe4,
[PWRAP_WDT_SRC_EN] = 0xe8,
[PWRAP_WDT_FLG] = 0xec,
[PWRAP_DEBUG_INT_SEL] = 0xf0,
[PWRAP_CIPHER_KEY_SEL] = 0x134,
[PWRAP_CIPHER_IV_SEL] = 0x138,
[PWRAP_CIPHER_LOAD] = 0x13c,
[PWRAP_CIPHER_START] = 0x140,
[PWRAP_CIPHER_RDY] = 0x144,
[PWRAP_CIPHER_MODE] = 0x148,
[PWRAP_CIPHER_SWRST] = 0x14c,
[PWRAP_DCM_EN] = 0x15c,
[PWRAP_DCM_DBC_PRD] = 0x160,
};
enum pwrap_type {
PWRAP_MT8135,
PWRAP_MT8173,
};
struct pmic_wrapper_type {
int *regs;
enum pwrap_type type;
u32 arb_en_all;
};
static struct pmic_wrapper_type pwrap_mt8135 = {
.regs = mt8135_regs,
.type = PWRAP_MT8135,
.arb_en_all = 0x1ff,
};
static struct pmic_wrapper_type pwrap_mt8173 = {
.regs = mt8173_regs,
.type = PWRAP_MT8173,
.arb_en_all = 0x3f,
};
struct pmic_wrapper {
struct device *dev;
void __iomem *base;
struct regmap *regmap;
int *regs;
enum pwrap_type type;
u32 arb_en_all;
struct clk *clk_spi;
struct clk *clk_wrap;
struct reset_control *rstc;
struct reset_control *rstc_bridge;
void __iomem *bridge_base;
};
static inline int pwrap_is_mt8135(struct pmic_wrapper *wrp)
{
return wrp->type == PWRAP_MT8135;
}
static inline int pwrap_is_mt8173(struct pmic_wrapper *wrp)
{
return wrp->type == PWRAP_MT8173;
}
static u32 pwrap_readl(struct pmic_wrapper *wrp, enum pwrap_regs reg)
{
return readl(wrp->base + wrp->regs[reg]);
}
static void pwrap_writel(struct pmic_wrapper *wrp, u32 val, enum pwrap_regs reg)
{
writel(val, wrp->base + wrp->regs[reg]);
}
static bool pwrap_is_fsm_idle(struct pmic_wrapper *wrp)
{
u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
return PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_IDLE;
}
static bool pwrap_is_fsm_vldclr(struct pmic_wrapper *wrp)
{
u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
return PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR;
}
static bool pwrap_is_sync_idle(struct pmic_wrapper *wrp)
{
return pwrap_readl(wrp, PWRAP_WACS2_RDATA) & PWRAP_STATE_SYNC_IDLE0;
}
static bool pwrap_is_fsm_idle_and_sync_idle(struct pmic_wrapper *wrp)
{
u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
return (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_IDLE) &&
(val & PWRAP_STATE_SYNC_IDLE0);
}
static int pwrap_wait_for_state(struct pmic_wrapper *wrp,
bool (*fp)(struct pmic_wrapper *))
{
unsigned long timeout;
timeout = jiffies + usecs_to_jiffies(255);
do {
if (time_after(jiffies, timeout))
return fp(wrp) ? 0 : -ETIMEDOUT;
if (fp(wrp))
return 0;
} while (1);
}
static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
{
int ret;
u32 val;
val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
if (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR)
pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
if (ret)
return ret;
pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata,
PWRAP_WACS2_CMD);
return 0;
}
static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
{
int ret;
u32 val;
val = pwrap_readl(wrp, PWRAP_WACS2_RDATA);
if (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR)
pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
if (ret)
return ret;
pwrap_writel(wrp, (adr >> 1) << 16, PWRAP_WACS2_CMD);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
if (ret)
return ret;
*rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA));
return 0;
}
static int pwrap_regmap_read(void *context, u32 adr, u32 *rdata)
{
return pwrap_read(context, adr, rdata);
}
static int pwrap_regmap_write(void *context, u32 adr, u32 wdata)
{
return pwrap_write(context, adr, wdata);
}
static int pwrap_reset_spislave(struct pmic_wrapper *wrp)
{
int ret, i;
pwrap_writel(wrp, 0, PWRAP_HIPRIO_ARB_EN);
pwrap_writel(wrp, 0, PWRAP_WRAP_EN);
pwrap_writel(wrp, 1, PWRAP_MUX_SEL);
pwrap_writel(wrp, 1, PWRAP_MAN_EN);
pwrap_writel(wrp, 0, PWRAP_DIO_EN);
pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | PWRAP_MAN_CMD_OP_CSL,
PWRAP_MAN_CMD);
pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | PWRAP_MAN_CMD_OP_OUTS,
PWRAP_MAN_CMD);
pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | PWRAP_MAN_CMD_OP_CSH,
PWRAP_MAN_CMD);
for (i = 0; i < 4; i++)
pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | PWRAP_MAN_CMD_OP_OUTS,
PWRAP_MAN_CMD);
ret = pwrap_wait_for_state(wrp, pwrap_is_sync_idle);
if (ret) {
dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret);
return ret;
}
pwrap_writel(wrp, 0, PWRAP_MAN_EN);
pwrap_writel(wrp, 0, PWRAP_MUX_SEL);
return 0;
}
/*
* pwrap_init_sidly - configure serial input delay
*
* This configures the serial input delay. We can configure a 0, 2, 4 or 6 ns
* delay. Do a read test with all possible values and choose the best delay.
*/
static int pwrap_init_sidly(struct pmic_wrapper *wrp)
{
u32 rdata;
u32 i;
u32 pass = 0;
signed char dly[16] = {
-1, 0, 1, 0, 2, -1, 1, 1, 3, -1, -1, -1, 3, -1, 2, 1
};
for (i = 0; i < 4; i++) {
pwrap_writel(wrp, i, PWRAP_SIDLY);
pwrap_read(wrp, PWRAP_DEW_READ_TEST, &rdata);
if (rdata == PWRAP_DEW_READ_TEST_VAL) {
dev_dbg(wrp->dev, "[Read Test] pass, SIDLY=%x\n", i);
pass |= 1 << i;
}
}
if (dly[pass] < 0) {
dev_err(wrp->dev, "sidly pass range 0x%x not continuous\n",
pass);
return -EIO;
}
pwrap_writel(wrp, dly[pass], PWRAP_SIDLY);
return 0;
}
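The dly[] table above is indexed by the 4-bit bitmap of SIDLY settings that passed the read test and yields the delay finally programmed, with -1 flagging a pass range that is not contiguous. A toy reproduction of that lookup; the table values are copied from the function above and the chosen bitmap is just an example:
#include <stdio.h>
/* Same table as pwrap_init_sidly(): index = bitmap of passing SIDLY values. */
static const signed char dly[16] = {
        -1, 0, 1, 0, 2, -1, 1, 1, 3, -1, -1, -1, 3, -1, 2, 1
};
int main(void)
{
        unsigned int pass = (1 << 1) | (1 << 2);        /* SIDLY 1 and 2 passed */
        if (dly[pass] < 0)
                printf("pass range 0x%x not continuous\n", pass);
        else
                printf("pass range 0x%x -> program SIDLY %d\n", pass, dly[pass]);
        return 0;
}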
static int pwrap_init_reg_clock(struct pmic_wrapper *wrp)
{
unsigned long rate_spi;
int ck_mhz;
rate_spi = clk_get_rate(wrp->clk_spi);
if (rate_spi > 26000000)
ck_mhz = 26;
else if (rate_spi > 18000000)
ck_mhz = 18;
else
ck_mhz = 0;
switch (ck_mhz) {
case 18:
if (pwrap_is_mt8135(wrp))
pwrap_writel(wrp, 0xc, PWRAP_CSHEXT);
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0xc, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END);
break;
case 26:
if (pwrap_is_mt8135(wrp))
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT);
pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END);
break;
case 0:
if (pwrap_is_mt8135(wrp))
pwrap_writel(wrp, 0xf, PWRAP_CSHEXT);
pwrap_writel(wrp, 0xf, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0xf, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0xf, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0xf, PWRAP_CSLEXT_END);
break;
default:
return -EINVAL;
}
return 0;
}
static bool pwrap_is_cipher_ready(struct pmic_wrapper *wrp)
{
return pwrap_readl(wrp, PWRAP_CIPHER_RDY) & 1;
}
static bool pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp)
{
u32 rdata;
int ret;
ret = pwrap_read(wrp, PWRAP_DEW_CIPHER_RDY, &rdata);
if (ret)
return 0;
return rdata == 1;
}
static int pwrap_init_cipher(struct pmic_wrapper *wrp)
{
int ret;
u32 rdata;
pwrap_writel(wrp, 0x1, PWRAP_CIPHER_SWRST);
pwrap_writel(wrp, 0x0, PWRAP_CIPHER_SWRST);
pwrap_writel(wrp, 0x1, PWRAP_CIPHER_KEY_SEL);
pwrap_writel(wrp, 0x2, PWRAP_CIPHER_IV_SEL);
if (pwrap_is_mt8135(wrp)) {
pwrap_writel(wrp, 1, PWRAP_CIPHER_LOAD);
pwrap_writel(wrp, 1, PWRAP_CIPHER_START);
} else {
pwrap_writel(wrp, 1, PWRAP_CIPHER_EN);
}
/* Config cipher mode @PMIC */
pwrap_write(wrp, PWRAP_DEW_CIPHER_SWRST, 0x1);
pwrap_write(wrp, PWRAP_DEW_CIPHER_SWRST, 0x0);
pwrap_write(wrp, PWRAP_DEW_CIPHER_KEY_SEL, 0x1);
pwrap_write(wrp, PWRAP_DEW_CIPHER_IV_SEL, 0x2);
pwrap_write(wrp, PWRAP_DEW_CIPHER_LOAD, 0x1);
pwrap_write(wrp, PWRAP_DEW_CIPHER_START, 0x1);
/* wait for cipher data ready@AP */
ret = pwrap_wait_for_state(wrp, pwrap_is_cipher_ready);
if (ret) {
dev_err(wrp->dev, "cipher data ready@AP fail, ret=%d\n", ret);
return ret;
}
/* wait for cipher data ready@PMIC */
ret = pwrap_wait_for_state(wrp, pwrap_is_pmic_cipher_ready);
if (ret) {
dev_err(wrp->dev, "timeout waiting for cipher data ready@PMIC\n");
return ret;
}
/* wait for cipher mode idle */
pwrap_write(wrp, PWRAP_DEW_CIPHER_MODE, 0x1);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle);
if (ret) {
dev_err(wrp->dev, "cipher mode idle fail, ret=%d\n", ret);
return ret;
}
pwrap_writel(wrp, 1, PWRAP_CIPHER_MODE);
/* Write Test */
if (pwrap_write(wrp, PWRAP_DEW_WRITE_TEST, PWRAP_DEW_WRITE_TEST_VAL) ||
pwrap_read(wrp, PWRAP_DEW_WRITE_TEST, &rdata) ||
(rdata != PWRAP_DEW_WRITE_TEST_VAL)) {
dev_err(wrp->dev, "rdata=0x%04X\n", rdata);
return -EFAULT;
}
return 0;
}
static int pwrap_init(struct pmic_wrapper *wrp)
{
int ret;
u32 rdata;
reset_control_reset(wrp->rstc);
if (wrp->rstc_bridge)
reset_control_reset(wrp->rstc_bridge);
if (pwrap_is_mt8173(wrp)) {
/* Enable DCM */
pwrap_writel(wrp, 3, PWRAP_DCM_EN);
pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD);
}
/* Reset SPI slave */
ret = pwrap_reset_spislave(wrp);
if (ret)
return ret;
pwrap_writel(wrp, 1, PWRAP_WRAP_EN);
pwrap_writel(wrp, wrp->arb_en_all, PWRAP_HIPRIO_ARB_EN);
pwrap_writel(wrp, 1, PWRAP_WACS2_EN);
ret = pwrap_init_reg_clock(wrp);
if (ret)
return ret;
/* Setup serial input delay */
ret = pwrap_init_sidly(wrp);
if (ret)
return ret;
/* Enable dual IO mode */
pwrap_write(wrp, PWRAP_DEW_DIO_EN, 1);
/* Check IDLE & INIT_DONE in advance */
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle);
if (ret) {
dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret);
return ret;
}
pwrap_writel(wrp, 1, PWRAP_DIO_EN);
/* Read Test */
pwrap_read(wrp, PWRAP_DEW_READ_TEST, &rdata);
if (rdata != PWRAP_DEW_READ_TEST_VAL) {
dev_err(wrp->dev, "Read test failed after switch to DIO mode: 0x%04x != 0x%04x\n",
PWRAP_DEW_READ_TEST_VAL, rdata);
return -EFAULT;
}
/* Enable encryption */
ret = pwrap_init_cipher(wrp);
if (ret)
return ret;
/* Signature checking - using CRC */
if (pwrap_write(wrp, PWRAP_DEW_CRC_EN, 0x1))
return -EFAULT;
pwrap_writel(wrp, 0x1, PWRAP_CRC_EN);
pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE);
pwrap_writel(wrp, PWRAP_DEW_CRC_VAL, PWRAP_SIG_ADR);
pwrap_writel(wrp, wrp->arb_en_all, PWRAP_HIPRIO_ARB_EN);
if (pwrap_is_mt8135(wrp))
pwrap_writel(wrp, 0x7, PWRAP_RRARB_EN);
pwrap_writel(wrp, 0x1, PWRAP_WACS0_EN);
pwrap_writel(wrp, 0x1, PWRAP_WACS1_EN);
pwrap_writel(wrp, 0x1, PWRAP_WACS2_EN);
pwrap_writel(wrp, 0x5, PWRAP_STAUPD_PRD);
pwrap_writel(wrp, 0xff, PWRAP_STAUPD_GRPEN);
pwrap_writel(wrp, 0xf, PWRAP_WDT_UNIT);
pwrap_writel(wrp, 0xffffffff, PWRAP_WDT_SRC_EN);
pwrap_writel(wrp, 0x1, PWRAP_TIMER_EN);
pwrap_writel(wrp, ~((1 << 31) | (1 << 1)), PWRAP_INT_EN);
if (pwrap_is_mt8135(wrp)) {
/* enable pwrap events and pwrap bridge in AP side */
pwrap_writel(wrp, 0x1, PWRAP_EVENT_IN_EN);
pwrap_writel(wrp, 0xffff, PWRAP_EVENT_DST_EN);
writel(0x7f, wrp->bridge_base + PWRAP_MT8135_BRIDGE_IORD_ARB_EN);
writel(0x1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_WACS3_EN);
writel(0x1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_WACS4_EN);
writel(0x1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_WDT_UNIT);
writel(0xffff, wrp->bridge_base + PWRAP_MT8135_BRIDGE_WDT_SRC_EN);
writel(0x1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_TIMER_EN);
writel(0x7ff, wrp->bridge_base + PWRAP_MT8135_BRIDGE_INT_EN);
/* enable PMIC event out and sources */
if (pwrap_write(wrp, PWRAP_DEW_EVENT_OUT_EN, 0x1) ||
pwrap_write(wrp, PWRAP_DEW_EVENT_SRC_EN, 0xffff)) {
dev_err(wrp->dev, "enable dewrap fail\n");
return -EFAULT;
}
} else {
/* PMIC_DEWRAP enables */
if (pwrap_write(wrp, PWRAP_DEW_EVENT_OUT_EN, 0x1) ||
pwrap_write(wrp, PWRAP_DEW_EVENT_SRC_EN, 0xffff)) {
dev_err(wrp->dev, "enable dewrap fail\n");
return -EFAULT;
}
}
/* Setup the init done registers */
pwrap_writel(wrp, 1, PWRAP_INIT_DONE2);
pwrap_writel(wrp, 1, PWRAP_INIT_DONE0);
pwrap_writel(wrp, 1, PWRAP_INIT_DONE1);
if (pwrap_is_mt8135(wrp)) {
writel(1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_INIT_DONE3);
writel(1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_INIT_DONE4);
}
return 0;
}
static irqreturn_t pwrap_interrupt(int irqno, void *dev_id)
{
u32 rdata;
struct pmic_wrapper *wrp = dev_id;
rdata = pwrap_readl(wrp, PWRAP_INT_FLG);
dev_err(wrp->dev, "unexpected interrupt int=0x%x\n", rdata);
pwrap_writel(wrp, 0xffffffff, PWRAP_INT_CLR);
return IRQ_HANDLED;
}
static const struct regmap_config pwrap_regmap_config = {
.reg_bits = 16,
.val_bits = 16,
.reg_stride = 2,
.reg_read = pwrap_regmap_read,
.reg_write = pwrap_regmap_write,
.max_register = 0xffff,
};
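Because the wrapper registers a 16-bit regmap whose reg_read/reg_write callbacks funnel into pwrap_read()/pwrap_write(), a child PMIC driver can talk to the PMIC with ordinary regmap calls. A hypothetical sketch; the register address and the dev_get_regmap() retrieval path are assumptions for illustration, not taken from this diff:
/* Hypothetical child driver: read one 16-bit PMIC register via the wrapper. */
#include <linux/platform_device.h>
#include <linux/regmap.h>
static int example_pmic_probe(struct platform_device *pdev)
{
        /* the wrapper attached its regmap to wrp->dev, the parent device here */
        struct regmap *map = dev_get_regmap(pdev->dev.parent, NULL);
        unsigned int val;
        int ret;
        if (!map)
                return -EPROBE_DEFER;
        ret = regmap_read(map, 0x0200, &val);   /* 0x0200 is a made-up address */
        if (ret)
                return ret;
        dev_info(&pdev->dev, "PMIC register 0x0200 = 0x%04x\n", val);
        return 0;
}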
static struct of_device_id of_pwrap_match_tbl[] = {
{
.compatible = "mediatek,mt8135-pwrap",
.data = &pwrap_mt8135,
}, {
.compatible = "mediatek,mt8173-pwrap",
.data = &pwrap_mt8173,
}, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(of, of_pwrap_match_tbl);
static int pwrap_probe(struct platform_device *pdev)
{
int ret, irq;
struct pmic_wrapper *wrp;
struct device_node *np = pdev->dev.of_node;
const struct of_device_id *of_id =
of_match_device(of_pwrap_match_tbl, &pdev->dev);
const struct pmic_wrapper_type *type;
struct resource *res;
wrp = devm_kzalloc(&pdev->dev, sizeof(*wrp), GFP_KERNEL);
if (!wrp)
return -ENOMEM;
platform_set_drvdata(pdev, wrp);
type = of_id->data;
wrp->regs = type->regs;
wrp->type = type->type;
wrp->arb_en_all = type->arb_en_all;
wrp->dev = &pdev->dev;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pwrap");
wrp->base = devm_ioremap_resource(wrp->dev, res);
if (IS_ERR(wrp->base))
return PTR_ERR(wrp->base);
wrp->rstc = devm_reset_control_get(wrp->dev, "pwrap");
if (IS_ERR(wrp->rstc)) {
ret = PTR_ERR(wrp->rstc);
dev_dbg(wrp->dev, "cannot get pwrap reset: %d\n", ret);
return ret;
}
if (pwrap_is_mt8135(wrp)) {
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"pwrap-bridge");
wrp->bridge_base = devm_ioremap_resource(wrp->dev, res);
if (IS_ERR(wrp->bridge_base))
return PTR_ERR(wrp->bridge_base);
wrp->rstc_bridge = devm_reset_control_get(wrp->dev, "pwrap-bridge");
if (IS_ERR(wrp->rstc_bridge)) {
ret = PTR_ERR(wrp->rstc_bridge);
dev_dbg(wrp->dev, "cannot get pwrap-bridge reset: %d\n", ret);
return ret;
}
}
wrp->clk_spi = devm_clk_get(wrp->dev, "spi");
if (IS_ERR(wrp->clk_spi)) {
dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_spi));
return PTR_ERR(wrp->clk_spi);
}
wrp->clk_wrap = devm_clk_get(wrp->dev, "wrap");
if (IS_ERR(wrp->clk_wrap)) {
dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_wrap));
return PTR_ERR(wrp->clk_wrap);
}
ret = clk_prepare_enable(wrp->clk_spi);
if (ret)
return ret;
ret = clk_prepare_enable(wrp->clk_wrap);
if (ret)
goto err_out1;
/* Enable internal dynamic clock */
pwrap_writel(wrp, 1, PWRAP_DCM_EN);
pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD);
/*
* The PMIC could already be initialized by the bootloader.
* Skip initialization here in this case.
*/
if (!pwrap_readl(wrp, PWRAP_INIT_DONE2)) {
ret = pwrap_init(wrp);
if (ret) {
dev_dbg(wrp->dev, "init failed with %d\n", ret);
goto err_out2;
}
}
if (!(pwrap_readl(wrp, PWRAP_WACS2_RDATA) & PWRAP_STATE_INIT_DONE0)) {
dev_dbg(wrp->dev, "initialization isn't finished\n");
return -ENODEV;
}
irq = platform_get_irq(pdev, 0);
ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt, IRQF_TRIGGER_HIGH,
"mt-pmic-pwrap", wrp);
if (ret)
goto err_out2;
wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, &pwrap_regmap_config);
if (IS_ERR(wrp->regmap))
return PTR_ERR(wrp->regmap);
ret = of_platform_populate(np, NULL, NULL, wrp->dev);
if (ret) {
dev_dbg(wrp->dev, "failed to create child devices at %s\n",
np->full_name);
goto err_out2;
}
return 0;
err_out2:
clk_disable_unprepare(wrp->clk_wrap);
err_out1:
clk_disable_unprepare(wrp->clk_spi);
return ret;
}
static struct platform_driver pwrap_drv = {
.driver = {
.name = "mt-pmic-pwrap",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(of_pwrap_match_tbl),
},
.probe = pwrap_probe,
};
module_platform_driver(pwrap_drv);
MODULE_AUTHOR("Flora Fu, MediaTek");
MODULE_DESCRIPTION("MediaTek MT8135 PMIC Wrapper Driver");
MODULE_LICENSE("GPL v2");
......@@ -4,6 +4,7 @@
config QCOM_GSBI
tristate "QCOM General Serial Bus Interface"
depends on ARCH_QCOM
select MFD_SYSCON
help
Say y here to enable GSBI support. The GSBI provides control
functions for connecting the underlying serial UART, SPI, and I2C
......
......@@ -18,22 +18,129 @@
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include <dt-bindings/soc/qcom,gsbi.h>
#define GSBI_CTRL_REG 0x0000
#define GSBI_PROTOCOL_SHIFT 4
#define MAX_GSBI 12
#define TCSR_ADM_CRCI_BASE 0x70
struct crci_config {
u32 num_rows;
const u32 (*array)[MAX_GSBI];
};
static const u32 crci_ipq8064[][MAX_GSBI] = {
{
0x000003, 0x00000c, 0x000030, 0x0000c0,
0x000300, 0x000c00, 0x003000, 0x00c000,
0x030000, 0x0c0000, 0x300000, 0xc00000
},
{
0x000003, 0x00000c, 0x000030, 0x0000c0,
0x000300, 0x000c00, 0x003000, 0x00c000,
0x030000, 0x0c0000, 0x300000, 0xc00000
},
};
static const struct crci_config config_ipq8064 = {
.num_rows = ARRAY_SIZE(crci_ipq8064),
.array = crci_ipq8064,
};
static const unsigned int crci_apq8064[][MAX_GSBI] = {
{
0x001800, 0x006000, 0x000030, 0x0000c0,
0x000300, 0x000400, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x000000
},
{
0x000000, 0x000000, 0x000000, 0x000000,
0x000000, 0x000020, 0x0000c0, 0x000000,
0x000000, 0x000000, 0x000000, 0x000000
},
};
static const struct crci_config config_apq8064 = {
.num_rows = ARRAY_SIZE(crci_apq8064),
.array = crci_apq8064,
};
static const unsigned int crci_msm8960[][MAX_GSBI] = {
{
0x000003, 0x00000c, 0x000030, 0x0000c0,
0x000300, 0x000400, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x000000
},
{
0x000000, 0x000000, 0x000000, 0x000000,
0x000000, 0x000020, 0x0000c0, 0x000300,
0x001800, 0x006000, 0x000000, 0x000000
},
};
static const struct crci_config config_msm8960 = {
.num_rows = ARRAY_SIZE(crci_msm8960),
.array = crci_msm8960,
};
static const unsigned int crci_msm8660[][MAX_GSBI] = {
{ /* ADM 0 - B */
0x000003, 0x00000c, 0x000030, 0x0000c0,
0x000300, 0x000c00, 0x003000, 0x00c000,
0x030000, 0x0c0000, 0x300000, 0xc00000
},
{ /* ADM 0 - B */
0x000003, 0x00000c, 0x000030, 0x0000c0,
0x000300, 0x000c00, 0x003000, 0x00c000,
0x030000, 0x0c0000, 0x300000, 0xc00000
},
{ /* ADM 1 - A */
0x000003, 0x00000c, 0x000030, 0x0000c0,
0x000300, 0x000c00, 0x003000, 0x00c000,
0x030000, 0x0c0000, 0x300000, 0xc00000
},
{ /* ADM 1 - B */
0x000003, 0x00000c, 0x000030, 0x0000c0,
0x000300, 0x000c00, 0x003000, 0x00c000,
0x030000, 0x0c0000, 0x300000, 0xc00000
},
};
static const struct crci_config config_msm8660 = {
.num_rows = ARRAY_SIZE(crci_msm8660),
.array = crci_msm8660,
};
struct gsbi_info {
struct clk *hclk;
u32 mode;
u32 crci;
struct regmap *tcsr;
};
static const struct of_device_id tcsr_dt_match[] = {
{ .compatible = "qcom,tcsr-ipq8064", .data = &config_ipq8064},
{ .compatible = "qcom,tcsr-apq8064", .data = &config_apq8064},
{ .compatible = "qcom,tcsr-msm8960", .data = &config_msm8960},
{ .compatible = "qcom,tcsr-msm8660", .data = &config_msm8660},
{ },
};
static int gsbi_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
struct device_node *tcsr_node;
const struct of_device_id *match;
struct resource *res;
void __iomem *base;
struct gsbi_info *gsbi;
int i;
u32 mask, gsbi_num;
const struct crci_config *config = NULL;
gsbi = devm_kzalloc(&pdev->dev, sizeof(*gsbi), GFP_KERNEL);
......@@ -45,6 +152,32 @@ static int gsbi_probe(struct platform_device *pdev)
if (IS_ERR(base))
return PTR_ERR(base);
/* get the tcsr node and setup the config and regmap */
gsbi->tcsr = syscon_regmap_lookup_by_phandle(node, "syscon-tcsr");
if (!IS_ERR(gsbi->tcsr)) {
tcsr_node = of_parse_phandle(node, "syscon-tcsr", 0);
if (tcsr_node) {
match = of_match_node(tcsr_dt_match, tcsr_node);
if (match)
config = match->data;
else
dev_warn(&pdev->dev, "no matching TCSR\n");
of_node_put(tcsr_node);
}
}
if (of_property_read_u32(node, "cell-index", &gsbi_num)) {
dev_err(&pdev->dev, "missing cell-index\n");
return -EINVAL;
}
if (gsbi_num < 1 || gsbi_num > MAX_GSBI) {
dev_err(&pdev->dev, "invalid cell-index\n");
return -EINVAL;
}
if (of_property_read_u32(node, "qcom,mode", &gsbi->mode)) {
dev_err(&pdev->dev, "missing mode configuration\n");
return -EINVAL;
......@@ -64,6 +197,25 @@ static int gsbi_probe(struct platform_device *pdev)
writel_relaxed((gsbi->mode << GSBI_PROTOCOL_SHIFT) | gsbi->crci,
base + GSBI_CTRL_REG);
/*
* modify tcsr to reflect mode and ADM CRCI mux
* Each gsbi contains a pair of bits, one for RX and one for TX
* SPI mode requires both bits cleared, otherwise they are set
*/
if (config) {
for (i = 0; i < config->num_rows; i++) {
mask = config->array[i][gsbi_num - 1];
if (gsbi->mode == GSBI_PROT_SPI)
regmap_update_bits(gsbi->tcsr,
TCSR_ADM_CRCI_BASE + 4 * i, mask, 0);
else
regmap_update_bits(gsbi->tcsr,
TCSR_ADM_CRCI_BASE + 4 * i, mask, mask);
}
}
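As the comment before this block notes, each GSBI owns one RX and one TX CRCI bit; in the ipq8064 table above those pairs are simply 0x3 shifted up by two bits per GSBI, which is what config->array[i][gsbi_num - 1] selects. A standalone illustration with a hypothetical cell-index:
#include <stdio.h>
/* First ADM row of crci_ipq8064 above: each GSBI owns two adjacent CRCI bits. */
static const unsigned int crci_row[12] = {
        0x000003, 0x00000c, 0x000030, 0x0000c0,
        0x000300, 0x000c00, 0x003000, 0x00c000,
        0x030000, 0x0c0000, 0x300000, 0xc00000
};
int main(void)
{
        unsigned int gsbi_num = 5;                      /* hypothetical cell-index */
        unsigned int mask = crci_row[gsbi_num - 1];
        /* RX + TX bit pair for this GSBI; cleared for SPI mode, set otherwise */
        printf("GSBI%u CRCI mask: 0x%06x (0x3 << %u)\n",
               gsbi_num, mask, 2 * (gsbi_num - 1));
        return 0;
}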
/* make sure the gsbi control write is not reordered */
wmb();
......
......@@ -154,7 +154,7 @@ config ARM_SP805_WATCHDOG
config AT91RM9200_WATCHDOG
tristate "AT91RM9200 watchdog"
depends on SOC_AT91RM9200
depends on SOC_AT91RM9200 && MFD_SYSCON
help
Watchdog timer embedded into AT91RM9200 chips. This will reboot your
system when the timeout is reached.
......
......@@ -12,27 +12,32 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/atmel-st.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/platform_device.h>
#include <linux/reboot.h>
#include <linux/regmap.h>
#include <linux/types.h>
#include <linux/watchdog.h>
#include <linux/uaccess.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <mach/at91_st.h>
#define WDT_DEFAULT_TIME 5 /* seconds */
#define WDT_MAX_TIME 256 /* seconds */
static int wdt_time = WDT_DEFAULT_TIME;
static bool nowayout = WATCHDOG_NOWAYOUT;
static struct regmap *regmap_st;
module_param(wdt_time, int, 0);
MODULE_PARM_DESC(wdt_time, "Watchdog time in seconds. (default="
......@@ -50,12 +55,33 @@ static unsigned long at91wdt_busy;
/* ......................................................................... */
static int at91rm9200_restart(struct notifier_block *this,
unsigned long mode, void *cmd)
{
/*
* Perform a hardware reset with the use of the Watchdog timer.
*/
regmap_write(regmap_st, AT91_ST_WDMR,
AT91_ST_RSTEN | AT91_ST_EXTEN | 1);
regmap_write(regmap_st, AT91_ST_CR, AT91_ST_WDRST);
mdelay(2000);
pr_emerg("Unable to restart system\n");
return NOTIFY_DONE;
}
static struct notifier_block at91rm9200_restart_nb = {
.notifier_call = at91rm9200_restart,
.priority = 192,
};
/*
* Disable the watchdog.
*/
static inline void at91_wdt_stop(void)
{
at91_st_write(AT91_ST_WDMR, AT91_ST_EXTEN);
regmap_write(regmap_st, AT91_ST_WDMR, AT91_ST_EXTEN);
}
/*
......@@ -63,9 +89,9 @@ static inline void at91_wdt_stop(void)
*/
static inline void at91_wdt_start(void)
{
at91_st_write(AT91_ST_WDMR, AT91_ST_EXTEN | AT91_ST_RSTEN |
regmap_write(regmap_st, AT91_ST_WDMR, AT91_ST_EXTEN | AT91_ST_RSTEN |
(((65536 * wdt_time) >> 8) & AT91_ST_WDV));
at91_st_write(AT91_ST_CR, AT91_ST_WDRST);
regmap_write(regmap_st, AT91_ST_CR, AT91_ST_WDRST);
}
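The shift expression above is easier to read once expanded: (65536 * wdt_time) >> 8 is simply wdt_time * 256. Assuming the ST watchdog counter runs at the 32.768 kHz slow clock divided by 128 (about 256 Hz, which is consistent with the 256-second WDT_MAX_TIME ceiling above), each count is roughly 3.9 ms, so the programmed value corresponds to about wdt_time seconds. A small sketch of the conversion, not part of the patch:

/* Sketch only: seconds -> WDV counts, assuming a ~256 Hz watchdog clock. */
static inline unsigned int example_seconds_to_wdv(unsigned int seconds)
{
	/* (65536 * seconds) >> 8 == seconds * 256; AT91_ST_WDV caps it at 0xffff. */
	return (seconds * 256) & AT91_ST_WDV;
}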
/*
......@@ -73,7 +99,7 @@ static inline void at91_wdt_start(void)
*/
static inline void at91_wdt_reload(void)
{
at91_st_write(AT91_ST_CR, AT91_ST_WDRST);
regmap_write(regmap_st, AT91_ST_CR, AT91_ST_WDRST);
}
/* ......................................................................... */
......@@ -203,16 +229,32 @@ static struct miscdevice at91wdt_miscdev = {
static int at91wdt_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device *parent;
int res;
if (at91wdt_miscdev.parent)
return -EBUSY;
at91wdt_miscdev.parent = &pdev->dev;
parent = dev->parent;
if (!parent) {
dev_err(dev, "no parent\n");
return -ENODEV;
}
regmap_st = syscon_node_to_regmap(parent->of_node);
	if (IS_ERR(regmap_st))
		return PTR_ERR(regmap_st);
res = misc_register(&at91wdt_miscdev);
if (res)
return res;
res = register_restart_handler(&at91rm9200_restart_nb);
if (res)
dev_warn(dev, "failed to register restart handler\n");
pr_info("AT91 Watchdog Timer enabled (%d seconds%s)\n",
wdt_time, nowayout ? ", nowayout" : "");
return 0;
......@@ -220,8 +262,13 @@ static int at91wdt_probe(struct platform_device *pdev)
static int at91wdt_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
int res;
res = unregister_restart_handler(&at91rm9200_restart_nb);
if (res)
dev_warn(dev, "failed to unregister restart handler\n");
res = misc_deregister(&at91wdt_miscdev);
if (!res)
at91wdt_miscdev.parent = NULL;
......@@ -267,7 +314,7 @@ static struct platform_driver at91wdt_driver = {
.suspend = at91wdt_suspend,
.resume = at91wdt_resume,
.driver = {
.name = "at91_wdt",
.name = "atmel_st_watchdog",
.of_match_table = at91_wdt_dt_ids,
},
};
......@@ -296,4 +343,4 @@ module_exit(at91_wdt_exit);
MODULE_AUTHOR("Andrew Victor");
MODULE_DESCRIPTION("Watchdog driver for Atmel AT91RM9200");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:at91_wdt");
MODULE_ALIAS("platform:atmel_st_watchdog");
......@@ -24,16 +24,22 @@
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/arm-cci.h>
struct device_node;
#ifdef CONFIG_ARM_CCI
extern bool cci_probed(void);
#else
static inline bool cci_probed(void) { return false; }
#endif
#ifdef CONFIG_ARM_CCI400_PORT_CTRL
extern int cci_ace_get_port(struct device_node *dn);
extern int cci_disable_port_by_cpu(u64 mpidr);
extern int __cci_control_port_by_device(struct device_node *dn, bool enable);
extern int __cci_control_port_by_index(u32 port, bool enable);
#else
static inline bool cci_probed(void) { return false; }
static inline int cci_ace_get_port(struct device_node *dn)
{
return -ENODEV;
......@@ -49,6 +55,7 @@ static inline int __cci_control_port_by_index(u32 port, bool enable)
return -ENODEV;
}
#endif
#define cci_disable_port_by_device(dev) \
__cci_control_port_by_device(dev, false)
#define cci_enable_port_by_device(dev) \
......
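The convenience macros at the end of the fragment above just wrap __cci_control_port_by_device() with a fixed enable flag. A hypothetical caller that already holds the device_node of a CCI slave port might use them as follows (everything except the CCI API is a placeholder):

#include <linux/arm-cci.h>
#include <linux/errno.h>
#include <linux/of.h>

/* 'port_np' is assumed to point at a CCI slave port node. */
static int example_toggle_cci_port(struct device_node *port_np, bool enable)
{
	if (!cci_probed())
		return -ENODEV;

	return enable ? cci_enable_port_by_device(port_np)
		      : cci_disable_port_by_device(port_np);
}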
/*
* Copyright (C) 2005 Ivan Kokshaysky
* Copyright (C) SAN People
*
* System Timer (ST) - System peripherals registers.
* Based on AT91RM9200 datasheet revision E.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#ifndef _LINUX_MFD_SYSCON_ATMEL_ST_H
#define _LINUX_MFD_SYSCON_ATMEL_ST_H
#include <linux/bitops.h>
#define AT91_ST_CR 0x00 /* Control Register */
#define AT91_ST_WDRST BIT(0) /* Watchdog Timer Restart */
#define AT91_ST_PIMR 0x04 /* Period Interval Mode Register */
#define AT91_ST_PIV 0xffff /* Period Interval Value */
#define AT91_ST_WDMR 0x08 /* Watchdog Mode Register */
#define AT91_ST_WDV 0xffff /* Watchdog Counter Value */
#define AT91_ST_RSTEN BIT(16) /* Reset Enable */
#define AT91_ST_EXTEN BIT(17) /* External Signal Assertion Enable */
#define AT91_ST_RTMR 0x0c /* Real-time Mode Register */
#define AT91_ST_RTPRES		0xffff		/* Real-time Prescaler Value */
#define AT91_ST_SR 0x10 /* Status Register */
#define AT91_ST_PITS BIT(0) /* Period Interval Timer Status */
#define AT91_ST_WDOVF BIT(1) /* Watchdog Overflow */
#define AT91_ST_RTTINC BIT(2) /* Real-time Timer Increment */
#define AT91_ST_ALMS BIT(3) /* Alarm Status */
#define AT91_ST_IER 0x14 /* Interrupt Enable Register */
#define AT91_ST_IDR 0x18 /* Interrupt Disable Register */
#define AT91_ST_IMR 0x1c /* Interrupt Mask Register */
#define AT91_ST_RTAR 0x20 /* Real-time Alarm Register */
#define AT91_ST_ALMV 0xfffff /* Alarm Value */
#define AT91_ST_CRTR 0x24 /* Current Real-time Register */
#define AT91_ST_CRTV 0xfffff /* Current Real-Time Value */
#endif /* _LINUX_MFD_SYSCON_ATMEL_ST_H */
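Because the ST block is exposed as a syscon (see the watchdog conversion above), consumers are expected to reach these registers through the shared regmap rather than map them directly. A minimal sketch, using the same parent-node lookup the watchdog probe uses; the function name is a placeholder:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/atmel-st.h>
#include <linux/regmap.h>

/* Read the ST status register through the parent syscon regmap. */
static int example_read_st_status(struct device *dev, unsigned int *status)
{
	struct regmap *map = syscon_node_to_regmap(dev->parent->of_node);

	if (IS_ERR(map))
		return PTR_ERR(map);

	return regmap_read(map, AT91_ST_SR, status);
}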
......@@ -163,7 +163,8 @@ extern unsigned int gpmc_ticks_to_ns(unsigned int ticks);
extern void gpmc_cs_write_reg(int cs, int idx, u32 val);
extern int gpmc_calc_divider(unsigned int sync_clk);
extern int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t);
extern int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t,
const struct gpmc_settings *s);
extern int gpmc_cs_program_settings(int cs, struct gpmc_settings *p);
extern int gpmc_cs_request(int cs, unsigned long size, unsigned long *base);
extern void gpmc_cs_free(int cs);
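With the new prototype, callers of gpmc_cs_set_timings() pass the chip-select settings alongside the timings instead of relying on previously programmed state. A minimal call-order sketch; field initialisation is elided because the struct layouts are not part of this fragment, and the include path is assumed:

#include <linux/omap-gpmc.h>	/* assumed location of these prototypes */

static int example_setup_cs(int cs, unsigned long size)
{
	struct gpmc_timings t = { };	/* timing values elided */
	struct gpmc_settings s = { };	/* settings values elided */
	unsigned long base;
	int ret;

	ret = gpmc_cs_request(cs, size, &base);
	if (ret)
		return ret;

	ret = gpmc_cs_program_settings(cs, &s);
	if (ret)
		goto err_free;

	/* Settings are now passed alongside the timings. */
	ret = gpmc_cs_set_timings(cs, &t, &s);
	if (ret)
		goto err_free;

	return 0;

err_free:
	gpmc_cs_free(cs);
	return ret;
}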
......
/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
/* Copyright (c) 2010-2014, The Linux Foundation. All rights reserved.
* Copyright (C) 2015 Linaro Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
......@@ -9,18 +10,19 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __MACH_SCM_BOOT_H
#define __MACH_SCM_BOOT_H
#ifndef __QCOM_SCM_H
#define __QCOM_SCM_H
#define SCM_BOOT_ADDR 0x1
#define SCM_FLAG_COLDBOOT_CPU1 0x01
#define SCM_FLAG_COLDBOOT_CPU2 0x08
#define SCM_FLAG_COLDBOOT_CPU3 0x20
#define SCM_FLAG_WARMBOOT_CPU0 0x04
#define SCM_FLAG_WARMBOOT_CPU1 0x02
#define SCM_FLAG_WARMBOOT_CPU2 0x10
#define SCM_FLAG_WARMBOOT_CPU3 0x40
extern int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus);
extern int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus);
int scm_set_boot_addr(u32 addr, int flags);
#define QCOM_SCM_CPU_PWR_DOWN_L2_ON 0x0
#define QCOM_SCM_CPU_PWR_DOWN_L2_OFF 0x1
extern void qcom_scm_cpu_power_down(u32 flags);
#define QCOM_SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF))
extern u32 qcom_scm_get_version(void);
#endif
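A hedged sketch of how SMP bring-up code might consume the new interface; the entry symbol, function name and include path are assumptions, and only the qcom_scm_* calls come from the header above:

#include <linux/cpumask.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <linux/qcom_scm.h>	/* assumed include path for this header */

extern void example_secondary_startup(void);	/* placeholder entry point */

static int example_scm_prepare_cpus(void)
{
	u32 version = qcom_scm_get_version();

	pr_info("qcom-scm version %u.%u\n", version >> 16, version & 0xff);

	/* Point every possible CPU at the cold-boot entry. */
	return qcom_scm_set_cold_boot_addr((void *)example_secondary_startup,
					   cpu_possible_mask);
}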