Commit cbda94e0 authored by Linus Torvalds

Merge tag 'drivers-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver changes from Arnd Bergmann:
 "These changes are mostly for ARM specific device drivers that either
  don't have an upstream maintainer, or that had the maintainer ask us
  to pick up the changes to avoid conflicts.

  A large chunk of this are clock drivers (bcm281xx, exynos, versatile,
  shmobile), aside from that, reset controllers for STi as well as a
  large rework of the Marvell Orion/EBU watchdog driver are notable"

* tag 'drivers-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (99 commits)
  Revert "dts: socfpga: Add DTS entry for adding the stmmac glue layer for stmmac."
  Revert "net: stmmac: Add SOCFPGA glue driver"
  ARM: shmobile: r8a7791: Fix SCIFA3-5 clocks
  ARM: STi: Add reset controller support to mach-sti Kconfig
  drivers: reset: stih416: add softreset controller
  drivers: reset: stih415: add softreset controller
  drivers: reset: Reset controller driver for STiH416
  drivers: reset: Reset controller driver for STiH415
  drivers: reset: STi SoC system configuration reset controller support
  dts: socfpga: Add sysmgr node so the gmac can use to reference
  dts: socfpga: Add support for SD/MMC on the SOCFPGA platform
  reset: Add optional resets and stubs
  ARM: shmobile: r7s72100: fix bus clock calculation
  Power: Reset: Generalize qnap-poweroff to work on Synology devices.
  dts: socfpga: Update clock entry to support multiple parents
  ARM: socfpga: Update socfpga_defconfig
  dts: socfpga: Add DTS entry for adding the stmmac glue layer for stmmac.
  net: stmmac: Add SOCFPGA glue driver
  watchdog: orion_wdt: Use %pa to print 'phys_addr_t'
  drivers: cci: Export CCI PMU revision
  ...
parents f83ccb93 f1d7d8c8
......@@ -50,6 +50,11 @@ Optional
regions, used when the GIC doesn't have banked registers. The offset is
cpu-offset * cpu-nr.
- arm,routable-irqs : Total number of gic irq inputs which are not directly
connected from the peripherals, but are routed dynamically
by a crossbar/multiplexer preceding the GIC. The GIC irq
input line is assigned dynamically when the corresponding
peripheral's crossbar line is mapped.
Example:
intc: interrupt-controller@fff11000 {
......@@ -57,6 +62,7 @@ Example:
#interrupt-cells = <3>;
#address-cells = <1>;
interrupt-controller;
arm,routable-irqs = <160>;
reg = <0xfff11000 0x1000>,
<0xfff10100 0x100>;
};
......
Some SoCs have a large number of interrupt requests to service
the needs of their many peripherals and subsystems. Not all of the
interrupt lines from the subsystems are needed at the same
time, so they have to be muxed to the irq-controller appropriately.
In such cases the interrupt controller is preceded by a CROSSBAR
that provides flexibility in muxing the device requests to the controller
inputs.
Required properties:
- compatible : Should be "ti,irq-crossbar"
- reg: Base address and the size of the crossbar registers.
- ti,max-irqs: Total number of irqs available at the interrupt controller.
- ti,reg-size: Size of an individual register in bytes. Every individual
register is assumed to be of the same size. Valid sizes are 1, 2, 4.
- ti,irqs-reserved: List of the reserved irq lines that are not muxed using
the crossbar. These interrupt lines are reserved in the SoC,
so the crossbar driver should not consider them as free
lines.
Examples:
crossbar_mpu: @4a020000 {
compatible = "ti,irq-crossbar";
reg = <0x4a002a48 0x130>;
ti,max-irqs = <160>;
ti,reg-size = <2>;
ti,irqs-reserved = <0 1 2 3 5 6 131 132 139 140>;
};
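The binding above only declares the crossbar geometry; at runtime the crossbar driver routes a peripheral request onto a free GIC input by writing the request number into that input's mux register. The following is a minimal illustrative sketch, not the kernel's irq-crossbar implementation: the names ti_crossbar and crossbar_map are invented here, and the code assumes only what the binding states, i.e. one mux register per GIC input with a width given by ti,reg-size.

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical per-controller state mirroring the binding properties. */
struct ti_crossbar {
	void __iomem *base;   /* from "reg" */
	u32 reg_size;         /* from "ti,reg-size": 1, 2 or 4 */
	u32 max_irqs;         /* from "ti,max-irqs" */
};

/*
 * Route peripheral request 'cb_input' to GIC input 'gic_irq' by writing
 * the request number into that input's mux register; the register stride
 * follows ti,reg-size.
 */
static void crossbar_map(struct ti_crossbar *cb, u32 gic_irq, u32 cb_input)
{
	void __iomem *reg = cb->base + gic_irq * cb->reg_size;

	switch (cb->reg_size) {
	case 1:
		writeb(cb_input, reg);
		break;
	case 2:
		writew(cb_input, reg);
		break;
	default:
		writel(cb_input, reg);
		break;
	}
}

Inputs listed in ti,irqs-reserved would simply never be handed out by such a mapping routine.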
Clock bindings for ARM Integrator Core Module clocks
Auxiliary Oscillator Clock
This is a configurable clock fed from a 24 MHz crystal,
used for generating e.g. video clocks. It is located on the
core module and there is only one of these.
This clock node *must* be a subnode of the core module, since
it obtains the base address for its address range from its
parent node.
Required properties:
- compatible: must be "arm,integrator-cm-auxosc"
- #clock-cells: must be <0>
Optional properties:
- clocks: parent clock(s)
Example:
core-module@10000000 {
xtal24mhz: xtal24mhz@24M {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <24000000>;
};
auxosc: cm_aux_osc@25M {
#clock-cells = <0>;
compatible = "arm,integrator-cm-auxosc";
clocks = <&xtal24mhz>;
};
};
* Altera SOCFPGA specific extensions to the Synopsys Designware Mobile
Storage Host Controller
The Synopsys designware mobile storage host controller is used to interface
a SoC with storage media such as eMMC or SD/MMC cards. This file documents
differences between the core Synopsys dw mshc controller properties described
by synopsys-dw-mshc.txt and the properties used by the Altera SOCFPGA specific
extensions to the Synopsys Designware Mobile Storage Host Controller.
Required Properties:
* compatible: should be
- "altr,socfpga-dw-mshc": for Altera's SOCFPGA platform
Example:
mmc: dwmmc0@ff704000 {
compatible = "altr,socfpga-dw-mshc";
reg = <0xff704000 0x1000>;
interrupts = <0 129 4>;
#address-cells = <1>;
#size-cells = <0>;
};
......@@ -6,8 +6,11 @@ Orion5x SoCs. Sending the character 'A', at 19200 baud, tells the
microcontroller to turn the power off. This driver adds a handler to
pm_power_off which is called to turn the power off.
Synology NAS devices use a similar scheme, but a different baud rate,
9600, and a different character, '1'.
Required Properties:
- compatible: Should be "qnap,power-off"
- compatible: Should be "qnap,power-off" or "synology,power-off"
- reg: Address and length of the register set for UART1
- clocks: tclk clock
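Since the compatible string now selects both the baud rate and the power-off character (19200/'A' for QNAP, 9600/'1' for Synology), a driver can keep those parameters in an of_device_id match table. This is only a sketch of that idea with made-up names (poweroff_cfg, qnap_cfg, synology_cfg), not the actual qnap-poweroff code:

#include <linux/types.h>
#include <linux/mod_devicetable.h>

/* Per-compatible parameters taken from the binding description. */
struct poweroff_cfg {
	u32 baud;	/* UART1 baud rate */
	char cmd;	/* character that tells the microcontroller to cut power */
};

static const struct poweroff_cfg qnap_cfg     = { .baud = 19200, .cmd = 'A' };
static const struct poweroff_cfg synology_cfg = { .baud = 9600,  .cmd = '1' };

static const struct of_device_id poweroff_of_match[] = {
	{ .compatible = "qnap,power-off",     .data = &qnap_cfg },
	{ .compatible = "synology,power-off", .data = &synology_cfg },
	{ /* sentinel */ },
};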
......@@ -3,17 +3,24 @@
Required Properties:
- Compatibility : "marvell,orion-wdt"
- reg : Address of the timer registers
"marvell,armada-370-wdt"
"marvell,armada-xp-wdt"
- reg : Should contain two entries: first one with the
timer control address, second one with the
rstout enable address.
Optional properties:
- interrupts : Contains the IRQ for watchdog expiration
- timeout-sec : Contains the watchdog timeout in seconds
Example:
wdt@20300 {
compatible = "marvell,orion-wdt";
reg = <0x20300 0x28>;
reg = <0x20300 0x28>, <0x20108 0x4>;
interrupts = <3>;
timeout-sec = <10>;
status = "okay";
};
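With the two-entry "reg" property above, a driver maps the timer-control window (index 0) and the RSTOUT-enable window (index 1) as separate resources. The helper below is a hedged sketch of that lookup, not the actual orion_wdt rework; wdt_map_regs and its parameters are invented for illustration:

#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/err.h>

static int wdt_map_regs(struct platform_device *pdev,
			void __iomem **ctrl, void __iomem **rstout)
{
	struct resource *res;

	/* First "reg" entry: timer control registers */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	*ctrl = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(*ctrl))
		return PTR_ERR(*ctrl);

	/* Second "reg" entry: RSTOUT enable register */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
	*rstout = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(*rstout))
		return PTR_ERR(*rstout);

	return 0;
}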
......@@ -18,6 +18,28 @@ chosen {
bootargs = "root=/dev/ram0 console=ttyAM0,38400n8 earlyprintk";
};
/* 24 MHz crystal on the core module */
xtal24mhz: xtal24mhz@24M {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <24000000>;
};
pclk: pclk@0 {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
clock-div = <1>;
clock-mult = <1>;
clocks = <&xtal24mhz>;
};
/* The UART clock is 14.74 MHz divided by an ICS525 */
uartclk: uartclk@14.74M {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <14745600>;
};
syscon {
compatible = "arm,integrator-ap-syscon";
reg = <0x11000000 0x100>;
......@@ -28,14 +50,17 @@ syscon {
timer0: timer@13000000 {
compatible = "arm,integrator-timer";
clocks = <&xtal24mhz>;
};
timer1: timer@13000100 {
compatible = "arm,integrator-timer";
clocks = <&xtal24mhz>;
};
timer2: timer@13000200 {
compatible = "arm,integrator-timer";
clocks = <&xtal24mhz>;
};
pic: pic@14000000 {
......@@ -92,26 +117,36 @@ fpga {
rtc: rtc@15000000 {
compatible = "arm,pl030", "arm,primecell";
arm,primecell-periphid = <0x00041030>;
clocks = <&pclk>;
clock-names = "apb_pclk";
};
uart0: uart@16000000 {
compatible = "arm,pl010", "arm,primecell";
arm,primecell-periphid = <0x00041010>;
clocks = <&uartclk>, <&pclk>;
clock-names = "uartclk", "apb_pclk";
};
uart1: uart@17000000 {
compatible = "arm,pl010", "arm,primecell";
arm,primecell-periphid = <0x00041010>;
clocks = <&uartclk>, <&pclk>;
clock-names = "uartclk", "apb_pclk";
};
kmi0: kmi@18000000 {
compatible = "arm,pl050", "arm,primecell";
arm,primecell-periphid = <0x00041050>;
clocks = <&xtal24mhz>, <&pclk>;
clock-names = "KMIREFCLK", "apb_pclk";
};
kmi1: kmi@19000000 {
compatible = "arm,pl050", "arm,primecell";
arm,primecell-periphid = <0x00041050>;
clocks = <&xtal24mhz>, <&pclk>;
clock-names = "KMIREFCLK", "apb_pclk";
};
};
};
......@@ -13,25 +13,107 @@ chosen {
bootargs = "root=/dev/ram0 console=ttyAMA0,38400n8 earlyprintk";
};
/*
* The Integrator/CP overall clocking architecture can be found in
* ARM DUI 0184B page 7-28 "Integrator/CP922T system clocks" which
* appear to illustrate the layout used in most configurations.
*/
/* The codec crystal operates at 24.576 MHz */
xtal_codec: xtal24.576@24.576M {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <24576000>;
};
/* The crystal is divided by 2 by the codec for the AACI bit clock */
aaci_bitclk: aaci_bitclk@12.288M {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
clock-div = <2>;
clock-mult = <1>;
clocks = <&xtal_codec>;
};
/* This is a 25 MHz crystal on the base board */
xtal25mhz: xtal25mhz@25M {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <25000000>;
};
/* The UART clock is 14.74 MHz divided from 25MHz by an ICS525 */
uartclk: uartclk@14.74M {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <14745600>;
};
/* Actually sysclk I think */
pclk: pclk@0 {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <0>;
};
core-module@10000000 {
/* 24 MHz crystal on the core module */
xtal24mhz: xtal24mhz@24M {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <24000000>;
};
/*
* External oscillator on the core module, usually used
* to drive video circuitry. Driven from the 24MHz clock.
*/
auxosc: cm_aux_osc@25M {
#clock-cells = <0>;
compatible = "arm,integrator-cm-auxosc";
clocks = <&xtal24mhz>;
};
/* The KMI clock is the 24 MHz oscillator divided to 8MHz */
kmiclk: kmiclk@1M {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
clock-div = <3>;
clock-mult = <1>;
clocks = <&xtal24mhz>;
};
/* The timer clock is the 24 MHz oscillator divided to 1MHz */
timclk: timclk@1M {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
clock-div = <24>;
clock-mult = <1>;
clocks = <&xtal24mhz>;
};
};
syscon {
compatible = "arm,integrator-cp-syscon";
reg = <0xcb000000 0x100>;
};
timer0: timer@13000000 {
/* TIMER0 runs @ 25MHz */
/* TIMER0 runs directly on the 25 MHz crystal */
compatible = "arm,integrator-cp-timer";
status = "disabled";
clocks = <&xtal25mhz>;
};
timer1: timer@13000100 {
/* TIMER1 runs @ 1MHz */
compatible = "arm,integrator-cp-timer";
clocks = <&timclk>;
};
timer2: timer@13000200 {
/* TIMER2 runs @ 1MHz */
compatible = "arm,integrator-cp-timer";
clocks = <&timclk>;
};
pic: pic@14000000 {
......@@ -74,22 +156,32 @@ fpga {
*/
rtc@15000000 {
compatible = "arm,pl031", "arm,primecell";
clocks = <&pclk>;
clock-names = "apb_pclk";
};
uart@16000000 {
compatible = "arm,pl011", "arm,primecell";
clocks = <&uartclk>, <&pclk>;
clock-names = "uartclk", "apb_pclk";
};
uart@17000000 {
compatible = "arm,pl011", "arm,primecell";
clocks = <&uartclk>, <&pclk>;
clock-names = "uartclk", "apb_pclk";
};
kmi@18000000 {
compatible = "arm,pl050", "arm,primecell";
clocks = <&kmiclk>, <&pclk>;
clock-names = "KMIREFCLK", "apb_pclk";
};
kmi@19000000 {
compatible = "arm,pl050", "arm,primecell";
clocks = <&kmiclk>, <&pclk>;
clock-names = "KMIREFCLK", "apb_pclk";
};
/*
......@@ -100,18 +192,24 @@ mmc@1c000000 {
reg = <0x1c000000 0x1000>;
interrupts = <23 24>;
max-frequency = <515633>;
clocks = <&uartclk>, <&pclk>;
clock-names = "mclk", "apb_pclk";
};
aaci@1d000000 {
compatible = "arm,pl041", "arm,primecell";
reg = <0x1d000000 0x1000>;
interrupts = <25>;
clocks = <&pclk>;
clock-names = "apb_pclk";
};
clcd@c0000000 {
compatible = "arm,pl110", "arm,primecell";
reg = <0xC0000000 0x1000>;
interrupts = <22>;
clocks = <&auxosc>, <&pclk>;
clock-names = "clcd", "apb_pclk";
};
};
};
......@@ -421,6 +421,29 @@ extal_clk: extal_clk {
clock-output-names = "extal";
};
/*
* The external audio clocks are configured as 0 Hz fixed frequency clocks by
* default. Boards that provide audio clocks should override them.
*/
audio_clk_a: audio_clk_a {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
clock-output-names = "audio_clk_a";
};
audio_clk_b: audio_clk_b {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
clock-output-names = "audio_clk_b";
};
audio_clk_c: audio_clk_c {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
clock-output-names = "audio_clk_c";
};
/* Special CPG clocks */
cpg_clocks: cpg_clocks@e6150000 {
compatible = "renesas,r8a7790-cpg-clocks",
......
......@@ -92,7 +92,12 @@ clocks {
#address-cells = <1>;
#size-cells = <0>;
osc: osc1 {
osc1: osc1 {
#clock-cells = <0>;
compatible = "fixed-clock";
};
osc2: osc2 {
#clock-cells = <0>;
compatible = "fixed-clock";
};
......@@ -100,7 +105,11 @@ osc: osc1 {
f2s_periph_ref_clk: f2s_periph_ref_clk {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <10000000>;
};
f2s_sdram_ref_clk: f2s_sdram_ref_clk {
#clock-cells = <0>;
compatible = "fixed-clock";
};
main_pll: main_pll {
......@@ -108,7 +117,7 @@ main_pll: main_pll {
#size-cells = <0>;
#clock-cells = <0>;
compatible = "altr,socfpga-pll-clock";
clocks = <&osc>;
clocks = <&osc1>;
reg = <0x40>;
mpuclk: mpuclk {
......@@ -162,7 +171,7 @@ periph_pll: periph_pll {
#size-cells = <0>;
#clock-cells = <0>;
compatible = "altr,socfpga-pll-clock";
clocks = <&osc>;
clocks = <&osc1>, <&osc2>, <&f2s_periph_ref_clk>;
reg = <0x80>;
emac0_clk: emac0_clk {
......@@ -213,7 +222,7 @@ sdram_pll: sdram_pll {
#size-cells = <0>;
#clock-cells = <0>;
compatible = "altr,socfpga-pll-clock";
clocks = <&osc>;
clocks = <&osc1>, <&osc2>, <&f2s_sdram_ref_clk>;
reg = <0xC0>;
ddr_dqs_clk: ddr_dqs_clk {
......@@ -475,6 +484,17 @@ L2: l2-cache@fffef000 {
arm,data-latency = <2 1 1>;
};
mmc: dwmmc0@ff704000 {
compatible = "altr,socfpga-dw-mshc";
reg = <0xff704000 0x1000>;
interrupts = <0 139 4>;
fifo-depth = <0x400>;
#address-cells = <1>;
#size-cells = <0>;
clocks = <&l4_mp_clk>, <&sdmmc_clk>;
clock-names = "biu", "ciu";
};
/* Local timer */
timer@fffec600 {
compatible = "arm,cortex-a9-twd-timer";
......@@ -528,8 +548,8 @@ rstmgr@ffd05000 {
reg = <0xffd05000 0x1000>;
};
sysmgr@ffd08000 {
compatible = "altr,sys-mgr";
sysmgr: sysmgr@ffd08000 {
compatible = "altr,sys-mgr", "syscon";
reg = <0xffd08000 0x4000>;
};
};
......
......@@ -27,6 +27,17 @@ osc1 {
};
};
dwmmc0@ff704000 {
num-slots = <1>;
supports-highspeed;
broken-cd;
slot@0 {
reg = <0>;
bus-width = <4>;
};
};
serial0@ffc02000 {
clock-frequency = <100000000>;
};
......
......@@ -28,6 +28,17 @@ osc1 {
};
};
dwmmc0@ff704000 {
num-slots = <1>;
supports-highspeed;
broken-cd;
slot@0 {
reg = <0>;
bus-width = <4>;
};
};
ethernet@ff702000 {
phy-mode = "rgmii";
phy-addr = <0xffffffff>; /* probe for phy addr */
......
......@@ -41,6 +41,17 @@ osc1 {
};
};
dwmmc0@ff704000 {
num-slots = <1>;
supports-highspeed;
broken-cd;
slot@0 {
reg = <0>;
bus-width = <4>;
};
};
ethernet@ff700000 {
phy-mode = "gmii";
status = "okay";
......
......@@ -271,10 +271,14 @@ static void __init integrator_cp_of_init(struct device_node *np)
void __iomem *base;
int irq;
const char *name = of_get_property(np, "compatible", NULL);
struct clk *clk;
base = of_iomap(np, 0);
if (WARN_ON(!base))
return;
clk = of_clk_get(np, 0);
if (WARN_ON(IS_ERR(clk)))
return;
/* Ensure timer is disabled */
writel(0, base + TIMER_CTRL);
......@@ -283,13 +287,13 @@ static void __init integrator_cp_of_init(struct device_node *np)
goto err;
if (!init_count)
sp804_clocksource_init(base, name);
__sp804_clocksource_and_sched_clock_init(base, name, clk, 0);
else {
irq = irq_of_parse_and_map(np, 0);
if (irq <= 0)
goto err;
sp804_clockevents_init(base, irq, name);
__sp804_clockevents_init(base, irq, clk, name);
}
init_count++;
......
......@@ -52,6 +52,7 @@ CONFIG_BLK_DEV_SD=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_NETDEVICES=y
CONFIG_STMMAC_ETH=y
CONFIG_MICREL_PHY=y
# CONFIG_STMMAC_PHY_ID_ZERO_WORKAROUND is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_SERIO_SERPORT is not set
......@@ -66,6 +67,9 @@ CONFIG_SERIAL_8250_DW=y
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT3_FS=y
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
# CONFIG_DNOTIFY is not set
# CONFIG_INOTIFY_USER is not set
CONFIG_VFAT_FS=y
......@@ -82,3 +86,5 @@ CONFIG_DEBUG_INFO=y
CONFIG_ENABLE_DEFAULT_TRACERS=y
CONFIG_DEBUG_USER=y
CONFIG_XZ_DEC=y
CONFIG_MMC=y
CONFIG_MMC_DW=y
......@@ -16,6 +16,7 @@
#include <linux/time.h>
#include <linux/platform_data/mtd-davinci-aemif.h>
#include <linux/platform_data/mtd-davinci.h>
/* Timing value configuration */
......@@ -43,6 +44,17 @@
WSTROBE(WSTROBE_MAX) | \
WSETUP(WSETUP_MAX))
static inline unsigned int davinci_aemif_readl(void __iomem *base, int offset)
{
return readl_relaxed(base + offset);
}
static inline void davinci_aemif_writel(void __iomem *base,
int offset, unsigned long value)
{
writel_relaxed(value, base + offset);
}
/*
* aemif_calc_rate - calculate timing data.
* @wanted: The cycle time needed in nanoseconds.
......@@ -76,6 +88,7 @@ static int aemif_calc_rate(int wanted, unsigned long clk, int max)
* @t: timing values to be programmed
* @base: The virtual base address of the AEMIF interface
* @cs: chip-select to program the timing values for
* @clkrate: the AEMIF clkrate
*
* This function programs the given timing values (in real clock) into the
* AEMIF registers taking the AEMIF clock into account.
......@@ -86,24 +99,17 @@ static int aemif_calc_rate(int wanted, unsigned long clk, int max)
*
* Returns 0 on success, else negative errno.
*/
int davinci_aemif_setup_timing(struct davinci_aemif_timing *t,
void __iomem *base, unsigned cs)
static int davinci_aemif_setup_timing(struct davinci_aemif_timing *t,
void __iomem *base, unsigned cs,
unsigned long clkrate)
{
unsigned set, val;
int ta, rhold, rstrobe, rsetup, whold, wstrobe, wsetup;
unsigned offset = A1CR_OFFSET + cs * 4;
struct clk *aemif_clk;
unsigned long clkrate;
if (!t)
return 0; /* Nothing to do */
aemif_clk = clk_get(NULL, "aemif");
if (IS_ERR(aemif_clk))
return PTR_ERR(aemif_clk);
clkrate = clk_get_rate(aemif_clk);
clkrate /= 1000; /* turn clock into kHz for ease of use */
ta = aemif_calc_rate(t->ta, clkrate, TA_MAX);
......@@ -130,4 +136,83 @@ int davinci_aemif_setup_timing(struct davinci_aemif_timing *t,
return 0;
}
EXPORT_SYMBOL(davinci_aemif_setup_timing);
/**
* davinci_aemif_setup - set up the AEMIF interface from davinci_nand_pdata
* @pdev: platform device to set up the AEMIF settings for
*
* This function does not use any locking while programming the AEMIF
* because it is expected that there is only one user of a given
* chip-select.
*
* Returns 0 on success, else negative errno.
*/
int davinci_aemif_setup(struct platform_device *pdev)
{
struct davinci_nand_pdata *pdata = dev_get_platdata(&pdev->dev);
uint32_t val;
unsigned long clkrate;
struct resource *res;
void __iomem *base;
struct clk *clk;
int ret = 0;
clk = clk_get(&pdev->dev, "aemif");
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
dev_dbg(&pdev->dev, "unable to get AEMIF clock, err %d\n", ret);
return ret;
}
ret = clk_prepare_enable(clk);
if (ret < 0) {
dev_dbg(&pdev->dev, "unable to enable AEMIF clock, err %d\n",
ret);
goto err_put;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res) {
dev_err(&pdev->dev, "cannot get IORESOURCE_MEM\n");
ret = -ENOMEM;
goto err;
}
base = ioremap(res->start, resource_size(res));
if (!base) {
dev_err(&pdev->dev, "ioremap failed for resource %pR\n", res);
ret = -ENOMEM;
goto err;
}
/*
* Setup Async configuration register in case we did not boot
* from NAND and so bootloader did not bother to set it up.
*/
val = davinci_aemif_readl(base, A1CR_OFFSET + pdev->id * 4);
/*
* Extended Wait is not valid and Select Strobe mode is not
* used
*/
val &= ~(ACR_ASIZE_MASK | ACR_EW_MASK | ACR_SS_MASK);
if (pdata->options & NAND_BUSWIDTH_16)
val |= 0x1;
davinci_aemif_writel(base, A1CR_OFFSET + pdev->id * 4, val);
clkrate = clk_get_rate(clk);
if (pdata->timing)
ret = davinci_aemif_setup_timing(pdata->timing, base, pdev->id,
clkrate);
if (ret < 0)
dev_dbg(&pdev->dev, "NAND timing values setup fail\n");
iounmap(base);
err:
clk_disable_unprepare(clk);
err_put:
clk_put(clk);
return ret;
}
......@@ -419,6 +419,9 @@ static inline void da830_evm_init_nand(int mux_mode)
if (ret)
pr_warning("da830_evm_init: NAND device not registered.\n");
if (davinci_aemif_setup(&da830_evm_nand_device))
pr_warn("%s: Cannot configure AEMIF.\n", __func__);
gpio_direction_output(mux_mode, 1);
}
#else
......
......@@ -358,6 +358,9 @@ static inline void da850_evm_setup_nor_nand(void)
platform_add_devices(da850_evm_devices,
ARRAY_SIZE(da850_evm_devices));
if (davinci_aemif_setup(&da850_evm_nandflash_device))
pr_warn("%s: Cannot configure AEMIF.\n", __func__);
}
}
......
......@@ -778,6 +778,11 @@ static __init void davinci_evm_init(void)
/* only one device will be jumpered and detected */
if (HAS_NAND) {
platform_device_register(&davinci_evm_nandflash_device);
if (davinci_aemif_setup(&davinci_evm_nandflash_device))
pr_warn("%s: Cannot configure AEMIF.\n",
__func__);
evm_leds[7].default_trigger = "nand-disk";
if (HAS_NOR)
pr_warning("WARNING: both NAND and NOR flash "
......
......@@ -805,6 +805,9 @@ static __init void evm_init(void)
platform_device_register(&davinci_nand_device);
if (davinci_aemif_setup(&davinci_nand_device))
pr_warn("%s: Cannot configure AEMIF.\n", __func__);
dm646x_init_edma(dm646x_edma_rsv);
if (HAS_ATA)
......
......@@ -27,6 +27,7 @@
#include <mach/cp_intc.h>
#include <mach/da8xx.h>
#include <linux/platform_data/mtd-davinci.h>
#include <linux/platform_data/mtd-davinci-aemif.h>
#include <mach/mux.h>
#include <linux/platform_data/spi-davinci.h>
......@@ -432,6 +433,9 @@ static void __init mityomapl138_setup_nand(void)
{
platform_add_devices(mityomapl138_devices,
ARRAY_SIZE(mityomapl138_devices));
if (davinci_aemif_setup(&mityomapl138_nandflash_device))
pr_warn("%s: Cannot configure AEMIF.\n", __func__);
}
static const short mityomap_mii_pins[] = {
......
......@@ -21,6 +21,7 @@
#define CPU_CTRL_PCIE1_LINK 0x00000008
#define RSTOUTn_MASK (BRIDGE_VIRT_BASE + 0x0108)
#define RSTOUTn_MASK_PHYS (BRIDGE_PHYS_BASE + 0x0108)
#define SOFT_RESET_OUT_EN 0x00000004
#define SYSTEM_SOFT_RESET (BRIDGE_VIRT_BASE + 0x010c)
......
......@@ -35,56 +35,6 @@
#include "common.h"
#include "regs-pmu.h"
#define EXYNOS4_EPLL_LOCK (S5P_VA_CMU + 0x0C010)
#define EXYNOS4_VPLL_LOCK (S5P_VA_CMU + 0x0C020)
#define EXYNOS4_EPLL_CON0 (S5P_VA_CMU + 0x0C110)
#define EXYNOS4_EPLL_CON1 (S5P_VA_CMU + 0x0C114)
#define EXYNOS4_VPLL_CON0 (S5P_VA_CMU + 0x0C120)
#define EXYNOS4_VPLL_CON1 (S5P_VA_CMU + 0x0C124)
#define EXYNOS4_CLKSRC_MASK_TOP (S5P_VA_CMU + 0x0C310)
#define EXYNOS4_CLKSRC_MASK_CAM (S5P_VA_CMU + 0x0C320)
#define EXYNOS4_CLKSRC_MASK_TV (S5P_VA_CMU + 0x0C324)
#define EXYNOS4_CLKSRC_MASK_LCD0 (S5P_VA_CMU + 0x0C334)
#define EXYNOS4_CLKSRC_MASK_MAUDIO (S5P_VA_CMU + 0x0C33C)
#define EXYNOS4_CLKSRC_MASK_FSYS (S5P_VA_CMU + 0x0C340)
#define EXYNOS4_CLKSRC_MASK_PERIL0 (S5P_VA_CMU + 0x0C350)
#define EXYNOS4_CLKSRC_MASK_PERIL1 (S5P_VA_CMU + 0x0C354)
#define EXYNOS4_CLKSRC_MASK_DMC (S5P_VA_CMU + 0x10300)
#define EXYNOS4_EPLLCON0_LOCKED_SHIFT (29)
#define EXYNOS4_VPLLCON0_LOCKED_SHIFT (29)
#define EXYNOS4210_CLKSRC_MASK_LCD1 (S5P_VA_CMU + 0x0C338)
static const struct sleep_save exynos4_set_clksrc[] = {
{ .reg = EXYNOS4_CLKSRC_MASK_TOP , .val = 0x00000001, },
{ .reg = EXYNOS4_CLKSRC_MASK_CAM , .val = 0x11111111, },
{ .reg = EXYNOS4_CLKSRC_MASK_TV , .val = 0x00000111, },
{ .reg = EXYNOS4_CLKSRC_MASK_LCD0 , .val = 0x00001111, },
{ .reg = EXYNOS4_CLKSRC_MASK_MAUDIO , .val = 0x00000001, },
{ .reg = EXYNOS4_CLKSRC_MASK_FSYS , .val = 0x01011111, },
{ .reg = EXYNOS4_CLKSRC_MASK_PERIL0 , .val = 0x01111111, },
{ .reg = EXYNOS4_CLKSRC_MASK_PERIL1 , .val = 0x01110111, },
{ .reg = EXYNOS4_CLKSRC_MASK_DMC , .val = 0x00010000, },
};
static const struct sleep_save exynos4210_set_clksrc[] = {
{ .reg = EXYNOS4210_CLKSRC_MASK_LCD1 , .val = 0x00001111, },
};
static struct sleep_save exynos4_epll_save[] = {
SAVE_ITEM(EXYNOS4_EPLL_CON0),
SAVE_ITEM(EXYNOS4_EPLL_CON1),
};
static struct sleep_save exynos4_vpll_save[] = {
SAVE_ITEM(EXYNOS4_VPLL_CON0),
SAVE_ITEM(EXYNOS4_VPLL_CON1),
};
static struct sleep_save exynos5_sys_save[] = {
SAVE_ITEM(EXYNOS5_SYS_I2C_CFG),
};
......@@ -124,10 +74,7 @@ static void exynos_pm_prepare(void)
s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save));
if (!soc_is_exynos5250()) {
s3c_pm_do_save(exynos4_epll_save, ARRAY_SIZE(exynos4_epll_save));
s3c_pm_do_save(exynos4_vpll_save, ARRAY_SIZE(exynos4_vpll_save));
} else {
if (soc_is_exynos5250()) {
s3c_pm_do_save(exynos5_sys_save, ARRAY_SIZE(exynos5_sys_save));
/* Disable USE_RETENTION of JPEG_MEM_OPTION */
tmp = __raw_readl(EXYNOS5_JPEG_MEM_OPTION);
......@@ -143,15 +90,6 @@ static void exynos_pm_prepare(void)
/* ensure at least INFORM0 has the resume address */
__raw_writel(virt_to_phys(s3c_cpu_resume), S5P_INFORM0);
/* Before enter central sequence mode, clock src register have to set */
if (!soc_is_exynos5250())
s3c_pm_do_restore_core(exynos4_set_clksrc, ARRAY_SIZE(exynos4_set_clksrc));
if (soc_is_exynos4210())
s3c_pm_do_restore_core(exynos4210_set_clksrc, ARRAY_SIZE(exynos4210_set_clksrc));
}
static int exynos_pm_add(struct device *dev, struct subsys_interface *sif)
......@@ -162,73 +100,6 @@ static int exynos_pm_add(struct device *dev, struct subsys_interface *sif)
return 0;
}
static unsigned long pll_base_rate;
static void exynos4_restore_pll(void)
{
unsigned long pll_con, locktime, lockcnt;
unsigned long pll_in_rate;
unsigned int p_div, epll_wait = 0, vpll_wait = 0;
if (pll_base_rate == 0)
return;
pll_in_rate = pll_base_rate;
/* EPLL */
pll_con = exynos4_epll_save[0].val;
if (pll_con & (1 << 31)) {
pll_con &= (PLL46XX_PDIV_MASK << PLL46XX_PDIV_SHIFT);
p_div = (pll_con >> PLL46XX_PDIV_SHIFT);
pll_in_rate /= 1000000;
locktime = (3000 / pll_in_rate) * p_div;
lockcnt = locktime * 10000 / (10000 / pll_in_rate);
__raw_writel(lockcnt, EXYNOS4_EPLL_LOCK);
s3c_pm_do_restore_core(exynos4_epll_save,
ARRAY_SIZE(exynos4_epll_save));
epll_wait = 1;
}
pll_in_rate = pll_base_rate;
/* VPLL */
pll_con = exynos4_vpll_save[0].val;
if (pll_con & (1 << 31)) {
pll_in_rate /= 1000000;
/* 750us */
locktime = 750;
lockcnt = locktime * 10000 / (10000 / pll_in_rate);
__raw_writel(lockcnt, EXYNOS4_VPLL_LOCK);
s3c_pm_do_restore_core(exynos4_vpll_save,
ARRAY_SIZE(exynos4_vpll_save));
vpll_wait = 1;
}
/* Wait PLL locking */
do {
if (epll_wait) {
pll_con = __raw_readl(EXYNOS4_EPLL_CON0);
if (pll_con & (1 << EXYNOS4_EPLLCON0_LOCKED_SHIFT))
epll_wait = 0;
}
if (vpll_wait) {
pll_con = __raw_readl(EXYNOS4_VPLL_CON0);
if (pll_con & (1 << EXYNOS4_VPLLCON0_LOCKED_SHIFT))
vpll_wait = 0;
}
} while (epll_wait || vpll_wait);
}
static struct subsys_interface exynos_pm_interface = {
.name = "exynos_pm",
.subsys = &exynos_subsys,
......@@ -237,7 +108,6 @@ static struct subsys_interface exynos_pm_interface = {
static __init int exynos_pm_drvinit(void)
{
struct clk *pll_base;
unsigned int tmp;
if (soc_is_exynos5440())
......@@ -251,15 +121,6 @@ static __init int exynos_pm_drvinit(void)
tmp |= ((0xFF << 8) | (0x1F << 1));
__raw_writel(tmp, S5P_WAKEUP_MASK);
if (!soc_is_exynos5250()) {
pll_base = clk_get(NULL, "xtal");
if (!IS_ERR(pll_base)) {
pll_base_rate = clk_get_rate(pll_base);
clk_put(pll_base);
}
}
return subsys_interface_register(&exynos_pm_interface);
}
arch_initcall(exynos_pm_drvinit);
......@@ -343,13 +204,8 @@ static void exynos_pm_resume(void)
s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save));
if (!soc_is_exynos5250()) {
exynos4_restore_pll();
#ifdef CONFIG_SMP
if (IS_ENABLED(CONFIG_SMP) && !soc_is_exynos5250())
scu_enable(S5P_VA_SCU);
#endif
}
early_wakeup:
......
......@@ -30,6 +30,9 @@ config ARCH_CINTEGRATOR
config INTEGRATOR_IMPD1
tristate "Include support for Integrator/IM-PD1"
depends on ARCH_INTEGRATOR_AP
select ARCH_REQUIRE_GPIOLIB
select ARM_VIC
select GPIO_PL061 if GPIOLIB
help
The IM-PD1 is an add-on logic module for the Integrator which
allows ARM(R) Ltd PrimeCells to be developed and evaluated.
......
......@@ -23,6 +23,7 @@
#include <linux/io.h>
#include <linux/platform_data/clk-integrator.h>
#include <linux/slab.h>
#include <linux/irqchip/arm-vic.h>
#include <mach/lm.h>
#include <mach/impd1.h>
......@@ -35,6 +36,7 @@ MODULE_PARM_DESC(lmid, "logic module stack position");
struct impd1_module {
void __iomem *base;
void __iomem *vic_base;
};
void impd1_tweak_control(struct device *dev, u32 mask, u32 val)
......@@ -262,9 +264,6 @@ struct impd1_device {
static struct impd1_device impd1_devs[] = {
{
.offset = 0x03000000,
.id = 0x00041190,
}, {
.offset = 0x00100000,
.irq = { 1 },
.id = 0x00141011,
......@@ -304,46 +303,72 @@ static struct impd1_device impd1_devs[] = {
}
};
static int impd1_probe(struct lm_device *dev)
/*
* Valid IRQs: 0 through 9 and 11; IRQ 10 is unused.
*/
#define IMPD1_VALID_IRQS 0x00000bffU
static int __init impd1_probe(struct lm_device *dev)
{
struct impd1_module *impd1;
int i, ret;
int irq_base;
int i;
if (dev->id != module_id)
return -EINVAL;
if (!request_mem_region(dev->resource.start, SZ_4K, "LM registers"))
if (!devm_request_mem_region(&dev->dev, dev->resource.start,
SZ_4K, "LM registers"))
return -EBUSY;
impd1 = kzalloc(sizeof(struct impd1_module), GFP_KERNEL);
if (!impd1) {
ret = -ENOMEM;
goto release_lm;
}
impd1 = devm_kzalloc(&dev->dev, sizeof(struct impd1_module),
GFP_KERNEL);
if (!impd1)
return -ENOMEM;
impd1->base = ioremap(dev->resource.start, SZ_4K);
if (!impd1->base) {
ret = -ENOMEM;
goto free_impd1;
}
impd1->base = devm_ioremap(&dev->dev, dev->resource.start, SZ_4K);
if (!impd1->base)
return -ENOMEM;
integrator_impd1_clk_init(impd1->base, dev->id);
if (!devm_request_mem_region(&dev->dev,
dev->resource.start + 0x03000000,
SZ_4K, "VIC"))
return -EBUSY;
impd1->vic_base = devm_ioremap(&dev->dev,
dev->resource.start + 0x03000000,
SZ_4K);
if (!impd1->vic_base)
return -ENOMEM;
irq_base = vic_init_cascaded(impd1->vic_base, dev->irq,
IMPD1_VALID_IRQS, 0);
lm_set_drvdata(dev, impd1);
printk("IM-PD1 found at 0x%08lx\n",
dev_info(&dev->dev, "IM-PD1 found at 0x%08lx\n",
(unsigned long)dev->resource.start);
integrator_impd1_clk_init(impd1->base, dev->id);
for (i = 0; i < ARRAY_SIZE(impd1_devs); i++) {
struct impd1_device *idev = impd1_devs + i;
struct amba_device *d;
unsigned long pc_base;
char devname[32];
int irq1 = idev->irq[0];
int irq2 = idev->irq[1];
/* Translate IRQs to IM-PD1 local numberspace */
if (irq1)
irq1 += irq_base;
if (irq2)
irq2 += irq_base;
pc_base = dev->resource.start + idev->offset;
snprintf(devname, 32, "lm%x:%5.5lx", dev->id, idev->offset >> 12);
d = amba_ahb_device_add_res(&dev->dev, devname, pc_base, SZ_4K,
dev->irq, dev->irq,
irq1, irq2,
idev->platform_data, idev->id,
&dev->resource);
if (IS_ERR(d)) {
......@@ -353,14 +378,6 @@ static int impd1_probe(struct lm_device *dev)
}
return 0;
free_impd1:
if (impd1 && impd1->base)
iounmap(impd1->base);
kfree(impd1);
release_lm:
release_mem_region(dev->resource.start, SZ_4K);
return ret;
}
static int impd1_remove_one(struct device *dev, void *data)
......@@ -371,16 +388,10 @@ static int impd1_remove_one(struct device *dev, void *data)
static void impd1_remove(struct lm_device *dev)
{
struct impd1_module *impd1 = lm_get_drvdata(dev);
device_for_each_child(&dev->dev, NULL, impd1_remove_one);
integrator_impd1_clk_exit(dev->id);
lm_set_drvdata(dev, NULL);
iounmap(impd1->base);
kfree(impd1);
release_mem_region(dev->resource.start, SZ_4K);
}
static struct lm_driver impd1_driver = {
......
......@@ -42,6 +42,7 @@
#include <linux/sys_soc.h>
#include <linux/termios.h>
#include <linux/sched_clock.h>
#include <linux/clk-provider.h>
#include <mach/hardware.h>
#include <mach/platform.h>
......@@ -402,10 +403,7 @@ static void __init ap_of_timer_init(void)
struct clk *clk;
unsigned long rate;
clk = clk_get_sys("ap_timer", NULL);
BUG_ON(IS_ERR(clk));
clk_prepare_enable(clk);
rate = clk_get_rate(clk);
of_clk_init(NULL);
err = of_property_read_string(of_aliases,
"arm,timer-primary", &path);
......@@ -415,6 +413,12 @@ static void __init ap_of_timer_init(void)
base = of_iomap(node, 0);
if (WARN_ON(!base))
return;
clk = of_clk_get(node, 0);
BUG_ON(IS_ERR(clk));
clk_prepare_enable(clk);
rate = clk_get_rate(clk);
writel(0, base + TIMER_CTRL);
integrator_clocksource_init(rate, base);
......@@ -427,6 +431,12 @@ static void __init ap_of_timer_init(void)
if (WARN_ON(!base))
return;
irq = irq_of_parse_and_map(node, 0);
clk = of_clk_get(node, 0);
BUG_ON(IS_ERR(clk));
clk_prepare_enable(clk);
rate = clk_get_rate(clk);
writel(0, base + TIMER_CTRL);
integrator_clockevent_init(rate, base, irq);
}
......@@ -440,7 +450,6 @@ static void __init ap_init_irq_of(void)
{
cm_init();
of_irq_init(fpga_irq_of_match);
integrator_clk_init(false);
}
/* For the Device Tree, add in the UART callbacks as AUXDATA */
......
......@@ -23,7 +23,6 @@
#include <linux/irqchip/versatile-fpga.h>
#include <linux/gfp.h>
#include <linux/mtd/physmap.h>
#include <linux/platform_data/clk-integrator.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
......@@ -33,8 +32,6 @@
#include <mach/platform.h>
#include <asm/setup.h>
#include <asm/mach-types.h>
#include <asm/hardware/arm_timer.h>
#include <asm/hardware/icst.h>
#include <mach/lm.h>
......@@ -43,8 +40,6 @@
#include <asm/mach/map.h>
#include <asm/mach/time.h>
#include <asm/hardware/timer-sp.h>
#include <plat/clcd.h>
#include <plat/sched_clock.h>
......@@ -250,7 +245,6 @@ static void __init intcp_init_irq_of(void)
{
cm_init();
of_irq_init(fpga_irq_of_match);
integrator_clk_init(true);
}
/*
......
......@@ -22,6 +22,7 @@
#define CPU_RESET 0x00000002
#define RSTOUTn_MASK (BRIDGE_VIRT_BASE + 0x0108)
#define RSTOUTn_MASK_PHYS (BRIDGE_PHYS_BASE + 0x0108)
#define SOFT_RESET_OUT_EN 0x00000004
#define SYSTEM_SOFT_RESET (BRIDGE_VIRT_BASE + 0x010c)
......
......@@ -15,6 +15,7 @@
#define L2_WRITETHROUGH 0x00020000
#define RSTOUTn_MASK (BRIDGE_VIRT_BASE + 0x0108)
#define RSTOUTn_MASK_PHYS (BRIDGE_PHYS_BASE + 0x0108)
#define SOFT_RESET_OUT_EN 0x00000004
#define SYSTEM_SOFT_RESET (BRIDGE_VIRT_BASE + 0x010c)
......
......@@ -74,6 +74,7 @@ config SOC_DRA7XX
select ARM_CPU_SUSPEND if PM
select ARM_GIC
select HAVE_ARM_ARCH_TIMER
select IRQ_CROSSBAR
config ARCH_OMAP2PLUS
bool
......
......@@ -138,7 +138,7 @@ static void wakeupgen_mask(struct irq_data *d)
unsigned long flags;
raw_spin_lock_irqsave(&wakeupgen_lock, flags);
_wakeupgen_clear(d->irq, irq_target_cpu[d->irq]);
_wakeupgen_clear(d->hwirq, irq_target_cpu[d->hwirq]);
raw_spin_unlock_irqrestore(&wakeupgen_lock, flags);
}
......@@ -150,7 +150,7 @@ static void wakeupgen_unmask(struct irq_data *d)
unsigned long flags;
raw_spin_lock_irqsave(&wakeupgen_lock, flags);
_wakeupgen_set(d->irq, irq_target_cpu[d->irq]);
_wakeupgen_set(d->hwirq, irq_target_cpu[d->hwirq]);
raw_spin_unlock_irqrestore(&wakeupgen_lock, flags);
}
......
......@@ -22,6 +22,7 @@
#include <linux/of_platform.h>
#include <linux/export.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/irqchip/irq-crossbar.h>
#include <linux/of_address.h>
#include <linux/reboot.h>
......@@ -288,5 +289,8 @@ void __init omap_gic_of_init(void)
skip_errata_init:
omap_wakeupgen_init();
#ifdef CONFIG_IRQ_CROSSBAR
irqcrossbar_init();
#endif
irqchip_init();
}
......@@ -18,6 +18,7 @@
#define CPU_CTRL (ORION5X_BRIDGE_VIRT_BASE + 0x104)
#define RSTOUTn_MASK (ORION5X_BRIDGE_VIRT_BASE + 0x108)
#define RSTOUTn_MASK_PHYS (ORION5X_BRIDGE_PHYS_BASE + 0x108)
#define CPU_SOFT_RESET (ORION5X_BRIDGE_VIRT_BASE + 0x10c)
......
......@@ -22,12 +22,15 @@
#include <mach/common.h>
#include <mach/r7s72100.h>
/* registers */
/* Frequency Control Registers */
#define FRQCR 0xfcfe0010
#define FRQCR2 0xfcfe0014
/* Standby Control Registers */
#define STBCR3 0xfcfe0420
#define STBCR4 0xfcfe0424
#define STBCR7 0xfcfe0430
#define STBCR9 0xfcfe0438
#define STBCR10 0xfcfe043c
#define PLL_RATE 30
......@@ -67,7 +70,7 @@ static struct clk pll_clk = {
static unsigned long bus_recalc(struct clk *clk)
{
return clk->parent->rate * 2 / 3;
return clk->parent->rate / 3;
}
static struct sh_clk_ops bus_clk_ops = {
......@@ -145,15 +148,25 @@ struct clk div4_clks[DIV4_NR] = {
| CLK_ENABLE_ON_INIT),
};
enum { MSTP97, MSTP96, MSTP95, MSTP94,
enum {
MSTP107, MSTP106, MSTP105, MSTP104, MSTP103,
MSTP97, MSTP96, MSTP95, MSTP94,
MSTP74,
MSTP47, MSTP46, MSTP45, MSTP44, MSTP43, MSTP42, MSTP41, MSTP40,
MSTP33, MSTP_NR };
MSTP33, MSTP_NR
};
static struct clk mstp_clks[MSTP_NR] = {
[MSTP107] = SH_CLK_MSTP8(&peripheral1_clk, STBCR10, 7, 0), /* RSPI0 */
[MSTP106] = SH_CLK_MSTP8(&peripheral1_clk, STBCR10, 6, 0), /* RSPI1 */
[MSTP105] = SH_CLK_MSTP8(&peripheral1_clk, STBCR10, 5, 0), /* RSPI2 */
[MSTP104] = SH_CLK_MSTP8(&peripheral1_clk, STBCR10, 4, 0), /* RSPI3 */
[MSTP103] = SH_CLK_MSTP8(&peripheral1_clk, STBCR10, 3, 0), /* RSPI4 */
[MSTP97] = SH_CLK_MSTP8(&peripheral0_clk, STBCR9, 7, 0), /* RIIC0 */
[MSTP96] = SH_CLK_MSTP8(&peripheral0_clk, STBCR9, 6, 0), /* RIIC1 */
[MSTP95] = SH_CLK_MSTP8(&peripheral0_clk, STBCR9, 5, 0), /* RIIC2 */
[MSTP94] = SH_CLK_MSTP8(&peripheral0_clk, STBCR9, 4, 0), /* RIIC3 */
[MSTP74] = SH_CLK_MSTP8(&peripheral1_clk, STBCR7, 4, 0), /* Ether */
[MSTP47] = SH_CLK_MSTP8(&peripheral1_clk, STBCR4, 7, 0), /* SCIF0 */
[MSTP46] = SH_CLK_MSTP8(&peripheral1_clk, STBCR4, 6, 0), /* SCIF1 */
[MSTP45] = SH_CLK_MSTP8(&peripheral1_clk, STBCR4, 5, 0), /* SCIF2 */
......@@ -176,6 +189,21 @@ static struct clk_lookup lookups[] = {
CLKDEV_CON_ID("cpu_clk", &div4_clks[DIV4_I]),
/* MSTP clocks */
CLKDEV_DEV_ID("rspi-rz.0", &mstp_clks[MSTP107]),
CLKDEV_DEV_ID("rspi-rz.1", &mstp_clks[MSTP106]),
CLKDEV_DEV_ID("rspi-rz.2", &mstp_clks[MSTP105]),
CLKDEV_DEV_ID("rspi-rz.3", &mstp_clks[MSTP104]),
CLKDEV_DEV_ID("rspi-rz.4", &mstp_clks[MSTP103]),
CLKDEV_DEV_ID("e800c800.spi", &mstp_clks[MSTP107]),
CLKDEV_DEV_ID("e800d000.spi", &mstp_clks[MSTP106]),
CLKDEV_DEV_ID("e800d800.spi", &mstp_clks[MSTP105]),
CLKDEV_DEV_ID("e800e000.spi", &mstp_clks[MSTP104]),
CLKDEV_DEV_ID("e800e800.spi", &mstp_clks[MSTP103]),
CLKDEV_DEV_ID("fcfee000.i2c", &mstp_clks[MSTP97]),
CLKDEV_DEV_ID("fcfee400.i2c", &mstp_clks[MSTP96]),
CLKDEV_DEV_ID("fcfee800.i2c", &mstp_clks[MSTP95]),
CLKDEV_DEV_ID("fcfeec00.i2c", &mstp_clks[MSTP94]),
CLKDEV_DEV_ID("r7s72100-ether", &mstp_clks[MSTP74]),
CLKDEV_CON_ID("mtu2_fck", &mstp_clks[MSTP33]),
/* ICK */
......
......@@ -221,6 +221,10 @@ static struct clk_lookup lookups[] = {
CLKDEV_DEV_ID("fffc6000.spi", &mstp_clks[MSTP007]), /* HSPI2 */
CLKDEV_DEV_ID("rcar_sound", &mstp_clks[MSTP008]), /* SRU */
CLKDEV_ICK_ID("clk_a", "rcar_sound", &audio_clk_a),
CLKDEV_ICK_ID("clk_b", "rcar_sound", &audio_clk_b),
CLKDEV_ICK_ID("clk_c", "rcar_sound", &audio_clk_c),
CLKDEV_ICK_ID("clk_i", "rcar_sound", &s1_clk),
CLKDEV_ICK_ID("ssi.0", "rcar_sound", &mstp_clks[MSTP012]),
CLKDEV_ICK_ID("ssi.1", "rcar_sound", &mstp_clks[MSTP011]),
CLKDEV_ICK_ID("ssi.2", "rcar_sound", &mstp_clks[MSTP010]),
......
......@@ -120,16 +120,16 @@ static struct clk mstp_clks[MSTP_NR] = {
[MSTP322] = SH_CLK_MSTP32(&clkp_clk, MSTPCR3, 22, 0), /* SDHI1 */
[MSTP321] = SH_CLK_MSTP32(&clkp_clk, MSTPCR3, 21, 0), /* SDHI2 */
[MSTP320] = SH_CLK_MSTP32(&clkp_clk, MSTPCR3, 20, 0), /* SDHI3 */
[MSTP120] = SH_CLK_MSTP32(&clks_clk, MSTPCR1, 20, 0), /* VIN3 */
[MSTP116] = SH_CLK_MSTP32(&clkp_clk, MSTPCR1, 16, 0), /* PCIe */
[MSTP115] = SH_CLK_MSTP32(&clkp_clk, MSTPCR1, 15, 0), /* SATA */
[MSTP114] = SH_CLK_MSTP32(&clkp_clk, MSTPCR1, 14, 0), /* Ether */
[MSTP110] = SH_CLK_MSTP32(&clks_clk, MSTPCR1, 10, 0), /* VIN0 */
[MSTP109] = SH_CLK_MSTP32(&clks_clk, MSTPCR1, 9, 0), /* VIN1 */
[MSTP108] = SH_CLK_MSTP32(&clks_clk, MSTPCR1, 8, 0), /* VIN2 */
[MSTP103] = SH_CLK_MSTP32(&clks_clk, MSTPCR1, 3, 0), /* DU */
[MSTP101] = SH_CLK_MSTP32(&clkp_clk, MSTPCR1, 1, 0), /* USB2 */
[MSTP100] = SH_CLK_MSTP32(&clkp_clk, MSTPCR1, 0, 0), /* USB0/1 */
[MSTP120] = SH_CLK_MSTP32_STS(&clks_clk, MSTPCR1, 20, MSTPSR1, 0), /* VIN3 */
[MSTP116] = SH_CLK_MSTP32_STS(&clkp_clk, MSTPCR1, 16, MSTPSR1, 0), /* PCIe */
[MSTP115] = SH_CLK_MSTP32_STS(&clkp_clk, MSTPCR1, 15, MSTPSR1, 0), /* SATA */
[MSTP114] = SH_CLK_MSTP32_STS(&clkp_clk, MSTPCR1, 14, MSTPSR1, 0), /* Ether */
[MSTP110] = SH_CLK_MSTP32_STS(&clks_clk, MSTPCR1, 10, MSTPSR1, 0), /* VIN0 */
[MSTP109] = SH_CLK_MSTP32_STS(&clks_clk, MSTPCR1, 9, MSTPSR1, 0), /* VIN1 */
[MSTP108] = SH_CLK_MSTP32_STS(&clks_clk, MSTPCR1, 8, MSTPSR1, 0), /* VIN2 */
[MSTP103] = SH_CLK_MSTP32_STS(&clks_clk, MSTPCR1, 3, MSTPSR1, 0), /* DU */
[MSTP101] = SH_CLK_MSTP32_STS(&clkp_clk, MSTPCR1, 1, MSTPSR1, 0), /* USB2 */
[MSTP100] = SH_CLK_MSTP32_STS(&clkp_clk, MSTPCR1, 0, MSTPSR1, 0), /* USB0/1 */
[MSTP030] = SH_CLK_MSTP32(&clkp_clk, MSTPCR0, 30, 0), /* I2C0 */
[MSTP029] = SH_CLK_MSTP32(&clkp_clk, MSTPCR0, 29, 0), /* I2C1 */
[MSTP028] = SH_CLK_MSTP32(&clkp_clk, MSTPCR0, 28, 0), /* I2C2 */
......
......@@ -55,6 +55,15 @@
#define SMSTPCR9 0xe6150994
#define SMSTPCR10 0xe6150998
#define MSTPSR1 IOMEM(0xe6150038)
#define MSTPSR2 IOMEM(0xe6150040)
#define MSTPSR3 IOMEM(0xe6150048)
#define MSTPSR5 IOMEM(0xe615003c)
#define MSTPSR7 IOMEM(0xe61501c4)
#define MSTPSR8 IOMEM(0xe61509a0)
#define MSTPSR9 IOMEM(0xe61509a4)
#define MSTPSR10 IOMEM(0xe61509a8)
#define SDCKCR 0xE6150074
#define SD2CKCR 0xE6150078
#define SD3CKCR 0xE615007C
......@@ -82,6 +91,15 @@ static struct clk main_clk = {
.ops = &followparent_clk_ops,
};
static struct clk audio_clk_a = {
};
static struct clk audio_clk_b = {
};
static struct clk audio_clk_c = {
};
/*
* clock ratios of these clocks will be updated
* in r8a7790_clock_init()
......@@ -115,6 +133,9 @@ SH_FIXED_RATIO_CLK_SET(ddr_clk, pll3_clk, 1, 8);
SH_FIXED_RATIO_CLK_SET(mp_clk, pll1_div2_clk, 1, 15);
static struct clk *main_clks[] = {
&audio_clk_a,
&audio_clk_b,
&audio_clk_c,
&extal_clk,
&extal_div2_clk,
&main_clk,
......@@ -183,15 +204,22 @@ static struct clk div6_clks[DIV6_NR] = {
/* MSTP */
enum {
MSTP1017, /* parent of SCU */
MSTP1031, MSTP1030,
MSTP1029, MSTP1028, MSTP1027, MSTP1026, MSTP1025, MSTP1024, MSTP1023, MSTP1022,
MSTP1015, MSTP1014, MSTP1013, MSTP1012, MSTP1011, MSTP1010,
MSTP1009, MSTP1008, MSTP1007, MSTP1006, MSTP1005,
MSTP931, MSTP930, MSTP929, MSTP928,
MSTP917,
MSTP815, MSTP814,
MSTP813,
MSTP811, MSTP810, MSTP809, MSTP808,
MSTP726, MSTP725, MSTP724, MSTP723, MSTP722, MSTP721, MSTP720,
MSTP717, MSTP716,
MSTP704,
MSTP704, MSTP703,
MSTP522,
MSTP502, MSTP501,
MSTP315, MSTP314, MSTP313, MSTP312, MSTP311, MSTP305, MSTP304,
MSTP216, MSTP207, MSTP206, MSTP204, MSTP203, MSTP202,
MSTP124,
......@@ -199,53 +227,77 @@ enum {
};
static struct clk mstp_clks[MSTP_NR] = {
[MSTP1015] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 15, 0), /* SSI0 */
[MSTP1014] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 14, 0), /* SSI1 */
[MSTP1013] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 13, 0), /* SSI2 */
[MSTP1012] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 12, 0), /* SSI3 */
[MSTP1011] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 11, 0), /* SSI4 */
[MSTP1010] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 10, 0), /* SSI5 */
[MSTP1009] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 9, 0), /* SSI6 */
[MSTP1008] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 8, 0), /* SSI7 */
[MSTP1007] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 7, 0), /* SSI8 */
[MSTP1006] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 6, 0), /* SSI9 */
[MSTP1005] = SH_CLK_MSTP32(&p_clk, SMSTPCR10, 5, 0), /* SSI ALL */
[MSTP931] = SH_CLK_MSTP32(&p_clk, SMSTPCR9, 31, 0), /* I2C0 */
[MSTP930] = SH_CLK_MSTP32(&p_clk, SMSTPCR9, 30, 0), /* I2C1 */
[MSTP929] = SH_CLK_MSTP32(&p_clk, SMSTPCR9, 29, 0), /* I2C2 */
[MSTP928] = SH_CLK_MSTP32(&p_clk, SMSTPCR9, 28, 0), /* I2C3 */
[MSTP917] = SH_CLK_MSTP32(&qspi_clk, SMSTPCR9, 17, 0), /* QSPI */
[MSTP813] = SH_CLK_MSTP32(&p_clk, SMSTPCR8, 13, 0), /* Ether */
[MSTP726] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 26, 0), /* LVDS0 */
[MSTP725] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 25, 0), /* LVDS1 */
[MSTP724] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 24, 0), /* DU0 */
[MSTP723] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 23, 0), /* DU1 */
[MSTP722] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 22, 0), /* DU2 */
[MSTP721] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 21, 0), /* SCIF0 */
[MSTP720] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 20, 0), /* SCIF1 */
[MSTP717] = SH_CLK_MSTP32(&zs_clk, SMSTPCR7, 17, 0), /* HSCIF0 */
[MSTP716] = SH_CLK_MSTP32(&zs_clk, SMSTPCR7, 16, 0), /* HSCIF1 */
[MSTP704] = SH_CLK_MSTP32(&mp_clk, SMSTPCR7, 4, 0), /* HSUSB */
[MSTP522] = SH_CLK_MSTP32(&extal_clk, SMSTPCR5, 22, 0), /* Thermal */
[MSTP315] = SH_CLK_MSTP32(&div6_clks[DIV6_MMC0], SMSTPCR3, 15, 0), /* MMC0 */
[MSTP314] = SH_CLK_MSTP32(&div4_clks[DIV4_SD0], SMSTPCR3, 14, 0), /* SDHI0 */
[MSTP313] = SH_CLK_MSTP32(&div4_clks[DIV4_SD1], SMSTPCR3, 13, 0), /* SDHI1 */
[MSTP312] = SH_CLK_MSTP32(&div6_clks[DIV6_SD2], SMSTPCR3, 12, 0), /* SDHI2 */
[MSTP311] = SH_CLK_MSTP32(&div6_clks[DIV6_SD3], SMSTPCR3, 11, 0), /* SDHI3 */
[MSTP305] = SH_CLK_MSTP32(&div6_clks[DIV6_MMC1], SMSTPCR3, 5, 0), /* MMC1 */
[MSTP304] = SH_CLK_MSTP32(&cp_clk, SMSTPCR3, 4, 0), /* TPU0 */
[MSTP216] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 16, 0), /* SCIFB2 */
[MSTP207] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 7, 0), /* SCIFB1 */
[MSTP206] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 6, 0), /* SCIFB0 */
[MSTP204] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 4, 0), /* SCIFA0 */
[MSTP203] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 3, 0), /* SCIFA1 */
[MSTP202] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 2, 0), /* SCIFA2 */
[MSTP124] = SH_CLK_MSTP32(&rclk_clk, SMSTPCR1, 24, 0), /* CMT0 */
[MSTP1031] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 31, MSTPSR10, 0), /* SCU0 */
[MSTP1030] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 30, MSTPSR10, 0), /* SCU1 */
[MSTP1029] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 29, MSTPSR10, 0), /* SCU2 */
[MSTP1028] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 28, MSTPSR10, 0), /* SCU3 */
[MSTP1027] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 27, MSTPSR10, 0), /* SCU4 */
[MSTP1026] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 26, MSTPSR10, 0), /* SCU5 */
[MSTP1025] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 25, MSTPSR10, 0), /* SCU6 */
[MSTP1024] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 24, MSTPSR10, 0), /* SCU7 */
[MSTP1023] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 23, MSTPSR10, 0), /* SCU8 */
[MSTP1022] = SH_CLK_MSTP32_STS(&mstp_clks[MSTP1017], SMSTPCR10, 22, MSTPSR10, 0), /* SCU9 */
[MSTP1017] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 17, MSTPSR10, 0), /* SCU */
[MSTP1015] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 15, MSTPSR10, 0), /* SSI0 */
[MSTP1014] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 14, MSTPSR10, 0), /* SSI1 */
[MSTP1013] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 13, MSTPSR10, 0), /* SSI2 */
[MSTP1012] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 12, MSTPSR10, 0), /* SSI3 */
[MSTP1011] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 11, MSTPSR10, 0), /* SSI4 */
[MSTP1010] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 10, MSTPSR10, 0), /* SSI5 */
[MSTP1009] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 9, MSTPSR10, 0), /* SSI6 */
[MSTP1008] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 8, MSTPSR10, 0), /* SSI7 */
[MSTP1007] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 7, MSTPSR10, 0), /* SSI8 */
[MSTP1006] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 6, MSTPSR10, 0), /* SSI9 */
[MSTP1005] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR10, 5, MSTPSR10, 0), /* SSI ALL */
[MSTP931] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 31, MSTPSR9, 0), /* I2C0 */
[MSTP930] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 30, MSTPSR9, 0), /* I2C1 */
[MSTP929] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 29, MSTPSR9, 0), /* I2C2 */
[MSTP928] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 28, MSTPSR9, 0), /* I2C3 */
[MSTP917] = SH_CLK_MSTP32_STS(&qspi_clk, SMSTPCR9, 17, MSTPSR9, 0), /* QSPI */
[MSTP815] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR8, 15, MSTPSR8, 0), /* SATA0 */
[MSTP814] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR8, 14, MSTPSR8, 0), /* SATA1 */
[MSTP813] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR8, 13, MSTPSR8, 0), /* Ether */
[MSTP811] = SH_CLK_MSTP32_STS(&zg_clk, SMSTPCR8, 11, MSTPSR8, 0), /* VIN0 */
[MSTP810] = SH_CLK_MSTP32_STS(&zg_clk, SMSTPCR8, 10, MSTPSR8, 0), /* VIN1 */
[MSTP809] = SH_CLK_MSTP32_STS(&zg_clk, SMSTPCR8, 9, MSTPSR8, 0), /* VIN2 */
[MSTP808] = SH_CLK_MSTP32_STS(&zg_clk, SMSTPCR8, 8, MSTPSR8, 0), /* VIN3 */
[MSTP726] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 26, MSTPSR7, 0), /* LVDS0 */
[MSTP725] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 25, MSTPSR7, 0), /* LVDS1 */
[MSTP724] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 24, MSTPSR7, 0), /* DU0 */
[MSTP723] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 23, MSTPSR7, 0), /* DU1 */
[MSTP722] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 22, MSTPSR7, 0), /* DU2 */
[MSTP721] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 21, MSTPSR7, 0), /* SCIF0 */
[MSTP720] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 20, MSTPSR7, 0), /* SCIF1 */
[MSTP717] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR7, 17, MSTPSR7, 0), /* HSCIF0 */
[MSTP716] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR7, 16, MSTPSR7, 0), /* HSCIF1 */
[MSTP704] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR7, 4, MSTPSR7, 0), /* HSUSB */
[MSTP703] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR7, 3, MSTPSR7, 0), /* EHCI */
[MSTP522] = SH_CLK_MSTP32_STS(&extal_clk, SMSTPCR5, 22, MSTPSR5, 0), /* Thermal */
[MSTP502] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR5, 2, MSTPSR5, 0), /* Audio-DMAC low */
[MSTP501] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR5, 1, MSTPSR5, 0), /* Audio-DMAC hi */
[MSTP315] = SH_CLK_MSTP32_STS(&div6_clks[DIV6_MMC0], SMSTPCR3, 15, MSTPSR3, 0), /* MMC0 */
[MSTP314] = SH_CLK_MSTP32_STS(&div4_clks[DIV4_SD0], SMSTPCR3, 14, MSTPSR3, 0), /* SDHI0 */
[MSTP313] = SH_CLK_MSTP32_STS(&div4_clks[DIV4_SD1], SMSTPCR3, 13, MSTPSR3, 0), /* SDHI1 */
[MSTP312] = SH_CLK_MSTP32_STS(&div6_clks[DIV6_SD2], SMSTPCR3, 12, MSTPSR3, 0), /* SDHI2 */
[MSTP311] = SH_CLK_MSTP32_STS(&div6_clks[DIV6_SD3], SMSTPCR3, 11, MSTPSR3, 0), /* SDHI3 */
[MSTP305] = SH_CLK_MSTP32_STS(&div6_clks[DIV6_MMC1], SMSTPCR3, 5, MSTPSR3, 0), /* MMC1 */
[MSTP304] = SH_CLK_MSTP32_STS(&cp_clk, SMSTPCR3, 4, MSTPSR3, 0), /* TPU0 */
[MSTP216] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 16, MSTPSR2, 0), /* SCIFB2 */
[MSTP207] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 7, MSTPSR2, 0), /* SCIFB1 */
[MSTP206] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 6, MSTPSR2, 0), /* SCIFB0 */
[MSTP204] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 4, MSTPSR2, 0), /* SCIFA0 */
[MSTP203] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 3, MSTPSR2, 0), /* SCIFA1 */
[MSTP202] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 2, MSTPSR2, 0), /* SCIFA2 */
[MSTP124] = SH_CLK_MSTP32_STS(&rclk_clk, SMSTPCR1, 24, MSTPSR1, 0), /* CMT0 */
};
static struct clk_lookup lookups[] = {
/* main clocks */
CLKDEV_CON_ID("audio_clk_a", &audio_clk_a),
CLKDEV_CON_ID("audio_clk_b", &audio_clk_b),
CLKDEV_CON_ID("audio_clk_c", &audio_clk_c),
CLKDEV_CON_ID("audio_clk_internal", &m2_clk),
CLKDEV_CON_ID("extal", &extal_clk),
CLKDEV_CON_ID("extal_div2", &extal_div2_clk),
CLKDEV_CON_ID("main", &main_clk),
......@@ -291,32 +343,32 @@ static struct clk_lookup lookups[] = {
CLKDEV_DEV_ID("sh-sci.7", &mstp_clks[MSTP720]),
CLKDEV_DEV_ID("sh-sci.8", &mstp_clks[MSTP717]),
CLKDEV_DEV_ID("sh-sci.9", &mstp_clks[MSTP716]),
CLKDEV_DEV_ID("e6508000.i2c", &mstp_clks[MSTP931]),
CLKDEV_DEV_ID("i2c-rcar_gen2.0", &mstp_clks[MSTP931]),
CLKDEV_DEV_ID("e6518000.i2c", &mstp_clks[MSTP930]),
CLKDEV_DEV_ID("i2c-rcar_gen2.1", &mstp_clks[MSTP930]),
CLKDEV_DEV_ID("e6530000.i2c", &mstp_clks[MSTP929]),
CLKDEV_DEV_ID("i2c-rcar_gen2.2", &mstp_clks[MSTP929]),
CLKDEV_DEV_ID("e6540000.i2c", &mstp_clks[MSTP928]),
CLKDEV_DEV_ID("i2c-rcar_gen2.3", &mstp_clks[MSTP928]),
CLKDEV_DEV_ID("r8a7790-ether", &mstp_clks[MSTP813]),
CLKDEV_DEV_ID("e61f0000.thermal", &mstp_clks[MSTP522]),
CLKDEV_DEV_ID("r8a7790-vin.0", &mstp_clks[MSTP811]),
CLKDEV_DEV_ID("r8a7790-vin.1", &mstp_clks[MSTP810]),
CLKDEV_DEV_ID("r8a7790-vin.2", &mstp_clks[MSTP809]),
CLKDEV_DEV_ID("r8a7790-vin.3", &mstp_clks[MSTP808]),
CLKDEV_DEV_ID("rcar_thermal", &mstp_clks[MSTP522]),
CLKDEV_DEV_ID("ee200000.mmc", &mstp_clks[MSTP315]),
CLKDEV_DEV_ID("sh-dma-engine.0", &mstp_clks[MSTP502]),
CLKDEV_DEV_ID("sh-dma-engine.1", &mstp_clks[MSTP501]),
CLKDEV_DEV_ID("sh_mmcif.0", &mstp_clks[MSTP315]),
CLKDEV_DEV_ID("ee100000.sd", &mstp_clks[MSTP314]),
CLKDEV_DEV_ID("sh_mobile_sdhi.0", &mstp_clks[MSTP314]),
CLKDEV_DEV_ID("ee120000.sd", &mstp_clks[MSTP313]),
CLKDEV_DEV_ID("sh_mobile_sdhi.1", &mstp_clks[MSTP313]),
CLKDEV_DEV_ID("ee140000.sd", &mstp_clks[MSTP312]),
CLKDEV_DEV_ID("sh_mobile_sdhi.2", &mstp_clks[MSTP312]),
CLKDEV_DEV_ID("ee160000.sd", &mstp_clks[MSTP311]),
CLKDEV_DEV_ID("sh_mobile_sdhi.3", &mstp_clks[MSTP311]),
CLKDEV_DEV_ID("ee220000.mmc", &mstp_clks[MSTP305]),
CLKDEV_DEV_ID("sh_mmcif.1", &mstp_clks[MSTP305]),
CLKDEV_DEV_ID("sh_cmt.0", &mstp_clks[MSTP124]),
CLKDEV_DEV_ID("qspi.0", &mstp_clks[MSTP917]),
CLKDEV_DEV_ID("renesas_usbhs", &mstp_clks[MSTP704]),
CLKDEV_DEV_ID("pci-rcar-gen2.0", &mstp_clks[MSTP703]),
CLKDEV_DEV_ID("pci-rcar-gen2.1", &mstp_clks[MSTP703]),
CLKDEV_DEV_ID("pci-rcar-gen2.2", &mstp_clks[MSTP703]),
CLKDEV_DEV_ID("sata-r8a7790.0", &mstp_clks[MSTP815]),
CLKDEV_DEV_ID("sata-r8a7790.1", &mstp_clks[MSTP814]),
/* ICK */
CLKDEV_ICK_ID("usbhs", "usb_phy_rcar_gen2", &mstp_clks[MSTP704]),
......@@ -325,6 +377,20 @@ static struct clk_lookup lookups[] = {
CLKDEV_ICK_ID("du.0", "rcar-du-r8a7790", &mstp_clks[MSTP724]),
CLKDEV_ICK_ID("du.1", "rcar-du-r8a7790", &mstp_clks[MSTP723]),
CLKDEV_ICK_ID("du.2", "rcar-du-r8a7790", &mstp_clks[MSTP722]),
CLKDEV_ICK_ID("clk_a", "rcar_sound", &audio_clk_a),
CLKDEV_ICK_ID("clk_b", "rcar_sound", &audio_clk_b),
CLKDEV_ICK_ID("clk_c", "rcar_sound", &audio_clk_c),
CLKDEV_ICK_ID("clk_i", "rcar_sound", &m2_clk),
CLKDEV_ICK_ID("scu.0", "rcar_sound", &mstp_clks[MSTP1031]),
CLKDEV_ICK_ID("scu.1", "rcar_sound", &mstp_clks[MSTP1030]),
CLKDEV_ICK_ID("scu.2", "rcar_sound", &mstp_clks[MSTP1029]),
CLKDEV_ICK_ID("scu.3", "rcar_sound", &mstp_clks[MSTP1028]),
CLKDEV_ICK_ID("scu.4", "rcar_sound", &mstp_clks[MSTP1027]),
CLKDEV_ICK_ID("scu.5", "rcar_sound", &mstp_clks[MSTP1026]),
CLKDEV_ICK_ID("scu.6", "rcar_sound", &mstp_clks[MSTP1025]),
CLKDEV_ICK_ID("scu.7", "rcar_sound", &mstp_clks[MSTP1024]),
CLKDEV_ICK_ID("scu.8", "rcar_sound", &mstp_clks[MSTP1023]),
CLKDEV_ICK_ID("scu.9", "rcar_sound", &mstp_clks[MSTP1022]),
CLKDEV_ICK_ID("ssi.0", "rcar_sound", &mstp_clks[MSTP1015]),
CLKDEV_ICK_ID("ssi.1", "rcar_sound", &mstp_clks[MSTP1014]),
CLKDEV_ICK_ID("ssi.2", "rcar_sound", &mstp_clks[MSTP1013]),
......
......@@ -59,10 +59,19 @@
#define SMSTPCR10 0xE6150998
#define SMSTPCR11 0xE615099C
#define MSTPSR1 IOMEM(0xe6150038)
#define MSTPSR2 IOMEM(0xe6150040)
#define MSTPSR3 IOMEM(0xe6150048)
#define MSTPSR5 IOMEM(0xe615003c)
#define MSTPSR7 IOMEM(0xe61501c4)
#define MSTPSR8 IOMEM(0xe61509a0)
#define MSTPSR9 IOMEM(0xe61509a4)
#define MSTPSR11 IOMEM(0xe61509ac)
#define MODEMR 0xE6160060
#define SDCKCR 0xE6150074
#define SD2CKCR 0xE6150078
#define SD3CKCR 0xE615007C
#define SD1CKCR 0xE6150078
#define SD2CKCR 0xE615026c
#define MMC0CKCR 0xE6150240
#define MMC1CKCR 0xE6150244
#define SSPCKCR 0xE6150248
......@@ -93,6 +102,7 @@ static struct clk main_clk = {
*/
SH_FIXED_RATIO_CLK_SET(pll1_clk, main_clk, 1, 1);
SH_FIXED_RATIO_CLK_SET(pll3_clk, main_clk, 1, 1);
SH_FIXED_RATIO_CLK_SET(qspi_clk, pll1_clk, 1, 1);
/* fixed ratio clock */
SH_FIXED_RATIO_CLK_SET(extal_div2_clk, extal_clk, 1, 2);
......@@ -103,7 +113,9 @@ SH_FIXED_RATIO_CLK_SET(hp_clk, pll1_clk, 1, 12);
SH_FIXED_RATIO_CLK_SET(p_clk, pll1_clk, 1, 24);
SH_FIXED_RATIO_CLK_SET(rclk_clk, pll1_clk, 1, (48 * 1024));
SH_FIXED_RATIO_CLK_SET(mp_clk, pll1_div2_clk, 1, 15);
SH_FIXED_RATIO_CLK_SET(zg_clk, pll1_clk, 1, 3);
SH_FIXED_RATIO_CLK_SET(zx_clk, pll1_clk, 1, 3);
SH_FIXED_RATIO_CLK_SET(zs_clk, pll1_clk, 1, 6);
static struct clk *main_clks[] = {
&extal_clk,
......@@ -114,46 +126,103 @@ static struct clk *main_clks[] = {
&pll3_clk,
&hp_clk,
&p_clk,
&qspi_clk,
&rclk_clk,
&mp_clk,
&cp_clk,
&zg_clk,
&zx_clk,
&zs_clk,
};
/* SDHI (DIV4) clock */
static int divisors[] = { 2, 3, 4, 6, 8, 12, 16, 18, 24, 0, 36, 48, 10 };
static struct clk_div_mult_table div4_div_mult_table = {
.divisors = divisors,
.nr_divisors = ARRAY_SIZE(divisors),
};
static struct clk_div4_table div4_table = {
.div_mult_table = &div4_div_mult_table,
};
enum {
DIV4_SDH, DIV4_SD0,
DIV4_NR
};
static struct clk div4_clks[DIV4_NR] = {
[DIV4_SDH] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 8, 0x0dff, CLK_ENABLE_ON_INIT),
[DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1de0, CLK_ENABLE_ON_INIT),
};
/* DIV6 clocks */
enum {
DIV6_SD1, DIV6_SD2,
DIV6_NR
};
static struct clk div6_clks[DIV6_NR] = {
[DIV6_SD1] = SH_CLK_DIV6(&pll1_div2_clk, SD1CKCR, 0),
[DIV6_SD2] = SH_CLK_DIV6(&pll1_div2_clk, SD2CKCR, 0),
};
/* MSTP */
enum {
MSTP1108, MSTP1107, MSTP1106,
MSTP931, MSTP930, MSTP929, MSTP928, MSTP927, MSTP925,
MSTP917,
MSTP815, MSTP814,
MSTP813,
MSTP811, MSTP810, MSTP809,
MSTP726, MSTP724, MSTP723, MSTP721, MSTP720,
MSTP719, MSTP718, MSTP715, MSTP714,
MSTP522,
MSTP314, MSTP312, MSTP311,
MSTP216, MSTP207, MSTP206,
MSTP204, MSTP203, MSTP202, MSTP1105, MSTP1106, MSTP1107,
MSTP204, MSTP203, MSTP202,
MSTP124,
MSTP_NR
};
static struct clk mstp_clks[MSTP_NR] = {
[MSTP813] = SH_CLK_MSTP32(&p_clk, SMSTPCR8, 13, 0), /* Ether */
[MSTP726] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 26, 0), /* LVDS0 */
[MSTP724] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 24, 0), /* DU0 */
[MSTP723] = SH_CLK_MSTP32(&zx_clk, SMSTPCR7, 23, 0), /* DU1 */
[MSTP721] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 21, 0), /* SCIF0 */
[MSTP720] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 20, 0), /* SCIF1 */
[MSTP719] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 19, 0), /* SCIF2 */
[MSTP718] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 18, 0), /* SCIF3 */
[MSTP715] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 15, 0), /* SCIF4 */
[MSTP714] = SH_CLK_MSTP32(&p_clk, SMSTPCR7, 14, 0), /* SCIF5 */
[MSTP522] = SH_CLK_MSTP32(&extal_clk, SMSTPCR5, 22, 0), /* Thermal */
[MSTP216] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 16, 0), /* SCIFB2 */
[MSTP207] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 7, 0), /* SCIFB1 */
[MSTP206] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 6, 0), /* SCIFB0 */
[MSTP204] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 4, 0), /* SCIFA0 */
[MSTP203] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 3, 0), /* SCIFA1 */
[MSTP202] = SH_CLK_MSTP32(&mp_clk, SMSTPCR2, 2, 0), /* SCIFA2 */
[MSTP1105] = SH_CLK_MSTP32(&mp_clk, SMSTPCR11, 5, 0), /* SCIFA3 */
[MSTP1106] = SH_CLK_MSTP32(&mp_clk, SMSTPCR11, 6, 0), /* SCIFA4 */
[MSTP1107] = SH_CLK_MSTP32(&mp_clk, SMSTPCR11, 7, 0), /* SCIFA5 */
[MSTP124] = SH_CLK_MSTP32(&rclk_clk, SMSTPCR1, 24, 0), /* CMT0 */
[MSTP1108] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR11, 8, MSTPSR11, 0), /* SCIFA5 */
[MSTP1107] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR11, 7, MSTPSR11, 0), /* SCIFA4 */
[MSTP1106] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR11, 6, MSTPSR11, 0), /* SCIFA3 */
[MSTP931] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 31, MSTPSR9, 0), /* I2C0 */
[MSTP930] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 30, MSTPSR9, 0), /* I2C1 */
[MSTP929] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 29, MSTPSR9, 0), /* I2C2 */
[MSTP928] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 28, MSTPSR9, 0), /* I2C3 */
[MSTP927] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 27, MSTPSR9, 0), /* I2C4 */
[MSTP925] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR9, 25, MSTPSR9, 0), /* I2C5 */
[MSTP917] = SH_CLK_MSTP32_STS(&qspi_clk, SMSTPCR9, 17, MSTPSR9, 0), /* QSPI */
[MSTP815] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR8, 15, MSTPSR8, 0), /* SATA0 */
[MSTP814] = SH_CLK_MSTP32_STS(&zs_clk, SMSTPCR8, 14, MSTPSR8, 0), /* SATA1 */
[MSTP813] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR8, 13, MSTPSR8, 0), /* Ether */
[MSTP811] = SH_CLK_MSTP32_STS(&zg_clk, SMSTPCR8, 11, MSTPSR8, 0), /* VIN0 */
[MSTP810] = SH_CLK_MSTP32_STS(&zg_clk, SMSTPCR8, 10, MSTPSR8, 0), /* VIN1 */
[MSTP809] = SH_CLK_MSTP32_STS(&zg_clk, SMSTPCR8, 9, MSTPSR8, 0), /* VIN2 */
[MSTP726] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 26, MSTPSR7, 0), /* LVDS0 */
[MSTP724] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 24, MSTPSR7, 0), /* DU0 */
[MSTP723] = SH_CLK_MSTP32_STS(&zx_clk, SMSTPCR7, 23, MSTPSR7, 0), /* DU1 */
[MSTP721] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 21, MSTPSR7, 0), /* SCIF0 */
[MSTP720] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 20, MSTPSR7, 0), /* SCIF1 */
[MSTP719] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 19, MSTPSR7, 0), /* SCIF2 */
[MSTP718] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 18, MSTPSR7, 0), /* SCIF3 */
[MSTP715] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 15, MSTPSR7, 0), /* SCIF4 */
[MSTP714] = SH_CLK_MSTP32_STS(&p_clk, SMSTPCR7, 14, MSTPSR7, 0), /* SCIF5 */
[MSTP522] = SH_CLK_MSTP32_STS(&extal_clk, SMSTPCR5, 22, MSTPSR5, 0), /* Thermal */
[MSTP314] = SH_CLK_MSTP32_STS(&div4_clks[DIV4_SD0], SMSTPCR3, 14, MSTPSR3, 0), /* SDHI0 */
[MSTP312] = SH_CLK_MSTP32_STS(&div6_clks[DIV6_SD1], SMSTPCR3, 12, MSTPSR3, 0), /* SDHI1 */
[MSTP311] = SH_CLK_MSTP32_STS(&div6_clks[DIV6_SD2], SMSTPCR3, 11, MSTPSR3, 0), /* SDHI2 */
[MSTP216] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 16, MSTPSR2, 0), /* SCIFB2 */
[MSTP207] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 7, MSTPSR2, 0), /* SCIFB1 */
[MSTP206] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 6, MSTPSR2, 0), /* SCIFB0 */
[MSTP204] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 4, MSTPSR2, 0), /* SCIFA0 */
[MSTP203] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 3, MSTPSR2, 0), /* SCIFA1 */
[MSTP202] = SH_CLK_MSTP32_STS(&mp_clk, SMSTPCR2, 2, MSTPSR2, 0), /* SCIFA2 */
[MSTP124] = SH_CLK_MSTP32_STS(&rclk_clk, SMSTPCR1, 24, MSTPSR1, 0), /* CMT0 */
};
static struct clk_lookup lookups[] = {
......@@ -165,8 +234,11 @@ static struct clk_lookup lookups[] = {
CLKDEV_CON_ID("pll1", &pll1_clk),
CLKDEV_CON_ID("pll1_div2", &pll1_div2_clk),
CLKDEV_CON_ID("pll3", &pll3_clk),
CLKDEV_CON_ID("zg", &zg_clk),
CLKDEV_CON_ID("zs", &zs_clk),
CLKDEV_CON_ID("hp", &hp_clk),
CLKDEV_CON_ID("p", &p_clk),
CLKDEV_CON_ID("qspi", &qspi_clk),
CLKDEV_CON_ID("rclk", &rclk_clk),
CLKDEV_CON_ID("mp", &mp_clk),
CLKDEV_CON_ID("cp", &cp_clk),
......@@ -188,13 +260,27 @@ static struct clk_lookup lookups[] = {
CLKDEV_DEV_ID("sh-sci.9", &mstp_clks[MSTP718]), /* SCIF3 */
CLKDEV_DEV_ID("sh-sci.10", &mstp_clks[MSTP715]), /* SCIF4 */
CLKDEV_DEV_ID("sh-sci.11", &mstp_clks[MSTP714]), /* SCIF5 */
CLKDEV_DEV_ID("sh-sci.12", &mstp_clks[MSTP1105]), /* SCIFA3 */
CLKDEV_DEV_ID("sh-sci.13", &mstp_clks[MSTP1106]), /* SCIFA4 */
CLKDEV_DEV_ID("sh-sci.14", &mstp_clks[MSTP1107]), /* SCIFA5 */
CLKDEV_DEV_ID("sh-sci.12", &mstp_clks[MSTP1106]), /* SCIFA3 */
CLKDEV_DEV_ID("sh-sci.13", &mstp_clks[MSTP1107]), /* SCIFA4 */
CLKDEV_DEV_ID("sh-sci.14", &mstp_clks[MSTP1108]), /* SCIFA5 */
CLKDEV_DEV_ID("sh_mobile_sdhi.0", &mstp_clks[MSTP314]),
CLKDEV_DEV_ID("sh_mobile_sdhi.1", &mstp_clks[MSTP312]),
CLKDEV_DEV_ID("sh_mobile_sdhi.2", &mstp_clks[MSTP311]),
CLKDEV_DEV_ID("sh_cmt.0", &mstp_clks[MSTP124]),
CLKDEV_DEV_ID("e61f0000.thermal", &mstp_clks[MSTP522]),
CLKDEV_DEV_ID("qspi.0", &mstp_clks[MSTP917]),
CLKDEV_DEV_ID("rcar_thermal", &mstp_clks[MSTP522]),
CLKDEV_DEV_ID("i2c-rcar_gen2.0", &mstp_clks[MSTP931]),
CLKDEV_DEV_ID("i2c-rcar_gen2.1", &mstp_clks[MSTP930]),
CLKDEV_DEV_ID("i2c-rcar_gen2.2", &mstp_clks[MSTP929]),
CLKDEV_DEV_ID("i2c-rcar_gen2.3", &mstp_clks[MSTP928]),
CLKDEV_DEV_ID("i2c-rcar_gen2.4", &mstp_clks[MSTP927]),
CLKDEV_DEV_ID("i2c-rcar_gen2.5", &mstp_clks[MSTP925]),
CLKDEV_DEV_ID("r8a7791-ether", &mstp_clks[MSTP813]), /* Ether */
CLKDEV_DEV_ID("r8a7791-vin.0", &mstp_clks[MSTP811]),
CLKDEV_DEV_ID("r8a7791-vin.1", &mstp_clks[MSTP810]),
CLKDEV_DEV_ID("r8a7791-vin.2", &mstp_clks[MSTP809]),
CLKDEV_DEV_ID("sata-r8a7791.0", &mstp_clks[MSTP815]),
CLKDEV_DEV_ID("sata-r8a7791.1", &mstp_clks[MSTP814]),
};
#define R8A7791_CLOCK_ROOT(e, m, p0, p1, p30, p31) \
......@@ -232,9 +318,20 @@ void __init r8a7791_clock_init(void)
break;
}
if ((mode & (MD(3) | MD(2) | MD(1))) == MD(2))
SH_CLK_SET_RATIO(&qspi_clk_ratio, 1, 16);
else
SH_CLK_SET_RATIO(&qspi_clk_ratio, 1, 20);
for (k = 0; !ret && (k < ARRAY_SIZE(main_clks)); k++)
ret = clk_register(main_clks[k]);
if (!ret)
ret = sh_clk_div4_register(div4_clks, DIV4_NR, &div4_table);
if (!ret)
ret = sh_clk_div6_register(div6_clks, DIV6_NR);
if (!ret)
ret = sh_clk_mstp_register(mstp_clks, MSTP_NR);
......
......@@ -3,6 +3,31 @@
#include <mach/rcar-gen2.h>
/* DMA slave IDs */
enum {
RCAR_DMA_SLAVE_INVALID,
AUDIO_DMAC_SLAVE_SSI0_TX,
AUDIO_DMAC_SLAVE_SSI0_RX,
AUDIO_DMAC_SLAVE_SSI1_TX,
AUDIO_DMAC_SLAVE_SSI1_RX,
AUDIO_DMAC_SLAVE_SSI2_TX,
AUDIO_DMAC_SLAVE_SSI2_RX,
AUDIO_DMAC_SLAVE_SSI3_TX,
AUDIO_DMAC_SLAVE_SSI3_RX,
AUDIO_DMAC_SLAVE_SSI4_TX,
AUDIO_DMAC_SLAVE_SSI4_RX,
AUDIO_DMAC_SLAVE_SSI5_TX,
AUDIO_DMAC_SLAVE_SSI5_RX,
AUDIO_DMAC_SLAVE_SSI6_TX,
AUDIO_DMAC_SLAVE_SSI6_RX,
AUDIO_DMAC_SLAVE_SSI7_TX,
AUDIO_DMAC_SLAVE_SSI7_RX,
AUDIO_DMAC_SLAVE_SSI8_TX,
AUDIO_DMAC_SLAVE_SSI8_RX,
AUDIO_DMAC_SLAVE_SSI9_TX,
AUDIO_DMAC_SLAVE_SSI9_RX,
};
void r8a7790_add_standard_devices(void);
void r8a7790_add_dt_devices(void);
void r8a7790_clock_init(void);
......
......@@ -24,12 +24,100 @@
#include <linux/platform_data/gpio-rcar.h>
#include <linux/platform_data/irq-renesas-irqc.h>
#include <linux/serial_sci.h>
#include <linux/sh_dma.h>
#include <linux/sh_timer.h>
#include <mach/common.h>
#include <mach/dma-register.h>
#include <mach/irqs.h>
#include <mach/r8a7790.h>
#include <asm/mach/arch.h>
/* Audio-DMAC */
#define AUDIO_DMAC_SLAVE(_id, _addr, t, r) \
{ \
.slave_id = AUDIO_DMAC_SLAVE_## _id ##_TX, \
.addr = _addr + 0x8, \
.chcr = CHCR_TX(XMIT_SZ_32BIT), \
.mid_rid = t, \
}, { \
.slave_id = AUDIO_DMAC_SLAVE_## _id ##_RX, \
.addr = _addr + 0xc, \
.chcr = CHCR_RX(XMIT_SZ_32BIT), \
.mid_rid = r, \
}
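/*
* A worked expansion, for illustration only: AUDIO_DMAC_SLAVE(SSI0,
* 0xec241000, 0x01, 0x02) produces the two entries
*
*	{ .slave_id = AUDIO_DMAC_SLAVE_SSI0_TX, .addr = 0xec241008,
*	  .chcr = CHCR_TX(XMIT_SZ_32BIT), .mid_rid = 0x01 },
*	{ .slave_id = AUDIO_DMAC_SLAVE_SSI0_RX, .addr = 0xec24100c,
*	  .chcr = CHCR_RX(XMIT_SZ_32BIT), .mid_rid = 0x02 },
*
* i.e. the TX slave targets the SSI data register at offset 0x8 of the
* given base address and the RX slave the register at offset 0xc.
*/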
static const struct sh_dmae_slave_config r8a7790_audio_dmac_slaves[] = {
AUDIO_DMAC_SLAVE(SSI0, 0xec241000, 0x01, 0x02),
AUDIO_DMAC_SLAVE(SSI1, 0xec241040, 0x03, 0x04),
AUDIO_DMAC_SLAVE(SSI2, 0xec241080, 0x05, 0x06),
AUDIO_DMAC_SLAVE(SSI3, 0xec2410c0, 0x07, 0x08),
AUDIO_DMAC_SLAVE(SSI4, 0xec241100, 0x09, 0x0a),
AUDIO_DMAC_SLAVE(SSI5, 0xec241140, 0x0b, 0x0c),
AUDIO_DMAC_SLAVE(SSI6, 0xec241180, 0x0d, 0x0e),
AUDIO_DMAC_SLAVE(SSI7, 0xec2411c0, 0x0f, 0x10),
AUDIO_DMAC_SLAVE(SSI8, 0xec241200, 0x11, 0x12),
AUDIO_DMAC_SLAVE(SSI9, 0xec241240, 0x13, 0x14),
};
#define DMAE_CHANNEL(a, b) \
{ \
.offset = (a) - 0x20, \
.dmars = (a) - 0x20 + 0x40, \
.chclr_bit = (b), \
.chclr_offset = 0x80 - 0x20, \
}
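/*
* A worked expansion, for illustration only: DMAE_CHANNEL(0x8000, 0) yields
*
*	{ .offset = 0x7fe0, .dmars = 0x8020, .chclr_bit = 0,
*	  .chclr_offset = 0x60 },
*
* i.e. offsets relative to the 0x20 start of the ioremapped register
* window declared in the resources below.
*/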
static const struct sh_dmae_channel r8a7790_audio_dmac_channels[] = {
DMAE_CHANNEL(0x8000, 0),
DMAE_CHANNEL(0x8080, 1),
DMAE_CHANNEL(0x8100, 2),
DMAE_CHANNEL(0x8180, 3),
DMAE_CHANNEL(0x8200, 4),
DMAE_CHANNEL(0x8280, 5),
DMAE_CHANNEL(0x8300, 6),
DMAE_CHANNEL(0x8380, 7),
DMAE_CHANNEL(0x8400, 8),
DMAE_CHANNEL(0x8480, 9),
DMAE_CHANNEL(0x8500, 10),
DMAE_CHANNEL(0x8580, 11),
DMAE_CHANNEL(0x8600, 12),
};
static struct sh_dmae_pdata r8a7790_audio_dmac_platform_data = {
.slave = r8a7790_audio_dmac_slaves,
.slave_num = ARRAY_SIZE(r8a7790_audio_dmac_slaves),
.channel = r8a7790_audio_dmac_channels,
.channel_num = ARRAY_SIZE(r8a7790_audio_dmac_channels),
.ts_low_shift = TS_LOW_SHIFT,
.ts_low_mask = TS_LOW_BIT << TS_LOW_SHIFT,
.ts_high_shift = TS_HI_SHIFT,
.ts_high_mask = TS_HI_BIT << TS_HI_SHIFT,
.ts_shift = dma_ts_shift,
.ts_shift_num = ARRAY_SIZE(dma_ts_shift),
.dmaor_init = DMAOR_DME,
.chclr_present = 1,
.chclr_bitwise = 1,
};
static struct resource r8a7790_audio_dmac_resources[] = {
/* Channel registers and DMAOR for low */
DEFINE_RES_MEM(0xec700020, 0x8663 - 0x20),
DEFINE_RES_IRQ(gic_spi(346)),
DEFINE_RES_NAMED(gic_spi(320), 13, NULL, IORESOURCE_IRQ),
/* Channel registers and DMAOR for hi */
DEFINE_RES_MEM(0xec720020, 0x8663 - 0x20), /* hi */
DEFINE_RES_IRQ(gic_spi(347)),
DEFINE_RES_NAMED(gic_spi(333), 13, NULL, IORESOURCE_IRQ),
};
#define r8a7790_register_audio_dmac(id) \
platform_device_register_resndata( \
&platform_bus, "sh-dma-engine", id, \
&r8a7790_audio_dmac_resources[id * 3], 3, \
&r8a7790_audio_dmac_platform_data, \
sizeof(r8a7790_audio_dmac_platform_data))
static const struct resource pfc_resources[] __initconst = {
DEFINE_RES_MEM(0xe6060000, 0x250),
};
......@@ -101,6 +189,8 @@ void __init r8a7790_pinmux_init(void)
r8a7790_register_i2c(1);
r8a7790_register_i2c(2);
r8a7790_register_i2c(3);
r8a7790_register_audio_dmac(0);
r8a7790_register_audio_dmac(1);
}
#define __R8A7790_SCIF(scif_type, _scscr, index, baseaddr, irq) \
......
......@@ -5,6 +5,7 @@ menuconfig ARCH_STI
select PINCTRL
select PINCTRL_ST
select MFD_SYSCON
select ARCH_HAS_RESET_CONTROLLER
select HAVE_ARM_SCU if SMP
select ARCH_REQUIRE_GPIOLIB
select ARM_ERRATA_754322
......@@ -24,6 +25,7 @@ if ARCH_STI
config SOC_STIH415
bool "STiH415 STMicroelectronics Consumer Electronics family"
default y
select STIH415_RESET
help
This enables support for STMicroelectronics Digital Consumer
Electronics family StiH415 parts, primarily targeted at set-top-box
......@@ -33,6 +35,7 @@ config SOC_STIH415
config SOC_STIH416
bool "STiH416 STMicroelectronics Consumer Electronics family"
default y
select STIH416_RESET
help
This enables support for STMicroelectronics Digital Consumer
Electronics family StiH416 parts, primarily targeted at set-top-box
......
......@@ -108,7 +108,7 @@ void __init versatile_init_irq(void)
np = of_find_matching_node_by_address(NULL, vic_of_match,
VERSATILE_VIC_BASE);
__vic_init(VA_VIC_BASE, IRQ_VIC_START, ~0, 0, np);
__vic_init(VA_VIC_BASE, 0, IRQ_VIC_START, ~0, 0, np);
writel(~0, VA_SIC_BASE + SIC_IRQ_ENABLE_CLEAR);
......
......@@ -595,14 +595,16 @@ void __init orion_spi_1_init(unsigned long mapbase)
/*****************************************************************************
* Watchdog
****************************************************************************/
static struct resource orion_wdt_resource =
DEFINE_RES_MEM(TIMER_PHYS_BASE, 0x28);
static struct resource orion_wdt_resource[] = {
DEFINE_RES_MEM(TIMER_PHYS_BASE, 0x04),
DEFINE_RES_MEM(RSTOUTn_MASK_PHYS, 0x04),
};
static struct platform_device orion_wdt_device = {
.name = "orion_wdt",
.id = -1,
.num_resources = 1,
.resource = &orion_wdt_resource,
.num_resources = ARRAY_SIZE(orion_wdt_resource),
.resource = orion_wdt_resource,
};
void __init orion_wdt_init(void)
......
......@@ -256,8 +256,6 @@ static int tegra_ahb_probe(struct platform_device *pdev)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -ENODEV;
ahb->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(ahb->regs))
return PTR_ERR(ahb->regs);
......
......@@ -31,7 +31,6 @@
#define DRIVER_NAME "CCI-400"
#define DRIVER_NAME_PMU DRIVER_NAME " PMU"
#define PMU_NAME "CCI_400"
#define CCI_PORT_CTRL 0x0
#define CCI_CTRL_STATUS 0xc
......@@ -88,8 +87,7 @@ static unsigned long cci_ctrl_phys;
#define CCI_REV_R0 0
#define CCI_REV_R1 1
#define CCI_REV_R0_P4 4
#define CCI_REV_R1_P2 6
#define CCI_REV_R1_PX 5
#define CCI_PMU_EVT_SEL 0x000
#define CCI_PMU_CNTR 0x004
......@@ -163,6 +161,15 @@ static struct pmu_port_event_ranges port_event_range[] = {
},
};
/*
* Export different PMU names for the different revisions, since the event
* ids differ between revisions and userspace needs to know which set applies
*/
static char *const pmu_names[] = {
[CCI_REV_R0] = "CCI_400",
[CCI_REV_R1] = "CCI_400_r1",
};
struct cci_pmu_drv_data {
void __iomem *base;
struct arm_pmu *cci_pmu;
......@@ -193,21 +200,16 @@ static int probe_cci_revision(void)
rev = readl_relaxed(cci_ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK;
rev >>= CCI_PID2_REV_SHIFT;
if (rev <= CCI_REV_R0_P4)
if (rev < CCI_REV_R1_PX)
return CCI_REV_R0;
else if (rev <= CCI_REV_R1_P2)
else
return CCI_REV_R1;
return -ENOENT;
}
static struct pmu_port_event_ranges *port_range_by_rev(void)
{
int rev = probe_cci_revision();
if (rev < 0)
return NULL;
return &port_event_range[rev];
}
......@@ -526,7 +528,7 @@ static void pmu_write_counter(struct perf_event *event, u32 value)
static int cci_pmu_init(struct arm_pmu *cci_pmu, struct platform_device *pdev)
{
*cci_pmu = (struct arm_pmu){
.name = PMU_NAME,
.name = pmu_names[probe_cci_revision()],
.max_period = (1LLU << 32) - 1,
.get_hw_events = pmu_get_hw_events,
.get_event_idx = pmu_get_event_idx,
......
......@@ -890,13 +890,12 @@ int __init mvebu_mbus_dt_init(void)
const __be32 *prop;
int ret;
np = of_find_matching_node(NULL, of_mvebu_mbus_ids);
np = of_find_matching_node_and_match(NULL, of_mvebu_mbus_ids, &of_id);
if (!np) {
pr_err("could not find a matching SoC family\n");
return -ENODEV;
}
of_id = of_match_node(of_mvebu_mbus_ids, np);
mbus_state.soc = of_id->data;
prop = of_get_property(np, "controller", NULL);
......
......@@ -342,11 +342,11 @@ config HW_RANDOM_TPM
If unsure, say Y.
config HW_RANDOM_MSM
tristate "Qualcomm MSM Random Number Generator support"
depends on HW_RANDOM && ARCH_MSM
tristate "Qualcomm SoCs Random Number Generator support"
depends on HW_RANDOM && ARCH_QCOM
---help---
This driver provides kernel-side support for the Random Number
Generator hardware found on Qualcomm MSM SoCs.
Generator hardware found on Qualcomm SoCs.
To compile this driver as a module, choose M here. The
module will be called msm-rng.
......
......@@ -111,4 +111,5 @@ source "drivers/clk/qcom/Kconfig"
endmenu
source "drivers/clk/bcm/Kconfig"
source "drivers/clk/mvebu/Kconfig"
......@@ -29,6 +29,7 @@ obj-$(CONFIG_ARCH_VT8500) += clk-vt8500.o
obj-$(CONFIG_COMMON_CLK_WM831X) += clk-wm831x.o
obj-$(CONFIG_COMMON_CLK_XGENE) += clk-xgene.o
obj-$(CONFIG_COMMON_CLK_AT91) += at91/
obj-$(CONFIG_ARCH_BCM_MOBILE) += bcm/
obj-$(CONFIG_ARCH_HI3xxx) += hisilicon/
obj-$(CONFIG_COMMON_CLK_KEYSTONE) += keystone/
ifeq ($(CONFIG_COMMON_CLK), y)
......
config CLK_BCM_KONA
bool "Broadcom Kona CCU clock support"
depends on ARCH_BCM_MOBILE
depends on COMMON_CLK
default y
help
Enable common clock framework support for Broadcom SoCs
using "Kona" style clock control units, including those
in the BCM281xx family.
obj-$(CONFIG_CLK_BCM_KONA) += clk-kona.o
obj-$(CONFIG_CLK_BCM_KONA) += clk-kona-setup.o
obj-$(CONFIG_CLK_BCM_KONA) += clk-bcm281xx.o
/*
* Copyright (C) 2013 Broadcom Corporation
* Copyright 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "clk-kona.h"
#include "dt-bindings/clock/bcm281xx.h"
/* bcm11351 CCU device tree "compatible" strings */
#define BCM11351_DT_ROOT_CCU_COMPAT "brcm,bcm11351-root-ccu"
#define BCM11351_DT_AON_CCU_COMPAT "brcm,bcm11351-aon-ccu"
#define BCM11351_DT_HUB_CCU_COMPAT "brcm,bcm11351-hub-ccu"
#define BCM11351_DT_MASTER_CCU_COMPAT "brcm,bcm11351-master-ccu"
#define BCM11351_DT_SLAVE_CCU_COMPAT "brcm,bcm11351-slave-ccu"
/* Root CCU clocks */
static struct peri_clk_data frac_1m_data = {
.gate = HW_SW_GATE(0x214, 16, 0, 1),
.trig = TRIGGER(0x0e04, 0),
.div = FRAC_DIVIDER(0x0e00, 0, 22, 16),
.clocks = CLOCKS("ref_crystal"),
};
/* AON CCU clocks */
static struct peri_clk_data hub_timer_data = {
.gate = HW_SW_GATE(0x0414, 16, 0, 1),
.clocks = CLOCKS("bbl_32k",
"frac_1m",
"dft_19_5m"),
.sel = SELECTOR(0x0a10, 0, 2),
.trig = TRIGGER(0x0a40, 4),
};
static struct peri_clk_data pmu_bsc_data = {
.gate = HW_SW_GATE(0x0418, 16, 0, 1),
.clocks = CLOCKS("ref_crystal",
"pmu_bsc_var",
"bbl_32k"),
.sel = SELECTOR(0x0a04, 0, 2),
.div = DIVIDER(0x0a04, 3, 4),
.trig = TRIGGER(0x0a40, 0),
};
static struct peri_clk_data pmu_bsc_var_data = {
.clocks = CLOCKS("var_312m",
"ref_312m"),
.sel = SELECTOR(0x0a00, 0, 2),
.div = DIVIDER(0x0a00, 4, 5),
.trig = TRIGGER(0x0a40, 2),
};
/* Hub CCU clocks */
static struct peri_clk_data tmon_1m_data = {
.gate = HW_SW_GATE(0x04a4, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"frac_1m"),
.sel = SELECTOR(0x0e74, 0, 2),
.trig = TRIGGER(0x0e84, 1),
};
/* Master CCU clocks */
static struct peri_clk_data sdio1_data = {
.gate = HW_SW_GATE(0x0358, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_52m",
"ref_52m",
"var_96m",
"ref_96m"),
.sel = SELECTOR(0x0a28, 0, 3),
.div = DIVIDER(0x0a28, 4, 14),
.trig = TRIGGER(0x0afc, 9),
};
static struct peri_clk_data sdio2_data = {
.gate = HW_SW_GATE(0x035c, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_52m",
"ref_52m",
"var_96m",
"ref_96m"),
.sel = SELECTOR(0x0a2c, 0, 3),
.div = DIVIDER(0x0a2c, 4, 14),
.trig = TRIGGER(0x0afc, 10),
};
static struct peri_clk_data sdio3_data = {
.gate = HW_SW_GATE(0x0364, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_52m",
"ref_52m",
"var_96m",
"ref_96m"),
.sel = SELECTOR(0x0a34, 0, 3),
.div = DIVIDER(0x0a34, 4, 14),
.trig = TRIGGER(0x0afc, 12),
};
static struct peri_clk_data sdio4_data = {
.gate = HW_SW_GATE(0x0360, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_52m",
"ref_52m",
"var_96m",
"ref_96m"),
.sel = SELECTOR(0x0a30, 0, 3),
.div = DIVIDER(0x0a30, 4, 14),
.trig = TRIGGER(0x0afc, 11),
};
static struct peri_clk_data usb_ic_data = {
.gate = HW_SW_GATE(0x0354, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_96m",
"ref_96m"),
.div = FIXED_DIVIDER(2),
.sel = SELECTOR(0x0a24, 0, 2),
.trig = TRIGGER(0x0afc, 7),
};
/* also called usbh_48m */
static struct peri_clk_data hsic2_48m_data = {
.gate = HW_SW_GATE(0x0370, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_96m",
"ref_96m"),
.sel = SELECTOR(0x0a38, 0, 2),
.div = FIXED_DIVIDER(2),
.trig = TRIGGER(0x0afc, 5),
};
/* also called usbh_12m */
static struct peri_clk_data hsic2_12m_data = {
.gate = HW_SW_GATE(0x0370, 20, 4, 5),
.div = DIVIDER(0x0a38, 12, 2),
.clocks = CLOCKS("ref_crystal",
"var_96m",
"ref_96m"),
.pre_div = FIXED_DIVIDER(2),
.sel = SELECTOR(0x0a38, 0, 2),
.trig = TRIGGER(0x0afc, 5),
};
/* Slave CCU clocks */
static struct peri_clk_data uartb_data = {
.gate = HW_SW_GATE(0x0400, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_156m",
"ref_156m"),
.sel = SELECTOR(0x0a10, 0, 2),
.div = FRAC_DIVIDER(0x0a10, 4, 12, 8),
.trig = TRIGGER(0x0afc, 2),
};
static struct peri_clk_data uartb2_data = {
.gate = HW_SW_GATE(0x0404, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_156m",
"ref_156m"),
.sel = SELECTOR(0x0a14, 0, 2),
.div = FRAC_DIVIDER(0x0a14, 4, 12, 8),
.trig = TRIGGER(0x0afc, 3),
};
static struct peri_clk_data uartb3_data = {
.gate = HW_SW_GATE(0x0408, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_156m",
"ref_156m"),
.sel = SELECTOR(0x0a18, 0, 2),
.div = FRAC_DIVIDER(0x0a18, 4, 12, 8),
.trig = TRIGGER(0x0afc, 4),
};
static struct peri_clk_data uartb4_data = {
.gate = HW_SW_GATE(0x0408, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_156m",
"ref_156m"),
.sel = SELECTOR(0x0a1c, 0, 2),
.div = FRAC_DIVIDER(0x0a1c, 4, 12, 8),
.trig = TRIGGER(0x0afc, 5),
};
static struct peri_clk_data ssp0_data = {
.gate = HW_SW_GATE(0x0410, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_104m",
"ref_104m",
"var_96m",
"ref_96m"),
.sel = SELECTOR(0x0a20, 0, 3),
.div = DIVIDER(0x0a20, 4, 14),
.trig = TRIGGER(0x0afc, 6),
};
static struct peri_clk_data ssp2_data = {
.gate = HW_SW_GATE(0x0418, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_104m",
"ref_104m",
"var_96m",
"ref_96m"),
.sel = SELECTOR(0x0a28, 0, 3),
.div = DIVIDER(0x0a28, 4, 14),
.trig = TRIGGER(0x0afc, 8),
};
static struct peri_clk_data bsc1_data = {
.gate = HW_SW_GATE(0x0458, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_104m",
"ref_104m",
"var_13m",
"ref_13m"),
.sel = SELECTOR(0x0a64, 0, 3),
.trig = TRIGGER(0x0afc, 23),
};
static struct peri_clk_data bsc2_data = {
.gate = HW_SW_GATE(0x045c, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_104m",
"ref_104m",
"var_13m",
"ref_13m"),
.sel = SELECTOR(0x0a68, 0, 3),
.trig = TRIGGER(0x0afc, 24),
};
static struct peri_clk_data bsc3_data = {
.gate = HW_SW_GATE(0x0484, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_104m",
"ref_104m",
"var_13m",
"ref_13m"),
.sel = SELECTOR(0x0a84, 0, 3),
.trig = TRIGGER(0x0b00, 2),
};
static struct peri_clk_data pwm_data = {
.gate = HW_SW_GATE(0x0468, 18, 2, 3),
.clocks = CLOCKS("ref_crystal",
"var_104m"),
.sel = SELECTOR(0x0a70, 0, 2),
.div = DIVIDER(0x0a70, 4, 3),
.trig = TRIGGER(0x0afc, 15),
};
/*
* CCU setup routines
*
* These are called from kona_dt_ccu_setup() to initialize the array
* of clocks provided by the CCU. Once allocated, the entries in
* the array are initialized by calling kona_clk_setup() with the
* initialization data for each clock. They return 0 if successful
* or an error code otherwise.
*/
static int __init bcm281xx_root_ccu_clks_setup(struct ccu_data *ccu)
{
struct clk **clks;
size_t count = BCM281XX_ROOT_CCU_CLOCK_COUNT;
clks = kzalloc(count * sizeof(*clks), GFP_KERNEL);
if (!clks) {
pr_err("%s: failed to allocate root clocks\n", __func__);
return -ENOMEM;
}
ccu->data.clks = clks;
ccu->data.clk_num = count;
PERI_CLK_SETUP(clks, ccu, BCM281XX_ROOT_CCU_FRAC_1M, frac_1m);
return 0;
}
static int __init bcm281xx_aon_ccu_clks_setup(struct ccu_data *ccu)
{
struct clk **clks;
size_t count = BCM281XX_AON_CCU_CLOCK_COUNT;
clks = kzalloc(count * sizeof(*clks), GFP_KERNEL);
if (!clks) {
pr_err("%s: failed to allocate aon clocks\n", __func__);
return -ENOMEM;
}
ccu->data.clks = clks;
ccu->data.clk_num = count;
PERI_CLK_SETUP(clks, ccu, BCM281XX_AON_CCU_HUB_TIMER, hub_timer);
PERI_CLK_SETUP(clks, ccu, BCM281XX_AON_CCU_PMU_BSC, pmu_bsc);
PERI_CLK_SETUP(clks, ccu, BCM281XX_AON_CCU_PMU_BSC_VAR, pmu_bsc_var);
return 0;
}
static int __init bcm281xx_hub_ccu_clks_setup(struct ccu_data *ccu)
{
struct clk **clks;
size_t count = BCM281XX_HUB_CCU_CLOCK_COUNT;
clks = kzalloc(count * sizeof(*clks), GFP_KERNEL);
if (!clks) {
pr_err("%s: failed to allocate hub clocks\n", __func__);
return -ENOMEM;
}
ccu->data.clks = clks;
ccu->data.clk_num = count;
PERI_CLK_SETUP(clks, ccu, BCM281XX_HUB_CCU_TMON_1M, tmon_1m);
return 0;
}
static int __init bcm281xx_master_ccu_clks_setup(struct ccu_data *ccu)
{
struct clk **clks;
size_t count = BCM281XX_MASTER_CCU_CLOCK_COUNT;
clks = kzalloc(count * sizeof(*clks), GFP_KERNEL);
if (!clks) {
pr_err("%s: failed to allocate master clocks\n", __func__);
return -ENOMEM;
}
ccu->data.clks = clks;
ccu->data.clk_num = count;
PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO1, sdio1);
PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO2, sdio2);
PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO3, sdio3);
PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_SDIO4, sdio4);
PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_USB_IC, usb_ic);
PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_HSIC2_48M, hsic2_48m);
PERI_CLK_SETUP(clks, ccu, BCM281XX_MASTER_CCU_HSIC2_12M, hsic2_12m);
return 0;
}
static int __init bcm281xx_slave_ccu_clks_setup(struct ccu_data *ccu)
{
struct clk **clks;
size_t count = BCM281XX_SLAVE_CCU_CLOCK_COUNT;
clks = kzalloc(count * sizeof(*clks), GFP_KERNEL);
if (!clks) {
pr_err("%s: failed to allocate slave clocks\n", __func__);
return -ENOMEM;
}
ccu->data.clks = clks;
ccu->data.clk_num = count;
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB, uartb);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB2, uartb2);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB3, uartb3);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_UARTB4, uartb4);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_SSP0, ssp0);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_SSP2, ssp2);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_BSC1, bsc1);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_BSC2, bsc2);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_BSC3, bsc3);
PERI_CLK_SETUP(clks, ccu, BCM281XX_SLAVE_CCU_PWM, pwm);
return 0;
}
/* Device tree match table callback functions */
static void __init kona_dt_root_ccu_setup(struct device_node *node)
{
kona_dt_ccu_setup(node, bcm281xx_root_ccu_clks_setup);
}
static void __init kona_dt_aon_ccu_setup(struct device_node *node)
{
kona_dt_ccu_setup(node, bcm281xx_aon_ccu_clks_setup);
}
static void __init kona_dt_hub_ccu_setup(struct device_node *node)
{
kona_dt_ccu_setup(node, bcm281xx_hub_ccu_clks_setup);
}
static void __init kona_dt_master_ccu_setup(struct device_node *node)
{
kona_dt_ccu_setup(node, bcm281xx_master_ccu_clks_setup);
}
static void __init kona_dt_slave_ccu_setup(struct device_node *node)
{
kona_dt_ccu_setup(node, bcm281xx_slave_ccu_clks_setup);
}
CLK_OF_DECLARE(bcm11351_root_ccu, BCM11351_DT_ROOT_CCU_COMPAT,
kona_dt_root_ccu_setup);
CLK_OF_DECLARE(bcm11351_aon_ccu, BCM11351_DT_AON_CCU_COMPAT,
kona_dt_aon_ccu_setup);
CLK_OF_DECLARE(bcm11351_hub_ccu, BCM11351_DT_HUB_CCU_COMPAT,
kona_dt_hub_ccu_setup);
CLK_OF_DECLARE(bcm11351_master_ccu, BCM11351_DT_MASTER_CCU_COMPAT,
kona_dt_master_ccu_setup);
CLK_OF_DECLARE(bcm11351_slave_ccu, BCM11351_DT_SLAVE_CCU_COMPAT,
kona_dt_slave_ccu_setup);
/*
* Copyright (C) 2013 Broadcom Corporation
* Copyright 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/io.h>
#include <linux/of_address.h>
#include "clk-kona.h"
/* These are used when a selector or trigger is found to be unneeded */
#define selector_clear_exists(sel) ((sel)->width = 0)
#define trigger_clear_exists(trig) FLAG_CLEAR(trig, TRIG, EXISTS)
LIST_HEAD(ccu_list); /* The list of set up CCUs */
/* Validity checking */
static bool clk_requires_trigger(struct kona_clk *bcm_clk)
{
struct peri_clk_data *peri = bcm_clk->peri;
struct bcm_clk_sel *sel;
struct bcm_clk_div *div;
if (bcm_clk->type != bcm_clk_peri)
return false;
sel = &peri->sel;
if (sel->parent_count && selector_exists(sel))
return true;
div = &peri->div;
if (!divider_exists(div))
return false;
/* Fixed dividers don't need triggers */
if (!divider_is_fixed(div))
return true;
div = &peri->pre_div;
return divider_exists(div) && !divider_is_fixed(div);
}
static bool peri_clk_data_offsets_valid(struct kona_clk *bcm_clk)
{
struct peri_clk_data *peri;
struct bcm_clk_gate *gate;
struct bcm_clk_div *div;
struct bcm_clk_sel *sel;
struct bcm_clk_trig *trig;
const char *name;
u32 range;
u32 limit;
BUG_ON(bcm_clk->type != bcm_clk_peri);
peri = bcm_clk->peri;
name = bcm_clk->name;
range = bcm_clk->ccu->range;
limit = range - sizeof(u32);
limit = round_down(limit, sizeof(u32));
gate = &peri->gate;
if (gate_exists(gate)) {
if (gate->offset > limit) {
pr_err("%s: bad gate offset for %s (%u > %u)\n",
__func__, name, gate->offset, limit);
return false;
}
}
div = &peri->div;
if (divider_exists(div)) {
if (div->offset > limit) {
pr_err("%s: bad divider offset for %s (%u > %u)\n",
__func__, name, div->offset, limit);
return false;
}
}
div = &peri->pre_div;
if (divider_exists(div)) {
if (div->offset > limit) {
pr_err("%s: bad pre-divider offset for %s "
"(%u > %u)\n",
__func__, name, div->offset, limit);
return false;
}
}
sel = &peri->sel;
if (selector_exists(sel)) {
if (sel->offset > limit) {
pr_err("%s: bad selector offset for %s (%u > %u)\n",
__func__, name, sel->offset, limit);
return false;
}
}
trig = &peri->trig;
if (trigger_exists(trig)) {
if (trig->offset > limit) {
pr_err("%s: bad trigger offset for %s (%u > %u)\n",
__func__, name, trig->offset, limit);
return false;
}
}
trig = &peri->pre_trig;
if (trigger_exists(trig)) {
if (trig->offset > limit) {
pr_err("%s: bad pre-trigger offset for %s (%u > %u)\n",
__func__, name, trig->offset, limit);
return false;
}
}
return true;
}
/* A bit position must be less than the number of bits in a 32-bit register. */
static bool bit_posn_valid(u32 bit_posn, const char *field_name,
const char *clock_name)
{
u32 limit = BITS_PER_BYTE * sizeof(u32) - 1;
if (bit_posn > limit) {
pr_err("%s: bad %s bit for %s (%u > %u)\n", __func__,
field_name, clock_name, bit_posn, limit);
return false;
}
return true;
}
/*
* A bitfield must be at least 1 bit wide. Both the low-order and
* high-order bits must lie within a 32-bit register. We require
* fields to be less than 32 bits wide, mainly because we use
* shifting to produce field masks, and shifting a full word width
* is not well-defined by the C standard.
*/
static bool bitfield_valid(u32 shift, u32 width, const char *field_name,
const char *clock_name)
{
u32 limit = BITS_PER_BYTE * sizeof(u32);
if (!width) {
pr_err("%s: bad %s field width 0 for %s\n", __func__,
field_name, clock_name);
return false;
}
if (shift + width > limit) {
pr_err("%s: bad %s for %s (%u + %u > %u)\n", __func__,
field_name, clock_name, shift, width, limit);
return false;
}
return true;
}
/*
* All gates, if defined, have a status bit, and for hardware-only
* gates, that's it. Gates that can be software controlled also
* have an enable bit. And a gate that can be hardware or software
* controlled will have a hardware/software select bit.
*/
static bool gate_valid(struct bcm_clk_gate *gate, const char *field_name,
const char *clock_name)
{
if (!bit_posn_valid(gate->status_bit, "gate status", clock_name))
return false;
if (gate_is_sw_controllable(gate)) {
if (!bit_posn_valid(gate->en_bit, "gate enable", clock_name))
return false;
if (gate_is_hw_controllable(gate)) {
if (!bit_posn_valid(gate->hw_sw_sel_bit,
"gate hw/sw select",
clock_name))
return false;
}
} else {
BUG_ON(!gate_is_hw_controllable(gate));
}
return true;
}
/*
* A selector bitfield must be valid. Its parent_sel array must
* also be reasonable for the field.
*/
static bool sel_valid(struct bcm_clk_sel *sel, const char *field_name,
const char *clock_name)
{
if (!bitfield_valid(sel->shift, sel->width, field_name, clock_name))
return false;
if (sel->parent_count) {
u32 max_sel;
u32 limit;
/*
* Make sure the selector field can hold all the
* selector values we expect to be able to use. A
* clock only needs to have a selector defined if it
* has more than one parent. And in that case the
* highest selector value will be in the last entry
* in the array.
*/
max_sel = sel->parent_sel[sel->parent_count - 1];
limit = (1 << sel->width) - 1;
if (max_sel > limit) {
pr_err("%s: bad selector for %s "
"(%u needs > %u bits)\n",
__func__, clock_name, max_sel,
sel->width);
return false;
}
} else {
pr_warn("%s: ignoring selector for %s (no parents)\n",
__func__, clock_name);
selector_clear_exists(sel);
kfree(sel->parent_sel);
sel->parent_sel = NULL;
}
return true;
}
/*
* A fixed divider just needs to be non-zero. A variable divider
* has to have a valid divider bitfield, and if it has a fraction,
* the width of the fraction must be no more than the width of
* the divider as a whole.
*/
static bool div_valid(struct bcm_clk_div *div, const char *field_name,
const char *clock_name)
{
if (divider_is_fixed(div)) {
/* Any fixed divider value but 0 is OK */
if (div->fixed == 0) {
pr_err("%s: bad %s fixed value 0 for %s\n", __func__,
field_name, clock_name);
return false;
}
return true;
}
if (!bitfield_valid(div->shift, div->width, field_name, clock_name))
return false;
if (divider_has_fraction(div))
if (div->frac_width > div->width) {
pr_warn("%s: bad %s fraction width for %s (%u > %u)\n",
__func__, field_name, clock_name,
div->frac_width, div->width);
return false;
}
return true;
}
/*
* If a clock has two dividers, the combined number of fractional
* bits must be representable in a 32-bit unsigned value. This
* is because we scale up a dividend using both dividers before
* dividing to improve accuracy, and we need to avoid overflow.
*/
static bool kona_dividers_valid(struct kona_clk *bcm_clk)
{
struct peri_clk_data *peri = bcm_clk->peri;
struct bcm_clk_div *div;
struct bcm_clk_div *pre_div;
u32 limit;
BUG_ON(bcm_clk->type != bcm_clk_peri);
if (!divider_exists(&peri->div) || !divider_exists(&peri->pre_div))
return true;
div = &peri->div;
pre_div = &peri->pre_div;
if (divider_is_fixed(div) || divider_is_fixed(pre_div))
return true;
limit = BITS_PER_BYTE * sizeof(u32);
return div->frac_width + pre_div->frac_width <= limit;
}
/* A trigger just needs to represent a valid bit position */
static bool trig_valid(struct bcm_clk_trig *trig, const char *field_name,
const char *clock_name)
{
return bit_posn_valid(trig->bit, field_name, clock_name);
}
/* Determine whether the set of peripheral clock registers are valid. */
static bool
peri_clk_data_valid(struct kona_clk *bcm_clk)
{
struct peri_clk_data *peri;
struct bcm_clk_gate *gate;
struct bcm_clk_sel *sel;
struct bcm_clk_div *div;
struct bcm_clk_div *pre_div;
struct bcm_clk_trig *trig;
const char *name;
BUG_ON(bcm_clk->type != bcm_clk_peri);
/*
* First validate register offsets. This is the only place
* where we need something from the ccu, so we do these
* together.
*/
if (!peri_clk_data_offsets_valid(bcm_clk))
return false;
peri = bcm_clk->peri;
name = bcm_clk->name;
gate = &peri->gate;
if (gate_exists(gate) && !gate_valid(gate, "gate", name))
return false;
sel = &peri->sel;
if (selector_exists(sel)) {
if (!sel_valid(sel, "selector", name))
return false;
} else if (sel->parent_count > 1) {
pr_err("%s: multiple parents but no selector for %s\n",
__func__, name);
return false;
}
div = &peri->div;
pre_div = &peri->pre_div;
if (divider_exists(div)) {
if (!div_valid(div, "divider", name))
return false;
if (divider_exists(pre_div))
if (!div_valid(pre_div, "pre-divider", name))
return false;
} else if (divider_exists(pre_div)) {
pr_err("%s: pre-divider but no divider for %s\n", __func__,
name);
return false;
}
trig = &peri->trig;
if (trigger_exists(trig)) {
if (!trig_valid(trig, "trigger", name))
return false;
if (trigger_exists(&peri->pre_trig)) {
if (!trig_valid(&peri->pre_trig, "pre-trigger", name)) {
return false;
}
}
if (!clk_requires_trigger(bcm_clk)) {
pr_warn("%s: ignoring trigger for %s (not needed)\n",
__func__, name);
trigger_clear_exists(trig);
}
} else if (trigger_exists(&peri->pre_trig)) {
pr_err("%s: pre-trigger but no trigger for %s\n", __func__,
name);
return false;
} else if (clk_requires_trigger(bcm_clk)) {
pr_err("%s: required trigger missing for %s\n", __func__,
name);
return false;
}
return kona_dividers_valid(bcm_clk);
}
static bool kona_clk_valid(struct kona_clk *bcm_clk)
{
switch (bcm_clk->type) {
case bcm_clk_peri:
if (!peri_clk_data_valid(bcm_clk))
return false;
break;
default:
pr_err("%s: unrecognized clock type (%d)\n", __func__,
(int)bcm_clk->type);
return false;
}
return true;
}
/*
* Scan an array of parent clock names to determine whether there
* are any entries containing BAD_CLK_NAME. Such entries are
* placeholders for non-supported clocks. Keep track of the
* position of each clock name in the original array.
*
* Allocates an array of pointers to hold the names of all
* non-null entries in the original array, and returns a pointer to
* that array in *names. This will be used for registering the
* clock with the common clock code. On successful return,
* *count indicates how many entries are in that names array.
*
* If there is more than one entry in the resulting names array,
* another array is allocated to record the parent selector value
* for each (defined) parent clock. This is the value that
* represents this parent clock in the clock's source selector
* register. The position of the clock in the original parent array
* defines that selector value. The number of entries in this array
* is the same as the number of entries in the parent names array.
*
* The array of selector values is returned. If the clock has no
* parents, no selector is required and a null pointer is returned.
*
* Returns a null pointer if the clock names array supplied was
* null. (This is not an error.)
*
* Returns a pointer-coded error if an error occurs.
*/
static u32 *parent_process(const char *clocks[],
u32 *count, const char ***names)
{
static const char **parent_names;
static u32 *parent_sel;
const char **clock;
u32 parent_count;
u32 bad_count = 0;
u32 orig_count;
u32 i;
u32 j;
*count = 0; /* In case of early return */
*names = NULL;
if (!clocks)
return NULL;
/*
* Count the number of names in the null-terminated array,
* and find out how many of those are actually clock names.
*/
for (clock = clocks; *clock; clock++)
if (*clock == BAD_CLK_NAME)
bad_count++;
orig_count = (u32)(clock - clocks);
parent_count = orig_count - bad_count;
/* If all clocks are unsupported, we treat it as no clock */
if (!parent_count)
return NULL;
/* Avoid exceeding our parent clock limit */
if (parent_count > PARENT_COUNT_MAX) {
pr_err("%s: too many parents (%u > %u)\n", __func__,
parent_count, PARENT_COUNT_MAX);
return ERR_PTR(-EINVAL);
}
/*
* There is one parent name for each defined parent clock.
* We also maintain an array containing the selector value
* for each defined clock. If there's only one clock, the
* selector is not required, but we allocate space for the
* array anyway to keep things simple.
*/
parent_names = kmalloc(parent_count * sizeof(*parent_names), GFP_KERNEL);
if (!parent_names) {
pr_err("%s: error allocating %u parent names\n", __func__,
parent_count);
return ERR_PTR(-ENOMEM);
}
/* There is at least one parent, so allocate a selector array */
parent_sel = kmalloc(parent_count * sizeof(*parent_sel), GFP_KERNEL);
if (!parent_sel) {
pr_err("%s: error allocating %u parent selectors\n", __func__,
parent_count);
kfree(parent_names);
return ERR_PTR(-ENOMEM);
}
/* Now fill in the parent names and selector arrays */
for (i = 0, j = 0; i < orig_count; i++) {
if (clocks[i] != BAD_CLK_NAME) {
parent_names[j] = clocks[i];
parent_sel[j] = i;
j++;
}
}
*names = parent_names;
*count = parent_count;
return parent_sel;
}
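/*
* Example, for illustration only: given the null-terminated input
*
*	static const char *clocks[] =
*		{ "ref_crystal", BAD_CLK_NAME, "var_96m", NULL };
*
* parent_process() sets *count = 2, fills *names with
* { "ref_crystal", "var_96m" } and returns the selector array { 0, 2 }:
* each surviving parent keeps its position in the original array as its
* hardware selector value.
*/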
static int
clk_sel_setup(const char **clocks, struct bcm_clk_sel *sel,
struct clk_init_data *init_data)
{
const char **parent_names = NULL;
u32 parent_count = 0;
u32 *parent_sel;
/*
* If a peripheral clock has multiple parents, the value
* used by the hardware to select that parent is represented
* by the parent clock's position in the "clocks" list. Some
* values don't have defined or supported clocks; these will
* have BAD_CLK_NAME entries in the parents[] array. The
* list is terminated by a NULL entry.
*
* We need to supply (only) the names of defined parent
* clocks when registering a clock though, so we use an
* array of parent selector values to map between the
* indexes the common clock code uses and the selector
* values we need.
*/
parent_sel = parent_process(clocks, &parent_count, &parent_names);
if (IS_ERR(parent_sel)) {
int ret = PTR_ERR(parent_sel);
pr_err("%s: error processing parent clocks for %s (%d)\n",
__func__, init_data->name, ret);
return ret;
}
init_data->parent_names = parent_names;
init_data->num_parents = parent_count;
sel->parent_count = parent_count;
sel->parent_sel = parent_sel;
return 0;
}
static void clk_sel_teardown(struct bcm_clk_sel *sel,
struct clk_init_data *init_data)
{
kfree(sel->parent_sel);
sel->parent_sel = NULL;
sel->parent_count = 0;
init_data->num_parents = 0;
kfree(init_data->parent_names);
init_data->parent_names = NULL;
}
static void peri_clk_teardown(struct peri_clk_data *data,
struct clk_init_data *init_data)
{
clk_sel_teardown(&data->sel, init_data);
init_data->ops = NULL;
}
/*
* Caller is responsible for freeing the parent_names[] and
* parent_sel[] arrays that may be assigned to the peripheral clock's
* "data" structure when the clock has one or more parent clocks
* associated with it.
*/
static int peri_clk_setup(struct ccu_data *ccu, struct peri_clk_data *data,
struct clk_init_data *init_data)
{
init_data->ops = &kona_peri_clk_ops;
init_data->flags = CLK_IGNORE_UNUSED;
return clk_sel_setup(data->clocks, &data->sel, init_data);
}
static void bcm_clk_teardown(struct kona_clk *bcm_clk)
{
switch (bcm_clk->type) {
case bcm_clk_peri:
peri_clk_teardown(bcm_clk->data, &bcm_clk->init_data);
break;
default:
break;
}
bcm_clk->data = NULL;
bcm_clk->type = bcm_clk_none;
}
static void kona_clk_teardown(struct clk *clk)
{
struct clk_hw *hw;
struct kona_clk *bcm_clk;
if (!clk)
return;
hw = __clk_get_hw(clk);
if (!hw) {
pr_err("%s: clk %p has null hw pointer\n", __func__, clk);
return;
}
clk_unregister(clk);
bcm_clk = to_kona_clk(hw);
bcm_clk_teardown(bcm_clk);
}
struct clk *kona_clk_setup(struct ccu_data *ccu, const char *name,
enum bcm_clk_type type, void *data)
{
struct kona_clk *bcm_clk;
struct clk_init_data *init_data;
struct clk *clk = NULL;
bcm_clk = kzalloc(sizeof(*bcm_clk), GFP_KERNEL);
if (!bcm_clk) {
pr_err("%s: failed to allocate bcm_clk for %s\n", __func__,
name);
return NULL;
}
bcm_clk->ccu = ccu;
bcm_clk->name = name;
init_data = &bcm_clk->init_data;
init_data->name = name;
switch (type) {
case bcm_clk_peri:
if (peri_clk_setup(ccu, data, init_data))
goto out_free;
break;
default:
data = NULL;
break;
}
bcm_clk->type = type;
bcm_clk->data = data;
/* Make sure everything makes sense before we set it up */
if (!kona_clk_valid(bcm_clk)) {
pr_err("%s: clock data invalid for %s\n", __func__, name);
goto out_teardown;
}
bcm_clk->hw.init = init_data;
clk = clk_register(NULL, &bcm_clk->hw);
if (IS_ERR(clk)) {
pr_err("%s: error registering clock %s (%ld)\n", __func__,
name, PTR_ERR(clk));
goto out_teardown;
}
BUG_ON(!clk);
return clk;
out_teardown:
bcm_clk_teardown(bcm_clk);
out_free:
kfree(bcm_clk);
return NULL;
}
static void ccu_clks_teardown(struct ccu_data *ccu)
{
u32 i;
for (i = 0; i < ccu->data.clk_num; i++)
kona_clk_teardown(ccu->data.clks[i]);
kfree(ccu->data.clks);
}
static void kona_ccu_teardown(struct ccu_data *ccu)
{
if (!ccu)
return;
if (!ccu->base)
goto done;
of_clk_del_provider(ccu->node); /* safe if never added */
ccu_clks_teardown(ccu);
list_del(&ccu->links);
of_node_put(ccu->node);
iounmap(ccu->base);
done:
kfree(ccu->name);
kfree(ccu);
}
/*
* Set up a CCU. Call the provided ccu_clks_setup callback to
* initialize the array of clocks provided by the CCU.
*/
void __init kona_dt_ccu_setup(struct device_node *node,
int (*ccu_clks_setup)(struct ccu_data *))
{
struct ccu_data *ccu;
struct resource res = { 0 };
resource_size_t range;
int ret;
ccu = kzalloc(sizeof(*ccu), GFP_KERNEL);
if (ccu)
ccu->name = kstrdup(node->name, GFP_KERNEL);
if (!ccu || !ccu->name) {
pr_err("%s: unable to allocate CCU struct for %s\n",
__func__, node->name);
kfree(ccu);
return;
}
ret = of_address_to_resource(node, 0, &res);
if (ret) {
pr_err("%s: no valid CCU registers found for %s\n", __func__,
node->name);
goto out_err;
}
range = resource_size(&res);
if (range > (resource_size_t)U32_MAX) {
pr_err("%s: address range too large for %s\n", __func__,
node->name);
goto out_err;
}
ccu->range = (u32)range;
ccu->base = ioremap(res.start, ccu->range);
if (!ccu->base) {
pr_err("%s: unable to map CCU registers for %s\n", __func__,
node->name);
goto out_err;
}
spin_lock_init(&ccu->lock);
INIT_LIST_HEAD(&ccu->links);
ccu->node = of_node_get(node);
list_add_tail(&ccu->links, &ccu_list);
/* Set up clocks array (in ccu->data) */
if (ccu_clks_setup(ccu))
goto out_err;
ret = of_clk_add_provider(node, of_clk_src_onecell_get, &ccu->data);
if (ret) {
pr_err("%s: error adding ccu %s as provider (%d)\n", __func__,
node->name, ret);
goto out_err;
}
if (!kona_ccu_init(ccu))
pr_err("Broadcom %s initialization had errors\n", node->name);
return;
out_err:
kona_ccu_teardown(ccu);
pr_err("Broadcom %s setup aborted\n", node->name);
}
/*
* Copyright (C) 2013 Broadcom Corporation
* Copyright 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "clk-kona.h"
#include <linux/delay.h>
#define CCU_ACCESS_PASSWORD 0xA5A500
#define CLK_GATE_DELAY_LOOP 2000
/* Bitfield operations */
/* Produces a mask of set bits covering a range of a 32-bit value */
static inline u32 bitfield_mask(u32 shift, u32 width)
{
return ((1 << width) - 1) << shift;
}
/* Extract the value of a bitfield found within a given register value */
static inline u32 bitfield_extract(u32 reg_val, u32 shift, u32 width)
{
return (reg_val & bitfield_mask(shift, width)) >> shift;
}
/* Replace the value of a bitfield found within a given register value */
static inline u32 bitfield_replace(u32 reg_val, u32 shift, u32 width, u32 val)
{
u32 mask = bitfield_mask(shift, width);
return (reg_val & ~mask) | (val << shift);
}
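/*
* Worked example, for illustration only, with shift = 4 and width = 3:
*
*	bitfield_mask(4, 3)                     == 0x00000070
*	bitfield_extract(0x12345678, 4, 3)      == 0x7
*	bitfield_replace(0x12345678, 4, 3, 0x2) == 0x12345628
*/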
/* Divider and scaling helpers */
/*
* Implement DIV_ROUND_CLOSEST() for 64-bit dividend and both values
* unsigned. Note that unlike do_div(), the remainder is discarded
* and the return value is the quotient (not the remainder).
*/
u64 do_div_round_closest(u64 dividend, unsigned long divisor)
{
u64 result;
result = dividend + ((u64)divisor >> 1);
(void)do_div(result, divisor);
return result;
}
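/*
* For example, do_div_round_closest(7, 2) computes (7 + 1) / 2 = 4,
* where plain truncating division would give 3.
*/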
/* Convert a divider into the scaled divisor value it represents. */
static inline u64 scaled_div_value(struct bcm_clk_div *div, u32 reg_div)
{
return (u64)reg_div + ((u64)1 << div->frac_width);
}
/*
* Build a scaled divider value as close as possible to the
* given whole part (div_value) and fractional part (expressed
* in billionths).
*/
u64 scaled_div_build(struct bcm_clk_div *div, u32 div_value, u32 billionths)
{
u64 combined;
BUG_ON(!div_value);
BUG_ON(billionths >= BILLION);
combined = (u64)div_value * BILLION + billionths;
combined <<= div->frac_width;
return do_div_round_closest(combined, BILLION);
}
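/*
* Worked example, for illustration only: with frac_width = 4, a requested
* divisor of 2.5 (div_value = 2, billionths = 500000000) gives
*
*	combined = 2500000000 << 4 = 40000000000
*	scaled   = do_div_round_closest(40000000000, BILLION) = 40
*
* i.e. 2.5 * 2^4; divider() below would store 40 - (1 << 4) = 24 in the
* register field, and scaled_div_value() maps 24 back to 40.
*/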
/* The scaled minimum divisor representable by a divider */
static inline u64
scaled_div_min(struct bcm_clk_div *div)
{
if (divider_is_fixed(div))
return (u64)div->fixed;
return scaled_div_value(div, 0);
}
/* The scaled maximum divisor representable by a divider */
u64 scaled_div_max(struct bcm_clk_div *div)
{
u32 reg_div;
if (divider_is_fixed(div))
return (u64)div->fixed;
reg_div = ((u32)1 << div->width) - 1;
return scaled_div_value(div, reg_div);
}
/*
* Convert a scaled divisor into its divider representation as
* stored in a divider register field.
*/
static inline u32
divider(struct bcm_clk_div *div, u64 scaled_div)
{
BUG_ON(scaled_div < scaled_div_min(div));
BUG_ON(scaled_div > scaled_div_max(div));
return (u32)(scaled_div - ((u64)1 << div->frac_width));
}
/* Return a rate scaled for use when dividing by a scaled divisor. */
static inline u64
scale_rate(struct bcm_clk_div *div, u32 rate)
{
if (divider_is_fixed(div))
return (u64)rate;
return (u64)rate << div->frac_width;
}
/* CCU access */
/* Read a 32-bit register value from a CCU's address space. */
static inline u32 __ccu_read(struct ccu_data *ccu, u32 reg_offset)
{
return readl(ccu->base + reg_offset);
}
/* Write a 32-bit register value into a CCU's address space. */
static inline void
__ccu_write(struct ccu_data *ccu, u32 reg_offset, u32 reg_val)
{
writel(reg_val, ccu->base + reg_offset);
}
static inline unsigned long ccu_lock(struct ccu_data *ccu)
{
unsigned long flags;
spin_lock_irqsave(&ccu->lock, flags);
return flags;
}
static inline void ccu_unlock(struct ccu_data *ccu, unsigned long flags)
{
spin_unlock_irqrestore(&ccu->lock, flags);
}
/*
* Enable/disable write access to CCU protected registers. The
* WR_ACCESS register for all CCUs is at offset 0.
*/
static inline void __ccu_write_enable(struct ccu_data *ccu)
{
if (ccu->write_enabled) {
pr_err("%s: access already enabled for %s\n", __func__,
ccu->name);
return;
}
ccu->write_enabled = true;
__ccu_write(ccu, 0, CCU_ACCESS_PASSWORD | 1);
}
static inline void __ccu_write_disable(struct ccu_data *ccu)
{
if (!ccu->write_enabled) {
pr_err("%s: access wasn't enabled for %s\n", __func__,
ccu->name);
return;
}
__ccu_write(ccu, 0, CCU_ACCESS_PASSWORD);
ccu->write_enabled = false;
}
/*
* Poll a register in a CCU's address space, returning when the
* specified bit in that register's value is set (or clear). Delay
* a microsecond after each read of the register. Returns true if
* successful, or false if we gave up trying.
*
* Caller must ensure the CCU lock is held.
*/
static inline bool
__ccu_wait_bit(struct ccu_data *ccu, u32 reg_offset, u32 bit, bool want)
{
unsigned int tries;
u32 bit_mask = 1 << bit;
for (tries = 0; tries < CLK_GATE_DELAY_LOOP; tries++) {
u32 val;
bool bit_val;
val = __ccu_read(ccu, reg_offset);
bit_val = (val & bit_mask) != 0;
if (bit_val == want)
return true;
udelay(1);
}
return false;
}
/* Gate operations */
/* Determine whether a clock is gated. CCU lock must be held. */
static bool
__is_clk_gate_enabled(struct ccu_data *ccu, struct bcm_clk_gate *gate)
{
u32 bit_mask;
u32 reg_val;
/* If there is no gate we can assume it's enabled. */
if (!gate_exists(gate))
return true;
bit_mask = 1 << gate->status_bit;
reg_val = __ccu_read(ccu, gate->offset);
return (reg_val & bit_mask) != 0;
}
/* Determine whether a clock is gated. */
static bool
is_clk_gate_enabled(struct ccu_data *ccu, struct bcm_clk_gate *gate)
{
long flags;
bool ret;
/* Avoid taking the lock if we can */
if (!gate_exists(gate))
return true;
flags = ccu_lock(ccu);
ret = __is_clk_gate_enabled(ccu, gate);
ccu_unlock(ccu, flags);
return ret;
}
/*
* Commit our desired gate state to the hardware.
* Returns true if successful, false otherwise.
*/
static bool
__gate_commit(struct ccu_data *ccu, struct bcm_clk_gate *gate)
{
u32 reg_val;
u32 mask;
bool enabled = false;
BUG_ON(!gate_exists(gate));
if (!gate_is_sw_controllable(gate))
return true; /* Nothing we can change */
reg_val = __ccu_read(ccu, gate->offset);
/* For a hardware/software gate, set which is in control */
if (gate_is_hw_controllable(gate)) {
mask = (u32)1 << gate->hw_sw_sel_bit;
if (gate_is_sw_managed(gate))
reg_val |= mask;
else
reg_val &= ~mask;
}
/*
* If software is in control, enable or disable the gate.
* If hardware is, clear the enabled bit for good measure.
* If a software controlled gate can't be disabled, we're
* required to write a 0 into the enable bit (but the gate
* will be enabled).
*/
mask = (u32)1 << gate->en_bit;
if (gate_is_sw_managed(gate) && (enabled = gate_is_enabled(gate)) &&
!gate_is_no_disable(gate))
reg_val |= mask;
else
reg_val &= ~mask;
__ccu_write(ccu, gate->offset, reg_val);
/* For a hardware controlled gate, we're done */
if (!gate_is_sw_managed(gate))
return true;
/* Otherwise wait for the gate to be in desired state */
return __ccu_wait_bit(ccu, gate->offset, gate->status_bit, enabled);
}
/*
* Initialize a gate. Our desired state (hardware/software select,
* and if software, its enable state) is committed to hardware
* without the usual checks to see if it's already set up that way.
* Returns true if successful, false otherwise.
*/
static bool gate_init(struct ccu_data *ccu, struct bcm_clk_gate *gate)
{
if (!gate_exists(gate))
return true;
return __gate_commit(ccu, gate);
}
/*
* Set a gate to enabled or disabled state. Does nothing if the
* gate is not currently under software control, or if it is already
* in the requested state. Returns true if successful, false
* otherwise. CCU lock must be held.
*/
static bool
__clk_gate(struct ccu_data *ccu, struct bcm_clk_gate *gate, bool enable)
{
bool ret;
if (!gate_exists(gate) || !gate_is_sw_managed(gate))
return true; /* Nothing to do */
if (!enable && gate_is_no_disable(gate)) {
pr_warn("%s: invalid gate disable request (ignoring)\n",
__func__);
return true;
}
if (enable == gate_is_enabled(gate))
return true; /* No change */
gate_flip_enabled(gate);
ret = __gate_commit(ccu, gate);
if (!ret)
gate_flip_enabled(gate); /* Revert the change */
return ret;
}
/* Enable or disable a gate. Returns 0 if successful, -EIO otherwise */
static int clk_gate(struct ccu_data *ccu, const char *name,
struct bcm_clk_gate *gate, bool enable)
{
unsigned long flags;
bool success;
/*
* Avoid taking the lock if we can. We quietly ignore
* requests to change state that don't make sense.
*/
if (!gate_exists(gate) || !gate_is_sw_managed(gate))
return 0;
if (!enable && gate_is_no_disable(gate))
return 0;
flags = ccu_lock(ccu);
__ccu_write_enable(ccu);
success = __clk_gate(ccu, gate, enable);
__ccu_write_disable(ccu);
ccu_unlock(ccu, flags);
if (success)
return 0;
pr_err("%s: failed to %s gate for %s\n", __func__,
enable ? "enable" : "disable", name);
return -EIO;
}
/* Trigger operations */
/*
* Caller must ensure CCU lock is held and access is enabled.
* Returns true if successful, false otherwise.
*/
static bool __clk_trigger(struct ccu_data *ccu, struct bcm_clk_trig *trig)
{
/* Trigger the clock and wait for it to finish */
__ccu_write(ccu, trig->offset, 1 << trig->bit);
return __ccu_wait_bit(ccu, trig->offset, trig->bit, false);
}
/* Divider operations */
/* Read a divider value and return the scaled divisor it represents. */
static u64 divider_read_scaled(struct ccu_data *ccu, struct bcm_clk_div *div)
{
unsigned long flags;
u32 reg_val;
u32 reg_div;
if (divider_is_fixed(div))
return (u64)div->fixed;
flags = ccu_lock(ccu);
reg_val = __ccu_read(ccu, div->offset);
ccu_unlock(ccu, flags);
/* Extract the full divider field from the register value */
reg_div = bitfield_extract(reg_val, div->shift, div->width);
/* Return the scaled divisor value it represents */
return scaled_div_value(div, reg_div);
}
/*
* Convert a divider's scaled divisor value into its recorded form
* and commit it into the hardware divider register.
*
* Returns 0 on success. Returns -EINVAL for invalid arguments.
* Returns -ENXIO if gating failed, and -EIO if a trigger failed.
*/
static int __div_commit(struct ccu_data *ccu, struct bcm_clk_gate *gate,
struct bcm_clk_div *div, struct bcm_clk_trig *trig)
{
bool enabled;
u32 reg_div;
u32 reg_val;
int ret = 0;
BUG_ON(divider_is_fixed(div));
/*
* If we're just initializing the divider, and no initial
* state was defined in the device tree, we just find out
* what its current value is rather than updating it.
*/
if (div->scaled_div == BAD_SCALED_DIV_VALUE) {
reg_val = __ccu_read(ccu, div->offset);
reg_div = bitfield_extract(reg_val, div->shift, div->width);
div->scaled_div = scaled_div_value(div, reg_div);
return 0;
}
/* Convert the scaled divisor to the value we need to record */
reg_div = divider(div, div->scaled_div);
/* Clock needs to be enabled before changing the rate */
enabled = __is_clk_gate_enabled(ccu, gate);
if (!enabled && !__clk_gate(ccu, gate, true)) {
ret = -ENXIO;
goto out;
}
/* Replace the divider value and record the result */
reg_val = __ccu_read(ccu, div->offset);
reg_val = bitfield_replace(reg_val, div->shift, div->width, reg_div);
__ccu_write(ccu, div->offset, reg_val);
/* If the trigger fails we still want to disable the gate */
if (!__clk_trigger(ccu, trig))
ret = -EIO;
/* Disable the clock again if it was disabled to begin with */
if (!enabled && !__clk_gate(ccu, gate, false))
ret = ret ? ret : -ENXIO; /* return first error */
out:
return ret;
}
/*
* Initialize a divider by committing our desired state to hardware
* without the usual checks to see if it's already set up that way.
* Returns true if successful, false otherwise.
*/
static bool div_init(struct ccu_data *ccu, struct bcm_clk_gate *gate,
struct bcm_clk_div *div, struct bcm_clk_trig *trig)
{
if (!divider_exists(div) || divider_is_fixed(div))
return true;
return !__div_commit(ccu, gate, div, trig);
}
static int divider_write(struct ccu_data *ccu, struct bcm_clk_gate *gate,
struct bcm_clk_div *div, struct bcm_clk_trig *trig,
u64 scaled_div)
{
unsigned long flags;
u64 previous;
int ret;
BUG_ON(divider_is_fixed(div));
previous = div->scaled_div;
if (previous == scaled_div)
return 0; /* No change */
div->scaled_div = scaled_div;
flags = ccu_lock(ccu);
__ccu_write_enable(ccu);
ret = __div_commit(ccu, gate, div, trig);
__ccu_write_disable(ccu);
ccu_unlock(ccu, flags);
if (ret)
div->scaled_div = previous; /* Revert the change */
return ret;
}
/* Common clock rate helpers */
/*
* Implement the common clock framework recalc_rate method, taking
* into account a divider and an optional pre-divider. The
* pre-divider register pointer may be NULL.
*/
static unsigned long clk_recalc_rate(struct ccu_data *ccu,
struct bcm_clk_div *div, struct bcm_clk_div *pre_div,
unsigned long parent_rate)
{
u64 scaled_parent_rate;
u64 scaled_div;
u64 result;
if (!divider_exists(div))
return parent_rate;
if (parent_rate > (unsigned long)LONG_MAX)
return 0; /* actually this would be a caller bug */
/*
* If there is a pre-divider, divide the scaled parent rate
* by the pre-divider value first. In this case--to improve
* accuracy--scale the parent rate by *both* the pre-divider
* value and the divider before actually computing the
* result of the pre-divider.
*
* If there's only one divider, just scale the parent rate.
*/
if (pre_div && divider_exists(pre_div)) {
u64 scaled_rate;
scaled_rate = scale_rate(pre_div, parent_rate);
scaled_rate = scale_rate(div, scaled_rate);
scaled_div = divider_read_scaled(ccu, pre_div);
scaled_parent_rate = do_div_round_closest(scaled_rate,
scaled_div);
} else {
scaled_parent_rate = scale_rate(div, parent_rate);
}
/*
* Get the scaled divisor value, and divide the scaled
* parent rate by that to determine this clock's resulting
* rate.
*/
scaled_div = divider_read_scaled(ccu, div);
result = do_div_round_closest(scaled_parent_rate, scaled_div);
return (unsigned long)result;
}
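/*
 * Worked example of the scaling above (all values are hypothetical,
 * and scale_rate() is taken to left-shift a rate by the divider's
 * fraction width): with a 312000000 Hz parent, a pre-divider whose
 * scaled divisor reads back as 2 (frac_width 0, i.e. divide by 2),
 * and a divider whose scaled divisor reads back as 26 with
 * frac_width 3 (i.e. a divisor of 26 / 8 = 3.25), the parent rate is
 * scaled to 312000000 << (0 + 3) = 2496000000, divided by 2 to give
 * a scaled parent rate of 1248000000, and finally divided by 26 to
 * yield 48000000 -- exactly 312 MHz / 2 / 3.25, with no precision
 * lost to the fractional divider.
 */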
/*
* Compute the output rate produced when a given parent rate is fed
* into two dividers. The pre-divider can be NULL, and even if it's
* non-null it may be nonexistent. It's also OK for the divider to
* be nonexistent, and in that case the pre-divider is also ignored.
*
* If scaled_div is non-null, it is used to return the scaled divisor
* value used by the (downstream) divider to produce that rate.
*/
static long round_rate(struct ccu_data *ccu, struct bcm_clk_div *div,
struct bcm_clk_div *pre_div,
unsigned long rate, unsigned long parent_rate,
u64 *scaled_div)
{
u64 scaled_parent_rate;
u64 min_scaled_div;
u64 max_scaled_div;
u64 best_scaled_div;
u64 result;
BUG_ON(!divider_exists(div));
BUG_ON(!rate);
BUG_ON(parent_rate > (u64)LONG_MAX);
/*
* If there is a pre-divider, divide the scaled parent rate
* by the pre-divider value first. In this case--to improve
* accuracy--scale the parent rate by *both* the pre-divider
* value and the divider before actually computing the
* result of the pre-divider.
*
* If there's only one divider, just scale the parent rate.
*
* For simplicity we treat the pre-divider as fixed (for now).
*/
if (divider_exists(pre_div)) {
u64 scaled_rate;
u64 scaled_pre_div;
scaled_rate = scale_rate(pre_div, parent_rate);
scaled_rate = scale_rate(div, scaled_rate);
scaled_pre_div = divider_read_scaled(ccu, pre_div);
scaled_parent_rate = do_div_round_closest(scaled_rate,
scaled_pre_div);
} else {
scaled_parent_rate = scale_rate(div, parent_rate);
}
/*
* Compute the best possible divider and ensure it is in
* range. A fixed divider can't be changed, so just report
* the best we can do.
*/
if (!divider_is_fixed(div)) {
best_scaled_div = do_div_round_closest(scaled_parent_rate,
rate);
min_scaled_div = scaled_div_min(div);
max_scaled_div = scaled_div_max(div);
if (best_scaled_div > max_scaled_div)
best_scaled_div = max_scaled_div;
else if (best_scaled_div < min_scaled_div)
best_scaled_div = min_scaled_div;
} else {
best_scaled_div = divider_read_scaled(ccu, div);
}
/* OK, figure out the resulting rate */
result = do_div_round_closest(scaled_parent_rate, best_scaled_div);
if (scaled_div)
*scaled_div = best_scaled_div;
return (long)result;
}
/* Common clock parent helpers */
/*
* For a given parent selector (register field) value, find the
* index into a selector's parent_sel array that contains it.
* Returns the index, or BAD_CLK_INDEX if it's not found.
*/
static u8 parent_index(struct bcm_clk_sel *sel, u8 parent_sel)
{
u8 i;
BUG_ON(sel->parent_count > (u32)U8_MAX);
for (i = 0; i < sel->parent_count; i++)
if (sel->parent_sel[i] == parent_sel)
return i;
return BAD_CLK_INDEX;
}
/*
* Fetch the current value of the selector, and translate that into
* its corresponding index in the parent array we registered with
* the clock framework.
*
* Returns parent array index that corresponds with the value found,
* or BAD_CLK_INDEX if the found value is out of range.
*/
static u8 selector_read_index(struct ccu_data *ccu, struct bcm_clk_sel *sel)
{
unsigned long flags;
u32 reg_val;
u32 parent_sel;
u8 index;
/* If there's no selector, there's only one parent */
if (!selector_exists(sel))
return 0;
/* Get the value in the selector register */
flags = ccu_lock(ccu);
reg_val = __ccu_read(ccu, sel->offset);
ccu_unlock(ccu, flags);
parent_sel = bitfield_extract(reg_val, sel->shift, sel->width);
/* Look up that selector's parent array index and return it */
index = parent_index(sel, parent_sel);
if (index == BAD_CLK_INDEX)
pr_err("%s: out-of-range parent selector %u (%s 0x%04x)\n",
__func__, parent_sel, ccu->name, sel->offset);
return index;
}
/*
* Commit our desired selector value to the hardware.
*
* Returns 0 on success. Returns -EINVAL for invalid arguments.
* Returns -ENXIO if gating failed, and -EIO if a trigger failed.
*/
static int
__sel_commit(struct ccu_data *ccu, struct bcm_clk_gate *gate,
struct bcm_clk_sel *sel, struct bcm_clk_trig *trig)
{
u32 parent_sel;
u32 reg_val;
bool enabled;
int ret = 0;
BUG_ON(!selector_exists(sel));
/*
* If we're just initializing the selector, and no initial
* state was defined in the device tree, we just find out
* what its current value is rather than updating it.
*/
if (sel->clk_index == BAD_CLK_INDEX) {
u8 index;
reg_val = __ccu_read(ccu, sel->offset);
parent_sel = bitfield_extract(reg_val, sel->shift, sel->width);
index = parent_index(sel, parent_sel);
if (index == BAD_CLK_INDEX)
return -EINVAL;
sel->clk_index = index;
return 0;
}
BUG_ON((u32)sel->clk_index >= sel->parent_count);
parent_sel = sel->parent_sel[sel->clk_index];
/* Clock needs to be enabled before changing the parent */
enabled = __is_clk_gate_enabled(ccu, gate);
if (!enabled && !__clk_gate(ccu, gate, true))
return -ENXIO;
/* Replace the selector value and record the result */
reg_val = __ccu_read(ccu, sel->offset);
reg_val = bitfield_replace(reg_val, sel->shift, sel->width, parent_sel);
__ccu_write(ccu, sel->offset, reg_val);
/* If the trigger fails we still want to disable the gate */
if (!__clk_trigger(ccu, trig))
ret = -EIO;
/* Disable the clock again if it was disabled to begin with */
if (!enabled && !__clk_gate(ccu, gate, false))
ret = ret ? ret : -ENXIO; /* return first error */
return ret;
}
/*
* Initialize a selector by committing our desired state to hardware
* without the usual checks to see if it's already set up that way.
* Returns true if successful, false otherwise.
*/
static bool sel_init(struct ccu_data *ccu, struct bcm_clk_gate *gate,
struct bcm_clk_sel *sel, struct bcm_clk_trig *trig)
{
if (!selector_exists(sel))
return true;
return !__sel_commit(ccu, gate, sel, trig);
}
/*
* Write a new value into a selector register to switch to a
* different parent clock. Returns 0 on success, or an error code
* (from __sel_commit()) otherwise.
*/
static int selector_write(struct ccu_data *ccu, struct bcm_clk_gate *gate,
struct bcm_clk_sel *sel, struct bcm_clk_trig *trig,
u8 index)
{
unsigned long flags;
u8 previous;
int ret;
previous = sel->clk_index;
if (previous == index)
return 0; /* No change */
sel->clk_index = index;
flags = ccu_lock(ccu);
__ccu_write_enable(ccu);
ret = __sel_commit(ccu, gate, sel, trig);
__ccu_write_disable(ccu);
ccu_unlock(ccu, flags);
if (ret)
sel->clk_index = previous; /* Revert the change */
return ret;
}
/* Clock operations */
static int kona_peri_clk_enable(struct clk_hw *hw)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct bcm_clk_gate *gate = &bcm_clk->peri->gate;
return clk_gate(bcm_clk->ccu, bcm_clk->name, gate, true);
}
static void kona_peri_clk_disable(struct clk_hw *hw)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct bcm_clk_gate *gate = &bcm_clk->peri->gate;
(void)clk_gate(bcm_clk->ccu, bcm_clk->name, gate, false);
}
static int kona_peri_clk_is_enabled(struct clk_hw *hw)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct bcm_clk_gate *gate = &bcm_clk->peri->gate;
return is_clk_gate_enabled(bcm_clk->ccu, gate) ? 1 : 0;
}
static unsigned long kona_peri_clk_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct peri_clk_data *data = bcm_clk->peri;
return clk_recalc_rate(bcm_clk->ccu, &data->div, &data->pre_div,
parent_rate);
}
static long kona_peri_clk_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *parent_rate)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct bcm_clk_div *div = &bcm_clk->peri->div;
if (!divider_exists(div))
return __clk_get_rate(hw->clk);
/* Quietly avoid a zero rate */
return round_rate(bcm_clk->ccu, div, &bcm_clk->peri->pre_div,
rate ? rate : 1, *parent_rate, NULL);
}
static int kona_peri_clk_set_parent(struct clk_hw *hw, u8 index)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct peri_clk_data *data = bcm_clk->peri;
struct bcm_clk_sel *sel = &data->sel;
struct bcm_clk_trig *trig;
int ret;
BUG_ON(index >= sel->parent_count);
/* If there's only one parent we don't require a selector */
if (!selector_exists(sel))
return 0;
/*
* The regular trigger is used by default, but if there's a
* pre-trigger we want to use that instead.
*/
trig = trigger_exists(&data->pre_trig) ? &data->pre_trig
: &data->trig;
ret = selector_write(bcm_clk->ccu, &data->gate, sel, trig, index);
if (ret == -ENXIO) {
pr_err("%s: gating failure for %s\n", __func__, bcm_clk->name);
ret = -EIO; /* Don't proliferate weird errors */
} else if (ret == -EIO) {
pr_err("%s: %strigger failed for %s\n", __func__,
trig == &data->pre_trig ? "pre-" : "",
bcm_clk->name);
}
return ret;
}
static u8 kona_peri_clk_get_parent(struct clk_hw *hw)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct peri_clk_data *data = bcm_clk->peri;
u8 index;
index = selector_read_index(bcm_clk->ccu, &data->sel);
/* Not all callers would handle an out-of-range value gracefully */
return index == BAD_CLK_INDEX ? 0 : index;
}
static int kona_peri_clk_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
struct kona_clk *bcm_clk = to_kona_clk(hw);
struct peri_clk_data *data = bcm_clk->peri;
struct bcm_clk_div *div = &data->div;
u64 scaled_div = 0;
int ret;
if (parent_rate > (unsigned long)LONG_MAX)
return -EINVAL;
if (rate == __clk_get_rate(hw->clk))
return 0;
if (!divider_exists(div))
return rate == parent_rate ? 0 : -EINVAL;
/*
* A fixed divider can't be changed. (Nor can a fixed
* pre-divider be, but for now we never actually try to
* change that.) Tolerate a request for a no-op change.
*/
if (divider_is_fixed(&data->div))
return rate == parent_rate ? 0 : -EINVAL;
/*
* Get the scaled divisor value needed to achieve a clock
* rate as close as possible to what was requested, given
* the parent clock rate supplied.
*/
(void)round_rate(bcm_clk->ccu, div, &data->pre_div,
rate ? rate : 1, parent_rate, &scaled_div);
/*
* We aren't updating any pre-divider at this point, so
* we'll use the regular trigger.
*/
ret = divider_write(bcm_clk->ccu, &data->gate, &data->div,
&data->trig, scaled_div);
if (ret == -ENXIO) {
pr_err("%s: gating failure for %s\n", __func__, bcm_clk->name);
ret = -EIO; /* Don't proliferate weird errors */
} else if (ret == -EIO) {
pr_err("%s: trigger failed for %s\n", __func__, bcm_clk->name);
}
return ret;
}
struct clk_ops kona_peri_clk_ops = {
.enable = kona_peri_clk_enable,
.disable = kona_peri_clk_disable,
.is_enabled = kona_peri_clk_is_enabled,
.recalc_rate = kona_peri_clk_recalc_rate,
.round_rate = kona_peri_clk_round_rate,
.set_parent = kona_peri_clk_set_parent,
.get_parent = kona_peri_clk_get_parent,
.set_rate = kona_peri_clk_set_rate,
};
/* Put a peripheral clock into its initial state */
static bool __peri_clk_init(struct kona_clk *bcm_clk)
{
struct ccu_data *ccu = bcm_clk->ccu;
struct peri_clk_data *peri = bcm_clk->peri;
const char *name = bcm_clk->name;
struct bcm_clk_trig *trig;
BUG_ON(bcm_clk->type != bcm_clk_peri);
if (!gate_init(ccu, &peri->gate)) {
pr_err("%s: error initializing gate for %s\n", __func__, name);
return false;
}
if (!div_init(ccu, &peri->gate, &peri->div, &peri->trig)) {
pr_err("%s: error initializing divider for %s\n", __func__,
name);
return false;
}
/*
* For the pre-divider and selector, the pre-trigger is used
* if it's present, otherwise we just use the regular trigger.
*/
trig = trigger_exists(&peri->pre_trig) ? &peri->pre_trig
: &peri->trig;
if (!div_init(ccu, &peri->gate, &peri->pre_div, trig)) {
pr_err("%s: error initializing pre-divider for %s\n", __func__,
name);
return false;
}
if (!sel_init(ccu, &peri->gate, &peri->sel, trig)) {
pr_err("%s: error initializing selector for %s\n", __func__,
name);
return false;
}
return true;
}
static bool __kona_clk_init(struct kona_clk *bcm_clk)
{
switch (bcm_clk->type) {
case bcm_clk_peri:
return __peri_clk_init(bcm_clk);
default:
BUG();
}
return false;
}
/* Set a CCU and all its clocks into their desired initial state */
bool __init kona_ccu_init(struct ccu_data *ccu)
{
unsigned long flags;
unsigned int which;
struct clk **clks = ccu->data.clks;
bool success = true;
flags = ccu_lock(ccu);
__ccu_write_enable(ccu);
for (which = 0; which < ccu->data.clk_num; which++) {
struct kona_clk *bcm_clk;
if (!clks[which])
continue;
bcm_clk = to_kona_clk(__clk_get_hw(clks[which]));
success &= __kona_clk_init(bcm_clk);
}
__ccu_write_disable(ccu);
ccu_unlock(ccu, flags);
return success;
}
/*
* Copyright (C) 2013 Broadcom Corporation
* Copyright 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _CLK_KONA_H
#define _CLK_KONA_H
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/of.h>
#include <linux/clk-provider.h>
#define BILLION 1000000000
/* The common clock framework uses u8 to represent a parent index */
#define PARENT_COUNT_MAX ((u32)U8_MAX)
#define BAD_CLK_INDEX U8_MAX /* Can't ever be valid */
#define BAD_CLK_NAME ((const char *)-1)
#define BAD_SCALED_DIV_VALUE U64_MAX
/*
* Utility macros for object flag management. If possible, flags
* should be defined such that 0 is the desired default value.
*/
#define FLAG(type, flag) BCM_CLK_ ## type ## _FLAGS_ ## flag
#define FLAG_SET(obj, type, flag) ((obj)->flags |= FLAG(type, flag))
#define FLAG_CLEAR(obj, type, flag) ((obj)->flags &= ~(FLAG(type, flag)))
#define FLAG_FLIP(obj, type, flag) ((obj)->flags ^= FLAG(type, flag))
#define FLAG_TEST(obj, type, flag) (!!((obj)->flags & FLAG(type, flag)))
/* Clock field state tests */
#define gate_exists(gate) FLAG_TEST(gate, GATE, EXISTS)
#define gate_is_enabled(gate) FLAG_TEST(gate, GATE, ENABLED)
#define gate_is_hw_controllable(gate) FLAG_TEST(gate, GATE, HW)
#define gate_is_sw_controllable(gate) FLAG_TEST(gate, GATE, SW)
#define gate_is_sw_managed(gate) FLAG_TEST(gate, GATE, SW_MANAGED)
#define gate_is_no_disable(gate) FLAG_TEST(gate, GATE, NO_DISABLE)
#define gate_flip_enabled(gate) FLAG_FLIP(gate, GATE, ENABLED)
#define divider_exists(div) FLAG_TEST(div, DIV, EXISTS)
#define divider_is_fixed(div) FLAG_TEST(div, DIV, FIXED)
#define divider_has_fraction(div) (!divider_is_fixed(div) && \
(div)->frac_width > 0)
#define selector_exists(sel) ((sel)->width != 0)
#define trigger_exists(trig) FLAG_TEST(trig, TRIG, EXISTS)
/* Clock type, used to tell common block what it's part of */
enum bcm_clk_type {
bcm_clk_none, /* undefined clock type */
bcm_clk_bus,
bcm_clk_core,
bcm_clk_peri
};
/*
* Each CCU defines a mapped area of memory containing registers
* used to manage clocks implemented by the CCU. Access to memory
* within the CCU's space is serialized by a spinlock. Before any
* (other) address can be written, a special access "password" value
* must be written to its WR_ACCESS register (located at the base
* address of the range). We keep track of the name of each CCU as
* it is set up, and maintain them in a list.
*/
struct ccu_data {
void __iomem *base; /* base of mapped address space */
spinlock_t lock; /* serialization lock */
bool write_enabled; /* write access is currently enabled */
struct list_head links; /* for ccu_list */
struct device_node *node;
struct clk_onecell_data data;
const char *name;
u32 range; /* byte range of address space */
};
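/*
 * A typical register update sequence (a sketch of the pattern used
 * by clk_gate(), divider_write() and selector_write()) looks like:
 *
 *	flags = ccu_lock(ccu);
 *	__ccu_write_enable(ccu);
 *	... read/modify/write one or more CCU registers ...
 *	__ccu_write_disable(ccu);
 *	ccu_unlock(ccu, flags);
 */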
/*
* Gating control and status is managed by a 32-bit gate register.
*
* There are several types of gating available:
* - (no gate)
* A clock with no gate is assumed to be always enabled.
* - hardware-only gating (auto-gating)
* Enabling or disabling clocks with this type of gate is
* managed automatically by the hardware. Such clocks can be
* considered by the software to be enabled. The current status
* of auto-gated clocks can be read from the gate status bit.
* - software-only gating
* Auto-gating is not available for this type of clock.
* Instead, software manages whether it's enabled by setting or
* clearing the enable bit. The current gate status of a gate
* under software control can be read from the gate status bit.
* To ensure a change to the gating status is complete, the
* status bit can be polled to verify that the gate has entered
* the desired state.
* - selectable hardware or software gating
* Gating for this type of clock can be configured to be either
* under software or hardware control. Which type is in use is
* determined by the hw_sw_sel bit of the gate register.
*/
struct bcm_clk_gate {
u32 offset; /* gate register offset */
u32 status_bit; /* 0: gate is disabled; 1: gate is enabled */
u32 en_bit; /* 0: disable; 1: enable */
u32 hw_sw_sel_bit; /* 0: hardware gating; 1: software gating */
u32 flags; /* BCM_CLK_GATE_FLAGS_* below */
};
/*
* Gate flags:
* HW means this gate can be auto-gated
* SW means the state of this gate can be software controlled
* NO_DISABLE means this gate is (only) enabled if under software control
* SW_MANAGED means the status of this gate is under software control
* ENABLED means this software-managed gate is *supposed* to be enabled
*/
#define BCM_CLK_GATE_FLAGS_EXISTS ((u32)1 << 0) /* Gate is valid */
#define BCM_CLK_GATE_FLAGS_HW ((u32)1 << 1) /* Can auto-gate */
#define BCM_CLK_GATE_FLAGS_SW ((u32)1 << 2) /* Software control */
#define BCM_CLK_GATE_FLAGS_NO_DISABLE ((u32)1 << 3) /* HW or enabled */
#define BCM_CLK_GATE_FLAGS_SW_MANAGED ((u32)1 << 4) /* SW now in control */
#define BCM_CLK_GATE_FLAGS_ENABLED ((u32)1 << 5) /* If SW_MANAGED */
/*
* Gate initialization macros.
*
* Any gate initially under software control will be enabled.
*/
/* A hardware/software gate initially under software control */
#define HW_SW_GATE(_offset, _status_bit, _en_bit, _hw_sw_sel_bit) \
{ \
.offset = (_offset), \
.status_bit = (_status_bit), \
.en_bit = (_en_bit), \
.hw_sw_sel_bit = (_hw_sw_sel_bit), \
.flags = FLAG(GATE, HW)|FLAG(GATE, SW)| \
FLAG(GATE, SW_MANAGED)|FLAG(GATE, ENABLED)| \
FLAG(GATE, EXISTS), \
}
/* A hardware/software gate initially under hardware control */
#define HW_SW_GATE_AUTO(_offset, _status_bit, _en_bit, _hw_sw_sel_bit) \
{ \
.offset = (_offset), \
.status_bit = (_status_bit), \
.en_bit = (_en_bit), \
.hw_sw_sel_bit = (_hw_sw_sel_bit), \
.flags = FLAG(GATE, HW)|FLAG(GATE, SW)| \
FLAG(GATE, EXISTS), \
}
/* A hardware-or-enabled gate (enabled if not under hardware control) */
#define HW_ENABLE_GATE(_offset, _status_bit, _en_bit, _hw_sw_sel_bit) \
{ \
.offset = (_offset), \
.status_bit = (_status_bit), \
.en_bit = (_en_bit), \
.hw_sw_sel_bit = (_hw_sw_sel_bit), \
.flags = FLAG(GATE, HW)|FLAG(GATE, SW)| \
FLAG(GATE, NO_DISABLE)|FLAG(GATE, EXISTS), \
}
/* A software-only gate */
#define SW_ONLY_GATE(_offset, _status_bit, _en_bit) \
{ \
.offset = (_offset), \
.status_bit = (_status_bit), \
.en_bit = (_en_bit), \
.flags = FLAG(GATE, SW)|FLAG(GATE, SW_MANAGED)| \
FLAG(GATE, ENABLED)|FLAG(GATE, EXISTS), \
}
/* A hardware-only gate */
#define HW_ONLY_GATE(_offset, _status_bit) \
{ \
.offset = (_offset), \
.status_bit = (_status_bit), \
.flags = FLAG(GATE, HW)|FLAG(GATE, EXISTS), \
}
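/*
 * Example (the register offset and bit positions are made up, purely
 * to illustrate usage): a peripheral gate register at offset 0x0458
 * with its status in bit 16, its enable request in bit 0 and its
 * hardware/software select in bit 1 would be declared as
 *
 *	.gate = HW_SW_GATE(0x0458, 16, 0, 1),
 *
 * which starts out software-managed and enabled; a gate with no
 * software control at all would instead use HW_ONLY_GATE(0x0458, 16).
 */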
/*
* Each clock can have zero, one, or two dividers which change the
* output rate of the clock. Each divider can be either fixed or
* variable. If there are two dividers, they are the "pre-divider"
* and the "regular" or "downstream" divider. If there is only one,
* there is no pre-divider.
*
* A fixed divider is any non-zero (positive) value, and it
* indicates how the input rate is affected by the divider.
*
* The value of a variable divider is maintained in a sub-field of a
* 32-bit divider register. The position of the field in the
* register is defined by its offset and width. The value recorded
* in this field is always 1 less than the value it represents.
*
* In addition, a variable divider can indicate that some subset
* of its bits represent a "fractional" part of the divider. Such
* bits comprise the low-order portion of the divider field, and can
* be viewed as representing the portion of the divider that lies to
* the right of the decimal point. Most variable dividers have zero
* fractional bits. Variable dividers with non-zero fraction width
* still record a value 1 less than the value they represent; the
* added 1 does *not* affect the low-order bit in this case, it
* affects the bits above the fractional part only. (Often in this
* code a divider field value is distinguished from the value it
* represents by referring to the latter as a "divisor".)
*
* In order to avoid dealing with fractions, divider arithmetic is
* performed using "scaled" values. A scaled value is one that's
* been left-shifted by the fractional width of a divider. Dividing
* a scaled value by a scaled divisor produces the desired quotient
* without loss of precision and without any other special handling
* for fractions.
*
* The recorded value of a variable divider can be modified. To
* modify either divider (or both), a clock must be enabled (i.e.,
* using its gate). In addition, a trigger register (described
* below) must be used to commit the change, and polled to verify
* the change is complete.
*/
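/*
 * Example of the encoding described above (the field contents are
 * hypothetical): a variable divider with frac_width 3 whose field
 * currently holds the value 20 represents the scaled divisor
 * 20 + (1 << 3) = 28, i.e. a divisor of 28 / 8 = 3.5.  Running a
 * 100000000 Hz parent through it gives (100000000 << 3) / 28,
 * rounded to closest, or 28571429 Hz -- the same as 100 MHz / 3.5,
 * computed without any fractional arithmetic.
 */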
struct bcm_clk_div {
union {
struct { /* variable divider */
u32 offset; /* divider register offset */
u32 shift; /* field shift */
u32 width; /* field width */
u32 frac_width; /* field fraction width */
u64 scaled_div; /* scaled divider value */
};
u32 fixed; /* non-zero fixed divider value */
};
u32 flags; /* BCM_CLK_DIV_FLAGS_* below */
};
/*
* Divider flags:
* EXISTS means this divider exists
* FIXED means it is a fixed-rate divider
*/
#define BCM_CLK_DIV_FLAGS_EXISTS ((u32)1 << 0) /* Divider is valid */
#define BCM_CLK_DIV_FLAGS_FIXED ((u32)1 << 1) /* Fixed-value */
/* Divider initialization macros */
/* A fixed (non-zero) divider */
#define FIXED_DIVIDER(_value) \
{ \
.fixed = (_value), \
.flags = FLAG(DIV, EXISTS)|FLAG(DIV, FIXED), \
}
/* A divider with an integral divisor */
#define DIVIDER(_offset, _shift, _width) \
{ \
.offset = (_offset), \
.shift = (_shift), \
.width = (_width), \
.scaled_div = BAD_SCALED_DIV_VALUE, \
.flags = FLAG(DIV, EXISTS), \
}
/* A divider whose divisor has an integer and fractional part */
#define FRAC_DIVIDER(_offset, _shift, _width, _frac_width) \
{ \
.offset = (_offset), \
.shift = (_shift), \
.width = (_width), \
.frac_width = (_frac_width), \
.scaled_div = BAD_SCALED_DIV_VALUE, \
.flags = FLAG(DIV, EXISTS), \
}
/*
* Clocks may have multiple "parent" clocks. If there is more than
* one, a selector must be specified to define which of the parent
* clocks is currently in use. The selected clock is indicated in a
* sub-field of a 32-bit selector register. The range of
* representable selector values typically exceeds the number of
* available parent clocks. Occasionally the reset value of a
* selector field is explicitly set to a (specific) value that does
* not correspond to a defined input clock.
*
* We register all known parent clocks with the common clock code
* using a packed array (i.e., no empty slots) of (parent) clock
* names, and refer to them later using indexes into that array.
* We maintain an array of selector values indexed by common clock
* index values in order to map between these common clock indexes
* and the selector values used by the hardware.
*
* Like dividers, a selector can be modified, but to do so a clock
* must be enabled, and a trigger must be used to commit the change.
*/
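/*
 * Example (field values are hypothetical): a clock with three
 * possible parents whose selector field can hold values 0-7 might
 * use a parent_sel array of { 0, 1, 4 }.  A field value of 4 then
 * maps to parent index 2, and switching to parent index 1 writes the
 * value 1 back into the field.  A field value of, say, 3 matches no
 * array entry, so parent_index() reports BAD_CLK_INDEX for it.
 */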
struct bcm_clk_sel {
u32 offset; /* selector register offset */
u32 shift; /* field shift */
u32 width; /* field width */
u32 parent_count; /* number of entries in parent_sel[] */
u32 *parent_sel; /* array of parent selector values */
u8 clk_index; /* current selected index in parent_sel[] */
};
/* Selector initialization macro */
#define SELECTOR(_offset, _shift, _width) \
{ \
.offset = (_offset), \
.shift = (_shift), \
.width = (_width), \
.clk_index = BAD_CLK_INDEX, \
}
/*
* Making changes to a variable divider or a selector for a clock
* requires the use of a trigger. A trigger is defined by a single
* bit within a register. To signal a change, a 1 is written into
* that bit. To determine when the change has been completed, that
* trigger bit is polled; the read value will be 1 while the change
* is in progress, and 0 when it is complete.
*
* Occasionally a clock will have more than one trigger. In this
* case, the "pre-trigger" will be used when changing a clock's
* selector and/or its pre-divider.
*/
struct bcm_clk_trig {
u32 offset; /* trigger register offset */
u32 bit; /* trigger bit */
u32 flags; /* BCM_CLK_TRIG_FLAGS_* below */
};
/*
* Trigger flags:
* EXISTS means this trigger exists
*/
#define BCM_CLK_TRIG_FLAGS_EXISTS ((u32)1 << 0) /* Trigger is valid */
/* Trigger initialization macro */
#define TRIGGER(_offset, _bit) \
{ \
.offset = (_offset), \
.bit = (_bit), \
.flags = FLAG(TRIG, EXISTS), \
}
struct peri_clk_data {
struct bcm_clk_gate gate;
struct bcm_clk_trig pre_trig;
struct bcm_clk_div pre_div;
struct bcm_clk_trig trig;
struct bcm_clk_div div;
struct bcm_clk_sel sel;
const char *clocks[]; /* must be last; use CLOCKS() to declare */
};
#define CLOCKS(...) { __VA_ARGS__, NULL, }
#define NO_CLOCKS { NULL, } /* Use when there are no parent clocks */
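/*
 * A peripheral clock is described by combining the initializers
 * above.  The following sketch uses made-up register offsets and
 * field positions purely to show the shape of such a definition:
 *
 *	static struct peri_clk_data example_uart_data = {
 *		.gate	= HW_SW_GATE(0x0400, 16, 0, 1),
 *		.clocks	= CLOCKS("ref_crystal", "var_312m", "ref_312m"),
 *		.sel	= SELECTOR(0x0a10, 0, 2),
 *		.div	= FRAC_DIVIDER(0x0a10, 4, 12, 8),
 *		.trig	= TRIGGER(0x0afc, 2),
 *	};
 */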
struct kona_clk {
struct clk_hw hw;
struct clk_init_data init_data;
const char *name; /* name of this clock */
struct ccu_data *ccu; /* ccu this clock is associated with */
enum bcm_clk_type type;
union {
void *data;
struct peri_clk_data *peri;
};
};
#define to_kona_clk(_hw) \
container_of(_hw, struct kona_clk, hw)
/* Exported globals */
extern struct clk_ops kona_peri_clk_ops;
/* Helper functions */
#define PERI_CLK_SETUP(clks, ccu, id, name) \
clks[id] = kona_clk_setup(ccu, #name, bcm_clk_peri, &name ## _data)
/* Externally visible functions */
extern u64 do_div_round_closest(u64 dividend, unsigned long divisor);
extern u64 scaled_div_max(struct bcm_clk_div *div);
extern u64 scaled_div_build(struct bcm_clk_div *div, u32 div_value,
u32 billionths);
extern struct clk *kona_clk_setup(struct ccu_data *ccu, const char *name,
enum bcm_clk_type type, void *data);
extern void __init kona_dt_ccu_setup(struct device_node *node,
int (*ccu_clks_setup)(struct ccu_data *));
extern bool __init kona_ccu_init(struct ccu_data *ccu);
#endif /* _CLK_KONA_H */
......@@ -16,6 +16,7 @@
#include <linux/clk-provider.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/syscore_ops.h>
#include "clk.h"
......@@ -130,6 +131,17 @@ enum exynos4_plls {
nr_plls /* number of PLLs */
};
static void __iomem *reg_base;
static enum exynos4_soc exynos4_soc;
/*
* Support for CMU save/restore across system suspends
*/
#ifdef CONFIG_PM_SLEEP
static struct samsung_clk_reg_dump *exynos4_save_common;
static struct samsung_clk_reg_dump *exynos4_save_soc;
static struct samsung_clk_reg_dump *exynos4_save_pll;
/*
* list of controller registers to be saved and restored during a
* suspend/resume cycle.
......@@ -154,6 +166,17 @@ static unsigned long exynos4x12_clk_save[] __initdata = {
E4X12_MPLL_CON0,
};
static unsigned long exynos4_clk_pll_regs[] __initdata = {
EPLL_LOCK,
VPLL_LOCK,
EPLL_CON0,
EPLL_CON1,
EPLL_CON2,
VPLL_CON0,
VPLL_CON1,
VPLL_CON2,
};
static unsigned long exynos4_clk_regs[] __initdata = {
SRC_LEFTBUS,
DIV_LEFTBUS,
......@@ -161,12 +184,6 @@ static unsigned long exynos4_clk_regs[] __initdata = {
SRC_RIGHTBUS,
DIV_RIGHTBUS,
GATE_IP_RIGHTBUS,
EPLL_CON0,
EPLL_CON1,
EPLL_CON2,
VPLL_CON0,
VPLL_CON1,
VPLL_CON2,
SRC_TOP0,
SRC_TOP1,
SRC_CAM,
......@@ -227,6 +244,124 @@ static unsigned long exynos4_clk_regs[] __initdata = {
GATE_IP_CPU,
};
static const struct samsung_clk_reg_dump src_mask_suspend[] = {
{ .offset = SRC_MASK_TOP, .value = 0x00000001, },
{ .offset = SRC_MASK_CAM, .value = 0x11111111, },
{ .offset = SRC_MASK_TV, .value = 0x00000111, },
{ .offset = SRC_MASK_LCD0, .value = 0x00001111, },
{ .offset = SRC_MASK_MAUDIO, .value = 0x00000001, },
{ .offset = SRC_MASK_FSYS, .value = 0x01011111, },
{ .offset = SRC_MASK_PERIL0, .value = 0x01111111, },
{ .offset = SRC_MASK_PERIL1, .value = 0x01110111, },
{ .offset = SRC_MASK_DMC, .value = 0x00010000, },
};
static const struct samsung_clk_reg_dump src_mask_suspend_e4210[] = {
{ .offset = E4210_SRC_MASK_LCD1, .value = 0x00001111, },
};
#define PLL_ENABLED (1 << 31)
#define PLL_LOCKED (1 << 29)
static void exynos4_clk_wait_for_pll(u32 reg)
{
u32 pll_con;
pll_con = readl(reg_base + reg);
if (!(pll_con & PLL_ENABLED))
return;
while (!(pll_con & PLL_LOCKED)) {
cpu_relax();
pll_con = readl(reg_base + reg);
}
}
static int exynos4_clk_suspend(void)
{
samsung_clk_save(reg_base, exynos4_save_common,
ARRAY_SIZE(exynos4_clk_regs));
samsung_clk_save(reg_base, exynos4_save_pll,
ARRAY_SIZE(exynos4_clk_pll_regs));
if (exynos4_soc == EXYNOS4210) {
samsung_clk_save(reg_base, exynos4_save_soc,
ARRAY_SIZE(exynos4210_clk_save));
samsung_clk_restore(reg_base, src_mask_suspend_e4210,
ARRAY_SIZE(src_mask_suspend_e4210));
} else {
samsung_clk_save(reg_base, exynos4_save_soc,
ARRAY_SIZE(exynos4x12_clk_save));
}
samsung_clk_restore(reg_base, src_mask_suspend,
ARRAY_SIZE(src_mask_suspend));
return 0;
}
static void exynos4_clk_resume(void)
{
samsung_clk_restore(reg_base, exynos4_save_pll,
ARRAY_SIZE(exynos4_clk_pll_regs));
exynos4_clk_wait_for_pll(EPLL_CON0);
exynos4_clk_wait_for_pll(VPLL_CON0);
samsung_clk_restore(reg_base, exynos4_save_common,
ARRAY_SIZE(exynos4_clk_regs));
if (exynos4_soc == EXYNOS4210)
samsung_clk_restore(reg_base, exynos4_save_soc,
ARRAY_SIZE(exynos4210_clk_save));
else
samsung_clk_restore(reg_base, exynos4_save_soc,
ARRAY_SIZE(exynos4x12_clk_save));
}
static struct syscore_ops exynos4_clk_syscore_ops = {
.suspend = exynos4_clk_suspend,
.resume = exynos4_clk_resume,
};
static void exynos4_clk_sleep_init(void)
{
exynos4_save_common = samsung_clk_alloc_reg_dump(exynos4_clk_regs,
ARRAY_SIZE(exynos4_clk_regs));
if (!exynos4_save_common)
goto err_warn;
if (exynos4_soc == EXYNOS4210)
exynos4_save_soc = samsung_clk_alloc_reg_dump(
exynos4210_clk_save,
ARRAY_SIZE(exynos4210_clk_save));
else
exynos4_save_soc = samsung_clk_alloc_reg_dump(
exynos4x12_clk_save,
ARRAY_SIZE(exynos4x12_clk_save));
if (!exynos4_save_soc)
goto err_common;
exynos4_save_pll = samsung_clk_alloc_reg_dump(exynos4_clk_pll_regs,
ARRAY_SIZE(exynos4_clk_pll_regs));
if (!exynos4_save_pll)
goto err_soc;
register_syscore_ops(&exynos4_clk_syscore_ops);
return;
err_soc:
kfree(exynos4_save_soc);
err_common:
kfree(exynos4_save_common);
err_warn:
pr_warn("%s: failed to allocate sleep save data, no sleep support!\n",
__func__);
}
#else
static void exynos4_clk_sleep_init(void) {}
#endif
/* list of all parent clocks */
PNAME(mout_apll_p) = { "fin_pll", "fout_apll", };
PNAME(mout_mpll_p) = { "fin_pll", "fout_mpll", };
......@@ -908,12 +1043,13 @@ static unsigned long exynos4_get_xom(void)
return xom;
}
static void __init exynos4_clk_register_finpll(unsigned long xom)
static void __init exynos4_clk_register_finpll(void)
{
struct samsung_fixed_rate_clock fclk;
struct clk *clk;
unsigned long finpll_f = 24000000;
char *parent_name;
unsigned int xom = exynos4_get_xom();
parent_name = xom & 1 ? "xusbxti" : "xxti";
clk = clk_get(NULL, parent_name);
......@@ -1038,27 +1174,21 @@ static struct samsung_pll_clock exynos4x12_plls[nr_plls] __initdata = {
/* register exynos4 clocks */
static void __init exynos4_clk_init(struct device_node *np,
enum exynos4_soc exynos4_soc,
void __iomem *reg_base, unsigned long xom)
enum exynos4_soc soc)
{
exynos4_soc = soc;
reg_base = of_iomap(np, 0);
if (!reg_base)
panic("%s: failed to map registers\n", __func__);
if (exynos4_soc == EXYNOS4210)
samsung_clk_init(np, reg_base, CLK_NR_CLKS,
exynos4_clk_regs, ARRAY_SIZE(exynos4_clk_regs),
exynos4210_clk_save, ARRAY_SIZE(exynos4210_clk_save));
else
samsung_clk_init(np, reg_base, CLK_NR_CLKS,
exynos4_clk_regs, ARRAY_SIZE(exynos4_clk_regs),
exynos4x12_clk_save, ARRAY_SIZE(exynos4x12_clk_save));
samsung_clk_init(np, reg_base, CLK_NR_CLKS);
samsung_clk_of_register_fixed_ext(exynos4_fixed_rate_ext_clks,
ARRAY_SIZE(exynos4_fixed_rate_ext_clks),
ext_clk_match);
exynos4_clk_register_finpll(xom);
exynos4_clk_register_finpll();
if (exynos4_soc == EXYNOS4210) {
samsung_clk_register_mux(exynos4210_mux_early,
......@@ -1125,6 +1255,8 @@ static void __init exynos4_clk_init(struct device_node *np,
samsung_clk_register_alias(exynos4_aliases,
ARRAY_SIZE(exynos4_aliases));
exynos4_clk_sleep_init();
pr_info("%s clocks: sclk_apll = %ld, sclk_mpll = %ld\n"
"\tsclk_epll = %ld, sclk_vpll = %ld, arm_clk = %ld\n",
exynos4_soc == EXYNOS4210 ? "Exynos4210" : "Exynos4x12",
......@@ -1136,12 +1268,12 @@ static void __init exynos4_clk_init(struct device_node *np,
static void __init exynos4210_clk_init(struct device_node *np)
{
exynos4_clk_init(np, EXYNOS4210, NULL, exynos4_get_xom());
exynos4_clk_init(np, EXYNOS4210);
}
CLK_OF_DECLARE(exynos4210_clk, "samsung,exynos4210-clock", exynos4210_clk_init);
static void __init exynos4412_clk_init(struct device_node *np)
{
exynos4_clk_init(np, EXYNOS4X12, NULL, exynos4_get_xom());
exynos4_clk_init(np, EXYNOS4X12);
}
CLK_OF_DECLARE(exynos4412_clk, "samsung,exynos4412-clock", exynos4412_clk_init);
......@@ -16,6 +16,7 @@
#include <linux/clk-provider.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/syscore_ops.h>
#include "clk.h"
......@@ -85,6 +86,11 @@ enum exynos5250_plls {
nr_plls /* number of PLLs */
};
static void __iomem *reg_base;
#ifdef CONFIG_PM_SLEEP
static struct samsung_clk_reg_dump *exynos5250_save;
/*
* list of controller registers to be saved and restored during a
* suspend/resume cycle.
......@@ -137,6 +143,41 @@ static unsigned long exynos5250_clk_regs[] __initdata = {
GATE_IP_ACP,
};
static int exynos5250_clk_suspend(void)
{
samsung_clk_save(reg_base, exynos5250_save,
ARRAY_SIZE(exynos5250_clk_regs));
return 0;
}
static void exynos5250_clk_resume(void)
{
samsung_clk_restore(reg_base, exynos5250_save,
ARRAY_SIZE(exynos5250_clk_regs));
}
static struct syscore_ops exynos5250_clk_syscore_ops = {
.suspend = exynos5250_clk_suspend,
.resume = exynos5250_clk_resume,
};
static void exynos5250_clk_sleep_init(void)
{
exynos5250_save = samsung_clk_alloc_reg_dump(exynos5250_clk_regs,
ARRAY_SIZE(exynos5250_clk_regs));
if (!exynos5250_save) {
pr_warn("%s: failed to allocate sleep save data, no sleep support!\n",
__func__);
return;
}
register_syscore_ops(&exynos5250_clk_syscore_ops);
}
#else
static void exynos5250_clk_sleep_init(void) {}
#endif
/* list of all parent clocks */
PNAME(mout_apll_p) = { "fin_pll", "fout_apll", };
PNAME(mout_cpu_p) = { "mout_apll", "mout_mpll", };
......@@ -645,8 +686,6 @@ static struct of_device_id ext_clk_match[] __initdata = {
/* register exynox5250 clocks */
static void __init exynos5250_clk_init(struct device_node *np)
{
void __iomem *reg_base;
if (np) {
reg_base = of_iomap(np, 0);
if (!reg_base)
......@@ -655,9 +694,7 @@ static void __init exynos5250_clk_init(struct device_node *np)
panic("%s: unable to determine soc\n", __func__);
}
samsung_clk_init(np, reg_base, CLK_NR_CLKS,
exynos5250_clk_regs, ARRAY_SIZE(exynos5250_clk_regs),
NULL, 0);
samsung_clk_init(np, reg_base, CLK_NR_CLKS);
samsung_clk_of_register_fixed_ext(exynos5250_fixed_rate_ext_clks,
ARRAY_SIZE(exynos5250_fixed_rate_ext_clks),
ext_clk_match);
......@@ -685,6 +722,8 @@ static void __init exynos5250_clk_init(struct device_node *np)
samsung_clk_register_gate(exynos5250_gate_clks,
ARRAY_SIZE(exynos5250_gate_clks));
exynos5250_clk_sleep_init();
pr_info("Exynos5250: clock setup completed, armclk=%ld\n",
_get_rate("div_arm2"));
}
......
......@@ -16,6 +16,7 @@
#include <linux/clk-provider.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/syscore_ops.h>
#include "clk.h"
......@@ -108,6 +109,11 @@ enum exynos5420_plls {
nr_plls /* number of PLLs */
};
static void __iomem *reg_base;
#ifdef CONFIG_PM_SLEEP
static struct samsung_clk_reg_dump *exynos5420_save;
/*
* list of controller registers to be saved and restored during a
* suspend/resume cycle.
......@@ -174,6 +180,41 @@ static unsigned long exynos5420_clk_regs[] __initdata = {
DIV_KFC0,
};
static int exynos5420_clk_suspend(void)
{
samsung_clk_save(reg_base, exynos5420_save,
ARRAY_SIZE(exynos5420_clk_regs));
return 0;
}
static void exynos5420_clk_resume(void)
{
samsung_clk_restore(reg_base, exynos5420_save,
ARRAY_SIZE(exynos5420_clk_regs));
}
static struct syscore_ops exynos5420_clk_syscore_ops = {
.suspend = exynos5420_clk_suspend,
.resume = exynos5420_clk_resume,
};
static void exynos5420_clk_sleep_init(void)
{
exynos5420_save = samsung_clk_alloc_reg_dump(exynos5420_clk_regs,
ARRAY_SIZE(exynos5420_clk_regs));
if (!exynos5420_save) {
pr_warn("%s: failed to allocate sleep save data, no sleep support!\n",
__func__);
return;
}
register_syscore_ops(&exynos5420_clk_syscore_ops);
}
#else
static void exynos5420_clk_sleep_init(void) {}
#endif
/* list of all parent clocks */
PNAME(mspll_cpu_p) = { "sclk_cpll", "sclk_dpll",
"sclk_mpll", "sclk_spll" };
......@@ -737,8 +778,6 @@ static struct of_device_id ext_clk_match[] __initdata = {
/* register exynos5420 clocks */
static void __init exynos5420_clk_init(struct device_node *np)
{
void __iomem *reg_base;
if (np) {
reg_base = of_iomap(np, 0);
if (!reg_base)
......@@ -747,9 +786,7 @@ static void __init exynos5420_clk_init(struct device_node *np)
panic("%s: unable to determine soc\n", __func__);
}
samsung_clk_init(np, reg_base, CLK_NR_CLKS,
exynos5420_clk_regs, ARRAY_SIZE(exynos5420_clk_regs),
NULL, 0);
samsung_clk_init(np, reg_base, CLK_NR_CLKS);
samsung_clk_of_register_fixed_ext(exynos5420_fixed_rate_ext_clks,
ARRAY_SIZE(exynos5420_fixed_rate_ext_clks),
ext_clk_match);
......@@ -765,5 +802,7 @@ static void __init exynos5420_clk_init(struct device_node *np)
ARRAY_SIZE(exynos5420_div_clks));
samsung_clk_register_gate(exynos5420_gate_clks,
ARRAY_SIZE(exynos5420_gate_clks));
exynos5420_clk_sleep_init();
}
CLK_OF_DECLARE(exynos5420_clk, "samsung,exynos5420-clock", exynos5420_clk_init);
......@@ -101,7 +101,7 @@ static void __init exynos5440_clk_init(struct device_node *np)
return;
}
samsung_clk_init(np, reg_base, CLK_NR_CLKS, NULL, 0, NULL, 0);
samsung_clk_init(np, reg_base, CLK_NR_CLKS);
samsung_clk_of_register_fixed_ext(exynos5440_fixed_rate_ext_clks,
ARRAY_SIZE(exynos5440_fixed_rate_ext_clks), ext_clk_match);
......
......@@ -13,6 +13,7 @@
#include <linux/clk-provider.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/syscore_ops.h>
#include <dt-bindings/clock/samsung,s3c64xx-clock.h>
......@@ -61,6 +62,13 @@ enum s3c64xx_plls {
apll, mpll, epll,
};
static void __iomem *reg_base;
static bool is_s3c6400;
#ifdef CONFIG_PM_SLEEP
static struct samsung_clk_reg_dump *s3c64xx_save_common;
static struct samsung_clk_reg_dump *s3c64xx_save_soc;
/*
* List of controller registers to be saved and restored during
* a suspend/resume cycle.
......@@ -87,6 +95,60 @@ static unsigned long s3c6410_clk_regs[] __initdata = {
MEM0_GATE,
};
static int s3c64xx_clk_suspend(void)
{
samsung_clk_save(reg_base, s3c64xx_save_common,
ARRAY_SIZE(s3c64xx_clk_regs));
if (!is_s3c6400)
samsung_clk_save(reg_base, s3c64xx_save_soc,
ARRAY_SIZE(s3c6410_clk_regs));
return 0;
}
static void s3c64xx_clk_resume(void)
{
samsung_clk_restore(reg_base, s3c64xx_save_common,
ARRAY_SIZE(s3c64xx_clk_regs));
if (!is_s3c6400)
samsung_clk_restore(reg_base, s3c64xx_save_soc,
ARRAY_SIZE(s3c6410_clk_regs));
}
static struct syscore_ops s3c64xx_clk_syscore_ops = {
.suspend = s3c64xx_clk_suspend,
.resume = s3c64xx_clk_resume,
};
static void s3c64xx_clk_sleep_init(void)
{
s3c64xx_save_common = samsung_clk_alloc_reg_dump(s3c64xx_clk_regs,
ARRAY_SIZE(s3c64xx_clk_regs));
if (!s3c64xx_save_common)
goto err_warn;
if (!is_s3c6400) {
s3c64xx_save_soc = samsung_clk_alloc_reg_dump(s3c6410_clk_regs,
ARRAY_SIZE(s3c6410_clk_regs));
if (!s3c64xx_save_soc)
goto err_soc;
}
register_syscore_ops(&s3c64xx_clk_syscore_ops);
return;
err_soc:
kfree(s3c64xx_save_common);
err_warn:
pr_warn("%s: failed to allocate sleep save data, no sleep support!\n",
__func__);
}
#else
static void s3c64xx_clk_sleep_init(void) {}
#endif
/* List of parent clocks common for all S3C64xx SoCs. */
PNAME(spi_mmc_p) = { "mout_epll", "dout_mpll", "fin_pll", "clk27m" };
PNAME(uart_p) = { "mout_epll", "dout_mpll" };
......@@ -391,11 +453,11 @@ static void __init s3c64xx_clk_register_fixed_ext(unsigned long fin_pll_f,
/* Register s3c64xx clocks. */
void __init s3c64xx_clk_init(struct device_node *np, unsigned long xtal_f,
unsigned long xusbxti_f, bool is_s3c6400,
void __iomem *reg_base)
unsigned long xusbxti_f, bool s3c6400,
void __iomem *base)
{
unsigned long *soc_regs = NULL;
unsigned long nr_soc_regs = 0;
reg_base = base;
is_s3c6400 = s3c6400;
if (np) {
reg_base = of_iomap(np, 0);
......@@ -403,13 +465,7 @@ void __init s3c64xx_clk_init(struct device_node *np, unsigned long xtal_f,
panic("%s: failed to map registers\n", __func__);
}
if (!is_s3c6400) {
soc_regs = s3c6410_clk_regs;
nr_soc_regs = ARRAY_SIZE(s3c6410_clk_regs);
}
samsung_clk_init(np, reg_base, NR_CLKS, s3c64xx_clk_regs,
ARRAY_SIZE(s3c64xx_clk_regs), soc_regs, nr_soc_regs);
samsung_clk_init(np, reg_base, NR_CLKS);
/* Register external clocks. */
if (!np)
......@@ -452,6 +508,7 @@ void __init s3c64xx_clk_init(struct device_node *np, unsigned long xtal_f,
samsung_clk_register_alias(s3c64xx_clock_aliases,
ARRAY_SIZE(s3c64xx_clock_aliases));
s3c64xx_clk_sleep_init();
pr_info("%s clocks: apll = %lu, mpll = %lu\n"
"\tepll = %lu, arm_clk = %lu\n",
......
......@@ -21,64 +21,45 @@ static void __iomem *reg_base;
static struct clk_onecell_data clk_data;
#endif
#ifdef CONFIG_PM_SLEEP
static struct samsung_clk_reg_dump *reg_dump;
static unsigned long nr_reg_dump;
static int samsung_clk_suspend(void)
void samsung_clk_save(void __iomem *base,
struct samsung_clk_reg_dump *rd,
unsigned int num_regs)
{
struct samsung_clk_reg_dump *rd = reg_dump;
unsigned long i;
for (i = 0; i < nr_reg_dump; i++, rd++)
rd->value = __raw_readl(reg_base + rd->offset);
for (; num_regs > 0; --num_regs, ++rd)
rd->value = readl(base + rd->offset);
}
return 0;
void samsung_clk_restore(void __iomem *base,
const struct samsung_clk_reg_dump *rd,
unsigned int num_regs)
{
for (; num_regs > 0; --num_regs, ++rd)
writel(rd->value, base + rd->offset);
}
static void samsung_clk_resume(void)
struct samsung_clk_reg_dump *samsung_clk_alloc_reg_dump(
const unsigned long *rdump,
unsigned long nr_rdump)
{
struct samsung_clk_reg_dump *rd = reg_dump;
unsigned long i;
struct samsung_clk_reg_dump *rd;
unsigned int i;
for (i = 0; i < nr_reg_dump; i++, rd++)
__raw_writel(rd->value, reg_base + rd->offset);
}
rd = kcalloc(nr_rdump, sizeof(*rd), GFP_KERNEL);
if (!rd)
return NULL;
static struct syscore_ops samsung_clk_syscore_ops = {
.suspend = samsung_clk_suspend,
.resume = samsung_clk_resume,
};
#endif /* CONFIG_PM_SLEEP */
for (i = 0; i < nr_rdump; ++i)
rd[i].offset = rdump[i];
return rd;
}
/* setup the essentials required to support clock lookup using ccf */
void __init samsung_clk_init(struct device_node *np, void __iomem *base,
unsigned long nr_clks, unsigned long *rdump,
unsigned long nr_rdump, unsigned long *soc_rdump,
unsigned long nr_soc_rdump)
unsigned long nr_clks)
{
reg_base = base;
#ifdef CONFIG_PM_SLEEP
if (rdump && nr_rdump) {
unsigned int idx;
reg_dump = kzalloc(sizeof(struct samsung_clk_reg_dump)
* (nr_rdump + nr_soc_rdump), GFP_KERNEL);
if (!reg_dump) {
pr_err("%s: memory alloc for register dump failed\n",
__func__);
return;
}
for (idx = 0; idx < nr_rdump; idx++)
reg_dump[idx].offset = rdump[idx];
for (idx = 0; idx < nr_soc_rdump; idx++)
reg_dump[nr_rdump + idx].offset = soc_rdump[idx];
nr_reg_dump = nr_rdump + nr_soc_rdump;
register_syscore_ops(&samsung_clk_syscore_ops);
}
#endif
clk_table = kzalloc(sizeof(struct clk *) * nr_clks, GFP_KERNEL);
if (!clk_table)
panic("could not allocate clock lookup table\n");
......
......@@ -313,9 +313,7 @@ struct samsung_pll_clock {
_lock, _con, _rtable, _alias)
extern void __init samsung_clk_init(struct device_node *np, void __iomem *base,
unsigned long nr_clks, unsigned long *rdump,
unsigned long nr_rdump, unsigned long *soc_rdump,
unsigned long nr_soc_rdump);
unsigned long nr_clks);
extern void __init samsung_clk_of_register_fixed_ext(
struct samsung_fixed_rate_clock *fixed_rate_clk,
unsigned int nr_fixed_rate_clk,
......@@ -340,4 +338,14 @@ extern void __init samsung_clk_register_pll(struct samsung_pll_clock *pll_list,
extern unsigned long _get_rate(const char *clk_name);
extern void samsung_clk_save(void __iomem *base,
struct samsung_clk_reg_dump *rd,
unsigned int num_regs);
extern void samsung_clk_restore(void __iomem *base,
const struct samsung_clk_reg_dump *rd,
unsigned int num_regs);
extern struct samsung_clk_reg_dump *samsung_clk_alloc_reg_dump(
const unsigned long *rdump,
unsigned long nr_rdump);
#endif /* __SAMSUNG_CLK_H */
......@@ -33,7 +33,7 @@ struct clk_icst {
struct clk_hw hw;
void __iomem *vcoreg;
void __iomem *lockreg;
const struct icst_params *params;
struct icst_params *params;
unsigned long rate;
};
......@@ -84,6 +84,8 @@ static unsigned long icst_recalc_rate(struct clk_hw *hw,
struct clk_icst *icst = to_icst(hw);
struct icst_vco vco;
if (parent_rate)
icst->params->ref = parent_rate;
vco = vco_get(icst->vcoreg);
icst->rate = icst_hz(icst->params, vco);
return icst->rate;
......@@ -105,6 +107,8 @@ static int icst_set_rate(struct clk_hw *hw, unsigned long rate,
struct clk_icst *icst = to_icst(hw);
struct icst_vco vco;
if (parent_rate)
icst->params->ref = parent_rate;
vco = icst_hz_to_vco(icst->params, rate);
icst->rate = icst_hz(icst->params, vco);
vco_set(icst->lockreg, icst->vcoreg, vco);
......@@ -120,24 +124,33 @@ static const struct clk_ops icst_ops = {
struct clk *icst_clk_register(struct device *dev,
const struct clk_icst_desc *desc,
const char *name,
const char *parent_name,
void __iomem *base)
{
struct clk *clk;
struct clk_icst *icst;
struct clk_init_data init;
struct icst_params *pclone;
icst = kzalloc(sizeof(struct clk_icst), GFP_KERNEL);
if (!icst) {
pr_err("could not allocate ICST clock!\n");
return ERR_PTR(-ENOMEM);
}
pclone = kmemdup(desc->params, sizeof(*pclone), GFP_KERNEL);
if (!pclone) {
pr_err("could not clone ICST params\n");
kfree(icst);
return ERR_PTR(-ENOMEM);
}
init.name = name;
init.ops = &icst_ops;
init.flags = CLK_IS_ROOT;
init.parent_names = NULL;
init.num_parents = 0;
init.parent_names = (parent_name ? &parent_name : NULL);
init.num_parents = (parent_name ? 1 : 0);
icst->hw.init = &init;
icst->params = desc->params;
icst->params = pclone;
icst->vcoreg = base + desc->vco_offset;
icst->lockreg = base + desc->lock_offset;
......
......@@ -16,4 +16,5 @@ struct clk_icst_desc {
struct clk *icst_clk_register(struct device *dev,
const struct clk_icst_desc *desc,
const char *name,
const char *parent_name,
void __iomem *base);
......@@ -93,13 +93,15 @@ void integrator_impd1_clk_init(void __iomem *base, unsigned int id)
imc = &impd1_clks[id];
imc->vco1name = kasprintf(GFP_KERNEL, "lm%x-vco1", id);
clk = icst_clk_register(NULL, &impd1_icst1_desc, imc->vco1name, base);
clk = icst_clk_register(NULL, &impd1_icst1_desc, imc->vco1name, NULL,
base);
imc->vco1clk = clk;
imc->clks[0] = clkdev_alloc(clk, NULL, "lm%x:01000", id);
/* VCO2 is also called "CLK2" */
imc->vco2name = kasprintf(GFP_KERNEL, "lm%x-vco2", id);
clk = icst_clk_register(NULL, &impd1_icst2_desc, imc->vco2name, base);
clk = icst_clk_register(NULL, &impd1_icst2_desc, imc->vco2name, NULL,
base);
imc->vco2clk = clk;
/* MMCI uses CLK2 right off */
......
......@@ -10,21 +10,17 @@
#include <linux/clk.h>
#include <linux/clkdev.h>
#include <linux/err.h>
#include <linux/platform_data/clk-integrator.h>
#include <mach/hardware.h>
#include <mach/platform.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include "clk-icst.h"
/*
* Implementation of the ARM Integrator/AP and Integrator/CP clock tree.
* Inspired by portions of:
* plat-versatile/clock.c and plat-versatile/include/plat/clock.h
*/
#define INTEGRATOR_HDR_LOCK_OFFSET 0x14
static const struct icst_params cp_auxvco_params = {
.ref = 24000000,
/* Base offset for the core module */
static void __iomem *cm_base;
static const struct icst_params cp_auxosc_params = {
.vco_max = ICST525_VCO_MAX_5V,
.vco_min = ICST525_VCO_MIN,
.vd_min = 8,
......@@ -35,50 +31,39 @@ static const struct icst_params cp_auxvco_params = {
.idx2s = icst525_idx2s,
};
static const struct clk_icst_desc __initdata cp_icst_desc = {
.params = &cp_auxvco_params,
static const struct clk_icst_desc __initdata cm_auxosc_desc = {
.params = &cp_auxosc_params,
.vco_offset = 0x1c,
.lock_offset = INTEGRATOR_HDR_LOCK_OFFSET,
};
/*
* integrator_clk_init() - set up the integrator clock tree
* @is_cp: pass true if it's the Integrator/CP else AP is assumed
*/
void __init integrator_clk_init(bool is_cp)
static void __init of_integrator_cm_osc_setup(struct device_node *np)
{
struct clk *clk;
/* APB clock dummy */
clk = clk_register_fixed_rate(NULL, "apb_pclk", NULL, CLK_IS_ROOT, 0);
clk_register_clkdev(clk, "apb_pclk", NULL);
/* UART reference clock */
clk = clk_register_fixed_rate(NULL, "uartclk", NULL, CLK_IS_ROOT,
14745600);
clk_register_clkdev(clk, NULL, "uart0");
clk_register_clkdev(clk, NULL, "uart1");
if (is_cp)
clk_register_clkdev(clk, NULL, "mmci");
struct clk *clk = ERR_PTR(-EINVAL);
const char *clk_name = np->name;
const struct clk_icst_desc *desc = &cm_auxosc_desc;
const char *parent_name;
/* 24 MHz clock */
clk = clk_register_fixed_rate(NULL, "clk24mhz", NULL, CLK_IS_ROOT,
24000000);
clk_register_clkdev(clk, NULL, "kmi0");
clk_register_clkdev(clk, NULL, "kmi1");
if (!is_cp)
clk_register_clkdev(clk, NULL, "ap_timer");
if (!cm_base) {
/* Remap the core module base if not done yet */
struct device_node *parent;
parent = of_get_parent(np);
if (!parent) {
pr_err("no parent on core module clock\n");
return;
}
cm_base = of_iomap(parent, 0);
if (!cm_base) {
pr_err("could not remap core module base\n");
return;
}
}
/* 1 MHz clock */
clk = clk_register_fixed_rate(NULL, "clk1mhz", NULL, CLK_IS_ROOT,
1000000);
clk_register_clkdev(clk, NULL, "sp804");
/* ICST VCO clock used on the Integrator/CP CLCD */
clk = icst_clk_register(NULL, &cp_icst_desc, "icst",
__io_address(INTEGRATOR_HDR_BASE));
clk_register_clkdev(clk, NULL, "clcd");
parent_name = of_clk_get_parent_name(np, 0);
clk = icst_clk_register(NULL, desc, clk_name, parent_name, cm_base);
if (!IS_ERR(clk))
of_clk_add_provider(np, of_clk_src_simple_get, clk);
}
CLK_OF_DECLARE(integrator_cm_auxosc_clk,
"arm,integrator-cm-auxosc", of_integrator_cm_osc_setup);
......@@ -85,10 +85,10 @@ void __init realview_clk_init(void __iomem *sysbase, bool is_pb1176)
/* ICST VCO clock */
if (is_pb1176)
clk = icst_clk_register(NULL, &realview_osc0_desc,
"osc0", sysbase);
"osc0", NULL, sysbase);
else
clk = icst_clk_register(NULL, &realview_osc4_desc,
"osc4", sysbase);
"osc4", NULL, sysbase);
clk_register_clkdev(clk, NULL, "dev:clcd");
clk_register_clkdev(clk, NULL, "issp:clcd");
......
......@@ -122,7 +122,7 @@ config ARM_INTEGRATOR
If in doubt, say Y.
config ARM_KIRKWOOD_CPUFREQ
def_bool ARCH_KIRKWOOD && OF
def_bool MACH_KIRKWOOD
help
This adds the CPUFreq driver for Marvell Kirkwood
SoCs.
......
......@@ -22,7 +22,7 @@ config ARM_HIGHBANK_CPUIDLE
config ARM_KIRKWOOD_CPUIDLE
bool "CPU Idle Driver for Marvell Kirkwood SoCs"
depends on ARCH_KIRKWOOD
depends on ARCH_KIRKWOOD || MACH_KIRKWOOD
help
This adds the CPU Idle driver for Marvell Kirkwood SoCs.
......
......@@ -210,7 +210,7 @@ config GPIO_MSM_V1
config GPIO_MSM_V2
tristate "Qualcomm MSM GPIO v2"
depends on GPIOLIB && OF && ARCH_MSM
depends on GPIOLIB && OF && ARCH_QCOM
help
Say yes here to support the GPIO interface on ARM v7 based
Qualcomm MSM chips. Most of the pins on the MSM can be
......
......@@ -3,7 +3,7 @@ config DRM_MSM
tristate "MSM DRM"
depends on DRM
depends on MSM_IOMMU
depends on (ARCH_MSM && ARCH_MSM8960) || (ARM && COMPILE_TEST)
depends on ARCH_MSM8960 || (ARM && COMPILE_TEST)
select DRM_KMS_HELPER
select SHMEM
select TMPFS
......
......@@ -77,3 +77,11 @@ config VERSATILE_FPGA_IRQ_NR
config XTENSA_MX
bool
select IRQ_DOMAIN
config IRQ_CROSSBAR
bool
help
Support for a CROSSBAR IP that precedes the main interrupt controller.
The primary irqchip invokes the crossbar's callback, which in turn allocates
a free irq and configures the IP. Thus the peripheral interrupts are
routed to one of the free irqchip interrupt lines.
......@@ -28,3 +28,4 @@ obj-$(CONFIG_ARCH_VT8500) += irq-vt8500.o
obj-$(CONFIG_TB10X_IRQC) += irq-tb10x.o
obj-$(CONFIG_XTENSA) += irq-xtensa-pic.o
obj-$(CONFIG_XTENSA_MX) += irq-xtensa-mx.o
obj-$(CONFIG_IRQ_CROSSBAR) += irq-crossbar.o
/*
* drivers/irqchip/irq-crossbar.c
*
* Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com
* Author: Sricharan R <r.sricharan@ti.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/slab.h>
#include <linux/irqchip/arm-gic.h>
#define IRQ_FREE -1
#define GIC_IRQ_START 32
/*
* @int_max: maximum number of supported interrupts
* @irq_map: array of interrupts to crossbar number mapping
* @crossbar_base: crossbar base address
* @register_offsets: offsets for each irq number
* @write: register write function (byte/word/long, per ti,reg-size)
*/
struct crossbar_device {
uint int_max;
uint *irq_map;
void __iomem *crossbar_base;
int *register_offsets;
void (*write) (int, int);
};
static struct crossbar_device *cb;
static inline void crossbar_writel(int irq_no, int cb_no)
{
writel(cb_no, cb->crossbar_base + cb->register_offsets[irq_no]);
}
static inline void crossbar_writew(int irq_no, int cb_no)
{
writew(cb_no, cb->crossbar_base + cb->register_offsets[irq_no]);
}
static inline void crossbar_writeb(int irq_no, int cb_no)
{
writeb(cb_no, cb->crossbar_base + cb->register_offsets[irq_no]);
}
static inline int allocate_free_irq(int cb_no)
{
int i;
for (i = 0; i < cb->int_max; i++) {
if (cb->irq_map[i] == IRQ_FREE) {
cb->irq_map[i] = cb_no;
return i;
}
}
return -ENODEV;
}
static int crossbar_domain_map(struct irq_domain *d, unsigned int irq,
irq_hw_number_t hw)
{
cb->write(hw - GIC_IRQ_START, cb->irq_map[hw - GIC_IRQ_START]);
return 0;
}
static void crossbar_domain_unmap(struct irq_domain *d, unsigned int irq)
{
irq_hw_number_t hw = irq_get_irq_data(irq)->hwirq;
if (hw > GIC_IRQ_START)
cb->irq_map[hw - GIC_IRQ_START] = IRQ_FREE;
}
static int crossbar_domain_xlate(struct irq_domain *d,
struct device_node *controller,
const u32 *intspec, unsigned int intsize,
unsigned long *out_hwirq,
unsigned int *out_type)
{
unsigned long ret;
ret = allocate_free_irq(intspec[1]);
if (IS_ERR_VALUE(ret))
return ret;
*out_hwirq = ret + GIC_IRQ_START;
return 0;
}
const struct irq_domain_ops routable_irq_domain_ops = {
.map = crossbar_domain_map,
.unmap = crossbar_domain_unmap,
.xlate = crossbar_domain_xlate
};
static int __init crossbar_of_init(struct device_node *node)
{
int i, size, max, reserved = 0, entry;
const __be32 *irqsr;
cb = kzalloc(sizeof(*cb), GFP_KERNEL);
if (!cb)
return -ENOMEM;
cb->crossbar_base = of_iomap(node, 0);
if (!cb->crossbar_base)
goto err1;
of_property_read_u32(node, "ti,max-irqs", &max);
cb->irq_map = kzalloc(max * sizeof(int), GFP_KERNEL);
if (!cb->irq_map)
goto err2;
cb->int_max = max;
for (i = 0; i < max; i++)
cb->irq_map[i] = IRQ_FREE;
/* Get and mark reserved irqs */
irqsr = of_get_property(node, "ti,irqs-reserved", &size);
if (irqsr) {
size /= sizeof(__be32);
for (i = 0; i < size; i++) {
of_property_read_u32_index(node,
"ti,irqs-reserved",
i, &entry);
if (entry >= max) {
pr_err("Invalid reserved entry\n");
goto err3;
}
cb->irq_map[entry] = 0;
}
}
cb->register_offsets = kzalloc(max * sizeof(int), GFP_KERNEL);
if (!cb->register_offsets)
goto err3;
of_property_read_u32(node, "ti,reg-size", &size);
switch (size) {
case 1:
cb->write = crossbar_writeb;
break;
case 2:
cb->write = crossbar_writew;
break;
case 4:
cb->write = crossbar_writel;
break;
default:
pr_err("Invalid reg-size property\n");
goto err4;
break;
}
/*
* Register offsets are not linear because of the
* reserved irqs, so find and store the offsets once.
*/
for (i = 0; i < max; i++) {
if (!cb->irq_map[i])
continue;
cb->register_offsets[i] = reserved;
reserved += size;
}
register_routable_domain_ops(&routable_irq_domain_ops);
return 0;
err4:
kfree(cb->register_offsets);
err3:
kfree(cb->irq_map);
err2:
iounmap(cb->crossbar_base);
err1:
kfree(cb);
return -ENOMEM;
}
static const struct of_device_id crossbar_match[] __initconst = {
{ .compatible = "ti,irq-crossbar" },
{}
};
int __init irqcrossbar_init(void)
{
struct device_node *np;
np = of_find_matching_node(NULL, crossbar_match);
if (!np)
return -ENODEV;
crossbar_of_init(np);
return 0;
}
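The crossbar driver above only registers its routable-irq domain ops; it is wired up explicitly by platform code through irqcrossbar_init() rather than through IRQCHIP_DECLARE. A minimal sketch of how a platform's init_irq hook might invoke it; the machine compatible string and the call site are illustrative assumptions, not part of this commit:

/* Hedged sketch: hypothetical platform hook, names are illustrative */
#include <linux/init.h>
#include <linux/irqchip.h>
#include <linux/of.h>

int irqcrossbar_init(void);	/* declared in irq-crossbar.h */

static void __init my_soc_init_irq(void)
{
	/*
	 * Register the crossbar's routable domain ops before any
	 * peripheral interrupt gets mapped through the GIC.
	 */
	if (of_machine_is_compatible("vendor,soc-with-crossbar"))
		irqcrossbar_init();

	irqchip_init();		/* probes the GIC from the devicetree */
}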
......@@ -824,16 +824,25 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int irq,
irq_set_chip_and_handler(irq, &gic_chip,
handle_fasteoi_irq);
set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
gic_routable_irq_domain_ops->map(d, irq, hw);
}
irq_set_chip_data(irq, d->host_data);
return 0;
}
static void gic_irq_domain_unmap(struct irq_domain *d, unsigned int irq)
{
gic_routable_irq_domain_ops->unmap(d, irq);
}
static int gic_irq_domain_xlate(struct irq_domain *d,
struct device_node *controller,
const u32 *intspec, unsigned int intsize,
unsigned long *out_hwirq, unsigned int *out_type)
{
unsigned long ret = 0;
if (d->of_node != controller)
return -EINVAL;
if (intsize < 3)
......@@ -843,11 +852,20 @@ static int gic_irq_domain_xlate(struct irq_domain *d,
*out_hwirq = intspec[1] + 16;
/* For SPIs, we need to add 16 more to get the GIC irq ID number */
if (!intspec[0])
*out_hwirq += 16;
if (!intspec[0]) {
ret = gic_routable_irq_domain_ops->xlate(d, controller,
intspec,
intsize,
out_hwirq,
out_type);
if (IS_ERR_VALUE(ret))
return ret;
}
*out_type = intspec[2] & IRQ_TYPE_SENSE_MASK;
return 0;
return ret;
}
#ifdef CONFIG_SMP
......@@ -871,9 +889,41 @@ static struct notifier_block gic_cpu_notifier = {
static const struct irq_domain_ops gic_irq_domain_ops = {
.map = gic_irq_domain_map,
.unmap = gic_irq_domain_unmap,
.xlate = gic_irq_domain_xlate,
};
/* Default functions for routable irq domain */
static int gic_routable_irq_domain_map(struct irq_domain *d, unsigned int irq,
irq_hw_number_t hw)
{
return 0;
}
static void gic_routable_irq_domain_unmap(struct irq_domain *d,
unsigned int irq)
{
}
static int gic_routable_irq_domain_xlate(struct irq_domain *d,
struct device_node *controller,
const u32 *intspec, unsigned int intsize,
unsigned long *out_hwirq,
unsigned int *out_type)
{
*out_hwirq += 16;
return 0;
}
const struct irq_domain_ops gic_default_routable_irq_domain_ops = {
.map = gic_routable_irq_domain_map,
.unmap = gic_routable_irq_domain_unmap,
.xlate = gic_routable_irq_domain_xlate,
};
const struct irq_domain_ops *gic_routable_irq_domain_ops =
&gic_default_routable_irq_domain_ops;
void __init gic_init_bases(unsigned int gic_nr, int irq_start,
void __iomem *dist_base, void __iomem *cpu_base,
u32 percpu_offset, struct device_node *node)
......@@ -881,6 +931,7 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
irq_hw_number_t hwirq_base;
struct gic_chip_data *gic;
int gic_irqs, irq_base, i;
int nr_routable_irqs;
BUG_ON(gic_nr >= MAX_GIC_NR);
......@@ -946,14 +997,25 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
gic->gic_irqs = gic_irqs;
gic_irqs -= hwirq_base; /* calculate # of irqs to allocate */
irq_base = irq_alloc_descs(irq_start, 16, gic_irqs, numa_node_id());
if (of_property_read_u32(node, "arm,routable-irqs",
&nr_routable_irqs)) {
irq_base = irq_alloc_descs(irq_start, 16, gic_irqs,
numa_node_id());
if (IS_ERR_VALUE(irq_base)) {
WARN(1, "Cannot allocate irq_descs @ IRQ%d, assuming pre-allocated\n",
irq_start);
irq_base = irq_start;
}
gic->domain = irq_domain_add_legacy(node, gic_irqs, irq_base,
hwirq_base, &gic_irq_domain_ops, gic);
} else {
gic->domain = irq_domain_add_linear(node, nr_routable_irqs,
&gic_irq_domain_ops,
gic);
}
if (WARN_ON(!gic->domain))
return;
......
......@@ -57,6 +57,7 @@
/**
* struct vic_device - VIC PM device
* @parent_irq: The parent IRQ number of the VIC if cascaded, or 0.
* @irq: The IRQ number for the base of the VIC.
* @base: The register base for the VIC.
* @valid_sources: A bitmask of valid interrupts
......@@ -224,6 +225,17 @@ static int handle_one_vic(struct vic_device *vic, struct pt_regs *regs)
return handled;
}
static void vic_handle_irq_cascaded(unsigned int irq, struct irq_desc *desc)
{
u32 stat, hwirq;
struct vic_device *vic = irq_desc_get_handler_data(desc);
while ((stat = readl_relaxed(vic->base + VIC_IRQ_STATUS))) {
hwirq = ffs(stat) - 1;
generic_handle_irq(irq_find_mapping(vic->domain, hwirq));
}
}
/*
* Keep iterating over all registered VIC's until there are no pending
* interrupts.
......@@ -246,6 +258,7 @@ static struct irq_domain_ops vic_irqdomain_ops = {
/**
* vic_register() - Register a VIC.
* @base: The base address of the VIC.
* @parent_irq: The parent IRQ if cascaded, else 0.
* @irq: The base IRQ for the VIC.
* @valid_sources: bitmask of valid interrupts
* @resume_sources: bitmask of interrupts allowed for resume sources.
......@@ -257,7 +270,8 @@ static struct irq_domain_ops vic_irqdomain_ops = {
*
* This also configures the IRQ domain for the VIC.
*/
static void __init vic_register(void __iomem *base, unsigned int irq,
static void __init vic_register(void __iomem *base, unsigned int parent_irq,
unsigned int irq,
u32 valid_sources, u32 resume_sources,
struct device_node *node)
{
......@@ -273,15 +287,25 @@ static void __init vic_register(void __iomem *base, unsigned int irq,
v->base = base;
v->valid_sources = valid_sources;
v->resume_sources = resume_sources;
v->irq = irq;
set_handle_irq(vic_handle_irq);
vic_id++;
if (parent_irq) {
irq_set_handler_data(parent_irq, v);
irq_set_chained_handler(parent_irq, vic_handle_irq_cascaded);
}
v->domain = irq_domain_add_simple(node, fls(valid_sources), irq,
&vic_irqdomain_ops, v);
/* create an IRQ mapping for each valid IRQ */
for (i = 0; i < fls(valid_sources); i++)
if (valid_sources & (1 << i))
irq_create_mapping(v->domain, i);
/* If no base IRQ was passed, figure out our allocated base */
if (irq)
v->irq = irq;
else
v->irq = irq_find_mapping(v->domain, 0);
}
static void vic_ack_irq(struct irq_data *d)
......@@ -409,10 +433,10 @@ static void __init vic_init_st(void __iomem *base, unsigned int irq_start,
writel(32, base + VIC_PL190_DEF_VECT_ADDR);
}
vic_register(base, irq_start, vic_sources, 0, node);
vic_register(base, 0, irq_start, vic_sources, 0, node);
}
void __init __vic_init(void __iomem *base, int irq_start,
void __init __vic_init(void __iomem *base, int parent_irq, int irq_start,
u32 vic_sources, u32 resume_sources,
struct device_node *node)
{
......@@ -449,7 +473,7 @@ void __init __vic_init(void __iomem *base, int irq_start,
vic_init2(base);
vic_register(base, irq_start, vic_sources, resume_sources, node);
vic_register(base, parent_irq, irq_start, vic_sources, resume_sources, node);
}
/**
......@@ -462,8 +486,30 @@ void __init __vic_init(void __iomem *base, int irq_start,
void __init vic_init(void __iomem *base, unsigned int irq_start,
u32 vic_sources, u32 resume_sources)
{
__vic_init(base, irq_start, vic_sources, resume_sources, NULL);
__vic_init(base, 0, irq_start, vic_sources, resume_sources, NULL);
}
/**
* vic_init_cascaded() - initialise a cascaded vectored interrupt controller
* @base: iomem base address
* @parent_irq: the parent IRQ we're cascaded off
* @irq_start: starting interrupt number, must be a multiple of 32
* @vic_sources: bitmask of interrupt sources to allow
* @resume_sources: bitmask of interrupt sources to allow for resume
*
* This returns the base for the new interrupts or negative on error.
*/
int __init vic_init_cascaded(void __iomem *base, unsigned int parent_irq,
u32 vic_sources, u32 resume_sources)
{
struct vic_device *v;
v = &vic_devices[vic_id];
__vic_init(base, parent_irq, 0, vic_sources, resume_sources, NULL);
/* Return our acquired base */
return v->irq;
}
EXPORT_SYMBOL_GPL(vic_init_cascaded);
#ifdef CONFIG_OF
int __init vic_of_init(struct device_node *node, struct device_node *parent)
......@@ -485,7 +531,7 @@ int __init vic_of_init(struct device_node *node, struct device_node *parent)
/*
* Passing 0 as first IRQ makes the simple domain allocate descriptors
*/
__vic_init(regs, 0, interrupt_mask, wakeup_mask, node);
__vic_init(regs, 0, 0, interrupt_mask, wakeup_mask, node);
return 0;
}
......
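vic_init_cascaded() added above lets a board chain a secondary VIC off a parent interrupt instead of making it the primary interrupt handler. A hedged sketch of board code using it; the base address, parent IRQ number and symbol names are illustrative assumptions:

/* Hedged sketch: hypothetical board setup, not part of this commit */
#include <linux/init.h>
#include <linux/irqchip/arm-vic.h>

#define BOARD_VIC2_BASE		((void __iomem *)0xfe110000)	/* illustrative */
#define BOARD_VIC2_PARENT_IRQ	42				/* illustrative */

static int board_vic2_irq_base;

static void __init board_init_irq(void)
{
	/*
	 * All 32 sources valid, no resume sources; the return value is
	 * the dynamically allocated Linux IRQ base (or negative errno).
	 */
	board_vic2_irq_base = vic_init_cascaded(BOARD_VIC2_BASE,
						BOARD_VIC2_PARENT_IRQ,
						~0U, 0);
}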
......@@ -421,7 +421,7 @@ config LEDS_MC13783
config LEDS_NS2
tristate "LED support for Network Space v2 GPIO LEDs"
depends on LEDS_CLASS
depends on ARCH_KIRKWOOD
depends on ARCH_KIRKWOOD || MACH_KIRKWOOD
default y
help
This option enables support for the dual-GPIO LED found on the
......@@ -431,7 +431,7 @@ config LEDS_NS2
config LEDS_NETXBIG
tristate "LED support for Big Network series LEDs"
depends on LEDS_CLASS
depends on ARCH_KIRKWOOD
depends on ARCH_KIRKWOOD || MACH_KIRKWOOD
default y
help
This option enable support for LEDs found on the LaCie 2Big
......
......@@ -746,28 +746,6 @@ static int nand_davinci_probe(struct platform_device *pdev)
goto err_clk_enable;
}
/*
* Setup Async configuration register in case we did not boot from
* NAND and so bootloader did not bother to set it up.
*/
val = davinci_nand_readl(info, A1CR_OFFSET + info->core_chipsel * 4);
/* Extended Wait is not valid and Select Strobe mode is not used */
val &= ~(ACR_ASIZE_MASK | ACR_EW_MASK | ACR_SS_MASK);
if (info->chip.options & NAND_BUSWIDTH_16)
val |= 0x1;
davinci_nand_writel(info, A1CR_OFFSET + info->core_chipsel * 4, val);
ret = 0;
if (info->timing)
ret = davinci_aemif_setup_timing(info->timing, info->base,
info->core_chipsel);
if (ret < 0) {
dev_dbg(&pdev->dev, "NAND timing values setup fail\n");
goto err;
}
spin_lock_irq(&davinci_nand_lock);
/* put CSxNAND into NAND mode */
......
......@@ -27,7 +27,7 @@ config PHY_EXYNOS_MIPI_VIDEO
config PHY_MVEBU_SATA
def_bool y
depends on ARCH_KIRKWOOD || ARCH_DOVE || MACH_DOVE
depends on ARCH_KIRKWOOD || ARCH_DOVE || MACH_DOVE || MACH_KIRKWOOD
depends on OF
select GENERIC_PHY
......
......@@ -22,7 +22,7 @@ config POWER_RESET_GPIO
config POWER_RESET_MSM
bool "Qualcomm MSM power-off driver"
depends on POWER_RESET && ARCH_MSM
depends on POWER_RESET && ARCH_QCOM
help
Power off and restart support for Qualcomm boards.
......
/*
* QNAP Turbo NAS Board power off
* QNAP Turbo NAS Board power off. Can also be used on Synology devices.
*
* Copyright (C) 2012 Andrew Lunn <andrew@lunn.ch>
*
......@@ -25,17 +25,43 @@
#define UART1_REG(x) (base + ((UART_##x) << 2))
struct power_off_cfg {
u32 baud;
char cmd;
};
static const struct power_off_cfg qnap_power_off_cfg = {
.baud = 19200,
.cmd = 'A',
};
static const struct power_off_cfg synology_power_off_cfg = {
.baud = 9600,
.cmd = '1',
};
static const struct of_device_id qnap_power_off_of_match_table[] = {
{ .compatible = "qnap,power-off",
.data = &qnap_power_off_cfg,
},
{ .compatible = "synology,power-off",
.data = &synology_power_off_cfg,
},
{}
};
MODULE_DEVICE_TABLE(of, qnap_power_off_of_match_table);
static void __iomem *base;
static unsigned long tclk;
static const struct power_off_cfg *cfg;
static void qnap_power_off(void)
{
/* 19200 baud divisor */
const unsigned divisor = ((tclk + (8 * 19200)) / (16 * 19200));
const unsigned divisor = ((tclk + (8 * cfg->baud)) / (16 * cfg->baud));
pr_err("%s: triggering power-off...\n", __func__);
/* hijack UART1 and reset into sane state (19200,8n1) */
/* hijack UART1 and reset into sane state */
writel(0x83, UART1_REG(LCR));
writel(divisor & 0xff, UART1_REG(DLL));
writel((divisor >> 8) & 0xff, UART1_REG(DLM));
......@@ -44,16 +70,21 @@ static void qnap_power_off(void)
writel(0x00, UART1_REG(FCR));
writel(0x00, UART1_REG(MCR));
/* send the power-off command 'A' to PIC */
writel('A', UART1_REG(TX));
/* send the power-off command to PIC */
writel(cfg->cmd, UART1_REG(TX));
}
static int qnap_power_off_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct resource *res;
struct clk *clk;
char symname[KSYM_NAME_LEN];
const struct of_device_id *match =
of_match_node(qnap_power_off_of_match_table, np);
cfg = match->data;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "Missing resource");
......@@ -94,12 +125,6 @@ static int qnap_power_off_remove(struct platform_device *pdev)
return 0;
}
static const struct of_device_id qnap_power_off_of_match_table[] = {
{ .compatible = "qnap,power-off", },
{}
};
MODULE_DEVICE_TABLE(of, qnap_power_off_of_match_table);
static struct platform_driver qnap_power_off_driver = {
.probe = qnap_power_off_probe,
.remove = qnap_power_off_remove,
......
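As a worked example of the divisor computed in qnap_power_off() above: assuming a 200 MHz TCLK (a common Kirkwood value) and the Synology configuration of 9600 baud, the divisor is (200000000 + 8 * 9600) / (16 * 9600) = 200076800 / 153600 = 1302 in integer arithmetic, i.e. the round-to-nearest UART divisor latch value that is split across the DLL and DLM registers.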
......@@ -11,3 +11,5 @@ menuconfig RESET_CONTROLLER
via GPIOs or SoC-internal reset controller modules.
If unsure, say no.
source "drivers/reset/sti/Kconfig"
obj-$(CONFIG_RESET_CONTROLLER) += core.o
obj-$(CONFIG_ARCH_SUNXI) += reset-sunxi.o
obj-$(CONFIG_ARCH_STI) += sti/
......@@ -43,7 +43,7 @@ struct reset_control {
* This simple translation function should be used for reset controllers
* with 1:1 mapping, where reset lines can be indexed by number without gaps.
*/
int of_reset_simple_xlate(struct reset_controller_dev *rcdev,
static int of_reset_simple_xlate(struct reset_controller_dev *rcdev,
const struct of_phandle_args *reset_spec)
{
if (WARN_ON(reset_spec->args_count != rcdev->of_reset_n_cells))
......@@ -54,7 +54,6 @@ int of_reset_simple_xlate(struct reset_controller_dev *rcdev,
return reset_spec->args[0];
}
EXPORT_SYMBOL_GPL(of_reset_simple_xlate);
/**
* reset_controller_register - register a reset controller device
......@@ -127,15 +126,16 @@ int reset_control_deassert(struct reset_control *rstc)
EXPORT_SYMBOL_GPL(reset_control_deassert);
/**
* reset_control_get - Lookup and obtain a reference to a reset controller.
* @dev: device to be reset by the controller
* of_reset_control_get - Lookup and obtain a reference to a reset controller.
* @node: device node of the device to be reset by the controller
* @id: reset line name
*
* Returns a struct reset_control or IS_ERR() condition containing errno.
*
* Use of id names is optional.
*/
struct reset_control *reset_control_get(struct device *dev, const char *id)
struct reset_control *of_reset_control_get(struct device_node *node,
const char *id)
{
struct reset_control *rstc = ERR_PTR(-EPROBE_DEFER);
struct reset_controller_dev *r, *rcdev;
......@@ -144,13 +144,10 @@ struct reset_control *reset_control_get(struct device *dev, const char *id)
int rstc_id;
int ret;
if (!dev)
return ERR_PTR(-EINVAL);
if (id)
index = of_property_match_string(dev->of_node,
index = of_property_match_string(node,
"reset-names", id);
ret = of_parse_phandle_with_args(dev->of_node, "resets", "#reset-cells",
ret = of_parse_phandle_with_args(node, "resets", "#reset-cells",
index, &args);
if (ret)
return ERR_PTR(ret);
......@@ -167,7 +164,7 @@ struct reset_control *reset_control_get(struct device *dev, const char *id)
if (!rcdev) {
mutex_unlock(&reset_controller_list_mutex);
return ERR_PTR(-ENODEV);
return ERR_PTR(-EPROBE_DEFER);
}
rstc_id = rcdev->of_xlate(rcdev, &args);
......@@ -185,12 +182,35 @@ struct reset_control *reset_control_get(struct device *dev, const char *id)
return ERR_PTR(-ENOMEM);
}
rstc->dev = dev;
rstc->rcdev = rcdev;
rstc->id = rstc_id;
return rstc;
}
EXPORT_SYMBOL_GPL(of_reset_control_get);
/**
* reset_control_get - Lookup and obtain a reference to a reset controller.
* @dev: device to be reset by the controller
* @id: reset line name
*
* Returns a struct reset_control or IS_ERR() condition containing errno.
*
* Use of id names is optional.
*/
struct reset_control *reset_control_get(struct device *dev, const char *id)
{
struct reset_control *rstc;
if (!dev)
return ERR_PTR(-EINVAL);
rstc = of_reset_control_get(dev->of_node, id);
if (!IS_ERR(rstc))
rstc->dev = dev;
return rstc;
}
EXPORT_SYMBOL_GPL(reset_control_get);
/**
......@@ -243,33 +263,6 @@ struct reset_control *devm_reset_control_get(struct device *dev, const char *id)
}
EXPORT_SYMBOL_GPL(devm_reset_control_get);
static int devm_reset_control_match(struct device *dev, void *res, void *data)
{
struct reset_control **rstc = res;
if (WARN_ON(!rstc || !*rstc))
return 0;
return *rstc == data;
}
/**
* devm_reset_control_put - resource managed reset_control_put()
* @rstc: reset controller to free
*
* Deallocate a reset control allocated with devm_reset_control_get().
* This function will not need to be called normally, as devres will take
* care of freeing the resource.
*/
void devm_reset_control_put(struct reset_control *rstc)
{
int ret;
ret = devres_release(rstc->dev, devm_reset_control_release,
devm_reset_control_match, rstc);
if (ret)
WARN_ON(ret);
}
EXPORT_SYMBOL_GPL(devm_reset_control_put);
/**
* device_reset - find reset controller associated with the device
* and perform reset
......
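The new of_reset_control_get() takes a device_node directly, which is useful when the "resets" property lives on a child node rather than on the driver's own device. A hedged consumer sketch; the node, reset name and surrounding driver context are illustrative assumptions:

/* Hedged sketch: hypothetical consumer, not part of this commit */
#include <linux/err.h>
#include <linux/of.h>
#include <linux/reset.h>

static int my_deassert_child_reset(struct device_node *child)
{
	struct reset_control *rstc;
	int ret;

	/* May return -EPROBE_DEFER if the controller is not registered yet */
	rstc = of_reset_control_get(child, "softreset");
	if (IS_ERR(rstc))
		return PTR_ERR(rstc);

	ret = reset_control_deassert(rstc);
	reset_control_put(rstc);
	return ret;
}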
if ARCH_STI
config STI_RESET_SYSCFG
bool
select RESET_CONTROLLER
config STIH415_RESET
bool
select STI_RESET_SYSCFG
config STIH416_RESET
bool
select STI_RESET_SYSCFG
endif
obj-$(CONFIG_STI_RESET_SYSCFG) += reset-syscfg.o
obj-$(CONFIG_STIH415_RESET) += reset-stih415.o
obj-$(CONFIG_STIH416_RESET) += reset-stih416.o
/*
* Copyright (C) 2013 STMicroelectronics (R&D) Limited
* Author: Stephen Gallimore <stephen.gallimore@st.com>
* Author: Srinivas Kandagatla <srinivas.kandagatla@st.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <dt-bindings/reset-controller/stih415-resets.h>
#include "reset-syscfg.h"
/*
* STiH415 Peripheral powerdown definitions.
*/
static const char stih415_front[] = "st,stih415-front-syscfg";
static const char stih415_rear[] = "st,stih415-rear-syscfg";
static const char stih415_sbc[] = "st,stih415-sbc-syscfg";
static const char stih415_lpm[] = "st,stih415-lpm-syscfg";
#define STIH415_PDN_FRONT(_bit) \
_SYSCFG_RST_CH(stih415_front, SYSCFG_114, _bit, SYSSTAT_187, _bit)
#define STIH415_PDN_REAR(_cntl, _stat) \
_SYSCFG_RST_CH(stih415_rear, SYSCFG_336, _cntl, SYSSTAT_384, _stat)
#define STIH415_SRST_REAR(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih415_rear, _reg, _bit)
#define STIH415_SRST_SBC(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih415_sbc, _reg, _bit)
#define STIH415_SRST_FRONT(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih415_front, _reg, _bit)
#define STIH415_SRST_LPM(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih415_lpm, _reg, _bit)
#define SYSCFG_114 0x38 /* Powerdown request EMI/NAND/Keyscan */
#define SYSSTAT_187 0x15c /* Powerdown status EMI/NAND/Keyscan */
#define SYSCFG_336 0x90 /* Powerdown request USB/SATA/PCIe */
#define SYSSTAT_384 0x150 /* Powerdown status USB/SATA/PCIe */
#define SYSCFG_376 0x130 /* Reset generator 0 control 0 */
#define SYSCFG_166 0x108 /* Softreset Ethernet 0 */
#define SYSCFG_31 0x7c /* Softreset Ethernet 1 */
#define LPM_SYSCFG_1 0x4 /* Softreset IRB */
static const struct syscfg_reset_channel_data stih415_powerdowns[] = {
[STIH415_EMISS_POWERDOWN] = STIH415_PDN_FRONT(0),
[STIH415_NAND_POWERDOWN] = STIH415_PDN_FRONT(1),
[STIH415_KEYSCAN_POWERDOWN] = STIH415_PDN_FRONT(2),
[STIH415_USB0_POWERDOWN] = STIH415_PDN_REAR(0, 0),
[STIH415_USB1_POWERDOWN] = STIH415_PDN_REAR(1, 1),
[STIH415_USB2_POWERDOWN] = STIH415_PDN_REAR(2, 2),
[STIH415_SATA0_POWERDOWN] = STIH415_PDN_REAR(3, 3),
[STIH415_SATA1_POWERDOWN] = STIH415_PDN_REAR(4, 4),
[STIH415_PCIE_POWERDOWN] = STIH415_PDN_REAR(5, 8),
};
static const struct syscfg_reset_channel_data stih415_softresets[] = {
[STIH415_ETH0_SOFTRESET] = STIH415_SRST_FRONT(SYSCFG_166, 0),
[STIH415_ETH1_SOFTRESET] = STIH415_SRST_SBC(SYSCFG_31, 0),
[STIH415_IRB_SOFTRESET] = STIH415_SRST_LPM(LPM_SYSCFG_1, 6),
[STIH415_USB0_SOFTRESET] = STIH415_SRST_REAR(SYSCFG_376, 9),
[STIH415_USB1_SOFTRESET] = STIH415_SRST_REAR(SYSCFG_376, 10),
[STIH415_USB2_SOFTRESET] = STIH415_SRST_REAR(SYSCFG_376, 11),
};
static struct syscfg_reset_controller_data stih415_powerdown_controller = {
.wait_for_ack = true,
.nr_channels = ARRAY_SIZE(stih415_powerdowns),
.channels = stih415_powerdowns,
};
static struct syscfg_reset_controller_data stih415_softreset_controller = {
.wait_for_ack = false,
.active_low = true,
.nr_channels = ARRAY_SIZE(stih415_softresets),
.channels = stih415_softresets,
};
static struct of_device_id stih415_reset_match[] = {
{ .compatible = "st,stih415-powerdown",
.data = &stih415_powerdown_controller, },
{ .compatible = "st,stih415-softreset",
.data = &stih415_softreset_controller, },
{},
};
static struct platform_driver stih415_reset_driver = {
.probe = syscfg_reset_probe,
.driver = {
.name = "reset-stih415",
.owner = THIS_MODULE,
.of_match_table = stih415_reset_match,
},
};
static int __init stih415_reset_init(void)
{
return platform_driver_register(&stih415_reset_driver);
}
arch_initcall(stih415_reset_init);
/*
* Copyright (C) 2013 STMicroelectronics (R&D) Limited
* Author: Stephen Gallimore <stephen.gallimore@st.com>
* Author: Srinivas Kandagatla <srinivas.kandagatla@st.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <dt-bindings/reset-controller/stih416-resets.h>
#include "reset-syscfg.h"
/*
* STiH416 Peripheral powerdown definitions.
*/
static const char stih416_front[] = "st,stih416-front-syscfg";
static const char stih416_rear[] = "st,stih416-rear-syscfg";
static const char stih416_sbc[] = "st,stih416-sbc-syscfg";
static const char stih416_lpm[] = "st,stih416-lpm-syscfg";
static const char stih416_cpu[] = "st,stih416-cpu-syscfg";
#define STIH416_PDN_FRONT(_bit) \
_SYSCFG_RST_CH(stih416_front, SYSCFG_1500, _bit, SYSSTAT_1578, _bit)
#define STIH416_PDN_REAR(_cntl, _stat) \
_SYSCFG_RST_CH(stih416_rear, SYSCFG_2525, _cntl, SYSSTAT_2583, _stat)
#define SYSCFG_1500 0x7d0 /* Powerdown request EMI/NAND/Keyscan */
#define SYSSTAT_1578 0x908 /* Powerdown status EMI/NAND/Keyscan */
#define SYSCFG_2525 0x834 /* Powerdown request USB/SATA/PCIe */
#define SYSSTAT_2583 0x91c /* Powerdown status USB/SATA/PCIe */
#define SYSCFG_2552 0x8A0 /* Reset Generator control 0 */
#define SYSCFG_1539 0x86c /* Softreset Ethernet 0 */
#define SYSCFG_510 0x7f8 /* Softreset Ethernet 1 */
#define LPM_SYSCFG_1 0x4 /* Softreset IRB */
#define SYSCFG_2553 0x8a4 /* Softreset SATA0/1, PCIE0/1 */
#define SYSCFG_7563 0x8cc /* MPE softresets 0 */
#define SYSCFG_7564 0x8d0 /* MPE softresets 1 */
#define STIH416_SRST_CPU(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih416_cpu, _reg, _bit)
#define STIH416_SRST_FRONT(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih416_front, _reg, _bit)
#define STIH416_SRST_REAR(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih416_rear, _reg, _bit)
#define STIH416_SRST_LPM(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih416_lpm, _reg, _bit)
#define STIH416_SRST_SBC(_reg, _bit) \
_SYSCFG_RST_CH_NO_ACK(stih416_sbc, _reg, _bit)
static const struct syscfg_reset_channel_data stih416_powerdowns[] = {
[STIH416_EMISS_POWERDOWN] = STIH416_PDN_FRONT(0),
[STIH416_NAND_POWERDOWN] = STIH416_PDN_FRONT(1),
[STIH416_KEYSCAN_POWERDOWN] = STIH416_PDN_FRONT(2),
[STIH416_USB0_POWERDOWN] = STIH416_PDN_REAR(0, 0),
[STIH416_USB1_POWERDOWN] = STIH416_PDN_REAR(1, 1),
[STIH416_USB2_POWERDOWN] = STIH416_PDN_REAR(2, 2),
[STIH416_USB3_POWERDOWN] = STIH416_PDN_REAR(6, 5),
[STIH416_SATA0_POWERDOWN] = STIH416_PDN_REAR(3, 3),
[STIH416_SATA1_POWERDOWN] = STIH416_PDN_REAR(4, 4),
[STIH416_PCIE0_POWERDOWN] = STIH416_PDN_REAR(7, 9),
[STIH416_PCIE1_POWERDOWN] = STIH416_PDN_REAR(5, 8),
};
static const struct syscfg_reset_channel_data stih416_softresets[] = {
[STIH416_ETH0_SOFTRESET] = STIH416_SRST_FRONT(SYSCFG_1539, 0),
[STIH416_ETH1_SOFTRESET] = STIH416_SRST_SBC(SYSCFG_510, 0),
[STIH416_IRB_SOFTRESET] = STIH416_SRST_LPM(LPM_SYSCFG_1, 6),
[STIH416_USB0_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2552, 9),
[STIH416_USB1_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2552, 10),
[STIH416_USB2_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2552, 11),
[STIH416_USB3_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2552, 28),
[STIH416_SATA0_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2553, 7),
[STIH416_SATA1_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2553, 3),
[STIH416_PCIE0_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2553, 15),
[STIH416_PCIE1_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2553, 2),
[STIH416_AUD_DAC_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2553, 14),
[STIH416_HDTVOUT_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2552, 5),
[STIH416_VTAC_M_RX_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2552, 25),
[STIH416_VTAC_A_RX_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2552, 26),
[STIH416_SYNC_HD_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2553, 5),
[STIH416_SYNC_SD_SOFTRESET] = STIH416_SRST_REAR(SYSCFG_2553, 6),
[STIH416_BLITTER_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7563, 10),
[STIH416_GPU_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7563, 11),
[STIH416_VTAC_M_TX_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7563, 18),
[STIH416_VTAC_A_TX_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7563, 19),
[STIH416_VTG_AUX_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7563, 21),
[STIH416_JPEG_DEC_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7563, 23),
[STIH416_HVA_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7564, 2),
[STIH416_COMPO_M_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7564, 3),
[STIH416_COMPO_A_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7564, 4),
[STIH416_VP8_DEC_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7564, 10),
[STIH416_VTG_MAIN_SOFTRESET] = STIH416_SRST_CPU(SYSCFG_7564, 16),
};
static struct syscfg_reset_controller_data stih416_powerdown_controller = {
.wait_for_ack = true,
.nr_channels = ARRAY_SIZE(stih416_powerdowns),
.channels = stih416_powerdowns,
};
static struct syscfg_reset_controller_data stih416_softreset_controller = {
.wait_for_ack = false,
.active_low = true,
.nr_channels = ARRAY_SIZE(stih416_softresets),
.channels = stih416_softresets,
};
static struct of_device_id stih416_reset_match[] = {
{ .compatible = "st,stih416-powerdown",
.data = &stih416_powerdown_controller, },
{ .compatible = "st,stih416-softreset",
.data = &stih416_softreset_controller, },
{},
};
static struct platform_driver stih416_reset_driver = {
.probe = syscfg_reset_probe,
.driver = {
.name = "reset-stih416",
.owner = THIS_MODULE,
.of_match_table = stih416_reset_match,
},
};
static int __init stih416_reset_init(void)
{
return platform_driver_register(&stih416_reset_driver);
}
arch_initcall(stih416_reset_init);
/*
* Copyright (C) 2013 STMicroelectronics Limited
* Author: Stephen Gallimore <stephen.gallimore@st.com>
*
* Inspired by mach-imx/src.c
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/types.h>
#include <linux/of_device.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include "reset-syscfg.h"
/**
* Reset channel regmap configuration
*
* @reset: regmap field for the channel's reset bit.
* @ack: regmap field for the channel's ack bit (optional).
*/
struct syscfg_reset_channel {
struct regmap_field *reset;
struct regmap_field *ack;
};
/**
* A reset controller which groups together a set of related reset bits, which
* may be located in different system configuration registers.
*
* @rst: base reset controller structure.
* @active_low: are the resets in this controller active low, i.e. clearing
* the reset bit puts the hardware into reset.
* @channels: An array of reset channels for this controller.
*/
struct syscfg_reset_controller {
struct reset_controller_dev rst;
bool active_low;
struct syscfg_reset_channel *channels;
};
#define to_syscfg_reset_controller(_rst) \
container_of(_rst, struct syscfg_reset_controller, rst)
static int syscfg_reset_program_hw(struct reset_controller_dev *rcdev,
unsigned long idx, int assert)
{
struct syscfg_reset_controller *rst = to_syscfg_reset_controller(rcdev);
const struct syscfg_reset_channel *ch;
u32 ctrl_val = rst->active_low ? !assert : !!assert;
int err;
if (idx >= rcdev->nr_resets)
return -EINVAL;
ch = &rst->channels[idx];
err = regmap_field_write(ch->reset, ctrl_val);
if (err)
return err;
if (ch->ack) {
unsigned long timeout = jiffies + msecs_to_jiffies(1000);
u32 ack_val;
while (true) {
err = regmap_field_read(ch->ack, &ack_val);
if (err)
return err;
if (ack_val == ctrl_val)
break;
if (time_after(jiffies, timeout))
return -ETIME;
cpu_relax();
}
}
return 0;
}
static int syscfg_reset_assert(struct reset_controller_dev *rcdev,
unsigned long idx)
{
return syscfg_reset_program_hw(rcdev, idx, true);
}
static int syscfg_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long idx)
{
return syscfg_reset_program_hw(rcdev, idx, false);
}
static int syscfg_reset_dev(struct reset_controller_dev *rcdev,
unsigned long idx)
{
int err = syscfg_reset_assert(rcdev, idx);
if (err)
return err;
return syscfg_reset_deassert(rcdev, idx);
}
static struct reset_control_ops syscfg_reset_ops = {
.reset = syscfg_reset_dev,
.assert = syscfg_reset_assert,
.deassert = syscfg_reset_deassert,
};
static int syscfg_reset_controller_register(struct device *dev,
const struct syscfg_reset_controller_data *data)
{
struct syscfg_reset_controller *rc;
size_t size;
int i, err;
rc = devm_kzalloc(dev, sizeof(*rc), GFP_KERNEL);
if (!rc)
return -ENOMEM;
size = sizeof(struct syscfg_reset_channel) * data->nr_channels;
rc->channels = devm_kzalloc(dev, size, GFP_KERNEL);
if (!rc->channels)
return -ENOMEM;
rc->rst.ops = &syscfg_reset_ops;
rc->rst.of_node = dev->of_node;
rc->rst.nr_resets = data->nr_channels;
rc->active_low = data->active_low;
for (i = 0; i < data->nr_channels; i++) {
struct regmap *map;
struct regmap_field *f;
const char *compatible = data->channels[i].compatible;
map = syscon_regmap_lookup_by_compatible(compatible);
if (IS_ERR(map))
return PTR_ERR(map);
f = devm_regmap_field_alloc(dev, map, data->channels[i].reset);
if (IS_ERR(f))
return PTR_ERR(f);
rc->channels[i].reset = f;
if (!data->wait_for_ack)
continue;
f = devm_regmap_field_alloc(dev, map, data->channels[i].ack);
if (IS_ERR(f))
return PTR_ERR(f);
rc->channels[i].ack = f;
}
err = reset_controller_register(&rc->rst);
if (!err)
dev_info(dev, "registered\n");
return err;
}
int syscfg_reset_probe(struct platform_device *pdev)
{
struct device *dev = pdev ? &pdev->dev : NULL;
const struct of_device_id *match;
if (!dev || !dev->driver)
return -ENODEV;
match = of_match_device(dev->driver->of_match_table, dev);
if (!match || !match->data)
return -EINVAL;
return syscfg_reset_controller_register(dev, match->data);
}
/*
* Copyright (C) 2013 STMicroelectronics (R&D) Limited
* Author: Stephen Gallimore <stephen.gallimore@st.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#ifndef __STI_RESET_SYSCFG_H
#define __STI_RESET_SYSCFG_H
#include <linux/device.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
/**
* Reset channel description for a system configuration register based
* reset controller.
*
* @compatible: Compatible string of the syscon regmap containing this
* channel's control and ack (status) bits.
* @reset: Regmap field description of the channel's reset bit.
* @ack: Regmap field description of the channel's acknowledge bit.
*/
struct syscfg_reset_channel_data {
const char *compatible;
struct reg_field reset;
struct reg_field ack;
};
#define _SYSCFG_RST_CH(_c, _rr, _rb, _ar, _ab) \
{ .compatible = _c, \
.reset = REG_FIELD(_rr, _rb, _rb), \
.ack = REG_FIELD(_ar, _ab, _ab), }
#define _SYSCFG_RST_CH_NO_ACK(_c, _rr, _rb) \
{ .compatible = _c, \
.reset = REG_FIELD(_rr, _rb, _rb), }
/**
* Description of a system configuration register based reset controller.
*
* @wait_for_ack: The controller will wait for reset assert and de-assert to
* be "ack'd" in a channel's ack field.
* @active_low: Are the resets in this controller active low, i.e. clearing
* the reset bit puts the hardware into reset.
* @nr_channels: The number of reset channels in this controller.
* @channels: An array of reset channel descriptions.
*/
struct syscfg_reset_controller_data {
bool wait_for_ack;
bool active_low;
int nr_channels;
const struct syscfg_reset_channel_data *channels;
};
/**
* syscfg_reset_probe(): platform device probe function used by syscfg
* reset controller drivers. This registers a reset
* controller configured by the OF match data for
* the compatible device which should be of type
* "struct syscfg_reset_controller_data".
*
* @pdev: platform device
*/
int syscfg_reset_probe(struct platform_device *pdev);
#endif /* __STI_RESET_SYSCFG_H */
......@@ -274,10 +274,7 @@ static int isl12057_probe(struct i2c_client *client,
dev_set_drvdata(dev, data);
rtc = devm_rtc_device_register(dev, DRV_NAME, &rtc_ops, THIS_MODULE);
if (IS_ERR(rtc))
return PTR_ERR(rtc);
return 0;
return PTR_ERR_OR_ZERO(rtc);
}
#ifdef CONFIG_OF
......
......@@ -222,6 +222,7 @@ static int __init mv_rtc_probe(struct platform_device *pdev)
struct resource *res;
struct rtc_plat_data *pdata;
u32 rtc_time;
u32 rtc_date;
int ret = 0;
pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
......@@ -257,6 +258,17 @@ static int __init mv_rtc_probe(struct platform_device *pdev)
}
}
/*
* A date after January 19th, 2038 does not fit in 32 bits and
* will confuse the kernel and userspace. Reset to a sane date
* (January 1st, 2013) if we're after 2038.
*/
rtc_date = readl(pdata->ioaddr + RTC_DATE_REG_OFFS);
if (bcd2bin((rtc_date >> RTC_YEAR_OFFS) & 0xff) >= 38) {
dev_info(&pdev->dev, "invalid RTC date, resetting to January 1st, 2013\n");
writel(0x130101, pdata->ioaddr + RTC_DATE_REG_OFFS);
}
pdata->irq = platform_get_irq(pdev, 0);
platform_set_drvdata(pdev, pdata);
......
......@@ -36,9 +36,47 @@ static void sh_clk_write(int value, struct clk *clk)
iowrite32(value, clk->mapped_reg);
}
static unsigned int r8(const void __iomem *addr)
{
return ioread8(addr);
}
static unsigned int r16(const void __iomem *addr)
{
return ioread16(addr);
}
static unsigned int r32(const void __iomem *addr)
{
return ioread32(addr);
}
static int sh_clk_mstp_enable(struct clk *clk)
{
sh_clk_write(sh_clk_read(clk) & ~(1 << clk->enable_bit), clk);
if (clk->status_reg) {
unsigned int (*read)(const void __iomem *addr);
int i;
void __iomem *mapped_status = (phys_addr_t)clk->status_reg -
(phys_addr_t)clk->enable_reg + clk->mapped_reg;
if (clk->flags & CLK_ENABLE_REG_8BIT)
read = r8;
else if (clk->flags & CLK_ENABLE_REG_16BIT)
read = r16;
else
read = r32;
for (i = 1000;
(read(mapped_status) & (1 << clk->enable_bit)) && i;
i--)
cpu_relax();
if (!i) {
pr_err("cpg: failed to enable %p[%d]\n",
clk->enable_reg, clk->enable_bit);
return -ETIMEDOUT;
}
}
return 0;
}
......
......@@ -143,7 +143,7 @@ config RCAR_THERMAL
config KIRKWOOD_THERMAL
tristate "Temperature sensor on Marvell Kirkwood SoCs"
depends on ARCH_KIRKWOOD
depends on ARCH_KIRKWOOD || MACH_KIRKWOOD
depends on OF
help
Support for the Kirkwood thermal sensor driver into the Linux thermal
......
......@@ -1024,7 +1024,7 @@ config SERIAL_SGI_IOC3
config SERIAL_MSM
bool "MSM on-chip serial port support"
depends on ARCH_MSM
depends on ARCH_MSM || ARCH_QCOM
select SERIAL_CORE
config SERIAL_MSM_CONSOLE
......
......@@ -301,7 +301,7 @@ config DAVINCI_WATCHDOG
config ORION_WATCHDOG
tristate "Orion watchdog"
depends on ARCH_ORION5X || ARCH_KIRKWOOD || ARCH_DOVE || MACH_DOVE
depends on ARCH_ORION5X || ARCH_KIRKWOOD || ARCH_DOVE || MACH_DOVE || ARCH_MVEBU
select WATCHDOG_CORE
help
Say Y here to include support for the watchdog timer
......
......@@ -18,101 +18,204 @@
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/watchdog.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/of.h>
#include <mach/bridge-regs.h>
#include <linux/of_device.h>
/* RSTOUT mask register physical address for Orion5x, Kirkwood and Dove */
#define ORION_RSTOUT_MASK_OFFSET 0x20108
/* Internal registers can be configured at any 1 MiB aligned address */
#define INTERNAL_REGS_MASK ~(SZ_1M - 1)
/*
* Watchdog timer block registers.
*/
#define TIMER_CTRL 0x0000
#define WDT_EN 0x0010
#define WDT_VAL 0x0024
#define TIMER_A370_STATUS 0x04
#define WDT_MAX_CYCLE_COUNT 0xffffffff
#define WDT_IN_USE 0
#define WDT_OK_TO_CLOSE 1
#define WDT_RESET_OUT_EN BIT(1)
#define WDT_INT_REQ BIT(3)
#define WDT_A370_RATIO_MASK(v) ((v) << 16)
#define WDT_A370_RATIO_SHIFT 5
#define WDT_A370_RATIO (1 << WDT_A370_RATIO_SHIFT)
#define WDT_AXP_FIXED_ENABLE_BIT BIT(10)
#define WDT_A370_EXPIRED BIT(31)
static bool nowayout = WATCHDOG_NOWAYOUT;
static int heartbeat = -1; /* module parameter (seconds) */
static unsigned int wdt_max_duration; /* (seconds) */
static struct clk *clk;
static unsigned int wdt_tclk;
static void __iomem *wdt_reg;
static DEFINE_SPINLOCK(wdt_lock);
static int orion_wdt_ping(struct watchdog_device *wdt_dev)
struct orion_watchdog;
struct orion_watchdog_data {
int wdt_counter_offset;
int wdt_enable_bit;
int rstout_enable_bit;
int (*clock_init)(struct platform_device *,
struct orion_watchdog *);
int (*start)(struct watchdog_device *);
};
struct orion_watchdog {
struct watchdog_device wdt;
void __iomem *reg;
void __iomem *rstout;
unsigned long clk_rate;
struct clk *clk;
const struct orion_watchdog_data *data;
};
static int orion_wdt_clock_init(struct platform_device *pdev,
struct orion_watchdog *dev)
{
spin_lock(&wdt_lock);
int ret;
/* Reload watchdog duration */
writel(wdt_tclk * wdt_dev->timeout, wdt_reg + WDT_VAL);
dev->clk = clk_get(&pdev->dev, NULL);
if (IS_ERR(dev->clk))
return PTR_ERR(dev->clk);
ret = clk_prepare_enable(dev->clk);
if (ret) {
clk_put(dev->clk);
return ret;
}
spin_unlock(&wdt_lock);
dev->clk_rate = clk_get_rate(dev->clk);
return 0;
}
static int orion_wdt_start(struct watchdog_device *wdt_dev)
static int armada370_wdt_clock_init(struct platform_device *pdev,
struct orion_watchdog *dev)
{
u32 reg;
int ret;
dev->clk = clk_get(&pdev->dev, NULL);
if (IS_ERR(dev->clk))
return PTR_ERR(dev->clk);
ret = clk_prepare_enable(dev->clk);
if (ret) {
clk_put(dev->clk);
return ret;
}
/* Setup watchdog input clock */
atomic_io_modify(dev->reg + TIMER_CTRL,
WDT_A370_RATIO_MASK(WDT_A370_RATIO_SHIFT),
WDT_A370_RATIO_MASK(WDT_A370_RATIO_SHIFT));
dev->clk_rate = clk_get_rate(dev->clk) / WDT_A370_RATIO;
return 0;
}
spin_lock(&wdt_lock);
static int armadaxp_wdt_clock_init(struct platform_device *pdev,
struct orion_watchdog *dev)
{
int ret;
dev->clk = of_clk_get_by_name(pdev->dev.of_node, "fixed");
if (IS_ERR(dev->clk))
return PTR_ERR(dev->clk);
ret = clk_prepare_enable(dev->clk);
if (ret) {
clk_put(dev->clk);
return ret;
}
/* Enable the fixed watchdog clock input */
atomic_io_modify(dev->reg + TIMER_CTRL,
WDT_AXP_FIXED_ENABLE_BIT,
WDT_AXP_FIXED_ENABLE_BIT);
dev->clk_rate = clk_get_rate(dev->clk);
return 0;
}
static int orion_wdt_ping(struct watchdog_device *wdt_dev)
{
struct orion_watchdog *dev = watchdog_get_drvdata(wdt_dev);
/* Reload watchdog duration */
writel(dev->clk_rate * wdt_dev->timeout,
dev->reg + dev->data->wdt_counter_offset);
return 0;
}
static int armada370_start(struct watchdog_device *wdt_dev)
{
struct orion_watchdog *dev = watchdog_get_drvdata(wdt_dev);
/* Set watchdog duration */
writel(wdt_tclk * wdt_dev->timeout, wdt_reg + WDT_VAL);
writel(dev->clk_rate * wdt_dev->timeout,
dev->reg + dev->data->wdt_counter_offset);
/* Clear the watchdog expiration bit */
atomic_io_modify(dev->reg + TIMER_A370_STATUS, WDT_A370_EXPIRED, 0);
/* Clear watchdog timer interrupt */
writel(~WDT_INT_REQ, BRIDGE_CAUSE);
/* Enable watchdog timer */
atomic_io_modify(dev->reg + TIMER_CTRL, dev->data->wdt_enable_bit,
dev->data->wdt_enable_bit);
atomic_io_modify(dev->rstout, dev->data->rstout_enable_bit,
dev->data->rstout_enable_bit);
return 0;
}
static int orion_start(struct watchdog_device *wdt_dev)
{
struct orion_watchdog *dev = watchdog_get_drvdata(wdt_dev);
/* Set watchdog duration */
writel(dev->clk_rate * wdt_dev->timeout,
dev->reg + dev->data->wdt_counter_offset);
/* Enable watchdog timer */
reg = readl(wdt_reg + TIMER_CTRL);
reg |= WDT_EN;
writel(reg, wdt_reg + TIMER_CTRL);
atomic_io_modify(dev->reg + TIMER_CTRL, dev->data->wdt_enable_bit,
dev->data->wdt_enable_bit);
/* Enable reset on watchdog */
reg = readl(RSTOUTn_MASK);
reg |= WDT_RESET_OUT_EN;
writel(reg, RSTOUTn_MASK);
atomic_io_modify(dev->rstout, dev->data->rstout_enable_bit,
dev->data->rstout_enable_bit);
spin_unlock(&wdt_lock);
return 0;
}
static int orion_wdt_stop(struct watchdog_device *wdt_dev)
static int orion_wdt_start(struct watchdog_device *wdt_dev)
{
u32 reg;
struct orion_watchdog *dev = watchdog_get_drvdata(wdt_dev);
spin_lock(&wdt_lock);
/* There are some per-SoC quirks to handle */
return dev->data->start(wdt_dev);
}
static int orion_wdt_stop(struct watchdog_device *wdt_dev)
{
struct orion_watchdog *dev = watchdog_get_drvdata(wdt_dev);
/* Disable reset on watchdog */
reg = readl(RSTOUTn_MASK);
reg &= ~WDT_RESET_OUT_EN;
writel(reg, RSTOUTn_MASK);
atomic_io_modify(dev->rstout, dev->data->rstout_enable_bit, 0);
/* Disable watchdog timer */
reg = readl(wdt_reg + TIMER_CTRL);
reg &= ~WDT_EN;
writel(reg, wdt_reg + TIMER_CTRL);
atomic_io_modify(dev->reg + TIMER_CTRL, dev->data->wdt_enable_bit, 0);
spin_unlock(&wdt_lock);
return 0;
}
static unsigned int orion_wdt_get_timeleft(struct watchdog_device *wdt_dev)
static int orion_wdt_enabled(struct orion_watchdog *dev)
{
unsigned int time_left;
bool enabled, running;
spin_lock(&wdt_lock);
time_left = readl(wdt_reg + WDT_VAL) / wdt_tclk;
spin_unlock(&wdt_lock);
enabled = readl(dev->rstout) & dev->data->rstout_enable_bit;
running = readl(dev->reg + TIMER_CTRL) & dev->data->wdt_enable_bit;
return time_left;
return enabled && running;
}
static unsigned int orion_wdt_get_timeleft(struct watchdog_device *wdt_dev)
{
struct orion_watchdog *dev = watchdog_get_drvdata(wdt_dev);
return readl(dev->reg + dev->data->wdt_counter_offset) / dev->clk_rate;
}
static int orion_wdt_set_timeout(struct watchdog_device *wdt_dev,
......@@ -136,68 +239,188 @@ static const struct watchdog_ops orion_wdt_ops = {
.get_timeleft = orion_wdt_get_timeleft,
};
static struct watchdog_device orion_wdt = {
.info = &orion_wdt_info,
.ops = &orion_wdt_ops,
.min_timeout = 1,
static irqreturn_t orion_wdt_irq(int irq, void *devid)
{
panic("Watchdog Timeout");
return IRQ_HANDLED;
}
/*
* The original devicetree binding for this driver specified only
* one memory resource, so in order to keep DT backwards compatibility
* we try to fall back to a hardcoded register address if the resource
* is missing from the devicetree.
*/
static void __iomem *orion_wdt_ioremap_rstout(struct platform_device *pdev,
phys_addr_t internal_regs)
{
struct resource *res;
phys_addr_t rstout;
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (res)
return devm_ioremap(&pdev->dev, res->start,
resource_size(res));
/* This workaround works only for "orion-wdt", DT-enabled */
if (!of_device_is_compatible(pdev->dev.of_node, "marvell,orion-wdt"))
return NULL;
rstout = internal_regs + ORION_RSTOUT_MASK_OFFSET;
WARN(1, FW_BUG "falling back to harcoded RSTOUT reg %pa\n", &rstout);
return devm_ioremap(&pdev->dev, rstout, 0x4);
}
static const struct orion_watchdog_data orion_data = {
.rstout_enable_bit = BIT(1),
.wdt_enable_bit = BIT(4),
.wdt_counter_offset = 0x24,
.clock_init = orion_wdt_clock_init,
.start = orion_start,
};
static const struct orion_watchdog_data armada370_data = {
.rstout_enable_bit = BIT(8),
.wdt_enable_bit = BIT(8),
.wdt_counter_offset = 0x34,
.clock_init = armada370_wdt_clock_init,
.start = armada370_start,
};
static const struct orion_watchdog_data armadaxp_data = {
.rstout_enable_bit = BIT(8),
.wdt_enable_bit = BIT(8),
.wdt_counter_offset = 0x34,
.clock_init = armadaxp_wdt_clock_init,
.start = armada370_start,
};
static const struct of_device_id orion_wdt_of_match_table[] = {
{
.compatible = "marvell,orion-wdt",
.data = &orion_data,
},
{
.compatible = "marvell,armada-370-wdt",
.data = &armada370_data,
},
{
.compatible = "marvell,armada-xp-wdt",
.data = &armadaxp_data,
},
{},
};
MODULE_DEVICE_TABLE(of, orion_wdt_of_match_table);
static int orion_wdt_probe(struct platform_device *pdev)
{
struct orion_watchdog *dev;
const struct of_device_id *match;
unsigned int wdt_max_duration; /* (seconds) */
struct resource *res;
int ret;
int ret, irq;
clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(clk)) {
dev_err(&pdev->dev, "Orion Watchdog missing clock\n");
return -ENODEV;
}
clk_prepare_enable(clk);
wdt_tclk = clk_get_rate(clk);
dev = devm_kzalloc(&pdev->dev, sizeof(struct orion_watchdog),
GFP_KERNEL);
if (!dev)
return -ENOMEM;
match = of_match_device(orion_wdt_of_match_table, &pdev->dev);
if (!match)
/* Default legacy match */
match = &orion_wdt_of_match_table[0];
dev->wdt.info = &orion_wdt_info;
dev->wdt.ops = &orion_wdt_ops;
dev->wdt.min_timeout = 1;
dev->data = match->data;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -ENODEV;
wdt_reg = devm_ioremap(&pdev->dev, res->start, resource_size(res));
if (!wdt_reg)
return -ENOMEM;
wdt_max_duration = WDT_MAX_CYCLE_COUNT / wdt_tclk;
dev->reg = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
if (!dev->reg)
return -ENOMEM;
orion_wdt.timeout = wdt_max_duration;
orion_wdt.max_timeout = wdt_max_duration;
watchdog_init_timeout(&orion_wdt, heartbeat, &pdev->dev);
dev->rstout = orion_wdt_ioremap_rstout(pdev, res->start &
INTERNAL_REGS_MASK);
if (!dev->rstout)
return -ENODEV;
watchdog_set_nowayout(&orion_wdt, nowayout);
ret = watchdog_register_device(&orion_wdt);
ret = dev->data->clock_init(pdev, dev);
if (ret) {
clk_disable_unprepare(clk);
dev_err(&pdev->dev, "cannot initialize clock\n");
return ret;
}
wdt_max_duration = WDT_MAX_CYCLE_COUNT / dev->clk_rate;
dev->wdt.timeout = wdt_max_duration;
dev->wdt.max_timeout = wdt_max_duration;
watchdog_init_timeout(&dev->wdt, heartbeat, &pdev->dev);
platform_set_drvdata(pdev, &dev->wdt);
watchdog_set_drvdata(&dev->wdt, dev);
/*
* Let's make sure the watchdog is fully stopped, unless it's
* explicitly enabled. This may be the case if the module was
* removed and re-inserted, or if the bootloader explicitly
* set a running watchdog before booting the kernel.
*/
if (!orion_wdt_enabled(dev))
orion_wdt_stop(&dev->wdt);
/* Request the IRQ only after the watchdog is disabled */
irq = platform_get_irq(pdev, 0);
if (irq > 0) {
/*
* Not all supported platforms specify an interrupt for the
* watchdog, so let's make it optional.
*/
ret = devm_request_irq(&pdev->dev, irq, orion_wdt_irq, 0,
pdev->name, dev);
if (ret < 0) {
dev_err(&pdev->dev, "failed to request IRQ\n");
goto disable_clk;
}
}
watchdog_set_nowayout(&dev->wdt, nowayout);
ret = watchdog_register_device(&dev->wdt);
if (ret)
goto disable_clk;
pr_info("Initial timeout %d sec%s\n",
orion_wdt.timeout, nowayout ? ", nowayout" : "");
dev->wdt.timeout, nowayout ? ", nowayout" : "");
return 0;
disable_clk:
clk_disable_unprepare(dev->clk);
clk_put(dev->clk);
return ret;
}
static int orion_wdt_remove(struct platform_device *pdev)
{
watchdog_unregister_device(&orion_wdt);
clk_disable_unprepare(clk);
struct watchdog_device *wdt_dev = platform_get_drvdata(pdev);
struct orion_watchdog *dev = watchdog_get_drvdata(wdt_dev);
watchdog_unregister_device(wdt_dev);
clk_disable_unprepare(dev->clk);
clk_put(dev->clk);
return 0;
}
static void orion_wdt_shutdown(struct platform_device *pdev)
{
orion_wdt_stop(&orion_wdt);
struct watchdog_device *wdt_dev = platform_get_drvdata(pdev);
orion_wdt_stop(wdt_dev);
}
static const struct of_device_id orion_wdt_of_match_table[] = {
{ .compatible = "marvell,orion-wdt", },
{},
};
MODULE_DEVICE_TABLE(of, orion_wdt_of_match_table);
static struct platform_driver orion_wdt_driver = {
.probe = orion_wdt_probe,
.remove = orion_wdt_remove,
......
/*
* Copyright (C) 2013 Broadcom Corporation
* Copyright 2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _CLOCK_BCM281XX_H
#define _CLOCK_BCM281XX_H
/*
* This file defines the values used to specify clocks provided by
* the clock control units (CCUs) on Broadcom BCM281XX family SoCs.
*/
/* root CCU clock ids */
#define BCM281XX_ROOT_CCU_FRAC_1M 0
#define BCM281XX_ROOT_CCU_CLOCK_COUNT 1
/* aon CCU clock ids */
#define BCM281XX_AON_CCU_HUB_TIMER 0
#define BCM281XX_AON_CCU_PMU_BSC 1
#define BCM281XX_AON_CCU_PMU_BSC_VAR 2
#define BCM281XX_AON_CCU_CLOCK_COUNT 3
/* hub CCU clock ids */
#define BCM281XX_HUB_CCU_TMON_1M 0
#define BCM281XX_HUB_CCU_CLOCK_COUNT 1
/* master CCU clock ids */
#define BCM281XX_MASTER_CCU_SDIO1 0
#define BCM281XX_MASTER_CCU_SDIO2 1
#define BCM281XX_MASTER_CCU_SDIO3 2
#define BCM281XX_MASTER_CCU_SDIO4 3
#define BCM281XX_MASTER_CCU_USB_IC 4
#define BCM281XX_MASTER_CCU_HSIC2_48M 5
#define BCM281XX_MASTER_CCU_HSIC2_12M 6
#define BCM281XX_MASTER_CCU_CLOCK_COUNT 7
/* slave CCU clock ids */
#define BCM281XX_SLAVE_CCU_UARTB 0
#define BCM281XX_SLAVE_CCU_UARTB2 1
#define BCM281XX_SLAVE_CCU_UARTB3 2
#define BCM281XX_SLAVE_CCU_UARTB4 3
#define BCM281XX_SLAVE_CCU_SSP0 4
#define BCM281XX_SLAVE_CCU_SSP2 5
#define BCM281XX_SLAVE_CCU_BSC1 6
#define BCM281XX_SLAVE_CCU_BSC2 7
#define BCM281XX_SLAVE_CCU_BSC3 8
#define BCM281XX_SLAVE_CCU_PWM 9
#define BCM281XX_SLAVE_CCU_CLOCK_COUNT 10
#endif /* _CLOCK_BCM281XX_H */
......@@ -46,8 +46,8 @@
#define R8A7790_CLK_MSIOF1 8
#define R8A7790_CLK_MSIOF3 15
#define R8A7790_CLK_SCIFB2 16
#define R8A7790_CLK_SYS_DMAC0 18
#define R8A7790_CLK_SYS_DMAC1 19
#define R8A7790_CLK_SYS_DMAC1 18
#define R8A7790_CLK_SYS_DMAC0 19
/* MSTP3 */
#define R8A7790_CLK_TPU0 4
......
......@@ -93,6 +93,11 @@ int gic_get_cpu_id(unsigned int cpu);
void gic_migrate_target(unsigned int new_cpu_id);
unsigned long gic_get_sgir_physaddr(void);
extern const struct irq_domain_ops *gic_routable_irq_domain_ops;
static inline void __init register_routable_domain_ops
(const struct irq_domain_ops *ops)
{
gic_routable_irq_domain_ops = ops;
}
#endif /* __ASSEMBLY */
#endif
......@@ -29,8 +29,10 @@
struct device_node;
struct pt_regs;
void __vic_init(void __iomem *base, int irq_start, u32 vic_sources,
u32 resume_sources, struct device_node *node);
void __vic_init(void __iomem *base, int parent_irq, int irq_start,
u32 vic_sources, u32 resume_sources, struct device_node *node);
void vic_init(void __iomem *base, unsigned int irq_start, u32 vic_sources, u32 resume_sources);
int vic_init_cascaded(void __iomem *base, unsigned int parent_irq,
u32 vic_sources, u32 resume_sources);
#endif
/*
* drivers/irqchip/irq-crossbar.h
*
* Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
int irqcrossbar_init(void);
void integrator_clk_init(bool is_cp);
void integrator_impd1_clk_init(void __iomem *base, unsigned int id);
void integrator_impd1_clk_exit(unsigned int id);
......@@ -10,6 +10,8 @@
#ifndef _MACH_DAVINCI_AEMIF_H
#define _MACH_DAVINCI_AEMIF_H
#include <linux/platform_device.h>
#define NRCSR_OFFSET 0x00
#define AWCCR_OFFSET 0x04
#define A1CR_OFFSET 0x10
......@@ -31,6 +33,5 @@ struct davinci_aemif_timing {
u8 ta;
};
int davinci_aemif_setup_timing(struct davinci_aemif_timing *t,
void __iomem *base, unsigned cs);
int davinci_aemif_setup(struct platform_device *pdev);
#endif
......@@ -4,6 +4,8 @@
struct device;
struct reset_control;
#ifdef CONFIG_RESET_CONTROLLER
int reset_control_reset(struct reset_control *rstc);
int reset_control_assert(struct reset_control *rstc);
int reset_control_deassert(struct reset_control *rstc);
......@@ -12,6 +14,67 @@ struct reset_control *reset_control_get(struct device *dev, const char *id);
void reset_control_put(struct reset_control *rstc);
struct reset_control *devm_reset_control_get(struct device *dev, const char *id);
int device_reset(struct device *dev);
int __must_check device_reset(struct device *dev);
static inline int device_reset_optional(struct device *dev)
{
return device_reset(dev);
}
static inline struct reset_control *reset_control_get_optional(
struct device *dev, const char *id)
{
return reset_control_get(dev, id);
}
static inline struct reset_control *devm_reset_control_get_optional(
struct device *dev, const char *id)
{
return devm_reset_control_get(dev, id);
}
#else
static inline int reset_control_reset(struct reset_control *rstc)
{
WARN_ON(1);
return 0;
}
static inline int reset_control_assert(struct reset_control *rstc)
{
WARN_ON(1);
return 0;
}
static inline int reset_control_deassert(struct reset_control *rstc)
{
WARN_ON(1);
return 0;
}
static inline void reset_control_put(struct reset_control *rstc)
{
WARN_ON(1);
}
static inline int device_reset_optional(struct device *dev)
{
return -ENOSYS;
}
static inline struct reset_control *reset_control_get_optional(
struct device *dev, const char *id)
{
return ERR_PTR(-ENOSYS);
}
static inline struct reset_control *devm_reset_control_get_optional(
struct device *dev, const char *id)
{
return ERR_PTR(-ENOSYS);
}
#endif /* CONFIG_RESET_CONTROLLER */
#endif
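The *_optional variants together with the !CONFIG_RESET_CONTROLLER stubs above let a consumer treat its reset line as best-effort instead of failing the probe. A hedged sketch of the intended pattern; the driver context is an illustrative assumption:

/* Hedged sketch: hypothetical consumer, not part of this commit */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static int my_hw_init(struct device *dev)
{
	struct reset_control *rstc;

	rstc = devm_reset_control_get_optional(dev, NULL);
	if (IS_ERR(rstc)) {
		/* Still honour probe deferral; other errors are ignored */
		if (PTR_ERR(rstc) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
	} else {
		reset_control_deassert(rstc);
	}

	return 0;
}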
......@@ -52,6 +52,7 @@ struct clk {
unsigned long flags;
void __iomem *enable_reg;
void __iomem *status_reg;
unsigned int enable_bit;
void __iomem *mapped_reg;
......@@ -116,22 +117,26 @@ long clk_round_parent(struct clk *clk, unsigned long target,
unsigned long *best_freq, unsigned long *parent_freq,
unsigned int div_min, unsigned int div_max);
#define SH_CLK_MSTP(_parent, _enable_reg, _enable_bit, _flags) \
#define SH_CLK_MSTP(_parent, _enable_reg, _enable_bit, _status_reg, _flags) \
{ \
.parent = _parent, \
.enable_reg = (void __iomem *)_enable_reg, \
.enable_bit = _enable_bit, \
.status_reg = _status_reg, \
.flags = _flags, \
}
#define SH_CLK_MSTP32(_p, _r, _b, _f) \
SH_CLK_MSTP(_p, _r, _b, _f | CLK_ENABLE_REG_32BIT)
SH_CLK_MSTP(_p, _r, _b, 0, _f | CLK_ENABLE_REG_32BIT)
#define SH_CLK_MSTP32_STS(_p, _r, _b, _s, _f) \
SH_CLK_MSTP(_p, _r, _b, _s, _f | CLK_ENABLE_REG_32BIT)
#define SH_CLK_MSTP16(_p, _r, _b, _f) \
SH_CLK_MSTP(_p, _r, _b, _f | CLK_ENABLE_REG_16BIT)
SH_CLK_MSTP(_p, _r, _b, 0, _f | CLK_ENABLE_REG_16BIT)
#define SH_CLK_MSTP8(_p, _r, _b, _f) \
SH_CLK_MSTP(_p, _r, _b, _f | CLK_ENABLE_REG_8BIT)
SH_CLK_MSTP(_p, _r, _b, 0, _f | CLK_ENABLE_REG_8BIT)
int sh_clk_mstp_register(struct clk *clks, int nr);
......
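With the extra status-register argument, a legacy SH/R-Mobile clock table can now describe MSTP gates whose enable must be confirmed by polling a separate status register, as done in sh_clk_mstp_enable() above. A hedged sketch; the register addresses, bit numbers and parent clock are illustrative assumptions:

/* Hedged sketch: illustrative MSTP clock entries, not part of this commit */
#include <linux/sh_clk.h>

#define SMSTPCR0	((void __iomem *)0xe6150130)	/* illustrative */
#define SMSTPCR4	((void __iomem *)0xe6150140)	/* illustrative */
#define MSTPSR4		((void __iomem *)0xe615004c)	/* illustrative */

extern struct clk peripheral_clk;

enum { MSTP03, MSTP47, MSTP_NR };

static struct clk mstp_clks[MSTP_NR] = {
	/* Legacy form: write the enable bit, nothing to poll */
	[MSTP03] = SH_CLK_MSTP32(&peripheral_clk, SMSTPCR0, 3, 0),
	/* New form: after enabling, wait for bit 7 of MSTPSR4 to clear */
	[MSTP47] = SH_CLK_MSTP32_STS(&peripheral_clk, SMSTPCR4, 7, MSTPSR4, 0),
};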
config SND_KIRKWOOD_SOC
tristate "SoC Audio for the Marvell Kirkwood and Dove chips"
depends on ARCH_KIRKWOOD || ARCH_DOVE || ARCH_MVEBU || COMPILE_TEST
depends on ARCH_KIRKWOOD || ARCH_DOVE || ARCH_MVEBU || MACH_KIRKWOOD || COMPILE_TEST
help
Say Y or M if you want to add support for codecs attached to
the Kirkwood I2S interface. You will also need to select the
......