Commit cf9b0772 authored by Linus Torvalds

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Arnd Bergmann:
 "This branch contains platform-related driver updates for ARM and
  ARM64; these are the areas that bring the changes:

  New drivers:

   - driver support for Renesas R-Car V3M (R8A77970)

   - power management support for Amlogic GX

   - a new driver for the Tegra BPMP thermal sensor

   - a new bus driver for Technologic Systems NBUS

  Changes for subsystems that prefer to merge through arm-soc:

   - the usual updates for reset controller drivers from Philipp Zabel,
     with five added drivers for SoCs in the arc, meson, socfpga,
     uniphier and mediatek families

   - updates to the ARM SCPI and PSCI frameworks, from Sudeep Holla,
     Heiner Kallweit and Lorenzo Pieralisi

  Changes specific to some ARM-based SoCs:

   - the Freescale/NXP DPAA QBMan drivers from PowerPC can now work on
     ARM as well

   - several changes for power management on Broadcom SoCs

   - various improvements on Qualcomm, Broadcom, Amlogic, Atmel,
     Mediatek

   - minor cleanups for Samsung and TI OMAP SoCs"

[ NOTE! This doesn't work without the previous ARM SoC device-tree pull,
  because the R8A77970 driver is missing a header file that came from
  that pull.

  The fact that this got merged afterwards only fixes it at this point,
  and bisection of that driver will fail if/when you walk into the
  history of that driver.           - Linus ]

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (96 commits)
  soc: amlogic: meson-gx-pwrc-vpu: fix power-off when powered by bootloader
  bus: add driver for the Technologic Systems NBUS
  memory: omap-gpmc: Remove deprecated gpmc_update_nand_reg()
  soc: qcom: remove unused label
  soc: amlogic: gx pm domain: add PM and OF dependencies
  drivers/firmware: psci_checker: Add missing destroy_timer_on_stack()
  dt-bindings: power: add amlogic meson power domain bindings
  soc: amlogic: add Meson GX VPU Domains driver
  soc: qcom: Remote filesystem memory driver
  dt-binding: soc: qcom: Add binding for rmtfs memory
  of: reserved_mem: Accessor for acquiring reserved_mem
  of/platform: Generalize /reserved-memory handling
  soc: mediatek: pwrap: fix fatal compiler error
  soc: mediatek: pwrap: fix compiler errors
  arm64: mediatek: cleanup message for platform selection
  soc: Allow test-building of MediaTek drivers
  soc: mediatek: place Kconfig for all SoC drivers under menu
  soc: mediatek: pwrap: add support for MT7622 SoC
  soc: mediatek: pwrap: add common way for setup CS timing extension
  soc: mediatek: pwrap: add MediaTek MT6380 as one slave of pwrap
  ..
parents 527d1470 339cd0ea
......@@ -164,6 +164,8 @@ Control registers for this memory controller's DDR PHY.
Required properties:
- compatible : should contain one of these
"brcm,brcmstb-ddr-phy-v71.1"
"brcm,brcmstb-ddr-phy-v72.0"
"brcm,brcmstb-ddr-phy-v225.1"
"brcm,brcmstb-ddr-phy-v240.1"
"brcm,brcmstb-ddr-phy-v240.2"
......@@ -184,7 +186,9 @@ Sequencer DRAM parameters and control registers. Used for Self-Refresh
Power-Down (SRPD), among other things.
Required properties:
- compatible : should contain "brcm,brcmstb-memc-ddr"
- compatible : should contain one of these
"brcm,brcmstb-memc-ddr-rev-b.2.2"
"brcm,brcmstb-memc-ddr"
- reg : the MEMC DDR register range
Example:
......
......@@ -4,7 +4,6 @@ Properties:
- compatible : should contain two values. First value must be one from the following list:
- "samsung,exynos3250-pmu" - for Exynos3250 SoC,
- "samsung,exynos4210-pmu" - for Exynos4210 SoC,
- "samsung,exynos4212-pmu" - for Exynos4212 SoC,
- "samsung,exynos4412-pmu" - for Exynos4412 SoC,
- "samsung,exynos5250-pmu" - for Exynos5250 SoC,
- "samsung,exynos5260-pmu" - for Exynos5260 SoC.
......
......@@ -18,6 +18,8 @@ Required properties:
* Core, iface, and bus clocks required for "qcom,scm"
- clock-names: Must contain "core" for the core clock, "iface" for the interface
clock and "bus" for the bus clock per the requirements of the compatible.
- qcom,dload-mode: phandle to the TCSR hardware block and offset of the
download mode control register (optional)
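For illustration, the property on its own might look like this (a hedged
sketch: the tcsr label and the 0x6100 offset are illustrative, not taken
from a real board file):
	scm {
		compatible = "qcom,scm";
		...
		qcom,dload-mode = <&tcsr 0x6100>;
	};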
Example for MSM8916:
......
DDR PHY Front End (DPFE) for Broadcom STB
=========================================
DPFE and the DPFE firmware provide an interface for the host CPU to
communicate with the DCPU, which resides inside the DDR PHY.
There are three memory regions for interacting with the DCPU. These are
specified in a single reg property.
Required properties:
- compatible: must be "brcm,bcm7271-dpfe-cpu", "brcm,bcm7268-dpfe-cpu"
or "brcm,dpfe-cpu"
- reg: must reference three register ranges
- start address and length of the DCPU register space
- start address and length of the DCPU data memory space
- start address and length of the DCPU instruction memory space
- reg-names: must contain "dpfe-cpu", "dpfe-dmem", and "dpfe-imem";
they must be in the same order as the register declarations
Example:
dpfe_cpu0: dpfe-cpu@f1132000 {
compatible = "brcm,bcm7271-dpfe-cpu", "brcm,dpfe-cpu";
reg = <0xf1132000 0x180
0xf1134000 0x1000
0xf1138000 0x4000>;
reg-names = "dpfe-cpu", "dpfe-dmem", "dpfe-imem";
};
......@@ -11,3 +11,156 @@ Required properties:
The experimental -viper variants are for running Linux on the 3384's
BMIPS4355 cable modem CPU instead of the BMIPS5000 application processor.
Power management
----------------
For power management (particularly, S2/S3/S5 system suspend), the following SoC
components are needed:
= Always-On control block (AON CTRL)
This hardware provides control registers for the "always-on" (even in low-power
modes) hardware, such as the Power Management State Machine (PMSM).
Required properties:
- compatible : should be one of
"brcm,bcm7425-aon-ctrl"
"brcm,bcm7429-aon-ctrl"
"brcm,bcm7435-aon-ctrl" and
"brcm,brcmstb-aon-ctrl"
- reg : the register start and length for the AON CTRL block
Example:
syscon@410000 {
compatible = "brcm,bcm7425-aon-ctrl", "brcm,brcmstb-aon-ctrl";
reg = <0x410000 0x400>;
};
= Memory controllers
A Broadcom STB SoC typically has a number of independent memory controllers,
each of which may have several associated hardware blocks, which are versioned
independently (control registers, DDR PHYs, etc.). One might consider
describing these controllers as a parent "memory controllers" block, which
contains N sub-nodes (one for each controller in the system), each of which is
associated with a number of hardware register resources (e.g., its PHY).
== MEMC (MEMory Controller)
Represents a single memory controller instance.
Required properties:
- compatible : should contain "brcm,brcmstb-memc" and "simple-bus"
- ranges : should contain the child address in the parent address
space, must be 0 here, and the register start and length of
the entire memory controller (including all sub nodes: DDR PHY,
arbiter, etc.)
- #address-cells : must be 1
- #size-cells : must be 1
Example:
memory-controller@0 {
compatible = "brcm,brcmstb-memc", "simple-bus";
ranges = <0x0 0x0 0xa000>;
#address-cells = <1>;
#size-cells = <1>;
memc-arb@1000 {
...
};
memc-ddr@2000 {
...
};
ddr-phy@6000 {
...
};
};
Should contain subnodes for any of the following relevant hardware resources:
== DDR PHY control
Control registers for this memory controller's DDR PHY.
Required properties:
- compatible : should contain one of these
"brcm,brcmstb-ddr-phy-v64.5"
"brcm,brcmstb-ddr-phy"
- reg : the DDR PHY register range and length
Example:
ddr-phy@6000 {
compatible = "brcm,brcmstb-ddr-phy-v64.5";
reg = <0x6000 0xc8>;
};
== DDR memory controller sequencer
Control registers for this memory controller's DDR memory sequencer
Required properties:
- compatible : should contain one of these
"brcm,bcm7425-memc-ddr"
"brcm,bcm7429-memc-ddr"
"brcm,bcm7435-memc-ddr" and
"brcm,brcmstb-memc-ddr"
- reg : the DDR sequencer register range and length
Example:
memc-ddr@2000 {
compatible = "brcm,bcm7425-memc-ddr", "brcm,brcmstb-memc-ddr";
reg = <0x2000 0x300>;
};
== MEMC Arbiter
The memory controller arbiter is responsible for memory client allocation
(bandwidth, priorities, etc.) and needs to have its contents restored during
deep sleep states (S3).
Required properties:
- compatible : should contain one of these
"brcm,brcmstb-memc-arb-v10.0.0.0"
"brcm,brcmstb-memc-arb"
- reg : the DDR Arbiter register range and length
Example:
memc-arb@1000 {
compatible = "brcm,brcmstb-memc-arb-v10.0.0.0";
reg = <0x1000 0x248>;
};
== Timers
The Broadcom STB chips contain a timer block with several general purpose
timers that can be used.
Required properties:
- compatible : should contain one of:
"brcm,bcm7425-timers"
"brcm,bcm7429-timers"
"brcm,bcm7435-timers and
"brcm,brcmstb-timers"
- reg : the timers register range
- interrupts : the interrupt line for this timer block
Example:
timers: timer@4067c0 {
compatible = "brcm,bcm7425-timers", "brcm,brcmstb-timers";
reg = <0x4067c0 0x40>;
interrupts = <&periph_intc 19>;
};
Amlogic Meson Power Controller
==============================
The Amlogic Meson SoCs embed an internal power domain controller.
VPU Power Domain
----------------
The Video Processing Unit power domain is controlled by this power controller,
but the domain requires some external resources to meet the correct power
sequences.
The bindings must respect the power domain bindings as described in the file
power_domain.txt
Device Tree Bindings:
---------------------
Required properties:
- compatible: should be "amlogic,meson-gx-pwrc-vpu" for the Meson GX SoCs
- #power-domain-cells: should be 0
- amlogic,hhi-sysctrl: phandle to the HHI sysctrl node
- resets: phandles to the reset lines needed for this power domain sequence
as described in ../reset/reset.txt
- clocks: from common clock binding: handle to VPU and VAPB clocks
- clock-names: from common clock binding: must contain "vpu", "vapb"
corresponding to the entries in the clocks property.
Parent node should have the following properties :
- compatible: "amlogic,meson-gx-ao-sysctrl", "syscon", "simple-mfd"
- reg: base address and size of the AO system control register space.
Example:
-------
ao_sysctrl: sys-ctrl@0 {
compatible = "amlogic,meson-gx-ao-sysctrl", "syscon", "simple-mfd";
reg = <0x0 0x0 0x0 0x100>;
pwrc_vpu: power-controller-vpu {
compatible = "amlogic,meson-gx-pwrc-vpu";
#power-domain-cells = <0>;
amlogic,hhi-sysctrl = <&sysctrl>;
resets = <&reset RESET_VIU>,
<&reset RESET_VENC>,
<&reset RESET_VCBUS>,
<&reset RESET_BT656>,
<&reset RESET_DVIN_RESET>,
<&reset RESET_RDMA>,
<&reset RESET_VENCI>,
<&reset RESET_VENCP>,
<&reset RESET_VDAC>,
<&reset RESET_VDI6>,
<&reset RESET_VENCL>,
<&reset RESET_VID_LOCK>;
clocks = <&clkc CLKID_VPU>,
<&clkc CLKID_VAPB>;
clock-names = "vpu", "vapb";
};
};
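A consumer device then references the domain through the generic
power-domains property; a minimal sketch (the vpu node name and unit
address are illustrative):
	vpu: vpu@d0100000 {
		...
		power-domains = <&pwrc_vpu>;
	};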
......@@ -17,6 +17,7 @@ Required properties:
- "renesas,r8a7794-sysc" (R-Car E2)
- "renesas,r8a7795-sysc" (R-Car H3)
- "renesas,r8a7796-sysc" (R-Car M3-W)
- "renesas,r8a77970-sysc" (R-Car V3M)
- "renesas,r8a77995-sysc" (R-Car D3)
- reg: Address start and address range for the device.
- #power-domain-cells: Must be 1.
......
Qualcomm Remote File System Memory binding
This binding describes the Qualcomm remote filesystem memory: the shared
memory region used by remote processors to access block device data using the
Remote Filesystem protocol.
- compatible:
Usage: required
Value type: <stringlist>
Definition: must be:
"qcom,rmtfs-mem"
- reg:
Usage: required for static allocation
Value type: <prop-encoded-array>
Definition: must specify base address and size of the memory region,
as described in reserved-memory.txt
- size:
Usage: required for dynamic allocation
Value type: <prop-encoded-array>
Definition: must specify a size of the memory region, as described in
reserved-memory.txt
- qcom,client-id:
Usage: required
Value type: <u32>
Definition: identifier of the client to use this region for buffers.
- qcom,vmid:
Usage: optional
Value type: <u32>
Definition: vmid of the remote processor, to set up memory protection.
= EXAMPLE
The following example shows the remote filesystem memory setup for APQ8016,
with the rmtfs region for the Hexagon DSP (id #1) located at 0x86700000.
reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
ranges;
rmtfs@86700000 {
compatible = "qcom,rmtfs-mem";
reg = <0x0 0x86700000 0x0 0xe0000>;
no-map;
qcom,client-id = <1>;
};
};
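The example above uses static allocation via reg. A hedged sketch of the
dynamic-allocation variant of the same node (under the same reserved-memory
parent), using the size property from reserved-memory.txt, with an
illustrative size value:
	rmtfs_mem: rmtfs {
		compatible = "qcom,rmtfs-mem";
		size = <0x0 0xe0000>;
		no-map;
		qcom,client-id = <1>;
	};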
......@@ -26,6 +26,7 @@ Required properties:
- "renesas,r8a7794-rst" (R-Car E2)
- "renesas,r8a7795-rst" (R-Car H3)
- "renesas,r8a7796-rst" (R-Car M3-W)
- "renesas,r8a77970-rst" (R-Car V3M)
- "renesas,r8a77995-rst" (R-Car D3)
- reg: Address start and address range for the device.
......
Binding for the AXS10x reset controller
This binding describes the custom IP block on ARC AXS10x boards that controls
the reset signals of selected peripherals, for example the DW GMAC.
This block is controlled via a memory-mapped register (AKA CREG) that
represents up to 32 reset lines.
As of today only the following lines are used:
- DW GMAC - line 5
This binding uses the common reset binding[1].
[1] Documentation/devicetree/bindings/reset/reset.txt
Required properties:
- compatible: should be "snps,axs10x-reset".
- reg: should always contain the address/length pair of the CREG reset
bits register.
- #reset-cells: from common reset binding; Should always be set to 1.
Example:
reset: reset-controller@11220 {
compatible = "snps,axs10x-reset";
#reset-cells = <1>;
reg = <0x11220 0x4>;
};
Specifying reset lines connected to IP modules:
ethernet@.... {
....
resets = <&reset 5>;
....
};
......@@ -13,6 +13,7 @@ Required properties:
"socionext,uniphier-pxs2-reset" - for PXs2/LD6b SoC
"socionext,uniphier-ld11-reset" - for LD11 SoC
"socionext,uniphier-ld20-reset" - for LD20 SoC
"socionext,uniphier-pxs3-reset" - for PXs3 SoC
- #reset-cells: should be 1.
Example:
......@@ -44,6 +45,7 @@ Required properties:
"socionext,uniphier-ld11-mio-reset" - for LD11 SoC (MIO)
"socionext,uniphier-ld11-sd-reset" - for LD11 SoC (SD)
"socionext,uniphier-ld20-sd-reset" - for LD20 SoC
"socionext,uniphier-pxs3-sd-reset" - for PXs3 SoC
- #reset-cells: should be 1.
Example:
......@@ -74,6 +76,7 @@ Required properties:
"socionext,uniphier-pxs2-peri-reset" - for PXs2/LD6b SoC
"socionext,uniphier-ld11-peri-reset" - for LD11 SoC
"socionext,uniphier-ld20-peri-reset" - for LD20 SoC
"socionext,uniphier-pxs3-peri-reset" - for PXs3 SoC
- #reset-cells: should be 1.
Example:
......
......@@ -65,8 +65,8 @@ to the respective BMan instance
BMan Private Memory Node
BMan requires a contiguous range of physical memory used for the backing store
for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as a
node under the /reserved-memory node
for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as
a node under the /reserved-memory node.
The BMan FBPR memory node must be named "bman-fbpr"
......@@ -75,7 +75,9 @@ PROPERTIES
- compatible
Usage: required
Value type: <stringlist>
Definition: Must inclide "fsl,bman-fbpr"
Definition: PPC platforms: Must include "fsl,bman-fbpr"
ARM platforms: Must include "shared-dma-pool"
as well as the "no-map" property
The following constraints are relevant to the FBPR private memory:
- The size must be 2^(size + 1), with size = 11..33. That is 4 KiB to
......@@ -100,10 +102,10 @@ The example below shows a BMan FBPR dynamic allocation memory node
ranges;
bman_fbpr: bman-fbpr {
compatible = "fsl,bman-fbpr";
alloc-ranges = <0 0 0x10 0>;
compatible = "shared-mem-pool";
size = <0 0x1000000>;
alignment = <0 0x1000000>;
no-map;
};
};
......
......@@ -60,6 +60,12 @@ are located at offsets 0xbf8 and 0xbfc
Value type: <prop-encoded-array>
Definition: Reference input clock. Its frequency is half of the
platform clock
- memory-region
Usage: Required for ARM
Value type: <phandle array>
Definition: List of phandles referencing the QMan private memory
nodes (described below). The qman-fqd node must come
first, followed by the qman-pfdr node. Only used on ARM
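As a sketch, a QMan node on ARM might then reference the two private memory
nodes as follows (the node name, unit address and remaining properties are
elided or illustrative):
	qman: qman@1880000 {
		compatible = "fsl,qman";
		...
		memory-region = <&qman_fqd &qman_pfdr>;
	};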
Devices connected to a QMan instance via Direct Connect Portals (DCP) must link
to the respective QMan instance
......@@ -74,7 +80,9 @@ QMan Private Memory Nodes
QMan requires two contiguous ranges of physical memory used for the backing store
for QMan Frame Queue Descriptor (FQD) and Packed Frame Descriptor Record (PFDR).
This memory is reserved/allocated as a nodes under the /reserved-memory node
This memory is reserved/allocated as a node under the /reserved-memory node.
For additional details about reserved memory regions see reserved-memory.txt
The QMan FQD memory node must be named "qman-fqd"
......@@ -83,7 +91,9 @@ PROPERTIES
- compatible
Usage: required
Value type: <stringlist>
Definition: Must inclide "fsl,qman-fqd"
Definition: PPC platforms: Must include "fsl,qman-fqd"
ARM platforms: Must include "shared-dma-pool"
as well as the "no-map" property
The QMan PFDR memory node must be named "qman-pfdr"
......@@ -92,7 +102,9 @@ PROPERTIES
- compatible
Usage: required
Value type: <stringlist>
Definition: Must inclide "fsl,qman-pfdr"
Definition: PPC platforms: Must include "fsl,qman-pfdr"
ARM platforms: Must include "shared-dma-pool"
as well as the "no-map" property
The following constraints are relevant to the FQD and PFDR private memory:
- The size must be 2^(size + 1), with size = 11..29. That is 4 KiB to
......@@ -117,16 +129,16 @@ The example below shows a QMan FQD and a PFDR dynamic allocation memory nodes
ranges;
qman_fqd: qman-fqd {
compatible = "fsl,qman-fqd";
alloc-ranges = <0 0 0x10 0>;
compatible = "shared-dma-pool";
size = <0 0x400000>;
alignment = <0 0x400000>;
no-map;
};
qman_pfdr: qman-pfdr {
compatible = "fsl,qman-pfdr";
alloc-ranges = <0 0 0x10 0>;
compatible = "shared-dma-pool";
size = <0 0x2000000>;
alignment = <0 0x2000000>;
no-map;
};
};
......
......@@ -19,6 +19,7 @@ IP Pairing
Required properties in pwrap device node.
- compatible:
"mediatek,mt2701-pwrap" for MT2701/7623 SoCs
"mediatek,mt7622-pwrap" for MT7622 SoCs
"mediatek,mt8135-pwrap" for MT8135 SoCs
"mediatek,mt8173-pwrap" for MT8173 SoCs
- interrupts: IRQ for pwrap in SOC
......@@ -36,9 +37,12 @@ Required properties in pwrap device node.
- clocks: Must contain an entry for each entry in clock-names.
Optional properties:
- pmic: Mediatek PMIC MFD is the child device of pwrap
- pmic: Using either the MediaTek PMIC MFD as the child device of pwrap
(see the following for child node definitions:
Documentation/devicetree/bindings/mfd/mt6397.txt)
or the regulator-only device as the child device of pwrap, such as the
MT6380 (see the following for definitions of such devices:
Documentation/devicetree/bindings/regulator/mt6380-regulator.txt)
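A hedged sketch of the regulator-only case (the node name, unit address and
child layout are assumptions; see mt6380-regulator.txt for the authoritative
child binding):
	pwrap: pwrap@1000d000 {
		compatible = "mediatek,mt7622-pwrap";
		...
		pmic {
			compatible = "mediatek,mt6380-regulator";
			/* regulator subnodes per mt6380-regulator.txt */
		};
	};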
Example:
pwrap: pwrap@1000f000 {
......
......@@ -1219,6 +1219,8 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.linux4sam.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/nferre/linux-at91.git
S: Supported
N: at91
N: atmel
F: arch/arm/mach-at91/
F: include/soc/at91/
F: arch/arm/boot/dts/at91*.dts
......@@ -1227,6 +1229,9 @@ F: arch/arm/boot/dts/sama*.dts
F: arch/arm/boot/dts/sama*.dtsi
F: arch/arm/include/debug/at91.S
F: drivers/memory/atmel*
F: drivers/watchdog/sama5d4_wdt.c
X: drivers/input/touchscreen/atmel_mxt_ts.c
X: drivers/net/wireless/atmel/
ARM/CALXEDA HIGHBANK ARCHITECTURE
M: Rob Herring <robh@kernel.org>
......@@ -2141,7 +2146,6 @@ F: drivers/gpio/gpio-zx.c
F: drivers/i2c/busses/i2c-zx2967.c
F: drivers/mmc/host/dw_mmc-zx.*
F: drivers/pinctrl/zte/
F: drivers/reset/reset-zx2967.c
F: drivers/soc/zte/
F: drivers/thermal/zx2967_thermal.c
F: drivers/watchdog/zx2967_wdt.c
......@@ -2990,6 +2994,14 @@ L: bcm-kernel-feedback-list@broadcom.com
S: Maintained
F: drivers/mtd/nand/brcmnand/
BROADCOM STB DPFE DRIVER
M: Markus Mayer <mmayer@broadcom.com>
M: bcm-kernel-feedback-list@broadcom.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/memory-controllers/brcm,dpfe-cpu.txt
F: drivers/memory/brcmstb_dpfe.c
BROADCOM SYSTEMPORT ETHERNET DRIVER
M: Florian Fainelli <f.fainelli@gmail.com>
L: netdev@vger.kernel.org
......@@ -13004,6 +13016,12 @@ F: arch/arc/plat-axs10x
F: arch/arc/boot/dts/ax*
F: Documentation/devicetree/bindings/arc/axs10*
SYNOPSYS AXS10x RESET CONTROLLER DRIVER
M: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
S: Supported
F: drivers/reset/reset-axs10x.c
F: Documentation/devicetree/bindings/reset/snps,axs10x-reset.txt
SYNOPSYS DESIGNWARE APB GPIO DRIVER
M: Hoan Tran <hotran@apm.com>
L: linux-gpio@vger.kernel.org
......
......@@ -54,12 +54,14 @@ static const struct of_device_id mtk_tz_smp_boot_infos[] __initconst = {
{ .compatible = "mediatek,mt8135", .data = &mtk_mt8135_tz_boot },
{ .compatible = "mediatek,mt8127", .data = &mtk_mt8135_tz_boot },
{ .compatible = "mediatek,mt2701", .data = &mtk_mt8135_tz_boot },
{},
};
static const struct of_device_id mtk_smp_boot_infos[] __initconst = {
{ .compatible = "mediatek,mt6589", .data = &mtk_mt6589_boot },
{ .compatible = "mediatek,mt7623", .data = &mtk_mt7623_boot },
{ .compatible = "mediatek,mt7623a", .data = &mtk_mt7623_boot },
{},
};
static void __iomem *mtk_smp_base;
......
......@@ -91,12 +91,13 @@ config ARCH_HISI
This enables support for Hisilicon ARMv8 SoC family
config ARCH_MEDIATEK
bool "Mediatek MT65xx & MT81xx ARMv8 SoC"
bool "MediaTek SoC Family"
select ARM_GIC
select PINCTRL
select MTK_TIMER
help
Support for Mediatek MT65xx & MT81xx ARMv8 SoCs
This enables support for MediaTek MT27xx, MT65xx, MT76xx
& MT81xx ARMv8 SoCs
config ARCH_MESON
bool "Amlogic Platforms"
......
......@@ -165,6 +165,14 @@ config TI_SYSC
Generic driver for Texas Instruments interconnect target module
found on many TI SoCs.
config TS_NBUS
tristate "Technologic Systems NBUS Driver"
depends on SOC_IMX28
depends on OF_GPIO && PWM
help
Driver for the Technologic Systems NBUS which is used to interface
with the peripherals in the FPGA of the TS-4600 SoM.
config UNIPHIER_SYSTEM_BUS
tristate "UniPhier System Bus driver"
depends on ARCH_UNIPHIER && OF
......
......@@ -22,6 +22,7 @@ obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o
obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o
obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o
obj-$(CONFIG_TI_SYSC) += ti-sysc.o
obj-$(CONFIG_TS_NBUS) += ts-nbus.o
obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
......
/*
* NBUS driver for TS-4600 based boards
*
* Copyright (c) 2016 - Savoir-faire Linux
* Author: Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*
* This driver implements a GPIO bit-banged bus, called the NBUS by Technologic
* Systems. It is used to communicate with the peripherals in the FPGA on the
* TS-4600 SoM.
*/
#include <linux/bitops.h>
#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pwm.h>
#include <linux/ts-nbus.h>
#define TS_NBUS_DIRECTION_IN 0
#define TS_NBUS_DIRECTION_OUT 1
#define TS_NBUS_WRITE_ADR 0
#define TS_NBUS_WRITE_VAL 1
struct ts_nbus {
struct pwm_device *pwm;
struct gpio_descs *data;
struct gpio_desc *csn;
struct gpio_desc *txrx;
struct gpio_desc *strobe;
struct gpio_desc *ale;
struct gpio_desc *rdy;
struct mutex lock;
};
/*
* request all gpios required by the bus.
*/
static int ts_nbus_init_pdata(struct platform_device *pdev, struct ts_nbus
*ts_nbus)
{
ts_nbus->data = devm_gpiod_get_array(&pdev->dev, "ts,data",
GPIOD_OUT_HIGH);
if (IS_ERR(ts_nbus->data)) {
dev_err(&pdev->dev, "failed to retrieve ts,data-gpio from dts\n");
return PTR_ERR(ts_nbus->data);
}
ts_nbus->csn = devm_gpiod_get(&pdev->dev, "ts,csn", GPIOD_OUT_HIGH);
if (IS_ERR(ts_nbus->csn)) {
dev_err(&pdev->dev, "failed to retrieve ts,csn-gpio from dts\n");
return PTR_ERR(ts_nbus->csn);
}
ts_nbus->txrx = devm_gpiod_get(&pdev->dev, "ts,txrx", GPIOD_OUT_HIGH);
if (IS_ERR(ts_nbus->txrx)) {
dev_err(&pdev->dev, "failed to retrieve ts,txrx-gpio from dts\n");
return PTR_ERR(ts_nbus->txrx);
}
ts_nbus->strobe = devm_gpiod_get(&pdev->dev, "ts,strobe", GPIOD_OUT_HIGH);
if (IS_ERR(ts_nbus->strobe)) {
dev_err(&pdev->dev, "failed to retrieve ts,strobe-gpio from dts\n");
return PTR_ERR(ts_nbus->strobe);
}
ts_nbus->ale = devm_gpiod_get(&pdev->dev, "ts,ale", GPIOD_OUT_HIGH);
if (IS_ERR(ts_nbus->ale)) {
dev_err(&pdev->dev, "failed to retrieve ts,ale-gpio from dts\n");
return PTR_ERR(ts_nbus->ale);
}
ts_nbus->rdy = devm_gpiod_get(&pdev->dev, "ts,rdy", GPIOD_IN);
if (IS_ERR(ts_nbus->rdy)) {
dev_err(&pdev->dev, "failed to retrieve ts,rdy-gpio from dts\n");
return PTR_ERR(ts_nbus->rdy);
}
return 0;
}
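/*
 * For reference, a hedged sketch of the device tree node these lookups
 * expect: the property names follow from the gpiod connection IDs above
 * ("ts,data" -> "ts,data-gpios", and so on) and from the devm_pwm_get()
 * call in the probe function below ("pwms"); the gpio controller phandles,
 * cells and pwm specifier are purely illustrative:
 *
 *	nbus {
 *		compatible = "technologic,ts-nbus";
 *		pwms = <&pwm 2 83>;
 *		ts,data-gpios = <&gpio0 0 0>, <&gpio0 1 0>, ...;  (8 entries)
 *		ts,csn-gpios = <&gpio0 8 0>;
 *		ts,txrx-gpios = <&gpio0 9 0>;
 *		ts,strobe-gpios = <&gpio0 10 0>;
 *		ts,ale-gpios = <&gpio0 11 0>;
 *		ts,rdy-gpios = <&gpio0 12 0>;
 *	};
 */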
/*
* the data gpios are used for reading and writing values; their directions
* should be adjusted accordingly.
*/
static void ts_nbus_set_direction(struct ts_nbus *ts_nbus, int direction)
{
int i;
for (i = 0; i < 8; i++) {
if (direction == TS_NBUS_DIRECTION_IN)
gpiod_direction_input(ts_nbus->data->desc[i]);
else
/* when used as output the default state of the data
* lines is set to high */
gpiod_direction_output(ts_nbus->data->desc[i], 1);
}
}
/*
* reset the bus to its initial state.
* The data, csn, strobe and ale lines must be zeroed to let the FPGA know a
* new transaction can be processed.
*/
static void ts_nbus_reset_bus(struct ts_nbus *ts_nbus)
{
int i;
int values[8];
for (i = 0; i < 8; i++)
values[i] = 0;
gpiod_set_array_value_cansleep(8, ts_nbus->data->desc, values);
gpiod_set_value_cansleep(ts_nbus->csn, 0);
gpiod_set_value_cansleep(ts_nbus->strobe, 0);
gpiod_set_value_cansleep(ts_nbus->ale, 0);
}
/*
* let the FPGA know it can process the transaction.
*/
static void ts_nbus_start_transaction(struct ts_nbus *ts_nbus)
{
gpiod_set_value_cansleep(ts_nbus->strobe, 1);
}
/*
* read a byte value from the data gpios.
* return 0 on success or negative errno on failure.
*/
static int ts_nbus_read_byte(struct ts_nbus *ts_nbus, u8 *val)
{
struct gpio_descs *gpios = ts_nbus->data;
int ret, i;
*val = 0;
for (i = 0; i < 8; i++) {
ret = gpiod_get_value_cansleep(gpios->desc[i]);
if (ret < 0)
return ret;
if (ret)
*val |= BIT(i);
}
return 0;
}
/*
* set the data gpios according to the byte value.
*/
static void ts_nbus_write_byte(struct ts_nbus *ts_nbus, u8 byte)
{
struct gpio_descs *gpios = ts_nbus->data;
int i;
int values[8];
for (i = 0; i < 8; i++)
if (byte & BIT(i))
values[i] = 1;
else
values[i] = 0;
gpiod_set_array_value_cansleep(8, gpios->desc, values);
}
/*
* reading the bus consists of resetting the bus, then notifying the FPGA to
* put the data on the data gpios, and reading back the value.
* return 0 on success or negative errno on failure.
*/
static int ts_nbus_read_bus(struct ts_nbus *ts_nbus, u8 *val)
{
ts_nbus_reset_bus(ts_nbus);
ts_nbus_start_transaction(ts_nbus);
return ts_nbus_read_byte(ts_nbus, val);
}
/*
* writing to the bus consists of resetting the bus, then defining the type of
* command (address/value), writing the data and notifying the FPGA to retrieve
* the value from the data gpios.
*/
static void ts_nbus_write_bus(struct ts_nbus *ts_nbus, int cmd, u8 val)
{
ts_nbus_reset_bus(ts_nbus);
if (cmd == TS_NBUS_WRITE_ADR)
gpiod_set_value_cansleep(ts_nbus->ale, 1);
ts_nbus_write_byte(ts_nbus, val);
ts_nbus_start_transaction(ts_nbus);
}
/*
* read the value in the FPGA register at the given address.
* return 0 on success or negative errno on failure.
*/
int ts_nbus_read(struct ts_nbus *ts_nbus, u8 adr, u16 *val)
{
int ret, i;
u8 byte;
/* bus access must be atomic */
mutex_lock(&ts_nbus->lock);
/* set the bus in read mode */
gpiod_set_value_cansleep(ts_nbus->txrx, 0);
/* write address */
ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_ADR, adr);
/* set the data gpios direction as input before reading */
ts_nbus_set_direction(ts_nbus, TS_NBUS_DIRECTION_IN);
/* reading value MSB first */
do {
*val = 0;
byte = 0;
for (i = 1; i >= 0; i--) {
/* read a byte from the bus, leave on error */
ret = ts_nbus_read_bus(ts_nbus, &byte);
if (ret < 0)
goto err;
/* append the byte read to the final value */
*val |= byte << (i * 8);
}
gpiod_set_value_cansleep(ts_nbus->csn, 1);
ret = gpiod_get_value_cansleep(ts_nbus->rdy);
} while (ret);
err:
/* restore the data gpios direction as output after reading */
ts_nbus_set_direction(ts_nbus, TS_NBUS_DIRECTION_OUT);
mutex_unlock(&ts_nbus->lock);
return ret;
}
EXPORT_SYMBOL_GPL(ts_nbus_read);
/*
* write the desired value in the FPGA register at the given address.
*/
int ts_nbus_write(struct ts_nbus *ts_nbus, u8 adr, u16 val)
{
int i;
/* bus access must be atomic */
mutex_lock(&ts_nbus->lock);
/* set the bus in write mode */
gpiod_set_value_cansleep(ts_nbus->txrx, 1);
/* write address */
ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_ADR, adr);
/* writing value MSB first */
for (i = 1; i >= 0; i--)
ts_nbus_write_bus(ts_nbus, TS_NBUS_WRITE_VAL, (u8)(val >> (i * 8)));
/* wait for completion */
gpiod_set_value_cansleep(ts_nbus->csn, 1);
while (gpiod_get_value_cansleep(ts_nbus->rdy) != 0) {
gpiod_set_value_cansleep(ts_nbus->csn, 0);
gpiod_set_value_cansleep(ts_nbus->csn, 1);
}
mutex_unlock(&ts_nbus->lock);
return 0;
}
EXPORT_SYMBOL_GPL(ts_nbus_write);
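/*
 * A minimal usage sketch (not part of this driver) for a peripheral driver
 * populated under this bus. The ts_nbus handle comes from the parent
 * device's drvdata, which ts_nbus_probe() below sets for exactly this
 * purpose; the register offsets are hypothetical:
 *
 *	struct ts_nbus *nbus = dev_get_drvdata(pdev->dev.parent);
 *	u16 id;
 *	int err;
 *
 *	err = ts_nbus_read(nbus, 0x00, &id);        (0x00: hypothetical ID reg)
 *	if (!err)
 *		err = ts_nbus_write(nbus, 0x02, 1); (0x02: hypothetical ctrl reg)
 */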
static int ts_nbus_probe(struct platform_device *pdev)
{
struct pwm_device *pwm;
struct pwm_args pargs;
struct device *dev = &pdev->dev;
struct ts_nbus *ts_nbus;
int ret;
ts_nbus = devm_kzalloc(dev, sizeof(*ts_nbus), GFP_KERNEL);
if (!ts_nbus)
return -ENOMEM;
mutex_init(&ts_nbus->lock);
ret = ts_nbus_init_pdata(pdev, ts_nbus);
if (ret < 0)
return ret;
pwm = devm_pwm_get(dev, NULL);
if (IS_ERR(pwm)) {
ret = PTR_ERR(pwm);
if (ret != -EPROBE_DEFER)
dev_err(dev, "unable to request PWM\n");
return ret;
}
pwm_get_args(pwm, &pargs);
if (!pargs.period) {
dev_err(&pdev->dev, "invalid PWM period\n");
return -EINVAL;
}
/*
* FIXME: pwm_apply_args() should be removed when switching to
* the atomic PWM API.
*/
pwm_apply_args(pwm);
ret = pwm_config(pwm, pargs.period, pargs.period);
if (ret < 0)
return ret;
/*
* we can now start the FPGA and populate the peripherals.
*/
pwm_enable(pwm);
ts_nbus->pwm = pwm;
/*
* let the child nodes retrieve this instance of the ts-nbus.
*/
dev_set_drvdata(dev, ts_nbus);
ret = of_platform_populate(dev->of_node, NULL, NULL, dev);
if (ret < 0)
return ret;
dev_info(dev, "initialized\n");
return 0;
}
static int ts_nbus_remove(struct platform_device *pdev)
{
struct ts_nbus *ts_nbus = dev_get_drvdata(&pdev->dev);
/* shutdown the FPGA */
mutex_lock(&ts_nbus->lock);
pwm_disable(ts_nbus->pwm);
mutex_unlock(&ts_nbus->lock);
return 0;
}
static const struct of_device_id ts_nbus_of_match[] = {
{ .compatible = "technologic,ts-nbus", },
{ },
};
MODULE_DEVICE_TABLE(of, ts_nbus_of_match);
static struct platform_driver ts_nbus_driver = {
.probe = ts_nbus_probe,
.remove = ts_nbus_remove,
.driver = {
.name = "ts_nbus",
.of_match_table = ts_nbus_of_match,
},
};
module_platform_driver(ts_nbus_driver);
MODULE_ALIAS("platform:ts_nbus");
MODULE_AUTHOR("Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>");
MODULE_DESCRIPTION("Technologic Systems NBUS");
MODULE_LICENSE("GPL v2");
......@@ -30,6 +30,15 @@ config CLK_BCM_CYGNUS
help
Enable common clock framework support for the Broadcom Cygnus SoC
config CLK_BCM_HR2
bool "Broadcom Hurricane 2 clock support"
depends on ARCH_BCM_HR2 || COMPILE_TEST
select COMMON_CLK_IPROC
default ARCH_BCM_HR2
help
Enable common clock framework support for the Broadcom Hurricane 2
SoC
config CLK_BCM_NSP
bool "Broadcom Northstar/Northstar Plus clock support"
depends on ARCH_BCM_5301X || ARCH_BCM_NSP || COMPILE_TEST
......
......@@ -9,6 +9,7 @@ obj-$(CONFIG_ARCH_BCM2835) += clk-bcm2835.o
obj-$(CONFIG_ARCH_BCM2835) += clk-bcm2835-aux.o
obj-$(CONFIG_ARCH_BCM_53573) += clk-bcm53573-ilp.o
obj-$(CONFIG_CLK_BCM_CYGNUS) += clk-cygnus.o
obj-$(CONFIG_CLK_BCM_HR2) += clk-hr2.o
obj-$(CONFIG_CLK_BCM_NSP) += clk-nsp.o
obj-$(CONFIG_CLK_BCM_NS2) += clk-ns2.o
obj-$(CONFIG_CLK_BCM_SR) += clk-sr.o
/*
* Copyright (C) 2017 Broadcom
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/clk-provider.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include "clk-iproc.h"
static void __init hr2_armpll_init(struct device_node *node)
{
iproc_armpll_setup(node);
}
CLK_OF_DECLARE(hr2_armpll, "brcm,hr2-armpll", hr2_armpll_init);
......@@ -215,6 +215,17 @@ config QCOM_SCM_64
def_bool y
depends on QCOM_SCM && ARM64
config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
bool "Qualcomm download mode enabled by default"
depends on QCOM_SCM
help
A device with "download mode" enabled will, upon an unexpected
warm restart, enter a special debug mode that allows the user to
"download" memory content over USB for offline postmortem analysis.
The feature can be enabled/disabled on the kernel command line.
Say Y here to enable "download mode" by default.
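For reference, the module_param() declaration in qcom_scm.c (later in this
series) names the switch "qcom_scm.download_mode"; for example, passing
"qcom_scm.download_mode=0" on the kernel command line disables it.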
config TI_SCI_PROTOCOL
tristate "TI System Control Interface (TISCI) Message Protocol"
depends on TI_MESSAGE_MANAGER
......
......@@ -28,6 +28,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/bitmap.h>
#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/export.h>
......@@ -72,21 +73,13 @@
#define MAX_DVFS_DOMAINS 8
#define MAX_DVFS_OPPS 16
#define DVFS_LATENCY(hdr) (le32_to_cpu(hdr) >> 16)
#define DVFS_OPP_COUNT(hdr) ((le32_to_cpu(hdr) >> 8) & 0xff)
#define PROTOCOL_REV_MINOR_BITS 16
#define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1)
#define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS)
#define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK)
#define FW_REV_MAJOR_BITS 24
#define FW_REV_MINOR_BITS 16
#define FW_REV_PATCH_MASK ((1U << FW_REV_MINOR_BITS) - 1)
#define FW_REV_MINOR_MASK ((1U << FW_REV_MAJOR_BITS) - 1)
#define FW_REV_MAJOR(x) ((x) >> FW_REV_MAJOR_BITS)
#define FW_REV_MINOR(x) (((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS)
#define FW_REV_PATCH(x) ((x) & FW_REV_PATCH_MASK)
#define PROTO_REV_MAJOR_MASK GENMASK(31, 16)
#define PROTO_REV_MINOR_MASK GENMASK(15, 0)
#define FW_REV_MAJOR_MASK GENMASK(31, 24)
#define FW_REV_MINOR_MASK GENMASK(23, 16)
#define FW_REV_PATCH_MASK GENMASK(15, 0)
#define MAX_RX_TIMEOUT (msecs_to_jiffies(30))
......@@ -311,10 +304,6 @@ struct clk_get_info {
u8 name[20];
} __packed;
struct clk_get_value {
__le32 rate;
} __packed;
struct clk_set_value {
__le16 id;
__le16 reserved;
......@@ -328,7 +317,9 @@ struct legacy_clk_set_value {
} __packed;
struct dvfs_info {
__le32 header;
u8 domain;
u8 opp_count;
__le16 latency;
struct {
__le32 freq;
__le32 m_volt;
......@@ -351,11 +342,6 @@ struct _scpi_sensor_info {
char name[20];
};
struct sensor_value {
__le32 lo_val;
__le32 hi_val;
} __packed;
struct dev_pstate_set {
__le16 dev_id;
u8 pstate;
......@@ -419,19 +405,20 @@ static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd)
unsigned int len;
if (scpi_info->is_legacy) {
struct legacy_scpi_shared_mem *mem = ch->rx_payload;
struct legacy_scpi_shared_mem __iomem *mem =
ch->rx_payload;
/* RX Length is not replied by the legacy Firmware */
len = match->rx_len;
match->status = le32_to_cpu(mem->status);
match->status = ioread32(&mem->status);
memcpy_fromio(match->rx_buf, mem->payload, len);
} else {
struct scpi_shared_mem *mem = ch->rx_payload;
struct scpi_shared_mem __iomem *mem = ch->rx_payload;
len = min(match->rx_len, CMD_SIZE(cmd));
match->status = le32_to_cpu(mem->status);
match->status = ioread32(&mem->status);
memcpy_fromio(match->rx_buf, mem->payload, len);
}
......@@ -445,11 +432,11 @@ static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd)
static void scpi_handle_remote_msg(struct mbox_client *c, void *msg)
{
struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
struct scpi_shared_mem *mem = ch->rx_payload;
struct scpi_shared_mem __iomem *mem = ch->rx_payload;
u32 cmd = 0;
if (!scpi_info->is_legacy)
cmd = le32_to_cpu(mem->command);
cmd = ioread32(&mem->command);
scpi_process_cmd(ch, cmd);
}
......@@ -459,7 +446,7 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg)
unsigned long flags;
struct scpi_xfer *t = msg;
struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload;
struct scpi_shared_mem __iomem *mem = ch->tx_payload;
if (t->tx_buf) {
if (scpi_info->is_legacy)
......@@ -478,7 +465,7 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg)
}
if (!scpi_info->is_legacy)
mem->command = cpu_to_le32(t->cmd);
iowrite32(t->cmd, &mem->command);
}
static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch)
......@@ -583,13 +570,13 @@ scpi_clk_get_range(u16 clk_id, unsigned long *min, unsigned long *max)
static unsigned long scpi_clk_get_val(u16 clk_id)
{
int ret;
struct clk_get_value clk;
__le32 rate;
__le16 le_clk_id = cpu_to_le16(clk_id);
ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
sizeof(le_clk_id), &clk, sizeof(clk));
sizeof(le_clk_id), &rate, sizeof(rate));
return ret ? ret : le32_to_cpu(clk.rate);
return ret ? ret : le32_to_cpu(rate);
}
static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
......@@ -644,35 +631,35 @@ static int opp_cmp_func(const void *opp1, const void *opp2)
}
static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
{
if (domain >= MAX_DVFS_DOMAINS)
return ERR_PTR(-EINVAL);
return scpi_info->dvfs[domain] ?: ERR_PTR(-EINVAL);
}
static int scpi_dvfs_populate_info(struct device *dev, u8 domain)
{
struct scpi_dvfs_info *info;
struct scpi_opp *opp;
struct dvfs_info buf;
int ret, i;
if (domain >= MAX_DVFS_DOMAINS)
return ERR_PTR(-EINVAL);
if (scpi_info->dvfs[domain]) /* data already populated */
return scpi_info->dvfs[domain];
ret = scpi_send_message(CMD_GET_DVFS_INFO, &domain, sizeof(domain),
&buf, sizeof(buf));
if (ret)
return ERR_PTR(ret);
return ret;
info = kmalloc(sizeof(*info), GFP_KERNEL);
info = devm_kmalloc(dev, sizeof(*info), GFP_KERNEL);
if (!info)
return ERR_PTR(-ENOMEM);
return -ENOMEM;
info->count = DVFS_OPP_COUNT(buf.header);
info->latency = DVFS_LATENCY(buf.header) * 1000; /* uS to nS */
info->count = buf.opp_count;
info->latency = le16_to_cpu(buf.latency) * 1000; /* uS to nS */
info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL);
if (!info->opps) {
kfree(info);
return ERR_PTR(-ENOMEM);
}
info->opps = devm_kcalloc(dev, info->count, sizeof(*opp), GFP_KERNEL);
if (!info->opps)
return -ENOMEM;
for (i = 0, opp = info->opps; i < info->count; i++, opp++) {
opp->freq = le32_to_cpu(buf.opps[i].freq);
......@@ -682,7 +669,15 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
sort(info->opps, info->count, sizeof(*opp), opp_cmp_func, NULL);
scpi_info->dvfs[domain] = info;
return info;
return 0;
}
static void scpi_dvfs_populate(struct device *dev)
{
int domain;
for (domain = 0; domain < MAX_DVFS_DOMAINS; domain++)
scpi_dvfs_populate_info(dev, domain);
}
static int scpi_dev_domain_id(struct device *dev)
......@@ -713,9 +708,6 @@ static int scpi_dvfs_get_transition_latency(struct device *dev)
if (IS_ERR(info))
return PTR_ERR(info);
if (!info->latency)
return 0;
return info->latency;
}
......@@ -776,20 +768,19 @@ static int scpi_sensor_get_info(u16 sensor_id, struct scpi_sensor_info *info)
static int scpi_sensor_get_value(u16 sensor, u64 *val)
{
__le16 id = cpu_to_le16(sensor);
struct sensor_value buf;
__le64 value;
int ret;
ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id),
&buf, sizeof(buf));
&value, sizeof(value));
if (ret)
return ret;
if (scpi_info->is_legacy)
/* only 32-bits supported, hi_val can be junk */
*val = le32_to_cpu(buf.lo_val);
/* only 32-bits supported, upper 32 bits can be junk */
*val = le32_to_cpup((__le32 *)&value);
else
*val = (u64)le32_to_cpu(buf.hi_val) << 32 |
le32_to_cpu(buf.lo_val);
*val = le64_to_cpu(value);
return 0;
}
......@@ -862,23 +853,19 @@ static int scpi_init_versions(struct scpi_drvinfo *info)
static ssize_t protocol_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
return sprintf(buf, "%d.%d\n",
PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
PROTOCOL_REV_MINOR(scpi_info->protocol_version));
return sprintf(buf, "%lu.%lu\n",
FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version),
FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version));
}
static DEVICE_ATTR_RO(protocol_version);
static ssize_t firmware_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
return sprintf(buf, "%d.%d.%d\n",
FW_REV_MAJOR(scpi_info->firmware_version),
FW_REV_MINOR(scpi_info->firmware_version),
FW_REV_PATCH(scpi_info->firmware_version));
return sprintf(buf, "%lu.%lu.%lu\n",
FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version),
FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version),
FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version));
}
static DEVICE_ATTR_RO(firmware_version);
......@@ -889,39 +876,13 @@ static struct attribute *versions_attrs[] = {
};
ATTRIBUTE_GROUPS(versions);
static void
scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count)
static void scpi_free_channels(void *data)
{
struct scpi_drvinfo *info = data;
int i;
for (i = 0; i < count && pchan->chan; i++, pchan++) {
mbox_free_channel(pchan->chan);
devm_kfree(dev, pchan->xfers);
devm_iounmap(dev, pchan->rx_payload);
}
}
static int scpi_remove(struct platform_device *pdev)
{
int i;
struct device *dev = &pdev->dev;
struct scpi_drvinfo *info = platform_get_drvdata(pdev);
scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */
of_platform_depopulate(dev);
sysfs_remove_groups(&dev->kobj, versions_groups);
scpi_free_channels(dev, info->channels, info->num_chans);
platform_set_drvdata(pdev, NULL);
for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) {
kfree(info->dvfs[i]->opps);
kfree(info->dvfs[i]);
}
devm_kfree(dev, info->channels);
devm_kfree(dev, info);
return 0;
for (i = 0; i < info->num_chans; i++)
mbox_free_channel(info->channels[i].chan);
}
#define MAX_SCPI_XFERS 10
......@@ -952,7 +913,6 @@ static int scpi_probe(struct platform_device *pdev)
{
int count, idx, ret;
struct resource res;
struct scpi_chan *scpi_chan;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
......@@ -969,13 +929,19 @@ static int scpi_probe(struct platform_device *pdev)
return -ENODEV;
}
scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL);
if (!scpi_chan)
scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan),
GFP_KERNEL);
if (!scpi_info->channels)
return -ENOMEM;
for (idx = 0; idx < count; idx++) {
ret = devm_add_action(dev, scpi_free_channels, scpi_info);
if (ret)
return ret;
for (; scpi_info->num_chans < count; scpi_info->num_chans++) {
resource_size_t size;
struct scpi_chan *pchan = scpi_chan + idx;
int idx = scpi_info->num_chans;
struct scpi_chan *pchan = scpi_info->channels + idx;
struct mbox_client *cl = &pchan->cl;
struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
......@@ -983,15 +949,14 @@ static int scpi_probe(struct platform_device *pdev)
of_node_put(shmem);
if (ret) {
dev_err(dev, "failed to get SCPI payload mem resource\n");
goto err;
return ret;
}
size = resource_size(&res);
pchan->rx_payload = devm_ioremap(dev, res.start, size);
if (!pchan->rx_payload) {
dev_err(dev, "failed to ioremap SCPI payload\n");
ret = -EADDRNOTAVAIL;
goto err;
return -EADDRNOTAVAIL;
}
pchan->tx_payload = pchan->rx_payload + (size >> 1);
......@@ -1017,17 +982,11 @@ static int scpi_probe(struct platform_device *pdev)
dev_err(dev, "failed to get channel%d err %d\n",
idx, ret);
}
err:
scpi_free_channels(dev, scpi_chan, idx);
scpi_info = NULL;
return ret;
}
scpi_info->channels = scpi_chan;
scpi_info->num_chans = count;
scpi_info->commands = scpi_std_commands;
platform_set_drvdata(pdev, scpi_info);
scpi_info->scpi_ops = &scpi_ops;
if (scpi_info->is_legacy) {
/* Replace with legacy variants */
......@@ -1043,23 +1002,23 @@ static int scpi_probe(struct platform_device *pdev)
ret = scpi_init_versions(scpi_info);
if (ret) {
dev_err(dev, "incorrect or no SCP firmware found\n");
scpi_remove(pdev);
return ret;
}
_dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n",
PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
PROTOCOL_REV_MINOR(scpi_info->protocol_version),
FW_REV_MAJOR(scpi_info->firmware_version),
FW_REV_MINOR(scpi_info->firmware_version),
FW_REV_PATCH(scpi_info->firmware_version));
scpi_info->scpi_ops = &scpi_ops;
scpi_dvfs_populate(dev);
_dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n",
FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version),
FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version),
FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version),
FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version),
FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version));
ret = sysfs_create_groups(&dev->kobj, versions_groups);
ret = devm_device_add_groups(dev, versions_groups);
if (ret)
dev_err(dev, "unable to create sysfs version group\n");
return of_platform_populate(dev->of_node, NULL, NULL, dev);
return devm_of_platform_populate(dev);
}
static const struct of_device_id scpi_of_match[] = {
......@@ -1076,7 +1035,6 @@ static struct platform_driver scpi_driver = {
.of_match_table = scpi_of_match,
},
.probe = scpi_probe,
.remove = scpi_remove,
};
module_platform_driver(scpi_driver);
......
......@@ -340,6 +340,7 @@ static int suspend_test_thread(void *arg)
* later.
*/
del_timer(&wakeup_timer);
destroy_timer_on_stack(&wakeup_timer);
if (atomic_dec_return_relaxed(&nb_active_threads) == 0)
complete(&suspend_threads_done);
......
......@@ -561,6 +561,12 @@ int __qcom_scm_pas_mss_reset(struct device *dev, bool reset)
return ret ? : le32_to_cpu(out);
}
int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
{
return qcom_scm_call_atomic2(QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE,
enable ? QCOM_SCM_SET_DLOAD_MODE : 0, 0);
}
int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id)
{
struct {
......@@ -596,3 +602,21 @@ int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr, u32 size,
{
return -ENODEV;
}
int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr,
unsigned int *val)
{
int ret;
ret = qcom_scm_call_atomic1(QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ, addr);
if (ret >= 0)
*val = ret;
return ret < 0 ? ret : 0;
}
int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val)
{
return qcom_scm_call_atomic2(QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE,
addr, val);
}
......@@ -439,3 +439,47 @@ int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr, u32 size,
return ret;
}
int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = QCOM_SCM_SET_DLOAD_MODE;
desc.args[1] = enable ? QCOM_SCM_SET_DLOAD_MODE : 0;
desc.arginfo = QCOM_SCM_ARGS(2);
return qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE,
&desc, &res);
}
int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr,
unsigned int *val)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
int ret;
desc.args[0] = addr;
desc.arginfo = QCOM_SCM_ARGS(1);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ,
&desc, &res);
if (ret >= 0)
*val = res.a1;
return ret < 0 ? ret : 0;
}
int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = addr;
desc.args[1] = val;
desc.arginfo = QCOM_SCM_ARGS(2);
return qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE,
&desc, &res);
}
......@@ -19,15 +19,20 @@
#include <linux/cpumask.h>
#include <linux/export.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/qcom_scm.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/clk.h>
#include <linux/reset-controller.h>
#include "qcom_scm.h"
static bool download_mode = IS_ENABLED(CONFIG_QCOM_SCM_DOWNLOAD_MODE_DEFAULT);
module_param(download_mode, bool, 0);
#define SCM_HAS_CORE_CLK BIT(0)
#define SCM_HAS_IFACE_CLK BIT(1)
#define SCM_HAS_BUS_CLK BIT(2)
......@@ -38,6 +43,8 @@ struct qcom_scm {
struct clk *iface_clk;
struct clk *bus_clk;
struct reset_controller_dev reset;
u64 dload_mode_addr;
};
static struct qcom_scm *__scm;
......@@ -333,6 +340,66 @@ int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare)
}
EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_init);
int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val)
{
return __qcom_scm_io_readl(__scm->dev, addr, val);
}
EXPORT_SYMBOL(qcom_scm_io_readl);
int qcom_scm_io_writel(phys_addr_t addr, unsigned int val)
{
return __qcom_scm_io_writel(__scm->dev, addr, val);
}
EXPORT_SYMBOL(qcom_scm_io_writel);
static void qcom_scm_set_download_mode(bool enable)
{
bool avail;
int ret = 0;
avail = __qcom_scm_is_call_available(__scm->dev,
QCOM_SCM_SVC_BOOT,
QCOM_SCM_SET_DLOAD_MODE);
if (avail) {
ret = __qcom_scm_set_dload_mode(__scm->dev, enable);
} else if (__scm->dload_mode_addr) {
ret = __qcom_scm_io_writel(__scm->dev, __scm->dload_mode_addr,
enable ? QCOM_SCM_SET_DLOAD_MODE : 0);
} else {
dev_err(__scm->dev,
"No available mechanism for setting download mode\n");
}
if (ret)
dev_err(__scm->dev, "failed to set download mode: %d\n", ret);
}
static int qcom_scm_find_dload_address(struct device *dev, u64 *addr)
{
struct device_node *tcsr;
struct device_node *np = dev->of_node;
struct resource res;
u32 offset;
int ret;
tcsr = of_parse_phandle(np, "qcom,dload-mode", 0);
if (!tcsr)
return 0;
ret = of_address_to_resource(tcsr, 0, &res);
of_node_put(tcsr);
if (ret)
return ret;
ret = of_property_read_u32_index(np, "qcom,dload-mode", 1, &offset);
if (ret < 0)
return ret;
*addr = res.start + offset;
return 0;
}
/**
* qcom_scm_is_available() - Checks if SCM is available
*/
......@@ -358,6 +425,10 @@ static int qcom_scm_probe(struct platform_device *pdev)
if (!scm)
return -ENOMEM;
ret = qcom_scm_find_dload_address(&pdev->dev, &scm->dload_mode_addr);
if (ret < 0)
return ret;
clks = (unsigned long)of_device_get_match_data(&pdev->dev);
if (clks & SCM_HAS_CORE_CLK) {
scm->core_clk = devm_clk_get(&pdev->dev, "core");
......@@ -406,9 +477,24 @@ static int qcom_scm_probe(struct platform_device *pdev)
__qcom_scm_init();
/*
* If requested, enable "download mode". From this point on, a warm boot
* will cause the boot stages to enter download mode, unless
* disabled below by a clean shutdown/reboot.
*/
if (download_mode)
qcom_scm_set_download_mode(true);
return 0;
}
static void qcom_scm_shutdown(struct platform_device *pdev)
{
/* Clean shutdown, disable download mode to allow normal restart */
if (download_mode)
qcom_scm_set_download_mode(false);
}
static const struct of_device_id qcom_scm_dt_match[] = {
{ .compatible = "qcom,scm-apq8064",
/* FIXME: This should have .data = (void *) SCM_HAS_CORE_CLK */
......@@ -436,6 +522,7 @@ static struct platform_driver qcom_scm_driver = {
.of_match_table = qcom_scm_dt_match,
},
.probe = qcom_scm_probe,
.shutdown = qcom_scm_shutdown,
};
static int __init qcom_scm_init(void)
......
......@@ -14,9 +14,11 @@
#define QCOM_SCM_SVC_BOOT 0x1
#define QCOM_SCM_BOOT_ADDR 0x1
#define QCOM_SCM_SET_DLOAD_MODE 0x10
#define QCOM_SCM_BOOT_ADDR_MC 0x11
#define QCOM_SCM_SET_REMOTE_STATE 0xa
extern int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id);
extern int __qcom_scm_set_dload_mode(struct device *dev, bool enable);
#define QCOM_SCM_FLAG_HLOS 0x01
#define QCOM_SCM_FLAG_COLDBOOT_MC 0x02
......@@ -30,6 +32,12 @@ extern int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus);
#define QCOM_SCM_CMD_CORE_HOTPLUGGED 0x10
extern void __qcom_scm_cpu_power_down(u32 flags);
#define QCOM_SCM_SVC_IO 0x5
#define QCOM_SCM_IO_READ 0x1
#define QCOM_SCM_IO_WRITE 0x2
extern int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, unsigned int *val);
extern int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val);
#define QCOM_SCM_SVC_INFO 0x6
#define QCOM_IS_CALL_AVAIL_CMD 0x1
extern int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
......
obj-$(CONFIG_TEGRA_BPMP) += bpmp.o
tegra-bpmp-y = bpmp.o
tegra-bpmp-$(CONFIG_DEBUG_FS) += bpmp-debugfs.o
obj-$(CONFIG_TEGRA_BPMP) += tegra-bpmp.o
obj-$(CONFIG_TEGRA_IVC) += ivc.o
/*
* Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
*/
#include <linux/debugfs.h>
#include <linux/dma-mapping.h>
#include <linux/uaccess.h>
#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>
struct seqbuf {
char *buf;
size_t pos;
size_t size;
};
static void seqbuf_init(struct seqbuf *seqbuf, void *buf, size_t size)
{
seqbuf->buf = buf;
seqbuf->size = size;
seqbuf->pos = 0;
}
static size_t seqbuf_avail(struct seqbuf *seqbuf)
{
return seqbuf->pos < seqbuf->size ? seqbuf->size - seqbuf->pos : 0;
}
static int seqbuf_status(struct seqbuf *seqbuf)
{
return seqbuf->pos <= seqbuf->size ? 0 : -EOVERFLOW;
}
static int seqbuf_eof(struct seqbuf *seqbuf)
{
return seqbuf->pos >= seqbuf->size;
}
static int seqbuf_read(struct seqbuf *seqbuf, void *buf, size_t nbyte)
{
nbyte = min(nbyte, seqbuf_avail(seqbuf));
memcpy(buf, seqbuf->buf + seqbuf->pos, nbyte);
seqbuf->pos += nbyte;
return seqbuf_status(seqbuf);
}
static int seqbuf_read_u32(struct seqbuf *seqbuf, uint32_t *v)
{
int err;
err = seqbuf_read(seqbuf, v, 4);
*v = le32_to_cpu(*v);
return err;
}
static int seqbuf_read_str(struct seqbuf *seqbuf, const char **str)
{
*str = seqbuf->buf + seqbuf->pos;
seqbuf->pos += strnlen(*str, seqbuf_avail(seqbuf));
seqbuf->pos++;
return seqbuf_status(seqbuf);
}
static void seqbuf_seek(struct seqbuf *seqbuf, ssize_t offset)
{
seqbuf->pos += offset;
}
/* map filename in Linux debugfs to corresponding entry in BPMP */
static const char *get_filename(struct tegra_bpmp *bpmp,
const struct file *file, char *buf, int size)
{
char root_path_buf[512];
const char *root_path;
const char *filename;
size_t root_len;
root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf,
sizeof(root_path_buf));
if (IS_ERR(root_path))
return NULL;
root_len = strlen(root_path);
filename = dentry_path(file->f_path.dentry, buf, size);
if (IS_ERR(filename))
return NULL;
if (strlen(filename) < root_len ||
strncmp(filename, root_path, root_len))
return NULL;
filename += root_len;
return filename;
}
static int mrq_debugfs_read(struct tegra_bpmp *bpmp,
dma_addr_t name, size_t sz_name,
dma_addr_t data, size_t sz_data,
size_t *nbytes)
{
struct mrq_debugfs_request req = {
.cmd = cpu_to_le32(CMD_DEBUGFS_READ),
.fop = {
.fnameaddr = cpu_to_le32((uint32_t)name),
.fnamelen = cpu_to_le32((uint32_t)sz_name),
.dataaddr = cpu_to_le32((uint32_t)data),
.datalen = cpu_to_le32((uint32_t)sz_data),
},
};
struct mrq_debugfs_response resp;
struct tegra_bpmp_message msg = {
.mrq = MRQ_DEBUGFS,
.tx = {
.data = &req,
.size = sizeof(req),
},
.rx = {
.data = &resp,
.size = sizeof(resp),
},
};
int err;
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
return err;
*nbytes = (size_t)resp.fop.nbytes;
return 0;
}
static int mrq_debugfs_write(struct tegra_bpmp *bpmp,
dma_addr_t name, size_t sz_name,
dma_addr_t data, size_t sz_data)
{
const struct mrq_debugfs_request req = {
.cmd = cpu_to_le32(CMD_DEBUGFS_WRITE),
.fop = {
.fnameaddr = cpu_to_le32((uint32_t)name),
.fnamelen = cpu_to_le32((uint32_t)sz_name),
.dataaddr = cpu_to_le32((uint32_t)data),
.datalen = cpu_to_le32((uint32_t)sz_data),
},
};
struct tegra_bpmp_message msg = {
.mrq = MRQ_DEBUGFS,
.tx = {
.data = &req,
.size = sizeof(req),
},
};
return tegra_bpmp_transfer(bpmp, &msg);
}
static int mrq_debugfs_dumpdir(struct tegra_bpmp *bpmp, dma_addr_t addr,
size_t size, size_t *nbytes)
{
const struct mrq_debugfs_request req = {
.cmd = cpu_to_le32(CMD_DEBUGFS_DUMPDIR),
.dumpdir = {
.dataaddr = cpu_to_le32((uint32_t)addr),
.datalen = cpu_to_le32((uint32_t)size),
},
};
struct mrq_debugfs_response resp;
struct tegra_bpmp_message msg = {
.mrq = MRQ_DEBUGFS,
.tx = {
.data = &req,
.size = sizeof(req),
},
.rx = {
.data = &resp,
.size = sizeof(resp),
},
};
int err;
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
return err;
*nbytes = (size_t)resp.dumpdir.nbytes;
return 0;
}
static int debugfs_show(struct seq_file *m, void *p)
{
struct file *file = m->private;
struct inode *inode = file_inode(file);
struct tegra_bpmp *bpmp = inode->i_private;
const size_t datasize = m->size;
const size_t namesize = SZ_256;
void *datavirt, *namevirt;
dma_addr_t dataphys, namephys;
char buf[256];
const char *filename;
size_t len, nbytes;
int ret;
filename = get_filename(bpmp, file, buf, sizeof(buf));
if (!filename)
return -ENOENT;
namevirt = dma_alloc_coherent(bpmp->dev, namesize, &namephys,
GFP_KERNEL | GFP_DMA32);
if (!namevirt)
return -ENOMEM;
datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys,
GFP_KERNEL | GFP_DMA32);
if (!datavirt) {
ret = -ENOMEM;
goto free_namebuf;
}
len = strlen(filename);
strncpy(namevirt, filename, namesize);
ret = mrq_debugfs_read(bpmp, namephys, len, dataphys, datasize,
&nbytes);
if (!ret)
seq_write(m, datavirt, nbytes);
dma_free_coherent(bpmp->dev, datasize, datavirt, dataphys);
free_namebuf:
dma_free_coherent(bpmp->dev, namesize, namevirt, namephys);
return ret;
}
static int debugfs_open(struct inode *inode, struct file *file)
{
return single_open_size(file, debugfs_show, file, SZ_128K);
}
static ssize_t debugfs_store(struct file *file, const char __user *buf,
size_t count, loff_t *f_pos)
{
struct inode *inode = file_inode(file);
struct tegra_bpmp *bpmp = inode->i_private;
const size_t datasize = count;
const size_t namesize = SZ_256;
void *datavirt, *namevirt;
dma_addr_t dataphys, namephys;
char fnamebuf[256];
const char *filename;
size_t len;
int ret;
filename = get_filename(bpmp, file, fnamebuf, sizeof(fnamebuf));
if (!filename)
return -ENOENT;
namevirt = dma_alloc_coherent(bpmp->dev, namesize, &namephys,
GFP_KERNEL | GFP_DMA32);
if (!namevirt)
return -ENOMEM;
datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys,
GFP_KERNEL | GFP_DMA32);
if (!datavirt) {
ret = -ENOMEM;
goto free_namebuf;
}
len = strlen(filename);
strncpy(namevirt, filename, namesize);
if (copy_from_user(datavirt, buf, count)) {
ret = -EFAULT;
goto free_databuf;
}
ret = mrq_debugfs_write(bpmp, namephys, len, dataphys,
count);
free_databuf:
dma_free_coherent(bpmp->dev, datasize, datavirt, dataphys);
free_namebuf:
dma_free_coherent(bpmp->dev, namesize, namevirt, namephys);
return ret ?: count;
}
static const struct file_operations debugfs_fops = {
.open = debugfs_open,
.read = seq_read,
.llseek = seq_lseek,
.write = debugfs_store,
.release = single_release,
};
static int bpmp_populate_dir(struct tegra_bpmp *bpmp, struct seqbuf *seqbuf,
struct dentry *parent, uint32_t depth)
{
int err;
uint32_t d, t;
const char *name;
struct dentry *dentry;
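/*
 * The listing is a stream of (depth, attributes, NUL-terminated name)
 * records; the entries of a subdirectory follow it at depth + 1.
 */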
while (!seqbuf_eof(seqbuf)) {
err = seqbuf_read_u32(seqbuf, &d);
if (err < 0)
return err;
if (d < depth) {
seqbuf_seek(seqbuf, -4);
/* go up a level */
return 0;
} else if (d != depth) {
/* malformed data received from BPMP */
return -EIO;
}
err = seqbuf_read_u32(seqbuf, &t);
if (err < 0)
return err;
err = seqbuf_read_str(seqbuf, &name);
if (err < 0)
return err;
if (t & DEBUGFS_S_ISDIR) {
dentry = debugfs_create_dir(name, parent);
if (!dentry)
return -ENOMEM;
err = bpmp_populate_dir(bpmp, seqbuf, dentry, depth+1);
if (err < 0)
return err;
} else {
umode_t mode;
mode = t & DEBUGFS_S_IRUSR ? S_IRUSR : 0;
mode |= t & DEBUGFS_S_IWUSR ? S_IWUSR : 0;
dentry = debugfs_create_file(name, mode,
parent, bpmp,
&debugfs_fops);
if (!dentry)
return -ENOMEM;
}
}
return 0;
}
static int create_debugfs_mirror(struct tegra_bpmp *bpmp, void *buf,
size_t bufsize, struct dentry *root)
{
struct seqbuf seqbuf;
int err;
bpmp->debugfs_mirror = debugfs_create_dir("debug", root);
if (!bpmp->debugfs_mirror)
return -ENOMEM;
seqbuf_init(&seqbuf, buf, bufsize);
err = bpmp_populate_dir(bpmp, &seqbuf, bpmp->debugfs_mirror, 0);
if (err < 0) {
debugfs_remove_recursive(bpmp->debugfs_mirror);
bpmp->debugfs_mirror = NULL;
}
return err;
}
static int mrq_is_supported(struct tegra_bpmp *bpmp, unsigned int mrq)
{
struct mrq_query_abi_request req = { .mrq = cpu_to_le32(mrq) };
struct mrq_query_abi_response resp;
struct tegra_bpmp_message msg = {
.mrq = MRQ_QUERY_ABI,
.tx = {
.data = &req,
.size = sizeof(req),
},
.rx = {
.data = &resp,
.size = sizeof(resp),
},
};
int ret;
ret = tegra_bpmp_transfer(bpmp, &msg);
if (ret < 0) {
/* something went wrong; assume not supported */
dev_warn(bpmp->dev, "tegra_bpmp_transfer failed (%d)\n", ret);
return 0;
}
return resp.status ? 0 : 1;
}
int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
{
dma_addr_t phys;
void *virt;
const size_t sz = SZ_256K;
size_t nbytes;
int ret;
struct dentry *root;
if (!mrq_is_supported(bpmp, MRQ_DEBUGFS))
return 0;
root = debugfs_create_dir("bpmp", NULL);
if (!root)
return -ENOMEM;
virt = dma_alloc_coherent(bpmp->dev, sz, &phys,
GFP_KERNEL | GFP_DMA32);
if (!virt) {
ret = -ENOMEM;
goto out;
}
ret = mrq_debugfs_dumpdir(bpmp, phys, sz, &nbytes);
if (ret < 0)
goto free;
ret = create_debugfs_mirror(bpmp, virt, nbytes, root);
free:
dma_free_coherent(bpmp->dev, sz, virt, phys);
out:
if (ret < 0)
debugfs_remove(root);
return ret;
}
......@@ -194,16 +194,24 @@ static int tegra_bpmp_wait_master_free(struct tegra_bpmp_channel *channel)
}
static ssize_t __tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
void *data, size_t size)
void *data, size_t size, int *ret)
{
int err;
if (data && size > 0)
memcpy(data, channel->ib->data, size);
return tegra_ivc_read_advance(channel->ivc);
err = tegra_ivc_read_advance(channel->ivc);
if (err < 0)
return err;
*ret = channel->ib->code;
return 0;
}
static ssize_t tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
void *data, size_t size)
void *data, size_t size, int *ret)
{
struct tegra_bpmp *bpmp = channel->bpmp;
unsigned long flags;
......@@ -217,7 +225,7 @@ static ssize_t tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel,
}
spin_lock_irqsave(&bpmp->lock, flags);
err = __tegra_bpmp_channel_read(channel, data, size);
err = __tegra_bpmp_channel_read(channel, data, size, ret);
clear_bit(index, bpmp->threaded.allocated);
spin_unlock_irqrestore(&bpmp->lock, flags);
......@@ -337,7 +345,8 @@ int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
if (err < 0)
return err;
return __tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size);
return __tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size,
&msg->rx.ret);
}
EXPORT_SYMBOL_GPL(tegra_bpmp_transfer_atomic);
......@@ -371,7 +380,8 @@ int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
if (err == 0)
return -ETIMEDOUT;
return tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size);
return tegra_bpmp_channel_read(channel, msg->rx.data, msg->rx.size,
&msg->rx.ret);
}
EXPORT_SYMBOL_GPL(tegra_bpmp_transfer);
......@@ -387,8 +397,8 @@ static struct tegra_bpmp_mrq *tegra_bpmp_find_mrq(struct tegra_bpmp *bpmp,
return NULL;
}
static void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel,
int code, const void *data, size_t size)
void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel, int code,
const void *data, size_t size)
{
unsigned long flags = channel->ib->flags;
struct tegra_bpmp *bpmp = channel->bpmp;
......@@ -426,6 +436,7 @@ static void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel,
mbox_client_txdone(bpmp->mbox.channel, 0);
}
}
EXPORT_SYMBOL_GPL(tegra_bpmp_mrq_return);
static void tegra_bpmp_handle_mrq(struct tegra_bpmp *bpmp,
unsigned int mrq,
......@@ -824,6 +835,10 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
if (err < 0)
goto free_mrq;
err = tegra_bpmp_init_debugfs(bpmp);
if (err < 0)
dev_err(&pdev->dev, "debugfs initialization failed: %d\n", err);
return 0;
free_mrq:
......
......@@ -439,7 +439,7 @@ static inline int ti_sci_do_xfer(struct ti_sci_info *info,
/* And we wait for the response. */
timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
if (!wait_for_completion_timeout(&xfer->done, timeout)) {
dev_err(dev, "Mbox timedout in resp(caller: %pF)\n",
dev_err(dev, "Mbox timedout in resp(caller: %pS)\n",
(void *)_RET_IP_);
ret = -ETIMEDOUT;
}
......
......@@ -9,6 +9,7 @@ endif
obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o
obj-$(CONFIG_ATMEL_SDRAMC) += atmel-sdramc.o
obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o
obj-$(CONFIG_ARCH_BRCMSTB) += brcmstb_dpfe.o
obj-$(CONFIG_TI_AEMIF) += ti-aemif.o
obj-$(CONFIG_TI_EMIF) += emif.o
obj-$(CONFIG_OMAP_GPMC) += omap-gpmc.o
......
/*
* DDR PHY Front End (DPFE) driver for Broadcom set top box SoCs
*
* Copyright (c) 2017 Broadcom
*
* Released under the GPLv2 only.
* SPDX-License-Identifier: GPL-2.0
*/
/*
* This driver provides access to the DPFE interface of Broadcom STB SoCs.
* The firmware running on the DCPU inside the DDR PHY can provide current
* information about the system's RAM, for instance the DRAM refresh rate.
* This can be used as an indirect indicator for the DRAM's temperature.
* Slower refresh rate means cooler RAM, higher refresh rate means hotter
* RAM.
*
* Throughout the driver, we use readl_relaxed() and writel_relaxed(), which
* already contain the appropriate le32_to_cpu()/cpu_to_le32() calls.
*
* Note regarding the loading of the firmware image: we use be32_to_cpu()
* and le32_to_cpu(), so we can support the following four cases:
* - LE kernel + LE firmware image (the most common case)
* - LE kernel + BE firmware image
* - BE kernel + LE firmware image
* - BE kernel + BE firmware image
*
* The DCPU always runs in big endian mode. The firmware image, however, can
* be in either format. Also, communication between host CPU and DCPU is
* always in little endian.
*/
#include <linux/delay.h>
#include <linux/firmware.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#define DRVNAME "brcmstb-dpfe"
#define FIRMWARE_NAME "dpfe.bin"
/* DCPU register offsets */
#define REG_DCPU_RESET 0x0
#define REG_TO_DCPU_MBOX 0x10
#define REG_TO_HOST_MBOX 0x14
/* Message RAM */
#define DCPU_MSG_RAM(x) (0x100 + (x) * sizeof(u32))
/* DRAM Info Offsets & Masks */
#define DRAM_INFO_INTERVAL 0x0
#define DRAM_INFO_MR4 0x4
#define DRAM_INFO_ERROR 0x8
#define DRAM_INFO_MR4_MASK 0xff
/* DRAM MR4 Offsets & Masks */
#define DRAM_MR4_REFRESH 0x0 /* Refresh rate */
#define DRAM_MR4_SR_ABORT 0x3 /* Self Refresh Abort */
#define DRAM_MR4_PPRE 0x4 /* Post-package repair entry/exit */
#define DRAM_MR4_TH_OFFS 0x5 /* Thermal Offset; vendor specific */
#define DRAM_MR4_TUF 0x7 /* Temperature Update Flag */
#define DRAM_MR4_REFRESH_MASK 0x7
#define DRAM_MR4_SR_ABORT_MASK 0x1
#define DRAM_MR4_PPRE_MASK 0x1
#define DRAM_MR4_TH_OFFS_MASK 0x3
#define DRAM_MR4_TUF_MASK 0x1
/* DRAM Vendor Offsets & Masks */
#define DRAM_VENDOR_MR5 0x0
#define DRAM_VENDOR_MR6 0x4
#define DRAM_VENDOR_MR7 0x8
#define DRAM_VENDOR_MR8 0xc
#define DRAM_VENDOR_ERROR 0x10
#define DRAM_VENDOR_MASK 0xff
/* Reset register bits & masks */
#define DCPU_RESET_SHIFT 0x0
#define DCPU_RESET_MASK 0x1
#define DCPU_CLK_DISABLE_SHIFT 0x2
/* DCPU return codes */
#define DCPU_RET_ERROR_BIT BIT(31)
#define DCPU_RET_SUCCESS 0x1
#define DCPU_RET_ERR_HEADER (DCPU_RET_ERROR_BIT | BIT(0))
#define DCPU_RET_ERR_INVAL (DCPU_RET_ERROR_BIT | BIT(1))
#define DCPU_RET_ERR_CHKSUM (DCPU_RET_ERROR_BIT | BIT(2))
#define DCPU_RET_ERR_COMMAND (DCPU_RET_ERROR_BIT | BIT(3))
/* This error code is not defined by the firmware; it is only used in the driver. */
#define DCPU_RET_ERR_TIMEDOUT (DCPU_RET_ERROR_BIT | BIT(4))
/* Firmware magic */
#define DPFE_BE_MAGIC 0xfe1010fe
#define DPFE_LE_MAGIC 0xfe0101fe
/* Error codes */
#define ERR_INVALID_MAGIC -1
#define ERR_INVALID_SIZE -2
#define ERR_INVALID_CHKSUM -3
/* Message types */
#define DPFE_MSG_TYPE_COMMAND 1
#define DPFE_MSG_TYPE_RESPONSE 2
#define DELAY_LOOP_MAX 200000
enum dpfe_msg_fields {
MSG_HEADER,
MSG_COMMAND,
MSG_ARG_COUNT,
MSG_ARG0,
MSG_CHKSUM,
MSG_FIELD_MAX /* Last entry */
};
enum dpfe_commands {
DPFE_CMD_GET_INFO,
DPFE_CMD_GET_REFRESH,
DPFE_CMD_GET_VENDOR,
DPFE_CMD_MAX /* Last entry */
};
struct dpfe_msg {
u32 header;
u32 command;
u32 arg_count;
u32 arg0;
u32 chksum; /* This is the sum of all other entries. */
};
/*
* Format of the binary firmware file:
*
* entry
* 0 header
* value: 0xfe0101fe <== little endian
* 0xfe1010fe <== big endian
* 1 sequence:
* [31:16] total segments on this build
* [15:0] this segment's sequence number.
* 2 FW version
* 3 IMEM byte size
* 4 DMEM byte size
* IMEM
* DMEM
* last checksum ==> sum of everything
*/
struct dpfe_firmware_header {
u32 magic;
u32 sequence;
u32 version;
u32 imem_size;
u32 dmem_size;
};
/* Things we only need during initialization. */
struct init_data {
unsigned int dmem_len;
unsigned int imem_len;
unsigned int chksum;
bool is_big_endian;
};
/* Things we need for as long as we are active. */
struct private_data {
void __iomem *regs;
void __iomem *dmem;
void __iomem *imem;
struct device *dev;
unsigned int index;
struct mutex lock;
};
static const char *error_text[] = {
"Success", "Header code incorrect", "Unknown command or argument",
"Incorrect checksum", "Malformed command", "Timed out",
};
/* List of supported firmware commands */
static const u32 dpfe_commands[DPFE_CMD_MAX][MSG_FIELD_MAX] = {
[DPFE_CMD_GET_INFO] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 1,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 1,
[MSG_CHKSUM] = 4,
},
[DPFE_CMD_GET_REFRESH] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 2,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 1,
[MSG_CHKSUM] = 5,
},
[DPFE_CMD_GET_VENDOR] = {
[MSG_HEADER] = DPFE_MSG_TYPE_COMMAND,
[MSG_COMMAND] = 2,
[MSG_ARG_COUNT] = 1,
[MSG_ARG0] = 2,
[MSG_CHKSUM] = 6,
},
};
static bool is_dcpu_enabled(void __iomem *regs)
{
u32 val;
val = readl_relaxed(regs + REG_DCPU_RESET);
return !(val & DCPU_RESET_MASK);
}
static void __disable_dcpu(void __iomem *regs)
{
u32 val;
if (!is_dcpu_enabled(regs))
return;
/* Put DCPU in reset if it's running. */
val = readl_relaxed(regs + REG_DCPU_RESET);
val |= (1 << DCPU_RESET_SHIFT);
writel_relaxed(val, regs + REG_DCPU_RESET);
}
static void __enable_dcpu(void __iomem *regs)
{
u32 val;
/* Clear mailbox registers. */
writel_relaxed(0, regs + REG_TO_DCPU_MBOX);
writel_relaxed(0, regs + REG_TO_HOST_MBOX);
/* Disable DCPU clock gating */
val = readl_relaxed(regs + REG_DCPU_RESET);
val &= ~(1 << DCPU_CLK_DISABLE_SHIFT);
writel_relaxed(val, regs + REG_DCPU_RESET);
/* Take DCPU out of reset */
val = readl_relaxed(regs + REG_DCPU_RESET);
val &= ~(1 << DCPU_RESET_SHIFT);
writel_relaxed(val, regs + REG_DCPU_RESET);
}
static unsigned int get_msg_chksum(const u32 msg[])
{
unsigned int sum = 0;
unsigned int i;
/* Don't include the last field in the checksum. */
for (i = 0; i < MSG_FIELD_MAX - 1; i++)
sum += msg[i];
return sum;
}
static int __send_command(struct private_data *priv, unsigned int cmd,
u32 result[])
{
const u32 *msg = dpfe_commands[cmd];
void __iomem *regs = priv->regs;
unsigned int i, chksum;
int ret = 0;
u32 resp;
if (cmd >= DPFE_CMD_MAX)
return -1;
mutex_lock(&priv->lock);
/* Write command and arguments to message area */
for (i = 0; i < MSG_FIELD_MAX; i++)
writel_relaxed(msg[i], regs + DCPU_MSG_RAM(i));
/* Tell DCPU there is a command waiting */
writel_relaxed(1, regs + REG_TO_DCPU_MBOX);
/* Wait for DCPU to process the command */
for (i = 0; i < DELAY_LOOP_MAX; i++) {
/* Read response code */
resp = readl_relaxed(regs + REG_TO_HOST_MBOX);
if (resp > 0)
break;
udelay(5);
}
if (i == DELAY_LOOP_MAX) {
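/*
 * Turn the error bit into a negative index into error_text[],
 * e.g. BIT(4) -> ffs() == 5 -> "Timed out".
 */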
resp = (DCPU_RET_ERR_TIMEDOUT & ~DCPU_RET_ERROR_BIT);
ret = -ffs(resp);
} else {
/* Read response data */
for (i = 0; i < MSG_FIELD_MAX; i++)
result[i] = readl_relaxed(regs + DCPU_MSG_RAM(i));
}
/* Tell DCPU we are done */
writel_relaxed(0, regs + REG_TO_HOST_MBOX);
mutex_unlock(&priv->lock);
if (ret)
return ret;
/* Verify response */
chksum = get_msg_chksum(result);
if (chksum != result[MSG_CHKSUM])
resp = DCPU_RET_ERR_CHKSUM;
if (resp != DCPU_RET_SUCCESS) {
resp &= ~DCPU_RET_ERROR_BIT;
ret = -ffs(resp);
}
return ret;
}
/* Ensure that the firmware file loaded meets all the requirements. */
static int __verify_firmware(struct init_data *init,
const struct firmware *fw)
{
const struct dpfe_firmware_header *header = (void *)fw->data;
unsigned int dmem_size, imem_size, total_size;
bool is_big_endian = false;
const u32 *chksum_ptr;
if (header->magic == DPFE_BE_MAGIC)
is_big_endian = true;
else if (header->magic != DPFE_LE_MAGIC)
return ERR_INVALID_MAGIC;
if (is_big_endian) {
dmem_size = be32_to_cpu(header->dmem_size);
imem_size = be32_to_cpu(header->imem_size);
} else {
dmem_size = le32_to_cpu(header->dmem_size);
imem_size = le32_to_cpu(header->imem_size);
}
/* Data and instruction sections are 32-bit words. */
if ((dmem_size % sizeof(u32)) != 0 || (imem_size % sizeof(u32)) != 0)
return ERR_INVALID_SIZE;
/*
* The header + the data section + the instruction section + the
* checksum must be equal to the total firmware size.
*/
total_size = dmem_size + imem_size + sizeof(*header) +
sizeof(*chksum_ptr);
if (total_size != fw->size)
return ERR_INVALID_SIZE;
/* The checksum comes at the very end. */
chksum_ptr = (void *)fw->data + sizeof(*header) + dmem_size + imem_size;
init->is_big_endian = is_big_endian;
init->dmem_len = dmem_size;
init->imem_len = imem_size;
init->chksum = (is_big_endian)
? be32_to_cpu(*chksum_ptr) : le32_to_cpu(*chksum_ptr);
return 0;
}
/* Verify checksum by reading back the firmware from co-processor RAM. */
static int __verify_fw_checksum(struct init_data *init,
struct private_data *priv,
const struct dpfe_firmware_header *header,
u32 checksum)
{
u32 magic, sequence, version, sum;
u32 __iomem *dmem = priv->dmem;
u32 __iomem *imem = priv->imem;
unsigned int i;
if (init->is_big_endian) {
magic = be32_to_cpu(header->magic);
sequence = be32_to_cpu(header->sequence);
version = be32_to_cpu(header->version);
} else {
magic = le32_to_cpu(header->magic);
sequence = le32_to_cpu(header->sequence);
version = le32_to_cpu(header->version);
}
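/* Per the firmware layout above, the checksum covers all header words
 * plus every IMEM and DMEM word. */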
sum = magic + sequence + version + init->dmem_len + init->imem_len;
for (i = 0; i < init->dmem_len / sizeof(u32); i++)
sum += readl_relaxed(dmem + i);
for (i = 0; i < init->imem_len / sizeof(u32); i++)
sum += readl_relaxed(imem + i);
return (sum == checksum) ? 0 : -1;
}
static int __write_firmware(u32 __iomem *mem, const u32 *fw,
unsigned int size, bool is_big_endian)
{
unsigned int i;
/* Convert size to 32-bit words. */
size /= sizeof(u32);
/* It is recommended to clear the firmware area first. */
for (i = 0; i < size; i++)
writel_relaxed(0, mem + i);
/* Now copy it. */
if (is_big_endian) {
for (i = 0; i < size; i++)
writel_relaxed(be32_to_cpu(fw[i]), mem + i);
} else {
for (i = 0; i < size; i++)
writel_relaxed(le32_to_cpu(fw[i]), mem + i);
}
return 0;
}
static int brcmstb_dpfe_download_firmware(struct platform_device *pdev,
struct init_data *init)
{
const struct dpfe_firmware_header *header;
unsigned int dmem_size, imem_size;
struct device *dev = &pdev->dev;
bool is_big_endian = false;
struct private_data *priv;
const struct firmware *fw;
const u32 *dmem, *imem;
const void *fw_blob;
int ret;
priv = platform_get_drvdata(pdev);
/*
* Skip downloading the firmware if the DCPU is already running and
* responding to commands.
*/
if (is_dcpu_enabled(priv->regs)) {
u32 response[MSG_FIELD_MAX];
ret = __send_command(priv, DPFE_CMD_GET_INFO, response);
if (!ret)
return 0;
}
ret = request_firmware(&fw, FIRMWARE_NAME, dev);
/* request_firmware() prints its own error messages. */
if (ret)
return ret;
ret = __verify_firmware(init, fw);
if (ret) {
ret = -EFAULT;
goto release_fw;
}
__disable_dcpu(priv->regs);
is_big_endian = init->is_big_endian;
dmem_size = init->dmem_len;
imem_size = init->imem_len;
/* At the beginning of the firmware blob is a header. */
header = (struct dpfe_firmware_header *)fw->data;
/* Void pointer to the beginning of the actual firmware. */
fw_blob = fw->data + sizeof(*header);
/* IMEM comes right after the header. */
imem = fw_blob;
/* DMEM follows after IMEM. */
dmem = fw_blob + imem_size;
ret = __write_firmware(priv->dmem, dmem, dmem_size, is_big_endian);
if (ret)
goto release_fw;
ret = __write_firmware(priv->imem, imem, imem_size, is_big_endian);
if (ret)
goto release_fw;
ret = __verify_fw_checksum(init, priv, header, init->chksum);
if (ret)
goto release_fw;
__enable_dcpu(priv->regs);
release_fw:
/* The firmware buffer was either rejected or fully copied into the
 * co-processor's memory; it is no longer needed either way. */
release_firmware(fw);
return ret;
}
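/*
 * Helper for the sysfs show routines below: sends @command to the DCPU
 * and leaves the raw reply in @response. On failure, an error string is
 * written to @buf and its length is returned instead of 0.
 */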
static ssize_t generic_show(unsigned int command, u32 response[],
struct device *dev, char *buf)
{
struct private_data *priv;
int ret;
priv = dev_get_drvdata(dev);
if (!priv)
return sprintf(buf, "ERROR: driver private data not set\n");
ret = __send_command(priv, command, response);
if (ret < 0)
return sprintf(buf, "ERROR: %s\n", error_text[-ret]);
return 0;
}
static ssize_t show_info(struct device *dev, struct device_attribute *devattr,
char *buf)
{
u32 response[MSG_FIELD_MAX];
unsigned int info;
int ret;
ret = generic_show(DPFE_CMD_GET_INFO, response, dev, buf);
if (ret)
return ret;
info = response[MSG_ARG0];
return sprintf(buf, "%u.%u.%u.%u\n",
(info >> 24) & 0xff,
(info >> 16) & 0xff,
(info >> 8) & 0xff,
info & 0xff);
}
static ssize_t show_refresh(struct device *dev,
struct device_attribute *devattr, char *buf)
{
u32 response[MSG_FIELD_MAX];
void __iomem *info;
struct private_data *priv;
unsigned int offset;
u8 refresh, sr_abort, ppre, thermal_offs, tuf;
u32 mr4;
int ret;
ret = generic_show(DPFE_CMD_GET_REFRESH, response, dev, buf);
if (ret)
return ret;
priv = dev_get_drvdata(dev);
offset = response[MSG_ARG0];
info = priv->dmem + offset;
mr4 = readl_relaxed(info + DRAM_INFO_MR4) & DRAM_INFO_MR4_MASK;
refresh = (mr4 >> DRAM_MR4_REFRESH) & DRAM_MR4_REFRESH_MASK;
sr_abort = (mr4 >> DRAM_MR4_SR_ABORT) & DRAM_MR4_SR_ABORT_MASK;
ppre = (mr4 >> DRAM_MR4_PPRE) & DRAM_MR4_PPRE_MASK;
thermal_offs = (mr4 >> DRAM_MR4_TH_OFFS) & DRAM_MR4_TH_OFFS_MASK;
tuf = (mr4 >> DRAM_MR4_TUF) & DRAM_MR4_TUF_MASK;
return sprintf(buf, "%#x %#x %#x %#x %#x %#x %#x\n",
readl_relaxed(info + DRAM_INFO_INTERVAL),
refresh, sr_abort, ppre, thermal_offs, tuf,
readl_relaxed(info + DRAM_INFO_ERROR));
}
static ssize_t store_refresh(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
u32 response[MSG_FIELD_MAX];
struct private_data *priv;
void __iomem *info;
unsigned int offset;
unsigned long val;
int ret;
if (kstrtoul(buf, 0, &val) < 0)
return -EINVAL;
priv = dev_get_drvdata(dev);
ret = __send_command(priv, DPFE_CMD_GET_REFRESH, response);
if (ret)
return ret;
offset = response[MSG_ARG0];
info = priv->dmem + offset;
writel_relaxed(val, info + DRAM_INFO_INTERVAL);
return count;
}
static ssize_t show_vendor(struct device *dev, struct device_attribute *devattr,
char *buf)
{
u32 response[MSG_FIELD_MAX];
struct private_data *priv;
void __iomem *info;
unsigned int offset;
int ret;
ret = generic_show(DPFE_CMD_GET_VENDOR, response, dev, buf);
if (ret)
return ret;
offset = response[MSG_ARG0];
priv = dev_get_drvdata(dev);
info = priv->dmem + offset;
return sprintf(buf, "%#x %#x %#x %#x %#x\n",
readl_relaxed(info + DRAM_VENDOR_MR5) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_MR6) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_MR7) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_MR8) & DRAM_VENDOR_MASK,
readl_relaxed(info + DRAM_VENDOR_ERROR));
}
static int brcmstb_dpfe_resume(struct platform_device *pdev)
{
struct init_data init;
return brcmstb_dpfe_download_firmware(pdev, &init);
}
static DEVICE_ATTR(dpfe_info, 0444, show_info, NULL);
static DEVICE_ATTR(dpfe_refresh, 0644, show_refresh, store_refresh);
static DEVICE_ATTR(dpfe_vendor, 0444, show_vendor, NULL);
static struct attribute *dpfe_attrs[] = {
&dev_attr_dpfe_info.attr,
&dev_attr_dpfe_refresh.attr,
&dev_attr_dpfe_vendor.attr,
NULL
};
ATTRIBUTE_GROUPS(dpfe);
static int brcmstb_dpfe_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct private_data *priv;
struct device *dpfe_dev;
struct init_data init;
struct resource *res;
u32 index;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
mutex_init(&priv->lock);
platform_set_drvdata(pdev, priv);
/* Cell index is optional; default to 0 if not present. */
ret = of_property_read_u32(dev->of_node, "cell-index", &index);
if (ret)
index = 0;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-cpu");
priv->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->regs)) {
dev_err(dev, "couldn't map DCPU registers\n");
return -ENODEV;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-dmem");
priv->dmem = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->dmem)) {
dev_err(dev, "Couldn't map DCPU data memory\n");
return -ENOENT;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-imem");
priv->imem = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->imem)) {
dev_err(dev, "Couldn't map DCPU instruction memory\n");
return -ENOENT;
}
ret = brcmstb_dpfe_download_firmware(pdev, &init);
if (ret)
goto err;
dpfe_dev = devm_kzalloc(dev, sizeof(*dpfe_dev), GFP_KERNEL);
if (!dpfe_dev) {
ret = -ENOMEM;
goto err;
}
priv->dev = dpfe_dev;
priv->index = index;
dpfe_dev->parent = dev;
dpfe_dev->groups = dpfe_groups;
dpfe_dev->of_node = dev->of_node;
dev_set_drvdata(dpfe_dev, priv);
dev_set_name(dpfe_dev, "dpfe%u", index);
ret = device_register(dpfe_dev);
if (ret)
goto err;
dev_info(dev, "registered.\n");
return 0;
err:
dev_err(dev, "failed to initialize -- error %d\n", ret);
return ret;
}
static const struct of_device_id brcmstb_dpfe_of_match[] = {
{ .compatible = "brcm,dpfe-cpu", },
{}
};
MODULE_DEVICE_TABLE(of, brcmstb_dpfe_of_match);
static struct platform_driver brcmstb_dpfe_driver = {
.driver = {
.name = DRVNAME,
.of_match_table = brcmstb_dpfe_of_match,
},
.probe = brcmstb_dpfe_probe,
.resume = brcmstb_dpfe_resume,
};
module_platform_driver(brcmstb_dpfe_driver);
MODULE_AUTHOR("Markus Mayer <mmayer@broadcom.com>");
MODULE_DESCRIPTION("BRCMSTB DDR PHY Front End Driver");
MODULE_LICENSE("GPL");
......@@ -1075,11 +1075,33 @@ int gpmc_configure(int cmd, int wval)
}
EXPORT_SYMBOL(gpmc_configure);
void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
static bool gpmc_nand_writebuffer_empty(void)
{
if (gpmc_read_reg(GPMC_STATUS) & GPMC_STATUS_EMPTYWRITEBUFFERSTATUS)
return true;
return false;
}
static struct gpmc_nand_ops nand_ops = {
.nand_writebuffer_empty = gpmc_nand_writebuffer_empty,
};
/**
* gpmc_omap_get_nand_ops - Get the GPMC NAND interface
* @reg: the GPMC NAND register map exclusive for NAND use.
* @cs: GPMC chip select number on which the NAND sits. The
* register map returned will be specific to this chip select.
*
* Returns NULL on error e.g. invalid cs.
*/
struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *reg, int cs)
{
int i;
reg->gpmc_status = NULL; /* deprecated */
if (cs >= gpmc_cs_num)
return NULL;
reg->gpmc_nand_command = gpmc_base + GPMC_CS0_OFFSET +
GPMC_CS_NAND_COMMAND + GPMC_CS_SIZE * cs;
reg->gpmc_nand_address = gpmc_base + GPMC_CS0_OFFSET +
......@@ -1111,34 +1133,6 @@ void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
reg->gpmc_bch_result6[i] = gpmc_base + GPMC_ECC_BCH_RESULT_6 +
i * GPMC_BCH_SIZE;
}
}
static bool gpmc_nand_writebuffer_empty(void)
{
if (gpmc_read_reg(GPMC_STATUS) & GPMC_STATUS_EMPTYWRITEBUFFERSTATUS)
return true;
return false;
}
static struct gpmc_nand_ops nand_ops = {
.nand_writebuffer_empty = gpmc_nand_writebuffer_empty,
};
/**
* gpmc_omap_get_nand_ops - Get the GPMC NAND interface
* @reg: the GPMC NAND register map exclusive for NAND use.
* @cs: GPMC chip select number on which the NAND sits. The
* register map returned will be specific to this chip select.
*
* Returns NULL on error e.g. invalid cs.
*/
struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *reg, int cs)
{
if (cs >= gpmc_cs_num)
return NULL;
gpmc_update_nand_reg(reg, cs);
return &nand_ops;
}
......
......@@ -397,3 +397,29 @@ void of_reserved_mem_device_release(struct device *dev)
rmem->ops->device_release(rmem, dev);
}
EXPORT_SYMBOL_GPL(of_reserved_mem_device_release);
/**
* of_reserved_mem_lookup() - acquire reserved_mem from a device node
* @np: node pointer of the desired reserved-memory region
*
* This function allows drivers to acquire a reference to the reserved_mem
* struct based on a device node handle.
*
* Returns a reserved_mem reference, or NULL on error.
*/
struct reserved_mem *of_reserved_mem_lookup(struct device_node *np)
{
const char *name;
int i;
if (!np->full_name)
return NULL;
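/* Regions are recorded by their node basename, so match on that. */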
name = kbasename(np->full_name);
for (i = 0; i < reserved_mem_count; i++)
if (!strcmp(reserved_mem[i].name, name))
return &reserved_mem[i];
return NULL;
}
EXPORT_SYMBOL_GPL(of_reserved_mem_lookup);
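/*
 * A minimal usage sketch (hypothetical caller, assuming the device node
 * carries a "memory-region" phandle):
 *
 * struct device_node *mem_np;
 * struct reserved_mem *rmem;
 *
 * mem_np = of_parse_phandle(dev->of_node, "memory-region", 0);
 * rmem = mem_np ? of_reserved_mem_lookup(mem_np) : NULL;
 * of_node_put(mem_np);
 */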
......@@ -497,6 +497,12 @@ int of_platform_default_populate(struct device_node *root,
EXPORT_SYMBOL_GPL(of_platform_default_populate);
#ifndef CONFIG_PPC
static const struct of_device_id reserved_mem_matches[] = {
{ .compatible = "qcom,rmtfs-mem" },
{ .compatible = "ramoops" },
{}
};
static int __init of_platform_default_populate_init(void)
{
struct device_node *node;
......@@ -505,15 +511,12 @@ static int __init of_platform_default_populate_init(void)
return -ENODEV;
/*
* Handle ramoops explicitly, since it is inside /reserved-memory,
* which lacks a "compatible" property.
* Handle certain compatibles explicitly, since we don't want to create
* platform_devices for every node in /reserved-memory with a
* "compatible",
*/
node = of_find_node_by_path("/reserved-memory");
if (node) {
node = of_find_compatible_node(node, NULL, "ramoops");
if (node)
of_platform_device_create(node, NULL, NULL);
}
for_each_matching_node(node, reserved_mem_matches)
of_platform_device_create(node, NULL, NULL);
/* Populate everything else. */
of_platform_default_populate(NULL, NULL, NULL);
......
......@@ -28,6 +28,12 @@ config RESET_ATH79
This enables the ATH79 reset controller driver that supports the
AR71xx SoC reset controller.
config RESET_AXS10X
bool "AXS10x Reset Driver" if COMPILE_TEST
default ARC_PLAT_AXS10X
help
This enables the reset controller driver for AXS10x.
config RESET_BERLIN
bool "Berlin Reset Driver" if COMPILE_TEST
default ARCH_BERLIN
......@@ -75,21 +81,21 @@ config RESET_PISTACHIO
help
This enables the reset driver for ImgTec Pistachio SoCs.
config RESET_SOCFPGA
bool "SoCFPGA Reset Driver" if COMPILE_TEST
default ARCH_SOCFPGA
config RESET_SIMPLE
bool "Simple Reset Controller Driver" if COMPILE_TEST
default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX
help
This enables the reset controller driver for Altera SoCFPGAs.
This enables a simple reset controller driver for reset lines that
can be asserted and deasserted by toggling bits in a contiguous,
exclusive register space.
config RESET_STM32
bool "STM32 Reset Driver" if COMPILE_TEST
default ARCH_STM32
help
This enables the RCC reset controller driver for STM32 MCUs.
Currently this driver supports Altera SoCFPGAs, the RCC reset
controller in STM32 MCUs, Allwinner SoCs, and ZTE's zx2967 family.
config RESET_SUNXI
bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI
default ARCH_SUNXI
select RESET_SIMPLE
help
This enables the reset driver for Allwinner SoCs.
......@@ -121,12 +127,6 @@ config RESET_UNIPHIER
Say Y if you want to control reset signals provided by System Control
block, Media I/O block, Peripheral Block.
config RESET_ZX2967
bool "ZTE ZX2967 Reset Driver"
depends on ARCH_ZX || COMPILE_TEST
help
This enables the reset controller driver for ZTE's zx2967 family.
config RESET_ZYNQ
bool "ZYNQ Reset Driver" if COMPILE_TEST
default ARCH_ZYNQ
......
......@@ -5,6 +5,7 @@ obj-$(CONFIG_ARCH_STI) += sti/
obj-$(CONFIG_ARCH_TEGRA) += tegra/
obj-$(CONFIG_RESET_A10SR) += reset-a10sr.o
obj-$(CONFIG_RESET_ATH79) += reset-ath79.o
obj-$(CONFIG_RESET_AXS10X) += reset-axs10x.o
obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o
obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o
obj-$(CONFIG_RESET_IMX7) += reset-imx7.o
......@@ -13,12 +14,10 @@ obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o
obj-$(CONFIG_RESET_MESON) += reset-meson.o
obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o
obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o
obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o
obj-$(CONFIG_RESET_STM32) += reset-stm32.o
obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o
obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o
obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o
obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o
obj-$(CONFIG_RESET_UNIPHIER) += reset-uniphier.o
obj-$(CONFIG_RESET_ZX2967) += reset-zx2967.o
obj-$(CONFIG_RESET_ZYNQ) += reset-zynq.o
/*
* Copyright (C) 2017 Synopsys.
*
* Synopsys AXS10x reset driver.
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
#include <linux/io.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#define to_axs10x_rst(p) container_of((p), struct axs10x_rst, rcdev)
#define AXS10X_MAX_RESETS 32
struct axs10x_rst {
void __iomem *regs_rst;
spinlock_t lock;
struct reset_controller_dev rcdev;
};
static int axs10x_reset_reset(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct axs10x_rst *rst = to_axs10x_rst(rcdev);
unsigned long flags;
spin_lock_irqsave(&rst->lock, flags);
writel(BIT(id), rst->regs_rst);
spin_unlock_irqrestore(&rst->lock, flags);
return 0;
}
static const struct reset_control_ops axs10x_reset_ops = {
.reset = axs10x_reset_reset,
};
static int axs10x_reset_probe(struct platform_device *pdev)
{
struct axs10x_rst *rst;
struct resource *mem;
rst = devm_kzalloc(&pdev->dev, sizeof(*rst), GFP_KERNEL);
if (!rst)
return -ENOMEM;
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
rst->regs_rst = devm_ioremap_resource(&pdev->dev, mem);
if (IS_ERR(rst->regs_rst))
return PTR_ERR(rst->regs_rst);
spin_lock_init(&rst->lock);
rst->rcdev.owner = THIS_MODULE;
rst->rcdev.ops = &axs10x_reset_ops;
rst->rcdev.of_node = pdev->dev.of_node;
rst->rcdev.nr_resets = AXS10X_MAX_RESETS;
return devm_reset_controller_register(&pdev->dev, &rst->rcdev);
}
static const struct of_device_id axs10x_reset_dt_match[] = {
{ .compatible = "snps,axs10x-reset" },
{ },
};
static struct platform_driver axs10x_reset_driver = {
.probe = axs10x_reset_probe,
.driver = {
.name = "axs10x-reset",
.of_match_table = axs10x_reset_dt_match,
},
};
builtin_platform_driver(axs10x_reset_driver);
MODULE_AUTHOR("Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>");
MODULE_DESCRIPTION("Synopsys AXS10x reset driver");
MODULE_LICENSE("GPL v2");
......@@ -62,13 +62,16 @@
#include <linux/reset-controller.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/of_device.h>
#define REG_COUNT 8
#define BITS_PER_REG 32
#define LEVEL_OFFSET 0x7c
struct meson_reset {
void __iomem *reg_base;
struct reset_controller_dev rcdev;
spinlock_t lock;
};
static int meson_reset_reset(struct reset_controller_dev *rcdev,
......@@ -80,26 +83,68 @@ static int meson_reset_reset(struct reset_controller_dev *rcdev,
unsigned int offset = id % BITS_PER_REG;
void __iomem *reg_addr = data->reg_base + (bank << 2);
if (bank >= REG_COUNT)
return -EINVAL;
writel(BIT(offset), reg_addr);
return 0;
}
static const struct reset_control_ops meson_reset_ops = {
static int meson_reset_level(struct reset_controller_dev *rcdev,
unsigned long id, bool assert)
{
struct meson_reset *data =
container_of(rcdev, struct meson_reset, rcdev);
unsigned int bank = id / BITS_PER_REG;
unsigned int offset = id % BITS_PER_REG;
void __iomem *reg_addr = data->reg_base + LEVEL_OFFSET + (bank << 2);
unsigned long flags;
u32 reg;
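/* Level resets are active-low: clearing the bit asserts the reset. */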
spin_lock_irqsave(&data->lock, flags);
reg = readl(reg_addr);
if (assert)
writel(reg & ~BIT(offset), reg_addr);
else
writel(reg | BIT(offset), reg_addr);
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static int meson_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return meson_reset_level(rcdev, id, true);
}
static int meson_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return meson_reset_level(rcdev, id, false);
}
static const struct reset_control_ops meson_reset_meson8_ops = {
.reset = meson_reset_reset,
};
static const struct reset_control_ops meson_reset_gx_ops = {
.reset = meson_reset_reset,
.assert = meson_reset_assert,
.deassert = meson_reset_deassert,
};
static const struct of_device_id meson_reset_dt_ids[] = {
{ .compatible = "amlogic,meson8b-reset", },
{ .compatible = "amlogic,meson-gxbb-reset", },
{ .compatible = "amlogic,meson8b-reset",
.data = &meson_reset_meson8_ops, },
{ .compatible = "amlogic,meson-gxbb-reset",
.data = &meson_reset_gx_ops, },
{ /* sentinel */ },
};
static int meson_reset_probe(struct platform_device *pdev)
{
const struct reset_control_ops *ops;
struct meson_reset *data;
struct resource *res;
......@@ -107,6 +152,10 @@ static int meson_reset_probe(struct platform_device *pdev)
if (!data)
return -ENOMEM;
ops = of_device_get_match_data(&pdev->dev);
if (!ops)
return -EINVAL;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->reg_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->reg_base))
......@@ -114,9 +163,11 @@ static int meson_reset_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, data);
spin_lock_init(&data->lock);
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG;
data->rcdev.ops = &meson_reset_ops;
data->rcdev.ops = ops;
data->rcdev.of_node = pdev->dev.of_node;
return devm_reset_controller_register(&pdev->dev, &data->rcdev);
......
/*
* Simple Reset Controller Driver
*
* Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de>
*
* Based on Allwinner SoCs Reset Controller driver
*
* Copyright 2013 Maxime Ripard
*
* Maxime Ripard <maxime.ripard@free-electrons.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/spinlock.h>
#include "reset-simple.h"
static inline struct reset_simple_data *
to_reset_simple_data(struct reset_controller_dev *rcdev)
{
return container_of(rcdev, struct reset_simple_data, rcdev);
}
static int reset_simple_update(struct reset_controller_dev *rcdev,
unsigned long id, bool assert)
{
struct reset_simple_data *data = to_reset_simple_data(rcdev);
int reg_width = sizeof(u32);
int bank = id / (reg_width * BITS_PER_BYTE);
int offset = id % (reg_width * BITS_PER_BYTE);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + (bank * reg_width));
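/*
 * XOR-ing with active_low inverts the register sense for controllers
 * where a cleared bit means "reset asserted".
 */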
if (assert ^ data->active_low)
reg |= BIT(offset);
else
reg &= ~BIT(offset);
writel(reg, data->membase + (bank * reg_width));
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static int reset_simple_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return reset_simple_update(rcdev, id, true);
}
static int reset_simple_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return reset_simple_update(rcdev, id, false);
}
static int reset_simple_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct reset_simple_data *data = to_reset_simple_data(rcdev);
int reg_width = sizeof(u32);
int bank = id / (reg_width * BITS_PER_BYTE);
int offset = id % (reg_width * BITS_PER_BYTE);
u32 reg;
reg = readl(data->membase + (bank * reg_width));
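/* Normalize both sides to 0/1 before XOR-ing in the status polarity. */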
return !(reg & BIT(offset)) ^ !data->status_active_low;
}
const struct reset_control_ops reset_simple_ops = {
.assert = reset_simple_assert,
.deassert = reset_simple_deassert,
.status = reset_simple_status,
};
/**
* struct reset_simple_devdata - simple reset controller properties
* @reg_offset: offset between base address and first reset register.
* @nr_resets: number of resets. If not set, defaults to the resource size in bits.
* @active_low: if true, bits are cleared to assert the reset. Otherwise, bits
* are set to assert the reset.
* @status_active_low: if true, bits read back as cleared while the reset is
* asserted. Otherwise, bits read back as set while the
* reset is asserted.
*/
struct reset_simple_devdata {
u32 reg_offset;
u32 nr_resets;
bool active_low;
bool status_active_low;
};
#define SOCFPGA_NR_BANKS 8
static const struct reset_simple_devdata reset_simple_socfpga = {
.reg_offset = 0x10,
.nr_resets = SOCFPGA_NR_BANKS * 32,
.status_active_low = true,
};
static const struct reset_simple_devdata reset_simple_active_low = {
.active_low = true,
.status_active_low = true,
};
static const struct of_device_id reset_simple_dt_ids[] = {
{ .compatible = "altr,rst-mgr", .data = &reset_simple_socfpga },
{ .compatible = "st,stm32-rcc", },
{ .compatible = "allwinner,sun6i-a31-clock-reset",
.data = &reset_simple_active_low },
{ .compatible = "zte,zx296718-reset",
.data = &reset_simple_active_low },
{ /* sentinel */ },
};
static int reset_simple_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
const struct reset_simple_devdata *devdata;
struct reset_simple_data *data;
void __iomem *membase;
struct resource *res;
u32 reg_offset = 0;
devdata = of_device_get_match_data(dev);
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
membase = devm_ioremap_resource(dev, res);
if (IS_ERR(membase))
return PTR_ERR(membase);
spin_lock_init(&data->lock);
data->membase = membase;
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = resource_size(res) * BITS_PER_BYTE;
data->rcdev.ops = &reset_simple_ops;
data->rcdev.of_node = dev->of_node;
if (devdata) {
reg_offset = devdata->reg_offset;
if (devdata->nr_resets)
data->rcdev.nr_resets = devdata->nr_resets;
data->active_low = devdata->active_low;
data->status_active_low = devdata->status_active_low;
}
if (of_device_is_compatible(dev->of_node, "altr,rst-mgr") &&
of_property_read_u32(dev->of_node, "altr,modrst-offset",
&reg_offset)) {
dev_warn(dev,
"missing altr,modrst-offset property, assuming 0x%x!\n",
reg_offset);
}
data->membase += reg_offset;
return devm_reset_controller_register(dev, &data->rcdev);
}
static struct platform_driver reset_simple_driver = {
.probe = reset_simple_probe,
.driver = {
.name = "simple-reset",
.of_match_table = reset_simple_dt_ids,
},
};
builtin_platform_driver(reset_simple_driver);
/*
* Simple Reset Controller ops
*
* Based on Allwinner SoCs Reset Controller driver
*
* Copyright 2013 Maxime Ripard
*
* Maxime Ripard <maxime.ripard@free-electrons.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#ifndef __RESET_SIMPLE_H__
#define __RESET_SIMPLE_H__
#include <linux/io.h>
#include <linux/reset-controller.h>
#include <linux/spinlock.h>
/**
* struct reset_simple_data - driver data for simple reset controllers
* @lock: spinlock to protect registers during read-modify-write cycles
* @membase: memory mapped I/O register range
* @rcdev: reset controller device base structure
* @active_low: if true, bits are cleared to assert the reset. Otherwise, bits
* are set to assert the reset. Note that this says nothing about
* the voltage level of the actual reset line.
* @status_active_low: if true, bits read back as cleared while the reset is
* asserted. Otherwise, bits read back as set while the
* reset is asserted.
*/
struct reset_simple_data {
spinlock_t lock;
void __iomem *membase;
struct reset_controller_dev rcdev;
bool active_low;
bool status_active_low;
};
extern const struct reset_control_ops reset_simple_ops;
#endif /* __RESET_SIMPLE_H__ */
/*
* Socfpga Reset Controller Driver
*
* Copyright 2014 Steffen Trumtrar <s.trumtrar@pengutronix.de>
*
* based on
* Allwinner SoCs Reset Controller driver
*
* Copyright 2013 Maxime Ripard
*
* Maxime Ripard <maxime.ripard@free-electrons.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#define BANK_INCREMENT 4
#define NR_BANKS 8
struct socfpga_reset_data {
spinlock_t lock;
void __iomem *membase;
struct reset_controller_dev rcdev;
};
static int socfpga_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct socfpga_reset_data *data = container_of(rcdev,
struct socfpga_reset_data,
rcdev);
int reg_width = sizeof(u32);
int bank = id / (reg_width * BITS_PER_BYTE);
int offset = id % (reg_width * BITS_PER_BYTE);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + (bank * BANK_INCREMENT));
writel(reg | BIT(offset), data->membase + (bank * BANK_INCREMENT));
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static int socfpga_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct socfpga_reset_data *data = container_of(rcdev,
struct socfpga_reset_data,
rcdev);
int reg_width = sizeof(u32);
int bank = id / (reg_width * BITS_PER_BYTE);
int offset = id % (reg_width * BITS_PER_BYTE);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + (bank * BANK_INCREMENT));
writel(reg & ~BIT(offset), data->membase + (bank * BANK_INCREMENT));
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static int socfpga_reset_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct socfpga_reset_data *data = container_of(rcdev,
struct socfpga_reset_data, rcdev);
int reg_width = sizeof(u32);
int bank = id / (reg_width * BITS_PER_BYTE);
int offset = id % (reg_width * BITS_PER_BYTE);
u32 reg;
reg = readl(data->membase + (bank * BANK_INCREMENT));
return !(reg & BIT(offset));
}
static const struct reset_control_ops socfpga_reset_ops = {
.assert = socfpga_reset_assert,
.deassert = socfpga_reset_deassert,
.status = socfpga_reset_status,
};
static int socfpga_reset_probe(struct platform_device *pdev)
{
struct socfpga_reset_data *data;
struct resource *res;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
u32 modrst_offset;
/*
* The binding was mainlined without the required property.
* Do not continue, when we encounter an old DT.
*/
if (!of_find_property(pdev->dev.of_node, "#reset-cells", NULL)) {
dev_err(&pdev->dev, "%pOF missing #reset-cells property\n",
pdev->dev.of_node);
return -EINVAL;
}
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->membase = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->membase))
return PTR_ERR(data->membase);
if (of_property_read_u32(np, "altr,modrst-offset", &modrst_offset)) {
dev_warn(dev, "missing altr,modrst-offset property, assuming 0x10!\n");
modrst_offset = 0x10;
}
data->membase += modrst_offset;
spin_lock_init(&data->lock);
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = NR_BANKS * (sizeof(u32) * BITS_PER_BYTE);
data->rcdev.ops = &socfpga_reset_ops;
data->rcdev.of_node = pdev->dev.of_node;
return devm_reset_controller_register(dev, &data->rcdev);
}
static const struct of_device_id socfpga_reset_dt_ids[] = {
{ .compatible = "altr,rst-mgr", },
{ /* sentinel */ },
};
static struct platform_driver socfpga_reset_driver = {
.probe = socfpga_reset_probe,
.driver = {
.name = "socfpga-reset",
.of_match_table = socfpga_reset_dt_ids,
},
};
builtin_platform_driver(socfpga_reset_driver);
/*
* Copyright (C) Maxime Coquelin 2015
* Author: Maxime Coquelin <mcoquelin.stm32@gmail.com>
* License terms: GNU General Public License (GPL), version 2
*
* Heavily based on sunxi driver from Maxime Ripard.
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
struct stm32_reset_data {
spinlock_t lock;
void __iomem *membase;
struct reset_controller_dev rcdev;
};
static int stm32_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct stm32_reset_data *data = container_of(rcdev,
struct stm32_reset_data,
rcdev);
int bank = id / BITS_PER_LONG;
int offset = id % BITS_PER_LONG;
unsigned long flags;
u32 reg;
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + (bank * 4));
writel(reg | BIT(offset), data->membase + (bank * 4));
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static int stm32_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct stm32_reset_data *data = container_of(rcdev,
struct stm32_reset_data,
rcdev);
int bank = id / BITS_PER_LONG;
int offset = id % BITS_PER_LONG;
unsigned long flags;
u32 reg;
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + (bank * 4));
writel(reg & ~BIT(offset), data->membase + (bank * 4));
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static const struct reset_control_ops stm32_reset_ops = {
.assert = stm32_reset_assert,
.deassert = stm32_reset_deassert,
};
static const struct of_device_id stm32_reset_dt_ids[] = {
{ .compatible = "st,stm32-rcc", },
{ /* sentinel */ },
};
static int stm32_reset_probe(struct platform_device *pdev)
{
struct stm32_reset_data *data;
struct resource *res;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->membase = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->membase))
return PTR_ERR(data->membase);
spin_lock_init(&data->lock);
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = resource_size(res) * 8;
data->rcdev.ops = &stm32_reset_ops;
data->rcdev.of_node = pdev->dev.of_node;
return devm_reset_controller_register(&pdev->dev, &data->rcdev);
}
static struct platform_driver stm32_reset_driver = {
.probe = stm32_reset_probe,
.driver = {
.name = "stm32-rcc-reset",
.of_match_table = stm32_reset_dt_ids,
},
};
builtin_platform_driver(stm32_reset_driver);
......@@ -22,64 +22,11 @@
#include <linux/spinlock.h>
#include <linux/types.h>
struct sunxi_reset_data {
spinlock_t lock;
void __iomem *membase;
struct reset_controller_dev rcdev;
};
static int sunxi_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct sunxi_reset_data *data = container_of(rcdev,
struct sunxi_reset_data,
rcdev);
int reg_width = sizeof(u32);
int bank = id / (reg_width * BITS_PER_BYTE);
int offset = id % (reg_width * BITS_PER_BYTE);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + (bank * reg_width));
writel(reg & ~BIT(offset), data->membase + (bank * reg_width));
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static int sunxi_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct sunxi_reset_data *data = container_of(rcdev,
struct sunxi_reset_data,
rcdev);
int reg_width = sizeof(u32);
int bank = id / (reg_width * BITS_PER_BYTE);
int offset = id % (reg_width * BITS_PER_BYTE);
unsigned long flags;
u32 reg;
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + (bank * reg_width));
writel(reg | BIT(offset), data->membase + (bank * reg_width));
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static const struct reset_control_ops sunxi_reset_ops = {
.assert = sunxi_reset_assert,
.deassert = sunxi_reset_deassert,
};
#include "reset-simple.h"
static int sunxi_reset_init(struct device_node *np)
{
struct sunxi_reset_data *data;
struct reset_simple_data *data;
struct resource res;
resource_size_t size;
int ret;
......@@ -108,8 +55,9 @@ static int sunxi_reset_init(struct device_node *np)
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = size * 8;
data->rcdev.ops = &sunxi_reset_ops;
data->rcdev.ops = &reset_simple_ops;
data->rcdev.of_node = np;
data->active_low = true;
return reset_controller_register(&data->rcdev);
......@@ -122,6 +70,8 @@ static int sunxi_reset_init(struct device_node *np)
* These are the reset controllers we need to initialize early on in
* our system, before we can even think of using a regular device
* driver for it.
* The controllers that we can register through the regular device
* model are handled by the simple reset driver directly.
*/
static const struct of_device_id sunxi_early_reset_dt_ids[] __initconst = {
{ .compatible = "allwinner,sun6i-a31-ahb1-reset", },
......@@ -135,45 +85,3 @@ void __init sun6i_reset_init(void)
for_each_matching_node(np, sunxi_early_reset_dt_ids)
sunxi_reset_init(np);
}
/*
* And these are the controllers we can register through the regular
* device model.
*/
static const struct of_device_id sunxi_reset_dt_ids[] = {
{ .compatible = "allwinner,sun6i-a31-clock-reset", },
{ /* sentinel */ },
};
static int sunxi_reset_probe(struct platform_device *pdev)
{
struct sunxi_reset_data *data;
struct resource *res;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->membase = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->membase))
return PTR_ERR(data->membase);
spin_lock_init(&data->lock);
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = resource_size(res) * 8;
data->rcdev.ops = &sunxi_reset_ops;
data->rcdev.of_node = pdev->dev.of_node;
return devm_reset_controller_register(&pdev->dev, &data->rcdev);
}
static struct platform_driver sunxi_reset_driver = {
.probe = sunxi_reset_probe,
.driver = {
.name = "sunxi-reset",
.of_match_table = sunxi_reset_dt_ids,
},
};
builtin_platform_driver(sunxi_reset_driver);
......@@ -58,6 +58,7 @@ static const struct uniphier_reset_data uniphier_ld4_sys_reset_data[] = {
static const struct uniphier_reset_data uniphier_pro4_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x2000, 2), /* NAND */
UNIPHIER_RESETX(6, 0x2000, 12), /* Ether */
UNIPHIER_RESETX(8, 0x2000, 10), /* STDMAC (HSC, MIO, RLE) */
UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (Ether, SATA, USB3) */
UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */
......@@ -76,6 +77,7 @@ static const struct uniphier_reset_data uniphier_pro5_sys_reset_data[] = {
static const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x2000, 2), /* NAND */
UNIPHIER_RESETX(6, 0x2000, 12), /* Ether */
UNIPHIER_RESETX(8, 0x2000, 10), /* STDMAC (HSC, RLE) */
UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */
UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */
......@@ -92,6 +94,7 @@ static const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = {
static const struct uniphier_reset_data uniphier_ld11_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */
UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */
UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC, MIO) */
UNIPHIER_RESETX(40, 0x2008, 0), /* AIO */
UNIPHIER_RESETX(41, 0x2008, 1), /* EVEA */
......@@ -102,6 +105,7 @@ static const struct uniphier_reset_data uniphier_ld11_sys_reset_data[] = {
static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */
UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */
UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC) */
UNIPHIER_RESETX(12, 0x200c, 5), /* GIO (PCIe, USB3) */
UNIPHIER_RESETX(16, 0x200c, 12), /* USB30-PHY0 */
......@@ -114,6 +118,20 @@ static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = {
UNIPHIER_RESET_END,
};
static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = {
UNIPHIER_RESETX(2, 0x200c, 0), /* NAND */
UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
UNIPHIER_RESETX(8, 0x200c, 12), /* STDMAC */
UNIPHIER_RESETX(12, 0x200c, 4), /* USB30 link (GIO0) */
UNIPHIER_RESETX(13, 0x200c, 5), /* USB31 link (GIO1) */
UNIPHIER_RESETX(16, 0x200c, 16), /* USB30-PHY0 */
UNIPHIER_RESETX(17, 0x200c, 18), /* USB30-PHY1 */
UNIPHIER_RESETX(18, 0x200c, 20), /* USB30-PHY2 */
UNIPHIER_RESETX(20, 0x200c, 17), /* USB31-PHY0 */
UNIPHIER_RESETX(21, 0x200c, 19), /* USB31-PHY1 */
UNIPHIER_RESET_END,
};
/* Media I/O reset data */
#define UNIPHIER_MIO_RESET_SD(id, ch) \
UNIPHIER_RESETX((id), 0x110 + 0x200 * (ch), 0)
......@@ -359,6 +377,10 @@ static const struct of_device_id uniphier_reset_match[] = {
.compatible = "socionext,uniphier-ld20-reset",
.data = uniphier_ld20_sys_reset_data,
},
{
.compatible = "socionext,uniphier-pxs3-reset",
.data = uniphier_pxs3_sys_reset_data,
},
/* Media I/O reset, SD reset */
{
.compatible = "socionext,uniphier-ld4-mio-reset",
......@@ -392,6 +414,10 @@ static const struct of_device_id uniphier_reset_match[] = {
.compatible = "socionext,uniphier-ld20-sd-reset",
.data = uniphier_pro5_sd_reset_data,
},
{
.compatible = "socionext,uniphier-pxs3-sd-reset",
.data = uniphier_pro5_sd_reset_data,
},
/* Peripheral reset */
{
.compatible = "socionext,uniphier-ld4-peri-reset",
......@@ -421,6 +447,10 @@ static const struct of_device_id uniphier_reset_match[] = {
.compatible = "socionext,uniphier-ld20-peri-reset",
.data = uniphier_pro4_peri_reset_data,
},
{
.compatible = "socionext,uniphier-pxs3-peri-reset",
.data = uniphier_pro4_peri_reset_data,
},
/* Analog signal amplifiers reset */
{
.compatible = "socionext,uniphier-ld11-adamv-reset",
......
/*
* ZTE's zx2967 family reset controller driver
*
* Copyright (C) 2017 ZTE Ltd.
*
* Author: Baoyou Xie <baoyou.xie@linaro.org>
*
* License terms: GNU General Public License (GPL) version 2
*/
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
struct zx2967_reset {
void __iomem *reg_base;
spinlock_t lock;
struct reset_controller_dev rcdev;
};
static int zx2967_reset_act(struct reset_controller_dev *rcdev,
unsigned long id, bool assert)
{
struct zx2967_reset *reset = NULL;
int bank = id / 32;
int offset = id % 32;
u32 reg;
unsigned long flags;
reset = container_of(rcdev, struct zx2967_reset, rcdev);
spin_lock_irqsave(&reset->lock, flags);
reg = readl_relaxed(reset->reg_base + (bank * 4));
if (assert)
reg &= ~BIT(offset);
else
reg |= BIT(offset);
writel_relaxed(reg, reset->reg_base + (bank * 4));
spin_unlock_irqrestore(&reset->lock, flags);
return 0;
}
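/*
 * Example for zx2967_reset_act() above: id = 37 gives bank = 1 and
 * offset = 5, i.e. bit 5 of the second 32-bit reset register; assert
 * clears the bit, deassert sets it.
 */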
static int zx2967_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return zx2967_reset_act(rcdev, id, true);
}
static int zx2967_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return zx2967_reset_act(rcdev, id, false);
}
static const struct reset_control_ops zx2967_reset_ops = {
.assert = zx2967_reset_assert,
.deassert = zx2967_reset_deassert,
};
static int zx2967_reset_probe(struct platform_device *pdev)
{
struct zx2967_reset *reset;
struct resource *res;
reset = devm_kzalloc(&pdev->dev, sizeof(*reset), GFP_KERNEL);
if (!reset)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
reset->reg_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(reset->reg_base))
return PTR_ERR(reset->reg_base);
spin_lock_init(&reset->lock);
reset->rcdev.owner = THIS_MODULE;
reset->rcdev.nr_resets = resource_size(res) * 8;
reset->rcdev.ops = &zx2967_reset_ops;
reset->rcdev.of_node = pdev->dev.of_node;
return devm_reset_controller_register(&pdev->dev, &reset->rcdev);
}
static const struct of_device_id zx2967_reset_dt_ids[] = {
{ .compatible = "zte,zx296718-reset", },
{},
};
static struct platform_driver zx2967_reset_driver = {
.probe = zx2967_reset_probe,
.driver = {
.name = "zx2967-reset",
.of_match_table = zx2967_reset_dt_ids,
},
};
builtin_platform_driver(zx2967_reset_driver);
......@@ -11,7 +11,7 @@ obj-$(CONFIG_MACH_DOVE) += dove/
obj-y += fsl/
obj-$(CONFIG_ARCH_MXC) += imx/
obj-$(CONFIG_SOC_XWAY) += lantiq/
obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/
obj-y += mediatek/
obj-$(CONFIG_ARCH_MESON) += amlogic/
obj-$(CONFIG_ARCH_QCOM) += qcom/
obj-y += renesas/
......
......@@ -9,4 +9,25 @@ config MESON_GX_SOCINFO
Say yes to support decoding of Amlogic Meson GX SoC family
information about the type, package and version.
config MESON_GX_PM_DOMAINS
bool "Amlogic Meson GX Power Domains driver"
depends on ARCH_MESON || COMPILE_TEST
depends on PM && OF
default ARCH_MESON
select PM_GENERIC_DOMAINS
select PM_GENERIC_DOMAINS_OF
help
Say yes to expose Amlogic Meson GX Power Domains as
Generic Power Domains.
config MESON_MX_SOCINFO
bool "Amlogic Meson MX SoC Information driver"
depends on ARCH_MESON || COMPILE_TEST
default ARCH_MESON
select SOC_BUS
help
Say yes to support decoding of Amlogic Meson6, Meson8,
Meson8b and Meson8m2 SoC family information about the type
and version.
endmenu
obj-$(CONFIG_MESON_GX_SOCINFO) += meson-gx-socinfo.o
obj-$(CONFIG_MESON_GX_PM_DOMAINS) += meson-gx-pwrc-vpu.o
obj-$(CONFIG_MESON_MX_SOCINFO) += meson-mx-socinfo.o
/*
* Copyright (c) 2017 BayLibre, SAS
* Author: Neil Armstrong <narmstrong@baylibre.com>
*
* SPDX-License-Identifier: GPL-2.0+
*/
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/bitfield.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include <linux/reset.h>
#include <linux/clk.h>
/* AO Offsets */
#define AO_RTI_GEN_PWR_SLEEP0 (0x3a << 2)
#define GEN_PWR_VPU_HDMI BIT(8)
#define GEN_PWR_VPU_HDMI_ISO BIT(9)
/* HHI Offsets */
#define HHI_MEM_PD_REG0 (0x40 << 2)
#define HHI_VPU_MEM_PD_REG0 (0x41 << 2)
#define HHI_VPU_MEM_PD_REG1 (0x42 << 2)
struct meson_gx_pwrc_vpu {
struct generic_pm_domain genpd;
struct regmap *regmap_ao;
struct regmap *regmap_hhi;
struct reset_control *rstc;
struct clk *vpu_clk;
struct clk *vapb_clk;
};
static inline
struct meson_gx_pwrc_vpu *genpd_to_pd(struct generic_pm_domain *d)
{
return container_of(d, struct meson_gx_pwrc_vpu, genpd);
}
static int meson_gx_pwrc_vpu_power_off(struct generic_pm_domain *genpd)
{
struct meson_gx_pwrc_vpu *pd = genpd_to_pd(genpd);
int i;
regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
GEN_PWR_VPU_HDMI_ISO, GEN_PWR_VPU_HDMI_ISO);
udelay(20);
/* Power Down Memories */
for (i = 0; i < 32; i += 2) {
regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG0,
0x2 << i, 0x3 << i);
udelay(5);
}
for (i = 0; i < 32; i += 2) {
regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG1,
0x2 << i, 0x3 << i);
udelay(5);
}
for (i = 8; i < 16; i++) {
regmap_update_bits(pd->regmap_hhi, HHI_MEM_PD_REG0,
BIT(i), BIT(i));
udelay(5);
}
udelay(20);
regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
GEN_PWR_VPU_HDMI, GEN_PWR_VPU_HDMI);
msleep(20);
clk_disable_unprepare(pd->vpu_clk);
clk_disable_unprepare(pd->vapb_clk);
return 0;
}
static int meson_gx_pwrc_vpu_setup_clk(struct meson_gx_pwrc_vpu *pd)
{
int ret;
ret = clk_prepare_enable(pd->vpu_clk);
if (ret)
return ret;
ret = clk_prepare_enable(pd->vapb_clk);
if (ret)
clk_disable_unprepare(pd->vpu_clk);
return ret;
}
static int meson_gx_pwrc_vpu_power_on(struct generic_pm_domain *genpd)
{
struct meson_gx_pwrc_vpu *pd = genpd_to_pd(genpd);
int ret;
int i;
regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
GEN_PWR_VPU_HDMI, 0);
udelay(20);
/* Power Up Memories */
for (i = 0; i < 32; i += 2) {
regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG0,
0x2 << i, 0);
udelay(5);
}
for (i = 0; i < 32; i += 2) {
regmap_update_bits(pd->regmap_hhi, HHI_VPU_MEM_PD_REG1,
0x2 << i, 0);
udelay(5);
}
for (i = 8; i < 16; i++) {
regmap_update_bits(pd->regmap_hhi, HHI_MEM_PD_REG0,
BIT(i), 0);
udelay(5);
}
udelay(20);
ret = reset_control_assert(pd->rstc);
if (ret)
return ret;
regmap_update_bits(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0,
GEN_PWR_VPU_HDMI_ISO, 0);
ret = reset_control_deassert(pd->rstc);
if (ret)
return ret;
ret = meson_gx_pwrc_vpu_setup_clk(pd);
if (ret)
return ret;
return 0;
}
static bool meson_gx_pwrc_vpu_get_power(struct meson_gx_pwrc_vpu *pd)
{
u32 reg;
regmap_read(pd->regmap_ao, AO_RTI_GEN_PWR_SLEEP0, &reg);
return (reg & GEN_PWR_VPU_HDMI);
}
static struct meson_gx_pwrc_vpu vpu_hdmi_pd = {
.genpd = {
.name = "vpu_hdmi",
.power_off = meson_gx_pwrc_vpu_power_off,
.power_on = meson_gx_pwrc_vpu_power_on,
},
};
static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev)
{
struct regmap *regmap_ao, *regmap_hhi;
struct reset_control *rstc;
struct clk *vpu_clk;
struct clk *vapb_clk;
bool powered_off;
int ret;
regmap_ao = syscon_node_to_regmap(of_get_parent(pdev->dev.of_node));
if (IS_ERR(regmap_ao)) {
dev_err(&pdev->dev, "failed to get regmap\n");
return PTR_ERR(regmap_ao);
}
regmap_hhi = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
"amlogic,hhi-sysctrl");
if (IS_ERR(regmap_hhi)) {
dev_err(&pdev->dev, "failed to get HHI regmap\n");
return PTR_ERR(regmap_hhi);
}
rstc = devm_reset_control_array_get(&pdev->dev, false, false);
if (IS_ERR(rstc)) {
dev_err(&pdev->dev, "failed to get reset lines\n");
return PTR_ERR(rstc);
}
vpu_clk = devm_clk_get(&pdev->dev, "vpu");
if (IS_ERR(vpu_clk)) {
dev_err(&pdev->dev, "vpu clock request failed\n");
return PTR_ERR(vpu_clk);
}
vapb_clk = devm_clk_get(&pdev->dev, "vapb");
if (IS_ERR(vapb_clk)) {
dev_err(&pdev->dev, "vapb clock request failed\n");
return PTR_ERR(vapb_clk);
}
vpu_hdmi_pd.regmap_ao = regmap_ao;
vpu_hdmi_pd.regmap_hhi = regmap_hhi;
vpu_hdmi_pd.rstc = rstc;
vpu_hdmi_pd.vpu_clk = vpu_clk;
vpu_hdmi_pd.vapb_clk = vapb_clk;
powered_off = meson_gx_pwrc_vpu_get_power(&vpu_hdmi_pd);
/* If already powered, sync the clock states */
if (!powered_off) {
ret = meson_gx_pwrc_vpu_setup_clk(&vpu_hdmi_pd);
if (ret)
return ret;
}
pm_genpd_init(&vpu_hdmi_pd.genpd, &pm_domain_always_on_gov,
powered_off);
return of_genpd_add_provider_simple(pdev->dev.of_node,
&vpu_hdmi_pd.genpd);
}
static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev)
{
meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd);
}
static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = {
{ .compatible = "amlogic,meson-gx-pwrc-vpu" },
{ /* sentinel */ }
};
static struct platform_driver meson_gx_pwrc_vpu_driver = {
.probe = meson_gx_pwrc_vpu_probe,
.shutdown = meson_gx_pwrc_vpu_shutdown,
.driver = {
.name = "meson_gx_pwrc_vpu",
.of_match_table = meson_gx_pwrc_vpu_match_table,
},
};
builtin_platform_driver(meson_gx_pwrc_vpu_driver);
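/*
 * Not part of this patch: a consumer sketch. A device whose
 * "power-domains" phandle points at this provider is attached to
 * vpu_hdmi by the genpd core, which routes runtime-PM calls to the
 * power_on/power_off callbacks above. foo_start() is hypothetical;
 * note that the always-on governor used in the probe above keeps the
 * domain powered at runtime, so power-off effectively happens only
 * via the shutdown hook.
 */
#include <linux/pm_runtime.h>

static int foo_start(struct device *dev)
{
	int ret;

	/* assumes pm_runtime_enable() was called in the foo probe */
	ret = pm_runtime_get_sync(dev);	/* powers the domain on if needed */
	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		return ret;
	}
	/* ... program the VPU ... */
	pm_runtime_put(dev);		/* drop our reference again */
	return 0;
}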
/*
* Copyright (c) 2017 Martin Blumenstingl <martin.blumenstingl@googlemail.com>
*
* SPDX-License-Identifier: GPL-2.0+
*/
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/sys_soc.h>
#include <linux/bitfield.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#define MESON_SOCINFO_MAJOR_VER_MESON6 0x16
#define MESON_SOCINFO_MAJOR_VER_MESON8 0x19
#define MESON_SOCINFO_MAJOR_VER_MESON8B 0x1b
#define MESON_MX_ASSIST_HW_REV 0x14c
#define MESON_MX_ANALOG_TOP_METAL_REVISION 0x0
#define MESON_MX_BOOTROM_MISC_VER 0x4
static const char *meson_mx_socinfo_revision(unsigned int major_ver,
unsigned int misc_ver,
unsigned int metal_rev)
{
unsigned int minor_ver;
switch (major_ver) {
case MESON_SOCINFO_MAJOR_VER_MESON6:
minor_ver = 0xa;
break;
case MESON_SOCINFO_MAJOR_VER_MESON8:
if (metal_rev == 0x11111112)
major_ver = 0x1d;
if (metal_rev == 0x11111111 || metal_rev == 0x11111112)
minor_ver = 0xa;
else if (metal_rev == 0x11111113)
minor_ver = 0xb;
else if (metal_rev == 0x11111133)
minor_ver = 0xc;
else
minor_ver = 0xd;
break;
case MESON_SOCINFO_MAJOR_VER_MESON8B:
if (metal_rev == 0x11111111)
minor_ver = 0xa;
else
minor_ver = 0xb;
break;
default:
minor_ver = 0x0;
break;
}
return kasprintf(GFP_KERNEL, "Rev%X (%x - 0:%X)", minor_ver, major_ver,
misc_ver);
}
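/*
 * Worked example for meson_mx_socinfo_revision() above, with assumed
 * inputs: major_ver = 0x19 (Meson8), metal_rev = 0x11111133 and
 * misc_ver = 0x12 give minor_ver = 0xc, so the returned string is
 * "RevC (19 - 0:12)".
 */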
static const char *meson_mx_socinfo_soc_id(unsigned int major_ver,
unsigned int metal_rev)
{
const char *soc_id;
switch (major_ver) {
case MESON_SOCINFO_MAJOR_VER_MESON6:
soc_id = "Meson6 (AML8726-MX)";
break;
case MESON_SOCINFO_MAJOR_VER_MESON8:
if (metal_rev == 0x11111112)
soc_id = "Meson8m2 (S812)";
else
soc_id = "Meson8 (S802)";
break;
case MESON_SOCINFO_MAJOR_VER_MESON8B:
soc_id = "Meson8b (S805)";
break;
default:
soc_id = "Unknown";
break;
}
return kstrdup_const(soc_id, GFP_KERNEL);
}
static const struct of_device_id meson_mx_socinfo_analog_top_ids[] = {
{ .compatible = "amlogic,meson8-analog-top", },
{ .compatible = "amlogic,meson8b-analog-top", },
{ /* sentinel */ }
};
int __init meson_mx_socinfo_init(void)
{
struct soc_device_attribute *soc_dev_attr;
struct soc_device *soc_dev;
struct device_node *np;
struct regmap *assist_regmap, *bootrom_regmap, *analog_top_regmap;
unsigned int major_ver, misc_ver, metal_rev = 0;
int ret;
assist_regmap =
syscon_regmap_lookup_by_compatible("amlogic,meson-mx-assist");
if (IS_ERR(assist_regmap))
return PTR_ERR(assist_regmap);
bootrom_regmap =
syscon_regmap_lookup_by_compatible("amlogic,meson-mx-bootrom");
if (IS_ERR(bootrom_regmap))
return PTR_ERR(bootrom_regmap);
np = of_find_matching_node(NULL, meson_mx_socinfo_analog_top_ids);
if (np) {
analog_top_regmap = syscon_node_to_regmap(np);
if (IS_ERR(analog_top_regmap))
return PTR_ERR(analog_top_regmap);
ret = regmap_read(analog_top_regmap,
MESON_MX_ANALOG_TOP_METAL_REVISION,
&metal_rev);
if (ret)
return ret;
}
ret = regmap_read(assist_regmap, MESON_MX_ASSIST_HW_REV, &major_ver);
if (ret < 0)
return ret;
ret = regmap_read(bootrom_regmap, MESON_MX_BOOTROM_MISC_VER,
&misc_ver);
if (ret < 0)
return ret;
soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
if (!soc_dev_attr)
return -ENOMEM;
soc_dev_attr->family = "Amlogic Meson";
np = of_find_node_by_path("/");
of_property_read_string(np, "model", &soc_dev_attr->machine);
of_node_put(np);
soc_dev_attr->revision = meson_mx_socinfo_revision(major_ver, misc_ver,
metal_rev);
soc_dev_attr->soc_id = meson_mx_socinfo_soc_id(major_ver, metal_rev);
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR(soc_dev)) {
kfree_const(soc_dev_attr->revision);
kfree_const(soc_dev_attr->soc_id);
kfree(soc_dev_attr);
return PTR_ERR(soc_dev);
}
dev_info(soc_device_to_device(soc_dev), "Amlogic %s %s detected\n",
soc_dev_attr->soc_id, soc_dev_attr->revision);
return 0;
}
device_initcall(meson_mx_socinfo_init);
......@@ -72,6 +72,8 @@ static const struct at91_soc __initconst socs[] = {
"sama5d21", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D22CU_EXID_MATCH,
"sama5d22", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D225C_D1M_EXID_MATCH,
"sama5d225c 16MiB SiP", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D23CU_EXID_MATCH,
"sama5d23", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D24CX_EXID_MATCH,
......@@ -84,10 +86,16 @@ static const struct at91_soc __initconst socs[] = {
"sama5d27", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27CN_EXID_MATCH,
"sama5d27", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D1G_EXID_MATCH,
"sama5d27c 128MiB SiP", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27C_D5M_EXID_MATCH,
"sama5d27c 64MiB SiP", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CU_EXID_MATCH,
"sama5d28", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CN_EXID_MATCH,
"sama5d28", "sama5d2"),
AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28C_D1G_EXID_MATCH,
"sama5d28c 128MiB SiP", "sama5d2"),
AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D31_EXID_MATCH,
"sama5d31", "sama5d3"),
AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D33_EXID_MATCH,
......
......@@ -64,14 +64,18 @@ at91_soc_init(const struct at91_soc *socs);
#define SAMA5D2_CIDR_MATCH 0x0a5c08c0
#define SAMA5D21CU_EXID_MATCH 0x0000005a
#define SAMA5D225C_D1M_EXID_MATCH 0x00000053
#define SAMA5D22CU_EXID_MATCH 0x00000059
#define SAMA5D22CN_EXID_MATCH 0x00000069
#define SAMA5D23CU_EXID_MATCH 0x00000058
#define SAMA5D24CX_EXID_MATCH 0x00000004
#define SAMA5D24CU_EXID_MATCH 0x00000014
#define SAMA5D26CU_EXID_MATCH 0x00000012
#define SAMA5D27C_D1G_EXID_MATCH 0x00000033
#define SAMA5D27C_D5M_EXID_MATCH 0x00000032
#define SAMA5D27CU_EXID_MATCH 0x00000011
#define SAMA5D27CN_EXID_MATCH 0x00000021
#define SAMA5D28C_D1G_EXID_MATCH 0x00000013
#define SAMA5D28CU_EXID_MATCH 0x00000010
#define SAMA5D28CN_EXID_MATCH 0x00000020
......
......@@ -20,4 +20,6 @@ config SOC_BRCMSTB
If unsure, say N.
source "drivers/soc/bcm/brcmstb/Kconfig"
endmenu
if SOC_BRCMSTB
config BRCMSTB_PM
bool "Support suspend/resume for STB platforms"
default y
depends on PM
depends on ARCH_BRCMSTB || BMIPS_GENERIC
select ARM_CPU_SUSPEND if ARM
endif # SOC_BRCMSTB
obj-y += common.o biuctrl.o
obj-$(CONFIG_BRCMSTB_PM) += pm/
obj-$(CONFIG_ARM) += s2-arm.o pm-arm.o
AFLAGS_s2-arm.o := -march=armv7-a
obj-$(CONFIG_BMIPS_GENERIC) += s2-mips.o s3-mips.o pm-mips.o
/*
* Always ON (AON) register interface between bootloader and Linux
*
* Copyright © 2014-2017 Broadcom
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __BRCMSTB_AON_DEFS_H__
#define __BRCMSTB_AON_DEFS_H__
#include <linux/compiler.h>
/* Magic number in upper 16-bits */
#define BRCMSTB_S3_MAGIC_MASK 0xffff0000
#define BRCMSTB_S3_MAGIC_SHORT 0x5AFE0000
enum {
/* Restore random key for AES memory verification (off = fixed key) */
S3_FLAG_LOAD_RANDKEY = (1 << 0),
/* Scratch buffer page table is present */
S3_FLAG_SCRATCH_BUFFER_TABLE = (1 << 1),
/* Skip all memory verification */
S3_FLAG_NO_MEM_VERIFY = (1 << 2),
/*
* Modification of this bit reserved for bootloader only.
* 1=PSCI started Linux, 0=Direct jump to Linux.
*/
S3_FLAG_PSCI_BOOT = (1 << 3),
/*
* Modification of this bit reserved for bootloader only.
* 1=64 bit boot, 0=32 bit boot.
*/
S3_FLAG_BOOTED64 = (1 << 4),
};
#define BRCMSTB_HASH_LEN (128 / 8) /* 128-bit hash */
#define AON_REG_MAGIC_FLAGS 0x00
#define AON_REG_CONTROL_LOW 0x04
#define AON_REG_CONTROL_HIGH 0x08
#define AON_REG_S3_HASH 0x0c /* hash of S3 params */
#define AON_REG_CONTROL_HASH_LEN 0x1c
#define AON_REG_PANIC 0x20
#define BRCMSTB_S3_MAGIC 0x5AFEB007
#define BRCMSTB_PANIC_MAGIC 0x512E115E
#define BOOTLOADER_SCRATCH_SIZE 64
#define BRCMSTB_DTU_STATE_MAP_ENTRIES (8*1024)
#define BRCMSTB_DTU_CONFIG_ENTRIES (512)
#define BRCMSTB_DTU_COUNT (2)
#define IMAGE_DESCRIPTORS_BUFSIZE (2 * 1024)
#define S3_BOOTLOADER_RESERVED (S3_FLAG_PSCI_BOOT | S3_FLAG_BOOTED64)
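/*
 * brcmstb_pm_s3_finish() below keeps only these bootloader-owned bits
 * when it rewrites AON_REG_MAGIC_FLAGS before suspending.
 */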
struct brcmstb_bootloader_dtu_table {
uint32_t dtu_state_map[BRCMSTB_DTU_STATE_MAP_ENTRIES];
uint32_t dtu_config[BRCMSTB_DTU_CONFIG_ENTRIES];
};
/*
* The bootloader utilizes a custom parameter block left in DRAM for handling
* warm resume
*/
struct brcmstb_s3_params {
/* scratch memory for bootloader */
uint8_t scratch[BOOTLOADER_SCRATCH_SIZE];
uint32_t magic; /* BRCMSTB_S3_MAGIC */
uint64_t reentry; /* PA */
/* descriptors */
uint32_t hash[BRCMSTB_HASH_LEN / 4];
/*
* If 0, then ignore this parameter (there is only one set of
* descriptors)
*
* If non-0, then a second set of descriptors is stored at:
*
* descriptors + desc_offset_2
*
* The MAC result of both descriptors is XOR'd and stored in @hash
*/
uint32_t desc_offset_2;
/*
* (Physical) address of a brcmstb_bootloader_scratch_table, for
* providing a large DRAM buffer to the bootloader
*/
uint64_t buffer_table;
uint32_t spare[70];
uint8_t descriptors[IMAGE_DESCRIPTORS_BUFSIZE];
/*
* Must be last member of struct. See brcmstb_pm_s3_finish() for reason.
*/
struct brcmstb_bootloader_dtu_table dtu[BRCMSTB_DTU_COUNT];
} __packed;
#endif /* __BRCMSTB_AON_DEFS_H__ */
/*
* ARM-specific support for Broadcom STB S2/S3/S5 power management
*
* S2: clock gate CPUs and as many peripherals as possible
* S3: power off all of the chip except the Always ON (AON) island; keep DDR in
* self-refresh
* S5: (a.k.a. S3 cold boot) much like S3, except DDR is powered down, so we
* treat this mode like a soft power-off, with wakeup allowed from AON
*
* Copyright © 2014-2017 Broadcom
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#define pr_fmt(fmt) "brcmstb-pm: " fmt
#include <linux/bitops.h>
#include <linux/compiler.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/kconfig.h>
#include <linux/kernel.h>
#include <linux/memblock.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/printk.h>
#include <linux/proc_fs.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/sort.h>
#include <linux/suspend.h>
#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/soc/brcmstb/brcmstb.h>
#include <asm/fncpy.h>
#include <asm/setup.h>
#include <asm/suspend.h>
#include "pm.h"
#include "aon_defs.h"
#define SHIMPHY_DDR_PAD_CNTRL 0x8c
/* Method #0 */
#define SHIMPHY_PAD_PLL_SEQUENCE BIT(8)
#define SHIMPHY_PAD_GATE_PLL_S3 BIT(9)
/* Method #1 */
#define PWRDWN_SEQ_NO_SEQUENCING 0
#define PWRDWN_SEQ_HOLD_CHANNEL 1
#define PWRDWN_SEQ_RESET_PLL 2
#define PWRDWN_SEQ_POWERDOWN_PLL 3
#define SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK 0x00f00000
#define SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT 20
#define DDR_FORCE_CKE_RST_N BIT(3)
#define DDR_PHY_RST_N BIT(2)
#define DDR_PHY_CKE BIT(1)
#define DDR_PHY_NO_CHANNEL 0xffffffff
#define MAX_NUM_MEMC 3
struct brcmstb_memc {
void __iomem *ddr_phy_base;
void __iomem *ddr_shimphy_base;
void __iomem *ddr_ctrl;
};
struct brcmstb_pm_control {
void __iomem *aon_ctrl_base;
void __iomem *aon_sram;
struct brcmstb_memc memcs[MAX_NUM_MEMC];
void __iomem *boot_sram;
size_t boot_sram_len;
bool support_warm_boot;
size_t pll_status_offset;
int num_memc;
struct brcmstb_s3_params *s3_params;
dma_addr_t s3_params_pa;
int s3entry_method;
u32 warm_boot_offset;
u32 phy_a_standby_ctrl_offs;
u32 phy_b_standby_ctrl_offs;
bool needs_ddr_pad;
struct platform_device *pdev;
};
enum bsp_initiate_command {
BSP_CLOCK_STOP = 0x00,
BSP_GEN_RANDOM_KEY = 0x4A,
BSP_RESTORE_RANDOM_KEY = 0x55,
BSP_GEN_FIXED_KEY = 0x63,
};
#define PM_INITIATE 0x01
#define PM_INITIATE_SUCCESS 0x00
#define PM_INITIATE_FAIL 0xfe
static struct brcmstb_pm_control ctrl;
static int (*brcmstb_pm_do_s2_sram)(void __iomem *aon_ctrl_base,
void __iomem *ddr_phy_pll_status);
static int brcmstb_init_sram(struct device_node *dn)
{
void __iomem *sram;
struct resource res;
int ret;
ret = of_address_to_resource(dn, 0, &res);
if (ret)
return ret;
/* Uncached, executable remapping of SRAM */
sram = __arm_ioremap_exec(res.start, resource_size(&res), false);
if (!sram)
return -ENOMEM;
ctrl.boot_sram = sram;
ctrl.boot_sram_len = resource_size(&res);
return 0;
}
static const struct of_device_id sram_dt_ids[] = {
{ .compatible = "mmio-sram" },
{ /* sentinel */ }
};
static int do_bsp_initiate_command(enum bsp_initiate_command cmd)
{
void __iomem *base = ctrl.aon_ctrl_base;
int ret;
int timeo = 1000 * 1000; /* 1 second */
writel_relaxed(0, base + AON_CTRL_PM_INITIATE);
(void)readl_relaxed(base + AON_CTRL_PM_INITIATE);
/* Go! */
writel_relaxed((cmd << 1) | PM_INITIATE, base + AON_CTRL_PM_INITIATE);
/*
* If firmware doesn't support the 'ack', then just assume it's done
* after 10ms. Note that this only works for command 0, BSP_CLOCK_STOP.
*/
if (of_machine_is_compatible("brcm,bcm74371a0")) {
(void)readl_relaxed(base + AON_CTRL_PM_INITIATE);
mdelay(10);
return 0;
}
for (;;) {
ret = readl_relaxed(base + AON_CTRL_PM_INITIATE);
if (!(ret & PM_INITIATE))
break;
if (timeo <= 0) {
pr_err("error: timeout waiting for BSP (%x)\n", ret);
break;
}
timeo -= 50;
udelay(50);
}
return (ret & 0xff) != PM_INITIATE_SUCCESS;	/* nonzero means failure */
}
static int brcmstb_pm_handshake(void)
{
void __iomem *base = ctrl.aon_ctrl_base;
u32 tmp;
int ret;
/* BSP power handshake, v1 */
tmp = readl_relaxed(base + AON_CTRL_HOST_MISC_CMDS);
tmp &= ~1UL;
writel_relaxed(tmp, base + AON_CTRL_HOST_MISC_CMDS);
(void)readl_relaxed(base + AON_CTRL_HOST_MISC_CMDS);
ret = do_bsp_initiate_command(BSP_CLOCK_STOP);
if (ret)
pr_err("BSP handshake failed\n");
/*
* HACK: BSP may have internal race on the CLOCK_STOP command.
* Avoid touching the BSP for a few milliseconds.
*/
mdelay(3);
return ret;
}
static inline void shimphy_set(u32 value, u32 mask)
{
int i;
if (!ctrl.needs_ddr_pad)
return;
for (i = 0; i < ctrl.num_memc; i++) {
u32 tmp;
tmp = readl_relaxed(ctrl.memcs[i].ddr_shimphy_base +
SHIMPHY_DDR_PAD_CNTRL);
tmp = value | (tmp & mask);	/* @mask selects the bits to preserve */
writel_relaxed(tmp, ctrl.memcs[i].ddr_shimphy_base +
SHIMPHY_DDR_PAD_CNTRL);
}
wmb(); /* Complete sequence in order. */
}
static inline void ddr_ctrl_set(bool warmboot)
{
int i;
for (i = 0; i < ctrl.num_memc; i++) {
u32 tmp;
tmp = readl_relaxed(ctrl.memcs[i].ddr_ctrl +
ctrl.warm_boot_offset);
if (warmboot)
tmp |= 1;
else
tmp &= ~1; /* Cold boot */
writel_relaxed(tmp, ctrl.memcs[i].ddr_ctrl +
ctrl.warm_boot_offset);
}
/* Complete sequence in order */
wmb();
}
static inline void s3entry_method0(void)
{
shimphy_set(SHIMPHY_PAD_GATE_PLL_S3 | SHIMPHY_PAD_PLL_SEQUENCE,
0xffffffff);
}
static inline void s3entry_method1(void)
{
/*
* S3 Entry Sequence
* -----------------
* Step 1: SHIMPHY_ADDR_CNTL_0_DDR_PAD_CNTRL [ S3_PWRDWN_SEQ ] = 3
* Step 2: MEMC_DDR_0_WARM_BOOT [ WARM_BOOT ] = 1
*/
shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL <<
SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
ddr_ctrl_set(true);
}
static inline void s5entry_method1(void)
{
int i;
/*
* S5 Entry Sequence
* -----------------
* Step 1: SHIMPHY_ADDR_CNTL_0_DDR_PAD_CNTRL [ S3_PWRDWN_SEQ ] = 3
* Step 2: MEMC_DDR_0_WARM_BOOT [ WARM_BOOT ] = 0
* Step 3: DDR_PHY_CONTROL_REGS_[AB]_0_STANDBY_CONTROL[ CKE ] = 0
* DDR_PHY_CONTROL_REGS_[AB]_0_STANDBY_CONTROL[ RST_N ] = 0
*/
shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL <<
SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
ddr_ctrl_set(false);
for (i = 0; i < ctrl.num_memc; i++) {
u32 tmp;
/* Step 3: Channel A (RST_N = CKE = 0) */
tmp = readl_relaxed(ctrl.memcs[i].ddr_phy_base +
ctrl.phy_a_standby_ctrl_offs);
tmp &= ~(DDR_PHY_RST_N | DDR_PHY_CKE);
writel_relaxed(tmp, ctrl.memcs[i].ddr_phy_base +
ctrl.phy_a_standby_ctrl_offs);
/* Step 3: Channel B? */
if (ctrl.phy_b_standby_ctrl_offs != DDR_PHY_NO_CHANNEL) {
tmp = readl_relaxed(ctrl.memcs[i].ddr_phy_base +
ctrl.phy_b_standby_ctrl_offs);
tmp &= ~(DDR_PHY_RST_N | DDR_PHY_CKE);
writel_relaxed(tmp, ctrl.memcs[i].ddr_phy_base +
ctrl.phy_b_standby_ctrl_offs);
}
}
/* Must complete */
wmb();
}
/*
* Run a Power Management State Machine (PMSM) shutdown command and put the CPU
* into a low-power mode
*/
static void brcmstb_do_pmsm_power_down(unsigned long base_cmd, bool onewrite)
{
void __iomem *base = ctrl.aon_ctrl_base;
if ((ctrl.s3entry_method == 1) && (base_cmd == PM_COLD_CONFIG))
s5entry_method1();
/* pm_start_pwrdn transition 0->1 */
writel_relaxed(base_cmd, base + AON_CTRL_PM_CTRL);
if (!onewrite) {
(void)readl_relaxed(base + AON_CTRL_PM_CTRL);
writel_relaxed(base_cmd | PM_PWR_DOWN, base + AON_CTRL_PM_CTRL);
(void)readl_relaxed(base + AON_CTRL_PM_CTRL);
}
wfi();
}
/* Support S5 cold boot out of "poweroff" */
static void brcmstb_pm_poweroff(void)
{
brcmstb_pm_handshake();
/* Clear magic S3 warm-boot value */
writel_relaxed(0, ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
(void)readl_relaxed(ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
/* Skip wait-for-interrupt signal; just use a countdown */
writel_relaxed(0x10, ctrl.aon_ctrl_base + AON_CTRL_PM_CPU_WAIT_COUNT);
(void)readl_relaxed(ctrl.aon_ctrl_base + AON_CTRL_PM_CPU_WAIT_COUNT);
if (ctrl.s3entry_method == 1) {
shimphy_set((PWRDWN_SEQ_POWERDOWN_PLL <<
SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
ddr_ctrl_set(false);
brcmstb_do_pmsm_power_down(M1_PM_COLD_CONFIG, true);
return; /* We should never actually get here */
}
brcmstb_do_pmsm_power_down(PM_COLD_CONFIG, false);
}
static void *brcmstb_pm_copy_to_sram(void *fn, size_t len)
{
unsigned int size = ALIGN(len, FNCPY_ALIGN);
if (ctrl.boot_sram_len < size) {
pr_err("standby code will not fit in SRAM\n");
return NULL;
}
return fncpy(ctrl.boot_sram, fn, size);
}
/*
* S2 suspend/resume picks up where we left off, so we must execute carefully
* from SRAM, in order to allow DDR to come back up safely before we continue.
*/
static int brcmstb_pm_s2(void)
{
/* A previous S3 can leave settings that are hazardous to S2, so undo them. */
if (ctrl.s3entry_method == 1) {
shimphy_set((PWRDWN_SEQ_NO_SEQUENCING <<
SHIMPHY_PAD_S3_PWRDWN_SEQ_SHIFT),
~SHIMPHY_PAD_S3_PWRDWN_SEQ_MASK);
ddr_ctrl_set(false);
}
brcmstb_pm_do_s2_sram = brcmstb_pm_copy_to_sram(&brcmstb_pm_do_s2,
brcmstb_pm_do_s2_sz);
if (!brcmstb_pm_do_s2_sram)
return -EINVAL;
return brcmstb_pm_do_s2_sram(ctrl.aon_ctrl_base,
ctrl.memcs[0].ddr_phy_base +
ctrl.pll_status_offset);
}
/*
* This function is called on a new stack, so don't allow inlining (which will
* generate stack references on the old stack). It cannot be made static because
* it is referenced from the inline assembly in brcmstb_pm_do_s3().
*/
noinline int brcmstb_pm_s3_finish(void)
{
struct brcmstb_s3_params *params = ctrl.s3_params;
dma_addr_t params_pa = ctrl.s3_params_pa;
phys_addr_t reentry = virt_to_phys(&cpu_resume);
enum bsp_initiate_command cmd;
u32 flags;
/*
* Clear parameter structure, but not DTU area, which has already been
* filled in. We know the DTU is at the end, so we can just subtract its
* size.
*/
memset(params, 0, sizeof(*params) - sizeof(params->dtu));
flags = readl_relaxed(ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
flags &= S3_BOOTLOADER_RESERVED;
flags |= S3_FLAG_NO_MEM_VERIFY;
flags |= S3_FLAG_LOAD_RANDKEY;
/* Load random / fixed key */
if (flags & S3_FLAG_LOAD_RANDKEY)
cmd = BSP_GEN_RANDOM_KEY;
else
cmd = BSP_GEN_FIXED_KEY;
if (do_bsp_initiate_command(cmd)) {
pr_info("key loading failed\n");
return -EIO;
}
params->magic = BRCMSTB_S3_MAGIC;
params->reentry = reentry;
/* No more writes to DRAM */
flush_cache_all();
flags |= BRCMSTB_S3_MAGIC_SHORT;
writel_relaxed(flags, ctrl.aon_sram + AON_REG_MAGIC_FLAGS);
writel_relaxed(lower_32_bits(params_pa),
ctrl.aon_sram + AON_REG_CONTROL_LOW);
writel_relaxed(upper_32_bits(params_pa),
ctrl.aon_sram + AON_REG_CONTROL_HIGH);
switch (ctrl.s3entry_method) {
case 0:
s3entry_method0();
brcmstb_do_pmsm_power_down(PM_WARM_CONFIG, false);
break;
case 1:
s3entry_method1();
brcmstb_do_pmsm_power_down(M1_PM_WARM_CONFIG, true);
break;
default:
return -EINVAL;
}
/* Must have been interrupted from wfi()? */
return -EINTR;
}
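/*
 * Switch the stack pointer to the SRAM stack passed in @sp, run
 * brcmstb_pm_s3_finish() on it, then restore the original stack
 * pointer before returning its result.
 */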
static int brcmstb_pm_do_s3(unsigned long sp)
{
unsigned long save_sp;
int ret;
asm volatile (
"mov %[save], sp\n"
"mov sp, %[new]\n"
"bl brcmstb_pm_s3_finish\n"
"mov %[ret], r0\n"
"mov %[new], sp\n"
"mov sp, %[save]\n"
: [save] "=&r" (save_sp), [ret] "=&r" (ret)
: [new] "r" (sp)
);
return ret;
}
static int brcmstb_pm_s3(void)
{
void __iomem *sp = ctrl.boot_sram + ctrl.boot_sram_len;
return cpu_suspend((unsigned long)sp, brcmstb_pm_do_s3);
}
static int brcmstb_pm_standby(bool deep_standby)
{
int ret;
if (brcmstb_pm_handshake())
return -EIO;
if (deep_standby)
ret = brcmstb_pm_s3();
else
ret = brcmstb_pm_s2();
if (ret)
pr_err("%s: standby failed\n", __func__);
return ret;
}
static int brcmstb_pm_enter(suspend_state_t state)
{
int ret = -EINVAL;
switch (state) {
case PM_SUSPEND_STANDBY:
ret = brcmstb_pm_standby(false);
break;
case PM_SUSPEND_MEM:
ret = brcmstb_pm_standby(true);
break;
}
return ret;
}
static int brcmstb_pm_valid(suspend_state_t state)
{
switch (state) {
case PM_SUSPEND_STANDBY:
return true;
case PM_SUSPEND_MEM:
return ctrl.support_warm_boot;
default:
return false;
}
}
static const struct platform_suspend_ops brcmstb_pm_ops = {
.enter = brcmstb_pm_enter,
.valid = brcmstb_pm_valid,
};
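/*
 * With these ops registered (see the probe below), "echo standby >
 * /sys/power/state" enters S2 and "echo mem > /sys/power/state"
 * enters S3, the latter only if the DDR PHY supports warm boot.
 */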
static const struct of_device_id aon_ctrl_dt_ids[] = {
{ .compatible = "brcm,brcmstb-aon-ctrl" },
{}
};
struct ddr_phy_ofdata {
bool supports_warm_boot;
size_t pll_status_offset;
int s3entry_method;
u32 warm_boot_offset;
u32 phy_a_standby_ctrl_offs;
u32 phy_b_standby_ctrl_offs;
};
static struct ddr_phy_ofdata ddr_phy_71_1 = {
.supports_warm_boot = true,
.pll_status_offset = 0x0c,
.s3entry_method = 1,
.warm_boot_offset = 0x2c,
.phy_a_standby_ctrl_offs = 0x198,
.phy_b_standby_ctrl_offs = DDR_PHY_NO_CHANNEL
};
static struct ddr_phy_ofdata ddr_phy_72_0 = {
.supports_warm_boot = true,
.pll_status_offset = 0x10,
.s3entry_method = 1,
.warm_boot_offset = 0x40,
.phy_a_standby_ctrl_offs = 0x2a4,
.phy_b_standby_ctrl_offs = 0x8a4
};
static struct ddr_phy_ofdata ddr_phy_225_1 = {
.supports_warm_boot = false,
.pll_status_offset = 0x4,
.s3entry_method = 0
};
static struct ddr_phy_ofdata ddr_phy_240_1 = {
.supports_warm_boot = true,
.pll_status_offset = 0x4,
.s3entry_method = 0
};
static const struct of_device_id ddr_phy_dt_ids[] = {
{
.compatible = "brcm,brcmstb-ddr-phy-v71.1",
.data = &ddr_phy_71_1,
},
{
.compatible = "brcm,brcmstb-ddr-phy-v72.0",
.data = &ddr_phy_72_0,
},
{
.compatible = "brcm,brcmstb-ddr-phy-v225.1",
.data = &ddr_phy_225_1,
},
{
.compatible = "brcm,brcmstb-ddr-phy-v240.1",
.data = &ddr_phy_240_1,
},
{
/* Same as v240.1, for the registers we care about */
.compatible = "brcm,brcmstb-ddr-phy-v240.2",
.data = &ddr_phy_240_1,
},
{}
};
struct ddr_seq_ofdata {
bool needs_ddr_pad;
u32 warm_boot_offset;
};
static const struct ddr_seq_ofdata ddr_seq_b22 = {
.needs_ddr_pad = false,
.warm_boot_offset = 0x2c,
};
static const struct ddr_seq_ofdata ddr_seq = {
.needs_ddr_pad = true,
};
static const struct of_device_id ddr_shimphy_dt_ids[] = {
{ .compatible = "brcm,brcmstb-ddr-shimphy-v1.0" },
{}
};
static const struct of_device_id brcmstb_memc_of_match[] = {
{
.compatible = "brcm,brcmstb-memc-ddr-rev-b.2.2",
.data = &ddr_seq_b22,
},
{
.compatible = "brcm,brcmstb-memc-ddr",
.data = &ddr_seq,
},
{},
};
static void __iomem *brcmstb_ioremap_match(const struct of_device_id *matches,
int index, const void **ofdata)
{
struct device_node *dn;
const struct of_device_id *match;
dn = of_find_matching_node_and_match(NULL, matches, &match);
if (!dn)
return ERR_PTR(-EINVAL);
if (ofdata)
*ofdata = match->data;
return of_io_request_and_map(dn, index, dn->full_name);
}
static int brcmstb_pm_panic_notify(struct notifier_block *nb,
unsigned long action, void *data)
{
writel_relaxed(BRCMSTB_PANIC_MAGIC, ctrl.aon_sram + AON_REG_PANIC);
return NOTIFY_DONE;
}
static struct notifier_block brcmstb_pm_panic_nb = {
.notifier_call = brcmstb_pm_panic_notify,
};
static int brcmstb_pm_probe(struct platform_device *pdev)
{
const struct ddr_phy_ofdata *ddr_phy_data;
const struct ddr_seq_ofdata *ddr_seq_data;
const struct of_device_id *of_id = NULL;
struct device_node *dn;
void __iomem *base;
int ret, i;
/* AON ctrl registers */
base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL);
if (IS_ERR(base)) {
pr_err("error mapping AON_CTRL\n");
return PTR_ERR(base);
}
ctrl.aon_ctrl_base = base;
/* AON SRAM registers */
base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 1, NULL);
if (IS_ERR(base)) {
/* Assume standard offset */
ctrl.aon_sram = ctrl.aon_ctrl_base +
AON_CTRL_SYSTEM_DATA_RAM_OFS;
} else {
ctrl.aon_sram = base;
}
writel_relaxed(0, ctrl.aon_sram + AON_REG_PANIC);
/* DDR PHY registers */
base = brcmstb_ioremap_match(ddr_phy_dt_ids, 0,
(const void **)&ddr_phy_data);
if (IS_ERR(base)) {
pr_err("error mapping DDR PHY\n");
return PTR_ERR(base);
}
ctrl.support_warm_boot = ddr_phy_data->supports_warm_boot;
ctrl.pll_status_offset = ddr_phy_data->pll_status_offset;
/* Only need DDR PHY 0 for now? */
ctrl.memcs[0].ddr_phy_base = base;
ctrl.s3entry_method = ddr_phy_data->s3entry_method;
ctrl.phy_a_standby_ctrl_offs = ddr_phy_data->phy_a_standby_ctrl_offs;
ctrl.phy_b_standby_ctrl_offs = ddr_phy_data->phy_b_standby_ctrl_offs;
/*
* Slightly gross to use the PHY version to get a MEMC offset, but
* that is the only versioned thing we can test for so far.
*/
ctrl.warm_boot_offset = ddr_phy_data->warm_boot_offset;
/* DDR SHIM-PHY registers */
for_each_matching_node(dn, ddr_shimphy_dt_ids) {
i = ctrl.num_memc;
if (i >= MAX_NUM_MEMC) {
pr_warn("too many MEMCs (max %d)\n", MAX_NUM_MEMC);
break;
}
base = of_io_request_and_map(dn, 0, dn->full_name);
if (IS_ERR(base)) {
if (!ctrl.support_warm_boot)
break;
pr_err("error mapping DDR SHIMPHY %d\n", i);
return PTR_ERR(base);
}
ctrl.memcs[i].ddr_shimphy_base = base;
ctrl.num_memc++;
}
/* Sequencer DRAM Param and Control Registers */
i = 0;
for_each_matching_node(dn, brcmstb_memc_of_match) {
base = of_iomap(dn, 0);
if (!base) {
pr_err("error mapping DDR Sequencer %d\n", i);
return -ENOMEM;
}
of_id = of_match_node(brcmstb_memc_of_match, dn);
if (!of_id) {
iounmap(base);
return -EINVAL;
}
ddr_seq_data = of_id->data;
ctrl.needs_ddr_pad = ddr_seq_data->needs_ddr_pad;
/* Adjust warm boot offset based on the DDR sequencer */
if (ddr_seq_data->warm_boot_offset)
ctrl.warm_boot_offset = ddr_seq_data->warm_boot_offset;
ctrl.memcs[i].ddr_ctrl = base;
i++;
}
pr_debug("PM: supports warm boot:%d, method:%d, wboffs:%x\n",
ctrl.support_warm_boot, ctrl.s3entry_method,
ctrl.warm_boot_offset);
dn = of_find_matching_node(NULL, sram_dt_ids);
if (!dn) {
pr_err("SRAM not found\n");
return -EINVAL;
}
ret = brcmstb_init_sram(dn);
if (ret) {
pr_err("error setting up SRAM for PM\n");
return ret;
}
ctrl.pdev = pdev;
ctrl.s3_params = kmalloc(sizeof(*ctrl.s3_params), GFP_KERNEL);
if (!ctrl.s3_params)
return -ENOMEM;
ctrl.s3_params_pa = dma_map_single(&pdev->dev, ctrl.s3_params,
sizeof(*ctrl.s3_params),
DMA_TO_DEVICE);
if (dma_mapping_error(&pdev->dev, ctrl.s3_params_pa)) {
pr_err("error mapping DMA memory\n");
ret = -ENOMEM;
goto out;
}
atomic_notifier_chain_register(&panic_notifier_list,
&brcmstb_pm_panic_nb);
pm_power_off = brcmstb_pm_poweroff;
suspend_set_ops(&brcmstb_pm_ops);
return 0;
out:
kfree(ctrl.s3_params);
pr_warn("PM: initialization failed with code %d\n", ret);
return ret;
}
static struct platform_driver brcmstb_pm_driver = {
.driver = {
.name = "brcmstb-pm",
.of_match_table = aon_ctrl_dt_ids,
},
};
static int __init brcmstb_pm_init(void)
{
return platform_driver_probe(&brcmstb_pm_driver,
brcmstb_pm_probe);
}
module_init(brcmstb_pm_init);
/*
* MIPS-specific support for Broadcom STB S2/S3/S5 power management
*
* Copyright (C) 2016-2017 Broadcom
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/delay.h>
#include <linux/suspend.h>
#include <asm/bmips.h>
#include <asm/tlbflush.h>
#include "pm.h"
#define S2_NUM_PARAMS 6
#define MAX_NUM_MEMC 3
/* S3 constants */
#define MAX_GP_REGS 16
#define MAX_CP0_REGS 32
#define NUM_MEMC_CLIENTS 128
#define AON_CTRL_RAM_SIZE 128
#define BRCMSTB_S3_MAGIC 0x5AFEB007
#define CLEAR_RESET_MASK 0x01
/* Index each CP0 register that needs to be saved */
#define CONTEXT 0
#define USER_LOCAL 1
#define PGMK 2
#define HWRENA 3
#define COMPARE 4
#define STATUS 5
#define CONFIG 6
#define MODE 7
#define EDSP 8
#define BOOT_VEC 9
#define EBASE 10
struct brcmstb_memc {
void __iomem *ddr_phy_base;
void __iomem *arb_base;
};
struct brcmstb_pm_control {
void __iomem *aon_ctrl_base;
void __iomem *aon_sram_base;
void __iomem *timers_base;
struct brcmstb_memc memcs[MAX_NUM_MEMC];
int num_memc;
};
struct brcm_pm_s3_context {
u32 cp0_regs[MAX_CP0_REGS];
u32 memc0_rts[NUM_MEMC_CLIENTS];
u32 sc_boot_vec;
};
struct brcmstb_mem_transfer;
struct brcmstb_mem_transfer {
struct brcmstb_mem_transfer *next;
void *src;
void *dst;
dma_addr_t pa_src;
dma_addr_t pa_dst;
u32 len;
u8 key;
u8 mode;
u8 src_remapped;
u8 dst_remapped;
u8 src_dst_remapped;
};
#define AON_SAVE_SRAM(base, idx, val) \
__raw_writel(val, base + (idx << 2))
/* Used for saving registers in asm */
u32 gp_regs[MAX_GP_REGS];
#define BSP_CLOCK_STOP 0x00
#define PM_INITIATE 0x01
static struct brcmstb_pm_control ctrl;
static void brcm_pm_save_cp0_context(struct brcm_pm_s3_context *ctx)
{
/* Generic MIPS */
ctx->cp0_regs[CONTEXT] = read_c0_context();
ctx->cp0_regs[USER_LOCAL] = read_c0_userlocal();
ctx->cp0_regs[PGMK] = read_c0_pagemask();
ctx->cp0_regs[HWRENA] = read_c0_cache();
ctx->cp0_regs[COMPARE] = read_c0_compare();
ctx->cp0_regs[STATUS] = read_c0_status();
/* Broadcom specific */
ctx->cp0_regs[CONFIG] = read_c0_brcm_config();
ctx->cp0_regs[MODE] = read_c0_brcm_mode();
ctx->cp0_regs[EDSP] = read_c0_brcm_edsp();
ctx->cp0_regs[BOOT_VEC] = read_c0_brcm_bootvec();
ctx->cp0_regs[EBASE] = read_c0_ebase();
ctx->sc_boot_vec = bmips_read_zscm_reg(0xa0);
}
static void brcm_pm_restore_cp0_context(struct brcm_pm_s3_context *ctx)
{
/* Restore cp0 state */
bmips_write_zscm_reg(0xa0, ctx->sc_boot_vec);
/* Generic MIPS */
write_c0_context(ctx->cp0_regs[CONTEXT]);
write_c0_userlocal(ctx->cp0_regs[USER_LOCAL]);
write_c0_pagemask(ctx->cp0_regs[PGMK]);
write_c0_cache(ctx->cp0_regs[HWRENA]);
write_c0_compare(ctx->cp0_regs[COMPARE]);
write_c0_status(ctx->cp0_regs[STATUS]);
/* Broadcom specific */
write_c0_brcm_config(ctx->cp0_regs[CONFIG]);
write_c0_brcm_mode(ctx->cp0_regs[MODE]);
write_c0_brcm_edsp(ctx->cp0_regs[EDSP]);
write_c0_brcm_bootvec(ctx->cp0_regs[BOOT_VEC]);
write_c0_ebase(ctx->cp0_regs[EBASE]);
}
static void brcmstb_pm_handshake(void)
{
void __iomem *base = ctrl.aon_ctrl_base;
u32 tmp;
/* BSP power handshake, v1 */
tmp = __raw_readl(base + AON_CTRL_HOST_MISC_CMDS);
tmp &= ~1UL;
__raw_writel(tmp, base + AON_CTRL_HOST_MISC_CMDS);
(void)__raw_readl(base + AON_CTRL_HOST_MISC_CMDS);
__raw_writel(0, base + AON_CTRL_PM_INITIATE);
(void)__raw_readl(base + AON_CTRL_PM_INITIATE);
__raw_writel(BSP_CLOCK_STOP | PM_INITIATE,
base + AON_CTRL_PM_INITIATE);
/*
* HACK: BSP may have internal race on the CLOCK_STOP command.
* Avoid touching the BSP for a few milliseconds.
*/
mdelay(3);
}
static void brcmstb_pm_s5(void)
{
void __iomem *base = ctrl.aon_ctrl_base;
brcmstb_pm_handshake();
/* Clear magic s3 warm-boot value */
AON_SAVE_SRAM(ctrl.aon_sram_base, 0, 0);
/* Set the countdown */
__raw_writel(0x10, base + AON_CTRL_PM_CPU_WAIT_COUNT);
(void)__raw_readl(base + AON_CTRL_PM_CPU_WAIT_COUNT);
/* Prepare to S5 cold boot */
__raw_writel(PM_COLD_CONFIG, base + AON_CTRL_PM_CTRL);
(void)__raw_readl(base + AON_CTRL_PM_CTRL);
__raw_writel((PM_COLD_CONFIG | PM_PWR_DOWN), base +
AON_CTRL_PM_CTRL);
(void)__raw_readl(base + AON_CTRL_PM_CTRL);
__asm__ __volatile__(
" wait\n"
: : : "memory");
}
static int brcmstb_pm_s3(void)
{
struct brcm_pm_s3_context s3_context;
void __iomem *memc_arb_base;
unsigned long flags;
u32 tmp;
int i;
/* Prepare for s3 */
AON_SAVE_SRAM(ctrl.aon_sram_base, 0, BRCMSTB_S3_MAGIC);
AON_SAVE_SRAM(ctrl.aon_sram_base, 1, (u32)&s3_reentry);
AON_SAVE_SRAM(ctrl.aon_sram_base, 2, 0);
/* Clear RESET_HISTORY */
tmp = __raw_readl(ctrl.aon_ctrl_base + AON_CTRL_RESET_CTRL);
tmp &= ~CLEAR_RESET_MASK;
__raw_writel(tmp, ctrl.aon_ctrl_base + AON_CTRL_RESET_CTRL);
local_irq_save(flags);
/* Inhibit DDR_RSTb pulse for both MEMCs */
for (i = 0; i < ctrl.num_memc; i++) {
tmp = __raw_readl(ctrl.memcs[i].ddr_phy_base +
DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL);
tmp &= ~0x0f;
__raw_writel(tmp, ctrl.memcs[i].ddr_phy_base +
DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL);
tmp |= (0x05 | BIT(5));
__raw_writel(tmp, ctrl.memcs[i].ddr_phy_base +
DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL);
}
/* Save CP0 context */
brcm_pm_save_cp0_context(&s3_context);
/* Save RTS (skip debug register) */
memc_arb_base = ctrl.memcs[0].arb_base + 4;
for (i = 0; i < NUM_MEMC_CLIENTS; i++) {
s3_context.memc0_rts[i] = __raw_readl(memc_arb_base);
memc_arb_base += 4;
}
/* Save I/O context */
local_flush_tlb_all();
_dma_cache_wback_inv(0, ~0);
brcm_pm_do_s3(ctrl.aon_ctrl_base, current_cpu_data.dcache.linesz);
/* CPU reconfiguration */
local_flush_tlb_all();
bmips_cpu_setup();
cpumask_clear(&bmips_booted_mask);
/* Restore RTS (skip debug register) */
memc_arb_base = ctrl.memcs[0].arb_base + 4;
for (i = 0; i < NUM_MEMC_CLIENTS; i++) {
__raw_writel(s3_context.memc0_rts[i], memc_arb_base);
memc_arb_base += 4;
}
/* restore CP0 context */
brcm_pm_restore_cp0_context(&s3_context);
local_irq_restore(flags);
return 0;
}
static int brcmstb_pm_s2(void)
{
/*
* We need to pass 6 arguments to an assembly function. Let's avoid the
* stack and pass the arguments in an explicit array of 4-byte values. The
* assembly code assumes all arguments are 4 bytes and are ordered
* like so:
*
* 0: AON_CTRL base register
* 1: DDR_PHY base register
* 2: TIMERS base register
* 3: I-Cache line size
* 4: Restart vector address
* 5: Restart vector size
*/
u32 s2_params[6];
/* Prepare s2 parameters */
s2_params[0] = (u32)ctrl.aon_ctrl_base;
s2_params[1] = (u32)ctrl.memcs[0].ddr_phy_base;
s2_params[2] = (u32)ctrl.timers_base;
s2_params[3] = (u32)current_cpu_data.icache.linesz;
s2_params[4] = (u32)BMIPS_WARM_RESTART_VEC;
s2_params[5] = (u32)(bmips_smp_int_vec_end -
bmips_smp_int_vec);
/* Drop to standby */
brcm_pm_do_s2(s2_params);
return 0;
}
static int brcmstb_pm_standby(bool deep_standby)
{
brcmstb_pm_handshake();
/* Send IRQs to BMIPS_WARM_RESTART_VEC */
clear_c0_cause(CAUSEF_IV);
irq_disable_hazard();
set_c0_status(ST0_BEV);
irq_disable_hazard();
if (deep_standby)
brcmstb_pm_s3();
else
brcmstb_pm_s2();
/* Send IRQs to normal runtime vectors */
clear_c0_status(ST0_BEV);
irq_disable_hazard();
set_c0_cause(CAUSEF_IV);
irq_disable_hazard();
return 0;
}
static int brcmstb_pm_enter(suspend_state_t state)
{
int ret = -EINVAL;
switch (state) {
case PM_SUSPEND_STANDBY:
ret = brcmstb_pm_standby(false);
break;
case PM_SUSPEND_MEM:
ret = brcmstb_pm_standby(true);
break;
}
return ret;
}
static int brcmstb_pm_valid(suspend_state_t state)
{
switch (state) {
case PM_SUSPEND_STANDBY:
return true;
case PM_SUSPEND_MEM:
return true;
default:
return false;
}
}
static const struct platform_suspend_ops brcmstb_pm_ops = {
.enter = brcmstb_pm_enter,
.valid = brcmstb_pm_valid,
};
static const struct of_device_id aon_ctrl_dt_ids[] = {
{ .compatible = "brcm,brcmstb-aon-ctrl" },
{ /* sentinel */ }
};
static const struct of_device_id ddr_phy_dt_ids[] = {
{ .compatible = "brcm,brcmstb-ddr-phy" },
{ /* sentinel */ }
};
static const struct of_device_id arb_dt_ids[] = {
{ .compatible = "brcm,brcmstb-memc-arb" },
{ /* sentinel */ }
};
static const struct of_device_id timers_ids[] = {
{ .compatible = "brcm,brcmstb-timers" },
{ /* sentinel */ }
};
static inline void __iomem *brcmstb_ioremap_node(struct device_node *dn,
int index)
{
return of_io_request_and_map(dn, index, dn->full_name);
}
static void __iomem *brcmstb_ioremap_match(const struct of_device_id *matches,
int index, const void **ofdata)
{
struct device_node *dn;
const struct of_device_id *match;
dn = of_find_matching_node_and_match(NULL, matches, &match);
if (!dn)
return ERR_PTR(-EINVAL);
if (ofdata)
*ofdata = match->data;
return brcmstb_ioremap_node(dn, index);
}
static int brcmstb_pm_init(void)
{
struct device_node *dn;
void __iomem *base;
int i;
/* AON ctrl registers */
base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 0, NULL);
if (IS_ERR(base)) {
pr_err("error mapping AON_CTRL\n");
goto aon_err;
}
ctrl.aon_ctrl_base = base;
/* AON SRAM registers */
base = brcmstb_ioremap_match(aon_ctrl_dt_ids, 1, NULL);
if (IS_ERR(base)) {
pr_err("error mapping AON_SRAM\n");
goto sram_err;
}
ctrl.aon_sram_base = base;
ctrl.num_memc = 0;
/* Map MEMC DDR PHY registers */
for_each_matching_node(dn, ddr_phy_dt_ids) {
i = ctrl.num_memc;
if (i >= MAX_NUM_MEMC) {
pr_warn("Too many MEMCs (max %d)\n", MAX_NUM_MEMC);
break;
}
base = brcmstb_ioremap_node(dn, 0);
if (IS_ERR(base))
goto ddr_err;
ctrl.memcs[i].ddr_phy_base = base;
ctrl.num_memc++;
}
/* MEMC ARB registers */
base = brcmstb_ioremap_match(arb_dt_ids, 0, NULL);
if (IS_ERR(base)) {
pr_err("error mapping MEMC ARB\n");
goto ddr_err;
}
ctrl.memcs[0].arb_base = base;
/* Timer registers */
base = brcmstb_ioremap_match(timers_ids, 0, NULL);
if (IS_ERR(base)) {
pr_err("error mapping timers\n");
goto tmr_err;
}
ctrl.timers_base = base;
/* S3 cold boot, a.k.a. S5 */
pm_power_off = brcmstb_pm_s5;
suspend_set_ops(&brcmstb_pm_ops);
return 0;
tmr_err:
iounmap(ctrl.memcs[0].arb_base);
ddr_err:
for (i = 0; i < ctrl.num_memc; i++)
iounmap(ctrl.memcs[i].ddr_phy_base);
iounmap(ctrl.aon_sram_base);
sram_err:
iounmap(ctrl.aon_ctrl_base);
aon_err:
return PTR_ERR(base);
}
arch_initcall(brcmstb_pm_init);
/*
* Definitions for Broadcom STB power management / Always ON (AON) block
*
* Copyright © 2016-2017 Broadcom
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __BRCMSTB_PM_H__
#define __BRCMSTB_PM_H__
#define AON_CTRL_RESET_CTRL 0x00
#define AON_CTRL_PM_CTRL 0x04
#define AON_CTRL_PM_STATUS 0x08
#define AON_CTRL_PM_CPU_WAIT_COUNT 0x10
#define AON_CTRL_PM_INITIATE 0x88
#define AON_CTRL_HOST_MISC_CMDS 0x8c
#define AON_CTRL_SYSTEM_DATA_RAM_OFS 0x200
/* MIPS PM constants */
/* MEMC0 offsets */
#define DDR40_PHY_CONTROL_REGS_0_PLL_STATUS 0x10
#define DDR40_PHY_CONTROL_REGS_0_STANDBY_CTRL 0xa4
/* TIMER offsets */
#define TIMER_TIMER1_CTRL 0x0c
#define TIMER_TIMER1_STAT 0x1c
/* TIMER defines */
#define RESET_TIMER 0x0
#define START_TIMER 0xbfffffff
#define TIMER_MASK 0x3fffffff
/* PM_CTRL bitfield (Method #0) */
#define PM_FAST_PWRDOWN (1 << 6)
#define PM_WARM_BOOT (1 << 5)
#define PM_DEEP_STANDBY (1 << 4)
#define PM_CPU_PWR (1 << 3)
#define PM_USE_CPU_RDY (1 << 2)
#define PM_PLL_PWRDOWN (1 << 1)
#define PM_PWR_DOWN (1 << 0)
/* PM_CTRL bitfield (Method #1) */
#define PM_DPHY_STANDBY_CLEAR (1 << 20)
#define PM_MIN_S3_WIDTH_TIMER_BYPASS (1 << 7)
#define PM_S2_COMMAND (PM_PLL_PWRDOWN | PM_USE_CPU_RDY | PM_PWR_DOWN)
/* Method 0 bitmasks */
#define PM_COLD_CONFIG (PM_PLL_PWRDOWN | PM_DEEP_STANDBY)
#define PM_WARM_CONFIG (PM_COLD_CONFIG | PM_USE_CPU_RDY | PM_WARM_BOOT)
/* Method 1 bitmask */
#define M1_PM_WARM_CONFIG (PM_DPHY_STANDBY_CLEAR | \
PM_MIN_S3_WIDTH_TIMER_BYPASS | \
PM_WARM_BOOT | PM_DEEP_STANDBY | \
PM_PLL_PWRDOWN | PM_PWR_DOWN)
#define M1_PM_COLD_CONFIG (PM_DPHY_STANDBY_CLEAR | \
PM_MIN_S3_WIDTH_TIMER_BYPASS | \
PM_DEEP_STANDBY | \
PM_PLL_PWRDOWN | PM_PWR_DOWN)
#ifndef __ASSEMBLY__
#ifndef CONFIG_MIPS
extern const unsigned long brcmstb_pm_do_s2_sz;
extern asmlinkage int brcmstb_pm_do_s2(void __iomem *aon_ctrl_base,
void __iomem *ddr_phy_pll_status);
#else
/* s2 asm */
extern asmlinkage int brcm_pm_do_s2(u32 *s2_params);
/* s3 asm */
extern asmlinkage int brcm_pm_do_s3(void __iomem *aon_ctrl_base,
int dcache_linesz);
extern int s3_reentry;
#endif /* CONFIG_MIPS */
#endif
#endif /* __BRCMSTB_PM_H__ */
/*
* Copyright © 2014-2017 Broadcom
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
#include "pm.h"
.text
.align 3
#define AON_CTRL_REG r10
#define DDR_PHY_STATUS_REG r11
/*
* r0: AON_CTRL base address
* r1: DDR PHY PLL status register address
*/
ENTRY(brcmstb_pm_do_s2)
stmfd sp!, {r4-r11, lr}
mov AON_CTRL_REG, r0
mov DDR_PHY_STATUS_REG, r1
/* Flush memory transactions */
dsb
/* Cache DDR_PHY_STATUS_REG translation */
ldr r0, [DDR_PHY_STATUS_REG]
/* power down request */
ldr r0, =PM_S2_COMMAND
ldr r1, =0
str r1, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
ldr r1, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
str r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
ldr r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
/* Wait for interrupt */
wfi
nop
/* Bring MEMC back up */
1: ldr r0, [DDR_PHY_STATUS_REG]
ands r0, #1
beq 1b
/* Power-up handshake */
ldr r0, =1
str r0, [AON_CTRL_REG, #AON_CTRL_HOST_MISC_CMDS]
ldr r0, [AON_CTRL_REG, #AON_CTRL_HOST_MISC_CMDS]
ldr r0, =0
str r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
ldr r0, [AON_CTRL_REG, #AON_CTRL_PM_CTRL]
/* Return to caller */
ldr r0, =0
ldmfd sp!, {r4-r11, pc}
ENDPROC(brcmstb_pm_do_s2)
/* Place literal pool here */
.ltorg
ENTRY(brcmstb_pm_do_s2_sz)
.word . - brcmstb_pm_do_s2
/*
* Copyright (C) 2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <asm/asm.h>
#include <asm/regdef.h>
#include <asm/mipsregs.h>
#include <asm/stackframe.h>
#include "pm.h"
.text
.set noreorder
.align 5
/*
* a0: u32 params array
*/
LEAF(brcm_pm_do_s2)
subu sp, 64
sw ra, 0(sp)
sw s0, 4(sp)
sw s1, 8(sp)
sw s2, 12(sp)
sw s3, 16(sp)
sw s4, 20(sp)
sw s5, 24(sp)
sw s6, 28(sp)
sw s7, 32(sp)
/*
* Dereference the params array
* s0: AON_CTRL base register
* s1: DDR_PHY base register
* s2: TIMERS base register
* s3: I-Cache line size
* s4: Restart vector address
* s5: Restart vector size
*/
move t0, a0
lw s0, 0(t0)
lw s1, 4(t0)
lw s2, 8(t0)
lw s3, 12(t0)
lw s4, 16(t0)
lw s5, 20(t0)
/* Lock this asm section into the I-cache */
addiu t1, s3, -1
not t1
la t0, brcm_pm_do_s2
and t0, t1
la t2, asm_end
and t2, t1
1: cache 0x1c, 0(t0)
bne t0, t2, 1b
addu t0, s3
/* Lock the interrupt vector into the I-cache */
move t0, zero
2: move t1, s4
cache 0x1c, 0(t1)
addu t1, s3
addu t0, s3
ble t0, s5, 2b
nop
sync
/* Power down request */
li t0, PM_S2_COMMAND
sw zero, AON_CTRL_PM_CTRL(s0)
lw zero, AON_CTRL_PM_CTRL(s0)
sw t0, AON_CTRL_PM_CTRL(s0)
lw t0, AON_CTRL_PM_CTRL(s0)
/* Enable CP0 interrupt 2 and wait for interrupt */
mfc0 t0, CP0_STATUS
/* Save cp0 sr for restoring later */
move s6, t0
li t1, ~(ST0_IM | ST0_IE)
and t0, t1
ori t0, STATUSF_IP2
mtc0 t0, CP0_STATUS
nop
nop
nop
ori t0, ST0_IE
mtc0 t0, CP0_STATUS
/* Wait for interrupt */
wait
nop
/* Wait for memc0 */
1: lw t0, DDR40_PHY_CONTROL_REGS_0_PLL_STATUS(s1)
andi t0, 1
beqz t0, 1b
nop
/* 1ms delay needed for stable recovery */
/* Use TIMER1 to count 1 ms */
li t0, RESET_TIMER
sw t0, TIMER_TIMER1_CTRL(s2)
lw t0, TIMER_TIMER1_CTRL(s2)
li t0, START_TIMER
sw t0, TIMER_TIMER1_CTRL(s2)
lw t0, TIMER_TIMER1_CTRL(s2)
/* Prepare delay */
li t0, TIMER_MASK
lw t1, TIMER_TIMER1_STAT(s2)
and t1, t0
/* 1 ms delay: 27000 ticks, assuming a 27 MHz timer clock */
addi t1, 27000
/* Wait for the timer value to exceed t1 */
1: lw t0, TIMER_TIMER1_STAT(s2)
sgtu t2, t1, t0
bnez t2, 1b
nop
/* Power back up */
li t1, 1
sw t1, AON_CTRL_HOST_MISC_CMDS(s0)
lw t1, AON_CTRL_HOST_MISC_CMDS(s0)
sw zero, AON_CTRL_PM_CTRL(s0)
lw zero, AON_CTRL_PM_CTRL(s0)
/* Unlock I-cache */
addiu t1, s3, -1
not t1
la t0, brcm_pm_do_s2
and t0, t1
la t2, asm_end
and t2, t1
1: cache 0x00, 0(t0)
bne t0, t2, 1b
addu t0, s3
/* Unlock interrupt vector */
move t0, zero
2: move t1, s4
cache 0x00, 0(t1)
addu t1, s3
addu t0, s3
ble t0, s5, 2b
nop
/* Restore cp0 sr */
sync
nop
mtc0 s6, CP0_STATUS
nop
/* Set return value to success */
li v0, 0
/* Return to caller */
lw s7, 32(sp)
lw s6, 28(sp)
lw s5, 24(sp)
lw s4, 20(sp)
lw s3, 16(sp)
lw s2, 12(sp)
lw s1, 8(sp)
lw s0, 4(sp)
lw ra, 0(sp)
addiu sp, 64
jr ra
nop
END(brcm_pm_do_s2)
.globl asm_end
asm_end:
nop
/*
* Copyright (C) 2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <asm/asm.h>
#include <asm/regdef.h>
#include <asm/mipsregs.h>
#include <asm/bmips.h>
#include "pm.h"
.text
.set noreorder
.align 5
.global s3_reentry
/*
* a0: AON_CTRL base register
* a1: D-Cache line size
*/
LEAF(brcm_pm_do_s3)
/* Get the address of s3_context */
la t0, gp_regs
sw ra, 0(t0)
sw s0, 4(t0)
sw s1, 8(t0)
sw s2, 12(t0)
sw s3, 16(t0)
sw s4, 20(t0)
sw s5, 24(t0)
sw s6, 28(t0)
sw s7, 32(t0)
sw gp, 36(t0)
sw sp, 40(t0)
sw fp, 44(t0)
/* Save CP0 Status */
mfc0 t1, CP0_STATUS
sw t1, 48(t0)
/* Write-back gp registers - cache will be gone */
addiu t1, a1, -1
not t1
and t0, t1
/* Flush at least 64 bytes */
addiu t2, t0, 64
and t2, t1
1: cache 0x17, 0(t0)
bne t0, t2, 1b
addu t0, a1
/* Drop to deep standby */
li t1, PM_WARM_CONFIG
sw zero, AON_CTRL_PM_CTRL(a0)
lw zero, AON_CTRL_PM_CTRL(a0)
sw t1, AON_CTRL_PM_CTRL(a0)
lw t1, AON_CTRL_PM_CTRL(a0)
li t1, (PM_WARM_CONFIG | PM_PWR_DOWN)
sw t1, AON_CTRL_PM_CTRL(a0)
lw t1, AON_CTRL_PM_CTRL(a0)
/* Enable CP0 interrupt 2 and wait for interrupt */
mfc0 t0, CP0_STATUS
li t1, ~(ST0_IM | ST0_IE)
and t0, t1
ori t0, STATUSF_IP2
mtc0 t0, CP0_STATUS
nop
nop
nop
ori t0, ST0_IE
mtc0 t0, CP0_STATUS
/* Wait for interrupt */
wait
nop
s3_reentry:
/* Clear call/return stack */
li t0, (0x06 << 16)
mtc0 t0, $22, 2
ssnop
ssnop
ssnop
/* Clear jump target buffer */
li t0, (0x04 << 16)
mtc0 t0, $22, 2
ssnop
ssnop
ssnop
sync
nop
/* Setup mmu defaults */
mtc0 zero, CP0_WIRED
mtc0 zero, CP0_ENTRYHI
li k0, PM_DEFAULT_MASK
mtc0 k0, CP0_PAGEMASK
li sp, BMIPS_WARM_RESTART_VEC
la k0, plat_wired_tlb_setup
jalr k0
nop
/* Restore general purpose registers */
la t0, gp_regs
lw fp, 44(t0)
lw sp, 40(t0)
lw gp, 36(t0)
lw s7, 32(t0)
lw s6, 28(t0)
lw s5, 24(t0)
lw s4, 20(t0)
lw s3, 16(t0)
lw s2, 12(t0)
lw s1, 8(t0)
lw s0, 4(t0)
lw ra, 0(t0)
/* Restore CP0 status */
lw t1, 48(t0)
mtc0 t1, CP0_STATUS
/* Return to caller */
li v0, 0
jr ra
nop
END(brcm_pm_do_s3)
......@@ -213,6 +213,7 @@ static const struct of_device_id fsl_guts_of_match[] = {
{ .compatible = "fsl,ls1021a-dcfg", },
{ .compatible = "fsl,ls1043a-dcfg", },
{ .compatible = "fsl,ls2080a-dcfg", },
{ .compatible = "fsl,ls1088a-dcfg", },
{}
};
MODULE_DEVICE_TABLE(of, fsl_guts_of_match);
......
menuconfig FSL_DPAA
bool "Freescale DPAA 1.x support"
depends on FSL_SOC_BOOKE
depends on (FSL_SOC_BOOKE || ARCH_LAYERSCAPE)
select GENERIC_ALLOCATOR
help
The Freescale Data Path Acceleration Architecture (DPAA) is a set of
......
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_FSL_DPAA) += bman_ccsr.o qman_ccsr.o \
bman_portal.o qman_portal.o \
bman.o qman.o
bman.o qman.o dpaa_sys.o
obj-$(CONFIG_FSL_BMAN_TEST) += bman-test.o
bman-test-y = bman_test.o
......
......@@ -35,6 +35,27 @@
/* Portal register assists */
#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
/* Cache-inhibited register offsets */
#define BM_REG_RCR_PI_CINH 0x3000
#define BM_REG_RCR_CI_CINH 0x3100
#define BM_REG_RCR_ITR 0x3200
#define BM_REG_CFG 0x3300
#define BM_REG_SCN(n) (0x3400 + ((n) << 6))
#define BM_REG_ISR 0x3e00
#define BM_REG_IER 0x3e40
#define BM_REG_ISDR 0x3e80
#define BM_REG_IIR 0x3ec0
/* Cache-enabled register offsets */
#define BM_CL_CR 0x0000
#define BM_CL_RR0 0x0100
#define BM_CL_RR1 0x0140
#define BM_CL_RCR 0x1000
#define BM_CL_RCR_PI_CENA 0x3000
#define BM_CL_RCR_CI_CENA 0x3100
#else
/* Cache-inhibited register offsets */
#define BM_REG_RCR_PI_CINH 0x0000
#define BM_REG_RCR_CI_CINH 0x0004
......@@ -53,6 +74,7 @@
#define BM_CL_RCR 0x1000
#define BM_CL_RCR_PI_CENA 0x3000
#define BM_CL_RCR_CI_CENA 0x3100
#endif
/*
* Portal modes.
......@@ -154,7 +176,8 @@ struct bm_mc {
};
struct bm_addr {
void __iomem *ce; /* cache-enabled */
void *ce; /* cache-enabled */
__be32 *ce_be; /* Same as above but for direct access */
void __iomem *ci; /* cache-inhibited */
};
......@@ -167,12 +190,12 @@ struct bm_portal {
/* Cache-inhibited register access. */
static inline u32 bm_in(struct bm_portal *p, u32 offset)
{
return be32_to_cpu(__raw_readl(p->addr.ci + offset));
return ioread32be(p->addr.ci + offset);
}
static inline void bm_out(struct bm_portal *p, u32 offset, u32 val)
{
__raw_writel(cpu_to_be32(val), p->addr.ci + offset);
iowrite32be(val, p->addr.ci + offset);
}
/* Cache Enabled Portal Access */
......@@ -188,7 +211,7 @@ static inline void bm_cl_touch_ro(struct bm_portal *p, u32 offset)
static inline u32 bm_ce_in(struct bm_portal *p, u32 offset)
{
return be32_to_cpu(__raw_readl(p->addr.ce + offset));
return be32_to_cpu(*(p->addr.ce_be + (offset/4)));
}
struct bman_portal {
......@@ -408,7 +431,7 @@ static int bm_mc_init(struct bm_portal *portal)
mc->cr = portal->addr.ce + BM_CL_CR;
mc->rr = portal->addr.ce + BM_CL_RR0;
mc->rridx = (__raw_readb(&mc->cr->_ncw_verb) & BM_MCC_VERB_VBIT) ?
mc->rridx = (mc->cr->_ncw_verb & BM_MCC_VERB_VBIT) ?
0 : 1;
mc->vbit = mc->rridx ? BM_MCC_VERB_VBIT : 0;
#ifdef CONFIG_FSL_DPAA_CHECKING
......@@ -466,7 +489,7 @@ static inline union bm_mc_result *bm_mc_result(struct bm_portal *portal)
* its command is submitted and completed. This includes the valid-bit,
* in case you were wondering...
*/
if (!__raw_readb(&rr->verb)) {
if (!rr->verb) {
dpaa_invalidate_touch_ro(rr);
return NULL;
}
......@@ -512,8 +535,9 @@ static int bman_create_portal(struct bman_portal *portal,
* config, everything that follows depends on it and "config" is more
* for (de)reference...
*/
p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
p->addr.ce = c->addr_virt_ce;
p->addr.ce_be = c->addr_virt_ce;
p->addr.ci = c->addr_virt_ci;
if (bm_rcr_init(p, bm_rcr_pvb, bm_rcr_cce)) {
dev_err(c->dev, "RCR initialisation failed\n");
goto fail_rcr;
......@@ -607,7 +631,7 @@ int bman_p_irqsource_add(struct bman_portal *p, u32 bits)
unsigned long irqflags;
local_irq_save(irqflags);
set_bits(bits & BM_PIRQ_VISIBLE, &p->irq_sources);
p->irq_sources |= bits & BM_PIRQ_VISIBLE;
bm_out(&p->p, BM_REG_IER, p->irq_sources);
local_irq_restore(irqflags);
return 0;
......
......@@ -201,6 +201,21 @@ static int fsl_bman_probe(struct platform_device *pdev)
return -ENODEV;
}
/*
* If the FBPR memory wasn't defined using the qbman compatible string,
* try the of_reserved_mem_device method
*/
if (!fbpr_a) {
ret = qbman_init_private_mem(dev, 0, &fbpr_a, &fbpr_sz);
if (ret) {
dev_err(dev, "qbman_init_private_mem() failed 0x%x\n",
ret);
return -ENODEV;
}
}
dev_dbg(dev, "Allocated FBPR 0x%llx 0x%zx\n", fbpr_a, fbpr_sz);
bm_set_memory(fbpr_a, fbpr_sz);
err_irq = platform_get_irq(pdev, 0);
......
......@@ -91,7 +91,6 @@ static int bman_portal_probe(struct platform_device *pdev)
struct device_node *node = dev->of_node;
struct bm_portal_config *pcfg;
struct resource *addr_phys[2];
void __iomem *va;
int irq, cpu;
pcfg = devm_kmalloc(dev, sizeof(*pcfg), GFP_KERNEL);
......@@ -123,23 +122,21 @@ static int bman_portal_probe(struct platform_device *pdev)
}
pcfg->irq = irq;
va = ioremap_prot(addr_phys[0]->start, resource_size(addr_phys[0]), 0);
if (!va) {
dev_err(dev, "ioremap::CE failed\n");
pcfg->addr_virt_ce = memremap(addr_phys[0]->start,
resource_size(addr_phys[0]),
QBMAN_MEMREMAP_ATTR);
if (!pcfg->addr_virt_ce) {
dev_err(dev, "memremap::CE failed\n");
goto err_ioremap1;
}
pcfg->addr_virt[DPAA_PORTAL_CE] = va;
va = ioremap_prot(addr_phys[1]->start, resource_size(addr_phys[1]),
_PAGE_GUARDED | _PAGE_NO_CACHE);
if (!va) {
pcfg->addr_virt_ci = ioremap(addr_phys[1]->start,
resource_size(addr_phys[1]));
if (!pcfg->addr_virt_ci) {
dev_err(dev, "ioremap::CI failed\n");
goto err_ioremap2;
}
pcfg->addr_virt[DPAA_PORTAL_CI] = va;
spin_lock(&bman_lock);
cpu = cpumask_next_zero(-1, &portal_cpus);
if (cpu >= nr_cpu_ids) {
......@@ -164,9 +161,9 @@ static int bman_portal_probe(struct platform_device *pdev)
return 0;
err_portal_init:
iounmap(pcfg->addr_virt[DPAA_PORTAL_CI]);
iounmap(pcfg->addr_virt_ci);
err_ioremap2:
iounmap(pcfg->addr_virt[DPAA_PORTAL_CE]);
memunmap(pcfg->addr_virt_ce);
err_ioremap1:
return -ENXIO;
}
......
......@@ -46,11 +46,9 @@ extern u16 bman_ip_rev; /* 0 if uninitialised, otherwise BMAN_REVx */
extern struct gen_pool *bm_bpalloc;
struct bm_portal_config {
/*
* Corenet portal addresses;
* [0]==cache-enabled, [1]==cache-inhibited.
*/
void __iomem *addr_virt[2];
/* Portal addresses */
void *addr_virt_ce;
void __iomem *addr_virt_ci;
/* Allow these to be joined in lists */
struct list_head list;
struct device *dev;
......
/* Copyright 2017 NXP Semiconductor, Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of NXP Semiconductor nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") as published by the Free Software
* Foundation, either version 2 of that License or (at your option) any
* later version.
*
* THIS SOFTWARE IS PROVIDED BY NXP Semiconductor ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NXP Semiconductor BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <linux/dma-mapping.h>
#include "dpaa_sys.h"
/*
* Initialize a device's private memory region
*/
int qbman_init_private_mem(struct device *dev, int idx, dma_addr_t *addr,
size_t *size)
{
int ret;
struct device_node *mem_node;
u64 size64;
ret = of_reserved_mem_device_init_by_idx(dev, dev->of_node, idx);
if (ret) {
dev_err(dev,
"of_reserved_mem_device_init_by_idx(%d) failed 0x%x\n",
idx, ret);
return -ENODEV;
}
mem_node = of_parse_phandle(dev->of_node, "memory-region", 0);
if (mem_node) {
ret = of_property_read_u64(mem_node, "size", &size64);
if (ret) {
dev_err(dev, "of_address_to_resource fails 0x%x\n",
ret);
return -ENODEV;
}
*size = size64;
} else {
dev_err(dev, "No memory-region found for index %d\n", idx);
return -ENODEV;
}
if (!dma_zalloc_coherent(dev, *size, addr, 0)) {
dev_err(dev, "DMA Alloc memory failed\n");
return -ENODEV;
}
/*
* Disassociate the reserved memory area from the device
* because a device can only have one DMA memory area. This
* should be fine since the memory is allocated and initialized
* and only ever accessed by the QBMan device from now on
*/
of_reserved_mem_device_release(dev);
return 0;
}
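/*
 * Typical call pattern (a hypothetical probe excerpt; "my_dev" and the
 * index value are placeholders mirroring the FBPR/FQD/PFDR call sites
 * in the CCSR probes):
 *
 *	dma_addr_t addr;
 *	size_t size;
 *	int err;
 *
 *	err = qbman_init_private_mem(my_dev, 0, &addr, &size);
 *	if (err)
 *		return err;
 *
 * On success, addr/size describe a zeroed, DMA-coherent private area
 * whose size was taken from the reserved-memory node.
 */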
......@@ -44,23 +44,21 @@
#include <linux/prefetch.h>
#include <linux/genalloc.h>
#include <asm/cacheflush.h>
#include <linux/io.h>
#include <linux/delay.h>
/* For 2-element tables related to cache-inhibited and cache-enabled mappings */
#define DPAA_PORTAL_CE 0
#define DPAA_PORTAL_CI 1
#if (L1_CACHE_BYTES != 32) && (L1_CACHE_BYTES != 64)
#error "Unsupported Cacheline Size"
#endif
static inline void dpaa_flush(void *p)
{
/*
* Only PPC currently needs to flush the cache; on ARM the mapping
* is non-cacheable
*/
#ifdef CONFIG_PPC
flush_dcache_range((unsigned long)p, (unsigned long)p+64);
#elif defined(CONFIG_ARM32)
__cpuc_flush_dcache_area(p, 64);
#elif defined(CONFIG_ARM64)
__flush_dcache_area(p, 64);
#endif
}
......@@ -102,4 +100,15 @@ static inline u8 dpaa_cyc_diff(u8 ringsize, u8 first, u8 last)
/* Offset applied to genalloc pools due to zero being an error return */
#define DPAA_GENALLOC_OFF 0x80000000
/* Initialize the device's private memory region */
int qbman_init_private_mem(struct device *dev, int idx, dma_addr_t *addr,
size_t *size);
/* memremap() attributes for different platforms */
#ifdef CONFIG_PPC
#define QBMAN_MEMREMAP_ATTR MEMREMAP_WB
#else
#define QBMAN_MEMREMAP_ATTR MEMREMAP_WC
#endif
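/*
 * The cache-enabled portal area is hardware-coherent on PPC and can be
 * mapped write-back; on ARM it is assumed non-coherent and is mapped
 * write-combine instead, which is why dpaa_flush() above only needs to
 * act on PPC.
 */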
#endif /* __DPAA_SYS_H */
......@@ -41,6 +41,43 @@
/* Portal register assists */
#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
/* Cache-inhibited register offsets */
#define QM_REG_EQCR_PI_CINH 0x3000
#define QM_REG_EQCR_CI_CINH 0x3040
#define QM_REG_EQCR_ITR 0x3080
#define QM_REG_DQRR_PI_CINH 0x3100
#define QM_REG_DQRR_CI_CINH 0x3140
#define QM_REG_DQRR_ITR 0x3180
#define QM_REG_DQRR_DCAP 0x31C0
#define QM_REG_DQRR_SDQCR 0x3200
#define QM_REG_DQRR_VDQCR 0x3240
#define QM_REG_DQRR_PDQCR 0x3280
#define QM_REG_MR_PI_CINH 0x3300
#define QM_REG_MR_CI_CINH 0x3340
#define QM_REG_MR_ITR 0x3380
#define QM_REG_CFG 0x3500
#define QM_REG_ISR 0x3600
#define QM_REG_IER 0x3640
#define QM_REG_ISDR 0x3680
#define QM_REG_IIR 0x36C0
#define QM_REG_ITPR 0x3740
/* Cache-enabled register offsets */
#define QM_CL_EQCR 0x0000
#define QM_CL_DQRR 0x1000
#define QM_CL_MR 0x2000
#define QM_CL_EQCR_PI_CENA 0x3000
#define QM_CL_EQCR_CI_CENA 0x3040
#define QM_CL_DQRR_PI_CENA 0x3100
#define QM_CL_DQRR_CI_CENA 0x3140
#define QM_CL_MR_PI_CENA 0x3300
#define QM_CL_MR_CI_CENA 0x3340
#define QM_CL_CR 0x3800
#define QM_CL_RR0 0x3900
#define QM_CL_RR1 0x3940
#else
/* Cache-inhibited register offsets */
#define QM_REG_EQCR_PI_CINH 0x0000
#define QM_REG_EQCR_CI_CINH 0x0004
......@@ -75,6 +112,7 @@
#define QM_CL_CR 0x3800
#define QM_CL_RR0 0x3900
#define QM_CL_RR1 0x3940
#endif
/*
* BTW, the drivers (and h/w programming model) already obtain the required
......@@ -300,7 +338,8 @@ struct qm_mc {
};
struct qm_addr {
void __iomem *ce; /* cache-enabled */
void *ce; /* cache-enabled */
__be32 *ce_be; /* same value as above but for direct access */
void __iomem *ci; /* cache-inhibited */
};
......@@ -321,12 +360,12 @@ struct qm_portal {
/* Cache-inhibited register access. */
static inline u32 qm_in(struct qm_portal *p, u32 offset)
{
return be32_to_cpu(__raw_readl(p->addr.ci + offset));
return ioread32be(p->addr.ci + offset);
}
static inline void qm_out(struct qm_portal *p, u32 offset, u32 val)
{
__raw_writel(cpu_to_be32(val), p->addr.ci + offset);
iowrite32be(val, p->addr.ci + offset);
}
/* Cache Enabled Portal Access */
......@@ -342,7 +381,7 @@ static inline void qm_cl_touch_ro(struct qm_portal *p, u32 offset)
static inline u32 qm_ce_in(struct qm_portal *p, u32 offset)
{
return be32_to_cpu(__raw_readl(p->addr.ce + offset));
return be32_to_cpu(*(p->addr.ce_be + (offset/4)));
}
/* --- EQCR API --- */
......@@ -646,11 +685,7 @@ static inline void qm_dqrr_pvb_update(struct qm_portal *portal)
*/
dpaa_invalidate_touch_ro(res);
#endif
/*
* when accessing 'verb', use __raw_readb() to ensure that compiler
* inlining doesn't try to optimise out "excess reads".
*/
if ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
if ((res->verb & QM_DQRR_VERB_VBIT) == dqrr->vbit) {
dqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);
if (!dqrr->pi)
dqrr->vbit ^= QM_DQRR_VERB_VBIT;
......@@ -777,11 +812,8 @@ static inline void qm_mr_pvb_update(struct qm_portal *portal)
union qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
DPAA_ASSERT(mr->pmode == qm_mr_pvb);
/*
* when accessing 'verb', use __raw_readb() to ensure that compiler
* inlining doesn't try to optimise out "excess reads".
*/
if ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {
if ((res->verb & QM_MR_VERB_VBIT) == mr->vbit) {
mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
if (!mr->pi)
mr->vbit ^= QM_MR_VERB_VBIT;
......@@ -822,7 +854,7 @@ static inline int qm_mc_init(struct qm_portal *portal)
mc->cr = portal->addr.ce + QM_CL_CR;
mc->rr = portal->addr.ce + QM_CL_RR0;
mc->rridx = (__raw_readb(&mc->cr->_ncw_verb) & QM_MCC_VERB_VBIT)
mc->rridx = (mc->cr->_ncw_verb & QM_MCC_VERB_VBIT)
? 0 : 1;
mc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;
#ifdef CONFIG_FSL_DPAA_CHECKING
......@@ -880,7 +912,7 @@ static inline union qm_mc_result *qm_mc_result(struct qm_portal *portal)
* its command is submitted and completed. This includes the valid-bit,
* in case you were wondering...
*/
if (!__raw_readb(&rr->verb)) {
if (!rr->verb) {
dpaa_invalidate_touch_ro(rr);
return NULL;
}
......@@ -909,12 +941,12 @@ static inline int qm_mc_result_timeout(struct qm_portal *portal,
static inline void fq_set(struct qman_fq *fq, u32 mask)
{
set_bits(mask, &fq->flags);
fq->flags |= mask;
}
static inline void fq_clear(struct qman_fq *fq, u32 mask)
{
clear_bits(mask, &fq->flags);
fq->flags &= ~mask;
}
static inline int fq_isset(struct qman_fq *fq, u32 mask)
......@@ -1084,11 +1116,7 @@ static int drain_mr_fqrni(struct qm_portal *p)
* entries well before the ring has been fully consumed, so
* we're being *really* paranoid here.
*/
u64 now, then = jiffies;
do {
now = jiffies;
} while ((then + 10000) > now);
msleep(1);
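/*
 * Sleeping yields the CPU while the hardware finishes producing FQRNI
 * entries; a 1 ms nap replaces the old jiffies busy-wait without
 * changing the drain semantics.
 */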
msg = qm_mr_current(p);
if (!msg)
return 0;
......@@ -1124,8 +1152,9 @@ static int qman_create_portal(struct qman_portal *portal,
* config, everything that follows depends on it and "config" is more
* for (de)reference
*/
p->addr.ce = c->addr_virt[DPAA_PORTAL_CE];
p->addr.ci = c->addr_virt[DPAA_PORTAL_CI];
p->addr.ce = c->addr_virt_ce;
p->addr.ce_be = c->addr_virt_ce;
p->addr.ci = c->addr_virt_ci;
/*
* If CI-stashing is used, the current defaults use a threshold of 3,
* and stash with high-than-DQRR priority.
......@@ -1566,7 +1595,7 @@ void qman_p_irqsource_add(struct qman_portal *p, u32 bits)
unsigned long irqflags;
local_irq_save(irqflags);
set_bits(bits & QM_PIRQ_VISIBLE, &p->irq_sources);
p->irq_sources |= bits & QM_PIRQ_VISIBLE;
qm_out(&p->p, QM_REG_IER, p->irq_sources);
local_irq_restore(irqflags);
}
......@@ -1589,7 +1618,7 @@ void qman_p_irqsource_remove(struct qman_portal *p, u32 bits)
*/
local_irq_save(irqflags);
bits &= QM_PIRQ_VISIBLE;
clear_bits(bits, &p->irq_sources);
p->irq_sources &= ~bits;
qm_out(&p->p, QM_REG_IER, p->irq_sources);
ier = qm_in(&p->p, QM_REG_IER);
/*
......
......@@ -401,21 +401,42 @@ static int qm_init_pfdr(struct device *dev, u32 pfdr_start, u32 num)
}
/*
* Ideally we would use the DMA API to turn rmem->base into a DMA address
* (especially if iommu translations ever get involved). Unfortunately, the
* DMA API currently does not allow mapping anything that is not backed with
* a struct page.
* QMan needs two global memory areas initialized at boot time:
* 1) FQD: Frame Queue Descriptors used to manage frame queues
* 2) PFDR: Packed Frame Descriptor Records used to store frames
* Both areas are reserved using the device tree reserved memory framework
* and the addresses and sizes are initialized when the QMan device is probed.
*/
static dma_addr_t fqd_a, pfdr_a;
static size_t fqd_sz, pfdr_sz;
#ifdef CONFIG_PPC
/*
* Support for PPC Device Tree backward compatibility when the compatible
* string is set to "fsl,qman-fqd" or "fsl,qman-pfdr"
*/
static int zero_priv_mem(phys_addr_t addr, size_t sz)
{
/* map as cacheable, non-guarded */
void __iomem *tmpp = ioremap_prot(addr, sz, 0);
if (!tmpp)
return -ENOMEM;
memset_io(tmpp, 0, sz);
flush_dcache_range((unsigned long)tmpp,
(unsigned long)tmpp + sz);
iounmap(tmpp);
return 0;
}
static int qman_fqd(struct reserved_mem *rmem)
{
fqd_a = rmem->base;
fqd_sz = rmem->size;
WARN_ON(!(fqd_a && fqd_sz));
return 0;
}
RESERVEDMEM_OF_DECLARE(qman_fqd, "fsl,qman-fqd", qman_fqd);
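/*
 * RESERVEDMEM_OF_DECLARE() registers an early hook for the compatible
 * string, so fqd_a/fqd_sz are already populated from the
 * /reserved-memory node by the time fsl_qman_probe() runs.
 */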
......@@ -431,32 +452,13 @@ static int qman_pfdr(struct reserved_mem *rmem)
}
RESERVEDMEM_OF_DECLARE(qman_pfdr, "fsl,qman-pfdr", qman_pfdr);
#endif
static unsigned int qm_get_fqid_maxcnt(void)
{
return fqd_sz / 64;
}
/*
* Flush this memory range from data cache so that QMAN originated
* transactions for this memory region could be marked non-coherent.
*/
static int zero_priv_mem(struct device *dev, struct device_node *node,
phys_addr_t addr, size_t sz)
{
/* map as cacheable, non-guarded */
void __iomem *tmpp = ioremap_prot(addr, sz, 0);
if (!tmpp)
return -ENOMEM;
memset_io(tmpp, 0, sz);
flush_dcache_range((unsigned long)tmpp,
(unsigned long)tmpp + sz);
iounmap(tmpp);
return 0;
}
static void log_edata_bits(struct device *dev, u32 bit_count)
{
u32 i, j, mask = 0xffffffff;
......@@ -717,6 +719,8 @@ static int fsl_qman_probe(struct platform_device *pdev)
qman_ip_rev = QMAN_REV30;
else if (major == 3 && minor == 1)
qman_ip_rev = QMAN_REV31;
else if (major == 3 && minor == 2)
qman_ip_rev = QMAN_REV32;
else {
dev_err(dev, "Unknown QMan version\n");
return -ENODEV;
......@@ -727,10 +731,41 @@ static int fsl_qman_probe(struct platform_device *pdev)
qm_channel_caam = QMAN_CHANNEL_CAAM_REV3;
}
ret = zero_priv_mem(dev, node, fqd_a, fqd_sz);
WARN_ON(ret);
if (ret)
return -ENODEV;
if (fqd_a) {
#ifdef CONFIG_PPC
/*
* For PPC backward DT compatibility
* FQD memory MUST be zeroed by software
*/
zero_priv_mem(fqd_a, fqd_sz);
#else
WARN(1, "Unexpected architecture using non shared-dma-mem reservations");
#endif
} else {
/*
* The memory regions are assumed to be ordered FQD first, then PFDR;
* to ensure allocations come from the correct regions, the driver
* initializes and then allocates each piece in order
*/
ret = qbman_init_private_mem(dev, 0, &fqd_a, &fqd_sz);
if (ret) {
dev_err(dev, "qbman_init_private_mem() for FQD failed 0x%x\n",
ret);
return -ENODEV;
}
}
dev_dbg(dev, "Allocated FQD 0x%llx 0x%zx\n", fqd_a, fqd_sz);
if (!pfdr_a) {
/* Setup PFDR memory */
ret = qbman_init_private_mem(dev, 1, &pfdr_a, &pfdr_sz);
if (ret) {
dev_err(dev, "qbman_init_private_mem() for PFDR failed 0x%x\n",
ret);
return -ENODEV;
}
}
dev_dbg(dev, "Allocated PFDR 0x%llx 0x%zx\n", pfdr_a, pfdr_sz);
ret = qman_init_ccsr(dev);
if (ret) {
......
......@@ -224,7 +224,6 @@ static int qman_portal_probe(struct platform_device *pdev)
struct device_node *node = dev->of_node;
struct qm_portal_config *pcfg;
struct resource *addr_phys[2];
void __iomem *va;
int irq, cpu, err;
u32 val;
......@@ -262,23 +261,21 @@ static int qman_portal_probe(struct platform_device *pdev)
}
pcfg->irq = irq;
va = ioremap_prot(addr_phys[0]->start, resource_size(addr_phys[0]), 0);
if (!va) {
dev_err(dev, "ioremap::CE failed\n");
pcfg->addr_virt_ce = memremap(addr_phys[0]->start,
resource_size(addr_phys[0]),
QBMAN_MEMREMAP_ATTR);
if (!pcfg->addr_virt_ce) {
dev_err(dev, "memremap::CE failed\n");
goto err_ioremap1;
}
pcfg->addr_virt[DPAA_PORTAL_CE] = va;
va = ioremap_prot(addr_phys[1]->start, resource_size(addr_phys[1]),
_PAGE_GUARDED | _PAGE_NO_CACHE);
if (!va) {
pcfg->addr_virt_ci = ioremap(addr_phys[1]->start,
resource_size(addr_phys[1]));
if (!pcfg->addr_virt_ci) {
dev_err(dev, "ioremap::CI failed\n");
goto err_ioremap2;
}
pcfg->addr_virt[DPAA_PORTAL_CI] = va;
pcfg->pools = qm_get_pools_sdqcr();
spin_lock(&qman_lock);
......@@ -310,9 +307,9 @@ static int qman_portal_probe(struct platform_device *pdev)
return 0;
err_portal_init:
iounmap(pcfg->addr_virt[DPAA_PORTAL_CI]);
iounmap(pcfg->addr_virt_ci);
err_ioremap2:
iounmap(pcfg->addr_virt[DPAA_PORTAL_CE]);
memunmap(pcfg->addr_virt_ce);
err_ioremap1:
return -ENXIO;
}
......
......@@ -28,8 +28,6 @@
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "dpaa_sys.h"
#include <soc/fsl/qman.h>
......@@ -155,11 +153,9 @@ static inline void qman_cgrs_xor(struct qman_cgrs *dest,
void qman_init_cgr_all(void);
struct qm_portal_config {
/*
* Corenet portal addresses;
* [0]==cache-enabled, [1]==cache-inhibited.
*/
void __iomem *addr_virt[2];
/* Portal addresses */
void *addr_virt_ce;
void __iomem *addr_virt_ci;
struct device *dev;
struct iommu_domain *iommu_domain;
/* Allow these to be joined in lists */
......@@ -187,6 +183,7 @@ struct qm_portal_config {
#define QMAN_REV20 0x0200
#define QMAN_REV30 0x0300
#define QMAN_REV31 0x0301
#define QMAN_REV32 0x0302
extern u16 qman_ip_rev; /* 0 if uninitialised, otherwise QMAN_REVx */
#define QM_FQID_RANGE_START 1 /* FQID 0 reserved for internal use */
......
......@@ -30,7 +30,5 @@
#include "qman_priv.h"
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
int qman_test_stash(void);
int qman_test_api(void);
#
# MediaTek SoC drivers
#
menu "MediaTek SoC drivers"
depends on ARCH_MEDIATEK || COMPILE_TEST
config MTK_INFRACFG
bool "MediaTek INFRACFG Support"
depends on ARCH_MEDIATEK || COMPILE_TEST
select REGMAP
help
Say yes here to add support for the MediaTek INFRACFG controller. The
......@@ -12,7 +14,6 @@ config MTK_INFRACFG
config MTK_PMIC_WRAP
tristate "MediaTek PMIC Wrapper Support"
depends on ARCH_MEDIATEK
depends on RESET_CONTROLLER
select REGMAP
help
......@@ -22,7 +23,6 @@ config MTK_PMIC_WRAP
config MTK_SCPSYS
bool "MediaTek SCPSYS Support"
depends on ARCH_MEDIATEK || COMPILE_TEST
default ARCH_MEDIATEK
select REGMAP
select MTK_INFRACFG
......@@ -30,3 +30,5 @@ config MTK_SCPSYS
help
Say yes here to add support for the MediaTek SCPSYS power domain
driver.
endmenu
......@@ -70,6 +70,12 @@
PWRAP_WDT_SRC_EN_HARB_STAUPD_DLE | \
PWRAP_WDT_SRC_EN_HARB_STAUPD_ALE)
/* Group of bits used to indicate slave capabilities */
#define PWRAP_SLV_CAP_SPI BIT(0)
#define PWRAP_SLV_CAP_DUALIO BIT(1)
#define PWRAP_SLV_CAP_SECURITY BIT(2)
#define HAS_CAP(_c, _x) (((_c) & (_x)) == (_x))
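/*
 * For example (illustrative values only):
 *
 *	u32 caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO;
 *
 *	HAS_CAP(caps, PWRAP_SLV_CAP_DUALIO)	evaluates to true
 *	HAS_CAP(caps, PWRAP_SLV_CAP_SECURITY)	evaluates to false
 */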
/* defines for slave device wrapper registers */
enum dew_regs {
PWRAP_DEW_BASE,
......@@ -208,6 +214,36 @@ enum pwrap_regs {
PWRAP_ADC_RDATA_ADDR1,
PWRAP_ADC_RDATA_ADDR2,
/* MT7622 only regs */
PWRAP_EINT_STA0_ADR,
PWRAP_EINT_STA1_ADR,
PWRAP_STA,
PWRAP_CLR,
PWRAP_DVFS_ADR8,
PWRAP_DVFS_WDATA8,
PWRAP_DVFS_ADR9,
PWRAP_DVFS_WDATA9,
PWRAP_DVFS_ADR10,
PWRAP_DVFS_WDATA10,
PWRAP_DVFS_ADR11,
PWRAP_DVFS_WDATA11,
PWRAP_DVFS_ADR12,
PWRAP_DVFS_WDATA12,
PWRAP_DVFS_ADR13,
PWRAP_DVFS_WDATA13,
PWRAP_DVFS_ADR14,
PWRAP_DVFS_WDATA14,
PWRAP_DVFS_ADR15,
PWRAP_DVFS_WDATA15,
PWRAP_EXT_CK,
PWRAP_ADC_RDATA_ADDR,
PWRAP_GPS_STA,
PWRAP_SW_RST,
PWRAP_DVFS_STEP_CTRL0,
PWRAP_DVFS_STEP_CTRL1,
PWRAP_DVFS_STEP_CTRL2,
PWRAP_SPI2_CTRL,
/* MT8135 only regs */
PWRAP_CSHEXT,
PWRAP_EVENT_IN_EN,
......@@ -330,6 +366,118 @@ static int mt2701_regs[] = {
[PWRAP_ADC_RDATA_ADDR2] = 0x154,
};
static int mt7622_regs[] = {
[PWRAP_MUX_SEL] = 0x0,
[PWRAP_WRAP_EN] = 0x4,
[PWRAP_DIO_EN] = 0x8,
[PWRAP_SIDLY] = 0xC,
[PWRAP_RDDMY] = 0x10,
[PWRAP_SI_CK_CON] = 0x14,
[PWRAP_CSHEXT_WRITE] = 0x18,
[PWRAP_CSHEXT_READ] = 0x1C,
[PWRAP_CSLEXT_START] = 0x20,
[PWRAP_CSLEXT_END] = 0x24,
[PWRAP_STAUPD_PRD] = 0x28,
[PWRAP_STAUPD_GRPEN] = 0x2C,
[PWRAP_EINT_STA0_ADR] = 0x30,
[PWRAP_EINT_STA1_ADR] = 0x34,
[PWRAP_STA] = 0x38,
[PWRAP_CLR] = 0x3C,
[PWRAP_STAUPD_MAN_TRIG] = 0x40,
[PWRAP_STAUPD_STA] = 0x44,
[PWRAP_WRAP_STA] = 0x48,
[PWRAP_HARB_INIT] = 0x4C,
[PWRAP_HARB_HPRIO] = 0x50,
[PWRAP_HIPRIO_ARB_EN] = 0x54,
[PWRAP_HARB_STA0] = 0x58,
[PWRAP_HARB_STA1] = 0x5C,
[PWRAP_MAN_EN] = 0x60,
[PWRAP_MAN_CMD] = 0x64,
[PWRAP_MAN_RDATA] = 0x68,
[PWRAP_MAN_VLDCLR] = 0x6C,
[PWRAP_WACS0_EN] = 0x70,
[PWRAP_INIT_DONE0] = 0x74,
[PWRAP_WACS0_CMD] = 0x78,
[PWRAP_WACS0_RDATA] = 0x7C,
[PWRAP_WACS0_VLDCLR] = 0x80,
[PWRAP_WACS1_EN] = 0x84,
[PWRAP_INIT_DONE1] = 0x88,
[PWRAP_WACS1_CMD] = 0x8C,
[PWRAP_WACS1_RDATA] = 0x90,
[PWRAP_WACS1_VLDCLR] = 0x94,
[PWRAP_WACS2_EN] = 0x98,
[PWRAP_INIT_DONE2] = 0x9C,
[PWRAP_WACS2_CMD] = 0xA0,
[PWRAP_WACS2_RDATA] = 0xA4,
[PWRAP_WACS2_VLDCLR] = 0xA8,
[PWRAP_INT_EN] = 0xAC,
[PWRAP_INT_FLG_RAW] = 0xB0,
[PWRAP_INT_FLG] = 0xB4,
[PWRAP_INT_CLR] = 0xB8,
[PWRAP_SIG_ADR] = 0xBC,
[PWRAP_SIG_MODE] = 0xC0,
[PWRAP_SIG_VALUE] = 0xC4,
[PWRAP_SIG_ERRVAL] = 0xC8,
[PWRAP_CRC_EN] = 0xCC,
[PWRAP_TIMER_EN] = 0xD0,
[PWRAP_TIMER_STA] = 0xD4,
[PWRAP_WDT_UNIT] = 0xD8,
[PWRAP_WDT_SRC_EN] = 0xDC,
[PWRAP_WDT_FLG] = 0xE0,
[PWRAP_DEBUG_INT_SEL] = 0xE4,
[PWRAP_DVFS_ADR0] = 0xE8,
[PWRAP_DVFS_WDATA0] = 0xEC,
[PWRAP_DVFS_ADR1] = 0xF0,
[PWRAP_DVFS_WDATA1] = 0xF4,
[PWRAP_DVFS_ADR2] = 0xF8,
[PWRAP_DVFS_WDATA2] = 0xFC,
[PWRAP_DVFS_ADR3] = 0x100,
[PWRAP_DVFS_WDATA3] = 0x104,
[PWRAP_DVFS_ADR4] = 0x108,
[PWRAP_DVFS_WDATA4] = 0x10C,
[PWRAP_DVFS_ADR5] = 0x110,
[PWRAP_DVFS_WDATA5] = 0x114,
[PWRAP_DVFS_ADR6] = 0x118,
[PWRAP_DVFS_WDATA6] = 0x11C,
[PWRAP_DVFS_ADR7] = 0x120,
[PWRAP_DVFS_WDATA7] = 0x124,
[PWRAP_DVFS_ADR8] = 0x128,
[PWRAP_DVFS_WDATA8] = 0x12C,
[PWRAP_DVFS_ADR9] = 0x130,
[PWRAP_DVFS_WDATA9] = 0x134,
[PWRAP_DVFS_ADR10] = 0x138,
[PWRAP_DVFS_WDATA10] = 0x13C,
[PWRAP_DVFS_ADR11] = 0x140,
[PWRAP_DVFS_WDATA11] = 0x144,
[PWRAP_DVFS_ADR12] = 0x148,
[PWRAP_DVFS_WDATA12] = 0x14C,
[PWRAP_DVFS_ADR13] = 0x150,
[PWRAP_DVFS_WDATA13] = 0x154,
[PWRAP_DVFS_ADR14] = 0x158,
[PWRAP_DVFS_WDATA14] = 0x15C,
[PWRAP_DVFS_ADR15] = 0x160,
[PWRAP_DVFS_WDATA15] = 0x164,
[PWRAP_SPMINF_STA] = 0x168,
[PWRAP_CIPHER_KEY_SEL] = 0x16C,
[PWRAP_CIPHER_IV_SEL] = 0x170,
[PWRAP_CIPHER_EN] = 0x174,
[PWRAP_CIPHER_RDY] = 0x178,
[PWRAP_CIPHER_MODE] = 0x17C,
[PWRAP_CIPHER_SWRST] = 0x180,
[PWRAP_DCM_EN] = 0x184,
[PWRAP_DCM_DBC_PRD] = 0x188,
[PWRAP_EXT_CK] = 0x18C,
[PWRAP_ADC_CMD_ADDR] = 0x190,
[PWRAP_PWRAP_ADC_CMD] = 0x194,
[PWRAP_ADC_RDATA_ADDR] = 0x198,
[PWRAP_GPS_STA] = 0x19C,
[PWRAP_SW_RST] = 0x1A0,
[PWRAP_DVFS_STEP_CTRL0] = 0x238,
[PWRAP_DVFS_STEP_CTRL1] = 0x23C,
[PWRAP_DVFS_STEP_CTRL2] = 0x240,
[PWRAP_SPI2_CTRL] = 0x244,
};
static int mt8173_regs[] = {
[PWRAP_MUX_SEL] = 0x0,
[PWRAP_WRAP_EN] = 0x4,
......@@ -487,18 +635,31 @@ static int mt8135_regs[] = {
enum pmic_type {
PMIC_MT6323,
PMIC_MT6380,
PMIC_MT6397,
};
enum pwrap_type {
PWRAP_MT2701,
PWRAP_MT7622,
PWRAP_MT8135,
PWRAP_MT8173,
};
struct pmic_wrapper;
struct pwrap_slv_type {
const u32 *dew_regs;
enum pmic_type type;
const struct regmap_config *regmap;
/* Flags indicating the capability for the target slave */
u32 caps;
/*
* pwrap operations are highly associated with the PMIC types, so
* these function pointers let the driver select the implementation
* matching the type detected from the device tree.
*/
int (*pwrap_read)(struct pmic_wrapper *wrp, u32 adr, u32 *rdata);
int (*pwrap_write)(struct pmic_wrapper *wrp, u32 adr, u32 wdata);
};
struct pmic_wrapper {
......@@ -522,7 +683,7 @@ struct pmic_wrapper_type {
u32 int_en_all;
u32 spi_w;
u32 wdt_src;
int has_bridge:1;
unsigned int has_bridge:1;
int (*init_reg_clock)(struct pmic_wrapper *wrp);
int (*init_soc_specific)(struct pmic_wrapper *wrp);
};
......@@ -593,7 +754,7 @@ static int pwrap_wait_for_state(struct pmic_wrapper *wrp,
} while (1);
}
static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
static int pwrap_read16(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
{
int ret;
......@@ -603,13 +764,53 @@ static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
return ret;
}
pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata,
PWRAP_WACS2_CMD);
pwrap_writel(wrp, (adr >> 1) << 16, PWRAP_WACS2_CMD);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
if (ret)
return ret;
*rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA));
pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
return 0;
}
static int pwrap_read32(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
{
int ret, msb;
*rdata = 0;
for (msb = 0; msb < 2; msb++) {
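/*
 * msb selects which 16-bit half of the 32-bit slave register is
 * transferred; each half is shifted into place and accumulated
 * into *rdata below.
 */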
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
if (ret) {
pwrap_leave_fsm_vldclr(wrp);
return ret;
}
pwrap_writel(wrp, ((msb << 30) | (adr << 16)),
PWRAP_WACS2_CMD);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
if (ret)
return ret;
*rdata += (PWRAP_GET_WACS_RDATA(pwrap_readl(wrp,
PWRAP_WACS2_RDATA)) << (16 * msb));
pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
}
return 0;
}
static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
{
return wrp->slave->pwrap_read(wrp, adr, rdata);
}
static int pwrap_write16(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
{
int ret;
......@@ -619,19 +820,46 @@ static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata)
return ret;
}
pwrap_writel(wrp, (adr >> 1) << 16, PWRAP_WACS2_CMD);
pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata,
PWRAP_WACS2_CMD);
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr);
if (ret)
return ret;
return 0;
}
*rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA));
static int pwrap_write32(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
{
int ret, msb, rdata;
pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR);
for (msb = 0; msb < 2; msb++) {
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle);
if (ret) {
pwrap_leave_fsm_vldclr(wrp);
return ret;
}
pwrap_writel(wrp, (1 << 31) | (msb << 30) | (adr << 16) |
((wdata >> (msb * 16)) & 0xffff),
PWRAP_WACS2_CMD);
/*
* The hardware requires a pwrap_read between the two successive
* 16-bit pwrap_writel operations that compose one 32-bit bus write;
* without it, the lower 16-bit write fails.
*/
if (!msb)
pwrap_read(wrp, adr, &rdata);
}
return 0;
}
static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata)
{
return wrp->slave->pwrap_write(wrp, adr, wdata);
}
static int pwrap_regmap_read(void *context, u32 adr, u32 *rdata)
{
return pwrap_read(context, adr, rdata);
......@@ -711,23 +939,75 @@ static int pwrap_init_sidly(struct pmic_wrapper *wrp)
return 0;
}
static int pwrap_mt8135_init_reg_clock(struct pmic_wrapper *wrp)
static int pwrap_init_dual_io(struct pmic_wrapper *wrp)
{
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT);
pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END);
int ret;
u32 rdata;
/* Enable dual IO mode */
pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1);
/* Check IDLE & INIT_DONE in advance */
ret = pwrap_wait_for_state(wrp,
pwrap_is_fsm_idle_and_sync_idle);
if (ret) {
dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret);
return ret;
}
pwrap_writel(wrp, 1, PWRAP_DIO_EN);
/* Read Test */
pwrap_read(wrp,
wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], &rdata);
if (rdata != PWRAP_DEW_READ_TEST_VAL) {
dev_err(wrp->dev,
"Read failed on DIO mode: 0x%04x!=0x%04x\n",
PWRAP_DEW_READ_TEST_VAL, rdata);
return -EFAULT;
}
return 0;
}
static int pwrap_mt8173_init_reg_clock(struct pmic_wrapper *wrp)
/*
* pwrap_init_chip_select_ext is used to configure CS extension time for each
* phase during data transactions on the pwrap bus.
*/
static void pwrap_init_chip_select_ext(struct pmic_wrapper *wrp, u8 hext_write,
u8 hext_read, u8 lext_start,
u8 lext_end)
{
pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END);
/*
* After a write or read transaction finishes, extend the CS high time
* to at least xT of the BUS CLK, as hext_write and hext_read specify
* respectively.
*/
pwrap_writel(wrp, hext_write, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, hext_read, PWRAP_CSHEXT_READ);
/*
* Extend the CS low time after the CSL command and before the CSH
* command to at least xT of the BUS CLK, as lext_start and lext_end
* specify respectively.
*/
pwrap_writel(wrp, lext_start, PWRAP_CSLEXT_START);
pwrap_writel(wrp, lext_end, PWRAP_CSLEXT_END);
}
static int pwrap_common_init_reg_clock(struct pmic_wrapper *wrp)
{
switch (wrp->master->type) {
case PWRAP_MT8173:
pwrap_init_chip_select_ext(wrp, 0, 4, 2, 2);
break;
case PWRAP_MT8135:
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT);
pwrap_init_chip_select_ext(wrp, 0, 4, 0, 0);
break;
default:
break;
}
return 0;
}
......@@ -737,20 +1017,16 @@ static int pwrap_mt2701_init_reg_clock(struct pmic_wrapper *wrp)
switch (wrp->slave->type) {
case PMIC_MT6397:
pwrap_writel(wrp, 0xc, PWRAP_RDDMY);
pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END);
pwrap_init_chip_select_ext(wrp, 4, 0, 2, 2);
break;
case PMIC_MT6323:
pwrap_writel(wrp, 0x8, PWRAP_RDDMY);
pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_RDDMY_NO],
0x8);
pwrap_writel(wrp, 0x5, PWRAP_CSHEXT_WRITE);
pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_READ);
pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_START);
pwrap_writel(wrp, 0x2, PWRAP_CSLEXT_END);
pwrap_init_chip_select_ext(wrp, 5, 0, 2, 2);
break;
default:
break;
}
......@@ -794,6 +1070,9 @@ static int pwrap_init_cipher(struct pmic_wrapper *wrp)
case PWRAP_MT8173:
pwrap_writel(wrp, 1, PWRAP_CIPHER_EN);
break;
case PWRAP_MT7622:
pwrap_writel(wrp, 0, PWRAP_CIPHER_EN);
break;
}
/* Config cipher mode @PMIC */
......@@ -815,6 +1094,8 @@ static int pwrap_init_cipher(struct pmic_wrapper *wrp)
pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CIPHER_EN],
0x1);
break;
default:
break;
}
/* wait for cipher data ready@AP */
......@@ -827,7 +1108,8 @@ static int pwrap_init_cipher(struct pmic_wrapper *wrp)
/* wait for cipher data ready@PMIC */
ret = pwrap_wait_for_state(wrp, pwrap_is_pmic_cipher_ready);
if (ret) {
dev_err(wrp->dev, "timeout waiting for cipher data ready@PMIC\n");
dev_err(wrp->dev,
"timeout waiting for cipher data ready@PMIC\n");
return ret;
}
......@@ -854,6 +1136,30 @@ static int pwrap_init_cipher(struct pmic_wrapper *wrp)
return 0;
}
static int pwrap_init_security(struct pmic_wrapper *wrp)
{
int ret;
/* Enable encryption */
ret = pwrap_init_cipher(wrp);
if (ret)
return ret;
/* Signature checking - using CRC */
if (pwrap_write(wrp,
wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1))
return -EFAULT;
pwrap_writel(wrp, 0x1, PWRAP_CRC_EN);
pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE);
pwrap_writel(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL],
PWRAP_SIG_ADR);
pwrap_writel(wrp,
wrp->master->arb_en_all, PWRAP_HIPRIO_ARB_EN);
return 0;
}
static int pwrap_mt8135_init_soc_specific(struct pmic_wrapper *wrp)
{
/* enable pwrap events and pwrap bridge in AP side */
......@@ -911,10 +1217,18 @@ static int pwrap_mt2701_init_soc_specific(struct pmic_wrapper *wrp)
return 0;
}
static int pwrap_mt7622_init_soc_specific(struct pmic_wrapper *wrp)
{
pwrap_writel(wrp, 0, PWRAP_STAUPD_PRD);
/* enable 2wire SPI master */
pwrap_writel(wrp, 0x8000000, PWRAP_SPI2_CTRL);
return 0;
}
static int pwrap_init(struct pmic_wrapper *wrp)
{
int ret;
u32 rdata;
reset_control_reset(wrp->rstc);
if (wrp->rstc_bridge)
......@@ -926,10 +1240,12 @@ static int pwrap_init(struct pmic_wrapper *wrp)
pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD);
}
/* Reset SPI slave */
ret = pwrap_reset_spislave(wrp);
if (ret)
return ret;
if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SPI)) {
/* Reset SPI slave */
ret = pwrap_reset_spislave(wrp);
if (ret)
return ret;
}
pwrap_writel(wrp, 1, PWRAP_WRAP_EN);
......@@ -941,45 +1257,26 @@ static int pwrap_init(struct pmic_wrapper *wrp)
if (ret)
return ret;
/* Setup serial input delay */
ret = pwrap_init_sidly(wrp);
if (ret)
return ret;
/* Enable dual IO mode */
pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_DIO_EN], 1);
/* Check IDLE & INIT_DONE in advance */
ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle);
if (ret) {
dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret);
return ret;
if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SPI)) {
/* Setup serial input delay */
ret = pwrap_init_sidly(wrp);
if (ret)
return ret;
}
pwrap_writel(wrp, 1, PWRAP_DIO_EN);
/* Read Test */
pwrap_read(wrp, wrp->slave->dew_regs[PWRAP_DEW_READ_TEST], &rdata);
if (rdata != PWRAP_DEW_READ_TEST_VAL) {
dev_err(wrp->dev, "Read test failed after switch to DIO mode: 0x%04x != 0x%04x\n",
PWRAP_DEW_READ_TEST_VAL, rdata);
return -EFAULT;
if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_DUALIO)) {
/* Enable dual I/O mode */
ret = pwrap_init_dual_io(wrp);
if (ret)
return ret;
}
/* Enable encryption */
ret = pwrap_init_cipher(wrp);
if (ret)
return ret;
/* Signature checking - using CRC */
if (pwrap_write(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_EN], 0x1))
return -EFAULT;
pwrap_writel(wrp, 0x1, PWRAP_CRC_EN);
pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE);
pwrap_writel(wrp, wrp->slave->dew_regs[PWRAP_DEW_CRC_VAL],
PWRAP_SIG_ADR);
pwrap_writel(wrp, wrp->master->arb_en_all, PWRAP_HIPRIO_ARB_EN);
if (HAS_CAP(wrp->slave->caps, PWRAP_SLV_CAP_SECURITY)) {
/* Enable security on bus */
ret = pwrap_init_security(wrp);
if (ret)
return ret;
}
if (wrp->master->type == PWRAP_MT8135)
pwrap_writel(wrp, 0x7, PWRAP_RRARB_EN);
......@@ -1023,7 +1320,7 @@ static irqreturn_t pwrap_interrupt(int irqno, void *dev_id)
return IRQ_HANDLED;
}
static const struct regmap_config pwrap_regmap_config = {
static const struct regmap_config pwrap_regmap_config16 = {
.reg_bits = 16,
.val_bits = 16,
.reg_stride = 2,
......@@ -1032,20 +1329,54 @@ static const struct regmap_config pwrap_regmap_config = {
.max_register = 0xffff,
};
static const struct regmap_config pwrap_regmap_config32 = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
.reg_read = pwrap_regmap_read,
.reg_write = pwrap_regmap_write,
.max_register = 0xffff,
};
static const struct pwrap_slv_type pmic_mt6323 = {
.dew_regs = mt6323_regs,
.type = PMIC_MT6323,
.regmap = &pwrap_regmap_config16,
.caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO |
PWRAP_SLV_CAP_SECURITY,
.pwrap_read = pwrap_read16,
.pwrap_write = pwrap_write16,
};
static const struct pwrap_slv_type pmic_mt6380 = {
.dew_regs = NULL,
.type = PMIC_MT6380,
.regmap = &pwrap_regmap_config32,
.caps = 0,
.pwrap_read = pwrap_read32,
.pwrap_write = pwrap_write32,
};
static const struct pwrap_slv_type pmic_mt6397 = {
.dew_regs = mt6397_regs,
.type = PMIC_MT6397,
.regmap = &pwrap_regmap_config16,
.caps = PWRAP_SLV_CAP_SPI | PWRAP_SLV_CAP_DUALIO |
PWRAP_SLV_CAP_SECURITY,
.pwrap_read = pwrap_read16,
.pwrap_write = pwrap_write16,
};
static const struct of_device_id of_slave_match_tbl[] = {
{
.compatible = "mediatek,mt6323",
.data = &pmic_mt6323,
}, {
/* The MT6380 PMIC only implements a regulator, so we bind it
* directly instead of using an MFD.
*/
.compatible = "mediatek,mt6380-regulator",
.data = &pmic_mt6380,
}, {
.compatible = "mediatek,mt6397",
.data = &pmic_mt6397,
......@@ -1067,6 +1398,18 @@ static const struct pmic_wrapper_type pwrap_mt2701 = {
.init_soc_specific = pwrap_mt2701_init_soc_specific,
};
static const struct pmic_wrapper_type pwrap_mt7622 = {
.regs = mt7622_regs,
.type = PWRAP_MT7622,
.arb_en_all = 0xff,
.int_en_all = ~(u32)BIT(31),
.spi_w = PWRAP_MAN_CMD_SPI_WRITE,
.wdt_src = PWRAP_WDT_SRC_MASK_ALL,
.has_bridge = 0,
.init_reg_clock = pwrap_common_init_reg_clock,
.init_soc_specific = pwrap_mt7622_init_soc_specific,
};
static const struct pmic_wrapper_type pwrap_mt8135 = {
.regs = mt8135_regs,
.type = PWRAP_MT8135,
......@@ -1075,7 +1418,7 @@ static const struct pmic_wrapper_type pwrap_mt8135 = {
.spi_w = PWRAP_MAN_CMD_SPI_WRITE,
.wdt_src = PWRAP_WDT_SRC_MASK_ALL,
.has_bridge = 1,
.init_reg_clock = pwrap_mt8135_init_reg_clock,
.init_reg_clock = pwrap_common_init_reg_clock,
.init_soc_specific = pwrap_mt8135_init_soc_specific,
};
......@@ -1087,7 +1430,7 @@ static const struct pmic_wrapper_type pwrap_mt8173 = {
.spi_w = PWRAP_MAN_CMD_SPI_WRITE,
.wdt_src = PWRAP_WDT_SRC_MASK_NO_STAUPD,
.has_bridge = 0,
.init_reg_clock = pwrap_mt8173_init_reg_clock,
.init_reg_clock = pwrap_common_init_reg_clock,
.init_soc_specific = pwrap_mt8173_init_soc_specific,
};
......@@ -1095,6 +1438,9 @@ static const struct of_device_id of_pwrap_match_tbl[] = {
{
.compatible = "mediatek,mt2701-pwrap",
.data = &pwrap_mt2701,
}, {
.compatible = "mediatek,mt7622-pwrap",
.data = &pwrap_mt7622,
}, {
.compatible = "mediatek,mt8135-pwrap",
.data = &pwrap_mt8135,
......@@ -1159,23 +1505,27 @@ static int pwrap_probe(struct platform_device *pdev)
if (IS_ERR(wrp->bridge_base))
return PTR_ERR(wrp->bridge_base);
wrp->rstc_bridge = devm_reset_control_get(wrp->dev, "pwrap-bridge");
wrp->rstc_bridge = devm_reset_control_get(wrp->dev,
"pwrap-bridge");
if (IS_ERR(wrp->rstc_bridge)) {
ret = PTR_ERR(wrp->rstc_bridge);
dev_dbg(wrp->dev, "cannot get pwrap-bridge reset: %d\n", ret);
dev_dbg(wrp->dev,
"cannot get pwrap-bridge reset: %d\n", ret);
return ret;
}
}
wrp->clk_spi = devm_clk_get(wrp->dev, "spi");
if (IS_ERR(wrp->clk_spi)) {
dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_spi));
dev_dbg(wrp->dev, "failed to get clock: %ld\n",
PTR_ERR(wrp->clk_spi));
return PTR_ERR(wrp->clk_spi);
}
wrp->clk_wrap = devm_clk_get(wrp->dev, "wrap");
if (IS_ERR(wrp->clk_wrap)) {
dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_wrap));
dev_dbg(wrp->dev, "failed to get clock: %ld\n",
PTR_ERR(wrp->clk_wrap));
return PTR_ERR(wrp->clk_wrap);
}
......@@ -1220,12 +1570,13 @@ static int pwrap_probe(struct platform_device *pdev)
pwrap_writel(wrp, wrp->master->int_en_all, PWRAP_INT_EN);
irq = platform_get_irq(pdev, 0);
ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt, IRQF_TRIGGER_HIGH,
"mt-pmic-pwrap", wrp);
ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt,
IRQF_TRIGGER_HIGH,
"mt-pmic-pwrap", wrp);
if (ret)
goto err_out2;
wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, &pwrap_regmap_config);
wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, wrp->slave->regmap);
if (IS_ERR(wrp->regmap)) {
ret = PTR_ERR(wrp->regmap);
goto err_out2;
......
......@@ -35,6 +35,17 @@ config QCOM_PM
modes. It interfaces with various system drivers to put the cores in
low power modes.
config QCOM_RMTFS_MEM
tristate "Qualcomm Remote Filesystem memory driver"
depends on ARCH_QCOM
help
The Qualcomm remote filesystem memory driver is used for allocating
and exposing regions of shared memory with remote processors for the
purpose of exchanging sector-data between the remote filesystem
service and its clients.
Say y here if you intend to boot the modem remoteproc.
config QCOM_SMEM
tristate "Qualcomm Shared Memory Manager (SMEM)"
depends on ARCH_QCOM
......
......@@ -3,6 +3,7 @@ obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o
obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
......
/*
* Copyright (c) 2017 Linaro Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/cdev.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/io.h>
#include <linux/qcom_scm.h>
#define QCOM_RMTFS_MEM_DEV_MAX (MINORMASK + 1)
static dev_t qcom_rmtfs_mem_major;
struct qcom_rmtfs_mem {
struct device dev;
struct cdev cdev;
void *base;
phys_addr_t addr;
phys_addr_t size;
unsigned int client_id;
};
static ssize_t qcom_rmtfs_mem_show(struct device *dev,
struct device_attribute *attr,
char *buf);
static DEVICE_ATTR(phys_addr, 0400, qcom_rmtfs_mem_show, NULL);
static DEVICE_ATTR(size, 0400, qcom_rmtfs_mem_show, NULL);
static DEVICE_ATTR(client_id, 0400, qcom_rmtfs_mem_show, NULL);
static ssize_t qcom_rmtfs_mem_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct qcom_rmtfs_mem *rmtfs_mem = container_of(dev,
struct qcom_rmtfs_mem,
dev);
if (attr == &dev_attr_phys_addr)
return sprintf(buf, "%pa\n", &rmtfs_mem->addr);
if (attr == &dev_attr_size)
return sprintf(buf, "%pa\n", &rmtfs_mem->size);
if (attr == &dev_attr_client_id)
return sprintf(buf, "%d\n", rmtfs_mem->client_id);
return -EINVAL;
}
static struct attribute *qcom_rmtfs_mem_attrs[] = {
&dev_attr_phys_addr.attr,
&dev_attr_size.attr,
&dev_attr_client_id.attr,
NULL
};
ATTRIBUTE_GROUPS(qcom_rmtfs_mem);
static int qcom_rmtfs_mem_open(struct inode *inode, struct file *filp)
{
struct qcom_rmtfs_mem *rmtfs_mem = container_of(inode->i_cdev,
struct qcom_rmtfs_mem,
cdev);
get_device(&rmtfs_mem->dev);
filp->private_data = rmtfs_mem;
return 0;
}
static ssize_t qcom_rmtfs_mem_read(struct file *filp,
char __user *buf, size_t count, loff_t *f_pos)
{
struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data;
if (*f_pos >= rmtfs_mem->size)
return 0;
if (*f_pos + count >= rmtfs_mem->size)
count = rmtfs_mem->size - *f_pos;
if (copy_to_user(buf, rmtfs_mem->base + *f_pos, count))
return -EFAULT;
*f_pos += count;
return count;
}
static ssize_t qcom_rmtfs_mem_write(struct file *filp,
const char __user *buf, size_t count,
loff_t *f_pos)
{
struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data;
if (*f_pos >= rmtfs_mem->size)
return 0;
if (*f_pos + count >= rmtfs_mem->size)
count = rmtfs_mem->size - *f_pos;
if (copy_from_user(rmtfs_mem->base + *f_pos, buf, count))
return -EFAULT;
*f_pos += count;
return count;
}
static int qcom_rmtfs_mem_release(struct inode *inode, struct file *filp)
{
struct qcom_rmtfs_mem *rmtfs_mem = filp->private_data;
put_device(&rmtfs_mem->dev);
return 0;
}
static const struct file_operations qcom_rmtfs_mem_fops = {
.owner = THIS_MODULE,
.open = qcom_rmtfs_mem_open,
.read = qcom_rmtfs_mem_read,
.write = qcom_rmtfs_mem_write,
.release = qcom_rmtfs_mem_release,
.llseek = default_llseek,
};
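/*
 * A minimal sketch of a userspace client (hypothetical; it assumes a
 * "qcom,client-id" of 1 and a /dev node created from the
 * dev_set_name() pattern used in probe below):
 *
 *	int fd = open("/dev/qcom_rmtfs_mem1", O_RDWR);
 *	char sector[512];
 *
 *	if (fd >= 0) {
 *		read(fd, sector, sizeof(sector));
 *		close(fd);
 *	}
 */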
static void qcom_rmtfs_mem_release_device(struct device *dev)
{
struct qcom_rmtfs_mem *rmtfs_mem = container_of(dev,
struct qcom_rmtfs_mem,
dev);
kfree(rmtfs_mem);
}
static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
struct reserved_mem *rmem;
struct qcom_rmtfs_mem *rmtfs_mem;
u32 client_id;
int ret;
rmem = of_reserved_mem_lookup(node);
if (!rmem) {
dev_err(&pdev->dev, "failed to acquire memory region\n");
return -EINVAL;
}
ret = of_property_read_u32(node, "qcom,client-id", &client_id);
if (ret) {
dev_err(&pdev->dev, "failed to parse \"qcom,client-id\"\n");
return ret;
}
rmtfs_mem = kzalloc(sizeof(*rmtfs_mem), GFP_KERNEL);
if (!rmtfs_mem)
return -ENOMEM;
rmtfs_mem->addr = rmem->base;
rmtfs_mem->client_id = client_id;
rmtfs_mem->size = rmem->size;
device_initialize(&rmtfs_mem->dev);
rmtfs_mem->dev.parent = &pdev->dev;
rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups;
rmtfs_mem->base = devm_memremap(&rmtfs_mem->dev, rmtfs_mem->addr,
rmtfs_mem->size, MEMREMAP_WC);
if (IS_ERR(rmtfs_mem->base)) {
dev_err(&pdev->dev, "failed to remap rmtfs_mem region\n");
ret = PTR_ERR(rmtfs_mem->base);
goto put_device;
}
cdev_init(&rmtfs_mem->cdev, &qcom_rmtfs_mem_fops);
rmtfs_mem->cdev.owner = THIS_MODULE;
dev_set_name(&rmtfs_mem->dev, "qcom_rmtfs_mem%d", client_id);
rmtfs_mem->dev.id = client_id;
rmtfs_mem->dev.devt = MKDEV(MAJOR(qcom_rmtfs_mem_major), client_id);
ret = cdev_device_add(&rmtfs_mem->cdev, &rmtfs_mem->dev);
if (ret) {
dev_err(&pdev->dev, "failed to add cdev: %d\n", ret);
goto put_device;
}
rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
dev_set_drvdata(&pdev->dev, rmtfs_mem);
return 0;
put_device:
put_device(&rmtfs_mem->dev);
return ret;
}
static int qcom_rmtfs_mem_remove(struct platform_device *pdev)
{
struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev);
cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev);
put_device(&rmtfs_mem->dev);
return 0;
}
static const struct of_device_id qcom_rmtfs_mem_of_match[] = {
{ .compatible = "qcom,rmtfs-mem" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_rmtfs_mem_of_match);
static struct platform_driver qcom_rmtfs_mem_driver = {
.probe = qcom_rmtfs_mem_probe,
.remove = qcom_rmtfs_mem_remove,
.driver = {
.name = "qcom_rmtfs_mem",
.of_match_table = qcom_rmtfs_mem_of_match,
},
};
static int qcom_rmtfs_mem_init(void)
{
int ret;
ret = alloc_chrdev_region(&qcom_rmtfs_mem_major, 0,
QCOM_RMTFS_MEM_DEV_MAX, "qcom_rmtfs_mem");
if (ret < 0) {
pr_err("qcom_rmtfs_mem: failed to allocate char dev region\n");
return ret;
}
ret = platform_driver_register(&qcom_rmtfs_mem_driver);
if (ret < 0) {
pr_err("qcom_rmtfs_mem: failed to register rmtfs_mem driver\n");
unregister_chrdev_region(qcom_rmtfs_mem_major,
QCOM_RMTFS_MEM_DEV_MAX);
}
return ret;
}
module_init(qcom_rmtfs_mem_init);
static void qcom_rmtfs_mem_exit(void)
{
platform_driver_unregister(&qcom_rmtfs_mem_driver);
unregister_chrdev_region(qcom_rmtfs_mem_major, QCOM_RMTFS_MEM_DEV_MAX);
}
module_exit(qcom_rmtfs_mem_exit);
......@@ -52,8 +52,13 @@
*
* Items in the non-cached region are allocated from the start of the partition
* while items in the cached region are allocated from the end. The free area
* is hence the region between the cached and non-cached offsets.
* is hence the region between the cached and non-cached offsets. The header of
* cached items comes after the data.
*
* Version 12 (SMEM_GLOBAL_PART_VERSION) changes the item alloc/get procedure
* for the global heap. A new global partition is created from the global heap
* region with partition type (SMEM_GLOBAL_HOST) and the max smem item count is
* set by the bootloader.
*
* To synchronize allocations in the shared memory heaps a remote spinlock must
* be held - currently lock number 3 of the sfpb or tcsr is used for this on all
......@@ -62,13 +67,13 @@
*/
/*
* Item 3 of the global heap contains an array of versions for the various
* software components in the SoC. We verify that the boot loader version is
* what the expected version (SMEM_EXPECTED_VERSION) as a sanity check.
* The version member of the smem header contains an array of versions for the
* various software components in the SoC. We verify that the boot loader
* version is a valid version as a sanity check.
*/
#define SMEM_ITEM_VERSION 3
#define SMEM_MASTER_SBL_VERSION_INDEX 7
#define SMEM_EXPECTED_VERSION 11
#define SMEM_MASTER_SBL_VERSION_INDEX 7
#define SMEM_GLOBAL_HEAP_VERSION 11
#define SMEM_GLOBAL_PART_VERSION 12
/*
* The first 8 items are only to be allocated by the boot loader while
......@@ -82,8 +87,11 @@
/* Processor/host identifier for the application processor */
#define SMEM_HOST_APPS 0
/* Processor/host identifier for the global partition */
#define SMEM_GLOBAL_HOST 0xfffe
/* Max number of processors/hosts in a system */
#define SMEM_HOST_COUNT 9
#define SMEM_HOST_COUNT 10
/**
* struct smem_proc_comm - proc_comm communication struct (legacy)
......@@ -140,6 +148,7 @@ struct smem_header {
* @flags: flags for the partition (currently unused)
* @host0: first processor/host with access to this partition
* @host1: second processor/host with access to this partition
* @cacheline: alignment for "cached" entries
* @reserved: reserved entries for later use
*/
struct smem_ptable_entry {
......@@ -148,7 +157,8 @@ struct smem_ptable_entry {
__le32 flags;
__le16 host0;
__le16 host1;
__le32 reserved[8];
__le32 cacheline;
__le32 reserved[7];
};
/**
......@@ -212,6 +222,24 @@ struct smem_private_entry {
};
#define SMEM_PRIVATE_CANARY 0xa5a5
/**
* struct smem_info - smem region info located after the table of contents
* @magic: magic number, must be SMEM_INFO_MAGIC
* @size: size of the smem region
* @base_addr: base address of the smem region
* @reserved: reserved for future use
* @num_items: highest accepted item number
*/
struct smem_info {
u8 magic[4];
__le32 size;
__le32 base_addr;
__le32 reserved;
__le16 num_items;
};
static const u8 SMEM_INFO_MAGIC[] = { 0x53, 0x49, 0x49, 0x49 }; /* SIII */
/**
* struct smem_region - representation of a chunk of memory used for smem
* @aux_base: identifier of aux_mem base
......@@ -228,8 +256,12 @@ struct smem_region {
* struct qcom_smem - device data for the smem device
* @dev: device pointer
* @hwlock: reference to a hwspinlock
* @global_partition: pointer to global partition when in use
* @global_cacheline: cacheline size for global partition
* @partitions: list of pointers to partitions affecting the current
* processor/host
* @cacheline: list of cacheline sizes for each host
* @item_count: max accepted item number
* @num_regions: number of @regions
* @regions: list of the memory regions defining the shared memory
*/
......@@ -238,21 +270,33 @@ struct qcom_smem {
struct hwspinlock *hwlock;
struct smem_partition_header *global_partition;
size_t global_cacheline;
struct smem_partition_header *partitions[SMEM_HOST_COUNT];
size_t cacheline[SMEM_HOST_COUNT];
u32 item_count;
unsigned num_regions;
struct smem_region regions[0];
};
static struct smem_private_entry *
phdr_to_last_private_entry(struct smem_partition_header *phdr)
phdr_to_last_uncached_entry(struct smem_partition_header *phdr)
{
void *p = phdr;
return p + le32_to_cpu(phdr->offset_free_uncached);
}
static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr)
static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr,
size_t cacheline)
{
void *p = phdr;
return p + le32_to_cpu(phdr->size) - ALIGN(sizeof(*phdr), cacheline);
}
static void *phdr_to_last_cached_entry(struct smem_partition_header *phdr)
{
void *p = phdr;
......@@ -260,7 +304,7 @@ static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr)
}
static struct smem_private_entry *
phdr_to_first_private_entry(struct smem_partition_header *phdr)
phdr_to_first_uncached_entry(struct smem_partition_header *phdr)
{
void *p = phdr;
......@@ -268,7 +312,7 @@ phdr_to_first_private_entry(struct smem_partition_header *phdr)
}
static struct smem_private_entry *
private_entry_next(struct smem_private_entry *e)
uncached_entry_next(struct smem_private_entry *e)
{
void *p = e;
......@@ -276,13 +320,28 @@ private_entry_next(struct smem_private_entry *e)
le32_to_cpu(e->size);
}
static void *entry_to_item(struct smem_private_entry *e)
static struct smem_private_entry *
cached_entry_next(struct smem_private_entry *e, size_t cacheline)
{
void *p = e;
return p - le32_to_cpu(e->size) - ALIGN(sizeof(*e), cacheline);
}
static void *uncached_entry_to_item(struct smem_private_entry *e)
{
void *p = e;
return p + sizeof(*e) + le16_to_cpu(e->padding_hdr);
}
static void *cached_entry_to_item(struct smem_private_entry *e)
{
void *p = e;
return p - le32_to_cpu(e->size);
}
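/*
 * Cached entries grow downward from the end of the partition, and each
 * entry's header sits above its data; hence cached_entry_to_item()
 * subtracts the size and cached_entry_next() steps toward lower
 * addresses.
 */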
/* Pointer to the one and only smem handle */
static struct qcom_smem *__smem;
......@@ -290,32 +349,30 @@ static struct qcom_smem *__smem;
#define HWSPINLOCK_TIMEOUT 1000
static int qcom_smem_alloc_private(struct qcom_smem *smem,
unsigned host,
struct smem_partition_header *phdr,
unsigned item,
size_t size)
{
struct smem_partition_header *phdr;
struct smem_private_entry *hdr, *end;
size_t alloc_size;
void *cached;
phdr = smem->partitions[host];
hdr = phdr_to_first_private_entry(phdr);
end = phdr_to_last_private_entry(phdr);
cached = phdr_to_first_cached_entry(phdr);
hdr = phdr_to_first_uncached_entry(phdr);
end = phdr_to_last_uncached_entry(phdr);
cached = phdr_to_last_cached_entry(phdr);
while (hdr < end) {
if (hdr->canary != SMEM_PRIVATE_CANARY) {
dev_err(smem->dev,
"Found invalid canary in host %d partition\n",
host);
"Found invalid canary in hosts %d:%d partition\n",
phdr->host0, phdr->host1);
return -EINVAL;
}
if (le16_to_cpu(hdr->item) == item)
return -EEXIST;
hdr = private_entry_next(hdr);
hdr = uncached_entry_next(hdr);
}
/* Check that we don't grow into the cached region */
......@@ -346,11 +403,8 @@ static int qcom_smem_alloc_global(struct qcom_smem *smem,
unsigned item,
size_t size)
{
struct smem_header *header;
struct smem_global_entry *entry;
if (WARN_ON(item >= SMEM_ITEM_COUNT))
return -EINVAL;
struct smem_header *header;
header = smem->regions[0].virt_base;
entry = &header->toc[item];
......@@ -389,6 +443,7 @@ static int qcom_smem_alloc_global(struct qcom_smem *smem,
*/
int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
{
struct smem_partition_header *phdr;
unsigned long flags;
int ret;
......@@ -401,16 +456,24 @@ int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
return -EINVAL;
}
if (WARN_ON(item >= __smem->item_count))
return -EINVAL;
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ret;
if (host < SMEM_HOST_COUNT && __smem->partitions[host])
ret = qcom_smem_alloc_private(__smem, host, item, size);
else
if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
phdr = __smem->partitions[host];
ret = qcom_smem_alloc_private(__smem, phdr, item, size);
} else if (__smem->global_partition) {
phdr = __smem->global_partition;
ret = qcom_smem_alloc_private(__smem, phdr, item, size);
} else {
ret = qcom_smem_alloc_global(__smem, item, size);
}
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
......@@ -428,9 +491,6 @@ static void *qcom_smem_get_global(struct qcom_smem *smem,
u32 aux_base;
unsigned i;
if (WARN_ON(item >= SMEM_ITEM_COUNT))
return ERR_PTR(-EINVAL);
header = smem->regions[0].virt_base;
entry = &header->toc[item];
if (!entry->allocated)
......@@ -452,37 +512,58 @@ static void *qcom_smem_get_global(struct qcom_smem *smem,
}
static void *qcom_smem_get_private(struct qcom_smem *smem,
unsigned host,
struct smem_partition_header *phdr,
size_t cacheline,
unsigned item,
size_t *size)
{
struct smem_partition_header *phdr;
struct smem_private_entry *e, *end;
phdr = smem->partitions[host];
e = phdr_to_first_private_entry(phdr);
end = phdr_to_last_private_entry(phdr);
e = phdr_to_first_uncached_entry(phdr);
end = phdr_to_last_uncached_entry(phdr);
while (e < end) {
if (e->canary != SMEM_PRIVATE_CANARY) {
dev_err(smem->dev,
"Found invalid canary in host %d partition\n",
host);
return ERR_PTR(-EINVAL);
if (e->canary != SMEM_PRIVATE_CANARY)
goto invalid_canary;
if (le16_to_cpu(e->item) == item) {
if (size != NULL)
*size = le32_to_cpu(e->size) -
le16_to_cpu(e->padding_data);
return uncached_entry_to_item(e);
}
e = uncached_entry_next(e);
}
/* Item was not found in the uncached list, search the cached list */
e = phdr_to_first_cached_entry(phdr, cacheline);
end = phdr_to_last_cached_entry(phdr);
while (e > end) {
if (e->canary != SMEM_PRIVATE_CANARY)
goto invalid_canary;
if (le16_to_cpu(e->item) == item) {
if (size != NULL)
*size = le32_to_cpu(e->size) -
le16_to_cpu(e->padding_data);
return entry_to_item(e);
return cached_entry_to_item(e);
}
e = private_entry_next(e);
e = cached_entry_next(e, cacheline);
}
return ERR_PTR(-ENOENT);
invalid_canary:
dev_err(smem->dev, "Found invalid canary in hosts %d:%d partition\n",
phdr->host0, phdr->host1);
return ERR_PTR(-EINVAL);
}
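For orientation, a minimal client sketch against the exported API (qcom_smem_alloc()/qcom_smem_get() as declared in <linux/soc/qcom/smem.h>); the host and item numbers here are hypothetical placeholders, not values from this patch:
#include <linux/err.h>
#include <linux/soc/qcom/smem.h>

#define MY_SMEM_HOST	1	/* hypothetical remote host id */
#define MY_SMEM_ITEM	602	/* hypothetical item number */

static int my_smem_example(void)
{
	size_t size;
	u32 *buf;
	int ret;

	/* Allocate the item once; -EEXIST means it already exists. */
	ret = qcom_smem_alloc(MY_SMEM_HOST, MY_SMEM_ITEM, sizeof(*buf));
	if (ret < 0 && ret != -EEXIST)
		return ret;

	/* Map the item; size returns the usable payload length. */
	buf = qcom_smem_get(MY_SMEM_HOST, MY_SMEM_ITEM, &size);
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	*buf = 0xdeadbeef;
	return 0;
}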
/**
......@@ -496,23 +577,35 @@ static void *qcom_smem_get_private(struct qcom_smem *smem,
*/
void *qcom_smem_get(unsigned host, unsigned item, size_t *size)
{
struct smem_partition_header *phdr;
unsigned long flags;
size_t cacheln;
int ret;
void *ptr = ERR_PTR(-EPROBE_DEFER);
if (!__smem)
return ptr;
if (WARN_ON(item >= __smem->item_count))
return ERR_PTR(-EINVAL);
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ERR_PTR(ret);
if (host < SMEM_HOST_COUNT && __smem->partitions[host])
ptr = qcom_smem_get_private(__smem, host, item, size);
else
if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
phdr = __smem->partitions[host];
cacheln = __smem->cacheline[host];
ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size);
} else if (__smem->global_partition) {
phdr = __smem->global_partition;
cacheln = __smem->global_cacheline;
ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size);
} else {
ptr = qcom_smem_get_global(__smem, item, size);
}
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
......@@ -541,6 +634,10 @@ int qcom_smem_get_free_space(unsigned host)
phdr = __smem->partitions[host];
ret = le32_to_cpu(phdr->offset_free_cached) -
le32_to_cpu(phdr->offset_free_uncached);
} else if (__smem->global_partition) {
phdr = __smem->global_partition;
ret = le32_to_cpu(phdr->offset_free_cached) -
le32_to_cpu(phdr->offset_free_uncached);
} else {
header = __smem->regions[0].virt_base;
ret = le32_to_cpu(header->available);
......@@ -552,44 +649,131 @@ EXPORT_SYMBOL(qcom_smem_get_free_space);
static int qcom_smem_get_sbl_version(struct qcom_smem *smem)
{
struct smem_header *header;
__le32 *versions;
size_t size;
versions = qcom_smem_get_global(smem, SMEM_ITEM_VERSION, &size);
if (IS_ERR(versions)) {
dev_err(smem->dev, "Unable to read the version item\n");
return -ENOENT;
}
if (size < sizeof(unsigned) * SMEM_MASTER_SBL_VERSION_INDEX) {
dev_err(smem->dev, "Version item is too small\n");
return -EINVAL;
}
header = smem->regions[0].virt_base;
versions = header->version;
return le32_to_cpu(versions[SMEM_MASTER_SBL_VERSION_INDEX]);
}
static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
unsigned local_host)
static struct smem_ptable *qcom_smem_get_ptable(struct qcom_smem *smem)
{
struct smem_partition_header *header;
struct smem_ptable_entry *entry;
struct smem_ptable *ptable;
unsigned remote_host;
u32 version, host0, host1;
int i;
u32 version;
ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
if (memcmp(ptable->magic, SMEM_PTABLE_MAGIC, sizeof(ptable->magic)))
return 0;
return ERR_PTR(-ENOENT);
version = le32_to_cpu(ptable->version);
if (version != 1) {
dev_err(smem->dev,
"Unsupported partition header version %d\n", version);
return ERR_PTR(-EINVAL);
}
return ptable;
}
static u32 qcom_smem_get_item_count(struct qcom_smem *smem)
{
struct smem_ptable *ptable;
struct smem_info *info;
ptable = qcom_smem_get_ptable(smem);
if (IS_ERR_OR_NULL(ptable))
return SMEM_ITEM_COUNT;
info = (struct smem_info *)&ptable->entry[ptable->num_entries];
if (memcmp(info->magic, SMEM_INFO_MAGIC, sizeof(info->magic)))
return SMEM_ITEM_COUNT;
return le16_to_cpu(info->num_items);
}
static int qcom_smem_set_global_partition(struct qcom_smem *smem)
{
struct smem_partition_header *header;
struct smem_ptable_entry *entry = NULL;
struct smem_ptable *ptable;
u32 host0, host1, size;
int i;
ptable = qcom_smem_get_ptable(smem);
if (IS_ERR(ptable))
return PTR_ERR(ptable);
for (i = 0; i < le32_to_cpu(ptable->num_entries); i++) {
entry = &ptable->entry[i];
host0 = le16_to_cpu(entry->host0);
host1 = le16_to_cpu(entry->host1);
if (host0 == SMEM_GLOBAL_HOST && host0 == host1)
break;
}
if (!entry) {
dev_err(smem->dev, "Missing entry for global partition\n");
return -EINVAL;
}
if (!le32_to_cpu(entry->offset) || !le32_to_cpu(entry->size)) {
dev_err(smem->dev, "Invalid entry for global partition\n");
return -EINVAL;
}
if (smem->global_partition) {
dev_err(smem->dev, "Already found the global partition\n");
return -EINVAL;
}
header = smem->regions[0].virt_base + le32_to_cpu(entry->offset);
host0 = le16_to_cpu(header->host0);
host1 = le16_to_cpu(header->host1);
if (memcmp(header->magic, SMEM_PART_MAGIC, sizeof(header->magic))) {
dev_err(smem->dev, "Global partition has invalid magic\n");
return -EINVAL;
}
if (host0 != SMEM_GLOBAL_HOST && host1 != SMEM_GLOBAL_HOST) {
dev_err(smem->dev, "Global partition hosts are invalid\n");
return -EINVAL;
}
if (le32_to_cpu(header->size) != le32_to_cpu(entry->size)) {
dev_err(smem->dev, "Global partition has invalid size\n");
return -EINVAL;
}
size = le32_to_cpu(header->offset_free_uncached);
if (size > le32_to_cpu(header->size)) {
dev_err(smem->dev,
"Global partition has invalid free pointer\n");
return -EINVAL;
}
smem->global_partition = header;
smem->global_cacheline = le32_to_cpu(entry->cacheline);
return 0;
}
static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
unsigned int local_host)
{
struct smem_partition_header *header;
struct smem_ptable_entry *entry;
struct smem_ptable *ptable;
unsigned int remote_host;
u32 host0, host1;
int i;
ptable = qcom_smem_get_ptable(smem);
if (IS_ERR(ptable))
return PTR_ERR(ptable);
for (i = 0; i < le32_to_cpu(ptable->num_entries); i++) {
entry = &ptable->entry[i];
host0 = le16_to_cpu(entry->host0);
......@@ -646,7 +830,7 @@ static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
return -EINVAL;
}
if (header->size != entry->size) {
if (le32_to_cpu(header->size) != le32_to_cpu(entry->size)) {
dev_err(smem->dev,
"Partition %d has invalid size\n", i);
return -EINVAL;
......@@ -659,6 +843,7 @@ static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
}
smem->partitions[remote_host] = header;
smem->cacheline[remote_host] = le32_to_cpu(entry->cacheline);
}
return 0;
......@@ -729,13 +914,23 @@ static int qcom_smem_probe(struct platform_device *pdev)
}
version = qcom_smem_get_sbl_version(smem);
if (version >> 16 != SMEM_EXPECTED_VERSION) {
switch (version >> 16) {
case SMEM_GLOBAL_PART_VERSION:
ret = qcom_smem_set_global_partition(smem);
if (ret < 0)
return ret;
smem->item_count = qcom_smem_get_item_count(smem);
break;
case SMEM_GLOBAL_HEAP_VERSION:
smem->item_count = SMEM_ITEM_COUNT;
break;
default:
dev_err(&pdev->dev, "Unsupported SMEM version 0x%x\n", version);
return -EINVAL;
}
ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS);
if (ret < 0)
if (ret < 0 && ret != -ENOENT)
return ret;
hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
......
......@@ -3,7 +3,8 @@ config SOC_RENESAS
default y if ARCH_RENESAS
select SOC_BUS
select RST_RCAR if ARCH_RCAR_GEN1 || ARCH_RCAR_GEN2 || \
ARCH_R8A7795 || ARCH_R8A7796 || ARCH_R8A77995
ARCH_R8A7795 || ARCH_R8A7796 || ARCH_R8A77970 || \
ARCH_R8A77995
select SYSC_R8A7743 if ARCH_R8A7743
select SYSC_R8A7745 if ARCH_R8A7745
select SYSC_R8A7779 if ARCH_R8A7779
......@@ -13,6 +14,7 @@ config SOC_RENESAS
select SYSC_R8A7794 if ARCH_R8A7794
select SYSC_R8A7795 if ARCH_R8A7795
select SYSC_R8A7796 if ARCH_R8A7796
select SYSC_R8A77970 if ARCH_R8A77970
select SYSC_R8A77995 if ARCH_R8A77995
if SOC_RENESAS
......@@ -54,6 +56,10 @@ config SYSC_R8A7796
bool "R-Car M3-W System Controller support" if COMPILE_TEST
select SYSC_RCAR
config SYSC_R8A77970
bool "R-Car V3M System Controller support" if COMPILE_TEST
select SYSC_RCAR
config SYSC_R8A77995
bool "R-Car D3 System Controller support" if COMPILE_TEST
select SYSC_RCAR
......
......@@ -12,6 +12,7 @@ obj-$(CONFIG_SYSC_R8A7792) += r8a7792-sysc.o
obj-$(CONFIG_SYSC_R8A7794) += r8a7794-sysc.o
obj-$(CONFIG_SYSC_R8A7795) += r8a7795-sysc.o
obj-$(CONFIG_SYSC_R8A7796) += r8a7796-sysc.o
obj-$(CONFIG_SYSC_R8A77970) += r8a77970-sysc.o
obj-$(CONFIG_SYSC_R8A77995) += r8a77995-sysc.o
# Family
......
/*
* Renesas R-Car V3M System Controller
*
* Copyright (C) 2017 Cogent Embedded Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/bug.h>
#include <linux/kernel.h>
#include <dt-bindings/power/r8a77970-sysc.h>
#include "rcar-sysc.h"
static const struct rcar_sysc_area r8a77970_areas[] __initconst = {
{ "always-on", 0, 0, R8A77970_PD_ALWAYS_ON, -1, PD_ALWAYS_ON },
{ "ca53-scu", 0x140, 0, R8A77970_PD_CA53_SCU, R8A77970_PD_ALWAYS_ON,
PD_SCU },
{ "ca53-cpu0", 0x200, 0, R8A77970_PD_CA53_CPU0, R8A77970_PD_CA53_SCU,
PD_CPU_NOCR },
{ "ca53-cpu1", 0x200, 1, R8A77970_PD_CA53_CPU1, R8A77970_PD_CA53_SCU,
PD_CPU_NOCR },
{ "cr7", 0x240, 0, R8A77970_PD_CR7, R8A77970_PD_ALWAYS_ON },
{ "a3ir", 0x180, 0, R8A77970_PD_A3IR, R8A77970_PD_ALWAYS_ON },
{ "a2ir0", 0x400, 0, R8A77970_PD_A2IR0, R8A77970_PD_ALWAYS_ON },
{ "a2ir1", 0x400, 1, R8A77970_PD_A2IR1, R8A77970_PD_A2IR0 },
{ "a2ir2", 0x400, 2, R8A77970_PD_A2IR2, R8A77970_PD_A2IR0 },
{ "a2ir3", 0x400, 3, R8A77970_PD_A2IR3, R8A77970_PD_A2IR0 },
{ "a2sc0", 0x400, 4, R8A77970_PD_A2SC0, R8A77970_PD_ALWAYS_ON },
{ "a2sc1", 0x400, 5, R8A77970_PD_A2SC1, R8A77970_PD_A2SC0 },
};
const struct rcar_sysc_info r8a77970_sysc_info __initconst = {
.areas = r8a77970_areas,
.num_areas = ARRAY_SIZE(r8a77970_areas),
};
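The positional initializers in the table above follow the field order of struct rcar_sysc_area. Spelled out with designated initializers, the always-on entry would read roughly as follows (field comments paraphrase rcar-sysc.h):
static const struct rcar_sysc_area example __initconst = {
	.name      = "always-on",
	.chan_offs = 0,			/* offset of PWRSR register */
	.chan_bit  = 0,			/* bit within the PWR* registers */
	.isr_bit   = R8A77970_PD_ALWAYS_ON,	/* bit in SYSCI*R */
	.parent    = -1,		/* no parent domain */
	.flags     = PD_ALWAYS_ON,
};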
......@@ -41,6 +41,7 @@ static const struct of_device_id rcar_rst_matches[] __initconst = {
/* R-Car Gen3 is handled like R-Car Gen2 */
{ .compatible = "renesas,r8a7795-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a7796-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a77970-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a77995-rst", .data = &rcar_rst_gen2 },
{ /* sentinel */ }
};
......
......@@ -284,6 +284,9 @@ static const struct of_device_id rcar_sysc_matches[] = {
#ifdef CONFIG_SYSC_R8A7796
{ .compatible = "renesas,r8a7796-sysc", .data = &r8a7796_sysc_info },
#endif
#ifdef CONFIG_SYSC_R8A77970
{ .compatible = "renesas,r8a77970-sysc", .data = &r8a77970_sysc_info },
#endif
#ifdef CONFIG_SYSC_R8A77995
{ .compatible = "renesas,r8a77995-sysc", .data = &r8a77995_sysc_info },
#endif
......
......@@ -58,6 +58,7 @@ extern const struct rcar_sysc_info r8a7792_sysc_info;
extern const struct rcar_sysc_info r8a7794_sysc_info;
extern const struct rcar_sysc_info r8a7795_sysc_info;
extern const struct rcar_sysc_info r8a7796_sysc_info;
extern const struct rcar_sysc_info r8a77970_sysc_info;
extern const struct rcar_sysc_info r8a77995_sysc_info;
......
......@@ -144,6 +144,11 @@ static const struct renesas_soc soc_rcar_m3_w __initconst __maybe_unused = {
.id = 0x52,
};
static const struct renesas_soc soc_rcar_v3m __initconst __maybe_unused = {
.family = &fam_rcar_gen3,
.id = 0x54,
};
static const struct renesas_soc soc_rcar_d3 __initconst __maybe_unused = {
.family = &fam_rcar_gen3,
.id = 0x58,
......@@ -204,6 +209,9 @@ static const struct of_device_id renesas_socs[] __initconst = {
#ifdef CONFIG_ARCH_R8A7796
{ .compatible = "renesas,r8a7796", .data = &soc_rcar_m3_w },
#endif
#ifdef CONFIG_ARCH_R8A77970
{ .compatible = "renesas,r8a77970", .data = &soc_rcar_v3m },
#endif
#ifdef CONFIG_ARCH_R8A77995
{ .compatible = "renesas,r8a77995", .data = &soc_rcar_d3 },
#endif
......
......@@ -60,12 +60,6 @@ void exynos_sys_powerdown_conf(enum sys_powerdown mode)
if (pmu_data->powerdown_conf_extra)
pmu_data->powerdown_conf_extra(mode);
if (pmu_data->pmu_config_extra) {
for (i = 0; pmu_data->pmu_config_extra[i].offset != PMU_TABLE_END; i++)
pmu_raw_writel(pmu_data->pmu_config_extra[i].val[mode],
pmu_data->pmu_config_extra[i].offset);
}
}
/*
......@@ -88,9 +82,6 @@ static const struct of_device_id exynos_pmu_of_device_ids[] = {
}, {
.compatible = "samsung,exynos4210-pmu",
.data = exynos_pmu_data_arm_ptr(exynos4210_pmu_data),
}, {
.compatible = "samsung,exynos4212-pmu",
.data = exynos_pmu_data_arm_ptr(exynos4212_pmu_data),
}, {
.compatible = "samsung,exynos4412-pmu",
.data = exynos_pmu_data_arm_ptr(exynos4412_pmu_data),
......
......@@ -23,7 +23,6 @@ struct exynos_pmu_conf {
struct exynos_pmu_data {
const struct exynos_pmu_conf *pmu_config;
const struct exynos_pmu_conf *pmu_config_extra;
void (*pmu_init)(void);
void (*powerdown_conf)(enum sys_powerdown);
......@@ -36,7 +35,6 @@ extern void __iomem *pmu_base_addr;
/* list of all exported SoC specific data */
extern const struct exynos_pmu_data exynos3250_pmu_data;
extern const struct exynos_pmu_data exynos4210_pmu_data;
extern const struct exynos_pmu_data exynos4212_pmu_data;
extern const struct exynos_pmu_data exynos4412_pmu_data;
extern const struct exynos_pmu_data exynos5250_pmu_data;
extern const struct exynos_pmu_data exynos5420_pmu_data;
......
......@@ -90,7 +90,7 @@ static const struct exynos_pmu_conf exynos4210_pmu_config[] = {
{ PMU_TABLE_END,},
};
static const struct exynos_pmu_conf exynos4x12_pmu_config[] = {
static const struct exynos_pmu_conf exynos4412_pmu_config[] = {
{ S5P_ARM_CORE0_LOWPWR, { 0x0, 0x0, 0x2 } },
{ S5P_DIS_IRQ_CORE0, { 0x0, 0x0, 0x0 } },
{ S5P_DIS_IRQ_CENTRAL0, { 0x0, 0x0, 0x0 } },
......@@ -195,10 +195,6 @@ static const struct exynos_pmu_conf exynos4x12_pmu_config[] = {
{ S5P_GPS_ALIVE_LOWPWR, { 0x7, 0x0, 0x0 } },
{ S5P_CMU_SYSCLK_ISP_LOWPWR, { 0x1, 0x0, 0x0 } },
{ S5P_CMU_SYSCLK_GPS_LOWPWR, { 0x1, 0x0, 0x0 } },
{ PMU_TABLE_END,},
};
static const struct exynos_pmu_conf exynos4412_pmu_config[] = {
{ S5P_ARM_CORE2_LOWPWR, { 0x0, 0x0, 0x2 } },
{ S5P_DIS_IRQ_CORE2, { 0x0, 0x0, 0x0 } },
{ S5P_DIS_IRQ_CENTRAL2, { 0x0, 0x0, 0x0 } },
......@@ -212,11 +208,6 @@ const struct exynos_pmu_data exynos4210_pmu_data = {
.pmu_config = exynos4210_pmu_config,
};
const struct exynos_pmu_data exynos4212_pmu_data = {
.pmu_config = exynos4x12_pmu_config,
};
const struct exynos_pmu_data exynos4412_pmu_data = {
.pmu_config = exynos4x12_pmu_config,
.pmu_config_extra = exynos4412_pmu_config,
.pmu_config = exynos4412_pmu_config,
};
......@@ -42,6 +42,7 @@ static int tegra_bpmp_powergate_set_state(struct tegra_bpmp *bpmp,
{
struct mrq_pg_request request;
struct tegra_bpmp_message msg;
int err;
memset(&request, 0, sizeof(request));
request.cmd = CMD_PG_SET_STATE;
......@@ -53,7 +54,13 @@ static int tegra_bpmp_powergate_set_state(struct tegra_bpmp *bpmp,
msg.tx.data = &request;
msg.tx.size = sizeof(request);
return tegra_bpmp_transfer(bpmp, &msg);
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
return err;
else if (msg.rx.ret < 0)
return -EINVAL;
return 0;
}
static int tegra_bpmp_powergate_get_state(struct tegra_bpmp *bpmp,
......@@ -80,6 +87,8 @@ static int tegra_bpmp_powergate_get_state(struct tegra_bpmp *bpmp,
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
return PG_STATE_OFF;
else if (msg.rx.ret < 0)
return -EINVAL;
return response.get_state.state;
}
......@@ -106,6 +115,8 @@ static int tegra_bpmp_powergate_get_max_id(struct tegra_bpmp *bpmp)
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
return err;
else if (msg.rx.ret < 0)
return -EINVAL;
return response.get_max_id.max_id;
}
......@@ -132,7 +143,7 @@ static char *tegra_bpmp_powergate_get_name(struct tegra_bpmp *bpmp,
msg.rx.size = sizeof(response);
err = tegra_bpmp_transfer(bpmp, &msg);
if (err < 0)
if (err < 0 || msg.rx.ret < 0)
return NULL;
return kstrdup(response.get_name.name, GFP_KERNEL);
......
......@@ -55,7 +55,7 @@ obj-$(CONFIG_INTEL_BXT_PMIC_THERMAL) += intel_bxt_pmic_thermal.o
obj-$(CONFIG_INTEL_PCH_THERMAL) += intel_pch_thermal.o
obj-$(CONFIG_ST_THERMAL) += st/
obj-$(CONFIG_QCOM_TSENS) += qcom/
obj-$(CONFIG_TEGRA_SOCTHERM) += tegra/
obj-y += tegra/
obj-$(CONFIG_HISI_THERMAL) += hisi_thermal.o
obj-$(CONFIG_MTK_THERMAL) += mtk_thermal.o
obj-$(CONFIG_GENERIC_ADC_THERMAL) += thermal-generic-adc.o
......
......@@ -10,4 +10,11 @@ config TEGRA_SOCTHERM
zones to manage temperatures. This option is also required for the
emergency thermal reset (thermtrip) feature to function.
config TEGRA_BPMP_THERMAL
tristate "Tegra BPMP thermal sensing"
depends on TEGRA_BPMP || COMPILE_TEST
help
Enable this option to support sensing the system temperature of NVIDIA
Tegra systems-on-chip with the BPMP coprocessor (Tegra186).
endmenu
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_TEGRA_SOCTHERM) += tegra-soctherm.o
obj-$(CONFIG_TEGRA_BPMP_THERMAL) += tegra-bpmp-thermal.o
tegra-soctherm-y := soctherm.o soctherm-fuse.o
tegra-soctherm-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124-soctherm.o
......
/*
* Copyright (c) 2015-2017, NVIDIA CORPORATION. All rights reserved.
*
* Author:
* Mikko Perttunen <mperttunen@nvidia.com>
* Aapo Vienamo <avienamo@nvidia.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/thermal.h>
#include <linux/workqueue.h>
#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>
struct tegra_bpmp_thermal_zone {
struct tegra_bpmp_thermal *tegra;
struct thermal_zone_device *tzd;
struct work_struct tz_device_update_work;
unsigned int idx;
};
struct tegra_bpmp_thermal {
struct device *dev;
struct tegra_bpmp *bpmp;
unsigned int num_zones;
struct tegra_bpmp_thermal_zone **zones;
};
static int tegra_bpmp_thermal_get_temp(void *data, int *out_temp)
{
struct tegra_bpmp_thermal_zone *zone = data;
struct mrq_thermal_host_to_bpmp_request req;
union mrq_thermal_bpmp_to_host_response reply;
struct tegra_bpmp_message msg;
int err;
memset(&req, 0, sizeof(req));
req.type = CMD_THERMAL_GET_TEMP;
req.get_temp.zone = zone->idx;
memset(&msg, 0, sizeof(msg));
msg.mrq = MRQ_THERMAL;
msg.tx.data = &req;
msg.tx.size = sizeof(req);
msg.rx.data = &reply;
msg.rx.size = sizeof(reply);
err = tegra_bpmp_transfer(zone->tegra->bpmp, &msg);
if (err)
return err;
*out_temp = reply.get_temp.temp;
return 0;
}
static int tegra_bpmp_thermal_set_trips(void *data, int low, int high)
{
struct tegra_bpmp_thermal_zone *zone = data;
struct mrq_thermal_host_to_bpmp_request req;
struct tegra_bpmp_message msg;
memset(&req, 0, sizeof(req));
req.type = CMD_THERMAL_SET_TRIP;
req.set_trip.zone = zone->idx;
req.set_trip.enabled = true;
req.set_trip.low = low;
req.set_trip.high = high;
memset(&msg, 0, sizeof(msg));
msg.mrq = MRQ_THERMAL;
msg.tx.data = &req;
msg.tx.size = sizeof(req);
return tegra_bpmp_transfer(zone->tegra->bpmp, &msg);
}
static void tz_device_update_work_fn(struct work_struct *work)
{
struct tegra_bpmp_thermal_zone *zone;
zone = container_of(work, struct tegra_bpmp_thermal_zone,
tz_device_update_work);
thermal_zone_device_update(zone->tzd, THERMAL_TRIP_VIOLATED);
}
static void bpmp_mrq_thermal(unsigned int mrq, struct tegra_bpmp_channel *ch,
void *data)
{
struct mrq_thermal_bpmp_to_host_request *req;
struct tegra_bpmp_thermal *tegra = data;
int i;
req = (struct mrq_thermal_bpmp_to_host_request *)ch->ib->data;
if (req->type != CMD_THERMAL_HOST_TRIP_REACHED) {
dev_err(tegra->dev, "%s: invalid request type: %d\n",
__func__, req->type);
tegra_bpmp_mrq_return(ch, -EINVAL, NULL, 0);
return;
}
for (i = 0; i < tegra->num_zones; ++i) {
if (tegra->zones[i]->idx != req->host_trip_reached.zone)
continue;
schedule_work(&tegra->zones[i]->tz_device_update_work);
tegra_bpmp_mrq_return(ch, 0, NULL, 0);
return;
}
dev_err(tegra->dev, "%s: invalid thermal zone: %d\n", __func__,
req->host_trip_reached.zone);
tegra_bpmp_mrq_return(ch, -EINVAL, NULL, 0);
}
static int tegra_bpmp_thermal_get_num_zones(struct tegra_bpmp *bpmp,
int *num_zones)
{
struct mrq_thermal_host_to_bpmp_request req;
union mrq_thermal_bpmp_to_host_response reply;
struct tegra_bpmp_message msg;
int err;
memset(&req, 0, sizeof(req));
req.type = CMD_THERMAL_GET_NUM_ZONES;
memset(&msg, 0, sizeof(msg));
msg.mrq = MRQ_THERMAL;
msg.tx.data = &req;
msg.tx.size = sizeof(req);
msg.rx.data = &reply;
msg.rx.size = sizeof(reply);
err = tegra_bpmp_transfer(bpmp, &msg);
if (err)
return err;
*num_zones = reply.get_num_zones.num;
return 0;
}
static const struct thermal_zone_of_device_ops tegra_bpmp_of_thermal_ops = {
.get_temp = tegra_bpmp_thermal_get_temp,
.set_trips = tegra_bpmp_thermal_set_trips,
};
static int tegra_bpmp_thermal_probe(struct platform_device *pdev)
{
struct tegra_bpmp *bpmp = dev_get_drvdata(pdev->dev.parent);
struct tegra_bpmp_thermal *tegra;
struct thermal_zone_device *tzd;
unsigned int i, max_num_zones;
int err;
tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
if (!tegra)
return -ENOMEM;
tegra->dev = &pdev->dev;
tegra->bpmp = bpmp;
err = tegra_bpmp_thermal_get_num_zones(bpmp, &max_num_zones);
if (err) {
dev_err(&pdev->dev, "failed to get the number of zones: %d\n",
err);
return err;
}
tegra->zones = devm_kcalloc(&pdev->dev, max_num_zones,
sizeof(*tegra->zones), GFP_KERNEL);
if (!tegra->zones)
return -ENOMEM;
for (i = 0; i < max_num_zones; ++i) {
struct tegra_bpmp_thermal_zone *zone;
int temp;
zone = devm_kzalloc(&pdev->dev, sizeof(*zone), GFP_KERNEL);
if (!zone)
return -ENOMEM;
zone->idx = i;
zone->tegra = tegra;
err = tegra_bpmp_thermal_get_temp(zone, &temp);
if (err < 0) {
devm_kfree(&pdev->dev, zone);
continue;
}
tzd = devm_thermal_zone_of_sensor_register(
&pdev->dev, i, zone, &tegra_bpmp_of_thermal_ops);
if (IS_ERR(tzd)) {
if (PTR_ERR(tzd) == -EPROBE_DEFER)
return -EPROBE_DEFER;
devm_kfree(&pdev->dev, zone);
continue;
}
zone->tzd = tzd;
INIT_WORK(&zone->tz_device_update_work,
tz_device_update_work_fn);
tegra->zones[tegra->num_zones++] = zone;
}
err = tegra_bpmp_request_mrq(bpmp, MRQ_THERMAL, bpmp_mrq_thermal,
tegra);
if (err) {
dev_err(&pdev->dev, "failed to register mrq handler: %d\n",
err);
return err;
}
platform_set_drvdata(pdev, tegra);
return 0;
}
static int tegra_bpmp_thermal_remove(struct platform_device *pdev)
{
struct tegra_bpmp_thermal *tegra = platform_get_drvdata(pdev);
tegra_bpmp_free_mrq(tegra->bpmp, MRQ_THERMAL, tegra);
return 0;
}
static const struct of_device_id tegra_bpmp_thermal_of_match[] = {
{ .compatible = "nvidia,tegra186-bpmp-thermal" },
{ },
};
MODULE_DEVICE_TABLE(of, tegra_bpmp_thermal_of_match);
static struct platform_driver tegra_bpmp_thermal_driver = {
.probe = tegra_bpmp_thermal_probe,
.remove = tegra_bpmp_thermal_remove,
.driver = {
.name = "tegra-bpmp-thermal",
.of_match_table = tegra_bpmp_thermal_of_match,
},
};
module_platform_driver(tegra_bpmp_thermal_driver);
MODULE_AUTHOR("Mikko Perttunen <mperttunen@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA Tegra BPMP thermal sensor driver");
MODULE_LICENSE("GPL v2");
/*
* Copyright (c) 2017 MediaTek Inc.
* Author: Sean Wang <sean.wang@mediatek.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _DT_BINDINGS_RESET_CONTROLLER_MT7622
#define _DT_BINDINGS_RESET_CONTROLLER_MT7622
/* INFRACFG resets */
#define MT7622_INFRA_EMI_REG_RST 0
#define MT7622_INFRA_DRAMC0_A0_RST 1
#define MT7622_INFRA_APCIRQ_EINT_RST 3
#define MT7622_INFRA_APXGPT_RST 4
#define MT7622_INFRA_SCPSYS_RST 5
#define MT7622_INFRA_PMIC_WRAP_RST 7
#define MT7622_INFRA_IRRX_RST 9
#define MT7622_INFRA_EMI_RST 16
#define MT7622_INFRA_WED0_RST 17
#define MT7622_INFRA_DRAMC_RST 18
#define MT7622_INFRA_CCI_INTF_RST 19
#define MT7622_INFRA_TRNG_RST 21
#define MT7622_INFRA_SYSIRQ_RST 22
#define MT7622_INFRA_WED1_RST 25
/* PERICFG Subsystem resets */
#define MT7622_PERI_UART0_SW_RST 0
#define MT7622_PERI_UART1_SW_RST 1
#define MT7622_PERI_UART2_SW_RST 2
#define MT7622_PERI_UART3_SW_RST 3
#define MT7622_PERI_UART4_SW_RST 4
#define MT7622_PERI_BTIF_SW_RST 6
#define MT7622_PERI_PWM_SW_RST 8
#define MT7622_PERI_AUXADC_SW_RST 10
#define MT7622_PERI_DMA_SW_RST 11
#define MT7622_PERI_IRTX_SW_RST 13
#define MT7622_PERI_NFI_SW_RST 14
#define MT7622_PERI_THERM_SW_RST 16
#define MT7622_PERI_MSDC0_SW_RST 19
#define MT7622_PERI_MSDC1_SW_RST 20
#define MT7622_PERI_I2C0_SW_RST 22
#define MT7622_PERI_I2C1_SW_RST 23
#define MT7622_PERI_I2C2_SW_RST 24
#define MT7622_PERI_SPI0_SW_RST 33
#define MT7622_PERI_SPI1_SW_RST 34
#define MT7622_PERI_FLASHIF_SW_RST 36
/* TOPRGU resets */
#define MT7622_TOPRGU_INFRA_RST 0
#define MT7622_TOPRGU_ETHDMA_RST 1
#define MT7622_TOPRGU_DDRPHY_RST 6
#define MT7622_TOPRGU_INFRA_AO_RST 8
#define MT7622_TOPRGU_CONN_RST 9
#define MT7622_TOPRGU_APMIXED_RST 10
#define MT7622_TOPRGU_CONN_MCU_RST 12
/* PCIe/SATA Subsystem resets */
#define MT7622_SATA_PHY_REG_RST 12
#define MT7622_SATA_PHY_SW_RST 13
#define MT7622_SATA_AXI_BUS_RST 15
#define MT7622_PCIE1_CORE_RST 19
#define MT7622_PCIE1_MMIO_RST 20
#define MT7622_PCIE1_HRST 21
#define MT7622_PCIE1_USER_RST 22
#define MT7622_PCIE1_PIPE_RST 23
#define MT7622_PCIE0_CORE_RST 27
#define MT7622_PCIE0_MMIO_RST 28
#define MT7622_PCIE0_HRST 29
#define MT7622_PCIE0_USER_RST 30
#define MT7622_PCIE0_PIPE_RST 31
/* SSUSB Subsystem resets */
#define MT7622_SSUSB_PHY_PWR_RST 3
#define MT7622_SSUSB_MAC_PWR_RST 4
/* ETHSYS Subsystem resets */
#define MT7622_ETHSYS_SYS_RST 0
#define MT7622_ETHSYS_MCM_RST 2
#define MT7622_ETHSYS_HSDMA_RST 5
#define MT7622_ETHSYS_FE_RST 6
#define MT7622_ETHSYS_GMAC_RST 23
#define MT7622_ETHSYS_EPHY_RST 24
#define MT7622_ETHSYS_CRYPTO_RST 29
#define MT7622_ETHSYS_PPE_RST 31
#endif /* _DT_BINDINGS_RESET_CONTROLLER_MT7622 */
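These indices are consumed through the reset controller framework. A hypothetical consumer sketch, assuming a device node wired up with something like resets = <&pciesys MT7622_PCIE0_CORE_RST>; the delay is illustrative:
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static int mt7622_pcie0_reset_cycle(struct device *dev)
{
	struct reset_control *rst;

	/* Resolves the first "resets" phandle from the device node. */
	rst = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	reset_control_assert(rst);
	usleep_range(10, 20);
	return reset_control_deassert(rst);
}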
......@@ -45,6 +45,7 @@ int early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
void fdt_init_reserved_mem(void);
void fdt_reserved_mem_save_node(unsigned long node, const char *uname,
phys_addr_t base, phys_addr_t size);
struct reserved_mem *of_reserved_mem_lookup(struct device_node *np);
#else
static inline int of_reserved_mem_device_init_by_idx(struct device *dev,
struct device_node *np, int idx)
......@@ -56,6 +57,10 @@ static inline void of_reserved_mem_device_release(struct device *pdev) { }
static inline void fdt_init_reserved_mem(void) { }
static inline void fdt_reserved_mem_save_node(unsigned long node,
const char *uname, phys_addr_t base, phys_addr_t size) { }
static inline struct reserved_mem *of_reserved_mem_lookup(struct device_node *np)
{
return NULL;
}
#endif
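A sketch of how a driver might use the new accessor to resolve its carveout, assuming the generic "memory-region" phandle binding:
#include <linux/device.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>

static int lookup_carveout(struct device *dev, phys_addr_t *base,
			   phys_addr_t *size)
{
	struct device_node *np;
	struct reserved_mem *rmem;

	np = of_parse_phandle(dev->of_node, "memory-region", 0);
	if (!np)
		return -ENOENT;

	rmem = of_reserved_mem_lookup(np);
	of_node_put(np);
	if (!rmem)
		return -EINVAL;

	*base = rmem->base;
	*size = rmem->size;
	return 0;
}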
/**
......
......@@ -36,18 +36,6 @@ static inline struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs
}
#endif /* CONFIG_OMAP_GPMC */
/*--------------------------------*/
/* deprecated APIs */
#if IS_ENABLED(CONFIG_OMAP_GPMC)
void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs);
#else
static inline void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
{
}
#endif /* CONFIG_OMAP_GPMC */
/*--------------------------------*/
extern int gpmc_calc_timings(struct gpmc_timings *gpmc_t,
struct gpmc_settings *gpmc_s,
struct gpmc_device_timings *dev_t);
......
......@@ -63,8 +63,6 @@ struct gpmc_nand_regs {
void __iomem *gpmc_bch_result4[GPMC_BCH_NUM_REMAINDER];
void __iomem *gpmc_bch_result5[GPMC_BCH_NUM_REMAINDER];
void __iomem *gpmc_bch_result6[GPMC_BCH_NUM_REMAINDER];
/* Deprecated. Do not use */
void __iomem *gpmc_status;
};
struct omap_nand_platform_data {
......
......@@ -43,6 +43,8 @@ extern int qcom_scm_set_remote_state(u32 state, u32 id);
extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare);
extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size);
extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare);
extern int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val);
extern int qcom_scm_io_writel(phys_addr_t addr, unsigned int val);
#else
static inline
int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
......@@ -73,5 +75,7 @@ qcom_scm_set_remote_state(u32 state,u32 id) { return -ENODEV; }
static inline int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) { return -ENODEV; }
static inline int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size) { return -ENODEV; }
static inline int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare) { return -ENODEV; }
static inline int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) { return -ENODEV; }
static inline int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) { return -ENODEV; }
#endif
#endif
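A minimal read-modify-write sketch over the new SCM IO accessors, useful when a register is only accessible through the secure monitor; the address is a caller-supplied placeholder:
#include <linux/qcom_scm.h>

static int scm_io_rmw_example(phys_addr_t addr, u32 clr, u32 set)
{
	unsigned int val;
	int ret;

	ret = qcom_scm_io_readl(addr, &val);
	if (ret)
		return ret;

	val = (val & ~clr) | set;
	return qcom_scm_io_writel(addr, val);
}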
/*
* Copyright (c) 2016 - Savoir-faire Linux
* Author: Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
#ifndef _TS_NBUS_H
#define _TS_NBUS_H
struct ts_nbus;
extern int ts_nbus_read(struct ts_nbus *ts_nbus, u8 adr, u16 *val);
extern int ts_nbus_write(struct ts_nbus *ts_nbus, u8 adr, u16 val);
#endif /* _TS_NBUS_H */
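A hypothetical consumer sketch, assuming the header lands as <linux/ts-nbus.h> and a bus handle has already been obtained; the register address and bit are illustrative:
#include <linux/bitops.h>
#include <linux/ts-nbus.h>

static int nbus_toggle_example(struct ts_nbus *nbus)
{
	u16 val;
	int ret;

	/* Read a 16-bit register, then write it back with bit 0 set. */
	ret = ts_nbus_read(nbus, 0x12, &val);
	if (ret)
		return ret;

	return ts_nbus_write(nbus, 0x12, val | BIT(0));
}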
......@@ -94,10 +94,11 @@ struct tegra_bpmp {
struct reset_controller_dev rstc;
struct genpd_onecell_data genpd;
};
struct tegra_bpmp *tegra_bpmp_get(struct device *dev);
void tegra_bpmp_put(struct tegra_bpmp *bpmp);
#ifdef CONFIG_DEBUG_FS
struct dentry *debugfs_mirror;
#endif
};
struct tegra_bpmp_message {
unsigned int mrq;
......@@ -110,18 +111,60 @@ struct tegra_bpmp_message {
struct {
void *data;
size_t size;
int ret;
} rx;
};
#if IS_ENABLED(CONFIG_TEGRA_BPMP)
struct tegra_bpmp *tegra_bpmp_get(struct device *dev);
void tegra_bpmp_put(struct tegra_bpmp *bpmp);
int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg);
int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg);
void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel, int code,
const void *data, size_t size);
int tegra_bpmp_request_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
tegra_bpmp_mrq_handler_t handler, void *data);
void tegra_bpmp_free_mrq(struct tegra_bpmp *bpmp, unsigned int mrq,
void *data);
#else
static inline struct tegra_bpmp *tegra_bpmp_get(struct device *dev)
{
return ERR_PTR(-ENOTSUPP);
}
static inline void tegra_bpmp_put(struct tegra_bpmp *bpmp)
{
}
static inline int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg)
{
return -ENOTSUPP;
}
static inline int tegra_bpmp_transfer(struct tegra_bpmp *bpmp,
struct tegra_bpmp_message *msg)
{
return -ENOTSUPP;
}
static inline void tegra_bpmp_mrq_return(struct tegra_bpmp_channel *channel,
int code, const void *data,
size_t size)
{
}
static inline int tegra_bpmp_request_mrq(struct tegra_bpmp *bpmp,
unsigned int mrq,
tegra_bpmp_mrq_handler_t handler,
void *data)
{
return -ENOTSUPP;
}
static inline void tegra_bpmp_free_mrq(struct tegra_bpmp *bpmp,
unsigned int mrq, void *data)
{
}
#endif
#if IS_ENABLED(CONFIG_CLK_TEGRA_BPMP)
int tegra_bpmp_init_clocks(struct tegra_bpmp *bpmp);
......@@ -150,4 +193,14 @@ static inline int tegra_bpmp_init_powergates(struct tegra_bpmp *bpmp)
}
#endif
#if IS_ENABLED(CONFIG_DEBUG_FS)
int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp);
#else
static inline int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp)
{
return 0;
}
#endif
#endif /* __SOC_TEGRA_BPMP_H */
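To summarize the calling convention this series tightens up: tegra_bpmp_transfer() reports transport failures in its return value, while firmware-level failures land in the new msg.rx.ret field and must be checked separately. A minimal sketch using the MRQ_PING request from bpmp-abi.h:
#include <linux/string.h>
#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>

static int bpmp_ping_example(struct tegra_bpmp *bpmp)
{
	struct mrq_ping_request request = { .challenge = 1 };
	struct mrq_ping_response response;
	struct tegra_bpmp_message msg;
	int err;

	memset(&msg, 0, sizeof(msg));
	msg.mrq = MRQ_PING;
	msg.tx.data = &request;
	msg.tx.size = sizeof(request);
	msg.rx.data = &response;
	msg.rx.size = sizeof(response);

	err = tegra_bpmp_transfer(bpmp, &msg);
	if (err < 0)		/* transport error */
		return err;
	if (msg.rx.ret < 0)	/* firmware rejected the request */
		return -EINVAL;

	return 0;
}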