Commit 10217810 authored by Linus Torvalds

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Olof Johansson:
 "Some releases this branch is nearly empty, others we have more stuff.
  It tends to gather drivers that need SoC modification or dependencies
  such that they have to (also) go in through our tree.

  For this release, we have merged in part of the reset controller tree
  (with handshake that the parts we have merged in will remain stable),
  as well as dependencies on a few clock branches.

  In general, new items here are:

   - Qualcomm driver for SMM/SMD, which is how they communicate with the
     coprocessors on (some) of their platforms

   - memory controller work for ARM's PL172 memory controller

   - reset drivers for various platforms

   - PMU power domain support for Marvell platforms

   - Tegra support for T132/T210 SoCs: PMC, fuse, memory controller
     per-SoC support"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (49 commits)
  ARM: tegra: cpuidle: implement cpuidle_state.enter_freeze()
  ARM: tegra: Disable cpuidle if PSCI is available
  soc/tegra: pmc: Use existing pclk reference
  soc/tegra: pmc: Remove unnecessary return statement
  soc: tegra: Remove redundant $(CONFIG_ARCH_TEGRA) in Makefile
  memory: tegra: Add Tegra210 support
  memory: tegra: Add support for a variable-size client ID bitfield
  clk: shmobile: rz: Add CPG/MSTP Clock Domain support
  clk: shmobile: rcar-gen2: Add CPG/MSTP Clock Domain support
  clk: shmobile: r8a7779: Add CPG/MSTP Clock Domain support
  clk: shmobile: r8a7778: Add CPG/MSTP Clock Domain support
  clk: shmobile: Add CPG/MSTP Clock Domain support
  ARM: dove: create a proper PMU driver for power domains, PMU IRQs and resets
  reset: reset-zynq: Adding support for Xilinx Zynq reset controller.
  docs: dts: Added documentation for Xilinx Zynq Reset Controller bindings.
  MIPS: ath79: Add the reset controller to the AR9132 dtsi
  reset: Add a driver for the reset controller on the AR71XX/AR9XXX
  devicetree: Add bindings for the ATH79 reset controller
  reset: socfpga: Update reset-socfpga to read the altr,modrst-offset property
  doc: dt: add documentation for lpc1850-rgu reset driver
  ...
parents 50686e8a 21815b9a
* Renesas R8A7778 Clock Pulse Generator (CPG)
The CPG generates core clocks for the R8A7778. It includes two PLLs and
several fixed ratio dividers.
The CPG also provides a Clock Domain for SoC devices, in combination with the
CPG Module Stop (MSTP) Clocks.
Required Properties:
......@@ -10,10 +12,18 @@ Required Properties:
- #clock-cells: Must be 1
- clock-output-names: The names of the clocks. Supported clocks are
"plla", "pllb", "b", "out", "p", "s", and "s1".
- #power-domain-cells: Must be 0
SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
through an MSTP clock should refer to the CPG device node in their
"power-domains" property, as documented by the generic PM domain bindings in
Documentation/devicetree/bindings/power/power_domain.txt.
Examples
--------
- CPG device node:
cpg_clocks: cpg_clocks@ffc80000 {
compatible = "renesas,r8a7778-cpg-clocks";
......@@ -22,4 +32,17 @@ Example
clocks = <&extal_clk>;
clock-output-names = "plla", "pllb", "b",
"out", "p", "s", "s1";
#power-domain-cells = <0>;
};
- CPG/MSTP Clock Domain member device node:
sdhi0: sd@ffe4c000 {
compatible = "renesas,sdhi-r8a7778";
reg = <0xffe4c000 0x100>;
interrupts = <0 87 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7778_CLK_SDHI0>;
power-domains = <&cpg_clocks>;
status = "disabled";
};
* Renesas R8A7779 Clock Pulse Generator (CPG)
The CPG generates core clocks for the R8A7779. It includes one PLL and
several fixed ratio dividers.
The CPG also provides a Clock Domain for SoC devices, in combination with the
CPG Module Stop (MSTP) Clocks.
Required Properties:
......@@ -12,16 +14,36 @@ Required Properties:
- #clock-cells: Must be 1
- clock-output-names: The names of the clocks. Supported clocks are "plla",
"z", "zs", "s", "s1", "p", "b", "out".
- #power-domain-cells: Must be 0
SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
through an MSTP clock should refer to the CPG device node in their
"power-domains" property, as documented by the generic PM domain bindings in
Documentation/devicetree/bindings/power/power_domain.txt.
Examples
--------
- CPG device node:
cpg_clocks: cpg_clocks@ffc80000 {
compatible = "renesas,r8a7779-cpg-clocks";
reg = <0xffc80000 0x30>;
clocks = <&extal_clk>;
#clock-cells = <1>;
clock-output-names = "plla", "z", "zs", "s", "s1", "p",
"b", "out";
#power-domain-cells = <0>;
};
- CPG/MSTP Clock Domain member device node:
sata: sata@fc600000 {
compatible = "renesas,sata-r8a7779", "renesas,rcar-sata";
reg = <0xfc600000 0x2000>;
interrupts = <0 100 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7779_CLK_SATA>;
power-domains = <&cpg_clocks>;
};
......@@ -2,6 +2,8 @@
The CPG generates core clocks for the R-Car Gen2 SoCs. It includes three PLLs
and several fixed ratio dividers.
The CPG also provides a Clock Domain for SoC devices, in combination with the
CPG Module Stop (MSTP) Clocks.
Required Properties:
......@@ -20,10 +22,18 @@ Required Properties:
- clock-output-names: The names of the clocks. Supported clocks are "main",
"pll0", "pll1", "pll3", "lb", "qspi", "sdh", "sd0", "sd1", "z", "rcan", and
"adsp"
- #power-domain-cells: Must be 0
SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
through an MSTP clock should refer to the CPG device node in their
"power-domains" property, as documented by the generic PM domain bindings in
Documentation/devicetree/bindings/power/power_domain.txt.
Examples
--------
- CPG device node:
cpg_clocks: cpg_clocks@e6150000 {
compatible = "renesas,r8a7790-cpg-clocks",
......@@ -34,4 +44,16 @@ Example
clock-output-names = "main", "pll0, "pll1", "pll3",
"lb", "qspi", "sdh", "sd0", "sd1", "z",
"rcan", "adsp";
#power-domain-cells = <0>;
};
- CPG/MSTP Clock Domain member device node:
thermal@e61f0000 {
compatible = "renesas,thermal-r8a7790", "renesas,rcar-thermal";
reg = <0 0xe61f0000 0 0x14>, <0 0xe61f0100 0 0x38>;
interrupts = <0 69 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp5_clks R8A7790_CLK_THERMAL>;
power-domains = <&cpg_clocks>;
};
......@@ -2,6 +2,8 @@
The CPG generates core clocks for the RZ SoCs. It includes the PLL, variable
CPU and GPU clocks, and several fixed ratio dividers.
The CPG also provides a Clock Domain for SoC devices, in combination with the
CPG Module Stop (MSTP) Clocks.
Required Properties:
......@@ -14,10 +16,18 @@ Required Properties:
- #clock-cells: Must be 1
- clock-output-names: The names of the clocks. Supported clocks are "pll",
"i", and "g"
- #power-domain-cells: Must be 0
SoC devices that are part of the CPG/MSTP Clock Domain and can be power-managed
through an MSTP clock should refer to the CPG device node in their
"power-domains" property, as documented by the generic PM domain bindings in
Documentation/devicetree/bindings/power/power_domain.txt.
Examples
--------
- CPG device node:
cpg_clocks: cpg_clocks@fcfe0000 {
#clock-cells = <1>;
......@@ -26,4 +36,19 @@ Example
reg = <0xfcfe0000 0x18>;
clocks = <&extal_clk>, <&usb_x1_clk>;
clock-output-names = "pll", "i", "g";
#power-domain-cells = <0>;
};
- CPG/MSTP Clock Domain member device node:
mtu2: timer@fcff0000 {
compatible = "renesas,mtu2-r7s72100", "renesas,mtu2";
reg = <0xfcff0000 0x400>;
interrupts = <0 107 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "tgi0a";
clocks = <&mstp3_clks R7S72100_CLK_MTU2>;
clock-names = "fck";
power-domains = <&cpg_clocks>;
status = "disabled";
};
Tegra124 CPU frequency scaling driver bindings
----------------------------------------------
Both required and optional properties listed below must be defined
under node /cpus/cpu@0.
Required properties:
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- cpu_g: Clock mux for the fast CPU cluster.
- cpu_lp: Clock mux for the low-power CPU cluster.
- pll_x: Fast PLL clocksource.
- pll_p: Auxiliary PLL used during fast PLL rate changes.
- dfll: Fast DFLL clocksource that also automatically scales CPU voltage.
- vdd-cpu-supply: Regulator for CPU voltage
Optional properties:
- clock-latency: Specify the maximum possible transition latency for the
clock, in nanoseconds.
Example:
--------
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a15";
reg = <0>;
clocks = <&tegra_car TEGRA124_CLK_CCLK_G>,
<&tegra_car TEGRA124_CLK_CCLK_LP>,
<&tegra_car TEGRA124_CLK_PLL_X>,
<&tegra_car TEGRA124_CLK_PLL_P>,
<&dfll>;
clock-names = "cpu_g", "cpu_lp", "pll_x", "pll_p", "dfll";
clock-latency = <300000>;
vdd-cpu-supply = <&vdd_cpu>;
};
<...>
};
* Device tree bindings for ARM PL172 MultiPort Memory Controller
Required properties:
- compatible: "arm,pl172", "arm,primecell"
- reg: Must contain the offset/length value for the controller.
- #address-cells: Must be 2. The partition number has to be encoded in the
first address cell and it may accept values 0..N-1
(where N is the total number of partitions). The second cell is the
offset into the partition.
- #size-cells: Must be set to 1.
- ranges: Must contain one or more chip select memory regions.
- clocks: Must contain references to controller clocks.
- clock-names: Must contain "mpmcclk" and "apb_pclk".
- clock-ranges: Empty property indicating that child nodes can inherit
named clocks. Required only if clock tree data present
in device tree.
See clock-bindings.txt
Child chip-select (cs) nodes contain the nodes of the memory devices
connected to the chip select, such as NOR (e.g. cfi-flash) and NAND.
Required child cs node properties:
- #address-cells: Must be 2.
- #size-cells: Must be 1.
- ranges: Empty property indicating that child nodes can inherit
memory layout.
- clock-ranges: Empty property indicating that child nodes can inherit
named clocks. Required only if clock tree data present
in device tree.
- mpmc,cs: Chip select number. Indicates to the pl172 driver
which chip select is used for accessing the memory.
- mpmc,memory-width: Width of the chip select memory. Must be equal to
either 8, 16 or 32.
Optional child cs node config properties:
- mpmc,async-page-mode: Enable asynchronous page mode.
- mpmc,cs-active-high: Set chip select polarity to active high.
- mpmc,byte-lane-low: Set byte lane state to low.
- mpmc,extended-wait: Enable extended wait.
- mpmc,buffer-enable: Enable write buffer.
- mpmc,write-protect: Enable write protect.
Optional child cs node timing properties:
- mpmc,write-enable-delay: Delay from chip select assertion to write
enable (WE signal) in nanoseconds.
- mpmc,output-enable-delay: Delay from chip select assertion to output
enable (OE signal) in nanoseconds.
- mpmc,write-access-delay: Delay from chip select assertion to write
access in nanoseconds.
- mpmc,read-access-delay: Delay from chip select assertion to read
access in nanoseconds.
- mpmc,page-mode-read-delay: Delay for asynchronous page mode sequential
accesses in nanoseconds.
- mpmc,turn-round-delay: Delay between access to memory banks in
nanoseconds.
If any of the above timing parameters is absent, the current value will
be taken from the corresponding hardware register.
An example for a PL172 with NOR flash on chip select 0 is shown below.
emc: memory-controller@40005000 {
compatible = "arm,pl172", "arm,primecell";
reg = <0x40005000 0x1000>;
clocks = <&ccu1 CLK_CPU_EMCDIV>, <&ccu1 CLK_CPU_EMC>;
clock-names = "mpmcclk", "apb_pclk";
#address-cells = <2>;
#size-cells = <1>;
ranges = <0 0 0x1c000000 0x1000000
1 0 0x1d000000 0x1000000
2 0 0x1e000000 0x1000000
3 0 0x1f000000 0x1000000>;
cs0 {
#address-cells = <2>;
#size-cells = <1>;
ranges;
mpmc,cs = <0>;
mpmc,memory-width = <16>;
mpmc,byte-lane-low;
mpmc,write-enable-delay = <0>;
mpmc,output-enable-delay = <0>;
mpmc,read-access-delay = <70>;
mpmc,page-mode-read-delay = <70>;
flash@0,0 {
compatible = "sst,sst39vf320", "cfi-flash";
reg = <0 0 0x400000>;
bank-width = <2>;
#address-cells = <1>;
#size-cells = <1>;
partition@0 {
label = "data";
reg = <0 0x400000>;
};
};
};
};
Binding for Qualcomm Atheros AR7xxx/AR9XXX reset controller
Please also refer to reset.txt in this directory for common reset
controller binding usage.
Required Properties:
- compatible: has to be "qca,<soctype>-reset", "qca,ar7100-reset"
as fallback
- reg: Base address and size of the controller's memory area
- #reset-cells : Specifies the number of cells needed to encode reset
line, should be 1
Example:
reset-controller@1806001c {
compatible = "qca,ar9132-reset", "qca,ar7100-reset";
reg = <0x1806001c 0x4>;
#reset-cells = <1>;
};
NXP LPC1850 Reset Generation Unit (RGU)
========================================
Please also refer to reset.txt in this directory for common reset
controller binding usage.
Required properties:
- compatible: Should be "nxp,lpc1850-rgu"
- reg: register base and length
- clocks: phandle and clock specifier to RGU clocks
- clock-names: should contain "delay" and "reg"
- #reset-cells: should be 1
See table below for valid peripheral reset numbers. Numbers not
in the table below are either reserved or not applicable for
normal operation.
Reset Peripheral
9 System control unit (SCU)
12 ARM Cortex-M0 subsystem core (LPC43xx only)
13 CPU core
16 LCD controller
17 USB0
18 USB1
19 DMA
20 SDIO
21 External memory controller (EMC)
22 Ethernet
25 Flash bank A
27 EEPROM
28 GPIO
29 Flash bank B
32 Timer0
33 Timer1
34 Timer2
35 Timer3
36 Repetitive Interrupt timer (RIT)
37 State Configurable Timer (SCT)
38 Motor control PWM (MCPWM)
39 QEI
40 ADC0
41 ADC1
42 DAC
44 USART0
45 UART1
46 USART2
47 USART3
48 I2C0
49 I2C1
50 SSP0
51 SSP1
52 I2S0 and I2S1
53 Serial Flash Interface (SPIFI)
54 C_CAN1
55 C_CAN0
56 ARM Cortex-M0 application core (LPC4370 only)
57 SGPIO (LPC43xx only)
58 SPI (LPC43xx only)
60 ADCHS (12-bit ADC) (LPC4370 only)
Refer to the NXP LPC18xx or LPC43xx user manual for more details about
the reset signals and the connected block/peripheral.
Reset provider example:
rgu: reset-controller@40053000 {
compatible = "nxp,lpc1850-rgu";
reg = <0x40053000 0x1000>;
clocks = <&cgu BASE_SAFE_CLK>, <&ccu1 CLK_CPU_BUS>;
clock-names = "delay", "reg";
#reset-cells = <1>;
};
Reset consumer example:
mac: ethernet@40010000 {
compatible = "nxp,lpc1850-dwmac", "snps,dwmac-3.611", "snps,dwmac";
reg = <0x40010000 0x2000>;
interrupts = <5>;
interrupt-names = "macirq";
clocks = <&ccu1 CLK_CPU_ETHERNET>;
clock-names = "stmmaceth";
resets = <&rgu 22>;
reset-names = "stmmaceth";
status = "disabled";
};
......@@ -39,4 +39,4 @@ Example:
};
Macro definitions for the supported reset channels can be found in:
include/dt-bindings/reset/stih407-resets.h
......@@ -43,5 +43,5 @@ example:
Macro definitions for the supported reset channels can be found in:
include/dt-bindings/reset/stih415-resets.h
include/dt-bindings/reset/stih416-resets.h
......@@ -42,5 +42,5 @@ example:
Macro definitions for the supported reset channels can be found in:
include/dt-bindings/reset/stih415-resets.h
include/dt-bindings/reset/stih416-resets.h
Xilinx Zynq Reset Manager
The Zynq AP-SoC has several different resets.
See Chapter 26 of the Zynq TRM (UG585) for more information about Zynq resets.
Required properties:
- compatible: "xlnx,zynq-reset"
- reg: SLCR offset and size taken via syscon <0x200 0x48>
- syscon: <&slcr>
This should be a phandle to the Zynq's SLCR registers.
- #reset-cells: Must be 1
The Zynq Reset Manager needs to be a child node of the SLCR.
Example:
rstc: rstc@200 {
compatible = "xlnx,zynq-reset";
reg = <0x200 0x48>;
#reset-cells = <1>;
syscon = <&slcr>;
};
Reset outputs:
0 : soft reset
32 : ddr reset
64 : topsw reset
96 : dmac reset
128: usb0 reset
129: usb1 reset
160: gem0 reset
161: gem1 reset
164: gem0 rx reset
165: gem1 rx reset
166: gem0 ref reset
167: gem1 ref reset
192: sdio0 reset
193: sdio1 reset
196: sdio0 ref reset
197: sdio1 ref reset
224: spi0 reset
225: spi1 reset
226: spi0 ref reset
227: spi1 ref reset
256: can0 reset
257: can1 reset
258: can0 ref reset
259: can1 ref reset
288: i2c0 reset
289: i2c1 reset
320: uart0 reset
321: uart1 reset
322: uart0 ref reset
323: uart1 ref reset
352: gpio reset
384: lqspi reset
385: qspi ref reset
416: smc reset
417: smc ref reset
448: ocm reset
512: fpga0 out reset
513: fpga1 out reset
514: fpga2 out reset
515: fpga3 out reset
544: a9 reset 0
545: a9 reset 1
552: peri reset
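
The ID spacing above follows the SLCR register layout: every 32 IDs map to
one 32-bit reset register, with the bit index in the low five bits (usb0 and
usb1 at 128/129 share one register, for example). As a consumer-side
illustration, below is a minimal sketch of a platform driver pulsing the SMC
reset line (416) through the generic reset API; it is not part of this
series, and the "vendor,foo" device/driver names are hypothetical.

#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

/* Hypothetical consumer; expects "resets = <&rstc 416>;" in its DT node. */
static int foo_probe(struct platform_device *pdev)
{
	struct reset_control *rst;
	int ret;

	/* NULL id: take the first (only) entry of the "resets" property */
	rst = devm_reset_control_get(&pdev->dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	ret = reset_control_assert(rst);	/* put the SMC block in reset */
	if (ret)
		return ret;

	return reset_control_deassert(rst);	/* and release it again */
}

static const struct of_device_id foo_of_match[] = {
	{ .compatible = "vendor,foo" },		/* hypothetical */
	{ },
};

static struct platform_driver foo_driver = {
	.driver = {
		.name = "foo",
		.of_match_table = foo_of_match,
	},
	.probe = foo_probe,
};
module_platform_driver(foo_driver);

MODULE_LICENSE("GPL v2");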
Qualcomm Resource Power Manager (RPM) over SMD
This driver is used to interface with the Resource Power Manager (RPM) found in
various Qualcomm platforms. The RPM allows each component in the system to vote
for the state of system resources, such as clocks, regulators and bus
frequencies.
- compatible:
Usage: required
Value type: <string>
Definition: must be one of:
"qcom,rpm-msm8974"
- qcom,smd-channels:
Usage: required
Value type: <stringlist>
Definition: Shared Memory channel used for communication with the RPM
= SUBDEVICES
The RPM exposes resources to its subnodes. The below bindings specify the set
of valid subnodes that can operate on these resources.
== Regulators
Regulator nodes are identified by their compatible:
- compatible:
Usage: required
Value type: <string>
Definition: must be one of:
"qcom,rpm-pm8841-regulators"
"qcom,rpm-pm8941-regulators"
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:
- vdd_s4-supply:
- vdd_s5-supply:
- vdd_s6-supply:
- vdd_s7-supply:
- vdd_s8-supply:
Usage: optional (pm8841 only)
Value type: <phandle>
Definition: reference to regulator supplying the input pin, as
described in the data sheet
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:
- vdd_l1_l3-supply:
- vdd_l2_lvs1_2_3-supply:
- vdd_l4_l11-supply:
- vdd_l5_l7-supply:
- vdd_l6_l12_l14_l15-supply:
- vdd_l8_l16_l18_l19-supply:
- vdd_l9_l10_l17_l22-supply:
- vdd_l13_l20_l23_l24-supply:
- vdd_l21-supply:
- vin_5vs-supply:
Usage: optional (pm8941 only)
Value type: <phandle>
Definition: reference to regulator supplying the input pin, as
described in the data sheet
The regulator node houses sub-nodes for each regulator within the device. Each
sub-node is identified using the node's name, with valid values listed for each
of the PMICs below.
pm8841:
s1, s2, s3, s4, s5, s6, s7, s8
pm8941:
s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13,
l14, l15, l16, l17, l18, l19, l20, l21, l22, l23, l24, lvs1, lvs2,
lvs3, 5vs1, 5vs2
The content of each sub-node is defined by the standard binding for regulators -
see regulator.txt.
= EXAMPLE
smd {
compatible = "qcom,smd";
rpm {
interrupts = <0 168 1>;
qcom,ipc = <&apcs 8 0>;
qcom,smd-edge = <15>;
rpm_requests {
compatible = "qcom,rpm-msm8974";
qcom,smd-channels = "rpm_requests";
pm8941-regulators {
compatible = "qcom,rpm-pm8941-regulators";
vdd_l13_l20_l23_l24-supply = <&pm8941_boost>;
pm8941_s3: s3 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
pm8941_boost: s4 {
regulator-min-microvolt = <5000000>;
regulator-max-microvolt = <5000000>;
};
pm8941_l20: l20 {
regulator-min-microvolt = <2950000>;
regulator-max-microvolt = <2950000>;
};
};
};
};
};
Qualcomm Shared Memory Driver (SMD) binding
This binding describes the Qualcomm Shared Memory Driver, a FIFO-based
communication channel for sending data between the various subsystems in
Qualcomm platforms.
- compatible:
Usage: required
Value type: <stringlist>
Definition: must be "qcom,smd"
= EDGES
Each subnode of the SMD node represents a remote subsystem or a remote
processor of some sort - or in SMD language an "edge". The names of the
edges are not important.
The edge is described by the following properties:
- interrupts:
Usage: required
Value type: <prop-encoded-array>
Definition: should specify the IRQ used by the remote processor to
signal this processor about communication related updates
- qcom,ipc:
Usage: required
Value type: <prop-encoded-array>
Definition: three entries specifying the outgoing ipc bit used for
signaling the remote processor:
- phandle to a syscon node representing the apcs registers
- u32 representing offset to the register within the syscon
- u32 representing the ipc bit within the register
- qcom,smd-edge:
Usage: required
Value type: <u32>
Definition: the identifier of the remote processor in the smd channel
allocation table
= SMD DEVICES
In turn, subnodes of the "edges" represent devices tied to SMD channels on that
"edge". The names of the devices are not important. The properties of these
nodes are defined by the individual bindings for the SMD devices - but must
contain the following property:
- qcom,smd-channels:
Usage: required
Value type: <stringlist>
Definition: a list of channels tied to this device, used for matching
the device to channels
= EXAMPLE
The following example represents an SMD node, with one edge representing the
"rpm" subsystem. For the "rpm" subsystem we have a device tied to the
"rpm_requests" channel.
apcs: syscon@f9011000 {
compatible = "syscon";
reg = <0xf9011000 0x1000>;
};
smd {
compatible = "qcom,smd";
rpm {
interrupts = <0 168 1>;
qcom,ipc = <&apcs 8 0>;
qcom,smd-edge = <15>;
rpm_requests {
compatible = "qcom,rpm-msm8974";
qcom,smd-channels = "rpm_requests";
...
};
};
};
......@@ -8613,6 +8613,7 @@ M: Philipp Zabel <p.zabel@pengutronix.de>
S: Maintained
F: drivers/reset/
F: Documentation/devicetree/bindings/reset/
F: include/dt-bindings/reset/
F: include/linux/reset.h
F: include/linux/reset-controller.h
......
......@@ -9,7 +9,7 @@
#include "stih407-pinctrl.dtsi"
#include <dt-bindings/mfd/st-lpc.h>
#include <dt-bindings/phy/phy.h>
#include <dt-bindings/reset/stih407-resets.h>
#include <dt-bindings/interrupt-controller/irq-st.h>
/ {
#address-cells = <1>;
......
......@@ -10,7 +10,7 @@
#include "stih415-clock.dtsi"
#include "stih415-pinctrl.dtsi"
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/stih415-resets.h>
/ {
L2: cache-controller {
......
......@@ -12,7 +12,7 @@
#include <dt-bindings/phy/phy.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/stih416-resets.h>
#include <dt-bindings/interrupt-controller/irq-st.h>
/ {
L2: cache-controller {
......
......@@ -96,6 +96,7 @@ config MACH_DOVE
select MACH_MVEBU_ANY
select ORION_IRQCHIP
select ORION_TIMER
select PM_GENERIC_DOMAINS if PM
select PINCTRL_DOVE
help
Say 'Y' here if you want your kernel to support the
......
......@@ -12,6 +12,7 @@
#include <linux/mbus.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/soc/dove/pmu.h>
#include <asm/hardware/cache-tauros2.h>
#include <asm/mach/arch.h>
#include "common.h"
......@@ -24,6 +25,7 @@ static void __init dove_init(void)
tauros2_init(0);
#endif
BUG_ON(mvebu_mbus_dt_init(false));
dove_init_pmu();
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}
......
......@@ -4,6 +4,7 @@ config ARCH_SHMOBILE
config PM_RCAR
bool
select PM_GENERIC_DOMAINS if PM
config PM_RMOBILE
bool
......@@ -50,6 +51,7 @@ config ARCH_EMEV2
config ARCH_R7S72100
bool "RZ/A1H (R7S72100)"
select PM_GENERIC_DOMAINS if PM
select SYS_SUPPORTS_SH_MTU2
config ARCH_R8A73A4
......
......@@ -24,6 +24,7 @@
#include <asm/cpuidle.h>
#include <asm/smp_plat.h>
#include <asm/suspend.h>
#include <asm/psci.h>
#include "pm.h"
#include "sleep.h"
......@@ -44,16 +45,12 @@ static int tegra114_idle_power_down(struct cpuidle_device *dev,
tegra_set_cpu_in_lp2();
cpu_pm_enter();
tick_broadcast_enter();
call_firmware_op(prepare_idle);
/* Do suspend by ourselves if the firmware does not implement it */
if (call_firmware_op(do_idle, 0) == -ENOSYS)
cpu_suspend(0, tegra30_sleep_cpu_secondary_finish);
tick_broadcast_exit();
cpu_pm_exit();
tegra_clear_cpu_in_lp2();
......@@ -61,6 +58,13 @@ static int tegra114_idle_power_down(struct cpuidle_device *dev,
return index;
}
static void tegra114_idle_enter_freeze(struct cpuidle_device *dev,
struct cpuidle_driver *drv,
int index)
{
tegra114_idle_power_down(dev, drv, index);
}
#endif
static struct cpuidle_driver tegra_idle_driver = {
......@@ -72,8 +76,10 @@ static struct cpuidle_driver tegra_idle_driver = {
#ifdef CONFIG_PM_SLEEP
[1] = {
.enter = tegra114_idle_power_down,
.enter_freeze = tegra114_idle_enter_freeze,
.exit_latency = 500,
.target_residency = 1000,
.flags = CPUIDLE_FLAG_TIMER_STOP,
.power_usage = 0,
.name = "powered-down",
.desc = "CPU power gated",
......@@ -84,5 +90,8 @@ static struct cpuidle_driver tegra_idle_driver = {
int __init tegra114_cpuidle_init(void)
{
if (!psci_smp_available())
return cpuidle_register(&tegra_idle_driver, NULL);
return 0;
}
......@@ -82,9 +82,6 @@
#define TEGRA_EMC_BASE 0x7000F400
#define TEGRA_EMC_SIZE SZ_1K
#define TEGRA_FUSE_BASE 0x7000F800
#define TEGRA_FUSE_SIZE SZ_1K
#define TEGRA_EMC0_BASE 0x7001A000
#define TEGRA_EMC0_SIZE SZ_2K
......
......@@ -118,6 +118,7 @@ config ATH25
config ATH79
bool "Atheros AR71XX/AR724X/AR913X based boards"
select ARCH_HAS_RESET_CONTROLLER
select ARCH_REQUIRE_GPIOLIB
select BOOT_RAW
select CEVT_R4K
......
......@@ -115,6 +115,14 @@ miscintc: interrupt-controller@18060010 {
interrupt-controller;
#interrupt-cells = <1>;
};
rst: reset-controller@1806001c {
compatible = "qca,ar9132-reset",
"qca,ar7100-reset";
reg = <0x1806001c 0x4>;
#reset-cells = <1>;
};
};
spi@1f000000 {
......
......@@ -2,6 +2,7 @@
* R-Car MSTP clocks
*
* Copyright (C) 2013 Ideas On Board SPRL
* Copyright (C) 2015 Glider bvba
*
* Contact: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
*
......@@ -10,11 +11,16 @@
* the Free Software Foundation; version 2 of the License.
*/
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/clkdev.h>
#include <linux/clk/shmobile.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/pm_clock.h>
#include <linux/pm_domain.h>
#include <linux/spinlock.h>
/*
......@@ -236,3 +242,84 @@ static void __init cpg_mstp_clocks_init(struct device_node *np)
of_clk_add_provider(np, of_clk_src_onecell_get, &group->data);
}
CLK_OF_DECLARE(cpg_mstp_clks, "renesas,cpg-mstp-clocks", cpg_mstp_clocks_init);
#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
int cpg_mstp_attach_dev(struct generic_pm_domain *domain, struct device *dev)
{
struct device_node *np = dev->of_node;
struct of_phandle_args clkspec;
struct clk *clk;
int i = 0;
int error;
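/*
* Walk this device's "clocks" phandles and pick the first one that is
* provided by an MSTP clock controller; that clock is what gates the
* device inside the Clock Domain.
*/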
while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i,
&clkspec)) {
if (of_device_is_compatible(clkspec.np,
"renesas,cpg-mstp-clocks"))
goto found;
of_node_put(clkspec.np);
i++;
}
return 0;
found:
clk = of_clk_get_from_provider(&clkspec);
of_node_put(clkspec.np);
if (IS_ERR(clk))
return PTR_ERR(clk);
error = pm_clk_create(dev);
if (error) {
dev_err(dev, "pm_clk_create failed %d\n", error);
goto fail_put;
}
error = pm_clk_add_clk(dev, clk);
if (error) {
dev_err(dev, "pm_clk_add_clk %pC failed %d\n", clk, error);
goto fail_destroy;
}
return 0;
fail_destroy:
pm_clk_destroy(dev);
fail_put:
clk_put(clk);
return error;
}
void cpg_mstp_detach_dev(struct generic_pm_domain *domain, struct device *dev)
{
if (!list_empty(&dev->power.subsys_data->clock_list))
pm_clk_destroy(dev);
}
void __init cpg_mstp_add_clk_domain(struct device_node *np)
{
struct generic_pm_domain *pd;
u32 ncells;
if (of_property_read_u32(np, "#power-domain-cells", &ncells)) {
pr_warn("%s lacks #power-domain-cells\n", np->full_name);
return;
}
pd = kzalloc(sizeof(*pd), GFP_KERNEL);
if (!pd)
return;
pd->name = np->name;
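/*
* GENPD_FLAG_PM_CLK: the genpd core stops/starts devices in this domain
* through their per-device pm_clk lists, i.e. by gating the MSTP clocks
* collected in cpg_mstp_attach_dev() above.
*/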
pd->flags = GENPD_FLAG_PM_CLK;
pm_genpd_init(pd, &simple_qos_governor, false);
pd->attach_dev = cpg_mstp_attach_dev;
pd->detach_dev = cpg_mstp_detach_dev;
of_genpd_add_provider_simple(np, pd);
}
#endif /* !CONFIG_PM_GENERIC_DOMAINS_OF */
......@@ -124,6 +124,8 @@ static void __init r8a7778_cpg_clocks_init(struct device_node *np)
}
of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
cpg_mstp_add_clk_domain(np);
}
CLK_OF_DECLARE(r8a7778_cpg_clks, "renesas,r8a7778-cpg-clocks",
......
......@@ -168,6 +168,8 @@ static void __init r8a7779_cpg_clocks_init(struct device_node *np)
}
of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
cpg_mstp_add_clk_domain(np);
}
CLK_OF_DECLARE(r8a7779_cpg_clks, "renesas,r8a7779-cpg-clocks",
r8a7779_cpg_clocks_init);
......
......@@ -415,6 +415,8 @@ static void __init rcar_gen2_cpg_clocks_init(struct device_node *np)
}
of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
cpg_mstp_add_clk_domain(np);
}
CLK_OF_DECLARE(rcar_gen2_cpg_clks, "renesas,rcar-gen2-cpg-clocks",
rcar_gen2_cpg_clocks_init);
......
......@@ -10,6 +10,7 @@
*/
#include <linux/clk-provider.h>
#include <linux/clk/shmobile.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/of.h>
......@@ -99,5 +100,7 @@ static void __init rz_cpg_clocks_init(struct device_node *np)
}
of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data);
cpg_mstp_add_clk_domain(np);
}
CLK_OF_DECLARE(rz_cpg_clks, "renesas,rz-cpg-clocks", rz_cpg_clocks_init);
......@@ -247,12 +247,19 @@ config ARM_SPEAR_CPUFREQ
help
This adds the CPUFreq driver support for SPEAr SOCs.
config ARM_TEGRA20_CPUFREQ
bool "Tegra20 CPUFreq support"
depends on ARCH_TEGRA
default y
help
This adds the CPUFreq driver support for Tegra20 SOCs.
config ARM_TEGRA124_CPUFREQ
tristate "Tegra124 CPUFreq support"
depends on ARCH_TEGRA && CPUFREQ_DT
default y
help
This adds the CPUFreq driver support for Tegra124 SOCs.
config ARM_PXA2xx_CPUFREQ
tristate "Intel PXA2xx CPUfreq driver"
......
......@@ -76,7 +76,8 @@ obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o
obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o
obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o
obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o
obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o
obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o
obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ) += vexpress-spc-cpufreq.o
##################################################################################
......
/*
* Tegra 124 cpufreq driver
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
#include <linux/cpufreq-dt.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/regulator/consumer.h>
#include <linux/types.h>
struct tegra124_cpufreq_priv {
struct regulator *vdd_cpu_reg;
struct clk *cpu_clk;
struct clk *pllp_clk;
struct clk *pllx_clk;
struct clk *dfll_clk;
struct platform_device *cpufreq_dt_pdev;
};
static int tegra124_cpu_switch_to_dfll(struct tegra124_cpufreq_priv *priv)
{
struct clk *orig_parent;
int ret;
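/*
* Switch sequence: program the DFLL to the CPU's current rate, park the
* CPU on the auxiliary PLL_P while the DFLL is being enabled, then hand
* the CPU clock over to the DFLL (which also takes over voltage scaling).
*/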
ret = clk_set_rate(priv->dfll_clk, clk_get_rate(priv->cpu_clk));
if (ret)
return ret;
orig_parent = clk_get_parent(priv->cpu_clk);
clk_set_parent(priv->cpu_clk, priv->pllp_clk);
ret = clk_prepare_enable(priv->dfll_clk);
if (ret)
goto out;
clk_set_parent(priv->cpu_clk, priv->dfll_clk);
return 0;
out:
clk_set_parent(priv->cpu_clk, orig_parent);
return ret;
}
static void tegra124_cpu_switch_to_pllx(struct tegra124_cpufreq_priv *priv)
{
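/*
* Reverse of the DFLL switch: park the CPU on PLL_P, shut the DFLL down,
* let the regulator framework re-sync the voltage the DFLL had been
* driving on its own, then move the CPU back to PLL_X.
*/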
{
clk_set_parent(priv->cpu_clk, priv->pllp_clk);
clk_disable_unprepare(priv->dfll_clk);
regulator_sync_voltage(priv->vdd_cpu_reg);
clk_set_parent(priv->cpu_clk, priv->pllx_clk);
}
static struct cpufreq_dt_platform_data cpufreq_dt_pd = {
.independent_clocks = false,
};
static int tegra124_cpufreq_probe(struct platform_device *pdev)
{
struct tegra124_cpufreq_priv *priv;
struct device_node *np;
struct device *cpu_dev;
struct platform_device_info cpufreq_dt_devinfo = {};
int ret;
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
cpu_dev = get_cpu_device(0);
if (!cpu_dev)
return -ENODEV;
np = of_cpu_device_node_get(0);
if (!np)
return -ENODEV;
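/*
* Note: regulator_get() and of_clk_get_by_name() are not devm-managed,
* hence the manual put ladder in the error paths below and in remove().
*/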
priv->vdd_cpu_reg = regulator_get(cpu_dev, "vdd-cpu");
if (IS_ERR(priv->vdd_cpu_reg)) {
ret = PTR_ERR(priv->vdd_cpu_reg);
goto out_put_np;
}
priv->cpu_clk = of_clk_get_by_name(np, "cpu_g");
if (IS_ERR(priv->cpu_clk)) {
ret = PTR_ERR(priv->cpu_clk);
goto out_put_vdd_cpu_reg;
}
priv->dfll_clk = of_clk_get_by_name(np, "dfll");
if (IS_ERR(priv->dfll_clk)) {
ret = PTR_ERR(priv->dfll_clk);
goto out_put_cpu_clk;
}
priv->pllx_clk = of_clk_get_by_name(np, "pll_x");
if (IS_ERR(priv->pllx_clk)) {
ret = PTR_ERR(priv->pllx_clk);
goto out_put_dfll_clk;
}
priv->pllp_clk = of_clk_get_by_name(np, "pll_p");
if (IS_ERR(priv->pllp_clk)) {
ret = PTR_ERR(priv->pllp_clk);
goto out_put_pllx_clk;
}
ret = tegra124_cpu_switch_to_dfll(priv);
if (ret)
goto out_put_pllp_clk;
cpufreq_dt_devinfo.name = "cpufreq-dt";
cpufreq_dt_devinfo.parent = &pdev->dev;
cpufreq_dt_devinfo.data = &cpufreq_dt_pd;
cpufreq_dt_devinfo.size_data = sizeof(cpufreq_dt_pd);
priv->cpufreq_dt_pdev =
platform_device_register_full(&cpufreq_dt_devinfo);
if (IS_ERR(priv->cpufreq_dt_pdev)) {
ret = PTR_ERR(priv->cpufreq_dt_pdev);
goto out_switch_to_pllx;
}
platform_set_drvdata(pdev, priv);
return 0;
out_switch_to_pllx:
tegra124_cpu_switch_to_pllx(priv);
out_put_pllp_clk:
clk_put(priv->pllp_clk);
out_put_pllx_clk:
clk_put(priv->pllx_clk);
out_put_dfll_clk:
clk_put(priv->dfll_clk);
out_put_cpu_clk:
clk_put(priv->cpu_clk);
out_put_vdd_cpu_reg:
regulator_put(priv->vdd_cpu_reg);
out_put_np:
of_node_put(np);
return ret;
}
static int tegra124_cpufreq_remove(struct platform_device *pdev)
{
struct tegra124_cpufreq_priv *priv = platform_get_drvdata(pdev);
platform_device_unregister(priv->cpufreq_dt_pdev);
tegra124_cpu_switch_to_pllx(priv);
clk_put(priv->pllp_clk);
clk_put(priv->pllx_clk);
clk_put(priv->dfll_clk);
clk_put(priv->cpu_clk);
regulator_put(priv->vdd_cpu_reg);
return 0;
}
static struct platform_driver tegra124_cpufreq_platdrv = {
.driver.name = "cpufreq-tegra124",
.probe = tegra124_cpufreq_probe,
.remove = tegra124_cpufreq_remove,
};
static int __init tegra_cpufreq_init(void)
{
int ret;
struct platform_device *pdev;
if (!of_machine_is_compatible("nvidia,tegra124"))
return -ENODEV;
/*
* Platform driver+device required for handling EPROBE_DEFER with
* the regulator and the DFLL clock
*/
ret = platform_driver_register(&tegra124_cpufreq_platdrv);
if (ret)
return ret;
pdev = platform_device_register_simple("cpufreq-tegra124", -1, NULL, 0);
if (IS_ERR(pdev)) {
platform_driver_unregister(&tegra124_cpufreq_platdrv);
return PTR_ERR(pdev);
}
return 0;
}
module_init(tegra_cpufreq_init);
MODULE_AUTHOR("Tuomas Tynkkynen <ttynkkynen@nvidia.com>");
MODULE_DESCRIPTION("cpufreq driver for NVIDIA Tegra124");
MODULE_LICENSE("GPL v2");
......@@ -222,7 +222,7 @@ config TEGRA_IOMMU_SMMU
select IOMMU_API
help
This driver supports the IOMMU hardware (SMMU) found on NVIDIA Tegra
SoCs (Tegra30 up to Tegra210).
config EXYNOS_IOMMU
bool "Exynos IOMMU Support"
......
......@@ -7,6 +7,14 @@ menuconfig MEMORY
if MEMORY
config ARM_PL172_MPMC
tristate "ARM PL172 MPMC driver"
depends on ARM_AMBA && OF
help
This selects the ARM PrimeCell PL172 MultiPort Memory Controller.
If you have an embedded system with an AMBA bus and a PL172
controller, say Y or M here.
config ATMEL_SDRAMC
bool "Atmel (Multi-port DDR-)SDRAM Controller"
default y
......
......@@ -5,6 +5,7 @@
ifeq ($(CONFIG_DDR),y)
obj-$(CONFIG_OF) += of_memory.o
endif
obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o
obj-$(CONFIG_ATMEL_SDRAMC) += atmel-sdramc.o
obj-$(CONFIG_TI_AEMIF) += ti-aemif.o
obj-$(CONFIG_TI_EMIF) += emif.o
......
/*
* Memory controller driver for ARM PrimeCell PL172
* PrimeCell MultiPort Memory Controller (PL172)
*
* Copyright (C) 2015 Joachim Eastwood <manabian@gmail.com>
*
* Based on:
* TI AEMIF driver, Copyright (C) 2010 - 2013 Texas Instruments Inc.
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
#include <linux/amba/bus.h>
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/time.h>
#define MPMC_STATIC_CFG(n) (0x200 + 0x20 * n)
#define MPMC_STATIC_CFG_MW_8BIT 0x0
#define MPMC_STATIC_CFG_MW_16BIT 0x1
#define MPMC_STATIC_CFG_MW_32BIT 0x2
#define MPMC_STATIC_CFG_PM BIT(3)
#define MPMC_STATIC_CFG_PC BIT(6)
#define MPMC_STATIC_CFG_PB BIT(7)
#define MPMC_STATIC_CFG_EW BIT(8)
#define MPMC_STATIC_CFG_B BIT(19)
#define MPMC_STATIC_CFG_P BIT(20)
#define MPMC_STATIC_WAIT_WEN(n) (0x204 + 0x20 * n)
#define MPMC_STATIC_WAIT_WEN_MAX 0x0f
#define MPMC_STATIC_WAIT_OEN(n) (0x208 + 0x20 * n)
#define MPMC_STATIC_WAIT_OEN_MAX 0x0f
#define MPMC_STATIC_WAIT_RD(n) (0x20c + 0x20 * n)
#define MPMC_STATIC_WAIT_RD_MAX 0x1f
#define MPMC_STATIC_WAIT_PAGE(n) (0x210 + 0x20 * n)
#define MPMC_STATIC_WAIT_PAGE_MAX 0x1f
#define MPMC_STATIC_WAIT_WR(n) (0x214 + 0x20 * n)
#define MPMC_STATIC_WAIT_WR_MAX 0x1f
#define MPMC_STATIC_WAIT_TURN(n) (0x218 + 0x20 * n)
#define MPMC_STATIC_WAIT_TURN_MAX 0x0f
/* Maximum number of static chip selects */
#define PL172_MAX_CS 4
struct pl172_data {
void __iomem *base;
unsigned long rate;
struct clk *clk;
};
static int pl172_timing_prop(struct amba_device *adev,
const struct device_node *np, const char *name,
u32 reg_offset, u32 max, int start)
{
struct pl172_data *pl172 = amba_get_drvdata(adev);
int cycles;
u32 val;
if (!of_property_read_u32(np, name, &val)) {
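/*
* pl172->rate is in kHz, so val (ns) * rate / NSEC_PER_MSEC gives the
* delay in whole clock cycles, rounded up. The hardware adds 'start'
* cycles on top of the programmed value, so subtract it before writing.
*/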
cycles = DIV_ROUND_UP(val * pl172->rate, NSEC_PER_MSEC) - start;
if (cycles < 0) {
cycles = 0;
} else if (cycles > max) {
dev_err(&adev->dev, "%s timing too tight\n", name);
return -EINVAL;
}
writel(cycles, pl172->base + reg_offset);
}
dev_dbg(&adev->dev, "%s: %u cycle(s)\n", name, start +
readl(pl172->base + reg_offset));
return 0;
}
static int pl172_setup_static(struct amba_device *adev,
struct device_node *np, u32 cs)
{
struct pl172_data *pl172 = amba_get_drvdata(adev);
u32 cfg;
int ret;
/* MPMC static memory configuration */
if (!of_property_read_u32(np, "mpmc,memory-width", &cfg)) {
if (cfg == 8) {
cfg = MPMC_STATIC_CFG_MW_8BIT;
} else if (cfg == 16) {
cfg = MPMC_STATIC_CFG_MW_16BIT;
} else if (cfg == 32) {
cfg = MPMC_STATIC_CFG_MW_32BIT;
} else {
dev_err(&adev->dev, "invalid memory width cs%u\n", cs);
return -EINVAL;
}
} else {
dev_err(&adev->dev, "memory-width property required\n");
return -EINVAL;
}
if (of_property_read_bool(np, "mpmc,async-page-mode"))
cfg |= MPMC_STATIC_CFG_PM;
if (of_property_read_bool(np, "mpmc,cs-active-high"))
cfg |= MPMC_STATIC_CFG_PC;
if (of_property_read_bool(np, "mpmc,byte-lane-low"))
cfg |= MPMC_STATIC_CFG_PB;
if (of_property_read_bool(np, "mpmc,extended-wait"))
cfg |= MPMC_STATIC_CFG_EW;
if (of_property_read_bool(np, "mpmc,buffer-enable"))
cfg |= MPMC_STATIC_CFG_B;
if (of_property_read_bool(np, "mpmc,write-protect"))
cfg |= MPMC_STATIC_CFG_P;
writel(cfg, pl172->base + MPMC_STATIC_CFG(cs));
dev_dbg(&adev->dev, "mpmc static config cs%u: 0x%08x\n", cs, cfg);
/* MPMC static memory timing */
ret = pl172_timing_prop(adev, np, "mpmc,write-enable-delay",
MPMC_STATIC_WAIT_WEN(cs),
MPMC_STATIC_WAIT_WEN_MAX, 1);
if (ret)
goto fail;
ret = pl172_timing_prop(adev, np, "mpmc,output-enable-delay",
MPMC_STATIC_WAIT_OEN(cs),
MPMC_STATIC_WAIT_OEN_MAX, 0);
if (ret)
goto fail;
ret = pl172_timing_prop(adev, np, "mpmc,read-access-delay",
MPMC_STATIC_WAIT_RD(cs),
MPMC_STATIC_WAIT_RD_MAX, 1);
if (ret)
goto fail;
ret = pl172_timing_prop(adev, np, "mpmc,page-mode-read-delay",
MPMC_STATIC_WAIT_PAGE(cs),
MPMC_STATIC_WAIT_PAGE_MAX, 1);
if (ret)
goto fail;
ret = pl172_timing_prop(adev, np, "mpmc,write-access-delay",
MPMC_STATIC_WAIT_WR(cs),
MPMC_STATIC_WAIT_WR_MAX, 2);
if (ret)
goto fail;
ret = pl172_timing_prop(adev, np, "mpmc,turn-round-delay",
MPMC_STATIC_WAIT_TURN(cs),
MPMC_STATIC_WAIT_TURN_MAX, 1);
if (ret)
goto fail;
return 0;
fail:
dev_err(&adev->dev, "failed to configure cs%u\n", cs);
return ret;
}
static int pl172_parse_cs_config(struct amba_device *adev,
struct device_node *np)
{
u32 cs;
if (!of_property_read_u32(np, "mpmc,cs", &cs)) {
if (cs >= PL172_MAX_CS) {
dev_err(&adev->dev, "cs%u invalid\n", cs);
return -EINVAL;
}
return pl172_setup_static(adev, np, cs);
}
dev_err(&adev->dev, "cs property required\n");
return -EINVAL;
}
static const char * const pl172_revisions[] = {"r1", "r2", "r2p3", "r2p4"};
static int pl172_probe(struct amba_device *adev, const struct amba_id *id)
{
struct device_node *child_np, *np = adev->dev.of_node;
struct device *dev = &adev->dev;
static const char *rev = "?";
struct pl172_data *pl172;
int ret;
if (amba_part(adev) == 0x172) {
if (amba_rev(adev) < ARRAY_SIZE(pl172_revisions))
rev = pl172_revisions[amba_rev(adev)];
}
dev_info(dev, "ARM PL%x revision %s\n", amba_part(adev), rev);
pl172 = devm_kzalloc(dev, sizeof(*pl172), GFP_KERNEL);
if (!pl172)
return -ENOMEM;
pl172->clk = devm_clk_get(dev, "mpmcclk");
if (IS_ERR(pl172->clk)) {
dev_err(dev, "no mpmcclk provided clock\n");
return PTR_ERR(pl172->clk);
}
ret = clk_prepare_enable(pl172->clk);
if (ret) {
dev_err(dev, "unable to mpmcclk enable clock\n");
return ret;
}
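/* Cache the MPMC clock rate in kHz; pl172_timing_prop() uses it for the ns-to-cycles conversion. */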
pl172->rate = clk_get_rate(pl172->clk) / MSEC_PER_SEC;
if (!pl172->rate) {
dev_err(dev, "unable to get mpmcclk clock rate\n");
ret = -EINVAL;
goto err_clk_enable;
}
ret = amba_request_regions(adev, NULL);
if (ret) {
dev_err(dev, "unable to request AMBA regions\n");
goto err_clk_enable;
}
pl172->base = devm_ioremap(dev, adev->res.start,
resource_size(&adev->res));
if (!pl172->base) {
dev_err(dev, "ioremap failed\n");
ret = -ENOMEM;
goto err_no_ioremap;
}
amba_set_drvdata(adev, pl172);
/*
* Loop through each child node, which represent a chip select, and
* configure parameters and timing. If successful; populate devices
* under that node.
*/
for_each_available_child_of_node(np, child_np) {
ret = pl172_parse_cs_config(adev, child_np);
if (ret)
continue;
of_platform_populate(child_np, NULL, NULL, dev);
}
return 0;
err_no_ioremap:
amba_release_regions(adev);
err_clk_enable:
clk_disable_unprepare(pl172->clk);
return ret;
}
static int pl172_remove(struct amba_device *adev)
{
struct pl172_data *pl172 = amba_get_drvdata(adev);
clk_disable_unprepare(pl172->clk);
amba_release_regions(adev);
return 0;
}
static const struct amba_id pl172_ids[] = {
{
.id = 0x07341172,
.mask = 0xffffffff,
},
{ 0, 0 },
};
MODULE_DEVICE_TABLE(amba, pl172_ids);
static struct amba_driver pl172_driver = {
.drv = {
.name = "memory-pl172",
},
.probe = pl172_probe,
.remove = pl172_remove,
.id_table = pl172_ids,
};
module_amba_driver(pl172_driver);
MODULE_AUTHOR("Joachim Eastwood <manabian@gmail.com>");
MODULE_DESCRIPTION("PL172 Memory Controller Driver");
MODULE_LICENSE("GPL v2");
......@@ -4,6 +4,7 @@ tegra-mc-$(CONFIG_ARCH_TEGRA_3x_SOC) += tegra30.o
tegra-mc-$(CONFIG_ARCH_TEGRA_114_SOC) += tegra114.o
tegra-mc-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124.o
tegra-mc-$(CONFIG_ARCH_TEGRA_132_SOC) += tegra124.o
tegra-mc-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210.o
obj-$(CONFIG_TEGRA_MC) += tegra-mc.o
......
......@@ -42,7 +42,6 @@
#define MC_ERR_STATUS_ADR_HI_MASK 0x3
#define MC_ERR_STATUS_SECURITY (1 << 17)
#define MC_ERR_STATUS_RW (1 << 16)
#define MC_ERR_STATUS_CLIENT_MASK 0x7f
#define MC_ERR_ADR 0x0c
......@@ -66,6 +65,9 @@ static const struct of_device_id tegra_mc_of_match[] = {
#endif
#ifdef CONFIG_ARCH_TEGRA_132_SOC
{ .compatible = "nvidia,tegra132-mc", .data = &tegra132_mc_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_210_SOC
{ .compatible = "nvidia,tegra210-mc", .data = &tegra210_mc_soc },
#endif
{ }
};
......@@ -283,7 +285,7 @@ static irqreturn_t tegra_mc_irq(int irq, void *data)
else
secure = "";
id = value & mc->soc->client_id_mask;
for (i = 0; i < mc->soc->num_clients; i++) {
if (mc->soc->clients[i].id == id) {
......@@ -410,6 +412,8 @@ static int tegra_mc_probe(struct platform_device *pdev)
return err;
}
WARN(!mc->soc->client_id_mask, "Missing client ID mask for this SoC\n");
value = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM;
......
......@@ -41,4 +41,8 @@ extern const struct tegra_mc_soc tegra124_mc_soc;
extern const struct tegra_mc_soc tegra132_mc_soc;
#endif
#ifdef CONFIG_ARCH_TEGRA_210_SOC
extern const struct tegra_mc_soc tegra210_mc_soc;
#endif
#endif /* MEMORY_TEGRA_MC_H */
......@@ -944,5 +944,6 @@ const struct tegra_mc_soc tegra114_mc_soc = {
.num_clients = ARRAY_SIZE(tegra114_mc_clients),
.num_address_bits = 32,
.atom_size = 32,
.client_id_mask = 0x7f,
.smmu = &tegra114_smmu_soc,
};
......@@ -1027,7 +1027,40 @@ static int emc_debug_rate_set(void *data, u64 rate)
DEFINE_SIMPLE_ATTRIBUTE(emc_debug_rate_fops, emc_debug_rate_get,
emc_debug_rate_set, "%lld\n");
static int emc_debug_supported_rates_show(struct seq_file *s, void *data)
{
struct tegra_emc *emc = s->private;
const char *prefix = "";
unsigned int i;
for (i = 0; i < emc->num_timings; i++) {
struct emc_timing *timing = &emc->timings[i];
seq_printf(s, "%s%lu", prefix, timing->rate);
prefix = " ";
}
seq_puts(s, "\n");
return 0;
}
static int emc_debug_supported_rates_open(struct inode *inode,
struct file *file)
{
return single_open(file, emc_debug_supported_rates_show,
inode->i_private);
}
static const struct file_operations emc_debug_supported_rates_fops = {
.open = emc_debug_supported_rates_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static void emc_debugfs_init(struct device *dev, struct tegra_emc *emc)
{
struct dentry *root, *file;
struct clk *clk;
......@@ -1048,6 +1081,11 @@ static void emc_debugfs_init(struct device *dev)
&emc_debug_rate_fops);
if (!file)
dev_err(dev, "failed to create debugfs entry\n");
file = debugfs_create_file("supported_rates", S_IRUGO, root, emc,
&emc_debug_supported_rates_fops);
if (!file)
dev_err(dev, "failed to create debugfs entry\n");
}
static int tegra_emc_probe(struct platform_device *pdev)
......@@ -1119,7 +1157,7 @@ static int tegra_emc_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, emc);
if (IS_ENABLED(CONFIG_DEBUG_FS))
emc_debugfs_init(&pdev->dev, emc);
return 0;
};
......
......@@ -1032,6 +1032,7 @@ const struct tegra_mc_soc tegra124_mc_soc = {
.num_clients = ARRAY_SIZE(tegra124_mc_clients),
.num_address_bits = 34,
.atom_size = 32,
.client_id_mask = 0x7f,
.smmu = &tegra124_smmu_soc,
.emem_regs = tegra124_mc_emem_regs,
.num_emem_regs = ARRAY_SIZE(tegra124_mc_emem_regs),
......@@ -1067,6 +1068,7 @@ const struct tegra_mc_soc tegra132_mc_soc = {
.num_clients = ARRAY_SIZE(tegra124_mc_clients),
.num_address_bits = 34,
.atom_size = 32,
.client_id_mask = 0x7f,
.smmu = &tegra132_smmu_soc,
};
#endif /* CONFIG_ARCH_TEGRA_132_SOC */
/*
* Copyright (C) 2015 NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/of.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>
#include <dt-bindings/memory/tegra210-mc.h>
#include "mc.h"
static const struct tegra_mc_client tegra210_mc_clients[] = {
{
.id = 0x00,
.name = "ptcr",
.swgroup = TEGRA_SWGROUP_PTC,
}, {
.id = 0x01,
.name = "display0a",
.swgroup = TEGRA_SWGROUP_DC,
.smmu = {
.reg = 0x228,
.bit = 1,
},
.la = {
.reg = 0x2e8,
.shift = 0,
.mask = 0xff,
.def = 0xc2,
},
}, {
.id = 0x02,
.name = "display0ab",
.swgroup = TEGRA_SWGROUP_DCB,
.smmu = {
.reg = 0x228,
.bit = 2,
},
.la = {
.reg = 0x2f4,
.shift = 0,
.mask = 0xff,
.def = 0xc6,
},
}, {
.id = 0x03,
.name = "display0b",
.swgroup = TEGRA_SWGROUP_DC,
.smmu = {
.reg = 0x228,
.bit = 3,
},
.la = {
.reg = 0x2e8,
.shift = 16,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x04,
.name = "display0bb",
.swgroup = TEGRA_SWGROUP_DCB,
.smmu = {
.reg = 0x228,
.bit = 4,
},
.la = {
.reg = 0x2f4,
.shift = 16,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x05,
.name = "display0c",
.swgroup = TEGRA_SWGROUP_DC,
.smmu = {
.reg = 0x228,
.bit = 5,
},
.la = {
.reg = 0x2ec,
.shift = 0,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x06,
.name = "display0cb",
.swgroup = TEGRA_SWGROUP_DCB,
.smmu = {
.reg = 0x228,
.bit = 6,
},
.la = {
.reg = 0x2f8,
.shift = 0,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x0e,
.name = "afir",
.swgroup = TEGRA_SWGROUP_AFI,
.smmu = {
.reg = 0x228,
.bit = 14,
},
.la = {
.reg = 0x2e0,
.shift = 0,
.mask = 0xff,
.def = 0x13,
},
}, {
.id = 0x0f,
.name = "avpcarm7r",
.swgroup = TEGRA_SWGROUP_AVPC,
.smmu = {
.reg = 0x228,
.bit = 15,
},
.la = {
.reg = 0x2e4,
.shift = 0,
.mask = 0xff,
.def = 0x04,
},
}, {
.id = 0x10,
.name = "displayhc",
.swgroup = TEGRA_SWGROUP_DC,
.smmu = {
.reg = 0x228,
.bit = 16,
},
.la = {
.reg = 0x2f0,
.shift = 0,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x11,
.name = "displayhcb",
.swgroup = TEGRA_SWGROUP_DCB,
.smmu = {
.reg = 0x228,
.bit = 17,
},
.la = {
.reg = 0x2fc,
.shift = 0,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x15,
.name = "hdar",
.swgroup = TEGRA_SWGROUP_HDA,
.smmu = {
.reg = 0x228,
.bit = 21,
},
.la = {
.reg = 0x318,
.shift = 0,
.mask = 0xff,
.def = 0x24,
},
}, {
.id = 0x16,
.name = "host1xdmar",
.swgroup = TEGRA_SWGROUP_HC,
.smmu = {
.reg = 0x228,
.bit = 22,
},
.la = {
.reg = 0x310,
.shift = 0,
.mask = 0xff,
.def = 0x1e,
},
}, {
.id = 0x17,
.name = "host1xr",
.swgroup = TEGRA_SWGROUP_HC,
.smmu = {
.reg = 0x228,
.bit = 23,
},
.la = {
.reg = 0x310,
.shift = 16,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x1c,
.name = "nvencsrd",
.swgroup = TEGRA_SWGROUP_NVENC,
.smmu = {
.reg = 0x228,
.bit = 28,
},
.la = {
.reg = 0x328,
.shift = 0,
.mask = 0xff,
.def = 0x23,
},
}, {
.id = 0x1d,
.name = "ppcsahbdmar",
.swgroup = TEGRA_SWGROUP_PPCS,
.smmu = {
.reg = 0x228,
.bit = 29,
},
.la = {
.reg = 0x344,
.shift = 0,
.mask = 0xff,
.def = 0x49,
},
}, {
.id = 0x1e,
.name = "ppcsahbslvr",
.swgroup = TEGRA_SWGROUP_PPCS,
.smmu = {
.reg = 0x228,
.bit = 30,
},
.la = {
.reg = 0x344,
.shift = 16,
.mask = 0xff,
.def = 0x1a,
},
}, {
.id = 0x1f,
.name = "satar",
.swgroup = TEGRA_SWGROUP_SATA,
.smmu = {
.reg = 0x228,
.bit = 31,
},
.la = {
.reg = 0x350,
.shift = 0,
.mask = 0xff,
.def = 0x65,
},
}, {
.id = 0x27,
.name = "mpcorer",
.swgroup = TEGRA_SWGROUP_MPCORE,
.la = {
.reg = 0x320,
.shift = 0,
.mask = 0xff,
.def = 0x04,
},
}, {
.id = 0x2b,
.name = "nvencswr",
.swgroup = TEGRA_SWGROUP_NVENC,
.smmu = {
.reg = 0x22c,
.bit = 11,
},
.la = {
.reg = 0x328,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x31,
.name = "afiw",
.swgroup = TEGRA_SWGROUP_AFI,
.smmu = {
.reg = 0x22c,
.bit = 17,
},
.la = {
.reg = 0x2e0,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x32,
.name = "avpcarm7w",
.swgroup = TEGRA_SWGROUP_AVPC,
.smmu = {
.reg = 0x22c,
.bit = 18,
},
.la = {
.reg = 0x2e4,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x35,
.name = "hdaw",
.swgroup = TEGRA_SWGROUP_HDA,
.smmu = {
.reg = 0x22c,
.bit = 21,
},
.la = {
.reg = 0x318,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x36,
.name = "host1xw",
.swgroup = TEGRA_SWGROUP_HC,
.smmu = {
.reg = 0x22c,
.bit = 22,
},
.la = {
.reg = 0x314,
.shift = 0,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x39,
.name = "mpcorew",
.swgroup = TEGRA_SWGROUP_MPCORE,
.la = {
.reg = 0x320,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x3b,
.name = "ppcsahbdmaw",
.swgroup = TEGRA_SWGROUP_PPCS,
.smmu = {
.reg = 0x22c,
.bit = 27,
},
.la = {
.reg = 0x348,
.shift = 0,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x3c,
.name = "ppcsahbslvw",
.swgroup = TEGRA_SWGROUP_PPCS,
.smmu = {
.reg = 0x22c,
.bit = 28,
},
.la = {
.reg = 0x348,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x3d,
.name = "sataw",
.swgroup = TEGRA_SWGROUP_SATA,
.smmu = {
.reg = 0x22c,
.bit = 29,
},
.la = {
.reg = 0x350,
.shift = 16,
.mask = 0xff,
.def = 0x65,
},
}, {
.id = 0x44,
.name = "ispra",
.swgroup = TEGRA_SWGROUP_ISP2,
.smmu = {
.reg = 0x230,
.bit = 4,
},
.la = {
.reg = 0x370,
.shift = 0,
.mask = 0xff,
.def = 0x18,
},
}, {
.id = 0x46,
.name = "ispwa",
.swgroup = TEGRA_SWGROUP_ISP2,
.smmu = {
.reg = 0x230,
.bit = 6,
},
.la = {
.reg = 0x374,
.shift = 0,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x47,
.name = "ispwb",
.swgroup = TEGRA_SWGROUP_ISP2,
.smmu = {
.reg = 0x230,
.bit = 7,
},
.la = {
.reg = 0x374,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x4a,
.name = "xusb_hostr",
.swgroup = TEGRA_SWGROUP_XUSB_HOST,
.smmu = {
.reg = 0x230,
.bit = 10,
},
.la = {
.reg = 0x37c,
.shift = 0,
.mask = 0xff,
.def = 0x39,
},
}, {
.id = 0x4b,
.name = "xusb_hostw",
.swgroup = TEGRA_SWGROUP_XUSB_HOST,
.smmu = {
.reg = 0x230,
.bit = 11,
},
.la = {
.reg = 0x37c,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x4c,
.name = "xusb_devr",
.swgroup = TEGRA_SWGROUP_XUSB_DEV,
.smmu = {
.reg = 0x230,
.bit = 12,
},
.la = {
.reg = 0x380,
.shift = 0,
.mask = 0xff,
.def = 0x39,
},
}, {
.id = 0x4d,
.name = "xusb_devw",
.swgroup = TEGRA_SWGROUP_XUSB_DEV,
.smmu = {
.reg = 0x230,
.bit = 13,
},
.la = {
.reg = 0x380,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x4e,
.name = "isprab",
.swgroup = TEGRA_SWGROUP_ISP2B,
.smmu = {
.reg = 0x230,
.bit = 14,
},
.la = {
.reg = 0x384,
.shift = 0,
.mask = 0xff,
.def = 0x18,
},
}, {
.id = 0x50,
.name = "ispwab",
.swgroup = TEGRA_SWGROUP_ISP2B,
.smmu = {
.reg = 0x230,
.bit = 16,
},
.la = {
.reg = 0x388,
.shift = 0,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x51,
.name = "ispwbb",
.swgroup = TEGRA_SWGROUP_ISP2B,
.smmu = {
.reg = 0x230,
.bit = 17,
},
.la = {
.reg = 0x388,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x54,
.name = "tsecsrd",
.swgroup = TEGRA_SWGROUP_TSEC,
.smmu = {
.reg = 0x230,
.bit = 20,
},
.la = {
.reg = 0x390,
.shift = 0,
.mask = 0xff,
.def = 0x9b,
},
}, {
.id = 0x55,
.name = "tsecswr",
.swgroup = TEGRA_SWGROUP_TSEC,
.smmu = {
.reg = 0x230,
.bit = 21,
},
.la = {
.reg = 0x390,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x56,
.name = "a9avpscr",
.swgroup = TEGRA_SWGROUP_A9AVP,
.smmu = {
.reg = 0x230,
.bit = 22,
},
.la = {
.reg = 0x3a4,
.shift = 0,
.mask = 0xff,
.def = 0x04,
},
}, {
.id = 0x57,
.name = "a9avpscw",
.swgroup = TEGRA_SWGROUP_A9AVP,
.smmu = {
.reg = 0x230,
.bit = 23,
},
.la = {
.reg = 0x3a4,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x58,
.name = "gpusrd",
.swgroup = TEGRA_SWGROUP_GPU,
.smmu = {
/* read-only */
.reg = 0x230,
.bit = 24,
},
.la = {
.reg = 0x3c8,
.shift = 0,
.mask = 0xff,
.def = 0x1a,
},
}, {
.id = 0x59,
.name = "gpuswr",
.swgroup = TEGRA_SWGROUP_GPU,
.smmu = {
/* read-only */
.reg = 0x230,
.bit = 25,
},
.la = {
.reg = 0x3c8,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x5a,
.name = "displayt",
.swgroup = TEGRA_SWGROUP_DC,
.smmu = {
.reg = 0x230,
.bit = 26,
},
.la = {
.reg = 0x2f0,
.shift = 16,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x60,
.name = "sdmmcra",
.swgroup = TEGRA_SWGROUP_SDMMC1A,
.smmu = {
.reg = 0x234,
.bit = 0,
},
.la = {
.reg = 0x3b8,
.shift = 0,
.mask = 0xff,
.def = 0x49,
},
}, {
.id = 0x61,
.name = "sdmmcraa",
.swgroup = TEGRA_SWGROUP_SDMMC2A,
.smmu = {
.reg = 0x234,
.bit = 1,
},
.la = {
.reg = 0x3bc,
.shift = 0,
.mask = 0xff,
.def = 0x49,
},
}, {
.id = 0x62,
.name = "sdmmcr",
.swgroup = TEGRA_SWGROUP_SDMMC3A,
.smmu = {
.reg = 0x234,
.bit = 2,
},
.la = {
.reg = 0x3c0,
.shift = 0,
.mask = 0xff,
.def = 0x49,
},
}, {
.id = 0x63,
.name = "sdmmcrab",
.swgroup = TEGRA_SWGROUP_SDMMC4A,
.smmu = {
.reg = 0x234,
.bit = 3,
},
.la = {
.reg = 0x3c4,
.shift = 0,
.mask = 0xff,
.def = 0x49,
},
}, {
.id = 0x64,
.name = "sdmmcwa",
.swgroup = TEGRA_SWGROUP_SDMMC1A,
.smmu = {
.reg = 0x234,
.bit = 4,
},
.la = {
.reg = 0x3b8,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x65,
.name = "sdmmcwaa",
.swgroup = TEGRA_SWGROUP_SDMMC2A,
.smmu = {
.reg = 0x234,
.bit = 5,
},
.la = {
.reg = 0x3bc,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x66,
.name = "sdmmcw",
.swgroup = TEGRA_SWGROUP_SDMMC3A,
.smmu = {
.reg = 0x234,
.bit = 6,
},
.la = {
.reg = 0x3c0,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x67,
.name = "sdmmcwab",
.swgroup = TEGRA_SWGROUP_SDMMC4A,
.smmu = {
.reg = 0x234,
.bit = 7,
},
.la = {
.reg = 0x3c4,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x6c,
.name = "vicsrd",
.swgroup = TEGRA_SWGROUP_VIC,
.smmu = {
.reg = 0x234,
.bit = 12,
},
.la = {
.reg = 0x394,
.shift = 0,
.mask = 0xff,
.def = 0x1a,
},
}, {
.id = 0x6d,
.name = "vicswr",
.swgroup = TEGRA_SWGROUP_VIC,
.smmu = {
.reg = 0x234,
.bit = 13,
},
.la = {
.reg = 0x394,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x72,
.name = "viw",
.swgroup = TEGRA_SWGROUP_VI,
.smmu = {
.reg = 0x234,
.bit = 18,
},
.la = {
.reg = 0x398,
.shift = 0,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x73,
.name = "displayd",
.swgroup = TEGRA_SWGROUP_DC,
.smmu = {
.reg = 0x234,
.bit = 19,
},
.la = {
.reg = 0x3c8,
.shift = 0,
.mask = 0xff,
.def = 0x50,
},
}, {
.id = 0x78,
.name = "nvdecsrd",
.swgroup = TEGRA_SWGROUP_NVDEC,
.smmu = {
.reg = 0x234,
.bit = 24,
},
.la = {
.reg = 0x3d8,
.shift = 0,
.mask = 0xff,
.def = 0x23,
},
}, {
.id = 0x79,
.name = "nvdecswr",
.swgroup = TEGRA_SWGROUP_NVDEC,
.smmu = {
.reg = 0x234,
.bit = 25,
},
.la = {
.reg = 0x3d8,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x7a,
.name = "aper",
.swgroup = TEGRA_SWGROUP_APE,
.smmu = {
.reg = 0x234,
.bit = 26,
},
.la = {
.reg = 0x3dc,
.shift = 0,
.mask = 0xff,
.def = 0xff,
},
}, {
.id = 0x7b,
.name = "apew",
.swgroup = TEGRA_SWGROUP_APE,
.smmu = {
.reg = 0x234,
.bit = 27,
},
.la = {
.reg = 0x3dc,
.shift = 0,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x7e,
.name = "nvjpgsrd",
.swgroup = TEGRA_SWGROUP_NVJPG,
.smmu = {
.reg = 0x234,
.bit = 30,
},
.la = {
.reg = 0x3e4,
.shift = 0,
.mask = 0xff,
.def = 0x23,
},
}, {
.id = 0x7f,
.name = "nvjpgswr",
.swgroup = TEGRA_SWGROUP_NVJPG,
.smmu = {
.reg = 0x234,
.bit = 31,
},
.la = {
.reg = 0x3e4,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x80,
.name = "sesrd",
.swgroup = TEGRA_SWGROUP_SE,
.smmu = {
.reg = 0xb98,
.bit = 0,
},
.la = {
.reg = 0x3e0,
.shift = 0,
.mask = 0xff,
.def = 0x2e,
},
}, {
.id = 0x81,
.name = "seswr",
.swgroup = TEGRA_SWGROUP_SE,
.smmu = {
.reg = 0xb98,
.bit = 1,
},
.la = {
.reg = 0xb98,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x82,
.name = "axiapr",
.swgroup = TEGRA_SWGROUP_AXIAP,
.smmu = {
.reg = 0xb98,
.bit = 2,
},
.la = {
.reg = 0x3a0,
.shift = 0,
.mask = 0xff,
.def = 0xff,
},
}, {
.id = 0x83,
.name = "axiapw",
.swgroup = TEGRA_SWGROUP_AXIAP,
.smmu = {
.reg = 0xb98,
.bit = 3,
},
.la = {
.reg = 0x3a0,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x84,
.name = "etrr",
.swgroup = TEGRA_SWGROUP_ETR,
.smmu = {
.reg = 0xb98,
.bit = 4,
},
.la = {
.reg = 0x3ec,
.shift = 0,
.mask = 0xff,
.def = 0xff,
},
}, {
.id = 0x85,
.name = "etrw",
.swgroup = TEGRA_SWGROUP_ETR,
.smmu = {
.reg = 0xb98,
.bit = 5,
},
.la = {
.reg = 0x3ec,
.shift = 16,
.mask = 0xff,
.def = 0xff,
},
}, {
.id = 0x86,
.name = "tsecsrdb",
.swgroup = TEGRA_SWGROUP_TSECB,
.smmu = {
.reg = 0xb98,
.bit = 6,
},
.la = {
.reg = 0x3f0,
.shift = 0,
.mask = 0xff,
.def = 0x9b,
},
}, {
.id = 0x87,
.name = "tsecswrb",
.swgroup = TEGRA_SWGROUP_TSECB,
.smmu = {
.reg = 0xb98,
.bit = 7,
},
.la = {
.reg = 0x3f0,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
}, {
.id = 0x88,
.name = "gpusrd2",
.swgroup = TEGRA_SWGROUP_GPU,
.smmu = {
/* read-only */
.reg = 0xb98,
.bit = 8,
},
.la = {
.reg = 0x3e8,
.shift = 0,
.mask = 0xff,
.def = 0x1a,
},
}, {
.id = 0x89,
.name = "gpuswr2",
.swgroup = TEGRA_SWGROUP_GPU,
.smmu = {
/* read-only */
.reg = 0xb98,
.bit = 9,
},
.la = {
.reg = 0x3e8,
.shift = 16,
.mask = 0xff,
.def = 0x80,
},
},
};
static const struct tegra_smmu_swgroup tegra210_swgroups[] = {
{ .name = "dc", .swgroup = TEGRA_SWGROUP_DC, .reg = 0x240 },
{ .name = "dcb", .swgroup = TEGRA_SWGROUP_DCB, .reg = 0x244 },
{ .name = "afi", .swgroup = TEGRA_SWGROUP_AFI, .reg = 0x238 },
{ .name = "avpc", .swgroup = TEGRA_SWGROUP_AVPC, .reg = 0x23c },
{ .name = "hda", .swgroup = TEGRA_SWGROUP_HDA, .reg = 0x254 },
{ .name = "hc", .swgroup = TEGRA_SWGROUP_HC, .reg = 0x250 },
{ .name = "nvenc", .swgroup = TEGRA_SWGROUP_NVENC, .reg = 0x264 },
{ .name = "ppcs", .swgroup = TEGRA_SWGROUP_PPCS, .reg = 0x270 },
{ .name = "sata", .swgroup = TEGRA_SWGROUP_SATA, .reg = 0x274 },
{ .name = "isp2", .swgroup = TEGRA_SWGROUP_ISP2, .reg = 0x258 },
{ .name = "xusb_host", .swgroup = TEGRA_SWGROUP_XUSB_HOST, .reg = 0x288 },
{ .name = "xusb_dev", .swgroup = TEGRA_SWGROUP_XUSB_DEV, .reg = 0x28c },
{ .name = "isp2b", .swgroup = TEGRA_SWGROUP_ISP2B, .reg = 0xaa4 },
{ .name = "tsec", .swgroup = TEGRA_SWGROUP_TSEC, .reg = 0x294 },
{ .name = "a9avp", .swgroup = TEGRA_SWGROUP_A9AVP, .reg = 0x290 },
{ .name = "gpu", .swgroup = TEGRA_SWGROUP_GPU, .reg = 0xaac },
{ .name = "sdmmc1a", .swgroup = TEGRA_SWGROUP_SDMMC1A, .reg = 0xa94 },
{ .name = "sdmmc2a", .swgroup = TEGRA_SWGROUP_SDMMC2A, .reg = 0xa98 },
{ .name = "sdmmc3a", .swgroup = TEGRA_SWGROUP_SDMMC3A, .reg = 0xa9c },
{ .name = "sdmmc4a", .swgroup = TEGRA_SWGROUP_SDMMC4A, .reg = 0xaa0 },
{ .name = "vic", .swgroup = TEGRA_SWGROUP_VIC, .reg = 0x284 },
{ .name = "vi", .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 },
{ .name = "nvdec", .swgroup = TEGRA_SWGROUP_NVDEC, .reg = 0xab4 },
{ .name = "ape", .swgroup = TEGRA_SWGROUP_APE, .reg = 0xab8 },
{ .name = "nvjpg", .swgroup = TEGRA_SWGROUP_NVJPG, .reg = 0xac0 },
{ .name = "se", .swgroup = TEGRA_SWGROUP_SE, .reg = 0xabc },
{ .name = "axiap", .swgroup = TEGRA_SWGROUP_AXIAP, .reg = 0xacc },
{ .name = "etr", .swgroup = TEGRA_SWGROUP_ETR, .reg = 0xad0 },
{ .name = "tsecb", .swgroup = TEGRA_SWGROUP_TSECB, .reg = 0xad4 },
};
static const struct tegra_smmu_soc tegra210_smmu_soc = {
.clients = tegra210_mc_clients,
.num_clients = ARRAY_SIZE(tegra210_mc_clients),
.swgroups = tegra210_swgroups,
.num_swgroups = ARRAY_SIZE(tegra210_swgroups),
.supports_round_robin_arbitration = true,
.supports_request_limit = true,
.num_tlb_lines = 32,
.num_asids = 128,
};
const struct tegra_mc_soc tegra210_mc_soc = {
.clients = tegra210_mc_clients,
.num_clients = ARRAY_SIZE(tegra210_mc_clients),
.num_address_bits = 34,
.atom_size = 64,
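/* Tegra210 uses the full eight bits of the memory client ID field. */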
.client_id_mask = 0xff,
.smmu = &tegra210_smmu_soc,
};
@@ -966,5 +966,6 @@ const struct tegra_mc_soc tegra30_mc_soc = {
.num_clients = ARRAY_SIZE(tegra30_mc_clients),
.num_address_bits = 32,
.atom_size = 16,
.client_id_mask = 0x7f,
.smmu = &tegra30_smmu_soc,
};
obj-$(CONFIG_RESET_CONTROLLER) += core.o
obj-$(CONFIG_ARCH_LPC18XX) += reset-lpc18xx.o
obj-$(CONFIG_ARCH_SOCFPGA) += reset-socfpga.o
obj-$(CONFIG_ARCH_BERLIN) += reset-berlin.o
obj-$(CONFIG_ARCH_SUNXI) += reset-sunxi.o
obj-$(CONFIG_ARCH_STI) += sti/
obj-$(CONFIG_ARCH_ZYNQ) += reset-zynq.o
obj-$(CONFIG_ATH79) += reset-ath79.o
/*
* Copyright (C) 2015 Alban Bedel <albeu@free.fr>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
struct ath79_reset {
struct reset_controller_dev rcdev;
void __iomem *base;
spinlock_t lock;
};
static int ath79_reset_update(struct reset_controller_dev *rcdev,
unsigned long id, bool assert)
{
struct ath79_reset *ath79_reset =
container_of(rcdev, struct ath79_reset, rcdev);
unsigned long flags;
u32 val;
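/*
* All 32 reset lines live in a single 32-bit register, so a
* read-modify-write of that register under the lock is enough to
* assert or deassert any line.
*/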
spin_lock_irqsave(&ath79_reset->lock, flags);
val = readl(ath79_reset->base);
if (assert)
val |= BIT(id);
else
val &= ~BIT(id);
writel(val, ath79_reset->base);
spin_unlock_irqrestore(&ath79_reset->lock, flags);
return 0;
}
static int ath79_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return ath79_reset_update(rcdev, id, true);
}
static int ath79_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return ath79_reset_update(rcdev, id, false);
}
static int ath79_reset_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct ath79_reset *ath79_reset =
container_of(rcdev, struct ath79_reset, rcdev);
u32 val;
val = readl(ath79_reset->base);
return !!(val & BIT(id));
}
static struct reset_control_ops ath79_reset_ops = {
.assert = ath79_reset_assert,
.deassert = ath79_reset_deassert,
.status = ath79_reset_status,
};
static int ath79_reset_probe(struct platform_device *pdev)
{
struct ath79_reset *ath79_reset;
struct resource *res;
ath79_reset = devm_kzalloc(&pdev->dev,
sizeof(*ath79_reset), GFP_KERNEL);
if (!ath79_reset)
return -ENOMEM;
platform_set_drvdata(pdev, ath79_reset);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ath79_reset->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(ath79_reset->base))
return PTR_ERR(ath79_reset->base);
ath79_reset->rcdev.ops = &ath79_reset_ops;
ath79_reset->rcdev.owner = THIS_MODULE;
ath79_reset->rcdev.of_node = pdev->dev.of_node;
ath79_reset->rcdev.of_reset_n_cells = 1;
ath79_reset->rcdev.nr_resets = 32;
return reset_controller_register(&ath79_reset->rcdev);
}
static int ath79_reset_remove(struct platform_device *pdev)
{
struct ath79_reset *ath79_reset = platform_get_drvdata(pdev);
reset_controller_unregister(&ath79_reset->rcdev);
return 0;
}
static const struct of_device_id ath79_reset_dt_ids[] = {
{ .compatible = "qca,ar7100-reset", },
{ },
};
MODULE_DEVICE_TABLE(of, ath79_reset_dt_ids);
static struct platform_driver ath79_reset_driver = {
.probe = ath79_reset_probe,
.remove = ath79_reset_remove,
.driver = {
.name = "ath79-reset",
.of_match_table = ath79_reset_dt_ids,
},
};
module_platform_driver(ath79_reset_driver);
MODULE_AUTHOR("Alban Bedel <albeu@free.fr>");
MODULE_DESCRIPTION("AR71xx Reset Controller Driver");
MODULE_LICENSE("GPL");
/*
* Reset driver for NXP LPC18xx/43xx Reset Generation Unit (RGU).
*
* Copyright (C) 2015 Joachim Eastwood <manabian@gmail.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/reboot.h>
#include <linux/reset-controller.h>
#include <linux/spinlock.h>
/* LPC18xx RGU registers */
#define LPC18XX_RGU_CTRL0 0x100
#define LPC18XX_RGU_CTRL1 0x104
#define LPC18XX_RGU_ACTIVE_STATUS0 0x150
#define LPC18XX_RGU_ACTIVE_STATUS1 0x154
#define LPC18XX_RGU_RESETS_PER_REG 32
/* Internal reset outputs */
#define LPC18XX_RGU_CORE_RST 0
#define LPC43XX_RGU_M0SUB_RST 12
#define LPC43XX_RGU_M0APP_RST 56
struct lpc18xx_rgu_data {
struct reset_controller_dev rcdev;
struct clk *clk_delay;
struct clk *clk_reg;
void __iomem *base;
spinlock_t lock;
u32 delay_us;
};
#define to_rgu_data(p) container_of(p, struct lpc18xx_rgu_data, rcdev)
static void __iomem *rgu_base;
static int lpc18xx_rgu_restart(struct notifier_block *this, unsigned long mode,
void *cmd)
{
writel(BIT(LPC18XX_RGU_CORE_RST), rgu_base + LPC18XX_RGU_CTRL0);
mdelay(2000);
pr_emerg("%s: unable to restart system\n", __func__);
return NOTIFY_DONE;
}
static struct notifier_block lpc18xx_rgu_restart_nb = {
.notifier_call = lpc18xx_rgu_restart,
.priority = 192,
};
/*
* The LPC18xx RGU has mostly self-deasserting resets except for the
* two reset lines going to the internal Cortex-M0 cores.
*
* To prevent the M0 core resets from accidentally getting deasserted,
* the status register must be checked and the relevant bits in the
* control register set to preserve the state.
*/
static int lpc18xx_rgu_setclear_reset(struct reset_controller_dev *rcdev,
unsigned long id, bool set)
{
struct lpc18xx_rgu_data *rc = to_rgu_data(rcdev);
u32 stat_offset = LPC18XX_RGU_ACTIVE_STATUS0;
u32 ctrl_offset = LPC18XX_RGU_CTRL0;
unsigned long flags;
u32 stat, rst_bit;
stat_offset += (id / LPC18XX_RGU_RESETS_PER_REG) * sizeof(u32);
ctrl_offset += (id / LPC18XX_RGU_RESETS_PER_REG) * sizeof(u32);
rst_bit = 1 << (id % LPC18XX_RGU_RESETS_PER_REG);
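/*
* ACTIVE_STATUS reads back 0 for lines currently held in reset, so
* invert it to recover the value that preserves the state of all
* other lines when writing the control register.
*/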
spin_lock_irqsave(&rc->lock, flags);
stat = ~readl(rc->base + stat_offset);
if (set)
writel(stat | rst_bit, rc->base + ctrl_offset);
else
writel(stat & ~rst_bit, rc->base + ctrl_offset);
spin_unlock_irqrestore(&rc->lock, flags);
return 0;
}
static int lpc18xx_rgu_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return lpc18xx_rgu_setclear_reset(rcdev, id, true);
}
static int lpc18xx_rgu_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return lpc18xx_rgu_setclear_reset(rcdev, id, false);
}
/* Only M0 cores require explicit reset deassert */
static int lpc18xx_rgu_reset(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct lpc18xx_rgu_data *rc = to_rgu_data(rcdev);
lpc18xx_rgu_assert(rcdev, id);
udelay(rc->delay_us);
switch (id) {
case LPC43XX_RGU_M0SUB_RST:
case LPC43XX_RGU_M0APP_RST:
lpc18xx_rgu_setclear_reset(rcdev, id, false);
}
return 0;
}
static int lpc18xx_rgu_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct lpc18xx_rgu_data *rc = to_rgu_data(rcdev);
u32 bit, offset = LPC18XX_RGU_ACTIVE_STATUS0;
offset += (id / LPC18XX_RGU_RESETS_PER_REG) * sizeof(u32);
bit = 1 << (id % LPC18XX_RGU_RESETS_PER_REG);
return !(readl(rc->base + offset) & bit);
}
static struct reset_control_ops lpc18xx_rgu_ops = {
.reset = lpc18xx_rgu_reset,
.assert = lpc18xx_rgu_assert,
.deassert = lpc18xx_rgu_deassert,
.status = lpc18xx_rgu_status,
};
static int lpc18xx_rgu_probe(struct platform_device *pdev)
{
struct lpc18xx_rgu_data *rc;
struct resource *res;
u32 fcclk, firc;
int ret;
rc = devm_kzalloc(&pdev->dev, sizeof(*rc), GFP_KERNEL);
if (!rc)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
rc->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(rc->base))
return PTR_ERR(rc->base);
rc->clk_reg = devm_clk_get(&pdev->dev, "reg");
if (IS_ERR(rc->clk_reg)) {
dev_err(&pdev->dev, "reg clock not found\n");
return PTR_ERR(rc->clk_reg);
}
rc->clk_delay = devm_clk_get(&pdev->dev, "delay");
if (IS_ERR(rc->clk_delay)) {
dev_err(&pdev->dev, "delay clock not found\n");
return PTR_ERR(rc->clk_delay);
}
ret = clk_prepare_enable(rc->clk_reg);
if (ret) {
dev_err(&pdev->dev, "unable to enable reg clock\n");
return ret;
}
ret = clk_prepare_enable(rc->clk_delay);
if (ret) {
dev_err(&pdev->dev, "unable to enable delay clock\n");
goto dis_clk_reg;
}
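/*
* Derive the delay between reset assert and deassert from the two
* clock rates; if either rate is unknown, fall back to a conservative
* 2 us. (Interpretation of the formula below, not from a datasheet.)
*/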
fcclk = clk_get_rate(rc->clk_reg) / USEC_PER_SEC;
firc = clk_get_rate(rc->clk_delay) / USEC_PER_SEC;
if (fcclk == 0 || firc == 0)
rc->delay_us = 2;
else
rc->delay_us = DIV_ROUND_UP(fcclk, firc * firc);
spin_lock_init(&rc->lock);
rc->rcdev.owner = THIS_MODULE;
rc->rcdev.nr_resets = 64;
rc->rcdev.ops = &lpc18xx_rgu_ops;
rc->rcdev.of_node = pdev->dev.of_node;
platform_set_drvdata(pdev, rc);
ret = reset_controller_register(&rc->rcdev);
if (ret) {
dev_err(&pdev->dev, "unable to register device\n");
goto dis_clks;
}
rgu_base = rc->base;
ret = register_restart_handler(&lpc18xx_rgu_restart_nb);
if (ret)
dev_warn(&pdev->dev, "failed to register restart handler\n");
return 0;
dis_clks:
clk_disable_unprepare(rc->clk_delay);
dis_clk_reg:
clk_disable_unprepare(rc->clk_reg);
return ret;
}
static int lpc18xx_rgu_remove(struct platform_device *pdev)
{
struct lpc18xx_rgu_data *rc = platform_get_drvdata(pdev);
int ret;
ret = unregister_restart_handler(&lpc18xx_rgu_restart_nb);
if (ret)
dev_warn(&pdev->dev, "failed to unregister restart handler\n");
reset_controller_unregister(&rc->rcdev);
clk_disable_unprepare(rc->clk_delay);
clk_disable_unprepare(rc->clk_reg);
return 0;
}
static const struct of_device_id lpc18xx_rgu_match[] = {
{ .compatible = "nxp,lpc1850-rgu" },
{ }
};
MODULE_DEVICE_TABLE(of, lpc18xx_rgu_match);
static struct platform_driver lpc18xx_rgu_driver = {
.probe = lpc18xx_rgu_probe,
.remove = lpc18xx_rgu_remove,
.driver = {
.name = "lpc18xx-reset",
.of_match_table = lpc18xx_rgu_match,
},
};
module_platform_driver(lpc18xx_rgu_driver);
MODULE_AUTHOR("Joachim Eastwood <manabian@gmail.com>");
MODULE_DESCRIPTION("Reset driver for LPC18xx/43xx RGU");
MODULE_LICENSE("GPL v2");
@@ -24,11 +24,11 @@
#include <linux/types.h>
#define NR_BANKS 4
#define OFFSET_MODRST 0x10
struct socfpga_reset_data {
spinlock_t lock;
void __iomem *membase;
u32 modrst_offset;
struct reset_controller_dev rcdev;
};
@@ -45,8 +45,8 @@ static int socfpga_reset_assert(struct reset_controller_dev *rcdev,
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
writel(reg | BIT(offset), data->membase + OFFSET_MODRST +
reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
writel(reg | BIT(offset), data->membase + data->modrst_offset +
(bank * NR_BANKS));
spin_unlock_irqrestore(&data->lock, flags);
@@ -67,8 +67,8 @@ static int socfpga_reset_deassert(struct reset_controller_dev *rcdev,
spin_lock_irqsave(&data->lock, flags);
reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
writel(reg & ~BIT(offset), data->membase + OFFSET_MODRST +
reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
writel(reg & ~BIT(offset), data->membase + data->modrst_offset +
(bank * NR_BANKS));
spin_unlock_irqrestore(&data->lock, flags);
@@ -85,7 +85,7 @@ static int socfpga_reset_status(struct reset_controller_dev *rcdev,
int offset = id % BITS_PER_LONG;
u32 reg;
reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
reg = readl(data->membase + data->modrst_offset + (bank * NR_BANKS));
return !(reg & BIT(offset));
}
@@ -100,6 +100,8 @@ static int socfpga_reset_probe(struct platform_device *pdev)
{
struct socfpga_reset_data *data;
struct resource *res;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
/*
* The binding was mainlined without the required property.
@@ -120,6 +122,11 @@ static int socfpga_reset_probe(struct platform_device *pdev)
if (IS_ERR(data->membase))
return PTR_ERR(data->membase);
if (of_property_read_u32(np, "altr,modrst-offset", &data->modrst_offset)) {
dev_warn(dev, "missing altr,modrst-offset property, assuming 0x10!\n");
data->modrst_offset = 0x10;
}
spin_lock_init(&data->lock);
data->rcdev.owner = THIS_MODULE;
/*
* Copyright (c) 2015, National Instruments Corp.
*
* Xilinx Zynq Reset controller driver
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/regmap.h>
#include <linux/types.h>
struct zynq_reset_data {
struct regmap *slcr;
struct reset_controller_dev rcdev;
u32 offset;
};
#define to_zynq_reset_data(p) \
container_of((p), struct zynq_reset_data, rcdev)
static int zynq_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct zynq_reset_data *priv = to_zynq_reset_data(rcdev);
int bank = id / BITS_PER_LONG;
int offset = id % BITS_PER_LONG;
pr_debug("%s: %s reset bank %u offset %u\n", KBUILD_MODNAME, __func__,
bank, offset);
return regmap_update_bits(priv->slcr,
priv->offset + (bank * 4),
BIT(offset),
BIT(offset));
}
static int zynq_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct zynq_reset_data *priv = to_zynq_reset_data(rcdev);
int bank = id / BITS_PER_LONG;
int offset = id % BITS_PER_LONG;
pr_debug("%s: %s reset bank %u offset %u\n", KBUILD_MODNAME, __func__,
bank, offset);
return regmap_update_bits(priv->slcr,
priv->offset + (bank * 4),
BIT(offset),
~BIT(offset));
}
static int zynq_reset_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct zynq_reset_data *priv = to_zynq_reset_data(rcdev);
int bank = id / BITS_PER_LONG;
int offset = id % BITS_PER_LONG;
int ret;
u32 reg;
pr_debug("%s: %s reset bank %u offset %u\n", KBUILD_MODNAME, __func__,
bank, offset);
ret = regmap_read(priv->slcr, priv->offset + (bank * 4), &reg);
if (ret)
return ret;
return !!(reg & BIT(offset));
}
static struct reset_control_ops zynq_reset_ops = {
.assert = zynq_reset_assert,
.deassert = zynq_reset_deassert,
.status = zynq_reset_status,
};
static int zynq_reset_probe(struct platform_device *pdev)
{
struct resource *res;
struct zynq_reset_data *priv;
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
platform_set_drvdata(pdev, priv);
priv->slcr = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
"syscon");
if (IS_ERR(priv->slcr)) {
dev_err(&pdev->dev, "unable to get zynq-slcr regmap");
return PTR_ERR(priv->slcr);
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "missing IO resource\n");
return -ENODEV;
}
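/*
* The memory resource is not ioremapped; its start address is used as
* an offset into the SLCR regmap looked up above.
*/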
priv->offset = res->start;
priv->rcdev.owner = THIS_MODULE;
priv->rcdev.nr_resets = resource_size(res) / 4 * BITS_PER_LONG;
priv->rcdev.ops = &zynq_reset_ops;
priv->rcdev.of_node = pdev->dev.of_node;
reset_controller_register(&priv->rcdev);
return 0;
}
static int zynq_reset_remove(struct platform_device *pdev)
{
struct zynq_reset_data *priv = platform_get_drvdata(pdev);
reset_controller_unregister(&priv->rcdev);
return 0;
}
static const struct of_device_id zynq_reset_dt_ids[] = {
{ .compatible = "xlnx,zynq-reset", },
{ /* sentinel */ },
};
static struct platform_driver zynq_reset_driver = {
.probe = zynq_reset_probe,
.remove = zynq_reset_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = zynq_reset_dt_ids,
},
};
module_platform_driver(zynq_reset_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Moritz Fischer <moritz.fischer@ettus.com>");
MODULE_DESCRIPTION("Zynq Reset Controller Driver");
@@ -11,7 +11,7 @@
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <dt-bindings/reset-controller/stih407-resets.h>
#include <dt-bindings/reset/stih407-resets.h>
#include "reset-syscfg.h"
/* STiH407 Peripheral powerdown definitions. */
@@ -126,7 +126,7 @@ static const struct syscfg_reset_controller_data stih407_picophyreset_controller
.channels = stih407_picophyresets,
};
static struct of_device_id stih407_reset_match[] = {
static const struct of_device_id stih407_reset_match[] = {
{
.compatible = "st,stih407-powerdown",
.data = &stih407_powerdown_controller,
@@ -13,7 +13,7 @@
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <dt-bindings/reset-controller/stih415-resets.h>
#include <dt-bindings/reset/stih415-resets.h>
#include "reset-syscfg.h"
@@ -89,7 +89,7 @@ static struct syscfg_reset_controller_data stih415_softreset_controller = {
.channels = stih415_softresets,
};
static struct of_device_id stih415_reset_match[] = {
static const struct of_device_id stih415_reset_match[] = {
{ .compatible = "st,stih415-powerdown",
.data = &stih415_powerdown_controller, },
{ .compatible = "st,stih415-softreset",
@@ -13,7 +13,7 @@
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <dt-bindings/reset-controller/stih416-resets.h>
#include <dt-bindings/reset/stih416-resets.h>
#include "reset-syscfg.h"
@@ -120,7 +120,7 @@ static struct syscfg_reset_controller_data stih416_softreset_controller = {
.channels = stih416_softresets,
};
static struct of_device_id stih416_reset_match[] = {
static const struct of_device_id stih416_reset_match[] = {
{ .compatible = "st,stih416-powerdown",
.data = &stih416_powerdown_controller, },
{ .compatible = "st,stih416-softreset",
@@ -2,6 +2,7 @@
# Makefile for the Linux Kernel SOC specific device drivers.
#
obj-$(CONFIG_MACH_DOVE) += dove/
obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/
obj-$(CONFIG_ARCH_QCOM) += qcom/
obj-$(CONFIG_ARCH_SUNXI) += sunxi/
/*
* Marvell Dove PMU support
*/
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/reset.h>
#include <linux/reset-controller.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/soc/dove/pmu.h>
#include <linux/spinlock.h>
#define NR_PMU_IRQS 7
#define PMC_SW_RST 0x30
#define PMC_IRQ_CAUSE 0x50
#define PMC_IRQ_MASK 0x54
#define PMU_PWR 0x10
#define PMU_ISO 0x58
struct pmu_data {
spinlock_t lock;
struct device_node *of_node;
void __iomem *pmc_base;
void __iomem *pmu_base;
struct irq_chip_generic *irq_gc;
struct irq_domain *irq_domain;
#ifdef CONFIG_RESET_CONTROLLER
struct reset_controller_dev reset;
#endif
};
/*
* The PMU contains a register to reset various subsystems within the
* SoC. Export this as a reset controller.
*/
#ifdef CONFIG_RESET_CONTROLLER
#define rcdev_to_pmu(rcdev) container_of(rcdev, struct pmu_data, reset)
static int pmu_reset_reset(struct reset_controller_dev *rc, unsigned long id)
{
struct pmu_data *pmu = rcdev_to_pmu(rc);
unsigned long flags;
u32 val;
spin_lock_irqsave(&pmu->lock, flags);
val = readl_relaxed(pmu->pmc_base + PMC_SW_RST);
writel_relaxed(val & ~BIT(id), pmu->pmc_base + PMC_SW_RST);
writel_relaxed(val | BIT(id), pmu->pmc_base + PMC_SW_RST);
spin_unlock_irqrestore(&pmu->lock, flags);
return 0;
}
static int pmu_reset_assert(struct reset_controller_dev *rc, unsigned long id)
{
struct pmu_data *pmu = rcdev_to_pmu(rc);
unsigned long flags;
u32 val = ~BIT(id);
spin_lock_irqsave(&pmu->lock, flags);
val &= readl_relaxed(pmu->pmc_base + PMC_SW_RST);
writel_relaxed(val, pmu->pmc_base + PMC_SW_RST);
spin_unlock_irqrestore(&pmu->lock, flags);
return 0;
}
static int pmu_reset_deassert(struct reset_controller_dev *rc, unsigned long id)
{
struct pmu_data *pmu = rcdev_to_pmu(rc);
unsigned long flags;
u32 val = BIT(id);
spin_lock_irqsave(&pmu->lock, flags);
val |= readl_relaxed(pmu->pmc_base + PMC_SW_RST);
writel_relaxed(val, pmu->pmc_base + PMC_SW_RST);
spin_unlock_irqrestore(&pmu->lock, flags);
return 0;
}
static struct reset_control_ops pmu_reset_ops = {
.reset = pmu_reset_reset,
.assert = pmu_reset_assert,
.deassert = pmu_reset_deassert,
};
static struct reset_controller_dev pmu_reset __initdata = {
.ops = &pmu_reset_ops,
.owner = THIS_MODULE,
.nr_resets = 32,
};
static void __init pmu_reset_init(struct pmu_data *pmu)
{
int ret;
pmu->reset = pmu_reset;
pmu->reset.of_node = pmu->of_node;
ret = reset_controller_register(&pmu->reset);
if (ret)
pr_err("pmu: %s failed: %d\n", "reset_controller_register", ret);
}
#else
static void __init pmu_reset_init(struct pmu_data *pmu)
{
}
#endif
struct pmu_domain {
struct pmu_data *pmu;
u32 pwr_mask;
u32 rst_mask;
u32 iso_mask;
struct generic_pm_domain base;
};
#define to_pmu_domain(dom) container_of(dom, struct pmu_domain, base)
/*
* This deals with the "old" Marvell sequence of bringing a power domain
* down/up, which is: apply power, release reset, disable isolators.
*
* Later devices apparently use a different sequence: power up, disable
* isolators, assert repair signal, enable SRMA clock, enable AXI clock,
* enable module clock, deassert reset.
*
* Note: reading the assembly, it seems that the IO accessors have an
* unfortunate side-effect - they cause memory already read into registers
* for the if () to be re-read for the bit-set or bit-clear operation.
* The code is written to avoid this.
*/
static int pmu_domain_power_off(struct generic_pm_domain *domain)
{
struct pmu_domain *pmu_dom = to_pmu_domain(domain);
struct pmu_data *pmu = pmu_dom->pmu;
unsigned long flags;
unsigned int val;
void __iomem *pmu_base = pmu->pmu_base;
void __iomem *pmc_base = pmu->pmc_base;
spin_lock_irqsave(&pmu->lock, flags);
/* Enable isolators */
if (pmu_dom->iso_mask) {
val = ~pmu_dom->iso_mask;
val &= readl_relaxed(pmu_base + PMU_ISO);
writel_relaxed(val, pmu_base + PMU_ISO);
}
/* Reset unit */
if (pmu_dom->rst_mask) {
val = ~pmu_dom->rst_mask;
val &= readl_relaxed(pmc_base + PMC_SW_RST);
writel_relaxed(val, pmc_base + PMC_SW_RST);
}
/* Power down */
val = readl_relaxed(pmu_base + PMU_PWR) | pmu_dom->pwr_mask;
writel_relaxed(val, pmu_base + PMU_PWR);
spin_unlock_irqrestore(&pmu->lock, flags);
return 0;
}
static int pmu_domain_power_on(struct generic_pm_domain *domain)
{
struct pmu_domain *pmu_dom = to_pmu_domain(domain);
struct pmu_data *pmu = pmu_dom->pmu;
unsigned long flags;
unsigned int val;
void __iomem *pmu_base = pmu->pmu_base;
void __iomem *pmc_base = pmu->pmc_base;
spin_lock_irqsave(&pmu->lock, flags);
/* Power on */
val = ~pmu_dom->pwr_mask & readl_relaxed(pmu_base + PMU_PWR);
writel_relaxed(val, pmu_base + PMU_PWR);
/* Release reset */
if (pmu_dom->rst_mask) {
val = pmu_dom->rst_mask;
val |= readl_relaxed(pmc_base + PMC_SW_RST);
writel_relaxed(val, pmc_base + PMC_SW_RST);
}
/* Disable isolators */
if (pmu_dom->iso_mask) {
val = pmu_dom->iso_mask;
val |= readl_relaxed(pmu_base + PMU_ISO);
writel_relaxed(val, pmu_base + PMU_ISO);
}
spin_unlock_irqrestore(&pmu->lock, flags);
return 0;
}
static void __pmu_domain_register(struct pmu_domain *domain,
struct device_node *np)
{
unsigned int val = readl_relaxed(domain->pmu->pmu_base + PMU_PWR);
domain->base.power_off = pmu_domain_power_off;
domain->base.power_on = pmu_domain_power_on;
pm_genpd_init(&domain->base, NULL, !(val & domain->pwr_mask));
if (np)
of_genpd_add_provider_simple(np, &domain->base);
}
/* PMU IRQ controller */
static void pmu_irq_handler(unsigned int irq, struct irq_desc *desc)
{
struct pmu_data *pmu = irq_get_handler_data(irq);
struct irq_chip_generic *gc = pmu->irq_gc;
struct irq_domain *domain = pmu->irq_domain;
void __iomem *base = gc->reg_base;
u32 stat = readl_relaxed(base + PMC_IRQ_CAUSE) & gc->mask_cache;
u32 done = ~0;
if (stat == 0) {
handle_bad_irq(irq, desc);
return;
}
while (stat) {
u32 hwirq = fls(stat) - 1;
stat &= ~(1 << hwirq);
done &= ~(1 << hwirq);
generic_handle_irq(irq_find_mapping(domain, hwirq));
}
/*
* The PMU mask register is not RW0C: it is RW. This means that
* the bits take whatever value is written to them; if you write
* a '1', you will set the interrupt.
*
* Unfortunately this means there is NO race free way to clear
* these interrupts.
*
* So, let's structure the code so that the window is as small as
* possible.
*/
irq_gc_lock(gc);
done &= readl_relaxed(base + PMC_IRQ_CAUSE);
writel_relaxed(done, base + PMC_IRQ_CAUSE);
irq_gc_unlock(gc);
}
static int __init dove_init_pmu_irq(struct pmu_data *pmu, int irq)
{
const char *name = "pmu_irq";
struct irq_chip_generic *gc;
struct irq_domain *domain;
int ret;
/* mask and clear all interrupts */
writel(0, pmu->pmc_base + PMC_IRQ_MASK);
writel(0, pmu->pmc_base + PMC_IRQ_CAUSE);
domain = irq_domain_add_linear(pmu->of_node, NR_PMU_IRQS,
&irq_generic_chip_ops, NULL);
if (!domain) {
pr_err("%s: unable to add irq domain\n", name);
return -ENOMEM;
}
ret = irq_alloc_domain_generic_chips(domain, NR_PMU_IRQS, 1, name,
handle_level_irq,
IRQ_NOREQUEST | IRQ_NOPROBE, 0,
IRQ_GC_INIT_MASK_CACHE);
if (ret) {
pr_err("%s: unable to alloc irq domain gc: %d\n", name, ret);
irq_domain_remove(domain);
return ret;
}
gc = irq_get_domain_generic_chip(domain, 0);
gc->reg_base = pmu->pmc_base;
gc->chip_types[0].regs.mask = PMC_IRQ_MASK;
gc->chip_types[0].chip.irq_mask = irq_gc_mask_clr_bit;
gc->chip_types[0].chip.irq_unmask = irq_gc_mask_set_bit;
pmu->irq_domain = domain;
pmu->irq_gc = gc;
irq_set_handler_data(irq, pmu);
irq_set_chained_handler(irq, pmu_irq_handler);
return 0;
}
/*
* pmu: power-manager@d0000 {
* compatible = "marvell,dove-pmu";
* reg = <0xd0000 0x8000> <0xd8000 0x8000>;
* interrupts = <33>;
* interrupt-controller;
* #reset-cells = 1;
* vpu_domain: vpu-domain {
* #power-domain-cells = <0>;
* marvell,pmu_pwr_mask = <0x00000008>;
* marvell,pmu_iso_mask = <0x00000001>;
* resets = <&pmu 16>;
* };
* gpu_domain: gpu-domain {
* #power-domain-cells = <0>;
* marvell,pmu_pwr_mask = <0x00000004>;
* marvell,pmu_iso_mask = <0x00000002>;
* resets = <&pmu 18>;
* };
* };
*/
int __init dove_init_pmu(void)
{
struct device_node *np_pmu, *domains_node, *np;
struct pmu_data *pmu;
int ret, parent_irq;
/* Lookup the PMU node */
np_pmu = of_find_compatible_node(NULL, NULL, "marvell,dove-pmu");
if (!np_pmu)
return 0;
domains_node = of_get_child_by_name(np_pmu, "domains");
if (!domains_node) {
pr_err("%s: failed to find domains sub-node\n", np_pmu->name);
return 0;
}
pmu = kzalloc(sizeof(*pmu), GFP_KERNEL);
if (!pmu)
return -ENOMEM;
spin_lock_init(&pmu->lock);
pmu->of_node = np_pmu;
pmu->pmc_base = of_iomap(pmu->of_node, 0);
pmu->pmu_base = of_iomap(pmu->of_node, 1);
if (!pmu->pmc_base || !pmu->pmu_base) {
pr_err("%s: failed to map PMU\n", np_pmu->name);
iounmap(pmu->pmu_base);
iounmap(pmu->pmc_base);
kfree(pmu);
return -ENOMEM;
}
pmu_reset_init(pmu);
for_each_available_child_of_node(domains_node, np) {
struct of_phandle_args args;
struct pmu_domain *domain;
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
break;
domain->pmu = pmu;
domain->base.name = kstrdup(np->name, GFP_KERNEL);
if (!domain->base.name) {
kfree(domain);
break;
}
of_property_read_u32(np, "marvell,pmu_pwr_mask",
&domain->pwr_mask);
of_property_read_u32(np, "marvell,pmu_iso_mask",
&domain->iso_mask);
/*
* We parse the reset controller property directly here
* to ensure that we can operate when the reset controller
* support is not configured into the kernel.
*/
ret = of_parse_phandle_with_args(np, "resets", "#reset-cells",
0, &args);
if (ret == 0) {
if (args.np == pmu->of_node)
domain->rst_mask = BIT(args.args[0]);
of_node_put(args.np);
}
__pmu_domain_register(domain, np);
}
pm_genpd_poweroff_unused();
/* Loss of the interrupt controller is not a fatal error. */
parent_irq = irq_of_parse_and_map(pmu->of_node, 0);
if (!parent_irq) {
pr_err("%s: no interrupt specified\n", np_pmu->name);
} else {
ret = dove_init_pmu_irq(pmu, parent_irq);
if (ret)
pr_err("dove_init_pmu_irq() failed: %d\n", ret);
}
return 0;
}
@@ -13,7 +13,38 @@ config QCOM_GSBI
config QCOM_PM
bool "Qualcomm Power Management"
depends on ARCH_QCOM && !ARM64
select QCOM_SCM
help
QCOM Platform specific power driver to manage cores and L2 low power
modes. It interface with various system drivers to put the cores in
low power modes.
config QCOM_SMD
tristate "Qualcomm Shared Memory Driver (SMD)"
depends on QCOM_SMEM
help
Say y here to enable support for the Qualcomm Shared Memory Driver
providing communication channels to remote processors in Qualcomm
platforms.
config QCOM_SMD_RPM
tristate "Qualcomm Resource Power Manager (RPM) over SMD"
depends on QCOM_SMD && OF
help
If you say yes to this option, support will be included for the
Resource Power Manager system found in the Qualcomm 8974 based
devices.
This is required to access many regulators, clocks and bus
frequencies controlled by the RPM on these devices.
Say M here if you want to include support for the Qualcomm RPM as a
module. This will build a module called "qcom-smd-rpm".
config QCOM_SMEM
tristate "Qualcomm Shared Memory Manager (SMEM)"
depends on ARCH_QCOM
help
Say y here to enable support for the Qualcomm Shared Memory Manager.
The driver provides an interface to items in a heap shared among all
processors in a Qualcomm platform.
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_SMD) += smd.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
/*
* Copyright (c) 2015, Sony Mobile Communications AB.
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of_platform.h>
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/soc/qcom/smd.h>
#include <linux/soc/qcom/smd-rpm.h>
#define RPM_REQUEST_TIMEOUT (5 * HZ)
/**
* struct qcom_smd_rpm - state of the rpm device driver
* @rpm_channel: reference to the smd channel
* @ack: completion for acks
* @lock: mutual exclusion around the send/complete pair
* @ack_status: result of the rpm request
*/
struct qcom_smd_rpm {
struct qcom_smd_channel *rpm_channel;
struct completion ack;
struct mutex lock;
int ack_status;
};
/**
* struct qcom_rpm_header - header for all rpm requests and responses
* @service_type: identifier of the service
* @length: length of the payload
*/
struct qcom_rpm_header {
u32 service_type;
u32 length;
};
/**
* struct qcom_rpm_request - request message to the rpm
* @msg_id: identifier of the outgoing message
* @flags: active/sleep state flags
* @type: resource type
* @id: resource id
* @data_len: length of the payload following this header
*/
struct qcom_rpm_request {
u32 msg_id;
u32 flags;
u32 type;
u32 id;
u32 data_len;
};
/**
* struct qcom_rpm_message - response message from the rpm
* @msg_type: indicator of the type of message
* @length: the size of this message, including the message header
* @msg_id: message id
* @message: textual message from the rpm
*
* Multiple of these messages can be stacked in an rpm message.
*/
struct qcom_rpm_message {
u32 msg_type;
u32 length;
union {
u32 msg_id;
u8 message[0];
};
};
#define RPM_SERVICE_TYPE_REQUEST 0x00716572 /* "req\0" */
#define RPM_MSG_TYPE_ERR 0x00727265 /* "err\0" */
#define RPM_MSG_TYPE_MSG_ID 0x2367736d /* "msg#" */
/**
* qcom_rpm_smd_write - write @buf to @type:@id
* @rpm: rpm handle
* @type: resource type
* @id: resource identifier
* @buf: the data to be written
* @count: number of bytes in @buf
*/
int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm,
int state,
u32 type, u32 id,
void *buf,
size_t count)
{
static unsigned msg_id = 1;
int left;
int ret;
struct {
struct qcom_rpm_header hdr;
struct qcom_rpm_request req;
u8 payload[count];
} pkt;
/* SMD packets to the RPM may not exceed 256 bytes */
if (WARN_ON(sizeof(pkt) >= 256))
return -EINVAL;
mutex_lock(&rpm->lock);
pkt.hdr.service_type = RPM_SERVICE_TYPE_REQUEST;
pkt.hdr.length = sizeof(struct qcom_rpm_request) + count;
pkt.req.msg_id = msg_id++;
pkt.req.flags = BIT(state);
pkt.req.type = type;
pkt.req.id = id;
pkt.req.data_len = count;
memcpy(pkt.payload, buf, count);
ret = qcom_smd_send(rpm->rpm_channel, &pkt, sizeof(pkt));
if (ret)
goto out;
left = wait_for_completion_timeout(&rpm->ack, RPM_REQUEST_TIMEOUT);
if (!left)
ret = -ETIMEDOUT;
else
ret = rpm->ack_status;
out:
mutex_unlock(&rpm->lock);
return ret;
}
EXPORT_SYMBOL(qcom_rpm_smd_write);
static int qcom_smd_rpm_callback(struct qcom_smd_device *qsdev,
const void *data,
size_t count)
{
const struct qcom_rpm_header *hdr = data;
const struct qcom_rpm_message *msg;
struct qcom_smd_rpm *rpm = dev_get_drvdata(&qsdev->dev);
const u8 *buf = data + sizeof(struct qcom_rpm_header);
const u8 *end = buf + hdr->length;
char msgbuf[32];
int status = 0;
u32 len;
if (hdr->service_type != RPM_SERVICE_TYPE_REQUEST ||
hdr->length < sizeof(struct qcom_rpm_message)) {
dev_err(&qsdev->dev, "invalid request\n");
return 0;
}
while (buf < end) {
msg = (struct qcom_rpm_message *)buf;
switch (msg->msg_type) {
case RPM_MSG_TYPE_MSG_ID:
break;
case RPM_MSG_TYPE_ERR:
len = min_t(u32, ALIGN(msg->length, 4), sizeof(msgbuf));
memcpy_fromio(msgbuf, msg->message, len);
msgbuf[len - 1] = 0;
if (!strcmp(msgbuf, "resource does not exist"))
status = -ENXIO;
else
status = -EINVAL;
break;
}
buf = PTR_ALIGN(buf + 2 * sizeof(u32) + msg->length, 4);
}
rpm->ack_status = status;
complete(&rpm->ack);
return 0;
}
static int qcom_smd_rpm_probe(struct qcom_smd_device *sdev)
{
struct qcom_smd_rpm *rpm;
rpm = devm_kzalloc(&sdev->dev, sizeof(*rpm), GFP_KERNEL);
if (!rpm)
return -ENOMEM;
mutex_init(&rpm->lock);
init_completion(&rpm->ack);
rpm->rpm_channel = sdev->channel;
dev_set_drvdata(&sdev->dev, rpm);
return of_platform_populate(sdev->dev.of_node, NULL, NULL, &sdev->dev);
}
static void qcom_smd_rpm_remove(struct qcom_smd_device *sdev)
{
of_platform_depopulate(&sdev->dev);
}
static const struct of_device_id qcom_smd_rpm_of_match[] = {
{ .compatible = "qcom,rpm-msm8974" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_smd_rpm_of_match);
static struct qcom_smd_driver qcom_smd_rpm_driver = {
.probe = qcom_smd_rpm_probe,
.remove = qcom_smd_rpm_remove,
.callback = qcom_smd_rpm_callback,
.driver = {
.name = "qcom_smd_rpm",
.owner = THIS_MODULE,
.of_match_table = qcom_smd_rpm_of_match,
},
};
static int __init qcom_smd_rpm_init(void)
{
return qcom_smd_driver_register(&qcom_smd_rpm_driver);
}
arch_initcall(qcom_smd_rpm_init);
static void __exit qcom_smd_rpm_exit(void)
{
qcom_smd_driver_unregister(&qcom_smd_rpm_driver);
}
module_exit(qcom_smd_rpm_exit);
MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm SMD backed RPM driver");
MODULE_LICENSE("GPL v2");
/*
* Copyright (c) 2015, Sony Mobile Communications AB.
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smd.h>
#include <linux/soc/qcom/smem.h>
#include <linux/wait.h>
/*
* The Qualcomm Shared Memory communication solution provides point-to-point
* channels for clients to send and receive streaming or packet based data.
*
* Each channel consists of a control item (channel info) and a ring buffer
* pair. The channel info carries information related to channel state, flow
* control and the offsets within the ring buffer.
*
* All allocated channels are listed in an allocation table, identifying the
* pair of items by name, type and remote processor.
*
* Upon creating a new channel the remote processor allocates channel info and
* ring buffer items from the smem heap and populates the allocation table. An
* interrupt is sent to the other end of the channel and a scan for new
* channels should be done. A channel never goes away; it will only change
* state.
*
* The remote processor signals its intent to bring up the communication
* channel by setting the state of its end of the channel to "opening" and
* sends out an interrupt. We detect this change and register a smd device to
* consume the channel. Upon finding a consumer we finish the handshake and the
* channel is up.
*
* Upon closing a channel, the remote processor will update the state of its
* end of the channel and signal us; we will then unregister any attached
* device and close our end of the channel.
*
* Devices attached to a channel can use the qcom_smd_send function to push
* data to the channel; this is done by copying the data into the tx ring
* buffer, updating the pointers in the channel info and signaling the remote
* processor.
*
* The remote processor does the equivalent when it transfers data, and upon
* receiving the interrupt we check the channel info for new data and deliver
* it to the attached device. If the device is not ready to receive the data
* we leave it in the ring buffer for now.
*/
struct smd_channel_info;
struct smd_channel_info_word;
#define SMD_ALLOC_TBL_COUNT 2
#define SMD_ALLOC_TBL_SIZE 64
/*
* This lists the various smem heap items relevant for the allocation table and
* smd channel entries.
*/
static const struct {
unsigned alloc_tbl_id;
unsigned info_base_id;
unsigned fifo_base_id;
} smem_items[SMD_ALLOC_TBL_COUNT] = {
{
.alloc_tbl_id = 13,
.info_base_id = 14,
.fifo_base_id = 338
},
{
.alloc_tbl_id = 14,
.info_base_id = 266,
.fifo_base_id = 202,
},
};
/**
* struct qcom_smd_edge - representing a remote processor
* @smd: handle to qcom_smd
* @of_node: of_node handle for information related to this edge
* @edge_id: identifier of this edge
* @irq: interrupt for signals on this edge
* @ipc_regmap: regmap handle holding the outgoing ipc register
* @ipc_offset: offset within @ipc_regmap of the register for ipc
* @ipc_bit: bit in the register at @ipc_offset of @ipc_regmap
* @channels: list of all channels detected on this edge
* @channels_lock: guard for modifications of @channels
* @allocated: array of bitmaps representing already allocated channels
* @need_rescan: flag that the @work needs to scan smem for new channels
* @smem_available: last available amount of smem triggering a channel scan
* @work: work item for edge housekeeping
*/
struct qcom_smd_edge {
struct qcom_smd *smd;
struct device_node *of_node;
unsigned edge_id;
int irq;
struct regmap *ipc_regmap;
int ipc_offset;
int ipc_bit;
struct list_head channels;
spinlock_t channels_lock;
DECLARE_BITMAP(allocated[SMD_ALLOC_TBL_COUNT], SMD_ALLOC_TBL_SIZE);
bool need_rescan;
unsigned smem_available;
struct work_struct work;
};
/*
* SMD channel states.
*/
enum smd_channel_state {
SMD_CHANNEL_CLOSED,
SMD_CHANNEL_OPENING,
SMD_CHANNEL_OPENED,
SMD_CHANNEL_FLUSHING,
SMD_CHANNEL_CLOSING,
SMD_CHANNEL_RESET,
SMD_CHANNEL_RESET_OPENING
};
/**
* struct qcom_smd_channel - smd channel struct
* @edge: qcom_smd_edge this channel is living on
* @qsdev: reference to an associated smd client device
* @name: name of the channel
* @state: local state of the channel
* @remote_state: remote state of the channel
* @tx_info: byte aligned outgoing channel info
* @rx_info: byte aligned incoming channel info
* @tx_info_word: word aligned outgoing channel info
* @rx_info_word: word aligned incoming channel info
* @tx_lock: lock to make writes to the channel mutually exclusive
* @fblockread_event: wakeup event tied to tx fBLOCKREADINTR
* @tx_fifo: pointer to the outgoing ring buffer
* @rx_fifo: pointer to the incoming ring buffer
* @fifo_size: size of each ring buffer
* @bounce_buffer: bounce buffer for reading wrapped packets
* @cb: callback function registered for this channel
* @recv_lock: guard for rx info modifications and cb pointer
* @pkt_size: size of the currently handled packet
* @list: list entry for @channels in qcom_smd_edge
*/
struct qcom_smd_channel {
struct qcom_smd_edge *edge;
struct qcom_smd_device *qsdev;
char *name;
enum smd_channel_state state;
enum smd_channel_state remote_state;
struct smd_channel_info *tx_info;
struct smd_channel_info *rx_info;
struct smd_channel_info_word *tx_info_word;
struct smd_channel_info_word *rx_info_word;
struct mutex tx_lock;
wait_queue_head_t fblockread_event;
void *tx_fifo;
void *rx_fifo;
int fifo_size;
void *bounce_buffer;
int (*cb)(struct qcom_smd_device *, const void *, size_t);
spinlock_t recv_lock;
int pkt_size;
struct list_head list;
};
/**
* struct qcom_smd - smd struct
* @dev: device struct
* @num_edges: number of entries in @edges
* @edges: array of edges to be handled
*/
struct qcom_smd {
struct device *dev;
unsigned num_edges;
struct qcom_smd_edge edges[0];
};
/*
* Format of the smd_info smem items, for byte aligned channels.
*/
struct smd_channel_info {
u32 state;
u8 fDSR;
u8 fCTS;
u8 fCD;
u8 fRI;
u8 fHEAD;
u8 fTAIL;
u8 fSTATE;
u8 fBLOCKREADINTR;
u32 tail;
u32 head;
};
/*
* Format of the smd_info smem items, for word aligned channels.
*/
struct smd_channel_info_word {
u32 state;
u32 fDSR;
u32 fCTS;
u32 fCD;
u32 fRI;
u32 fHEAD;
u32 fTAIL;
u32 fSTATE;
u32 fBLOCKREADINTR;
u32 tail;
u32 head;
};
#define GET_RX_CHANNEL_INFO(channel, param) \
(channel->rx_info_word ? \
channel->rx_info_word->param : \
channel->rx_info->param)
#define SET_RX_CHANNEL_INFO(channel, param, value) \
(channel->rx_info_word ? \
(channel->rx_info_word->param = value) : \
(channel->rx_info->param = value))
#define GET_TX_CHANNEL_INFO(channel, param) \
(channel->tx_info_word ? \
channel->tx_info_word->param : \
channel->tx_info->param)
#define SET_TX_CHANNEL_INFO(channel, param, value) \
(channel->tx_info_word ? \
(channel->tx_info_word->param = value) : \
(channel->tx_info->param = value))
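/*
* These accessors hide whether a channel uses the byte or the word
* aligned channel info layout; e.g. GET_RX_CHANNEL_INFO(channel, tail)
* reads rx_info_word->tail when the word aligned variant is in use and
* rx_info->tail otherwise.
*/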
/**
* struct qcom_smd_alloc_entry - channel allocation entry
* @name: channel name
* @cid: channel index
* @flags: channel flags and edge id
* @ref_count: reference count of the channel
*/
struct qcom_smd_alloc_entry {
u8 name[20];
u32 cid;
u32 flags;
u32 ref_count;
} __packed;
#define SMD_CHANNEL_FLAGS_EDGE_MASK 0xff
#define SMD_CHANNEL_FLAGS_STREAM BIT(8)
#define SMD_CHANNEL_FLAGS_PACKET BIT(9)
/*
* Each smd packet contains a 20 byte header, with the first 4 bytes carrying
* the length of the payload.
*/
#define SMD_PACKET_HEADER_LEN 20
/*
* Signal the remote processor associated with 'channel'.
*/
static void qcom_smd_signal_channel(struct qcom_smd_channel *channel)
{
struct qcom_smd_edge *edge = channel->edge;
regmap_write(edge->ipc_regmap, edge->ipc_offset, BIT(edge->ipc_bit));
}
/*
* Initialize the tx channel info
*/
static void qcom_smd_channel_reset(struct qcom_smd_channel *channel)
{
SET_TX_CHANNEL_INFO(channel, state, SMD_CHANNEL_CLOSED);
SET_TX_CHANNEL_INFO(channel, fDSR, 0);
SET_TX_CHANNEL_INFO(channel, fCTS, 0);
SET_TX_CHANNEL_INFO(channel, fCD, 0);
SET_TX_CHANNEL_INFO(channel, fRI, 0);
SET_TX_CHANNEL_INFO(channel, fHEAD, 0);
SET_TX_CHANNEL_INFO(channel, fTAIL, 0);
SET_TX_CHANNEL_INFO(channel, fSTATE, 1);
SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0);
SET_TX_CHANNEL_INFO(channel, head, 0);
SET_TX_CHANNEL_INFO(channel, tail, 0);
qcom_smd_signal_channel(channel);
channel->state = SMD_CHANNEL_CLOSED;
channel->pkt_size = 0;
}
/*
* Calculate the amount of data available in the rx fifo
*/
static size_t qcom_smd_channel_get_rx_avail(struct qcom_smd_channel *channel)
{
unsigned head;
unsigned tail;
head = GET_RX_CHANNEL_INFO(channel, head);
tail = GET_RX_CHANNEL_INFO(channel, tail);
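/* The mask arithmetic relies on fifo_size being a power of two. */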
return (head - tail) & (channel->fifo_size - 1);
}
/*
* Set tx channel state and inform the remote processor
*/
static void qcom_smd_channel_set_state(struct qcom_smd_channel *channel,
int state)
{
struct qcom_smd_edge *edge = channel->edge;
bool is_open = state == SMD_CHANNEL_OPENED;
if (channel->state == state)
return;
dev_dbg(edge->smd->dev, "set_state(%s, %d)\n", channel->name, state);
SET_TX_CHANNEL_INFO(channel, fDSR, is_open);
SET_TX_CHANNEL_INFO(channel, fCTS, is_open);
SET_TX_CHANNEL_INFO(channel, fCD, is_open);
SET_TX_CHANNEL_INFO(channel, state, state);
SET_TX_CHANNEL_INFO(channel, fSTATE, 1);
channel->state = state;
qcom_smd_signal_channel(channel);
}
/*
* Copy count bytes of data using 32bit accesses, if that's required.
*/
static void smd_copy_to_fifo(void __iomem *_dst,
const void *_src,
size_t count,
bool word_aligned)
{
u32 *dst = (u32 *)_dst;
u32 *src = (u32 *)_src;
if (word_aligned) {
count /= sizeof(u32);
while (count--)
writel_relaxed(*src++, dst++);
} else {
memcpy_toio(_dst, _src, count);
}
}
/*
* Copy count bytes of data using 32bit accesses, if that is required.
*/
static void smd_copy_from_fifo(void *_dst,
const void __iomem *_src,
size_t count,
bool word_aligned)
{
u32 *dst = (u32 *)_dst;
u32 *src = (u32 *)_src;
if (word_aligned) {
count /= sizeof(u32);
while (count--)
*dst++ = readl_relaxed(src++);
} else {
memcpy_fromio(_dst, _src, count);
}
}
/*
* Read count bytes of data from the rx fifo into buf, but don't advance the
* tail.
*/
static size_t qcom_smd_channel_peek(struct qcom_smd_channel *channel,
void *buf, size_t count)
{
bool word_aligned;
unsigned tail;
size_t len;
word_aligned = channel->rx_info_word != NULL;
tail = GET_RX_CHANNEL_INFO(channel, tail);
len = min_t(size_t, count, channel->fifo_size - tail);
if (len) {
smd_copy_from_fifo(buf,
channel->rx_fifo + tail,
len,
word_aligned);
}
if (len != count) {
smd_copy_from_fifo(buf + len,
channel->rx_fifo,
count - len,
word_aligned);
}
return count;
}
/*
* Advance the rx tail by count bytes.
*/
static void qcom_smd_channel_advance(struct qcom_smd_channel *channel,
size_t count)
{
unsigned tail;
tail = GET_RX_CHANNEL_INFO(channel, tail);
tail += count;
tail &= (channel->fifo_size - 1);
SET_RX_CHANNEL_INFO(channel, tail, tail);
}
/*
* Read out a single packet from the rx fifo and deliver it to the device
*/
static int qcom_smd_channel_recv_single(struct qcom_smd_channel *channel)
{
struct qcom_smd_device *qsdev = channel->qsdev;
unsigned tail;
size_t len;
void *ptr;
int ret;
if (!channel->cb)
return 0;
tail = GET_RX_CHANNEL_INFO(channel, tail);
/* Use bounce buffer if the data wraps */
if (tail + channel->pkt_size >= channel->fifo_size) {
ptr = channel->bounce_buffer;
len = qcom_smd_channel_peek(channel, ptr, channel->pkt_size);
} else {
ptr = channel->rx_fifo + tail;
len = channel->pkt_size;
}
ret = channel->cb(qsdev, ptr, len);
if (ret < 0)
return ret;
/* Only forward the tail if the client consumed the data */
qcom_smd_channel_advance(channel, len);
channel->pkt_size = 0;
return 0;
}
/*
* Per channel interrupt handling
*/
static bool qcom_smd_channel_intr(struct qcom_smd_channel *channel)
{
bool need_state_scan = false;
int remote_state;
u32 pktlen;
int avail;
int ret;
/* Handle state changes */
remote_state = GET_RX_CHANNEL_INFO(channel, state);
if (remote_state != channel->remote_state) {
channel->remote_state = remote_state;
need_state_scan = true;
}
/* Indicate that we have seen any state change */
SET_RX_CHANNEL_INFO(channel, fSTATE, 0);
/* Signal waiting qcom_smd_send() about the interrupt */
if (!GET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR))
wake_up_interruptible(&channel->fblockread_event);
/* Don't consume any data until we've opened the channel */
if (channel->state != SMD_CHANNEL_OPENED)
goto out;
/* Indicate that we've seen the new data */
SET_RX_CHANNEL_INFO(channel, fHEAD, 0);
/* Consume data */
for (;;) {
avail = qcom_smd_channel_get_rx_avail(channel);
if (!channel->pkt_size && avail >= SMD_PACKET_HEADER_LEN) {
qcom_smd_channel_peek(channel, &pktlen, sizeof(pktlen));
qcom_smd_channel_advance(channel, SMD_PACKET_HEADER_LEN);
channel->pkt_size = pktlen;
} else if (channel->pkt_size && avail >= channel->pkt_size) {
ret = qcom_smd_channel_recv_single(channel);
if (ret)
break;
} else {
break;
}
}
/* Indicate that we have seen and updated tail */
SET_RX_CHANNEL_INFO(channel, fTAIL, 1);
/* Signal the remote that we've consumed the data (if requested) */
if (!GET_RX_CHANNEL_INFO(channel, fBLOCKREADINTR)) {
/* Ensure ordering of channel info updates */
wmb();
qcom_smd_signal_channel(channel);
}
out:
return need_state_scan;
}
/*
* The edge interrupts are triggered by the remote processor on state changes,
* channel info updates or when new channels are created.
*/
static irqreturn_t qcom_smd_edge_intr(int irq, void *data)
{
struct qcom_smd_edge *edge = data;
struct qcom_smd_channel *channel;
unsigned available;
bool kick_worker = false;
/*
* Handle state changes or data on each of the channels on this edge
*/
spin_lock(&edge->channels_lock);
list_for_each_entry(channel, &edge->channels, list) {
spin_lock(&channel->recv_lock);
kick_worker |= qcom_smd_channel_intr(channel);
spin_unlock(&channel->recv_lock);
}
spin_unlock(&edge->channels_lock);
/*
* Creating a new channel requires allocating an smem entry, so we only
* have to scan if the amount of available space in smem has changed
* since the last scan.
*/
available = qcom_smem_get_free_space(edge->edge_id);
if (available != edge->smem_available) {
edge->smem_available = available;
edge->need_rescan = true;
kick_worker = true;
}
if (kick_worker)
schedule_work(&edge->work);
return IRQ_HANDLED;
}
/*
* Deliver any outstanding packets in the rx fifo; used after probe of
* the client to deliver any packets that weren't delivered before the
* client was set up.
*/
static void qcom_smd_channel_resume(struct qcom_smd_channel *channel)
{
unsigned long flags;
spin_lock_irqsave(&channel->recv_lock, flags);
qcom_smd_channel_intr(channel);
spin_unlock_irqrestore(&channel->recv_lock, flags);
}
/*
* Calculate how much space is available in the tx fifo.
*/
static size_t qcom_smd_get_tx_avail(struct qcom_smd_channel *channel)
{
unsigned head;
unsigned tail;
unsigned mask = channel->fifo_size - 1;
head = GET_TX_CHANNEL_INFO(channel, head);
tail = GET_TX_CHANNEL_INFO(channel, tail);
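/*
* Report one byte less than the raw difference so that head == tail
* always means "empty" and a completely full fifo cannot be mistaken
* for an empty one.
*/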
return mask - ((head - tail) & mask);
}
/*
* Write count bytes of data into channel, possibly wrapping in the ring buffer
*/
static int qcom_smd_write_fifo(struct qcom_smd_channel *channel,
const void *data,
size_t count)
{
bool word_aligned;
unsigned head;
size_t len;
word_aligned = channel->tx_info_word != NULL;
head = GET_TX_CHANNEL_INFO(channel, head);
len = min_t(size_t, count, channel->fifo_size - head);
if (len) {
smd_copy_to_fifo(channel->tx_fifo + head,
data,
len,
word_aligned);
}
if (len != count) {
smd_copy_to_fifo(channel->tx_fifo,
data + len,
count - len,
word_aligned);
}
head += count;
head &= (channel->fifo_size - 1);
SET_TX_CHANNEL_INFO(channel, head, head);
return count;
}
/**
* qcom_smd_send - write data to smd channel
* @channel: channel handle
* @data: buffer of data to write
* @len: number of bytes to write
*
 * This is a blocking write of len bytes into the channel's tx ring buffer,
 * after which the remote end is signaled. It will sleep until there is
 * enough space available in the tx buffer, utilizing the fBLOCKREADINTR
 * signaling mechanism to avoid polling.
*/
int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len)
{
u32 hdr[5] = {len,};
int tlen = sizeof(hdr) + len;
int ret;
/* Word aligned channels only accept word size aligned data */
if (channel->rx_info_word != NULL && len % 4)
return -EINVAL;
ret = mutex_lock_interruptible(&channel->tx_lock);
if (ret)
return ret;
while (qcom_smd_get_tx_avail(channel) < tlen) {
if (channel->state != SMD_CHANNEL_OPENED) {
ret = -EPIPE;
goto out;
}
SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 1);
ret = wait_event_interruptible(channel->fblockread_event,
qcom_smd_get_tx_avail(channel) >= tlen ||
channel->state != SMD_CHANNEL_OPENED);
if (ret)
goto out;
SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0);
}
SET_TX_CHANNEL_INFO(channel, fTAIL, 0);
qcom_smd_write_fifo(channel, hdr, sizeof(hdr));
qcom_smd_write_fifo(channel, data, len);
SET_TX_CHANNEL_INFO(channel, fHEAD, 1);
/* Ensure ordering of channel info updates */
wmb();
qcom_smd_signal_channel(channel);
out:
mutex_unlock(&channel->tx_lock);
return ret;
}
EXPORT_SYMBOL(qcom_smd_send);
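/*
 * Example (a minimal sketch; the "foo" client and its one-word message are
 * hypothetical): a client holding a qcom_smd_device can send a packet and
 * must handle the -EPIPE returned if the channel was closed underneath it.
 * The message is word-sized, as word-aligned channels require:
 *
 *	static int foo_ping(struct qcom_smd_device *sdev)
 *	{
 *		u32 msg = 1;
 *		int ret;
 *
 *		ret = qcom_smd_send(sdev->channel, &msg, sizeof(msg));
 *		if (ret == -EPIPE)
 *			dev_warn(&sdev->dev, "channel closed while sending\n");
 *		return ret;
 *	}
 */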
static struct qcom_smd_device *to_smd_device(struct device *dev)
{
return container_of(dev, struct qcom_smd_device, dev);
}
static struct qcom_smd_driver *to_smd_driver(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
return container_of(qsdev->dev.driver, struct qcom_smd_driver, driver);
}
static int qcom_smd_dev_match(struct device *dev, struct device_driver *drv)
{
return of_driver_match_device(dev, drv);
}
/*
* Probe the smd client.
*
 * The remote side has indicated that it wants the channel to be opened, so
* complete the state handshake and probe our client driver.
*/
static int qcom_smd_dev_probe(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
struct qcom_smd_driver *qsdrv = to_smd_driver(dev);
struct qcom_smd_channel *channel = qsdev->channel;
size_t bb_size;
int ret;
/*
 * Packets are at most 4k, but use a smaller bounce buffer if the fifo is smaller
*/
bb_size = min(channel->fifo_size, SZ_4K);
channel->bounce_buffer = kmalloc(bb_size, GFP_KERNEL);
if (!channel->bounce_buffer)
return -ENOMEM;
channel->cb = qsdrv->callback;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENING);
qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENED);
ret = qsdrv->probe(qsdev);
if (ret)
goto err;
qcom_smd_channel_resume(channel);
return 0;
err:
dev_err(&qsdev->dev, "probe failed\n");
channel->cb = NULL;
kfree(channel->bounce_buffer);
channel->bounce_buffer = NULL;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);
return ret;
}
/*
* Remove the smd client.
*
* The channel is going away, for some reason, so remove the smd client and
* reset the channel state.
*/
static int qcom_smd_dev_remove(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
struct qcom_smd_driver *qsdrv = to_smd_driver(dev);
struct qcom_smd_channel *channel = qsdev->channel;
unsigned long flags;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSING);
/*
* Make sure we don't race with the code receiving data.
*/
spin_lock_irqsave(&channel->recv_lock, flags);
channel->cb = NULL;
spin_unlock_irqrestore(&channel->recv_lock, flags);
/* Wake up any sleepers in qcom_smd_send() */
wake_up_interruptible(&channel->fblockread_event);
/*
* We expect that the client might block in remove() waiting for any
* outstanding calls to qcom_smd_send() to wake up and finish.
*/
if (qsdrv->remove)
qsdrv->remove(qsdev);
/*
* The client is now gone, cleanup and reset the channel state.
*/
channel->qsdev = NULL;
kfree(channel->bounce_buffer);
channel->bounce_buffer = NULL;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);
qcom_smd_channel_reset(channel);
return 0;
}
static struct bus_type qcom_smd_bus = {
.name = "qcom_smd",
.match = qcom_smd_dev_match,
.probe = qcom_smd_dev_probe,
.remove = qcom_smd_dev_remove,
};
/*
* Release function for the qcom_smd_device object.
*/
static void qcom_smd_release_device(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
kfree(qsdev);
}
/*
* Finds the device_node for the smd child interested in this channel.
*/
static struct device_node *qcom_smd_match_channel(struct device_node *edge_node,
const char *channel)
{
struct device_node *child;
const char *name;
const char *key;
int ret;
for_each_available_child_of_node(edge_node, child) {
key = "qcom,smd-channels";
ret = of_property_read_string(child, key, &name);
if (ret) {
of_node_put(child);
continue;
}
if (strcmp(name, channel) == 0)
return child;
}
return NULL;
}
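/*
 * Example (hypothetical node and channel names): a child node of the edge
 * carrying qcom,smd-channels = "rpm_requests" will be matched against a
 * channel named "rpm_requests" in the allocation table and turned into a
 * device on the qcom_smd bus.
 */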
/*
 * Create an smd client device for a channel that is being opened.
*/
static int qcom_smd_create_device(struct qcom_smd_channel *channel)
{
struct qcom_smd_device *qsdev;
struct qcom_smd_edge *edge = channel->edge;
struct device_node *node;
struct qcom_smd *smd = edge->smd;
int ret;
if (channel->qsdev)
return -EEXIST;
node = qcom_smd_match_channel(edge->of_node, channel->name);
if (!node) {
dev_dbg(smd->dev, "no match for '%s'\n", channel->name);
return -ENXIO;
}
dev_dbg(smd->dev, "registering '%s'\n", channel->name);
qsdev = kzalloc(sizeof(*qsdev), GFP_KERNEL);
if (!qsdev)
return -ENOMEM;
dev_set_name(&qsdev->dev, "%s.%s", edge->of_node->name, node->name);
qsdev->dev.parent = smd->dev;
qsdev->dev.bus = &qcom_smd_bus;
qsdev->dev.release = qcom_smd_release_device;
qsdev->dev.of_node = node;
qsdev->channel = channel;
channel->qsdev = qsdev;
ret = device_register(&qsdev->dev);
if (ret) {
dev_err(smd->dev, "device_register failed: %d\n", ret);
put_device(&qsdev->dev);
}
return ret;
}
/*
 * Destroy an smd client device for a channel that's going away.
*/
static void qcom_smd_destroy_device(struct qcom_smd_channel *channel)
{
struct device *dev;
BUG_ON(!channel->qsdev);
dev = &channel->qsdev->dev;
device_unregister(dev);
of_node_put(dev->of_node);
put_device(dev);
}
/**
 * qcom_smd_driver_register - register an smd driver
* @qsdrv: qcom_smd_driver struct
*/
int qcom_smd_driver_register(struct qcom_smd_driver *qsdrv)
{
qsdrv->driver.bus = &qcom_smd_bus;
return driver_register(&qsdrv->driver);
}
EXPORT_SYMBOL(qcom_smd_driver_register);
/**
 * qcom_smd_driver_unregister - unregister an smd driver
* @qsdrv: qcom_smd_driver struct
*/
void qcom_smd_driver_unregister(struct qcom_smd_driver *qsdrv)
{
driver_unregister(&qsdrv->driver);
}
EXPORT_SYMBOL(qcom_smd_driver_unregister);
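/*
 * Example (a minimal sketch): a client bundles its hooks in a
 * qcom_smd_driver and registers it on the smd bus. The "foo" names are
 * hypothetical, and the callback signature is assumed to be
 * int (*)(struct qcom_smd_device *, const void *, size_t):
 *
 *	static int foo_callback(struct qcom_smd_device *sdev,
 *				const void *data, size_t len)
 *	{
 *		dev_info(&sdev->dev, "received %zu bytes\n", len);
 *		return 0;
 *	}
 *
 *	static int foo_probe(struct qcom_smd_device *sdev)
 *	{
 *		return 0;
 *	}
 *
 *	static struct qcom_smd_driver foo_driver = {
 *		.probe = foo_probe,
 *		.callback = foo_callback,
 *		.driver = {
 *			.name = "foo",
 *		},
 *	};
 *
 *	static int __init foo_init(void)
 *	{
 *		return qcom_smd_driver_register(&foo_driver);
 *	}
 *	module_init(foo_init);
 */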
/*
* Allocate the qcom_smd_channel object for a newly found smd channel,
* retrieving and validating the smem items involved.
*/
static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *edge,
unsigned smem_info_item,
unsigned smem_fifo_item,
char *name)
{
struct qcom_smd_channel *channel;
struct qcom_smd *smd = edge->smd;
size_t fifo_size;
size_t info_size;
void *fifo_base;
void *info;
int ret;
channel = devm_kzalloc(smd->dev, sizeof(*channel), GFP_KERNEL);
if (!channel)
return ERR_PTR(-ENOMEM);
channel->edge = edge;
channel->name = devm_kstrdup(smd->dev, name, GFP_KERNEL);
if (!channel->name)
return ERR_PTR(-ENOMEM);
mutex_init(&channel->tx_lock);
spin_lock_init(&channel->recv_lock);
init_waitqueue_head(&channel->fblockread_event);
ret = qcom_smem_get(edge->edge_id, smem_info_item, (void **)&info, &info_size);
if (ret)
goto free_name_and_channel;
/*
* Use the size of the item to figure out which channel info struct to
* use.
*/
if (info_size == 2 * sizeof(struct smd_channel_info_word)) {
channel->tx_info_word = info;
channel->rx_info_word = info + sizeof(struct smd_channel_info_word);
} else if (info_size == 2 * sizeof(struct smd_channel_info)) {
channel->tx_info = info;
channel->rx_info = info + sizeof(struct smd_channel_info);
} else {
dev_err(smd->dev,
"channel info of size %zu not supported\n", info_size);
ret = -EINVAL;
goto free_name_and_channel;
}
ret = qcom_smem_get(edge->edge_id, smem_fifo_item, &fifo_base, &fifo_size);
if (ret)
goto free_name_and_channel;
	/* The channel consists of an rx and a tx fifo of equal size */
fifo_size /= 2;
dev_dbg(smd->dev, "new channel '%s' info-size: %zu fifo-size: %zu\n",
name, info_size, fifo_size);
channel->tx_fifo = fifo_base;
channel->rx_fifo = fifo_base + fifo_size;
channel->fifo_size = fifo_size;
qcom_smd_channel_reset(channel);
return channel;
free_name_and_channel:
devm_kfree(smd->dev, channel->name);
devm_kfree(smd->dev, channel);
return ERR_PTR(ret);
}
/*
* Scans the allocation table for any newly allocated channels, calls
 * qcom_smd_create_channel() to create representations of these and adds
* them to the edge's list of channels.
*/
static void qcom_discover_channels(struct qcom_smd_edge *edge)
{
struct qcom_smd_alloc_entry *alloc_tbl;
struct qcom_smd_alloc_entry *entry;
struct qcom_smd_channel *channel;
struct qcom_smd *smd = edge->smd;
unsigned long flags;
unsigned fifo_id;
unsigned info_id;
int ret;
int tbl;
int i;
for (tbl = 0; tbl < SMD_ALLOC_TBL_COUNT; tbl++) {
ret = qcom_smem_get(edge->edge_id,
smem_items[tbl].alloc_tbl_id,
(void **)&alloc_tbl,
NULL);
if (ret < 0)
continue;
for (i = 0; i < SMD_ALLOC_TBL_SIZE; i++) {
entry = &alloc_tbl[i];
if (test_bit(i, edge->allocated[tbl]))
continue;
if (entry->ref_count == 0)
continue;
if (!entry->name[0])
continue;
if (!(entry->flags & SMD_CHANNEL_FLAGS_PACKET))
continue;
if ((entry->flags & SMD_CHANNEL_FLAGS_EDGE_MASK) != edge->edge_id)
continue;
info_id = smem_items[tbl].info_base_id + entry->cid;
fifo_id = smem_items[tbl].fifo_base_id + entry->cid;
channel = qcom_smd_create_channel(edge, info_id, fifo_id, entry->name);
if (IS_ERR(channel))
continue;
spin_lock_irqsave(&edge->channels_lock, flags);
list_add(&channel->list, &edge->channels);
spin_unlock_irqrestore(&edge->channels_lock, flags);
dev_dbg(smd->dev, "new channel found: '%s'\n", channel->name);
set_bit(i, edge->allocated[tbl]);
}
}
schedule_work(&edge->work);
}
/*
 * This per-edge worker scans smem for any new channels and registers them. It
 * then scans all registered channels for state changes that should be handled
 * by creating or destroying smd client devices for the registered channels.
 *
 * LOCKING: edge->channels_lock need not be held during the traversal
 * of the channels list, as it's done synchronously with the only writer.
*/
static void qcom_channel_state_worker(struct work_struct *work)
{
struct qcom_smd_channel *channel;
struct qcom_smd_edge *edge = container_of(work,
struct qcom_smd_edge,
work);
unsigned remote_state;
/*
 * Rescan smem if we have reason to believe that there are new channels.
*/
if (edge->need_rescan) {
edge->need_rescan = false;
qcom_discover_channels(edge);
}
/*
* Register a device for any closed channel where the remote processor
* is showing interest in opening the channel.
*/
list_for_each_entry(channel, &edge->channels, list) {
if (channel->state != SMD_CHANNEL_CLOSED)
continue;
remote_state = GET_RX_CHANNEL_INFO(channel, state);
if (remote_state != SMD_CHANNEL_OPENING &&
remote_state != SMD_CHANNEL_OPENED)
continue;
qcom_smd_create_device(channel);
}
/*
 * Unregister the device for any open channel where the
* remote processor is closing the channel.
*/
list_for_each_entry(channel, &edge->channels, list) {
if (channel->state != SMD_CHANNEL_OPENING &&
channel->state != SMD_CHANNEL_OPENED)
continue;
remote_state = GET_RX_CHANNEL_INFO(channel, state);
if (remote_state == SMD_CHANNEL_OPENING ||
remote_state == SMD_CHANNEL_OPENED)
continue;
qcom_smd_destroy_device(channel);
}
}
/*
* Parses an of_node describing an edge.
*/
static int qcom_smd_parse_edge(struct device *dev,
struct device_node *node,
struct qcom_smd_edge *edge)
{
struct device_node *syscon_np;
const char *key;
int irq;
int ret;
INIT_LIST_HEAD(&edge->channels);
spin_lock_init(&edge->channels_lock);
INIT_WORK(&edge->work, qcom_channel_state_worker);
edge->of_node = of_node_get(node);
irq = irq_of_parse_and_map(node, 0);
	if (irq <= 0) {
dev_err(dev, "required smd interrupt missing\n");
return -EINVAL;
}
ret = devm_request_irq(dev, irq,
qcom_smd_edge_intr, IRQF_TRIGGER_RISING,
node->name, edge);
if (ret) {
dev_err(dev, "failed to request smd irq\n");
return ret;
}
edge->irq = irq;
key = "qcom,smd-edge";
ret = of_property_read_u32(node, key, &edge->edge_id);
if (ret) {
dev_err(dev, "edge missing %s property\n", key);
return -EINVAL;
}
syscon_np = of_parse_phandle(node, "qcom,ipc", 0);
if (!syscon_np) {
dev_err(dev, "no qcom,ipc node\n");
return -ENODEV;
}
edge->ipc_regmap = syscon_node_to_regmap(syscon_np);
if (IS_ERR(edge->ipc_regmap))
return PTR_ERR(edge->ipc_regmap);
key = "qcom,ipc";
ret = of_property_read_u32_index(node, key, 1, &edge->ipc_offset);
if (ret < 0) {
dev_err(dev, "no offset in %s\n", key);
return -EINVAL;
}
ret = of_property_read_u32_index(node, key, 2, &edge->ipc_bit);
if (ret < 0) {
dev_err(dev, "no bit in %s\n", key);
return -EINVAL;
}
return 0;
}
static int qcom_smd_probe(struct platform_device *pdev)
{
struct qcom_smd_edge *edge;
struct device_node *node;
struct qcom_smd *smd;
size_t array_size;
int num_edges;
int ret;
int i = 0;
/* Wait for smem */
ret = qcom_smem_get(QCOM_SMEM_HOST_ANY, smem_items[0].alloc_tbl_id, NULL, NULL);
if (ret == -EPROBE_DEFER)
return ret;
num_edges = of_get_available_child_count(pdev->dev.of_node);
array_size = sizeof(*smd) + num_edges * sizeof(struct qcom_smd_edge);
smd = devm_kzalloc(&pdev->dev, array_size, GFP_KERNEL);
if (!smd)
return -ENOMEM;
smd->dev = &pdev->dev;
smd->num_edges = num_edges;
for_each_available_child_of_node(pdev->dev.of_node, node) {
edge = &smd->edges[i++];
edge->smd = smd;
ret = qcom_smd_parse_edge(&pdev->dev, node, edge);
if (ret)
continue;
edge->need_rescan = true;
schedule_work(&edge->work);
}
platform_set_drvdata(pdev, smd);
return 0;
}
/*
* Shut down all smd clients by making sure that each edge stops processing
* events and scanning for new channels, then call destroy on the devices.
*/
static int qcom_smd_remove(struct platform_device *pdev)
{
struct qcom_smd_channel *channel;
struct qcom_smd_edge *edge;
struct qcom_smd *smd = platform_get_drvdata(pdev);
int i;
for (i = 0; i < smd->num_edges; i++) {
edge = &smd->edges[i];
disable_irq(edge->irq);
cancel_work_sync(&edge->work);
list_for_each_entry(channel, &edge->channels, list) {
if (!channel->qsdev)
continue;
qcom_smd_destroy_device(channel);
}
}
return 0;
}
static const struct of_device_id qcom_smd_of_match[] = {
{ .compatible = "qcom,smd" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_smd_of_match);
static struct platform_driver qcom_smd_driver = {
.probe = qcom_smd_probe,
.remove = qcom_smd_remove,
.driver = {
.name = "qcom-smd",
.of_match_table = qcom_smd_of_match,
},
};
static int __init qcom_smd_init(void)
{
int ret;
ret = bus_register(&qcom_smd_bus);
if (ret) {
pr_err("failed to register smd bus: %d\n", ret);
return ret;
}
return platform_driver_register(&qcom_smd_driver);
}
postcore_initcall(qcom_smd_init);
static void __exit qcom_smd_exit(void)
{
platform_driver_unregister(&qcom_smd_driver);
bus_unregister(&qcom_smd_bus);
}
module_exit(qcom_smd_exit);
MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm Shared Memory Driver");
MODULE_LICENSE("GPL v2");
/*
* Copyright (c) 2015, Sony Mobile Communications AB.
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/hwspinlock.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smem.h>
/*
 * The Qualcomm shared memory system is an allocate-only heap structure that
 * consists of one or more memory areas that can be accessed by the processors
* in the SoC.
*
 * All systems contain a global heap, accessible by all processors in the SoC,
* with a table of contents data structure (@smem_header) at the beginning of
* the main shared memory block.
*
 * The global header contains metadata for allocations as well as a fixed list
* of 512 entries (@smem_global_entry) that can be initialized to reference
* parts of the shared memory space.
*
*
* In addition to this global heap a set of "private" heaps can be set up at
* boot time with access restrictions so that only certain processor pairs can
* access the data.
*
* These partitions are referenced from an optional partition table
* (@smem_ptable), that is found 4kB from the end of the main smem region. The
* partition table entries (@smem_ptable_entry) lists the involved processors
* (or hosts) and their location in the main shared memory region.
*
* Each partition starts with a header (@smem_partition_header) that identifies
* the partition and holds properties for the two internal memory regions. The
* two regions are cached and non-cached memory respectively. Each region
 * contains a linked list of allocation headers (@smem_private_entry) followed by
* their data.
*
* Items in the non-cached region are allocated from the start of the partition
* while items in the cached region are allocated from the end. The free area
* is hence the region between the cached and non-cached offsets.
*
*
* To synchronize allocations in the shared memory heaps a remote spinlock must
* be held - currently lock number 3 of the sfpb or tcsr is used for this on all
* platforms.
*
*/
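/*
 * Example (a minimal sketch; item number 442, remote_host and the struct
 * are made up): a client allocates an item once, tolerating -EEXIST if the
 * other side got there first, and may then look it up from any processor
 * that can see the heap:
 *
 *	struct foo_shared { u32 magic; u32 state; };
 *	struct foo_shared *shared;
 *	size_t size;
 *	int ret;
 *
 *	ret = qcom_smem_alloc(remote_host, 442, sizeof(*shared));
 *	if (ret < 0 && ret != -EEXIST)
 *		return ret;
 *
 *	ret = qcom_smem_get(remote_host, 442, (void **)&shared, &size);
 *	if (ret < 0)
 *		return ret;
 */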
/*
* Item 3 of the global heap contains an array of versions for the various
 * software components in the SoC. We verify that the boot loader version
 * matches the expected version (SMEM_EXPECTED_VERSION) as a sanity check.
*/
#define SMEM_ITEM_VERSION 3
#define SMEM_MASTER_SBL_VERSION_INDEX 7
#define SMEM_EXPECTED_VERSION 11
/*
* The first 8 items are only to be allocated by the boot loader while
* initializing the heap.
*/
#define SMEM_ITEM_LAST_FIXED 8
/* Highest accepted item number, for both global and private heaps */
#define SMEM_ITEM_COUNT 512
/* Processor/host identifier for the application processor */
#define SMEM_HOST_APPS 0
/* Max number of processors/hosts in a system */
#define SMEM_HOST_COUNT 9
/**
* struct smem_proc_comm - proc_comm communication struct (legacy)
* @command: current command to be executed
* @status: status of the currently requested command
* @params: parameters to the command
*/
struct smem_proc_comm {
u32 command;
u32 status;
u32 params[2];
};
/**
* struct smem_global_entry - entry to reference smem items on the heap
* @allocated: boolean to indicate if this entry is used
* @offset: offset to the allocated space
* @size: size of the allocated space, 8 byte aligned
* @aux_base: base address for the memory region used by this unit, or 0 for
 * the default region; bits 1:0 are reserved
*/
struct smem_global_entry {
u32 allocated;
u32 offset;
u32 size;
u32 aux_base; /* bits 1:0 reserved */
};
#define AUX_BASE_MASK 0xfffffffc
/**
* struct smem_header - header found in beginning of primary smem region
* @proc_comm: proc_comm communication interface (legacy)
* @version: array of versions for the various subsystems
* @initialized: boolean to indicate that smem is initialized
* @free_offset: index of the first unallocated byte in smem
* @available: number of bytes available for allocation
* @reserved: reserved field, must be 0
 * @toc: array of references to items
*/
struct smem_header {
struct smem_proc_comm proc_comm[4];
u32 version[32];
u32 initialized;
u32 free_offset;
u32 available;
u32 reserved;
struct smem_global_entry toc[SMEM_ITEM_COUNT];
};
/**
* struct smem_ptable_entry - one entry in the @smem_ptable list
* @offset: offset, within the main shared memory region, of the partition
* @size: size of the partition
* @flags: flags for the partition (currently unused)
* @host0: first processor/host with access to this partition
* @host1: second processor/host with access to this partition
* @reserved: reserved entries for later use
*/
struct smem_ptable_entry {
u32 offset;
u32 size;
u32 flags;
u16 host0;
u16 host1;
u32 reserved[8];
};
/**
* struct smem_ptable - partition table for the private partitions
* @magic: magic number, must be SMEM_PTABLE_MAGIC
* @version: version of the partition table
* @num_entries: number of partitions in the table
* @reserved: for now reserved entries
* @entry: list of @smem_ptable_entry for the @num_entries partitions
*/
struct smem_ptable {
u32 magic;
u32 version;
u32 num_entries;
u32 reserved[5];
struct smem_ptable_entry entry[];
};
#define SMEM_PTABLE_MAGIC 0x434f5424 /* "$TOC" */
/**
* struct smem_partition_header - header of the partitions
* @magic: magic number, must be SMEM_PART_MAGIC
* @host0: first processor/host with access to this partition
* @host1: second processor/host with access to this partition
* @size: size of the partition
* @offset_free_uncached: offset to the first free byte of uncached memory in
* this partition
* @offset_free_cached: offset to the first free byte of cached memory in this
* partition
* @reserved: for now reserved entries
*/
struct smem_partition_header {
u32 magic;
u16 host0;
u16 host1;
u32 size;
u32 offset_free_uncached;
u32 offset_free_cached;
u32 reserved[3];
};
#define SMEM_PART_MAGIC 0x54525024 /* "$PRT" */
/**
* struct smem_private_entry - header of each item in the private partition
* @canary: magic number, must be SMEM_PRIVATE_CANARY
* @item: identifying number of the smem item
* @size: size of the data, including padding bytes
 * @padding_data: number of bytes of padding after the data
* @padding_hdr: number of bytes of padding between the header and the data
* @reserved: for now reserved entry
*/
struct smem_private_entry {
u16 canary;
u16 item;
u32 size; /* includes padding bytes */
u16 padding_data;
u16 padding_hdr;
u32 reserved;
};
#define SMEM_PRIVATE_CANARY 0xa5a5
/**
* struct smem_region - representation of a chunk of memory used for smem
* @aux_base: identifier of aux_mem base
* @virt_base: virtual base address of memory with this aux_mem identifier
* @size: size of the memory region
*/
struct smem_region {
u32 aux_base;
void __iomem *virt_base;
size_t size;
};
/**
* struct qcom_smem - device data for the smem device
* @dev: device pointer
* @hwlock: reference to a hwspinlock
* @partitions: list of pointers to partitions affecting the current
* processor/host
* @num_regions: number of @regions
* @regions: list of the memory regions defining the shared memory
*/
struct qcom_smem {
struct device *dev;
struct hwspinlock *hwlock;
struct smem_partition_header *partitions[SMEM_HOST_COUNT];
unsigned num_regions;
struct smem_region regions[0];
};
/* Pointer to the one and only smem handle */
static struct qcom_smem *__smem;
/* Timeout (ms) for the trylock of remote spinlocks */
#define HWSPINLOCK_TIMEOUT 1000
static int qcom_smem_alloc_private(struct qcom_smem *smem,
unsigned host,
unsigned item,
size_t size)
{
struct smem_partition_header *phdr;
struct smem_private_entry *hdr;
size_t alloc_size;
void *p;
/* We're not going to find it if there's no matching partition */
if (host >= SMEM_HOST_COUNT || !smem->partitions[host])
return -ENOENT;
phdr = smem->partitions[host];
p = (void *)phdr + sizeof(*phdr);
while (p < (void *)phdr + phdr->offset_free_uncached) {
hdr = p;
if (hdr->canary != SMEM_PRIVATE_CANARY) {
dev_err(smem->dev,
"Found invalid canary in host %d partition\n",
host);
return -EINVAL;
}
if (hdr->item == item)
return -EEXIST;
p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
}
/* Check that we don't grow into the cached region */
alloc_size = sizeof(*hdr) + ALIGN(size, 8);
if (p + alloc_size >= (void *)phdr + phdr->offset_free_cached) {
dev_err(smem->dev, "Out of memory\n");
return -ENOSPC;
}
hdr = p;
hdr->canary = SMEM_PRIVATE_CANARY;
hdr->item = item;
hdr->size = ALIGN(size, 8);
hdr->padding_data = hdr->size - size;
hdr->padding_hdr = 0;
/*
* Ensure the header is written before we advance the free offset, so
 * that remote processors that do not take the remote spinlock still
 * get a consistent view of the linked list.
*/
wmb();
phdr->offset_free_uncached += alloc_size;
return 0;
}
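/*
 * Worked example (hypothetical size): a request for size = 10 is padded to
 * hdr->size = ALIGN(10, 8) = 16, with hdr->padding_data = 6 trailing bytes
 * recorded so that qcom_smem_get_private() can report the original 10-byte
 * length as hdr->size - hdr->padding_data.
 */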
static int qcom_smem_alloc_global(struct qcom_smem *smem,
unsigned item,
size_t size)
{
struct smem_header *header;
struct smem_global_entry *entry;
if (WARN_ON(item >= SMEM_ITEM_COUNT))
return -EINVAL;
header = smem->regions[0].virt_base;
entry = &header->toc[item];
if (entry->allocated)
return -EEXIST;
size = ALIGN(size, 8);
if (WARN_ON(size > header->available))
return -ENOMEM;
entry->offset = header->free_offset;
entry->size = size;
/*
* Ensure the header is consistent before we mark the item allocated,
* so that remote processors will get a consistent view of the item
* even though they do not take the spinlock on read.
*/
wmb();
entry->allocated = 1;
header->free_offset += size;
header->available -= size;
return 0;
}
/**
* qcom_smem_alloc() - allocate space for a smem item
* @host: remote processor id, or -1
* @item: smem item handle
* @size: number of bytes to be allocated
*
* Allocate space for a given smem item of size @size, given that the item is
* not yet allocated.
*/
int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
{
unsigned long flags;
int ret;
if (!__smem)
return -EPROBE_DEFER;
if (item < SMEM_ITEM_LAST_FIXED) {
dev_err(__smem->dev,
"Rejecting allocation of static entry %d\n", item);
return -EINVAL;
}
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ret;
ret = qcom_smem_alloc_private(__smem, host, item, size);
if (ret == -ENOENT)
ret = qcom_smem_alloc_global(__smem, item, size);
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
return ret;
}
EXPORT_SYMBOL(qcom_smem_alloc);
static int qcom_smem_get_global(struct qcom_smem *smem,
unsigned item,
void **ptr,
size_t *size)
{
struct smem_header *header;
struct smem_region *area;
struct smem_global_entry *entry;
u32 aux_base;
unsigned i;
if (WARN_ON(item >= SMEM_ITEM_COUNT))
return -EINVAL;
header = smem->regions[0].virt_base;
entry = &header->toc[item];
if (!entry->allocated)
return -ENXIO;
if (ptr != NULL) {
aux_base = entry->aux_base & AUX_BASE_MASK;
for (i = 0; i < smem->num_regions; i++) {
area = &smem->regions[i];
if (area->aux_base == aux_base || !aux_base) {
*ptr = area->virt_base + entry->offset;
break;
}
}
}
if (size != NULL)
*size = entry->size;
return 0;
}
static int qcom_smem_get_private(struct qcom_smem *smem,
unsigned host,
unsigned item,
void **ptr,
size_t *size)
{
struct smem_partition_header *phdr;
struct smem_private_entry *hdr;
void *p;
/* We're not going to find it if there's no matching partition */
if (host >= SMEM_HOST_COUNT || !smem->partitions[host])
return -ENOENT;
phdr = smem->partitions[host];
p = (void *)phdr + sizeof(*phdr);
while (p < (void *)phdr + phdr->offset_free_uncached) {
hdr = p;
if (hdr->canary != SMEM_PRIVATE_CANARY) {
dev_err(smem->dev,
"Found invalid canary in host %d partition\n",
host);
return -EINVAL;
}
if (hdr->item == item) {
if (ptr != NULL)
*ptr = p + sizeof(*hdr) + hdr->padding_hdr;
if (size != NULL)
*size = hdr->size - hdr->padding_data;
return 0;
}
p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
}
return -ENOENT;
}
/**
 * qcom_smem_get() - resolve pointer and size of a smem item
* @host: the remote processor, or -1
* @item: smem item handle
* @ptr: pointer to be filled out with address of the item
* @size: pointer to be filled out with size of the item
*
* Looks up pointer and size of a smem item.
*/
int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size)
{
unsigned long flags;
int ret;
if (!__smem)
return -EPROBE_DEFER;
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ret;
ret = qcom_smem_get_private(__smem, host, item, ptr, size);
if (ret == -ENOENT)
ret = qcom_smem_get_global(__smem, item, ptr, size);
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
return ret;
}
EXPORT_SYMBOL(qcom_smem_get);
/**
* qcom_smem_get_free_space() - retrieve amount of free space in a partition
* @host: the remote processor identifying a partition, or -1
*
* To be used by smem clients as a quick way to determine if any new
 * allocations have been made.
*/
int qcom_smem_get_free_space(unsigned host)
{
struct smem_partition_header *phdr;
struct smem_header *header;
unsigned ret;
if (!__smem)
return -EPROBE_DEFER;
if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
phdr = __smem->partitions[host];
ret = phdr->offset_free_cached - phdr->offset_free_uncached;
} else {
header = __smem->regions[0].virt_base;
ret = header->available;
}
return ret;
}
EXPORT_SYMBOL(qcom_smem_get_free_space);
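/*
 * Example: the smd edge interrupt handler above caches this value and only
 * rescans the allocation table when it changes:
 *
 *	available = qcom_smem_get_free_space(edge->edge_id);
 *	if (available != edge->smem_available) {
 *		edge->smem_available = available;
 *		edge->need_rescan = true;
 *	}
 */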
static int qcom_smem_get_sbl_version(struct qcom_smem *smem)
{
unsigned *versions;
size_t size;
int ret;
ret = qcom_smem_get_global(smem, SMEM_ITEM_VERSION,
(void **)&versions, &size);
if (ret < 0) {
dev_err(smem->dev, "Unable to read the version item\n");
return -ENOENT;
}
if (size < sizeof(unsigned) * SMEM_MASTER_SBL_VERSION_INDEX) {
dev_err(smem->dev, "Version item is too small\n");
return -EINVAL;
}
return versions[SMEM_MASTER_SBL_VERSION_INDEX];
}
static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
unsigned local_host)
{
struct smem_partition_header *header;
struct smem_ptable_entry *entry;
struct smem_ptable *ptable;
unsigned remote_host;
int i;
ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
if (ptable->magic != SMEM_PTABLE_MAGIC)
return 0;
if (ptable->version != 1) {
dev_err(smem->dev,
"Unsupported partition header version %d\n",
ptable->version);
return -EINVAL;
}
for (i = 0; i < ptable->num_entries; i++) {
entry = &ptable->entry[i];
if (entry->host0 != local_host && entry->host1 != local_host)
continue;
if (!entry->offset)
continue;
if (!entry->size)
continue;
if (entry->host0 == local_host)
remote_host = entry->host1;
else
remote_host = entry->host0;
if (remote_host >= SMEM_HOST_COUNT) {
dev_err(smem->dev,
"Invalid remote host %d\n",
remote_host);
return -EINVAL;
}
if (smem->partitions[remote_host]) {
dev_err(smem->dev,
"Already found a partition for host %d\n",
remote_host);
return -EINVAL;
}
header = smem->regions[0].virt_base + entry->offset;
if (header->magic != SMEM_PART_MAGIC) {
dev_err(smem->dev,
"Partition %d has invalid magic\n", i);
return -EINVAL;
}
if (header->host0 != local_host && header->host1 != local_host) {
dev_err(smem->dev,
"Partition %d hosts are invalid\n", i);
return -EINVAL;
}
if (header->host0 != remote_host && header->host1 != remote_host) {
dev_err(smem->dev,
"Partition %d hosts are invalid\n", i);
return -EINVAL;
}
if (header->size != entry->size) {
dev_err(smem->dev,
"Partition %d has invalid size\n", i);
return -EINVAL;
}
if (header->offset_free_uncached > header->size) {
dev_err(smem->dev,
"Partition %d has invalid free pointer\n", i);
return -EINVAL;
}
smem->partitions[remote_host] = header;
}
return 0;
}
static int qcom_smem_count_mem_regions(struct platform_device *pdev)
{
struct resource *res;
int num_regions = 0;
int i;
for (i = 0; i < pdev->num_resources; i++) {
res = &pdev->resource[i];
if (resource_type(res) == IORESOURCE_MEM)
num_regions++;
}
return num_regions;
}
static int qcom_smem_probe(struct platform_device *pdev)
{
struct smem_header *header;
struct device_node *np;
struct qcom_smem *smem;
struct resource *res;
struct resource r;
size_t array_size;
int num_regions = 0;
int hwlock_id;
u32 version;
int ret;
int i;
num_regions = qcom_smem_count_mem_regions(pdev) + 1;
array_size = num_regions * sizeof(struct smem_region);
smem = devm_kzalloc(&pdev->dev, sizeof(*smem) + array_size, GFP_KERNEL);
if (!smem)
return -ENOMEM;
smem->dev = &pdev->dev;
smem->num_regions = num_regions;
np = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
if (!np) {
dev_err(&pdev->dev, "No memory-region specified\n");
return -EINVAL;
}
ret = of_address_to_resource(np, 0, &r);
of_node_put(np);
if (ret)
return ret;
smem->regions[0].aux_base = (u32)r.start;
smem->regions[0].size = resource_size(&r);
smem->regions[0].virt_base = devm_ioremap_nocache(&pdev->dev,
r.start,
resource_size(&r));
if (!smem->regions[0].virt_base)
return -ENOMEM;
for (i = 1; i < num_regions; i++) {
res = platform_get_resource(pdev, IORESOURCE_MEM, i - 1);
smem->regions[i].aux_base = (u32)res->start;
smem->regions[i].size = resource_size(res);
smem->regions[i].virt_base = devm_ioremap_nocache(&pdev->dev,
res->start,
resource_size(res));
if (!smem->regions[i].virt_base)
return -ENOMEM;
}
header = smem->regions[0].virt_base;
if (header->initialized != 1 || header->reserved) {
dev_err(&pdev->dev, "SMEM is not initialized by SBL\n");
return -EINVAL;
}
version = qcom_smem_get_sbl_version(smem);
if (version >> 16 != SMEM_EXPECTED_VERSION) {
dev_err(&pdev->dev, "Unsupported SMEM version 0x%x\n", version);
return -EINVAL;
}
ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS);
if (ret < 0)
return ret;
hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
if (hwlock_id < 0) {
dev_err(&pdev->dev, "failed to retrieve hwlock\n");
return hwlock_id;
}
smem->hwlock = hwspin_lock_request_specific(hwlock_id);
if (!smem->hwlock)
return -ENXIO;
__smem = smem;
return 0;
}
static int qcom_smem_remove(struct platform_device *pdev)
{
	/* Free the hwspinlock before dropping the global handle */
	hwspin_lock_free(__smem->hwlock);
	__smem = NULL;
return 0;
}
static const struct of_device_id qcom_smem_of_match[] = {
{ .compatible = "qcom,smem" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_smem_of_match);
static struct platform_driver qcom_smem_driver = {
.probe = qcom_smem_probe,
.remove = qcom_smem_remove,
.driver = {
.name = "qcom-smem",
.of_match_table = qcom_smem_of_match,
.suppress_bind_attrs = true,
},
};
static int __init qcom_smem_init(void)
{
return platform_driver_register(&qcom_smem_driver);
}
arch_initcall(qcom_smem_init);
static void __exit qcom_smem_exit(void)
{
platform_driver_unregister(&qcom_smem_driver);
}
module_exit(qcom_smem_exit);
MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm Shared Memory Manager");
MODULE_LICENSE("GPL v2");
obj-$(CONFIG_ARCH_TEGRA) += fuse/
obj-y += fuse/
obj-$(CONFIG_ARCH_TEGRA) += common.o
obj-$(CONFIG_ARCH_TEGRA) += pmc.o
obj-y += common.o
obj-y += pmc.o
......@@ -15,6 +15,8 @@ static const struct of_device_id tegra_machine_match[] = {
{ .compatible = "nvidia,tegra30", },
{ .compatible = "nvidia,tegra114", },
{ .compatible = "nvidia,tegra124", },
{ .compatible = "nvidia,tegra132", },
{ .compatible = "nvidia,tegra210", },
{ }
};
......@@ -6,3 +6,5 @@ obj-$(CONFIG_ARCH_TEGRA_2x_SOC) += speedo-tegra20.o
obj-$(CONFIG_ARCH_TEGRA_3x_SOC) += speedo-tegra30.o
obj-$(CONFIG_ARCH_TEGRA_114_SOC) += speedo-tegra114.o
obj-$(CONFIG_ARCH_TEGRA_124_SOC) += speedo-tegra124.o
obj-$(CONFIG_ARCH_TEGRA_132_SOC) += speedo-tegra124.o
obj-$(CONFIG_ARCH_TEGRA_210_SOC) += speedo-tegra210.o
......@@ -15,9 +15,10 @@
*
*/
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/kobject.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_address.h>
......@@ -28,8 +29,6 @@
#include "fuse.h"
static u32 (*fuse_readl)(const unsigned int offset);
static int fuse_size;
struct tegra_sku_info tegra_sku_info;
EXPORT_SYMBOL(tegra_sku_info);
......@@ -42,11 +41,11 @@ static const char *tegra_revision_name[TEGRA_REVISION_MAX] = {
[TEGRA_REVISION_A04] = "A04",
};
static u8 fuse_readb(const unsigned int offset)
static u8 fuse_readb(struct tegra_fuse *fuse, unsigned int offset)
{
u32 val;
val = fuse_readl(round_down(offset, 4));
val = fuse->read(fuse, round_down(offset, 4));
val >>= (offset % 4) * 8;
val &= 0xff;
......@@ -54,19 +53,21 @@ static u8 fuse_readb(const unsigned int offset)
}
static ssize_t fuse_read(struct file *fd, struct kobject *kobj,
struct bin_attribute *attr, char *buf,
loff_t pos, size_t size)
struct bin_attribute *attr, char *buf,
loff_t pos, size_t size)
{
struct device *dev = kobj_to_dev(kobj);
struct tegra_fuse *fuse = dev_get_drvdata(dev);
int i;
if (pos < 0 || pos >= fuse_size)
if (pos < 0 || pos >= attr->size)
return 0;
if (size > fuse_size - pos)
size = fuse_size - pos;
if (size > attr->size - pos)
size = attr->size - pos;
for (i = 0; i < size; i++)
buf[i] = fuse_readb(pos + i);
buf[i] = fuse_readb(fuse, pos + i);
return i;
}
......@@ -76,89 +77,239 @@ static struct bin_attribute fuse_bin_attr = {
.read = fuse_read,
};
static int tegra_fuse_create_sysfs(struct device *dev, unsigned int size,
const struct tegra_fuse_info *info)
{
fuse_bin_attr.size = size;
return device_create_bin_file(dev, &fuse_bin_attr);
}
static const struct of_device_id car_match[] __initconst = {
{ .compatible = "nvidia,tegra20-car", },
{ .compatible = "nvidia,tegra30-car", },
{ .compatible = "nvidia,tegra114-car", },
{ .compatible = "nvidia,tegra124-car", },
{ .compatible = "nvidia,tegra132-car", },
{ .compatible = "nvidia,tegra210-car", },
{},
};
static void tegra_enable_fuse_clk(void __iomem *base)
static struct tegra_fuse *fuse = &(struct tegra_fuse) {
.base = NULL,
.soc = NULL,
};
static const struct of_device_id tegra_fuse_match[] = {
#ifdef CONFIG_ARCH_TEGRA_210_SOC
{ .compatible = "nvidia,tegra210-efuse", .data = &tegra210_fuse_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_132_SOC
{ .compatible = "nvidia,tegra132-efuse", .data = &tegra124_fuse_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_124_SOC
{ .compatible = "nvidia,tegra124-efuse", .data = &tegra124_fuse_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_114_SOC
{ .compatible = "nvidia,tegra114-efuse", .data = &tegra114_fuse_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
{ .compatible = "nvidia,tegra30-efuse", .data = &tegra30_fuse_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
{ .compatible = "nvidia,tegra20-efuse", .data = &tegra20_fuse_soc },
#endif
{ /* sentinel */ }
};
static int tegra_fuse_probe(struct platform_device *pdev)
{
u32 reg;
void __iomem *base = fuse->base;
struct resource *res;
int err;
/* take over the memory region from the early initialization */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
fuse->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(fuse->base))
return PTR_ERR(fuse->base);
fuse->clk = devm_clk_get(&pdev->dev, "fuse");
if (IS_ERR(fuse->clk)) {
dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
PTR_ERR(fuse->clk));
return PTR_ERR(fuse->clk);
}
reg = readl_relaxed(base + 0x48);
reg |= 1 << 28;
writel(reg, base + 0x48);
platform_set_drvdata(pdev, fuse);
fuse->dev = &pdev->dev;
/*
* Enable FUSE clock. This needs to be hardcoded because the clock
* subsystem is not active during early boot.
*/
reg = readl(base + 0x14);
reg |= 1 << 7;
writel(reg, base + 0x14);
if (fuse->soc->probe) {
err = fuse->soc->probe(fuse);
if (err < 0)
return err;
}
if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
fuse->soc->info))
return -ENODEV;
/* release the early I/O memory mapping */
iounmap(base);
return 0;
}
static struct platform_driver tegra_fuse_driver = {
.driver = {
.name = "tegra-fuse",
.of_match_table = tegra_fuse_match,
.suppress_bind_attrs = true,
},
.probe = tegra_fuse_probe,
};
module_platform_driver(tegra_fuse_driver);
bool __init tegra_fuse_read_spare(unsigned int spare)
{
unsigned int offset = fuse->soc->info->spare + spare * 4;
return fuse->read_early(fuse, offset) & 1;
}
u32 __init tegra_fuse_read_early(unsigned int offset)
{
return fuse->read_early(fuse, offset);
}
int tegra_fuse_readl(unsigned long offset, u32 *value)
{
if (!fuse_readl)
if (!fuse->read)
return -EPROBE_DEFER;
*value = fuse_readl(offset);
*value = fuse->read(fuse, offset);
return 0;
}
EXPORT_SYMBOL(tegra_fuse_readl);
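/*
 * Example (a minimal sketch; the 0x1a0 offset and its meaning are
 * hypothetical): consumers must be prepared to defer until the fuse driver
 * has probed:
 *
 *	u32 value;
 *	int err;
 *
 *	err = tegra_fuse_readl(0x1a0, &value);
 *	if (err == -EPROBE_DEFER)
 *		return err;
 */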
int tegra_fuse_create_sysfs(struct device *dev, int size,
u32 (*readl)(const unsigned int offset))
static void tegra_enable_fuse_clk(void __iomem *base)
{
if (fuse_size)
return -ENODEV;
fuse_bin_attr.size = size;
fuse_bin_attr.read = fuse_read;
u32 reg;
fuse_size = size;
fuse_readl = readl;
reg = readl_relaxed(base + 0x48);
reg |= 1 << 28;
writel(reg, base + 0x48);
return device_create_bin_file(dev, &fuse_bin_attr);
/*
* Enable FUSE clock. This needs to be hardcoded because the clock
* subsystem is not active during early boot.
*/
reg = readl(base + 0x14);
reg |= 1 << 7;
writel(reg, base + 0x14);
}
static int __init tegra_init_fuse(void)
{
const struct of_device_id *match;
struct device_node *np;
void __iomem *car_base;
if (!soc_is_tegra())
return 0;
struct resource regs;
tegra_init_apbmisc();
np = of_find_matching_node(NULL, car_match);
car_base = of_iomap(np, 0);
if (car_base) {
tegra_enable_fuse_clk(car_base);
iounmap(car_base);
np = of_find_matching_node_and_match(NULL, tegra_fuse_match, &match);
if (!np) {
/*
* Fall back to legacy initialization for 32-bit ARM only. All
* 64-bit ARM device tree files for Tegra are required to have
* a FUSE node.
*
* This is for backwards-compatibility with old device trees
* that didn't contain a FUSE node.
*/
if (IS_ENABLED(CONFIG_ARM) && soc_is_tegra()) {
u8 chip = tegra_get_chip_id();
regs.start = 0x7000f800;
regs.end = 0x7000fbff;
regs.flags = IORESOURCE_MEM;
switch (chip) {
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
case TEGRA20:
fuse->soc = &tegra20_fuse_soc;
break;
#endif
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
case TEGRA30:
fuse->soc = &tegra30_fuse_soc;
break;
#endif
#ifdef CONFIG_ARCH_TEGRA_114_SOC
case TEGRA114:
fuse->soc = &tegra114_fuse_soc;
break;
#endif
#ifdef CONFIG_ARCH_TEGRA_124_SOC
case TEGRA124:
fuse->soc = &tegra124_fuse_soc;
break;
#endif
default:
pr_warn("Unsupported SoC: %02x\n", chip);
break;
}
} else {
/*
* At this point we're not running on Tegra, so play
* nice with multi-platform kernels.
*/
return 0;
}
} else {
pr_err("Could not enable fuse clk. ioremap tegra car failed.\n");
/*
* Extract information from the device tree if we've found a
* matching node.
*/
if (of_address_to_resource(np, 0, &regs) < 0) {
pr_err("failed to get FUSE register\n");
return -ENXIO;
}
fuse->soc = match->data;
}
np = of_find_matching_node(NULL, car_match);
if (np) {
void __iomem *base = of_iomap(np, 0);
if (base) {
tegra_enable_fuse_clk(base);
iounmap(base);
} else {
pr_err("failed to map clock registers\n");
return -ENXIO;
}
}
fuse->base = ioremap_nocache(regs.start, resource_size(&regs));
if (!fuse->base) {
pr_err("failed to map FUSE registers\n");
return -ENXIO;
}
if (tegra_get_chip_id() == TEGRA20)
tegra20_init_fuse_early();
else
tegra30_init_fuse_early();
fuse->soc->init(fuse);
pr_info("Tegra Revision: %s SKU: %d CPU Process: %d Core Process: %d\n",
pr_info("Tegra Revision: %s SKU: %d CPU Process: %d SoC Process: %d\n",
tegra_revision_name[tegra_sku_info.revision],
tegra_sku_info.sku_id, tegra_sku_info.cpu_process_id,
tegra_sku_info.core_process_id);
pr_debug("Tegra CPU Speedo ID %d, Soc Speedo ID %d\n",
tegra_sku_info.cpu_speedo_id, tegra_sku_info.soc_speedo_id);
tegra_sku_info.soc_process_id);
pr_debug("Tegra CPU Speedo ID %d, SoC Speedo ID %d\n",
tegra_sku_info.cpu_speedo_id, tegra_sku_info.soc_speedo_id);
return 0;
}
......@@ -34,159 +34,107 @@
#include "fuse.h"
#define FUSE_BEGIN 0x100
#define FUSE_SIZE 0x1f8
#define FUSE_UID_LOW 0x08
#define FUSE_UID_HIGH 0x0c
static phys_addr_t fuse_phys;
static struct clk *fuse_clk;
static void __iomem __initdata *fuse_base;
static DEFINE_MUTEX(apb_dma_lock);
static DECLARE_COMPLETION(apb_dma_wait);
static struct dma_chan *apb_dma_chan;
static struct dma_slave_config dma_sconfig;
static u32 *apb_buffer;
static dma_addr_t apb_buffer_phys;
static u32 tegra20_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
{
return readl_relaxed(fuse->base + FUSE_BEGIN + offset);
}
static void apb_dma_complete(void *args)
{
complete(&apb_dma_wait);
struct tegra_fuse *fuse = args;
complete(&fuse->apbdma.wait);
}
static u32 tegra20_fuse_readl(const unsigned int offset)
static u32 tegra20_fuse_read(struct tegra_fuse *fuse, unsigned int offset)
{
int ret;
u32 val = 0;
unsigned long flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK;
struct dma_async_tx_descriptor *dma_desc;
unsigned long time_left;
u32 value = 0;
int err;
mutex_lock(&fuse->apbdma.lock);
mutex_lock(&apb_dma_lock);
fuse->apbdma.config.src_addr = fuse->apbdma.phys + FUSE_BEGIN + offset;
dma_sconfig.src_addr = fuse_phys + FUSE_BEGIN + offset;
ret = dmaengine_slave_config(apb_dma_chan, &dma_sconfig);
if (ret)
err = dmaengine_slave_config(fuse->apbdma.chan, &fuse->apbdma.config);
if (err)
goto out;
dma_desc = dmaengine_prep_slave_single(apb_dma_chan, apb_buffer_phys,
sizeof(u32), DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
dma_desc = dmaengine_prep_slave_single(fuse->apbdma.chan,
fuse->apbdma.phys,
sizeof(u32), DMA_DEV_TO_MEM,
flags);
if (!dma_desc)
goto out;
dma_desc->callback = apb_dma_complete;
dma_desc->callback_param = NULL;
dma_desc->callback_param = fuse;
reinit_completion(&apb_dma_wait);
reinit_completion(&fuse->apbdma.wait);
clk_prepare_enable(fuse_clk);
clk_prepare_enable(fuse->clk);
dmaengine_submit(dma_desc);
dma_async_issue_pending(apb_dma_chan);
time_left = wait_for_completion_timeout(&apb_dma_wait,
dma_async_issue_pending(fuse->apbdma.chan);
time_left = wait_for_completion_timeout(&fuse->apbdma.wait,
msecs_to_jiffies(50));
if (WARN(time_left == 0, "apb read dma timed out"))
dmaengine_terminate_all(apb_dma_chan);
dmaengine_terminate_all(fuse->apbdma.chan);
else
val = *apb_buffer;
value = *fuse->apbdma.virt;
clk_disable_unprepare(fuse_clk);
out:
mutex_unlock(&apb_dma_lock);
clk_disable_unprepare(fuse->clk);
return val;
out:
mutex_unlock(&fuse->apbdma.lock);
return value;
}
static const struct of_device_id tegra20_fuse_of_match[] = {
{ .compatible = "nvidia,tegra20-efuse" },
{},
};
static int apb_dma_init(void)
static int tegra20_fuse_probe(struct tegra_fuse *fuse)
{
dma_cap_mask_t mask;
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
apb_dma_chan = dma_request_channel(mask, NULL, NULL);
if (!apb_dma_chan)
fuse->apbdma.chan = dma_request_channel(mask, NULL, NULL);
if (!fuse->apbdma.chan)
return -EPROBE_DEFER;
apb_buffer = dma_alloc_coherent(NULL, sizeof(u32), &apb_buffer_phys,
GFP_KERNEL);
if (!apb_buffer) {
dma_release_channel(apb_dma_chan);
fuse->apbdma.virt = dma_alloc_coherent(fuse->dev, sizeof(u32),
&fuse->apbdma.phys,
GFP_KERNEL);
if (!fuse->apbdma.virt) {
dma_release_channel(fuse->apbdma.chan);
return -ENOMEM;
}
dma_sconfig.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
dma_sconfig.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
dma_sconfig.src_maxburst = 1;
dma_sconfig.dst_maxburst = 1;
fuse->apbdma.config.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
fuse->apbdma.config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
fuse->apbdma.config.src_maxburst = 1;
fuse->apbdma.config.dst_maxburst = 1;
return 0;
}
static int tegra20_fuse_probe(struct platform_device *pdev)
{
struct resource *res;
int err;
fuse_clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(fuse_clk)) {
dev_err(&pdev->dev, "missing clock");
return PTR_ERR(fuse_clk);
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -EINVAL;
fuse_phys = res->start;
err = apb_dma_init();
if (err)
return err;
if (tegra_fuse_create_sysfs(&pdev->dev, FUSE_SIZE, tegra20_fuse_readl))
return -ENODEV;
dev_dbg(&pdev->dev, "loaded\n");
init_completion(&fuse->apbdma.wait);
mutex_init(&fuse->apbdma.lock);
fuse->read = tegra20_fuse_read;
return 0;
}
static struct platform_driver tegra20_fuse_driver = {
.probe = tegra20_fuse_probe,
.driver = {
.name = "tegra20_fuse",
.of_match_table = tegra20_fuse_of_match,
}
static const struct tegra_fuse_info tegra20_fuse_info = {
.read = tegra20_fuse_read,
.size = 0x1f8,
.spare = 0x100,
};
static int __init tegra20_fuse_init(void)
{
return platform_driver_register(&tegra20_fuse_driver);
}
postcore_initcall(tegra20_fuse_init);
/* Early boot code. This code is called before the devices are created */
u32 __init tegra20_fuse_early(const unsigned int offset)
{
return readl_relaxed(fuse_base + FUSE_BEGIN + offset);
}
bool __init tegra20_spare_fuse_early(int spare_bit)
{
u32 offset = spare_bit * 4;
bool value;
value = tegra20_fuse_early(offset + 0x100);
return value;
}
static void __init tegra20_fuse_add_randomness(void)
{
u32 randomness[7];
......@@ -195,22 +143,27 @@ static void __init tegra20_fuse_add_randomness(void)
randomness[1] = tegra_read_straps();
randomness[2] = tegra_read_chipid();
randomness[3] = tegra_sku_info.cpu_process_id << 16;
randomness[3] |= tegra_sku_info.core_process_id;
randomness[3] |= tegra_sku_info.soc_process_id;
randomness[4] = tegra_sku_info.cpu_speedo_id << 16;
randomness[4] |= tegra_sku_info.soc_speedo_id;
randomness[5] = tegra20_fuse_early(FUSE_UID_LOW);
randomness[6] = tegra20_fuse_early(FUSE_UID_HIGH);
randomness[5] = tegra_fuse_read_early(FUSE_UID_LOW);
randomness[6] = tegra_fuse_read_early(FUSE_UID_HIGH);
add_device_randomness(randomness, sizeof(randomness));
}
void __init tegra20_init_fuse_early(void)
static void __init tegra20_fuse_init(struct tegra_fuse *fuse)
{
fuse_base = ioremap(TEGRA_FUSE_BASE, TEGRA_FUSE_SIZE);
fuse->read_early = tegra20_fuse_read_early;
tegra_init_revision();
tegra20_init_speedo_data(&tegra_sku_info);
fuse->soc->speedo_init(&tegra_sku_info);
tegra20_fuse_add_randomness();
iounmap(fuse_base);
}
const struct tegra_fuse_soc tegra20_fuse_soc = {
.init = tegra20_fuse_init,
.speedo_init = tegra20_init_speedo_data,
.probe = tegra20_fuse_probe,
.info = &tegra20_fuse_info,
};
......@@ -42,113 +42,33 @@
#define FUSE_HAS_REVISION_INFO BIT(0)
enum speedo_idx {
SPEEDO_TEGRA30 = 0,
SPEEDO_TEGRA114,
SPEEDO_TEGRA124,
};
struct tegra_fuse_info {
int size;
int spare_bit;
enum speedo_idx speedo_idx;
};
static void __iomem *fuse_base;
static struct clk *fuse_clk;
static const struct tegra_fuse_info *fuse_info;
u32 tegra30_fuse_readl(const unsigned int offset)
#if defined(CONFIG_ARCH_TEGRA_3x_SOC) || \
defined(CONFIG_ARCH_TEGRA_114_SOC) || \
defined(CONFIG_ARCH_TEGRA_124_SOC) || \
defined(CONFIG_ARCH_TEGRA_132_SOC) || \
defined(CONFIG_ARCH_TEGRA_210_SOC)
static u32 tegra30_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
{
u32 val;
/*
* early in the boot, the fuse clock will be enabled by
* tegra_init_fuse()
*/
if (fuse_clk)
clk_prepare_enable(fuse_clk);
val = readl_relaxed(fuse_base + FUSE_BEGIN + offset);
if (fuse_clk)
clk_disable_unprepare(fuse_clk);
return val;
return readl_relaxed(fuse->base + FUSE_BEGIN + offset);
}
static const struct tegra_fuse_info tegra30_info = {
.size = 0x2a4,
.spare_bit = 0x144,
.speedo_idx = SPEEDO_TEGRA30,
};
static const struct tegra_fuse_info tegra114_info = {
.size = 0x2a0,
.speedo_idx = SPEEDO_TEGRA114,
};
static const struct tegra_fuse_info tegra124_info = {
.size = 0x300,
.speedo_idx = SPEEDO_TEGRA124,
};
static const struct of_device_id tegra30_fuse_of_match[] = {
{ .compatible = "nvidia,tegra30-efuse", .data = &tegra30_info },
{ .compatible = "nvidia,tegra114-efuse", .data = &tegra114_info },
{ .compatible = "nvidia,tegra124-efuse", .data = &tegra124_info },
{},
};
static int tegra30_fuse_probe(struct platform_device *pdev)
static u32 tegra30_fuse_read(struct tegra_fuse *fuse, unsigned int offset)
{
const struct of_device_id *of_dev_id;
of_dev_id = of_match_device(tegra30_fuse_of_match, &pdev->dev);
if (!of_dev_id)
return -ENODEV;
u32 value;
int err;
fuse_clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(fuse_clk)) {
dev_err(&pdev->dev, "missing clock");
return PTR_ERR(fuse_clk);
err = clk_prepare_enable(fuse->clk);
if (err < 0) {
dev_err(fuse->dev, "failed to enable FUSE clock: %d\n", err);
return 0;
}
platform_set_drvdata(pdev, NULL);
if (tegra_fuse_create_sysfs(&pdev->dev, fuse_info->size,
tegra30_fuse_readl))
return -ENODEV;
value = readl_relaxed(fuse->base + FUSE_BEGIN + offset);
dev_dbg(&pdev->dev, "loaded\n");
clk_disable_unprepare(fuse->clk);
return 0;
}
static struct platform_driver tegra30_fuse_driver = {
.probe = tegra30_fuse_probe,
.driver = {
.name = "tegra_fuse",
.of_match_table = tegra30_fuse_of_match,
}
};
static int __init tegra30_fuse_init(void)
{
return platform_driver_register(&tegra30_fuse_driver);
return value;
}
postcore_initcall(tegra30_fuse_init);
/* Early boot code. This code is called before the devices are created */
typedef void (*speedo_f)(struct tegra_sku_info *sku_info);
static speedo_f __initdata speedo_tbl[] = {
[SPEEDO_TEGRA30] = tegra30_init_speedo_data,
[SPEEDO_TEGRA114] = tegra114_init_speedo_data,
[SPEEDO_TEGRA124] = tegra124_init_speedo_data,
};
static void __init tegra30_fuse_add_randomness(void)
{
......@@ -158,67 +78,83 @@ static void __init tegra30_fuse_add_randomness(void)
randomness[1] = tegra_read_straps();
randomness[2] = tegra_read_chipid();
randomness[3] = tegra_sku_info.cpu_process_id << 16;
randomness[3] |= tegra_sku_info.core_process_id;
randomness[3] |= tegra_sku_info.soc_process_id;
randomness[4] = tegra_sku_info.cpu_speedo_id << 16;
randomness[4] |= tegra_sku_info.soc_speedo_id;
randomness[5] = tegra30_fuse_readl(FUSE_VENDOR_CODE);
randomness[6] = tegra30_fuse_readl(FUSE_FAB_CODE);
randomness[7] = tegra30_fuse_readl(FUSE_LOT_CODE_0);
randomness[8] = tegra30_fuse_readl(FUSE_LOT_CODE_1);
randomness[9] = tegra30_fuse_readl(FUSE_WAFER_ID);
randomness[10] = tegra30_fuse_readl(FUSE_X_COORDINATE);
randomness[11] = tegra30_fuse_readl(FUSE_Y_COORDINATE);
randomness[5] = tegra_fuse_read_early(FUSE_VENDOR_CODE);
randomness[6] = tegra_fuse_read_early(FUSE_FAB_CODE);
randomness[7] = tegra_fuse_read_early(FUSE_LOT_CODE_0);
randomness[8] = tegra_fuse_read_early(FUSE_LOT_CODE_1);
randomness[9] = tegra_fuse_read_early(FUSE_WAFER_ID);
randomness[10] = tegra_fuse_read_early(FUSE_X_COORDINATE);
randomness[11] = tegra_fuse_read_early(FUSE_Y_COORDINATE);
add_device_randomness(randomness, sizeof(randomness));
}
static void __init legacy_fuse_init(void)
static void __init tegra30_fuse_init(struct tegra_fuse *fuse)
{
switch (tegra_get_chip_id()) {
case TEGRA30:
fuse_info = &tegra30_info;
break;
case TEGRA114:
fuse_info = &tegra114_info;
break;
case TEGRA124:
case TEGRA132:
fuse_info = &tegra124_info;
break;
default:
return;
}
fuse->read_early = tegra30_fuse_read_early;
fuse->read = tegra30_fuse_read;
fuse_base = ioremap(TEGRA_FUSE_BASE, TEGRA_FUSE_SIZE);
tegra_init_revision();
fuse->soc->speedo_init(&tegra_sku_info);
tegra30_fuse_add_randomness();
}
#endif
bool __init tegra30_spare_fuse(int spare_bit)
{
u32 offset = fuse_info->spare_bit + spare_bit * 4;
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
static const struct tegra_fuse_info tegra30_fuse_info = {
.read = tegra30_fuse_read,
.size = 0x2a4,
.spare = 0x144,
};
return tegra30_fuse_readl(offset) & 1;
}
const struct tegra_fuse_soc tegra30_fuse_soc = {
.init = tegra30_fuse_init,
.speedo_init = tegra30_init_speedo_data,
.info = &tegra30_fuse_info,
};
#endif
void __init tegra30_init_fuse_early(void)
{
struct device_node *np;
const struct of_device_id *of_match;
np = of_find_matching_node_and_match(NULL, tegra30_fuse_of_match,
&of_match);
if (np) {
fuse_base = of_iomap(np, 0);
fuse_info = (struct tegra_fuse_info *)of_match->data;
} else
legacy_fuse_init();
if (!fuse_base) {
pr_warn("fuse DT node missing and unknown chip id: 0x%02x\n",
tegra_get_chip_id());
return;
}
#ifdef CONFIG_ARCH_TEGRA_114_SOC
static const struct tegra_fuse_info tegra114_fuse_info = {
.read = tegra30_fuse_read,
.size = 0x2a0,
.spare = 0x180,
};
tegra_init_revision();
speedo_tbl[fuse_info->speedo_idx](&tegra_sku_info);
tegra30_fuse_add_randomness();
}
const struct tegra_fuse_soc tegra114_fuse_soc = {
.init = tegra30_fuse_init,
.speedo_init = tegra114_init_speedo_data,
.info = &tegra114_fuse_info,
};
#endif
#if defined(CONFIG_ARCH_TEGRA_124_SOC) || defined(CONFIG_ARCH_TEGRA_132_SOC)
static const struct tegra_fuse_info tegra124_fuse_info = {
.read = tegra30_fuse_read,
.size = 0x300,
.spare = 0x200,
};
const struct tegra_fuse_soc tegra124_fuse_soc = {
.init = tegra30_fuse_init,
.speedo_init = tegra124_init_speedo_data,
.info = &tegra124_fuse_info,
};
#endif
#if defined(CONFIG_ARCH_TEGRA_210_SOC)
static const struct tegra_fuse_info tegra210_fuse_info = {
.read = tegra30_fuse_read,
.size = 0x300,
.spare = 0x280,
};
const struct tegra_fuse_soc tegra210_fuse_soc = {
.init = tegra30_fuse_init,
.speedo_init = tegra210_init_speedo_data,
.info = &tegra210_fuse_info,
};
#endif
......@@ -19,53 +19,90 @@
#ifndef __DRIVERS_MISC_TEGRA_FUSE_H
#define __DRIVERS_MISC_TEGRA_FUSE_H
#define TEGRA_FUSE_BASE 0x7000f800
#define TEGRA_FUSE_SIZE 0x400
#include <linux/dmaengine.h>
#include <linux/types.h>
int tegra_fuse_create_sysfs(struct device *dev, int size,
u32 (*readl)(const unsigned int offset));
struct tegra_fuse;
struct tegra_fuse_info {
u32 (*read)(struct tegra_fuse *fuse, unsigned int offset);
unsigned int size;
unsigned int spare;
};
struct tegra_fuse_soc {
void (*init)(struct tegra_fuse *fuse);
void (*speedo_init)(struct tegra_sku_info *info);
int (*probe)(struct tegra_fuse *fuse);
const struct tegra_fuse_info *info;
};
struct tegra_fuse {
struct device *dev;
void __iomem *base;
phys_addr_t phys;
struct clk *clk;
u32 (*read_early)(struct tegra_fuse *fuse, unsigned int offset);
u32 (*read)(struct tegra_fuse *fuse, unsigned int offset);
const struct tegra_fuse_soc *soc;
/* APBDMA on Tegra20 */
struct {
struct mutex lock;
struct completion wait;
struct dma_chan *chan;
struct dma_slave_config config;
dma_addr_t phys;
u32 *virt;
} apbdma;
};
bool tegra30_spare_fuse(int bit);
u32 tegra30_fuse_readl(const unsigned int offset);
void tegra30_init_fuse_early(void);
void tegra_init_revision(void);
void tegra_init_apbmisc(void);
bool __init tegra_fuse_read_spare(unsigned int spare);
u32 __init tegra_fuse_read_early(unsigned int offset);
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
void tegra20_init_speedo_data(struct tegra_sku_info *sku_info);
bool tegra20_spare_fuse_early(int spare_bit);
void tegra20_init_fuse_early(void);
u32 tegra20_fuse_early(const unsigned int offset);
#else
static inline void tegra20_init_speedo_data(struct tegra_sku_info *sku_info) {}
static inline bool tegra20_spare_fuse_early(int spare_bit)
{
return false;
}
static inline void tegra20_init_fuse_early(void) {}
static inline u32 tegra20_fuse_early(const unsigned int offset)
{
return 0;
}
#endif
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
void tegra30_init_speedo_data(struct tegra_sku_info *sku_info);
#else
static inline void tegra30_init_speedo_data(struct tegra_sku_info *sku_info) {}
#endif
#ifdef CONFIG_ARCH_TEGRA_114_SOC
void tegra114_init_speedo_data(struct tegra_sku_info *sku_info);
#else
static inline void tegra114_init_speedo_data(struct tegra_sku_info *sku_info) {}
#endif
#ifdef CONFIG_ARCH_TEGRA_124_SOC
#if defined(CONFIG_ARCH_TEGRA_124_SOC) || defined(CONFIG_ARCH_TEGRA_132_SOC)
void tegra124_init_speedo_data(struct tegra_sku_info *sku_info);
#else
static inline void tegra124_init_speedo_data(struct tegra_sku_info *sku_info) {}
#endif
#ifdef CONFIG_ARCH_TEGRA_210_SOC
void tegra210_init_speedo_data(struct tegra_sku_info *sku_info);
#endif
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
extern const struct tegra_fuse_soc tegra20_fuse_soc;
#endif
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
extern const struct tegra_fuse_soc tegra30_fuse_soc;
#endif
#ifdef CONFIG_ARCH_TEGRA_114_SOC
extern const struct tegra_fuse_soc tegra114_fuse_soc;
#endif
#if defined(CONFIG_ARCH_TEGRA_124_SOC) || defined(CONFIG_ARCH_TEGRA_132_SOC)
extern const struct tegra_fuse_soc tegra124_fuse_soc;
#endif
#ifdef CONFIG_ARCH_TEGRA_210_SOC
extern const struct tegra_fuse_soc tegra210_fuse_soc;
#endif
#endif
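The extern tegra_fuse_soc descriptors above are meant to be picked per chip at probe time. A sketch of how an OF match table could map compatibles to them — the compatible strings and the table name are assumptions; the real table lives in the core fuse driver:

#include <linux/of.h>

static const struct of_device_id example_fuse_match[] = {
	{ .compatible = "nvidia,tegra210-efuse", .data = &tegra210_fuse_soc },
	{ .compatible = "nvidia,tegra124-efuse", .data = &tegra124_fuse_soc },
	{ .compatible = "nvidia,tegra114-efuse", .data = &tegra114_fuse_soc },
	{ .compatible = "nvidia,tegra30-efuse", .data = &tegra30_fuse_soc },
	{ /* sentinel */ },
};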
......@@ -22,7 +22,7 @@
#include "fuse.h"
#define CORE_PROCESS_CORNERS 2
#define SOC_PROCESS_CORNERS 2
#define CPU_PROCESS_CORNERS 2
enum {
......@@ -31,7 +31,7 @@ enum {
THRESHOLD_INDEX_COUNT,
};
static const u32 __initconst core_process_speedos[][CORE_PROCESS_CORNERS] = {
static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = {
{1123, UINT_MAX},
{0, UINT_MAX},
};
......@@ -74,8 +74,8 @@ static void __init rev_sku_to_speedo_ids(struct tegra_sku_info *sku_info,
}
if (rev == TEGRA_REVISION_A01) {
tmp = tegra30_fuse_readl(0x270) << 1;
tmp |= tegra30_fuse_readl(0x26c);
tmp = tegra_fuse_read_early(0x270) << 1;
tmp |= tegra_fuse_read_early(0x26c);
if (!tmp)
sku_info->cpu_speedo_id = 0;
}
......@@ -84,27 +84,27 @@ static void __init rev_sku_to_speedo_ids(struct tegra_sku_info *sku_info,
void __init tegra114_init_speedo_data(struct tegra_sku_info *sku_info)
{
u32 cpu_speedo_val;
u32 core_speedo_val;
u32 soc_speedo_val;
int threshold;
int i;
BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) !=
THRESHOLD_INDEX_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) !=
BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) !=
THRESHOLD_INDEX_COUNT);
rev_sku_to_speedo_ids(sku_info, &threshold);
cpu_speedo_val = tegra30_fuse_readl(0x12c) + 1024;
core_speedo_val = tegra30_fuse_readl(0x134);
cpu_speedo_val = tegra_fuse_read_early(0x12c) + 1024;
soc_speedo_val = tegra_fuse_read_early(0x134);
for (i = 0; i < CPU_PROCESS_CORNERS; i++)
if (cpu_speedo_val < cpu_process_speedos[threshold][i])
break;
sku_info->cpu_process_id = i;
for (i = 0; i < CORE_PROCESS_CORNERS; i++)
if (core_speedo_val < core_process_speedos[threshold][i])
for (i = 0; i < SOC_PROCESS_CORNERS; i++)
if (soc_speedo_val < soc_process_speedos[threshold][i])
break;
sku_info->core_process_id = i;
sku_info->soc_process_id = i;
}
......@@ -24,7 +24,7 @@
#define CPU_PROCESS_CORNERS 2
#define GPU_PROCESS_CORNERS 2
#define CORE_PROCESS_CORNERS 2
#define SOC_PROCESS_CORNERS 2
#define FUSE_CPU_SPEEDO_0 0x14
#define FUSE_CPU_SPEEDO_1 0x2c
......@@ -53,7 +53,7 @@ static const u32 __initconst gpu_process_speedos[][GPU_PROCESS_CORNERS] = {
{0, UINT_MAX},
};
static const u32 __initconst core_process_speedos[][CORE_PROCESS_CORNERS] = {
static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = {
{2101, UINT_MAX},
{0, UINT_MAX},
};
......@@ -119,19 +119,19 @@ void __init tegra124_init_speedo_data(struct tegra_sku_info *sku_info)
THRESHOLD_INDEX_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(gpu_process_speedos) !=
THRESHOLD_INDEX_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) !=
BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) !=
THRESHOLD_INDEX_COUNT);
cpu_speedo_0_value = tegra30_fuse_readl(FUSE_CPU_SPEEDO_0);
cpu_speedo_0_value = tegra_fuse_read_early(FUSE_CPU_SPEEDO_0);
/* GPU Speedo is stored in CPU_SPEEDO_2 */
sku_info->gpu_speedo_value = tegra30_fuse_readl(FUSE_CPU_SPEEDO_2);
sku_info->gpu_speedo_value = tegra_fuse_read_early(FUSE_CPU_SPEEDO_2);
soc_speedo_0_value = tegra30_fuse_readl(FUSE_SOC_SPEEDO_0);
soc_speedo_0_value = tegra_fuse_read_early(FUSE_SOC_SPEEDO_0);
cpu_iddq_value = tegra30_fuse_readl(FUSE_CPU_IDDQ);
soc_iddq_value = tegra30_fuse_readl(FUSE_SOC_IDDQ);
gpu_iddq_value = tegra30_fuse_readl(FUSE_GPU_IDDQ);
cpu_iddq_value = tegra_fuse_read_early(FUSE_CPU_IDDQ);
soc_iddq_value = tegra_fuse_read_early(FUSE_SOC_IDDQ);
gpu_iddq_value = tegra_fuse_read_early(FUSE_GPU_IDDQ);
sku_info->cpu_speedo_value = cpu_speedo_0_value;
......@@ -143,7 +143,7 @@ void __init tegra124_init_speedo_data(struct tegra_sku_info *sku_info)
rev_sku_to_speedo_ids(sku_info, &threshold);
sku_info->cpu_iddq_value = tegra30_fuse_readl(FUSE_CPU_IDDQ);
sku_info->cpu_iddq_value = tegra_fuse_read_early(FUSE_CPU_IDDQ);
for (i = 0; i < GPU_PROCESS_CORNERS; i++)
if (sku_info->gpu_speedo_value <
......@@ -157,11 +157,11 @@ void __init tegra124_init_speedo_data(struct tegra_sku_info *sku_info)
break;
sku_info->cpu_process_id = i;
for (i = 0; i < CORE_PROCESS_CORNERS; i++)
for (i = 0; i < SOC_PROCESS_CORNERS; i++)
if (soc_speedo_0_value <
core_process_speedos[threshold][i])
soc_process_speedos[threshold][i])
break;
sku_info->core_process_id = i;
sku_info->soc_process_id = i;
pr_debug("Tegra GPU Speedo ID=%d, Speedo Value=%d\n",
sku_info->gpu_speedo_id, sku_info->gpu_speedo_value);
......
......@@ -28,11 +28,11 @@
#define CPU_SPEEDO_REDUND_MSBIT 39
#define CPU_SPEEDO_REDUND_OFFS (CPU_SPEEDO_REDUND_MSBIT - CPU_SPEEDO_MSBIT)
#define CORE_SPEEDO_LSBIT 40
#define CORE_SPEEDO_MSBIT 47
#define CORE_SPEEDO_REDUND_LSBIT 48
#define CORE_SPEEDO_REDUND_MSBIT 55
#define CORE_SPEEDO_REDUND_OFFS (CORE_SPEEDO_REDUND_MSBIT - CORE_SPEEDO_MSBIT)
#define SOC_SPEEDO_LSBIT 40
#define SOC_SPEEDO_MSBIT 47
#define SOC_SPEEDO_REDUND_LSBIT 48
#define SOC_SPEEDO_REDUND_MSBIT 55
#define SOC_SPEEDO_REDUND_OFFS (SOC_SPEEDO_REDUND_MSBIT - SOC_SPEEDO_MSBIT)
#define SPEEDO_MULT 4
......@@ -56,7 +56,7 @@ static const u32 __initconst cpu_process_speedos[][PROCESS_CORNERS_NUM] = {
{316, 331, 383, UINT_MAX},
};
static const u32 __initconst core_process_speedos[][PROCESS_CORNERS_NUM] = {
static const u32 __initconst soc_process_speedos[][PROCESS_CORNERS_NUM] = {
{165, 195, 224, UINT_MAX},
{165, 195, 224, UINT_MAX},
{165, 195, 224, UINT_MAX},
......@@ -69,7 +69,7 @@ void __init tegra20_init_speedo_data(struct tegra_sku_info *sku_info)
int i;
BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) != SPEEDO_ID_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) != SPEEDO_ID_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) != SPEEDO_ID_COUNT);
if (SPEEDO_ID_SELECT_0(sku_info->revision))
sku_info->soc_speedo_id = SPEEDO_ID_0;
......@@ -80,8 +80,8 @@ void __init tegra20_init_speedo_data(struct tegra_sku_info *sku_info)
val = 0;
for (i = CPU_SPEEDO_MSBIT; i >= CPU_SPEEDO_LSBIT; i--) {
reg = tegra20_spare_fuse_early(i) |
tegra20_spare_fuse_early(i + CPU_SPEEDO_REDUND_OFFS);
reg = tegra_fuse_read_spare(i) |
tegra_fuse_read_spare(i + CPU_SPEEDO_REDUND_OFFS);
val = (val << 1) | (reg & 0x1);
}
val = val * SPEEDO_MULT;
......@@ -94,17 +94,17 @@ void __init tegra20_init_speedo_data(struct tegra_sku_info *sku_info)
sku_info->cpu_process_id = i;
val = 0;
for (i = CORE_SPEEDO_MSBIT; i >= CORE_SPEEDO_LSBIT; i--) {
reg = tegra20_spare_fuse_early(i) |
tegra20_spare_fuse_early(i + CORE_SPEEDO_REDUND_OFFS);
for (i = SOC_SPEEDO_MSBIT; i >= SOC_SPEEDO_LSBIT; i--) {
reg = tegra_fuse_read_spare(i) |
tegra_fuse_read_spare(i + SOC_SPEEDO_REDUND_OFFS);
val = (val << 1) | (reg & 0x1);
}
val = val * SPEEDO_MULT;
pr_debug("Core speedo value %u\n", val);
for (i = 0; i < (PROCESS_CORNERS_NUM - 1); i++) {
if (val <= core_process_speedos[sku_info->soc_speedo_id][i])
if (val <= soc_process_speedos[sku_info->soc_speedo_id][i])
break;
}
sku_info->core_process_id = i;
sku_info->soc_process_id = i;
}
/*
* Copyright (c) 2013-2015, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/bug.h>
#include <soc/tegra/fuse.h>
#include "fuse.h"
#define CPU_PROCESS_CORNERS 2
#define GPU_PROCESS_CORNERS 2
#define SOC_PROCESS_CORNERS 3
#define FUSE_CPU_SPEEDO_0 0x014
#define FUSE_CPU_SPEEDO_1 0x02c
#define FUSE_CPU_SPEEDO_2 0x030
#define FUSE_SOC_SPEEDO_0 0x034
#define FUSE_SOC_SPEEDO_1 0x038
#define FUSE_SOC_SPEEDO_2 0x03c
#define FUSE_CPU_IDDQ 0x018
#define FUSE_SOC_IDDQ 0x040
#define FUSE_GPU_IDDQ 0x128
#define FUSE_FT_REV 0x028
enum {
THRESHOLD_INDEX_0,
THRESHOLD_INDEX_1,
THRESHOLD_INDEX_COUNT,
};
static const u32 __initconst cpu_process_speedos[][CPU_PROCESS_CORNERS] = {
{ 2119, UINT_MAX },
{ 2119, UINT_MAX },
};
static const u32 __initconst gpu_process_speedos[][GPU_PROCESS_CORNERS] = {
{ UINT_MAX, UINT_MAX },
{ UINT_MAX, UINT_MAX },
};
static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = {
{ 1950, 2100, UINT_MAX },
{ 1950, 2100, UINT_MAX },
};
static u8 __init get_speedo_revision(void)
{
return tegra_fuse_read_spare(4) << 2 |
tegra_fuse_read_spare(3) << 1 |
tegra_fuse_read_spare(2) << 0;
}
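For example, spare fuses 4, 3 and 2 reading 1, 0 and 1 assemble to 0b101, so get_speedo_revision() reports speedo revision 5.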
static void __init rev_sku_to_speedo_ids(struct tegra_sku_info *sku_info,
u8 speedo_rev, unsigned int *threshold)
{
int sku = sku_info->sku_id;
/* Assign to default */
sku_info->cpu_speedo_id = 0;
sku_info->soc_speedo_id = 0;
sku_info->gpu_speedo_id = 0;
*threshold = THRESHOLD_INDEX_0;
switch (sku) {
case 0x00: /* Engineering SKU */
case 0x01: /* Engineering SKU */
case 0x07:
case 0x17:
case 0x27:
if (speedo_rev >= 2)
sku_info->gpu_speedo_id = 1;
break;
case 0x13:
if (speedo_rev >= 2)
sku_info->gpu_speedo_id = 1;
sku_info->cpu_speedo_id = 1;
break;
default:
pr_err("Tegra210: unknown SKU %#04x\n", sku);
/* Using the default for the error case */
break;
}
}
static int get_process_id(int value, const u32 *speedos, unsigned int num)
{
unsigned int i;
for (i = 0; i < num; i++)
if (value < speedos[i])
return i;
return -EINVAL;
}
void __init tegra210_init_speedo_data(struct tegra_sku_info *sku_info)
{
int cpu_speedo[3], soc_speedo[3], cpu_iddq, gpu_iddq, soc_iddq;
unsigned int index;
u8 speedo_revision;
BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) !=
THRESHOLD_INDEX_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(gpu_process_speedos) !=
THRESHOLD_INDEX_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) !=
THRESHOLD_INDEX_COUNT);
/* Read speedo/IDDQ fuses */
cpu_speedo[0] = tegra_fuse_read_early(FUSE_CPU_SPEEDO_0);
cpu_speedo[1] = tegra_fuse_read_early(FUSE_CPU_SPEEDO_1);
cpu_speedo[2] = tegra_fuse_read_early(FUSE_CPU_SPEEDO_2);
soc_speedo[0] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_0);
soc_speedo[1] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_1);
soc_speedo[2] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_2);
cpu_iddq = tegra_fuse_read_early(FUSE_CPU_IDDQ) * 4;
soc_iddq = tegra_fuse_read_early(FUSE_SOC_IDDQ) * 4;
gpu_iddq = tegra_fuse_read_early(FUSE_GPU_IDDQ) * 5;
/*
* Determine CPU, GPU and SoC speedo values depending on speedo fusing
* revision. Note that GPU speedo value is fused in CPU_SPEEDO_2.
*/
speedo_revision = get_speedo_revision();
pr_info("Speedo Revision %u\n", speedo_revision);
if (speedo_revision >= 3) {
sku_info->cpu_speedo_value = cpu_speedo[0];
sku_info->gpu_speedo_value = cpu_speedo[2];
sku_info->soc_speedo_value = soc_speedo[0];
} else if (speedo_revision == 2) {
sku_info->cpu_speedo_value = (-1938 + (1095 * cpu_speedo[0] / 100)) / 10;
sku_info->gpu_speedo_value = (-1662 + (1082 * cpu_speedo[2] / 100)) / 10;
sku_info->soc_speedo_value = ( -705 + (1037 * soc_speedo[0] / 100)) / 10;
} else {
sku_info->cpu_speedo_value = 2100;
sku_info->gpu_speedo_value = cpu_speedo[2] - 75;
sku_info->soc_speedo_value = 1900;
}
if ((sku_info->cpu_speedo_value <= 0) ||
(sku_info->gpu_speedo_value <= 0) ||
(sku_info->soc_speedo_value <= 0)) {
WARN(1, "speedo value not fused\n");
return;
}
rev_sku_to_speedo_ids(sku_info, speedo_revision, &index);
sku_info->gpu_process_id = get_process_id(sku_info->gpu_speedo_value,
gpu_process_speedos[index],
GPU_PROCESS_CORNERS);
sku_info->cpu_process_id = get_process_id(sku_info->cpu_speedo_value,
cpu_process_speedos[index],
CPU_PROCESS_CORNERS);
sku_info->soc_process_id = get_process_id(sku_info->soc_speedo_value,
soc_process_speedos[index],
SOC_PROCESS_CORNERS);
pr_debug("Tegra GPU Speedo ID=%d, Speedo Value=%d\n",
sku_info->gpu_speedo_id, sku_info->gpu_speedo_value);
}
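As a quick sanity check on the revision-2 rescaling above: a raw cpu_speedo[0] of 2000 yields (-1938 + 1095 * 2000 / 100) / 10 = 1996, so the linear conversion brings older fusings onto roughly the same scale that revision-3 parts report directly.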
......@@ -22,7 +22,7 @@
#include "fuse.h"
#define CORE_PROCESS_CORNERS 1
#define SOC_PROCESS_CORNERS 1
#define CPU_PROCESS_CORNERS 6
#define FUSE_SPEEDO_CALIB_0 0x14
......@@ -54,7 +54,7 @@ enum {
THRESHOLD_INDEX_COUNT,
};
static const u32 __initconst core_process_speedos[][CORE_PROCESS_CORNERS] = {
static const u32 __initconst soc_process_speedos[][SOC_PROCESS_CORNERS] = {
{180},
{170},
{195},
......@@ -93,25 +93,25 @@ static void __init fuse_speedo_calib(u32 *speedo_g, u32 *speedo_lp)
int bit_minus1;
int bit_minus2;
reg = tegra30_fuse_readl(FUSE_SPEEDO_CALIB_0);
reg = tegra_fuse_read_early(FUSE_SPEEDO_CALIB_0);
*speedo_lp = (reg & 0xFFFF) * 4;
*speedo_g = ((reg >> 16) & 0xFFFF) * 4;
ate_ver = tegra30_fuse_readl(FUSE_TEST_PROG_VER);
ate_ver = tegra_fuse_read_early(FUSE_TEST_PROG_VER);
pr_debug("Tegra ATE prog ver %d.%d\n", ate_ver/10, ate_ver%10);
if (ate_ver >= 26) {
bit_minus1 = tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS1);
bit_minus1 |= tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS1_R);
bit_minus2 = tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS2);
bit_minus2 |= tegra30_spare_fuse(LP_SPEEDO_BIT_MINUS2_R);
bit_minus1 = tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS1);
bit_minus1 |= tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS1_R);
bit_minus2 = tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS2);
bit_minus2 |= tegra_fuse_read_spare(LP_SPEEDO_BIT_MINUS2_R);
*speedo_lp |= (bit_minus1 << 1) | bit_minus2;
bit_minus1 = tegra30_spare_fuse(G_SPEEDO_BIT_MINUS1);
bit_minus1 |= tegra30_spare_fuse(G_SPEEDO_BIT_MINUS1_R);
bit_minus2 = tegra30_spare_fuse(G_SPEEDO_BIT_MINUS2);
bit_minus2 |= tegra30_spare_fuse(G_SPEEDO_BIT_MINUS2_R);
bit_minus1 = tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS1);
bit_minus1 |= tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS1_R);
bit_minus2 = tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS2);
bit_minus2 |= tegra_fuse_read_spare(G_SPEEDO_BIT_MINUS2_R);
*speedo_g |= (bit_minus1 << 1) | bit_minus2;
} else {
*speedo_lp |= 0x3;
......@@ -121,7 +121,7 @@ static void __init fuse_speedo_calib(u32 *speedo_g, u32 *speedo_lp)
static void __init rev_sku_to_speedo_ids(struct tegra_sku_info *sku_info)
{
int package_id = tegra30_fuse_readl(FUSE_PACKAGE_INFO) & 0x0F;
int package_id = tegra_fuse_read_early(FUSE_PACKAGE_INFO) & 0x0F;
switch (sku_info->revision) {
case TEGRA_REVISION_A01:
......@@ -246,19 +246,19 @@ static void __init rev_sku_to_speedo_ids(struct tegra_sku_info *sku_info)
void __init tegra30_init_speedo_data(struct tegra_sku_info *sku_info)
{
u32 cpu_speedo_val;
u32 core_speedo_val;
u32 soc_speedo_val;
int i;
BUILD_BUG_ON(ARRAY_SIZE(cpu_process_speedos) !=
THRESHOLD_INDEX_COUNT);
BUILD_BUG_ON(ARRAY_SIZE(core_process_speedos) !=
BUILD_BUG_ON(ARRAY_SIZE(soc_process_speedos) !=
THRESHOLD_INDEX_COUNT);
rev_sku_to_speedo_ids(sku_info);
fuse_speedo_calib(&cpu_speedo_val, &core_speedo_val);
fuse_speedo_calib(&cpu_speedo_val, &soc_speedo_val);
pr_debug("Tegra CPU speedo value %u\n", cpu_speedo_val);
pr_debug("Tegra Core speedo value %u\n", core_speedo_val);
pr_debug("Tegra Core speedo value %u\n", soc_speedo_val);
for (i = 0; i < CPU_PROCESS_CORNERS; i++) {
if (cpu_speedo_val < cpu_process_speedos[threshold_index][i])
......@@ -273,16 +273,16 @@ void __init tegra30_init_speedo_data(struct tegra_sku_info *sku_info)
sku_info->cpu_speedo_id = 1;
}
for (i = 0; i < CORE_PROCESS_CORNERS; i++) {
if (core_speedo_val < core_process_speedos[threshold_index][i])
for (i = 0; i < SOC_PROCESS_CORNERS; i++) {
if (soc_speedo_val < soc_process_speedos[threshold_index][i])
break;
}
sku_info->core_process_id = i - 1;
sku_info->soc_process_id = i - 1;
if (sku_info->core_process_id == -1) {
pr_warn("Tegra CORE speedo value %3d out of range",
core_speedo_val);
sku_info->core_process_id = 0;
if (sku_info->soc_process_id == -1) {
pr_warn("Tegra SoC speedo value %3d out of range",
soc_speedo_val);
sku_info->soc_process_id = 0;
sku_info->soc_speedo_id = 1;
}
}
......@@ -21,11 +21,10 @@
#include <linux/io.h>
#include <soc/tegra/fuse.h>
#include <soc/tegra/common.h>
#include "fuse.h"
#define APBMISC_BASE 0x70000800
#define APBMISC_SIZE 0x64
#define FUSE_SKU_INFO 0x10
#define PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT 4
......@@ -95,8 +94,8 @@ void __init tegra_init_revision(void)
rev = TEGRA_REVISION_A02;
break;
case 3:
if (chip_id == TEGRA20 && (tegra20_spare_fuse_early(18) ||
tegra20_spare_fuse_early(19)))
if (chip_id == TEGRA20 && (tegra_fuse_read_spare(18) ||
tegra_fuse_read_spare(19)))
rev = TEGRA_REVISION_A03p;
else
rev = TEGRA_REVISION_A03;
......@@ -110,27 +109,74 @@ void __init tegra_init_revision(void)
tegra_sku_info.revision = rev;
if (chip_id == TEGRA20)
tegra_sku_info.sku_id = tegra20_fuse_early(FUSE_SKU_INFO);
else
tegra_sku_info.sku_id = tegra30_fuse_readl(FUSE_SKU_INFO);
tegra_sku_info.sku_id = tegra_fuse_read_early(FUSE_SKU_INFO);
}
void __init tegra_init_apbmisc(void)
{
struct resource apbmisc, straps;
struct device_node *np;
np = of_find_matching_node(NULL, apbmisc_match);
apbmisc_base = of_iomap(np, 0);
if (!apbmisc_base) {
pr_warn("ioremap tegra apbmisc failed. using %08x instead\n",
APBMISC_BASE);
apbmisc_base = ioremap(APBMISC_BASE, APBMISC_SIZE);
if (!np) {
/*
* Fall back to legacy initialization for 32-bit ARM only. All
* 64-bit ARM device tree files for Tegra are required to have
* an APBMISC node.
*
* This is for backwards-compatibility with old device trees
* that didn't contain an APBMISC node.
*/
if (IS_ENABLED(CONFIG_ARM) && soc_is_tegra()) {
/* APBMISC registers (chip revision, ...) */
apbmisc.start = 0x70000800;
apbmisc.end = 0x70000863;
apbmisc.flags = IORESOURCE_MEM;
/* strapping options */
if (tegra_get_chip_id() == TEGRA124) {
straps.start = 0x7000e864;
straps.end = 0x7000e867;
} else {
straps.start = 0x70000008;
straps.end = 0x7000000b;
}
straps.flags = IORESOURCE_MEM;
pr_warn("Using APBMISC region %pR\n", &apbmisc);
pr_warn("Using strapping options registers %pR\n",
&straps);
} else {
/*
* At this point we're not running on Tegra, so play
* nice with multi-platform kernels.
*/
return;
}
} else {
/*
* Extract information from the device tree if we've found a
* matching node.
*/
if (of_address_to_resource(np, 0, &apbmisc) < 0) {
pr_err("failed to get APBMISC registers\n");
return;
}
if (of_address_to_resource(np, 1, &straps) < 0) {
pr_err("failed to get strapping options registers\n");
return;
}
}
strapping_base = of_iomap(np, 1);
apbmisc_base = ioremap_nocache(apbmisc.start, resource_size(&apbmisc));
if (!apbmisc_base)
pr_err("failed to map APBMISC registers\n");
strapping_base = ioremap_nocache(straps.start, resource_size(&straps));
if (!strapping_base)
pr_err("ioremap tegra strapping_base failed\n");
pr_err("failed to map strapping options registers\n");
long_ram_code = of_property_read_bool(np, "nvidia,long-ram-code");
}
......@@ -17,6 +17,8 @@
*
*/
#define pr_fmt(fmt) "tegra-pmc: " fmt
#include <linux/kernel.h>
#include <linux/clk.h>
#include <linux/clk/tegra.h>
......@@ -457,7 +459,6 @@ static int tegra_io_rail_prepare(int id, unsigned long *request,
unsigned long *status, unsigned int *bit)
{
unsigned long rate, value;
struct clk *clk;
*bit = id % 32;
......@@ -476,12 +477,7 @@ static int tegra_io_rail_prepare(int id, unsigned long *request,
*request = IO_DPD2_REQ;
}
clk = clk_get_sys(NULL, "pclk");
if (IS_ERR(clk))
return PTR_ERR(clk);
rate = clk_get_rate(clk);
clk_put(clk);
rate = clk_get_rate(pmc->clk);
tegra_pmc_writel(DPD_SAMPLE_ENABLE, DPD_SAMPLE);
......@@ -535,8 +531,10 @@ int tegra_io_rail_power_on(int id)
tegra_pmc_writel(value, request);
err = tegra_io_rail_poll(status, mask, 0, 250);
if (err < 0)
if (err < 0) {
pr_info("tegra_io_rail_poll() failed: %d\n", err);
return err;
}
tegra_io_rail_unprepare();
......@@ -551,8 +549,10 @@ int tegra_io_rail_power_off(int id)
int err;
err = tegra_io_rail_prepare(id, &request, &status, &bit);
if (err < 0)
if (err < 0) {
pr_info("tegra_io_rail_prepare() failed: %d\n", err);
return err;
}
mask = 1 << bit;
......@@ -736,12 +736,12 @@ void tegra_pmc_init_tsense_reset(struct tegra_pmc *pmc)
u32 value, checksum;
if (!pmc->soc->has_tsense_reset)
goto out;
return;
np = of_find_node_by_name(pmc->dev->of_node, "i2c-thermtrip");
if (!np) {
dev_warn(dev, "i2c-thermtrip node not found, %s.\n", disabled);
goto out;
return;
}
if (of_property_read_u32(np, "nvidia,i2c-controller-id", &ctrl_id)) {
......@@ -801,7 +801,6 @@ void tegra_pmc_init_tsense_reset(struct tegra_pmc *pmc)
out:
of_node_put(np);
return;
}
static int tegra_pmc_probe(struct platform_device *pdev)
......@@ -1002,7 +1001,56 @@ static const struct tegra_pmc_soc tegra124_pmc_soc = {
.has_gpu_clamps = true,
};
static const char * const tegra210_powergates[] = {
[TEGRA_POWERGATE_CPU] = "crail",
[TEGRA_POWERGATE_3D] = "3d",
[TEGRA_POWERGATE_VENC] = "venc",
[TEGRA_POWERGATE_PCIE] = "pcie",
[TEGRA_POWERGATE_L2] = "l2",
[TEGRA_POWERGATE_MPE] = "mpe",
[TEGRA_POWERGATE_HEG] = "heg",
[TEGRA_POWERGATE_SATA] = "sata",
[TEGRA_POWERGATE_CPU1] = "cpu1",
[TEGRA_POWERGATE_CPU2] = "cpu2",
[TEGRA_POWERGATE_CPU3] = "cpu3",
[TEGRA_POWERGATE_CELP] = "celp",
[TEGRA_POWERGATE_CPU0] = "cpu0",
[TEGRA_POWERGATE_C0NC] = "c0nc",
[TEGRA_POWERGATE_C1NC] = "c1nc",
[TEGRA_POWERGATE_SOR] = "sor",
[TEGRA_POWERGATE_DIS] = "dis",
[TEGRA_POWERGATE_DISB] = "disb",
[TEGRA_POWERGATE_XUSBA] = "xusba",
[TEGRA_POWERGATE_XUSBB] = "xusbb",
[TEGRA_POWERGATE_XUSBC] = "xusbc",
[TEGRA_POWERGATE_VIC] = "vic",
[TEGRA_POWERGATE_IRAM] = "iram",
[TEGRA_POWERGATE_NVDEC] = "nvdec",
[TEGRA_POWERGATE_NVJPG] = "nvjpg",
[TEGRA_POWERGATE_AUD] = "aud",
[TEGRA_POWERGATE_DFD] = "dfd",
[TEGRA_POWERGATE_VE2] = "ve2",
};
static const u8 tegra210_cpu_powergates[] = {
TEGRA_POWERGATE_CPU0,
TEGRA_POWERGATE_CPU1,
TEGRA_POWERGATE_CPU2,
TEGRA_POWERGATE_CPU3,
};
static const struct tegra_pmc_soc tegra210_pmc_soc = {
.num_powergates = ARRAY_SIZE(tegra210_powergates),
.powergates = tegra210_powergates,
.num_cpu_powergates = ARRAY_SIZE(tegra210_cpu_powergates),
.cpu_powergates = tegra210_cpu_powergates,
.has_tsense_reset = true,
.has_gpu_clamps = true,
};
static const struct of_device_id tegra_pmc_match[] = {
{ .compatible = "nvidia,tegra210-pmc", .data = &tegra210_pmc_soc },
{ .compatible = "nvidia,tegra132-pmc", .data = &tegra124_pmc_soc },
{ .compatible = "nvidia,tegra124-pmc", .data = &tegra124_pmc_soc },
{ .compatible = "nvidia,tegra114-pmc", .data = &tegra114_pmc_soc },
{ .compatible = "nvidia,tegra30-pmc", .data = &tegra30_pmc_soc },
......@@ -1035,25 +1083,44 @@ static int __init tegra_pmc_early_init(void)
bool invert;
u32 value;
if (!soc_is_tegra())
return 0;
np = of_find_matching_node_and_match(NULL, tegra_pmc_match, &match);
if (!np) {
pr_warn("PMC device node not found, disabling powergating\n");
regs.start = 0x7000e400;
regs.end = 0x7000e7ff;
regs.flags = IORESOURCE_MEM;
pr_warn("Using memory region %pR\n", &regs);
/*
* Fall back to legacy initialization for 32-bit ARM only. All
* 64-bit ARM device tree files for Tegra are required to have
* a PMC node.
*
* This is for backwards-compatibility with old device trees
* that didn't contain a PMC node. Note that in this case the
* SoC data can't be matched and therefore powergating is
* disabled.
*/
if (IS_ENABLED(CONFIG_ARM) && soc_is_tegra()) {
pr_warn("DT node not found, powergating disabled\n");
regs.start = 0x7000e400;
regs.end = 0x7000e7ff;
regs.flags = IORESOURCE_MEM;
pr_warn("Using memory region %pR\n", &regs);
} else {
/*
* At this point we're not running on Tegra, so play
* nice with multi-platform kernels.
*/
return 0;
}
} else {
pmc->soc = match->data;
}
/*
* Extract information from the device tree if we've found a
* matching node.
*/
if (of_address_to_resource(np, 0, &regs) < 0) {
pr_err("failed to get PMC registers\n");
return -ENXIO;
}
if (of_address_to_resource(np, 0, &regs) < 0) {
pr_err("failed to get PMC registers\n");
return -ENXIO;
pmc->soc = match->data;
}
pmc->base = ioremap_nocache(regs.start, resource_size(&regs));
......@@ -1064,6 +1131,10 @@ static int __init tegra_pmc_early_init(void)
mutex_init(&pmc->powergates_lock);
/*
* Invert the interrupt polarity if a PMC device tree node exists and
* contains the nvidia,invert-interrupt property.
*/
invert = of_property_read_bool(np, "nvidia,invert-interrupt");
value = tegra_pmc_readl(PMC_CNTRL);
......
#ifndef DT_BINDINGS_MEMORY_TEGRA210_MC_H
#define DT_BINDINGS_MEMORY_TEGRA210_MC_H
#define TEGRA_SWGROUP_PTC 0
#define TEGRA_SWGROUP_DC 1
#define TEGRA_SWGROUP_DCB 2
#define TEGRA_SWGROUP_AFI 3
#define TEGRA_SWGROUP_AVPC 4
#define TEGRA_SWGROUP_HDA 5
#define TEGRA_SWGROUP_HC 6
#define TEGRA_SWGROUP_NVENC 7
#define TEGRA_SWGROUP_PPCS 8
#define TEGRA_SWGROUP_SATA 9
#define TEGRA_SWGROUP_MPCORE 10
#define TEGRA_SWGROUP_ISP2 11
#define TEGRA_SWGROUP_XUSB_HOST 12
#define TEGRA_SWGROUP_XUSB_DEV 13
#define TEGRA_SWGROUP_ISP2B 14
#define TEGRA_SWGROUP_TSEC 15
#define TEGRA_SWGROUP_A9AVP 16
#define TEGRA_SWGROUP_GPU 17
#define TEGRA_SWGROUP_SDMMC1A 18
#define TEGRA_SWGROUP_SDMMC2A 19
#define TEGRA_SWGROUP_SDMMC3A 20
#define TEGRA_SWGROUP_SDMMC4A 21
#define TEGRA_SWGROUP_VIC 22
#define TEGRA_SWGROUP_VI 23
#define TEGRA_SWGROUP_NVDEC 24
#define TEGRA_SWGROUP_APE 25
#define TEGRA_SWGROUP_NVJPG 26
#define TEGRA_SWGROUP_SE 27
#define TEGRA_SWGROUP_AXIAP 28
#define TEGRA_SWGROUP_ETR 29
#define TEGRA_SWGROUP_TSECB 30
#endif
......@@ -16,8 +16,20 @@
#include <linux/types.h>
struct device;
struct device_node;
struct generic_pm_domain;
void r8a7778_clocks_init(u32 mode);
void r8a7779_clocks_init(u32 mode);
void rcar_gen2_clocks_init(u32 mode);
#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
void cpg_mstp_add_clk_domain(struct device_node *np);
int cpg_mstp_attach_dev(struct generic_pm_domain *domain, struct device *dev);
void cpg_mstp_detach_dev(struct generic_pm_domain *domain, struct device *dev);
#else
static inline void cpg_mstp_add_clk_domain(struct device_node *np) {}
#endif
#endif
#ifndef LINUX_SOC_DOVE_PMU_H
#define LINUX_SOC_DOVE_PMU_H
int dove_init_pmu(void);
#endif
#ifndef __QCOM_SMD_RPM_H__
#define __QCOM_SMD_RPM_H__
struct qcom_smd_rpm;
#define QCOM_SMD_RPM_ACTIVE_STATE 0
#define QCOM_SMD_RPM_SLEEP_STATE 1
/*
* Constants used for addressing resources in the RPM.
*/
#define QCOM_SMD_RPM_BOOST 0x61747362
#define QCOM_SMD_RPM_BUS_CLK 0x316b6c63
#define QCOM_SMD_RPM_BUS_MASTER 0x73616d62
#define QCOM_SMD_RPM_BUS_SLAVE 0x766c7362
#define QCOM_SMD_RPM_CLK_BUF_A 0x616B6C63
#define QCOM_SMD_RPM_LDOA 0x616f646c
#define QCOM_SMD_RPM_LDOB 0x626F646C
#define QCOM_SMD_RPM_MEM_CLK 0x326b6c63
#define QCOM_SMD_RPM_MISC_CLK 0x306b6c63
#define QCOM_SMD_RPM_NCPA 0x6170636E
#define QCOM_SMD_RPM_NCPB 0x6270636E
#define QCOM_SMD_RPM_OCMEM_PWR 0x706d636f
#define QCOM_SMD_RPM_QPIC_CLK 0x63697071
#define QCOM_SMD_RPM_SMPA 0x61706d73
#define QCOM_SMD_RPM_SMPB 0x62706d73
#define QCOM_SMD_RPM_SPDM 0x63707362
#define QCOM_SMD_RPM_VSA 0x00617376
int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm,
int state,
u32 resource_type, u32 resource_id,
void *buf, size_t count);
#endif
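The resource-type constants above are four-character ASCII tags stored little-endian (QCOM_SMD_RPM_SMPA, for instance, decodes to "smpa"). A hedged sketch of a write against this API; the three-word request layout and the "swen" (software enable) key are assumptions borrowed from typical RPM resource messages, not something this header defines:

#include <linux/types.h>

/* Hypothetical key/length/value request body. */
struct example_rpm_req {
	u32 key;
	u32 nbytes;
	u32 value;
};

static int example_enable_ldoa3(struct qcom_smd_rpm *rpm)
{
	struct example_rpm_req req = {
		.key = 0x6e657773,	/* "swen", assumed enable key */
		.nbytes = sizeof(u32),
		.value = 1,
	};

	/* Enable LDO A3 in the active set. */
	return qcom_rpm_smd_write(rpm, QCOM_SMD_RPM_ACTIVE_STATE,
				  QCOM_SMD_RPM_LDOA, 3, &req, sizeof(req));
}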
#ifndef __QCOM_SMD_H__
#define __QCOM_SMD_H__
#include <linux/device.h>
#include <linux/mod_devicetable.h>
struct qcom_smd;
struct qcom_smd_channel;
struct qcom_smd_lookup;
/**
* struct qcom_smd_device - smd device struct
* @dev: the device struct
* @channel: handle to the smd channel for this device
*/
struct qcom_smd_device {
struct device dev;
struct qcom_smd_channel *channel;
};
/**
* struct qcom_smd_driver - smd driver struct
* @driver: underlying device driver
* @probe: invoked when the smd channel is found
* @remove: invoked when the smd channel is closed
* @callback: invoked when an inbound message is received on the channel,
* should return 0 on success or -EBUSY if the data cannot be
* consumed at this time
*/
struct qcom_smd_driver {
struct device_driver driver;
int (*probe)(struct qcom_smd_device *dev);
void (*remove)(struct qcom_smd_device *dev);
int (*callback)(struct qcom_smd_device *, const void *, size_t);
};
int qcom_smd_driver_register(struct qcom_smd_driver *drv);
void qcom_smd_driver_unregister(struct qcom_smd_driver *drv);
#define module_qcom_smd_driver(__smd_driver) \
module_driver(__smd_driver, qcom_smd_driver_register, \
qcom_smd_driver_unregister)
int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len);
#endif
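Putting the struct and registration macro above together, a minimal client looks roughly like this; the driver name, the callback body and the linux/soc/qcom/smd.h include path are assumptions for illustration:

#include <linux/module.h>
#include <linux/soc/qcom/smd.h>

static int example_smd_probe(struct qcom_smd_device *sdev)
{
	/* Channel is open; sdev->channel can be used with qcom_smd_send(). */
	return 0;
}

static int example_smd_callback(struct qcom_smd_device *sdev,
				const void *data, size_t count)
{
	/* Consume the inbound message, or return -EBUSY to retry later. */
	return 0;
}

static struct qcom_smd_driver example_smd_driver = {
	.driver = {
		.name = "example-smd",
	},
	.probe = example_smd_probe,
	.callback = example_smd_callback,
};
module_qcom_smd_driver(example_smd_driver);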
#ifndef __QCOM_SMEM_H__
#define __QCOM_SMEM_H__
#define QCOM_SMEM_HOST_ANY -1
int qcom_smem_alloc(unsigned host, unsigned item, size_t size);
int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size);
int qcom_smem_get_free_space(unsigned host);
#endif
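A short usage sketch for the SMEM accessors above; the item number is a placeholder and the function is hypothetical:

#include <linux/kernel.h>

static int example_find_smem_item(void)
{
	size_t size;
	void *ptr;
	int ret;

	/* Look up item 602 regardless of which host allocated it. */
	ret = qcom_smem_get(QCOM_SMEM_HOST_ANY, 602, &ptr, &size);
	if (ret)
		return ret;

	pr_info("SMEM item: %zu bytes at %p\n", size, ptr);
	return 0;
}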
......@@ -22,6 +22,7 @@
#define TEGRA114 0x35
#define TEGRA124 0x40
#define TEGRA132 0x13
#define TEGRA210 0x21
#define TEGRA_FUSE_SKU_CALIB_0 0xf0
#define TEGRA30_FUSE_SATA_CALIB 0x124
......@@ -47,10 +48,11 @@ struct tegra_sku_info {
int cpu_speedo_id;
int cpu_speedo_value;
int cpu_iddq_value;
int core_process_id;
int soc_process_id;
int soc_speedo_id;
int gpu_speedo_id;
int soc_speedo_value;
int gpu_process_id;
int gpu_speedo_id;
int gpu_speedo_value;
enum tegra_revision revision;
};
......
......@@ -102,6 +102,8 @@ struct tegra_mc_soc {
unsigned int num_address_bits;
unsigned int atom_size;
u8 client_id_mask;
const struct tegra_smmu_soc *smmu;
};
......
......@@ -67,6 +67,11 @@ int tegra_pmc_cpu_remove_clamping(int cpuid);
#define TEGRA_POWERGATE_XUSBC 22
#define TEGRA_POWERGATE_VIC 23
#define TEGRA_POWERGATE_IRAM 24
#define TEGRA_POWERGATE_NVDEC 25
#define TEGRA_POWERGATE_NVJPG 26
#define TEGRA_POWERGATE_AUD 27
#define TEGRA_POWERGATE_DFD 28
#define TEGRA_POWERGATE_VE2 29
#define TEGRA_POWERGATE_3D0 TEGRA_POWERGATE_3D
......