Commit 617e7481 authored by Linus Torvalds

Merge tag 'rproc-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc

Pull remoteproc updates from Bjorn Andersson:
 "This introduces a new "detached" state for remote processors that are
  deemed to be running at the time Linux boots and the infrastructure
  for "attaching" to these. It then introduces the support for
  performing this operation for the STM32 platform.

  The coredump functionality is moved out of the core file and gains
  support for an optional mode where the recovery phase awaits the
  notification from devcoredump that the dump should be released. This
  allows userspace to grab the coredump in scenarios where vmalloc space
  is too low to create a complete copy of the coredump before handing
  it to devcoredump.

  A new character-device-based interface is introduced to allow tying
  the stoppage of a remote processor to the termination of a user space
  process. This is useful in situations where such a process provides
  crucial resources/operations for the firmware running on the remote
  processor.

  The Texas Instruments K3 driver gains support for the C66x and C71x
  DSPs.

  The Qualcomm remoteprocs gain support for stashing relocation
  information in IMEM to aid post-mortem debugging, and the crash
  notification mechanism is generalized to be reusable in cases where
  loosely coupled drivers need to know about the status of a remote
  processor. One such example is the IPA hardware block, which is
  jointly owned with the modem and is migrated to this improved
  interface.

  It also includes a number of bug fixes and debug improvements for
  the Qualcomm modem remoteproc driver.

  And it cleans up the inconsistent interface for remoteproc drivers to
  implement power management"

* tag 'rproc-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc: (56 commits)
  remoteproc: core: Register the character device interface
  remoteproc: Add remoteproc character device interface
  remoteproc: kill IPA notify code
  net: ipa: new notification infrastructure
  remoteproc: k3-dsp: Add support for C71x DSPs
  dt-bindings: remoteproc: k3-dsp: Update bindings for C71x DSPs
  remoteproc: k3-dsp: Add support for L2RAM loading on C66x DSPs
  remoteproc: k3-dsp: Add a remoteproc driver of K3 C66x DSPs
  dt-bindings: remoteproc: Add bindings for C66x DSPs on TI K3 SoCs
  remoteproc: k3: Add TI-SCI processor control helper functions
  remoteproc: Introduce rproc_of_parse_firmware() helper
  dt-bindings: arm: keystone: Add common TI SCI bindings
  remoteproc: qcom_q6v5_mss: Remove redundant running state
  remoteproc: qcom: q6v5: Update running state before requesting stop
  remoteproc: qcom_q6v5_mss: Add modem debug policy support
  remoteproc: qcom_q6v5_mss: Validate modem blob firmware size before load
  remoteproc: qcom_q6v5_mss: Validate MBA firmware size before load
  rpmsg: update documentation
  remoteproc: qcom_q6v5_mss: Add MBA log extraction support
  remoteproc: Add coredump debugfs entry
  ...
parents dded87af 62b8f9e9
# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/arm/keystone/ti,k3-sci-common.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Common K3 TI-SCI bindings
maintainers:
- Nishanth Menon <nm@ti.com>
description: |
The TI K3 family of SoCs usually has a central System Controller Processor
that is responsible for managing various SoC-level resources like clocks,
resets, interrupts etc. Communication with that processor is performed
through the TI-SCI protocol.
Each specific device management node, such as a clock controller node, a
reset controller node or an interrupt-controller node, should define a
common set of properties that enables it to implement the corresponding
functionality over the TI-SCI protocol. The following are some of the
common properties needed by such individual nodes. The required properties
for each device management node are defined in the respective binding.
properties:
ti,sci:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Should be a phandle to the TI-SCI System Controller node
ti,sci-dev-id:
$ref: /schemas/types.yaml#/definitions/uint32
description: |
Should contain the TI-SCI device id corresponding to the device. Please
refer to the corresponding System Controller documentation for valid
values for the desired device.
ti,sci-proc-ids:
description: Should contain a single tuple of <proc_id host_id>.
$ref: /schemas/types.yaml#/definitions/uint32-array
items:
- description: TI-SCI processor id for the remote processor device
- description: TI-SCI host id to which processor control ownership
should be transferred
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/remoteproc/qcom,pil-info.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm peripheral image loader relocation info binding
maintainers:
- Bjorn Andersson <bjorn.andersson@linaro.org>
description:
The Qualcomm peripheral image loader relocation memory region, in IMEM, is
used for communicating remoteproc relocation information to post mortem
debugging tools.
properties:
compatible:
const: qcom,pil-reloc-info
reg:
maxItems: 1
required:
- compatible
- reg
examples:
- |
imem@146bf000 {
compatible = "syscon", "simple-mfd";
reg = <0x146bf000 0x1000>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0 0x146bf000 0x1000>;
pil-reloc@94c {
compatible = "qcom,pil-reloc-info";
reg = <0x94c 0xc8>;
};
};
...
# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/remoteproc/ti,k3-dsp-rproc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI K3 DSP devices
maintainers:
- Suman Anna <s-anna@ti.com>
description: |
The TI K3 family of SoCs usually has one or more TI DSP Core sub-systems
that are used to offload some of the processor-intensive tasks or
algorithms in order to achieve various system-level goals.
These processor sub-systems usually contain additional sub-modules like
L1 and/or L2 caches/SRAMs, an Interrupt Controller, an external memory
controller, a dedicated local power/sleep controller etc. The DSP processor
cores in the K3 SoCs are usually either a TMS320C66x CorePac processor or a
TMS320C71x CorePac processor.
Each DSP Core sub-system is represented as a single DT node. Each node has a
number of required or optional properties that enable the OS running on the
host processor (Arm CorePac) to perform the device management of the remote
processor and to communicate with the remote processor.
allOf:
- $ref: /schemas/arm/keystone/ti,k3-sci-common.yaml#
properties:
compatible:
enum:
- ti,j721e-c66-dsp
- ti,j721e-c71-dsp
description:
Use "ti,j721e-c66-dsp" for C66x DSPs on K3 J721E SoCs
Use "ti,j721e-c71-dsp" for C71x DSPs on K3 J721E SoCs
resets:
description: |
Should contain the phandle to the reset controller node managing the
local resets for this device, and a reset specifier.
maxItems: 1
firmware-name:
description: |
Should contain the name of the default firmware image
file located on the firmware search path
mboxes:
description: |
OMAP Mailbox specifier denoting the sub-mailbox, to be used for
communication with the remote processor. This property should match
with the sub-mailbox node used in the firmware image.
maxItems: 1
memory-region:
minItems: 2
maxItems: 8
description: |
phandles to the reserved memory nodes to be associated with the remoteproc
device. There should be at least two reserved memory nodes defined. The
reserved memory nodes should be carveout nodes, and should be defined as
per the bindings in
Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
items:
- description: region used for dynamic DMA allocations like vrings and
vring buffers
- description: region reserved for firmware image sections
additionalItems: true
# Optional properties:
# --------------------
sram:
$ref: /schemas/types.yaml#/definitions/phandle-array
minItems: 1
maxItems: 4
description: |
phandles to one or more reserved on-chip SRAM regions. The regions
should be defined as child nodes of the respective SRAM node, and
should be defined as per the generic bindings in,
Documentation/devicetree/bindings/sram/sram.yaml
if:
properties:
compatible:
enum:
- ti,j721e-c66-dsp
then:
properties:
reg:
items:
- description: Address and Size of the L2 SRAM internal memory region
- description: Address and Size of the L1 PRAM internal memory region
- description: Address and Size of the L1 DRAM internal memory region
reg-names:
items:
- const: l2sram
- const: l1pram
- const: l1dram
else:
if:
properties:
compatible:
enum:
- ti,j721e-c71-dsp
then:
properties:
reg:
items:
- description: Address and Size of the L2 SRAM internal memory region
- description: Address and Size of the L1 DRAM internal memory region
reg-names:
items:
- const: l2sram
- const: l1dram
required:
- compatible
- reg
- reg-names
- ti,sci
- ti,sci-dev-id
- ti,sci-proc-ids
- resets
- firmware-name
- mboxes
- memory-region
unevaluatedProperties: false
examples:
- |
/ {
model = "Texas Instruments K3 J721E SoC";
compatible = "ti,j721e";
#address-cells = <2>;
#size-cells = <2>;
bus@100000 {
compatible = "simple-bus";
#address-cells = <2>;
#size-cells = <2>;
ranges = <0x00 0x00100000 0x00 0x00100000 0x00 0x00020000>, /* ctrl mmr */
<0x00 0x64800000 0x00 0x64800000 0x00 0x00800000>, /* C71_0 */
<0x4d 0x80800000 0x4d 0x80800000 0x00 0x00800000>, /* C66_0 */
<0x4d 0x81800000 0x4d 0x81800000 0x00 0x00800000>; /* C66_1 */
/* J721E C66_0 DSP node */
dsp@4d80800000 {
compatible = "ti,j721e-c66-dsp";
reg = <0x4d 0x80800000 0x00 0x00048000>,
<0x4d 0x80e00000 0x00 0x00008000>,
<0x4d 0x80f00000 0x00 0x00008000>;
reg-names = "l2sram", "l1pram", "l1dram";
ti,sci = <&dmsc>;
ti,sci-dev-id = <142>;
ti,sci-proc-ids = <0x03 0xFF>;
resets = <&k3_reset 142 1>;
firmware-name = "j7-c66_0-fw";
memory-region = <&c66_0_dma_memory_region>,
<&c66_0_memory_region>;
mboxes = <&mailbox0_cluster3 &mbox_c66_0>;
};
/* J721E C71_0 DSP node */
c71_0: dsp@64800000 {
compatible = "ti,j721e-c71-dsp";
reg = <0x00 0x64800000 0x00 0x00080000>,
<0x00 0x64e00000 0x00 0x0000c000>;
reg-names = "l2sram", "l1dram";
ti,sci = <&dmsc>;
ti,sci-dev-id = <15>;
ti,sci-proc-ids = <0x30 0xFF>;
resets = <&k3_reset 15 1>;
firmware-name = "j7-c71_0-fw";
memory-region = <&c71_0_dma_memory_region>,
<&c71_0_memory_region>;
mboxes = <&mailbox0_cluster4 &mbox_c71_0>;
};
};
};
......@@ -192,9 +192,9 @@ Returns 0 on success and an appropriate error value on failure.
::
struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_channel *rpdev,
void (*cb)(struct rpmsg_channel *, void *, int, void *, u32),
void *priv, u32 addr);
struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev,
rpmsg_rx_cb_t cb, void *priv,
struct rpmsg_channel_info chinfo);
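As a purely illustrative sketch (not part of the kernel documentation), an
rx callback and endpoint creation against the updated signature could look
as follows; the callback name, the "my-channel" channel name and the use of
any-address values are made up for the example::

  static int my_rx_cb(struct rpmsg_device *rpdev, void *data, int len,
                      void *priv, u32 src)
  {
          /* consume the inbound message */
          return 0;
  }

  static struct rpmsg_endpoint *my_create_ept(struct rpmsg_device *rpdev)
  {
          struct rpmsg_channel_info chinfo = {
                  .src = RPMSG_ADDR_ANY,
                  .dst = RPMSG_ADDR_ANY,
          };

          strscpy(chinfo.name, "my-channel", sizeof(chinfo.name));

          return rpmsg_create_ept(rpdev, my_rx_cb, NULL, chinfo);
  }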
every rpmsg address in the system is bound to an rx callback (so when
inbound messages arrive, they are dispatched by the rpmsg bus using the
......
......@@ -339,6 +339,7 @@ Code Seq# Include File Comments
0xB4 00-0F linux/gpio.h <mailto:linux-gpio@vger.kernel.org>
0xB5 00-0F uapi/linux/rpmsg.h <mailto:linux-remoteproc@vger.kernel.org>
0xB6 all linux/fpga-dfl.h
0xB7 all uapi/linux/remoteproc_cdev.h <mailto:linux-remoteproc@vger.kernel.org>
0xC0 00-0F linux/usb/iowarrior.h
0xCA 00-0F uapi/misc/cxl.h
0xCA 10-2F uapi/misc/ocxl.h
......
......@@ -17065,6 +17065,7 @@ M: Tero Kristo <t-kristo@ti.com>
M: Santosh Shilimkar <ssantosh@kernel.org>
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/arm/keystone/ti,k3-sci-common.yaml
F: Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
F: Documentation/devicetree/bindings/clock/ti,sci-clk.txt
F: Documentation/devicetree/bindings/interrupt-controller/ti,sci-inta.txt
......
......@@ -10,6 +10,7 @@
#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/pm_wakeup.h>
#include <linux/notifier.h>
#include "ipa_version.h"
#include "gsi.h"
......@@ -73,6 +74,8 @@ struct ipa {
enum ipa_version version;
struct platform_device *pdev;
struct rproc *modem_rproc;
struct notifier_block nb;
void *notifier;
struct ipa_smp2p *smp2p;
struct ipa_clock *clock;
atomic_t suspend_ref;
......
......@@ -9,7 +9,7 @@
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/if_rmnet.h>
#include <linux/remoteproc/qcom_q6v5_ipa_notify.h>
#include <linux/remoteproc/qcom_rproc.h>
#include "ipa.h"
#include "ipa_data.h"
......@@ -311,43 +311,40 @@ static void ipa_modem_crashed(struct ipa *ipa)
dev_err(dev, "error %d zeroing modem memory regions\n", ret);
}
static void ipa_modem_notify(void *data, enum qcom_rproc_event event)
static int ipa_modem_notify(struct notifier_block *nb, unsigned long action,
void *data)
{
struct ipa *ipa = data;
struct device *dev;
struct ipa *ipa = container_of(nb, struct ipa, nb);
struct qcom_ssr_notify_data *notify_data = data;
struct device *dev = &ipa->pdev->dev;
dev = &ipa->pdev->dev;
switch (event) {
case MODEM_STARTING:
switch (action) {
case QCOM_SSR_BEFORE_POWERUP:
dev_info(dev, "received modem starting event\n");
ipa_smp2p_notify_reset(ipa);
break;
case MODEM_RUNNING:
case QCOM_SSR_AFTER_POWERUP:
dev_info(dev, "received modem running event\n");
break;
case MODEM_STOPPING:
case MODEM_CRASHED:
case QCOM_SSR_BEFORE_SHUTDOWN:
dev_info(dev, "received modem %s event\n",
event == MODEM_STOPPING ? "stopping"
: "crashed");
notify_data->crashed ? "crashed" : "stopping");
if (ipa->setup_complete)
ipa_modem_crashed(ipa);
break;
case MODEM_OFFLINE:
case QCOM_SSR_AFTER_SHUTDOWN:
dev_info(dev, "received modem offline event\n");
break;
case MODEM_REMOVING:
dev_info(dev, "received modem stopping event\n");
break;
default:
dev_err(&ipa->pdev->dev, "unrecognized event %u\n", event);
dev_err(dev, "received unrecognized event %lu\n", action);
break;
}
return NOTIFY_OK;
}
int ipa_modem_init(struct ipa *ipa, bool modem_init)
......@@ -362,13 +359,30 @@ void ipa_modem_exit(struct ipa *ipa)
int ipa_modem_config(struct ipa *ipa)
{
return qcom_register_ipa_notify(ipa->modem_rproc, ipa_modem_notify,
ipa);
void *notifier;
ipa->nb.notifier_call = ipa_modem_notify;
notifier = qcom_register_ssr_notifier("mpss", &ipa->nb);
if (IS_ERR(notifier))
return PTR_ERR(notifier);
ipa->notifier = notifier;
return 0;
}
void ipa_modem_deconfig(struct ipa *ipa)
{
qcom_deregister_ipa_notify(ipa->modem_rproc);
struct device *dev = &ipa->pdev->dev;
int ret;
ret = qcom_unregister_ssr_notifier(ipa->notifier, &ipa->nb);
if (ret)
dev_err(dev, "error %d unregistering notifier", ret);
ipa->notifier = NULL;
memset(&ipa->nb, 0, sizeof(ipa->nb));
}
int ipa_modem_setup(struct ipa *ipa)
......
......@@ -14,6 +14,15 @@ config REMOTEPROC
if REMOTEPROC
config REMOTEPROC_CDEV
bool "Remoteproc character device interface"
help
Say y here to have a character device interface for the remoteproc
framework. Userspace can boot/shutdown remote processors through
this interface.
It's safe to say N if you don't want to use this interface.
config IMX_REMOTEPROC
tristate "IMX6/7 remoteproc support"
depends on ARCH_MXC
......@@ -116,6 +125,9 @@ config KEYSTONE_REMOTEPROC
It's safe to say N here if you're not interested in the Keystone
DSPs or just want to use a bare minimum kernel.
config QCOM_PIL_INFO
tristate
config QCOM_RPROC_COMMON
tristate
......@@ -132,6 +144,7 @@ config QCOM_Q6V5_ADSP
depends on RPMSG_QCOM_GLINK_SMEM || RPMSG_QCOM_GLINK_SMEM=n
depends on QCOM_SYSMON || QCOM_SYSMON=n
select MFD_SYSCON
select QCOM_PIL_INFO
select QCOM_MDT_LOADER
select QCOM_Q6V5_COMMON
select QCOM_RPROC_COMMON
......@@ -148,8 +161,8 @@ config QCOM_Q6V5_MSS
depends on QCOM_SYSMON || QCOM_SYSMON=n
select MFD_SYSCON
select QCOM_MDT_LOADER
select QCOM_PIL_INFO
select QCOM_Q6V5_COMMON
select QCOM_Q6V5_IPA_NOTIFY
select QCOM_RPROC_COMMON
select QCOM_SCM
help
......@@ -164,6 +177,7 @@ config QCOM_Q6V5_PAS
depends on RPMSG_QCOM_GLINK_SMEM || RPMSG_QCOM_GLINK_SMEM=n
depends on QCOM_SYSMON || QCOM_SYSMON=n
select MFD_SYSCON
select QCOM_PIL_INFO
select QCOM_MDT_LOADER
select QCOM_Q6V5_COMMON
select QCOM_RPROC_COMMON
......@@ -182,6 +196,7 @@ config QCOM_Q6V5_WCSS
depends on QCOM_SYSMON || QCOM_SYSMON=n
select MFD_SYSCON
select QCOM_MDT_LOADER
select QCOM_PIL_INFO
select QCOM_Q6V5_COMMON
select QCOM_RPROC_COMMON
select QCOM_SCM
......@@ -189,9 +204,6 @@ config QCOM_Q6V5_WCSS
Say y here to support the Qualcomm Peripheral Image Loader for the
Hexagon V5 based WCSS remote processors.
config QCOM_Q6V5_IPA_NOTIFY
tristate
config QCOM_SYSMON
tristate "Qualcomm sysmon driver"
depends on RPMSG
......@@ -215,6 +227,7 @@ config QCOM_WCNSS_PIL
depends on QCOM_SMEM
depends on QCOM_SYSMON || QCOM_SYSMON=n
select QCOM_MDT_LOADER
select QCOM_PIL_INFO
select QCOM_RPROC_COMMON
select QCOM_SCM
help
......@@ -249,6 +262,19 @@ config STM32_RPROC
This can be either built-in or a loadable module.
config TI_K3_DSP_REMOTEPROC
tristate "TI K3 DSP remoteproc support"
depends on ARCH_K3
select MAILBOX
select OMAP2PLUS_MBOX
help
Say m here to support TI's C66x and C71x DSP remote processor
subsystems on various TI K3 family SoCs through the remote
processor framework.
It's safe to say N here if you're not interested in utilizing
the DSP slave processors.
endif # REMOTEPROC
endmenu
......@@ -5,10 +5,12 @@
obj-$(CONFIG_REMOTEPROC) += remoteproc.o
remoteproc-y := remoteproc_core.o
remoteproc-y += remoteproc_coredump.o
remoteproc-y += remoteproc_debugfs.o
remoteproc-y += remoteproc_sysfs.o
remoteproc-y += remoteproc_virtio.o
remoteproc-y += remoteproc_elf_loader.o
obj-$(CONFIG_REMOTEPROC_CDEV) += remoteproc_cdev.o
obj-$(CONFIG_IMX_REMOTEPROC) += imx_rproc.o
obj-$(CONFIG_INGENIC_VPU_RPROC) += ingenic_rproc.o
obj-$(CONFIG_MTK_SCP) += mtk_scp.o mtk_scp_ipi.o
......@@ -16,13 +18,13 @@ obj-$(CONFIG_OMAP_REMOTEPROC) += omap_remoteproc.o
obj-$(CONFIG_WKUP_M3_RPROC) += wkup_m3_rproc.o
obj-$(CONFIG_DA8XX_REMOTEPROC) += da8xx_remoteproc.o
obj-$(CONFIG_KEYSTONE_REMOTEPROC) += keystone_remoteproc.o
obj-$(CONFIG_QCOM_PIL_INFO) += qcom_pil_info.o
obj-$(CONFIG_QCOM_RPROC_COMMON) += qcom_common.o
obj-$(CONFIG_QCOM_Q6V5_COMMON) += qcom_q6v5.o
obj-$(CONFIG_QCOM_Q6V5_ADSP) += qcom_q6v5_adsp.o
obj-$(CONFIG_QCOM_Q6V5_MSS) += qcom_q6v5_mss.o
obj-$(CONFIG_QCOM_Q6V5_PAS) += qcom_q6v5_pas.o
obj-$(CONFIG_QCOM_Q6V5_WCSS) += qcom_q6v5_wcss.o
obj-$(CONFIG_QCOM_Q6V5_IPA_NOTIFY) += qcom_q6v5_ipa_notify.o
obj-$(CONFIG_QCOM_SYSMON) += qcom_sysmon.o
obj-$(CONFIG_QCOM_WCNSS_PIL) += qcom_wcnss_pil.o
qcom_wcnss_pil-y += qcom_wcnss.o
......@@ -30,3 +32,4 @@ qcom_wcnss_pil-y += qcom_wcnss_iris.o
obj-$(CONFIG_ST_REMOTEPROC) += st_remoteproc.o
obj-$(CONFIG_ST_SLIM_REMOTEPROC) += st_slim_rproc.o
obj-$(CONFIG_STM32_RPROC) += stm32_rproc.o
obj-$(CONFIG_TI_K3_DSP_REMOTEPROC) += ti_k3_dsp_remoteproc.o
......@@ -11,7 +11,6 @@
#include <linux/io.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/remoteproc.h>
#include "remoteproc_internal.h"
......@@ -62,6 +61,28 @@ struct vpu {
struct device *dev;
};
static int ingenic_rproc_prepare(struct rproc *rproc)
{
struct vpu *vpu = rproc->priv;
int ret;
/* The clocks must be enabled for the firmware to be loaded in TCSM */
ret = clk_bulk_prepare_enable(ARRAY_SIZE(vpu->clks), vpu->clks);
if (ret)
dev_err(vpu->dev, "Unable to start clocks: %d\n", ret);
return ret;
}
static int ingenic_rproc_unprepare(struct rproc *rproc)
{
struct vpu *vpu = rproc->priv;
clk_bulk_disable_unprepare(ARRAY_SIZE(vpu->clks), vpu->clks);
return 0;
}
static int ingenic_rproc_start(struct rproc *rproc)
{
struct vpu *vpu = rproc->priv;
......@@ -115,6 +136,8 @@ static void *ingenic_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
}
static struct rproc_ops ingenic_rproc_ops = {
.prepare = ingenic_rproc_prepare,
.unprepare = ingenic_rproc_unprepare,
.start = ingenic_rproc_start,
.stop = ingenic_rproc_stop,
.kick = ingenic_rproc_kick,
......@@ -135,16 +158,6 @@ static irqreturn_t vpu_interrupt(int irq, void *data)
return rproc_vq_interrupt(rproc, vring);
}
static void ingenic_rproc_disable_clks(void *data)
{
struct vpu *vpu = data;
pm_runtime_resume(vpu->dev);
pm_runtime_disable(vpu->dev);
clk_bulk_disable_unprepare(ARRAY_SIZE(vpu->clks), vpu->clks);
}
static int ingenic_rproc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
......@@ -206,35 +219,13 @@ static int ingenic_rproc_probe(struct platform_device *pdev)
disable_irq(vpu->irq);
/* The clocks must be enabled for the firmware to be loaded in TCSM */
ret = clk_bulk_prepare_enable(ARRAY_SIZE(vpu->clks), vpu->clks);
if (ret) {
dev_err(dev, "Unable to start clocks\n");
return ret;
}
pm_runtime_irq_safe(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
pm_runtime_use_autosuspend(dev);
ret = devm_add_action_or_reset(dev, ingenic_rproc_disable_clks, vpu);
if (ret) {
dev_err(dev, "Unable to register action\n");
goto out_pm_put;
}
ret = devm_rproc_add(dev, rproc);
if (ret) {
dev_err(dev, "Failed to register remote processor\n");
goto out_pm_put;
return ret;
}
out_pm_put:
pm_runtime_put_autosuspend(dev);
return ret;
return 0;
}
static const struct of_device_id ingenic_rproc_of_matches[] = {
......@@ -243,33 +234,10 @@ static const struct of_device_id ingenic_rproc_of_matches[] = {
};
MODULE_DEVICE_TABLE(of, ingenic_rproc_of_matches);
static int __maybe_unused ingenic_rproc_suspend(struct device *dev)
{
struct vpu *vpu = dev_get_drvdata(dev);
clk_bulk_disable(ARRAY_SIZE(vpu->clks), vpu->clks);
return 0;
}
static int __maybe_unused ingenic_rproc_resume(struct device *dev)
{
struct vpu *vpu = dev_get_drvdata(dev);
return clk_bulk_enable(ARRAY_SIZE(vpu->clks), vpu->clks);
}
static const struct dev_pm_ops __maybe_unused ingenic_rproc_pm = {
SET_RUNTIME_PM_OPS(ingenic_rproc_suspend, ingenic_rproc_resume, NULL)
};
static struct platform_driver ingenic_rproc_driver = {
.probe = ingenic_rproc_probe,
.driver = {
.name = "ingenic-vpu",
#ifdef CONFIG_PM
.pm = &ingenic_rproc_pm,
#endif
.of_match_table = ingenic_rproc_of_matches,
},
};
......
......@@ -12,8 +12,10 @@
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/remoteproc.h>
#include <linux/remoteproc/qcom_rproc.h>
#include <linux/rpmsg/qcom_glink.h>
#include <linux/rpmsg/qcom_smd.h>
#include <linux/slab.h>
#include <linux/soc/qcom/mdt_loader.h>
#include "remoteproc_internal.h"
......@@ -23,7 +25,14 @@
#define to_smd_subdev(d) container_of(d, struct qcom_rproc_subdev, subdev)
#define to_ssr_subdev(d) container_of(d, struct qcom_rproc_ssr, subdev)
static BLOCKING_NOTIFIER_HEAD(ssr_notifiers);
struct qcom_ssr_subsystem {
const char *name;
struct srcu_notifier_head notifier_list;
struct list_head list;
};
static LIST_HEAD(qcom_ssr_subsystem_list);
static DEFINE_MUTEX(qcom_ssr_subsys_lock);
static int glink_subdev_start(struct rproc_subdev *subdev)
{
......@@ -189,37 +198,122 @@ void qcom_remove_smd_subdev(struct rproc *rproc, struct qcom_rproc_subdev *smd)
}
EXPORT_SYMBOL_GPL(qcom_remove_smd_subdev);
static struct qcom_ssr_subsystem *qcom_ssr_get_subsys(const char *name)
{
struct qcom_ssr_subsystem *info;
mutex_lock(&qcom_ssr_subsys_lock);
/* Match in the global qcom_ssr_subsystem_list with name */
list_for_each_entry(info, &qcom_ssr_subsystem_list, list)
if (!strcmp(info->name, name))
goto out;
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
info = ERR_PTR(-ENOMEM);
goto out;
}
info->name = kstrdup_const(name, GFP_KERNEL);
srcu_init_notifier_head(&info->notifier_list);
/* Add to global notification list */
list_add_tail(&info->list, &qcom_ssr_subsystem_list);
out:
mutex_unlock(&qcom_ssr_subsys_lock);
return info;
}
/**
* qcom_register_ssr_notifier() - register SSR notification handler
* @nb: notifier_block to notify for restart notifications
* @name: Subsystem's SSR name
* @nb: notifier_block to be invoked upon subsystem's state change
*
* Returns 0 on success, negative errno on failure.
* This registers the @nb notifier block as part of the notifier chain for a
* remoteproc associated with @name. The notifier block's callback
* will be invoked when the remote processor's SSR events occur
* (pre/post startup and pre/post shutdown).
*
* This register the @notify function as handler for restart notifications. As
* remote processors are stopped this function will be called, with the SSR
* name passed as a parameter.
* Return: a subsystem cookie on success, ERR_PTR on failure.
*/
int qcom_register_ssr_notifier(struct notifier_block *nb)
void *qcom_register_ssr_notifier(const char *name, struct notifier_block *nb)
{
return blocking_notifier_chain_register(&ssr_notifiers, nb);
struct qcom_ssr_subsystem *info;
info = qcom_ssr_get_subsys(name);
if (IS_ERR(info))
return info;
srcu_notifier_chain_register(&info->notifier_list, nb);
return &info->notifier_list;
}
EXPORT_SYMBOL_GPL(qcom_register_ssr_notifier);
/**
* qcom_unregister_ssr_notifier() - unregister SSR notification handler
* @notify: subsystem cookie returned from qcom_register_ssr_notifier
* @nb: notifier_block to unregister
*
* This function will unregister the notifier from the particular notifier
* chain.
*
* Return: 0 on success, %ENOENT otherwise.
*/
void qcom_unregister_ssr_notifier(struct notifier_block *nb)
int qcom_unregister_ssr_notifier(void *notify, struct notifier_block *nb)
{
blocking_notifier_chain_unregister(&ssr_notifiers, nb);
return srcu_notifier_chain_unregister(notify, nb);
}
EXPORT_SYMBOL_GPL(qcom_unregister_ssr_notifier);
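/*
 * Illustrative sketch only, not part of this file: a typical pairing of
 * the register/unregister calls above, mirroring the ipa_modem.c
 * conversion earlier in this series. The my_* names and the "mpss"
 * subsystem string are example assumptions.
 *
 *	static int my_ssr_cb(struct notifier_block *nb, unsigned long action,
 *			     void *data)
 *	{
 *		struct qcom_ssr_notify_data *notify_data = data;
 *
 *		switch (action) {
 *		case QCOM_SSR_BEFORE_SHUTDOWN:
 *			if (notify_data->crashed)
 *				... handle a crash of the remote ...
 *			break;
 *		default:
 *			break;
 *		}
 *		return NOTIFY_OK;
 *	}
 *
 *	cookie = qcom_register_ssr_notifier("mpss", &my_nb);
 *	...
 *	qcom_unregister_ssr_notifier(cookie, &my_nb);
 */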
static int ssr_notify_prepare(struct rproc_subdev *subdev)
{
struct qcom_rproc_ssr *ssr = to_ssr_subdev(subdev);
struct qcom_ssr_notify_data data = {
.name = ssr->info->name,
.crashed = false,
};
srcu_notifier_call_chain(&ssr->info->notifier_list,
QCOM_SSR_BEFORE_POWERUP, &data);
return 0;
}
static int ssr_notify_start(struct rproc_subdev *subdev)
{
struct qcom_rproc_ssr *ssr = to_ssr_subdev(subdev);
struct qcom_ssr_notify_data data = {
.name = ssr->info->name,
.crashed = false,
};
srcu_notifier_call_chain(&ssr->info->notifier_list,
QCOM_SSR_AFTER_POWERUP, &data);
return 0;
}
static void ssr_notify_stop(struct rproc_subdev *subdev, bool crashed)
{
struct qcom_rproc_ssr *ssr = to_ssr_subdev(subdev);
struct qcom_ssr_notify_data data = {
.name = ssr->info->name,
.crashed = crashed,
};
srcu_notifier_call_chain(&ssr->info->notifier_list,
QCOM_SSR_BEFORE_SHUTDOWN, &data);
}
static void ssr_notify_unprepare(struct rproc_subdev *subdev)
{
struct qcom_rproc_ssr *ssr = to_ssr_subdev(subdev);
struct qcom_ssr_notify_data data = {
.name = ssr->info->name,
.crashed = false,
};
blocking_notifier_call_chain(&ssr_notifiers, 0, (void *)ssr->name);
srcu_notifier_call_chain(&ssr->info->notifier_list,
QCOM_SSR_AFTER_SHUTDOWN, &data);
}
/**
......@@ -229,12 +323,24 @@ static void ssr_notify_unprepare(struct rproc_subdev *subdev)
* @ssr_name: identifier to use for notifications originating from @rproc
*
* As the @ssr is registered with the @rproc SSR events will be sent to all
* registered listeners in the system as the remoteproc is shut down.
* registered listeners for the remoteproc when its SSR events occur
* (pre/post startup and pre/post shutdown).
*/
void qcom_add_ssr_subdev(struct rproc *rproc, struct qcom_rproc_ssr *ssr,
const char *ssr_name)
{
ssr->name = ssr_name;
struct qcom_ssr_subsystem *info;
info = qcom_ssr_get_subsys(ssr_name);
if (IS_ERR(info)) {
dev_err(&rproc->dev, "Failed to add ssr subdevice\n");
return;
}
ssr->info = info;
ssr->subdev.prepare = ssr_notify_prepare;
ssr->subdev.start = ssr_notify_start;
ssr->subdev.stop = ssr_notify_stop;
ssr->subdev.unprepare = ssr_notify_unprepare;
rproc_add_subdev(rproc, &ssr->subdev);
......@@ -249,6 +355,7 @@ EXPORT_SYMBOL_GPL(qcom_add_ssr_subdev);
void qcom_remove_ssr_subdev(struct rproc *rproc, struct qcom_rproc_ssr *ssr)
{
rproc_remove_subdev(rproc, &ssr->subdev);
ssr->info = NULL;
}
EXPORT_SYMBOL_GPL(qcom_remove_ssr_subdev);
......
......@@ -26,10 +26,11 @@ struct qcom_rproc_subdev {
struct qcom_smd_edge *edge;
};
struct qcom_ssr_subsystem;
struct qcom_rproc_ssr {
struct rproc_subdev subdev;
const char *name;
struct qcom_ssr_subsystem *info;
};
void qcom_add_glink_subdev(struct rproc *rproc, struct qcom_rproc_glink *glink,
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2019-2020 Linaro Ltd.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of_address.h>
#include "qcom_pil_info.h"
/*
* The PIL relocation information region is used to communicate memory regions
* occupied by co-processor firmware for post mortem crash analysis.
*
* It consists of an array of entries with an 8 byte textual identifier of the
* region followed by a 64 bit base address and 32 bit size, both little
* endian.
*/
#define PIL_RELOC_NAME_LEN 8
#define PIL_RELOC_ENTRY_SIZE (PIL_RELOC_NAME_LEN + sizeof(__le64) + sizeof(__le32))
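/*
 * Illustrative sketch only, not part of this driver: each IMEM slot
 * described above is equivalent to the following packed layout (the
 * struct name is made up here). The total of 20 bytes matches
 * PIL_RELOC_ENTRY_SIZE, which is why the 64-bit base address is later
 * written as two 32-bit writel() calls: odd entries are only 4-byte
 * aligned.
 *
 *	struct pil_reloc_entry {
 *		char name[PIL_RELOC_NAME_LEN];	8 byte textual identifier
 *		__le64 base;			base address, little endian
 *		__le32 size;			size, little endian
 *	} __packed;
 */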
struct pil_reloc {
void __iomem *base;
size_t num_entries;
};
static struct pil_reloc _reloc __read_mostly;
static DEFINE_MUTEX(pil_reloc_lock);
static int qcom_pil_info_init(void)
{
struct device_node *np;
struct resource imem;
void __iomem *base;
int ret;
/* Already initialized? */
if (_reloc.base)
return 0;
np = of_find_compatible_node(NULL, NULL, "qcom,pil-reloc-info");
if (!np)
return -ENOENT;
ret = of_address_to_resource(np, 0, &imem);
of_node_put(np);
if (ret < 0)
return ret;
base = ioremap(imem.start, resource_size(&imem));
if (!base) {
pr_err("failed to map PIL relocation info region\n");
return -ENOMEM;
}
memset_io(base, 0, resource_size(&imem));
_reloc.base = base;
_reloc.num_entries = resource_size(&imem) / PIL_RELOC_ENTRY_SIZE;
return 0;
}
/**
* qcom_pil_info_store() - store PIL information of image in IMEM
* @image: name of the image
* @base: base address of the loaded image
* @size: size of the loaded image
*
* Return: 0 on success, negative errno on failure
*/
int qcom_pil_info_store(const char *image, phys_addr_t base, size_t size)
{
char buf[PIL_RELOC_NAME_LEN];
void __iomem *entry;
int ret;
int i;
mutex_lock(&pil_reloc_lock);
ret = qcom_pil_info_init();
if (ret < 0) {
mutex_unlock(&pil_reloc_lock);
return ret;
}
for (i = 0; i < _reloc.num_entries; i++) {
entry = _reloc.base + i * PIL_RELOC_ENTRY_SIZE;
memcpy_fromio(buf, entry, PIL_RELOC_NAME_LEN);
/*
* An empty record means we didn't find it, given that the
* records are packed.
*/
if (!buf[0])
goto found_unused;
if (!strncmp(buf, image, PIL_RELOC_NAME_LEN))
goto found_existing;
}
pr_warn("insufficient PIL info slots\n");
mutex_unlock(&pil_reloc_lock);
return -ENOMEM;
found_unused:
memcpy_toio(entry, image, PIL_RELOC_NAME_LEN);
found_existing:
/* Use two writel() as base is only aligned to 4 bytes on odd entries */
writel(base, entry + PIL_RELOC_NAME_LEN);
writel((u64)base >> 32, entry + PIL_RELOC_NAME_LEN + 4);
writel(size, entry + PIL_RELOC_NAME_LEN + sizeof(__le64));
mutex_unlock(&pil_reloc_lock);
return 0;
}
EXPORT_SYMBOL_GPL(qcom_pil_info_store);
static void __exit pil_reloc_exit(void)
{
mutex_lock(&pil_reloc_lock);
iounmap(_reloc.base);
_reloc.base = NULL;
mutex_unlock(&pil_reloc_lock);
}
module_exit(pil_reloc_exit);
MODULE_DESCRIPTION("Qualcomm PIL relocation info");
MODULE_LICENSE("GPL v2");
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __QCOM_PIL_INFO_H__
#define __QCOM_PIL_INFO_H__
#include <linux/types.h>
int qcom_pil_info_store(const char *image, phys_addr_t base, size_t size);
#endif
......@@ -153,6 +153,8 @@ int qcom_q6v5_request_stop(struct qcom_q6v5 *q6v5)
{
int ret;
q6v5->running = false;
qcom_smem_state_update_bits(q6v5->state,
BIT(q6v5->stop_bit), BIT(q6v5->stop_bit));
......
......@@ -26,6 +26,7 @@
#include <linux/soc/qcom/smem_state.h>
#include "qcom_common.h"
#include "qcom_pil_info.h"
#include "qcom_q6v5.h"
#include "remoteproc_internal.h"
......@@ -82,6 +83,7 @@ struct qcom_adsp {
unsigned int halt_lpass;
int crash_reason_smem;
const char *info_name;
struct completion start_done;
struct completion stop_done;
......@@ -164,10 +166,17 @@ static int qcom_adsp_shutdown(struct qcom_adsp *adsp)
static int adsp_load(struct rproc *rproc, const struct firmware *fw)
{
struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv;
int ret;
ret = qcom_mdt_load_no_init(adsp->dev, fw, rproc->firmware, 0,
adsp->mem_region, adsp->mem_phys,
adsp->mem_size, &adsp->mem_reloc);
if (ret)
return ret;
qcom_pil_info_store(adsp->info_name, adsp->mem_phys, adsp->mem_size);
return qcom_mdt_load_no_init(adsp->dev, fw, rproc->firmware, 0,
adsp->mem_region, adsp->mem_phys, adsp->mem_size,
&adsp->mem_reloc);
return 0;
}
static int adsp_start(struct rproc *rproc)
......@@ -436,6 +445,7 @@ static int adsp_probe(struct platform_device *pdev)
adsp = (struct qcom_adsp *)rproc->priv;
adsp->dev = &pdev->dev;
adsp->rproc = rproc;
adsp->info_name = desc->sysmon_name;
platform_set_drvdata(pdev, adsp);
ret = adsp_alloc_memory_region(adsp);
......
// SPDX-License-Identifier: GPL-2.0
/*
* Qualcomm IPA notification subdev support
*
* Copyright (C) 2019 Linaro Ltd.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/remoteproc.h>
#include <linux/remoteproc/qcom_q6v5_ipa_notify.h>
static void
ipa_notify_common(struct rproc_subdev *subdev, enum qcom_rproc_event event)
{
struct qcom_rproc_ipa_notify *ipa_notify;
qcom_ipa_notify_t notify;
ipa_notify = container_of(subdev, struct qcom_rproc_ipa_notify, subdev);
notify = ipa_notify->notify;
if (notify)
notify(ipa_notify->data, event);
}
static int ipa_notify_prepare(struct rproc_subdev *subdev)
{
ipa_notify_common(subdev, MODEM_STARTING);
return 0;
}
static int ipa_notify_start(struct rproc_subdev *subdev)
{
ipa_notify_common(subdev, MODEM_RUNNING);
return 0;
}
static void ipa_notify_stop(struct rproc_subdev *subdev, bool crashed)
{
ipa_notify_common(subdev, crashed ? MODEM_CRASHED : MODEM_STOPPING);
}
static void ipa_notify_unprepare(struct rproc_subdev *subdev)
{
ipa_notify_common(subdev, MODEM_OFFLINE);
}
static void ipa_notify_removing(struct rproc_subdev *subdev)
{
ipa_notify_common(subdev, MODEM_REMOVING);
}
/* Register the IPA notification subdevice with the Q6V5 MSS remoteproc */
void qcom_add_ipa_notify_subdev(struct rproc *rproc,
struct qcom_rproc_ipa_notify *ipa_notify)
{
ipa_notify->notify = NULL;
ipa_notify->data = NULL;
ipa_notify->subdev.prepare = ipa_notify_prepare;
ipa_notify->subdev.start = ipa_notify_start;
ipa_notify->subdev.stop = ipa_notify_stop;
ipa_notify->subdev.unprepare = ipa_notify_unprepare;
rproc_add_subdev(rproc, &ipa_notify->subdev);
}
EXPORT_SYMBOL_GPL(qcom_add_ipa_notify_subdev);
/* Remove the IPA notification subdevice */
void qcom_remove_ipa_notify_subdev(struct rproc *rproc,
struct qcom_rproc_ipa_notify *ipa_notify)
{
struct rproc_subdev *subdev = &ipa_notify->subdev;
ipa_notify_removing(subdev);
rproc_remove_subdev(rproc, subdev);
ipa_notify->notify = NULL; /* Make it obvious */
}
EXPORT_SYMBOL_GPL(qcom_remove_ipa_notify_subdev);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Qualcomm IPA notification remoteproc subdev");
......@@ -9,6 +9,7 @@
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/devcoredump.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
......@@ -22,7 +23,6 @@
#include <linux/regmap.h>
#include <linux/regulator/consumer.h>
#include <linux/remoteproc.h>
#include "linux/remoteproc/qcom_q6v5_ipa_notify.h"
#include <linux/reset.h>
#include <linux/soc/qcom/mdt_loader.h>
#include <linux/iopoll.h>
......@@ -30,12 +30,15 @@
#include "remoteproc_internal.h"
#include "qcom_common.h"
#include "qcom_pil_info.h"
#include "qcom_q6v5.h"
#include <linux/qcom_scm.h>
#define MPSS_CRASH_REASON_SMEM 421
#define MBA_LOG_SIZE SZ_4K
/* RMB Status Register Values */
#define RMB_PBL_SUCCESS 0x1
......@@ -112,8 +115,6 @@
#define QDSP6SS_SLEEP 0x3C
#define QDSP6SS_BOOT_CORE_START 0x400
#define QDSP6SS_BOOT_CMD 0x404
#define QDSP6SS_BOOT_STATUS 0x408
#define BOOT_STATUS_TIMEOUT_US 200
#define BOOT_FSM_TIMEOUT 10000
struct reg_info {
......@@ -140,6 +141,7 @@ struct rproc_hexagon_res {
int version;
bool need_mem_protection;
bool has_alt_reset;
bool has_mba_logs;
bool has_spare_reg;
};
......@@ -179,15 +181,14 @@ struct q6v5 {
int active_reg_count;
int proxy_reg_count;
bool running;
bool dump_mba_loaded;
unsigned long dump_segment_mask;
unsigned long dump_complete_mask;
size_t current_dump_size;
size_t total_dump_size;
phys_addr_t mba_phys;
void *mba_region;
size_t mba_size;
size_t dp_size;
phys_addr_t mpss_phys;
phys_addr_t mpss_reloc;
......@@ -196,10 +197,10 @@ struct q6v5 {
struct qcom_rproc_glink glink_subdev;
struct qcom_rproc_subdev smd_subdev;
struct qcom_rproc_ssr ssr_subdev;
struct qcom_rproc_ipa_notify ipa_notify_subdev;
struct qcom_sysmon *sysmon;
bool need_mem_protection;
bool has_alt_reset;
bool has_mba_logs;
bool has_spare_reg;
int mpss_perm;
int mba_perm;
......@@ -404,11 +405,33 @@ static int q6v5_xfer_mem_ownership(struct q6v5 *qproc, int *current_perm,
current_perm, next, perms);
}
static void q6v5_debug_policy_load(struct q6v5 *qproc)
{
const struct firmware *dp_fw;
if (request_firmware_direct(&dp_fw, "msadp", qproc->dev))
return;
if (SZ_1M + dp_fw->size <= qproc->mba_size) {
memcpy(qproc->mba_region + SZ_1M, dp_fw->data, dp_fw->size);
qproc->dp_size = dp_fw->size;
}
release_firmware(dp_fw);
}
static int q6v5_load(struct rproc *rproc, const struct firmware *fw)
{
struct q6v5 *qproc = rproc->priv;
/* MBA is restricted to a maximum size of 1M */
if (fw->size > qproc->mba_size || fw->size > SZ_1M) {
dev_err(qproc->dev, "MBA firmware load failed\n");
return -EINVAL;
}
memcpy(qproc->mba_region, fw->data, fw->size);
q6v5_debug_policy_load(qproc);
return 0;
}
......@@ -511,6 +534,26 @@ static int q6v5_rmb_mba_wait(struct q6v5 *qproc, u32 status, int ms)
return val;
}
static void q6v5_dump_mba_logs(struct q6v5 *qproc)
{
struct rproc *rproc = qproc->rproc;
void *data;
if (!qproc->has_mba_logs)
return;
if (q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true, false, qproc->mba_phys,
qproc->mba_size))
return;
data = vmalloc(MBA_LOG_SIZE);
if (!data)
return;
memcpy(data, qproc->mba_region, MBA_LOG_SIZE);
dev_coredumpv(&rproc->dev, data, MBA_LOG_SIZE, GFP_KERNEL);
}
static int q6v5proc_reset(struct q6v5 *qproc)
{
u32 val;
......@@ -579,13 +622,15 @@ static int q6v5proc_reset(struct q6v5 *qproc)
/* De-assert the Q6 stop core signal */
writel(1, qproc->reg_base + QDSP6SS_BOOT_CORE_START);
/* Wait for 10 us for any staggering logic to settle */
usleep_range(10, 20);
/* Trigger the boot FSM to start the Q6 out-of-reset sequence */
writel(1, qproc->reg_base + QDSP6SS_BOOT_CMD);
/* Poll the QDSP6SS_BOOT_STATUS for FSM completion */
ret = readl_poll_timeout(qproc->reg_base + QDSP6SS_BOOT_STATUS,
val, (val & BIT(0)) != 0, 1,
BOOT_STATUS_TIMEOUT_US);
/* Poll the MSS_STATUS for FSM completion */
ret = readl_poll_timeout(qproc->rmb_base + RMB_MBA_MSS_STATUS,
val, (val & BIT(0)) != 0, 10, BOOT_FSM_TIMEOUT);
if (ret) {
dev_err(qproc->dev, "Boot FSM failed to complete.\n");
/* Reset the modem so that boot FSM is in reset state */
......@@ -829,6 +874,7 @@ static int q6v5_mba_load(struct q6v5 *qproc)
{
int ret;
int xfermemop_ret;
bool mba_load_err = false;
qcom_q6v5_prepare(&qproc->q6v5);
......@@ -895,6 +941,10 @@ static int q6v5_mba_load(struct q6v5 *qproc)
}
writel(qproc->mba_phys, qproc->rmb_base + RMB_MBA_IMAGE_REG);
if (qproc->dp_size) {
writel(qproc->mba_phys + SZ_1M, qproc->rmb_base + RMB_PMI_CODE_START_REG);
writel(qproc->dp_size, qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG);
}
ret = q6v5proc_reset(qproc);
if (ret)
......@@ -918,7 +968,7 @@ static int q6v5_mba_load(struct q6v5 *qproc)
q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc);
mba_load_err = true;
reclaim_mba:
xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true,
false, qproc->mba_phys,
......@@ -926,6 +976,8 @@ static int q6v5_mba_load(struct q6v5 *qproc)
if (xfermemop_ret) {
dev_err(qproc->dev,
"Failed to reclaim mba buffer, system may become unstable\n");
} else if (mba_load_err) {
q6v5_dump_mba_logs(qproc);
}
disable_active_clks:
......@@ -961,6 +1013,7 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
u32 val;
qproc->dump_mba_loaded = false;
qproc->dp_size = 0;
q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
......@@ -1139,15 +1192,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
} else if (phdr->p_filesz) {
/* Replace "xxx.xxx" with "xxx.bxx" */
sprintf(fw_name + fw_name_len - 3, "b%02d", i);
ret = request_firmware(&seg_fw, fw_name, qproc->dev);
ret = request_firmware_into_buf(&seg_fw, fw_name, qproc->dev,
ptr, phdr->p_filesz);
if (ret) {
dev_err(qproc->dev, "failed to load %s\n", fw_name);
iounmap(ptr);
goto release_firmware;
}
memcpy(ptr, seg_fw->data, seg_fw->size);
release_firmware(seg_fw);
}
......@@ -1190,6 +1242,8 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
else if (ret < 0)
dev_err(qproc->dev, "MPSS authentication failed: %d\n", ret);
qcom_pil_info_store("modem", qproc->mpss_phys, qproc->mpss_size);
release_firmware:
release_firmware(fw);
out:
......@@ -1200,11 +1254,10 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
static void qcom_q6v5_dump_segment(struct rproc *rproc,
struct rproc_dump_segment *segment,
void *dest)
void *dest, size_t cp_offset, size_t size)
{
int ret = 0;
struct q6v5 *qproc = rproc->priv;
unsigned long mask = BIT((unsigned long)segment->priv);
int offset = segment->da - qproc->mpss_reloc;
void *ptr = NULL;
......@@ -1221,19 +1274,19 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
}
if (!ret)
ptr = ioremap_wc(qproc->mpss_phys + offset, segment->size);
ptr = ioremap_wc(qproc->mpss_phys + offset + cp_offset, size);
if (ptr) {
memcpy(dest, ptr, segment->size);
memcpy(dest, ptr, size);
iounmap(ptr);
} else {
memset(dest, 0xff, segment->size);
memset(dest, 0xff, size);
}
qproc->dump_segment_mask |= mask;
qproc->current_dump_size += size;
/* Reclaim mba after copying segments */
if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
if (qproc->current_dump_size == qproc->total_dump_size) {
if (qproc->dump_mba_loaded) {
/* Try to reset ownership back to Q6 */
q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
......@@ -1255,7 +1308,8 @@ static int q6v5_start(struct rproc *rproc)
if (ret)
return ret;
dev_info(qproc->dev, "MBA booted, loading mpss\n");
dev_info(qproc->dev, "MBA booted with%s debug policy, loading mpss\n",
qproc->dp_size ? "" : "out");
ret = q6v5_mpss_load(qproc);
if (ret)
......@@ -1275,13 +1329,13 @@ static int q6v5_start(struct rproc *rproc)
"Failed to reclaim mba buffer system may become unstable\n");
/* Reset Dump Segment Mask */
qproc->dump_segment_mask = 0;
qproc->running = true;
qproc->current_dump_size = 0;
return 0;
reclaim_mpss:
q6v5_mba_reclaim(qproc);
q6v5_dump_mba_logs(qproc);
return ret;
}
......@@ -1291,8 +1345,6 @@ static int q6v5_stop(struct rproc *rproc)
struct q6v5 *qproc = (struct q6v5 *)rproc->priv;
int ret;
qproc->running = false;
ret = qcom_q6v5_request_stop(&qproc->q6v5);
if (ret == -ETIMEDOUT)
dev_err(qproc->dev, "timed out on wait\n");
......@@ -1324,7 +1376,7 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
ehdr = (struct elf32_hdr *)fw->data;
phdrs = (struct elf32_phdr *)(ehdr + 1);
qproc->dump_complete_mask = 0;
qproc->total_dump_size = 0;
for (i = 0; i < ehdr->e_phnum; i++) {
phdr = &phdrs[i];
......@@ -1335,11 +1387,11 @@ static int qcom_q6v5_register_dump_segments(struct rproc *rproc,
ret = rproc_coredump_add_custom_segment(rproc, phdr->p_paddr,
phdr->p_memsz,
qcom_q6v5_dump_segment,
(void *)i);
NULL);
if (ret)
break;
qproc->dump_complete_mask |= BIT(i);
qproc->total_dump_size += phdr->p_memsz;
}
release_firmware(fw);
......@@ -1554,39 +1606,6 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
return 0;
}
#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
/* Register IPA notification function */
int qcom_register_ipa_notify(struct rproc *rproc, qcom_ipa_notify_t notify,
void *data)
{
struct qcom_rproc_ipa_notify *ipa_notify;
struct q6v5 *qproc = rproc->priv;
if (!notify)
return -EINVAL;
ipa_notify = &qproc->ipa_notify_subdev;
if (ipa_notify->notify)
return -EBUSY;
ipa_notify->notify = notify;
ipa_notify->data = data;
return 0;
}
EXPORT_SYMBOL_GPL(qcom_register_ipa_notify);
/* Deregister IPA notification function */
void qcom_deregister_ipa_notify(struct rproc *rproc)
{
struct q6v5 *qproc = rproc->priv;
qproc->ipa_notify_subdev.notify = NULL;
}
EXPORT_SYMBOL_GPL(qcom_deregister_ipa_notify);
#endif /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
static int q6v5_probe(struct platform_device *pdev)
{
const struct rproc_hexagon_res *desc;
......@@ -1701,6 +1720,7 @@ static int q6v5_probe(struct platform_device *pdev)
qproc->version = desc->version;
qproc->need_mem_protection = desc->need_mem_protection;
qproc->has_mba_logs = desc->has_mba_logs;
ret = qcom_q6v5_init(&qproc->q6v5, pdev, rproc, MPSS_CRASH_REASON_SMEM,
qcom_msa_handover);
......@@ -1712,7 +1732,6 @@ static int q6v5_probe(struct platform_device *pdev)
qcom_add_glink_subdev(rproc, &qproc->glink_subdev, "mpss");
qcom_add_smd_subdev(rproc, &qproc->smd_subdev);
qcom_add_ssr_subdev(rproc, &qproc->ssr_subdev, "mpss");
qcom_add_ipa_notify_subdev(rproc, &qproc->ipa_notify_subdev);
qproc->sysmon = qcom_add_sysmon_subdev(rproc, "modem", 0x12);
if (IS_ERR(qproc->sysmon)) {
ret = PTR_ERR(qproc->sysmon);
......@@ -1728,7 +1747,6 @@ static int q6v5_probe(struct platform_device *pdev)
remove_sysmon_subdev:
qcom_remove_sysmon_subdev(qproc->sysmon);
remove_subdevs:
qcom_remove_ipa_notify_subdev(qproc->rproc, &qproc->ipa_notify_subdev);
qcom_remove_ssr_subdev(rproc, &qproc->ssr_subdev);
qcom_remove_smd_subdev(rproc, &qproc->smd_subdev);
qcom_remove_glink_subdev(rproc, &qproc->glink_subdev);
......@@ -1750,7 +1768,6 @@ static int q6v5_remove(struct platform_device *pdev)
rproc_del(rproc);
qcom_remove_sysmon_subdev(qproc->sysmon);
qcom_remove_ipa_notify_subdev(rproc, &qproc->ipa_notify_subdev);
qcom_remove_ssr_subdev(rproc, &qproc->ssr_subdev);
qcom_remove_smd_subdev(rproc, &qproc->smd_subdev);
qcom_remove_glink_subdev(rproc, &qproc->glink_subdev);
......@@ -1792,6 +1809,7 @@ static const struct rproc_hexagon_res sc7180_mss = {
},
.need_mem_protection = true,
.has_alt_reset = false,
.has_mba_logs = true,
.has_spare_reg = true,
.version = MSS_SC7180,
};
......@@ -1827,6 +1845,7 @@ static const struct rproc_hexagon_res sdm845_mss = {
},
.need_mem_protection = true,
.has_alt_reset = true,
.has_mba_logs = false,
.has_spare_reg = false,
.version = MSS_SDM845,
};
......@@ -1854,6 +1873,7 @@ static const struct rproc_hexagon_res msm8998_mss = {
},
.need_mem_protection = true,
.has_alt_reset = false,
.has_mba_logs = false,
.has_spare_reg = false,
.version = MSS_MSM8998,
};
......@@ -1884,6 +1904,7 @@ static const struct rproc_hexagon_res msm8996_mss = {
},
.need_mem_protection = true,
.has_alt_reset = false,
.has_mba_logs = false,
.has_spare_reg = false,
.version = MSS_MSM8996,
};
......@@ -1917,6 +1938,7 @@ static const struct rproc_hexagon_res msm8916_mss = {
},
.need_mem_protection = false,
.has_alt_reset = false,
.has_mba_logs = false,
.has_spare_reg = false,
.version = MSS_MSM8916,
};
......@@ -1958,6 +1980,7 @@ static const struct rproc_hexagon_res msm8974_mss = {
},
.need_mem_protection = false,
.has_alt_reset = false,
.has_mba_logs = false,
.has_spare_reg = false,
.version = MSS_MSM8974,
};
......
......@@ -25,6 +25,7 @@
#include <linux/soc/qcom/smem_state.h>
#include "qcom_common.h"
#include "qcom_pil_info.h"
#include "qcom_q6v5.h"
#include "remoteproc_internal.h"
......@@ -64,6 +65,7 @@ struct qcom_adsp {
int pas_id;
int crash_reason_smem;
bool has_aggre2_clk;
const char *info_name;
struct completion start_done;
struct completion stop_done;
......@@ -117,11 +119,17 @@ static void adsp_pds_disable(struct qcom_adsp *adsp, struct device **pds,
static int adsp_load(struct rproc *rproc, const struct firmware *fw)
{
struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv;
int ret;
return qcom_mdt_load(adsp->dev, fw, rproc->firmware, adsp->pas_id,
ret = qcom_mdt_load(adsp->dev, fw, rproc->firmware, adsp->pas_id,
adsp->mem_region, adsp->mem_phys, adsp->mem_size,
&adsp->mem_reloc);
if (ret)
return ret;
qcom_pil_info_store(adsp->info_name, adsp->mem_phys, adsp->mem_size);
return 0;
}
static int adsp_start(struct rproc *rproc)
......@@ -405,6 +413,7 @@ static int adsp_probe(struct platform_device *pdev)
adsp->rproc = rproc;
adsp->pas_id = desc->pas_id;
adsp->has_aggre2_clk = desc->has_aggre2_clk;
adsp->info_name = desc->sysmon_name;
platform_set_drvdata(pdev, adsp);
device_wakeup_enable(adsp->dev);
......
......@@ -14,6 +14,7 @@
#include <linux/reset.h>
#include <linux/soc/qcom/mdt_loader.h>
#include "qcom_common.h"
#include "qcom_pil_info.h"
#include "qcom_q6v5.h"
#define WCSS_CRASH_REASON 421
......@@ -424,10 +425,17 @@ static void *q6v5_wcss_da_to_va(struct rproc *rproc, u64 da, size_t len)
static int q6v5_wcss_load(struct rproc *rproc, const struct firmware *fw)
{
struct q6v5_wcss *wcss = rproc->priv;
int ret;
return qcom_mdt_load_no_init(wcss->dev, fw, rproc->firmware,
ret = qcom_mdt_load_no_init(wcss->dev, fw, rproc->firmware,
0, wcss->mem_region, wcss->mem_phys,
wcss->mem_size, &wcss->mem_reloc);
if (ret)
return ret;
qcom_pil_info_store("wcnss", wcss->mem_phys, wcss->mem_size);
return ret;
}
static const struct rproc_ops q6v5_wcss_ops = {
......
......@@ -71,7 +71,7 @@ static LIST_HEAD(sysmon_list);
/**
* sysmon_send_event() - send notification of other remote's SSR event
* @sysmon: sysmon context
* @name: other remote's name
* @event: sysmon event context
*/
static void sysmon_send_event(struct qcom_sysmon *sysmon,
const struct sysmon_event *event)
......@@ -343,7 +343,7 @@ static void ssctl_request_shutdown(struct qcom_sysmon *sysmon)
/**
* ssctl_send_event() - send notification of other remote's SSR event
* @sysmon: sysmon context
* @name: other remote's name
* @event: sysmon event context
*/
static void ssctl_send_event(struct qcom_sysmon *sysmon,
const struct sysmon_event *event)
......
......@@ -27,6 +27,7 @@
#include "qcom_common.h"
#include "remoteproc_internal.h"
#include "qcom_pil_info.h"
#include "qcom_wcnss.h"
#define WCNSS_CRASH_REASON_SMEM 422
......@@ -145,10 +146,17 @@ void qcom_wcnss_assign_iris(struct qcom_wcnss *wcnss,
static int wcnss_load(struct rproc *rproc, const struct firmware *fw)
{
struct qcom_wcnss *wcnss = (struct qcom_wcnss *)rproc->priv;
int ret;
return qcom_mdt_load(wcnss->dev, fw, rproc->firmware, WCNSS_PAS_ID,
ret = qcom_mdt_load(wcnss->dev, fw, rproc->firmware, WCNSS_PAS_ID,
wcnss->mem_region, wcnss->mem_phys,
wcnss->mem_size, &wcnss->mem_reloc);
if (ret)
return ret;
qcom_pil_info_store("wcnss", wcnss->mem_phys, wcnss->mem_size);
return 0;
}
static void wcnss_indicate_nv_download(struct qcom_wcnss *wcnss)
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Character device interface driver for Remoteproc framework.
*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#include <linux/cdev.h>
#include <linux/compat.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/remoteproc.h>
#include <linux/uaccess.h>
#include <uapi/linux/remoteproc_cdev.h>
#include "remoteproc_internal.h"
#define NUM_RPROC_DEVICES 64
static dev_t rproc_major;
static ssize_t rproc_cdev_write(struct file *filp, const char __user *buf, size_t len, loff_t *pos)
{
struct rproc *rproc = container_of(filp->f_inode->i_cdev, struct rproc, cdev);
int ret = 0;
char cmd[10];
if (!len || len > sizeof(cmd))
return -EINVAL;
ret = copy_from_user(cmd, buf, len);
if (ret)
return -EFAULT;
if (!strncmp(cmd, "start", len)) {
if (rproc->state == RPROC_RUNNING)
return -EBUSY;
ret = rproc_boot(rproc);
} else if (!strncmp(cmd, "stop", len)) {
if (rproc->state != RPROC_RUNNING)
return -EINVAL;
rproc_shutdown(rproc);
} else {
dev_err(&rproc->dev, "Unrecognized option\n");
ret = -EINVAL;
}
return ret ? ret : len;
}
static long rproc_device_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
{
struct rproc *rproc = container_of(filp->f_inode->i_cdev, struct rproc, cdev);
void __user *argp = (void __user *)arg;
s32 param;
switch (ioctl) {
case RPROC_SET_SHUTDOWN_ON_RELEASE:
if (copy_from_user(&param, argp, sizeof(s32)))
return -EFAULT;
rproc->cdev_put_on_release = !!param;
break;
case RPROC_GET_SHUTDOWN_ON_RELEASE:
param = (s32)rproc->cdev_put_on_release;
if (copy_to_user(argp, &param, sizeof(s32)))
return -EFAULT;
break;
default:
dev_err(&rproc->dev, "Unsupported ioctl\n");
return -EINVAL;
}
return 0;
}
static int rproc_cdev_release(struct inode *inode, struct file *filp)
{
struct rproc *rproc = container_of(inode->i_cdev, struct rproc, cdev);
if (rproc->cdev_put_on_release && rproc->state == RPROC_RUNNING)
rproc_shutdown(rproc);
return 0;
}
static const struct file_operations rproc_fops = {
.write = rproc_cdev_write,
.unlocked_ioctl = rproc_device_ioctl,
.compat_ioctl = compat_ptr_ioctl,
.release = rproc_cdev_release,
};
int rproc_char_device_add(struct rproc *rproc)
{
int ret;
cdev_init(&rproc->cdev, &rproc_fops);
rproc->cdev.owner = THIS_MODULE;
rproc->dev.devt = MKDEV(MAJOR(rproc_major), rproc->index);
cdev_set_parent(&rproc->cdev, &rproc->dev.kobj);
ret = cdev_add(&rproc->cdev, rproc->dev.devt, 1);
if (ret < 0)
dev_err(&rproc->dev, "Failed to add char dev for %s\n", rproc->name);
return ret;
}
void rproc_char_device_remove(struct rproc *rproc)
{
__unregister_chrdev(MAJOR(rproc->dev.devt), rproc->index, 1, "remoteproc");
}
void __init rproc_init_cdev(void)
{
int ret;
ret = alloc_chrdev_region(&rproc_major, 0, NUM_RPROC_DEVICES, "remoteproc");
if (ret < 0)
pr_err("Failed to alloc rproc_cdev region, err %d\n", ret);
}
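As a hedged illustration (not taken from the kernel tree), a user space
process could drive the interface implemented above roughly as follows;
the /dev/remoteproc0 node name is an assumption about how the character
device ends up being exposed:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/remoteproc_cdev.h>

int main(void)
{
	int fd = open("/dev/remoteproc0", O_RDWR);	/* assumed node name */
	int one = 1;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Tie the remote processor's lifetime to this process */
	if (ioctl(fd, RPROC_SET_SHUTDOWN_ON_RELEASE, &one) < 0)
		perror("ioctl");

	/* Boot the remote processor through the write interface */
	if (write(fd, "start", strlen("start")) < 0)
		perror("write");

	/*
	 * ... provide crucial resources to the firmware ...
	 * When this process exits or closes fd, the release handler
	 * shuts the remote processor down again.
	 */
	close(fd);
	return 0;
}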
......@@ -26,10 +26,8 @@
#include <linux/firmware.h>
#include <linux/string.h>
#include <linux/debugfs.h>
#include <linux/devcoredump.h>
#include <linux/rculist.h>
#include <linux/remoteproc.h>
#include <linux/pm_runtime.h>
#include <linux/iommu.h>
#include <linux/idr.h>
#include <linux/elf.h>
......@@ -41,7 +39,6 @@
#include <linux/platform_device.h>
#include "remoteproc_internal.h"
#include "remoteproc_elf_helpers.h"
#define HIGH_BITS_MASK 0xFFFFFFFF00000000ULL
......@@ -244,6 +241,7 @@ EXPORT_SYMBOL(rproc_da_to_va);
*
* Return: a valid pointer on carveout entry on success or NULL on failure.
*/
__printf(2, 3)
struct rproc_mem_entry *
rproc_find_carveout_by_name(struct rproc *rproc, const char *name, ...)
{
......@@ -411,10 +409,22 @@ void rproc_free_vring(struct rproc_vring *rvring)
idr_remove(&rproc->notifyids, rvring->notifyid);
/* reset resource entry info */
/*
* At this point rproc_stop() has been called and the installed resource
* table in the remote processor memory may no longer be accessible. As
* such and as per rproc_stop(), rproc->table_ptr points to the cached
* resource table (rproc->cached_table). The cached resource table is
* only available when a remote processor has been booted by the
* remoteproc core, otherwise it is NULL.
*
* Based on the above, reset the virtio device section in the cached
* resource table only if there is one to work with.
*/
if (rproc->table_ptr) {
rsc = (void *)rproc->table_ptr + rvring->rvdev->rsc_offset;
rsc->vring[idx].da = 0;
rsc->vring[idx].notifyid = -1;
}
}
static int rproc_vdev_do_start(struct rproc_subdev *subdev)
......@@ -967,6 +977,7 @@ EXPORT_SYMBOL(rproc_add_carveout);
* This function allocates a rproc_mem_entry struct and fills it with parameters
* provided by client.
*/
__printf(8, 9)
struct rproc_mem_entry *
rproc_mem_entry_init(struct device *dev,
void *va, dma_addr_t dma, size_t len, u32 da,
......@@ -1010,6 +1021,7 @@ EXPORT_SYMBOL(rproc_mem_entry_init);
* This function allocates a rproc_mem_entry struct and fills it with parameters
* provided by client.
*/
__printf(5, 6)
struct rproc_mem_entry *
rproc_of_resm_mem_entry_init(struct device *dev, u32 of_resm_idx, size_t len,
u32 da, const char *name, ...)
......@@ -1034,6 +1046,29 @@ rproc_of_resm_mem_entry_init(struct device *dev, u32 of_resm_idx, size_t len,
}
EXPORT_SYMBOL(rproc_of_resm_mem_entry_init);
/**
* rproc_of_parse_firmware() - parse and return the firmware-name
* @dev: pointer on device struct representing a rproc
* @index: index to use for the firmware-name retrieval
* @fw_name: pointer to a character string, in which the firmware
* name is returned on success and unmodified otherwise.
*
* This is an OF helper function that parses a device's DT node for
* the "firmware-name" property and returns the firmware name pointer
* in @fw_name on success.
*
* Return: 0 on success, or an appropriate error code on failure.
*/
int rproc_of_parse_firmware(struct device *dev, int index, const char **fw_name)
{
int ret;
ret = of_property_read_string_index(dev->of_node, "firmware-name",
index, fw_name);
return ret ? ret : 0;
}
EXPORT_SYMBOL(rproc_of_parse_firmware);
/*
* A lookup table for resource handlers. The indices are defined in
* enum fw_resource_type.
......@@ -1239,19 +1274,6 @@ static int rproc_alloc_registered_carveouts(struct rproc *rproc)
return 0;
}
/**
* rproc_coredump_cleanup() - clean up dump_segments list
* @rproc: the remote processor handle
*/
static void rproc_coredump_cleanup(struct rproc *rproc)
{
struct rproc_dump_segment *entry, *tmp;
list_for_each_entry_safe(entry, tmp, &rproc->dump_segments, node) {
list_del(&entry->node);
kfree(entry);
}
}
/**
* rproc_resource_cleanup() - clean up and free all acquired resources
......@@ -1260,7 +1282,7 @@ static void rproc_coredump_cleanup(struct rproc *rproc)
* This function will free all resources acquired for @rproc, and it
* is called whenever @rproc either shuts down or fails to boot.
*/
static void rproc_resource_cleanup(struct rproc *rproc)
void rproc_resource_cleanup(struct rproc *rproc)
{
struct rproc_mem_entry *entry, *tmp;
struct rproc_debug_trace *trace, *ttmp;
......@@ -1304,6 +1326,7 @@ static void rproc_resource_cleanup(struct rproc *rproc)
rproc_coredump_cleanup(rproc);
}
EXPORT_SYMBOL(rproc_resource_cleanup);
static int rproc_start(struct rproc *rproc, const struct firmware *fw)
{
......@@ -1370,6 +1393,48 @@ static int rproc_start(struct rproc *rproc, const struct firmware *fw)
return ret;
}
static int rproc_attach(struct rproc *rproc)
{
struct device *dev = &rproc->dev;
int ret;
ret = rproc_prepare_subdevices(rproc);
if (ret) {
dev_err(dev, "failed to prepare subdevices for %s: %d\n",
rproc->name, ret);
goto out;
}
/* Attach to the remote processor */
ret = rproc_attach_device(rproc);
if (ret) {
dev_err(dev, "can't attach to rproc %s: %d\n",
rproc->name, ret);
goto unprepare_subdevices;
}
/* Start any subdevices for the remote processor */
ret = rproc_start_subdevices(rproc);
if (ret) {
dev_err(dev, "failed to probe subdevices for %s: %d\n",
rproc->name, ret);
goto stop_rproc;
}
rproc->state = RPROC_RUNNING;
dev_info(dev, "remote processor %s is now attached\n", rproc->name);
return 0;
stop_rproc:
rproc->ops->stop(rproc);
unprepare_subdevices:
rproc_unprepare_subdevices(rproc);
out:
return ret;
}
/*
* take a firmware and boot a remote processor with it.
*/
......@@ -1383,12 +1448,6 @@ static int rproc_fw_boot(struct rproc *rproc, const struct firmware *fw)
if (ret)
return ret;
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
dev_err(dev, "pm_runtime_get_sync failed: %d\n", ret);
return ret;
}
dev_info(dev, "Booting fw image %s, size %zd\n", name, fw->size);
/*
......@@ -1398,7 +1457,7 @@ static int rproc_fw_boot(struct rproc *rproc, const struct firmware *fw)
ret = rproc_enable_iommu(rproc);
if (ret) {
dev_err(dev, "can't enable iommu: %d\n", ret);
goto put_pm_runtime;
return ret;
}
/* Prepare rproc for firmware loading if needed */
......@@ -1452,8 +1511,63 @@ static int rproc_fw_boot(struct rproc *rproc, const struct firmware *fw)
rproc_unprepare_device(rproc);
disable_iommu:
rproc_disable_iommu(rproc);
put_pm_runtime:
pm_runtime_put(dev);
return ret;
}
/*
* Attach to remote processor - similar to rproc_fw_boot() but without
* the steps that deal with the firmware image.
*/
static int rproc_actuate(struct rproc *rproc)
{
struct device *dev = &rproc->dev;
int ret;
/*
* if enabling an IOMMU isn't relevant for this rproc, this is
* just a nop
*/
ret = rproc_enable_iommu(rproc);
if (ret) {
dev_err(dev, "can't enable iommu: %d\n", ret);
return ret;
}
/* reset max_notifyid */
rproc->max_notifyid = -1;
/* reset handled vdev */
rproc->nb_vdev = 0;
/*
* Handle firmware resources required to attach to a remote processor.
* Because we are attaching rather than booting the remote processor,
* we expect the platform driver to properly set rproc->table_ptr.
*/
ret = rproc_handle_resources(rproc, rproc_loading_handlers);
if (ret) {
dev_err(dev, "Failed to process resources: %d\n", ret);
goto disable_iommu;
}
/* Allocate carveout resources associated to rproc */
ret = rproc_alloc_registered_carveouts(rproc);
if (ret) {
dev_err(dev, "Failed to allocate associated carveouts: %d\n",
ret);
goto clean_up_resources;
}
ret = rproc_attach(rproc);
if (ret)
goto clean_up_resources;
return 0;
clean_up_resources:
rproc_resource_cleanup(rproc);
disable_iommu:
rproc_disable_iommu(rproc);
return ret;
}
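/*
 * Illustrative sketch, not from this series: a driver that finds its
 * coprocessor already running (e.g. started by the bootloader, detected
 * through a platform-specific register as the STM32 driver does below)
 * marks the rproc RPROC_DETACHED before rproc_add(); rproc_boot() then
 * takes the rproc_actuate()/rproc_attach() path instead of rproc_fw_boot().
 * my_rproc_is_running() is a hypothetical placeholder, and the fragment
 * assumes <linux/remoteproc.h> and <linux/platform_device.h> are included.
 */
static bool my_rproc_is_running(struct platform_device *pdev)
{
	/* A real driver reads hardware state here (syscon, reset status, ...) */
	return false;
}
static int my_rproc_register(struct platform_device *pdev, struct rproc *rproc)
{
	if (my_rproc_is_running(pdev))
		rproc->state = RPROC_DETACHED;
	/* With auto_boot set, rproc_add() attaches or boots as appropriate */
	rproc->auto_boot = true;
	return rproc_add(rproc);
}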
......@@ -1478,6 +1592,15 @@ static int rproc_trigger_auto_boot(struct rproc *rproc)
{
int ret;
/*
* Since the remote processor is in a detached state, it has already
* been booted by another entity. As such there is no point in waiting
* for a firmware image to be loaded, we can simply initiate the process
* of attaching to it immediately.
*/
if (rproc->state == RPROC_DETACHED)
return rproc_boot(rproc);
/*
* We're initiating an asynchronous firmware loading, so we can
* be built-in kernel code, without hanging the boot process.
......@@ -1513,187 +1636,19 @@ static int rproc_stop(struct rproc *rproc, bool crashed)
rproc->state = RPROC_OFFLINE;
dev_info(dev, "stopped remote processor %s\n", rproc->name);
return 0;
}
/**
* rproc_coredump_add_segment() - add segment of device memory to coredump
* @rproc: handle of a remote processor
* @da: device address
* @size: size of segment
*
* Add device memory to the list of segments to be included in a coredump for
* the remoteproc.
*
* Return: 0 on success, negative errno on error.
*/
int rproc_coredump_add_segment(struct rproc *rproc, dma_addr_t da, size_t size)
{
struct rproc_dump_segment *segment;
segment = kzalloc(sizeof(*segment), GFP_KERNEL);
if (!segment)
return -ENOMEM;
segment->da = da;
segment->size = size;
list_add_tail(&segment->node, &rproc->dump_segments);
return 0;
}
EXPORT_SYMBOL(rproc_coredump_add_segment);
/**
* rproc_coredump_add_custom_segment() - add custom coredump segment
* @rproc: handle of a remote processor
* @da: device address
* @size: size of segment
* @dumpfn: custom dump function called for each segment during coredump
* @priv: private data
*
* Add device memory to the list of segments to be included in the coredump
* and associate the segment with the given custom dump function and private
* data.
*
* Return: 0 on success, negative errno on error.
/*
* The remote processor has been stopped and is now offline, which means
* that the next time it is brought back online the remoteproc core will
* be responsible to load its firmware. As such it is no longer
* autonomous.
*/
int rproc_coredump_add_custom_segment(struct rproc *rproc,
dma_addr_t da, size_t size,
void (*dumpfn)(struct rproc *rproc,
struct rproc_dump_segment *segment,
void *dest),
void *priv)
{
struct rproc_dump_segment *segment;
segment = kzalloc(sizeof(*segment), GFP_KERNEL);
if (!segment)
return -ENOMEM;
segment->da = da;
segment->size = size;
segment->priv = priv;
segment->dump = dumpfn;
rproc->autonomous = false;
list_add_tail(&segment->node, &rproc->dump_segments);
dev_info(dev, "stopped remote processor %s\n", rproc->name);
return 0;
}
EXPORT_SYMBOL(rproc_coredump_add_custom_segment);
/**
* rproc_coredump_set_elf_info() - set coredump elf information
* @rproc: handle of a remote processor
* @class: elf class for coredump elf file
* @machine: elf machine for coredump elf file
*
* Set elf information which will be used for coredump elf file.
*
* Return: 0 on success, negative errno on error.
*/
int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine)
{
if (class != ELFCLASS64 && class != ELFCLASS32)
return -EINVAL;
rproc->elf_class = class;
rproc->elf_machine = machine;
return 0;
}
EXPORT_SYMBOL(rproc_coredump_set_elf_info);
/**
* rproc_coredump() - perform coredump
* @rproc: rproc handle
*
* This function will generate an ELF header for the registered segments
* and create a devcoredump device associated with rproc.
*/
static void rproc_coredump(struct rproc *rproc)
{
struct rproc_dump_segment *segment;
void *phdr;
void *ehdr;
size_t data_size;
size_t offset;
void *data;
void *ptr;
u8 class = rproc->elf_class;
int phnum = 0;
if (list_empty(&rproc->dump_segments))
return;
if (class == ELFCLASSNONE) {
dev_err(&rproc->dev, "Elf class is not set\n");
return;
}
data_size = elf_size_of_hdr(class);
list_for_each_entry(segment, &rproc->dump_segments, node) {
data_size += elf_size_of_phdr(class) + segment->size;
phnum++;
}
data = vmalloc(data_size);
if (!data)
return;
ehdr = data;
memset(ehdr, 0, elf_size_of_hdr(class));
/* e_ident field is common for both elf32 and elf64 */
elf_hdr_init_ident(ehdr, class);
elf_hdr_set_e_type(class, ehdr, ET_CORE);
elf_hdr_set_e_machine(class, ehdr, rproc->elf_machine);
elf_hdr_set_e_version(class, ehdr, EV_CURRENT);
elf_hdr_set_e_entry(class, ehdr, rproc->bootaddr);
elf_hdr_set_e_phoff(class, ehdr, elf_size_of_hdr(class));
elf_hdr_set_e_ehsize(class, ehdr, elf_size_of_hdr(class));
elf_hdr_set_e_phentsize(class, ehdr, elf_size_of_phdr(class));
elf_hdr_set_e_phnum(class, ehdr, phnum);
phdr = data + elf_hdr_get_e_phoff(class, ehdr);
offset = elf_hdr_get_e_phoff(class, ehdr);
offset += elf_size_of_phdr(class) * elf_hdr_get_e_phnum(class, ehdr);
list_for_each_entry(segment, &rproc->dump_segments, node) {
memset(phdr, 0, elf_size_of_phdr(class));
elf_phdr_set_p_type(class, phdr, PT_LOAD);
elf_phdr_set_p_offset(class, phdr, offset);
elf_phdr_set_p_vaddr(class, phdr, segment->da);
elf_phdr_set_p_paddr(class, phdr, segment->da);
elf_phdr_set_p_filesz(class, phdr, segment->size);
elf_phdr_set_p_memsz(class, phdr, segment->size);
elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
elf_phdr_set_p_align(class, phdr, 0);
if (segment->dump) {
segment->dump(rproc, segment, data + offset);
} else {
ptr = rproc_da_to_va(rproc, segment->da, segment->size);
if (!ptr) {
dev_err(&rproc->dev,
"invalid coredump segment (%pad, %zu)\n",
&segment->da, segment->size);
memset(data + offset, 0xff, segment->size);
} else {
memcpy(data + offset, ptr, segment->size);
}
}
offset += elf_phdr_get_p_filesz(class, phdr);
phdr += elf_size_of_phdr(class);
}
dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
}
/**
* rproc_trigger_recovery() - recover a remoteproc
......@@ -1815,12 +1770,17 @@ int rproc_boot(struct rproc *rproc)
goto unlock_mutex;
}
/* skip the boot process if rproc is already powered up */
/* skip the boot or attach process if rproc is already powered up */
if (atomic_inc_return(&rproc->power) > 1) {
ret = 0;
goto unlock_mutex;
}
if (rproc->state == RPROC_DETACHED) {
dev_info(dev, "attaching to %s\n", rproc->name);
ret = rproc_actuate(rproc);
} else {
dev_info(dev, "powering up %s\n", rproc->name);
/* load firmware */
......@@ -1833,6 +1793,7 @@ int rproc_boot(struct rproc *rproc)
ret = rproc_fw_boot(rproc, firmware_p);
release_firmware(firmware_p);
}
downref_rproc:
if (ret)
......@@ -1891,8 +1852,6 @@ void rproc_shutdown(struct rproc *rproc)
rproc_disable_iommu(rproc);
pm_runtime_put(dev);
/* Free the copy of the resource table */
kfree(rproc->cached_table);
rproc->cached_table = NULL;
......@@ -1952,6 +1911,43 @@ struct rproc *rproc_get_by_phandle(phandle phandle)
#endif
EXPORT_SYMBOL(rproc_get_by_phandle);
static int rproc_validate(struct rproc *rproc)
{
switch (rproc->state) {
case RPROC_OFFLINE:
/*
* An offline processor without a start()
* function makes no sense.
*/
if (!rproc->ops->start)
return -EINVAL;
break;
case RPROC_DETACHED:
/*
* A remote processor in a detached state without an
 * attach() function makes no sense.
*/
if (!rproc->ops->attach)
return -EINVAL;
/*
* When attaching to a remote processor the device memory
* is already available and as such there is no need to have a
* cached table.
*/
if (rproc->cached_table)
return -EINVAL;
break;
default:
/*
* When adding a remote processor, the state of the device
* can be offline or detached, nothing else.
*/
return -EINVAL;
}
return 0;
}
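/*
 * Illustrative sketch, not from this series: the minimal ops contract that
 * rproc_validate() enforces for a processor registered as RPROC_DETACHED.
 * The my_* handlers are placeholders; such a driver must also leave
 * rproc->cached_table at NULL and provide the already-loaded resource table
 * itself (as the STM32 changes below do).
 */
static int my_attach(struct rproc *rproc)
{
	/* Nothing to power on: the processor is already running */
	return 0;
}
static int my_stop(struct rproc *rproc)
{
	/* Platform-specific way of halting the running processor */
	return 0;
}
static const struct rproc_ops my_detached_ops = {
	.attach = my_attach,	/* required when state == RPROC_DETACHED */
	.stop	= my_stop,
	/* .start is only mandatory for processors added as RPROC_OFFLINE */
};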
/**
* rproc_add() - register a remote processor
* @rproc: the remote processor handle to register
......@@ -1981,11 +1977,30 @@ int rproc_add(struct rproc *rproc)
if (ret < 0)
return ret;
ret = rproc_validate(rproc);
if (ret < 0)
return ret;
dev_info(dev, "%s is available\n", rproc->name);
/* create debugfs entries */
rproc_create_debug_dir(rproc);
/* add char device for this remoteproc */
ret = rproc_char_device_add(rproc);
if (ret < 0)
return ret;
/*
* Remind ourselves the remote processor has been attached to rather
* than booted by the remoteproc core. This is important because the
* RPROC_DETACHED state will be lost as soon as the remote processor
* has been attached to. Used in firmware_show() and reset in
* rproc_stop().
*/
if (rproc->state == RPROC_DETACHED)
rproc->autonomous = true;
/* if rproc is marked always-on, request it to boot */
if (rproc->auto_boot) {
ret = rproc_trigger_auto_boot(rproc);
......@@ -2183,9 +2198,6 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
rproc->state = RPROC_OFFLINE;
pm_runtime_no_callbacks(&rproc->dev);
pm_runtime_enable(&rproc->dev);
return rproc;
put_device:
......@@ -2205,7 +2217,6 @@ EXPORT_SYMBOL(rproc_alloc);
*/
void rproc_free(struct rproc *rproc)
{
pm_runtime_disable(&rproc->dev);
put_device(&rproc->dev);
}
EXPORT_SYMBOL(rproc_free);
......@@ -2256,6 +2267,7 @@ int rproc_del(struct rproc *rproc)
mutex_unlock(&rproc->lock);
rproc_delete_debug_dir(rproc);
rproc_char_device_remove(rproc);
/* the rproc is downref'ed as soon as it's removed from the klist */
mutex_lock(&rproc_list_mutex);
......@@ -2424,6 +2436,7 @@ static int __init remoteproc_init(void)
{
rproc_init_sysfs();
rproc_init_debugfs();
rproc_init_cdev();
rproc_init_panic();
return 0;
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Coredump functionality for Remoteproc framework.
*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#include <linux/completion.h>
#include <linux/devcoredump.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/remoteproc.h>
#include "remoteproc_internal.h"
#include "remoteproc_elf_helpers.h"
struct rproc_coredump_state {
struct rproc *rproc;
void *header;
struct completion dump_done;
};
/**
* rproc_coredump_cleanup() - clean up dump_segments list
* @rproc: the remote processor handle
*/
void rproc_coredump_cleanup(struct rproc *rproc)
{
struct rproc_dump_segment *entry, *tmp;
list_for_each_entry_safe(entry, tmp, &rproc->dump_segments, node) {
list_del(&entry->node);
kfree(entry);
}
}
/**
* rproc_coredump_add_segment() - add segment of device memory to coredump
* @rproc: handle of a remote processor
* @da: device address
* @size: size of segment
*
* Add device memory to the list of segments to be included in a coredump for
* the remoteproc.
*
* Return: 0 on success, negative errno on error.
*/
int rproc_coredump_add_segment(struct rproc *rproc, dma_addr_t da, size_t size)
{
struct rproc_dump_segment *segment;
segment = kzalloc(sizeof(*segment), GFP_KERNEL);
if (!segment)
return -ENOMEM;
segment->da = da;
segment->size = size;
list_add_tail(&segment->node, &rproc->dump_segments);
return 0;
}
EXPORT_SYMBOL(rproc_coredump_add_segment);
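/*
 * Illustrative sketch, not from this series: a driver typically populates the
 * dump_segments list from its parse_fw()/prepare() path. The ELF machine,
 * device addresses and sizes below are made-up examples, and the fragment
 * assumes the usual remoteproc driver includes.
 */
static int my_rproc_setup_coredump(struct rproc *rproc)
{
	int ret;
	/* Describe the core file the remoteproc core should emit */
	ret = rproc_coredump_set_elf_info(rproc, ELFCLASS32, EM_ARM);
	if (ret)
		return ret;
	/* Register two fixed regions of device memory for dumping */
	ret = rproc_coredump_add_segment(rproc, 0x10000000, SZ_256K);
	if (ret)
		return ret;
	return rproc_coredump_add_segment(rproc, 0x10080000, SZ_64K);
}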
/**
* rproc_coredump_add_custom_segment() - add custom coredump segment
* @rproc: handle of a remote processor
* @da: device address
* @size: size of segment
* @dumpfn: custom dump function called for each segment during coredump
* @priv: private data
*
* Add device memory to the list of segments to be included in the coredump
* and associate the segment with the given custom dump function and private
* data.
*
* Return: 0 on success, negative errno on error.
*/
int rproc_coredump_add_custom_segment(struct rproc *rproc,
dma_addr_t da, size_t size,
void (*dumpfn)(struct rproc *rproc,
struct rproc_dump_segment *segment,
void *dest, size_t offset,
size_t size),
void *priv)
{
struct rproc_dump_segment *segment;
segment = kzalloc(sizeof(*segment), GFP_KERNEL);
if (!segment)
return -ENOMEM;
segment->da = da;
segment->size = size;
segment->priv = priv;
segment->dump = dumpfn;
list_add_tail(&segment->node, &rproc->dump_segments);
return 0;
}
EXPORT_SYMBOL(rproc_coredump_add_custom_segment);
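/*
 * Illustrative sketch, not from this series: a custom dump callback using the
 * new per-chunk (dest/offset/size) signature, useful when a segment needs
 * unlocking, remapping or filtering before it can be copied. Names are
 * placeholders; the mapping is stashed in segment->priv at registration time.
 */
static void my_segment_dump(struct rproc *rproc,
			    struct rproc_dump_segment *segment,
			    void *dest, size_t offset, size_t size)
{
	void *base = segment->priv;
	/* Only the [offset, offset + size) window of the segment is copied */
	memcpy(dest, base + offset, size);
}
static int my_register_custom_segment(struct rproc *rproc, void *va,
				      dma_addr_t da, size_t len)
{
	/* 'va' comes back to the callback through segment->priv */
	return rproc_coredump_add_custom_segment(rproc, da, len,
						 my_segment_dump, va);
}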
/**
* rproc_coredump_set_elf_info() - set coredump elf information
* @rproc: handle of a remote processor
* @class: elf class for coredump elf file
* @machine: elf machine for coredump elf file
*
* Set elf information which will be used for coredump elf file.
*
* Return: 0 on success, negative errno on error.
*/
int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine)
{
if (class != ELFCLASS64 && class != ELFCLASS32)
return -EINVAL;
rproc->elf_class = class;
rproc->elf_machine = machine;
return 0;
}
EXPORT_SYMBOL(rproc_coredump_set_elf_info);
static void rproc_coredump_free(void *data)
{
struct rproc_coredump_state *dump_state = data;
vfree(dump_state->header);
complete(&dump_state->dump_done);
}
static void *rproc_coredump_find_segment(loff_t user_offset,
struct list_head *segments,
size_t *data_left)
{
struct rproc_dump_segment *segment;
list_for_each_entry(segment, segments, node) {
if (user_offset < segment->size) {
*data_left = segment->size - user_offset;
return segment;
}
user_offset -= segment->size;
}
*data_left = 0;
return NULL;
}
static void rproc_copy_segment(struct rproc *rproc, void *dest,
struct rproc_dump_segment *segment,
size_t offset, size_t size)
{
void *ptr;
if (segment->dump) {
segment->dump(rproc, segment, dest, offset, size);
} else {
ptr = rproc_da_to_va(rproc, segment->da + offset, size);
if (!ptr) {
dev_err(&rproc->dev,
"invalid copy request for segment %pad with offset %zu and size %zu)\n",
&segment->da, offset, size);
memset(dest, 0xff, size);
} else {
memcpy(dest, ptr, size);
}
}
}
static ssize_t rproc_coredump_read(char *buffer, loff_t offset, size_t count,
void *data, size_t header_sz)
{
size_t seg_data, bytes_left = count;
ssize_t copy_sz;
struct rproc_dump_segment *seg;
struct rproc_coredump_state *dump_state = data;
struct rproc *rproc = dump_state->rproc;
void *elfcore = dump_state->header;
/* Copy the vmalloc'ed header first. */
if (offset < header_sz) {
copy_sz = memory_read_from_buffer(buffer, count, &offset,
elfcore, header_sz);
return copy_sz;
}
/*
* Find out the segment memory chunk to be copied based on offset.
* Keep copying data until count bytes are read.
*/
while (bytes_left) {
seg = rproc_coredump_find_segment(offset - header_sz,
&rproc->dump_segments,
&seg_data);
/* EOF check */
if (!seg) {
dev_info(&rproc->dev, "Ramdump done, %lld bytes read",
offset);
break;
}
copy_sz = min_t(size_t, bytes_left, seg_data);
rproc_copy_segment(rproc, buffer, seg, seg->size - seg_data,
copy_sz);
offset += copy_sz;
buffer += copy_sz;
bytes_left -= copy_sz;
}
return count - bytes_left;
}
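/*
 * Illustrative userspace sketch, not from this series: draining the dump
 * served by rproc_coredump_read() through devcoredump. The devcd index in
 * the sysfs path is an assumption about the running system.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
	const char *path = "/sys/class/devcoredump/devcd1/data";
	char buf[4096];
	ssize_t n;
	int in = open(path, O_RDONLY);
	FILE *out = fopen("rproc-core.elf", "w");
	if (in < 0 || !out)
		return 1;
	/* In inline mode this read is what lets recovery make progress */
	while ((n = read(in, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, (size_t)n, out);
	fclose(out);
	close(in);
	return 0;
}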
/**
* rproc_coredump() - perform coredump
* @rproc: rproc handle
*
* This function will generate an ELF header for the registered segments
* and create a devcoredump device associated with rproc. Based on the
* coredump configuration this function will directly copy the segments
* from device memory to userspace or copy segments from device memory to
* a separate buffer, which can then be read by userspace.
 * The first approach avoids using extra vmalloc memory, but it stalls the
 * recovery flow until the dump has been read by userspace.
*/
void rproc_coredump(struct rproc *rproc)
{
struct rproc_dump_segment *segment;
void *phdr;
void *ehdr;
size_t data_size;
size_t offset;
void *data;
u8 class = rproc->elf_class;
int phnum = 0;
struct rproc_coredump_state dump_state;
enum rproc_dump_mechanism dump_conf = rproc->dump_conf;
if (list_empty(&rproc->dump_segments) ||
dump_conf == RPROC_COREDUMP_DISABLED)
return;
if (class == ELFCLASSNONE) {
dev_err(&rproc->dev, "Elf class is not set\n");
return;
}
data_size = elf_size_of_hdr(class);
list_for_each_entry(segment, &rproc->dump_segments, node) {
/*
 * For the default configuration the buffer includes both headers and
 * segments. For an inline dump the buffer includes just the headers, as
 * segments are read directly from device memory.
*/
data_size += elf_size_of_phdr(class);
if (dump_conf == RPROC_COREDUMP_DEFAULT)
data_size += segment->size;
phnum++;
}
data = vmalloc(data_size);
if (!data)
return;
ehdr = data;
memset(ehdr, 0, elf_size_of_hdr(class));
/* e_ident field is common for both elf32 and elf64 */
elf_hdr_init_ident(ehdr, class);
elf_hdr_set_e_type(class, ehdr, ET_CORE);
elf_hdr_set_e_machine(class, ehdr, rproc->elf_machine);
elf_hdr_set_e_version(class, ehdr, EV_CURRENT);
elf_hdr_set_e_entry(class, ehdr, rproc->bootaddr);
elf_hdr_set_e_phoff(class, ehdr, elf_size_of_hdr(class));
elf_hdr_set_e_ehsize(class, ehdr, elf_size_of_hdr(class));
elf_hdr_set_e_phentsize(class, ehdr, elf_size_of_phdr(class));
elf_hdr_set_e_phnum(class, ehdr, phnum);
phdr = data + elf_hdr_get_e_phoff(class, ehdr);
offset = elf_hdr_get_e_phoff(class, ehdr);
offset += elf_size_of_phdr(class) * elf_hdr_get_e_phnum(class, ehdr);
list_for_each_entry(segment, &rproc->dump_segments, node) {
memset(phdr, 0, elf_size_of_phdr(class));
elf_phdr_set_p_type(class, phdr, PT_LOAD);
elf_phdr_set_p_offset(class, phdr, offset);
elf_phdr_set_p_vaddr(class, phdr, segment->da);
elf_phdr_set_p_paddr(class, phdr, segment->da);
elf_phdr_set_p_filesz(class, phdr, segment->size);
elf_phdr_set_p_memsz(class, phdr, segment->size);
elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X);
elf_phdr_set_p_align(class, phdr, 0);
if (dump_conf == RPROC_COREDUMP_DEFAULT)
rproc_copy_segment(rproc, data + offset, segment, 0,
segment->size);
offset += elf_phdr_get_p_filesz(class, phdr);
phdr += elf_size_of_phdr(class);
}
if (dump_conf == RPROC_COREDUMP_DEFAULT) {
dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL);
return;
}
/* Initialize the dump state struct to be used by rproc_coredump_read */
dump_state.rproc = rproc;
dump_state.header = data;
init_completion(&dump_state.dump_done);
dev_coredumpm(&rproc->dev, NULL, &dump_state, data_size, GFP_KERNEL,
rproc_coredump_read, rproc_coredump_free);
/*
* Wait until the dump is read and free is called. Data is freed
* by devcoredump framework automatically after 5 minutes.
*/
wait_for_completion(&dump_state.dump_done);
}
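/*
 * Illustrative sketch, not from this series: a driver on a vmalloc-constrained
 * platform can opt into the inline mode before rproc_add(); the "coredump"
 * debugfs entry added below offers the same control at run time.
 */
static void my_rproc_configure_dump(struct rproc *rproc)
{
	/*
	 * With RPROC_COREDUMP_INLINE the segments are read straight from
	 * device memory by rproc_coredump_read(), so no vmalloc copy of the
	 * dump is made, but recovery waits in wait_for_completion() until
	 * userspace has consumed the dump (or devcoredump times out after
	 * 5 minutes).
	 */
	rproc->dump_conf = RPROC_COREDUMP_INLINE;
}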
......@@ -27,6 +27,94 @@
/* remoteproc debugfs parent dir */
static struct dentry *rproc_dbg;
/*
* A coredump-configuration-to-string lookup table, for exposing a
* human readable configuration via debugfs. Always keep in sync with
 * enum rproc_dump_mechanism
*/
static const char * const rproc_coredump_str[] = {
[RPROC_COREDUMP_DEFAULT] = "default",
[RPROC_COREDUMP_INLINE] = "inline",
[RPROC_COREDUMP_DISABLED] = "disabled",
};
/* Expose the current coredump configuration via debugfs */
static ssize_t rproc_coredump_read(struct file *filp, char __user *userbuf,
size_t count, loff_t *ppos)
{
struct rproc *rproc = filp->private_data;
char buf[20];
int len;
len = scnprintf(buf, sizeof(buf), "%s\n",
rproc_coredump_str[rproc->dump_conf]);
return simple_read_from_buffer(userbuf, count, ppos, buf, len);
}
/*
* By writing to the 'coredump' debugfs entry, we control the behavior of the
* coredump mechanism dynamically. The default value of this entry is "default".
*
* The 'coredump' debugfs entry supports these commands:
*
* default: This is the default coredump mechanism. When the remoteproc
* crashes the entire coredump will be copied to a separate buffer
* and exposed to userspace.
*
* inline: The coredump will not be copied to a separate buffer and the
* recovery process will have to wait until data is read by
 * userspace. But this avoids the use of extra memory.
*
* disabled: This will disable coredump. Recovery will proceed without
* collecting any dump.
*/
static ssize_t rproc_coredump_write(struct file *filp,
const char __user *user_buf, size_t count,
loff_t *ppos)
{
struct rproc *rproc = filp->private_data;
int ret, err = 0;
char buf[20];
if (count > sizeof(buf))
return -EINVAL;
ret = copy_from_user(buf, user_buf, count);
if (ret)
return -EFAULT;
/* remove end of line */
if (buf[count - 1] == '\n')
buf[count - 1] = '\0';
if (rproc->state == RPROC_CRASHED) {
dev_err(&rproc->dev, "can't change coredump configuration\n");
err = -EBUSY;
goto out;
}
if (!strncmp(buf, "disable", count)) {
rproc->dump_conf = RPROC_COREDUMP_DISABLED;
} else if (!strncmp(buf, "inline", count)) {
rproc->dump_conf = RPROC_COREDUMP_INLINE;
} else if (!strncmp(buf, "default", count)) {
rproc->dump_conf = RPROC_COREDUMP_DEFAULT;
} else {
dev_err(&rproc->dev, "Invalid coredump configuration\n");
err = -EINVAL;
}
out:
return err ? err : count;
}
static const struct file_operations rproc_coredump_fops = {
.read = rproc_coredump_read,
.write = rproc_coredump_write,
.open = simple_open,
.llseek = generic_file_llseek,
};
/*
* Some remote processors may support dumping trace logs into a shared
* memory buffer. We expose this trace buffer using debugfs, so users
......@@ -337,6 +425,8 @@ void rproc_create_debug_dir(struct rproc *rproc)
rproc, &rproc_rsc_table_fops);
debugfs_create_file("carveout_memories", 0400, rproc->dbg_dir,
rproc, &rproc_carveouts_fops);
debugfs_create_file("coredump", 0600, rproc->dbg_dir,
rproc, &rproc_coredump_fops);
}
void __init rproc_init_debugfs(void)
......
......@@ -28,6 +28,8 @@ struct rproc_debug_trace {
void rproc_release(struct kref *kref);
irqreturn_t rproc_vq_interrupt(struct rproc *rproc, int vq_id);
void rproc_vdev_release(struct kref *ref);
int rproc_of_parse_firmware(struct device *dev, int index,
const char **fw_name);
/* from remoteproc_virtio.c */
int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id);
......@@ -47,6 +49,38 @@ extern struct class rproc_class;
int rproc_init_sysfs(void);
void rproc_exit_sysfs(void);
/* from remoteproc_coredump.c */
void rproc_coredump_cleanup(struct rproc *rproc);
void rproc_coredump(struct rproc *rproc);
#ifdef CONFIG_REMOTEPROC_CDEV
void rproc_init_cdev(void);
void rproc_exit_cdev(void);
int rproc_char_device_add(struct rproc *rproc);
void rproc_char_device_remove(struct rproc *rproc);
#else
static inline void rproc_init_cdev(void)
{
}
static inline void rproc_exit_cdev(void)
{
}
/*
 * The character device interface is an optional feature; if it is not enabled,
 * the function should not return an error.
*/
static inline int rproc_char_device_add(struct rproc *rproc)
{
return 0;
}
static inline void rproc_char_device_remove(struct rproc *rproc)
{
}
#endif
void rproc_free_vring(struct rproc_vring *rvring);
int rproc_alloc_vring(struct rproc_vdev *rvdev, int i);
......@@ -79,6 +113,14 @@ static inline int rproc_unprepare_device(struct rproc *rproc)
return 0;
}
static inline int rproc_attach_device(struct rproc *rproc)
{
if (rproc->ops->attach)
return rproc->ops->attach(rproc);
return 0;
}
static inline
int rproc_fw_sanity_check(struct rproc *rproc, const struct firmware *fw)
{
......
......@@ -15,8 +15,20 @@ static ssize_t firmware_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct rproc *rproc = to_rproc(dev);
const char *firmware = rproc->firmware;
/*
* If the remote processor has been started by an external
* entity we have no idea of what image it is running. As such
 * simply display a generic string rather than rproc->firmware.
*
* Here we rely on the autonomous flag because a remote processor
* may have been attached to and currently in a running state.
*/
if (rproc->autonomous)
firmware = "unknown";
return sprintf(buf, "%s\n", rproc->firmware);
return sprintf(buf, "%s\n", firmware);
}
/* Change firmware name via sysfs */
......@@ -72,6 +84,7 @@ static const char * const rproc_state_string[] = {
[RPROC_RUNNING] = "running",
[RPROC_CRASHED] = "crashed",
[RPROC_DELETED] = "deleted",
[RPROC_DETACHED] = "detached",
[RPROC_LAST] = "invalid",
};
......
......@@ -39,6 +39,15 @@
#define STM32_MBX_VQ1_ID 1
#define STM32_MBX_SHUTDOWN "shutdown"
#define RSC_TBL_SIZE 1024
#define M4_STATE_OFF 0
#define M4_STATE_INI 1
#define M4_STATE_CRUN 2
#define M4_STATE_CSTOP 3
#define M4_STATE_STANDBY 4
#define M4_STATE_CRASH 5
struct stm32_syscon {
struct regmap *map;
u32 reg;
......@@ -71,12 +80,15 @@ struct stm32_rproc {
struct reset_control *rst;
struct stm32_syscon hold_boot;
struct stm32_syscon pdds;
struct stm32_syscon m4_state;
struct stm32_syscon rsctbl;
int wdg_irq;
u32 nb_rmems;
struct stm32_rproc_mem *rmems;
struct stm32_mbox mb[MBOX_NB_MBX];
struct workqueue_struct *workqueue;
bool secured_soc;
void __iomem *rsc_va;
};
static int stm32_rproc_pa_to_da(struct rproc *rproc, phys_addr_t pa, u64 *da)
......@@ -128,10 +140,10 @@ static int stm32_rproc_mem_release(struct rproc *rproc,
return 0;
}
static int stm32_rproc_of_memory_translations(struct rproc *rproc)
static int stm32_rproc_of_memory_translations(struct platform_device *pdev,
struct stm32_rproc *ddata)
{
struct device *parent, *dev = rproc->dev.parent;
struct stm32_rproc *ddata = rproc->priv;
struct device *parent, *dev = &pdev->dev;
struct device_node *np;
struct stm32_rproc_mem *p_mems;
struct stm32_rproc_mem_ranges *mem_range;
......@@ -204,7 +216,7 @@ static int stm32_rproc_elf_load_rsc_table(struct rproc *rproc,
return 0;
}
static int stm32_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
static int stm32_rproc_parse_memory_regions(struct rproc *rproc)
{
struct device *dev = rproc->dev.parent;
struct device_node *np = dev->of_node;
......@@ -257,12 +269,23 @@ static int stm32_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
index++;
}
return 0;
}
static int stm32_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
{
int ret = stm32_rproc_parse_memory_regions(rproc);
if (ret)
return ret;
return stm32_rproc_elf_load_rsc_table(rproc, fw);
}
static irqreturn_t stm32_rproc_wdg(int irq, void *data)
{
struct rproc *rproc = data;
struct platform_device *pdev = data;
struct rproc *rproc = platform_get_drvdata(pdev);
rproc_report_crash(rproc, RPROC_WATCHDOG);
......@@ -437,6 +460,13 @@ static int stm32_rproc_start(struct rproc *rproc)
return stm32_rproc_set_hold_boot(rproc, true);
}
static int stm32_rproc_attach(struct rproc *rproc)
{
stm32_rproc_add_coredump_trace(rproc);
return stm32_rproc_set_hold_boot(rproc, true);
}
static int stm32_rproc_stop(struct rproc *rproc)
{
struct stm32_rproc *ddata = rproc->priv;
......@@ -474,6 +504,18 @@ static int stm32_rproc_stop(struct rproc *rproc)
}
}
/* update coprocessor state to OFF if available */
if (ddata->m4_state.map) {
err = regmap_update_bits(ddata->m4_state.map,
ddata->m4_state.reg,
ddata->m4_state.mask,
M4_STATE_OFF);
if (err) {
dev_err(&rproc->dev, "failed to set copro state\n");
return err;
}
}
return 0;
}
......@@ -502,6 +544,7 @@ static void stm32_rproc_kick(struct rproc *rproc, int vqid)
static struct rproc_ops st_rproc_ops = {
.start = stm32_rproc_start,
.stop = stm32_rproc_stop,
.attach = stm32_rproc_attach,
.kick = stm32_rproc_kick,
.load = rproc_elf_load_segments,
.parse_fw = stm32_rproc_parse_fw,
......@@ -538,12 +581,11 @@ static int stm32_rproc_get_syscon(struct device_node *np, const char *prop,
return err;
}
static int stm32_rproc_parse_dt(struct platform_device *pdev)
static int stm32_rproc_parse_dt(struct platform_device *pdev,
struct stm32_rproc *ddata, bool *auto_boot)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct rproc *rproc = platform_get_drvdata(pdev);
struct stm32_rproc *ddata = rproc->priv;
struct stm32_syscon tz;
unsigned int tzen;
int err, irq;
......@@ -554,7 +596,7 @@ static int stm32_rproc_parse_dt(struct platform_device *pdev)
if (irq > 0) {
err = devm_request_irq(dev, irq, stm32_rproc_wdg, 0,
dev_name(dev), rproc);
dev_name(dev), pdev);
if (err) {
dev_err(dev, "failed to request wdg irq\n");
return err;
......@@ -589,7 +631,7 @@ static int stm32_rproc_parse_dt(struct platform_device *pdev)
err = regmap_read(tz.map, tz.reg, &tzen);
if (err) {
dev_err(&rproc->dev, "failed to read tzen\n");
dev_err(dev, "failed to read tzen\n");
return err;
}
ddata->secured_soc = tzen & tz.mask;
......@@ -605,9 +647,118 @@ static int stm32_rproc_parse_dt(struct platform_device *pdev)
if (err)
dev_info(dev, "failed to get pdds\n");
rproc->auto_boot = of_property_read_bool(np, "st,auto-boot");
*auto_boot = of_property_read_bool(np, "st,auto-boot");
/*
 * See if we can check the M4 status, i.e. if it was started
* from the boot loader or not.
*/
err = stm32_rproc_get_syscon(np, "st,syscfg-m4-state",
&ddata->m4_state);
if (err) {
/* remember this */
ddata->m4_state.map = NULL;
/* no coprocessor state syscon (optional) */
dev_warn(dev, "m4 state not supported\n");
/* no need to go further */
return 0;
}
/* See if we can get the resource table */
err = stm32_rproc_get_syscon(np, "st,syscfg-rsc-tbl",
&ddata->rsctbl);
if (err) {
/* no rsc table syscon (optional) */
dev_warn(dev, "rsc tbl syscon not supported\n");
}
return 0;
}
static int stm32_rproc_get_m4_status(struct stm32_rproc *ddata,
unsigned int *state)
{
/* See stm32_rproc_parse_dt() */
if (!ddata->m4_state.map) {
/*
* We couldn't get the coprocessor's state, assume
* it is not running.
*/
*state = M4_STATE_OFF;
return 0;
}
return stm32_rproc_of_memory_translations(rproc);
return regmap_read(ddata->m4_state.map, ddata->m4_state.reg, state);
}
static int stm32_rproc_da_to_pa(struct platform_device *pdev,
struct stm32_rproc *ddata,
u64 da, phys_addr_t *pa)
{
struct device *dev = &pdev->dev;
struct stm32_rproc_mem *p_mem;
unsigned int i;
for (i = 0; i < ddata->nb_rmems; i++) {
p_mem = &ddata->rmems[i];
if (da < p_mem->dev_addr ||
da >= p_mem->dev_addr + p_mem->size)
continue;
*pa = da - p_mem->dev_addr + p_mem->bus_addr;
dev_dbg(dev, "da %llx to pa %#x\n", da, *pa);
return 0;
}
dev_err(dev, "can't translate da %llx\n", da);
return -EINVAL;
}
static int stm32_rproc_get_loaded_rsc_table(struct platform_device *pdev,
struct rproc *rproc,
struct stm32_rproc *ddata)
{
struct device *dev = &pdev->dev;
phys_addr_t rsc_pa;
u32 rsc_da;
int err;
err = regmap_read(ddata->rsctbl.map, ddata->rsctbl.reg, &rsc_da);
if (err) {
dev_err(dev, "failed to read rsc tbl addr\n");
return err;
}
if (!rsc_da)
/* no rsc table */
return 0;
err = stm32_rproc_da_to_pa(pdev, ddata, rsc_da, &rsc_pa);
if (err)
return err;
ddata->rsc_va = devm_ioremap_wc(dev, rsc_pa, RSC_TBL_SIZE);
if (IS_ERR_OR_NULL(ddata->rsc_va)) {
dev_err(dev, "Unable to map memory region: %pa+%zx\n",
&rsc_pa, RSC_TBL_SIZE);
ddata->rsc_va = NULL;
return -ENOMEM;
}
/*
* The resource table is already loaded in device memory, no need
* to work with a cached table.
*/
rproc->cached_table = NULL;
/* Assuming the resource table fits in 1kB is fair */
rproc->table_sz = RSC_TBL_SIZE;
rproc->table_ptr = (struct resource_table *)ddata->rsc_va;
return 0;
}
static int stm32_rproc_probe(struct platform_device *pdev)
......@@ -616,6 +767,7 @@ static int stm32_rproc_probe(struct platform_device *pdev)
struct stm32_rproc *ddata;
struct device_node *np = dev->of_node;
struct rproc *rproc;
unsigned int state;
int ret;
ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));
......@@ -626,25 +778,47 @@ static int stm32_rproc_probe(struct platform_device *pdev)
if (!rproc)
return -ENOMEM;
ddata = rproc->priv;
rproc_coredump_set_elf_info(rproc, ELFCLASS32, EM_NONE);
ret = stm32_rproc_parse_dt(pdev, ddata, &rproc->auto_boot);
if (ret)
goto free_rproc;
ret = stm32_rproc_of_memory_translations(pdev, ddata);
if (ret)
goto free_rproc;
ret = stm32_rproc_get_m4_status(ddata, &state);
if (ret)
goto free_rproc;
if (state == M4_STATE_CRUN) {
rproc->state = RPROC_DETACHED;
ret = stm32_rproc_parse_memory_regions(rproc);
if (ret)
goto free_resources;
ret = stm32_rproc_get_loaded_rsc_table(pdev, rproc, ddata);
if (ret)
goto free_resources;
}
rproc->has_iommu = false;
ddata = rproc->priv;
ddata->workqueue = create_workqueue(dev_name(dev));
if (!ddata->workqueue) {
dev_err(dev, "cannot create workqueue\n");
ret = -ENOMEM;
goto free_rproc;
goto free_resources;
}
platform_set_drvdata(pdev, rproc);
ret = stm32_rproc_parse_dt(pdev);
if (ret)
goto free_wkq;
ret = stm32_rproc_request_mbox(rproc);
if (ret)
goto free_rproc;
goto free_wkq;
ret = rproc_add(rproc);
if (ret)
......@@ -656,6 +830,8 @@ static int stm32_rproc_probe(struct platform_device *pdev)
stm32_rproc_free_mbox(rproc);
free_wkq:
destroy_workqueue(ddata->workqueue);
free_resources:
rproc_resource_cleanup(rproc);
free_rproc:
if (device_may_wakeup(dev)) {
dev_pm_clear_wake_irq(dev);
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* TI K3 DSP Remote Processor(s) driver
*
* Copyright (C) 2018-2020 Texas Instruments Incorporated - https://www.ti.com/
* Suman Anna <s-anna@ti.com>
*/
#include <linux/io.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of_reserved_mem.h>
#include <linux/omap-mailbox.h>
#include <linux/platform_device.h>
#include <linux/remoteproc.h>
#include <linux/reset.h>
#include <linux/slab.h>
#include "omap_remoteproc.h"
#include "remoteproc_internal.h"
#include "ti_sci_proc.h"
#define KEYSTONE_RPROC_LOCAL_ADDRESS_MASK (SZ_16M - 1)
/**
* struct k3_dsp_mem - internal memory structure
* @cpu_addr: MPU virtual address of the memory region
* @bus_addr: Bus address used to access the memory region
* @dev_addr: Device address of the memory region from DSP view
* @size: Size of the memory region
*/
struct k3_dsp_mem {
void __iomem *cpu_addr;
phys_addr_t bus_addr;
u32 dev_addr;
size_t size;
};
/**
* struct k3_dsp_mem_data - memory definitions for a DSP
* @name: name for this memory entry
* @dev_addr: device address for the memory entry
*/
struct k3_dsp_mem_data {
const char *name;
const u32 dev_addr;
};
/**
* struct k3_dsp_dev_data - device data structure for a DSP
* @mems: pointer to memory definitions for a DSP
* @num_mems: number of memory regions in @mems
* @boot_align_addr: boot vector address alignment granularity
* @uses_lreset: flag to denote the need for local reset management
*/
struct k3_dsp_dev_data {
const struct k3_dsp_mem_data *mems;
u32 num_mems;
u32 boot_align_addr;
bool uses_lreset;
};
/**
* struct k3_dsp_rproc - k3 DSP remote processor driver structure
* @dev: cached device pointer
* @rproc: remoteproc device handle
* @mem: internal memory regions data
* @num_mems: number of internal memory regions
* @rmem: reserved memory regions data
* @num_rmems: number of reserved memory regions
* @reset: reset control handle
* @data: pointer to DSP-specific device data
* @tsp: TI-SCI processor control handle
* @ti_sci: TI-SCI handle
* @ti_sci_id: TI-SCI device identifier
* @mbox: mailbox channel handle
* @client: mailbox client to request the mailbox channel
*/
struct k3_dsp_rproc {
struct device *dev;
struct rproc *rproc;
struct k3_dsp_mem *mem;
int num_mems;
struct k3_dsp_mem *rmem;
int num_rmems;
struct reset_control *reset;
const struct k3_dsp_dev_data *data;
struct ti_sci_proc *tsp;
const struct ti_sci_handle *ti_sci;
u32 ti_sci_id;
struct mbox_chan *mbox;
struct mbox_client client;
};
/**
* k3_dsp_rproc_mbox_callback() - inbound mailbox message handler
* @client: mailbox client pointer used for requesting the mailbox channel
* @data: mailbox payload
*
* This handler is invoked by the OMAP mailbox driver whenever a mailbox
* message is received. Usually, the mailbox payload simply contains
* the index of the virtqueue that is kicked by the remote processor,
* and we let remoteproc core handle it.
*
* In addition to virtqueue indices, we also have some out-of-band values
* that indicate different events. Those values are deliberately very
* large so they don't coincide with virtqueue indices.
*/
static void k3_dsp_rproc_mbox_callback(struct mbox_client *client, void *data)
{
struct k3_dsp_rproc *kproc = container_of(client, struct k3_dsp_rproc,
client);
struct device *dev = kproc->rproc->dev.parent;
const char *name = kproc->rproc->name;
u32 msg = omap_mbox_message(data);
dev_dbg(dev, "mbox msg: 0x%x\n", msg);
switch (msg) {
case RP_MBOX_CRASH:
/*
* remoteproc detected an exception, but error recovery is not
* supported. So, just log this for now
*/
dev_err(dev, "K3 DSP rproc %s crashed\n", name);
break;
case RP_MBOX_ECHO_REPLY:
dev_info(dev, "received echo reply from %s\n", name);
break;
default:
/* silently handle all other valid messages */
if (msg >= RP_MBOX_READY && msg < RP_MBOX_END_MSG)
return;
if (msg > kproc->rproc->max_notifyid) {
dev_dbg(dev, "dropping unknown message 0x%x", msg);
return;
}
/* msg contains the index of the triggered vring */
if (rproc_vq_interrupt(kproc->rproc, msg) == IRQ_NONE)
dev_dbg(dev, "no message was found in vqid %d\n", msg);
}
}
/*
* Kick the remote processor to notify about pending unprocessed messages.
 * The vqid argument is unused and inconsequential, as the kick is performed
* through a simulated GPIO (a bit in an IPC interrupt-triggering register),
* the remote processor is expected to process both its Tx and Rx virtqueues.
*/
static void k3_dsp_rproc_kick(struct rproc *rproc, int vqid)
{
struct k3_dsp_rproc *kproc = rproc->priv;
struct device *dev = rproc->dev.parent;
mbox_msg_t msg = (mbox_msg_t)vqid;
int ret;
/* send the index of the triggered virtqueue in the mailbox payload */
ret = mbox_send_message(kproc->mbox, (void *)msg);
if (ret < 0)
dev_err(dev, "failed to send mailbox message, status = %d\n",
ret);
}
/* Put the DSP processor into reset */
static int k3_dsp_rproc_reset(struct k3_dsp_rproc *kproc)
{
struct device *dev = kproc->dev;
int ret;
ret = reset_control_assert(kproc->reset);
if (ret) {
dev_err(dev, "local-reset assert failed, ret = %d\n", ret);
return ret;
}
if (kproc->data->uses_lreset)
return ret;
ret = kproc->ti_sci->ops.dev_ops.put_device(kproc->ti_sci,
kproc->ti_sci_id);
if (ret) {
dev_err(dev, "module-reset assert failed, ret = %d\n", ret);
if (reset_control_deassert(kproc->reset))
dev_warn(dev, "local-reset deassert back failed\n");
}
return ret;
}
/* Release the DSP processor from reset */
static int k3_dsp_rproc_release(struct k3_dsp_rproc *kproc)
{
struct device *dev = kproc->dev;
int ret;
if (kproc->data->uses_lreset)
goto lreset;
ret = kproc->ti_sci->ops.dev_ops.get_device(kproc->ti_sci,
kproc->ti_sci_id);
if (ret) {
dev_err(dev, "module-reset deassert failed, ret = %d\n", ret);
return ret;
}
lreset:
ret = reset_control_deassert(kproc->reset);
if (ret) {
dev_err(dev, "local-reset deassert failed, ret = %d\n", ret);
if (kproc->ti_sci->ops.dev_ops.put_device(kproc->ti_sci,
kproc->ti_sci_id))
dev_warn(dev, "module-reset assert back failed\n");
}
return ret;
}
/*
* The C66x DSP cores have a local reset that affects only the CPU, and a
* generic module reset that powers on the device and allows the DSP internal
* memories to be accessed while the local reset is asserted. This function is
* used to release the global reset on C66x DSPs to allow loading into the DSP
* internal RAMs. The .prepare() ops is invoked by remoteproc core before any
* firmware loading, and is followed by the .start() ops after loading to
* actually let the C66x DSP cores run.
*/
static int k3_dsp_rproc_prepare(struct rproc *rproc)
{
struct k3_dsp_rproc *kproc = rproc->priv;
struct device *dev = kproc->dev;
int ret;
ret = kproc->ti_sci->ops.dev_ops.get_device(kproc->ti_sci,
kproc->ti_sci_id);
if (ret)
dev_err(dev, "module-reset deassert failed, cannot enable internal RAM loading, ret = %d\n",
ret);
return ret;
}
/*
 * This function implements the .unprepare() ops and performs the complementary
* operations to that of the .prepare() ops. The function is used to assert the
* global reset on applicable C66x cores. This completes the second portion of
* powering down the C66x DSP cores. The cores themselves are only halted in the
* .stop() callback through the local reset, and the .unprepare() ops is invoked
* by the remoteproc core after the remoteproc is stopped to balance the global
* reset.
*/
static int k3_dsp_rproc_unprepare(struct rproc *rproc)
{
struct k3_dsp_rproc *kproc = rproc->priv;
struct device *dev = kproc->dev;
int ret;
ret = kproc->ti_sci->ops.dev_ops.put_device(kproc->ti_sci,
kproc->ti_sci_id);
if (ret)
dev_err(dev, "module-reset assert failed, ret = %d\n", ret);
return ret;
}
/*
* Power up the DSP remote processor.
*
* This function will be invoked only after the firmware for this rproc
* was loaded, parsed successfully, and all of its resource requirements
* were met.
*/
static int k3_dsp_rproc_start(struct rproc *rproc)
{
struct k3_dsp_rproc *kproc = rproc->priv;
struct mbox_client *client = &kproc->client;
struct device *dev = kproc->dev;
u32 boot_addr;
int ret;
client->dev = dev;
client->tx_done = NULL;
client->rx_callback = k3_dsp_rproc_mbox_callback;
client->tx_block = false;
client->knows_txdone = false;
kproc->mbox = mbox_request_channel(client, 0);
if (IS_ERR(kproc->mbox)) {
ret = -EBUSY;
dev_err(dev, "mbox_request_channel failed: %ld\n",
PTR_ERR(kproc->mbox));
return ret;
}
/*
* Ping the remote processor, this is only for sanity-sake for now;
* there is no functional effect whatsoever.
*
* Note that the reply will _not_ arrive immediately: this message
* will wait in the mailbox fifo until the remote processor is booted.
*/
ret = mbox_send_message(kproc->mbox, (void *)RP_MBOX_ECHO_REQUEST);
if (ret < 0) {
dev_err(dev, "mbox_send_message failed: %d\n", ret);
goto put_mbox;
}
boot_addr = rproc->bootaddr;
if (boot_addr & (kproc->data->boot_align_addr - 1)) {
dev_err(dev, "invalid boot address 0x%x, must be aligned on a 0x%x boundary\n",
boot_addr, kproc->data->boot_align_addr);
ret = -EINVAL;
goto put_mbox;
}
dev_err(dev, "booting DSP core using boot addr = 0x%x\n", boot_addr);
ret = ti_sci_proc_set_config(kproc->tsp, boot_addr, 0, 0);
if (ret)
goto put_mbox;
ret = k3_dsp_rproc_release(kproc);
if (ret)
goto put_mbox;
return 0;
put_mbox:
mbox_free_channel(kproc->mbox);
return ret;
}
/*
* Stop the DSP remote processor.
*
* This function puts the DSP processor into reset, and finishes processing
* of any pending messages.
*/
static int k3_dsp_rproc_stop(struct rproc *rproc)
{
struct k3_dsp_rproc *kproc = rproc->priv;
mbox_free_channel(kproc->mbox);
k3_dsp_rproc_reset(kproc);
return 0;
}
/*
* Custom function to translate a DSP device address (internal RAMs only) to a
* kernel virtual address. The DSPs can access their RAMs at either an internal
* address visible only from a DSP, or at the SoC-level bus address. Both these
* addresses need to be looked through for translation. The translated addresses
* can be used either by the remoteproc core for loading (when using kernel
* remoteproc loader), or by any rpmsg bus drivers.
*/
static void *k3_dsp_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
{
struct k3_dsp_rproc *kproc = rproc->priv;
void __iomem *va = NULL;
phys_addr_t bus_addr;
u32 dev_addr, offset;
size_t size;
int i;
if (len == 0)
return NULL;
for (i = 0; i < kproc->num_mems; i++) {
bus_addr = kproc->mem[i].bus_addr;
dev_addr = kproc->mem[i].dev_addr;
size = kproc->mem[i].size;
if (da < KEYSTONE_RPROC_LOCAL_ADDRESS_MASK) {
/* handle DSP-view addresses */
if (da >= dev_addr &&
((da + len) <= (dev_addr + size))) {
offset = da - dev_addr;
va = kproc->mem[i].cpu_addr + offset;
return (__force void *)va;
}
} else {
/* handle SoC-view addresses */
if (da >= bus_addr &&
(da + len) <= (bus_addr + size)) {
offset = da - bus_addr;
va = kproc->mem[i].cpu_addr + offset;
return (__force void *)va;
}
}
}
/* handle static DDR reserved memory regions */
for (i = 0; i < kproc->num_rmems; i++) {
dev_addr = kproc->rmem[i].dev_addr;
size = kproc->rmem[i].size;
if (da >= dev_addr && ((da + len) <= (dev_addr + size))) {
offset = da - dev_addr;
va = kproc->rmem[i].cpu_addr + offset;
return (__force void *)va;
}
}
return NULL;
}
static const struct rproc_ops k3_dsp_rproc_ops = {
.start = k3_dsp_rproc_start,
.stop = k3_dsp_rproc_stop,
.kick = k3_dsp_rproc_kick,
.da_to_va = k3_dsp_rproc_da_to_va,
};
static int k3_dsp_rproc_of_get_memories(struct platform_device *pdev,
struct k3_dsp_rproc *kproc)
{
const struct k3_dsp_dev_data *data = kproc->data;
struct device *dev = &pdev->dev;
struct resource *res;
int num_mems = 0;
int i;
num_mems = kproc->data->num_mems;
kproc->mem = devm_kcalloc(kproc->dev, num_mems,
sizeof(*kproc->mem), GFP_KERNEL);
if (!kproc->mem)
return -ENOMEM;
for (i = 0; i < num_mems; i++) {
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
data->mems[i].name);
if (!res) {
dev_err(dev, "found no memory resource for %s\n",
data->mems[i].name);
return -EINVAL;
}
if (!devm_request_mem_region(dev, res->start,
resource_size(res),
dev_name(dev))) {
dev_err(dev, "could not request %s region for resource\n",
data->mems[i].name);
return -EBUSY;
}
kproc->mem[i].cpu_addr = devm_ioremap_wc(dev, res->start,
resource_size(res));
if (IS_ERR(kproc->mem[i].cpu_addr)) {
dev_err(dev, "failed to map %s memory\n",
data->mems[i].name);
return PTR_ERR(kproc->mem[i].cpu_addr);
}
kproc->mem[i].bus_addr = res->start;
kproc->mem[i].dev_addr = data->mems[i].dev_addr;
kproc->mem[i].size = resource_size(res);
dev_dbg(dev, "memory %8s: bus addr %pa size 0x%zx va %pK da 0x%x\n",
data->mems[i].name, &kproc->mem[i].bus_addr,
kproc->mem[i].size, kproc->mem[i].cpu_addr,
kproc->mem[i].dev_addr);
}
kproc->num_mems = num_mems;
return 0;
}
static int k3_dsp_reserved_mem_init(struct k3_dsp_rproc *kproc)
{
struct device *dev = kproc->dev;
struct device_node *np = dev->of_node;
struct device_node *rmem_np;
struct reserved_mem *rmem;
int num_rmems;
int ret, i;
num_rmems = of_property_count_elems_of_size(np, "memory-region",
sizeof(phandle));
if (num_rmems <= 0) {
dev_err(dev, "device does not reserved memory regions, ret = %d\n",
num_rmems);
return -EINVAL;
}
if (num_rmems < 2) {
dev_err(dev, "device needs atleast two memory regions to be defined, num = %d\n",
num_rmems);
return -EINVAL;
}
/* use reserved memory region 0 for vring DMA allocations */
ret = of_reserved_mem_device_init_by_idx(dev, np, 0);
if (ret) {
dev_err(dev, "device cannot initialize DMA pool, ret = %d\n",
ret);
return ret;
}
num_rmems--;
kproc->rmem = kcalloc(num_rmems, sizeof(*kproc->rmem), GFP_KERNEL);
if (!kproc->rmem) {
ret = -ENOMEM;
goto release_rmem;
}
/* use remaining reserved memory regions for static carveouts */
for (i = 0; i < num_rmems; i++) {
rmem_np = of_parse_phandle(np, "memory-region", i + 1);
if (!rmem_np) {
ret = -EINVAL;
goto unmap_rmem;
}
rmem = of_reserved_mem_lookup(rmem_np);
if (!rmem) {
of_node_put(rmem_np);
ret = -EINVAL;
goto unmap_rmem;
}
of_node_put(rmem_np);
kproc->rmem[i].bus_addr = rmem->base;
/* 64-bit address regions currently not supported */
kproc->rmem[i].dev_addr = (u32)rmem->base;
kproc->rmem[i].size = rmem->size;
kproc->rmem[i].cpu_addr = ioremap_wc(rmem->base, rmem->size);
if (!kproc->rmem[i].cpu_addr) {
dev_err(dev, "failed to map reserved memory#%d at %pa of size %pa\n",
i + 1, &rmem->base, &rmem->size);
ret = -ENOMEM;
goto unmap_rmem;
}
dev_dbg(dev, "reserved memory%d: bus addr %pa size 0x%zx va %pK da 0x%x\n",
i + 1, &kproc->rmem[i].bus_addr,
kproc->rmem[i].size, kproc->rmem[i].cpu_addr,
kproc->rmem[i].dev_addr);
}
kproc->num_rmems = num_rmems;
return 0;
unmap_rmem:
for (i--; i >= 0; i--)
iounmap(kproc->rmem[i].cpu_addr);
kfree(kproc->rmem);
release_rmem:
of_reserved_mem_device_release(kproc->dev);
return ret;
}
static void k3_dsp_reserved_mem_exit(struct k3_dsp_rproc *kproc)
{
int i;
for (i = 0; i < kproc->num_rmems; i++)
iounmap(kproc->rmem[i].cpu_addr);
kfree(kproc->rmem);
of_reserved_mem_device_release(kproc->dev);
}
static
struct ti_sci_proc *k3_dsp_rproc_of_get_tsp(struct device *dev,
const struct ti_sci_handle *sci)
{
struct ti_sci_proc *tsp;
u32 temp[2];
int ret;
ret = of_property_read_u32_array(dev->of_node, "ti,sci-proc-ids",
temp, 2);
if (ret < 0)
return ERR_PTR(ret);
tsp = kzalloc(sizeof(*tsp), GFP_KERNEL);
if (!tsp)
return ERR_PTR(-ENOMEM);
tsp->dev = dev;
tsp->sci = sci;
tsp->ops = &sci->ops.proc_ops;
tsp->proc_id = temp[0];
tsp->host_id = temp[1];
return tsp;
}
static int k3_dsp_rproc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
const struct k3_dsp_dev_data *data;
struct k3_dsp_rproc *kproc;
struct rproc *rproc;
const char *fw_name;
int ret = 0;
int ret1;
data = of_device_get_match_data(dev);
if (!data)
return -ENODEV;
ret = rproc_of_parse_firmware(dev, 0, &fw_name);
if (ret) {
dev_err(dev, "failed to parse firmware-name property, ret = %d\n",
ret);
return ret;
}
rproc = rproc_alloc(dev, dev_name(dev), &k3_dsp_rproc_ops, fw_name,
sizeof(*kproc));
if (!rproc)
return -ENOMEM;
rproc->has_iommu = false;
rproc->recovery_disabled = true;
if (data->uses_lreset) {
rproc->ops->prepare = k3_dsp_rproc_prepare;
rproc->ops->unprepare = k3_dsp_rproc_unprepare;
}
kproc = rproc->priv;
kproc->rproc = rproc;
kproc->dev = dev;
kproc->data = data;
kproc->ti_sci = ti_sci_get_by_phandle(np, "ti,sci");
if (IS_ERR(kproc->ti_sci)) {
ret = PTR_ERR(kproc->ti_sci);
if (ret != -EPROBE_DEFER) {
dev_err(dev, "failed to get ti-sci handle, ret = %d\n",
ret);
}
kproc->ti_sci = NULL;
goto free_rproc;
}
ret = of_property_read_u32(np, "ti,sci-dev-id", &kproc->ti_sci_id);
if (ret) {
dev_err(dev, "missing 'ti,sci-dev-id' property\n");
goto put_sci;
}
kproc->reset = devm_reset_control_get_exclusive(dev, NULL);
if (IS_ERR(kproc->reset)) {
ret = PTR_ERR(kproc->reset);
dev_err(dev, "failed to get reset, status = %d\n", ret);
goto put_sci;
}
kproc->tsp = k3_dsp_rproc_of_get_tsp(dev, kproc->ti_sci);
if (IS_ERR(kproc->tsp)) {
dev_err(dev, "failed to construct ti-sci proc control, ret = %d\n",
ret);
ret = PTR_ERR(kproc->tsp);
goto put_sci;
}
ret = ti_sci_proc_request(kproc->tsp);
if (ret < 0) {
dev_err(dev, "ti_sci_proc_request failed, ret = %d\n", ret);
goto free_tsp;
}
ret = k3_dsp_rproc_of_get_memories(pdev, kproc);
if (ret)
goto release_tsp;
ret = k3_dsp_reserved_mem_init(kproc);
if (ret) {
dev_err(dev, "reserved memory init failed, ret = %d\n", ret);
goto release_tsp;
}
/*
* ensure the DSP local reset is asserted to ensure the DSP doesn't
* execute bogus code in .prepare() when the module reset is released.
*/
if (data->uses_lreset) {
ret = reset_control_status(kproc->reset);
if (ret < 0) {
dev_err(dev, "failed to get reset status, status = %d\n",
ret);
goto release_mem;
} else if (ret == 0) {
dev_warn(dev, "local reset is deasserted for device\n");
k3_dsp_rproc_reset(kproc);
}
}
ret = rproc_add(rproc);
if (ret) {
dev_err(dev, "failed to add register device with remoteproc core, status = %d\n",
ret);
goto release_mem;
}
platform_set_drvdata(pdev, kproc);
return 0;
release_mem:
k3_dsp_reserved_mem_exit(kproc);
release_tsp:
ret1 = ti_sci_proc_release(kproc->tsp);
if (ret1)
dev_err(dev, "failed to release proc, ret = %d\n", ret1);
free_tsp:
kfree(kproc->tsp);
put_sci:
ret1 = ti_sci_put_handle(kproc->ti_sci);
if (ret1)
dev_err(dev, "failed to put ti_sci handle, ret = %d\n", ret1);
free_rproc:
rproc_free(rproc);
return ret;
}
static int k3_dsp_rproc_remove(struct platform_device *pdev)
{
struct k3_dsp_rproc *kproc = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
int ret;
rproc_del(kproc->rproc);
ret = ti_sci_proc_release(kproc->tsp);
if (ret)
dev_err(dev, "failed to release proc, ret = %d\n", ret);
kfree(kproc->tsp);
ret = ti_sci_put_handle(kproc->ti_sci);
if (ret)
dev_err(dev, "failed to put ti_sci handle, ret = %d\n", ret);
k3_dsp_reserved_mem_exit(kproc);
rproc_free(kproc->rproc);
return 0;
}
static const struct k3_dsp_mem_data c66_mems[] = {
{ .name = "l2sram", .dev_addr = 0x800000 },
{ .name = "l1pram", .dev_addr = 0xe00000 },
{ .name = "l1dram", .dev_addr = 0xf00000 },
};
/* C71x cores only have an L1P cache, there are no L1P SRAMs */
static const struct k3_dsp_mem_data c71_mems[] = {
{ .name = "l2sram", .dev_addr = 0x800000 },
{ .name = "l1dram", .dev_addr = 0xe00000 },
};
static const struct k3_dsp_dev_data c66_data = {
.mems = c66_mems,
.num_mems = ARRAY_SIZE(c66_mems),
.boot_align_addr = SZ_1K,
.uses_lreset = true,
};
static const struct k3_dsp_dev_data c71_data = {
.mems = c71_mems,
.num_mems = ARRAY_SIZE(c71_mems),
.boot_align_addr = SZ_2M,
.uses_lreset = false,
};
static const struct of_device_id k3_dsp_of_match[] = {
{ .compatible = "ti,j721e-c66-dsp", .data = &c66_data, },
{ .compatible = "ti,j721e-c71-dsp", .data = &c71_data, },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, k3_dsp_of_match);
static struct platform_driver k3_dsp_rproc_driver = {
.probe = k3_dsp_rproc_probe,
.remove = k3_dsp_rproc_remove,
.driver = {
.name = "k3-dsp-rproc",
.of_match_table = k3_dsp_of_match,
},
};
module_platform_driver(k3_dsp_rproc_driver);
MODULE_AUTHOR("Suman Anna <s-anna@ti.com>");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("TI K3 DSP Remoteproc driver");
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Texas Instruments TI-SCI Processor Controller Helper Functions
*
* Copyright (C) 2018-2020 Texas Instruments Incorporated - https://www.ti.com/
* Suman Anna <s-anna@ti.com>
*/
#ifndef REMOTEPROC_TI_SCI_PROC_H
#define REMOTEPROC_TI_SCI_PROC_H
#include <linux/soc/ti/ti_sci_protocol.h>
/**
* struct ti_sci_proc - structure representing a processor control client
* @sci: cached TI-SCI protocol handle
* @ops: cached TI-SCI proc ops
* @dev: cached client device pointer
* @proc_id: processor id for the consumer remoteproc device
* @host_id: host id to pass the control over for this consumer remoteproc
* device
*/
struct ti_sci_proc {
const struct ti_sci_handle *sci;
const struct ti_sci_proc_ops *ops;
struct device *dev;
u8 proc_id;
u8 host_id;
};
static inline int ti_sci_proc_request(struct ti_sci_proc *tsp)
{
int ret;
ret = tsp->ops->request(tsp->sci, tsp->proc_id);
if (ret)
dev_err(tsp->dev, "ti-sci processor request failed: %d\n",
ret);
return ret;
}
static inline int ti_sci_proc_release(struct ti_sci_proc *tsp)
{
int ret;
ret = tsp->ops->release(tsp->sci, tsp->proc_id);
if (ret)
dev_err(tsp->dev, "ti-sci processor release failed: %d\n",
ret);
return ret;
}
static inline int ti_sci_proc_handover(struct ti_sci_proc *tsp)
{
int ret;
ret = tsp->ops->handover(tsp->sci, tsp->proc_id, tsp->host_id);
if (ret)
dev_err(tsp->dev, "ti-sci processor handover of %d to %d failed: %d\n",
tsp->proc_id, tsp->host_id, ret);
return ret;
}
static inline int ti_sci_proc_set_config(struct ti_sci_proc *tsp,
u64 boot_vector,
u32 cfg_set, u32 cfg_clr)
{
int ret;
ret = tsp->ops->set_config(tsp->sci, tsp->proc_id, boot_vector,
cfg_set, cfg_clr);
if (ret)
dev_err(tsp->dev, "ti-sci processor set_config failed: %d\n",
ret);
return ret;
}
static inline int ti_sci_proc_set_control(struct ti_sci_proc *tsp,
u32 ctrl_set, u32 ctrl_clr)
{
int ret;
ret = tsp->ops->set_control(tsp->sci, tsp->proc_id, ctrl_set, ctrl_clr);
if (ret)
dev_err(tsp->dev, "ti-sci processor set_control failed: %d\n",
ret);
return ret;
}
static inline int ti_sci_proc_get_status(struct ti_sci_proc *tsp,
u64 *boot_vector, u32 *cfg_flags,
u32 *ctrl_flags, u32 *status_flags)
{
int ret;
ret = tsp->ops->get_status(tsp->sci, tsp->proc_id, boot_vector,
cfg_flags, ctrl_flags, status_flags);
if (ret)
dev_err(tsp->dev, "ti-sci processor get_status failed: %d\n",
ret);
return ret;
}
#endif /* REMOTEPROC_TI_SCI_PROC_H */
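/*
 * Illustrative sketch, not part of this patch: one way a K3 remoteproc
 * driver might use the ti_sci_proc helpers above around firmware boot.
 * The helper names and struct fields come from this header; everything
 * prefixed "example_" (and the calling convention around it) is a
 * hypothetical assumption.
 */
static int example_tsp_boot(struct ti_sci_proc *tsp, u64 boot_vector)
{
        int ret;

        /* Claim the processor from the System Controller firmware */
        ret = ti_sci_proc_request(tsp);
        if (ret)
                return ret;

        /* Program the boot vector; no config flags set or cleared here */
        ret = ti_sci_proc_set_config(tsp, boot_vector, 0, 0);
        if (ret)
                goto release;

        /* Hand processor control over to the designated host */
        return ti_sci_proc_handover(tsp);

release:
        ti_sci_proc_release(tsp);
        return ret;
}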
......@@ -38,6 +38,7 @@
#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/virtio.h>
#include <linux/cdev.h>
#include <linux/completion.h>
#include <linux/idr.h>
#include <linux/of.h>
......@@ -359,6 +360,7 @@ enum rsc_handling_status {
* @unprepare: unprepare device after stop
* @start: power on the device and boot it
* @stop: power off the device
* @attach: attach to a device that is already powered up
* @kick: kick a virtqueue (virtqueue id given as a parameter)
* @da_to_va: optional platform hook to perform address translations
* @parse_fw: parse firmware to extract information (e.g. resource table)
......@@ -379,6 +381,7 @@ struct rproc_ops {
int (*unprepare)(struct rproc *rproc);
int (*start)(struct rproc *rproc);
int (*stop)(struct rproc *rproc);
int (*attach)(struct rproc *rproc);
void (*kick)(struct rproc *rproc, int vqid);
void * (*da_to_va)(struct rproc *rproc, u64 da, size_t len);
int (*parse_fw)(struct rproc *rproc, const struct firmware *fw);
......@@ -400,6 +403,8 @@ struct rproc_ops {
* @RPROC_RUNNING: device is up and running
* @RPROC_CRASHED: device has crashed; need to start recovery
* @RPROC_DELETED: device is deleted
* @RPROC_DETACHED: device has been booted by another entity and is waiting
* for the core to attach to it
* @RPROC_LAST: just keep this one at the end
*
* Please note that the values of these states are used as indices
......@@ -414,7 +419,8 @@ enum rproc_state {
RPROC_RUNNING = 2,
RPROC_CRASHED = 3,
RPROC_DELETED = 4,
RPROC_LAST = 5,
RPROC_DETACHED = 5,
RPROC_LAST = 6,
};
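/*
 * Hedged sketch, not part of this patch: how a platform driver might use
 * the new RPROC_DETACHED state together with the .attach op when it finds
 * the remote processor already booted by an earlier boot stage. The
 * "example_" names and the detection logic are assumptions.
 */
static int example_rproc_attach(struct rproc *rproc)
{
        /* Re-establish communication with the already-running firmware */
        return 0;
}

static const struct rproc_ops example_detach_ops = {
        .attach = example_rproc_attach,
};

static void example_mark_if_detached(struct rproc *rproc, bool already_running)
{
        /* Tell the core to attach instead of loading and starting firmware */
        if (already_running)
                rproc->state = RPROC_DETACHED;
}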
/**
......@@ -434,6 +440,20 @@ enum rproc_crash_type {
RPROC_FATAL_ERROR,
};
/**
* enum rproc_dump_mechanism - Coredump options for core
* @RPROC_COREDUMP_DEFAULT: Copy dump to separate buffer and carry on with
* recovery
* @RPROC_COREDUMP_INLINE: Read segments directly from device memory. Stall
* recovery until all segments are read
* @RPROC_COREDUMP_DISABLED: Don't perform any dump
*/
enum rproc_dump_mechanism {
RPROC_COREDUMP_DEFAULT,
RPROC_COREDUMP_INLINE,
RPROC_COREDUMP_DISABLED,
};
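/*
 * Minimal sketch, assumption only: selecting the inline coredump mechanism
 * so recovery stalls until userspace has consumed the dump through
 * devcoredump. Whether a given driver sets @dump_conf directly or leaves
 * it to the core's debugfs control is not specified here.
 */
static void example_enable_inline_coredump(struct rproc *rproc)
{
        rproc->dump_conf = RPROC_COREDUMP_INLINE;
}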
/**
* struct rproc_dump_segment - segment info from ELF header
* @node: list node related to the rproc segment list
......@@ -451,7 +471,7 @@ struct rproc_dump_segment {
void *priv;
void (*dump)(struct rproc *rproc, struct rproc_dump_segment *segment,
void *dest);
void *dest, size_t offset, size_t size);
loff_t offset;
};
......@@ -466,6 +486,7 @@ struct rproc_dump_segment {
* @dev: virtual device for refcounting and common remoteproc behavior
* @power: refcount of users who need this rproc powered up
* @state: state of the device
* @dump_conf: Currently selected coredump configuration
* @lock: lock which protects concurrent manipulations of the rproc
* @dbg_dir: debugfs directory of this rproc device
* @traces: list of trace buffers
......@@ -486,8 +507,11 @@ struct rproc_dump_segment {
* @table_sz: size of @cached_table
* @has_iommu: flag to indicate if remote processor is behind an MMU
* @auto_boot: flag to indicate if remote processor should be auto-started
* @autonomous: true if an external entity has booted the remote processor
* @dump_segments: list of segments in the firmware
* @nb_vdev: number of vdev currently handled by rproc
* @cdev: character device of the rproc
* @cdev_put_on_release: flag to indicate if remoteproc should be shut down on @cdev release
*/
struct rproc {
struct list_head node;
......@@ -499,6 +523,7 @@ struct rproc {
struct device dev;
atomic_t power;
unsigned int state;
enum rproc_dump_mechanism dump_conf;
struct mutex lock;
struct dentry *dbg_dir;
struct list_head traces;
......@@ -519,10 +544,13 @@ struct rproc {
size_t table_sz;
bool has_iommu;
bool auto_boot;
bool autonomous;
struct list_head dump_segments;
int nb_vdev;
u8 elf_class;
u16 elf_machine;
struct cdev cdev;
bool cdev_put_on_release;
};
/**
......@@ -603,6 +631,7 @@ void rproc_put(struct rproc *rproc);
int rproc_add(struct rproc *rproc);
int rproc_del(struct rproc *rproc);
void rproc_free(struct rproc *rproc);
void rproc_resource_cleanup(struct rproc *rproc);
struct rproc *devm_rproc_alloc(struct device *dev, const char *name,
const struct rproc_ops *ops,
......@@ -630,7 +659,8 @@ int rproc_coredump_add_custom_segment(struct rproc *rproc,
dma_addr_t da, size_t size,
void (*dumpfn)(struct rproc *rproc,
struct rproc_dump_segment *segment,
void *dest),
void *dest, size_t offset,
size_t size),
void *priv);
int rproc_coredump_set_elf_info(struct rproc *rproc, u8 class, u16 machine);
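/*
 * Hedged sketch, not part of this patch: a custom dump callback using the
 * updated signature, which now receives an offset and size so only the
 * requested window of the segment is copied (as needed by the inline,
 * read-on-demand coredump mode). The device address, segment size and
 * "example_" names are hypothetical; the usual kernel headers (including
 * linux/remoteproc.h and linux/sizes.h) are assumed to be included.
 */
static void example_segment_dump(struct rproc *rproc,
                                 struct rproc_dump_segment *segment,
                                 void *dest, size_t offset, size_t size)
{
        void *va = rproc_da_to_va(rproc, segment->da + offset, size);

        if (va)
                memcpy(dest, va, size);
        else
                memset(dest, 0, size);
}

static int example_add_dump_segment(struct rproc *rproc)
{
        /* Register a 16 KiB segment at a hypothetical device address */
        return rproc_coredump_add_custom_segment(rproc, 0x80000000, SZ_16K,
                                                 example_segment_dump, NULL);
}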
......
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (C) 2019 Linaro Ltd. */
#ifndef __QCOM_Q6V5_IPA_NOTIFY_H__
#define __QCOM_Q6V5_IPA_NOTIFY_H__
#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
#include <linux/remoteproc.h>
enum qcom_rproc_event {
MODEM_STARTING = 0, /* Modem is about to be started */
MODEM_RUNNING = 1, /* Startup complete; modem is operational */
MODEM_STOPPING = 2, /* Modem is about to shut down */
MODEM_CRASHED = 3, /* Modem has crashed (implies stopping) */
MODEM_OFFLINE = 4, /* Modem is now offline */
MODEM_REMOVING = 5, /* Modem is about to be removed */
};
typedef void (*qcom_ipa_notify_t)(void *data, enum qcom_rproc_event event);
struct qcom_rproc_ipa_notify {
struct rproc_subdev subdev;
qcom_ipa_notify_t notify;
void *data;
};
/**
* qcom_add_ipa_notify_subdev() - Register IPA notification subdevice
* @rproc: rproc handle
* @ipa_notify: IPA notification subdevice handle
*
* Register the @ipa_notify subdevice with the @rproc so modem events
* can be sent to IPA when they occur.
*
* This is defined in "qcom_q6v5_ipa_notify.c".
*/
void qcom_add_ipa_notify_subdev(struct rproc *rproc,
struct qcom_rproc_ipa_notify *ipa_notify);
/**
* qcom_remove_ipa_notify_subdev() - Remove IPA SSR subdevice
* @rproc: rproc handle
* @ipa_notify: IPA notification subdevice handle
*
* This is defined in "qcom_q6v5_ipa_notify.c".
*/
void qcom_remove_ipa_notify_subdev(struct rproc *rproc,
struct qcom_rproc_ipa_notify *ipa_notify);
/**
* qcom_register_ipa_notify() - Register IPA notification function
* @rproc: Remote processor handle
* @notify: Non-null IPA notification callback function pointer
* @data: Data supplied to IPA notification callback function
*
* Return: 0 if successful, or a negative error code otherwise
*
* This is defined in "qcom_q6v5_mss.c".
*/
int qcom_register_ipa_notify(struct rproc *rproc, qcom_ipa_notify_t notify,
void *data);
/**
* qcom_deregister_ipa_notify() - Deregister IPA notification function
* @rproc: Remote processor handle
*
* This is defined in "qcom_q6v5_mss.c".
*/
void qcom_deregister_ipa_notify(struct rproc *rproc);
#else /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
struct qcom_rproc_ipa_notify { /* empty */ };
#define qcom_add_ipa_notify_subdev(rproc, ipa_notify) /* no-op */
#define qcom_remove_ipa_notify_subdev(rproc, ipa_notify) /* no-op */
#endif /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
#endif /* !__QCOM_Q6V5_IPA_NOTIFY_H__ */
......@@ -5,17 +5,43 @@ struct notifier_block;
#if IS_ENABLED(CONFIG_QCOM_RPROC_COMMON)
int qcom_register_ssr_notifier(struct notifier_block *nb);
void qcom_unregister_ssr_notifier(struct notifier_block *nb);
/**
* enum qcom_ssr_notify_type - Startup/Shutdown events related to a remoteproc
* processor.
*
* @QCOM_SSR_BEFORE_POWERUP: Remoteproc about to start (prepare stage)
* @QCOM_SSR_AFTER_POWERUP: Remoteproc is running (start stage)
* @QCOM_SSR_BEFORE_SHUTDOWN: Remoteproc crashed or shutting down (stop stage)
* @QCOM_SSR_AFTER_SHUTDOWN: Remoteproc is down (unprepare stage)
*/
enum qcom_ssr_notify_type {
QCOM_SSR_BEFORE_POWERUP,
QCOM_SSR_AFTER_POWERUP,
QCOM_SSR_BEFORE_SHUTDOWN,
QCOM_SSR_AFTER_SHUTDOWN,
};
struct qcom_ssr_notify_data {
const char *name;
bool crashed;
};
void *qcom_register_ssr_notifier(const char *name, struct notifier_block *nb);
int qcom_unregister_ssr_notifier(void *notify, struct notifier_block *nb);
#else
static inline int qcom_register_ssr_notifier(struct notifier_block *nb)
static inline void *qcom_register_ssr_notifier(const char *name,
struct notifier_block *nb)
{
return 0;
return NULL;
}
static inline void qcom_unregister_ssr_notifier(struct notifier_block *nb) {}
static inline int qcom_unregister_ssr_notifier(void *notify,
struct notifier_block *nb)
{
return 0;
}
#endif
......
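/*
 * Illustrative sketch, not part of this patch: a loosely coupled driver
 * registering for SSR notifications from a named remote processor using
 * the reworked interface above. The "mpss" name, the "example_" symbols
 * and the includes (linux/notifier.h, linux/remoteproc/qcom_rproc.h) are
 * assumptions.
 */
static int example_ssr_notify(struct notifier_block *nb, unsigned long event,
                              void *data)
{
        struct qcom_ssr_notify_data *notify_data = data;

        if (event == QCOM_SSR_BEFORE_SHUTDOWN && notify_data->crashed)
                pr_info("remoteproc %s crashed\n", notify_data->name);

        return NOTIFY_OK;
}

static struct notifier_block example_ssr_nb = {
        .notifier_call = example_ssr_notify,
};

static void *example_register_ssr(void)
{
        /* The returned cookie must be passed back when unregistering */
        return qcom_register_ssr_notifier("mpss", &example_ssr_nb);
}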
/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
/*
* IOCTLs for Remoteproc's character device interface.
*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#ifndef _UAPI_REMOTEPROC_CDEV_H_
#define _UAPI_REMOTEPROC_CDEV_H_
#include <linux/ioctl.h>
#include <linux/types.h>
#define RPROC_MAGIC 0xB7
/*
* The RPROC_SET_SHUTDOWN_ON_RELEASE ioctl enables/disables automatic shutdown of a remote
* processor when the controlling userspace process closes the char device interface.
*
* input parameter: integer
* 0 : disable automatic shutdown
* other : enable automatic shutdown
*/
#define RPROC_SET_SHUTDOWN_ON_RELEASE _IOW(RPROC_MAGIC, 1, __s32)
/*
* The RPROC_GET_SHUTDOWN_ON_RELEASE ioctl gets information about whether the automatic shutdown of
* a remote processor is enabled or disabled when the controlling userspace closes the char device
* interface.
*
* output parameter: integer
* 0 : automatic shutdown disabled
* other : automatic shutdown enabled
*/
#define RPROC_GET_SHUTDOWN_ON_RELEASE _IOR(RPROC_MAGIC, 2, __s32)
#endif
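/*
 * Userspace sketch, not part of this patch: tying a remote processor's
 * lifetime to a process by enabling shutdown-on-release, so the processor
 * is stopped when the file descriptor is released (including when the
 * process dies). The /dev/remoteproc0 path is an assumption.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/remoteproc_cdev.h>

int main(void)
{
        __s32 enable = 1;
        int fd = open("/dev/remoteproc0", O_RDWR);

        if (fd < 0)
                return 1;

        if (ioctl(fd, RPROC_SET_SHUTDOWN_ON_RELEASE, &enable) < 0)
                perror("RPROC_SET_SHUTDOWN_ON_RELEASE");

        /* ... interact with the remote processor while this process lives */
        pause();
        close(fd);
        return 0;
}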