Commit fbd43602 authored by David S. Miller

Merge branch 'net-introduce-Qualcomm-IPA-driver'

Alex Elder says:

====================
net: introduce Qualcomm IPA driver (UPDATED)

This series presents the driver for the Qualcomm IP Accelerator (IPA).

This is version 2 of this updated series.  It includes the following
small changes since the previous version:
  - Now based on net-next instead of v5.6-rc
  - Config option now named CONFIG_QCOM_IPA
  - Some minor cleanup in the GSI code
  - Small change to replenish logic
  - No longer depends on remoteproc bug fixes
What follows is basically the same explanation as was posted previously.

					-Alex

I have posted earlier versions of this code previously, but it has
undergone quite a bit of development since the last time, so rather
than calling it "version 3" I'm just treating it as a new series
(indicating it's been updated in this message).  The fast/data path
is the same as before.  But the driver now (nearly) supports a
second platform, its transaction handling has been generalized
and improved, and modem activities are now handled in a more
unified way.

This series is available (based on net-next) in branch "ipa_updated-v2"
of this git repository:
  https://git.linaro.org/people/alex.elder/linux.git

The branch depends on one other small patch that I sent out
for review earlier.
  https://lore.kernel.org/lkml/20200306042302.17602-1-elder@linaro.org/

I want to address some of the discussion that arose last time.

First, there was the WWAN discussion.  Here's the history:
  - This was last posted nine months ago.
  - Reviewers at that time favored developing a new WWAN subsystem that
    would be used for managing devices like this.  And the suggestion
    was to not accept this driver until that could be developed.
  - Along the way, Apple acquired much of Intel's modem business.
    And as a result, the generic framework became less pressing.
  - I did participate in the WWAN subsystem design however, and
    although it went dormant for a while it's been resurrected:
      https://lore.kernel.org/netdev/20200225100053.16385-1-johannes@sipsolutions.net/
  - Unfortunately the proposed WWAN design was not an easy fit
    with Qualcomm's integrated modem interfaces.  Given that
    rmnet is a supported link type in the upstream "iproute2"
    package (more on this below), I have opted not to integrate
    with any WWAN subsystem.

So in summary, this driver does not integrate with a generic WWAN
framework.  And I'd like it to be accepted upstream despite that.

Next, Arnd Bergmann had some concerns about flow control.  (Note:
some of my discussions with Arnd about this were offline.) The
overall architecture here also involves the "rmnet" driver:
  drivers/net/ethernet/qualcomm/rmnet

The rmnet driver presents a network device for use.  It connects
with another network device presented by the IPA driver.  The
rmnet driver wraps (and unwraps) packets transferred to (and from)
the IPA driver with QMAP headers.

   ---------------
   | rmnet_data0 |    <-- "real" netdev
   ---------------
          ||       }- QMAP spoken here
   --------------
   | rmnet_ipa0 |     <-- also netdev, transporting QMAP packets
   --------------
          ||
   --------------
  ( IPA hardware )
   --------------

Arnd's concern was that the rmnet_data0 network device does not
have the benefit of information about the state of the underlying
IPA hardware in order to be effective in controlling TX flow.
The feared result is over-buffering of TX packets (bufferbloat).
I began working on some simple experiments to see whether (or how
much) his concern was warranted.  But it turned out that completing
these experiments was much more work than had been hoped.

The rmnet driver is present in the upstream kernel.  There is also
support for the rmnet link type in the upstream "ip" user space
command in the "iproute2" package.  Changing the layering of rmnet
over IPA likely involves deprecating the rmnet driver and its
support in "iproute2".  I would really rather not go down that
path.
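
For reference, creating an rmnet device over a transport netdev with the
upstream "ip" command looks roughly like the following.  The interface
names here are hypothetical; the actual netdev name exposed by the IPA
driver may differ:

```shell
# Create a QMAP mux channel (mux ID 1) on top of the IPA netdev,
# then bring it up.  Requires root and rmnet-capable hardware.
ip link add link rmnet_ipa0 name rmnet_data0 type rmnet mux_id 1
ip link set dev rmnet_data0 up
```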

There is precedent for this sort of layering of network devices
(L2TP, VLAN).  And any architecture like this would suffer the
issues Arnd mentioned; the problem is not limited to rmnet and IPA.
I do think this is a problem worth solving, but the prudent thing
to do might be to try to solve it more generally.

So to summarize on this issue, this driver does not attempt to
change the way the rmnet and IPA drivers work together.  And even
though I think Arnd's concerns warrant more investigation, I'd like
this driver to be accepted upstream without any change to this
architecture.

Finally, here is a more technical description of the series, along
with acknowledgements of some people who contributed to it.

The IPA is a component present in some Qualcomm SoCs that allows
network functions such as aggregation, filtering, routing, and NAT
to be performed without active involvement of the main application
processor (AP).

In this initial patch series these advanced features are not
implemented.  The IPA driver simply provides a network interface
that makes the modem's LTE network available in Linux.  This initial
series supports only the Qualcomm SDM845 SoC.  The Qualcomm SC7180
SoC is partially supported, and support for other platforms will
follow.

This code is derived from a driver developed by Qualcomm.  A version
of the original source can be seen here:
  https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree
in the "drivers/platform/msm/ipa" directory.  Many were involved in
developing this, but the following individuals deserve explicit
acknowledgement for their substantial contributions:

    Abhishek Choubey
    Ady Abraham
    Chaitanya Pratapa
    David Arinzon
    Ghanim Fodi
    Gidon Studinski
    Ravi Gummadidala
    Shihuan Liu
    Skylar Chang
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents e2f5cb72 9cc5ae12
# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/qcom,ipa.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm IP Accelerator (IPA)

maintainers:
  - Alex Elder <elder@kernel.org>

description: |
  This binding describes the Qualcomm IPA.  The IPA is capable of offloading
  certain network processing tasks (e.g. filtering, routing, and NAT) from
  the main processor.

  The IPA sits between multiple independent "execution environments,"
  including the Application Processor (AP) and the modem.  The IPA presents
  a Generic Software Interface (GSI) to each execution environment.  The GSI
  is an integral part of the IPA, but it is logically isolated and has a
  distinct interrupt and a separately-defined address space.

  See also soc/qcom/qcom,smp2p.txt and interconnect/interconnect.txt.

        --------             ---------
        |      |             |       |
        |  AP  +<---.   .----+ Modem |
        |      +--. |   | .->+       |
        |      |  | |   | |  |       |
        --------  | |   | |  ---------
                  v |   v |
                --+-+---+-+--
                |    GSI    |
                |-----------|
                |           |
                |    IPA    |
                |           |
                -------------
properties:
  compatible:
    const: "qcom,sdm845-ipa"

  reg:
    items:
      - description: IPA registers
      - description: IPA shared memory
      - description: GSI registers

  reg-names:
    items:
      - const: ipa-reg
      - const: ipa-shared
      - const: gsi

  clocks:
    maxItems: 1

  clock-names:
    const: core

  interrupts:
    items:
      - description: IPA interrupt (hardware IRQ)
      - description: GSI interrupt (hardware IRQ)
      - description: Modem clock query interrupt (smp2p interrupt)
      - description: Modem setup ready interrupt (smp2p interrupt)

  interrupt-names:
    items:
      - const: ipa
      - const: gsi
      - const: ipa-clock-query
      - const: ipa-setup-ready

  interconnects:
    items:
      - description: Interconnect path between IPA and main memory
      - description: Interconnect path between IPA and internal memory
      - description: Interconnect path between IPA and the AP subsystem

  interconnect-names:
    items:
      - const: memory
      - const: imem
      - const: config

  qcom,smem-states:
    $ref: /schemas/types.yaml#/definitions/phandle-array
    description: State bits used by the AP to signal the modem.
    items:
      - description: Whether the "ipa-clock-enabled" state bit is valid
      - description: Whether the IPA clock is enabled (if valid)

  qcom,smem-state-names:
    $ref: /schemas/types.yaml#/definitions/string-array
    description: The names of the state bits used for SMP2P output
    items:
      - const: ipa-clock-enabled-valid
      - const: ipa-clock-enabled

  modem-init:
    type: boolean
    description:
      If present, it indicates that the modem is responsible for
      performing early IPA initialization, including loading and
      validating firmware used by the GSI.

  modem-remoteproc:
    $ref: /schemas/types.yaml#/definitions/phandle
    description:
      This defines the phandle to the remoteproc node representing
      the modem subsystem.  This is required so the IPA driver can
      receive and act on notifications of modem up/down events.

  memory-region:
    $ref: /schemas/types.yaml#/definitions/phandle-array
    maxItems: 1
    description:
      If present, a phandle for a reserved memory area that holds
      the firmware passed to Trust Zone for authentication.  Required
      when Trust Zone (not the modem) performs early initialization.

required:
  - compatible
  - reg
  - clocks
  - interrupts
  - interconnects
  - qcom,smem-states
  - modem-remoteproc

oneOf:
  - required:
      - modem-init
  - required:
      - memory-region
examples:
  - |
    smp2p-mpss {
        compatible = "qcom,smp2p";

        ipa_smp2p_out: ipa-ap-to-modem {
            qcom,entry-name = "ipa";
            #qcom,smem-state-cells = <1>;
        };

        ipa_smp2p_in: ipa-modem-to-ap {
            qcom,entry-name = "ipa";
            interrupt-controller;
            #interrupt-cells = <2>;
        };
    };

    ipa@1e40000 {
        compatible = "qcom,sdm845-ipa";

        modem-init;
        modem-remoteproc = <&mss_pil>;

        reg = <0 0x1e40000 0 0x7000>,
              <0 0x1e47000 0 0x2000>,
              <0 0x1e04000 0 0x2c000>;
        reg-names = "ipa-reg",
                    "ipa-shared",
                    "gsi";

        interrupts-extended = <&intc 0 311 IRQ_TYPE_EDGE_RISING>,
                              <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
                              <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
                              <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
        interrupt-names = "ipa",
                          "gsi",
                          "ipa-clock-query",
                          "ipa-setup-ready";

        clocks = <&rpmhcc RPMH_IPA_CLK>;
        clock-names = "core";

        interconnects =
            <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
            <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
            <&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
        interconnect-names = "memory",
                             "imem",
                             "config";

        qcom,smem-states = <&ipa_smp2p_out 0>,
                           <&ipa_smp2p_out 1>;
        qcom,smem-state-names = "ipa-clock-enabled-valid",
                                "ipa-clock-enabled";
    };
......@@ -13662,6 +13662,12 @@ L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported
F: sound/soc/qcom/
QCOM IPA DRIVER
M: Alex Elder <elder@kernel.org>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ipa/
QEMU MACHINE EMULATOR AND VIRTUALIZER SUPPORT
M: Gabriel Somlo <somlo@cmu.edu>
M: "Michael S. Tsirkin" <mst@redhat.com>
......
......@@ -675,6 +675,17 @@ modem_smp2p_in: slave-kernel {
		interrupt-controller;
		#interrupt-cells = <2>;
	};

	ipa_smp2p_out: ipa-ap-to-modem {
		qcom,entry-name = "ipa";
		#qcom,smem-state-cells = <1>;
	};

	ipa_smp2p_in: ipa-modem-to-ap {
		qcom,entry-name = "ipa";
		interrupt-controller;
		#interrupt-cells = <2>;
	};
};
smp2p-slpi {
......@@ -1435,6 +1446,46 @@ ufs_mem_phy_lanes: lanes@1d87400 {
};
};
	ipa@1e40000 {
		compatible = "qcom,sdm845-ipa";

		modem-init;
		modem-remoteproc = <&mss_pil>;

		reg = <0 0x1e40000 0 0x7000>,
		      <0 0x1e47000 0 0x2000>,
		      <0 0x1e04000 0 0x2c000>;
		reg-names = "ipa-reg",
			    "ipa-shared",
			    "gsi";

		interrupts-extended =
			<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
			<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
			<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
			<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
		interrupt-names = "ipa",
				  "gsi",
				  "ipa-clock-query",
				  "ipa-setup-ready";

		clocks = <&rpmhcc RPMH_IPA_CLK>;
		clock-names = "core";

		interconnects =
			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
			<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
		interconnect-names = "memory",
				     "imem",
				     "config";

		qcom,smem-states = <&ipa_smp2p_out 0>,
				   <&ipa_smp2p_out 1>;
		qcom,smem-state-names = "ipa-clock-enabled-valid",
					"ipa-clock-enabled";
	};

	tcsr_mutex_regs: syscon@1f40000 {
		compatible = "syscon";
		reg = <0 0x01f40000 0 0x40000>;
......
......@@ -444,6 +444,8 @@ source "drivers/net/fddi/Kconfig"
source "drivers/net/hippi/Kconfig"
source "drivers/net/ipa/Kconfig"
config NET_SB1000
tristate "General Instruments Surfboard 1000"
depends on PNP
......
......@@ -47,6 +47,7 @@ obj-$(CONFIG_ETHERNET) += ethernet/
obj-$(CONFIG_FDDI) += fddi/
obj-$(CONFIG_HIPPI) += hippi/
obj-$(CONFIG_HAMRADIO) += hamradio/
obj-$(CONFIG_QCOM_IPA) += ipa/
obj-$(CONFIG_PLIP) += plip/
obj-$(CONFIG_PPP) += ppp/
obj-$(CONFIG_PPP_ASYNC) += ppp/
......
config QCOM_IPA
	tristate "Qualcomm IPA support"
	depends on ARCH_QCOM && 64BIT && NET
	select QCOM_QMI_HELPERS
	select QCOM_MDT_LOADER
	default QCOM_Q6V5_COMMON
	help
	  Choose Y or M here to include support for the Qualcomm
	  IP Accelerator (IPA), a hardware block present in some
	  Qualcomm SoCs.  The IPA is a programmable protocol processor
	  that is capable of generic hardware handling of IP packets,
	  including routing, filtering, and NAT.  Currently the IPA
	  driver supports only basic transport of network traffic
	  between the AP and modem, on the Qualcomm SDM845 SoC.

	  Note that if selected, the selection type must match that
	  of QCOM_Q6V5_COMMON (Y or M).

	  If unsure, say N.
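
As a hypothetical illustration of that constraint (the exact remoteproc
option depends on the platform; QCOM_Q6V5_MSS is one option that selects
QCOM_Q6V5_COMMON), a .config fragment with everything built as modules
might look like:

```
CONFIG_ARCH_QCOM=y
CONFIG_QCOM_Q6V5_MSS=m
CONFIG_QCOM_IPA=m
```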
# Un-comment the next line if you want to validate configuration data
#ccflags-y		+=	-DIPA_VALIDATE

obj-$(CONFIG_QCOM_IPA)	+=	ipa.o

ipa-y			:=	ipa_main.o ipa_clock.o ipa_reg.o ipa_mem.o \
				ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
				ipa_qmi.o ipa_qmi_msg.o

ipa-y			+=	ipa_data-sdm845.o ipa_data-sc7180.o
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2018-2020 Linaro Ltd.
*/
#ifndef _GSI_H_
#define _GSI_H_
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
#include <linux/completion.h>
#include <linux/platform_device.h>
#include <linux/netdevice.h>
/* Maximum number of channels and event rings supported by the driver */
#define GSI_CHANNEL_COUNT_MAX 17
#define GSI_EVT_RING_COUNT_MAX 13
/* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
#define GSI_TLV_MAX 64
struct device;
struct scatterlist;
struct platform_device;
struct gsi;
struct gsi_trans;
struct gsi_channel_data;
struct ipa_gsi_endpoint_data;
/* Execution environment IDs */
enum gsi_ee_id {
	GSI_EE_AP	= 0,
	GSI_EE_MODEM	= 1,
	GSI_EE_UC	= 2,
	GSI_EE_TZ	= 3,
};
struct gsi_ring {
	void *virt;		/* ring array base address */
	dma_addr_t addr;	/* primarily low 32 bits used */
	u32 count;		/* number of elements in ring */

	/* The ring index value indicates the next "open" entry in the ring.
	 *
	 * A channel ring consists of TRE entries filled by the AP and passed
	 * to the hardware for processing.  For a channel ring, the ring index
	 * identifies the next unused entry to be filled by the AP.
	 *
	 * An event ring consists of event structures filled by the hardware
	 * and passed to the AP.  For event rings, the ring index identifies
	 * the next ring entry that is not known to have been filled by the
	 * hardware.
	 */
	u32 index;
};
/* Transactions use several resources that can be allocated dynamically
* but taken from a fixed-size pool. The number of elements required for
* the pool is limited by the total number of TREs that can be outstanding.
*
* If sufficient TREs are available to reserve for a transaction,
* allocation from these pools is guaranteed to succeed. Furthermore,
* these resources are implicitly freed whenever the TREs in the
* transaction they're associated with are released.
*
* The result of a pool allocation of multiple elements is always
* contiguous.
*/
struct gsi_trans_pool {
	void *base;		/* base address of element pool */
	u32 count;		/* # elements in the pool */
	u32 free;		/* next free element in pool (modulo) */
	u32 size;		/* size (bytes) of an element */
	u32 max_alloc;		/* max allocation request */
	dma_addr_t addr;	/* DMA address if DMA pool (or 0) */
};

struct gsi_trans_info {
	atomic_t tre_avail;		/* TREs available for allocation */
	struct gsi_trans_pool pool;	/* transaction pool */
	struct gsi_trans_pool sg_pool;	/* scatterlist pool */
	struct gsi_trans_pool cmd_pool;	/* command payload DMA pool */
	struct gsi_trans_pool info_pool;/* command information pool */
	struct gsi_trans **map;		/* TRE -> transaction map */

	spinlock_t spinlock;		/* protects updates to the lists */
	struct list_head alloc;		/* allocated, not committed */
	struct list_head pending;	/* committed, awaiting completion */
	struct list_head complete;	/* completed, awaiting poll */
	struct list_head polled;	/* returned by gsi_channel_poll_one() */
};
/* Hardware values signifying the state of a channel */
enum gsi_channel_state {
	GSI_CHANNEL_STATE_NOT_ALLOCATED	= 0x0,
	GSI_CHANNEL_STATE_ALLOCATED	= 0x1,
	GSI_CHANNEL_STATE_STARTED	= 0x2,
	GSI_CHANNEL_STATE_STOPPED	= 0x3,
	GSI_CHANNEL_STATE_STOP_IN_PROC	= 0x4,
	GSI_CHANNEL_STATE_ERROR		= 0xf,
};

/* We only care about channels between IPA and AP */
struct gsi_channel {
	struct gsi *gsi;
	bool toward_ipa;
	bool command;			/* AP command TX channel or not */
	bool use_prefetch;		/* use prefetch (else escape buf) */

	u8 tlv_count;			/* # entries in TLV FIFO */
	u16 tre_count;
	u16 event_count;

	struct completion completion;	/* signals channel state changes */
	enum gsi_channel_state state;

	struct gsi_ring tre_ring;
	u32 evt_ring_id;

	u64 byte_count;			/* total # bytes transferred */
	u64 trans_count;		/* total # transactions */
	/* The following counts are used only for TX endpoints */
	u64 queued_byte_count;		/* last reported queued byte count */
	u64 queued_trans_count;		/* ...and queued trans count */
	u64 compl_byte_count;		/* last reported completed byte count */
	u64 compl_trans_count;		/* ...and completed trans count */

	struct gsi_trans_info trans_info;

	struct napi_struct napi;
};

/* Hardware values signifying the state of an event ring */
enum gsi_evt_ring_state {
	GSI_EVT_RING_STATE_NOT_ALLOCATED	= 0x0,
	GSI_EVT_RING_STATE_ALLOCATED		= 0x1,
	GSI_EVT_RING_STATE_ERROR		= 0xf,
};

struct gsi_evt_ring {
	struct gsi_channel *channel;
	struct completion completion;	/* signals event ring state changes */
	enum gsi_evt_ring_state state;
	struct gsi_ring ring;
};

struct gsi {
	struct device *dev;		/* Same as IPA device */
	struct net_device dummy_dev;	/* needed for NAPI */
	void __iomem *virt;
	u32 irq;
	bool irq_wake_enabled;
	u32 channel_count;
	u32 evt_ring_count;
	struct gsi_channel channel[GSI_CHANNEL_COUNT_MAX];
	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
	u32 event_bitmap;
	u32 event_enable_bitmap;
	u32 modem_channel_bitmap;
	struct completion completion;	/* for global EE commands */
	struct mutex mutex;		/* protects commands, programming */
};
/**
* gsi_setup() - Set up the GSI subsystem
* @gsi: Address of GSI structure embedded in an IPA structure
* @db_enable: Whether to use the GSI doorbell engine
*
* @Return: 0 if successful, or a negative error code
*
* Performs initialization that must wait until the GSI hardware is
* ready (including firmware loaded).
*/
int gsi_setup(struct gsi *gsi, bool db_enable);
/**
* gsi_teardown() - Tear down GSI subsystem
* @gsi: GSI address previously passed to a successful gsi_setup() call
*/
void gsi_teardown(struct gsi *gsi);
/**
* gsi_channel_tre_max() - Channel maximum number of in-flight TREs
* @gsi: GSI pointer
* @channel_id: Channel whose limit is to be returned
*
* @Return: The maximum number of TREs outstanding on the channel
*/
u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id);
/**
* gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
* @gsi: GSI pointer
* @channel_id: Channel whose limit is to be returned
*
* @Return: The maximum TRE count per transaction on the channel
*/
u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id);
/**
* gsi_channel_start() - Start an allocated GSI channel
* @gsi: GSI pointer
* @channel_id: Channel to start
*
* @Return: 0 if successful, or a negative error code
*/
int gsi_channel_start(struct gsi *gsi, u32 channel_id);
/**
* gsi_channel_stop() - Stop a started GSI channel
* @gsi: GSI pointer returned by gsi_setup()
* @channel_id: Channel to stop
*
* @Return: 0 if successful, or a negative error code
*/
int gsi_channel_stop(struct gsi *gsi, u32 channel_id);
/**
* gsi_channel_reset() - Reset an allocated GSI channel
* @gsi: GSI pointer
* @channel_id: Channel to be reset
* @db_enable: Whether doorbell engine should be enabled
*
* Reset a channel and reconfigure it. The @db_enable flag indicates
* whether the doorbell engine will be enabled following reconfiguration.
*
* GSI hardware relinquishes ownership of all pending receive buffer
* transactions and they will complete with their cancelled flag set.
*/
void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable);
int gsi_channel_suspend(struct gsi *gsi, u32 channel_id, bool stop);
int gsi_channel_resume(struct gsi *gsi, u32 channel_id, bool start);
/**
* gsi_init() - Initialize the GSI subsystem
* @gsi: Address of GSI structure embedded in an IPA structure
* @pdev: IPA platform device
* @prefetch: Whether channels should use prefetch
* @count: Number of entries in @data
* @data: Array of GSI endpoint configuration data
* @modem_alloc: Whether the modem is permitted to allocate channels
*
* @Return: 0 if successful, or a negative error code
*
* Early stage initialization of the GSI subsystem, performing tasks
* that can be done before the GSI hardware is ready to use.
*/
int gsi_init(struct gsi *gsi, struct platform_device *pdev, bool prefetch,
u32 count, const struct ipa_gsi_endpoint_data *data,
bool modem_alloc);
/**
* gsi_exit() - Exit the GSI subsystem
* @gsi: GSI address previously passed to a successful gsi_init() call
*/
void gsi_exit(struct gsi *gsi);
#endif /* _GSI_H_ */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2018-2020 Linaro Ltd.
*/
#ifndef _GSI_PRIVATE_H_
#define _GSI_PRIVATE_H_
/* === Only "gsi.c" and "gsi_trans.c" should include this file === */
#include <linux/types.h>
struct gsi_trans;
struct gsi_ring;
struct gsi_channel;
#define GSI_RING_ELEMENT_SIZE 16 /* bytes */
/* Return the entry that follows one provided in a transaction pool */
void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element);
/**
* gsi_trans_move_complete() - Mark a GSI transaction completed
* @trans: Transaction to mark complete
*/
void gsi_trans_move_complete(struct gsi_trans *trans);
/**
* gsi_trans_move_polled() - Mark a transaction polled
* @trans: Transaction to update
*/
void gsi_trans_move_polled(struct gsi_trans *trans);
/**
* gsi_trans_complete() - Complete a GSI transaction
* @trans: Transaction to complete
*
* Marks a transaction complete (including freeing it).
*/
void gsi_trans_complete(struct gsi_trans *trans);
/**
* gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index
* @channel: Channel associated with the transaction
* @index: Index of the TRE having a transaction
*
* @Return: The GSI transaction pointer associated with the TRE index
*/
struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel,
u32 index);
/**
* gsi_channel_trans_complete() - Return a channel's next completed transaction
* @channel: Channel whose next transaction is to be returned
*
* @Return: The next completed transaction, or NULL if nothing new
*/
struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
/**
* gsi_channel_trans_cancel_pending() - Cancel pending transactions
* @channel: Channel whose pending transactions should be cancelled
*
* Cancel all pending transactions on a channel. These are transactions
* that have been committed but not yet completed. This is required when
* the channel gets reset. At that time all pending transactions will be
* marked as cancelled.
*
* NOTE: Transactions already complete at the time of this call are
* unaffected.
*/
void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
/**
* gsi_channel_trans_init() - Initialize a channel's GSI transaction info
* @gsi: GSI pointer
* @channel_id: Channel number
*
* @Return: 0 if successful, or -ENOMEM on allocation failure
*
* Creates and sets up information for managing transactions on a channel
*/
int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id);
/**
* gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init()
* @channel: Channel whose transaction information is to be cleaned up
*/
void gsi_channel_trans_exit(struct gsi_channel *channel);
/**
* gsi_channel_doorbell() - Ring a channel's doorbell
* @channel: Channel whose doorbell should be rung
*
* Rings a channel's doorbell to inform the GSI hardware that new
* transactions (TREs, really) are available for it to process.
*/
void gsi_channel_doorbell(struct gsi_channel *channel);
/**
* gsi_ring_virt() - Return virtual address for a ring entry
* @ring: Ring whose address is to be translated
* @index: Index (slot number) of entry
*/
void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
/**
* gsi_channel_tx_queued() - Report the number of bytes queued to hardware
* @channel: Channel whose bytes have been queued
*
* This arranges for the number of transactions and bytes for
* transfer that have been queued to hardware to be reported. It
* passes this information up the network stack so it can be used to
* throttle transmissions.
*/
void gsi_channel_tx_queued(struct gsi_channel *channel);
#endif /* _GSI_PRIVATE_H_ */
This diff is collapsed.
This diff is collapsed.
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2019-2020 Linaro Ltd.
*/
#ifndef _GSI_TRANS_H_
#define _GSI_TRANS_H_
#include <linux/types.h>
#include <linux/refcount.h>
#include <linux/completion.h>
#include <linux/dma-direction.h>
#include "ipa_cmd.h"
struct scatterlist;
struct device;
struct sk_buff;
struct gsi;
struct gsi_trans;
struct gsi_trans_pool;
/**
* struct gsi_trans - a GSI transaction
*
* Most fields in this structure are for internal use by the transaction core code:
* @links: Links for channel transaction lists by state
* @gsi: GSI pointer
* @channel_id: Channel number transaction is associated with
* @cancelled: If set by the core code, transaction was cancelled
* @tre_count: Number of TREs reserved for this transaction
* @used: Number of TREs *used* (could be less than tre_count)
* @len: Total # of transfer bytes represented in sgl[] (set by core)
* @data: Preserved but not touched by the core transaction code
* @sgl: An array of scatter/gather entries managed by core code
* @info: Array of command information structures (command channel)
* @direction: DMA transfer direction (DMA_NONE for commands)
* @refcount: Reference count used for destruction
* @completion: Completed when the transaction completes
* @byte_count: TX channel byte count recorded when transaction committed
* @trans_count: Channel transaction count when committed (for BQL accounting)
*
* The sizes used for some fields in this structure were chosen to ensure
* the full structure size is no larger than 128 bytes.
*/
struct gsi_trans {
	struct list_head links;		/* gsi_channel lists */

	struct gsi *gsi;
	u8 channel_id;

	bool cancelled;			/* true if transaction was cancelled */

	u8 tre_count;			/* # TREs requested */
	u8 used;			/* # entries used in sgl[] */
	u32 len;			/* total # bytes across sgl[] */
	void *data;
	struct scatterlist *sgl;
	struct ipa_cmd_info *info;	/* array of entries, or null */
	enum dma_data_direction direction;

	refcount_t refcount;
	struct completion completion;

	u64 byte_count;			/* channel byte_count when committed */
	u64 trans_count;		/* channel trans_count when committed */
};
/**
* gsi_trans_pool_init() - Initialize a pool of structures for transactions
* @pool: Pool to initialize
* @size: Size of elements in the pool
* @count: Minimum number of elements in the pool
* @max_alloc: Maximum number of elements allocated at a time from pool
*
* @Return: 0 if successful, or a negative error code
*/
int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
u32 max_alloc);
/**
* gsi_trans_pool_alloc() - Allocate one or more elements from a pool
* @pool: Pool pointer
* @count: Number of elements to allocate from the pool
*
* @Return: Virtual address of element(s) allocated from the pool
*/
void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count);
/**
* gsi_trans_pool_exit() - Inverse of gsi_trans_pool_init()
* @pool: Pool pointer
*/
void gsi_trans_pool_exit(struct gsi_trans_pool *pool);
/**
* gsi_trans_pool_init_dma() - Initialize a pool of DMA-able structures
* @dev: Device used for DMA
* @pool: Pool pointer
* @size: Size of elements in the pool
* @count: Minimum number of elements in the pool
* @max_alloc: Maximum number of elements allocated at a time from pool
*
* @Return: 0 if successful, or a negative error code
*
* Structures in this pool reside in DMA-coherent memory.
*/
int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
size_t size, u32 count, u32 max_alloc);
/**
* gsi_trans_pool_alloc_dma() - Allocate an element from a DMA pool
* @pool: DMA pool pointer
* @addr: DMA address "handle" associated with the allocation
*
* @Return: Virtual address of element allocated from the pool
*
* Only one element at a time may be allocated from a DMA pool.
*/
void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr);
/**
* gsi_trans_pool_exit_dma() - Inverse of gsi_trans_pool_init_dma()
* @dev: Device used for DMA
* @pool: Pool pointer
*/
void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
/**
* gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
* @gsi: GSI pointer
* @channel_id: Channel the transaction is associated with
* @tre_count: Number of elements in the transaction
* @direction: DMA direction for entire SGL (or DMA_NONE)
*
* @Return: A GSI transaction structure, or a null pointer if all
* available transactions are in use
*/
struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
u32 tre_count,
enum dma_data_direction direction);
/**
* gsi_trans_free() - Free a previously-allocated GSI transaction
* @trans: Transaction to be freed
*/
void gsi_trans_free(struct gsi_trans *trans);
/**
* gsi_trans_cmd_add() - Add an immediate command to a transaction
* @trans: Transaction
* @buf: Buffer pointer for command payload
* @size: Number of bytes in buffer
* @addr: DMA address for payload
* @direction: Direction of DMA transfer (or DMA_NONE if none required)
* @opcode: IPA immediate command opcode
*/
void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
dma_addr_t addr, enum dma_data_direction direction,
enum ipa_cmd_opcode opcode);
/**
* gsi_trans_page_add() - Add a page transfer to a transaction
* @trans: Transaction
* @page: Page pointer
* @size: Number of bytes (starting at offset) to transfer
* @offset: Offset within page for start of transfer
*/
int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
u32 offset);
/**
* gsi_trans_skb_add() - Add a socket transfer to a transaction
* @trans: Transaction
* @skb: Socket buffer for transfer (outbound)
*
* @Return: 0, or -EMSGSIZE if socket data won't fit in transaction.
*/
int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb);
/**
* gsi_trans_commit() - Commit a GSI transaction
* @trans: Transaction to commit
* @ring_db: Whether to tell the hardware about these queued transfers
*/
void gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
/**
* gsi_trans_commit_wait() - Commit a GSI transaction and wait for it
* to complete
* @trans: Transaction to commit
*/
void gsi_trans_commit_wait(struct gsi_trans *trans);
/**
* gsi_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
* it to complete, with timeout
* @trans: Transaction to commit
* @timeout: Timeout period (in milliseconds)
*/
int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
unsigned long timeout);
/**
* gsi_trans_read_byte() - Issue a single byte read TRE on a channel
* @gsi: GSI pointer
* @channel_id: Channel on which to read a byte
* @addr: DMA address into which to transfer the one byte
*
* This is not a transaction operation at all. It's defined here because
* it needs to be done in coordination with other transaction activity.
*/
int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
/**
* gsi_trans_read_byte_done() - Clean up after a single byte read TRE
* @gsi: GSI pointer
* @channel_id: Channel on which byte was read
*
* This function needs to be called to signal that the work related
* to reading a byte initiated by gsi_trans_read_byte() is complete.
*/
void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id);
#endif /* _GSI_TRANS_H_ */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2018-2020 Linaro Ltd.
*/
#ifndef _IPA_H_
#define _IPA_H_
#include <linux/types.h>
#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/pm_wakeup.h>
#include "ipa_version.h"
#include "gsi.h"
#include "ipa_mem.h"
#include "ipa_qmi.h"
#include "ipa_endpoint.h"
#include "ipa_interrupt.h"
struct clk;
struct icc_path;
struct net_device;
struct platform_device;
struct ipa_clock;
struct ipa_smp2p;
struct ipa_interrupt;
/**
* struct ipa - IPA information
* @gsi: Embedded GSI structure
* @version: IPA hardware version
* @pdev: Platform device
* @modem_rproc: Remoteproc handle for modem subsystem
* @smp2p: SMP2P information
* @clock: IPA clocking information
* @suspend_ref: Whether a clock reference preventing suspend has been taken
* @table_addr: DMA address of filter/route table content
* @table_virt: Virtual address of filter/route table content
* @interrupt: IPA Interrupt information
* @uc_loaded: true after microcontroller has reported it's ready
* @reg_addr: DMA address used for IPA register access
* @reg_virt: Virtual address used for IPA register access
* @mem_addr: DMA address of IPA-local memory space
* @mem_virt: Virtual address of IPA-local memory space
* @mem_offset: Offset from @mem_virt used for access to IPA memory
* @mem_size: Total size (bytes) of memory at @mem_virt
* @mem: Array of IPA-local memory region descriptors
* @zero_addr: DMA address of preallocated zero-filled memory
* @zero_virt: Virtual address of preallocated zero-filled memory
* @zero_size: Size (bytes) of preallocated zero-filled memory
* @wakeup_source: Wakeup source information
* @available: Bit mask indicating endpoints hardware supports
* @filter_map: Bit mask indicating endpoints that support filtering
* @initialized: Bit mask indicating endpoints initialized
* @set_up: Bit mask indicating endpoints set up
* @enabled: Bit mask indicating endpoints enabled
* @endpoint: Array of endpoint information
* @channel_map: Mapping of GSI channel to IPA endpoint
* @name_map: Mapping of IPA endpoint name to IPA endpoint
* @setup_complete: Flag indicating whether setup stage has completed
* @modem_state: State of modem (stopped, running)
* @modem_netdev: Network device structure used for modem
* @qmi: QMI information
*/
struct ipa {
struct gsi gsi;
enum ipa_version version;
struct platform_device *pdev;
struct rproc *modem_rproc;
struct ipa_smp2p *smp2p;
struct ipa_clock *clock;
atomic_t suspend_ref;
dma_addr_t table_addr;
__le64 *table_virt;
struct ipa_interrupt *interrupt;
bool uc_loaded;
dma_addr_t reg_addr;
void __iomem *reg_virt;
dma_addr_t mem_addr;
void *mem_virt;
u32 mem_offset;
u32 mem_size;
const struct ipa_mem *mem;
dma_addr_t zero_addr;
void *zero_virt;
size_t zero_size;
struct wakeup_source *wakeup_source;
/* Bit masks indicating endpoint state */
u32 available; /* supported by hardware */
u32 filter_map;
u32 initialized;
u32 set_up;
u32 enabled;
struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
struct ipa_endpoint *channel_map[GSI_CHANNEL_COUNT_MAX];
struct ipa_endpoint *name_map[IPA_ENDPOINT_COUNT];
bool setup_complete;
atomic_t modem_state; /* enum ipa_modem_state */
struct net_device *modem_netdev;
struct ipa_qmi qmi;
};
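The endpoint state fields above (@available, @filter_map, @initialized, @set_up, @enabled) are bit masks indexed by endpoint ID. A minimal userspace sketch (illustrative names only, not driver code) of how such masks gate state transitions:

```c
#include <assert.h>
#include <stdint.h>

/* Each endpoint ID selects one bit in a 32-bit mask; an endpoint
 * advances through initialized -> set_up -> enabled by setting its
 * bit in successive masks.
 */
static uint32_t initialized;	/* endpoints that completed init */
static uint32_t set_up;		/* endpoints that completed setup */
static uint32_t enabled;	/* endpoints currently enabled */

static void endpoint_init_done(uint32_t id)  { initialized |= 1u << id; }
static void endpoint_setup_done(uint32_t id) { set_up |= 1u << id; }

/* Enable an endpoint; fails unless the earlier stages completed */
static int endpoint_enable(uint32_t id)
{
	uint32_t bit = 1u << id;

	if (!(initialized & bit) || !(set_up & bit))
		return -1;
	enabled |= bit;
	return 0;
}
```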
/**
* ipa_setup() - Perform IPA setup
* @ipa: IPA pointer
*
* IPA initialization is broken into stages: init; config; and setup.
* (These have inverses exit, deconfig, and teardown.)
*
* Activities performed at the init stage can be done without requiring
* any access to IPA hardware. Activities performed at the config stage
* require the IPA clock to be running, because they involve access
* to IPA registers. The setup stage is performed only after the GSI
* hardware is ready (more on this below). The setup stage allows
* the AP to perform more complex initialization by issuing "immediate
* commands" using a special interface to the IPA.
*
* This function, @ipa_setup(), starts the setup stage.
*
* In order for the GSI hardware to be functional it needs firmware to be
* loaded (in addition to some other low-level initialization). This early
* GSI initialization can be done either by Trust Zone on the AP or by the
* modem.
*
* If it's done by Trust Zone, the AP loads the GSI firmware and supplies
* it to Trust Zone to verify and install. When this completes, if
* verification was successful, the GSI layer is ready and ipa_setup()
* implements the setup phase of initialization.
*
* If the modem performs early GSI initialization, the AP needs to know
* when this has occurred. An SMP2P interrupt is used for this purpose,
* and receipt of that interrupt triggers the call to ipa_setup().
*/
int ipa_setup(struct ipa *ipa);
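The staged init/config/setup pattern described above, with each stage having an inverse and errors unwinding only the stages already completed, can be sketched in plain C (the stage functions here are hypothetical stand-ins, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

static bool init_done, config_done, setup_done;

static int stage_init(void)      { init_done = true;    return 0; }
static void stage_exit(void)     { init_done = false; }
static int stage_config(void)    { config_done = true;  return 0; }
static void stage_deconfig(void) { config_done = false; }
static int stage_setup(void)     { setup_done = true;   return 0; }
static void stage_teardown(void) { setup_done = false; }

/* Bring the device up, unwinding completed stages on failure */
static int bringup(void)
{
	int ret;

	ret = stage_init();
	if (ret)
		return ret;
	ret = stage_config();
	if (ret)
		goto err_exit;
	ret = stage_setup();
	if (ret)
		goto err_deconfig;
	return 0;

err_deconfig:
	stage_deconfig();
err_exit:
	stage_exit();
	return ret;
}

/* Inverse of bringup(): undo the stages in reverse order */
static void shutdown_stages(void)
{
	stage_teardown();
	stage_deconfig();
	stage_exit();
}
```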
#endif /* _IPA_H_ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2018-2020 Linaro Ltd.
*/
#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/interconnect.h>
#include "ipa.h"
#include "ipa_clock.h"
#include "ipa_modem.h"
/**
* DOC: IPA Clocking
*
* The "IPA Clock" manages both the IPA core clock and the interconnects
* (buses) the IPA depends on as a single logical entity. A reference count
* is incremented by "get" operations and decremented by "put" operations.
* Transitions of that count from 0 to 1 result in the clock and interconnects
* being enabled, and transitions of the count from 1 to 0 cause them to be
* disabled. We currently operate the core clock at a fixed clock rate, and
* all buses at a fixed average and peak bandwidth. As more advanced IPA
* features are enabled, we can make better use of clock and bus scaling.
*
* An IPA clock reference must be held for any access to IPA hardware.
*/
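The reference-count discipline described above can be reduced to a single-threaded sketch: the 0 to 1 transition enables the hardware, and the 1 to 0 transition disables it. The names below are illustrative only; the real driver also serializes the slow path with a mutex.

```c
#include <assert.h>

static int clock_count;
static int hw_powered;		/* stands in for clock + interconnects */

static void clock_get(void)
{
	if (clock_count++ == 0)
		hw_powered = 1;	/* first reference: enable hardware */
}

static void clock_put(void)
{
	assert(clock_count > 0);
	if (--clock_count == 0)
		hw_powered = 0;	/* last reference: disable hardware */
}
```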
#define IPA_CORE_CLOCK_RATE (75UL * 1000 * 1000) /* Hz */
/* Interconnect path bandwidths (each times 1000 bytes per second) */
#define IPA_MEMORY_AVG (80 * 1000) /* 80 MBps */
#define IPA_MEMORY_PEAK (600 * 1000)
#define IPA_IMEM_AVG (80 * 1000)
#define IPA_IMEM_PEAK (350 * 1000)
#define IPA_CONFIG_AVG (40 * 1000)
#define IPA_CONFIG_PEAK (40 * 1000)
/**
* struct ipa_clock - IPA clocking information
* @count: Clocking reference count
* @mutex: Protects clock enable/disable
* @core: IPA core clock
* @memory_path: Memory interconnect
* @imem_path: Internal memory interconnect
* @config_path: Configuration space interconnect
*/
struct ipa_clock {
atomic_t count;
struct mutex mutex; /* protects clock enable/disable */
struct clk *core;
struct icc_path *memory_path;
struct icc_path *imem_path;
struct icc_path *config_path;
};
static struct icc_path *
ipa_interconnect_init_one(struct device *dev, const char *name)
{
struct icc_path *path;
path = of_icc_get(dev, name);
if (IS_ERR(path))
dev_err(dev, "error %ld getting %s interconnect\n",
PTR_ERR(path), name);
return path;
}
/* Initialize interconnects required for IPA operation */
static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev)
{
struct icc_path *path;
path = ipa_interconnect_init_one(dev, "memory");
if (IS_ERR(path))
goto err_return;
clock->memory_path = path;
path = ipa_interconnect_init_one(dev, "imem");
if (IS_ERR(path))
goto err_memory_path_put;
clock->imem_path = path;
path = ipa_interconnect_init_one(dev, "config");
if (IS_ERR(path))
goto err_imem_path_put;
clock->config_path = path;
return 0;
err_imem_path_put:
icc_put(clock->imem_path);
err_memory_path_put:
icc_put(clock->memory_path);
err_return:
return PTR_ERR(path);
}
/* Inverse of ipa_interconnect_init() */
static void ipa_interconnect_exit(struct ipa_clock *clock)
{
icc_put(clock->config_path);
icc_put(clock->imem_path);
icc_put(clock->memory_path);
}
/* Currently we only use one bandwidth level, so just "enable" interconnects */
static int ipa_interconnect_enable(struct ipa *ipa)
{
struct ipa_clock *clock = ipa->clock;
int ret;
ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
if (ret)
return ret;
ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
if (ret)
goto err_memory_path_disable;
ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK);
if (ret)
goto err_imem_path_disable;
return 0;
err_imem_path_disable:
(void)icc_set_bw(clock->imem_path, 0, 0);
err_memory_path_disable:
(void)icc_set_bw(clock->memory_path, 0, 0);
return ret;
}
/* To disable an interconnect, we just set its bandwidth to 0 */
static int ipa_interconnect_disable(struct ipa *ipa)
{
struct ipa_clock *clock = ipa->clock;
int ret;
ret = icc_set_bw(clock->memory_path, 0, 0);
if (ret)
return ret;
ret = icc_set_bw(clock->imem_path, 0, 0);
if (ret)
goto err_memory_path_reenable;
ret = icc_set_bw(clock->config_path, 0, 0);
if (ret)
goto err_imem_path_reenable;
return 0;
err_imem_path_reenable:
(void)icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
err_memory_path_reenable:
(void)icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
return ret;
}
/* Turn on IPA clocks, including interconnects */
static int ipa_clock_enable(struct ipa *ipa)
{
int ret;
ret = ipa_interconnect_enable(ipa);
if (ret)
return ret;
ret = clk_prepare_enable(ipa->clock->core);
if (ret)
ipa_interconnect_disable(ipa);
return ret;
}
/* Inverse of ipa_clock_enable() */
static void ipa_clock_disable(struct ipa *ipa)
{
clk_disable_unprepare(ipa->clock->core);
(void)ipa_interconnect_disable(ipa);
}
/* Get an IPA clock reference, but only if the reference count is
* already non-zero. Returns true if the additional reference was
* added successfully, or false otherwise.
*/
bool ipa_clock_get_additional(struct ipa *ipa)
{
return !!atomic_inc_not_zero(&ipa->clock->count);
}
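The semantics of the kernel's atomic_inc_not_zero() that this fast path relies on can be sketched with C11 atomics: take a reference only if the count is already non-zero, leaving the 0 to 1 transition to the mutex-guarded slow path.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative userspace model of atomic_inc_not_zero() */
static bool inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old != 0) {
		/* On failure, old is reloaded with the current value */
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return true;	/* reference taken */
	}
	return false;	/* count was zero; caller must use the slow path */
}
```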
/* Get an IPA clock reference. If the reference count is non-zero, it is
* incremented and return is immediate. Otherwise it is checked again
* under protection of the mutex, and if appropriate the clock (and
* interconnects) are enabled and suspended endpoints (if any) are resumed
* before returning.
*
* Incrementing the reference count is intentionally deferred until
* after the clock is running and endpoints are resumed.
*/
void ipa_clock_get(struct ipa *ipa)
{
struct ipa_clock *clock = ipa->clock;
int ret;
/* If the clock is running, just bump the reference count */
if (ipa_clock_get_additional(ipa))
return;
/* Otherwise get the mutex and check again */
mutex_lock(&clock->mutex);
/* A reference might have been added before we got the mutex. */
if (ipa_clock_get_additional(ipa))
goto out_mutex_unlock;
ret = ipa_clock_enable(ipa);
if (ret) {
dev_err(&ipa->pdev->dev, "error %d enabling IPA clock\n", ret);
goto out_mutex_unlock;
}
ipa_endpoint_resume(ipa);
atomic_inc(&clock->count);
out_mutex_unlock:
mutex_unlock(&clock->mutex);
}
/* Attempt to remove an IPA clock reference. If this represents the last
* reference, suspend endpoints and disable the clock (and interconnects)
* under protection of a mutex.
*/
void ipa_clock_put(struct ipa *ipa)
{
struct ipa_clock *clock = ipa->clock;
/* If this is not the last reference there's nothing more to do */
if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex))
return;
ipa_endpoint_suspend(ipa);
ipa_clock_disable(ipa);
mutex_unlock(&clock->mutex);
}
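The atomic_dec_and_mutex_lock() pattern used above can be sketched single-threaded: drop a reference, and only when it was the last one return true with the "lock" held so the caller can do the teardown. The lock here is a plain flag standing in for the mutex; this is a model, not the kernel helper.

```c
#include <assert.h>
#include <stdbool.h>

static bool lock_held;

static bool dec_and_lock(int *count)
{
	if (*count > 1) {	/* fast path: not the last reference */
		--*count;
		return false;
	}
	lock_held = true;	/* possibly last: "take the lock" first */
	if (--*count == 0)
		return true;	/* last reference; lock stays held */
	lock_held = false;
	return false;
}
```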
/* Initialize IPA clocking */
struct ipa_clock *ipa_clock_init(struct device *dev)
{
struct ipa_clock *clock;
struct clk *clk;
int ret;
clk = clk_get(dev, "core");
if (IS_ERR(clk)) {
dev_err(dev, "error %ld getting core clock\n", PTR_ERR(clk));
return ERR_CAST(clk);
}
ret = clk_set_rate(clk, IPA_CORE_CLOCK_RATE);
if (ret) {
dev_err(dev, "error %d setting core clock rate to %lu\n",
ret, IPA_CORE_CLOCK_RATE);
goto err_clk_put;
}
clock = kzalloc(sizeof(*clock), GFP_KERNEL);
if (!clock) {
ret = -ENOMEM;
goto err_clk_put;
}
clock->core = clk;
ret = ipa_interconnect_init(clock, dev);
if (ret)
goto err_kfree;
mutex_init(&clock->mutex);
atomic_set(&clock->count, 0);
return clock;
err_kfree:
kfree(clock);
err_clk_put:
clk_put(clk);
return ERR_PTR(ret);
}
/* Inverse of ipa_clock_init() */
void ipa_clock_exit(struct ipa_clock *clock)
{
struct clk *clk = clock->core;
WARN_ON(atomic_read(&clock->count) != 0);
mutex_destroy(&clock->mutex);
ipa_interconnect_exit(clock);
kfree(clock);
clk_put(clk);
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2018-2020 Linaro Ltd.
*/
#ifndef _IPA_CLOCK_H_
#define _IPA_CLOCK_H_
struct device;
struct ipa;
/**
* ipa_clock_init() - Initialize IPA clocking
* @dev: IPA device
*
* @Return: A pointer to an ipa_clock structure, or a pointer-coded error
*/
struct ipa_clock *ipa_clock_init(struct device *dev);
/**
* ipa_clock_exit() - Inverse of ipa_clock_init()
* @clock: IPA clock pointer
*/
void ipa_clock_exit(struct ipa_clock *clock);
/**
* ipa_clock_get() - Get an IPA clock reference
* @ipa: IPA pointer
*
* This call blocks if this is the first reference.
*/
void ipa_clock_get(struct ipa *ipa);
/**
* ipa_clock_get_additional() - Get an IPA clock reference if not first
* @ipa: IPA pointer
*
* This returns immediately, taking a reference only if the count is
* already non-zero.
*/
bool ipa_clock_get_additional(struct ipa *ipa);
/**
* ipa_clock_put() - Drop an IPA clock reference
* @ipa: IPA pointer
*
* This drops a clock reference. If the last reference is being dropped,
* the clock is stopped and RX endpoints are suspended. This call will
* not block unless the last reference is dropped.
*/
void ipa_clock_put(struct ipa *ipa);
#endif /* _IPA_CLOCK_H_ */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2019-2020 Linaro Ltd.
*/
#ifndef _IPA_CMD_H_
#define _IPA_CMD_H_
#include <linux/types.h>
#include <linux/dma-direction.h>
struct sk_buff;
struct scatterlist;
struct ipa;
struct ipa_mem;
struct gsi_trans;
struct gsi_channel;
/**
* enum ipa_cmd_opcode: IPA immediate commands
*
* All immediate commands are issued using the AP command TX endpoint.
* The numeric values here are the opcodes for IPA v3.5.1 hardware.
*
* IPA_CMD_NONE is a special (invalid) value that's used to indicate
* a request is *not* an immediate command.
*/
enum ipa_cmd_opcode {
IPA_CMD_NONE = 0,
IPA_CMD_IP_V4_FILTER_INIT = 3,
IPA_CMD_IP_V6_FILTER_INIT = 4,
IPA_CMD_IP_V4_ROUTING_INIT = 7,
IPA_CMD_IP_V6_ROUTING_INIT = 8,
IPA_CMD_HDR_INIT_LOCAL = 9,
IPA_CMD_REGISTER_WRITE = 12,
IPA_CMD_IP_PACKET_INIT = 16,
IPA_CMD_DMA_TASK_32B_ADDR = 17,
IPA_CMD_DMA_SHARED_MEM = 19,
IPA_CMD_IP_PACKET_TAG_STATUS = 20,
};
/**
* struct ipa_cmd_info - information needed for an IPA immediate command
*
* @opcode: The command opcode.
* @direction: Direction of data transfer for DMA commands
*/
struct ipa_cmd_info {
enum ipa_cmd_opcode opcode;
enum dma_data_direction direction;
};
#ifdef IPA_VALIDATE
/**
* ipa_cmd_table_valid() - Validate a memory region holding a table
* @ipa: - IPA pointer
* @mem: - IPA memory region descriptor
* @route: - Whether the region holds a route or filter table
* @ipv6: - Whether the table is for IPv6 or IPv4
* @hashed: - Whether the table is hashed or non-hashed
*
* @Return: true if region is valid, false otherwise
*/
bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
bool route, bool ipv6, bool hashed);
/**
* ipa_cmd_data_valid() - Validate that command-related configuration is valid
* @ipa: - IPA pointer
*
* @Return: true if assumptions required for command are valid
*/
bool ipa_cmd_data_valid(struct ipa *ipa);
#else /* !IPA_VALIDATE */
static inline bool ipa_cmd_table_valid(struct ipa *ipa,
const struct ipa_mem *mem, bool route,
bool ipv6, bool hashed)
{
return true;
}
static inline bool ipa_cmd_data_valid(struct ipa *ipa)
{
return true;
}
#endif /* !IPA_VALIDATE */
/**
* ipa_cmd_pool_init() - initialize command channel pools
* @channel: AP->IPA command TX GSI channel pointer
* @tre_count: Number of pool elements to allocate
*
* @Return: 0 if successful, or a negative error code
*/
int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_count);
/**
* ipa_cmd_pool_exit() - Inverse of ipa_cmd_pool_init()
* @channel: AP->IPA command TX GSI channel pointer
*/
void ipa_cmd_pool_exit(struct gsi_channel *channel);
/**
* ipa_cmd_table_init_add() - Add table init command to a transaction
* @trans: GSI transaction
* @opcode: IPA immediate command opcode
* @size: Size of non-hashed routing table memory
* @offset: Offset in IPA shared memory of non-hashed routing table memory
* @addr: DMA address of non-hashed table data to write
* @hash_size: Size of hashed routing table memory
* @hash_offset: Offset in IPA shared memory of hashed routing table memory
* @hash_addr: DMA address of hashed table data to write
*
* If hash_size is 0, hash_offset and hash_addr are ignored.
*/
void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode,
u16 size, u32 offset, dma_addr_t addr,
u16 hash_size, u32 hash_offset,
dma_addr_t hash_addr);
/**
* ipa_cmd_hdr_init_local_add() - Add a header init command to a transaction
* @trans: GSI transaction
* @offset: Offset of header memory in IPA local space
* @size: Size of header memory
* @addr: DMA address of buffer to be written from
*
* Defines and fills the location in IPA memory to use for headers.
*/
void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
dma_addr_t addr);
/**
* ipa_cmd_register_write_add() - Add a register write command to a transaction
* @trans: GSI transaction
* @offset: Offset of register to be written
* @value: Value to be written
* @mask: Mask of bits in register to update with bits from value
* @clear_full: Pipeline clear option; true means full pipeline clear
*/
void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
u32 mask, bool clear_full);
/**
* ipa_cmd_dma_task_32b_addr_add() - Add a 32-bit DMA command to a transaction
* @trans: GSI transaction
* @size: Number of bytes of memory to be transferred
* @addr: DMA address of buffer to be read into or written from
* @toward_ipa: true means write to IPA memory; false means read
*/
void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size,
dma_addr_t addr, bool toward_ipa);
/**
* ipa_cmd_dma_shared_mem_add() - Add a DMA memory command to a transaction
* @trans: GSI transaction
* @offset: Offset of IPA memory to be read or written
* @size: Number of bytes of memory to be transferred
* @addr: DMA address of buffer to be read into or written from
* @toward_ipa: true means write to IPA memory; false means read
*/
void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset,
u16 size, dma_addr_t addr, bool toward_ipa);
/**
* ipa_cmd_tag_process_add() - Add IPA tag process commands to a transaction
* @trans: GSI transaction
*/
void ipa_cmd_tag_process_add(struct gsi_trans *trans);
/**
* ipa_cmd_tag_process_count() - Number of commands in a tag process
*
* @Return: The number of elements to allocate in a transaction
* to hold tag process commands
*/
u32 ipa_cmd_tag_process_count(void);
/**
* ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint
* @ipa: IPA pointer
* @tre_count: Number of elements in the transaction
*
* @Return: A GSI transaction structure, or a null pointer if all
* available transactions are in use
*/
struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
#endif /* _IPA_CMD_H_ */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2019-2020 Linaro Ltd.
*/
#ifndef _IPA_ENDPOINT_H_
#define _IPA_ENDPOINT_H_
#include <linux/types.h>
#include <linux/workqueue.h>
#include <linux/if_ether.h>
#include "gsi.h"
#include "ipa_reg.h"
struct net_device;
struct sk_buff;
struct ipa;
struct ipa_gsi_endpoint_data;
/* Non-zero granularity of counter used to implement aggregation timeout */
#define IPA_AGGR_GRANULARITY 500 /* microseconds */
#define IPA_MTU ETH_DATA_LEN
enum ipa_endpoint_name {
IPA_ENDPOINT_AP_MODEM_TX = 0,
IPA_ENDPOINT_MODEM_LAN_TX,
IPA_ENDPOINT_MODEM_COMMAND_TX,
IPA_ENDPOINT_AP_COMMAND_TX,
IPA_ENDPOINT_MODEM_AP_TX,
IPA_ENDPOINT_AP_LAN_RX,
IPA_ENDPOINT_AP_MODEM_RX,
IPA_ENDPOINT_MODEM_AP_RX,
IPA_ENDPOINT_MODEM_LAN_RX,
IPA_ENDPOINT_COUNT, /* Number of names (not an index) */
};
#define IPA_ENDPOINT_MAX 32 /* Max supported by driver */
/**
* struct ipa_endpoint - IPA endpoint information
* @ipa: IPA pointer
* @seq_type: Sequencer type used by the endpoint
* @ee_id: Execution environment the endpoint is associated with
* @channel_id: Endpoint's GSI channel
* @endpoint_id: IPA endpoint number
* @toward_ipa: True for TX endpoints, false for RX endpoints
* @data: Endpoint configuration data
* @trans_tre_max: Maximum number of TRE descriptors per transaction
* @evt_ring_id: GSI event ring used by the endpoint's channel
*/
struct ipa_endpoint {
struct ipa *ipa;
enum ipa_seq_type seq_type;
enum gsi_ee_id ee_id;
u32 channel_id;
u32 endpoint_id;
bool toward_ipa;
const struct ipa_endpoint_config_data *data;
u32 trans_tre_max; /* maximum descriptors per transaction */
u32 evt_ring_id;
/* Net device this endpoint is associated with, if any */
struct net_device *netdev;
/* Receive buffer replenishing for RX endpoints */
bool replenish_enabled;
u32 replenish_ready;
atomic_t replenish_saved;
atomic_t replenish_backlog;
struct delayed_work replenish_work; /* global wq */
};
void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa);
void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable);
int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa);
int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb);
int ipa_endpoint_stop(struct ipa_endpoint *endpoint);
void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint);
int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint);
void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint);
void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint);
void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint);
void ipa_endpoint_suspend(struct ipa *ipa);
void ipa_endpoint_resume(struct ipa *ipa);
void ipa_endpoint_setup(struct ipa *ipa);
void ipa_endpoint_teardown(struct ipa *ipa);
int ipa_endpoint_config(struct ipa *ipa);
void ipa_endpoint_deconfig(struct ipa *ipa);
void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id);
void ipa_endpoint_default_route_clear(struct ipa *ipa);
u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
const struct ipa_gsi_endpoint_data *data);
void ipa_endpoint_exit(struct ipa *ipa);
void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
struct gsi_trans *trans);
void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
struct gsi_trans *trans);
#endif /* _IPA_ENDPOINT_H_ */
@@ -21,6 +21,7 @@ obj-$(CONFIG_QCOM_Q6V5_ADSP) += qcom_q6v5_adsp.o
 obj-$(CONFIG_QCOM_Q6V5_MSS) += qcom_q6v5_mss.o
 obj-$(CONFIG_QCOM_Q6V5_PAS) += qcom_q6v5_pas.o
 obj-$(CONFIG_QCOM_Q6V5_WCSS) += qcom_q6v5_wcss.o
+obj-$(CONFIG_QCOM_Q6V5_IPA_NOTIFY) += qcom_q6v5_ipa_notify.o
 obj-$(CONFIG_QCOM_SYSMON) += qcom_sysmon.o
 obj-$(CONFIG_QCOM_WCNSS_PIL) += qcom_wcnss_pil.o
 qcom_wcnss_pil-y += qcom_wcnss.o