Commit c6e59bda authored by Kevin Hilman

Merge tag 'qcom-soc-for-4.3-rc2' of git://codeaurora.org/quic/kernel/agross-msm into next/late

Qualcomm ARM Based SoC Updates for 4.3-rc2

* Fix errant private access in SMEM
* Use correct remote processor ID in SMD transactions
* Correct SMD fBLOCKREADINTR handling

* tag 'qcom-soc-for-4.3-rc2' of git://codeaurora.org/quic/kernel/agross-msm:
  soc: qcom: smd: Correct fBLOCKREADINTR handling
  soc: qcom: smd: Use correct remote processor ID
  soc: qcom: smem: Fix errant private access
  devicetree: soc: Add Qualcomm SMD based RPM DT binding
  soc: qcom: Driver for the Qualcomm RPM over SMD
  soc: qcom: Add Shared Memory Driver
  soc: qcom: Add device tree binding for Shared Memory Device
  drivers: qcom: Select QCOM_SCM unconditionally for QCOM_PM
  soc: qcom: Add Shared Memory Manager driver
parents 312146b5 208487a8
Qualcomm Resource Power Manager (RPM) over SMD
This driver is used to interface with the Resource Power Manager (RPM) found in
various Qualcomm platforms. The RPM allows each component in the system to vote
for the state of system resources, such as clocks, regulators and bus
frequencies.
- compatible:
Usage: required
Value type: <string>
Definition: must be one of:
"qcom,rpm-msm8974"
- qcom,smd-channels:
Usage: required
Value type: <stringlist>
Definition: Shared Memory channel used for communication with the RPM
= SUBDEVICES
The RPM exposes resources to its subnodes. The bindings below specify the set
of valid subnodes that can operate on these resources.
== Regulators
Regulator nodes are identified by their compatible:
- compatible:
Usage: required
Value type: <string>
Definition: must be one of:
"qcom,rpm-pm8841-regulators"
"qcom,rpm-pm8941-regulators"
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:
- vdd_s4-supply:
- vdd_s5-supply:
- vdd_s6-supply:
- vdd_s7-supply:
- vdd_s8-supply:
Usage: optional (pm8841 only)
Value type: <phandle>
Definition: reference to regulator supplying the input pin, as
described in the data sheet
- vdd_s1-supply:
- vdd_s2-supply:
- vdd_s3-supply:
- vdd_l1_l3-supply:
- vdd_l2_lvs1_2_3-supply:
- vdd_l4_l11-supply:
- vdd_l5_l7-supply:
- vdd_l6_l12_l14_l15-supply:
- vdd_l8_l16_l18_l19-supply:
- vdd_l9_l10_l17_l22-supply:
- vdd_l13_l20_l23_l24-supply:
- vdd_l21-supply:
- vin_5vs-supply:
Usage: optional (pm8941 only)
Value type: <phandle>
Definition: reference to regulator supplying the input pin, as
described in the data sheet
The regulator node houses sub-nodes for each regulator within the device. Each
sub-node is identified using the node's name, with valid values listed for each
of the pmics below.
pm8841:
s1, s2, s3, s4, s5, s6, s7, s8
pm8941:
s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13,
l14, l15, l16, l17, l18, l19, l20, l21, l22, l23, l24, lvs1, lvs2,
lvs3, 5vs1, 5vs2
The content of each sub-node is defined by the standard binding for regulators -
see regulator.txt.
= EXAMPLE
smd {
compatible = "qcom,smd";
rpm {
interrupts = <0 168 1>;
qcom,ipc = <&apcs 8 0>;
qcom,smd-edge = <15>;
rpm_requests {
compatible = "qcom,rpm-msm8974";
qcom,smd-channels = "rpm_requests";
pm8941-regulators {
compatible = "qcom,rpm-pm8941-regulators";
vdd_l13_l20_l23_l24-supply = <&pm8941_boost>;
pm8941_s3: s3 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
pm8941_boost: s4 {
regulator-min-microvolt = <5000000>;
regulator-max-microvolt = <5000000>;
};
pm8941_l20: l20 {
regulator-min-microvolt = <2950000>;
regulator-max-microvolt = <2950000>;
};
};
};
};
};
Qualcomm Shared Memory Driver (SMD) binding
This binding describes the Qualcomm Shared Memory Driver, a FIFO-based
communication channel for sending data between the various subsystems in
Qualcomm platforms.
- compatible:
Usage: required
Value type: <stringlist>
Definition: must be "qcom,smd"
= EDGES
Each subnode of the SMD node represents a remote subsystem or a remote
processor of some sort - or, in SMD terms, an "edge". The names of the edges
are not important.
The edge is described by the following properties:
- interrupts:
Usage: required
Value type: <prop-encoded-array>
Definition: should specify the IRQ used by the remote processor to
signal this processor about communication related updates
- qcom,ipc:
Usage: required
Value type: <prop-encoded-array>
Definition: three entries specifying the outgoing ipc bit used for
signaling the remote processor:
- phandle to a syscon node representing the apcs registers
- u32 representing offset to the register within the syscon
- u32 representing the ipc bit within the register
- qcom,smd-edge:
Usage: required
Value type: <u32>
Definition: the identifier of the remote processor in the smd channel
allocation table
- qcom,remote-pid:
Usage: optional
Value type: <u32>
Definition: the identifier for the remote processor as known by the rest
of the system.
= SMD DEVICES
In turn, subnodes of the "edges" represent devices tied to SMD channels on that
"edge". The names of the devices are not important. The properties of these
nodes are defined by the individual bindings for the SMD devices - but must
contain the following property:
- qcom,smd-channels:
Usage: required
Value type: <stringlist>
Definition: a list of channels tied to this device, used for matching
the device to channels
= EXAMPLE
The following example represents an smd node, with one edge representing the
"rpm" subsystem. For the "rpm" subsystem we have a device tied to the
"rpm_requests" channel.
apcs: syscon@f9011000 {
compatible = "syscon";
reg = <0xf9011000 0x1000>;
};
smd {
compatible = "qcom,smd";
rpm {
interrupts = <0 168 1>;
qcom,ipc = <&apcs 8 0>;
qcom,smd-edge = <15>;
rpm_requests {
compatible = "qcom,rpm-msm8974";
qcom,smd-channels = "rpm_requests";
...
};
};
};
@@ -13,7 +13,38 @@ config QCOM_GSBI
config QCOM_PM
bool "Qualcomm Power Management"
depends on ARCH_QCOM && !ARM64
select QCOM_SCM
help
QCOM Platform specific power driver to manage cores and L2 low power
modes. It interfaces with various system drivers to put the cores in
low power modes.
config QCOM_SMD
tristate "Qualcomm Shared Memory Driver (SMD)"
depends on QCOM_SMEM
help
Say y here to enable support for the Qualcomm Shared Memory Driver
providing communication channels to remote processors in Qualcomm
platforms.
config QCOM_SMD_RPM
tristate "Qualcomm Resource Power Manager (RPM) over SMD"
depends on QCOM_SMD && OF
help
If you say yes to this option, support will be included for the
Resource Power Manager system found in the Qualcomm 8974 based
devices.
This is required to access many regulators, clocks and bus
frequencies controlled by the RPM on these devices.
Say M here if you want to include support for the Qualcomm RPM as a
module. This will build a module called "qcom-smd-rpm".
config QCOM_SMEM
tristate "Qualcomm Shared Memory Manager (SMEM)"
depends on ARCH_QCOM
help
Say y here to enable support for the Qualcomm Shared Memory Manager.
The driver provides an interface to items in a heap shared among all
processors in a Qualcomm platform.
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_SMD) += smd.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
/*
* Copyright (c) 2015, Sony Mobile Communications AB.
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of_platform.h>
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/soc/qcom/smd.h>
#include <linux/soc/qcom/smd-rpm.h>
#define RPM_REQUEST_TIMEOUT (5 * HZ)
/**
* struct qcom_smd_rpm - state of the rpm device driver
* @rpm_channel: reference to the smd channel
* @ack: completion for acks
* @lock: mutual exclusion around the send/complete pair
* @ack_status: result of the rpm request
*/
struct qcom_smd_rpm {
struct qcom_smd_channel *rpm_channel;
struct completion ack;
struct mutex lock;
int ack_status;
};
/**
* struct qcom_rpm_header - header for all rpm requests and responses
* @service_type: identifier of the service
* @length: length of the payload
*/
struct qcom_rpm_header {
u32 service_type;
u32 length;
};
/**
* struct qcom_rpm_request - request message to the rpm
* @msg_id: identifier of the outgoing message
* @flags: active/sleep state flags
* @type: resource type
* @id: resource id
* @data_len: length of the payload following this header
*/
struct qcom_rpm_request {
u32 msg_id;
u32 flags;
u32 type;
u32 id;
u32 data_len;
};
/**
* struct qcom_rpm_message - response message from the rpm
* @msg_type: indicator of the type of message
* @length: the size of this message, including the message header
* @msg_id: message id
* @message: textual message from the rpm
*
* Multiple of these messages can be stacked in an rpm message.
*/
struct qcom_rpm_message {
u32 msg_type;
u32 length;
union {
u32 msg_id;
u8 message[0];
};
};
#define RPM_SERVICE_TYPE_REQUEST 0x00716572 /* "req\0" */
#define RPM_MSG_TYPE_ERR 0x00727265 /* "err\0" */
#define RPM_MSG_TYPE_MSG_ID 0x2367736d /* "msg#" */
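/*
* These magics are simply the ASCII strings packed into little-endian u32s;
* for example "req\0" is 'r' = 0x72, 'e' = 0x65, 'q' = 0x71, '\0' = 0x00,
* read back as the little-endian word 0x00716572.
*/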
/**
* qcom_rpm_smd_write - write @buf to @type:@id
* @rpm: rpm handle
* @state: active/sleep state the request applies to
* @type: resource type
* @id: resource identifier
* @buf: the data to be written
* @count: number of bytes in @buf
*/
int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm,
int state,
u32 type, u32 id,
void *buf,
size_t count)
{
static unsigned msg_id = 1;
int left;
int ret;
struct {
struct qcom_rpm_header hdr;
struct qcom_rpm_request req;
u8 payload[count];
} pkt;
/* SMD packets to the RPM may not exceed 256 bytes */
if (WARN_ON(sizeof(pkt) >= 256))
return -EINVAL;
mutex_lock(&rpm->lock);
pkt.hdr.service_type = RPM_SERVICE_TYPE_REQUEST;
pkt.hdr.length = sizeof(struct qcom_rpm_request) + count;
pkt.req.msg_id = msg_id++;
pkt.req.flags = BIT(state);
pkt.req.type = type;
pkt.req.id = id;
pkt.req.data_len = count;
memcpy(pkt.payload, buf, count);
ret = qcom_smd_send(rpm->rpm_channel, &pkt, sizeof(pkt));
if (ret)
goto out;
left = wait_for_completion_timeout(&rpm->ack, RPM_REQUEST_TIMEOUT);
if (!left)
ret = -ETIMEDOUT;
else
ret = rpm->ack_status;
out:
mutex_unlock(&rpm->lock);
return ret;
}
EXPORT_SYMBOL(qcom_rpm_smd_write);
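/*
* Illustrative sketch (not part of this file): a client holding a handle to
* the rpm device, such as a regulator driver, would vote by packing a
* key/value payload and writing it to the resource - the names here are
* hypothetical:
*
*	struct { u32 key; u32 value; } kv = { .key = key, .value = value };
*	ret = qcom_rpm_smd_write(rpm, state, type, id, &kv, sizeof(kv));
*/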
static int qcom_smd_rpm_callback(struct qcom_smd_device *qsdev,
const void *data,
size_t count)
{
const struct qcom_rpm_header *hdr = data;
const struct qcom_rpm_message *msg;
struct qcom_smd_rpm *rpm = dev_get_drvdata(&qsdev->dev);
const u8 *buf = data + sizeof(struct qcom_rpm_header);
const u8 *end = buf + hdr->length;
char msgbuf[32];
int status = 0;
u32 len;
if (hdr->service_type != RPM_SERVICE_TYPE_REQUEST ||
hdr->length < sizeof(struct qcom_rpm_message)) {
dev_err(&qsdev->dev, "invalid request\n");
return 0;
}
while (buf < end) {
msg = (struct qcom_rpm_message *)buf;
switch (msg->msg_type) {
case RPM_MSG_TYPE_MSG_ID:
break;
case RPM_MSG_TYPE_ERR:
len = min_t(u32, ALIGN(msg->length, 4), sizeof(msgbuf));
memcpy_fromio(msgbuf, msg->message, len);
msgbuf[len - 1] = 0;
if (!strcmp(msgbuf, "resource does not exist"))
status = -ENXIO;
else
status = -EINVAL;
break;
}
buf = PTR_ALIGN(buf + 2 * sizeof(u32) + msg->length, 4);
}
rpm->ack_status = status;
complete(&rpm->ack);
return 0;
}
static int qcom_smd_rpm_probe(struct qcom_smd_device *sdev)
{
struct qcom_smd_rpm *rpm;
rpm = devm_kzalloc(&sdev->dev, sizeof(*rpm), GFP_KERNEL);
if (!rpm)
return -ENOMEM;
mutex_init(&rpm->lock);
init_completion(&rpm->ack);
rpm->rpm_channel = sdev->channel;
dev_set_drvdata(&sdev->dev, rpm);
return of_platform_populate(sdev->dev.of_node, NULL, NULL, &sdev->dev);
}
static void qcom_smd_rpm_remove(struct qcom_smd_device *sdev)
{
of_platform_depopulate(&sdev->dev);
}
static const struct of_device_id qcom_smd_rpm_of_match[] = {
{ .compatible = "qcom,rpm-msm8974" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_smd_rpm_of_match);
static struct qcom_smd_driver qcom_smd_rpm_driver = {
.probe = qcom_smd_rpm_probe,
.remove = qcom_smd_rpm_remove,
.callback = qcom_smd_rpm_callback,
.driver = {
.name = "qcom_smd_rpm",
.owner = THIS_MODULE,
.of_match_table = qcom_smd_rpm_of_match,
},
};
static int __init qcom_smd_rpm_init(void)
{
return qcom_smd_driver_register(&qcom_smd_rpm_driver);
}
arch_initcall(qcom_smd_rpm_init);
static void __exit qcom_smd_rpm_exit(void)
{
qcom_smd_driver_unregister(&qcom_smd_rpm_driver);
}
module_exit(qcom_smd_rpm_exit);
MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm SMD backed RPM driver");
MODULE_LICENSE("GPL v2");
/*
* Copyright (c) 2015, Sony Mobile Communications AB.
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smd.h>
#include <linux/soc/qcom/smem.h>
#include <linux/wait.h>
/*
* The Qualcomm Shared Memory communication solution provides point-to-point
* channels for clients to send and receive streaming or packet based data.
*
* Each channel consists of a control item (channel info) and a ring buffer
* pair. The channel info carries information related to channel state, flow
* control and the offsets within the ring buffer.
*
* All allocated channels are listed in an allocation table, identifying the
* pair of items by name, type and remote processor.
*
* Upon creating a new channel the remote processor allocates channel info and
* ring buffer items from the smem heap and populates the allocation table. An
* interrupt is then sent to the other end of the channel, which should trigger
* a scan for new channels. A channel never goes away; it only changes state.
*
* The remote processor signals its intent to bring up the communication
* channel by setting the state of its end of the channel to "opening" and
* sending out an interrupt. We detect this change and register an smd device to
* consume the channel. Upon finding a consumer we finish the handshake and the
* channel is up.
*
* Upon closing a channel, the remote processor will update the state of its
* end of the channel and signal us, we will then unregister any attached
* device and close our end of the channel.
*
* Devices attached to a channel can use the qcom_smd_send function to push
* data to the channel, this is done by copying the data into the tx ring
* buffer, updating the pointers in the channel info and signaling the remote
* processor.
*
* The remote processor does the equivalent when it transfers data; upon
* receiving the interrupt we check the channel info for new data and deliver
* it to the attached device. If the device is not ready to receive the data
* we leave it in the ring buffer for now.
*/
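/*
* For illustration, a minimal client (compare the smd-rpm driver above) is
* assumed to look roughly like this - the "my_*" names are hypothetical:
*
*	static int my_probe(struct qcom_smd_device *sdev)
*	{
*		// sdev->channel is the opened channel, usable with qcom_smd_send()
*		return 0;
*	}
*
*	static struct qcom_smd_driver my_driver = {
*		.probe = my_probe,
*		.callback = my_callback,	// called for each received packet
*		.driver = {
*			.name = "my_smd_client",
*			.of_match_table = my_of_match,
*		},
*	};
*
* followed by a call to qcom_smd_driver_register(&my_driver).
*/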
struct smd_channel_info;
struct smd_channel_info_word;
#define SMD_ALLOC_TBL_COUNT 2
#define SMD_ALLOC_TBL_SIZE 64
/*
* This lists the various smem heap items relevant for the allocation table and
* smd channel entries.
*/
static const struct {
unsigned alloc_tbl_id;
unsigned info_base_id;
unsigned fifo_base_id;
} smem_items[SMD_ALLOC_TBL_COUNT] = {
{
.alloc_tbl_id = 13,
.info_base_id = 14,
.fifo_base_id = 338
},
{
.alloc_tbl_id = 14,
.info_base_id = 266,
.fifo_base_id = 202,
},
};
/**
* struct qcom_smd_edge - representing a remote processor
* @smd: handle to qcom_smd
* @of_node: of_node handle for information related to this edge
* @edge_id: identifier of this edge
* @remote_pid: identifier of remote processor
* @irq: interrupt for signals on this edge
* @ipc_regmap: regmap handle holding the outgoing ipc register
* @ipc_offset: offset within @ipc_regmap of the register for ipc
* @ipc_bit: bit in the register at @ipc_offset of @ipc_regmap
* @channels: list of all channels detected on this edge
* @channels_lock: guard for modifications of @channels
* @allocated: array of bitmaps representing already allocated channels
* @need_rescan: flag that the @work needs to scan smem for new channels
* @smem_available: last available amount of smem triggering a channel scan
* @work: work item for edge housekeeping
*/
struct qcom_smd_edge {
struct qcom_smd *smd;
struct device_node *of_node;
unsigned edge_id;
unsigned remote_pid;
int irq;
struct regmap *ipc_regmap;
int ipc_offset;
int ipc_bit;
struct list_head channels;
spinlock_t channels_lock;
DECLARE_BITMAP(allocated[SMD_ALLOC_TBL_COUNT], SMD_ALLOC_TBL_SIZE);
bool need_rescan;
unsigned smem_available;
struct work_struct work;
};
/*
* SMD channel states.
*/
enum smd_channel_state {
SMD_CHANNEL_CLOSED,
SMD_CHANNEL_OPENING,
SMD_CHANNEL_OPENED,
SMD_CHANNEL_FLUSHING,
SMD_CHANNEL_CLOSING,
SMD_CHANNEL_RESET,
SMD_CHANNEL_RESET_OPENING
};
/**
* struct qcom_smd_channel - smd channel struct
* @edge: qcom_smd_edge this channel is living on
* @qsdev: reference to an associated smd client device
* @name: name of the channel
* @state: local state of the channel
* @remote_state: remote state of the channel
* @tx_info: byte aligned outgoing channel info
* @rx_info: byte aligned incoming channel info
* @tx_info_word: word aligned outgoing channel info
* @rx_info_word: word aligned incoming channel info
* @tx_lock: lock to make writes to the channel mutually exclusive
* @fblockread_event: wakeup event tied to tx fBLOCKREADINTR
* @tx_fifo: pointer to the outgoing ring buffer
* @rx_fifo: pointer to the incoming ring buffer
* @fifo_size: size of each ring buffer
* @bounce_buffer: bounce buffer for reading wrapped packets
* @cb: callback function registered for this channel
* @recv_lock: guard for rx info modifications and cb pointer
* @pkt_size: size of the currently handled packet
* @list: list entry for @channels in qcom_smd_edge
*/
struct qcom_smd_channel {
struct qcom_smd_edge *edge;
struct qcom_smd_device *qsdev;
char *name;
enum smd_channel_state state;
enum smd_channel_state remote_state;
struct smd_channel_info *tx_info;
struct smd_channel_info *rx_info;
struct smd_channel_info_word *tx_info_word;
struct smd_channel_info_word *rx_info_word;
struct mutex tx_lock;
wait_queue_head_t fblockread_event;
void *tx_fifo;
void *rx_fifo;
int fifo_size;
void *bounce_buffer;
int (*cb)(struct qcom_smd_device *, const void *, size_t);
spinlock_t recv_lock;
int pkt_size;
struct list_head list;
};
/**
* struct qcom_smd - smd struct
* @dev: device struct
* @num_edges: number of entries in @edges
* @edges: array of edges to be handled
*/
struct qcom_smd {
struct device *dev;
unsigned num_edges;
struct qcom_smd_edge edges[0];
};
/*
* Format of the smd_info smem items, for byte aligned channels.
*/
struct smd_channel_info {
u32 state;
u8 fDSR;
u8 fCTS;
u8 fCD;
u8 fRI;
u8 fHEAD;
u8 fTAIL;
u8 fSTATE;
u8 fBLOCKREADINTR;
u32 tail;
u32 head;
};
/*
* Format of the smd_info smem items, for word aligned channels.
*/
struct smd_channel_info_word {
u32 state;
u32 fDSR;
u32 fCTS;
u32 fCD;
u32 fRI;
u32 fHEAD;
u32 fTAIL;
u32 fSTATE;
u32 fBLOCKREADINTR;
u32 tail;
u32 head;
};
#define GET_RX_CHANNEL_INFO(channel, param) \
(channel->rx_info_word ? \
channel->rx_info_word->param : \
channel->rx_info->param)
#define SET_RX_CHANNEL_INFO(channel, param, value) \
(channel->rx_info_word ? \
(channel->rx_info_word->param = value) : \
(channel->rx_info->param = value))
#define GET_TX_CHANNEL_INFO(channel, param) \
(channel->tx_info_word ? \
channel->tx_info_word->param : \
channel->tx_info->param)
#define SET_TX_CHANNEL_INFO(channel, param, value) \
(channel->tx_info_word ? \
(channel->tx_info_word->param = value) : \
(channel->tx_info->param = value))
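/*
* The byte and word aligned layouts differ only in the width of their fields,
* so each access dispatches on whichever info pointer is set instead of
* duplicating every call site for the two struct types.
*/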
/**
* struct qcom_smd_alloc_entry - channel allocation entry
* @name: channel name
* @cid: channel index
* @flags: channel flags and edge id
* @ref_count: reference count of the channel
*/
struct qcom_smd_alloc_entry {
u8 name[20];
u32 cid;
u32 flags;
u32 ref_count;
} __packed;
#define SMD_CHANNEL_FLAGS_EDGE_MASK 0xff
#define SMD_CHANNEL_FLAGS_STREAM BIT(8)
#define SMD_CHANNEL_FLAGS_PACKET BIT(9)
/*
* Each smd packet contains a 20 byte header, with the first 4 bytes being the
* length of the payload.
*/
#define SMD_PACKET_HEADER_LEN 20
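/*
* That is, each packet on the wire is five u32 words followed by the payload;
* only the first word - the payload length - is used here, see the hdr[5]
* array written by qcom_smd_send().
*/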
/*
* Signal the remote processor associated with 'channel'.
*/
static void qcom_smd_signal_channel(struct qcom_smd_channel *channel)
{
struct qcom_smd_edge *edge = channel->edge;
regmap_write(edge->ipc_regmap, edge->ipc_offset, BIT(edge->ipc_bit));
}
/*
* Initialize the tx channel info
*/
static void qcom_smd_channel_reset(struct qcom_smd_channel *channel)
{
SET_TX_CHANNEL_INFO(channel, state, SMD_CHANNEL_CLOSED);
SET_TX_CHANNEL_INFO(channel, fDSR, 0);
SET_TX_CHANNEL_INFO(channel, fCTS, 0);
SET_TX_CHANNEL_INFO(channel, fCD, 0);
SET_TX_CHANNEL_INFO(channel, fRI, 0);
SET_TX_CHANNEL_INFO(channel, fHEAD, 0);
SET_TX_CHANNEL_INFO(channel, fTAIL, 0);
SET_TX_CHANNEL_INFO(channel, fSTATE, 1);
SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 1);
SET_TX_CHANNEL_INFO(channel, head, 0);
SET_TX_CHANNEL_INFO(channel, tail, 0);
qcom_smd_signal_channel(channel);
channel->state = SMD_CHANNEL_CLOSED;
channel->pkt_size = 0;
}
/*
* Calculate the amount of data available in the rx fifo
*/
static size_t qcom_smd_channel_get_rx_avail(struct qcom_smd_channel *channel)
{
unsigned head;
unsigned tail;
head = GET_RX_CHANNEL_INFO(channel, head);
tail = GET_RX_CHANNEL_INFO(channel, tail);
return (head - tail) & (channel->fifo_size - 1);
}
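/*
* Note: this relies on fifo_size being a power of two; the unsigned
* subtraction handles head wrapping past tail and the mask reduces the
* result into [0, fifo_size).
*/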
/*
* Set tx channel state and inform the remote processor
*/
static void qcom_smd_channel_set_state(struct qcom_smd_channel *channel,
int state)
{
struct qcom_smd_edge *edge = channel->edge;
bool is_open = state == SMD_CHANNEL_OPENED;
if (channel->state == state)
return;
dev_dbg(edge->smd->dev, "set_state(%s, %d)\n", channel->name, state);
SET_TX_CHANNEL_INFO(channel, fDSR, is_open);
SET_TX_CHANNEL_INFO(channel, fCTS, is_open);
SET_TX_CHANNEL_INFO(channel, fCD, is_open);
SET_TX_CHANNEL_INFO(channel, state, state);
SET_TX_CHANNEL_INFO(channel, fSTATE, 1);
channel->state = state;
qcom_smd_signal_channel(channel);
}
/*
* Copy count bytes of data using 32bit accesses, if that's required.
*/
static void smd_copy_to_fifo(void __iomem *_dst,
const void *_src,
size_t count,
bool word_aligned)
{
u32 *dst = (u32 *)_dst;
u32 *src = (u32 *)_src;
if (word_aligned) {
count /= sizeof(u32);
while (count--)
writel_relaxed(*src++, dst++);
} else {
memcpy_toio(_dst, _src, count);
}
}
/*
* Copy count bytes of data using 32bit accesses, if that is required.
*/
static void smd_copy_from_fifo(void *_dst,
const void __iomem *_src,
size_t count,
bool word_aligned)
{
u32 *dst = (u32 *)_dst;
u32 *src = (u32 *)_src;
if (word_aligned) {
count /= sizeof(u32);
while (count--)
*dst++ = readl_relaxed(src++);
} else {
memcpy_fromio(_dst, _src, count);
}
}
/*
* Read count bytes of data from the rx fifo into buf, but don't advance the
* tail.
*/
static size_t qcom_smd_channel_peek(struct qcom_smd_channel *channel,
void *buf, size_t count)
{
bool word_aligned;
unsigned tail;
size_t len;
word_aligned = channel->rx_info_word != NULL;
tail = GET_RX_CHANNEL_INFO(channel, tail);
len = min_t(size_t, count, channel->fifo_size - tail);
if (len) {
smd_copy_from_fifo(buf,
channel->rx_fifo + tail,
len,
word_aligned);
}
if (len != count) {
smd_copy_from_fifo(buf + len,
channel->rx_fifo,
count - len,
word_aligned);
}
return count;
}
/*
* Advance the rx tail by count bytes.
*/
static void qcom_smd_channel_advance(struct qcom_smd_channel *channel,
size_t count)
{
unsigned tail;
tail = GET_RX_CHANNEL_INFO(channel, tail);
tail += count;
tail &= (channel->fifo_size - 1);
SET_RX_CHANNEL_INFO(channel, tail, tail);
}
/*
* Read out a single packet from the rx fifo and deliver it to the device
*/
static int qcom_smd_channel_recv_single(struct qcom_smd_channel *channel)
{
struct qcom_smd_device *qsdev = channel->qsdev;
unsigned tail;
size_t len;
void *ptr;
int ret;
if (!channel->cb)
return 0;
tail = GET_RX_CHANNEL_INFO(channel, tail);
/* Use bounce buffer if the data wraps */
if (tail + channel->pkt_size >= channel->fifo_size) {
ptr = channel->bounce_buffer;
len = qcom_smd_channel_peek(channel, ptr, channel->pkt_size);
} else {
ptr = channel->rx_fifo + tail;
len = channel->pkt_size;
}
ret = channel->cb(qsdev, ptr, len);
if (ret < 0)
return ret;
/* Only forward the tail if the client consumed the data */
qcom_smd_channel_advance(channel, len);
channel->pkt_size = 0;
return 0;
}
/*
* Per channel interrupt handling
*/
static bool qcom_smd_channel_intr(struct qcom_smd_channel *channel)
{
bool need_state_scan = false;
int remote_state;
u32 pktlen;
int avail;
int ret;
/* Handle state changes */
remote_state = GET_RX_CHANNEL_INFO(channel, state);
if (remote_state != channel->remote_state) {
channel->remote_state = remote_state;
need_state_scan = true;
}
/* Indicate that we have seen any state change */
SET_RX_CHANNEL_INFO(channel, fSTATE, 0);
/* Signal waiting qcom_smd_send() about the interrupt */
if (!GET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR))
wake_up_interruptible(&channel->fblockread_event);
/* Don't consume any data until we've opened the channel */
if (channel->state != SMD_CHANNEL_OPENED)
goto out;
/* Indicate that we've seen the new data */
SET_RX_CHANNEL_INFO(channel, fHEAD, 0);
/* Consume data */
for (;;) {
avail = qcom_smd_channel_get_rx_avail(channel);
if (!channel->pkt_size && avail >= SMD_PACKET_HEADER_LEN) {
qcom_smd_channel_peek(channel, &pktlen, sizeof(pktlen));
qcom_smd_channel_advance(channel, SMD_PACKET_HEADER_LEN);
channel->pkt_size = pktlen;
} else if (channel->pkt_size && avail >= channel->pkt_size) {
ret = qcom_smd_channel_recv_single(channel);
if (ret)
break;
} else {
break;
}
}
/* Indicate that we have seen and updated tail */
SET_RX_CHANNEL_INFO(channel, fTAIL, 1);
/* Signal the remote that we've consumed the data (if requested) */
if (!GET_RX_CHANNEL_INFO(channel, fBLOCKREADINTR)) {
/* Ensure ordering of channel info updates */
wmb();
qcom_smd_signal_channel(channel);
}
out:
return need_state_scan;
}
/*
* The edge interrupts are triggered by the remote processor on state changes,
* channel info updates or when new channels are created.
*/
static irqreturn_t qcom_smd_edge_intr(int irq, void *data)
{
struct qcom_smd_edge *edge = data;
struct qcom_smd_channel *channel;
unsigned available;
bool kick_worker = false;
/*
* Handle state changes or data on each of the channels on this edge
*/
spin_lock(&edge->channels_lock);
list_for_each_entry(channel, &edge->channels, list) {
spin_lock(&channel->recv_lock);
kick_worker |= qcom_smd_channel_intr(channel);
spin_unlock(&channel->recv_lock);
}
spin_unlock(&edge->channels_lock);
/*
* Creating a new channel requires allocating an smem entry, so we only
* have to scan if the amount of available space in smem has changed
* since the last scan.
*/
available = qcom_smem_get_free_space(edge->remote_pid);
if (available != edge->smem_available) {
edge->smem_available = available;
edge->need_rescan = true;
kick_worker = true;
}
if (kick_worker)
schedule_work(&edge->work);
return IRQ_HANDLED;
}
/*
* Delivers any outstanding packets in the rx fifo; used after probing a
* client to deliver any packets that weren't delivered before the client
* was set up.
*/
static void qcom_smd_channel_resume(struct qcom_smd_channel *channel)
{
unsigned long flags;
spin_lock_irqsave(&channel->recv_lock, flags);
qcom_smd_channel_intr(channel);
spin_unlock_irqrestore(&channel->recv_lock, flags);
}
/*
* Calculate how much space is available in the tx fifo.
*/
static size_t qcom_smd_get_tx_avail(struct qcom_smd_channel *channel)
{
unsigned head;
unsigned tail;
unsigned mask = channel->fifo_size - 1;
head = GET_TX_CHANNEL_INFO(channel, head);
tail = GET_TX_CHANNEL_INFO(channel, tail);
return mask - ((head - tail) & mask);
}
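/*
* Returning at most mask (fifo_size - 1) keeps one byte permanently unused,
* so a completely full fifo never has head == tail and thus remains
* distinguishable from an empty one.
*/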
/*
* Write count bytes of data into channel, possibly wrapping in the ring buffer
*/
static int qcom_smd_write_fifo(struct qcom_smd_channel *channel,
const void *data,
size_t count)
{
bool word_aligned;
unsigned head;
size_t len;
word_aligned = channel->tx_info_word != NULL;
head = GET_TX_CHANNEL_INFO(channel, head);
len = min_t(size_t, count, channel->fifo_size - head);
if (len) {
smd_copy_to_fifo(channel->tx_fifo + head,
data,
len,
word_aligned);
}
if (len != count) {
smd_copy_to_fifo(channel->tx_fifo,
data + len,
count - len,
word_aligned);
}
head += count;
head &= (channel->fifo_size - 1);
SET_TX_CHANNEL_INFO(channel, head, head);
return count;
}
/**
* qcom_smd_send - write data to smd channel
* @channel: channel handle
* @data: buffer of data to write
* @len: number of bytes to write
*
* This is a blocking write of len bytes into the channel's tx ring buffer and
* signals the remote end. It will sleep until there is enough space available
* in the tx buffer, utilizing the fBLOCKREADINTR signaling mechanism to avoid
* polling.
*/
int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len)
{
u32 hdr[5] = {len,};
int tlen = sizeof(hdr) + len;
int ret;
/* Word aligned channels only accept word size aligned data */
if (channel->rx_info_word != NULL && len % 4)
return -EINVAL;
ret = mutex_lock_interruptible(&channel->tx_lock);
if (ret)
return ret;
while (qcom_smd_get_tx_avail(channel) < tlen) {
if (channel->state != SMD_CHANNEL_OPENED) {
ret = -EPIPE;
goto out;
}
SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0);
ret = wait_event_interruptible(channel->fblockread_event,
qcom_smd_get_tx_avail(channel) >= tlen ||
channel->state != SMD_CHANNEL_OPENED);
if (ret)
goto out;
SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 1);
}
SET_TX_CHANNEL_INFO(channel, fTAIL, 0);
qcom_smd_write_fifo(channel, hdr, sizeof(hdr));
qcom_smd_write_fifo(channel, data, len);
SET_TX_CHANNEL_INFO(channel, fHEAD, 1);
/* Ensure ordering of channel info updates */
wmb();
qcom_smd_signal_channel(channel);
out:
mutex_unlock(&channel->tx_lock);
return ret;
}
EXPORT_SYMBOL(qcom_smd_send);
static struct qcom_smd_device *to_smd_device(struct device *dev)
{
return container_of(dev, struct qcom_smd_device, dev);
}
static struct qcom_smd_driver *to_smd_driver(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
return container_of(qsdev->dev.driver, struct qcom_smd_driver, driver);
}
static int qcom_smd_dev_match(struct device *dev, struct device_driver *drv)
{
return of_driver_match_device(dev, drv);
}
/*
* Probe the smd client.
*
* The remote side has indicated that it wants the channel to be opened, so
* complete the state handshake and probe our client driver.
*/
static int qcom_smd_dev_probe(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
struct qcom_smd_driver *qsdrv = to_smd_driver(dev);
struct qcom_smd_channel *channel = qsdev->channel;
size_t bb_size;
int ret;
/*
* Packets are at most 4k, but reduce the bounce buffer if the fifo is smaller
*/
bb_size = min(channel->fifo_size, SZ_4K);
channel->bounce_buffer = kmalloc(bb_size, GFP_KERNEL);
if (!channel->bounce_buffer)
return -ENOMEM;
channel->cb = qsdrv->callback;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENING);
qcom_smd_channel_set_state(channel, SMD_CHANNEL_OPENED);
ret = qsdrv->probe(qsdev);
if (ret)
goto err;
qcom_smd_channel_resume(channel);
return 0;
err:
dev_err(&qsdev->dev, "probe failed\n");
channel->cb = NULL;
kfree(channel->bounce_buffer);
channel->bounce_buffer = NULL;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);
return ret;
}
/*
* Remove the smd client.
*
* The channel is going away, for some reason, so remove the smd client and
* reset the channel state.
*/
static int qcom_smd_dev_remove(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
struct qcom_smd_driver *qsdrv = to_smd_driver(dev);
struct qcom_smd_channel *channel = qsdev->channel;
unsigned long flags;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSING);
/*
* Make sure we don't race with the code receiving data.
*/
spin_lock_irqsave(&channel->recv_lock, flags);
channel->cb = NULL;
spin_unlock_irqrestore(&channel->recv_lock, flags);
/* Wake up any sleepers in qcom_smd_send() */
wake_up_interruptible(&channel->fblockread_event);
/*
* We expect that the client might block in remove() waiting for any
* outstanding calls to qcom_smd_send() to wake up and finish.
*/
if (qsdrv->remove)
qsdrv->remove(qsdev);
/*
* The client is now gone, cleanup and reset the channel state.
*/
channel->qsdev = NULL;
kfree(channel->bounce_buffer);
channel->bounce_buffer = NULL;
qcom_smd_channel_set_state(channel, SMD_CHANNEL_CLOSED);
qcom_smd_channel_reset(channel);
return 0;
}
static struct bus_type qcom_smd_bus = {
.name = "qcom_smd",
.match = qcom_smd_dev_match,
.probe = qcom_smd_dev_probe,
.remove = qcom_smd_dev_remove,
};
/*
* Release function for the qcom_smd_device object.
*/
static void qcom_smd_release_device(struct device *dev)
{
struct qcom_smd_device *qsdev = to_smd_device(dev);
kfree(qsdev);
}
/*
* Finds the device_node for the smd child interested in this channel.
*/
static struct device_node *qcom_smd_match_channel(struct device_node *edge_node,
const char *channel)
{
struct device_node *child;
const char *name;
const char *key;
int ret;
for_each_available_child_of_node(edge_node, child) {
key = "qcom,smd-channels";
ret = of_property_read_string(child, key, &name);
if (ret) {
of_node_put(child);
continue;
}
if (strcmp(name, channel) == 0)
return child;
}
return NULL;
}
/*
* Create an smd client device for a channel that is being opened.
*/
static int qcom_smd_create_device(struct qcom_smd_channel *channel)
{
struct qcom_smd_device *qsdev;
struct qcom_smd_edge *edge = channel->edge;
struct device_node *node;
struct qcom_smd *smd = edge->smd;
int ret;
if (channel->qsdev)
return -EEXIST;
node = qcom_smd_match_channel(edge->of_node, channel->name);
if (!node) {
dev_dbg(smd->dev, "no match for '%s'\n", channel->name);
return -ENXIO;
}
dev_dbg(smd->dev, "registering '%s'\n", channel->name);
qsdev = kzalloc(sizeof(*qsdev), GFP_KERNEL);
if (!qsdev)
return -ENOMEM;
dev_set_name(&qsdev->dev, "%s.%s", edge->of_node->name, node->name);
qsdev->dev.parent = smd->dev;
qsdev->dev.bus = &qcom_smd_bus;
qsdev->dev.release = qcom_smd_release_device;
qsdev->dev.of_node = node;
qsdev->channel = channel;
channel->qsdev = qsdev;
ret = device_register(&qsdev->dev);
if (ret) {
dev_err(smd->dev, "device_register failed: %d\n", ret);
put_device(&qsdev->dev);
}
return ret;
}
/*
* Destroy a smd client device for a channel that's going away.
*/
static void qcom_smd_destroy_device(struct qcom_smd_channel *channel)
{
struct device *dev;
BUG_ON(!channel->qsdev);
dev = &channel->qsdev->dev;
device_unregister(dev);
of_node_put(dev->of_node);
put_device(dev);
}
/**
* qcom_smd_driver_register - register a smd driver
* @qsdrv: qcom_smd_driver struct
*/
int qcom_smd_driver_register(struct qcom_smd_driver *qsdrv)
{
qsdrv->driver.bus = &qcom_smd_bus;
return driver_register(&qsdrv->driver);
}
EXPORT_SYMBOL(qcom_smd_driver_register);
/**
* qcom_smd_driver_unregister - unregister a smd driver
* @qsdrv: qcom_smd_driver struct
*/
void qcom_smd_driver_unregister(struct qcom_smd_driver *qsdrv)
{
driver_unregister(&qsdrv->driver);
}
EXPORT_SYMBOL(qcom_smd_driver_unregister);
/*
* Allocate the qcom_smd_channel object for a newly found smd channel,
* retrieving and validating the smem items involved.
*/
static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *edge,
unsigned smem_info_item,
unsigned smem_fifo_item,
char *name)
{
struct qcom_smd_channel *channel;
struct qcom_smd *smd = edge->smd;
size_t fifo_size;
size_t info_size;
void *fifo_base;
void *info;
int ret;
channel = devm_kzalloc(smd->dev, sizeof(*channel), GFP_KERNEL);
if (!channel)
return ERR_PTR(-ENOMEM);
channel->edge = edge;
channel->name = devm_kstrdup(smd->dev, name, GFP_KERNEL);
if (!channel->name)
return ERR_PTR(-ENOMEM);
mutex_init(&channel->tx_lock);
spin_lock_init(&channel->recv_lock);
init_waitqueue_head(&channel->fblockread_event);
ret = qcom_smem_get(edge->remote_pid, smem_info_item, (void **)&info,
&info_size);
if (ret)
goto free_name_and_channel;
/*
* Use the size of the item to figure out which channel info struct to
* use.
*/
if (info_size == 2 * sizeof(struct smd_channel_info_word)) {
channel->tx_info_word = info;
channel->rx_info_word = info + sizeof(struct smd_channel_info_word);
} else if (info_size == 2 * sizeof(struct smd_channel_info)) {
channel->tx_info = info;
channel->rx_info = info + sizeof(struct smd_channel_info);
} else {
dev_err(smd->dev,
"channel info of size %zu not supported\n", info_size);
ret = -EINVAL;
goto free_name_and_channel;
}
ret = qcom_smem_get(edge->remote_pid, smem_fifo_item, &fifo_base,
&fifo_size);
if (ret)
goto free_name_and_channel;
/* The channel consists of a rx and tx fifo of equal size */
fifo_size /= 2;
dev_dbg(smd->dev, "new channel '%s' info-size: %zu fifo-size: %zu\n",
name, info_size, fifo_size);
channel->tx_fifo = fifo_base;
channel->rx_fifo = fifo_base + fifo_size;
channel->fifo_size = fifo_size;
qcom_smd_channel_reset(channel);
return channel;
free_name_and_channel:
devm_kfree(smd->dev, channel->name);
devm_kfree(smd->dev, channel);
return ERR_PTR(ret);
}
/*
* Scans the allocation table for any newly allocated channels, calls
* qcom_smd_create_channel() to create representations of these and add
* them to the edge's list of channels.
*/
static void qcom_discover_channels(struct qcom_smd_edge *edge)
{
struct qcom_smd_alloc_entry *alloc_tbl;
struct qcom_smd_alloc_entry *entry;
struct qcom_smd_channel *channel;
struct qcom_smd *smd = edge->smd;
unsigned long flags;
unsigned fifo_id;
unsigned info_id;
int ret;
int tbl;
int i;
for (tbl = 0; tbl < SMD_ALLOC_TBL_COUNT; tbl++) {
ret = qcom_smem_get(edge->remote_pid,
smem_items[tbl].alloc_tbl_id,
(void **)&alloc_tbl,
NULL);
if (ret < 0)
continue;
for (i = 0; i < SMD_ALLOC_TBL_SIZE; i++) {
entry = &alloc_tbl[i];
if (test_bit(i, edge->allocated[tbl]))
continue;
if (entry->ref_count == 0)
continue;
if (!entry->name[0])
continue;
if (!(entry->flags & SMD_CHANNEL_FLAGS_PACKET))
continue;
if ((entry->flags & SMD_CHANNEL_FLAGS_EDGE_MASK) != edge->edge_id)
continue;
info_id = smem_items[tbl].info_base_id + entry->cid;
fifo_id = smem_items[tbl].fifo_base_id + entry->cid;
channel = qcom_smd_create_channel(edge, info_id, fifo_id, entry->name);
if (IS_ERR(channel))
continue;
spin_lock_irqsave(&edge->channels_lock, flags);
list_add(&channel->list, &edge->channels);
spin_unlock_irqrestore(&edge->channels_lock, flags);
dev_dbg(smd->dev, "new channel found: '%s'\n", channel->name);
set_bit(i, edge->allocated[tbl]);
}
}
schedule_work(&edge->work);
}
/*
* This per edge worker scans smem for any new channels and registers these. It
* then scans all registered channels for state changes that should be handled
* by creating or destroying smd client devices for the registered channels.
*
* LOCKING: edge->channels_lock need not be held during the traversal of the
* channels list, as it's done synchronously with the only writer.
*/
static void qcom_channel_state_worker(struct work_struct *work)
{
struct qcom_smd_channel *channel;
struct qcom_smd_edge *edge = container_of(work,
struct qcom_smd_edge,
work);
unsigned remote_state;
/*
* Rescan smem if we have reason to believe that there are new channels.
*/
if (edge->need_rescan) {
edge->need_rescan = false;
qcom_discover_channels(edge);
}
/*
* Register a device for any closed channel where the remote processor
* is showing interest in opening the channel.
*/
list_for_each_entry(channel, &edge->channels, list) {
if (channel->state != SMD_CHANNEL_CLOSED)
continue;
remote_state = GET_RX_CHANNEL_INFO(channel, state);
if (remote_state != SMD_CHANNEL_OPENING &&
remote_state != SMD_CHANNEL_OPENED)
continue;
qcom_smd_create_device(channel);
}
/*
* Unregister the device for any channel that is opened where the
* remote processor is closing the channel.
*/
list_for_each_entry(channel, &edge->channels, list) {
if (channel->state != SMD_CHANNEL_OPENING &&
channel->state != SMD_CHANNEL_OPENED)
continue;
remote_state = GET_RX_CHANNEL_INFO(channel, state);
if (remote_state == SMD_CHANNEL_OPENING ||
remote_state == SMD_CHANNEL_OPENED)
continue;
qcom_smd_destroy_device(channel);
}
}
/*
* Parses an of_node describing an edge.
*/
static int qcom_smd_parse_edge(struct device *dev,
struct device_node *node,
struct qcom_smd_edge *edge)
{
struct device_node *syscon_np;
const char *key;
int irq;
int ret;
INIT_LIST_HEAD(&edge->channels);
spin_lock_init(&edge->channels_lock);
INIT_WORK(&edge->work, qcom_channel_state_worker);
edge->of_node = of_node_get(node);
irq = irq_of_parse_and_map(node, 0);
if (irq < 0) {
dev_err(dev, "required smd interrupt missing\n");
return -EINVAL;
}
ret = devm_request_irq(dev, irq,
qcom_smd_edge_intr, IRQF_TRIGGER_RISING,
node->name, edge);
if (ret) {
dev_err(dev, "failed to request smd irq\n");
return ret;
}
edge->irq = irq;
key = "qcom,smd-edge";
ret = of_property_read_u32(node, key, &edge->edge_id);
if (ret) {
dev_err(dev, "edge missing %s property\n", key);
return -EINVAL;
}
edge->remote_pid = QCOM_SMEM_HOST_ANY;
key = "qcom,remote-pid";
of_property_read_u32(node, key, &edge->remote_pid);
syscon_np = of_parse_phandle(node, "qcom,ipc", 0);
if (!syscon_np) {
dev_err(dev, "no qcom,ipc node\n");
return -ENODEV;
}
edge->ipc_regmap = syscon_node_to_regmap(syscon_np);
if (IS_ERR(edge->ipc_regmap))
return PTR_ERR(edge->ipc_regmap);
key = "qcom,ipc";
ret = of_property_read_u32_index(node, key, 1, &edge->ipc_offset);
if (ret < 0) {
dev_err(dev, "no offset in %s\n", key);
return -EINVAL;
}
ret = of_property_read_u32_index(node, key, 2, &edge->ipc_bit);
if (ret < 0) {
dev_err(dev, "no bit in %s\n", key);
return -EINVAL;
}
return 0;
}
static int qcom_smd_probe(struct platform_device *pdev)
{
struct qcom_smd_edge *edge;
struct device_node *node;
struct qcom_smd *smd;
size_t array_size;
int num_edges;
int ret;
int i = 0;
/* Wait for smem */
ret = qcom_smem_get(QCOM_SMEM_HOST_ANY, smem_items[0].alloc_tbl_id, NULL, NULL);
if (ret == -EPROBE_DEFER)
return ret;
num_edges = of_get_available_child_count(pdev->dev.of_node);
array_size = sizeof(*smd) + num_edges * sizeof(struct qcom_smd_edge);
smd = devm_kzalloc(&pdev->dev, array_size, GFP_KERNEL);
if (!smd)
return -ENOMEM;
smd->dev = &pdev->dev;
smd->num_edges = num_edges;
for_each_available_child_of_node(pdev->dev.of_node, node) {
edge = &smd->edges[i++];
edge->smd = smd;
ret = qcom_smd_parse_edge(&pdev->dev, node, edge);
if (ret)
continue;
edge->need_rescan = true;
schedule_work(&edge->work);
}
platform_set_drvdata(pdev, smd);
return 0;
}
/*
* Shut down all smd clients by making sure that each edge stops processing
* events and scanning for new channels, then call destroy on the devices.
*/
static int qcom_smd_remove(struct platform_device *pdev)
{
struct qcom_smd_channel *channel;
struct qcom_smd_edge *edge;
struct qcom_smd *smd = platform_get_drvdata(pdev);
int i;
for (i = 0; i < smd->num_edges; i++) {
edge = &smd->edges[i];
disable_irq(edge->irq);
cancel_work_sync(&edge->work);
list_for_each_entry(channel, &edge->channels, list) {
if (!channel->qsdev)
continue;
qcom_smd_destroy_device(channel);
}
}
return 0;
}
static const struct of_device_id qcom_smd_of_match[] = {
{ .compatible = "qcom,smd" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_smd_of_match);
static struct platform_driver qcom_smd_driver = {
.probe = qcom_smd_probe,
.remove = qcom_smd_remove,
.driver = {
.name = "qcom-smd",
.of_match_table = qcom_smd_of_match,
},
};
static int __init qcom_smd_init(void)
{
int ret;
ret = bus_register(&qcom_smd_bus);
if (ret) {
pr_err("failed to register smd bus: %d\n", ret);
return ret;
}
return platform_driver_register(&qcom_smd_driver);
}
postcore_initcall(qcom_smd_init);
static void __exit qcom_smd_exit(void)
{
platform_driver_unregister(&qcom_smd_driver);
bus_unregister(&qcom_smd_bus);
}
module_exit(qcom_smd_exit);
MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm Shared Memory Driver");
MODULE_LICENSE("GPL v2");
/*
* Copyright (c) 2015, Sony Mobile Communications AB.
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/hwspinlock.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smem.h>
/*
* The Qualcomm shared memory system is an allocate-only heap structure that
* consists of one or more memory areas that can be accessed by the processors
* in the SoC.
*
* All systems contain a global heap, accessible by all processors in the SoC,
* with a table of contents data structure (@smem_header) at the beginning of
* the main shared memory block.
*
* The global header contains metadata for allocations as well as a fixed list
* of 512 entries (@smem_global_entry) that can be initialized to reference
* parts of the shared memory space.
*
*
* In addition to this global heap a set of "private" heaps can be set up at
* boot time with access restrictions so that only certain processor pairs can
* access the data.
*
* These partitions are referenced from an optional partition table
* (@smem_ptable), that is found 4kB from the end of the main smem region. The
* partition table entries (@smem_ptable_entry) list the involved processors
* (or hosts) and their location in the main shared memory region.
*
* Each partition starts with a header (@smem_partition_header) that identifies
* the partition and holds properties for the two internal memory regions. The
* two regions are cached and non-cached memory respectively. Each region
* contains a linked list of allocation headers (@smem_private_entry) followed by
* their data.
*
* Items in the non-cached region are allocated from the start of the partition
* while items in the cached region are allocated from the end. The free area
* is hence the region between the cached and non-cached offsets.
*
*
* To synchronize allocations in the shared memory heaps a remote spinlock must
* be held - currently lock number 3 of the sfpb or tcsr is used for this on all
* platforms.
*
*/
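/*
* A rough sketch of the layout described above (illustrative only):
*
*	main region:  | smem_header + toc | items ... free ... | ptable (4kB) |
*	partition:    | partition_header | uncached items -> ... <- cached items |
*/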
/*
* Item 3 of the global heap contains an array of versions for the various
* software components in the SoC. We verify that the boot loader version
* matches the expected version (SMEM_EXPECTED_VERSION) as a sanity check.
*/
#define SMEM_ITEM_VERSION 3
#define SMEM_MASTER_SBL_VERSION_INDEX 7
#define SMEM_EXPECTED_VERSION 11
/*
* The first 8 items are only to be allocated by the boot loader while
* initializing the heap.
*/
#define SMEM_ITEM_LAST_FIXED 8
/* Highest accepted item number, for both global and private heaps */
#define SMEM_ITEM_COUNT 512
/* Processor/host identifier for the application processor */
#define SMEM_HOST_APPS 0
/* Max number of processors/hosts in a system */
#define SMEM_HOST_COUNT 9
/**
* struct smem_proc_comm - proc_comm communication struct (legacy)
* @command: current command to be executed
* @status: status of the currently requested command
* @params: parameters to the command
*/
struct smem_proc_comm {
u32 command;
u32 status;
u32 params[2];
};
/**
* struct smem_global_entry - entry to reference smem items on the heap
* @allocated: boolean to indicate if this entry is used
* @offset: offset to the allocated space
* @size: size of the allocated space, 8 byte aligned
* @aux_base: base address for the memory region used by this unit, or 0 for
* the default region. bits 0,1 are reserved
*/
struct smem_global_entry {
u32 allocated;
u32 offset;
u32 size;
u32 aux_base; /* bits 1:0 reserved */
};
#define AUX_BASE_MASK 0xfffffffc
/**
* struct smem_header - header found in beginning of primary smem region
* @proc_comm: proc_comm communication interface (legacy)
* @version: array of versions for the various subsystems
* @initialized: boolean to indicate that smem is initialized
* @free_offset: index of the first unallocated byte in smem
* @available: number of bytes available for allocation
* @reserved: reserved field, must be 0
* @toc: array of references to items
*/
struct smem_header {
struct smem_proc_comm proc_comm[4];
u32 version[32];
u32 initialized;
u32 free_offset;
u32 available;
u32 reserved;
struct smem_global_entry toc[SMEM_ITEM_COUNT];
};
/**
* struct smem_ptable_entry - one entry in the @smem_ptable list
* @offset: offset, within the main shared memory region, of the partition
* @size: size of the partition
* @flags: flags for the partition (currently unused)
* @host0: first processor/host with access to this partition
* @host1: second processor/host with access to this partition
* @reserved: reserved entries for later use
*/
struct smem_ptable_entry {
u32 offset;
u32 size;
u32 flags;
u16 host0;
u16 host1;
u32 reserved[8];
};
/**
* struct smem_ptable - partition table for the private partitions
* @magic: magic number, must be SMEM_PTABLE_MAGIC
* @version: version of the partition table
* @num_entries: number of partitions in the table
* @reserved: for now reserved entries
* @entry: list of @smem_ptable_entry for the @num_entries partitions
*/
struct smem_ptable {
u32 magic;
u32 version;
u32 num_entries;
u32 reserved[5];
struct smem_ptable_entry entry[];
};
#define SMEM_PTABLE_MAGIC 0x434f5424 /* "$TOC" */
/**
* struct smem_partition_header - header of the partitions
* @magic: magic number, must be SMEM_PART_MAGIC
* @host0: first processor/host with access to this partition
* @host1: second processor/host with access to this partition
* @size: size of the partition
* @offset_free_uncached: offset to the first free byte of uncached memory in
* this partition
* @offset_free_cached: offset to the first free byte of cached memory in this
* partition
* @reserved: for now reserved entries
*/
struct smem_partition_header {
u32 magic;
u16 host0;
u16 host1;
u32 size;
u32 offset_free_uncached;
u32 offset_free_cached;
u32 reserved[3];
};
#define SMEM_PART_MAGIC 0x54525024 /* "$PRT" */
/**
* struct smem_private_entry - header of each item in the private partition
* @canary: magic number, must be SMEM_PRIVATE_CANARY
* @item: identifying number of the smem item
* @size: size of the data, including padding bytes
* @padding_data: number of bytes of padding of data
* @padding_hdr: number of bytes of padding between the header and the data
* @reserved: for now reserved entry
*/
struct smem_private_entry {
u16 canary;
u16 item;
u32 size; /* includes padding bytes */
u16 padding_data;
u16 padding_hdr;
u32 reserved;
};
#define SMEM_PRIVATE_CANARY 0xa5a5
/**
* struct smem_region - representation of a chunk of memory used for smem
* @aux_base: identifier of aux_mem base
* @virt_base: virtual base address of memory with this aux_mem identifier
* @size: size of the memory region
*/
struct smem_region {
u32 aux_base;
void __iomem *virt_base;
size_t size;
};
/**
* struct qcom_smem - device data for the smem device
* @dev: device pointer
* @hwlock: reference to a hwspinlock
* @partitions: list of pointers to partitions affecting the current
* processor/host
* @num_regions: number of @regions
* @regions: list of the memory regions defining the shared memory
*/
struct qcom_smem {
struct device *dev;
struct hwspinlock *hwlock;
struct smem_partition_header *partitions[SMEM_HOST_COUNT];
unsigned num_regions;
struct smem_region regions[0];
};
/* Pointer to the one and only smem handle */
static struct qcom_smem *__smem;
/* Timeout (ms) for the trylock of remote spinlocks */
#define HWSPINLOCK_TIMEOUT 1000
static int qcom_smem_alloc_private(struct qcom_smem *smem,
unsigned host,
unsigned item,
size_t size)
{
struct smem_partition_header *phdr;
struct smem_private_entry *hdr;
size_t alloc_size;
void *p;
phdr = smem->partitions[host];
p = (void *)phdr + sizeof(*phdr);
while (p < (void *)phdr + phdr->offset_free_uncached) {
hdr = p;
if (hdr->canary != SMEM_PRIVATE_CANARY) {
dev_err(smem->dev,
"Found invalid canary in host %d partition\n",
host);
return -EINVAL;
}
if (hdr->item == item)
return -EEXIST;
p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
}
/* Check that we don't grow into the cached region */
alloc_size = sizeof(*hdr) + ALIGN(size, 8);
if (p + alloc_size >= (void *)phdr + phdr->offset_free_cached) {
dev_err(smem->dev, "Out of memory\n");
return -ENOSPC;
}
hdr = p;
hdr->canary = SMEM_PRIVATE_CANARY;
hdr->item = item;
hdr->size = ALIGN(size, 8);
hdr->padding_data = hdr->size - size;
hdr->padding_hdr = 0;
/*
* Ensure the header is written before we advance the free offset, so
* that remote processors that do not take the remote spinlock still
* get a consistent view of the linked list.
*/
wmb();
phdr->offset_free_uncached += alloc_size;
return 0;
}
static int qcom_smem_alloc_global(struct qcom_smem *smem,
unsigned item,
size_t size)
{
struct smem_header *header;
struct smem_global_entry *entry;
if (WARN_ON(item >= SMEM_ITEM_COUNT))
return -EINVAL;
header = smem->regions[0].virt_base;
entry = &header->toc[item];
if (entry->allocated)
return -EEXIST;
size = ALIGN(size, 8);
if (WARN_ON(size > header->available))
return -ENOMEM;
entry->offset = header->free_offset;
entry->size = size;
/*
* Ensure the header is consistent before we mark the item allocated,
* so that remote processors will get a consistent view of the item
* even though they do not take the spinlock on read.
*/
wmb();
entry->allocated = 1;
header->free_offset += size;
header->available -= size;
return 0;
}
/**
* qcom_smem_alloc() - allocate space for a smem item
* @host: remote processor id, or -1
* @item: smem item handle
* @size: number of bytes to be allocated
*
* Allocate space for a given smem item of size @size, given that the item is
* not yet allocated.
*/
int qcom_smem_alloc(unsigned host, unsigned item, size_t size)
{
unsigned long flags;
int ret;
if (!__smem)
return -EPROBE_DEFER;
if (item < SMEM_ITEM_LAST_FIXED) {
dev_err(__smem->dev,
"Rejecting allocation of static entry %d\n", item);
return -EINVAL;
}
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ret;
if (host < SMEM_HOST_COUNT && __smem->partitions[host])
ret = qcom_smem_alloc_private(__smem, host, item, size);
else
ret = qcom_smem_alloc_global(__smem, item, size);
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
return ret;
}
EXPORT_SYMBOL(qcom_smem_alloc);
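/*
* Typical use, sketched for illustration: allocate an item once and then
* resolve its address and size with qcom_smem_get():
*
*	ret = qcom_smem_alloc(host, item, size);
*	if (!ret || ret == -EEXIST)
*		ret = qcom_smem_get(host, item, &ptr, &size);
*/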
static int qcom_smem_get_global(struct qcom_smem *smem,
unsigned item,
void **ptr,
size_t *size)
{
struct smem_header *header;
struct smem_region *area;
struct smem_global_entry *entry;
u32 aux_base;
unsigned i;
if (WARN_ON(item >= SMEM_ITEM_COUNT))
return -EINVAL;
header = smem->regions[0].virt_base;
entry = &header->toc[item];
if (!entry->allocated)
return -ENXIO;
if (ptr != NULL) {
aux_base = entry->aux_base & AUX_BASE_MASK;
for (i = 0; i < smem->num_regions; i++) {
area = &smem->regions[i];
if (area->aux_base == aux_base || !aux_base) {
*ptr = area->virt_base + entry->offset;
break;
}
}
}
if (size != NULL)
*size = entry->size;
return 0;
}
static int qcom_smem_get_private(struct qcom_smem *smem,
unsigned host,
unsigned item,
void **ptr,
size_t *size)
{
struct smem_partition_header *phdr;
struct smem_private_entry *hdr;
void *p;
phdr = smem->partitions[host];
p = (void *)phdr + sizeof(*phdr);
while (p < (void *)phdr + phdr->offset_free_uncached) {
hdr = p;
if (hdr->canary != SMEM_PRIVATE_CANARY) {
dev_err(smem->dev,
"Found invalid canary in host %d partition\n",
host);
return -EINVAL;
}
if (hdr->item == item) {
if (ptr != NULL)
*ptr = p + sizeof(*hdr) + hdr->padding_hdr;
if (size != NULL)
*size = hdr->size - hdr->padding_data;
return 0;
}
p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
}
return -ENOENT;
}
/**
 * qcom_smem_get() - resolve pointer and size of a smem item
 * @host: the remote processor, or -1 for the global heap
* @item: smem item handle
* @ptr: pointer to be filled out with address of the item
* @size: pointer to be filled out with size of the item
*
* Looks up pointer and size of a smem item.
*/
int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size)
{
unsigned long flags;
int ret;
if (!__smem)
return -EPROBE_DEFER;
ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
HWSPINLOCK_TIMEOUT,
&flags);
if (ret)
return ret;
if (host < SMEM_HOST_COUNT && __smem->partitions[host])
ret = qcom_smem_get_private(__smem, host, item, ptr, size);
else
ret = qcom_smem_get_global(__smem, item, ptr, size);
hwspin_unlock_irqrestore(__smem->hwlock, &flags);
return ret;
}
EXPORT_SYMBOL(qcom_smem_get);
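Continuing that sketch, a lookup fills out both pointer and size; checking the reported size before dereferencing guards against an item that was allocated smaller than expected:

/* Hypothetical read of the item reserved in the sketch above. */
static int my_client_read(u32 *out)
{
	size_t size;
	void *ptr;
	int ret;

	ret = qcom_smem_get(MY_REMOTE_HOST, MY_SMEM_ITEM, &ptr, &size);
	if (ret < 0)
		return ret;

	if (size < sizeof(u32))
		return -EINVAL;

	*out = *(u32 *)ptr;
	return 0;
}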
/**
* qcom_smem_get_free_space() - retrieve amount of free space in a partition
* @host: the remote processor identifying a partition, or -1
*
* To be used by smem clients as a quick way to determine if any new
 * allocations have been made.
*/
int qcom_smem_get_free_space(unsigned host)
{
struct smem_partition_header *phdr;
struct smem_header *header;
unsigned ret;
if (!__smem)
return -EPROBE_DEFER;
if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
phdr = __smem->partitions[host];
ret = phdr->offset_free_cached - phdr->offset_free_uncached;
} else {
header = __smem->regions[0].virt_base;
ret = header->available;
}
return ret;
}
EXPORT_SYMBOL(qcom_smem_get_free_space);
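A sketch of the quick check described above, assuming the caller polls periodically: free space only ever shrinks as items are allocated, so a drop since the previous poll implies new allocations (the static is illustrative and not thread safe):

/* Illustrative polling pattern built on qcom_smem_get_free_space(). */
static int my_last_free = -1;

static bool my_client_saw_new_allocations(void)
{
	int free_space = qcom_smem_get_free_space(MY_REMOTE_HOST);
	bool shrunk;

	if (free_space < 0)
		return false;	/* smem not probed yet */

	shrunk = my_last_free >= 0 && free_space < my_last_free;
	my_last_free = free_space;
	return shrunk;
}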
static int qcom_smem_get_sbl_version(struct qcom_smem *smem)
{
unsigned *versions;
size_t size;
int ret;
ret = qcom_smem_get_global(smem, SMEM_ITEM_VERSION,
(void **)&versions, &size);
if (ret < 0) {
dev_err(smem->dev, "Unable to read the version item\n");
return -ENOENT;
}
if (size < sizeof(unsigned) * SMEM_MASTER_SBL_VERSION_INDEX) {
dev_err(smem->dev, "Version item is too small\n");
return -EINVAL;
}
return versions[SMEM_MASTER_SBL_VERSION_INDEX];
}
static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
unsigned local_host)
{
struct smem_partition_header *header;
struct smem_ptable_entry *entry;
struct smem_ptable *ptable;
unsigned remote_host;
int i;
ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
if (ptable->magic != SMEM_PTABLE_MAGIC)
return 0;
if (ptable->version != 1) {
dev_err(smem->dev,
"Unsupported partition header version %d\n",
ptable->version);
return -EINVAL;
}
for (i = 0; i < ptable->num_entries; i++) {
entry = &ptable->entry[i];
if (entry->host0 != local_host && entry->host1 != local_host)
continue;
if (!entry->offset)
continue;
if (!entry->size)
continue;
if (entry->host0 == local_host)
remote_host = entry->host1;
else
remote_host = entry->host0;
if (remote_host >= SMEM_HOST_COUNT) {
dev_err(smem->dev,
"Invalid remote host %d\n",
remote_host);
return -EINVAL;
}
if (smem->partitions[remote_host]) {
dev_err(smem->dev,
"Already found a partition for host %d\n",
remote_host);
return -EINVAL;
}
header = smem->regions[0].virt_base + entry->offset;
if (header->magic != SMEM_PART_MAGIC) {
dev_err(smem->dev,
"Partition %d has invalid magic\n", i);
return -EINVAL;
}
if (header->host0 != local_host && header->host1 != local_host) {
dev_err(smem->dev,
"Partition %d hosts are invalid\n", i);
return -EINVAL;
}
if (header->host0 != remote_host && header->host1 != remote_host) {
dev_err(smem->dev,
"Partition %d hosts are invalid\n", i);
return -EINVAL;
}
if (header->size != entry->size) {
dev_err(smem->dev,
"Partition %d has invalid size\n", i);
return -EINVAL;
}
if (header->offset_free_uncached > header->size) {
dev_err(smem->dev,
"Partition %d has invalid free pointer\n", i);
return -EINVAL;
}
smem->partitions[remote_host] = header;
}
return 0;
}
static int qcom_smem_count_mem_regions(struct platform_device *pdev)
{
struct resource *res;
int num_regions = 0;
int i;
for (i = 0; i < pdev->num_resources; i++) {
res = &pdev->resource[i];
if (resource_type(res) == IORESOURCE_MEM)
num_regions++;
}
return num_regions;
}
static int qcom_smem_probe(struct platform_device *pdev)
{
struct smem_header *header;
struct device_node *np;
struct qcom_smem *smem;
struct resource *res;
struct resource r;
size_t array_size;
int num_regions = 0;
int hwlock_id;
u32 version;
int ret;
int i;
num_regions = qcom_smem_count_mem_regions(pdev) + 1;
array_size = num_regions * sizeof(struct smem_region);
smem = devm_kzalloc(&pdev->dev, sizeof(*smem) + array_size, GFP_KERNEL);
if (!smem)
return -ENOMEM;
smem->dev = &pdev->dev;
smem->num_regions = num_regions;
np = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
if (!np) {
dev_err(&pdev->dev, "No memory-region specified\n");
return -EINVAL;
}
ret = of_address_to_resource(np, 0, &r);
of_node_put(np);
if (ret)
return ret;
smem->regions[0].aux_base = (u32)r.start;
smem->regions[0].size = resource_size(&r);
smem->regions[0].virt_base = devm_ioremap_nocache(&pdev->dev,
r.start,
resource_size(&r));
if (!smem->regions[0].virt_base)
return -ENOMEM;
for (i = 1; i < num_regions; i++) {
res = platform_get_resource(pdev, IORESOURCE_MEM, i - 1);
smem->regions[i].aux_base = (u32)res->start;
smem->regions[i].size = resource_size(res);
smem->regions[i].virt_base = devm_ioremap_nocache(&pdev->dev,
res->start,
resource_size(res));
if (!smem->regions[i].virt_base)
return -ENOMEM;
}
header = smem->regions[0].virt_base;
if (header->initialized != 1 || header->reserved) {
dev_err(&pdev->dev, "SMEM is not initialized by SBL\n");
return -EINVAL;
}
version = qcom_smem_get_sbl_version(smem);
if (version >> 16 != SMEM_EXPECTED_VERSION) {
dev_err(&pdev->dev, "Unsupported SMEM version 0x%x\n", version);
return -EINVAL;
}
ret = qcom_smem_enumerate_partitions(smem, SMEM_HOST_APPS);
if (ret < 0)
return ret;
hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
if (hwlock_id < 0) {
dev_err(&pdev->dev, "failed to retrieve hwlock\n");
return hwlock_id;
}
smem->hwlock = hwspin_lock_request_specific(hwlock_id);
if (!smem->hwlock)
return -ENXIO;
__smem = smem;
return 0;
}
static int qcom_smem_remove(struct platform_device *pdev)
{
	hwspin_lock_free(__smem->hwlock);
	__smem = NULL;
return 0;
}
static const struct of_device_id qcom_smem_of_match[] = {
{ .compatible = "qcom,smem" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_smem_of_match);
static struct platform_driver qcom_smem_driver = {
.probe = qcom_smem_probe,
.remove = qcom_smem_remove,
.driver = {
.name = "qcom-smem",
.of_match_table = qcom_smem_of_match,
.suppress_bind_attrs = true,
},
};
static int __init qcom_smem_init(void)
{
return platform_driver_register(&qcom_smem_driver);
}
arch_initcall(qcom_smem_init);
static void __exit qcom_smem_exit(void)
{
platform_driver_unregister(&qcom_smem_driver);
}
module_exit(qcom_smem_exit)
MODULE_AUTHOR("Bjorn Andersson <bjorn.andersson@sonymobile.com>");
MODULE_DESCRIPTION("Qualcomm Shared Memory Manager");
MODULE_LICENSE("GPL v2");

/* include/linux/soc/qcom/smd-rpm.h */
#ifndef __QCOM_SMD_RPM_H__
#define __QCOM_SMD_RPM_H__
struct qcom_smd_rpm;
#define QCOM_SMD_RPM_ACTIVE_STATE 0
#define QCOM_SMD_RPM_SLEEP_STATE 1
/*
 * Constants used for addressing resources in the RPM. Each value is a
 * four-character ASCII resource tag stored as a little-endian 32-bit
 * word, e.g. 0x616f646c reads "ldoa" byte by byte.
 */
#define QCOM_SMD_RPM_BOOST 0x61747362
#define QCOM_SMD_RPM_BUS_CLK 0x316b6c63
#define QCOM_SMD_RPM_BUS_MASTER 0x73616d62
#define QCOM_SMD_RPM_BUS_SLAVE 0x766c7362
#define QCOM_SMD_RPM_CLK_BUF_A 0x616B6C63
#define QCOM_SMD_RPM_LDOA 0x616f646c
#define QCOM_SMD_RPM_LDOB 0x626F646C
#define QCOM_SMD_RPM_MEM_CLK 0x326b6c63
#define QCOM_SMD_RPM_MISC_CLK 0x306b6c63
#define QCOM_SMD_RPM_NCPA 0x6170636E
#define QCOM_SMD_RPM_NCPB 0x6270636E
#define QCOM_SMD_RPM_OCMEM_PWR 0x706d636f
#define QCOM_SMD_RPM_QPIC_CLK 0x63697071
#define QCOM_SMD_RPM_SMPA 0x61706d73
#define QCOM_SMD_RPM_SMPB 0x62706d73
#define QCOM_SMD_RPM_SPDM 0x63707362
#define QCOM_SMD_RPM_VSA 0x00617376
int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm,
int state,
u32 resource_type, u32 resource_id,
void *buf, size_t count);
#endif
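As an illustration of the write interface, a minimal sketch of a voltage request for an LDO in the active set; the packed key/value request layout and the "uv" key are assumptions about the RPM message format, not something this header defines:

/* Hypothetical request: set LDOA resource 5 to a voltage in the
 * active state. The struct below is an assumed message layout.
 */
struct my_rpm_req {
	__le32 key;
	__le32 nbytes;
	__le32 value;
} __packed;

static int my_set_ldoa5_uv(struct qcom_smd_rpm *rpm, u32 uv)
{
	struct my_rpm_req req = {
		.key = cpu_to_le32(0x00007675),	/* "uv", little endian */
		.nbytes = cpu_to_le32(sizeof(u32)),
		.value = cpu_to_le32(uv),
	};

	return qcom_rpm_smd_write(rpm, QCOM_SMD_RPM_ACTIVE_STATE,
				  QCOM_SMD_RPM_LDOA, 5,
				  &req, sizeof(req));
}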

/* include/linux/soc/qcom/smd.h */
#ifndef __QCOM_SMD_H__
#define __QCOM_SMD_H__
#include <linux/device.h>
#include <linux/mod_devicetable.h>
struct qcom_smd;
struct qcom_smd_channel;
struct qcom_smd_lookup;
/**
* struct qcom_smd_device - smd device struct
* @dev: the device struct
* @channel: handle to the smd channel for this device
*/
struct qcom_smd_device {
struct device dev;
struct qcom_smd_channel *channel;
};
/**
* struct qcom_smd_driver - smd driver struct
* @driver: underlying device driver
* @probe: invoked when the smd channel is found
* @remove: invoked when the smd channel is closed
* @callback: invoked when an inbound message is received on the channel,
* should return 0 on success or -EBUSY if the data cannot be
* consumed at this time
*/
struct qcom_smd_driver {
struct device_driver driver;
int (*probe)(struct qcom_smd_device *dev);
void (*remove)(struct qcom_smd_device *dev);
int (*callback)(struct qcom_smd_device *, const void *, size_t);
};
int qcom_smd_driver_register(struct qcom_smd_driver *drv);
void qcom_smd_driver_unregister(struct qcom_smd_driver *drv);
#define module_qcom_smd_driver(__smd_driver) \
module_driver(__smd_driver, qcom_smd_driver_register, \
qcom_smd_driver_unregister)
int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len);
#endif
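Pulling the pieces above together, a minimal sketch of an smd client driver; the driver name and the drvdata handling in probe are illustrative choices, not requirements of the interface:

/* Hypothetical smd client built on the interface above. */
static int my_smd_probe(struct qcom_smd_device *sdev)
{
	/* Channel is open; keep it around for qcom_smd_send() later. */
	dev_set_drvdata(&sdev->dev, sdev->channel);
	return 0;
}

static void my_smd_remove(struct qcom_smd_device *sdev)
{
	/* Channel is closed; nothing to tear down in this sketch. */
}

static int my_smd_callback(struct qcom_smd_device *sdev,
			   const void *data, size_t count)
{
	/* Consume the inbound message here, or return -EBUSY if it
	 * cannot be accepted at this time.
	 */
	return 0;
}

static struct qcom_smd_driver my_smd_driver = {
	.probe = my_smd_probe,
	.remove = my_smd_remove,
	.callback = my_smd_callback,
	.driver = {
		.name = "my_smd_client",
	},
};
module_qcom_smd_driver(my_smd_driver);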

/* include/linux/soc/qcom/smem.h */
#ifndef __QCOM_SMEM_H__
#define __QCOM_SMEM_H__
#define QCOM_SMEM_HOST_ANY -1
int qcom_smem_alloc(unsigned host, unsigned item, size_t size);
int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size);
int qcom_smem_get_free_space(unsigned host);
#endif
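Worth noting: QCOM_SMEM_HOST_ANY is -1, which converts to UINT_MAX in the unsigned host parameter, fails the host < SMEM_HOST_COUNT test in qcom_smem_alloc() and qcom_smem_get(), and therefore selects the global heap. A brief sketch (the item number is a placeholder):

/* Hypothetical global-heap item, shared with no particular host. */
static int my_global_item(void **ptr, size_t *size)
{
	int ret;

	ret = qcom_smem_alloc(QCOM_SMEM_HOST_ANY, 450, 64);
	if (ret < 0 && ret != -EEXIST)
		return ret;

	return qcom_smem_get(QCOM_SMEM_HOST_ANY, 450, ptr, size);
}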