Commit 11f1ceca authored by Georgi Djakov, committed by Greg Kroah-Hartman

interconnect: Add generic on-chip interconnect API

This patch introduces a new API to get requirements and configure the
interconnect buses across the entire chipset to fit the current demand.

The API uses a consumer/provider-based model, where the providers are
the interconnect buses and the consumers could be various drivers.
The consumers request interconnect resources (paths) between endpoints and
set the desired constraints on these data flow paths. The providers receive
requests from consumers and aggregate these requests for all master-slave
pairs on that path. Then the providers configure each node along the path
to support a bandwidth that satisfies all bandwidth requests that cross
through that node. The topology can be complicated and multi-tiered, and
is SoC-specific.
Reviewed-by: Evan Green <evgreen@chromium.org>
Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 2ca46ed2
.. SPDX-License-Identifier: GPL-2.0
=====================================
GENERIC SYSTEM INTERCONNECT SUBSYSTEM
=====================================
Introduction
------------
This framework is designed to provide a standard kernel interface to control
the settings of the interconnects on an SoC. These settings can be throughput,
latency and priority between multiple interconnected devices or functional
blocks. This can be controlled dynamically in order to save power or provide
maximum performance.
The interconnect bus is hardware with configurable parameters, which can be
set on a data path according to the requests received from various drivers.
Examples of interconnect buses are the interconnects between various
components or functional blocks in chipsets. There can be multiple
interconnects on an SoC, and they can be multi-tiered.
Below is a simplified diagram of a real-world SoC interconnect bus topology.
::
 +----------------+    +----------------+
 | HW Accelerator |--->|      M NoC     |<---------------+
 +----------------+    +----------------+                |
                         |      |                    +------------+
  +-----+  +-------------+      V       +------+     |            |
  | DDR |  |                +--------+  | PCIe |     |            |
  +-----+  |                | Slaves |  +------+     |            |
    ^ ^    |                +--------+     |         |   C NoC    |
    | |    V                               V         |            |
 +------------------+   +------------------------+   |            |   +-----+
 |                  |-->|                        |-->|            |-->| CPU |
 |                  |-->|                        |<--|            |   +-----+
 |     Mem NoC      |   |         S NoC          |   +------------+
 |                  |<--|                        |---------+    |
 |                  |<--|                        |<------+ |    |   +--------+
 +------------------+   +------------------------+       | |    +-->| Slaves |
   ^  ^    ^    ^          ^                              | |        +--------+
   |  |    |    |          |                              | V
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
 | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
           |
       +-------+
       | Modem |
       +-------+
Terminology
-----------
Interconnect provider is the software definition of the interconnect hardware.
The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
and Mem NoC.
Interconnect node is the software definition of the interconnect hardware
port. Each interconnect provider consists of multiple interconnect nodes,
which are connected to other SoC components including other interconnect
providers. The point on the diagram where the CPUs connect to the memory is
called an interconnect node, which belongs to the Mem NoC interconnect provider.
Interconnect endpoints are the first or the last element of the path. Every
endpoint is a node, but not every node is an endpoint.
Interconnect path is everything between two endpoints, including all the
nodes that have to be traversed to reach from a source to a destination node.
It may include multiple master-slave pairs across several interconnect
providers.
Interconnect consumers are the entities which make use of the data paths
exposed by the providers. The consumers send requests to providers, requesting
various throughput, latency and priority. Usually the consumers are device
drivers that send requests based on their needs. An example of a consumer is
a video decoder that supports various formats and image sizes.
Interconnect providers
----------------------
Interconnect provider is an entity that implements methods to initialize and
configure interconnect bus hardware. The interconnect provider drivers should
be registered with the interconnect provider core.
.. kernel-doc:: include/linux/interconnect-provider.h
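
As a rough sketch (all driver-specific names and node ids below are
hypothetical, used only for illustration), a provider driver registers
itself with icc_provider_add() and then builds its node graph with
icc_node_create(), icc_node_add() and icc_link_create()::

	static struct icc_provider my_provider = {
		.set		= my_set,	/* hypothetical callbacks */
		.aggregate	= my_aggregate,
	};

	static int my_probe(struct platform_device *pdev)
	{
		struct icc_node *node;
		int ret;

		my_provider.dev = &pdev->dev;
		ret = icc_provider_add(&my_provider);
		if (ret)
			return ret;

		/* create a node and link it to another platform-specific id */
		node = icc_node_create(MY_MASTER_ID);
		if (IS_ERR(node))
			return PTR_ERR(node);
		node->name = "my_master";
		icc_node_add(node, &my_provider);
		icc_link_create(node, MY_SLAVE_ID);

		return 0;
	}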
Interconnect consumers
----------------------
Interconnect consumers are the clients which use the interconnect APIs to
get paths between endpoints and set their bandwidth/latency/QoS requirements
for these interconnect paths.
.. kernel-doc:: include/linux/interconnect.h
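
As a rough sketch (the endpoint ids below are made up for illustration, and
``dev`` is the consumer's struct device), a consumer requests a path between
two endpoints with icc_get(), expresses its needs with icc_set_bw() and
releases the path with icc_put() when done::

	struct icc_path *path;
	int ret;

	path = icc_get(dev, MY_MASTER_ID, MY_SLAVE_ID);
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* request 100 MBps average and 200 MBps peak bandwidth */
	ret = icc_set_bw(path, MBps_to_icc(100), MBps_to_icc(200));

	/* ... perform the data transfers ... */

	icc_put(path);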
@@ -228,4 +228,6 @@ source "drivers/siox/Kconfig"
 
 source "drivers/slimbus/Kconfig"
 
+source "drivers/interconnect/Kconfig"
+
 endmenu
@@ -186,3 +186,4 @@ obj-$(CONFIG_MULTIPLEXER) += mux/
 obj-$(CONFIG_UNISYS_VISORBUS)	+= visorbus/
 obj-$(CONFIG_SIOX)		+= siox/
 obj-$(CONFIG_GNSS)		+= gnss/
+obj-$(CONFIG_INTERCONNECT)	+= interconnect/
menuconfig INTERCONNECT
	tristate "On-Chip Interconnect management support"
	help
	  Support for management of the on-chip interconnects.

	  This framework is designed to provide a generic interface for
	  managing the interconnects in an SoC.

	  If unsure, say no.
# SPDX-License-Identifier: GPL-2.0
icc-core-objs := core.o
obj-$(CONFIG_INTERCONNECT) += icc-core.o
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018, Linaro Ltd.
* Author: Georgi Djakov <georgi.djakov@linaro.org>
*/
#ifndef __LINUX_INTERCONNECT_PROVIDER_H
#define __LINUX_INTERCONNECT_PROVIDER_H
#include <linux/interconnect.h>
#define icc_units_to_bps(bw) ((bw) * 1000ULL)
struct icc_node;
/**
* struct icc_provider - interconnect provider (controller) entity that might
* provide multiple interconnect controls
*
* @provider_list: list of the registered interconnect providers
* @nodes: internal list of the interconnect provider nodes
* @set: pointer to device specific set operation function
* @aggregate: pointer to device specific aggregate operation function
* @dev: the device this interconnect provider belongs to
* @users: count of active users
* @data: pointer to private data
*/
struct icc_provider {
	struct list_head	provider_list;
	struct list_head	nodes;
	int (*set)(struct icc_node *src, struct icc_node *dst);
	int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw,
			 u32 *agg_avg, u32 *agg_peak);
	struct device		*dev;
	int			users;
	void			*data;
};
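
/*
 * A minimal sketch of an ->aggregate() callback, assuming the common
 * policy of summing average bandwidth and taking the maximum peak
 * bandwidth (the names below are illustrative, not part of this API):
 *
 *	static int my_aggregate(struct icc_node *node, u32 avg_bw,
 *				u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
 *	{
 *		*agg_avg += avg_bw;
 *		*agg_peak = max(*agg_peak, peak_bw);
 *		return 0;
 *	}
 */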
/**
* struct icc_node - entity that is part of the interconnect topology
*
* @id: platform specific node id
* @name: node name used in debugfs
* @links: a list of targets pointing to where we can go next when traversing
* @num_links: number of links to other interconnect nodes
* @provider: points to the interconnect provider of this node
* @node_list: the list entry in the parent provider's "nodes" list
* @search_list: list used when walking the nodes graph
* @reverse: pointer to previous node when walking the nodes graph
* @is_traversed: flag that is used when walking the nodes graph
* @req_list: a list of QoS constraint requests associated with this node
* @avg_bw: aggregated value of average bandwidth requests from all consumers
* @peak_bw: aggregated value of peak bandwidth requests from all consumers
* @data: pointer to private data
*/
struct icc_node {
	int			id;
	const char		*name;
	struct icc_node		**links;
	size_t			num_links;
	struct icc_provider	*provider;
	struct list_head	node_list;
	struct list_head	search_list;
	struct icc_node		*reverse;
	u8			is_traversed:1;
	struct hlist_head	req_list;
	u32			avg_bw;
	u32			peak_bw;
	void			*data;
};
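
/*
 * Teardown mirrors creation (a sketch only; my_provider and node are
 * hypothetical): nodes are detached with icc_node_del(), freed with
 * icc_node_destroy() and the provider is finally removed with
 * icc_provider_del():
 *
 *	icc_node_del(node);
 *	icc_node_destroy(node->id);
 *	icc_provider_del(&my_provider);
 */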
#if IS_ENABLED(CONFIG_INTERCONNECT)
struct icc_node *icc_node_create(int id);
void icc_node_destroy(int id);
int icc_link_create(struct icc_node *node, const int dst_id);
int icc_link_destroy(struct icc_node *src, struct icc_node *dst);
void icc_node_add(struct icc_node *node, struct icc_provider *provider);
void icc_node_del(struct icc_node *node);
int icc_provider_add(struct icc_provider *provider);
int icc_provider_del(struct icc_provider *provider);
#else
static inline struct icc_node *icc_node_create(int id)
{
	return ERR_PTR(-ENOTSUPP);
}

static inline void icc_node_destroy(int id)
{
}

static inline int icc_link_create(struct icc_node *node, const int dst_id)
{
	return -ENOTSUPP;
}

static inline int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
{
	return -ENOTSUPP;
}

static inline void icc_node_add(struct icc_node *node, struct icc_provider *provider)
{
}

static inline void icc_node_del(struct icc_node *node)
{
}

static inline int icc_provider_add(struct icc_provider *provider)
{
	return -ENOTSUPP;
}

static inline int icc_provider_del(struct icc_provider *provider)
{
	return -ENOTSUPP;
}
#endif /* CONFIG_INTERCONNECT */
#endif /* __LINUX_INTERCONNECT_PROVIDER_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018-2019, Linaro Ltd.
* Author: Georgi Djakov <georgi.djakov@linaro.org>
*/
#ifndef __LINUX_INTERCONNECT_H
#define __LINUX_INTERCONNECT_H
#include <linux/mutex.h>
#include <linux/types.h>
/* macros for converting to icc units */
#define Bps_to_icc(x) ((x) / 1000)
#define kBps_to_icc(x) (x)
#define MBps_to_icc(x) ((x) * 1000)
#define GBps_to_icc(x) ((x) * 1000 * 1000)
#define bps_to_icc(x) (1)
#define kbps_to_icc(x) ((x) / 8 + ((x) % 8 ? 1 : 0))
#define Mbps_to_icc(x) ((x) * 1000 / 8)
#define Gbps_to_icc(x) ((x) * 1000 * 1000 / 8)
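
/*
 * The framework accounts bandwidth internally in kBps. A few worked
 * conversions (illustrative only, derived from the macros above):
 *
 *	MBps_to_icc(100)  == 100000  (100 MBps  = 100000 kBps)
 *	Mbps_to_icc(1000) == 125000  (1000 Mbps = 125000 kBps)
 *	kbps_to_icc(10)   == 2       (10 kbps rounds up to 2 kBps)
 *	bps_to_icc(x)     == 1       (any nonzero bps rounds up to 1 kBps)
 */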
struct icc_path;
struct device;
#if IS_ENABLED(CONFIG_INTERCONNECT)
struct icc_path *icc_get(struct device *dev, const int src_id,
const int dst_id);
void icc_put(struct icc_path *path);
int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
#else
static inline struct icc_path *icc_get(struct device *dev, const int src_id,
				       const int dst_id)
{
	return NULL;
}

static inline void icc_put(struct icc_path *path)
{
}

static inline int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
{
	return 0;
}
#endif /* CONFIG_INTERCONNECT */
#endif /* __LINUX_INTERCONNECT_H */