Commit f8f15c34 authored by Dave Airlie

Merge tag 'drm-msm-next-2018-07-30' of git://people.freedesktop.org/~robclark/linux into drm-next

A bit larger this time around, due to introduction of "dpu1" support
for the display controller in sdm845 and beyond.  This has been on
list and undergoing refactoring since Feb (going from ~110kloc to
~30kloc), and all my review complaints have been addressed, so I'd be
happy to see this upstream so further feature work can proceed on top
of upstream.

Also includes the gpu coredump support, which should be useful for
debugging gpu crashes.  And various other misc fixes and such.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGv-8y3zguY0Mj1vh=o+vrv_bJ8AwZ96wBXYPvMeQT2XcA@mail.gmail.com
parents caca1ff0 a7663a79
Qualcomm Technologies, Inc. DPU KMS
Description:
Device tree bindings for the MSM Mobile Display Subsystem (MDSS), which encapsulates
sub-blocks like the DPU display controller, DSI and DP interfaces, etc.
The DPU display controller is found in the SDM845 SoC.
MDSS:
Required properties:
- compatible: "qcom,sdm845-mdss"
- reg: physical base address and length of controller's registers.
- reg-names: register region names. The following region is required:
* "mdss"
- power-domains: a power domain consumer specifier according to
Documentation/devicetree/bindings/power/power_domain.txt
- clocks: list of clock specifiers for clocks needed by the device.
- clock-names: device clock names, must be in same order as clocks property.
The following clocks are required:
* "iface"
* "bus"
* "core"
- interrupts: interrupt signal from MDSS.
- interrupt-controller: identifies the node as an interrupt controller.
- #interrupt-cells: specifies the number of cells needed to encode an interrupt
source, should be 1.
- iommus: phandle of iommu device node.
- #address-cells: number of address cells for the MDSS children. Should be 1.
- #size-cells: Should be 1.
- ranges: parent bus address space is the same as the child bus address space.
Optional properties:
- assigned-clocks: list of clock specifiers for clocks needing rate assignment
- assigned-clock-rates: list of clock frequencies sorted in the same order as
the assigned-clocks property.
MDP:
Required properties:
- compatible: "qcom,sdm845-dpu"
- reg: physical base address and length of controller's registers.
- reg-names : register region names. The following region is required:
* "mdp"
* "vbif"
- clocks: list of clock specifiers for clocks needed by the device.
- clock-names: device clock names, must be in same order as clocks property.
The following clocks are required.
* "bus"
* "iface"
* "core"
* "vsync"
- interrupts: interrupt line from DPU to MDSS.
- ports: contains the list of output ports from DPU device. These ports connect
to interfaces that are external to the DPU hardware, such as DSI, DP etc.
Each output port contains an endpoint that describes how it is connected to an
external interface. These are described by the standard properties documented
here:
Documentation/devicetree/bindings/graph.txt
Documentation/devicetree/bindings/media/video-interfaces.txt
Port 0 -> DPU_INTF1 (DSI1)
Port 1 -> DPU_INTF2 (DSI2)
Optional properties:
- assigned-clocks: list of clock specifiers for clocks needing rate assignment
- assigned-clock-rates: list of clock frequencies sorted in the same order as
the assigned-clocks property.
Example:
mdss: mdss@ae00000 {
compatible = "qcom,sdm845-mdss";
reg = <0xae00000 0x1000>;
reg-names = "mdss";
power-domains = <&clock_dispcc 0>;
clocks = <&gcc GCC_DISP_AHB_CLK>, <&gcc GCC_DISP_AXI_CLK>,
<&clock_dispcc DISP_CC_MDSS_MDP_CLK>;
clock-names = "iface", "bus", "core";
assigned-clocks = <&clock_dispcc DISP_CC_MDSS_MDP_CLK>;
assigned-clock-rates = <300000000>;
interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <1>;
iommus = <&apps_iommu 0>;
#address-cells = <2>;
#size-cells = <1>;
ranges = <0 0 0xae00000 0xb2008>;
mdss_mdp: mdp@ae01000 {
compatible = "qcom,sdm845-dpu";
reg = <0 0x1000 0x8f000>, <0 0xb0000 0x2008>;
reg-names = "mdp", "vbif";
clocks = <&clock_dispcc DISP_CC_MDSS_AHB_CLK>,
<&clock_dispcc DISP_CC_MDSS_AXI_CLK>,
<&clock_dispcc DISP_CC_MDSS_MDP_CLK>,
<&clock_dispcc DISP_CC_MDSS_VSYNC_CLK>;
clock-names = "iface", "bus", "core", "vsync";
assigned-clocks = <&clock_dispcc DISP_CC_MDSS_MDP_CLK>,
<&clock_dispcc DISP_CC_MDSS_VSYNC_CLK>;
assigned-clock-rates = <300000000 19200000>;
interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
dpu_intf1_out: endpoint {
remote-endpoint = <&dsi0_in>;
};
};
port@1 {
reg = <1>;
dpu_intf2_out: endpoint {
remote-endpoint = <&dsi1_in>;
};
};
};
};
};
......@@ -121,6 +121,20 @@ Required properties:
Optional properties:
- qcom,dsi-phy-regulator-ldo-mode: Boolean value indicating if the LDO mode PHY
regulator is wanted.
- qcom,mdss-mdp-transfer-time-us: Specifies the DSI transfer time for command mode
panels in microseconds. The driver uses this number to adjust
the clock rate according to the expected transfer time.
Increasing this value slows down MDP processing
and can result in slower performance.
Decreasing this value can speed up MDP processing,
but this can also impact power consumption.
As a rule this time should not be higher than the time
that would be expected with processing at the
DSI link rate, since that is the maximum
transfer time that could be achieved.
If ping-pong split is enabled, this time should not be higher
than two times the DSI link rate time.
If the property is not specified, the default value is 14000 us.
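As a purely illustrative sanity check of that upper bound (the panel and link
numbers here are assumed, and DSI protocol overhead is ignored): a 1080x2160
command mode panel at 24 bits per pixel needs about 1080 x 2160 x 3 = ~7.0 MB
per frame; over four lanes at 1 Gbps each (roughly 500 MB/s aggregate) that
frame takes about 7.0 MB / 500 MB/s = ~14 ms, which is on the order of the
14000 us default, so a configured transfer time well above that figure would
exceed what the link can actually deliver.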
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/graph.txt
......@@ -171,6 +185,8 @@ Example:
qcom,master-dsi;
qcom,sync-dual-dsi;
qcom,mdss-mdp-transfer-time-us = <12000>;
pinctrl-names = "default", "sleep";
pinctrl-0 = <&dsi_active>;
pinctrl-1 = <&dsi_suspend>;
......
=====================
MSM Crash Dump Format
=====================
Following a GPU hang the MSM driver outputs debugging information via
/sys/kernel/debug/dri/X/show or via devcoredump (/sys/class/devcoredump/dcdX/data).
This document describes how the output is formatted.
Each entry is in the form key: value. Section headers will not have a value,
and all the contents of a section will be indented two spaces from the header.
Each section might have multiple array entries, the start of which is designated
by a (-).
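For example, a fragment of the output might look like this (all values here are
purely illustrative):

kernel: 4.18.0-rc6
module: msm
time: 11168.768794
ringbuffer:
  - id: 0
    iova: 0x000000000100a000
    last-fence: 21
    retired-fence: 20
    rptr: 160
    wptr: 166
    size: 32768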
Mappings
--------
kernel
The kernel version that generated the dump (UTS_RELEASE).
module
The module that generated the crashdump.
time
The kernel time at crash formatted as seconds.microseconds.
comm
Comm string for the binary that generated the fault.
cmdline
Command line for the binary that generated the fault.
revision
ID of the GPU that generated the crash, formatted as
core.major.minor.patchlevel.
rbbm-status
The current value of RBBM_STATUS which shows what top level GPU
components are in use at the time of crash.
ringbuffer
Section containing the contents of each ringbuffer. Each ringbuffer is
identified with an id number.
id
Ringbuffer ID (0 based index). Each ringbuffer in the section
will have its own unique id.
iova
GPU address of the ringbuffer.
last-fence
The last fence that was issued on the ringbuffer.
retired-fence
The last fence retired on the ringbuffer.
rptr
The current read pointer (rptr) for the ringbuffer.
wptr
The current write pointer (wptr) for the ringbuffer.
size
Maximum size of the ringbuffer programmed in the hardware.
data
The contents of the ring encoded as ascii85. Only the used
portions of the ring will be printed.
bo
List of buffers from the hanging submission if available.
Each buffer object will have a unique iova.
iova
GPU address of the buffer object.
size
Allocated size of the buffer object.
data
The contents of the buffer object encoded with ascii85. Trailing
zeros at the end of the buffer will be skipped.
registers
Set of register values. Each entry is on its own line enclosed
by brackets { }.
offset
Byte offset of the register from the start of the
GPU memory region.
value
Hexadecimal value of the register.
registers-hlsq
(5xx only) Register values from the HLSQ aperture.
Same format as the register section.
......@@ -392,6 +392,7 @@ bool mipi_dsi_packet_format_is_short(u8 type)
case MIPI_DSI_DCS_SHORT_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
case MIPI_DSI_DCS_READ:
case MIPI_DSI_DCS_COMPRESSION_MODE:
case MIPI_DSI_SET_MAXIMUM_RETURN_PACKET_SIZE:
return true;
}
......@@ -410,6 +411,7 @@ EXPORT_SYMBOL(mipi_dsi_packet_format_is_short);
bool mipi_dsi_packet_format_is_long(u8 type)
{
switch (type) {
case MIPI_DSI_PPS_LONG_WRITE:
case MIPI_DSI_NULL_PACKET:
case MIPI_DSI_BLANKING_PACKET:
case MIPI_DSI_GENERIC_LONG_WRITE:
......
......@@ -30,6 +30,100 @@
#include <drm/drmP.h>
#include <drm/drm_print.h>
void __drm_puts_coredump(struct drm_printer *p, const char *str)
{
struct drm_print_iterator *iterator = p->arg;
ssize_t len;
if (!iterator->remain)
return;
if (iterator->offset < iterator->start) {
ssize_t copy;
len = strlen(str);
if (iterator->offset + len <= iterator->start) {
iterator->offset += len;
return;
}
copy = len - (iterator->start - iterator->offset);
if (copy > iterator->remain)
copy = iterator->remain;
/* Copy out the bit of the string that we need */
memcpy(iterator->data,
str + (iterator->start - iterator->offset), copy);
iterator->offset = iterator->start + copy;
iterator->remain -= copy;
} else {
ssize_t pos = iterator->offset - iterator->start;
len = min_t(ssize_t, strlen(str), iterator->remain);
memcpy(iterator->data + pos, str, len);
iterator->offset += len;
iterator->remain -= len;
}
}
EXPORT_SYMBOL(__drm_puts_coredump);
void __drm_printfn_coredump(struct drm_printer *p, struct va_format *vaf)
{
struct drm_print_iterator *iterator = p->arg;
size_t len;
char *buf;
if (!iterator->remain)
return;
/* Figure out how big the string will be */
len = snprintf(NULL, 0, "%pV", vaf);
/* This is the easiest path, we've already advanced beyond the offset */
if (iterator->offset + len <= iterator->start) {
iterator->offset += len;
return;
}
/* Then check if we can directly copy into the target buffer */
if ((iterator->offset >= iterator->start) && (len < iterator->remain)) {
ssize_t pos = iterator->offset - iterator->start;
snprintf(((char *) iterator->data) + pos,
iterator->remain, "%pV", vaf);
iterator->offset += len;
iterator->remain -= len;
return;
}
/*
* Finally, hit the slow path and make a temporary string to copy over
* using _drm_puts_coredump
*/
buf = kmalloc(len + 1, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
if (!buf)
return;
snprintf(buf, len + 1, "%pV", vaf);
__drm_puts_coredump(p, (const char *) buf);
kfree(buf);
}
EXPORT_SYMBOL(__drm_printfn_coredump);
void __drm_puts_seq_file(struct drm_printer *p, const char *str)
{
seq_puts(p->arg, str);
}
EXPORT_SYMBOL(__drm_puts_seq_file);
void __drm_printfn_seq_file(struct drm_printer *p, struct va_format *vaf)
{
seq_printf(p->arg, "%pV", vaf);
......@@ -48,6 +142,23 @@ void __drm_printfn_debug(struct drm_printer *p, struct va_format *vaf)
}
EXPORT_SYMBOL(__drm_printfn_debug);
/**
* drm_puts - print a const string to a &drm_printer stream
* @p: the &drm printer
* @str: const string
*
* Allow &drm_printer types that have a constant string
* option to use it.
*/
void drm_puts(struct drm_printer *p, const char *str)
{
if (p->puts)
p->puts(p, str);
else
drm_printf(p, "%s", str);
}
EXPORT_SYMBOL(drm_puts);
/**
* drm_printf - print to a &drm_printer stream
* @p: the &drm_printer
......
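The two coredump helpers above are the backend for a devcoredump-style printer:
a driver's read() callback can replay the same state dump repeatedly, each time
asking only for the window [offset, offset + count) of the rendered text. A
minimal sketch of such a callback follows; struct example_gpu_state and
example_gpu_state_show() are hypothetical stand-ins for a driver's captured
crash state and its drm_printer-based show function, and drm_coredump_printer()
plus struct drm_print_iterator are assumed from the drm_print.h side of this
series.

/*
 * Hypothetical devcoredump read() callback (not part of this patch): replay a
 * captured GPU state through the coredump printer one chunk at a time.
 */
static ssize_t example_devcoredump_read(char *buffer, loff_t offset,
		size_t count, void *data, size_t datalen)
{
	struct example_gpu_state *state = data;
	struct drm_print_iterator iter;
	struct drm_printer p;

	/* Describe the window of output that devcoredump is asking for */
	iter.data = buffer;
	iter.start = offset;
	iter.remain = count;
	iter.offset = 0;

	p = drm_coredump_printer(&iter);

	/* The show function just uses drm_printf()/drm_puts() as usual */
	example_gpu_state_show(state, &p);

	/* Number of bytes actually placed in the buffer for this window */
	return count - iter.remain;
}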
......@@ -31,6 +31,7 @@
#include <linux/stop_machine.h>
#include <linux/zlib.h>
#include <drm/drm_print.h>
#include <linux/ascii85.h>
#include "i915_gpu_error.h"
#include "i915_drv.h"
......@@ -517,35 +518,12 @@ void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...)
va_end(args);
}
static int
ascii85_encode_len(int len)
{
return DIV_ROUND_UP(len, 4);
}
static bool
ascii85_encode(u32 in, char *out)
{
int i;
if (in == 0)
return false;
out[5] = '\0';
for (i = 5; i--; ) {
out[i] = '!' + in % 85;
in /= 85;
}
return true;
}
static void print_error_obj(struct drm_i915_error_state_buf *m,
struct intel_engine_cs *engine,
const char *name,
struct drm_i915_error_object *obj)
{
char out[6];
char out[ASCII85_BUFSZ];
int page;
if (!obj)
......@@ -567,12 +545,8 @@ static void print_error_obj(struct drm_i915_error_state_buf *m,
len -= obj->unused;
len = ascii85_encode_len(len);
for (i = 0; i < len; i++) {
if (ascii85_encode(obj->pages[page][i], out))
err_puts(m, out);
else
err_puts(m, "z");
}
for (i = 0; i < len; i++)
err_puts(m, ascii85_encode(obj->pages[page][i], out));
}
err_puts(m, "\n");
}
......
......@@ -12,6 +12,7 @@ config DRM_MSM
select SHMEM
select TMPFS
select QCOM_SCM
select WANT_DEV_COREDUMP
select SND_SOC_HDMI_CODEC if SND_SOC
select SYNC_FILE
select PM_OPP
......
# SPDX-License-Identifier: GPL-2.0
ccflags-y := -Idrivers/gpu/drm/msm
ccflags-y += -Idrivers/gpu/drm/msm/disp/dpu1
ccflags-$(CONFIG_DRM_MSM_DSI) += -Idrivers/gpu/drm/msm/dsi
msm-y := \
......@@ -45,6 +46,33 @@ msm-y := \
disp/mdp5/mdp5_mixer.o \
disp/mdp5/mdp5_plane.o \
disp/mdp5/mdp5_smp.o \
disp/dpu1/dpu_core_irq.o \
disp/dpu1/dpu_core_perf.o \
disp/dpu1/dpu_crtc.o \
disp/dpu1/dpu_encoder.o \
disp/dpu1/dpu_encoder_phys_cmd.o \
disp/dpu1/dpu_encoder_phys_vid.o \
disp/dpu1/dpu_formats.o \
disp/dpu1/dpu_hw_blk.o \
disp/dpu1/dpu_hw_catalog.o \
disp/dpu1/dpu_hw_cdm.o \
disp/dpu1/dpu_hw_ctl.o \
disp/dpu1/dpu_hw_interrupts.o \
disp/dpu1/dpu_hw_intf.o \
disp/dpu1/dpu_hw_lm.o \
disp/dpu1/dpu_hw_pingpong.o \
disp/dpu1/dpu_hw_sspp.o \
disp/dpu1/dpu_hw_top.o \
disp/dpu1/dpu_hw_util.o \
disp/dpu1/dpu_hw_vbif.o \
disp/dpu1/dpu_io_util.o \
disp/dpu1/dpu_irq.o \
disp/dpu1/dpu_kms.o \
disp/dpu1/dpu_mdss.o \
disp/dpu1/dpu_plane.o \
disp/dpu1/dpu_power_handle.o \
disp/dpu1/dpu_rm.o \
disp/dpu1/dpu_vbif.o \
msm_atomic.o \
msm_debugfs.o \
msm_drv.o \
......@@ -62,7 +90,8 @@ msm-y := \
msm_ringbuffer.o \
msm_submitqueue.o
msm-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o
msm-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
disp/dpu1/dpu_dbg.o
msm-$(CONFIG_DRM_FBDEV_EMULATION) += msm_fbdev.o
msm-$(CONFIG_COMMON_CLK) += disp/mdp4/mdp4_lvds_pll.o
......
......@@ -411,15 +411,6 @@ static const unsigned int a3xx_registers[] = {
~0 /* sentinel */
};
#ifdef CONFIG_DEBUG_FS
static void a3xx_show(struct msm_gpu *gpu, struct seq_file *m)
{
seq_printf(m, "status: %08x\n",
gpu_read(gpu, REG_A3XX_RBBM_STATUS));
adreno_show(gpu, m);
}
#endif
/* would be nice to not have to duplicate the _show() stuff with printk(): */
static void a3xx_dump(struct msm_gpu *gpu)
{
......@@ -427,6 +418,21 @@ static void a3xx_dump(struct msm_gpu *gpu)
gpu_read(gpu, REG_A3XX_RBBM_STATUS));
adreno_dump(gpu);
}
static struct msm_gpu_state *a3xx_gpu_state_get(struct msm_gpu *gpu)
{
struct msm_gpu_state *state = kzalloc(sizeof(*state), GFP_KERNEL);
if (!state)
return ERR_PTR(-ENOMEM);
adreno_gpu_state_get(gpu, state);
state->rbbm_status = gpu_read(gpu, REG_A3XX_RBBM_STATUS);
return state;
}
/* Register offset defines for A3XX */
static const unsigned int a3xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
REG_ADRENO_DEFINE(REG_ADRENO_CP_RB_BASE, REG_AXXX_CP_RB_BASE),
......@@ -450,9 +456,11 @@ static const struct adreno_gpu_funcs funcs = {
.active_ring = adreno_active_ring,
.irq = a3xx_irq,
.destroy = a3xx_destroy,
#ifdef CONFIG_DEBUG_FS
.show = a3xx_show,
#if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP)
.show = adreno_show,
#endif
.gpu_state_get = a3xx_gpu_state_get,
.gpu_state_put = adreno_gpu_state_put,
},
};
......
......@@ -455,15 +455,19 @@ static const unsigned int a4xx_registers[] = {
~0 /* sentinel */
};
#ifdef CONFIG_DEBUG_FS
static void a4xx_show(struct msm_gpu *gpu, struct seq_file *m)
static struct msm_gpu_state *a4xx_gpu_state_get(struct msm_gpu *gpu)
{
seq_printf(m, "status: %08x\n",
gpu_read(gpu, REG_A4XX_RBBM_STATUS));
adreno_show(gpu, m);
struct msm_gpu_state *state = kzalloc(sizeof(*state), GFP_KERNEL);
if (!state)
return ERR_PTR(-ENOMEM);
adreno_gpu_state_get(gpu, state);
state->rbbm_status = gpu_read(gpu, REG_A4XX_RBBM_STATUS);
return state;
}
#endif
/* Register offset defines for A4XX, in order of enum adreno_regs */
static const unsigned int a4xx_register_offsets[REG_ADRENO_REGISTER_MAX] = {
......@@ -538,9 +542,11 @@ static const struct adreno_gpu_funcs funcs = {
.active_ring = adreno_active_ring,
.irq = a4xx_irq,
.destroy = a4xx_destroy,
#ifdef CONFIG_DEBUG_FS
.show = a4xx_show,
#if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP)
.show = adreno_show,
#endif
.gpu_state_get = a4xx_gpu_state_get,
.gpu_state_put = adreno_gpu_state_put,
},
.get_timestamp = a4xx_get_timestamp,
};
......
......@@ -19,6 +19,7 @@
#include <linux/soc/qcom/mdt_loader.h>
#include <linux/pm_opp.h>
#include <linux/nvmem-consumer.h>
#include <linux/iopoll.h>
#include "msm_gem.h"
#include "msm_mmu.h"
#include "a5xx_gpu.h"
......@@ -1123,8 +1124,9 @@ static const u32 a5xx_registers[] = {
0xE800, 0xE806, 0xE810, 0xE89A, 0xE8A0, 0xE8A4, 0xE8AA, 0xE8EB,
0xE900, 0xE905, 0xEB80, 0xEB8F, 0xEBB0, 0xEBB0, 0xEC00, 0xEC05,
0xEC08, 0xECE9, 0xECF0, 0xECF0, 0xEA80, 0xEA80, 0xEA82, 0xEAA3,
0xEAA5, 0xEAC2, 0xA800, 0xA8FF, 0xAC60, 0xAC60, 0xB000, 0xB97F,
0xB9A0, 0xB9BF, ~0
0xEAA5, 0xEAC2, 0xA800, 0xA800, 0xA820, 0xA828, 0xA840, 0xA87D,
0XA880, 0xA88D, 0xA890, 0xA8A3, 0xA8D0, 0xA8D8, 0xA8E0, 0xA8F5,
0xAC60, 0xAC60, ~0,
};
static void a5xx_dump(struct msm_gpu *gpu)
......@@ -1195,19 +1197,231 @@ static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
return 0;
}
#ifdef CONFIG_DEBUG_FS
static void a5xx_show(struct msm_gpu *gpu, struct seq_file *m)
struct a5xx_crashdumper {
void *ptr;
struct drm_gem_object *bo;
u64 iova;
};
struct a5xx_gpu_state {
struct msm_gpu_state base;
u32 *hlsqregs;
};
#define gpu_poll_timeout(gpu, addr, val, cond, interval, timeout) \
readl_poll_timeout((gpu)->mmio + ((addr) << 2), val, cond, \
interval, timeout)
static int a5xx_crashdumper_init(struct msm_gpu *gpu,
struct a5xx_crashdumper *dumper)
{
seq_printf(m, "status: %08x\n",
gpu_read(gpu, REG_A5XX_RBBM_STATUS));
dumper->ptr = msm_gem_kernel_new_locked(gpu->dev,
SZ_1M, MSM_BO_UNCACHED, gpu->aspace,
&dumper->bo, &dumper->iova);
/*
* Temporarily disable hardware clock gating before going into
* adreno_show to avoid issues while reading the registers
*/
if (IS_ERR(dumper->ptr))
return PTR_ERR(dumper->ptr);
return 0;
}
static void a5xx_crashdumper_free(struct msm_gpu *gpu,
struct a5xx_crashdumper *dumper)
{
msm_gem_put_iova(dumper->bo, gpu->aspace);
msm_gem_put_vaddr(dumper->bo);
drm_gem_object_unreference(dumper->bo);
}
static int a5xx_crashdumper_run(struct msm_gpu *gpu,
struct a5xx_crashdumper *dumper)
{
u32 val;
if (IS_ERR_OR_NULL(dumper->ptr))
return -EINVAL;
gpu_write64(gpu, REG_A5XX_CP_CRASH_SCRIPT_BASE_LO,
REG_A5XX_CP_CRASH_SCRIPT_BASE_HI, dumper->iova);
gpu_write(gpu, REG_A5XX_CP_CRASH_DUMP_CNTL, 1);
return gpu_poll_timeout(gpu, REG_A5XX_CP_CRASH_DUMP_CNTL, val,
val & 0x04, 100, 10000);
}
/*
* This is a list of the registers that need to be read through the HLSQ
* aperture with the crashdumper. These are not nominally accessible from
* the CPU on a secure platform.
*/
static const struct {
u32 type;
u32 regoffset;
u32 count;
} a5xx_hlsq_aperture_regs[] = {
{ 0x35, 0xe00, 0x32 }, /* HLSQ non-context */
{ 0x31, 0x2080, 0x1 }, /* HLSQ 2D context 0 */
{ 0x33, 0x2480, 0x1 }, /* HLSQ 2D context 1 */
{ 0x32, 0xe780, 0x62 }, /* HLSQ 3D context 0 */
{ 0x34, 0xef80, 0x62 }, /* HLSQ 3D context 1 */
{ 0x3f, 0x0ec0, 0x40 }, /* SP non-context */
{ 0x3d, 0x2040, 0x1 }, /* SP 2D context 0 */
{ 0x3b, 0x2440, 0x1 }, /* SP 2D context 1 */
{ 0x3e, 0xe580, 0x170 }, /* SP 3D context 0 */
{ 0x3c, 0xed80, 0x170 }, /* SP 3D context 1 */
{ 0x3a, 0x0f00, 0x1c }, /* TP non-context */
{ 0x38, 0x2000, 0xa }, /* TP 2D context 0 */
{ 0x36, 0x2400, 0xa }, /* TP 2D context 1 */
{ 0x39, 0xe700, 0x80 }, /* TP 3D context 0 */
{ 0x37, 0xef00, 0x80 }, /* TP 3D context 1 */
};
static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
struct a5xx_gpu_state *a5xx_state)
{
struct a5xx_crashdumper dumper = { 0 };
u32 offset, count = 0;
u64 *ptr;
int i;
if (a5xx_crashdumper_init(gpu, &dumper))
return;
/* The script will be written at offset 0 */
ptr = dumper.ptr;
/* Start writing the data at offset 256k */
offset = dumper.iova + (256 * SZ_1K);
/* Count how many additional registers to get from the HLSQ aperture */
for (i = 0; i < ARRAY_SIZE(a5xx_hlsq_aperture_regs); i++)
count += a5xx_hlsq_aperture_regs[i].count;
a5xx_state->hlsqregs = kcalloc(count, sizeof(u32), GFP_KERNEL);
if (!a5xx_state->hlsqregs)
return;
/* Build the crashdump script */
for (i = 0; i < ARRAY_SIZE(a5xx_hlsq_aperture_regs); i++) {
u32 type = a5xx_hlsq_aperture_regs[i].type;
u32 c = a5xx_hlsq_aperture_regs[i].count;
/* Write the register to select the desired bank */
*ptr++ = ((u64) type << 8);
*ptr++ = (((u64) REG_A5XX_HLSQ_DBG_READ_SEL) << 44) |
(1 << 21) | 1;
*ptr++ = offset;
*ptr++ = (((u64) REG_A5XX_HLSQ_DBG_AHB_READ_APERTURE) << 44)
| c;
offset += c * sizeof(u32);
}
/* Write two zeros to close off the script */
*ptr++ = 0;
*ptr++ = 0;
if (a5xx_crashdumper_run(gpu, &dumper)) {
kfree(a5xx_state->hlsqregs);
a5xx_crashdumper_free(gpu, &dumper);
return;
}
/* Copy the data from the crashdumper to the state */
memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K),
count * sizeof(u32));
a5xx_crashdumper_free(gpu, &dumper);
}
static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu)
{
struct a5xx_gpu_state *a5xx_state = kzalloc(sizeof(*a5xx_state),
GFP_KERNEL);
if (!a5xx_state)
return ERR_PTR(-ENOMEM);
/* Temporarily disable hardware clock gating before reading the hw */
a5xx_set_hwcg(gpu, false);
adreno_show(gpu, m);
/* First get the generic state from the adreno core */
adreno_gpu_state_get(gpu, &(a5xx_state->base));
a5xx_state->base.rbbm_status = gpu_read(gpu, REG_A5XX_RBBM_STATUS);
/* Get the HLSQ regs with the help of the crashdumper */
a5xx_gpu_state_get_hlsq_regs(gpu, a5xx_state);
a5xx_set_hwcg(gpu, true);
return &a5xx_state->base;
}
static void a5xx_gpu_state_destroy(struct kref *kref)
{
struct msm_gpu_state *state = container_of(kref,
struct msm_gpu_state, ref);
struct a5xx_gpu_state *a5xx_state = container_of(state,
struct a5xx_gpu_state, base);
kfree(a5xx_state->hlsqregs);
adreno_gpu_state_destroy(state);
kfree(a5xx_state);
}
int a5xx_gpu_state_put(struct msm_gpu_state *state)
{
if (IS_ERR_OR_NULL(state))
return 1;
return kref_put(&state->ref, a5xx_gpu_state_destroy);
}
#if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP)
void a5xx_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
struct drm_printer *p)
{
int i, j;
u32 pos = 0;
struct a5xx_gpu_state *a5xx_state = container_of(state,
struct a5xx_gpu_state, base);
if (IS_ERR_OR_NULL(state))
return;
adreno_show(gpu, state, p);
/* Dump the additional a5xx HLSQ registers */
if (!a5xx_state->hlsqregs)
return;
drm_printf(p, "registers-hlsq:\n");
for (i = 0; i < ARRAY_SIZE(a5xx_hlsq_aperture_regs); i++) {
u32 o = a5xx_hlsq_aperture_regs[i].regoffset;
u32 c = a5xx_hlsq_aperture_regs[i].count;
for (j = 0; j < c; j++, pos++, o++) {
/*
* To keep the crashdump simple we pull the entire range
* for each register type but not all of the registers
* in the range are valid. Fortunately invalid registers
* stick out like a sore thumb with a value of
* 0xdeadbeef
*/
if (a5xx_state->hlsqregs[pos] == 0xdeadbeef)
continue;
drm_printf(p, " - { offset: 0x%04x, value: 0x%08x }\n",
o << 2, a5xx_state->hlsqregs[pos]);
}
}
}
#endif
......@@ -1239,11 +1453,15 @@ static const struct adreno_gpu_funcs funcs = {
.active_ring = a5xx_active_ring,
.irq = a5xx_irq,
.destroy = a5xx_destroy,
#ifdef CONFIG_DEBUG_FS
#if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP)
.show = a5xx_show,
#endif
#if defined(CONFIG_DEBUG_FS)
.debugfs_init = a5xx_debugfs_init,
#endif
.gpu_busy = a5xx_gpu_busy,
.gpu_state_get = a5xx_gpu_state_get,
.gpu_state_put = a5xx_gpu_state_put,
},
.get_timestamp = a5xx_get_timestamp,
};
......
......@@ -35,6 +35,7 @@ static const struct adreno_info gpulist[] = {
[ADRENO_FW_PFP] = "a300_pfp.fw",
},
.gmem = SZ_256K,
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a3xx_gpu_init,
}, {
.rev = ADRENO_REV(3, 0, 6, 0),
......@@ -45,6 +46,7 @@ static const struct adreno_info gpulist[] = {
[ADRENO_FW_PFP] = "a300_pfp.fw",
},
.gmem = SZ_128K,
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a3xx_gpu_init,
}, {
.rev = ADRENO_REV(3, 2, ANY_ID, ANY_ID),
......@@ -55,6 +57,7 @@ static const struct adreno_info gpulist[] = {
[ADRENO_FW_PFP] = "a300_pfp.fw",
},
.gmem = SZ_512K,
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a3xx_gpu_init,
}, {
.rev = ADRENO_REV(3, 3, 0, ANY_ID),
......@@ -65,6 +68,7 @@ static const struct adreno_info gpulist[] = {
[ADRENO_FW_PFP] = "a330_pfp.fw",
},
.gmem = SZ_1M,
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a3xx_gpu_init,
}, {
.rev = ADRENO_REV(4, 2, 0, ANY_ID),
......@@ -75,6 +79,7 @@ static const struct adreno_info gpulist[] = {
[ADRENO_FW_PFP] = "a420_pfp.fw",
},
.gmem = (SZ_1M + SZ_512K),
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a4xx_gpu_init,
}, {
.rev = ADRENO_REV(4, 3, 0, ANY_ID),
......@@ -85,6 +90,7 @@ static const struct adreno_info gpulist[] = {
[ADRENO_FW_PFP] = "a420_pfp.fw",
},
.gmem = (SZ_1M + SZ_512K),
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a4xx_gpu_init,
}, {
.rev = ADRENO_REV(5, 3, 0, 2),
......@@ -96,6 +102,11 @@ static const struct adreno_info gpulist[] = {
[ADRENO_FW_GPMU] = "a530v3_gpmu.fw2",
},
.gmem = SZ_1M,
/*
* Increase inactive period to 250 to avoid bouncing
* the GDSC which appears to make it grumpy
*/
.inactive_period = 250,
.quirks = ADRENO_QUIRK_TWO_PASS_USE_WFI |
ADRENO_QUIRK_FAULT_DETECT_MASK,
.init = a5xx_gpu_init,
......@@ -158,7 +169,7 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
mutex_lock(&dev->struct_mutex);
ret = msm_gpu_hw_init(gpu);
mutex_unlock(&dev->struct_mutex);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_put_autosuspend(&pdev->dev);
if (ret) {
dev_err(dev->dev, "gpu hw init failed: %d\n", ret);
return NULL;
......@@ -316,6 +327,7 @@ static int adreno_suspend(struct device *dev)
#endif
static const struct dev_pm_ops adreno_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
SET_RUNTIME_PM_OPS(adreno_suspend, adreno_resume, NULL)
};
......
......@@ -17,6 +17,7 @@
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/ascii85.h>
#include <linux/pm_opp.h>
#include "adreno_gpu.h"
#include "msm_gem.h"
......@@ -368,40 +369,185 @@ bool adreno_idle(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
return false;
}
#ifdef CONFIG_DEBUG_FS
void adreno_show(struct msm_gpu *gpu, struct seq_file *m)
int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
int i, count = 0;
kref_init(&state->ref);
ktime_get_real_ts64(&state->time);
for (i = 0; i < gpu->nr_rings; i++) {
int size = 0, j;
state->ring[i].fence = gpu->rb[i]->memptrs->fence;
state->ring[i].iova = gpu->rb[i]->iova;
state->ring[i].seqno = gpu->rb[i]->seqno;
state->ring[i].rptr = get_rptr(adreno_gpu, gpu->rb[i]);
state->ring[i].wptr = get_wptr(gpu->rb[i]);
/* Copy at least 'wptr' dwords of the data */
size = state->ring[i].wptr;
/* After wptr find the last non zero dword to save space */
for (j = state->ring[i].wptr; j < MSM_GPU_RINGBUFFER_SZ >> 2; j++)
if (gpu->rb[i]->start[j])
size = j + 1;
if (size) {
state->ring[i].data = kmalloc(size << 2, GFP_KERNEL);
if (state->ring[i].data) {
memcpy(state->ring[i].data, gpu->rb[i]->start, size << 2);
state->ring[i].data_size = size << 2;
}
}
}
/* Count the number of registers */
for (i = 0; adreno_gpu->registers[i] != ~0; i += 2)
count += adreno_gpu->registers[i + 1] -
adreno_gpu->registers[i] + 1;
state->registers = kcalloc(count * 2, sizeof(u32), GFP_KERNEL);
if (state->registers) {
int pos = 0;
for (i = 0; adreno_gpu->registers[i] != ~0; i += 2) {
u32 start = adreno_gpu->registers[i];
u32 end = adreno_gpu->registers[i + 1];
u32 addr;
for (addr = start; addr <= end; addr++) {
state->registers[pos++] = addr;
state->registers[pos++] = gpu_read(gpu, addr);
}
}
state->nr_registers = count;
}
return 0;
}
void adreno_gpu_state_destroy(struct msm_gpu_state *state)
{
int i;
seq_printf(m, "revision: %d (%d.%d.%d.%d)\n",
for (i = 0; i < ARRAY_SIZE(state->ring); i++)
kfree(state->ring[i].data);
for (i = 0; state->bos && i < state->nr_bos; i++)
kvfree(state->bos[i].data);
kfree(state->bos);
kfree(state->comm);
kfree(state->cmd);
kfree(state->registers);
}
static void adreno_gpu_state_kref_destroy(struct kref *kref)
{
struct msm_gpu_state *state = container_of(kref,
struct msm_gpu_state, ref);
adreno_gpu_state_destroy(state);
kfree(state);
}
int adreno_gpu_state_put(struct msm_gpu_state *state)
{
if (IS_ERR_OR_NULL(state))
return 1;
return kref_put(&state->ref, adreno_gpu_state_kref_destroy);
}
#if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP)
static void adreno_show_object(struct drm_printer *p, u32 *ptr, int len)
{
char out[ASCII85_BUFSZ];
long l, datalen, i;
if (!ptr || !len)
return;
/*
* Only dump the non-zero part of the buffer - rarely will any data
* completely fill the entire allocated size of the buffer
*/
for (datalen = 0, i = 0; i < len >> 2; i++) {
if (ptr[i])
datalen = (i << 2) + 1;
}
/* Skip printing the object if it is empty */
if (datalen == 0)
return;
l = ascii85_encode_len(datalen);
drm_puts(p, " data: !!ascii85 |\n");
drm_puts(p, " ");
for (i = 0; i < l; i++)
drm_puts(p, ascii85_encode(ptr[i], out));
drm_puts(p, "\n");
}
void adreno_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
struct drm_printer *p)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
int i;
if (IS_ERR_OR_NULL(state))
return;
drm_printf(p, "revision: %d (%d.%d.%d.%d)\n",
adreno_gpu->info->revn, adreno_gpu->rev.core,
adreno_gpu->rev.major, adreno_gpu->rev.minor,
adreno_gpu->rev.patchid);
for (i = 0; i < gpu->nr_rings; i++) {
struct msm_ringbuffer *ring = gpu->rb[i];
drm_printf(p, "rbbm-status: 0x%08x\n", state->rbbm_status);
seq_printf(m, "rb %d: fence: %d/%d\n", i,
ring->memptrs->fence, ring->seqno);
drm_puts(p, "ringbuffer:\n");
seq_printf(m, " rptr: %d\n",
get_rptr(adreno_gpu, ring));
seq_printf(m, "rb wptr: %d\n", get_wptr(ring));
for (i = 0; i < gpu->nr_rings; i++) {
drm_printf(p, " - id: %d\n", i);
drm_printf(p, " iova: 0x%016llx\n", state->ring[i].iova);
drm_printf(p, " last-fence: %d\n", state->ring[i].seqno);
drm_printf(p, " retired-fence: %d\n", state->ring[i].fence);
drm_printf(p, " rptr: %d\n", state->ring[i].rptr);
drm_printf(p, " wptr: %d\n", state->ring[i].wptr);
drm_printf(p, " size: %d\n", MSM_GPU_RINGBUFFER_SZ);
adreno_show_object(p, state->ring[i].data,
state->ring[i].data_size);
}
/* dump these out in a form that can be parsed by demsm: */
seq_printf(m, "IO:region %s 00000000 00020000\n", gpu->name);
for (i = 0; adreno_gpu->registers[i] != ~0; i += 2) {
uint32_t start = adreno_gpu->registers[i];
uint32_t end = adreno_gpu->registers[i+1];
uint32_t addr;
if (state->bos) {
drm_puts(p, "bos:\n");
for (addr = start; addr <= end; addr++) {
uint32_t val = gpu_read(gpu, addr);
seq_printf(m, "IO:R %08x %08x\n", addr<<2, val);
for (i = 0; i < state->nr_bos; i++) {
drm_printf(p, " - iova: 0x%016llx\n",
state->bos[i].iova);
drm_printf(p, " size: %zd\n", state->bos[i].size);
adreno_show_object(p, state->bos[i].data,
state->bos[i].size);
}
}
drm_puts(p, "registers:\n");
for (i = 0; i < state->nr_registers; i++) {
drm_printf(p, " - { offset: 0x%04x, value: 0x%08x }\n",
state->registers[i * 2] << 2,
state->registers[(i * 2) + 1]);
}
}
#endif
......@@ -565,7 +711,8 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
adreno_get_pwrlevels(&pdev->dev, gpu);
pm_runtime_set_autosuspend_delay(&pdev->dev, DRM_MSM_INACTIVE_PERIOD);
pm_runtime_set_autosuspend_delay(&pdev->dev,
adreno_gpu->info->inactive_period);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_enable(&pdev->dev);
......
......@@ -84,6 +84,7 @@ struct adreno_info {
enum adreno_quirks quirks;
struct msm_gpu *(*init)(struct drm_device *dev);
const char *zapfw;
u32 inactive_period;
};
const struct adreno_info *adreno_info(struct adreno_rev rev);
......@@ -214,8 +215,9 @@ void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
struct msm_file_private *ctx);
void adreno_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
bool adreno_idle(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
#ifdef CONFIG_DEBUG_FS
void adreno_show(struct msm_gpu *gpu, struct seq_file *m);
#if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP)
void adreno_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
struct drm_printer *p);
#endif
void adreno_dump_info(struct msm_gpu *gpu);
void adreno_dump(struct msm_gpu *gpu);
......@@ -228,6 +230,11 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
void adreno_gpu_cleanup(struct adreno_gpu *gpu);
void adreno_gpu_state_destroy(struct msm_gpu_state *state);
int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state);
int adreno_gpu_state_put(struct msm_gpu_state *state);
/* ringbuffer helpers (the parts that are adreno specific) */
static inline void
......
/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __DPU_CORE_IRQ_H__
#define __DPU_CORE_IRQ_H__
#include "dpu_kms.h"
#include "dpu_hw_interrupts.h"
/**
* dpu_core_irq_preinstall - perform pre-installation of core IRQ handler
* @dpu_kms: DPU handle
* @return: none
*/
void dpu_core_irq_preinstall(struct dpu_kms *dpu_kms);
/**
* dpu_core_irq_postinstall - perform post-installation of core IRQ handler
* @dpu_kms: DPU handle
* @return: 0 if success; error code otherwise
*/
int dpu_core_irq_postinstall(struct dpu_kms *dpu_kms);
/**
* dpu_core_irq_uninstall - uninstall core IRQ handler
* @dpu_kms: DPU handle
* @return: none
*/
void dpu_core_irq_uninstall(struct dpu_kms *dpu_kms);
/**
* dpu_core_irq - core IRQ handler
* @dpu_kms: DPU handle
* @return: interrupt handling status
*/
irqreturn_t dpu_core_irq(struct dpu_kms *dpu_kms);
/**
* dpu_core_irq_idx_lookup - IRQ helper function for lookup irq_idx from HW
* interrupt mapping table.
* @dpu_kms: DPU handle
* @intr_type: DPU HW interrupt type for lookup
* @instance_idx: DPU HW block instance defined in dpu_hw_mdss.h
* @return: irq_idx or -EINVAL when fail to lookup
*/
int dpu_core_irq_idx_lookup(
struct dpu_kms *dpu_kms,
enum dpu_intr_type intr_type,
uint32_t instance_idx);
/**
* dpu_core_irq_enable - IRQ helper function for enabling one or more IRQs
* @dpu_kms: DPU handle
* @irq_idxs: Array of irq index
* @irq_count: Number of irq_idx provided in the array
* @return: 0 for success enabling IRQ, otherwise failure
*
* This function increments count on each enable and decrements on each
* disable. The interrupt is enabled if the count is 0 before the increment.
*/
int dpu_core_irq_enable(
struct dpu_kms *dpu_kms,
int *irq_idxs,
uint32_t irq_count);
/**
* dpu_core_irq_disable - IRQ helper function for disabling one or more IRQs
* @dpu_kms: DPU handle
* @irq_idxs: Array of irq index
* @irq_count: Number of irq_idx provided in the array
* @return: 0 for success disabling IRQ, otherwise failure
*
* This function increments count on each enable and decrements on each
* disable. The interrupt is disabled if the count is 0 after the decrement.
*/
int dpu_core_irq_disable(
struct dpu_kms *dpu_kms,
int *irq_idxs,
uint32_t irq_count);
/**
* dpu_core_irq_read - IRQ helper function for reading IRQ status
* @dpu_kms: DPU handle
* @irq_idx: irq index
* @clear: True to clear the irq after read
* @return: non-zero if irq detected; otherwise no irq detected
*/
u32 dpu_core_irq_read(
struct dpu_kms *dpu_kms,
int irq_idx,
bool clear);
/**
* dpu_core_irq_register_callback - For registering callback function on IRQ
* interrupt
* @dpu_kms: DPU handle
* @irq_idx: irq index
* @irq_cb: IRQ callback structure, containing callback function
* and argument. Passing NULL for irq_cb will unregister
* the callback for the given irq_idx
* This must exist until un-registration.
* @return: 0 for success registering callback, otherwise failure
*
* This function supports registration of multiple callbacks for each interrupt.
*/
int dpu_core_irq_register_callback(
struct dpu_kms *dpu_kms,
int irq_idx,
struct dpu_irq_callback *irq_cb);
/**
* dpu_core_irq_unregister_callback - For unregistering callback function on IRQ
* interrupt
* @dpu_kms: DPU handle
* @irq_idx: irq index
* @irq_cb: IRQ callback structure, containing callback function
* and argument. Passing NULL for irq_cb will unregister
* the callback for the given irq_idx
* This must match with registration.
* @return: 0 for success registering callback, otherwise failure
*
* This function supports registration of multiple callbacks for each interrupt.
*/
int dpu_core_irq_unregister_callback(
struct dpu_kms *dpu_kms,
int irq_idx,
struct dpu_irq_callback *irq_cb);
/**
* dpu_debugfs_core_irq_init - register core irq debugfs
* @dpu_kms: pointer to kms
* @parent: debugfs directory root
* @return: 0 on success
*/
int dpu_debugfs_core_irq_init(struct dpu_kms *dpu_kms,
struct dentry *parent);
/**
* dpu_debugfs_core_irq_destroy - deregister core irq debugfs
* @dpu_kms: pointer to kms
*/
void dpu_debugfs_core_irq_destroy(struct dpu_kms *dpu_kms);
#endif /* __DPU_CORE_IRQ_H__ */
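A minimal usage sketch of this interface (illustrative only, not part of the
patch): the layout of struct dpu_irq_callback is assumed from
dpu_hw_interrupts.h, and the interrupt type and ping-pong instance enums are
assumptions about the dpu1 catalog headers.

/*
 * Hypothetical example: look up, register and enable one DPU interrupt.
 * The callback structure must stay allocated until it is unregistered,
 * hence the static storage here.
 */
static void example_pp_done_cb(void *arg, int irq_idx)
{
	/* 'arg' is whatever was stored in the callback at registration time */
}

static struct dpu_irq_callback example_pp_done = {
	.list = LIST_HEAD_INIT(example_pp_done.list),
	.func = example_pp_done_cb,
};

static int example_enable_pp_done_irq(struct dpu_kms *dpu_kms, void *priv)
{
	int irq_idx, ret;

	/* Translate (type, hw instance) into the flat irq index used below */
	irq_idx = dpu_core_irq_idx_lookup(dpu_kms,
			DPU_IRQ_TYPE_PING_PONG_COMP, PINGPONG_0);
	if (irq_idx < 0)
		return irq_idx;

	example_pp_done.arg = priv;
	ret = dpu_core_irq_register_callback(dpu_kms, irq_idx, &example_pp_done);
	if (ret)
		return ret;

	/* The enable helper takes an array of irq indexes plus a count */
	return dpu_core_irq_enable(dpu_kms, &irq_idx, 1);
}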
/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _DPU_CORE_PERF_H_
#define _DPU_CORE_PERF_H_
#include <linux/types.h>
#include <linux/dcache.h>
#include <linux/mutex.h>
#include <drm/drm_crtc.h>
#include "dpu_hw_catalog.h"
#include "dpu_power_handle.h"
#define DPU_PERF_DEFAULT_MAX_CORE_CLK_RATE 412500000
/**
* struct dpu_core_perf_params - definition of performance parameters
* @max_per_pipe_ib: maximum instantaneous bandwidth request
* @bw_ctl: arbitrated bandwidth request
* @core_clk_rate: core clock rate request
*/
struct dpu_core_perf_params {
u64 max_per_pipe_ib[DPU_POWER_HANDLE_DBUS_ID_MAX];
u64 bw_ctl[DPU_POWER_HANDLE_DBUS_ID_MAX];
u64 core_clk_rate;
};
/**
* struct dpu_core_perf_tune - definition of performance tuning control
* @mode: performance mode
* @min_core_clk: minimum core clock
* @min_bus_vote: minimum bus vote
*/
struct dpu_core_perf_tune {
u32 mode;
u64 min_core_clk;
u64 min_bus_vote;
};
/**
* struct dpu_core_perf - definition of core performance context
* @dev: Pointer to drm device
* @debugfs_root: top level debug folder
* @catalog: Pointer to catalog configuration
* @phandle: Pointer to power handler
* @core_clk: Pointer to core clock structure
* @core_clk_rate: current core clock rate
* @max_core_clk_rate: maximum allowable core clock rate
* @perf_tune: debug control for performance tuning
* @enable_bw_release: debug control for bandwidth release
* @fix_core_clk_rate: fixed core clock request in Hz used in mode 2
* @fix_core_ib_vote: fixed core ib vote in bps used in mode 2
* @fix_core_ab_vote: fixed core ab vote in bps used in mode 2
*/
struct dpu_core_perf {
struct drm_device *dev;
struct dentry *debugfs_root;
struct dpu_mdss_cfg *catalog;
struct dpu_power_handle *phandle;
struct dss_clk *core_clk;
u64 core_clk_rate;
u64 max_core_clk_rate;
struct dpu_core_perf_tune perf_tune;
u32 enable_bw_release;
u64 fix_core_clk_rate;
u64 fix_core_ib_vote;
u64 fix_core_ab_vote;
};
/**
* dpu_core_perf_crtc_check - validate performance of the given crtc state
* @crtc: Pointer to crtc
* @state: Pointer to new crtc state
* return: zero if success, or error code otherwise
*/
int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
struct drm_crtc_state *state);
/**
* dpu_core_perf_crtc_update - update performance of the given crtc
* @crtc: Pointer to crtc
* @params_changed: true if crtc parameters are modified
* @stop_req: true if this is a stop request
* return: zero if success, or error code otherwise
*/
int dpu_core_perf_crtc_update(struct drm_crtc *crtc,
int params_changed, bool stop_req);
/**
* dpu_core_perf_crtc_release_bw - release bandwidth of the given crtc
* @crtc: Pointer to crtc
*/
void dpu_core_perf_crtc_release_bw(struct drm_crtc *crtc);
/**
* dpu_core_perf_destroy - destroy the given core performance context
* @perf: Pointer to core performance context
*/
void dpu_core_perf_destroy(struct dpu_core_perf *perf);
/**
* dpu_core_perf_init - initialize the given core performance context
* @perf: Pointer to core performance context
* @dev: Pointer to drm device
* @catalog: Pointer to catalog
* @phandle: Pointer to power handle
* @core_clk: pointer to core clock
*/
int dpu_core_perf_init(struct dpu_core_perf *perf,
struct drm_device *dev,
struct dpu_mdss_cfg *catalog,
struct dpu_power_handle *phandle,
struct dss_clk *core_clk);
/**
* dpu_core_perf_debugfs_init - initialize debugfs for core performance context
* @perf: Pointer to core performance context
* @parent: Pointer to parent debugfs directory
*/
int dpu_core_perf_debugfs_init(struct dpu_core_perf *perf,
struct dentry *parent);
#endif /* _DPU_CORE_PERF_H_ */
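As a rough illustration of where these hooks sit in a CRTC implementation
(only the dpu_core_perf_* signatures come from this header; the wrapper
functions are hypothetical):

/* Illustrative sketch, not part of the patch. */
static int example_crtc_atomic_check(struct drm_crtc *crtc,
				     struct drm_crtc_state *state)
{
	/* Reject the state if it needs more bandwidth/clock than is available */
	return dpu_core_perf_crtc_check(crtc, state);
}

static void example_crtc_commit(struct drm_crtc *crtc)
{
	/* params_changed = 1: re-evaluate the votes for the new configuration */
	dpu_core_perf_crtc_update(crtc, 1, false);
}

static void example_crtc_disable(struct drm_crtc *crtc)
{
	/* Pipe is going away: drop bandwidth and update with stop_req = true */
	dpu_core_perf_crtc_release_bw(crtc);
	dpu_core_perf_crtc_update(crtc, 0, true);
}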
/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef DPU_DBG_H_
#define DPU_DBG_H_
#include <stdarg.h>
#include <linux/debugfs.h>
#include <linux/list.h>
enum dpu_dbg_dump_flag {
DPU_DBG_DUMP_IN_LOG = BIT(0),
DPU_DBG_DUMP_IN_MEM = BIT(1),
};
#if defined(CONFIG_DEBUG_FS)
/**
* dpu_dbg_init_dbg_buses - initialize debug bus dumping support for the chipset
* @hwversion: Chipset revision
*/
void dpu_dbg_init_dbg_buses(u32 hwversion);
/**
* dpu_dbg_init - initialize global dpu debug facilities: regdump
* @dev: device handle
* Returns: 0 or -ERROR
*/
int dpu_dbg_init(struct device *dev);
/**
* dpu_dbg_debugfs_register - register entries at the given debugfs dir
* @debugfs_root: debugfs root in which to create dpu debug entries
* Returns: 0 or -ERROR
*/
int dpu_dbg_debugfs_register(struct dentry *debugfs_root);
/**
* dpu_dbg_destroy - destroy the global dpu debug facilities
* Returns: none
*/
void dpu_dbg_destroy(void);
/**
* dpu_dbg_dump - trigger dumping of all dpu_dbg facilities
* @queue_work: whether to queue the dumping work to the work_struct
* @name: string indicating origin of dump
* @dump_dbgbus_dpu: dump the dpu debug bus
* @dump_dbgbus_vbif_rt: dump the vbif rt bus
* Returns: none
*/
void dpu_dbg_dump(bool queue_work, const char *name, bool dump_dbgbus_dpu,
bool dump_dbgbus_vbif_rt);
/**
* dpu_dbg_set_dpu_top_offset - set the target specific offset from mdss base
* address of the top registers. Used for accessing debug bus controls.
* @blk_off: offset from mdss base of the top block
*/
void dpu_dbg_set_dpu_top_offset(u32 blk_off);
#else
static inline void dpu_dbg_init_dbg_buses(u32 hwversion)
{
}
static inline int dpu_dbg_init(struct device *dev)
{
return 0;
}
static inline int dpu_dbg_debugfs_register(struct dentry *debugfs_root)
{
return 0;
}
static inline void dpu_dbg_destroy(void)
{
}
static inline void dpu_dbg_dump(bool queue_work, const char *name,
bool dump_dbgbus_dpu, bool dump_dbgbus_vbif_rt)
{
}
static inline void dpu_dbg_set_dpu_top_offset(u32 blk_off)
{
}
#endif /* defined(CONFIG_DEBUG_FS) */
#endif /* DPU_DBG_H_ */
/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _DPU_FORMATS_H
#define _DPU_FORMATS_H
#include <drm/drm_fourcc.h>
#include "msm_gem.h"
#include "dpu_hw_mdss.h"
/**
* dpu_get_dpu_format_ext() - Returns dpu format structure pointer.
* @format: DRM FourCC Code
* @modifier: format modifier from the client
*/
const struct dpu_format *dpu_get_dpu_format_ext(
const uint32_t format,
const uint64_t modifier);
#define dpu_get_dpu_format(f) dpu_get_dpu_format_ext(f, 0)
/**
* dpu_get_msm_format - get a dpu_format by its msm_format base
* callback function registered with the msm_kms layer
* @kms: kms driver
* @format: DRM FourCC Code
* @modifiers: data layout modifier
*/
const struct msm_format *dpu_get_msm_format(
struct msm_kms *kms,
const uint32_t format,
const uint64_t modifiers);
/**
* dpu_populate_formats - populate the given array with fourcc codes supported
* @format_list: pointer to list of possible formats
* @pixel_formats: array to populate with fourcc codes
* @pixel_modifiers: array to populate with drm modifiers, can be NULL
* @pixel_formats_max: length of pixel formats array
* Return: number of elements populated
*/
uint32_t dpu_populate_formats(
const struct dpu_format_extended *format_list,
uint32_t *pixel_formats,
uint64_t *pixel_modifiers,
uint32_t pixel_formats_max);
/**
* dpu_format_check_modified_format - validate format and buffers for
* dpu non-standard, i.e. modified format
* @kms: kms driver
* @msm_fmt: pointer to the msm_fmt base pointer of an dpu_format
* @cmd: fb_cmd2 structure user request
* @bos: gem buffer object list
*
* Return: error code on failure, 0 on success
*/
int dpu_format_check_modified_format(
const struct msm_kms *kms,
const struct msm_format *msm_fmt,
const struct drm_mode_fb_cmd2 *cmd,
struct drm_gem_object **bos);
/**
* dpu_format_populate_layout - populate the given format layout based on
* mmu, fb, and format found in the fb
* @aspace: address space pointer
* @fb: framebuffer pointer
* @fmtl: format layout structure to populate
*
* Return: error code on failure, -EAGAIN if success but the addresses
* are the same as before or 0 if new addresses were populated
*/
int dpu_format_populate_layout(
struct msm_gem_address_space *aspace,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *fmtl);
#endif /*_DPU_FORMATS_H */
/* Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _DPU_HW_BLK_H
#define _DPU_HW_BLK_H
#include <linux/types.h>
#include <linux/list.h>
#include <linux/atomic.h>
struct dpu_hw_blk;
/**
* struct dpu_hw_blk_ops - common hardware block operations
* @start: start operation on first get
* @stop: stop operation on last put
*/
struct dpu_hw_blk_ops {
int (*start)(struct dpu_hw_blk *);
void (*stop)(struct dpu_hw_blk *);
};
/**
* struct dpu_hw_blk - definition of hardware block object
* @list: list of hardware blocks
* @type: hardware block type
* @id: instance id
* @refcount: reference/usage count
*/
struct dpu_hw_blk {
struct list_head list;
u32 type;
int id;
atomic_t refcount;
struct dpu_hw_blk_ops ops;
};
int dpu_hw_blk_init(struct dpu_hw_blk *hw_blk, u32 type, int id,
struct dpu_hw_blk_ops *ops);
void dpu_hw_blk_destroy(struct dpu_hw_blk *hw_blk);
struct dpu_hw_blk *dpu_hw_blk_get(struct dpu_hw_blk *hw_blk, u32 type, int id);
void dpu_hw_blk_put(struct dpu_hw_blk *hw_blk);
#endif /*_DPU_HW_BLK_H */
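A short sketch of how a driver-side block wrapper might embed this object
(illustrative only; DPU_HW_BLK_LM is an assumption about the block type enum
in dpu_hw_mdss.h):

/* Hypothetical layer mixer wrapper built on the common dpu_hw_blk object. */
struct example_hw_lm {
	struct dpu_hw_blk base;
	/* ... mapped registers, capabilities, etc. ... */
};

static struct dpu_hw_blk_ops example_lm_ops = {
	/* .start/.stop are optional hooks run on the first get / last put */
};

static int example_hw_lm_setup(struct example_hw_lm *lm, int idx)
{
	return dpu_hw_blk_init(&lm->base, DPU_HW_BLK_LM, idx, &example_lm_ops);
}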
......@@ -125,6 +125,8 @@ static void mdp4_complete_commit(struct msm_kms *kms, struct drm_atomic_state *s
struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
drm_atomic_helper_wait_for_vblanks(mdp4_kms->dev, state);
/* see 119ecb7fd */
for_each_new_crtc_in_state(state, crtc, crtc_state, i)
drm_crtc_vblank_put(crtc);
......
......@@ -319,7 +319,17 @@ static int mdp5_encoder_atomic_check(struct drm_encoder *encoder,
mdp5_cstate->ctl = ctl;
mdp5_cstate->pipeline.intf = intf;
mdp5_cstate->defer_start = true;
/*
* This is a bit awkward, but we want to flush the CTL and hit the
* START bit at most once for an atomic update. In the non-full-
* modeset case, this is done from crtc->atomic_flush(), but that
* is too early in the case of full modeset, in which case we
* defer to encoder->enable(). But we need to *know* whether
* encoder->enable() will be called to do this:
*/
if (drm_atomic_crtc_needs_modeset(crtc_state))
mdp5_cstate->defer_start = true;
return 0;
}
......
......@@ -170,6 +170,8 @@ static void mdp5_complete_commit(struct msm_kms *kms, struct drm_atomic_state *s
struct device *dev = &mdp5_kms->pdev->dev;
struct mdp5_global_state *global_state;
drm_atomic_helper_wait_for_vblanks(mdp5_kms->dev, state);
global_state = mdp5_get_existing_global_state(mdp5_kms);
if (mdp5_kms->smp)
......