Commit 2b534e90 authored by Dave Airlie

Merge tag 'drm-msm-next-2021-12-26' of ssh://gitlab.freedesktop.org/drm/msm into drm-next

* dpu plane state cleanup in prep for multirect
* dpu debugfs cleanup (and moving things to atomic_print_state) in prep
  for multirect
* dp support for sc7280
* struct_mutex removal
* include more GMU state in gpu devcore dumps
* add support for a506
* remove old eDP sub-driver (it was never used by any upstream-supported
  device, and modern devices with eDP will use the DP sub-driver instead)
* debugfs to disable hw gpu hang detect (for igt tests)
* debugfs for dumping display hw state
* and the usual assortment of cleanup and bug fixes

There still seems to be a timing issue with dpu, showing up on sc7180
devices, after the bridge probe-order change. I.e. things work great if
the loglevel is high enough (or enough debug options are enabled, etc.).
We'll continue to debug this in the new year.
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGs+vwr0nkwgYzuYAsCoHtypWpWav+yVvLZGsEJy8tJ56A@mail.gmail.com
parents 040bf2a9 6ed95285
......@@ -10,10 +10,12 @@
# Please keep this list dictionary sorted.
#
Aaron Durbin <adurbin@google.com>
Abhinav Kumar <quic_abhinavk@quicinc.com> <abhinavk@codeaurora.org>
Adam Oldham <oldhamca@gmail.com>
Adam Radford <aradford@gmail.com>
Adriana Reus <adi.reus@gmail.com> <adriana.reus@intel.com>
Adrian Bunk <bunk@stusta.de>
Akhil P Oommen <quic_akhilpo@quicinc.com> <akhilpo@codeaurora.org>
Alan Cox <alan@lxorguk.ukuu.org.uk>
Alan Cox <root@hraefn.swansea.linux.org.uk>
Aleksandar Markovic <aleksandar.markovic@mips.com> <aleksandar.markovic@imgtec.com>
......@@ -172,6 +174,7 @@ Jeff Layton <jlayton@kernel.org> <jlayton@redhat.com>
Jens Axboe <axboe@suse.de>
Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net>
Jessica Zhang <quic_jesszhan@quicinc.com> <jesszhan@codeaurora.org>
Jiri Slaby <jirislaby@kernel.org> <jirislaby@gmail.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@novell.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.com>
......@@ -191,6 +194,7 @@ Juha Yrjola <at solidboot.com>
Juha Yrjola <juha.yrjola@nokia.com>
Juha Yrjola <juha.yrjola@solidboot.com>
Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com>
Kalyan Thota <quic_kalyant@quicinc.com> <kalyan_t@codeaurora.org>
Kay Sievers <kay.sievers@vrfy.org>
Kees Cook <keescook@chromium.org> <kees.cook@canonical.com>
Kees Cook <keescook@chromium.org> <keescook@google.com>
......@@ -202,9 +206,11 @@ Kenneth W Chen <kenneth.w.chen@intel.com>
Konstantin Khlebnikov <koct9i@gmail.com> <khlebnikov@yandex-team.ru>
Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
Koushik <raghavendra.koushik@neterion.com>
Krishna Manikandan <quic_mkrishn@quicinc.com> <mkrishn@codeaurora.org>
Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com>
Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org>
Leonardo Bras <leobras.c@gmail.com> <leonardo@linux.ibm.com>
Leonid I Ananiev <leonid.i.ananiev@intel.com>
Leon Romanovsky <leon@kernel.org> <leon@leon.nu>
......@@ -311,6 +317,7 @@ Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
Rajeev Nandan <quic_rajeevny@quicinc.com> <rajeevny@codeaurora.org>
Rajesh Shah <rajesh.shah@intel.com>
Ralf Baechle <ralf@linux-mips.org>
Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
......@@ -325,6 +332,7 @@ Rui Saraiva <rmps@joel.ist.utl.pt>
Sachin P Sant <ssant@in.ibm.com>
Sakari Ailus <sakari.ailus@linux.intel.com> <sakari.ailus@iki.fi>
Sam Ravnborg <sam@mars.ravnborg.org>
Sankeerth Billakanti <quic_sbillaka@quicinc.com> <sbillaka@codeaurora.org>
Santosh Shilimkar <santosh.shilimkar@oracle.org>
Santosh Shilimkar <ssantosh@kernel.org>
Sarangdhar Joshi <spjoshi@codeaurora.org>
......
......@@ -17,6 +17,8 @@ properties:
compatible:
enum:
- qcom,sc7180-dp
- qcom,sc7280-dp
- qcom,sc7280-edp
- qcom,sc8180x-dp
- qcom,sc8180x-edp
......
Qualcomm Technologies Inc. adreno/snapdragon eDP output
Required properties:
- compatible:
* "qcom,mdss-edp"
- reg: Physical base address and length of the registers of controller and PLL
- reg-names: The names of register regions. The following regions are required:
* "edp"
* "pll_base"
- interrupts: The interrupt signal from the eDP block.
- power-domains: Should be <&mmcc MDSS_GDSC>.
- clocks: device clocks
See Documentation/devicetree/bindings/clock/clock-bindings.txt for details.
- clock-names: the following clocks are required:
* "core"
* "iface"
* "mdp_core"
* "pixel"
* "link"
- #clock-cells: The value should be 1.
- vdda-supply: phandle to vdda regulator device node
- lvl-vdd-supply: phandle to regulator device node which is used to supply power
to HPD receiving chip
- panel-en-gpios: GPIO pin to supply power to panel.
- panel-hpd-gpios: GPIO pin used for eDP hpd.
Example:
mdss_edp: qcom,mdss_edp@fd923400 {
compatible = "qcom,mdss-edp";
reg-names =
"edp",
"pll_base";
reg = <0xfd923400 0x700>,
<0xfd923a00 0xd4>;
interrupt-parent = <&mdss_mdp>;
interrupts = <12 0>;
power-domains = <&mmcc MDSS_GDSC>;
clock-names =
"core",
"pixel",
"iface",
"link",
"mdp_core";
clocks =
<&mmcc MDSS_EDPAUX_CLK>,
<&mmcc MDSS_EDPPIXEL_CLK>,
<&mmcc MDSS_AHB_CLK>,
<&mmcc MDSS_EDPLINK_CLK>,
<&mmcc MDSS_MDP_CLK>;
#clock-cells = <1>;
vdda-supply = <&pma8084_l12>;
lvl-vdd-supply = <&lvl_vreg>;
panel-en-gpios = <&tlmm 137 0>;
panel-hpd-gpios = <&tlmm 103 0>;
};
......@@ -6050,6 +6050,7 @@ F: drivers/gpu/drm/tiny/mi0283qt.c
DRM DRIVER FOR MSM ADRENO GPU
M: Rob Clark <robdclark@gmail.com>
M: Sean Paul <sean@poorly.run>
R: Abhinav Kumar <quic_abhinavk@quicinc.com>
L: linux-arm-msm@vger.kernel.org
L: dri-devel@lists.freedesktop.org
L: freedreno@lists.freedesktop.org
......
......@@ -65,6 +65,7 @@ config DRM_MSM_HDMI_HDCP
config DRM_MSM_DP
bool "Enable DisplayPort support in MSM DRM driver"
depends on DRM_MSM
select RATIONAL
default y
help
Compile in support for DP driver in MSM DRM driver. DP external
......
......@@ -19,7 +19,7 @@ msm-y := \
hdmi/hdmi.o \
hdmi/hdmi_audio.o \
hdmi/hdmi_bridge.o \
hdmi/hdmi_connector.o \
hdmi/hdmi_hpd.o \
hdmi/hdmi_i2c.o \
hdmi/hdmi_phy.o \
hdmi/hdmi_phy_8960.o \
......@@ -27,12 +27,6 @@ msm-y := \
hdmi/hdmi_phy_8x60.o \
hdmi/hdmi_phy_8x74.o \
hdmi/hdmi_pll_8960.o \
edp/edp.o \
edp/edp_aux.o \
edp/edp_bridge.o \
edp/edp_connector.o \
edp/edp_ctrl.o \
edp/edp_phy.o \
disp/mdp_format.o \
disp/mdp_kms.o \
disp/mdp4/mdp4_crtc.o \
......
......@@ -12,7 +12,6 @@ static bool a2xx_idle(struct msm_gpu *gpu);
static void a2xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i;
......@@ -23,7 +22,7 @@ static void a2xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
/* ignore if there has not been a ctx switch: */
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
......
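The same lastctx -> cur_ctx_seqno conversion repeats in the a3xx/a4xx/a5xx/a6xx hunks below; the rationale is spelled out in the comment removed from a6xx_gpu.h further down. A minimal standalone sketch of why the compare moves from a context pointer to a sequence number (fake_ctx and its helpers are illustrative, not driver code):

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * A freed context can be reallocated at the same address, so comparing
 * pointers against the last-installed context can falsely match (the
 * classic ABA hazard). A monotonically increasing per-context seqno
 * never repeats, so the seqno compare stays correct.
 */
struct fake_ctx {
	uint32_t seqno;
};

static uint32_t next_seqno;	/* 0 is reserved for "no context" */
static uint32_t cur_ctx_seqno;	/* per-GPU: seqno of the installed ctx */

static struct fake_ctx *fake_ctx_create(void)
{
	struct fake_ctx *ctx = calloc(1, sizeof(*ctx));

	if (ctx)
		ctx->seqno = ++next_seqno;
	return ctx;
}

static bool needs_ctx_restore(const struct fake_ctx *ctx)
{
	return cur_ctx_seqno != ctx->seqno;
}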
......@@ -30,7 +30,6 @@ static bool a3xx_idle(struct msm_gpu *gpu);
static void a3xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i;
......@@ -41,7 +40,7 @@ static void a3xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
/* ignore if there has not been a ctx switch: */
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
......
......@@ -24,7 +24,6 @@ static bool a4xx_idle(struct msm_gpu *gpu);
static void a4xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i;
......@@ -35,7 +34,7 @@ static void a4xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
/* ignore if there has not been a ctx switch: */
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
......
......@@ -107,7 +107,7 @@ reset_set(void *data, u64 val)
* try to reset an active GPU.
*/
mutex_lock(&dev->struct_mutex);
mutex_lock(&gpu->lock);
release_firmware(adreno_gpu->fw[ADRENO_FW_PM4]);
adreno_gpu->fw[ADRENO_FW_PM4] = NULL;
......@@ -133,7 +133,7 @@ reset_set(void *data, u64 val)
gpu->funcs->recover(gpu);
pm_runtime_put_sync(&gpu->pdev->dev);
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
return 0;
}
......
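The struct_mutex removal noted in the merge message follows one pattern throughout: paths that mutate GPU state switch from the DRM-global dev->struct_mutex to the new per-GPU gpu->lock, and adreno_gpu_state_get (further down) now asserts it with WARN_ON(!mutex_is_locked(&gpu->lock)). A minimal sketch of a converted critical section, mirroring the adreno_load_gpu hunk below:

	/* any GPU-state mutation is now serialized on the per-GPU lock */
	mutex_lock(&gpu->lock);
	ret = msm_gpu_hw_init(gpu);
	mutex_unlock(&gpu->lock);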
......@@ -65,7 +65,6 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
struct msm_gem_object *obj;
uint32_t *ptr, dwords;
......@@ -76,7 +75,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit
case MSM_SUBMIT_CMD_IB_TARGET_BUF:
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
......@@ -126,12 +125,11 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu);
struct msm_drm_private *priv = gpu->dev->dev_private;
struct msm_ringbuffer *ring = submit->ring;
unsigned int i, ibs = 0;
if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) {
priv->lastctx = NULL;
gpu->cur_ctx_seqno = 0;
a5xx_submit_in_rb(gpu, submit);
return;
}
......@@ -166,7 +164,7 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
case MSM_SUBMIT_CMD_IB_TARGET_BUF:
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
......@@ -441,7 +439,7 @@ void a5xx_set_hwcg(struct msm_gpu *gpu, bool state)
const struct adreno_five_hwcg_regs *regs;
unsigned int i, sz;
if (adreno_is_a508(adreno_gpu)) {
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu)) {
regs = a50x_hwcg;
sz = ARRAY_SIZE(a50x_hwcg);
} else if (adreno_is_a509(adreno_gpu) || adreno_is_a512(adreno_gpu)) {
......@@ -485,7 +483,7 @@ static int a5xx_me_init(struct msm_gpu *gpu)
OUT_RING(ring, 0x00000000);
/* Specify workarounds for various microcode issues */
if (adreno_is_a530(adreno_gpu)) {
if (adreno_is_a506(adreno_gpu) || adreno_is_a530(adreno_gpu)) {
/* Workaround for token end syncs
* Force a WFI after every direct-render 3D mode draw and every
* 2D mode 3 draw
......@@ -620,8 +618,16 @@ static int a5xx_ucode_init(struct msm_gpu *gpu)
static int a5xx_zap_shader_resume(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
int ret;
/*
* Adreno 506 has the CPZ retention feature and doesn't require
* the zap shader to be resumed
*/
if (adreno_is_a506(adreno_gpu))
return 0;
ret = qcom_scm_set_remote_state(SCM_GPU_ZAP_SHADER_RESUME, GPU_PAS_ID);
if (ret)
DRM_ERROR("%s: zap-shader resume failed: %d\n",
......@@ -733,9 +739,10 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
0x00100000 + adreno_gpu->gmem - 1);
gpu_write(gpu, REG_A5XX_UCHE_GMEM_RANGE_MAX_HI, 0x00000000);
if (adreno_is_a508(adreno_gpu) || adreno_is_a510(adreno_gpu)) {
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu) ||
adreno_is_a510(adreno_gpu)) {
gpu_write(gpu, REG_A5XX_CP_MEQ_THRESHOLDS, 0x20);
if (adreno_is_a508(adreno_gpu))
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu))
gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x400);
else
gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x20);
......@@ -751,7 +758,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x40201B16);
}
if (adreno_is_a508(adreno_gpu))
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu))
gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL,
(0x100 << 11 | 0x100 << 22));
else if (adreno_is_a509(adreno_gpu) || adreno_is_a510(adreno_gpu) ||
......@@ -769,8 +776,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
* Disable the RB sampler datapath DP2 clock gating optimization
* for 1-SP GPUs, as it is enabled by default.
*/
if (adreno_is_a508(adreno_gpu) || adreno_is_a509(adreno_gpu) ||
adreno_is_a512(adreno_gpu))
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu) ||
adreno_is_a509(adreno_gpu) || adreno_is_a512(adreno_gpu))
gpu_rmw(gpu, REG_A5XX_RB_DBG_ECO_CNTL, 0, (1 << 9));
/* Disable UCHE global filter as SP can invalidate/flush independently */
......@@ -851,10 +858,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
/* UCHE */
gpu_write(gpu, REG_A5XX_CP_PROTECT(16), ADRENO_PROTECT_RW(0xE80, 16));
if (adreno_is_a508(adreno_gpu) || adreno_is_a509(adreno_gpu) ||
adreno_is_a510(adreno_gpu) || adreno_is_a512(adreno_gpu) ||
adreno_is_a530(adreno_gpu))
gpu_write(gpu, REG_A5XX_CP_PROTECT(17),
/* SMMU */
gpu_write(gpu, REG_A5XX_CP_PROTECT(17),
ADRENO_PROTECT_RW(0x10000, 0x8000));
gpu_write(gpu, REG_A5XX_RBBM_SECVID_TSB_CNTL, 0);
......@@ -895,8 +900,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
if (ret)
return ret;
if (!(adreno_is_a508(adreno_gpu) || adreno_is_a509(adreno_gpu) ||
adreno_is_a510(adreno_gpu) || adreno_is_a512(adreno_gpu)))
if (adreno_is_a530(adreno_gpu) || adreno_is_a540(adreno_gpu))
a5xx_gpmu_ucode_init(gpu);
ret = a5xx_ucode_init(gpu);
......@@ -927,6 +931,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
if (IS_ERR(a5xx_gpu->shadow))
return PTR_ERR(a5xx_gpu->shadow);
msm_gem_object_set_name(a5xx_gpu->shadow_bo, "shadow");
}
gpu_write64(gpu, REG_A5XX_CP_RB_RPTR_ADDR,
......@@ -1254,6 +1260,7 @@ static void a5xx_fault_detect_irq(struct msm_gpu *gpu)
static irqreturn_t a5xx_irq(struct msm_gpu *gpu)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
u32 status = gpu_read(gpu, REG_A5XX_RBBM_INT_0_STATUS);
/*
......@@ -1263,6 +1270,11 @@ static irqreturn_t a5xx_irq(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_RBBM_INT_CLEAR_CMD,
status & ~A5XX_RBBM_INT_0_MASK_RBBM_AHB_ERROR);
if (priv->disable_err_irq) {
status &= A5XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS |
A5XX_RBBM_INT_0_MASK_CP_SW;
}
/* Pass status to a5xx_rbbm_err_irq because we've already cleared it */
if (status & RBBM_ERROR_MASK)
a5xx_rbbm_err_irq(gpu, status);
......@@ -1338,7 +1350,7 @@ static int a5xx_pm_resume(struct msm_gpu *gpu)
if (ret)
return ret;
/* Adreno 508, 509, 510, 512 needs manual RBBM sus/res control */
/* Adreno 506, 508, 509, 510, 512 need manual RBBM sus/res control */
if (!(adreno_is_a530(adreno_gpu) || adreno_is_a540(adreno_gpu))) {
/* Halt the sp_input_clk at HM level */
gpu_write(gpu, REG_A5XX_RBBM_CLOCK_CNTL, 0x00000055);
......@@ -1381,8 +1393,9 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
u32 mask = 0xf;
int i, ret;
/* A508, A510 have 3 XIN ports in VBIF */
if (adreno_is_a508(adreno_gpu) || adreno_is_a510(adreno_gpu))
/* A506, A508, A510 have 3 XIN ports in VBIF */
if (adreno_is_a506(adreno_gpu) || adreno_is_a508(adreno_gpu) ||
adreno_is_a510(adreno_gpu))
mask = 0x7;
/* Clear the VBIF pipe before shutting down */
......
......@@ -1146,7 +1146,7 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
}
static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
size_t size, u64 iova)
size_t size, u64 iova, const char *name)
{
struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
struct drm_device *dev = a6xx_gpu->base.base.dev;
......@@ -1181,6 +1181,8 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
bo->virt = msm_gem_get_vaddr(bo->obj);
bo->size = size;
msm_gem_object_set_name(bo->obj, name);
return 0;
}
......@@ -1515,7 +1517,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
*/
gmu->dummy.size = SZ_4K;
if (adreno_is_a660_family(adreno_gpu)) {
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_4K * 7, 0x60400000);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_4K * 7,
0x60400000, "debug");
if (ret)
goto err_memory;
......@@ -1523,42 +1526,46 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
}
/* Allocate memory for the GMU dummy page */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->dummy, gmu->dummy.size, 0x60000000);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->dummy, gmu->dummy.size,
0x60000000, "dummy");
if (ret)
goto err_memory;
/* Note that a650 family also includes a660 family: */
if (adreno_is_a650_family(adreno_gpu)) {
ret = a6xx_gmu_memory_alloc(gmu, &gmu->icache,
SZ_16M - SZ_16K, 0x04000);
SZ_16M - SZ_16K, 0x04000, "icache");
if (ret)
goto err_memory;
} else if (adreno_is_a640_family(adreno_gpu)) {
ret = a6xx_gmu_memory_alloc(gmu, &gmu->icache,
SZ_256K - SZ_16K, 0x04000);
SZ_256K - SZ_16K, 0x04000, "icache");
if (ret)
goto err_memory;
ret = a6xx_gmu_memory_alloc(gmu, &gmu->dcache,
SZ_256K - SZ_16K, 0x44000);
SZ_256K - SZ_16K, 0x44000, "dcache");
if (ret)
goto err_memory;
} else {
BUG_ON(adreno_is_a660_family(adreno_gpu));
/* HFI v1, has sptprac */
gmu->legacy = true;
/* Allocate memory for the GMU debug region */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_16K, 0);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->debug, SZ_16K, 0, "debug");
if (ret)
goto err_memory;
}
/* Allocate memory for the HFI queues */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->hfi, SZ_16K, 0);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->hfi, SZ_16K, 0, "hfi");
if (ret)
goto err_memory;
/* Allocate memory for the GMU log region */
ret = a6xx_gmu_memory_alloc(gmu, &gmu->log, SZ_4K, 0);
ret = a6xx_gmu_memory_alloc(gmu, &gmu->log, SZ_4K, 0, "log");
if (ret)
goto err_memory;
......
......@@ -106,7 +106,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
u32 asid;
u64 memptr = rbmemptr(ring, ttbr0);
if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
if (ctx->seqno == a6xx_gpu->base.base.cur_ctx_seqno)
return;
if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
......@@ -138,14 +138,11 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
OUT_PKT7(ring, CP_EVENT_WRITE, 1);
OUT_RING(ring, 0x31);
a6xx_gpu->cur_ctx_seqno = ctx->seqno;
}
static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
struct msm_drm_private *priv = gpu->dev->dev_private;
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct msm_ringbuffer *ring = submit->ring;
......@@ -177,7 +174,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
case MSM_SUBMIT_CMD_IB_TARGET_BUF:
break;
case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
if (priv->lastctx == submit->queue->ctx)
if (gpu->cur_ctx_seqno == submit->queue->ctx->seqno)
break;
fallthrough;
case MSM_SUBMIT_CMD_BUF:
......@@ -1071,6 +1068,8 @@ static int hw_init(struct msm_gpu *gpu)
if (IS_ERR(a6xx_gpu->shadow))
return PTR_ERR(a6xx_gpu->shadow);
msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
}
gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR_LO,
......@@ -1081,7 +1080,7 @@ static int hw_init(struct msm_gpu *gpu)
/* Always come up on rb 0 */
a6xx_gpu->cur_ring = gpu->rb[0];
a6xx_gpu->cur_ctx_seqno = 0;
gpu->cur_ctx_seqno = 0;
/* Enable the SQE to start the CP engine */
gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
......@@ -1376,10 +1375,14 @@ static void a6xx_fault_detect_irq(struct msm_gpu *gpu)
static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
{
struct msm_drm_private *priv = gpu->dev->dev_private;
u32 status = gpu_read(gpu, REG_A6XX_RBBM_INT_0_STATUS);
gpu_write(gpu, REG_A6XX_RBBM_INT_CLEAR_CMD, status);
if (priv->disable_err_irq)
status &= A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS;
if (status & A6XX_RBBM_INT_0_MASK_RBBM_HANG_DETECT)
a6xx_fault_detect_irq(gpu);
......
......@@ -20,16 +20,6 @@ struct a6xx_gpu {
struct msm_ringbuffer *cur_ring;
/**
* cur_ctx_seqno:
*
* The ctx->seqno value of the context with current pgtables
* installed. Tracked by seqno rather than pointer value to
* avoid dangling pointers, and cases where a ctx can be freed
* and a new one created with the same address.
*/
int cur_ctx_seqno;
struct a6xx_gmu gmu;
struct drm_gem_object *shadow_bo;
......
......@@ -42,7 +42,15 @@ struct a6xx_gpu_state {
struct a6xx_gpu_state_obj *cx_debugbus;
int nr_cx_debugbus;
struct msm_gpu_state_bo *gmu_log;
struct msm_gpu_state_bo *gmu_hfi;
struct msm_gpu_state_bo *gmu_debug;
s32 hfi_queue_history[2][HFI_HISTORY_SZ];
struct list_head objs;
bool gpu_initialized;
};
static inline int CRASHDUMP_WRITE(u64 *in, u32 reg, u32 val)
......@@ -800,6 +808,45 @@ static void a6xx_get_gmu_registers(struct msm_gpu *gpu,
&a6xx_state->gmu_registers[2], false);
}
static struct msm_gpu_state_bo *a6xx_snapshot_gmu_bo(
struct a6xx_gpu_state *a6xx_state, struct a6xx_gmu_bo *bo)
{
struct msm_gpu_state_bo *snapshot;
snapshot = state_kcalloc(a6xx_state, 1, sizeof(*snapshot));
if (!snapshot)
return NULL;
snapshot->iova = bo->iova;
snapshot->size = bo->size;
snapshot->data = kvzalloc(snapshot->size, GFP_KERNEL);
if (!snapshot->data)
return NULL;
memcpy(snapshot->data, bo->virt, bo->size);
return snapshot;
}
static void a6xx_snapshot_gmu_hfi_history(struct msm_gpu *gpu,
struct a6xx_gpu_state *a6xx_state)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
unsigned i, j;
BUILD_BUG_ON(ARRAY_SIZE(gmu->queues) != ARRAY_SIZE(a6xx_state->hfi_queue_history));
for (i = 0; i < ARRAY_SIZE(gmu->queues); i++) {
struct a6xx_hfi_queue *queue = &gmu->queues[i];
for (j = 0; j < HFI_HISTORY_SZ; j++) {
unsigned idx = (j + queue->history_idx) % HFI_HISTORY_SZ;
a6xx_state->hfi_queue_history[i][j] = queue->history[idx];
}
}
}
#define A6XX_GBIF_REGLIST_SIZE 1
static void a6xx_get_registers(struct msm_gpu *gpu,
struct a6xx_gpu_state *a6xx_state,
......@@ -937,6 +984,12 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
a6xx_get_gmu_registers(gpu, a6xx_state);
a6xx_state->gmu_log = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.log);
a6xx_state->gmu_hfi = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.hfi);
a6xx_state->gmu_debug = a6xx_snapshot_gmu_bo(a6xx_state, &a6xx_gpu->gmu.debug);
a6xx_snapshot_gmu_hfi_history(gpu, a6xx_state);
/* If GX isn't on, the rest of the data isn't going to be accessible */
if (!a6xx_gmu_gx_is_on(&a6xx_gpu->gmu))
return &a6xx_state->base;
......@@ -950,7 +1003,8 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
* write out GPU state, so we need to skip this when the SMMU is
* stalled in response to an iova fault
*/
if (!stalled && !a6xx_crashdumper_init(gpu, &_dumper)) {
if (!stalled && !gpu->needs_hw_init &&
!a6xx_crashdumper_init(gpu, &_dumper)) {
dumper = &_dumper;
}
......@@ -967,6 +1021,8 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
if (snapshot_debugbus)
a6xx_get_debugbus(gpu, a6xx_state);
a6xx_state->gpu_initialized = !gpu->needs_hw_init;
return &a6xx_state->base;
}
......@@ -978,6 +1034,12 @@ static void a6xx_gpu_state_destroy(struct kref *kref)
struct a6xx_gpu_state *a6xx_state = container_of(state,
struct a6xx_gpu_state, base);
if (a6xx_state->gmu_log)
kvfree(a6xx_state->gmu_log->data);
if (a6xx_state->gmu_hfi)
kvfree(a6xx_state->gmu_hfi->data);
list_for_each_entry_safe(obj, tmp, &a6xx_state->objs, node)
kfree(obj);
......@@ -1189,8 +1251,48 @@ void a6xx_show(struct msm_gpu *gpu, struct msm_gpu_state *state,
if (IS_ERR_OR_NULL(state))
return;
drm_printf(p, "gpu-initialized: %d\n", a6xx_state->gpu_initialized);
adreno_show(gpu, state, p);
drm_puts(p, "gmu-log:\n");
if (a6xx_state->gmu_log) {
struct msm_gpu_state_bo *gmu_log = a6xx_state->gmu_log;
drm_printf(p, " iova: 0x%016llx\n", gmu_log->iova);
drm_printf(p, " size: %zu\n", gmu_log->size);
adreno_show_object(p, &gmu_log->data, gmu_log->size,
&gmu_log->encoded);
}
drm_puts(p, "gmu-hfi:\n");
if (a6xx_state->gmu_hfi) {
struct msm_gpu_state_bo *gmu_hfi = a6xx_state->gmu_hfi;
unsigned i, j;
drm_printf(p, " iova: 0x%016llx\n", gmu_hfi->iova);
drm_printf(p, " size: %zu\n", gmu_hfi->size);
for (i = 0; i < ARRAY_SIZE(a6xx_state->hfi_queue_history); i++) {
drm_printf(p, " queue-history[%u]:", i);
for (j = 0; j < HFI_HISTORY_SZ; j++) {
drm_printf(p, " %d", a6xx_state->hfi_queue_history[i][j]);
}
drm_printf(p, "\n");
}
adreno_show_object(p, &gmu_hfi->data, gmu_hfi->size,
&gmu_hfi->encoded);
}
drm_puts(p, "gmu-debug:\n");
if (a6xx_state->gmu_debug) {
struct msm_gpu_state_bo *gmu_debug = a6xx_state->gmu_debug;
drm_printf(p, " iova: 0x%016llx\n", gmu_debug->iova);
drm_printf(p, " size: %zu\n", gmu_debug->size);
adreno_show_object(p, &gmu_debug->data, gmu_debug->size,
&gmu_debug->encoded);
}
drm_puts(p, "registers:\n");
for (i = 0; i < a6xx_state->nr_registers; i++) {
struct a6xx_gpu_state_obj *obj = &a6xx_state->registers[i];
......
......@@ -36,6 +36,8 @@ static int a6xx_hfi_queue_read(struct a6xx_gmu *gmu,
hdr = queue->data[index];
queue->history[(queue->history_idx++) % HFI_HISTORY_SZ] = index;
/*
* If we are to assume that the GMU firmware is in fact a rational actor
* and is programmed to not send us a larger response than we expect
......@@ -75,6 +77,8 @@ static int a6xx_hfi_queue_write(struct a6xx_gmu *gmu,
return -ENOSPC;
}
queue->history[(queue->history_idx++) % HFI_HISTORY_SZ] = index;
for (i = 0; i < dwords; i++) {
queue->data[index] = data[i];
index = (index + 1) % header->size;
......@@ -600,6 +604,9 @@ void a6xx_hfi_stop(struct a6xx_gmu *gmu)
queue->header->read_index = 0;
queue->header->write_index = 0;
memset(&queue->history, 0xff, sizeof(queue->history));
queue->history_idx = 0;
}
}
......@@ -612,6 +619,9 @@ static void a6xx_hfi_queue_init(struct a6xx_hfi_queue *queue,
queue->data = virt;
atomic_set(&queue->seqnum, 0);
memset(&queue->history, 0xff, sizeof(queue->history));
queue->history_idx = 0;
/* Set up the shared memory header */
header->iova = iova;
header->type = 10 << 8 | id;
......
......@@ -33,6 +33,17 @@ struct a6xx_hfi_queue {
spinlock_t lock;
u32 *data;
atomic_t seqnum;
/*
* Tracking for the start index of the last N messages in the
* queue, for the benefit of devcore dump / crashdec (since
* parsing in the reverse direction to decode the last N
* messages is difficult to do and would rely on heuristics
* which are not guaranteed to be correct)
*/
#define HFI_HISTORY_SZ 8
s32 history[HFI_HISTORY_SZ];
u8 history_idx;
};
/* This is the outgoing queue to the GMU */
......
......@@ -131,6 +131,24 @@ static const struct adreno_info gpulist[] = {
.gmem = (SZ_1M + SZ_512K),
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.init = a4xx_gpu_init,
}, {
.rev = ADRENO_REV(5, 0, 6, ANY_ID),
.revn = 506,
.name = "A506",
.fw = {
[ADRENO_FW_PM4] = "a530_pm4.fw",
[ADRENO_FW_PFP] = "a530_pfp.fw",
},
.gmem = (SZ_128K + SZ_8K),
/*
* Increase inactive period to 250 to avoid bouncing
* the GDSC which appears to make it grumpy
*/
.inactive_period = 250,
.quirks = ADRENO_QUIRK_TWO_PASS_USE_WFI |
ADRENO_QUIRK_LMLOADKILL_DISABLE,
.init = a5xx_gpu_init,
.zapfw = "a506_zap.mdt",
}, {
.rev = ADRENO_REV(5, 0, 8, ANY_ID),
.revn = 508,
......@@ -408,9 +426,9 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
return NULL;
}
mutex_lock(&dev->struct_mutex);
mutex_lock(&gpu->lock);
ret = msm_gpu_hw_init(gpu);
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&gpu->lock);
pm_runtime_put_autosuspend(&pdev->dev);
if (ret) {
DRM_DEV_ERROR(dev->dev, "gpu hw init failed: %d\n", ret);
......@@ -427,13 +445,6 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
return gpu;
}
static void set_gpu_pdev(struct drm_device *dev,
struct platform_device *pdev)
{
struct msm_drm_private *priv = dev->dev_private;
priv->gpu_pdev = pdev;
}
static int find_chipid(struct device *dev, struct adreno_rev *rev)
{
struct device_node *node = dev->of_node;
......@@ -482,8 +493,8 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
{
static struct adreno_platform_config config = {};
const struct adreno_info *info;
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct drm_device *drm = priv->dev;
struct msm_gpu *gpu;
int ret;
......@@ -492,7 +503,7 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
return ret;
dev->platform_data = &config;
set_gpu_pdev(drm, to_platform_device(dev));
priv->gpu_pdev = to_platform_device(dev);
info = adreno_info(config.rev);
......@@ -521,12 +532,13 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
static void adreno_unbind(struct device *dev, struct device *master,
void *data)
{
struct msm_drm_private *priv = dev_get_drvdata(master);
struct msm_gpu *gpu = dev_to_gpu(dev);
pm_runtime_force_suspend(dev);
gpu->funcs->destroy(gpu);
set_gpu_pdev(dev_get_drvdata(master), NULL);
priv->gpu_pdev = NULL;
}
static const struct component_ops a3xx_ops = {
......
......@@ -504,6 +504,8 @@ int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state)
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
int i, count = 0;
WARN_ON(!mutex_is_locked(&gpu->lock));
kref_init(&state->ref);
ktime_get_real_ts64(&state->time);
......@@ -630,7 +632,7 @@ static char *adreno_gpu_ascii85_encode(u32 *src, size_t len)
}
/* len is expected to be in bytes */
static void adreno_show_object(struct drm_printer *p, void **ptr, int len,
void adreno_show_object(struct drm_printer *p, void **ptr, int len,
bool *encoded)
{
if (!*ptr || !len)
......
......@@ -201,6 +201,11 @@ static inline int adreno_is_a430(struct adreno_gpu *gpu)
return gpu->revn == 430;
}
static inline int adreno_is_a506(struct adreno_gpu *gpu)
{
return gpu->revn == 506;
}
static inline int adreno_is_a508(struct adreno_gpu *gpu)
{
return gpu->revn == 508;
......@@ -306,6 +311,8 @@ void adreno_gpu_state_destroy(struct msm_gpu_state *state);
int adreno_gpu_state_get(struct msm_gpu *gpu, struct msm_gpu_state *state);
int adreno_gpu_state_put(struct msm_gpu_state *state);
void adreno_show_object(struct drm_printer *p, void **ptr, int len,
bool *encoded);
/*
* Common helper function to initialize the default address space for arm-smmu
......
......@@ -337,7 +337,8 @@ static void _dpu_crtc_program_lm_output_roi(struct drm_crtc *crtc)
}
static void _dpu_crtc_blend_setup_mixer(struct drm_crtc *crtc,
struct dpu_crtc *dpu_crtc, struct dpu_crtc_mixer *mixer)
struct dpu_crtc *dpu_crtc, struct dpu_crtc_mixer *mixer,
struct dpu_hw_stage_cfg *stage_cfg)
{
struct drm_plane *plane;
struct drm_framebuffer *fb;
......@@ -346,7 +347,6 @@ static void _dpu_crtc_blend_setup_mixer(struct drm_crtc *crtc,
struct dpu_plane_state *pstate = NULL;
struct dpu_format *format;
struct dpu_hw_ctl *ctl = mixer->lm_ctl;
struct dpu_hw_stage_cfg *stage_cfg = &dpu_crtc->stage_cfg;
u32 flush_mask;
uint32_t stage_idx, lm_idx;
......@@ -422,6 +422,7 @@ static void _dpu_crtc_blend_setup(struct drm_crtc *crtc)
struct dpu_crtc_mixer *mixer = cstate->mixers;
struct dpu_hw_ctl *ctl;
struct dpu_hw_mixer *lm;
struct dpu_hw_stage_cfg stage_cfg;
int i;
DRM_DEBUG_ATOMIC("%s\n", dpu_crtc->name);
......@@ -435,9 +436,9 @@ static void _dpu_crtc_blend_setup(struct drm_crtc *crtc)
}
/* initialize stage cfg */
memset(&dpu_crtc->stage_cfg, 0, sizeof(struct dpu_hw_stage_cfg));
memset(&stage_cfg, 0, sizeof(struct dpu_hw_stage_cfg));
_dpu_crtc_blend_setup_mixer(crtc, dpu_crtc, mixer);
_dpu_crtc_blend_setup_mixer(crtc, dpu_crtc, mixer, &stage_cfg);
for (i = 0; i < cstate->num_mixers; i++) {
ctl = mixer[i].lm_ctl;
......@@ -458,7 +459,7 @@ static void _dpu_crtc_blend_setup(struct drm_crtc *crtc)
mixer[i].flush_mask);
ctl->ops.setup_blendstage(ctl, mixer[i].hw_lm->idx,
&dpu_crtc->stage_cfg);
&stage_cfg);
}
}
......@@ -923,6 +924,20 @@ static struct drm_crtc_state *dpu_crtc_duplicate_state(struct drm_crtc *crtc)
return &cstate->base;
}
static void dpu_crtc_atomic_print_state(struct drm_printer *p,
const struct drm_crtc_state *state)
{
const struct dpu_crtc_state *cstate = to_dpu_crtc_state(state);
int i;
for (i = 0; i < cstate->num_mixers; i++) {
drm_printf(p, "\tlm[%d]=%d\n", i, cstate->mixers[i].hw_lm->idx - LM_0);
drm_printf(p, "\tctl[%d]=%d\n", i, cstate->mixers[i].lm_ctl->idx - CTL_0);
if (cstate->mixers[i].hw_dspp)
drm_printf(p, "\tdspp[%d]=%d\n", i, cstate->mixers[i].hw_dspp->idx - DSPP_0);
}
}
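In line with the merge message's "moving things to atomic_print_state", the hook above feeds DRM's standard atomic state dump (the per-device debugfs "state" file, and the state dumps printed on commit failures) rather than a driver-private node. Given the drm_printf formats, each CRTC entry gains lines shaped like (values illustrative):

	lm[0]=0
	ctl[0]=0
	dspp[0]=0
	lm[1]=1
	ctl[1]=1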
static void dpu_crtc_disable(struct drm_crtc *crtc,
struct drm_atomic_state *state)
{
......@@ -1423,15 +1438,16 @@ DEFINE_SHOW_ATTRIBUTE(dpu_crtc_debugfs_state);
static int _dpu_crtc_init_debugfs(struct drm_crtc *crtc)
{
struct dpu_crtc *dpu_crtc = to_dpu_crtc(crtc);
struct dentry *debugfs_root;
dpu_crtc->debugfs_root = debugfs_create_dir(dpu_crtc->name,
debugfs_root = debugfs_create_dir(dpu_crtc->name,
crtc->dev->primary->debugfs_root);
debugfs_create_file("status", 0400,
dpu_crtc->debugfs_root,
debugfs_root,
dpu_crtc, &_dpu_debugfs_status_fops);
debugfs_create_file("state", 0600,
dpu_crtc->debugfs_root,
debugfs_root,
&dpu_crtc->base,
&dpu_crtc_debugfs_state_fops);
......@@ -1449,13 +1465,6 @@ static int dpu_crtc_late_register(struct drm_crtc *crtc)
return _dpu_crtc_init_debugfs(crtc);
}
static void dpu_crtc_early_unregister(struct drm_crtc *crtc)
{
struct dpu_crtc *dpu_crtc = to_dpu_crtc(crtc);
debugfs_remove_recursive(dpu_crtc->debugfs_root);
}
static const struct drm_crtc_funcs dpu_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = dpu_crtc_destroy,
......@@ -1463,8 +1472,8 @@ static const struct drm_crtc_funcs dpu_crtc_funcs = {
.reset = dpu_crtc_reset,
.atomic_duplicate_state = dpu_crtc_duplicate_state,
.atomic_destroy_state = dpu_crtc_destroy_state,
.atomic_print_state = dpu_crtc_atomic_print_state,
.late_register = dpu_crtc_late_register,
.early_unregister = dpu_crtc_early_unregister,
.verify_crc_source = dpu_crtc_verify_crc_source,
.set_crc_source = dpu_crtc_set_crc_source,
.enable_vblank = msm_crtc_enable_vblank,
......
......@@ -129,8 +129,6 @@ struct dpu_crtc_frame_event {
* @drm_requested_vblank : Whether vblanks have been enabled in the encoder
* @property_info : Opaque structure for generic property support
* @property_defaults : Array of default values for generic property support
* @stage_cfg : H/w mixer stage configuration
* @debugfs_root : Parent of debugfs node
* @vblank_cb_count : count of vblank callback since last reset
* @play_count : frame count between crtc enable and disable
* @vblank_cb_time : ktime at vblank count reset
......@@ -161,9 +159,6 @@ struct dpu_crtc {
struct drm_pending_vblank_event *event;
u32 vsync_count;
struct dpu_hw_stage_cfg stage_cfg;
struct dentry *debugfs_root;
u32 vblank_cb_count;
u64 play_count;
ktime_t vblank_cb_time;
......
......@@ -995,9 +995,6 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
trace_dpu_enc_mode_set(DRMID(drm_enc));
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS)
msm_dp_display_mode_set(dpu_enc->dp, drm_enc, mode, adj_mode);
list_for_each_entry(conn_iter, connector_list, head)
if (conn_iter->encoder == drm_enc)
conn = conn_iter;
......@@ -1148,10 +1145,6 @@ static void dpu_encoder_virt_enable(struct drm_encoder *drm_enc)
struct msm_drm_private *priv;
struct drm_display_mode *cur_mode = NULL;
if (!drm_enc) {
DPU_ERROR("invalid encoder\n");
return;
}
dpu_enc = to_dpu_encoder_virt(drm_enc);
mutex_lock(&dpu_enc->enc_lock);
......@@ -1177,14 +1170,6 @@ static void dpu_encoder_virt_enable(struct drm_encoder *drm_enc)
_dpu_encoder_virt_enable_helper(drm_enc);
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS) {
ret = msm_dp_display_enable(dpu_enc->dp, drm_enc);
if (ret) {
DPU_ERROR_ENC(dpu_enc, "dp display enable failed: %d\n",
ret);
goto out;
}
}
dpu_enc->enabled = true;
out:
......@@ -1197,14 +1182,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
struct msm_drm_private *priv;
int i = 0;
if (!drm_enc) {
DPU_ERROR("invalid encoder\n");
return;
} else if (!drm_enc->dev) {
DPU_ERROR("invalid dev\n");
return;
}
dpu_enc = to_dpu_encoder_virt(drm_enc);
DPU_DEBUG_ENC(dpu_enc, "\n");
......@@ -1218,11 +1195,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
/* wait for idle */
dpu_encoder_wait_for_event(drm_enc, MSM_ENC_TX_COMPLETE);
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS) {
if (msm_dp_display_pre_disable(dpu_enc->dp, drm_enc))
DPU_ERROR_ENC(dpu_enc, "dp display push idle failed\n");
}
dpu_encoder_resource_control(drm_enc, DPU_ENC_RC_EVENT_PRE_STOP);
for (i = 0; i < dpu_enc->num_phys_encs; i++) {
......@@ -1247,11 +1219,6 @@ static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc)
DPU_DEBUG_ENC(dpu_enc, "encoder disabled\n");
if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS) {
if (msm_dp_display_disable(dpu_enc->dp, drm_enc))
DPU_ERROR_ENC(dpu_enc, "dp display disable failed\n");
}
mutex_unlock(&dpu_enc->enc_lock);
}
......@@ -2128,11 +2095,8 @@ static void dpu_encoder_frame_done_timeout(struct timer_list *t)
static const struct drm_encoder_helper_funcs dpu_encoder_helper_funcs = {
.mode_set = dpu_encoder_virt_mode_set,
.disable = dpu_encoder_virt_disable,
.enable = dpu_kms_encoder_enable,
.enable = dpu_encoder_virt_enable,
.atomic_check = dpu_encoder_virt_atomic_check,
/* This is called by dpu_kms_encoder_enable */
.commit = dpu_encoder_virt_enable,
};
static const struct drm_encoder_funcs dpu_encoder_funcs = {
......
......@@ -698,17 +698,17 @@ struct dpu_encoder_phys *dpu_encoder_phys_vid_init(
{
struct dpu_encoder_phys *phys_enc = NULL;
struct dpu_encoder_irq *irq;
int i, ret = 0;
int i;
if (!p) {
ret = -EINVAL;
goto fail;
DPU_ERROR("failed to create encoder due to invalid parameter\n");
return ERR_PTR(-EINVAL);
}
phys_enc = kzalloc(sizeof(*phys_enc), GFP_KERNEL);
if (!phys_enc) {
ret = -ENOMEM;
goto fail;
DPU_ERROR("failed to create encoder due to memory allocation error\n");
return ERR_PTR(-ENOMEM);
}
phys_enc->hw_mdptop = p->dpu_kms->hw_mdp;
......@@ -748,11 +748,4 @@ struct dpu_encoder_phys *dpu_encoder_phys_vid_init(
DPU_DEBUG_VIDENC(phys_enc, "created intf idx:%d\n", p->intf_idx);
return phys_enc;
fail:
DPU_ERROR("failed to create encoder\n");
if (phys_enc)
dpu_encoder_phys_vid_destroy(phys_enc);
return ERR_PTR(ret);
}
......@@ -45,7 +45,7 @@
(PINGPONG_SDM845_MASK | BIT(DPU_PINGPONG_TE2))
#define CTL_SC7280_MASK \
(BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_FETCH_ACTIVE))
(BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_FETCH_ACTIVE) | BIT(DPU_CTL_VM_CFG))
#define MERGE_3D_SM8150_MASK (0)
......@@ -856,9 +856,9 @@ static const struct dpu_intf_cfg sm8150_intf[] = {
};
static const struct dpu_intf_cfg sc7280_intf[] = {
INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
INTF_BLK("intf_0", INTF_0, 0x34000, INTF_DP, MSM_DP_CONTROLLER_0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
INTF_BLK("intf_1", INTF_1, 0x35000, INTF_DSI, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
INTF_BLK("intf_5", INTF_5, 0x39000, INTF_EDP, 0, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
INTF_BLK("intf_5", INTF_5, 0x39000, INTF_DP, MSM_DP_CONTROLLER_1, 24, INTF_SC7280_MASK, MDP_SSPP_TOP0_INTR, 22, 23),
};
/*************************************************************
......
......@@ -179,13 +179,16 @@ enum {
/**
* CTL sub-blocks
* @DPU_CTL_SPLIT_DISPLAY CTL supports video mode split display
* @DPU_CTL_SPLIT_DISPLAY: CTL supports video mode split display
* @DPU_CTL_FETCH_ACTIVE: Active CTL for fetch HW (SSPPs)
* @DPU_CTL_VM_CFG: CTL config to support multiple VMs
* @DPU_CTL_MAX
*/
enum {
DPU_CTL_SPLIT_DISPLAY = 0x1,
DPU_CTL_ACTIVE_CFG,
DPU_CTL_FETCH_ACTIVE,
DPU_CTL_VM_CFG,
DPU_CTL_MAX
};
......
......@@ -36,6 +36,7 @@
#define MERGE_3D_IDX 23
#define INTF_IDX 31
#define CTL_INVALID_BIT 0xffff
#define CTL_DEFAULT_GROUP_ID 0xf
static const u32 fetch_tbl[SSPP_MAX] = {CTL_INVALID_BIT, 16, 17, 18, 19,
CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, 0,
......@@ -498,6 +499,13 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx,
u32 intf_active = 0;
u32 mode_sel = 0;
/* CTL_TOP[31:28] carries group_id to collate CTL paths
* per VM. Explicitly disable it until VM support is
* added in SW. The power-on reset value is not the disabled state.
*/
if ((test_bit(DPU_CTL_VM_CFG, &ctx->caps->features)))
mode_sel = CTL_DEFAULT_GROUP_ID << 28;
if (cfg->intf_mode_sel == DPU_CTL_MODE_SEL_CMD)
mode_sel |= BIT(17);
......
......@@ -30,6 +30,9 @@
#define MDP_AD4_INTR_STATUS_OFF 0x420
#define MDP_INTF_0_OFF_REV_7xxx 0x34000
#define MDP_INTF_1_OFF_REV_7xxx 0x35000
#define MDP_INTF_2_OFF_REV_7xxx 0x36000
#define MDP_INTF_3_OFF_REV_7xxx 0x37000
#define MDP_INTF_4_OFF_REV_7xxx 0x38000
#define MDP_INTF_5_OFF_REV_7xxx 0x39000
/**
......@@ -110,6 +113,21 @@ static const struct dpu_intr_reg dpu_intr_set[] = {
MDP_INTF_1_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_1_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_2_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_2_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_2_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_3_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_3_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_3_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_4_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_4_OFF_REV_7xxx+INTF_INTR_EN,
MDP_INTF_4_OFF_REV_7xxx+INTF_INTR_STATUS
},
{
MDP_INTF_5_OFF_REV_7xxx+INTF_INTR_CLEAR,
MDP_INTF_5_OFF_REV_7xxx+INTF_INTR_EN,
......
......@@ -26,6 +26,9 @@ enum dpu_hw_intr_reg {
MDP_AD4_1_INTR,
MDP_INTF0_7xxx_INTR,
MDP_INTF1_7xxx_INTR,
MDP_INTF2_7xxx_INTR,
MDP_INTF3_7xxx_INTR,
MDP_INTF4_7xxx_INTR,
MDP_INTF5_7xxx_INTR,
MDP_INTR_MAX,
};
......
......@@ -8,6 +8,8 @@
#include "dpu_hw_sspp.h"
#include "dpu_kms.h"
#include <drm/drm_file.h>
#define DPU_FETCH_CONFIG_RESET_VALUE 0x00000087
/* DPU_SSPP_SRC */
......@@ -75,6 +77,7 @@
#define SSPP_TRAFFIC_SHAPER 0x130
#define SSPP_CDP_CNTL 0x134
#define SSPP_UBWC_ERROR_STATUS 0x138
#define SSPP_CDP_CNTL_REC1 0x13c
#define SSPP_TRAFFIC_SHAPER_PREFILL 0x150
#define SSPP_TRAFFIC_SHAPER_REC1_PREFILL 0x154
#define SSPP_TRAFFIC_SHAPER_REC1 0x158
......@@ -413,13 +416,11 @@ static void dpu_hw_sspp_setup_pe_config(struct dpu_hw_pipe *ctx,
static void _dpu_hw_sspp_setup_scaler3(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cfg *sspp,
struct dpu_hw_pixel_ext *pe,
void *scaler_cfg)
{
u32 idx;
struct dpu_hw_scaler3_cfg *scaler3_cfg = scaler_cfg;
(void)pe;
if (_sspp_subblk_offset(ctx, DPU_SSPP_SCALER_QSEED3, &idx) || !sspp
|| !scaler3_cfg)
return;
......@@ -539,7 +540,7 @@ static void dpu_hw_sspp_setup_sourceaddress(struct dpu_hw_pipe *ctx,
}
static void dpu_hw_sspp_setup_csc(struct dpu_hw_pipe *ctx,
struct dpu_csc_cfg *data)
const struct dpu_csc_cfg *data)
{
u32 idx;
bool csc10 = false;
......@@ -571,19 +572,20 @@ static void dpu_hw_sspp_setup_solidfill(struct dpu_hw_pipe *ctx, u32 color, enum
}
static void dpu_hw_sspp_setup_danger_safe_lut(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg)
u32 danger_lut,
u32 safe_lut)
{
u32 idx;
if (_sspp_subblk_offset(ctx, DPU_SSPP_SRC, &idx))
return;
DPU_REG_WRITE(&ctx->hw, SSPP_DANGER_LUT + idx, cfg->danger_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_SAFE_LUT + idx, cfg->safe_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_DANGER_LUT + idx, danger_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_SAFE_LUT + idx, safe_lut);
}
static void dpu_hw_sspp_setup_creq_lut(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg)
u64 creq_lut)
{
u32 idx;
......@@ -591,11 +593,11 @@ static void dpu_hw_sspp_setup_creq_lut(struct dpu_hw_pipe *ctx,
return;
if (ctx->cap && test_bit(DPU_SSPP_QOS_8LVL, &ctx->cap->features)) {
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT_0 + idx, cfg->creq_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT_0 + idx, creq_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT_1 + idx,
cfg->creq_lut >> 32);
creq_lut >> 32);
} else {
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT + idx, cfg->creq_lut);
DPU_REG_WRITE(&ctx->hw, SSPP_CREQ_LUT + idx, creq_lut);
}
}
......@@ -625,10 +627,12 @@ static void dpu_hw_sspp_setup_qos_ctrl(struct dpu_hw_pipe *ctx,
}
static void dpu_hw_sspp_setup_cdp(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cdp_cfg *cfg)
struct dpu_hw_pipe_cdp_cfg *cfg,
enum dpu_sspp_multirect_index index)
{
u32 idx;
u32 cdp_cntl = 0;
u32 cdp_cntl_offset = 0;
if (!ctx || !cfg)
return;
......@@ -636,6 +640,11 @@ static void dpu_hw_sspp_setup_cdp(struct dpu_hw_pipe *ctx,
if (_sspp_subblk_offset(ctx, DPU_SSPP_SRC, &idx))
return;
if (index == DPU_SSPP_RECT_SOLO || index == DPU_SSPP_RECT_0)
cdp_cntl_offset = SSPP_CDP_CNTL;
else
cdp_cntl_offset = SSPP_CDP_CNTL_REC1;
if (cfg->enable)
cdp_cntl |= BIT(0);
if (cfg->ubwc_meta_enable)
......@@ -645,7 +654,7 @@ static void dpu_hw_sspp_setup_cdp(struct dpu_hw_pipe *ctx,
if (cfg->preload_ahead == DPU_SSPP_CDP_PRELOAD_AHEAD_64)
cdp_cntl |= BIT(3);
DPU_REG_WRITE(&ctx->hw, SSPP_CDP_CNTL, cdp_cntl);
DPU_REG_WRITE(&ctx->hw, cdp_cntl_offset, cdp_cntl);
}
static void _setup_layer_ops(struct dpu_hw_pipe *c,
......@@ -685,6 +694,71 @@ static void _setup_layer_ops(struct dpu_hw_pipe *c,
c->ops.setup_cdp = dpu_hw_sspp_setup_cdp;
}
#ifdef CONFIG_DEBUG_FS
int _dpu_hw_sspp_init_debugfs(struct dpu_hw_pipe *hw_pipe, struct dpu_kms *kms, struct dentry *entry)
{
const struct dpu_sspp_cfg *cfg = hw_pipe->cap;
const struct dpu_sspp_sub_blks *sblk = cfg->sblk;
struct dentry *debugfs_root;
char sspp_name[32];
snprintf(sspp_name, sizeof(sspp_name), "%d", hw_pipe->idx);
/* create overall sub-directory for the pipe */
debugfs_root =
debugfs_create_dir(sspp_name, entry);
/* don't error check these */
debugfs_create_xul("features", 0600,
debugfs_root, (unsigned long *)&hw_pipe->cap->features);
/* add register dump support */
dpu_debugfs_create_regset32("src_blk", 0400,
debugfs_root,
sblk->src_blk.base + cfg->base,
sblk->src_blk.len,
kms);
if (cfg->features & BIT(DPU_SSPP_SCALER_QSEED3) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED3LITE) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED2) ||
cfg->features & BIT(DPU_SSPP_SCALER_QSEED4))
dpu_debugfs_create_regset32("scaler_blk", 0400,
debugfs_root,
sblk->scaler_blk.base + cfg->base,
sblk->scaler_blk.len,
kms);
if (cfg->features & BIT(DPU_SSPP_CSC) ||
cfg->features & BIT(DPU_SSPP_CSC_10BIT))
dpu_debugfs_create_regset32("csc_blk", 0400,
debugfs_root,
sblk->csc_blk.base + cfg->base,
sblk->csc_blk.len,
kms);
debugfs_create_u32("xin_id",
0400,
debugfs_root,
(u32 *) &cfg->xin_id);
debugfs_create_u32("clk_ctrl",
0400,
debugfs_root,
(u32 *) &cfg->clk_ctrl);
debugfs_create_x32("creq_vblank",
0600,
debugfs_root,
(u32 *) &sblk->creq_vblank);
debugfs_create_x32("danger_vblank",
0600,
debugfs_root,
(u32 *) &sblk->danger_vblank);
return 0;
}
#endif
static const struct dpu_sspp_cfg *_sspp_offset(enum dpu_sspp sspp,
void __iomem *addr,
struct dpu_mdss_cfg *catalog,
......
......@@ -25,11 +25,17 @@ struct dpu_hw_pipe;
/**
* Define all scaler feature bits in catalog
*/
#define DPU_SSPP_SCALER ((1UL << DPU_SSPP_SCALER_RGB) | \
(1UL << DPU_SSPP_SCALER_QSEED2) | \
(1UL << DPU_SSPP_SCALER_QSEED3) | \
(1UL << DPU_SSPP_SCALER_QSEED3LITE) | \
(1UL << DPU_SSPP_SCALER_QSEED4))
#define DPU_SSPP_SCALER (BIT(DPU_SSPP_SCALER_RGB) | \
BIT(DPU_SSPP_SCALER_QSEED2) | \
BIT(DPU_SSPP_SCALER_QSEED3) | \
BIT(DPU_SSPP_SCALER_QSEED3LITE) | \
BIT(DPU_SSPP_SCALER_QSEED4))
/*
* Define all CSC feature bits in catalog
*/
#define DPU_SSPP_CSC_ANY (BIT(DPU_SSPP_CSC) | \
BIT(DPU_SSPP_CSC_10BIT))
/**
* Component indices
......@@ -166,18 +172,12 @@ struct dpu_hw_pipe_cfg {
/**
* struct dpu_hw_pipe_qos_cfg : Source pipe QoS configuration
* @danger_lut: LUT for generating the danger level based on fill level
* @safe_lut: LUT for generating the safe level based on fill level
* @creq_lut: LUT for generating the creq level based on fill level
* @creq_vblank: creq value generated to vbif during vertical blanking
* @danger_vblank: danger value generated during vertical blanking
* @vblank_en: enable creq_vblank and danger_vblank during vblank
* @danger_safe_en: enable danger safe generation
*/
struct dpu_hw_pipe_qos_cfg {
u32 danger_lut;
u32 safe_lut;
u64 creq_lut;
u32 creq_vblank;
u32 danger_vblank;
bool vblank_en;
......@@ -268,7 +268,7 @@ struct dpu_hw_sspp_ops {
* @ctx: Pointer to pipe context
* @data: Pointer to config structure
*/
void (*setup_csc)(struct dpu_hw_pipe *ctx, struct dpu_csc_cfg *data);
void (*setup_csc)(struct dpu_hw_pipe *ctx, const struct dpu_csc_cfg *data);
/**
* setup_solidfill - enable/disable colorfill
......@@ -302,20 +302,22 @@ struct dpu_hw_sspp_ops {
/**
* setup_danger_safe_lut - setup danger safe LUTs
* @ctx: Pointer to pipe context
* @cfg: Pointer to pipe QoS configuration
* @danger_lut: LUT for generating the danger level based on fill level
* @safe_lut: LUT for generating the safe level based on fill level
*
*/
void (*setup_danger_safe_lut)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg);
u32 danger_lut,
u32 safe_lut);
/**
* setup_creq_lut - setup CREQ LUT
* @ctx: Pointer to pipe context
* @cfg: Pointer to pipe QoS configuration
* @creq_lut: LUT for generating the creq level based on fill level
*
*/
void (*setup_creq_lut)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_qos_cfg *cfg);
u64 creq_lut);
/**
* setup_qos_ctrl - setup QoS control
......@@ -338,12 +340,10 @@ struct dpu_hw_sspp_ops {
* setup_scaler - setup scaler
* @ctx: Pointer to pipe context
* @pipe_cfg: Pointer to pipe configuration
* @pe_cfg: Pointer to pixel extension configuration
* @scaler_cfg: Pointer to scaler configuration
*/
void (*setup_scaler)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cfg *pipe_cfg,
struct dpu_hw_pixel_ext *pe_cfg,
void *scaler_cfg);
/**
......@@ -356,9 +356,11 @@ struct dpu_hw_sspp_ops {
* setup_cdp - setup client driven prefetch
* @ctx: Pointer to pipe context
* @cfg: Pointer to cdp configuration
* @index: rectangle index in multirect
*/
void (*setup_cdp)(struct dpu_hw_pipe *ctx,
struct dpu_hw_pipe_cdp_cfg *cfg);
struct dpu_hw_pipe_cdp_cfg *cfg,
enum dpu_sspp_multirect_index index);
};
/**
......@@ -385,6 +387,7 @@ struct dpu_hw_pipe {
struct dpu_hw_sspp_ops ops;
};
struct dpu_kms;
/**
* dpu_hw_sspp_init - initializes the sspp hw driver object.
* Should be called once before accessing every pipe.
......@@ -404,5 +407,8 @@ struct dpu_hw_pipe *dpu_hw_sspp_init(enum dpu_sspp idx,
*/
void dpu_hw_sspp_destroy(struct dpu_hw_pipe *ctx);
void dpu_debugfs_sspp_init(struct dpu_kms *dpu_kms, struct dentry *debugfs_root);
int _dpu_hw_sspp_init_debugfs(struct dpu_hw_pipe *hw_pipe, struct dpu_kms *kms, struct dentry *entry);
#endif /*_DPU_HW_SSPP_H */
......@@ -374,7 +374,7 @@ u32 dpu_hw_get_scaler3_ver(struct dpu_hw_blk_reg_map *c,
void dpu_hw_csc_setup(struct dpu_hw_blk_reg_map *c,
u32 csc_reg_off,
struct dpu_csc_cfg *data, bool csc10)
const struct dpu_csc_cfg *data, bool csc10)
{
static const u32 matrix_shift = 7;
u32 clamp_shift = csc10 ? 16 : 8;
......
......@@ -322,6 +322,6 @@ u32 dpu_hw_get_scaler3_ver(struct dpu_hw_blk_reg_map *c,
void dpu_hw_csc_setup(struct dpu_hw_blk_reg_map *c,
u32 csc_reg_off,
struct dpu_csc_cfg *data, bool csc10);
const struct dpu_csc_cfg *data, bool csc10);
#endif /* _DPU_HW_UTIL_H */
......@@ -21,14 +21,14 @@
#include "msm_gem.h"
#include "disp/msm_disp_snapshot.h"
#include "dpu_kms.h"
#include "dpu_core_irq.h"
#include "dpu_crtc.h"
#include "dpu_encoder.h"
#include "dpu_formats.h"
#include "dpu_hw_vbif.h"
#include "dpu_vbif.h"
#include "dpu_encoder.h"
#include "dpu_kms.h"
#include "dpu_plane.h"
#include "dpu_crtc.h"
#include "dpu_vbif.h"
#define CREATE_TRACE_POINTS
#include "dpu_trace.h"
......@@ -73,8 +73,8 @@ static int _dpu_danger_signal_status(struct seq_file *s,
&status);
} else {
seq_puts(s, "\nSafe signal status:\n");
if (kms->hw_mdp->ops.get_danger_status)
kms->hw_mdp->ops.get_danger_status(kms->hw_mdp,
if (kms->hw_mdp->ops.get_safe_status)
kms->hw_mdp->ops.get_safe_status(kms->hw_mdp,
&status);
}
pm_runtime_put_sync(&kms->pdev->dev);
......@@ -82,7 +82,7 @@ static int _dpu_danger_signal_status(struct seq_file *s,
seq_printf(s, "MDP : 0x%x\n", status.mdp);
for (i = SSPP_VIG0; i < SSPP_MAX; i++)
seq_printf(s, "SSPP%d : 0x%x \t", i - SSPP_VIG0,
seq_printf(s, "SSPP%d : 0x%x \n", i - SSPP_VIG0,
status.sspp[i]);
seq_puts(s, "\n");
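Given the seq_printf formats above (and assuming a matching "Danger signal status" branch in the elided context), reading safe_status produces output shaped like (illustrative values):

Safe signal status:
MDP : 0x0
SSPP0 : 0x0
SSPP1 : 0x0
...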
......@@ -101,6 +101,73 @@ static int dpu_debugfs_safe_stats_show(struct seq_file *s, void *v)
}
DEFINE_SHOW_ATTRIBUTE(dpu_debugfs_safe_stats);
static ssize_t _dpu_plane_danger_read(struct file *file,
char __user *buff, size_t count, loff_t *ppos)
{
struct dpu_kms *kms = file->private_data;
int len;
char buf[40];
len = scnprintf(buf, sizeof(buf), "%d\n", !kms->has_danger_ctrl);
return simple_read_from_buffer(buff, count, ppos, buf, len);
}
static void _dpu_plane_set_danger_state(struct dpu_kms *kms, bool enable)
{
struct drm_plane *plane;
drm_for_each_plane(plane, kms->dev) {
if (plane->fb && plane->state) {
dpu_plane_danger_signal_ctrl(plane, enable);
DPU_DEBUG("plane:%d img:%dx%d ",
plane->base.id, plane->fb->width,
plane->fb->height);
DPU_DEBUG("src[%d,%d,%d,%d] dst[%d,%d,%d,%d]\n",
plane->state->src_x >> 16,
plane->state->src_y >> 16,
plane->state->src_w >> 16,
plane->state->src_h >> 16,
plane->state->crtc_x, plane->state->crtc_y,
plane->state->crtc_w, plane->state->crtc_h);
} else {
DPU_DEBUG("Inactive plane:%d\n", plane->base.id);
}
}
}
static ssize_t _dpu_plane_danger_write(struct file *file,
const char __user *user_buf, size_t count, loff_t *ppos)
{
struct dpu_kms *kms = file->private_data;
int disable_panic;
int ret;
ret = kstrtouint_from_user(user_buf, count, 0, &disable_panic);
if (ret)
return ret;
if (disable_panic) {
/* Disable panic signal for all active pipes */
DPU_DEBUG("Disabling danger:\n");
_dpu_plane_set_danger_state(kms, false);
kms->has_danger_ctrl = false;
} else {
/* Enable panic signal for all active pipes */
DPU_DEBUG("Enabling danger:\n");
kms->has_danger_ctrl = true;
_dpu_plane_set_danger_state(kms, true);
}
return count;
}
static const struct file_operations dpu_plane_danger_enable = {
.open = simple_open,
.read = _dpu_plane_danger_read,
.write = _dpu_plane_danger_write,
};
static void dpu_debugfs_danger_init(struct dpu_kms *dpu_kms,
struct dentry *parent)
{
......@@ -110,8 +177,20 @@ static void dpu_debugfs_danger_init(struct dpu_kms *dpu_kms,
dpu_kms, &dpu_debugfs_danger_stats_fops);
debugfs_create_file("safe_status", 0600, entry,
dpu_kms, &dpu_debugfs_safe_stats_fops);
debugfs_create_file("disable_danger", 0600, entry,
dpu_kms, &dpu_plane_danger_enable);
}
/*
* Companion structure for dpu_debugfs_create_regset32.
*/
struct dpu_debugfs_regset32 {
uint32_t offset;
uint32_t blk_len;
struct dpu_kms *dpu_kms;
};
static int _dpu_debugfs_show_regset32(struct seq_file *s, void *data)
{
struct dpu_debugfs_regset32 *regset = s->private;
......@@ -159,24 +238,23 @@ static const struct file_operations dpu_fops_regset32 = {
.release = single_release,
};
void dpu_debugfs_setup_regset32(struct dpu_debugfs_regset32 *regset,
void dpu_debugfs_create_regset32(const char *name, umode_t mode,
void *parent,
uint32_t offset, uint32_t length, struct dpu_kms *dpu_kms)
{
if (regset) {
regset->offset = offset;
regset->blk_len = length;
regset->dpu_kms = dpu_kms;
}
}
struct dpu_debugfs_regset32 *regset;
void dpu_debugfs_create_regset32(const char *name, umode_t mode,
void *parent, struct dpu_debugfs_regset32 *regset)
{
if (!name || !regset || !regset->dpu_kms || !regset->blk_len)
if (WARN_ON(!name || !dpu_kms || !length))
return;
regset = devm_kzalloc(&dpu_kms->pdev->dev, sizeof(*regset), GFP_KERNEL);
if (!regset)
return;
/* make sure offset is a multiple of 4 */
regset->offset = round_down(regset->offset, 4);
regset->offset = round_down(offset, 4);
regset->blk_len = length;
regset->dpu_kms = dpu_kms;
debugfs_create_file(name, mode, parent, regset, &dpu_fops_regset32);
}
......@@ -203,6 +281,7 @@ static int dpu_kms_debugfs_init(struct msm_kms *kms, struct drm_minor *minor)
dpu_debugfs_danger_init(dpu_kms, entry);
dpu_debugfs_vbif_init(dpu_kms, entry);
dpu_debugfs_core_irq_init(dpu_kms, entry);
dpu_debugfs_sspp_init(dpu_kms, entry);
for (i = 0; i < ARRAY_SIZE(priv->dp); i++) {
if (priv->dp[i])
......@@ -384,28 +463,6 @@ static void dpu_kms_flush_commit(struct msm_kms *kms, unsigned crtc_mask)
}
}
/*
* Override the encoder enable since we need to setup the inline rotator and do
* some crtc magic before enabling any bridge that might be present.
*/
void dpu_kms_encoder_enable(struct drm_encoder *encoder)
{
const struct drm_encoder_helper_funcs *funcs = encoder->helper_private;
struct drm_device *dev = encoder->dev;
struct drm_crtc *crtc;
/* Forward this enable call to the commit hook */
if (funcs && funcs->commit)
funcs->commit(encoder);
drm_for_each_crtc(crtc, dev) {
if (!(crtc->state->encoder_mask & drm_encoder_mask(encoder)))
continue;
trace_dpu_kms_enc_enable(DRMID(crtc));
}
}
static void dpu_kms_complete_commit(struct msm_kms *kms, unsigned crtc_mask)
{
struct dpu_kms *dpu_kms = to_dpu_kms(kms);
......@@ -863,6 +920,11 @@ static void dpu_kms_mdp_snapshot(struct msm_disp_state *disp_state, struct msm_k
msm_disp_snapshot_add_block(disp_state, cat->sspp[i].len,
dpu_kms->mmio + cat->sspp[i].base, "sspp_%d", i);
/* dump LM sub-blocks HW regs info */
for (i = 0; i < cat->mixer_count; i++)
msm_disp_snapshot_add_block(disp_state, cat->mixer[i].len,
dpu_kms->mmio + cat->mixer[i].base, "lm_%d", i);
msm_disp_snapshot_add_block(disp_state, top->hw.length,
dpu_kms->mmio + top->hw.blk_off, "top");
......@@ -1153,9 +1215,9 @@ struct msm_kms *dpu_kms_init(struct drm_device *dev)
static int dpu_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *ddev = dev_get_drvdata(master);
struct msm_drm_private *priv = dev_get_drvdata(master);
struct platform_device *pdev = to_platform_device(dev);
struct msm_drm_private *priv = ddev->dev_private;
struct drm_device *ddev = priv->dev;
struct dpu_kms *dpu_kms;
struct dss_module_power *mp;
int ret = 0;
......@@ -1285,7 +1347,7 @@ static const struct dev_pm_ops dpu_pm_ops = {
pm_runtime_force_resume)
};
static const struct of_device_id dpu_dt_match[] = {
const struct of_device_id dpu_dt_match[] = {
{ .compatible = "qcom,sdm845-dpu", },
{ .compatible = "qcom,sc7180-dpu", },
{ .compatible = "qcom,sc7280-dpu", },
......
......@@ -160,33 +160,9 @@ struct dpu_global_state
*
* Documentation/filesystems/debugfs.rst
*
* @dpu_debugfs_setup_regset32: Initialize data for dpu_debugfs_create_regset32
* @dpu_debugfs_create_regset32: Create 32-bit register dump file
* @dpu_debugfs_get_root: Get root dentry for DPU_KMS's debugfs node
*/
/**
* Companion structure for dpu_debugfs_create_regset32. Do not initialize the
* members of this structure explicitly; use dpu_debugfs_setup_regset32 instead.
*/
struct dpu_debugfs_regset32 {
uint32_t offset;
uint32_t blk_len;
struct dpu_kms *dpu_kms;
};
/**
* dpu_debugfs_setup_regset32 - Initialize register block definition for debugfs
* This function is meant to initialize dpu_debugfs_regset32 structures for use
* with dpu_debugfs_create_regset32.
* @regset: opaque register definition structure
* @offset: sub-block offset
* @length: sub-block length, in bytes
* @dpu_kms: pointer to dpu kms structure
*/
void dpu_debugfs_setup_regset32(struct dpu_debugfs_regset32 *regset,
uint32_t offset, uint32_t length, struct dpu_kms *dpu_kms);
/**
* dpu_debugfs_create_regset32 - Create register read back file for debugfs
*
......@@ -195,20 +171,16 @@ void dpu_debugfs_setup_regset32(struct dpu_debugfs_regset32 *regset,
* names/offsets do not need to be provided. The 'read' function simply outputs
* sequential register values over a specified range.
*
* Similar to the related debugfs_create_regset32 API, the structure pointed to
* by regset needs to persist for the lifetime of the created file. The calling
* code is responsible for initialization/management of this structure.
*
* The structure pointed to by regset is meant to be opaque. Please use
* dpu_debugfs_setup_regset32 to initialize it.
*
* @name: File name within debugfs
* @mode: File mode within debugfs
* @parent: Parent directory entry within debugfs, can be NULL
* @regset: Pointer to persistent register block definition
* @offset: sub-block offset
* @length: sub-block length, in bytes
* @dpu_kms: pointer to dpu kms structure
*/
void dpu_debugfs_create_regset32(const char *name, umode_t mode,
void *parent, struct dpu_debugfs_regset32 *regset);
void *parent,
uint32_t offset, uint32_t length, struct dpu_kms *dpu_kms);
/**
* dpu_debugfs_get_root - Return root directory entry for KMS's debugfs
......@@ -235,8 +207,6 @@ void *dpu_debugfs_get_root(struct dpu_kms *dpu_kms);
int dpu_enable_vblank(struct msm_kms *kms, struct drm_crtc *crtc);
void dpu_disable_vblank(struct msm_kms *kms, struct drm_crtc *crtc);
void dpu_kms_encoder_enable(struct drm_encoder *encoder);
/**
* dpu_kms_get_clk_rate() - get the clock rate
* @dpu_kms: pointer to dpu_kms structure
......
......@@ -111,7 +111,7 @@ static int _dpu_mdss_irq_domain_add(struct dpu_mdss *dpu_mdss)
struct device *dev;
struct irq_domain *domain;
dev = dpu_mdss->base.dev->dev;
dev = dpu_mdss->base.dev;
domain = irq_domain_add_linear(dev->of_node, 32,
&dpu_mdss_irqdomain_ops, dpu_mdss);
......@@ -184,16 +184,15 @@ static int dpu_mdss_disable(struct msm_mdss *mdss)
return ret;
}
static void dpu_mdss_destroy(struct drm_device *dev)
static void dpu_mdss_destroy(struct msm_mdss *mdss)
{
struct platform_device *pdev = to_platform_device(dev->dev);
struct msm_drm_private *priv = dev->dev_private;
struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
struct platform_device *pdev = to_platform_device(mdss->dev);
struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
struct dss_module_power *mp = &dpu_mdss->mp;
int irq;
pm_runtime_suspend(dev->dev);
pm_runtime_disable(dev->dev);
pm_runtime_suspend(mdss->dev);
pm_runtime_disable(mdss->dev);
_dpu_mdss_irq_domain_fini(dpu_mdss);
irq = platform_get_irq(pdev, 0);
irq_set_chained_handler_and_data(irq, NULL, NULL);
......@@ -203,7 +202,6 @@ static void dpu_mdss_destroy(struct drm_device *dev)
if (dpu_mdss->mmio)
devm_iounmap(&pdev->dev, dpu_mdss->mmio);
dpu_mdss->mmio = NULL;
priv->mdss = NULL;
}
static const struct msm_mdss_funcs mdss_funcs = {
......@@ -212,16 +210,15 @@ static const struct msm_mdss_funcs mdss_funcs = {
.destroy = dpu_mdss_destroy,
};
int dpu_mdss_init(struct drm_device *dev)
int dpu_mdss_init(struct platform_device *pdev)
{
struct platform_device *pdev = to_platform_device(dev->dev);
struct msm_drm_private *priv = dev->dev_private;
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct dpu_mdss *dpu_mdss;
struct dss_module_power *mp;
int ret;
int irq;
dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
dpu_mdss = devm_kzalloc(&pdev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
if (!dpu_mdss)
return -ENOMEM;
......@@ -238,7 +235,7 @@ int dpu_mdss_init(struct drm_device *dev)
goto clk_parse_err;
}
dpu_mdss->base.dev = dev;
dpu_mdss->base.dev = &pdev->dev;
dpu_mdss->base.funcs = &mdss_funcs;
ret = _dpu_mdss_irq_domain_add(dpu_mdss);
......@@ -256,7 +253,7 @@ int dpu_mdss_init(struct drm_device *dev)
priv->mdss = &dpu_mdss->base;
pm_runtime_enable(dev->dev);
pm_runtime_enable(&pdev->dev);
return 0;
......
......@@ -23,9 +23,6 @@
* @multirect_index: index of the rectangle of SSPP
* @multirect_mode: parallel or time multiplex multirect mode
* @pending: whether the current update is still pending
* @scaler3_cfg: configuration data for scaler3
* @pixel_ext: configuration data for pixel extensions
* @cdp_cfg: CDP configuration
* @plane_fetch_bw: calculated BW per plane
* @plane_clk: calculated clk per plane
*/
......@@ -38,11 +35,6 @@ struct dpu_plane_state {
uint32_t multirect_mode;
bool pending;
/* scaler configuration */
struct dpu_hw_scaler3_cfg scaler3_cfg;
struct dpu_hw_pixel_ext pixel_ext;
struct dpu_hw_pipe_cdp_cfg cdp_cfg;
u64 plane_fetch_bw;
u64 plane_clk;
};
......@@ -134,4 +126,10 @@ void dpu_plane_clear_multirect(const struct drm_plane_state *drm_state);
int dpu_plane_color_fill(struct drm_plane *plane,
uint32_t color, uint32_t alpha);
#ifdef CONFIG_DEBUG_FS
void dpu_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable);
#else
static inline void dpu_plane_danger_signal_ctrl(struct drm_plane *plane, bool enable) {}
#endif
#endif /* _DPU_PLANE_H_ */
......@@ -266,10 +266,6 @@ DEFINE_EVENT(dpu_drm_obj_template, dpu_crtc_complete_commit,
TP_PROTO(uint32_t drm_id),
TP_ARGS(drm_id)
);
DEFINE_EVENT(dpu_drm_obj_template, dpu_kms_enc_enable,
TP_PROTO(uint32_t drm_id),
TP_ARGS(drm_id)
);
DEFINE_EVENT(dpu_drm_obj_template, dpu_kms_commit,
TP_PROTO(uint32_t drm_id),
TP_ARGS(drm_id)
......
......@@ -370,22 +370,7 @@ static int modeset_init_intf(struct mdp5_kms *mdp5_kms,
switch (intf->type) {
case INTF_eDP:
if (!priv->edp)
break;
ctl = mdp5_ctlm_request(ctlm, intf->num);
if (!ctl) {
ret = -EINVAL;
break;
}
encoder = construct_encoder(mdp5_kms, intf, ctl);
if (IS_ERR(encoder)) {
ret = PTR_ERR(encoder);
break;
}
ret = msm_edp_modeset_init(priv->edp, dev, encoder);
DRM_DEV_INFO(dev->dev, "Skipping eDP interface %d\n", intf->num);
break;
case INTF_HDMI:
if (!priv->hdmi)
......@@ -936,7 +921,8 @@ static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
static int mdp5_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *ddev = dev_get_drvdata(master);
struct msm_drm_private *priv = dev_get_drvdata(master);
struct drm_device *ddev = priv->dev;
struct platform_device *pdev = to_platform_device(dev);
DBG("");
......@@ -1031,7 +1017,7 @@ static const struct dev_pm_ops mdp5_pm_ops = {
SET_RUNTIME_PM_OPS(mdp5_runtime_suspend, mdp5_runtime_resume, NULL)
};
static const struct of_device_id mdp5_dt_match[] = {
const struct of_device_id mdp5_dt_match[] = {
{ .compatible = "qcom,mdp5", },
/* to support downstream DT files */
{ .compatible = "qcom,mdss_mdp", },
......
......@@ -16,8 +16,6 @@ struct mdp5_mdss {
void __iomem *mmio, *vbif;
struct regulator *vdd;
struct clk *ahb_clk;
struct clk *axi_clk;
struct clk *vsync_clk;
......@@ -114,7 +112,7 @@ static const struct irq_domain_ops mdss_hw_irqdomain_ops = {
static int mdss_irq_domain_init(struct mdp5_mdss *mdp5_mdss)
{
struct device *dev = mdp5_mdss->base.dev->dev;
struct device *dev = mdp5_mdss->base.dev;
struct irq_domain *d;
d = irq_domain_add_linear(dev->of_node, 32, &mdss_hw_irqdomain_ops,
......@@ -157,7 +155,7 @@ static int mdp5_mdss_disable(struct msm_mdss *mdss)
static int msm_mdss_get_clocks(struct mdp5_mdss *mdp5_mdss)
{
struct platform_device *pdev =
to_platform_device(mdp5_mdss->base.dev->dev);
to_platform_device(mdp5_mdss->base.dev);
mdp5_mdss->ahb_clk = msm_clk_get(pdev, "iface");
if (IS_ERR(mdp5_mdss->ahb_clk))
......@@ -174,10 +172,9 @@ static int msm_mdss_get_clocks(struct mdp5_mdss *mdp5_mdss)
return 0;
}
static void mdp5_mdss_destroy(struct drm_device *dev)
static void mdp5_mdss_destroy(struct msm_mdss *mdss)
{
struct msm_drm_private *priv = dev->dev_private;
struct mdp5_mdss *mdp5_mdss = to_mdp5_mdss(priv->mdss);
struct mdp5_mdss *mdp5_mdss = to_mdp5_mdss(mdss);
if (!mdp5_mdss)
return;
......@@ -185,9 +182,7 @@ static void mdp5_mdss_destroy(struct drm_device *dev)
irq_domain_remove(mdp5_mdss->irqcontroller.domain);
mdp5_mdss->irqcontroller.domain = NULL;
regulator_disable(mdp5_mdss->vdd);
pm_runtime_disable(dev->dev);
pm_runtime_disable(mdss->dev);
}
static const struct msm_mdss_funcs mdss_funcs = {
......@@ -196,25 +191,24 @@ static const struct msm_mdss_funcs mdss_funcs = {
.destroy = mdp5_mdss_destroy,
};
int mdp5_mdss_init(struct drm_device *dev)
int mdp5_mdss_init(struct platform_device *pdev)
{
struct platform_device *pdev = to_platform_device(dev->dev);
struct msm_drm_private *priv = dev->dev_private;
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct mdp5_mdss *mdp5_mdss;
int ret;
DBG("");
if (!of_device_is_compatible(dev->dev->of_node, "qcom,mdss"))
if (!of_device_is_compatible(pdev->dev.of_node, "qcom,mdss"))
return 0;
mdp5_mdss = devm_kzalloc(dev->dev, sizeof(*mdp5_mdss), GFP_KERNEL);
mdp5_mdss = devm_kzalloc(&pdev->dev, sizeof(*mdp5_mdss), GFP_KERNEL);
if (!mdp5_mdss) {
ret = -ENOMEM;
goto fail;
}
mdp5_mdss->base.dev = dev;
mdp5_mdss->base.dev = &pdev->dev;
mdp5_mdss->mmio = msm_ioremap(pdev, "mdss_phys", "MDSS");
if (IS_ERR(mdp5_mdss->mmio)) {
......@@ -230,45 +224,29 @@ int mdp5_mdss_init(struct drm_device *dev)
ret = msm_mdss_get_clocks(mdp5_mdss);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to get clocks: %d\n", ret);
goto fail;
}
/* Regulator to enable GDSCs in downstream kernels */
mdp5_mdss->vdd = devm_regulator_get(dev->dev, "vdd");
if (IS_ERR(mdp5_mdss->vdd)) {
ret = PTR_ERR(mdp5_mdss->vdd);
goto fail;
}
ret = regulator_enable(mdp5_mdss->vdd);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to enable regulator vdd: %d\n",
ret);
DRM_DEV_ERROR(&pdev->dev, "failed to get clocks: %d\n", ret);
goto fail;
}
ret = devm_request_irq(dev->dev, platform_get_irq(pdev, 0),
ret = devm_request_irq(&pdev->dev, platform_get_irq(pdev, 0),
mdss_irq, 0, "mdss_isr", mdp5_mdss);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to init irq: %d\n", ret);
goto fail_irq;
DRM_DEV_ERROR(&pdev->dev, "failed to init irq: %d\n", ret);
goto fail;
}
ret = mdss_irq_domain_init(mdp5_mdss);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to init sub-block irqs: %d\n", ret);
goto fail_irq;
DRM_DEV_ERROR(&pdev->dev, "failed to init sub-block irqs: %d\n", ret);
goto fail;
}
mdp5_mdss->base.funcs = &mdss_funcs;
priv->mdss = &mdp5_mdss->base;
pm_runtime_enable(dev->dev);
pm_runtime_enable(&pdev->dev);
return 0;
fail_irq:
regulator_disable(mdp5_mdss->vdd);
fail:
return ret;
}
......@@ -28,29 +28,42 @@ static ssize_t __maybe_unused disp_devcoredump_read(char *buffer, loff_t offset,
return count - iter.remain;
}
static void _msm_disp_snapshot_work(struct kthread_work *work)
struct msm_disp_state *
msm_disp_snapshot_state_sync(struct msm_kms *kms)
{
struct msm_kms *kms = container_of(work, struct msm_kms, dump_work);
struct drm_device *drm_dev = kms->dev;
struct msm_disp_state *disp_state;
struct drm_printer p;
WARN_ON(!mutex_is_locked(&kms->dump_mutex));
disp_state = kzalloc(sizeof(struct msm_disp_state), GFP_KERNEL);
if (!disp_state)
return;
return ERR_PTR(-ENOMEM);
disp_state->dev = drm_dev->dev;
disp_state->drm_dev = drm_dev;
INIT_LIST_HEAD(&disp_state->blocks);
/* Serialize dumping here */
mutex_lock(&kms->dump_mutex);
msm_disp_snapshot_capture_state(disp_state);
return disp_state;
}
static void _msm_disp_snapshot_work(struct kthread_work *work)
{
struct msm_kms *kms = container_of(work, struct msm_kms, dump_work);
struct msm_disp_state *disp_state;
struct drm_printer p;
/* Serialize dumping here */
mutex_lock(&kms->dump_mutex);
disp_state = msm_disp_snapshot_state_sync(kms);
mutex_unlock(&kms->dump_mutex);
if (IS_ERR(disp_state))
return;
if (MSM_DISP_SNAPSHOT_DUMP_IN_CONSOLE) {
p = drm_info_printer(disp_state->drm_dev->dev);
msm_disp_state_print(disp_state, &p);
......
......@@ -39,7 +39,7 @@
* @dev: device pointer
* @drm_dev: drm device pointer
* @atomic_state: atomic state duplicated at the time of the error
* @timestamp: timestamp at which the coredump was captured
* @time: timestamp at which the coredump was captured
*/
struct msm_disp_state {
struct device *dev;
......@@ -49,7 +49,7 @@ struct msm_disp_state {
struct drm_atomic_state *atomic_state;
ktime_t timestamp;
struct timespec64 time;
};
/**
......@@ -84,6 +84,16 @@ int msm_disp_snapshot_init(struct drm_device *drm_dev);
*/
void msm_disp_snapshot_destroy(struct drm_device *drm_dev);
/**
* msm_disp_snapshot_state_sync - synchronously snapshot display state
* @kms: the kms object
*
* Returns state or error
*
* Must be called with &kms->dump_mutex held
*/
struct msm_disp_state *msm_disp_snapshot_state_sync(struct msm_kms *kms);
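A hedged usage sketch, mirroring _msm_disp_snapshot_work() above: take dump_mutex around the synchronous call and check the ERR_PTR-encoded return:
	struct msm_disp_state *state;

	mutex_lock(&kms->dump_mutex);
	state = msm_disp_snapshot_state_sync(kms);
	mutex_unlock(&kms->dump_mutex);
	if (IS_ERR(state))
		return PTR_ERR(state);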
/**
* msm_disp_snapshot_state - trigger to dump the display snapshot
* @drm_dev: handle to drm device
......
......@@ -5,6 +5,8 @@
#define pr_fmt(fmt) "[drm:%s:%d] " fmt, __func__, __LINE__
#include <generated/utsrelease.h>
#include "msm_disp_snapshot.h"
static void msm_disp_state_dump_regs(u32 **reg, u32 aligned_len, void __iomem *base_addr)
......@@ -79,10 +81,11 @@ void msm_disp_state_print(struct msm_disp_state *state, struct drm_printer *p)
}
drm_printf(p, "---\n");
drm_printf(p, "kernel: " UTS_RELEASE "\n");
drm_printf(p, "module: " KBUILD_MODNAME "\n");
drm_printf(p, "dpu devcoredump\n");
drm_printf(p, "timestamp %lld\n", ktime_to_ns(state->timestamp));
drm_printf(p, "time: %lld.%09ld\n",
state->time.tv_sec, state->time.tv_nsec);
list_for_each_entry_safe(block, tmp, &state->blocks, node) {
drm_printf(p, "====================%s================\n", block->name);
......@@ -100,7 +103,7 @@ static void msm_disp_capture_atomic_state(struct msm_disp_state *disp_state)
struct drm_device *ddev;
struct drm_modeset_acquire_ctx ctx;
disp_state->timestamp = ktime_get();
ktime_get_real_ts64(&disp_state->time);
ddev = disp_state->drm_dev;
......
......@@ -119,13 +119,13 @@ void dp_ctrl_push_idle(struct dp_ctrl *dp_ctrl)
static void dp_ctrl_config_ctrl(struct dp_ctrl_private *ctrl)
{
u32 config = 0, tbd;
u8 *dpcd = ctrl->panel->dpcd;
const u8 *dpcd = ctrl->panel->dpcd;
/* Default-> LSCLK DIV: 1/4 LCLK */
config |= (2 << DP_CONFIGURATION_CTRL_LSCLK_DIV_SHIFT);
/* Scrambler reset enable */
if (dpcd[DP_EDP_CONFIGURATION_CAP] & DP_ALTERNATE_SCRAMBLER_RESET_CAP)
if (drm_dp_alternate_scrambler_reset_cap(dpcd))
config |= DP_CONFIGURATION_CTRL_ASSR;
tbd = dp_link_get_test_bits_depth(ctrl->link,
......@@ -1228,7 +1228,10 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
int *training_step)
{
int ret = 0;
const u8 *dpcd = ctrl->panel->dpcd;
u8 encoding = DP_SET_ANSI_8B10B;
u8 ssc;
u8 assr;
struct dp_link_info link_info = {0};
dp_ctrl_config_ctrl(ctrl);
......@@ -1238,9 +1241,21 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
link_info.capabilities = DP_LINK_CAP_ENHANCED_FRAMING;
dp_aux_link_configure(ctrl->aux, &link_info);
if (drm_dp_max_downspread(dpcd)) {
ssc = DP_SPREAD_AMP_0_5;
drm_dp_dpcd_write(ctrl->aux, DP_DOWNSPREAD_CTRL, &ssc, 1);
}
drm_dp_dpcd_write(ctrl->aux, DP_MAIN_LINK_CHANNEL_CODING_SET,
&encoding, 1);
if (drm_dp_alternate_scrambler_reset_cap(dpcd)) {
assr = DP_ALTERNATE_SCRAMBLER_RESET_ENABLE;
drm_dp_dpcd_write(ctrl->aux, DP_EDP_CONFIGURATION_SET,
&assr, 1);
}
ret = dp_ctrl_link_train_1(ctrl, training_step);
if (ret) {
DRM_ERROR("link training #1 failed. ret=%d\n", ret);
......@@ -1312,9 +1327,11 @@ static int dp_ctrl_enable_mainlink_clocks(struct dp_ctrl_private *ctrl)
struct dp_io *dp_io = &ctrl->parser->io;
struct phy *phy = dp_io->phy;
struct phy_configure_opts_dp *opts_dp = &dp_io->phy_opts.dp;
const u8 *dpcd = ctrl->panel->dpcd;
opts_dp->lanes = ctrl->link->link_params.num_lanes;
opts_dp->link_rate = ctrl->link->link_params.rate / 100;
opts_dp->ssc = drm_dp_max_downspread(dpcd);
dp_ctrl_set_clock_rate(ctrl, DP_CTRL_PM, "ctrl_link",
ctrl->link->link_params.rate * 1000);
......@@ -1406,7 +1423,7 @@ void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl)
static bool dp_ctrl_use_fixed_nvid(struct dp_ctrl_private *ctrl)
{
u8 *dpcd = ctrl->panel->dpcd;
const u8 *dpcd = ctrl->panel->dpcd;
/*
* For better interop experience, use a fixed NVID=0x8000
......
......@@ -135,8 +135,18 @@ static const struct msm_dp_config sc7180_dp_cfg = {
.num_descs = 1,
};
static const struct msm_dp_config sc7280_dp_cfg = {
.descs = (const struct msm_dp_desc[]) {
[MSM_DP_CONTROLLER_0] = { .io_start = 0x0ae90000, .connector_type = DRM_MODE_CONNECTOR_DisplayPort },
[MSM_DP_CONTROLLER_1] = { .io_start = 0x0aea0000, .connector_type = DRM_MODE_CONNECTOR_eDP },
},
.num_descs = 2,
};
static const struct of_device_id dp_dt_match[] = {
{ .compatible = "qcom,sc7180-dp", .data = &sc7180_dp_cfg },
{ .compatible = "qcom,sc7280-dp", .data = &sc7280_dp_cfg },
{ .compatible = "qcom,sc7280-edp", .data = &sc7280_dp_cfg },
{}
};
......@@ -224,13 +234,10 @@ static int dp_display_bind(struct device *dev, struct device *master,
{
int rc = 0;
struct dp_display_private *dp = dev_get_dp_display_private(dev);
struct msm_drm_private *priv;
struct drm_device *drm;
drm = dev_get_drvdata(master);
struct msm_drm_private *priv = dev_get_drvdata(master);
struct drm_device *drm = priv->dev;
dp->dp_display.drm_dev = drm;
priv = drm->dev_private;
priv->dp[dp->id] = &dp->dp_display;
rc = dp->parser->parse(dp->parser, dp->dp_display.connector_type);
......@@ -266,8 +273,7 @@ static void dp_display_unbind(struct device *dev, struct device *master,
void *data)
{
struct dp_display_private *dp = dev_get_dp_display_private(dev);
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
dp_power_client_deinit(dp->power);
dp_aux_unregister(dp->aux);
......@@ -410,12 +416,11 @@ static int dp_display_usbpd_configure_cb(struct device *dev)
static int dp_display_usbpd_disconnect_cb(struct device *dev)
{
int rc = 0;
struct dp_display_private *dp = dev_get_dp_display_private(dev);
dp_add_event(dp, EV_USER_NOTIFICATION, false, 0);
return rc;
return 0;
}
static void dp_display_handle_video_request(struct dp_display_private *dp)
......@@ -522,11 +527,8 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
dp->hpd_state = ST_CONNECT_PENDING;
hpd->hpd_high = 1;
ret = dp_display_usbpd_configure_cb(&dp->pdev->dev);
if (ret) { /* link train failed */
hpd->hpd_high = 0;
dp->hpd_state = ST_DISCONNECTED;
if (ret == -ECONNRESET) { /* cable unplugged */
......@@ -603,7 +605,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
/* triggered by irq_hpd with sink_count = 0 */
if (dp->link->sink_count == 0) {
dp_ctrl_off_phy(dp->ctrl);
hpd->hpd_high = 0;
dp->core_initialized = false;
}
mutex_unlock(&dp->event_mutex);
......@@ -627,8 +628,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
/* disable HPD plug interrupts */
dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, false);
hpd->hpd_high = 0;
/*
* We don't need a separate work item for disconnect as
* connect/attention interrupts are disabled
......@@ -693,9 +692,15 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
return 0;
}
ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
if (ret == -ECONNRESET) { /* cable unplugged */
dp->core_initialized = false;
/*
* dp core (ahb/aux clks) must be initialized before
* irq_hpd can be handled
*/
if (dp->core_initialized) {
ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
if (ret == -ECONNRESET) { /* cable unplugged */
dp->core_initialized = false;
}
}
DRM_DEBUG_DP("hpd_state=%d\n", state);
......@@ -707,9 +712,9 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
static void dp_display_deinit_sub_modules(struct dp_display_private *dp)
{
dp_debug_put(dp->debug);
dp_audio_put(dp->audio);
dp_panel_put(dp->panel);
dp_aux_put(dp->aux);
dp_audio_put(dp->audio);
}
static int dp_init_sub_modules(struct dp_display_private *dp)
......@@ -1481,6 +1486,18 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
}
priv->connectors[priv->num_connectors++] = dp_display->connector;
dp_display->bridge = msm_dp_bridge_init(dp_display, dev, encoder);
if (IS_ERR(dp_display->bridge)) {
ret = PTR_ERR(dp_display->bridge);
DRM_DEV_ERROR(dev->dev,
"failed to create dp bridge: %d\n", ret);
dp_display->bridge = NULL;
return ret;
}
priv->bridges[priv->num_bridges++] = dp_display->bridge;
return 0;
}
......@@ -1584,8 +1601,8 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder)
}
void msm_dp_display_mode_set(struct msm_dp *dp, struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct dp_display_private *dp_display;
......
......@@ -13,6 +13,7 @@
struct msm_dp {
struct drm_device *drm_dev;
struct device *codec_dev;
struct drm_bridge *bridge;
struct drm_connector *connector;
struct drm_encoder *encoder;
struct drm_bridge *panel_bridge;
......
......@@ -12,6 +12,14 @@
#include "msm_kms.h"
#include "dp_drm.h"
struct msm_dp_bridge {
struct drm_bridge bridge;
struct msm_dp *dp_display;
};
#define to_dp_display(x) container_of((x), struct msm_dp_bridge, bridge)
struct dp_connector {
struct drm_connector base;
struct msm_dp *dp_display;
......@@ -173,3 +181,70 @@ struct drm_connector *dp_drm_connector_init(struct msm_dp *dp_display)
return connector;
}
static void dp_bridge_mode_set(struct drm_bridge *drm_bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_mode_set(dp_display, drm_bridge->encoder, mode, adjusted_mode);
}
static void dp_bridge_enable(struct drm_bridge *drm_bridge)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_enable(dp_display, drm_bridge->encoder);
}
static void dp_bridge_disable(struct drm_bridge *drm_bridge)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_pre_disable(dp_display, drm_bridge->encoder);
}
static void dp_bridge_post_disable(struct drm_bridge *drm_bridge)
{
struct msm_dp_bridge *dp_bridge = to_dp_display(drm_bridge);
struct msm_dp *dp_display = dp_bridge->dp_display;
msm_dp_display_disable(dp_display, drm_bridge->encoder);
}
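/*
 * Editorial note, not part of the patch: the hooks below are deliberately
 * staggered — the bridge .disable hook maps to msm_dp_display_pre_disable()
 * and .post_disable maps to msm_dp_display_disable(), presumably so the
 * main link is torn down only after the rest of the bridge chain has
 * shut down.
 */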
static const struct drm_bridge_funcs dp_bridge_ops = {
.enable = dp_bridge_enable,
.disable = dp_bridge_disable,
.post_disable = dp_bridge_post_disable,
.mode_set = dp_bridge_mode_set,
};
struct drm_bridge *msm_dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev,
struct drm_encoder *encoder)
{
int rc;
struct msm_dp_bridge *dp_bridge;
struct drm_bridge *bridge;
dp_bridge = devm_kzalloc(dev->dev, sizeof(*dp_bridge), GFP_KERNEL);
if (!dp_bridge)
return ERR_PTR(-ENOMEM);
dp_bridge->dp_display = dp_display;
bridge = &dp_bridge->bridge;
bridge->funcs = &dp_bridge_ops;
bridge->encoder = encoder;
rc = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
if (rc) {
DRM_ERROR("failed to attach bridge, rc=%d\n", rc);
return ERR_PTR(rc);
}
return bridge;
}
......@@ -32,8 +32,6 @@ int dp_hpd_connect(struct dp_usbpd *dp_usbpd, bool hpd)
hpd_priv = container_of(dp_usbpd, struct dp_hpd_private,
dp_usbpd);
dp_usbpd->hpd_high = hpd;
if (!hpd_priv->dp_cb || !hpd_priv->dp_cb->configure
|| !hpd_priv->dp_cb->disconnect) {
pr_err("hpd dp_cb not initialized\n");
......
......@@ -26,7 +26,6 @@ enum plug_orientation {
* @multi_func: multi-function preferred
* @usb_config_req: request to switch to usb
* @exit_dp_mode: request exit from displayport mode
* @hpd_high: Hot Plug Detect signal is high.
* @hpd_irq: Change in the status since last message
* @alt_mode_cfg_done: bool to specify alt mode status
* @debug_en: bool to specify debug mode
......@@ -39,7 +38,6 @@ struct dp_usbpd {
bool multi_func;
bool usb_config_req;
bool exit_dp_mode;
bool hpd_high;
bool hpd_irq;
bool alt_mode_cfg_done;
bool debug_en;
......
......@@ -737,18 +737,25 @@ static int dp_link_parse_sink_count(struct dp_link *dp_link)
return 0;
}
static void dp_link_parse_sink_status_field(struct dp_link_private *link)
static int dp_link_parse_sink_status_field(struct dp_link_private *link)
{
int len = 0;
link->prev_sink_count = link->dp_link.sink_count;
dp_link_parse_sink_count(&link->dp_link);
len = dp_link_parse_sink_count(&link->dp_link);
if (len < 0) {
DRM_ERROR("DP parse sink count failed\n");
return len;
}
len = drm_dp_dpcd_read_link_status(link->aux,
link->link_status);
if (len < DP_LINK_STATUS_SIZE)
if (len < DP_LINK_STATUS_SIZE) {
DRM_ERROR("DP link status read failed\n");
dp_link_parse_request(link);
return len;
}
return dp_link_parse_request(link);
}
/**
......@@ -1023,7 +1030,9 @@ int dp_link_process_request(struct dp_link *dp_link)
dp_link_reset_data(link);
dp_link_parse_sink_status_field(link);
ret = dp_link_parse_sink_status_field(link);
if (ret)
return ret;
if (link->request.test_requested == DP_TEST_LINK_EDID_READ) {
dp_link->sink_request |= DP_TEST_LINK_EDID_READ;
......
......@@ -110,8 +110,7 @@ static struct msm_dsi *dsi_init(struct platform_device *pdev)
static int dsi_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct msm_dsi *msm_dsi = dev_get_drvdata(dev);
priv->dsi[msm_dsi->id] = msm_dsi;
......@@ -122,8 +121,7 @@ static int dsi_bind(struct device *dev, struct device *master, void *data)
static void dsi_unbind(struct device *dev, struct device *master,
void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct msm_dsi *msm_dsi = dev_get_drvdata(dev);
priv->dsi[msm_dsi->id] = NULL;
......@@ -225,9 +223,13 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
goto fail;
}
if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
ret = -EINVAL;
goto fail;
if (msm_dsi_is_bonded_dsi(msm_dsi) &&
!msm_dsi_is_master_dsi(msm_dsi)) {
/*
* Do not return an error here;
* just skip creating the encoder/connector for the slave-DSI.
*/
return 0;
}
msm_dsi->encoder = encoder;
......
......@@ -82,7 +82,6 @@ int msm_dsi_manager_cmd_xfer(int id, const struct mipi_dsi_msg *msg);
bool msm_dsi_manager_cmd_xfer_trigger(int id, u32 dma_base, u32 len);
int msm_dsi_manager_register(struct msm_dsi *msm_dsi);
void msm_dsi_manager_unregister(struct msm_dsi *msm_dsi);
bool msm_dsi_manager_validate_current_config(u8 id);
void msm_dsi_manager_tpg_enable(void);
/* msm dsi */
......@@ -120,6 +119,8 @@ unsigned long msm_dsi_host_get_mode_flags(struct mipi_dsi_host *host);
struct drm_bridge *msm_dsi_host_get_bridge(struct mipi_dsi_host *host);
int msm_dsi_host_register(struct mipi_dsi_host *host);
void msm_dsi_host_unregister(struct mipi_dsi_host *host);
void msm_dsi_host_set_phy_mode(struct mipi_dsi_host *host,
struct msm_dsi_phy *src_phy);
int msm_dsi_host_set_src_pll(struct mipi_dsi_host *host,
struct msm_dsi_phy *src_phy);
void msm_dsi_host_reset_phy(struct mipi_dsi_host *host);
......@@ -173,8 +174,6 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
void msm_dsi_phy_disable(struct msm_dsi_phy *phy);
void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
enum msm_dsi_phy_usecase uc);
int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy);
int msm_dsi_phy_pll_restore_state(struct msm_dsi_phy *phy);
void msm_dsi_phy_snapshot(struct msm_disp_state *disp_state, struct msm_dsi_phy *phy);
......
......@@ -2020,7 +2020,7 @@ void msm_dsi_host_xfer_restore(struct mipi_dsi_host *host,
/* TODO: unvote for bus bandwidth */
cfg_hnd->ops->link_clk_disable(msm_host);
pm_runtime_put_autosuspend(&msm_host->pdev->dev);
pm_runtime_put(&msm_host->pdev->dev);
}
int msm_dsi_host_cmd_tx(struct mipi_dsi_host *host,
......@@ -2179,57 +2179,12 @@ void msm_dsi_host_cmd_xfer_commit(struct mipi_dsi_host *host, u32 dma_base,
wmb();
}
int msm_dsi_host_set_src_pll(struct mipi_dsi_host *host,
void msm_dsi_host_set_phy_mode(struct mipi_dsi_host *host,
struct msm_dsi_phy *src_phy)
{
struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
struct clk *byte_clk_provider, *pixel_clk_provider;
int ret;
msm_host->cphy_mode = src_phy->cphy_mode;
ret = msm_dsi_phy_get_clk_provider(src_phy,
&byte_clk_provider, &pixel_clk_provider);
if (ret) {
pr_info("%s: can't get provider from pll, don't set parent\n",
__func__);
return 0;
}
ret = clk_set_parent(msm_host->byte_clk_src, byte_clk_provider);
if (ret) {
pr_err("%s: can't set parent to byte_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
ret = clk_set_parent(msm_host->pixel_clk_src, pixel_clk_provider);
if (ret) {
pr_err("%s: can't set parent to pixel_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
if (msm_host->dsi_clk_src) {
ret = clk_set_parent(msm_host->dsi_clk_src, pixel_clk_provider);
if (ret) {
pr_err("%s: can't set parent to dsi_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
}
if (msm_host->esc_clk_src) {
ret = clk_set_parent(msm_host->esc_clk_src, byte_clk_provider);
if (ret) {
pr_err("%s: can't set parent to esc_clk_src. ret=%d\n",
__func__, ret);
goto exit;
}
}
exit:
return ret;
}
void msm_dsi_host_reset_phy(struct mipi_dsi_host *host)
......@@ -2297,7 +2252,7 @@ int msm_dsi_host_enable(struct mipi_dsi_host *host)
*/
/* if (msm_panel->mode == MSM_DSI_CMD_MODE) {
* dsi_link_clk_disable(msm_host);
* pm_runtime_put_autosuspend(&msm_host->pdev->dev);
* pm_runtime_put(&msm_host->pdev->dev);
* }
*/
msm_host->enabled = true;
......@@ -2389,7 +2344,7 @@ int msm_dsi_host_power_on(struct mipi_dsi_host *host,
fail_disable_clk:
cfg_hnd->ops->link_clk_disable(msm_host);
pm_runtime_put_autosuspend(&msm_host->pdev->dev);
pm_runtime_put(&msm_host->pdev->dev);
fail_disable_reg:
dsi_host_regulator_disable(msm_host);
unlock_ret:
......@@ -2416,7 +2371,7 @@ int msm_dsi_host_power_off(struct mipi_dsi_host *host)
pinctrl_pm_select_sleep_state(&msm_host->pdev->dev);
cfg_hnd->ops->link_clk_disable(msm_host);
pm_runtime_put_autosuspend(&msm_host->pdev->dev);
pm_runtime_put(&msm_host->pdev->dev);
dsi_host_regulator_disable(msm_host);
......
......@@ -79,10 +79,8 @@ static int dsi_mgr_setup_components(int id)
return ret;
msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
ret = msm_dsi_host_set_src_pll(msm_dsi->host, msm_dsi->phy);
} else if (!other_dsi) {
ret = 0;
} else {
msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
} else if (other_dsi) {
struct msm_dsi *master_link_dsi = IS_MASTER_DSI_LINK(id) ?
msm_dsi : other_dsi;
struct msm_dsi *slave_link_dsi = IS_MASTER_DSI_LINK(id) ?
......@@ -106,13 +104,11 @@ static int dsi_mgr_setup_components(int id)
MSM_DSI_PHY_MASTER);
msm_dsi_phy_set_usecase(clk_slave_dsi->phy,
MSM_DSI_PHY_SLAVE);
ret = msm_dsi_host_set_src_pll(msm_dsi->host, clk_master_dsi->phy);
if (ret)
return ret;
ret = msm_dsi_host_set_src_pll(other_dsi->host, clk_master_dsi->phy);
msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
msm_dsi_host_set_phy_mode(other_dsi->host, other_dsi->phy);
}
return ret;
return 0;
}
static int enable_phy(struct msm_dsi *msm_dsi,
......@@ -649,23 +645,6 @@ struct drm_connector *msm_dsi_manager_connector_init(u8 id)
return ERR_PTR(ret);
}
bool msm_dsi_manager_validate_current_config(u8 id)
{
bool is_bonded_dsi = IS_BONDED_DSI();
/*
* For bonded DSI, we only have one drm panel. For this
* use case, we register only one bridge/connector.
* Skip bridge/connector initialisation if it is
* slave-DSI for bonded DSI configuration.
*/
if (is_bonded_dsi && !IS_MASTER_DSI_LINK(id)) {
DBG("Skip bridge registration for slave DSI->id: %d\n", id);
return false;
}
return true;
}
/* initialize bridge */
struct drm_bridge *msm_dsi_manager_bridge_init(u8 id)
{
......
......@@ -602,7 +602,7 @@ static int dsi_phy_enable_resource(struct msm_dsi_phy *phy)
static void dsi_phy_disable_resource(struct msm_dsi_phy *phy)
{
clk_disable_unprepare(phy->ahb_clk);
pm_runtime_put_autosuspend(&phy->pdev->dev);
pm_runtime_put(&phy->pdev->dev);
}
static const struct of_device_id dsi_phy_dt_match[] = {
......@@ -892,17 +892,6 @@ bool msm_dsi_phy_set_continuous_clock(struct msm_dsi_phy *phy, bool enable)
return phy->cfg->ops.set_continuous_clock(phy, enable);
}
int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
struct clk **byte_clk_provider, struct clk **pixel_clk_provider)
{
if (byte_clk_provider)
*byte_clk_provider = phy->provided_clocks->hws[DSI_BYTE_PLL_CLK]->clk;
if (pixel_clk_provider)
*pixel_clk_provider = phy->provided_clocks->hws[DSI_PIXEL_PLL_CLK]->clk;
return 0;
}
void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy)
{
if (phy->cfg->ops.save_pll_state) {
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include <linux/of_irq.h>
#include "edp.h"
static irqreturn_t edp_irq(int irq, void *dev_id)
{
struct msm_edp *edp = dev_id;
/* Process eDP irq */
return msm_edp_ctrl_irq(edp->ctrl);
}
static void edp_destroy(struct platform_device *pdev)
{
struct msm_edp *edp = platform_get_drvdata(pdev);
if (!edp)
return;
if (edp->ctrl) {
msm_edp_ctrl_destroy(edp->ctrl);
edp->ctrl = NULL;
}
platform_set_drvdata(pdev, NULL);
}
/* construct eDP at bind/probe time, grab all the resources. */
static struct msm_edp *edp_init(struct platform_device *pdev)
{
struct msm_edp *edp = NULL;
int ret;
if (!pdev) {
pr_err("no eDP device\n");
ret = -ENXIO;
goto fail;
}
edp = devm_kzalloc(&pdev->dev, sizeof(*edp), GFP_KERNEL);
if (!edp) {
ret = -ENOMEM;
goto fail;
}
DBG("eDP probed=%p", edp);
edp->pdev = pdev;
platform_set_drvdata(pdev, edp);
ret = msm_edp_ctrl_init(edp);
if (ret)
goto fail;
return edp;
fail:
if (edp)
edp_destroy(pdev);
return ERR_PTR(ret);
}
static int edp_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_edp *edp;
DBG("");
edp = edp_init(to_platform_device(dev));
if (IS_ERR(edp))
return PTR_ERR(edp);
priv->edp = edp;
return 0;
}
static void edp_unbind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
DBG("");
if (priv->edp) {
edp_destroy(to_platform_device(dev));
priv->edp = NULL;
}
}
static const struct component_ops edp_ops = {
.bind = edp_bind,
.unbind = edp_unbind,
};
static int edp_dev_probe(struct platform_device *pdev)
{
DBG("");
return component_add(&pdev->dev, &edp_ops);
}
static int edp_dev_remove(struct platform_device *pdev)
{
DBG("");
component_del(&pdev->dev, &edp_ops);
return 0;
}
static const struct of_device_id dt_match[] = {
{ .compatible = "qcom,mdss-edp" },
{}
};
static struct platform_driver edp_driver = {
.probe = edp_dev_probe,
.remove = edp_dev_remove,
.driver = {
.name = "msm_edp",
.of_match_table = dt_match,
},
};
void __init msm_edp_register(void)
{
DBG("");
platform_driver_register(&edp_driver);
}
void __exit msm_edp_unregister(void)
{
DBG("");
platform_driver_unregister(&edp_driver);
}
/* Second part of initialization, the drm/kms level modeset_init */
int msm_edp_modeset_init(struct msm_edp *edp, struct drm_device *dev,
struct drm_encoder *encoder)
{
struct platform_device *pdev = edp->pdev;
struct msm_drm_private *priv = dev->dev_private;
int ret;
edp->encoder = encoder;
edp->dev = dev;
edp->bridge = msm_edp_bridge_init(edp);
if (IS_ERR(edp->bridge)) {
ret = PTR_ERR(edp->bridge);
DRM_DEV_ERROR(dev->dev, "failed to create eDP bridge: %d\n", ret);
edp->bridge = NULL;
goto fail;
}
edp->connector = msm_edp_connector_init(edp);
if (IS_ERR(edp->connector)) {
ret = PTR_ERR(edp->connector);
DRM_DEV_ERROR(dev->dev, "failed to create eDP connector: %d\n", ret);
edp->connector = NULL;
goto fail;
}
edp->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
if (edp->irq < 0) {
ret = edp->irq;
DRM_DEV_ERROR(dev->dev, "failed to get IRQ: %d\n", ret);
goto fail;
}
ret = devm_request_irq(&pdev->dev, edp->irq,
edp_irq, IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"edp_isr", edp);
if (ret < 0) {
DRM_DEV_ERROR(dev->dev, "failed to request IRQ%u: %d\n",
edp->irq, ret);
goto fail;
}
priv->bridges[priv->num_bridges++] = edp->bridge;
priv->connectors[priv->num_connectors++] = edp->connector;
return 0;
fail:
/* bridge/connector are normally destroyed by drm */
if (edp->bridge) {
edp_bridge_destroy(edp->bridge);
edp->bridge = NULL;
}
if (edp->connector) {
edp->connector->funcs->destroy(edp->connector);
edp->connector = NULL;
}
return ret;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#ifndef __EDP_CONNECTOR_H__
#define __EDP_CONNECTOR_H__
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_dp_helper.h>
#include "msm_drv.h"
#define edp_read(offset) msm_readl((offset))
#define edp_write(offset, data) msm_writel((data), (offset))
struct edp_ctrl;
struct edp_aux;
struct edp_phy;
struct msm_edp {
struct drm_device *dev;
struct platform_device *pdev;
struct drm_connector *connector;
struct drm_bridge *bridge;
/* the encoder we are hooked to (outside of eDP block) */
struct drm_encoder *encoder;
struct edp_ctrl *ctrl;
int irq;
};
/* eDP bridge */
struct drm_bridge *msm_edp_bridge_init(struct msm_edp *edp);
void edp_bridge_destroy(struct drm_bridge *bridge);
/* eDP connector */
struct drm_connector *msm_edp_connector_init(struct msm_edp *edp);
/* AUX */
void *msm_edp_aux_init(struct msm_edp *edp, void __iomem *regbase, struct drm_dp_aux **drm_aux);
void msm_edp_aux_destroy(struct device *dev, struct edp_aux *aux);
irqreturn_t msm_edp_aux_irq(struct edp_aux *aux, u32 isr);
void msm_edp_aux_ctrl(struct edp_aux *aux, int enable);
/* Phy */
bool msm_edp_phy_ready(struct edp_phy *phy);
void msm_edp_phy_ctrl(struct edp_phy *phy, int enable);
void msm_edp_phy_vm_pe_init(struct edp_phy *phy);
void msm_edp_phy_vm_pe_cfg(struct edp_phy *phy, u32 v0, u32 v1);
void msm_edp_phy_lane_power_ctrl(struct edp_phy *phy, bool up, u32 max_lane);
void *msm_edp_phy_init(struct device *dev, void __iomem *regbase);
/* Ctrl */
irqreturn_t msm_edp_ctrl_irq(struct edp_ctrl *ctrl);
void msm_edp_ctrl_power(struct edp_ctrl *ctrl, bool on);
int msm_edp_ctrl_init(struct msm_edp *edp);
void msm_edp_ctrl_destroy(struct edp_ctrl *ctrl);
bool msm_edp_ctrl_panel_connected(struct edp_ctrl *ctrl);
int msm_edp_ctrl_get_panel_info(struct edp_ctrl *ctrl,
struct drm_connector *connector, struct edid **edid);
int msm_edp_ctrl_timing_cfg(struct edp_ctrl *ctrl,
const struct drm_display_mode *mode,
const struct drm_display_info *info);
/* @pixel_rate is in kHz */
bool msm_edp_ctrl_pixel_clock_valid(struct edp_ctrl *ctrl,
u32 pixel_rate, u32 *pm, u32 *pn);
#endif /* __EDP_CONNECTOR_H__ */
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "edp.h"
#include "edp.xml.h"
#define AUX_CMD_FIFO_LEN 144
#define AUX_CMD_NATIVE_MAX 16
#define AUX_CMD_I2C_MAX 128
#define EDP_INTR_AUX_I2C_ERR \
(EDP_INTERRUPT_REG_1_WRONG_ADDR | EDP_INTERRUPT_REG_1_TIMEOUT | \
EDP_INTERRUPT_REG_1_NACK_DEFER | EDP_INTERRUPT_REG_1_WRONG_DATA_CNT | \
EDP_INTERRUPT_REG_1_I2C_NACK | EDP_INTERRUPT_REG_1_I2C_DEFER)
#define EDP_INTR_TRANS_STATUS \
(EDP_INTERRUPT_REG_1_AUX_I2C_DONE | EDP_INTR_AUX_I2C_ERR)
struct edp_aux {
void __iomem *base;
bool msg_err;
struct completion msg_comp;
/* Prevents reentry of the message transaction routine. */
struct mutex msg_mutex;
struct drm_dp_aux drm_aux;
};
#define to_edp_aux(x) container_of(x, struct edp_aux, drm_aux)
static int edp_msg_fifo_tx(struct edp_aux *aux, struct drm_dp_aux_msg *msg)
{
u32 data[4];
u32 reg, len;
bool native = msg->request & (DP_AUX_NATIVE_WRITE & DP_AUX_NATIVE_READ);
bool read = msg->request & (DP_AUX_I2C_READ & DP_AUX_NATIVE_READ);
u8 *msgdata = msg->buffer;
int i;
if (read)
len = 4;
else
len = msg->size + 4;
/*
* the command FIFO is only 144 bytes deep
*/
if (len > AUX_CMD_FIFO_LEN)
return -EINVAL;
/* Pack cmd and write to HW */
data[0] = (msg->address >> 16) & 0xf; /* addr[19:16] */
if (read)
data[0] |= BIT(4); /* R/W */
data[1] = (msg->address >> 8) & 0xff; /* addr[15:8] */
data[2] = msg->address & 0xff; /* addr[7:0] */
data[3] = (msg->size - 1) & 0xff; /* len[7:0] */
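/*
 * Illustrative example (values assumed, not from the driver): a native
 * read of 16 bytes at DPCD address 0x00200 packs data[0]=0x10
 * (addr[19:16]=0 with the R/W bit set), data[1]=0x02, data[2]=0x00,
 * data[3]=0x0f.
 */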
for (i = 0; i < len; i++) {
reg = (i < 4) ? data[i] : msgdata[i - 4];
reg = EDP_AUX_DATA_DATA(reg); /* index = 0, write */
if (i == 0)
reg |= EDP_AUX_DATA_INDEX_WRITE;
edp_write(aux->base + REG_EDP_AUX_DATA, reg);
}
reg = 0; /* Transaction number is always 1 */
if (!native) /* i2c */
reg |= EDP_AUX_TRANS_CTRL_I2C;
reg |= EDP_AUX_TRANS_CTRL_GO;
edp_write(aux->base + REG_EDP_AUX_TRANS_CTRL, reg);
return 0;
}
static int edp_msg_fifo_rx(struct edp_aux *aux, struct drm_dp_aux_msg *msg)
{
u32 data;
u8 *dp;
int i;
u32 len = msg->size;
edp_write(aux->base + REG_EDP_AUX_DATA,
EDP_AUX_DATA_INDEX_WRITE | EDP_AUX_DATA_READ); /* index = 0 */
dp = msg->buffer;
/* discard first byte */
data = edp_read(aux->base + REG_EDP_AUX_DATA);
for (i = 0; i < len; i++) {
data = edp_read(aux->base + REG_EDP_AUX_DATA);
dp[i] = (u8)((data >> 8) & 0xff);
}
return 0;
}
/*
* This function does the real work of processing an AUX transaction.
* It calls msm_edp_aux_ctrl() to reset the AUX channel if the wait
* times out.
* The caller that triggers the transaction must ensure msm_edp_aux_ctrl()
* is not running concurrently in other threads, i.e. start a transaction
* only when the AUX channel is fully enabled.
*/
static ssize_t edp_aux_transfer(struct drm_dp_aux *drm_aux,
struct drm_dp_aux_msg *msg)
{
struct edp_aux *aux = to_edp_aux(drm_aux);
ssize_t ret;
unsigned long time_left;
bool native = msg->request & (DP_AUX_NATIVE_WRITE & DP_AUX_NATIVE_READ);
bool read = msg->request & (DP_AUX_I2C_READ & DP_AUX_NATIVE_READ);
/* Ignore address-only messages */
if ((msg->size == 0) || (msg->buffer == NULL)) {
msg->reply = native ?
DP_AUX_NATIVE_REPLY_ACK : DP_AUX_I2C_REPLY_ACK;
return msg->size;
}
/* msg sanity check */
if ((native && (msg->size > AUX_CMD_NATIVE_MAX)) ||
(msg->size > AUX_CMD_I2C_MAX)) {
pr_err("%s: invalid msg: size(%zu), request(%x)\n",
__func__, msg->size, msg->request);
return -EINVAL;
}
mutex_lock(&aux->msg_mutex);
aux->msg_err = false;
reinit_completion(&aux->msg_comp);
ret = edp_msg_fifo_tx(aux, msg);
if (ret < 0)
goto unlock_exit;
DBG("wait_for_completion");
time_left = wait_for_completion_timeout(&aux->msg_comp,
msecs_to_jiffies(300));
if (!time_left) {
/*
* Clear GO and reset AUX channel
* to cancel the current transaction.
*/
edp_write(aux->base + REG_EDP_AUX_TRANS_CTRL, 0);
msm_edp_aux_ctrl(aux, 1);
pr_err("%s: aux timeout,\n", __func__);
ret = -ETIMEDOUT;
goto unlock_exit;
}
DBG("completion");
if (!aux->msg_err) {
if (read) {
ret = edp_msg_fifo_rx(aux, msg);
if (ret < 0)
goto unlock_exit;
}
msg->reply = native ?
DP_AUX_NATIVE_REPLY_ACK : DP_AUX_I2C_REPLY_ACK;
} else {
/* Reply defer to retry */
msg->reply = native ?
DP_AUX_NATIVE_REPLY_DEFER : DP_AUX_I2C_REPLY_DEFER;
/*
* The sleep time in the caller is not long enough to ensure
* our hardware completes the transaction. Add more defer time here.
*/
msleep(100);
}
/* Return requested size for success or retry */
ret = msg->size;
unlock_exit:
mutex_unlock(&aux->msg_mutex);
return ret;
}
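Once the adapter is registered (see msm_edp_aux_init() below), consumers go through the generic DRM DP helpers rather than calling the transfer hook directly; an illustrative sketch, assuming drm_aux points at the registered struct drm_dp_aux:
	u8 rev;

	if (drm_dp_dpcd_readb(drm_aux, DP_DPCD_REV, &rev) < 0)
		DBG("DPCD read failed");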
void *msm_edp_aux_init(struct msm_edp *edp, void __iomem *regbase, struct drm_dp_aux **drm_aux)
{
struct device *dev = &edp->pdev->dev;
struct edp_aux *aux = NULL;
int ret;
DBG("");
aux = devm_kzalloc(dev, sizeof(*aux), GFP_KERNEL);
if (!aux)
return NULL;
aux->base = regbase;
mutex_init(&aux->msg_mutex);
init_completion(&aux->msg_comp);
aux->drm_aux.name = "msm_edp_aux";
aux->drm_aux.dev = dev;
aux->drm_aux.drm_dev = edp->dev;
aux->drm_aux.transfer = edp_aux_transfer;
ret = drm_dp_aux_register(&aux->drm_aux);
if (ret) {
pr_err("%s: failed to register drm aux: %d\n", __func__, ret);
mutex_destroy(&aux->msg_mutex);
}
if (drm_aux && aux)
*drm_aux = &aux->drm_aux;
return aux;
}
void msm_edp_aux_destroy(struct device *dev, struct edp_aux *aux)
{
if (aux) {
drm_dp_aux_unregister(&aux->drm_aux);
mutex_destroy(&aux->msg_mutex);
}
}
irqreturn_t msm_edp_aux_irq(struct edp_aux *aux, u32 isr)
{
if (isr & EDP_INTR_TRANS_STATUS) {
DBG("isr=%x", isr);
edp_write(aux->base + REG_EDP_AUX_TRANS_CTRL, 0);
if (isr & EDP_INTR_AUX_I2C_ERR)
aux->msg_err = true;
else
aux->msg_err = false;
complete(&aux->msg_comp);
}
return IRQ_HANDLED;
}
void msm_edp_aux_ctrl(struct edp_aux *aux, int enable)
{
u32 data;
DBG("enable=%d", enable);
data = edp_read(aux->base + REG_EDP_AUX_CTRL);
if (enable) {
data |= EDP_AUX_CTRL_RESET;
edp_write(aux->base + REG_EDP_AUX_CTRL, data);
/* Make sure the reset fully takes effect */
wmb();
usleep_range(500, 1000);
data &= ~EDP_AUX_CTRL_RESET;
data |= EDP_AUX_CTRL_ENABLE;
edp_write(aux->base + REG_EDP_AUX_CTRL, data);
} else {
data &= ~EDP_AUX_CTRL_ENABLE;
edp_write(aux->base + REG_EDP_AUX_CTRL, data);
}
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "edp.h"
struct edp_bridge {
struct drm_bridge base;
struct msm_edp *edp;
};
#define to_edp_bridge(x) container_of(x, struct edp_bridge, base)
void edp_bridge_destroy(struct drm_bridge *bridge)
{
}
static void edp_bridge_pre_enable(struct drm_bridge *bridge)
{
struct edp_bridge *edp_bridge = to_edp_bridge(bridge);
struct msm_edp *edp = edp_bridge->edp;
DBG("");
msm_edp_ctrl_power(edp->ctrl, true);
}
static void edp_bridge_enable(struct drm_bridge *bridge)
{
DBG("");
}
static void edp_bridge_disable(struct drm_bridge *bridge)
{
DBG("");
}
static void edp_bridge_post_disable(struct drm_bridge *bridge)
{
struct edp_bridge *edp_bridge = to_edp_bridge(bridge);
struct msm_edp *edp = edp_bridge->edp;
DBG("");
msm_edp_ctrl_power(edp->ctrl, false);
}
static void edp_bridge_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
struct drm_device *dev = bridge->dev;
struct drm_connector *connector;
struct edp_bridge *edp_bridge = to_edp_bridge(bridge);
struct msm_edp *edp = edp_bridge->edp;
DBG("set mode: " DRM_MODE_FMT, DRM_MODE_ARG(mode));
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
struct drm_encoder *encoder = connector->encoder;
struct drm_bridge *first_bridge;
if (!connector->encoder)
continue;
first_bridge = drm_bridge_chain_get_first_bridge(encoder);
if (bridge == first_bridge) {
msm_edp_ctrl_timing_cfg(edp->ctrl,
adjusted_mode, &connector->display_info);
break;
}
}
}
static const struct drm_bridge_funcs edp_bridge_funcs = {
.pre_enable = edp_bridge_pre_enable,
.enable = edp_bridge_enable,
.disable = edp_bridge_disable,
.post_disable = edp_bridge_post_disable,
.mode_set = edp_bridge_mode_set,
};
/* initialize bridge */
struct drm_bridge *msm_edp_bridge_init(struct msm_edp *edp)
{
struct drm_bridge *bridge = NULL;
struct edp_bridge *edp_bridge;
int ret;
edp_bridge = devm_kzalloc(edp->dev->dev,
sizeof(*edp_bridge), GFP_KERNEL);
if (!edp_bridge) {
ret = -ENOMEM;
goto fail;
}
edp_bridge->edp = edp;
bridge = &edp_bridge->base;
bridge->funcs = &edp_bridge_funcs;
ret = drm_bridge_attach(edp->encoder, bridge, NULL, 0);
if (ret)
goto fail;
return bridge;
fail:
if (bridge)
edp_bridge_destroy(bridge);
return ERR_PTR(ret);
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "drm/drm_edid.h"
#include "msm_kms.h"
#include "edp.h"
struct edp_connector {
struct drm_connector base;
struct msm_edp *edp;
};
#define to_edp_connector(x) container_of(x, struct edp_connector, base)
static enum drm_connector_status edp_connector_detect(
struct drm_connector *connector, bool force)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
struct msm_edp *edp = edp_connector->edp;
DBG("");
return msm_edp_ctrl_panel_connected(edp->ctrl) ?
connector_status_connected : connector_status_disconnected;
}
static void edp_connector_destroy(struct drm_connector *connector)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
DBG("");
drm_connector_cleanup(connector);
kfree(edp_connector);
}
static int edp_connector_get_modes(struct drm_connector *connector)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
struct msm_edp *edp = edp_connector->edp;
struct edid *drm_edid = NULL;
int ret = 0;
DBG("");
ret = msm_edp_ctrl_get_panel_info(edp->ctrl, connector, &drm_edid);
if (ret)
return ret;
drm_connector_update_edid_property(connector, drm_edid);
if (drm_edid)
ret = drm_add_edid_modes(connector, drm_edid);
return ret;
}
static int edp_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct edp_connector *edp_connector = to_edp_connector(connector);
struct msm_edp *edp = edp_connector->edp;
struct msm_drm_private *priv = connector->dev->dev_private;
struct msm_kms *kms = priv->kms;
long actual, requested;
requested = 1000 * mode->clock;
actual = kms->funcs->round_pixclk(kms,
requested, edp_connector->edp->encoder);
DBG("requested=%ld, actual=%ld", requested, actual);
if (actual != requested)
return MODE_CLOCK_RANGE;
if (!msm_edp_ctrl_pixel_clock_valid(
edp->ctrl, mode->clock, NULL, NULL))
return MODE_CLOCK_RANGE;
/* Invalidate all modes if color format is not supported */
if (connector->display_info.bpc > 8)
return MODE_BAD;
return MODE_OK;
}
static const struct drm_connector_funcs edp_connector_funcs = {
.detect = edp_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = edp_connector_destroy,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static const struct drm_connector_helper_funcs edp_connector_helper_funcs = {
.get_modes = edp_connector_get_modes,
.mode_valid = edp_connector_mode_valid,
};
/* initialize connector */
struct drm_connector *msm_edp_connector_init(struct msm_edp *edp)
{
struct drm_connector *connector = NULL;
struct edp_connector *edp_connector;
int ret;
edp_connector = kzalloc(sizeof(*edp_connector), GFP_KERNEL);
if (!edp_connector)
return ERR_PTR(-ENOMEM);
edp_connector->edp = edp;
connector = &edp_connector->base;
ret = drm_connector_init(edp->dev, connector, &edp_connector_funcs,
DRM_MODE_CONNECTOR_eDP);
if (ret)
return ERR_PTR(ret);
drm_connector_helper_add(connector, &edp_connector_helper_funcs);
/* We don't support HPD, so only poll status until connected. */
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/* The display driver doesn't support interlaced modes yet. */
connector->interlace_allowed = false;
connector->doublescan_allowed = false;
drm_connector_attach_encoder(connector, edp->encoder);
return connector;
}
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
*/
#include "edp.h"
#include "edp.xml.h"
#define EDP_MAX_LANE 4
struct edp_phy {
void __iomem *base;
};
bool msm_edp_phy_ready(struct edp_phy *phy)
{
u32 status;
int cnt = 100;
while (--cnt) {
status = edp_read(phy->base +
REG_EDP_PHY_GLB_PHY_STATUS);
if (status & 0x01)
break;
usleep_range(500, 1000);
}
if (cnt == 0) {
pr_err("%s: PHY NOT ready\n", __func__);
return false;
} else {
return true;
}
}
void msm_edp_phy_ctrl(struct edp_phy *phy, int enable)
{
DBG("enable=%d", enable);
if (enable) {
/* Reset */
edp_write(phy->base + REG_EDP_PHY_CTRL,
EDP_PHY_CTRL_SW_RESET | EDP_PHY_CTRL_SW_RESET_PLL);
/* Make sure the reset fully takes effect */
wmb();
usleep_range(500, 1000);
edp_write(phy->base + REG_EDP_PHY_CTRL, 0x000);
edp_write(phy->base + REG_EDP_PHY_GLB_PD_CTL, 0x3f);
edp_write(phy->base + REG_EDP_PHY_GLB_CFG, 0x1);
} else {
edp_write(phy->base + REG_EDP_PHY_GLB_PD_CTL, 0xc0);
}
}
/* voltage mode and pre-emphasis config */
void msm_edp_phy_vm_pe_init(struct edp_phy *phy)
{
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG0, 0x3);
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG1, 0x64);
edp_write(phy->base + REG_EDP_PHY_GLB_MISC9, 0x6c);
}
void msm_edp_phy_vm_pe_cfg(struct edp_phy *phy, u32 v0, u32 v1)
{
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG0, v0);
edp_write(phy->base + REG_EDP_PHY_GLB_VM_CFG1, v1);
}
void msm_edp_phy_lane_power_ctrl(struct edp_phy *phy, bool up, u32 max_lane)
{
u32 i;
u32 data;
if (up)
data = 0; /* power up */
else
data = 0x7; /* power down */
for (i = 0; i < max_lane; i++)
edp_write(phy->base + REG_EDP_PHY_LN_PD_CTL(i), data);
/* power down the unused lanes */
data = 0x7; /* power down */
for (i = max_lane; i < EDP_MAX_LANE; i++)
edp_write(phy->base + REG_EDP_PHY_LN_PD_CTL(i), data);
}
void *msm_edp_phy_init(struct device *dev, void __iomem *regbase)
{
struct edp_phy *phy = NULL;
phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
if (!phy)
return NULL;
phy->base = regbase;
return phy;
}
......@@ -8,6 +8,8 @@
#include <linux/of_irq.h>
#include <linux/of_gpio.h>
#include <drm/drm_bridge_connector.h>
#include <sound/hdmi-codec.h>
#include "hdmi.h"
......@@ -41,7 +43,7 @@ static irqreturn_t msm_hdmi_irq(int irq, void *dev_id)
struct hdmi *hdmi = dev_id;
/* Process HPD: */
msm_hdmi_connector_irq(hdmi->connector);
msm_hdmi_hpd_irq(hdmi->bridge);
/* Process DDC: */
msm_hdmi_i2c_irq(hdmi->i2c);
......@@ -281,7 +283,7 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
goto fail;
}
hdmi->connector = msm_hdmi_connector_init(hdmi);
hdmi->connector = drm_bridge_connector_init(hdmi->dev, encoder);
if (IS_ERR(hdmi->connector)) {
ret = PTR_ERR(hdmi->connector);
DRM_DEV_ERROR(dev->dev, "failed to create HDMI connector: %d\n", ret);
......@@ -289,6 +291,8 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
goto fail;
}
drm_connector_attach_encoder(hdmi->connector, hdmi->encoder);
hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
if (hdmi->irq < 0) {
ret = hdmi->irq;
......@@ -305,7 +309,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
goto fail;
}
ret = msm_hdmi_hpd_enable(hdmi->connector);
drm_bridge_connector_enable_hpd(hdmi->connector);
ret = msm_hdmi_hpd_enable(hdmi->bridge);
if (ret < 0) {
DRM_DEV_ERROR(&hdmi->pdev->dev, "failed to enable HPD: %d\n", ret);
goto fail;
......@@ -514,8 +520,7 @@ static int msm_hdmi_register_audio_driver(struct hdmi *hdmi, struct device *dev)
static int msm_hdmi_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
struct hdmi_platform_config *hdmi_cfg;
struct hdmi *hdmi;
struct device_node *of_node = dev->of_node;
......@@ -586,8 +591,8 @@ static int msm_hdmi_bind(struct device *dev, struct device *master, void *data)
static void msm_hdmi_unbind(struct device *dev, struct device *master,
void *data)
{
struct drm_device *drm = dev_get_drvdata(master);
struct msm_drm_private *priv = drm->dev_private;
struct msm_drm_private *priv = dev_get_drvdata(master);
if (priv->hdmi) {
if (priv->hdmi->audio_pdev)
platform_device_unregister(priv->hdmi->audio_pdev);
......
......@@ -60,4 +60,16 @@ void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence);
struct dma_fence * msm_fence_alloc(struct msm_fence_context *fctx);
static inline bool
fence_before(uint32_t a, uint32_t b)
{
return (int32_t)(a - b) < 0;
}
static inline bool
fence_after(uint32_t a, uint32_t b)
{
return (int32_t)(a - b) > 0;
}
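/*
 * These comparisons are wraparound-safe: the u32 subtraction wraps
 * modulo 2^32 and the cast recovers the signed distance. Illustrative
 * example (values assumed): fence_before(0xfffffffe, 0x00000001) is
 * true because (int32_t)(0xfffffffe - 0x00000001) == -3 < 0.
 */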
#endif