Commit 8d8f6f70 authored by Dave Airlie

Merge tag 'drm-misc-next-2019-04-18' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v5.2:

UAPI Changes:
- Document which feature flags belong to which command in virtio_gpu.h
- Make the FB_DAMAGE_CLIPS property available to atomic userspace only; it is useless for legacy.

Cross-subsystem Changes:
- Add device tree bindings for lg,acx467akm-7 panel and ST-Ericsson Multi Channel Display Engine MCDE
- Add parameters to the device tree bindings for tfp410
- iommu/io-pgtable: Add ARM Mali midgard MMU page table format
- dma-buf: Only do a 64-bit seqno compare when the driver explicitly asks for it; otherwise keep the wraparound-safe 32-bit compare.
- Use the 64-bit compare for dma-fence-chains (see the sketch below).
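For reference, a sketch of the reworked comparison helper these changes build on (modelled on the __dma_fence_is_later() inline in include/linux/dma-fence.h; illustrative, not a patch in this pull):

static inline bool __dma_fence_is_later(u64 f1, u64 f2,
					const struct dma_fence_ops *ops)
{
	/* Drivers that set .use_64bit_seqno get a plain 64-bit compare. */
	if (ops->use_64bit_seqno)
		return f1 > f2;

	/* Everyone else keeps the wraparound-safe 32-bit compare. */
	return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
}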

Core Changes:
- Make the fb conversion functions use __iomem dst.
- Rename drm_client_add to drm_client_register
- Move intel_fb_initial_config to core.
- Add a drm_gem_objects_lookup helper
- Add drm_gem_fence_array helpers, and use it in lima.
- Add drm_format_helper.c to kerneldoc.

Driver Changes:
- Add the panfrost driver for Mali Midgard/Bifrost GPUs.
- Convert bochs to the simple display pipe helper.
- Small fixes to sun4i, tinydrm, tfp410.
- Fix aspeed's Kconfig options.
- Make some symbols/functions static in lima, sun4i and meson.
- Add a driver for the lg,acx467akm-7 panel.
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/737ad994-213d-45b5-207a-b99d795acd21@linux.intel.com
parents b1c4f7fe debcd8f9
......@@ -18,7 +18,14 @@ This device has two video ports. Their connections are modeled using the OF
graph bindings specified in [1]. Each port node shall have a single endpoint.
- Port 0 is the DPI input port. Its endpoint subnode shall contain a
pclk-sample property and a remote-endpoint property as specified in [1].
pclk-sample and bus-width property and a remote-endpoint property as specified
in [1].
- If pclk-sample is not defined, pclk-sample = 0 should be assumed for
backward compatibility.
- If bus-width is not defined then bus-width = 24 should be assumed for
backward compatibility.
bus-width = 24: 24 data lines are connected and single-edge mode
bus-width = 12: 12 data lines are connected and dual-edge mode
- Port 1 is the DVI output port. Its endpoint subnode shall contain a
remote-endpoint property as specified in [1].
......@@ -43,6 +50,7 @@ tfp410: encoder@0 {
tfp410_in: endpoint@0 {
pclk-sample = <1>;
bus-width = <24>;
remote-endpoint = <&dpi_out>;
};
};
......
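For the dual-edge case described in the binding above, the endpoint would instead carry bus-width = <12> (an illustrative variant of the example, assuming only 12 data lines are wired):

tfp410_in: endpoint@0 {
	pclk-sample = <1>;
	bus-width = <12>;
	remote-endpoint = <&dpi_out>;
};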
LG ACX467AKM-7 4.95" 1080×1920 LCD Panel
Required properties:
- compatible: must be "lg,acx467akm-7"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.
ST-Ericsson Multi Channel Display Engine MCDE
The ST-Ericsson MCDE is a display controller with support for compositing
and displaying several channels of memory-resident graphics data on DSI or
LCD displays or bridges. It is used in the ST-Ericsson U8500 platform.
Required properties:
- compatible: must be:
"ste,mcde"
- reg: register base for the main MCDE control registers, should be
0x1000 in size
- interrupts: the interrupt line for the MCDE
- epod-supply: a phandle to the EPOD regulator
- vana-supply: a phandle to the analog voltage regulator
- clocks: an array of the MCDE clocks in this strict order:
  MCDECLK (main MCDE clock), LCDCLK (LCD clock), PLLDSI
  (HDMI clock)
- clock-names: must be the following array:
  "mcde", "lcd", "hdmi"
  to match the required clock inputs above. (The DSI energy save
  clocks belong to the DSI host subnodes below, not to this node.)
- #address-cells: should be <1> (for the DSI hosts that will be children)
- #size-cells: should be <1> (for the DSI hosts that will be children)
- ranges: this should always be stated
Required subnodes:
The devicetree must specify subnodes for the DSI host adapters.
These must have the following characteristics:
- compatible: must be:
"ste,mcde-dsi"
- reg: must specify the register range for the DSI host
- vana-supply: phandle to the VANA voltage regulator
- clocks: phandles to the high speed and low power (energy save) clocks
the high speed clock is not present on the third (dsi2) block, so it
should only have the "lp" clock
- clock-names: "hs" for the high speed clock and "lp" for the low power
(energy save) clock
- #address-cells: should be <1>
- #size-cells: should be <0>
Display panels and bridges will appear as children on the DSI hosts, and
the displays are connected to the DSI hosts using the common binding
for video transmitter interfaces; see
Documentation/devicetree/bindings/media/video-interfaces.txt
If a DSI host is unused (not connected) it will have no children defined.
Example:
mcde@a0350000 {
compatible = "ste,mcde";
reg = <0xa0350000 0x1000>;
interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
epod-supply = <&db8500_b2r2_mcde_reg>;
vana-supply = <&ab8500_ldo_ana_reg>;
clocks = <&prcmu_clk PRCMU_MCDECLK>, /* Main MCDE clock */
<&prcmu_clk PRCMU_LCDCLK>, /* LCD clock */
<&prcmu_clk PRCMU_PLLDSI>; /* HDMI clock */
clock-names = "mcde", "lcd", "hdmi";
#address-cells = <1>;
#size-cells = <1>;
ranges;
dsi0: dsi@a0351000 {
compatible = "ste,mcde-dsi";
reg = <0xa0351000 0x1000>;
vana-supply = <&ab8500_ldo_ana_reg>;
clocks = <&prcmu_clk PRCMU_DSI0CLK>, <&prcmu_clk PRCMU_DSI0ESCCLK>;
clock-names = "hs", "lp";
#address-cells = <1>;
#size-cells = <0>;
panel {
compatible = "samsung,s6d16d0";
reg = <0>;
vdd1-supply = <&ab8500_ldo_aux1_reg>;
reset-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
};
};
dsi1: dsi@a0352000 {
compatible = "ste,mcde-dsi";
reg = <0xa0352000 0x1000>;
vana-supply = <&ab8500_ldo_ana_reg>;
clocks = <&prcmu_clk PRCMU_DSI1CLK>, <&prcmu_clk PRCMU_DSI1ESCCLK>;
clock-names = "hs", "lp";
#address-cells = <1>;
#size-cells = <0>;
};
dsi2: dsi@a0353000 {
compatible = "ste,mcde-dsi";
reg = <0xa0353000 0x1000>;
vana-supply = <&ab8500_ldo_ana_reg>;
/* This DSI port only has the Low Power / Energy Save clock */
clocks = <&prcmu_clk PRCMU_DSI2ESCCLK>;
clock-names = "lp";
#address-cells = <1>;
#size-cells = <0>;
};
};
......@@ -107,6 +107,12 @@ fbdev Helper Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_fb_helper.c
:export:
format Helper Functions Reference
=================================
.. kernel-doc:: drivers/gpu/drm/drm_format_helper.c
:export:
Framebuffer CMA Helper Functions Reference
==========================================
......
......@@ -1180,6 +1180,15 @@ F: drivers/gpu/drm/arm/
F: Documentation/devicetree/bindings/display/arm,malidp.txt
F: Documentation/gpu/afbc.rst
ARM MALI PANFROST DRM DRIVER
M: Rob Herring <robh@kernel.org>
M: Tomeu Vizoso <tomeu.vizoso@collabora.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: drivers/gpu/drm/panfrost/
F: include/uapi/drm/panfrost_drm.h
ARM MFM AND FLOPPY DRIVERS
M: Ian Molton <spyro@f2s.com>
S: Maintained
......
......@@ -193,6 +193,7 @@ static void dma_fence_chain_release(struct dma_fence *fence)
}
const struct dma_fence_ops dma_fence_chain_ops = {
.use_64bit_seqno = true,
.get_driver_name = dma_fence_chain_get_driver_name,
.get_timeline_name = dma_fence_chain_get_timeline_name,
.enable_signaling = dma_fence_chain_enable_signaling,
......@@ -225,7 +226,7 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
init_irq_work(&chain->work, dma_fence_chain_irq_work);
/* Try to reuse the context of the previous chain node. */
if (prev_chain && __dma_fence_is_later(seqno, prev->seqno)) {
if (prev_chain && __dma_fence_is_later(seqno, prev->seqno, prev->ops)) {
context = prev->context;
chain->prev_seqno = prev->seqno;
} else {
......
......@@ -161,7 +161,7 @@ static bool timeline_fence_signaled(struct dma_fence *fence)
{
struct sync_timeline *parent = dma_fence_parent(fence);
return !__dma_fence_is_later(fence->seqno, parent->value);
return !__dma_fence_is_later(fence->seqno, parent->value, fence->ops);
}
static bool timeline_fence_enable_signaling(struct dma_fence *fence)
......
......@@ -258,7 +258,8 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
i_b++;
} else {
if (__dma_fence_is_later(pt_a->seqno, pt_b->seqno))
if (__dma_fence_is_later(pt_a->seqno, pt_b->seqno,
pt_a->ops))
add_fence(fences, &i, pt_a);
else
add_fence(fences, &i, pt_b);
......
......@@ -337,6 +337,8 @@ source "drivers/gpu/drm/vboxvideo/Kconfig"
source "drivers/gpu/drm/lima/Kconfig"
source "drivers/gpu/drm/panfrost/Kconfig"
source "drivers/gpu/drm/aspeed/Kconfig"
# Keep legacy drivers last
......
......@@ -112,4 +112,5 @@ obj-$(CONFIG_DRM_TVE200) += tve200/
obj-$(CONFIG_DRM_XEN) += xen/
obj-$(CONFIG_DRM_VBOXVIDEO) += vboxvideo/
obj-$(CONFIG_DRM_LIMA) += lima/
obj-$(CONFIG_DRM_PANFROST) += panfrost/
obj-$(CONFIG_DRM_ASPEED_GFX) += aspeed/
config DRM_ASPEED_GFX
tristate "ASPEED BMC Display Controller"
depends on DRM && OF
depends on (COMPILE_TEST || ARCH_ASPEED)
select DRM_KMS_HELPER
select DRM_KMS_CMA_HELPER
select DRM_PANEL
select DMA_CMA
select CMA
select DMA_CMA if HAVE_DMA_CONTIGUOUS
select CMA if HAVE_DMA_CONTIGUOUS
select MFD_SYSCON
help
Choose this option if you have an ASPEED AST2500 SOC Display
......
......@@ -7,6 +7,7 @@
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_gem.h>
......@@ -70,8 +71,7 @@ struct bochs_device {
/* drm */
struct drm_device *dev;
struct drm_crtc crtc;
struct drm_encoder encoder;
struct drm_simple_display_pipe pipe;
struct drm_connector connector;
/* ttm */
......
......@@ -22,75 +22,54 @@ MODULE_PARM_DESC(defy, "default y resolution");
/* ---------------------------------------------------------------------- */
static void bochs_crtc_mode_set_nofb(struct drm_crtc *crtc)
static const uint32_t bochs_formats[] = {
DRM_FORMAT_XRGB8888,
DRM_FORMAT_BGRX8888,
};
static void bochs_plane_update(struct bochs_device *bochs,
struct drm_plane_state *state)
{
struct bochs_device *bochs =
container_of(crtc, struct bochs_device, crtc);
struct bochs_bo *bo;
bochs_hw_setmode(bochs, &crtc->mode);
}
if (!state->fb || !bochs->stride)
return;
static void bochs_crtc_atomic_enable(struct drm_crtc *crtc,
struct drm_crtc_state *old_crtc_state)
{
bo = gem_to_bochs_bo(state->fb->obj[0]);
bochs_hw_setbase(bochs,
state->crtc_x,
state->crtc_y,
bo->bo.offset);
bochs_hw_setformat(bochs, state->fb->format);
}
static void bochs_crtc_atomic_flush(struct drm_crtc *crtc,
struct drm_crtc_state *old_crtc_state)
static void bochs_pipe_enable(struct drm_simple_display_pipe *pipe,
struct drm_crtc_state *crtc_state,
struct drm_plane_state *plane_state)
{
struct drm_device *dev = crtc->dev;
struct drm_pending_vblank_event *event;
struct bochs_device *bochs = pipe->crtc.dev->dev_private;
if (crtc->state && crtc->state->event) {
unsigned long irqflags;
spin_lock_irqsave(&dev->event_lock, irqflags);
event = crtc->state->event;
crtc->state->event = NULL;
drm_crtc_send_vblank_event(crtc, event);
spin_unlock_irqrestore(&dev->event_lock, irqflags);
}
bochs_hw_setmode(bochs, &crtc_state->mode);
bochs_plane_update(bochs, plane_state);
}
/* These provide the minimum set of functions required to handle a CRTC */
static const struct drm_crtc_funcs bochs_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = drm_crtc_cleanup,
.page_flip = drm_atomic_helper_page_flip,
.reset = drm_atomic_helper_crtc_reset,
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
};
static const struct drm_crtc_helper_funcs bochs_helper_funcs = {
.mode_set_nofb = bochs_crtc_mode_set_nofb,
.atomic_enable = bochs_crtc_atomic_enable,
.atomic_flush = bochs_crtc_atomic_flush,
};
static const uint32_t bochs_formats[] = {
DRM_FORMAT_XRGB8888,
DRM_FORMAT_BGRX8888,
};
static void bochs_plane_atomic_update(struct drm_plane *plane,
static void bochs_pipe_update(struct drm_simple_display_pipe *pipe,
struct drm_plane_state *old_state)
{
struct bochs_device *bochs = plane->dev->dev_private;
struct bochs_bo *bo;
struct bochs_device *bochs = pipe->crtc.dev->dev_private;
struct drm_crtc *crtc = &pipe->crtc;
if (!plane->state->fb)
return;
bo = gem_to_bochs_bo(plane->state->fb->obj[0]);
bochs_hw_setbase(bochs,
plane->state->crtc_x,
plane->state->crtc_y,
bo->bo.offset);
bochs_hw_setformat(bochs, plane->state->fb->format);
bochs_plane_update(bochs, pipe->plane.state);
if (crtc->state->event) {
spin_lock_irq(&crtc->dev->event_lock);
drm_crtc_send_vblank_event(crtc, crtc->state->event);
crtc->state->event = NULL;
spin_unlock_irq(&crtc->dev->event_lock);
}
}
static int bochs_plane_prepare_fb(struct drm_plane *plane,
static int bochs_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
struct drm_plane_state *new_state)
{
struct bochs_bo *bo;
......@@ -101,7 +80,7 @@ static int bochs_plane_prepare_fb(struct drm_plane *plane,
return bochs_bo_pin(bo, TTM_PL_FLAG_VRAM);
}
static void bochs_plane_cleanup_fb(struct drm_plane *plane,
static void bochs_pipe_cleanup_fb(struct drm_simple_display_pipe *pipe,
struct drm_plane_state *old_state)
{
struct bochs_bo *bo;
......@@ -112,73 +91,13 @@ static void bochs_plane_cleanup_fb(struct drm_plane *plane,
bochs_bo_unpin(bo);
}
static const struct drm_plane_helper_funcs bochs_plane_helper_funcs = {
.atomic_update = bochs_plane_atomic_update,
.prepare_fb = bochs_plane_prepare_fb,
.cleanup_fb = bochs_plane_cleanup_fb,
};
static const struct drm_plane_funcs bochs_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.destroy = drm_primary_helper_destroy,
.reset = drm_atomic_helper_plane_reset,
.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
};
static struct drm_plane *bochs_primary_plane(struct drm_device *dev)
{
struct drm_plane *primary;
int ret;
primary = kzalloc(sizeof(*primary), GFP_KERNEL);
if (primary == NULL) {
DRM_DEBUG_KMS("Failed to allocate primary plane\n");
return NULL;
}
ret = drm_universal_plane_init(dev, primary, 0,
&bochs_plane_funcs,
bochs_formats,
ARRAY_SIZE(bochs_formats),
NULL,
DRM_PLANE_TYPE_PRIMARY, NULL);
if (ret) {
kfree(primary);
return NULL;
}
drm_plane_helper_add(primary, &bochs_plane_helper_funcs);
return primary;
}
static void bochs_crtc_init(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
struct drm_crtc *crtc = &bochs->crtc;
struct drm_plane *primary = bochs_primary_plane(dev);
drm_crtc_init_with_planes(dev, crtc, primary, NULL,
&bochs_crtc_funcs, NULL);
drm_crtc_helper_add(crtc, &bochs_helper_funcs);
}
static const struct drm_encoder_funcs bochs_encoder_encoder_funcs = {
.destroy = drm_encoder_cleanup,
static const struct drm_simple_display_pipe_funcs bochs_pipe_funcs = {
.enable = bochs_pipe_enable,
.update = bochs_pipe_update,
.prepare_fb = bochs_pipe_prepare_fb,
.cleanup_fb = bochs_pipe_cleanup_fb,
};
static void bochs_encoder_init(struct drm_device *dev)
{
struct bochs_device *bochs = dev->dev_private;
struct drm_encoder *encoder = &bochs->encoder;
encoder->possible_crtcs = 0x1;
drm_encoder_init(dev, encoder, &bochs_encoder_encoder_funcs,
DRM_MODE_ENCODER_DAC, NULL);
}
static int bochs_connector_get_modes(struct drm_connector *connector)
{
struct bochs_device *bochs =
......@@ -278,11 +197,14 @@ int bochs_kms_init(struct bochs_device *bochs)
bochs->dev->mode_config.funcs = &bochs_mode_funcs;
bochs_crtc_init(bochs->dev);
bochs_encoder_init(bochs->dev);
bochs_connector_init(bochs->dev);
drm_connector_attach_encoder(&bochs->connector,
&bochs->encoder);
drm_simple_display_pipe_init(bochs->dev,
&bochs->pipe,
&bochs_pipe_funcs,
bochs_formats,
ARRAY_SIZE(bochs_formats),
NULL,
&bochs->connector);
drm_mode_config_reset(bochs->dev);
......
......@@ -29,8 +29,10 @@ struct tfp410 {
struct drm_connector connector;
unsigned int connector_type;
u32 bus_format;
struct i2c_adapter *ddc;
struct gpio_desc *hpd;
int hpd_irq;
struct delayed_work hpd_work;
struct gpio_desc *powerdown;
......@@ -124,8 +126,10 @@ static int tfp410_attach(struct drm_bridge *bridge)
return -ENODEV;
}
if (dvi->hpd)
if (dvi->hpd_irq >= 0)
dvi->connector.polled = DRM_CONNECTOR_POLL_HPD;
else
dvi->connector.polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
drm_connector_helper_add(&dvi->connector,
&tfp410_con_helper_funcs);
......@@ -136,6 +140,9 @@ static int tfp410_attach(struct drm_bridge *bridge)
return ret;
}
drm_display_info_set_bus_formats(&dvi->connector.display_info,
&dvi->bus_format, 1);
drm_connector_attach_encoder(&dvi->connector,
bridge->encoder);
......@@ -194,6 +201,7 @@ static int tfp410_parse_timings(struct tfp410 *dvi, bool i2c)
struct drm_bridge_timings *timings = &dvi->timings;
struct device_node *ep;
u32 pclk_sample = 0;
u32 bus_width = 24;
s32 deskew = 0;
/* Start with defaults. */
......@@ -218,6 +226,7 @@ static int tfp410_parse_timings(struct tfp410 *dvi, bool i2c)
/* Get the sampling edge from the endpoint. */
of_property_read_u32(ep, "pclk-sample", &pclk_sample);
of_property_read_u32(ep, "bus-width", &bus_width);
of_node_put(ep);
timings->input_bus_flags = DRM_BUS_FLAG_DE_HIGH;
......@@ -235,6 +244,17 @@ static int tfp410_parse_timings(struct tfp410 *dvi, bool i2c)
return -EINVAL;
}
switch (bus_width) {
case 12:
dvi->bus_format = MEDIA_BUS_FMT_RGB888_2X12_LE;
break;
case 24:
dvi->bus_format = MEDIA_BUS_FMT_RGB888_1X24;
break;
default:
return -EINVAL;
}
/* Get the setup and hold time from vendor-specific properties. */
of_property_read_u32(dvi->dev->of_node, "ti,deskew", (u32 *)&deskew);
if (deskew < -4 || deskew > 3)
......@@ -324,10 +344,15 @@ static int tfp410_init(struct device *dev, bool i2c)
return PTR_ERR(dvi->powerdown);
}
if (dvi->hpd) {
if (dvi->hpd)
dvi->hpd_irq = gpiod_to_irq(dvi->hpd);
else
dvi->hpd_irq = -ENXIO;
if (dvi->hpd_irq >= 0) {
INIT_DELAYED_WORK(&dvi->hpd_work, tfp410_hpd_work_func);
ret = devm_request_threaded_irq(dev, gpiod_to_irq(dvi->hpd),
ret = devm_request_threaded_irq(dev, dvi->hpd_irq,
NULL, tfp410_hpd_irq_thread, IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
"hdmi-hpd", dvi);
......
......@@ -307,16 +307,16 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
return -ENOMEM;
if (cirrus->cpp == fb->format->cpp[0])
drm_fb_memcpy_dstclip(__io_virt(cirrus->vram),
drm_fb_memcpy_dstclip(cirrus->vram,
vmap, fb, rect);
else if (fb->format->cpp[0] == 4 && cirrus->cpp == 2)
drm_fb_xrgb8888_to_rgb565_dstclip(__io_virt(cirrus->vram),
drm_fb_xrgb8888_to_rgb565_dstclip(cirrus->vram,
cirrus->pitch,
vmap, fb, rect, false);
else if (fb->format->cpp[0] == 4 && cirrus->cpp == 3)
drm_fb_xrgb8888_to_rgb888_dstclip(__io_virt(cirrus->vram),
drm_fb_xrgb8888_to_rgb888_dstclip(cirrus->vram,
cirrus->pitch,
vmap, fb, rect);
......
......@@ -69,7 +69,8 @@ EXPORT_SYMBOL(drm_client_close);
* @name: Client name
* @funcs: DRM client functions (optional)
*
* This initialises the client and opens a &drm_file. Use drm_client_add() to complete the process.
* This initialises the client and opens a &drm_file.
* Use drm_client_register() to complete the process.
* The caller needs to hold a reference on @dev before calling this function.
* The client is freed when the &drm_device is unregistered. See drm_client_release().
*
......@@ -108,16 +109,16 @@ int drm_client_init(struct drm_device *dev, struct drm_client_dev *client,
EXPORT_SYMBOL(drm_client_init);
/**
* drm_client_add - Add client to the device list
* drm_client_register - Register client
* @client: DRM client
*
* Add the client to the &drm_device client list to activate its callbacks.
* @client must be initialized by a call to drm_client_init(). After
* drm_client_add() it is no longer permissible to call drm_client_release()
* drm_client_register() it is no longer permissible to call drm_client_release()
* directly (outside the unregister callback), instead cleanup will happen
* automatically on driver unload.
*/
void drm_client_add(struct drm_client_dev *client)
void drm_client_register(struct drm_client_dev *client)
{
struct drm_device *dev = client->dev;
......@@ -125,7 +126,7 @@ void drm_client_add(struct drm_client_dev *client)
list_add(&client->list, &dev->clientlist);
mutex_unlock(&dev->clientlist_mutex);
}
EXPORT_SYMBOL(drm_client_add);
EXPORT_SYMBOL(drm_client_register);
/**
* drm_client_release - Release DRM client resources
......
......@@ -2559,6 +2559,194 @@ static void drm_setup_crtc_rotation(struct drm_fb_helper *fb_helper,
fb_helper->sw_rotations |= DRM_MODE_ROTATE_0;
}
static struct drm_fb_helper_crtc *
drm_fb_helper_crtc(struct drm_fb_helper *fb_helper, struct drm_crtc *crtc)
{
int i;
for (i = 0; i < fb_helper->crtc_count; i++)
if (fb_helper->crtc_info[i].mode_set.crtc == crtc)
return &fb_helper->crtc_info[i];
return NULL;
}
/* Try to read the BIOS display configuration and use it for the initial config */
static bool drm_fb_helper_firmware_config(struct drm_fb_helper *fb_helper,
struct drm_fb_helper_crtc **crtcs,
struct drm_display_mode **modes,
struct drm_fb_offset *offsets,
bool *enabled, int width, int height)
{
struct drm_device *dev = fb_helper->dev;
unsigned int count = min(fb_helper->connector_count, BITS_PER_LONG);
unsigned long conn_configured, conn_seq;
int i, j;
bool *save_enabled;
bool fallback = true, ret = true;
int num_connectors_enabled = 0;
int num_connectors_detected = 0;
struct drm_modeset_acquire_ctx ctx;
save_enabled = kcalloc(count, sizeof(bool), GFP_KERNEL);
if (!save_enabled)
return false;
drm_modeset_acquire_init(&ctx, 0);
while (drm_modeset_lock_all_ctx(dev, &ctx) != 0)
drm_modeset_backoff(&ctx);
memcpy(save_enabled, enabled, count);
conn_seq = GENMASK(count - 1, 0);
conn_configured = 0;
retry:
for (i = 0; i < count; i++) {
struct drm_fb_helper_connector *fb_conn;
struct drm_connector *connector;
struct drm_encoder *encoder;
struct drm_fb_helper_crtc *new_crtc;
fb_conn = fb_helper->connector_info[i];
connector = fb_conn->connector;
if (conn_configured & BIT(i))
continue;
/* First pass, only consider tiled connectors */
if (conn_seq == GENMASK(count - 1, 0) && !connector->has_tile)
continue;
if (connector->status == connector_status_connected)
num_connectors_detected++;
if (!enabled[i]) {
DRM_DEBUG_KMS("connector %s not enabled, skipping\n",
connector->name);
conn_configured |= BIT(i);
continue;
}
if (connector->force == DRM_FORCE_OFF) {
DRM_DEBUG_KMS("connector %s is disabled by user, skipping\n",
connector->name);
enabled[i] = false;
continue;
}
encoder = connector->state->best_encoder;
if (!encoder || WARN_ON(!connector->state->crtc)) {
if (connector->force > DRM_FORCE_OFF)
goto bail;
DRM_DEBUG_KMS("connector %s has no encoder or crtc, skipping\n",
connector->name);
enabled[i] = false;
conn_configured |= BIT(i);
continue;
}
num_connectors_enabled++;
new_crtc = drm_fb_helper_crtc(fb_helper, connector->state->crtc);
/*
* Make sure we're not trying to drive multiple connectors
* with a single CRTC, since our cloning support may not
* match the BIOS.
*/
for (j = 0; j < count; j++) {
if (crtcs[j] == new_crtc) {
DRM_DEBUG_KMS("fallback: cloned configuration\n");
goto bail;
}
}
DRM_DEBUG_KMS("looking for cmdline mode on connector %s\n",
connector->name);
/* go for command line mode first */
modes[i] = drm_pick_cmdline_mode(fb_conn);
/* try for preferred next */
if (!modes[i]) {
DRM_DEBUG_KMS("looking for preferred mode on connector %s %d\n",
connector->name, connector->has_tile);
modes[i] = drm_has_preferred_mode(fb_conn, width,
height);
}
/* No preferred mode marked by the EDID? Are there any modes? */
if (!modes[i] && !list_empty(&connector->modes)) {
DRM_DEBUG_KMS("using first mode listed on connector %s\n",
connector->name);
modes[i] = list_first_entry(&connector->modes,
struct drm_display_mode,
head);
}
/* last resort: use current mode */
if (!modes[i]) {
/*
* IMPORTANT: We want to use the adjusted mode (i.e.
* after the panel fitter upscaling) as the initial
* config, not the input mode, which is what crtc->mode
* usually contains. But since our current
* code puts a mode derived from the post-pfit timings
* into crtc->mode this works out correctly.
*
* This is crtc->mode and not crtc->state->mode for the
* fastboot check to work correctly.
*/
DRM_DEBUG_KMS("looking for current mode on connector %s\n",
connector->name);
modes[i] = &connector->state->crtc->mode;
}
crtcs[i] = new_crtc;
DRM_DEBUG_KMS("connector %s on [CRTC:%d:%s]: %dx%d%s\n",
connector->name,
connector->state->crtc->base.id,
connector->state->crtc->name,
modes[i]->hdisplay, modes[i]->vdisplay,
modes[i]->flags & DRM_MODE_FLAG_INTERLACE ? "i" : "");
fallback = false;
conn_configured |= BIT(i);
}
if (conn_configured != conn_seq) { /* repeat until no more are found */
conn_seq = conn_configured;
goto retry;
}
/*
* If the BIOS didn't enable everything it could, fall back to the same
* user experience of lighting up as much as possible, like the fbdev
* helper library does.
*/
if (num_connectors_enabled != num_connectors_detected &&
num_connectors_enabled < dev->mode_config.num_crtc) {
DRM_DEBUG_KMS("fallback: Not all outputs enabled\n");
DRM_DEBUG_KMS("Enabled: %i, detected: %i\n", num_connectors_enabled,
num_connectors_detected);
fallback = true;
}
if (fallback) {
bail:
DRM_DEBUG_KMS("Not using firmware configuration\n");
memcpy(enabled, save_enabled, count);
ret = false;
}
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
kfree(save_enabled);
return ret;
}
static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
u32 width, u32 height)
{
......@@ -2591,10 +2779,8 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
DRM_DEBUG_KMS("No connectors reported connected with modes\n");
drm_enable_connectors(fb_helper, enabled);
if (!(fb_helper->funcs->initial_config &&
fb_helper->funcs->initial_config(fb_helper, crtcs, modes,
offsets,
enabled, width, height))) {
if (!drm_fb_helper_firmware_config(fb_helper, crtcs, modes, offsets,
enabled, width, height)) {
memset(modes, 0, fb_helper->connector_count*sizeof(modes[0]));
memset(crtcs, 0, fb_helper->connector_count*sizeof(crtcs[0]));
memset(offsets, 0, fb_helper->connector_count*sizeof(offsets[0]));
......@@ -3322,7 +3508,7 @@ int drm_fbdev_generic_setup(struct drm_device *dev, unsigned int preferred_bpp)
if (ret)
DRM_DEV_DEBUG(dev->dev, "client hotplug ret=%d\n", ret);
drm_client_add(&fb_helper->client);
drm_client_register(&fb_helper->client);
return 0;
}
......
......@@ -10,23 +10,17 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <drm/drm_format_helper.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_rect.h>
static void drm_fb_memcpy_lines(void *dst, unsigned int dst_pitch,
void *src, unsigned int src_pitch,
unsigned int linelength, unsigned int lines)
static unsigned int clip_offset(struct drm_rect *clip,
unsigned int pitch, unsigned int cpp)
{
int line;
for (line = 0; line < lines; line++) {
memcpy(dst, src, linelength);
src += src_pitch;
dst += dst_pitch;
}
return clip->y1 * pitch + clip->x1 * cpp;
}
/**
......@@ -43,35 +37,44 @@ void drm_fb_memcpy(void *dst, void *vaddr, struct drm_framebuffer *fb,
struct drm_rect *clip)
{
unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
unsigned int offset = (clip->y1 * fb->pitches[0]) + (clip->x1 * cpp);
size_t len = (clip->x2 - clip->x1) * cpp;
unsigned int y, lines = clip->y2 - clip->y1;
drm_fb_memcpy_lines(dst, len,
vaddr + offset, fb->pitches[0],
len, clip->y2 - clip->y1);
vaddr += clip_offset(clip, fb->pitches[0], cpp);
for (y = 0; y < lines; y++) {
memcpy(dst, vaddr, len);
vaddr += fb->pitches[0];
dst += len;
}
}
EXPORT_SYMBOL(drm_fb_memcpy);
/**
* drm_fb_memcpy_dstclip - Copy clip buffer
* @dst: Destination buffer
* @dst: Destination buffer (iomem)
* @vaddr: Source buffer
* @fb: DRM framebuffer
* @clip: Clip rectangle area to copy
*
* This function applies clipping on dst, i.e. the destination is a
* full framebuffer but only the clip rect content is copied over.
* full (iomem) framebuffer but only the clip rect content is copied over.
*/
void drm_fb_memcpy_dstclip(void *dst, void *vaddr, struct drm_framebuffer *fb,
void drm_fb_memcpy_dstclip(void __iomem *dst, void *vaddr,
struct drm_framebuffer *fb,
struct drm_rect *clip)
{
unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
unsigned int offset = (clip->y1 * fb->pitches[0]) + (clip->x1 * cpp);
unsigned int offset = clip_offset(clip, fb->pitches[0], cpp);
size_t len = (clip->x2 - clip->x1) * cpp;
unsigned int y, lines = clip->y2 - clip->y1;
drm_fb_memcpy_lines(dst + offset, fb->pitches[0],
vaddr + offset, fb->pitches[0],
len, clip->y2 - clip->y1);
vaddr += offset;
dst += offset;
for (y = 0; y < lines; y++) {
memcpy_toio(dst, vaddr, len);
vaddr += fb->pitches[0];
dst += fb->pitches[0];
}
}
EXPORT_SYMBOL(drm_fb_memcpy_dstclip);
......@@ -110,42 +113,22 @@ void drm_fb_swab16(u16 *dst, void *vaddr, struct drm_framebuffer *fb,
}
EXPORT_SYMBOL(drm_fb_swab16);
static void drm_fb_xrgb8888_to_rgb565_lines(void *dst, unsigned int dst_pitch,
void *src, unsigned int src_pitch,
unsigned int src_linelength,
unsigned int lines,
bool swap)
static void drm_fb_xrgb8888_to_rgb565_line(u16 *dbuf, u32 *sbuf,
unsigned int pixels,
bool swab)
{
unsigned int linepixels = src_linelength / sizeof(u32);
unsigned int x, y;
u32 *sbuf;
u16 *dbuf, val16;
unsigned int x;
u16 val16;
/*
* The cma memory is write-combined so reads are uncached.
* Speed up by fetching one line at a time.
*/
sbuf = kmalloc(src_linelength, GFP_KERNEL);
if (!sbuf)
return;
for (y = 0; y < lines; y++) {
memcpy(sbuf, src, src_linelength);
dbuf = dst;
for (x = 0; x < linepixels; x++) {
for (x = 0; x < pixels; x++) {
val16 = ((sbuf[x] & 0x00F80000) >> 8) |
((sbuf[x] & 0x0000FC00) >> 5) |
((sbuf[x] & 0x000000F8) >> 3);
if (swap)
*dbuf++ = swab16(val16);
if (swab)
dbuf[x] = swab16(val16);
else
*dbuf++ = val16;
}
src += src_pitch;
dst += dst_pitch;
dbuf[x] = val16;
}
kfree(sbuf);
}
/**
......@@ -154,7 +137,7 @@ static void drm_fb_xrgb8888_to_rgb565_lines(void *dst, unsigned int dst_pitch,
* @vaddr: XRGB8888 source buffer
* @fb: DRM framebuffer
* @clip: Clip rectangle area to copy
* @swap: Swap bytes
* @swab: Swap bytes
*
* Drivers can use this function for RGB565 devices that don't natively
* support XRGB8888.
......@@ -164,109 +147,124 @@ static void drm_fb_xrgb8888_to_rgb565_lines(void *dst, unsigned int dst_pitch,
*/
void drm_fb_xrgb8888_to_rgb565(void *dst, void *vaddr,
struct drm_framebuffer *fb,
struct drm_rect *clip, bool swap)
struct drm_rect *clip, bool swab)
{
unsigned int src_offset = (clip->y1 * fb->pitches[0])
+ (clip->x1 * sizeof(u32));
size_t src_len = (clip->x2 - clip->x1) * sizeof(u32);
size_t dst_len = (clip->x2 - clip->x1) * sizeof(u16);
drm_fb_xrgb8888_to_rgb565_lines(dst, dst_len,
vaddr + src_offset, fb->pitches[0],
src_len, clip->y2 - clip->y1,
swap);
size_t linepixels = clip->x2 - clip->x1;
size_t src_len = linepixels * sizeof(u32);
size_t dst_len = linepixels * sizeof(u16);
unsigned y, lines = clip->y2 - clip->y1;
void *sbuf;
/*
* The cma memory is write-combined so reads are uncached.
* Speed up by fetching one line at a time.
*/
sbuf = kmalloc(src_len, GFP_KERNEL);
if (!sbuf)
return;
vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32));
for (y = 0; y < lines; y++) {
memcpy(sbuf, vaddr, src_len);
drm_fb_xrgb8888_to_rgb565_line(dst, sbuf, linepixels, swab);
vaddr += fb->pitches[0];
dst += dst_len;
}
kfree(sbuf);
}
EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb565);
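A worked example of the per-line conversion above: the XRGB8888 pixel 0x00FF8040 keeps the top 5/6/5 bits of each channel, giving val16 = 0xF800 | 0x0400 | 0x0008 = 0xFC08; with swab set it is stored as swab16(0xFC08) = 0x08FC.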
/**
* drm_fb_xrgb8888_to_rgb565_dstclip - Convert XRGB8888 to RGB565 clip buffer
* @dst: RGB565 destination buffer
* @dst: RGB565 destination buffer (iomem)
* @dst_pitch: destination buffer pitch
* @vaddr: XRGB8888 source buffer
* @fb: DRM framebuffer
* @clip: Clip rectangle area to copy
* @swap: Swap bytes
* @swab: Swap bytes
*
* Drivers can use this function for RGB565 devices that don't natively
* support XRGB8888.
*
* This function applies clipping on dst, i.e. the destination is a
* full framebuffer but only the clip rect content is copied over.
* full (iomem) framebuffer but only the clip rect content is copied over.
*/
void drm_fb_xrgb8888_to_rgb565_dstclip(void *dst, unsigned int dst_pitch,
void drm_fb_xrgb8888_to_rgb565_dstclip(void __iomem *dst, unsigned int dst_pitch,
void *vaddr, struct drm_framebuffer *fb,
struct drm_rect *clip, bool swap)
struct drm_rect *clip, bool swab)
{
unsigned int src_offset = (clip->y1 * fb->pitches[0])
+ (clip->x1 * sizeof(u32));
unsigned int dst_offset = (clip->y1 * dst_pitch)
+ (clip->x1 * sizeof(u16));
size_t src_len = (clip->x2 - clip->x1) * sizeof(u32);
drm_fb_xrgb8888_to_rgb565_lines(dst + dst_offset, dst_pitch,
vaddr + src_offset, fb->pitches[0],
src_len, clip->y2 - clip->y1,
swap);
size_t linepixels = clip->x2 - clip->x1;
size_t dst_len = linepixels * sizeof(u16);
unsigned y, lines = clip->y2 - clip->y1;
void *dbuf;
dbuf = kmalloc(dst_len, GFP_KERNEL);
if (!dbuf)
return;
vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32));
dst += clip_offset(clip, dst_pitch, sizeof(u16));
for (y = 0; y < lines; y++) {
drm_fb_xrgb8888_to_rgb565_line(dbuf, vaddr, linepixels, swab);
memcpy_toio(dst, dbuf, dst_len);
vaddr += fb->pitches[0];
dst += dst_pitch;
}
kfree(dbuf);
}
EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb565_dstclip);
static void drm_fb_xrgb8888_to_rgb888_lines(void *dst, unsigned int dst_pitch,
void *src, unsigned int src_pitch,
unsigned int src_linelength,
unsigned int lines)
static void drm_fb_xrgb8888_to_rgb888_line(u8 *dbuf, u32 *sbuf,
unsigned int pixels)
{
unsigned int linepixels = src_linelength / 3;
unsigned int x, y;
u32 *sbuf;
u8 *dbuf;
unsigned int x;
sbuf = kmalloc(src_linelength, GFP_KERNEL);
if (!sbuf)
return;
for (y = 0; y < lines; y++) {
memcpy(sbuf, src, src_linelength);
dbuf = dst;
for (x = 0; x < linepixels; x++) {
for (x = 0; x < pixels; x++) {
*dbuf++ = (sbuf[x] & 0x000000FF) >> 0;
*dbuf++ = (sbuf[x] & 0x0000FF00) >> 8;
*dbuf++ = (sbuf[x] & 0x00FF0000) >> 16;
}
src += src_pitch;
dst += dst_pitch;
}
kfree(sbuf);
}
/**
* drm_fb_xrgb8888_to_rgb888_dstclip - Convert XRGB8888 to RGB888 clip buffer
* @dst: RGB565 destination buffer
* @dst: RGB565 destination buffer (iomem)
* @dst_pitch: destination buffer pitch
* @vaddr: XRGB8888 source buffer
* @fb: DRM framebuffer
* @clip: Clip rectangle area to copy
* @dstclip: Clip destination too.
*
* Drivers can use this function for RGB888 devices that don't natively
* support XRGB8888.
*
* This function applies clipping on dst, i.e. the destination is a
* full framebuffer but only the clip rect content is copied over.
* full (iomem) framebuffer but only the clip rect content is copied over.
*/
void drm_fb_xrgb8888_to_rgb888_dstclip(void *dst, unsigned int dst_pitch,
void drm_fb_xrgb8888_to_rgb888_dstclip(void __iomem *dst, unsigned int dst_pitch,
void *vaddr, struct drm_framebuffer *fb,
struct drm_rect *clip)
{
unsigned int src_offset = (clip->y1 * fb->pitches[0])
+ (clip->x1 * sizeof(u32));
unsigned int dst_offset = (clip->y1 * dst_pitch)
+ (clip->x1 * 3);
size_t src_len = (clip->x2 - clip->x1) * sizeof(u32);
drm_fb_xrgb8888_to_rgb888_lines(dst + dst_offset, dst_pitch,
vaddr + src_offset, fb->pitches[0],
src_len, clip->y2 - clip->y1);
size_t linepixels = clip->x2 - clip->x1;
size_t dst_len = linepixels * 3;
unsigned y, lines = clip->y2 - clip->y1;
void *dbuf;
dbuf = kmalloc(dst_len, GFP_KERNEL);
if (!dbuf)
return;
vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32));
dst += clip_offset(clip, dst_pitch, 3); /* RGB888: 3 bytes per pixel */
for (y = 0; y < lines; y++) {
drm_fb_xrgb8888_to_rgb888_line(dbuf, vaddr, linepixels);
memcpy_toio(dst, dbuf, dst_len);
vaddr += fb->pitches[0];
dst += dst_pitch;
}
kfree(dbuf);
}
EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb888_dstclip);
......
......@@ -646,6 +646,85 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
}
EXPORT_SYMBOL(drm_gem_put_pages);
static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
struct drm_gem_object **objs)
{
int i, ret = 0;
struct drm_gem_object *obj;
spin_lock(&filp->table_lock);
for (i = 0; i < count; i++) {
/* Check if we currently have a reference on the object */
obj = idr_find(&filp->object_idr, handle[i]);
if (!obj) {
ret = -ENOENT;
break;
}
drm_gem_object_get(obj);
objs[i] = obj;
}
spin_unlock(&filp->table_lock);
return ret;
}
/**
* drm_gem_objects_lookup - look up GEM objects from an array of handles
* @filp: DRM file private data
* @bo_handles: user pointer to an array of userspace handles
* @count: size of handle array
* @objs_out: returned pointer to array of drm_gem_object pointers
*
* Takes an array of userspace handles and returns a newly allocated array of
* GEM objects.
*
* For a single handle lookup, use drm_gem_object_lookup().
*
* Returns:
*
* @objs_out filled in with GEM object pointers. Returned GEM objects need to be
* released with drm_gem_object_put(). -ENOENT is returned on a lookup
* failure. 0 is returned on success.
*
*/
int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
int count, struct drm_gem_object ***objs_out)
{
int ret;
u32 *handles;
struct drm_gem_object **objs;
if (!count)
return 0;
objs = kvmalloc_array(count, sizeof(struct drm_gem_object *),
GFP_KERNEL | __GFP_ZERO);
if (!objs)
return -ENOMEM;
handles = kvmalloc_array(count, sizeof(u32), GFP_KERNEL);
if (!handles) {
ret = -ENOMEM;
goto out;
}
if (copy_from_user(handles, bo_handles, count * sizeof(u32))) {
ret = -EFAULT;
DRM_DEBUG("Failed to copy in GEM handles\n");
goto out;
}
ret = objects_lookup(filp, handles, count, objs);
*objs_out = objs;
out:
kvfree(handles);
return ret;
}
EXPORT_SYMBOL(drm_gem_objects_lookup);
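A caller-side sketch of the new helper (the drm_foo_* names and the trivial error handling are made up for illustration; only the DRM calls are real, and drm_gem_object_put_unlocked() is the v5.2-era unlocked put):

/* Hypothetical ioctl handler consuming drm_gem_objects_lookup(). */
struct drm_foo_submit {
	__u64 bo_handles;	/* user pointer to an array of u32 handles */
	__u32 bo_count;
	__u32 pad;
};

static int foo_submit_ioctl(struct drm_device *dev, void *data,
			    struct drm_file *file)
{
	struct drm_foo_submit *args = data;
	struct drm_gem_object **objs;
	u32 i;
	int ret;

	ret = drm_gem_objects_lookup(file, u64_to_user_ptr(args->bo_handles),
				     args->bo_count, &objs);
	if (ret)
		return ret;

	/* ... lock reservations, add fences, queue the job ... */

	for (i = 0; i < args->bo_count; i++)
		drm_gem_object_put_unlocked(objs[i]);
	kvfree(objs);	/* the objs array itself is caller-freed */
	return 0;
}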
/**
* drm_gem_object_lookup - look up a GEM object from its handle
* @filp: DRM file private data
......@@ -655,21 +734,15 @@ EXPORT_SYMBOL(drm_gem_put_pages);
*
* A reference to the object named by the handle if such exists on @filp, NULL
* otherwise.
*
* If looking up an array of handles, use drm_gem_objects_lookup().
*/
struct drm_gem_object *
drm_gem_object_lookup(struct drm_file *filp, u32 handle)
{
struct drm_gem_object *obj;
spin_lock(&filp->table_lock);
/* Check if we currently have a reference on the object */
obj = idr_find(&filp->object_idr, handle);
if (obj)
drm_gem_object_get(obj);
spin_unlock(&filp->table_lock);
struct drm_gem_object *obj = NULL;
objects_lookup(filp, &handle, 1, &obj);
return obj;
}
EXPORT_SYMBOL(drm_gem_object_lookup);
......@@ -1294,3 +1367,96 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
ww_acquire_fini(acquire_ctx);
}
EXPORT_SYMBOL(drm_gem_unlock_reservations);
/**
* drm_gem_fence_array_add - Adds the fence to an array of fences to be
* waited on, deduplicating fences from the same context.
*
* @fence_array: array of dma_fence * for the job to block on.
* @fence: the dma_fence to add to the list of dependencies.
*
* Returns:
* 0 on success, or an error on failing to expand the array.
*/
int drm_gem_fence_array_add(struct xarray *fence_array,
struct dma_fence *fence)
{
struct dma_fence *entry;
unsigned long index;
u32 id = 0;
int ret;
if (!fence)
return 0;
/* Deduplicate if we already depend on a fence from the same context.
* This lets the size of the array of deps scale with the number of
* engines involved, rather than the number of BOs.
*/
xa_for_each(fence_array, index, entry) {
if (entry->context != fence->context)
continue;
if (dma_fence_is_later(fence, entry)) {
dma_fence_put(entry);
xa_store(fence_array, index, fence, GFP_KERNEL);
} else {
dma_fence_put(fence);
}
return 0;
}
ret = xa_alloc(fence_array, &id, fence, xa_limit_32b, GFP_KERNEL);
if (ret != 0)
dma_fence_put(fence);
return ret;
}
EXPORT_SYMBOL(drm_gem_fence_array_add);
/**
* drm_gem_fence_array_add_implicit - Adds the implicit dependencies tracked
* in the GEM object's reservation object to an array of dma_fences for use in
* scheduling a rendering job.
*
* This should be called after drm_gem_lock_reservations() on your array of
* GEM objects used in the job but before updating the reservations with your
* own fences.
*
* @fence_array: array of dma_fence * for the job to block on.
* @obj: the gem object to add new dependencies from.
* @write: whether the job might write the object (so we need to depend on
* shared fences in the reservation object).
*/
int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
struct drm_gem_object *obj,
bool write)
{
int ret;
struct dma_fence **fences;
unsigned int i, fence_count;
if (!write) {
struct dma_fence *fence =
reservation_object_get_excl_rcu(obj->resv);
return drm_gem_fence_array_add(fence_array, fence);
}
ret = reservation_object_get_fences_rcu(obj->resv, NULL,
&fence_count, &fences);
if (ret || !fence_count)
return ret;
for (i = 0; i < fence_count; i++) {
ret = drm_gem_fence_array_add(fence_array, fences[i]);
if (ret)
break;
}
for (; i < fence_count; i++)
dma_fence_put(fences[i]);
kfree(fences);
return ret;
}
EXPORT_SYMBOL(drm_gem_fence_array_add_implicit);
......@@ -297,8 +297,9 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
return -ENOMEM;
dev->mode_config.prop_crtc_id = prop;
prop = drm_property_create(dev, DRM_MODE_PROP_BLOB, "FB_DAMAGE_CLIPS",
0);
prop = drm_property_create(dev,
DRM_MODE_PROP_ATOMIC | DRM_MODE_PROP_BLOB,
"FB_DAMAGE_CLIPS", 0);
if (!prop)
return -ENOMEM;
dev->mode_config.prop_fb_damage_clips = prop;
......
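(Properties created with DRM_MODE_PROP_ATOMIC are filtered out of the property lists reported to clients that have not set DRM_CLIENT_CAP_ATOMIC, which is what hides FB_DAMAGE_CLIPS from legacy userspace.)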
......@@ -285,225 +285,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
return ret;
}
static struct drm_fb_helper_crtc *
intel_fb_helper_crtc(struct drm_fb_helper *fb_helper, struct drm_crtc *crtc)
{
int i;
for (i = 0; i < fb_helper->crtc_count; i++)
if (fb_helper->crtc_info[i].mode_set.crtc == crtc)
return &fb_helper->crtc_info[i];
return NULL;
}
/*
* Try to read the BIOS display configuration and use it for the initial
* fb configuration.
*
* The BIOS or boot loader will generally create an initial display
* configuration for us that includes some set of active pipes and displays.
* This routine tries to figure out which pipes and connectors are active
* and stuffs them into the crtcs and modes array given to us by the
* drm_fb_helper code.
*
* The overall sequence is:
* intel_fbdev_init - from driver load
* intel_fbdev_init_bios - initialize the intel_fbdev using BIOS data
* drm_fb_helper_init - build fb helper structs
* drm_fb_helper_single_add_all_connectors - more fb helper structs
* intel_fbdev_initial_config - apply the config
* drm_fb_helper_initial_config - call ->probe then register_framebuffer()
* drm_setup_crtcs - build crtc config for fbdev
* intel_fb_initial_config - find active connectors etc
* drm_fb_helper_single_fb_probe - set up fbdev
* intelfb_create - re-use or alloc fb, build out fbdev structs
*
* Note that we don't make special consideration whether we could actually
* switch to the selected modes without a full modeset. E.g. when the display
* is in VGA mode we need to recalculate watermarks and set a new high-res
* framebuffer anyway.
*/
static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
struct drm_fb_helper_crtc **crtcs,
struct drm_display_mode **modes,
struct drm_fb_offset *offsets,
bool *enabled, int width, int height)
{
struct drm_i915_private *dev_priv = to_i915(fb_helper->dev);
unsigned int count = min(fb_helper->connector_count, BITS_PER_LONG);
unsigned long conn_configured, conn_seq;
int i, j;
bool *save_enabled;
bool fallback = true, ret = true;
int num_connectors_enabled = 0;
int num_connectors_detected = 0;
struct drm_modeset_acquire_ctx ctx;
save_enabled = kcalloc(count, sizeof(bool), GFP_KERNEL);
if (!save_enabled)
return false;
drm_modeset_acquire_init(&ctx, 0);
while (drm_modeset_lock_all_ctx(fb_helper->dev, &ctx) != 0)
drm_modeset_backoff(&ctx);
memcpy(save_enabled, enabled, count);
conn_seq = GENMASK(count - 1, 0);
conn_configured = 0;
retry:
for (i = 0; i < count; i++) {
struct drm_fb_helper_connector *fb_conn;
struct drm_connector *connector;
struct drm_encoder *encoder;
struct drm_fb_helper_crtc *new_crtc;
fb_conn = fb_helper->connector_info[i];
connector = fb_conn->connector;
if (conn_configured & BIT(i))
continue;
/* First pass, only consider tiled connectors */
if (conn_seq == GENMASK(count - 1, 0) && !connector->has_tile)
continue;
if (connector->status == connector_status_connected)
num_connectors_detected++;
if (!enabled[i]) {
DRM_DEBUG_KMS("connector %s not enabled, skipping\n",
connector->name);
conn_configured |= BIT(i);
continue;
}
if (connector->force == DRM_FORCE_OFF) {
DRM_DEBUG_KMS("connector %s is disabled by user, skipping\n",
connector->name);
enabled[i] = false;
continue;
}
encoder = connector->state->best_encoder;
if (!encoder || WARN_ON(!connector->state->crtc)) {
if (connector->force > DRM_FORCE_OFF)
goto bail;
DRM_DEBUG_KMS("connector %s has no encoder or crtc, skipping\n",
connector->name);
enabled[i] = false;
conn_configured |= BIT(i);
continue;
}
num_connectors_enabled++;
new_crtc = intel_fb_helper_crtc(fb_helper,
connector->state->crtc);
/*
* Make sure we're not trying to drive multiple connectors
* with a single CRTC, since our cloning support may not
* match the BIOS.
*/
for (j = 0; j < count; j++) {
if (crtcs[j] == new_crtc) {
DRM_DEBUG_KMS("fallback: cloned configuration\n");
goto bail;
}
}
DRM_DEBUG_KMS("looking for cmdline mode on connector %s\n",
connector->name);
/* go for command line mode first */
modes[i] = drm_pick_cmdline_mode(fb_conn);
/* try for preferred next */
if (!modes[i]) {
DRM_DEBUG_KMS("looking for preferred mode on connector %s %d\n",
connector->name, connector->has_tile);
modes[i] = drm_has_preferred_mode(fb_conn, width,
height);
}
/* No preferred mode marked by the EDID? Are there any modes? */
if (!modes[i] && !list_empty(&connector->modes)) {
DRM_DEBUG_KMS("using first mode listed on connector %s\n",
connector->name);
modes[i] = list_first_entry(&connector->modes,
struct drm_display_mode,
head);
}
/* last resort: use current mode */
if (!modes[i]) {
/*
* IMPORTANT: We want to use the adjusted mode (i.e.
* after the panel fitter upscaling) as the initial
* config, not the input mode, which is what crtc->mode
* usually contains. But since our current
* code puts a mode derived from the post-pfit timings
* into crtc->mode this works out correctly.
*
* This is crtc->mode and not crtc->state->mode for the
* fastboot check to work correctly. crtc_state->mode has
* I915_MODE_FLAG_INHERITED, which we clear to force check
* state.
*/
DRM_DEBUG_KMS("looking for current mode on connector %s\n",
connector->name);
modes[i] = &connector->state->crtc->mode;
}
crtcs[i] = new_crtc;
DRM_DEBUG_KMS("connector %s on [CRTC:%d:%s]: %dx%d%s\n",
connector->name,
connector->state->crtc->base.id,
connector->state->crtc->name,
modes[i]->hdisplay, modes[i]->vdisplay,
modes[i]->flags & DRM_MODE_FLAG_INTERLACE ? "i" :"");
fallback = false;
conn_configured |= BIT(i);
}
if (conn_configured != conn_seq) { /* repeat until no more are found */
conn_seq = conn_configured;
goto retry;
}
/*
* If the BIOS didn't enable everything it could, fall back to the same
* user experience of lighting up as much as possible, like the fbdev
* helper library does.
*/
if (num_connectors_enabled != num_connectors_detected &&
num_connectors_enabled < INTEL_INFO(dev_priv)->num_pipes) {
DRM_DEBUG_KMS("fallback: Not all outputs enabled\n");
DRM_DEBUG_KMS("Enabled: %i, detected: %i\n", num_connectors_enabled,
num_connectors_detected);
fallback = true;
}
if (fallback) {
bail:
DRM_DEBUG_KMS("Not using firmware configuration\n");
memcpy(enabled, save_enabled, count);
ret = false;
}
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
kfree(save_enabled);
return ret;
}
static const struct drm_fb_helper_funcs intel_fb_helper_funcs = {
.initial_config = intel_fb_initial_config,
.fb_probe = intelfb_create,
};
......
......@@ -145,40 +145,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo,
if (explicit)
return 0;
/* implicit sync use bo fence in resv obj */
if (write) {
unsigned nr_fences;
struct dma_fence **fences;
int i;
err = reservation_object_get_fences_rcu(
bo->gem.resv, NULL, &nr_fences, &fences);
if (err || !nr_fences)
return err;
for (i = 0; i < nr_fences; i++) {
err = lima_sched_task_add_dep(task, fences[i]);
if (err)
break;
}
/* for error case free remaining fences */
for ( ; i < nr_fences; i++)
dma_fence_put(fences[i]);
kfree(fences);
} else {
struct dma_fence *fence;
fence = reservation_object_get_excl_rcu(bo->gem.resv);
if (fence) {
err = lima_sched_task_add_dep(task, fence);
if (err)
dma_fence_put(fence);
}
}
return err;
return drm_gem_fence_array_add_implicit(&task->deps, &bo->gem, write);
}
static int lima_gem_lock_bos(struct lima_bo **bos, u32 nr_bos,
......@@ -251,7 +218,7 @@ static int lima_gem_add_deps(struct drm_file *file, struct lima_submit *submit)
if (err)
return err;
err = lima_sched_task_add_dep(submit->task, fence);
err = drm_gem_fence_array_add(&submit->task->deps, fence);
if (err) {
dma_fence_put(fence);
return err;
......
......@@ -3,6 +3,7 @@
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/xarray.h>
#include "lima_drv.h"
#include "lima_sched.h"
......@@ -126,19 +127,24 @@ int lima_sched_task_init(struct lima_sched_task *task,
task->num_bos = num_bos;
task->vm = lima_vm_get(vm);
xa_init_flags(&task->deps, XA_FLAGS_ALLOC);
return 0;
}
void lima_sched_task_fini(struct lima_sched_task *task)
{
struct dma_fence *fence;
unsigned long index;
int i;
drm_sched_job_cleanup(&task->base);
for (i = 0; i < task->num_dep; i++)
dma_fence_put(task->dep[i]);
kfree(task->dep);
xa_for_each(&task->deps, index, fence) {
dma_fence_put(fence);
}
xa_destroy(&task->deps);
if (task->bos) {
for (i = 0; i < task->num_bos; i++)
......@@ -149,42 +155,6 @@ void lima_sched_task_fini(struct lima_sched_task *task)
lima_vm_put(task->vm);
}
int lima_sched_task_add_dep(struct lima_sched_task *task, struct dma_fence *fence)
{
int i, new_dep = 4;
/* same context's fence is definitely earlier than this task */
if (fence->context == task->base.s_fence->finished.context) {
dma_fence_put(fence);
return 0;
}
if (task->dep && task->num_dep == task->max_dep)
new_dep = task->max_dep * 2;
if (task->max_dep < new_dep) {
void *dep = krealloc(task->dep, sizeof(*task->dep) * new_dep, GFP_KERNEL);
if (!dep)
return -ENOMEM;
task->max_dep = new_dep;
task->dep = dep;
}
for (i = 0; i < task->num_dep; i++) {
if (task->dep[i]->context == fence->context &&
dma_fence_is_later(fence, task->dep[i])) {
dma_fence_put(task->dep[i]);
task->dep[i] = fence;
return 0;
}
}
task->dep[task->num_dep++] = fence;
return 0;
}
int lima_sched_context_init(struct lima_sched_pipe *pipe,
struct lima_sched_context *context,
atomic_t *guilty)
......@@ -213,21 +183,9 @@ static struct dma_fence *lima_sched_dependency(struct drm_sched_job *job,
struct drm_sched_entity *entity)
{
struct lima_sched_task *task = to_lima_task(job);
int i;
for (i = 0; i < task->num_dep; i++) {
struct dma_fence *fence = task->dep[i];
if (!task->dep[i])
continue;
task->dep[i] = NULL;
if (!dma_fence_is_signaled(fence))
return fence;
dma_fence_put(fence);
}
if (!xa_empty(&task->deps))
return xa_erase(&task->deps, task->last_dep++);
return NULL;
}
......@@ -353,7 +311,7 @@ static void lima_sched_free_job(struct drm_sched_job *job)
kmem_cache_free(pipe->task_slab, task);
}
const struct drm_sched_backend_ops lima_sched_ops = {
static const struct drm_sched_backend_ops lima_sched_ops = {
.dependency = lima_sched_dependency,
.run_job = lima_sched_run_job,
.timedout_job = lima_sched_timedout_job,
......
......@@ -14,9 +14,8 @@ struct lima_sched_task {
struct lima_vm *vm;
void *frame;
struct dma_fence **dep;
int num_dep;
int max_dep;
struct xarray deps;
unsigned long last_dep;
struct lima_bo **bos;
int num_bos;
......@@ -78,7 +77,6 @@ int lima_sched_task_init(struct lima_sched_task *task,
struct lima_bo **bos, int num_bos,
struct lima_vm *vm);
void lima_sched_task_fini(struct lima_sched_task *task);
int lima_sched_task_add_dep(struct lima_sched_task *task, struct dma_fence *fence);
int lima_sched_context_init(struct lima_sched_pipe *pipe,
struct lima_sched_context *context,
......
......@@ -90,6 +90,18 @@ static irqreturn_t meson_irq(int irq, void *arg)
return IRQ_HANDLED;
}
static int meson_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
/*
* We need a 64-byte aligned stride and a PAGE-aligned size
*/
args->pitch = ALIGN(DIV_ROUND_UP(args->width * args->bpp, 8), SZ_64);
args->size = PAGE_ALIGN(args->pitch * args->height);
return drm_gem_cma_dumb_create_internal(file, dev, args);
}
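As a worked example of the alignment (illustrative numbers): a 1366x768 XRGB8888 dumb buffer gets args->pitch = ALIGN(DIV_ROUND_UP(1366 * 32, 8), 64) = ALIGN(5464, 64) = 5504 bytes and args->size = PAGE_ALIGN(5504 * 768) = 4227072 bytes, where the plain CMA helper would have used the unaligned 5464-byte pitch.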
DEFINE_DRM_GEM_CMA_FOPS(fops);
static struct drm_driver meson_driver = {
......@@ -112,7 +124,7 @@ static struct drm_driver meson_driver = {
.gem_prime_mmap = drm_gem_cma_prime_mmap,
/* GEM Ops */
.dumb_create = drm_gem_cma_dumb_create,
.dumb_create = meson_dumb_create,
.gem_free_object_unlocked = drm_gem_cma_free_object,
.gem_vm_ops = &drm_gem_cma_vm_ops,
......
......@@ -90,8 +90,8 @@ static int eotf_bypass_coeff[EOTF_COEFF_SIZE] = {
EOTF_COEFF_RIGHTSHIFT /* right shift */
};
void meson_viu_set_g12a_osd1_matrix(struct meson_drm *priv, int *m,
bool csc_on)
static void meson_viu_set_g12a_osd1_matrix(struct meson_drm *priv,
int *m, bool csc_on)
{
/* VPP WRAP OSD1 matrix */
writel(((m[0] & 0xfff) << 16) | (m[1] & 0xfff),
......@@ -118,7 +118,7 @@ void meson_viu_set_g12a_osd1_matrix(struct meson_drm *priv, int *m,
priv->io_base + _REG(VPP_WRAP_OSD1_MATRIX_EN_CTRL));
}
void meson_viu_set_osd_matrix(struct meson_drm *priv,
static void meson_viu_set_osd_matrix(struct meson_drm *priv,
enum viu_matrix_sel_e m_select,
int *m, bool csc_on)
{
......@@ -187,10 +187,10 @@ void meson_viu_set_osd_matrix(struct meson_drm *priv,
#define OSD_EOTF_LUT_SIZE 33
#define OSD_OETF_LUT_SIZE 41
void meson_viu_set_osd_lut(struct meson_drm *priv, enum viu_lut_sel_e lut_sel,
static void
meson_viu_set_osd_lut(struct meson_drm *priv, enum viu_lut_sel_e lut_sel,
unsigned int *r_map, unsigned int *g_map,
unsigned int *b_map,
bool csc_on)
unsigned int *b_map, bool csc_on)
{
unsigned int addr_port;
unsigned int data_port;
......
......@@ -3025,6 +3025,34 @@ static const struct panel_desc_dsi panasonic_vvx10f004b00 = {
.lanes = 4,
};
static const struct drm_display_mode lg_acx467akm_7_mode = {
.clock = 150000,
.hdisplay = 1080,
.hsync_start = 1080 + 2,
.hsync_end = 1080 + 2 + 2,
.htotal = 1080 + 2 + 2 + 2,
.vdisplay = 1920,
.vsync_start = 1920 + 2,
.vsync_end = 1920 + 2 + 2,
.vtotal = 1920 + 2 + 2 + 2,
.vrefresh = 60,
};
static const struct panel_desc_dsi lg_acx467akm_7 = {
.desc = {
.modes = &lg_acx467akm_7_mode,
.num_modes = 1,
.bpc = 8,
.size = {
.width = 62,
.height = 110,
},
},
.flags = 0,
.format = MIPI_DSI_FMT_RGB888,
.lanes = 4,
};
static const struct of_device_id dsi_of_match[] = {
{
.compatible = "auo,b080uan01",
......@@ -3041,6 +3069,9 @@ static const struct of_device_id dsi_of_match[] = {
}, {
.compatible = "panasonic,vvx10f004b00",
.data = &panasonic_vvx10f004b00
}, {
.compatible = "lg,acx467akm-7",
.data = &lg_acx467akm_7
}, {
/* sentinel */
}
......
# SPDX-License-Identifier: GPL-2.0
config DRM_PANFROST
tristate "Panfrost (DRM support for ARM Mali Midgard/Bifrost GPUs)"
depends on DRM
depends on ARM || ARM64 || COMPILE_TEST
depends on MMU
select DRM_SCHED
select IOMMU_SUPPORT
select IOMMU_IO_PGTABLE_LPAE
select DRM_GEM_SHMEM_HELPER
help
DRM driver for ARM Mali Midgard (T6xx, T7xx, T8xx) and
Bifrost (G3x, G5x, G7x) GPUs.
# SPDX-License-Identifier: GPL-2.0
panfrost-y := \
panfrost_drv.o \
panfrost_device.o \
panfrost_devfreq.o \
panfrost_gem.o \
panfrost_gpu.o \
panfrost_job.o \
panfrost_mmu.o
obj-$(CONFIG_DRM_PANFROST) += panfrost.o
- Thermal support.
- Bifrost support:
- DT bindings (Neil, WIP)
- MMU page table format and address space setup
- Bifrost specific feature and issue handling
- Coherent DMA support
- Support for 2MB pages. The io-pgtable code already supports this. Finishing
support involves either copying or adapting the iommu API to handle passing
aligned addresses and sizes to the io-pgtable code.
- Per FD address space support. The h/w supports multiple address spaces.
The hard part is handling when more address spaces are needed than what
the h/w provides.
- Support pinning pages on demand (GPU page faults).
- Support userspace controlled GPU virtual addresses. Needed for Vulkan. (Tomeu)
- Support for madvise and a shrinker.
- Compute job support. So called 'compute only' jobs need to be plumbed up to
userspace.
- Performance counter support. (Boris)
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2019 Collabora ltd. */
#include <linux/devfreq.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/clk.h>
#include <linux/regulator/consumer.h>
#include "panfrost_device.h"
#include "panfrost_features.h"
#include "panfrost_issues.h"
#include "panfrost_gpu.h"
#include "panfrost_regs.h"
static void panfrost_devfreq_update_utilization(struct panfrost_device *pfdev, int slot);
static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
u32 flags)
{
struct panfrost_device *pfdev = platform_get_drvdata(to_platform_device(dev));
struct dev_pm_opp *opp;
unsigned long old_clk_rate = pfdev->devfreq.cur_freq;
unsigned long target_volt, target_rate;
int err;
opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(opp))
return PTR_ERR(opp);
target_rate = dev_pm_opp_get_freq(opp);
target_volt = dev_pm_opp_get_voltage(opp);
dev_pm_opp_put(opp);
if (old_clk_rate == target_rate)
return 0;
/*
* If frequency scaling from low to high, adjust voltage first.
* If frequency scaling from high to low, adjust frequency first.
*/
if (old_clk_rate < target_rate) {
err = regulator_set_voltage(pfdev->regulator, target_volt,
target_volt);
if (err) {
dev_err(dev, "Cannot set voltage %lu uV\n",
target_volt);
return err;
}
}
err = clk_set_rate(pfdev->clock, target_rate);
if (err) {
dev_err(dev, "Cannot set frequency %lu (%d)\n", target_rate,
err);
regulator_set_voltage(pfdev->regulator, pfdev->devfreq.cur_volt,
pfdev->devfreq.cur_volt);
return err;
}
if (old_clk_rate > target_rate) {
err = regulator_set_voltage(pfdev->regulator, target_volt,
target_volt);
if (err)
dev_err(dev, "Cannot set voltage %lu uV\n", target_volt);
}
pfdev->devfreq.cur_freq = target_rate;
pfdev->devfreq.cur_volt = target_volt;
return 0;
}
static void panfrost_devfreq_reset(struct panfrost_device *pfdev)
{
ktime_t now = ktime_get();
int i;
for (i = 0; i < NUM_JOB_SLOTS; i++) {
pfdev->devfreq.slot[i].busy_time = 0;
pfdev->devfreq.slot[i].idle_time = 0;
pfdev->devfreq.slot[i].time_last_update = now;
}
}
static int panfrost_devfreq_get_dev_status(struct device *dev,
struct devfreq_dev_status *status)
{
struct panfrost_device *pfdev = platform_get_drvdata(to_platform_device(dev));
int i;
for (i = 0; i < NUM_JOB_SLOTS; i++) {
panfrost_devfreq_update_utilization(pfdev, i);
}
status->current_frequency = clk_get_rate(pfdev->clock);
status->total_time = ktime_to_ns(ktime_add(pfdev->devfreq.slot[0].busy_time,
pfdev->devfreq.slot[0].idle_time));
status->busy_time = 0;
for (i = 0; i < NUM_JOB_SLOTS; i++) {
status->busy_time += ktime_to_ns(pfdev->devfreq.slot[i].busy_time);
}
	/* We're only scheduling to one core at the moment, so don't divide for now */
/* status->busy_time /= NUM_JOB_SLOTS; */
panfrost_devfreq_reset(pfdev);
dev_dbg(pfdev->dev, "busy %lu total %lu %lu %% freq %lu MHz\n", status->busy_time,
status->total_time,
status->busy_time / (status->total_time / 100),
status->current_frequency / 1000 / 1000);
return 0;
}
static int panfrost_devfreq_get_cur_freq(struct device *dev, unsigned long *freq)
{
struct panfrost_device *pfdev = platform_get_drvdata(to_platform_device(dev));
*freq = pfdev->devfreq.cur_freq;
return 0;
}
static struct devfreq_dev_profile panfrost_devfreq_profile = {
.polling_ms = 50, /* ~3 frames */
.target = panfrost_devfreq_target,
.get_dev_status = panfrost_devfreq_get_dev_status,
.get_cur_freq = panfrost_devfreq_get_cur_freq,
};
int panfrost_devfreq_init(struct panfrost_device *pfdev)
{
int ret;
struct dev_pm_opp *opp;
if (!pfdev->regulator)
return 0;
ret = dev_pm_opp_of_add_table(&pfdev->pdev->dev);
if (ret == -ENODEV) /* Optional, continue without devfreq */
return 0;
panfrost_devfreq_reset(pfdev);
pfdev->devfreq.cur_freq = clk_get_rate(pfdev->clock);
opp = devfreq_recommended_opp(&pfdev->pdev->dev, &pfdev->devfreq.cur_freq, 0);
if (IS_ERR(opp))
return PTR_ERR(opp);
panfrost_devfreq_profile.initial_freq = pfdev->devfreq.cur_freq;
dev_pm_opp_put(opp);
pfdev->devfreq.devfreq = devm_devfreq_add_device(&pfdev->pdev->dev,
&panfrost_devfreq_profile, "simple_ondemand", NULL);
if (IS_ERR(pfdev->devfreq.devfreq)) {
DRM_DEV_ERROR(&pfdev->pdev->dev, "Couldn't initialize GPU devfreq\n");
ret = PTR_ERR(pfdev->devfreq.devfreq);
pfdev->devfreq.devfreq = NULL;
return ret;
}
return 0;
}
void panfrost_devfreq_resume(struct panfrost_device *pfdev)
{
int i;
if (!pfdev->devfreq.devfreq)
return;
panfrost_devfreq_reset(pfdev);
for (i = 0; i < NUM_JOB_SLOTS; i++)
pfdev->devfreq.slot[i].busy = false;
devfreq_resume_device(pfdev->devfreq.devfreq);
}
void panfrost_devfreq_suspend(struct panfrost_device *pfdev)
{
if (!pfdev->devfreq.devfreq)
return;
devfreq_suspend_device(pfdev->devfreq.devfreq);
}
static void panfrost_devfreq_update_utilization(struct panfrost_device *pfdev, int slot)
{
struct panfrost_devfreq_slot *devfreq_slot = &pfdev->devfreq.slot[slot];
ktime_t now;
ktime_t last;
if (!pfdev->devfreq.devfreq)
return;
now = ktime_get();
last = pfdev->devfreq.slot[slot].time_last_update;
	/* If the last recorded transition was to busy, the time since then was spent busy */
if (devfreq_slot->busy)
pfdev->devfreq.slot[slot].busy_time += ktime_sub(now, last);
else
pfdev->devfreq.slot[slot].idle_time += ktime_sub(now, last);
pfdev->devfreq.slot[slot].time_last_update = now;
}
/* The job scheduler is expected to call this at every transition busy <-> idle */
void panfrost_devfreq_record_transition(struct panfrost_device *pfdev, int slot)
{
struct panfrost_devfreq_slot *devfreq_slot = &pfdev->devfreq.slot[slot];
panfrost_devfreq_update_utilization(pfdev, slot);
devfreq_slot->busy = !devfreq_slot->busy;
}
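The pairing with the job scheduler lives in panfrost_job.c (not shown here). As a rough usage sketch, with hypothetical call-site names, the scheduler would bracket hardware activity like this:

/* Sketch only: hypothetical call sites; the real ones are in panfrost_job.c. */
static void example_hw_submit(struct panfrost_device *pfdev, int slot)
{
	panfrost_devfreq_record_transition(pfdev, slot); /* idle -> busy */
	/* ... program the job chain and kick the slot ... */
}

static void example_job_done(struct panfrost_device *pfdev, int slot)
{
	/* ... signal fences and release the slot ... */
	panfrost_devfreq_record_transition(pfdev, slot); /* busy -> idle */
}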
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2019 Collabora ltd. */
#ifndef __PANFROST_DEVFREQ_H__
#define __PANFROST_DEVFREQ_H__
int panfrost_devfreq_init(struct panfrost_device *pfdev);
void panfrost_devfreq_resume(struct panfrost_device *pfdev);
void panfrost_devfreq_suspend(struct panfrost_device *pfdev);
void panfrost_devfreq_record_transition(struct panfrost_device *pfdev, int slot);
#endif /* __PANFROST_DEVFREQ_H__ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2018 Marty E. Plummer <hanetzer@startmail.com> */
/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
#include <linux/clk.h>
#include <linux/reset.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include "panfrost_device.h"
#include "panfrost_devfreq.h"
#include "panfrost_features.h"
#include "panfrost_gpu.h"
#include "panfrost_job.h"
#include "panfrost_mmu.h"
static int panfrost_reset_init(struct panfrost_device *pfdev)
{
int err;
pfdev->rstc = devm_reset_control_array_get(pfdev->dev, false, true);
if (IS_ERR(pfdev->rstc)) {
dev_err(pfdev->dev, "get reset failed %ld\n", PTR_ERR(pfdev->rstc));
return PTR_ERR(pfdev->rstc);
}
err = reset_control_deassert(pfdev->rstc);
if (err)
return err;
return 0;
}
static void panfrost_reset_fini(struct panfrost_device *pfdev)
{
reset_control_assert(pfdev->rstc);
}
static int panfrost_clk_init(struct panfrost_device *pfdev)
{
int err;
unsigned long rate;
pfdev->clock = devm_clk_get(pfdev->dev, NULL);
if (IS_ERR(pfdev->clock)) {
dev_err(pfdev->dev, "get clock failed %ld\n", PTR_ERR(pfdev->clock));
return PTR_ERR(pfdev->clock);
}
rate = clk_get_rate(pfdev->clock);
dev_info(pfdev->dev, "clock rate = %lu\n", rate);
err = clk_prepare_enable(pfdev->clock);
if (err)
return err;
return 0;
}
static void panfrost_clk_fini(struct panfrost_device *pfdev)
{
clk_disable_unprepare(pfdev->clock);
}
static int panfrost_regulator_init(struct panfrost_device *pfdev)
{
int ret;
pfdev->regulator = devm_regulator_get_optional(pfdev->dev, "mali");
if (IS_ERR(pfdev->regulator)) {
ret = PTR_ERR(pfdev->regulator);
pfdev->regulator = NULL;
if (ret == -ENODEV)
return 0;
dev_err(pfdev->dev, "failed to get regulator: %d\n", ret);
return ret;
}
ret = regulator_enable(pfdev->regulator);
if (ret < 0) {
dev_err(pfdev->dev, "failed to enable regulator: %d\n", ret);
return ret;
}
return 0;
}
static void panfrost_regulator_fini(struct panfrost_device *pfdev)
{
if (pfdev->regulator)
regulator_disable(pfdev->regulator);
}
int panfrost_device_init(struct panfrost_device *pfdev)
{
int err;
struct resource *res;
mutex_init(&pfdev->sched_lock);
INIT_LIST_HEAD(&pfdev->scheduled_jobs);
spin_lock_init(&pfdev->hwaccess_lock);
err = panfrost_clk_init(pfdev);
if (err) {
dev_err(pfdev->dev, "clk init failed %d\n", err);
return err;
}
err = panfrost_regulator_init(pfdev);
if (err) {
dev_err(pfdev->dev, "regulator init failed %d\n", err);
goto err_out0;
}
err = panfrost_reset_init(pfdev);
if (err) {
dev_err(pfdev->dev, "reset init failed %d\n", err);
goto err_out1;
}
res = platform_get_resource(pfdev->pdev, IORESOURCE_MEM, 0);
pfdev->iomem = devm_ioremap_resource(pfdev->dev, res);
if (IS_ERR(pfdev->iomem)) {
dev_err(pfdev->dev, "failed to ioremap iomem\n");
err = PTR_ERR(pfdev->iomem);
goto err_out2;
}
err = panfrost_gpu_init(pfdev);
if (err)
goto err_out2;
err = panfrost_mmu_init(pfdev);
if (err)
goto err_out3;
err = panfrost_job_init(pfdev);
if (err)
goto err_out4;
/* runtime PM will wake us up later */
panfrost_gpu_power_off(pfdev);
pm_runtime_set_active(pfdev->dev);
pm_runtime_get_sync(pfdev->dev);
pm_runtime_mark_last_busy(pfdev->dev);
pm_runtime_put_autosuspend(pfdev->dev);
return 0;
err_out4:
panfrost_mmu_fini(pfdev);
err_out3:
panfrost_gpu_fini(pfdev);
err_out2:
panfrost_reset_fini(pfdev);
err_out1:
panfrost_regulator_fini(pfdev);
err_out0:
panfrost_clk_fini(pfdev);
return err;
}
void panfrost_device_fini(struct panfrost_device *pfdev)
{
panfrost_regulator_fini(pfdev);
panfrost_clk_fini(pfdev);
}
const char *panfrost_exception_name(struct panfrost_device *pfdev, u32 exception_code)
{
switch (exception_code) {
/* Non-Fault Status code */
case 0x00: return "NOT_STARTED/IDLE/OK";
case 0x01: return "DONE";
case 0x02: return "INTERRUPTED";
case 0x03: return "STOPPED";
case 0x04: return "TERMINATED";
case 0x08: return "ACTIVE";
/* Job exceptions */
case 0x40: return "JOB_CONFIG_FAULT";
case 0x41: return "JOB_POWER_FAULT";
case 0x42: return "JOB_READ_FAULT";
case 0x43: return "JOB_WRITE_FAULT";
case 0x44: return "JOB_AFFINITY_FAULT";
case 0x48: return "JOB_BUS_FAULT";
case 0x50: return "INSTR_INVALID_PC";
case 0x51: return "INSTR_INVALID_ENC";
case 0x52: return "INSTR_TYPE_MISMATCH";
case 0x53: return "INSTR_OPERAND_FAULT";
case 0x54: return "INSTR_TLS_FAULT";
case 0x55: return "INSTR_BARRIER_FAULT";
case 0x56: return "INSTR_ALIGN_FAULT";
case 0x58: return "DATA_INVALID_FAULT";
case 0x59: return "TILE_RANGE_FAULT";
case 0x5A: return "ADDR_RANGE_FAULT";
case 0x60: return "OUT_OF_MEMORY";
/* GPU exceptions */
case 0x80: return "DELAYED_BUS_FAULT";
case 0x88: return "SHAREABILITY_FAULT";
/* MMU exceptions */
case 0xC1: return "TRANSLATION_FAULT_LEVEL1";
case 0xC2: return "TRANSLATION_FAULT_LEVEL2";
case 0xC3: return "TRANSLATION_FAULT_LEVEL3";
case 0xC4: return "TRANSLATION_FAULT_LEVEL4";
case 0xC8: return "PERMISSION_FAULT";
case 0xC9 ... 0xCF: return "PERMISSION_FAULT";
case 0xD1: return "TRANSTAB_BUS_FAULT_LEVEL1";
case 0xD2: return "TRANSTAB_BUS_FAULT_LEVEL2";
case 0xD3: return "TRANSTAB_BUS_FAULT_LEVEL3";
case 0xD4: return "TRANSTAB_BUS_FAULT_LEVEL4";
case 0xD8: return "ACCESS_FLAG";
case 0xD9 ... 0xDF: return "ACCESS_FLAG";
case 0xE0 ... 0xE7: return "ADDRESS_SIZE_FAULT";
case 0xE8 ... 0xEF: return "MEMORY_ATTRIBUTES_FAULT";
}
return "UNKNOWN";
}
#ifdef CONFIG_PM
int panfrost_device_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct panfrost_device *pfdev = platform_get_drvdata(pdev);
panfrost_gpu_soft_reset(pfdev);
/* TODO: Re-enable all other address spaces */
panfrost_gpu_power_on(pfdev);
panfrost_mmu_enable(pfdev, 0);
panfrost_job_enable_interrupts(pfdev);
panfrost_devfreq_resume(pfdev);
return 0;
}
int panfrost_device_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct panfrost_device *pfdev = platform_get_drvdata(pdev);
if (!panfrost_job_is_idle(pfdev))
return -EBUSY;
panfrost_devfreq_suspend(pfdev);
panfrost_gpu_power_off(pfdev);
return 0;
}
#endif
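These suspend/resume callbacks are referenced from panfrost_drv.c, whose diff is not shown here. A minimal sketch of the wiring, assuming a runtime-PM-only hookup (the actual driver may wire system sleep ops as well), would be:

/* Sketch, assuming runtime-PM-only wiring; not taken from this series. */
static const struct dev_pm_ops example_panfrost_pm_ops = {
	SET_RUNTIME_PM_OPS(panfrost_device_suspend, panfrost_device_resume,
			   NULL)
};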
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2018 Marty E. Plummer <hanetzer@startmail.com> */
/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
#ifndef __PANFROST_DEVICE_H__
#define __PANFROST_DEVICE_H__
#include <linux/spinlock.h>
#include <drm/drm_device.h>
#include <drm/drm_mm.h>
#include <drm/gpu_scheduler.h>
struct panfrost_device;
struct panfrost_mmu;
struct panfrost_job_slot;
struct panfrost_job;
#define NUM_JOB_SLOTS 3
struct panfrost_features {
u16 id;
u16 revision;
u64 shader_present;
u64 tiler_present;
u64 l2_present;
u64 stack_present;
u32 as_present;
u32 js_present;
u32 l2_features;
u32 core_features;
u32 tiler_features;
u32 mem_features;
u32 mmu_features;
u32 thread_features;
u32 max_threads;
u32 thread_max_workgroup_sz;
u32 thread_max_barrier_sz;
u32 coherency_features;
u32 texture_features[4];
u32 js_features[16];
u32 nr_core_groups;
unsigned long hw_features[64 / BITS_PER_LONG];
unsigned long hw_issues[64 / BITS_PER_LONG];
};
struct panfrost_devfreq_slot {
ktime_t busy_time;
ktime_t idle_time;
ktime_t time_last_update;
bool busy;
};
struct panfrost_device {
struct device *dev;
struct drm_device *ddev;
struct platform_device *pdev;
spinlock_t hwaccess_lock;
struct drm_mm mm;
spinlock_t mm_lock;
void __iomem *iomem;
struct clk *clock;
struct regulator *regulator;
struct reset_control *rstc;
struct panfrost_features features;
struct panfrost_mmu *mmu;
struct panfrost_job_slot *js;
struct panfrost_job *jobs[NUM_JOB_SLOTS];
struct list_head scheduled_jobs;
struct mutex sched_lock;
struct {
struct devfreq *devfreq;
struct thermal_cooling_device *cooling;
unsigned long cur_freq;
unsigned long cur_volt;
struct panfrost_devfreq_slot slot[NUM_JOB_SLOTS];
} devfreq;
};
struct panfrost_file_priv {
struct panfrost_device *pfdev;
struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
};
static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev)
{
return ddev->dev_private;
}
/* Bifrost GPU IDs (non-zero top nibble) are compared on the arch-major
 * and product-major fields only, hence the 0xf00f mask below.
 */
static inline int panfrost_model_cmp(struct panfrost_device *pfdev, s32 id)
{
s32 match_id = pfdev->features.id;
if (match_id & 0xf000)
match_id &= 0xf00f;
return match_id - id;
}
static inline bool panfrost_model_eq(struct panfrost_device *pfdev, s32 id)
{
return !panfrost_model_cmp(pfdev, id);
}
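A usage sketch for the model helpers (the product ID below is a placeholder, not a value taken from this series):

/* Sketch: gate a per-model workaround. 0x750 is a placeholder product ID. */
static inline bool example_is_quirky_model(struct panfrost_device *pfdev)
{
	return panfrost_model_eq(pfdev, 0x750);
}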
int panfrost_device_init(struct panfrost_device *pfdev);
void panfrost_device_fini(struct panfrost_device *pfdev);
int panfrost_device_resume(struct device *dev);
int panfrost_device_suspend(struct device *dev);
const char *panfrost_exception_name(struct panfrost_device *pfdev, u32 exception_code);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <drm/panfrost_drm.h>
#include "panfrost_device.h"
#include "panfrost_gem.h"
#include "panfrost_mmu.h"
/* Called by DRM core on the last userspace/kernel unreference of the
 * BO.
 */
void panfrost_gem_free_object(struct drm_gem_object *obj)
{
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
struct panfrost_device *pfdev = obj->dev->dev_private;
panfrost_mmu_unmap(bo);
spin_lock(&pfdev->mm_lock);
drm_mm_remove_node(&bo->node);
spin_unlock(&pfdev->mm_lock);
drm_gem_shmem_free_object(obj);
}
static const struct drm_gem_object_funcs panfrost_gem_funcs = {
.free = panfrost_gem_free_object,
.print_info = drm_gem_shmem_print_info,
.pin = drm_gem_shmem_pin,
.unpin = drm_gem_shmem_unpin,
.get_sg_table = drm_gem_shmem_get_sg_table,
.vmap = drm_gem_shmem_vmap,
.vunmap = drm_gem_shmem_vunmap,
.vm_ops = &drm_gem_shmem_vm_ops,
};
/**
* panfrost_gem_create_object - Implementation of driver->gem_create_object.
* @dev: DRM device
* @size: Size in bytes of the memory the object will reference
*
* This lets the GEM helpers allocate object structs for us, and keep
* our BO stats correct.
*/
struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size)
{
int ret;
struct panfrost_device *pfdev = dev->dev_private;
struct panfrost_gem_object *obj;
obj = kzalloc(sizeof(*obj), GFP_KERNEL);
if (!obj)
return NULL;
obj->base.base.funcs = &panfrost_gem_funcs;
spin_lock(&pfdev->mm_lock);
ret = drm_mm_insert_node(&pfdev->mm, &obj->node,
roundup(size, PAGE_SIZE) >> PAGE_SHIFT);
spin_unlock(&pfdev->mm_lock);
if (ret)
goto free_obj;
return &obj->base.base;
free_obj:
kfree(obj);
return ERR_PTR(ret);
}
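For context, a simplified sketch of the core side (not part of this diff): the shmem helper allocates through this hook when the driver provides one, so the drm_mm bookkeeping above runs for every BO:

/* Simplified sketch of the drm_gem_shmem_helper allocation path. */
struct drm_gem_object *obj;

if (dev->driver->gem_create_object)
	obj = dev->driver->gem_create_object(dev, size);
else
	obj = kzalloc(sizeof(struct drm_gem_shmem_object), GFP_KERNEL);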
struct drm_gem_object *
panfrost_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt)
{
struct drm_gem_object *obj;
struct panfrost_gem_object *pobj;
obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(obj))
return ERR_CAST(obj);
pobj = to_panfrost_bo(obj);
obj->resv = attach->dmabuf->resv;
panfrost_mmu_map(pobj);
return obj;
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
#ifndef __PANFROST_GEM_H__
#define __PANFROST_GEM_H__
#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_mm.h>
struct panfrost_gem_object {
struct drm_gem_shmem_object base;
struct drm_mm_node node;
};
static inline
struct panfrost_gem_object *to_panfrost_bo(struct drm_gem_object *obj)
{
return container_of(to_drm_gem_shmem_obj(obj), struct panfrost_gem_object, base);
}
struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size);
struct drm_gem_object *
panfrost_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
#endif /* __PANFROST_GEM_H__ */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2018 Marty E. Plummer <hanetzer@startmail.com> */
/* Copyright 2019 Collabora ltd. */
#ifndef __PANFROST_GPU_H__
#define __PANFROST_GPU_H__
struct panfrost_device;
int panfrost_gpu_init(struct panfrost_device *pfdev);
void panfrost_gpu_fini(struct panfrost_device *pfdev);
u32 panfrost_gpu_get_latest_flush_id(struct panfrost_device *pfdev);
int panfrost_gpu_soft_reset(struct panfrost_device *pfdev);
void panfrost_gpu_power_on(struct panfrost_device *pfdev);
void panfrost_gpu_power_off(struct panfrost_device *pfdev);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* (C) COPYRIGHT 2014-2018 ARM Limited. All rights reserved. */
/* Copyright 2019 Linaro, Ltd., Rob Herring <robh@kernel.org> */
#ifndef __PANFROST_ISSUES_H__
#define __PANFROST_ISSUES_H__
#include <linux/bitops.h>
#include "panfrost_device.h"
/*
* This is not a complete list of issues, but only the ones the driver needs
* to care about.
*/
enum panfrost_hw_issue {
HW_ISSUE_6367,
HW_ISSUE_6787,
HW_ISSUE_8186,
HW_ISSUE_8245,
HW_ISSUE_8316,
HW_ISSUE_8394,
HW_ISSUE_8401,
HW_ISSUE_8408,
HW_ISSUE_8443,
HW_ISSUE_8987,
HW_ISSUE_9435,
HW_ISSUE_9510,
HW_ISSUE_9630,
HW_ISSUE_10327,
HW_ISSUE_10649,
HW_ISSUE_10676,
HW_ISSUE_10797,
HW_ISSUE_10817,
HW_ISSUE_10883,
HW_ISSUE_10959,
HW_ISSUE_10969,
HW_ISSUE_11020,
HW_ISSUE_11024,
HW_ISSUE_11035,
HW_ISSUE_11056,
HW_ISSUE_T76X_3542,
HW_ISSUE_T76X_3953,
HW_ISSUE_TMIX_8463,
GPUCORE_1619,
HW_ISSUE_TMIX_8438,
HW_ISSUE_TGOX_R1_1234,
HW_ISSUE_END
};
#define hw_issues_all (\
BIT_ULL(HW_ISSUE_9435))
#define hw_issues_t600 (\
BIT_ULL(HW_ISSUE_6367) | \
BIT_ULL(HW_ISSUE_6787) | \
BIT_ULL(HW_ISSUE_8408) | \
BIT_ULL(HW_ISSUE_9510) | \
BIT_ULL(HW_ISSUE_10649) | \
BIT_ULL(HW_ISSUE_10676) | \
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_11020) | \
BIT_ULL(HW_ISSUE_11035) | \
BIT_ULL(HW_ISSUE_11056) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_t600_r0p0_15dev0 (\
BIT_ULL(HW_ISSUE_8186) | \
BIT_ULL(HW_ISSUE_8245) | \
BIT_ULL(HW_ISSUE_8316) | \
BIT_ULL(HW_ISSUE_8394) | \
BIT_ULL(HW_ISSUE_8401) | \
BIT_ULL(HW_ISSUE_8443) | \
BIT_ULL(HW_ISSUE_8987) | \
BIT_ULL(HW_ISSUE_9630) | \
BIT_ULL(HW_ISSUE_10969) | \
BIT_ULL(GPUCORE_1619))
#define hw_issues_t620 (\
BIT_ULL(HW_ISSUE_10649) | \
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_10959) | \
BIT_ULL(HW_ISSUE_11056) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_t620_r0p1 (\
BIT_ULL(HW_ISSUE_10327) | \
BIT_ULL(HW_ISSUE_10676) | \
BIT_ULL(HW_ISSUE_10817) | \
BIT_ULL(HW_ISSUE_11020) | \
BIT_ULL(HW_ISSUE_11024) | \
BIT_ULL(HW_ISSUE_11035))
#define hw_issues_t620_r1p0 (\
BIT_ULL(HW_ISSUE_11020) | \
BIT_ULL(HW_ISSUE_11024))
#define hw_issues_t720 (\
BIT_ULL(HW_ISSUE_10649) | \
BIT_ULL(HW_ISSUE_10797) | \
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_11056) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_t760 (\
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_T76X_3953) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_t760_r0p0 (\
BIT_ULL(HW_ISSUE_11020) | \
BIT_ULL(HW_ISSUE_11024) | \
BIT_ULL(HW_ISSUE_T76X_3542))
#define hw_issues_t760_r0p1 (\
BIT_ULL(HW_ISSUE_11020) | \
BIT_ULL(HW_ISSUE_11024) | \
BIT_ULL(HW_ISSUE_T76X_3542))
#define hw_issues_t760_r0p1_50rel0 (\
BIT_ULL(HW_ISSUE_T76X_3542))
#define hw_issues_t760_r0p2 (\
BIT_ULL(HW_ISSUE_11020) | \
BIT_ULL(HW_ISSUE_11024) | \
BIT_ULL(HW_ISSUE_T76X_3542))
#define hw_issues_t760_r0p3 (\
BIT_ULL(HW_ISSUE_T76X_3542))
#define hw_issues_t820 (\
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_T76X_3953) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_t830 (\
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_T76X_3953) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_t860 (\
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_T76X_3953) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_t880 (\
BIT_ULL(HW_ISSUE_10883) | \
BIT_ULL(HW_ISSUE_T76X_3953) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_g31 0
#define hw_issues_g31_r1p0 (\
BIT_ULL(HW_ISSUE_TGOX_R1_1234))
#define hw_issues_g51 0
#define hw_issues_g52 0
#define hw_issues_g71 (\
BIT_ULL(HW_ISSUE_TMIX_8463) | \
BIT_ULL(HW_ISSUE_TMIX_8438))
#define hw_issues_g71_r0p0_05dev0 (\
BIT_ULL(HW_ISSUE_T76X_3953))
#define hw_issues_g72 0
#define hw_issues_g76 0
static inline bool panfrost_has_hw_issue(struct panfrost_device *pfdev,
enum panfrost_hw_issue issue)
{
return test_bit(issue, pfdev->features.hw_issues);
}
#endif /* __PANFROST_ISSUES_H__ */
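A usage sketch for the issue bits, tied to the TLB-sync TODO in panfrost_mmu.c below (hypothetical helper name):

/* Sketch: erratum checks gate per-model workarounds. */
static bool example_needs_tlb_sync_wait(struct panfrost_device *pfdev)
{
	/* T60x needs a ~1000 GPU cycle wait before TLB sync, still a TODO. */
	return panfrost_has_hw_issue(pfdev, HW_ISSUE_6367);
}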
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2019 Collabora ltd. */
#ifndef __PANFROST_JOB_H__
#define __PANFROST_JOB_H__
#include <uapi/drm/panfrost_drm.h>
#include <drm/gpu_scheduler.h>
struct panfrost_device;
struct panfrost_gem_object;
struct panfrost_file_priv;
struct panfrost_job {
struct drm_sched_job base;
struct kref refcount;
struct panfrost_device *pfdev;
struct panfrost_file_priv *file_priv;
/* Optional fences userspace can pass in for the job to depend on. */
struct dma_fence **in_fences;
u32 in_fence_count;
/* Fence to be signaled by IRQ handler when the job is complete. */
struct dma_fence *done_fence;
__u64 jc;
__u32 requirements;
__u32 flush_id;
/* Exclusive fences we have taken from the BOs to wait for */
struct dma_fence **implicit_fences;
struct drm_gem_object **bos;
u32 bo_count;
/* Fence to be signaled by drm-sched once its done with the job */
struct dma_fence *render_done_fence;
};
int panfrost_job_init(struct panfrost_device *pfdev);
void panfrost_job_fini(struct panfrost_device *pfdev);
int panfrost_job_open(struct panfrost_file_priv *panfrost_priv);
void panfrost_job_close(struct panfrost_file_priv *panfrost_priv);
int panfrost_job_push(struct panfrost_job *job);
void panfrost_job_put(struct panfrost_job *job);
void panfrost_job_enable_interrupts(struct panfrost_device *pfdev);
int panfrost_job_is_idle(struct panfrost_device *pfdev);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/sizes.h>
#include "panfrost_device.h"
#include "panfrost_mmu.h"
#include "panfrost_gem.h"
#include "panfrost_features.h"
#include "panfrost_regs.h"
#define mmu_write(dev, reg, data) writel(data, dev->iomem + reg)
#define mmu_read(dev, reg) readl(dev->iomem + reg)
struct panfrost_mmu {
struct io_pgtable_cfg pgtbl_cfg;
struct io_pgtable_ops *pgtbl_ops;
struct mutex lock;
};
static int wait_ready(struct panfrost_device *pfdev, u32 as_nr)
{
int ret;
u32 val;
/* Wait for the MMU status to indicate there is no active command, in
* case one is pending. */
ret = readl_relaxed_poll_timeout_atomic(pfdev->iomem + AS_STATUS(as_nr),
val, !(val & AS_STATUS_AS_ACTIVE), 10, 1000);
if (ret)
dev_err(pfdev->dev, "AS_ACTIVE bit stuck\n");
return ret;
}
static int write_cmd(struct panfrost_device *pfdev, u32 as_nr, u32 cmd)
{
int status;
/* write AS_COMMAND when MMU is ready to accept another command */
status = wait_ready(pfdev, as_nr);
if (!status)
mmu_write(pfdev, AS_COMMAND(as_nr), cmd);
return status;
}
static void lock_region(struct panfrost_device *pfdev, u32 as_nr,
u64 iova, size_t size)
{
u8 region_width;
u64 region = iova & PAGE_MASK;
	/*
	 * fls returns a bit position in 1..32, so
	 * 10 + fls(num_pages) falls in the range 11..42.
	 * E.g. a 64-page region: fls(64) = 7, region_width = 17, which
	 * encodes 1 << (17 - 11) = 64 pages; non-power-of-two sizes round
	 * up one more below.
	 */
size = round_up(size, PAGE_SIZE);
region_width = 10 + fls(size >> PAGE_SHIFT);
if ((size >> PAGE_SHIFT) != (1ul << (region_width - 11))) {
/* not pow2, so must go up to the next pow2 */
region_width += 1;
}
region |= region_width;
/* Lock the region that needs to be updated */
mmu_write(pfdev, AS_LOCKADDR_LO(as_nr), region & 0xFFFFFFFFUL);
mmu_write(pfdev, AS_LOCKADDR_HI(as_nr), (region >> 32) & 0xFFFFFFFFUL);
write_cmd(pfdev, as_nr, AS_COMMAND_LOCK);
}
static int mmu_hw_do_operation(struct panfrost_device *pfdev, u32 as_nr,
u64 iova, size_t size, u32 op)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&pfdev->hwaccess_lock, flags);
if (op != AS_COMMAND_UNLOCK)
lock_region(pfdev, as_nr, iova, size);
/* Run the MMU operation */
write_cmd(pfdev, as_nr, op);
/* Wait for the flush to complete */
ret = wait_ready(pfdev, as_nr);
spin_unlock_irqrestore(&pfdev->hwaccess_lock, flags);
return ret;
}
void panfrost_mmu_enable(struct panfrost_device *pfdev, u32 as_nr)
{
struct io_pgtable_cfg *cfg = &pfdev->mmu->pgtbl_cfg;
u64 transtab = cfg->arm_mali_lpae_cfg.transtab;
u64 memattr = cfg->arm_mali_lpae_cfg.memattr;
mmu_write(pfdev, MMU_INT_CLEAR, ~0);
mmu_write(pfdev, MMU_INT_MASK, ~0);
mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), transtab & 0xffffffffUL);
mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), transtab >> 32);
	/* Need to revisit mem attrs: NC is the default here, while the
	 * vendor Mali driver uses inner WT.
	 */
mmu_write(pfdev, AS_MEMATTR_LO(as_nr), memattr & 0xffffffffUL);
mmu_write(pfdev, AS_MEMATTR_HI(as_nr), memattr >> 32);
write_cmd(pfdev, as_nr, AS_COMMAND_UPDATE);
}
static void mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
{
mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), 0);
mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), 0);
mmu_write(pfdev, AS_MEMATTR_LO(as_nr), 0);
mmu_write(pfdev, AS_MEMATTR_HI(as_nr), 0);
write_cmd(pfdev, as_nr, AS_COMMAND_UPDATE);
}
/* Use a 2MB block mapping only when the address is 2MB-aligned and at
 * least 2MB remains; fall back to 4K pages otherwise.
 */
static size_t get_pgsize(u64 addr, size_t size)
{
if (addr & (SZ_2M - 1) || size < SZ_2M)
return SZ_4K;
return SZ_2M;
}
int panfrost_mmu_map(struct panfrost_gem_object *bo)
{
struct drm_gem_object *obj = &bo->base.base;
struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
struct io_pgtable_ops *ops = pfdev->mmu->pgtbl_ops;
u64 iova = bo->node.start << PAGE_SHIFT;
unsigned int count;
struct scatterlist *sgl;
struct sg_table *sgt;
int ret;
sgt = drm_gem_shmem_get_pages_sgt(obj);
if (WARN_ON(IS_ERR(sgt)))
return PTR_ERR(sgt);
ret = pm_runtime_get_sync(pfdev->dev);
if (ret < 0)
return ret;
mutex_lock(&pfdev->mmu->lock);
for_each_sg(sgt->sgl, sgl, sgt->nents, count) {
unsigned long paddr = sg_dma_address(sgl);
size_t len = sg_dma_len(sgl);
dev_dbg(pfdev->dev, "map: iova=%llx, paddr=%lx, len=%zx", iova, paddr, len);
while (len) {
size_t pgsize = get_pgsize(iova | paddr, len);
ops->map(ops, iova, paddr, pgsize, IOMMU_WRITE | IOMMU_READ);
iova += pgsize;
paddr += pgsize;
len -= pgsize;
}
}
mmu_hw_do_operation(pfdev, 0, bo->node.start << PAGE_SHIFT,
bo->node.size << PAGE_SHIFT, AS_COMMAND_FLUSH_PT);
mutex_unlock(&pfdev->mmu->lock);
pm_runtime_mark_last_busy(pfdev->dev);
pm_runtime_put_autosuspend(pfdev->dev);
return 0;
}
void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
{
struct drm_gem_object *obj = &bo->base.base;
struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
struct io_pgtable_ops *ops = pfdev->mmu->pgtbl_ops;
u64 iova = bo->node.start << PAGE_SHIFT;
size_t len = bo->node.size << PAGE_SHIFT;
size_t unmapped_len = 0;
int ret;
dev_dbg(pfdev->dev, "unmap: iova=%llx, len=%zx", iova, len);
ret = pm_runtime_get_sync(pfdev->dev);
if (ret < 0)
return;
mutex_lock(&pfdev->mmu->lock);
while (unmapped_len < len) {
size_t unmapped_page;
size_t pgsize = get_pgsize(iova, len - unmapped_len);
unmapped_page = ops->unmap(ops, iova, pgsize);
if (!unmapped_page)
break;
iova += unmapped_page;
unmapped_len += unmapped_page;
}
mmu_hw_do_operation(pfdev, 0, bo->node.start << PAGE_SHIFT,
bo->node.size << PAGE_SHIFT, AS_COMMAND_FLUSH_PT);
mutex_unlock(&pfdev->mmu->lock);
pm_runtime_mark_last_busy(pfdev->dev);
pm_runtime_put_autosuspend(pfdev->dev);
}
static void mmu_tlb_inv_context_s1(void *cookie)
{
struct panfrost_device *pfdev = cookie;
mmu_hw_do_operation(pfdev, 0, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
}
static void mmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
size_t granule, bool leaf, void *cookie)
{}
static void mmu_tlb_sync_context(void *cookie)
{
//struct panfrost_device *pfdev = cookie;
// TODO: Wait 1000 GPU cycles for HW_ISSUE_6367/T60X
}
static const struct iommu_gather_ops mmu_tlb_ops = {
.tlb_flush_all = mmu_tlb_inv_context_s1,
.tlb_add_flush = mmu_tlb_inv_range_nosync,
.tlb_sync = mmu_tlb_sync_context,
};
static const char *access_type_name(struct panfrost_device *pfdev,
u32 fault_status)
{
switch (fault_status & AS_FAULTSTATUS_ACCESS_TYPE_MASK) {
case AS_FAULTSTATUS_ACCESS_TYPE_ATOMIC:
if (panfrost_has_hw_feature(pfdev, HW_FEATURE_AARCH64_MMU))
return "ATOMIC";
else
return "UNKNOWN";
case AS_FAULTSTATUS_ACCESS_TYPE_READ:
return "READ";
case AS_FAULTSTATUS_ACCESS_TYPE_WRITE:
return "WRITE";
case AS_FAULTSTATUS_ACCESS_TYPE_EX:
return "EXECUTE";
default:
WARN_ON(1);
return NULL;
}
}
static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
{
struct panfrost_device *pfdev = data;
u32 status = mmu_read(pfdev, MMU_INT_STAT);
int i;
if (!status)
return IRQ_NONE;
dev_err(pfdev->dev, "mmu irq status=%x\n", status);
for (i = 0; status; i++) {
u32 mask = BIT(i) | BIT(i + 16);
u64 addr;
u32 fault_status;
u32 exception_type;
u32 access_type;
u32 source_id;
if (!(status & mask))
continue;
fault_status = mmu_read(pfdev, AS_FAULTSTATUS(i));
addr = mmu_read(pfdev, AS_FAULTADDRESS_LO(i));
addr |= (u64)mmu_read(pfdev, AS_FAULTADDRESS_HI(i)) << 32;
/* decode the fault status */
exception_type = fault_status & 0xFF;
access_type = (fault_status >> 8) & 0x3;
source_id = (fault_status >> 16);
/* terminal fault, print info about the fault */
dev_err(pfdev->dev,
"Unhandled Page fault in AS%d at VA 0x%016llX\n"
"Reason: %s\n"
"raw fault status: 0x%X\n"
"decoded fault status: %s\n"
"exception type 0x%X: %s\n"
"access type 0x%X: %s\n"
"source id 0x%X\n",
i, addr,
"TODO",
fault_status,
(fault_status & (1 << 10) ? "DECODER FAULT" : "SLAVE FAULT"),
exception_type, panfrost_exception_name(pfdev, exception_type),
access_type, access_type_name(pfdev, fault_status),
source_id);
mmu_write(pfdev, MMU_INT_CLEAR, mask);
status &= ~mask;
}
return IRQ_HANDLED;
}
int panfrost_mmu_init(struct panfrost_device *pfdev)
{
struct io_pgtable_ops *pgtbl_ops;
int err, irq;
pfdev->mmu = devm_kzalloc(pfdev->dev, sizeof(*pfdev->mmu), GFP_KERNEL);
if (!pfdev->mmu)
return -ENOMEM;
mutex_init(&pfdev->mmu->lock);
irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "mmu");
if (irq <= 0)
return -ENODEV;
err = devm_request_irq(pfdev->dev, irq, panfrost_mmu_irq_handler,
IRQF_SHARED, "mmu", pfdev);
if (err) {
dev_err(pfdev->dev, "failed to request mmu irq");
return err;
}
mmu_write(pfdev, MMU_INT_CLEAR, ~0);
mmu_write(pfdev, MMU_INT_MASK, ~0);
pfdev->mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
.pgsize_bitmap = SZ_4K | SZ_2M,
.ias = FIELD_GET(0xff, pfdev->features.mmu_features),
.oas = FIELD_GET(0xff00, pfdev->features.mmu_features),
.tlb = &mmu_tlb_ops,
.iommu_dev = pfdev->dev,
};
pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &pfdev->mmu->pgtbl_cfg,
pfdev);
if (!pgtbl_ops)
return -ENOMEM;
pfdev->mmu->pgtbl_ops = pgtbl_ops;
panfrost_mmu_enable(pfdev, 0);
return 0;
}
void panfrost_mmu_fini(struct panfrost_device *pfdev)
{
mmu_write(pfdev, MMU_INT_MASK, 0);
mmu_disable(pfdev, 0);
free_io_pgtable_ops(pfdev->mmu->pgtbl_ops);
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
#ifndef __PANFROST_MMU_H__
#define __PANFROST_MMU_H__
struct panfrost_gem_object;
int panfrost_mmu_map(struct panfrost_gem_object *bo);
void panfrost_mmu_unmap(struct panfrost_gem_object *bo);
int panfrost_mmu_init(struct panfrost_device *pfdev);
void panfrost_mmu_fini(struct panfrost_device *pfdev);
void panfrost_mmu_enable(struct panfrost_device *pfdev, u32 as_nr);
#endif
......@@ -236,7 +236,7 @@ static struct sun4i_tcon *sun4i_get_tcon0(struct drm_device *drm)
return NULL;
}
void sun4i_tcon_set_mux(struct sun4i_tcon *tcon, int channel,
static void sun4i_tcon_set_mux(struct sun4i_tcon *tcon, int channel,
const struct drm_encoder *encoder)
{
int ret = -ENOTSUPP;
......
......@@ -269,12 +269,12 @@ static int sun8i_tcon_top_remove(struct platform_device *pdev)
return 0;
}
-const struct sun8i_tcon_top_quirks sun8i_r40_tcon_top_quirks = {
+static const struct sun8i_tcon_top_quirks sun8i_r40_tcon_top_quirks = {
.has_tcon_tv1 = true,
.has_dsi = true,
};
-const struct sun8i_tcon_top_quirks sun50i_h6_tcon_top_quirks = {
+static const struct sun8i_tcon_top_quirks sun50i_h6_tcon_top_quirks = {
/* Nothing special */
};
......
......@@ -267,7 +267,7 @@ static int hx8357d_probe(struct spi_device *spi)
spi_set_drvdata(spi, drm);
-	drm_fbdev_generic_setup(drm, 32);
+	drm_fbdev_generic_setup(drm, 0);
return 0;
}
......