Commit dd08ebf6 authored by Matthew Brost, committed by Rodrigo Vivi

drm/xe: Introduce a new DRM driver for Intel GPUs

Xe is a new driver for Intel GPUs that supports both integrated and
discrete platforms starting with Tiger Lake (first Intel Xe Architecture).

The code is at a stage where it is already functional and has experimental
support for multiple platforms starting from Tiger Lake, with initial
support implemented in Mesa (for Iris and Anv, our OpenGL and Vulkan
drivers), as well as in NEO (for OpenCL and Level0).

The new Xe driver leverages a lot from i915.

As for display, the intent is to share the display code with the i915
driver so that there is maximum reuse there. But it is not added
in this patch.

This initial work is a collaboration of many people and unfortunately
the big squashed patch won't fully honor the proper credits. But let's
get some git quick stats so we can at least try to preserve some of the
credits:
Co-developed-by: Matthew Brost <matthew.brost@intel.com>
Co-developed-by: Matthew Auld <matthew.auld@intel.com>
Co-developed-by: Matt Roper <matthew.d.roper@intel.com>
Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Co-developed-by: Francois Dugast <francois.dugast@intel.com>
Co-developed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Co-developed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Co-developed-by: Philippe Lecluse <philippe.lecluse@intel.com>
Co-developed-by: Nirmoy Das <nirmoy.das@intel.com>
Co-developed-by: Jani Nikula <jani.nikula@intel.com>
Co-developed-by: José Roberto de Souza <jose.souza@intel.com>
Co-developed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Co-developed-by: Dave Airlie <airlied@redhat.com>
Co-developed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Co-developed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Co-developed-by: Mauro Carvalho Chehab <mchehab@kernel.org>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
parent a60501d7
@@ -18,6 +18,7 @@ GPU Driver Documentation
vkms
bridge/dw-hdmi
xen-front
xe/index
afbc
komeda-kms
panfrost
.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=======================
drm/xe Intel GFX Driver
=======================

The drm/xe driver supports some future GFX cards with rendering, display,
compute and media. Support for currently available platforms like TGL, ADL,
DG2, etc is provided to prototype the driver.

.. toctree::
   :titlesonly:

   xe_mm
   xe_map
   xe_migrate
   xe_cs
   xe_pm
   xe_pcode
   xe_gt_mcr
   xe_wa
   xe_rtp
   xe_firmware

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

==================
Command submission
==================

.. kernel-doc:: drivers/gpu/drm/xe/xe_exec.c
   :doc: Execbuf (User GPU command submission)

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

========
Firmware
========

Firmware Layout
===============

.. kernel-doc:: drivers/gpu/drm/xe/xe_uc_fw_abi.h
   :doc: Firmware Layout

Write Once Protected Content Memory (WOPCM) Layout
==================================================

.. kernel-doc:: drivers/gpu/drm/xe/xe_wopcm.c
   :doc: Write Once Protected Content Memory (WOPCM) Layout

GuC CTB Blob
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_guc_ct.c
   :doc: GuC CTB Blob

GuC Power Conservation (PC)
===========================

.. kernel-doc:: drivers/gpu/drm/xe/xe_guc_pc.c
   :doc: GuC Power Conservation (PC)

Internal API
============

TODO

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

==============================================
GT Multicast/Replicated (MCR) Register Support
==============================================

.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_mcr.c
   :doc: GT Multicast/Replicated (MCR) Register Support

Internal API
============

TODO

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=========
Map Layer
=========

.. kernel-doc:: drivers/gpu/drm/xe/xe_map.h
   :doc: Map layer

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=============
Migrate Layer
=============

.. kernel-doc:: drivers/gpu/drm/xe/xe_migrate_doc.h
   :doc: Migrate Layer

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=================
Memory Management
=================

.. kernel-doc:: drivers/gpu/drm/xe/xe_bo_doc.h
   :doc: Buffer Objects (BO)

Pagetable building
==================

.. kernel-doc:: drivers/gpu/drm/xe/xe_pt.c
   :doc: Pagetable building

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=====
Pcode
=====

.. kernel-doc:: drivers/gpu/drm/xe/xe_pcode.c
   :doc: PCODE

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_pcode.c
   :internal:

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

========================
Runtime Power Management
========================

.. kernel-doc:: drivers/gpu/drm/xe/xe_pm.c
   :doc: Xe Power Management

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_pm.c
   :internal:

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

=========================
Register Table Processing
=========================

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp.c
   :doc: Register Table Processing

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp_types.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/xe/xe_rtp.c
   :internal:

.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)

====================
Hardware workarounds
====================

.. kernel-doc:: drivers/gpu/drm/xe/xe_wa.c
   :doc: Hardware workarounds

Internal API
============

.. kernel-doc:: drivers/gpu/drm/xe/xe_wa.c
   :internal:
@@ -276,6 +276,8 @@ source "drivers/gpu/drm/nouveau/Kconfig"
source "drivers/gpu/drm/i915/Kconfig"
source "drivers/gpu/drm/xe/Kconfig"
source "drivers/gpu/drm/kmb/Kconfig"
config DRM_VGEM
@@ -134,6 +134,7 @@ obj-$(CONFIG_DRM_RADEON)+= radeon/
obj-$(CONFIG_DRM_AMDGPU)+= amd/amdgpu/
obj-$(CONFIG_DRM_AMDGPU)+= amd/amdxcp/
obj-$(CONFIG_DRM_I915) += i915/
obj-$(CONFIG_DRM_XE) += xe/
obj-$(CONFIG_DRM_KMB_DISPLAY) += kmb/
obj-$(CONFIG_DRM_MGAG200) += mgag200/
obj-$(CONFIG_DRM_V3D) += v3d/
# SPDX-License-Identifier: GPL-2.0-only
*.hdrtest
# SPDX-License-Identifier: GPL-2.0-only
config DRM_XE
tristate "Intel Xe Graphics"
depends on DRM && PCI && MMU
select INTERVAL_TREE
# we need shmfs for the swappable backing store, and in particular
# the shmem_readpage() which depends upon tmpfs
select SHMEM
select TMPFS
select DRM_BUDDY
select DRM_KMS_HELPER
select DRM_PANEL
select DRM_SUBALLOC_HELPER
select RELAY
select IRQ_WORK
select SYNC_FILE
select IOSF_MBI
select CRC32
select SND_HDA_I915 if SND_HDA_CORE
select CEC_CORE if CEC_NOTIFIER
select VMAP_PFN
select DRM_TTM
select DRM_TTM_HELPER
select DRM_SCHED
select MMU_NOTIFIER
help
Experimental driver for Intel Xe series GPUs
If "M" is selected, the module will be called xe.
config DRM_XE_FORCE_PROBE
string "Force probe xe for selected Intel hardware IDs"
depends on DRM_XE
help
This is the default value for the xe.force_probe module
parameter. Using the module parameter overrides this option.
Force probe the xe for Intel graphics devices that are
recognized but not properly supported by this kernel version. It is
recommended to upgrade to a kernel version with proper support as soon
as it is available.
It can also be used to block the probe of recognized and fully
supported devices.
Use "" to disable force probe. If in doubt, use this.
Use "<pci-id>[,<pci-id>,...]" to force probe the xe for listed
devices. For example, "4500" or "4500,4571".
Use "*" to force probe the driver for all known devices.
Use "!" right before the ID to block the probe of the device. For
example, "4500,!4571" forces the probe of 4500 and blocks the probe of
4571.
Use "!*" to block the probe of the driver for all known devices.
menu "drm/Xe Debugging"
depends on DRM_XE
depends on EXPERT
source "drivers/gpu/drm/xe/Kconfig.debug"
endmenu
# SPDX-License-Identifier: GPL-2.0-only
config DRM_XE_WERROR
bool "Force GCC to throw an error instead of a warning when compiling"
# As this may inadvertently break the build, only allow the user
# to shoot oneself in the foot iff they aim really hard
depends on EXPERT
# We use the dependency on !COMPILE_TEST to not be enabled in
# allmodconfig or allyesconfig configurations
depends on !COMPILE_TEST
default n
help
Add -Werror to the build flags for (and only for) xe.ko.
Do not enable this unless you are writing code for the xe.ko module.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_DEBUG
bool "Enable additional driver debugging"
depends on DRM_XE
depends on EXPERT
depends on !COMPILE_TEST
default n
help
Choose this option to turn on extra driver debugging that may affect
performance but will catch some internal issues.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_DEBUG_VM
bool "Enable extra VM debugging info"
default n
help
Enable extra VM debugging info
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_DEBUG_MEM
bool "Enable passing SYS/LMEM addresses to user space"
default n
help
Pass object location through uapi. Intended for extended
testing and development only.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_SIMPLE_ERROR_CAPTURE
bool "Enable simple error capture to dmesg on job timeout"
default n
help
Choose this option when debugging an unexpected job timeout
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_KUNIT_TEST
tristate "KUnit tests for the drm xe driver" if !KUNIT_ALL_TESTS
depends on DRM_XE && KUNIT
default KUNIT_ALL_TESTS
select DRM_EXPORT_FOR_TESTS if m
help
Choose this option to allow the driver to perform selftests under
the kunit framework
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_LARGE_GUC_BUFFER
bool "Enable larger guc log buffer"
default n
help
Choose this option when debugging guc issues.
Buffer should be large enough for complex issues.
Recommended for driver developers only.
If in doubt, say "N".
config DRM_XE_USERPTR_INVAL_INJECT
bool "Inject userptr invalidation -EINVAL errors"
default n
help
Choose this option when debugging error paths that
are hit during checks for userptr invalidations.
Recommended for driver developers only.
If in doubt, say "N".
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
# Add a set of useful warning flags and enable -Werror for CI to prevent
# trivial mistakes from creeping in. We have to do this piecemeal as we reject
# any patch that isn't warning clean, so turning on -Wall -Wextra (or W=1) we
# need to filter out dubious warnings. Still it is our interest
# to keep running locally with W=1 C=1 until we are completely clean.
#
# Note the danger in using -Wall -Wextra is that when CI updates gcc we
# will most likely get a sudden build breakage... Hopefully we will fix
# new warnings before CI updates!
subdir-ccflags-y := -Wall -Wextra
# making these call cc-disable-warning breaks when trying to build xe.mod.o
# by calling make M=drivers/gpu/drm/xe. This doesn't happen in upstream tree,
# so it was somehow fixed by the changes in the build system. Move it back to
# $(call cc-disable-warning, ...) after rebase.
subdir-ccflags-y += -Wno-unused-parameter
subdir-ccflags-y += -Wno-type-limits
#subdir-ccflags-y += $(call cc-disable-warning, unused-parameter)
#subdir-ccflags-y += $(call cc-disable-warning, type-limits)
subdir-ccflags-y += $(call cc-disable-warning, missing-field-initializers)
subdir-ccflags-y += $(call cc-disable-warning, unused-but-set-variable)
# clang warnings
subdir-ccflags-y += $(call cc-disable-warning, sign-compare)
subdir-ccflags-y += $(call cc-disable-warning, sometimes-uninitialized)
subdir-ccflags-y += $(call cc-disable-warning, initializer-overrides)
subdir-ccflags-y += $(call cc-disable-warning, frame-address)
subdir-ccflags-$(CONFIG_DRM_XE_WERROR) += -Werror
# Fine grained warnings disable
CFLAGS_xe_pci.o = $(call cc-disable-warning, override-init)
subdir-ccflags-y += -I$(srctree)/$(src)
# Please keep these build lists sorted!
# core driver code
xe-y += xe_bb.o \
xe_bo.o \
xe_bo_evict.o \
xe_debugfs.o \
xe_device.o \
xe_dma_buf.o \
xe_engine.o \
xe_exec.o \
xe_execlist.o \
xe_force_wake.o \
xe_ggtt.o \
xe_gpu_scheduler.o \
xe_gt.o \
xe_gt_clock.o \
xe_gt_debugfs.o \
xe_gt_mcr.o \
xe_gt_pagefault.o \
xe_gt_sysfs.o \
xe_gt_topology.o \
xe_guc.o \
xe_guc_ads.o \
xe_guc_ct.o \
xe_guc_debugfs.o \
xe_guc_hwconfig.o \
xe_guc_log.o \
xe_guc_pc.o \
xe_guc_submit.o \
xe_hw_engine.o \
xe_hw_fence.o \
xe_huc.o \
xe_huc_debugfs.o \
xe_irq.o \
xe_lrc.o \
xe_migrate.o \
xe_mmio.o \
xe_mocs.o \
xe_module.o \
xe_pci.o \
xe_pcode.o \
xe_pm.o \
xe_preempt_fence.o \
xe_pt.o \
xe_pt_walk.o \
xe_query.o \
xe_reg_sr.o \
xe_reg_whitelist.o \
xe_rtp.o \
xe_ring_ops.o \
xe_sa.o \
xe_sched_job.o \
xe_step.o \
xe_sync.o \
xe_trace.o \
xe_ttm_gtt_mgr.o \
xe_ttm_vram_mgr.o \
xe_tuning.o \
xe_uc.o \
xe_uc_debugfs.o \
xe_uc_fw.o \
xe_vm.o \
xe_vm_madvise.o \
xe_wait_user_fence.o \
xe_wa.o \
xe_wopcm.o
# XXX: Needed for i915 register definitions. Will be removed after xe-regs.
subdir-ccflags-y += -I$(srctree)/drivers/gpu/drm/i915/
obj-$(CONFIG_DRM_XE) += xe.o
obj-$(CONFIG_DRM_XE_KUNIT_TEST) += tests/
# header test
always-$(CONFIG_DRM_XE_WERROR) += \
$(patsubst %.h,%.hdrtest, $(shell cd $(srctree)/$(src) && find * -name '*.h'))
quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@)
cmd_hdrtest = $(CC) -DHDRTEST $(filter-out $(CFLAGS_GCOV), $(c_flags)) -S -o /dev/null -x c /dev/null -include $<; touch $@
$(obj)/%.hdrtest: $(src)/%.h FORCE
$(call if_changed_dep,hdrtest)
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_COMMUNICATION_CTB_ABI_H
#define _ABI_GUC_COMMUNICATION_CTB_ABI_H
#include <linux/types.h>
#include <linux/build_bug.h>
#include "guc_messages_abi.h"
/**
* DOC: CT Buffer
*
* Circular buffer used to send `CTB Message`_
*/
/**
* DOC: CTB Descriptor
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:0 | **HEAD** - offset (in dwords) to the last dword that was |
* | | | read from the `CT Buffer`_. |
* | | | It can only be updated by the receiver. |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | **TAIL** - offset (in dwords) to the last dword that was |
* | | | written to the `CT Buffer`_. |
* | | | It can only be updated by the sender. |
* +---+-------+--------------------------------------------------------------+
* | 2 | 31:0 | **STATUS** - status of the CTB |
* | | | |
* | | | - _`GUC_CTB_STATUS_NO_ERROR` = 0 (normal operation) |
* | | | - _`GUC_CTB_STATUS_OVERFLOW` = 1 (head/tail too large) |
* | | | - _`GUC_CTB_STATUS_UNDERFLOW` = 2 (truncated message) |
* | | | - _`GUC_CTB_STATUS_MISMATCH` = 4 (head/tail modified) |
* +---+-------+--------------------------------------------------------------+
* |...| | RESERVED = MBZ |
* +---+-------+--------------------------------------------------------------+
* | 15| 31:0 | RESERVED = MBZ |
* +---+-------+--------------------------------------------------------------+
*/
struct guc_ct_buffer_desc {
u32 head;
u32 tail;
u32 status;
#define GUC_CTB_STATUS_NO_ERROR 0
#define GUC_CTB_STATUS_OVERFLOW (1 << 0)
#define GUC_CTB_STATUS_UNDERFLOW (1 << 1)
#define GUC_CTB_STATUS_MISMATCH (1 << 2)
u32 reserved[13];
} __packed;
static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
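/*
 * A minimal sketch (not part of this commit) of how a receiver could
 * sanity-check a descriptor against the layout documented above before
 * trusting head/tail. The helper name and the size_in_dw parameter are
 * illustrative only.
 */
static inline bool guc_ct_buffer_desc_is_valid(const struct guc_ct_buffer_desc *desc,
                                               u32 size_in_dw)
{
        return desc->status == GUC_CTB_STATUS_NO_ERROR &&
               desc->head < size_in_dw && desc->tail < size_in_dw;
}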
/**
* DOC: CTB Message
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:16 | **FENCE** - message identifier |
* | +-------+--------------------------------------------------------------+
* | | 15:12 | **FORMAT** - format of the CTB message |
* | | | - _`GUC_CTB_FORMAT_HXG` = 0 - see `CTB HXG Message`_ |
* | +-------+--------------------------------------------------------------+
* | | 11:8 | **RESERVED** |
* | +-------+--------------------------------------------------------------+
* | | 7:0 | **NUM_DWORDS** - length of the CTB message (w/o header) |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | optional (depends on FORMAT) |
* +---+-------+ |
* |...| | |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_CTB_HDR_LEN 1u
#define GUC_CTB_MSG_MIN_LEN GUC_CTB_HDR_LEN
#define GUC_CTB_MSG_MAX_LEN 256u
#define GUC_CTB_MSG_0_FENCE (0xffff << 16)
#define GUC_CTB_MSG_0_FORMAT (0xf << 12)
#define GUC_CTB_FORMAT_HXG 0u
#define GUC_CTB_MSG_0_RESERVED (0xf << 8)
#define GUC_CTB_MSG_0_NUM_DWORDS (0xff << 0)
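/*
 * A minimal sketch of decoding DW0 of a CTB message with the masks above.
 * FIELD_GET() comes from <linux/bitfield.h>, which would need to be included;
 * the helper names are illustrative and not part of this commit.
 */
static inline u32 guc_ctb_msg_fence(u32 dw0)
{
        return FIELD_GET(GUC_CTB_MSG_0_FENCE, dw0);
}

static inline u32 guc_ctb_msg_len(u32 dw0)
{
        /* total length in dwords: header plus NUM_DWORDS payload dwords */
        return GUC_CTB_HDR_LEN + FIELD_GET(GUC_CTB_MSG_0_NUM_DWORDS, dw0);
}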
/**
* DOC: CTB HXG Message
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:16 | FENCE |
* | +-------+--------------------------------------------------------------+
* | | 15:12 | FORMAT = GUC_CTB_FORMAT_HXG_ |
* | +-------+--------------------------------------------------------------+
* | | 11:8 | RESERVED = MBZ |
* | +-------+--------------------------------------------------------------+
* | | 7:0 | NUM_DWORDS = length (in dwords) of the embedded HXG message |
* +---+-------+--------------------------------------------------------------+
* | 1 | 31:0 | |
* +---+-------+ |
* |...| | [Embedded `HXG Message`_] |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#define GUC_CTB_HXG_MSG_MIN_LEN (GUC_CTB_MSG_MIN_LEN + GUC_HXG_MSG_MIN_LEN)
#define GUC_CTB_HXG_MSG_MAX_LEN GUC_CTB_MSG_MAX_LEN
/**
* DOC: CTB based communication
*
* The CTB (command transport buffer) communication between Host and GuC
* is based on u32 data stream written to the shared buffer. One buffer can
* be used to transmit data only in one direction (one-directional channel).
*
 * The current status of each buffer is stored in the buffer descriptor.
 * The buffer descriptor holds tail and head fields that represent the active
 * data stream. The tail field is updated by the data producer (sender), and
 * the head field is updated by the data consumer (receiver)::
*
* +------------+
* | DESCRIPTOR | +=================+============+========+
* +============+ | | MESSAGE(s) | |
* | address |--------->+=================+============+========+
* +------------+
* | head | ^-----head--------^
* +------------+
* | tail | ^---------tail-----------------^
* +------------+
* | size | ^---------------size--------------------^
* +------------+
*
 * Each message in the data stream starts with a single u32 treated as a header,
 * followed by an optional set of u32 data that makes up the message-specific payload::
*
* +------------+---------+---------+---------+
* | MESSAGE |
* +------------+---------+---------+---------+
* | msg[0] | [1] | ... | [n-1] |
* +------------+---------+---------+---------+
* | MESSAGE | MESSAGE PAYLOAD |
* + HEADER +---------+---------+---------+
* | | 0 | ... | n |
* +======+=====+=========+=========+=========+
* | 31:16| code| | | |
* +------+-----+ | | |
* | 15:5|flags| | | |
* +------+-----+ | | |
* | 4:0| len| | | |
* +------+-----+---------+---------+---------+
*
* ^-------------len-------------^
*
* The message header consists of:
*
* - **len**, indicates length of the message payload (in u32)
* - **code**, indicates message code
* - **flags**, holds various bits to control message handling
*/
/*
* Definition of the command transport message header (DW0)
*
* bit[4..0] message len (in dwords)
* bit[7..5] reserved
* bit[8] response (G2H only)
* bit[8] write fence to desc (H2G only)
* bit[9] write status to H2G buff (H2G only)
* bit[10] send status back via G2H (H2G only)
* bit[15..11] reserved
* bit[31..16] action code
*/
#define GUC_CT_MSG_LEN_SHIFT 0
#define GUC_CT_MSG_LEN_MASK 0x1F
#define GUC_CT_MSG_IS_RESPONSE (1 << 8)
#define GUC_CT_MSG_WRITE_FENCE_TO_DESC (1 << 8)
#define GUC_CT_MSG_WRITE_STATUS_TO_BUFF (1 << 9)
#define GUC_CT_MSG_SEND_STATUS (1 << 10)
#define GUC_CT_MSG_ACTION_SHIFT 16
#define GUC_CT_MSG_ACTION_MASK 0xFFFF
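/*
 * A minimal sketch of the head/tail arithmetic described in the
 * "CTB based communication" section above: the number of dwords the sender
 * may still write without overtaking the receiver, keeping one slot free so
 * that head == tail always means "empty". Illustrative only, not part of
 * this commit.
 */
static inline u32 guc_ctb_space_in_dwords(u32 head, u32 tail, u32 size)
{
        return (tail >= head ? size - (tail - head) : head - tail) - 1;
}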
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_COMMUNICATION_MMIO_ABI_H
#define _ABI_GUC_COMMUNICATION_MMIO_ABI_H
/**
* DOC: GuC MMIO based communication
*
 * The MMIO based communication between Host and GuC relies on special
 * hardware registers whose format can be defined by the software
 * (so called scratch registers).
 *
 * Each MMIO based message, both Host to GuC (H2G) and GuC to Host (G2H),
 * whose maximum length depends on the number of available scratch
 * registers, is written directly into those scratch registers.
 *
 * For Gen9+, there are 16 software scratch registers 0xC180-0xC1B8,
 * but no H2G command takes more than 4 parameters and the GuC firmware
 * itself uses a 4-element array to store the H2G message.
 *
 * For Gen11+, there are an additional 4 registers 0x190240-0x19024C, which
 * are preferred over the legacy ones despite the lower count.
*
* The MMIO based communication is mainly used during driver initialization
* phase to setup the `CTB based communication`_ that will be used afterwards.
*/
#define GUC_MAX_MMIO_MSG_LEN 4
/**
* DOC: MMIO HXG Message
*
* Format of the MMIO messages follows definitions of `HXG Message`_.
*
* +---+-------+--------------------------------------------------------------+
* | | Bits | Description |
* +===+=======+==============================================================+
* | 0 | 31:0 | |
* +---+-------+ |
* |...| | [Embedded `HXG Message`_] |
* +---+-------+ |
* | n | 31:0 | |
* +---+-------+--------------------------------------------------------------+
*/
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2014-2021 Intel Corporation
*/
#ifndef _ABI_GUC_ERRORS_ABI_H
#define _ABI_GUC_ERRORS_ABI_H
enum xe_guc_response_status {
XE_GUC_RESPONSE_STATUS_SUCCESS = 0x0,
XE_GUC_RESPONSE_STATUS_GENERIC_FAIL = 0xF000,
};
enum xe_guc_load_status {
XE_GUC_LOAD_STATUS_DEFAULT = 0x00,
XE_GUC_LOAD_STATUS_START = 0x01,
XE_GUC_LOAD_STATUS_ERROR_DEVID_BUILD_MISMATCH = 0x02,
XE_GUC_LOAD_STATUS_GUC_PREPROD_BUILD_MISMATCH = 0x03,
XE_GUC_LOAD_STATUS_ERROR_DEVID_INVALID_GUCTYPE = 0x04,
XE_GUC_LOAD_STATUS_GDT_DONE = 0x10,
XE_GUC_LOAD_STATUS_IDT_DONE = 0x20,
XE_GUC_LOAD_STATUS_LAPIC_DONE = 0x30,
XE_GUC_LOAD_STATUS_GUCINT_DONE = 0x40,
XE_GUC_LOAD_STATUS_DPC_READY = 0x50,
XE_GUC_LOAD_STATUS_DPC_ERROR = 0x60,
XE_GUC_LOAD_STATUS_EXCEPTION = 0x70,
XE_GUC_LOAD_STATUS_INIT_DATA_INVALID = 0x71,
XE_GUC_LOAD_STATUS_PXP_TEARDOWN_CTRL_ENABLED = 0x72,
XE_GUC_LOAD_STATUS_INVALID_INIT_DATA_RANGE_START,
XE_GUC_LOAD_STATUS_MPU_DATA_INVALID = 0x73,
XE_GUC_LOAD_STATUS_INIT_MMIO_SAVE_RESTORE_INVALID = 0x74,
XE_GUC_LOAD_STATUS_INVALID_INIT_DATA_RANGE_END,
XE_GUC_LOAD_STATUS_READY = 0xF0,
};
#endif
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DRM_XE_KUNIT_TEST) += xe_bo_test.o xe_dma_buf_test.o \
xe_migrate_test.o
// SPDX-License-Identifier: GPL-2.0 AND MIT
/*
* Copyright © 2022 Intel Corporation
*/
#include <kunit/test.h>
#include "xe_bo_evict.h"
#include "xe_pci.h"
static int ccs_test_migrate(struct xe_gt *gt, struct xe_bo *bo,
bool clear, u64 get_val, u64 assign_val,
struct kunit *test)
{
struct dma_fence *fence;
struct ttm_tt *ttm;
struct page *page;
pgoff_t ccs_page;
long timeout;
u64 *cpu_map;
int ret;
u32 offset;
/* Move bo to VRAM if not already there. */
ret = xe_bo_validate(bo, NULL, false);
if (ret) {
KUNIT_FAIL(test, "Failed to validate bo.\n");
return ret;
}
/* Optionally clear bo *and* CCS data in VRAM. */
if (clear) {
fence = xe_migrate_clear(gt->migrate, bo, bo->ttm.resource, 0);
if (IS_ERR(fence)) {
KUNIT_FAIL(test, "Failed to submit bo clear.\n");
return PTR_ERR(fence);
}
dma_fence_put(fence);
}
/* Evict to system. CCS data should be copied. */
ret = xe_bo_evict(bo, true);
if (ret) {
KUNIT_FAIL(test, "Failed to evict bo.\n");
return ret;
}
/* Sync all migration blits */
timeout = dma_resv_wait_timeout(bo->ttm.base.resv,
DMA_RESV_USAGE_KERNEL,
true,
5 * HZ);
if (timeout <= 0) {
KUNIT_FAIL(test, "Failed to sync bo eviction.\n");
return -ETIME;
}
/*
* Bo with CCS data is now in system memory. Verify backing store
* and data integrity. Then assign for the next testing round while
* we still have a CPU map.
*/
ttm = bo->ttm.ttm;
if (!ttm || !ttm_tt_is_populated(ttm)) {
KUNIT_FAIL(test, "Bo was not in expected placement.\n");
return -EINVAL;
}
ccs_page = xe_bo_ccs_pages_start(bo) >> PAGE_SHIFT;
if (ccs_page >= ttm->num_pages) {
KUNIT_FAIL(test, "No TTM CCS pages present.\n");
return -EINVAL;
}
page = ttm->pages[ccs_page];
cpu_map = kmap_local_page(page);
/* Check first CCS value */
if (cpu_map[0] != get_val) {
KUNIT_FAIL(test,
"Expected CCS readout 0x%016llx, got 0x%016llx.\n",
(unsigned long long)get_val,
(unsigned long long)cpu_map[0]);
ret = -EINVAL;
}
/* Check last CCS value, or at least last value in page. */
offset = xe_device_ccs_bytes(gt->xe, bo->size);
offset = min_t(u32, offset, PAGE_SIZE) / sizeof(u64) - 1;
if (cpu_map[offset] != get_val) {
KUNIT_FAIL(test,
"Expected CCS readout 0x%016llx, got 0x%016llx.\n",
(unsigned long long)get_val,
(unsigned long long)cpu_map[offset]);
ret = -EINVAL;
}
cpu_map[0] = assign_val;
cpu_map[offset] = assign_val;
kunmap_local(cpu_map);
return ret;
}
static void ccs_test_run_gt(struct xe_device *xe, struct xe_gt *gt,
struct kunit *test)
{
struct xe_bo *bo;
u32 vram_bit;
int ret;
/* TODO: Sanity check */
vram_bit = XE_BO_CREATE_VRAM0_BIT << gt->info.vram_id;
kunit_info(test, "Testing gt id %u vram id %u\n", gt->info.id,
gt->info.vram_id);
bo = xe_bo_create_locked(xe, NULL, NULL, SZ_1M, ttm_bo_type_device,
vram_bit);
if (IS_ERR(bo)) {
KUNIT_FAIL(test, "Failed to create bo.\n");
return;
}
kunit_info(test, "Verifying that CCS data is cleared on creation.\n");
ret = ccs_test_migrate(gt, bo, false, 0ULL, 0xdeadbeefdeadbeefULL,
test);
if (ret)
goto out_unlock;
kunit_info(test, "Verifying that CCS data survives migration.\n");
ret = ccs_test_migrate(gt, bo, false, 0xdeadbeefdeadbeefULL,
0xdeadbeefdeadbeefULL, test);
if (ret)
goto out_unlock;
kunit_info(test, "Verifying that CCS data can be properly cleared.\n");
ret = ccs_test_migrate(gt, bo, true, 0ULL, 0ULL, test);
out_unlock:
xe_bo_unlock_no_vm(bo);
xe_bo_put(bo);
}
static int ccs_test_run_device(struct xe_device *xe)
{
struct kunit *test = xe_cur_kunit();
struct xe_gt *gt;
int id;
if (!xe_device_has_flat_ccs(xe)) {
kunit_info(test, "Skipping non-flat-ccs device.\n");
return 0;
}
for_each_gt(gt, xe, id)
ccs_test_run_gt(xe, gt, test);
return 0;
}
void xe_ccs_migrate_kunit(struct kunit *test)
{
xe_call_for_each_device(ccs_test_run_device);
}
EXPORT_SYMBOL(xe_ccs_migrate_kunit);
static int evict_test_run_gt(struct xe_device *xe, struct xe_gt *gt, struct kunit *test)
{
struct xe_bo *bo, *external;
unsigned int bo_flags = XE_BO_CREATE_USER_BIT |
XE_BO_CREATE_VRAM_IF_DGFX(gt);
struct xe_vm *vm = xe_migrate_get_vm(xe->gt[0].migrate);
struct ww_acquire_ctx ww;
int err, i;
kunit_info(test, "Testing device %s gt id %u vram id %u\n",
dev_name(xe->drm.dev), gt->info.id, gt->info.vram_id);
for (i = 0; i < 2; ++i) {
xe_vm_lock(vm, &ww, 0, false);
bo = xe_bo_create(xe, NULL, vm, 0x10000, ttm_bo_type_device,
bo_flags);
xe_vm_unlock(vm, &ww);
if (IS_ERR(bo)) {
KUNIT_FAIL(test, "bo create err=%pe\n", bo);
break;
}
external = xe_bo_create(xe, NULL, NULL, 0x10000,
ttm_bo_type_device, bo_flags);
if (IS_ERR(external)) {
KUNIT_FAIL(test, "external bo create err=%pe\n", external);
goto cleanup_bo;
}
xe_bo_lock(external, &ww, 0, false);
err = xe_bo_pin_external(external);
xe_bo_unlock(external, &ww);
if (err) {
KUNIT_FAIL(test, "external bo pin err=%pe\n",
ERR_PTR(err));
goto cleanup_external;
}
err = xe_bo_evict_all(xe);
if (err) {
KUNIT_FAIL(test, "evict err=%pe\n", ERR_PTR(err));
goto cleanup_all;
}
err = xe_bo_restore_kernel(xe);
if (err) {
KUNIT_FAIL(test, "restore kernel err=%pe\n",
ERR_PTR(err));
goto cleanup_all;
}
err = xe_bo_restore_user(xe);
if (err) {
KUNIT_FAIL(test, "restore user err=%pe\n", ERR_PTR(err));
goto cleanup_all;
}
if (!xe_bo_is_vram(external)) {
KUNIT_FAIL(test, "external bo is not vram\n");
err = -EPROTO;
goto cleanup_all;
}
if (xe_bo_is_vram(bo)) {
KUNIT_FAIL(test, "bo is vram\n");
err = -EPROTO;
goto cleanup_all;
}
if (i) {
down_read(&vm->lock);
xe_vm_lock(vm, &ww, 0, false);
err = xe_bo_validate(bo, bo->vm, false);
xe_vm_unlock(vm, &ww);
up_read(&vm->lock);
if (err) {
KUNIT_FAIL(test, "bo valid err=%pe\n",
ERR_PTR(err));
goto cleanup_all;
}
xe_bo_lock(external, &ww, 0, false);
err = xe_bo_validate(external, NULL, false);
xe_bo_unlock(external, &ww);
if (err) {
KUNIT_FAIL(test, "external bo valid err=%pe\n",
ERR_PTR(err));
goto cleanup_all;
}
}
xe_bo_lock(external, &ww, 0, false);
xe_bo_unpin_external(external);
xe_bo_unlock(external, &ww);
xe_bo_put(external);
xe_bo_put(bo);
continue;
cleanup_all:
xe_bo_lock(external, &ww, 0, false);
xe_bo_unpin_external(external);
xe_bo_unlock(external, &ww);
cleanup_external:
xe_bo_put(external);
cleanup_bo:
xe_bo_put(bo);
break;
}
xe_vm_put(vm);
return 0;
}
static int evict_test_run_device(struct xe_device *xe)
{
struct kunit *test = xe_cur_kunit();
struct xe_gt *gt;
int id;
if (!IS_DGFX(xe)) {
kunit_info(test, "Skipping non-discrete device %s.\n",
dev_name(xe->drm.dev));
return 0;
}
for_each_gt(gt, xe, id)
evict_test_run_gt(xe, gt, test);
return 0;
}
void xe_bo_evict_kunit(struct kunit *test)
{
xe_call_for_each_device(evict_test_run_device);
}
EXPORT_SYMBOL(xe_bo_evict_kunit);
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright © 2022 Intel Corporation
*/
#include <kunit/test.h>
void xe_ccs_migrate_kunit(struct kunit *test);
void xe_bo_evict_kunit(struct kunit *test);
static struct kunit_case xe_bo_tests[] = {
KUNIT_CASE(xe_ccs_migrate_kunit),
KUNIT_CASE(xe_bo_evict_kunit),
{}
};
static struct kunit_suite xe_bo_test_suite = {
.name = "xe_bo",
.test_cases = xe_bo_tests,
};
kunit_test_suite(xe_bo_test_suite);
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL");
// SPDX-License-Identifier: GPL-2.0 AND MIT
/*
* Copyright © 2022 Intel Corporation
*/
#include <kunit/test.h>
#include "xe_pci.h"
static bool p2p_enabled(struct dma_buf_test_params *params)
{
return IS_ENABLED(CONFIG_PCI_P2PDMA) && params->attach_ops &&
params->attach_ops->allow_peer2peer;
}
static bool is_dynamic(struct dma_buf_test_params *params)
{
return IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY) && params->attach_ops &&
params->attach_ops->move_notify;
}
static void check_residency(struct kunit *test, struct xe_bo *exported,
struct xe_bo *imported, struct dma_buf *dmabuf)
{
struct dma_buf_test_params *params = to_dma_buf_test_params(test->priv);
u32 mem_type;
int ret;
xe_bo_assert_held(exported);
xe_bo_assert_held(imported);
mem_type = XE_PL_VRAM0;
if (!(params->mem_mask & XE_BO_CREATE_VRAM0_BIT))
/* No VRAM allowed */
mem_type = XE_PL_TT;
else if (params->force_different_devices && !p2p_enabled(params))
/* No P2P */
mem_type = XE_PL_TT;
else if (params->force_different_devices && !is_dynamic(params) &&
(params->mem_mask & XE_BO_CREATE_SYSTEM_BIT))
/* Pin migrated to TT */
mem_type = XE_PL_TT;
if (!xe_bo_is_mem_type(exported, mem_type)) {
KUNIT_FAIL(test, "Exported bo was not in expected memory type.\n");
return;
}
if (xe_bo_is_pinned(exported))
return;
/*
* Evict exporter. Note that the gem object dma_buf member isn't
* set from xe_gem_prime_export(), and it's needed for the move_notify()
* functionality, so hack that up here. Evicting the exported bo will
* evict also the imported bo through the move_notify() functionality if
* importer is on a different device. If they're on the same device,
* the exporter and the importer should be the same bo.
*/
swap(exported->ttm.base.dma_buf, dmabuf);
ret = xe_bo_evict(exported, true);
swap(exported->ttm.base.dma_buf, dmabuf);
if (ret) {
if (ret != -EINTR && ret != -ERESTARTSYS)
KUNIT_FAIL(test, "Evicting exporter failed with err=%d.\n",
ret);
return;
}
/* Verify that also importer has been evicted to SYSTEM */
if (!xe_bo_is_mem_type(imported, XE_PL_SYSTEM)) {
KUNIT_FAIL(test, "Importer wasn't properly evicted.\n");
return;
}
/* Re-validate the importer. This should move also exporter in. */
ret = xe_bo_validate(imported, NULL, false);
if (ret) {
if (ret != -EINTR && ret != -ERESTARTSYS)
KUNIT_FAIL(test, "Validating importer failed with err=%d.\n",
ret);
return;
}
/*
* If on different devices, the exporter is kept in system if
 * possible, saving a migration step as the transfer is likely
 * just as fast from system memory.
*/
if (params->force_different_devices &&
params->mem_mask & XE_BO_CREATE_SYSTEM_BIT)
KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, XE_PL_TT));
else
KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
if (params->force_different_devices)
KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(imported, XE_PL_TT));
else
KUNIT_EXPECT_TRUE(test, exported == imported);
}
static void xe_test_dmabuf_import_same_driver(struct xe_device *xe)
{
struct kunit *test = xe_cur_kunit();
struct dma_buf_test_params *params = to_dma_buf_test_params(test->priv);
struct drm_gem_object *import;
struct dma_buf *dmabuf;
struct xe_bo *bo;
/* No VRAM on this device? */
if (!ttm_manager_type(&xe->ttm, XE_PL_VRAM0) &&
(params->mem_mask & XE_BO_CREATE_VRAM0_BIT))
return;
kunit_info(test, "running %s\n", __func__);
bo = xe_bo_create(xe, NULL, NULL, PAGE_SIZE, ttm_bo_type_device,
XE_BO_CREATE_USER_BIT | params->mem_mask);
if (IS_ERR(bo)) {
KUNIT_FAIL(test, "xe_bo_create() failed with err=%ld\n",
PTR_ERR(bo));
return;
}
dmabuf = xe_gem_prime_export(&bo->ttm.base, 0);
if (IS_ERR(dmabuf)) {
KUNIT_FAIL(test, "xe_gem_prime_export() failed with err=%ld\n",
PTR_ERR(dmabuf));
goto out;
}
import = xe_gem_prime_import(&xe->drm, dmabuf);
if (!IS_ERR(import)) {
struct xe_bo *import_bo = gem_to_xe_bo(import);
/*
* Did import succeed when it shouldn't due to lack of p2p support?
*/
if (params->force_different_devices &&
!p2p_enabled(params) &&
!(params->mem_mask & XE_BO_CREATE_SYSTEM_BIT)) {
KUNIT_FAIL(test,
"xe_gem_prime_import() succeeded when it shouldn't have\n");
} else {
int err;
/* Is everything where we expect it to be? */
xe_bo_lock_no_vm(import_bo, NULL);
err = xe_bo_validate(import_bo, NULL, false);
if (err && err != -EINTR && err != -ERESTARTSYS)
KUNIT_FAIL(test,
"xe_bo_validate() failed with err=%d\n", err);
check_residency(test, bo, import_bo, dmabuf);
xe_bo_unlock_no_vm(import_bo);
}
drm_gem_object_put(import);
} else if (PTR_ERR(import) != -EOPNOTSUPP) {
/* Unexpected error code. */
KUNIT_FAIL(test,
"xe_gem_prime_import failed with the wrong err=%ld\n",
PTR_ERR(import));
} else if (!params->force_different_devices ||
p2p_enabled(params) ||
(params->mem_mask & XE_BO_CREATE_SYSTEM_BIT)) {
/* Shouldn't fail if we can reuse same bo, use p2p or use system */
KUNIT_FAIL(test, "dynamic p2p attachment failed with err=%ld\n",
PTR_ERR(import));
}
dma_buf_put(dmabuf);
out:
drm_gem_object_put(&bo->ttm.base);
}
static const struct dma_buf_attach_ops nop2p_attach_ops = {
.allow_peer2peer = false,
.move_notify = xe_dma_buf_move_notify
};
/*
* We test the implementation with bos of different residency and with
* importers with different capabilities; some lacking p2p support and some
* lacking dynamic capabilities (attach_ops == NULL). We also fake
* different devices avoiding the import shortcut that just reuses the same
* gem object.
*/
static const struct dma_buf_test_params test_params[] = {
{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &xe_dma_buf_attach_ops},
{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &xe_dma_buf_attach_ops,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &nop2p_attach_ops},
{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &nop2p_attach_ops,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_VRAM0_BIT},
{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
.attach_ops = &xe_dma_buf_attach_ops},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
.attach_ops = &xe_dma_buf_attach_ops,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
.attach_ops = &nop2p_attach_ops},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
.attach_ops = &nop2p_attach_ops,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &xe_dma_buf_attach_ops},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &xe_dma_buf_attach_ops,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &nop2p_attach_ops},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
.attach_ops = &nop2p_attach_ops,
.force_different_devices = true},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT},
{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
.force_different_devices = true},
{}
};
static int dma_buf_run_device(struct xe_device *xe)
{
const struct dma_buf_test_params *params;
struct kunit *test = xe_cur_kunit();
for (params = test_params; params->mem_mask; ++params) {
struct dma_buf_test_params p = *params;
p.base.id = XE_TEST_LIVE_DMA_BUF;
test->priv = &p;
xe_test_dmabuf_import_same_driver(xe);
}
/* A non-zero return would halt iteration over driver devices */
return 0;
}
void xe_dma_buf_kunit(struct kunit *test)
{
xe_call_for_each_device(dma_buf_run_device);
}
EXPORT_SYMBOL(xe_dma_buf_kunit);
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright © 2022 Intel Corporation
*/
#include <kunit/test.h>
void xe_dma_buf_kunit(struct kunit *test);
static struct kunit_case xe_dma_buf_tests[] = {
KUNIT_CASE(xe_dma_buf_kunit),
{}
};
static struct kunit_suite xe_dma_buf_test_suite = {
.name = "xe_dma_buf",
.test_cases = xe_dma_buf_tests,
};
kunit_test_suite(xe_dma_buf_test_suite);
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL");
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright © 2022 Intel Corporation
*/
#include <kunit/test.h>
void xe_migrate_sanity_kunit(struct kunit *test);
static struct kunit_case xe_migrate_tests[] = {
KUNIT_CASE(xe_migrate_sanity_kunit),
{}
};
static struct kunit_suite xe_migrate_test_suite = {
.name = "xe_migrate",
.test_cases = xe_migrate_tests,
};
kunit_test_suite(xe_migrate_test_suite);
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL");
/* SPDX-License-Identifier: GPL-2.0 AND MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef __XE_TEST_H__
#define __XE_TEST_H__
#include <linux/types.h>
#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
#include <linux/sched.h>
#include <kunit/test.h>
/*
* Each test that provides a kunit private test structure, place a test id
* here and point the kunit->priv to an embedded struct xe_test_priv.
*/
enum xe_test_priv_id {
XE_TEST_LIVE_DMA_BUF,
};
/**
* struct xe_test_priv - Base class for test private info
* @id: enum xe_test_priv_id to identify the subclass.
*/
struct xe_test_priv {
enum xe_test_priv_id id;
};
#define XE_TEST_DECLARE(x) x
#define XE_TEST_ONLY(x) unlikely(x)
#define XE_TEST_EXPORT
#define xe_cur_kunit() current->kunit_test
/**
* xe_cur_kunit_priv - Obtain the struct xe_test_priv pointed to by
* current->kunit->priv if it exists and is embedded in the expected subclass.
* @id: Id of the expected subclass.
*
 * Return: NULL if the process is not a kunit test, or if the current
 * kunit->priv pointer does not point to an object of the expected
 * subclass. A pointer to the embedded struct xe_test_priv otherwise.
*/
static inline struct xe_test_priv *
xe_cur_kunit_priv(enum xe_test_priv_id id)
{
struct xe_test_priv *priv;
if (!xe_cur_kunit())
return NULL;
priv = xe_cur_kunit()->priv;
return priv->id == id ? priv : NULL;
}
#else /* if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST) */
#define XE_TEST_DECLARE(x)
#define XE_TEST_ONLY(x) 0
#define XE_TEST_EXPORT static
#define xe_cur_kunit() NULL
#define xe_cur_kunit_priv(_id) NULL
#endif
#endif
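
A hedged usage sketch of the machinery above, assuming CONFIG_DRM_XE_KUNIT_TEST is enabled (this is not code from this commit): a test embeds struct xe_test_priv as the first member of its private data and points kunit->priv at it, and driver code retrieves it through xe_cur_kunit_priv(). The names my_test_priv, XE_TEST_LIVE_MY_FEATURE and the two functions below are hypothetical; compare the real dma-buf test earlier in this patch, which uses XE_TEST_LIVE_DMA_BUF in the same way.

struct my_test_priv {
        struct xe_test_priv base;       /* must be embedded; id identifies the subclass */
        bool inject_failure;
};

/* Test side: publish the private data for the duration of the test. */
static void my_feature_kunit(struct kunit *test)
{
        struct my_test_priv p = { .base.id = XE_TEST_LIVE_MY_FEATURE };

        test->priv = &p;
        /* ... exercise the driver ... */
}

/* Driver side: only ever true while the matching kunit test is running. */
static bool my_feature_should_inject_failure(void)
{
        struct xe_test_priv *base = xe_cur_kunit_priv(XE_TEST_LIVE_MY_FEATURE);

        return XE_TEST_ONLY(base) &&
               container_of(base, struct my_test_priv, base)->inject_failure;
}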
// SPDX-License-Identifier: MIT
/*
* Copyright © 2022 Intel Corporation
*/
#include "xe_bb.h"
#include "xe_sa.h"
#include "xe_device.h"
#include "xe_engine_types.h"
#include "xe_hw_fence.h"
#include "xe_sched_job.h"
#include "xe_vm_types.h"
#include "gt/intel_gpu_commands.h"
struct xe_bb *xe_bb_new(struct xe_gt *gt, u32 dwords, bool usm)
{
struct xe_bb *bb = kmalloc(sizeof(*bb), GFP_KERNEL);
int err;
if (!bb)
return ERR_PTR(-ENOMEM);
bb->bo = xe_sa_bo_new(!usm ? &gt->kernel_bb_pool :
&gt->usm.bb_pool, 4 * dwords + 4);
if (IS_ERR(bb->bo)) {
err = PTR_ERR(bb->bo);
goto err;
}
bb->cs = xe_sa_bo_cpu_addr(bb->bo);
bb->len = 0;
return bb;
err:
kfree(bb);
return ERR_PTR(err);
}
static struct xe_sched_job *
__xe_bb_create_job(struct xe_engine *kernel_eng, struct xe_bb *bb, u64 *addr)
{
u32 size = drm_suballoc_size(bb->bo);
XE_BUG_ON((bb->len * 4 + 1) > size);
bb->cs[bb->len++] = MI_BATCH_BUFFER_END;
xe_sa_bo_flush_write(bb->bo);
return xe_sched_job_create(kernel_eng, addr);
}
struct xe_sched_job *xe_bb_create_wa_job(struct xe_engine *wa_eng,
struct xe_bb *bb, u64 batch_base_ofs)
{
u64 addr = batch_base_ofs + drm_suballoc_soffset(bb->bo);
XE_BUG_ON(!(wa_eng->vm->flags & XE_VM_FLAG_MIGRATION));
return __xe_bb_create_job(wa_eng, bb, &addr);
}
struct xe_sched_job *xe_bb_create_migration_job(struct xe_engine *kernel_eng,
struct xe_bb *bb,
u64 batch_base_ofs,
u32 second_idx)
{
u64 addr[2] = {
batch_base_ofs + drm_suballoc_soffset(bb->bo),
batch_base_ofs + drm_suballoc_soffset(bb->bo) +
4 * second_idx,
};
BUG_ON(second_idx > bb->len);
BUG_ON(!(kernel_eng->vm->flags & XE_VM_FLAG_MIGRATION));
return __xe_bb_create_job(kernel_eng, bb, addr);
}
struct xe_sched_job *xe_bb_create_job(struct xe_engine *kernel_eng,
struct xe_bb *bb)
{
u64 addr = xe_sa_bo_gpu_addr(bb->bo);
BUG_ON(kernel_eng->vm && kernel_eng->vm->flags & XE_VM_FLAG_MIGRATION);
return __xe_bb_create_job(kernel_eng, bb, &addr);
}
void xe_bb_free(struct xe_bb *bb, struct dma_fence *fence)
{
if (!bb)
return;
xe_sa_bo_free(bb->bo, fence);
kfree(bb);
}
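
As a usage illustration of the API above (a hedged sketch, not code from this commit): allocate a batch buffer sized in dwords, write commands into bb->cs while advancing bb->len, wrap it in a job with xe_bb_create_job() (which appends MI_BATCH_BUFFER_END), and free the bb against the job's fence once the job has been submitted. The helper name emit_nops() and its parameters are invented for this example; submission of the job itself is omitted.

static struct xe_sched_job *emit_nops(struct xe_gt *gt, struct xe_engine *e,
                                      u32 count)
{
        struct xe_sched_job *job;
        struct xe_bb *bb;
        u32 i;

        /* count dwords of payload; xe_bb_new() reserves room for the BB end */
        bb = xe_bb_new(gt, count + 1, false);
        if (IS_ERR(bb))
                return ERR_CAST(bb);

        for (i = 0; i < count; i++)
                bb->cs[bb->len++] = MI_NOOP;

        job = xe_bb_create_job(e, bb);  /* appends MI_BATCH_BUFFER_END */
        if (IS_ERR(job)) {
                xe_bb_free(bb, NULL);
                return job;
        }

        /* ...submit the job, then free against its fence: xe_bb_free(bb, fence); */
        return job;
}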
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _XE_BB_H_
#define _XE_BB_H_
#include "xe_bb_types.h"
struct dma_fence;
struct xe_gt;
struct xe_engine;
struct xe_sched_job;
struct xe_bb *xe_bb_new(struct xe_gt *gt, u32 size, bool usm);
struct xe_sched_job *xe_bb_create_job(struct xe_engine *kernel_eng,
struct xe_bb *bb);
struct xe_sched_job *xe_bb_create_migration_job(struct xe_engine *kernel_eng,
struct xe_bb *bb, u64 batch_ofs,
u32 second_idx);
struct xe_sched_job *xe_bb_create_wa_job(struct xe_engine *wa_eng,
struct xe_bb *bb, u64 batch_ofs);
void xe_bb_free(struct xe_bb *bb, struct dma_fence *fence);
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _XE_BB_TYPES_H_
#define _XE_BB_TYPES_H_
#include <linux/types.h>
struct drm_suballoc;
struct xe_bb {
struct drm_suballoc *bo;
u32 *cs;
u32 len; /* in dwords */
};
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2021 Intel Corporation
*/
#ifndef _XE_BO_H_
#define _XE_BO_H_
#include "xe_bo_types.h"
#include "xe_macros.h"
#include "xe_vm_types.h"
#define XE_DEFAULT_GTT_SIZE_MB 3072ULL /* 3GB by default */
#define XE_BO_CREATE_USER_BIT BIT(1)
#define XE_BO_CREATE_SYSTEM_BIT BIT(2)
#define XE_BO_CREATE_VRAM0_BIT BIT(3)
#define XE_BO_CREATE_VRAM1_BIT BIT(4)
#define XE_BO_CREATE_VRAM_IF_DGFX(gt) \
(IS_DGFX(gt_to_xe(gt)) ? XE_BO_CREATE_VRAM0_BIT << gt->info.vram_id : \
XE_BO_CREATE_SYSTEM_BIT)
#define XE_BO_CREATE_GGTT_BIT BIT(5)
#define XE_BO_CREATE_IGNORE_MIN_PAGE_SIZE_BIT BIT(6)
#define XE_BO_CREATE_PINNED_BIT BIT(7)
#define XE_BO_DEFER_BACKING BIT(8)
#define XE_BO_SCANOUT_BIT BIT(9)
/* this one is triggered internally only */
#define XE_BO_INTERNAL_TEST BIT(30)
#define XE_BO_INTERNAL_64K BIT(31)
#define PPAT_UNCACHED GENMASK_ULL(4, 3)
#define PPAT_CACHED_PDE 0
#define PPAT_CACHED BIT_ULL(7)
#define PPAT_DISPLAY_ELLC BIT_ULL(4)
#define GEN8_PTE_SHIFT 12
#define GEN8_PAGE_SIZE (1 << GEN8_PTE_SHIFT)
#define GEN8_PTE_MASK (GEN8_PAGE_SIZE - 1)
#define GEN8_PDE_SHIFT (GEN8_PTE_SHIFT - 3)
#define GEN8_PDES (1 << GEN8_PDE_SHIFT)
#define GEN8_PDE_MASK (GEN8_PDES - 1)
#define GEN8_64K_PTE_SHIFT 16
#define GEN8_64K_PAGE_SIZE (1 << GEN8_64K_PTE_SHIFT)
#define GEN8_64K_PTE_MASK (GEN8_64K_PAGE_SIZE - 1)
#define GEN8_64K_PDE_MASK (GEN8_PDE_MASK >> 4)
#define GEN8_PDE_PS_2M BIT_ULL(7)
#define GEN8_PDPE_PS_1G BIT_ULL(7)
#define GEN8_PDE_IPS_64K BIT_ULL(11)
#define GEN12_GGTT_PTE_LM BIT_ULL(1)
#define GEN12_USM_PPGTT_PTE_AE BIT_ULL(10)
#define GEN12_PPGTT_PTE_LM BIT_ULL(11)
#define GEN12_PDE_64K BIT_ULL(6)
#define GEN12_PTE_PS64 BIT_ULL(8)
#define GEN8_PAGE_PRESENT BIT_ULL(0)
#define GEN8_PAGE_RW BIT_ULL(1)
#define PTE_READ_ONLY BIT(0)
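/*
 * A minimal sketch (not part of this header) of how the shifts and masks
 * above relate to a GPU virtual address in the standard 4 KiB page layout:
 * bits [20:12] select the PTE within a page table and bits [29:21] the PDE
 * within a page directory. The helper names are illustrative only.
 */
static inline u32 gen8_pte_index(u64 addr)
{
        return (addr >> GEN8_PTE_SHIFT) & GEN8_PDE_MASK;
}

static inline u32 gen8_pde_index(u64 addr)
{
        return (addr >> (GEN8_PTE_SHIFT + GEN8_PDE_SHIFT)) & GEN8_PDE_MASK;
}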
#define XE_PL_SYSTEM TTM_PL_SYSTEM
#define XE_PL_TT TTM_PL_TT
#define XE_PL_VRAM0 TTM_PL_VRAM
#define XE_PL_VRAM1 (XE_PL_VRAM0 + 1)
#define XE_BO_PROPS_INVALID (-1)
struct sg_table;
struct xe_bo *xe_bo_alloc(void);
void xe_bo_free(struct xe_bo *bo);
struct xe_bo *__xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
struct xe_gt *gt, struct dma_resv *resv,
size_t size, enum ttm_bo_type type,
u32 flags);
struct xe_bo *xe_bo_create_locked(struct xe_device *xe, struct xe_gt *gt,
struct xe_vm *vm, size_t size,
enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_bo_create(struct xe_device *xe, struct xe_gt *gt,
struct xe_vm *vm, size_t size,
enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_bo_create_pin_map(struct xe_device *xe, struct xe_gt *gt,
struct xe_vm *vm, size_t size,
enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_bo_create_from_data(struct xe_device *xe, struct xe_gt *gt,
const void *data, size_t size,
enum ttm_bo_type type, u32 flags);
int xe_bo_placement_for_flags(struct xe_device *xe, struct xe_bo *bo,
u32 bo_flags);
static inline struct xe_bo *ttm_to_xe_bo(const struct ttm_buffer_object *bo)
{
return container_of(bo, struct xe_bo, ttm);
}
static inline struct xe_bo *gem_to_xe_bo(const struct drm_gem_object *obj)
{
return container_of(obj, struct xe_bo, ttm.base);
}
#define xe_bo_device(bo) ttm_to_xe_device((bo)->ttm.bdev)
static inline struct xe_bo *xe_bo_get(struct xe_bo *bo)
{
if (bo)
drm_gem_object_get(&bo->ttm.base);
return bo;
}
static inline void xe_bo_put(struct xe_bo *bo)
{
if (bo)
drm_gem_object_put(&bo->ttm.base);
}
static inline void xe_bo_assert_held(struct xe_bo *bo)
{
if (bo)
dma_resv_assert_held((bo)->ttm.base.resv);
}
int xe_bo_lock(struct xe_bo *bo, struct ww_acquire_ctx *ww,
int num_resv, bool intr);
void xe_bo_unlock(struct xe_bo *bo, struct ww_acquire_ctx *ww);
static inline void xe_bo_unlock_vm_held(struct xe_bo *bo)
{
if (bo) {
XE_BUG_ON(bo->vm && bo->ttm.base.resv != &bo->vm->resv);
if (bo->vm)
xe_vm_assert_held(bo->vm);
else
dma_resv_unlock(bo->ttm.base.resv);
}
}
static inline void xe_bo_lock_no_vm(struct xe_bo *bo,
struct ww_acquire_ctx *ctx)
{
if (bo) {
XE_BUG_ON(bo->vm || (bo->ttm.type != ttm_bo_type_sg &&
bo->ttm.base.resv != &bo->ttm.base._resv));
dma_resv_lock(bo->ttm.base.resv, ctx);
}
}
static inline void xe_bo_unlock_no_vm(struct xe_bo *bo)
{
if (bo) {
XE_BUG_ON(bo->vm || (bo->ttm.type != ttm_bo_type_sg &&
bo->ttm.base.resv != &bo->ttm.base._resv));
dma_resv_unlock(bo->ttm.base.resv);
}
}
int xe_bo_pin_external(struct xe_bo *bo);
int xe_bo_pin(struct xe_bo *bo);
void xe_bo_unpin_external(struct xe_bo *bo);
void xe_bo_unpin(struct xe_bo *bo);
int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict);
static inline bool xe_bo_is_pinned(struct xe_bo *bo)
{
return bo->ttm.pin_count;
}
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
if (likely(bo)) {
xe_bo_lock_no_vm(bo, NULL);
xe_bo_unpin(bo);
xe_bo_unlock_no_vm(bo);
xe_bo_put(bo);
}
}
bool xe_bo_is_xe_bo(struct ttm_buffer_object *bo);
dma_addr_t xe_bo_addr(struct xe_bo *bo, u64 offset,
size_t page_size, bool *is_lmem);
static inline dma_addr_t
xe_bo_main_addr(struct xe_bo *bo, size_t page_size)
{
bool is_lmem;
return xe_bo_addr(bo, 0, page_size, &is_lmem);
}
static inline u32
xe_bo_ggtt_addr(struct xe_bo *bo)
{
XE_BUG_ON(bo->ggtt_node.size > bo->size);
XE_BUG_ON(bo->ggtt_node.start + bo->ggtt_node.size > (1ull << 32));
return bo->ggtt_node.start;
}
int xe_bo_vmap(struct xe_bo *bo);
void xe_bo_vunmap(struct xe_bo *bo);
bool mem_type_is_vram(u32 mem_type);
bool xe_bo_is_vram(struct xe_bo *bo);
bool xe_bo_can_migrate(struct xe_bo *bo, u32 mem_type);
int xe_bo_migrate(struct xe_bo *bo, u32 mem_type);
int xe_bo_evict(struct xe_bo *bo, bool force_alloc);
extern struct ttm_device_funcs xe_ttm_funcs;
int xe_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_bo_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
bool xe_bo_needs_ccs_pages(struct xe_bo *bo);
static inline size_t xe_bo_ccs_pages_start(struct xe_bo *bo)
{
return PAGE_ALIGN(bo->ttm.base.size);
}
void __xe_bo_release_dummy(struct kref *kref);
/**
* xe_bo_put_deferred() - Put a buffer object with delayed final freeing
* @bo: The bo to put.
* @deferred: List to which to add the buffer object if we cannot put, or
* NULL if the function is to put unconditionally.
*
* Since the final freeing of an object includes both sleeping and (!)
* memory allocation in the dma_resv individualization, it's not ok
* to put an object from atomic context nor from within a held lock
* tainted by reclaim. In such situations we want to defer the final
* freeing until we've exited the restricting context, or in the worst
* case to a workqueue.
* This function either puts the object if possible without the refcount
* reaching zero, or adds it to the @deferred list if that was not possible.
* The caller needs to follow up with a call to xe_bo_put_commit() to actually
* put the bo iff this function returns true. It's safe to always
* follow up with a call to xe_bo_put_commit().
* TODO: It's TTM that is the villain here. Perhaps TTM should add an
* interface like this.
*
 * Return: true if @bo was the first object put on the @deferred list,
* false otherwise.
*/
static inline bool
xe_bo_put_deferred(struct xe_bo *bo, struct llist_head *deferred)
{
if (!deferred) {
xe_bo_put(bo);
return false;
}
if (!kref_put(&bo->ttm.base.refcount, __xe_bo_release_dummy))
return false;
return llist_add(&bo->freed, deferred);
}
void xe_bo_put_commit(struct llist_head *deferred);
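/*
 * A hedged usage sketch (not part of this header): collect puts on a local
 * llist while in a restricted context, then commit them afterwards. The
 * function and variable names are illustrative only; LLIST_HEAD() comes from
 * <linux/llist.h>.
 */
static inline void example_put_bos_deferred(struct xe_bo **bos, int count)
{
        LLIST_HEAD(deferred);
        int i;

        /* Safe even from atomic context or under reclaim-tainted locks. */
        for (i = 0; i < count; i++)
                xe_bo_put_deferred(bos[i], &deferred);

        /* Only call this once the restricted context has been left. */
        xe_bo_put_commit(&deferred);
}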
struct sg_table *xe_bo_get_sg(struct xe_bo *bo);
#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
/**
* xe_bo_is_mem_type - Whether the bo currently resides in the given
* TTM memory type
* @bo: The bo to check.
* @mem_type: The TTM memory type.
*
* Return: true iff the bo resides in @mem_type, false otherwise.
*/
static inline bool xe_bo_is_mem_type(struct xe_bo *bo, u32 mem_type)
{
xe_bo_assert_held(bo);
return bo->ttm.resource->mem_type == mem_type;
}
#endif
#endif
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/
#ifndef _XE_DEBUGFS_H_
#define _XE_DEBUGFS_H_
struct xe_device;
void xe_debugfs_register(struct xe_device *xe);
#endif