Commit ca797d29 authored by Dave Airlie

Merge tag 'drm-intel-next-2017-11-17-1' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

More change sets for 4.16:

- Many improvements for selftests and other igt tests (Chris)
- Forcewake with PUNIT->PMIC bus fixes and robustness (Hans)
- Define an engine class for uABI (Tvrtko)
- Context switch fixes and improvements (Chris)
- GT powersavings and power gating simplification and fixes (Chris)
- Other general driver clean-ups (Chris, Lucas, Ville)
- Removing old, useless and/or bad workarounds (Chris, Oscar, Radhakrishna)
- IPS, pipe config, etc in preparation for another Fast Boot attempt (Maarten)
- OA perf fixes and support to Coffee Lake and Cannonlake (Lionel)
- Fixes around GPU fault registers (Michel)
- GEM Proxy (Tina)
- Refactor of Geminilake and Cannonlake plane color handling (James)
- Generalize transcoder loop (Mika Kahola)
- New HW Workaround for Cannonlake and Geminilake (Rodrigo)
- Resume GuC before using GEM (Chris)
- Stolen Memory handling improvements (Ville)
- Initialize entry in PPAT for older compilers (Chris)
- Other fixes and robustness improvements on execbuf (Chris)
- Improve logs of GEM_BUG_ON (Mika Kuoppala)
- Rework with massive rename of GuC functions and files (Sagar)
- Don't sanitize frame start delay if pipe is off (Ville)
- Cannonlake clock fixes (Rodrigo)
- Cannonlake HDMI 2.0 support (Rodrigo)
- Add a GuC doorbells selftest (Michel)
- Add might_sleep() check to our wait_for() (Chris)

Many GVT changes for 4.16:

- CSB HWSP update support (Weinan)
- GVT debug helpers, dyndbg and debugfs (Chuanxiao, Shuo)
- fully virtualized opregion (Xiaolin)
- VM health check for sane fallback (Fred)
- workload submission code refactor for future enabling (Zhi)
- Updated repo URL in MAINTAINERS (Zhenyu)
- many other misc fixes

* tag 'drm-intel-next-2017-11-17-1' of git://anongit.freedesktop.org/drm/drm-intel: (260 commits)
  drm/i915: Update DRIVER_DATE to 20171117
  drm/i915: Add a policy note for removing workarounds
  drm/i915/selftests: Report ENOMEM clearly for an allocation failure
  Revert "drm/i915: Display WA #1133 WaFbcSkipSegments:cnl, glk"
  drm/i915: Calculate g4x intermediate watermarks correctly
  drm/i915: Calculate vlv/chv intermediate watermarks correctly, v3.
  drm/i915: Pass crtc_state to ips toggle functions, v2
  drm/i915: Pass idle crtc_state to intel_dp_sink_crc
  drm/i915: Enable FIFO underrun reporting after initial fastset, v4.
  drm/i915: Mark the userptr invalidate workqueue as WQ_MEM_RECLAIM
  drm/i915: Add might_sleep() check to wait_for()
  drm/i915/selftests: Add a GuC doorbells selftest
  drm/i915/cnl: Extend HDMI 2.0 support to CNL.
  drm/i915/cnl: Simplify dco_fraction calculation.
  drm/i915/cnl: Don't blindly replace qdiv.
  drm/i915/cnl: Fix wrpll math for higher freqs.
  drm/i915/cnl: Fix, simplify and unify wrpll variable sizes.
  drm/i915/cnl: Remove useless conversion.
  drm/i915/cnl: Remove spurious central_freq.
  drm/i915/selftests: exercise_ggtt may have nothing to do
  ...
parents 2c1c55cb 010d118c
@@ -350,10 +350,10 @@ GuC-specific firmware loader
GuC-based command submission
----------------------------
-.. kernel-doc:: drivers/gpu/drm/i915/i915_guc_submission.c
+.. kernel-doc:: drivers/gpu/drm/i915/intel_guc_submission.c
:doc: GuC-based command submission
-.. kernel-doc:: drivers/gpu/drm/i915/i915_guc_submission.c
+.. kernel-doc:: drivers/gpu/drm/i915/intel_guc_submission.c
:internal:
GuC Firmware Layout
......
@@ -7030,7 +7030,7 @@ M: Zhi Wang <zhi.a.wang@intel.com>
L: intel-gvt-dev@lists.freedesktop.org
L: intel-gfx@lists.freedesktop.org
W: https://01.org/igvt-g
-T: git https://github.com/01org/gvt-linux.git
+T: git https://github.com/intel/gvt-linux.git
S: Supported
F: drivers/gpu/drm/i915/gvt/
......
@@ -146,6 +146,18 @@ int iosf_mbi_register_pmic_bus_access_notifier(struct notifier_block *nb);
*/
int iosf_mbi_unregister_pmic_bus_access_notifier(struct notifier_block *nb);
/**
* iosf_mbi_unregister_pmic_bus_access_notifier_unlocked - Unregister PMIC bus
* notifier, unlocked
*
* Like iosf_mbi_unregister_pmic_bus_access_notifier(), but for use when the
* caller has already called iosf_mbi_punit_acquire() itself.
*
* @nb: notifier_block to unregister
*/
int iosf_mbi_unregister_pmic_bus_access_notifier_unlocked(
struct notifier_block *nb);
/**
* iosf_mbi_call_pmic_bus_access_notifier_chain - Call PMIC bus notifier chain
*
@@ -154,6 +166,11 @@ int iosf_mbi_unregister_pmic_bus_access_notifier(struct notifier_block *nb);
*/
int iosf_mbi_call_pmic_bus_access_notifier_chain(unsigned long val, void *v);
/**
* iosf_mbi_assert_punit_acquired - Assert that the P-Unit has been acquired.
*/
void iosf_mbi_assert_punit_acquired(void);
#else /* CONFIG_IOSF_MBI is not enabled */
static inline
bool iosf_mbi_available(void)
@@ -197,12 +214,20 @@ int iosf_mbi_unregister_pmic_bus_access_notifier(struct notifier_block *nb)
return 0;
}
static inline int
iosf_mbi_unregister_pmic_bus_access_notifier_unlocked(struct notifier_block *nb)
{
return 0;
}
static inline
int iosf_mbi_call_pmic_bus_access_notifier_chain(unsigned long val, void *v)
{
return 0;
}
static inline void iosf_mbi_assert_punit_acquired(void) {}
#endif /* CONFIG_IOSF_MBI */
#endif /* IOSF_MBI_SYMS_H */
@@ -218,14 +218,23 @@ int iosf_mbi_register_pmic_bus_access_notifier(struct notifier_block *nb)
}
EXPORT_SYMBOL(iosf_mbi_register_pmic_bus_access_notifier);
int iosf_mbi_unregister_pmic_bus_access_notifier_unlocked(
struct notifier_block *nb)
{
iosf_mbi_assert_punit_acquired();
return blocking_notifier_chain_unregister(
&iosf_mbi_pmic_bus_access_notifier, nb);
}
EXPORT_SYMBOL(iosf_mbi_unregister_pmic_bus_access_notifier_unlocked);
int iosf_mbi_unregister_pmic_bus_access_notifier(struct notifier_block *nb)
{
int ret;
/* Wait for the bus to go inactive before unregistering */
mutex_lock(&iosf_mbi_punit_mutex);
-ret = blocking_notifier_chain_unregister(
-&iosf_mbi_pmic_bus_access_notifier, nb);
+ret = iosf_mbi_unregister_pmic_bus_access_notifier_unlocked(nb);
mutex_unlock(&iosf_mbi_punit_mutex);
return ret;
@@ -239,6 +248,12 @@ int iosf_mbi_call_pmic_bus_access_notifier_chain(unsigned long val, void *v)
}
EXPORT_SYMBOL(iosf_mbi_call_pmic_bus_access_notifier_chain);
void iosf_mbi_assert_punit_acquired(void)
{
WARN_ON(!mutex_is_locked(&iosf_mbi_punit_mutex));
}
EXPORT_SYMBOL(iosf_mbi_assert_punit_acquired);
#ifdef CONFIG_IOSF_MBI_DEBUG
static u32 dbg_mdr;
static u32 dbg_mcr;
......
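For illustration, the unlocked unregister variant added above is meant for callers that already own the P-Unit; a minimal sketch of that pattern, assuming the existing iosf_mbi_punit_acquire()/iosf_mbi_punit_release() helpers and a hypothetical notifier block:

#include <asm/iosf_mbi.h>

static struct notifier_block example_nb;	/* hypothetical notifier, for illustration only */

static void example_unregister_while_holding_punit(void)
{
	/* The caller already owns the P-Unit, so the unlocked variant is used
	 * to avoid taking iosf_mbi_punit_mutex a second time. */
	iosf_mbi_punit_acquire();
	iosf_mbi_unregister_pmic_bus_access_notifier_unlocked(&example_nb);
	iosf_mbi_punit_release();
}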
@@ -28,6 +28,7 @@ config DRM_I915_DEBUG
select SW_SYNC # signaling validation framework (igt/syncobj*)
select DRM_I915_SW_FENCE_DEBUG_OBJECTS
select DRM_I915_SELFTEST
select DRM_I915_TRACE_GEM
default n
help
Choose this option to turn on extra driver debugging that may affect
@@ -49,6 +50,19 @@ config DRM_I915_DEBUG_GEM
If in doubt, say "N".
config DRM_I915_TRACE_GEM
bool "Insert extra ftrace output from the GEM internals"
select TRACING
default n
help
Enable additional and verbose debugging output that will spam
ordinary tests, but may be vital for post-mortem debugging when
used with /proc/sys/kernel/ftrace_dump_on_oops
Recommended for driver developers only.
If in doubt, say "N".
config DRM_I915_SW_FENCE_DEBUG_OBJECTS
bool "Enable additional driver debugging for fence objects"
depends on DRM_I915
@@ -90,6 +104,20 @@ config DRM_I915_SELFTEST
If in doubt, say "N".
config DRM_I915_SELFTEST_BROKEN
bool "Enable broken and dangerous selftests"
depends on DRM_I915_SELFTEST
depends on BROKEN
default n
help
This option enables the execution of selftests that are "dangerous"
and may trigger unintended HW side-effects as they break strict
rules given in the HW specification. For science.
Recommended for masochistic driver developers only.
If in doubt, say "N".
config DRM_I915_LOW_LEVEL_TRACEPOINTS
bool "Enable low level request tracing events"
depends on DRM_I915
......
@@ -3,7 +3,26 @@
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
-subdir-ccflags-$(CONFIG_DRM_I915_WERROR) := -Werror
+# Add a set of useful warning flags and enable -Werror for CI to prevent
# trivial mistakes from creeping in. We have to do this piecemeal as we reject
# any patch that isn't warning clean, so turning on -Wall -Wextra (or W=1) we
# need to filter out dubious warnings. Still it is our interest
# to keep running locally with W=1 C=1 until we are completely clean.
#
# Note the danger in using -Wall -Wextra is that when CI updates gcc we
# will most likely get a sudden build breakage... Hopefully we will fix
# new warnings before CI updates!
subdir-ccflags-y := -Wall -Wextra
subdir-ccflags-y += $(call cc-disable-warning, unused-parameter)
subdir-ccflags-y += $(call cc-disable-warning, type-limits)
subdir-ccflags-y += $(call cc-disable-warning, missing-field-initializers)
subdir-ccflags-y += $(call cc-disable-warning, implicit-fallthrough)
subdir-ccflags-$(CONFIG_DRM_I915_WERROR) += -Werror
# Fine grained warnings disable
CFLAGS_i915_pci.o = $(call cc-disable-warning, override-init)
CFLAGS_intel_fbdev.o = $(call cc-disable-warning, override-init)
subdir-ccflags-y += \
$(call as-instr,movntdqa (%eax)$(comma)%xmm0,-DCONFIG_AS_MOVNTDQA)
@@ -64,10 +83,10 @@ i915-y += intel_uc.o \
intel_uc_fw.o \
intel_guc.o \
intel_guc_ct.o \
-intel_guc_log.o \
intel_guc_fw.o \
-intel_huc.o \
-i915_guc_submission.o
+intel_guc_log.o \
+intel_guc_submission.o \
+intel_huc.o
# autogenerated null render state
i915-y += intel_renderstate_gen6.o \
@@ -144,7 +163,9 @@ i915-y += i915_perf.o \
i915_oa_kblgt2.o \
i915_oa_kblgt3.o \
i915_oa_glk.o \
-i915_oa_cflgt2.o
+i915_oa_cflgt2.o \
+i915_oa_cflgt3.o \
+i915_oa_cnl.o
ifeq ($(CONFIG_DRM_I915_GVT),y)
i915-y += intel_gvt.o
......
@@ -2,7 +2,7 @@
GVT_DIR := gvt
GVT_SOURCE := gvt.o aperture_gm.o handlers.o vgpu.o trace_points.o firmware.o \
interrupt.o gtt.o cfg_space.o opregion.o mmio.o display.o edid.o \
-execlist.o scheduler.o sched_policy.o render.o cmd_parser.o
+execlist.o scheduler.o sched_policy.o render.o cmd_parser.o debugfs.o
ccflags-y += -I$(src) -I$(src)/$(GVT_DIR)
i915-y += $(addprefix $(GVT_DIR)/, $(GVT_SOURCE))
......
@@ -208,6 +208,20 @@ static int emulate_pci_command_write(struct intel_vgpu *vgpu,
return 0;
}
static int emulate_pci_rom_bar_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
u32 *pval = (u32 *)(vgpu_cfg_space(vgpu) + offset);
u32 new = *(u32 *)(p_data);
if ((new & PCI_ROM_ADDRESS_MASK) == PCI_ROM_ADDRESS_MASK)
/* We don't have rom, return size of 0. */
*pval = 0;
else
vgpu_pci_cfg_mem_write(vgpu, offset, p_data, bytes);
return 0;
}
static int emulate_pci_bar_write(struct intel_vgpu *vgpu, unsigned int offset,
void *p_data, unsigned int bytes)
{
@@ -300,6 +314,11 @@ int intel_vgpu_emulate_cfg_write(struct intel_vgpu *vgpu, unsigned int offset,
}
switch (rounddown(offset, 4)) {
case PCI_ROM_ADDRESS:
if (WARN_ON(!IS_ALIGNED(offset, 4)))
return -EINVAL;
return emulate_pci_rom_bar_write(vgpu, offset, p_data, bytes);
case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_5:
if (WARN_ON(!IS_ALIGNED(offset, 4)))
return -EINVAL;
@@ -375,6 +394,8 @@ void intel_vgpu_init_cfg_space(struct intel_vgpu *vgpu,
pci_resource_len(gvt->dev_priv->drm.pdev, 0);
vgpu->cfg_space.bar[INTEL_GVT_PCI_BAR_APERTURE].size =
pci_resource_len(gvt->dev_priv->drm.pdev, 2);
memset(vgpu_cfg_space(vgpu) + PCI_ROM_ADDRESS, 0, 4);
}
/**
......
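As background for the ROM BAR handler above: a guest sizes a BAR by writing all 1s to it and reading the value back, so answering the sizing write with 0 tells the guest no option ROM is present. A rough sketch of the guest-side probe, illustrative only and not part of this patch:

static u32 example_probe_rom_bar_size(struct pci_dev *pdev)
{
	u32 orig, sz;

	pci_read_config_dword(pdev, PCI_ROM_ADDRESS, &orig);
	pci_write_config_dword(pdev, PCI_ROM_ADDRESS, (u32)PCI_ROM_ADDRESS_MASK);
	pci_read_config_dword(pdev, PCI_ROM_ADDRESS, &sz);
	pci_write_config_dword(pdev, PCI_ROM_ADDRESS, orig);

	/* A masked read-back of 0 (what the emulation above returns) means "no ROM". */
	sz &= (u32)PCI_ROM_ADDRESS_MASK;
	return sz ? ~sz + 1 : 0;
}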
@@ -25,41 +25,41 @@
#define __GVT_DEBUG_H__
#define gvt_err(fmt, args...) \
-DRM_ERROR("gvt: "fmt, ##args)
+pr_err("gvt: "fmt, ##args)
#define gvt_vgpu_err(fmt, args...) \
do { \
if (IS_ERR_OR_NULL(vgpu)) \
-DRM_DEBUG_DRIVER("gvt: "fmt, ##args); \
+pr_err("gvt: "fmt, ##args); \
else \
-DRM_DEBUG_DRIVER("gvt: vgpu %d: "fmt, vgpu->id, ##args);\
+pr_err("gvt: vgpu %d: "fmt, vgpu->id, ##args);\
} while (0)
#define gvt_dbg_core(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: core: "fmt, ##args)
+pr_debug("gvt: core: "fmt, ##args)
#define gvt_dbg_irq(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: irq: "fmt, ##args)
+pr_debug("gvt: irq: "fmt, ##args)
#define gvt_dbg_mm(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: mm: "fmt, ##args)
+pr_debug("gvt: mm: "fmt, ##args)
#define gvt_dbg_mmio(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: mmio: "fmt, ##args)
+pr_debug("gvt: mmio: "fmt, ##args)
#define gvt_dbg_dpy(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: dpy: "fmt, ##args)
+pr_debug("gvt: dpy: "fmt, ##args)
#define gvt_dbg_el(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: el: "fmt, ##args)
+pr_debug("gvt: el: "fmt, ##args)
#define gvt_dbg_sched(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: sched: "fmt, ##args)
+pr_debug("gvt: sched: "fmt, ##args)
#define gvt_dbg_render(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: render: "fmt, ##args)
+pr_debug("gvt: render: "fmt, ##args)
#define gvt_dbg_cmd(fmt, args...) \
-DRM_DEBUG_DRIVER("gvt: cmd: "fmt, ##args)
+pr_debug("gvt: cmd: "fmt, ##args)
#endif
/*
* Copyright(c) 2011-2017 Intel Corporation. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/debugfs.h>
#include <linux/list_sort.h>
#include "i915_drv.h"
#include "gvt.h"
struct mmio_diff_param {
struct intel_vgpu *vgpu;
int total;
int diff;
struct list_head diff_mmio_list;
};
struct diff_mmio {
struct list_head node;
u32 offset;
u32 preg;
u32 vreg;
};
/* Compare two diff_mmio items. */
static int mmio_offset_compare(void *priv,
struct list_head *a, struct list_head *b)
{
struct diff_mmio *ma;
struct diff_mmio *mb;
ma = container_of(a, struct diff_mmio, node);
mb = container_of(b, struct diff_mmio, node);
if (ma->offset < mb->offset)
return -1;
else if (ma->offset > mb->offset)
return 1;
return 0;
}
static inline int mmio_diff_handler(struct intel_gvt *gvt,
u32 offset, void *data)
{
struct drm_i915_private *dev_priv = gvt->dev_priv;
struct mmio_diff_param *param = data;
struct diff_mmio *node;
u32 preg, vreg;
preg = I915_READ_NOTRACE(_MMIO(offset));
vreg = vgpu_vreg(param->vgpu, offset);
if (preg != vreg) {
node = kmalloc(sizeof(*node), GFP_KERNEL);
if (!node)
return -ENOMEM;
node->offset = offset;
node->preg = preg;
node->vreg = vreg;
list_add(&node->node, &param->diff_mmio_list);
param->diff++;
}
param->total++;
return 0;
}
/* Show the all the different values of tracked mmio. */
static int vgpu_mmio_diff_show(struct seq_file *s, void *unused)
{
struct intel_vgpu *vgpu = s->private;
struct intel_gvt *gvt = vgpu->gvt;
struct mmio_diff_param param = {
.vgpu = vgpu,
.total = 0,
.diff = 0,
};
struct diff_mmio *node, *next;
INIT_LIST_HEAD(&param.diff_mmio_list);
mutex_lock(&gvt->lock);
spin_lock_bh(&gvt->scheduler.mmio_context_lock);
mmio_hw_access_pre(gvt->dev_priv);
/* Recognize all the diff mmios to list. */
intel_gvt_for_each_tracked_mmio(gvt, mmio_diff_handler, &param);
mmio_hw_access_post(gvt->dev_priv);
spin_unlock_bh(&gvt->scheduler.mmio_context_lock);
mutex_unlock(&gvt->lock);
/* In an ascending order by mmio offset. */
list_sort(NULL, &param.diff_mmio_list, mmio_offset_compare);
seq_printf(s, "%-8s %-8s %-8s %-8s\n", "Offset", "HW", "vGPU", "Diff");
list_for_each_entry_safe(node, next, &param.diff_mmio_list, node) {
u32 diff = node->preg ^ node->vreg;
seq_printf(s, "%08x %08x %08x %*pbl\n",
node->offset, node->preg, node->vreg,
32, &diff);
list_del(&node->node);
kfree(node);
}
seq_printf(s, "Total: %d, Diff: %d\n", param.total, param.diff);
return 0;
}
static int vgpu_mmio_diff_open(struct inode *inode, struct file *file)
{
return single_open(file, vgpu_mmio_diff_show, inode->i_private);
}
static const struct file_operations vgpu_mmio_diff_fops = {
.open = vgpu_mmio_diff_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
/**
* intel_gvt_debugfs_add_vgpu - register debugfs entries for a vGPU
* @vgpu: a vGPU
*
* Returns:
* Zero on success, negative error code if failed.
*/
int intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu)
{
struct dentry *ent;
char name[10] = "";
sprintf(name, "vgpu%d", vgpu->id);
vgpu->debugfs = debugfs_create_dir(name, vgpu->gvt->debugfs_root);
if (!vgpu->debugfs)
return -ENOMEM;
ent = debugfs_create_bool("active", 0444, vgpu->debugfs,
&vgpu->active);
if (!ent)
return -ENOMEM;
ent = debugfs_create_file("mmio_diff", 0444, vgpu->debugfs,
vgpu, &vgpu_mmio_diff_fops);
if (!ent)
return -ENOMEM;
return 0;
}
/**
* intel_gvt_debugfs_remove_vgpu - remove debugfs entries of a vGPU
* @vgpu: a vGPU
*/
void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu)
{
debugfs_remove_recursive(vgpu->debugfs);
vgpu->debugfs = NULL;
}
/**
* intel_gvt_debugfs_init - register gvt debugfs root entry
* @gvt: GVT device
*
* Returns:
* zero on success, negative if failed.
*/
int intel_gvt_debugfs_init(struct intel_gvt *gvt)
{
struct drm_minor *minor = gvt->dev_priv->drm.primary;
struct dentry *ent;
gvt->debugfs_root = debugfs_create_dir("gvt", minor->debugfs_root);
if (!gvt->debugfs_root) {
gvt_err("Cannot create debugfs dir\n");
return -ENOMEM;
}
ent = debugfs_create_ulong("num_tracked_mmio", 0444, gvt->debugfs_root,
&gvt->mmio.num_tracked_mmio);
if (!ent)
return -ENOMEM;
return 0;
}
/**
* intel_gvt_debugfs_clean - remove debugfs entries
* @gvt: GVT device
*/
void intel_gvt_debugfs_clean(struct intel_gvt *gvt)
{
debugfs_remove_recursive(gvt->debugfs_root);
gvt->debugfs_root = NULL;
}
@@ -36,10 +36,6 @@
#define _GVT_EXECLIST_H_
struct execlist_ctx_descriptor_format {
-union {
-u32 udw;
-u32 context_id;
-};
union {
u32 ldw;
struct {
@@ -54,6 +50,10 @@ struct execlist_ctx_descriptor_format {
u32 lrca : 20;
};
};
union {
u32 udw;
u32 context_id;
};
};
struct execlist_status_format {
......
@@ -66,20 +66,23 @@ static struct bin_attribute firmware_attr = {
.mmap = NULL,
};
-static int expose_firmware_sysfs(struct intel_gvt *gvt)
+static int mmio_snapshot_handler(struct intel_gvt *gvt, u32 offset, void *data)
{
struct drm_i915_private *dev_priv = gvt->dev_priv;
*(u32 *)(data + offset) = I915_READ_NOTRACE(_MMIO(offset));
return 0;
}
static int expose_firmware_sysfs(struct intel_gvt *gvt)
{
struct intel_gvt_device_info *info = &gvt->device_info;
struct pci_dev *pdev = gvt->dev_priv->drm.pdev;
-struct intel_gvt_mmio_info *e;
-struct gvt_mmio_block *block = gvt->mmio.mmio_block;
-int num = gvt->mmio.num_mmio_block;
struct gvt_firmware_header *h;
void *firmware;
void *p;
unsigned long size, crc32_start;
-int i, j;
-int ret;
+int i, ret;
size = sizeof(*h) + info->mmio_size + info->cfg_space_size;
firmware = vzalloc(size);
@@ -104,15 +107,8 @@ static int expose_firmware_sysfs(struct intel_gvt *gvt)
p = firmware + h->mmio_offset;
-hash_for_each(gvt->mmio.mmio_info_table, i, e, node)
-*(u32 *)(p + e->offset) = I915_READ_NOTRACE(_MMIO(e->offset));
-for (i = 0; i < num; i++, block++) {
-for (j = 0; j < block->size; j += 4)
-*(u32 *)(p + INTEL_GVT_MMIO_OFFSET(block->offset) + j) =
-I915_READ_NOTRACE(_MMIO(INTEL_GVT_MMIO_OFFSET(
-block->offset) + j));
-}
+/* Take a snapshot of hw mmio registers. */
+intel_gvt_for_each_tracked_mmio(gvt, mmio_snapshot_handler, p);
memcpy(gvt->firmware.mmio, p, info->mmio_size);
......
@@ -34,9 +34,8 @@
#ifndef _GVT_GTT_H_
#define _GVT_GTT_H_
-#define GTT_PAGE_SHIFT 12
-#define GTT_PAGE_SIZE (1UL << GTT_PAGE_SHIFT)
-#define GTT_PAGE_MASK (~(GTT_PAGE_SIZE-1))
+#define I915_GTT_PAGE_SHIFT 12
+#define I915_GTT_PAGE_MASK (~(I915_GTT_PAGE_SIZE - 1))
struct intel_vgpu_mm;
@@ -63,6 +62,7 @@ struct intel_gvt_gtt_pte_ops {
struct intel_vgpu *vgpu);
bool (*test_present)(struct intel_gvt_gtt_entry *e);
void (*clear_present)(struct intel_gvt_gtt_entry *e);
void (*set_present)(struct intel_gvt_gtt_entry *e);
bool (*test_pse)(struct intel_gvt_gtt_entry *e);
void (*set_pfn)(struct intel_gvt_gtt_entry *e, unsigned long pfn);
unsigned long (*get_pfn)(struct intel_gvt_gtt_entry *e);
@@ -86,8 +86,8 @@ struct intel_gvt_gtt {
struct list_head oos_page_free_list_head;
struct list_head mm_lru_list_head;
-struct page *scratch_ggtt_page;
-unsigned long scratch_ggtt_mfn;
+struct page *scratch_page;
+unsigned long scratch_mfn;
};
enum {
@@ -193,18 +193,16 @@ struct intel_vgpu_scratch_pt {
unsigned long page_mfn;
};
struct intel_vgpu_gtt {
struct intel_vgpu_mm *ggtt_mm;
unsigned long active_ppgtt_mm_bitmap;
struct list_head mm_list_head;
DECLARE_HASHTABLE(shadow_page_hash_table, INTEL_GVT_GTT_HASH_BITS);
-DECLARE_HASHTABLE(guest_page_hash_table, INTEL_GVT_GTT_HASH_BITS);
-atomic_t n_write_protected_guest_page;
+DECLARE_HASHTABLE(tracked_guest_page_hash_table, INTEL_GVT_GTT_HASH_BITS);
+atomic_t n_tracked_guest_page;
struct list_head oos_page_list_head;
struct list_head post_shadow_list_head;
struct intel_vgpu_scratch_pt scratch_pt[GTT_TYPE_MAX];
};
extern int intel_vgpu_init_gtt(struct intel_vgpu *vgpu);
@@ -228,12 +226,16 @@ struct intel_vgpu_shadow_page {
unsigned long mfn;
};
-struct intel_vgpu_guest_page {
+struct intel_vgpu_page_track {
struct hlist_node node;
-bool writeprotection;
+bool tracked;
unsigned long gfn;
int (*handler)(void *, u64, void *, int);
void *data;
};
struct intel_vgpu_guest_page {
struct intel_vgpu_page_track track;
unsigned long write_cnt;
struct intel_vgpu_oos_page *oos_page;
};
@@ -243,7 +245,7 @@ struct intel_vgpu_oos_page {
struct list_head list;
struct list_head vm_list;
int id;
-unsigned char mem[GTT_PAGE_SIZE];
+unsigned char mem[I915_GTT_PAGE_SIZE];
};
#define GTT_ENTRY_NUM_IN_ONE_PAGE 512
@@ -258,22 +260,16 @@ struct intel_vgpu_ppgtt_spt {
struct list_head post_shadow_list;
};
-int intel_vgpu_init_guest_page(struct intel_vgpu *vgpu,
-struct intel_vgpu_guest_page *guest_page,
+int intel_vgpu_init_page_track(struct intel_vgpu *vgpu,
+struct intel_vgpu_page_track *t,
unsigned long gfn,
int (*handler)(void *gp, u64, void *, int),
void *data);
-void intel_vgpu_clean_guest_page(struct intel_vgpu *vgpu,
-struct intel_vgpu_guest_page *guest_page);
-int intel_vgpu_set_guest_page_writeprotection(struct intel_vgpu *vgpu,
-struct intel_vgpu_guest_page *guest_page);
-void intel_vgpu_clear_guest_page_writeprotection(struct intel_vgpu *vgpu,
-struct intel_vgpu_guest_page *guest_page);
+void intel_vgpu_clean_page_track(struct intel_vgpu *vgpu,
+struct intel_vgpu_page_track *t);
-struct intel_vgpu_guest_page *intel_vgpu_find_guest_page(
+struct intel_vgpu_page_track *intel_vgpu_find_tracked_page(
struct intel_vgpu *vgpu, unsigned long gfn);
int intel_vgpu_sync_oos_pages(struct intel_vgpu *vgpu);
......
@@ -36,6 +36,8 @@
#include "i915_drv.h"
#include "gvt.h"
#include <linux/vfio.h>
#include <linux/mdev.h>
struct intel_gvt_host intel_gvt_host;
@@ -44,6 +46,129 @@ static const char * const supported_hypervisors[] = {
[INTEL_GVT_HYPERVISOR_KVM] = "KVM",
};
static struct intel_vgpu_type *intel_gvt_find_vgpu_type(struct intel_gvt *gvt,
const char *name)
{
int i;
struct intel_vgpu_type *t;
const char *driver_name = dev_driver_string(
&gvt->dev_priv->drm.pdev->dev);
for (i = 0; i < gvt->num_types; i++) {
t = &gvt->types[i];
if (!strncmp(t->name, name + strlen(driver_name) + 1,
sizeof(t->name)))
return t;
}
return NULL;
}
static ssize_t available_instances_show(struct kobject *kobj,
struct device *dev, char *buf)
{
struct intel_vgpu_type *type;
unsigned int num = 0;
void *gvt = kdev_to_i915(dev)->gvt;
type = intel_gvt_find_vgpu_type(gvt, kobject_name(kobj));
if (!type)
num = 0;
else
num = type->avail_instance;
return sprintf(buf, "%u\n", num);
}
static ssize_t device_api_show(struct kobject *kobj, struct device *dev,
char *buf)
{
return sprintf(buf, "%s\n", VFIO_DEVICE_API_PCI_STRING);
}
static ssize_t description_show(struct kobject *kobj, struct device *dev,
char *buf)
{
struct intel_vgpu_type *type;
void *gvt = kdev_to_i915(dev)->gvt;
type = intel_gvt_find_vgpu_type(gvt, kobject_name(kobj));
if (!type)
return 0;
return sprintf(buf, "low_gm_size: %dMB\nhigh_gm_size: %dMB\n"
"fence: %d\nresolution: %s\n"
"weight: %d\n",
BYTES_TO_MB(type->low_gm_size),
BYTES_TO_MB(type->high_gm_size),
type->fence, vgpu_edid_str(type->resolution),
type->weight);
}
static MDEV_TYPE_ATTR_RO(available_instances);
static MDEV_TYPE_ATTR_RO(device_api);
static MDEV_TYPE_ATTR_RO(description);
static struct attribute *gvt_type_attrs[] = {
&mdev_type_attr_available_instances.attr,
&mdev_type_attr_device_api.attr,
&mdev_type_attr_description.attr,
NULL,
};
static struct attribute_group *gvt_vgpu_type_groups[] = {
[0 ... NR_MAX_INTEL_VGPU_TYPES - 1] = NULL,
};
static bool intel_get_gvt_attrs(struct attribute ***type_attrs,
struct attribute_group ***intel_vgpu_type_groups)
{
*type_attrs = gvt_type_attrs;
*intel_vgpu_type_groups = gvt_vgpu_type_groups;
return true;
}
static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
{
int i, j;
struct intel_vgpu_type *type;
struct attribute_group *group;
for (i = 0; i < gvt->num_types; i++) {
type = &gvt->types[i];
group = kzalloc(sizeof(struct attribute_group), GFP_KERNEL);
if (WARN_ON(!group))
goto unwind;
group->name = type->name;
group->attrs = gvt_type_attrs;
gvt_vgpu_type_groups[i] = group;
}
return true;
unwind:
for (j = 0; j < i; j++) {
group = gvt_vgpu_type_groups[j];
kfree(group);
}
return false;
}
static void intel_gvt_cleanup_vgpu_type_groups(struct intel_gvt *gvt)
{
int i;
struct attribute_group *group;
for (i = 0; i < gvt->num_types; i++) {
group = gvt_vgpu_type_groups[i];
gvt_vgpu_type_groups[i] = NULL;
kfree(group);
}
}
static const struct intel_gvt_ops intel_gvt_ops = {
.emulate_cfg_read = intel_vgpu_emulate_cfg_read,
.emulate_cfg_write = intel_vgpu_emulate_cfg_write,
@@ -54,6 +179,8 @@ static const struct intel_gvt_ops intel_gvt_ops = {
.vgpu_reset = intel_gvt_reset_vgpu,
.vgpu_activate = intel_gvt_activate_vgpu,
.vgpu_deactivate = intel_gvt_deactivate_vgpu,
.gvt_find_vgpu_type = intel_gvt_find_vgpu_type,
.get_gvt_attrs = intel_get_gvt_attrs,
};
/**
@@ -191,17 +318,18 @@ void intel_gvt_clean_device(struct drm_i915_private *dev_priv)
if (WARN_ON(!gvt))
return;
intel_gvt_debugfs_clean(gvt);
clean_service_thread(gvt);
intel_gvt_clean_cmd_parser(gvt);
intel_gvt_clean_sched_policy(gvt);
intel_gvt_clean_workload_scheduler(gvt);
-intel_gvt_clean_opregion(gvt);
intel_gvt_clean_gtt(gvt);
intel_gvt_clean_irq(gvt);
intel_gvt_clean_mmio_info(gvt);
intel_gvt_free_firmware(gvt);
intel_gvt_hypervisor_host_exit(&dev_priv->drm.pdev->dev, gvt);
intel_gvt_cleanup_vgpu_type_groups(gvt);
intel_gvt_clean_vgpu_types(gvt);
idr_destroy(&gvt->vgpu_idr);
@@ -268,13 +396,9 @@ int intel_gvt_init_device(struct drm_i915_private *dev_priv)
if (ret)
goto out_clean_irq;
-ret = intel_gvt_init_opregion(gvt);
-if (ret)
-goto out_clean_gtt;
ret = intel_gvt_init_workload_scheduler(gvt);
if (ret)
-goto out_clean_opregion;
+goto out_clean_gtt;
ret = intel_gvt_init_sched_policy(gvt);
if (ret)
@@ -292,6 +416,12 @@ int intel_gvt_init_device(struct drm_i915_private *dev_priv)
if (ret)
goto out_clean_thread;
ret = intel_gvt_init_vgpu_type_groups(gvt);
if (ret == false) {
gvt_err("failed to init vgpu type groups: %d\n", ret);
goto out_clean_types;
}
ret = intel_gvt_hypervisor_host_init(&dev_priv->drm.pdev->dev, gvt,
&intel_gvt_ops);
if (ret) {
@@ -307,6 +437,10 @@ int intel_gvt_init_device(struct drm_i915_private *dev_priv)
}
gvt->idle_vgpu = vgpu;
ret = intel_gvt_debugfs_init(gvt);
if (ret)
gvt_err("debugfs registeration failed, go on.\n");
gvt_dbg_core("gvt device initialization is done\n");
dev_priv->gvt = gvt;
return 0;
@@ -321,8 +455,6 @@ int intel_gvt_init_device(struct drm_i915_private *dev_priv)
intel_gvt_clean_sched_policy(gvt);
out_clean_workload_scheduler:
intel_gvt_clean_workload_scheduler(gvt);
-out_clean_opregion:
-intel_gvt_clean_opregion(gvt);
out_clean_gtt:
intel_gvt_clean_gtt(gvt);
out_clean_irq:
......
@@ -125,7 +125,6 @@ struct intel_vgpu_irq {
struct intel_vgpu_opregion {
void *va;
u32 gfn[INTEL_GVT_OPREGION_PAGES];
-struct page *pages[INTEL_GVT_OPREGION_PAGES];
};
#define vgpu_opregion(vgpu) (&(vgpu->opregion))
@@ -142,6 +141,33 @@ struct vgpu_sched_ctl {
int weight;
};
enum {
INTEL_VGPU_EXECLIST_SUBMISSION = 1,
INTEL_VGPU_GUC_SUBMISSION,
};
struct intel_vgpu_submission_ops {
const char *name;
int (*init)(struct intel_vgpu *vgpu);
void (*clean)(struct intel_vgpu *vgpu);
void (*reset)(struct intel_vgpu *vgpu, unsigned long engine_mask);
};
struct intel_vgpu_submission {
struct intel_vgpu_execlist execlist[I915_NUM_ENGINES];
struct list_head workload_q_head[I915_NUM_ENGINES];
struct kmem_cache *workloads;
atomic_t running_workload_num;
struct i915_gem_context *shadow_ctx;
DECLARE_BITMAP(shadow_ctx_desc_updated, I915_NUM_ENGINES);
DECLARE_BITMAP(tlb_handle_pending, I915_NUM_ENGINES);
void *ring_scan_buffer[I915_NUM_ENGINES];
int ring_scan_buffer_size[I915_NUM_ENGINES];
const struct intel_vgpu_submission_ops *ops;
int virtual_submission_interface;
bool active;
};
struct intel_vgpu {
struct intel_gvt *gvt;
int id;
@@ -161,16 +187,10 @@ struct intel_vgpu {
struct intel_vgpu_gtt gtt;
struct intel_vgpu_opregion opregion;
struct intel_vgpu_display display;
-struct intel_vgpu_execlist execlist[I915_NUM_ENGINES];
-struct list_head workload_q_head[I915_NUM_ENGINES];
-struct kmem_cache *workloads;
-atomic_t running_workload_num;
-/* 1/2K for each reserve ring buffer */
-void *reserve_ring_buffer_va[I915_NUM_ENGINES];
-int reserve_ring_buffer_size[I915_NUM_ENGINES];
-DECLARE_BITMAP(tlb_handle_pending, I915_NUM_ENGINES);
-struct i915_gem_context *shadow_ctx;
-DECLARE_BITMAP(shadow_ctx_desc_updated, I915_NUM_ENGINES);
+struct intel_vgpu_submission submission;
+u32 hws_pga[I915_NUM_ENGINES];
+struct dentry *debugfs;
#if IS_ENABLED(CONFIG_DRM_I915_GVT_KVMGT)
struct {
@@ -190,6 +210,10 @@ struct intel_vgpu {
#endif
};
/* validating GM healthy status*/
#define vgpu_is_vm_unhealthy(ret_val) \
(((ret_val) == -EBADRQC) || ((ret_val) == -EFAULT))
struct intel_gvt_gm {
unsigned long vgpu_allocated_low_gm_size;
unsigned long vgpu_allocated_high_gm_size;
@@ -231,7 +255,7 @@ struct intel_gvt_mmio {
unsigned int num_mmio_block;
DECLARE_HASHTABLE(mmio_info_table, INTEL_GVT_MMIO_HASH_BITS);
-unsigned int num_tracked_mmio;
+unsigned long num_tracked_mmio;
};
struct intel_gvt_firmware {
@@ -240,11 +264,6 @@ struct intel_gvt_firmware {
bool firmware_loaded;
};
-struct intel_gvt_opregion {
-void *opregion_va;
-u32 opregion_pa;
-};
#define NR_MAX_INTEL_VGPU_TYPES 20
struct intel_vgpu_type {
char name[16];
@@ -268,7 +287,6 @@ struct intel_gvt {
struct intel_gvt_firmware firmware;
struct intel_gvt_irq irq;
struct intel_gvt_gtt gtt;
-struct intel_gvt_opregion opregion;
struct intel_gvt_workload_scheduler scheduler;
struct notifier_block shadow_ctx_notifier_block[I915_NUM_ENGINES];
DECLARE_HASHTABLE(cmd_table, GVT_CMD_HASH_BITS);
@@ -279,6 +297,8 @@ struct intel_gvt {
struct task_struct *service_thread;
wait_queue_head_t service_thread_wq;
unsigned long service_request;
struct dentry *debugfs_root;
};
static inline struct intel_gvt *to_gvt(struct drm_i915_private *i915)
@@ -484,9 +504,6 @@ static inline u64 intel_vgpu_get_bar_gpa(struct intel_vgpu *vgpu, int bar)
PCI_BASE_ADDRESS_MEM_MASK;
}
-void intel_gvt_clean_opregion(struct intel_gvt *gvt);
-int intel_gvt_init_opregion(struct intel_gvt *gvt);
void intel_vgpu_clean_opregion(struct intel_vgpu *vgpu);
int intel_vgpu_init_opregion(struct intel_vgpu *vgpu, u32 gpa);
@@ -494,6 +511,7 @@ int intel_vgpu_emulate_opregion_request(struct intel_vgpu *vgpu, u32 swsci);
void populate_pvinfo_page(struct intel_vgpu *vgpu);
int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload);
void enter_failsafe_mode(struct intel_vgpu *vgpu, int reason);
struct intel_gvt_ops {
int (*emulate_cfg_read)(struct intel_vgpu *, unsigned int, void *,
@@ -510,12 +528,17 @@ struct intel_gvt_ops {
void (*vgpu_reset)(struct intel_vgpu *);
void (*vgpu_activate)(struct intel_vgpu *);
void (*vgpu_deactivate)(struct intel_vgpu *);
struct intel_vgpu_type *(*gvt_find_vgpu_type)(struct intel_gvt *gvt,
const char *name);
bool (*get_gvt_attrs)(struct attribute ***type_attrs,
struct attribute_group ***intel_vgpu_type_groups);
};
enum {
GVT_FAILSAFE_UNSUPPORTED_GUEST,
GVT_FAILSAFE_INSUFFICIENT_RESOURCE,
GVT_FAILSAFE_GUEST_ERR,
};
static inline void mmio_hw_access_pre(struct drm_i915_private *dev_priv)
@@ -591,6 +614,12 @@ static inline bool intel_gvt_mmio_has_mode_mask(
return gvt->mmio.mmio_attribute[offset >> 2] & F_MODE_MASK;
}
int intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu);
void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
int intel_gvt_debugfs_init(struct intel_gvt *gvt);
void intel_gvt_debugfs_clean(struct intel_gvt *gvt);
#include "trace.h" #include "trace.h"
#include "mpt.h" #include "mpt.h"
......
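To make the submission abstraction above concrete, a hedged sketch of how a per-vGPU submission interface could be switched through intel_vgpu_submission_ops; intel_vgpu_select_submission_ops() and intel_vgpu_execlist_submission_ops are referenced elsewhere in this series, but the body below is an illustration, not the actual implementation:

static int example_select_submission_ops(struct intel_vgpu *vgpu,
					 unsigned int interface)
{
	/* Assumed ops table; only execlist submission is wired up here. */
	static const struct intel_vgpu_submission_ops *ops[] = {
		[INTEL_VGPU_EXECLIST_SUBMISSION] =
			&intel_vgpu_execlist_submission_ops,
	};
	struct intel_vgpu_submission *s = &vgpu->submission;
	int ret;

	if (interface >= ARRAY_SIZE(ops) || !ops[interface])
		return -EINVAL;

	if (s->active && s->ops)
		s->ops->clean(vgpu);	/* tear down the previous interface */

	ret = ops[interface]->init(vgpu);
	if (ret)
		return ret;

	s->ops = ops[interface];
	s->virtual_submission_interface = interface;
	s->active = true;
	return 0;
}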
@@ -137,17 +137,26 @@ static int new_mmio_info(struct intel_gvt *gvt,
return 0;
}
-static int render_mmio_to_ring_id(struct intel_gvt *gvt, unsigned int reg)
+/**
* intel_gvt_render_mmio_to_ring_id - convert a mmio offset into ring id
* @gvt: a GVT device
* @offset: register offset
*
* Returns:
* Ring ID on success, negative error code if failed.
*/
int intel_gvt_render_mmio_to_ring_id(struct intel_gvt *gvt,
unsigned int offset)
{
enum intel_engine_id id;
struct intel_engine_cs *engine;
-reg &= ~GENMASK(11, 0);
+offset &= ~GENMASK(11, 0);
for_each_engine(engine, gvt->dev_priv, id) {
-if (engine->mmio_base == reg)
+if (engine->mmio_base == offset)
return id;
}
-return -1;
+return -ENODEV;
}
#define offset_to_fence_num(offset) \
@@ -157,7 +166,7 @@ static int render_mmio_to_ring_id(struct intel_gvt *gvt, unsigned int reg)
(num * 8 + i915_mmio_reg_offset(FENCE_REG_GEN6_LO(0)))
-static void enter_failsafe_mode(struct intel_vgpu *vgpu, int reason)
+void enter_failsafe_mode(struct intel_vgpu *vgpu, int reason)
{
switch (reason) {
case GVT_FAILSAFE_UNSUPPORTED_GUEST:
@@ -165,6 +174,8 @@ static void enter_failsafe_mode(struct intel_vgpu *vgpu, int reason)
break;
case GVT_FAILSAFE_INSUFFICIENT_RESOURCE:
pr_err("Graphics resource is not enough for the guest\n");
case GVT_FAILSAFE_GUEST_ERR:
pr_err("GVT Internal error for the guest\n");
default:
break;
}
@@ -1369,6 +1380,34 @@ static int mailbox_write(struct intel_vgpu *vgpu, unsigned int offset,
return intel_vgpu_default_mmio_write(vgpu, offset, &value, bytes);
}
static int hws_pga_write(struct intel_vgpu *vgpu, unsigned int offset,
void *p_data, unsigned int bytes)
{
u32 value = *(u32 *)p_data;
int ring_id = intel_gvt_render_mmio_to_ring_id(vgpu->gvt, offset);
if (!intel_gvt_ggtt_validate_range(vgpu, value, I915_GTT_PAGE_SIZE)) {
gvt_vgpu_err("VM(%d) write invalid HWSP address, reg:0x%x, value:0x%x\n",
vgpu->id, offset, value);
return -EINVAL;
}
/*
* Need to emulate all the HWSP register write to ensure host can
* update the VM CSB status correctly. Here listed registers can
* support BDW, SKL or other platforms with same HWSP registers.
*/
if (unlikely(ring_id < 0 || ring_id > I915_NUM_ENGINES)) {
gvt_vgpu_err("VM(%d) access unknown hardware status page register:0x%x\n",
vgpu->id, offset);
return -EINVAL;
}
vgpu->hws_pga[ring_id] = value;
gvt_dbg_mmio("VM(%d) write: 0x%x to HWSP: 0x%x\n",
vgpu->id, value, offset);
return intel_vgpu_default_mmio_write(vgpu, offset, &value, bytes);
}
static int skl_power_well_ctl_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
@@ -1398,18 +1437,36 @@ static int skl_lcpll_write(struct intel_vgpu *vgpu, unsigned int offset,
static int mmio_read_from_hw(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
-struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+struct intel_gvt *gvt = vgpu->gvt;
struct drm_i915_private *dev_priv = gvt->dev_priv;
int ring_id;
u32 ring_base;
ring_id = intel_gvt_render_mmio_to_ring_id(gvt, offset);
/**
* Read HW reg in following case
* a. the offset isn't a ring mmio
* b. the offset's ring is running on hw.
* c. the offset is ring time stamp mmio
*/
if (ring_id >= 0)
ring_base = dev_priv->engine[ring_id]->mmio_base;
if (ring_id < 0 || vgpu == gvt->scheduler.engine_owner[ring_id] ||
offset == i915_mmio_reg_offset(RING_TIMESTAMP(ring_base)) ||
offset == i915_mmio_reg_offset(RING_TIMESTAMP_UDW(ring_base))) {
mmio_hw_access_pre(dev_priv);
vgpu_vreg(vgpu, offset) = I915_READ(_MMIO(offset));
mmio_hw_access_post(dev_priv);
}
-mmio_hw_access_pre(dev_priv);
-vgpu_vreg(vgpu, offset) = I915_READ(_MMIO(offset));
-mmio_hw_access_post(dev_priv);
return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes);
}
static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
void *p_data, unsigned int bytes)
{
-int ring_id = render_mmio_to_ring_id(vgpu->gvt, offset);
+int ring_id = intel_gvt_render_mmio_to_ring_id(vgpu->gvt, offset);
struct intel_vgpu_execlist *execlist;
u32 data = *(u32 *)p_data;
int ret = 0;
@@ -1417,9 +1474,9 @@ static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
if (WARN_ON(ring_id < 0 || ring_id > I915_NUM_ENGINES - 1))
return -EINVAL;
-execlist = &vgpu->execlist[ring_id];
+execlist = &vgpu->submission.execlist[ring_id];
-execlist->elsp_dwords.data[execlist->elsp_dwords.index] = data;
+execlist->elsp_dwords.data[3 - execlist->elsp_dwords.index] = data;
if (execlist->elsp_dwords.index == 3) {
ret = intel_vgpu_submit_execlist(vgpu, ring_id);
if(ret)
@@ -1435,9 +1492,11 @@ static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
static int ring_mode_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
void *p_data, unsigned int bytes)
{
struct intel_vgpu_submission *s = &vgpu->submission;
u32 data = *(u32 *)p_data;
-int ring_id = render_mmio_to_ring_id(vgpu->gvt, offset);
+int ring_id = intel_gvt_render_mmio_to_ring_id(vgpu->gvt, offset);
bool enable_execlist;
int ret;
write_vreg(vgpu, offset, p_data, bytes);
@@ -1459,8 +1518,18 @@ static int ring_mode_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
(enable_execlist ? "enabling" : "disabling"),
ring_id);
-if (enable_execlist)
-intel_vgpu_start_schedule(vgpu);
+if (!enable_execlist)
+return 0;
if (s->active)
return 0;
ret = intel_vgpu_select_submission_ops(vgpu,
INTEL_VGPU_EXECLIST_SUBMISSION);
if (ret)
return ret;
intel_vgpu_start_schedule(vgpu);
}
return 0;
}
@@ -1492,7 +1561,7 @@ static int gvt_reg_tlb_control_handler(struct intel_vgpu *vgpu,
default:
return -EINVAL;
}
-set_bit(id, (void *)vgpu->tlb_handle_pending);
+set_bit(id, (void *)vgpu->submission.tlb_handle_pending);
return 0;
}
@@ -2478,7 +2547,7 @@ static int init_broadwell_mmio_info(struct intel_gvt *gvt)
MMIO_RING_F(RING_REG, 32, 0, 0, 0, D_BDW_PLUS, NULL, NULL);
#undef RING_REG
-MMIO_RING_GM_RDR(RING_HWS_PGA, D_BDW_PLUS, NULL, NULL);
+MMIO_RING_GM_RDR(RING_HWS_PGA, D_BDW_PLUS, NULL, hws_pga_write);
MMIO_DFH(HDC_CHICKEN0, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
@@ -2879,14 +2948,46 @@ int intel_gvt_setup_mmio_info(struct intel_gvt *gvt)
gvt->mmio.mmio_block = mmio_blocks;
gvt->mmio.num_mmio_block = ARRAY_SIZE(mmio_blocks);
gvt_dbg_mmio("traced %u virtual mmio registers\n",
gvt->mmio.num_tracked_mmio);
return 0;
err:
intel_gvt_clean_mmio_info(gvt);
return ret;
}
/**
* intel_gvt_for_each_tracked_mmio - iterate each tracked mmio
* @gvt: a GVT device
* @handler: the handler
* @data: private data given to handler
*
* Returns:
* Zero on success, negative error code if failed.
*/
int intel_gvt_for_each_tracked_mmio(struct intel_gvt *gvt,
int (*handler)(struct intel_gvt *gvt, u32 offset, void *data),
void *data)
{
struct gvt_mmio_block *block = gvt->mmio.mmio_block;
struct intel_gvt_mmio_info *e;
int i, j, ret;
hash_for_each(gvt->mmio.mmio_info_table, i, e, node) {
ret = handler(gvt, e->offset, data);
if (ret)
return ret;
}
for (i = 0; i < gvt->mmio.num_mmio_block; i++, block++) {
for (j = 0; j < block->size; j += 4) {
ret = handler(gvt,
INTEL_GVT_MMIO_OFFSET(block->offset) + j,
data);
if (ret)
return ret;
}
}
return 0;
}
/**
* intel_vgpu_default_mmio_read - default MMIO read handler
......
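A minimal usage sketch of the iterator added above; the counting callback is hypothetical and only for illustration:

static int example_count_mmio(struct intel_gvt *gvt, u32 offset, void *data)
{
	(*(unsigned long *)data)++;	/* count every tracked MMIO offset */
	return 0;
}

static unsigned long example_num_tracked_mmio(struct intel_gvt *gvt)
{
	unsigned long num = 0;

	intel_gvt_for_each_tracked_mmio(gvt, example_count_mmio, &num);
	return num;
}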
@@ -248,120 +248,6 @@ static void gvt_cache_destroy(struct intel_vgpu *vgpu)
}
}
static struct intel_vgpu_type *intel_gvt_find_vgpu_type(struct intel_gvt *gvt,
const char *name)
{
int i;
struct intel_vgpu_type *t;
const char *driver_name = dev_driver_string(
&gvt->dev_priv->drm.pdev->dev);
for (i = 0; i < gvt->num_types; i++) {
t = &gvt->types[i];
if (!strncmp(t->name, name + strlen(driver_name) + 1,
sizeof(t->name)))
return t;
}
return NULL;
}
static ssize_t available_instances_show(struct kobject *kobj,
struct device *dev, char *buf)
{
struct intel_vgpu_type *type;
unsigned int num = 0;
void *gvt = kdev_to_i915(dev)->gvt;
type = intel_gvt_find_vgpu_type(gvt, kobject_name(kobj));
if (!type)
num = 0;
else
num = type->avail_instance;
return sprintf(buf, "%u\n", num);
}
static ssize_t device_api_show(struct kobject *kobj, struct device *dev,
char *buf)
{
return sprintf(buf, "%s\n", VFIO_DEVICE_API_PCI_STRING);
}
static ssize_t description_show(struct kobject *kobj, struct device *dev,
char *buf)
{
struct intel_vgpu_type *type;
void *gvt = kdev_to_i915(dev)->gvt;
type = intel_gvt_find_vgpu_type(gvt, kobject_name(kobj));
if (!type)
return 0;
return sprintf(buf, "low_gm_size: %dMB\nhigh_gm_size: %dMB\n"
"fence: %d\nresolution: %s\n"
"weight: %d\n",
BYTES_TO_MB(type->low_gm_size),
BYTES_TO_MB(type->high_gm_size),
type->fence, vgpu_edid_str(type->resolution),
type->weight);
}
static MDEV_TYPE_ATTR_RO(available_instances);
static MDEV_TYPE_ATTR_RO(device_api);
static MDEV_TYPE_ATTR_RO(description);
static struct attribute *type_attrs[] = {
&mdev_type_attr_available_instances.attr,
&mdev_type_attr_device_api.attr,
&mdev_type_attr_description.attr,
NULL,
};
static struct attribute_group *intel_vgpu_type_groups[] = {
[0 ... NR_MAX_INTEL_VGPU_TYPES - 1] = NULL,
};
static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
{
int i, j;
struct intel_vgpu_type *type;
struct attribute_group *group;
for (i = 0; i < gvt->num_types; i++) {
type = &gvt->types[i];
group = kzalloc(sizeof(struct attribute_group), GFP_KERNEL);
if (WARN_ON(!group))
goto unwind;
group->name = type->name;
group->attrs = type_attrs;
intel_vgpu_type_groups[i] = group;
}
return true;
unwind:
for (j = 0; j < i; j++) {
group = intel_vgpu_type_groups[j];
kfree(group);
}
return false;
}
static void intel_gvt_cleanup_vgpu_type_groups(struct intel_gvt *gvt)
{
int i;
struct attribute_group *group;
for (i = 0; i < gvt->num_types; i++) {
group = intel_vgpu_type_groups[i];
kfree(group);
}
}
 static void kvmgt_protect_table_init(struct kvmgt_guest_info *info)
 {
 	hash_init(info->ptable);
...
@@ -441,7 +327,7 @@ static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
 	pdev = mdev_parent_dev(mdev);
 	gvt = kdev_to_i915(pdev)->gvt;
-	type = intel_gvt_find_vgpu_type(gvt, kobject_name(kobj));
+	type = intel_gvt_ops->gvt_find_vgpu_type(gvt, kobject_name(kobj));
 	if (!type) {
 		gvt_vgpu_err("failed to find type %s to create\n",
 			     kobject_name(kobj));
...
@@ -1188,7 +1074,7 @@ hw_id_show(struct device *dev, struct device_attribute *attr,
 		struct intel_vgpu *vgpu = (struct intel_vgpu *)
 			mdev_get_drvdata(mdev);
 		return sprintf(buf, "%u\n",
-			       vgpu->shadow_ctx->hw_id);
+			       vgpu->submission.shadow_ctx->hw_id);
 	}
 	return sprintf(buf, "\n");
 }
...
@@ -1212,8 +1098,7 @@ static const struct attribute_group *intel_vgpu_groups[] = {
 	NULL,
 };
-static const struct mdev_parent_ops intel_vgpu_ops = {
-	.supported_type_groups = intel_vgpu_type_groups,
+static struct mdev_parent_ops intel_vgpu_ops = {
 	.mdev_attr_groups = intel_vgpu_groups,
 	.create = intel_vgpu_create,
 	.remove = intel_vgpu_remove,
...
@@ -1229,17 +1114,20 @@ static const struct mdev_parent_ops intel_vgpu_ops = {
 static int kvmgt_host_init(struct device *dev, void *gvt, const void *ops)
 {
-	if (!intel_gvt_init_vgpu_type_groups(gvt))
-		return -EFAULT;
+	struct attribute **kvm_type_attrs;
+	struct attribute_group **kvm_vgpu_type_groups;
 	intel_gvt_ops = ops;
+	if (!intel_gvt_ops->get_gvt_attrs(&kvm_type_attrs,
+			&kvm_vgpu_type_groups))
+		return -EFAULT;
+	intel_vgpu_ops.supported_type_groups = kvm_vgpu_type_groups;
 	return mdev_register_device(dev, &intel_vgpu_ops);
 }
 static void kvmgt_host_exit(struct device *dev, void *gvt)
 {
-	intel_gvt_cleanup_vgpu_type_groups(gvt);
 	mdev_unregister_device(dev);
 }
...
@@ -117,18 +117,18 @@ static void failsafe_emulate_mmio_rw(struct intel_vgpu *vgpu, uint64_t pa,
 	else
 		memcpy(pt, p_data, bytes);
-	} else if (atomic_read(&vgpu->gtt.n_write_protected_guest_page)) {
-		struct intel_vgpu_guest_page *gp;
+	} else if (atomic_read(&vgpu->gtt.n_tracked_guest_page)) {
+		struct intel_vgpu_page_track *t;
 		/* Since we enter the failsafe mode early during guest boot,
 		 * guest may not have chance to set up its ppgtt table, so
 		 * there should not be any wp pages for guest. Keep the wp
 		 * related code here in case we need to handle it in furture.
 		 */
-		gp = intel_vgpu_find_guest_page(vgpu, pa >> PAGE_SHIFT);
-		if (gp) {
+		t = intel_vgpu_find_tracked_page(vgpu, pa >> PAGE_SHIFT);
+		if (t) {
 			/* remove write protection to prevent furture traps */
-			intel_vgpu_clean_guest_page(vgpu, gp);
+			intel_vgpu_clean_page_track(vgpu, t);
 			if (read)
 				intel_gvt_hypervisor_read_gpa(vgpu, pa,
 						p_data, bytes);
@@ -170,17 +170,17 @@ int intel_vgpu_emulate_mmio_read(struct intel_vgpu *vgpu, uint64_t pa,
 		return ret;
 	}
-	if (atomic_read(&vgpu->gtt.n_write_protected_guest_page)) {
-		struct intel_vgpu_guest_page *gp;
+	if (atomic_read(&vgpu->gtt.n_tracked_guest_page)) {
+		struct intel_vgpu_page_track *t;
-		gp = intel_vgpu_find_guest_page(vgpu, pa >> PAGE_SHIFT);
-		if (gp) {
+		t = intel_vgpu_find_tracked_page(vgpu, pa >> PAGE_SHIFT);
+		if (t) {
 			ret = intel_gvt_hypervisor_read_gpa(vgpu, pa,
 					p_data, bytes);
 			if (ret) {
 				gvt_vgpu_err("guest page read error %d, "
 					"gfn 0x%lx, pa 0x%llx, var 0x%x, len %d\n",
-					ret, gp->gfn, pa, *(u32 *)p_data,
+					ret, t->gfn, pa, *(u32 *)p_data,
 					bytes);
 			}
 			mutex_unlock(&gvt->lock);
@@ -267,17 +267,17 @@ int intel_vgpu_emulate_mmio_write(struct intel_vgpu *vgpu, uint64_t pa,
 		return ret;
 	}
-	if (atomic_read(&vgpu->gtt.n_write_protected_guest_page)) {
-		struct intel_vgpu_guest_page *gp;
+	if (atomic_read(&vgpu->gtt.n_tracked_guest_page)) {
+		struct intel_vgpu_page_track *t;
-		gp = intel_vgpu_find_guest_page(vgpu, pa >> PAGE_SHIFT);
-		if (gp) {
-			ret = gp->handler(gp, pa, p_data, bytes);
+		t = intel_vgpu_find_tracked_page(vgpu, pa >> PAGE_SHIFT);
+		if (t) {
+			ret = t->handler(t, pa, p_data, bytes);
 			if (ret) {
 				gvt_err("guest page write error %d, "
 					"gfn 0x%lx, pa 0x%llx, "
 					"var 0x%x, len %d\n",
-					ret, gp->gfn, pa,
+					ret, t->gfn, pa,
 					*(u32 *)p_data, bytes);
 			}
 			mutex_unlock(&gvt->lock);
...
@@ -65,11 +65,17 @@ struct intel_gvt_mmio_info {
 	struct hlist_node node;
 };
+int intel_gvt_render_mmio_to_ring_id(struct intel_gvt *gvt,
+		unsigned int reg);
 unsigned long intel_gvt_get_device_type(struct intel_gvt *gvt);
 bool intel_gvt_match_device(struct intel_gvt *gvt, unsigned long device);
 int intel_gvt_setup_mmio_info(struct intel_gvt *gvt);
 void intel_gvt_clean_mmio_info(struct intel_gvt *gvt);
+int intel_gvt_for_each_tracked_mmio(struct intel_gvt *gvt,
+	int (*handler)(struct intel_gvt *gvt, u32 offset, void *data),
+	void *data);
 #define INTEL_GVT_MMIO_OFFSET(reg) ({ \
 	typeof(reg) __reg = reg; \
...
@@ -154,51 +154,53 @@ static inline unsigned long intel_gvt_hypervisor_virt_to_mfn(void *p)
 }
 /**
- * intel_gvt_hypervisor_set_wp_page - set a guest page to write-protected
+ * intel_gvt_hypervisor_enable - set a guest page to write-protected
  * @vgpu: a vGPU
- * @p: intel_vgpu_guest_page
+ * @t: page track data structure
  *
  * Returns:
  * Zero on success, negative error code if failed.
  */
-static inline int intel_gvt_hypervisor_set_wp_page(struct intel_vgpu *vgpu,
-		struct intel_vgpu_guest_page *p)
+static inline int intel_gvt_hypervisor_enable_page_track(
+		struct intel_vgpu *vgpu,
+		struct intel_vgpu_page_track *t)
 {
 	int ret;
-	if (p->writeprotection)
+	if (t->tracked)
 		return 0;
-	ret = intel_gvt_host.mpt->set_wp_page(vgpu->handle, p->gfn);
+	ret = intel_gvt_host.mpt->set_wp_page(vgpu->handle, t->gfn);
 	if (ret)
 		return ret;
-	p->writeprotection = true;
-	atomic_inc(&vgpu->gtt.n_write_protected_guest_page);
+	t->tracked = true;
+	atomic_inc(&vgpu->gtt.n_tracked_guest_page);
 	return 0;
 }
 /**
- * intel_gvt_hypervisor_unset_wp_page - remove the write-protection of a
+ * intel_gvt_hypervisor_disable_page_track - remove the write-protection of a
  * guest page
  * @vgpu: a vGPU
- * @p: intel_vgpu_guest_page
+ * @t: page track data structure
  *
  * Returns:
  * Zero on success, negative error code if failed.
  */
-static inline int intel_gvt_hypervisor_unset_wp_page(struct intel_vgpu *vgpu,
-		struct intel_vgpu_guest_page *p)
+static inline int intel_gvt_hypervisor_disable_page_track(
+		struct intel_vgpu *vgpu,
+		struct intel_vgpu_page_track *t)
 {
 	int ret;
-	if (!p->writeprotection)
+	if (!t->tracked)
 		return 0;
-	ret = intel_gvt_host.mpt->unset_wp_page(vgpu->handle, p->gfn);
+	ret = intel_gvt_host.mpt->unset_wp_page(vgpu->handle, t->gfn);
 	if (ret)
 		return ret;
-	p->writeprotection = false;
-	atomic_dec(&vgpu->gtt.n_write_protected_guest_page);
+	t->tracked = false;
+	atomic_dec(&vgpu->gtt.n_tracked_guest_page);
 	return 0;
 }
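A sketch of how the renamed helpers pair up in a caller; the function below is hypothetical (it assumes a page that should only stay write-protected while its shadow is rebuilt) and is not code from this commit:

	static int shadow_tracked_page(struct intel_vgpu *vgpu,
				       struct intel_vgpu_page_track *t)
	{
		int ret;

		ret = intel_gvt_hypervisor_enable_page_track(vgpu, t);
		if (ret)
			return ret;

		/* ... rebuild shadow entries while guest writes are trapped ... */

		return intel_gvt_hypervisor_disable_page_track(vgpu, t);
	}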
...
@@ -25,36 +25,247 @@
 #include "i915_drv.h"
 #include "gvt.h"
static int init_vgpu_opregion(struct intel_vgpu *vgpu, u32 gpa) /*
* Note: Only for GVT-g virtual VBT generation, other usage must
* not do like this.
*/
#define _INTEL_BIOS_PRIVATE
#include "intel_vbt_defs.h"
#define OPREGION_SIGNATURE "IntelGraphicsMem"
#define MBOX_VBT (1<<3)
/* device handle */
#define DEVICE_TYPE_CRT 0x01
#define DEVICE_TYPE_EFP1 0x04
#define DEVICE_TYPE_EFP2 0x40
#define DEVICE_TYPE_EFP3 0x20
#define DEVICE_TYPE_EFP4 0x10
#define DEV_SIZE 38
struct opregion_header {
u8 signature[16];
u32 size;
u32 opregion_ver;
u8 bios_ver[32];
u8 vbios_ver[16];
u8 driver_ver[16];
u32 mboxes;
u32 driver_model;
u32 pcon;
u8 dver[32];
u8 rsvd[124];
} __packed;
struct bdb_data_header {
u8 id;
u16 size; /* data size */
} __packed;
struct efp_child_device_config {
u16 handle;
u16 device_type;
u16 device_class;
u8 i2c_speed;
u8 dp_onboard_redriver; /* 158 */
u8 dp_ondock_redriver; /* 158 */
u8 hdmi_level_shifter_value:4; /* 169 */
u8 hdmi_max_data_rate:4; /* 204 */
u16 dtd_buf_ptr; /* 161 */
u8 edidless_efp:1; /* 161 */
u8 compression_enable:1; /* 198 */
u8 compression_method:1; /* 198 */
u8 ganged_edp:1; /* 202 */
u8 skip0:4;
u8 compression_structure_index:4; /* 198 */
u8 skip1:4;
u8 slave_port; /* 202 */
u8 skip2;
u8 dvo_port;
u8 i2c_pin; /* for add-in card */
u8 slave_addr; /* for add-in card */
u8 ddc_pin;
u16 edid_ptr;
u8 dvo_config;
u8 efp_docked_port:1; /* 158 */
u8 lane_reversal:1; /* 184 */
u8 onboard_lspcon:1; /* 192 */
u8 iboost_enable:1; /* 196 */
u8 hpd_invert:1; /* BXT 196 */
u8 slip3:3;
u8 hdmi_compat:1;
u8 dp_compat:1;
u8 tmds_compat:1;
u8 skip4:5;
u8 aux_channel;
u8 dongle_detect;
u8 pipe_cap:2;
u8 sdvo_stall:1; /* 158 */
u8 hpd_status:2;
u8 integrated_encoder:1;
u8 skip5:2;
u8 dvo_wiring;
u8 mipi_bridge_type; /* 171 */
u16 device_class_ext;
u8 dvo_function;
u8 dp_usb_type_c:1; /* 195 */
u8 skip6:7;
u8 dp_usb_type_c_2x_gpio_index; /* 195 */
u16 dp_usb_type_c_2x_gpio_pin; /* 195 */
u8 iboost_dp:4; /* 196 */
u8 iboost_hdmi:4; /* 196 */
} __packed;
struct vbt {
/* header->bdb_offset point to bdb_header offset */
struct vbt_header header;
struct bdb_header bdb_header;
struct bdb_data_header general_features_header;
struct bdb_general_features general_features;
struct bdb_data_header general_definitions_header;
struct bdb_general_definitions general_definitions;
struct efp_child_device_config child0;
struct efp_child_device_config child1;
struct efp_child_device_config child2;
struct efp_child_device_config child3;
struct bdb_data_header driver_features_header;
struct bdb_driver_features driver_features;
};
static void virt_vbt_generation(struct vbt *v)
{ {
u8 *buf; int num_child;
int i;
memset(v, 0, sizeof(struct vbt));
v->header.signature[0] = '$';
v->header.signature[1] = 'V';
v->header.signature[2] = 'B';
v->header.signature[3] = 'T';
/* there's features depending on version! */
v->header.version = 155;
v->header.header_size = sizeof(v->header);
v->header.vbt_size = sizeof(struct vbt) - sizeof(v->header);
v->header.bdb_offset = offsetof(struct vbt, bdb_header);
strcpy(&v->bdb_header.signature[0], "BIOS_DATA_BLOCK");
v->bdb_header.version = 186; /* child_dev_size = 38 */
v->bdb_header.header_size = sizeof(v->bdb_header);
v->bdb_header.bdb_size = sizeof(struct vbt) - sizeof(struct vbt_header)
- sizeof(struct bdb_header);
/* general features */
v->general_features_header.id = BDB_GENERAL_FEATURES;
v->general_features_header.size = sizeof(struct bdb_general_features);
v->general_features.int_crt_support = 0;
v->general_features.int_tv_support = 0;
/* child device */
num_child = 4; /* each port has one child */
v->general_definitions_header.id = BDB_GENERAL_DEFINITIONS;
/* size will include child devices */
v->general_definitions_header.size =
sizeof(struct bdb_general_definitions) + num_child * DEV_SIZE;
v->general_definitions.child_dev_size = DEV_SIZE;
/* portA */
v->child0.handle = DEVICE_TYPE_EFP1;
v->child0.device_type = DEVICE_TYPE_DP;
v->child0.dvo_port = DVO_PORT_DPA;
v->child0.aux_channel = DP_AUX_A;
v->child0.dp_compat = true;
v->child0.integrated_encoder = true;
/* portB */
v->child1.handle = DEVICE_TYPE_EFP2;
v->child1.device_type = DEVICE_TYPE_DP;
v->child1.dvo_port = DVO_PORT_DPB;
v->child1.aux_channel = DP_AUX_B;
v->child1.dp_compat = true;
v->child1.integrated_encoder = true;
/* portC */
v->child2.handle = DEVICE_TYPE_EFP3;
v->child2.device_type = DEVICE_TYPE_DP;
v->child2.dvo_port = DVO_PORT_DPC;
v->child2.aux_channel = DP_AUX_C;
v->child2.dp_compat = true;
v->child2.integrated_encoder = true;
/* portD */
v->child3.handle = DEVICE_TYPE_EFP4;
v->child3.device_type = DEVICE_TYPE_DP;
v->child3.dvo_port = DVO_PORT_DPD;
v->child3.aux_channel = DP_AUX_D;
v->child3.dp_compat = true;
v->child3.integrated_encoder = true;
/* driver features */
v->driver_features_header.id = BDB_DRIVER_FEATURES;
v->driver_features_header.size = sizeof(struct bdb_driver_features);
v->driver_features.lvds_config = BDB_DRIVER_FEATURE_NO_LVDS;
}
if (WARN((vgpu_opregion(vgpu)->va), static int alloc_and_init_virt_opregion(struct intel_vgpu *vgpu)
"vgpu%d: opregion has been initialized already.\n", {
vgpu->id)) u8 *buf;
return -EINVAL; struct opregion_header *header;
struct vbt v;
gvt_dbg_core("init vgpu%d opregion\n", vgpu->id);
vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_KERNEL | vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_KERNEL |
__GFP_ZERO, __GFP_ZERO,
get_order(INTEL_GVT_OPREGION_SIZE)); get_order(INTEL_GVT_OPREGION_SIZE));
if (!vgpu_opregion(vgpu)->va) {
if (!vgpu_opregion(vgpu)->va) gvt_err("fail to get memory for vgpu virt opregion\n");
return -ENOMEM; return -ENOMEM;
}
memcpy(vgpu_opregion(vgpu)->va, vgpu->gvt->opregion.opregion_va, /* emulated opregion with VBT mailbox only */
INTEL_GVT_OPREGION_SIZE); buf = (u8 *)vgpu_opregion(vgpu)->va;
header = (struct opregion_header *)buf;
for (i = 0; i < INTEL_GVT_OPREGION_PAGES; i++) memcpy(header->signature, OPREGION_SIGNATURE,
vgpu_opregion(vgpu)->gfn[i] = (gpa >> PAGE_SHIFT) + i; sizeof(OPREGION_SIGNATURE));
header->size = 0x8;
header->opregion_ver = 0x02000000;
header->mboxes = MBOX_VBT;
/* for unknown reason, the value in LID field is incorrect /* for unknown reason, the value in LID field is incorrect
* which block the windows guest, so workaround it by force * which block the windows guest, so workaround it by force
* setting it to "OPEN" * setting it to "OPEN"
*/ */
buf = (u8 *)vgpu_opregion(vgpu)->va;
buf[INTEL_GVT_OPREGION_CLID] = 0x3; buf[INTEL_GVT_OPREGION_CLID] = 0x3;
/* emulated vbt from virt vbt generation */
virt_vbt_generation(&v);
memcpy(buf + INTEL_GVT_OPREGION_VBT_OFFSET, &v, sizeof(struct vbt));
return 0;
}
static int init_vgpu_opregion(struct intel_vgpu *vgpu, u32 gpa)
{
int i, ret;
if (WARN((vgpu_opregion(vgpu)->va),
"vgpu%d: opregion has been initialized already.\n",
vgpu->id))
return -EINVAL;
ret = alloc_and_init_virt_opregion(vgpu);
if (ret < 0)
return ret;
for (i = 0; i < INTEL_GVT_OPREGION_PAGES; i++)
vgpu_opregion(vgpu)->gfn[i] = (gpa >> PAGE_SHIFT) + i;
return 0; return 0;
} }
...
@@ -132,40 +343,6 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu, u32 gpa)
 	return 0;
 }
/**
* intel_gvt_clean_opregion - clean host opergion related stuffs
* @gvt: a GVT device
*
*/
void intel_gvt_clean_opregion(struct intel_gvt *gvt)
{
memunmap(gvt->opregion.opregion_va);
gvt->opregion.opregion_va = NULL;
}
/**
* intel_gvt_init_opregion - initialize host opergion related stuffs
* @gvt: a GVT device
*
* Returns:
* Zero on success, negative error code if failed.
*/
int intel_gvt_init_opregion(struct intel_gvt *gvt)
{
gvt_dbg_core("init host opregion\n");
pci_read_config_dword(gvt->dev_priv->drm.pdev, INTEL_GVT_PCI_OPREGION,
&gvt->opregion.opregion_pa);
gvt->opregion.opregion_va = memremap(gvt->opregion.opregion_pa,
INTEL_GVT_OPREGION_SIZE, MEMREMAP_WB);
if (!gvt->opregion.opregion_va) {
gvt_err("fail to map host opregion\n");
return -EFAULT;
}
return 0;
}
 #define GVT_OPREGION_FUNC(scic) \
 ({ \
 	u32 __ret; \
...
@@ -51,6 +51,9 @@
 #define INTEL_GVT_OPREGION_PAGES 2
 #define INTEL_GVT_OPREGION_SIZE (INTEL_GVT_OPREGION_PAGES * PAGE_SIZE)
+#define INTEL_GVT_OPREGION_VBT_OFFSET 0x400
+#define INTEL_GVT_OPREGION_VBT_SIZE \
+	(INTEL_GVT_OPREGION_SIZE - INTEL_GVT_OPREGION_VBT_OFFSET)
 #define VGT_SPRSTRIDE(pipe) _PIPE(pipe, _SPRA_STRIDE, _PLANE_STRIDE_2_B)
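The two new macros carve the VBT mailbox out of the two-page virtual opregion; a hypothetical compile-time guard (not in the patch) makes the relationship with the generated struct vbt explicit:

	/* illustrative only: would live inside a GVT function that builds the virtual opregion */
	BUILD_BUG_ON(sizeof(struct vbt) > INTEL_GVT_OPREGION_VBT_SIZE);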
...
@@ -71,6 +74,7 @@
 #define RB_HEAD_OFF_MASK ((1U << 21) - (1U << 2))
 #define RB_TAIL_OFF_MASK ((1U << 21) - (1U << 3))
 #define RB_TAIL_SIZE_MASK ((1U << 21) - (1U << 12))
-#define _RING_CTL_BUF_SIZE(ctl) (((ctl) & RB_TAIL_SIZE_MASK) + GTT_PAGE_SIZE)
+#define _RING_CTL_BUF_SIZE(ctl) (((ctl) & RB_TAIL_SIZE_MASK) + \
+		I915_GTT_PAGE_SIZE)
 #endif
...
@@ -147,6 +147,7 @@ static u32 gen9_render_mocs_L3[32];
static void handle_tlb_pending_event(struct intel_vgpu *vgpu, int ring_id) static void handle_tlb_pending_event(struct intel_vgpu *vgpu, int ring_id)
{ {
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
struct intel_vgpu_submission *s = &vgpu->submission;
enum forcewake_domains fw; enum forcewake_domains fw;
i915_reg_t reg; i915_reg_t reg;
u32 regs[] = { u32 regs[] = {
@@ -160,7 +161,7 @@ static void handle_tlb_pending_event(struct intel_vgpu *vgpu, int ring_id)
if (WARN_ON(ring_id >= ARRAY_SIZE(regs))) if (WARN_ON(ring_id >= ARRAY_SIZE(regs)))
return; return;
if (!test_and_clear_bit(ring_id, (void *)vgpu->tlb_handle_pending)) if (!test_and_clear_bit(ring_id, (void *)s->tlb_handle_pending))
return; return;
reg = _MMIO(regs[ring_id]); reg = _MMIO(regs[ring_id]);
@@ -208,7 +209,7 @@ static void load_mocs(struct intel_vgpu *vgpu, int ring_id)
offset.reg = regs[ring_id]; offset.reg = regs[ring_id];
for (i = 0; i < 64; i++) { for (i = 0; i < 64; i++) {
gen9_render_mocs[ring_id][i] = I915_READ_FW(offset); gen9_render_mocs[ring_id][i] = I915_READ_FW(offset);
I915_WRITE(offset, vgpu_vreg(vgpu, offset)); I915_WRITE_FW(offset, vgpu_vreg(vgpu, offset));
offset.reg += 4; offset.reg += 4;
} }
@@ -261,14 +262,15 @@ static void restore_mocs(struct intel_vgpu *vgpu, int ring_id)
static void switch_mmio_to_vgpu(struct intel_vgpu *vgpu, int ring_id) static void switch_mmio_to_vgpu(struct intel_vgpu *vgpu, int ring_id)
{ {
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
struct render_mmio *mmio; struct intel_vgpu_submission *s = &vgpu->submission;
u32 v; u32 *reg_state = s->shadow_ctx->engine[ring_id].lrc_reg_state;
int i, array_size;
u32 *reg_state = vgpu->shadow_ctx->engine[ring_id].lrc_reg_state;
u32 ctx_ctrl = reg_state[CTX_CONTEXT_CONTROL_VAL]; u32 ctx_ctrl = reg_state[CTX_CONTEXT_CONTROL_VAL];
u32 inhibit_mask = u32 inhibit_mask =
_MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT); _MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT);
i915_reg_t last_reg = _MMIO(0); i915_reg_t last_reg = _MMIO(0);
struct render_mmio *mmio;
u32 v;
int i, array_size;
if (IS_SKYLAKE(vgpu->gvt->dev_priv) if (IS_SKYLAKE(vgpu->gvt->dev_priv)
|| IS_KABYLAKE(vgpu->gvt->dev_priv)) { || IS_KABYLAKE(vgpu->gvt->dev_priv)) {
...
@@ -112,17 +112,18 @@ struct intel_vgpu_workload {
struct intel_shadow_wa_ctx wa_ctx; struct intel_shadow_wa_ctx wa_ctx;
}; };
/* Intel shadow batch buffer is a i915 gem object */ struct intel_vgpu_shadow_bb {
struct intel_shadow_bb_entry {
struct list_head list; struct list_head list;
struct drm_i915_gem_object *obj; struct drm_i915_gem_object *obj;
struct i915_vma *vma;
void *va; void *va;
unsigned long len;
u32 *bb_start_cmd_va; u32 *bb_start_cmd_va;
unsigned int clflush;
bool accessing;
}; };
#define workload_q_head(vgpu, ring_id) \ #define workload_q_head(vgpu, ring_id) \
(&(vgpu->workload_q_head[ring_id])) (&(vgpu->submission.workload_q_head[ring_id]))
#define queue_workload(workload) do { \ #define queue_workload(workload) do { \
list_add_tail(&workload->list, \ list_add_tail(&workload->list, \
@@ -137,12 +138,23 @@ void intel_gvt_clean_workload_scheduler(struct intel_gvt *gvt);
void intel_gvt_wait_vgpu_idle(struct intel_vgpu *vgpu); void intel_gvt_wait_vgpu_idle(struct intel_vgpu *vgpu);
int intel_vgpu_init_gvt_context(struct intel_vgpu *vgpu); int intel_vgpu_setup_submission(struct intel_vgpu *vgpu);
void intel_vgpu_clean_gvt_context(struct intel_vgpu *vgpu); void intel_vgpu_reset_submission(struct intel_vgpu *vgpu,
unsigned long engine_mask);
void release_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx); void intel_vgpu_clean_submission(struct intel_vgpu *vgpu);
int intel_gvt_generate_request(struct intel_vgpu_workload *workload); int intel_vgpu_select_submission_ops(struct intel_vgpu *vgpu,
unsigned int interface);
extern const struct intel_vgpu_submission_ops
intel_vgpu_execlist_submission_ops;
struct intel_vgpu_workload *
intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
struct execlist_ctx_descriptor_format *desc);
void intel_vgpu_destroy_workload(struct intel_vgpu_workload *workload);
#endif
...
@@ -43,7 +43,10 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
vgpu_vreg(vgpu, vgtif_reg(version_minor)) = 0; vgpu_vreg(vgpu, vgtif_reg(version_minor)) = 0;
vgpu_vreg(vgpu, vgtif_reg(display_ready)) = 0; vgpu_vreg(vgpu, vgtif_reg(display_ready)) = 0;
vgpu_vreg(vgpu, vgtif_reg(vgt_id)) = vgpu->id; vgpu_vreg(vgpu, vgtif_reg(vgt_id)) = vgpu->id;
vgpu_vreg(vgpu, vgtif_reg(vgt_caps)) = VGT_CAPS_FULL_48BIT_PPGTT; vgpu_vreg(vgpu, vgtif_reg(vgt_caps)) = VGT_CAPS_FULL_48BIT_PPGTT;
vgpu_vreg(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HWSP_EMULATION;
vgpu_vreg(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) = vgpu_vreg(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
vgpu_aperture_gmadr_base(vgpu); vgpu_aperture_gmadr_base(vgpu);
vgpu_vreg(vgpu, vgtif_reg(avail_rs.mappable_gmadr.size)) = vgpu_vreg(vgpu, vgtif_reg(avail_rs.mappable_gmadr.size)) =
@@ -226,7 +229,7 @@ void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu)
vgpu->active = false; vgpu->active = false;
if (atomic_read(&vgpu->running_workload_num)) { if (atomic_read(&vgpu->submission.running_workload_num)) {
mutex_unlock(&gvt->lock); mutex_unlock(&gvt->lock);
intel_gvt_wait_vgpu_idle(vgpu); intel_gvt_wait_vgpu_idle(vgpu);
mutex_lock(&gvt->lock); mutex_lock(&gvt->lock);
@@ -252,10 +255,10 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
WARN(vgpu->active, "vGPU is still active!\n"); WARN(vgpu->active, "vGPU is still active!\n");
intel_gvt_debugfs_remove_vgpu(vgpu);
idr_remove(&gvt->vgpu_idr, vgpu->id); idr_remove(&gvt->vgpu_idr, vgpu->id);
intel_vgpu_clean_sched_policy(vgpu); intel_vgpu_clean_sched_policy(vgpu);
intel_vgpu_clean_gvt_context(vgpu); intel_vgpu_clean_submission(vgpu);
intel_vgpu_clean_execlist(vgpu);
intel_vgpu_clean_display(vgpu); intel_vgpu_clean_display(vgpu);
intel_vgpu_clean_opregion(vgpu); intel_vgpu_clean_opregion(vgpu);
intel_vgpu_clean_gtt(vgpu); intel_vgpu_clean_gtt(vgpu);
@@ -293,7 +296,7 @@ struct intel_vgpu *intel_gvt_create_idle_vgpu(struct intel_gvt *gvt)
vgpu->gvt = gvt; vgpu->gvt = gvt;
for (i = 0; i < I915_NUM_ENGINES; i++) for (i = 0; i < I915_NUM_ENGINES; i++)
INIT_LIST_HEAD(&vgpu->workload_q_head[i]); INIT_LIST_HEAD(&vgpu->submission.workload_q_head[i]);
ret = intel_vgpu_init_sched_policy(vgpu); ret = intel_vgpu_init_sched_policy(vgpu);
if (ret) if (ret)
@@ -346,7 +349,6 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
vgpu->handle = param->handle; vgpu->handle = param->handle;
vgpu->gvt = gvt; vgpu->gvt = gvt;
vgpu->sched_ctl.weight = param->weight; vgpu->sched_ctl.weight = param->weight;
bitmap_zero(vgpu->tlb_handle_pending, I915_NUM_ENGINES);
intel_vgpu_init_cfg_space(vgpu, param->primary); intel_vgpu_init_cfg_space(vgpu, param->primary);
@@ -372,26 +374,26 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
if (ret) if (ret)
goto out_clean_gtt; goto out_clean_gtt;
ret = intel_vgpu_init_execlist(vgpu); ret = intel_vgpu_setup_submission(vgpu);
if (ret) if (ret)
goto out_clean_display; goto out_clean_display;
ret = intel_vgpu_init_gvt_context(vgpu); ret = intel_vgpu_init_sched_policy(vgpu);
if (ret) if (ret)
goto out_clean_execlist; goto out_clean_submission;
ret = intel_vgpu_init_sched_policy(vgpu); ret = intel_gvt_debugfs_add_vgpu(vgpu);
if (ret) if (ret)
goto out_clean_shadow_ctx; goto out_clean_sched_policy;
mutex_unlock(&gvt->lock); mutex_unlock(&gvt->lock);
return vgpu; return vgpu;
out_clean_shadow_ctx: out_clean_sched_policy:
intel_vgpu_clean_gvt_context(vgpu); intel_vgpu_clean_sched_policy(vgpu);
out_clean_execlist: out_clean_submission:
intel_vgpu_clean_execlist(vgpu); intel_vgpu_clean_submission(vgpu);
out_clean_display: out_clean_display:
intel_vgpu_clean_display(vgpu); intel_vgpu_clean_display(vgpu);
out_clean_gtt: out_clean_gtt:
@@ -500,10 +502,10 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
mutex_lock(&gvt->lock); mutex_lock(&gvt->lock);
} }
intel_vgpu_reset_execlist(vgpu, resetting_eng); intel_vgpu_reset_submission(vgpu, resetting_eng);
/* full GPU reset or device model level reset */ /* full GPU reset or device model level reset */
if (engine_mask == ALL_ENGINES || dmlr) { if (engine_mask == ALL_ENGINES || dmlr) {
intel_vgpu_select_submission_ops(vgpu, 0);
/*fence will not be reset during virtual reset */ /*fence will not be reset during virtual reset */
if (dmlr) { if (dmlr) {
...
@@ -798,22 +798,15 @@ struct cmd_node {
  */
 static inline u32 cmd_header_key(u32 x)
 {
-	u32 shift;
 	switch (x >> INSTR_CLIENT_SHIFT) {
 	default:
 	case INSTR_MI_CLIENT:
-		shift = STD_MI_OPCODE_SHIFT;
-		break;
+		return x >> STD_MI_OPCODE_SHIFT;
 	case INSTR_RC_CLIENT:
-		shift = STD_3D_OPCODE_SHIFT;
-		break;
+		return x >> STD_3D_OPCODE_SHIFT;
 	case INSTR_BC_CLIENT:
-		shift = STD_2D_OPCODE_SHIFT;
-		break;
+		return x >> STD_2D_OPCODE_SHIFT;
 	}
-	return x >> shift;
 }
 static int init_hash_table(struct intel_engine_cs *engine,
...
@@ -30,7 +30,7 @@
 #include <linux/sort.h>
 #include <linux/sched/mm.h>
 #include "intel_drv.h"
-#include "i915_guc_submission.h"
+#include "intel_guc_submission.h"
 static inline struct drm_i915_private *node_to_i915(struct drm_info_node *node)
 {
@@ -1974,7 +1974,6 @@ static int i915_context_status(struct seq_file *m, void *unused)
 			struct intel_context *ce = &ctx->engine[engine->id];
 			seq_printf(m, "%s: ", engine->name);
-			seq_putc(m, ce->initialised ? 'I' : 'i');
 			if (ce->state)
 				describe_obj(m, ce->state->obj);
 			if (ce->ring)
@@ -2434,7 +2433,7 @@ static void i915_guc_log_info(struct seq_file *m,
 static void i915_guc_client_info(struct seq_file *m,
 				 struct drm_i915_private *dev_priv,
-				 struct i915_guc_client *client)
+				 struct intel_guc_client *client)
 {
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
@@ -2484,6 +2483,8 @@ static int i915_guc_info(struct seq_file *m, void *data)
 	seq_printf(m, "\nGuC execbuf client @ %p:\n", guc->execbuf_client);
 	i915_guc_client_info(m, dev_priv, guc->execbuf_client);
+	seq_printf(m, "\nGuC preempt client @ %p:\n", guc->preempt_client);
+	i915_guc_client_info(m, dev_priv, guc->preempt_client);
 	i915_guc_log_info(m, dev_priv);
@@ -2497,7 +2498,7 @@ static int i915_guc_stage_pool(struct seq_file *m, void *data)
 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
 	const struct intel_guc *guc = &dev_priv->guc;
 	struct guc_stage_desc *desc = guc->stage_desc_pool_vaddr;
-	struct i915_guc_client *client = guc->execbuf_client;
+	struct intel_guc_client *client = guc->execbuf_client;
 	unsigned int tmp;
 	int index;
@@ -2734,39 +2735,76 @@ static int i915_sink_crc(struct seq_file *m, void *data)
struct intel_connector *connector; struct intel_connector *connector;
struct drm_connector_list_iter conn_iter; struct drm_connector_list_iter conn_iter;
struct intel_dp *intel_dp = NULL; struct intel_dp *intel_dp = NULL;
struct drm_modeset_acquire_ctx ctx;
int ret; int ret;
u8 crc[6]; u8 crc[6];
drm_modeset_lock_all(dev); drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
drm_connector_list_iter_begin(dev, &conn_iter); drm_connector_list_iter_begin(dev, &conn_iter);
for_each_intel_connector_iter(connector, &conn_iter) { for_each_intel_connector_iter(connector, &conn_iter) {
struct drm_crtc *crtc; struct drm_crtc *crtc;
struct drm_connector_state *state;
struct intel_crtc_state *crtc_state;
if (!connector->base.state->best_encoder) if (connector->base.connector_type != DRM_MODE_CONNECTOR_eDP)
continue; continue;
crtc = connector->base.state->crtc; retry:
if (!crtc->state->active) ret = drm_modeset_lock(&dev->mode_config.connection_mutex, &ctx);
if (ret)
goto err;
state = connector->base.state;
if (!state->best_encoder)
continue; continue;
if (connector->base.connector_type != DRM_MODE_CONNECTOR_eDP) crtc = state->crtc;
ret = drm_modeset_lock(&crtc->mutex, &ctx);
if (ret)
goto err;
crtc_state = to_intel_crtc_state(crtc->state);
if (!crtc_state->base.active)
continue; continue;
intel_dp = enc_to_intel_dp(connector->base.state->best_encoder); /*
* We need to wait for all crtc updates to complete, to make
* sure any pending modesets and plane updates are completed.
*/
if (crtc_state->base.commit) {
ret = wait_for_completion_interruptible(&crtc_state->base.commit->hw_done);
ret = intel_dp_sink_crc(intel_dp, crc); if (ret)
goto err;
}
intel_dp = enc_to_intel_dp(state->best_encoder);
ret = intel_dp_sink_crc(intel_dp, crtc_state, crc);
if (ret) if (ret)
goto out; goto err;
seq_printf(m, "%02x%02x%02x%02x%02x%02x\n", seq_printf(m, "%02x%02x%02x%02x%02x%02x\n",
crc[0], crc[1], crc[2], crc[0], crc[1], crc[2],
crc[3], crc[4], crc[5]); crc[3], crc[4], crc[5]);
goto out; goto out;
err:
if (ret == -EDEADLK) {
ret = drm_modeset_backoff(&ctx);
if (!ret)
goto retry;
}
goto out;
} }
ret = -ENODEV; ret = -ENODEV;
out: out:
drm_connector_list_iter_end(&conn_iter); drm_connector_list_iter_end(&conn_iter);
drm_modeset_unlock_all(dev); drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
return ret; return ret;
 }
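The rewritten i915_sink_crc() above replaces drm_modeset_lock_all() with the DRM acquire-context pattern. A compact sketch of that pattern follows; the function and variable names here are illustrative, not taken from the patch:

	static int lock_crtc_with_backoff(struct drm_crtc *crtc)
	{
		struct drm_modeset_acquire_ctx ctx;
		int ret;

		drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
	retry:
		ret = drm_modeset_lock(&crtc->mutex, &ctx);
		if (ret == -EDEADLK) {
			/* drop every lock held in this context, wait, then retry */
			ret = drm_modeset_backoff(&ctx);
			if (!ret)
				goto retry;
		}

		/* ... read or update state while the lock is held ... */

		drm_modeset_drop_locks(&ctx);
		drm_modeset_acquire_fini(&ctx);
		return ret;
	}

On -EDEADLK the backoff releases every lock taken through the context so another thread can make progress, which is why the new debugfs code can take connection_mutex and each crtc->mutex piecemeal instead of locking the whole device.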
@@ -3049,7 +3087,7 @@ static void intel_connector_info(struct seq_file *m,
 		break;
 	case DRM_MODE_CONNECTOR_HDMIA:
 		if (intel_encoder->type == INTEL_OUTPUT_HDMI ||
-		    intel_encoder->type == INTEL_OUTPUT_UNKNOWN)
+		    intel_encoder->type == INTEL_OUTPUT_DDI)
 			intel_hdmi_info(m, intel_connector);
 		break;
 	default:
@@ -3244,6 +3282,8 @@ static int i915_engine_info(struct seq_file *m, void *unused)
 		   yesno(dev_priv->gt.awake));
 	seq_printf(m, "Global active requests: %d\n",
 		   dev_priv->gt.active_requests);
+	seq_printf(m, "CS timestamp frequency: %u kHz\n",
+		   dev_priv->info.cs_timestamp_frequency_khz);
 	p = drm_seq_file_printer(m);
 	for_each_engine(engine, dev_priv, id)
@@ -3601,7 +3641,7 @@ static int i915_dp_mst_info(struct seq_file *m, void *unused)
 			continue;
 		seq_printf(m, "MST Source Port %c\n",
-			   port_name(intel_dig_port->port));
+			   port_name(intel_dig_port->base.port));
 		drm_dp_mst_dump_topology(m, &intel_dig_port->dp.mst_mgr);
 	}
 	drm_connector_list_iter_end(&conn_iter);
@@ -4448,6 +4488,61 @@ static void cherryview_sseu_device_status(struct drm_i915_private *dev_priv,
 	}
 }
static void gen10_sseu_device_status(struct drm_i915_private *dev_priv,
struct sseu_dev_info *sseu)
{
const struct intel_device_info *info = INTEL_INFO(dev_priv);
int s_max = 6, ss_max = 4;
int s, ss;
u32 s_reg[s_max], eu_reg[2 * s_max], eu_mask[2];
for (s = 0; s < s_max; s++) {
/*
* FIXME: Valid SS Mask respects the spec and read
* only valid bits for those registers, excluding reserverd
* although this seems wrong because it would leave many
* subslices without ACK.
*/
s_reg[s] = I915_READ(GEN10_SLICE_PGCTL_ACK(s)) &
GEN10_PGCTL_VALID_SS_MASK(s);
eu_reg[2 * s] = I915_READ(GEN10_SS01_EU_PGCTL_ACK(s));
eu_reg[2 * s + 1] = I915_READ(GEN10_SS23_EU_PGCTL_ACK(s));
}
eu_mask[0] = GEN9_PGCTL_SSA_EU08_ACK |
GEN9_PGCTL_SSA_EU19_ACK |
GEN9_PGCTL_SSA_EU210_ACK |
GEN9_PGCTL_SSA_EU311_ACK;
eu_mask[1] = GEN9_PGCTL_SSB_EU08_ACK |
GEN9_PGCTL_SSB_EU19_ACK |
GEN9_PGCTL_SSB_EU210_ACK |
GEN9_PGCTL_SSB_EU311_ACK;
for (s = 0; s < s_max; s++) {
if ((s_reg[s] & GEN9_PGCTL_SLICE_ACK) == 0)
/* skip disabled slice */
continue;
sseu->slice_mask |= BIT(s);
sseu->subslice_mask = info->sseu.subslice_mask;
for (ss = 0; ss < ss_max; ss++) {
unsigned int eu_cnt;
if (!(s_reg[s] & (GEN9_PGCTL_SS_ACK(ss))))
/* skip disabled subslice */
continue;
eu_cnt = 2 * hweight32(eu_reg[2 * s + ss / 2] &
eu_mask[ss % 2]);
sseu->eu_total += eu_cnt;
sseu->eu_per_subslice = max_t(unsigned int,
sseu->eu_per_subslice,
eu_cnt);
}
}
}
 static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 				    struct sseu_dev_info *sseu)
 {
@@ -4483,7 +4578,7 @@ static void gen9_sseu_device_status(struct drm_i915_private *dev_priv,
 		sseu->slice_mask |= BIT(s);
-		if (IS_GEN9_BC(dev_priv) || IS_CANNONLAKE(dev_priv))
+		if (IS_GEN9_BC(dev_priv))
 			sseu->subslice_mask =
 				INTEL_INFO(dev_priv)->sseu.subslice_mask;
@@ -4589,8 +4684,10 @@ static int i915_sseu_status(struct seq_file *m, void *unused)
 		cherryview_sseu_device_status(dev_priv, &sseu);
 	} else if (IS_BROADWELL(dev_priv)) {
 		broadwell_sseu_device_status(dev_priv, &sseu);
-	} else if (INTEL_GEN(dev_priv) >= 9) {
+	} else if (IS_GEN9(dev_priv)) {
 		gen9_sseu_device_status(dev_priv, &sseu);
+	} else if (INTEL_GEN(dev_priv) >= 10) {
+		gen10_sseu_device_status(dev_priv, &sseu);
 	}
 	intel_runtime_pm_put(dev_priv);
...
@@ -28,7 +28,11 @@
 #include <linux/bug.h>
 #ifdef CONFIG_DRM_I915_DEBUG_GEM
-#define GEM_BUG_ON(expr) BUG_ON(expr)
+#define GEM_BUG_ON(condition) do { if (unlikely((condition))) { \
+		printk(KERN_ERR "GEM_BUG_ON(%s)\n", __stringify(condition)); \
+		BUG(); \
+	} \
+} while(0)
 #define GEM_WARN_ON(expr) WARN_ON(expr)
 #define GEM_DEBUG_DECL(var) var
@@ -44,6 +48,12 @@
 #define GEM_DEBUG_BUG_ON(expr)
 #endif
+#if IS_ENABLED(CONFIG_DRM_I915_TRACE_GEM)
+#define GEM_TRACE(...) trace_printk(__VA_ARGS__)
+#else
+#define GEM_TRACE(...) do { } while (0)
+#endif
 #define I915_NUM_ENGINES 5
 #endif /* __I915_GEM_H__ */
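A hypothetical caller, only to show what the reworked macros change: with CONFIG_DRM_I915_DEBUG_GEM enabled the failing condition is now printed before BUG(), and GEM_TRACE() compiles to nothing unless CONFIG_DRM_I915_TRACE_GEM is set. The function below is made up for illustration:

	static void sanity_check_engine(struct intel_engine_cs *engine)
	{
		/* on failure this logs "GEM_BUG_ON(engine->id >= I915_NUM_ENGINES)" before BUG() */
		GEM_BUG_ON(engine->id >= I915_NUM_ENGINES);
		GEM_TRACE("%s sanity-checked\n", engine->name);
	}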
...
@@ -157,7 +157,6 @@ struct i915_gem_context {
 		u32 *lrc_reg_state;
 		u64 lrc_desc;
 		int pin_count;
-		bool initialised;
 	} engine[I915_NUM_ENGINES];
 	/** ring_size: size for allocating the per-engine ring buffer */
@@ -292,6 +291,9 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
 int i915_gem_context_reset_stats_ioctl(struct drm_device *dev, void *data,
 				       struct drm_file *file);
+struct i915_gem_context *
+i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio);
 static inline struct i915_gem_context *
 i915_gem_context_get(struct i915_gem_context *ctx)
 {
...