Commit 5c37daf5 authored by Dave Airlie

Merge tag 'drm-intel-next-2017-01-09' of git://anongit.freedesktop.org/git/drm-intel into drm-next

More 4.11 stuff, holidays edition (i.e. not much):

- docs and cleanups for shared dpll code (Ander)
- some kerneldoc work (Chris)
- fbc by default on gen9+ too, yeah! (Paulo)
- fixes, polish and other small things all over gem code (Chris)
- and a few small things on top

Plus a backmerge, because Dave was enjoying time off too.

* tag 'drm-intel-next-2017-01-09' of git://anongit.freedesktop.org/git/drm-intel: (275 commits)
  drm/i915: Update DRIVER_DATE to 20170109
  drm/i915: Drain freed objects for mmap space exhaustion
  drm/i915: Purge loose pages if we run out of DMA remap space
  drm/i915: Fix phys pwrite for struct_mutex-less operation
  drm/i915: Simplify testing for am-I-the-kernel-context?
  drm/i915: Use range_overflows()
  drm/i915: Use fixed-sized types for stolen
  drm/i915: Use phys_addr_t for the address of stolen memory
  drm/i915: Consolidate checks for memcpy-from-wc support
  drm/i915: Only skip requests once a context is banned
  drm/i915: Move a few more utility macros to i915_utils.h
  drm/i915: Clear ret before unbinding in i915_gem_evict_something()
  drm/i915/guc: Exclude the upper end of the Global GTT for the GuC
  drm/i915: Move a few utility macros into a separate header
  drm/i915/execlists: Reorder execlists register enabling
  drm/i915: Assert that we do create the deferred context
  drm/i915: Assert all timeline requests are gone before fini
  drm/i915: Revoke fenced GTT mmapings across GPU reset
  drm/i915: enable FBC on gen9+ too
  drm/i915: actually drive the BDW reserved IDs
  ...
parents 3806a271 5d799acd
......@@ -213,6 +213,18 @@ Video BIOS Table (VBT)
.. kernel-doc:: drivers/gpu/drm/i915/intel_vbt_defs.h
:internal:
Display PLLs
------------
.. kernel-doc:: drivers/gpu/drm/i915/intel_dpll_mgr.c
:doc: Display PLLs
.. kernel-doc:: drivers/gpu/drm/i915/intel_dpll_mgr.c
:internal:
.. kernel-doc:: drivers/gpu/drm/i915/intel_dpll_mgr.h
:internal:
Memory Management and Command Submission
========================================
......@@ -356,4 +368,95 @@ switch_mm
.. kernel-doc:: drivers/gpu/drm/i915/i915_trace.h
:doc: switch_mm tracepoint
Perf
====
Overview
--------
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:doc: i915 Perf Overview
Comparison with Core Perf
-------------------------
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:doc: i915 Perf History and Comparison with Core Perf
i915 Driver Entry Points
------------------------
This section covers the entrypoints exported outside of i915_perf.c to
integrate with drm/i915 and to handle the `DRM_I915_PERF_OPEN` ioctl.
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_init
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_fini
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_register
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_unregister
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_open_ioctl
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_release
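The entry points documented above ultimately service the `DRM_I915_PERF_OPEN` ioctl. As a rough, hedged sketch of how userspace reaches i915_perf_open_ioctl() (the metrics-set id, OA format and exponent below are illustrative placeholders, not values taken from this merge):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <i915_drm.h>  /* from libdrm: DRM_IOCTL_I915_PERF_OPEN and friends */

/* Open an OA-sampling i915 perf stream on an already-open DRM fd.
 * Returns the new stream fd, or -1 with errno set. */
static int open_oa_stream(int drm_fd)
{
	uint64_t properties[] = {
		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
		DRM_I915_PERF_PROP_OA_METRICS_SET, 1,            /* placeholder metrics-set id */
		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A45_B8_C8,
		DRM_I915_PERF_PROP_OA_EXPONENT, 16,              /* placeholder sampling period */
	};
	struct drm_i915_perf_open_param param;

	memset(&param, 0, sizeof(param));
	param.flags = I915_PERF_FLAG_FD_CLOEXEC;
	param.num_properties = sizeof(properties) / (2 * sizeof(uint64_t));
	param.properties_ptr = (uintptr_t)properties;

	return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
}

On success the ioctl returns a new file descriptor representing the stream; the file operations on that fd are covered in the next subsection.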
i915 Perf Stream
----------------
This section covers the stream-semantics-agnostic structures and functions
for representing an i915 perf stream FD and associated file operations.
.. kernel-doc:: drivers/gpu/drm/i915/i915_drv.h
:functions: i915_perf_stream
.. kernel-doc:: drivers/gpu/drm/i915/i915_drv.h
:functions: i915_perf_stream_ops
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: read_properties_unlocked
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_open_ioctl_locked
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_destroy_locked
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_read
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_ioctl
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_enable_locked
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_disable_locked
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_poll
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_perf_poll_locked
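A similarly hedged sketch of the consuming side of these file operations: blocking with poll() and walking the record headers returned by read(), with error handling kept minimal:

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <i915_drm.h>  /* struct drm_i915_perf_record_header */

/* Drain whatever records are currently available on an i915 perf stream fd. */
static void read_oa_records(int stream_fd)
{
	uint8_t buf[64 * 1024];
	struct pollfd pfd = { .fd = stream_fd, .events = POLLIN };
	ssize_t len;
	size_t offset = 0;

	if (poll(&pfd, 1, 1000) <= 0)  /* wait up to 1s for data */
		return;

	len = read(stream_fd, buf, sizeof(buf));
	if (len <= 0)
		return;

	while (offset < (size_t)len) {
		const struct drm_i915_perf_record_header *hdr =
			(const void *)(buf + offset);

		if (hdr->type == DRM_I915_PERF_RECORD_SAMPLE)
			printf("OA sample, %u bytes\n", (unsigned)hdr->size);
		else
			printf("non-sample record, type %u\n", (unsigned)hdr->type);

		offset += hdr->size;  /* size includes the header itself */
	}
}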
i915 Perf Observation Architecture Stream
-----------------------------------------
.. kernel-doc:: drivers/gpu/drm/i915/i915_drv.h
:functions: i915_oa_ops
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_oa_stream_init
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_oa_read
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_oa_stream_enable
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_oa_stream_disable
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_oa_wait_unlocked
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:functions: i915_oa_poll_wait
All i915 Perf Internals
-----------------------
This section simply includes all currently documented i915 perf internals, in
no particular order; it may cover more minor utilities and platform-specific
details than are found in the higher-level sections above.
.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
:internal:
.. WARNING: DOCPROC directive not supported: !Cdrivers/gpu/drm/i915/i915_irq.c
......@@ -1420,8 +1420,10 @@ int intel_gmch_probe(struct pci_dev *bridge_pdev, struct pci_dev *gpu_pdev,
}
EXPORT_SYMBOL(intel_gmch_probe);
void intel_gtt_get(u64 *gtt_total, size_t *stolen_size,
phys_addr_t *mappable_base, u64 *mappable_end)
void intel_gtt_get(u64 *gtt_total,
u32 *stolen_size,
phys_addr_t *mappable_base,
u64 *mappable_end)
{
*gtt_total = intel_private.gtt_total_entries << PAGE_SHIFT;
*stolen_size = intel_private.stolen_size;
......
......@@ -19,9 +19,12 @@ config DRM_I915_DEBUG
bool "Enable additional driver debugging"
depends on DRM_I915
select PREEMPT_COUNT
select I2C_CHARDEV
select DRM_DP_AUX_CHARDEV
select X86_MSR # used by igt/pm_rpm
select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
select DRM_DEBUG_MM if DRM=y
select DRM_I915_SW_FENCE_DEBUG_OBJECTS
default n
help
Choose this option to turn on extra driver debugging that may affect
......@@ -43,3 +46,15 @@ config DRM_I915_DEBUG_GEM
If in doubt, say "N".
config DRM_I915_SW_FENCE_DEBUG_OBJECTS
bool "Enable additional driver debugging for fence objects"
depends on DRM_I915
select DEBUG_OBJECTS
default n
help
Choose this option to turn on extra driver debugging that may affect
performance but will catch some internal issues.
Recommended for driver developers only.
If in doubt, say "N".
......@@ -24,7 +24,7 @@ i915-y := i915_drv.o \
intel_runtime_pm.o
i915-$(CONFIG_COMPAT) += i915_ioc32.o
i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o
i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o intel_pipe_crc.o
# GEM code
i915-y += i915_cmd_parser.o \
......@@ -55,7 +55,8 @@ i915-y += i915_cmd_parser.o \
intel_uncore.o
# general-purpose microcontroller (GuC) support
i915-y += intel_guc_loader.o \
i915-y += intel_uc.o \
intel_guc_loader.o \
i915_guc_submission.o
# autogenerated null render state
......@@ -117,6 +118,10 @@ i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
# virtual gpu code
i915-y += i915_vgpu.o
# perf code
i915-y += i915_perf.o \
i915_oa_hsw.o
ifeq ($(CONFIG_DRM_I915_GVT),y)
i915-y += intel_gvt.o
include $(src)/gvt/Makefile
......
......@@ -73,12 +73,15 @@ static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm)
mutex_lock(&dev_priv->drm.struct_mutex);
search_again:
ret = drm_mm_insert_node_in_range_generic(&dev_priv->ggtt.base.mm,
node, size, 4096, 0,
node, size, 4096,
I915_COLOR_UNEVICTABLE,
start, end, search_flag,
alloc_flag);
if (ret) {
ret = i915_gem_evict_something(&dev_priv->ggtt.base,
size, 4096, 0, start, end, 0);
size, 4096,
I915_COLOR_UNEVICTABLE,
start, end, 0);
if (ret == 0 && ++retried < 3)
goto search_again;
......
......@@ -1602,7 +1602,7 @@ static int perform_bb_shadow(struct parser_exec_state *s)
return -ENOMEM;
entry_obj->obj =
i915_gem_object_create(&(s->vgpu->gvt->dev_priv->drm),
i915_gem_object_create(s->vgpu->gvt->dev_priv,
roundup(bb_size, PAGE_SIZE));
if (IS_ERR(entry_obj->obj)) {
ret = PTR_ERR(entry_obj->obj);
......@@ -2665,14 +2665,13 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
{
struct drm_device *dev = &wa_ctx->workload->vgpu->gvt->dev_priv->drm;
int ctx_size = wa_ctx->indirect_ctx.size;
unsigned long guest_gma = wa_ctx->indirect_ctx.guest_gma;
struct drm_i915_gem_object *obj;
int ret = 0;
void *map;
obj = i915_gem_object_create(dev,
obj = i915_gem_object_create(wa_ctx->workload->vgpu->gvt->dev_priv,
roundup(ctx_size + CACHELINE_BYTES,
PAGE_SIZE));
if (IS_ERR(obj))
......
......@@ -2200,7 +2200,7 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
MMIO_DFH(0x1217c, D_ALL, F_CMD_ACCESS, NULL, NULL);
MMIO_F(0x2290, 8, 0, 0, 0, D_HSW_PLUS, NULL, NULL);
MMIO_D(OACONTROL, D_HSW);
MMIO_D(GEN7_OACONTROL, D_HSW);
MMIO_D(0x2b00, D_BDW_PLUS);
MMIO_D(0x2360, D_BDW_PLUS);
MMIO_F(0x5200, 32, 0, 0, 0, D_ALL, NULL, NULL);
......
......@@ -549,18 +549,10 @@ int intel_gvt_init_workload_scheduler(struct intel_gvt *gvt)
void intel_vgpu_clean_gvt_context(struct intel_vgpu *vgpu)
{
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
atomic_notifier_chain_unregister(&vgpu->shadow_ctx->status_notifier,
&vgpu->shadow_ctx_notifier_block);
mutex_lock(&dev_priv->drm.struct_mutex);
/* a little hacky to mark as ctx closed */
vgpu->shadow_ctx->closed = true;
i915_gem_context_put(vgpu->shadow_ctx);
mutex_unlock(&dev_priv->drm.struct_mutex);
i915_gem_context_put_unlocked(vgpu->shadow_ctx);
}
int intel_vgpu_init_gvt_context(struct intel_vgpu *vgpu)
......
......@@ -86,6 +86,102 @@
* general bitmasking mechanism.
*/
/*
* A command that requires special handling by the command parser.
*/
struct drm_i915_cmd_descriptor {
/*
* Flags describing how the command parser processes the command.
*
* CMD_DESC_FIXED: The command has a fixed length if this is set,
* a length mask if not set
* CMD_DESC_SKIP: The command is allowed but does not follow the
* standard length encoding for the opcode range in
* which it falls
* CMD_DESC_REJECT: The command is never allowed
* CMD_DESC_REGISTER: The command should be checked against the
* register whitelist for the appropriate ring
* CMD_DESC_MASTER: The command is allowed if the submitting process
* is the DRM master
*/
u32 flags;
#define CMD_DESC_FIXED (1<<0)
#define CMD_DESC_SKIP (1<<1)
#define CMD_DESC_REJECT (1<<2)
#define CMD_DESC_REGISTER (1<<3)
#define CMD_DESC_BITMASK (1<<4)
#define CMD_DESC_MASTER (1<<5)
/*
* The command's unique identification bits and the bitmask to get them.
* This isn't strictly the opcode field as defined in the spec and may
* also include type, subtype, and/or subop fields.
*/
struct {
u32 value;
u32 mask;
} cmd;
/*
* The command's length. The command is either fixed length (i.e. does
* not include a length field) or has a length field mask. The flag
* CMD_DESC_FIXED indicates a fixed length. Otherwise, the command has
* a length mask. All command entries in a command table must include
* length information.
*/
union {
u32 fixed;
u32 mask;
} length;
/*
* Describes where to find a register address in the command to check
* against the ring's register whitelist. Only valid if flags has the
* CMD_DESC_REGISTER bit set.
*
* A non-zero step value implies that the command may access multiple
* registers in sequence (e.g. LRI), in that case step gives the
* distance in dwords between individual offset fields.
*/
struct {
u32 offset;
u32 mask;
u32 step;
} reg;
#define MAX_CMD_DESC_BITMASKS 3
/*
* Describes command checks where a particular dword is masked and
* compared against an expected value. If the command does not match
* the expected value, the parser rejects it. Only valid if flags has
* the CMD_DESC_BITMASK bit set. Only entries where mask is non-zero
* are valid.
*
* If the check specifies a non-zero condition_mask then the parser
* only performs the check when the bits specified by condition_mask
* are non-zero.
*/
struct {
u32 offset;
u32 mask;
u32 expected;
u32 condition_offset;
u32 condition_mask;
} bits[MAX_CMD_DESC_BITMASKS];
};
/*
* A table of commands requiring special handling by the command parser.
*
* Each engine has an array of tables. Each table consists of an array of
* command descriptors, which must be sorted with command opcodes in
* ascending order.
*/
struct drm_i915_cmd_table {
const struct drm_i915_cmd_descriptor *table;
int count;
};
#define STD_MI_OPCODE_SHIFT (32 - 9)
#define STD_3D_OPCODE_SHIFT (32 - 16)
#define STD_2D_OPCODE_SHIFT (32 - 10)
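A hedged sketch of how a table is assembled from the structures above; the opcodes, masks and offsets are made up for illustration and are not entries from the real i915 tables:

/* Illustrative only: a tiny table of two descriptors. The first is a
 * fixed-length one-dword command (opcode 0, i.e. a NOOP-style command);
 * the second is a variable-length command whose dword 1 holds a register
 * address that must be checked against the engine's whitelist. */
static const struct drm_i915_cmd_descriptor example_cmds[] = {
	{
		.flags = CMD_DESC_FIXED,
		.cmd = { .value = 0x000u << STD_MI_OPCODE_SHIFT,
			 .mask = 0x1ffu << STD_MI_OPCODE_SHIFT },
		.length = { .fixed = 1 },
	},
	{
		.flags = CMD_DESC_REGISTER,
		.cmd = { .value = 0x055u << STD_MI_OPCODE_SHIFT,   /* made-up opcode */
			 .mask = 0x1ffu << STD_MI_OPCODE_SHIFT },
		.length = { .mask = 0x3f },                        /* length in low 6 bits */
		.reg = { .offset = 1, .mask = 0x007ffffc },        /* register address in dword 1 */
	},
};

static const struct drm_i915_cmd_table example_cmd_table = {
	.table = example_cmds,
	.count = ARRAY_SIZE(example_cmds),
};

As noted above, entries must stay sorted by opcode so the parser can look up each command header efficiently.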
......@@ -450,7 +546,6 @@ static const struct drm_i915_reg_descriptor gen7_render_regs[] = {
REG64(PS_INVOCATION_COUNT),
REG64(PS_DEPTH_COUNT),
REG64_IDX(RING_TIMESTAMP, RENDER_RING_BASE),
REG32(OACONTROL), /* Only allowed for LRI and SRM. See below. */
REG64(MI_PREDICATE_SRC0),
REG64(MI_PREDICATE_SRC1),
REG32(GEN7_3DPRIM_END_OFFSET),
......@@ -559,7 +654,7 @@ static const struct drm_i915_reg_table hsw_blt_reg_tables[] = {
static u32 gen7_render_get_cmd_length_mask(u32 cmd_header)
{
u32 client = (cmd_header & INSTR_CLIENT_MASK) >> INSTR_CLIENT_SHIFT;
u32 client = cmd_header >> INSTR_CLIENT_SHIFT;
u32 subclient =
(cmd_header & INSTR_SUBCLIENT_MASK) >> INSTR_SUBCLIENT_SHIFT;
......@@ -578,7 +673,7 @@ static u32 gen7_render_get_cmd_length_mask(u32 cmd_header)
static u32 gen7_bsd_get_cmd_length_mask(u32 cmd_header)
{
u32 client = (cmd_header & INSTR_CLIENT_MASK) >> INSTR_CLIENT_SHIFT;
u32 client = cmd_header >> INSTR_CLIENT_SHIFT;
u32 subclient =
(cmd_header & INSTR_SUBCLIENT_MASK) >> INSTR_SUBCLIENT_SHIFT;
u32 op = (cmd_header & INSTR_26_TO_24_MASK) >> INSTR_26_TO_24_SHIFT;
......@@ -601,7 +696,7 @@ static u32 gen7_bsd_get_cmd_length_mask(u32 cmd_header)
static u32 gen7_blt_get_cmd_length_mask(u32 cmd_header)
{
u32 client = (cmd_header & INSTR_CLIENT_MASK) >> INSTR_CLIENT_SHIFT;
u32 client = cmd_header >> INSTR_CLIENT_SHIFT;
if (client == INSTR_MI_CLIENT)
return 0x3F;
......@@ -984,7 +1079,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
src = ERR_PTR(-ENODEV);
if (src_needs_clflush &&
i915_memcpy_from_wc((void *)(uintptr_t)batch_start_offset, NULL, 0)) {
i915_can_memcpy_from_wc(NULL, batch_start_offset, 0)) {
src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
if (!IS_ERR(src)) {
i915_memcpy_from_wc(dst,
......@@ -1036,32 +1131,10 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
return dst;
}
/**
* intel_engine_needs_cmd_parser() - should a given engine use software
* command parsing?
* @engine: the engine in question
*
* Only certain platforms require software batch buffer command parsing, and
* only when enabled via module parameter.
*
* Return: true if the engine requires software command parsing
*/
bool intel_engine_needs_cmd_parser(struct intel_engine_cs *engine)
{
if (!engine->needs_cmd_parser)
return false;
if (!USES_PPGTT(engine->i915))
return false;
return (i915.enable_cmd_parser == 1);
}
static bool check_cmd(const struct intel_engine_cs *engine,
const struct drm_i915_cmd_descriptor *desc,
const u32 *cmd, u32 length,
const bool is_master,
bool *oacontrol_set)
const bool is_master)
{
if (desc->flags & CMD_DESC_SKIP)
return true;
......@@ -1098,31 +1171,6 @@ static bool check_cmd(const struct intel_engine_cs *engine,
return false;
}
/*
* OACONTROL requires some special handling for
* writes. We want to make sure that any batch which
* enables OA also disables it before the end of the
* batch. The goal is to prevent one process from
* snooping on the perf data from another process. To do
* that, we need to check the value that will be written
* to the register. Hence, limit OACONTROL writes to
* only MI_LOAD_REGISTER_IMM commands.
*/
if (reg_addr == i915_mmio_reg_offset(OACONTROL)) {
if (desc->cmd.value == MI_LOAD_REGISTER_MEM) {
DRM_DEBUG_DRIVER("CMD: Rejected LRM to OACONTROL\n");
return false;
}
if (desc->cmd.value == MI_LOAD_REGISTER_REG) {
DRM_DEBUG_DRIVER("CMD: Rejected LRR to OACONTROL\n");
return false;
}
if (desc->cmd.value == MI_LOAD_REGISTER_IMM(1))
*oacontrol_set = (cmd[offset + 1] != 0);
}
/*
* Check the value written to the register against the
* allowed mask/value pair given in the whitelist entry.
......@@ -1214,7 +1262,6 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
u32 *cmd, *batch_end;
struct drm_i915_cmd_descriptor default_desc = noop_desc;
const struct drm_i915_cmd_descriptor *desc = &default_desc;
bool oacontrol_set = false; /* OACONTROL tracking. See check_cmd() */
bool needs_clflush_after = false;
int ret = 0;
......@@ -1270,20 +1317,14 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
break;
}
if (!check_cmd(engine, desc, cmd, length, is_master,
&oacontrol_set)) {
ret = -EINVAL;
if (!check_cmd(engine, desc, cmd, length, is_master)) {
ret = -EACCES;
break;
}
cmd += length;
}
if (oacontrol_set) {
DRM_DEBUG_DRIVER("CMD: batch set OACONTROL but did not clear it\n");
ret = -EINVAL;
}
if (cmd >= batch_end) {
DRM_DEBUG_DRIVER("CMD: Got to the end of the buffer w/o a BBE cmd!\n");
ret = -EINVAL;
......@@ -1313,7 +1354,7 @@ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv)
/* If the command parser is not enabled, report 0 - unsupported */
for_each_engine(engine, dev_priv, id) {
if (intel_engine_needs_cmd_parser(engine)) {
if (engine->needs_cmd_parser) {
active = true;
break;
}
......@@ -1333,6 +1374,11 @@ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv)
* 5. GPGPU dispatch compute indirect registers.
* 6. TIMESTAMP register and Haswell CS GPR registers
* 7. Allow MI_LOAD_REGISTER_REG between whitelisted registers.
* 8. Don't report cmd_check() failures as EINVAL errors to userspace;
* rely on the HW to NOOP disallowed commands as it would without
* the parser enabled.
* 9. Don't whitelist or handle oacontrol specially, as ownership
* for oacontrol state is moving to i915-perf.
*/
return 7;
return 9;
}
......@@ -27,8 +27,10 @@
#ifdef CONFIG_DRM_I915_DEBUG_GEM
#define GEM_BUG_ON(expr) BUG_ON(expr)
#define GEM_WARN_ON(expr) WARN_ON(expr)
#else
#define GEM_BUG_ON(expr) do { } while (0)
#define GEM_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
#define GEM_WARN_ON(expr) (BUILD_BUG_ON_INVALID(expr), 0)
#endif
#define I915_NUM_ENGINES 5
......
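The switch from an empty statement to BUILD_BUG_ON_INVALID() means the asserted expression is still parsed and type-checked (so its variables never become "unused") while generating no code. A standalone sketch of the same pattern outside the kernel, using sizeof() for the non-debug case:

#include <assert.h>
#include <stdio.h>

#ifdef MY_DEBUG
#define MY_BUG_ON(expr) assert(!(expr))
#else
/* sizeof() never evaluates the expression at run time, but the compiler
 * still parses it, so typos and unused-variable warnings are caught. */
#define MY_BUG_ON(expr) ((void)sizeof(!!(expr)))
#endif

int main(void)
{
	int refcount = 1;

	MY_BUG_ON(refcount < 0);  /* checked only when built with -DMY_DEBUG */
	printf("refcount=%d\n", refcount);
	return 0;
}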
/*
* Copyright © 2016 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
*/
#ifndef __I915_GEM_CONTEXT_H__
#define __I915_GEM_CONTEXT_H__
#include <linux/bitops.h>
#include <linux/list.h>
struct pid;
struct drm_device;
struct drm_file;
struct drm_i915_private;
struct drm_i915_file_private;
struct i915_hw_ppgtt;
struct i915_vma;
struct intel_ring;
#define DEFAULT_CONTEXT_HANDLE 0
/**
* struct i915_gem_context - client state
*
* The struct i915_gem_context represents the combined view of the driver and
* logical hardware state for a particular client.
*/
struct i915_gem_context {
/** i915: i915 device backpointer */
struct drm_i915_private *i915;
/** file_priv: owning file descriptor */
struct drm_i915_file_private *file_priv;
/**
* @ppgtt: unique address space (GTT)
*
* In full-ppgtt mode, each context has its own address space ensuring
* complete separation of one client from all others.
*
* In other modes, this is a NULL pointer with the expectation that
* the caller uses the shared global GTT.
*/
struct i915_hw_ppgtt *ppgtt;
/**
* @pid: process id of creator
*
* Note that who created the context may not be the principal user,
* as the context may be shared across a local socket. However,
* that should only affect the default context, all contexts created
* explicitly by the client are expected to be isolated.
*/
struct pid *pid;
/**
* @name: arbitrary name
*
* A name is constructed for the context from the creator's process
* name, pid and user handle in order to uniquely identify the
* context in messages.
*/
const char *name;
/** link: place within &drm_i915_private.context_list */
struct list_head link;
/**
* @ref: reference count
*
* A reference to a context is held by both the client who created it
* and on each request submitted to the hardware using the request
* (to ensure the hardware has access to the state until it has
* finished all pending writes). See i915_gem_context_get() and
* i915_gem_context_put() for access.
*/
struct kref ref;
/**
* @flags: small set of booleans
*/
unsigned long flags;
#define CONTEXT_NO_ZEROMAP BIT(0)
#define CONTEXT_NO_ERROR_CAPTURE 1
#define CONTEXT_CLOSED 2
#define CONTEXT_BANNABLE 3
#define CONTEXT_BANNED 4
#define CONTEXT_FORCE_SINGLE_SUBMISSION 5
/**
* @hw_id: - unique identifier for the context
*
* The hardware needs to uniquely identify the context for a few
* functions like fault reporting, PASID, scheduling. The
* &drm_i915_private.context_hw_ida is used to assign a unique
* id for the lifetime of the context.
*/
unsigned int hw_id;
/**
* @user_handle: userspace identifier
*
* A unique per-file identifier is generated from
* &drm_i915_file_private.contexts.
*/
u32 user_handle;
/**
* @priority: execution and service priority
*
* All clients are equal, but some are more equal than others!
*
* Requests from a context with a greater (more positive) value of
* @priority will be executed before those with a lower @priority
* value, forming a simple QoS.
*
* The &drm_i915_private.kernel_context is assigned the lowest priority.
*/
int priority;
/** ggtt_alignment: alignment restriction for context objects */
u32 ggtt_alignment;
/** ggtt_offset_bias: placement restriction for context objects */
u32 ggtt_offset_bias;
/** engine: per-engine logical HW state */
struct intel_context {
struct i915_vma *state;
struct intel_ring *ring;
u32 *lrc_reg_state;
u64 lrc_desc;
int pin_count;
bool initialised;
} engine[I915_NUM_ENGINES];
/** ring_size: size for allocating the per-engine ring buffer */
u32 ring_size;
/** desc_template: invariant fields for the HW context descriptor */
u32 desc_template;
/** status_notifier: list of callbacks for context-switch changes */
struct atomic_notifier_head status_notifier;
/** guilty_count: How many times this context has caused a GPU hang. */
unsigned int guilty_count;
/**
* @active_count: How many times this context was active during a GPU
* hang, but did not cause it.
*/
unsigned int active_count;
#define CONTEXT_SCORE_GUILTY 10
#define CONTEXT_SCORE_BAN_THRESHOLD 40
/** ban_score: Accumulated score of all hangs caused by this context. */
int ban_score;
/** remap_slice: Bitmask of cache lines that need remapping */
u8 remap_slice;
};
static inline bool i915_gem_context_is_closed(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_CLOSED, &ctx->flags);
}
static inline void i915_gem_context_set_closed(struct i915_gem_context *ctx)
{
GEM_BUG_ON(i915_gem_context_is_closed(ctx));
__set_bit(CONTEXT_CLOSED, &ctx->flags);
}
static inline bool i915_gem_context_no_error_capture(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_NO_ERROR_CAPTURE, &ctx->flags);
}
static inline void i915_gem_context_set_no_error_capture(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_NO_ERROR_CAPTURE, &ctx->flags);
}
static inline void i915_gem_context_clear_no_error_capture(struct i915_gem_context *ctx)
{
__clear_bit(CONTEXT_NO_ERROR_CAPTURE, &ctx->flags);
}
static inline bool i915_gem_context_is_bannable(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_BANNABLE, &ctx->flags);
}
static inline void i915_gem_context_set_bannable(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_BANNABLE, &ctx->flags);
}
static inline void i915_gem_context_clear_bannable(struct i915_gem_context *ctx)
{
__clear_bit(CONTEXT_BANNABLE, &ctx->flags);
}
static inline bool i915_gem_context_is_banned(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_BANNED, &ctx->flags);
}
static inline void i915_gem_context_set_banned(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_BANNED, &ctx->flags);
}
static inline bool i915_gem_context_force_single_submission(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags);
}
static inline void i915_gem_context_set_force_single_submission(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags);
}
static inline bool i915_gem_context_is_default(const struct i915_gem_context *c)
{
return c->user_handle == DEFAULT_CONTEXT_HANDLE;
}
static inline bool i915_gem_context_is_kernel(struct i915_gem_context *ctx)
{
return !ctx->file_priv;
}
/* i915_gem_context.c */
int __must_check i915_gem_context_init(struct drm_i915_private *dev_priv);
void i915_gem_context_lost(struct drm_i915_private *dev_priv);
void i915_gem_context_fini(struct drm_i915_private *dev_priv);
int i915_gem_context_open(struct drm_device *dev, struct drm_file *file);
void i915_gem_context_close(struct drm_device *dev, struct drm_file *file);
int i915_switch_context(struct drm_i915_gem_request *req);
int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv);
void i915_gem_context_free(struct kref *ctx_ref);
struct i915_gem_context *
i915_gem_context_create_gvt(struct drm_device *dev);
int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_context_reset_stats_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
#endif /* !__I915_GEM_CONTEXT_H__ */
......@@ -278,7 +278,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
get_dma_buf(dma_buf);
obj = i915_gem_object_alloc(dev);
obj = i915_gem_object_alloc(to_i915(dev));
if (obj == NULL) {
ret = -ENOMEM;
goto fail_detach;
......
......@@ -274,6 +274,7 @@ static void eb_destroy(struct eb_vmas *eb)
exec_list);
list_del_init(&vma->exec_list);
i915_gem_execbuffer_unreserve_vma(vma);
vma->exec_entry = NULL;
i915_vma_put(vma);
}
kfree(eb);
......@@ -437,7 +438,7 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
memset(&cache->node, 0, sizeof(cache->node));
ret = drm_mm_insert_node_in_range_generic
(&ggtt->base.mm, &cache->node,
4096, 0, 0,
4096, 0, I915_COLOR_UNEVICTABLE,
0, ggtt->mappable_end,
DRM_MM_SEARCH_DEFAULT,
DRM_MM_CREATE_DEFAULT);
......@@ -1232,14 +1233,12 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
struct intel_engine_cs *engine, const u32 ctx_id)
{
struct i915_gem_context *ctx;
struct i915_ctx_hang_stats *hs;
ctx = i915_gem_context_lookup(file->driver_priv, ctx_id);
if (IS_ERR(ctx))
return ctx;
hs = &ctx->hang_stats;
if (hs->banned) {
if (i915_gem_context_is_banned(ctx)) {
DRM_DEBUG("Context %u tried to submit while banned\n", ctx_id);
return ERR_PTR(-EIO);
}
......@@ -1260,6 +1259,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
struct drm_i915_gem_object *obj = vma->obj;
const unsigned int idx = req->engine->id;
lockdep_assert_held(&req->i915->drm.struct_mutex);
GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
/* Add a reference if we're newly entering the active list.
......@@ -1715,7 +1715,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
}
params->args_batch_start_offset = args->batch_start_offset;
if (intel_engine_needs_cmd_parser(engine) && args->batch_len) {
if (engine->needs_cmd_parser && args->batch_len) {
struct i915_vma *vma;
vma = i915_gem_execbuffer_parse(engine, &shadow_exec_entry,
......
......@@ -290,7 +290,7 @@ i915_vma_put_fence(struct i915_vma *vma)
{
struct drm_i915_fence_reg *fence = vma->fence;
assert_rpm_wakelock_held(to_i915(vma->vm->dev));
assert_rpm_wakelock_held(vma->vm->i915);
if (!fence)
return 0;
......@@ -313,7 +313,7 @@ static struct drm_i915_fence_reg *fence_find(struct drm_i915_private *dev_priv)
}
/* Wait for completion of pending flips which consume fences */
if (intel_has_pending_fb_unpin(&dev_priv->drm))
if (intel_has_pending_fb_unpin(dev_priv))
return ERR_PTR(-EAGAIN);
return ERR_PTR(-EDEADLK);
......@@ -346,7 +346,7 @@ i915_vma_get_fence(struct i915_vma *vma)
/* Note that we revoke fences on runtime suspend. Therefore the user
* must keep the device awake whilst using the fence.
*/
assert_rpm_wakelock_held(to_i915(vma->vm->dev));
assert_rpm_wakelock_held(vma->vm->i915);
/* Just update our place in the LRU if our fence is getting reused. */
if (vma->fence) {
......@@ -357,7 +357,7 @@ i915_vma_get_fence(struct i915_vma *vma)
return 0;
}
} else if (set) {
fence = fence_find(to_i915(vma->vm->dev));
fence = fence_find(vma->vm->i915);
if (IS_ERR(fence))
return PTR_ERR(fence);
} else
......@@ -366,6 +366,30 @@ i915_vma_get_fence(struct i915_vma *vma)
return fence_update(fence, set);
}
/**
* i915_gem_revoke_fences - revoke fence state
* @dev_priv: i915 device private
*
* Removes all GTT mmappings via the fence registers. This forces any user
* of the fence to reacquire that fence before continuing with their access.
* One use is during GPU reset where the fence register is lost and we need to
* revoke concurrent userspace access via GTT mmaps until the hardware has been
* reset and the fence registers have been restored.
*/
void i915_gem_revoke_fences(struct drm_i915_private *dev_priv)
{
int i;
lockdep_assert_held(&dev_priv->drm.struct_mutex);
for (i = 0; i < dev_priv->num_fence_regs; i++) {
struct drm_i915_fence_reg *fence = &dev_priv->fence_regs[i];
if (fence->vma)
i915_gem_release_mmap(fence->vma->obj);
}
}
/**
* i915_gem_restore_fences - restore fence state
* @dev_priv: i915 device private
......@@ -512,8 +536,8 @@ i915_gem_detect_bit_6_swizzle(struct drm_i915_private *dev_priv)
*/
swizzle_x = I915_BIT_6_SWIZZLE_NONE;
swizzle_y = I915_BIT_6_SWIZZLE_NONE;
} else if (IS_MOBILE(dev_priv) || (IS_GEN3(dev_priv) &&
!IS_G33(dev_priv))) {
} else if (IS_MOBILE(dev_priv) ||
IS_I915G(dev_priv) || IS_I945G(dev_priv)) {
uint32_t dcc;
/* On 9xx chipsets, channel interleave by the CPU is
......