1. 05 Dec, 2020 2 commits
  2. 04 Dec, 2020 5 commits
  3. 03 Dec, 2020 1 commit
  4. 02 Dec, 2020 6 commits
    • Revert "drm/i915/lmem: Limit block size to 4G" · 7d1a31e1
      Chris Wilson authored
      Mixing I915_ALLOC_CONTIGUOUS and I915_ALLOC_MAX_SEGMENT_SIZE fared
      badly. The two directives conflict, with the contiguous request setting
      the min_order to the full size of the object, and the max-segment-size
      setting the max_order to the limit of the DMA mapper. This results in a
      situation where max_order < min_order, causing our sanity checks to
      fail.
      
      Instead of limiting the buddy block size, the previous patch splits the
      oversized buddy block into multiple scatterlist elements.
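      As an illustration, the conflict arises because the two directives push
      the buddy order bounds in opposite directions (a hedged sketch; the
      variable names below are assumptions, not the allocator's exact code):

              unsigned int min_order = 0;
              unsigned int max_order = ilog2(max_segment) - ilog2(chunk_size);

              if (flags & I915_ALLOC_CONTIGUOUS)
                      /* one block must span the whole object */
                      min_order = ilog2(size) - ilog2(chunk_size);

              /* a contiguous object larger than max_segment then trips this */
              GEM_BUG_ON(min_order > max_order);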
      
      Fixes: d2cf0125 ("drm/i915/lmem: Limit block size to 4G")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201202173444.14903-2-chris@chris-wilson.co.uk
    • drm/i915/gem: Limit lmem scatterlist elements to UINT_MAX · a2843b3b
      Chris Wilson authored
      Adhere to the i915_sg_max_segment() limit on the lengths of individual
      scatterlist elements, and in doing so split up very large chunks of lmem
      into manageable pieces for the dma-mapping backend.
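      A hedged sketch of the splitting (variable names assumed; each element is
      capped at the maximum segment length, not the exact hunk):

              while (block_size) {
                      u64 len = min_t(u64, block_size, max_segment);

                      sg->length = len;
                      sg_dma_len(sg) = len;
                      sg_dma_address(sg) = offset;

                      offset += len;
                      block_size -= len;

                      if (block_size)
                              sg = sg_next(sg);
              }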
      Reported-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
      Suggested-by: Matthew Auld <matthew.auld@intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201202173444.14903-1-chris@chris-wilson.co.uk
    • drm/i915/selftests: Tidy prng constructor for client blits · 840291a7
      Chris Wilson authored
      Since we only initialise the prng once within the scope of the selftest,
      we can use the default initialiser.
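      Roughly, instead of declaring and seeding the state by hand, e.g.

              struct rnd_state prng;

              prandom_seed_state(&prng, i915_selftest.random_seed);

      the selftest can use the default initialiser (a sketch assuming the
      I915_RND_STATE() helper from the i915 selftest infrastructure):

              I915_RND_STATE(prng);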
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201202130406.18461-1-chris@chris-wilson.co.uk
    • drm/i915/pmu: Deprecate I915_PMU_LAST and optimize state tracking · 348fb0cb
      Tvrtko Ursulin authored
      Adding any kind of "last" ABI marker is usually a mistake, one I repeated
      when implementing the PMU because it felt convenient at the time.
      
      This patch marks I915_PMU_LAST as deprecated and stops the internal
      implementation using it for sizing the event status bitmask and array.
      
      The new way of sizing the fields is a bit less elegant, but it avoids
      reserving slots for events we are not interested in tracking, and as such
      saves some runtime space. Adding a new sampling event should be a rare,
      deliberate change, and the extra plumbing it needs will be easily caught
      in testing. Existing asserts against the bitfield and array sizes keep
      the code safe.
      
      The first event to receive this new treatment is the interrupt count,
      which neither needs any tracking in the i915 PMU nor requires waking up
      the GPU to read.
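      A hedged sketch of the idea (names illustrative, not necessarily those
      used in i915_pmu.h): internal storage is sized by an internal enum of
      tracked samplers rather than by the uapi marker.

              enum {
                      __I915_SAMPLE_FREQ_ACT = 0,
                      __I915_SAMPLE_FREQ_REQ,
                      __I915_SAMPLE_RC6,
                      __I915_SAMPLE_RC6_LAST_REPORTED,
                      __I915_NUM_PMU_SAMPLERS /* no slots for untracked events */
              };

              struct i915_pmu_sample {
                      u64 cur;
              } sample[__I915_NUM_PMU_SAMPLERS];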
      
      v2:
       * Streamline helper names. (Chris)
      
      v3:
       * Comment which events need tracking. (Chris)
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201201131757.206367-1-tvrtko.ursulin@linux.intel.com
    • drm/i915/gem: Report error for vmap() failure · 37df0edf
      Chris Wilson authored
      Convert the NULL pointer from a failed vmap() to ERR_PTR(-ENOMEM) for
      propagation.
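      The shape of the fix is simply (illustrative, not the exact hunk):

              ptr = vmap(pages, n_pages, 0, PAGE_KERNEL);
              if (!ptr)
                      return ERR_PTR(-ENOMEM);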
      
      <1> [269.830447] BUG: kernel NULL pointer dereference, address: 0000000000000000
      <1> [269.830455] #PF: supervisor write access in kernel mode
      <1> [269.830457] #PF: error_code(0x0002) - not-present page
      <6> [269.830459] PGD 0 P4D 0
      <4> [269.830465] Oops: 0002 [#1] PREEMPT SMP PTI
      <4> [269.830469] CPU: 3 PID: 5789 Comm: i915_selftest Tainted: G     U            5.10.0-rc6-CI-CI_DRM_9412+ #1
      <4> [269.830472] Hardware name: Intel Corp. Geminilake/GLK RVP2 LP4SD (07), BIOS GELKRVPA.X64.0062.B30.1708222146 08/22/2017
      <4> [269.830636] RIP: 0010:igt_client_fill+0x1b9/0x5f0 [i915]
      <4> [269.830640] Code: e8 0c 32 02 00 48 89 c5 48 3d 00 f0 ff ff 0f 87 e9 02 00 00 48 8b 8b 78 06 00 00 44 89 f0 48 89 ef 35 af be ad de 48 c1 e9 02 <f3> ab 0f b6 83 80 03 00 00 89 c2 c0 ea 03 83 e2 02 75 09 83 c8 20
      <4> [269.830642] RSP: 0018:ffffc900007a79e8 EFLAGS: 00010206
      <4> [269.830645] RAX: 00000000df0bf37b RBX: ffff88811d8af3c0 RCX: 00000000010afc00
      <4> [269.830647] RDX: 0000000000000000 RSI: ffffffff822f2b17 RDI: 0000000000000000
      <4> [269.830648] RBP: 0000000000000000 R08: ffff888111c80930 R09: 00000000fffffffe
      <4> [269.830650] R10: 0000000000000000 R11: 00000000ffbc70e4 R12: ffff88811090f700
      <4> [269.830652] R13: ffff88810df60180 R14: 0000000001a64dd4 R15: 0000000000000000
      <4> [269.830655] FS:  00007f137b07de40(0000) GS:ffff88817b980000(0000) knlGS:0000000000000000
      <4> [269.830657] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [269.830659] CR2: 0000000000000000 CR3: 0000000115984000 CR4: 0000000000350ee0
      <4> [269.830661] Call Trace:
      <4> [269.830780]  __i915_subtests.cold.7+0x42/0x92 [i915]
      <4> [269.830886]  ? __i915_nop_teardown+0x10/0x10 [i915]
      <4> [269.830989]  ? __i915_live_setup+0x30/0x30 [i915]
      <4> [269.831104]  __run_selftests.part.3+0xf7/0x14c [i915]
      <4> [269.831939]  i915_live_selftests.cold.5+0x1f/0x47 [i915]
      <4> [269.832027]  i915_pci_probe+0x93/0x1d0 [i915]
      <4> [269.832037]  ? _raw_spin_unlock_irqrestore+0x2f/0x50
      <4> [269.832043]  pci_device_probe+0x9e/0x110
      <4> [269.832049]  really_probe+0x1c4/0x430
      <4> [269.832053]  driver_probe_device+0xd9/0x140
      <4> [269.832056]  device_driver_attach+0x4a/0x50
      <4> [269.832059]  __driver_attach+0x83/0x140
      <4> [269.832062]  ? device_driver_attach+0x50/0x50
      <4> [269.832064]  ? device_driver_attach+0x50/0x50
      <4> [269.832067]  bus_for_each_dev+0x75/0xc0
      <4> [269.832070]  bus_add_driver+0x14b/0x1f0
      <4> [269.832073]  driver_register+0x66/0xb0
      <4> [269.832160]  i915_init+0x70/0x87 [i915]
      <4> [269.832164]  ? 0xffffffffa05e3000
      <4> [269.832168]  do_one_initcall+0x56/0x2e0
      <4> [269.832174]  ? kmem_cache_alloc_trace+0x6a4/0x770
      <4> [269.832180]  do_init_module+0x55/0x200
      <4> [269.832184]  load_module+0x22a2/0x2480
      <4> [269.832191]  ? __do_sys_finit_module+0xad/0x110
      <4> [269.832194]  __do_sys_finit_module+0xad/0x110
      <4> [269.832199]  do_syscall_64+0x33/0x80
      <4> [269.832202]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      <4> [269.832204] RIP: 0033:0x7f137a718839
      <4> [269.832208] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
      <4> [269.832210] RSP: 002b:00007ffc4267d308 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
      <4> [269.832214] RAX: ffffffffffffffda RBX: 000056288b88f0d0 RCX: 00007f137a718839
      <4> [269.832216] RDX: 0000000000000000 RSI: 000056288b895850 RDI: 0000000000000007
      <4> [269.832218] RBP: 000056288b895850 R08: 312d3d7374736574 R09: 000056288b88c020
      <4> [269.832220] R10: 00007ffc4267d450 R11: 0000000000000246 R12: 0000000000000000
      <4> [269.832222] R13: 000056288b8877a0 R14: 0000000000000020 R15: 0000000000000045
      <4> [269.832226] Modules linked in: i915(+) vgem mei_hdcp snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel cdc_ether usbnet snd_intel_dspcfg mii snd_hda_codec snd_hwdep snd_hda_core r8169 snd_pcm realtek mei_me mei prime_numbers intel_lpss_pci i2c_hid pinctrl_geminilake [last unloaded: i915]
      <4> [269.832264] CR2: 0000000000000000
      
      Fixes: cb2ce93e ("drm/i915/gem: Differentiate oom failures from invalid map types")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201201215441.31900-1-chris@chris-wilson.co.uk
  5. 30 Nov, 2020 5 commits
  6. 28 Nov, 2020 1 commit
  7. 27 Nov, 2020 2 commits
  8. 26 Nov, 2020 6 commits
    • drm/i915/gt: Move the breadcrumb to the signaler if completed upon cancel · 85cc2917
      Chris Wilson authored
      If, while we are cancelling the breadcrumb signaling, we find that the
      request has already completed, move it to the irq signaler and let it be
      signaled.
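      In rough pseudocode, the cancel path becomes (a sketch; the helper and
      field names are assumptions about the i915 breadcrumbs code, not the
      exact hunk):

              if (i915_request_completed(rq)) {
                      /* keep the reference; let the irq worker signal it */
                      list_move_tail(&rq->signal_link, &b->signaled_requests);
                      irq_work_queue(&b->irq_work);
              } else {
                      list_del_init(&rq->signal_link);
                      i915_request_put(rq);
              }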
      
      v2: Tweak reference counting so that we only acquire a new reference on
      adding to a signal list, as opposed to a hidden i915_request_put of the
      caller's reference.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-5-chris@chris-wilson.co.uk
    • drm/i915/gt: Split the breadcrumb spinlock between global and contexts · c744d503
      Chris Wilson authored
      As we funnel more and more contexts into the breadcrumbs on an engine,
      the hold time of b->irq_lock grows. As we may then contend with the
      b->irq_lock during request submission, this increases the burden upon
      the engine->active.lock and so directly impacts both our execution
      latency and client latency. If we split the b->irq_lock by introducing a
      per-context spinlock to manage the signalers within a context, we then
      only need the b->irq_lock for enabling/disabling the interrupt and can
      avoid taking the lock for walking the list of contexts within the signal
      worker. Even with the current setup, this greatly reduces the number of
      times we have to take and fight for b->irq_lock.
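      Conceptually, the locking ends up split along these lines (a sketch
      showing only the relevant fields; the exact layout in i915 may differ):

              struct intel_breadcrumbs {
                      spinlock_t irq_lock;        /* only guards irq enabling/disabling */
                      struct list_head signalers; /* contexts with pending breadcrumbs */
              };

              struct intel_context {
                      spinlock_t signal_lock;     /* guards ce->signals */
                      struct list_head signals;   /* requests awaiting signaling */
              };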
      
      Furthermore, this closes the race between enabling the signaling context
      while it is in the process of being signaled and removed:
      
      <4>[  416.208555] list_add corruption. prev->next should be next (ffff8881951d5910), but was dead000000000100. (prev=ffff8882781bb870).
      <4>[  416.208573] WARNING: CPU: 7 PID: 0 at lib/list_debug.c:28 __list_add_valid+0x4d/0x70
      <4>[  416.208575] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
      <4>[  416.208611] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G     U            5.8.0-CI-CI_DRM_8852+ #1
      <4>[  416.208614] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
      <4>[  416.208627] RIP: 0010:__list_add_valid+0x4d/0x70
      <4>[  416.208631] Code: c3 48 89 d1 48 c7 c7 60 18 33 82 48 89 c2 e8 ea e0 b6 ff 0f 0b 31 c0 c3 48 89 c1 4c 89 c6 48 c7 c7 b0 18 33 82 e8 d3 e0 b6 ff <0f> 0b 31 c0 c3 48 89 f2 4c 89 c1 48 89 fe 48 c7 c7 00 19 33 82 e8
      <4>[  416.208633] RSP: 0018:ffffc90000280e18 EFLAGS: 00010086
      <4>[  416.208636] RAX: 0000000000000000 RBX: ffff888250a44880 RCX: 0000000000000105
      <4>[  416.208639] RDX: 0000000000000105 RSI: ffffffff82320c5b RDI: 00000000ffffffff
      <4>[  416.208641] RBP: ffff8882781bb870 R08: 0000000000000000 R09: 0000000000000001
      <4>[  416.208643] R10: 00000000054d2957 R11: 000000006abbd991 R12: ffff8881951d58c8
      <4>[  416.208646] R13: ffff888286073880 R14: ffff888286073848 R15: ffff8881951d5910
      <4>[  416.208669] FS:  0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
      <4>[  416.208671] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4>[  416.208673] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
      <4>[  416.208675] PKRU: 55555554
      <4>[  416.208677] Call Trace:
      <4>[  416.208679]  <IRQ>
      <4>[  416.208751]  i915_request_enable_breadcrumb+0x278/0x400 [i915]
      <4>[  416.208839]  __i915_request_submit+0xca/0x2a0 [i915]
      <4>[  416.208892]  __execlists_submission_tasklet+0x480/0x1830 [i915]
      <4>[  416.208942]  execlists_submission_tasklet+0xc4/0x130 [i915]
      <4>[  416.208947]  tasklet_action_common.isra.17+0x6c/0x1c0
      <4>[  416.208954]  __do_softirq+0xdf/0x498
      <4>[  416.208960]  ? handle_fasteoi_irq+0x150/0x150
      <4>[  416.208964]  asm_call_on_stack+0xf/0x20
      <4>[  416.208966]  </IRQ>
      <4>[  416.208969]  do_softirq_own_stack+0xa1/0xc0
      <4>[  416.208972]  irq_exit_rcu+0xb5/0xc0
      <4>[  416.208976]  common_interrupt+0xf7/0x260
      <4>[  416.208980]  asm_common_interrupt+0x1e/0x40
      <4>[  416.208985] RIP: 0010:cpuidle_enter_state+0xb6/0x410
      <4>[  416.208987] Code: 00 31 ff e8 9c 3e 89 ff 80 7c 24 0b 00 74 12 9c 58 f6 c4 02 0f 85 31 03 00 00 31 ff e8 e3 6c 90 ff e8 fe a4 94 ff fb 45 85 ed <0f> 88 c7 02 00 00 49 63 c5 4c 2b 24 24 48 8d 14 40 48 8d 14 90 48
      <4>[  416.208989] RSP: 0018:ffffc90000143e70 EFLAGS: 00000206
      <4>[  416.208991] RAX: 0000000000000007 RBX: ffffe8ffffda8070 RCX: 0000000000000000
      <4>[  416.208993] RDX: 0000000000000000 RSI: ffffffff8238b4ee RDI: ffffffff8233184f
      <4>[  416.208995] RBP: ffffffff826b4e00 R08: 0000000000000000 R09: 0000000000000000
      <4>[  416.208997] R10: 0000000000000001 R11: 0000000000000000 R12: 00000060e7f24a8f
      <4>[  416.208998] R13: 0000000000000003 R14: 0000000000000003 R15: 0000000000000003
      <4>[  416.209012]  cpuidle_enter+0x24/0x40
      <4>[  416.209016]  do_idle+0x22f/0x2d0
      <4>[  416.209022]  cpu_startup_entry+0x14/0x20
      <4>[  416.209025]  start_secondary+0x158/0x1a0
      <4>[  416.209030]  secondary_startup_64+0xa4/0xb0
      <4>[  416.209039] irq event stamp: 10186977
      <4>[  416.209042] hardirqs last  enabled at (10186976): [<ffffffff810b9363>] tasklet_action_common.isra.17+0xe3/0x1c0
      <4>[  416.209044] hardirqs last disabled at (10186977): [<ffffffff81a5e5ed>] _raw_spin_lock_irqsave+0xd/0x50
      <4>[  416.209047] softirqs last  enabled at (10186968): [<ffffffff810b9a1a>] irq_enter_rcu+0x6a/0x70
      <4>[  416.209049] softirqs last disabled at (10186969): [<ffffffff81c00f4f>] asm_call_on_stack+0xf/0x20
      
      <4>[  416.209317] list_del corruption, ffff8882781bb870->next is LIST_POISON1 (dead000000000100)
      <4>[  416.209317] WARNING: CPU: 7 PID: 46 at lib/list_debug.c:47 __list_del_entry_valid+0x4e/0x90
      <4>[  416.209317] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio mei_hdcp x86_pkg_temp_thermal coretemp ax88179_178a usbnet mii crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec snd_hwdep ghash_clmulni_intel snd_hda_core e1000e snd_pcm ptp pps_core mei_me mei prime_numbers intel_lpss_pci [last unloaded: i915]
      <4>[  416.209317] CPU: 7 PID: 46 Comm: ksoftirqd/7 Tainted: G     U  W         5.8.0-CI-CI_DRM_8852+ #1
      <4>[  416.209317] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake Y LPDDR4x T4 RVP TLC, BIOS ICLSFWR1.R00.3212.A00.1905212112 05/21/2019
      <4>[  416.209317] RIP: 0010:__list_del_entry_valid+0x4e/0x90
      <4>[  416.209317] Code: 2e 48 8b 32 48 39 fe 75 3a 48 8b 50 08 48 39 f2 75 48 b8 01 00 00 00 c3 48 89 fe 48 89 c2 48 c7 c7 38 19 33 82 e8 62 e0 b6 ff <0f> 0b 31 c0 c3 48 89 fe 48 c7 c7 70 19 33 82 e8 4e e0 b6 ff 0f 0b
      <4>[  416.209317] RSP: 0018:ffffc90000280de8 EFLAGS: 00010086
      <4>[  416.209317] RAX: 0000000000000000 RBX: ffff8882781bb848 RCX: 0000000000010104
      <4>[  416.209317] RDX: 0000000000010104 RSI: ffffffff8238b4ee RDI: 00000000ffffffff
      <4>[  416.209317] RBP: ffff8882781bb880 R08: 0000000000000000 R09: 0000000000000001
      <4>[  416.209317] R10: 000000009fb6666e R11: 00000000feca9427 R12: ffffc90000280e18
      <4>[  416.209317] R13: ffff8881951d5930 R14: dead0000000000d8 R15: ffff8882781bb880
      <4>[  416.209317] FS:  0000000000000000(0000) GS:ffff88829c180000(0000) knlGS:0000000000000000
      <4>[  416.209317] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4>[  416.209317] CR2: 0000556231326c48 CR3: 0000000005610001 CR4: 0000000000760ee0
      <4>[  416.209317] PKRU: 55555554
      <4>[  416.209317] Call Trace:
      <4>[  416.209317]  <IRQ>
      <4>[  416.209317]  remove_signaling_context.isra.13+0xd/0x70 [i915]
      <4>[  416.209513]  signal_irq_work+0x1f7/0x4b0 [i915]
      
      This is caused by virtual engines: although we take the breadcrumb
      lock on each of the active engines, they may be different engines on
      different requests. It turns out that the b->irq_lock is not a
      sufficient proxy for the engine->active.lock when more than one request
      is involved, so introduce an explicit lock around ce->signals.
      
      v2: ce->signal_lock is acquired with only RCU protection and so must be
      treated carefully and not cleared during reallocation. We also then need
      to confirm that the ce we lock is the same as we found in the breadcrumb
      list.
      
      Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2276
      Fixes: c18636f7 ("drm/i915: Remove requirement for holding i915_request.lock for breadcrumbs")
      Fixes: 2854d866 ("drm/i915/gt: Replace intel_engine_transfer_stale_breadcrumbs")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-4-chris@chris-wilson.co.uk
    • drm/i915/gt: Protect context lifetime with RCU · 14d1eaf0
      Chris Wilson authored
      Allow a brief period for continued access to a dead intel_context by
      deferring the release of the struct until after an RCU grace period.
      As we are using a dedicated slab cache for the contexts, we can defer
      the release of the slab pages via RCU, with the caveat that individual
      structs may be reused from the freelist within an RCU grace period. To
      handle that, we have to avoid clearing members of the zombie struct.
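      A hedged sketch of the slab setup this relies on (the cache name and the
      ctor are illustrative): with SLAB_TYPESAFE_BY_RCU, a freed object may be
      handed out again before the grace period ends, so the ctor initialises
      only the fields that must survive reuse, and the free path must not
      clear them.

              slab_ce = kmem_cache_create("intel_context",
                                          sizeof(struct intel_context), 0,
                                          SLAB_HWCACHE_ALIGN | SLAB_TYPESAFE_BY_RCU,
                                          ce_init_once /* one-time init of reused fields */);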
      
      This is required for a later patch to handle locking around virtual
      requests in the signaler, as those requests may want to move between
      engines and be destroyed while we are holding b->irq_lock on a physical
      engine.
      
      v2: Drop mutex_reinit(); if we never mark the mutex as destroyed we
      don't need to reset the debug code, at the cost of the mutex debug code
      no longer spotting us attempting to destroy a locked mutex.
      v3: As the intended use will remain strongly reference counted, with
      very little inflight access across reuse, drop the ctor.
      v4: Drop the unneeded change removing the temporary reference around
      dropping the active context, and add back some more missing ctor
      operations.
      v5: The ctor is back. Tvrtko spotted that ce->signal_lock [introduced
      later] may be accessed under RCU and so needs special care not to be
      reinitialised.
      v6: Don't mix SLAB_TYPESAFE_BY_RCU and RCU list iteration.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-3-chris@chris-wilson.co.uk
    • drm/i915/gt: Check for a completed last request once · a5855989
      Chris Wilson authored
      Pull the repeated check for the last active request being completed to a
      single spot, when deciding whether or not execlist preemption is
      required.
      
      In doing so, we remove the tasklet kick, introduced with the completion
      checks in commit 35f3fd81 ("drm/i915/execlists: Workaround switching
      back to a completed context"), if we find the request was completed but
      have not yet seen the corresponding CS event. This was devolving into a
      busy spin of the tasklet while we waited for the event as the delivery
      was not as instantaneous as expected. Under load this is sufficient to
      exhaust the tasklet softirq timeslice, and force ksoftirqd. Quite
      noticeable overhead for no apparent improvement in latency.
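      Roughly, the decision point becomes (a sketch, not the actual execlists
      control flow):

              const struct i915_request *last = *engine->execlists.active;

              if (last && i915_request_completed(last))
                      return; /* wait for the CS event; no preemption, no tasklet kick */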
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-2-chris@chris-wilson.co.uk
    • drm/i915/gt: Decouple completed requests on unwind · b8e2bd98
      Chris Wilson authored
      Since the introduction of preempt-to-busy, requests can complete in the
      background, even while they are not on the engine->active.requests list.
      As such, the engine->active.requests list itself is not in strict
      retirement order, and we have to scan the entire list while unwinding so
      as not to miss any. However, if a request is completed we currently
      leave it on the list [until retirement], whereas we could just as simply
      remove it and stop treating it as active. We then only have to traverse
      the list once while unwinding in quick succession.
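      The unwind loop then looks roughly like this (a sketch; helper and field
      names follow i915 conventions but this is not the exact hunk):

              list_for_each_entry_safe_reverse(rq, rn,
                                               &engine->active.requests,
                                               sched.link) {
                      if (i915_request_completed(rq)) {
                              list_del_init(&rq->sched.link); /* no longer active */
                              continue;
                      }

                      __i915_request_unsubmit(rq);
                      /* ...and requeue for resubmission in priority order */
              }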
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201126140407.31952-1-chris@chris-wilson.co.uk
    • drm/i915/gt: Program mocs:63 for cache eviction on gen9 · 977933b5
      Chris Wilson authored
      Ville noticed that the last mocs entry is used unconditionally by the HW
      when it performs cache evictions, and noted that while the value is not
      meant to be writable by the driver, we should program it to a reasonable
      value nevertheless.
      
      As it turns out, we can change the value of mocs:63 and the value we
      were programming into it would cause hard hangs in conjunction with
      atomic operations.
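      The fix amounts to giving mocs:63 a sane, well-defined value, along the
      lines of the sketch below (using the MOCS_ENTRY() helper; the values
      shown - cacheable in LLC, uncached in L3 - reflect the spirit of the
      change, but the patch itself is authoritative):

              /* mocs:63 is used by the HW for cache eviction */
              MOCS_ENTRY(63,
                         LE_3_WB | LE_TC_1_LLC | LE_LRUM(3),
                         L3_1_UC),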
      
      v2: Add details from bspec about how it is used by HW
      Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2707
      Fixes: 3bbaba0c ("drm/i915: Added Programming of the MOCS")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Jason Ekstrand <jason@jlekstrand.net>
      Cc: <stable@vger.kernel.org> # v4.3+
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20201126140841.1982-1-chris@chris-wilson.co.uk
  9. 24 Nov, 2020 2 commits
  10. 23 Nov, 2020 4 commits
  11. 21 Nov, 2020 1 commit
  12. 20 Nov, 2020 3 commits
  13. 19 Nov, 2020 2 commits