1. 19 Apr, 2019 1 commit
  2. 18 Apr, 2019 2 commits
  3. 17 Apr, 2019 7 commits
  4. 16 Apr, 2019 11 commits
  5. 15 Apr, 2019 2 commits
  6. 13 Apr, 2019 1 commit
  7. 12 Apr, 2019 8 commits
  8. 11 Apr, 2019 8 commits
    • drm/i915: Do not enable FEC without DSC · 6fd3134a
      Ville Syrjälä authored
      Currently we enable FEC even when DSC is not used. While that is
      theoretically valid, supposedly there isn't much of a benefit from
      it. But more importantly, we do not account for the FEC link
      bandwidth overhead (2.4%) in the non-DSC link bandwidth computations.
      So the code may think we have enough bandwidth when in fact we
      do not.
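
      To illustrate the arithmetic (a sketch only: the helper and its
      parameters are made up here, but the ~2.4% figure is the one quoted
      above), a link check that ignores FEC will accept modes that the
      derated link cannot actually carry:

          #include <stdbool.h>

          /*
           * Sketch: with FEC enabled only ~97.6% of the raw link bandwidth
           * is left for the video stream, so derate before the comparison.
           */
          static bool link_can_carry_mode(unsigned long link_bw,
                                          unsigned long mode_bw,
                                          bool fec_enable)
          {
                  if (fec_enable)
                          link_bw = link_bw * 976 / 1000;

                  return mode_bw <= link_bw;
          }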
      
      Cc: stable@vger.kernel.org
      Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
      Cc: Manasi Navare <manasi.d.navare@intel.com>
      Fixes: 240999cf ("i915/dp/fec: Add fec_enable to the crtc state.")
      Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190326144903.6617-1-ville.syrjala@linux.intel.com
      Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
    • drm/i915: Avoid reclaim taints from runtime-pm debug · 2e1e5c55
      Chris Wilson authored
      As intel_runtime_pm_get/_put may be called from any blockable context,
      we need to avoid allowing reclaim from our mallocs, so that we do not
      taint any mutexes held by the callers (they may themselves not allow
      allocations, as they are taken inside the shrinker).
      
      <4> [435.339331] WARNING: possible circular locking dependency detected
      <4> [435.339364] 5.1.0-rc4-CI-Trybot_4116+ #1 Tainted: G     U
      <4> [435.339395] ------------------------------------------------------
      <4> [435.339426] gem_caching/1334 is trying to acquire lock:
      <4> [435.339456] 000000004505c39b (wakeref#3){+.+.}, at: intel_engine_pm_put+0x1b/0x40 [i915]
      <4> [435.339788]
      but task is already holding lock:
      <4> [435.339819] 00000000ee77b4ed (fs_reclaim){+.+.}, at: fs_reclaim_acquire.part.24+0x0/0x30
      <4> [435.339879]
      which lock already depends on the new lock.
      
      <4> [435.339918]
      the existing dependency chain (in reverse order) is:
      <4> [435.339952]
      -> #1 (fs_reclaim){+.+.}:
      <4> [435.339998]        fs_reclaim_acquire.part.24+0x24/0x30
      <4> [435.340035]        kmem_cache_alloc_trace+0x2a/0x290
      <4> [435.340311]        __print_intel_runtime_pm_wakeref+0x24/0x160 [i915]
      <4> [435.340590]        untrack_intel_runtime_pm_wakeref+0x16e/0x1d0 [i915]
      <4> [435.340869]        intel_runtime_pm_put_unchecked+0xd/0x30 [i915]
      <4> [435.341147]        __intel_wakeref_put_once+0x22/0x40 [i915]
      <4> [435.341508]        i915_request_retire+0x477/0xaf0 [i915]
      <4> [435.341871]        ring_retire_requests+0x86/0x160 [i915]
      <4> [435.342226]        i915_retire_requests+0x58/0xc0 [i915]
      <4> [435.342576]        retire_work_handler+0x5b/0x70 [i915]
      <4> [435.342615]        process_one_work+0x245/0x610
      <4> [435.342646]        worker_thread+0x37/0x380
      <4> [435.342679]        kthread+0x119/0x130
      <4> [435.342714]        ret_from_fork+0x3a/0x50
      <4> [435.342739]
      -> #0 (wakeref#3){+.+.}:
      <4> [435.342788]        lock_acquire+0xa6/0x1c0
      <4> [435.342822]        __mutex_lock+0x8c/0x960
      <4> [435.342853]        atomic_dec_and_mutex_lock+0x33/0x50
      <4> [435.343151]        intel_engine_pm_put+0x1b/0x40 [i915]
      <4> [435.343501]        i915_request_retire+0x477/0xaf0 [i915]
      <4> [435.343851]        ring_retire_requests+0x86/0x160 [i915]
      <4> [435.344202]        i915_retire_requests+0x58/0xc0 [i915]
      <4> [435.344543]        i915_gem_shrink+0xd8/0x5b0 [i915]
      <4> [435.344835]        i915_drop_caches_set+0x17b/0x250 [i915]
      <4> [435.344877]        simple_attr_write+0xb0/0xd0
      <4> [435.344911]        full_proxy_write+0x51/0x80
      <4> [435.344943]        vfs_write+0xbd/0x1b0
      <4> [435.344972]        ksys_write+0x55/0xe0
      <4> [435.345002]        do_syscall_64+0x55/0x190
      <4> [435.345040]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
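
      A minimal sketch of the kind of change this implies (the surrounding
      debug code is not reproduced; GFP_NOWAIT | __GFP_NOWARN is simply the
      standard way to allocate without entering direct reclaim, and whether
      those are the exact flags used by the patch is an assumption):

          #include <linux/slab.h>

          /* Best-effort debug buffer that must never enter reclaim. */
          static unsigned long *alloc_wakeref_debug_buf(size_t count)
          {
                  /*
                   * GFP_NOWAIT never triggers direct reclaim, so it cannot
                   * re-enter the shrinker or taint mutexes that the shrinker
                   * also takes; __GFP_NOWARN keeps a failed best-effort
                   * allocation quiet.
                   */
                  return kmalloc_array(count, sizeof(unsigned long),
                                       GFP_NOWAIT | __GFP_NOWARN);
          }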
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190409174108.19396-1-chris@chris-wilson.co.uk
    • drm/i915/execlists: Always reset the context's RING registers · 1863e302
      Chris Wilson authored
      During reset, we try to stop the active ring. This has the consequence
      that we often clobber the RING registers within the context image. When
      we find an active request, we update the context image to rerun that
      request (if it was guilty, we replace the hanging user payload with
      NOPs). However, we were ignoring an active context if its request had
      completed, with the consequence that the next submission on that
      context would start with RING_HEAD==0 and not the tail of the previous
      request, causing all requests still in the ring to be rerun. Rare, but
      occasionally seen within CI, where we would spot that the context seqno
      had gone backwards and complain that we were retiring an incomplete
      request.
      
          <0> [412.390350]   <idle>-0       3d.s2 408373352us : __i915_request_submit: rcs0 fence 1e95b:3640 -> current 3638
          <0> [412.390350]   <idle>-0       3d.s2 408373353us : __i915_request_submit: rcs0 fence 1e95b:3642 -> current 3638
          <0> [412.390350]   <idle>-0       3d.s2 408373354us : __i915_request_submit: rcs0 fence 1e95b:3644 -> current 3638
          <0> [412.390350]   <idle>-0       3d.s2 408373354us : __i915_request_submit: rcs0 fence 1e95b:3646 -> current 3638
          <0> [412.390350]   <idle>-0       3d.s2 408373356us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=2.1, fence 1e95b:3646 (current 3638), prio=4
          <0> [412.390350] i915_sel-4613    0.... 408373374us : __i915_request_commit: rcs0 fence 1e95b:3648
          <0> [412.390350] i915_sel-4613    0d..1 408373377us : process_csb: rcs0 cs-irq head=2, tail=3
          <0> [412.390350] i915_sel-4613    0d..1 408373377us : process_csb: rcs0 csb[3]: status=0x00000001:0x00000000, active=0x1
          <0> [412.390350] i915_sel-4613    0d..1 408373378us : __i915_request_submit: rcs0 fence 1e95b:3648 -> current 3638
          <0> [412.390350]   <idle>-0       3..s1 408373378us : execlists_submission_tasklet: rcs0 awake?=1, active=5
          <0> [412.390350] i915_sel-4613    0d..1 408373379us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=2.2, fence 1e95b:3648 (current 3638), prio=4
          <0> [412.390350] i915_sel-4613    0.... 408373381us : i915_reset_engine: rcs0 flags=4
          <0> [412.390350] i915_sel-4613    0.... 408373382us : execlists_reset_prepare: rcs0: depth<-0
          <0> [412.390350]   <idle>-0       3d.s2 408373390us : process_csb: rcs0 cs-irq head=3, tail=4
          <0> [412.390350]   <idle>-0       3d.s2 408373390us : process_csb: rcs0 csb[4]: status=0x00008002:0x00000002, active=0x1
          <0> [412.390350]   <idle>-0       3d.s2 408373390us : process_csb: rcs0 out[0]: ctx=2.2, fence 1e95b:3648 (current 3640), prio=4
          <0> [412.390350] i915_sel-4613    0.... 408373401us : intel_engine_stop_cs: rcs0
          <0> [412.390350] i915_sel-4613    0d..1 408373402us : process_csb: rcs0 cs-irq head=4, tail=4
          <0> [412.390350] i915_sel-4613    0.... 408373403us : intel_gpu_reset: engine_mask=1
          <0> [412.390350] i915_sel-4613    0d..1 408373408us : execlists_cancel_port_requests: rcs0:port0 fence 1e95b:3648, (current 3648)
          <0> [412.390350] i915_sel-4613    0.... 408373442us : intel_engine_cancel_stop_cs: rcs0
          <0> [412.390350] i915_sel-4613    0.... 408373442us : execlists_reset_finish: rcs0: depth->0
          <0> [412.390350] ksoftirq-26      3..s. 408373442us : execlists_submission_tasklet: rcs0 awake?=1, active=0
          <0> [412.390350] ksoftirq-26      3d.s1 408373443us : process_csb: rcs0 cs-irq head=5, tail=5
          <0> [412.390350] i915_sel-4613    0.... 408373475us : i915_request_retire: rcs0 fence 1e95b:3640, current 3648
          <0> [412.390350] i915_sel-4613    0.... 408373476us : i915_request_retire: __retire_engine_request(rcs0) fence 1e95b:3640, current 3648
          <0> [412.390350] i915_sel-4613    0.... 408373494us : __i915_request_commit: rcs0 fence 1e95b:3650
          <0> [412.390350] i915_sel-4613    0d..1 408373496us : process_csb: rcs0 cs-irq head=5, tail=5
          <0> [412.390350] i915_sel-4613    0d..1 408373496us : __i915_request_submit: rcs0 fence 1e95b:3650 -> current 3648
          <0> [412.390350] i915_sel-4613    0d..1 408373498us : __execlists_submission_tasklet: rcs0 in[0]:  ctx=2.1, fence 1e95b:3650 (current 3648), prio=6
          <0> [412.390350] i915_sel-4613    0.... 408373500us : i915_request_retire_upto: rcs0 fence 1e95b:3648, current 3648
          <0> [412.390350] i915_sel-4613    0.... 408373500us : i915_request_retire: rcs0 fence 1e95b:3642, current 3648
          <0> [412.390350] i915_sel-4613    0.... 408373501us : i915_request_retire: __retire_engine_request(rcs0) fence 1e95b:3642, current 3648
          <0> [412.390350] i915_sel-4613    0.... 408373514us : i915_request_retire: rcs0 fence 1e95b:3644, current 3648
          <0> [412.390350] i915_sel-4613    0.... 408373515us : i915_request_retire: __retire_engine_request(rcs0) fence 1e95b:3644, current 3648
          <0> [412.390350] i915_sel-4613    0.... 408373527us : i915_request_retire: rcs0 fence 1e95b:3646, current 3640
          <0> [412.390350]   <idle>-0       3..s1 408373569us : execlists_submission_tasklet: rcs0 awake?=1, active=1
          <0> [412.390350]   <idle>-0       3d.s2 408373569us : process_csb: rcs0 cs-irq head=5, tail=1
          <0> [412.390350]   <idle>-0       3d.s2 408373570us : process_csb: rcs0 csb[0]: status=0x00000001:0x00000000, active=0x1
          <0> [412.390350]   <idle>-0       3d.s2 408373570us : process_csb: rcs0 csb[1]: status=0x00000018:0x00000002, active=0x5
          <0> [412.390350]   <idle>-0       3d.s2 408373570us : process_csb: rcs0 out[0]: ctx=2.1, fence 1e95b:3650 (current 3650), prio=6
          <0> [412.390350]   <idle>-0       3d.s2 408373571us : process_csb: rcs0 completed ctx=2
          <0> [412.390350] i915_sel-4613    0.... 408373621us : i915_request_retire: i915_request_retire:253 GEM_BUG_ON(!i915_request_completed(request))
      
      v2: Fixup the cancellation path to drain the CSB and reset the pointers.
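
      In pseudo-C, the rule the fix enforces looks roughly like this (the
      struct and helper are illustrative stand-ins, not the driver's types;
      the real code rewrites the RING registers in the logical ring context
      image):

          #include <stdbool.h>

          struct sketch_request { unsigned int head, tail; bool completed; };

          /* Where should RING_HEAD point after a reset? */
          static unsigned int ring_head_after_reset(const struct sketch_request *rq)
          {
                  /* An incomplete request gets rerun from its own head. */
                  if (rq && !rq->completed)
                          return rq->head;
                  /*
                   * A completed request must still update the context image:
                   * resume from its tail rather than falling back to 0,
                   * which would replay every request left in the ring.
                   */
                  return rq ? rq->tail : 0;
          }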
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190411130515.20716-2-chris@chris-wilson.co.uk
    • drm/i915/guc: Implement reset locally · 292ad25c
      Chris Wilson authored
      Before causing guc and execlists to diverge further (breaking guc in
      the process), take a copy of the current reset procedure and make it
      local to the guc submission backend.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190411130515.20716-1-chris@chris-wilson.co.uk
    • drm/i915: Disable read only ppgtt support for gen11 · 3936867d
      Mika Kuoppala authored
      On gen11, writing to a read-only ppgtt page causes a GPU hang. This
      behaviour is different from previous gens, where read-only ppgtt
      access is supported: on those, the write is simply dropped without
      visible side effects.

      Disable ro ppgtt support on gen11 until a solution can be found to
      bring it into line with its predecessors.
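
      A sketch of the resulting gate (the i915 address space tracks this
      capability with a has_read_only flag; the helper below and the exact
      condition are assumptions, not the literal diff):

          #include <stdbool.h>

          /* Should read-only PPGTT pages be exposed on this generation? */
          static bool ppgtt_supports_read_only(int gen)
          {
                  /*
                   * Predecessors silently drop writes to read-only pages;
                   * gen11 hangs the GPU instead, so keep the capability off
                   * there.
                   */
                  return gen != 11;
          }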
      
      References: HSDES#1807136187
      References: https://bugzilla.freedesktop.org/show_bug.cgi?id=108569
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190411083034.28311-1-mika.kuoppala@linux.intel.com
    • drm/i915: Call i915_sw_fence_fini on request cleanup · 0c441cb6
      Chris Wilson authored
      As i915_requests are put into an RCU-freelist, they may get reused
      before debugobjects notice them as being freed. On cleanup, explicitly
      call i915_sw_fence_fini() so that the debugobject is properly tracked.
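
      The pattern, sketched below (the fence field name is an assumption
      here; the point is that memory on an RCU-typed freelist can be handed
      out again before debugobjects would notice an implicit free, so the
      debug object has to be torn down explicitly in the cleanup path):

          /* Sketch of the request release path, not the literal diff. */
          static void request_cleanup(struct i915_request *rq)
          {
                  /*
                   * Tell debugobjects the embedded sw_fence is dead before
                   * the RCU freelist can recycle the request's memory.
                   */
                  i915_sw_fence_fini(&rq->submit);
          }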
      Reported-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Fixes: b7404c7e ("drm/i915: Bump ready tasks ahead of busywaits")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190411122445.20060-1-chris@chris-wilson.co.uk
    • drm/i915: Clean up DSC vs. not bpp handling · aefa95ba
      Ville Syrjälä authored
      No point in duplicating all this code when we can just
      use a variable to hold the output bpp (the only thing
      that differs between the two branches).
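
      The shape of the simplification, as a sketch (names are illustrative;
      the real code reads the compressed bpp out of the DSC state):

          #include <stdbool.h>

          /* Only the bpp value differs between the DSC and non-DSC paths. */
          static int dp_output_bpp(bool dsc_enabled, int compressed_bpp,
                                   int pipe_bpp)
          {
                  return dsc_enabled ? compressed_bpp : pipe_bpp;
          }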
      
      Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
      Cc: Manasi Navare <manasi.d.navare@intel.com>
      Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190326144903.6617-2-ville.syrjala@linux.intel.com
      Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
    • drm/i915: Set DP min_bpp to 8*3 for non-RGB output formats · 4e2056e0
      Ville Syrjälä authored
      6bpc is only legal for RGB and RAW pixel encodings. For the rest, the
      minimum is 8bpc. Set our lower limit accordingly.
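
      As a sketch of the limit this sets (the format check is illustrative):

          #include <stdbool.h>

          /* 6bpc (18 bpp) is only valid for RGB/RAW output; else start at 8bpc. */
          static int dp_min_output_bpp(bool is_rgb_or_raw)
          {
                  return is_rgb_or_raw ? 6 * 3 : 8 * 3;
          }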
      Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190326142556.21176-6-ville.syrjala@linux.intel.com
      Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>