- 14 Dec, 2020 1 commit
-
-
Tvrtko Ursulin authored
Chris found a CI report which points out calling intel_runtime_pm_get from inside the i915_pmu_enable hook is not allowed since it can be invoked from hard irq context. This is something we knew but forgot, so let's fix it once again. We do this by syncing the internal bookkeeping with the hardware rc6 counter on driver load. v2: * Always sync on parking and fully sync on init. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Fixes: f4e9894b ("drm/i915/pmu: Correct the rc6 offset upon enabling") Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201214094349.3563876-1-tvrtko.ursulin@linux.intel.com
-
- 10 Dec, 2020 3 commits
-
-
Chris Wilson authored
The workarounds are tied to the GT and we should derive the tests local to the GT. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201210080240.24529-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
When we reset the legacy ring context, due to potential corruption over suspend/resume, remove the valid bit so that we avoid loading garbage. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201210080240.24529-1-chris@chris-wilson.co.uk
-
John Harrison authored
This workaround was added as an engine workaround, not a GT workaround. Move it to the correct location. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201210170615.3107266-1-lucas.demarchi@intel.com
-
- 09 Dec, 2020 9 commits
-
-
Daniele Ceraolo Spurio authored
These functions are independent from the backend used and can therefore be split out of the execlists submission file, so they can be re-used by the upcoming GuC submission backend. Based on a patch by Chris Wilson. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209233618.4287-3-chris@chris-wilson.co.uk Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
We want to separate the utility functions for controlling the logical ring context from the execlists submission mechanism (which is an overgrown scheduler). This is similar to Daniele's work to split up the files, but being selfish I wanted to base it after my own changes to intel_lrc.c petered out. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209233618.4287-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Cleanup intel_lrc.h by moving some of the residual common register definitions into intel_lrc_reg.h, prior to rebranding and splitting off the submission backends. v2: keep the SCHEDULE enum in the old file, since it is specific to the gvt usage of the execlists submission backend (John) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> #v2 Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209233618.4287-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Now that the only user of the uninterruptible wait was eliminated, remove the support. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209164008.5487-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
Tigerlake is plagued by spontaneous DMAR faults [reason 7, next page table ptr is invalid] which lead to GPU hangs. These faults occur when an iommu map is immediately reused. Adding further clflushes and barriers around either the GTT PTE or iommu PTE updates do not prevent the faults. So far the only effect has been from inducing a delay between reuse of the iommu on the GPU, and applying the delay at the iommu map allows for the smallest stable delay. Note that such a delay is hideous and clearly does not fix the root cause, and so should only be a bandaid until a complete solution is found. The delay was determined by running igt/gem_exec_fence/parallel in a loop for a few hours (unpatched MTBF is about 10s). We have also seen such DMAR fault [reason 7] errors on other platforms, notably gen9-gen11, but so far it has only been trivially and consistently reproduced on Tigerlake. v2: Leave a tell-tale to know when we apply the vt'd quirk, and as a reminder to remove it again. Hopefully. Testcase: igt/gem_exec_fence/parallel Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209164008.5487-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
A call to wait for the GT to idle from inside the put_pages fallback is prone to cause an uninterruptible livelock. As it does not provide adequate serialisation with new requests, simply fallback to a trivial sleep. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201209164008.5487-1-chris@chris-wilson.co.uk
-
Lucas De Marchi authored
Document what a masked register is according to bspec so we avoid developers using the wrong functions to implement WAs. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201209045246.2905675-3-lucas.demarchi@intel.com
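For readers new to the term, a masked register keeps a write-enable mask in its upper 16 bits, so a write only changes the bits whose mask bit is set and needs no read-modify-write. A minimal illustrative sketch (the macro and register names here are placeholders, not necessarily the ones documented by the patch):

```c
/* Compose a masked-register write: the mask in the upper 16 bits selects
 * which of the lower 16 value bits the hardware will actually latch. */
#define MASKED_FIELD(mask, value)	(((mask) << 16) | (value))
#define MASKED_BIT_ENABLE(bit)		MASKED_FIELD((bit), (bit))
#define MASKED_BIT_DISABLE(bit)		MASKED_FIELD((bit), 0)

/* e.g. set bit 5 without disturbing the other bits of the register;
 * SOME_MASKED_REG is a placeholder, not a real register definition. */
intel_uncore_write(uncore, SOME_MASKED_REG, MASKED_BIT_ENABLE(BIT(5)));
```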
-
Lucas De Marchi authored
The use of "masked" in this function is due to its history. Once upon a time it received a mask and a value as parameter. Since commit eeec73f8 ("drm/i915/gt: Skip rmw for masked registers") that is not true anymore and now there is a clear and a set parameter. Depending on the case, that can still be thought as a mask and value, but there are some subtle differences: what we clear doesn't need to be the same bits we are setting, particularly when we are using masked registers. The fact that we also have "masked registers", i.e. registers whose mask is stored in the upper 16 bits of the register, makes it even more confusing, because "masked" in wa_write_masked_or() has little to do with masked registers, but rather refers to the old mask parameter the function received (that can also, but not exclusively, be used to write to masked register). Avoid the ambiguity and misnomer by renaming it to something else, hopefully less confusing: wa_write_clr_set(), to designate that we are doing both clr and set operations in the register. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201209045246.2905675-2-lucas.demarchi@intel.com
-
Lucas De Marchi authored
When using masked registers, there is nothing to clear since a masked register has the mask in the upper 16b: we can just write to the location we want and use the mask to control what bits we are writing to. However that doesn't mean we don't want to read back the register and check the value actually matched what we wanted to write, i.e. that the WA sticks. That should be an explicit opt-out for registers that are either write-only or that are affected by hardware misbehavior. Moreover both wa_masked_en() and wa_masked_dis() check that the WA sticks, so skipping the check just because the field is more than 1 bit is surprising and error-prone. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201209045246.2905675-1-lucas.demarchi@intel.com
-
- 08 Dec, 2020 1 commit
-
-
Chris Wilson authored
Closed vma are protected by the GT wakeref held as we look up the vma, so we know that the vma will not be freed as we process it for the execbuf. Instead we expect to catch the closed status of the context, and simply allow the close-race on an individual vma to be washed away. Longer term, the GT wakeref protection will be removed by explicit vma.kref tracking. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2245 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201207193824.18114-1-chris@chris-wilson.co.uk
-
- 07 Dec, 2020 2 commits
-
-
Chris Wilson authored
When we fail to find a single block large enough to require splitting, report the largest block we did find. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201207130346.11849-1-chris@chris-wilson.co.uk
-
Colin Ian King authored
Currently the check that the unsigned size_t variable i is >= 0 is always true because the unsigned variable will never be negative, causing the loop to run forever. Fix this by changing the pre-decrement check to a zero check on i followed by a decrement of i. Addresses-Coverity: ("Unsigned compared against 0") Fixes: bfed6708 ("drm/i915: use vmap in shmem_pin_map") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201002170354.94627-1-colin.king@canonical.com
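The bug class is easiest to see in a generic sketch (names here are illustrative, not the exact driver code):

```c
size_t i = n_pages;

/* Buggy cleanup loop: i is unsigned, so "--i >= 0" is always true and
 * the loop never terminates (it also underflows and indexes out of
 * bounds). */
while (--i >= 0)
	put_page(pages[i]);

/* Fixed form: check for zero first, then decrement. */
while (i--)
	put_page(pages[i]);
```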
-
- 05 Dec, 2020 4 commits
-
-
Lucas De Marchi authored
Remove the last macro and implement it as a function like the rest of the operations that don't assume there is a `wal` list, but rather receive it as argument. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201205092542.2325477-4-lucas.demarchi@intel.com
-
Lucas De Marchi authored
Just omitting the list it's operating on doesn't save much typing and adds another way to do the same thing. Just replace it with wa_masked_dis(). Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201205092542.2325477-3-lucas.demarchi@intel.com
-
Lucas De Marchi authored
Just omitting the list it's operating on doesn't save much typing and adds another way to do the same thing. Just replace it with wa_masked_en(). Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201205092542.2325477-2-lucas.demarchi@intel.com
-
Swathi Dhanavanthri authored
Set GS Timer to 224 to prevent a HS/DS hang. Bspec: 53508 v2: reword commit message and add comment explaining why read verification is ignored (Chris) Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Swathi Dhanavanthri <swathi.dhanavanthri@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201205092542.2325477-1-lucas.demarchi@intel.com
-
- 04 Dec, 2020 5 commits
-
-
Chris Wilson authored
Across a reset, we stop the engine but not the timers. This leaves a window where the timers have inconsistent state with the engine, but should only result in a spurious timeout. As we cancel the outstanding events, also cancel their timers. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201204151234.19729-4-chris@chris-wilson.co.uk
-
Chris Wilson authored
The GT and engine reset failures are completely invisible when looking at a trace for a bug, but are vital to understanding the incomplete flow. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201204151234.19729-3-chris@chris-wilson.co.uk
-
Chris Wilson authored
We currently presume that the engine reset is successful, cancelling the expired preemption timer in the process. However, engine resets can fail, leaving the timeout still pending and we will then respond to the timeout again next time the tasklet fires. What we want is for the failed engine reset to be promoted to a full device reset, which is kicked by the heartbeat once the engine stops processing events. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1168 Fixes: 3a7a92ab ("drm/i915/execlists: Force preemption") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: <stable@vger.kernel.org> # v5.5+ Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201204151234.19729-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Before resetting the engine, we suspend the execution of the guilty request, so that we can continue execution with a new context while we slowly compress the captured error state for the guilty context. However, if the reset fails, we will promptly attempt to reset the same request again, and discover the ongoing capture. Ignore the second attempt to suspend and capture the same request. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1168 Fixes: 32ff621f ("drm/i915/gt: Allow temporary suspension of inflight requests") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: <stable@vger.kernel.org> # v5.7+ Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201204151234.19729-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
In the course of discovering and closing many races with context closure and execbuf submission, since commit 61231f6b ("drm/i915/gem: Check that the context wasn't closed during setup") we started checking that the context was not closed by another userspace thread during the execbuf ioctl. In doing so we cancelled the inflight request (by telling it to be skipped), but kept reporting success since we do submit a request, albeit one that doesn't execute. As the error is known before we return from the ioctl, we can report the error we detect immediately, rather than leave it on the fence status. With the immediate propagation of the error, it is easier for userspace to handle. Fixes: 61231f6b ("drm/i915/gem: Check that the context wasn't closed during setup") Testcase: igt/gem_ctx_exec/basic-close-race Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: <stable@vger.kernel.org> # v5.7+ Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201203103432.31526-1-chris@chris-wilson.co.uk
-
- 03 Dec, 2020 1 commit
-
-
Dan Carpenter authored
There is a copy and paste bug in this code. It's supposed to check "obj2" instead of checking "obj" a second time. Fixes: 80f0b679 ("drm/i915: Add an implementation for i915_gem_ww_ctx locking, v2.") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/8ilneOcJAjwqU4t@mwand
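The shape of the mistake, sketched with hypothetical helper names (the real code is in the ww locking work referenced by the Fixes tag):

```c
obj = create_test_object(i915);		/* hypothetical helper */
if (IS_ERR(obj))
	return PTR_ERR(obj);

obj2 = create_test_object(i915);
if (IS_ERR(obj))			/* BUG: copy-paste, should test obj2 */
	return PTR_ERR(obj);		/* fix: IS_ERR(obj2) / PTR_ERR(obj2) */
```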
-
- 02 Dec, 2020 6 commits
-
-
Chris Wilson authored
Mixing I915_ALLOC_CONTIGUOUS and I915_ALLOC_MAX_SEGMENT_SIZE fared badly. The two directives conflict, with the contiguous request setting the min_order to the full size of the object, and the max-segment-size setting the max_order to the limit of the DMA mapper. This results in a situation where max_order < min_order, causing our sanity checks to fail. Instead of limiting the buddy block size, in the previous patch we split the oversized buddy into multiple scatterlist elements. Fixes: d2cf0125 ("drm/i915/lmem: Limit block size to 4G") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201202173444.14903-2-chris@chris-wilson.co.uk
-
Chris Wilson authored
Adhere to the i915_sg_max_segment() limit on the lengths of individual scatterlist elements, and in doing so split up very large chunks of lmem into manageable pieces for the dma-mapping backend. Reported-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Suggested-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201202173444.14903-1-chris@chris-wilson.co.uk
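A hedged sketch of the splitting idea (not the driver code; names and types are simplified): a block larger than the DMA layer's segment limit is emitted as several scatterlist entries, each capped at max_segment.

```c
static void fill_sg_from_block(struct scatterlist *sg,
			       dma_addr_t offset, u64 remaining,
			       unsigned int max_segment)
{
	while (remaining) {
		unsigned int len = min_t(u64, remaining, max_segment);

		/* Each entry stays within what the DMA mapper accepts. */
		sg_dma_address(sg) = offset;
		sg_dma_len(sg) = len;
		sg->length = len;

		offset += len;
		remaining -= len;
		if (remaining)
			sg = sg_next(sg);
	}
}
```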
-
Chris Wilson authored
Since we only initialise the prng once within the scope of the selftest, we can use the default initialiser. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201202130406.18461-1-chris@chris-wilson.co.uk
-
Tvrtko Ursulin authored
Adding any kinds of "last" abi markers is usually a mistake which I repeated when implementing the PMU because it felt convenient at the time. This patch marks I915_PMU_LAST as deprecated and stops the internal implementation using it for sizing the event status bitmask and array. New way of sizing the fields is a bit less elegant, but it omits reserving slots for tracking events we are not interested in, and as such saves some runtime space. Adding sampling events is likely to be a special event and the new plumbing needed will be easily detected in testing. Existing asserts against the bitfield and array sizes are keeping the code safe. First event which gets the new treatment in this new scheme are the interrupts - which neither needs any tracking in i915 pmu nor needs waking up the GPU to read it. v2: * Streamline helper names. (Chris) v3: * Comment which events need tracking. (Chris) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201201131757.206367-1-tvrtko.ursulin@linux.intel.com
-
Chris Wilson authored
Convert the NULL pointer from a failed vmap() to ERR_PTR(-ENOMEM) for propagation.
<1> [269.830447] BUG: kernel NULL pointer dereference, address: 0000000000000000
<1> [269.830455] #PF: supervisor write access in kernel mode
<1> [269.830457] #PF: error_code(0x0002) - not-present page
<6> [269.830459] PGD 0 P4D 0
<4> [269.830465] Oops: 0002 [#1] PREEMPT SMP PTI
<4> [269.830469] CPU: 3 PID: 5789 Comm: i915_selftest Tainted: G U 5.10.0-rc6-CI-CI_DRM_9412+ #1
<4> [269.830472] Hardware name: Intel Corp. Geminilake/GLK RVP2 LP4SD (07), BIOS GELKRVPA.X64.0062.B30.1708222146 08/22/2017
<4> [269.830636] RIP: 0010:igt_client_fill+0x1b9/0x5f0 [i915]
<4> [269.830640] Code: e8 0c 32 02 00 48 89 c5 48 3d 00 f0 ff ff 0f 87 e9 02 00 00 48 8b 8b 78 06 00 00 44 89 f0 48 89 ef 35 af be ad de 48 c1 e9 02 <f3> ab 0f b6 83 80 03 00 00 89 c2 c0 ea 03 83 e2 02 75 09 83 c8 20
<4> [269.830642] RSP: 0018:ffffc900007a79e8 EFLAGS: 00010206
<4> [269.830645] RAX: 00000000df0bf37b RBX: ffff88811d8af3c0 RCX: 00000000010afc00
<4> [269.830647] RDX: 0000000000000000 RSI: ffffffff822f2b17 RDI: 0000000000000000
<4> [269.830648] RBP: 0000000000000000 R08: ffff888111c80930 R09: 00000000fffffffe
<4> [269.830650] R10: 0000000000000000 R11: 00000000ffbc70e4 R12: ffff88811090f700
<4> [269.830652] R13: ffff88810df60180 R14: 0000000001a64dd4 R15: 0000000000000000
<4> [269.830655] FS: 00007f137b07de40(0000) GS:ffff88817b980000(0000) knlGS:0000000000000000
<4> [269.830657] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4> [269.830659] CR2: 0000000000000000 CR3: 0000000115984000 CR4: 0000000000350ee0
<4> [269.830661] Call Trace:
<4> [269.830780] __i915_subtests.cold.7+0x42/0x92 [i915]
<4> [269.830886] ? __i915_nop_teardown+0x10/0x10 [i915]
<4> [269.830989] ? __i915_live_setup+0x30/0x30 [i915]
<4> [269.831104] __run_selftests.part.3+0xf7/0x14c [i915]
<4> [269.831939] i915_live_selftests.cold.5+0x1f/0x47 [i915]
<4> [269.832027] i915_pci_probe+0x93/0x1d0 [i915]
<4> [269.832037] ? _raw_spin_unlock_irqrestore+0x2f/0x50
<4> [269.832043] pci_device_probe+0x9e/0x110
<4> [269.832049] really_probe+0x1c4/0x430
<4> [269.832053] driver_probe_device+0xd9/0x140
<4> [269.832056] device_driver_attach+0x4a/0x50
<4> [269.832059] __driver_attach+0x83/0x140
<4> [269.832062] ? device_driver_attach+0x50/0x50
<4> [269.832064] ? device_driver_attach+0x50/0x50
<4> [269.832067] bus_for_each_dev+0x75/0xc0
<4> [269.832070] bus_add_driver+0x14b/0x1f0
<4> [269.832073] driver_register+0x66/0xb0
<4> [269.832160] i915_init+0x70/0x87 [i915]
<4> [269.832164] ? 0xffffffffa05e3000
<4> [269.832168] do_one_initcall+0x56/0x2e0
<4> [269.832174] ? kmem_cache_alloc_trace+0x6a4/0x770
<4> [269.832180] do_init_module+0x55/0x200
<4> [269.832184] load_module+0x22a2/0x2480
<4> [269.832191] ? __do_sys_finit_module+0xad/0x110
<4> [269.832194] __do_sys_finit_module+0xad/0x110
<4> [269.832199] do_syscall_64+0x33/0x80
<4> [269.832202] entry_SYSCALL_64_after_hwframe+0x44/0xa9
<4> [269.832204] RIP: 0033:0x7f137a718839
<4> [269.832208] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
<4> [269.832210] RSP: 002b:00007ffc4267d308 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
<4> [269.832214] RAX: ffffffffffffffda RBX: 000056288b88f0d0 RCX: 00007f137a718839
<4> [269.832216] RDX: 0000000000000000 RSI: 000056288b895850 RDI: 0000000000000007
<4> [269.832218] RBP: 000056288b895850 R08: 312d3d7374736574 R09: 000056288b88c020
<4> [269.832220] R10: 00007ffc4267d450 R11: 0000000000000246 R12: 0000000000000000
<4> [269.832222] R13: 000056288b8877a0 R14: 0000000000000020 R15: 0000000000000045
<4> [269.832226] Modules linked in: i915(+) vgem mei_hdcp snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel cdc_ether usbnet snd_intel_dspcfg mii snd_hda_codec snd_hwdep snd_hda_core r8169 snd_pcm realtek mei_me mei prime_numbers intel_lpss_pci i2c_hid pinctrl_geminilake [last unloaded: i915]
<4> [269.832264] CR2: 0000000000000000
Fixes: cb2ce93e ("drm/i915/gem: Differentiate oom failures from invalid map types") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201201215441.31900-1-chris@chris-wilson.co.uk
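A minimal sketch of the fix (the wrapper name is illustrative; vmap() itself returns NULL on failure):

```c
static void *map_pages(struct page **pages, unsigned int n_pages)
{
	void *vaddr;

	vaddr = vmap(pages, n_pages, 0, PAGE_KERNEL);
	if (!vaddr)
		return ERR_PTR(-ENOMEM);	/* was: return NULL */

	return vaddr;
}
```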
-
Swathi Dhanavanthri authored
This workaround is applicable only for tgl,rkl and dg1. Bspec: 52890, 53273, 53508. Signed-off-by: Swathi Dhanavanthri <swathi.dhanavanthri@intel.com> Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201201175735.1377372-1-lucas.demarchi@intel.com
-
- 30 Nov, 2020 5 commits
-
-
Chris Wilson authored
After a cursory check on the parameters to i915_gem_object_pin_map(), where we return a precise error, if the backend rejects the mapping we always return ERR_PTR(-ENOMEM). Let us also return a more precise error here so we can differentiate between running out of memory and programming errors (or situations where we may be trying different paths and looking for an error from an unsupported map). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201127195334.13134-1-chris@chris-wilson.co.uk
-
Venkata Sandeep Dhanalakota authored
Block sizes are only limited by the largest power-of-two that will fit in the region size, but to construct an object we also require feeding it into an sg list, where the upper limit of the sg entry is at most UINT_MAX. Therefore to prevent issues with allocating blocks that are too large, add the flag I915_ALLOC_MAX_SEGMENT_SIZE which should limit block sizes to the i915_sg_segment_size(). v2: (matt) - query the max segment. - prefer flag to limit block size to 4G, since it's best not to assume the user will feed the blocks into an sg list. - simple selftest so we don't have to guess. Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: CQ Tang <cq.tang@intel.com> Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201130134721.54457-1-matthew.auld@intel.com
-
Matthew Auld authored
For the LMEM case if we have suitable alignment and 2M physical pages we should always get 2M GTT pages within the constraints of the hugepages selftest. If we don't then something might be wrong in our construction of the backing pages. References: 330b7d33 ("drm/i915/region: fix order when adding blocks") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201130141809.65330-2-matthew.auld@intel.com
-
Matthew Auld authored
In igt_ppgtt_sanity_check we should also exercise the non-contiguous option for LMEM, since this will give us slightly different sg layouts and alignment. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201130141809.65330-1-matthew.auld@intel.com
-
Chris Wilson authored
We treat idling the GT (intel_rps_park) as a downclock event, and reduce the frequency we intend to restart the GT with. Since the two workloads are likely related (e.g. a compositor rendering every 16ms), we want to carry the frequency and load information from across the idling. However, we do also need to update the frequencies so that workloads that run for less than 1ms are autotuned by RPS (otherwise we leave compositors running at max clocks, draining excess power). Conversely, if we try to run too slowly, the next workload has to run longer. Since there is a hysteresis in the power graph, below a certain frequency running a short workload for longer consumes more energy than running it slightly higher for less time. The exact balance point is unknown beforehand, but measurements with 30fps media playback indicate that RPe is a better choice. Reported-by: Edward Baker <edward.baker@intel.com> Tested-by: Edward Baker <edward.baker@intel.com> Fixes: 043cd2d1 ("drm/i915/gt: Leave rps->cur_freq on unpark") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Edward Baker <edward.baker@intel.com> Cc: Andi Shyti <andi.shyti@intel.com> Cc: Lyude Paul <lyude@redhat.com> Cc: <stable@vger.kernel.org> # v5.8+ Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Andi Shyti <andi.shyti@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201124183521.28623-1-chris@chris-wilson.co.uk
-
- 28 Nov, 2020 1 commit
-
-
Deepak R Varma authored
idr_init() uses base 0, which is an invalid identifier. The new function idr_init_base() allows the IDR to start ID lookup from base 1. This avoids unnecessary lookups that would otherwise start from 0, since 0 is always unused. References: commit 6ce711f2 ("idr: Make 1-based IDRs more efficient") Signed-off-by: Deepak R Varma <mh12gx2825@gmail.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201104150339.GA68663@localhost
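A short sketch of the change (the IDR and pointer names are illustrative):

```c
struct idr ctx_idr;
int id;

idr_init_base(&ctx_idr, 1);	/* was: idr_init(&ctx_idr); */

/* IDs are allocated from 1 upwards, as before; the base merely tells
 * the IDR that slot 0 is never used, so lookups can skip it. */
id = idr_alloc(&ctx_idr, ctx, 1, 0, GFP_KERNEL);
```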
-
- 27 Nov, 2020 2 commits
-
-
Venkata Ramana Nayana authored
As we use a shmemfs file to hold the context state, when not in use it may be swapped out, such as across suspend. Since we wrote into the shmemfs without marking the pages as dirty, the contents may be dropped instead of being written back to swap. On re-using the shmemfs file, such as creating a new context after resume, the contents of that file were likely garbage and so the new context could then hang the GPU. Simply mark the page as being written when copying into the shmemfs file, and the new contents will be retained across swapout. Fixes: be1cb55a ("drm/i915/gt: Keep a no-frills swappable copy of the default context state") Cc: Sudeep Dutt <sudeep.dutt@intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Signed-off-by: CQ Tang <cq.tang@intel.com> Signed-off-by: Venkata Ramana Nayana <venkata.ramana.nayana@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: <stable@vger.kernel.org> # v5.8+ Link: https://patchwork.freedesktop.org/patch/msgid/20201127120718.454037-161-matthew.auld@intel.com
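A rough sketch of the fix (helper and variable names are illustrative, not the exact driver code): after copying into a shmemfs page, mark it dirty so reclaim writes it back to swap instead of discarding it.

```c
static void write_to_shmem_page(struct page *page,
				const void *data, size_t len)
{
	void *vaddr = kmap(page);

	memcpy(vaddr, data, len);
	kunmap(page);

	set_page_dirty(page);		/* the missing step */
	mark_page_accessed(page);
}
```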
-
Chris Wilson authored
We checked the table size against a hardcoded number of entries, and that number was excluding the special mocs registers at the end. Fixes: 977933b5 ("drm/i915/gt: Program mocs:63 for cache eviction on gen9") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: <stable@vger.kernel.org> # v4.3+ Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201127102540.13117-1-chris@chris-wilson.co.uk
-