- 01 Jul, 2022 14 commits
-
-
Gustavo Sousa authored
A new PVC-specific workaround has just been added to the BSpec. BSpec: 64027 Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220630201407.16770-1-gustavo.sousa@intel.com
-
Matthew Auld authored
With the uAPI in place we should now have enough to ensure a working system on small BAR configurations. v2: (Nirmoy & Thomas): - s/full BAR/Resizable BAR/ which is hopefully more easily understood by users. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-13-matthew.auld@intel.com
-
Matthew Auld authored
Falling back to memcpy/memset shouldn't be allowed if we know we have CCS state to manage using the blitter. Otherwise we are potentially leaving the aux CCS state in an unknown state, which smells like an info leak. Fixes: 48760ffe ("drm/i915/gt: Clear compress metadata for Flat-ccs objects") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-12-matthew.auld@intel.com
-
Matthew Auld authored
If the move or clear operation somehow fails, and the memory underneath is not cleared, like when moving to lmem, then we currently fallback to memcpy or memset. However with small-BAR systems this fallback might no longer be possible. For now we use the set_wedged sledgehammer if we ever encounter such a scenario, and mark the object as borked to plug any holes where access to the memory underneath can happen. Add some basic selftests to exercise this. v2: - In the selftests make sure we grab the runtime pm around the reset. Also make sure we grab the reset lock before checking if the device is wedged, since the wedge might still be in-progress and hence the bit might not be set yet. - Don't wedge or put the object into an unknown state, if the request construction fails (or similar). Just returning an error and skipping the fallback should be safe here. - Make sure we wedge each gt. (Thomas) - Peek at the unknown_state in io_reserve, that way we don't have to export or hand roll the fault_wait_for_idle. (Thomas) - Add the missing read-side barriers for the unknown_state. (Thomas) - Some kernel-doc fixes. (Thomas) v3: - Tweak the ordering of the set_wedged, also add FIXME. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-11-matthew.auld@intel.com
-
Matthew Auld authored
We should always be explicit and allocate a fence slot before adding a new fence. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-10-matthew.auld@intel.com
-
Matthew Auld authored
It's not supported, and just skips later anyway. With small-BAR things get more complicated since all of stolen is likely not even CPU accessible, hence not passing I915_BO_ALLOC_GPU_ONLY just results in the object create failing. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-9-matthew.auld@intel.com
-
Matthew Auld authored
A non-recoverable context must be used if the user wants proper error capture on discrete platforms. In the future the kernel may want to blit the contents of some objects when later doing the capture stage. Also extend to newer integrated platforms. v2(Thomas): - Also extend to newer integrated platforms, for capture buffer memory allocation purposes. v3 (Reported-by: kernel test robot <lkp@intel.com>): - Fix build on !CONFIG_DRM_I915_CAPTURE_ERROR Testcase: igt@gem_exec_capture@capture-recoverable Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-8-matthew.auld@intel.com
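For userspace, the practical consequence is that a capture-enabled workload has to opt out of context recovery explicitly. Below is a minimal, hedged sketch using the existing context-parameter uAPI; the ioctl and parameter names come from the published i915 uAPI headers, while the helper itself (and its error handling) is illustrative only.

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Mark an existing GEM context as non-recoverable so the kernel is
   * willing to do proper error capture for it on discrete (and the
   * newer integrated) platforms. Returns the raw ioctl result. */
  static int context_disable_recovery(int drm_fd, uint32_t ctx_id)
  {
          struct drm_i915_gem_context_param p;

          memset(&p, 0, sizeof(p));
          p.ctx_id = ctx_id;
          p.param = I915_CONTEXT_PARAM_RECOVERABLE;
          p.value = 0; /* non-recoverable */

          return ioctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
  }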
-
Matthew Auld authored
Skip capturing any lmem pages that can't be copied using the CPU. This is now only best effort on platforms that have small BAR. Testcase: igt@gem-exec-capture@capture-invisible Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-7-matthew.auld@intel.com
-
Matthew Auld authored
If set, force the allocation to be placed in the mappable portion of I915_MEMORY_CLASS_DEVICE. One big restriction here is that system memory (i.e. I915_MEMORY_CLASS_SYSTEM) must be given as a potential placement for the object, so that we can always spill the object into system memory if we can't make space. Testcase: igt@gem-create@create-ext-cpu-access-sanity-check Testcase: igt@gem-create@create-ext-cpu-access-big Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-6-matthew.auld@intel.com
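A hedged userspace sketch of how such an allocation could be requested, assuming the flag is exposed as I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS (the name used by the uAPI patch at the head of this series); note that I915_MEMORY_CLASS_SYSTEM is deliberately listed alongside device memory, as required above. Struct and ioctl names come from the i915 uAPI header, the helper is illustrative.

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Sketch: create a buffer that must be CPU-accessible, i.e. placed in the
   * mappable part of lmem, with smem listed as the mandatory fallback. */
  static int create_cpu_accessible_bo(int drm_fd, uint64_t size, uint32_t *handle)
  {
          struct drm_i915_gem_memory_class_instance regions[] = {
                  { .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
                  { .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
          };
          struct drm_i915_gem_create_ext_memory_regions ext_regions = {
                  .base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS },
                  .num_regions = 2,
                  .regions = (uintptr_t)regions,
          };
          struct drm_i915_gem_create_ext create = {
                  .size = size,
                  .flags = I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS,
                  .extensions = (uintptr_t)&ext_regions,
          };
          int ret;

          ret = ioctl(drm_fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);
          if (ret)
                  return ret;

          *handle = create.handle;
          return 0;
  }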
-
Matthew Auld authored
On small BAR configurations, when dealing with I915_MEMORY_CLASS_DEVICE allocations, we assume that by default, all userspace allocations should be placed in the non-CPU visible portion. Note that dumb buffers are not included here, since these are not "GPU accelerated" and likely need CPU access. We choose to just always set GPU_ONLY, and let the backend figure out if that should be ignored or not, for example on full BAR systems. In a later patch userspace will be able to provide a hint if CPU access to the buffer is needed. v2(Thomas) - Apply GPU_ONLY on all discrete devices, but only if the BO can be placed in LMEM. Down in the depths this should be turned into a noop, where required, and as an annotation it still makes some sense. If we apply it regardless of the placements then we end up needing to check the placements during exec capture. Also it's slightly inconsistent since the NEEDS_CPU_ACCESS can only be applied on objects that can be placed in LMEM. The other annoyance would be gem_create_ext vs plain gem_create, if we were to always apply GPU_ONLY. Testcase: igt@gem-create@create-ext-cpu-access-sanity-check Testcase: igt@gem-create@create-ext-cpu-access-big Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-5-matthew.auld@intel.com
-
Matthew Auld authored
No longer used. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-4-matthew.auld@intel.com
-
Matthew Auld authored
Vulkan would like to have a rough measure of how much device memory can in theory be allocated. Also add unallocated_cpu_visible_size to track the visible portion, in case the device is using small BAR. Also tweak the locking so we get nice consistent values for both the mm->avail and the visible tracking. v2: tweak the locking slightly so we update the mm->avail and visible tracking as one atomic operation, such that userspace doesn't get strange values when sampling the values. Testcase: igt@i915_query@query-regions-unallocated Testcase: igt@i915_query@query-regions-sanity-check Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-3-matthew.auld@intel.com
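The "one atomic operation" point can be sketched as below. This is a minimal illustration with hypothetical structure and field names (the real tracking lives in the buddy allocator region code): both counters are updated and sampled under the same lock so a query never observes one without the other.

  #include <linux/mutex.h>
  #include <linux/types.h>

  /* Sketch only: hypothetical structure mirroring the idea that mm->avail
   * and the CPU-visible tracking are updated as one atomic operation. */
  struct region_tracking {
          struct mutex lock;
          u64 avail;            /* total unallocated bytes */
          u64 visible_avail;    /* unallocated CPU-visible bytes */
  };

  static void region_alloc_account(struct region_tracking *t,
                                   u64 size, u64 visible_size)
  {
          mutex_lock(&t->lock);
          t->avail -= size;
          t->visible_avail -= visible_size;
          mutex_unlock(&t->lock);
  }

  static void region_query_snapshot(struct region_tracking *t,
                                    u64 *avail, u64 *visible_avail)
  {
          mutex_lock(&t->lock);   /* consistent pair, never torn */
          *avail = t->avail;
          *visible_avail = t->visible_avail;
          mutex_unlock(&t->lock);
  }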
-
Matthew Auld authored
Userspace wants to know the size of the CPU visible portion of device local-memory, and on small BAR devices the probed_size is no longer enough. In Vulkan, for example, it would like to know the size in bytes for the CPU visible VkMemoryHeap. We already track the io_size for each region, so plumb that through to the region query. v2: Drop the ( -1 = unknown ) stuff, which is confusing since nothing can currently ever return such a value. Testcase: igt@i915_query@query-regions-sanity-check Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Acked-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-2-matthew.auld@intel.com
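From the userspace side, the value surfaces through the existing memory-regions query. The sketch below uses the ioctl and query names from the i915 uAPI header and assumes the new field is called probed_cpu_visible_size (the name mentioned in the uAPI patch of this series); error handling is trimmed.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Sketch: dump the probed vs CPU-visible size of each memory region,
   * using the usual two-pass query pattern. */
  static void dump_region_sizes(int drm_fd)
  {
          struct drm_i915_query_item item = {
                  .query_id = DRM_I915_QUERY_MEMORY_REGIONS,
          };
          struct drm_i915_query query = {
                  .num_items = 1,
                  .items_ptr = (uintptr_t)&item,
          };
          struct drm_i915_query_memory_regions *regions;
          uint32_t i;

          /* first pass: learn the required buffer length */
          if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0)
                  return;

          regions = calloc(1, item.length);
          if (!regions)
                  return;

          /* second pass: fill the buffer */
          item.data_ptr = (uintptr_t)regions;
          if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &query) == 0) {
                  for (i = 0; i < regions->num_regions; i++) {
                          struct drm_i915_memory_region_info *info = &regions->regions[i];

                          printf("class %u instance %u: probed %llu, cpu visible %llu\n",
                                 info->region.memory_class, info->region.memory_instance,
                                 (unsigned long long)info->probed_size,
                                 (unsigned long long)info->probed_cpu_visible_size);
                  }
          }
          free(regions);
  }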
-
Matthew Auld authored
Add an entry for the new uapi needed for small BAR on DG2+. v2: - Some spelling fixes and other small tweaks. (Akeem & Thomas) - Rework error capture interactions, including no longer needing NEEDS_CPU_ACCESS for objects marked for capture. (Thomas) - Add probed_cpu_visible_size. (Lionel) v3: - Drop the vma query for now. - Add unallocated_cpu_visible_size as part of the region query. - Improve the docs some more, including documenting the expected behaviour on older kernels, since this came up in some offline discussion. v4: - Various improvements all over. (Tvrtko) v5: - Include newer integrated platforms when applying the non-recoverable context and error capture restriction. (Thomas) Mesa: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/16739 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: mesa-dev@lists.freedesktop.org Acked-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Acked-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Acked-by: Jordan Justen <jordan.l.justen@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220629174350.384910-1-matthew.auld@intel.com
-
- 30 Jun, 2022 1 commit
-
-
Daniele Ceraolo Spurio authored
The only difference between the ADL S and P GuC FWs is the HWConfig support. ADL-N does not support HWConfig, so we should use the same binary as ADL-S, otherwise the GuC might attempt to fetch a config table that does not exist. ADL-N is internally identified as an ADL-P, so we need to special-case it in the FW selection code. Fixes: 7e28d0b2 ("drm/i915/adl-n: Enable ADL-N platform") Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Tejas Upadhyay <tejas.upadhyay@intel.com> Cc: Anusha Srivatsa <anusha.srivatsa@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220621233005.3952293-1-daniele.ceraolospurio@intel.com
-
- 29 Jun, 2022 2 commits
-
-
Vinay Belgaumkar authored
This test will validate that we can achieve the actual RP0 frequency. Pcode grants frequencies based on what GuC is requesting. However, thermal throttling can limit what is being granted. Add a test to request max, but don't fail the test if RP0 is not granted due to throttle reasons. Also optimize the selftest by using a common run_test function to avoid code duplication. Rename the "clamp" tests to vary_max_freq and vary_min_freq. v2: Fix compile warning v3: Review comments (Ashutosh). Added a FIXME for the media RP0 case. v4: Checkpatch (strict) fixes, remove FIXME and other comments (Ashutosh) Fixes commit 8ee2c227 ("drm/i915/guc/slpc: Add SLPC selftest") Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220627230346.27720-1-vinay.belgaumkar@intel.com
-
Nirmoy Das authored
For some platforms we use the stop_machine version of gen8_ggtt_insert_page/gen8_ggtt_insert_entries to avoid a concurrent GGTT access bug, but this causes a circular locking dependency warning: Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&ggtt->error_mutex); lock(dma_fence_map); lock(&ggtt->error_mutex); lock(cpu_hotplug_lock); Fix this by calling gen8_ggtt_insert_page/gen8_ggtt_insert_entries directly at error capture, which is safe against concurrent GGTT access because the reset path makes sure of that. v2: Fix rebase conflict and added a comment. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5595 Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220624110821.29190-1-nirmoy.das@intel.com
-
- 27 Jun, 2022 8 commits
-
-
Vinay Belgaumkar authored
SLPC min/max frequency updates require H2G calls. We are seeing timeouts when the GuC channel is backed up and it is unable to respond in a timely fashion, causing warnings and affecting CI. This is seen when waitboosting happens during a stress test. This patch updates the waitboost path to use a non-blocking H2G call instead, which returns as soon as the message is successfully transmitted. v2: Use drm_notice to report any errors that might occur while sending the waitboost H2G request (Tvrtko) v3: Add drm_notice inside force_min_freq (Ashutosh) Cc: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220623003225.23301-1-vinay.belgaumkar@intel.com
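The shape of the change can be sketched as follows. The helper name and parameter id below are hypothetical stand-ins for the real GuC CT interfaces; the point is only that the boost request is fired off without waiting for a G2H response, and a send failure is merely reported via drm_notice().

  #include <linux/types.h>
  #include <drm/drm_print.h>

  /* Hypothetical non-blocking host-to-GuC send: returns once the message is
   * queued on the CT channel, without waiting for the GuC to act on it. */
  extern int guc_h2g_send_nowait(u32 param_id, u32 value);

  #define SLPC_PARAM_MIN_FREQ 0x1 /* hypothetical parameter id */

  static void slpc_boost_nonblocking(struct drm_device *drm, u32 boost_freq)
  {
          int err;

          err = guc_h2g_send_nowait(SLPC_PARAM_MIN_FREQ, boost_freq);
          if (err)
                  /* don't block or retry in the boost path, just report */
                  drm_notice(drm, "Failed to send waitboost request: %d\n", err);
  }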
-
Umesh Nerlige Ramappa authored
For the execlists backend, the current implementation of Wa_22011802037 is to stop the CS before doing a reset of the engine. This WA was further extended to wait for any pending MI FORCE WAKEUPs before issuing a reset. Add the extended steps in the execlist path of reset. In addition, extend the WA to gen11. v2: (Tvrtko) - Clarify comments, commit message, fix typos - Use IS_GRAPHICS_VER for gen 11/12 checks v3: (Daniele) - Drop changes to intel_ring_submission since the WA does not apply to it - Log an error if MSG IDLE is not defined for an engine Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Fixes: f6aa0d71 ("drm/i915: Add Wa_22011802037 force cs halt") Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220621192105.2100585-1-umesh.nerlige.ramappa@intel.com
-
Alan Previn authored
Using two different types of workoads, it was observed that guc_update_engine_gt_clks was being called too frequently and/or causing a CPU-to-lmem bandwidth hit over PCIE. Details on the workloads and numbers are in the notes below. Background: At the moment, guc_update_engine_gt_clks can be invoked via one of 3 ways. #1 and #2 are infrequent under normal operating conditions: 1.When a predefined "ping_delay" timer expires so that GuC- busyness can sample the GTPM clock counter to ensure it doesn't miss a wrap-around of the 32-bits of the HW counter. (The ping_delay is calculated based on 1/8th the time taken for the counter go from 0x0 to 0xffffffff based on the GT frequency. This comes to about once every 28 seconds at a GT frequency of 19.2Mhz). 2.In preparation for a gt reset. 3.In response to __gt_park events (as the gt power management puts the gt into a lower power state when there is no work being done). Root-cause: For both the workloads described farther below, it was observed that when user space calls IOCTLs that unparks the gt momentarily and repeats such calls many times in quick succession, it triggers calling guc_update_engine_gt_clks as many times. However, the primary purpose of guc_update_engine_gt_clks is to ensure we don't miss the wraparound while the counter is ticking. Thus, the solution is to ensure we skip that check if gt_park is calling this function earlier than necessary. Solution: Snapshot jiffies when we do actually update the busyness stats. Then get the new jiffies every time intel_guc_busyness_park is called and bail if we are being called too soon. Use half of the ping_delay as a safe threshold. NOTE1: Workload1: IGTs' gem_create was modified to create a file handle, allocate memory with sizes that range from a min of 4K to the max supported (in power of two step-sizes). Its maps, modifies and reads back the memory. Allocations and modification is repeated until total memory allocation reaches the max. Then the file handle is closed. With this workload, guc_update_engine_gt_clks was called over 188 thousand times in the span of 15 seconds while this test ran three times. With this patch, the number of calls reduced to 14. NOTE2: Workload2: 30 transcode sessions are created in quick succession. While these sessions are created, pcm-iio tool was used to measure I/O read operation bandwidth consumption sampled at 100 milisecond intervals over the course of 20 seconds. The total bandwidth consumed over 20 seconds without this patch was measured at average at 311KBps per sample. With this patch, the number went down to about 175Kbps which is about a 43% savings. Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Reviewed-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220623023157.211650-2-alan.previn.teres.alexis@intel.com
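The core of the fix can be sketched as below, using only generic kernel timekeeping primitives; the structure and helper names are illustrative stand-ins, not the exact driver code. Record jiffies whenever the stats are actually updated, and have the park path bail out early if it runs again within half the ping_delay.

  #include <linux/jiffies.h>

  /* Illustrative sketch of the throttling described above. */
  struct busyness_state {
          unsigned long last_stat_jiffies;   /* snapshot at last real update */
          unsigned long ping_delay_jiffies;  /* wrap-around ping period */
  };

  static void busyness_update(struct busyness_state *b)
  {
          /* ... sample the GTPM clock counters here (the expensive part,
           * hitting lmem over PCIe on discrete parts) ... */
          b->last_stat_jiffies = jiffies;
  }

  static void busyness_park(struct busyness_state *b)
  {
          /*
           * Parking only needs to guarantee we don't miss a counter wrap,
           * so skip the update if we ran recently: half the ping delay is
           * a safe threshold.
           */
          if (time_before(jiffies,
                          b->last_stat_jiffies + b->ping_delay_jiffies / 2))
                  return;

          busyness_update(b);
  }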
-
Niranjana Vishwanathapura authored
This reverts commit 1e98d8c5. The problem with that patch is that it makes i915_request hold a reference to intel_context, which in turn holds a reference on the VM. This strong back referencing can lead to reference loops, which lead to resource leaks. An example is the upcoming VM_BIND work, which requires the VM to hold a reference to some shared VM-specific BO. But this BO's dma-resv fences hold a reference to the i915_request, thus leading to a reference loop. v2: Do not use reserved requests for virtual engines Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Suggested-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220614184348.23746-3-ramalingam.c@intel.com
-
Niranjana Vishwanathapura authored
In i915_fence_get_driver_name(), the user may not hold a reference to rq->engine. Hence do not access it. Instead, store the required device private pointer in 'rq->i915' and use it. Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Suggested-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220614184348.23746-2-ramalingam.c@intel.com
-
Matt Roper authored
We've been introducing new registers with a mix of "XEHP_" (architecture) and "XEHPSDV_" (platform) prefixes. For consistency, let's settle on "XEHP_" as the preferred form. XEHPSDV_RP_STATE_CAP stays with its current name since that's truly a platform-specific register and not something that applies to the Xe_HP architecture as a whole. Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Caz Yokoyama <caz@caztech.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220624210328.308630-2-matthew.d.roper@intel.com
-
Matt Roper authored
XEHPSDV_FLAT_CCS_BASE_ADDR, GEN8_L3_LRA_1_GPGPU, and MMCD_MISC_CTRL were duplicated between i915_reg.h and intel_gt_regs.h. These are all GT registers, so we should drop the copy from i915_reg.h. XEHPSDV_TILE0_ADDR_RANGE was defined in i915_reg.h, but really belongs in intel_gt_regs.h. Move it. Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220624210328.308630-1-matthew.d.roper@intel.com
-
Matthew Auld authored
For imported dma-buf objects we leave the object as cache_coherent = 0 across all platforms, which is reasonable given that we have no clue what the memory underneath is, and it's not like the driver can ever manually clflush the pages anyway (like with i915_gem_clflush_object) for such objects. However on discrete we choose to treat cache_dirty = true as a programmer error, leading to a warning. The simplest fix looks to be to just change the ordering in cpu_write_needs_clflush to prevent ever setting cache_dirty for dma-buf objects on discrete. Fixes: d028a769 ("drm/i915/dmabuf: Fix prime_mmap to work when using LMEM") Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5266 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220622155919.355081-1-matthew.auld@intel.com
-
- 23 Jun, 2022 1 commit
-
-
Akeem G Abodunrin authored
With GuC submission, it takes a little bit longer to switch contexts among all available engines simultaneously when running the live_parallel_switch subtest. Increase the timeout. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5885 Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220622141104.334432-1-matthew.auld@intel.com
-
- 22 Jun, 2022 7 commits
-
-
Lucas De Marchi authored
Re-do what was attempted in commit 7a5c9223 ("drm/i915/gt: Split intel-gtt functions by arch"). The goal of that commit was to split the handlers for older hardware that depend on intel-gtt.ko so i915 can be built for non-x86 archs, after some more patches. Other archs do not need intel-gtt.ko. The main issue with the previous approach: it moved all the hooks, including the gen8 ones, which are used by all platforms gen8 and newer. Re-do the split, moving only the handlers for gen < 6, which are the only ones calling out to the separate module. While at it, do some minor cleanups: - Rename the prefix s/gen5_/gmch_/ to more accurately reflect what platforms are covered by intel_ggtt_gmch.c - Remove dead code for gen12 out of needs_idle_maps() - Remove a leftover TODO comment - Re-order the if/else ladder in ggtt_probe_hw() to keep newest platforms first v2: Add minor cleanups (Matt Roper) Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220617230559.2109427-2-lucas.demarchi@intel.com
-
Lucas De Marchi authored
Exporting symbols like intel_gtt_* creates some confusion inside i915, which has similarly named symbols of its own. In an attempt to isolate platforms needing intel-gtt.ko, commit 7a5c9223 ("drm/i915/gt: Split intel-gtt functions by arch") moved way too much into gt/intel_gt_gmch.c, even the functions that don't call out to this module. Rename the symbols to make the separation clear. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220617230559.2109427-1-lucas.demarchi@intel.com
-
Juha-Pekka Heikkila authored
Add a fallback smem allocation for dpt if the stolen memory allocation fails. Signed-off-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220610121205.29645-1-juhapekka.heikkila@gmail.com
-
CQ Tang authored
In the future, display might try to call this with a normal smem object, which doesn't require PIN_MAPPABLE underneath in order to CPU map the pages (unlike stolen). Extend i915_vma_pin_iomap() to directly use i915_gem_object_pin_map() for such cases. This change was suggested by Chris P Wilson: pin the smem with i915_gem_object_pin_map_unlocked(). v2 (jheikkil): Change i915_gem_object_pin_map_unlocked to i915_gem_object_pin_map Signed-off-by: CQ Tang <cq.tang@intel.com> Signed-off-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com> Cc: Chris Wilson <chris.p.wilson@intel.com> Cc: Jari Tahvanainen <jari.tahvanainen@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> [mauld: tweak commit message, plus minor checkpatch fix] Link: https://patchwork.freedesktop.org/patch/msgid/20220610121205.29645-2-juhapekka.heikkila@gmail.com
-
Juha-Pekka Heikkila authored
Don't leak the lmem mapping in vma_evict: move __i915_vma_iounmap outside i915_vma_is_map_and_fenceable. Signed-off-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220610121205.29645-3-juhapekka.heikkila@gmail.com
-
katrinzhou authored
Add a missing else in set_proto_ctx_param() to fix a Coverity issue. Addresses-Coverity: ("Unused value") Fixes: d4433c76 ("drm/i915/gem: Use the proto-context to handle create parameters (v5)") Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: katrinzhou <katrinzhou@tencent.com> [tursulin: fixup alignment] Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220621124926.615884-1-tvrtko.ursulin@linux.intel.com
-
pengfuyuan authored
Fix spelling typo in comment. Reported-by: k2ci <kernel-bot@kylinos.cn> Signed-off-by: pengfuyuan <pengfuyuan@kylinos.cn> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/tencent_7B226C4A9BC2B5EEB37B70C188B5015D290A@qq.com
-
- 21 Jun, 2022 1 commit
-
-
Vinay Belgaumkar authored
We have seen multiple RC6 issues where it is useful to know which global forcewake bits are set. Add this to the 'drpc' debugfs output. v2: Review comments (Ashutosh) Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220617212032.34577-1-vinay.belgaumkar@intel.com
-
- 20 Jun, 2022 1 commit
-
-
Thomas Hellström authored
When DMAR / VT-d is enabled, the display engine uses overfetching, presumably to deal with the increased latency. To avoid display engine errors and DMAR faults, as a workaround the GGTT is populated with scratch PTEs when VT-d is enabled. However, starting with gen10, write-combined writing of scratch PTEs is no longer possible and, as a result, populating the full GGTT with scratch PTEs like on resume becomes very slow as uncached access is needed. Therefore, on integrated GPUs, utilize the fact that the PTEs are stored in stolen memory, which retains its content across S3 suspend. Don't clear the PTEs on suspend and resume. This improves resume time by around 100 ms. While 100+ ms might appear like a short time, it's 10% to 20% of the total resume time and important in some applications. One notable exception is Intel Rapid Start Technology, which may cause stolen memory to be lost across what the OS perceives as S3 suspend. If IRST is enabled or if we can't detect whether IRST is enabled, retain the old workaround, clearing and re-instating the PTEs. As an additional measure, if we detect that the last GGTT PTE was lost during suspend, print a warning and re-populate the GGTT PTEs. On discrete GPUs, the display engine scans out from LMEM which isn't subject to DMAR, and presumably the workaround is therefore not needed, but that needs to be verified and disabling the workaround for dGPU, if possible, will be deferred to a follow-up patch. v2: - Rely on retained ptes to also speed up suspend and resume re-binding. - Re-build GGTT ptes if Intel rst is enabled. v3: - Re-build GGTT ptes also if we can't detect whether Intel rst is enabled, and if the guard page PTE at the end of GGTT was lost. v4: - Fix some kerneldoc issues (Matthew Auld), rebase. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220617152856.249295-1-thomas.hellstrom@linux.intel.com
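The decision described above can be condensed into a predicate along these lines. The structure and helper names are hypothetical (the real checks live in the GGTT suspend/resume code): keep rewriting the PTEs on discrete parts and whenever Rapid Start may have wiped stolen memory or the guard PTE did not survive; otherwise trust the retained PTEs.

  #include <linux/types.h>

  /* Sketch with hypothetical fields: decide whether the retained GGTT PTEs
   * can be trusted across suspend, or whether the old clear-and-rewrite
   * behaviour must be kept. */
  struct ggtt_suspend_info {
          bool is_dgfx;           /* discrete: keep the old behaviour for now */
          bool irst_enabled;      /* Intel Rapid Start may wipe stolen memory */
          bool irst_unknown;      /* can't tell whether IRST is enabled */
          bool guard_pte_intact;  /* last GGTT PTE survived suspend */
  };

  static bool ggtt_must_rewrite_ptes(const struct ggtt_suspend_info *s)
  {
          if (s->is_dgfx)
                  return true;
          if (s->irst_enabled || s->irst_unknown)
                  return true;
          if (!s->guard_pte_intact)
                  return true;    /* also worth a warning: retained PTEs were lost */
          return false;           /* trust the PTEs retained in stolen memory */
  }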
-
- 17 Jun, 2022 4 commits
-
-
Matt Roper authored
Let's replace the assortment of intel_gt_* and intel_uncore_* functions that operate on MCR registers with a cleaner set of interfaces: * intel_gt_mcr_read -- unicast read from specific instance * intel_gt_mcr_read_any[_fw] -- unicast read from any non-terminated instance * intel_gt_mcr_unicast_write -- unicast write to specific instance * intel_gt_mcr_multicast_write[_fw] -- multicast write to all instances We'll also replace the historic "slice" and "subslice" terminology with "group" and "instance" to match the documentation for more recent platforms; these days MCR steering applies to more types of replication than just slice/subslice. v2: - Reference the new kerneldoc from i915.rst. (Jani) - Tweak the wording of the documentation for a couple functions to clarify the difference between "_fw" and non-"_fw" forms. v3: - s/read/write/ to fix copy-paste mistake in a couple comments. (Harish) Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Acked-by: Jani Nikula <jani.nikula@linux.intel.com> Reviewed-by: Harish Chegondi <harish.chegondi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220615001019.1821989-3-matthew.d.roper@intel.com
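A hedged usage sketch of the new naming scheme is below. The function names come from the interface list above; the argument order is paraphrased from that description rather than copied from gt/intel_gt_mcr.h, so treat the exact signatures as approximate.

  #include "intel_gt_mcr.h"   /* in-tree header introduced by this series */

  /* Usage sketch only: mcr_reg stands for any multicast/replicated register. */
  static u32 example_mcr_usage(struct intel_gt *gt, i915_reg_t mcr_reg,
                               unsigned int group, unsigned int instance)
  {
          u32 val;

          /* unicast read from one specific group/instance */
          val = intel_gt_mcr_read(gt, mcr_reg, group, instance);

          /* unicast read from whichever instance is not terminated */
          val = intel_gt_mcr_read_any(gt, mcr_reg);

          /* unicast write to one instance vs. multicast write to all */
          intel_gt_mcr_unicast_write(gt, mcr_reg, val, group, instance);
          intel_gt_mcr_multicast_write(gt, mcr_reg, val);

          return val;
  }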
-
Matt Roper authored
Handling of multicast/replicated registers is spread across intel_gt.c and intel_uncore.c today. As multicast handling and the related steering logic gets more complicated with the addition of new platforms and new rules it makes sense to centralize it all in one place. For now the existing functions have been moved to the new .c/.h as-is. Function renames and updates to operate in a more consistent manner will be done in subsequent patches. Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Acked-by: Jani Nikula <jani.nikula@linux.intel.com> Reviewed-by: Harish Chegondi <harish.chegondi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220615001019.1821989-2-matthew.d.roper@intel.com
-
Tvrtko Ursulin authored
Stop displaying engine classes with no engines - it is not a huge problem if they are shown, since the values will correctly be all zeroes, but it does count as misleading. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Fixes: 055634e4 ("drm/i915: Expose client engine utilisation via fdinfo") Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Reviewed-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220616140056.559074-1-tvrtko.ursulin@linux.intel.com
-
Tvrtko Ursulin authored
We have long-standing customer complaints that pressing Ctrl-C (or something to that effect) causes engine resets with otherwise well behaving programs. Not only is logging engine resets during normal operation not desirable since it creates support incidents, but more fundamentally we should avoid going down the engine reset path when we can since any engine reset introduces a chance of harming an innocent context. The reason for this undesirable behaviour is that the driver currently does not distinguish between banned contexts and non-persistent contexts which have been closed. To fix this we add the distinction between the two reasons for revoking contexts, which then allows the strict timeout to only be applied to banned contexts, while innocent contexts (well behaving) can preempt cleanly and exit without triggering the engine reset path. Note that the added context exiting category applies both to closed non-persistent contexts and to any exiting context when hangcheck has been disabled by the user. At the same time we rename the backend operation from 'ban' to 'revoke', which more accurately describes the actual semantics. (There is no ban at the backend level since banning is a concept driven by the scheduling frontend. Backends are simply able to revoke a running context so that is the more appropriate name chosen.) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220527072452.2225610-1-tvrtko.ursulin@linux.intel.com
-
- 15 Jun, 2022 1 commit
-
-
Matt Roper authored
As with past platforms, the bspec's performance tuning guide provides recommended MMIO settings. Although not technically "workarounds" we apply these through the workaround framework to ensure that they're re-applied at the proper times (e.g., on engine resets) and that any conflicts with real workarounds are flagged. Bspec: 72161 Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220613165314.862029-1-matthew.d.roper@intel.com
-