- 26 Jul, 2021 8 commits
-
-
Thomas Hellström authored
If our exported dma-bufs are imported by another instance of our driver, that instance will typically have the imported dma-bufs locked during dma_buf_map_attachment(). But the exporter also locks the same reservation object in the map_dma_buf() callback, which leads to recursive locking. So taking the lock inside _pin_pages_unlocked() is incorrect.

Additionally, the current pinning code path is contrary to the defined way that pinning should occur.

Remove the explicit pin/unpin from the map/unmap functions and move them to attach/detach, allowing correct locking to occur and matching the static dma-buf drm_prime pattern.

Add a live selftest to exercise both dynamic and non-dynamic exports.

v2:
- Extend the selftest with a fake dynamic importer.
- Provide real pin and unpin callbacks to not abuse the interface.
v3: (ruhl)
- Remove the dynamic export support and move the pinning into the attach/detach path.
v4: (ruhl)
- Put pages does not need to assert on the dma-resv.
v5: (jason)
- Lock around dma_buf_unmap_attachment() when emulating a dynamic importer in the subtests.
- Use pin_pages_unlocked.
v6: (jason)
- Use dma_buf_attach instead of dma_buf_attach_dynamic in the selftests.
v7: (mauld)
- Use __i915_gem_object_get_pages (2 underscores) instead of the 4-underscore version in the selftests.
v8: (mauld)
- Drop the kernel doc from the static i915_gem_dmabuf_attach function.
- Add missing "err = PTR_ERR()" to a bunch of selftest error cases.

Reported-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-8-jason@jlekstrand.net
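A minimal sketch of the attach/detach pinning pattern described above, using the real struct dma_buf_ops callbacks but hypothetical my_obj/my_obj_* helpers in place of the actual i915 object code:

    /* Sketch only: my_obj and the my_obj_* helpers are illustrative
     * stand-ins for the driver's pin/unpin and sg-table building code. */
    #include <linux/dma-buf.h>
    #include <linux/scatterlist.h>

    static int my_gem_dmabuf_attach(struct dma_buf *dmabuf,
                                    struct dma_buf_attachment *attach)
    {
            struct my_obj *obj = dmabuf->priv;

            /* Pin the backing pages once per attachment, before any map
             * calls; map_dma_buf() then never needs to pin or take the
             * reservation lock itself. */
            return my_obj_pin_pages(obj);
    }

    static void my_gem_dmabuf_detach(struct dma_buf *dmabuf,
                                     struct dma_buf_attachment *attach)
    {
            my_obj_unpin_pages(dmabuf->priv);
    }

    static struct sg_table *my_gem_map_dma_buf(struct dma_buf_attachment *attach,
                                               enum dma_data_direction dir)
    {
            /* Pages are already pinned by attach(); just build and map the
             * scatterlist here. */
            return my_obj_get_sg_table(attach->dmabuf->priv, dir);
    }

    static const struct dma_buf_ops my_dmabuf_ops = {
            .attach      = my_gem_dmabuf_attach,
            .detach      = my_gem_dmabuf_detach,
            .map_dma_buf = my_gem_map_dma_buf,
            /* ... unmap_dma_buf, release, etc. ... */
    };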
-
Jason Ekstrand authored
Without TTM, we have no such hook so we exit early, but this is fine because we use TTM on all LMEM platforms and, on integrated platforms, there is no real migration. If we do have the hook, it's better to just let TTM handle the migration because it knows where things are actually placed.

This fixes a bug where i915_gem_object_migrate fails to migrate newly created LMEM objects. In that scenario, the object has obj->mm.region set to LMEM, but TTM has it in SMEM because that's where all new objects are placed prior to getting actual pages. When we invoke i915_gem_object_migrate, it exits early because, from the point of view of the GEM object, it's already in LMEM and no migration is needed. Then, when we try to pin the pages, __i915_ttm_get_pages is called which, unaware of our failed attempt at a migration, places the object in SMEM. This only happens on newly created objects because they have this weird state where TTM thinks they're in SMEM, GEM thinks they're in LMEM, and the reality is that they don't exist at all.

It's better if GEM just always calls into TTM and lets TTM handle things. That way the lies stay better contained. Once the migration is complete, the object will have pages, obj->mm.region will be correct, and we're done lying.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-7-jason@jlekstrand.net
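A rough sketch of the "always call into the backend" idea, with a hypothetical ->migrate hook and a simplified object type standing in for the real GEM/TTM split:

    /* Sketch, not the real i915 code: my_obj and ->migrate are
     * illustrative stand-ins for the GEM object and its TTM backend. */
    static int my_obj_migrate(struct my_obj *obj, int region_id)
    {
            if (!obj->ops->migrate)
                    return obj->region_id == region_id ? 0 : -EOPNOTSUPP;

            /* Do NOT early-out when obj->region_id already matches: for a
             * freshly created object the GEM side may claim LMEM while the
             * backend still has it (logically) in SMEM.  Let the backend
             * decide whether anything actually needs to move. */
            return obj->ops->migrate(obj, region_id);
    }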
-
Jason Ekstrand authored
__i915_ttm_get_pages does two things. First, it calls ttm_bo_validate() to check the given placement and migrate the BO if needed. Then, it updates the GEM object to match, in case the object was migrated. If no migration occurred, however, we might still have pages on the GEM object, in which case we don't need to fetch them from TTM and call __i915_gem_object_set_pages.

This hasn't been a problem before because the primary user of __i915_ttm_get_pages is __i915_gem_object_get_pages, which only calls it if the GEM object doesn't have pages. However, i915_ttm_migrate also uses __i915_ttm_get_pages to do the migration, which meant it was unsafe to call on an already populated object. This patch checks i915_gem_object_has_pages() before trying __i915_gem_object_set_pages, so i915_ttm_migrate is safe to call, even on populated objects.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-6-jason@jlekstrand.net
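Roughly, the ordering described above looks like the following; ttm_bo_validate() is the real TTM entry point, while the my_* helpers are illustrative stand-ins:

    /* Sketch of the ordering only; types and helpers are simplified. */
    static int my_ttm_get_pages(struct my_obj *obj,
                                struct ttm_placement *placement)
    {
            struct ttm_operation_ctx ctx = {};
            int ret;

            /* Validate the placement; TTM migrates the BO if needed. */
            ret = ttm_bo_validate(my_obj_to_ttm(obj), placement, &ctx);
            if (ret)
                    return ret;

            /* Only publish a fresh page list if the GEM object does not
             * already have one; when called for a migrate of an already
             * populated object, the existing pages stay in place. */
            if (!my_obj_has_pages(obj))
                    my_obj_set_pages(obj, my_ttm_build_st(obj));

            return 0;
    }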
-
Jason Ekstrand authored
Instead of hand-rolling the same three calls in each function, pull them into an i915_gem_object_create_user helper. Apart from the re-ordering of the placements-array ENOMEM check, there should be no functional change.

v2 (Matthew Auld):
- Add the call to i915_gem_flush_free_objects() from i915_gem_dumb_create() in a separate patch.
- Move i915_gem_object_alloc() below the simple error checks.
v3 (Matthew Auld):
- Add __ to i915_gem_object_create_user and kerneldoc which warns the caller that it's not validating anything.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-5-jason@jlekstrand.net
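An illustrative shape for such a helper, with simplified my_* names and checks rather than the actual i915 functions:

    /* Sketch: __my_gem_object_create_user mirrors the idea of pulling the
     * repeated alloc/set-placements/init sequence into one helper; the
     * double underscore hints that the caller validates the arguments. */
    #include <linux/err.h>

    struct my_obj *__my_gem_object_create_user(struct my_device *dev, u64 size,
                                               struct my_mem_region **placements,
                                               unsigned int n_placements)
    {
            struct my_obj *obj;
            int ret;

            /* Cheap argument checks first, allocation afterwards. */
            if (!n_placements)
                    return ERR_PTR(-EINVAL);

            obj = my_gem_object_alloc();
            if (!obj)
                    return ERR_PTR(-ENOMEM);

            my_gem_object_set_placements(obj, placements, n_placements);

            ret = my_gem_object_init(obj, dev, size);
            if (ret) {
                    my_gem_object_free(obj);
                    return ERR_PTR(ret);
            }

            return obj;
    }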
-
Jason Ekstrand authored
This doesn't really fix anything serious since the chances of a client creating and destroying a mass of dumb BOs are pretty low. However, it is called by the other two create IOCTLs to garbage collect old objects. Call it here too for consistency. Signed-off-by: Jason Ekstrand <jason@jlekstrand.net> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-4-jason@jlekstrand.net
-
Jason Ekstrand authored
Since we don't allow changing the set of regions after creation, we can make ext_set_placements() build up the region set directly in the create_ext and assign it to the object later. This is similar to what we did for contexts with the proto-context, only simpler because there's no funny object shuffling. This will be used in the next patch to allow us to de-duplicate a bunch of code.

Also, since we know the maximum number of regions up-front, we can use a fixed-size temporary array for the regions. This simplifies memory management a bit for this new delayed approach.

v2 (Matthew Auld):
- Get rid of MAX_N_PLACEMENTS.
- Drop kfree(placements) from set_placements().
v3 (Matthew Auld):
- Properly set ext_data->n_placements.

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-3-jason@jlekstrand.net
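A sketch of the delayed-assignment approach, assuming a hypothetical MY_MAX_REGIONS bound and simplified create_ext data; the real i915 structures and helpers differ:

    /* Sketch: MY_MAX_REGIONS and the types are assumptions made for the
     * example, modelling a known-at-build-time bound on memory regions. */
    #define MY_MAX_REGIONS 4

    struct my_create_ext_data {
            struct my_mem_region *placements[MY_MAX_REGIONS];
            unsigned int n_placements;
    };

    static int my_set_placements(struct my_create_ext_data *data,
                                 const u32 *region_ids, unsigned int count)
    {
            unsigned int i;

            if (count == 0 || count > MY_MAX_REGIONS)
                    return -EINVAL;

            /* Build the region set in the fixed-size temporary array... */
            for (i = 0; i < count; i++) {
                    data->placements[i] = my_lookup_region(region_ids[i]);
                    if (!data->placements[i])
                            return -EINVAL;
            }
            data->n_placements = count;
            return 0;
    }

    /* ...and hand it to the object only once creation is about to succeed:
     *   my_gem_object_set_placements(obj, data->placements,
     *                                data->n_placements);
     */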
-
Jason Ekstrand authored
We don't roll them together entirely because there are still a couple cases where we want a separate can_migrate check. For instance, the display code checks that you can migrate a buffer to LMEM before it accepts it in fb_create. The dma-buf import code also uses it to do an early check and return a different error code if someone tries to attach a LMEM-only dma-buf to another driver. However, no one actually wants to call object_migrate when can_migrate has failed. The stated intention is for self-tests but none of those actually take advantage of this unsafe migration. Signed-off-by: Jason Ekstrand <jason@jlekstrand.net> Cc: Daniel Vetter <daniel@ffwll.ch> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210723172142.3273510-2-jason@jlekstrand.net
-
Lucas De Marchi authored
This is only used by GRAPHICS_VER == 6 and GRAPHICS_VER == 7. All other recent platforms do not depend on this field, so it doesn't make much sense to keep it generic like that. Instead, just do a mapping from engine class to HW ID in the single place that is needed. v2: use macros with the direct register address instead of calculating from the legacy HW_ID (Matt Roper) Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210723002551.3906535-1-lucas.demarchi@intel.com
-
- 24 Jul, 2021 4 commits
-
-
Matt Roper authored
Implement Xe_HP forcewake handling. While we're at it, let's reorder the forcewake assignment if/else ladder to match our usual driver conventions. Co-authored-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Stuart Summers <stuart.summers@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210723174239.1551352-6-matthew.d.roper@intel.com
-
John Harrison authored
Xe_HP can have a lot of extra media engines. This patch adds the reset support for them. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210723174239.1551352-5-matthew.d.roper@intel.com
-
John Harrison authored
Xe_HP can have a lot of extra media engines. This patch adds the interrupt handler support for them. Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210723174239.1551352-4-matthew.d.roper@intel.com
-
John Harrison authored
Xe_HP can have a lot of extra media engines. This patch adds the basic definitions for them.

v2:
- Re-order intel_gt_info and intel_device_info slightly to avoid unnecessary padding now that we've increased the size of intel_engine_mask_t. (Tvrtko)
v3:
- Drop the .hw_id assignments. (Lucas)
v4:
- Fix graphics_ver typo for VCS4 (should be 12, not 11). (Lucas)

Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723191024.1553405-1-matthew.d.roper@intel.com
-
- 23 Jul, 2021 6 commits
-
-
Matt Roper authored
Since we can't steer multicast register reads during ring-based workaround verification, we need to define the multicast ranges where failure to steer could potentially cause us to read back from a fused-off register instance. As with gen12, we can ignore the multicast ranges that the bspec describes as 'SQIDI' since all instances of those registers will always be present and we'll always be able to read back a workaround value that was written with multicast. Bspec: 66534 Cc: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210714031540.3539704-11-matthew.d.roper@intel.com
-
José Roberto de Souza authored
Workaround also needed for alderlake-P. HSDES: 14010801662 Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210722192041.92346-1-jose.souza@intel.com
-
Matthew Auld authored
The CPU domain should be static for discrete, and on DG1 we don't need any flushing since everything is already coherent, so really all this does is an object wait, for which we have an ioctl. Longer term the desired caching should be an immutable creation time property for the BO, which can be set with something like gem_create_ext. One other user is iris + userptr, which uses the set_domain to probe all the pages to check if the GUP succeeds; however, we now have a PROBE flag for this purpose. v2: add some more kernel doc, also add the implicit rules with caching Suggested-by: Daniel Vetter <daniel@ffwll.ch> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Jason Ekstrand <jason@jlekstrand.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Ramalingam C <ramalingam.c@intel.com> Reviewed-by: Ramalingam C <ramalingam.c@intel.com> Acked-by: Kenneth Graunke <kenneth@whitecape.org> Link: https://patchwork.freedesktop.org/patch/msgid/20210715101536.2606307-5-matthew.auld@intel.com
-
Tvrtko Ursulin authored
On Xe_HP the fusing register is renamed and changed to have the "enable" semantics, but otherwise remains compatible (mmio address, bitmask ranges) with older platforms. To simplify things we do not add a new register definition but just stop inverting the fusing masks before processing them. Bspec: 52615 Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721223043.834562-6-matthew.d.roper@intel.com
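A sketch of the enable-vs-disable semantics handling described above; the register read and the Xe_HP check are hypothetical helpers, not the actual i915 definitions:

    /* Sketch only: my_read_fuse_reg() and my_gt_is_xehp() are assumed
     * helpers standing in for the real register accessor and platform
     * check; the mmio offset is shared across platforms. */
    static u32 my_get_media_fuse_enable_mask(struct my_gt *gt)
    {
            u32 fuse = my_read_fuse_reg(gt);

            /* Older platforms report a "disable" mask, so invert it to get
             * the enabled set; Xe_HP reports an "enable" mask directly and
             * can be used as-is. */
            if (my_gt_is_xehp(gt))
                    return fuse;

            return ~fuse;
    }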
-
Lucas De Marchi authored
We kept adding new engines and, for that, increasing hw_id unnecessarily: it hasn't been used since GRAPHICS_VER == 8. Prepend "gen6" to the field and try to pack it in the structs to hint that this field is not actually used on recent platforms. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210720232014.3302645-4-lucas.demarchi@intel.com
-
Lucas De Marchi authored
The engine hw_id is only used by RING_FAULT_REG(), which is not used on GRAPHICS_VER >= 8. We did use hw_id on recent platforms to set the engine's guc_id, but that is not the case anymore since commit c784e524 ("drm/i915/guc: Update to use firmware v49.0.1"): now we only use class and id information to generate guc_id. We tend to keep adding new defines just to be consistent, but let's try to remove them and leave them defined to 0 for engines that only exist on gen8+ platforms. v2: Reword commit message and add information about when we stopped using hw_id (Matt Roper) Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210720232014.3302645-3-lucas.demarchi@intel.com
-
- 22 Jul, 2021 22 commits
-
-
Lucas De Marchi authored
gen8_clear_engine_error_register() is actually not used on GRAPHICS_VER >= 8, since for those we are using another register that is not engine-dependent. Fix the platform prefix, to make clear we are not using any GEN6_RING_FAULT_REG_* on GRAPHICS_VER >= 8. Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210720232014.3302645-2-lucas.demarchi@intel.com
-
Matthew Brost authored
Add intel_context tracing. These trace points are particularly helpful when debugging the GuC firmware and can be enabled via the CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS kernel config option. Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-19-matthew.brost@intel.com
-
Matthew Brost authored
Add a trace point for GuC submit. Extend existing request trace points to include the submit fence value, guc_id, and ring tail value.

v2: Fix white space alignment in i915_request_add trace point
v3: Delete dep_from, dep_to (Tvrtko)

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-18-matthew.brost@intel.com
-
Matthew Brost authored
Update GuC debugfs to support the new GuC structures. v2: (John Harrison) - Remove intel_lrc_reg.h include from i915_debugfs.c (Michal) - Rename GuC debugfs functions Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-17-matthew.brost@intel.com
-
Matthew Brost authored
When running the GuC the GPU can't be considered idle if the GuC still has contexts pinned. As such, a call has been added in intel_gt_wait_for_idle to idle the uC, and in turn the GuC, by waiting for the number of unpinned contexts to go to zero.

v2: rtimeout -> remaining_timeout
v3: Drop unnecessary includes, guc_submission_busy_loop -> guc_submission_send_busy_loop, drop negative timeout trick, move a refactor of guc_context_unpin to earlier patch (John H)
v4: Add stddef.h back into intel_gt_requests.h, short-circuit the idle function if not in GuC submission mode

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-16-matthew.brost@intel.com
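The idle wait might look roughly like this; the wait queue, counter, and helpers are hypothetical, and the real code also threads the remaining timeout back to the caller:

    /* Sketch only: guc->idle_wq, guc->outstanding_context_pins and the
     * my_* helpers are assumed fields standing in for the real state. */
    #include <linux/wait.h>
    #include <linux/atomic.h>

    static long my_guc_wait_for_idle(struct my_guc *guc, long timeout)
    {
            if (!my_guc_submission_enabled(guc))
                    return timeout; /* nothing to wait for without GuC */

            /* The GT is not idle while the GuC still has contexts pinned;
             * wait for the count of pinned contexts to drain to zero. */
            return wait_event_timeout(guc->idle_wq,
                                      !atomic_read(&guc->outstanding_context_pins),
                                      timeout);
    }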
-
Matthew Brost authored
Ensure G2H response has space in the buffer before sending H2G CTB as the GuC can't handle any backpressure on the G2H interface.

v2: (Matthew)
- s/INTEL_GUC_SEND/INTEL_GUC_CT_SEND
v3: (Matthew)
- Add G2H credit accounting to blocking path, add g2h_release_space helper (John H)
- CTB_G2H_BUFFER_SIZE / 4 == G2H_ROOM_BUFFER_SIZE

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-15-matthew.brost@intel.com
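A minimal sketch of the credit scheme; the field and helper names are illustrative, not the actual intel_guc_ct implementation:

    /* Sketch: my_g2h_reserve_space()/my_h2g_write() are assumed helpers;
     * the idea is simply "reserve G2H room before the H2G goes out". */
    #include <linux/atomic.h>

    static int my_ct_send(struct my_ct *ct, const u32 *action, u32 len,
                          u32 g2h_len_dw)
    {
            /* Reserve space for the expected G2H response up front: the
             * GuC cannot tolerate back-pressure on the G2H channel. */
            if (!my_g2h_reserve_space(ct, g2h_len_dw))
                    return -EBUSY; /* caller retries or defers to the tasklet */

            return my_h2g_write(ct, action, len);
    }

    /* When the matching G2H arrives, the credits are handed back. */
    static void my_g2h_release_space(struct my_ct *ct, u32 g2h_len_dw)
    {
            atomic_add(g2h_len_dw, &ct->g2h_space);
    }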
-
Matthew Brost authored
Semaphores are an optimization and not required for basic GuC submission to work properly. Disable them until we have time to implement and tune semaphore support for performance. Also, the long-term direction is to delete semaphores from i915 entirely, which is another reason not to enable them for GuC submission.

This patch also fixes existing bugs where I915_ENGINE_HAS_SEMAPHORES was not honored correctly.

v2: Reword commit message
v3: (John H)
- Add text to commit indicating this also fixes an existing bug
v4: (John H)
- s/bug/bugs

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-14-matthew.brost@intel.com
-
Matthew Brost authored
If two requests are on the same ring, they are explicitly ordered by the HW, so a submission fence is sufficient to ensure ordering when using the new GuC submission interface. Conversely, if two requests share a timeline and are on the same physical engine but in different contexts, this doesn't ensure ordering on the new GuC submission interface, so a completion fence needs to be used to ensure ordering.

v2: (Daniele)
- Don't delete spin lock
v3: (Daniele)
- Delete forward dec

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-13-matthew.brost@intel.com
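A sketch of the fence-selection rule, with simplified request types and hypothetical await helpers; the real i915 logic lives in the request await/submit paths:

    /* Sketch only: my_request and the await helpers are stand-ins. */
    static int my_await_previous(struct my_request *rq, struct my_request *prev)
    {
            if (rq->ring == prev->ring) {
                    /* Same ring: the HW executes in ring order, so waiting
                     * for prev to be *submitted* is enough. */
                    return my_request_await_submit(rq, prev);
            }

            /* Same timeline, different context/ring: with the GuC picking
             * between contexts, only prev's *completion* guarantees order. */
            return my_request_await_complete(rq, prev);
    }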
-
Matthew Brost authored
Disable preempt busywait when using GuC scheduling. This isn't needed as the GuC controls preemption when scheduling. v2: (John H): - Fix commit message Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-12-matthew.brost@intel.com
-
Matthew Brost authored
Extend the deregistration context fence to fence when a GuC context has a scheduling disable pending. v2: (John H) - Update comment why we check the pin count within spin lock Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <john.c.harrison@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-11-matthew.brost@intel.com
-
Matthew Brost authored
Disable engine barriers for unpinning with GuC. This feature isn't needed with the GuC as it disables context scheduling before unpinning which guarantees the HW will not reference the context. Hence it is not necessary to defer unpinning until a kernel context request completes on each engine in the context engine mask. Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-10-matthew.brost@intel.com
-
Matthew Brost authored
With GuC scheduling, it isn't safe to unpin a context while scheduling is enabled for that context as the GuC may touch some of the pinned state (e.g. LRC). To ensure scheduling isn't enabled when an unpin is done, a callback is added to intel_context_unpin when pin count == 1 to disable scheduling for that context. When the response CTB is received it is safe to do the final unpin. Future patches may add a heuristic / delay to schedule the disable callback to avoid thrashing on schedule enable / disable. v2: (John H) - s/drm_dbg/drm_err (Daniele) - Clean up sched state function Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-9-matthew.brost@intel.com
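A rough sketch of the deferred-unpin flow described above; the helpers and state tracking are simplified stand-ins for the GuC backend:

    /* Sketch only: my_context and the helpers are illustrative. */
    static void my_guc_context_unpin(struct my_context *ce)
    {
            /* Dropping the last pin while the GuC may still touch pinned
             * state (e.g. the LRC): request a scheduling disable over H2G
             * and defer the real unpin until the confirming G2H arrives. */
            if (my_context_sched_enabled(ce)) {
                    my_guc_send_schedule_disable(ce); /* async H2G */
                    return;
            }

            my_context_do_final_unpin(ce);
    }

    /* G2H "scheduling disabled" handler: now the final unpin is safe. */
    static void my_guc_sched_disable_done(struct my_context *ce)
    {
            my_context_do_final_unpin(ce);
    }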
-
Matthew Brost authored
Sometimes during context pinning a context with the same guc_id is registered with the GuC. In this case a deregister must be done before the context can be registered. A fence is inserted on all requests while the deregister is in flight. Once the G2H is received indicating the deregistration is complete, the context is registered and the fence is released. v2: (John H) - Fix commit message Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <john.c.harrison@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-8-matthew.brost@intel.com
-
Matthew Brost authored
Implement GuC context operations, which include the GuC-specific alloc, pin, unpin, and destroy operations.

v2:
(Daniel Vetter)
- Use msleep_interruptible rather than cond_resched in busy loop
(Michal)
- Remove C++ style comment
v3:
(Matthew Brost)
- Drop GUC_ID_START
(John Harrison)
- Fix a bunch of typos
- Use drm_err rather than drm_dbg for G2H errors
(Daniele)
- Fix ;; typo
- Clean up sched state functions
- Add lockdep for guc_id functions
- Don't call __release_guc_id when guc_id is invalid
- Use MISSING_CASE
- Add comment in guc_context_pin
- Use shorter path to rpm
(Daniele / CI)
- Don't call release_guc_id on an invalid guc_id in destroy
v4:
(Daniel Vetter)
- Add FIXME comment

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-7-matthew.brost@intel.com
-
Matthew Brost authored
Add a bypass tasklet submission path to the GuC. The tasklet is only used if the H2G channel has backpressure. Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-6-matthew.brost@intel.com
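A sketch of the bypass path; the queue and space checks are hypothetical helpers, while tasklet_hi_schedule() is the standard kernel call:

    /* Sketch only: my_guc, my_request and the my_* helpers are assumed. */
    #include <linux/interrupt.h>

    static void my_guc_submit_request(struct my_request *rq)
    {
            struct my_guc *guc = rq->guc;

            /* Fast path: nothing queued and the H2G channel has room, so
             * submit directly and skip the tasklet entirely. */
            if (my_submission_queue_empty(guc) && my_h2g_has_space(guc)) {
                    my_guc_add_request(guc, rq);
                    return;
            }

            /* Slow path: queue the request and let the tasklet flush it
             * once the H2G back-pressure clears. */
            my_queue_request(guc, rq);
            tasklet_hi_schedule(&guc->tasklet);
    }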
-
Matthew Brost authored
Implement the GuC submission tasklet for the new interface. The new GuC interface uses H2G to submit contexts to the GuC. Since H2G uses a single channel, a single tasklet is used for the submission path. Also, the per-engine interrupt handler has been updated to disable the rescheduling of the physical engine tasklet when using GuC scheduling, as the physical engine tasklet is no longer used.

In this patch the guc_id field has been added to intel_context but is not assigned. Patches later in the series will assign this value.

v2: (John Harrison)
- Clean up some comments
v3: (John Harrison)
- More comment cleanups

Cc: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-5-matthew.brost@intel.com
-
Matthew Brost authored
Add LRC descriptor context lookup array which can resolve the intel_context from the LRC descriptor index. In addition to lookup, it can determine if the LRC descriptor context is currently registered with the GuC by checking if an entry for a descriptor index is present. Future patches in the series will make use of this array. v2: (Michal) - "linux/xarray.h" -> <linux/xarray.h> - s/lrc/LRC (John H) - Fix commit message Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-4-matthew.brost@intel.com
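A sketch of such a lookup array; xa_init/xa_store/xa_load/xa_erase are the real <linux/xarray.h> API, while the surrounding types are simplified stand-ins:

    /* Sketch: my_guc/my_context stand in for the real GuC/context types. */
    #include <linux/xarray.h>

    struct my_guc {
            struct xarray context_lookup; /* LRC descriptor index -> context */
    };

    /* xa_init(&guc->context_lookup) is done once at GuC init time. */

    static void my_register_context(struct my_guc *guc, u32 desc_idx,
                                    struct my_context *ce)
    {
            xa_store(&guc->context_lookup, desc_idx, ce, GFP_KERNEL);
    }

    static struct my_context *my_lookup_context(struct my_guc *guc, u32 desc_idx)
    {
            /* A non-NULL entry also means this descriptor index is
             * currently registered with the GuC. */
            return xa_load(&guc->context_lookup, desc_idx);
    }

    static void my_unregister_context(struct my_guc *guc, u32 desc_idx)
    {
            xa_erase(&guc->context_lookup, desc_idx);
    }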
-
Matthew Brost authored
Remove old GuC stage descriptor, add LRC descriptor which will be used by the new GuC interface implemented in this patch series. v2: (John Harrison) - s/lrc/LRC/g Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-3-matthew.brost@intel.com
-
Matthew Brost authored
Add new GuC interface defines and structures while maintaining old ones in parallel. Cc: John Harrison <john.c.harrison@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721215101.139794-2-matthew.brost@intel.com
-
Prathap Kumar Valsan authored
The layout of some engine contexts has changed on Xe_HP. Define the new offsets. Bspec: 45585, 46256 Signed-off-by: Prathap Kumar Valsan <prathap.kumar.valsan@intel.com> Signed-off-by: Ramalingam C <ramalingam.c@intel.com> Signed-off-by: Venkata Ramana Nayana <venkata.ramana.nayana@intel.com> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721223043.834562-10-matthew.d.roper@intel.com
-
Stuart Summers authored
Xe_HP changes the format of the context ID from past platforms. Signed-off-by: Stuart Summers <stuart.summers@intel.com> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721223043.834562-9-matthew.d.roper@intel.com
-
John Harrison authored
Increasing the engine count causes a couple of local array variables to exceed the kernel stack limit. So make them dynamic allocations instead. Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210721223043.834562-8-matthew.d.roper@intel.com
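A sketch of the stack-to-heap conversion using the standard kcalloc()/kfree() allocators; the engine-status type and helpers are hypothetical:

    /* Sketch only: my_gt, my_engine_status and the collect helper are
     * assumptions standing in for the real per-engine bookkeeping. */
    #include <linux/slab.h>

    static int my_report_engines(struct my_gt *gt)
    {
            /* was: struct my_engine_status status[MAX_ENGINES]; -- too
             * large for the kernel stack once the engine count grows. */
            struct my_engine_status *status;
            int ret;

            status = kcalloc(gt->num_engines, sizeof(*status), GFP_KERNEL);
            if (!status)
                    return -ENOMEM;

            ret = my_collect_engine_status(gt, status);
            kfree(status);
            return ret;
    }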
-