- 04 Oct, 2021 14 commits
-
-
Anshuman Gupta authored
When a protected surface has been flipped and the PXP session is disabled, display black pixels by using plane color CTM correction. v2: - Display black pixels in async flip too. v3: - Removed the black pixels logic for async flip. [Ville] - Used plane state to force black pixels. [Ville] v4 (Daniele): update pxp_is_borked check. v5: rebase on top of v9 plane decryption moving the decrypt check (Juston) Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Gaurav Kumar <kumar.gaurav@intel.com> Cc: Shankar Uma <uma.shankar@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Juston Li <juston.li@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Uma Shankar <uma.shankar@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-15-alan.previn.teres.alexis@intel.com
-
Anshuman Gupta authored
Add support to enable/disable the PLANE_SURF Decryption Request bit. Plane decryption only needs to be enabled when the following conditions are met: 1. The PXP session is enabled. 2. The buffer object is protected. v2: - Used gen fb obj user_flags instead of gem_object_metadata. [Krishna] v3: - intel_pxp_gem_object_status() API changes. v4: use intel_pxp_is_active (Daniele) v5: rebase and use the new protected object status checker (Daniele) v6: used plane state for plane_decryption to handle async flip as suggested by Ville. v7: check pxp session while plane decrypt state computation. [Ville] removed pointless code. [Ville] v8 (Daniele): update PXP check v9: move decrypt check after icl_check_nv12_planes() when overlays have fb set (Juston) v10 (Daniele): update PXP check again to match rework in earlier patches and don't consider protection valid if the object has not been used in an execbuf beforehand. Cc: Bommu Krishnaiah <krishnaiah.bommu@intel.com> Cc: Huang Sean Z <sean.z.huang@intel.com> Cc: Gaurav Kumar <kumar.gaurav@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Juston Li <juston.li@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Uma Shankar <uma.shankar@intel.com> #v9 Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-14-alan.previn.teres.alexis@intel.com
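The two conditions above reduce to a simple boolean during plane-state computation; a minimal sketch of that check, with helper names approximated rather than copied verbatim from the patch:

  /* Sketch only: helper names are approximations of the i915 ones. */
  static bool plane_needs_decrypt(const struct intel_plane_state *plane_state,
                                  struct intel_pxp *pxp)
  {
          struct drm_i915_gem_object *obj = intel_fb_obj(plane_state->hw.fb);

          /* 1. the PXP session is active and 2. the bo is protected (and its
           * protection is still valid, i.e. it has been through execbuf) */
          return obj && intel_pxp_is_active(pxp) &&
                 i915_gem_object_is_protected(obj);
  }

The resulting plane-state flag then decides whether the PLANE_SURF decryption request bit gets set when the plane is programmed.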
-
Huang, Sean Z authored
During the power event S3+ sleep/resume, hardware will lose all the encryption keys for every hardware session, even though the session state might still be marked as alive after resume. Therefore, we should consider the session as dead on suspend and invalidate all the objects. The session will be automatically restarted on the first protected submission on resume. v2: runtime suspend also invalidates the keys v3: fix return codes, simplify rpm ops (Chris), use the new worker func v4: invalidate the objects on suspend, don't re-create the arb session on resume (delayed to first submission). v5: move irq changes back to irq patch (Rodrigo) v6: drop invalidation in runtime suspend (Rodrigo) Signed-off-by: Huang, Sean Z <sean.z.huang@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-13-alan.previn.teres.alexis@intel.com
-
Daniele Ceraolo Spurio authored
Now that we can handle destruction and re-creation of the arb session, we can postpone the start of the session to the first submission that requires it, to avoid keeping it running with no user. v10: increase timeout when waiting in intel_pxp_start as firmware session startup is slower right after boot. v13: increase the same timeout by 50 ms because the previous timeout was not enough to cover two lower-level 100 ms timeouts in the session termination + creation steps. Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-12-alan.previn.teres.alexis@intel.com
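The "start on first use" flow roughly looks like the sketch below; the field, worker and helper names are assumptions based on the commit message, and the timeout value is only illustrative of the "previous timeout + 50 ms" tuning described in v13:

  /* Sketch, not the exact i915 code. */
  int intel_pxp_start(struct intel_pxp *pxp)
  {
          if (!intel_pxp_is_enabled(pxp))
                  return -ENODEV;

          /* kick the arb session worker if the session is not alive yet */
          if (!pxp->arb_is_valid)
                  pxp_queue_termination(pxp);     /* hypothetical helper */

          /* session start-up right after boot is slow, and the wait must also
           * cover two lower-level 100 ms timeouts, hence the generous value */
          if (!wait_for_completion_timeout(&pxp->termination,
                                           msecs_to_jiffies(250)))
                  return -ETIMEDOUT;

          return pxp->arb_is_valid ? 0 : -EIO;
  }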
-
Daniele Ceraolo Spurio authored
This API allows user mode to create protected buffers and to mark contexts as making use of such objects. Only when using contexts marked in such a way is the execution guaranteed to work as expected. Contexts can only be marked as using protected content at creation time (i.e. the parameter is immutable) and they must be both bannable and not recoverable. Given that the protected session gets invalidated on suspend, contexts created this way hold a runtime pm wakeref until they're either destroyed or invalidated. All protected objects and contexts will be considered invalid when the PXP session is destroyed and all new submissions using them will be rejected. All intel contexts within the invalidated gem contexts will be marked banned. Userspace can detect that an invalidation has occurred via the RESET_STATS ioctl, where we report it the same way as a ban due to a hang. v5: squash patches, rebase on proto_ctx, update kerneldoc v6: rebase on obj create_ext changes v7: Use session counter to check if an object is valid, hold wakeref in context, don't add a new flag to RESET_STATS (Daniel) v8: don't increase guilty count for contexts banned during pxp invalidation (Rodrigo) v9: better comments, avoid wakeref put race between pxp_inval and context_close, add usage examples (Rodrigo) v10: modify internal set/get-protected-context functions to not return -ENODEV when setting PXP param to false or getting param when running on pxp-unsupported hw or getting param when i915 was built with CONFIG_PXP off Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Jason Ekstrand <jason@jlekstrand.net> Cc: Daniel Vetter <daniel.vetter@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-11-alan.previn.teres.alexis@intel.com
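From userspace the flow described above is: create the bo with the protected-content create extension, and create the context with the protected-content param set (and recoverable turned off). A rough sketch using the uapi names added by this series, assuming the kernel uapi headers are on the include path and with error handling abbreviated:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  static int create_protected_bo(int fd, __u64 size, __u32 *handle)
  {
          struct drm_i915_gem_create_ext_protected_content protected = {
                  .base = { .name = I915_GEM_CREATE_EXT_PROTECTED_CONTENT },
          };
          struct drm_i915_gem_create_ext create = {
                  .size = size,
                  .extensions = (__u64)(uintptr_t)&protected,
          };
          int ret = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);

          *handle = create.handle;
          return ret;
  }

  static int create_protected_ctx(int fd, __u32 *ctx_id)
  {
          /* protected content requires a non-recoverable context */
          struct drm_i915_gem_context_create_ext_setparam p_protected = {
                  .base = { .name = I915_CONTEXT_CREATE_EXT_SETPARAM },
                  .param = {
                          .param = I915_CONTEXT_PARAM_PROTECTED_CONTENT,
                          .value = 1,
                  },
          };
          struct drm_i915_gem_context_create_ext_setparam p_norecover = {
                  .base = {
                          .name = I915_CONTEXT_CREATE_EXT_SETPARAM,
                          .next_extension = (__u64)(uintptr_t)&p_protected,
                  },
                  .param = {
                          .param = I915_CONTEXT_PARAM_RECOVERABLE,
                          .value = 0,
                  },
          };
          struct drm_i915_gem_context_create_ext create = {
                  .flags = I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS,
                  .extensions = (__u64)(uintptr_t)&p_norecover,
          };
          int ret = ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create);

          *ctx_id = create.ctx_id;
          return ret;
  }

Per the commit message, a later PXP invalidation then shows up through the RESET_STATS ioctl exactly like a ban caused by a hang.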
-
Huang, Sean Z authored
The HW will generate a teardown interrupt when session termination is required, which requires i915 to submit a terminating batch. Once the HW is done with the termination it will generate another interrupt, at which point it is safe to re-create the session. Since the termination and re-creation flow is something we want to trigger from the driver as well, use a common work function that can be called both from the irq handler and from the driver set-up flows, which has the added benefit of allowing us to skip any extra locks because the work itself serializes the operations. v2: use struct completion instead of bool (Chris) v3: drop locks, clean up functions and improve comments (Chris), move to common work function. v4: improve comments, simplify wait logic (Rodrigo) v5: unconditionally set interrupts, rename state_attacked var (Rodrigo) v10: remove inclusion of intel_gt_types.h from intel_pxp.h (Jani) Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Signed-off-by: Huang, Sean Z <sean.z.huang@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-10-alan.previn.teres.alexis@intel.com
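The "one worker serializes everything" idea amounts to something like the following sketch (function and field names are illustrative, not taken from the patch):

  static void pxp_session_work(struct work_struct *work)
  {
          struct intel_pxp *pxp = container_of(work, typeof(*pxp), session_work);

          /* No extra locking: this worker is the only path that touches the
           * session, so termination and re-creation are serialized by design. */
          pxp_terminate_session(pxp);     /* submit the terminating batch */
          pxp_wait_for_termination(pxp);  /* HW signals completion via irq */
          pxp_create_arb_session(pxp);    /* only safe after that signal */
  }

  /* both the teardown irq handler and the driver set-up path simply do: */
  queue_work(system_unbound_wq, &pxp->session_work);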
-
Huang, Sean Z authored
Teardown is triggered when the display topology changes and no longer meets the secure playback requirement, and the hardware trashes all the encryption keys for display. Additionally, we want to emit a teardown operation to make sure we're clean on boot and resume. v2: emit in the ring, use high prio request (Chris) v3: better defines, stalling flush, cleaned up and renamed submission funcs (Chris) v12: fix uninitialized variable bug Signed-off-by: Huang, Sean Z <sean.z.huang@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-9-alan.previn.teres.alexis@intel.com
-
Huang, Sean Z authored
Create the arbitrary session, with the fixed session id 0xf, after system boot, for the case where an application allocates a protected buffer without establishing any protection session, because the hardware requires at least one live session for protected buffer creation. This arbitrary session will need to be re-created after a teardown or power event because the hardware encryption key won't be valid after such cases. The session ID is exposed as part of the uapi so it can be used as part of userspace commands. v2: use gt->uncore->rpm (Chris) v3: s/arb_is_in_play/arb_is_valid (Chris), move set-up to the new init_hw function v4: move interface defs to separate header, set arb_is_valid to false on fini (Rodrigo) v5: handle async component binding Signed-off-by: Huang, Sean Z <sean.z.huang@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-8-alan.previn.teres.alexis@intel.com
-
Daniele Ceraolo Spurio authored
The setting is required by hardware to allow us to do further protection operations such as sending commands to the GPU or TEE. The register needs to be re-programmed on resume, so for simplicity we bundle the programming with the component binding, which is automatically called on resume. Further HW set-up operations will be added in the same location in follow-up patches, so get ready for them by using a couple of init/fini_hw wrappers instead of calling the KCR funcs directly. v3: move programming to component binding function, rework commit msg Signed-off-by: Huang, Sean Z <sean.z.huang@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-7-alan.previn.teres.alexis@intel.com
-
Huang, Sean Z authored
Implement the funcs to create the TEE channel, so the kernel can send TEE commands directly to the TEE for creating the arbitrary (default) session. v2: fix locking, don't pollute dev_priv (Chris) v3: wait for mei PXP component to be bound. v4: drop the wait, as the component might be bound after i915 load completes. We'll instead check when sending a tee message. v5: fix an issue with mei_pxp module removal v6: don't use fetch_and_zero in fini (Rodrigo) Signed-off-by: Huang, Sean Z <sean.z.huang@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-6-alan.previn.teres.alexis@intel.com
-
Daniele Ceraolo Spurio authored
The context is required to send the session termination commands to the VCS, which will be implemented in a follow-up patch. We can also use the presence of the context as a check of pxp initialization completion. v2: use perma-pinned context (Chris) v3: rename pinned_context functions (Chris) v4: split export of pinned_context functions to a separate patch (Rodrigo) v10: remove inclusion of intel_gt_types.h from intel_pxp.h (Jani) v13: fixed for loop pointer dereference (Vinay) Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-5-alan.previn.teres.alexis@intel.com
-
Daniele Ceraolo Spurio authored
Ahead of the PXP implementation, define the relevant feature flag and Kconfig option. v2: flip kconfig default to N. Some machines have IFWIs that do not support PXP, so we need it to be an opt-in until we add support to query the caps from the mei device. v10: change comments from "Gen12+" to "Gen12 and newer" Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-4-alan.previn.teres.alexis@intel.com
-
Vitaly Lubart authored
Export the PAVP client to work with the i915 driver; for binding it uses the kernel component framework. v2: drop debug prints, refactor match code to match mei_hdcp (Tomas) Signed-off-by: Vitaly Lubart <vitaly.lubart@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-3-alan.previn.teres.alexis@intel.com
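Binding on the mei side follows the standard kernel component framework pattern; a minimal sketch (callback bodies and names are placeholders, only component_add()/component_del() and struct component_ops are the real framework API):

  #include <linux/component.h>
  #include <linux/device.h>

  static int mei_pxp_component_bind(struct device *dev,
                                    struct device *i915_dev, void *data)
  {
          /* hand our device and send/receive ops to the i915 master here */
          return 0;
  }

  static void mei_pxp_component_unbind(struct device *dev,
                                       struct device *i915_dev, void *data)
  {
          /* tear down the association on unbind */
  }

  static const struct component_ops mei_pxp_component_ops = {
          .bind   = mei_pxp_component_bind,
          .unbind = mei_pxp_component_unbind,
  };

  /* in probe():  */ err = component_add(dev, &mei_pxp_component_ops);
  /* in remove(): */ component_del(dev, &mei_pxp_component_ops);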
-
Daniele Ceraolo Spurio authored
This will be used for communication between the i915 driver and the mei one. Define it in a stand-alone patch to avoid circular dependencies between the patches modifying the two drivers. Split out from an original patch from Huang, Sean Z. v2: rename the component struct (Rodrigo) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924191452.1539378-2-alan.previn.teres.alexis@intel.com
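The shared header essentially boils down to an ops/handle pair both drivers agree on; a sketch of its shape (field and member names are assumptions, the authoritative definition is the header this patch adds):

  /* Sketch of the i915 <-> mei PXP component interface. */
  struct i915_pxp_component_ops {
          struct module *owner;

          /* send a PXP/TEE message and receive the reply */
          int (*send)(struct device *dev, const void *message, size_t size);
          int (*recv)(struct device *dev, void *buffer, size_t size);
  };

  struct i915_pxp_component {
          struct device *tee_dev;
          const struct i915_pxp_component_ops *ops;

          /* keeps tee_dev and ops stable while a message is in flight */
          struct mutex mutex;
  };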
-
- 01 Oct, 2021 6 commits
-
-
Michal Wajdeczko authored
If we time out waiting for a CT reply we print a very simple error message. Improve that by moving error reporting to the caller, where we can use CT_ERROR instead of DRM_ERROR and report just the fence, since the error code will be reported later anyway. Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210926184545.1407-5-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
Instead of the plain error value (%d), print the more user-friendly error name (%pe). Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210926184545.1407-4-michal.wajdeczko@intel.com
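For reference, %pe prints the symbolic error name when handed an ERR_PTR()-encoded value; the call site below is only illustrative:

  int err = do_guc_action(guc);   /* hypothetical call site */

  /* %pe on ERR_PTR(err) logs e.g. "-ENODEV" instead of the raw "-19" */
  if (err < 0)
          drm_err(&i915->drm, "GuC action failed: %pe\n", ERR_PTR(err));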
-
Michal Wajdeczko authored
Instead of the plain error value (%d), print the more user-friendly error name (%pe). Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210926184545.1407-3-michal.wajdeczko@intel.com
-
Michal Wajdeczko authored
In commit b839a869 ("drm/i915/guc: Add support for data reporting in GuC responses") we missed the hypothetical case that the GuC might return a positive non-zero value as success data. While that would luckily be treated as an error case, and in the end result in reporting a valid -EIO, in the meantime this value would be passed to ERR_PTR(), which could be misleading. v2: rebased Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210926184545.1407-2-michal.wajdeczko@intel.com
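Since ERR_PTR()/IS_ERR() only encode values in [-MAX_ERRNO, -1], the kind of guard this change adds looks roughly like this (the helper name is approximated):

  ret = wait_for_ct_reply(ct, &status);   /* hypothetical helper */

  /* A positive return would be "success with data" where none was expected;
   * normalize it before it can reach ERR_PTR()/IS_ERR() handling. */
  if (ret > 0)
          ret = -EIO;
  if (ret < 0)
          return ERR_PTR(ret);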
-
Michal Wajdeczko authored
We assumed that for all modern GENs the PTEs and register space are split in the GTTMMADR BAR, and while that is true, we should rather use the fixed offset defined in the specification. Bspec: 4409, 4457, 4604, 11181, 9027, 13246, 13321, 44980 Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: CQ Tang <cq.tang@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210926201005.1450-1-michal.wajdeczko@intel.com
-
Thomas Hellström authored
We may end up in i915_ttm_bo_destroy() in an error path before the object is fully initialized. In that case it's not correct to call __i915_gem_free_object(), because that function a) assumes the gem object refcount is 0, which it isn't, and b) frees the placements, which are owned by the caller until the init_object() region op returns successfully. Fix this by providing a lightweight cleanup function __i915_gem_object_fini() which is also called by __i915_gem_free_object(). While doing this, also make sure we call dma_resv_fini() as part of ordinary object destruction and not from the RCU callback that frees the object. This will help track down bugs where the object is incorrectly locked from an RCU lookup. Finally, make sure the object isn't put on the region list until it's either locked or fully initialized in order to block list processing of partially initialized objects. v2: - The TTM object backend memory was freed before the gem pages were put. Separate this functionality into __i915_gem_object_pages_fini() and call it from the TTM delete_mem_notify() callback. v3: - Include i915_gem_object_free_mmaps() in __i915_gem_object_pages_fini() to make sure we don't inadvertently introduce a race. Fixes: 48b09612 ("drm/i915: Move __i915_gem_free_object to ttm_bo_destroy") Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> #v1 Link: https://patchwork.freedesktop.org/patch/msgid/20210930113236.583531-1-thomas.hellstrom@linux.intel.com
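A sketch of the resulting split: a lightweight fini that is safe on a partially initialized object, with dma_resv_fini() done at ordinary destruction time rather than in the RCU free callback (field names may differ from the actual patch):

  /* Also called from __i915_gem_free_object(); safe on a partially
   * initialized object, so early error paths can use it directly. */
  void __i915_gem_object_fini(struct drm_i915_gem_object *obj)
  {
          mutex_destroy(&obj->mm.get_page.lock);
          dma_resv_fini(&obj->base._resv);        /* not from the RCU callback */
  }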
-
- 30 Sep, 2021 1 commit
-
-
Cai Huoqing authored
Replace direction definition PCI_DMA_BIDIRECTIONAL with DMA_BIDIRECTIONAL, because it helps to enhance readability and avoid possible inconsistency. Signed-off-by: Cai Huoqing <caihuoqing@baidu.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210925124613.144-1-caihuoqing@baidu.com
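Both names resolve to the same value; the change simply drops the legacy pci-dma-compat spelling in favour of the generic DMA API constant, e.g. (dev and page stand in for the real arguments):

  #include <linux/dma-mapping.h>

  dma_addr_t addr;

  /* before: dma_map_page(dev, page, 0, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); */
  addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
  if (dma_mapping_error(dev, addr))
          return -ENOMEM;

  /* ... use the mapping ... */

  dma_unmap_page(dev, addr, PAGE_SIZE, DMA_BIDIRECTIONAL);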
-
- 29 Sep, 2021 1 commit
-
-
Maarten Lankhorst authored
We forgot to call intel_runtime_pm_put on error, fix it! Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Fixes: cf41a8f1 ("drm/i915: Finally remove obj->mm.lock.") Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: <stable@vger.kernel.org> # v5.13+ Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210830121006.2978297-9-maarten.lankhorst@linux.intel.com
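The usual shape of such a fix is a goto-based unwind so that every error path drops the wakeref; a sketch with an illustrative failing call:

  intel_wakeref_t wakeref;
  int err;

  wakeref = intel_runtime_pm_get(&i915->runtime_pm);

  err = do_the_work(obj);         /* hypothetical call that can fail */
  if (err)
          goto out_rpm;

  /* ... happy path ... */

  out_rpm:
          intel_runtime_pm_put(&i915->runtime_pm, wakeref);
          return err;

For simple sections i915 also has the with_intel_runtime_pm() scoped helper, which makes it much harder to forget the put.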
-
- 27 Sep, 2021 1 commit
-
-
Matthew Auld authored
Seems to fix some object-debug splat which appeared while debugging something unrelated. v2: s/guc_blocked/guc_state.blocked/ Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924144646.4096402-1-matthew.auld@intel.com
-
- 25 Sep, 2021 1 commit
-
-
Janusz Krzysztofik authored
We currently do an explicit flush of the buffer pools within the call path of drm_driver.release(); this removes all buffers, regardless of their age, freeing the buffers' associated resources (objects, address space areas). However, there is other code that runs within the drm_driver.release() call chain that expects objects and their associated address space areas to have already been flushed. Since buffer pools auto-flush old buffers once per second in a worker thread, there's a small window where, if we remove the driver while there are still objects in buffers with an age of less than one second, the assumptions of the other release code may be violated. By moving the flush to driver remove (which executes earlier, via the pci_driver.remove() flow) we ensure that all buffers are flushed and their associated objects freed before _any_ code in drm_driver.release() that checks the completeness of those flushes executes. v2: Reword commit description as suggested by Matt. Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210924163825.634606-1-janusz.krzysztofik@linux.intel.com
-
- 24 Sep, 2021 11 commits
-
-
Tejas Upadhyay authored
In commit 4e5c8a99 ("drm/i915: Drop i915_request.lock requirement for intel_rps_boost()"), we decoupled the rps worker from the pm so that we could avoid the synchronization penalty, which makes the assertion liable to run too early. That makes the warning invalid, hence remove it. Fixes: 4e5c8a99 ("drm/i915: Drop i915_request.lock requirement for intel_rps_boost()") Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210914090412.1393498-1-tejaskumarx.surendrakumar.upadhyay@intel.com
-
Matthew Auld authored
In commit: commit 1e6decf3 Author: Hugh Dickins <hughd@google.com> Date: Thu Sep 2 14:54:43 2021 -0700 shmem: shmem_writepage() split unlikely i915 THP it looks like THP + shmem_writeback was an unexpected combination, and ends up hitting some BUG_ON, but it also looks like that is now fixed. While the IGTs did eventually hit this (although not during pre-merge, it seems), it's likely worthwhile adding some explicit coverage for this scenario in the shrink_thp selftest. References: https://gitlab.freedesktop.org/drm/intel/-/issues/4166 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210921142116.3807946-1-matthew.auld@intel.com
-
Matthew Auld authored
Currently we blow up in trace_dma_fence_init, when calling into get_driver_name or get_timeline_name, since both the engine and context might be NULL (or contain some garbage address) in the case of newly allocated slab objects via the request ctor. Note that we also use SLAB_TYPESAFE_BY_RCU here, which allows requests to be immediately freed, but delays freeing the underlying page by an RCU grace period. With this scheme requests can be re-allocated, at the same time as they are also being read by some lockless RCU lookup mechanism. In the ctor case, which is only called for new slab objects (i.e. allocate a new page and call the ctor for each object) it's safe to reset the context/engine prior to calling into dma_fence_init, since we can be certain that no one is doing an RCU lookup which might depend on peeking at the engine/context, like in active_engine(), since the object can't yet be externally visible. In the recycled case (which might also be externally visible) the request refcount always transitions from 0->1 after we set the context/engine etc, which should ensure it's valid to dereference the engine for example, when doing an RCU list-walk, so long as we can also increment the refcount first. If the refcount is already zero, then the request is considered complete/released. If it's non-zero, then the request might be in the process of being re-allocated, or potentially still in flight, however after successfully incrementing the refcount, it's possible to carefully inspect the request state, to determine if the request is still what we were looking for. Note that all externally visible requests returned to the cache must have zero refcount. One possible fix then is to move dma_fence_init out from the request ctor. Originally this was how it was done, but it was moved in: commit 855e39e6 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Mon Feb 3 09:41:48 2020 +0000 drm/i915: Initialise basic fence before acquiring seqno where it looks like intel_timeline_get_seqno() relied on some of the rq->fence state, but that is no longer the case since: commit 12ca695d Author: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Date: Tue Mar 23 16:49:50 2021 +0100 drm/i915: Do not share hwsp across contexts any more, v8. intel_timeline_get_seqno() could also be cleaned up slightly by dropping the request argument. Moving dma_fence_init back out of the ctor should ensure we have enough of the request initialised in case of trace_dma_fence_init. Functionally this should be the same, and is effectively what we were already open coding before, except now we also assign the fence->lock and fence->ops, but since these are invariant for recycled requests (which might be externally visible), and will therefore already hold the same value, it shouldn't matter. An alternative fix, since we don't yet have a fully initialised request when in the ctor, is just setting the context/engine as NULL, but this does require adding some extra handling in get_driver_name etc. v2 (Daniel): - Try to make the commit message less confusing Fixes: 855e39e6 ("drm/i915: Initialise basic fence before acquiring seqno") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Michael Mason <michael.w.mason@intel.com> Cc: Daniel Vetter <daniel@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210921134202.3803151-1-matthew.auld@intel.com
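For context, the lockless lookup pattern that SLAB_TYPESAFE_BY_RCU requires (and that the refcount argument above relies on) looks roughly like this generic sketch, where slot stands for whatever RCU-protected pointer is being walked:

  struct i915_request *rq;

  rcu_read_lock();
  rq = READ_ONCE(*slot);
  /* a 0 -> 1 transition is forbidden: if the refcount already hit zero the
   * request has been released and may be getting recycled right now */
  if (rq && !kref_get_unless_zero(&rq->fence.refcount))
          rq = NULL;
  rcu_read_unlock();

  if (rq && READ_ONCE(*slot) != rq) {
          /* the slab object was recycled under us: not the request we wanted */
          i915_request_put(rq);
          rq = NULL;
  }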
-
Thomas Hellström authored
We really only need memcpy restore for objects that affect the operability of the migrate context. That is, primarily the page-table objects of the migrate VM. Add an object flag, I915_BO_ALLOC_PM_EARLY, for objects that need early restores using memcpy and a way to assign LMEM page-table object flags to be used by the vms. Restore objects without this flag with the gpu blitter and only objects carrying the flag using TTM memcpy. Initially mark the migrate, gt, gtt and vgpu vms to use this flag, and defer for a later audit which vms actually need it. Most importantly, user-allocated vms with pinned page-table objects can be restored using the blitter. Performance-wise memcpy restore is probably as fast as gpu restore if not faster, but using gpu restore will help tackle future restrictions in mappable LMEM size. v4: - Don't mark the aliasing ppgtt page table flags for early resume, but rather the ggtt page table flags as intended. (Matthew Auld) - The check for user buffer objects during early resume is pointless, since they are never marked I915_BO_ALLOC_PM_EARLY. (Matthew Auld) v5: - Mark GuC LMEM objects with I915_BO_ALLOC_PM_EARLY to have them restored before we fire up the migrate context. Cc: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-8-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
Pinned context images are now reset during resume. Don't back them up, and, since rings can be assumed empty at suspend, don't back those up either. Introduce a new object flag, I915_BO_ALLOC_PM_VOLATILE, meaning that an object is allowed to lose its content on suspend. v3: - Slight documentation clarification (Matthew Auld) Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-7-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
Pinned contexts, like the migrate contexts need reset after resume since their context image may have been lost. Also the GuC needs to register pinned contexts. Add a list to struct intel_engine_cs where we add all pinned contexts on creation, and traverse that list at resume time to reset the pinned contexts. This fixes the kms_pipe_crc_basic@suspend-read-crc-pipe-a selftest for now, but proper LMEM backup / restore is needed for full suspend functionality. However, note that even with full LMEM backup / restore it may be desirable to keep the reset since backing up the migrate context images must happen using memcpy() after the migrate context has become inactive, and for performance- and other reasons we want to avoid memcpy() from LMEM. Also traverse the list at guc_init_lrc_mapping() calling guc_kernel_context_pin() for the pinned contexts, like is already done for the kernel context. v2: - Don't reset the contexts on each __engine_unpark() but rather at resume time (Chris Wilson). v3: - Reset contexts in the engine sanitize callback. (Chris Wilson) Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Brost Matthew <matthew.brost@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-6-thomas.hellstrom@linux.intel.com
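The resume-time traversal is then a simple list walk from the engine sanitize path; a sketch (the list/link names follow the commit message, the reset helper is hypothetical):

  struct intel_context *ce;

  /* called from the engine's sanitize/resume callback */
  list_for_each_entry(ce, &engine->pinned_contexts_list, pinned_contexts_link)
          reset_pinned_context_image(ce);         /* hypothetical helper */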
-
Thomas Hellström authored
Just evict unpinned objects to system. For pinned LMEM objects, make a backup system object and blit the contents to that. Backup is performed in three steps: 1: Opportunistically evict evictable objects using the gpu blitter. 2: After gt idle, evict evictable objects using the gpu blitter. This will be modified in an upcoming patch to back up pinned objects that are not used by the blitter itself. 3: Back up remaining pinned objects using memcpy. Also move uC suspend to after 2) to make sure we have a functional GuC during 2) if using GuC submission. v2: - Major refactor to make sure gem_exec_suspend@hang-SX subtests work, and suspend / resume works with a slightly modified GuC submission enabling patch series. v3: - Fix a potential use-after-free (Matthew Auld) - Use i915_gem_object_create_shmem() instead of i915_gem_object_create_region (Matthew Auld) - Minor simplifications (Matthew Auld) - Fix up kerneldoc for i915_ttm_restore_region(). - Final lmem_suspend() call moved to i915_gem_backup_suspend from i915_gem_suspend_late, since the latter gets called at driver unload and we don't want to unnecessarily run it at that time. v4: - Interface change of ttm- & lmem suspend / resume functions to use flags rather than bools. (Matthew Auld) - Completely drop the i915_gem_backup_suspend change (Matthew Auld) Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-5-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
With GuC submission on DG1, the execution of the requests times out for the gem_exec_suspend igt test case after executing around 800-900 of 1000 submitted requests. Given the time we allow elsewhere for fences to signal (in the order of seconds), increase the timeout before we mark the gt wedged and proceed. Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Acked-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-4-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
An upcoming common pattern is to traverse the region object list and perform certain actions on all objects in a region. It's a little tricky to get the list locking right, in particular since a gem object may change region unless it's pinned or the object lock is held. Define a function that does this for us and that takes an argument that defines the action to be performed on each object. v3: - Improve structure documentation a bit (Matthew Auld) Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-3-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
When backing up or restoring contents of pinned objects at suspend / resume time we need to allocate a new object as the backup. Add a function to facilitate copies between the two. Some data needs to be copied before the migration context is ready for operation, so make sure we can disable accelerated copies. v2: - Fix a missing return value check (Matthew Auld) Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-2-thomas.hellstrom@linux.intel.com
-
Thomas Hellström authored
Somehow we managed to invert the test for i915_gem_object_evictable(), which causes a warning in DG1 BAT, igt@debugfs_test@read_all_entries. Fix the lock check to only warn if the object *is* indeed evictable and not protected from eviction by fences. Cc: Matthew Brost <matthew.brost@intel.com> Fixes: 91160c83 ("drm/i915: Take pinning into account in __i915_gem_object_is_lmem") Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210922083807.888206-2-thomas.hellstrom@linux.intel.com
-
- 23 Sep, 2021 2 commits
-
-
Akira Yokosawa authored
Nested grids in grid-table cells are not specified as proper ReST constructs. Commit 572f2a5c ("drm/i915/guc: Update firmware to v62.0.0") added a couple of kerneldoc tables of the form:

  +---+-------+----------------------------------------------+
  | 1 | 31:0  |  +----------------------------------------+  |
  +---+-------+  |                                        |  |
  |...|       |  | Embedded `HXG Message`_                |  |
  +---+-------+  |                                        |  |
  | n | 31:0  |  +----------------------------------------+  |
  +---+-------+----------------------------------------------+

For "make htmldocs", they happen to work as one might expect, but they are incompatible with "make latexdocs" and "make pdfdocs", and cause the generated gpu.tex file to become incomplete and unbuildable by xelatex. Restore the compatibility by removing those nested grids in the tables. Size comparison of generated gpu.tex:

                   Sphinx 2.4.4   Sphinx 4.2.0
  v5.14:                3238686        3841631
  v5.15-rc1:             376270         432729
  with this fix:        3377846        3998095

Fixes: 572f2a5c ("drm/i915/guc: Update firmware to v62.0.0") Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Akira Yokosawa <akiyks@gmail.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/4a227569-074f-c501-58bb-d0d8f60a8ae9@gmail.com
-
Matt Roper authored
With the recent refactor of the uncore mmio handling, all forcewake-based platforms (i.e., graphics version 6 and beyond) now use the 'fwtable' read handlers. Let's pull the assignment out of the per-platform if/else ladder to make this more obvious. Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Suggested-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210923003029.2194375-1-matthew.d.roper@intel.com
-
- 22 Sep, 2021 3 commits
-
-
Tvrtko Ursulin authored
It is not very useful to have code which tries to report a rapidly transient state which will not report anything the majority of the time, especially since it is currently only used from <debugfs>/i915_gem_framebuffers. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Acked-by: Christian König <christian.koenig@amd.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210915114153.951670-1-tvrtko.ursulin@linux.intel.com
-
Matt Roper authored
We thought the DG2 table of shadowed registers would be the same as the gen12/xehp table, but it turns out that there are a few minor differences that require us to define a new DG2-specific table: * One register is removed (0xC4D4) * One register is added (0xC4E0) Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210910201030.3436066-7-matthew.d.roper@intel.com
-
Matt Roper authored
Consolidate down to just a single 'fwtable' implementation. For reads we don't need to worry about shadow tables. While consolidating the functions, gen11/gen12 pick up a NEEDS_FORCE_WAKE() check that they didn't have before, allowing them to bypass a lot of forcewake/shadow checking for non-GT registers (e.g., display). v2: - Restore NEEDS_FORCE_WAKE() check. (Chris, Tvrtko) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210910201030.3436066-6-matthew.d.roper@intel.com
-