- 21 Mar, 2024 1 commit
-
-
Matthew Auld authored
The queue width determines the number of batch buffers emitted into the ring. In the case of xe_bb_create_job() we pass exactly one batch address, therefore add an assert on the width to make sure we don't go out of bounds. While here, also convert to the helper to determine if the queue is migration based. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240320112730.219854-3-matthew.auld@intel.com
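For context, a minimal sketch (plain C, hypothetical struct and function names, not the driver's actual code) of the kind of bounds check this describes: a job-creation path that hands off exactly one batch address must insist the queue width is 1, otherwise the ring-emission loop would index past the address array.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver structures. */
struct exec_queue {
	unsigned int width;	/* batch buffers emitted per submission */
};

static void emit_batches(const uint64_t *addrs, unsigned int count)
{
	/* ... write `count` batch-buffer start addresses into the ring ... */
	(void)addrs;
	(void)count;
}

/* The caller passes exactly one batch address, so width must be 1. */
static void bb_create_job(const struct exec_queue *q, uint64_t batch_addr)
{
	uint64_t addrs[1] = { batch_addr };

	assert(q->width == sizeof(addrs) / sizeof(addrs[0]));
	emit_batches(addrs, q->width);
}
```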
-
- 20 Mar, 2024 12 commits
-
-
Daniele Ceraolo Spurio authored
The GGTT is currently a 32 bit address space, but the HW and GuC support 48b addresses in GGTT-related operations, both to keep the interface/HW paths common between PPGTT and GGTT and to allow for future increase of the GGTT size. This leaves us having to program a 64b field with a 32b offset, which in some cases we're currently doing by using an upper_32_bits() call on a 32b variable, which doesn't make any sense. To do this cleanly we have 2 options: 1 - Set the upper 32 bits directly to zero. 2 - Use 64b variables for the offset and keep programming the whole thing, so we're ready if we ever have bigger offsets. This patch goes with option #2 and switches the related variables to u64. v2: don't change the log ctl flag variable (John) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240319195101.2784480-1-daniele.ceraolospurio@intel.com
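A sketch of what option #2 looks like (hypothetical helper and register layout): with the offset held in a u64, splitting it into lower/upper dwords is meaningful, and the upper dword simply stays zero for as long as GGTT offsets fit in 32 bits.

```c
#include <stdint.h>

static inline uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static inline uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

/*
 * Keep the GGTT offset in a u64 and program both dwords of the 64b field.
 * Today upper_32_bits() always yields 0, but this keeps working unchanged
 * if GGTT offsets ever grow past 32 bits.
 */
static void program_ggtt_addr(uint32_t dw[2], uint64_t ggtt_offset)
{
	dw[0] = lower_32_bits(ggtt_offset);
	dw[1] = upper_32_bits(ggtt_offset);
}
```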
-
Daniele Ceraolo Spurio authored
A force_wake_get failure means that the HW might not be awake for the access we're doing; this can lead to an immediate error or it can be a more subtle problem (e.g. a register read might return an incorrect value that is still valid, leading the driver to make a wrong choice instead of flagging an error). We avoid an error from the force_wake function because callers might handle or tolerate the error, but this only works if all callers are checking the error code. The majority already do, but a few are not. These mainly fall into 3 categories, each handled differently: 1) error capture: in this case we want to continue the capture, but we log an info message in dmesg to notify the user that the capture might have incorrect data. 2) ioctl: in this case we return a -EIO error to userspace 3) unabortable actions: these are scenarios where we can't simply abort and retry, so it's better to just try anyway because there is a chance the HW is awake even with the failure. In this case we throw a warning so we know there was a forcewake problem if something fails down the line. v2: use gt_WARN_ON where appropriate Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Tejas Upadhyay <tejas.upadhyay@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Tejas Upadhyay <tejas.upadhyay@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318154924.3453513-1-daniele.ceraolospurio@intel.com
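A rough userspace-C sketch (hypothetical function names) of how the three categories translate into code: tolerate-and-log for error capture, -EIO back to userspace for ioctls, and a warning for paths that must proceed regardless.

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical; stands in for the real forcewake acquire. */
static int force_wake_get(void) { return 0; }

/* 1) error capture: keep going, but tell the user the data may be off. */
static void capture_state(void)
{
	if (force_wake_get())
		fprintf(stderr, "info: forcewake failed, capture may be inaccurate\n");
	/* ... dump registers anyway ... */
}

/* 2) ioctl: surface the failure to userspace. */
static int some_ioctl(void)
{
	if (force_wake_get())
		return -EIO;
	/* ... do the HW access ... */
	return 0;
}

/* 3) unabortable path: warn so later failures can be traced back here. */
static void teardown(void)
{
	if (force_wake_get())
		fprintf(stderr, "WARN: forcewake failed, HW may not be awake\n");
	/* ... proceed, there is a chance the HW is awake anyway ... */
}
```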
-
Radhakrishna Sripada authored
Disable clockgating for TDL SVHS fub. v2: Extend the Wa to 1274 (MattR) Bspec: 46045 Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318210120.564692-1-radhakrishna.sripada@intel.com
-
Tejas Upadhyay authored
Remove a continue statement which has no real effect, as no actions are taken after it. Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com> Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Acked-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318114057.3831274-1-tejas.upadhyay@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
-
Luca Coelho authored
Some of the backported intel_uncore_read*() functions used the wrong types. Change the function declarations accordingly. Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314065221.1181158-1-luciano.coelho@intel.com Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Maarten Lankhorst authored
Considering the caller of the GGTT functions should keep the backing storage alive until the function completes, it's not necessary to invalidate with the GGTT lock held. This just adds latency for every user of the GGTT. Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240306052002.311196-5-matthew.brost@intel.com
-
Matthew Brost authored
Add XE_BO_GGTT_INVALIDATE flag which indicates the GGTT should be invalidated when a BO is added / removed from the GGTT. This is typically set when a BO is used by the GuC as the GuC has GGTT TLBs. Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> [mlankhorst: Small fix to only inherit GGTT_INVALIDATE from src bo] [mlankhorst: Remove _BIT from name] Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240306052002.311196-4-matthew.brost@intel.com
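A sketch, with made-up flag and field names, of what such a per-BO invalidation flag buys: only mappings the GuC can see (and therefore cache in its GGTT TLBs) pay for a TLB invalidation on bind/unbind.

```c
#include <stdint.h>

#define BO_FLAG_GGTT_INVALIDATE	(1u << 0)	/* hypothetical flag */

struct bo {
	uint32_t flags;
	uint64_t ggtt_offset;
};

static void ggtt_invalidate(void)
{
	/* ... ask the GuC to flush its GGTT TLBs ... */
}

static void ggtt_insert_bo(struct bo *bo)
{
	/* ... write the PTEs for bo->ggtt_offset ... */

	/* Display-only BOs skip this; GuC-visible BOs must invalidate. */
	if (bo->flags & BO_FLAG_GGTT_INVALIDATE)
		ggtt_invalidate();
}
```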
-
Matthew Brost authored
Only buffers mapped in the GGTT and used by the GuC require an invalidation. Display buffers do not require an invalidation. Delete the invalidation from display code and make invalidation a static function in xe_ggtt.c. Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240306052002.311196-3-matthew.brost@intel.com
-
Nirmoy Das authored
Add an explicit check to ensure that the mgr is not NULL. Cc: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240319130925.22399-1-nirmoy.das@intel.com
-
Niranjana Vishwanathapura authored
Use the xe_exec_queue_user_extension_fn type for exec_queue_user_extension_funcs. Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240319174919.1847-1-niranjana.vishwanathapura@intel.com
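Roughly what a named function-pointer type buys here (illustrative names and signature only): every entry in the table is forced to the same signature, so a mismatched handler fails to compile instead of being silently cast.

```c
#include <stdint.h>

struct exec_queue;

/* Named handler type; every table entry must match this signature. */
typedef int (*exec_queue_user_extension_fn)(struct exec_queue *q,
					    uint64_t extension);

static int set_priority(struct exec_queue *q, uint64_t extension)
{
	(void)q; (void)extension;
	return 0;
}

static int set_timeslice(struct exec_queue *q, uint64_t extension)
{
	(void)q; (void)extension;
	return 0;
}

static const exec_queue_user_extension_fn user_extension_funcs[] = {
	set_priority,
	set_timeslice,
};
```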
-
Niranjana Vishwanathapura authored
Ensure exec queue freeing happens in one place, that is in __xe_exec_queue_free(). It also releases the q->vm reference. Set q->vm before handling extensions as they can potentially reference it. Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240319175947.15890-1-niranjana.vishwanathapura@intel.com
-
Niranjana Vishwanathapura authored
Abstract out the core part of the sched_done and deregister_done handlers into separate functions to decouple them from the protocol error handling and make them more readable. Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240319184153.16667-1-niranjana.vishwanathapura@intel.com
-
- 19 Mar, 2024 15 commits
-
-
Daniele Ceraolo Spurio authored
Supporting older GuC versions comes with baggage, both on the coding side (due to interfaces only being available from a certain version onwards) and on the testing side (due to having to make sure the driver works as expected with older GuCs). Since all of our Xe platforms are still under force probe, we haven't committed to support any specific GuC version and we therefore don't need to support the older ones, which means that we can force a bottom limit on what GuC we accept. This allows us to remove any conditional statements based on older GuC versions and also to approach newer additions knowing that we'll never attempt to load something older than our minimum requirement. As an initial value, the minimum expected version is set to 70.19, which is the version currently in the firmware table, but the expectation is that this will be bumped every time we update the table, until we remove the force probe. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240304162616.824884-1-daniele.ceraolospurio@intel.com
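A sketch of the kind of version floor this enables (the 70.19 value comes from the commit message; the struct and function are illustrative, not the driver's actual code):

```c
#include <stdbool.h>

/* Minimum expected GuC version per the commit message. */
#define GUC_VER_MIN_MAJOR	70
#define GUC_VER_MIN_MINOR	19

struct guc_version {
	unsigned int major;
	unsigned int minor;
};

/* Reject anything older than the floor at firmware-load time. */
static bool guc_version_ok(const struct guc_version *v)
{
	if (v->major != GUC_VER_MIN_MAJOR)
		return v->major > GUC_VER_MIN_MAJOR;
	return v->minor >= GUC_VER_MIN_MINOR;
}
```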
-
Rodrigo Vivi authored
In case the suspend/resume flow gets locked up, we can get reports with some useful hints on where it got stuck and whether it failed. Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318180141.267458-2-rodrigo.vivi@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Rodrigo Vivi authored
Let's be quieter in production configurations and let's also print the entry point of the gt suspend when debug messages are enabled. Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318180141.267458-1-rodrigo.vivi@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Himal Prasad Ghimiray authored
vm->flags is already assigned the passed-in flags. Remove the redundant assignment. Cc: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Reviewed: Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240307065213.1968688-1-himal.prasad.ghimiray@intel.com Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Juha-Pekka Heikkila authored
Mark dpt and related vma as uncached to avoid pipe faults on some devices. Signed-off-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318201850.127785-1-juhapekka.heikkila@gmail.com
-
Matthew Auld authored
Otherwise in the case where we use normal system memory, the CPU access will always be cached, like when filling the DPT PTEs, which is likely not what we want since HW access could be incoherent on platforms like LNL. Marking as XE_BO_PAGETABLE will force wc/uc underneath on such platforms. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314164905.239449-2-matthew.auld@intel.com
-
Nirmoy Das authored
Replace usage of unsafe strcpy with a helper function that converts an engine class to a string. Cc: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318091055.638-1-nirmoy.das@intel.com
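Illustrative only (the driver's enum and helper differ): a lookup helper that returns a constant string avoids copying into a fixed-size buffer with strcpy, so there is no destination to overflow.

```c
/* Hypothetical engine classes for the sketch. */
enum engine_class {
	ENGINE_CLASS_RENDER,
	ENGINE_CLASS_COPY,
	ENGINE_CLASS_OTHER,
};

/* Returning a string literal needs no copy and no destination buffer. */
static const char *engine_class_to_str(enum engine_class class)
{
	switch (class) {
	case ENGINE_CLASS_RENDER:
		return "render";
	case ENGINE_CLASS_COPY:
		return "copy";
	default:
		return "other";
	}
}
```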
-
Nirmoy Das authored
The vma pointer can't be NULL here. Cc: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318093547.16326-1-nirmoy.das@intel.com
-
Matthew Auld authored
Here XE_MAX_TILES_PER_DEVICE is the gt array size, therefore the gt index should always be less than it. v2 (Lucas): - Add fixes tag. Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Acked-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318180532.57522-6-matthew.auld@intel.com
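The general shape of this and the following bound fixes, sketched with stand-in names: an index into an array of size N is valid only when it is strictly less than N, so the assert must use `<`, not `<=`.

```c
#include <assert.h>

#define MAX_TILES_PER_DEVICE	2	/* stand-in for the real constant */

struct gt { int id; };
static struct gt gt_array[MAX_TILES_PER_DEVICE];

static struct gt *lookup_gt(unsigned int id)
{
	/* '<=' would allow a one-past-the-end access. */
	assert(id < MAX_TILES_PER_DEVICE);
	return &gt_array[id];
}
```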
-
Matthew Auld authored
Here XE_MAX_GT_PER_TILE is the total, therefore the gt index should always be less than it. Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318180532.57522-5-matthew.auld@intel.com
-
Matthew Auld authored
The engine_class is the index into user_to_xe_engine_class, therefore it needs to be less than the array size. Fixes: dd08ebf6 ("drm/xe: Introduce a new DRM driver for Intel GPUs") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318180532.57522-4-matthew.auld@intel.com
-
Nirmoy Das authored
Explicitly cast tbo->page_alignment to u64 before bit-shifting to prevent overflow when assigning to min_page_size. Cc: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318164342.3094-1-nirmoy.das@intel.com
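A small standalone illustration of the overflow being avoided (values chosen for the example; the real field and shift amount may differ): when the left operand is only 32 bits wide, the shift is evaluated in 32 bits and the high bits are lost, so the widening cast has to happen before the shift.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int page_alignment = 0x200000;	/* e.g. 2M pages -> 8 GiB */

	/* Shift evaluated in 32 bits: the high bit is lost, result is 0. */
	uint64_t lost = page_alignment << 12;

	/* Widen first, then shift: result is 0x200000000 as intended. */
	uint64_t kept = (uint64_t)page_alignment << 12;

	printf("lost=%#llx kept=%#llx\n",
	       (unsigned long long)lost, (unsigned long long)kept);
	return 0;
}
```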
-
Matthew Auld authored
The region can be used as an index into region_to_mem_type, so we should be asserting that it is less than the ARRAY_SIZE here. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318103616.26240-2-matthew.auld@intel.com
-
Matthew Auld authored
If we fished it out of the list then it can't be NULL; the list entry is embedded in the bo. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318093431.21075-4-matthew.auld@intel.com
-
Matthew Auld authored
We use a plain spinlock to protect readers and writers, so there is no actual RCU here. Rather, use the more appropriate non-RCU list-based API. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240318093431.21075-3-matthew.auld@intel.com
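An illustrative kernel-style sketch (struct and list names are made up): with a spinlock serializing both readers and writers there is no RCU grace period to honour, so the plain list walkers are the right tool and the _rcu variants would only suggest protection that isn't there.

```c
#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical stand-ins; names are illustrative, not the driver's. */
struct tracked_bo {
	struct list_head link;
};

static LIST_HEAD(tracked_list);
static DEFINE_SPINLOCK(tracked_lock);

static void walk_tracked(void (*fn)(struct tracked_bo *bo))
{
	struct tracked_bo *bo;

	spin_lock(&tracked_lock);
	/*
	 * Plain walker: readers are already serialized by the spinlock,
	 * so list_for_each_entry_rcu() would be misleading here.
	 */
	list_for_each_entry(bo, &tracked_list, link)
		fn(bo);
	spin_unlock(&tracked_lock);
}
```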
-
- 15 Mar, 2024 9 commits
-
-
Michal Wajdeczko authored
The Multi-Level LMTT variant is not specific to PVC. Change the logic to select it for all new platforms beyond 12.60. Bspec: 52404, 67468 Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240313104132.1045-4-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
The LMTT Page Directory, as well as the directory entries, must be aligned on a 64KB boundary in VRAM. Use an explicit alignment flag to match the hardware requirement. Bspec: 52404, 67468 Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240313104132.1045-3-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
While today we are getting VRAM allocations aligned to 64K because the XE_VRAM_FLAGS_NEED64K flag could be set, we shouldn't rely only on that flag; we should also allow the caller to specify the required 64K alignment explicitly. Define a new XE_BO_NEEDS_64K flag for that. Cc: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240313104132.1045-2-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
Shortly we will be updating the xe_mmio_read|write() functions with SR-IOV specific features, making those functions less suitable for inlining. Convert those functions into regular ones now, lowering the driver footprint, according to scripts/bloat-o-meter, by 6%:
add/remove: 18/18 grow/shrink: 31/603 up/down: 2719/-79663 (-76944)
Function: Total: Before=1276633, After=1199689, chg -6.03%
add/remove: 0/0 grow/shrink: 0/0 up/down: 0/0 (0)
Data: Total: Before=48990, After=48990, chg +0.00%
add/remove: 0/0 grow/shrink: 0/0 up/down: 0/0 (0)
RO Data: Total: Before=115680, After=115680, chg +0.00%
Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314173130.1177-7-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
Interrupt registers 1900xx are VF accessible, but only until version 12.50, as on newer platforms VFs use memory-based interrupts. To avoid complexity, we mark those registers with XE_REG_OPTION_VF unconditionally, as IRQ handling on newer VFs is different anyway. Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314173130.1177-6-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
Only selected registers are available for Virtual Functions. Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314173130.1177-5-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
Only selected registers are available for Virtual Functions. Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314173130.1177-4-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
We will tag registers that SR-IOV Virtual Functions can access. This will help us catch any invalid usage and/or provide a custom replacement if available. Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314173130.1177-3-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
-
Michal Wajdeczko authored
We want to keep the struct xe_reg as small as possible. Make sure we don't accidentally change its size. Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314173130.1177-2-michal.wajdeczko@intel.com Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
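One common way to enforce such a size guarantee at build time (a sketch; the layout and the mechanism actually used in the driver may differ):

```c
#include <stdint.h>

/* Simplified stand-in for a packed register descriptor. */
struct reg_desc {
	uint32_t addr:28;
	uint32_t masked:1;
	uint32_t mcr:1;
	uint32_t vf:1;
	uint32_t ext:1;
};

/* Fails the build if the descriptor ever grows past 4 bytes. */
_Static_assert(sizeof(struct reg_desc) == sizeof(uint32_t),
	       "struct reg_desc must stay 32 bits");
```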
-
- 14 Mar, 2024 3 commits
-
-
Zhanjun Dong authored
Add a helper macro to loop over each DSS. This is a precursor patch to allow for easier iteration through MCR registers and other per-DSS uses. Signed-off-by: Zhanjun Dong <zhanjun.dong@intel.com> Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314210735.258553-2-zhanjun.dong@intel.com
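A sketch of what such an iteration macro typically looks like (illustrative only; the real helper walks the DSS masks from the GT's topology info):

```c
#include <stdio.h>

#define MAX_DSS	32

/* Hypothetical: bit i set in the mask means DSS i is present/enabled. */
#define for_each_dss(dss, mask)					\
	for ((dss) = 0; (dss) < MAX_DSS; (dss)++)		\
		if ((mask) & (1u << (dss)))

int main(void)
{
	unsigned int dss, mask = 0x0000000f;	/* example: DSS 0-3 enabled */

	for_each_dss(dss, mask)
		printf("DSS %u enabled\n", dss);
	return 0;
}
```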
-
Matt Roper authored
Switch the MOCS-related debug messages to use a GT-specific logging function and add ID/type output at the beginning of the MOCS kunit test to assist with debugging when problems arise. Cc: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314195825.3226856-4-matthew.d.roper@intel.com
-
Matt Roper authored
Although MOCS registers became multicast in graphics version 12.50 on the primary GT, this transition did not happen until version 20 on the media GT. Considering each GT independently is mostly important for MTL/ARL where the Xe_LPM+ IP has non-MCR MOCS registers, even though Xe_LPG IP has MCR registers. Bspec: 67789, 71186 Cc: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240314195825.3226856-3-matthew.d.roper@intel.com
-