- 01 Jul, 2021 40 commits
-
-
Andrey Grodzovsky authored
Fix the return value: return the number of processed I2C messages rather than the number of processed bytes. Cc: Jean Delvare <jdelvare@suse.de> Cc: Alexander Deucher <Alexander.Deucher@amd.com> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> Cc: Lijo Lazar <Lijo.Lazar@amd.com> Cc: Stanley Yang <Stanley.Yang@amd.com> Cc: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Alexander Deucher <Alexander.Deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
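A minimal sketch of the contract being fixed (the function and helper names are illustrative, not the actual driver's): an i2c_algorithm .master_xfer callback signals success by returning the number of I2C messages it processed, not a byte count.

    static int smu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
    {
        int r;

        /* hypothetical helper performing the SMU-side transfer */
        r = smu_do_i2c_transfer(adap, msgs, num);
        if (r)
            return r;   /* negative errno on failure */

        return num;     /* number of processed I2C messages, not bytes */
    }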
-
Andrey Grodzovsky authored
Ignore the I2C_M_STOP hint from the upper layer in the SMU I2C code, since there is no clean mapping between the kernel I2C layer's single per-message STOP flag and the SMU's per-byte STOP flag. Instead, set STOP by default at the end of each SMU I2C message. Cc: Jean Delvare <jdelvare@suse.de> Cc: Alexander Deucher <Alexander.Deucher@amd.com> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> Cc: Lijo Lazar <Lijo.Lazar@amd.com> Cc: Stanley Yang <Stanley.Yang@amd.com> Cc: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Suggested-by: Lazar Lijo <Lijo.Lazar@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com> Acked-by: Alexander Deucher <Alexander.Deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
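A hedged illustration of that default (the request structure and flag names below are hypothetical, not the real SMU interface): when translating a kernel i2c_msg into the SMU's per-byte command stream, the last byte of every message simply gets the per-byte STOP bit, regardless of any I2C_M_STOP hint.

    /* hypothetical per-byte SMU command descriptors */
    for (j = 0; j < msg->len; j++) {
        req->cmd[j].data = msg->buf[j];
        if (j == msg->len - 1)
            req->cmd[j].flags |= SMU_CMD_STOP;  /* hypothetical STOP flag */
    }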
-
Andrey Grodzovsky authored
Drop i > 0 restriction for issuing RESTART. Cc: Jean Delvare <jdelvare@suse.de> Cc: Alexander Deucher <Alexander.Deucher@amd.com> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> Cc: Lijo Lazar <Lijo.Lazar@amd.com> Cc: Stanley Yang <Stanley.Yang@amd.com> Cc: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Andrey Grodzovsky authored
Also generalize the code to accept any relevant I2C flags, for both reads and writes, and translate them to HW bits. Cc: Jean Delvare <jdelvare@suse.de> Cc: Alexander Deucher <Alexander.Deucher@amd.com> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> Cc: Lijo Lazar <Lijo.Lazar@amd.com> Cc: Stanley Yang <Stanley.Yang@amd.com> Cc: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com> Acked-by: Alexander Deucher <Alexander.Deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Andrey Grodzovsky authored
The EEPROM spec requires this. v2: Only do this for write data transactions. Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
Not sure how the firmware interprets these. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Aaron Rice authored
Handle things besides EEPROMs. Signed-off-by: Aaron Rice <wolf@lovehindpa.ws> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
Not sure that this really matters that much, but these could have various other hwmon chips on them. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
Convert from 8 bit to 7 bit. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
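For context, a minimal sketch of that conversion (variable names are illustrative): the 8-bit "datasheet" form carries the R/W bit in bit 0, while the kernel I2C core expects the bare 7-bit address in i2c_msg.addr.

    /* 8-bit write address from a board table -> 7-bit address for i2c_msg.addr */
    u16 addr_7bit = addr_8bit >> 1;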
-
Alex Deucher authored
Use the new helper rather than doing i2c transfers directly. v2: fix typo Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
Use the new helper rather than doing i2c transfers directly. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
Encapsulates the i2c protocol handling so other parts of the driver can just tell it the offset and size of data to write. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
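A minimal sketch, with hypothetical names, of what such a helper can look like: the caller passes only the offset and the data, and the helper builds the i2c_msg with the two-byte offset header itself.

    #define EEPROM_MAX_WRITE 32     /* hypothetical per-call chunk limit */

    static int eeprom_i2c_write(struct i2c_adapter *adap, u16 slave_addr,
                                u32 offset, const u8 *data, u32 len)
    {
        u8 buf[2 + EEPROM_MAX_WRITE];
        struct i2c_msg msg = {
            .addr  = slave_addr,    /* 7-bit address */
            .flags = 0,             /* write */
            .len   = 2 + len,
            .buf   = buf,
        };

        if (len > EEPROM_MAX_WRITE)
            return -EINVAL;

        buf[0] = (offset >> 8) & 0xff;  /* 16-bit EEPROM offset, MSB first */
        buf[1] = offset & 0xff;
        memcpy(&buf[2], data, len);

        /* i2c_transfer() returns the number of messages transferred */
        return i2c_transfer(adap, &msg, 1) == 1 ? 0 : -EIO;
    }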
-
Alex Deucher authored
And handle more than just EEPROMs. v2: fix restart handling between transactions. v3: handle 7 to 8 bit addr conversion v4: Fix &req --> req. (Luben T) v5: squash in i2c channel fix Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
Make it generic so we can support more than just EEPROMs. v2: fix restart handling between transactions. v3: handle 7 to 8 bit addr conversion v4: Fix &req --> req. (Luben T) v5: squash in i2c channel fix Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
Make it generic so we can support more than just EEPROMs. v2: fix restart handling between transactions. v3: handle 7 to 8 bit addr conversion v4: Fix &req --> req. (Luben T) Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
-
Alex Deucher authored
So we lock software as well as hardware access to the bus. v2: fix mutex handling. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Luben Tuikov <luben.tuikov@amd.com>
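A minimal sketch, assuming a driver-private mutex alongside the hardware bus lock (the field and helper names are hypothetical):

    mutex_lock(&smu_i2c->mutex);                    /* software serialization */
    r = smu_i2c_hw_transaction(smu_i2c, msgs, num); /* hypothetical helper that takes the hw bus */
    mutex_unlock(&smu_i2c->mutex);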
-
Mukul Joshi authored
Reset SDMA RAS error counts during init only if persistent EDC harvesting is not supported. Signed-off-by: Mukul Joshi <mukul.joshi@amd.com> Reviewed-by: John Clements <john.clements@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
Each zone-device page holds a reference to the SVM BO that manages its backing storage. This is necessary to correctly hold on to the BO in case zone_device pages are shared with a child-process. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
This is for debug purposes only. It conditionally generates partial migrations to test mixed CPU/GPU memory domain pages in a prange easily. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
Skip migration for pages that are already in the VRAM domain. These could be the result of previous partial migrations to SYS RAM followed by a prefetch back to VRAM, e.g. coherent pages in VRAM that were not written/invalidated after a copy-on-write. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
Invalid pages can be the result of pages that have already been migrated due to a copy-on-write, or pages that were never migrated to VRAM in the first place. This is not an issue anymore, as pranges now support mixed memory domains (CPU/GPU). Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
[Why] svm ranges can have mixed pages from device or system memory. A good example is a prange that has been allocated in VRAM and then has a copy-on-write triggered by a fork; this invalidates some pages inside the prange, ending up with mixed pages. [How] Classify each page inside a prange based on its type (device or system memory) during the dma mapping call. If a page corresponds to the VRAM domain, a flag is set in its dma_addr entry for each GPU. Then, at GPU page table mapping time, each group of contiguous pages of the same type is mapped with the proper pte flags. v2: Instead of using ttm_res to calculate vram pfns in the svm_range, this is now done by setting the vram real physical address into the dma_addr array. This makes VRAM management more flexible, plus it removes the need to have a BO reference in the svm_range. v3: Remove mapping member from svm_range Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
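A hedged sketch of the [How] step (the flag name and helpers below are illustrative, not the actual KFD code): while dma-mapping a prange, VRAM pages record their real VRAM physical address with a domain bit set in the dma_addr entry, while system pages go through the normal dma_map_page() path; the GPU page-table code later walks contiguous runs with the same bit and picks pte flags per run.

    for (i = 0; i < npages; i++) {
        if (page_in_vram(pages[i]))                     /* hypothetical check */
            dma_addr[i] = vram_physical_addr(pages[i]) |
                          SVM_RANGE_VRAM_DOMAIN;        /* hypothetical flag bit */
        else
            dma_addr[i] = dma_map_page(dev, pages[i], 0,
                                       PAGE_SIZE, DMA_BIDIRECTIONAL);
    }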
-
Alex Sierra authored
Now that a prange can have mixed domains (VRAM or SYSRAM), neither actual_loc nor svm_bo can be used to check its current domain and eventually get its pfns to map them on the GPU. Instead, pfns from both domains are now obtained from hmm_range_fault through the amdgpu_hmm_range_get_pages call. This is done every time a GPU mapping occurs. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
Get the proper owner reference for the amdgpu_hmm_range_get_pages function. This is useful for partial migrations, to avoid migrating back to system memory VRAM pages that are accessible by all devices in the same memory domain, e.g. multiple devices in the same hive. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
svm_range_prefault is called right before migrations to VRAM, to make sure pages are resident in system memory before the migration. With partial migrations, this reference is used by hmm range get pages to avoid migrating pages that are already in the same VRAM domain. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
The parameter is used as the dev_private_owner to decide whether device pages in the range need to be migrated back to system memory, based on whether or not they are in the same memory domain. In this case, this reference could come from the same memory domain as devices connected to the same hive. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Alex Sierra authored
GPUs in the same XGMI hive have direct access to all members' VRAM. When mapping memory to a GPU, we don't need hmm_range_fault to fault device-private pages in the same hive back to the host. Identifying the page owner as the hive, rather than the individual GPU, accomplishes this. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
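A minimal sketch of that idea, assuming a small helper that is not part of the actual driver: hmm_range_fault() only faults device-private pages back to system memory when their owner differs from range->dev_private_owner, so passing the hive (when one exists) as the owner lets peer GPUs in the hive map each other's VRAM in place.

    /* hypothetical helper choosing the page owner for hmm_range_fault() */
    static void *svm_page_owner(struct amdgpu_device *adev)
    {
        /* the hive lookup is illustrative; the real driver has its own way
         * of finding the XGMI hive a device belongs to */
        return adev->hive ? (void *)adev->hive : (void *)adev;
    }

    ...
    range.dev_private_owner = svm_page_owner(adev);
    r = hmm_range_fault(&range);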
-
Alex Sierra authored
During GPU page table invalidation with xnack off, new range splits may occur concurrently in the same prange, creating a new child per split. Each child should also increment its invalid counter, to ensure GPU page table updates in these ranges. Signed-off-by: Alex Sierra <alex.sierra@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Nicholas Kazlauskas authored
[Why & How] Extend existing support for DCN2.1 DMUB diagnostic logging to DCN3.1 so we can collect useful information if the DMUB hangs. Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Reviewed-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Joseph Greathouse authored
Navi series GPUs have 2 SIMDs per CU (and then 2 CUs per WGP). The NV enum headers incorrectly listed this as 4, which later meant we were incorrectly reporting the number of SIMDs in the HSA topology. This could cause problems down the line for user-space applications that want to launch a fixed amount of work to each SIMD. Signed-off-by: Joseph Greathouse <Joseph.Greathouse@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
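For example, a GPU with 20 WGPs has 40 CUs and therefore 80 SIMD units; with the incorrect value of 4 SIMDs per CU, the topology would have reported 160.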
-
Alex Deucher authored
Add new PCI device id. Reviewed-by: Guchun Chen <guchun.chen@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
-
Shyam Sundar S K authored
The documentation around the PrepareMp1ForUnload message says that anything sent to the SMU after this command would be stalled, as the PMFW would not be in a state to take further job requests. Technically this is right in the S3 scenario. But this might not be the case during s0ix, as the PMC driver would be the last to send the OS_HINT to the SMU. If the SMU gets a PrepareMp1ForUnload message before the OS_HINT, this would stall the entire S0ix process. Results show that this message to the SMU is not required during S0ix, hence skip it. Reviewed-by: Prike Liang <Prike.Liang@amd.com> Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
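A minimal sketch of the resulting behavior (the condition and the message-sending call are written with illustrative names; the actual SMU/driver code differs):

    /* skip notifying MP1 of unload when entering s0ix */
    if (!adev->in_s0ix)
        ret = smu_send_smc_msg(smu, SMU_MSG_PrepareMp1ForUnload, NULL);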
-
Huang Rui authored
On some ASICs, we need to adjust the behavior according to the APU flags at a very early stage. Signed-off-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Aaron Liu <aaron.liu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Reka Norman authored
Setting CONFIG_FRAME_WARN=0 should disable 'stack frame larger than' warnings. This is useful for example in KASAN builds. Make the dml Makefile respect this config. Fixes the following build warnings with CONFIG_KASAN=y and CONFIG_FRAME_WARN=0: drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn30/display_mode_vba_30.c:3642:6: warning: stack frame size of 2216 bytes in function 'dml30_ModeSupportAndSystemConfigurationFull' [-Wframe-larger-than=] drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn31/display_mode_vba_31.c:3957:6: warning: stack frame size of 2568 bytes in function 'dml31_ModeSupportAndSystemConfigurationFull' [-Wframe-larger-than=] Reviewed-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Reka Norman <rekanorman@google.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Jing Xiangfeng authored
radeon_user_framebuffer_create() fails to call drm_gem_object_put() in an error path. Add the missing function call to fix it. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
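A hedged sketch of the pattern being fixed (the surrounding code is paraphrased, not the exact radeon source): once drm_gem_object_lookup() has taken a reference, every error return after that point must drop it.

    obj = drm_gem_object_lookup(file_priv, mode_cmd->handles[0]);
    if (obj == NULL)
        return ERR_PTR(-ENOENT);

    fb = kzalloc(sizeof(*fb), GFP_KERNEL);
    if (fb == NULL) {
        drm_gem_object_put(obj);    /* the previously missing put */
        return ERR_PTR(-ENOMEM);
    }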
-
Michal Suchanek authored
Also copy over the part that makes old gcc handling cross-platform. Fixes: df7a1658 ("drm/amdgpu/dc: fix DCN3.1 Makefile for PPC64") Fixes: 926d6972 ("drm/amd/display: Add DCN3.1 blocks to the DC Makefile") Reviewed-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Tiezhu Yang authored
On the Loongson64 platform used with a Radeon GPU, shutdown or reboot fails when console=tty is in the boot cmdline. radeon_suspend_kms() puts the hw in the suspend state, and in particular sets the fb state to FBINFO_STATE_SUSPENDED:

    if (fbcon) {
        console_lock();
        radeon_fbdev_set_suspend(rdev, 1);
        console_unlock();
    }

Then any further fb operations are skipped in the related functions:

    if (p->state != FBINFO_STATE_RUNNING)
        return;

So call radeon_suspend_kms() in radeon_pci_shutdown() for Loongson64 to fix this issue; it looks like some kind of workaround, similar to what powerpc does. Co-developed-by: Jianmin Lv <lvjianmin@loongson.cn> Signed-off-by: Jianmin Lv <lvjianmin@loongson.cn> Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
-
Oak Zeng authored
The ttm caching flags (ttm_cached, ttm_write_combined etc.) are used to determine a buffer object's mapping attributes in both the CPU page table and the GPU page table (when that buffer is also accessed by the GPU). Currently the ttm caching flags are set in the function amdgpu_ttm_io_mem_reserve, which is called during the DRM_AMDGPU_GEM_MMAP ioctl. This is a problem, since the GPU mapping of the buffer object (ioctl DRM_AMDGPU_GEM_VA) can happen earlier than the mmap time, so the GPU page table update code can't pick up the right ttm caching flags to decide the right GPU page table attributes. This patch moves the ttm caching flags setting to the function amdgpu_vram_mgr_new - this function is called during the first step of buffer object creation (e.g. DRM_AMDGPU_GEM_CREATE), so both the later CPU and GPU mapping function calls will pick up this flag for CPU/GPU page table setup. v2: rebase (Alex) Signed-off-by: Oak Zeng <Oak.Zeng@amd.com> Suggested-by: Christian Koenig <Christian.Koenig@amd.com> Reviewed-by: Christian Koenig <Christian.Koenig@amd.com> Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Tested-by: Po Huang <Po.Huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
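A hedged sketch of where the decision moves (the field layout follows TTM's resource/bus-placement structures, but the condition and variable names are illustrative): the caching mode is chosen when the VRAM resource is allocated, so both the later CPU mmap and GPU VA mapping paths see the same flag.

    /* in the VRAM manager's allocation path (DRM_AMDGPU_GEM_CREATE) */
    if (bo_wants_uncached(bo))          /* hypothetical condition */
        node->base.bus.caching = ttm_uncached;
    else
        node->base.bus.caching = ttm_write_combined;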
-
Guchun Chen authored
During GPU reset, when receiving a DMCUB OUTBOX0 interrupt, DAL code will treat it as an OUTBOX interrupt and set the hw interrupt. However, the OUTBOX interrupt is not registered yet, so a NULL pointer access will be executed.

Call Trace:
    dal_irq_service_set+0x30/0x90 [amdgpu]
    dc_interrupt_set+0x24/0x30 [amdgpu]
    amdgpu_dm_set_dmub_outbox_irq_state+0x22/0x30 [amdgpu]
    amdgpu_irq_update+0x77/0xa0 [amdgpu]
    amdgpu_irq_gpu_reset_resume_helper+0x67/0xa0 [amdgpu]
    amdgpu_do_asic_reset+0x219/0x260 [amdgpu]
    amdgpu_device_gpu_recover.cold+0x8c5/0xb64 [amdgpu]
    amdgpu_debugfs_gpu_recover_show+0x2c/0x60 [amdgpu]
    seq_read_iter+0xc2/0x450
    ? do_anonymous_page+0x22c/0x3b0
    seq_read+0xf9/0x140
    full_proxy_read+0x5c/0x90
    vfs_read+0xaa/0x190
    ksys_read+0x67/0xe0
    __x64_sys_read+0x1a/0x20

Fixes: effbf6ca ("drm/amdgpu/display: remove an old DCN3 guard") Signed-off-by: Guchun Chen <guchun.chen@amd.com> Reviewed-and-tested-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Guchun Chen authored
A valid DAL irq source should be < DAL_IRQ_SOURCES_NUMBER. Signed-off-by: Guchun Chen <guchun.chen@amd.com> Reviewed-and-tested-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
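A minimal sketch of such a bound check (only DAL_IRQ_SOURCES_NUMBER comes from the commit summary; the function name and the lower bound are illustrative):

    static bool dal_irq_source_is_valid(enum dc_irq_source src)
    {
        return src > 0 && src < DAL_IRQ_SOURCES_NUMBER;
    }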
-