1. 15 Dec, 2023 1 commit
  2. 13 Dec, 2023 2 commits
  3. 11 Dec, 2023 2 commits
  4. 08 Dec, 2023 2 commits
  5. 07 Dec, 2023 1 commit
  6. 06 Dec, 2023 1 commit
    • drm/atomic-helpers: Invoke end_fb_access while owning plane state · e0f04e41
      Thomas Zimmermann authored
      Invoke drm_plane_helper_funcs.end_fb_access before
      drm_atomic_helper_commit_hw_done(). The latter function hands over
      ownership of the plane state to the following commit, which might
      free it. Releasing resources in end_fb_access then operates on undefined
      state. This bug has been observed with non-blocking commits when they
      are being queued up quickly.
      
      Here is an example stack trace from the bug report. The plane state has
      been freed already, so the pages for drm_gem_fb_vunmap() are gone.
      
      Unable to handle kernel paging request at virtual address 0000000100000049
      [...]
       drm_gem_fb_vunmap+0x18/0x74
       drm_gem_end_shadow_fb_access+0x1c/0x2c
       drm_atomic_helper_cleanup_planes+0x58/0xd8
       drm_atomic_helper_commit_tail+0x90/0xa0
       commit_tail+0x15c/0x188
       commit_work+0x14/0x20
      
      Fix this by running end_fb_access immediately after updating all planes
      in drm_atomic_helper_commit_planes(). The existing clean-up helper
      drm_atomic_helper_cleanup_planes() now only handles cleanup_fb.
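
      A minimal sketch of the resulting ordering, modelled on
      drm_atomic_helper_commit_tail() (simplified; fencing and vblank handling
      are omitted, and the function name below is only illustrative):

        #include <drm/drm_atomic_helper.h>

        static void example_commit_tail(struct drm_atomic_state *old_state)
        {
                struct drm_device *dev = old_state->dev;

                drm_atomic_helper_commit_modeset_disables(dev, old_state);

                /* Updates all planes and, with this fix, also runs end_fb_access
                 * while this commit still owns the plane state. */
                drm_atomic_helper_commit_planes(dev, old_state, 0);

                drm_atomic_helper_commit_modeset_enables(dev, old_state);

                /* Hands plane-state ownership over to the following commit. */
                drm_atomic_helper_commit_hw_done(old_state);

                drm_atomic_helper_wait_for_vblanks(dev, old_state);

                /* Now only invokes cleanup_fb; end_fb_access already ran above. */
                drm_atomic_helper_cleanup_planes(dev, old_state);
        }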
      
      For aborted commits, roll back from drm_atomic_helper_prepare_planes()
      in the new helper drm_atomic_helper_unprepare_planes(). This case is
      different from regular cleanup, as we have to release the new state;
      regular cleanup releases the old state. The new helper also invokes
      cleanup_fb for all planes.
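
      A hedged sketch of how a driver's custom commit function would roll back
      with the new helper (simplified from the flow of
      drm_atomic_helper_commit(); the function name below is illustrative and
      most error handling is omitted):

        #include <drm/drm_atomic_helper.h>

        static int example_atomic_commit(struct drm_device *dev,
                                         struct drm_atomic_state *state,
                                         bool nonblock)
        {
                int ret;

                ret = drm_atomic_helper_setup_commit(state, nonblock);
                if (ret)
                        return ret;

                ret = drm_atomic_helper_prepare_planes(dev, state);
                if (ret)
                        return ret;

                ret = drm_atomic_helper_swap_state(state, true);
                if (ret)
                        goto err_unprepare;

                /* ... queue or run the commit tail ... */
                return 0;

        err_unprepare:
                /* Roll back from prepare_planes: releases the *new* state and
                 * invokes cleanup_fb for all planes. */
                drm_atomic_helper_unprepare_planes(dev, state);
                return ret;
        }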
      
      The changes mostly involve DRM's atomic helpers. Only two drivers, i915
      and nouveau, implement their own commit function. Update them to invoke
      drm_atomic_helper_unprepare_planes(). Drivers with a custom commit_tail
      function do not require changes.
      
      v4:
      	* fix documentation (kernel test robot)
      v3:
      	* add drm_atomic_helper_unprepare_planes() for rolling back
      	* use correct state for end_fb_access
      v2:
      	* fix test in drm_atomic_helper_cleanup_planes()
      Reported-by: Alyssa Ross <hi@alyssa.is>
      Closes: https://lore.kernel.org/dri-devel/87leazm0ya.fsf@alyssa.is/
      Suggested-by: Daniel Vetter <daniel@ffwll.ch>
      Fixes: 94d879ea ("drm/atomic-helper: Add {begin,end}_fb_access to plane helpers")
      Tested-by: Alyssa Ross <hi@alyssa.is>
      Reviewed-by: Alyssa Ross <hi@alyssa.is>
      Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
      Cc: <stable@vger.kernel.org> # v6.2+
      Link: https://patchwork.freedesktop.org/patch/msgid/20231204083247.22006-1-tzimmermann@suse.de
      e0f04e41
  7. 05 Dec, 2023 1 commit
  8. 30 Nov, 2023 3 commits
  9. 29 Nov, 2023 6 commits
  10. 28 Nov, 2023 4 commits
  11. 27 Nov, 2023 2 commits
  12. 24 Nov, 2023 1 commit
  13. 21 Nov, 2023 4 commits
  14. 20 Nov, 2023 1 commit
  15. 19 Nov, 2023 2 commits
  16. 17 Nov, 2023 1 commit
  17. 15 Nov, 2023 3 commits
  18. 14 Nov, 2023 3 commits
    • nouveau: use an rwlock for the event lock. · a2e36cd5
      Dave Airlie authored
      Using an rwlock for the event lock breaks the following circular locking
      dependency; a sketch of the lock conversion follows the lockdep report.
      
      Aug 10 07:01:29 dg1test kernel: ======================================================
      Aug 10 07:01:29 dg1test kernel: WARNING: possible circular locking dependency detected
      Aug 10 07:01:29 dg1test kernel: 6.4.0-rc7+ #10 Not tainted
      Aug 10 07:01:29 dg1test kernel: ------------------------------------------------------
      Aug 10 07:01:29 dg1test kernel: wireplumber/2236 is trying to acquire lock:
      Aug 10 07:01:29 dg1test kernel: ffff8fca5320da18 (&fctx->lock){-...}-{2:2}, at: nouveau_fence_wait_uevent_handler+0x2b/0x100 [nouveau]
      Aug 10 07:01:29 dg1test kernel:
                                      but task is already holding lock:
      Aug 10 07:01:29 dg1test kernel: ffff8fca41208610 (&event->list_lock#2){-...}-{2:2}, at: nvkm_event_ntfy+0x50/0xf0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:
                                      which lock already depends on the new lock.
      Aug 10 07:01:29 dg1test kernel:
                                      the existing dependency chain (in reverse order) is:
      Aug 10 07:01:29 dg1test kernel:
                                      -> #3 (&event->list_lock#2){-...}-{2:2}:
      Aug 10 07:01:29 dg1test kernel:        _raw_spin_lock_irqsave+0x4b/0x70
      Aug 10 07:01:29 dg1test kernel:        nvkm_event_ntfy+0x50/0xf0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        ga100_fifo_nonstall_intr+0x24/0x30 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_intr+0x12c/0x240 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        __handle_irq_event_percpu+0x88/0x240
      Aug 10 07:01:29 dg1test kernel:        handle_irq_event+0x38/0x80
      Aug 10 07:01:29 dg1test kernel:        handle_edge_irq+0xa3/0x240
      Aug 10 07:01:29 dg1test kernel:        __common_interrupt+0x72/0x160
      Aug 10 07:01:29 dg1test kernel:        common_interrupt+0x60/0xe0
      Aug 10 07:01:29 dg1test kernel:        asm_common_interrupt+0x26/0x40
      Aug 10 07:01:29 dg1test kernel:
                                      -> #2 (&device->intr.lock){-...}-{2:2}:
      Aug 10 07:01:29 dg1test kernel:        _raw_spin_lock_irqsave+0x4b/0x70
      Aug 10 07:01:29 dg1test kernel:        nvkm_inth_allow+0x2c/0x80 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_event_ntfy_state+0x181/0x250 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_event_ntfy_allow+0x63/0xd0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_uevent_mthd+0x4d/0x70 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_ioctl+0x10b/0x250 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvif_object_mthd+0xa8/0x1f0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvif_event_allow+0x2a/0xa0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nouveau_fence_enable_signaling+0x78/0x80 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        __dma_fence_enable_signaling+0x5e/0x100
      Aug 10 07:01:29 dg1test kernel:        dma_fence_add_callback+0x4b/0xd0
      Aug 10 07:01:29 dg1test kernel:        nouveau_cli_work_queue+0xae/0x110 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nouveau_gem_object_close+0x1d1/0x2a0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        drm_gem_handle_delete+0x70/0xe0 [drm]
      Aug 10 07:01:29 dg1test kernel:        drm_ioctl_kernel+0xa5/0x150 [drm]
      Aug 10 07:01:29 dg1test kernel:        drm_ioctl+0x256/0x490 [drm]
      Aug 10 07:01:29 dg1test kernel:        nouveau_drm_ioctl+0x5a/0xb0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        __x64_sys_ioctl+0x91/0xd0
      Aug 10 07:01:29 dg1test kernel:        do_syscall_64+0x3c/0x90
      Aug 10 07:01:29 dg1test kernel:        entry_SYSCALL_64_after_hwframe+0x72/0xdc
      Aug 10 07:01:29 dg1test kernel:
                                      -> #1 (&event->refs_lock#4){....}-{2:2}:
      Aug 10 07:01:29 dg1test kernel:        _raw_spin_lock_irqsave+0x4b/0x70
      Aug 10 07:01:29 dg1test kernel:        nvkm_event_ntfy_state+0x37/0x250 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_event_ntfy_allow+0x63/0xd0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_uevent_mthd+0x4d/0x70 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_ioctl+0x10b/0x250 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvif_object_mthd+0xa8/0x1f0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvif_event_allow+0x2a/0xa0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nouveau_fence_enable_signaling+0x78/0x80 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        __dma_fence_enable_signaling+0x5e/0x100
      Aug 10 07:01:29 dg1test kernel:        dma_fence_add_callback+0x4b/0xd0
      Aug 10 07:01:29 dg1test kernel:        nouveau_cli_work_queue+0xae/0x110 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nouveau_gem_object_close+0x1d1/0x2a0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        drm_gem_handle_delete+0x70/0xe0 [drm]
      Aug 10 07:01:29 dg1test kernel:        drm_ioctl_kernel+0xa5/0x150 [drm]
      Aug 10 07:01:29 dg1test kernel:        drm_ioctl+0x256/0x490 [drm]
      Aug 10 07:01:29 dg1test kernel:        nouveau_drm_ioctl+0x5a/0xb0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        __x64_sys_ioctl+0x91/0xd0
      Aug 10 07:01:29 dg1test kernel:        do_syscall_64+0x3c/0x90
      Aug 10 07:01:29 dg1test kernel:        entry_SYSCALL_64_after_hwframe+0x72/0xdc
      Aug 10 07:01:29 dg1test kernel:
                                      -> #0 (&fctx->lock){-...}-{2:2}:
      Aug 10 07:01:29 dg1test kernel:        __lock_acquire+0x14e3/0x2240
      Aug 10 07:01:29 dg1test kernel:        lock_acquire+0xc8/0x2a0
      Aug 10 07:01:29 dg1test kernel:        _raw_spin_lock_irqsave+0x4b/0x70
      Aug 10 07:01:29 dg1test kernel:        nouveau_fence_wait_uevent_handler+0x2b/0x100 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_client_event+0xf/0x20 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_event_ntfy+0x9b/0xf0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        ga100_fifo_nonstall_intr+0x24/0x30 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        nvkm_intr+0x12c/0x240 [nouveau]
      Aug 10 07:01:29 dg1test kernel:        __handle_irq_event_percpu+0x88/0x240
      Aug 10 07:01:29 dg1test kernel:        handle_irq_event+0x38/0x80
      Aug 10 07:01:29 dg1test kernel:        handle_edge_irq+0xa3/0x240
      Aug 10 07:01:29 dg1test kernel:        __common_interrupt+0x72/0x160
      Aug 10 07:01:29 dg1test kernel:        common_interrupt+0x60/0xe0
      Aug 10 07:01:29 dg1test kernel:        asm_common_interrupt+0x26/0x40
      Aug 10 07:01:29 dg1test kernel:
                                      other info that might help us debug this:
      Aug 10 07:01:29 dg1test kernel: Chain exists of:
                                        &fctx->lock --> &device->intr.lock --> &event->list_lock#2
      Aug 10 07:01:29 dg1test kernel:  Possible unsafe locking scenario:
      Aug 10 07:01:29 dg1test kernel:        CPU0                    CPU1
      Aug 10 07:01:29 dg1test kernel:        ----                    ----
      Aug 10 07:01:29 dg1test kernel:   lock(&event->list_lock#2);
      Aug 10 07:01:29 dg1test kernel:                                lock(&device->intr.lock);
      Aug 10 07:01:29 dg1test kernel:                                lock(&event->list_lock#2);
      Aug 10 07:01:29 dg1test kernel:   lock(&fctx->lock);
      Aug 10 07:01:29 dg1test kernel:
                                       *** DEADLOCK ***
      Aug 10 07:01:29 dg1test kernel: 2 locks held by wireplumber/2236:
      Aug 10 07:01:29 dg1test kernel:  #0: ffff8fca53177bf8 (&device->intr.lock){-...}-{2:2}, at: nvkm_intr+0x29/0x240 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  #1: ffff8fca41208610 (&event->list_lock#2){-...}-{2:2}, at: nvkm_event_ntfy+0x50/0xf0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:
                                      stack backtrace:
      Aug 10 07:01:29 dg1test kernel: CPU: 6 PID: 2236 Comm: wireplumber Not tainted 6.4.0-rc7+ #10
      Aug 10 07:01:29 dg1test kernel: Hardware name: Gigabyte Technology Co., Ltd. Z390 I AORUS PRO WIFI/Z390 I AORUS PRO WIFI-CF, BIOS F8 11/05/2021
      Aug 10 07:01:29 dg1test kernel: Call Trace:
      Aug 10 07:01:29 dg1test kernel:  <TASK>
      Aug 10 07:01:29 dg1test kernel:  dump_stack_lvl+0x5b/0x90
      Aug 10 07:01:29 dg1test kernel:  check_noncircular+0xe2/0x110
      Aug 10 07:01:29 dg1test kernel:  __lock_acquire+0x14e3/0x2240
      Aug 10 07:01:29 dg1test kernel:  lock_acquire+0xc8/0x2a0
      Aug 10 07:01:29 dg1test kernel:  ? nouveau_fence_wait_uevent_handler+0x2b/0x100 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  ? lock_acquire+0xc8/0x2a0
      Aug 10 07:01:29 dg1test kernel:  _raw_spin_lock_irqsave+0x4b/0x70
      Aug 10 07:01:29 dg1test kernel:  ? nouveau_fence_wait_uevent_handler+0x2b/0x100 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  nouveau_fence_wait_uevent_handler+0x2b/0x100 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  nvkm_client_event+0xf/0x20 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  nvkm_event_ntfy+0x9b/0xf0 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  ga100_fifo_nonstall_intr+0x24/0x30 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  nvkm_intr+0x12c/0x240 [nouveau]
      Aug 10 07:01:29 dg1test kernel:  __handle_irq_event_percpu+0x88/0x240
      Aug 10 07:01:29 dg1test kernel:  handle_irq_event+0x38/0x80
      Aug 10 07:01:29 dg1test kernel:  handle_edge_irq+0xa3/0x240
      Aug 10 07:01:29 dg1test kernel:  __common_interrupt+0x72/0x160
      Aug 10 07:01:29 dg1test kernel:  common_interrupt+0x60/0xe0
      Aug 10 07:01:29 dg1test kernel:  asm_common_interrupt+0x26/0x40
      Aug 10 07:01:29 dg1test kernel: RIP: 0033:0x7fb66174d700
      Aug 10 07:01:29 dg1test kernel: Code: c1 e2 05 29 ca 8d 0c 10 0f be 07 84 c0 75 eb 89 c8 c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa e9 d7 0f fc ff 0f 1f 80 00 00 00 00 <f3> 0f 1e fa e9 c7 0f fc>
      Aug 10 07:01:29 dg1test kernel: RSP: 002b:00007ffdd3c48438 EFLAGS: 00000206
      Aug 10 07:01:29 dg1test kernel: RAX: 000055bb758763c0 RBX: 000055bb758752c0 RCX: 00000000000028b0
      Aug 10 07:01:29 dg1test kernel: RDX: 000055bb758752c0 RSI: 000055bb75887490 RDI: 000055bb75862950
      Aug 10 07:01:29 dg1test kernel: RBP: 00007ffdd3c48490 R08: 000055bb75873b10 R09: 0000000000000001
      Aug 10 07:01:29 dg1test kernel: R10: 0000000000000004 R11: 000055bb7587f000 R12: 000055bb75887490
      Aug 10 07:01:29 dg1test kernel: R13: 000055bb757f6280 R14: 000055bb758875c0 R15: 000055bb757f6280
      Aug 10 07:01:29 dg1test kernel:  </TASK>
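
      A minimal sketch of the lock-type change, assuming the usual
      spinlock-to-rwlock conversion (the struct and function names below are
      illustrative, not the actual nouveau code): the interrupt-side notify
      path only walks the notifier list and can take the read lock, while
      paths that modify the list take the write lock.

        #include <linux/spinlock.h>
        #include <linux/list.h>

        struct example_event {
                rwlock_t list_lock;             /* was: spinlock_t */
                struct list_head ntfy;
        };

        /* Interrupt-side notify: read-only list walk, so the read lock suffices. */
        static void example_event_ntfy(struct example_event *event)
        {
                struct list_head *pos;
                unsigned long flags;

                read_lock_irqsave(&event->list_lock, flags);
                list_for_each(pos, &event->ntfy) {
                        /* deliver the notification */
                }
                read_unlock_irqrestore(&event->list_lock, flags);
        }

        /* Registration modifies the list and therefore takes the write lock. */
        static void example_event_ntfy_add(struct example_event *event,
                                           struct list_head *entry)
        {
                unsigned long flags;

                write_lock_irqsave(&event->list_lock, flags);
                list_add_tail(entry, &event->ntfy);
                write_unlock_irqrestore(&event->list_lock, flags);
        }
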
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Tested-by: Danilo Krummrich <dakr@redhat.com>
      Reviewed-by: Danilo Krummrich <dakr@redhat.com>
      Signed-off-by: Danilo Krummrich <dakr@redhat.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20231107053255.2257079-1-airlied@gmail.com
      a2e36cd5
    • nouveau/gsp/r535: Fix a NULL vs error pointer bug · 42bd415b
      Dan Carpenter authored
      The r535_gsp_cmdq_get() function returns error pointers but this code
      checks for NULL.  Also we need to propagate the error pointer back to
      the callers in r535_gsp_rpc_get().  Returning NULL will lead to a NULL
      pointer dereference.
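
      A hedged sketch of the error-pointer pattern the fix applies
      (hypothetical helper names, not the actual r535 code):

        #include <linux/err.h>
        #include <linux/slab.h>

        /* Returns an error pointer on failure, never NULL. */
        static void *example_cmdq_get(size_t size)
        {
                void *buf = kzalloc(size, GFP_KERNEL);

                if (!buf)
                        return ERR_PTR(-ENOMEM);
                return buf;
        }

        static void *example_rpc_get(size_t size)
        {
                void *msg = example_cmdq_get(size);

                /* An "if (!msg)" check never fires for an error pointer; test
                 * with IS_ERR() and propagate the error pointer to the caller. */
                if (IS_ERR(msg))
                        return msg;

                /* ... fill in the message header ... */
                return msg;
        }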
      
      Fixes: 176fdcbd ("drm/nouveau/gsp/r535: add support for booting GSP-RM")
      Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
      Reviewed-by: Danilo Krummrich <dakr@redhat.com>
      Signed-off-by: Danilo Krummrich <dakr@redhat.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/f71996d9-d1cb-45ea-a4b2-2dfc21312d8c@kili.mountain
      42bd415b
    • nouveau/gsp/r535: uninitialized variable in r535_gsp_acpi_mux_id() · 09f12bf9
      Dan Carpenter authored
      If we hit the "continue" statement on the first iteration through
      the loop, then "handle_mux" needs to be set to NULL so we continue
      looping.
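
      A hedged sketch of the pattern behind the fix (hypothetical names, not
      the actual r535_gsp_acpi_mux_id() code): in a do/while loop, "continue"
      jumps straight to the loop condition, so the variable tested there must
      be initialized before the first pass.

        #include <stdbool.h>
        #include <stddef.h>

        static void *find_mux(void *(*next_child)(void), bool (*matches)(void *))
        {
                void *handle_mux = NULL;        /* the fix: start out NULL */
                void *child;

                do {
                        child = next_child();
                        if (!child)
                                return NULL;    /* ran out of children */

                        if (!matches(child))
                                continue;       /* jumps to the while () test below */

                        handle_mux = child;
                } while (!handle_mux);          /* reads handle_mux even after "continue" */

                return handle_mux;
        }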
      
      Fixes: 176fdcbd ("drm/nouveau/gsp/r535: add support for booting GSP-RM")
      Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
      Reviewed-by: Danilo Krummrich <dakr@redhat.com>
      Signed-off-by: Danilo Krummrich <dakr@redhat.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/1d864f6e-43e9-43d8-9d90-30e76c9c843b@moroto.mountain
      09f12bf9