1. 11 Dec, 2023 3 commits
    • maple_tree: introduce {mtree,mas}_lock_nested() · b2472efe
      Peng Zhang authored
      In some cases, nested locks may be needed, so {mtree,mas}_lock_nested() is
      introduced.  For example, when duplicating a maple tree, we need to hold the
      locks of two trees, in which case nested locks are needed.
      
      At the same time, add the definition of spin_lock_nested() in tools for
      testing.
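
      As an illustration, a minimal sketch of what such helpers could look like,
      assuming they simply wrap spin_lock_nested() on the tree's internal ma_lock
      (a sketch based on the description above, not necessarily the exact patch):

      /* Hypothetical sketch of the nested-lock helpers. */
      #define mtree_lock_nested(mt, subclass) \
              spin_lock_nested(&(mt)->ma_lock, subclass)
      #define mas_lock_nested(mas, subclass) \
              spin_lock_nested(&(mas)->tree->ma_lock, subclass)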
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-3-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      b2472efe
    • maple_tree: add mt_free_one() and mt_attr() helpers · 4f2267b5
      Peng Zhang authored
      Patch series "Introduce __mt_dup() to improve the performance of fork()", v7.
      
      This series introduces __mt_dup() to improve the performance of fork(). 
      During the duplication process of mmap, all VMAs are traversed and
      inserted one by one into the new maple tree, causing the maple tree to be
      rebalanced multiple times.  Balancing the maple tree is a costly
      operation.  To duplicate VMAs more efficiently, mtree_dup() and __mt_dup()
      are introduced for the maple tree.  They can efficiently duplicate a maple
      tree.
      
      Here are some algorithmic details about {mtree,__mt}_dup().  We perform a
      DFS pre-order traversal of all nodes in the source maple tree.  During
      this process, we fully copy the nodes from the source tree to the new
      tree.  This involves memory allocation, and when encountering a new node,
      if it is a non-leaf node, all its child nodes are allocated at once.
      
      This idea was originally from Liam R.  Howlett's Maple Tree Work email,
      and I added some of my own ideas to implement it.  Some previous
      discussions can be found in [1].  For a more detailed analysis of the
      algorithm, please refer to the logs for patch [3/10] and patch [10/10].
      
      There is a "spawn" test in byte-unixbench[2], which can be used to test the
      performance of fork().  I modified it slightly to make it work with
      different numbers of VMAs.
      
      Below are the test results.  The first row shows the number of VMAs.  The
      second and third rows show the number of fork() calls per ten seconds,
      corresponding to next-20231006 and this patchset, respectively; the fourth
      row shows the resulting improvement in percent.  The test results were
      obtained with CPU binding to avoid scheduler load balancing that could
      cause unstable results.  There are still some fluctuations in the test
      results, but at least they are better than the original performance.
      
      21     121   221    421    821    1621   3221   6421   12821  25621  51221
      112100 76261 54227  34035  20195  11112  6017   3161   1606   802    393
      114558 83067 65008  45824  28751  16072  8922   4747   2436   1233   599
      2.19%  8.92% 19.88% 34.64% 42.37% 44.64% 48.28% 50.17% 51.68% 53.74% 52.42%
      
      Thanks to Liam and Matthew for the review.
      
      
      This patch (of 10):
      
      Add two helpers:
      1. mt_free_one(), used to free a maple node.
      2. mt_attr(), used to obtain the attributes of maple tree.
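
      A rough sketch of the two helpers, assuming a maple_node slab cache and
      flag-stored tree attributes (illustrative only, not the verbatim patch):

      /* Sketch: free a single maple node back to its slab cache. */
      static void mt_free_one(struct maple_node *node)
      {
              kmem_cache_free(maple_node_cache, node);
      }

      /* Sketch: return the user-visible attribute flags of a maple tree. */
      static unsigned int mt_attr(struct maple_tree *mt)
      {
              return mt->ma_flags & ~MT_FLAGS_HEIGHT_MASK;
      }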
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-1-zhangpeng.00@bytedance.com
      Link: https://lkml.kernel.org/r/20231027033845.90608-2-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4f2267b5
    • mm/vmstat: move pgdemote_* to per-node stats · 23e9f013
      Li Zhijian authored
      Demotion will migrate pages across nodes.  Previously, only the global
      demotion statistics were accounted for.  Changed them to per-node
      statistics, making it easier to observe where demotion occurs on each
      node.
      
      This will help to identify which nodes are under pressure.
      
      This patch also makes pgdemote_* conditional on CONFIG_NUMA_BALANCING,
      since demotion is not available without CONFIG_NUMA_BALANCING.
      
      With this patch, here is a sample where node0 and node1 are DRAM and
      node3 is PMEM:
      Global stats:
      $ grep demote /proc/vmstat
      pgdemote_kswapd 254288
      pgdemote_direct 113497
      pgdemote_khugepaged 0
      
      Per-node stats:
      $ grep demote /sys/devices/system/node/node0/vmstat # demotion source
      pgdemote_kswapd 68454
      pgdemote_direct 83431
      pgdemote_khugepaged 0
      $ grep demote /sys/devices/system/node/node1/vmstat # demotion source
      pgdemote_kswapd 185834
      pgdemote_direct 30066
      pgdemote_khugepaged 0
      $ grep demote /sys/devices/system/node/node3/vmstat # demotion target
      pgdemote_kswapd 0
      pgdemote_direct 0
      pgdemote_khugepaged 0
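
      For illustration, the accounting switch might look roughly like this at a
      demotion call site (a sketch based on the description above; the stat item
      name and the exact call site are assumptions):

      /* Before: demotion only bumped global vm event counters. */
      count_vm_events(PGDEMOTE_KSWAPD, nr_demoted);

      /* After (sketch): charge the demoted pages to the source node, so they
       * show up in /sys/devices/system/node/nodeN/vmstat as well. */
      mod_node_page_state(pgdat, PGDEMOTE_KSWAPD, nr_demoted);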
      
      Link: https://lkml.kernel.org/r/20231103031450.1456523-1-lizhijian@fujitsu.com
      Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
      Acked-by: "Huang, Ying" <ying.huang@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      23e9f013
  2. 07 Dec, 2023 32 commits
    • Andrew Morton
      0c92218f
    • mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range() · b2f557a2
      Jiexun Wang authored
      I conducted real-time testing and observed that
      madvise_cold_or_pageout_pte_range() causes significant latency under
      memory pressure, which can be effectively reduced by adding cond_resched()
      within the loop.
      
      I tested on the LicheePi 4A board using cyclictest for latency testing and
      ftrace for latency tracing.  The board uses a TH1520 processor and has a
      memory size of 8GB.  The kernel version is 6.5.0 with the PREEMPT_RT patch
      applied.
      
      The script I tested is as follows:
      
      echo wakeup_rt > /sys/kernel/tracing/current_tracer
      echo 1 > /sys/kernel/tracing/tracing_on
      echo 0 > /sys/kernel/tracing/tracing_max_latency
      stress-ng --vm 8 --vm-bytes 2G &
      cyclictest --mlockall --smp --priority=99 --distance=0 --duration=30m
      echo 0 > /sys/kernel/tracing/tracing_on
      cat /sys/kernel/tracing/trace 
      
      The tracing results before modification are as follows:
      
      # tracer: wakeup_rt
      #
      # wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00003-g999d221864bf
      # --------------------------------------------------------------------
      # latency: 2552 us, #6/6, CPU#3 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
      #    -----------------
      #    | task: cyclictest-196 (uid:0 nice:0 policy:1 rt_prio:99)
      #    -----------------
      #
      #                    _--------=> CPU#
      #                   / _-------=> irqs-off/BH-disabled
      #                  | / _------=> need-resched
      #                  || / _-----=> need-resched-lazy
      #                  ||| / _----=> hardirq/softirq
      #                  |||| / _---=> preempt-depth
      #                  ||||| / _--=> preempt-lazy-depth
      #                  |||||| / _-=> migrate-disable
      #                  ||||||| /     delay
      #  cmd     pid     |||||||| time  |   caller
      #     \   /        ||||||||  \    |    /
      stress-n-206       3dn.h512    2us :      206:120:R   + [003]     196:  0:R cyclictest
      stress-n-206       3dn.h512    7us : <stack trace>
       => __ftrace_trace_stack
       => __trace_stack
       => probe_wakeup
       => ttwu_do_activate
       => try_to_wake_up
       => wake_up_process
       => hrtimer_wakeup
       => __hrtimer_run_queues
       => hrtimer_interrupt
       => riscv_timer_interrupt
       => handle_percpu_devid_irq
       => generic_handle_domain_irq
       => riscv_intc_irq
       => handle_riscv_irq
       => do_irq
      stress-n-206       3dn.h512    9us#: 0
      stress-n-206       3d...3.. 2544us : __schedule
      stress-n-206       3d...3.. 2545us :      206:120:R ==> [003]     196:  0:R cyclictest
      stress-n-206       3d...3.. 2551us : <stack trace>
       => __ftrace_trace_stack
       => __trace_stack
       => probe_wakeup_sched_switch
       => __schedule
       => preempt_schedule
       => migrate_enable
       => rt_spin_unlock
       => madvise_cold_or_pageout_pte_range
       => walk_pgd_range
       => __walk_page_range
       => walk_page_range
       => madvise_pageout
       => madvise_vma_behavior
       => do_madvise
       => sys_madvise
       => do_trap_ecall_u
       => ret_from_exception
      
      The tracing results after modification are as follows:
      
      # tracer: wakeup_rt
      #
      # wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00004-gca3876fc69a6-dirty
      # --------------------------------------------------------------------
      # latency: 1689 us, #6/6, CPU#0 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
      #    -----------------
      #    | task: cyclictest-217 (uid:0 nice:0 policy:1 rt_prio:99)
      #    -----------------
      #
      #                    _--------=> CPU#
      #                   / _-------=> irqs-off/BH-disabled
      #                  | / _------=> need-resched
      #                  || / _-----=> need-resched-lazy
      #                  ||| / _----=> hardirq/softirq
      #                  |||| / _---=> preempt-depth
      #                  ||||| / _--=> preempt-lazy-depth
      #                  |||||| / _-=> migrate-disable
      #                  ||||||| /     delay
      #  cmd     pid     |||||||| time  |   caller
      #     \   /        ||||||||  \    |    /
      stress-n-232       0dn.h413    1us+:      232:120:R   + [000]     217:  0:R cyclictest
      stress-n-232       0dn.h413   12us : <stack trace>
       => __ftrace_trace_stack
       => __trace_stack
       => probe_wakeup
       => ttwu_do_activate
       => try_to_wake_up
       => wake_up_process
       => hrtimer_wakeup
       => __hrtimer_run_queues
       => hrtimer_interrupt
       => riscv_timer_interrupt
       => handle_percpu_devid_irq
       => generic_handle_domain_irq
       => riscv_intc_irq
       => handle_riscv_irq
       => do_irq
      stress-n-232       0dn.h413   19us#: 0
      stress-n-232       0d...3.. 1671us : __schedule
      stress-n-232       0d...3.. 1676us+:      232:120:R ==> [000]     217:  0:R cyclictest
      stress-n-232       0d...3.. 1687us : <stack trace>
       => __ftrace_trace_stack
       => __trace_stack
       => probe_wakeup_sched_switch
       => __schedule
       => preempt_schedule
       => migrate_enable
       => free_unref_page_list
       => release_pages
       => free_pages_and_swap_cache
       => tlb_batch_pages_flush
       => tlb_flush_mmu
       => unmap_page_range
       => unmap_vmas
       => unmap_region
       => do_vmi_align_munmap.constprop.0
       => do_vmi_munmap
       => __vm_munmap
       => sys_munmap
       => do_trap_ecall_u
       => ret_from_exception
      
      After the modification, madvise_cold_or_pageout_pte_range() is no longer
      the cause of the maximum latency, so this modification reduces the latency
      it causes.
      
      
      Currently the madvise_cold_or_pageout_pte_range() function exhibits
      significant latency under memory pressure, which can be effectively
      reduced by adding cond_resched() within the loop.
      
      When the batch_count reaches SWAP_CLUSTER_MAX, we reschedule
      the task to ensure fairness and avoid long lock holding times.
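
      A simplified sketch of the kind of change described: count processed
      entries and reschedule once SWAP_CLUSTER_MAX of them have been handled,
      dropping the pte lock first (variable names such as batch_count, start_pte
      and the restart label are illustrative, not the verbatim patch):

      /* Inside the pte walk of madvise_cold_or_pageout_pte_range() (sketch). */
      if (++batch_count == SWAP_CLUSTER_MAX) {
              batch_count = 0;
              if (need_resched()) {
                      pte_unmap_unlock(start_pte, ptl);
                      cond_resched();         /* bound the latency */
                      goto restart;           /* re-take the lock and continue */
              }
      }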
      
      Link: https://lkml.kernel.org/r/85363861af65fac66c7a98c251906afc0d9c8098.1695291046.git.wangjiexun@tinylab.org
      Signed-off-by: Jiexun Wang <wangjiexun@tinylab.org>
      Cc: Zhangjin Wu <falcon@tinylab.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      b2f557a2
    • nilfs2: prevent WARNING in nilfs_sufile_set_segment_usage() · 675abf8d
      Ryusuke Konishi authored
      If nilfs2 reads a disk image with corrupted segment usage metadata, and
      its segment usage information is marked as an error for the segment at the
      write location, nilfs_sufile_set_segment_usage() can trigger WARN_ONs
      during log writing.
      
      Segments newly allocated for writing with nilfs_sufile_alloc() will not
      have this error flag set, but this unexpected situation will occur if the
      segment indexed by either nilfs->ns_segnum or nilfs->ns_nextnum (active
      segment) was marked in error.
      
      Fix this issue by inserting a sanity check to treat it as a file system
      corruption.
      
      Since error returns are not allowed during the execution phase where
      nilfs_sufile_set_segment_usage() is used, this inserts the sanity check
      into nilfs_sufile_mark_dirty() which pre-reads the buffer containing the
      segment usage record to be updated and sets it up in a dirty state for
      writing.
      
      In addition, nilfs_sufile_set_segment_usage() is also called when canceling
      log writing and undoing the segment usage update, so to avoid issuing the
      same kernel warning in that case, skip the error-flag check in
      nilfs_sufile_set_segment_usage() on cancellation.
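
      A rough sketch of the kind of sanity check described, at the point where
      the segment usage record is pre-read and marked dirty (helper names such
      as nilfs_segment_usage_error() are assumptions based on nilfs2's on-disk
      flag helpers, not the verbatim patch):

      /* Sketch: treat an error-flagged segment at the write location as
       * filesystem corruption instead of letting log writing WARN later. */
      su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
      if (unlikely(nilfs_segment_usage_error(su))) {
              nilfs_err(sufile->i_sb, "segment %llu is erroneous",
                        (unsigned long long)segnum);
              ret = -EIO;     /* refuse to write into the bad segment */
      }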
      
      Link: https://lkml.kernel.org/r/20231205085947.4431-1-konishi.ryusuke@gmail.com
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
      Reported-by: syzbot+14e9f834f6ddecece094@syzkaller.appspotmail.com
      Closes: https://syzkaller.appspot.com/bug?extid=14e9f834f6ddecece094
      Tested-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      675abf8d
    • mm/hugetlb: have CONFIG_HUGETLB_PAGE select CONFIG_XARRAY_MULTI · 4a3ef6be
      Sidhartha Kumar authored
      After commit a08c7193 ("mm/filemap: remove hugetlb special casing in
      filemap.c"), hugetlb pages are stored in the page cache at base-page-sized
      indexes.  This leads to multi-index stores in the xarray, which are only
      supported with CONFIG_XARRAY_MULTI.  The other page cache user of
      multi-index stores, THP, selects XARRAY_MULTI.  Have CONFIG_HUGETLB_PAGE
      follow this behavior as well to avoid the BUG() with a CONFIG_HUGETLB_PAGE
      && !CONFIG_XARRAY_MULTI config.
      
      Link: https://lkml.kernel.org/r/20231204183234.348697-1-sidhartha.kumar@oracle.com
      Fixes: a08c7193 ("mm/filemap: remove hugetlb special casing in filemap.c")
      Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Muchun Song <muchun.song@linux.dev>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4a3ef6be
    • scripts/gdb: fix lx-device-list-bus and lx-device-list-class · 801a2b1b
      Florian Fainelli authored
      After the conversion to bus_to_subsys() and class_to_subsys(), the gdb
      scripts listing the system buses and classes respectively were broken; fix
      those by returning the subsys_priv pointer and having the various callers
      dereference either the 'bus' or 'class' structure members accordingly.
      
      Link: https://lkml.kernel.org/r/20231130043317.174188-1-florian.fainelli@broadcom.com
      Fixes: 7b884b7f ("driver core: class.c: convert to only use class_to_subsys")
      Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>
      Tested-by: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jan Kiszka <jan.kiszka@siemens.com>
      Cc: Kieran Bingham <kbingham@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      801a2b1b
    • MAINTAINERS: drop Antti Palosaari · bc220fe7
      Bagas Sanjaya authored
      He is currently inactive (last message from him is two years ago [1]). 
      His media tree [2] is also dormant (latest activity is 6 years ago), yet
      his site is still online [3].
      
      Drop him from MAINTAINERS and add CREDITS entry for him. We thank him
      for maintaining various DVB drivers.
      
      [1]: https://lore.kernel.org/all/660772b3-0597-02db-ed94-c6a9be04e8e8@iki.fi/
      [2]: https://git.linuxtv.org/anttip/media_tree.git/
      [3]: https://palosaari.fi/linux/
      
      Link: https://lkml.kernel.org/r/20231130083848.5396-1-bagasdotme@gmail.com
      Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
      Acked-by: Antti Palosaari <crope@iki.fi>
      Cc: Hans Verkuil <hverkuil-cisco@xs4all.nl>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
      Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      bc220fe7
    • highmem: fix a memory copy problem in memcpy_from_folio · 73424d00
      Su Hui authored
      Clang's static checker complains that the value stored to 'from' is never
      read.  Because of this typo, memcpy_from_folio() only copies the last chunk
      of memory from the folio to the destination.  Use 'to += chunk' instead of
      'from += chunk' to fix the problem.
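
      A simplified sketch of the copy loop with the corrected pointer advance
      (illustrative; the real helper in include/linux/highmem.h maps each page
      with kmap_local_folio() and caps 'chunk' at the page boundary):

      /* Sketch: copy 'len' bytes out of a folio in per-page chunks. */
      do {
              char *from = kmap_local_folio(folio, offset);
              size_t chunk = min_t(size_t, len,
                                   PAGE_SIZE - offset_in_page(offset));

              memcpy(to, from, chunk);
              kunmap_local(from);

              to += chunk;            /* advance the destination (the fix) */
              offset += chunk;        /* 'from' is re-derived from offset */
              len -= chunk;
      } while (len > 0);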
      
      Link: https://lkml.kernel.org/r/20231130034017.1210429-1-suhui@nfschina.com
      Fixes: b23d03ef ("highmem: add memcpy_to_folio() and memcpy_from_folio()")
      Signed-off-by: Su Hui <suhui@nfschina.com>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jiaqi Yan <jiaqiyan@google.com>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Peter Collingbourne <pcc@google.com>
      Cc: Tom Rix <trix@redhat.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      73424d00
    • nilfs2: fix missing error check for sb_set_blocksize call · d61d0ab5
      Ryusuke Konishi authored
      When mounting a filesystem image with a block size larger than the page
      size, nilfs2 repeatedly outputs long error messages with stack traces to
      the kernel log, such as the following:
      
       getblk(): invalid block size 8192 requested
       logical block size: 512
       ...
       Call Trace:
        dump_stack_lvl+0x92/0xd4
        dump_stack+0xd/0x10
        bdev_getblk+0x33a/0x354
        __breadahead+0x11/0x80
        nilfs_search_super_root+0xe2/0x704 [nilfs2]
        load_nilfs+0x72/0x504 [nilfs2]
        nilfs_mount+0x30f/0x518 [nilfs2]
        legacy_get_tree+0x1b/0x40
        vfs_get_tree+0x18/0xc4
        path_mount+0x786/0xa88
        __ia32_sys_mount+0x147/0x1a8
        __do_fast_syscall_32+0x56/0xc8
        do_fast_syscall_32+0x29/0x58
        do_SYSENTER_32+0x15/0x18
        entry_SYSENTER_32+0x98/0xf1
       ...
      
      This overloads the system logger.  And to make matters worse, it sometimes
      crashes the kernel with a memory access violation.
      
      This is because the return value of the sb_set_blocksize() call, which
      should be checked for errors, is not checked.
      
      The latter issue is due to out-of-buffer memory accesses: the large block
      size caused sb_set_blocksize() to fail, so the initial minimum block size
      remained un-updated in the super_block structure and the buffers were read
      with that smaller size, yet they were then accessed based on the large
      block size.
      
      Since the nilfs2 mkfs tool does not accept block sizes larger than the system
      page size, this has been overlooked.  However, it is possible to create
      this situation by intentionally modifying the tool or by passing a
      filesystem image created on a system with a large page size to a system
      with a smaller page size and mounting it.
      
      Fix this issue by inserting the expected error handling for the call to
      sb_set_blocksize().
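
      The kind of check described might look like this (a sketch, not the
      verbatim patch; sb_set_blocksize() returns 0 when the requested size
      cannot be used, and the s_log_block_size field name is assumed from
      nilfs2's on-disk super block):

      blocksize = BLOCK_SIZE << le32_to_cpu(sbp->s_log_block_size);
      if (!sb_set_blocksize(sb, blocksize)) {
              nilfs_err(sb, "bad blocksize %d", blocksize);
              err = -EINVAL;  /* abort the mount instead of reading buffers
                                 with a stale block size */
              goto failed;
      }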
      
      Link: https://lkml.kernel.org/r/20231129141547.4726-1-konishi.ryusuke@gmail.com
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
      Tested-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      d61d0ab5
    • kernel/Kconfig.kexec: drop select of KEXEC for CRASH_DUMP · dccf78d3
      Baoquan He authored
      Ignat Korchagin complained that a potential config regression was
      introduced by commit 89cde455 ("kexec: consolidate kexec and crash
      options into kernel/Kconfig.kexec").  Before the commit, CONFIG_CRASH_DUMP
      had no dependency on CONFIG_KEXEC.  After the commit, CRASH_DUMP selects
      KEXEC.  That forces the system to have CONFIG_KEXEC=y as long as
      CONFIG_CRASH_DUMP=y, which people may not want.
      
      In Ignat's case, he sets CONFIG_CRASH_DUMP=y, CONFIG_KEXEC_FILE=y and
      CONFIG_KEXEC=n because the kexec_load interface could have a security
      issue if the kernel/initrd has no chance of being signed and verified.
      
      CRASH_DUMP selects KEXEC because Eric, the author of the above commit, got
      an LKP report of a build failure when posting an earlier version of the
      patch.  Please see the link below for details of the LKP report:
      
          https://lore.kernel.org/all/3e8eecd1-a277-2cfb-690e-5de2eb7b988e@oracle.com/T/#u
      
      In fact, that LKP report was triggered because arm's <asm/kexec.h> is
      wrapped in a CONFIG_KEXEC ifdeffery scope.  That is wrong.  CONFIG_KEXEC
      controls the enabling/disabling of the kexec_load interface, not the kexec
      feature itself.  Removing the wrongly added CONFIG_KEXEC ifdeffery scope in
      <asm/kexec.h> of arm allows us to drop the select of KEXEC for CRASH_DUMP.
      Meanwhile, change arch/arm/kernel/Makefile to make machine_kexec.o and
      relocate_kernel.o depend on KEXEC_CORE.
      
      Link: https://lkml.kernel.org/r/20231128054457.659452-1-bhe@redhat.com
      Fixes: 89cde455 ("kexec: consolidate kexec and crash options into kernel/Kconfig.kexec")
      Signed-off-by: Baoquan He <bhe@redhat.com>
      Reported-by: Ignat Korchagin <ignat@cloudflare.com>
      Tested-by: Ignat Korchagin <ignat@cloudflare.com>  [compile-time only]
      Tested-by: Alexander Gordeev <agordeev@linux.ibm.com>
      Reviewed-by: Eric DeVolder <eric_devolder@yahoo.com>
      Tested-by: Eric DeVolder <eric_devolder@yahoo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      dccf78d3
    • units: add missing header · 8e92157d
      Andy Shevchenko authored
      BITS_PER_BYTE is defined in bits.h.
      
      Link: https://lkml.kernel.org/r/20231128174404.393393-1-andriy.shevchenko@linux.intel.com
      Fixes: e8eed5f7 ("units: Add BYTES_PER_*BIT")
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Damian Muszynski <damian.muszynski@intel.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      8e92157d
    • drivers/base/cpu: crash data showing should depends on KEXEC_CORE · 4e9e2e4c
      Baoquan He authored
      After commit 88a6f899 ("crash: memory and CPU hotplug sysfs
      attributes"), on x86_64, a compile error is triggered if only the
      kdump-related kernel configs below are set.
      
      ----
      CONFIG_CRASH_CORE=y
      CONFIG_KEXEC_CORE=y
      CONFIG_CRASH_DUMP=y
      CONFIG_CRASH_HOTPLUG=y
      ------
      
      ------------------------------------------------------
      drivers/base/cpu.c: In function `crash_hotplug_show':
      drivers/base/cpu.c:309:40: error: implicit declaration of function `crash_hotplug_cpu_support'; did you mean `crash_hotplug_show'? [-Werror=implicit-function-declaration]
        309 |         return sysfs_emit(buf, "%d\n", crash_hotplug_cpu_support());
            |                                        ^~~~~~~~~~~~~~~~~~~~~~~~~
            |                                        crash_hotplug_show
      cc1: some warnings being treated as errors
      ------------------------------------------------------
      
      CONFIG_KEXEC is used to enable the kexec_load interface, so making the
      showing of crash_notes/crash_notes_size/crash_hotplug depend on
      CONFIG_KEXEC is incorrect.  They should depend on KEXEC_CORE instead.
      
      Fix it now.
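
      The shape of the fix might look like the following in drivers/base/cpu.c
      (a sketch only; the attribute names are taken from the sysfs files named
      above and the surrounding attribute array is abbreviated):

      /* Sketch: build the crash sysfs attributes whenever the kexec/kdump
       * core is built in, not only when the kexec_load syscall is enabled. */
      #ifdef CONFIG_KEXEC_CORE               /* was: #ifdef CONFIG_KEXEC */
              &dev_attr_crash_notes.attr,
              &dev_attr_crash_notes_size.attr,
              &dev_attr_crash_hotplug.attr,
      #endif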
      
      Link: https://lkml.kernel.org/r/20231128055248.659808-1-bhe@redhat.com
      Fixes: 88a6f899 ("crash: memory and CPU hotplug sysfs attributes")
      Signed-off-by: Baoquan He <bhe@redhat.com>
      Tested-by: Ignat Korchagin <ignat@cloudflare.com>  [compile-time only]
      Tested-by: Alexander Gordeev <agordeev@linux.ibm.com>
      Reviewed-by: Eric DeVolder <eric_devolder@yahoo.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4e9e2e4c
    • mm/damon/sysfs-schemes: add timeout for update_schemes_tried_regions · 7d6fa31a
      SeongJae Park authored
      If a scheme is set up so that it is not applied to any monitoring target
      region for any reason, including the target access pattern, quota, filters,
      or watermarks, writing 'update_schemes_tried_regions' to the 'state' DAMON
      sysfs file can hang indefinitely.  Fix the case by implementing a timeout
      for the operation.  The time limit is two apply intervals of each scheme.
      
      Link: https://lkml.kernel.org/r/20231124213840.39157-1-sj@kernel.org
      Fixes: 4d4e41b6 ("mm/damon/sysfs-schemes: do not update tried regions more than one DAMON snapshot")
      Signed-off-by: SeongJae Park <sj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      7d6fa31a
    • scripts/gdb/tasks: fix lx-ps command error · 854f2764
      Kuan-Ying Lee authored
      Since commit 8e1f3851 ("kill task_struct->thread_group") removed
      thread_group, we encounter the issue below.
      
      (gdb) lx-ps
            TASK          PID    COMM
      0xffff800086503340   0   swapper/0
      Python Exception <class 'gdb.error'>: There is no member named thread_group.
      Error occurred in Python: There is no member named thread_group.
      
      We use signal->thread_head to iterate all threads instead.
      
      [Kuan-Ying.Lee@mediatek.com: v2]
        Link: https://lkml.kernel.org/r/20231129065142.13375-2-Kuan-Ying.Lee@mediatek.com
      Link: https://lkml.kernel.org/r/20231127070404.4192-2-Kuan-Ying.Lee@mediatek.com
      Fixes: 8e1f3851 ("kill task_struct->thread_group")
      Signed-off-by: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Tested-by: Florian Fainelli <florian.fainelli@broadcom.com>
      Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
      Cc: Chinwen Chang <chinwen.chang@mediatek.com>
      Cc: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
      Cc: Matthias Brugger <matthias.bgg@gmail.com>
      Cc: Qun-Wei Lin <qun-wei.lin@mediatek.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      854f2764
    • mm/Kconfig: make userfaultfd a menuconfig · 97219cc3
      Peter Xu authored
      PTE_MARKER_UFFD_WP is a subconfig of userfaultfd.  To make that clear,
      switch to using a menuconfig for userfaultfd.
      
      Link: https://lkml.kernel.org/r/20231123224204.1060152-1-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Axel Rasmussen <axelrasmussen@google.com>
      Cc: Mike Rapoport (IBM) <rppt@kernel.org>
      Cc: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      97219cc3
    • selftests/mm: prevent duplicate runs caused by TEST_GEN_PROGS · f39fb633
      Nico Pache authored
      Commit 05f1edac ("selftests/mm: run all tests from run_vmtests.sh")
      fixed the inconsistency caused by tests being defined as TEST_GEN_PROGS. 
      This issue was leading to tests not being executed via run_vmtests.sh and
      furthermore some tests running twice due to the kselftests wrapper also
      executing them.
      
      Fix the definition of two tests (soft-dirty and pagemap_ioctl) that are
      still incorrectly defined.
      
      Link: https://lkml.kernel.org/r/20231120222908.28559-1-npache@redhat.com
      Signed-off-by: Nico Pache <npache@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Joel Savitz <jsavitz@redhat.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      f39fb633
    • mm/damon/core: copy nr_accesses when splitting region · 1f3730fd
      SeongJae Park authored
      The region split function ('damon_split_region_at()') is called at the
      beginning of an aggregation interval, and when DAMOS is applying the
      actions and charging the quota.  Because the 'nr_accesses' fields of all
      regions are reset at the beginning of each aggregation interval, and DAMOS
      was applying the action at the end of each aggregation interval, there was
      no need to copy the 'nr_accesses' field to the split-out region.
      
      However, commit 42f994b7 ("mm/damon/core: implement scheme-specific
      apply interval") made DAMOS apply the action on its own timing interval.
      Hence, 'nr_accesses' should also be copied to split-out regions, but the
      commit didn't do that.  Fix it by copying it.
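
      A sketch of the fix in the split helper (field names follow struct
      damon_region; not necessarily the verbatim patch):

      /* Sketch: the split-out region inherits the parent's access count so a
       * scheme applying on its own interval still sees the right hotness. */
      new->age = r->age;
      new->nr_accesses = r->nr_accesses;      /* the previously missing copy */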
      
      Link: https://lkml.kernel.org/r/20231119171529.66863-1-sj@kernel.org
      Fixes: 42f994b7 ("mm/damon/core: implement scheme-specific apply interval")
      Signed-off-by: SeongJae Park <sj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      1f3730fd
    • lib/group_cpus.c: avoid acquiring cpu hotplug lock in group_cpus_evenly · 0263f92f
      Ming Lei authored
      group_cpus_evenly() can be called from a storage driver's error handler,
      such as the nvme driver's, which may run during CPU hotplug, when a storage
      queue has to drain its pending IOs because all CPUs associated with the
      queue are offline and the queue is becoming inactive.  Handling that IO
      needs the error handler to make forward progress.
      
      Then deadlock is caused:
      
      1) inside CPU hotplug handler, CPU hotplug lock is held, and blk-mq's
         handler is waiting for inflight IO
      
      2) error handler is waiting for CPU hotplug lock
      
      3) inflight IO can't be completed in blk-mq's CPU hotplug handler
         because error handling can't provide forward progress.
      
      Solve the deadlock by not holding the CPU hotplug lock in
      group_cpus_evenly(), which performs a two-stage spread: 1) the first stage
      covers all present CPUs; 2) the second stage covers all other CPUs.
      
      It turns out the two-stage spread only needs a consistent
      'cpu_present_mask', so the CPU hotplug lock can be dropped by storing the
      mask in a local cache.  This does not change correctness, because all CPUs
      are still covered.
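
      A sketch of the approach: snapshot cpu_present_mask into a local cpumask
      and use it for both spread stages, instead of holding the CPU hotplug lock
      across group_cpus_evenly() (the npresmsk variable name is illustrative):

      cpumask_var_t npresmsk;

      if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
              return NULL;

      /* One consistent snapshot is enough for both stages; no cpus_read_lock(). */
      cpumask_copy(npresmsk, cpu_present_mask);

      /* stage 1: spread groups across the CPUs in the snapshot ... */
      /* stage 2: spread the rest across the CPUs not in the snapshot ... */

      free_cpumask_var(npresmsk);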
      
      Link: https://lkml.kernel.org/r/20231120083559.285174-1-ming.lei@redhat.com
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reported-by: Yi Zhang <yi.zhang@redhat.com>
      Reported-by: Guangwu Zhang <guazhang@redhat.com>
      Tested-by: Guangwu Zhang <guazhang@redhat.com>
      Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
      Cc: Keith Busch <kbusch@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      0263f92f
    • checkstack: fix printed address · ee34db3f
      Heiko Carstens authored
      All addresses printed by checkstack have an extra incorrect 0 appended at
      the end.
      
      This was introduced with commit 677f1410 ("scripts/checkstack.pl: don't
      display $dre as different entity"): since then the address is taken from
      the line which contains the function name, instead of the line which
      contains stack consumption. E.g. on s390:
      
      0000000000100a30 <do_one_initcall>:
      ...
        100a44:       e3 f0 ff 70 ff 71       lay     %r15,-144(%r15)
      
      So the regex used to match spaces and hexadecimal numbers in order to
      extract an address now matches a different substring.  Subsequently
      replacing spaces with 0 appends a zero at the end, instead of replacing
      leading spaces.
      
      Fix this by using the proper regex, and simplify the code a bit.
      
      Link: https://lkml.kernel.org/r/20231120183719.2188479-2-hca@linux.ibm.com
      Fixes: 677f1410 ("scripts/checkstack.pl: don't display $dre as different entity")
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
      Cc: Maninder Singh <maninder1.s@samsung.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Vaneet Narang <v.narang@samsung.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      ee34db3f
    • mm/memory_hotplug: fix error handling in add_memory_resource() · f42ce5f0
      Sumanth Korikkar authored
      In add_memory_resource(), creation of memory block devices occurs after
      successful call to arch_add_memory().  However, creation of memory block
      devices could fail.  In that case, arch_remove_memory() is called to
      perform necessary cleanup.
      
      Currently, with or without altmap support, arch_remove_memory() is always
      called with altmap set to NULL during error handling.  This leads to the
      struct pages being freed using free_pages(), even though the allocation
      might have been performed with altmap support via
      altmap_alloc_block_buf().
      
      Fix the error handling by passing altmap to arch_remove_memory().  This
      ensures the following:
      * When altmap is disabled, deallocation of the struct pages array occurs
        via free_pages().
      * When altmap is enabled, deallocation occurs via vmem_altmap_free().
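
      A sketch of the corrected error path in add_memory_resource() (the
      arch_remove_memory() call without a nid argument matches current mainline;
      the failing call and surrounding details are abbreviated):

      ret = create_memory_block_devices(...);         /* may fail */
      if (ret) {
              /* Undo arch_add_memory() with the same altmap it was given, so
               * altmap-backed struct pages are freed via vmem_altmap_free()
               * rather than free_pages(). */
              arch_remove_memory(start, size, params.altmap);
              goto error;
      }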
      
      Link: https://lkml.kernel.org/r/20231120145354.308999-3-sumanthk@linux.ibm.com
      Fixes: a08a2ae3 ("mm,memory_hotplug: allocate memmap from the added memory range")
      Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
      Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Cc: Alexander Gordeev <agordeev@linux.ibm.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: <stable@vger.kernel.org>	[5.15+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      f42ce5f0
    • mm/memory_hotplug: add missing mem_hotplug_lock · 001002e7
      Sumanth Korikkar authored
      From Documentation/core-api/memory-hotplug.rst:
      When adding/removing/onlining/offlining memory or adding/removing
      heterogeneous/device memory, we should always hold the mem_hotplug_lock
      in write mode to serialise memory hotplug (e.g. access to global/zone
      variables).
      
      mhp_(de)init_memmap_on_memory() functions can change zone stats and
      struct page content, but they are currently called w/o the
      mem_hotplug_lock.
      
      When a memory block is being offlined and kmemleak goes through each
      populated zone, the following theoretical race conditions could occur:
      CPU 0:					     | CPU 1:
      memory_offline()			     |
      -> offline_pages()			     |
      	-> mem_hotplug_begin()		     |
      	   ...				     |
      	-> mem_hotplug_done()		     |
      					     | kmemleak_scan()
      					     | -> get_online_mems()
      					     |    ...
      -> mhp_deinit_memmap_on_memory()	     |
        [not protected by mem_hotplug_begin/done()]|
        Marks memory section as offline,	     |   Retrieves zone_start_pfn
        poisons vmemmap struct pages and updates   |   and struct page members.
        the zone related data			     |
         					     |    ...
         					     | -> put_online_mems()
      
      Fix this by ensuring mem_hotplug_lock is taken before performing
      mhp_init_memmap_on_memory().  Also ensure that
      mhp_deinit_memmap_on_memory() holds the lock.
      
      online/offline_pages() are currently only called from
      memory_block_online/offline(), so it is safe to move the locking there.
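
      A sketch of where the lock ends up in the online path (simplified from the
      description above; function signatures are abbreviated and the offline
      path is symmetric):

      /* memory_block_online() (sketch) */
      mem_hotplug_begin();            /* serialise against other hotplug ops */

      if (nr_vmemmap_pages) {
              ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
              if (ret)
                      goto out;
      }

      ret = online_pages(start_pfn + nr_vmemmap_pages,
                         nr_pages - nr_vmemmap_pages, zone, group);
      out:
      mem_hotplug_done();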
      
      Link: https://lkml.kernel.org/r/20231120145354.308999-2-sumanthk@linux.ibm.com
      Fixes: a08a2ae3 ("mm,memory_hotplug: allocate memmap from the added memory range")
      Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
      Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Cc: Alexander Gordeev <agordeev@linux.ibm.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: <stable@vger.kernel.org>	[5.15+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      001002e7
    • .mailmap: add a new address mapping for Chester Lin · c540b038
      Chester Lin authored
      My company email address is going to be disabled so let's create a mapping
      that links to my private/community email just in case people might still
      try to reach me via the old one.
      
      Link: https://lkml.kernel.org/r/20231117022807.29461-1-clin@suse.com
      Signed-off-by: Chester Lin <clin@suse.com>
      Cc: Chester Lin <chester62515@gmail.com>
      Cc: Bjorn Andersson <quic_bjorande@quicinc.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Heiko Stuebner <heiko@sntech.de>
      Cc: Jakub Kicinski <kuba@kernel.org>
      Cc: Konrad Dybcio <konrad.dybcio@linaro.org>
      Cc: Oleksij Rempel <o.rempel@pengutronix.de>
      Cc: Stephen Hemminger <stephen@networkplumber.org>
      Cc: Conor Dooley <conor.dooley@microchip.com>
      Cc: Matthias Brugger <mbrugger@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      c540b038
    • mm: fix oops when filemap_map_pmd() without prealloc_pte · 9aa1345d
      Hugh Dickins authored
      syzbot reports oops in lockdep's __lock_acquire(), called from
      __pte_offset_map_lock() called from filemap_map_pages(); or when I run the
      repro, the oops comes in pmd_install(), called from filemap_map_pmd()
      called from filemap_map_pages(), just before the __pte_offset_map_lock().
      
      The problem is that filemap_map_pmd() has been assuming that when it finds
      pmd_none(), a page table has already been prepared in prealloc_pte; and
      indeed do_fault_around() has been careful to preallocate one there, when
      it finds pmd_none(): but what if *pmd became none in between?
      
      My 6.6 mods in mm/khugepaged.c, avoiding mmap_lock for write, have made it
      easy for *pmd to be cleared while servicing a page fault; but even before
      those, a huge *pmd might be zapped while a fault is serviced.
      
      The difference in symptomatic stack traces comes from the "memory model"
      in use: pmd_install() uses pmd_populate() uses page_to_pfn(): in some
      models that is strict, and will oops on the NULL prealloc_pte; in other
      models, it will construct a bogus value to be populated into *pmd, then
      __pte_offset_map_lock() oops when trying to access split ptlock pointer
      (or some other symptom in normal case of ptlock embedded not pointer).
      
      Link: https://lore.kernel.org/linux-mm/20231115065506.19780-1-jose.pekkarinen@foxhound.fi/
      Link: https://lkml.kernel.org/r/6ed0c50c-78ef-0719-b3c5-60c0c010431c@google.com
      Fixes: f9ce0be7 ("mm: Cleanup faultaround and finish_fault() codepaths")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reported-and-tested-by: syzbot+89edd67979b52675ddec@syzkaller.appspotmail.com
      Closes: https://lore.kernel.org/linux-mm/0000000000005e44550608a0806c@google.com/
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: José Pekkarinen <jose.pekkarinen@foxhound.fi>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: <stable@vger.kernel.org>    [5.12+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      9aa1345d
    • squashfs: squashfs_read_data need to check if the length is 0 · eb66b8ab
      Lizhi Xu authored
      When the length passed in is 0, the pagemap_scan_test_walk() caller should
      bail.  This error causes at least a WARN_ON().
      
      Link: https://lkml.kernel.org/r/20231116031352.40853-1-lizhi.xu@windriver.com
      Reported-by: syzbot+32d3767580a1ea339a81@syzkaller.appspotmail.com
      Closes: https://lkml.kernel.org/r/0000000000000526f2060a30a085@google.com
      Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
      Reviewed-by: Phillip Lougher <phillip@squashfs.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      eb66b8ab
    • mm/selftests: fix pagemap_ioctl memory map test · 3f3cac5c
      Peter Xu authored
      __FILE__ is not guaranteed to exist in the current dir.  Replace it with
      argv[0] for the memory map test.
      
      Link: https://lkml.kernel.org/r/20231116201547.536857-4-peterx@redhat.com
      Fixes: 46fd75d4 ("selftests: mm: add pagemap ioctl tests")
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Andrei Vagin <avagin@gmail.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Muhammad Usama Anjum <usama.anjum@collabora.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      3f3cac5c
    • mm/pagemap: fix wr-protect even if PM_SCAN_WP_MATCHING not set · 4980e837
      Peter Xu authored
      The new pagemap ioctl contains a fast path for wr-protections that does not
      look into the category masks.  It forgets to check PM_SCAN_WP_MATCHING
      before applying the wr-protections.  This can cause, e.g., pte markers to
      be installed on archs that do not even support uffd wr-protect.
      
      WARNING: CPU: 0 PID: 5059 at mm/memory.c:1520 zap_pte_range mm/memory.c:1520 [inline]
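
      The missing check might look like this at the top of the wr-protect fast
      path (a sketch; 'p' stands for the pagemap_scan private state used
      elsewhere in fs/proc/task_mmu.c):

      /* Sketch: only install wr-protection when the caller asked for it. */
      if (!(p->arg.flags & PM_SCAN_WP_MATCHING))
              return 0;       /* categories matched, nothing to write-protect */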
      
      Link: https://lkml.kernel.org/r/20231116201547.536857-3-peterx@redhat.com
      Fixes: 12f6b01a ("fs/proc/task_mmu: add fast paths to get/clear PAGE_IS_WRITTEN flag")
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reported-by: syzbot+7ca4b2719dc742b8d0a4@syzkaller.appspotmail.com
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Andrei Vagin <avagin@gmail.com>
      Cc: Muhammad Usama Anjum <usama.anjum@collabora.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4980e837
    • mm/pagemap: fix ioctl(PAGEMAP_SCAN) on vma check · 0dff1b40
      Peter Xu authored
      Patch series "mm/pagemap: A few fixes to the recent PAGEMAP_SCAN".
      
      This series should fix two known reports from syzbot on the new
      PAGEMAP_SCAN ioctl():
      
      https://lore.kernel.org/all/000000000000b0e576060a30ee3b@google.com/
      https://lore.kernel.org/all/000000000000773fa7060a31e2cc@google.com/
      
      The 3rd patch is something I found when testing these patches.
      
      
      This patch (of 3):
      
      The new ioctl(PAGEMAP_SCAN) relies on the vma wr-protect capability
      provided by userfaultfd; however, the vma test did not explicitly require
      the vma to have the wr-protect function enabled, even if the
      PM_SCAN_WP_MATCHING flag is set.
      
      It means the pagemap code can now apply the uffd-wp bit to a page in a vma
      that is not registered to userfaultfd at all.
      
      Then, however it happens, as long as the pte gets written and the page
      fault is resolved, the write bit will be applied even if the uffd-wp bit is
      set.  We will see a pte that has both the UFFD_WP and WRITE bits set.
      Anything that later looks up the uffd-wp bit of that pte will trigger the
      warning:
      
      WARNING: CPU: 1 PID: 5071 at arch/x86/include/asm/pgtable.h:403 pte_uffd_wp arch/x86/include/asm/pgtable.h:403 [inline]
      
      Fix it by doing proper check over the vma attributes when
      PM_SCAN_WP_MATCHING is specified.
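
      A sketch of the kind of vma-attribute check described, for
      pagemap_scan_test_walk(); userfaultfd_wp() is a real helper that tests
      VM_UFFD_WP, but the exact condition used by the fix is an assumption here:

      /* Sketch: refuse to wr-protect vmas that are not registered for
       * userfaultfd write-protect, so uffd-wp bits never land on them. */
      if ((p->arg.flags & PM_SCAN_WP_MATCHING) && !userfaultfd_wp(vma))
              return -EPERM;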
      
      Link: https://lkml.kernel.org/r/20231116201547.536857-1-peterx@redhat.com
      Link: https://lkml.kernel.org/r/20231116201547.536857-2-peterx@redhat.com
      Fixes: 52526ca7 ("fs/proc/task_mmu: implement IOCTL to get and optionally clear info about PTEs")
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reported-by: syzbot+e94c5aaf7890901ebf9b@syzkaller.appspotmail.com
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Andrei Vagin <avagin@gmail.com>
      Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      0dff1b40
    • mm: kmem: properly initialize local objcg variable in current_obj_cgroup() · 5f79489a
      Roman Gushchin authored
      Erhard reported that the 6.7-rc1 kernel panics on boot when built with
      clang-16.  The problem was not reproducible with gcc.
      
      [    5.975049] general protection fault, probably for non-canonical address 0xf555515555555557: 0000 [#1] SMP KASAN PTI
      [    5.976422] KASAN: maybe wild-memory-access in range [0xaaaaaaaaaaaaaab8-0xaaaaaaaaaaaaaabf]
      [    5.977475] CPU: 3 PID: 1 Comm: systemd Not tainted 6.7.0-rc1-Zen3 #77
      [    5.977860] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
      [    5.977860] RIP: 0010:obj_cgroup_charge_pages+0x27/0x2d5
      [    5.977860] Code: 90 90 90 55 41 57 41 56 41 55 41 54 53 89 d5 41 89 f6 49 89 ff 48 b8 00 00 00 00 00 fc ff df 49 83 c7 10 4d3
      [    5.977860] RSP: 0018:ffffc9000001fb18 EFLAGS: 00010a02
      [    5.977860] RAX: dffffc0000000000 RBX: aaaaaaaaaaaaaaaa RCX: ffff8883eb9a8b08
      [    5.977860] RDX: 0000000000000005 RSI: 0000000000400cc0 RDI: aaaaaaaaaaaaaaaa
      [    5.977860] RBP: 0000000000000005 R08: 3333333333333333 R09: 0000000000000000
      [    5.977860] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883eb9a8b18
      [    5.977860] R13: 1555555555555557 R14: 0000000000400cc0 R15: aaaaaaaaaaaaaaba
      [    5.977860] FS:  00007f2976438b40(0000) GS:ffff8883eb980000(0000) knlGS:0000000000000000
      [    5.977860] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [    5.977860] CR2: 00007f29769e0060 CR3: 0000000107222003 CR4: 0000000000370eb0
      [    5.977860] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [    5.977860] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [    5.977860] Call Trace:
      [    5.977860]  <TASK>
      [    5.977860]  ? __die_body+0x16/0x75
      [    5.977860]  ? die_addr+0x4a/0x70
      [    5.977860]  ? exc_general_protection+0x1c9/0x2d0
      [    5.977860]  ? cgroup_mkdir+0x455/0x9fb
      [    5.977860]  ? __x64_sys_mkdir+0x69/0x80
      [    5.977860]  ? asm_exc_general_protection+0x26/0x30
      [    5.977860]  ? obj_cgroup_charge_pages+0x27/0x2d5
      [    5.977860]  obj_cgroup_charge+0x114/0x1ab
      [    5.977860]  pcpu_alloc+0x1a6/0xa65
      [    5.977860]  ? mem_cgroup_css_alloc+0x1eb/0x1140
      [    5.977860]  ? cgroup_apply_control_enable+0x26b/0x7c0
      [    5.977860]  mem_cgroup_css_alloc+0x23f/0x1140
      [    5.977860]  cgroup_apply_control_enable+0x26b/0x7c0
      [    5.977860]  ? cgroup_kn_set_ugid+0x2d/0x1a0
      [    5.977860]  cgroup_mkdir+0x455/0x9fb
      [    5.977860]  ? __cfi_cgroup_mkdir+0x10/0x10
      [    5.977860]  kernfs_iop_mkdir+0x130/0x170
      [    5.977860]  vfs_mkdir+0x405/0x530
      [    5.977860]  do_mkdirat+0x188/0x1f0
      [    5.977860]  __x64_sys_mkdir+0x69/0x80
      [    5.977860]  do_syscall_64+0x7d/0x100
      [    5.977860]  ? do_syscall_64+0x89/0x100
      [    5.977860]  ? do_syscall_64+0x89/0x100
      [    5.977860]  ? do_syscall_64+0x89/0x100
      [    5.977860]  ? do_syscall_64+0x89/0x100
      [    5.977860]  entry_SYSCALL_64_after_hwframe+0x4b/0x53
      [    5.977860] RIP: 0033:0x7f297671defb
      [    5.977860] Code: 8b 05 39 7f 0d 00 bb ff ff ff ff 64 c7 00 16 00 00 00 e9 61 ff ff ff e8 23 0c 02 00 0f 1f 00 f3 0f 1e fa b88
      [    5.977860] RSP: 002b:00007ffee6242bb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
      [    5.977860] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f297671defb
      [    5.977860] RDX: 0000000000000000 RSI: 00000000000001ed RDI: 000055c6b449f0e0
      [    5.977860] RBP: 00007ffee6242bf0 R08: 000000000000000e R09: 0000000000000000
      [    5.977860] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c6b445db80
      [    5.977860] R13: 00000000000003a0 R14: 00007f2976a68651 R15: 00000000000003a0
      [    5.977860]  </TASK>
      [    5.977860] Modules linked in:
      [    6.014095] ---[ end trace 0000000000000000 ]---
      [    6.014701] RIP: 0010:obj_cgroup_charge_pages+0x27/0x2d5
      [    6.015348] Code: 90 90 90 55 41 57 41 56 41 55 41 54 53 89 d5 41 89 f6 49 89 ff 48 b8 00 00 00 00 00 fc ff df 49 83 c7 10 4d3
      [    6.017575] RSP: 0018:ffffc9000001fb18 EFLAGS: 00010a02
      [    6.018255] RAX: dffffc0000000000 RBX: aaaaaaaaaaaaaaaa RCX: ffff8883eb9a8b08
      [    6.019120] RDX: 0000000000000005 RSI: 0000000000400cc0 RDI: aaaaaaaaaaaaaaaa
      [    6.019983] RBP: 0000000000000005 R08: 3333333333333333 R09: 0000000000000000
      [    6.020849] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883eb9a8b18
      [    6.021747] R13: 1555555555555557 R14: 0000000000400cc0 R15: aaaaaaaaaaaaaaba
      [    6.022609] FS:  00007f2976438b40(0000) GS:ffff8883eb980000(0000) knlGS:0000000000000000
      [    6.023593] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [    6.024296] CR2: 00007f29769e0060 CR3: 0000000107222003 CR4: 0000000000370eb0
      [    6.025279] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [    6.026139] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [    6.027000] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      Actually the problem is caused by uninitialized local variable in
      current_obj_cgroup().  If the root memory cgroup is set as an active
      memory cgroup for a charging scope (as in the trace, where systemd tries
      to create the first non-root cgroup, so the parent cgroup is the root
      cgroup), the "for" loop is skipped and uninitialized objcg is returned,
      causing a panic down the accounting stack.
      
      The fix is trivial: initialize the objcg variable to NULL unconditionally
      before the "for" loop.
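
      The fix itself is small; a sketch of the pattern, based on the description
      above (loop details simplified):

      /* current_obj_cgroup() (sketch) */
      struct obj_cgroup *objcg = NULL;        /* was left uninitialized */

      for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
              objcg = rcu_dereference(memcg->objcg);
              if (likely(objcg))
                      break;
      }
      /* If memcg is the root cgroup the loop body never runs; with the
       * initialization above, NULL is now returned instead of garbage. */
      return objcg;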
      
      [vbabka@suse.cz: remove redundant assignment]
        Link: https://lkml.kernel.org/r/4bd106d5-c3e3-6731-9a74-cff81e2392de@suse.cz
      Link: https://lkml.kernel.org/r/20231116025109.3775055-1-roman.gushchin@linux.dev
      Fixes: e86828e5 ("mm: kmem: scoped objcg protection")
      Signed-off-by: Roman Gushchin (Cruise) <roman.gushchin@linux.dev>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: Erhard Furtner <erhard_f@mailbox.org>
      Closes: https://github.com/ClangBuiltLinux/linux/issues/1959
      Tested-by: Erhard Furtner <erhard_f@mailbox.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Shakeel Butt <shakeelb@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      5f79489a
    • mm/kmemleak: move set_track_prepare() outside raw_spinlocks · d63385a7
      Liu Shixin authored
      set_track_prepare() will call __alloc_pages(), which attempts to acquire
      zone->lock (a spinlock), so move it outside object->lock (a raw_spinlock),
      because it is not right to acquire spinlocks while holding raw_spinlocks in
      RT mode.
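
      A sketch of the reordering described: obtain the stack depot handle before
      taking the raw object lock (simplified from mm/kmemleak.c):

      depot_stack_handle_t trace_handle;
      unsigned long flags;

      /* set_track_prepare() may allocate and thus take zone->lock, which is a
       * sleeping lock on RT, so call it before the raw object->lock is held. */
      trace_handle = set_track_prepare();

      raw_spin_lock_irqsave(&object->lock, flags);
      object->trace_handle = trace_handle;
      raw_spin_unlock_irqrestore(&object->lock, flags);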
      
      Link: https://lkml.kernel.org/r/20231115082138.2649870-3-liushixin2@huawei.com
      Signed-off-by: Liu Shixin <liushixin2@huawei.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Geert Uytterhoeven <geert+renesas@glider.be>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Patrick Wang <patrick.wang.shcn@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      d63385a7
    • Revert "mm/kmemleak: move the initialisation of object to __link_object" · 4eff7d62
      Liu Shixin authored
      Patch series "Fix invalid wait context of set_track_prepare()".
      
      Geert reported an invalid wait context [1] which results from moving
      set_track_prepare() inside kmemleak_lock.  This is not allowed because in
      RT mode, spinlocks can be preempted but raw_spinlocks cannot, so it is not
      allowed to acquire spinlocks while holding raw_spinlocks.  The second patch
      fixes the same problem in kmemleak_update_trace().
      
      
      This patch (of 2):
      
      Move the initialisation of the object back to __alloc_object() because
      set_track_prepare() attempts to acquire zone->lock (a spinlock) while
      __link_object() is holding kmemleak_lock (a raw_spinlock).  This is not
      allowed in RT mode.
      
      This reverts commit 245245c2 ("mm/kmemleak: move the initialisation
      of object to __link_object").
      
      Link: https://lkml.kernel.org/r/20231115082138.2649870-1-liushixin2@huawei.com
      Link: https://lkml.kernel.org/r/20231115082138.2649870-2-liushixin2@huawei.com
      Fixes: 245245c2 ("mm/kmemleak: move the initialisation of object to __link_object")
      Signed-off-by: default avatarLiu Shixin <liushixin2@huawei.com>
      Reported-by: default avatarGeert Uytterhoeven <geert+renesas@glider.be>
      Closes: https://lore.kernel.org/linux-mm/CAMuHMdWj0UzwNaxUvcocTfh481qRJpOWwXxsJCTJfu1oCqvgdA@mail.gmail.com/ [1]
      Acked-by: default avatarCatalin Marinas <catalin.marinas@arm.com>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Patrick Wang <patrick.wang.shcn@gmail.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      4eff7d62
    • Andrew Morton's avatar
      mm/memory.c:zap_pte_range() print bad swap entry · 727d16f1
      Andrew Morton authored
      We have a report of this WARN() triggering.  Let's print the offending
      swp_entry_t to help diagnosis.
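
      A hypothetical sketch of such a diagnostic (not necessarily the exact
      hunk): report the raw value of the entry next to the existing warning so
      that bug reports carry it.

        /* Unrecognized entry: include its raw value in the report. */
        pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
        WARN_ON_ONCE(1);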
      
      Link: https://lkml.kernel.org/r/000000000000b0e576060a30ee3b@google.com
      Cc: Muhammad Usama Anjum <usama.anjum@collabora.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      727d16f1
    • Mike Kravetz's avatar
      hugetlb: fix null-ptr-deref in hugetlb_vma_lock_write · 187da0f8
      Mike Kravetz authored
      The routine __vma_private_lock tests for the existence of a reserve map
      associated with a private hugetlb mapping.  A pointer to the reserve map
      is in vma->vm_private_data.  __vma_private_lock was checking the pointer
      for NULL.  However, it is possible that the low bits of the pointer could
      be used as flags.  In such instances, vm_private_data is not NULL and not
      a valid pointer.  This results in the null-ptr-deref reported by syzbot:
      
      general protection fault, probably for non-canonical address 0xdffffc000000001d: 0000 [#1] PREEMPT SMP KASAN
      KASAN: null-ptr-deref in range [0x00000000000000e8-0x00000000000000ef]
      CPU: 0 PID: 5048 Comm: syz-executor139 Not tainted 6.6.0-rc7-syzkaller-00142-g888cf78c29e2 #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
      RIP: 0010:__lock_acquire+0x109/0x5de0 kernel/locking/lockdep.c:5004
      ...
      Call Trace:
       <TASK>
       lock_acquire kernel/locking/lockdep.c:5753 [inline]
       lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
       down_write+0x93/0x200 kernel/locking/rwsem.c:1573
       hugetlb_vma_lock_write mm/hugetlb.c:300 [inline]
       hugetlb_vma_lock_write+0xae/0x100 mm/hugetlb.c:291
       __hugetlb_zap_begin+0x1e9/0x2b0 mm/hugetlb.c:5447
       hugetlb_zap_begin include/linux/hugetlb.h:258 [inline]
       unmap_vmas+0x2f4/0x470 mm/memory.c:1733
       exit_mmap+0x1ad/0xa60 mm/mmap.c:3230
       __mmput+0x12a/0x4d0 kernel/fork.c:1349
       mmput+0x62/0x70 kernel/fork.c:1371
       exit_mm kernel/exit.c:567 [inline]
       do_exit+0x9ad/0x2a20 kernel/exit.c:861
       __do_sys_exit kernel/exit.c:991 [inline]
       __se_sys_exit kernel/exit.c:989 [inline]
       __x64_sys_exit+0x42/0x50 kernel/exit.c:989
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
      Mask off the low-bit flags before checking for a NULL pointer.  In
      addition, the reserve map only 'belongs' to the OWNER (the parent in
      parent/child relationships), so also check for the OWNER flag.
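
      A sketch of the idea, assuming the existing HPAGE_RESV_* flag bits and
      the is_vma_resv_set() helper; a simplified illustration rather than the
      exact committed function:

        static bool __vma_private_lock(struct vm_area_struct *vma)
        {
                /* A private mapping ... */
                return !(vma->vm_flags & VM_MAYSHARE) &&
                       /* ... whose vm_private_data holds more than flag bits ... */
                       ((unsigned long)vma->vm_private_data & ~HPAGE_RESV_MASK) &&
                       /* ... and which owns the reserve map. */
                       is_vma_resv_set(vma, HPAGE_RESV_OWNER);
        }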
      
      Link: https://lkml.kernel.org/r/20231114012033.259600-1-mike.kravetz@oracle.com
      Reported-by: syzbot+6ada951e7c0f7bc8a71e@syzkaller.appspotmail.com
      Closes: https://lore.kernel.org/linux-mm/00000000000078d1e00608d7878b@google.com/
      Fixes: bf491692 ("hugetlbfs: extend hugetlb_vma_lock to private VMAs")
      Signed-off-by: default avatarMike Kravetz <mike.kravetz@oracle.com>
      Reviewed-by: default avatarRik van Riel <riel@surriel.com>
      Cc: Edward Adam Davis <eadavis@qq.com>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Tom Rix <trix@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      187da0f8
    • Andrew Morton's avatar
      MAINTAINERS: add Andrew Morton for lib/* · b197d166
      Andrew Morton authored
      Add myself as the fallthrough maintainer for material under lib/.
      
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      b197d166
  3. 03 Dec, 2023 3 commits
  4. 02 Dec, 2023 2 commits
    • Linus Torvalds's avatar
      Merge tag 'powerpc-6.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux · 1b8af655
      Linus Torvalds authored
      Pull powerpc fixes from Michael Ellerman:
      
       - Fix corruption of f0/vs0 during FP/Vector save, seen as userspace
         crashes when using io-uring workers (in particular with MariaDB)
      
       - Fix KVM_RUN potentially clobbering all host userspace FP/Vector
         registers
      
      Thanks to Timothy Pearson, Jens Axboe, and Nicholas Piggin.
      
      * tag 'powerpc-6.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
        KVM: PPC: Book3S HV: Fix KVM_RUN clobbering FP/VEC user registers
        powerpc: Don't clobber f0/vs0 during fp|altivec register save
      1b8af655
    • Linus Torvalds's avatar
      Merge tag 'vfio-v6.7-rc4' of https://github.com/awilliam/linux-vfio · 17b17be2
      Linus Torvalds authored
      Pull vfio fixes from Alex Williamson:
      
       - Fix the lifecycle of a mutex in the pds variant driver such that a
         reset prior to opening the device won't find it uninitialized.
         Implement the release path to symmetrically destroy the mutex. Also
         switch a different lock from spinlock to mutex as the code path has
         the potential to sleep and doesn't need the spinlock context
         otherwise (Brett Creeley)
      
       - Fix an issue detected via randconfig where KVM tries to symbol_get an
         undeclared function. The symbol is temporarily declared
         unconditionally here, which resolves the problem and avoids churn
         relative to a series pending for the next merge window which resolves
         some of this symbol ugliness, but also fixes Kconfig dependencies
         (Sean Christopherson)
      
      * tag 'vfio-v6.7-rc4' of https://github.com/awilliam/linux-vfio:
        vfio: Drop vfio_file_iommu_group() stub to fudge around a KVM wart
        vfio/pds: Fix possible sleep while in atomic context
        vfio/pds: Fix mutex lock->magic != lock warning
      17b17be2