1. 08 Apr, 2024 1 commit
  2. 25 Mar, 2024 12 commits
    • workqueue: remove unnecessary import and function in wq_monitor.py · 8034b314
      Kemeng Shi authored
      Remove unnecessary import and function in wq_monitor.py
      Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Introduce enable_and_queue_work() convenience function · 474a549f
      Allen Pais authored
      The enable_and_queue_work() function is introduced to streamline
      the process of enabling and queuing a work item on a specific
      workqueue. This function combines the functionalities of
      enable_work() and queue_work() in a single call, providing a
      concise and convenient API for enabling and queuing work items.
      
      The function accepts a target workqueue and a work item as parameters.
      It first attempts to enable the work item using enable_work(). A successful
      enable operation means that the work item was previously disabled
      and is now marked as eligible for execution. If the enable operation
      is successful, the work item is then queued on the specified workqueue
      using queue_work(). The function returns true if the work item was
      successfully enabled and queued, and false otherwise.
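      
      A minimal sketch of how such a helper can be composed from the two
      existing primitives (this may differ in detail from the actual kernel
      implementation):
      
          static inline bool enable_and_queue_work(struct workqueue_struct *wq,
                                                   struct work_struct *work)
          {
                  /* enable_work() returns %true iff the item was previously disabled */
                  if (enable_work(work)) {
                          queue_work(wq, work);
                          return true;
                  }
                  return false;
          }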
      
      Note: This function may lead to unnecessary spurious wake-ups in cases
      where the work item is expected to be dormant but enable/disable are called
      frequently. Spurious wake-ups refer to the condition where worker threads
      are woken up without actual work to be done. Callers should be aware of
      this behavior and may need to employ additional synchronization mechanisms
      to avoid these overheads if such wake-ups are not desired.
      
      This addition aims to enhance code readability and maintainability by
      providing a unified interface for the common use case of enabling and
      queuing work items on a workqueue.
      
      tj: Made the function comment more compact.
      Signed-off-by: Allen Pais <allen.lkml@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: add function in event of workqueue_activate_work · d6a7bbdd
      Kassey Li authored
      The trace event "workqueue_activate_work" only prints the work struct.
      However, the work function is the item of interest when following a full
      sequence of work.
      Current workqueue_activate_work trace event output:
      
          workqueue_activate_work: work struct ffffff88b4a0f450
      
      With this change, workqueue_activate_work also prints the function name,
      aligning with the workqueue_queue_work/execute_start/execute_end events.
      
          workqueue_activate_work: work struct ffffff80413a78b8 function=vmstat_update
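      
      Illustratively, the event gains a function field following the usual
      TRACE_EVENT() pattern (a sketch of the shape of the change, not the
      verbatim diff):
      
          TRACE_EVENT(workqueue_activate_work,
          
                  TP_PROTO(struct work_struct *work),
          
                  TP_ARGS(work),
          
                  TP_STRUCT__entry(
                          __field(void *, work)
                          __field(void *, function)
                  ),
          
                  TP_fast_assign(
                          __entry->work     = work;
                          __entry->function = work->func;
                  ),
          
                  TP_printk("work struct %p function=%ps", __entry->work, __entry->function)
          );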
      Signed-off-by: Kassey Li <quic_yingangl@quicinc.com>
      Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Cleanup subsys attribute registration · 79202591
      Dan Williams authored
      While reviewing users of subsys_virtual_register() I noticed that
      wq_sysfs_init() ignores the @groups argument. This looks like a
      historical artifact as the original wq_subsys only had one attribute to
      register.
      
      On the way to building up an @groups argument to pass to
      subsys_virtual_register() a few more cleanups fell out:
      
      * Use DEVICE_ATTR_RO() and DEVICE_ATTR_RW() for
        cpumask_{isolated,requested} and cpumask respectively. Rename the
        @show and @store methods accordingly.
      
      * Co-locate the attribute definition with the methods. This required
        moving wq_unbound_cpumask_show down next to wq_unbound_cpumask_store
        (renamed to cpumask_show() and cpumask_store())
      
      * Use ATTRIBUTE_GROUPS() to skip some boilerplate declarations
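      
      As a sketch of the resulting pattern (locking and error handling elided;
      the attribute array and group names are illustrative):
      
          static ssize_t cpumask_show(struct device *dev,
                                      struct device_attribute *attr, char *buf)
          {
                  return sysfs_emit(buf, "%*pb\n", cpumask_pr_args(wq_unbound_cpumask));
          }
          
          static ssize_t cpumask_store(struct device *dev,
                                       struct device_attribute *attr,
                                       const char *buf, size_t count)
          {
                  /* parse and apply the new mask here */
                  return count;
          }
          static DEVICE_ATTR_RW(cpumask);         /* defines dev_attr_cpumask */
          
          static struct attribute *wq_sysfs_attrs[] = {
                  &dev_attr_cpumask.attr,
                  NULL,
          };
          ATTRIBUTE_GROUPS(wq_sysfs);             /* defines wq_sysfs_groups */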
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Use list_last_entry() to get the last idle worker · d70f5d57
      Lai Jiangshan authored
      It is clearer than the open-coded equivalent.
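      
      Roughly, the change looks like this (simplified sketch):
      
          /* before: open-coded access to the tail of the idle list */
          worker = list_entry(pool->idle_list.prev, struct worker, entry);
          
          /* after: the intent is explicit */
          worker = list_last_entry(&pool->idle_list, struct worker, entry);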
      Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Move attrs->cpumask out of worker_pool's properties when attrs->affn_strict · ae1296a7
      Lai Jiangshan authored
      Allow more pools to be shared when attrs->affn_strict is set.
      Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Use INIT_WORK_ONSTACK in workqueue_softirq_dead() · e7cc3be6
      Lai Jiangshan authored
      dead_work is a stack variable.
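      
      On-stack work items should be initialized with the _ONSTACK variant so
      that object debugging handles the on-stack object correctly; the usage is
      roughly (sketch, callback name illustrative):
      
          struct work_struct dead_work;
          
          INIT_WORK_ONSTACK(&dead_work, drain_dead_softirq_workfn);
          /* ... queue dead_work and wait for it to finish ... */
          destroy_work_on_stack(&dead_work);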
      Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Allow cancel_work_sync() and disable_work() from atomic contexts on BH work items · 134874e2
      Tejun Heo authored
      Now that work_grab_pending() can always grab the PENDING bit without
      sleeping, the only thing that prevents allowing cancel_work_sync() of a BH
      work item from an atomic context is the flushing of the in-flight instance.
      
      When we're flushing a BH work item for cancel_work_sync(), we know that the
      work item is not queued and must be executing in a BH context, which means
      that it's safe to busy-wait for its completion from a non-hardirq atomic
      context.
      
      This patch updates __flush_work() so that it busy-waits when flushing a BH
      work item for cancel_work_sync(). might_sleep() is pushed from
      start_flush_work() to its callers - when operating on a BH work item,
      __cancel_work_sync() now enforces !in_hardirq() instead of might_sleep().
      
      This allows cancel_work_sync() and disable_work() to be called from
      non-hardirq atomic contexts on BH work items.
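      
      For example, a hypothetical driver owning a BH work item could now tear
      it down from a non-hardirq atomic section (sketch; struct and field names
      are illustrative):
      
          /* runs in softirq/atomic context, but not in hardirq context */
          static void foo_stop(struct foo *f)
          {
                  /*
                   * bh_work is queued on a BH workqueue (e.g. system_bh_wq):
                   * cancel_work_sync() busy-waits for a running instance
                   * instead of sleeping, so this is allowed here.
                   */
                  cancel_work_sync(&f->bh_work);
          }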
      
      v3: In __flush_work(), test WORK_OFFQ_BH to tell whether a work item being
          canceled can be busy waited instead of making start_flush_work() return
          the pool. (Lai)
      
      v2: Lai pointed out that __flush_work() was accessing pool->flags outside
          the RCU critical section protecting the pool pointer. Fix it by testing
          and remembering the result inside the RCU critical section.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
    • workqueue: Remember whether a work item was on a BH workqueue · 456a78ee
      Tejun Heo authored
      Add an off-queue flag, WORK_OFFQ_BH, that indicates whether the last
      workqueue the work item was on was a BH one. This will be used to test
      whether a work item is BH in cancel_sync path to implement atomic
      cancel_sync'ing for BH work items.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
    • workqueue: Remove WORK_OFFQ_CANCELING · f09b10b6
      Tejun Heo authored
      cancel[_delayed]_work_sync() guarantees that it can shut down
      self-requeueing work items. To achieve that, it grabs and then holds
      WORK_STRUCT_PENDING bit set while flushing the currently executing instance.
      As the PENDING bit is set, all queueing attempts including the
      self-requeueing ones fail and once the currently executing instance is
      flushed, the work item should be idle as long as someone else isn't actively
      queueing it.
      
      This means that the cancel_work_sync path may hold the PENDING bit set while
      flushing the target work item. This isn't a problem for the queueing path -
      it can just fail which is the desired effect. It doesn't affect flush. It
      doesn't matter to cancel_work either as it can just report that the work
      item has successfully canceled. However, if there's another cancel_work_sync
      attempt on the work item, it can't simply fail or report success, as that
      would breach the guarantee that it should provide. cancel_work_sync has to
      wait for and grab that PENDING bit and go through the motions.
      
      WORK_OFFQ_CANCELING and wq_cancel_waitq are what implement this
      cancel_work_sync to cancel_work_sync wait mechanism. When a work item is
      being canceled, WORK_OFFQ_CANCELING is also set on it and other
      cancel_work_sync attempts wait on the bit to be cleared using the wait
      queue.
      
      While this works, it's an isolated wart which doesn't jive with the rest of
      flush and cancel mechanisms and forces enable_work() and disable_work() to
      require a sleepable context, which hampers their usability.
      
      Now that a work item can be disabled, we can use that to block queueing
      while cancel_work_sync is in progress. Instead of holding the PENDING bit,
      it can temporarily disable the work item, flush and then re-enable it, as
      that achieves the same end result of blocking queueing while canceling and
      thus enables canceling of self-requeueing work items.
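      
      Conceptually, the new cancel_sync flow is (a simplified sketch, not the
      literal kernel code):
      
          static bool cancel_work_sync_sketch(struct work_struct *work, u32 cflags)
          {
                  bool ret;
          
                  /* cancel and plug further queueing by bumping the disable count */
                  ret = __cancel_work(work, cflags | WORK_CANCEL_DISABLE);
          
                  /* wait for any in-flight instance; nothing can be requeued meanwhile */
                  __flush_work(work, true);
          
                  /* drop the disable count so the work item can be queued again */
                  enable_work(work);
          
                  return ret;
          }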
      
      - WORK_OFFQ_CANCELING and the surrounding mechanisms are removed.
      
      - work_grab_pending() is now simpler, no longer has to wait for a blocking
        operation and thus can be called from any context.
      
      - With work_grab_pending() simplified, no need to use try_to_grab_pending()
        directly. All users are converted to use work_grab_pending().
      
      - __cancel_work_sync() is updated to __cancel_work() with
        WORK_CANCEL_DISABLE to cancel and plug racing queueing attempts. It then
        flushes and re-enables the work item if necessary.
      
      - These changes allow disable_work() and enable_work() to be called from any
        context.
      
      v2: Lai pointed out that mod_delayed_work_on() needs to check the disable
          count before queueing the delayed work item. Added
          clear_pending_if_disabled() call.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
    • workqueue: Implement disable/enable for (delayed) work items · 86898fa6
      Tejun Heo authored
      While (delayed) work items could be flushed and canceled, there was no way
      to prevent them from being queued in the future. While this didn't lead to
      functional deficiencies, it sometimes required a bit more effort from the
      workqueue users to e.g. sequence shutdown steps with more care.
      
      Workqueue is currently in the process of replacing tasklet which does
      support disabling and enabling. The feature is used relatively widely to,
      for example, temporarily suppress main path while a control plane operation
      (reset or config change) is in progress.
      
      To enable easy conversion of tasklet users and as it seems like an
      inherently useful feature, this patch implements disabling and enabling of
      work items.
      
      - A work item carries a 16-bit disable count in work->data while not queued.
        The access to the count is synchronized by the PENDING bit like all other
        parts of work->data.
      
      - If the count is non-zero, the work item cannot be queued. Any attempt to
        queue the work item fails and returns %false.
      
      - disable_work[_sync](), enable_work(), disable_delayed_work[_sync]() and
        enable_delayed_work() are added.
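      
      A typical use mirrors the tasklet_disable()/tasklet_enable() pattern
      (hypothetical driver sketch; names are illustrative):
      
          /* quiesce the work item; queue_work() calls fail while disabled */
          disable_work_sync(&dev->rx_work);       /* also waits for an in-flight instance */
          
          foo_reconfigure(dev);                   /* control-plane operation */
          
          enable_work(&dev->rx_work);             /* queueing is possible again */
          queue_work(system_wq, &dev->rx_work);   /* kick once in case events were missed */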
      
      v3: enable_work() was using local_irq_enable() instead of
          local_irq_restore() to undo IRQ-disable by work_grab_pending(). This is
          awkward now and will become incorrect as enable_work() will later be
          used from IRQ context too. (Lai)
      
      v2: Lai noticed that queue_work_node() wasn't checking the disable count.
          Fixed. queue_rcu_work() is updated to trigger warning if the inner work
          item is disabled.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
    • workqueue: Preserve OFFQ bits in cancel[_sync] paths · 1211f3b2
      Tejun Heo authored
      The cancel[_sync] paths acquire and release WORK_STRUCT_PENDING, and
      manipulate WORK_OFFQ_CANCELING. However, they assume that all the OFFQ bit
      values except for the pool ID are statically known and don't preserve them,
      which is not wrong in the current code as the pool ID and CANCELING are the
      only information carried. However, the planned disable/enable support will
      add more fields and need them to be preserved.
      
      This patch updates work data handling so that only the bits which need
      updating are updated.
      
      - struct work_offq_data is added along with work_offqd_unpack() and
        work_offqd_pack_flags() to help manipulating multiple fields contained in
        work->data. Note that the helpers look a bit silly right now as there
        isn't that much to pack. The next patch will add more.
      
      - mark_work_canceling() which is used only by __cancel_work_sync() is
        replaced by open-coded usage of work_offq_data and
        set_work_pool_and_keep_pending() in __cancel_work_sync().
      
      - __cancel_work[_sync]() uses offq_data helpers to preserve other OFFQ bits
        when clearing WORK_STRUCT_PENDING and WORK_OFFQ_CANCELING at the end.
      
      - This removes all users of get_work_pool_id() which is dropped. Note that
        get_work_pool_id() could handle both WORK_STRUCT_PWQ and !WORK_STRUCT_PWQ
        cases; however, it was only being called after try_to_grab_pending()
        succeeded, in which case WORK_STRUCT_PWQ is never set and thus it's safe
        to use work_offqd_unpack() instead.
      
      No behavior changes intended.
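      
      For illustration, the helper structure and its pack/unpack pair might look
      roughly like this (hypothetical layout based on the description above):
      
          struct work_offq_data {
                  u32     pool_id;        /* off-queue pool ID stored in work->data */
                  u32     flags;          /* OFFQ flag bits, e.g. WORK_OFFQ_CANCELING */
          };
          
          /* split the off-queue portion of work->data into the fields above */
          void work_offqd_unpack(struct work_offq_data *offqd, unsigned long data);
          
          /* re-pack only the flag bits, leaving the other fields untouched */
          unsigned long work_offqd_pack_flags(struct work_offq_data *offqd);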
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
  3. 24 Mar, 2024 13 commits
    • Linux 6.9-rc1 · 4cece764
      Linus Torvalds authored
    • Merge tag 'efi-fixes-for-v6.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi · ab8de2db
      Linus Torvalds authored
      Pull EFI fixes from Ard Biesheuvel:
      
       - Fix logic that is supposed to prevent placement of the kernel image
         below LOAD_PHYSICAL_ADDR
      
       - Use the firmware stack in the EFI stub when running in mixed mode
      
       - Clear BSS only once when using mixed mode
      
       - Check efi.get_variable() function pointer for NULL before trying to
         call it
      
      * tag 'efi-fixes-for-v6.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi:
        efi: fix panic in kdump kernel
        x86/efistub: Don't clear BSS twice in mixed mode
        x86/efistub: Call mixed mode boot services on the firmware's stack
        efi/libstub: fix efi_random_alloc() to allocate memory at alloc_min or higher address
    • Merge tag 'x86-urgent-2024-03-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 5e74df2f
      Linus Torvalds authored
      Pull x86 fixes from Thomas Gleixner:
      
       - Ensure that the encryption mask at boot is properly propagated on
         5-level page tables, otherwise the PGD entry is incorrectly set to
         non-encrypted, which causes system crashes during boot.
      
       - Undo the deferred 5-level page table setup as it cannot work with
         memory encryption enabled.
      
        - Prevent inconsistent XFD state on CPU hotplug, where the MSR is reset
          to the default value but the cached variable is not, so subsequent
          comparisons might yield the wrong result and, as a consequence,
          prevent the MSR from being updated.
      
       - Register the local APIC address only once in the MPPARSE enumeration
         to prevent triggering the related WARN_ONs() in the APIC and topology
         code.
      
       - Handle the case where no APIC is found gracefully by registering a
         fake APIC in the topology code. That makes all related topology
         functions work correctly and does not affect the actual APIC driver
         code at all.
      
        - Don't evaluate logical IDs during early boot as the local APIC IDs
          are not yet enumerated and the invoked function returns an error
          code. Nothing requires the logical IDs before the final CPUID
          evaluation, which takes place after the local APICs have been
          enumerated.
      
        - Cure the fallout of the per-CPU rework on UP, which misplaced the
          copying of boot_cpu_data to the per-CPU data so that the final update
          to boot_cpu_data was lost, causing inconsistent state and boot
          crashes.
      
       - Use copy_from_kernel_nofault() in the kprobes setup as there is no
         guarantee that the address can be safely accessed.
      
       - Reorder struct members in struct saved_context to work around another
         kmemleak false positive
      
       - Remove the buggy code which tries to update the E820 kexec table for
         setup_data as that is never passed to the kexec kernel.
      
       - Update the resource control documentation to use the proper units.
      
       - Fix a Kconfig warning observed with tinyconfig
      
      * tag 'x86-urgent-2024-03-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86/boot/64: Move 5-level paging global variable assignments back
        x86/boot/64: Apply encryption mask to 5-level pagetable update
        x86/cpu: Add model number for another Intel Arrow Lake mobile processor
        x86/fpu: Keep xfd_state in sync with MSR_IA32_XFD
        Documentation/x86: Document that resctrl bandwidth control units are MiB
        x86/mpparse: Register APIC address only once
        x86/topology: Handle the !APIC case gracefully
        x86/topology: Don't evaluate logical IDs during early boot
        x86/cpu: Ensure that CPU info updates are propagated on UP
        kprobes/x86: Use copy_from_kernel_nofault() to read from unsafe address
        x86/pm: Work around false positive kmemleak report in msr_build_context()
        x86/kexec: Do not update E820 kexec table for setup_data
        x86/config: Fix warning for 'make ARCH=x86_64 tinyconfig'
    • Merge tag 'sched-urgent-2024-03-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · b136f68e
      Linus Torvalds authored
      Pull scheduler doc clarification from Thomas Gleixner:
       "A single update for the documentation of the base_slice_ns tunable to
        clarify that any value which is less than the tick slice has no effect
        because the scheduler tick is not guaranteed to happen within the set
        time slice"
      
      * tag 'sched-urgent-2024-03-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        sched/doc: Update documentation for base_slice_ns and CONFIG_HZ relation
    • Merge tag 'dma-mapping-6.9-2024-03-24' of git://git.infradead.org/users/hch/dma-mapping · 864ad046
      Linus Torvalds authored
      Pull dma-mapping fixes from Christoph Hellwig:
       "This has a set of swiotlb alignment fixes for sometimes very long
        standing bugs from Will. We've been discussing them for a while and
        they should be solid now"
      
      * tag 'dma-mapping-6.9-2024-03-24' of git://git.infradead.org/users/hch/dma-mapping:
        swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE
        iommu/dma: Force swiotlb_max_mapping_size on an untrusted device
        swiotlb: Fix alignment checks when both allocation and DMA masks are present
        swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()
        swiotlb: Enforce page alignment in swiotlb_alloc()
        swiotlb: Fix double-allocation of slots due to broken alignment handling
    • efi: fix panic in kdump kernel · 62b71cd7
      Oleksandr Tymoshenko authored
      Check if get_next_variable() is actually valid pointer before
      calling it. In kdump kernel this method is set to NULL that causes
      panic during the kexec-ed kernel boot.
      
      Tested with QEMU and OVMF firmware.
      
      Fixes: bad267f9 ("efi: verify that variable services are supported")
      Signed-off-by: Oleksandr Tymoshenko <ovt@google.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • x86/efistub: Don't clear BSS twice in mixed mode · df7ecce8
      Ard Biesheuvel authored
      Clearing BSS should only be done once, at the very beginning.
      efi_pe_entry() is the entrypoint from the firmware, which may not clear
      BSS and so it is done explicitly. However, efi_pe_entry() is also used
      as an entrypoint by the mixed mode startup code, in which case BSS will
      already have been cleared, and doing it again at this point will corrupt
      global variables holding the firmware's GDT/IDT and segment selectors.
      
      So make the memset() conditional on whether the EFI stub is running in
      native mode.
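      
      The resulting guard is conceptually (a sketch; the native-mode check is
      written here as efi_is_native(), which is an assumption about the exact
      helper used):
      
          if (efi_is_native())
                  memset(_bss, 0, _ebss - _bss);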
      
      Fixes: b3810c5a ("x86/efistub: Clear decompressor BSS in native EFI entrypoint")
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • x86/efistub: Call mixed mode boot services on the firmware's stack · cefcd4fe
      Ard Biesheuvel authored
      Normally, the EFI stub calls into the EFI boot services using the stack
      that was live when the stub was entered. According to the UEFI spec,
      this stack needs to be at least 128k in size - this might seem large but
      all asynchronous processing and event handling in EFI runs from the same
      stack and so quite a lot of space may be used in practice.
      
      In mixed mode, the situation is a bit different: the bootloader calls
      the 32-bit EFI stub entry point, which calls the decompressor's 32-bit
      entry point, where the boot stack is set up, using a fixed allocation
      of 16k. This stack is still in use when the EFI stub is started in
      64-bit mode, and so all calls back into the EFI firmware will be using
      the decompressor's limited boot stack.
      
      Due to the placement of the boot stack right after the boot heap, any
      stack overruns have gone unnoticed. However, commit
      
        5c4feadb0011983b ("x86/decompressor: Move global symbol references to C code")
      
      moved the definition of the boot heap into C code, and now the boot
      stack is placed right at the base of BSS, where any overruns will
      corrupt the end of the .data section.
      
      While it would be possible to work around this by increasing the size of
      the boot stack, doing so would affect all x86 systems, and mixed mode
      systems are a tiny (and shrinking) fraction of the x86 installed base.
      
      So instead, record the firmware stack pointer value when entering from
      the 32-bit firmware, and switch to this stack every time a EFI boot
      service call is made.
      
      Cc: <stable@kernel.org> # v6.1+
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • x86/boot/64: Move 5-level paging global variable assignments back · 9843231c
      Tom Lendacky authored
      Commit 63bed966 ("x86/startup_64: Defer assignment of 5-level paging
      global variables") moved assignment of 5-level global variables to later
      in the boot to avoid having to use RIP-relative addressing to set them.
      However, when running with 5-level paging and SME
      active (mem_encrypt=on), the variables are needed as part of the page
      table setup needed to encrypt the kernel (using pgd_none(), p4d_offset(),
      etc.). Since the variables haven't been set, the page table manipulation
      is done as if 4-level paging is active, causing the system to crash on
      boot.
      
      While only a subset of the assignments that were moved need to be set
      early, move all of the assignments back into check_la57_support() so that
      these assignments aren't spread between two locations. Instead of just
      reverting the fix, this uses the new RIP_REL_REF() macro when assigning
      the variables.
      
      Fixes: 63bed966 ("x86/startup_64: Defer assignment of 5-level paging global variables")
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/2ca419f4d0de719926fd82353f6751f717590a86.1711122067.git.thomas.lendacky@amd.com
    • x86/boot/64: Apply encryption mask to 5-level pagetable update · 4d0d7e78
      Tom Lendacky authored
      When running with 5-level page tables, the kernel mapping PGD entry is
      updated to point to the P4D table. The assignment uses _PAGE_TABLE_NOENC,
      which, when SME is active (mem_encrypt=on), results in a page table
      entry without the encryption mask set, causing the system to crash on
      boot.
      
      Change the assignment to use _PAGE_TABLE instead of _PAGE_TABLE_NOENC so
      that the encryption mask is set for the PGD entry.
      
      Fixes: 533568e0 ("x86/boot/64: Use RIP_REL_REF() to access early_top_pgt[]")
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/8f20345cda7dbba2cf748b286e1bc00816fe649a.1711122067.git.thomas.lendacky@amd.com
    • x86/fpu: Keep xfd_state in sync with MSR_IA32_XFD · 10e4b516
      Adamos Ttofari authored
      Commit 67236547 ("x86/fpu: Update XFD state where required") and
      commit 8bf26758 ("x86/fpu: Add XFD state to fpstate") introduced a
      per CPU variable xfd_state to keep the MSR_IA32_XFD value cached, in
      order to avoid unnecessary writes to the MSR.
      
      On CPU hotplug MSR_IA32_XFD is reset to the init_fpstate.xfd, which
      wipes out any stale state. But the per CPU cached xfd value is not
      reset, which brings them out of sync.
      
      As a consequence a subsequent xfd_update_state() might fail to update
      the MSR which in turn can result in XRSTOR raising a #NM in kernel
      space, which crashes the kernel.
      
      To fix this, introduce xfd_set_state() to write xfd_state together
      with MSR_IA32_XFD, and use it in all places that set MSR_IA32_XFD.
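      
      Conceptually, the helper pairs the MSR write with the cache update
      (sketch; exact placement per the description above):
      
          static void xfd_set_state(u64 xfd)
          {
                  wrmsrl(MSR_IA32_XFD, xfd);
                  __this_cpu_write(xfd_state, xfd);
          }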
      
      Fixes: 67236547 ("x86/fpu: Update XFD state where required")
      Signed-off-by: Adamos Ttofari <attofari@amazon.de>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lore.kernel.org/r/20240322230439.456571-1-chang.seok.bae@intel.com
      
      Closes: https://lore.kernel.org/lkml/20230511152818.13839-1-attofari@amazon.de
    • Documentation/x86: Document that resctrl bandwidth control units are MiB · a8ed59a3
      Tony Luck authored
      The memory bandwidth software controller uses 2^20 units rather than
      10^6. See mbm_bw_count() which computes bandwidth using the "SZ_1M"
      Linux define for 0x00100000.
      
      Update the documentation to use MiB when describing this feature.
      It's too late to fix the mount option "mba_MBps" as that is now an
      established user interface.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lore.kernel.org/r/20240322182016.196544-1-tony.luck@intel.com
  4. 23 Mar, 2024 11 commits
    • Merge tag 'timers-urgent-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 70293240
      Linus Torvalds authored
      Pull timer fixes from Thomas Gleixner:
       "Two regression fixes for the timer and timer migration code:
      
          - Prevent endless timer requeuing which is caused by two CPUs racing
            out of idle. This happens when the last CPU goes idle and therefore
            has to ensure that the pending global timers are expired, while some
            other CPU comes out of idle at the same time, wins the race and
            expires the global queue. This causes the last CPU to chase ghost
            timers forever and reprogram its clockevent device endlessly.
      
           Cure this by re-evaluating the wakeup time unconditionally.
      
         - The split into local (pinned) and global timers in the timer wheel
           caused a regression for NOHZ full as it broke the idle tracking of
            global timers. On NOHZ full this prevents a self-IPI from being sent,
            which in turn causes the timer to not be programmed and not expire
            on time.
      
           Restore the idle tracking for the global timer base so that the
           self IPI condition for NOHZ full is working correctly again"
      
      * tag 'timers-urgent-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        timers: Fix removed self-IPI on global timer's enqueue in nohz_full
        timers/migration: Fix endless timer requeue after idle interrupts
    • Merge tag 'timers-core-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 00164f47
      Linus Torvalds authored
      Pull more clocksource updates from Thomas Gleixner:
       "A set of updates for clocksource and clockevent drivers:
      
         - A fix for the prescaler of the ARM global timer where the prescaler
            mask define only covered 4 bits while it is actually 8 bits wide.
           This obviously restricted the possible range of prescaler
           adjustments
      
         - A fix for the RISC-V timer which prevents a timer interrupt being
           raised while the timer is initialized
      
         - A set of device tree updates to support new system on chips in
           various drivers
      
         - Kernel-doc and other cleanups all over the place"
      
      * tag 'timers-core-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        clocksource/drivers/timer-riscv: Clear timer interrupt on timer initialization
        dt-bindings: timer: Add support for cadence TTC PWM
        clocksource/drivers/arm_global_timer: Simplify prescaler register access
        clocksource/drivers/arm_global_timer: Guard against division by zero
        clocksource/drivers/arm_global_timer: Make gt_target_rate unsigned long
        dt-bindings: timer: add Ralink SoCs system tick counter
        clocksource: arm_global_timer: fix non-kernel-doc comment
        clocksource/drivers/arm_global_timer: Remove stray tab
        clocksource/drivers/arm_global_timer: Fix maximum prescaler value
        clocksource/drivers/imx-sysctr: Add i.MX95 support
        clocksource/drivers/imx-sysctr: Drop use global variables
        dt-bindings: timer: nxp,sysctr-timer: support i.MX95
        dt-bindings: timer: renesas: ostm: Document RZ/Five SoC
        dt-bindings: timer: renesas,tmu: Document input capture interrupt
        clocksource/drivers/ti-32K: Fix misuse of "/**" comment
        clocksource/drivers/stm32: Fix all kernel-doc warnings
        dt-bindings: timer: exynos4210-mct: Add google,gs101-mct compatible
        clocksource/drivers/imx: Fix -Wunused-but-set-variable warning
    • Merge tag 'irq-urgent-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 1a391931
      Linus Torvalds authored
      Pull irq fixes from Thomas Gleixner:
       "A series of fixes for the Renesas RZG21 interrupt chip driver to
        prevent spurious and misrouted interrupts.
      
         - Ensure that posted writes are flushed in the eoi() callback
      
         - Ensure that interrupts are masked at the chip level when the
           trigger type is changed
      
         - Clear the interrupt status register when setting up edge type
           trigger modes
      
         - Ensure that the trigger type and routing information is set before
           the interrupt is enabled"
      
      * tag 'irq-urgent-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        irqchip/renesas-rzg2l: Do not set TIEN and TINT source at the same time
        irqchip/renesas-rzg2l: Prevent spurious interrupts when setting trigger type
        irqchip/renesas-rzg2l: Rename rzg2l_irq_eoi()
        irqchip/renesas-rzg2l: Rename rzg2l_tint_eoi()
        irqchip/renesas-rzg2l: Flush posted write in irq_eoi()
    • Merge tag 'core-entry-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 976b029d
      Linus Torvalds authored
      Pull core entry fix from Thomas Gleixner:
       "A single fix for the generic entry code:
      
        The trace_sys_enter() tracepoint can modify the syscall number via
        kprobes or BPF in pt_regs, but that requires that the syscall number
         is re-evaluated from pt_regs after the tracepoint.
      
        A seccomp fix in that area removed the re-evaluation so the change
        does not take effect as the code just uses the locally cached number.
      
        Restore the original behaviour by re-evaluating the syscall number
        after the tracepoint"
      
      * tag 'core-entry-2024-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        entry: Respect changes to system call number by trace_sys_enter()
    • Merge tag 'powerpc-6.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux · 484193fe
      Linus Torvalds authored
      Pull more powerpc updates from Michael Ellerman:
      
       - Handle errors in mark_rodata_ro() and mark_initmem_nx()
      
       - Make struct crash_mem available without CONFIG_CRASH_DUMP
      
      Thanks to Christophe Leroy and Hari Bathini.
      
      * tag 'powerpc-6.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
        powerpc/kdump: Split KEXEC_CORE and CRASH_DUMP dependency
        powerpc/kexec: split CONFIG_KEXEC_FILE and CONFIG_CRASH_DUMP
        kexec/kdump: make struct crash_mem available without CONFIG_CRASH_DUMP
        powerpc: Handle error in mark_rodata_ro() and mark_initmem_nx()
    • Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm · 02fb638b
      Linus Torvalds authored
      Pull ARM updates from Russell King:
      
       - remove a misuse of kernel-doc comment
      
       - use "Call trace:" for backtraces like other architectures
      
       - implement copy_from_kernel_nofault_allowed() to fix a LKDTM test
      
       - add a "cut here" line for prefetch aborts
      
       - remove unnecessary Kconfig entry for FRAME_POINTER
      
       - remove iwmmxt support for PJ4/PJ4B cores
      
       - use bitfield helpers in ptrace to improve readability
      
       - check if folio is reserved before flushing
      
      * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
        ARM: 9359/1: flush: check if the folio is reserved for no-mapping addresses
        ARM: 9354/1: ptrace: Use bitfield helpers
        ARM: 9352/1: iwmmxt: Remove support for PJ4/PJ4B cores
        ARM: 9353/1: remove unneeded entry for CONFIG_FRAME_POINTER
        ARM: 9351/1: fault: Add "cut here" line for prefetch aborts
        ARM: 9350/1: fault: Implement copy_from_kernel_nofault_allowed()
        ARM: 9349/1: unwind: Add missing "Call trace:" line
        ARM: 9334/1: mm: init: remove misuse of kernel-doc comment
    • Merge tag 'hardening-v6.9-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux · b7187139
      Linus Torvalds authored
      Pull more hardening updates from Kees Cook:
      
       - CONFIG_MEMCPY_SLOW_KUNIT_TEST is no longer needed (Guenter Roeck)
      
       - Fix needless UTF-8 character in arch/Kconfig (Liu Song)
      
       - Improve __counted_by warning message in LKDTM (Nathan Chancellor)
      
       - Refactor DEFINE_FLEX() for default use of __counted_by
      
       - Disable signed integer overflow sanitizer on GCC < 8
      
      * tag 'hardening-v6.9-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
        lkdtm/bugs: Improve warning message for compilers without counted_by support
        overflow: Change DEFINE_FLEX to take __counted_by member
        Revert "kunit: memcpy: Split slow memcpy tests into MEMCPY_SLOW_KUNIT_TEST"
        arch/Kconfig: eliminate needless UTF-8 character in Kconfig help
        ubsan: Disable signed integer overflow sanitizer on GCC < 8
    • x86/mpparse: Register APIC address only once · f2208aa1
      Thomas Gleixner authored
      The APIC address is registered twice. First during the early detection and
      afterwards when actually scanning the table for APIC IDs. The APIC and
      topology core warn about the second attempt.
      
      Restrict it to the early detection call.
      
      Fixes: 81287ad6 ("x86/apic: Sanitize APIC address setup")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Link: https://lore.kernel.org/r/20240322185305.297774848@linutronix.de
    • x86/topology: Handle the !APIC case gracefully · 5e25eb25
      Thomas Gleixner authored
      If there is no local APIC enumerated and registered then the topology
      bitmaps are empty. Therefore, topology_init_possible_cpus() will die with
      a division by zero exception.
      
      Prevent this by registering a fake APIC id to populate the topology
      bitmap. This also allows all topology query interfaces to be used
      unconditionally. It does not affect the actual APIC code because either
      the local APIC address was not registered or no local APIC could be
      detected.
      
      Fixes: f1f758a8 ("x86/topology: Add a mechanism to track topology via APIC IDs")
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Link: https://lore.kernel.org/r/20240322185305.242709302@linutronix.de
    • x86/topology: Don't evaluate logical IDs during early boot · 7af541ce
      Thomas Gleixner authored
      The local APICs have not yet been enumerated so the logical ID evaluation
      from the topology bitmaps does not work and would return an error code.
      
      Skip the evaluation during the early boot CPUID evaluation and only apply
      it on the final run.
      
      Fixes: 380414be ("x86/cpu/topology: Use topology logical mapping mechanism")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Link: https://lore.kernel.org/r/20240322185305.186943142@linutronix.de
    • x86/cpu: Ensure that CPU info updates are propagated on UP · c90399fb
      Thomas Gleixner authored
      The boot sequence evaluates CPUID information twice:
      
        1) During early boot
      
        2) When finalizing the early setup right before
           mitigations are selected and alternatives are patched.
      
      In both cases the evaluation is stored in boot_cpu_data, but on UP the
      copying of boot_cpu_data to the per CPU info of the boot CPU happens
      between #1 and #2. So any update which happens in #2 is never propagated to
      the per CPU info instance.
      
      Consolidate the whole logic and copy boot_cpu_data right before applying
      alternatives as that's the point where boot_cpu_data is in its final
      state and not supposed to change anymore.
      
      This also removes the voodoo mb() from smp_prepare_cpus_common() which
      had absolutely no purpose.
      
      Fixes: 71eb4893 ("x86/percpu: Cure per CPU madness on UP")
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Link: https://lore.kernel.org/r/20240322185305.127642785@linutronix.de
  5. 22 Mar, 2024 3 commits