Commit 95c8a35f authored by Linus Torvalds

Merge tag 'mm-hotfixes-stable-2024-01-05-11-35' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc mm fixes from Andrew Morton:
 "12 hotfixes.

  Two are cc:stable and the remainder either address post-6.7 issues or
  aren't considered necessary for earlier kernel versions"

* tag 'mm-hotfixes-stable-2024-01-05-11-35' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  mm: shrinker: use kvzalloc_node() from expand_one_shrinker_info()
  mailmap: add entries for Mathieu Othacehe
  MAINTAINERS: change vmware.com addresses to broadcom.com
  arch/mm/fault: fix major fault accounting when retrying under per-VMA lock
  mm/mglru: skip special VMAs in lru_gen_look_around()
  MAINTAINERS: hand over hwpoison maintainership to Miaohe Lin
  MAINTAINERS: remove hugetlb maintainer Mike Kravetz
  mm: fix unmap_mapping_range high bits shift bug
  mm: memcg: fix split queue list crash when large folio migration
  mm: fix arithmetic for max_prop_frac when setting max_ratio
  mm: fix arithmetic for bdi min_ratio
  mm: align larger anonymous mappings on THP boundaries
parents 0d3ac66e 7fba9420
--- a/.mailmap
+++ b/.mailmap
@@ -377,7 +377,7 @@ Martin Kepplinger <martink@posteo.de> <martin.kepplinger@ginzinger.com>
 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@puri.sm>
 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
 Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> <martyna.szapar-mudlaw@intel.com>
-Mathieu Othacehe <m.othacehe@gmail.com>
+Mathieu Othacehe <m.othacehe@gmail.com> <othacehe@gnu.org>
 Mat Martineau <martineau@kernel.org> <mathew.j.martineau@linux.intel.com>
 Mat Martineau <martineau@kernel.org> <mathewm@codeaurora.org>
 Matthew Wilcox <willy@infradead.org> <matthew.r.wilcox@intel.com>
@@ -638,4 +638,5 @@ Wolfram Sang <wsa@kernel.org> <w.sang@pengutronix.de>
 Wolfram Sang <wsa@kernel.org> <wsa@the-dreams.de>
 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com>
 Yusuke Goda <goda.yusuke@renesas.com>
+Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>
 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
--- a/CREDITS
+++ b/CREDITS
@@ -2130,6 +2130,10 @@ S: 2213 La Terrace Circle
 S: San Jose, CA 95123
 S: USA
 
+N: Mike Kravetz
+E: mike.kravetz@oracle.com
+D: Maintenance and development of the hugetlb subsystem
+
 N: Andreas S. Krebs
 E: akrebs@altavista.net
 D: CYPRESS CY82C693 chipset IDE, Digital's PC-Alpha 164SX boards
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6901,8 +6901,8 @@ T:	git git://anongit.freedesktop.org/drm/drm-misc
 F:	drivers/gpu/drm/vboxvideo/
 
 DRM DRIVER FOR VMWARE VIRTUAL GPU
-M:	Zack Rusin <zackr@vmware.com>
-R:	VMware Graphics Reviewers <linux-graphics-maintainer@vmware.com>
+M:	Zack Rusin <zack.rusin@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 T:	git git://anongit.freedesktop.org/drm/drm-misc
@@ -9767,7 +9767,6 @@ F:	Documentation/networking/device_drivers/ethernet/huawei/hinic.rst
 F:	drivers/net/ethernet/huawei/hinic/
 
 HUGETLB SUBSYSTEM
-M:	Mike Kravetz <mike.kravetz@oracle.com>
 M:	Muchun Song <muchun.song@linux.dev>
 L:	linux-mm@kvack.org
 S:	Maintained
@@ -9791,8 +9790,8 @@ T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/platform/st/sti/hva
 
 HWPOISON MEMORY FAILURE HANDLING
-M:	Naoya Horiguchi <naoya.horiguchi@nec.com>
-R:	Miaohe Lin <linmiaohe@huawei.com>
+M:	Miaohe Lin <linmiaohe@huawei.com>
+R:	Naoya Horiguchi <naoya.horiguchi@nec.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	mm/hwpoison-inject.c
@@ -23215,9 +23214,8 @@ F:	drivers/misc/vmw_vmci/
 F:	include/linux/vmw_vmci*
 
 VMWARE VMMOUSE SUBDRIVER
-M:	Zack Rusin <zackr@vmware.com>
-R:	VMware Graphics Reviewers <linux-graphics-maintainer@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Zack Rusin <zack.rusin@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-input@vger.kernel.org
 S:	Supported
 F:	drivers/input/mouse/vmmouse.c
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -607,6 +607,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 			goto done;
 		}
 		count_vm_vma_lock_event(VMA_LOCK_RETRY);
+		if (fault & VM_FAULT_MAJOR)
+			mm_flags |= FAULT_FLAG_TRIED;
 
 		/* Quick path to respond to signals */
 		if (fault_signal_pending(fault, regs)) {
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -497,6 +497,8 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 			goto done;
 		}
 		count_vm_vma_lock_event(VMA_LOCK_RETRY);
+		if (fault & VM_FAULT_MAJOR)
+			flags |= FAULT_FLAG_TRIED;
 
 		if (fault_signal_pending(fault, regs))
 			return user_mode(regs) ? 0 : SIGBUS;
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -304,6 +304,8 @@ void handle_page_fault(struct pt_regs *regs)
 			goto done;
 		}
 		count_vm_vma_lock_event(VMA_LOCK_RETRY);
+		if (fault & VM_FAULT_MAJOR)
+			flags |= FAULT_FLAG_TRIED;
 
 		if (fault_signal_pending(fault, regs)) {
 			if (!user_mode(regs))
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -337,6 +337,9 @@ static void do_exception(struct pt_regs *regs, int access)
 			return;
 		}
 		count_vm_vma_lock_event(VMA_LOCK_RETRY);
+		if (fault & VM_FAULT_MAJOR)
+			flags |= FAULT_FLAG_TRIED;
+
 		/* Quick path to respond to signals */
 		if (fault_signal_pending(fault, regs)) {
 			if (!user_mode(regs))
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1370,6 +1370,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto done;
 	}
 	count_vm_vma_lock_event(VMA_LOCK_RETRY);
+	if (fault & VM_FAULT_MAJOR)
+		flags |= FAULT_FLAG_TRIED;
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
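The five hunks above apply the same two-line fix to arm64, powerpc, riscv, s390 and x86. The reasoning: when the first attempt under the per-VMA lock starts major work (disk I/O) but has to bail out with VM_FAULT_RETRY, the retry under mmap_lock typically completes from the page cache without VM_FAULT_MAJOR, so the fault used to be accounted as minor. The core accounting treats a retried fault (FAULT_FLAG_TRIED) as major. Below is a self-contained model of that rule; the flag values are dummies and the function is illustrative, not the kernel's mm_account_fault():

```c
#include <stdbool.h>
#include <stdio.h>

/* Dummy values; only the names mirror the kernel's flags. */
#define VM_FAULT_RETRY   0x1u
#define VM_FAULT_MAJOR   0x2u
#define FAULT_FLAG_TRIED 0x4u

/* The accounting rule, paraphrased: a fault is major if the final
 * attempt did major work, or if it is a retry of an earlier attempt. */
static const char *account(unsigned int flags, unsigned int ret)
{
	bool major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

	return major ? "maj_flt++" : "min_flt++";
}

int main(void)
{
	/* First attempt under the per-VMA lock: I/O started, must retry. */
	unsigned int first = VM_FAULT_MAJOR | VM_FAULT_RETRY;
	/* The retry completes from the page cache, so ret looks minor. */
	unsigned int retry_ret = 0;

	printf("without fix: %s\n", account(0, retry_ret));
	printf("with fix:    %s\n",
	       account((first & VM_FAULT_MAJOR) ? FAULT_FLAG_TRIED : 0,
		       retry_ret));
	return 0;
}
```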
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2823,7 +2823,7 @@ void folio_undo_large_rmappable(struct folio *folio)
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	if (!list_empty(&folio->_deferred_list)) {
 		ds_queue->split_queue_len--;
-		list_del(&folio->_deferred_list);
+		list_del_init(&folio->_deferred_list);
 	}
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 }
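The switch to list_del_init() matters because of the companion change in mm/memcontrol.c below: folio_undo_large_rmappable() can now run twice for the same folio, once during migration and again in destroy_large_folio(). list_del() poisons the entry's pointers, so the second call's list_empty() check would not see an empty entry; list_del_init() re-links the entry to itself, making repeated deletion a harmless no-op. A minimal standalone model of the two primitives, simplified from include/linux/list.h:

```c
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

/* Poison values, as in the kernel's include/linux/poison.h. */
#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *e, struct list_head *head)
{
	e->next = head->next;
	e->prev = head;
	head->next->prev = e;
	head->next = e;
}

static void list_del(struct list_head *e)	/* poisons the entry */
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
	e->next = LIST_POISON1;
	e->prev = LIST_POISON2;
}

static void list_del_init(struct list_head *e)	/* leaves it self-linked */
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
	INIT_LIST_HEAD(e);
}

int main(void)
{
	struct list_head queue, folio;

	INIT_LIST_HEAD(&queue);

	list_add(&folio, &queue);
	list_del_init(&folio);
	/* Prints 1: a second undo sees an empty entry and skips the unlink. */
	printf("after list_del_init: list_empty = %d\n", list_empty(&folio));

	list_add(&folio, &queue);
	list_del(&folio);
	/* Prints 0: the entry looks non-empty, so a second unlink would
	 * dereference the poison pointers and crash. */
	printf("after list_del:      list_empty = %d\n", list_empty(&folio));
	return 0;
}
```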
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7543,6 +7543,17 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	/* Transfer the charge and the css ref */
 	commit_charge(new, memcg);
 
+	/*
+	 * If the old folio is a large folio and is in the split queue, it needs
+	 * to be removed from the split queue now, in case getting an incorrect
+	 * split queue in destroy_large_folio() after the memcg of the old folio
+	 * is cleared.
+	 *
+	 * In addition, the old folio is about to be freed after migration, so
+	 * removing from the split queue a bit earlier seems reasonable.
+	 */
+	if (folio_test_large(old) && folio_test_large_rmappable(old))
+		folio_undo_large_rmappable(old);
+
 	old->memcg_data = 0;
 }
 
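The comment above is the commit's own rationale. The underlying hazard is that the deferred split queue is found through the folio's memcg: once old->memcg_data is cleared, a later folio_undo_large_rmappable() would resolve to a different queue than the one the folio is actually linked on, corrupting both lists. Roughly how the lookup works in mm/huge_memory.c under CONFIG_MEMCG (a paraphrase, not the verbatim kernel source):

```c
/* Paraphrase of mm/huge_memory.c's queue lookup: the choice between
 * the memcg queue and the node queue depends on folio_memcg(), which
 * reads folio->memcg_data. After memcg_data is zeroed, the same folio
 * resolves to the node queue even if it was queued on the memcg one. */
static struct deferred_split *get_split_queue(struct folio *folio)
{
	struct mem_cgroup *memcg = folio_memcg(folio);

	if (memcg)
		return &memcg->deferred_split_queue;
	return &NODE_DATA(folio_nid(folio))->deferred_split_queue;
}
```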
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3624,8 +3624,8 @@ EXPORT_SYMBOL_GPL(unmap_mapping_pages);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows)
 {
-	pgoff_t hba = holebegin >> PAGE_SHIFT;
-	pgoff_t hlen = (holelen + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	pgoff_t hba = (pgoff_t)(holebegin) >> PAGE_SHIFT;
+	pgoff_t hlen = ((pgoff_t)(holelen) + PAGE_SIZE - 1) >> PAGE_SHIFT;
 
 	/* Check for overflow. */
 	if (sizeof(holelen) > sizeof(hlen)) {
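The bug being fixed here is sign extension: holebegin and holelen are signed loff_t, so when the high bit is set, `>>` is an arithmetic shift that drags ones into the top bits of hba/hlen and the wrong range gets unmapped. Casting to the unsigned pgoff_t first yields a logical shift. A standalone demonstration, with PAGE_SHIFT assumed to be 12:

```c
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	long long holebegin = -4096;	/* stands in for a loff_t with the
					 * sign bit set: 0xfffffffffffff000 */

	unsigned long long buggy = holebegin >> PAGE_SHIFT;
	unsigned long long fixed = (unsigned long long)holebegin >> PAGE_SHIFT;

	printf("buggy hba = 0x%llx\n", buggy);	/* 0xffffffffffffffff */
	printf("fixed hba = 0x%llx\n", fixed);	/* 0x000fffffffffffff */
	return 0;
}
```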
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1829,6 +1829,9 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		 */
 		pgoff = 0;
 		get_area = shmem_get_unmapped_area;
+	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		/* Ensures that larger anonymous mappings are THP aligned. */
+		get_area = thp_get_unmapped_area;
 	}
 
 	addr = get_area(file, addr, len, pgoff, flags);
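With this change, sufficiently large anonymous mappings are routed through thp_get_unmapped_area(), which rounds the chosen address to a PMD boundary so the mapping is eligible for transparent huge pages from the first fault. A quick userspace check, assuming x86-64 (2 MiB PMDs) and a THP-enabled kernel:

```c
#include <stdio.h>
#include <sys/mman.h>

#define PMD_SIZE (2UL << 20)	/* 2 MiB on x86-64 */

int main(void)
{
	size_t len = 4 * PMD_SIZE;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* On a patched kernel this should print "PMD-aligned: yes". */
	printf("addr = %p, PMD-aligned: %s\n", p,
	       ((unsigned long)p & (PMD_SIZE - 1)) ? "no" : "yes");
	munmap(p, len);
	return 0;
}
```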
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -692,7 +692,6 @@ static int __bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 	if (min_ratio > 100 * BDI_RATIO_SCALE)
 		return -EINVAL;
-	min_ratio *= BDI_RATIO_SCALE;
 
 	spin_lock_bh(&bdi_lock);
 	if (min_ratio > bdi->max_ratio) {
@@ -729,7 +728,8 @@ static int __bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio)
 		ret = -EINVAL;
 	} else {
 		bdi->max_ratio = max_ratio;
-		bdi->max_prop_frac = (FPROP_FRAC_BASE * max_ratio) / 100;
+		bdi->max_prop_frac = (FPROP_FRAC_BASE * max_ratio) /
+					(100 * BDI_RATIO_SCALE);
 	}
 	spin_unlock_bh(&bdi_lock);
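Both hunks are scaling fixes. The ratios arrive already pre-multiplied by BDI_RATIO_SCALE (the bounds check against 100 * BDI_RATIO_SCALE above shows this), so the removed line scaled min_ratio a second time, and max_prop_frac must divide by 100 * BDI_RATIO_SCALE rather than by 100. Worked numbers, assuming BDI_RATIO_SCALE is 10000 and FPROP_FRAC_BASE is 1 << 10 as in current kernels:

```c
#include <stdio.h>

#define BDI_RATIO_SCALE 10000UL
#define FPROP_FRAC_BASE (1UL << 10)

int main(void)
{
	/* A max_ratio of 100% as stored internally (pre-scaled). */
	unsigned long max_ratio = 100 * BDI_RATIO_SCALE;

	/* Old arithmetic: 10000x too large, an out-of-range fraction. */
	printf("old max_prop_frac = %lu\n",
	       FPROP_FRAC_BASE * max_ratio / 100);
	/* Fixed arithmetic: exactly FPROP_FRAC_BASE for 100%. */
	printf("new max_prop_frac = %lu\n",
	       FPROP_FRAC_BASE * max_ratio / (100 * BDI_RATIO_SCALE));
	return 0;
}
```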
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -126,7 +126,7 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg, int new_size,
 		if (new_nr_max <= old->map_nr_max)
 			continue;
 
-		new = kvmalloc_node(sizeof(*new) + new_size, GFP_KERNEL, nid);
+		new = kvzalloc_node(sizeof(*new) + new_size, GFP_KERNEL, nid);
 		if (!new)
 			return -ENOMEM;
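The one-character fix addresses uninitialized memory: the per-memcg shrinker info is grown by allocating a larger buffer and copying only the old contents over, so with kvmalloc_node() the tail covering the newly added shrinker slots held garbage that could be read as live shrinker bits. kvzalloc_node() is the zeroing variant. The grow-and-copy pattern, modeled standalone with calloc() standing in for kvzalloc_node():

```c
#include <stdlib.h>
#include <string.h>

/* Grow a bitmap, preserving existing bits. With a zeroing allocator
 * the tail beyond old_size starts cleared; with a plain allocator
 * (the kvmalloc_node() analogue) it would be undefined. */
static unsigned long *grow_bitmap(unsigned long *old,
				  size_t old_size, size_t new_size)
{
	unsigned long *new = calloc(1, new_size);

	if (!new)
		return NULL;
	if (old) {
		memcpy(new, old, old_size);	/* copy only the old part */
		free(old);
	}
	return new;
}
```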
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3955,6 +3955,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	int young = 0;
 	pte_t *pte = pvmw->pte;
 	unsigned long addr = pvmw->address;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct folio *folio = pfn_folio(pvmw->pfn);
 	bool can_swap = !folio_is_file_lru(folio);
 	struct mem_cgroup *memcg = folio_memcg(folio);
@@ -3969,11 +3970,15 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	if (spin_is_contended(pvmw->ptl))
 		return;
 
+	/* exclude special VMAs containing anon pages from COW */
+	if (vma->vm_flags & VM_SPECIAL)
+		return;
+
 	/* avoid taking the LRU lock under the PTL when possible */
 	walk = current->reclaim_state ? current->reclaim_state->mm_walk : NULL;
 
-	start = max(addr & PMD_MASK, pvmw->vma->vm_start);
-	end = min(addr | ~PMD_MASK, pvmw->vma->vm_end - 1) + 1;
+	start = max(addr & PMD_MASK, vma->vm_start);
+	end = min(addr | ~PMD_MASK, vma->vm_end - 1) + 1;
 
 	if (end - start > MIN_LRU_BATCH * PAGE_SIZE) {
 		if (addr - start < MIN_LRU_BATCH * PAGE_SIZE / 2)
@@ -3998,7 +4003,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 		unsigned long pfn;
 		pte_t ptent = ptep_get(pte + i);
 
-		pfn = get_pte_pfn(ptent, pvmw->vma, addr);
+		pfn = get_pte_pfn(ptent, vma, addr);
 		if (pfn == -1)
 			continue;
@@ -4009,7 +4014,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 		if (!folio)
 			continue;
 
-		if (!ptep_test_and_clear_young(pvmw->vma, addr, pte + i))
+		if (!ptep_test_and_clear_young(vma, addr, pte + i))
 			VM_WARN_ON_ONCE(true);
 
 		young++;
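The new early return keeps MGLRU's look-around walk out of mappings whose PFNs need not correspond to ordinary LRU folios. As the added comment notes, special VMAs can still contain anon pages (created by copy-on-write), which is how the look-around could reach them in the first place; the remaining hunks just cache pvmw->vma in a local. For reference, VM_SPECIAL is the union of mapping types from include/linux/mm.h:

```c
/* From include/linux/mm.h: mappings with any of these flags must be
 * treated specially by page-table walkers rather than assumed to be
 * ordinary folio-backed memory. */
#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
```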